Industrial Instrumentation Class Report: Calibration of Flow

The saying "a chain is only as strong as its weakest link" applies to measuring processes, in which various factors work together to yield an accurate measurement. One of those factors is the calibration of instruments. Just as an untrained person is of limited use, so is an instrument that has not been calibrated. The objective of this report is to provide comprehensive information on the calibration of instruments used in industry, with the aim of strengthening this chain of factors. Readers will gain a better understanding of the fundamental concepts of calibration and of the standards used for variables such as pressure, temperature, flow rate, and level. The report also includes a section on the historical background of calibration.

Introduction

If you are seeking to select the most suitable instrument for a proposed measurement from commercially available options, or if you are involved in designing instruments for specific measuring tasks, performance criteria become crucial. To make well-informed decisions there must be a quantitative basis for comparing one instrument (or proposed design) with the possible alternatives, and calibration provides that basis.

What is Instrument Calibration?

Calibration of measurement instruments involves comparing the readings obtained from the instrument with sub-standards. The instrument's graduated scale is checked at different points in a laboratory: readings are taken against the sub-standards and plotted to produce a calibration curve. If the instrument is accurate, its scale will match the sub-standards; if there is a discrepancy between the measured value and the standard value, the instrument must be calibrated to ensure correct readings. Newly acquired instruments must be calibrated against set criteria before first use.
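
To illustrate how such a calibration curve might be handled numerically, the short Python sketch below fits a straight line through a set of readings taken against sub-standard values and inverts it to correct future readings. The data points, the linear model, and the helper name corrected() are illustrative assumptions, not part of any particular laboratory procedure.

    import numpy as np

    # Hypothetical sub-standard (true) values and the readings the
    # instrument under test produced at those points.
    true_values = np.array([0.0, 25.0, 50.0, 75.0, 100.0])   # e.g. kPa
    readings    = np.array([0.4, 25.9, 51.1, 76.3, 101.5])   # instrument output

    # Least-squares straight line: reading = gain * true + offset.
    gain, offset = np.polyfit(true_values, readings, 1)

    # The calibration curve can then be inverted to correct future readings.
    def corrected(reading):
        """Convert a raw instrument reading back to an estimated true value."""
        return (reading - offset) / gain

    print(f"gain = {gain:.4f}, offset = {offset:.3f}")
    print(f"raw 60.0 -> corrected {corrected(60.0):.2f}")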


The instrument's scale is marked against specific sub-standards kept in laboratories for this purpose. Over time, instruments may drift out of calibration or their scales may become distorted. In such cases, if the instruments are still in good condition, they can be recalibrated. It is recommended that instruments be calibrated periodically, even when they appear to be functioning correctly, to guarantee accurate readings of critical parameters. This is particularly important for companies manufacturing precision products with demanding accuracy requirements.

Static calibration refers to holding all inputs (desired, interfering, modifying) constant except one. The input under study is then varied over a range of constant values, causing the output(s) to vary over a corresponding range of constant values. The input-output relationships developed under these conditions constitute a valid static calibration.
This process can be repeated by sequentially varying each input of interest, developing a family of static input-output relationships. We may then attempt to describe the overall static behavior of the instrument by superimposing these individual effects. In some cases, if overall effects rather than individual effects are desired, the calibration procedure would vary several inputs simultaneously.
Furthermore, when critically examining any practical instrument, we will find multiple modifying and/or interfering inputs, each with potentially minor effects that are impractical to control. Therefore, the statement "all other inputs are held constant" refers to an ideal situation that can only be approached, never achieved, in practice.
The measurement method describes this ideal situation, while the measurement procedure describes the practical physical realization of the measurement method.
A common requirement for a physical standard is that its accuracy be higher than that of the instrument being calibrated against it, typically by a 4-to-1 ratio. It is not possible to calibrate an instrument to a greater degree of accuracy than the standard it is compared with.

A commonly followed rule states that the calibration system, including the standard and any supporting setup, must have an uncertainty four times better than the unit under test. To ensure a pressure gauge achieves 1% accuracy, it must be calibrated against a standard with an accuracy of at least 0.25%. However, if the pressure gauge has random errors of 3%, calibrating it against the 0.25% standard will not make it a "1% gauge". Careful consideration is therefore necessary when selecting instruments to meet desired accuracy requirements.
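
As a minimal numerical sketch of the 4-to-1 rule described above (the percentages are those quoted in the text; the function name accuracy_ratio is hypothetical):

    def accuracy_ratio(uut_tolerance_pct: float, standard_uncertainty_pct: float) -> float:
        """Test-accuracy ratio: unit-under-test tolerance over standard uncertainty."""
        return uut_tolerance_pct / standard_uncertainty_pct

    # A gauge to be certified at 1 % needs a standard of at least 0.25 %:
    print(accuracy_ratio(1.0, 0.25) >= 4.0)   # True, the 4:1 rule is met

    # A 0.5 % standard would not be good enough for a 1 % certification:
    print(accuracy_ratio(1.0, 0.5) >= 4.0)    # False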

A hierarchy of standards exists, ranging from highest to lowest accuracy. Primary standards represent the state of the art for measuring the quantity of interest and are maintained by national laboratories such as NIST in the United States. These primary standards are complex and expensive, and are typically reserved for critical situations.

For most calibration work, secondary and tertiary standards are sufficient, as they are simpler and more cost-effective. These standards can be obtained from national laboratories, commercial calibration laboratories, or in-house calibration laboratories maintained by industrial companies and universities.

Definition of Calibration:

Calibration is the process that establishes the relationship between the values indicated by a measurement instrument and the corresponding values realized by standards under specified conditions. It ensures that the equipment used for measurement remains within specification and verifies its performance against a set of requirements. In scientific terms, calibration is the process of comparing a measurement instrument with a standard to determine the errors at specific points on its scale. A standard is typically used for calibrating a measuring instrument.

After calibration, adjustments are often made to the instrument so that it indicates values corresponding to specific values of the variable being measured. When the instrument is set to read zero for a zero value of the variable, this is called zero adjustment.

Regarding history, many early measurement devices were intuitive and easy to validate conceptually. The term "calibration" was initially associated with the precise division of distance and angles using dividing engines and with the measurement of gravitational mass using weighing scales; these two forms of measurement, and quantities derived from them, supported commerce and engineering until around 1800 AD. The Industrial Revolution brought widespread use of indirect measurement methods, and the measurement of pressure is an early example in which direct and indirect techniques coexisted. The direct-reading design is the hydrostatic manometer; the indirect design is the Bourdon tube pressure gauge invented by Eugene Bourdon. In the manometer, an unknown pressure pushes the liquid down one side of the U-tube (or an unknown vacuum pulls the liquid up the tube), and a graduated scale next to the tube indicates the pressure relative to atmospheric pressure. The resulting height difference "H" directly represents the pressure or vacuum; if there is neither pressure nor vacuum, H is zero, so no calibration is required. The indirect Bourdon tube design, shown in two positions, has pressure applied from underneath, changing the curvature of the tube and moving the pointer attached to its free end; calibration is necessary to ensure accurate readings from this indirect method.
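
The direct-reading behaviour of the U-tube can be expressed by the hydrostatic relation P = ρgH. The sketch below applies it with an assumed manometer fluid (water) and an assumed height difference:

    # Hydrostatic (direct-reading) manometer: pressure from column height.
    RHO_WATER = 1000.0   # kg/m^3, assumed manometer fluid (water)
    G = 9.81             # m/s^2, standard gravity

    def manometer_pressure(height_difference_m: float, rho: float = RHO_WATER) -> float:
        """Gauge pressure in pascals indicated by a height difference H."""
        return rho * G * height_difference_m

    # A 0.25 m difference between the two legs of a water manometer:
    print(f"{manometer_pressure(0.25):.1f} Pa")   # about 2452.5 Pa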

Self-calibration is not possible, but users can generally correct the zero-pressure state. In more recent times, direct measurement is also used to verify the validity of indirect measurements.

Calibration involves several necessary steps: analyzing the instrument's construction and identifying all potential inputs, deciding which inputs are significant for the intended application, obtaining a setup that allows the significant inputs to be varied over the range of interest, obtaining standards to measure each input, and establishing the desired static input-output relationships by varying one input at a time while holding the others constant and recording the results.

Instruments used for measuring length, pressure, temperature, and so on should be regularly calibrated against a standard scale specified by the manufacturer. The method of calibration varies depending on whether routine calibration or highly accurate calibration for a specific purpose is required, and different instruments may call for different methods. The work takes place in a laboratory using sub-standard instruments reserved for this purpose. The sub-standards are kept in a controlled, air-conditioned environment to avoid calibration changes caused by external atmospheric conditions. To maintain accuracy, the sub-standards are periodically compared with reference standards held in secure, clean metrological laboratories; these reference standards can in turn be verified against absolute measurements of the quantity concerned. Mechanical instruments are calibrated by comparing the readings on their scale with the sub-standard values, producing a calibration curve. The instrument receives known values from the sub-standard through its transducing elements, and these are compared with the indicated values. If the system is linear, a single-point calibration suffices; otherwise, multiple points must be taken. In most cases instruments are given a static calibration based on a static input. However, some instruments, such as bonded strain gauges, provide no means of applying a known input and rely on point calibration performed by the manufacturer using a different process (2).

The calibration process begins with the design of the measuring instrument, which needs to be able to hold its calibration over time. The design should allow measurements within the engineering tolerance, under specified environmental conditions, for a reasonable period; a design with these characteristics increases the likelihood that the instrument will perform as expected in service. The method for assigning tolerance values varies by country and by industry. Generally, the manufacturer of the measuring equipment states the measurement tolerance, recommends a calibration interval, and specifies usage and storage conditions, while the organization using the equipment sets the actual calibration interval based on how frequently it is used. In the United States, for example, a six-month interval is common for equipment used 8-12 hours a day, 5 days a week, whereas a shorter interval is usually advised for equipment in continuous (24/7) use. Calibration intervals can also be assigned through a formal process based on the results of previous calibrations.

The next step is defining the calibration procedure. A crucial part of this is selecting one or more standards whose measurement uncertainty is smaller than that of the device being calibrated (ideally less than one quarter of it). When the goal of a 4:1 accuracy ratio is met, the uncertainty of all the standards involved is considered insignificant in the final measurement. This ratio was first formalized in Handbook 52, which accompanied MIL-STD-45662A, an early US Department of Defense metrology program specification. Electronic measurements once used a 10:1 ratio, but engineering advances made that impractical for most equipment. Even maintaining a 4:1 accuracy ratio with modern equipment can be challenging.

If the accuracy ratio falls below 4:1, the device being calibrated can be nearly as accurate as the standard; at a 1:1 ratio, only an exact agreement between the standard and the device being calibrated yields a strictly correct calibration. Another way of handling this capability mismatch is to reduce the stated accuracy of the device being calibrated. For instance, a gauge with a manufacturer-stated accuracy of 3% can be re-rated to 4% accuracy so that a 1% standard can be used at a 4:1 ratio. This lowered rating does not affect the final measurement if the application does not require the higher accuracy; the approach is known as limited calibration. However, if the accuracy required of the final measurement exceeds what adjusting the tolerance can provide (for example, requiring 10% accuracy from a gauge with only 3% capability), adjusting the tolerance is not an adequate solution. It should also be noted that when calibration is performed at 100 units against a 1% standard, the standard's own reading may lie anywhere between roughly 99 and 101 units. The acceptable calibration values for equipment being calibrated on a 4:1 basis would then be 96 to 104 units, inclusive. Narrowing the acceptable range to 97 to 103 units removes the standard's contribution while still meeting a 3.3:1 ratio, and narrowing it further to 98 to 102 units restores a final ratio of better than 4:1.
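
The limited-calibration adjustment described above can be written out as a small check. The percentages are those quoted in the text; the helper name required_rating and the 4:1 default are illustrative assumptions:

    def required_rating(standard_accuracy_pct: float, ratio: float = 4.0) -> float:
        """Tightest tolerance a device may be certified to, given the standard."""
        return standard_accuracy_pct * ratio

    # A 1 % standard supports certification no tighter than 4 %:
    print(required_rating(1.0))        # 4.0

    # So a gauge built to 3 % is re-rated ("limited calibration") to 4 %:
    manufacturer_accuracy = 3.0
    certified_accuracy = max(manufacturer_accuracy, required_rating(1.0))
    print(certified_accuracy)          # 4.0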

It is crucial to document the reasoning behind these calibration decisions and keep it accessible, because an informal approach makes it difficult to trace post-calibration problems back to the tolerances that were applied.

Ideally, a single-point calibration is performed at a calibration value of 100 units, chosen on the basis of the manufacturer's recommendation or the method used for similar devices. Multiple-point calibrations are also common, and variations such as a zero-unit state or a user-resettable zero should be considered.
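
Where the text mentions zero adjustment and multi-point calibration, the sketch below shows one common two-point (zero and span) correction; the as-found readings and reference values are assumed for illustration:

    # Two-point (zero and span) adjustment, a common outcome of calibration.
    raw_at_zero = 1.2      # assumed "as-found" reading with zero input applied
    raw_at_span = 101.8    # assumed "as-found" reading at the 100-unit point
    span_reference = 100.0

    gain = span_reference / (raw_at_span - raw_at_zero)

    def adjusted(raw_reading: float) -> float:
        """Apply the zero and span correction to a raw reading."""
        return (raw_reading - raw_at_zero) * gain

    print(f"{adjusted(raw_at_zero):.2f}")   # 0.00   (zero adjustment)
    print(f"{adjusted(raw_at_span):.2f}")   # 100.00 (span adjustment)
    print(f"{adjusted(51.5):.2f}")          # corrected mid-range reading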

Recording the chosen calibration points is essential as they can impact the calibration process, especially in electronic calibrations where cable resistance can affect results.

All of the information above is collected in a calibration procedure, a specific test method that sets out all the steps needed for a successful calibration. In the United States, clearinghouses such as the Government-Industry Data Exchange Program (GIDEP) exist specifically for calibration procedures, helping organizations meet requirements set by manufacturers or regulators. The laboratory repeats this process up the chain for each of its standards until transfer standards, certified reference materials, or natural physical constants of minimal uncertainty are reached; this establishes the traceability of the calibration. Once traceability is established, individual instruments of the same type can be calibrated. The process usually starts with a basic damage check. Some organizations, such as nuclear power plants, collect "as-found" calibration data before routine maintenance is performed; after the maintenance and any deficiencies found during calibration have been addressed, an "as-left" calibration is performed. In most cases a calibration technician oversees the entire process and signs a certificate documenting successful completion. The calibration process is challenging and expensive because of the effort involved: as a general rule of thumb, keeping ordinary equipment calibrated and maintained costs around 10% of its original purchase value annually, and specialized equipment such as scanning electron microscopes or gas chromatograph systems can cost even more. The extent of the calibration program reveals an organization's beliefs about measurement quality, but consistency in organization-wide calibration is easily compromised. Where proper measurement is neglected, the connection between scientific theory, engineering practice, and mass production may be lost at the start of new projects or eventually forgotten on old ones.

The description so far has considered a "single measurement" device as the subject of the basic calibration process. Depending on the organization, however, most devices that require calibration have several ranges and many functions within a single instrument; a modern oscilloscope, for example, can have on the order of 200,000 combinations of settings to calibrate fully, and there are limits on how much of an all-inclusive calibration can be automated. Different calibration approaches are therefore open to organizations using such instruments, and if a quality assurance program is in place, its customers and the effort of program compliance directly influence the approach chosen.

Measurement instruments such as cathode ray oscilloscopes (CROs) contribute to an organization's value through the measurements they provide. They can be depreciated over time for tax purposes, and that tax treatment may affect decisions about calibration. Manufacturers typically support new CROs for at least five years and offer calibration services directly or through agents. It is uncommon for an organization to have only one CRO; they are usually either absent altogether or part of larger groups. Older devices that no longer receive a complete calibration can be relegated to less demanding purposes, and in production applications CROs may be installed in dedicated racks for specific tasks. Each type of instrument in the organization goes through its own calibration process. The image below depicts the integration between quality assurance and calibration, showing a Digital Multimeter (DMM), a rack-mounted Cathode Ray Oscilloscope (CRO), and a control panel.

The small unbroken horizontal paper seals connecting each instrument to the rack indicate that the instrument has not been removed since its last calibration. These seals also act as a safeguard against unauthorized access to the instrument's settings. Additionally, labels display the date of the most recent calibration and specify when the next one is due based on the calibration interval.

Some organizations also assign unique identifications to each instrument for standardized recordkeeping and tracking of accessories associated with a particular calibration status. When calibrated instruments are integrated with computers, both the integrated computer programs and any calibration adjustments are controlled.

In the United States, there is no universally accepted terminology for identifying individual instruments. Moreover, there are multiple devices with identical names, as well as different names for similar types of devices. This confusion is further complicated by slang and shorthand language that reflects intense competition since the Industrial Revolution era.

Calibration of flow meters

There are different methods available for calibrating flow meters, which can be categorized into two types: in situ and laboratory. Calibrating liquid flow meters is generally easier compared to gas flow meters because liquids can be stored in open vessels and water can often be used as the calibrating liquid.

Calibration methods for liquid flow meters

  • In situ calibration methods

  • Insertion point velocity method

  • Dilution estimation

The insertion point velocity method is a relatively simple in situ calibration method. Velocity-measuring probes are inserted at specific points in the flow stream adjacent to the meter being calibrated, allowing the flow velocity to be measured. In more complex situations, a full velocity traverse of the flow can be made to determine the flow profile and the average flow velocity.
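
A minimal sketch of turning such a traverse into a flow rate, assuming a circular pipe and point velocities taken at equal-area positions so that a plain mean applies (the velocities and pipe diameter are made-up values):

    import math

    # Point velocities (m/s) measured across the pipe section during a traverse,
    # assumed here to be taken at equal-area positions so a plain mean applies.
    point_velocities = [2.9, 3.4, 3.6, 3.5, 3.0]
    pipe_diameter = 0.15   # m, assumed

    mean_velocity = sum(point_velocities) / len(point_velocities)
    area = math.pi * (pipe_diameter / 2.0) ** 2
    volumetric_flow = mean_velocity * area   # m^3/s

    print(f"mean velocity {mean_velocity:.2f} m/s, flow {volumetric_flow:.4f} m^3/s")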

The dilution method can be used to calibrate both closed-pipe and open-channel flow meters. A suitable tracer is injected into the flow stream at a precisely measured, constant rate, and samples are taken downstream of the injection point, where complete mixing of the injected tracer has occurred. Measuring the tracer concentration in the samples gives the dilution, and from this dilution and the injection rate the volumetric flow rate can be calculated.
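
The dilution calculation can be sketched as follows, using the usual constant-rate-injection relation with the background tracer concentration taken as zero; the injection rate and concentrations are assumed values:

    # Constant-rate-injection dilution gauging (background concentration ~ 0).
    # Q = q * (c_injected - c_sampled) / c_sampled
    q_injection = 2.0e-5     # m^3/s, tracer injection rate (assumed)
    c_injected = 50.0        # g/L, concentration of the injected tracer solution
    c_sampled = 0.004        # g/L, concentration found downstream after full mixing

    flow_rate = q_injection * (c_injected - c_sampled) / c_sampled
    print(f"volumetric flow rate ~ {flow_rate:.3f} m^3/s")   # about 0.25 m^3/s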

The diagram illustrates the principle of dilution gauging with constant-rate tracer injection. Alternatively, a pulse of tracer can be injected into the flow stream; the time taken for the concentration peak to travel a known distance provides a measure of the flow velocity.

Laboratory calibration methods

Several methods are used for laboratory calibration:

- Master meter method: A meter of known accuracy is used as a standard in this method. The meter to be calibrated and the master meter are connected in series, ensuring they both experience the same flow conditions. It is important to periodically recalibrate the master meter to ensure consistent accuracy.

- Volumetric method: The flow of liquid through the meter being calibrated is diverted into a tank of known volume. When the tank is full, this known volume is compared with the integrated quantity registered by the flow meter being calibrated.

- Gravimetric method: The flow of liquid through the meter being calibrated is diverted into a vessel that is weighed either continuously or after a predetermined time. The weight of the liquid is then compared with the reading registered by the flow meter being calibrated (a simple error calculation for these collection methods is sketched after this list).

- Pipe prover method: Also known as a meter prover, this device uses a U-shaped length of pipe and a piston or elastic sphere. The flow meter being calibrated is installed at the inlet of the prover, and the sphere is forced through the pipe by the flowing liquid. Switches placed near both ends of the pipe are triggered as the sphere passes them. The volume of pipe between the switches is determined by an initial calibration and is compared with the reading registered by the flow meter during the calibration run.
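
The comparison step shared by the volumetric, gravimetric, and pipe prover methods reduces to the same arithmetic: compare the quantity actually collected (or swept) with what the meter registered. The figures below are assumed, and the meter-factor convention shown is only one common way of expressing the result:

    # Comparing a collected reference quantity with the meter's registered total.
    collected_mass = 498.6      # kg of liquid weighed in the vessel (assumed)
    liquid_density = 998.0      # kg/m^3 at the test temperature (assumed)
    meter_total = 0.5021        # m^3 registered by the meter under calibration

    reference_volume = collected_mass / liquid_density
    error_pct = (meter_total - reference_volume) / reference_volume * 100.0
    meter_factor = reference_volume / meter_total   # multiplier to correct the meter

    print(f"reference volume {reference_volume:.4f} m^3")
    print(f"meter error {error_pct:+.2f} %, meter factor {meter_factor:.4f}")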

Calibration methods for gas flow meters

Methods suitable for calibrating gas flow meters fall into the same two categories: in situ and laboratory.

In situ calibration methods

In-situ methods are similar to those used for liquids.

Laboratory calibration method

Laboratory calibration of gas flow meters makes use of the soap film burette method, the water displacement method, and the gravimetric method.

Soap film burette method

This technique is used for calibrating measurement systems with very low gas flows, in the range of roughly 10⁻⁷ to 10⁻⁴ m³/s. The gas flow from the meter under test passes through a vertically mounted burette, where it drives a soap film up the tube at the same speed as the gas. By timing the soap film between graduations of known volume on the burette, the flow rate can be determined.

In the water displacement method, a cylinder closed at one end is lowered, open end down, into a water bath; as it falls, the gas inside is displaced through a pipe connected to the cylinder and passed to the meter being calibrated. From the time taken for the cylinder to fall and the relationship between its volume and length, the volume of gas displaced per unit time can be determined and compared with the meter's reading during calibration.

In the gravimetric method, gas is diverted through the meter under test into a container over a measured period of time. The container is weighed before and after the test; the difference in weight gives the mass of gas collected, from which the flow can be calculated and compared with that recorded by the flow meter.
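
A sketch of the gravimetric gas calculation described above; the container weights, collection time, and gas density are assumed values:

    # Gravimetric gas flow: mass collected over a timed interval.
    weight_before = 12.4160   # kg, container before the run (assumed)
    weight_after  = 12.4384   # kg, container after the run (assumed)
    collection_time = 120.0   # s
    gas_density = 1.184       # kg/m^3, air at the test conditions (assumed)

    mass_flow = (weight_after - weight_before) / collection_time   # kg/s
    volumetric_flow = mass_flow / gas_density                      # m^3/s

    print(f"mass flow {mass_flow*1000:.3f} g/s, "
          f"volumetric flow {volumetric_flow*1000:.3f} L/s")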

Flow rate standards are based either on volume (length) and time or on mass and time. Primary calibration involves establishing a steady flow and accurately measuring the volume or mass of fluid collected over a measured time interval; provided the flow is steady, the volume or mass flow rate can then be calculated. A precise and stable flow meter calibrated by such primary methods can serve as a secondary standard for calibrating other, less accurate flow meters. Errors in flow meters can arise from fluctuations in fluid properties (density, viscosity, temperature), from the meter's orientation and pressure level, and from flow disturbances upstream (and, to a lesser extent, downstream) of the meter. When primary calibration cannot be justified, comparison with a secondary-standard flow meter connected in series with the meter being calibrated may provide sufficient accuracy.

Calibration of level detectors

Hydrostatic level

Introduction

A level sensing device determines the position of the interface between two liquids or between a liquid and a vapour, and transmits a signal representing this value for processing by measurement and control instruments. The output reading changes in proportion to changes in tank level. In hydrostatic level measurement, the head pressure of the liquid column is measured and, with the liquid's specific gravity known, the height of the column can be calculated. Hydrostatic level sensing often uses a differential pressure transmitter to compensate for the atmospheric pressure acting on the liquid: the high-pressure port senses the hydrostatic head plus the atmospheric pressure on the fluid in the tank, while the low-pressure port senses atmospheric pressure only, so the differential reading represents the head alone. In dip-pipe (bubbler) applications, gas flows through a pipe submerged in the liquid in the tank, and a differential pressure sensor measures the back-pressure in the pipe, which rises as the tank level rises: the high-pressure port senses the back-pressure in the dip pipe and the low-pressure port is vented to atmosphere.
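
The head-to-level conversion described above is h = ΔP/(ρg). The sketch below expresses it in terms of the liquid's specific gravity, with the differential pressure and specific gravity assumed for illustration:

    # Hydrostatic level from a differential pressure measurement.
    RHO_WATER = 1000.0   # kg/m^3
    G = 9.81             # m/s^2

    def level_from_dp(dp_pa: float, specific_gravity: float) -> float:
        """Liquid level (m) above the transmitter tap for a given DP reading."""
        return dp_pa / (specific_gravity * RHO_WATER * G)

    # A 14.2 kPa differential on a liquid of specific gravity 0.85 (assumed):
    print(f"{level_from_dp(14200.0, 0.85):.2f} m")   # about 1.70 m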

When it comes to calibration, any differential pressure measuring system follows the same process.

Regarding input and output measurement standards and connections, a low-pressure calibrator is used as an input standard for measurement. It provides and measures low-pressure values necessary for calibrating hydrostatic level systems. Such a calibrator contains
