Download Your In-Depth Guide to Calibration for the Process Industries

This post was written by Mike Cable, author of the ISA book Calibration: A Technician’s Guide and manager of operations technology at Argos Therapeutics, and Ned Espy of Beamex. Click this link to download Calibration Essentials, an in-depth eBook for the process industries.

 

Proper calibration of instruments for the process industries is essential. Yet calibration tends to be one of the most overlooked processes in today’s plants and factories. With industrial technology and tools demanding greater levels of precision, there is an ever-increasing need to calibrate and ensure consistent, reliable measurement with the goal of minimizing downtime, achieving greater production efficiencies, and reducing overall operating costs.

But how do you know that you’re taking the most efficient path towards calibrated, automated production? To help you find that certainty, calibration experts at ISA have teamed with Beamex to publish an in-depth guide to calibration automation, delivering the information you need to ensure a fully calibrated and reliable facility.

The informative new eBook, Calibration Essentials, covers everything you need to know about today’s calibration processes including:

  • A comprehensive, big-picture guide to managing a facility-wide calibration program for industrial automation and control systems.
  • Informative overviews of calibration considerations, such as tolerance errors and calibration uncertainty, along with practice scenarios and solutions for managing them.
  • An in-depth look at some of the new smart instrumentation and WirelessHART instruments and how to effectively calibrate them.
  • A technical discussion on the pros and cons of an individual instrument calibration strategy versus a loop calibration strategy.
  • Detailed guidelines to ensure facility and employee safety and security, as well as compliance with standards, when conducting calibration tasks.

The 60-page eBook can serve as a key resource to help you ensure your facility operates safely and efficiently, and that you are getting the most out of your instrumentation. This roadmap to calibration has tools for workers at every level of your facility to standardize your effort and facilitate an advanced, automated production environment.

Click this link to download Calibration Essentials, an in-depth eBook for the process industries.

 

About the Authors

Mike Cable is the author of the ISA book Calibration: A Technician’s Guide and validation manager at Argos Therapeutics. He is a Level 3 Certified Control System Technician, and his responsibilities include managing the calibration program. Mike started his career as an electronics technician in the U.S. Navy Nuclear Power Program, serving as a reactor operator and engineering watch supervisor aboard the submarine USS Los Angeles, and then at the A1W prototype in Idaho Falls. After leaving the Navy, he started his civilian career at Performance Solutions, performing technical services for the pharmaceutical industry. His 11 years there were highlighted by an assignment to Eli Lilly Corporate Process Automation managing instrument qualification projects and by starting a calibration services division within Performance Solutions. His practical expertise in instrumentation and controls led him to his career path in validation.

Connect with Mike:
LinkedIn

 

 

Ned Espy has been promoting calibration management with Beamex for more than 20 years. He has direct field experience in instrumentation and measurement applications spanning more than 27 years. Today, Ned provides technical and application support to Beamex clients and partners throughout North America.

 

Connect with Ned:
LinkedIn

 

Best ISA Webinars of 2017

This 2017 webinar roundup was edited by Joel Don, ISA’s community manager.

 

As 2017 comes to a close, we surveyed the year’s lineup of educational webinars to select five of the most popular presentations co-hosted with ISA’s partners. Scroll down and enjoy this roundup of “best of the best” webinars.

Large Project Execution – A Better Way

There are many challenges to effectively executing automation scope as overall project size grows to $100 million and beyond. Interfaces between various stakeholders and the distribution of work become critical to define and manage properly. The traditional model of sole-sourcing an EPC to handle everything, including the automation scope, has inherent weaknesses that can be mitigated by an alternate approach. Join us as we review common problems with executing automation scope in large projects and present solutions proven to be effective.

Cybersecurity for Control Systems in Process Automation

Attacks on your production system may happen at any time and at any level, from outside as well as from inside. Which concepts and measures exist to protect your assets efficiently from software attacks? The ISA99 standards development committee brings together industrial cybersecurity experts from across the globe to develop the ISA-62443 (IEC 62443) standards on industrial automation and control systems security. German-based Siemens AG is a leading provider of automation equipment, a global manufacturing company with close to 300 factories, and a provider of Industrial Security Services. In this webinar, ISA99 Committee Co-Chair Eric Cosman and Siemens Plant Security Services PSSO Robert Thompson will present the current threat landscape and key steps you can take to protect your critical assets in the production environment.

How to Avoid the Most Common Mistakes in Field Calibration

Field process calibration isn’t just about getting the job done; it’s about getting the job done right. It’s cliché, but true. Instrument calibration requires the proper process, tools, and parameters for each instrument application to ensure valid results. If one of the three is lacking, your results could have little validity. In this webinar, experts will expose the most easy-to-make mistakes and how to avoid them, so you can be confident in your calibration results.

Unlocking the Truth Behind Alarm Management Metrics

Alarm management is a well-understood process, supported by global standards and operating best practices. But ultimately, the process of alarm management goes beyond alarm benchmarking and KPI reporting. To drive alarm system improvement, action is required. So, what action should you take? What are the metrics really telling you about how your plant is operating? In this joint presentation, Honeywell’s Global Alarm Management Product Director Tyron Vardy and Manufacturing Technology Fellow Nicholas Sands unlock the truth behind your plant’s alarm management metrics.

Protecting Cyber Assets and Manifest Destiny from the Industrial Internet of Threats

During the 1800s, settlers saw it as their “Manifest Destiny” to settle the American West, but found their lands under attack by the cattlemen surrounding them. The Manifest Destiny of industrial process and power generation companies is under similar assault. Bands of outlaws, or hackers, are cutting down perimeter-based defenses and successfully infiltrating process control networks (PCNs). They are aided by the growing attack surface created by Industrial Internet of Things (IIoT) adoption; it is why the IIoT is often referred to as the Industrial Internet of Threats. These and other factors put the highly complex, proprietary, and heterogeneous cyber assets in the plant at risk. Watch this webinar to listen to a discussion on the current landscape of ICS cybersecurity solutions. We will share how ISA advises companies to proceed and discuss “gotchas” that can derail an ICS cybersecurity initiative.

 

 

How to Improve Industrial Productivity with Loop Calibration

This post is authored by Ned Espy, technical director at Beamex.

The typical approach to calibration has been to regularly test instrumentation that affects successful control, safe operation, quality, or other relevant criteria. In most cases, scheduling is conservative, and methods at a particular site have evolved over time. Instrument technicians follow practices that were set up many years ago. It is not uncommon to hear, “this is the way we have always done it.” Measurement technology continues to improve and is getting more accurate. It is also getting more complex: why test a fieldbus transmitter with the same approach as a pneumatic transmitter? The standard five-point, up-down test does not always apply to today’s more sophisticated applications.

In general, calibration tasks require special skills and an investment in test equipment. Sophisticated, highly accurate, multifunctional calibration equipment, incorporating a high-accuracy field calibrator and fieldbus communications for industrial networks (including HART, FOUNDATION Fieldbus, and Profibus PA instruments), is required to effectively calibrate advanced instrumentation, such as multivariable and smart/digital instruments. With the complexity of instrumentation, there is more pressure than ever on the calibration technician. Technicians with more than 30 years of experience at a single plant are retiring and cannot easily be replaced by outsourcing or by younger technicians. Documentation requirements are becoming much more common for improved quality, environmental monitoring, and adherence to government regulations. Calibration software is often required to store and analyze detailed data as well as generate calibration certificates and reports. All of these factors should cause companies to scrutinize and evaluate current practices and consider simpler, more efficient test methods to ensure proper plant operation.

While not a new concept, loop testing is the basis for several advanced calibration techniques. In some cases, individual instrument calibration remains the best practice to achieve maximum accuracy (e.g., custody transfer metering). However, there are viable methods to test a loop end to end. If readings are within acceptable tolerances, there is no need to break into the loop for individual instrument testing. To be effective, a common-sense approach is required, with the goal of minimizing downtime and maximizing technician efficiency while ensuring reliable control and maintaining a safe work environment.

Loop basics: What is a loop?

The idea of a loop can mean different things to different people based on their work background or industry. In practice, a loop is simply a group of instruments that in combination make a single measurement or affect a control action in a process plant. A typical temperature example is a temperature element (resistance temperature detector [RTD] or thermocouple [T/C]) that is connected to a transmitter, which is connected in series to a local indicator, and finally to a control system input card (distributed control system [DCS] or programmable logic controller [PLC]). The signal is then displayed on one or more control panels, and the measurement is ultimately used to control the process.
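
To make the chain concrete, here is a minimal sketch (not from the article; the tag names are hypothetical) of the example temperature loop represented as an ordered list of devices, where the loop measurement is the end-to-end result of the whole chain:

```python
from dataclasses import dataclass

@dataclass
class Instrument:
    tag: str   # hypothetical tag name
    role: str  # what the device does in the loop

# Hypothetical representation of the example temperature loop (figure 1)
temperature_loop = [
    Instrument("TE-101", "RTD/thermocouple sensing element"),
    Instrument("TT-101", "temperature transmitter"),
    Instrument("TI-101", "local indicator"),
    Instrument("TIC-101", "DCS/PLC input card and display"),
]

# The displayed value used for control is the combined result of every device in this chain
print(" -> ".join(f"{i.tag} ({i.role})" for i in temperature_loop))
```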

When evaluating a loop for testing, an important distinction is whether the loop can be tested from end to end or only in part. For an end-to-end test of the example temperature loop (figure 1), the temperature element needs to be removed from the process and placed in a dry block or temperature bath to simulate the process temperature. The final displayed measurement is compared to the simulated temperature, and the error is interpreted. An end-to-end loop test is the best practice; if an accurate temperature measurement is delivered to the control process, it does not matter how the individual instruments are performing. The DCS/PLC value is what is used to make any control changes, alarms, notifications, etc. However, if the loop measurement has a significant error, then the error of each instrument in the loop should be checked and corrected one by one to bring the final measurement back into good operation.

In some cases, it is not possible to make an end-to-end loop test. In the example loop, it may be extremely difficult or expensive to remove the probe from the process or to insert the probe into a temperature block or bath. If this is the situation, then a partial-loop test can be performed. The temperature element is disconnected from the transmitter, and a temperature calibrator is used to simulate a signal into the transmitter. As in the end-to-end loop test, the final displayed measurement is compared to the simulated temperature, and the error interpreted, etc. While the loop is broken apart, it would be good to check the installed temperature element; perhaps a single-point test could be done by temporarily inserting a certified probe or thermometer into the process and comparing that measurement against the element’s output when connected to a calibrator.
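
In either case (end-to-end or partial loop), the evaluation boils down to comparing the simulated input against the final displayed value and judging the error against the loop tolerance. A minimal sketch of that check, with illustrative numbers that are not from the article:

```python
def loop_test(simulated_input_degF: float,
              displayed_value_degF: float,
              tolerance_degF: float) -> bool:
    """Compare the final displayed (DCS/PLC) value against the simulated process input."""
    error = displayed_value_degF - simulated_input_degF
    passed = abs(error) <= tolerance_degF
    verdict = "pass" if passed else "fail: check each instrument in the loop one by one"
    print(f"simulated {simulated_input_degF:.1f} degF, displayed {displayed_value_degF:.1f} degF, "
          f"error {error:+.2f} degF -> {verdict}")
    return passed

# Example: dry block set to 150 degF, control system shows 151.2 degF, loop tolerance +/-2.0 degF
loop_test(150.0, 151.2, 2.0)
```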

Analysis of loop error

Error limits can be somewhat difficult to determine, and many mistakes are made when setting them. One common judgment is to base the process measurement tolerance on a manufacturer’s specification. Some manufacturers are better than others, but the marketing department might have as much to say about an accuracy specification as a research and development engineer. Furthermore, accuracy statements are generally off-the-shelf values that do not include such things as long-term stability (typically a significant error component), repeatability, and temperature effects. Sensor and transmitter accuracy should be one consideration in setting the process measurement tolerance, not the final value.

The best method is to have a collaborative discussion between the control engineer, the quality or safety engineer, and the instrument engineer to set a realistic and practical tolerance. It is very important to keep in mind that the tighter the tolerance, the more expensive it will potentially be to make and maintain the measurement. The balance falls somewhere between the tolerances required for efficient control, the best quality, and the highest safety versus minimizing downtime, maximizing technician efficiency, and using optimum test equipment. In practice, it is common to see error evaluated as a percent of span. However, this does not easily apply to flow measurements (typically a percent of reading or rate) or analytical instruments (e.g., pH or parts per million).

One good way to look at error is to think in terms of the loop’s input engineering units. For the temperature loop example (figure 1), the discussion should focus on the minimum temperature error that creates the highest operating efficiency without compromising quality or safety and that can be realistically measured by the calibration or test equipment. One other complication is that a given loop is no more accurate than the least accurate component contributing to the measurement. Today’s transmitters are extremely accurate and provide excellent performance; however, temperature sensors are typically not nearly as accurate and, depending on the process, can exhibit significant drift. If a typical RTD is rated to ±0.5ºF, a control engineer cannot expect better than ±0.5ºF to control the process. In reality, even though the transmitter and DCS analog-to-digital conversion can be significantly more accurate, recognize that these components add additional error to the loop measurement. A common practice to compute loop error is a statistical average or a root-mean-square (RMS) calculation. For the temperature loop example, assume the RTD sensor is rated ±0.5ºF, the transmitter is ±0.10 percent of span (span = 50ºF to 250ºF), and the DCS input card is ±0.25 percent of span (span = 50ºF to 250ºF). The loop error is:
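
The worked calculation that originally followed this sentence did not survive formatting. As a sketch of the root-sum-square (RSS) approach using the values just given, which is consistent with the ±0.75ºF and ±1.2ºF figures discussed below:

```python
import math

# Component errors from the example, all in degF over the 50-250 degF (200 degF) span
rtd_error = 0.5            # RTD sensor, rated +/-0.5 degF
xmtr_error = 0.0010 * 200  # transmitter, +/-0.10 percent of span = 0.2 degF
dcs_error = 0.0025 * 200   # DCS input card, +/-0.25 percent of span = 0.5 degF

rss_error = math.sqrt(rtd_error**2 + xmtr_error**2 + dcs_error**2)
summed_error = rtd_error + xmtr_error + dcs_error

print(f"RSS loop error    ~ +/-{rss_error:.2f} degF")    # ~0.73 degF, commonly rounded to +/-0.75
print(f"Summed loop error = +/-{summed_error:.1f} degF")  # 1.2 degF (most conservative)
```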

The most conservative approach is to simply sum the errors (0.5 + 0.2 + 0.5 or ±1.2ºF). The final decision should also take into account the criticality of the measurement along with the impact the error will have on the process and the risks involved.

The discussion should not end here. The control engineer will push for the lowest number possible (±0.75ºF), but there are other factors. An evaluation of the test equipment is required. The typical temperature block has an accuracy anywhere from 0.3ºF to 1.0ºF, and it is good practice to have a 4:1 ratio of test equipment accuracy versus process measurement tolerance. To make a proper temperature simulation, a reference probe (a reference or secondary standard resistance thermometer) with an accurate readout needs to be used to improve measurement error to 0.1ºF to 0.2ºF. This could impose a significant investment in test equipment, depending on the industry. Note that more accurate test equipment also has a higher maintenance cost. For example, what if the quality engineer reports that an error of ±5ºF is all that is needed to make a good product? Why impose an unnecessary burden on the instrumentation department? If the control engineer has no objection (after receiving input from reliability, safety, etc.), a practical approach is to set a loop tolerance of ±2.0ºF, assuming the temperature block is accurate to ±0.5ºF over the range of 50ºF to 250ºF. Although not as accurate as the instrumentation in the loop, this is better than 2:1 for what is required to make a quality product and allows the calibration technician to use a simple combination of equipment.

Although this is just one scenario, it is a good practice to determine the “weakest link” in the loop and not set an unrealistic performance tolerance. When looking at fuel costs or process efficiencies, this type of analysis could easily justify a larger investment in test equipment and frequent testing if the cost or risk of error is high. With good judgment, companies can strike a balance and avoid unreasonable testing requests, meeting lean manufacturing objectives.

Loop testing examples

Temperature loop test example

If a process plant has hundreds of temperature loops like the example (figure 1), loop testing offers real benefits. While it takes time to make a test with a temperature block, the calibration technician is effectively checking the two, three, or more instruments that make up the loop. With this approach, it might make sense to invest in more rugged or more accurate probes to minimize failures. Depending on the process, more frequent testing may be required, but in any case, management will have a high level of confidence in the accuracy of the measurements. With repeatable work methods, technicians will recognize common issues, and there should be efficiency gains. If calibrations are documented, companies can analyze test cycles and extend, or at least optimize, calibration intervals. There will always be a need for troubleshooting and emergency repairs, but the loop cycle should be reset whenever such an event occurs. This methodical approach effectively provides a “touch” to every instrument in the plant while minimizing disturbances to loop integrity and delivering the very best measurements to the control system.

Figure 1. Example temperature loop

Multivariable loop example

Flow measurements can be very demanding and often require very tight performance tolerances. In the case of natural gas or steam metering, a small measurement error can cause significant billing errors, drawing extra scrutiny from management. A common orifice metering practice is to compensate the differential pressure measurement by factoring in the process temperature and static pressure. The DCS can process these three measurements to make an accurate flow calculation. However, there are now differential pressure flowmeters (i.e., multivariable) with an integrated process RTD and static pressure measurement that provide a compensated flow measurement output; the flow calculation is built into the smart transmitter (figure 2).

Figure 2. Example multivariable loop

If the control system independently processes the three measurements, typical test procedures apply, but a loop test should be done to verify the accuracy of the compensated flow reading. Multivariable meters appear to be complex. However, by identifying the measurement components, a loop test can be set up to quickly verify that the meter is correctly measuring the flow to a desired percent of reading accuracy. For example, consider a steam application:

  • Input pressure range: 0 to 250 inH2O
  • RTD input range: –200ºF to +800ºF
  • Normal process temperature: 450ºF
  • Static pressure input range: 0 to 800 psi
  • Ambient barometric pressure: 14.735 psia (average local barometric pressure in 2012)
  • Output: 4–20 mA (typical range of 0 to 1,500 lbs/hr, ±1 percent of reading)

For this example, set up a nonlinear test where the expected pounds per hour (lbs/hr) output is calculated for specific pressure input test points, assuming a constant, typical 450ºF temperature and a static pressure of 14.735 pounds per square inch (psi), since the low side of the transmitter is vented to atmosphere for testing. After consulting with the control engineer, the expected measurements might be:
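
The original table of expected measurements is not reproduced here. As a rough, hedged sketch only, the expected output at each differential pressure test point can be approximated with the square-root flow relationship, normalized to the 1,500 lbs/hr full-scale value; the real transmitter applies full temperature and static pressure compensation, so the actual numbers agreed with the control engineer would differ:

```python
import math

DP_FULL_SCALE_INH2O = 250.0   # input pressure range from the example
FLOW_FULL_SCALE_LBH = 1500.0  # flow output at 100 percent DP

def expected_flow_lbh(dp_inh2o: float) -> float:
    """Approximate expected flow for a DP test point, using Q ~ k * sqrt(DP)."""
    return FLOW_FULL_SCALE_LBH * math.sqrt(dp_inh2o / DP_FULL_SCALE_INH2O)

# Five-point (0-50-100-50-0 percent) DP test points
for pct in (0, 50, 100, 50, 0):
    dp = DP_FULL_SCALE_INH2O * pct / 100.0
    print(f"{pct:3d}% DP = {dp:6.1f} inH2O -> expected ~{expected_flow_lbh(dp):7.1f} lbs/hr")
```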

Instruments are available that have unique features for testing multivariable transmitters. The preceding nonlinear table can be entered into software for a specific tag and downloaded into the test instrument for testing. Additionally, the three tests can be performed on the process variables versus each HART value that is used in the compensated output calculation. The only additional test tool required would be a temperature block.

The loop test should simply be a five-point check of inches of water (inH2O) versus pounds per hour at 0 percent, 50 percent, 100 percent, 50 percent, and 0 percent. If all the measurements fall within 1 percent of reading, the technician can move on to the next instrument. If the loop test result is marginal or a failure, then three tests of the differential pressure versus HART, RTD temperature versus HART, and static pressure versus HART will need to be performed and adjusted as needed. Upon completion of the three variables that plug into the flow calculation, a quick check of the 4–20 milliampere (mA) output should be done as well. Assuming one or more of the inputs required adjustment, a final “as left” loop test of the improved flow output will document that the meter is in good operating condition. It saves time to focus on the nonlinear input versus flow output for a multivariable loop. This will result in a much simpler maintenance task for the instrument technician.
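
A minimal sketch of the pass/fail logic for that five-point check, using the ±1 percent of reading tolerance from the example (the as-found readings below are purely illustrative):

```python
def check_point(expected_lbh: float, measured_lbh: float, tol_pct_of_reading: float = 1.0) -> bool:
    """Return True if the measured flow is within the percent-of-reading tolerance."""
    if expected_lbh == 0.0:
        # At the zero point a percent-of-reading check is undefined; this small
        # absolute limit is an assumption, not from the article
        return abs(measured_lbh) <= 5.0
    error_pct = abs(measured_lbh - expected_lbh) / expected_lbh * 100.0
    return error_pct <= tol_pct_of_reading

# Illustrative as-found data for the 0-50-100-50-0 percent points as (expected, measured) in lbs/hr
as_found = [(0.0, 1.0), (1060.7, 1065.0), (1500.0, 1491.0), (1060.7, 1070.0), (0.0, 0.5)]
if all(check_point(exp, meas) for exp, meas in as_found):
    print("Loop passes - move on to the next instrument")
else:
    print("Loop marginal/fails - test DP, RTD, and static pressure versus HART, then re-test the loop")
```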

Other loop examples

A pressure loop can easily be checked by applying a pressure to the input transmitter and comparing it to the DCS or final control reading. This can be done very quickly and can be much more effective than testing just the transmitter. Any batch control loop should be evaluated for loop testing with the goal to make work more efficient for the technician while verifying that control measurements are as accurate as possible.

This same technique should be considered for control valve testing, where a mA input into the I/P is compared to a mA output (feedback). This would also apply to smart control valve positioners using a communicator to step the valve and monitor the digital feedback. By making 10 percent test points, a quick test on a valve will verify it is operating correctly. In most cases, the valve will pass, and the technician can make a quick round of testing critical control valves.
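
As an illustrative sketch only (the tolerance and readings below are assumptions, not from the article), a 10 percent step check could compare the commanded mA signal against the positioner feedback:

```python
def valve_step_check(commanded_ma, feedback_ma, tol_ma=0.16):
    """Return (commanded, feedback) pairs that disagree by more than tol_ma (0.16 mA = 1% of a 16 mA span)."""
    return [(c, f) for c, f in zip(commanded_ma, feedback_ma) if abs(f - c) > tol_ma]

# 10 percent steps over a 4-20 mA range: 4.0, 5.6, 7.2, ... 20.0 mA
steps = [4.0 + 1.6 * i for i in range(11)]
feedback = [s + 0.05 for s in steps]  # hypothetical feedback readings

failures = valve_step_check(steps, feedback)
print("Valve OK" if not failures else f"Investigate valve/positioner at: {failures}")
```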

An overlooked component of a flow loop is the primary element (orifice plates, annubars, or averaging pitot tubes). These are critical for proper flow measurement. They cannot be calibrated, but they should be inspected for damage or wear.

Another critical area where loop testing should be considered is the safety instrumented system (SIS). When the process is down, it is common to follow a script of testing procedures that can include calibration of single instruments. However, whenever possible, consider checking an entire loop where the integrity of a critical measurement can be verified, especially for temperature (using a block or bath) or pressure measurements. Also, it may be possible to perform quick and simple tests on an SIS while the process is running to ensure systems are operating properly.

Calibration for optimum control

Many, many process plants perform calibration by simply checking the transmitter. It takes time to use a temperature block or bath, but consider how important it is to test all the devices that make a given measurement. Transmitters are not the only devices that drift. Temperature probes drift due to thermal stress/shock and vibration or physical damage. DCS and PLC input cards drift as much or more than transmitters. If loops are not being tested, how can a good measurement be made? Without good measurements, how can optimum control, safety, reliability, and quality be ensured?

As instrumentation and automation evolve, so should the methods for calibrating instrumentation. Loop testing is not a new concept, but it is an underutilized strategy for instrumentation testing. New integrated calibration devices enable flexible tests that meet a variety of applications and provide detailed documentation and electronic reporting.

By approaching the task of calibration with a fresh look, there are plenty of opportunities to do more with less and effectively “touch” every instrument in the plant more efficiently using loop calibration strategies. Logical and careful planning of loop testing strategies results in improved control performance without compromising quality, reliability, or safety of plant operations.

About the Author

Ned Espy has been promoting calibration management with Beamex for almost 20 years. Ned has helped develop best practices for calibration, with a focus on pressure, temperature, and multivariable instruments. He is a consistent editorial contributor to leading industry publications and has received significant recognition within the automation industry. Ned teaches calibration best practices and provides technical support to end users and the Beamex sales team in North America.


Connect with Ned:
LinkedIn

 

A version of this post was originally published in InTech magazine.

Webinar Recording: How to Calibrate Differential Pressure Flowmeters

This ISA co-hosted webinar on differential pressure flowmeter calibration was presented by Nicole Meidl, multivariable transmitter expert at Emerson; Ned Espy, technical director at Beamex; and Roy Tomalino, professional services engineer at Beamex. This is Part 2 in the webinar series on flowmeter calibration. To view Part 1, click this link.

In this webinar, experts examine the complexity of testing popular multivariable differential pressure cells that provide a compensated flowmeter output based on differential pressure (inH2O), static pressure (psia), and temperature (RTD) variables. While this smart instrument can make a very accurate measurement, it is often a “mystery” how to properly check and calibrate the meter in the field.

 

About the Presenters


Nicole Meidl is a global product management engineer at Emerson Automation Solutions. Nicole is a multivariable transmitter expert and is involved in product development, support, and training. She earned a bachelor’s degree in mechanical engineering from the University of St. Thomas.


Connect with Nicole:
LinkedIn

 

 

Ned Espy has been promoting calibration management with Beamex for more than 20 years. He has direct field experience in instrumentation and measurement applications spanning more than 27 years. Today, Ned provides technical and application support to Beamex clients and partners throughout North America.


Connect with Ned:
LinkedIn

 

 

Roy Tomalino has been teaching calibration management for 14 years. Throughout his career, he has taught on four continents to people from over 40 countries. His previous roles include technical marketing engineer and worldwide trainer for Hewlett-Packard and application engineer with Honeywell. Today, Roy is responsible for all Beamex training activities in North America.

Connect with Roy:

LinkedIn
Email

 

 

Webinar Recording: How Does Low Flow Affect Differential Pressure Flowmeter Calibration?

This guest post is authored by Ned Espy, technical director at Beamex. This post was written in conjunction with an ISA co-hosted webinar on differential pressure flowmeter calibration. This is Part 1 of the two-part ISA webinar series. To watch the webinar recording for Part 2, click this link.

For flowmeter calibration, the phenomenon of “low flow cut-off” is only associated with differential pressure transmitters with square root extraction—if the square root (flow) calculation is done in the control system (DCS/PLC), then this test approach does not apply. The graphic below illustrates the issue.

Some may ask why we are testing the initial point at 10 percent and may be concerned that we are not measuring the first 10 percent of flow. Note in the graphic above how rapidly the output flow signal (mA) changes with a very small change in the differential pressure input. For such a transmitter, the first 10 percent of flow cannot be used to meter the process because it is unstable and unpredictable, and vendors filter that portion of the signal.

In demonstrating how to test and trim a DP transmitter, our first test point was actually at 1 percent of the input (DP), which represents 10 percent of the output (flow). When the signal is below this value, it is usually converted to zero by the control system because it jumps around too much. The graphic below displays recommended example test points and a graph of actual test data.

Figure: Recommended low flow cut-off test points and actual test data.
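
As a sketch of the square-root relationship behind these test points (assuming a standard 4–20 mA transmitter with square root extraction), note how 1 percent of the DP input already corresponds to 10 percent of the flow output, or 5.6 mA:

```python
import math

def output_ma_sqrt(dp_fraction: float) -> float:
    """4-20 mA output with square root extraction; dp_fraction is 0..1 of the DP span."""
    return 4.0 + 16.0 * math.sqrt(dp_fraction)

for dp_pct in (0.25, 0.5, 1.0, 2.0, 10.0, 50.0, 100.0):
    ma = output_ma_sqrt(dp_pct / 100.0)
    flow_pct = (ma - 4.0) / 16.0 * 100.0
    print(f"{dp_pct:6.2f}% DP -> {ma:5.2f} mA ({flow_pct:5.1f}% of flow)")

# 1% DP -> 4 + 16 * sqrt(0.01) = 5.6 mA, i.e. 10% of flow: the typical low flow cut-off point
```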

If your application requires accurate metering below 5.6 mA (less than 1 percent of the differential pressure, which is less than 10 percent of the flow rate), then you will need a secondary meter with a lower range (for example, 0 to 30 in H2O would complement a 0 to 207 in H2O range).

Figure: Low flow cut-off calibration graph.

Takeaway: There is no reason to test a differential pressure transmitter with square root extraction below 1 percent of DP/10 percent of flow (mA). Check with your control engineer to see if they utilize a low flow cut-off strategy.

This post was written in conjunction with an ISA co-hosted webinar on differential pressure flowmeter calibration. This is Part 1 of the two-part ISA webinar series. To watch the webinar recording for Part 2, click this link.

About the Author

Ned Espy has been promoting calibration management with Beamex for more than 20 years. He has direct field experience in instrumentation and measurement applications spanning more than 27 years. Today, Ned provides technical and application support to Beamex clients and partners throughout North America.


Connect with Ned
:
LinkedIn

 

How to Build an Industrial Calibration System Business Case

This guest post is authored by Villy Lindfelt, director of marketing and legal affairs at Beamex, in conjunction with an upcoming ISA co-hosted webinar on building a business case for a calibration system. Click this link for more details on the webinar and how to register.

An investment in calibration equipment and systems must be financially justified, just like any other business investment. But does a cheaper cost of purchase always mean a higher return on investment? Not necessarily. When building a business case for a calibration system investment, what may seem at the outset to be cheaper may not necessarily be so if the evaluation is made from a total or lifecycle cost perspective instead of evaluating the cost of purchase only. Calibration system A can have a lower cost of purchase than calibration system B, but if calibration system B has better operational efficiency, the total cost of calibration system B can be lower than that of alternative A. The key point is to consider what elements form the total cost of implementing and running a system instead of focusing only on isolated costs, such as the purchase price of a calibrator or the cost of a software license.

Why invest in a calibration system?

Your calibration system business case starts with defining a purpose. Why are you considering an investment in calibration? What value are you looking to generate for the company from the investment? Many times, a calibration system investment is competing for the same monetary resources as, for instance, a new building or a renovated company parking lot. That is why the evaluation should start with defining why you are investing in calibration. Some common reasons for making a calibration system investment relate to improving the efficiency of the calibration process (e.g., time savings), boosting plant safety, enriching product quality through more accurate measurements, improving compliance with applicable standards and regulations, and harmonizing calibration processes between a company’s manufacturing plants.

Learn how to build a business case for a calibration system at an upcoming ISA co-hosted webinar. The webinar will review financial, compliance, risk mitigation, and system lifecycle aspects of a calibration system, and will also feature a case study from the power generation industry. The free webinar starts at 12pm ET, Tuesday, March 8.  For details and registration information, click this link.

Compare total costs, focus on system lifecycle

When investing in a calibration system and building a business case for comparing alternative solutions, the nature of calibration activities as well as the lifecycle of the system and the calibration process should be included in the equation. Comparisons of purchase price, features, and functions are important, but they are only a start and only a small part of the financial evaluation of alternative calibration solutions. In short, there are basically three cost-generating elements related to calibration: equipment, labor, and downtime (planned or unexpected). When trying to understand the total costs of different calibration system investments, as well as the economic benefits of each alternative, ask yourself:

  • What are the implementation vs running costs of the system?
  • What is the expected system lifecycle?
  • What type of productivity benefits can be achieved (e.g. time savings, process improvements)?
  • Does the system impact process downtime?
  • What are the costs of maintaining the system?
  • Does the system improve plant safety?
  • Is product quality influenced?
  • What is the estimated labor impact?
  • Does the system impact risk mitigation and compliance?

Build for the complete lifecycle

The key element in creating a business case for a calibration system is to build the business case for the entire lifecycle of the system, not just to compare the cost of equipment purchases and software licenses. Therefore, you should actually compare alternative calibration processes, not the equipment as such. Consider what it means to implement and run a process with the alternative equipment, and weigh various viewpoints: financial impact, time savings, risk mitigation, implementation, running, and maintenance costs, as well as headcount impact, among other things.
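
A minimal sketch of the kind of lifecycle comparison described above; every figure below is a hypothetical placeholder, not Beamex data:

```python
def total_cost_of_ownership(purchase: float, annual_running: float,
                            annual_labor: float, annual_downtime: float,
                            years: int) -> float:
    """Lifecycle cost = purchase price + recurring costs over the expected system lifecycle."""
    return purchase + years * (annual_running + annual_labor + annual_downtime)

# Hypothetical alternatives: A is cheaper to buy, B is more efficient to run
system_a = total_cost_of_ownership(purchase=40_000, annual_running=8_000,
                                   annual_labor=30_000, annual_downtime=12_000, years=10)
system_b = total_cost_of_ownership(purchase=60_000, annual_running=6_000,
                                   annual_labor=22_000, annual_downtime=8_000, years=10)

print(f"System A lifecycle cost: {system_a:,.0f}")  # 540,000
print(f"System B lifecycle cost: {system_b:,.0f}")  # 420,000: cheaper over the lifecycle despite the higher purchase price
```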

About the Author
Villy Lindfelt is director of marketing and legal affairs at Beamex Oy Ab. He supervises the marketing team and focuses on contracts and documentation related to calibration system implementation projects. Villy joined Beamex in 2004. He has a master’s degree in economics as well as a master of laws degree.
Connect with Villy:
LinkedIn

 
