Proper calibration of instruments for the process industries is essential. Yet calibration tends to be one of the most overlooked processes in today’s plants and factories. With industrial technology and tools demanding greater levels of precision, there is an ever-increasing need to calibrate and ensure consistent, reliable measurement with the goal of minimizing downtime, achieving greater production efficiencies, and reducing overall operating costs.
But how do you know that you’re taking the most efficient path towards calibrated, automated production? To help you find that certainty, calibration experts at ISA have teamed with Beamex to publish an in-depth guide to calibration automation, delivering the information you need to ensure a fully calibrated and reliable facility.
The informative new eBook, Calibration Essentials, covers everything you need to know about today’s calibration processes, including:
- A comprehensive big picture guide on how to manage a facility-wide calibration program for industrial automation and control systems.
- Informative overviews of calibration considerations, such as tolerance errors and calibration uncertainty, as well as practice scenarios and solutions for managing them.
- An in-depth look at some of the new smart instrumentation and WirelessHART instruments and how to effectively calibrate them.
- A technical discussion on the pros and cons of an individual instrument calibration strategy versus a loop calibration strategy.
- Detailed guidelines to ensure facility and employee safety and security, as well as compliance with standards, when conducting calibration tasks.
The 60-page eBook can serve as a key resource to help you ensure your facility operates safely and efficiently, and that you are getting the most out of your instrumentation. This roadmap to calibration has tools for workers at every level of your facility to standardize your effort and facilitate an advanced, automated production environment.
Glass manufacturing is one of the most energy-intensive industries, with energy costs representing roughly 14 percent of the total production costs. The bulk of energy consumed comes from natural gas combustion for heating furnaces to melt raw materials, which are then transformed into glass. Additionally, glass manufacturing is sensitive to the combustion processes, which can affect the quality of the glass and shorten the lifespan of the melting tanks if not managed properly. Historically, the composition of natural gas has been relatively stable. However, dramatic changes in the supply of natural gas (including shale gas and liquefied natural gas imports) are causing end users to experience rapid and pronounced fluctuations in gas quality.
When the composition of the incoming gas changes, the air/fuel ratio can be adjusted to keep the furnace operating at peak efficiency. This can significantly reduce energy consumption and deliver substantial savings in product quality and equipment life. Optimizing furnace efficiency, however, has traditionally been complex and costly. Next-generation gas chromatography is changing that paradigm, providing a cost-effective, task-focused methodology that can be carried out by less technically proficient personnel than were traditionally required.
Two unique fuel gas compositions can have the same energy content, but behave very differently in the burner. This is because different amounts of diluents (nitrogen and carbon dioxide) and different ratios of hydrocarbons produce different densities and, thus, different velocities through the burner restrictors. The Wobbe Index, the ratio of the energy value to the square root of the specific gravity (Wobbe Index = energy value/√specific gravity), provides an indicator of how the fuel will act through a burner and provides a better variable to control the air/fuel ratio.
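The Wobbe Index formula can be sketched in a few lines of Python; the gas values below are illustrative, not measured compositions:

```python
import math

def wobbe_index(energy_value, specific_gravity):
    """Wobbe Index = energy value / sqrt(specific gravity).

    energy_value     -- heating value of the gas (e.g., Btu/scf)
    specific_gravity -- gas density relative to air (dimensionless)
    """
    return energy_value / math.sqrt(specific_gravity)

# Two illustrative gases with the same energy content but different
# densities deliver different heat rates through the same burner orifice:
gas_a = wobbe_index(1050.0, 0.60)   # lighter gas -> higher Wobbe Index
gas_b = wobbe_index(1050.0, 0.70)   # heavier gas -> lower Wobbe Index
```

Even though both gases carry the same energy per unit volume, the lighter gas flows faster through a fixed restrictor, which is exactly the behavior the Wobbe Index captures.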
Gas chromatographs are used throughout the natural gas chain of custody (from wellhead to burner tip) to determine the gas composition for quality monitoring and energy content. For pipeline quality natural gas, the industry standard is the C6+ measurement method. This method determines the individual composition for each of the hydrocarbons from methane to normal-pentane, nitrogen, and carbon dioxide, and combines heavier hydrocarbons (e.g., hexane, heptane, octane) as a C6+ component. From the composition, energy content, specific gravity, Wobbe Index, and other physical properties are determined using calculations from international standards such as ISO 6976, GPA 2172, and AGA 8. Using the Wobbe Index and a gas chromatograph to determine the gas composition gives insight into the fuel quality variations from the gas supplier. Additionally, the C6+ measurement is the standard on which custody transfer billing is based, and therefore is a direct method of ensuring that the energy used matches the bill from the gas supplier.
Optimizing the air/fuel ratio
The value of optimizing the air/fuel ratio cannot be overstated. The energy value (British thermal unit [Btu] or calorific value) or Wobbe Index is output from the gas chromatograph via either Modbus or an analog 4–20 mA signal. This signal can be used to integrate with the plant process control system to trim the air/fuel ratio and ensure maximum production efficiency (figure 1). When the air/fuel mixing proportion is correct (stoichiometric), all the fuel will be consumed during the combustion process and will burn cleanly. This enables the furnace to operate at its most efficient, cost-effective point. Changes in the composition of the fuel gas will cause changes in:
- the physical properties of the gas
- the minimum air requirements needed to achieve stoichiometric combustion
- the flue gas composition
- flame speed and flame position
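The Btu or Wobbe Index value mentioned above is commonly delivered to the control system over an analog 4–20 mA signal, which is just a linear mapping over a calibrated range. A minimal sketch, assuming a hypothetical Wobbe Index range of 1200–1400 for the output span:

```python
def to_milliamps(value, range_low, range_high):
    """Linearly scale an engineering value onto a 4-20 mA analog output."""
    fraction = (value - range_low) / (range_high - range_low)
    fraction = min(max(fraction, 0.0), 1.0)   # clamp to the calibrated span
    return 4.0 + 16.0 * fraction

def from_milliamps(milliamps, range_low, range_high):
    """Recover the engineering value on the control-system side."""
    return range_low + (range_high - range_low) * (milliamps - 4.0) / 16.0

# A mid-range Wobbe Index maps to mid-scale current (12 mA)
signal = to_milliamps(1300.0, 1200.0, 1400.0)
```

The same scaling, inverted, lets the process control system recover the Wobbe Index value used to trim the air/fuel ratio.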
Because glass quality is sensitive to the combustion processes, failing to respond to variations in the composition of the natural gas can result in losing an entire production run due to poor gas quality.
A major glass company in the southeastern U.S. is a heavy user of natural gas. However, the gas comes from multiple locations, causing a constant fluctuation of the Btu value. Because gas flow is adjusted based on the Btu value, knowing the precise measurement is essential. In addition, because the density of the gas varies, knowing the Wobbe Index is critical to quality. When the company began employing a gas chromatograph to optimize its fuel quality, it found the traditional intricacies of gas chromatographs inappropriate for its application. Despite repeated training, its staff was unable to calibrate the instrument. New gas chromatograph technologies designed specifically for natural gas optimization significantly reduced the complexity of operation. In new designs, all of the complex analytical functions of the gas chromatograph may be contained in a replaceable module, greatly simplifying maintenance. Features like auto-calibration make operation easier and more accurate, even for novice users.
Reduced need for specially trained gas chromatograph technicians
At a glass manufacturer in the U.K., poor fuel gas energy measurement led to inadequate air/fuel control, higher energy costs, and a reduction in quality of the finished product. In addition, the company needed compositional data for the calculation of carbon emissions factors, and it lacked a workforce skilled in chromatography. By using new natural gas chromatography technology, it optimized the stoichiometric ratio for stable flame heat, maximized the lifetime of the melting tank, reduced energy costs, and participated in the EU Emission Trading System program. All of this was accomplished without the need for specially trained gas chromatograph technicians.
New gas chromatograph technologies also save costs with capabilities like calibration gas saving features. Self-diagnostics mean the users can rely on the instruments to signal the need for maintenance, while step-by-step on-screen instructions walk the techs through any required processes. The sample handling system includes both particulate and liquid filters and incorporates fixed flow restrictors, removing the need for operators to constantly monitor and adjust the sample system.
Many companies in a wide range of industries faced with the problems of inconsistent quality in natural gas may not have considered gas chromatography as a viable solution for balancing air/fuel ratio due to the traditional complexities of the measurement. It is time to look again. New developments in gas chromatography technology may make this approach the first choice for improving energy efficiency, and ultimately, process quality.
Last month’s blog produced uncommon interest, some from old friends and some from persons newly engaged in struggling to preserve assets. One reader had experience with military operations in Iraq and understood that theft is not just benign self-interest but may be conducted with the intent of deliberate harm.
Sometimes such operations disclose information about operator capabilities and requirements. Some deliberately embarrass the entities dedicated to preventing such incursions. Perhaps the message is that petroleum is valuable in many contexts and worthy of active care. In some locations and situations our confidence that the pipe we buried last year is still doing just what we intended may be unrealistically naïve.
Possibly the world leader in organized anti-theft effort is Pemex. Monitoring has been in place on some of its systems for well over a decade, and the results have been amazing. Initially, some systems were looked at more as free distribution points where “the people’s petroleum” was delivered. Of course, that was never quite the intent, and there are many stories. This year, some estimates put Pemex losses to theft at over $1.5 billion (USD). Based on actual experience, applying its highly successful and aggressive monitoring and interdiction program more broadly could eliminate most or even all of those losses.
The earliest anti-theft efforts began on lines known to be heavily attacked. Monitoring could detect a tap and its location within minutes. Special pre-positioned response units would be notified and would deploy into tactical positions. Usually the theft operation’s people and equipment could be apprehended. After a few such interdictions, word got out. Theft from targeted areas declined to zero. Unfortunately, theft continued unabated in unmonitored areas. The prospect of getting caught diminished enthusiasm, but actual interdiction seemed necessary to completely discourage these operations.

In some countries, enabled and exacerbated by corruption or government dysfunction, theft seems to have become normal. It can be hard to curtail these situations once they are firmly established. The spread of corruption seems systemic and the impact severe, not just from damage to the facilities and theft of the producers’ product, but also to the people and businesses that rely on normal access to these products. The fact that the petroleum is stolen does not make it free to users – in fact, the incursions can produce scarcity, increasing user-level cost.
In these matters, automation helps. Theft can, with appropriate effort, be curtailed or limited. Rational economic outcomes restore some sense of markets and order which tends to normalize and enhance business in the surrounding communities. The power and wealth associated with these activities can be limited or eliminated by fast and decisive action. Even sophisticated theft mechanisms can be identified by appropriate monitoring methods and equipment. Technology moves the endeavor from a conflict of wills to the effective use of resources.
In one country, a pipeline that had been a substantial theft target was estimated to have perhaps 16 active theft taps at any given moment. Losses were in the many thousands of dollars per hour. A monitoring system was deployed, resulting in more than 20 apprehensions over the first few days. Theft attempts continued, but there were far fewer of them. Over a couple of months, theft on the entire pipeline was brought to, and maintained at, zero. Essentially, getting caught stealing oil involved sufficient consequences to concern these thieves, and the chance of getting caught was perceived to be very high. Together, these issues made theft an unattractively expensive activity.
So, technology, along with a determined and ethical attitude, can control these things. It isn’t even all that hard once it is productively organized and initiated. Safety is enhanced. Profitability is enhanced, security along the pipeline is enhanced, and business strength in the region improves – all good things for a successful and organized society.
Uncontrolled losses from pipelines – whether from accidents, equipment failures, or theft – are not a benign irritation. They can dramatically affect profitable operations and the sustained interest of investors. They can damage livestock and crops. They can initiate unimaginably intense fires and explosions that destroy lives, homes, and businesses along the pipeline. Surrounding businesses, such as fishing and agriculture, can be profoundly affected. Sometimes the damage is truly accidental; sometimes it is the result of poor design or changing operating conditions. Sometimes it results from inadequate maintenance practices such as corrosion control. In any case, the operator’s future may be improved or enhanced by responsible operation and aggressive mitigation. The public may be willing to excuse accidents but will often want to punish whatever they perceive as negligence.
Did you miss the other blogs in this series? Click these links to read the posts:
How to Optimize Pipeline Leak Detection: Focus on Design, Equipment and Insightful Operating Practices
What You Can Learn About Pipeline Leaks From Government Statistics
Is Theft the New Frontier for Process Control Equipment?
The two most common categories of process responses in industrial manufacturing processes are self-regulating and integrating. A self-regulating process response to a step input change is characterized by a change of the process variable, which moves to and stabilizes (or self-regulates) at a new value. An integrating process response to a step input change is characterized by a change in the slope of the process variable. From the standpoint of a proportional, integral, derivative (PID) process controller, the output of the PID controller is an input to the process.
The output of the process, the process variable (PV), is the input to the PID controller. Figure 1 compares the response of the process variable to a step change of the PID controller output for a self-regulating process and for an integrating response.
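The difference between the two response types can be illustrated with a minimal discrete-time simulation; the parameter values here are illustrative, not taken from the article:

```python
def simulate(kind, kp=1.0, tau=20.0, dt=0.1, n_steps=1000, step=10.0):
    """Response of the process variable (PV) to a step change of the
    controller output at t=0 (output jumps from 0 to `step` percent)."""
    pv, history = 0.0, []
    for _ in range(n_steps):
        if kind == "self-regulating":
            # first-order lag: PV moves toward and settles at kp * step
            pv += dt * (kp * step - pv) / tau
        else:
            # integrating: PV ramps at a constant slope, never settling
            pv += dt * kp * step / tau
        history.append(pv)
    return history

self_reg = simulate("self-regulating")   # settles near kp * step = 10
integ = simulate("integrating")          # keeps climbing (50 after 100 s)
```

The self-regulating trace flattens out at a new value, while the integrating trace changes slope and ramps indefinitely, mirroring the step responses in figure 1.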
Self-regulating responses are very common in the process industry. Many flows, liquid pressures, temperatures, and composition processes are self-regulating. In the first blog post in this series, I presented techniques for tuning a PID controller used on an integrating process. In this post, I will present a method to tune PID controllers on self-regulating processes.
Regardless of the tuning of the PID controller, the control performance is limited by the performance of the instrumentation and final control element. Before tuning a controller, it is helpful to have an understanding of the process and to verify the performance of the instrumentation and final control element, usually a control valve. The control valve should have a small deadband and resolution—another topic of discussion! It should have an appropriate and consistent flow gain. It should have a response time that is appropriate for the process performance requirements. ANSI/ISA-75.25 and the EnTech Control Valve Dynamic Specification V3.0 are excellent sources of information on this topic. Also, the control scheme should be reviewed to make sure it is an appropriate, linear, control scheme for the application. Finally, the interaction of the control loop to be tuned with other control loops should be reviewed and understood. The desired “aggressiveness” of the loop tuning should be based on the interaction of the control loop with other loops and the consequences of movement of the controller output.
Tuning for a self-regulating process
A tuning methodology called lambda tuning addresses these challenges. The lambda tuning method allows the user to choose the closed loop response time, called lambda, and calculate the corresponding tuning. The lambda closed loop response time is chosen to achieve the desired process goals and stability criteria. This could result in choosing a small lambda for good load regulation, a large lambda to minimize changes in the controller output and manipulated variable by allowing the PV to deviate from the set point, or somewhere in between these two extremes. More importantly, the lambda of the loop can be used to coordinate the responses of many loops to reduce interaction and variability.
Lambda tuning for self-regulating processes can result in a closed loop response that is slower or faster than the open loop response time of the process. Though lambda is defined as the closed loop time constant of the process response to a step change of the controller set point, the load regulation capability is also a function of the lambda of the loop. The response to a step set point change and a step load change for a self-regulating process response with lambda tuning is shown in figure 2.
Self-regulating process responses typically include dead time and can usually be approximated by a “first-order” or “second-order” response. This article describes the lambda tuning procedure when the process response can be approximated by a first-order-plus-dead-time response. The lambda tuning for a second-order-plus-dead-time response will be covered in future articles.
The lambda tuning method for self-regulating processes involves three steps:
- Identify the process dynamics.
- Choose the desired closed loop speed of response, lambda.
- Calculate the required PID tuning constants.
Figure 3 shows the dynamic parameters of a self-regulating, “first-order-plus-dead-time” process, which include dead time (Td), in units of time; time constant (tau), in units of time; and the process gain (Kp), in units of percent controller PV span/percent controller output span. Typically several step tests are performed; the results are reviewed for consistency; and the average process dynamics are calculated and used for the tuning parameter calculations. If the controller output goes directly to a control valve, any significant deadband in the valve will reduce process gain if the output step was a reversal in direction. If the controller output cascades to the set point of a “slave” loop, the slave loop should be tuned first.
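The identification step can be sketched in code using the common 63.2%-of-final-value method on a recorded step response; the synthetic test data and thresholds below are illustrative assumptions, not the article's figures:

```python
import math

def identify_foptd(times, pv, out_step):
    """Estimate first-order-plus-dead-time parameters from a step test.

    times, pv -- recorded response (controller output stepped at t=0)
    out_step  -- size of the output step (% of controller output span)
    Returns (process gain Kp, dead time Td, time constant tau).
    """
    pv0, change = pv[0], pv[-1] - pv[0]
    kp = change / out_step                       # process gain, %PV/%OUT
    # dead time: first instant the PV has moved noticeably (2% of change)
    t_dead = next(t for t, y in zip(times, pv)
                  if abs(y - pv0) > 0.02 * abs(change))
    # time constant: time from end of dead time to 63.2% of the change
    t_63 = next(t for t, y in zip(times, pv)
                if abs(y - pv0) >= 0.632 * abs(change))
    return kp, t_dead, t_63 - t_dead

# Synthetic step test: true Kp = 1.5, Td = 5 s, tau = 20 s, 10% output step
times = [0.1 * i for i in range(1201)]
pv = [0.0 if t < 5 else 15.0 * (1 - math.exp(-(t - 5) / 20)) for t in times]
kp, td, tau = identify_foptd(times, pv, out_step=10.0)
```

On real data, several such tests would be run and the resulting parameters averaged, as the text describes.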
The next step is to choose the lambda to achieve the desired process control goal for the loop—the allowable stability margin and the expected changes in process dynamics. A shorter lambda produces more aggressive tuning and less stability margin. A longer lambda produces less aggressive tuning and more stability margin. It is not uncommon for the process dynamics, particularly the process gain, to vary by a factor of 0.5 to 2. If testing during different conditions reveals that the process dynamics change significantly, then an additional margin of stability is required. Or, the process response can be “linearized” or adaptive tuning can be used.
If the potential change in process dynamics is unknown, starting with lambda equal to three times the larger of the dead time or time constant will provide stability even if the dead time doubles and the process gain doubles. If it is desirable to coordinate the response of loops to avoid significant interaction, the lambda of the interacting loops can be chosen to differ by a factor of three or more. For cascade loops, the lambda can be chosen to ensure the slave loop of the cascade pair has a lambda 1/5 or less of the master control loop.
The lowest recommended lambda for a first-order-plus-dead-time self-regulating process is equal to the dead time, although this provides a very low gain and phase margin. Thus, a small increase in the dead time or process gain can cause instability of the loop.
From a stability standpoint, there is no upper limit on the lambda. If the lambda is not chosen based on a coordinated response, a good starting point for stability is lambda = 3 × (the larger of the dead time or time constant).
The tuning performance can be monitored for a time period and adjusted to be a shorter or longer lambda as needed.
The final step is to calculate the tuning parameters from the process dynamics. Care should be taken to use consistent units of time for the dead time and the lambda. For a first-order-plus-dead-time process response (no significant lag or lead), the controller gain and reset time are calculated as Kc = tau / (Kp × (lambda + Td)) and Ti = tau. The derivative time is set to 0. These equations are valid for the standard (sometimes called ideal, noninteractive) and series (sometimes called classical, interactive) forms of the PID implementation. Note that only the controller gain changes as lambda (λ) changes. The integral time remains equal to the time constant regardless of the lambda chosen.
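A sketch of these tuning rules in Python, using the commonly published lambda tuning formulas for a first-order-plus-dead-time process (Kc = tau / (Kp × (lambda + Td)), Ti = tau, derivative = 0):

```python
def lambda_tuning(kp, tau, td, lam):
    """Lambda tuning for a first-order-plus-dead-time process response.

    kp  -- process gain (%PV span / %OUT span)
    tau -- process time constant
    td  -- process dead time (same time units as tau and lam)
    lam -- chosen closed loop time constant, lambda
    Returns (controller gain Kc, integral time Ti, derivative time).
    """
    kc = tau / (kp * (lam + td))   # only the gain changes with lambda
    ti = tau                       # integral time equals the time constant
    return kc, ti, 0.0
```

A shorter lambda raises the controller gain (more aggressive tuning); the integral and derivative terms are unaffected.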
Consider the steam pressure controller shown in figure 4. The pressure controller, PIC-101, manipulates a properly sized control valve that has a high-performance digital positioner.
Figure 5 shows a step test of the pressure controller to identify the process dynamics. The process gain is %PV/%OUT; the dead time is 5 seconds; and the time constant is 20 seconds.
Because there are no “loop response coordination” requirements, the initial lambda is chosen to be 3 * (larger of dead time or time constant) = 3*20 seconds = 60 seconds.
Now, the tuning can be calculated with the lambda tuning rules.
In preparation for being able to make the tuning more aggressive if the control loop is consistent over the required operating range, the tuning can be calculated for shorter values of lambda. The following table shows the tuning for different values of lambda. Note that the integral time remains the same for all choices of lambda.
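Such a table can be generated programmatically. Since the example's process gain value is not given in the text, the sketch below assumes a gain of 1.0 %/% for illustration, with the example's Td = 5 s and tau = 20 s; note that only the controller gain changes as lambda shrinks:

```python
def lambda_tuning(kp, tau, td, lam):
    """Lambda tuning rule: Kc = tau / (kp * (lambda + td)); Ti = tau."""
    return tau / (kp * (lam + td)), tau

kp, tau, td = 1.0, 20.0, 5.0   # kp = 1.0 is an assumed, illustrative gain
for lam in (60.0, 40.0, 20.0, 10.0):
    kc, ti = lambda_tuning(kp, tau, td, lam)
    print(f"lambda = {lam:4.0f} s   Kc = {kc:.2f}   Ti = {ti:.0f} s")
```

Each halving of lambda roughly doubles the controller gain while Ti stays fixed at the 20-second time constant.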
Figure 6 shows the response to a step set point and a step load change for each of the lambda values in the table. Note that the tuning is stable for much shorter lambda values than the starting point of 3 * (larger of dead time or time constant). However, this is with perfectly constant process dynamics in a simulator. Additional tests on a real process, at different operating conditions, will help determine the consistency of the process dynamics.
Meeting process goals
Most published PID controller tuning methods are designed for optimum load regulation, not necessarily optimum process performance. The lambda tuning method provides the ability to tune the PID controller to achieve process performance goals, whether they are maximum load regulation or a coordinated response to other loops. Note that the lambda tuning method for integrating processes can also be used for a lag dominant, self-regulating process to achieve excellent load regulation. This technique and tuning for more complex dynamics will be covered in a future article in this series.
Click this link to read the first blog post in this loop tuning series.
A version of this article originally was published at InTech magazine.