How Next-Generation Gas Chromatography Improves Quality and Reduces Costs

This guest blog post was written by Bonnie Crossland, Rosemount product marketing manager for gas chromatographs at Emerson Process Management.

Glass manufacturing is one of the most energy-intensive industries, with energy costs representing roughly 14 percent of the total production costs. The bulk of energy consumed comes from natural gas combustion for heating furnaces to melt raw materials, which are then transformed into glass. Additionally, glass manufacturing is sensitive to the combustion processes, which can affect the quality of the glass and shorten the lifespan of the melting tanks if not managed properly. Historically, the composition of natural gas has been relatively stable. However, dramatic changes in the supply of natural gas (including shale gas and liquefied natural gas imports) are causing end users to experience rapid and pronounced fluctuations in gas quality.

When the composition of the incoming gas changes, the air/fuel ratio can be adjusted to keep the furnace operating at peak efficiency. This can significantly reduce energy consumption and deliver substantial savings in product quality and equipment life. Optimizing furnace efficiency, however, has traditionally been complex and costly. Next-generation gas chromatography is changing that paradigm, providing a cost-effective, task-focused methodology that can be carried out by less technically proficient personnel than were traditionally required.


Two unique fuel gas compositions can have the same energy content but behave very differently in the burner. Different amounts of diluents (nitrogen and carbon dioxide) and different ratios of hydrocarbons produce different densities, and thus different velocities through the burner restrictors. The Wobbe Index, the ratio of the energy value to the square root of the specific gravity (Wobbe Index = energy value/√specific gravity), indicates how the fuel will behave through a burner and provides a better variable for controlling the air/fuel ratio.
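As a quick sketch of this relationship, the Wobbe Index can be computed from the energy value and specific gravity (the gas values below are illustrative assumptions, not figures from the article):

```python
import math

def wobbe_index(energy_value, specific_gravity):
    """Wobbe Index = energy value / sqrt(specific gravity)."""
    return energy_value / math.sqrt(specific_gravity)

# Two gases with the same energy content but different densities
# behave differently at the burner (values are illustrative).
lean_gas = wobbe_index(1050.0, 0.58)    # Btu/scf, lighter gas -> ~1378.7
heavy_gas = wobbe_index(1050.0, 0.70)   # same Btu/scf, heavier gas -> ~1255.0
```

Two fuels with identical heating value thus differ by nearly 10 percent in Wobbe Index here, which is why the Wobbe Index, not energy content alone, is the better control variable.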

Gas chromatographs are used throughout the natural gas chain of custody (from wellhead to burner tip) to determine the gas composition for quality monitoring and energy content. For pipeline-quality natural gas, the industry standard is the C6+ measurement method. This method determines the individual concentrations of the hydrocarbons from methane through normal pentane, plus nitrogen and carbon dioxide, and lumps the heavier hydrocarbons (e.g., hexane, heptane, octane) together as a single C6+ component. From the composition, the energy content, specific gravity, Wobbe Index, and other physical properties are determined using calculations from international standards such as ISO 6976, GPA 2172, and AGA 8. Using the Wobbe Index and a gas chromatograph to determine the gas composition gives insight into fuel quality variations from the gas supplier. Additionally, the C6+ measurement is the standard on which custody transfer billing is based, and therefore is a direct method of ensuring that the energy used matches the bill from the gas supplier.
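A simplified sketch of how bulk properties fall out of a compositional analysis follows; the per-component constants are approximate, and the mole-fraction-weighted sums stand in for the rigorous ISO 6976/GPA 2172 procedures:

```python
import math

# Approximate gross heating value (Btu/scf) and specific gravity (air = 1)
# per component; real custody-transfer work uses ISO 6976 / GPA 2172 values.
COMPONENT_DATA = {
    "methane":  (1010.0, 0.554),
    "ethane":   (1770.0, 1.038),
    "propane":  (2516.0, 1.523),
    "nitrogen": (0.0,    0.967),
    "co2":      (0.0,    1.520),
}

def gas_properties(mole_fractions):
    """Mole-fraction-weighted heating value, specific gravity, and Wobbe Index."""
    hv = sum(f * COMPONENT_DATA[c][0] for c, f in mole_fractions.items())
    sg = sum(f * COMPONENT_DATA[c][1] for c, f in mole_fractions.items())
    return hv, sg, hv / math.sqrt(sg)

composition = {"methane": 0.94, "ethane": 0.03, "propane": 0.01,
               "nitrogen": 0.01, "co2": 0.01}
hv, sg, wobbe = gas_properties(composition)
```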

Optimizing the air/fuel ratio

The value of optimizing the air/fuel ratio cannot be overstated. The energy value (British thermal unit [Btu] or calorific value) or Wobbe Index is output from the gas chromatograph via either Modbus or an analog 4–20 mA signal. This signal can be used to integrate with the plant process control system to trim the air/fuel ratio and ensure maximum production efficiency (figure 1). When the air/fuel mixing proportion is correct (stoichiometric), all the fuel will be consumed during the combustion process and will burn cleanly. This enables the furnace to operate at its most efficient, cost-effective point. Changes in the composition of the fuel gas will cause changes in:

  • the physical properties of the gas
  • the minimum air requirements needed to achieve stoichiometric combustion
  • the flue gas composition
  • flame speed and flame position
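A minimal sketch of using the GC's analog output for trim follows; the 4-20 mA scaling range and the simple proportional trim rule are assumptions for illustration, not details from the article:

```python
def ma_to_value(ma, lo, hi):
    """Scale a 4-20 mA analog signal to engineering units (lo at 4 mA, hi at 20 mA)."""
    return lo + (ma - 4.0) * (hi - lo) / 16.0

def trimmed_air_fuel_ratio(wobbe, wobbe_ref, af_ref):
    """Trim the air/fuel ratio in proportion to the Wobbe Index deviation,
    since the burner's air demand scales roughly with the Wobbe Index."""
    return af_ref * (wobbe / wobbe_ref)

# GC transmits the Wobbe Index over 4-20 mA, scaled 1100-1500 (assumed range).
wobbe = ma_to_value(12.0, 1100.0, 1500.0)             # mid-scale -> 1300.0
new_ratio = trimmed_air_fuel_ratio(wobbe, 1350.0, 9.5)
```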

Because glass quality is sensitive to the combustion processes, failing to respond to variations in the composition of the natural gas can result in losing an entire production run to poor glass quality.

A major glass company in the southeastern U.S. is a heavy user of natural gas. However, the gas comes from multiple locations, causing a constant fluctuation of the Btu value. Because gas flow is adjusted based on the Btu value, knowing the precise measurement is essential. In addition, because the density of the gas varies, knowing the Wobbe Index is critical to quality. When the company began employing a gas chromatograph to optimize its fuel quality, it found the traditional intricacies of gas chromatographs inappropriate for its application. Despite repeated training, its staff was unable to calibrate the instrument. New gas chromatograph technologies designed specifically for natural gas optimization significantly reduced the complexity of operation. In new designs, all of the complex analytical functions of the gas chromatograph may be contained in a replaceable module, greatly simplifying maintenance. Features like auto-calibration make operation easier and more accurate, even for novice users.

Reduced need for specially trained gas chromatograph technicians

At a glass manufacturer in the U.K., poor fuel gas energy measurement led to inadequate air/fuel control, higher energy costs, and a reduction in quality of the finished product. In addition, the company needed compositional data for the calculation of carbon emissions factors, and it lacked a workforce skilled in chromatography. By using new natural gas chromatography technology, it optimized the stoichiometric ratio for stable flame heat, maximized the lifetime of the melting tank, reduced energy costs, and participated in the EU Emission Trading System program. All of this was accomplished without the need for specially trained gas chromatograph technicians.

New gas chromatograph technologies also save costs through features such as reduced calibration gas consumption. Self-diagnostics mean users can rely on the instruments to signal the need for maintenance, while step-by-step on-screen instructions walk technicians through any required procedures. The sample handling system includes both particulate and liquid filters and incorporates fixed flow restrictors, removing the need for operators to constantly monitor and adjust the sample system.

Many companies in a wide range of industries that face inconsistent natural gas quality may not have considered gas chromatography a viable solution for balancing the air/fuel ratio, given the traditional complexities of the measurement. It is time to look again. New developments in gas chromatography technology may make this approach the first choice for improving energy efficiency, and ultimately, process quality.

About the Author

Bonnie Crossland is Rosemount product marketing manager for gas chromatographs at Emerson Process Management.

A version of this article originally was published at InTech magazine.
Image source: Wikipedia

How to Tune PID Controllers on Self-Regulating Processes

This guest blog post was written by James Beall, a principal process control consultant at Emerson Process Management with 34 years of experience in process control. Beall is a member of AIChE and ISA, and chair of ISA committee ISA75.25, Control Valve Dynamic Testing. Click this link to read the first blog post in this loop tuning series.

The two most common categories of process responses in industrial manufacturing processes are self-regulating and integrating. A self-regulating process response to a step input change is characterized by a change of the process variable, which moves to and stabilizes (or self-regulates) at a new value. An integrating process response to a step input change is characterized by a change in the slope of the process variable. From the standpoint of a proportional, integral, derivative (PID) process controller, the output of the PID controller is an input to the process.

The output of the process, the process variable (PV), is the input to the PID controller. Figure 1 compares the response of the process variable to a step change of the PID controller output for a self-regulating process and for an integrating response.

Figure 1. Response of the PV to a step change of the controller output for a self-regulating and an integrating process.

Self-regulating responses are very common in the process industry. Many flows, liquid pressures, temperatures, and composition processes are self-regulating. In the first blog post in this series, I presented techniques for tuning a PID controller used on an integrating process. In this post, I will present a method to tune PID controllers on self-regulating processes.

Challenges

Regardless of the tuning of the PID controller, the control performance is limited by the performance of the instrumentation and final control element. Before tuning a controller, it is helpful to have an understanding of the process and to verify the performance of the instrumentation and final control element, usually a control valve. The control valve should have a small deadband and resolution—another topic of discussion! It should have an appropriate and consistent flow gain. It should have a response time that is appropriate for the process performance requirements. ANSI/ISA-75.25 and the EnTech Control Valve Dynamic Specification V3.0 are excellent sources of information on this topic. Also, the control scheme should be reviewed to make sure it is an appropriate, linear, control scheme for the application. Finally, the interaction of the control loop to be tuned with other control loops should be reviewed and understood. The desired “aggressiveness” of the loop tuning should be based on the interaction of the control loop with other loops and the consequences of movement of the controller output.

Tuning for a self-regulating process

A tuning methodology called lambda tuning addresses these challenges. The lambda tuning method allows the user to choose the closed loop response time, called lambda, and calculate the corresponding tuning. The lambda closed loop response time is chosen to achieve the desired process goals and stability criteria. This could result in choosing a small lambda for good load regulation, a large lambda to minimize changes in the controller output and manipulated variable by allowing the PV to deviate from the set point, or somewhere in between these two extremes. More importantly, the lambda of the loop can be used to coordinate the responses of many loops to reduce interaction and variability.

Lambda tuning for self-regulating processes can result in a closed loop response that is slower or faster than the open loop response time of the process. Though lambda is defined as the closed loop time constant of the process response to a step change of the controller set point, the load regulation capability is also a function of the lambda of the loop. The response to a step set point change and a step load change for a self-regulating process response with lambda tuning is shown in figure 2.

Figure 2. Response of lambda tuning for a self-regulating process for a step set point and a step load step change.

Self-regulating process responses typically include dead time and can usually be approximated by a “first-order” or “second-order” response. This article describes the lambda tuning procedure when the process response can be approximated by a first-order-plus-dead-time response. The lambda tuning for a second-order-plus-dead-time response will be covered in future articles.

Procedure

The lambda tuning method for self-regulating processes involves three steps:

  1. Identify the process dynamics.
  2. Choose the desired closed loop speed of response, lambda.
  3. Calculate the required PID tuning constants.

Figure 3 shows the dynamic parameters of a self-regulating, "first-order-plus-dead-time" process: dead time (Td), in units of time; time constant (tau), in units of time; and process gain (Kp), in units of percent controller PV span per percent controller output span. Typically, several step tests are performed, the results are reviewed for consistency, and the average process dynamics are used for the tuning parameter calculations. If the controller output goes directly to a control valve, any significant deadband in the valve will reduce the apparent process gain when the output step reverses direction. If the controller output cascades to the set point of a "slave" loop, the slave loop should be tuned first.

Figure 3. Open loop process dynamics of a first-order, self-regulating process include dead time, the time constant, and process gain. T98 is the time required for the process to reach 98 percent of its final value.

The next step is to choose the lambda to achieve the desired process control goal for the loop—the allowable stability margin and the expected changes in process dynamics. A shorter lambda produces more aggressive tuning and less stability margin. A longer lambda produces less aggressive tuning and more stability margin. It is not uncommon for the process dynamics, particularly the process gain, to vary by a factor of 0.5 to 2. If testing during different conditions reveals that the process dynamics change significantly, then an additional margin of stability is required. Or, the process response can be “linearized” or adaptive tuning can be used.

If the potential change in process dynamics is unknown, starting with lambda equal to three times the larger of the dead time or time constant will provide stability even if the dead time doubles and the process gain doubles. If it is desirable to coordinate the response of loops to avoid significant interaction, the lambda of the interacting loops can be chosen to differ by a factor of three or more. For cascade loops, the lambda can be chosen to ensure the slave loop of the cascade pair has a lambda 1/5 or less of the master control loop.

The lowest recommended lambda for a first-order-plus-dead-time self-regulating process is equal to the dead time, although this provides a very low gain and phase margin. At that setting, even a small increase in the dead time or process gain can cause instability of the loop.

From a stability standpoint, there is no upper limit on the lambda. If the lambda is not chosen based on a coordinated response, a good starting point for stability is three times the larger of the dead time or the time constant: λ = 3 × max(Td, τ).

The tuning performance can be monitored for a time period and adjusted to be a shorter or longer lambda as needed.

The final step is to calculate the tuning parameters from the process dynamics. Care should be taken to use consistent units of time for the dead time and the lambda. For a first-order-plus-dead-time process response (no significant lag or lead), the controller gain and reset time are calculated with the following equations, and the derivative time is set to 0. These equations are valid for the standard (sometimes called ideal or noninteractive) and series (sometimes called classical or interactive) forms of the PID implementation. Note that only the controller gain changes as lambda (λ) changes; the integral time remains equal to the time constant regardless of the lambda chosen.
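Stated explicitly, the commonly published lambda tuning relations for a first-order-plus-dead-time process, consistent with the notes above, are:

Controller gain: Kc = τ / [Kp × (λ + Td)]
Integral (reset) time: Ti = τ
Derivative time: TD = 0

where Kp is the process gain, τ the time constant, Td the dead time, and λ the chosen closed loop response time.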

Example

Consider the steam pressure controller shown in figure 4. The pressure controller, PIC-101, manipulates a properly sized control valve that has a high-performance digital positioner.

Figure 4. Process and control diagram for a reboiler shell steam pressure control.

Figure 5 shows a step test of the pressure controller to identify the process dynamics. The process gain is %PV/%OUT; the dead time is 5 seconds; and the time constant is 20 seconds.

Figure 5. Open loop step test and analysis of one step response.

Because there are no “loop response coordination” requirements, the initial lambda is chosen to be 3 * (larger of dead time or time constant) = 3*20 seconds = 60 seconds.

Now, the tuning can be calculated with the lambda tuning rules.
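A sketch of the calculation follows; the article's process gain value is not shown here, so Kp = 1.0 %/% is assumed for illustration:

```python
def lambda_tuning(kp, tau, td, lam):
    """Lambda tuning for a first-order-plus-dead-time process:
    Kc = tau / (Kp * (lambda + Td)), Ti = tau, derivative time = 0."""
    return tau / (kp * (lam + td)), tau

# Steam pressure loop: dead time 5 s, time constant 20 s, starting lambda 60 s.
# Kp = 1.0 %/% is an assumed value for illustration.
for lam in (60.0, 40.0, 20.0, 10.0):
    kc, ti = lambda_tuning(kp=1.0, tau=20.0, td=5.0, lam=lam)
    print(f"lambda = {lam:>4} s  Kc = {kc:.3f}  Ti = {ti} s")
```

Note that Ti stays at 20 seconds for every lambda; only the controller gain changes.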


In preparation for being able to make the tuning more aggressive if the control loop is consistent over the required operating range, the tuning can be calculated for shorter values of lambda. The following table shows the tuning for different values of lambda. Note that the integral time remains the same for all choices of lambda.

Figure 6 shows the response to a step set point and a step load change for each of the lambda values in the table. Note that the tuning is stable for much shorter lambda values than the starting point of 3 × (larger of dead time or time constant). However, this is with perfectly constant process dynamics in a simulator. Additional tests on a real process, at different operating conditions, will help determine the consistency of the process dynamics.

Figure 6. Response of self-regulating process for a step set point and step load change with different lambda values.

Meeting process goals

Most published PID controller tuning methods are designed for optimum load regulation, not necessarily optimum process performance. The lambda tuning method provides the ability to tune the PID controller to achieve process performance goals, whether they are maximum load regulation or a coordinated response to other loops. Note that the lambda tuning method for integrating processes can also be used for a lag dominant, self-regulating process to achieve excellent load regulation. This technique and tuning for more complex dynamics will be covered in a future article in this series.

Click this link to read the first blog post in this loop tuning series.

About the Author
James Beall is a principal process control consultant at Emerson Process Management with more than 34 years of experience in process control. He graduated from Texas A&M University with a BS in electrical engineering and worked for Eastman Chemical Company until 2001, when he joined Emerson. Beall's areas of expertise include process instrumentation, control valve performance, control strategy analysis and design, advanced regulatory control, and multivariable model predictive control. He has designed and implemented process control improvement projects in the chemical, refinery, pulp and paper, power, pipeline, gas and oil, and pharmaceutical industries. Beall is a member of AIChE and ISA, and chair of ISA committee ISA75.25, Control Valve Dynamic Testing. He is a contributing author to the Process/Industrial Instruments and Control Handbook, 5th Edition.

A version of this article originally was published at InTech magazine.

Powering the Next Generation of HART-Enabled Devices

This guest blog post was written by Sol Jacobs, vice president and general manager of Tadiran Batteries, who has more than 30 years of experience in developing solutions for powering remote devices. His educational background includes a bachelor's degree in engineering and an MBA.

While continually evolving, the HART communications protocol remains strong after 30 years, with approximately 30 million HART-enabled devices installed and in service worldwide. The HART protocol remains the industry standard for applications ranging from process control to asset management and safety systems, machine-to-machine communications, and other supervisory control and data acquisition applications.

The Highway Addressable Remote Transducer (HART) protocol employs Bell 202 frequency shift keying (the same standard found in analog phone caller-ID technology) to superimpose digital signals on top of 4–20 mA analog signals, with the two channels working in tandem to provide a low-cost field communications solution that is easy to use and configure.
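The Bell 202 scheme maps logic 1 to a 1200 Hz tone and logic 0 to a 2200 Hz tone at 1200 bits per second; the phase-continuous generator below is an illustrative sketch, not HART stack code:

```python
import math

MARK_HZ, SPACE_HZ = 1200.0, 2200.0     # Bell 202: logic 1 / logic 0 tones
BAUD, SAMPLE_RATE = 1200.0, 48000.0

def fsk_modulate(bits):
    """Phase-continuous FSK waveform for a bit sequence (unit amplitude)."""
    samples, phase = [], 0.0
    samples_per_bit = int(SAMPLE_RATE / BAUD)   # 40 samples per bit here
    for bit in bits:
        freq = MARK_HZ if bit else SPACE_HZ
        for _ in range(samples_per_bit):
            phase += 2.0 * math.pi * freq / SAMPLE_RATE
            samples.append(math.sin(phase))
    return samples

waveform = fsk_modulate([1, 0, 1, 1])   # 4 bits -> 160 samples
```

In a real transmitter, this small AC waveform is superimposed on the 4-20 mA loop current, so the digital channel rides on the analog measurement without disturbing it.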

Traditional HART connectivity requires hardwiring, which is highly restrictive. Experts believe that nearly 85 percent of all installed HART-enabled devices are not currently connected. The main obstacle is expense; it costs $100 or more per foot to create a hardwired connection. This limitation becomes even more problematic for remote, environmentally sensitive locations, where logistical, regulatory, and permitting requirements create added layers of expense and complexity.

Recognizing that industrial automation could not be held back by proximity to analog wiring, the HART-IP protocol was developed, enabling IP-based networks to communicate via Wi-Fi (IEEE 802.11) or Ethernet (IEEE 802.3).

The development of HART-IP led to low-power communications protocols, such as WirelessHART and ZigBee, that use IEEE 802.15.4-approved radio signals to deliver high reliability in challenging environments. The WirelessHART protocol has created a huge opportunity for wireless, battery-operated sensors to seamlessly integrate with other intelligent HART devices to play an integral role in the emerging Industrial Internet of Things (IIoT). This is a critical step toward a future where “big data” analytics will increasingly manage transportation infrastructure, energy production, environmental monitoring, manufacturing, distribution, health care, and smart buildings. The WirelessHART protocol has enabled the rapid development of wireless mesh networks that combine multiple low-power sensors to form redundant, self-healing networks.

The ideal power supply

A remote wireless device is only as reliable as its power supply, which needs to be optimized for application-specific requirements. The vast majority of remote wireless devices that require long operating life are powered by primary (nonrechargeable) lithium batteries. However, certain applications may be suited for energy-harvesting devices used in conjunction with rechargeable lithium-ion (Li-ion) batteries that store the harvested energy.

Generally speaking, the more remote the application, the greater the need for an industrial-grade lithium battery. For example, inexpensive consumer-grade alkaline batteries can suffice in certain instances, especially for easily accessible devices that operate within a moderate temperature range (i.e., flashlights, television remote controllers, and toys). However, alkaline batteries are not well suited to long-term industrial applications due to inherent limitations, including low voltage (1.5 V or lower), a limited temperature range (0°C to 60°C), a high self-discharge rate that reduces life expectancy to two to three years, and crimped seals that may leak.

The low initial cost of a consumer-grade battery can also be highly misleading, as the cost of labor to replace a consumer-grade battery typically far exceeds that of the battery itself. For example, consider what it takes to replace batteries in a seismic monitoring system sitting on the ocean floor or in a stress sensor attached to a bridge abutment.

To judge whether a short-lived consumer-grade alkaline battery is a worthy investment, you must calculate the lifetime cost of the power supply. To be accurate, the calculation has to properly account for the cost of all labor and materials associated with future battery replacements.
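That lifetime cost comparison can be sketched as follows; all dollar figures and service lives below are assumed for illustration:

```python
def lifetime_power_cost(battery_cost, replacement_labor,
                        service_life_years, deployment_years):
    """Total cost of ownership: the initial battery plus a full
    replacement (battery + labor) each time the service life runs out."""
    replacements = max(0, -(-deployment_years // service_life_years) - 1)
    return battery_cost + replacements * (battery_cost + replacement_labor)

# Assumed figures: a $5 alkaline cell lasting 2 years vs. an $80 industrial
# lithium cell lasting the full 20-year deployment; $500 labor per visit.
alkaline = lifetime_power_cost(5, 500, 2, 20)     # 9 replacements -> $4,550
lithium = lifetime_power_cost(80, 500, 20, 20)    # no replacements -> $80
```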

When specifying an industrial-grade lithium battery, you need to consider numerous factors, including energy consumed in active mode (including the size, duration, and frequency of pulses); energy consumed in dormant mode (the base current); storage time (as normal self-discharge during storage diminishes capacity); thermal environments (including storage and in-field operation); equipment cut-off voltage (as battery capacity is exhausted, or in extreme temperatures, voltage can drop to a point too low for the sensor to operate); battery self-discharge rate (which can be higher than the current draw from average sensor use); and cost considerations. Industrial-grade lithium batteries are commonly specified when the following performance features are required:

  • Reliability: The remote sensor is deployed in a hard-to-reach location where battery replacement is difficult or impossible, and data links cannot be interrupted by bad batteries.
  • Long operating life: The self-discharge rate of the battery can be more than the device usage of the battery, so initial battery capacity must be as high as possible.
  • Wide operating temperatures: A wide range is especially critical for extremely hot or cold environments.
  • Small size: When a small form factor is required, the battery’s energy density needs to be as high as possible.
  • Voltage: Higher voltage enables fewer cells to be required.
  • Lifetime costs: Replacement costs over time must be taken into account.

Trade-offs are inevitable, so you need to prioritize your list of desired performance attributes.

Choosing among primary lithium batteries

Lithium battery chemistry is preferred for long-term deployments because its intrinsic negative potential exceeds that of all other metals. Lithium is also the lightest nongaseous metal and has the highest specific energy (energy per unit weight) and energy density (energy per unit volume) of all available battery chemistries. Lithium cells, all of which use a nonaqueous electrolyte, have a normal operating voltage of between 2.7 V and 3.6 V. The absence of water allows lithium batteries to endure more extreme temperatures. Numerous primary lithium chemistries are available (table 1), including lithium iron disulfide (LiFeS2), lithium manganese dioxide (LiMnO2), and lithium thionyl chloride (LiSOCl2).

Table 1. Numerous primary lithium chemistries are available.

Consumer-grade lithium iron disulfide (LiFeS2) cells are relatively inexpensive, and deliver the high pulses required to power a camera flash. These batteries have limitations, including a narrow temperature range of -20°C to 60°C, a high annual self-discharge rate, and crimped seals that may leak.

Lithium manganese dioxide (LiMnO2) cells, including the popular CR123A, provide a space-saving solution for cameras and toys, as a single 3-volt LiMnO2 cell can replace two 1.5-volt alkaline cells. LiMnO2 batteries can deliver moderate pulses, but suffer from low initial voltage, a narrow temperature range, a high self-discharge rate, and crimped seals.

Bobbin-type lithium thionyl chloride (LiSOCl2) batteries are particularly well suited for WirelessHART devices that draw low average daily current. Bobbin-type LiSOCl2 batteries offer the highest capacity and highest energy density of any lithium cell, along with an extremely low annual self-discharge rate—less than 1 percent per year—enabling certain low-power applications to operate without maintenance for up to 40 years. Bobbin-type LiSOCl2 batteries also deliver the widest possible temperature range (-80°C to 125°C) and have a glass-to-metal hermetic seal.

These unique attributes make bobbin-type LiSOCl2 batteries ideally suited for industrial applications, such as tank level monitoring and asset tracking, where remote sensors must endure extreme temperature cycling. A prime example is the medical cold chain, where wireless sensors are required to monitor the transport of frozen pharmaceuticals, tissue samples, and transplant organs at carefully controlled temperatures as low as -80°C. Certain bobbin-type LiSOCl2 batteries have been proven to operate successfully under prolonged test conditions at -100°C, which far exceeds the maximum temperature range of alkaline cells and consumer-grade lithium batteries.

Bobbin-type LiSOCl2 batteries are also used in virtually all meter transmitter units (MTUs) in advanced metering infrastructure/automatic meter reading (AMI/AMR) metering applications for water and gas utilities. These MTUs are often buried outside in underground pits and subjected to extreme temperatures. Extended battery life is essential to AMI/AMR metering applications, because any large-scale system-wide battery failure could create chaos by disrupting billing and customer service. To preempt this type of disruption, utility companies demand the use of bobbin-type LiSOCl2 batteries for their ability to operate for decades.

Battery operating life is largely influenced by the cell's annual energy usage, along with its annual self-discharge rate. For this reason, many devices that use the WirelessHART protocol are designed to conserve energy by operating on a very low current. To further extend battery life, these devices operate mainly in a "sleep" mode that draws little or no current, periodically querying for the presence of data and awakening only if certain preset data thresholds are exceeded. It is not uncommon for more energy to be lost through annual battery self-discharge than through actual battery use.
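The duty-cycle arithmetic can be sketched as follows (the current figures are assumptions for illustration); note how the self-discharge term rivals the rest of the budget:

```python
def average_current_ua(sleep_ua, active_ua, active_s_per_day, self_discharge_ua):
    """Average daily current draw (in microamps) of a duty-cycled sensor."""
    duty = active_s_per_day / 86400.0          # fraction of the day spent awake
    return sleep_ua * (1.0 - duty) + active_ua * duty + self_discharge_ua

# Assumed: 1 uA sleep current, 20 mA while transmitting for 10 s/day total,
# plus roughly 1 uA equivalent lost to battery self-discharge.
avg = average_current_ua(1.0, 20000.0, 10.0, 1.0)   # ~4.3 uA average
```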

When specifying a bobbin-type LiSOCl2 battery, be aware that battery operating life can vary significantly based on how the cell was manufactured and the quality of its raw materials. For example, the highest-quality bobbin-type LiSOCl2 cells can have a self-discharge rate as low as 0.7 percent annually, thus retaining nearly 70 percent of their original capacity after 40 years. By contrast, a lesser-quality bobbin-type LiSOCl2 cell can have an annual self-discharge rate as high as 3 percent, causing nearly 30 percent of available capacity to be lost every 10 years from annual self-discharge.
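Compounding the quoted self-discharge rates reproduces roughly the figures above (compounding is slightly more optimistic than linear arithmetic):

```python
def retained_capacity(annual_self_discharge, years):
    """Fraction of original capacity left after compounded annual self-discharge."""
    return (1.0 - annual_self_discharge) ** years

premium = retained_capacity(0.007, 40)  # ~0.755: about three-quarters left at 40 years
lesser = retained_capacity(0.03, 10)    # ~0.737: roughly a quarter lost per decade
```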

High pulse requirements

Standard bobbin-type LiSOCl2 cells are not designed to deliver high pulses, which can be overcome by combining a standard bobbin-type LiSOCl2 cell with a hybrid layer capacitor (HLC). The standard LiSOCl2 cell delivers the low background current needed to power the device during sleep mode, while the HLC works like a rechargeable battery to store and deliver the high pulses needed during data interrogation and transmission.

Alternatively, supercapacitors can be used to store high pulse energy in an electrostatic field. Supercapacitors are used in many consumer products, but are generally not recommended for industrial applications because of inherent performance limitations, including an inability to provide long-term power, linear discharge qualities that do not allow the use of all available energy, low capacity, low energy density, and high annual self-discharge rates (up to 60 percent per year). Supercapacitors linked in series also require cell-balancing circuits that draw additional current.

Opportunities for energy harvesting

A growing number of HART-IP connected devices are proving to be well suited for energy harvesting, with Li-ion rechargeable batteries being used to store the harvested energy. Several considerations go into the decision to deploy an energy-harvesting device, including the reliability of the device and its energy source, the expected operating life of the device, environmental parameters, size and weight restrictions, and the total cost of ownership. Photovoltaic cells are commonly used in HART-enabled applications. In certain situations, energy can also be harvested from equipment vibration or from radio frequency/electromagnetic signals.

Consumer-grade rechargeable Li-ion cells may be a sufficient solution if the device is easily accessible and needs to operate for no more than five years and 500 recharge cycles within a moderate temperature range (0°C to 40°C). However, if the wireless device will be used in a remote location or in extreme temperatures, then the application will likely require an industrial-grade Li-ion battery that can operate for up to 20 years and 5,000 full recharge cycles, with an expanded temperature range of -40°C to 85°C (table 2).

Table 2. For remote locations or extreme temperatures, industrial-grade lithium-ion batteries are usually required.

Another major advantage of an industrial-grade rechargeable Li-ion cell is its ability to deliver the high pulses (5 A for a AA-size cell) to support advanced, two-way communications. These ruggedly constructed cells also have a hermetic seal that is superior to the crimped seals on consumer-grade rechargeable batteries, which may leak.

Foundation of IIoT

The development of the HART-IP and WirelessHART communications protocols has created a growing need for battery-powered solutions that can operate without maintenance for decades and provide reliable, secure, and seamless interoperability between legacy technologies and the latest generation of wireless devices. These HART-enabled technologies form a critical foundation for the IIoT, which promises to revolutionize modern industrial automation.

Technology convergence and growing requirements for interoperability are currently being supported by the most recent bobbin-type LiSOCl2 batteries, including hybrid cells that can deliver the high pulses required for advanced, two-way communications. There is also growing demand for industrial-grade rechargeable lithium-ion batteries that offer a long-term power supply for energy-harvesting applications. Together, these advanced battery chemistries offer a wide range of reliable, long-term power design options for HART-connected devices.

About the Author
Sol Jacobs, vice president and general manager of Tadiran Batteries, has more than 30 years of experience in developing solutions for powering remote devices. His educational background includes a bachelor’s degree in engineering and an MBA.

A version of this article also was published at InTech magazine.

How to Optimize Application Engineering to Keep Projects on Time and on Budget


This article was written by Bill Lydon, chief editor at InTech magazine.


After attending an industry analyst meeting, I was struck by the number of presentations that discussed the cost and complications of customizations done for industrial automation projects. For example, one speaker mentioned that more than 60 percent of motor control centers supplied for projects are customized, deviating from standard industry offerings.

I believe this brings into focus the difference between application engineering and design. A substantial part of my career was spent as an application engineer for automation projects. Having built a record of delivering projects on time and on budget, I was also tasked with steering people toward application engineering with standard products to keep projects on budget.

Issues

Customization of products that deviate from vendor and industry standards leads to a number of problems on projects. The most immediate is higher cost for the hardware and software used on the project, but that initial increase is just the down payment on other expenses. Engineering and as-built documentation costs rise because of the deviation from industry-typical products and designs; for example, standard computer-aided design templates and information cannot be used to automatically generate documentation for unique modifications, creating more labor and the potential to introduce more errors into the project. Nonstandard customized hardware and software also increase installation labor costs for commissioning, startup, and validation, since they deviate from what people routinely understand.

Later, these customizations create higher life-cycle support costs, and many times result in "brittle systems" prone to failure. Customizations by their nature increase mean time to repair (MTTR), because they are unusual, requiring maintenance people to find, study, and understand unique documentation. In some cases, customizations require unique repair parts that may be difficult to find. Higher MTTR increases production downtime for repairs and lowers overall system availability. All of these factors increase the risk of project timeline and cost overruns.

Customization can be a very seductive activity, providing personal satisfaction for the controls and automation engineers, who feel they are creating something special. Many vendor salespeople find customizations rewarding, because they increase cost and in many cases create a highly dependent customer relationship. This is particularly true where the customer has to pay the vendor for nonrecurring engineering investments, and this sunk cost creates a barrier to using other solutions for changes and upgrades in the future.

Application engineering

Working under experienced people, I was taught that the best application engineering uses standard products, software, and hardware to creatively achieve project goals. This is a different way of thinking that has the constraint of using standard products and the challenge to innovatively apply them. In many cases, this approach forces the application engineer to seek out new and better standards, components, and software to achieve project goals.

The application engineering process starts by working with stakeholders to understand, clarify, and gain consensus on the functional requirements and goals to be achieved. Application engineers identify requirements by establishing personal rapport with product, manufacturing, and operations people. This leads to collaboration, resulting in process improvements and efficient manufacturing.

A successful and effective industrial controls and automation system design and implementation is the work product resulting from the application engineer’s analysis of requirements translated into a working system. Application engineering is a creative approach that requires seeking knowledge, being resourceful, and thinking outside of the box.

About the Author
Bill Lydon is chief editor of InTech magazine. Lydon has been active in manufacturing automation for more than 25 years. He started his career as a designer of computer-based machine tool controls; in other positions, he applied programmable logic controllers and process control technology. In addition to experience at various large companies, he co-founded and was president of a venture-capital-funded industrial automation software company. Lydon believes the success factors in manufacturing are changing, making it imperative to apply automation as a strategic tool to compete.

A version of this article originally was published at InTech magazine.

IIoT Applications Deliver a Competitive Advantage to Process Industries


This guest blog post was written by Deanna Johnson, global marketing communications manager at Emerson Process Management.

To some, the Industrial Internet of Things (IIoT) is just a new buzzword—but to the process industries, the IIoT is becoming a necessity to maintain competitiveness. Oil and gas companies, refineries, and other process industries are trying to cope with various market forces, many of which require improved plant performance.

The 650 major refineries globally are especially affected. Some of these plants are operating at peak performance, but many are not, causing a significant financial impact. Our calculations show the difference in operating costs associated with equipment reliability and energy efficiency between a well-run refinery and an average one is about $12.3 million per year for a typical 250,000 barrel-per-day facility. Assuming about 60 percent of refineries are not operating as well as they could, the overall worldwide financial impact runs to billions of dollars annually.

To increase reliability and efficiency, and to gain other operating benefits such as reduced maintenance and improved safety, many refineries and process plants are turning to the IIoT.

The IIoT essentially involves acquiring data from hundreds—if not thousands—of process and equipment sensors, and transmitting the data to central locations via wireless or hardwired networks. The goal is to sense anything, anywhere in a cost-effective manner.

Once the data arrives, it is stored in databases, historians, the cloud, and other locations where it can be accessed by software that analyzes and interprets the sensor information using “big data” techniques to diagnose conditions, detect equipment problems, and alert operations personnel. Such software can reside in the plant’s control system, a dedicated PC, or in a server half a world away.

The “Internet” part of IIoT refers to the fact that the Internet can be used to connect the various systems. In many instances, all the networking is done at the plant itself, with the Internet replaced by an internal intranet, but the basic principles still apply: huge amounts of data are gathered and analyzed to find and solve problems.

Space does not permit an exhaustive analysis of all the applications where the IIoT can save energy, reduce maintenance costs, and improve process efficiency. However, here is a short list of what is possible to monitor and analyze with these types of systems:

  • steam traps
  • pumps and compressors
  • heat exchangers
  • pressure relief valves
  • cooling towers
  • mobile workforces
  • safety showers and eye wash stations

Following are several examples of how the IIoT was used to improve efficiency and find problems at process plants worldwide.

Steam trap monitoring

Steam trap monitoring via wireless acoustic transmitters is a leading IIoT application. When traps fail open, high-pressure steam leaks out, so more steam has to be produced by boilers. Depending on the price of steam at a facility, a single failed-open steam trap can waste $30,000 worth of steam each year.

When traps fail closed, they do not remove water droplets from the steam. Accumulated water, moving through piping and equipment at high speed, can rupture steam lines and cause turbines to throw blades. Repairs are very expensive, and downtime is often significant.

Most plants monitor their steam traps manually via annual checks. This is very costly in terms of labor, misses many problems, and in the worst case can allow failed traps to operate for years.

Acoustic sensors and specialized software systems detect steam trap problems automatically and alert plant personnel so they can take action. In the past, these sensors were hardwired back to software systems, but the preferred modern method is to use wireless acoustic sensors connected back to software systems via a wireless mesh network, creating an IIoT.

Levaco Chemicals in Leverkusen, Germany, had to save energy to meet the June 2012 Energy Efficiency Directive required by the European Commission and ISO 50001. The plant determined that defective steam traps were causing loss of steam and inefficient heat transfer, and therefore wasting energy.

They installed 300 wireless steam trap monitors on critical steam traps, along with three wireless gateways, one in each of three plant areas. The gateways communicate with the wireless transmitters over the WirelessHART mesh network and are hardwired to the control system.

They also installed specialized data analysis software on a PC. The gateways connect to the PC via an Ethernet cable. This software analyzes real-time data from the steam trap acoustic monitors. These instruments measure the ultrasonic acoustic behavior and temperature of steam traps, and the software uses this data to identify existing and potential problems.
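Very loosely, the diagnosis such software performs can be reduced to a rule of thumb: a trap that is hot but acoustically loud is probably blowing steam (failed open), while a trap that has gone cold is probably blocked (failed closed). A minimal sketch with illustrative threshold values and names (real analytics packages tune these per trap type, orifice size, and steam pressure):

```python
def classify_trap(ultrasonic_db, temperature_c,
                  noise_limit_db=80.0, cold_limit_c=60.0):
    """Crude steam trap diagnosis from acoustic level and temperature."""
    if temperature_c < cold_limit_c:
        return "failed closed (cold trap, likely blocked)"
    if ultrasonic_db > noise_limit_db:
        return "failed open (continuous steam blow-through)"
    return "operating normally"

print(classify_trap(ultrasonic_db=92.0, temperature_c=150.0))  # failed open
print(classify_trap(ultrasonic_db=40.0, temperature_c=25.0))   # failed closed
print(classify_trap(ultrasonic_db=55.0, temperature_c=150.0))  # operating normally
```

In practice the software also watches trends over time, since a trap cycling normally is briefly noisy while a failed-open trap is noisy continuously.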

By repairing or replacing failed steam traps, the three plant areas immediately achieved substantial reductions in energy costs. Failed traps were no longer venting valuable steam, which reduced the energy consumed to produce it, and no longer caused process shutdowns. The increased energy efficiency easily met the Energy Efficiency Directive and ISO 50001 requirements, and the plant was awarded a certificate of compliance in 2015.

Levaco calculated a payback period of less than two years, thanks to savings in energy costs. It also reduced the number of process shutdowns caused by steam trap failures and eliminated the need for maintenance technicians to make regular rounds, resulting in further substantial savings.

In a similar application, a corn milling plant was experiencing a 15 percent annual steam trap failure rate, with 12.5 percent of the plant’s steam traps responsible for 38 percent of the steam loss. Steam trap issues were efficiently identified and addressed with the application of wireless steam trap acoustic sensors and accompanying analytics. The payback period was just a few months, and the annual savings were $301,108.

Table 1 illustrates the savings possible in a large plant that has 8,000 steam traps, where 1,200 are considered critical. If the plant previously experienced a 15 percent failure rate per year, by preventing those failures with steam trap monitors the plant will save $3,279,960 per year.
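The arithmetic behind a table like this is straightforward. As a back-of-the-envelope sketch using the trap count and failure rate above, with an assumed average annual loss per failed trap (the real table weights losses by trap criticality and local steam price):

```python
def annual_trap_savings(total_traps, failure_rate, avg_loss_per_failed_trap):
    """Steam cost avoided per year if monitoring prevents trap failures."""
    failed_traps = total_traps * failure_rate
    return failed_traps * avg_loss_per_failed_trap

# 8,000 traps at a 15 percent annual failure rate is 1,200 failures per year.
# $2,733 is an assumed average annual loss per failed trap; a single
# failed-open trap can waste $30,000, so the average depends on the mix.
savings = annual_trap_savings(8000, 0.15, 2733)
print(f"${savings:,.0f} per year")  # $3,279,600 per year, near the figure cited
```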

Pump monitoring

It is estimated that pumps account for 7 percent of the total maintenance cost of a plant or refinery, and pump failures are responsible for 0.2 percent of lost production. Many pump failures can be predicted using IIoT, modern condition-based monitoring techniques, predictive technologies, and reliability-centered maintenance best practices.

Historically, the expense of installing a dedicated IIoT online monitoring system has prevented it from being used on anything beyond the most critical pumps. But with the relative ease of adding online pump condition monitoring with today’s wireless sensor technology, online monitoring can be installed quickly and inexpensively.

Today, wireless transmitters make it possible to monitor many pumps cost effectively.

Cavitation monitoring is needed on high-head multistage pumps, as they cannot tolerate this condition, even for a brief time. Although cavitation often happens when pumps operate outside their design ranges, it can also be caused by intermittent pump suction or discharge restrictions. Damage can occur before manual rounds discover the problem, but can be detected sooner by continuously monitoring the pump discharge pressure for fluctuations with a wireless pressure transmitter.
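One simple way a monitoring system can flag suspected cavitation is to track the short-term variability of the discharge-pressure signal, since cavitation shows up as fluctuation well above the pump's normal noise floor. A minimal sketch; the window size and fluctuation limit are assumptions that would be tuned per pump:

```python
from statistics import pstdev

def cavitation_suspected(pressure_readings, window=20, limit_psi=2.0):
    """Flag possible cavitation when the standard deviation of the most
    recent discharge-pressure samples exceeds a tuned limit."""
    recent = pressure_readings[-window:]
    if len(recent) < window:
        return False  # not enough samples yet
    return pstdev(recent) > limit_psi

# Quiet pump: discharge pressure wanders within a few tenths of a psi.
steady = [100.0 + 0.2 * (i % 3) for i in range(40)]
# Cavitating pump: the last 20 samples swing several psi around the mean.
rough = steady[:20] + [100.0 + 6.0 * (-1) ** i for i in range(20)]

print(cavitation_suspected(steady))  # False
print(cavitation_suspected(rough))   # True
```

A real system would run this continuously on the wireless transmitter's data rather than waiting for manual rounds.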

Vibration monitoring detects many common causes of pump failure. Excessive motor and pump vibration can be caused by a failing concrete foundation or metal frame, shaft misalignment, impeller damage, pump or motor bearing wear, or coupling wear and cavitation. Increasing vibration commonly leads to seal failure and can result in expensive repairs, process upsets, reduced throughput, fines if hazardous material is leaked, and fire if the leaked material is flammable.

Online vibration monitoring has been successful in detecting several root causes of pump degradation, and a complete IIoT pump health monitoring system can pay for itself in months. At one refinery, for example, pump monitoring systems were installed on 80 pumps throughout the complex. Annual savings after implementing the pump monitoring solution were more than $1.2 million, for a payback period of less than six months (table 2). The savings came from decreased maintenance costs of $360,000 and from fewer losses to process shutdowns caused by failed pumps, conservatively valued at $912,000.
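The payback arithmetic quoted here is easy to verify. Using the savings figures above and an assumed installed cost for the 80-pump system (the article does not state the system cost, so $600,000 is purely illustrative):

```python
def simple_payback_months(installed_cost, annual_savings):
    """Months needed to recover the installed cost from annual savings."""
    return 12 * installed_cost / annual_savings

annual_savings = 360_000 + 912_000  # maintenance savings + avoided shutdown losses
payback = simple_payback_months(600_000, annual_savings)
print(f"{payback:.1f} months")  # 5.7 months, consistent with "less than six"
```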

Heat exchanger monitoring

Heat exchangers in many plants can be a major source of downtime, often causing considerable maintenance expense, significant loss of production, and poor plant performance. Existing monitoring may involve manual spot measurements performed periodically. These measurements provide an inconsistent view of failures and are time consuming, with accuracy depending on technician expertise.

Many refiners are trying to maximize their use of low-cost crudes, but this type of feedstock often presents significant processing challenges. Typically, crude unit preheat exchangers can foul unpredictably with changes in the crude blend and process conditions. As a result, energy efficiency is lost, and production can be limited. Adding wireless temperature measurements to exchanger banks provides more data to process analytics software, which can then alert operations to excessive fouling conditions and rates. Using WirelessHART technology, heat exchanger monitoring can be automated and integrated with the existing automation system in a matter of days.

Wireless temperature transmitters and heat exchanger modeling software can determine when crude unit preheat exchangers need cleaning.

At one refinery, the #2 Crude Unit was subject to preheat train fouling. The refinery was unable to determine when to clean the heat exchanger for the greatest benefit. This lack of information prevented economic analysis planning, such as fouling degradation versus additional fired heater fuel required. An IIoT real-time temperature monitoring system was installed on the unit, which sent data to heat exchanger modeling software. Based on the analysis, the heat exchanger was cleaned on an as-needed basis, resulting in an estimated annual savings of $225,000 in maintenance costs, with further savings of $912,500 realized by preventing downtime (table 3).
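The as-needed cleaning decision ultimately reduces to comparing the projected cost of continued fouling against the cost of a cleaning. A minimal sketch; the fuel-penalty figures are assumptions, and a real system would derive the penalty from the exchanger model's heat-duty shortfall:

```python
def should_clean(fuel_penalty_per_day, cleaning_cost, days_until_next_window):
    """Recommend cleaning when the projected extra fired-heater fuel cost
    before the next opportunity exceeds the cost of cleaning now."""
    projected_penalty = fuel_penalty_per_day * days_until_next_window
    return projected_penalty > cleaning_cost

# Lightly fouled: $300/day fuel penalty, $40,000 cleaning, 60-day horizon.
print(should_clean(300, 40_000, 60))   # False: keep running
# Heavily fouled: $1,500/day penalty over the same horizon.
print(should_clean(1500, 40_000, 60))  # True: clean now
```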

More than a buzzword

The IIoT is more than a buzzword. It is here, and plants are using it to realize value from the hundreds of millions of connected sensors currently installed, and the millions more coming online each year. Many of these new sensors are wireless, because they can be installed more quickly and at lower cost than their wired equivalents, often with no required downtime. These low-cost wireless sensors and accompanying analytics can dramatically improve plant performance, increase safety, and pay for themselves within months.

About the Author
Deanna Johnson, global marketing communications manager at Emerson Process Management, focuses on Rosemount products and pervasive-sensing strategies. Her previous positions included development of integrated marketing communications programs for Emerson’s oil and gas and refining industries, as well as work on WirelessHART marketing. Johnson started her Emerson career in 1996. She has an MBA with a marketing focus.

A version of this article originally was published at InTech magazine.

The Arts Must Be Factored Into the STEM Equation


This guest blog post was written by Stephen R. Huffman, vice president, marketing and business development, Mead O’Brien, Inc.


This blog began as a coy reply to Bill Lydon’s interesting post about Leonardo da Vinci’s accomplishments as an artist applying engineering principles to create engineered works of art. Lydon noted that da Vinci saw science and art as complementary rather than as distinct disciplines. I replied that the word “STEAM” (really STEM plus art) was not a new concept. The most recent iteration started sometime within the first decade of the 21st century, gaining traction through the efforts of such influencers as the Rhode Island School of Design beginning in 2010. Lawmakers with whom the Automation Federation met while advocating for our profession on Capitol Hill saw the concept as a way to reach elementary school children who would not otherwise be interested in math, science, and engineering.


My point was: why use the word “steam” and create confusion with the engine of the American industrial revolution, still the most efficient turbine driver and heat-transfer medium in prominent use to this day? Ironically, I find a declining knowledge base regarding the steam systems used in industry, especially in process control, as the baby boomers retire in large numbers. New practitioners, automation or otherwise, who work on or are charged with engineering or maintaining these process utility systems are generally not well prepared from a knowledge or educational perspective. This gap compounds the negative financial impact that poorly designed or poorly maintained steam systems have on product quality, throughput, and energy loss.

For the artistic, it seems someone should have realized that the word, with all its thermodynamic glory, was already taken. So is it right to add “art” to the critical-thinking process of STEM, giving the engineering curriculum another dimension for the student’s education? A number of artists and engineers disagree, but mainly because each views their own “discipline” as a tool that makes the other superior. In short, it goes both ways, and purists on both sides probably resent that art and engineering go together. Coming from the engineering side of the fence, I feel that art probably does broaden the horizons of engineers, and bringing art into engineering certainly does nothing to diminish art in and of itself. As art teaches us, there are many ways to comprehend the same thing.

In my own experience with the brewing industry in St. Louis over the past 40 years, the process mix includes engineering, science, and the application of the art of brewing, which goes back to the ancient Greeks. Modern brewing evolved over the past 150 years with people from those disciplines working together, some even using the “glue” of automation to turn their processes into highly automated, high production, and sophisticated dynamos with dozens of new products released yearly, all of them starting with four basic ingredients.

I project that art in STEM (STEM+A, if I were chief acronym maker) is absolutely necessary for automation professionals to better appreciate process and better visualize what the future holds. It is also essential for thinking more abstractly and, in homage to the next big thing, for developing a critical eye to analyze, put to practical use, and translate from “production-speak” to meaningful “management-speak” the massive amount of data coming our way with the Industrial Internet of Things revolution, on whose cusp we stand. Dealing with disruptive technologies in process and factory automation will require digital skills far beyond what we can even see on the horizon today. It seems that steam may be creating some buzz, but in the future the real kinetic energy will be created by digital engineers.

About the Author
Stephen R. Huffman is vice president, marketing and business development, at Mead O’Brien, Inc., and serves as chairman of the Government Relations Committee at the Automation Federation. Stephen has a 40-year history of optimizing process systems, developing new applications, and providing technical education. He served as 2007 president of ISA.


A version of this article originally was published at InTech magazine.
