How to Detect Defects in Rolling Element Bearings Used in Manufacturing Processes


This post is an excerpt from the journal ISA Transactions. All ISA Transactions articles are free to ISA members, or can be purchased from Elsevier Press.


Abstract: The active health monitoring of rotordynamic systems in the presence of a bearing outer race defect is considered in this paper. The shaft is assumed to be supported by conventional mechanical bearings, and an active magnetic bearing (AMB) located at the mid-span of the shaft is used as an exciter to apply electromagnetic force to the system. We investigate a nonlinear bearing-pedestal system model with the outer race defect under the electromagnetic force. The nonlinear differential equations are integrated using the fourth-order Runge–Kutta algorithm. The simulation and experimental results show that the characteristic signal of an incipient outer race defect is significantly amplified under the electromagnetic force applied through the AMB, which helps improve the diagnosis accuracy of incipient outer race defects in rolling element bearings.

The rolling element bearing is one of the most common components in rotating machinery, and the health of these bearings directly determines the performance of that machinery. Defects that arise during operation or during the manufacturing process can cause vibration, noise, and even system failure. It is thus critical to detect bearing defects at an early stage to prevent catastrophic damage or failures that result in plant downtime and reduced efficiency.


 Free Bonus! To read the full version of this ISA Transactions article, click here.

Join ISA and get free access to all ISA Transactions articles as well as a wealth of other technical content, plus discounts on events, webinars, training & education courses, and professional certification.

Click here to join … learn, advance, succeed!


2006-2018 Elsevier Science Ltd. All rights reserved.

How Next-Generation Gas Chromatography Improves Quality and Reduces Costs


This guest blog post was written by Bonnie Crossland, Rosemount product marketing manager for gas chromatographs at Emerson Process Management.

Glass manufacturing is one of the most energy-intensive industries, with energy costs representing roughly 14 percent of the total production costs. The bulk of energy consumed comes from natural gas combustion for heating furnaces to melt raw materials, which are then transformed into glass. Additionally, glass manufacturing is sensitive to the combustion processes, which can affect the quality of the glass and shorten the lifespan of the melting tanks if not managed properly. Historically, the composition of natural gas has been relatively stable. However, dramatic changes in the supply of natural gas (including shale gas and liquefied natural gas imports) are causing end users to experience rapid and pronounced fluctuations in gas quality.

Furnace efficiency can be optimized by adjusting the air/fuel ratio as the composition of the incoming gas changes. This can significantly reduce energy consumption and provide substantial savings in product quality and equipment life. Optimizing furnace efficiency has traditionally been complex and costly. Next-generation gas chromatography, however, is changing that paradigm, providing a cost-effective, task-focused methodology that can be carried out by less technically proficient personnel than were traditionally required.


Two unique fuel gas compositions can have the same energy content but behave very differently in the burner. This is because different amounts of diluents (nitrogen and carbon dioxide) and different ratios of hydrocarbons cause different densities, and thus different velocities through the burner restrictors. The Wobbe Index, the ratio of the energy value to the square root of the specific gravity (Wobbe Index = energy value/√specific gravity), provides an indicator of how the fuel will act through a burner and provides a better variable to control the air/fuel ratio.

Gas chromatographs are used throughout the natural gas chain of custody (from wellhead to burner tip) to determine the gas composition for quality monitoring and energy content. For pipeline quality natural gas, the industry standard is the C6+ measurement method. This method determines the individual composition for each of the hydrocarbons from methane to normal-pentane, nitrogen, and carbon dioxide, and combines heavier hydrocarbons (e.g., hexane, heptane, octane) as a C6+ component. From the composition, energy content, specific gravity, Wobbe Index, and other physical properties are determined using calculations from international standards such as ISO 6976, GPA 2172, and AGA 8. Using the Wobbe Index and a gas chromatograph to determine the gas composition gives insight into the fuel quality variations from the gas supplier. Additionally, the C6+ measurement is the standard by which custody transfer billing is based, and therefore is a direct method of ensuring that the energy used matches the bill from the gas supplier.
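As a rough illustration of the calculation (mole-fraction-weighted properties with round-number heating values and specific gravities, not the rigorous ISO 6976 procedure):

```python
import math

# Illustrative round-number properties per component (NOT the precise
# ISO 6976 coefficients): gross heating value in MJ/m^3 and specific
# gravity (density relative to air).
PROPS = {
    "methane":        (37.7, 0.554),
    "ethane":         (66.1, 1.038),
    "propane":        (93.9, 1.522),
    "nitrogen":       (0.0,  0.967),
    "carbon_dioxide": (0.0,  1.519),
}

def wobbe_index(mole_fractions):
    """Wobbe Index = energy value / sqrt(specific gravity), with both
    properties taken as mole-fraction-weighted mixture averages."""
    hv = sum(x * PROPS[c][0] for c, x in mole_fractions.items())
    sg = sum(x * PROPS[c][1] for c, x in mole_fractions.items())
    return hv / math.sqrt(sg)

# Diluents lower both the heating value and the Wobbe Index relative
# to pure methane, even for small mole fractions.
pipeline_gas = {"methane": 0.94, "ethane": 0.03, "propane": 0.01,
                "nitrogen": 0.015, "carbon_dioxide": 0.005}
print(round(wobbe_index(pipeline_gas), 1))
```

In practice the gas chromatograph performs this calculation internally from the measured C6+ composition; the sketch only shows why two gases of similar energy content can still burn differently.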

Optimizing the air/fuel ratio

The value of optimizing the air/fuel ratio cannot be overstated. The energy value (British thermal unit [Btu] or calorific value) or Wobbe Index is output from the gas chromatograph via either Modbus or an analog 4–20 mA signal. This signal can be used to integrate with the plant process control system to trim the air/fuel ratio and ensure maximum production efficiency (figure 1). When the air/fuel mixing proportion is correct (stoichiometric), all the fuel will be consumed during the combustion process and will burn cleanly. This enables the furnace to operate at its most efficient, cost-effective point. Changes in the composition of the fuel gas will cause changes in:

  • the physical properties of the gas
  • the minimum air requirements needed to achieve stoichiometric combustion
  • the flue gas composition
  • flame speed and flame position

Because glass quality is sensitive to the combustion processes, failing to respond to variations in the composition of the natural gas can result in losing an entire production run due to poor gas quality.
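The link between fuel composition and air demand comes directly from combustion stoichiometry; a minimal sketch, ignoring minor components:

```python
# Moles of O2 needed to burn one mole of each component completely,
# from CnH(2n+2) + (3n+1)/2 O2 -> n CO2 + (n+1) H2O; inerts need none.
O2_DEMAND = {"methane": 2.0, "ethane": 3.5, "propane": 5.0,
             "nitrogen": 0.0, "carbon_dioxide": 0.0}
O2_FRACTION_IN_AIR = 0.2095  # mole fraction of O2 in dry air

def stoich_air_ratio(mole_fractions):
    """Moles of air per mole of fuel gas for stoichiometric combustion."""
    o2 = sum(x * O2_DEMAND[c] for c, x in mole_fractions.items())
    return o2 / O2_FRACTION_IN_AIR

# A gas carrying more heavy hydrocarbons needs noticeably more air per
# mole of fuel than a nitrogen-diluted gas at the same flow rate.
lean_gas = {"methane": 0.95, "nitrogen": 0.05}
rich_gas = {"methane": 0.90, "ethane": 0.10}
print(round(stoich_air_ratio(lean_gas), 2), round(stoich_air_ratio(rich_gas), 2))
```

This is why a fixed air setpoint cannot stay stoichiometric when the gas supply shifts: the minimum air requirement itself moves with composition.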

A major glass company in the southeastern U.S. is a heavy user of natural gas. However, the gas comes from multiple locations, causing a constant fluctuation of the Btu value. Because gas flow is adjusted based on the Btu value, knowing the precise measurement is essential. In addition, because the density of the gas varies, knowing the Wobbe Index is critical to quality. When the company began employing a gas chromatograph to optimize its fuel quality, it found the traditional intricacies of gas chromatographs inappropriate for its application. Despite repeated training, its staff was unable to calibrate the instrument. New gas chromatograph technologies designed specifically for natural gas optimization significantly reduced the complexity of operation. In new designs, all of the complex analytical functions of the gas chromatograph may be contained in a replaceable module, greatly simplifying maintenance. Features like auto-calibration make operation easier and more accurate, even for novice users.

Reduced need for specially trained gas chromatograph technicians

At a glass manufacturer in the U.K., poor fuel gas energy measurement led to inadequate air/fuel control, higher energy costs, and a reduction in quality of the finished product. In addition, the company needed compositional data for the calculation of carbon emissions factors, and it lacked a workforce skilled in chromatography. By using new natural gas chromatography technology, it optimized the stoichiometric ratio for stable flame heat, maximized the lifetime of the melting tank, reduced energy costs, and participated in the EU Emission Trading System program. All of this was accomplished without the need for specially trained gas chromatograph technicians.

New gas chromatograph technologies also save costs with capabilities like calibration gas saving features. Self-diagnostics mean the users can rely on the instruments to signal the need for maintenance, while step-by-step on-screen instructions walk the techs through any required processes. The sample handling system includes both particulate and liquid filters and incorporates fixed flow restrictors, removing the need for operators to constantly monitor and adjust the sample system.

Many companies in a wide range of industries faced with the problems of inconsistent quality in natural gas may not have considered gas chromatography as a viable solution for balancing air/fuel ratio due to the traditional complexities of the measurement. It is time to look again. New developments in gas chromatography technology may make this approach the first choice for improving energy efficiency, and ultimately, process quality.

About the Author

Bonnie Crossland is Rosemount product marketing manager for gas chromatographs at Emerson Process Management.

A version of this article originally was published at InTech magazine.
Image source: Wikipedia

Tuning Strategies and the Fragility of Fractional-Order PID Controllers


This post is an excerpt from the journal ISA Transactions. All ISA Transactions articles are free to ISA members, or can be purchased from Elsevier Press.


Abstract: This paper analyzes the fragility of fractional-order proportional-integral-derivative controllers applied to integer first-order-plus-dead-time processes. In particular, the effects of variations of the controller parameters on the achieved control system robustness and performance are investigated. Results show that this kind of controller is more fragile than the standard proportional-integral-derivative controller, and therefore significant attention should be paid by the user to its tuning.

A properly designed control system must provide an effective trade-off between performance and robustness. One of the main reasons to investigate the fragility of fractional-order PID controllers is to enable an engineer or technician to use alternative strategies for tuning the controller.



How to Tune PID Controllers on Self-Regulating Processes


This guest blog post was written by James Beall, a principal process control consultant at Emerson Process Management with 34 years of experience in process control. Beall is a member of AIChE and ISA, and chair of ISA committee ISA75.25, Control Valve Dynamic Testing. Click this link to read the first blog post in this loop tuning series.


The two most common categories of process responses in industrial manufacturing processes are self-regulating and integrating. A self-regulating process response to a step input change is characterized by a change of the process variable, which moves to and stabilizes (or self-regulates) at a new value. An integrating process response to a step input change is characterized by a change in the slope of the process variable. From the standpoint of a proportional, integral, derivative (PID) process controller, the output of the PID controller is an input to the process.

The output of the process, the process variable (PV), is the input to the PID controller. Figure 1 compares the response of the process variable to a step change of the PID controller output for a self-regulating process and for an integrating response.

Figure 1. Response of the PV to a step change of the controller output for a self-regulating and an integrating process.
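The two response shapes in figure 1 can be reproduced with a minimal simulation; the parameter values below are illustrative, not taken from the article:

```python
import math

def self_regulating_pv(t, kp=1.0, tau=20.0, td=5.0, step=10.0):
    """First-order-plus-dead-time response: after the dead time the PV
    moves toward, and settles at, a new value of kp * step."""
    if t < td:
        return 0.0
    return kp * step * (1.0 - math.exp(-(t - td) / tau))

def integrating_pv(t, ki=0.1, td=5.0, step=10.0):
    """Integrating response: after the dead time the PV ramps at a
    constant slope proportional to the output step, without settling."""
    if t < td:
        return 0.0
    return ki * step * (t - td)

# Long after the step, the self-regulating PV has stabilized near
# kp * step = 10, while the integrating PV is still ramping.
print(round(self_regulating_pv(200.0), 2), integrating_pv(200.0))
```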

Self-regulating responses are very common in the process industry. Many flows, liquid pressures, temperatures, and composition processes are self-regulating. In the first blog post in this series, I presented techniques for tuning a PID controller used on an integrating process. In this post, I will present a method to tune PID controllers on self-regulating processes.


Regardless of the tuning of the PID controller, the control performance is limited by the performance of the instrumentation and final control element. Before tuning a controller, it is helpful to have an understanding of the process and to verify the performance of the instrumentation and final control element, usually a control valve. The control valve should have a small deadband and resolution—another topic of discussion! It should have an appropriate and consistent flow gain. It should have a response time that is appropriate for the process performance requirements. ANSI/ISA-75.25 and the EnTech Control Valve Dynamic Specification V3.0 are excellent sources of information on this topic. Also, the control scheme should be reviewed to make sure it is an appropriate, linear, control scheme for the application. Finally, the interaction of the control loop to be tuned with other control loops should be reviewed and understood. The desired “aggressiveness” of the loop tuning should be based on the interaction of the control loop with other loops and the consequences of movement of the controller output.

Tuning for a self-regulating process

A tuning methodology called lambda tuning addresses these challenges. The lambda tuning method allows the user to choose the closed loop response time, called lambda, and calculate the corresponding tuning. The lambda closed loop response time is chosen to achieve the desired process goals and stability criteria. This could result in choosing a small lambda for good load regulation, a large lambda to minimize changes in the controller output and manipulated variable by allowing the PV to deviate from the set point, or somewhere in between these two extremes. More importantly, the lambda of the loop can be used to coordinate the responses of many loops to reduce interaction and variability.

Lambda tuning for self-regulating processes can result in a closed loop response that is slower or faster than the open loop response time of the process. Though lambda is defined as the closed loop time constant of the process response to a step change of the controller set point, the load regulation capability is also a function of the lambda of the loop. The response to a step set point change and a step load change for a self-regulating process response with lambda tuning is shown in figure 2.

Figure 2. Response of lambda tuning for a self-regulating process for a step set point and a step load step change.

Self-regulating process responses typically include dead time and can usually be approximated by a “first-order” or “second-order” response. This article describes the lambda tuning procedure when the process response can be approximated by a first-order-plus-dead-time response. The lambda tuning for a second-order-plus-dead-time response will be covered in future articles.


The lambda tuning method for self-regulating processes involves three steps:

  1. Identify the process dynamics.
  2. Choose the desired closed loop speed of response, lambda.
  3. Calculate the required PID tuning constants.

Figure 3 shows the dynamic parameters of a self-regulating, “first-order-plus-dead-time” process, which include dead time (Td), in units of time; time constant (tau), in units of time; and the process gain (Kp), in units of percent controller PV span/percent controller output span. Typically several step tests are performed; the results are reviewed for consistency; and the average process dynamics are calculated and used for the tuning parameter calculations. If the controller output goes directly to a control valve, any significant deadband in the valve will reduce process gain if the output step was a reversal in direction. If the controller output cascades to the set point of a “slave” loop, the slave loop should be tuned first.

Figure 3. Open loop process dynamics of a first-order, self-regulating process include dead time, the time constant, and process gain. T98 is the time required for the process to reach 98 percent of its final value.

The next step is to choose the lambda to achieve the desired process control goal for the loop—the allowable stability margin and the expected changes in process dynamics. A shorter lambda produces more aggressive tuning and less stability margin. A longer lambda produces less aggressive tuning and more stability margin. It is not uncommon for the process dynamics, particularly the process gain, to vary by a factor of 0.5 to 2. If testing during different conditions reveals that the process dynamics change significantly, then an additional margin of stability is required. Or, the process response can be “linearized” or adaptive tuning can be used.

If the potential change in process dynamics is unknown, starting with lambda equal to three times the larger of the dead time or time constant will provide stability even if the dead time doubles and the process gain doubles. If it is desirable to coordinate the response of loops to avoid significant interaction, the lambda of the interacting loops can be chosen to differ by a factor of three or more. For cascade loops, the lambda can be chosen to ensure the slave loop of the cascade pair has a lambda 1/5 or less of the master control loop.

The lowest recommended lambda for a first-order-plus-dead-time self-regulating process is equal to the dead time, although this provides a very low gain and phase margin. Thus, even a small increase in the dead time or process gain can cause instability of the loop.

From a stability standpoint, there is no upper limit on the lambda. If the lambda is not chosen based on a coordinated response, a good starting point for stability is:

λ = 3 × (the larger of the dead time or the time constant)
The tuning performance can be monitored for a time period and adjusted to be a shorter or longer lambda as needed.

The final step is to calculate the tuning parameters from the process dynamics. Care should be taken to use consistent units of time for the dead time and the lambda. For a first-order-plus-dead-time process response (no significant lag or lead), the controller gain and reset time are calculated with the following equations, and the derivative time is set to 0:

Kc = τ / [Kp × (λ + Td)]
Ti = τ

These equations are valid for the standard (sometimes called ideal, noninteractive) and series (sometimes called classical, interactive) forms of the PID implementation. Note that only the controller gain (Kc) changes as lambda (λ) changes. The integral time (Ti) remains equal to the time constant regardless of the lambda chosen.


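The lambda tuning rules for a first-order-plus-dead-time process can be sketched in a few lines. The time constant and dead time below come from the article's steam pressure example; Kp is not stated there and is assumed to be 1 %/% purely for illustration:

```python
def lambda_tuning_fopdt(kp, tau, td, lam):
    """Lambda tuning for a first-order-plus-dead-time process:
    Kc = tau / (Kp * (lambda + Td)), Ti = tau, derivative time = 0.
    All times must be in consistent units."""
    kc = tau / (kp * (lam + td))
    ti = tau
    return kc, ti

# Steam pressure example: tau = 20 s, Td = 5 s, starting lambda
# = 3 * 20 s = 60 s; shorter lambdas give more aggressive tuning.
for lam in (60.0, 40.0, 20.0):
    kc, ti = lambda_tuning_fopdt(kp=1.0, tau=20.0, td=5.0, lam=lam)
    print(lam, round(kc, 3), ti)  # Ti stays equal to tau for every lambda
```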
Consider the steam pressure controller shown in figure 4. The pressure controller, PIC-101, manipulates a properly sized control valve that has a high-performance digital positioner.

Figure 4. Process and control diagram for a reboiler shell steam pressure control.

Figure 5 shows a step test of the pressure controller to identify the process dynamics. The process gain is measured in %PV/%OUT; the dead time is 5 seconds; and the time constant is 20 seconds.

Figure 5. Open loop step test and analysis of one step response.


Because there are no “loop response coordination” requirements, the initial lambda is chosen to be 3 * (larger of dead time or time constant) = 3*20 seconds = 60 seconds.

Now, the tuning can be calculated with the lambda tuning rules.

In preparation for being able to make the tuning more aggressive if the control loop is consistent over the required operating range, the tuning can be calculated for shorter values of lambda. The following table shows the tuning for different values of lambda. Note that the integral time remains the same for all choices of lambda.

Figure 6 shows the response to a step set point and a step load change for each of the lambda values in the table. Note that the tuning is stable for much shorter lambda values than the starting point of 3 * (larger of dead time or time constant). However, this is with perfectly constant process dynamics in a simulator. Additional tests on a real process, at different operating conditions, will help determine the consistency of the process dynamics.

Figure 6. Response of self-regulating process for a step set point and step load change with different lambda values.

Meeting process goals

Most published PID controller tuning methods are designed for optimum load regulation, not necessarily optimum process performance. The lambda tuning method provides the ability to tune the PID controller to achieve process performance goals, whether they are maximum load regulation or a coordinated response to other loops. Note that the lambda tuning method for integrating processes can also be used for a lag dominant, self-regulating process to achieve excellent load regulation. This technique and tuning for more complex dynamics will be covered in a future article in this series.

Click this link to read the first blog post in this loop tuning series.

About the Author
James Beall is a principal process control consultant at Emerson Process Management with more than 34 years of experience in process control. He graduated from Texas A&M University with a BS in electrical engineering and worked for Eastman Chemical Company until 2001. He has worked at Emerson since 2001. Beall’s areas of expertise include process instrumentation, control valve performance, control strategy analysis and design, advanced regulatory control, and multivariable and model predictive control. He has designed and implemented process control improvement projects in the chemical, refinery, pulp and paper, power, pipeline, gas and oil, and pharmaceutical industries. Beall is a member of AIChE and ISA, and chair of ISA committee ISA75.25, Control Valve Dynamic Testing. He is a contributing author to the Process/Industrial Instruments and Control Handbook, 5th Edition.



A version of this article originally was published at InTech magazine.

Powering the Next Generation of HART-Enabled Devices


This guest blog post was written by Sol Jacobs, vice president and general manager of Tadiran Batteries. He has more than 30 years of experience in developing solutions for powering remote devices. His educational background includes a bachelor’s degree in engineering and an MBA.


While continually evolving, the HART communications protocol remains strong after 30 years, with approximately 30 million HART-enabled devices installed and in service worldwide. The HART protocol remains the industry standard for applications ranging from process control to asset management and safety systems, machine-to-machine, and other supervisory control and data acquisition (SCADA) applications.

The Highway Addressable Remote Transducer (HART) protocol employs Bell 202 frequency shift keying (the same standard found in analog phone caller-ID technology) to superimpose digital signals on top of 4–20 mA analog signals, with the two channels working in tandem to provide a low-cost field communications solution that is easy to use and configure.

Traditional HART connectivity requires hardwiring, which is highly restrictive. Experts believe that nearly 85 percent of all installed HART-enabled devices are not currently connected. The main obstacle is expense; it costs $100 or more per foot to create a hardwired connection. This limitation becomes even more problematic for remote, environmentally sensitive locations, where logistical, regulatory, and permitting requirements create added layers of expense and complexity.

Recognizing that industrial automation could not be held back by proximity to analog wiring, the HART-IP protocol was developed, enabling IP-based networks to communicate via Wi-Fi (IEEE 802.11) or Ethernet (IEEE 802.3).

The development of HART-IP led to low-power communications protocols, such as WirelessHART and ZigBee, that use IEEE 802.15.4-approved radio signals to deliver high reliability in challenging environments. The WirelessHART protocol has created a huge opportunity for wireless, battery-operated sensors to seamlessly integrate with other intelligent HART devices to play an integral role in the emerging Industrial Internet of Things (IIoT). This is a critical step toward a future where “big data” analytics will increasingly manage transportation infrastructure, energy production, environmental monitoring, manufacturing, distribution, health care, and smart buildings. The WirelessHART protocol has enabled the rapid development of wireless mesh networks that combine multiple low-power sensors to form redundant, self-healing networks.

The ideal power supply

A remote wireless device is only as reliable as its power supply, which needs to be optimized based on application-specific requirements. The vast majority of remote wireless devices that require long operating life are powered by primary (nonrechargeable) lithium batteries. However, certain applications may be suited for energy-harvesting devices used in conjunction with rechargeable lithium-ion (Li-ion) batteries that store the harvested energy.

Generally speaking, the more remote the application, the greater the need for an industrial-grade lithium battery. For example, inexpensive consumer-grade alkaline batteries can suffice in certain instances, especially for easily accessible devices that operate within a moderate temperature range (e.g., flashlights, television remote controls, and toys). However, alkaline batteries are not well suited to long-term industrial applications due to inherent limitations, including low voltage (1.5 V or lower), a limited temperature range (0°C to 60°C), a high self-discharge rate that reduces life expectancy to two to three years, and crimped seals that may leak.

The low initial cost of a consumer-grade battery can also be highly misleading, as the cost of labor to replace a consumer-grade battery typically far exceeds that of the battery itself. For example, consider what it takes to replace batteries in a seismic monitoring system sitting on the ocean floor or in a stress sensor attached to a bridge abutment.

To judge whether a short-lived consumer-grade alkaline battery is a worthy investment, you must calculate the lifetime cost of the power supply. To be accurate, the calculation has to properly account for the cost of all labor and materials associated with future battery replacements.
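That lifetime calculation can be sketched in a few lines; the dollar figures and battery lives below are hypothetical placeholders, not vendor data:

```python
def lifetime_power_cost(years, battery_life_years, battery_cost,
                        labor_cost_per_visit):
    """Total cost of ownership of the power supply: the initial battery
    plus every field replacement (new battery + labor) over the
    deployment.  Labor for the initial install is excluded."""
    visits = -(-years // battery_life_years)  # ceiling division
    replacements = max(0, visits - 1)
    return battery_cost + replacements * (battery_cost + labor_cost_per_visit)

# Hypothetical figures: a $5 alkaline cell lasting 2 years versus a $40
# lithium cell lasting 20 years, with $500 of labor per field visit.
print(lifetime_power_cost(20, 2, 5, 500),    # alkaline over 20 years
      lifetime_power_cost(20, 20, 40, 500))  # lithium over 20 years
```

Under these assumed numbers, the "cheap" battery is two orders of magnitude more expensive over the life of the deployment, entirely because of replacement labor.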

When specifying an industrial-grade lithium battery, you need to consider numerous factors, including:

  • energy consumed in active mode (including the size, duration, and frequency of pulses)
  • energy consumed in dormant mode (the base current)
  • storage time (as normal self-discharge during storage diminishes capacity)
  • thermal environments (including storage and in-field operation)
  • equipment cut-off voltage (as battery capacity is exhausted, or in extreme temperatures, voltage can drop to a point too low for the sensor to operate)
  • battery self-discharge rate (which can be higher than the current draw from average sensor use)
  • cost considerations

Industrial-grade lithium batteries are commonly specified when the following performance features are required:

  • Reliability: The remote sensor is deployed in a hard-to-reach location where battery replacement is difficult or impossible, and data links cannot be interrupted by bad batteries.
  • Long operating life: The self-discharge rate of the battery can be more than the device usage of the battery, so initial battery capacity must be as high as possible.
  • Wide operating temperatures: A wide range is especially critical for extremely hot or cold environments.
  • Small size: When a small form factor is required, the battery’s energy density needs to be as high as possible.
  • Voltage: Higher voltage enables fewer cells to be required.
  • Lifetime costs: Replacement costs over time must be taken into account.

Trade-offs are inevitable, so you need to prioritize your list of desired performance attributes.

Choosing among primary lithium batteries

Lithium battery chemistry is preferred for long-term deployments, because its intrinsic negative potential exceeds that of all other metals. Lithium is also the lightest nongaseous metal and has the highest specific energy (energy per unit weight) and energy density (energy per unit volume) of all available battery chemistries. Lithium cells, all of which use a nonaqueous electrolyte, have a nominal operating voltage of between 2.7 V and 3.6 V. The absence of water allows lithium batteries to endure more extreme temperatures. Numerous primary lithium chemistries are available (table 1), including lithium iron disulfide (LiFeS2), lithium manganese dioxide (LiMnO2), and lithium thionyl chloride (LiSOCl2).

Table 1. Numerous primary lithium chemistries are available.

Consumer-grade lithium iron disulfide (LiFeS2) cells are relatively inexpensive, and deliver the high pulses required to power a camera flash. These batteries have limitations, including a narrow temperature range of -20°C to 60°C, a high annual self-discharge rate, and crimped seals that may leak.

Lithium manganese dioxide (LiMnO2) cells, including the popular CR123A, provide a space-saving solution for cameras and toys, as a single 3 V LiMnO2 cell can replace two 1.5 V alkaline cells. LiMnO2 batteries can deliver moderate pulses, but suffer from low initial voltage, a narrow temperature range, a high self-discharge rate, and crimped seals.

Bobbin-type lithium thionyl chloride (LiSOCl2) batteries are particularly well suited for WirelessHART devices that draw low average daily current. Bobbin-type LiSOCl2 batteries offer the highest capacity and highest energy density of any lithium cell, along with an extremely low annual self-discharge rate—less than 1 percent per year—enabling certain low-power applications to operate without maintenance for up to 40 years. Bobbin-type LiSOCl2 batteries also deliver the widest possible temperature range (-80°C to 125°C) and have a glass-to-metal hermetic seal.

These unique attributes make bobbin-type LiSOCl2 batteries ideally suited for industrial applications, such as tank level monitoring and asset tracking, where remote sensors must endure extreme temperature cycling. A prime example is the medical cold chain, where wireless sensors are required to monitor the transport of frozen pharmaceuticals, tissue samples, and transplant organs at carefully controlled temperatures as low as -80°C. Certain bobbin-type LiSOCl2 batteries have been proven to operate successfully under prolonged test conditions at -100°C, which far exceeds the maximum temperature range of alkaline cells and consumer-grade lithium batteries.

Bobbin-type LiSOCl2 batteries are also used in virtually all meter transmitter units (MTUs) in advanced metering infrastructure/automatic meter reading (AMI/AMR) metering applications for water and gas utilities. These MTUs are often buried outside in underground pits and subjected to extreme temperatures. Extended battery life is essential to AMI/AMR metering applications, because any large-scale system-wide battery failure could create chaos by disrupting billing and customer service. To preempt this type of disruption, utility companies demand the use of bobbin-type LiSOCl2 batteries for their ability to operate for decades.

Battery operating life is largely influenced by the cell’s annual energy usage, along with its annual self-discharge rate. For this reason, many devices that use the WirelessHART protocol are designed to conserve energy by operating on a very low current. To further extend battery life, these devices operate mainly in a “sleep” mode that draws little or no current, periodically querying for the presence of data and awakening only if certain preset data thresholds are exceeded. It is not uncommon for more energy to be lost through annual battery self-discharge than through actual battery use.
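That energy budget can be illustrated with a rough life estimate. The numbers below are assumptions for a hypothetical cell (19 Ah capacity, 10 µA sleep current, one 10-second 50 mA transmission per day), not the specification of any real product:

```python
def estimated_life_years(capacity_ah, sleep_ma, active_ma,
                         active_s_per_day, self_discharge_per_year):
    """Rough life estimate: average load current from the sleep/active
    duty cycle, plus capacity bled off each year by self-discharge,
    subtracted year by year until the cell is exhausted (capped at 100)."""
    avg_ma = sleep_ma + active_ma * (active_s_per_day / 86400.0)
    load_ah_per_year = avg_ma / 1000.0 * 24.0 * 365.0
    remaining, years = capacity_ah, 0
    while years < 100:
        yearly_loss = load_ah_per_year + remaining * self_discharge_per_year
        if remaining < yearly_loss:
            break
        remaining -= yearly_loss
        years += 1
    return years

# Same duty cycle, two self-discharge rates: 0.7%/year (high-quality
# bobbin-type LiSOCl2) versus 3%/year (a lesser-quality cell).
print(estimated_life_years(19.0, 0.010, 50.0, 10.0, 0.007),
      estimated_life_years(19.0, 0.010, 50.0, 10.0, 0.03))
```

With this duty cycle the self-discharge term dominates the load term, which is why cell quality, not usage, often sets the operating life.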

When specifying a bobbin-type LiSOCl2 battery, be aware that battery operating life can vary significantly based on how the cell was manufactured and the quality of its raw materials. For example, the highest-quality bobbin-type LiSOCl2 cells can have a self-discharge rate as low as 0.7 percent annually, thus retaining roughly 70 percent of their original capacity after 40 years. By contrast, a lesser-quality bobbin-type LiSOCl2 cell can have an annual self-discharge rate as high as 3 percent, causing roughly 30 percent of available capacity to be lost to self-discharge every 10 years.
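The capacity figures above follow directly from the annual self-discharge rate. A quick sketch of the arithmetic (note that the article's round figures treat the loss as roughly linear, while compounding each year's loss against the already-reduced remaining capacity is slightly more favorable):

```python
# Compound the annual self-discharge rate to see how much capacity
# survives after a given number of years.

def retained(rate_per_year: float, years: int) -> float:
    """Fraction of original capacity left after `years` of self-discharge."""
    return (1 - rate_per_year) ** years

# 0.7% per year over 40 years: roughly 70-75% of capacity retained.
print(f"0.7%/yr after 40 yr: {retained(0.007, 40):.1%}")

# 3% per year over 10 years: roughly a quarter of capacity gone.
print(f"3%/yr after 10 yr:   {retained(0.03, 10):.1%}")
```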

High pulse requirements

Standard bobbin-type LiSOCl2 cells are not designed to deliver high pulses, a limitation that can be overcome by combining a standard bobbin-type LiSOCl2 cell with a hybrid layer capacitor (HLC). The standard LiSOCl2 cell delivers the low background current needed to power the device during sleep mode, while the HLC works like a rechargeable battery to store and deliver the high pulses needed during data interrogation and transmission.

Alternatively, supercapacitors can be used to store high pulse energy in an electrostatic field. Supercapacitors are used in many consumer products, but are generally not recommended for industrial applications because of inherent performance limitations, including an inability to provide long-term power, linear discharge qualities that do not allow the use of all available energy, low capacity, low energy density, and high annual self-discharge rates (up to 60 percent per year). Supercapacitors linked in series also require cell-balancing circuits that draw additional current.

Opportunities for energy harvesting

A growing number of HART-IP connected devices are proving to be well suited for energy harvesting, with Li-ion rechargeable batteries being used to store the harvested energy. Several considerations go into the decision to deploy an energy-harvesting device, including the reliability of the device and its energy source, the expected operating life of the device, environmental parameters, size and weight restrictions, and the total cost of ownership. Photovoltaic cells are commonly used in HART-enabled applications. In certain situations, energy can also be harvested from equipment vibration or from radio frequency/electromagnetic signals.

Consumer-grade rechargeable Li-ion cells may be a sufficient solution if the device is easily accessible and needs to operate for no more than five years and 500 recharge cycles within a moderate temperature range (0°C to 40°C). However, if the wireless device will be used in a remote location or in extreme temperatures, then the application will likely require an industrial-grade Li-ion battery that can operate for up to 20 years and 5,000 full recharge cycles, with an expanded temperature range of -40°C to 85°C (table 2).

Table 2. For remote locations or extreme temperatures, industrial-grade lithium-ion batteries are usually required.

Another major advantage of an industrial-grade rechargeable Li-ion cell is its ability to deliver the high pulses (5 A for an AA-size cell) needed to support advanced, two-way communications. These ruggedly constructed cells also have a hermetic seal that is superior to the crimped seals on consumer-grade rechargeable batteries, which may leak.

Foundation of IIoT

The development of the HART-IP and WirelessHART communications protocols has created a growing need for battery-powered solutions that can operate without maintenance for decades and provide reliable, secure, and seamless interoperability between legacy technologies and the latest generation of wireless devices. These HART-enabled technologies form a critical foundation for the IIoT, which promises to revolutionize modern industrial automation.

Technology convergence and growing requirements for interoperability are currently being supported by the most recent bobbin-type LiSOCl2 batteries, including hybrid cells that can deliver the high pulses required for advanced, two-way communications. There is also growing demand for industrial-grade rechargeable lithium-ion batteries that offer a long-term power supply for energy-harvesting applications. Together, these advanced battery chemistries offer a wide range of reliable, long-term power design options for HART-connected devices.

About the Author
Sol Jacobs, vice president and general manager of Tadiran Batteries, has more than 30 years of experience in developing solutions for powering remote devices. His educational background includes a bachelor’s degree in engineering and an MBA.



A version of this article was also published in InTech magazine.


Low Cost Test Rig for a Standalone Wind Energy Conversion System


This post is an excerpt from the journal ISA Transactions. All ISA Transactions articles are free to ISA members, or can be purchased from Elsevier Press.


Abstract: In this paper, a contribution to the development of a low-cost wind turbine (WT) test rig for stator fault diagnosis of a wind turbine generator is proposed. The test rig is developed using a 2.5 kW, 1750 RPM DC motor coupled to a 1.5 kW, 1500 RPM self-excited induction generator interfaced with a WT mathematical model in LabVIEW. The performance of the test rig is benchmarked against already proven wind turbine test rigs. In order to detect stator faults using non-stationary signals in a self-excited induction generator, an online fault diagnostic technique based on DWT multi-resolution analysis is proposed. It has been experimentally proven that, for varying wind conditions, wavelet decomposition allows good differentiation between faulty and healthy conditions, leading to an effective diagnostic procedure for wind turbine condition monitoring.

 Free Bonus! To read the full version of this ISA Transactions article, click here.

Join ISA and get free access to all ISA Transactions articles as well as a wealth of other technical content, plus discounts on events, webinars, training & education courses, and professional certification.

Click here to join … learn, advance, succeed!


© 2006-2018 Elsevier Science Ltd. All rights reserved.
