Webinar Recording: The Growing Cybersecurity Threat to Natural Gas Distribution Systems

This guest blog post was written by Pierre Dufour, a global product marketing manager at Honeywell Process Solutions. This post was written in conjunction with a co-hosted webinar on cybersecurity threats to natural gas providers. Click this link to watch the webinar replay.

 

As I observe today’s natural gas market, I see companies under pressure from many forces in the world. Among these is the multiplicity of computer and communications systems that must be protected from those who would do harm to gas transmission and distribution capabilities.

Natural gas is the foundation fuel for a clean and secure future, providing benefits for the economy, environment and energy security. Alongside the economic and environmental opportunities of natural gas, there comes great responsibility to guard vital distribution assets from cyber attack.

In a connected world, with increasingly sophisticated electronic threats, it is unrealistic to assume gas delivery systems are isolated or immune from various forms of electronic compromise.

Relevant operational and business data are available in many places on the gas grid, most of the time. Companies want it to be as easy as possible to take this information and make it useful. This includes solutions that regularly pull and store relevant gas meter data in a secure cloud. Gas metering data must also be collected more frequently and in smaller increments.

Leading automation suppliers provide advanced gas measurement and data management solutions to the natural gas industry. These solutions provide seamless connectivity and round-the-clock access to critical data and diagnostics. Companies also employ the capabilities of the Industrial Internet of Things (IIoT) to do automated meter reading. But, they need to do more than just collect reams of data for billing and back office analysis. Gas operators must be able to make decisions and take action at every level of their distribution system, optimizing analytics where it makes sense and enabling multiple applications to run on edge devices to solve problems in new ways.

Learn how to improve the overall security of your metering installed base and reduce the chances of vulnerabilities being exploited. Watch the co-hosted webinar recording presented by Pierre Dufour of Honeywell and Steve Mustard, ISA leader and industrial security expert. Click this link to watch the webinar replay.

In any IIoT-based communication system, it is essential to ensure that sensitive information reaches its intended recipient, and that it cannot be intercepted or understood by a malicious individual or device. A cyber attack on devices that control the gas grid could result in disruption of operations or damaged equipment. Any device or system controlled by network communication that “faces” the Internet is at risk of being hacked.

Natural gas firms require a fully integrated, end-to-end technology platform for gas transmission and distribution. This platform should be based on a single design standard and follow strict cybersecurity and information technology data security guidelines uniformly across all components, so that there is no weak link for cyber criminals to exploit.

It is crucial to implement cybersecurity solutions that are specifically intended to protect sensitive gas consumption information at both the data storage and data transfer levels. For example, there are security advantages to deploying an architectural design that ensures integrated, low-power cellular modems operate on the network for the shortest possible time – perhaps only a few minutes per day. This greatly reduces cyber vulnerabilities compared to approaches where modems remain on continuously.

In conclusion, natural gas providers are seeking to run a better business by implementing smarter, more responsible solutions for the customers they serve. Key to this effort is protecting all critical gas metering and data management assets from cyber threats. There is no substitute for sound, well-engineered cybersecurity processes, which reduce risks, mitigate hazards, and keep sensitive operational and business data safe and secure.

About the Author
Pierre Dufour is a global product marketing manager at Honeywell Process Solutions. He has been with Honeywell for over 20 years. He holds an engineering degree in electronics and an MBA. Pierre has worked in multiple departments including Technology, Product Specialization and Marketing. He is based out of the Honeywell Mercury Instruments site in Cincinnati, Ohio.

Connect with Pierre:

LinkedIn
Industrial Risk Avoidance: Focus on Proactive Instead of Reactive Maintenance

This post was written by Greg Sumners, president of North American business for Beamex.

Many times, I meet process manufacturers who are satisfied with the status quo of their plant’s calibration system and have the “if it’s not broke, don’t fix it” mentality. To this, I say, have you ever thought of fixing it before it breaks? Then, I see the wheels starting to turn. Until a new regulation is created or a corporate governance mandate swoops in, what is the motivation to change processes or tools? It is just easier, less risky, and less expensive to stick with what you’ve got and know. Or is it?

When smartphones first appeared on the market, early adopters embraced the concept and started using them as tools to conduct business. Now, not too many years later, many large, successful companies equip their employees with smartphones to communicate. The situation within the process manufacturing environment is similar. Eventually, every plant will rely on an integrated calibration system as a tool to communicate, document, and store calibration data to run their plants in the most efficient manner—it is just a matter of time.

It is a fact that transforming a plant’s calibration system from single-function calibrators with a paper-based documentation system, or even manual entry, to a fully automated system, composed of multifunction, documenting calibrators and calibration management software, improves safety and quality and lowers costs. Today’s calibration management software integrates into computerized maintenance management systems and even enterprise resource planning systems. Management has real-time insight into a plant’s health. So, what is stopping everyone in the process industry from going fully automated? Well, a system project of this scope can be time- and resource-consuming, with a high risk of failure if it is not properly planned, managed, and supported.

The key word to focus on is project. By definition, a project is an individual or collaborative enterprise that is carefully planned and designed to achieve a particular aim. When a plant’s calibration system is upgraded, it should be considered a project. Just like any other plant system, it should be strategically planned and executed to ensure it creates system collaborations and synergies within a plant, not system separations and alienations.

Where and how should you start? Taking a project approach to implementing a calibration process change can be a daunting task, but it does not have to be. Reasonably and logically, there are many factors to consider, including the immediate business needs and how to increase efficiency, lower costs, improve safety, and develop long-term sustainability. These are all important points to evaluate, but there are also other, even more significant, somewhat intangible elements that influence success in real practice—like the company culture, processes, and everyday ease of use. If you develop a solution that fits the company’s culture and meets its needs, all the “logical” factors should ultimately be addressed.

Without going into all the exact details of what a project approach should entail, it is most important to define your goals and purpose. A solution provider or vendor that offers proven expertise and experience to help you accomplish your goals is usually more valuable and cost-effective than attempts to reinvent the wheel. Take advantage of the solution provider’s knowledge and work closely together, as a team, throughout the process to develop clear objectives and accomplish them.

So, what is the risk of not fixing your calibration system before it breaks? It is all about proactive maintenance instead of reactive maintenance. Lower your risks by closing the gap on room for errors and failures that can cause plant shutdowns, lost production time, poor product quality that could injure consumers, or a hazardous environment that could harm plant staff. All of these could affect the organization’s employees, customers, time, and money—four of any organization’s most precious commodities. Fix it before it breaks.

About the Author
Greg Sumners joined Beamex Inc. in 2008 as president, responsible for the North American business. An engineering graduate of Oxford, he furthered his education at Henley Management College in England. Sumners began his career as an industrial engineer and became a practitioner of the Institute of Management Services. Moving into management, he held purchasing, information technology, sales, and manufacturing positions.

Connect with Greg:
LinkedIn | Email

A version of this article originally was published at InTech magazine.

How to Improve Industrial Productivity with Loop Calibration

This post is authored by Ned Espy, technical director at Beamex.

The typical approach to calibration has been to regularly test instrumentation that affects successful control, safe operation, quality, or other relevant criteria. In most cases, scheduling is conservative, and methods at a particular site have evolved over time. Instrument technicians follow practices that were set up many years ago. It is not uncommon to hear, “this is the way we have always done it.” Measurement technology continues to improve and is getting more accurate. It is also getting more complex—why test a fieldbus transmitter with the same approach as a pneumatic transmitter? The standard five-point, up-down test does not always apply to today’s more sophisticated applications.

In general, calibration tasks require special skills and an investment in test equipment. Sophisticated, highly accurate, multifunctional calibration equipment, incorporating a high-accuracy field calibrator and fieldbus communications for industrial networks (including HART, FOUNDATION Fieldbus, and Profibus PA instruments), is required to effectively calibrate advanced instrumentation, such as multivariable and smart/digital instruments. With the complexity of instrumentation, there is more pressure than ever on the calibration technician. Technicians with more than 30 years of experience at a single plant are retiring and cannot easily be replaced by outsourcing or by younger technicians. Documentation requirements are becoming much more common for improved quality, environmental monitoring, and adherence to government regulations. Calibration software is often required to store and analyze detailed data as well as generate calibration certificates and reports. All of these factors should cause companies to scrutinize and evaluate current practices and to consider simpler, more efficient test methods that ensure proper plant operation.

While not a new concept, there are advanced calibration techniques based on loop testing. In some cases, it is the best practice to perform individual instrument calibration to achieve maximum accuracy (e.g., custody transfer metering). However, there are viable methods to test a loop end to end. If readings are within acceptable tolerances, there is no need to break into the loop for individual instrument testing. To be effective, a common-sense approach is required with the goal to minimize downtime and maximize technician efficiency while ensuring reliable control and maintaining a safe work environment.

Loop basics: What is a loop?

The idea of a loop can mean different things to different people based on their work background or industry. In practice, a loop is simply a group of instruments that in combination make a single measurement or affect a control action in a process plant. A typical temperature example is a temperature element (resistance temperature detector [RTD] or thermocouple [T/C]) that is connected to a transmitter, which is connected in series to a local indicator, and finally to a control system input card (distributed control system [DCS] or programmable logic controller [PLC]). The signal is then displayed on one or more control panels, and the measurement is ultimately used to control the process.

When evaluating a loop for testing, an important distinction is whether the loop can be tested from end to end or only in part. For an end-to-end test, in the example temperature loop (figure 1), the temperature element needs to be removed from the process and placed in a dry block or temperature bath to simulate the process temperature. The final displayed measurement is compared to the simulated temperature, and the error interpreted. An end-to-end loop test is the best practice; if an accurate temperature measurement is delivered to the control process, it does not matter how the individual instruments are performing. The DCS/PLC value is what is used to make any control changes, alarms, notifications, etc. However, if the loop measurement has a significant error, then the error of each instrument in the loop should be checked and corrected one by one to bring the final measurement back into good operation.

In some cases, it is not possible to make an end-to-end loop test. In the example loop, it may be extremely difficult or expensive to remove the probe from the process or to insert the probe into a temperature block or bath. If this is the situation, then a partial-loop test can be performed. The temperature element is disconnected from the transmitter, and a temperature calibrator is used to simulate a signal into the transmitter. As in the end-to-end loop test, the final displayed measurement is compared to the simulated temperature, and the error interpreted, etc. While the loop is broken apart, it would be good to check the installed temperature element; perhaps a single-point test could be done by temporarily inserting a certified probe or thermometer into the process and comparing that measurement against the element’s output when connected to a calibrator.

Analysis of loop error

Error limits can be somewhat difficult to determine, and many mistakes are made when it comes to setting them. One common judgment is to base process measurement tolerance on a manufacturer’s specification. Some manufacturers are better than others, but the marketing department might have as much to say about an accuracy specification as a research and development engineer. Furthermore, accuracy statements are generally off-the-shelf values that do not include such things as long-term stability (typically a significant error component), repeatability, and temperature effects. Sensor and transmitter accuracy should inform the process measurement tolerance, not dictate it.

The best method is to have a collaborative discussion between the control engineer, the quality or safety engineer, and the instrument engineer to set a realistic and practical tolerance. It is very important to keep in mind that the tighter the tolerance, potentially, the more expensive it will be to both make and maintain the measurement. The balance falls somewhere between the required tolerances to have efficient control, the best quality, and the highest safety versus minimizing downtime, maximizing technician efficiency, and utilizing optimum test equipment. In practice, it is common to see error evaluation as a percent of span. However, this does not easily apply to flow measurements (typically a percent of reading or rate) or analytical instruments (e.g., pH or parts per million).
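The two conventions mentioned above can be made concrete with a minimal sketch in Python (the function names are illustrative, not from any calibration package):

```python
def error_pct_span(reading, expected, span_low, span_high):
    # Error expressed as a percent of instrument span,
    # the convention commonly used for temperature and pressure loops.
    return 100.0 * (reading - expected) / (span_high - span_low)

def error_pct_reading(reading, expected):
    # Error expressed as a percent of reading (or rate),
    # the convention typically used for flow measurements.
    return 100.0 * (reading - expected) / expected
```

For a 151.0ºF reading against a 150.0ºF expected value on a 50ºF to 250ºF span, the same 1ºF deviation is 0.5 percent of span but about 0.67 percent of reading, which is why the two conventions cannot be used interchangeably.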

One good way to look at error is to think in terms of the loop’s input engineering units. For the temperature loop example (figure 1), the discussion should focus on the minimum temperature error that creates the highest operating efficiency without compromising quality or safety and that can be realistically measured by the calibration or test equipment. One other complication is that a loop is no more accurate than the least accurate component contributing to the measurement. Today’s transmitters are extremely accurate and provide excellent performance; however, temperature sensors are typically not nearly as accurate and, depending on the process, can exhibit significant drift. If a typical RTD is rated to ±0.5ºF, a control engineer cannot expect better than ±0.5ºF to control the process. In reality, even though the transmitter and DCS analog-to-digital conversion can be significantly more accurate, recognize that these components add additional error to the loop measurement. A common practice to compute loop error is a root-sum-square (RSS) calculation. For the temperature loop example, assume the RTD sensor is rated ±0.5ºF, the transmitter is ±0.10 percent of span (span = 50ºF to 250ºF, or ±0.2ºF), and the DCS input card is ±0.25 percent of span (±0.5ºF). The loop error is:

√(0.5² + 0.2² + 0.5²) = √0.54 ≈ ±0.73ºF

The most conservative approach is to simply sum the errors (0.5 + 0.2 + 0.5, or ±1.2ºF). The final decision should also take into account the criticality of the measurement along with the impact the error will have on the process and the risks involved.
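Both combinations of the example component errors are a few lines of arithmetic, sketched here in Python with the values from the example loop:

```python
import math

# Component errors from the example temperature loop, in degrees F
rtd_err = 0.5                    # RTD sensor rating: +/-0.5 F
xmtr_err = 0.0010 * (250 - 50)   # transmitter: 0.10% of a 200 F span -> 0.2 F
dcs_err = 0.0025 * (250 - 50)    # DCS input card: 0.25% of span -> 0.5 F

# Most conservative: simple sum of the component errors
worst_case = rtd_err + xmtr_err + dcs_err

# Root-sum-square combination, the common statistical practice
rss = math.sqrt(rtd_err**2 + xmtr_err**2 + dcs_err**2)
```

Run against the example values, `worst_case` comes to ±1.2ºF and `rss` to roughly ±0.73ºF, matching the two figures discussed above.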

The discussion should not end here. The control engineer will push for the lowest number possible (±0.75ºF), but there are other factors. An evaluation of the test equipment is required. The typical temperature block has an accuracy anywhere from 0.3ºF to 1.0ºF, and it is good practice to maintain a 4:1 accuracy ratio between the process measurement tolerance and the test equipment. To make a proper temperature simulation, a reference probe (a reference-grade or secondary-standard resistance thermometer) and an accurate readout need to be used to improve measurement error to 0.1ºF to 0.2ºF. This could require a significant investment in test equipment, depending on the industry. Note that more accurate test equipment also has a higher maintenance cost. For example, what if the quality engineer reports that an error of ±5ºF is all that is needed to make a good product? Why impose an unnecessary burden on the instrumentation department? If the control engineer has no objection (after receiving input from reliability, safety, etc.), a practical approach is to set a loop tolerance of ±2.0ºF, assuming the temperature block is accurate to ±0.5ºF over the range of 50ºF to 250ºF. Although not as accurate as the instrumentation in the loop, it is better than 2:1 for what is required to make a quality product and allows the calibration technician to use a simple combination of equipment.
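The accuracy-ratio check described above reduces to a one-line helper; this is a sketch only, and the 4:1 figure is the common guideline rather than a hard rule:

```python
def accuracy_ratio(process_tolerance, test_accuracy):
    # Ratio of the tolerance being verified to the accuracy of the
    # reference equipment; 4:1 is the commonly cited guideline.
    return process_tolerance / test_accuracy
```

With the practical choices from the example, a ±2.0ºF loop tolerance checked by a ±0.5ºF temperature block gives a 4:1 ratio, while verifying a ±0.75ºF tolerance with the same block would give only 1.5:1 and would force an upgrade to reference-grade equipment.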

Although this is just one scenario, it is a good practice to determine the “weakest link” in the loop and not set an unrealistic performance tolerance. When looking at fuel costs or process efficiencies, this type of analysis could easily justify a larger investment in test equipment and frequent testing if the cost or risk of error is high. With good judgment, companies can strike a balance and avoid unreasonable testing requests, meeting lean manufacturing objectives.

Loop testing examples

Temperature loop test example

If a process plant has hundreds of temperature loops like the example (figure 1), loop testing offers clear benefits. While it takes time to make a test with a temperature block, the calibration technician is effectively checking two, three, or more instruments that make up the loop. With this approach, it might make sense to invest in more rugged or more accurate probes to minimize failures. Depending on the process, more frequent testing may be required, but in any case, management will have a high level of confidence in the accuracy of the measurements. With repeatable work methods, technicians will recognize common issues, and there should be efficiency gains. If calibrations are documented, companies can analyze test cycles and extend, or at least optimize, test intervals. There will always be a need for troubleshooting and emergency repairs, but the loop cycle should be reset whenever such an event occurs. This methodical approach effectively provides a “touch” to every instrument in the plant while minimizing disturbances to loop integrity and delivering the very best measurements to the control system.

Figure 1. Example temperature loop

Multivariable loop example

Flow measurements can be very demanding and often require very tight performance tolerances. In the case of natural gas metering or steam metering, a small error can cause significant errors in billing, creating extra scrutiny by management. A common example of orifice metering is to compensate for the differential pressure measurement by factoring in the process temperature and static pressure. The DCS can process these three measurements to make an accurate flow calculation. However, there are now differential pressure flowmeters (i.e., multivariable) with an integrated process RTD and static pressure measurement that have a compensated flow measurement output; the flow calculation is built into the smart transmitter (figure 2).

Figure 2. Example multivariable loop

If the control system independently processes the three measurements, typical test procedures apply, but a loop test should be done to verify the accuracy of the compensated flow reading. Multivariable meters appear to be complex. However, by identifying the measurement components, a loop test can be set up to quickly verify that the meter is correctly measuring the flow to a desired percent of reading accuracy. For example, consider a steam application:

  • Input pressure range: 0–250 inH2O
  • RTD input range: –200ºF to +800ºF
  • Normal process temperature: 450ºF
  • Static pressure input range: 0 to 800 psi
  • Ambient barometric pressure: 14.735 psia (average local barometric pressure in 2012)
  • Output: 4–20 mA (typical range of 0–1500 lbs/hr, ±1 percent of reading)

For this example, set up a nonlinear test where the expected pounds per hour (lbs/hr) output is calculated for specific pressure input test points assuming a constant, typical 450ºF temperature and a static pressure of 14.735 pounds per square inch (psi), since the low side of the transmitter is vented to atmosphere for testing. Consulting with the control engineer, expected measurements might be:
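As an illustration only, a table of expected outputs could be generated from the simplified square-root characteristic of a differential pressure meter. This sketch assumes flow is proportional to √ΔP and deliberately ignores the temperature and static pressure compensation a real multivariable meter performs, so the numbers are not the values a control engineer would actually supply:

```python
import math

MAX_DP = 250.0     # inH2O, input differential pressure range
MAX_FLOW = 1500.0  # lbs/hr, top of the output range

def expected_flow(dp_inh2o):
    # Simplified square-root characteristic: flow proportional to sqrt(dp).
    # Illustration only; the real compensated calculation also factors in
    # process temperature and static pressure.
    return MAX_FLOW * math.sqrt(dp_inh2o / MAX_DP)

# Five-point up-down test at 0/50/100/50/0 percent of input span
test_points = [0.0, 125.0, 250.0, 125.0, 0.0]
expected_table = [(dp, round(expected_flow(dp), 1)) for dp in test_points]
```

Note how nonlinear the relationship is: 50 percent of input span (125 inH2O) corresponds to roughly 71 percent of full flow, which is exactly why a linear five-point table cannot be reused for this kind of loop.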

Instruments are available that have unique features for testing multivariable transmitters. The preceding nonlinear table can be entered into software for a specific tag and downloaded into the test instrument. Additionally, the three tests can be performed on the process variables versus each HART value that is used in the compensated output calculation. The only additional test tool required would be a temperature block.

The loop test should simply be a five-point check of inches of water (inH2O) versus pounds per hour at 0 percent, 50 percent, 100 percent, 50 percent, and 0 percent. If all the measurements fall within 1 percent of reading, the technician can move on to the next instrument. If the loop test result is marginal or a failure, then three tests of differential pressure versus HART, RTD temperature versus HART, and static pressure versus HART will need to be performed and adjusted as needed. After testing the three variables that plug into the flow calculation, a quick check of the 4–20 milliampere (mA) output should be done as well. Assuming one or more of the inputs required adjustment, a final “as left” loop test of the improved flow output will document that the meter is in good operating condition. It saves time to focus on the nonlinear input versus flow output for a multivariable loop. This results in a much simpler maintenance task for the instrument technician.
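The pass/fail decision at each test point is a simple percent-of-reading tolerance check. A hypothetical helper is sketched below; the zero-flow fallback is an assumption of this sketch, since percent of reading is undefined at a zero expected value:

```python
def loop_test_passes(measured, expected, pct=1.0, zero_band=0.0):
    # Pass if the measured flow is within +/- pct percent of the
    # expected reading. At zero expected flow, percent of reading is
    # meaningless, so fall back to an absolute band (assumption).
    if expected == 0.0:
        return abs(measured) <= zero_band
    return abs(measured - expected) <= pct / 100.0 * abs(expected)
```

At full scale, for example, a reading of 1510 lbs/hr against an expected 1500 lbs/hr is within the ±1 percent limit (±15 lbs/hr) and passes, while 1520 lbs/hr fails and would trigger the three individual HART tests.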

Other loop examples

A pressure loop can easily be checked by applying a pressure to the input transmitter and comparing it to the DCS or final control reading. This can be done very quickly and can be much more effective than testing just the transmitter. Any batch control loop should be evaluated for loop testing with the goal to make work more efficient for the technician while verifying that control measurements are as accurate as possible.

This same technique should be considered for control valve testing, where a mA input into the I/P is compared to a mA output (feedback). This would also apply to smart control valve positioners using a communicator to step the valve and monitor the digital feedback. By making 10 percent test points, a quick test on a valve will verify it is operating correctly. In most cases, the valve will pass, and the technician can make a quick round of testing critical control valves.
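The 10 percent test points map directly onto the 4–20 mA signal; a minimal sketch (the function name is illustrative):

```python
def ma_test_points(step_pct=10):
    # mA command values for a 0-100 percent valve stroke test in
    # 10 percent steps, over the standard 4-20 mA range.
    return [round(4.0 + 16.0 * p / 100.0, 2) for p in range(0, 101, step_pct)]
```

Stepping through these eleven commands (4.0, 5.6, ..., 20.0 mA) while logging the feedback signal gives a quick, repeatable record of valve travel versus command.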

An overlooked component of a flow loop is the primary element (orifice plates, annubars, or averaging pitot tubes). These are critical for proper flow measurement. They cannot be calibrated, but they should be inspected for damage or wear.

Another critical area where loop testing should be considered is the safety instrumented system (SIS). When the process is down, it is common to follow a script of testing procedures that can include calibration of single instruments. However, whenever possible, consider checking an entire loop where the integrity of a critical measurement can be verified, especially for temperature (using a block or bath) or pressure measurements. Also, it may be possible to perform quick and simple tests on an SIS while the process is running to ensure systems are operating properly.

Calibration for optimum control

Many, many process plants perform calibration by simply checking the transmitter. It takes time to use a temperature block or bath, but consider how important it is to test all the devices that make a given measurement. Transmitters are not the only devices that drift. Temperature probes drift due to thermal stress/shock and vibration or physical damage. DCS and PLC input cards drift as much or more than transmitters. If loops are not being tested, how can a good measurement be made? Without good measurements, how can optimum control, safety, reliability, and quality be ensured?

As instrumentation and automation evolve, so should the methods for calibrating instrumentation. Loop testing is not a new concept, but it is underutilized for instrumentation testing as an effective strategy. New integrated calibration devices enable flexible tests that meet a variety of applications and provide detailed documentation and electronic reporting.

By approaching the task of calibration with a fresh look, there are plenty of opportunities to do more with less and effectively “touch” every instrument in the plant more efficiently using loop calibration strategies. Logical and careful planning of loop testing strategies results in improved control performance without compromising quality, reliability, or safety of plant operations.

About the Author

Ned Espy has been promoting calibration management with Beamex for almost 20 years. Ned has helped develop best practices for calibration, with a focus on pressure, temperature, and multivariable instruments. He is a consistent editorial contributor to leading industry publications and has received significant recognition within the automation industry. Ned teaches calibration best practices and provides technical support to end users and the Beamex sales team in North America.


Connect with Ned:
LinkedIn

A version of this post originally was published at InTech magazine.

Automation Competency Model Helps Guide Future Technical Workforce

This post was written by Stephen R. Huffman, vice president, marketing and business development, at Mead O’Brien, Inc.

In 2007, an Automation Federation (AF) delegation briefed an audience at the Employment and Training Administration (ETA) on the people who practice automation careers in industry. Not long before our visit, the ETA, part of the U.S. Department of Labor (DOL), had worked with the National Institute of Standards and Technology (NIST) to develop a “competency model” framework based on the needs of advanced manufacturing. The ETA was eager to engage AF and ISA to use this tiered framework to develop a competency model for the automation profession.

After developing the preliminary model, hosting subject-matter expert (SME) meetings facilitated by the DOL to finalize our work, and then testing the model with several automation managers against their own criteria for validity, we rolled out the Automation Competency Model (ACM) to educators, government, and industry in 2008. Since then, it has been a tool for educators and parents to show students what automation professionals do; for management to understand the skill sets their employees need and to perform gap analysis in reviews; for program developers to create or alter curricula for effective education and training; and for lawmakers to understand how U.S. manufacturing can be globally competitive and the jobs needed to reach that goal.

In the lower tiers, the model identifies necessary soft skills, including personal effectiveness, academic, and general workplace competencies. Automation-specific work functions, related competencies, and references (e.g., standards, certifications, and publications) are detailed in tier 5. In short, the model stakes out our professional territory and serves as a benchmark for skill standards for all aspects of process and factory automation. Previously, parts of the academic community and some U.S. lawmakers and agencies had the misconception that industrial automation and information technology (IT) are synonymous. Although there has been some convergence between IT and operational technology (OT), much of that perception has changed. OT-based industrial automation and control systems (IACS) were a focus in the recent cybersecurity framework development organized by NIST in response to the presidential executive order on cybersecurity for critical infrastructure.

The ACM has been a great tool for the AF to use to draw new organizational members and working groups, who visualize the big picture in automation career development. Also, we are telling our story and forming partnerships with science, technology, engineering, and math (STEM) organizations such as FIRST and Project Lead the Way. Since forming in 2006, AF now has 16 members representing more than 500,000 automation-related practitioners globally. After two three-year critical reviews, the ACM is still the most downloaded competency model on the DOL website. As a result of our work in creating the ACM and the IACS focus in cybersecurity framework meetings, the DOL asked AF to review a heavily IT-focused Cybersecurity Competency Model. After adding IACS content and the philosophy of plant operation (versus IT) cybersecurity, the released model was a much stronger tool with wider applicability.

ISA, as a member of the American Association of Engineering Societies (AAES), presented the development of the ACM to AAES leadership as a way to provide tools for lifelong learning in the engineering profession. AF/ISA was once again invited to work with the DOL and other AAES member societies to lead in developing an Engineering Competency Model. The model framework and our experience in ACM development enabled us to identify the front-end skills, necessary abilities, knowledge to be developed, and academic prerequisites for any of the disciplines, plus industry-wide competencies from the perspective of all engineering-related plant functions: design, manufacturing, construction, operations and maintenance, sustainability and environmental impact, engineering economics, quality control and assurance, and environmental health and safety—with emphasis on cyber- and physical security, and plant safety and safety systems.

Now the societies dedicated to each vertical discipline listed in tier 5 will begin to identify all critical work functions, detail the competencies within each function, and note the reference materials. It is important for participants to see the big picture, consider the future, and keep an open mind; agreement typically comes easily when subject-matter experts (SMEs) participate with that mindset. Once the model through tier 5 is complete, job titles and job descriptions are created, and when the DOL accepts the model, the U.S. government officially recognizes these positions. We hope the emerging Engineering Competency Model will be a great tool for addressing the overall skilled-worker shortage. If the automation model is any indication, the new engineering model will have a large impact on achieving that goal.

About the Author
Stephen R. Huffman is vice president, marketing and business development, at Mead O’Brien, Inc., and chairman, government relations committee, at Automation Federation. He has a 40-year history of optimizing process systems, developing new applications, and providing technical education. He served as 2007 president of ISA.

 


A version of this article originally was published at InTech magazine.

How ISA and Automation Federation Leadership Helped Secure Industrial Control Systems

This post was written by Stephen R. Huffman, vice president, marketing and business development, at Mead O’Brien, Inc.

Technical leaders had the foresight to create the ISA99 standards committee back in 2002. They recognized the need for cybersecurity standards in areas outside the traditional concentrations of the time: information technology (IT), national security, and critical infrastructure. In the years that followed, a number of ISA99 committee members spent time and effort advocating, and even testifying on Capitol Hill, about our profession, which was not well defined, and about our cybersecurity efforts, which were often not distinguished from IT security.

When Automation Federation (AF) refocused its efforts in 2007 with both automation profession advocacy and industrial automation and control system (IACS) cybersecurity as two of its strategic imperatives, we ventured forth to Capitol Hill with a message and a plan. We found that, in general, our lawmakers equated process and industrial automation with "IT" and believed that IT was already addressing cybersecurity in terms of identity theft and forensics, and that the Department of Defense was handling cyberprotection for national security. For the next several years, AF built its story around cyberthreats in the operational technology (OT) area and how ISA99, through its series of standards, technical reports, and work group output, was providing guidance for asset owners, system integrators, and control system equipment manufacturers specifically for securing IACS.

The operating philosophy of IT cybersecurity differs sharply from that of OT cybersecurity. Although shutting down operations, isolating cybersecurity issues, and applying patches may work well to mitigate IT breaches, the same cannot be said for operating units running a real-time process. In short, it really is not feasible to "reboot the plant." The message resonated enough for us to help shape the Lieberman-Collins cybersecurity bill introduced in the Senate in 2012, but opposition (more political than substantive) doomed this first effort.

In 2013, the President issued Executive Order 13636 to enhance cybersecurity protection for critical infrastructure. It directed the National Institute of Standards and Technology (NIST) to establish a framework that organizations, regulators, and customers could use to create, guide, assess, or improve comprehensive cybersecurity programs. Of the more than 200 proposals organizations submitted in response to NIST's request, almost all were IT-based. The AF/ISA submittal took the perspective of operational technology, backed by the strength of the existing ISA99 set of standards. After five framework meetings of invited participants, including the AF "framework team," over the course of 2013, the OT and IACS contributors were much more successful in defining the needs, and the automation message was much better understood. NIST personnel who had shared legislative experience with AF on the 2012 Senate bill understood that private industry is a key piece of the cybersecurity and physical-security puzzle.

AF organized a series of NIST framework rollout meetings around the country in 2014, with attendees from the AF team, NIST, and the White House. The meetings were hosted by state manufacturing extension partnerships, NIST's state-level partners. After these meetings and more work with Senate lawmakers, a bipartisan Senate bill, the Cybersecurity Enhancement Act, was signed into law by the President in December 2014 (www.congress.gov/bill/113th-congress/senate-bill/1353). In summary, the act authorizes the Secretary of Commerce, through the director of NIST, to facilitate and support the development of a voluntary, consensus-based, industry-led set of standards and procedures to cost-effectively reduce cyberrisks to critical infrastructure. As you can imagine, ISA99, now ISA/IEC 62443, will play a more prominent role in securing the control systems of industry in the future through a public-private information-sharing partnership. Thanks go to the foresight and fortitude of the ISA99 standards committee.

About the Author
Stephen R. Huffman is vice president, marketing and business development, at Mead O’Brien, Inc., and chairman, government relations committee, at Automation Federation. He has a 40-year history of optimizing process systems, developing new applications, and providing technical education. He served as 2007 president of ISA.


 

A version of this article originally was published at InTech magazine.
