
## AutoQuiz: How to Read a Gauge Pressure Transmitter

AutoQuiz is edited by Joel Don, ISA’s community manager.

Today’s automation industry quiz question comes from the ISA Certified Control Systems Technician (CCST) program. Certified Control System Technicians calibrate, document, troubleshoot, and repair/replace instrumentation for systems that measure and control level, temperature, pressure, flow, and other process variables. Click this link for more information about the CCST program.

### A gauge pressure transmitter that measures the pressure in a 150# high-pressure steam header is mounted 6 feet below the center line of the header. The tap for the impulse line connects to the top of the header and rises 2 feet above the header center line, extends horizontally for 3 feet, and then drops down to the transmitter. In order to read the pressure in the steam header correctly, the transmitter output must be:

a) calibrated for suppressed zero, the suppression equal to 8 feet of liquid head pressure
b) calibrated for suppressed zero, the suppression equal to 6 feet of liquid head pressure
c) calibrated for elevated zero, the elevation equal to 8 feet of liquid pressure
d) calibrated for true zero
e) none of the above

At zero gauge pressure in the steam line, you are essentially suppressing (pushing) the transmitter output back down to zero after the system reaches equilibrium with 8 feet of impulse line full of liquid. The transmitter reads that impulse-line liquid head pressure plus any pressure exerted on top of the liquid. Six feet of liquid head suppression is incorrect because the impulse arrangement causes 8 feet of liquid head to accumulate: the 2-foot riser above the header center line plus the 6-foot drop down to the transmitter. Elevated zero is incorrect because that adjustment compensates for negative pressure offsets that occur when the transmitter is mounted above the zero reference point (the high-pressure tap) and liquid in the impulse line or a capillary system exerts a negative pressure.

The correct answer is A, calibrated for suppressed zero with the suppression equal to the 8 feet of liquid height that is in the impulse line.
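The head correction can be worked out numerically. In this sketch the condensate in the impulse line is treated as water (roughly 0.433 psi per foot of head); that density is an assumption, since a real calibration would use the actual fill fluid's specific gravity.

```python
# Working the quiz numbers: at zero header pressure the transmitter sees
# only the liquid column standing in the impulse line. The fill fluid is
# treated as water here (about 0.433 psi per foot of head); a real
# calibration would use the actual condensate specific gravity.

PSI_PER_FT_WATER = 0.433

rise_above_header_ft = 2    # tap riser above the header center line
drop_to_transmitter_ft = 6  # transmitter mounted below the center line

# Total liquid column acting on the transmitter at zero gauge pressure
head_ft = rise_above_header_ft + drop_to_transmitter_ft
suppression_psi = head_ft * PSI_PER_FT_WATER

print(f"Zero suppression: {head_ft} ft of liquid, about {suppression_psi:.2f} psi")
```

The 3-foot horizontal run adds no head, which is why only the vertical rise and drop appear in the calculation.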

Reference: Thomas A. Hughes; Measurement and Control Basics, 5th Edition, ISA Press.

## Can Risk Analysis Really Be Reduced to a Simple Procedure?

This guest blog post is part of a series written by Edward J. Farmer, PE, author of the new ISA book Detecting Leaks in Pipelines. If you would like more information on how to purchase the book, click this link. To download a free excerpt from Detecting Leaks in Pipelines, click here. To read all the posts in this series, scroll to the bottom of this post for the links.

As with most complex, multidimensional process management ventures, a good result usually begins with good science and robust methodology. Consequently, most successful operators have developed proprietary methods of identifying, assessing, and mitigating risk. This methodology is usually based on experience and the developed wisdom of people who have "been there and done that."

People with such breadth of experience and depth of practice are becoming scarce, due to specialization within an ever more complex industry coupled with inevitable retirements. Anticipating, assessing, and mitigating risks with confidence becomes more difficult year by year.

Situations such as this spawn ventures into science and the development of concepts and methodical processes that aid evaluation and ensure a path toward dependable results. In pipeline safety, two divergent approaches have emerged, referred to as the "deductive method" and the "inductive method."

The deductive method, popularized by the tales of Sherlock Holmes, begins with an occurrence of a certain character at a certain place and time, from which one deduces how it could have happened. In risk assessment language, an event and its cause are linked by a series of happenings, a "causality chain," which somehow connects them. Often it is a brutally short chain: a backhoe hits a buried line. Sometimes it is fascinatingly complex, such as a pressure safety valve failing to protect some item of vulnerability, resulting in a sort of cascading failure.

The inductive method asks what causality chain or initiating occurrence could cause a presumed or actual accident or leak. Analysis is often based on causal experience as described in U.S. Department of Transportation accident statistics. It involves working backward from the leak, considering all the potential linkages and branches in the causality chain to identify a source, or set of events, that could produce the subject outcome. This is the traditional approach to safety analysis, and it flows from the experience, wisdom, insight, intuition, and sometimes the creativity of the investigators.

The consequences that stem from some accidents or failures are far too serious to leave to intuition. The deductive method was developed by the U.S. Nuclear Regulatory Commission (NRC) in NUREG-0492, the "Fault Tree Handbook." It applies the systematic methodology of fault tree analysis to the often-complex causality chains that lead to accidents. The process involves identifying information, events, or circumstances along the path from normal operation to failure.

### The analytical process

The process begins with the observation that a system is a deterministic entity comprising an interacting collection of discrete elements. Analysis of a particular output of the system involves analysis of the discrete elements involved in causing, facilitating, or enabling the occurrence. From this analysis it is possible to assess causality and even occurrence probability. It is a correct, fundamental, documentable, and highly pedantic method of producing repeatable analyses. This has made it attractive in many important or high-risk ventures with potentially serious or even catastrophic consequences.

The only drawback is the work involved. It requires a thorough understanding of the process and a detailed understanding of its components, as well as some understanding of the statistical nature and impact each of these systems, subsystems, and occurrences might have on the likelihood of a particular result. With all that done, however, it supports a quantitative determination of the potential outcome of a particular occurrence or combination of occurrences.
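To make that quantitative step concrete, here is a minimal sketch of combining basic-event probabilities along a causality chain. The events and probability values are hypothetical, and the basic events are assumed independent; real pipeline analyses use far richer models and data.

```python
# Minimal sketch of quantifying a causality chain. All event names and
# annual probabilities below are hypothetical, and basic events are
# assumed independent of one another.

# A leak occurs if excavation damage happens, OR if overpressure occurs
# AND the pressure safety valve fails to open (a two-link causality chain).
p_excavation_hit = 0.002
p_overpressure   = 0.01
p_psv_fails      = 0.05

# AND combination: both events must occur together
p_unrelieved_overpressure = p_overpressure * p_psv_fails

# OR combination for independent events: 1 minus the product of complements
p_leak = 1 - (1 - p_excavation_hit) * (1 - p_unrelieved_overpressure)

print(f"Estimated annual leak probability: {p_leak:.5f}")
```

The value of the method lies less in any single number than in forcing every link of the chain to be identified and documented.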

These two approaches have been described using words like "vastly different," "night and day," and "conjecture vs. determinism," among many others along the same lines. One skilled and experienced person described the inductive method as the "wild-ass guess" and the deductive method as "a lifetime's work." The NRC's interest in the pedantic, deterministic, yet highly repeatable methodology of the deductive approach is understandable. There are a small number of systems that require such analysis and a large cast of people to do them.

The potential consequences are huge, justifying the cost and inconvenience of assembling these people into analysis teams that spend months, perhaps years, achieving a precise understanding of the systems and the interactions of their component parts. This often requires subject-matter experts on various equipment and process features who can be hard to find, combine, and schedule.

On the other hand, many industrial systems are simpler, and less abstruse, than the complex systems the NRC envisioned. An adequate understanding of them, along with sufficient information, may not be as rare, and the logic that comes from experience and even "common sense" may be not only pertinent but adequate.

### The analysis

Once upon a time, a critical project was deemed to require the best possible risk analysis. It required several subject-matter experts, lots of hard-to-obtain data, and an analysis team capable of understanding, assembling, and presenting the results. All that work provided very useful insights, but nothing the experienced people had not already intuited. It took about three times as long and cost about five times as much as the usual company approach.

In the end, experienced reviewers assessed the results as "as expected." This seemed to indicate that the method was a slow and expensive way to reach the same conclusion that experience and logic might predict. On the other hand, the results were clearly methodical, thorough, and repeatable, which probably facilitated getting construction and operating permits.

As with many inherently good ideas, the optimal implementation may involve some synthesis, which seems to be the case with some of the techniques used by industry leaders. All of this is worth thinking about, and perhaps re-thinking every now and again. My book, Detecting Leaks in Pipelines, includes a discussion of risk assessment and analysis involving each of these methods, along with some additional insight. In any case, the road to where you are trying to go begins with knowing where that is and how to get there. This kind of analysis can be a big help in making an optimal start.


Edward Farmer has more than 40 years of experience in the “high tech” part of the oil industry. He originally graduated with a bachelor of science degree in electrical engineering from California State University, Chico, where he also completed the master’s program in physical science. Over the years, Edward has designed SCADA hardware and software, practiced and written extensively about process control technology, and has worked extensively in pipeline leak detection. He is the inventor of the Pressure Point Analysis® leak detection system as well as the Locator® high-accuracy, low-bandwidth leak location system. He is a Registered Professional Engineer in five states and has worked on a broad scope of projects worldwide. His work has produced three books, numerous articles, and four patents. Edward has also worked extensively in military communications where he has authored many papers for military publications and participated in the development and evaluation of two radio antennas currently in U.S. inventory. He is a graduate of the U.S. Marine Corps Command and Staff College. He is the owner and president of EFA Technologies, Inc., manufacturer of the LeakNet family of pipeline leak detection products.


## How to Permanently Reduce Operating Costs in Your Maintenance Department

This guest blog post was written by Bryan Christiansen, founder and CEO at Limble CMMS. Limble is a mobile-first, modern computerized maintenance management system application, designed to help managers organize, automate, and streamline their maintenance operations.

Every organization that wants to stay competitive on the market has to strive to increase its profits. When it comes to the process industries, it is not rare that, in the search for higher profits, upper management often turns to reducing operational expenses.

Since most managers still look at maintenance only as a cost center, reducing operational costs in the maintenance department is often the first thing on their list. That puts a lot of strain on maintenance managers that are always under pressure to further optimize their maintenance operations.

While I would love to tell you that we discovered some hidden secrets you can use to reduce your maintenance costs, the reality is that there are no simple ways to permanently cut those costs down.

I mean, you can always try to make some tweaks to your workflow and communication to save a few bucks. However, if you really want to see significant long-term cost savings, there are two sure-fire ways you should explore:

1. Implementing the right maintenance strategy (or mix of strategies)
2. Taking advantage of appropriate maintenance software

Both approaches require a dose of clarification, so let's put everything in the right context.

### Maintenance strategies designed to reduce operating costs

Basically all maintenance strategies, besides breakdown maintenance (run-to-failure maintenance), are designed to improve the efficiency and effectiveness of your maintenance activities which, in turn, leads to reduced operational expenses.

Despite that, a recent survey shows that around 50 percent of plants still rely heavily on reactive maintenance as part of their overall maintenance strategy. Now, I won't say that reactive maintenance doesn't have its place in your maintenance strategy, but it should only play a supporting role, leaving the heavy lifting to the more effective strategies we will discuss here.

### Preventive maintenance

If you look at the same research mentioned above, you will notice that preventive maintenance is the most popular approach to maintenance. And that is not a coincidence. Over the years, it has been proven to deliver a great return on investment when implemented properly, and the implementation process itself is more straightforward than that of any other proactive maintenance strategy.

Any business that operates on a larger scale should consider implementing a preventive maintenance strategy. Making a shift from reactive maintenance to preventive maintenance will take some time, but the benefits are numerous.

Conducting routine maintenance based on a quality preventive maintenance plan will:

• reduce the number of emergency repairs, since you will be able to discover and fix problems before a breakdown occurs
• reduce overtime labor cost as maintenance technicians will not need to stay late to fix a breakdown of a critical piece of equipment
• increase overall productivity and extend the life of critical equipment
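At its core, a preventive maintenance plan is a table of assets, service intervals, and last-service dates, from which the next due date falls out directly. The asset names and intervals in this sketch are hypothetical:

```python
# Sketch of a preventive maintenance schedule: each asset carries a fixed
# service interval, and the next due date is computed from the last
# service. Asset names and intervals below are hypothetical examples.

from datetime import date, timedelta

pm_plan = {
    "boiler feed pump": {"last_service": date(2018, 3, 1), "interval_days": 90},
    "air compressor":   {"last_service": date(2018, 5, 15), "interval_days": 30},
}

def next_due(asset):
    """Return the next scheduled service date for an asset in the plan."""
    info = pm_plan[asset]
    return info["last_service"] + timedelta(days=info["interval_days"])

for asset in pm_plan:
    print(f"{asset}: next PM due {next_due(asset)}")
```

Even a simple structure like this is enough to surface overdue work before it becomes an emergency repair.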

While preventive maintenance can be a great choice for any facility that has trouble keeping its maintenance costs in check, here are some situations in which it could be your go-to solution:

• you want to move away from reactive maintenance but you don’t have the resources for the large capital investment other maintenance strategies require
• you want a straightforward maintenance strategy that isn’t too complicated to implement
• you are willing to invest a few months to see the implementation go through successfully

### Predictive maintenance

Preventive and predictive maintenance (PdM) share the same goals but the execution of each approach is quite different.

PdM aims to predict equipment failure before it actually occurs. Unlike a scheduled maintenance strategy, predictions are based not on the average life cycles of machinery but on the actual condition of the equipment.

Some PdM strategies rely on physical inspection of the equipment, but you get the best results by implementing a software system to monitor and track production assets. By incorporating readings from different sensors and meters into a maintenance platform, you can predict potential failures and gain insight into your equipment's current condition, which helps prevent unexpected breakdowns.
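A very simple form of such prediction is trending a sensor reading toward an alarm limit. The sketch below fits a least-squares line to hypothetical vibration readings and projects when the trend crosses a hypothetical alarm threshold; real PdM platforms use far more sophisticated models.

```python
# Sketch of trend-based failure prediction: fit a straight line to
# periodic vibration readings and estimate when the trend will cross an
# alarm threshold. The readings and threshold are hypothetical.

readings = [(0, 2.0), (7, 2.3), (14, 2.7), (21, 3.1)]  # (day, mm/s RMS)
ALARM_MM_S = 4.5

# Ordinary least-squares slope and intercept
n = len(readings)
mean_x = sum(d for d, _ in readings) / n
mean_y = sum(v for _, v in readings) / n
slope = (sum((d - mean_x) * (v - mean_y) for d, v in readings)
         / sum((d - mean_x) ** 2 for d, _ in readings))
intercept = mean_y - slope * mean_x

# Project the day on which the fitted trend reaches the alarm level
days_to_alarm = (ALARM_MM_S - intercept) / slope
print(f"Projected alarm crossing around day {days_to_alarm:.0f}")
```

The point of the sketch is the workflow, not the math: readings flow into a model, and the model turns them into a maintenance date you can plan around.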

Properly implemented predictive maintenance will:

• increase the lifecycle of your assets
• minimize the number of both scheduled and unscheduled downtimes
• increase uptime of your assets
• allow you to more efficiently manage your maintenance team’s work

You should consider implementing predictive maintenance when:

• you are willing to invest a moderate to large sum of money to get the project off the ground
• you have a moderate to large amount of time and resources to implement the strategy and properly train your employees
• you have all the necessary data at your disposal, or you are willing to wait a few months to gather enough data to actually start a predictive maintenance plan (you can shift to predictive maintenance only once you have enough data to generate actionable insights about your equipment; even if you use software to collect meter readings, it will take a while before the software can generate valuable and accurate reports)
• you want complete control of, and insight into, your assets
• you want to keep your parts inventory low (by predicting when you need to do certain repairs, you can order parts just before those repairs occur)
• you already have, or plan to make, investments in industrial IoT

### Reliability-centered maintenance

Reliability-centered maintenance represents a very complex approach to maintenance. The main goal is to identify all possible failure modes of a machine and then draft a custom maintenance strategy for every piece of equipment.

This can be a daunting task for any business, since you need to conduct an in-depth analysis of hundreds, or even thousands, of pieces of equipment. As an advanced maintenance strategy, RCM requires regular collection of data from the machines, preventive and predictive maintenance measures, and regular basic inspection of all the equipment in place.

You can apply an RCM strategy to either small or large systems, but defining failure modes and differentiating between the constituents of different systems may be hard. A business must first define its business-critical production assets, and only then assign priority to failure modes. An RCM strategy deals not with functionality but with reliability, so proper categorization of assets is crucial.

An RCM strategy might be a good solution when:

• you have enough knowledge and experience to develop an effective RCM strategy
• you are willing to invest a significant amount of time and money to complete the analysis and make the maintenance program
• you want to have a clear strategy for every likely failure mode for the equipment you analyzed

### Reducing maintenance costs

Every maintenance strategy has its pros and cons so choosing the one you should focus on can be a challenging task.

How do we know that one of the existing maintenance strategies isn't superior to the others across the board?

Well, the market is the one that ultimately decides which approach to maintenance is the most profitable. Since it is obvious that not all successful processing facilities have the same approach to maintenance, we can conclude that all strategies are still viable to one degree or another for your unique setup.

In an ideal scenario, you would use a mix of these strategies to get the best possible results and minimize your maintenance costs.

However, the more realistic scenario is one in which you concentrate on employing one or two strategies. For example, you would put all important assets on your preventive maintenance plan, while some non-essential equipment (whose breakdown won't have much of an impact on your production line) doesn't have to be regularly maintained and can be fixed when/if a failure occurs.

When all is said and done, choosing the right strategy (or a mix of strategies) is one of the best ways to minimize costs that occur in your maintenance department.

### Reducing operational expenses with maintenance software

You have probably already noticed that turning to a more proactive maintenance strategy is close to impossible without the help of appropriate maintenance software. If you think about it, that is only logical.

An effective maintenance schedule has to be based on accurate and reliable information. With so many moving parts, tracking all of the necessary information is simply impossible without a central hub that allows you to make data-driven plans.

Since the main purpose of every computerized maintenance management system (CMMS) is to provide you with invaluable and actionable insights you can use to optimize your entire maintenance process, it cannot be avoided when discussing the reduction of operational costs.

While a CMMS has basically the same key benefits as all of the strategies just discussed, there are some indirect (and often overlooked) cost reductions that come with it.

### Efficiently scheduling maintenance work

The ability to easily report problems, quickly schedule maintenance work, add priority levels, track work in progress, and assign and reassign technicians with a few clicks saves a ton of time for maintenance managers and ensures that the most important work is done on time.

Faster flow of information within your maintenance team, improved response times, eliminated overtime labor costs, and easier cooperation among multiple maintenance technicians on bigger tasks are just some of the ways capable maintenance software indirectly reduces maintenance costs.
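The scheduling logic at the heart of that workflow can be sketched in a few lines: keep a queue of work orders and sort it so the most urgent work always surfaces first. The work orders, priority scheme, and dates below are hypothetical, not taken from any particular CMMS:

```python
# Sketch of how a CMMS might order its work queue: highest priority
# first, with the earliest due date breaking ties. All work orders,
# priorities, and dates here are hypothetical examples.

from dataclasses import dataclass
from datetime import date

@dataclass
class WorkOrder:
    asset: str
    priority: int  # 1 = critical ... 3 = low
    due: date

queue = [
    WorkOrder("conveyor motor", 2, date(2018, 6, 20)),
    WorkOrder("main compressor", 1, date(2018, 6, 25)),
    WorkOrder("office HVAC", 3, date(2018, 6, 18)),
    WorkOrder("boiler burner", 1, date(2018, 6, 19)),
]

# Sort so technicians always see the most urgent work first
queue.sort(key=lambda wo: (wo.priority, wo.due))

for wo in queue:
    print(f"P{wo.priority} {wo.due} {wo.asset}")
```

The value of the software is that this sorting, reassignment, and tracking happens continuously as new problems are reported, instead of once a week on a whiteboard.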

### A plethora of statistical data

A tried and tested way to improve your operations on all levels of your organization is by making adjustments based on accurate statistical data and performance reports.

When it comes to maintenance, a CMMS will enable you to look at things such as:

• what maintenance work has been done and how much it is costing you
• the overall performance level of your maintenance team
• which assets are costing you the most, and why
• which of your locations/facilities is performing best, and why

Long story short, making data-driven decisions is a solution to most of your problems.

### Conclusion

A modern production facility or manufacturing plant encompasses thousands of individual components. Which of them should be subject to preventive maintenance, and where should you apply a predictive approach? Do you need to stop your entire production line for scheduled maintenance, or can you reduce costs by replacing specific components on a run-to-failure basis without halting production?

Applying the right maintenance strategy to decrease operating costs requires making informed decisions based on accurate information. This data should be processed to generate actionable insights that enable you to draft long-term strategies that will permanently reduce your operating costs.

A major tool in your maintenance strategy should be a software platform capable of producing insights that let you combine chosen maintenance strategies and deploy the best solution for every particular scenario.

Which maintenance strategy are you using at your facility? Do you think one approach is vastly superior to the others? Don't hold it in; let us know in the comments below.

Bryan Christiansen is founder and CEO at Limble CMMS. Limble is a mobile first, modern computerized maintenance management system application, designed to help managers organize, automate and streamline their maintenance operations.


## AutoQuiz: Failures of Complex Systems in SIS Design and SIL Selection

AutoQuiz is edited by Joel Don, ISA’s social media community manager.

Today’s automation industry quiz question comes from the ISA Certified Automation Professional certification program. ISA CAP certification provides a non-biased, third-party, objective assessment and confirmation of an automation professional’s skills. The CAP exam is focused on direction, definition, design, development/application, deployment, documentation, and support of systems, software, and equipment used in control systems, manufacturing information systems, systems integration, and operational consulting. Click this link for more information about the CAP program.

### In safety instrumented system design and safety integrity level selection, what is the most common top-down approach for describing the failures of complex systems?

a) Fault Tree Analysis
b) HAZOP Analysis
c) Root-Cause Analysis
d) Markov Analysis
e) none of the above

A fault tree analysis begins with the "top event," which is the result of a number of basic events that contribute to, or initiate, the system failure. The logic of a fault tree is displayed by symbols that represent the basic events and gates that logically relate those events.
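That gate structure maps naturally onto a small recursive evaluation. In this sketch a tree is a nested tuple of AND/OR gates over basic-event probabilities; the example top event, its structure, and all probability values are hypothetical, and basic events are assumed independent.

```python
# Sketch of top-down fault tree evaluation. A node is either a basic
# event (a float probability) or a (gate, children) pair. All events and
# probabilities below are hypothetical; basic events are assumed
# independent of one another.

def prob(node):
    if isinstance(node, float):  # basic event: return its probability
        return node
    gate, children = node
    child_p = [prob(c) for c in children]
    if gate == "AND":            # all inputs must occur: multiply
        result = 1.0
        for p in child_p:
            result *= p
        return result
    if gate == "OR":             # any input suffices: 1 - product of complements
        result = 1.0
        for p in child_p:
            result *= (1 - p)
        return 1 - result
    raise ValueError(f"unknown gate: {gate}")

# Hypothetical top event: "system fails to act" occurs if the sensor
# fails, OR if the logic solver AND the final element both fail.
top = ("OR", [0.01, ("AND", [0.002, 0.005])])
print(f"Top event probability: {prob(top):.6f}")
```

Working top-down this way mirrors the handbook's method: start from the undesired top event and decompose it through gates until only basic events remain.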

The correct answer is A, Fault Tree Analysis.

Reference: Edward M. Marszal, P.E., C.F.S.E and Dr. Eric W. Scharpf, MIPENZ; Safety Integrity Level Selection: Systematic Methods Including Layer of Protection Analysis, ISA Press.

## ISA Executive Board Approves New Vision and Mission Statements

This post is authored by Brian Curtis, president of ISA 2018.

I’m excited to announce that the ISA Executive Board, meeting earlier this month during the Spring Leaders Meeting in Raleigh, NC, USA, has approved new ISA vision and mission statements.

ISA’s new vision is to: Create a better world through automation. (This replaces: ISA sets the standard for automation by enabling automation professionals across the world to work together for the benefit of all.)

ISA’s new mission statement is to: Advance technical competence by connecting the automation community to achieve operational excellence. (This replaces: Enable our members, including world-wide subject matter experts, automation suppliers, and end-users, to work together to develop and deliver the highest quality, unbiased automation information, including standards, training, publications, and certifications.)

Why is this exciting? ISA now has mission and vision statements that are short, aspirational, and memorable. The previous iterations were too wordy and unwieldy, making it difficult for ISA members to concisely state, and for everyone else to understand, why we exist and where we're going.

This is all part of an effort to better define our Society—both within our walls and beyond them—and take a hard look at our organizational mainstays, including our values, strategies, goals, and metrics. Stay tuned for updates as we hone our strategic focus, global brand recognition, and operational priorities.

As most of you well know, an essential, near-term priority is the IT infrastructure upgrade project. Funding for the project has been approved by the Executive Board, and staff, consultants, vendors, and leaders are hard at work envisioning all the plans and steps that will be involved.

The ultimate goals for the project include improved digital content delivery and user engagement, a personalized user experience, a mobile-responsive environment, and a fully streamlined e-commerce process. ISA will be leveraging an open architecture built on a Salesforce platform, adding overlays and applications based on best-in-breed solutions available in the market. We've hired a full-time project manager, Leo Nevar, to oversee all aspects of the project; he will be based in RTP as a staff member for the duration of the work. Project plans, approaches, timelines, and milestones will be vetted and monitored by the ISA Executive Board.

I also want to take this opportunity to recognize the contributions of ISA and Automation Federation staff and volunteer leaders at two highly visible STEM (science, technology, engineering and mathematics) events that took place in April.

Approximately 350,000 people—mostly primary and secondary students and their families—attended the USA Science & Engineering Festival, 7-8 April in Washington, D.C. At the ISA/Automation Federation exhibit, hundreds of young people and their parents (assisted by ISA and AF volunteers) competed in a computerized game based on an actual industrial automation and control system. The game, powered by a programmable logic controller (PLC), demonstrated essential control panel design concepts and computer game programming.

Later in the month, 18-21 April, more than 15,000 students, ages 6-18, from 43 countries competed in three robotics competition championships and a LEGO® competition championship at the FIRST® Championship Houston. ISA and AF volunteers met with FIRST competitors and their family members to answer questions about career opportunities in automation and engineering.

Maintaining a strong presence at these premier STEM events is rewarding for all involved. ISA members who take part can reconnect to the excitement that ignited their own drive to pursue an automation career and, at the same time, inspire others to follow their path toward success in the profession.

While most STEM initiatives like these target students enrolled in elementary, middle and high schools, ISA also recognizes the need to better engage with those young people further down the educational pathway: new engineering school graduates—particularly those active in ISA student sections.

All too often, ISA student section members at engineering schools lose their association with ISA when they graduate and leave their student memberships behind.  ISA is exploring ways to help new college graduates maintain their connection to ISA as they enter the first stage of their automation and engineering careers. More to come on this in a subsequent column.

I’ll also be sharing with you any actions relating to the Executive Board’s review of recommendations from the ISA Globalization Task Force, which was established in 2016 by former ISA President Jim Keaveney. The task force was created to explore financially viable ways to improve ISA’s international growth and presence. Long-time ISA leader and current co-chair of the ISA99 Committee Eric Cosman presented the recommendations at the Spring Leaders Meeting.

I’m eager to provide you with more details on these and other promising initiatives in upcoming columns. As always, I thank you for your support of and contributions to ISA.