AutoQuiz: How to Read a Gauge Pressure Transmitter

AutoQuiz is edited by Joel Don, ISA’s social media community manager.

 

Today’s automation industry quiz question comes from the ISA Certified Control Systems Technician (CCST) program. Certified Control Systems Technicians calibrate, document, troubleshoot, and repair/replace instrumentation for systems that measure and control level, temperature, pressure, flow, and other process variables. Click this link for more information about the CCST program.

 

A gauge pressure transmitter that measures the pressure in a 150# high-pressure steam header is mounted 6 feet below the centerline of the header. The tap for the impulse line connects to the top of the header, rises 2 feet above the header centerline, runs horizontally for 3 feet, and then drops down to the transmitter. To read the pressure in the steam header correctly, the transmitter output must be:

a) calibrated for suppressed zero, the suppression equal to 8 feet of liquid head pressure
b) calibrated for suppressed zero, the suppression equal to 6 feet of liquid head pressure
c) calibrated for elevated zero, the elevation equal to 8 feet of liquid head pressure
d) calibrated for true zero
e) none of the above

Click Here to Reveal the Answer

At zero gauge pressure in the steam line, you are essentially suppressing (pushing) the transmitter output back down to zero after the system reaches equilibrium with 8 feet of impulse line full of liquid. The transmitter reads that impulse-line liquid head pressure plus any pressure exerted on top of the liquid. Six feet of liquid head suppression is incorrect: the impulse line rises 2 feet above the header centerline and the transmitter sits 6 feet below it, so 8 feet of liquid head accumulates. Elevated zero is incorrect because that adjustment compensates for negative pressure offsets that occur when the transmitter is mounted above the zero reference point (the high-pressure tap) and a liquid in the impulse line or a capillary system exerts a negative pressure.

The correct answer is A: calibrated for suppressed zero, with the suppression equal to the 8 feet of liquid head in the impulse line.
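As a quick sanity check, the required suppression can be computed directly from the wet-leg geometry. Here is a minimal sketch in Python, assuming the impulse line fills with water at roughly ambient temperature (about 0.433 psi per foot of head); that conversion factor is an illustrative assumption, not part of the quiz:

# Zero suppression for a liquid-filled (wet-leg) impulse line.
# Assumes the condensate behaves like water at ~60 deg F:
# about 0.433 psi per foot of head (illustrative value).

PSI_PER_FT_WATER = 0.433

rise_above_header_ft = 2.0    # tap rises above the header centerline
drop_to_transmitter_ft = 6.0  # transmitter mounted below the centerline

# Total liquid column standing above the transmitter at equilibrium
total_head_ft = rise_above_header_ft + drop_to_transmitter_ft  # 8 ft

suppression_psi = total_head_ft * PSI_PER_FT_WATER
print(f"Suppress {total_head_ft:.0f} ft of head "
      f"(about {suppression_psi:.2f} psi) at zero")

Note that the 3-foot horizontal run contributes no head; only the vertical rise and drop matter.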

Reference: Thomas A. Hughes, Measurement and Control Basics, 5th Edition, ISA Press.

 

Can Risk Analysis Really Be Reduced to a Simple Procedure?

This guest blog post is part of a series written by Edward J. Farmer, PE, author of the new ISA book Detecting Leaks in Pipelines. If you would like more information on how to purchase the book, click this link. To download a free excerpt from Detecting Leaks in Pipelines, click here. To read all the posts in this series, scroll to the bottom of this post for the links.

 

Like most complex and multidimensional process management ventures, a good result usually begins with good science and robust methodology. Consequently, most successful operators have developed proprietary methods of identifying, assessing, and mitigating risk. This methodology is usually based on experience and the developed wisdom of people who have “been there and done that.”

People with such breadth of experience and depth of practice are becoming scarce, due to increasing specialization within an ever more complex industry coupled with inevitable retirements. Anticipating, assessing, and mitigating risks with confidence becomes more difficult year by year.

Situations such as this spawn ventures into science and the development of concepts and methodical processes that aid evaluation and ensure a path toward dependable results. In pipeline safety, two divergent approaches have emerged, referred to as the “deductive method” and the “inductive method.”

The deductive method, popularized by the tales of Sherlock Holmes, begins with an occurrence of a certain character at a certain place and time, from which one deduces how it could have happened. In risk assessment language, an event and its cause are linked by a series of happenings, a “causality chain,” that somehow connects them. Often it is a brutally short chain: a backhoe hits a buried line. Sometimes it is fascinatingly complex, such as a pressure safety valve failing to protect some vulnerable item, resulting in a cascading failure.

The inductive method asks what causality chain or initiating occurrence could cause a presumed or actual accident or leak. Analysis is often based on causal experience, as described in U.S. Department of Transportation accident statistics, and involves working backward from the leak, considering all the potential linkages and branches in the causality chain to identify a source, or set of events, that could produce the subject outcome. This is the traditional approach to safety analysis, and it flows from the experience, wisdom, insight, intuition, and sometimes the creativity of the investigators.

If you would like more information on how to purchase Detecting Leaks in Pipelines, click this link. To download a free excerpt from the book, click here.

The consequences that stem from some accidents or failures are sometimes far too serious to leave to intuition. The deductive method was formalized by the U.S. Nuclear Regulatory Commission (NRC) in NUREG-0492, the “Fault Tree Handbook.” It applies the systematic methodology of fault tree analysis to the often-complex causality chains that lead to accidents. The process involves identifying the information, events, or circumstances along the path from normal operation to failure.

The analytical process

The process begins with the observation that a system is a deterministic entity comprising an interacting collection of discrete elements. Analyzing a particular output of the system means analyzing the discrete elements that cause, facilitate, or enable the occurrence. From this analysis it is possible to assess causality and even occurrence probability. It is a correct, fundamental, documentable, and highly pedantic method of producing repeatable analyses. This has made it attractive in many important or high-risk ventures with potentially serious or even catastrophic consequences.

The only drawback is the work involved, which requires a thorough understanding of the process, a detailed understanding of its components, and some understanding of the statistical behavior each of these systems, subsystems, and occurrences contributes to the likelihood of a particular result. With all that done, however, it supports a quantitative determination of the potential outcome of a particular occurrence or combination of occurrences.
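To make the quantitative side concrete, here is a minimal sketch in Python of how independent basic-event probabilities propagate through the AND/OR gates of a fault tree to a top event. The events and probability values are hypothetical, invented purely for illustration:

# Fault tree probability propagation (illustrative sketch).
# OR gate: P = 1 - product(1 - p_i); AND gate: P = product(p_i),
# assuming the basic events are statistically independent.

from math import prod

def p_or(*probs):
    """Probability that at least one input event occurs."""
    return 1 - prod(1 - p for p in probs)

def p_and(*probs):
    """Probability that all input events occur."""
    return prod(probs)

# Hypothetical basic events for a pipeline-leak top event
p_excavation_hit = 1e-3  # third-party backhoe strike
p_corrosion_leak = 5e-4  # corrosion penetrates the pipe wall
p_overpressure = 2e-3    # upset drives pressure above rating
p_psv_fails = 1e-2       # pressure safety valve fails to open

# A leak occurs if the line is struck, corrodes through, or is
# overpressured while the safety valve fails (an AND branch).
p_top = p_or(p_excavation_hit,
             p_corrosion_leak,
             p_and(p_overpressure, p_psv_fails))

print(f"Top-event probability: {p_top:.2e}")  # about 1.5e-03

Real analyses involve far larger trees, plus careful treatment of common-cause failures, which is where the months of team effort go.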

These two approaches have been described using words like “vastly different,” “night and day,” and “conjecture vs. determinism,” among many others in the same vein. One skilled and experienced person described the inductive method as the “wild-ass guess” and the deductive method as “a lifetime’s work.” The NRC’s interest in the pedantic, deterministic, yet highly repeatable deductive approach is understandable: there are a small number of systems that require such analysis and a large cast of people to do them.

The potential consequences are huge, justifying the cost and inconvenience of assembling these people into analysis teams that spend months, perhaps years, achieving a precise understanding of the systems and the interactions of their component parts. This sometimes (often) requires subject-matter experts on various equipment and process features who can be hard to find, combine, and schedule.

On the other hand, many industrial systems are simpler and less opaque than the complex systems the NRC envisioned. An adequate understanding of them, along with sufficient information, may not be as rare, and the logic that comes from experience and even “common sense” may be not only pertinent but adequate.

The analysis

Once upon a time, a critical project was deemed to require the best possible risk analysis. It required several subject-matter experts, lots of hard-to-obtain data, and an analysis team capable of understanding, assembling, and presenting the results. All that work provided very useful insights, but nothing that the experienced people did not intuit. It took about three times as long and about five times the cost of the usual company approach.

In the end, experienced reviewers assessed the results as “as expected.” This seemed to indicate that the method was a slow and expensive way to reach the same conclusion that experience and logic might predict. On the other hand, the results were clearly methodical, thorough, and repeatable, which probably facilitated getting construction and operating permits.

As with many inherently good ideas, the optimal implementation may involve some synthesis of the two, which seems to be the case with some of the techniques used by industry leaders. All of this is worth thinking about, and perhaps rethinking every now and again. My book, Detecting Leaks in Pipelines, includes a discussion of risk assessment and analysis involving each of these methods, along with some additional insight. In any case, the road to where you are trying to go begins with knowing where that is and how to get there. This kind of analysis can be a big help in making an optimal start on your way.

Want to read all the blogs in this series? Click these links to read the posts:

How to Optimize Pipeline Leak Detection: Focus on Design, Equipment and Insightful Operating Practices
What You Can Learn About Pipeline Leaks From Government Statistics
Is Theft the New Frontier for Process Control Equipment?
What Is the Impact of Theft, Accidents, and Natural Losses From Pipelines?

 

About the Author
Edward Farmer has more than 40 years of experience in the “high-tech” part of the oil industry. He graduated with a bachelor of science degree in electrical engineering from California State University, Chico, where he also completed the master’s program in physical science. Over the years, Edward has designed SCADA hardware and software, practiced and written extensively about process control technology, and worked at length in pipeline leak detection. He is the inventor of the Pressure Point Analysis® leak detection system, as well as the Locator® high-accuracy, low-bandwidth leak location system. He is a Registered Professional Engineer in five states and has worked on a broad scope of projects worldwide. His work has produced three books, numerous articles, and four patents. Edward has also worked in military communications, authoring many papers for military publications and participating in the development and evaluation of two radio antennas currently in U.S. inventory. He is a graduate of the U.S. Marine Corps Command and Staff College, and he is the owner and president of EFA Technologies, Inc., manufacturer of the LeakNet family of pipeline leak detection products.

If you would like more information on how to purchase Detecting Leaks in Pipelines, click this link. To download a free excerpt from the book, click here.


 

AutoQuiz: Failures of Complex Systems in SIS Design and SIL Selection

AutoQuiz is edited by Joel Don, ISA’s social media community manager.

 

Today’s automation industry quiz question comes from the ISA Certified Automation Professional (CAP) certification program. ISA CAP certification provides an unbiased, third-party, objective assessment and confirmation of an automation professional’s skills. The CAP exam is focused on direction, definition, design, development/application, deployment, documentation, and support of systems, software, and equipment used in control systems, manufacturing information systems, systems integration, and operational consulting. Click this link for more information about the CAP program.

 

In safety instrumented system design and safety integrity level selection, what is the most common top-down approach for describing the failures of complex systems?

a) Fault Tree Analysis
b) HAZOP Analysis
c) Root-Cause Analysis
d) Markov Analysis
e) none of the above

Click Here to Reveal the Answer

A fault tree analysis begins with the “top event,” which is the result of a number of basic events that contribute to, or initiate, the system failure. The logic of a fault tree is displayed by the symbols that represent the basic events and the gates that logically relate those events.

The correct answer is A, Fault Tree Analysis.
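As a minimal illustration of that structure, the sketch below models basic events as booleans and the gates as logical operators, using hypothetical SIS subsystem failures as the basic events; it is a toy example for intuition, not a method taken from the reference:

# Fault tree gate logic (illustrative sketch with hypothetical events).
# Basic events feed AND/OR gates; the top event is the root of the tree.

def top_event(sensor_fails: bool,
              logic_solver_fails: bool,
              valve_a_stuck: bool,
              valve_b_stuck: bool) -> bool:
    """Top event: the safety function fails to act on demand."""
    # Redundant final elements: both valves must stick (AND gate)
    final_elements_fail = valve_a_stuck and valve_b_stuck
    # Any failed subsystem defeats the safety function (OR gate)
    return sensor_fails or logic_solver_fails or final_elements_fail

print(top_event(False, False, True, False))  # False: one stuck valve is tolerated
print(top_event(True, False, False, False))  # True: sensor failure alone is enough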

Reference: Edward M. Marszal, PE, CFSE, and Dr. Eric W. Scharpf, MIPENZ, Safety Integrity Level Selection: Systematic Methods Including Layer of Protection Analysis, ISA Press.

ISA Executive Board Approves New Vision and Mission Statements

This post is authored by Brian Curtis, 2018 president of ISA.

 

I’m excited to announce that the ISA Executive Board, meeting earlier this month during the Spring Leaders Meeting in Raleigh, NC, USA, has approved new ISA vision and mission statements.

ISA’s new vision is to: Create a better world through automation. (This replaces: ISA sets the standard for automation by enabling automation professionals across the world to work together for the benefit of all.)

ISA’s new mission statement is to: Advance technical competence by connecting the automation community to achieve operational excellence. (This replaces: Enable our members, including world-wide subject matter experts, automation suppliers, and end-users, to work together to develop and deliver the highest quality, unbiased automation information, including standards, training, publications, and certifications.)

 

Why is this exciting? ISA now has mission and vision statements that are short, aspirational, and memorable. The previous iterations were too wordy and unwieldy, making it difficult for ISA members to concisely state, and for everyone else to understand, why we exist and where we’re going.

This is all part of an effort to better define our Society—both within our walls and beyond them—and take a hard look at our organizational mainstays, including our values, strategies, goals, and metrics. Stay tuned for updates as we hone our strategic focus, global brand recognition, and operational priorities.

As most of you well know, an essential near-term priority is the IT infrastructure upgrade project. Funding for the project has been approved by the Executive Board, and staff, consultants, vendors, and leaders are hard at work envisioning all the plans and steps that will be involved.

The ultimate goals for the project include improved digital content delivery and user engagement, a personalized user experience, a mobile-responsive environment, and a fully streamlined e-commerce process. ISA will be leveraging an open architecture built upon a Salesforce platform, adding overlays and applications based on best-in-breed solutions available in the market. We’ve hired a full-time project manager, Leo Nevar, to oversee all aspects of the project; he will be based in RTP as a staff member for the duration of the work. Project plans, approaches, timelines, and milestones will be vetted and monitored by the ISA Executive Board.

I also want to take this opportunity to recognize the contributions of ISA and Automation Federation staff and volunteer leaders at two highly visible STEM (science, technology, engineering and mathematics) events that took place in April.

Approximately 350,000 people—mostly primary and secondary students and their families—attended the USA Science & Engineering Festival, 7-8 April in Washington, D.C. At the ISA/Automation Federation exhibit, hundreds of young people and their parents (assisted by ISA and AF volunteers) competed in a computerized game based on an actual industrial automation and control system. The game, powered by a programmable logic controller (PLC), demonstrated essential control panel design concepts and computer game programming.

Later in the month, 18-21 April, more than 15,000 students, ages 6-18, from 43 countries competed in three robotics competition championships and a LEGO® competition championship at the FIRST® Championship in Houston. ISA and AF volunteers met with FIRST competitors and their family members to answer questions about career opportunities in automation and engineering.

Maintaining a strong presence at these premier STEM events is rewarding for all involved. ISA members who take part can reconnect to the excitement that ignited their own drive to pursue an automation career and, at the same time, inspire others to follow their path toward success in the profession.

While most STEM initiatives like these target students enrolled in elementary, middle and high schools, ISA also recognizes the need to better engage with those young people further down the educational pathway: new engineering school graduates—particularly those active in ISA student sections.

All too often, ISA student section members at engineering schools lose their association with ISA when they graduate and leave their student memberships behind. ISA is exploring ways to help new college graduates maintain their connection to ISA as they enter the first stage of their automation and engineering careers. More to come on this in a subsequent column.

I’ll also be sharing with you any actions relating to the Executive Board’s review of recommendations from the ISA Globalization Task Force, which was established in 2016 by former ISA President Jim Keaveney. The task force was created to explore financially viable ways to improve ISA’s international growth and presence. Long-time ISA leader and current co-chair of the ISA99 Committee Eric Cosman presented the recommendations at the Spring Leaders Meeting.

I’m eager to provide you with more details on these and other promising initiatives in upcoming columns. As always, I thank you for your support of and contributions to ISA.

About the Author
Brian Curtis, I. Eng., LCGI, is the Operations Manager for Veolia Energy Ireland, providing services to Novartis Ringaskiddy Ltd. in Cork, Ireland. He has more than 35 years of experience in the petrochemical, biotech, and bulk pharmaceutical industries, specializing in design, construction management, and commissioning of electrical, instrumentation, and automation control systems. He has managed complex engineering projects in Ireland, England, Belgium, the Netherlands, Italy, and Germany. A long-time ISA member, Curtis has served on the ISA Executive Board since 2013, the Geographic Assembly Board (2012-2015), and the Finance Committee (2013-2017). He was Ireland Section President and Vice President of District 12, which includes Europe, the Middle East, and Africa. Curtis has also been active on several Society task forces, including Cybersecurity-, Governance-, and Globalization-related committees. He received the ISA Distinguished Society Service Award in 2010. He is the former president of the Cobh & Harbor Chamber of Commerce (2013-2015) and former chairman of the Ireland Southern Region Chambers (2015-2016), and is an active member of the Ireland national standards body, ETCI.


 

A version of this article has also been published at ISA Insights.

Using OPC Technology to Support the Study of Advanced Process Control

This post is an excerpt from the journal ISA Transactions. All ISA Transactions articles are free to ISA members, or can be purchased from Elsevier Press.

 

Abstract: OPC, originally OLE (object linking and embedding) for process control, brings broad communication opportunities between different kinds of control systems. This paper investigates the use of OPC technology for the study of distributed control systems (DCS) as a cost-effective and flexible research tool for the development and testing of advanced process control (APC) techniques in university research centers. A co-simulation environment based on MATLAB, LabVIEW, and a TCP/IP network is presented. Several implementation issues and an OPC-based client/server control application are addressed for the TCP/IP network. A nonlinear boiler model is simulated as an OPC server, and an OPC client is used for closed-loop model identification and to design a model predictive controller (MPC). The MPC is able to control NOx emissions in addition to drum water level and steam pressure.
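The paper’s co-simulation uses MATLAB and LabVIEW, but the OPC client/server pattern it describes can be sketched in a few lines. Below is a hypothetical Python example using the third-party OpenOPC library against Matrikon’s OPC simulation server; the server ProgID and tag names are placeholders, not the boiler-model tags from the paper:

# Minimal OPC DA client sketch (hypothetical setup; requires the
# third-party OpenOPC package and a running OPC server).
import OpenOPC

opc = OpenOPC.client()
opc.connect('Matrikon.OPC.Simulation.1')  # placeholder server ProgID

# Read a simulated process variable (stand-in for, e.g., drum level)
value, quality, timestamp = opc.read('Random.Real8')
print(value, quality, timestamp)

# Write a controller output back to the server (closed-loop pattern)
opc.write(('Bucket Brigade.Real8', 42.0))

opc.close()

In the paper’s setup, the OPC server exposes the simulated boiler variables (drum water level, steam pressure, NOx emissions), and the client side runs the closed-loop identification and the MPC against them.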

Free Bonus! To read the full version of this ISA Transactions article, click here.

 

Enjoy this technical resource article? Join ISA and get free access to all ISA Transactions articles, as well as a wealth of other technical content, professional networking, and discounts on technical training, books, conferences, and professional certification.

Click here to join … learn, advance, succeed!

 

© 2006-2018 Elsevier Science Ltd. All rights reserved.

 
