As in all process control work, some understanding of science is involved, and some experience with what actually happens in the monitored system can be very useful. That suggests testing over a range of conditions is prudent, so the performance and application limits can be reliably assessed.
Over the years, the technology I’ve developed and the systems in which it is implemented have been tested thousands of times, many supervised by industry or government agencies as well as by the organizations that deployed the systems. While the core system capabilities are well understood, there are often differences, sometimes subtle and sometimes huge, in the characteristics of a particular application.
Most of the time, good analysis coupled with some on-line measurements and observations moves the system design into the “ballpark” so in-place performance testing is more of a quantification exercise than a verification that it all really does work.
For in-situ testing over the years, I’ve settled on a process that begins with a really big leak, perhaps 2 to 5 percent of the line flow rate. If the concept is working correctly, it should be detected in a few seconds and located precisely. From there, the leak size is reduced, usually by halves, until detection fails. That establishes the performance limit. Results are plotted and the curves through the test points are evaluated, in light of the algorithms involved, to assess the quality and dependability of the results.
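The halving procedure above can be sketched in a few lines. This is a minimal illustration, not the author’s actual test protocol: `detect` stands in for whatever detection system is under test, and the toy detector at the bottom is an invented placeholder.

```python
# Sketch of the leak-size halving test: start big, halve until detection
# fails, and record the detection time at each size so the results can be
# plotted as a performance curve. All names here are illustrative.

def find_detection_limit(detect, start_pct=4.0, min_pct=0.01):
    """Halve the leak size until detection fails.

    detect(leak_pct) should return the detection time in seconds,
    or None if the leak was not detected. Returns (limit, results),
    where results maps each tested leak size (percent of line flow)
    to its detection time, or None on failure.
    """
    results = {}
    leak_pct = start_pct
    limit = None
    while leak_pct >= min_pct:
        t = detect(leak_pct)
        results[leak_pct] = t
        if t is None:
            break            # detection failed: the previous size is the limit
        limit = leak_pct
        leak_pct /= 2.0
    return limit, results

# Toy detector: finds leaks above 0.5% of line flow, faster when bigger.
def toy_detect(leak_pct):
    return 10.0 / leak_pct if leak_pct > 0.5 else None

limit, results = find_detection_limit(toy_detect)
# With the toy detector: 4.0, 2.0, and 1.0 percent are detected; 0.5 is not,
# so the performance limit is 1.0 percent of line flow.
```

Plotting the (size, time) pairs in `results` gives the test-point curve described above.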
This process covers the range of applicability and is easily repeated to confirm and re-confirm results over time. Many customers test periodically to ensure that everything that needs to work, and was working at the last test, is still providing the necessary information: the observations that make the algorithms work. Testing that isn’t repeatable is not scientific or especially valuable in establishing dependable performance.
Many users install test manifolds, or taps for them, so leaks with specific orifice sizes can be reliably tested. Timing usually runs from the opening of the leak to the alarm that it has been detected. When multiple algorithms are involved, separate alarms can occur for each, and each is timed to verify that all components of the system are performing “on the curves” as expected.
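Per-algorithm alarm timing can be sketched as below. The algorithm names and the alarm-event feed are assumptions for illustration, not a real product interface:

```python
# Sketch of per-algorithm alarm timing: note when the test leak opens,
# then log each algorithm's first alarm relative to that moment.

def time_alarms(leak_open_time, alarms, expected):
    """alarms: iterable of (algorithm_name, alarm_timestamp) events.

    Returns {algorithm: seconds_to_first_alarm or None} for each
    algorithm named in expected, so a missing alarm is visible too.
    """
    times = {name: None for name in expected}
    for name, ts in alarms:
        if name in times and times[name] is None:
            times[name] = ts - leak_open_time
    return times

# Hypothetical test: leak opened at t = 100.0 s; two algorithms alarmed.
alarms = [("pressure_wave", 103.2), ("mass_balance", 131.0)]
elapsed = time_alarms(100.0, alarms,
                      ["pressure_wave", "mass_balance", "statistical"])
# pressure_wave alarmed 3.2 s after the leak opened; "statistical" never did.
```

Comparing each elapsed time against the performance curves verifies that every component is responding as expected.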
Unless there is some specific objective that requires otherwise, all tests are initiated from a stable line. Sometimes that means the readings are all stable. Sometimes it means the line has been running in its normal manner for some time. The presence of unusual or artificially induced transient conditions may affect the system’s response characteristics and will probably be hard to reproduce. More testing, as opposed to artificial conditions, normally produces more pertinent and dependable results.
In any case, the algorithms automatically tune and adapt to the actual conditions observed on the pipeline. When conditions change, e.g., from a leak, the algorithms assess why and where in a configurable manner. That provides an opportunity to optimize sensitivity, specificity, and detection time. Automatic tuning generally provides excellent results and can be “tweaked” for special conditions when necessary. Essentially, the system leaves very little to the operator and can be started up in a few hours.
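The self-tuning idea can be illustrated with a small sketch: track a running baseline and spread of an observed quantity (a flow imbalance, say) and flag readings that depart sharply from their own recent history. The smoothing constants, threshold multiple, and warm-up length are illustrative assumptions; real adaptive tuning is far more involved than this.

```python
# Minimal sketch of an adaptive (self-tuning) monitor. It learns a baseline
# and typical deviation from the line itself, then alarms on departures.

class AdaptiveMonitor:
    def __init__(self, alpha=0.05, k=4.0, warmup=50):
        self.alpha = alpha   # smoothing factor for baseline and deviation
        self.k = k           # alarm when |error| exceeds k times the deviation
        self.warmup = warmup # readings to absorb before alarming
        self.mean = None
        self.dev = 0.0
        self.n = 0

    def update(self, x):
        """Feed one reading; return True if it looks anomalous."""
        self.n += 1
        if self.mean is None:
            self.mean = x
            return False
        err = x - self.mean
        anomalous = (self.n > self.warmup and self.dev > 0
                     and abs(err) > self.k * self.dev)
        # Adapt only to readings that look normal, so a developing leak
        # does not immediately retune the baseline around itself.
        if not anomalous:
            self.mean += self.alpha * err
            self.dev += self.alpha * (abs(err) - self.dev)
        return anomalous

m = AdaptiveMonitor()
for i in range(400):                      # stable line with small noise
    m.update(99.0 if i % 2 == 0 else 101.0)
# A reading of 150.0 now stands far outside the learned behavior.
```

The point of the sketch is the adaptation loop: the thresholds come from the observed line, not from manual configuration, which is what lets such a system start up quickly.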
There are some products based on the idea of pattern recognition, generally implemented using Bayesian statistics. The core idea is that if a data set resembling the pattern produced by a leak is created and compared against blocks of actual data, then it becomes possible to detect the event through statistical pattern comparison. There are two problems with this. First, except for some highly unusual situations, there is no dependable pattern, no “fingerprint” for a leak.
The observable (measurable) characteristics are related by Newton’s momentum equation, and how the relevant data appear is a direct result of the ratio of force (e.g., from the difference between the pipeline pressure and the outside pressure at the leak location) to the mass of the fluid involved. Second, there are often issues with the volatility (vapor pressure) of the fluid in the pipeline, which exacerbates the problem. Solving it normally involves using many Bayesian test data sets and comparing each of them to conditions in the pipeline, all while still risking that the next leak has no “signature” on file for it in the inference engine.
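The force-to-mass dependence can be made concrete with the standard orifice discharge relation, a simplification of the momentum balance for incompressible flow: escape velocity scales as the square root of pressure difference over density. The discharge coefficient and fluid properties below are illustrative values, not data from any particular pipeline:

```python
# Orifice discharge sketch: v = Cd * sqrt(2 * dP / rho), so the same leak
# hole behaves differently depending on driving pressure and fluid mass.

from math import pi, sqrt

def leak_rate(dP_pa, rho_kg_m3, orifice_d_m, cd=0.62):
    """Approximate volumetric leak rate (m^3/s) through a sharp-edged orifice.

    dP_pa       -- pressure difference across the leak, Pa
    rho_kg_m3   -- fluid density, kg/m^3
    orifice_d_m -- orifice diameter, m
    cd          -- discharge coefficient (~0.6-0.65 for sharp edges)
    """
    area = pi * (orifice_d_m / 2.0) ** 2
    velocity = cd * sqrt(2.0 * dP_pa / rho_kg_m3)
    return area * velocity

# Same 10 mm orifice and 20 bar differential, different fluid densities:
# the denser fluid escapes more slowly, so the observable signal develops
# more gradually. (700 and 1000 kg/m^3 are illustrative values.)
q_light = leak_rate(20e5, 700.0, 0.01)
q_heavy = leak_rate(20e5, 1000.0, 0.01)
```

Because the signal shape depends on these ratios, which vary with fluid, pressure, and leak location, no single stored “pattern” can represent every leak.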
Simply put, the detection methodology, and the testing of it, must be as realistic in terms of process monitoring as possible. Years ago, many users did impromptu testing where a team would set up and conduct a test without informing the operators. Results would be assessed on the basis of how long it took to implement the response plan. That is less common today but is worth thinking about.
Good test records are useful in identifying trends and the appearance of unusual conditions. We have on-line and off-line analysis systems that support careful analysis and review, not only of leaks but unusual operating occurrences and situations. The algorithms store data for every significant event and archive all selected data over time. This allows investigation of why, for example, a situation that was not a problem a few weeks ago is a problem today.
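Event capture of this kind can be sketched as a rolling window that gets snapshotted when an event fires, so the conditions leading up to it survive for later review. The structure and field names below are illustrative assumptions, not a real product format:

```python
# Sketch of event recording: keep a rolling buffer of recent readings and
# archive a snapshot of it whenever a significant event is flagged.

from collections import deque
import time

class EventRecorder:
    def __init__(self, window=600):
        self.window = deque(maxlen=window)  # most recent readings only
        self.events = []                    # archived event snapshots

    def record(self, reading):
        self.window.append(reading)

    def capture(self, label):
        """Archive the pre-event window alongside the event label."""
        self.events.append({
            "label": label,
            "time": time.time(),
            "pre_event_data": list(self.window),
        })

rec = EventRecorder(window=5)
for x in [100, 101, 99, 100, 140, 180]:   # last two readings look anomalous
    rec.record(x)
rec.capture("imbalance_alarm")
# The archived snapshot keeps only the 5 most recent readings:
# [101, 99, 100, 140, 180]
```

Comparing archived snapshots from different weeks is what makes it possible to ask why a situation that was not a problem before is a problem now.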
Testing provides an assessment of how a system can work and how it’s working now, the tools for analyzing unusual occurrences, and a vehicle for training and testing operators. Knowing how well things should work, along with whether and how well they are working now, is an important component of safety monitoring.