Pipeline leak detection: Tackling the uncertainties
Pipeline safety and leak detection
All of us involved in the pipeline industry seek to eliminate leaks, be they catastrophic or relatively minor. We at DNV GL have been at the forefront of these efforts and continue to invest heavily in advancing the understanding of pipeline failures and preventing them from occurring. The combined efforts of the industry have made pipelines the safest and most dependable way of transporting oil and natural gas. However, incidents do happen, and when a loss of containment occurs we move into a damage-limitation response. To minimize the impact of a leak, we must detect it reliably and quickly, and be able to locate it accurately. At the forefront of this response is the use of pipeline leak detection systems such as DNV GL’s Synergi Pipeline Simulator, which is part of our Pipeline Ecosystem solution.
In this blog post my colleague Sanjay Yadav explains some of the critical problems faced by leak detection and introduces the research efforts he is currently involved in to address them. Sanjay is a senior member of the Analytics Advancement team within the DNV GL Pipeline Ecosystem product center. He is based in our Mechanicsburg, PA office. His background is in hydraulics, fluid mechanics, and numerical and computational methods. Over to you, Sanjay.
Pipeline leaks have no friends!
Pipeline leaks are fortunately rare. However, the few leaks that occur still have a significant cost to the environment and pipeline companies. According to pipeline incident data collected by PHMSA, the annual cost of pipeline leaks runs into hundreds of millions of dollars.
Additionally, false alarms from leak detection systems are very costly for pipeline companies. Pipeline companies must investigate every leak alarm to either explain it or act on it, which may mean shutting down the pipeline.
How we currently monitor for pipeline leaks
Pipeline companies usually employ a combination of methods to monitor for leaks. Among them, mass balance based computational pipeline monitoring (CPM) is one of the oldest approaches. The idea behind this approach is to use the principle of conservation of mass to assert that the difference between the mass flow rate into and out of a pipeline section must equal the rate of mass accumulation (also called packing rate) in that pipeline section. If there is a leak, it would appear as a discrepancy (referred to as a leak signal) in mass balance. CPM is a mature technology that has proven to be quite reliable. However, it is not perfect. What are some of the pain points? Are we at the limits of what can be achieved by CPM? What, if anything, can be done to improve it?
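The mass-balance idea behind CPM can be sketched in a few lines of code. This is a simplified illustration of the principle only, not DNV GL’s implementation; the function name and the numbers are hypothetical.

```python
# Illustrative sketch of mass-balance CPM. All names and numbers are
# hypothetical; this is not any product's actual implementation.

def leak_signal(mass_in, mass_out, packing_rate):
    """Mass imbalance for a pipeline section (e.g. kg/s).

    Conservation of mass: inflow minus outflow should equal the rate of
    mass accumulation (the packing rate) in the section. Any nonzero
    residual is the "leak signal".
    """
    return mass_in - mass_out - packing_rate

# No leak: inflow - outflow is fully explained by line packing.
print(leak_signal(100.0, 99.0, 1.0))   # -> 0.0

# A 2-unit leak shows up as an unexplained residual.
print(leak_signal(100.0, 97.0, 1.0))   # -> 2.0
```

In practice the leak signal is never exactly zero: measurement error and model error make it a noisy quantity, which is where the difficulties discussed below begin.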
Uncertainty: The nemesis of pipeline leak detection
Much of the challenge in CPM comes from uncertainties, of which there are two principal types. One is the uncertainty associated with the models used in CPM. The other is the uncertainty associated with the input data used by CPM. Because any model only approximates reality, there is always uncertainty in model predictions no matter how good the input data is.
Sources of uncertainty
The uncertainty in input data comes from a variety of sources. Part of this uncertainty is from random variation and is not reducible. But a good part of it stems from our lack of knowledge and is referred to as epistemic uncertainty. Epistemic uncertainty is reducible in theory, though reducing it may be expensive or infeasible with current technology. Examples of epistemic uncertainty include those related to fluid properties, the thermal properties of the pipe and surrounding medium, and the time stamps associated with SCADA measurements.
These uncertainties create uncertainty in the leak signal and make it difficult to decide whether the leak signal represents a real leak or is just noise. A common strategy is to use a leak threshold and declare a leak only if the leak signal crosses it. However, because of the uncertainty in the leak signal there is a tug of war between leak sensitivity and the probability of false alarms: lowering the threshold to increase the probability of detecting a leak also increases the probability of false alarms.
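The tug of war above can be made concrete with a toy model. Suppose, purely for illustration, that the leak signal is Gaussian noise around zero when there is no leak, and shifts by the leak size when there is one; the noise level and leak size below are assumed values, not data from any real pipeline.

```python
# Hypothetical illustration of the sensitivity / false-alarm trade-off.
# Assumption: leak signal ~ N(0, sigma) with no leak, N(leak, sigma) with one.
from statistics import NormalDist

sigma = 1.0     # assumed noise level of the leak signal
leak = 2.0      # assumed true leak size, same units
noise = NormalDist(0.0, sigma)
with_leak = NormalDist(leak, sigma)

for threshold in (0.5, 1.0, 2.0, 3.0):
    p_false_alarm = 1.0 - noise.cdf(threshold)     # noise alone crosses it
    p_detect = 1.0 - with_leak.cdf(threshold)      # a real leak crosses it
    print(f"threshold={threshold:.1f}  "
          f"P(detect)={p_detect:.3f}  P(false alarm)={p_false_alarm:.3f}")
```

Raising the threshold drives the false-alarm probability down, but the detection probability falls with it; there is no threshold that improves one without worsening the other for a fixed noise level.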
Model uncertainty can be reduced by using better models. Examples include the use of real-time transient models (RTTMs) to produce more accurate estimates of line pack, or the use of better equations of state to model fluid properties. Uncertainty associated with data can be reduced by having a higher density and accuracy of instrumentation. Another way to reduce data uncertainty is to use some form of state-estimation technique that estimates the most likely state of the pipeline given the SCADA measurements available. An approach of this type is used by Synergi Pipeline Simulator in its Statefinder module.
Despite the attempts to reduce uncertainty, there will be some uncertainty remaining and this will create uncertainty in the leak signal. The usual approach to manage this remaining uncertainty is to use the machinery of statistical analysis to determine if the noisy leak signal is a leak or just noise.
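One standard piece of that statistical machinery is a sequential change detector, sketched below as a one-sided CUSUM. This is a generic textbook technique shown for illustration; it is not presented as the method used in Synergi Pipeline Simulator, and the `drift` and `threshold` values are arbitrary.

```python
# One-sided CUSUM change detection on a noisy leak signal -- a generic
# statistical sketch, not any particular product's algorithm.

def cusum(signal, drift=0.5, threshold=5.0):
    """Return the sample index at which a persistent positive shift is
    declared, or None. `drift` discounts ordinary noise; `threshold`
    trades sensitivity against false alarms."""
    s = 0.0
    for i, x in enumerate(signal):
        s = max(0.0, s + x - drift)   # accumulate only evidence above drift
        if s > threshold:
            return i
    return None

quiet = [0.0] * 50                     # leak signal with no shift
print(cusum(quiet))                    # -> None (no alarm)

leaky = [0.0] * 50 + [2.0] * 50        # persistent 2-unit shift at sample 50
print(cusum(leaky))                    # -> 53 (alarm shortly after the shift)
```

The appeal of accumulating evidence over time is that a small but persistent imbalance eventually triggers an alarm, while isolated noise spikes are forgotten by the `max(0.0, ...)` reset.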
How can we better handle uncertainty in leak detection?
So, what can be improved to reduce uncertainty further or to manage it better? The models that describe transient fluid flow are based on sound principles of conservation of mass, momentum and energy. They provide very good results for most normal operating conditions in a pipeline. Model uncertainties may be higher in more challenging modeling situations such as slack line flow. However, discovering better models is a difficult task and model improvements occur infrequently.
The uncertainty coming from data can be reduced by better instrumentation and by having more of it. Another way to reduce data uncertainty is to bring more information into the leak detection process. Pipeline companies maintain a great deal of information about pipeline integrity and use risk models to prioritize pipelines for maintenance. It certainly seems that any information about pipeline integrity should have some role to play in leak detection.
We could also manage the remaining uncertainty better if we had explicit probabilistic models of the uncertainty of the different factors, and of how that uncertainty changes as we get more information from instrumentation, pipeline integrity data and hydraulic models. With such explicit models, we can reason about uncertainty in a rigorous manner.
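As a minimal sketch of this kind of explicit probabilistic reasoning, Bayes’ rule can combine a prior leak probability (which could, in principle, come from integrity and risk information) with the likelihood of the observed leak signal. Every number below is hypothetical, and the Gaussian signal model is an assumption made for illustration.

```python
# Minimal Bayesian sketch: combining a prior leak probability with the
# observed hydraulic leak signal. All numbers are hypothetical.
from statistics import NormalDist

def posterior_leak_prob(signal, prior, sigma=1.0, leak_size=2.0):
    """P(leak | signal) via Bayes' rule, assuming the leak signal is
    N(0, sigma) with no leak and N(leak_size, sigma) with a leak."""
    like_leak = NormalDist(leak_size, sigma).pdf(signal)
    like_none = NormalDist(0.0, sigma).pdf(signal)
    numerator = like_leak * prior
    return numerator / (numerator + like_none * (1.0 - prior))

# The same observed signal, weighed against different integrity-based priors:
for prior in (0.001, 0.01, 0.1):
    post = posterior_leak_prob(1.8, prior)
    print(f"prior={prior:.3f}  posterior={post:.3f}")
```

The point of the sketch is that the same noisy signal yields very different conclusions depending on the prior, which is exactly the mechanism by which integrity information could be folded into leak detection.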
Probabilistic models and machine learning
How can we achieve these objectives? In the last two decades or so, the field of machine learning has made significant progress. Many machine learning methods are based on probabilistic approaches that allow one to build explicit probabilistic models of uncertainty. These methods also allow one to incorporate information from different sources when making predictions. DNV GL is working actively to use machine learning in solving a variety of problems, including pipeline leak detection. I will have more to say about the use of machine learning for leak detection in future blog posts. For now, I will leave you with a taster of the other kinds of machine learning applications that DNV GL is exploring.
You can read more about the advances we are making in our pipeline leak detection solution in our previous post on system redundancy.