# Life Cycle Cost analysis at the Operational stage

**In a period when oil prices are under pressure, understanding the risks and potential financial impact is essential to ensure a healthy business. With this in mind, life cycle cost analysis plays an important role in estimating the economic performance of a project over its entire lifecycle. This type of analysis has traditionally been applied in the early phase of projects with high capital demand, particularly in the oil and gas industry, but it also produces some interesting results during the operational stage.**

This post is a bit longer than usual, but it covers a very interesting subject! We will be describing:

- Uncertainty and Sensitivity analysis
- Steps for performing uncertainty analysis using the Monte Carlo method
- Options to visualise the uncertainty related to a variable
- Case study involving an FPSO
- Tornado chart for a few KPIs
- How to make decisions using tornado charts
- How to use what we learn to make decisions about the operational cost and investment of a specific asset

This blog post summarises a method for quantifying and understanding the uncertainty around some parameters by performing sensitivity analysis. A range of potential input values is considered, primarily for the parameters of cost drivers, through an advanced version of the well-established Monte Carlo simulation technique.

Uncertainty and sensitivity analysis techniques are closely connected, but it is important to note that they are not the same. Sensitivity analysis is a method which uses a range of inputs to a model to obtain a range of results. Uncertainty analysis aims to quantify the uncertainty of the model itself. Sensitivity analysis can be seen as using the tool, whereas uncertainty analysis is a quality or accuracy check of the tool.

Steps for performing uncertainty analysis using the Monte Carlo method are:

- Use your model to calculate a KPI value for a base case;
- Select a range and distribution for each input variable for which you want to quantify uncertainty;
- Generate a sample from the input variable(s);
- Perform your analysis again using the new input variable(s);
- Compare the new value against your base case to quantify the change in results from your change in inputs, i.e. the variability.
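The steps above can be sketched in a few lines of Python. The model, distributions and figures below are illustrative stand-ins, not the actual simulation used in this study: a simple uptime-fraction model plays the role of the KPI, and a ±20% uniform range stands in for the chosen distributions.

```python
import random

def production_efficiency(mttf, mttr):
    """Hypothetical stand-in model: uptime fraction from mean time to failure/repair."""
    return mttf / (mttf + mttr)

# Step 1: KPI value for the base case
base_mttf, base_mttr = 1000.0, 24.0  # hours, illustrative values only
base_kpi = production_efficiency(base_mttf, base_mttr)

# Steps 2-4: assume a +/-20% uniform range for each input, sample, and re-run the model
random.seed(42)
results = []
for _ in range(10_000):
    mttf = base_mttf * random.uniform(0.8, 1.2)
    mttr = base_mttr * random.uniform(0.8, 1.2)
    results.append(production_efficiency(mttf, mttr))

# Step 5: compare the sampled KPI spread against the base case to quantify variability
print(f"base KPI = {base_kpi:.4f}")
print(f"sampled KPI range = {min(results):.4f} .. {max(results):.4f}")
```

In a real study the model would be the full availability simulation and the distributions would come from reliability data, but the loop structure is the same.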

This is the key to uncertainty analysis: does a small input change have a large impact on the results, or does a large input change have little impact on the results? A commonly used quantification of uncertainty is the change in output value per unit change in input value.

This can be visualised well in tornado charts, as described later.

Let’s consider an FPSO (floating production, storage and offloading vessel) comprising a large range of different systems, such as oil processing, gas compression and dehydration, produced water, flare, vent and power generation. Maintenance is handled by a crew located on site at the FPSO, and extra resources are called in depending on which equipment fails.

There is a target for the production efficiency of this system which is 97%.

A tornado chart is used to understand how variability in the independent parameters affects the dependent variable being measured. Tornado charts typically provide a rapid means of understanding how inputs impact a variable for further assessment. They are therefore very useful tools for both uncertainty and sensitivity analyses.

To create a tornado chart, one input parameter in the base case is changed to a value higher and a value lower than the original, while all other input parameters remain unchanged. The results are displayed as a bar chart in which the length of each bar indicates the variability of the results when the parameter under investigation is changed relative to the base case (which is indicated at the colour-change position). Since each input parameter is varied both up and down, bars to the right show results higher than the base case for the dependent variable, whereas bars to the left show results lower than the base case.

*Example tornado chart result format*

After one parameter is changed, a second parameter is modified using the same factor applied in the previous case. This ensures that, when analysing the chart, the weight of the changes is at the same level.
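This one-parameter-at-a-time procedure can be sketched as follows; the model and parameter values are hypothetical, and only the bar data (not the plot itself) is built here:

```python
def production_efficiency(params):
    """Hypothetical stand-in model combining MTTF, MTTR and mobilisation time."""
    mttf, mttr, mob = params["mttf"], params["mttr"], params["mob"]
    return mttf / (mttf + mttr + mob)

base = {"mttf": 1000.0, "mttr": 24.0, "mob": 12.0}  # hours, illustrative only
base_kpi = production_efficiency(base)

# Vary one parameter at a time by the same +/-20% factor, holding the others fixed
bars = []
for name in base:
    low = dict(base, **{name: base[name] * 0.8})
    high = dict(base, **{name: base[name] * 1.2})
    kpi_low = production_efficiency(low)
    kpi_high = production_efficiency(high)
    bars.append((name, kpi_low, kpi_high, abs(kpi_high - kpi_low)))

# Sort by bar length so the widest bar sits at the top of the "tornado"
bars.sort(key=lambda b: b[3], reverse=True)
for name, kpi_low, kpi_high, width in bars:
    print(f"{name:5s} low={kpi_low:.4f} high={kpi_high:.4f} width={width:.4f}")
```

Plotting each `(kpi_low, kpi_high)` pair as a horizontal bar centred on `base_kpi` gives the familiar tornado shape.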

One disadvantage of the tornado chart is the assumption that the variables are fully independent. Many parameters depend on each other, which gives a cumulative impact on results: when one parameter increases, the other tends to increase (positive correlation) or decrease (negative correlation). Therefore, selecting the right parameters to vary is extremely important.

For this specific case study, the following parameters are considered:

- Equipment reliability patterns, defined by factoring the mean value and any other parameters appropriate to a specific failure and repair distribution
- Uncertainty related to the duration of planned maintenance
- Mobilisation time for the maintenance crew, investigated to understand how approval processes impact system performance
- Spare parts for pumps and compressors, investigated to understand how uncertainty related to external suppliers impacts overall performance

All the aforementioned parameters will be factored by ±20%. This ensures that the analysis covers a wide range of scenarios.

Note that our minimum acceptable performance metric is 97% production efficiency.

After running the simulation, the production efficiency for the base case is 97.202%. When we apply ±20% factors to our input parameters of interest, we see that the results can fall below the target efficiency of 97% for some parameters but not all. Variations in mean time to repair (MTTR) and mean time to failure (MTTF) can put us below our minimum acceptable target of 97% production efficiency.

With the tornado chart it is possible to see in which areas uncertainty could give rise to unacceptable performance. So what can we do with this information? Remember that this is uncertainty analysis: we are not being told that we will perform poorly; we are being told that the system has the potential to perform poorly. We can use these simulation results to take steps during operation of the asset to ensure acceptable performance. We could do this by assessing the maximum level of variation that can be permitted before we approach the 97% KPI value.

Criticality can be combined with uncertainty analysis to achieve this. The criticality graph ranks the events causing losses: equipment failures, planned maintenance or operational bottlenecks. By combining the criticality metric with the maximum level of variation allowed, analysts can keep track of losses during operation and ensure that systems behave within the boundaries needed to achieve a specific target.
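At its core, a criticality ranking is a sorted view of the losses attributed to each system. A minimal sketch, using invented downtime figures:

```python
# Invented annual production losses (hours of downtime) attributed to each system
losses = {
    "gas compression": 310,
    "power generation": 260,
    "oil processing": 190,
    "produced water": 80,
}

# Rank systems by the losses they cause, largest first
ranking = sorted(losses.items(), key=lambda kv: kv[1], reverse=True)
for system, hours in ranking:
    print(f"{system:16s} {hours:4d} h lost")
```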

The limit of permissible variation can be estimated by first interpolating and extrapolating the values of the input variable, in this case Mean Time To Failure, versus production efficiency and then reading off the MTTF values that would see us reach the critical 97% value. The same methodology could be implemented for the Mean Time To Repair.

We can fit a curve to our known data points using regression techniques. A range of MTTF values has been used here to obtain a curve relating the MTTF value to production efficiency.
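A minimal sketch of such a fit, using an invented set of (MTTF factor, efficiency) points rather than the study's actual simulation output; an ordinary least-squares straight line stands in for whatever curve form fits the real data best:

```python
# Invented (MTTF factor, production efficiency %) points from repeated simulation runs
points = [(0.80, 96.55), (0.90, 96.95), (1.00, 97.20), (1.10, 97.38), (1.20, 97.50)]

# Ordinary least-squares fit of a straight line: efficiency = a * factor + b
n = len(points)
sx = sum(x for x, _ in points)
sy = sum(y for _, y in points)
sxx = sum(x * x for x, _ in points)
sxy = sum(x * y for x, y in points)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# Read off the MTTF factor at which the fitted line crosses the 97% target
target = 97.0
critical_factor = (target - b) / a
print(f"efficiency ~ {a:.2f} * factor + {b:.2f}; 97% reached at factor ~ {critical_factor:.4f}")
```

With the study's real data and fitted function, the same read-off yields the 0.9298 factor discussed below.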

The function used to describe the variance for this particular case study is also shown in the above figure.

With this function, the maximum permissible variation can be calculated as a factor of 0.9298. This means that a variation from the base case MTTF of -7.0173% is permissible before system performance approaches the critical 97% production efficiency target.

In practical terms, this means that as your reliability programme gets underway during operation and data starts to populate your libraries, you should compare observed MTTF values against the generic industry data used to design your system. If real-life operations show an MTTF roughly 7% lower than first assumed, action needs to be taken.

Finally, let us look at combining the maximum level of permitted variation and the base case criticality. Reading these graphs in combination empowers the analyst to understand at what point the variability is acceptable. For the criticality analysis, the following graphs can be created:

The graph above shows the production loss for the base case integrated with the uncertainty analysis. By keeping track of the losses caused by each system, the running efficiency of the asset can be kept under constant evaluation.

From the graph above, the number of failures permitted by each system before reaching the critical KPI value can be easily identified.

In addition to the conclusions that can be drawn directly from the graphical output, the ability to focus data-collection effort on a specific set of variables is of vital importance. Unfortunately, acquiring good reliability data is still a massive challenge in the oil and gas industry. Being able to target areas where the uncertainty is large therefore ensures that focus is given to the areas of highest impact.

Another important step in this analysis is to understand how external factors such as market conditions will impact the financial performance of the system. With the current model it is possible to measure the revenue streams and operational expenditure (OpEx) for the base case. As we monitor each failure and the downtime it causes, we can quantify the OpEx associated with maintenance and the lost revenue. To properly compare the financial performance of the different case studies over the entire lifecycle, the Net Present Value (NPV) is used.
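NPV simply discounts each year's net cash flow back to present value. A minimal sketch with invented cash flows (the $M figures are illustrative, not the study's):

```python
def npv(cash_flows, discount_rate):
    """Net present value of yearly cash flows; index 0 is the initial (year-0) outlay."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# Invented figures ($M): initial investment, then seven years of (revenue - maintenance OpEx)
cash_flows = [-10.0] + [2.2] * 7
print(f"NPV at a 10% discount rate: ${npv(cash_flows, 0.10):.2f}M")
```

In the full model, each year's cash flow would come from the simulated production (revenue) minus the maintenance OpEx accumulated from the failure log.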

A tornado chart is also used to understand the uncertainties around market conditions. These can easily be added to the parameters already varied in the previous uncertainty analysis.

The Net Present Value for the base case is $2.49M. For the financial analysis, two targets are tracked:

- Positive NPV which relates to the 97% threshold for production efficiency
- A critical scenario; the complete recovery of the investment with NPV equal to zero.

Hence, with the same parameters used to produce the previous tornado chart, plus the product price and the discount factor, another chart is created:

Since oil sales are the main source of revenue, the oil price is a major contributor to the variability of the financial performance of the system. Alongside the oil price, the MTTF plays an important role in the variance of the NPV.

The turning point from a positive NPV to negative NPV can also be estimated via regression techniques.

By using a linear regression technique, the function representing the NPV behaviour for the model is:

The oil price where the operation of this asset achieves an NPV of zero is $63.23 per barrel.
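The break-even estimate can be reproduced in outline with a least-squares line through (oil price, NPV) points taken from simulation runs; the figures below are invented for illustration, not the study's data:

```python
# Invented (oil price $/bbl, NPV $M) pairs from repeated simulation runs
samples = [(55.0, -2.1), (60.0, -0.8), (65.0, 0.5), (70.0, 1.7), (75.0, 3.0)]

# Ordinary least-squares fit of a straight line: NPV = a * price + b
n = len(samples)
sx = sum(p for p, _ in samples)
sy = sum(v for _, v in samples)
sxx = sum(p * p for p, _ in samples)
sxy = sum(p * v for p, v in samples)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# The break-even oil price is where the fitted line crosses NPV = 0
break_even = -b / a
print(f"NPV ~ {a:.4f} * price + {b:.2f}; break-even at ${break_even:.2f}/bbl")
```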

Another important factor to be taken into account when analysing the financial aspects of an oil and gas venture is the discount factor of the NPV calculation.

Variation in the discount factor can shift the boundary at which an oil price becomes acceptable. For example, with a discount factor of 8%, an oil price of $60 per barrel becomes acceptable.
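A toy NPV model (all figures invented) illustrates this effect: at $60/bbl the asset has a negative NPV at a 10% discount rate but a positive one at 8%, because the lower rate discounts future revenue less heavily.

```python
def npv(oil_price, discount_rate, years=10, volume=0.05, opex=1.5, capex=10.0):
    """Toy NPV model ($M, figures invented): yearly revenue = price * volume, minus fixed OpEx."""
    yearly = oil_price * volume - opex
    return -capex + sum(yearly / (1 + discount_rate) ** t for t in range(1, years + 1))

# The same $60/bbl price is unacceptable at 10% but acceptable at 8%
print(f"NPV at 10%: ${npv(60.0, 0.10):.2f}M")
print(f"NPV at  8%: ${npv(60.0, 0.08):.2f}M")
```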

A tornado chart has been found to be a powerful way of illustrating how changes to a selected set of variables impact the dependent variables.

After estimating the variability, different regression techniques are used to estimate the maximum allowable level of variation from the base case assumptions. This empowers the analyst to track specific variables throughout the asset's life to ensure that performance remains at the desired level.

With this methodology, a deep understanding of external factors can also be incorporated. For example, product price and discount rate are variables that cannot be controlled by asset management techniques. Therefore, it is vital to understand how these factors will impact the financial health of an asset.

Other factors that may be taken into account using the methods described here are:

- Contract penalties
- Taxation
- Plant running cost