Capital allocation decisions made by engineers at E&P companies are completely rational, unaffected by their own experiences or personal exposure to the play. Oil well forecasting and completions design recommendations are objective, uncolored by cultural influence, and based on unbiased P50 estimates.
If this is your belief, and it must be a belief, please stop reading this post now.
Removing Inherent Bias from Oilfield Forecasting:
- What Really Drives Oil Well Forecasting?
- Single Well Extrapolation Can Drive Decisional Bias
- The Planning Fallacy of Oil Well Forecasting
- Amplifying the Reservoir Engineer with Machine Learning Driven Oil Well Forecasting
- Theory-Induced Blindness in Oil and Gas Forecasts
- Balancing Bias in Well Forecasting Is Key
- Cloud Platforms to Analyze Data
What Really Drives Oil Well Forecasting?
Forecasting volumes for a development asset that is competing for capital requires balancing several cultural forces:
- Stakes for the Company: The forecaster (usually a reservoir engineer) is tasked with coming up with a best estimate of future volumes. The decision to proceed or not with a play, in large part hinges on this forecast. There are hundreds of millions of dollars typically at stake for the company that makes the investment.
- Stakes for the Forecasting Engineer: The engineer producing the oil and gas forecast knows that their work can shape their own future. It is well known that teams working on well-capitalized, successful, highly ranked assets tend to get more opportunities and rewards in any company. Being close to a high-value play makes the engineer more valuable. In a growth market, there is even more value riding on these forecasts.
- Long Road to Accountability: Projected hydrocarbon volumes have no immediate verification. It may take months or years to assess whether the P50 curve that was foundational to the investment decision was a good or bad representation.
- Behavioral Momentum: The theory that "describes the general relation between resistance to change (persistence of behavior) and the rate of reinforcement obtained in a given situation." In Oil & Gas, the prospect of closing a big deal is enticing: it brings attention, value, and visibility, and it can transform an organization.
These cultural forces are instrumental in opening the door for bias into an evaluation.
Single Well Extrapolation Can Drive Decisional Bias
Engineers are not forecasting only for the play they are working on; they are often forecasting on behalf of the team, and the impact of the well production forecast is well known throughout that team.
The approach of using a single well result in an emerging play is a good example of how bias enters the equation when trying to demonstrate play potential:
- Evaluate early single well (stand-alone) results in the play, whether your own, offsets, or both.
- Scale those results for longer laterals and higher tonnage (a step fraught with assumptions about the bivariate relationship).
- Lay out multi-stack developments to capture the resource (more assumptions layered on top of the previous ones).
This workflow is simplified.
However, it has led to some drastic overpredictions and overcapitalization of oil and gas plays. It is worth noting that a miss on forecasted volumes is magnified in NPV10, even with a constant price deck: the fixed capital costs do not shrink with the revenue. As an example, if volumes drop 20%, the reduction in NPV10 is likely 40% or more. The return on capital is severely diminished, and the play would likely no longer rank in the portfolio.
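A toy discounted-cash-flow sketch shows the leverage at work. Every number here (price, opex, capex, the decline shape) is hypothetical and chosen only to illustrate the mechanics, not to represent any real play:

```python
# Toy illustration of how a volume miss is amplified in NPV10.
# All inputs are hypothetical; the point is the leverage, not the numbers.

def npv10(volumes_bbl, price=60.0, opex=12.0, capex=8_000_000, rate=0.10):
    """NPV at a 10% discount rate: discounted net revenue minus upfront capex."""
    pv_revenue = sum(
        v * (price - opex) / (1 + rate) ** (t + 1)
        for t, v in enumerate(volumes_bbl)
    )
    return pv_revenue - capex

# Hypothetical P50 forecast: a simple declining profile over 5 years (bbl/yr).
forecast = [120_000, 70_000, 45_000, 32_000, 25_000]

base = npv10(forecast)
miss = npv10([v * 0.8 for v in forecast])  # actual volumes come in 20% low

print(f"Base NPV10:   ${base:,.0f}")
print(f"-20% volumes: ${miss:,.0f}")
print(f"NPV10 change: {(miss - base) / base:.0%}")
```

Because the capex is fixed while revenue scales down, the 20% volume miss cuts this hypothetical NPV10 by well over 40%.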
This bias often drives overly optimistic oil and gas forecasts and, ultimately, an overly optimistic development plan.
I contend that a decision that looks poor in arrears was NOT a poor decision if it was made on the best available data and technique at the time. The issue in the workflow above is not that the engineer was unable to assess the primary production drivers, understand the correlation, and scale their forecasting conclusions appropriately. The issue is the failure to account for risk in the assumptions layered into the development plan, or to assign any risk at all. The engineer likely accounted for a reasonable upside but may have underweighted, or not recognized, the downside.
The Planning Fallacy of Oil Well Forecasting
Failure to assess or be transparent with the risks leads to an unrealistic best-case scenario being presented. This is commonly known as the Planning Fallacy. Furthermore, it is often rewarded, at least in the short term, in terms of career growth and opportunity. There is every incentive for the engineer to insert bias into the decision-making process.
Assessing development wells that may be stand-alone and possibly under-stimulated, and then scaling them to pad developments is not bad practice in principle. However, analysis of very limited data can lead to a planning fallacy, in the following ways:
- The primary production driver(s) the engineer selected may be spuriously correlated, or only weakly correlated, with production.
- Analog well selection bias may have given more weight to the production impact of certain variables.
- Other variables and factors are often either ignored or assigned too little risk.
A machine learning model driven by basin-wide well data, by contrast, can pull signal from well performance all over the basin. For a single well, the model found signal in 1,553 analog wells from around the basin based on feature similarity. A "normal" analog selection process is a hand-picked, manual one: it typically involves eyeballing different variables to judge similarity.
That manual process is inefficient and riddled with biases, and biases are a key driver of unreliable oilfield forecasts. Large-scale datasets, compute power, and algorithms can do a much more thorough job of spelunking through all the data to select analog wells and weight their similarity when making a prediction. With the right front-end oil and gas forecasting software tools, the analog selection process can be completely transparent.
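As a minimal sketch of the idea (not Novi's actual method), analog selection by feature similarity can be framed as a nearest-neighbor search over standardized well features. The well IDs and feature values below are invented for illustration:

```python
import math

# Hypothetical basin-wide feature table: (well_id, lateral_ft, proppant_lb_ft, tvd_ft)
wells = [
    ("A-1", 7500, 1800, 9200),
    ("B-2", 10000, 2400, 9400),
    ("C-3", 5000, 1200, 8800),
    ("D-4", 9800, 2300, 9350),
    ("E-5", 7600, 1750, 9250),
]

def zscore_columns(rows):
    """Standardize each feature so no single unit scale dominates the distance."""
    stats = []
    for col in zip(*rows):
        m = sum(col) / len(col)
        s = (sum((x - m) ** 2 for x in col) / len(col)) ** 0.5
        stats.append((m, s))
    return [[(x - m) / s for x, (m, s) in zip(row, stats)] for row in rows]

def analogs(target_id, k=2):
    """Rank candidate analog wells by Euclidean distance in standardized feature space."""
    ids = [w[0] for w in wells]
    feats = zscore_columns([list(w[1:]) for w in wells])
    target = feats[ids.index(target_id)]
    scored = sorted(
        (math.dist(target, f), wid) for wid, f in zip(ids, feats) if wid != target_id
    )
    return [wid for _, wid in scored[:k]]

print(analogs("B-2"))  # nearest analogs to the hypothetical well B-2
```

Unlike eyeballing one variable at a time, every feature contributes to the similarity score, and the ranking is fully reproducible and auditable.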
Without these kinds of tools and evaluation, you have all the right ingredients for a poor plan. Therein lies the issue: lacking the capability, or the will, to assess the impact (positive and negative) of all variables and their associated risk is what lets a plan diverge from reality.
Amplifying the Reservoir Engineer with Machine Learning Driven Oil Well Forecasting
Does the industry use the right tools to do a proper multivariate evaluation? Reservoir engineers (like myself) are only human. We are good at finding strong signals in oil well data that have a large impact on performance. We are relatively poor at parsing out multiple production drivers when the impact of each is marginal.
Assessing large volumes of variable data and maintaining a consistent assessment of positive and negative impact is physically impossible for any reservoir engineer to do systematically using the standard tools such as Spotfire or Excel, especially within tight deadlines. When coupled with the aforementioned cultural pressures, I would submit it is impossible to do without bias.
Unconventional oil and gas plays are incredibly complex
The variables that drive performance can change over time. The interaction between variables and the significance can change as well. Oil & gas forecast data needs to be looked at objectively considering all relationships and dimensions on various production streams (gas, oil, water). It is a tall order to parse these relationships and relate them to capital decisions.
On the other hand, the entrance of algorithms and mass compute power into engineering workflows allows for greatly improved systematic and accurate identification of positive and negative well production drivers. They are equally good at assessing the modest or large impact, whether it be positive or negative.
Shapley values as ML-driven model outputs
The chart above shows a useful output of machine learning driven models called Shapley values. Shapley values are rooted in cooperative game theory. They are produced by an explanation layer built on top of a predictive regression model and are designed to show how the underlying model weighs each variable. Shapley values are expressed as a number of bbls or mcf above or below the model dataset average.
In the case of the chart above, we are looking at the number of bbls of impact each feature had on cumulative predicted oil production at two IP days: IP30 and IP720. Each feature drives the predicted bbls of oil up or down. In this case, the features are categorized and color coded.
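A toy example makes the game-theory idea concrete. Here `predict()` stands in for a trained regression model (its coefficients and feature names are invented), and each feature's Shapley value is its marginal contribution averaged over all orders in which features could "join" the prediction. Real tools such as the SHAP library approximate this efficiently for large models:

```python
from itertools import permutations

FEATURES = ["proppant_lb_ft", "lateral_ft", "spacing_ft"]
BASELINE = 50_000  # hypothetical dataset-average cumulative oil, bbl

def predict(present):
    """Stand-in model output when only the features in `present` are known;
    absent features are held at the dataset average (contribute 0 bbl)."""
    out = BASELINE
    if "proppant_lb_ft" in present:
        out += 8_000
    if "lateral_ft" in present:
        out += 5_000
    if "spacing_ft" in present:
        out -= 6_000
    if "proppant_lb_ft" in present and "spacing_ft" in present:
        out -= 2_000  # tighter spacing blunts the proppant uplift (interaction)
    return out

def shapley(feature):
    """Average the feature's marginal contribution over every join order."""
    orders = list(permutations(FEATURES))
    total = 0.0
    for order in orders:
        before = set(order[: order.index(feature)])
        total += predict(before | {feature}) - predict(before)
    return total / len(orders)

values = {f: shapley(f) for f in FEATURES}
print(values)
# Efficiency property: Shapley values sum to prediction minus baseline.
print(sum(values.values()), predict(set(FEATURES)) - BASELINE)
```

Note how the interaction term is split between proppant and spacing, and how the values always sum to the model's deviation from the dataset average, which is exactly why they can be read as "bbls up or down from the average."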
The color code legend
The green feature set represents various well spacing features, derived from well location and directional survey data. What is particularly interesting is that this well's spacing features had a positive influence at IP30, but by IP720 those same features were strongly negative. Evaluating the changing influence of variables over time is critical to avoiding over-engineering of wells.
In addition, models built using empirical methods can quantify their own confidence in any forecasted well. This confidence can be used as a proxy for risk.
The figure above, from a model built on double-bounded wells in the Delaware basin, shows how the model assesses its own confidence via the ratio of the P90 and P10 confidence bounds (dark green = highest confidence, red = lowest confidence). The engineer can evaluate any other desired confidence intervals in the same way.
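A minimal sketch of the idea, using the oil and gas convention that P90 is the conservative (90% probability of exceeding) quantile and P10 the optimistic one, so a P90/P10 ratio near 1.0 means a tight forecast band. The well IDs, quantile values, and bucket thresholds below are all hypothetical:

```python
# Sketch: P90/P10 ratio of a probabilistic forecast as a confidence proxy.
# Per-well quantiles are hypothetical model outputs (bbl, cumulative oil).
forecasts = {
    "W-101": {"P90": 95_000, "P10": 110_000},   # tight band -> high confidence
    "W-102": {"P90": 60_000, "P10": 140_000},   # wide band  -> low confidence
    "W-103": {"P90": 80_000, "P10": 120_000},
}

def confidence_bucket(p90, p10):
    """The closer the P90/P10 ratio is to 1.0, the tighter (more confident)
    the forecast. Thresholds here are arbitrary, for illustration only."""
    ratio = p90 / p10
    if ratio >= 0.8:
        return "high"
    if ratio >= 0.6:
        return "medium"
    return "low"

for well, q in forecasts.items():
    print(well, round(q["P90"] / q["P10"], 2), confidence_bucket(q["P90"], q["P10"]))
```

In a ranking workflow, a wide-band well like W-102 would carry a larger risk discount than a tight-band well like W-101, even if their P50s were identical.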
Theory-Induced Blindness in Oil and Gas Forecasts
Beyond cultural forces and physical limitations, there is another factor that prevents engineers from identifying secondary or tertiary production drivers and their impact: a psychological effect whereby we gravitate toward a single key driver and then become unable to see the other variables that drive well production.
All levels of a company are susceptible to this "one-dimensional driver" effect. I remember conversations with brilliant executives who were certain the issue with a particular well was one key factor. The root of this belief was often a theory derived from an experience (bias): perhaps the well was in the wrong landing zone, was a child well, used the wrong sand size, the wrong stage spacing, etc.
Blinded by the Plight
In one specific example I recall, executives were sure that the team had pumped the wrong surfactant when completing a particular well, driving its substandard performance. The belief was that fixing this one issue would have improved performance beyond what is statistically likely based on data-driven analysis. The major downside of fixating on a single theory is the inability to identify other variables and impacts.
This blocks true learning. Princeton researcher Daniel Kahneman refers to this as "theory-induced blindness": the premise that committing to a single theory prevents us from seeking further depth in the analysis. I would add my own observation that most successful businesspeople are entrepreneurial and naturally optimistic, which leads to the belief that wells can be "fixed" by a single corrective measure.
Analyze the Shapley values to see the light
In the chart above, you can see Shapley values for a large well dataset in the Delaware basin. This is the same kind of Shapley data analyzed above, but here we are looking at a basin-level number of wells. Clearly, the model believes that increasing proppant levels impacts oil production at IP720. A value of -15% means that this value of this feature will likely reduce the model's prediction of cumulative bbls by 15% for that production stream at the IP720 day.
A virtue of Shapley values is that they account for everything else that influences well production, so the spacing, subsurface, and stacking variables also shape the Shapley value of each dot above. In this way, engineers get true basin-level insight into what is really driving production, relative to everything else, for each IP day, each produced product, and each well. This dataset helps deconstruct well performance and is a useful tool for keeping macro- or micro-level assumptions from harming economic results.
Balancing Bias in Well Forecasting Is Key
The ability to assess all variables and their true impact, without pre-existing theories or biases, is ideal. Unfortunately, humans (even brilliant humans) are unable to do this as objectively as algorithms. Intuition, which represents a form of bias, can be a powerful tool for engineers to guide a modeling study for forecasting volumes. The key is understanding the variables over which the operator has the most leverage, and combining empirical model findings with experience to drive value and impact aligned with corporate strategy.
Does this mean empirical models should be relied on exclusively?
Not at all. Geoscientists and engineers are more important and impactful in this process than in conventional workflows. All available data should be integrated into the statistical model, not just the major and obvious variables (e.g., tonnage).
This should include interpreted seismic properties and petrophysical attributes. Interpretations are a valuable form of quality control and informed bias. Incorporating reservoir characterization into statistical models is a way to blend reasonable bias, engineering judgement, intuition, and algorithmic execution.
Cloud Platforms to Analyze Data
With the broader availability of platforms like Novi that are tuned for the industry and incorporate the latest best-practice statistical methods, teams can now objectively identify how impactful geologic variables are on various hydrocarbon streams, through time. Imagine being able to say definitively that curvature is a stronger production driver than proppant intensity at 12-month cumulative production. This is no longer a geophysicist's dream. These powerful insights have only recently become possible through the study of variable importance in large, data-driven models and machine learning algorithms.
As mentioned, intuition is key in this process; no model can work blind. Have you heard, or even said, "The more I know, the more I realize I don't know"? At the core of that common phrase is intuition. Intuition can be weak or strong; to be strong it must be built from many experiences and from skills practiced with feedback. It is not a matter of being right or wrong, but of knowing how to identify impact, value, risk, and upside. Intuition is a guide, and its strength comes from the individual identifying the weaknesses in their own intuition (i.e., "the more I don't know"). The combination of strong intuition and sound modeling practice is truly a sum greater than its parts.
- The industry must evolve its approach to pre-acquisition and pre-drill oilfield forecasting to avoid disastrous investment decisions that promise returns that statistically cannot be achieved.
- Cultural influence can play a large role in investment decisions. Compensation and recognition should balance the engineering "hero" culture with incentives for analysis rooted in multivariate regression models. Reward good decisions, not high forecasts.
- Large scale data analysis helps teams avoid the “planning fallacy”.
- Theory Induced Blindness and other psychological influences represent very real limitations to objective forecasts. Modern oil and gas financial modeling based on algorithm driven platforms can be true assistants in understanding well production drivers.
- Algorithm driven data analysis platforms that are designed for Upstream E&P companies are now available in the market – incorporating them into workflows and comparing outputs across methods is key to success.
- Through peer review and intuition, better investment decisions and plans can be made that leverage human interpretation and blend it into empirical models.