[URTeC 2020] Benchmarking Operator Performance in the Williston Basin using a Predictive Machine Learning Model (ID 2750)

How can you properly benchmark operators — to identify overperformers to learn from, or underperformers to acquire? Our machine learning models provide two potential benchmarks — the models’ view on rock quality (what we term geoSHAP) and the models’ own predictions.

In this paper, we investigate why EOG outperforms its peers, and why Marathon starts out as a high performer but then drops off rapidly. We also have an accompanying blog post preview!

Talk Details:

Wednesday, 11:30 AM. Theme 11: Business of Unconventionals, Maximizing Value and Reliability II

Figure: Actual oil compared to geoSHAP. Operators show a strong linear trend, except for EOG (and a few other smaller operators). Why does EOG outperform?

Abstract:

Operator performance relative to rock quality is of keen interest to public shareholders, acquisition-oriented companies, private equity, activist investors, and mineral owners. Here, we present a machine learning-based method to estimate total rock quality and identify under- or over-performance relative to expectation. We apply this method in the Williston Basin of North Dakota. We trained a decision-tree-based algorithm to predict oil production from geology, completions, and spacing features, using data provided by the NDIC. The model predicts production at 30-day increments out to IP day 720. We leverage SHAP values (SHapley Additive exPlanations) to build a total rock quality index that we term geoSHAP. This geoSHAP is then averaged for each operator, and we compare both actual oil and predicted oil to expectation for that relative rock quality. Average well performance varies approximately ±50% for a given geoSHAP, with some operators (such as EOG) even exceeding that +50% bound. We show that the vast majority of the difference in performance is due to controllable completions parameters such as proppant loading, which are captured in the model prediction. We then show how performance relative to model prediction can be used as its own benchmark for parameters not included in model training, chiefly artificial lift and operational choices. This machine learning model and benchmarking technique provides a powerful method to evaluate operator performance relative to expectation. It can be used by operators for surveillance purposes to improve well design, or to proactively identify underperformers for potential acquisitions. Royalty interest owners, institutional shareholders, and activist investors can use this technique as a third-party check on operators to identify where well designs are moving away from best practice in a given area.
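To make the benchmarking step concrete, here is a minimal sketch of the aggregation described above. It assumes per-well SHAP contributions have already been computed from a trained tree model (e.g. via `shap.TreeExplainer`); the operator names, feature groupings, and all numbers are synthetic placeholders, not data from the paper.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200

# Hypothetical per-well table. In practice the shap_* columns would be the
# sums of SHAP values for each feature group, produced by a trained model.
wells = pd.DataFrame({
    "operator": rng.choice(["OpA", "OpB", "OpC"], n),
    "shap_geology": rng.normal(0.0, 10.0, n),     # geology-feature SHAP sum
    "shap_completion": rng.normal(0.0, 5.0, n),   # completions-feature SHAP sum
})

# SHAP values are additive: prediction = base value + sum of contributions.
base_value = 100.0  # model's mean prediction (placeholder)
wells["predicted_oil"] = base_value + wells["shap_geology"] + wells["shap_completion"]
wells["actual_oil"] = wells["predicted_oil"] + rng.normal(0.0, 8.0, n)  # residual

# geoSHAP: total rock-quality index = SHAP contributions of geology features only.
wells["geoSHAP"] = wells["shap_geology"]

# Benchmark: average geoSHAP, actual, and predicted oil per operator, then
# express performance relative to the model's expectation.
bench = wells.groupby("operator")[["geoSHAP", "actual_oil", "predicted_oil"]].mean()
bench["vs_model_pct"] = 100.0 * (bench["actual_oil"] / bench["predicted_oil"] - 1.0)
print(bench)
```

An operator with a large positive `vs_model_pct` outperforms even after the model has credited its rock quality and completion design, which is the signal the paper attributes to factors outside the training features, such as artificial lift and operational choices.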
