A machine learning risk management framework for sustainable oil and gas solutions



Landmark December 16, 2021

Artificial intelligence (AI) and associated machine learning (ML) algorithms and models are generating new insights from the dark data in the oil and gas industry. As AI/ML models are increasingly deployed at scale, on the edge in remote fields and offshore platforms, their potential to transform the oil and gas industry grows across the entire field lifecycle. With advances in model building, such as self-learning models, high velocity in building, updating, and tracking models is highly desirable. Though AI/ML models provide significant advantages for business transformation, their development, deployment, and operationalization require solution sustainability. A sustainable machine learning solution requires that the risk associated with AI/ML model-based decision making be verified, documented, and updated on a regular basis. Thus, a framework for model risk management should be incorporated into the process of building the machine learning-based business solution.

In the context of the oil and gas industry, AI/ML model refers to hybrid models using machine learning theories and techniques on operational, scientific, engineering, financial, economic, and business data to generate actionable insights for real-time resource optimization.

Currently, there are no risk management guidelines for AI/ML model management in the oil and gas industry. However, a large body of guidelines, recommendations, and frameworks exists in other industries. In the banking and finance industry, the two major documents that outline model risk management are the Office of the Comptroller of the Currency (OCC) Bulletin 2011-12 and the Federal Reserve System's Supervision and Regulation (SR) Letter 11-7. It is evident from these regulations that government and regulatory bodies want to ensure that conceptual soundness, biases, and business impact are well understood when ML models are used for issuing credit cards, loans, mortgages, and other financial instruments. The robustness of ML models and their adherence to the regulatory compliance frameworks mentioned above are essential to reduce risk to the banks, the consumers, and all stakeholders.

Along the same lines, the oil and gas industry must make sure that the risks associated with using AI/ML models are minimal for field operations, health, safety, and environment (HSE), and investment decisions. Hence, a framework for mitigating unintended consequences of an AI/ML solution is a proactive requirement for making business decisions with confidence. AI/ML model risk management should include knowledgeable development and implementation processes that are contextualized for the phase of the oil-well lifecycle. These processes should be consistent with end-user goals and enterprise business policy guidelines.


The proposed AI/ML model risk management framework, ATOPIC, outlines the guidelines to consider when building a machine learning solution. The framework supports solution sustainability and explainability, and minimizes the impact of data and algorithm bias. The governing class of the ATOPIC framework is Audit, with five subclasses: Theoretical soundness, Outline biases, Process provenance, Insights and impact provenance, and Communicate and update.

Figure 1: ATOPIC AI/ML Model Risk Management Framework

Concepts in the ATOPIC AI/ML model risk management framework are as follows:


When a machine learning solution is developed or delivered by an external entity, auditing the delivered model becomes an important step before operationalizing the solution. Model auditing must also be considered when building the technical solution itself. The data, process, and model provenance all need to be verified during the audit; thus, provenance should be designed to be logical and traceable. The recommended best practice for a model audit must include the following elements:

  • Theoretical soundness
  • Biases and model sensitivity analysis
  • Process provenance
  • Impact provenance
  • Communications

The audit process should address and document all these elements.

Figure 2: ATOPIC AI/ML Model Risk Management Framework


Theoretical soundness is the foundation of a meaningful solution. When developing ML models for a business solution, for example, image recognition, the reason for selecting a single algorithm or an ensemble of algorithms should be documented. The input parameter requirements, data size, data set selection, and the corresponding output parameters must be governed by causality. If the solution relies on an iterative, convergence-based algorithm, then the convergence criteria must be explicitly defined and explainable. In a compliance framework, an AI/ML risk mitigation team independently validates and documents the theoretical claims and limits of the solution.
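For instance, an explicit, documentable convergence criterion can be written in a few lines. The sketch below is illustrative only; the threshold names and values are assumptions, not part of any Landmark product:

```python
def converged(loss_history, eps=1e-4, patience=3):
    """Explicit convergence criterion: stop when the loss has changed by
    less than `eps` for `patience` consecutive iterations. The values of
    `eps` and `patience` are illustrative and should themselves be
    documented so an audit team can verify the criterion."""
    if len(loss_history) <= patience:
        return False
    deltas = [abs(loss_history[-i] - loss_history[-i - 1])
              for i in range(1, patience + 1)]
    return all(d < eps for d in deltas)

# Early in training the loss is still falling, so not converged yet
print(converged([1.0, 0.5, 0.3, 0.2]))                    # False
# The last three changes are all below eps, so converged
print(converged([1.0, 0.5, 0.49999, 0.49998, 0.499975]))  # True
```

Recording the criterion itself, not just the trained model, is what makes the convergence claim auditable.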


In AI/ML model exercises, three major kinds of bias can have a significant impact on outcomes and could derail the business decision-making process. These biases are data (inherent) bias, unintentional (algorithmic) bias, and user bias.


Data or inherent bias can be addressed and documented with DecisionSpace® 365’s Data Foundation and Integration Foundation options to handle data volumes, new data types, and data quality workflows specifically for oil and gas. Because data quality plays a vital role and, in general, requires good data governance processes at the enterprise level, an enterprise’s past governance standards may need to be updated to help reduce data-related errors.


A machine learning solution may unintentionally produce repeatable, systemic errors because of skewed training samples, the weighting of different input parameters, or both. Some unintentional bias can be mitigated by normalization or standardization of the training data and careful technical assessment of the various weighting factors. For example, if an image processing solution is developed using seismic images with a high signal-to-noise ratio and later applied to noisier data, the results could become questionable. Having an AI/ML risk mitigation team determine the limits of acceptable solutions keeps the business team aware of the risks and avoids black-box decision making.
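One standard mitigation named above, z-score standardization of the training data, can be sketched as follows; the feature values are hypothetical:

```python
import numpy as np

def standardize(X):
    """Z-score standardization: zero mean and unit variance per feature,
    so no single input parameter dominates purely because of its scale
    (e.g. depth in meters vs. porosity as a fraction)."""
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    std[std == 0] = 1.0  # guard against constant features
    return (X - mean) / std

# Hypothetical training matrix: rows are samples, columns are features
X_train = np.array([[1000.0, 0.05], [2000.0, 0.07], [3000.0, 0.09]])
X_scaled = standardize(X_train)
```

After this step, each column of `X_scaled` has mean 0 and standard deviation 1, which is what prevents scale-driven weighting bias.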


User bias is the most critical in the oil and gas business because end users make decisions with direct impacts on business outcomes, and part of the decision-making process gets stored in the form of structured and unstructured technical data. Also, subject matter expert experience is a form of tribal knowledge that is not captured anywhere but plays a pivotal role in business decision making. One way to mitigate user bias is through technical training and user-centric application design, so that users can act on and override out-of-norm outcomes. If undesirable results are produced, users should have the ability to flag them and provide feedback in the system.

At Halliburton Landmark, we can help to mitigate this bias through our SmartDigital® co-innovation service using a design thinking process which combines business and user centered approaches. To help minimize bias risk, bias mitigation steps, such as implementing a bias-weighted scheme for a solution, should be documented.
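The flag-and-feedback mechanism described above can be sketched as a simple record that an application stores alongside the prediction. The schema below is a hypothetical illustration, not a Landmark API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FeedbackFlag:
    """A user-raised flag on an out-of-norm model outcome, so that
    expert (tribal) knowledge is captured rather than lost."""
    model: str          # which deployed model produced the outcome
    prediction_id: str  # identifier of the flagged prediction
    reason: str         # free-text expert justification
    raised_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Hypothetical flag raised by a geoscientist reviewing a fault pick
flags = [FeedbackFlag("fault_picker_v2", "pred-0042",
                      "picked fault crosses a known salt body")]
```

Persisting such records gives the risk mitigation team a concrete trail of expert overrides to review.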


The increasing velocity of AI/ML model development and deployment requires creating and maintaining an audit trail of the training, validation, and deployment process. The key advantages of machine learning provenance are reproducibility and extensibility.


Reproducibility, traceability, and repeatability of the AI/ML modeling process help to avoid the risk of using the wrong model, and are the most crucial factors for business adoption and transformation. The attrition rate for data science skills is very high; thus, provenance of the method, process, and metadata becomes critical. If the solution cannot be reproduced by an independent team, then it is already a lost investment.
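A minimal provenance record, enough for an independent team to trace which data and hyperparameters produced a model run, might look like the following sketch; the field names and values are hypothetical:

```python
import hashlib
import json
from datetime import datetime, timezone

def record_provenance(model_name, dataset_summary, hyperparams):
    """Build a provenance entry: a fingerprint of the training data plus
    the hyperparameters and a timestamp, so a model run can be traced
    and reproduced later, even after team turnover."""
    data_bytes = json.dumps(dataset_summary, sort_keys=True).encode()
    return {
        "model": model_name,
        "data_fingerprint": hashlib.sha256(data_bytes).hexdigest(),
        "hyperparameters": hyperparams,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

entry = record_provenance(
    "fault_picker_v2",                    # hypothetical model name
    {"survey": "demo", "samples": 1200},  # hypothetical dataset summary
    {"learning_rate": 0.01, "epochs": 50},
)
```

Because the data fingerprint is deterministic, any later retraining on the same data summary produces the same hash, which is what makes the trail verifiable.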


Extensibility of the AI/ML models should be achievable with minimal effort as data dimensionality or data properties evolve as a result of insights generated from AI/ML model implementations.


The insights and impact generated by using AI/ML models should be captured as part of business metrics, though this is not yet common practice in the industry. As the models evolve over time because of the enhanced properties of the acquired data or system, the total value added, or cumulative business impact, must be tracked over time. For example, if a drilling solution helps reduce total drilling time by using preset optimized drilling performance metrics to achieve operational efficiency (dollars/minutes), customer satisfaction, and HSE metrics, the solution must be cumulatively tracked for realized business impact, both tangible and intangible. The tracking needs to be broken down into small, measurable steps.
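Tracking impact in small measurable steps can be as simple as keeping a running total of per-step savings; the dollar figures below are invented for illustration:

```python
def cumulative_impact(step_savings):
    """Running total of per-step savings (in dollars), so realized
    business impact is tracked incrementally rather than claimed as a
    single lump sum at the end."""
    totals, running = [], 0.0
    for saving in step_savings:
        running += saving
        totals.append(running)
    return totals

# Hypothetical per-well savings from reduced drilling time, in dollars
savings = [12_000.0, 8_500.0, 15_250.0]
print(cumulative_impact(savings))  # [12000.0, 20500.0, 35750.0]
```

Each entry in the output corresponds to one measurable step, so the realized impact curve can be reviewed at any point in the solution's life.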

The dollar impact of an AI/ML solution should be measurable and designed to improve the time-to-value metrics for a given business, and it compounds over time. This last mile remains largely uncaptured by businesses and is a major factor limiting at-scale implementations of AI/ML solutions.


Any machine learning solution addresses a particular business problem, whether it is to simplify decision making (picking the best well design), automate arduous tasks (picking faults or horizons), or innovate a new solution (machine learning-driven data quality or asset-based EOR strategy recommendation). The hybrid solution is dynamic in nature, since the training model needs to be periodically updated to stay in sync with business needs. Thus, continuous monitoring of model accuracy, drift, and technical bounds is expected. Some solutions require more than one model; some models may require monthly updates, while others require weekly or daily updates. It is a business imperative to test AI/ML models for validation and risk mitigation at defined frequencies. Thus, there should be clear communication and a plan outlined between the business and solution teams on model management metrics. Businesses should standardize model governance, as part of the data governance policy, in collaboration with the data science team to avoid unintended business consequences.
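As a minimal sketch of such monitoring, the check below flags a model for revalidation when its recent accuracy drifts too far below the accuracy recorded at deployment; the tolerance is an assumed value that the business and solution teams would agree on:

```python
def needs_retraining(baseline_accuracy, recent_accuracy, tolerance=0.05):
    """Return True when recent accuracy has drifted more than `tolerance`
    below the baseline recorded at deployment. The 0.05 default is
    illustrative; the agreed model management plan should define it."""
    return (baseline_accuracy - recent_accuracy) > tolerance

print(needs_retraining(0.92, 0.84))  # True: 8-point drop exceeds tolerance
print(needs_retraining(0.92, 0.90))  # False: within tolerance
```

Running a check like this at the defined testing frequency turns "continuous monitoring" from a policy statement into an automated gate.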

For example, during the exploration phase, AI/ML models developed and deployed to pick faults for a given seismic volume should produce consistent outcomes so that users can work with greater confidence. The model building, retraining, and tracking should be transparent to the user, and the differences between trained models, and how to pick the best and most up-to-date model (if the application provides such flexibility), must be communicated to users.


The broader objective of an AI/ML solution is to drive actionable insights for real-time resource optimization. To achieve real-time optimization goals, we need a consistent framework for at-scale solution adoption and business transformation. The purpose of the risk management framework is to reduce technical debt for the data science team and subject matter experts, and to minimize risk for the business. An end user's (drilling or completion engineer, or geoscientist) objective is to use the application to solve a business problem, not to become a data scientist or data guru. It is in the best interest of any organization to spend the time and resources to validate a machine learning business solution using a consistent framework. It is desirable to have an independent AI/ML risk mitigation team ascertain the desired business outcome to minimize unintentional risk in decision making.

The proposed ATOPIC AI/ML model risk mitigation framework is advisory guidance for the consistent adoption of AI/ML solutions in the oil and gas industry to enable, expedite, and drive sustainable business transformation.


The authors would like to acknowledge the contributions to this article of Ashwani Dev, PhD, who provided expert guidance and insight.

Expert: Dr. Satyam Priyadarshy | Managing Director-India Centre, Technology Fellow and Chief Data Scientist

Duane Moonsammy | Regional General Manager, North America


To learn more about how the SmartDigital co-innovation service can help your organization benefit from our AI/ML model risk management framework, contact us.