A New Way to Address Energy Problems With AI
By Blake Bixler
Artificial intelligence has become a buzzword in many industries, including energy. The term is broad, covering many ways of training machines to perform tasks. The most common approaches, such as neural networks and unsupervised learning, are built on correlations. These approaches produce probabilistic models whose limitations render them ineffective in complex problem-solving environments: limited data, an inability to handle abnormal behavior appropriately, high-stakes decisions where errors are too costly, varying delays between cause and effect, and a lack of explainability. In this article, I'll take a closer look at these challenges and propose an alternative: causal AI.
Current correlation-based AI challenges
Let’s take a closer look at each of the challenges for applying correlation-based AI in the energy industry:
Limited data – Most AI models require thousands of examples for training. In the energy industry, the data that exists is often too limited in quantity or availability to meet that need.
Importance of identifying abnormal behavior – Correlation-based AI models are good at predicting expected behavior in normal situations. However, these models exclude data that does not fit the trend (outliers), since that data appears random and skews the model. Yet while we may not have an explanation for the behavior, there are no random events in systems governed by conservation laws.
Too costly to be wrong – In high-stakes situations, the cost of being wrong is too high – financially, for human safety, and for environmental impact. Probabilistic models (all correlation-based AI models) do not provide the needed dependability in these cases.
Varying delay between cause and effect – Correlation-based AI models work best when cause and effect occur close together in time. This breaks down in situations where aging and stress develop over time, or corrosive micro-organisms lie dormant for extended periods.
Lack of explainability – Even when AI produces an accurate result, it does not provide insight into how that answer was determined, which lowers engineers’ confidence in AI’s predictions.
Causal AI as an alternative
Given these challenges, it’s crucial to develop AI models that offer more than just correlations. Causation-based AI, also known as causal inference or causal AI, is a sophisticated alternative. It identifies cause and effect relationships, enhancing reliability and explainability. Understanding root causes enables better anticipation of future effects.
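The distinction between correlation and causation can be made concrete with a small simulation. In this hypothetical example (all variable names and numbers are invented for illustration), a hidden confounder drives both a sensor reading and a failure signal, so the two correlate strongly even though neither causes the other. A model that encodes the actual mechanism correctly predicts that intervening on the sensor reading leaves the failure signal unchanged, which is what a pure correlation model would get wrong.

```python
import random

random.seed(0)

# Hidden confounder (e.g. reservoir temperature) drives both sensor reading A
# and failure signal B, so A and B correlate without A causing B.

def observe(n=10_000):
    """Observational data generated by the confounded mechanism."""
    rows = []
    for _ in range(n):
        temp = random.gauss(0, 1)          # hidden confounder
        a = temp + random.gauss(0, 0.1)    # sensor A tracks it
        b = temp + random.gauss(0, 0.1)    # failure signal B also tracks it
        rows.append((a, b))
    return rows

def correlation(rows):
    """Pearson correlation between the two columns."""
    n = len(rows)
    ma = sum(a for a, _ in rows) / n
    mb = sum(b for _, b in rows) / n
    cov = sum((a - ma) * (b - mb) for a, b in rows) / n
    va = sum((a - ma) ** 2 for a, _ in rows) / n
    vb = sum((b - mb) ** 2 for _, b in rows) / n
    return cov / (va * vb) ** 0.5

def intervene_on_a(n=10_000, forced_a=5.0):
    """do(A = forced_a): B still depends only on the confounder."""
    rows = []
    for _ in range(n):
        temp = random.gauss(0, 1)
        b = temp + random.gauss(0, 0.1)
        rows.append((forced_a, b))
    return rows

print(f"observed correlation(A, B): {correlation(observe()):.2f}")  # near 1.0
mean_b = sum(b for _, b in intervene_on_a()) / 10_000
print(f"mean B after do(A=5):       {mean_b:.2f}")                  # near 0
```

A correlation-based model trained on the observational data would predict that forcing A high also drives B high; the causal structure shows it does not.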
Unique additions to causal AI: SME insight
A recent method of expanding causal AI models’ capability is to include subject matter experts’ (SMEs) knowledge and intuition. SME insights, derived from personally analyzing extreme situations and developing hypotheses based on experience, can be translated into code to address high-stakes scenarios. A hybrid causal AI model, combining data and SME knowledge, can handle more extreme situations with less data, addressing outliers rather than discarding them.
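One simple way to picture a hybrid model is a data-driven estimator that handles the normal operating range, with an SME-authored rule taking over in an extreme regime where training data is scarce. The sketch below is purely illustrative: the threshold, the linear forms, and the pressure units are all invented, not drawn from any real Senslytics model.

```python
# Hypothetical hybrid estimator: data-driven in the normal range,
# SME rule in the extreme regime. All coefficients are invented.

def data_driven_estimate(pressure_psi: float) -> float:
    """Stand-in for a model fit on historical (normal-range) data."""
    return 0.8 * pressure_psi + 120.0

def sme_rule_estimate(pressure_psi: float) -> float:
    """Stand-in for an expert hypothesis about behavior beyond ~5,000 psi."""
    return 0.5 * pressure_psi + 1620.0

SME_REGIME_THRESHOLD = 5000.0  # psi; where the expert says the trend changes

def hybrid_estimate(pressure_psi: float) -> float:
    if pressure_psi > SME_REGIME_THRESHOLD:
        # Outlier regime: apply expert knowledge rather than discard the point
        return sme_rule_estimate(pressure_psi)
    return data_driven_estimate(pressure_psi)

print(hybrid_estimate(1000.0))  # normal range: data-driven model answers
print(hybrid_estimate(6000.0))  # extreme range: SME rule answers
```

The design point is that the outlier regime is routed to encoded expertise instead of being dropped from training, which is what lets the hybrid model cover situations the data alone cannot.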
Still, a causal AI model cannot predict everything. Acknowledging that some situations are too critical to be wrong means that the AI must admit when it does not know the answer.
Causal AI in practice
With the goal of estimating gas/oil ratio (GOR) from mud gas logs and other data available during drilling, Chevron piloted Senslytics' CausX AI, a causal AI model that uses a multi-view approach. Views were built from hydrocarbon ratios, depth and pressure, and geologic age. When the views do not agree, the CausX AI platform tells the engineer that it is "unable to interpret the data." In the example to the right, CausX AI outperformed three existing methods for estimating GOR. Notice that for Well 4 there are no points on the graph for any of the estimation methods: Formulas 1, 2, and 3 produced results that were off by more than 80 percent, while the CausX AI model recognized that it could not interpret the data. As a result, technical experts could focus on the one well (Well 4) that needed their insight, instead of dividing their attention and efforts across nine wells.
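The agree-or-abstain behavior described above can be sketched in a few lines. This is not the CausX AI implementation, which is proprietary; the view inputs, tolerance, and consensus rule here are assumptions chosen only to show the pattern of multiple independent estimates with abstention on disagreement.

```python
from statistics import median

# Views must agree within 15% of their median for the model to answer.
# The tolerance and the use of a median consensus are invented for this sketch.
AGREEMENT_TOLERANCE = 0.15

def estimate_gor(view_estimates):
    """Return a consensus GOR estimate, or None to signal
    'unable to interpret the data'."""
    m = median(view_estimates)
    if all(abs(v - m) <= AGREEMENT_TOLERANCE * m for v in view_estimates):
        return m
    return None

# Views agree -> confident answer
print(estimate_gor([1020.0, 980.0, 1010.0]))  # prints 1010.0
# Views conflict -> abstain and route the well to a human expert
print(estimate_gor([1020.0, 400.0, 1010.0]))  # prints None
```

Abstention is what concentrates expert attention: only the wells where the views conflict come back for human review.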
In addition to demonstrating improved accuracy and dependability, causal AI must provide the reasoning behind its interpretation or estimation to establish trust. The logic behind a conclusion should make sense to the engineers and SMEs using the models; this is one of the final key checks in deciding whether an AI model's interpretation should be trusted. In the example above, CausX AI would provide an answer such as: "Hydrocarbon ratios from the mud gas log are not consistent with those found in the vicinity. Wells at similar depths, pressures, and geologic ages typically have a higher GOR than the hydrocarbon ratios estimate."
Moving forward
Causal AI has a multitude of potential use cases in energy (e.g., pipeline corrosion, flow assurance, hydrogen PEM degradation) and other scientific industries. The challenges of limited data, identifying abnormal behavior, high-stakes decision-making, varying delays between cause and effect, and lack of explainability are no longer insurmountable obstacles. Causal AI can address them in a way that increases trust and keeps engineers involved in the decision-making process, while allowing them to focus their time on the most crucial issues. The result: causal AI helps prevent events that are catastrophic financially, environmentally, or to human health.
Blake Bixler
Blake Bixler is CEO of Senslytics and an Energy Tech Advisor for Cortado Ventures. Senslytics is a causal AI start-up that empowers engineers and energy professionals to prevent costly failures that were previously unpredictable, reducing risks and improving capital efficiency.