
How can business decision makers trust machine learning and deep learning predictions?

Over the past few years, there has been a substantial leap forward in the capabilities of machine learning (ML). ML techniques such as Random Forests, Gradient Boosted Trees, and Support Vector Machines, as well as deep learning/artificial intelligence (AI) techniques such as Convolutional Neural Networks, Autoencoders, and Long Short-Term Memory (LSTM) networks, can enable highly accurate predictions in a wide variety of contexts. The availability of large volumes of data, combined with virtually unlimited computational power, has enabled these highly non-linear techniques to predict risk, response, opportunity, and image detection outcomes with remarkable accuracy. The improved predictions are in turn resulting in more effective business decisions across fraud detection, customer identification and engagement, and asymmetric risk identification, among others.

While enthusiastic about this improved business effectiveness, organizational leaders are rightly asking for the reasons behind ML and AI predictions. Business and analytical leaders have a responsibility to understand the nature of the predictions so that they can make the right decisions based on them. Our experience suggests that the business importance of model explainability varies by application. At one end lie highly regulated environments such as finance and rating agencies, where every factor that contributes to risk has to be thoroughly vetted and documented; here model explainability is essentially mandatory. At the other extreme, in applications such as image annotation, where the specific combination of pixels that drives a prediction carries little business meaning, explainability is perhaps not that critical. The vast majority of applications, however, lie somewhere along this continuum, where model explainability is a desirable feature that builds business trust and supplies the insights and understanding that enable adoption.

There are multiple levels of model explainability: one at a macro level (for example, whether a predictor is positively or negatively correlated with the target), and one at a micro level, where the interest is in explaining how the predictors contributed to an individual prediction (for example, what factors led to a specific transaction receiving a high risk score, or a rank ordering of features by their importance). Both are important: in many application domains they are crucial for business buy-in, and quite often decisions on how a score is used are driven by insights at this level. Deriving this level of explainability is not straightforward. Even for simple models such as linear regression or logistic regression, explainability gets tricky as the number of predictors increases, because of correlations among the predictors. It gets more complicated still for black-box AI and ML models.

At their core, ML and AI models begin by mapping/transforming the input data elements into features. These features are then used for prediction. The mathematics of these input data transformations, called feature maps, differs across ML and AI techniques. What is common to all of them, however, is that feature maps are encodings of the input data that capture the essential information required to arrive at the target and prediction outcomes. An effective feature map enables predictions over a wide range of inputs, yet its mathematical structure can be conceptually compact. For example, feature maps in Random Forests and Gradient Boosted Trees are formed by dividing the input space into rectangular regions. Feature maps in Convolutional Neural Networks (CNNs) are formed by moving a filter over the input space and pooling the filtered results across the input space. It is possible to explain the predictions by characterizing the rectangular regions of the input space, or the pooled results of the feature maps, in terms of the business domain terminology. Once these are explained, business users can determine the drivers behind an ML or AI prediction and, with those drivers in perspective, make informed and ethical decisions.
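To make this concrete, the short sketch below trains a small gradient boosted model and prints the rectangular regions (as if/else rules) of one of its trees using scikit-learn, expressed with business-friendly feature names. The data, feature names, and model settings are illustrative assumptions, not a prescribed implementation.

# Minimal sketch (assumed setup): surfacing the rectangular regions a
# tree-based model carves out of the input space, using illustrative
# business-friendly feature names and synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import export_text

rng = np.random.default_rng(0)
feature_names = ["transaction_amount", "account_age_days", "num_prior_claims"]
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = GradientBoostingClassifier(n_estimators=50, max_depth=3).fit(X, y)

# Each tree in the ensemble partitions the input space into rectangular
# regions; printing one tree shows those regions as readable decision rules.
first_tree = model.estimators_[0, 0]
print(export_text(first_tree, feature_names=feature_names))

A domain expert can then read these rules (for example, a region defined by a high transaction_amount and a low account_age_days) and judge whether they correspond to plausible business drivers.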

There are some interesting new developments that are specifically geared towards extracting explainability from black-box models. One such approach is LIME (Local Interpretable Model-Agnostic Explanations), which extracts explanations from a model without really getting into the guts of the model. The approach essentially involves perturbing a given input X around its neighborhood (XN) and gathering the model's predictions YN. A linear model, or another interpretable model (such as a classification and regression tree or association rules), is then calibrated on these (XN, YN) points, giving greater weight to points within XN that are closer to X. This surrogate model is accurate locally (around the specific point of interest), but not globally. Business knowledge, such as a particular fraud type, can be useful in defining the neighborhood.
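The sketch below illustrates this idea with a hand-rolled local surrogate (the lime package provides a fuller implementation); the black-box model, data, feature names, and kernel width are illustrative assumptions.

# Minimal sketch (assumed setup): a LIME-style local surrogate built by hand.
# The black-box model, data, feature names, and kernel width are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
X_train = rng.normal(size=(2000, 4))
y_train = (X_train[:, 0] * X_train[:, 1] + X_train[:, 2] > 0).astype(int)
black_box = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)

x = X_train[0]                                   # the prediction to explain
X_N = x + rng.normal(scale=0.5, size=(500, 4))   # perturb x around its neighborhood
y_N = black_box.predict_proba(X_N)[:, 1]         # black-box predictions on the neighborhood

# Weight neighbors by proximity to x, then fit an interpretable linear surrogate.
weights = np.exp(-np.sum((X_N - x) ** 2, axis=1) / 0.5 ** 2)
surrogate = Ridge(alpha=1.0).fit(X_N, y_N, sample_weight=weights)

# The coefficients approximate each feature's local contribution near x.
print(dict(zip(["amount", "tenure", "velocity", "channel"], surrogate.coef_.round(3))))

The surrogate's coefficients are meaningful only in the neighborhood of x, which is exactly the local (micro-level) explanation described above.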

SHAP (SHapley Additive exPlanations) is another approach to explainability, wherein the contribution of each feature towards the prediction outcome is evaluated for each observation. While the objective is to interpret the feature contributions as the impact of the respective features on the outcome, this needs to be done with caution, as similar or correlated features may share the impact in multiple ways. Business knowledge can again be used to group similar features and their contributions. For example, comorbidities in the case of a patient may collectively impact a disease prognosis.
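As a minimal sketch, assuming the shap package and a tree-based model, the snippet below computes per-observation feature contributions; the data, feature names, and model are hypothetical placeholders.

# Minimal sketch (assumed setup): per-observation feature contributions with
# the shap package for a tree-based model. Data and feature names are illustrative.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
X = pd.DataFrame(rng.normal(size=(1000, 3)),
                 columns=["bmi", "blood_pressure", "num_comorbidities"])
y = X["bmi"] + 2 * X["num_comorbidities"] + rng.normal(scale=0.3, size=1000)

model = RandomForestRegressor(n_estimators=100).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Contributions of each feature to the first observation's prediction;
# grouping related columns (e.g., comorbidities) is a business-knowledge step.
print(dict(zip(X.columns, shap_values[0].round(3))))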

We recommend utilizing multiple explainability methods in parallel and embedding them within a well-designed user interface that allows users to interactively explore the combined implications of the methods. Business leaders and domain experts, working in conjunction with data scientists, can then help interpret the predictions and select the features that, in their judgment, best identify the fraud, risk, and engagement drivers.
