‘AI that humans can understand’: Explaining explainable AI

Machine learning and deep learning models are generally effective for classification and prediction, but they are virtually never perfect. Models almost always produce some percentage of false positive and false negative predictions. While such errors can be tolerated in some cases, they are problematic for critical tasks. For example, if a drone weapon system misidentifies a school as a terrorist base, it could unintentionally kill innocent children and teachers unless human operators override the decision to attack.

Before allowing or canceling an attack, operators need to know why the AI classified the school as a target and how uncertain that decision is. There are documented cases of terrorists using schools, hospitals, and religious facilities as bases for missile attacks. Is the school in question one of them? Is there intelligence or recent observation indicating that it is currently occupied by such terrorists? Are there reports or observations that no students or teachers are present?

Without such explanations, the model is essentially a black box, and that is a serious problem. In high-stakes AI decision-making (including decisions with not only life-or-death consequences but also monetary or regulatory ones), it is important to have a clear understanding of which factors feed into the model’s decisions.

What is explainable AI?

Explainable AI (XAI), also known as interpretable AI, refers to machine learning and deep learning methods that can explain their decisions in a way humans can understand. The ultimate hope is that XAI methods will eventually be as accurate as black-box models.

Explainability can be either ante-hoc (a directly interpretable white-box model) or post-hoc (a technique for explaining an already trained model or its predictions). Ante-hoc models include explainable neural networks (xNN), explainable boosting machines (EBM), supersparse linear integer models (SLIM), reverse time attention models (RETAIN), and Bayesian deep learning (BDL).

Post-hoc explanation methods include Local Interpretable Model-agnostic Explanations (LIME), Accumulated Local Effects (ALE) plots, one- and two-dimensional Partial Dependence Plots (PDP), Individual Conditional Expectation (ICE) plots, decision tree surrogate models, and local and global visualizations of model predictions.
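As a concrete illustration of the post-hoc approach, scikit-learn ships partial dependence and ICE plotting utilities that work on any fitted estimator. The sketch below is a minimal example; the dataset, model, and feature choices are illustrative assumptions, not a recommendation.

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay
import matplotlib.pyplot as plt

# Illustrative dataset and black-box model; any fitted estimator works
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor().fit(X, y)

# One-dimensional PDP plus ICE curves for two features of the trained model
PartialDependenceDisplay.from_estimator(model, X, features=["bmi", "bp"], kind="both")
plt.show()
```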

How XAI Algorithms Work

If you have clicked through the links above and read the documentation, you can skip this section. Otherwise, here is a brief summary. The first five are ante-hoc models; the rest are post-hoc methods.

Explainable Neural Networks

Explainable neural networks (xNNs) are based on additive index models, which can approximate complex functions. The components of these models are called projection indices and ridge functions. An xNN is a neural network designed to learn an additive index model, with sub-networks that learn the ridge functions. The first hidden layer uses a linear activation function, and each sub-network typically consists of several fully connected layers with non-linear activation functions.

An xNN can be used as an explainable predictive model built directly from the data. It can also serve as a surrogate model to explain other non-parametric models, such as tree-based methods and feed-forward neural networks. See Wells Fargo’s 2018 paper on xNN for details.
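A minimal PyTorch sketch of the additive index structure described above follows. The layer sizes, names, and number of ridge functions are illustrative assumptions, not the implementation from the Wells Fargo paper.

```python
import torch
import torch.nn as nn

class ExplainableNN(nn.Module):
    """Sketch of an additive index model f(x) = sum_k g_k(beta_k . x)."""
    def __init__(self, num_features, num_ridge=5, hidden=16):
        super().__init__()
        # First hidden layer: linear projections beta_k . x (the projection indices)
        self.projection = nn.Linear(num_features, num_ridge, bias=False)
        # One small fully connected sub-network per projection learns a ridge function g_k
        self.ridge_fns = nn.ModuleList([
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                          nn.Linear(hidden, hidden), nn.ReLU(),
                          nn.Linear(hidden, 1))
            for _ in range(num_ridge)
        ])

    def forward(self, x):
        z = self.projection(x)                         # (batch, num_ridge) projection indices
        outputs = [g(z[:, k:k + 1]) for k, g in enumerate(self.ridge_fns)]
        return torch.stack(outputs, dim=0).sum(dim=0)  # additive combination of ridge functions
```

Because the prediction is a sum of one-dimensional ridge functions, each learned g_k can be plotted against its projection index, which is what makes the model directly interpretable.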

Explainable Boosting Machine

As mentioned in our review of Azure AI and Machine Learning, Microsoft has released the InterpretML package as open source and has integrated it into Azure Machine Learning’s explanation dashboard. Among the many features of InterpretML is Microsoft Research’s “glassbox” model, the Explainable Boosting Machine (EBM).

EBM is designed to be easy to interpret while matching the accuracy of random forests and boosted trees. It is a generalized additive model with several improvements. EBM learns each feature function using modern machine learning techniques such as bagging and gradient boosting. Because the boosting procedure is restricted to learning one feature at a time, cycling through the features sequentially with a very low learning rate, feature order does not matter. EBM can also detect and include pairwise interaction terms. The C++ and Python implementations are parallelizable.
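A minimal sketch of training and inspecting an EBM with the open-source interpret package might look like the following; the dataset and default parameters are illustrative assumptions.

```python
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# Illustrative dataset; any tabular classification data works
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier()  # glassbox model from InterpretML
ebm.fit(X_train, y_train)

show(ebm.explain_global())                        # per-feature shape functions and importances
show(ebm.explain_local(X_test[:5], y_test[:5]))   # explanations for individual predictions
```

The global explanation exposes the learned per-feature functions, while the local explanation breaks a single prediction into additive per-feature contributions.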

Supersparse Linear Integer Model

The supersparse linear integer model (SLIM) is an integer programming problem that directly optimizes measures of accuracy (the 0-1 loss) and sparsity (the ℓ0 semi-norm) while restricting coefficients to a small set of co-prime integers. SLIM can be used to create data-driven scoring systems useful for medical screening.
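Stated as an optimization problem, a sketch of the objective looks like the following, where $C_0$ trades off sparsity against training accuracy and $\mathcal{L}$ is the small allowed set of integer coefficient vectors:

```latex
\min_{\lambda}\;\; \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}\!\left[\, y_i\, \lambda^{\top} x_i \le 0 \,\right]
\;+\; C_0\,\lVert \lambda \rVert_0
\qquad \text{subject to } \lambda \in \mathcal{L}
```

The paper’s full formulation also adds a small ℓ1 term to break ties between equally sparse solutions; the sketch above omits it.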

Reverse Time Attention Model

The Reverse Time Attention (RETAIN) model is an interpretable predictive model for electronic health record (EHR) data. RETAIN achieves high accuracy while remaining clinically interpretable. It is based on a two-level neural attention model that detects influential past visits and the significant clinical variables within those visits (e.g., key diagnoses). RETAIN mimics the way physicians review EHR data in reverse chronological order, so recent clinical visits are likely to receive higher attention. The test case discussed in the RETAIN paper predicted heart failure from longitudinal diagnosis and medication records.
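A minimal PyTorch sketch of the two-level attention idea is shown below. The layer names and dimensions are assumptions for illustration; this is not the authors’ implementation.

```python
import torch
import torch.nn as nn

class RetainSketch(nn.Module):
    """Sketch of RETAIN-style two-level attention over a sequence of visits."""
    def __init__(self, num_codes, emb_dim=128, hidden=128):
        super().__init__()
        self.embed = nn.Linear(num_codes, emb_dim)                   # v_i: embedding of visit i
        self.rnn_alpha = nn.GRU(emb_dim, hidden, batch_first=True)   # visit-level attention RNN
        self.rnn_beta = nn.GRU(emb_dim, hidden, batch_first=True)    # variable-level attention RNN
        self.w_alpha = nn.Linear(hidden, 1)                          # scalar weight per visit
        self.w_beta = nn.Linear(hidden, emb_dim)                     # vector weight per visit
        self.out = nn.Linear(emb_dim, 1)                             # e.g. a heart-failure logit

    def forward(self, x):
        # x: (batch, visits, num_codes), visits ordered oldest -> newest
        v = self.embed(x)
        rev = torch.flip(v, dims=[1])                  # process visits in reverse time order
        g, _ = self.rnn_alpha(rev)
        h, _ = self.rnn_beta(rev)
        alpha = torch.softmax(self.w_alpha(g), dim=1)  # which visits matter
        beta = torch.tanh(self.w_beta(h))              # which variables matter within a visit
        context = (alpha * beta * rev).sum(dim=1)      # attention-weighted sum of visits
        return self.out(context), alpha, beta          # the attentions expose the explanation
```

Inspecting alpha and beta for a given patient shows which visits, and which variables inside those visits, drove the prediction.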

Bayesian Deep Learning

Bayesian deep learning (BDL) provides principled uncertainty estimates in deep learning architectures. In essence, BDL models an ensemble of networks by drawing the weights from a learned probability distribution, which addresses the problem that most deep learning models cannot model their own uncertainty. BDL typically doubles the number of parameters, since each weight is represented by a distribution (for example, a mean and a variance) rather than a single value.
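One common way to realize this is a mean-field variational layer, where each weight gets a learned mean and log-variance (hence the doubled parameter count) and predictions are averaged over several sampled networks. The sketch below is an illustrative PyTorch example, not a specific library’s API; a real training loop would also need a KL-divergence penalty against a prior, which is omitted here.

```python
import torch
import torch.nn as nn

class BayesianLinear(nn.Module):
    """Mean-field variational linear layer: each weight has a mean and a log-variance."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.w_logvar = nn.Parameter(torch.full((out_features, in_features), -5.0))
        self.b_mu = nn.Parameter(torch.zeros(out_features))
        self.b_logvar = nn.Parameter(torch.full((out_features,), -5.0))

    def forward(self, x):
        # Sample weights from the learned distribution (reparameterization trick)
        w = self.w_mu + torch.randn_like(self.w_mu) * torch.exp(0.5 * self.w_logvar)
        b = self.b_mu + torch.randn_like(self.b_mu) * torch.exp(0.5 * self.b_logvar)
        return nn.functional.linear(x, w, b)

# Uncertainty estimate: run several stochastic forward passes and look at the spread
layer = BayesianLinear(10, 1)
x = torch.randn(4, 10)
preds = torch.stack([layer(x) for _ in range(30)])
print(preds.mean(dim=0), preds.std(dim=0))  # mean prediction and its uncertainty
```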


Source: ITWorld Korea by www.itworld.co.kr.
