AIME: Toward More Intuitive Explanations of Machine Learning Predictions – A Breakthrough from Researchers at Musashino University

Apr 23, 2024

TOKYO, April 23, 2024 /PRNewswire/ — Machine learning (ML) and artificial intelligence (AI) have emerged as key technologies for decision-making in fields such as automated driving, medical diagnostics, and finance, and current ML and AI models have even surpassed human intellectual capabilities in some regards. It is therefore important to understand, in an intuitive and comprehensible way, how these technologies arrive at their predictions and which features of the data most affect their outcomes.

To meet these demands, interpretable ML algorithms and explainable AI (XAI) methods, such as local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP), have been developed. These methods construct a simple approximate model and use it to explain how individual features in the dataset contribute to the black-box model's predictions and estimations. However, because existing interpretable ML and XAI methods derive their explanations through forward calculation on the black box, obtaining an explanation can sometimes be difficult.
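To make the forward-style workflow described above concrete, the following minimal Python sketch shows how a SHAP explanation is typically obtained: the explainer perturbs inputs and observes the model's forward predictions to attribute outputs to features. The model, dataset, and parameters here are illustrative choices, not part of the study.

    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    # Train a black-box model on a standard tabular dataset.
    X, y = load_breast_cancer(return_X_y=True)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # SHAP perturbs inputs and watches the model's forward predictions
    # to attribute each output to the input features.
    explainer = shap.Explainer(model.predict, X[:100])  # background sample
    shap_values = explainer(X[:5])                      # local explanations
    print(shap_values.values.shape)                     # (5, n_features)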

Against this backdrop, Associate Professor Takafumi Nakanishi from the Department of Data Science at Musashino University, Japan, has now introduced an innovative approach called approximate inverse model explanations (AIME), designed to provide more intuitive explanations. Dr. Nakanishi explains: “AIME essentially reverse-calculates AI decisions.” His study was published in Volume 11 of IEEE Access on September 11, 2023, and summarized in an engaging video.

AIME takes a unique approach: it estimates and constructs an approximate inverse operator for an ML or AI model, mapping the model's outputs back toward its inputs. This operator is used to estimate both the local and global importance of input features for the model's outputs. The method also introduces a representative similarity distribution plot, which uses representative estimation instances to show how a particular prediction relates to other instances, providing insight into the complexity of the target dataset's distribution.
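The press release does not spell out the underlying mathematics, but the inverse-operator idea can be sketched as fitting a linear map from model outputs back to inputs by least squares, for example via the Moore–Penrose pseudoinverse. The Python sketch below is one simplified reading of that idea; the function names and synthetic data are illustrative, not Dr. Nakanishi's implementation.

    import numpy as np

    def fit_inverse_operator(X, Y):
        # Estimate A' minimizing ||X - A' @ Y|| over the training set.
        # X: (n_features, n_samples) inputs; Y: (n_outputs, n_samples) model outputs.
        return X @ np.linalg.pinv(Y)  # A' has shape (n_features, n_outputs)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(10, 200))    # 10 features, 200 samples
    Y = rng.normal(size=(3, 10)) @ X  # stand-in for a model's outputs
    A_inv = fit_inverse_operator(X, Y)

    local_importance = A_inv @ Y[:, 0]             # inputs one prediction maps back to
    global_importance = np.abs(A_inv).sum(axis=1)  # aggregate importance per feature
    print(local_importance.shape, global_importance.shape)  # (10,) (10,)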

The study found that the explanations produced by AIME were simpler and more intuitive than those provided by LIME and SHAP. The method proved effective across a wide variety of datasets, including tabular data, handwritten digit images, and text. Additionally, the representative similarity distribution plot provided an objective visualization of the model's complexity.

The experiments also revealed that AIME was more robust than existing methods in handling multicollinearity among features. “It is particularly relevant in scenarios like explaining AI-generated art. Furthermore, self-driving cars will soon have recorders similar to the flight recorders in airplanes, which could be analyzed with AIME in post-accident analysis to ascertain the cause of an accident,” remarks Dr. Nakanishi.

This development can bridge the gap between humans and AI, fostering deeper trust.

Reference
Author: Takafumi Nakanishi

Title of original paper: Approximate Inverse Model Explanations (AIME): Unveiling Local and Global Insights in Machine Learning Models

Journal: IEEE Access

DOI: https://doi.org/10.1109/ACCESS.2023.3314336

About Associate Professor Takafumi Nakanishi
Takafumi Nakanishi is currently an Associate Professor in the Department of Data Science at Musashino University, Japan. He received his Ph.D. in engineering from the Graduate School of Systems and Information Engineering at University of Tsukuba, Japan, in 2006. He also served as an associate professor and chief researcher at the International University of Japan’s Global Communication Center from 2014 to 2018. His research interests include XAI, data mining, big data analysis systems, integrated databases, emotional information processing, and media content analysis. He is currently researching and developing systems using the unique explainable AI (XAI) technology AIME.

Media Contact:
Takafumi Nakanishi
+81 90-2239-9471
376539@email4pr.com

View original content to download multimedia: https://www.prnewswire.com/news-releases/aime-toward-more-intuitive-explanations-of-machine-learning-predictions–a-breakthrough-from-researchers-of-musashino-university-302124189.html

SOURCE Musashino University
