Explainable and Interpretable AI/ML
Research in Machine Learning (ML) and Artificial Intelligence (AI) has grown rapidly in recent years, with many types of models proposed to comprehend and predict patterns and trends in data originating from diverse domains. As these models become increasingly complex, it also becomes harder for users to assess and trust their results, since their internal operations are largely hidden inside black boxes. The explanation of ML/AI models is currently a hot topic in the Information Visualization (InfoVis) community, with results showing that providing insights into ML models can lead to better predictions and improve the trustworthiness of the results.
Our current focus in this research area is twofold: (1) methodological, providing surveys and guidance through qualitative and quantitative analyses of the field's literature and research community, and (2) technical, developing Visual Analytics (VA) methods to open the black boxes of various ML/AI models. In the latter case, our research encompasses both unsupervised dimensionality reduction (DR) models and supervised learning models, such as single classifiers and multiple classifier systems.
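To make the two model families above concrete, here is a minimal sketch of the kind of raw material such VA methods build on. This is a generic illustration using scikit-learn, not the group's actual tools or datasets: it pairs an unsupervised DR projection (PCA) with an impurity-based feature-importance summary from a single classifier.

```python
# Generic illustration (assumption: scikit-learn with the bundled Iris data,
# not the group's own VA methods): two common entry points for opening
# ML "black boxes" -- (1) a DR projection for visual inspection of the data,
# (2) per-feature importances hinting at what a supervised model relies on.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

# (1) Unsupervised DR: project the 4-D measurements down to 2-D,
# the typical input to a scatterplot in a VA tool.
embedding = PCA(n_components=2).fit_transform(X)

# (2) Supervised model: impurity-based importances give one weight per
# input feature (normalized to sum to 1), a simple explanation signal.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
importances = clf.feature_importances_

print(embedding.shape)       # one 2-D point per sample: (150, 2)
print(importances.round(2))  # four weights, one per input feature
```

A VA system would then render the embedding as an interactive scatterplot and the importances as linked bar charts, rather than printing them.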