Explaining Through the Right Reasoning Style: Lessons Learnt
Spano, L. D.; Cau, F. M.
2024-01-01
Abstract
Current eXplainable Artificial Intelligence (XAI) techniques assist individuals in interpreting AI recommendations. However, research primarily focuses on assessing users' comprehension of explanations, neglecting important factors that influence decision support, such as whether the explanation uses the right reasoning style to help the user understand the AI's advice. Over the last two years, our research has aimed to fill this gap by examining the effects of factors such as user uncertainty, AI correctness, and the interplay between AI confidence and explanation logic styles in classification tasks. In this paper, we summarise the lessons learnt from this research and discuss their impact on the engineering of AI-based decision support systems.