
Effects of Logic-Style Explanations and Uncertainty on Users’ Decisions

CAU, FEDERICO MARIA
2023-04-28

Abstract

The spread of innovative Artificial Intelligence (AI) algorithms assists many individuals in their daily decision-making tasks, but also extends to sensitive domains such as disease diagnosis and credit risk assessment. However, the great majority of these algorithms are black-box in nature, creating the need to make them more transparent and interpretable, along with guidelines that help users manage these systems. The eXplainable Artificial Intelligence (XAI) community has investigated numerous factors influencing subjective and objective metrics in the user-AI team, such as the effects of presenting AI-related information and explanations to users. Nevertheless, some factors that influence the effectiveness of explanations are still under-explored in the literature, such as user uncertainty, AI uncertainty, AI correctness, and different explanation styles.

The main goal of this thesis is to investigate the interactions between different aspects of decision-making, focusing in particular on the effects of AI and user uncertainty, AI correctness, and the explanation reasoning style (inductive, abductive, and deductive) across different data types and domains in classification tasks. We set up three user evaluations on image, text, and time series data to analyse the effects of these factors on users' task performance, agreement with the AI suggestion, and reliance on the XAI interface elements (instance, AI prediction, and explanation).

The results for the image and text data show that user uncertainty and AI correctness significantly affected users' classification decisions across the analysed metrics. In both domains (images and text), users relied mainly on the instance to make their decisions. Users were usually overconfident about their choices, and this effect was more pronounced for text. Furthermore, inductive-style explanations led to over-reliance on AI advice in both domains: they were the most persuasive, even when the AI was incorrect. The abductive and deductive styles had more complex effects, depending on the domain and the AI uncertainty level. In contrast, the time series results show that the abductive and deductive explanation styles improved users' task performance under high AI confidence compared to inductive explanations. In other words, these explanation styles were able to elicit correct decisions (both positive and negative) when the system was certain. Under this condition, the agreement between users' decisions and the AI prediction confirms this finding, showing a significant increase in agreement when the AI is correct. This suggests that both explanation styles are suitable for evoking appropriate trust in a confident AI.

The last part of the thesis focuses on the work carried out with CRS4 (Centro di Ricerca, Sviluppo e Studi Superiori in Sardegna) on the implementation of the RIALE (Remote Intelligent Access to Lab Experiment) platform. The work aims to help students explore a DNA-sequencing experiment enriched with an AI tagging tool that detects the objects used in the laboratory and the current phase of the experiment. Furthermore, the interface includes an interactive timeline that enables students to explore the AI predictions for the steps of the experiment video, and an XAI panel that provides explanations of the AI decisions (presented with abductive reasoning) at three levels: globally, by phase, and by frame. We evaluated the interface with students, considering subjective cognitive effort, ease of use, the supporting information provided by the interface, and general usability, complemented by an interview on specific aspects of the application. The results showed that students were satisfied with the interface and in favour of attending didactic lessons using this tool.
Files in this item:

File: tesi di dottorato_federicomariacau.pdf
Description: Effects of Logic-Style Explanations and Uncertainty on Users’ Decisions
Type: Doctoral thesis
Access: Open access
Size: 11.29 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11584/359901