The Explanation Scope Does Not Fit All: Local, Model-Centric, and the Role of Cognitive Traits
Cau, Federico Maria; Spano, Lucio Davide
2025-01-01
Abstract
Explainable AI (XAI) aims to support human decision-making by improving understanding and fostering calibrated trust. Yet it remains unclear whether specific explanation types consistently help users make better decisions, and how user traits such as Need for Cognition (NFC) influence their effects. We present a confirmatory analysis of two controlled user studies in different domains (loan approval and job candidate screening), comparing local, feature-based explanations with global, model-centric explanations. We analyze decision accuracy and over-reliance as a function of AI confidence and correctness, while accounting for individual differences in NFC. Across both tasks, AI confidence emerged as the strongest predictor of human accuracy: users were significantly more likely to follow correct AI recommendations when confidence was high. Local explanations further boosted accuracy when the AI was correct. When the AI was wrong and its confidence was low, explanation effects varied by user trait: local explanations reduced over-reliance among low-NFC participants but had the opposite effect for high-NFC individuals. These results highlight that explanation effectiveness depends on model correctness, user traits, and context. We conclude with design implications for confidence-aware, trait-sensitive XAI systems that adapt explanation delivery to user profiles and prediction uncertainty.
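
The abstract describes modeling decision accuracy as a function of AI confidence, AI correctness, explanation type, and NFC, but does not report the statistical specification. The sketch below is a minimal, hypothetical illustration of that kind of analysis in Python using simulated data and a plain logistic regression; all variable names, effect sizes, and data are invented for illustration, and the study itself may have used a different model (e.g., mixed-effects regression with per-participant random intercepts).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600  # simulated decisions (hypothetical, not the study's data)

# One row per participant decision, with invented predictors.
df = pd.DataFrame({
    "ai_correct": rng.integers(0, 2, n),                        # 1 = AI recommendation was correct
    "ai_confidence": rng.uniform(0.5, 1.0, n),                  # displayed model confidence
    "explanation": rng.choice(["none", "local", "global"], n),  # explanation condition
    "nfc": rng.normal(0.0, 1.0, n),                             # standardized Need for Cognition score
})

# Simulate whether the participant's final decision is correct: more likely when
# the AI is correct and confident (illustrative effect sizes only).
logit_p = -0.5 + 1.2 * df["ai_correct"] + 1.5 * df["ai_confidence"] * df["ai_correct"]
df["correct_decision"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Logistic regression of decision accuracy on AI confidence, AI correctness,
# explanation type, and NFC, including the interactions the abstract refers to.
model = smf.logit(
    "correct_decision ~ ai_confidence * ai_correct + C(explanation) * nfc",
    data=df,
).fit()
print(model.summary())
```

Over-reliance could be examined analogously, e.g., by restricting the data to trials where the AI was wrong and modeling whether the participant nevertheless followed the recommendation.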


