Unfairness Assessment, Explanation and Mitigation in Machine Learning Models for Personalization

MEDDA, GIACOMO
2024-02-20

Abstract

The last decade has been pervaded by automated applications that leverage Artificial Intelligence technologies. Novel systems have been adopted to automatically solve relevant tasks, from scanning passengers at border controls to suggesting the groceries needed to restock the fridge. One of the most captivating applications of Artificial Intelligence is voice assistants, such as Alexa. They enable people to use their voice to perform simple tasks, such as setting an alarm or saving an appointment in an online calendar. Due to their worldwide usage, voice assistants are required to aid a diverse range of individuals across various cultures, languages, accents, and preferences. It is therefore crucial for these systems to function fairly across different groups of people, ensuring reliability and providing assistance without being influenced by the sensitive attributes that may vary among them. This thesis deals with the design, implementation, and evaluation of Artificial Intelligence models optimized to operate fairly in the context of voice assistant systems. Assessing the performance of existing fairness-aware solutions is an essential step towards understanding how much effort is needed to provide fair and reliable technologies. The contributions consist of extensive analyses of existing methods to counteract unfairness, and of novel techniques to mitigate and explain unfairness that capitalize on Data Balancing, Counterfactuality, and Graph Neural Network Explainability. The proposed solutions aim to support system designers and decision makers across several fairness requirements: methodologies to evaluate the fairness of model outcomes, techniques aimed at improving users' trust by mitigating unfairness, and strategies that generate explanations of the potential causes behind the estimated unfairness. Through our studies, we explore the opportunities and challenges introduced by the latest advancements in Fair Artificial Intelligence, a relevant and timely topic in the literature. Supported by extensive experiments, our findings illustrate the feasibility of designing Artificial Intelligence solutions for the mitigation and explanation of unfairness issues in the models adopted in voice assistants. Our results provide guidelines on fairness evaluation and on the design of methods to counteract unfairness in the voice assistant scenario. Researchers can use our findings to follow a schematic protocol for fairness assessment, to discover the data aspects affecting model fairness, and to mitigate outcome unfairness, among other uses. We expect this thesis to support the adoption of fairness-aware solutions throughout the voice assistant pipeline, from voice authentication to the resolution of the requested task.
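To make the fairness evaluation referenced in the abstract more concrete, the following minimal Python sketch computes one common group-fairness check (the demographic parity difference between two groups of users). It is illustrative only: the function name, data, and metric choice are assumptions for exposition, not the specific protocol or code developed in the thesis.

import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two user groups.

    y_pred: binary model predictions (0/1), one per user.
    group:  binary sensitive attribute (0/1), one per user.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Example with toy data: group 0 is served 75% of the time, group 1 only 50%,
# so the gap is 0.25; a value near 0 would indicate parity between the groups.
print(demographic_parity_difference([1, 1, 0, 1, 0, 1, 0, 1],
                                    [0, 0, 0, 0, 1, 1, 1, 1]))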
Files in this item:

File: PhD_Thesis_Giacomo Medda.pdf (open access)
Description: Doctoral thesis
Type: Doctoral thesis
Size: 12.96 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11584/391987
