
Why do users trust algorithms? A review and conceptualization of initial trust and trust over time

Cabiddu, Francesca; Moi, Ludovica
2022-01-01

Abstract

Algorithms play an increasingly pivotal role in organizations' day-to-day operations; however, a general distrust of artificial intelligence (AI)-based algorithms and automated processes persists. This aversion to algorithms raises questions about the drivers that lead managers to trust or reject their use. This conceptual paper provides an integrated review of how users experience their encounters with AI-based algorithms over time. This is important for two reasons: first, the functional activities of these algorithms change over time through machine learning; and second, users' trust develops with their level of knowledge of a particular algorithm. Based on our review, we propose an integrative framework to explain how users' perceptions of trust change over time. This framework extends current understandings of trust in AI-based algorithms in three ways: first, it distinguishes between the formation of initial trust and trust over time in AI-based algorithms, and specifies the determinants of trust in each phase; second, it links the transition between initial trust in AI-based algorithms and trust over time to representations of the technology as either human-like or system-like; finally, it considers the additional determinants that intervene during this transition phase.
AI algorithms; trust; initial trust; trust over time; integrative review
Files in this item:
No files are associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11584/344888

Citations
  • PMC: n/a
  • Scopus: 30
  • Web of Science: n/a