
Uniss-FGD: A Novel Dataset of Human Gazes Over Images of Faces

Fadda, Mauro; Anedda, Matteo; Grosso, Enrico
2024-01-01

Abstract

Face detection and recognition play pivotal roles across various domains, spanning from personal authentication to forensic investigations, surveillance, entertainment, and social media. In our interconnected world, pinpointing an individual's identity amidst millions remains a formidable challenge. While contemporary face recognition techniques now rival or even surpass human accuracy in critical scenarios like border identity control, they do so at the expense of poor explainability, leaving the underlying causes of errors largely unresolved. Moreover, they demand substantial computational resources and a plethora of labeled samples for training. Drawing inspiration from the remarkably efficient human visual system, particularly in localizing and recognizing faces, holds promise for developing more efficient and interpretable systems, with particularly high gains in scenarios where misidentification can have grave consequences. In this context, we introduce the Uniss-FGD dataset, which captures gaze data from observers presented with facial images depicting diverse expressions. In view of the potential uses of Uniss-FGD, we propose two baseline experiments on a subset of the dataset, in which we perform a comparative analysis of the attention of Vision Transformers (ViTs), multi-scale handcrafted features, and human observers viewing facial images. These preliminary comparisons pave the way for future investigations into the integration of human attention dynamics into advanced and diverse image analysis frameworks. Beyond the realms of Computer Science, numerous research disciplines stand to benefit from the rich gaze data encapsulated in this dataset.
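To make the kind of baseline comparison described above concrete, the sketch below shows one plausible way to contrast a ViT's self-attention with a human gaze density map over the same face image. This is not the authors' released code: the model checkpoint, the placeholder image path, the fixation format (pixel coordinates on a 224x224 image), and the attention-rollout/Pearson-correlation choices are all illustrative assumptions.

```python
# Minimal sketch: compare ViT attention rollout with a human gaze density map.
# Assumptions (not from Uniss-FGD): off-the-shelf ViT checkpoint, placeholder
# image file, and fixations given as (x, y) pixels on a 224x224 image.
import numpy as np
import torch
from PIL import Image
from scipy.ndimage import gaussian_filter
from scipy.stats import pearsonr
from transformers import ViTImageProcessor, ViTModel

MODEL_NAME = "google/vit-base-patch16-224"  # assumed generic backbone
processor = ViTImageProcessor.from_pretrained(MODEL_NAME)
model = ViTModel.from_pretrained(MODEL_NAME, output_attentions=True).eval()

def vit_attention_map(image: Image.Image) -> np.ndarray:
    """CLS-to-patch attention rollout, returned as a normalized 2D map."""
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        attentions = model(**inputs).attentions  # tuple of (1, heads, T, T)
    # Attention rollout: average heads, add identity, multiply across layers.
    rollout = torch.eye(attentions[0].shape[-1])
    for att in attentions:
        a = att[0].mean(0)
        a = a + torch.eye(a.shape[-1])
        a = a / a.sum(dim=-1, keepdim=True)
        rollout = a @ rollout
    cls_to_patches = rollout[0, 1:]              # drop the CLS column
    side = int(cls_to_patches.numel() ** 0.5)    # 14 for 224/16 patches
    amap = cls_to_patches.reshape(side, side).numpy()
    return (amap - amap.min()) / (amap.max() - amap.min() + 1e-8)

def gaze_density_map(fixations_xy: np.ndarray, size: int = 14,
                     img_size: int = 224, sigma: float = 1.0) -> np.ndarray:
    """Fixation points (N, 2) in pixels -> smoothed density on the patch grid."""
    grid = np.zeros((size, size))
    for x, y in fixations_xy:
        i = min(int(y / img_size * size), size - 1)
        j = min(int(x / img_size * size), size - 1)
        grid[i, j] += 1
    grid = gaussian_filter(grid, sigma=sigma)
    return (grid - grid.min()) / (grid.max() - grid.min() + 1e-8)

# Hypothetical usage: one face image plus its recorded fixations.
image = Image.open("face_sample.jpg").convert("RGB")        # placeholder path
fixations = np.array([[110, 95], [118, 100], [105, 150]])   # placeholder points
r, _ = pearsonr(vit_attention_map(image).ravel(),
                gaze_density_map(fixations).ravel())
print(f"ViT attention vs. human gaze correlation: r = {r:.3f}")
```

Other similarity measures (e.g., KL divergence or NSS, common in saliency evaluation) could replace the Pearson correlation; the point is only to illustrate how machine attention and human gaze can be placed on a common grid and compared.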
2024
face recognition; observers; visualization; transformers; task analysis; face detection; gaze tracking; human factors; human gazes; vision transformers; handcrafted features; human faces; visual attention
Files in this item:
File: Uniss-FGD_A_Novel_Dataset_of_Human_Gazes_Over_Images_of_Faces.pdf
Access: open access
Type: publisher's version
Size: 2.29 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11584/417765
Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science: 0