
Estimation of the Quality of Experience During Video Streaming from Facial Expression and Gaze Direction

Porcu, Simone; Floris, Alessandro; Atzori, Luigi
2020-01-01

Abstract

This article investigates the possibility of automatically and unobtrusively estimating the perceived Quality of Experience (QoE) by analyzing the face of the consumer of video streaming services, from which facial expression and gaze direction are extracted. If effective, this would be a valuable tool for monitoring personal QoE during video streaming services without asking the user to provide feedback, with great advantages for service management. Additionally, it would eliminate the bias of subjective tests and avoid bothering viewers with questions to collect opinions and feedback. The performed analysis relies on two different experiments: i) a crowdsourcing test, where the videos are subject to impairments caused by long initial delays and re-buffering events; ii) a laboratory test, where the videos are affected by blurring effects. The facial Action Units (AUs), which represent the contractions of specific facial muscles, are extracted together with the position of the eyes' pupils to identify the correlation between perceived quality and facial expressions. An SVM with a quadratic kernel and a k-NN classifier have been tested to predict the QoE from these features. These have also been combined with measured application-level parameters to improve the quality prediction. The performed experiments show that the best performance is obtained with the k-NN classifier by combining all the described features and training it with both datasets, reaching a prediction accuracy as high as 93.9% and outperforming state-of-the-art results.
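To make the classification setup described above concrete, the following is a minimal sketch (not the authors' code) of how the two compared classifiers could be trained on the kinds of features the abstract lists: facial Action Unit intensities, pupil positions (gaze), and application-level parameters. The feature dimensions, the choice of k, and the 5-level QoE labels are illustrative assumptions; only the classifier types (SVM with a quadratic, i.e. degree-2 polynomial, kernel and k-NN) come from the paper.

```python
# Hedged sketch of the QoE classification pipeline described in the abstract.
# Feature layout, k, and label scale are assumptions, not the paper's setup.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical per-sample feature vectors: 17 AU intensities, 2 pupil
# coordinates (gaze direction), and 3 application-level parameters
# (e.g. initial delay, number of stalls, total stall duration).
X = rng.random((500, 17 + 2 + 3))
y = rng.integers(1, 6, size=500)  # assumed QoE class labels on a 1..5 scale

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# SVM with a quadratic kernel, as named in the abstract
# (polynomial kernel of degree 2).
svm = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=2))
svm.fit(X_tr, y_tr)

# k-NN classifier; k=5 is an arbitrary assumption here.
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
knn.fit(X_tr, y_tr)

print("SVM accuracy: ", svm.score(X_te, y_te))
print("k-NN accuracy:", knn.score(X_te, y_te))
```

With real AU/gaze features extracted from face video and measured streaming parameters in place of the random arrays, this is the shape of the comparison the paper reports, in which the k-NN classifier trained on both datasets performed best.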
Keywords: Video streaming; Quality of Experience; Facial expressions; Gaze direction; Machine learning; Video Key Quality Indicators; QoE estimation
Files in this record:

File: pp2020-12 TNSM - Estimation of the QoE During Video Streaming_with cover.pdf
Description: AAM
Type: post-print version (AAM)
Access: open access
Size: 1.05 MB
Format: Adobe PDF

File: pub2020-12 TNSM - Estimation of the QoE During Video Streaming.pdf
Description: VoR
Type: publisher's version (VoR)
Access: archive administrators only
Size: 2.62 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11584/300701
Citations
  • Scopus: 21
  • Web of Science: 15