Leveraging multi-view learning for quality of experience prediction models

Fratta, Matteo; Porcu, Simone; Floris, Alessandro; Atzori, Luigi
2025-01-01

Abstract

Accurate models are necessary for the continuous estimation of the Quality of Experience (QoE), which is crucial for delivering successful multimedia services to end-users. These models are developed from subjective test data, which often capture only specific aspects of the experience, i.e., a partial view (PV). Each PV conveys a relationship between a specific set of influence factors and the perceived QoE, limiting the applicability of the derived model to other application scenarios not considered in the initial subjective tests. To extend the applicability of the developed models, this paper introduces a multi-view (MV) learning framework that enhances QoE prediction by integrating complementary information from multiple perspectives obtained from different subjective tests. We leverage a fully connected deep neural network with two initially independent branches and an intermediate fusion layer to combine insights from separate feature sets, improving predictive accuracy while preserving data privacy. Our model is trained on a synthetic data set derived from the TID2008 image database, ensuring a controlled yet representative evaluation environment. On the one hand, the results demonstrate that the MV technique outperforms all PV configurations. On the other hand, the MV approach achieves QoE estimation performance comparable to the single-view (SV) model, in which a single branch analyzes the full set of influence factors. In particular, the largest performance gain (from 6.15% to 142.69%) across most evaluation metrics occurs when the input data set is equally divided between the two separate views.
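The abstract describes a two-branch, fully connected network with an intermediate fusion layer: each branch sees only its own view of the influence factors, and the branches are merged before a regression head predicts the QoE score. The sketch below illustrates that architecture with a plain NumPy forward pass; all layer sizes, the 5+5 feature split, and the function name `predict_qoe` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

def dense(n_in, n_out):
    # He-style initialisation; sizes are illustrative, not from the paper
    return rng.standard_normal((n_in, n_out)) * np.sqrt(2.0 / n_in), np.zeros(n_out)

# Two views: here an input of 10 influence factors split equally (5 + 5),
# the configuration the abstract reports as giving the largest gain.
W1a, b1a = dense(5, 16)   # branch A, processes view 1 only
W1b, b1b = dense(5, 16)   # branch B, processes view 2 only
Wf, bf   = dense(32, 16)  # intermediate fusion layer (on concatenated branches)
Wo, bo   = dense(16, 1)   # QoE regression head

def predict_qoe(view_a, view_b):
    ha = relu(view_a @ W1a + b1a)              # branch A hidden features
    hb = relu(view_b @ W1b + b1b)              # branch B hidden features
    fused = np.concatenate([ha, hb], axis=-1)  # intermediate fusion
    h = relu(fused @ Wf + bf)
    return h @ Wo + bo                         # predicted QoE score

x_a = rng.random((4, 5))  # batch of 4 samples, view-1 features
x_b = rng.random((4, 5))  # same 4 samples, view-2 features
y = predict_qoe(x_a, x_b)
print(y.shape)
```

Because each branch only ever receives its own feature set, the two views could in principle be held by different parties, which is consistent with the privacy-preserving aspect mentioned in the abstract.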
2025
979-8-3315-5435-4
979-8-3315-5436-1
Multi-view Learning
Neural Network
QoE prediction
Quality of Experience
Files in this record:

File: pub - Leveraging_Multi-View_Learning_for_QoE.pdf
Description: VoR
Type: published version (VoR)
Size: 1.31 MB
Format: Adobe PDF
Access: archive managers only (request a copy)

File: 469452_AAM.pdf
Description: AAM
Type: post-print version (AAM)
Size: 2.04 MB
Format: Adobe PDF
Access: open access

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11584/469452
Citations
  • PMC: N/A
  • Scopus: 0
  • Web of Science: N/A