Dropout injection at test time for post hoc uncertainty quantification in neural networks

Fumera, G.; Roli, F.
2023-01-01

Abstract

Among Bayesian methods, Monte Carlo dropout provides principled tools for evaluating the epistemic uncertainty of neural networks. Its popularity recently led to seminal works that proposed activating the dropout layers only during inference for evaluating epistemic uncertainty. This approach, which we call dropout injection, provides clear benefits over its traditional counterpart (which we call embedded dropout), since it yields a post hoc uncertainty measure for any existing network previously trained without dropout, avoiding an additional, time-consuming training process. Unfortunately, no previous work has thoroughly analyzed injected dropout and compared it with embedded dropout; we therefore provide a first comprehensive investigation, focusing on regression problems. We show that the effectiveness of dropout injection strongly relies on a suitable scaling of the corresponding uncertainty measure, and we propose an alternative method to implement it. We also consider the trade-off between negative log-likelihood and calibration error as a function of the scale factor. Experimental results on benchmark data sets from several regression tasks, including crowd counting, support our claim that dropout injection can serve as a competitive post hoc alternative to embedded dropout.
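
To make the idea concrete, here is a minimal sketch of dropout injection in PyTorch, based only on the description in the abstract. The `inject_dropout` helper, the choice of attaching dropout to linear layers, the dropout rate, and the grid-searched scale factor are illustrative assumptions, not the authors' exact procedure.

```python
import torch
import torch.nn as nn

def inject_dropout(model: nn.Module, p: float = 0.1) -> nn.Module:
    """Hypothetical helper: attach a dropout layer after each linear layer
    of a network that was trained WITHOUT dropout (the 'injection' step)."""
    for name, child in model.named_children():
        if isinstance(child, nn.Linear):
            setattr(model, name, nn.Sequential(child, nn.Dropout(p)))
        else:
            inject_dropout(child, p)
    return model

@torch.no_grad()
def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 50):
    """Monte Carlo dropout at test time: keep ONLY the dropout layers in
    train mode so they remain stochastic, then run several forward passes."""
    model.eval()
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()
    samples = torch.stack([model(x) for _ in range(n_samples)])
    # Predictive mean and raw sample variance (the unscaled uncertainty measure).
    return samples.mean(dim=0), samples.var(dim=0)

def fit_scale_factor(mean, var, y_val, grid=None):
    """Assumed recipe for the scaling the abstract says is essential:
    grid-search a factor alpha minimizing Gaussian NLL on a validation set."""
    grid = torch.logspace(-2, 2, 200) if grid is None else grid
    eps = 1e-8
    nll = torch.stack([
        (0.5 * (torch.log(2 * torch.pi * (a * var + eps))
                + (y_val - mean) ** 2 / (a * var + eps))).mean()
        for a in grid
    ])
    return grid[torch.argmin(nll)]
```

Calling `inject_dropout(net)` once and then `mc_dropout_predict(net, x)` yields a post hoc uncertainty estimate without any retraining, which is the appeal of injection over embedded dropout; note that the abstract also studies how the scale factor trades off negative log-likelihood against calibration error, which a single NLL-minimizing grid search like the one above does not capture.
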
Keywords

Crowd counting
Epistemic uncertainty
Monte Carlo dropout
Trustworthy AI
Uncertainty quantification
Files in this record:

paper.pdf (under embargo until 30/06/2025)
Description: Main article and appendices
Type: post-print version
Size: 7.26 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11584/380063
Citations
  • PMC: not available
  • Scopus: 3
  • Web of Science: 3