How Close are Predictive Models to Teachers in Detecting Learners at Risk?

Galici R.; Fenu G.; Marras M.
2023-01-01

Abstract

Detecting learners in need of support is a complex process for both teachers and machines. Most prior work has devised visualization tools that allow teachers to do so by analyzing educational indicators. Other recent efforts have been devoted to models that predict whether learners might be at risk. However, the question of how teacher-like a model behaves in this detection task remains unanswered. In this paper, we investigate the (dis)agreement between teacher and model decisions, using a real-world flipped course as a case study. From the model perspective, we considered a well-known neural network trained on educational indicators extracted from online pre-class logs. To gather teachers' understanding, we employed a crowdsourcing approach comprising over 360 human intelligence tasks completed by 60 university teachers. We asked each recruited teacher to analyze visualizations pertaining to four relevant educational indicators of a given learner and to reason about that learner's probability of failing the course (and thus requiring support). Learners presented to teachers were selected to cover different aspects of model confidence and (in)accuracy. Our results show that teacher and model predictions diverged for students who passed the course, while predictions were similar for students who failed the course. Moreover, confidence and correctness were more aligned in teachers than in the model, reducing the unknown risks originally present in models. The source code is available at https://github.com/epfl-ml4ed/unknown-unknowns.
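To give a concrete picture of the kind of comparison the abstract describes, the sketch below is a minimal, purely illustrative Python example. It is not the paper's implementation: the actual architecture, indicators, learners, and teacher ratings are in the linked repository, while everything here (the four indicator names, the synthetic data, the teacher ratings, and the use of scikit-learn's MLPClassifier as a stand-in for the "well-known neural network") is an assumption made for illustration only.

# Illustrative sketch only; all feature names, data, and the classifier choice
# are hypothetical stand-ins, not the paper's actual model or dataset.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical per-learner indicators derived from pre-class logs,
# e.g. regularity of access, video-watching ratio, quiz attempts, time on platform.
n_learners = 200
X = rng.random((n_learners, 4))
y = (X.mean(axis=1) + 0.1 * rng.standard_normal(n_learners) > 0.5).astype(int)  # 1 = passed

# Generic feed-forward network as a stand-in for the model in the paper.
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
model.fit(X, y)

# Model's predicted probability of failing (class 0) for each learner.
p_fail_model = model.predict_proba(X)[:, 0]

# Hypothetical teacher ratings of each learner's failure probability.
p_fail_teacher = np.clip(1 - X.mean(axis=1) + 0.2 * rng.standard_normal(n_learners), 0, 1)

# Compare (dis)agreement separately for learners who passed vs. failed,
# mirroring the split reported in the abstract.
for label, name in [(1, "passed"), (0, "failed")]:
    mask = y == label
    gap = np.abs(p_fail_model[mask] - p_fail_teacher[mask]).mean()
    print(f"Mean |model - teacher| failure-probability gap ({name}): {gap:.2f}")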
2023
Artificial Intelligence
Education
Machine Learning
Student Success Prediction
User Modeling
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11584/432655