Explanatory learner models: Why machine learning (alone) is not the answer.

  • Authors: Carolyn P. Rosé, Elizabeth A. McLaughlin, Ran Liu, Kenneth R. Koedinger
  • Published in: British Journal of Educational Technology, ISSN 0007-1013, Vol. 50, No. 6, 2019, pp. 2943-2958
  • Language: English
  • Full text not available
  • Abstract
    • Using data to understand learning and improve education has great promise. However, that promise will not be fulfilled simply by AI and machine learning researchers developing innovative models that more accurately predict labeled data. As AI advances, modeling techniques and the models they produce are becoming increasingly complex, often involving tens of thousands of parameters or more. Though strides towards interpretation of complex models are being made in core machine learning communities, it remains true in these cases of "black box" modeling that research teams may have little ability to peer inside to try to understand how, why, or even whether such models will work when applied beyond the data on which they were built. Rather than relying on AI expertise alone, we suggest that learning engineering teams bring interdisciplinary expertise to bear to develop explanatory learner models that provide interpretable and actionable insights in addition to accurate prediction. We describe examples that illustrate the use of different kinds of data (eg, click stream and discourse data) in different course content (eg, math and writing) and toward different goals (eg, improving student models and generating actionable feedback). We recommend learning engineering teams, shared infrastructure and funder incentives toward better explanatory learner model development that advances learning science, produces better pedagogical practices and demonstrably improves student learning.

      Practitioner Notes

      What is already known about this topic
      • Researchers in learning analytics and educational data mining have been successful in creating innovative models of data that optimize prediction.
      • Some of these models produce scientific or practical insights, but fewer have been put into use and demonstrated to enhance student learning.

      What this paper adds
      • We provide examples of the development of explanatory models of learners that not only accurately predict data but also provide scientific insights and yield practical outcomes.
      • In particular, researchers with expertise in cognitive science and math education content use AI-based data analytics to discover previously unrecognized barriers to geometry student learning. They use model-derived insights to redesign an online tutoring system and "close the loop" by experimentally demonstrating that the new system produces better student learning than the original.

      Implications for practice and/or policy
      • We define explanatory learner models and articulate a process for generating them that involves interdisciplinary teams employing human–computer interaction and learning engineering methods.
      • Based on our experiences, we recommend learning engineering teams, shared infrastructure and funder incentives toward better explanatory learner model development that advances learning science, produces better pedagogical practices and demonstrably improves student learning.
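To make the contrast in the abstract concrete: a classic example of an explanatory learner model from the educational data mining literature is the Additive Factors Model (AFM), a logistic regression whose parameters map directly onto student proficiency, knowledge-component (KC) difficulty and per-practice learning rates. The sketch below is a minimal, hypothetical illustration of that idea, not code or data from the article; the function name, parameter names and toy numbers are assumptions.

```python
import numpy as np

def afm_predict(theta, beta, gamma, q_row, opportunities):
    """Additive Factors Model (AFM): P(student answers an item correctly).

    Every parameter is directly interpretable:
      theta         -- student proficiency (scalar)
      beta[k]       -- easiness of knowledge component k
      gamma[k]      -- learning rate of KC k per practice opportunity
      q_row[k]      -- 1 if the item exercises KC k, else 0 (Q-matrix row)
      opportunities -- prior practice counts for each KC
    """
    logit = theta + q_row @ (beta + gamma * opportunities)
    return 1.0 / (1.0 + np.exp(-logit))

# Toy example (hypothetical numbers): two KCs in a geometry tutor.
beta = np.array([0.4, -1.2])    # KC 2 is much harder than KC 1
gamma = np.array([0.30, 0.02])  # KC 2 barely improves with practice
q_row = np.array([0, 1])        # this item exercises only KC 2
print(afm_predict(0.0, beta, gamma, q_row, np.array([5, 5])))

# Actionable reading: the near-zero gamma for KC 2 says practice is not
# helping students learn it, flagging that KC's instruction for redesign --
# the kind of "close the loop" insight the abstract describes, in contrast
# to a black-box predictor whose parameters carry no such meaning.
```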

