CMU-HCII-25-103 Human-Computer Interaction Institute School of Computer Science, Carnegie Mellon University
Meaningful Models: Unlocking Insights Napol Rachatasumrit June 2025 Ph.D. Thesis
In this thesis, I argue that meaningful interpretations are what we need, rather than post-hoc explanations or uninterpreted interpretable models, especially in the context of educational data mining (EDM). I explore the concept of "meaningful models": inherently interpretable models whose parameters and outputs are not only transparent but actively interpreted, and whose interpretations lead to useful and actionable insights for stakeholders. I illustrate the benefits of meaningful models through examples where existing mechanisms or models are insufficient to produce meaningful interpretations, and I demonstrate how enhancements can yield scientifically or practically valuable insights. For example, Performance Factor Analysis (PFA) has been shown to outperform its base model, but we show that PFA's parameters are confounded, which results in ambiguous interpretations. We then propose improved models that not only de-confound the parameters but also yield meaningful interpretations, leading to insights about the associated knowledge component model and suggesting instructional improvements. Overall, this thesis highlights the essential role of meaningful models in EDM, emphasizing that only through meaningful interpretations can models effectively drive practical improvements in educational practice and advance scientific understanding.
90 pages
Brad A. Myers, Head, Human-Computer Interaction Institute