Trustworthy and Explainable AI for Learning Analytics

Min Jia Li, Shun Ting Li, Albert C.M. Yang, Anna Y.Q. Huang, Stephen J.H. Yang

Research output: Contribution to journal › Conference article › peer-review

Abstract

In recent years, interest has surged in combining artificial intelligence (AI) with education to enhance learning experiences. A major concern, however, is the lack of transparency in AI models, which hinders our ability to understand their decision-making processes and to establish trust in their outcomes. This study addresses these challenges by focusing on the implications of explainable and trustworthy AI in education. Its primary objective is to improve trust in and acceptance of AI systems in education by providing comprehensive explanations for model predictions, equipping stakeholders with a better understanding of the decision-making process and greater confidence in the outcomes. The study also highlights the importance of evaluation metrics for assessing the quality and effectiveness of the explanations generated by explainable AI models; these metrics are vital tools for ensuring reliable system performance and upholding the fundamental principles necessary for building trustworthy AI. To accomplish these goals, the study uses the LBLS-467 dataset to predict high-risk students, employing both logistic regression and neural networks as AI models. Explainable AI techniques, namely LIME (Local Interpretable Model-agnostic Explanations) and SHAP (Shapley Additive Explanations), are then applied to explain the predicted learning outcomes, and six evaluation indicators are adopted to assess the accuracy and stability of these explanations. In conclusion, this study addresses the inconsistencies of explainable AI models in education and emphasizes the need for explainability and trust when applying AI systems in educational contexts. By providing comprehensive explanations and evaluation metrics, it enables education teams to make informed decisions and fosters a positive environment for the integration of AI. Ultimately, it contributes to the reliable implementation of AI technologies, enabling their full potential to be harnessed in educational settings for the benefit of learners and educators alike.
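As a rough illustration of the pipeline described in the abstract (train a risk predictor, then explain individual predictions with LIME and SHAP), the following Python sketch uses a synthetic tabular dataset as a stand-in for LBLS-467, which is not reproduced here; the feature names and dataset sizes are illustrative assumptions, not the authors' actual setup.

```python
# Minimal sketch: predict high-risk students, then explain the predictions.
# LBLS-467 is not public here, so a synthetic dataset of 467 rows stands in;
# feature names are hypothetical learning-behaviour indicators.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer
import shap

feature_names = ["video_views", "quiz_score", "forum_posts", "login_count"]
X, y = make_classification(n_samples=467, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Predictor: logistic regression flagging "high-risk" (class 1) students.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# LIME: fit a local surrogate around one student's prediction.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["low_risk", "high_risk"], mode="classification")
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=4)
print("LIME feature weights:", lime_exp.as_list())

# SHAP: additive feature attributions for the test set.
shap_explainer = shap.LinearExplainer(model, X_train)
shap_values = shap_explainer.shap_values(X_test)
print("SHAP attributions for first student:", shap_values[0])
```

Comparing the LIME weights and SHAP attributions for the same instance is one simple way to probe the accuracy and stability concerns that the paper's six evaluation indicators address.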

Original language: English
Pages (from-to): 3-12
Number of pages: 10
Journal: CEUR Workshop Proceedings
Volume: 3667
Publication status: Published - 2024
Event: 2024 Joint of International Conference on Learning Analytics and Knowledge Workshops, LAK-WS 2024 - Kyoto, Japan
Duration: 18 Mar 2024 - 22 Mar 2024
