TY - JOUR
T1 - Trustworthy and Explainable AI for Learning Analytics
AU - Li, Min Jia
AU - Li, Shun Ting
AU - Yang, Albert C.M.
AU - Huang, Anna Y.Q.
AU - Yang, Stephen J.H.
N1 - Publisher Copyright:
© 2024 CEUR-WS. All rights reserved.
PY - 2024
Y1 - 2024
N2 - In recent years, there has been a surge of interest in combining artificial intelligence (AI) with education to enhance learning experiences. A major concern, however, is the lack of transparency in AI models, which hinders our ability to understand their decision-making processes and to trust their outcomes. This study addresses these challenges by focusing on the implications of explainable and trustworthy AI in education. Its primary objective is to improve trust in and acceptance of AI systems in education by providing comprehensive explanations for model predictions, thereby equipping stakeholders with a better understanding of the decision-making process and increasing their confidence in the outcomes. The study also highlights the importance of evaluation metrics for assessing the quality and effectiveness of the explanations generated by explainable AI models; such metrics are vital for ensuring reliable system performance and upholding the fundamental principles necessary for building trustworthy AI. To accomplish these goals, the study uses the LBLS-467 dataset to predict high-risk students, employing both logistic regression and neural networks as prediction models. Explainable AI techniques, LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations), are then applied to explain the models' predictions of students' learning outcomes. Finally, six evaluation indicators are adopted to assess the accuracy and stability of these explanations. In conclusion, this study addresses the inconsistencies of explainable AI models in the field of education and emphasizes the need for explainability and trust when applying AI systems in educational contexts. By providing comprehensive explanations and evaluation metrics, this research enables education teams to make informed decisions and fosters a positive environment for the integration of AI, contributing to the reliable implementation of AI technologies so that their full potential can be harnessed in educational settings for the benefit of learners and educators alike.
AB - In recent years, there has been a surge of interest in combining artificial intelligence (AI) with education to enhance learning experiences. A major concern, however, is the lack of transparency in AI models, which hinders our ability to understand their decision-making processes and to trust their outcomes. This study addresses these challenges by focusing on the implications of explainable and trustworthy AI in education. Its primary objective is to improve trust in and acceptance of AI systems in education by providing comprehensive explanations for model predictions, thereby equipping stakeholders with a better understanding of the decision-making process and increasing their confidence in the outcomes. The study also highlights the importance of evaluation metrics for assessing the quality and effectiveness of the explanations generated by explainable AI models; such metrics are vital for ensuring reliable system performance and upholding the fundamental principles necessary for building trustworthy AI. To accomplish these goals, the study uses the LBLS-467 dataset to predict high-risk students, employing both logistic regression and neural networks as prediction models. Explainable AI techniques, LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations), are then applied to explain the models' predictions of students' learning outcomes. Finally, six evaluation indicators are adopted to assess the accuracy and stability of these explanations. In conclusion, this study addresses the inconsistencies of explainable AI models in the field of education and emphasizes the need for explainability and trust when applying AI systems in educational contexts. By providing comprehensive explanations and evaluation metrics, this research enables education teams to make informed decisions and fosters a positive environment for the integration of AI, contributing to the reliable implementation of AI technologies so that their full potential can be harnessed in educational settings for the benefit of learners and educators alike.
KW - Explainable AI
KW - Learning Analytics
KW - Trustworthy
UR - http://www.scopus.com/inward/record.url?scp=85191957430&partnerID=8YFLogxK
M3 - Conference paper
AN - SCOPUS:85191957430
SN - 1613-0073
VL - 3667
SP - 3
EP - 12
JO - CEUR Workshop Proceedings
JF - CEUR Workshop Proceedings
T2 - 2024 Joint International Conference on Learning Analytics and Knowledge Workshops, LAK-WS 2024
Y2 - 18 March 2024 through 22 March 2024
ER -