AI and LLM technologies have emerged rapidly, creating more opportunities to develop and enhance e-learning. This paper provides a detailed account of the tools built by the current and future generations of LLMs and AI technologies for creating complex e-learning applications. It introduces the application of these tools, including innovative learning, context-awareness and intelligent learning systems. Moreover, it delves systematically into the security issues of these AI-integrated e-learning environments. The work applies strict research approaches to determine and describe possible risks in system design, data processing, and AI model deployment. We also undertake deeper penetration tests to assess the level of preparedness of these platforms against different types of attacks from the attackers such as attacks targeting the AI components, injection of unverified data and privacy violations. Our study findings reveal the opportunity to apply AI in e-learning, thus highlighting the importance of addressing these concerns. It is a complete guide for the developers and the educational institutions to integrate the LLMs and AI and keep the security strong. This paper adds to the existing literature on the role of AI in education and cybersecurity, providing essential information to the future advancements of safe and innovative e-learning systems.

Exploiting LLMs for E-Learning: A Cybersecurity Perspective on AI-Generated Tools in Education

Greco, Danilo;
2024-01-01

Abstract

AI and LLM technologies have emerged rapidly, creating more opportunities to develop and enhance e-learning. This paper provides a detailed account of the tools built by the current and future generations of LLMs and AI technologies for creating complex e-learning applications. It introduces the application of these tools, including innovative learning, context-awareness and intelligent learning systems. Moreover, it delves systematically into the security issues of these AI-integrated e-learning environments. The work applies strict research approaches to determine and describe possible risks in system design, data processing, and AI model deployment. We also undertake deeper penetration tests to assess the level of preparedness of these platforms against different types of attacks from the attackers such as attacks targeting the AI components, injection of unverified data and privacy violations. Our study findings reveal the opportunity to apply AI in e-learning, thus highlighting the importance of addressing these concerns. It is a complete guide for the developers and the educational institutions to integrate the LLMs and AI and keep the security strong. This paper adds to the existing literature on the role of AI in education and cybersecurity, providing essential information to the future advancements of safe and innovative e-learning systems.
2024
2024 IEEE International Workshop on Technologies for Defense and Security (TechDefense)
Technology-enhanced learning, large language models (LLMs), vulnerability testing, cybersecurity in e-learning, AI-generated code, security assessment, vulnerability analysis, GPT-4o, web application security
File in questo prodotto:
File Dimensione Formato  
Exploiting_LLMs_for_E-Learning_A_Cybersecurity_Perspective_on_AI-Generated_Tools_in_Education.pdf

Accesso riservato

Dimensione 737.24 kB
Formato Adobe PDF
737.24 kB Adobe PDF   Visualizza/Apri

I documenti in IRIS sono protetti da copyright e tutti i diritti sono riservati, salvo diversa indicazione.

Utilizza questo identificativo per citare o creare un link a questo documento: https://hdl.handle.net/11311/1283369
Citazioni
  • ???jsp.display-item.citation.pmc??? ND
  • Scopus 3
  • ???jsp.display-item.citation.isi??? 1
social impact