(HEREs' Publication)
Rethinking Higher Education Practices in the Era of Artificial Intelligence – Alexander Tevzadze
The emergence and widespread adoption of modern artificial intelligence (AI) tools have created a multitude of challenges across various aspects of Higher Education (HE).
Generative AI, built on the once-esoteric concepts of deep learning and neural networks, lets average users generate human-like content regardless of their technical proficiency. As AI tools develop further, understanding their implications for the education sector becomes critically important.
Today, everybody is exploring the exciting opportunities AI creates for innovative teaching and learning practices in HE. Notable benefits include automated grading, research support and enhanced human-computer interaction (Dempere 2023). However, everything comes at a price. Tools that generate text apparently indistinguishable from human-written content raise an obvious concern about academic integrity and the recognition of individual contribution. Generative AI is known not only for producing highly realistic text with minimal input, but also for mimicking human critical thinking skills, making it a major threat to the integrity of online exams (Susnjak 2022). ChatGPT, for example, is widely seen as a prominent instrument for academic misconduct.
Since banning new technologies is always a losing game, we need to better evaluate the ethical implications of using generative AI tools such as ChatGPT and others (UNESCO 2023a).
In 2023, UNESCO adopted recommendations on the ethics of artificial intelligence in HE, which suggest to "introduce an ethical impact assessment to identify and assess AI systems' benefits, concerns, and risks and introduce appropriate risk prevention, mitigation, and monitoring measures, among other assurance mechanisms" (UNESCO 2023b).
This should lead to the development of AI policy frameworks at both the local HEI level and the national level. Recommendations to governments and policymakers include:
- Regulations on the use of AI in HE, together with guidance to HEIs on its use;
- Quality assurance processes for HEIs updated to include AI ethics;
Immediate ad hoc responses at multiple HEIs around the world have included banning ChatGPT over concerns about academic integrity. Other, more measured approaches involve implementing various strategies in response to students' use of AI tools:
- Banning AI tools in assessments;
- Deploying other software tools to identify AI generated text;
- Changing exam-based assessments to oral, handwritten or supervised formats;
- Using assessments that are difficult for AI tools to produce, e.g. podcasts, laboratory activities, group work, reflections, grading participation, scaffolded assignments;
- Allowing the use of AI tools in assessments, but requiring students to explicitly disclose their usage and output.
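As a toy illustration of the kind of statistical signal that AI-text identification tools look for (not how any production detector actually works), the sketch below flags text whose sentence lengths are unusually uniform, a crude proxy for the "burstiness" that human writing tends to show. The function names and the threshold value are invented for this example.

```python
import statistics

def burstiness_score(text: str) -> float:
    """Coefficient of variation of sentence lengths (in words).

    Human prose tends to mix short and long sentences; very uniform
    lengths are one weak signal of machine-generated text. Toy
    illustration only -- real detectors rely on model-based signals.
    """
    # Naive sentence split on terminal punctuation.
    normalised = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in normalised.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # not enough sentences to measure variation
    mean = statistics.mean(lengths)
    return statistics.stdev(lengths) / mean if mean else 0.0

def looks_suspiciously_uniform(text: str, threshold: float = 0.2) -> bool:
    # Hypothetical threshold; flags low-variation text for human review.
    return burstiness_score(text) < threshold
```

On its own, such a heuristic is far too weak to act on; at best it can route a submission to a human reviewer, which matches UNESCO's framing of risk mitigation and monitoring rather than automated enforcement.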
Assessment practices in higher education do need to be rethought. The specific model chosen should be consistent with the HEI's existing values, the unique demands of its educational programs, and the willingness of academics to adopt potentially challenging adjustments to well-established practices.
References
Dempere, J., Modugu, K., Hesham, A. and Ramasamy, L. K. (2023) “The impact of ChatGPT on higher education”, Frontiers in Education, https://doi.org/10.3389/feduc.2023.1206936
Susnjak, T. (2022) “ChatGPT: The End of Online Exam Integrity?”, arXiv:2212.09292, https://doi.org/10.48550/arXiv.2212.09292
UNESCO (2023a) ChatGPT and Artificial Intelligence in higher education: Quick start guide, https://unesdoc.unesco.org/ark:/48223/pf0000385146
UNESCO (2023b) Harnessing the era of artificial intelligence in higher education: a primer for higher education stakeholders, https://unesdoc.unesco.org/ark:/48223/pf0000386670
