The emergence of easily accessible generative AI tools has fundamentally changed how we view teaching-learning-assessment (TLA) practices in higher education. Since ChatGPT's debut in November 2022, a wave of similarly advanced AI technologies has followed, and we can expect them to become increasingly sophisticated and integrated into our daily lives. In a recent update, Meta added its Llama 3 model across all of its social media platforms, including WhatsApp – chatting with GenAI is now as easy and accessible as sending a message to a friend or a loved one. Higher education institutions (HEIs) can therefore no longer cite students' limited access to technology or tools as a reason not to engage with these technologies.
The important thing to realise is that these AI tools, while not originally developed for educational purposes, have been widely adopted by students and are now being used in TLA environments globally – whether HE teachers permit it or not (Dai, Liu & Lim, 2023). And with growing integrations, trying to prohibit students' use of these tools is becoming less and less viable.
The prevalence of these tools then raises the question – if we are asking our students to do something that the 'machines' can already do faster and potentially better than they can, aren't we doing them a disservice? Global thought leaders such as Joseph Aoun (2017) and Reid Hoffman (2023) believe we are, and that we should instead be equipping students with the ways of thinking they might need to navigate their (uncertain) futures.
Since the creation of our position paper in January 2023, SU has taken an open-minded approach to AI in TLA, encouraging our academic staff to get to know these tools and explore their potential impact within both our broader institutional and discipline-specific contexts. We position generative AI in TLA within the concept of academic integrity (the AI²-approach), encouraging open discussions about the limitations, affordances and responsible use of these tools within our unique contexts.
Given this, we can start having the difficult conversations, where we can:
- Understand and redefine our teaching role in HE
- Consider whether our learning and assessment opportunities are AI-resilient
- Aim to preserve student learning through the concept of grappling as learning (or the "productive struggle")
- Evaluate whether our current assessments truly are the best way of assessing student learning in our unique contexts – are we really developing evaluative judgement?
To support lecturers on this journey, we have developed various resources (click on the blocks below to follow the links), ranging from an AI literacies framework developed specifically for TLA to short courses that can be used to measure and encourage responsible AI use. At SU we didn't want to offer easy tips and tricks for simply integrating generative AI in our TLA. Instead, we have opted to facilitate the difficult conversations about responsible AI integration and its implications for teaching, learning, and assessment at our institution.
SU guidelines on allowable AI use and academic integrity
Short courses on AI
Useful sources
An introduction to GenAI
AI² Discussion series
Webinar 2: AI in TLA
Presenters: Dr Hanelie Adendorff, Dr Phil Southey, Dalene Joubert & Magriet de Villiers
Recording: YouTube
PPT: AI² Webinar 2
AI² Discussion series
Webinar 3: AI-enabled learning
Presenters: Dalene Joubert & Dr Albert Strever
Recording: YouTube
PPT: AI² Webinar 3
AI² Discussion series
Webinar 5: Values to underpin our approach to AI in HE
Presenters: Magriet de Villiers & Tanya de Villiers-Botha
Recording: YouTube
Although banning generative AI tools might have seemed like a viable option in the early days of 2023, we don't encourage this: AI detection software is not reliable, and a culture of suspicion towards our students is not aligned with the learning-centred approach to TLA encouraged throughout our institution. We instead encourage working through the AI use guidelines and having critical conversations with students about generative AI and their learning.