Students are using AI tools for assignments, research, and as study buddies, but many of them may not have the foundational critical thinking skills required to understand the tools they are using and the ethical and safety implications of those tools. Education needs to shift towards purposeful, scaffolded integration of AI alongside the development of digital literacy skills.
Prohibition vs. education
I like to use a ‘drugs’ analogy in relation to AI. If you insist on telling your kids to ‘say no to drugs,’ they go out into the world armed with zero knowledge about the effects of drugs on their bodies. There’s a good chance they will try drugs, and they will have no idea what is happening or how to keep themselves safe.
AI is drugs. This analogy works for higher education in multiple ways:
- 🌿 Kids may not go out intending to find drugs. Drugs will often come to your kids.
- 🤖 Students may not go out of their way to find AI tools, but marketing is so pervasive and targeted that the tools will find them.
- 🌿 Abstinence-only campaigns fail to teach kids the risks and drawbacks of drugs. They fail to teach kids how to be safe if and when they take drugs.
- 🤖 Blanket ‘can’t use AI’ rules don’t teach students the risks and drawbacks of AI. They fail to teach the student how to use the tools in safe and ethical ways.
- 🌿 In some cases, parents have no knowledge of drugs because they either didn’t use them or the drugs were just not available to them at the time. Some parents just weren’t interested in drugs.
- 🤖 In some cases, educators have no knowledge of AI because they either don’t use it or the tools just weren’t available to them when they studied. Some educators just aren’t interested in drugs. I mean AI. 😬
I asked Perplexity for feedback on this blog post and was told, “The analogy, while creative, may seem provocative to some.” I took this as a compliment.
The case for education over restriction
We need to think of ways to scaffold and explicitly develop our students’ professional skills, including digital literacy. Students who have the skills to use a tool appropriately and who understand its ethical implications are less likely to misuse it. When students collude or plagiarise, it’s generally because they lack the skills to complete the task, not because of an innate desire to cheat the system. We need to spend time in class teaching students what AI is, what its drawbacks are, and how to use it safely and ethically.
This doesn’t have to be onerous and can be integrated into teaching activities or run as an add-on workshop. Even a single workshop can improve students’ confidence in using generative AI, improve their understanding of policy, and help them identify purposeful study uses of AI and evaluate its output (Sullivan et al., 2024). Teachers increasingly want to change their teaching practices and “teach students how AI works, how to use AI, and the critical thinking skills and the ethical values needed for working in an AI-saturated world” (Bower et al., 2024).
You want to draw students’ attention to the weaknesses and biases of AI and engage their critical thinking skills.
Scaffolding students’ AI literacy and critical thinking
Evaluate the style, content and evidence
- Ask different AI chatbots a course-related question and collaboratively analyse the answers.
- Have students evaluate the information presented and the depth of the argument.
- Ask the AI tools to cite their sources, then check them. Have students search for these sources and find out whether they exist and are quality, peer-reviewed sources. Are they reliable and unbiased? What are the sample size and demographics of the participants in the research?
- Have students check whether the information the AI tools cite is actually in the article itself.
- Create a rubric to evaluate the quality of work produced by ChatGPT and have students use the rubric to give it a mark.
Analyse the work for bias
- What assumptions have the AI tools made about gender, ethnicity and culture?
- Are there any reinforced stereotypes presented in the text?
- Look for any biased language in the text. Is it neutral and written in an academic style? Are there any loaded words or phrases that are designed to elicit an emotional response?
- If you change some of the details, such as gender, sexuality, ethnicity, culture, or location, does the information generated by AI change? What does this say about AI bias?
Role play with an AI chatbot
- Create a scenario involving a role-play between two people, and have student groups perform the role-play using AI as the other participant.
- Keep students in small groups so they can collaborate on their responses and evaluate ChatGPT’s responses as a group.
- Have the students try to change the chatbot’s opinion on the topic they’re role-playing.
- Finish with a post-class reflective task in which students analyse the chatbot’s effectiveness at exploring a scenario and the inherent risks involved.
Get feedback from AI
- Get students to give AI a text they’re working on and ask for feedback.
- Have students give AI one of the assessment criteria alongside the assessment task, and ask what they could do to improve their response against that criterion. This should be done on an institutional licence so the data is protected.
- Have students analyse the quality of the feedback and whether they would implement any of the suggestions.
Note: The feedback Perplexity regularly gives me is that my voice switches between academic and informal styles too frequently, and that my humorous asides are not appropriate. Then again, not everyone wants to sound like the majority of written information on the internet, and each unique voice should be celebrated. 🎉 Seriously, why does Grammarly constantly want to change ‘collaboration’ to ‘cooperation’? It literally doesn’t describe what I want to describe.
Additional note: When I got feedback on this blog post, I may have hurt Perplexity’s feelings: “Comments about Perplexity’s feedback on your writing style, while relatable, detract from your authority on the subject.” Thanks, I’d rather be relatable. 🫶
Documenting and Defending Your AI Use
Have students gather evidence of how they have engaged with AI throughout the process. Documenting their process can help in multiple ways.
Articulate and reflect on the process
First and foremost, it helps students unpack and refine their process. Often we plod through tasks the same way over and over, but by analysing how we work, we can find issues, bottlenecks and gaps. This doesn’t need to take long; students can unpack it with a few questions:
- Did you use one tool or multiple? Which was the most effective? Are different tools suitable for different tasks?
- During which stage of the process was AI most useful?
- Ideation and brainstorming?
- Planning and scheduling?
- Troubleshooting?
- Drafting?
- Feedback and revision?
Embed reflective tasks in the process so that students articulate how they have used AI, how it assisted them and even how it hindered their process. This could be a summative task aligned with your professional skills and graduate attributes.
Capture the process systematically
Secondly, capturing their process with documentation can help if students are ever accused of misusing AI. As above, it’s important not to spend too much time on this, but to design the process around capturing evidence. Strategies that can help:
- Ensure students have an account and are logged in to the tools they use, so they retain the chat history to show the receipts.
- Use Google Docs to capture the editing history.
- If they are unsure whether their tool is allowed, students should talk to the educators as early as possible. Being upfront and open about their use can help clarify with the educators what is appropriate and show that they are being transparent through the process.
The biggest difficulty is that we need to be comfortable with AI ourselves before we teach our students. Post-COVID, many educators are burnt out, afraid of redundancies, overloaded, and even systemically underpaid. On top of this, we must grapple with one of the biggest education shake-ups of all time. We need to understand, for ourselves, why AI can be unethical, unsafe, and extremely damaging. Importantly, we don’t need to know more than every student in our class. We just need to lead with transparency and be open about our own knowledge and use of AI. If we tell our students they can’t use it to write their assessments but use it to create those assessments, we’re not leading by example. Learn in iterations. Don’t try to have all the answers. Learn together with your students.
References
Bower, M., Torrington, J., Lai, J. W. M., Petocz, P., & Alfano, M. (2024). How should we change teaching and assessment in response to increasingly powerful generative Artificial Intelligence? Outcomes of the ChatGPT teacher survey. Education and Information Technologies, 29(12), 15403–15440. https://doi.org/10.1007/s10639-023-12405-0
Sullivan, M., McAuley, M., Degiorgio, D., & McLaughlan, P. (2024). Improving students’ generative AI literacy: A single workshop can improve confidence and understanding. Journal of Applied Learning and Teaching, 7(2), 88–97. https://doi.org/10.37074/jalt.2024.7.2.7
