The Educational Designer

Student use of generative AI to gain feedback: Bard

This is the second post in a series where I compare the same prompts across the different generative AI services. For the initial prompt, assessment task and criteria, see the first post in the series: Student use of generative AI to gain feedback.

Completion 1: Feedback and improvements

Your assessment submission meets the criteria for task completion, referencing, and academic style. You have identified two potential barriers to seeking medical care for LGBTIQA+ individuals, and you have provided evidence from two credible sources to support your claims. Your writing is clear, concise, and free of errors.

Here are a few suggestions for how you could edit your work to further improve it:

Overall, your assessment submission is well-written and informative. I would recommend making the minor edits I have suggested to further improve your work.

Here are some additional resources that you may find helpful:

Completion: Improving task completion

Your task completion is detailed and informative. You have identified two potential barriers to seeking medical care for LGBTIQA+ individuals and have provided evidence from two credible sources to support your claims. You have also discussed the health needs of this population group in the community.

I would say that you completed the task with sophistication. You have clearly understood the task requirements and have written a well-organized and informative response. However, there are a few areas where you could improve your task completion.

Overall, your task completion is well-done. However, I believe that you could improve your work by providing more detail about the specific health needs of LGBTIQA+ individuals and the impact of these barriers on their health.

Here are some specific examples of how you could improve the detail of your task completion:

I believe that these changes would make your task completion even more informative and helpful.

Analysis: What did Bard pick up on?

In the initial completion, Bard gave more specific feedback than ChatGPT, citing the sentences where changes were needed. ChatGPT’s feedback was arguably more efficient, though, since a basic spell check would have picked up these issues anyway. Bard suggested changing “GP’s can have bias” to “some GPs may be biased”, which reflects the importance of hedging in academic writing, although it doesn’t explicitly explain this to the student. Bard also said that the writing was “clear, concise, and free of errors”, which is not only incorrect but also directly contradicted by Bard’s own feedback. Both ChatGPT and Bard picked up on some citation and referencing errors.

One of the major issues with this student submission is that it’s entirely surface-level and doesn’t go into any real depth. It gives two barriers in a very broad sense. In terms of feedback on content, there was a noticeable difference between ChatGPT and Bard. ChatGPT gave feedback in very broad terms, like linking the writing to the learning outcomes or giving examples. Initially, I assumed this was because of the appalling criterion ‘task completion’. Despite this, Bard managed to give examples that could be investigated further: “For example, you could discuss the higher rates of HIV/AIDS, mental health problems, and violence experienced by this population group.” There is a lot to unpack in those three examples, and Bard didn’t raise them with any kind of sensitivity, but if the student were really struggling, they would have some options for exploring the topic and writing in greater depth. However, Bard also states that the student has “clearly understood the task requirements and have written a well-organized and informative response”, which is a stretch. I have no idea why, but in one completion Bard appeared to state the same thing in two different ways.

Initially, I wanted to see whether generative AI could be used by students in a productive way that helped them improve their writing. What I would say is that generative AI may be capable of filling a feedback gap for students, particularly those in units with hundreds or thousands of students. Tutors and lecturers simply don’t have the capacity to give detailed feedback in these units, and while most universities do have academic writing consultation services, these can be in high demand. So, generative AI could open up more opportunities for students to receive feedback on their academic writing, but its discipline-specific content feedback is rudimentary and can’t really replicate a tutor or lecturer.

In fairness to generative AI, I deliberately designed the criteria to be vague, so anyone would struggle to provide feedback against them. However, criteria can take years to refine, and a decent number of rubrics in higher education are… well… vague.
