TILT is receiving a growing number of inquiries about ChatGPT, the AI tool that can generate sophisticated, high-quality text in natural language. The major concerns center on academic integrity and the potential for students to use the AI to “write their papers for them”. Below are some ideas and resources that will give a clearer picture of what ChatGPT can and cannot do, along with strategies that can help instructors navigate the accelerating changes that advances in artificial intelligence are bringing to the educational landscape. (Credit to the Yale Poorvu Center for Teaching and Learning for foundational information and many of the links used in this article.)
(1) Instructors should be direct and transparent about what tools students are permitted to use, and about the reasons for any restrictions. The standard FHSU syllabus template links directly to the FHSU Academic Integrity Policy. If you expect students to avoid using AI chatbots when producing their work, add that expectation to your policy; example statements are provided below.
As for explaining why, understanding the learning goals behind assignments helps students commit to them. The only reason to assign written work is to help students learn: either to deepen their understanding of the material or to develop the skill of writing itself.
Research shows that people learn more and retain information longer when they write about it in their own words. If students instead task an AI with generating the text, they won’t learn as much. This is why we ask them to write their own papers, homework assignments, problem sets, and coding assignments. The impact on learning applies across all disciplines: STEM problem sets that require explanations also depend on students generating language to learn more deeply. And ChatGPT can generate code, not just natural language.
(2) Controlling the use of AI writing through surveillance or detection technology is probably not feasible. Outcry over the potential for academic dishonesty has led the creators of ChatGPT to introduce a tool that attempts to detect AI-written text. However, the sophistication of AI software is likely to outpace technological detection. Rather than relying on software to catch and punish users, faculty should consider fruitful ways to adjust their teaching in light of this new technology.
Tools like ChatGPT raise broader questions. Here is an approach from Inside Higher Ed: “Go big. How do these tools allow us to achieve our intended outcomes differently and better? How can they promote equity and access? Better thinking and argumentation? … In the past, near-term prohibitions on … calculators … spellcheck, [and] search engines … have fared poorly. They focus on in-course tactics rather than on the shifting contexts of what students need to know and how they need to learn it.”
(3) Changes in assignment design and structure can substantially reduce students’ likelihood of cheating, and can also enhance their learning. Based on research about when students plagiarize (whether from published sources, commercial services, or each other), we know that students are less likely to cheat when they:
- Are pursuing questions they feel connected to
- Understand how the assignment will support their longer-term learning goals
- Have produced preliminary work before the deadline
- Have discussed their preliminary work with others
Here are some practices that prioritize student learning and make it harder to collaborate with AI tools:
- Using alternative ways for students to represent their knowledge beyond text (e.g., drawing images, making slides, facilitating a discussion). Consider adopting collaboration tools like Yellowdig.
- Incorporating the most up-to-date resources and information in your field, so that students are answering questions that have not yet been answered, or have only begun to be answered
- Establishing scaffolded deadlines for large projects, papers, and presentations. One approach might be to set up numerous small deadlines and require students to talk to you, as well as each other, about their presentations and papers.
- Engaging with ChatGPT as a tool that exists in the world and having students critically engage with what it is able to produce, as in these examples.
Addressing ChatGPT on Your Syllabus
The simplest way to state a policy on the use of ChatGPT and other AI composition software is to address it in your academic integrity statement.
A policy prohibiting the use of ChatGPT for assignments in your course might read: “Collaboration with ChatGPT or other AI composition software is not permitted in this course.”
If you’d rather consider students’ use of ChatGPT on a case-by-case basis, your policy might read: “Please obtain permission from me before collaborating with peers or AI chatbots (like ChatGPT) on assignments for this course.”
Background: What ChatGPT Can and Cannot Do (Yet)
Can: ChatGPT produces good summaries of knowledge, like those you might find in the literature-review section of an academic argument. It can produce texts that compare views or express judgment, such as texts that support a choice between alternative theories or approaches. Because introductory courses commonly ask students to defend a choice between two or three positions, it would be relatively easy for students to begin or supplement their work with ChatGPT.
Cannot (yet): So far, ChatGPT texts don’t cite sources. You can make “cite sources” part of the prompt, but the AI mostly fabricates plausible-looking citations that don’t exist.
Less concretely, the products of ChatGPT often strike an informed reader as superficial or even perversely incorrect. Granted, this is sometimes true of texts produced by humans. But if one goal of academic writing is to extend or disrupt what is currently known, ChatGPT texts frequently fall short of that standard, or make claims that an informed reader recognizes as patently false.
If you’d like to learn more about large language models, check out Demystifying ChatGPT.
Recommended Reading
New articles on this fast-developing topic are appearing every day. Below is a selection of thoughtful pieces that address various elements of this tool.
Condensed list of faculty advice from Inside Higher Ed, Jan. 12, 2023
Advice and a sample class activity from Times Higher Education, Nancy Gleason, Dec. 9, 2022
Useful insights and advice from the U. of Michigan CRLT, Jan. 9, 2023
Creative writing challenges that show AI is a toy, not a tool from The Atlantic, Ian Bogost, Dec. 7, 2022
How to Use ChatGPT for Teaching: 5 Ways Teachers Can Utilize ChatGPT from Medium, Paul DelSignore, Dec. 20, 2022
I enjoyed reading this article, and I really admire the way that TILT is approaching this important issue. One way that I think AI can benefit students, without detriment to their learning, is by producing images for presentations and other work. Students would likely be using Creative Commons images anyway, so this allows them to produce higher-quality work by using images tailor-made for their purposes. In some courses, ChatGPT and other AI software may be helpful for generating “filler text” for projects and for helping students go above and beyond the lesson requirements. Ultimately, these tools will allow students to be more productive and produce higher-quality work, but it is important to regulate their usage so that it doesn’t become counterproductive to the learning process.
I’m playing with ChatGPT and how it could influence information literacy. When I asked it for sources, it gave me six “articles”; I looked them up and could find none anywhere. That’s a problem.
But I also found it helpful when trying to write something and needing a nudge with the language, similar to when Gmail or Microsoft Office suggests completing your writing with common phrases. Today I found this article from The Conversation, where the author suggests that ChatGPT can be used as a tool for brainstorming and improvisation. https://theconversation.com/chatgpt-is-great-youre-just-using-it-wrong-198848
I am curious/wary about AI that populates from other people’s work, like the AI image generators that Rob mentions in his comment. Some AI is blatantly copying artists’ work with no attribution or compensation. Is the writing AI doing the same? This is very problematic.
(apologies Samuel, I read “Ayers” as “Byers” in your comment about AI art)