Faculty Blog: What are we going to do about ChatGPT?
August 8, 2023 | Written by: Dr. Nathan Nobis
Nathan Nobis, Philosophy, Morehouse College, nathan.nobis@morehouse.edu
Spring 2023 saw the arrival of “ChatGPT” and other artificial intelligence (AI) tools that are able to produce different forms of writing in response to prompts. While ChatGPT can be used in many contexts and for many purposes, my discussion here concerns its use by students in higher education.
For a bit of factual background: ChatGPT enables students to enter typical essay writing assignments (and other types of assignments that involve writing), along with their requirements, and the AI will quickly produce organized, informed, sometimes “thoughtful” and even “stylish” original writing that generally meets those requirements. The AI can also be prompted to (repeatedly) revise that product in light of whatever revisions the user requests.
And users can manually revise the output (and then seek further feedback from the AI, if desired).
A student recently wrote in the Chronicle of Higher Education that many students are using ChatGPT, much more often and in ways that few professors suspect. And there are also reports of students “feeding” ChatGPT their own previously written papers so that it will produce new writings for them that appear to be “written” in their own unique styles.
So the problem is this: ChatGPT enables students to cheat more effectively on writing assignments. It enables them to manufacture submissions that are usually at least passable while engaging in few, or none, of the learning activities that writing assignments are intended to support: developing a topic, organizing information, developing a thesis, finding and organizing evidence, presenting arguments, responding to objections, thinking about how best to reach one’s audience, and so on.
Since ChatGPT enables cheating in these ways, letting students evade the educational process, here is our question: what are we going to do about ChatGPT?
Preliminaries
Let us begin with some initial observations:
- ChatGPT is a tool. Like many tools, it has legitimate uses and illegitimate uses. In educational contexts, illegitimate uses of a tool are uses that do not foster the academic or intellectual goals of a course. So, if a tool does not promote a student’s understanding of complex material, or their ability to effectively communicate that understanding, or their skills at analyzing and evaluating information and arguments, etc., then its use is illegitimate. Given this, any use of ChatGPT by students to avoid the work of engaging in challenging learning tasks, thereby preventing them from realizing the intellectual fruits of their struggles, is an illegitimate use of the software.
- ChatGPT is sometimes compared to a calculator, and it’s argued that since calculator “tools” are justifiably used, so is the use of ChatGPT. But this is a poor comparison: calculators are often justifiably used, but usually only after a student has mastered some lower level of mathematics and is moving on to something new and more challenging. The calculator is used on tasks the student already knows how to do, and so could do, on their own; they use it to save time so they can focus on a more advanced learning activity. To make it vivid, an elementary school student has not “learned arithmetic” if all they have learned is the numbers and where the +, −, ÷, and × buttons are on a calculator, even if they can provide correct answers to arithmetic problems, because they do not understand the math and cannot do the problems on their own.
In situations where calculators are used to do things that students generally could not do themselves, there is a legitimate educational reason for it. That is not the case with basic writing: there are no tasks there that students are better off outsourcing to AI.
So, a tool is illegitimately used in educational contexts if it is used to complete a task that the student could not do on their own. If and when ChatGPT is used to circumvent learning activities that require students to work to develop the skills to demonstrate understanding and communicate successfully (and much more), its use is illegitimate: if students use ChatGPT to produce writing they could not produce on their own, given their current level of understanding and skills, that use is illegitimate. This suggests a memorable guide for when ChatGPT use is illegitimate:
Anything students can’t do, ChatGPT shouldn’t do for them.
One open question is which educational goals can legitimately be met with, or benefit from, the use of ChatGPT: there may be some, and these potentially positive uses should be identified. A concern about these potentially positive uses, however, is that they can often be met in other ways. For example, although ChatGPT could review materials for students, simplifying them in various ways, this is also a task the instructor could do, or students could do together in groups. ChatGPT can also review students’ self-created writings and projects to suggest improvements: again, this could be done by other students, with benefits for all the students, and/or by the instructor, with benefits for the student-teacher relationship. So just because a benefit can be achieved with ChatGPT doesn’t mean that’s the best way to seek it: other routes may be equally or more beneficial. And there’s also a real concern about “slippery slopes”: perhaps students using ChatGPT to “check their work” will lead to them using ChatGPT to effectively do their work, or too much of it.
The above thoughts about when ChatGPT use is illegitimate suggest a related principle: if ChatGPT is used by someone who could create that writing product themselves, then that use may be legitimate.
I, however, cannot identify any substantive writing-related activity that almost any undergraduate student is better off outsourcing to ChatGPT. They are not better off, in terms of improved understanding and skills, if ChatGPT finds a topic for them, creates an essay outline for them, generates a thesis for them, assembles support or evidence for them, finds and responds to objections, and so on. Since students lack the expert-level knowledge and understanding required for legitimate ChatGPT use, their using it is typically illegitimate. Students also lack the ability to discern whether ChatGPT’s outputs are of low or high quality and where any errors are: their use could be akin to a student “using” a calculator and then saying of the final answer, “Look, I really have no idea whether this is correct or why: it says what it says.” And, again, students using ChatGPT for almost any potentially legitimate purpose is an easy slippery slope to illegitimate use.
(An aside: I do not find the use of spell- and grammar-checkers, including Grammarly.com, problematic, although it may seem to be ruled out by the principles above. Some differences explain this. First, students can sometimes spell- and grammar-check themselves, so this software is like a calculator doing things the student could do on their own. Second, students sometimes genuinely cannot [yet?] do these things, and the software can help them learn them [whereas I don’t think ChatGPT is usually going to help students become better writers]. Third, I don’t think that finding spelling and minor grammatical errors is as “constitutive” of thinking and reasoning as the processes involved in, say, essay and presentation writing are. Another issue is that, for many students, if we required a high level of grammatical and spelling proficiency before moving to higher-level learning tasks, we might never get to those tasks, or we’d be waiting too long.)
- Many students do not cheat at all now, by any means, and many of these students would not cheat using ChatGPT. Unfortunately, however, it appears that a significant number of students do, will, or might: it’s hard to resist the temptation to cut corners, especially under the pressure of a full load of classes and everything else in our busy lives, and that’s true for nearly all human beings. However, any interventions to prevent and reduce the illegitimate use of ChatGPT should not negatively impact students who don’t and wouldn’t cheat: it’s unfair for their learning and skill development to suffer because of efforts to reduce cheating in other students.
Given these preliminaries (preliminary preliminaries, since there surely are more background issues to be engaged!), again, what are we going to do about ChatGPT?
We can begin with our course goals and assignments: which of them might students use ChatGPT to circumvent? How can “ChatGPT-proof” goals, assignments, and assessments be created? Here are some suggestions, although none are perfect and all can likely be defeated by students determined to cheat:
- in-class exams, done by hand. In many cases, these need not be (entirely) essay or even paragraph-answer exams, so they can be graded quickly. For many fields, there are ways to create challenging multiple-choice questions that involve solving problems and applying concepts to new cases to demonstrate higher-level learning goals: multiple-choice questions need not be simplistic, so the best students need not be worse off for being tested in these ways;
- oral exams: discussing an issue can reveal how much someone has learned about it. Oral exams, however, can be very time-consuming and are challenging in other ways too; for one, they might be harder to grade objectively and fairly;
- speaking-related assignments, including ones recorded to video and posted online, where students cannot merely read, verbatim, something that could have been produced by AI: even if they are “talking through” something that AI contributed to, they can talk through it only if they have adequate understanding on their own;
- some additional assignment and learning activity ideas are found in the second half of this article, “Policing Is Not Pedagogy: On the Supposed Threat of ChatGPT.”
Admittedly, implementing these strategies is harder in larger classes, where there is less personal attention.
It must also be recognized that some of these suggestions may be challenging for some students with disabilities. And some students will not perform as well on these types of assignments as they would on other, more “traditional,” forms of assessment. That, however, is true of all kinds of assignments: some students do better on some kinds of assignments than on others, so a mere change in typical assignment format need not be unfair. The best response, as it’s always been, is to provide some variety in the types of assignments.
Concerning the teaching of writing specifically, and the processes involved in effective writing, here are some suggestions, although, again, none are perfect and all can likely be defeated by students determined to cheat:
- in-class writing activities where AI access is not allowed: either writing by hand or using computers where AI use is (somehow!) prevented or monitored;
- slower, more scaffolded writing assignments where students’ steps in the writing process are reviewed along the way: this may result in less writing overall, but it should result in better writing and a deeper understanding of the writing process. Breaking things into steps should also discourage the procrastination and eventual panic that motivate some students to cheat on writing assignments;
- oral exams and presentations based on these slowly-developed writings;
- requiring an official citation method, which some instructors report has been helpful in reducing cheating.
Here are two probably bad ideas for responding to ChatGPT:
- one common suggestion is that since ChatGPT exists and will be used, students should learn how to use it better, so some assignments could involve them analyzing a piece of ChatGPT writing with an aim to improve it. This is unwise. First, there already exists plenty of not-great writing on all sorts of topics that students can review with an aim to suggest improvements: indeed, “peer review” by students of student work can serve this function, and there are published writings that are not great either. Second, to successfully review a piece of writing by ChatGPT, students need the subject-area understanding and communication skills that using ChatGPT undermines. Finally, if we want models of excellent work, ChatGPT’s writings aren’t what we want to examine. So there’s nothing to be gained by engaging ChatGPT-produced writings that can’t be better gained in another manner: the suggestion that ChatGPT be used for this purpose appears to appeal only to the novelty of doing something with a new tool;
- it’s suggested that AI-detection software will get better and better, so submitted assignments can be checked for authenticity. If this happens, however, anti-AI-detection software will surely develop to meet the challenge. So I don’t see this technological “arms race” as a very good solution, although this all depends on the effectiveness of the tools.
In sum, the issue here is not merely how we might reduce a new type of cheating that involves new AI tools. The issue is much more profound and fundamental: for many reasons, our societies, indeed our world, need people who are able to learn about complex issues, understand them, communicate that understanding, present arguments for their perspectives, and productively engage contrary points of view. Simply put, we need educated people. And we need not to have people who appear to be educated in these ways but really are not, because they cheated using ChatGPT. ChatGPT makes distinguishing these two categories of persons, of citizens, harder, and so its negative impact must be resisted, for the good of all.