Background
I’ve been teaching writing for more than 30 years at various schools in Pennsylvania. I’ve also worked as a professional technical writer and technical journalist. I’ve written quite a bit about open source software initiatives and at one point had the honor of interviewing open-source guru Linus Torvalds at a trade show. These areas of technology and writing pedagogy converge for me when developments like ChatGPT hit the market.
Playing in the Sandbox and Creating Models
I’ve been playing with ChatGPT (part of the OpenAI initiative) and have been learning a lot. I’ll try to encapsulate.
First, the information that ChatGPT can access is limited, so results are limited, at least at this point. From what I can see, a lot of it has to do with timeliness. For example, if writing about Edgar Allan Poe, ChatGPT would be able to access quite a bit of “hardened” information and provide a Wikipedia-style essay, based upon essential biography and consensus critical opinion. Here’s what it generated:
Journalism, however, is entirely different. When I asked it to create a current news story on the Boston Celtics, it was pretty much useless. It confidently discussed players who were no longer with the team, and the stats were wrong.
A colleague pointed out that the more advanced the prompt, the more specific the results (with some online users indicating that the bot can write code). After several days of not being able to access ChatGPT, I was able to get it to write a few more essays, including 1) an article comparing ancient Rome to modern America and 2) an article on August Wilson (with citations).
So if you’ve been wondering whether citations can be autogenerated, the answer is yes. The key takeaway at this point is that ChatGPT can generate a lot of factual information but, as far as I can tell, zero insight.
Pedagogy
History tells us that we cannot “uninvent” something. Therefore, as a technologist and writing professor, I’ve always pushed toward teaching the technology. This situation is no different. Besides, the best way to make writing teachers irrelevant is to ignore the paradigm shifts.
My pedagogical strategy with GPT is to teach through it:
Create a lesson where students use it to create drafts
Submit the drafts to our shared Google folder
Critique the drafts to decide whether they can be salvaged and, if so, precisely how (that’s the cherished outcome)
In this way, students have a choice: they can continue to write in the traditional way, they can do verbal brain dumps into their phones that serve as rough drafts (a technique I encourage), or they can become editors of the material that ChatGPT spits out. This last point is a significant component of the future of writing/editing/publishing, in my view, where editors serve as shepherds for the language that is so easily generated by the Ghost in the Machine.
And although GPT writing is extremely basic, it will certainly improve over time. My advice is not to throw up our hands and say “college writing is dead” (as so many screaming headlines do) but to assist students in learning how to produce good writing in an age where they have so many tools available. We English-types should also provide coaching to our colleagues in other disciplines who have students write papers in their classes.
For example, at the very least, the rough drafts created by ChatGPT can be used as outlines, which takes student writers past one of the great hurdles of college-level writing: generating the rough draft. (I have more teaching suggestions below, under Plagiarism.)
One example: the GPT essay on Ancient Rome v. Modern America can be taken in a number of directions. The direction that appealed most to me involved comparing art. And while I suppose you could have GPT write another essay comparing the art of the two times, it is likely (at this point) that the drafts will continue to be surface-level with no insight. This is where students need to find their own examples, their own sources, and develop their own insights.
Assistance to Writers is not a New Thing
And before we discount assistance provided to writers throughout the course of history, let’s acknowledge how great writers have had their own share of support (typically from women): John Steinbeck’s first wife, Carol Henning, typed up all of his manuscripts, fixing errors as she went. Stephen King’s assistant is Marsha DeFilippo, Catherine Asaro’s assistant is Kate Dolan, Neil Gaiman’s assistant is Lorraine Garland, Nancy Holder’s assistant is Erin Underwood, and Charlaine Harris’s assistant is Paula Woldan. The point here is that all writers have help. In today’s world, that help increasingly takes the form of AI. I know it’s not apples to apples, but I’m trying to crush the idea that “great writers” were doing it all by themselves.
Plagiarism
But, you ask, what about plagiarism? My answer is to have students talk about their writing and sources, either by creating short Zoom videos, getting into small groups for discussion, or speaking briefly to the class. Make them submit their sources as PDFs to a shared folder or D2L. Have them write summaries of their sources, if necessary. There are ways.
I do fear that plagiarism checkers are going to be outpaced by ChatGPT, but we shall see. My gut says that even if you can strip out a suspicious line and Google it, the search might not lead you to any clear, existing source. So, again, press students to talk about their theses and their sources, so that they can’t just hand in a written assignment at the end of the semester, leaving you to try to figure out how the paper was actually written.
The other thorny question relates to this: What if I write something on August Wilson and GPT picks it up and places it within an essay in a robotic effort to provide insight? That writing (and those insights) are my intellectual property. Can The Machine just take it from me, or are there copyright consequences? I imagine that copyright enforcement will enter the picture at some point, but as we know, technological developments typically outpace the legal system, so it could be years, even decades, before law is fashioned around these situations.
Conclusion
Generative AI is here to stay, and I would say that from the point of view of English Departments, it is a good thing. It calls upon us to share our expertise, insights, and, yes, fears. Besides, let’s not forget how we all (mostly) use computers to write these days. I can remember, when I was an undergraduate, a professor holding up a pencil, saying, “This is all I need to be a writer.” I suppose that’s true, but at the time, the Apple IIe was out, and the new generation of students was starting to hoard 5.25″ floppies and marveling at how we could revise our work from draft to draft. As for pencils, I’ve nothing against them and use them all the time to make lists and notes, but I can’t answer a single email with a pencil.
The bottom line is that science will continue to create disruptive technologies, and academics should lead the way in explaining, problematizing, and teaching those technologies. We ignore them at our own peril.