Robotic Writers: How ChatGPT Could Affect the Humanities

Written by Heather Verani

Last Friday, I started a new position in my graduate assistantship, helping and observing Dr. Ording with his English 110 courses. These composition courses are a foundation for lifelong learning, as students are taught the essential elements of writing that they can apply and build upon throughout their time in higher education. During one of the transitions between classes, Dr. Ording asked if I or my fellow GA Becca had heard about the new AI writing program ChatGPT. At the time, this technology was a distant concept to me. However, in just the week since that first mention, it has become a topic of debate and concern in multiple fields, such as software development, the legal disciplines, and the humanities.

The chatbot is described by the company on their website as using a “dialogue format” that makes it possible for ChatGPT to “answer follow up questions, admit its mistakes, and challenge incorrect premises.” The chat is able to construct responses to questions and directions input by the user because it is “powered by large amounts of data and computing techniques to make predictions to string words together in a meaningful way” (Sundar). The way this artificial intelligence tool can instantaneously create sophisticated responses poses a problem for professors in the humanities, especially those who teach writing courses. Students could use this technology to do the work for them, reaping the benefit of a good grade but losing all the value these courses offer, such as the previously mentioned foundational skills. If you are wondering what a response from the chatbot looks like (as I did), I created a free account to see what exactly it produces. For this article, I chose a simple question and format that would be common in a high school writing course: I asked the program to “write a 5-paragraph essay on color symbolism within The Great Gatsby.” Below I have included a screenshot of part of the AI’s response, and it is evident that the answer is not simply copied and pasted from SparkNotes. It includes a robust vocabulary, connections to the novel’s themes, and an understanding of the emotion and morality tied to the colors. Overall, it is a pretty good essay that could easily pass as a student’s writing.

So, with the introduction of this technology, what does this mean for the humanities? In John Warner’s book Why They Can’t Write: Killing the Five-Paragraph Essay and Other Necessities, there is a sub-section titled “Our Writing ‘Crisis’” where he discusses imitation in connection to teaching students how to write. He states, “much of the writing students are asked to do in school is not writing so much as an imitation of writing, creating an artifact resembling writing which is not, in fact, the product of a robust, flexible writing process” (Warner 5). This idea of imitation in the teaching and learning of writing is one that has been discussed in many a writing studies course. The stakes of imitation now become even higher, as students lose so much by using a robot writer to answer questions that are specifically designed to enhance their writing skills and have them reflect on how they can improve as writers.

Professors now not only have to check whether students are plagiarizing in their writing, but whether they are doing their own work at all. While playing around with the AI myself, I found a few points of weakness in the software that would hopefully help pinpoint its use. The first emerged when I asked the chatbot to “write a 5-paragraph essay about expressions dance at Millersville University.” The response I received seemed a little too familiar, and I realized the chat had used direct lines and vocabulary from my own blog article on clubs you should check out on campus, which featured expressions dance. This discovery shows that, although not frequently, if the subject of the writing is focused enough, the chatbot will plagiarize directly from other sources rather than creating an organic response, which can be caught using software such as Turnitin. Another weakness is that the answers lack depth. As seen in The Great Gatsby essay, the responses were correct but basic in their understanding. Many courses in the humanities require direct quotations, a personal connection, or specific details that this software cannot supply.

The weaknesses of the software show that it is not a threat to the humanities just yet, and that there is still more to learn from human writers than from robotic ones.

(Below is the visual of my interaction with the ChatGPT AI.)