Essential Music For Reading this Blog Post
In the 1990s the Odyssey was rebranded for the silver screen and titled Terminator 2: Judgment Day. Odysseus was renamed Sarah Connor (Linda Hamilton), and her long journey home is now aided by her tough-guy teenaged son and a previously evil cybernetic organism from the future (played by Arnold Schwarzenegger), now sent to protect her and her son. The one-time villain, now hero, is reprogrammed for good and sent back in time to protect the very humans he was once created to destroy. This Academy Award-winning film (for visual effects, sound mixing, and makeup, BUT not for best movie of all time about time travel and robots) remains in the cultural zeitgeist whenever we talk about technology taking over. In summary, Skynet (the fictional defense AI in the Terminator saga) represents the ultimate man-vs-machine fear: the machines decide that the biggest threat to peace is humankind, since all wars are started and fought by humans, and therefore peace can only be achieved when all humans are destroyed. Scary!
As a teaching fellow, I can’t help but think about terminators when I hear AI writing software names like ChatGPT. Is this merely new, and therefore scary at first, or is it a dangerous self-aware monster that’s going to destroy the world of writing as we know it? In AI for Editing, Nupoor Ranade allies with AI services and invites students to partner with and explore AI software. Using AI as a writing-enhancement tool reveals the strengths and weaknesses of the software. For summarization, most platforms were adequate but struggled with accuracy, misrepresenting the information. When it came to analysis, AI was able to set up the frame of an argument and reference the topic described, but offered no real supporting evidence to back up the argument. A common problem within AI writing software is hallucination: sources or citations made up by the AI to persuade the reader. This is a dangerous trick of technology that can go unnoticed by an uninformed user.
Ethical Considerations in Courtroom Drama
According to this story from Forbes.com, Steven A. Schwartz, a seasoned lawyer with 30+ years of courtroom experience, used ChatGPT to create arguments for his client’s case. Schwartz asked the AI software about the cases and sources it named, and the software reassured him that the information was real. It was, in fact, not real. The program hallucinated, fabricating cases that never happened, to achieve the objective of justifying its arguments. Now this lawyer faces severe consequences, claiming he didn’t know the software was capable of doing something so dishonest. What’s frightening is that this instance was caught, raising the possibility that other lawyers are doing the same thing and going unnoticed. The legal system is only one of many areas where AI writing could appear to be a partner but unknowingly betray users through their own ignorance.
AI literacy must expand, because the technology is outpacing the ability of teachers and administrations to navigate this new reality with caution and to prevent this tool from becoming the destructive, terminator-like machine that eliminates the humanity from writing. What can we do as teachers of rhetoric? We can treat this new tool with caution as we explore it with our students, making them aware of the flaws within the system. Making our students AI literate will hopefully enhance their usage of this tool to produce better writing. Stepping away from the software and developing their own writing muscles is a choice students must make and endure. The major challenge is not failure, which is a natural part of life, but dealing with failure and adjusting for success. One tool I use all the time is the “terrible example,” where I show an awful resume, infographic, or poorly constructed website. The students fix it as a group in a discussion (that ultimately turns into a feeding frenzy) where we constructively repair these terrible documents using the concepts we are about to define or have just been exposed to. Students start to notice the typography changes, the poor spelling, and the horrible layout colors, and their reactions help create a lively discussion about repairing documents to fit a standard that would otherwise remain abstract.
AI writing software is perfectly fine as an editor on a rough draft, a formatter of citations, and a cleaner-upper of that old resume. However, AI doing the heavy lifting of constructing critical thought and supplying evidence (sometimes false) creates a dangerous reality that, as the terminators warned us, removes humanity to prevent human mistakes. We must reprogram these bots in order to ally with them and fight the good fight: the fight of keeping humanity alive within the written word.
I’m a big fan of the ‘terrible example’ class exercise; that is a great idea. I agree with you: in order to prevent a Skynet moment, we rhetors and instructors have to be familiar with how to use the bot as a tool.
When I was talking with my students to get feedback on my lessons, they told me they thought I should redesign the AI lessons I taught in class to better demonstrate the failures of AI. Unfortunately, it’s really easy to find examples of bad infographics and bad websites, but it’s much harder to find situations in which AI does a bad job.
if i reprogram the bot to live harmoniously with me can i make sarah connor my wife? asking for a friend