Way back in the 15th century, Johannes Trithemius, a German Benedictine abbot and polymath, loathed the newfangled printing press and worried the invention would make monks, who typically copied texts by hand, lazy. 1790: Reverend Enos Hitchcock blames novels for corrupting the minds and morals of youth. 1909: The Annual Report of the New York Society for the Prevention of Cruelty to Children states that movies draw girls astray to ‘lead dissolute lives’. 1926: San Francisco adults claim the telephone breaks the family apart and ruins the community. 2005: Hillary Clinton declares that video games steal the innocence of our children.
All of these technologies are still with us, and each has grown larger than anything we imagined it could be. History demonstrates that technology advances faster than we are able to reject it. We can watch this in real time today with the feverish emergence and popularization of AI, which is permeating nearly every sphere of human life: transportation, art, construction, and, at the forefront for us as professors and writers, education. Educators are presently scrambling over how to deal with students submitting AI-generated essays stitched together from bits and pieces across the internet, with AI “reading” and summarizing texts for students, and with the threat of AI taking over the role of “educator” altogether in place of us teachers. In An Introduction to Teaching with Text Generation Technologies, Laquintano, Schnitzler, and Vee aptly articulate the main question academia faces in the midst of this AI storm: How are we supposed to deal with this?
Our authors are generally eager about AI and recognize its influential power. While I do not share their excitement for this new frontier of technology (I’m just trying to make it to next week), I do agree that a paradigm shift of grand proportions in education and writing is coming faster than we realize. So fast, in fact, that educational institutions across the world aren’t able to keep up; schools struggle to develop action plans, courses on working with AI, methods for detecting AI usage, and the like. Our authors propose four possible responses: 1) prohibition, refusing to use AI technology at all; 2) full embrace, leaning into it and accepting it wholly; 3) critical exploration, teaching students how to use AI critically and to recognize its downfalls (which the authors posit as the best approach); and 4) a blend of all the methods previously mentioned.
Methods 1 and 2 are the riskiest: method 1 because, as the historical examples above show, most efforts to resist technological change as impactful as AI will be for naught; and method 2 because it is generally unwise to place complete trust in any new, incredibly influential technology, especially one trained to take the work of others and morph it into something else with little to no real logic driving it. It is in the name: “artificial” intelligence. Anything made or operated without a human eye to guide it must be kept under intense scrutiny. This is exactly why we must adapt to work with the expansion of AI critically, learning how to use it to our benefit while actively acknowledging its faults.
I do not favor or support AI, as I cannot ethically stand with a tool made to leech off the hard work, emotion, and skill of others, but I believe it’s far too powerful, and already too large-scale an innovation, to simply ignore or try to fight into extinction. We must learn to use AI and bend it to fit our goals, morals, and lives in the way most advantageous to all, or we will be overtaken entirely.
We as humans, experts at progress, can’t help but be fascinated by extreme technological advancement, as some of us envision a grand future. We as humans, experts at survival, can’t help but be fearful, as those same innovations could spell societal downfall. Either way, like the printing press or the telephone, revolutionary technologies will continue to spread like wildfire. AI will likely become the new normal in our writing world, forcing us either to develop alongside it and learn how to use it in time or to fall behind in the overwhelming struggle of rejecting change.
Your intro demonstrates well your point that “All of these technologies are still with us, and each has grown larger than anything we imagined it could be. History demonstrates that technology advances faster than we are able to reject it.” In some ways, your examples are comforting, because things turned out okay.
To be fair to poor old Johannes, the printing press did lead to a hundred years of war and devastation across the continent. But that probably won’t happen to us now, right? …right??
Amelia, I like the distinction you make here between “trust” in AI and “adapting” to AI. Full faith in AI is very dangerous, but respecting the prevalence and widespread adoption of this new technology, especially in the classroom, is vital. Without a healthy respect for AI, we as instructors will not be able to respond to its increased usage among our students.
In all fairness, video games do steal the innocence of children; Hillary was right about that. But technological advancement continues no matter what people do, and we have to adapt and embrace these changes in our pedagogy.
Your examples of past technological developments that other generations also opposed were eye-opening. Looking at AI through a wider scope than just the lens of one new technology made me think more about how AI is progressing. When the internet was first emerging and growing, there was much caution and many questions about how it worked and what regulations would be put into place. It was striking to see the connection between the AI front and the front of technology in general.