Why do Students Use AI?: Preventative Measures for Unethical AI Use

My brother’s girlfriend is an attorney in New York, and over Thanksgiving with them the conversation turned to AI. She is beyond irritated that she has caught, and had to report, several AI-written legal documents and cases sent to her by other lawyers, making her job harder and wasting her time. I could empathize, as I’ve also had to deal with an AI-written paper or two. Unlike her, however, I can’t take such action, because we cannot prove to a student beyond doubt that “this paper was written with AI.” We can gather the typical red flags (inconsistent writing style, lack of any specific examples from their own work, missing course concepts, etc.). Still, without firm guidelines to follow, as we have with SafeAssign, we take a risk every time we confront a student.

Since we can’t reliably act on AI-written work after it already exists, what can we do to keep our students from becoming lawyers who depend on unethical AI while determining the livelihoods of those at the hands of our judicial system? We have to shift our perspective and take a preventative approach: manage the problem before the AI-written text ever exists.

To begin to handle the misuse of AI before it starts, the question we must ask ourselves as educators is: why do students, and even professionals, use AI? I’ve read and discussed similar questions with my peers, and the main answers they’ve encountered or hypothesized are that students feel they can’t write well, lack the time, or simply don’t want to do the work. As teachers, we can model good time management in class, provide moral support and praise for authentic work, and offer incentives for completed work; but in the end, if students have the option and the access to use AI, they will use it.

In addition to establishing good habits and confidence, since we can’t handle the product of AI and confront a student with it, we need to teach students how to use AI properly. Mike Frazier outlines an example lesson introducing students to ethical AI use, and validates some of the concerns we teachers share, in “Promoting Ethical Artificial Intelligence Literacy.” Frazier states that “this overarching discussion exploring with students the ethical implications of Generative AI tools like ChatGPT can be a daunting process—particularly if, like us, instructors find themselves suddenly inundated with resources, opinions, and immediate threats to the status quo of instruction in higher education.” Oftentimes, when instructors come across AI work, it is difficult not to feel insulted and think, “Do they think I’m stupid?”, assuming the students are trying to pull the wool over our eyes and take advantage of us. As Frazier points out, this is the fear of a threat to our jobs and to our students’ education taking hold and steering us in that direction. A central part of Frazier’s exercise is for the instructor to “lead a broader discussion on some critical points related to issues like data privacy, transparency, responsible and ethical use, and critical exploration of AI tools in two contexts: (1) as a learner in academia, and (2) as a future practitioner in their individual fields” to lay the foundations of critical and ethical AI use for students now and in their future fields, whether they become lawyers or otherwise.

It is integral that we don’t allow the fear of AI threats to change who we are as educators who want the best for our students. For this exercise to bolster our students and prevent the misuse of AI, Frazier reminds us that “having these conversations in an educational space with a figure they trust, such as a caring and non-judgmental instructor, is essential to making thoughtful decisions related to academic integrity.” By engaging in these open conversations and comparing and contrasting students’ own work with AI output, we can open their eyes to the value of their work, offer them a key to managing their time better, and improve their work ethic, all within the ethics of AI use. Hopefully, with our instruction and guidance, my brother’s girlfriend won’t have to take time out of her day to report AI-written legal documents in the future.

One thought on “Why do Students Use AI?: Preventative Measures for Unethical AI Use”

  1. Love your comment: “In addition to establishing good habits and confidence, since we can’t handle the product of AI and confront a student with it, we need to teach students how to use it properly” because it puts us in a productive space versus a place.
