Using AI to Teach Science Communications

In “Professional Writing for Healthcare: Writing & Revising Research Summaries with Artificial Intelligence,” Heidi McKee shares a writing assignment she designed for her Professional Writing for Healthcare course at the University of Miami. McKee cites the goals of the assignment as “to provide opportunities for students to learn about writing research summaries for lay audiences and to learn about how they might approach writing with and, via prompting, for AI systems” (McKee).

McKee’s assignment consisted of students writing a summary of an article, peer reviewing that summary, and then having AI generate its own summary, which students then analyzed. She adds that “This project worked really well, and its general steps of (1) writing human-only, (2) using AI writing systems, (3) reflecting, (4) revising, and (5) annotating a final draft can be applied to any writing assignment” (McKee).

McKee found that the assignment was productive in having students evaluate the writing of a machine in comparison with their own. She observed that “Students also recognized that much of the AI prose was more bland, lacking their own voice and style. By the end they almost all had integrated AI writing into their final summaries in strategic and select ways, a few sentences here, some words or phrasing there. In their mid-process and final memo reflections, students explained why they chose to include what they did for AI and many of them noted the need, always, for human decision-making and human agency in the writing process” (McKee).

Ava from Ex Machina, as a robot.

In my Scientific Communications class, our Unit Two assignment has students rhetorically analyze a scientific journal article of their choice, with emphasis on audience, genre, and ethos. To begin the unit, I had students look at two texts in class: a research article from Science.org (https://www.science.org/doi/10.1126/science.ade2541) and a journalistic article on the scientists’ findings published by Smithsonian (https://www.smithsonianmag.com/smart-news/the-amazon-may-be-hiding-more-than-10000-pre-columbian-structures-180983031/).

While looking at these texts as a class, we paid special attention to the ways the structure and meaning of the text changed in moving from a firsthand scientific publication to a secondary journalistic source. We noted that the change in audience resulted in the omission of certain data points, that the change in genre resulted in different structural and stylistic choices from the authors, and that the change in publisher resulted in a change of established ethos. In particular, the Science.org article carries the established ethos of the scientific community as well as that of the publication, whereas the Smithsonian piece relied mostly on the established ethos of the original.

I did not have enough time in this class to include McKee’s exercise in its entirety (maybe next semester), but I was able to have ChatGPT write a second journalistic summary of the Science.org article and spend time in class examining the ways in which ChatGPT’s summary differed from the Smithsonian piece.

Students were quick to notice that ChatGPT did a better job of including data points the Smithsonian piece had omitted, such as specific figures from the researchers’ findings. In addition, ChatGPT successfully wrote its summary in plain language, free from jargon and easily comprehensible to a lay audience.

 

Ava from Ex Machina, disguised as a real human.

However, students also acknowledged the ways in which the ChatGPT summary fell short where the Smithsonian article did not. For example, readers of journalistic summaries usually seek a quick, interesting takeaway, and do not need or expect much of the information ChatGPT included (for example, the exact statistical data showing that plants found near archaeological sites indicate domestication). In addition, ChatGPT included implications of the study that may not be suitable for journalistic publications, such as its socio-economic implications.

In terms of genre, students saw that ChatGPT failed in many ways to produce something that would constitute journalism. The Smithsonian piece included interviews and images that ChatGPT was unable to replicate, and without them the AI’s output did not read like a journalistic article.

When students spoke about the ethos of the machine, they acknowledged that it had very little. I showcased ChatGPT’s ability to calculate things like word frequency or count citations, which gives students a useful tool for turning words into data. But students remained skeptical of the accuracy of the machine’s writing; ultimately, they had no reason to trust it without the source journal.

By including this modified version of McKee’s exercise in my class, students were able to get an introductory grasp of what constitutes scientific journal writing and how it compares to the more familiar journalistic writing about science, as well as learn about the possible uses and many limitations of text generators.

 

5 thoughts on “Using AI to Teach Science Communications”

  1. I think this is a great integration of AI tools into the SciComm curriculum, and I’m almost certainly going to steal your idea. I also love the choice to include the stills of Ex Machina, especially in conjunction with your conclusion that the students didn’t trust the machine without verifying with the OG source. That’s probably for the best.

  2. James, this is pretty cool. I really like how your students successfully realized the ways text generators fail. I mention this in my blog post, but I think learning along with our students and showing them these limitations is the best way to approach AI, and I love that I now have a real-life example of someone doing that. This is something I may do in my class next year, as it seemed successful for you!

  3. Awesome analysis of the journalism/summary capabilities of AI James! I agree very much with some of your students’ observations. While AI may have superior summary skills in some regards, AI’s ability to synthesize sources for a specific audience/purpose is much more limited. Your example that AI incorporated “exact statistical data” that an audience may not find interesting or relevant seems to be a substantial example of this.

  4. Wow. What a great activity to engage a conversation about AI. Very astute observations by your students about the audience. Their comparisons of the pros and cons of each are thoughtful and thought-provoking.

  5. You’ve done it again! Great work, James. I especially appreciate the inclusion of discussion around the accuracy of ChatGPT and letting your students draw conclusions. This method of “showing not telling” is so effective, and I am sure to use it in my class.
