In ‘Translating a Policy Document into Plain English’, Timothy Laquintano of Lafayette College describes a pretty brilliant assignment that challenges students to think about AI, literacy, and the loss of meaning in translation. Students first translated part of a complicated document down to a 7th-grade reading level, then repeated the exercise using AI to do the translating. They compared the two translations and discussed how meaning is lost when translating for public audiences, and the limitations of the technology we use to aid our adaptation goals (looking at you, Flesch-Kincaid).
The document that students and AI were tasked with translating is a policy document – which, for those of you who have never had the displeasure of writing one, is a document with highly specific and rigid genre conventions. And I don’t just say this because a policy document is responsible for the only F I’ve ever received; policy documents are aimed specifically at policy creators, and they follow strict rules about tone, organization, and language. Students had to take this document, with all its explicit and embedded meanings, and translate it for the public. Then they had to examine the AI’s translation and compare the two.
The students understood that they were to write for an audience that reads English at a 7th grade level, but who are still well-educated and intelligent adults. The subtle difference here significantly influenced the metaphors and language choices the students used to explain the complex concepts, since the policy document they were examining was an Obama-era policy on AI (topical!).
Students engaged with both translations, examined how meaning is lost during the translation process, and pondered the implications of simplifying text to reach a broader audience. They also raised concerns about whether readability tests can capture the depth and nuance of a text, and questioned whether the Flesch-Kincaid test truly and accurately measures readability, or something altogether different.
Incidentally, this is my concern as well. The Flesch-Kincaid test is based on English language structure and assumes that the same principles apply to all languages. Anyone who took Spanish or French in 8th grade can tell you that this is not the case. Overreliance on readability scores can encourage a “dumbing down” of content to meet arbitrary targets, yet the test considers nothing beyond surface features like sentence length and syllable count. Focusing on these markers of readability oversimplifies the complex nature of comprehension. A text may be highly “readable” and still be unintelligible depending on the context and audience.
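To make concrete just how surface-level the test is, here is a minimal sketch of the standard Flesch-Kincaid grade-level formula. The syllable counter is a naive vowel-group heuristic assumed here only for illustration (real implementations use dictionaries or more careful rules), but it is enough to show that swapping one proper noun for a longer one changes the “grade level” of an otherwise identical sentence.

```python
import re

def count_syllables(word):
    # Crude heuristic: count groups of consecutive vowels (min 1).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    # Standard formula: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Identical sentence structure; only the name differs.
bush = flesch_kincaid_grade("The article is about George Bush.")
ike = flesch_kincaid_grade("The article is about Dwight Eisenhower.")
# The Eisenhower version scores as "harder," purely on syllable count.
print(bush < ike)
```

Nothing about the meaning, accuracy, or audience of the text enters the calculation – only how long the words and sentences are.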
In technical or specialized fields, simplifying language to a low reading level may lead to oversimplification and loss of crucial details. This is what the students were asked to grapple with – what meanings are lost? Where are sacrifices made? Are they worth it?
The Flesch-Kincaid test does not take into account the content or context of the text. It assumes that shorter sentences and simpler words always result in better readability, which we know does not hold true for all types of writing. It also doesn’t consider the cognitive load imposed by complex ideas or concepts, something we touched on in earlier class conversations about readability standards.
The fact is, some texts simply require a higher reading level because they deal with intricate subjects, and simplifying them excessively could compromise their accuracy and meaning. How, then, do we adapt those documents for public audiences? Perhaps we need a better conceptualization of readability, one that doesn’t rely merely on short sentences.
I wish we could make a poster of this post. It really articulates your points about communication and social justice.
The Flesch-Kincaid score is so unreliable because merely writing about a multi-syllabic topic inflates it. For example, an article about George Bush will get a lower grade-level score than the very same article about Dwight D. Eisenhower, simply because Bush’s name has fewer syllables.
Awesome analysis of readability concepts, Barbara! This was a fun read. I would be interested to see how different AI software interprets different writing styles. What if an author decides to use shorter sentences or statements for emphasis? Will this be interpreted as a lower reading level?
Questioning the old standards is essential in adapting to the modern classroom. I love that you call out Flesch-Kincaid and demand reform.
my mom works in education. the way they teach reading now is frankly dystopian. they teach students how to break down words and sound out letters but not how to discern meanings. they are concerned purely with phonetics. i am very afraid that we are looking at a future where the vast majority of the public is uneducated and unable to read.
I have a good quote from Fahnestock that you may like, Barb, that really details this: “The assumption held by some proponents of the ‘plain language movement’ that meaning can be readily transferred from context to context by mere editorial wizardry needs a second and third look” (Fahnestock, 33).