On the dubious wisdom of hiring AI as your editor

Members of the Queensland Symphony Orchestra in Brisbane, Australia were not happy about this AI-generated promotional image, which shows the orchestra sitting in the audience and a concert hall that doesn’t look like the one the orchestra actually plays in. Source: Slippedisc.

Amidst the intense debates over using artificial intelligence to prepare sermons, an even more serious crisis is going overlooked. Human editors are an endangered species, because people think they can just plug their written draft into ChatGPT and get a flawless result.

Not only does AI tend to produce bland, predictable prose, but because it lacks original thought, it doesn’t produce original content.

Recently I was asked to edit an essay for an African college student. The background section was written beautifully, but the description of research methodology was incoherent and there was no original contribution. My inquiries about the writer’s process generated defensive responses. To avoid a cultural clash, I provided the paper to two African friends, who promptly discerned what I should have realized immediately: the student was using AI, which could write a great background section but was useless after that.

For the rest of our lives, all of us need to resist laziness in depending on AI and read with skepticism everything that might have been AI-generated and pockmarked with credible-sounding but false information.

The remainder of today’s post is an essay I wrote for my friend Marc Baldwin, owner of the Edit911 editing service, who is trying to persuade clients to pay for human editors rather than thinking AI is an acceptable substitute. (Edit911 is not the editing service mentioned in the opening anecdote of the essay.)

I hope you can enjoy spending this day with real people!

This hungry editor insists he is still better than AI

I received an interesting message recently from an academic editing service I’ve worked for. The message reminded me that when customers ask to have their writing edited, they expect the work to be done by a human editor.

It turns out that some freelance editors had found a way to complete their assignments faster: they were running documents through ChatGPT or a similar artificial intelligence program.

For a moment, I was tempted to sympathize with the cheaters. After all, human freelance editors are starving because people think AI can do just as good a job instantaneously. It was ironic, I thought, to see the beleaguered editors turn the tables and use AI to their own advantage!

However, they got caught. Some sharp-eyed Chinese clients noticed distinctive editing patterns and raised questions. An investigation conclusively identified the editors’ AI use and they were removed from the project.

The fact that these would-be cheaters couldn’t get away with their deception proves one important thing: AI editing is not the same as human editing.

Of course, technology has been helping people profit by seeming smarter than they are for a long time. Sixteen years ago, I requested a hotel reservation in Latin America and got a response in perfect English. When I reached the hotel, I asked which of the staff spoke English. The reply: “None of us do. We used Google Translate.” Happily, I speak Spanish, or we would have had to carry a computer to every conversation with hotel staff for the next six days.

But today’s AI is asserting a new type of competence by claiming that it can improve the writing even of excellent writers using their native language.

To analyze rigorously what AI produces—and in hopes of reassuring myself that I’m not hopelessly inferior to an algorithm—I inputted sections of two documents into ChatGPT’s substantive editing level and compared the results to my own edits.

I must admit that AI equals or even outperforms me in a few areas. It never accidentally omits a word, as I occasionally do. There are no sentence fragments or run-ons. AI makes some good word-level improvements, such as changing a description of a “multi-vocal” book to “multi-author.” And it sometimes restructures sentences to improve succinctness.

Gosh, I was almost ready to give AI a pat on the back, except that it has no back.

But I also found that despite its best intentions, AI sometimes mangles meanings or loses intended nuance.

For example, in a passage on the New Testament Gospel of John, AI referred to “John’s use of the vine metaphor in chapter 15, where, as this chapter argues, his indebtedness to the Jewish Scriptures is especially pronounced.” The words “this chapter,” added by ChatGPT, have an incorrect antecedent because AI has failed to distinguish between a chapter in the Bible and a chapter of the writer’s book.

In another passage, a reference to “debates based on the Scriptures and Jewish traditions” became “scripturally grounded debates,” which could imply an evaluative comment (i.e., that the debaters were using Scripture accurately) that the writer did not intend.

Since AI is still a machine, it can work only with what you feed it, so it may change documents in ways you don’t expect. When I submitted a portion of an academic article but did not include the footnotes, ChatGPT not only eliminated footnote numbers and quotation marks but even added the phrase “as has been noted,” which was inaccurate.

In a different article, ChatGPT read a reference to “biblical interpretations that perpetuate conflict” and changed the verb to “reinforce.” The verb change loses the desired implication that the conflict will be passed on to additional people and new generations.

Many other passages, though technically correct, feel clumsy or imprecise. AI described a book as “primarily a posture of lament,” but books themselves don’t cry or take on a posture. It awkwardly said the book teaches people how to “feel rightly,” as if there is a right or wrong way to experience emotions. Moreover, perhaps misled by the surrounding context of a book review, ChatGPT guessed wrong and changed “leaders” to “readers.”

I’m trying not to be jaundiced toward AI, even though I was deeply insulted when I asked Google’s AI mode, “Who is Bruce Barron?” and it provided descriptions of three other men before getting to me. So here is my attempt at a balanced conclusion: if you want clear-sounding board meeting minutes or personal correspondence, AI is a great editing tool. But if you want your professional communications or academic writing to be first-rate, you still need a real editor.
