Sotto Voce.

"Qui plume a, guerre a." — Voltaire

Some Thoughts on Chatbots, Coherence, and Fake News

One of the things that makes a good reporter good is their ability to discern meaningful patterns in a wide range of available data. The key word there being meaningful.

I would expect chatbots to be great at pattern recognition. That’s how they string stuff together. And I would expect people to be favorably disposed toward chatbot outputs that present what appear to be meaningful patterns. It’s easy to string meaningless things together coherently (“Colorless green ideas sleep furiously”), but the problem is that it’s just as easy to string seemingly meaningful things together coherently (“Steel melts at a higher temperature than jet fuel burns”).

The problem with fake news isn’t with the content, but with the coherence. It’s been argued that pattern recognition is Homo sapiens’ evolutionary advantage. That’s what’s at play here. We are predisposed to look for coherence; the problem is that when we find it, we are also predisposed to trust it.

The breakdown occurs when we conflate coherence with meaning. When you have people who are fixated on coherence, and all you give them is stuff that looks meaningful just because it’s coherent, then they’re going to think they have found something meaningful. And this is why people believe fake news is real and real news is fake.

We are literally wired for it. We find comfort in patterns and discomfort in discordance — regardless of whether or not the patterns have any factual grounding — because those reactions are biologically programmed into us as survival mechanisms.

So when you have chatbots that can give us stuff that is perfectly grammatical and coherent but means nothing (i.e., is literally factually incorrect), how do you know?

How do you know if you can trust it?




3 Comments

  1. Richard P says:

    Difficult questions. I would say that the newer chatbots nearly always produce *meaningful* output—that is, words that, for a human reader, though not for the computer, signify something. Whether they are reliably true is a different issue.

    • sottovoce says:

      Maybe I’m using the wrong word, or using the word wrong. Maybe “accurate” or “correct” would be a better choice than “meaningful.”

      A few days ago on Facebook I saw a repost of something that an English Lit teacher (I think) wrote about her first encounter with a student paper that had been generated by ChatGPT. I'm paraphrasing, but what tipped her off was that the paper was coherent yet ultimately meaningless to anyone knowledgeable about the author under discussion — it was essentially doublespeak and puffery. But to someone less well-versed in the topic, it could have come across as a convincing analysis. That's what I was going for here.

  2. sottovoce says:

    Looks like I'm not the only one worried about problems at the coherence/meaning interface. Check out this excellent piece in Futurism, "CNET's Article-Writing AI Is Already Publishing Very Dumb Errors," by Jon Christian (who doesn't appear to be a chatbot). It seems that CNET has been publishing AI-written articles containing information that looked accurate but wasn't. CNET says the articles were reviewed by editors with subject-matter expertise, but if so, once CNET finishes reviewing everything its AI text generator produced, it may want to vet the credentials of some of those editors too.

    Here’s a quote from the story that goes pretty much to the heart of the concerns I expressed.

    “It’s a dumb error, and one that many financially literate people would have the common sense not to take at face value. But then again, the article is written at a level so basic that it would only really be of interest to those with extremely low information about personal finance in the first place, so it seems to run the risk of providing wildly unrealistic expectations — claiming you could earn $10,300 in a year on a $10,000 investment — to the exact readers who don’t know enough to be skeptical.”
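
    In case the sheer dumbness of the error isn't obvious, the arithmetic is trivial to check. Here's a rough sketch in Python (the 3% annual rate is my inference from the figures in the quote, since $10,000 × 1.03 = $10,300; the AI apparently reported the ending balance as the interest "earned"):

        # Sanity check of the figure CNET's AI flubbed.
        # Assumption: $10,000 deposited at 3% interest, compounded annually,
        # which is the scenario implied by the $10,300 number in the quote.
        principal = 10_000.00
        rate = 0.03

        balance = principal * (1 + rate)  # balance after one year
        earned = balance - principal      # interest actually earned

        print(f"Ending balance:  ${balance:,.2f}")  # $10,300.00
        print(f"Interest earned: ${earned:,.2f}")   # $300.00, not $10,300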

    Christian’s article ends on an appropriately chilling note:

    “[I]t’s not just AI that’s the issue here. It’s that AI is maturing at a moment when the journalism industry has already been hollowed out by a decades-long race to the bottom — a perfect storm for media bosses eager to cut funding for human writers.”

