Some Thoughts on Chatbots, Coherence, and Fake News
One of the things that makes a good reporter good is the ability to discern meaningful patterns in a wide range of available data. The key word there being *meaningful*.
I would expect chatbots to be great at pattern recognition. That’s how they string stuff together. And I would expect people to be favorably disposed toward chatbot outputs that present what appear to be meaningful patterns. It’s easy to string meaningless things together coherently (“Colorless green ideas sleep furiously”), but the problem is that it’s just as easy to string seemingly meaningful things together coherently (“Steel melts at a higher temperature than jet fuel burns”).
The problem with fake news isn’t with the content, but with the coherence. It’s been argued that pattern recognition is Homo sapiens’ evolutionary advantage. That’s what’s at play here. We are predisposed to look for coherence; the problem is that when we find it, we are also predisposed to trust it.
The breakdown occurs when we conflate coherence with meaning. When you have people who are fixated on coherence, and all you give them is stuff that looks meaningful just because it’s coherent, then they’re going to think they have found something meaningful. And this is why people believe fake news is real and real news is fake.
We are literally wired for it. We find comfort in patterns and discomfort in discordance — regardless of whether or not the patterns have any factual grounding — because those reactions are biologically programmed into us as survival mechanisms.
So when you have chatbots that can give us output that accords perfectly with grammatical algorithms but means nothing (i.e., is literally factually incorrect), how do you know?
How do you know if you can trust it?
Difficult questions. I would say that the newer chatbots nearly always produce *meaningful* output—that is, words that, for a human reader, though not for the computer, signify something. Whether they are reliably true is a different issue.
Maybe I’m using the wrong word, or using the word wrong. Maybe “accurate” or “correct” would be a better choice than “meaningful.”
A few days ago on Facebook I saw a repost of something that an English Lit teacher (I think) wrote about her first encounter with a student paper that had been generated by ChatGPT. I'm paraphrasing, but what tipped her off was that it was coherent yet ultimately meaningless to anyone knowledgeable about the author under discussion — it was essentially double-speak and puffery. But to someone less well-versed in the topic, it could have come across as a convincing analysis. That's what I was going for here.
Looks like I'm not the only one worried about problems at the coherence/meaning interface. Check out this excellent piece in Futurism, "CNET's Article-Writing AI Is Already Publishing Very Dumb Errors," by Jon Christian (who doesn't appear to be a chatbot). It seems that CNET has been publishing articles written by an AI that included information that looked accurate but wasn't. CNET says the articles were reviewed by editors with subject-matter expertise, but judging from the results, once CNET finishes reviewing all the articles generated by its AI text generator, it may want to vet the credentials of some of those editors too.
Here’s a quote from the story that goes pretty much to the heart of the concerns I expressed.
Christian’s article ends on an appropriately chilling note: