One of the things that makes a good reporter good is their ability to discern meaningful patterns in a wide range of available data. The key word there is meaningful.
I would expect chatbots to be great at pattern recognition. That’s how they string stuff together. And I would expect people to be favorably disposed toward chatbot outputs that present what appear to be meaningful patterns. It’s easy to string meaningless things together coherently (“Colorless green ideas sleep furiously”), but the problem is that it’s just as easy to string seemingly meaningful things together coherently (“Steel melts at a higher temperature than jet fuel burns,” a technically true statement commonly deployed to imply a false conclusion).
The problem isn’t with the content, but with the coherence. It’s been argued that pattern recognition is Homo sapiens’ evolutionary advantage. That’s what’s at play here. We are predisposed to look for coherence; the problem is that when we find it, we are also predisposed to trust it.
The breakdown occurs when we conflate coherence with meaning. Give people who are primed to seek coherence material that looks meaningful only because it is coherent, and they will conclude they have found something meaningful. And this is why people believe fake news is real and real news is fake.
We are wired for it. We find comfort in patterns and discomfort in discordance, regardless of whether the patterns have any factual grounding, because those reactions are biologically programmed into us as survival mechanisms.
So when chatbots can hand us output that is perfectly grammatical and coherent but factually wrong, how do you spot it?
How do you know if you can trust it?