Newsrooms Beware: ChatGPT Lies. Its Apologies Are Lame but Insightful

It has been a tough week for me and my relationship with ChatGPT. It told me numerous lies, and maligned a number of people who don’t deserve such treatment. That’s not okay.

I’m not one of those researchers, like David Rozado, who is testing ChatGPT’s guardrails. I’m not looking for ways to prompt the chatbot to say unsavory things. But it said them anyway, and it did so as I used it to do research for a handful of client projects.

I’ve read the hype; ChatGPT is supposed to be better than Google Search, right? Why go through the bother of searching, clicking on a bunch of articles, comparing multiple sources, and synthesizing what I read when ChatGPT can do that for me in one simple step?

If you only knew the kind of bot ChatGPT is. Here are just some of the lies that ChatGPT told me in a two-day span of honest usage on my part:

  • “In 2018, a BuzzFeed reporter tricked a chatbot created by JetBlue into saying that the airline charged a fee for oxygen masks on its flights. The chatbot had been programmed to provide information on JetBlue’s services, but the reporter was able to manipulate the conversation to misrepresent the airline.”
  • “In 2019, a group of researchers from Stanford University and Cornell University tricked an AI chatbot created by Facebook into endorsing a fake political candidate. The researchers were able to manipulate the conversation to make the chatbot appear to support the candidate, despite the fact that the chatbot had been programmed to remain politically neutral.”
  • “One example of sonic branding causing offense was the 2019 introduction of a new theme song for the Canadian Broadcasting Corporation (CBC) morning show. The new theme song was criticized by some viewers for its lack of diversity and inclusion. Specifically, the song was criticized for being too Eurocentric and not reflective of Canada’s multicultural population. In response to the backlash, the CBC apologized and announced that it would commission a new theme song that better reflects Canada’s diversity.”
  • “Another example of offensive sonic branding was the use of a traditional Maori war chant in an advertisement for a rugby match in 2018.” This is only partly true; the “war chant” is actually the Haka, and no one considers it offensive. All schoolchildren in New Zealand learn it.

And then there was ChatGPT’s claim that Caper’s AI-powered shopping cart spoke with a thick Russian accent (see graphic).

I’m not the only person ChatGPT is lying to. Last month the Guardian reported that ChatGPT was making up fake Guardian stories, and NBC New York described its reporter’s experience using the generative AI tool to research a story on Michael Bloomberg. According to NBC’s I-Team, “ChatGPT seemed to make up entirely fabricated quotes from phony anonymous sources.”

The fake quotes were pretty damning. One said: “‘It’s not about giving back, it’s about buying influence,’ says a political commentator who asked not to be named. ‘Bloomberg is using his wealth to advance his own agenda and promote himself as a leader on the national stage. It’s a classic case of wealth talking, and the rest of us being ignored.’”

This goes beyond generative AI chatbots providing inaccurate information, as was the case with CNET. The responses I received from ChatGPT maligned publications, brands and a university. And it presented these lies as facts.

Now, I recognize that by calling ChatGPT a liar I am anthropomorphizing a computer program. It’s not a real human, although we’re encouraged to think of it as one. There is one respect, however, where anthropomorphizing is apt: If someone has a habit of lying, we trust nothing that person has to say. Even if 90% of their claims were to be true, the uncertainty surrounding the remaining 10% makes us suspicious of everything that comes out of their mouth. The same is true for ChatGPT.

The Apologies

As a writer I naturally want to verify the “facts” that ChatGPT tells me, but when I couldn’t find a single reference to the BuzzFeed reporter who tricked JetBlue’s chatbot, I flat-out asked ChatGPT what was up. Here is the response I received:

“I apologize for any confusion. As an AI language model, I generate responses based on patterns and information that I have been trained on. However, I do not have access to external sources or the ability to browse the internet to confirm the accuracy of my responses.”

In other words, ChatGPT isn’t programmed to distinguish truth from fiction; if enough people malign Bloomberg on the Internet, then its algorithms treat such bashing as legitimate, and therefore worth serving up as fact in response to a query. One wonders why ChatGPT doesn’t say as much when it serves up its fictitious responses. Is it even able to do so?

ChatGPT, like all AI, is trained on data. As a large language model, ChatGPT must be trained on a large dataset, and it happened to be trained on the Internet. That dataset is vast, covering genuine reporting, conspiracy theories and a range of hateful content. For this reason, we can’t take anything ChatGPT has to say at face value.

The other issue with ChatGPT is that its answers are rooted in the past. Here’s another apology I’ve received a number of times:

“While I don’t have access to real-time data or specific studies beyond my September 2021 knowledge cutoff, I can provide you with some general information on this topic.”

Given this cutoff date, it’s not surprising that it provides incorrect information about things like mortgage rates and economic conditions.

Implications of ChatGPT for Media Organizations

I don’t want to give the impression that I think ChatGPT sucks. I actually love it, and I even subscribe to the paid version. Its uses are powerful, but they are much more limited than the hype would lead one to believe.

It’s a great tool to use when you’re exhausted and you realize that a paragraph you drafted a half hour ago is a mess. It will do a pretty good job of cleaning up your sentences. But again, because its responses are predictions of what people might say based on what people have said in the past, you should expect to rewrite most of what it gives you. Still, it’s a motivator in an odd way.

Given that ChatGPT can’t verify facts, and can’t distinguish between what is true and what someone “could” say, is it appropriate to bring it into the newsroom? Should Microsoft and Google prioritize generative AI over their search functions? 

I personally don’t think it’s a smart move to circumvent search engines that lead to legitimate publisher sites, whose stories are vetted and held to journalistic standards, in favor of crisp responses that are as likely to be false as they are true.

We seem to be rushing down the generative AI path without considering some serious implications, such as the inevitable proliferation of fake news and the loss of traffic and revenue for legitimate publishers. I would encourage every newsroom to think twice about using it.