Generative AI Makes for Irresistible Headlines, but at What Cost to Publishers?
The News: Last week, the world woke up to yet another dire warning about generative AI. Earlier in the week, the Center for AI Safety issued a brief statement that said: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”
News organizations leaped into action, publishing stories with headlines that would cause many Americans to lose sleep. Are we all going to die? Or is there something else going on?

A Different Perspective: A few days before the Center for AI Safety's announcement, the Columbia Journalism Review (CJR) published an article that should be required reading for every editor and journalist covering AI. The piece systematically examines how the press has reported on generative AI since OpenAI announced ChatGPT. CJR notes that the rollout of all new tech follows a specific "hype cycle." It begins with the tech developer's announcement. In OpenAI's case, the statement was short and simple: "We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests."
Once the new tech is announced, the press immediately takes sides. One half warns of grave consequences, while the other touts the tech's ability to solve all earthly problems. Soon enough the coverage dies down, and the technology morphs into just another tool that changes or enhances the way we work or play.

But generative AI is bucking the traditional hype-cycle trajectory. Rather than moving on to other threats that will end the world or resolve all its problems, the hysteria around AI is amping up. Why is that? Here CJR offers an explanation: "The structural headwinds buffeting journalism—the collapse of advertising revenue, shrinking editorial budgets, smaller newsrooms, demand for SEO traffic—help explain the 'breathless' coverage—and a broader sense of chasing content for web traffic." In other words, mainstream publications are succumbing to clickbait in order to get traffic.

The Real Threats of AI: Getting lost in the human-extinction hysteria are the real threats of AI, as NBC's Jake Ward explains. According to Ward, the scary things the Center for AI Safety warns us about are highly theoretical. In reality, the technology is nowhere close to having the ability to enslave humans or wipe us off the face of the planet. But, as Ward makes clear, current AI poses significant threats to real people. We've known for years that critical AI applications discriminate against communities because they were trained on highly biased data. That bias is then built into algorithms that make high-stakes decisions, such as who is called for a job interview, who is offered a mortgage, and who is arrested for crimes they didn't commit. Arvind Narayanan, a computer scientist at Princeton University, echoes Ward's concerns, telling BBC News, "Sci-fi-like disaster scenarios are unrealistic. Current AI is nowhere near capable enough for these risks to materialize. As a result, it has distracted attention away from the near-term harms of AI."
Interestingly, in its discussion of AI risks on its website, the Center for AI Safety is silent on the issues of discrimination, bias, and their impact on minority and marginalized communities.

That's not to say that generative AI is wholly benign. Real people and real news organizations have been burned by ChatGPT. Take the lawyer who used AI to prepare a court filing, citing cases and precedents that ChatGPT hallucinated. He's now awaiting a sanctions hearing that will determine his fate. And ChatGPT has led more than one publication astray, luring them to publish articles that contained false information and diminished their credibility. In some cases, generative AI tells people to do dangerous things. Consider Tessa, the AI-infused chatbot launched by a hotline for people with eating disorders: it advised callers to count calories and told them it was safe to lose one to two pounds per week by severely restricting what they eat.
At a time when trust in media organizations is at an all-time low, bringing ChatGPT into the newsroom is a decision to which publishers must give considerable thought. As Emily Bell explains in the Guardian, ChatGPT could be disastrous to truth in journalism.

Speaking of AI — When Does It Infringe on Publisher IP Rights?
A few months back, the News/Media Alliance released a set of AI principles designed to help news organizations navigate generative AI. A key concern is that the unlicensed use of content created by media companies and journalists amounts to intellectual property infringement: GAI systems are using proprietary content without permission. The Alliance said that any company that develops or deploys generative AI should be required to negotiate with publishers for the right to use their content.
News Corp. CEO Robert Thomson told Axios, "Unless at the front end you define what the principles are, you are on the digital defensive."
Content is a major investment for publishers, and they need to be compensated when it's used by others. Traditionally, we've considered content use in terms of readers consuming it and researchers citing it. Now, however, big corporations like Microsoft and Google are using publisher content to train their large language models so they can develop new revenue streams. Publishers deserve the same opportunity. They are already on the digital defensive, however, because OpenAI, as a matter of policy, won't disclose the sources that make up its training data. At present, publishers can only "suspect" when their content is used as raw material for another company's products.
|