Generative AI Makes for Irresistible Headlines, but at What Cost to Publishers?

AdMonsters Wrapper: The weekly ad tech news wrap-up
This Week
June 07, 2023
Generative AI Makes for Irresistible Headlines, but at What Cost to Publishers?
Speaking of AI — When Does it Infringe on Publisher IP Rights?
Generative AI Makes for Irresistible Headlines, but at What Cost to Publishers?
The News: Last week, the world woke up to yet another dire warning about generative AI. Earlier in the week, the Center for AI Safety issued a brief statement that said:
 
“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”

News organizations leaped into action, publishing stories with headlines that would cause many Americans to lose sleep. Are we all going to die? Or is there something else going on?

A Different Perspective: A few days before the Center for AI Safety’s announcement, the Columbia Journalism Review (CJR) published an article that should be required reading for all editors and journalists covering AI. The piece systematically examines how the press has reported on generative AI since OpenAI announced ChatGPT.

CJR notes that the rollout of all new tech follows a specific “hype cycle.” It begins with the tech developer’s announcement. In OpenAI’s case, the statement was short and simple:
 
“We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.”

Once the new tech is announced, the press immediately takes sides. One half warns of grave consequences, while the other touts the tech’s ability to solve all earthly problems. Soon enough the coverage dies down, and the technology morphs into just another tool that changes or enhances the way we work or play.

But generative AI is bucking the traditional hype-cycle trajectory. Rather than moving on to other threats that will end the world or resolve all its problems, the hysteria around AI is amping up. Why is that?

Here CJR offers up an explanation: “The structural headwinds buffeting journalism—the collapse of advertising revenue, shrinking editorial budgets, smaller newsrooms, demand for SEO traffic—help explain the ‘breathless’ coverage—and a broader sense of chasing content for web traffic.” In other words, mainstream publications are succumbing to clickbait in order to get traffic.

The Real Threats of AI: Getting lost in the human-extinction hysteria are the real threats of AI, as NBC’s Jake Ward explains. According to Ward, the scary things the Center for AI Safety warns us about are highly theoretical. In reality, the technology is nowhere close to having the ability to enslave humans or wipe us off the face of the planet.

But, as Ward makes clear, current AI poses significant threats to real people. We’ve known for years that critical AI applications discriminate against communities because they were trained on highly biased data. That bias is then built into algorithms that decide really important things, like who is called for a job interview, who is offered a mortgage, and who is arrested for crimes they didn’t commit.

Arvind Narayanan, a computer scientist at Princeton University, echoes Ward’s concerns, telling BBC News, “Sci-fi-like disaster scenarios are unrealistic. Current AI is nowhere near capable enough for these risks to materialize. As a result, it has distracted attention away from the near-term harms of AI.”

Interestingly, in its discussion of AI risks on its website, the Center for AI Safety is silent on the issues of discrimination, bias, and their impact on minority and marginalized communities.

That’s not to say that generative AI is wholly benign. Real people and real news organizations have been burned by ChatGPT. Take the lawyer who used AI to prepare a court filing, citing cases and precedents that ChatGPT hallucinated. He’s now awaiting a sanctions hearing that will determine his fate.

And ChatGPT has led more than one publication astray, luring them into publishing articles that contained false information and diminished their credibility. In some cases, generative AI will even tell people to do dangerous things. Take Tessa, the AI-infused chatbot launched by a hotline for people with eating disorders: Tessa advised callers to count calories and told them it was safe to lose one to two pounds per week by severely restricting what they eat.
Why This Matters

At a time when trust in media organizations is at an all-time low, bringing ChatGPT into the newsroom is a decision to which publishers must give considerable thought. As Emily Bell explains in the Guardian, ChatGPT could be disastrous for truth in journalism.

It can also be disastrous for the publisher’s bottom line (and jeopardize advertisers). Publishers that rely too heavily on generative AI to create content will give readers little incentive to visit or return.

And the continued hype about ChatGPT being better than search hurts publishers, advertisers, and everyday people alike. For publishers, it means a source of traffic is cut off. Advertisers will find it much more challenging to get in front of new customers. And people who opt for ChatGPT over search will be presented with responses that may have little grounding in real-world facts (see ChatGPT’s disclaimer below the prompt bar). Search, on the other hand, will lead them to real sites, many of which have rigorous journalistic standards in place.

Generative AI will also affect publishers’ monetization models as the cookie goes away and advertisers rely more heavily on contextual targeting. Writing in The Drum, Seedtag’s UK and Netherlands country manager, Paul Thompson, notes that “AI is able to leverage a plethora of information for use within campaigns, including highly accurate predictive assumptions especially when it comes to things like demographics and consumption patterns.”

While this sounds intriguing, let’s not forget that within the US, that “plethora of information” is likely to include highly biased data, the kind that leads to discrimination at scale.

As Elizabeth Renieris, a senior research associate at Oxford’s Institute for Ethics in AI, told BBC News, “Advancements in AI will magnify the scale of automated decision-making that is biased, discriminatory, exclusionary or otherwise unfair while also being inscrutable and incontestable.”

Renieris’ concern is one that the FTC shares. Two summers ago, the Commission put the market on notice, warning advertisers that it will fine anyone who uses algorithms that discriminate. Say you’re a financial services company offering low-interest mortgages and you want to target wealthy Americans. Choosing the wrong algorithm may lead your campaign to discriminate against communities of color, landing you in hot water with the regulators, and not unjustifiably so.

There are a lot of pros and cons to generative AI, and news organizations can certainly use it to boost productivity and enhance their products with new functionality in order to win new subscribers (e.g., BuzzFeed’s chatbot Botatouille). However, guardrails are required to ensure it leads people to accurate and unbiased insights and conclusions. Those guardrails are coming. Want proof? Google “prompt engineering jobs.”

And ChatGPT is being tested in novel ways in the sector. For instance, the news app Artifact will deploy AI to automatically rewrite an article’s clickbait headline if enough readers flag it as such. It will be interesting to see how crowdsourcing that function turns out.
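For a sense of how that feature might work under the hood, here’s a minimal sketch of a crowdsourced de-clickbaiting loop. Artifact hasn’t published its implementation, so the flag threshold, the Article model, and the stubbed llm_rewrite helper below are assumptions for illustration, not the app’s actual code.

```python
# Hypothetical sketch of Artifact-style crowdsourced headline rewriting.
# The threshold, data model, and stubbed LLM call are assumptions,
# not Artifact's actual implementation.
from dataclasses import dataclass

FLAG_THRESHOLD = 25  # assumed number of reader flags that triggers a rewrite

@dataclass
class Article:
    headline: str
    body: str
    clickbait_flags: int = 0
    rewritten: bool = False

def llm_rewrite(headline: str, body: str) -> str:
    """Placeholder for a language-model call that rewrites the headline
    to state plainly what the article reports; stubbed for this sketch."""
    return f"[rewritten] {headline}"

def flag_as_clickbait(article: Article) -> None:
    """Record one reader flag; rewrite the headline once enough readers agree."""
    article.clickbait_flags += 1
    if not article.rewritten and article.clickbait_flags >= FLAG_THRESHOLD:
        article.headline = llm_rewrite(article.headline, article.body)
        article.rewritten = True  # rewrite at most once per article
```

The threshold is the interesting design choice: no single reader can trigger a rewrite, so the AI only steps in once the crowd has effectively voted a headline down.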

One Last Thing: All of the “breathless” articles and news clips about the Center for AI Safety’s warning point out the roster of its signatories, which is a veritable Who’s Who of the AI tech and research space. If the existential threats posed by AI are being overblown by tech leaders, as Ward, Narayanan, and Renieris contend, one wonders why they’re trashing their own wares. We can only speculate.

In WIRED, Timnit Gebru reminds us that disgraced crypto entrepreneur Sam Bankman-Fried, along with Elon Musk, Peter Thiel, and others, is a major funder of the Center for Effective Altruism, which, as it happens, declared AI to be the “number one risk facing humanity.” But as Gebru writes, “All of this money has shaped the field of AI and its priorities in ways that harm people in marginalized groups while purporting to work on ‘beneficial artificial general intelligence’ that will bring techno-utopia for humanity.” In other words, human-extinction warnings are a distraction.

Others, like Eschaton, have a more cynical take on the Center for AI Safety’s announcement: “Quite impressed the SV guys are going with, ‘AI will change the world and be worth trillions, also it might destroy the world, also you must give us more money and protect us from competition so we can make sure it doesn’t destroy the world.’”

CNET reporter Nina Raemont echoed Eschaton’s theory, minus the snark: “There is also the potential that some of these tech leaders are requesting a halt on their competitors' products so that they can have time to build an AI product of their own.”

Speaking of AI — When Does it Infringe on Publisher IP Rights?
A few months back, the News/Media Alliance released a set of AI principles designed to help news organizations navigate generative AI. A key concern is that generative AI (GAI) systems’ unlicensed use of content created by media companies and journalists is an intellectual property infringement: GAI systems are using proprietary content without permission.

The Alliance said that any company that develops or deploys generative AI should be required to negotiate with publishers for the right to use their content when they:
  • Train and test their models on a publisher’s content
  • Use a publisher’s content in response to user inputs
  • Synthesize summaries, explanations, analyses, etc. of source content in response to a query
Recently, Axios reported that news executives are beginning to think about those principles and how publishers might go about negotiating for compensation. The implication is that if publishers don’t take the lead, no one will, as regulators aren’t likely to step up to the task.

News Corp CEO Robert Thomson told Axios, “Unless at the front end you define what the principles are, you are on the digital defensive.”
Why This Matters
Content is a major investment for publishers, and they need to be compensated when others use it. Traditionally, we’ve thought of content use in terms of readers consuming it and researchers citing it. Now, however, big corporations like Microsoft and Google are using publisher content to train their large language models and develop new revenue streams. Publishers deserve the same opportunity.

Publishers are already on the digital defensive, however, as OpenAI won’t disclose the sources that make up its training data as a matter of policy. At present, publishers can only “suspect” when their content is being used as raw material for another company’s products.
Sweet Tweet
SPO Is Real
"SPO is real. Publishers underestimate the power of their own Direct Seller seats- They're worth so much more than a reseller seat." @tompachys at #OpsNY @aqkraft #adstxt
Worth a Listen
Podcasting’s Biggest Opportunity
In a few weeks, Sounds Profitable will release its second major study of 2023, The Podcast Opportunity: Buyer Perceptions of Podcast Advertising. For the project, they combined qualitative and quantitative insights from approximately 300 brand and agency buyers across the US to learn why they do or do not buy podcasts, and what their perceptions are of podcasting as an advertising vehicle. One finding is already clear: a main reason agencies and media buyers choose not to buy podcasts is lack of client demand.
 
 
