Creators Lose Ground to Big AI; Google Updates AdSense Rev Model; Did Brand Safety Kill Jezebel?

AdMonsters Wrapper: The weekly ad tech news wrap-up
This Week
November 13, 2023
Content Creators V. AI
The Accuracy of Generative AI
Should Employees Keep Political Opinions off Social Media?
Meet Us at the Water Cooler
Creators Continue To Lose Ground to Big AI
The actors’ strike has settled, but not without some last-minute hiccups that almost derailed the deal. Those details are important to anyone involved in content creation.

The Alliance of Motion Picture and Television Producers’ “final offer” contained clauses about AI that the actors found intolerable, such as using AI-generated body scans for Schedule F performers (i.e., performers paid above the union’s daily and weekly minimums).

A Raw Deal for Creators: Content creators, whether actors, writers, or journalists, worry that they’re getting a raw deal when their industries deploy generative AI in new ways.

In August, background actors told reporters that Hollywood studios started creating AI scans of them, without making it clear how those scans would be used. Their union wanted ironclad veto power over “digital doubles,” and minimum pay rates when AI is used to create an actor’s digital likeness.

According to the Hollywood Reporter, Hollywood studios, Google, OpenAI, and other tech companies have joined forces to find new ways to use AI, which has creators asking: How are you training that AI? Is my content included in that training, and if so, how am I compensated?

Digital publishers are pressing Big Tech with the same questions. This past summer, news broke that the New York Times was in tense negotiations with OpenAI over compensation for incorporating its copyrighted content into ChatGPT.

That same concern is playing out with Google’s Search Generative Experience (SGE), which allows users to search the web using generative AI. The search results include AI-generated summaries, which appear at the top. Users looking for a quick answer have no need to navigate to the site from which Google generated its summary, resulting in a loss of traffic and ad revenue for the publisher.

Publishers can tell Google not to use their content when generating those summaries, but according to Reuters, “they must use the same tool that would also prevent them from appearing in Google search results, rendering them virtually invisible on the web.”
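For context, both of those controls live in a site's robots.txt file. A minimal sketch of the trade-off, using Google's published crawler tokens (the directives are real; how far each token reaches into SGE is our reading of the reporting, so treat the comments as assumptions):

```
# Google-Extended opts a site's content out of training Bard and
# Vertex AI models, without affecting how the site appears in Search:
User-agent: Google-Extended
Disallow: /

# Blocking Googlebot itself is the blunter tool Reuters describes:
# it keeps content out of Search-derived features like SGE summaries,
# but it also removes the site from Google search results entirely.
# User-agent: Googlebot
# Disallow: /
```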

This past Halloween, the News/Media Alliance published a white paper, “How the Pervasive Copying of Expressive Works to Train and Fuel Generative Artificial Intelligence Systems Is Copyright Infringement and Not a Fair Use.” According to the analysis, generative AI companies “disproportionately use online news, magazine, and digital media to train their GAI models” and benefit from their high quality. Those AI tools also copy content when generating outputs, which then directly compete with the content that was used to train them.

The News/Media Alliance asked the US Copyright Office to clarify “publicly that use of publishers’ expressive content for commercial generative AI training and development is likely to compete with and harm publisher businesses, which is disfavored as a fair use,” among other things.
Why This Matters
Publishers and all content creators have a right to be worried. There is very little transparency into how their content is used in model training, and the risk that generative AI users will inadvertently plagiarize their work is high.

In Hollywood, the Motion Picture Association, Meta, and OpenAI are telling creators not to worry because they claim that existing intellectual property laws offer sufficient protection, but that assurance has failed to assuage their concerns.

While the Biden Administration has been seeking commitments from AI companies and issued the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (Federal AI EO), these initiatives do not protect publisher content from being used to train AI models. The closest the order comes to addressing the issue is a requirement that AI developers provide the federal government with details on how they test and train their models.

It’s easy to see why content creators across a spectrum of industries are worried that they’re losing ground to Big AI, especially as the technology develops faster than the laws to protect their intellectual property and rights.
AI's Accuracy Isn’t Great, but It's a Profitable Investment
According to researchers, generative AI tools such as ChatGPT and Google Bard hallucinate — or make up answers in their responses — far more often than their users realize. Just how much?

That’s the question a new startup called Vectara is trying to answer. Even in scenarios designed to minimize or eliminate hallucinations, the chatbots still make things up; for some models, as many as 27% of AI-generated responses are hallucinations.
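Vectara's approach, broadly, is to hand a chatbot a short source document, ask for a summary, and then score whether the summary is actually supported by the source. A minimal sketch of that scoring step, using an off-the-shelf natural-language-inference cross-encoder rather than Vectara's own evaluation model (the checkpoint name and label order are assumptions based on public sentence-transformers models):

```python
# Score whether a model-written summary is entailed by its source.
from sentence_transformers import CrossEncoder

# Off-the-shelf NLI cross-encoder; outputs one logit per label.
nli = CrossEncoder("cross-encoder/nli-deberta-v3-base")

source = ("The company reported quarterly revenue of $2.1 billion, "
          "up 4% year over year.")
summary = "The company reported quarterly revenue of $3 billion."  # wrong figure

scores = nli.predict([(source, summary)])            # shape: (1, 3)
labels = ["contradiction", "entailment", "neutral"]  # assumed label order
verdict = labels[scores[0].argmax()]
print(verdict)  # anything but "entailment" flags a likely hallucination
```

Run across a large batch of documents, the share of summaries that fail this check becomes the kind of hallucination rate cited above.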

Hallucinations are part and parcel of using generative AI, although we heard little about them when ChatGPT burst onto the market a year ago. Since then, we’ve seen a steady stream of people and organizations suffer embarrassment and reputational damage after trusting the responses generative AI created.

Despite these obvious concerns, it’s full speed ahead for the generative AI investors. Amazon, Google, and Microsoft have invested billions in AI companies while giving them sweetheart deals to use their cloud platforms to run their businesses. AI startups Anthropic and OpenAI alone have received $20 billion from these companies.

But, as the Wall Street Journal points out, cloud services are one of the biggest expenses of AI, so in a sense, Amazon, Google, and Microsoft are investing in their own future customers.
Why This Matters
For those who were either burned by AI hallucinations or simply followed the stories in the press, generative AI is a tool to be approached with caution. It can be helpful in certain situations, such as helping users find specific sources. Perplexity.ai, for instance, is a generative AI search tool that summarizes answers to questions but also provides direct links to the sources those answers came from, so users can verify that the answers were not hallucinated and cite the original creator.

So, while caution is warranted with AI, one wonders whether caution will remain the dominant message as Big Tech invests tens of billions of dollars in the technology, in part to support its cloud computing businesses.
Employees, Keep Your Political Opinions off Social Media
Fashionista reported that Hearst has a new policy that applies to its employees’ personal social media accounts: keep your political opinions to yourself. The ban applies to all employees, including journalists.

According to the Hearst Magazines Media Union, which broke the news, any post about a candidate or political opinion must be reviewed by a supervisor first. Violators can be fired or otherwise disciplined. Worse, the policy also encourages employees to report coworkers whose posts feel too “inflammatory.”
Why This Matters
The consequences of posting political opinions are getting more severe, with editors and employees losing their jobs, and students seeing job offers rescinded.

The First Amendment does not protect individuals from losing their jobs for expressing their opinions, as it only prevents the government from passing laws that limit free speech. Employers, like Hearst, are free to impose regulations on what their employees say, and fire or otherwise discipline them for posts they create or even share.

While there are well-known hot-button issues at the moment, the policy raises the stakes for employees who are ardently opposed to specific candidates or policies. To some, election denialism is “inflammatory”; to others, the opposite is true.

Defining what’s inflammatory and what’s not may dominate discussions as we head into the 2024 elections.
Around the Water Cooler
Here's what else you need to know...

Introducing Latimer, the Black GPT. AI, including generative AI, has been notoriously biased against people of color. Latimer seeks to address this bias by creating a more inclusive environment with its large language model. According to People of Color in Tech, “The LLM provides users with more accurate details, reflecting the experience, culture, and history of Black and Brown people.” Its training sets include books, oral histories, and local community archives. Latimer isn’t publicly available yet, but people interested in trying it once it is can “reserve a spot” by visiting latimer.ai.

In non-AI News, Google Updates AdSense Revenue Model. After 20 years, Google updated its AdSense revenue-sharing model, paying per impression rather than by click. AdSense publishers will see the change in their payouts beginning next year. “These changes will provide a consistent way for publishers to compare the differing fees across the various technologies they use to monetize and will provide even greater transparency into the media-buying process,” Dan Taylor, Google’s VP of Global Ads wrote in a blog post announcing the update.

He notes that publishers shouldn’t see any changes in their AdSense earnings. Under the new structure, AdSense keeps a flat 20% sell-side fee, so publishers receive 80% of the revenue that reaches AdSense after the buy-side platform takes its cut; when Google Ads is the buyer, its fee averages about 15%, which works out to publishers keeping roughly 68% of advertiser spend, in line with their historical share.
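The arithmetic behind those percentages is straightforward; a quick sketch using the averages from Google's announcement:

```python
# Publisher take-home under the updated AdSense revenue share.
advertiser_spend = 1.00
buy_side_fee = 0.15   # Google Ads keeps ~15% of spend on average
sell_side_fee = 0.20  # AdSense's new flat 20% sell-side fee

publisher_take = advertiser_spend * (1 - buy_side_fee) * (1 - sell_side_fee)
print(f"Publisher receives {publisher_take:.0%} of advertiser spend")  # 68%
```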

Does YouTube's Ad Blocking Crackdown Violate Privacy Laws? Privacy advocates in the European Union are betting that government regulations can put a stop to YouTube's ad-blocking crackdown. Privacy expert Alexander Hanff filed a complaint with the Irish Data Protection Commission arguing that YouTube’s ad blocker detection system violates privacy and is illegal under EU law. While Google denies these allegations, many users and privacy advocates oppose YouTube's global efforts to reduce ad blocking.

Did Brand Safety Kill Jezebel? Sometimes it doesn't seem to matter that media brands have a loyal following or are doing important work. “The closure of Jezebel also underscores fundamental flaws in the ad-supported media model where concerns about ‘brand safety’ limit monetizing content about the biggest, most important stories of the day—stories that create huge traffic because people read and share them,” Jezebel staff said in a statement from its union, the Writers Guild of America.

Could it be that brands like Jezebel don't have a place in today's open web? Brian Morrissey, in his latest issue of The Rebooting, writes: "The economic value of text content is in decline, particularly in service of an ad model. Having a sharp edge isn’t great in the best of times for attracting advertisers, who often reward what I call an ‘eggplant premium’ on bland, plausible brands that take on whatever flavors ladle on top." The old playbooks aren't working, he adds, but the door is open to newcomers with new models.

Is Anyone Ready for the Privacy Sandbox? AdMonsters held two publisher Identity and Addressability roundtables in the past two months, and most publisher talk about readying for the cookie cutoff focuses on first-party data and contextual and intent strategies, with a smattering of ID solutions thrown in. So it came as no surprise to us to hear that was the sentiment at the recent Prebid Summit.

Most notably, much noise was made about the lack of interoperability between Prebid and the Privacy Sandbox. What we're hearing on the ground at Publisher Forums is that there are just too many things to test and just too few resources to test them. This is why the future of the advertising ecosystem depends on interoperability and standardization. We probably won't see that in 2024.
One X Post
New: Why did Jezebel close, and what does that mean for women's media?

For Medialyte, I looked at why publishers will view it as a cautionary tale, the issue with its business model and what former staff should do now.

Plus: Could it still be sold?
Worth a Listen
Barack Obama on AI, Free Speech, and the Future of the Internet
The former president joined Nilay Patel on Decoder to discuss AI regulation, the First Amendment, and of course, what apps he has on his home screen. They also dug into how to think about democracy as AI and social networks collide.
Upcoming AdMonsters Events

