Triopoly Agrees to AI Safeguards; Ad Tech Scrubs MFA Sites; Oregon Privacy Act Signed

AdMonsters Wrapper: The weekly ad tech news wrap-up
This Week
July 24, 2023
The Triopoly Agrees to AI Safeguards
ANA Study Leads Ad Tech MFA Site Crackdown
Oregon Governor Signs Opt-Out Privacy Laws
Around the Water Cooler
7 AI Companies Agree to Safeguards; the Triopoly Among Them
The News: The Biden Administration is pretty uneasy with AI for numerous reasons and has wrangled voluntary commitments from seven AI companies to put safeguards around the risks posed by generative AI, including algorithmic discrimination and disinformation.

“Companies developing these emerging technologies are responsible for ensuring their products are safe. To make the most of AI’s potential, the Biden-Harris Administration is encouraging this industry to uphold the highest standards to ensure that innovation doesn’t come at the expense of Americans’ rights and safety,” the White House said.

Safeguard Goals: The Biden administration is demanding safeguards for AI for several reasons, key among them being safety. We’ve all read about instances in which everyday people trusted AI to produce accurate, verified content and were harmed when it didn’t. The administration wants to ensure that AI systems are adequately tested before being made available to the public.

The administration is also concerned about the bias and discrimination that has plagued numerous AI systems to date — and rightly so. Safeguards are needed to ensure any AI that is developed and released to the public is used in an ethical manner.

Who Answered the Call: The Administration asked for voluntary cooperation, and seven of the largest AI companies agreed: Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.

The Commitment: According to the White House, all seven AI companies have agreed to:
  • Conduct internal and external security testing of their AI systems before release.
  • Share information on managing AI risks across the industry and with governments, civil society, and academia.
  • Invest in cybersecurity and insider-threat safeguards to protect proprietary and unreleased model weights.
  • Facilitate third-party discovery and reporting of vulnerabilities in their AI systems.
  • Develop robust technical mechanisms, such as watermarking, to ensure users know when content is AI generated.
  • Publicly report their AI systems' capabilities, limitations, and areas of appropriate and inappropriate use.
  • Prioritize research on the societal risks AI systems can pose, including avoiding harmful bias and discrimination and protecting privacy.
  • Develop and deploy advanced AI systems to help address society's greatest challenges, such as curing cancer.
The AI Bill of Rights: The Administration has also published a blueprint for an AI Bill of Rights to protect against algorithmic discrimination.
Why This Matters
Numerous generative AI technologies have been released to the public with very little effort to educate people on their limitations and risks. The public has been encouraged to put these tools to use in numerous ways, including writing articles, term papers, marketing materials, and, in at least one instance, a legal brief.

ChatGPT is better than search, we were told, even prompting Google to issue a Code Red for its search business. In securing voluntary commitments to report their AI systems' "capabilities, limitations, and areas of appropriate and inappropriate use," the Administration hopes to stop people from using these tools in ways that destroy their professional reputations or academic standing.

More concerning is the inherent bias that is built into AI. AI is only as good as its underlying data, and all too often, that data is highly biased against women and people of color. The result is algorithmic discrimination in healthcare, the criminal justice system, and hiring practices.

Earlier this year, President Biden signed an Executive Order directing federal agencies "to root out bias in the design and use of new technologies, including AI, and to protect the public from algorithmic discrimination."

Complying with this executive order will require AI companies to train their models on synthetic data, which in many applications is the only way to weed out bias. In the meantime, the Administration has advised every AI user to be aware of any lurking bias and to take steps to mitigate it.
ANA Study Leads Ad Tech Vendors to Scrub MFA Sites
Made-for-advertising (MFA) sites flew under the radar for some time, but thanks to a study from the Association of National Advertisers (ANA), they are now facing a much-needed crackdown.

Companies like Magnite, Sharethrough, and PubMatic are blocking MFA sites to keep them from profiting from alleged deceptive practices. MFAs thrived in the supply chain for years. However, that changed when the ANA launched an investigation and found sites riddled with disruptive user experiences, tactics that compromised content value, and security risks.

Ad Tech Vendors Vs. MFAs: Many ad tech vendors had the same idea as the ANA. On June 14, Sharethrough announced they would sweep MFAs from their off-the-shelf deals and custom PMPs. PubMatic followed suit. Magnite promised to turn off MFAs upon advertiser request in PMPs and curated versions of the open programmatic marketplace.

“We focused on PMPs initially as they’re somewhat of a safe place for us to disable this MFA supply because when we do it, the buyers will allow their spend to redistribute to better quality sites,” said Curt Larson, chief product officer at Sharethrough. “If we turn off MFA in our entire exchange — and this would be true for any exchange — all that happens is the spend moves to our competitors.”

This tactic may not solve the problem. Experts say that marketers and their ad tech partners need to be the ones who stop purchasing MFA inventory. Without their support, the efforts of companies like Sharethrough and PubMatic will be fruitless.
Why This Matters
With increased scrutiny on privacy, transparency, and creating a great user experience, the ad tech industry cannot afford to sit idly by as MFA sites wreak havoc on the ecosystem. Vendors are starting to scrub them, but is that enough? It also raises the question of how essential MFA sites have become to the ecosystem.

"MFA 'synthetic' inventory may have already become too important for the economic machine of DSPs and SSPs," said Tom Triscari, a programmatic consultant who worked with ANA. "What if their economics fall apart without the extra volume?"

A variety of ad tech vendors and brand-safety suppliers have worked to provide solutions to the problem. NewsGuard, the news and information credibility ratings platform, announced a new service that enables brand marketers and agencies to create inclusion lists of non-MFA sites or exclusion lists of MFA sites. The service identifies MFA publishers by applying the same taxonomy the ANA's analysts used to NewsGuard's database of news and information publishers.

"We were stunned by some of the data in the ANA report," said NewsGuard co-CEO Gordon Crovitz. ANA's study cited that 15% of the $88 billion advertisers spend on programmatic advertising is on MFA sites.

While the ecosystem is worried that removing MFAs without much thought will cause chaos in the supply chain, the industry is working to create alternative solutions.
Oregon Governor Signs Opt-Out Privacy Laws
In a groundbreaking move, Oregon Governor Tina Kotek signed a privacy bill, granting state residents the power to regain control over their online advertising fate. The law allows state residents to opt out of ad targeting based on their online activity.

The Oregon Consumer Privacy Act (SB 619) ensures that consumers have the right to know what personal information brands collect about them and who receives it. In addition, the language around personal data is relatively broad, covering identifiers such as cookies.

The law also requires companies to secure consumers' opt-in consent before processing precise location data, biometric data, and other potentially sensitive information, including details related to race, ethnicity, religion, health condition or diagnosis, sexual orientation, and immigration status.

Countdown to July 2024: Most of the provisions will go into effect a year from now. This calls for publishers and advertisers to brush up on the new state law!
Why This Matters
Oregon is the 12th state in the U.S. to enact a robust privacy law. Last month, Delaware passed a privacy bill (HB 154), but it is still awaiting the governor's signature.

While it's great that some states are pushing forward with privacy laws, the real question is, when are we getting a federal privacy law? The consensus within the ad tech industry is that we won't get one anytime soon.

President Biden has issued several calls to action for a federal privacy bill, but ad groups question whether government officials understand the full scope of the issue. Many believe a federal privacy law would only hinder their businesses.

"Through good-faith collaboration, we can codify important data protections for consumers while protecting valuable ad-supported content and services," said Bob Liodice, CEO of the Association of National Advertisers. "Our common goal should be to stop unreasonable and unexpected data practices while allowing beneficial practices that drive innovation, growth, and consumer benefit."

Oregon made a great move, but it's time for the federal government to catch up.
Around the Water Cooler
OpenAI Will Invest In Local Journalism For AI Testing OpenAI promises millions of dollars in funding to local news outlets, allowing them to explore and experiment with AI technologies. The goal is to bolster the use of AI in innovative approaches to news gathering, reporting, and audience engagement. (NiemanLab)

Generative AI Could Change Advertising Forever OpenAI’s ChatGPT opened the door for all the major online ad platforms to test generative AI in a push to provide personalization at scale. Still, there are issues with brand safety, copyright, and quality. That’s why we test. (CNBC)

Microsoft and Activision Blizzard Extend Acquisition Deadline The FTC failed to block Microsoft and Activision Blizzard's acquisition, but the companies are extending their deal deadline as a precaution. (Axios)

Apple Testing AI Tech to Rival Competitors Apple is testing an AI chatbot that insiders call Apple GPT. There is no news about the official release of the chatbot, but the rumor is that Apple will make an announcement next year. (TechCrunch)
Sweet Tweet
The Death of Twitter

Everyone still calls it the Tappan Zee, not the Cuomo.
Everyone still calls it the Triboro not the RFK
Everyone still calls it Heinz Field not “Acrisure Stadium”
It will always be Twitter. Never “X”

Worth a Listen
WARC Talks: Emotion in Advertising 
This week's WARC Talks is about emotion in advertising with Ian Forrester, founder and CEO of DAVID, and Lynette Poh, Head of Marketing Communications at Singtel. They discuss the power of AI and applying it to the optimization of creative and media strategies, and Singtel’s approach to the use and measurement of emotion in its advertising.

Stay up-to-date with the latest marketing and advertising news with our free daily newsletter.
