UK Online Safety Act Goes Live; Another Round of Media Layoffs Is Coming

AdMonsters Wrapper: The weekly ad tech news wrap up
This Week
November 20, 2023
UK Online Safety Act
Another Round of Media Layoffs
Meet Us at the Water Cooler
UK Online Safety Act: An Imperfect Balance of Regulation, Privacy, and Free Expression
This past October, the UK Online Safety Act went into effect, and Ofcom, the UK communications regulator, has laid out new rules for enforcing it. The law attempts to regulate online speech to protect children and adults when they engage with social media or any other place on the internet that supports user-generated content. Ofcom says the law will apply to about 100,000 sites.

Online harassment and toxic content have been a scourge of the internet since the very beginning, and the UK is no exception. According to the Alan Turing Institute, 30% to 40% of UK citizens have been victims of online abuse.

Duty of Care: To address the challenge, the Act imposes a "duty of care" on any platform or service that's used or accessed by UK citizens. Duty of care means that media and social media companies will be held responsible for any harmful content on their sites. "If they do not act rapidly to prevent and remove illegal content and stop children from seeing harmful material, such as bullying, they will face significant fines that could reach billions of pounds. In some cases, their bosses may even face prison," the government states.

The duty of care requires the 100,000 targeted sites to:

• Remove illegal content quickly or prevent it from appearing in the first place, including content promoting self-harm.
• Prevent children from accessing harmful and age-inappropriate content.
• Enforce age limits and age-checking measures.
• Ensure the risks and dangers posed to children on the largest social media platforms are more transparent, including by publishing risk assessments.
• Provide parents and children with clear and accessible ways to report problems online when they arise.

Protections for Adults. The Act includes additional protections for adults who don't want to encounter legal but objectionable online content. For instance, platforms must offer all users, regardless of age, the option to filter out such content. It also places a legal responsibility on platforms to enforce their own terms and conditions. All platforms have anti-harassment rules in their codes of conduct, but they rarely enforce them; the Online Safety Act seeks to address that gap.

Steep Fines for Non-Compliance. Ofcom will hit platforms with steep fines if they fail to remove reported content. Fines can reach £18 million or 10% of the platform's global annual revenue, whichever is greater.
Why This Matters
Many sites and human rights organizations raised objections throughout the legislative process. One challenge is that the law includes 200 clauses covering a wide swath of content, which muddies the water for Trust and Safety teams. For instance, the law requires all platforms and sites to verify the age of their users, but as Wired points out, sites like Wikipedia (the eighth most visited site in the UK) may need to pull out of the market because age verification violates the Wikimedia Foundation's principles on collecting data about its users.

Platforms worry that the law's requirement to remove legal but objectionable content amounts to a new form of censorship. "If something is legal to say, it should be legal to type," Robert Colvile, director of UK think tank CPS, told The Guardian.

Then there's the question of X. Elon Musk famously fired most of the former Twitter's Trust & Safety team, leaving the platform with few resources to meet the enforcement requirements spelled out in the Act.

Finally, people are worried about the Act's impact on encryption and privacy. Many sites view Section 122 of the Act as compelling them to scan user messages to ensure they're not transmitting illegal material. This is a problem for WhatsApp and other apps and services that promise end-to-end encryption and fear that the law will require client-side scanning.

These concerns reflect a challenge common to all legislation that aims to protect people in their digital lives while still supporting a free exchange of ideas and content. Such bills can have unintended consequences, as critics of SESTA/FOSTA — US legislation intended to combat online sex trafficking — have argued: detractors say it demonized sex workers and subjected them to an increased risk of violence and abuse.
Another Round of Media Layoffs is Underway
Another holiday season, another round of layoffs for companies in the media industry. According to Axios, premium publishers are laying off workers due to volatility in the ad market.

Since the end of September, NPR, the Washington Post, Google News, Condé Nast, TVA Group, Starz, G/O Media, Vice Media, Bloomberg Media Group, and Recurrent Ventures have announced layoffs.

This time last year, media companies laid off people in response to a recession that never quite materialized. This year, the cuts stem from double-digit declines in ad spending in traditional media. For instance, Axios reports that Warner Bros. Discovery, Comcast/NBCUniversal, Paramount, Fox, and Disney posted a "12% average decline in linear ads" in Q3.
Why This Matters
While Magna predicted strong growth for digital ad spending throughout the second half of 2023 and 5.6% growth in 2024, Axios notes that much of the "ad recovery" touted by Magna went to Big Tech, not premium publishers. TikTok, Meta, and Google have all invested in AI to make it easier for brands to advertise with them; TikTok, for instance, offers templates, video editing tools, smart targeting, and more. Despite congressional threats to ban the platform, TikTok's ad revenue is growing 33% this year and may reach $17 billion next year.

In the meantime, publishers will likely find ways to weather a reduction in staff by deploying AI to streamline their internal workloads, enabling them to do more with fewer people.
Around the Water Cooler
Here's what else you need to know...

Malicious Ads Up 226%. GeoEdge released its Third Quarter Ad Quality Report, which found a steep spike (226%) in malicious ads over the past two months. Auto-redirects became the most common type of attack, tripling the monthly average seen during the first half of 2023.

Analyzing billions of impressions, GeoEdge found that the US and Canada saw the most dramatic increases in malicious ads. In the US, one in every 100 impressions was a malicious ad; in Canada, it was one in 65.

Read the whole report here.

Media Companies Rolling Out Generative AI Products. Kinesso, the tech arm of IPG, announced the rollout of its first generative AI tool — a personal assistant, called MyBot, built on Google's large language model. For now, MyBot is an internal tool designed to optimize productivity. Graham Wilkinson, Chief Innovation Officer, told MediaPost that the main goal is to help the IPG workforce get familiar and comfortable working with generative AI-powered assistants.

Meanwhile, Forbes launched a beta version of a generative AI search engine called Adelaide. The tool will help users navigate site content and pinpoint material of interest. Adelaide, named after the wife of Forbes founder B.C. Forbes, lets users "explore content spanning the previous twelve months, offering deeper insights into the topics they are searching for." This is a promising use of generative AI: it helps users search the site faster and more efficiently while reducing the risk of hallucinations or copyright violations, since results are grounded in Forbes' own content.
One X Post
Anatomy of a Co-dependent Relationship
Did we do it right?
Worth a Listen
"Pasta with Bread on the Side" w/ Laura Belmont, General Counsel @ Civis Analytics
Long John Silver? Red Lobster? No, it's the Data Protection Breakfast Club with Andy & Pedro. Their guest is Laura Belmont, an accomplished attorney and compliance professional who, as General Counsel at Civis Analytics, blends business success with community service. The discussion centers on her work and career, and on whether her work at Civis, Meta, and OpenAP makes a difference in the world. They also touch on whether SaaS lived up to its promise — and, of course, data protection and privacy.
Upcoming AdMonsters Events
