Black Mirror’s Joan is Awful: The Implications of Privacy Ethics, Consumer Choice and Tailoring Content

The Black Mirror episode “Joan is Awful” examines streaming platforms’ predatory data collection practices and how they affect consumer choice.

Let’s be honest: does anyone actually read the terms and conditions when they sign up for an app or streaming service?

On June 15, Black Mirror dropped the episodes of its highly anticipated sixth season, and the first episode, “Joan is Awful,” raised paranoia about what streaming services ask for in their Terms of Service (ToS).

[Spoiler alert] 

When the episode starts, Joan makes a series of morally questionable decisions, including firing a work friend and meeting up with an ex-boyfriend behind her fiancé’s back. After she comes home from work, Joan and her fiancé Krish search the fictional streaming service “Streamberry” for a show to watch when they come across “Joan is Awful.” Joan watches in horror as the events of her day play out on the show, which subsequently causes her life to fall into complete chaos: her fiancé leaves her, viewers villainize her on social media, and her company fires her.

After consulting a lawyer, Joan learns that Streamberry has the rights to her private information and life events to create content for the show, since she signed them over in the Terms and Conditions.

The ad tech community is well-versed in discussions surrounding privacy ethics, consumer protection, and consumer targeting. Still, this episode caused quite a debate about how platforms collect consumer data. No one will wake up tomorrow with a Netflix series about their life, but what are the real-world implications of this episode? 


The Privacy Reckoning and Consumer Choice

Consumer choice is one of the most critical aspects of privacy ethics. Generally, a privacy policy gives consumers “notice about how their personal information will be collected, used, and shared” and then informs them what “choices” they have. Platforms then ask consumers to agree to the policies in order to use the service.
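To make the notice-and-choice mechanics concrete, here is a minimal sketch, with entirely hypothetical names (no real platform’s code), of how data use is typically gated on the choices a user recorded at sign-up:

```python
# Hypothetical sketch of notice-and-choice: data use is gated on
# the consent flags a user set when agreeing to the policies.
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    accepted_tos: bool            # agreed to the Terms of Service
    allow_personalization: bool   # opted in to tailored content

def can_use_for_content(consent: ConsentRecord) -> bool:
    # Both the blanket ToS agreement and the purpose-specific
    # opt-in should be present before data is used this way.
    return consent.accepted_tos and consent.allow_personalization

# A user who clicked "Agree" on the ToS but never opted in to
# personalization should not have her data used for tailored content.
joan = ConsentRecord(accepted_tos=True, allow_personalization=False)
print(can_use_for_content(joan))  # False
```

The flaw the episode dramatizes is that, in practice, a single ToS checkbox often stands in for every purpose-specific flag at once.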

However, the structure is flawed because notice-and-choice fails to protect the audience’s privacy. It puts the burden of privacy on the consumer, who is most likely unaware of how these systems and standards work. How is it a choice when they don’t have all the facts?

In the episode, Streamberry’s CEO took advantage of this notion to use subscribers’ information for content. When Joan signed up for Streamberry, she surely would have thought twice about agreeing to the ToS had she known the platform would use her personal life for content without proper consent.

Later in the episode, Streamberry’s CEO explains to a journalist that the company decided to create shows centered around its audience in order to deliver more tailored content. Joan’s show was only the start: she revealed that Streamberry had tested out a show for each of its subscribers, which meant every consumer’s data was up for grabs.

Publishers and advertisers have broken consumer trust for years, and the only way to rebuild that trust is to put the consumer’s needs first. It’s about walking the fine line between making content relevant and being transparent with your audience to respect privacy ethics. 

“Privacy consent is at the core of a publisher’s relationship with their visitors,” said Dan Rua, CEO, Admiral. “The strongest relationships are built on mutual trust. As such, privacy consent shouldn’t be a silo run by lawyers, separate from thinking about engagement across the full visitor relationship journey.”

Tailoring Content For the Consumer

In the ad tech industry, ad ops professionals use all types of consumer targeting practices with one goal: getting the right content to the right consumer. The goal has always been to give consumers a great user experience, but over the years, regulators have criticized publishers and advertisers for some of their targeting practices.

For instance, the third-party cookie, on its way out entirely in 2024, was criticized for collecting and using consumer data without transparency or choice. Before the cookie pop-up notification became standard practice, websites used cookies to collect and share consumer information without their knowledge. And even after consumers knew what cookies did, many remained unaware of how platforms shared their data.
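What made the third-party cookie so effective at silent tracking is that the browser attaches it to every request to the tracker’s domain, regardless of which site embedded the ad. A minimal sketch (the cookie names and values here are hypothetical) of what a tracker reads off an incoming request, using Python’s standard cookie parser:

```python
# Sketch of third-party cookie tracking: the browser automatically
# sends the tracker's cookie with each ad request, so the tracker can
# link visits across unrelated sites under one stable identifier.
from http.cookies import SimpleCookie

# Hypothetical Cookie header the browser attaches to an ad request
raw_header = "uid=abc123; last_site=news-example.com"

cookie = SimpleCookie()
cookie.load(raw_header)

# The tracker recovers the identifier it set on an earlier visit...
user_id = cookie["uid"].value  # "abc123"
# ...and can add this page view to the profile stored under that ID.
print(user_id, cookie["last_site"].value)
```

None of this required the user to click anything, which is exactly the transparency gap the consent pop-ups were meant to close.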

Web browsers like Firefox and Safari already block third-party cookies by default, but Google is still working to phase them out of Chrome. It is currently testing the Privacy Sandbox, which aims to undo years of persistent cookie tracking of users on their journeys across the web and mobile.

The ad tech community is working to reconcile its past mistakes with privacy ethics. Regulators and government officials created laws such as the EU’s GDPR and state-level laws such as California’s CCPA to combat some of these exploitative practices. Apple also built privacy measures into its framework to help set standards for transparency and consent, such as Hide My Email and App Tracking Transparency (ATT).

These laws are coming full circle as brands face a rising number of lawsuits and fines if they don’t comply with updated privacy regulations. This past week, the French Data Protection Authority (CNIL) fined Criteo €40 million ($44 million) for breaching the GDPR. While Criteo asserted that its breach of privacy was not deliberate, the regulation pushed the company to improve its data collection practices.

“Shifting our mindset from notice and choice will not be easy and will require agreement in an ecosystem of competing interests,” wrote Jessica B. Lee, Partner, Co-Chair, Privacy, Security & Data Innovations at Loeb & Loeb LLP. “However, in the long term, both consumers and businesses may benefit from this change.”