AI Lights the Way for Brand Safety

I recently noted the cyclical nature of ad tech issues, and nowhere has that been more apparent than in brand safety’s return to the forefront of advertisers’ and publishers’ concerns. As technology and content have evolved, we need to reconsider just what fits under the brand-safety umbrella in 2018 (something we’ll discuss in greater detail at the March 2018 Publisher Forum in Huntington Beach).

But the industry must also examine what tools are essential for meeting brand-safety needs now—and in the future. As Crisp CEO Adam Hildreth explains, brand safety controls are definitely not set-and-forget, and the scale at which brands operate these days requires them to employ machine learning and AI to keep up with an ever-perilous environment.

GAVIN DUNAWAY: I always hate asking the “define this” question, but what do we think online brand safety is in 2018?

ADAM HILDRETH: It was nearly a year ago that brand safety truly became a major issue for advertisers and publishers. Numerous investigations by US and UK media highlighted reputation-damaging issues, including ads being placed against terrorist propaganda and brands unwittingly funding extremist activities. This quickly grew into widespread fear that caused major brands to pull programmatic spend.

In 2018 we’ll see the term “brand safety” broaden to cover any toxic, offensive, or illegal content appearing next to a brand’s assets that could threaten that brand’s reputation. This expands the issue significantly. Brand safety issues can now range from a controversial news story or a social media advert appearing adjacent to violent content to a brand being mentioned next to fake news. Even the perilous activities of brand influencers now fall under the brand safety umbrella.

GD: How has brand safety changed over the last five years (or even the last two)?

AH: Brand safety used to be concerned with ad fraud and the type of website a brand appeared on. Today, it focuses on the user-generated content that appears next to a brand’s content or marketing. Brand safety has become a mainstream issue for marketers because user-generated content, such as uploaded videos or comments on a news article, can be posted at any time, 24/7, and neither the brand nor the publisher has any control over it.

Today, blacklisting a website, a journalist, or a webpage does not go nearly far enough to protect brand reputation. Brand safety now needs to be assessed at a granular level. Users create comments, posts, videos, soundtracks, emojis, etc., any of which can make a usually “safe” ad space hugely damaging in a second. Brands need to be dynamic and flexible to ensure that every individual ad space is safe 100% of the time.

GD: What should fall under the brand safety umbrella?

AH:

  • The comments posted on a brand’s social media advertising.
  • The comments posted on a brand’s social sites.
  • The content that appears next to a brand’s digital ad.

GD: Is the ambiguity around the meaning of brand safety the reason Crisp calls its operators “risk analysts”?

AH: At Crisp, we keep brands safe by protecting them from online risks. We do this by identifying risky content using a combination of sophisticated technology and an expert team of risk analysts who are highly skilled at spotting hundreds of types of brand-damaging risks in user-generated content.

Our risk analysts are attuned to the wider context of a comment, video, or image and the real-world impact it could have on a client’s brand. This is especially important in ensuring brand safety, because understanding the context and nuanced risk of where an ad is seen, rather than simply the website it appears on, is crucial.

GD: Should brands be weighing risk rather than trying to avoid anything remotely controversial?

AH: Every brand we work with is different. Which content poses the biggest risk to them is shaped largely by their target audience, their brand values, and the types of risks that have the largest impact on their business. Rather than trying to avoid all risks, brands need a risk protection solution that can be nuanced to fit their brand positioning.

This tailored approach to risk means there isn’t a one-size-fits-all solution to brand safety. Brands must take an active role in shaping how the crisis is solved so that the resulting solution is beneficial to them, too.

GD: Why is machine learning an optimal tool for tackling brand safety? Is it perhaps the *ultimate* tool?

AH: The sheer quantity and scale of brand-generated and user-generated content mean that AI and machine learning are critical for the first pass of moderation and monitoring. For brand safety measures to be effective and efficient, you want as little human interaction as possible, so that when humans do moderate content, they can use the full context of the content to guide their decision. It’s about getting the right balance between technology and human intelligence.
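
To make that balance concrete, here is a minimal Python sketch of such a first-pass triage. Everything in it (the risk scores, thresholds, and routing labels) is an illustrative assumption rather than a description of Crisp’s actual system; the point is simply that only ambiguous content ever reaches a human reviewer.

```python
# Minimal sketch of an AI-first-pass moderation flow: a model scores each piece
# of user-generated content, and only low-confidence ("ambiguous") items are
# routed to a human risk analyst who can weigh the full context.
# All names and thresholds below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Decision:
    content_id: str
    risk_score: float  # 0.0 = clearly safe, 1.0 = clearly brand-damaging
    action: str        # "approve", "remove", or "human_review"

def triage(content_id: str, risk_score: float,
           remove_above: float = 0.9, approve_below: float = 0.1) -> Decision:
    """Route content based on a model's risk score.

    Thresholds are illustrative; in practice they would be tuned per brand,
    since each brand weighs risk differently.
    """
    if risk_score >= remove_above:
        action = "remove"        # model is confident the content is unsafe
    elif risk_score <= approve_below:
        action = "approve"       # model is confident the content is safe
    else:
        action = "human_review"  # ambiguous: escalate to a risk analyst
    return Decision(content_id, risk_score, action)

# Example: only the ambiguous middle band reaches a human moderator.
for cid, score in [("c1", 0.02), ("c2", 0.55), ("c3", 0.97)]:
    print(triage(cid, score))
```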

Continual development is essential. You can’t implement a brand safety solution and leave it, because the problem constantly evolves and new threats always need solving. At Crisp, we are continually analyzing new trends and evasion tactics and using them to train our AI so that it becomes increasingly sophisticated at identifying illegal or offensive content.
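
To sketch what that retraining loop can look like in practice (assuming scikit-learn and a hashing text vectorizer; this is an illustration, not Crisp’s actual stack), newly labeled examples, including fresh evasion tactics flagged by analysts, can be folded back into a text-risk model through incremental updates:

```python
# Illustrative sketch only (assumes scikit-learn; not Crisp's actual stack):
# analyst-labeled examples, including newly spotted evasion tactics such as
# obfuscated spellings, are folded back into a text-risk model via incremental
# updates so the model keeps pace with evolving threats.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**18, alternate_sign=False)
model = SGDClassifier()  # supports incremental training via partial_fit

def update_model(texts, labels, first_batch=False):
    """Incrementally train on a fresh batch of analyst-labeled content."""
    X = vectorizer.transform(texts)  # stateless hashing, no fit step needed
    if first_batch:
        model.partial_fit(X, labels, classes=[0, 1])  # 1 = brand-damaging
    else:
        model.partial_fit(X, labels)

# Initial batch, then a later batch containing a newly observed evasion tactic
# that risk analysts have flagged and labeled as unsafe.
update_model(["great article", "buy v1agra now"], [0, 1], first_batch=True)
update_model(["ch3ap kn0ckoffs of this brand here!!!"], [1])
```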

GD: Although you’re focused mainly on brands and social platforms, do you have some best practices you could share with premium publishers when it comes to quelling brand safety concerns?

AH: User-generated content is critical for publishers to drive audience engagement, but all user-generated content can be risky. Using advanced monitoring technology, it is now very easy and cost-effective for publishers to control the content on their sites. To give you an idea, we work with many premium publishers, moderating hundreds of millions of comments each month for less than the monthly price of one part-time community manager.

GD: Is there a single aspect of brand safety that you think will dominate the conversation in 2018? Perhaps it’s a cluster of issues around a theme?

AH: From conversations I’ve had already, it will be fake news and user-generated posts distributing fake content about brands. People are already generating this through the likes of social media rumors, fake news articles about brands, and fake product reviews. All of this content calls a brand’s reputation into question, damages sales, and puts brand safety at risk.