Imagine being head of security at a stadium that holds several hundred events during the year. You are responsible for the safety of each person entering the stadium and for ensuring nothing happens that might lead to a lawsuit. Obviously you can’t afford to let any weapons in no matter what, but you also need to stop people from bringing in food, which would cut into concession stand profits.
However, you have to balance these goals against the prerogatives of the stadium’s management, who know that the quicker people get into the venue, the more money they will spend. Also keep in mind, management sees you as a cost center – you’ll always be under pressure to do your job faster (cheaper), while knowing that a mistake could be catastrophic to the business. If your boss isn’t particularly risk-averse, you may very well have to accept that you can’t do everything possible to ensure safety.
Ad creatives aren’t as important as people’s lives, but ad operations can certainly relate to this pressure. The maturation of the digital media space and the expansion of trading technologies have given rise to an underbelly of corrupt practices, some more heinous than others. While various players will drop cookies to shoplift publisher data, malware and malvertising purveyors threaten not only a site’s operations but its entire user base.
Creative quality assurance is the stadium entrance, and ad ops the weary security team tasked with preventing unwelcome parties from venturing in. Only thing is, creative QA is considered a cost of doing business for publishers – the security budget is constantly getting squeezed. Executing the creative QA process in a faster and cheaper manner while making sure to keep out the nastiest bits of code makes for one tough balancing act.
We reached out to a number of publishers in the AdMonsters community to see how they manage the creative QA aspect of their responsibilities, and the results were quite telling about how the supply side perceives and mitigates the risks.
Almost every publisher we talked to saw creative QA as a top priority – if not “the” top priority for their team. First and foremost, advertising is an essential component of their websites: it must function properly, and each ad should be visually appealing and not misleading. As the “gatekeeper of quality,” ad operations effectively has editorial responsibility for running only advertising that reflects positively on the brand. Every publisher queried at least gave the creative a cursory examination and tested that it clicked through.
While some publisher creative specifications are easily quantified – e.g., file size and animation limits – many are quite subjective. One publisher we spoke to requires a “clear and specific call to action,” while another needed to ensure the ads didn’t offend their audience. “Inappropriate content” in the ad was mentioned by several people, each leaving some room for interpretation.
One publisher in particular mentioned checking that the “creative message matches the landing page message” – not only to ensure the ad isn’t misleading, but to treat the whole experience of clicking the ad and moving on as an extension of the site’s user experience. I’m sure that decision process involves judgment calls that aren’t easily captured in a spec document.
It’s exciting for operations to play this role. A trafficker isn’t just pushing bits to and fro, but is empowered to decide whether something should be flagged. In my own experience as the head of operations, I had a few campaign managers who often took exception to the subpar creative they were asked to traffic. I felt compelled at times to take up the fight and push back on sales or clients about aesthetic issues so my eager employees would see that their objections were being heard. Otherwise, these people fighting for the quality of the company’s brand would become disenchanted with the process.
I didn’t win many of those fights, but my only expectation was for my people to take pride in their work.
A Bit More Technical
While not as subjective as aesthetic specifications, technical specifications present just as many challenges. As the number of browsers and devices escalates, operations – like their editorial and technical colleagues – have to draw a line in the sand about which scenarios they will test for and what is acceptable in which circumstances.
Even something as relatively straightforward as checking file sizes is now increasingly complex. While broadband would seem to make the issue irrelevant, more people are accessing content over mobile devices, which increases latency concerns.
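As a minimal sketch of what even the “straightforward” check looks like in practice, the snippet below flags assets against per-channel size caps. The 200 KB desktop and 50 KB mobile limits are illustrative assumptions, not industry-mandated numbers – substitute your own spec.

```python
# Hypothetical file-size gate for creative assets. The per-channel caps
# below are example values only; a real publisher spec would supply them.
SIZE_LIMITS = {
    "desktop": 200 * 1024,  # 200 KB assumed desktop cap
    "mobile": 50 * 1024,    # 50 KB assumed mobile cap (latency-sensitive)
}

def check_file_size(asset_bytes: int, channel: str) -> bool:
    """Return True if the asset fits within the spec for its channel."""
    return asset_bytes <= SIZE_LIMITS[channel]

# The same 150 KB creative passes on desktop but fails the mobile cap,
# which is exactly why "one spec" no longer covers every scenario.
print(check_file_size(150 * 1024, "desktop"))  # True
print(check_file_size(150 * 1024, "mobile"))   # False
```

The point of the example is the asymmetry: one creative, two different verdicts depending on where it serves.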
Industry standards in these cases serve only to get as many people on both the buy side and sell side on the same page; they can’t be systematically enforced. A creative that doesn’t follow specs (or, worse, doesn’t function properly) can still be pushed through the buying process quite quickly, leaving an ad operations person to determine whether it goes live. Pressure from buyers, sales and revenue officers to get the ad live means standards often fall by the wayside. Bless those who hold the line.
An interesting observation is the amount of time operations spends on direct deals while letting programmatically bought creatives through with less stringent review. This is in part because most use SSPs that provide QA tools and services. Publishers can simply turn an SSP off if their requirements aren’t met, but I would argue the bar is still set higher for direct deals. It will be interesting to watch whether, as more direct transactions come through programmatic channels, more pressure is put on SSPs to provide even better tools – or whether such a manual process simply goes away.
Raising the Stakes
If holding the line on file sizes against internal pressure sounds hard, consider the other things that should be reviewed as part of the creative process. While almost every publisher reviews creative look and functionality, fewer had a sufficient built-in process for the other things that come through a third-party tag: namely malvertising and tracking pixels.
No one likes malvertising except malvertisers. Everyone across the ecosystem understands its negative implications for digital advertising as a whole, yet not enough is being done to eradicate it. Those who create malvertising have a single focus, while very few people are focused on stopping them.
This often makes ad operations the last line of defense before a brand-crushing event occurs on a site. Yet at many publishers, ad operations isn’t empowered to do whatever it can to eliminate the threat. Publishers need to recognize that malvertising is a real problem and give ad operations the authority to address it.
It isn’t just the bad guys who include additional code with the creatives – media buyers themselves usually include tracking pixels used for a variety of purposes. Some are used for retargeting, so the buyer can serve ads to the same user elsewhere on the Internet. Some are used to analyze creative performance (e.g., A/B testing and brand studies).
Other pixels come from fourth-party companies and are used to verify the campaign’s targeting or viewability. Some code may even collect information about the user from within the creative, improving the buyer’s profiles for additional targeting.
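One crude way to surface pixels like these is a static scan of the tag markup. The sketch below is illustrative only: the “known tracker” domains are placeholder assumptions, and real-world tags often inject pixels via JavaScript that a simple regex scan will never see, which is part of why dedicated QA vendors exist.

```python
import re

# Placeholder watchlist -- in practice this would come from your own
# audit history or a verification vendor, not be hard-coded.
KNOWN_TRACKERS = {"tracker.example.com", "pixels.example.net"}

# Naive pattern for <img> tags with a quoted src attribute.
PIXEL_RE = re.compile(r'<img[^>]+src="(?P<src>[^"]+)"[^>]*>', re.IGNORECASE)

def find_pixels(tag_html: str) -> list:
    """Return the src of every <img> whose host is on the watchlist."""
    hits = []
    for match in PIXEL_RE.finditer(tag_html):
        src = match.group("src")
        host = re.sub(r"^https?://", "", src).split("/")[0]
        if host in KNOWN_TRACKERS:
            hits.append(src)
    return hits

tag = (
    '<a href="https://brand.example.org">'
    '<img src="https://cdn.example.org/ad.jpg"></a>'
    '<img src="https://tracker.example.com/px?cid=123" width="1" height="1">'
)
print(find_pixels(tag))  # ['https://tracker.example.com/px?cid=123']
```

Even this toy version shows the shape of the problem: the ad image itself passes, while the 1x1 riding along with it is the thing the publisher never agreed to.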
Deciding which of these practices to allow on a site is complex, but the decision can have huge ramifications for revenue. Simply disallowing tracking pixels means saying “no” to a paying customer at a time when few can afford to turn away business.
However, allowing tracking pixels effectively enables a buyer to buy the publisher’s audience elsewhere for much cheaper – with zero compensation for the publisher. If not agreed to upfront, verification and viewability tracking can create additional discrepancies for which the publisher will most likely not get paid.
Because these issues are complex, too often the business simply ignores them. Sometimes there is resistance, but the argument “other publishers allow our pixels” usually satisfies sales and management enough to let things through. Ad operations may know better, but will cave to the need for revenue. Security starts turning a blind eye.
Between the look of the ad, whether it performs, and what may or may not be lurking beneath the covers, there is already a lot to check in the creative QA process. Now layer in mobile, with its own set of creative issues (including mobile malvertising). And what about video? Video is a particular problem because if the ad doesn’t work, often the content doesn’t either.
Let’s go back to our stadium security analogy. Everything listed above is enough of a challenge for ad operations, but what turns things into a Woodstock situation (the bad one, not the good one) are remnant ad solutions. Pretty much multiply the above work by a factor of 10, assembling at the gates at near warp speed.
One other twist: creatives change. It’s not enough to sign off on a creative when it first comes in. The look, the performance and the tracking pixels can easily change after the campaign has been trafficked. Feeling the pressure now?
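One simple way to think about catching post-approval changes is fingerprinting: hash the tag payload at sign-off and compare against it on later checks. This is a sketch under stated assumptions – the payloads here are inline strings, where in practice you would re-fetch the live tag, and legitimately dynamic or rotating creatives would over-flag with raw hashing, so real monitoring tends to normalize payloads or scan for specific risk markers instead.

```python
import hashlib

def fingerprint(payload: str) -> str:
    """SHA-256 digest of a creative tag's payload."""
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Hash captured at QA sign-off (hypothetical approved creative).
approved = fingerprint('<img src="https://cdn.example.org/ad.jpg">')

def has_changed(live_payload: str) -> bool:
    """True if the live creative no longer matches what was approved."""
    return fingerprint(live_payload) != approved

# Unchanged creative passes; one with a pixel quietly appended does not.
print(has_changed('<img src="https://cdn.example.org/ad.jpg">'))  # False
print(has_changed('<img src="https://cdn.example.org/ad.jpg">'
                  '<img src="https://tracker.example.com/px">'))  # True
```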
What Works Now
In talking with a number of publishers, my hope was to find some commonalities in how operations was handling the creative QA process and to see what best practices would emerge. As one would suspect, the approaches were quite varied.
1. The main difference was that at some publishers, creative QA was part of every trafficker’s job, while others had people dedicated to it. Based on rough numbers, this wasn’t purely a function of creative volume or publisher size, but of the importance they placed on the creative QA process.
2. Another difference was the investment in tools or outside resources to assist with the process. While some publishers used their ad server’s built-in QA tools, others outsourced the process, using companies like The Media Trust, Adometry, Krux, Evidon or advalidation.com, or tools like Charles, HttpWatch, AdOpsTools.Net (still going after all these years!), Fiddler and Firebug. A few companies had even gone so far as to develop their own proprietary solutions.
Very few people I spoke to were specifically tracking the amount of time they were spending on creative QA. Those with dedicated or outsourced people were very clear on the time spent, as I’m sure it’s much easier to break out that time and cost than to guess how long each trafficker spends.
We did have a few publishers who had roughly the same number of creatives to check and estimated how many hours per week it took to QA them. This is all back-of-the-napkin math and only goes to show the variety of responses:
- pub #1: 10 hours/week for 100 creatives – 6 minutes per creative, done manually
- pub #2: 2 hours/week for 250 creatives – 28 seconds per creative, done manually
- pub #3: 25 hours/week for 250 creatives – 6 minutes per creative, using offshore resources
- pub #4: 2 hours/week for 200 creatives – 36 seconds per creative, done manually
- pub #5: 10 hours/week for 200 creatives – 3 minutes per creative, using a third-party service
- pub #6: 30 hours/week for 200 creatives – 9 minutes per creative, done manually
One theme in the responses we got is that for ad operations there probably isn’t a routine week. If you don’t have a person dedicated to creative QA, it’s hard to gauge the time each trafficker spends on a task that varies creative to creative and is squeezed by other responsibilities (end-of-month reconciliations, other campaigns, etc.). One of the under-a-minute publishers listed above said, “Creative QA is important, but not as high as it should be. We try to catch as much as we can, but there’s always a few that slip through.”
My conclusion is that ad operations leaders get it – they see the importance of creative QA, but the amount of time they can dedicate is really driven by the organizational understanding of the risks around creative tags. If the CEO or CRO isn’t concerned with data leakage, because of a lack of understanding or because they simply accept it as the way the market works today, ad operations will not get the resources required to protect the company. Without enough resources, ad operations will focus on what can be seen (the look and performance of an ad) and less on what’s going on under the surface (malware and tracking pixels).
I started this article likening ad operations to stadium security. A big difference is that ad operations isn’t recognized for the security role it plays within the organization. It needs to be. Publishers need to realize revenue is most at risk when they aren’t monitoring what they’ve allowed onto their own sites.