What’s Ahead for the Header

Latency, S2S, and Consolidation Shake Up the Header

“I hate the term header bidding,” a friend and industry resource told me over a cold beer. “It’s too catchy—it sounds like another piece of ad-tech buzzword BS.”

I’d argue “tagless tech”—the first name I heard in reference to header-based executions—was far worse (and horribly untrue). But my friend’s dislike really stems from the ad tech industry’s tendency to latch onto a term and flail it anywhere and everywhere until it’s completely drained of all meaning. It makes something revolutionary sound like a pittance, and encourages reactionaries to dismiss a seismic shift as merely a fad.

No, the header insurgency (and header bidding is just one flavor) is something far more substantial, banishing the contrived waterfall in favor of real, nearly level markets. For those who have been watching, that’s a first for the programmatic trading space, although it was always the ideal.

And we’re just getting rolling—the header is reaching a fascinating stage of development where efforts to curb latency, advances in server-to-server technology, and industry consolidation promise to reshape a landscape that just finished a major transformation. Buckle up, buckaroos.

Latency Slows You Down

Latency has long been the chief complaint about header integrations, and the dread that has kept some premium publishers far, far away from the space. Yes, it’s great to get smarter bids from a wider array of demand sources, but my page loads are already being weighed down by tags and third-party code… I can hear the ad blockers pounding on the door!

User experience concerns have gripped publisher revenue efforts, with particular focus on latency and data usage. Well, header bidding has issues with both of those. 

Auctions occur within the user’s browser, which can only do so much simultaneously (e.g., Chrome will only make 10 requests—6 per host—before pausing). To fight back against latency, publishers install timeouts that sacrifice beneficial bids to keep down user ire. But even with timeouts, every second counts considering users have become accustomed to zippy Internet speeds. 
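To make the timeout trade-off concrete, here's a minimal sketch (hypothetical partners, bids, and response times) of what a publisher-side timeout does: any bid that arrives after the cutoff simply never competes, even if it would have won.

```python
# Illustrative only: how a publisher-side timeout trades revenue for speed.
# Partner names, CPMs, and response times are all hypothetical.

TIMEOUT_MS = 400  # a publisher-chosen ceiling; real values vary widely

# (partner, bid CPM in $, response time in ms)
responses = [
    ("partner_a", 2.10, 180),
    ("partner_b", 3.45, 520),   # the strongest bid, but too slow
    ("partner_c", 2.80, 390),
]

# Only bids that beat the timeout make it into the auction
in_time = [(p, bid) for p, bid, ms in responses if ms <= TIMEOUT_MS]
winner = max(in_time, key=lambda x: x[1])

print(winner)  # the $3.45 bid never competed
```

The publisher keeps the page fast, but pays for it with the $3.45 bid that arrived 120ms too late.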

This gets worse on mobile, where loading can also be slowed by lackluster network speeds. And the extra auction weight on the browser leads to data drain, an ongoing battle on the mobile web. Users aren’t fond of having their rather expensive data stockpile sucked up by advertising—a particular drag for the mobile web, where header integrations could otherwise greatly help a lackadaisical programmatic space.

Header-bidding technology providers have long been aware they are in a battle against latency—smarter players have expanded their data-center distribution to offer better coverage. They have also built server-to-server connections with DSPs to deliver bids faster, and introduced “pre-fetch” technology that holds auctions for inventory yet to appear on a page.

And they’ve introduced single-request bid architecture. Index Exchange President Andrew Casale goes into pretty amazing detail about the technology here, but I’ll take a shot at a layman’s explanation. The first generation of header bidding employed multiple-request bid architecture, in which each placement on a page is treated as an autonomous unit. That may sound nice in theory, but it means each placement gets its own auction from each partner.

If you have 5 placements on a page trying to be filled by 5 header partners, suddenly you’ve got 25 auctions to run (and that’s assuming there aren’t auctions inside of auctions). Remember how we mentioned that Chrome only makes 10 simultaneous requests? Many of your auctions are going to be delayed (or never occur—pesky timeouts!).

With single-request, on the other hand, the whole page (including potential units a la pre-fetch) can be auctioned at once rather than as separate placements. In the above situation, you’d have 5 auctions for 5 partners, which is far more manageable for a browser.

Single-request opens up a bevy of opportunities (Tandem ads! Fluid placements!), but in particular it cuts latency (and data usage) by simply reducing the number of auctions that take place in the browser. It’s startlingly more efficient.
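The arithmetic behind that efficiency gain is simple enough to sketch—multiple-request scales with placements times partners, while single-request scales with partners alone:

```python
# Back-of-the-envelope auction counts for the two architectures,
# using the 5-placement, 5-partner example from above.

placements, partners = 5, 5

multi_request = placements * partners   # one auction per placement per partner
single_request = partners               # one auction per partner for the whole page

print(multi_request, single_request)    # 25 vs. 5
```

Add a sixth placement and multiple-request jumps to 30 auctions; single-request stays at 5.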

But there’s another header-based technology you’ve been doubtlessly reading about that truly kicks latency concerns to the curb: server-to-server.

Bless the S2S

Have you ever found yourself thinking, “The programmatic trading space would be a lot more efficient if ad servers allowed a variety of exchanges and demand partners to hook directly in on a server level”? Sure you have, and some ad servers do—just not the most popular one.

Doubleclick for Publishers initially only boasted an S2S link to its demand-side cousin, Google’s Ad Exchange—a fracas that gave launch to the wide adoption of header bidding. (Should we thank them for accidentally moving the industry forward?) In response to the header insurgency, last year Google opened up its server-to-server program (Exchange Bidding in Dynamic Allocation, aka EBDA) to an array of exchanges and demand partners.

Certainly EBDA will severely slash the latency issues of header bidding, but there are questions about the Google program, such as: will there be a per-bid fee? What data (think cookies) are passed to the demand partners? Is EBDA a mediation platform? And so on and so on…

Tech and product review site Purch recently built out its own network of server-to-server connections, but such an enterprise requires developmental resources many publishers don’t have and are not willing to pay for.

There’s gotta be another way! Well, yeah—why not take the best of both worlds and have a header-based S2S? 

Instead of holding an auction in the browser, a header integration zips off user data (reportedly in less than a hundredth of a millisecond) to a third-party server where the auctions are held. Then the server zips the results right back to the header where it’s sent down to the ad server. Latency? Never heard of it.
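The flow described above can be sketched roughly as follows—a minimal simulation, with all partner names and CPMs hypothetical, showing why the browser does almost no work once the fan-out moves server-side:

```python
# Sketch of a header-based S2S flow: the page makes one lightweight call,
# and the fan-out to demand partners happens server-side, free of browser
# connection limits. Partner names and CPMs are hypothetical.

DEMAND_PARTNERS = ["exchange_a", "exchange_b", "exchange_c"]
MOCK_CPMS = {"exchange_a": 2.40, "exchange_b": 1.90, "exchange_c": 3.10}

def query_partner(name: str, user_data: dict) -> dict:
    # Stand-in for a real server-to-server bid request
    return {"partner": name, "cpm": MOCK_CPMS[name]}

def s2s_auction(user_data: dict) -> dict:
    # Server-side: poll any number of partners, then pick the top bid
    bids = [query_partner(p, user_data) for p in DEMAND_PARTNERS]
    return max(bids, key=lambda b: b["cpm"])

def browser_request(user_data: dict) -> dict:
    # The only work left in the browser: one call out, one result back,
    # which is then handed down to the ad server
    return s2s_auction(user_data)

winner = browser_request({"page": "/article", "geo": "US"})
print(winner)
```

Adding a fourth, tenth, or fiftieth partner changes nothing in the browser—only the server-side loop grows, which is exactly the scale argument.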

But as Ad Ops Insider Ben Kneen pointed out in a wonderful overview of header-based S2S, scale is the real promise of S2S. Your demand partners are ultimately limited in the browser by the queue—10 simultaneous actions, 6 per host in Chrome. Even with single-request bid architecture, your header bidding is going to max out at a certain number of partners; with S2S, theoretically you can keep on adding demand sources till your S2S partner begs you to stop.

Unfortunately, it’s not that simple—demand partners have been willing to work within each other’s wrappers, but there’s an unease in dropping into another’s S2S. As with Google’s solution, there are a lot of questions around transparency into user data and bidding when using another partner’s server—after all, these tend to be demand competitors we’re talking about. Header tech companies believe they need to stay on the page (even in a wrapper) to get the best read on placements and deliver the smartest bids into the ad server.

Issues with data transfer could also wreak havoc on yield efforts due to errors in ID-matching—at a time when ID-syncing between the buy and sell sides (as well as between publishers) is driving increased use of programmatic channels. Many users showing up in exchanges from S2S connections will appear “cookie-less” or (gasp!) anonymous, which will dramatically lower their value to bidders. How can you properly value the (virtually) unknown?

In addition, some header-based S2S solutions are serving as mediation platforms when partners would rather have the ad server do the decisioning with their bids in tow—this is particularly beneficial for PMP campaigns or real-time guaranteed campaigns.

So header-based S2S may be great in the latency battle, but there are some sharp limitations. In the short term, publishers—especially those wary of header bidding’s inherent latency challenges—will try the S2S path in the header and through EBDA. But those that crave data and control will recognize that “traditional” header bidding offers more of both—and potentially better yields. I’d expect publishers with a lot of engineering and tech resources (as well as a lot of indirect revenue) to choose this path.

At the same time, it’s easy to picture a future where publishers adopt multiple header-based server-to-server connections from trusted demand sources that send bids down to the ad server for decisioning. But—once again—because of browser limitations, only so many could simultaneously function in an efficient manner. This suggests weaker tech will be pushed out of the header while the strongest survives.

The culling has already begun.

The Call of Consolidation

During the Publisher Forum in Miami, I asked keynote Oleg Korenfeld of Mediavest|Spark onstage what he thought of header bidding—because it’s bumping up publisher CPMs so much, it must be hurting media agency bottom lines, right?

Korenfeld gave the same answer that many other agency folk have given me in private: media agencies know about header bidding and get how it works, but they haven’t noticed an effect on their businesses.

The first time an agency person told me this I was puzzled—so where is all that extra spend coming from? And then it hit me like a ton of bricks: by better aligning the buy and sell sides, header bidding is blocking intermediary players from taking a disproportionate slice.

As an industry source explained, if header bidding inventory were overpriced, DSP algorithms—so well curated by agencies today—would have gone out of their way to avoid it. The CPM boost is a result of intermediaries being starved of their “bites of the apple,” with pubs getting a higher percentage of the original spend. (In particular, arbitragers are getting killed.) Take rates for service providers have dropped from highs of 25% to 35% to a more reasonable 10%.
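The effect of those falling take rates is easy to put in dollar terms—here's the arithmetic on a hypothetical $10 CPM bid, using the rates cited above:

```python
# What a shrinking take rate means for the publisher's cut.
# The rates come from the article; the $10 CPM bid is hypothetical.

bid_cpm = 10.00
shares = {rate: round(bid_cpm * (1 - rate), 2) for rate in (0.35, 0.25, 0.10)}

for rate, kept in shares.items():
    print(f"{rate:.0%} take -> publisher keeps ${kept:.2f}")
```

At a 35% take the publisher keeps $6.50 of that $10 bid; at 10%, $9.00—a 38% revenue jump on the same buy-side spend, which is roughly the CPM lift publishers have been reporting.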

Industry people so enjoyed complaining about the so-called “ad tech tax” that it became the most overused trade phrase of 2016. But only some realized that header technology was wiping the tax away and leading to what will likely be an unprecedented wave of consolidation. You’ve already been hearing the reports of layoffs, mergers, and even closings.

Seems like everyone and their grandmother has a header option these days. The question is: how many are the real deal and how many are just junk code? Who has real, differentiated demand? During the reign of the waterfall, many a tech company focused on building a solid sales team to lock in sweet positioning within the ad server; placing engineering in the back seat may come back to haunt them. 

For publishers, figuring out which service providers will be the winners should just be a matter of looking for trends in the header reporting: bid rates, amounts, speed, etc. Already I’ve learned that some publishers are excising header partners that have not adopted single-request bid architecture. Beyond that, I imagine any players that don’t develop a header-based S2S solution are doomed.

It’s going to be a brutal period, but it will cut a great deal of dead tech out of the ecosystem. And—even better—the consolidation will put more spend directly in publishers’ pockets.
