Between the IAB’s OpenData initiative, the LiveRamp/AppNexus-led data consortium, and a few other long-simmering efforts (think DigiTrust), now is a great time for anyone who has been championing data transparency. Whether these initiatives are bearing fruit, that is, whether they’re actually translating to more utility in sharing and processing data, and whether publishers are noticing a difference yet, is another story. But for people who have longed for the data discussion to open up a bit, there’s finally movement.
Let’s start with the data problem here, and focus on the publisher’s perspective. There is a ton of data out in the world, and data sets are often not compatible with each other. On a micro level, fields and headers don’t always match up right. On a macro level, mapping out user IDs from all of these disparate sources is wildly complicated. Google and Facebook have fewer problems. Each is already sitting on massive amounts of proprietary data, all orbiting around easy-enough-to-verify unique user IDs.
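To make the micro/macro distinction concrete, here’s a minimal sketch in Python. All field names, records, and the ID match table below are hypothetical, invented purely for illustration; real identity resolution is the hard, contested part that the consortium is trying to solve at scale.

```python
# Two vendors describe the same user attributes with different field names
# (the micro-level problem), and key them to different user-ID namespaces
# (the macro-level problem).
vendor_a = [{"uid": "a-123", "geo": "US", "age_band": "25-34"}]
vendor_b = [{"user_id": "b-987", "country": "US", "age": "25-34"}]

# Micro level: map each vendor's headers onto one shared schema.
FIELD_MAP = {
    "uid": "user_id", "user_id": "user_id",
    "geo": "country", "country": "country",
    "age_band": "age_range", "age": "age_range",
}

def normalize(record):
    """Rename a vendor record's fields to the shared schema."""
    return {FIELD_MAP[k]: v for k, v in record.items()}

# Macro level: even after the fields line up, the IDs still live in separate
# namespaces, so a cross-vendor identity table is needed. This toy match
# table stands in for what LiveRamp-style linking actually does.
ID_GRAPH = {"a-123": "person-1", "b-987": "person-1"}

merged = {}
for record in map(normalize, vendor_a + vendor_b):
    person = ID_GRAPH.get(record["user_id"])
    if person:  # only merge records we can actually link
        merged.setdefault(person, {}).update(
            {k: v for k, v in record.items() if k != "user_id"}
        )

print(merged)  # {'person-1': {'country': 'US', 'age_range': '25-34'}}
```

Note that everything interesting hides in `ID_GRAPH`: the schema mapping is tedious but tractable, while building and maintaining the identity table is the part that requires either proprietary scale (Google, Facebook) or a consortium.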
To keep Google and Facebook from running away with the entirety of the digital ad market, publishers and ad tech companies need to share data resources in some way, so buyers get the scale they demand. There ought to be publisher advantages here—first-party data is often richer and more useful than third-party data, which is available in droves and which yields deeply mixed results in targeting.
The LiveRamp/AppNexus consortium has been in the news lately—in large part because it used to be the LiveRamp/AppNexus/MediaMath consortium, and MediaMath just pulled out. (Index Exchange has stepped up to take a leading role in MediaMath’s stead.) The several ad tech companies in the consortium are aiming to combine their data sets and create a common user identity asset, which is necessary for cross-device targeting. The downside is that there’s a notorious trust deficit among ad tech companies, and people around the industry are keeping an eye on the consortium to see whether it holds.
Indeed, in bowing out, MediaMath cited discomfort with the common identifier’s apparent dependence on LiveRamp’s cross-device linking methodology. While I was asking other digital media folks for their thoughts on these data developments, a few joked that a LiveRamp/AppNexus-led entity stood to become a duopoly of its own. (I’m not sure all of them were entirely joking.) So it’s interesting to think about MediaMath’s endgame in pulling out. Will the company be affected negatively in the long run? Does it have so much data scale on its own that it will come out in about the same place, with or without the consortium? These aren’t rhetorical questions.
The IAB Tech Lab’s OpenData initiative brings a very different, yet related, set of concerns and solutions to the table. It builds off of previous IAB-involved initiatives for standardized data nomenclature, which is such an important issue that I wish it looked better on a sandwich board. OpenData is certainly a snappier name.
While the data consortium is shooting for centralized user IDs, OpenData takes a step back and aims for an industry-wide, non-proprietary taxonomy. From a publisher’s perspective, what’s desirable here is to have a taxonomy that can work with Google’s, but isn’t reliant on Google’s. What’s also desirable is to come away with a clearer vision of campaign performance, to bring back to advertisers and communicate the real value the publisher delivers. Reporting discrepancies are an ongoing, time-consuming pain for publishers, and if there’s increased standardization across fields, reporting stands to become much less manual.
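The reporting-discrepancy point above can be sketched in a few lines of Python. The segment labels, taxonomy paths, and counts here are all hypothetical, made up for illustration; the point is only that once two reports are re-keyed onto one shared taxonomy, reconciling them becomes a diff instead of a manual, label-by-label exercise.

```python
# Each reporting system keys the same campaign dimension to its own
# proprietary segment labels (hypothetical examples).
system_a_report = {"Auto Intenders": 1200, "Sports Fans": 800}
system_b_report = {"auto_in_market": 1150, "sports_enthusiast": 810}

# A shared, non-proprietary taxonomy that both systems' labels map into.
TAXONOMY = {
    "Auto Intenders": "automotive/in-market",
    "auto_in_market": "automotive/in-market",
    "Sports Fans": "interest/sports",
    "sports_enthusiast": "interest/sports",
}

def to_shared(report):
    """Re-key a report onto the shared taxonomy."""
    return {TAXONOMY[label]: count for label, count in report.items()}

a, b = to_shared(system_a_report), to_shared(system_b_report)

# With both reports on one taxonomy, discrepancies fall out of a simple
# field-by-field comparison.
discrepancies = {seg: a[seg] - b[seg] for seg in sorted(a.keys() & b.keys())}
print(discrepancies)  # {'automotive/in-market': 50, 'interest/sports': -10}
```

The work shifts from reconciling numbers by hand to maintaining one mapping table, which is exactly the kind of asset an industry body is better positioned to standardize than any single vendor.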
OpenData is part of the IAB Tech Lab’s OpenMedia package, the overall programmatic standard, which aims to allow technical standards to be updated more quickly. OpenRTB 3.0 is also part of OpenMedia. We ought to expect more details about the initiative, and where it’s going, to emerge soon. (On Sept. 19, the Tech Lab held an in-person-only seminar on OpenMedia at Criteo’s offices in Paris.) And we’ll see how adoption spreads. The IAB can only make recommendations, after all, not enforce them.
With the digital industry as fragmented as it is, someone needs to step up and push for industry-wide data standardization. Gotta fight the Duopoly, but also, it’s helpful to have some standardization so publishers can have a better time assessing campaign performance. And maybe more data standardization will empower publishers to put their first-party data to work for them—and for the marketers who value it so much—and finally move fearlessly into data licensing!
Maybe. It’s still a pipe dream for a lot of publishers. The number of publishers who are becoming data brokers, and succeeding at it, is still small. And fragmentation hands a lot of control over that data to third parties. With audience modeling and targeting frequently happening at the DSP level, and data science happening at the DMP level, important pieces of the pipeline sit isolated from one another and don’t communicate as well as they should. And the number of publishers integrating directly with DSPs is extremely small.
All this fragmentation leads to publishers getting limited utility from their data. OpenData and the consortium are both trying to make the process more open, each in its own way, at different levels. But the publishers already doing the most advanced work with their own data are a few steps ahead of where the initiatives are. It’ll be interesting to see when and if their advanced methods become commonplace among publishers, and whether increased standards are the tide that lifts all ships.