While the world frets about the possibility of a recession, one positively flourishing sector is the bot economy. And it’s not just growing in size; the sophistication of bot networks is increasing by leaps and bounds. As a result, the bot economy is now a favored tool for sophisticated organized criminal activity.
Recently, HUMAN Security made headlines when it reported it had successfully taken down a massive bot network known as VASTFLUX. At its height, VASTFLUX stole potentially tens of millions of dollars in revenue by launching fraudulent supply-side platforms (SSPs) to host auctions for impressions that didn’t exist, and by using ad seats on demand-side platforms (DSPs) to purchase ads that contained their zero-day payload.
That payload triggered unexpected new sideloaded auctions monetized by their fraudulent SSPs. It was a dazzlingly elaborate scheme that required real seats on DSPs, technical expertise, and supporting infrastructure that cost millions of dollars. This, to HUMAN, is a perfect example of the bot economy.
To learn more about today’s bot networks and how the industry can work together to limit their damage, AdMonsters spoke with Zach Edwards, Senior Manager of Threat Insights for HUMAN.
Zach Edwards: We see a huge spike in account takeovers. They’ve increased by 98% in the last six months. Once deployed, bots break into password-protected accounts and cause all sorts of grief for the victims.
AdMonsters: In a previous email, HUMAN said that the bot economy is flourishing with SaaS delivery and customer support. Does that mean anyone can buy a bot and use it to start stealing ad revenue from publishers and advertisers?
ZE: Not exactly. That scenario oversimplifies things. The amount of money, technical skill, and infrastructure required means that bot networks on par with VASTFLUX are out of reach for the average college student looking to make quick money.
It’s great for the industry that the barriers are high. But at the same time, the bad actors targeting our ad systems are still out there, not in jail.
AdMonsters: Then what do you mean by bot SaaS models?
ZE: It’s software as a service for malicious ends, meaning that bots are sold and used for malicious activities. In this ecosystem, we see overlapping threat actors: people who develop a threat tactic, put it on the back burner for a few years, then bring it out again.
It’s important to think about this particular service ecosystem as a big affiliate structure, which makes it much more sophisticated than buying a sneaker bot, which anyone can do on the web.
As you said in the intro, these bot networks require capital, infrastructure, technical expertise, and huge operations to create accounts. I believe that people in the industry will really benefit from an understanding of the operational side of a bot network.
AdMonsters: Okay, how do bot operations work?
ZE: There are multiple structures. Often, a bad actor will sign up for a DSP and submit fake or real corporate credentials tied back to a know-your-customer (KYC) process. But they don’t accurately disclose where they do business or their location. This is where the system breaks down.
The fraudsters will lie low, purchasing inventory and displaying ads without malicious code until they build up their credentials. Once they’re flying under the radar a bit, they begin to deploy the malicious code. They also have sophisticated detection capabilities. For instance, they can detect when an ad is being screened for bots and display an innocuous ad in such scenarios to avoid getting caught. This is the classic fraudulent DSP.
All malicious bot networks need a cashout mechanism to divert legitimate actors’ budgets into their own pockets. In the case of VASTFLUX — which was discovered by my colleagues, HUMAN Threat Researcher Vikas Parthasarathy and Data Scientist Marion Habiby — the malicious ads triggered additional invisible auctions. In a sense, the fraudsters cashed out by acting as fraudulent SSPs and selling millions of dollars’ worth of fake inventory.
AdMonsters: So the bad guys buy one legitimate impression, then sell that same impression to multiple unsuspecting buyers?
ZE: Exactly. To the buyers, it looks like they purchased a legitimate impression, so they don’t put a stop to their buys.
It’s important to note that VASTFLUX targeted real users, with some portion of bots involved. But that’s just one investigation. We’re looking into dozens of others where the ratio is flipped: schemes that rely more heavily on bots than on real users.
The latter schemes rely on bots and fake traffic, which they can get from criminal organizations that establish affiliate networks spanning thousands of websites. The crime organization’s customers can purchase traffic to specific websites from different countries, and referral traffic from specific domains and social networks. This process allows the bad actors to customize what the fake traffic looks like and which bots they rent. The more enterprising ones can turn around and sell that customized bot network to their customers.
AdMonsters: What can the industry do to recognize when they’re buying fraud?
ZE: Buyers need to recognize when they’re buying too many impressions from a specific app. If you buy 30 million impressions a month on a specific app, you definitely want to be in contact with that app publisher. Reaching out to that publisher and telling them which exchanges you’re purchasing their inventory on creates a feedback loop that can flag when things aren’t lining up. The publisher may tell you they don’t sell on those exchanges, or that their apps don’t have enough users to generate 30 million impressions each month. You may even land a great direct-buy deal with a discount on your impressions just by reaching out.
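As a rough illustration of the feedback loop Edwards describes, the sanity check could be sketched in a few lines of Python. Everything here is hypothetical — the figures, the exchange names, and the `plausible_monthly_impressions` estimate are illustrative assumptions, not anything HUMAN has published:

```python
# Hypothetical sanity check for a buyer's monthly in-app impression volume.
# All figures and publisher-confirmed data below are illustrative only.

def plausible_monthly_impressions(daily_active_users: int,
                                  avg_impressions_per_user_per_day: float,
                                  days: int = 30) -> float:
    """Rough upper bound on how many impressions an app can really serve."""
    return daily_active_users * avg_impressions_per_user_per_day * days

def flag_suspicious_buys(purchases: dict[str, int],
                         confirmed_exchanges: set[str],
                         capacity: float) -> list[str]:
    """Warn when buys don't line up with what the publisher says
    about their own inventory and authorized sales channels."""
    warnings = []
    for exchange, impressions in purchases.items():
        if exchange not in confirmed_exchanges:
            warnings.append(f"{exchange}: publisher says they don't sell here")
    total = sum(purchases.values())
    if total > capacity:
        warnings.append(f"bought {total:,} impressions, but the app can only "
                        f"serve roughly {capacity:,.0f} per month")
    return warnings

# Illustrative scenario: 30M impressions bought, but the publisher's user
# base supports about half that, and one exchange isn't an authorized seller.
purchases = {"ExchangeA": 18_000_000, "ExchangeB": 12_000_000}
confirmed = {"ExchangeA"}
capacity = plausible_monthly_impressions(100_000, 5.0)  # ~15M per month

for w in flag_suspicious_buys(purchases, confirmed, capacity):
    print("WARNING:", w)
```

In practice the "publisher-confirmed" inputs are exactly what the conversation Edwards recommends would surface — which exchanges the app actually sells on, and roughly how many users it has.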
I’m not suggesting that such conversations alone can uncover schemes like VASTFLUX. Still, they are an excellent way for buyers and sellers to assess whether fraud exists when a marketer is buying vast amounts of inventory. And in any case, those dialogs could lead to partnerships or discounts, so they never hurt.