Regardless of what industry you work in, AI is impossible to ignore.
It has infiltrated everything from art to the military, and now it’s becoming increasingly troublesome in the ad tech sector through malware, phishing, and scams.
At an AdMonsters Ops session on June 6th, “AI + Malvertising = ?,” attendees heard from Confiant’s Jerome Dangu, Chief Technology Officer and Co-Founder, and Louis-David Mangin, CEO and Co-Founder, about how AI is affecting ad tech and how we can stay aware of what is to come.
Confiant has been in the ad security business for 10 years, giving the company a great deal of experience in recognizing bad actors and helping publishers identify these risks as well. According to Confiant, ad tech has two dimensions of security risk: victim and vector.
Victim refers to specific types of fraud, namely bot fraud, attribution fraud, and arbitrage fraud. Vector refers to the ways bad actors attempt to infiltrate: malware, phishing, and scams. Of these, scams are the bread and butter of those attacking through advertising.
Why AI Matters In Ad Tech Today
Ad tech is a marketplace that reaches a wide range of people, and because of this, it can be used as a vector for cybercriminals to reach others and deliver attacks. Mangin says it’s important to know what our responsibility is when it comes to protecting users.
While ad tech is not the only attack vector, those who are looking to scam others out of money do not distinguish between different facets of the internet; they will go wherever they can make the most profit.
Ad tech is also growing very quickly, and while this is a good thing for the industry, it also means more opportunity for attacks. Mangin suggests it is important to consider how we control the possible infiltration and whether our current systems are up to the task as AI continues to permeate all parts of society.
Our industry also lacks transparency, which can be advantageous to buyers but creates a major blind spot when it comes to cybersecurity. Confiant has set up a website, buyers.json, to help create more transparency in the industry and to help limit malicious attacks.
There is already an established attacker base in the ad tech space, with at least 35 different groups that specialize in compromising ad tech systems. Confiant has also established a website that maps out these bad actors, matrix.confiant.com.
Cybercrime generates trillions of dollars, and defense against these attacks is privatized, meaning you have to pay a private company to help you if you are the victim of a cybercrime. Cybercriminals only need to succeed a fraction of the time to make their attempts worthwhile, and things will only get more challenging as AI helps create more effective attacks with less human effort.
The Future of AI
The world is currently buzzing about the unintended consequences of AI technology built by those with good intentions, but the ad tech industry should be concerned with bad actors whose intentions are malicious from the start. Ads are the best way to reach people today, leaving our industry open to a slew of attacks, particularly as AI technology improves.
AI has a control problem, as evidenced by malfunctions that have been in the news recently. For example, the Air Force allegedly ran a simulation in which an AI-controlled drone killed its operator, and, when programmers worked to reprogram the drone to prevent further casualties, it attempted to destroy the control tower instead. And it has been demonstrated that ChatGPT can lie to users.
“We’re fundamentally tinkering with intelligence here,” Mangin shares. We don’t quite understand the technology yet, which leads to complications. He notes that a large company recently worked with Confiant to build defenses against AI attacks, but could not create proper protections because it could not figure out how the AI had reached a particular conclusion.
Much of the underlying AI technology is open source, and major improvements arrive every week. Regulation is on the way, but while governments can enact rules that large companies will have to follow, those operating in small groups or as individuals will still be able to do what they want.
Deep Fakes And Scams
We’ve all seen AI-generated photos that look incredibly real, such as the Pope wearing a fantastic puffer jacket. The same technology that creates these photos can manipulate video or audio to sound like it is coming from an authentic source. These deepfake videos can build trust with the people who watch them and convince them to buy into a scam.
Another development on the way is routing scam calls to AI programs rather than to call centers staffed by real humans. This will increase the number of people scammers are able to attack because they will no longer have to physically staff call centers to complete the scam.
It’s crucial that we as an industry are on the lookout for what is on the horizon. Bad actors will find a way to optimize access to targets through AI, and they will monetize this access. Of course, AI can help us to complete tasks, but it can also hurt, so we must be vigilant about keeping user data safe.