
Digital ad fraud – and how to stop bots from clicking your ads

Photo credit: Surian Soosay

Digital ad fraud is rampant. Bots are clicking ads that no human has even seen. Can they be stopped?

Integral Ad Science thinks so. Niall Hogan, the firm’s UK MD, explains how.

For those of us in the adtech space, there have been some really shocking headlines in the last few months. You can imagine the sobering impact of ‘Mercedes online ads viewed more by fraudster robots than humans’, which ran in the FT in May.

Digital ad fraud remains a real challenge for advertising, and we have to move quickly to deal with it, and minimize its impact on campaigns.

A priority is how we measure and give credit for performance. Currently, the way we do this is fundamentally flawed. Why? Because the system is built on correlation-based last touch models that actively incentivize the fraud that we are trying so hard to stop.


There are two broad areas of digital ad fraud we focus on: CPM fraud and bot fraud.

The first, CPM fraud, involves unscrupulous publishers knowingly trying to defraud an advertiser. How? By generating a falsely high number of ad impressions.

This type of fraud includes stuffing 1×1 pixels all over a page and serving a series of ads into those 1×1 pixels. Impression stuffing layers seven, eight, nine or ten impressions on top of each other in an ad slot so only the top ad is visible.

In the video space, we see similar behavior: video players stuffed into 1×1 iframes, or videos looping one right after another without ever being shown to users.
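To make the pattern concrete, here is a minimal Python sketch, not Integral Ad Science's actual detection logic, of how impressions served into 1×1 containers or stacked many-deep into a single slot might be flagged. The field names and the stacking threshold are illustrative assumptions.

```python
# Illustrative only: hypothetical impression records, not IAS's real pipeline.
from collections import Counter

STACKING_THRESHOLD = 3  # assumption: more than a few ads in one slot is suspicious


def flag_cpm_fraud(impressions):
    """Flag impressions that look like pixel stuffing or ad stacking.

    Each impression is a dict like:
    {"id": "imp-1", "page": "example.com/article",
     "slot": "top-banner", "width": 300, "height": 250}
    """
    flagged = set()

    # Pixel stuffing: ads rendered into 1x1 (or smaller) containers.
    for imp in impressions:
        if imp["width"] <= 1 or imp["height"] <= 1:
            flagged.add(imp["id"])

    # Ad stacking: many impressions layered into the same slot on one page,
    # where only the top ad can actually be seen.
    per_slot = Counter((imp["page"], imp["slot"]) for imp in impressions)
    for imp in impressions:
        if per_slot[(imp["page"], imp["slot"])] > STACKING_THRESHOLD:
            flagged.add(imp["id"])

    return flagged
```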

The second problem, bot fraud, occurs when a machine has been taken over by a bot, and the bot instructs that machine to load ads behind the scenes. There are botnets out there generating millions of ad impressions every day that no human will ever see.

To combat this, we look at behavioral patterns and activity on infected machines; we can differentiate whether the signals come from a bot or a human, and we can block ads from being served to these machines.
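As a rough illustration of behavior-based filtering, the sketch below scores an ad request on a few signals a bot tends to produce and blocks serving above a threshold. The signal names and weights are invented for the example; real detection relies on far richer data.

```python
# Illustrative heuristic only; the signals and weights are assumptions.
def looks_like_bot(request):
    """Score an ad request on crude bot-like signals."""
    score = 0
    if request.get("mouse_events", 0) == 0:           # no human-like interaction
        score += 1
    if "headless" in request.get("user_agent", "").lower():
        score += 2
    if request.get("time_on_page_ms", 0) < 500:       # landed and requested instantly
        score += 1
    if request.get("requests_per_minute", 0) > 100:   # inhuman request rate
        score += 2
    return score >= 3


def serve_ad(request, ad):
    """Block the impression rather than serve it to a suspected bot."""
    if looks_like_bot(request):
        return None
    return ad
```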

Given the scale of digital ad fraud in the industry, manual processes can't find the cheats alone. The fraudsters are too smart and they move too quickly, so you need to leverage tools to help you identify and rid your exchange, network or campaign of fraud.

As well as blocking digital ad fraud when we see it, we need to disincentivize those who commit fraud in the first place.

Currently, the way we measure performance online is ineffective. The industry uses correlation-based models: i.e. was this publisher's ad the last touch before the conversion? If so, that publisher gets the credit.

But just because I saw the ad last doesn't mean it caused my conversion. That's correlation, not cause. And if the system is based on last touch, fraudsters have an easy way to game it.
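For readers who want the mechanics spelled out, here is a minimal sketch of last-touch attribution as described above: whichever publisher served the most recent touch before the conversion takes all the credit, whether or not that touch caused anything, and whether or not a human ever saw it. The field names are illustrative.

```python
def last_touch_credit(touchpoints, conversion_time):
    """Return the publisher credited under last-touch attribution.

    touchpoints: list of {"publisher": str, "timestamp": float} events.
    Full credit goes to the latest touch before the conversion, even if it
    was a fraudulent, bot-served impression.
    """
    prior = [t for t in touchpoints if t["timestamp"] <= conversion_time]
    if not prior:
        return None
    return max(prior, key=lambda t: t["timestamp"])["publisher"]
```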

That’s why we need to move away from correlation and towards cause.

One of the things we work on with our buy-side clients is how to derive causality for these campaigns. Take the example of three publishers on a campaign. Publisher one serves 100,000 impressions – it’s a direct premium publisher with almost no fraud on the campaign. Publisher two serves 500,000 impressions and half are fraudulent. Publisher three serves a million impressions and three quarters are fraudulent.

If you're using last-touch or last-click attribution, chances are publisher three will wind up with some type of correlation-based conversion, simply because it is serving so many more ads, and many of those last touches will come from the fraudulent impressions it's serving.

But if you were calculating attribution based on causality as opposed to correlation, any impressions served by publishers two and three that were fraudulent would be automatically eliminated from the possibility of converting.

So ultimately publisher three would only have 250,000 impressions that could potentially count toward attribution, versus a million.
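A back-of-the-envelope version of that arithmetic, using the fraud rates quoted above (none, half, and three quarters), shows how filtering fraud before attribution shrinks the pool of impressions each publisher can convert from:

```python
# Figures from the example above; fraud rates are as quoted in the text.
publishers = {
    "publisher_1": {"impressions": 100_000, "fraud_rate": 0.0},
    "publisher_2": {"impressions": 500_000, "fraud_rate": 0.5},
    "publisher_3": {"impressions": 1_000_000, "fraud_rate": 0.75},
}

for name, stats in publishers.items():
    eligible = int(stats["impressions"] * (1 - stats["fraud_rate"]))
    print(f"{name}: {eligible:,} impressions eligible for attribution")

# publisher_1: 100,000 impressions eligible for attribution
# publisher_2: 250,000 impressions eligible for attribution
# publisher_3: 250,000 impressions eligible for attribution
```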

Recently, we saw a DSP client of ours optimize around what they thought was a viewable impression. In fact, it was a viewable impression being served by a bot, which the DSP counted as valid. And the vendor they were using – not us – was measuring it as in-view and optimizing around it.

However, it was digital ad fraud. The performance of the campaign never improved, but the DSP thought they were doing a good job optimizing around viewability.

They were really optimizing around fraud.

It's clear that if you just look at correlation-based metrics, you'll never derive true performance for a campaign, and we will never remove ad fraud from our digital buys.