The issue of pervasive web traffic fraud has finally made it to the cover of Business Week, a couple of years after the story began re-emerging in the advertising and technology trade press.
Late that year he and a half-dozen or so colleagues gathered in a New York conference room for a presentation on the performance of the online ads. They were stunned. Digital’s return on investment was around 2 to 1, a $2 increase in revenue for every $1 of ad spending, compared with at least 6 to 1 for TV. The most startling finding: Only 20 percent of the campaign’s “ad impressions”—ads that appear on a computer or smartphone screen—were even seen by actual people.
“The room basically stopped,” Amram recalls. The team was concerned about their jobs; someone asked, “Can they do that? Is it legal?” But mostly it was disbelief and outrage. “It was like we’d been throwing our money to the mob,” Amram says. “As an advertiser we were paying for eyeballs and thought that we were buying views. But in the digital world, you’re just paying for the ad to be served, and there’s no guarantee who will see it, or whether a human will see it at all.”
…
Increasingly, digital ad viewers aren’t human. A study done last year in conjunction with the Association of National Advertisers embedded billions of digital ads with code designed to determine who or what was seeing them. Eleven percent of display ads and almost a quarter of video ads were “viewed” by software, not people. According to the ANA study, which was conducted by the security firm White Ops and is titled The Bot Baseline: Fraud In Digital Advertising, fake traffic will cost advertisers $6.3 billion this year.
…
There’s also the possibility that the multitudes of smaller ad tech players will get serious about sanitizing their traffic. Walter Knapp, CEO of Sovrn Holdings, a programmatic exchange, says he was as alarmed as anyone at the rise of ad fraud. He decided it was a matter of survival. “There are 2,000 ad tech companies, and there is maybe room for 20,” he says. “I looked around and said, ‘This is bulls—.’ ”
About 18 months ago, he set to figuring out how much of his inventory—ad spaces for sale—was fake. The answer mortified him: “Two-thirds was either fraud or suspicious,” he says. He decided to remove all of it. “That’s $30 million in revenue, which is not insignificant.” Sovrn’s business eventually returned to, and then surpassed, where it was with the bad inventory. Knapp says his company had a scary few months, though, and he keeps part of a molar on his desk as a memento. “I was clenching it so hard, I cracked it in half,” he says.
This is a link to one of the white papers cited in the article.
There are both technological and economic factors behind this waste. On the technological side, the entire notion of targeting individuals with browser-based advertising assumes that when something that looks like a browser renders some JavaScript — or at least creates the appearance of rendering that JavaScript — a human has viewed the ad. In many cases, that process can be spoofed at scale.
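To make that concrete, here is a minimal sketch — the HTML and pixel URLs are invented for illustration — showing that, from a server's perspective, an "impression" often amounts to nothing more than an HTTP request to a tracking pixel, something a script can issue without any human, or any real browser, involved:

```python
import re

# A toy ad tag of the kind a publisher page might embed.
# The domain and paths are invented for illustration.
AD_TAG_HTML = """
<div class="ad-slot">
  <img src="https://ads.example.com/impression?cid=1234" width="1" height="1">
  <script src="https://ads.example.com/render.js"></script>
</div>
"""

def extract_impression_pixels(html: str) -> list[str]:
    """Find the tracking-pixel URLs a browser would request on render."""
    return re.findall(r'<img src="([^"]+)"', html)

# A bot doesn't need to render anything: issuing a GET to each pixel URL
# is enough to register as a "view" on an unsophisticated network.
pixels = extract_impression_pixels(AD_TAG_HTML)
print(pixels)
```

Run at scale from a botnet with rotating IP addresses and plausible user-agent strings, requests like these are what inflate the "viewed by software" numbers in the ANA study.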
This differs from the procedure in print media in a critical way. Each subscriber (apart from the relatively small number of people who buy publications from the newsstand) is a validated subscriber: they verify their identity by sending money to the publisher. Professional publishers then pay a third-party auditing firm to verify that their subscriber lists are accurate. This tells advertisers that, although they may not know how many people actually see the ads, they can at least know that the publisher's lists are accurate. Subscribers will also presumably cancel their subscriptions or complain if they are unhappy with either the editorial or advertising content of the publication.
With the web, publishers have no such pressure from subscribers. Their chief pressure is increasing un-audited or lightly audited traffic numbers as measured through the proxy of the web browser. Users have a great deal of control over what runs in their browsers, which is why ad blockers are so popular.
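That user-side control is mechanically simple. A content blocker is, at its core, a list of URL patterns checked against every outgoing request — a toy sketch, with invented patterns standing in for the large curated lists (such as EasyList) and richer rule syntax that real blockers use:

```python
from fnmatch import fnmatch

# Toy content blocker: block any request whose URL matches a pattern.
# Patterns are illustrative, not taken from any real filter list.
BLOCKLIST = ("*ads.example.com/*", "*/tracker.js", "*doubleclick.net/*")

def should_block(url: str) -> bool:
    return any(fnmatch(url, pattern) for pattern in BLOCKLIST)

print(should_block("https://ads.example.com/impression?cid=1234"))  # True
print(should_block("https://blog.example.org/article.html"))        # False
```

Because the browser runs on the user's machine, the publisher has no way to force these requests through — which is why blocking works, and why publishers resort to nagging banners instead.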
Publishers do have some incentive to control the quality of the ads that show up — there's an entire cottage profession dedicated to manually and programmatically filtering ads that come in through a network — but the quality of the traffic is policed mainly by the ad network itself, through some combination of manually blacklisting publishers and automatically filtering suspicious traffic.
This overhead slows down page loads at least somewhat, because ad networks tend not to trust publishers. Validating that visitors are likely to be real people is not cost-free: it consumes processing power, and it locks the network into an arms race with developers eager to spoof the validation mechanisms.
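The validation side of that arms race typically comes down to heuristic scoring of behavioral and fingerprint signals. A hedged sketch — the signals, weights, and threshold below are invented for illustration, and real systems combine far more of them:

```python
# Toy traffic-quality scorer: each suspicious signal adds to a score.
# Weights and thresholds are illustrative, not drawn from any real network.

HEADLESS_MARKERS = ("headlesschrome", "phantomjs", "python-requests")

def suspicion_score(user_agent: str, seconds_on_page: float,
                    mouse_events: int) -> int:
    score = 0
    ua = user_agent.lower()
    if any(marker in ua for marker in HEADLESS_MARKERS):
        score += 50  # self-identified automation tooling
    if seconds_on_page < 0.5:
        score += 30  # left faster than a human could read anything
    if mouse_events == 0:
        score += 20  # no interaction with the page at all
    return score

def looks_like_bot(user_agent: str, seconds_on_page: float,
                   mouse_events: int, threshold: int = 50) -> bool:
    return suspicion_score(user_agent, seconds_on_page, mouse_events) >= threshold

print(looks_like_bot("python-requests/2.31", 0.1, 0))             # True
print(looks_like_bot("Mozilla/5.0 (Windows NT 10.0)", 42.0, 17))  # False
```

Every one of these checks costs computation on each request, and every one can be gamed — spoofed user agents, randomized dwell times, synthetic mouse events — which is exactly the arms-race dynamic described above.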
The other difficulty is that, owing to weak security on personal computers, plenty of legitimate users own machines that are doing part-time work for a botnet. Anyone technically proficient who has had to help a ‘regular’ person clean up their personal computer knows that this is a pervasive problem.
Part of this problem derives from the ideology that information on the world wide web wants to be ‘free.’ What we see is that people who leave their web servers wide-open don’t have much of an incentive to manage what software runs on pages being served from their machines in a way that best serves the interests of guests connecting to those servers.
So, rather than treating each guest to the server as either a customer or potential customer, free websites will tend to do what they can to spy on their guests on behalf of third parties who will attempt to leverage this information to persuade them to buy things or take certain actions. ‘Free’ media websites tend to abuse their visitors the most because they have little incentive to align themselves with the long term interests of their guests. The only incentive is building a base of repeat visitors who match the demographic target profile that advertisers are looking for — but those profiles can be spoofed, and often are spoofed, because they’re not financially validated or auditable in the conventional way.
It’s much more efficient just to have advertisers work from profiles of subscribers who willingly give up the relatively basic information that’s typically demanded: age, income level, marital status, homeowner status, zip code, and some other particulars dependent on a given market. What’s not needed is a massive spy’s dossier on every person that connects to a server to download a web page. The proof of this is in the price that advertisers are willing to pay for validated audiences in print against semi-validated audiences on the internet. Person for person, print tends to pay many multiples of what gets paid on the internet — and the same goes for cable television as compared to digital television. Prices condense a lot of information about markets.
The quality of the information being served by the free companies also must decline over time, because validating that information is costly — too costly for the profit margins of the media companies to bear. Much like a sugar-snacks conglomerate, the company has an incentive to make the product as stimulating and addictive as possible while minimizing the nutritional content, up to the point to which consuming the product damages the health of the customers.
On the macroeconomic side of things, we have a monetary environment that supports these kinds of business models — which also emerged and collapsed in the previous ‘internet bubble,’ itself a phenomenon of money and credit more than of ‘irrational exuberance’ — propping up far more of these middleman companies than would otherwise survive. Artificially easy credit sustains companies that would otherwise fail, because it’s easier for them to raise funds or borrow money than it would be under more natural conditions. The result is that companies that want to compete on legitimate grounds wind up unable to compete without themselves participating in cannibalistic, parasitic market practices.
On the publishing side, the publishers can’t compete unless they buy traffic from dubious sources and use whichever ad networks provide the highest bids. The ad networks themselves may or may not be able to validate the quality of the traffic they’re serving ads against, because they’re participating in exchanges which are difficult to police. And naturally, government regulators are mostly oblivious to the issue, lacking the means, technical expertise, or incentive to police those global markets.
The takeaway will be, I expect, that there ain’t no such thing as a free lunch, and the internet is not a special place where great things can be had entirely for free indefinitely. Web sites will increasingly resemble storefronts because maintaining them as ‘commons’ has become increasingly costly owing to this arms race of parasitism.
What that also means is that a large portion of the capital structure built up in the latest part of the boom cycle is proving to be uneconomical and will have to be liquidated or reorganized.
This is a good roundup of the numbers.
Consumers are increasingly opting out of running ads on their browsers and blocking trackers because they dislike the business practices followed by free website operators. This will run them out of business, which is a good thing.
Yakimi says
>Consumers are increasingly opting out of running ads on their browsers and blocking trackers because they dislike the business practices followed by free website operators. This will run them out of business, which is a good thing.
Is it good for neoreaction?
henrydampier says
I don’t know.
With the thoughts you'd be thinkin says
Not sure how OT this is, but did you hear about how google search rankings could be deciding elections?
http://www.politico.com/magazine/story/2015/08/how-google-could-rig-the-2016-election-121548
henrydampier says
They could, they do, and people on all sides are also paid to manipulate the rankings to the best of their abilities.
It’s actually Google-legal to manipulate rankings so long as you follow their ever-changing rules in the matter and don’t call it manipulating.
thebillyc says
numbers fall into the “damned lies and statistics” category: the bits and bytes age brings statistics to matrix-level virtual reality. rank and file peasants such as myself are drawn to more mundane internet commentary involving what widget sells the cheapest, or will last the longest. the promise of the “free internet” has (unfortunately?) fallen into the clutches (again) of pigs who will never have enough to eat. they follow us around cyberspace snuffling after us wherever we go, ready to eat us if we trip in the pen. the blogs involving how to repair my japanese car or improve my compost seem to the only refuge from their constant rooting after my money. ad blockers now get warning signs from the sites who warn I “may not experience the full totality of their snake oil”. the term “consumer” has to be the most offensive thing madison avenue ever came up with; the vision of open mouthed zombies into which they pour an endless stream of shit- paid for with endless debt slavery.
Max says
>tfw you can’t tell if “link to one of the white papers” is a joke or not
henrydampier says
Whoops. This is fixed.
Ezra Pound's Ghost says
I think one thing we can definitely infer from the history of the 20th Century is that “advertising” is integrally related to public administration and national security. Therefore, “advertising” should be regulated somehow, no?
henrydampier says
The FTC regulates advertising: https://www.ftc.gov/about-ftc/bureaus-offices/bureau-consumer-protection/our-divisions/division-advertising-practices
Ezra Pound's Ghost says
I was thinking more of substantial regulation as opposed to formal regulation – the first being designed to serve the common weal, the latter whichever private interest has institutionally purchased the ear of the FTC. For example, the protection of Pharmaceutical and Agribusiness sectors seems to dominate the concerns of the FTC as regards advertising regulation. In short, the regulations are not meant to serve national or popular interests but rather private, parochial ones.
henrydampier says
Some networks are better-regulated than others, and in different segments.
Google’s AdWords network is harder to spoof on Google.com itself, but easier to spoof on the search partner network, which is why the first piece of advice most search advertisers give to clients is to disable advertising on search partners.
On the flip side, plenty of older ad networks run on old technology and are not well-policed.
This sometimes defies expectations, and even trustworthy networks have major lapses (AppNexus — http://adexchanger.com/platforms/appnexus-has-a-day-of-reckoning-on-the-ad-fraud-issue/ — and AdRoll — http://www.thesempost.com/adroll-retargeting-bot-attack-behind-ie7-traffic-surges/ — being two recent examples).
henrydampier says
^– So what I’m trying to say is that the networks that try to compete with the others on quality/security rather than pure price do talk about their better internal policing.
The publishers don’t necessarily care about policing if they value revenue more than they value delivering results to advertisers. Most ‘ad ops’ workers are paid salary plus commission for maximizing revenue, and their boss doesn’t care whether the traffic is legitimate so long as it’s profitable.
AntiDem says
Somewhat related: TechCrunch for some reason allows Arthur “Mindkill” Chu to use its website to call for the government to allow soft censorship of the internet by the SJW lynch mob via abuse of the legal system, forcing social media and content providers to self-censor rather than face endless nuisance lawsuits.
And he has the nerve to use an image of the Berlin Wall coming down as a header for this.
http://techcrunch.com/2015/09/29/mr-obama-tear-down-this-liability-shield/
ladderff says
Yes. Urbit is relevant here.
henrydampier says
Obviously so.