New Formats, Standards Straining Bounds Of Cloud Infrastructure

by , Op-Ed Contributor, September 28, 2016

Five years ago, an RTB request looked a lot different than it does today. It didn't have to screen for fraud or measure viewability. It didn't have to guard against arbitrage. It didn't have to support dynamic content optimization, native or video. There was no header bidding.

The advertising ecosystem has evolved rapidly. It has transformed from an opaque landscape of automated display executions into an increasingly transparent, policed ecosystem of dynamically rendered ads that are viewable and verified from the moment a bid is placed. The requirements of this new ecosystem have placed new demands on the underlying hardware infrastructure.

Dynamic ads, new standards in viewability measurement, and new fraud-prevention measures don't arrive in a vacuum. They take real, physical computing power that not all cloud-based systems (including Amazon Web Services) can deliver with the requisite speed.

Now more than ever, players throughout the ad-tech ecosystem are going to have to take a hard look at their hardware. They may not like what they find.

Why hardware makes a difference

Programmatic advertising is ultimately a physical process: a flow of data must be maintained across geographic distances. One way to speed up that flow is to obtain an autonomous system (AS) number, which lets an ad server route traffic over fiber directly to demand-side platforms and, eventually, to consumers. Colocating ad tech with the major exchanges can save an average of 13 milliseconds per request by short-circuiting network congestion.

That may not sound like much, but when it comes to ad serving, milliseconds count.
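
To put that saving in perspective, here is a back-of-the-envelope sketch in Python. The 100-millisecond bid timeout is an assumption based on commonly cited exchange configurations, not a figure from the article or any one platform; the 13-millisecond saving is the colocation figure above.

    # Back-of-the-envelope RTB time budget (illustrative assumptions).

    BID_TIMEOUT_MS = 100.0        # assumed: a commonly cited exchange bid-response timeout
    COLOCATION_SAVINGS_MS = 13.0  # average saving cited above for colocating with exchanges

    fraction_reclaimed = COLOCATION_SAVINGS_MS / BID_TIMEOUT_MS
    print(f"Colocation reclaims {fraction_reclaimed:.0%} of a {BID_TIMEOUT_MS:.0f} ms bid window")
    # -> Colocation reclaims 13% of a 100 ms bid window

Under those assumptions, colocation alone hands back roughly an eighth of the entire window a bidder has to respond.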

Using QDR InfiniBand, the computer networking standard favored by supercomputers, also shaves valuable time off an RTB request. InfiniBand is the interconnect behind NASA's supercomputers, and it delivers roughly one-twelfth the latency of a traditional Ethernet interconnect.
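
As a rough illustration of what a 12x interconnect advantage means across a chatty bid pipeline, consider the sketch below. The per-hop Ethernet latency and the hop count are hypothetical values chosen for illustration, not measurements; only the 12x ratio comes from the figure above.

    # Illustrative comparison of interconnect latency over a multi-hop bid path.
    # The Ethernet per-hop figure and hop count are assumptions, not benchmarks.

    ETHERNET_HOP_US = 50.0                     # assumed per-hop latency over Ethernet (microseconds)
    INFINIBAND_HOP_US = ETHERNET_HOP_US / 12   # roughly 12x lower, per the figure above
    HOPS = 8                                   # assumed internal hops to assemble one bid response

    for name, per_hop in [("Ethernet", ETHERNET_HOP_US), ("QDR InfiniBand", INFINIBAND_HOP_US)]:
        print(f"{name}: {HOPS * per_hop / 1000:.2f} ms across {HOPS} hops")

The point of the sketch is that interconnect latency compounds with every internal hop a bid request makes, so a per-hop advantage multiplies across the whole pipeline.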

Why this matters

Slower load times are bad for everyone. For publishers, pokey performance sends readers fleeing to comparatively speedy formats like Facebook Instant Articles. For marketers, an ad that holds consumers captive while the screen loads is a less effective ad. For consumers, it's simply a bad experience, and one that drives them to install ad blockers.

Unfortunately, load times are getting worse, not better. A recent study by the Media Rating Council, for instance, found that the average load time for a mobile ad is five seconds. By that time, about 25% of people have already jumped ship.
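
Translated into inventory, those numbers compound quickly. Here is a minimal sketch using the study's 25% abandonment figure; the daily request volume and CPM are assumed values for illustration only.

    # Rough cost of slow mobile ad loads, using the 25% abandonment figure above.
    # Daily impressions and CPM are assumed values for illustration only.

    DAILY_AD_REQUESTS = 10_000_000   # assumed publisher volume
    ABANDON_RATE = 0.25              # share of users gone before a 5-second load finishes
    CPM_USD = 2.00                   # assumed effective CPM

    lost_impressions = DAILY_AD_REQUESTS * ABANDON_RATE
    lost_revenue = lost_impressions / 1000 * CPM_USD
    print(f"~{lost_impressions:,.0f} impressions and ${lost_revenue:,.2f} lost per day")
    # -> ~2,500,000 impressions and $5,000.00 lost per day

Even at a modest assumed CPM, a quarter of impressions evaporating before the ad renders is real money left on the table every day.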

I'm not the first to notice this, of course. The Interactive Advertising Bureau (IAB) introduced its LEAN guidelines last year to tackle the issue. But the IAB's approach of limiting file size will only go so far. The real problem is the layers of verification and ad-tech baggage weighing down the average RTB request, and the solution lies on the back end: new hardware infrastructure.

Assessing the realpolitik of the ad-tech world, the layers of verification and arbitrage involved in an RTB request are unlikely to abate. If anything, they will multiply as marketers clamor for more transparency and new ad-tech firms stake their claims on the ecosystem.

As those efforts gather momentum and consumers grow increasingly impatient about sluggish load times, the best solution is to improve the physical piping involved in the transaction.

On that front, the major ad-tech players have a lot of catching up to do.
