Just before noon on Monday, 15 April 2013, I was in a typical meeting at Adobe in Utah. While discussing project statuses and roadmap priorities, a program manager came into the room and asked an engineering director to step out. His message: the Boston Marathon had just been bombed. Traffic to news sites would jump immediately, and those news companies would be relying on our analytics tools to make critical decisions about content and placement.

Boston Marathon traffic spikes

Within three hours, traffic for many sites had quadrupled, and some reached five times normal levels. During coverage of the manhunt the following Friday, traffic reached seven times typical volumes for some sites.

How did Adobe Analytics function under this load? The system did exactly what it should have. Data was available for reporting within minutes of being received. Reports returned quickly. Companies made layout and content decisions to ensure their customers received relevant and helpful information. In short, the system performed the way it normally does.

Current Data

This ability to smoothly handle extremely high volumes of traffic, both expected and unexpected, is one reason why Forrester ranked Adobe Analytics the leader in the Web analytics market. Just how much traffic does Adobe handle? More than 4 trillion transactions are processed every quarter. Every minute, Adobe processes roughly 18 times the number of global credit card transactions.

Adobe Analytics vs. Credit Card transactions

Over 1,000 of our customers have websites that accumulate over 1 billion server calls per month, with some of them receiving tens of billions per month. No matter the volume, data is available for reporting within minutes, allowing companies to make both micro (minute-level) and macro (month-level) optimizations from a single reporting tool.

How do we do this? It’s in our heritage. When Omniture released SiteCatalyst in the late 1990s, they found innovative ways to handle high-volume websites. From that foundation, the system has evolved and been rebuilt, always with a focus on scale. As an illustration of how we maintain such high reliability, consider our data collection system. In 11 data centers throughout the world, we maintain hundreds of high-performance servers. None of these servers are dedicated to a single company’s traffic, so spikes are absorbed by the mammoth capacity. Additionally, we maintain enough servers that they’re running well below full capacity. At the volumes we handle, when one site sees a 10x increase in traffic, those servers don’t even notice. It takes spikes across dozens or hundreds of sites for the global trend to be materially impacted.
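To see why a single site's spike barely registers on a shared pool, consider a back-of-the-envelope sketch. The numbers below are invented for illustration (they are not actual Adobe figures): a pool serving many sites of similar size absorbs one site's 10x surge with only a small bump in aggregate load.

```python
# Hypothetical illustration: 500 sites of similar size sharing one
# collection pool. All numbers are invented for the sketch.
NUM_SITES = 500
CALLS_PER_SITE = 5_000  # baseline server calls per second, per site

total_before = NUM_SITES * CALLS_PER_SITE

# One site spikes to 10x its normal traffic; the other 499 are unchanged,
# so the pool gains 9x that one site's baseline.
total_after = total_before + 9 * CALLS_PER_SITE

increase = (total_after - total_before) / total_before
print(f"Aggregate load increase from one 10x spike: {increase:.1%}")  # 1.8%
```

With servers kept well below full capacity, a sub-2% bump in aggregate load is invisible; only correlated spikes across dozens or hundreds of sites move the global trend.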

World Map of Adobe Data Collection Sites

For an added layer of protection against processing latency, we strongly recommend providing advance notice of expected traffic spikes. This allows us to allocate hardware where it's needed. Submit any expected increases in traffic in the Admin Console to ensure optimal performance.

In conclusion, if you’re worried about whether Adobe can handle your traffic volume, don’t. We handle traffic for the largest brands, sites, and applications on the Web. And we’re ready to handle much more.