Just before noon on Monday, 15 April 2013, I was in a typical meeting at Adobe in Utah. While discussing project statuses and roadmap priorities, a program manager came into the room and asked an engineering director to step out. His message: the Boston Marathon had just been bombed. Traffic to news sites would jump immediately, and those news companies would be relying on our analytics tools to make critical decisions about content and placement.

Boston Marathon traffic spikes

Within three hours, traffic levels for many sites quadrupled, with some at five times normal levels. During the reporting on the manhunt the following Friday, traffic reached seven times typical volume for some sites.

How did Adobe Analytics function under this load? The system did exactly what it should have. Data was available for reporting within minutes of being received. Reports returned quickly. Companies made layout and content decisions to ensure their customers received relevant and helpful information. In short, the system performed the way it normally does.

Current Data

This ability to smoothly handle extremely high volumes of traffic, both expected and unexpected, is one reason why Forrester ranked Adobe Analytics the top tool in the Web analytics market. Just how much traffic does Adobe handle? More than 4 trillion transactions are processed every quarter. Every minute, Adobe processes roughly 18 times the number of global credit card transactions.
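As a quick back-of-the-envelope check, the quarterly figure can be converted to a per-minute rate (assuming a ~90-day quarter; the 4 trillion number comes from the text, and the implied credit card rate simply divides it by 18):

```python
# Back-of-the-envelope check of the volume figures above.
# Assumes a 90-day quarter; 4 trillion transactions/quarter is from the text.
transactions_per_quarter = 4e12
minutes_per_quarter = 90 * 24 * 60  # 129,600 minutes

per_minute = transactions_per_quarter / minutes_per_quarter
print(f"~{per_minute / 1e6:.0f} million transactions per minute")  # ~31 million

# If that is ~18x global credit card volume, the implied card rate is:
implied_card_rate = per_minute / 18
print(f"implied card rate: ~{implied_card_rate / 1e6:.1f} million per minute")  # ~1.7 million
```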

Adobe Analytics vs. Credit Card transactions

Over 1,000 of our customers have websites that accumulate over 1 billion server calls per month, with some of them receiving tens of billions per month. No matter the volume, data is available for reporting within minutes, allowing companies to make both micro (minute-level) and macro (month-level) optimizations from a single reporting tool.
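To put those monthly volumes in perspective, here is a rough conversion to sustained per-second rates (assuming a 30-day month; real traffic is spiky, so peak rates run well above these averages):

```python
# Rough sustained rates implied by the monthly server-call volumes above.
# Assumes a 30-day month; actual traffic is bursty, so peaks are far higher.
seconds_per_month = 30 * 24 * 3600  # 2,592,000 seconds

for monthly_calls in (1e9, 10e9):  # 1 billion and 10 billion calls/month
    rate = monthly_calls / seconds_per_month
    print(f"{monthly_calls / 1e9:.0f}B calls/month ≈ {rate:,.0f} calls/sec sustained")
```

So even a "1 billion per month" site averages only a few hundred calls per second; the engineering challenge is the aggregate across thousands of such sites, plus the spikes.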

How do we do this? It's in our heritage. When Omniture released SiteCatalyst in the late 1990s, they found innovative ways to handle high-volume websites. From that foundation, the system has evolved and been rebuilt, always with a focus on scale. As an illustration of how we maintain such high reliability, consider our data collection system. In 11 data centers throughout the world, we maintain hundreds of high-performance servers. None of these servers is dedicated to a single company's traffic, so spikes are absorbed by the mammoth capacity. Additionally, we maintain enough servers that they run well below full capacity. At the volumes we handle, when one site sees a 10x increase in traffic, those servers don't even notice. It takes spikes across dozens or hundreds of sites to materially impact the global trend.
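The pooling argument above can be illustrated with a toy calculation (the numbers here are invented for illustration, not Adobe's actual fleet sizes): when many sites share one large collection pool that runs well below capacity, a 10x spike on a single site barely moves aggregate utilization.

```python
# Toy illustration of pooled capacity (all numbers are invented, not Adobe's):
# many sites share one collection pool, so one site's 10x spike is a rounding
# error in aggregate utilization.
num_sites = 1000
baseline_per_site = 1.0                              # arbitrary traffic units
pool_capacity = num_sites * baseline_per_site * 4    # pool runs at ~25% utilization

normal_load = num_sites * baseline_per_site
spiked_load = normal_load - baseline_per_site + 10 * baseline_per_site  # one site at 10x

print(f"normal utilization:        {normal_load / pool_capacity:.1%}")   # 25.0%
print(f"with one site spiking 10x: {spiked_load / pool_capacity:.1%}")   # 25.2%
```

Only correlated spikes across many sites at once, as on 15 April, move the aggregate enough to matter, which is exactly the scenario the headroom is sized for.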

World Map of Adobe Data Collection Sites

For an added layer of protection against processing latency, we strongly recommend providing advance notice of traffic spikes. This allows us to allocate hardware where it's needed. Submit any expected increases in traffic in the Admin Console to ensure optimal performance.

In conclusion, if you're worried about whether Adobe can handle your traffic volume, don't be. We handle traffic for the largest brands, sites, and applications on the Web. And we're ready to handle much more.