
Some Background

I joined Adobe to work on Photoshop in 1996. Since that time, I’ve been working on Photoshop performance: making sure we test it correctly, finding the problem spots in the code, tuning the code for optimal performance, researching new techniques, testing new compilers, understanding new processors, teaching my coworkers how to test and tune for performance, etc.

Over the years, I’ve collected a lot of code demonstrating particular problems we’ve encountered and the solutions we’ve found. But that code has only been shared inside Adobe, or with compiler vendors with whom Adobe has a Non-Disclosure Agreement (NDA).

Several months back, I was writing up some notes on the concept of abstraction in computer science and revisited Alex Stepanov’s abstraction penalty benchmark. Alex wrote this benchmark at a time (circa 1994) when C++ compilers weren’t doing a great job of optimizing even basic C++ abstractions. Within a few years of the benchmark’s release, most compiler vendors had identified and fixed the performance problems it exposed.
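
To make that concrete, here is a minimal sketch (my simplified illustration, not Alex’s actual benchmark source; the names are mine) of the kind of comparison the abstraction penalty benchmark makes: the same summation written over raw doubles and over a trivial wrapper class. The real benchmark times many repetitions of each version and reports the ratio; with a good optimizer the ratio should be 1.0, and anything above that is the abstraction penalty.

    // Sketch of an abstraction penalty comparison: identical loops,
    // one over raw doubles, one over a trivial wrapper class.
    // A good optimizer should compile both to the same code.
    #include <cstdio>

    struct Double {                 // trivial abstraction over a double
        double value;
        Double(double d = 0.0) : value(d) {}
        Double operator+(Double rhs) const { return Double(value + rhs.value); }
    };

    const int kSize = 2000;

    double sum_plain(const double* first, const double* last) {
        double result = 0.0;
        while (first != last) result = result + *first++;
        return result;
    }

    Double sum_wrapped(const Double* first, const Double* last) {
        Double result(0.0);
        while (first != last) result = result + *first++;
        return result;
    }

    int main() {
        double a[kSize];
        Double b[kSize];
        for (int i = 0; i < kSize; ++i) { a[i] = 1.0; b[i] = Double(1.0); }
        // The actual benchmark times many repetitions of each loop and
        // compares them; here we just verify the two versions agree.
        printf("%f %f\n", sum_plain(a, a + kSize), sum_wrapped(b, b + kSize).value);
        return 0;
    }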

I wondered whether the compiler writers had really done the job well or had taken shortcuts, and what the penalties would be for using more complex C++ abstractions. So I asked Alex (who happens to work just down the hall from me) whether anyone had updated his benchmark or was doing active research on it and related penalties. Alex said, “No, I don’t know of anyone working on that. Why don’t you take it over?”

Alex then proceeded to convince me that we really need better benchmarks: not ones that try to sum up the whole world in one number, but tests that probe specific areas and patterns. They should answer questions such as “What is the penalty for using X?” or “Does my compiler perform optimization Y?” Alex argued that such a set of benchmarks, if released as open source (without all the NDA hassles), would benefit all of Adobe’s applications by improving the compilers, and would benefit all C++ users the same way. He said it was almost the same thing I had been doing with our internal code, but with wider exposure and, hopefully, a few more people contributing. Of course, then we had to convince my manager; but the idea was good (and backed by several senior researchers who had joined the discussion), so she agreed that I could spend part of my time creating benchmarks.

Now I’ve got a blank slate, a lot of historical code that can’t go out as-is, a long list of complaints about compilers, a longer list of suspicions about compilers, and a lot of claims and complaints I’ve heard from other people about C++ and compilers (much of which I know not to be true).

Where should I start?
