Do You Trust Me?

You probably don’t know me. Would you give me access to your machine? Let’s start with all the files in your user account. Can I have a peek at your records, photos, and e-mail? If I ask, will you grant me access to your whole machine? Can I install just a little something in your OS?

Like I said, you probably don’t know me but the odds are pretty good that code I’ve written is on your disk right now. The odds are good that it executes reasonably often and it has access to pretty much everything in your account – possibly everything on your machine. It doesn’t matter if you are running Mac, Windows, or Linux – my code is likely there. The code written by some of my friends is almost certainly executing on your machine as we speak… while you read this… it’s executing away and it has access to all your data.

A little scary? Lucky for you I’m not part of some gang of thieves. My friends and I are professional software developers – we write code that ends up in high profile products. There are at least a couple thousand people who can make a claim as bold as the one I made above – I know at least a couple hundred of them directly.

The security of your documents is based on a fairly weak system of trust. You trust the companies you do business with – you trust Microsoft, Apple, and Adobe because you know they want your money and if they are careless with your documents they won’t get it (at least in theory). Working at Adobe I can say this company has a strong ethical culture and your trust is very important to us. The company trusts its developers. If they do something malicious (or possibly even careless) they might lose their job and their career. Working as a developer pays fairly well. If you are like me, you have a basic trust that people are generally good. Most security exploits are just done out of curiosity or to prove that one can exploit the system. It’s fairly rare that an exploit is to profit and even more rare that it is intended to do harm.

You might think the company has some elaborate security scheme to keep malicious code from getting into the product – not so. It might have some tools and processes in place to try to catch developer mistakes but nothing that would stop a clever developer from inserting malicious code (and clever describes most software developers I know).

It’s a good thing that people are basically good. If you look on your disk right now I’m pretty sure you’ll find lots of software that you just downloaded from the internet for free (not pirated – just some open source or freeware utility from some generous soul). You probably have some software from small companies that you’ve paid for – I know I buy quite a lot of small utilities – not just for my Mac or PC but also for my Treo 650. All of this code has access to pretty much all the data on my machine as well.

The fact is, the security model on our machines is horribly outdated. The idea of all code running in a user account being equal comes from systems where the goal of security was to prevent one user from getting to another user’s account. It was not to prevent code a user is running from getting to his own account.

But security experts these days are running around telling us developers (some of these experts are also developers and some of them are also my friends) that to secure a system we need “deep security” – every line of code – every instruction – must be ‘safe’. Proven correct isn’t even good enough for these folks – the code has to be ‘safe’ even if some other developer comes along and changes it after we wrote it! The problem, they say, is not the weak system of trust we have for software delivery but the fact that once your software is connected to the web, _anyone_ (meaning not one of those thousands of folks who wrote the good software but one of those _bad_ guys) can run any code they want on _your_ machine if us developers are allowed to make just one small mistake and let them through. These experts tell us we need safe languages to program in that will protect us developers from our own mistakes (we are only human – we will make mistakes) and that somehow adding layers of indirection above the hardware is going to keep your documents from getting into enemy hands.

The latest casualty in the war against unsafe programming is the “lowly integer.”: Apparently modulo numbers don’t just go to infinity – they eventually wrap right around the universe and back to zero, taking all your documents with them. Who knew? I’m surprised nobody has pointed out that an index is really just as dangerous as a pointer (which is just an index into memory).

The problem with this approach to security is that it is a battle that can’t be won and you, as the customer, get to pay the cost. What cost? The cost to pay the developer to make thousands of changes to his software to conform to security guidelines that will show few, if any, actual problems and likely cause as many as are found. The cost to test those thousands of changes. And finally there is the performance loss in all of the redundant checks executing on your machine. Meanwhile – any high school kid can post his latest utility online and you’ll happily download and run it (I’m not blaming _you_ – I do the same).

Part of the problem is that we don’t even have a good definition of what secure means – when software prompts for your password and you hand it over and then it goes and does something bad, “is that a security hole?”: I send a nasty letter every few months to American Express because they insist on sending me links like this in their e-mail:$LUk0/axp15?t=BfdUWCAZB6ABA0wQofAMm$LUkO&email=my)

I’m supposed to trust this link? And they wonder why people fall for phishing attacks.

There are two key components for security –

1. Trust – Even a fairly weak system of trust seems to be quite effective but we need to work on what it means to establish trust with a software vendor and what it means for them to establish trust with us as customers. Hello RIAA? Yes, I am your customer. I’m not here to steal your music. Now can I _please_ play it on the device I choose?

2. Verifiability – It doesn’t matter how much you trust me – you should be able to look over my shoulder. From a software standpoint this means that the interface between the application and the OS should be provably verifiable, meaning all preconditions can be validated from within the function call (big hint to thieves here – if you want to hack a system look for system calls where the preconditions cannot be verified within the function then see what happens when you send in bad data!). This interface code should be open for public inspection.

This isn’t a call to open source _everything_ – only the interface to the OS. It is important to establish trust, not only to check for mistakes, but to make sure that your code is doing what it claims. What’s needed is a new security model at the OS level – you _should_ be able to run any application you choose on your machine, written by anyone, without worry that it could do significant harm or get access to any personal information. The application should only have access to files for which you grant explicit permission or files it created (and yes, I know about things like decompression bombs – like I said, software developers are clever). The idea of executing an app in isolation isn’t new – it is known as a sandbox – but every application on your machine should run in a sandbox. Microsoft seems to be the only company actively pursuing such measures (they are also the biggest proponent of trying to protect us developers from ourselves, especially when it locks us into their platform). That isn’t to say Apple isn’t doing something – Apple just isn’t talking. And before I get flamed – I’m not tight enough with the Linux community to know if they already have a great solution in the works or not.

The topic of how to teach software developers has been fairly hot lately as well (and ties into the security debate). There is quite the flame war in internet land about my friend “Bjarne’s interview in Technology Review.”: There is also a strong ‘counter’ “argument (albeit written first) by David Gries”: who I don’t know but for whom I have great respect. Although I agree with much of Dr. Gries’ article his conclusion is a little stunning – that we should be teaching casting (a way to circumvent a type system) before iteration? I need young developers with a deep understanding of how machines work – not a bunch of misconceptions formed by learning to program on a virtual machine. That’s not to say that Java can’t have a place in education (my first programming language was Applesoft Basic) but we can’t ignore reality – especially not as scientists. I wouldn’t presume to tell Dr. Gries what language to teach – but I would ask that he not ignore mathematics and physics in his lessons. I’m certain Dr. Gries’ students graduate more knowledgeable and well rounded than his hyperbole suggests.

An amazing thing is that the bulk of the good language research that I see happening these days is happening within the C++ community (okay – I admit that I’m part of this community so that could very well bias my perception). I’ve worked on large code bases in C++ (Photoshop would be an example) and Java is the language I dreamed about over a decade ago as I was learning OOP. But we built the equivalent of Java in C++ (as have many people) – not quite as pretty but all the basic functionality is there. Reference semantics and garbage collection. Bounds checking and eliminating raw pointers. Deep class hierarchies rooted in a common base object class. Introspection and serialization… And this _was_ a better way to program compared with how we did it before. But then the class hierarchies grew, and the class interfaces grew, and the messaging amongst objects increased. Eventually, we found we had object spaghetti. A vast network of objects and somewhere along the line we’d lost the ability to see the whole picture – it had grown too large to be comprehensible.

Now I’m looking for a better approach. Borrowing from what the functional programmers knew but adding a systematic way to deal with side effects. Borrowing from what mathematicians know about abstraction – axioms, algebraic structures, and theorems – but adding the notions of complexity and memory. This is what generic programming is about – and the language of “choice” for generic programming is C++. And it isn’t because Alex Stepanov (who is both my friend and colleague) chose C++ for STL (“he worked in Scheme, Ada, and C, before C++”: ). It is because C++ got some fundamental things right. For example, like C it has the notion that an array has one more position than elements – a mathematical fact of any sequence which Ada denies. It is because Bjarne allowed C++ to be open enough that it could be used for purposes other than those he could foresee (the idea that templates could be used as a type function certainly wasn’t designed). And because he insisted that C++ be backward compatible (a choice that I often argue against) he assured a large community that could grow with those moving from C.

None of this could have happened if the current security experts had a grip on him. If we can hold off those same people from forcing their view that they do _know_ how to write correct software and their job is to protect us from ourselves then some truly great and novel things might just have a chance to develop. My bet is that the great ideas will come from some of those college students that you are kind enough to trust with your personal documents.

[As a follow up – the “second part of Bjarne’s interview”: was just posted.]

Building a Common Infrastructure

I recently participated in a study on open source software conducted by Oliver Alexy at Technical University of Munich. I just received the final report – not yet public but I’ll post the file or a link if/when I can. It got me thinking about the problems corporations face with open source, the protectionist attitude which is so prevalent in the software industry, and the challenge of building a common infrastructure.

When Alex Stepanov created what is now the “C++ Standard Template Library,”: his intention was that the library would be an example of how to write such libraries – libraries which can be heavily reused and are minimally intrusive. Although STL is a success, and there have been many libraries built using ideas from STL, for the most part Alex’s vision of a very rich environment of many algorithms just hasn’t happened. If you’ve been following Alex’s “notes on programming”: then you get an idea of how much STL itself could be improved.

What I’d like to create is an environment where computer scientists could collaborate on a library of algorithms, the concepts upon which the algorithms are defined, and data structures to support these concepts. I’d like this to be language neutral, with documentation for various languages describing how ideas such as refinement, models, and type functions are implemented, along with quality libraries for many languages. One of the largest hurdles in making such a library a success is getting past the protectionist policies of companies such as Microsoft and Apple so they can use, and contribute to, such an effort. My hope was that releasing ASL under the MIT license would encourage Apple and Microsoft to borrow and contribute to the effort – so far that hasn’t happened.

Just today I was trying to hunt down what Apple recommends as the replacement for the deprecated Carbon API, SetCursor() – after much searching I found “example code”: of how to use the Cocoa NSCursor to replace it. The sad state of affairs is that all the cursor libraries I’ve ever seen are remarkably similar with only superficial differences – yet to the best of my knowledge no one has tried to get to the minimal requirements for a cursor and write this API for the last time.

With ASL we’re trying hard to define “concepts”: as we discover them. Currently our notation is C++ centric and as such even our definition of “regular type”: is only an approximation. I have some ideas about how to get such a library started and I’m sure I could quickly round up some university involvement – but I’m at a loss as to how to penetrate the protectionist commercial environment and ultimately I believe that will be necessary for success.

The Importance of an Ecosystem

In my ongoing attempt to buy a decent “iPod alarm clock,”: I discovered the “Big Screen Zip Connect”: from Sharper Image.

The unit has several nice features I’ll never use, like the ability to tune to non-US radio stations and listen to TV (I don’t get any broadcast signals strong enough at my house). The unit is also pricey – $173 w/tax and the iPod Remote Zip Connect. I never thought I’d use the Sound Soother feature but found I like it – as does my wife. The volume for waking to the iPod has three choices, “Soft”, “Loud”, and “Ramp” (from Soft to Loud). “Soft” has a volume level of 5 (on their scale of 0 to 99). Again, on this device I wanted a volume of 2 but gave it a try. Still too loud – not as bad as the other units, but bad enough that I’m returning it. Worse, the machine “pops” the speakers when it powers up the alarm, and if you hit the snooze or the alarm off you get a loud “beep” confirming your selection. The result is “pop…hmmm…MUSIC…BEEP!” – all I want is “music…” Is that so hard? I tried waking to the soothing sound of the ocean – still loud enough to drown in. There are other UI problems: you can turn off the iPod with the unit’s power switch, but you can’t turn the iPod on; you have to lift it out of the unit and use the iPod controls. So this unit goes back and I’m looking for another.

What was interesting with this device though is the “Zip Connect.”: Sharper Image has taken nearly all of their audio gadgets and added a proprietary Zip Connect to them. Each of these devices comes with a module to adapt the connector to a standard stereo mini line-in. You can buy an optional module to connect your iPod, which will play through the unit while charging the iPod, and some units allow you to control the iPod. None of the units that I saw cradle the iPod as well as most iPod-specific accessories, but they’re functional and work with any iPod. The Zip Connect is Sharper Image’s way to protect themselves in case Apple suddenly changes connectors, or to support other devices in the future.

Having owned a Palm V, m505, Treo 600, and Treo 650, all of which have different connectors, I can appreciate this. I’ve spent a fair amount on accessories for older units which I couldn’t use with new units. This became a barrier to upgrade, and with each upgrade I’ve been hesitant to invest in more accessories to avoid lock-in. I’d still be running the Treo 600 except it failed one month out of warranty. Had Palm stuck with a single connector, they could have a rich ecosystem as the iPod does now.

The Sharper Image approach is the hardware equivalent of a private, pure virtual function call – it’s expensive (costs you $10 to subclass + added cost for default line-in module), it can’t do anything without a subclass, but it does provide some flexibility for the future – but only friends of Sharper Image can provide modules.

Ideally, we’d have an open standard for such connectors (maybe call it USB) with standard protocols for controlling audio devices. Carefully thought through, legacy devices would either just work or only require a small adapter – but new devices would plug right in. But for now, Apple, Sharper Image, Palm, and most other manufacturers are sticking to proprietary interfaces. Even if I do settle on the Zip Connect device, I still can’t plug my Treo into it.

The software solution to this problem is standardized Concepts. C++ provides the requirements for items such as iterators but stops short of semantic requirements. Even the most basic requirements of “regularity”: are not universally adhered to. A colleague recently sent me a piece of code which contained the following:

Matrix Inverse(const Matrix& M) {
    Matrix S(M), T(M); // temporary matrices of the same size as M

    T = M;
    // ...
}

Without the comment, I would likely have deleted the assignment – there is something very wrong when a copy constructor doesn’t copy. A Zip Connect won’t help you here.

Building Software Oracles

I’ve been giving some thought lately about how to structure software applications. One of the long term goals for the Begin example application, part of “ASL,”: is to create a rich web client for “desktop like” hosted applications. In a recent discussion it occurred to me that there isn’t any good reason why we shouldn’t keep the application front end in its own process. We can have a single, universal application interface which is configured with declarations to be any application and paired with an app server, running locally or across the network. Yes, I know, this isn’t a particularly novel idea (I’m typing this in a browser at the moment) – I just happen to think with ASL we’re getting close to making it work _better_ than the current all-in-one-process desktop model.

In some ideal world this would have three layers – a declarative UI front-end, a declarative document model back-end, and a collection of generic algorithms to do processing in the middle. The difference between Photoshop and Word becomes the descriptions at both ends and what algorithms are in the middle – the more algorithms we have the more applications we can build.

One challenge with such an architecture is how to monitor the progress of the middle process. I certainly don’t want to require that the code be littered with “progress markers” to communicate state back to the interface. This is one of the challenges that’s often cited as a key feature for aspect oriented systems – you can build an “oracle” aspect which will monitor progress. That’s somewhat like littering the code with progress markers – except less intrusive.

I believe there is a reasonably simple, generic, approach that should work quite well – for any operation which streams back results, we can simply mark progress by the data received – knowing the total amount of data expected and complexity of the operation this should work quite well. For operations which take significant time before they return a result, we should be able to accurately predict their completion by knowing the complexity of the operation and a profile of the operation for the machine (which can be generated through use) – in this way we can come to predict completion (and use this information to keep the OS from throttling our process back while we’re busy). If we run significantly over the expected completion time, we can assert failure and interrupt the process – with a good transaction model even failure can be graceful. What Apple is doing with “CoreData,”: is a good proof of concept for the back-end of such an architecture.

I think the key piece that has prevented desktop applications from having more of a “server” architecture (besides OS’s which couldn’t multi-task worth a damn until fairly recently) has been handling interactivity so the user is in direct control instead of feeling like they are filling out forms and issuing requests. By modeling the functions in the interface process, and building tight communication between the processes into the architecture, this could be achieved.

A Crippled Human Interface

This is the first entry in what I intend to be my daily blog. I’ve been kicking the idea of a blog around for some time – for me the question has been, “what should the purpose of the blog be?” My team already has a public face with the “Adobe Source Libraries”: so time spent working on a blog is time I’m not spending on code, documentation, a paper, or, more likely, answering e-mails. I decided a blog could be cathartic – a way to clear out the random thoughts rattling in my head before sitting down to focus on other work. (Of course, it could just be distraction, in which case this will likely be a short-lived blog.) So if you’re looking for deep insights, you’re likely better off reading something like my friend “Jon’s new blog.”:

The random thought for today has to do with a crippled human interface. I spend a fair amount of time thinking about human interfaces from the engineering side – a significant portion of Adobe’s application code is dedicated to the human interface and it is a focal point for our declarative programming efforts in ASL. I define a human interface as:

* A system to assist the user in selecting a function and providing a valid set of parameters to the function.

The definition of a GUI can be obtained by prepending “visual and interactive” to “system.” A key word in this definition is “assist.” There are many ways to assist and sometimes things go wrong – a little like the old story of the Boy Scout who insists on helping the old woman cross the street against her will.

This morning I was rudely awakened by just such a case of assistance gone wrong. I’ve been on a mission to find a good alarm clock which can wake me to my “iPod mini.”: Last Christmas I was given an “iHome”: – the device is quite wonderful, except the minimum volume for the alarm is loud enough to wake the neighbors. It quite literally will blast you out of bed. I’m a fairly light sleeper and I can wake to just a little soft music – at the right volume it will wake me but not my wife. Because of this problem I returned the iHome.

The next alarm clock I tried was the “iBlaster.”: I didn’t even bring this one home, I tried it at the store and it had exactly the same problem. The interface is so close to the iHome that I’m certain they share the same internal circuitry.

Next up, the “Fisher Studio Standard.”: I purchased one of these yesterday and set it up last night. Again, I believe this device shares many of the same components with the iHome (and iBlaster) – but I had some hope. It appeared to be a “2.0” version of the interface. As I set the alarm it “asked” what volume I’d like – anywhere from 10 to 99. Except I want 2 and there is no way to get 2. 10 wasn’t nearly as loud as the iHome or iBlaster so I gave it a try. My wife was kicking me faster than I could hit the off button so this device is also heading back to the store.

I’m certain that the lower limit for the volume on all of these devices is an attempt to prevent me from picking a volume which is so low that I miss that important meeting – some very concerned UI Scout is trying to make sure I cross that road. There needs to be another merit badge for UI designers – one that teaches that the purpose of the human interface is _not_ to prevent me from getting to valid parameters. A perfect solution on these devices would be to have a volume level of 10 be the default when first setting an alarm, but let me pick the level I want.

If you have any suggestions for a good iPod alarm clock, post a comment.