I’ve been giving some thought lately to how software applications should be structured. One of the long-term goals for the Begin example application, part of “ASL,”:http://opensource.adobe.com/ is to create a rich web client for “desktop-like” hosted applications. In a recent discussion it occurred to me that there isn’t any good reason why we shouldn’t keep the application front end in its own process. We can have a single, universal application interface which is configured by declarations to be any application, paired with an app server running locally or across the network. Yes, I know, this isn’t a particularly novel idea (I’m typing this in a browser at the moment) – I just happen to think that with ASL we’re getting close to making it work _better_ than the current all-in-one-process desktop model.
In some ideal world this would have three layers – a declarative UI front-end, a declarative document model back-end, and a collection of generic algorithms to do processing in the middle. The difference between Photoshop and Word becomes the descriptions at both ends and what algorithms are in the middle – the more algorithms we have the more applications we can build.
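To make the three-layer split concrete, here is a minimal sketch in Python. All of the names and data shapes are invented for illustration – nothing here is ASL code – but it shows the shape of the idea: the “application” is just declarations at both ends, with a registry of generic algorithms in the middle.

```python
# Front end: a declarative description of the interface (hypothetical schema).
ui_declaration = {
    "dialog": "Resize Image",
    "fields": [
        {"name": "width", "type": "int", "binds": "doc.width"},
        {"name": "height", "type": "int", "binds": "doc.height"},
    ],
}

# Back end: a declarative document model.
document = {"width": 640, "height": 480}

# Middle: a registry of generic algorithms. Adding algorithms grows the set
# of applications that can be described, without adding glue code.
algorithms = {}

def algorithm(name):
    def register(fn):
        algorithms[name] = fn
        return fn
    return register

@algorithm("scale")
def scale(doc, factor):
    out = dict(doc)
    out["width"] = int(doc["width"] * factor)
    out["height"] = int(doc["height"] * factor)
    return out

# A different "application" is a different pair of declarations driving
# the same middle layer.
resized = algorithms["scale"](document, 0.5)
print(resized["width"], resized["height"])  # 320 240
```

The point of the sketch is that nothing in the middle layer knows it is part of an image editor – swap the declarations and the same registry serves a different application.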
One challenge with such an architecture is how to monitor the progress of the middle process. I certainly don’t want to require that the code be littered with “progress markers” to communicate state back to the interface. This is one of the challenges that’s often cited as a key use case for aspect-oriented systems – you can build an “oracle” aspect which monitors progress. That’s somewhat like littering the code with progress markers – except less intrusive.
I believe there is a reasonably simple, generic approach that should work quite well: for any operation which streams back results, we can mark progress by the data received – knowing the total amount of data expected and the complexity of the operation, this gives a good estimate. For operations which take significant time before returning a result, we should be able to accurately predict completion by knowing the complexity of the operation and a profile of the operation for the machine (which can be generated through use) – and this information can also keep the OS from throttling our process back while we’re busy. If we run significantly over the expected completion time, we can assert failure and interrupt the process – with a good transaction model even failure can be graceful. What Apple is doing with “CoreData,”:http://developer.apple.com/macosx/coredata.html is a good proof of concept for the back end of such an architecture.
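The two strategies above can be sketched in a few lines of Python. The class names and parameters here are invented for illustration; in particular, how a “machine profile” is measured is assumed away as a single seconds-per-unit-of-complexity number.

```python
import time

class StreamProgress:
    """Progress inferred from data received against a known expected total."""
    def __init__(self, expected_bytes):
        self.expected = expected_bytes
        self.received = 0

    def on_data(self, chunk):
        self.received += len(chunk)

    def fraction(self):
        # Clamp in case the estimate of the total was low.
        return min(self.received / self.expected, 1.0)

class PredictedOperation:
    """Completion predicted from operation complexity and a machine profile."""
    def __init__(self, complexity, seconds_per_unit, slack=2.0):
        # Allow some slack before declaring the operation failed.
        self.deadline = complexity * seconds_per_unit * slack
        self.start = time.monotonic()

    def check(self):
        # Running significantly over the prediction is treated as failure;
        # the caller can interrupt and roll back the transaction.
        if time.monotonic() - self.start > self.deadline:
            raise TimeoutError("operation exceeded predicted completion time")

progress = StreamProgress(expected_bytes=1000)
progress.on_data(b"x" * 250)
print(progress.fraction())  # 0.25
```

Note that neither class requires the monitored code to be instrumented: the streaming case observes the data channel, and the predictive case only needs the operation’s declared complexity and a timer.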
I think the key piece that has prevented desktop applications from having more of a “server” architecture (besides OSes that couldn’t multi-task worth a damn until fairly recently) has been the difficulty of handling interactivity so the user is in direct control instead of feeling like they are filling out forms and issuing requests. By modeling the functions in the interface process, and by building tight communication between the processes into the architecture, this could be achieved.