I finally read the introduction to Adobe’s ASL GUI system, and its Adam and Eve parts. Ralph Thomas has been using it to write his Mission Photo application, so it seems to work. It’s great to see the rationale documented publicly, and I guess that Adobe will profit from the feedback. Design is so much easier when you can bounce ideas off someone. I notice that the overview document is dated December 2004, so I’m late with my feedback.
Eve seems to be a UI layout description format, much like Glade (including the use of widget identifiers that are looked up later, from code with libglade, or from Adam with ASL), but without specifying concrete widget types, so it’s “button” instead of “GtkButton”. It’s nice that it uses these generic widget names, which are later realised as actual GTK+, MacOS, or Windows UI parts. It’s annoying to me that it doesn’t use XML, which would make the structure more obvious.
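To make that separation concrete, here is a minimal sketch in Python of the idea as I understand it: a layout described with generic widget names, resolved to toolkit-specific types later. All the names and the data layout here are illustrative inventions of mine, not actual Eve syntax or the ASL API.

```python
# Hypothetical sketch of Eve's central idea: a layout described with
# generic widget names ("button"), realised later as concrete widgets
# ("GtkButton"). Illustrative only, not real Eve syntax or ASL API.

# A layout as plain data, like an Eve file describing one dialog.
layout = {
    "dialog": {
        "name": "example_dialog",
        "children": [
            {"type": "popup", "name": "path"},
            {"type": "edit_number", "name": "flatness"},
            {"type": "button", "name": "ok"},
        ],
    }
}

# Per-platform factories map the generic names to concrete widget types.
GTK_FACTORY = {"button": "GtkButton", "popup": "GtkComboBox",
               "edit_number": "GtkSpinButton"}
WIN32_FACTORY = {"button": "BUTTON", "popup": "COMBOBOX",
                 "edit_number": "EDIT"}

def realise(layout, factory):
    """Return a (widget name -> concrete widget type) mapping for a dialog."""
    children = layout["dialog"]["children"]
    return {child["name"]: factory[child["type"]] for child in children}

print(realise(layout, GTK_FACTORY))
# {'path': 'GtkComboBox', 'flatness': 'GtkSpinButton', 'ok': 'GtkButton'}
```

The same layout data, fed to a different factory, yields Win32 widgets instead, which is presumably the point of keeping the descriptions generic.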
The documentation makes a big deal of automatic layout, without calling it that, but that’s the only sane way anyone does layout now anyway, so that alone is not enough to make this useful. It’s understandable that this is an issue for them, though: these guys built their code base on the awkward classic MacOS APIs, and then the crappy Win32 APIs, which demand fixed per-pixel layout and make you implement your own event loop. I’ve been there. GTK+ does this for you.
Adam seems to define, declaratively, how the widgets behave. Adam files seem to be made up of sheets (maybe equivalent to a dialog, or maybe to a user operation, I’m not sure) and cells (though that name is not mentioned explicitly in the syntax). Adam files specify the properties of one widget in terms of the properties of another widget, so that, for instance, a widget can be deactivated when another widget is unchecked. I guess it can do more than just simple boolean logic and arithmetic, but this is where it feels like I’d need some real programming to get real things done.
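The core idea, as far as I can tell, is a little dataflow system: one widget’s property is a function of another’s, and it is kept up to date automatically. A toy Python sketch of that mechanism, with all class and function names invented by me rather than taken from ASL:

```python
# A toy illustration (not ASL's actual API) of Adam's central idea:
# one widget property defined in terms of another, updated automatically.

class Cell:
    """A named value that notifies its watchers when it changes."""
    def __init__(self, value):
        self._value = value
        self._watchers = []

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        self._value = new
        for fn in self._watchers:
            fn()

    def watch(self, fn):
        self._watchers.append(fn)
        fn()  # run once to establish the initial state

def bind(target, source, fn):
    """Define target's value as fn(source's value), vaguely like an Adam sheet."""
    source.watch(lambda: setattr(target, "value", fn(source.value)))

# "Deactivate the entry when the checkbox is unchecked":
checkbox_checked = Cell(False)
entry_sensitive = Cell(True)
bind(entry_sensitive, checkbox_checked, lambda checked: checked)

assert entry_sensitive.value is False   # follows the unchecked box
checkbox_checked.value = True
assert entry_sensitive.value is True    # updated automatically
```

Anything beyond this kind of simple boolean dependency is where, as I said, I suspect you’d soon want a real programming language.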
But it’s a nice idea if it can work without getting in the way. We have grasped at similar things in GTK+: for instance, the keyboard mnemonic for one widget (a label) can activate another widget (an entry next to the label), and our accessibility data expresses higher-level relationships between UI elements. But we haven’t generalized it. Paul Pogonyshev posted recently on the gtkmm and libsigc++ lists about something like this, but I don’t think the result was useful enough, possibly due to the limitations of a statically-typed language. It would be nice to solve common problems more concisely.
Adam and Eve introduce several new terms (model, command, cell, sheet, field), leaving plenty of scope for confusion, particularly because the names seem quite arbitrary and each already has several existing meanings. I think I’d use more explicit names, such as data-field, if it were my project.
As far as I understand it, the system expects you to think of a UI (such as a dialog) as a way to generate a command, and ASL can represent a model of this command, though “model” is a confusing name for it. This command, produced by the view (the UI), is then used to transform the document model (the data). That doesn’t sound very suitable for rich interaction, but obviously it works for Photoshop and Mission Photo. Possibly you can update your application state in response to individual widget changes, to allow instant-apply.
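That separation of dialog and document is just the command pattern, I think. A minimal Python sketch of my reading of it, with the command class and document structure being my own illustrative inventions:

```python
# A sketch (my interpretation, not ASL's API) of the model above:
# the dialog's only job is to produce a command, and the command,
# not the dialog, transforms the document.

from dataclasses import dataclass

@dataclass
class ResizeCommand:
    """What a hypothetical resize dialog would produce from its widget values."""
    width: int
    height: int

    def apply(self, document):
        """Transform the document model; the view never touches it directly."""
        document["width"] = self.width
        document["height"] = self.height
        return document

# On OK, the dialog gathers its widget values into a command...
command = ResizeCommand(width=800, height=600)

# ...and the command updates the document.
document = {"width": 640, "height": 480}
document = command.apply(document)
assert document == {"width": 800, "height": 600}
```

Instant-apply would then just mean producing and applying a command on every widget change, instead of only when the user presses OK.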
Corrections are welcome.