FW: UI thoughts

Vlad Dumitrescu (EAW) <>
Wed Mar 5 09:15:16 CET 2003


I will take the liberty of resending to the list the mail that Jay sent me, because I think it will make for interesting reading. I hope that's okay with you, Jay.

best regards,

-----Original Message-----
From: Jay Nelson [mailto:]

At 04:31 PM 3/4/03 +0100, you wrote:
>yes, you are right and it's a good way to build an application. But this 
>will be the next level, right now I am wondering about how to build the 
>graphical engine and all controls/widgets in a good way.

You will continue to have conceptual problems if you start
by building a toolkit of UI "objects".  As soon as you use the
word widget you have already set your mind in the wrong direction.

>There has to be some kind of framework that will provide building bricks 
>for the UI. Then of course, a framework for writing applications will be 
>needed too, but I haven't been thinking that far yet.

I wouldn't agree with that.  What you need first is a behavioral
description of the interaction, just as client / gen_server
specifies the protocol completely independently of the content.
Then you add a rendering capability for displaying things.
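As a sketch of that separation (all names here are illustrative, not a
real API): the interaction behavior can be one process speaking a small
protocol, while any renderer is just another process that subscribes to
state changes and knows nothing the behavior cares about.

resize_area(W, H, Listeners) ->
    receive
        {resize, NewW, NewH} ->
            %% Notify every subscriber; the behavior itself never draws.
            [L ! {geometry, NewW, NewH} || L <- Listeners],
            resize_area(NewW, NewH, Listeners);
        {subscribe, Pid} ->
            Pid ! {geometry, W, H},
            resize_area(W, H, [Pid | Listeners])
    end.

A screen renderer, an audio renderer, or a test harness could all
subscribe to the same behavior without it changing at all.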

For example, don't think of a window, think of interaction styles:

1) A rectangular area that is resizeable
2) Rectangular hotspots, like stamps, that can be pasted on top
of a rectangular area
3) Hotspots that can render characters or change appearance when
the mouse passes over them

These are all behaviors.  No one has defined a window or a
hierarchy.  If you want a window, create WindowManagerSupervisor
(WMS) and pass it an ordered list of items:

[ [resize_area, {width, 300}, {height, 400}],
  {relative, {name, minimize}, {x, +270}, {y, +3},
   [hotspot, {click, {send, minimize}}]},
  {relative, {name, maximize}, {x, +280}, {y, +3},
   [hotspot, {click, {send, maximize}}]},
  {relative, {name, close}, {x, +290}, {y, +3},
   [hotspot, {mouseover, 'blue.gif'}, {mouseoff, 'red.gif'},
    {click, {send, kill}}]} ]

Each of what you call "widgets" can be a hotspot.  A hotspot is a
process that works like a gen_fsm.  A mouse click, a mouse move, etc.
can each produce an event that means a state change when the message
is received.  The WMS binds the areas together and specifies the
drawing order so that you get a flattened view that looks like a window,
but it is really a dynamic collection of behaviors that could fly apart
or transfer themselves to other processes (think of docking toolbars)
or react in concert with another process (think slider bars).  Modelling
this way allows the user to choose the elements of a "window" that
he would like to keep, instead of hard coding them into an inflexible
object construct that simulates flexibility by varying its internal state
member variables.
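A hotspot along these lines might look like the following hand-rolled
state machine (a sketch only, not a real gen_fsm callback module; the
message and image names are assumptions carried over from the list
example above):

%% A hotspot as a tiny state machine: 'idle' until the mouse
%% enters its rectangle, 'hot' while the mouse is over it.
%% Parent is the WMS process; Name identifies this hotspot.
hotspot(Parent, Name) ->
    idle(Parent, Name).

idle(Parent, Name) ->
    receive
        {mouse_over, Name} ->
            Parent ! {render, Name, 'blue.gif'},
            hot(Parent, Name);
        _Other ->
            idle(Parent, Name)
    end.

hot(Parent, Name) ->
    receive
        {mouse_off, Name} ->
            Parent ! {render, Name, 'red.gif'},
            idle(Parent, Name);
        {click, Name} ->
            Parent ! {send, kill},
            hot(Parent, Name);
        _Other ->
            hot(Parent, Name)
    end.

Nothing here knows about windows or hierarchies; the WMS decides how
the rendered states are composed on screen.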

As soon as you call something a widget, or think in terms of a
window and a menu object, you have already predisposed
yourself to non-Erlang thinking.  Think in terms of behaviors
only, and implement them with collections of processes.
What does the user want to do?  Nothing on the screen: he
actually wants to enter, remove or modify data.  The screen
is a crutch for seeing what is happening; audio would work just as
well.  With a GUI you end up pasting on non-graphical things as a
way of being compliant with accessibility standards, rather than
presenting the data in the most natural way for interaction.
What does the data want to do?  That question is quite independent
of: How should the data be displayed? and: What controls allow the
user to manipulate it?

Chris said a textbox cannot live independently of a password box,
but classification systems are slippery slopes.  As soon as you
choose one item to classify on (think Euclidean geometry) you
have eliminated a whole realm of possibilities that are not only
improbable, but impossible once you start (a great-circle arc is a
shorter distance than a straight line -- on a globe).

Both a text box and a password box are visual displays of
an internal text string.  In one case the user needs feedback
as he types, but no display; in the other he wants both feedback
and display.  The processes that accept keystrokes, filter for
validity, and store to the internal text string are separate because
they may be recombined to create other chains of events.  The
fact that the two screen representations are similar is of no
consequence other than predictability on the part of the user.
What is important is the task and the feedback.  A password
box could just be a single block that changes color with each
keypress and uses three different beeps -- one for characters
and two for accept or reject.
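The chain of processes described above might be sketched like this
(every name is illustrative; the printable-character filter and the
display protocol are assumptions, not anything from the original mail):

%% Keystrokes flow through a filter into a text store; only the
%% display process differs between a text box and a password box.
filter(Store) ->
    receive
        {key, Ch} when Ch >= $\s, Ch =< $~ ->  % accept printable chars
            Store ! {append, Ch},
            filter(Store);
        {key, _Bad} ->
            filter(Store)                      % reject silently
    end.

store(Text, Display) ->
    receive
        {append, Ch} ->
            Display ! {changed, Ch},
            store(Text ++ [Ch], Display)
    end.

%% Text-box display echoes the character; password-box display
%% only signals that something happened.
textbox_display() ->
    receive {changed, Ch} -> io:put_chars([Ch]), textbox_display() end.

password_display() ->
    receive {changed, _Ch} -> io:put_chars("*"), password_display() end.

Recombining the same filter and store with a beeping display gives the
color-block password box from the previous paragraph; no "widget" had
to change.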

 > One process per real-world activity - the textbox underlying
 > a passwordbox isn't *really* a discrete real-world activity

This is the atomic argument (as in physics) versus the quark
argument.  Do you model physical real-world things, or the
elements that combine to create different kinds of reality?
Either is valid; they are just different views, but they lead to
drastically different explanations of seemingly the same
external behavior.


More information about the erlang-questions mailing list