Pools, pipes and filters or Stream Algebra
Mon Mar 24 19:15:07 CET 2003
On Mon, 24 Mar 2003 07:11:01 -0800
Jay Nelson <> wrote:
> Chris Pressey wrote:
> >In other words, I'm not sure if you want constraints on how things are
> >constructed, or not, as you've now given arguments for both sides. I'm
> >assuming that you do, but you want looser constraints than are commonly
> >found at present. I agree with the spirit of your approach, I think,
> >but I find I'm having more and more trouble grasping the letter of it.
> As a user I don't want the layout and tools all predetermined. In an
> email client I have a composer, folders and a browser. Sometimes
> I want that and sometimes I want a different layout and different ways
> of finding and organizing things. Why only folders? Often I want the
> email data in my word processor or database or address book. Why
> does the data have to be divided by application? Why can't the data
> just be data and let me operate on it or view it in whatever fashion I
> choose from among the set of tools that are available to me?
Short answer is: because different tools make different assumptions about
what the data means.
If you're working at the level of a hex editor, the data *is* just data
and you can do whatever you like with it.
But obviously humans want to work with data at a more human level. The
problem is that there are infinitely many ways to arrange that.
In order for different tools to interoperate, they need a common
interchange system. Current interchange systems have achieved ubiquity
in two main ways: bottom-up expedience (hierarchical file systems,
ASCII, etc.) and top-down standards (XML, OLE, etc.)
The first category tends to be too simple to be really flexible; the
latter, too complex to be tractable.
The first category tends to win out (witness virtually any MUA: ASCII
messages in hierarchical folders.)
If you want something more sophisticated, you have to select or create a
standard. In order for the standard to actually provide an interchange
benefit, *everything* on the computer (/network) has to work by the same
(or compatible) standards. This is remarkably difficult (witness IPv6!),
which is why progress towards it is remarkably slow.
> The real issue is that "applications" are goal oriented. [...]
I think that's unavoidable. I think the issue is that applications
currently serve goals that are too elaborate for your tastes - you seem to
want smaller, more single-purpose micro-applications (traditionally called
'utilities') which can be combined into composites as the user desires.
This is not new, nor does it have anything to do with processor power as
far as I can see. In fact, this is the old way, the way from a time
before computers were ever thought of as personal possessions; from a time
when all computer users had some skill at building things out of smaller
things (and from when looking at all data through a hex editor was not out
of the ordinary.) Once computers became a mass-market consumer product,
the demand for more 'turnkeying' led to more packaging... just as it has
in the food industry, for whatever that analogy is worth (TV dinners,
anyone?) For better or worse, most people will prefer convenience to
flexibility.
Now this is just an explanation - it doesn't mean I like it any better
than you. But the shortest route for those of us in the minority who
prefer flexibility to prepackaging, for at least the next few years
(possibly a lot longer,) will be to become programmers ourselves so we can
change what we desire to change the most, to suit our own needs.
> The stream algebra is for me to develop a tool for the building of
> systems. The purpose of the algebra is to be able to reason about and
> prove the interchangeability of computational elements.
That's admirable, but the important question that occurs to me is what
happens when you prove that two streams aren't interchangeable.
If you're talking about a top-down approach, then you can define a
standard which all streams follow, and you don't need to prove anything so
long as you can reasonably guarantee that all streams do follow it.
If you're talking about a bottom-up, ad-hoc approach, you'll never be able
to guarantee that two arbitrary streams are interchangeable - so if
someone sends you a stream you can't recognize, what do you do with it?
Ideally, sure, a computer should just adapt to recognize it - but to do
that it needs a description of it - or a hell of a lot of AI.
Ignoring it is the only practical way to deal with it right now.
And we already have that well established, streams or no, stream calculus
or no. If I download a file in an unrecognized format, well, it's back to
the ol' hex editor for me innit?
This situation is mirrored in Erlang quite well, I think. If you want to
send a message to a server, you have to know what kinds of messages that
server will accept - you have to know its top-down structure. Or, you can
ad-hoc send a message to a server that you hope it will understand. But
even then, if you don't know for sure which messages the server will
recognize, and what the server will do with unrecognized messages, you're
taking a wild gamble... not the best way to ensure good results.
The nice thing about Erlang is that it allows the ad-hoc approach, while a
language with strict interfaces and strong typing generally forces you
into the top-down structure (on the assumption that anything else has a
likelihood of messing things up, which is often true.)
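To make the Erlang picture concrete, here is a minimal sketch (hypothetical module and message names, not anything from this thread) of a server loop whose "top-down" protocol is a single known message, with an explicit catch-all clause so that ad-hoc senders at least know the policy for everything else:

```erlang
-module(adhoc_server).
-export([start/0, loop/0]).

%% Spawn a server whose agreed-upon protocol is just {From, ping}.
start() ->
    spawn(fun loop/0).

loop() ->
    receive
        %% The "top-down" part: a message shape the sender must know.
        {From, ping} when is_pid(From) ->
            From ! pong,
            loop();
        %% The catch-all: unrecognized messages are logged and dropped,
        %% not fatal - the gamble an ad-hoc sender takes is at least
        %% a documented one.
        Other ->
            io:format("ignoring unrecognized message: ~p~n", [Other]),
            loop()
    end.
```

Without the catch-all clause, unrecognized messages would simply pile up in the server's mailbox, which is the silent version of the same gamble.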
Probably a middleground could be found in having a top-down structure
which defines, as part of itself, an ad-hoc substructure... but even then
there are still no guarantees, and it's hard to see how it could adapt to
novel, unpredicted input, without being a programming language in its own
right, which brings along its own dangers (even sandboxes can't overcome
the Halting Problem :)
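One way to picture that middle ground (a hypothetical format, purely for illustration): a fixed, top-down envelope whose payload field is deliberately left free-form, so a receiver can always tell whether it is looking at something inside the standard, even when it can't interpret the contents.

```erlang
%% Top-down part: every message is {envelope, Version, Type, Payload}.
%% Ad-hoc part: Payload is any term; only Type names are agreed in advance.
classify({envelope, 1, text, Payload}) when is_list(Payload) ->
    {known, Payload};            % envelope and content both understood
classify({envelope, 1, _Type, _Payload}) ->
    unknown_type;                % envelope understood, content not
classify(_Other) ->
    not_an_envelope.             % outside the standard entirely
```

The point of the three clauses is that "I don't understand this" splits into two very different cases - unknown content inside a known structure can be stored, forwarded, or queried about, while a completely foreign term can only be ignored (or handed to the hex editor).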