Richard A. O'Keefe
Mon Dec 22 04:58:44 CET 2003
Kostis Sagonas <kostis@REDACTED> wrote:
Sorry to suddenly turn this thread into a political one, but the
above argument seems to me an argument of the form:
"Why try to eliminate some social injustices, since we are never
going to eliminate them all (especially the most subtle ones)."
Just think about it...
Well yes, I have thought about it, and the analogy is invalid.
A much more interesting and possibly more fruitful analogy is to
the "human factors and safety of interfaces" stuff one often sees
popping up in comp.risks. If you automate too much, people get to
rely on the machine, and then the automation has to be _really_ good.
If you have alarms that keep going off, people start ignoring them,
and then really bad things happen, because some of the alarms aren't
false. It seems that there's an optimal level of human involvement
in checking; too much and people can't do it, too little and people
do even less than they should. It is important for people to have
a clear understanding of what the machine will check (so they don't
have to) and what it won't (so they DO have to).
A question I asked a couple of times in this year's functional programming
exam paper had the general form
- description of data structure
- set up the type declarations in Haskell!
- how many of the data structure invariants were you able
  to tell the Haskell compiler about, and why?
- finish the job!
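To illustrate the kind of answer that question is after, here is a hypothetical example (not taken from the actual exam paper): a binary search tree, whose shape is easy to declare in Haskell but whose ordering invariant plain type declarations cannot express.

```haskell
-- Hypothetical illustration (not from the exam paper): a binary
-- search tree.  The *shape* of the structure -- every Node has a
-- left subtree, a key, and a right subtree -- is checked by the
-- Haskell compiler for free:

data Tree a = Leaf | Node (Tree a) a (Tree a)

-- ...but the *ordering* invariant (everything in the left subtree
-- is below the key, everything in the right subtree above it)
-- cannot be stated in a plain data declaration, so the programmer
-- must maintain it by hand in every function that builds a Tree:

insert :: Ord a => a -> Tree a -> Tree a
insert x Leaf = Node Leaf x Leaf
insert x t@(Node l k r)
  | x < k     = Node (insert x l) k r
  | x > k     = Node l k (insert x r)
  | otherwise = t              -- duplicate key: leave tree unchanged

-- In-order traversal yields a sorted list exactly when the
-- (machine-unchecked) ordering invariant has been maintained.
inorder :: Tree a -> [a]
inorder Leaf         = []
inorder (Node l k r) = inorder l ++ [k] ++ inorder r
```

The point of the exercise is precisely the division of labour above: the compiler guarantees the shape, while the ordering property remains the human inspector's job.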
Whatever you end up doing, it is a human factors disaster if the
BEAM compiler and the HiPE compiler disagree, because then people will
not be able to form a coherent mental model of what will be checked
by machine and what they must check in their inspections.