[erlang-questions] Rhetorical structure of code: Anyone interested in collaborating?

Richard A. O'Keefe ok@REDACTED
Wed May 11 03:17:19 CEST 2016


On 11/05/16 5:28 AM, Lyn Headley wrote:

(a message that was sent privately, to which I sent a private reply,
which I now have to reconstruct)
>> Richard A. O'Keefe wrote:
>> It may be that other "elders" are extremely good at reading "youths'" minds.
>> Perhaps it is a form of clairvoyance.  It would be very interesting to
>> explore this experimentally.
> I'm not sure what led you to characterize my proposal as involving
> "mind reading."
The fact that the code reader communicates a problem to the
code explainer just by pressing an "I am puzzled <here>" button.
I am pretty sure that if I were the code explainer, my response
would be "I am puzzled about what puzzles you <there>", every time.

As it happens, there was a fair bit of work in the AI/Cognitive Science
field back in the 70s and 80s on diagnosing novice errors (in things
like arithmetic, simple programming, circuits).  It now occurs to me
that perhaps we can think of puzzlement as "failure to reconstruct a
plan under which the line in question serves a known purpose", and
perhaps we could diagnose that.  (Of course, this assumes that it
is the reader at fault, not the writer.  You probably remember Feynman's
delightful anecdote of being asked for advice about a nuclear reactor,
and for lack of any better idea, stabbing his finger at random on the
plan, and saying "What does that do?"  The others looked, and said
"You've found the problem!"  I suspect his choice wasn't _quite_
random...)

The point was basically that while making a transcription of a code
reader's odyssey through the code might be useful, I don't think it
is helpful to make the communication channel so narrow.  I think
that if someone is puzzled and wants help, it is not unreasonable
to give them the chance to write or speak at least a sentence about
what they find puzzling.

"I don't understand this syntax" and
"I thought that function was deprecated"
can both cause puzzlement, and at the same place.
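A concrete (hypothetical) illustration: the same line of Erlang can puzzle two readers for entirely different reasons. In the sketch below, a reader unfamiliar with bit syntax may stop at the `<<_:4, Low:4>>` pattern, while a reader who knows OTP may stop at the same line wondering why the deprecated `erlang:now/0` is being used at all. The module and function names here are invented for the example.

```erlang
-module(puzzle).
-export([low_nibble_of_seconds/0]).

%% Two distinct puzzlements can land on the same lines:
%%  - "I don't understand this syntax"  (the bit-syntax pattern)
%%  - "I thought that function was deprecated"  (erlang:now/0,
%%    deprecated since OTP 18 in favour of erlang:timestamp/0 etc.)
low_nibble_of_seconds() ->
    {_Mega, Secs, _Micro} = erlang:now(),  %% deprecated call
    <<_:4, Low:4>> = <<Secs:8>>,           %% bit-syntax pattern match
    Low.
```

A bare "I am puzzled <here>" press at that line gives the explainer no way to tell which of the two questions is being asked.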

> As your lecturer demonstrated, even face-to-face
> contexts require active listening and inference about what is being
> asked. At least my proposal enriches the semantic environment beyond
> that of the lone developer staring at a text file.

I think it is a stretch to call it a "semantic" environment.
If someone leaves the cursor on a line of code for five minutes,
what does that _mean_?  Are they reading it, are they looking
something up in a manual, did they just go to the toilet, or what?
If they are doing a search and stop at a line, is it because this
was a serious candidate for their interest or because the editor
has a stupid search interface that won't let you search for a
two-word phrase (a problem I frequently have in a certain PDF
reader)?  The channel is very narrow and carries very little
meaning.  Let someone introspect into a microphone while they
are doing this and you *might* get some serious semantic context.
> Human communication is imperfect. Alas, we are all cast in the role of
> the mind reader. But some contexts provide more context than others.

Indeed they do, which was precisely my point.  I don't see anything to be
gained by deliberately choosing a narrow interface providing very little
context when video screen capture with audio annotation is such an
established technology that all our lecture theatres now have it, so the
basic idea can be tried out with no upfront development cost at all.

My research agenda is sufficiently full for this year that I was advised not
to post the original message in this thread, but this is a topic I have
put a bit of time into.  I find that in my own code it's context I *want* to
provide, and in others' code it's context I *wish* they'd given, but the
vocabulary hasn't quite gelled yet.



