[erlang-questions] At what point am I "playing compiler"

John Haugeland stonecypher@REDACTED
Wed May 20 22:27:49 CEST 2009


> Ah, but the *more common* behaviour is to optimise unnecessarily, not to
> reject optimisation where it's merited (how would that fly for very long?)

I wonder why you believe this.  In my admittedly limited experience,
this has been nearly universally untrue.  Almost the first thing I
hear out of most programmers is the admonition against premature
optimization, used as an excuse for doing shoddy work, followed by
some hasty assumptions about whose time is more valuable.  Now, I'm
just speaking from personal experience, so I'm not going to make any
sweeping generalizations about what is and is not common, but it
seems to me fairly unlikely that I should have had so uniformly
contrarian an experience by chance.

Do you have any data to back up the claim that premature
optimization is more common than inadequate forward design?





> If it's needed, it's needed. If it's not, it's not. That is a rule of thumb
> that is essentially Knuth's rephrased.

We disagree on this point.  I would recommend a reading of the text;
it is my opinion that removing the germane quote from its context has
significantly changed its meaning.

That said, if your current stated interpretation of the quote _is_
correct, then it actually supports my comment that treating the
metaphorical admonition as a literal rule, and using that to justify
scaling it, is a fundamental mistake.





> Your "counterexample" was obviously a
> case that was "needed" because it's built into your requirements: "I would
> prefer this to take days, rather than months".

And that has essentially nothing to do with my commentary that
platitudes don't scale mathematically.  It's also wrong: I made the
adjustments after the first version of the system was entirely
complete, on observing that my initial strategy was under-performing.

To suggest that something shown to be necessary only after the build
was part of the requirements defies my understanding of the word
"requirements".  Maybe we just use the word differently.

The reason I find this line of argument disconcerting is that what
actually happened was that the software was built, and then I had to
go back and change it, because I did not perform the work that people
want to refer to as "premature optimization".  That's odd, because by
the rules you're citing, I should be seen as having done it right: I
built something that worked, and when underperformance was observed,
I went back and altered the system.  It was presented as an example
contrary to the assumption-driven ratios of man-hours, value, and
repeat-run counts.

Unfortunately, despite that, you're trying to recast this as my
having ignored a requirement that I then had to go back and fix, a
framing which undermines the idea that these pieces of work shouldn't
be done before the first version is complete.

It's a simple yes or no: should I have done the refinement before the
first version?

 * If yes, where's the line between that and premature optimization?
Does it involve knowing beforehand how everything will perform?  How
do we pick up the talent to know these things?
 * If no, then why isn't this a clear counterexample to the earlier
claims about "only if it's run a million times" and about the value
of programmer hours for one-use code?

From where I stand, it appears that you're trying to argue both
sides of when to work on efficiency: before and after performance
characteristics are known.  I must be misunderstanding some part of
your argument.  Perhaps you'd be so kind as to clarify?





> Or, you never realised it could be made orders of magnitude faster until you
> fixed a design flaw. (Optimisation by accident.)

I'd appreciate it if you wouldn't argue with me on the basis of
guesses you're making about code you haven't seen.  Neither of your
cases was correct: it wasn't built into my requirements, and I didn't
realize there would be a problem until I saw the lack of progress in
the table.  The thing I fixed was the throughput overhead of having
the work done in the client application instead of in the database
itself, using stored procedures.
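
For concreteness, the shape of the change was roughly the following
(a minimal sketch, not my actual code; the module, table, and
procedure names are placeholders, and the OTP odbc application is
used purely for illustration):

    -module(batch_sketch).
    -export([client_side/2, server_side/1]).

    %% Before: the client drives the work row by row, paying a
    %% network round trip and marshalling cost for every item.
    client_side(ConnRef, Ids) ->
        lists:foreach(
          fun(Id) ->
                  {updated, _} = odbc:param_query(
                                   ConnRef,
                                   "UPDATE work_items SET state = 'done' WHERE id = ?",
                                   [{sql_integer, [Id]}])
          end,
          Ids).

    %% After: one round trip; a stored procedure on the database
    %% side walks the rows itself, so the per-row cost never leaves
    %% the server.  (The exact CALL syntax varies by database.)
    server_side(ConnRef) ->
        odbc:sql_query(ConnRef, "{call mark_all_done()}").

The point is not the specific calls; it's that the per-row work moves
out of the client and into the database, so it stops paying a round
trip per item.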

I hope you won't amend the platitude again to keep coping with the
shifting sands of new information.  That's generally understood to be
a red flag that the rule of thumb has failed.





>> Skepticism in all things is the basis of deep understanding.
>
> ...and that includes skepticism about the reflex and attendant
> "rationalisations" for micro-optimising early. (Which is just wasting a
> different and more valuable resource.)

Luckily, since I made no rationalization for micro-optimizing early,
this isn't germane.  All I did was point out that platitudes don't
scale multiplicatively with Moore's law.

I believe that the attitude that any planning regarding performance
is early micro-optimization, and the groupthink that inevitably comes
with it, does the competent programmer nearly as much disservice as
actual premature optimization does.

It is interesting, though, that you seem to believe my carefully
defended argument, containing examples which undermine your current
assertion of value, was reflexive.  It's not actually the case, sir,
that anyone who disagrees with you is just blurting out things they
haven't thought through.  Some people think very carefully about the
things they disagree with.

It would be appreciated if we could retain a civil and respectful tone
moving forward.


