obscure erlang-related publication
Richard A. O'Keefe
Wed Mar 29 09:22:22 CEST 2006
"Ulf Wiger (AL/EAB)" <> wrote:
> > Their evaluation method basically amounts to making
> > your preconceived ideas of what kind of solution you
> > are looking for seem respectable by wrapping
> > (arbitrary!) numbers around them.
> Do you have a link to a better method?
The method boils down to the usual "choose a list of features/aspects,
assign a crude 'number of stars' rating to each, and add up your scores."
The main good point was considering more than one problem situation, so
using more than one set of weights.  However, the connections between
successive tables were not entirely clear, so I could not see how to
adapt their method to my own situations.
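The scoring scheme I'm describing can be sketched in a few lines (the
feature names, star ratings, and weights below are invented for
illustration; they are not the paper's figures):

```python
# Crude star ratings per feature, per language (illustrative only).
ratings = {
    "LangA": {"tools": 4, "concurrency": 2, "libraries": 5},
    "LangB": {"tools": 2, "concurrency": 5, "libraries": 3},
}

# One set of weights per problem situation -- the paper's good point
# was using more than one such set.
weight_sets = {
    "situation 1": {"tools": 1, "concurrency": 3, "libraries": 2},
    "situation 2": {"tools": 2, "concurrency": 1, "libraries": 3},
}

def score(stars, weights):
    # The "add up your scores" step: a weighted sum of star ratings.
    return sum(weights[f] * stars[f] for f in weights)

for situation, weights in weight_sets.items():
    for lang, stars in ratings.items():
        print(situation, lang, score(stars, weights))
```

The arbitrariness lives entirely in the `ratings` and `weight_sets`
tables; the arithmetic merely launders it into a single number.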
When it comes to things like tool support, I have no idea at all (from the
paper) how to derive the entries. In fact some of the entries in some of
the tables were a surprise to me.
I'm not sure that a good _general_ language evaluation method actually
exists. The data gathering would be so costly that only a government
could do it. This doesn't mean that I don't think rational choices can
be made in specific situations.
> The evaluation described in the paper is _far_ less
> arbitrary than many evaluations I've witnessed
> elsewhere. I think the way these comparisons are
> handled in industry is sorely lacking.
Well, yes. "In the country of the blind, the one-eyed man is king."
There _are_ techniques for extracting scales from data like this.
(I'm particularly thinking of Multiple Correspondence Analysis.)
But they need a lot _more_ data. It would, for example, have been
very informative if, instead of reporting "pooled" estimates for
each language, each different person's ratings had been shown.
As it is, one may reasonably guess that a *lot* of rater disagreement
has been hidden from critical eyes.
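A toy example of what pooling can hide (the numbers are made up, not
taken from the paper): two languages with the same pooled mean rating
can have completely different rater agreement behind it.

```python
from statistics import mean, stdev

# Hypothetical per-rater star ratings for one feature.
per_rater = {
    "LangA": [3, 3, 3, 3],   # the raters agree
    "LangB": [1, 5, 1, 5],   # the raters disagree sharply
}

for lang, scores in per_rater.items():
    # Both pool to a mean of 3, but the spread tells different stories.
    print(lang, "pooled mean:", mean(scores),
          "spread (sd):", round(stdev(scores), 2))
```

Reporting only the pooled means throws the second column away, which
is exactly the disagreement a critical reader would want to see.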
More information about the erlang-questions mailing list