[erlang-questions] list vs binary performance, destructuring and consing
Erik Pearson
erik@REDACTED
Wed Oct 24 17:58:10 CEST 2012
Interesting comments. What exactly is getting copied? The accumulator
binary? In this case the allocated size of the binary (doesn't it default
to 256 bytes or twice the initial contents, whichever is larger?) is much
greater than the size of the bytes stuffed into it for accumulation. (See
comments at the very end -- the Erlang docs state that the initial
iteration will cause the binary to be copied.)
In any case, I would hope that such operations are linear in the size of
the sequence. On the other hand, I think the cost of the operations on
each is the salient point, no? In this case binaries appear to be more
expensive both to iterate over (through destructuring) and to accumulate.
I didn't necessarily think that these types of operations on binaries were
faster, but there has been some fuss over binaries in the text processing
side of things -- web servers, json parsers, etc. The reason I ran this
little test was that while creating a json parser I wanted to see for
myself.
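For reference, the shape of my test was roughly this (a sketch of the
kind of comparison I mean, not my actual test code; the function names
here are made up):

%% Walk and rebuild an N-byte sequence once as a list and once as a
%% binary, timed with timer:tc/1 (microseconds).
bench(N) ->
    L = lists:duplicate(N, $a),
    B = list_to_binary(L),
    {TL, _} = timer:tc(fun() -> walk_list(L, []) end),
    {TB, _} = timer:tc(fun() -> walk_bin(B, <<>>) end),
    {TL, TB}.

%% cons onto the accumulator, reverse at the end
walk_list([H|T], Acc) -> walk_list(T, [H|Acc]);
walk_list([], Acc) -> lists:reverse(Acc).

%% destructure one byte at a time, append to the accumulator
walk_bin(<<H, T/binary>>, Acc) -> walk_bin(T, <<Acc/binary, H>>);
walk_bin(<<>>, Acc) -> Acc.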
I suppose a lot of the binary excitement has to do with just "utf8 in, utf8
out" efficiency.
Still, I think it would be surprising to some to find binaries slower for
normal iterative processing. Culturally, Erlang list strings have a bad
rep. Binaries to the rescue!
I wasn't able to find documentation comparing binary string to list string
operations. However, there is advice as to the most efficient way to
iterate over and accumulate binary strings.
>From "Programming Efficiently with Binaries and Bit Strings":
- When you are building new bit strings, make sure you append new data to
the end of an old bit string
- When you iterate over a bit string, use a direct style matching similar
to what you would do for lists (see the sketch after this list)
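Taken together, I read those two recommendations as something like this
(my own sketch, not from the docs):

%% Match bytes off the front directly; append new data at the end.
to_upper(Bin) when is_binary(Bin) ->
    to_upper(Bin, <<>>).

to_upper(<<C, Rest/binary>>, Acc) when C >= $a, C =< $z ->
    to_upper(Rest, <<Acc/binary, (C - 32)>>);  %% append at the end
to_upper(<<C, Rest/binary>>, Acc) ->
    to_upper(Rest, <<Acc/binary, C>>);
to_upper(<<>>, Acc) ->
    Acc.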
And from the section on optimizations:
The basis of this optimization is that the emulator can create
bit strings with extra uninitialized space, so if a bit string is built
by continuously appending to a binary, the data does not need to be copied
if there is enough uninitialized data at the end of the bit string.
One point I don't quite understand. This page:
http://www.erlang.org/doc/efficiency_guide/binaryhandling.html#match_context
says, regarding this function:
my_list_to_binary(List) ->
    my_list_to_binary(List, <<>>).

my_list_to_binary([H|T], Acc) ->
    my_list_to_binary(T, <<Acc/binary,H>>);
my_list_to_binary([], Acc) ->
    Acc.
that on the first iteration in my_list_to_binary/2, Acc will be copied, but
that it will not be copied after this. I don't know why any copying occurs
at all, since it is evident that in this case the initial binary <<>> is
used solely for the append operation in the first clause. From what I
understand, it is multiple references that cause copying to occur upon
append, since the append operation may result in the binary being moved in
memory, and the other references would otherwise become invalid. I only see
one reference threading through this function.
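To illustrate the multiple-references point, this is the pattern the
guide warns about, as I understand it (a sketch; the variable names and
values are mine):

Bin0 = <<0>>,
Bin1 = <<Bin0/binary, 1, 2, 3>>,  %% first append: contents copied into a
                                  %% larger binary with extra space
Bin2 = <<Bin1/binary, 4, 5, 6>>,  %% cheap: extends Bin1 in place
Bin3 = <<Bin1/binary, 7, 8, 9>>,  %% Bin1 was already appended to, so this
                                  %% second append forces a copy of Bin1
{Bin2, Bin3}.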
I would think it would be allocated in my_list_to_binary/1, passed by
reference, have an initial size of 256, be appended to without any copying,
and only maybe be copied if the accumulation exceeded 256 bytes (the binary
may be moved when it is extended). Is the implication that the initial
blank binary <<>> is a different type of binary, say fixed-length or
static, and that the first append operation triggers a copy of that
binary into a newly allocated extendable binary (with an initial size of
twice the original or 256, whichever is bigger)?
I really have to stop obsessing over this and move on; it's a wonder where
the mind goes when one has a cold...
Erik.
On Tuesday, October 23, 2012, Erik Søe Sørensen wrote:
> On 23/10/2012 14.00, "Jesper Louis Andersen" <
> jesper.louis.andersen@REDACTED> wrote:
> >
> > There is no reason to believe binaries to be faster than lists for this
> benchmark. The list construction can take its cells from the process heap
> and since it is garbage collected, doing so is an (amortized) O(1) pointer
> move. The reverse takes O(N). So we are looking at N*O(1) + O(N) = O(N).
> The binary construction has to reallocate the binary once in a while which
> imposes a copy on the binary to make it larger. In the worst case, and when
> it is badly implemented, this is an O(N^2) operation. If it is better
> implemented, we can shave it down to O(N lg N) or thereabout, depending on
> the slack amount we want to allocate. But it does not seem as cheap as the
> other solution, barring constants.
>
> The normal slack amount to use is some factor times the current length -
> e.g. a doubling. This gives O(n) amortized cost (remember that while you
> do need to copy O(lg n) times, you copy far fewer than n elements each
> time - it sums up to <2n copyings).
>
> So asymptotically it should be on a par with lists (I'm fairly certain
> that BEAM is smart about this) - but the constant factor is still wanting...
>