[erlang-questions] unpacking big/little endian speed difference
Sergej Jurečko
sergej.jurecko@REDACTED
Tue Oct 13 14:59:20 CEST 2015
How come unpacking integers as big endian is faster than little endian,
when I'm running on a little-endian platform (Intel OS X)?
I know big endian is the Erlang default, but if you're turning binaries into
integers, the result ends up in the machine's native little-endian
representation anyway, so unpacking little endian should basically be a memcpy.
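For reference, here is a minimal sketch (module name `endian_demo` is made up) showing what the two segment specifiers mean: the same four bytes decode to different integers depending on the declared endianness.

```erlang
-module(endian_demo).
-export([demo/0]).

%% The byte 1 sits in the third position of a 4-byte binary.
%% Read as big endian it is the 2^8 digit; read as little
%% endian it is the 2^16 digit.
demo() ->
    Bytes = <<0, 0, 1, 0>>,
    <<Big:32/unsigned-big>> = Bytes,        %% 256
    <<Little:32/unsigned-little>> = Bytes,  %% 65536
    {Big, Little}.
```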
The test:

% Takes: ~1.3s
-define(ENDIAN, unsigned-big).
% Takes: ~1.8s
% -define(ENDIAN, unsigned-little).

loop() ->
    L = [<<(random:uniform(1000000000)):32/?ENDIAN, 1:32>>
         || _ <- lists:seq(1, 1000)],
    S = os:timestamp(),
    loop1(1000, L),
    timer:now_diff(os:timestamp(), S).

loop1(0, _) ->
    ok;
loop1(N, L) ->
    lists:sort(fun(<<A:32/?ENDIAN, _/binary>>, <<B:32/?ENDIAN, _/binary>>) ->
                   A =< B
               end, L),
    loop1(N - 1, L).