[erlang-questions] unpacking big/little endian speed difference
Wed Oct 14 09:05:42 CEST 2015
The difference is really in the two opcodes used: "i_bs_get_integer_32_rfId" and "i_bs_get_integer_small_imm_xIfId".
Looking at the code, "i_bs_get_integer_32_rfId" can extract a 32-bit big-endian integer from a bitstring. It is quite a simple function that cannot do much more, and it even has a fast path for binaries (bitstrings with byte alignment).
On the other side, "i_bs_get_integer_small_imm_xIfId" is a quite complex function that can extract an integer of any bit size, signed or unsigned, little or big endian. It is quite complex and even uses some temporary buffers.
So the reason is that there is a special "fast" opcode for extracting a 32-bit big-endian integer (probably because the authors of the BEAM VM thought it would be more useful).
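To make the distinction concrete, here is a minimal sketch of the two match forms involved (module and function names are my own, not from the original post): the big-endian pattern compiles to the specialized 32-bit opcode, while the little-endian pattern goes through the generic extraction path.

```erlang
-module(endian_demo).
-export([big32/1, little32/1]).

%% Matches a 32-bit big-endian unsigned integer at the head of a binary;
%% this pattern is served by the specialized fast opcode.
big32(<<X:32/unsigned-big, _Rest/binary>>) -> X.

%% Matches a 32-bit little-endian unsigned integer; this pattern goes
%% through the generic (and slower) extraction function.
little32(<<X:32/unsigned-little, _Rest/binary>>) -> X.
```

For example, `endian_demo:big32(<<1,2,3,4>>)` yields 16909060 (16#01020304), while `endian_demo:little32(<<1,2,3,4>>)` yields 67305985 (16#04030201).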
On Tue, Oct 13, 2015 at 4:31 PM, Kostis Sagonas <> wrote:
> On 10/13/2015 02:59 PM, Sergej Jurečko wrote:
>> How come unpacking integers as big endian is faster than little endian,
>> when I'm running on a little endian platform (intel osx)?
>> I know big endian is erlang default, but if you're turning binaries into
>> integers, you must turn them into little endian anyway, so unpacking
>> should basically be memcpy.
>> The test:
>> % Takes: ~1.3s
>> % Takes: ~1.8s
>> % -define(ENDIAN,unsigned-little).
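The test module itself was scrubbed from the archive; only the timing comments and the ENDIAN define survive. A minimal reconstruction of what such a benchmark might have looked like (module name `endian` and the `loop/0` export match the transcript below, but the loop count and unpacking structure are assumptions):

```erlang
-module(endian).
-export([loop/0, unpack/1]).

%% Toggle between the two variants being compared; the original post
%% reported ~1.3s for unsigned-big and ~1.8s for unsigned-little.
-define(ENDIAN, unsigned-big).
%% -define(ENDIAN, unsigned-little).

%% Time many passes of unpacking a binary into 32-bit integers.
loop() ->
    Bin = binary:copy(<<1,2,3,4>>, 1000),
    {Micros, _} = timer:tc(fun() -> run(100000, Bin) end),
    io:format("~p us~n", [Micros]).

run(0, _Bin) -> ok;
run(N, Bin) ->
    _ = unpack(Bin),
    run(N - 1, Bin).

%% Extract every 32-bit integer with the configured endianness.
unpack(<<X:32/?ENDIAN, Rest/binary>>) -> [X | unpack(Rest)];
unpack(<<>>) -> [].
```

The macro in the type-spec position (`<<X:32/?ENDIAN, ...>>`) is the idiom the original `-define(ENDIAN, ...)` line implies.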
> Well, if you are interested in performance, simply compile your module to
> native code and the time will drop to less than half...
> Eshell V7.1 (abort with ^G)
> 1> c(endian).
> 2> endian:loop().
> 3> endian:loop().
> 4> endian:loop().
> 5> endian:loop().
> 6> c(endian, [native]).
> 7> endian:loop().
> 8> endian:loop().