[erlang-patches] Optimization of beams string table generation
Björn Gustavsson
bgustavsson@REDACTED
Thu Jul 29 09:36:08 CEST 2010
2010/7/28 Paul Guyot <pguyot@REDACTED>:
> With 1000 such clauses:
> lists:foreach(fun(_) -> <<I:80>> = crypto:rand_bytes(10), S = erlang:integer_to_list(I, 16), io:format("f(<<\"~s\">>) -> atom~s;\n", [S, S]) end, lists:seq(0, 1000)).
>
> I get:
> core_module : 0.05 s 25922.5 kB
> v3_codegen : 0.09 s 6660.6 kB
> beam_asm : 0.28 s 2.6 kB
>
> vs
> core_module : 0.06 s 20558.1 kB
> v3_codegen : 0.08 s 6617.9 kB
> beam_asm : 0.03 s 1.7 kB
>
I get similar figures (without any native code).
>> Regarding the implementation, it will probably be
>> faster to append new strings to the string table
>> like this:
>>
>> NewDict = Dict#asm{strings = <<Strings/binary,StrBin/binary>>,
>>                    string_offset=NextOffset+byte_size(StrBin)},
Compiling a module with 5000 clauses, I can confirm that this
version is faster.
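To illustrate why: appending to the end of an accumulated binary lets
the runtime reuse extra allocated space instead of copying the whole
table on every addition. A minimal sketch of that pattern (this is not
the actual beam_asm code; the record and function names here are made
up for illustration):

```erlang
-module(strtab).
-export([new/0, add/2]).

-record(tab, {strings = <<>> :: binary(),
              next_offset = 0 :: non_neg_integer()}).

new() -> #tab{}.

%% Append StrBin to the table; return its offset and the new table.
%% <<Strings/binary,StrBin/binary>> appends to the previous result,
%% which the VM's binary append optimization makes cheap.
add(StrBin, #tab{strings = Strings, next_offset = NextOffset} = Tab) ->
    {NextOffset,
     Tab#tab{strings = <<Strings/binary,StrBin/binary>>,
             next_offset = NextOffset + byte_size(StrBin)}}.
```

The alternative of rebuilding the table as
<<StrBin/binary,Strings/binary>> (or concatenating in front) would copy
the accumulated binary each time, giving quadratic behavior as the
table grows.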
>>
>> Regarding the test case, a better place for it should be the
>> compilation_SUITE module.
>>
>> (The name of the test modules may not be the best, but
>> compile_SUITE (mainly) tests different compiler options,
>> while compilation_SUITE tests miscellaneous language features.)
>
> Do you want me to update the branch?
Normally I would ask you to update the branch yourself.
But since I have already done measurements using your
branch, it was easy enough for me to do the changes and
revise the commit message to emphasize the simplification
of the code rather than the optimization.
Here is the updated version (so far only in my own
github repository):
http://github.com/bjorng/otp/commit/173d1fd1c3fef385f73accc4b2bbb1b6f92ac3f5
If you approve this version, I will include it in 'pu' later
today.
--
Björn Gustavsson, Erlang/OTP, Ericsson AB