Performance of term_to_binary vs binary_to_term
Tue Jun 8 14:57:18 CEST 2021
Why would decoding a term create *any* garbage in typical cases?
One source of garbage in my Smalltalk library is that floats are
represented as an integer power-of-two scale applied to an integer
(which might be a bignum), so the second integer (if large) is
garbage. But Erlang doesn't do that: it represents a float as
8 binary bytes. The reason for my scheme is that my Smalltalk had to
deal with double extended, which could be 64, 80, 96, or 128 bits,
so the external representation had to accommodate that, but Erlang
supports 64-bit IEEE doubles only.
Erlang's external format follows the ASN.1 Type-Length-Value
principle (more or less), so that when binary_to_term/1 reads
something, it knows exactly what to allocate and how big it must be.
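To make the float case concrete: per the documented external term
format, a float encodes to the version byte 131, the NEW_FLOAT_EXT
tag (70), and exactly 8 big-endian IEEE-754 bytes, so the decoder
knows the size before reading the payload. A quick shell sketch:

```erlang
%% A float always encodes to a fixed-size external term:
%% 1 version byte (131) + 1 tag byte (70 = NEW_FLOAT_EXT)
%% + 8 big-endian IEEE-754 bytes.
Bin = term_to_binary(3.14),
10 = byte_size(Bin),
<<131, 70, Payload:8/binary>> = Bin,
%% The payload is the IEEE double itself:
<<3.14/float>> = Payload.
```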
What am I missing here?
On Tue, 8 Jun 2021 at 00:47, Lukas Larsson <garazdawi@REDACTED> wrote:
> On Mon, Jun 7, 2021 at 2:38 PM Thomas Depierre <depierre.thomas@REDACTED>
>> Yes, there is a pretty simple answer :) Parsing is far harder than
>> serialization! For parsing, you have to read a bit of the binary stream,
>> find out what type it is, then translate the data into a data type, which
>> means allocating memory. On top of that, you need to validate that it is
>> not a mangled binary stream. And you need to do it piecemeal, keeping a
>> lot of state about the current step. This is far more complex than
>> translating a memory layout whose size you already know into a binary
>> stream.
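The validation cost is easy to see in the shell: chop one byte off a
well-formed term and binary_to_term/1 rejects the whole stream rather
than returning a partial term (a small sketch, terms chosen arbitrarily):

```erlang
%% Decoding must detect mangled input; a truncated stream
%% raises badarg instead of producing a partial term.
Bin = term_to_binary({hello, [1, 2, 3]}),
Truncated = binary:part(Bin, 0, byte_size(Bin) - 1),
{'EXIT', {badarg, _}} = (catch binary_to_term(Truncated)).
```

For untrusted input there is also binary_to_term/2 with the [safe]
option, which additionally refuses terms that would create new atoms.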
> Another thing that comes to mind is that the GC may be interfering with
> the results as binary_to_term would create more garbage than term_to_binary.
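One way to observe that decoding allocates on the heap while encoding
mostly walks existing terms is to watch the process's minor-GC count
around each call. A rough sketch (the counts and timings are
illustrative only and vary with emulator version and heap state):

```erlang
%% Rough sketch: compare GC activity and timing for encode vs decode.
Data = lists:seq(1, 100000),
Bin = term_to_binary(Data),
Gcs = fun() ->
          {garbage_collection, Info} =
              process_info(self(), garbage_collection),
          proplists:get_value(minor_gcs, Info)
      end,
G0 = Gcs(),
_ = binary_to_term(Bin),            % rebuilds ~100k list cells on the heap
G1 = Gcs(),
{EncUs, _} = timer:tc(erlang, term_to_binary, [Data]),
{DecUs, _} = timer:tc(erlang, binary_to_term, [Bin]),
io:format("minor GCs during decode: ~p; encode: ~pus, decode: ~pus~n",
          [G1 - G0, EncUs, DecUs]).
```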