[erlang-questions] refactoring a very large record

Jesper Louis Andersen <>
Thu Oct 20 17:33:52 CEST 2011


On Thu, Oct 20, 2011 at 17:25, Matthew Sackman <> wrote:
> On Thu, Oct 20, 2011 at 05:19:36PM +0200, Jesper Louis Andersen wrote:
>> Right, A record is a tuple. A tuple is a set of pointers to elements
>> (or tagged integers). When you update an element, a new tuple gets
>> written. This new tuple copies all pointers from the old one and
>> updates the pointers for newly written elements. This is necessary due
>> to persistence.
>
> Please explain more about this. Why can't analysis tell you how many
> live pointers there are to the old record and thus inform the VM
> whether or not it can do update-in-place?
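To see why persistence forces the copy, here is a minimal sketch (the
module and record names are made up for illustration) showing that both
the old and the new version of a record stay observable after an update:

```erlang
-module(persist).
-export([demo/0]).

-record(r, {a, b, c}).

demo() ->
    Old = #r{a = 1, b = 2, c = 3},
    New = Old#r{b = 20},     %% builds a fresh tuple; Old is untouched
    %% Both versions remain reachable, so updating the tuple in place
    %% would have silently changed Old as well.
    {Old#r.b, New#r.b}.      %% {2, 20}
```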

In specialized cases, this is indeed possible. If the analysis can
establish that a value is used linearly -- that is, there is only ever
one live reference to it between updates -- then a destructive update
on the tuple is safe. But look at this function:

foo({A, _B}) ->
   {A, 37}.

The problem here is that to figure out if we may or may not do the
destructive update, we need to know something about the behaviour of
the caller. If the caller does, e.g.,

Z = {123, 42},
{Z, foo(Z)}.

we are not allowed to destructively update Z, because the original Z is
needed as well. A further complication is that this piece of code may
originate in another module, so determining whether the update is safe
requires whole-program analysis.
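Putting the callee and that caller together makes the aliasing concrete
(a minimal sketch; the module name is made up):

```erlang
-module(alias).
-export([demo/0]).

foo({A, _B}) ->
    {A, 37}.

demo() ->
    Z = {123, 42},
    Pair = {Z, foo(Z)},   %% Z is still live after the call
    %% If foo/1 had overwritten its argument in place, the first
    %% element of Pair would have been corrupted to {123, 37}.
    Pair.                 %% {{123, 42}, {123, 37}}
```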

There are type systems built for the purpose of tracking such
properties. See for instance uniqueness types in the language Clean.
Such types can guarantee an access pattern under which the construction
above is a type error, and under that guarantee it becomes safe to
destructively update Z.

-- 
J.
