[erlang-questions] refactoring a very large record
Joel Reymont
joelr1@REDACTED
Thu Oct 20 16:21:15 CEST 2011
On Oct 20, 2011, at 3:14 PM, Jesper Louis Andersen wrote:
> I always hesitate when I hear about large records of this size. If
> they are only read, or mostly read, they tend to be fast. But they
> don't support updates very well as it requires you to write a new
> record object of size 80.
Are you sure?
Doesn't the record tuple hold pointers to its elements, so an update only swaps the modified pointers?
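For context, a minimal sketch of what a record update actually does (the record and module names here are made up):

```erlang
%% sketch.erl -- illustrative only; a real record might have 80 fields.
-module(sketch).
-export([demo/0]).

-record(big, {a, b, c}).

demo() ->
    R0 = #big{a = 1, b = 2, c = 3},
    %% R0#big{b = 20} compiles down to setelement/3: the runtime
    %% allocates a fresh tuple of the full record size and copies
    %% every slot, replacing only the updated one. The *values* the
    %% slots point to are shared, but the tuple itself is rebuilt,
    %% so update cost grows with the number of fields.
    R1 = R0#big{b = 20},
    {R0#big.b, R1#big.b}.
```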
> I tend to have modules that operate on records.
Aye! I'm doing this now.
> I rarely access the record
> directly, but I export "views" of the data in the record which I can
> pattern match on outside.
What does this mean?
I thought that by making the record opaque you lose the ability to pattern-match on it.
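One reading of the "views" idea, sketched with invented record and field names: the record stays private to its module, but the module exports small tagged tuples that callers *can* pattern-match on.

```erlang
%% player.erl -- hypothetical module; record and fields are illustrative.
-module(player).
-export([new/2, position/1]).

%% The big record is never exported, so its shape can change freely.
-record(player, {name, x, y, hp = 100}).

new(Name, {X, Y}) ->
    #player{name = Name, x = X, y = Y}.

%% A "view": a small, stable tuple callers match on instead of the record:
%%   {position, X, Y} = player:position(P)
position(#player{x = X, y = Y}) ->
    {position, X, Y}.
```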
> You can store the 80-element tuple in ETS which is
> expensive as you copy the tuple to the ETS store.
Not this way.
Think of the record's fields as individual keys in ETS instead.
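A sketch of that layout (table, id, and field names are illustrative): each field becomes its own `{Id, Field}` key, so a single-field write only copies one small tuple into ETS rather than the whole 80-slot record.

```erlang
%% ets_fields.erl -- illustrative sketch of field-per-key storage.
-module(ets_fields).
-export([demo/0]).

demo() ->
    T = ets:new(state, [set]),
    Id = player1,
    %% Seed two fields of the same logical record as separate entries.
    ets:insert(T, [{{Id, name}, "joel"}, {{Id, hp}, 100}]),
    %% Updating one field copies only this tiny tuple, not 80 slots.
    ets:insert(T, {{Id, hp}, 95}),
    [{_, HP}] = ets:lookup(T, {Id, hp}),
    HP.
```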
> Another viable option is to make the 80-record tuple into a process.
> Then one can move some of the work to the tuple itself rather than
> querying for it and then acting upon it locally in processes.
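The process-per-record idea in the quote above could look roughly like this (a sketch using a map for the state; all names are made up):

```erlang
%% rec_proc.erl -- illustrative: the state lives inside a process loop,
%% and work is sent to it instead of copying the tuple out to callers.
-module(rec_proc).
-export([start/1, get/2, set/3]).

start(State) ->
    spawn(fun() -> loop(State) end).

loop(State) ->
    receive
        {get, Key, From} ->
            From ! {Key, maps:get(Key, State)},
            loop(State);
        {set, Key, Val} ->
            loop(State#{Key => Val})
    end.

get(Pid, Key) ->
    Pid ! {get, Key, self()},
    receive {Key, Val} -> Val end.

set(Pid, Key, Val) ->
    Pid ! {set, Key, Val}.
```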
Message passing was 10x slower than function calls last time I checked ;-).
Thanks, Joel
--------------------------------------------------------------------------
- for hire: mac osx device driver ninja, kernel extensions and usb drivers
---------------------+------------+---------------------------------------
http://wagerlabs.com | @wagerlabs | http://www.linkedin.com/in/joelreymont
---------------------+------------+---------------------------------------