[erlang-questions] How would you implement TTL for ETS based cache?

Max Bourinov <>
Fri Mar 15 09:06:34 CET 2013


This will not work for me. I need pro-active TTL.
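For pro-active expiry, one approach (a sketch, not something proposed in this thread; module and table names are made up) is to keep a second, `ordered_set` ETS table keyed by `{ExpiresAt, Key}` so that expired entries sort first, and let a sweeper process walk it periodically, firing a callback (e.g. a DB write) for each expired entry:

```erlang
%% Sketch of a pro-active TTL sweeper. All names here are hypothetical.
%% 'cache' is a set table of {Key, Value}; 'cache_exp' is an ordered_set
%% keyed by {ExpiresAt, Key}, so the soonest-expiring entries come first.
-module(ttl_sweeper).
-export([start/2, put/3]).

start(IntervalMs, Callback) ->
    ets:new(cache, [named_table, public, set]),
    ets:new(cache_exp, [named_table, public, ordered_set]),
    spawn(fun() -> loop(IntervalMs, Callback) end).

put(Key, Value, TTLms) ->
    ExpiresAt = erlang:monotonic_time(millisecond) + TTLms,
    ets:insert(cache, {Key, Value}),
    ets:insert(cache_exp, {{ExpiresAt, Key}}).

loop(IntervalMs, Callback) ->
    receive after IntervalMs -> ok end,
    sweep(erlang:monotonic_time(millisecond), Callback),
    loop(IntervalMs, Callback).

sweep(Now, Callback) ->
    case ets:first(cache_exp) of
        {ExpiresAt, Key} when ExpiresAt =< Now ->
            case ets:lookup(cache, Key) of
                [{Key, Value}] -> Callback(Key, Value);  %% e.g. write to DB
                [] -> ok
            end,
            ets:delete(cache, Key),
            ets:delete(cache_exp, {ExpiresAt, Key}),
            sweep(Now, Callback);
        _ ->
            ok  %% table empty, or nothing has expired yet
    end.
```

The sweep cost is proportional to the number of expired entries per interval, not to the total cache size, which is what makes this cheaper than a periodic full traversal.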

Best regards,
Max



On Fri, Mar 15, 2013 at 8:28 AM, Bach Le <> wrote:

> I like this approach. Thanks for sharing. For TTL per entry, you can do it
> lazily. Retrieve the entry and look at the TTL; if it's supposed to have
> expired, return nothing and don't promote this entry. The generation will
> eventually be "garbage collected" anyway.
>
> Also, instead of creating and deleting tables, is it faster to clear the
> table and reuse it? Assuming that ETS does some memory pooling, clearing
> might be more efficient than just dropping.
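The lazy per-entry check described above might look like this (a sketch; the `fetch/2` name and the `{Key, Value, ExpiresAt}` record layout are assumptions, not from the thread):

```erlang
%% Lazy TTL check: entries are stored as {Key, Value, ExpiresAt},
%% with ExpiresAt in milliseconds on the monotonic clock.
fetch(Tab, Key) ->
    Now = erlang:monotonic_time(millisecond),
    case ets:lookup(Tab, Key) of
        [{Key, Value, ExpiresAt}] when ExpiresAt > Now ->
            {ok, Value};
        [{Key, _Value, _Expired}] ->
            ets:delete(Tab, Key),   %% expired: drop, do not promote
            not_found;
        [] ->
            not_found
    end.
```

Expired entries that are never looked up again are reclaimed when their generation's table is dropped, so no separate sweeper is needed.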
>
>
> On Thursday, March 14, 2013 10:50:56 PM UTC+8, Valentin Micic wrote:
>
>> Hi Max,
>>
>> We have implemented something similar in the past -- we called it a
>> generational cache (I hope no one accuses me of plagiarism, for English is
>> but a finite language after all. And so is the problem/solution space...).
>>
>> The gist of it is that instead of using a single ETS table for a cache,
>> you would use two or more "disposable" ETS tables.
>> The generation management is implemented as a process that, once some
>> criteria are met, destroys the oldest table and creates a new one in its
>> stead.
>>
>> The youngest cache (the newest ETS table) is always queried first. If the
>> value is not found, the older tables are queried, and if an entry with the
>> corresponding key is found -- it is moved to the youngest cache.
>> If none of the ETS tables in the cache set contains the key, then the
>> value is loaded from some external source and inserted into the youngest
>> cache (the newest ETS table).
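The lookup-and-promote logic just described can be sketched as follows (a sketch only; the function names and the `Loader` fun are hypothetical, and tables are assumed to hold `{Key, Value}` pairs):

```erlang
%% Generational lookup: Tables is ordered youngest-first.
%% Loader is a fun(Key) -> Value that fetches from the external source.
lookup(Key, [Newest | Older], Loader) ->
    case ets:lookup(Newest, Key) of
        [{Key, Value}] -> Value;
        [] -> lookup_older(Key, Older, Newest, Loader)
    end.

lookup_older(Key, [], Newest, Loader) ->
    Value = Loader(Key),                      %% miss everywhere: load it
    true = ets:insert(Newest, {Key, Value}),  %% cache in youngest table
    Value;
lookup_older(Key, [Tab | Rest], Newest, Loader) ->
    case ets:lookup(Tab, Key) of
        [{Key, Value}] ->
            true = ets:insert(Newest, {Key, Value}),  %% promote to youngest
            Value;
        [] ->
            lookup_older(Key, Rest, Newest, Loader)
    end.
```

Entries left behind in an old table simply disappear when that table is dropped, which is what gives the scheme its LRU-like behaviour.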
>>
>> This method worked quite well for us in the past, and we've implemented
>> several variants of it, e.g. time-based, size-based, and a combination of
>> the two.
>> Bear in mind that when you drop a very large ETS table (and I mean very
>> large, as in Gigabytes of memory), it may take some time to release the
>> allocated memory, but that may be a problem for another day.
>>
>> The downside is that this does not really correspond to your requirement
>> to keep a TTL per single cache entry.
>> The upside is that it is much faster and cheaper than traversing a single
>> cache periodically. Also, it ends up keeping only the entries that are
>> most frequently used.
>>
>> Hope this helps.
>>
>> Kind regards
>>
>> V/
>>
>>
>> On 14 Mar 2013, at 4:15 PM, Max Bourinov wrote:
>>
>> > Hi Erlangers,
>> >
>> > How would you implement TTL with callback for ETS based cache?
>> >
>> > I want a callback to be triggered when the TTL expires, so that the
>> cached value goes to the DB.
>> >
>> > For key I use 32 byte long binary(), value is not bigger than 8 KB.
>> >
>> > Any suggestions?
>> >
>> > p.s. erlang:send_after/3 is the most brutal approach, but how would it
>> behave when the cache is under load?
>> >
>> > Best regards,
>> > Max
>> >
>> > _______________________________________________
>> > erlang-questions mailing list
>> > 
>> > http://erlang.org/mailman/listinfo/erlang-questions
>>
>>
>
>
>

