[erlang-questions] high-volume logging via a gen_server
Mihai Balea
mihai@REDACTED
Mon Oct 4 21:42:17 CEST 2010
On Oct 4, 2010, at 2:57 PM, Dan Kelley wrote:
>
> So, what are good strategies to cope with a large incoming volume of
> messages that all need to wind up in the same logfile? Is there a more
> efficient way to write to disk than the simple io:format() call that I'm
> using above? What's a good way to parallelize the logging over multiple
> processes but keep all of the information in one file?
Logging is a typical producer-consumer issue. When your system produces more data than your logger can consume, you basically have a number of choices:
1. Log less data: are you sure you need all that stuff in your log file at all times? Maybe you want a more flexible logging system that can enable and disable various log areas and levels on demand (there is a small sketch of this below).
2. Speed up the logger: look into using buffered disk ops (the "file" module, sketched below). I suspect unbuffered disk access is where you lose most of your performance. Also, look into using error_logger with the log_mf_h handler; it writes log records in binary format and is very fast.
3. Use synchronous logging (see the gen_server sketch below). Slowing down the entire system a little is better than overloading the logger and having the VM crash due to out-of-memory conditions.
4. Decide whether it is acceptable to lose log items, and design your logger to drop messages when it gets too busy (sketched below as well).
5. Design your logger to spread the write load over several workers that log into separate files. If you add a timestamp or some other sort of index to each log item, you can merge them later into one temporally consistent file. This tends to get complicated, though; I would try the other options first. (There is a fan-out sketch at the end, below.)
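For (1), here is a minimal sketch of runtime-switchable levels, using a public ETS table as the switch. The module name log_level, the table name log_conf and the level ranking are my inventions, not anything from your code:

    -module(log_level).
    -export([init/0, set/1, should_log/1]).

    %% One public named table holds the current threshold; callers read it
    %% directly, so checking a disabled level costs only an ETS lookup.
    init() ->
        ets:new(log_conf, [named_table, public]),
        set(info).

    set(Level) ->
        ets:insert(log_conf, {level, Level}).

    should_log(Level) ->
        [{level, Threshold}] = ets:lookup(log_conf, level),
        rank(Level) >= rank(Threshold).

    rank(debug)   -> 0;
    rank(info)    -> 1;
    rank(warning) -> 2;
    rank(error)   -> 3.

A producer then wraps every log call in should_log/1 and skips the formatting work entirely when the level is off.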
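For (2), the key file:open/2 option is delayed_write. A sketch; the file name and buffer sizes are arbitrary:

    %% Writes are buffered in the emulator and flushed when 64 KB has
    %% accumulated or 2 seconds have passed, instead of one disk op per line.
    {ok, Fd} = file:open("app.log",
                         [append, raw, {delayed_write, 64 * 1024, 2000}]),
    ok = file:write(Fd, ["a log line", $\n]),
    %% A raw file may only be used by the process that opened it, which
    %% fits a single gen_server owning the log file.
    ok = file:close(Fd).   %% close flushes whatever is still buffered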
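For (3), making the logger synchronous is mostly a one-line change on the client side: gen_server:call instead of gen_server:cast. A sketch combining it with the buffered file above (module and function names are mine):

    -module(sync_logger).
    -behaviour(gen_server).
    -export([start_link/1, log/1]).
    -export([init/1, handle_call/3, handle_cast/2]).

    start_link(Path) ->
        gen_server:start_link({local, ?MODULE}, ?MODULE, Path, []).

    %% call, not cast: the producer blocks until the write is done, so the
    %% logger's mailbox can never grow faster than the disk can drain it.
    log(Msg) ->
        gen_server:call(?MODULE, {log, Msg}, 5000).

    init(Path) ->
        {ok, Fd} = file:open(Path,
                             [append, raw, {delayed_write, 64 * 1024, 2000}]),
        {ok, Fd}.

    handle_call({log, Msg}, _From, Fd) ->
        ok = file:write(Fd, [Msg, $\n]),
        {reply, ok, Fd}.

    handle_cast(_Msg, Fd) ->
        {noreply, Fd}.

The 5000 ms timeout means a producer crashes rather than hangs forever if the logger wedges; tune to taste.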
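For (4), one approach is to have the logger sample its own mailbox and drop when it falls behind, e.g. as a cast-handling clause in a logger like the one above (the 10000 threshold is a number I made up):

    handle_cast({log, Msg}, Fd) ->
        {message_queue_len, QLen} = process_info(self(), message_queue_len),
        case QLen > 10000 of
            true ->
                %% Overloaded: drop this item rather than grow without bound.
                {noreply, Fd};
            false ->
                ok = file:write(Fd, [Msg, $\n]),
                {noreply, Fd}
        end.

You could also count the drops and emit a single "dropped N messages" line once the queue recovers, so the gap is at least visible in the file.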
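And for (5), the fan-out itself is simple; the merge step is where the complexity lives. A sketch of the producer side, assuming four workers registered as logger_1..logger_4 (all names invented here):

    -define(WORKERS, 4).

    %% phash2 gives a stable worker per producer pid, so one process's
    %% messages stay ordered within its worker's file; the timestamp is
    %% what lets the per-worker files be merge-sorted afterwards.
    log(Msg) ->
        I = erlang:phash2(self(), ?WORKERS) + 1,
        Worker = list_to_atom("logger_" ++ integer_to_list(I)),
        gen_server:cast(Worker, {log, os:timestamp(), Msg}).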
Hope at least some of the above will be helpful :)
Cheers,
Mihai