[eeps] [erlang-questions] New EEP: setrlimit(2) analogue for Erlang

Matthew Evans <>
Thu Feb 7 18:51:23 CET 2013




________________________________
> Date: Thu, 7 Feb 2013 16:27:49 +0100 
> From:  
> To:  
> CC:  
> Subject: Re: [erlang-questions] [eeps] New EEP: setrlimit(2) analogue 
> for Erlang 
> 
> I dug out what I wrote a year ago .. 
> 
> eep-draft: 
> https://github.com/psyeugenic/eep/blob/egil/system_limits/eeps/eep-00xx.md 
> 
> Reference implementation: 
> https://github.com/psyeugenic/otp/commits/egil/limits-system-gc/OTP-9856 
> Remember, this is a prototype and a reference implementation. 
> 
> There are a couple of issues not addressed, or at least left open-ended. 
> 
> * Should processes be able to set limits on other processes? I think 
> not, though my draft argues for it. It introduces unnecessary constraints 
> on erts and hinders performance. 'save_calls' is such an option. 

I agree, other processes should not be able to do this. It goes against Erlang's principle of process isolation.


> * ets - if your table increases beyond some limit. Who should we 
> punish? The inserter? The owner? What would be the rationale? We cannot 
> just punish the inserter, the ets table is still there taking a lot of 
> memory and no other process could insert into the table. They would be 
> killed as well. Remove the owner and hence the table (and potential 
> heir)? What kind of problems would arise then? Limits should be tied 
> into a supervision strategy and restart the whole thing. 

I think limiting just the owner is good enough. It would be nice if the inserter could be limited as well, but I imagine that's a non-trivial feature to implement.
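In the meantime, something like this can be approximated in user land. A minimal sketch of the "punish the owner" approach, assuming a hypothetical watchdog process (the module name, poll interval, and limit are my own choices, not anything from the EEP draft):

```erlang
%% Hedged sketch: a user-land approximation of an ets table size limit.
%% Polls ets:info(Tab, memory) and kills the owner -- taking the table
%% (and its data) with it -- once the limit is exceeded.
-module(ets_limit_watch).
-export([start/3]).

start(Tab, Owner, MaxWords) ->
    spawn(fun() -> loop(Tab, Owner, MaxWords) end).

loop(Tab, Owner, MaxWords) ->
    case ets:info(Tab, memory) of
        undefined ->
            %% Table is gone already; nothing left to watch.
            ok;
        Words when Words > MaxWords ->
            %% Punish the owner; the table dies with it unless an
            %% heir takes over.
            exit(Owner, ets_limit_exceeded);
        _ ->
            timer:sleep(1000),            % poll once per second
            loop(Tab, Owner, MaxWords)
    end.
```

Tied into a supervision tree, the owner's death would then trigger exactly the restart-the-whole-thing strategy you describe.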


> 
> * Message queues. In the current implementation of message queues we 
> have two queues. An inner one which is locked by the receiver process 
> while executing and an outer one which other processes will use and 
> thus not compete for a message queue lock with the executing process. 
> When the inner queue is depleted the receiver process will lock the 
> outer queue and move the entire thing to the inner one. Rinse and 
> repeat. The only guarantee we have to ensure with our implementation 
> is: signal order between two processes. So, in the future we might have 
> several queues to improve performance. If you introduce monitoring of 
> the total number of messages in the abstracted queue (all the queues), this 
> will most probably kill any sort of scalability. For instance, a sender 
> would not be allowed to check the inner queue for this reason. Would a 
> "fast" counter check in the inner queue be allowed? Perhaps if it is 
> fast enough, but any sort of bookkeeping costs performance. If we 
> introduce even more queues for scalability reasons this will cost even 
> more. 

I wasn't aware that the workings of the message queues were that complex internally. You are correct that this monitoring must not come at the expense of performance. Maybe the check could be made when the scheduler decides to schedule and execute a process, or every X reductions? This means a process may not terminate at exactly the user-defined threshold, but I think most people can live with that.
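For what it's worth, the "check every so often, accept overshoot" idea can already be imitated from user land today, without touching the queue locks at all. A rough sketch, assuming a hypothetical watchdog (module name, poll interval, and exit reason are illustrative only):

```erlang
%% Hedged sketch: approximate a message-queue length limit by polling
%% process_info/2 from a separate process. Because we only poll, the
%% watched process can overshoot the threshold between checks -- the
%% same trade-off as checking only at scheduling points.
-module(queue_limit_watch).
-export([start/2]).

start(Pid, MaxLen) ->
    spawn(fun() -> loop(Pid, MaxLen) end).

loop(Pid, MaxLen) ->
    case erlang:process_info(Pid, message_queue_len) of
        undefined ->
            %% Watched process has already exited.
            ok;
        {message_queue_len, Len} when Len > MaxLen ->
            exit(Pid, message_queue_limit);
        _ ->
            timer:sleep(100),             % check ten times per second
            loop(Pid, MaxLen)
    end.
```

Of course this only sees whatever total process_info reports and says nothing about the inner/outer queue split, which is exactly why a built-in, scheduler-aware check would be preferable.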



> I do believe in increments in development, as long as each is a path to 
> the envisioned goal. 
> And to reiterate, I'm not convinced that limits on just processes are 
> the way to go. I think a complete monitoring system should be 
> envisioned, not just for processes. 


Personally, I think a complete monitoring system would be great, but process limits are also invaluable. 

I point you to Joe Armstrong's thesis, "Making reliable distributed systems in the presence of software errors". Memory leaks are certainly a valid software error, and one that, as things stand today, will crash your VM.
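As a stopgap, the existing system_monitor machinery can already turn a leaking process into a crash of that one process rather than the whole VM. A minimal sketch, assuming a hypothetical guard module (the name and the idea of killing on the first large_heap event are my own; a real policy would likely be gentler):

```erlang
%% Hedged sketch: use the existing large_heap system monitor event to
%% approximate a per-process heap limit. MaxHeapWords is the garbage
%% collection size (in words) at which the runtime sends us a monitor
%% message for the offending process.
-module(heap_guard).
-export([start/1]).

start(MaxHeapWords) ->
    spawn(fun() ->
                  erlang:system_monitor(self(), [{large_heap, MaxHeapWords}]),
                  loop()
          end).

loop() ->
    receive
        {monitor, Pid, large_heap, _Info} ->
            %% A process grew past the threshold during GC; kill it
            %% before it can take the whole node down.
            exit(Pid, heap_limit_exceeded),
            loop();
        _Other ->
            loop()
    end.
```

This is coarse (one global threshold, and only checked at GC), which to me is precisely the argument for proper per-process limits as in the EEP.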


Cheers

Matt


