Hi guys, would it be possible to implement a MAX MEMORY LIMIT per WORKER? For example: if one thread/process (thread/worker/etc.) consumes more than X bytes after a request, close that worker and spawn a new one. It's very useful to avoid OOM. What I'm doing today is a PHP script that does this: if an Apache httpd process's memory usage is bigger than X MB, it executes a kill command to graceful-restart the process. This saves me a lot of nights of sleeping well.
An example of a "band-aid" solution: http://askubuntu.com/questions/403222/kill-off-apache-processes-when-memory-usage-hits-90

Other related comments about memory consumption and Apache: https://feeding.cloud.geek.nz/posts/putting-limit-on-apache-and-php-memory/
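For illustration, a minimal sketch of that watchdog approach (the `apache2` process name and the 300 MB threshold are assumptions; adjust both for your setup):

```shell
#!/bin/sh
# Hypothetical watchdog sketch: process name and threshold are assumptions.
LIMIT_KB=$((300 * 1024))   # kill workers whose RSS exceeds ~300 MB

# List Apache worker PIDs together with their resident set size in KB
ps -o pid=,rss= -C apache2 | while read -r pid rss; do
    if [ "$rss" -gt "$LIMIT_KB" ]; then
        # SIGTERM lets Apache reap and replace the worker, but an
        # in-flight request handled by that worker can still be cut short
        kill -TERM "$pid"
    fi
done
```

Run from cron every minute or so. Note this is exactly the "band-aid" class of fix from the link above: it caps damage from leaks but does not make the termination graceful for the request being served.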
These are mostly memory leaks in modules like PHP, and your kill command is far from graceful because it interrupts the request in a hard way. In other words, what you want to see and do is nothing better than the OOM killer, which would kill the right processes if other services like mysqld are configured with OOMScoreAdjust=-1000.

"MaxRequestsPerChild 50" would terminate each worker process *really* gracefully after it has handled 50 requests, so the memory leaks can't grow that far. https://bugs.php.net/bug.php?id=73889 is one example of such a leak.
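As a sketch, that recycling suggestion would go in the MPM section of the Apache config (the value 50 is the example from the comment above; tune it for your traffic):

```
# Recycle each worker process after it has served 50 requests, so memory
# leaked by modules like mod_php is reclaimed when the process exits.
# Spelled MaxRequestsPerChild in httpd 2.2; renamed MaxConnectionsPerChild in 2.4.
MaxConnectionsPerChild 50
```

Pairing this with OOMScoreAdjust=-1000 in the [Service] section of critical systemd units such as mysqld makes the kernel OOM killer prefer the leaking Apache workers over the database if memory still runs out.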