Bug 63169 - MPM event, stuck process after graceful: no (old gen)
Summary: MPM event, stuck process after graceful: no (old gen)
Status: RESOLVED FIXED
Alias: None
Product: Apache httpd-2
Classification: Unclassified
Component: mpm_event
Version: 2.4.37
Hardware: PC Linux
Importance: P1 regression
Target Milestone: ---
Assignee: Apache HTTPD Bugs Mailing List
URL:
Keywords: FixedInTrunk
Depends on:
Blocks:
 
Reported: 2019-02-12 10:33 UTC by Stephan Schultchen
Modified: 2021-09-08 08:47 UTC
CC List: 3 users



Attachments
Patch that fixes this bug (701 bytes, patch)
2021-09-03 19:01 UTC, Joel Self

Description Stephan Schultchen 2019-02-12 10:33:08 UTC
apachectl graceful fails to kill some processes, which in turn keeps log files open forever after logrotate.

"apachectl fullstatus" says in the "stopping" row that the processes are "no (old gen)"
Comment 1 Yann Ylavic 2019-02-12 13:19:30 UTC
Since there have been some changes in MPM event since 2.4.34, can you reproduce with the latest version, 2.4.38?
Comment 2 Stephan Schultchen 2019-02-12 13:55:36 UTC
2.4.38 is not available as an IUS RPM package. I updated some systems to 2.4.37 and will check if this solves the issue.
Comment 3 Stephan Schultchen 2019-02-13 08:40:10 UTC
The problem still persists with Apache httpd 2.4.37 from the IUS repository.

The changelog for 2.4.37 does not seem to contain any changes related to the MPM, or anything else that looks related to log file handling.

Right now Apache has some processes that have been running since yesterday in the "no (old gen)" stopping status, and these processes still have the old, rotated log files open.
Comment 4 Yann Ylavic 2019-02-14 12:20:06 UTC
What's the state of old gen processes/threads?
Number of requests currently being processed, idle workers, status of the threads (letters S/R/W/K/L/D/C/./G/I/?), and the full status for the "stuck" processes, please?
Comment 5 akhil.jaggarwal 2019-05-13 19:00:50 UTC
I believe it is the change introduced in 2.4.34 to run cleanups on exit in single-process mode:

http://svn.apache.org/viewvc/httpd/httpd/trunk/os/unix/unixd.c?view=diff&r1=1830743&r2=1830744&pathrev=1830744

httpd drops privileges and switches to a non-root user if the User directive is defined in
httpd.conf. If httpd has to be started as root to bind to well-known ports, and the User
directive is defined in the conf, the piped log processes keep running as root while httpd
runs as a non-privileged user. This causes graceful restarts on receipt of a SIGTERM to fail,
leaving both httpd and the piped log processes stuck and unresponsive to SIGTERMs.

A possible fix within Apache would be to add code to drop privileges for any sub-processes
registered with the server (e.g., the piped log processes), like it is done for httpd.

A workaround is to setuid the piped log processes to the User from httpd.conf, using
something like ap_uname2id(), so that the owner of the piped log processes is the same as
that of the parent httpd process.
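
For illustration, here is a minimal sketch of such a piped-log program that drops privileges before entering its logging loop. This is not code from this bug or from httpd: the user name "apache" and the use of getpwnam()/setgid()/setuid() (rather than httpd's ap_uname2id()) are assumptions for a standalone binary.

```
/* Hypothetical piped-log program that drops to the same unprivileged user
 * as httpd's User directive before its logging loop. The user name and
 * the plain POSIX calls are assumptions; adapt to your setup. */
#include <pwd.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char *user = "apache";             /* must match httpd.conf's User */
    struct passwd *pw = getpwnam(user);

    if (pw == NULL) {
        fprintf(stderr, "unknown user: %s\n", user);
        return 1;
    }
    /* Drop the group first, then the user, while still running as root. */
    if (getuid() == 0 &&
        (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0)) {
        perror("failed to drop privileges");
        return 1;
    }

    /* ... piped-log main loop: read log lines from stdin and write/rotate
     * the log file; stdout is used here only as a placeholder ... */
    char buf[4096];
    ssize_t n;
    while ((n = read(STDIN_FILENO, buf, sizeof(buf))) > 0) {
        fwrite(buf, 1, (size_t)n, stdout);
    }
    return 0;
}
```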

Additionally, after making the setuid change in our piped log binaries, we've seen core dumps
around free() in apr_pool_destroy when cleaning up the APR pool. I'm still investigating
these, but meanwhile any help from the Apache team would be much appreciated.
Comment 6 Joel Self 2021-09-03 19:01:17 UTC
Created attachment 38014 [details]
Patch that fixes this bug
Comment 7 Joel Self 2021-09-03 19:03:24 UTC
What's happening here is that the MPM is spawning more processes than `active_daemons_limit` (`max_workers` divided by the number of workers per process), but then during a graceful restart it only sends `active_daemons_limit` kill characters (per bucket) over the pipe of death. You can see this at line 3114 of server/mpm/event/event.c (httpd 2.4.48):

```
        for (i = 0; i < num_buckets; i++) {
            ap_mpm_podx_killpg(all_buckets[i].pod, active_daemons_limit,
                               AP_MPM_PODX_GRACEFUL);
        }
```
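
As a side note, here is a small standalone POSIX sketch (my own toy code, not httpd's actual podx implementation; M=5 and N=3 are arbitrary) of why sending too few kill characters leaves children running: each child consumes exactly one character from the shared pipe, so writing fewer characters than there are children releases only that many.

```
/* Toy demonstration: M children block reading one "kill" character each
 * from a shared pipe, but the parent writes only N < M of them, so only
 * N children terminate and the rest keep running. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    enum { M = 5, N = 3 };
    int pod[2];                            /* the "pipe of death" */

    if (pipe(pod) != 0) {
        perror("pipe");
        return 1;
    }

    for (int i = 0; i < M; i++) {
        if (fork() == 0) {                 /* child: wait for a kill character */
            char c;
            (void)read(pod[0], &c, 1);
            _exit(0);
        }
    }

    (void)write(pod[1], "!!!", N);         /* only N kill characters are sent */
    sleep(1);                              /* give the released children time to exit */

    int exited = 0;
    while (waitpid(-1, NULL, WNOHANG) > 0) {
        exited++;
    }
    printf("%d of %d children exited; %d still running\n", exited, M, M - exited);

    (void)write(pod[1], "!!!!!", M - exited);  /* release the stragglers so the demo ends */
    while (wait(NULL) > 0) { }
    return 0;
}
```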

The reason the MPM spawns more child processes than `active_daemons_limit` is the exponential increase in the number of child processes created when the server is suddenly hit with a lot of traffic. In event.c, `perform_idle_server_maintenance` examines each active process, checks how many active and idle threads there are, and marks which slots are free for creating a new child process (i.e. there is no process in that slot). Then `perform_idle_server_maintenance` checks whether the idle thread count is less than the minimum spare threads (line 2824):

```
else if (idle_thread_count < min_spare_threads / num_buckets) {
```

If the idle thread count is too low, it needs to create new child processes to raise the idle thread count. At first it only creates a single child, but before `perform_idle_server_maintenance` returns it doubles the spawn rate, so the next time around `perform_idle_server_maintenance` will spawn two children (line 2881):

```
            else if (retained->idle_spawn_rate[child_bucket]
                     < MAX_SPAWN_RATE / num_buckets) {
                retained->idle_spawn_rate[child_bucket] *= 2;
            }
```



If the next time around `perform_idle_server_maintenance` doesn't need to spawn new children, the rate is reset to 1. However, if the next call to `perform_idle_server_maintenance` does need to spawn children, it will spawn 2 and then increase the spawn rate to 4. This exponential growth continues until `active_thread_count` is greater than or equal to `max_workers`. Because of the exponential growth, though, the last batch of child processes spawned may create far more processes than are needed to reach `max_workers`, and thus overshoot `active_daemons_limit`.

Here's a debug log I created while investigating this issue:

```
idle_thread_count [0] < min_spare_threads [200], active_thread_count: 675, max_workers: 800
Spawning 16 children, active_daemons: 27, total_daemons: 27, idle_thread_count: 0, min_spare_threads: 200, active_thread_count: 675, max_workers: 800
```

We're at 0 idle threads. The last time we spawned children we made 8, so this time we'll make 16, but our active thread count is already 675. At 25 workers per process, spawning 16 processes gets us to `675 + 16 * 25 = 1075` workers, which is well beyond our `max_workers` of 800. Another way to look at it: our `active_daemons_limit` is 32, and adding 16 new processes to the current 27 gets us to 43, which is 11 more than our limit of 32. Once we have 43 processes, a graceful restart will only send 32 kill characters over the pipe of death (we only have 1 bucket), so 11 processes will never receive the kill character and will live on, processing requests with the old configuration.

We fixed this problem by limiting the number of newly spawned children so that, together with `active_daemons`, it does not exceed `active_daemons_limit`. Added at line 2856 of event.c:

```
            if (free_length + active_daemons > active_daemons_limit) {
                free_length = active_daemons_limit - active_daemons;
            }
```
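
To make the arithmetic concrete, here is a toy program (not httpd code; the variable names mirror event.c but it just replays the numbers from the debug log above) showing the daemon count with and without this clamp:

```
/* Replays the debug-log scenario: 25 threads per child, max_workers 800
 * (so active_daemons_limit = 32), 27 daemons already active, and a
 * doubled spawn rate asking for 16 more children. */
#include <stdio.h>

int main(void)
{
    const int threads_per_child = 25;
    const int max_workers = 800;
    const int active_daemons_limit = max_workers / threads_per_child;  /* 32 */

    int active_daemons = 27;   /* from the debug log */
    int free_length = 16;      /* children the doubled spawn rate wants */

    printf("without the clamp: %d daemons, %d threads (limit: %d daemons)\n",
           active_daemons + free_length,
           (active_daemons + free_length) * threads_per_child,
           active_daemons_limit);

    /* The clamp added by the patch: */
    if (free_length + active_daemons > active_daemons_limit) {
        free_length = active_daemons_limit - active_daemons;
    }

    printf("with the clamp:    %d daemons, spawning only %d new children\n",
           active_daemons + free_length, free_length);
    return 0;
}
```

With the clamp in place, the daemon count never exceeds the `active_daemons_limit` kill characters sent per bucket during a graceful restart, so every child gets one.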

The patch we used to fix this is attached to this bug.

Thanks,
Joel Self
Comment 8 Yann Ylavic 2021-09-07 09:36:04 UTC
Thanks Joel, nice catch and analysis.

Patch applied in r1893014; I will propose it for backport to 2.4.x.
Comment 9 Yann Ylavic 2021-09-08 08:47:38 UTC
Checked in r1893109, will be in 2.4.49 and later.