Bug 43958 - mod_proxy_balancer not balancing correctly in combination with max=1
Summary: mod_proxy_balancer not balancing correctly in combination with max=1
Status: RESOLVED LATER
Alias: None
Product: Apache httpd-2
Classification: Unclassified
Component: mod_proxy_balancer
Version: 2.2.6
Hardware: PC Linux
Importance: P2 major with 6 votes
Target Milestone: ---
Assignee: Apache HTTPD Bugs Mailing List
URL:
Keywords: MassUpdate
Depends on:
Blocks:
 
Reported: 2007-11-26 01:26 UTC by Toon WIjnands
Modified: 2018-11-07 21:09 UTC
CC List: 1 user



Description Toon WIjnands 2007-11-26 01:26:58 UTC
I have a setup on Gentoo with Apache 2.2.6 that relies on mod_proxy_balancer:
Apache is the proxy and proxies requests to 3 workers (in fact, Mongrel, which
is a basic HTTP server serving Ruby code).

Rewrite rules are in place and under normal use there are no issues, so I
assume the setup is correct.

In my testing environment (i.e. I have full control over the requests sent to
the setup), I experience the following:

Setup A:

    <Proxy balancer://ips.dev.test_setup.com.mongrel_cluster>
        BalancerMember http://localhost:8010
        BalancerMember http://localhost:8011
        BalancerMember http://localhost:8012
    </Proxy>

I request an action that takes a long time to execute (10 minutes); this is for
testing purposes. Then I fire subsequent requests with normal execution time
(< 1 second). The results are:

- worker 8010 gets the first long request and starts handling it
- workers 8011 and 8012 get the next two requests
- request 4 is proxied to 8010, where it has to wait for the long request to
finish before it gets served.

This is expected behaviour.
 

Next, I changed the configuration to:

Setup B:

    <Proxy balancer://ips.dev.test_setup.com.mongrel_cluster>
        BalancerMember http://localhost:8010 max=1
        BalancerMember http://localhost:8011 max=1
        BalancerMember http://localhost:8012 max=1
    </Proxy>

I repeat the experiment. The results are the same as in setup A. In my opinion
this is unexpected behaviour: 8010 gets the first long request and, while
serving it, should not get any subsequent request until the initial request has
been fully served. IMHO this is the meaning of the max=1 parameter. The other
workers should handle the subsequent requests. (I fire the requests by hand,
with a large enough interval that a free worker is available.)

Moreover, if I look in the balancer-manager, all workers have state Ok. I would
expect to see a state Busy (or something like that) for 8010. This seems related
to bug #42668.

Effectively, I'm looking for a solution where one 'bad performance' thread does
not hurt the overall response time.
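
For context, the sketch below shows roughly how the balancer above can be wired
into the server and how the balancer-manager page can be exposed for inspection;
the paths and access restrictions are assumptions for illustration (my setup
actually routes traffic through rewrite rules), not an exact copy of my
configuration:

    # Keep the status page out of the proxied namespace, then route
    # application traffic through the balancer defined above (paths assumed).
    ProxyPass /balancer-manager !
    ProxyPass / balancer://ips.dev.test_setup.com.mongrel_cluster/
    ProxyPassReverse / balancer://ips.dev.test_setup.com.mongrel_cluster/

    # Expose the balancer-manager status page, restricted to localhost
    # (Apache 2.2 access-control syntax).
    <Location /balancer-manager>
        SetHandler balancer-manager
        Order deny,allow
        Deny from all
        Allow from 127.0.0.1
    </Location>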
Comment 1 Ruediger Pluem 2007-11-26 05:42:43 UTC
You need to set the acquire parameter on your workers:

<Proxy balancer://ips.dev.test_setup.com.mongrel_cluster>
        BalancerMember http://localhost:8010 max=1 acquire=100
        BalancerMember http://localhost:8011 max=1 acquire=100
        BalancerMember http://localhost:8012 max=1 acquire=100
</Proxy>

Set it to 100 ms.
Comment 2 Toon WIjnands 2007-11-26 06:37:23 UTC
Dear Ruediger,

Thanks for the input.
I've tested with acquire=100; this does not help.

From the docs:
"acquire - If set this will be the maximum time to wait for a free
connection in the connection pool. If there are no free connections in the pool
the Apache will return SERVER_BUSY status to the client."

The problem at hand is *not* that there is no free worker available (in that
case setting the acquire parameter would be correct). The problem at hand is:
there are free workers available; however, they are not selected. Instead the
'busy' worker is selected, in spite of the fact that it is configured to receive
at most 1 request.
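
To illustrate the quoted documentation, this is how I read acquire for a plain,
unbalanced reverse-proxy worker (the path and backend below are made up for
illustration, not part of my setup):

    # Hypothetical unbalanced worker: at most one pooled connection; if no
    # free connection is obtained within 100 ms, the client gets the
    # SERVER_BUSY status described in the docs (HTTP 503).
    ProxyPass /app http://localhost:8010/ max=1 acquire=100

My point is that the balancer case should be different: with other members
free, the request should simply go to one of them.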
Comment 3 Ruediger Pluem 2007-11-26 13:23:00 UTC
(In reply to comment #2)
> Dear Ruediger,
> 
> Thanks for the input
> I've tested with acquire=100; This does not help.
> 
> From the docs:
> "acquire - If set this will be the maximum time to wait for a free
> connection in the connection pool. If there are no free connections in the pool
> the Apache will return SERVER_BUSY status to the client."
> 
> The problem at hand is *not* that there is no free worker available (in that

No, the problem is that there are no *free* connections available for some
worker, and this is what acquire should prevent. In the unbalanced case this
causes an HTTP_SERVICE_UNAVAILABLE to be returned to the client if no free
connection can be obtained from the worker within acquire ms, and in the
balanced case it should cause a failover to the next worker after this time. If
it does not (and according to you it does not), this is a bug.
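
As a concrete sketch of what is expected to happen (the ProxyPass mapping and
the 100 ms value are only illustrative assumptions):

    <Proxy balancer://ips.dev.test_setup.com.mongrel_cluster>
        # At most one connection per member; wait at most 100 ms (acquire
        # is in milliseconds) for a free connection before giving up on
        # this member.
        BalancerMember http://localhost:8010 max=1 acquire=100
        BalancerMember http://localhost:8011 max=1 acquire=100
        BalancerMember http://localhost:8012 max=1 acquire=100
    </Proxy>

    # Assumed mapping: with the settings above, a member whose single
    # connection is busy for longer than acquire ms should be skipped in
    # favour of a free member instead of queueing the request behind it.
    ProxyPass / balancer://ips.dev.test_setup.com.mongrel_cluster/

If the long request on 8010 still blocks later requests with this
configuration, that is the bug described here.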

Comment 4 Toon WIjnands 2007-11-26 23:38:55 UTC
Ok. Is there a way I can contribute more test data about this situation that
will help the developers fix this?
Comment 5 William A. Rowe Jr. 2018-11-07 21:09:05 UTC
Please help us to refine our list of open and current defects; this is a mass update of old and inactive Bugzilla reports which reflect user error, already resolved defects, and still-existing defects in httpd.

As repeatedly announced, the Apache HTTP Server Project has discontinued all development and patch review of the 2.2.x series of releases. The final release 2.2.34 was published in July 2017, and no further evaluation of bug reports or security risks will be considered or published for 2.2.x releases. All reports older than 2.4.x have been updated to status RESOLVED/LATER; no further action is expected unless the report still applies to a current version of httpd.

If your report represented a question or confusion about how to use an httpd feature, an unexpected server behavior, problems building or installing httpd, or working with an external component (a third party module, browser etc.) we ask you to start by bringing your question to the User Support and Discussion mailing list, see [https://httpd.apache.org/lists.html#http-users] for details. Include a link to this Bugzilla report for completeness with your question.

If your report was clearly a defect in httpd or a feature request, we ask that you retest using a modern httpd release (2.4.33 or later) released in the past year. If it can be reproduced, please reopen this bug and change the Version field above to the httpd version you have reconfirmed with.

Your help in identifying defects or enhancements still applicable to the current httpd server software release is greatly appreciated.