Although it is well documented that mod_deflate does not modify the Content-Length header when decoding a gzip-compressed request body, the stale value seems to confuse mod_dav: when I try to upload a compressed message body with the PUT method, mod_dav returns a 400 Bad Request error, and this message appears in the log:

[Mon Aug 04 19:49:25 2003] [error] [client 192.168.0.144] An error occurred while reading the request body.  [400, #0]

To test, I'm PUT'ing the output of "echo test | gzip -9c". What happens is that dav_method_put()'s first call to ap_get_client_block() returns 5 (strlen("test\n")), but the second call returns -1 to signal an error, presumably because it is still expecting more data (the Content-Length covers the larger, compressed body).

I'm not sure how to solve this. Buffering the whole body in memory and fixing up the Content-Length header after decompression is probably not a great idea, so perhaps mod_deflate could fake up a "chunked"-encoded body, putting each block of data returned from zlib into a new chunk and signalling EOF with a zero-length chunk. I'm not familiar with the internals of Apache httpd, so I'm not sure.

I'm filing this as "Enhancement" because, although the problem is documented, it would be really nice to be able to use PUT requests with gzip-compressed bodies.
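For illustration, the mismatch at the root of the 400 can be seen from the shell: the Content-Length a client sends describes the compressed stream on the wire, while mod_dav ends up reading the (shorter) inflated body.

```shell
# Size of the body actually sent on the wire -- this is what
# the client's Content-Length header describes.
printf 'test\n' | gzip -9c | wc -c

# Size of the inflated body -- 5 bytes, which is all that
# ap_get_client_block() ever hands to mod_dav.
printf 'test\n' | wc -c
```

Since the gzip container adds a header and trailer around the deflated data, the first number is always larger than the second for a payload this small, so mod_dav keeps waiting for bytes that never arrive.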
mod_dav should use bucket brigades when reading PUT data; then all should be fine. Changing the component to mod_dav and accepting this as a bug. Thanks for the report.
Created attachment 7641 [details] Patch to make mod_dav use bucket brigades when handling PUT requests
Thanks for the suggestion to use bucket brigades -- I've got compressed uploads at least partly working now, but it seems to choke on large requests:

[Mon Aug 04 23:21:13 2003] [error] [client 192.168.0.144] (55)No buffer space available: Could not get brigade.  [500, #0]

There are other things in the patch I'm not quite sure about, but I thought I'd upload it just to get the ball rolling.
Created attachment 7642 [details] Call apr_brigade_cleanup() to avoid running out of buffer space
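For reference, the general shape of a brigade-based request-body read loop looks roughly like the sketch below. This is illustrative only, not the actual patch: the function name, buffer size, and error handling are assumptions. The point is the apr_brigade_cleanup() at the bottom of each iteration, which is what keeps consumed buckets from piling up and exhausting buffer space on large uploads.

```c
/* Sketch of a brigade-based request-body read loop for a PUT
 * handler.  Illustrative only; not the code from the attachment. */
static int read_request_body(request_rec *r)
{
    apr_bucket_brigade *bb;
    apr_status_t rv;
    int seen_eos = 0;

    bb = apr_brigade_create(r->pool, r->connection->bucket_alloc);

    do {
        apr_bucket *b;

        /* Pull the next chunk of (already-inflated) body data through
         * the input filter chain; no Content-Length bookkeeping here. */
        rv = ap_get_brigade(r->input_filters, bb, AP_MODE_READBYTES,
                            APR_BLOCK_READ, 8192);
        if (rv != APR_SUCCESS) {
            return HTTP_INTERNAL_SERVER_ERROR; /* "Could not get brigade" */
        }

        for (b = APR_BRIGADE_FIRST(bb);
             b != APR_BRIGADE_SENTINEL(bb);
             b = APR_BUCKET_NEXT(b)) {
            const char *data;
            apr_size_t len;

            if (APR_BUCKET_IS_EOS(b)) {
                seen_eos = 1;       /* end of body, regardless of C-L */
                break;
            }
            rv = apr_bucket_read(b, &data, &len, APR_BLOCK_READ);
            if (rv != APR_SUCCESS) {
                return HTTP_INTERNAL_SERVER_ERROR;
            }
            /* ... write data[0..len) to the target resource ... */
        }

        /* Discard the buckets we've consumed.  Without this, they
         * accumulate and large uploads eventually fail with
         * "No buffer space available". */
        apr_brigade_cleanup(bb);
    } while (!seen_eos);

    return OK;
}
```

Because EOS is signalled by a bucket rather than by counting down Content-Length, this pattern works whether or not an input filter such as mod_deflate has changed the body's length.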
Created attachment 7643 [details] clean patch I wrote in the meantime :)
Yeah, it was probably the missing cleanup. Can you try my patch nevertheless? If it works, I'm going to commit it in 2.1 and propose it for backport.
This seems to work, thanks! I haven't performed exhaustive testing, but I uploaded a 15MB file, both with and without gzip compression, and verified that the uploaded files had the same md5sum as the original.
It's essentially the same patch as yours :) Thanks for testing so far, I'm going to commit now.
fix already committed to Apache 2.1-dev... sounds like we need to consider it for merge to 2.0.next
It's already proposed, but the review, you know ... :-))
Backport now in 2.0 tree courtesy of Justin...