Here is the relevant output from diffing the 2.2.3 and 2.2.4 versions of mod_cache.c:

diff httpd-2.2.3/modules/cache/mod_cache.c httpd-2.2.4/modules/cache/mod_cache.c
428a431,435
> else if (exp != APR_DATE_BAD && exp < r->request_time)
> {
>     /* if a Expires header is in the past, don't cache it */
>     reason = "Expires header already expired, not cacheable";
> }

The above check compares the Expires time from the server (second granularity) to the request_time (up to microsecond granularity). Even though we have all of our hosts running NTP, it is possible to get a very small skew, which is enough to make this check fire, so the object never gets put into the local cache. We have a workaround of commenting out the above code, but I don't believe this check is needed when you have an explicit Expires or Cache-Control header from the server.

regards,
Sridhar
I do not quite see the problem here. Caching is only denied if there is an Expires header and the time in the Expires header is earlier than the request time. If this comparison becomes true only because of the different granularity of the two times (which can happen), then the difference between the Expires time and the request time is less than a second.

So there is only a very short period (under a second) during which this entity, if cached, would be regarded as fresh and thus deliverable to the client from the cache without further revalidation. What is the point of having cached objects that are that short-lived?
Some of the content we serve is required to have an Expires of "now", or max-age=0, or must-revalidate, to guarantee freshness. Without this check, the origin sees If-Modified-Since (IMS) requests from the cache the vast majority of the time, so it isn't taxed that much. With the additional check, the edge makes many more unconditional GET requests to the origin, taxing the origin's resources a lot more.

regards,
Sridhar
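The two request patterns above can be illustrated with a sketch of the HTTP exchanges involved (the path and date are hypothetical, not from an actual trace):

```
Conditional revalidation (stale copy still in cache):
  GET /asset HTTP/1.1
  If-Modified-Since: Sat, 06 Jan 2007 10:00:00 GMT
  -> 304 Not Modified             (headers only; cheap for the origin)

Unconditional fetch (object was never cached):
  GET /asset HTTP/1.1
  -> 200 OK + full entity body    (origin pays generation and transfer cost)
```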