The end goal here is for LibHTTP to be the home of our RFC 9111 (HTTP
caching) implementation. We currently have one implementation in LibWeb
for our in-memory cache and another in RequestServer for our disk cache.
The implementations both largely revolve around interacting with HTTP
headers. But in LibWeb, we are using Fetch's header infra, and in RS we
are using our home-grown header infra from LibHTTP.
So to give these a common denominator, this patch replaces the LibHTTP
implementation with Fetch's infra. Our existing LibHTTP implementation
was not particularly compliant with any spec, so this at least gives us
a standards-based common implementation.
This migration also required moving a handful of other Fetch AOs
(abstract operations) over to LibHTTP. (It turns out these AOs were all
from the Fetch/Infra/HTTP
folder, so perhaps it makes sense for LibHTTP to be the implementation
of that entire set of facilities.)

This effectively reverts 9b8f6b8108.
I misunderstood what Chrome was doing here - they will issue a range
request only for what they call "sparse" cache entries. These entries
are basically used to cache partial large files, e.g. a multi-gigabyte
video. If they hit a legitimate read error, they will fail the request
with an ERR_CACHE_READ_FAILURE status.
We will now (again) fail with a network error when a cache read fails.

Given that RequestServer creates the request fd with socketpair() on
Windows and pipe2() on Unix, this abstraction avoids inline #ifdef soup
by hiding the details of how the AK::Stream and Core::Notifier are
created.
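
To illustrate the shape of this, a minimal sketch (RequestFDPair and
create_request_fds are invented names for illustration, not the actual
API; error handling is omitted):

    #include <AK/Platform.h>
    #include <fcntl.h>      // pipe2, O_CLOEXEC (Unix)
    #include <sys/socket.h> // socketpair (the WinSock version differs)
    #include <unistd.h>

    // One factory owns the platform split, so call sites never see #ifdefs.
    struct RequestFDPair {
        int server_fd { -1 }; // RequestServer writes response data here
        int client_fd { -1 }; // the client reads from here
    };

    static RequestFDPair create_request_fds()
    {
        int fds[2] {};
    #if defined(AK_OS_WINDOWS)
        // WinSock has no pipe2(); a socketpair (or a local emulation of
        // one) stands in for the unidirectional pipe.
        socketpair(AF_UNIX, SOCK_STREAM, 0, fds);
        return { fds[0], fds[1] };
    #else
        pipe2(fds, O_CLOEXEC);
        return { fds[1], fds[0] }; // write end for RS, read end for client
    #endif
    }
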
On 64-bit Windows, a long is only 4 bytes. This meant that when the
client decoded the IPC message from an async_request_finished call, we
corrupted the RequestTimingInfo field and all the fields after it, as
we were 4 bytes off.
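
For context: 64-bit Windows uses the LLP64 data model, where long is
4 bytes, while 64-bit Linux and macOS (LP64) make it 8. Fixed-width
types sidestep the mismatch entirely:

    #include <cstdint>
    #include <cstdio>

    int main()
    {
        // LLP64 (64-bit Windows): sizeof(long) == 4, sizeof(long long) == 8.
        // LP64 (64-bit Linux/macOS): sizeof(long) == 8.
        std::printf("sizeof(long) = %zu\n", sizeof(long));

        // A fixed-width type keeps the IPC wire format identical on both
        // ends of the connection, regardless of platform data model.
        std::printf("sizeof(int64_t) = %zu\n", sizeof(int64_t));
        return 0;
    }
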
If transferring a cached response body fails for any reason, we will now
issue a network request instead of failing the request outright.
The catch here is that we will have already transferred the response
code and headers to the client, and potentially some of the body. So we
attempt to request only the remaining data over the network using a
range request. This feels a bit sketchy, but this is also how Chromium
behaves.
However, the server may or may not support range requests. If it does,
we can expect an HTTP 206 response with the bytes we need. If not, we
will receive an HTTP 200 (assuming the request succeeded), along with
the entire object's body. In this case, we also behave like Chromium,
and internally drop the number of bytes we had already transferred.
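
A rough sketch of that fallback decision, with hypothetical names
(bytes_already_sent, send_to_client) standing in for the real plumbing:

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <functional>
    #include <string>

    // Build the Range header value asking for everything from the first
    // byte the client has not yet seen.
    std::string make_range_header_value(uint64_t bytes_already_sent)
    {
        return "bytes=" + std::to_string(bytes_already_sent) + "-";
    }

    void on_fallback_response(int status, char const* body, size_t body_size,
        uint64_t bytes_already_sent,
        std::function<void(char const*, size_t)> const& send_to_client)
    {
        if (status == 206) {
            // Server honored the range; the body picks up where we left off.
            send_to_client(body, body_size);
        } else if (status == 200) {
            // Server sent the whole object; drop what the client already has.
            size_t skip = std::min<size_t>(bytes_already_sent, body_size);
            send_to_client(body + skip, body_size - skip);
        }
    }
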
This adds a disk cache for HTTP responses received from the network. For
now, we take a rather conservative approach to caching. We don't cache a
response until we're 100% sure it is cacheable (there are heuristics we
can implement in the future based on the absence of specific headers).
The cache is broken into 2 categories of files:
1. An index file. This is a SQL database containing metadata about each
cache entry (URL, timestamps, etc.).
2. Cache files. Each cached response is in its own file. The file is an
amalgamation of all info needed to reconstruct an HTTP response. This
includes the status code, headers, body, etc.
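
For a sense of the shape of these files, an illustrative layout (not
the actual schema or on-disk format):

    #include <cstdint>

    // 1. The index: a SQL table with one row of metadata per cache entry.
    static constexpr auto create_index_sql = R"(
        CREATE TABLE IF NOT EXISTS cache_entries (
            url           TEXT PRIMARY KEY,
            cache_file    TEXT NOT NULL,
            response_time INTEGER NOT NULL, -- when the response was received
            expiry_time   INTEGER NOT NULL  -- per RFC 9111 freshness rules
        );
    )";

    // 2. A cache file: everything needed to reconstruct the HTTP response,
    //    serialized back-to-back in a single file.
    struct CacheFileHeader {
        uint32_t status_code;  // e.g. 200
        uint32_t header_count; // number of (name, value) pairs that follow
        uint64_t body_size;    // length of the body stored after the headers
    };
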
A cache entry is created once we receive the headers for a response. The
index, however, is not updated at this point. We stream the body into
the cache entry as it is received. Once we've successfully cached the
entire body, we create an index entry in the database. If any of these
steps fails along the way, the cache entry is removed and the index is
left untouched.
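
Sketched with hypothetical stand-ins for the real machinery, the
lifecycle looks roughly like this:

    #include <optional>
    #include <vector>

    // Hypothetical stand-ins: write() streams one chunk to the entry's
    // file, remove() deletes the file, insert_index_row() adds the row.
    struct CacheEntry {
        bool write(std::vector<char> const& chunk);
        void remove();
    };
    bool insert_index_row(CacheEntry const&);

    // True only if the entire body reached disk AND the index row was
    // created; any failure removes the entry and leaves the index alone,
    // so the index can never point at a truncated cache file.
    template<typename NextChunk>
    bool cache_response(CacheEntry& entry, NextChunk next_chunk)
    {
        while (std::optional<std::vector<char>> chunk = next_chunk()) {
            if (!entry.write(*chunk)) {
                entry.remove();
                return false;
            }
        }
        if (!insert_index_row(entry)) {
            entry.remove();
            return false;
        }
        return true;
    }
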
Subsequent requests are checked for cache hits from the index. If a hit
is found, we read just enough of the cache entry to inform WebContent of
the status code and headers. The body of the response is piped to WC via
syscalls, such that the transfer happens entirely in the kernel; no need
to allocate the memory for the body in userspace (WC still allocates a
buffer to hold the data, of course). If an error occurs while piping the
body, we currently error out the request. There is a FIXME to switch to
a network request.
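
On Linux, this kind of in-kernel transfer is what splice(2) provides;
a simplified sketch, assuming the destination fd is the write end of a
pipe (which splice requires on at least one side):

    #define _GNU_SOURCE 1 // for splice(2); Linux-only
    #include <fcntl.h>
    #include <unistd.h>

    // Move a cached body from the cache file to the client's pipe without
    // the data ever being copied into userspace. Error handling simplified.
    bool pipe_cached_body(int cache_file_fd, int pipe_write_fd,
        loff_t body_offset, size_t body_size)
    {
        loff_t offset = body_offset; // skip past the status code and headers
        size_t remaining = body_size;
        while (remaining > 0) {
            ssize_t n = splice(cache_file_fd, &offset, pipe_write_fd, nullptr,
                remaining, SPLICE_F_MORE);
            if (n <= 0)
                return false; // per the FIXME above: fall back to the network
            remaining -= static_cast<size_t>(n);
        }
        return true;
    }
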
Cache hits are also validated for freshness before they are used. If a
response has expired, we remove it and its index entry, and proceed with
a network request.
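
In RFC 9111 terms, the freshness check boils down to comparing the
response's current age against its freshness lifetime. A simplified
sketch (real code must also fold in the Age header, clock skew, and
response delay):

    #include <ctime>

    // A stored response is fresh while its current age is below its
    // freshness lifetime (e.g. from Cache-Control: max-age or Expires).
    bool is_fresh(std::time_t stored_at, std::time_t freshness_lifetime)
    {
        auto current_age = std::time(nullptr) - stored_at;
        return current_age < freshness_lifetime;
    }
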
Instead of wrapping all non-movable members of TransportSocket in OwnPtr
to keep it movable, make TransportSocket itself non-movable and wrap it
in OwnPtr.
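
The resulting pattern, sketched with AK's helpers (the class body here
is illustrative):

    #include <AK/Noncopyable.h>
    #include <AK/OwnPtr.h>

    class TransportSocket {
        AK_MAKE_NONCOPYABLE(TransportSocket);
        AK_MAKE_NONMOVABLE(TransportSocket);

    public:
        TransportSocket() = default;
        // Non-movable members (mutexes, notifiers, ...) can now live here
        // directly, instead of each being wrapped in its own OwnPtr.
    };

    void example()
    {
        // Callers needing move semantics move the OwnPtr, not the socket.
        OwnPtr<TransportSocket> socket = make<TransportSocket>();
        OwnPtr<TransportSocket> moved = move(socket);
    }
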
When the request is stopped, we clear its internal stream data. There is
a window where RequestServer may have sent an IPC message whose callback
will try to access that data in the time between the data being cleared
and RS receiving the stop signal. When this happens, just bail from IPC.
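
A sketch of that guard, with hypothetical member names:

    #include <memory>

    // Hypothetical stand-ins to illustrate the race guard.
    struct StreamData { /* buffered body, notifier, ... */ };

    struct Request {
        std::unique_ptr<StreamData> m_stream_data;

        void stop() { m_stream_data.reset(); } // stop clears the stream data

        // This IPC callback may arrive after stop(): bail out instead of
        // touching state that no longer exists.
        void on_ipc_message()
        {
            if (!m_stream_data)
                return; // stopped in the window described above
            // ... normal handling using *m_stream_data ...
        }
    };
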
This has been a longstanding ergonomic issue with our IPC compiler. Non-
trivial types were previously passed by const&. So if we wanted to avoid
expensive copies, we would have to const_cast and move the data.
We now pass ownership of all transferred data to the client subclasses.
This allows us to remove the const_cast from these methods, and also
lets us avoid some expensive copies that we previously hadn't bothered
to const_cast away.
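
Illustrated with std::string standing in for a non-trivial IPC type
(the method name is hypothetical):

    #include <string>
    #include <utility>

    struct Client {
        // Before: a const& parameter forced a const_cast to avoid a copy:
        //   void request_finished(std::string const& body)
        //   {
        //       m_body = std::move(const_cast<std::string&>(body));
        //   }

        // After: the generated stub passes ownership by value, so the
        // subclass simply moves the data onward.
        void request_finished(std::string body)
        {
            m_body = std::move(body); // one move; no copy, no const_cast
        }

        std::string m_body;
    };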