We currently have a FIXME to validate cached data with a crc32. But this
is sort of a non-starter, because we never actually have the cached data
in-memory - we transfer it to the WebContent process via system calls,
and it never reaches userspace in RequestServer.
Chrome makes a bit of an educated gamble here. They assume cosmic bit
flips are extremely unlikely, so the cached data itself does not get
verified with a hash. Instead, they store non-cryptographic hashes of a
few select fields, and they validate just those hashes.
Here, we store a hash of the cache key in the cache header, and a hash
of the cache header in the cache footer. With these validations, along
with other validations already in-place, we can be reasonably sure we
are not sending corrupt data to the WebContent process.
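A minimal sketch of that layout, with hypothetical struct and field
names rather than the actual RequestServer code: the header carries a
non-cryptographic hash of the cache key, and the footer carries a hash
of the serialized header.

```cpp
#include <cstdint>
#include <cstring>
#include <string_view>

// Stand-in for whatever non-cryptographic hash the cache uses
// (FNV-1a here, purely for illustration).
static uint32_t fast_hash(void const* data, size_t size)
{
    auto const* bytes = static_cast<uint8_t const*>(data);
    uint32_t hash = 2166136261u;
    for (size_t i = 0; i < size; ++i) {
        hash ^= bytes[i];
        hash *= 16777619u;
    }
    return hash;
}

struct CacheHeader {
    uint32_t cache_key_hash { 0 }; // checked against the key used for lookup
    uint64_t body_size { 0 };
    // ... other metadata ...
};

struct CacheFooter {
    uint32_t header_hash { 0 }; // checked before the header is trusted
};

static bool validate_cache_entry(std::string_view cache_key, CacheHeader const& header, CacheFooter const& footer)
{
    if (header.cache_key_hash != fast_hash(cache_key.data(), cache_key.size()))
        return false;
    // The real code would hash the serialized header bytes rather than the
    // in-memory struct; this is just to show the shape of the check.
    return footer.header_hash == fast_hash(&header, sizeof(header));
}
```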
This effectively reverts 9b8f6b8108.
I misunderstood what Chrome was doing here - they will issue a range
request only for what they call "sparse" cache entries. These entries
are basically used to cache a partial large file, e.g. a multi-gigabyte
video. If they hit a legitimate read error, they will fail the request
with an ERR_CACHE_READ_FAILURE status.
We will now (again) fail with a network error when a cache read fails.
a290034a81 passed an empty vector to this, which caused nodes that
appeared multiple times to have their trie metadata reset...which broke
the optimisation.
This patchset makes the function take a 'provide missing metadata'
function instead, and only invokes it when the node's metadata is
missing, rather than unconditionally setting the metadata on all nodes.
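A hedged sketch of the shape of that change; the types and names here
are hypothetical and do not mirror the actual trie API:

```cpp
#include <functional>
#include <map>
#include <memory>
#include <optional>
#include <string>
#include <vector>

struct TrieNode {
    std::optional<std::vector<int>> metadata;
    std::map<char, std::unique_ptr<TrieNode>> children;
};

static void insert(TrieNode& root, std::string const& key,
    std::function<std::vector<int>()> const& provide_missing_metadata)
{
    auto* node = &root;
    for (char ch : key) {
        auto& child = node->children[ch];
        if (!child)
            child = std::make_unique<TrieNode>();
        node = child.get();
    }

    // Only fill in metadata that is actually missing; a node reached again
    // by a later insertion keeps whatever it already has instead of being
    // reset, which is what the empty-vector call was doing.
    if (!node->metadata.has_value())
        node->metadata = provide_missing_metadata();
}
```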
Previously, we only supported very basic numbers and a single level of
text positioning in the `x`, `y`, `dx` and `dy` attributes in `<text>`
and `<tspan>` SVG elements.
This improves our support for them in the following ways:
* Any `length-percentage` or `number` type value is accepted;
* Nested `<text>` and `<tspan>` use the 'current text position'
concept to determine where the next text run should go (see the sketch
after these lists);
* We expose the attributes' values through the API.
Though we still do not support:
* Applying the `rotate` attribute;
* Applying transformations on a per-character basis;
* Proper horizontal and vertical glyph advancing (we just use the path
bounding box for now).
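A rough sketch of the 'current text position' bookkeeping, with
hypothetical types rather than Ladybird's actual SVG code: an explicit
`x`/`y` resets the position for a text run, `dx`/`dy` offsets it, and
nested elements continue from wherever the previous run ended.

```cpp
#include <optional>

struct TextPosition {
    float x { 0 };
    float y { 0 };
};

// Attribute values already resolved from <length-percentage> or <number>.
struct ResolvedTextAttributes {
    std::optional<float> x, y, dx, dy;
};

static TextPosition apply_text_attributes(TextPosition current, ResolvedTextAttributes const& attributes)
{
    // An explicit x/y replaces the current position; dx/dy shift it.
    if (attributes.x.has_value())
        current.x = *attributes.x;
    if (attributes.y.has_value())
        current.y = *attributes.y;
    current.x += attributes.dx.value_or(0.0f);
    current.y += attributes.dy.value_or(0.0f);
    return current;
}
```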
CSS Text 3 gives `text-indent` a couple of optional keywords, `hanging`
and `each-line`, to control which lines are affected. This commit parses
them, but doesn't yet do anything with them.
Previously, unbuffered requests were only available as a special mode
for EventSource. With this change, they are enabled by default, which
means chunks can be read from the stream as soon as they arrive.
This unlocks some interesting possibilities, such as starting to parse
HTML documents before the entire response has been received (that, in
turn, allows us to initiate subresource fetches earlier or begin
executing scripts sooner), or starting to render videos before they are
fully downloaded.
Co-authored-by: Timothy Flynn <trflynn89@pm.me>
187f8c54 made `HTML::Task` runnable for destroyed documents, and this
change aligns microtask behavior with that. This is required for an
upcoming change that switches Fetch to be unbuffered by default. During
navigation, fetching the new document is initiated by the previous
document, which means we need to allow microtasks created in the
previous document's realm to run even after that document has been
destroyed.
Currently these tests work because we happen to have the fetched
response body available in `load_document()` to correctly sniff the MIME
type. This is preparation for an upcoming change that makes fetching
unbuffered, which will break these tests unless we explicitly set the
`Content-Type` header.
This changes `link-element-rel-preload-load-event.html`, so the test no
longer depends on the relative ordering of load and error events.
This is required for the upcoming change that makes fetching unbuffered
by default, as the test would otherwise become flaky.
...for iframes with srcdoc, this ensures the test won't get stuck if the
load event fires between creating the iframe and adding the onload
handler in the `asyncTest()` callback.
Previously, this test would pass in Firefox and Safari, but get stuck in
Chrome. Now all browsers pass it.
`font-weight` and `font-size` both can have keywords that are relative
to their inherited value, and so need recomputing when that changes.
Fixes all but one subtest in font-weight-computed.html, because that
remaining one uses container-query units. No font-size tests seem to be
affected: font-size-computed.html doesn't update the parent element's
`font-size`, so this invalidation bug didn't apply.
Our HTTP disk cache is currently manually tested against various sites.
This patch adds some tests to cover various scenarios, including non-
cacheable responses, expired responses, and revalidation.
In order to ensure we hit the disk cache in RequestServer, we must
disable the in-memory cache in WebContent.
For example, we will want to be able to test that a cached object was
expired after N seconds. Rather than waiting that time during testing,
this adds a testing-only request header to internally advance the clock
for a single HTTP request.
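Roughly, the idea is that the cache computes "now" with an optional
offset taken from that header; the names below are hypothetical, not the
actual implementation:

```cpp
#include <chrono>
#include <cstdint>
#include <optional>

// The offset would be parsed from the testing-only request header; the cache
// then compares expiry times against this adjusted "now" for that request.
static std::chrono::system_clock::time_point effective_now(std::optional<std::uint64_t> advance_seconds)
{
    auto now = std::chrono::system_clock::now();
    if (advance_seconds.has_value())
        now += std::chrono::seconds(*advance_seconds);
    return now;
}
```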
This mode allows us to test the HTTP disk cache with two mechanisms:
1. If RequestServer is launched with --http-disk-cache-mode=testing, it
will cache requests with an X-Ladybird-Enable-Disk-Cache header.
2. In test mode, RequestServer will include an
X-Ladybird-Disk-Cache-Status response header indicating how the response
was handled by the cache. There is no standard way for a web request to
know what happened with respect to the disk cache, so this fills that
hole for testing.
This mode is not exposed to users.
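A sketch of how the two mechanisms could fit together on the
RequestServer side; the header names come from the description above,
but the surrounding types and logic are hypothetical:

```cpp
#include <map>
#include <string>

enum class DiskCacheMode {
    Normal,
    Testing, // --http-disk-cache-mode=testing
};

using HeaderMap = std::map<std::string, std::string>;

// In testing mode, only requests that opt in via the enable header are
// cached; outside testing mode, normal cacheability rules apply (elided).
static bool should_use_disk_cache(DiskCacheMode mode, HeaderMap const& request_headers)
{
    if (mode != DiskCacheMode::Testing)
        return true;
    return request_headers.count("X-Ladybird-Enable-Disk-Cache") > 0;
}

// In testing mode, the response reports how the cache handled it so tests
// can assert on it; the status values are whatever the cache decides to emit.
static void add_cache_status_header(DiskCacheMode mode, HeaderMap& response_headers, std::string const& status)
{
    if (mode == DiskCacheMode::Testing)
        response_headers["X-Ladybird-Disk-Cache-Status"] = status;
}
```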
This is causing misaligned reads with Address Sanitizer enabled. We
could maintain the packed attribute and deal with alignment, but we
ended up not actually needing these to be packed anyway. The only thing
we need to know is the serialized size of the header, which we can just
determine differently.
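A sketch of the general approach, with hypothetical fields rather than
the real header: the on-disk size is the sum of the field sizes as they
are serialized, so the in-memory struct can keep natural alignment and
no packed attribute is needed.

```cpp
#include <cstddef>
#include <cstdint>

struct CacheEntryHeader {
    uint32_t magic { 0 };
    uint32_t cache_key_hash { 0 };
    uint64_t body_size { 0 };
    uint64_t expiry_time { 0 };

    // Size of the header as written to disk: the sum of the fields as they
    // are serialized, not sizeof(CacheEntryHeader), which would include any
    // alignment padding.
    static constexpr size_t serialized_size()
    {
        return sizeof(magic) + sizeof(cache_key_hash) + sizeof(body_size) + sizeof(expiry_time);
    }
};
```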
We previously excluded headers exempted from storage when we serialized
the headers into the database. However, we stored the original headers
in-memory. So when a subsequent request hit CacheIndex::find_entry, we
would return an entry with response headers that should have been
excluded.
We can use Duration as-is without coercing values to seconds. This
probably doesn't make much difference in real-world scenarios, but when
we hammer the cache during tests, this truncation will cause flaky
behavior.
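To illustrate the truncation issue with a hypothetical freshness check
(not the actual cache code): comparing ages in whole seconds makes
requests issued a few hundred milliseconds apart look identical, which
is exactly the situation when tests hammer the cache.

```cpp
#include <chrono>

using Duration = std::chrono::nanoseconds;

// Old behavior: both values coerced to whole seconds before comparing, so an
// age of 999ms and an age of 0ms are indistinguishable.
static bool is_fresh_truncated(Duration age, Duration freshness_lifetime)
{
    return std::chrono::duration_cast<std::chrono::seconds>(age)
        < std::chrono::duration_cast<std::chrono::seconds>(freshness_lifetime);
}

// New behavior: compare at full precision.
static bool is_fresh(Duration age, Duration freshness_lifetime)
{
    return age < freshness_lifetime;
}
```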
Fixes a bug where we would clip `box-shadow` when `overflow: hidden`
was set, which is not supposed to happen since `overflow` only affects
clipping of an element's content.
If either of the two transform functions during interpolation is a 3D
function, both of them get coerced to a 3D function before deciding what
to do next. However, we only supported converting 2D functions to 3D if
they had a 2D primitive they could be converted to first.
Change our behavior to default to converting to matrix3d() if there is
no explicit conversion path. Fixes a crash in
`css/css-transforms/animation/transform-interpolation-004.html`.
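The fallback amounts to evaluating the 2D function to its matrix
(a b c d e f) and embedding that into the equivalent matrix3d(); a
sketch with hypothetical types:

```cpp
#include <array>

// Values in the argument order of matrix3d(), i.e. column-major.
using Matrix3d = std::array<double, 16>;

// matrix(a, b, c, d, e, f) is equivalent to
// matrix3d(a, b, 0, 0, c, d, 0, 0, 0, 0, 1, 0, e, f, 0, 1).
static Matrix3d to_matrix3d(double a, double b, double c, double d, double e, double f)
{
    return {
        a, b, 0, 0,
        c, d, 0, 0,
        0, 0, 1, 0,
        e, f, 0, 1,
    };
}
```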
The Windows RequestPipe implementation uses a non-blocking local socket
pair, which means the non-fatal "resource is temporarily unavailable"
error that can occur in the non-blocking HTTP Response data writes can
be retried. This was seen often when loading https://ladybird.org.
While the EAGAIN errno is defined on Windows, WSAEWOULDBLOCK is the
error code returned in this scenario, so we were not detecting that we
could retry and treated the failed write attempt as a proper error.
We now detect WSAEWOULDBLOCK and convert it into the errno equivalent
EWOULDBLOCK. There is precedent for doing a similar conversion in the
Windows PosixSocketHelper::read() implementation.
Finally, we retry when we receive either EAGAIN or EWOULDBLOCK error
codes on all platforms. While POSIX allows these two error codes to
have the same value, which they do on Linux according to
https://www.man7.org/linux/man-pages/man3/errno.3.html, that is not
guaranteed. So we now ensure that platforms returning EWOULDBLOCK with a
value different from EAGAIN also perform write retries.
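A hedged sketch of that logic (not the actual RequestServer code): the
Windows-specific error code is mapped to its errno equivalent, and a
write is considered retryable on either value.

```cpp
#include <cerrno>
#ifdef _WIN32
#    include <winsock2.h>
#endif

// Map the WinSock error onto its errno equivalent, mirroring what the Windows
// PosixSocketHelper::read() path already does.
static int map_platform_error(int error)
{
#ifdef _WIN32
    if (error == WSAEWOULDBLOCK)
        return EWOULDBLOCK;
#endif
    return error;
}

// EAGAIN and EWOULDBLOCK may or may not share a value, so check both.
static bool is_retryable_write_error(int error)
{
    error = map_platform_error(error);
    return error == EAGAIN || error == EWOULDBLOCK;
}
```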
The initial IOCP event loop implementation adjusted wake() to manually
queue a completion packet onto the current thread's IOCP. This made us
dependent on the current thread's IOCP, whereas the previous behaviour
did not depend on any data from the thread that was waking the event
loop.
Restoring that old behaviour allows https://hardwaretester.com/gamepad
to be loaded again.
The initial IOCP event loop implementation removed the single-shot
timer fix added in 0005207.
Adding this back allowed simple web pages like https://ladybird.org/ to
be loaded again.
The initial IOCP event loop implementation had an fd() method for the
EventLoopNotifier packet that did not actually return the fd for the
notifier, but a to_fd() call on an object HANDLE that was always NULL.
This meant we were always posting NotifierActivationEvents with an fd
of 0.
This rendered all of our WinSock2 I/O invalid, meaning no IPC messages
would ever be successfully sent or received.