Previously we were doing a couple of things wrong:
- Using the cascaded rather than the computed value (so we didn't
support CSS-wide keywords)
- Only supporting the case where we had a single animation-play-state
value
When testing with the JS_BYTECODE_DEBUG macro defined or using the
All_Debug profile, internal objects were not handled by
Value::to_string_without_side_effects, leading to a VERIFICATION
FAILED when running the test-js or test-web programs.
Internal objects are represented in the Value class as cells and do
not have a dedicated tag to identify them. They are instead detected
with an is_cell() check, but only after all other types have been
ruled out, since other non-internal types can also be cells.
Both the String and Utf16String versions of the function were updated.
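Roughly, the dispatch order looks like this (a self-contained sketch,
not the actual LibJS code):
```
// Self-contained sketch of the dispatch order (not the actual LibJS
// code, which has no dedicated tag for internal objects): the generic
// is_cell() check has to come after every other type check, because
// non-internal values such as strings and objects are also cells.
#include <string>

enum class Tag { Undefined, Null, Boolean, Number, String, Object, InternalObject };

struct Value {
    Tag tag;
    // Anything heap-allocated is a cell: strings, objects, and
    // internal objects alike.
    bool is_cell() const { return tag >= Tag::String; }
};

std::string to_string_without_side_effects(Value value)
{
    if (value.tag == Tag::Undefined)
        return "undefined";
    if (value.tag == Tag::Null)
        return "null";
    if (value.tag == Tag::Boolean)
        return "<boolean>";
    if (value.tag == Tag::Number)
        return "<number>";
    if (value.tag == Tag::String)
        return "<string>";
    if (value.tag == Tag::Object)
        return "<object>";
    // Everything left over must be an internal object; previously this
    // case was missed and tripped VERIFICATION FAILED.
    if (value.is_cell())
        return "<internal object>";
    return "<unknown>";
}
```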
The recent commits 28ba610f32 and
70c4ed261f adjusted some include
directives to avoid excessive recompilation when changing some header
files. This broke compilation with clang-cl on Windows, which went
unnoticed before the PRs were merged.
This allows us to use the bytecode implementation of await, which
correctly suspends execution contexts and handles completion
injections.
This also gains us 4 test262 tests around mutating Array.fromAsync's
iterable while it's suspended.
This is also one step towards removing spin_until, which the
non-bytecode implementation of await uses.
```
Duration:
-5.98s
Summary:
Diff Tests:
+4 ✅ -4 ❌
Diff Tests:
[...]/Array/fromAsync/asyncitems-array-add-to-singleton.js ❌ -> ✅
[...]/Array/fromAsync/asyncitems-array-add.js ❌ -> ✅
[...]/Array/fromAsync/asyncitems-array-mutate.js ❌ -> ✅
[...]/Array/fromAsync/asyncitems-array-remove.js ❌ -> ✅
```
This adds the ability to compile and run JavaScript to implement
native functions. This is particularly useful for any native function
that is not a normal function, for example async functions such as
Array.fromAsync, which require yielding.
These functions are not allowed to observe anything from outside their
environment. Any global identifier will instead be assumed to be a
reference to an abstract operation or a constant. The generator will
inject the appropriate bytecode if the name of the global identifier
matches a known name. Anything else will cause a code generation error.
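As a purely hypothetical illustration (neither the source string nor
the recognized name below is from the actual code), such a function
might be written like this:
```
// Purely hypothetical illustration: a native function implemented as
// JavaScript source. "ToObject" stands in for whatever names the
// generator actually recognizes; since these functions cannot observe
// the real global environment, a bare global like this is resolved to
// an abstract operation, and an unrecognized name would be a code
// generation error.
static constexpr auto source = R"js(
    async function fromAsyncLike(items) {
        // Looks like a global call, but the generator injects the
        // bytecode for the matching abstract operation instead.
        return ToObject(items);
    }
)js";
```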
All the data we need for compilation is in SharedFunctionInstanceData,
so we shouldn't depend on ECMAScriptFunctionObject.
Allows NativeJavaScriptBackedFunction to compile bytecode.
This was causing a fair bit of root registration churn on pages that
throw lots of errors.
Since there's no need for these pointers to float around freely, we can
just visit them during the mark phase as usual.
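The mechanism is the usual mark-phase visitation, roughly like this
(class and member names here are hypothetical):
```
// Sketch of the usual LibGC mark-phase pattern (class and member names
// are hypothetical): instead of registering each error pointer as a
// GC root, the owning cell reports it to the visitor while the heap is
// marked, which keeps it alive without any root-set churn.
class ErrorHolder final : public GC::Cell {
public:
    virtual void visit_edges(Visitor& visitor) override
    {
        GC::Cell::visit_edges(visitor);
        visitor.visit(m_last_error);
    }

private:
    GC::Ptr<JS::Object> m_last_error;
};
```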
Some of this is rearranged for clarity, but it's mostly the exact same
code. Steps 3, 10, 11, and 15 are new, but don't have any effect until
we implement downward-growing cells.
Corresponds to:
93634aed57
The current live spec has been rearranged since this went in, so that
these steps are no longer located here. But that's a much larger change
that I don't want to implement right now. See here:
e09d10202d
While I was at it, I also made use of extract_error_information() to
populate the ErrorEvent.
I missed where this change happened in the spec. The second half of
abort_the_ongoing_navigation() becomes a separate method, which is
slightly rearranged. I've placed this in Navigation instead of
NavigateEvent because of how many steps poke at the Navigation's
internals.
The text here includes the amendments I made in
https://github.com/whatwg/html/pull/11967 to correct a variable name.
A bonus is that we now actually populate the ErrorEvent instead of
leaving it blank.
For excessively long strings, we often end up spending a ton of time
hashing and comparing them, which basically ruins the value of the
cache as an optimization.
This commit puts a cap (256) on the length of strings we put into the
cache. The number is arbitrary and there may be value in tuning it.
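The gist, as a self-contained sketch (not the actual cache code):
```
// Self-contained sketch (not the actual cache code): strings longer
// than the cap bypass the cache entirely, since hashing and comparing
// them costs more than a cache hit saves. 256 is the arbitrary cap
// mentioned above.
#include <string>
#include <string_view>
#include <unordered_set>

static constexpr size_t max_cacheable_length = 256;

std::string make_cached_string(std::string_view input)
{
    static std::unordered_set<std::string> cache;
    if (input.size() > max_cacheable_length)
        return std::string(input); // too long: skip the cache
    auto [it, inserted] = cache.emplace(input);
    return *it;
}
```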
When no options (or undefined) are passed into these APIs, the spec has
us synthesize a temporary "options object" to read the options from.
We don't actually need these objects, so let's sidestep the allocation
if we can (and just use the default values in that case).
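The shape of the change, as a hedged sketch (the option fields below
are made up):
```
#include <optional>

// Hedged sketch with made-up option fields: when the caller passed no
// options (or undefined), return the spec's default values directly
// instead of synthesizing a temporary options object just to read the
// defaults back out of it.
struct Options {
    bool fatal { false };      // hypothetical default
    bool ignore_bom { false }; // hypothetical default
};

Options resolve_options(std::optional<Options> const& passed)
{
    if (!passed.has_value())
        return Options {}; // no allocation, just the defaults
    return *passed;
}
```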
Since we already have a ByteBuffer from decoding the Base64 data, we can
pass that when creating a new ArrayBuffer.
This avoids a buffer allocation + memory clear + memory copy.
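Roughly (a hedged sketch; the exact ArrayBuffer factory overloads may
differ):
```
// Hedged sketch of the change (the exact ArrayBuffer factory overloads
// may differ); the two snippets below are before/after alternatives.
//
// Before: allocate a zeroed buffer, then copy the decoded bytes in.
auto array_buffer = TRY(JS::ArrayBuffer::create(realm, decoded.size()));
memcpy(array_buffer->buffer().data(), decoded.data(), decoded.size());

// After: hand the decoded ByteBuffer straight to the new ArrayBuffer,
// skipping the extra allocation, the zeroing, and the copy.
auto array_buffer = JS::ArrayBuffer::create(realm, move(decoded));
```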
There are a couple of remaining RFC 9111 methods in LibWeb's Fetch, but
these are currently directly tied to the way we store GC-allocated HTTP
response objects. So de-coupling that is left as a future exercise.
We currently have two ongoing implementations of RFC 9111, HTTP caching.
In order to consolidate these, this patch moves the implementation from
RequestServer to LibHTTP for re-use within LibWeb.
The verified pixel output in this test just reflects currently
observed behavior; I have not verified that all cases output the
correct data with regard to what the spec expects.
This makes it so that we can visit the satellite and hybrid views on
maps.apple.com. Previously, this would crash on a bounds check because
we asked Skia to write 32bpp RGBx8888 data into a buffer sized for
24bpp RGB888.
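The mismatch in numbers (a self-contained sketch):
```
// Self-contained sketch of the size mismatch: 32bpp RGBx8888 needs
// 4 bytes per pixel, but the buffer was sized for 24bpp RGB888 at
// 3 bytes per pixel, so Skia's writes ran past the end.
#include <cstddef>

int main()
{
    size_t const width = 256, height = 256;      // example dimensions
    size_t const written = width * height * 4;   // what Skia writes (RGBx8888)
    size_t const allocated = width * height * 3; // what was allocated (RGB888)
    return written > allocated ? 0 : 1;          // overflows by width*height bytes
}
```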
This is not technically necessary; presumably the bitmap type does not
offer any guarantees about what values are stored in the alpha
component for non-alpha pixel formats.
But the previous behavior meant that `set_pixel()` would produce
different values in the alpha component depending on whether the bitmap
format was RGBx or BGRx (i.e. `color.alpha()` vs. 0x00), which can be a
bit confusing.
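A hedged sketch of the resulting behavior (not the actual Bitmap code;
the helper name and the constant chosen for the alpha byte are
assumptions):
```
// Hedged sketch (not the actual Bitmap code; store_raw_pixel() and the
// chosen alpha constant are assumptions): pin the alpha component for
// non-alpha formats so RGBx and BGRx can no longer diverge, where
// previously one path stored color.alpha() and the other 0x00.
void Bitmap::set_pixel(int x, int y, Color color)
{
    if (format() == Format::RGBx8888 || format() == Format::BGRx8888) {
        // Alpha is unused for these formats; always store one value.
        color = color.with_alpha(0xFF);
    }
    store_raw_pixel(x, y, color);
}
```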
This factors the conversion logic to be independent from WebGL code,
allowing us to write unit tests for it that can run in CI (since WebGL
can't run in CI).
The `Bitmap` type was referring to its internal pixel format by a name
that represents the order of the color components as they are laid out
in memory. In contrast, the `Color` type used a naming where the name
represents the order of the components from most to least significant
byte when viewed as an unsigned 32-bit integer. This is confusing, as
you have to keep remembering which mental model to use depending on
which code you work with.
To unify the two, the naming of RGBA-like colors in the `Color` type has
been adjusted to match the one from the Bitmap type. This seems to be
generally in line with how web APIs think about these types:
* `ImageData.pixelFormat` can be `rgba-8unorm` backed by a
`Uint8ClampedArray`, but there is no pixel format backed by a 32-bit
unsigned type.
* WebGL can use format `RGBA` with type `UNSIGNED_BYTE`, but there is no
such format with type `UNSIGNED_INT`.
Additionally, it appears that other browsers and browser-adjacent
libraries also think similarly about these types:
* Firefox:
https://github.com/mozilla-firefox/firefox/blob/main/gfx/2d/Types.h
* WebKit:
https://github.com/WebKit/WebKit/blob/main/Source/WebCore/platform/graphics/PixelFormat.h
* Skia:
https://chromium.googlesource.com/skia/+/refs/heads/main/include/core/SkColorType.h
This has the not-so-nice side effect that APIs that interact with
these types through 32-bit unsigned integers now have the component
order inverted due to little-endian byte order. E.g. specifying a
color as a hex constant needs to be done as `0xAABBGGRR` if it is to
be treated as RGBA8888.
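For example (a self-contained sketch of the byte-order effect, not
Ladybird code):
```
// Self-contained sketch of the byte-order effect described above: on a
// little-endian machine, the bytes R, G, B, A laid out in memory in
// RGBA8888 order read back as the u32 0xAABBGGRR.
#include <cstdint>
#include <cstring>

int main()
{
    uint8_t rgba_bytes[4] = { 0x11 /* R */, 0x22 /* G */, 0x33 /* B */, 0x44 /* A */ };
    uint32_t as_u32 = 0;
    std::memcpy(&as_u32, rgba_bytes, sizeof(as_u32));
    // On little-endian, as_u32 == 0x44332211, i.e. 0xAABBGGRR ordering.
    return as_u32 == 0x44332211 ? 0 : 1;
}
```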
We could alleviate this by providing endian-independent APIs to callers.
But I suspect long-term we might want to think differently about bitmap
data anyway, e.g. to better support HDR in the future. However, such
changes would be more involved than just unifying the naming as done
here. So I considered that out of scope for now.