This implements the parsing part of the customizable <select> spec update.
See whatwg/html PR #10548.
Two failing subtests in `html5lib_innerHTML_tests_innerHTML_1.html`
and `customizable-select/select-parsing.html` are due to the spec
still disallowing `<input>` inside `<select>`, even though Chrome
has already implemented this behavior (see whatwg/html#11288).
An upcoming commit will migrate the contents of Headers.h/cpp to LibHTTP
for use outside of LibWeb. These CORS and MIME helpers depend on other
LibWeb facilities, however, so they cannot be moved.
Step 2.(a).5 says to abort, but we were instead carrying on and would
run steps 3 and 4. Those steps would not have changed the result, but
aborting avoids a little unnecessary work.
I wrapped a couple of comments at 120 columns while I was at it.
When detecting an element's opening tag, the spec asks us to skip ahead
to the first whitespace or end chevron character before trying to read
attributes. Instead, we were always skipping 2 positions ahead and then
ignoring all whitespace characters and slashes, which was clearly wrong.
Theoretically this could have caused some weird behaviors if part of the
opening tag matched an expected attribute name, but it's very unlikely
to see that in the wild.
This did not cause any immediate issues except generating instances of
`Attr` with useless values, which caused some unnecessary work during
encoding detection.
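For illustration, the corrected skip looks roughly like this (a sketch
with hypothetical names, not the actual pre-scan code):

    #include <AK/Span.h>
    #include <AK/Types.h>

    // Hypothetical helper: advance past the tag name to the first
    // whitespace or '>' byte before attempting to read attributes, as
    // the encoding-sniffing prescan describes.
    static size_t skip_to_end_of_tag_name(ReadonlyBytes input, size_t position)
    {
        while (position < input.size()) {
            u8 byte = input[position];
            if (byte == '>' || byte == '\t' || byte == '\n' || byte == '\f'
                || byte == '\r' || byte == ' ')
                break;
            ++position;
        }
        return position;
    }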
This ends up saving quite a bit of memory on many pages, since UTF-32
uses 4 bytes per code point.
As an example, it reduces the footprint on https://gymgrossisten.com/
by 2 MiB.
Update Element::parse_fragment and Node::unsafely_set_html to
propagate exceptions.
This refactor is needed as a prerequisite for implementing the XML
fragment parser, which requires consistent error handling in fragment
parsing.
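Roughly, the shape of the change is this (a sketch only; the exact
signatures in the tree differ):

    #include <LibWeb/WebIDL/ExceptionOr.h>

    // Illustrative only: fragment parsing can now fail, and callers
    // propagate the error with TRY() instead of ignoring it.
    WebIDL::ExceptionOr<void> set_html_from_fragment(DOM::Element& context, StringView markup)
    {
        auto new_children = TRY(context.parse_fragment(markup));
        // ... replace the existing children with new_children, as before ...
        return {};
    }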
CSSUnitValue is a typed-om type which we will implement separately in
the future. However, it still seems useful to give our dimension values
a base class. (Maybe they could be templated in the future?) So instead
of deleting it entirely, rename it to DimensionStyleValue and make its
API match our style better.
This reverts 0e3487b9ab.
Back when I made that change, I thought we could make our StyleValue
classes match the typed-om definitions directly. However, they have
different requirements. Typed-om types need to be mutable and GCed,
whereas StyleValues are immutable and ideally wouldn't require a JS VM.
While I was already making such a cataclysmic change, I've moved it into
the StyleValues directory, because it *not* being there has bothered me
for a long time. 😅
Introduces a few ad-hoc modifications to the DAFSA, aimed at increasing
performance while keeping the data size small.
- The 'first layer' of nodes is extracted out and replaced with a
  lookup table. This turns the search for the first character from
  O(n) to O(1), and doesn't increase the data size because all first
  characters in the set of named character references have values in
  'a'-'z'/'A'-'Z', so a lookup array of exactly 52 elements can be
  used. The lookup table stores the cumulative "number" fields that
  would be calculated by a linear scan that matches a given node, thus
  allowing the unique index to be built up as normal with an O(1)
  search instead of a linear scan (see the sketch after this list).
- The 'second layer' of nodes is also extracted out and searches of the
second layer are done using a bit field of 52 bits (the set bits of
the bit field depend on the first character's value), where each set
bit corresponds to one of 'a'-'z'/'A'-'Z' (similar to the first
layer, the second layer can only contain ASCII alphabetic
characters). The bit field is then re-used (along with an offset) to
get the index into the array of second layer nodes. This technique
ultimately allows for storing the minimum number of nodes in the
second layer, and therefore only increasing the size of the data by
the size of the 'first to second layer link' info which is 52 * 8 =
416 bytes.
- After the second layer, the rest of the data is stored using a
mostly-normal DAFSA, but there are still a few differences:
- The "number" field is cumulative, in the same way that the
first/second layer store a cumulative "number" field. This cuts
down slightly on the amount of work done during the search of a
list of children, and we can get away with it because the
cumulative "number" fields of the remaining nodes in the DAFSA
(after the first and second layer nodes were extracted out) happen
to require few enough bits that we can store the cumulative version
while staying under our 32-bit budget.
- Instead of storing a 'last sibling' flag to denote the end of a
list of children, the length of each node's list of children is
stored. Again, this is mostly done just because there are enough
bits available to do so while keeping the DAFSA node within 32
bits.
- Note: Together, these modifications open up the possibility of
using a binary search instead of a linear search over the
children, but due to the consistently small lengths of the lists
of children in the remaining DAFSA, a linear search actually seems
to be the better option.
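For illustration, the first-layer lookup described above looks roughly
like this (names and exact layout are hypothetical, but the sizes line
up with the numbers quoted below):

    #include <AK/Types.h>

    // All named character references start with an ASCII letter, so the
    // first character maps directly onto an index in a 52-entry table.
    static constexpr int alpha_index(u32 code_point)
    {
        if (code_point >= 'a' && code_point <= 'z')
            return static_cast<int>(code_point - 'a');
        if (code_point >= 'A' && code_point <= 'Z')
            return static_cast<int>(26 + (code_point - 'A'));
        return -1;
    }

    // 52 * 2 bytes: the cumulative "number" field per first character.
    extern u16 const g_first_layer[52];
    // 52 * 8 bytes: for each first character, a 52-bit bit field of valid
    // second characters plus an offset into the packed second-layer nodes.
    extern u64 const g_first_to_second_layer_link[52];

    // O(1) replacement for the linear scan over the first layer.
    inline bool match_first_code_point(u32 code_point, size_t& unique_index, u64& second_layer_link)
    {
        int index = alpha_index(code_point);
        if (index < 0)
            return false;
        unique_index += g_first_layer[index];
        second_layer_link = g_first_to_second_layer_link[index];
        return true;
    }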
The new data size is 24,724 bytes, up from 24,412 bytes (a net +312:
-104 from the 52 first-layer nodes going from 4 bytes to 2 bytes, and
+416 from the addition of the 'first to second layer link' data).
In terms of raw matching speed (outside the context of the tokenizer),
this provides about a 1.72x speedup.
In very named-character-reference-heavy tokenizer benchmarks, this
provides about a 1.05x speedup (the effect of named character reference
matching speed is diluted when benchmarking the tokenizer).
Additionally, fixes the size of the named character reference data when
targeting Windows.
When there is an active insertion point, it's necessary to tokenize
code-point-by-code-point to handle the case of document.write being
used to insert a named character reference one code point at a time.
However, when there is no insertion point defined, looking ahead at the
input and doing the matching all-at-once is more efficient since it
allows:
- Avoiding the work done in next_code_point between each code point
being matched (leading to better CPU cache usage in theory)
- Skipping ahead to the end of the match all at once, which does less
work overall than the equivalent number of next_code_point calls
(that is, skip(N) does less work than next_code_point called N times)
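In sketch form, the two strategies look like this (function names are
illustrative, not the actual tokenizer API):

    if (has_active_insertion_point()) {
        // document.write() can append more input between any two code
        // points, so match one code point at a time.
        while (auto code_point = next_code_point()) {
            if (!reference_matcher.try_consume(*code_point))
                break;
        }
    } else {
        // The remaining input is fixed, so match against it all at once
        // and skip past the whole match in one step.
        auto matched_length = reference_matcher.match(remaining_input());
        skip(matched_length); // cheaper than N next_code_point() calls
    }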
In my benchmarking, this provides a small performance boost (fewer
instructions, fewer CPU cycles, fewer branch misses) essentially for
free.
The `muted` content attribute should only affect the state of the
`muted` IDL property when the media element is first created. The
attribute should have no dynamic effect.
Documents created via DOMParser.parseFromString() are parsed
synchronously and do not participate in the browsing context's loading
pipeline. This patch ensures that if the document has no browsing
context (i.e. it was parsed via DOMParser), its readiness is set to
"complete" synchronously.
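The gist of the change (a sketch; the exact spot in the parser may
differ):

    // Documents without a browsing context (e.g. those produced by
    // DOMParser.parseFromString()) never go through the loading
    // pipeline, so mark them complete as soon as parsing finishes.
    if (!document->browsing_context())
        document->update_readiness(HTML::DocumentReadyState::Complete);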
Fixes WPT:
domparsing/xmldomparser.html
Partly corresponds to this commit, which adds numbering to some
substeps: d426109ea1
This is not a complete review of all the spec steps to check that
they're up to date - I just updated the parts affected by the above
commit, and then added some `->` marks to places where I noticed it was
missing. There may be actual spec differences still.
An actual change that needs tackling later is that `handle_in_head()`'s
branch for `<template>` has some new steps related to custom element
registries.
Instead, port all users over to the newly created Origin::create_opaque
factory function. This also requires porting over some users of Origin
to avoid default construction.
As part of the effort of removing Origin's default constructor, and
since Document has its origin set after construction, port Document's
origin over to an Optional<Origin>.
This exposes that we were never setting the origin of the document
during fragment parsing. For now, to maintain previous behaviour,
let's explicitly set it to an opaque origin.
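A sketch of what this looks like (member name and call site are
illustrative):

    // Document's origin is now set after construction, hence Optional.
    Optional<Origin> m_origin;

    // Fragment parsing never set an origin before; keep the previous
    // behaviour by explicitly making the document's origin opaque.
    document->set_origin(Origin::create_opaque());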
Which has an optimization when both of the strings being passed through
are FlyStrings, which actually ends up being the case in some places
during selector matching when comparing attribute names.
Instead of maintaining more overloads of
Infra::is_ascii_case_insensitive_match, switch everything over to
equals_ignoring_ascii_case.
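Roughly, the win comes from a fast path along these lines (a sketch,
assuming interned strings can be compared by identity first):

    // Illustrative: when both arguments are FlyStrings, equality of the
    // interned representations short-circuits the comparison entirely.
    bool equals_ignoring_ascii_case(FlyString const& a, FlyString const& b)
    {
        if (a == b) // identity comparison of interned strings
            return true;
        return a.bytes_as_string_view().equals_ignoring_ascii_case(b.bytes_as_string_view());
    }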
Instead of using UTF-8 iterators to traverse the HTMLTokenizer input
stream one code point at a time, we now do a one-shot conversion up
front from the input encoding to a Vector<u32> of Unicode code points.
This simplifies the tokenizer logic somewhat, and ends up being faster
as well, so win-win.
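Conceptually (a sketch; the real conversion goes through the usual
decoding machinery, and `input` here is assumed to already be UTF-8
text):

    #include <AK/Utf8View.h>
    #include <AK/Vector.h>

    // Decode the whole input into code points once, up front, instead
    // of walking a UTF-8 iterator for every code point during
    // tokenization.
    Vector<u32> code_points;
    code_points.ensure_capacity(input.length());
    for (u32 code_point : Utf8View(input))
        code_points.append(code_point);
    // The tokenizer then indexes into code_points directly.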
1.02x speedup on Speedometer 2.1
To allow the concept of a WorkerAgent to be reused between shared and
dedicated workers. An event loop is the commonality between the
different agent types, though there are some differences between those
event loops, which we customize on construction of the HTML::EventLoop.
If attachment fails for whatever reason (e.g. the host element is not
allowed to be a host), the HTML spec tells us to insert the template
element anyway and proceed.
Before this change, we were recomputing the insertion location at this
point, which caused it to be *inside* the template element. Inserting
the template element into itself didn't work, and so the DOM would end
up incorrect.
The fix here is to simply use the insertion point we determined earlier
in the same function, before putting a template element on the stack of
open elements. We already do this elsewhere.
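In sketch form (names are illustrative):

    // Compute the insertion location *before* the template element goes
    // onto the stack of open elements, and reuse it even if shadow root
    // attachment fails; recomputing it afterwards would point inside
    // the template element itself.
    auto adjusted_insertion_location = find_appropriate_place_for_inserting_node();
    auto template_element = create_element_for_the_token(token);
    // ... attempt declarative shadow root attachment; on failure, fall
    //     through and insert the element anyway ...
    adjusted_insertion_location.parent->insert_before(template_element,
        adjusted_insertion_location.insert_before_sibling);
    m_stack_of_open_elements.push(template_element);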
Fixes at least 228 subtests on WPT. :^)