This corresponds to the editorial change to the HTML standard that
introduced the parsing mode enum:
01c45cede
and to the follow-up normative change:
508706c80
which makes fragment parsing derive its scripting mode from the context
document.
Inline JS-to-JS frames no longer live in the raw execution context
vector, so LibWeb callers that need to inspect or pop contexts now go
through VM helpers instead of peeking into that storage directly.
This keeps the execution context bookkeeping encapsulated while
preserving existing microtask and realm-entry checks.
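As a rough sketch of the call-site change (the "before" line is
illustrative rather than a quote of real code):

    // Before: peeking at the raw vector, which no longer holds inline
    // JS-to-JS frames:
    //     auto& context = vm.execution_context_stack().last();
    // After: the VM helpers know the real frame layout.
    auto& context = vm.running_execution_context();
    vm.pop_execution_context();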
HTMLParser::the_end() had three spin_until calls that blocked the event
loop: step 5 (deferred scripts), step 7 (ASAP scripts), and step 8
(load event delay). This replaces them with an HTMLParserEndState state
machine that progresses asynchronously via callbacks.
The state machine has three phases matching the three spin_until calls:
- WaitingForDeferredScripts: loops executing ready deferred scripts
- WaitingForASAPScripts: waits for the ASAP script lists to empty
- WaitingForLoadEventDelay: waits until nothing delays the load event
Notification triggers re-evaluate the state machine when conditions
change: HTMLScriptElement::mark_as_ready, stylesheet unblocking in
StyleElementBase/HTMLLinkElement, did_stop_being_active_document, and
DocumentLoadEventDelayer decrements. NavigableContainer state changes
(session history readiness, content navigable cleared, lazy load flag)
also trigger re-evaluation of the load event delay check.
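As a minimal sketch, the state machine looks something like this (the
three waiting states are named above; the terminal state is an
assumption):

    enum class HTMLParserEndState {
        WaitingForDeferredScripts, // the_end() step 5
        WaitingForASAPScripts,     // the_end() step 7
        WaitingForLoadEventDelay,  // the_end() step 8
        Done,                      // hypothetical terminal state
    };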
Key design decisions and why:
1. Microtask checkpoint in schedule_progress_check(): The old spin_until
called perform_a_microtask_checkpoint() before checking conditions.
This is critical because HTMLImageElement::update_the_image_data step
8 queues a microtask that creates the DocumentLoadEventDelayer.
Without the checkpoint, check_progress() would see zero delayers and
complete before images start delaying the load event.
2. deferred_invoke in schedule_progress_check():
I tried Core::Timer (0ms), queue_global_task, and synchronous calls.
Timers caused non-deterministic ordering with the HTML event loop's
task processing timer, leading to image layout tests failing (wrong
subtest pass/fail patterns). Synchronous calls fired too early during
image load processing before dimensions were set, causing 0-height
images in layout tests. queue_global_task had task ordering issues
with the session history traversal queue. deferred_invoke runs after
the current callback returns but within the same event loop pump,
giving the right balance (see the sketch after this list).
3. Navigation load event guard (m_navigation_load_event_guard): During
cross-document navigation, finalize_a_cross_document_navigation step
2 calls set_delaying_load_events(false) before the session history
traversal activates the new document. This creates a transient state
where the parent's load event delay check sees the initial about:blank
document (which has ready_for_post_load_tasks=true) as the active
document and completes prematurely. The guard suppresses completion
during this window.
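A minimal sketch of how decisions 1 and 2 combine (only
HTMLParserEndState, check_progress, perform_a_microtask_checkpoint and
deferred_invoke are named in this message; the rest is assumed):

    void HTMLParser::schedule_progress_check()
    {
        // Decision 2: deferred_invoke fires after the current callback
        // returns, but within the same event loop pump.
        Core::deferred_invoke([this] {
            // Decision 1: run a microtask checkpoint first, so a
            // microtask queued by update_the_image_data can create its
            // DocumentLoadEventDelayer before we count delayers.
            // (GC lifetime protection for `this` elided for brevity.)
            main_thread_event_loop().perform_a_microtask_checkpoint();
            check_progress();
        });
    }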
Remove includes from Node.h that are only needed for forward
declarations (AccessibilityTreeNode.h, XMLSerializer.h,
JsonObjectSerializer.h). Extract StyleInvalidationReason and
FragmentSerializationMode enums into standalone lightweight
headers so downstream headers (CSSStyleSheet.h, CSSStyleProperties.h,
HTMLParser.h) can include just the enum they need instead of all of
Node.h. Replace Node.h with forward declarations in headers that only
use Node by pointer/reference.
This breaks the circular dependency between Node.h and
AccessibilityTreeNode.h, reducing AccessibilityTreeNode.h's
recompilation footprint from ~1399 to ~25 files.
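A sketch of the resulting header shape (enum body abbreviated; the
namespace is assumed):

    // StyleInvalidationReason.h: standalone lightweight header.
    #pragma once
    namespace Web::DOM {
    enum class StyleInvalidationReason {
        // ...enumerators moved verbatim out of Node.h...
    };
    }

    // Downstream headers such as CSSStyleSheet.h include only the enum
    // header and forward-declare Node where a pointer/reference is
    // enough:
    namespace Web::DOM { class Node; }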
When a document is navigated away from while HTMLParser::the_end() is
spinning the event loop (steps 7 and 8), the spin_until stays on the
call stack indefinitely, causing all subsequent event processing on the
same event loop to happen within nested spin_until pumping. Add
is_fully_active() checks to bail out early in this case.
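A sketch of the added bail-out (the per-step condition is a stand-in):

    main_thread_event_loop().spin_until(GC::create_function(heap(), [&] {
        // If the document was navigated away from, stop spinning so
        // the nested spin_until frames can unwind off the call stack.
        if (!m_document->is_fully_active())
            return true;
        return step_condition_is_met(); // the real step 7/8 condition
    }));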
This adds visit_edges(Cell::Visitor&) methods to various helper structs
that contain GC pointers, and makes sure they are called from owning
GC-heap-allocated objects as needed.
These were found by our Clang plugin after expanding its capabilities.
The added rules will be enforced by CI going forward.
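The enforced pattern, as a sketch (struct and member names are
hypothetical):

    struct PendingScript {
        GC::Ptr<HTML::HTMLScriptElement> script;

        // Helper structs holding GC pointers declare visit_edges()...
        void visit_edges(GC::Cell::Visitor& visitor) { visitor.visit(script); }
    };

    void Owner::visit_edges(Cell::Visitor& visitor)
    {
        Base::visit_edges(visitor);
        // ...and owning heap-allocated objects must call it, otherwise
        // the collector never sees the edge and may free `script` while
        // it is still reachable through this object.
        for (auto& pending : m_pending_scripts)
            pending.visit_edges(visitor);
    }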
Introduce the HTMLSelectedContentElement and integrate it into
<select>, <option> and HTMLParser.
See whatwg/html#10548.
Two WPT bugs cause the third subtest in selectedcontent.html and in
selectedcontent-mutations.html to fail.
See whatwg/html#11882, web-platform-tests/wpt#55849.
This implements the parsing part of the customizable <select> spec
update. See whatwg/html PR #10548.
Two failing subtests in `html5lib_innerHTML_tests_innerHTML_1.html`
and `customizable-select/select-parsing.html` are due to the spec
still disallowing `<input>` inside `<select>`, even though Chrome
has already implemented this behavior (see whatwg/html#11288).
An upcoming commit will migrate the contents of Headers.h/cpp to LibHTTP
for use outside of LibWeb. These CORS and MIME helpers depend on other
LibWeb facilities, however, so they cannot be moved.
Step 2.(a).5 says to abort, but we were instead carrying on and would
run steps 3 and 4. Those steps would not change the result at all, so
aborting as specified just avoids a little unnecessary work.
I wrapped a couple of comments at 120 columns while I was at it.
When detecting an element's opening tag, the spec asks us to skip ahead
to the first whitespace or end chevron character before trying to read
attributes. Instead, we were always skipping 2 positions ahead and then
ignoring all whitespace characters and slashes, which was clearly wrong.
Theoretically this could have caused some weird behaviors if part of the
opening tag matched an expected attribute name, but it's very unlikely
to see that in the wild.
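A sketch of the corrected skip (the whitespace predicate stands in for
the spec's exact byte set):

    // Skip ahead past the tag name to the first whitespace or '>' byte
    // before trying to read attributes.
    while (position < input.size()
        && !is_ascii_space(input[position])
        && input[position] != '>') {
        ++position;
    }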
This did not cause any immediate issues except generating instances of
`Attr` with useless values which caused some unnecessary work during
encoding detection.
This ends up saving quite a bit of memory on many pages, since UTF-32
uses 4 bytes per code point.
As an example, it reduces the footprint on https://gymgrossisten.com/
by 2 MiB.
Update Element::parse_fragment and Node::unsafely_set_html to
propagate exceptions.
This refactor is needed as a prerequisite for implementing the XML
fragment parser, which requires consistent error handling in fragment
parsing.
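A sketch of the updated shape (assuming LibWeb's usual ExceptionOr/TRY
convention; exact signatures may differ):

    // Fragment parsing errors now propagate instead of being swallowed:
    WebIDL::ExceptionOr<GC::Ref<DOM::DocumentFragment>>
    Element::parse_fragment(StringView markup);

    // Callers forward failures with TRY:
    auto fragment = TRY(element->parse_fragment(markup));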
CSSUnitValue is a typed-om type which we will implement separately in
the future. However, it still seems useful to give our dimension values
a base class. (Maybe they could be templated in the future?) So instead
of deleting it entirely, rename it to DimensionStyleValue and make its
API match our style better.
This reverts 0e3487b9ab.
Back when I made that change, I thought we could make our StyleValue
classes match the typed-om definitions directly. However, they have
different requirements. Typed-om types need to be mutable and GCed,
whereas StyleValues are immutable and ideally wouldn't require a JS VM.
While I was already making such a cataclysmic change, I've moved it into
the StyleValues directory, because it *not* being there has bothered me
for a long time. 😅
Introduces a few ad-hoc modifications to the DAFSA aimed to increase
performance while keeping the data size small.
- The 'first layer' of nodes is extracted out and replaced with a lookup
table. This turns the search for the first character from O(n) into
O(1), and doesn't increase the data size because all first characters
in the set of named character references are in 'a'-'z'/'A'-'Z', so a
lookup array of exactly 52 elements can be used. The lookup table
stores the cumulative "number" fields that would be calculated by a
linear scan that matches a given node, thus allowing the unique index
to be built up as normal with an O(1) search instead of a linear scan.
- The 'second layer' of nodes is also extracted out and searches of the
second layer are done using a bit field of 52 bits (the set bits of
the bit field depend on the first character's value), where each set
bit corresponds to one of 'a'-'z'/'A'-'Z' (similar to the first
layer, the second layer can only contain ASCII alphabetic
characters). The bit field is then re-used (along with an offset) to
get the index into the array of second-layer nodes (see the sketch
after this list). This technique ultimately allows for storing the
minimum number of nodes in the second layer, and therefore only
increasing the size of the data by the size of the 'first to second
layer link' info, which is 52 * 8 = 416 bytes.
- After the second layer, the rest of the data is stored using a
mostly-normal DAFSA, but there are still a few differences:
- The "number" field is cumulative, in the same way that the
first/second layer store a cumulative "number" field. This cuts
down slightly on the amount of work done during the search of a
list of children, and we can get away with it because the
cumulative "number" fields of the remaining nodes in the DAFSA
(after the first and second layer nodes were extracted out) happens
to require few enough bits that we can store the cumulative version
while staying under our 32-bit budget.
- Instead of storing a 'last sibling' flag to denote the end of a
list of children, the length of each node's list of children is
stored. Again, this is mostly done just because there are enough
bits available to do so while keeping the DAFSA node within 32
bits.
- Note: Together, these modifications open up the possibility of
using a binary search instead of a linear search over the
children, but due to the consistently small lengths of the lists
of children in the remaining DAFSA, a linear search actually seems
to be the better option.
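A sketch of the first/second layer lookup described in the first two
points (the 52-entry tables, the 52-bit mask, and the cumulative
"number" fields come from the description above; the exact bit packing
and all names are assumptions):

    #include <bit>
    #include <cstdint>
    #include <optional>

    // Maps 'a'-'z'/'A'-'Z' onto 0..51; callers pass ASCII letters only.
    static constexpr std::uint32_t alpha_index(char c)
    {
        return c >= 'a' ? c - 'a' : c - 'A' + 26;
    }

    // Cumulative "number" fields for the 52 first-layer nodes.
    extern const std::uint16_t first_layer_number[52];
    // Per first character: a 52-bit set of valid second characters plus
    // an offset into the packed second-layer node array (the 52 * 8 =
    // 416 bytes of link info).
    extern const std::uint64_t first_to_second_link[52];

    // O(1) second-layer lookup: test c2's bit, then use the rank of
    // that bit (plus the offset) to index the second-layer node array.
    static std::optional<std::uint32_t> second_layer_index(char c1, char c2)
    {
        auto link = first_to_second_link[alpha_index(c1)];
        auto mask = link & ((std::uint64_t { 1 } << 52) - 1);
        auto bit = std::uint64_t { 1 } << alpha_index(c2);
        if (!(mask & bit))
            return std::nullopt; // no named reference starts with c1, c2
        auto offset = static_cast<std::uint32_t>(link >> 52);
        return offset + std::popcount(mask & (bit - 1));
    }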
The new data size is 24,724 bytes, up from 24,412 bytes (+312 total:
-104 from the 52 first-layer nodes going from 4 bytes to 2 bytes, and
+416 from the addition of the 'first to second layer link' data).
In terms of raw matching speed (outside the context of the tokenizer),
this provides about a 1.72x speedup.
In very named-character-reference-heavy tokenizer benchmarks, this
provides about a 1.05x speedup (the effect of named character reference
matching speed is diluted when benchmarking the tokenizer).
Additionally, fixes the size of the named character reference data when
targeting Windows.
When there is an active insertion point, it's necessary to tokenize
code-point-by-code-point to handle the case of document.write being
used to insert a named character reference one code point at a time.
However, when there is no insertion point defined, looking ahead at the
input and doing the matching all-at-once is more efficient since it
allows:
- Avoiding the work done in next_code_point between each code point
being matched (leading to better CPU cache usage in theory)
- Skipping ahead to the end of the match all at once, which does less
work overall than the equivalent number of next_code_point calls
(that is, skip(N) does less work than next_code_point called N times)
In my benchmarking, this provides a small performance boost (fewer
instructions, fewer CPU cycles, fewer branch misses) essentially for
free.
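A sketch of the strategy selection (only next_code_point and skip are
named above; the other helpers are hypothetical):

    if (insertion_point_is_defined()) {
        // document.write may feed us one code point at a time, so
        // match incrementally, consuming via next_code_point().
        match_named_character_reference_incrementally();
    } else {
        // No insertion point: match against the lookahead buffer in
        // one pass, then skip(N), which does less work than calling
        // next_code_point() N times.
        auto matched = match_named_character_reference(remaining_input());
        skip(matched);
    }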
The `muted` content attribute should only affect the state of the
`muted` IDL property when the media element is first created. The
attribute should have no dynamic effect.
Documents created via DOMParser.parseFromString() are parsed
synchronously and do not participate in the browsing context's loading
pipeline.
This patch ensures that if the document has no browsing context (i.e.
it was parsed via DOMParser), its readiness is set to "complete"
synchronously.
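A sketch of the added check (assuming Document's usual readiness API):

    // DOMParser documents have no browsing context and never enter the
    // loading pipeline, so mark them complete right away.
    if (!document->browsing_context())
        document->update_readiness(HTML::DocumentReadyState::Complete);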
Fixes WPT:
domparsing/xmldomparser.html