Introduces a few ad-hoc modifications to the DAFSA aimed at increasing
performance while keeping the data size small.
- The 'first layer' of nodes is extracted out and replaced with a lookup
table. This turns the search for the first character from O(n) to
O(1), and doesn't increase the data size because all first characters
in the set of named character references have the
values 'a'-'z'/'A'-'Z', so a lookup array of exactly 52 elements can
be used. The lookup table stores the cumulative "number" fields that
would be calculated by a linear scan that matches a given node, thus
allowing the unique index to be built up as normal with an O(1) search
instead of a linear scan.
- The 'second layer' of nodes is also extracted out and searches of the
second layer are done using a bit field of 52 bits (the set bits of
the bit field depend on the first character's value), where each set
bit corresponds to one of 'a'-'z'/'A'-'Z' (similar to the first
layer, the second layer can only contain ASCII alphabetic
characters). The bit field is then re-used (along with an offset) to
get the index into the array of second layer nodes (see the sketch
after this list). This technique ultimately allows for storing the
minimum number of nodes in the second layer, and therefore only
increases the data size by the size of the 'first to second layer
link' info, which is 52 * 8 = 416 bytes.
- After the second layer, the rest of the data is stored using a
mostly-normal DAFSA, but there are still a few differences:
- The "number" field is cumulative, in the same way that the
first/second layer store a cumulative "number" field. This cuts
down slightly on the amount of work done during the search of a
list of children, and we can get away with it because the
cumulative "number" fields of the remaining nodes in the DAFSA
(after the first and second layer nodes were extracted out) happen
to require few enough bits that we can store the cumulative version
while staying under our 32-bit budget.
- Instead of storing a 'last sibling' flag to denote the end of a
list of children, the length of each node's list of children is
stored. Again, this is mostly done just because there are enough
bits available to do so while keeping the DAFSA node within 32
bits.
- Note: Together, these modifications open up the possibility of
using a binary search instead of a linear search over the
children, but due to the consistently small lengths of the lists
of children in the remaining DAFSA, a linear search actually seems
to be the better option.
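As a rough illustration of the two layers, here's a minimal sketch in C++. All names and exact field widths below are assumptions for illustration, not the actual generated data:
```cpp
#include <bit>
#include <cstdint>
#include <optional>

// Hypothetical first-layer entry: the cumulative "number" value that a
// linear scan over the old first-layer nodes would have accumulated.
struct FirstLayerEntry {
    uint16_t cumulative_number;
};

// Hypothetical first-to-second-layer link: 52 bits mark which second
// characters can follow a given first character, and the offset points
// at that first character's slice of the packed second-layer array.
// One 8-byte entry per first character: 52 * 8 = 416 bytes.
struct FirstToSecondLayerLink {
    uint64_t mask : 52;
    uint64_t second_layer_offset : 12;
};

// 'a'-'z' map to 0-25 and 'A'-'Z' to 26-51, so 52 slots suffice.
static std::optional<uint8_t> alphabet_index(uint32_t code_point)
{
    if (code_point >= 'a' && code_point <= 'z')
        return code_point - 'a';
    if (code_point >= 'A' && code_point <= 'Z')
        return code_point - 'A' + 26;
    return std::nullopt;
}

// O(1) second-layer lookup: test the bit for the second character, then
// count the set bits below it to find the node's position within the
// tightly packed second-layer array.
static std::optional<size_t> second_layer_node_index(FirstToSecondLayerLink link, uint8_t second_char_index)
{
    uint64_t bit = uint64_t { 1 } << second_char_index;
    if (!(link.mask & bit))
        return std::nullopt; // no named reference starts with this pair
    return link.second_layer_offset + std::popcount(link.mask & (bit - 1));
}
```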
The new data size is 24,724 bytes, up from 24,412 bytes (+312 total:
-104 from the 52 first layer nodes going from 4 bytes to 2 bytes, and
+416 from the addition of the 'first to second layer link' data).
In terms of raw matching speed (outside the context of the tokenizer),
this provides about a 1.72x speedup.
In very named-character-reference-heavy tokenizer benchmarks, this
provides about a 1.05x speedup (the effect of named character reference
matching speed is diluted when benchmarking the tokenizer).
Additionally, fixes the size of the named character reference data when
targeting Windows.
When there is an active insertion point, it's necessary to tokenize
code-point-by-code-point to handle the case of document.write being
used to insert a named character reference one code point at a time.
However, when there is no insertion point defined, looking ahead at the
input and doing the matching all-at-once is more efficient since it
allows:
- Avoiding the work done in next_code_point between each code point
being matched (leading to better CPU cache usage in theory)
- Skipping ahead to the end of the match all at once, which does less
work overall than the equivalent number of next_code_point calls
(that is, skip(N) does less work than next_code_point called N times;
see the sketch below)
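Roughly, the two paths look like this (the stream method names here are assumptions for illustration):
```cpp
// With an active insertion point: decode and advance per code point.
while (auto code_point = input.next_code_point()) {
    if (!try_match(*code_point))
        break;
}

// Without an insertion point: match against the lookahead in one shot,
// then advance the stream with a single position update.
size_t matched = match_all(input.remaining_view());
input.skip(matched); // cheaper than calling next_code_point() N times
```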
In my benchmarking, this provides a small performance boost (fewer
instructions, fewer CPU cycles, fewer branch misses) essentially for
free.
The `muted` content attribute should only affect the state of the
`muted` IDL property when the media element is first created. The
attribute should have no dynamic effect.
Documents created via DOMParser.parseFromString()
are parsed synchronously and do not participate in the
browsing context's loading pipeline.
This patch ensures that if the document has no browsing context
(i.e. was parsed via DOMParser),
its readiness is set to "complete" synchronously.
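A minimal sketch of the idea (helper names assumed):
```cpp
// At the end of parsing ("the end" in the spec):
if (!document->browsing_context()) {
    // No browsing context (e.g. created via DOMParser.parseFromString()):
    // there is no loading pipeline to wait on, so the document's
    // readiness can be set to "complete" synchronously.
    document->update_readiness(DocumentReadyState::Complete);
    return;
}
// Otherwise, queue the usual asynchronous completion steps.
```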
Fixes WPT:
domparsing/xmldomparser.html
Partly corresponds to this spec commit, which adds numbering to some substeps:
d426109ea1
This is not a complete review of all the spec steps to check that
they're up to date - I just updated the parts affected by the above
commit, and then added some `->` marks to places I noticed it was
missing. There may be actual spec differences still.
An actual change that needs tackling later is that `handle_in_head()`'s
branch for `<template>` has some new steps related to custom element
registries.
Instead, porting over all users to use the newly created
Origin::create_opaque factory function. This also requires porting
over some users of Origin to avoid default construction.
As part of the effort to remove the default constructor of Origin,
and since Document has its origin set after construction, port
Document's origin over to an Optional<Origin>.
This exposes that we were never setting the origin of the document
during fragment parsing. For now, to maintain previous behaviour,
let's explicitly set it to an opaque origin.
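In sketch form (simplified; the real code uses AK's Optional):
```cpp
#include <optional>
#include <utility>

struct Origin { /* ... */ };

class Document {
public:
    // The origin is assigned after construction, so "not yet set" is
    // modeled explicitly rather than via a default-constructed Origin.
    void set_origin(Origin origin) { m_origin = std::move(origin); }
    Origin const& origin() const { return m_origin.value(); }

private:
    std::optional<Origin> m_origin;
};
```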
Which has an optimization when both sides of the comparison are
FlyStrings, which actually ends up being the case in some places
during selector matching when comparing attribute names.
Instead of maintaining more overloads of
Infra::is_ascii_case_insensitive_match, switch
everything over to equals_ignoring_ascii_case instead.
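The fast path in question is, roughly, the following (a sketch of the assumed behavior, not the actual AK code):
```cpp
#include <AK/FlyString.h>

bool equals_ignoring_ascii_case(FlyString const& a, FlyString const& b)
{
    // FlyStrings are interned: equal strings share backing data, so a
    // cheap identity comparison settles the common case without looking
    // at any characters.
    if (a == b)
        return true;
    return a.bytes_as_string_view().equals_ignoring_ascii_case(b.bytes_as_string_view());
}
```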
Instead of using UTF-8 iterators to traverse the HTMLTokenizer input
stream one code point at a time, we now do a one-shot conversion up
front from the input encoding to a Vector<u32> of Unicode code points.
This simplifies the tokenizer logic somewhat, and ends up being faster
as well, so win-win.
1.02x speedup on Speedometer 2.1
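Sketched out (decoder helper name assumed):
```cpp
// One-shot conversion from the input encoding to code points:
Vector<u32> code_points;
decode_to_code_points(input_bytes, input_encoding, [&](u32 code_point) {
    code_points.append(code_point);
});
// Tokenization then becomes plain indexing into the vector:
// u32 current = code_points[m_position++];
```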
To allow the concept of a WorkerAgent to be reused between shared and
dedicated workers. An event loop is the commonality between the
different agent types, though there are some differences between those
event loops, which we customize on construction of the
HTML::EventLoop.
If attachment fails for whatever reason (e.g. the host element is not
allowed to be a host), the HTML spec tells us to insert the template
element anyway and proceed.
Before this change, we were recomputing the insertion location at this
point, which caused it to be *inside* the template element. Inserting
the template element into itself didn't work, and so the DOM would end
up incorrect.
The fix here is to simply use the insertion point we determined earlier
in the same function, before putting a template element on the stack of
open elements. We already do this elsewhere.
Fixes at least 228 subtests on WPT. :^)
Start work on a speculative HTML Parser in Swift. This component will
walk ahead of the normal HTML parser looking for fetch() requests to
make while the normal parser is blocked. This work exposed many holes in
the Swift/C++ interop component, which have been reported upstream.
There are two changes happening here: a correctness fix, and an
optimization. In theory they are unrelated, but the optimization
actually paves the way for the correctness fix.
Before this commit, the HTML tokenizer would attempt to look for named
character references by checking from after the `&` until the end of
m_decoded_input, which meant that it was unable to recognize things like
named character references that are inserted via `document.write` one
byte at a time. For example, if `&notin;` was written one-byte-at-a-time
with `document.write`, then the tokenizer would only check against `n`
since that's all that would exist at the time of the check and therefore
erroneously conclude that it was an invalid named character reference.
This commit modifies the approach taken for named character reference
matching by using a trie-like structure (specifically, a deterministic
acyclic finite state automaton or DAFSA), which allows for efficiently
matching one-character-at-a-time and therefore it is able to pick up
matching where it left off after each code point is consumed.
Note: Because it's possible for a partial match to not actually develop
into a full match (e.g. `&notindo` which could lead to `&notindot;`),
some backtracking is performed after-the-fact in order to only consume
the code points within the longest match found (e.g. `&notindo` would
backtrack back to `&not`).
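The per-code-point loop then looks roughly like this (matcher method names are assumptions, not the actual API):
```cpp
NamedCharacterReferenceMatcher matcher;
size_t code_points_since_last_full_match = 0;
while (auto code_point = peek_next_code_point()) {
    if (!matcher.try_consume(*code_point))
        break; // no named reference can continue from this prefix
    consume_next_code_point();
    ++code_points_since_last_full_match;
    if (matcher.last_consumed_ends_a_match())
        code_points_since_last_full_match = 0; // longest match so far
}
// Backtrack so that only the longest full match stays consumed.
backtrack_by(code_points_since_last_full_match);
```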
With this new approach, `document.write` being called one-byte-at-a-time
is handled correctly, which allows for passing more WPT tests, with the
most directly relevant tests being
`/html/syntax/parsing/html5lib_entities01.html`
and
`/html/syntax/parsing/html5lib_entities02.html`
when run with `?run_type=write_single`. Additionally, the implementation
now better conforms to the language of the spec (and resolves a FIXME)
because exactly the matched characters are consumed and nothing more, so
SWITCH_TO can be used as the spec says instead of RECONSUME_IN.
The new approach is also an optimization:
- Instead of a linear search using `starts_with`, the usage of a DAFSA
means that it is always aware of which characters can lead to a match
at any given point, and will bail out whenever a match is no longer
possible.
- The DAFSA is able to take advantage of the note in the section
`13.5 Named character references` that says "This list is static and
will not be expanded or changed in the future." and tailor its Node
struct accordingly to tightly pack each node's data into 32 bits (see
the sketch after this list).
Together with the inherent DAFSA property of redundant node
deduplication, the amount of data stored for named character reference
matching is minimized.
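For illustration, a packed node might look like the following; the exact field widths here are hypothetical, chosen only to show that everything fits in 32 bits:
```cpp
#include <cstdint>

struct Node {
    uint32_t character : 7;          // ASCII character for this node
    uint32_t number : 8;             // cumulative "number" field
    uint32_t end_of_word : 1;        // a named reference ends here
    uint32_t children_len : 4;       // length of this node's child list
    uint32_t first_child_index : 12; // index of the first child
};
static_assert(sizeof(Node) == 4, "must stay within the 32-bit budget");
```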
In my testing:
- A benchmark tokenizing an arbitrary set of HTML test files was about
1.23x faster (2070ms to 1682ms).
- A benchmark tokenizing a file with tens of thousands of named
character references mixed in with truncated named character
references and arbitrary ASCII characters/ampersands runs about 8x
faster (758ms to 93ms).
- The size of `liblagom-web.so` was reduced by 94.96KiB.
Some technical details:
A DAFSA (deterministic acyclic finite state automaton) is essentially a
trie flattened into an array, but it also uses techniques to minimize
redundant nodes. This provides fast lookups while minimizing the
required data size, but normally does not allow for associating data
related to each word. However, by adding a count of the number of
possible words from each node, it becomes possible to also use it to
achieve minimal perfect hashing for the set of words (which allows going
from word -> unique index as well as unique index -> word). This allows
us to store a second array of data so that the DAFSA can be used as a
lookup for e.g. the associated code points.
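To make that concrete, here is a simplified (non-packed, non-cumulative) sketch of how the per-node word count yields a unique index during a lookup:
```cpp
#include <cstddef>
#include <cstdint>
#include <optional>
#include <string_view>

struct Node {
    char character;
    uint16_t number; // number of words reachable through this node
    bool end_of_word;
    Node const* children;
    size_t children_count;
};

// Returns the 1-based unique index of `word` among all words in the
// DAFSA, or nullopt if `word` is not present.
std::optional<size_t> unique_index_of(Node const& root, std::string_view word)
{
    size_t index = 0;
    Node const* current = &root;
    for (char c : word) {
        Node const* next = nullptr;
        for (size_t i = 0; i < current->children_count; ++i) {
            Node const& child = current->children[i];
            if (child.character == c) {
                if (child.end_of_word)
                    ++index; // a word ends here; count it
                next = &child;
                break;
            }
            index += child.number; // skip every word under this sibling
        }
        if (!next)
            return std::nullopt;
        current = next;
    }
    if (!current->end_of_word)
        return std::nullopt;
    return index; // index - 1 addresses the associated-data array
}
```
Running the same walk in reverse (choosing children by subtracting counts) maps a unique index back to its word, which is what makes the hashing "perfect" in both directions.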
For the Swift implementation, the new NamedCharacterReferenceMatcher
was used to satisfy the previous API and the tokenizer was left alone
otherwise. In the future, the Swift implementation should be updated to
use the same implementation for its NamedCharacterReference state as
the updated C++ implementation.
When setting scroll position during page load we need to consider
whether we actually have a fragment to scroll to. A script may already
have run at that point and may already have set a scroll position.
If there is an actual fragment to scroll to, it is fine to scroll to
that fragment, since it should take precedence. If we don't have a
fragment however, we should not unnecessarily overwrite the scroll
position set by the script back to (0, 0).
Since this problem is caused by a spec bug, I have tested the behavior
in the three major browser engines. Unfortunately they do not agree
fully with each other. If there is no fragment at all (e.g. `foo.html`),
all browsers will respect the scroll position set by the script. If
there is a fragment (e.g. `foo.html#bar`), all browsers will set the
scroll position to the fragment element and ignore the one set by
script. However, when the fragment is empty (e.g. `foo.html#`), then
Blink and WebKit will set scroll position to the fragment, while Gecko
will set scroll position from script. Since all of this is ad-hoc
behavior anyway, I simply implemented the Blink/WebKit behavior for
now, since it has the majority vote.
This fixes a regression introduced in 51102254b5.
This commit implements the main "render blocking" behavior for link
elements, drastically reducing the amount of FOUC (flash of unstyled
content) we subject our users to.
The document will now block rendering until linked style sheets
referenced by parser-created link elements have loaded (or failed).
Note that we don't yet extend the blocking period until "critical
subresources" such as imported style sheets have been downloaded
as well.
Previously, the code checked against a charset named "UTF-16BE/LE"
when following the standard's steps to convert the charset to UTF-8,
but in reality, the charsets "UTF-16BE" and "UTF-16LE" should be
checked for separately.
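A sketch of the corrected check (simplified):
```cpp
// Each UTF-16 label has to be matched on its own before mapping to
// UTF-8; "UTF-16BE/LE" is only the spec's shorthand for the pair.
if (encoding.equals_ignoring_ascii_case("UTF-16BE"sv)
    || encoding.equals_ignoring_ascii_case("UTF-16LE"sv))
    encoding = "UTF-8"sv;
```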
Co-authored-by: Jelle Raaijmakers <jelle@ladybird.org>
There's a quirk in HTML where the parser should ignore any line feed
character immediately following a `pre` or `textarea` start tag.
This was working fine when we could peek ahead in the input stream and
see the next token, but didn't work in character-at-a-time parsing with
document.write().
This commit adds the "can ignore next line feed character" as a parser
flag that is maintained across invocations, making it work in this
parsing mode as well.
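In sketch form (flag name assumed):
```cpp
// Set when a <pre> or <textarea> start tag is processed:
m_can_ignore_next_line_feed_character = true;

// Checked on the next character token, which may arrive in a separate
// document.write() invocation:
if (m_can_ignore_next_line_feed_character && token.code_point() == '\n') {
    m_can_ignore_next_line_feed_character = false;
    return; // swallow the line feed, per the spec quirk
}
m_can_ignore_next_line_feed_character = false;
```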
20 new passes in WPT/html/syntax/parsing/ :^)
Instead of always inserting a new text node, we now continue appending
to an existing text node if the parser's character insertion point is
a suitable text node.
This fixes an issue where multiple invocations of document.write() would
create unnecessary sequences of text nodes. Such sequences are now
merged automatically.
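Roughly (helper names assumed):
```cpp
// If the node just before the character insertion point is a suitable
// Text node, extend it instead of creating a new sibling.
if (auto* existing_text = insertion_location.preceding_suitable_text_node())
    existing_text->append_data(data);
else
    insert_new_text_node(insertion_location, data);
```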
19 new passes in WPT/html/syntax/parsing/ :^)
We were neglecting to return after handling the `frameset` start tag,
which caused us to process it twice, once properly and once generically.
54 new passes in WPT/html/syntax/parsing/ :^)
Before this change, the explicit EOF inserted by document.close() would
instantly abort the parser. This meant that parsing algorithms that ran
as part of the parser unwinding on EOF would never actually run.
591 new passes in WPT/html/syntax/parsing/ :^)
This exposed a problem where the parser would try to insert a root
<html> element on EOF in a document where someone already inserted such
an element via direct DOM manipulation. The parser now gracefully
handles this scenario. It's covered by existing tests (which would
crash without this change.)
This fixes a crash in the included test that regressed in 0adf261,
and is hit by the following HTML:
```html
<body></body>
<script>
const frame = document.body.appendChild(document.createElement("iframe"));
frame.contentDocument.open();
const child = frame.contentDocument.createElement("html")
const html = frame.contentDocument.appendChild(child);
frame.contentDocument.close();
</script>
```
I am not 100% sure this is fully the correct fix, and there may be
other cases which would not work properly. But it's definitely an
improvement to make the confusingly named 'insert_an_eof' function of
the tokenizer actually do something.
We've historically asserted that no "saturated" size values end up as
final metrics for boxes in layout. This always had a chance of producing
false positives, since you can trivially create extremely large boxes
with CSS.
The reason we had those assertions was to catch bugs in our own engine
code where we'd incorrectly end up with non-finite values in layout
algorithms. At this point, we've found and fixed all known bugs of that
nature, and what remains are a bunch of false positives on pages that
create very large scrollable areas, iframes etc.
So, let's change it! We now clamp content width and height of boxes to
17895700 pixels, apparently the same cap as Firefox uses.
There's also the issue of calc() being able to produce non-finite
values. Note that we don't clamp the result of calc() directly, but
instead just clamp values when assigning them to content sizes.
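A minimal sketch of the clamp (constant and helper names assumed):
```cpp
#include <algorithm>

// Apparently the same cap as Firefox uses.
static constexpr double max_content_size_px = 17895700;

// Applied when assigning content sizes; calc() results themselves are
// deliberately left unclamped.
double clamp_content_size(double size_px)
{
    return std::min(size_px, max_content_size_px);
}
```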
Fixes #645.
Fixes #1236.
Fixes #1249.
Fixes #1908.
Fixes #3057.
This makes it more convenient to use the 'relevant agent' concept,
instead of the awkward dynamic casts we needed to do for every call
site.
mutation_observers is also changed to hold a GC::Root instead of raw
GC::Ptr. Somehow this was not causing problems before, but trips up CI
after these changes.