Importing these tests now because they are for input element types that
have requirements related to the constraint-validation API, which we've
been implementing recently.
This commit only imports tests, without any changes to our code.
For attributes like Element.ariaControlsElements, which are a reflection
of FrozenArray&lt;Element&gt;, we must return the same JS::Array object every
time the attribute is invoked, until its contents have changed. This
patch implements caching of the reflected array in accordance with the
spec.
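As a rough illustration of the caching requirement, here is a minimal
sketch. All names here are illustrative stand-ins, not the actual
codebase types; the real implementation caches a GC-managed JS::Array
on the element rather than a heap vector:

```cpp
#include <memory>
#include <vector>

struct Element; // stand-in for the real DOM element type

struct CachedElementArrayReflection {
    std::vector<Element*> cached_contents;         // element list the cache was built from
    std::shared_ptr<std::vector<Element*>> cached; // stand-in for the cached JS::Array

    // Returns the identical array object on every invocation until the
    // list of referenced elements has changed, as the spec requires for
    // reflected FrozenArray<Element> attributes.
    std::shared_ptr<std::vector<Element*>> get(std::vector<Element*> current)
    {
        if (!cached || current != cached_contents) {
            cached = std::make_shared<std::vector<Element*>>(current); // "new JS::Array"
            cached_contents = std::move(current);
        }
        return cached;
    }
};
```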
There are ARIA attributes, e.g. ariaControlsElements, which refer to a
list of elements by their ID. For example:
<div aria-controls="item1 item2">
The div.ariaControlsElements attribute would then return the list of
elements whose IDs match the tokens in the aria-controls attribute.
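For illustration, resolving the token list could look roughly like this
(the names here are hypothetical stand-ins for the real DOM plumbing,
which scopes the ID lookup to the element's tree):

```cpp
#include <sstream>
#include <string>
#include <vector>

struct Element; // stand-in for the real DOM element type
Element* get_element_by_id(std::string const& id); // stand-in for a tree-scoped lookup

std::vector<Element*> resolve_referenced_elements(std::string const& attribute_value)
{
    std::vector<Element*> result;
    std::istringstream tokens(attribute_value); // e.g. "item1 item2"
    std::string id;
    while (tokens >> id) { // split on whitespace
        if (auto* element = get_element_by_id(id))
            result.push_back(element);
    }
    return result;
}
```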
Type changes are now signaled to radio buttons. This causes other radio
buttons in the group to be unchecked if the input element is a checked
radio button after the type change.
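A minimal sketch of the intended behavior, with illustrative types (the
real code operates on the DOM's radio button group):

```cpp
#include <vector>

struct Input {
    bool is_radio_button { false };
    bool checked { false };
};

// Called when an <input>'s type attribute state changes. If the element
// is a checked radio button after the change, the rest of its group
// loses its checkedness.
void signal_type_change(Input& input, std::vector<Input*> const& radio_group)
{
    if (!input.is_radio_button || !input.checked)
        return;
    for (auto* other : radio_group) {
        if (other != &input)
            other->checked = false;
    }
}
```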
Similar-origin window agents must have the [[CanBlock]] flag set to
false. We achieve this by hooking up JS's concept of an agent to
HTML::Agent. For now, this is only hooked up for the similar-origin
window agent case, but it should be extended to the other agent types
in the future.
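Sketched out, the hookup looks something like this (the names are
illustrative; the observable consequence is that e.g. Atomics.wait()
throws in a window agent, because the JS spec's AgentCanSuspend check
consults this flag):

```cpp
// The JS engine's agent carries the [[CanBlock]] flag.
struct Agent {
    bool can_block { false }; // [[CanBlock]]
};

// A similar-origin window agent is constructed with [[CanBlock]] set to
// false, since window event loops must never block.
Agent make_similar_origin_window_agent()
{
    return Agent { .can_block = false };
}
```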
This change adds “default step” and “step scale factor” handling for all
remaining HTMLInputElement input types for which the spec defines them
and for which we did not yet have handling.
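For reference, the spec's constants for the step-capable types look like
this (the values are the HTML spec's; the function name is illustrative):

```cpp
#include <string_view>

struct StepConstants {
    double default_step;      // in the unit the type's value is expressed in
    double step_scale_factor; // converts steps into the value's base unit
};

StepConstants step_constants_for_type(std::string_view type)
{
    if (type == "date")           return { 1, 86'400'000 };  // 1 day, in milliseconds
    if (type == "month")          return { 1, 1 };           // 1 month
    if (type == "week")           return { 1, 604'800'000 }; // 1 week, in milliseconds
    if (type == "time")           return { 60, 1'000 };      // 60 seconds, in milliseconds
    if (type == "datetime-local") return { 60, 1'000 };
    if (type == "number")         return { 1, 1 };
    if (type == "range")          return { 1, 1 };
    return { 1, 1 }; // other types do not use the step algorithm
}
```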
Previously, we would only invalidate style when setting the `media` IDL
attribute; changing the attribute via `setAttribute()` and
`removeAttribute()` had no immediate effect.
There are two changes happening here: a correctness fix, and an
optimization. In theory they are unrelated, but the optimization
actually paves the way for the correctness fix.
Before this commit, the HTML tokenizer would attempt to look for named
character references by checking from after the `&` until the end of
m_decoded_input, which meant that it was unable to recognize things like
named character references that are inserted via `document.write` one
byte at a time. For example, if `&notin;` was written one-byte-at-a-time
with `document.write`, then the tokenizer would only check against `n`,
since that's all that would exist at the time of the check, and would
therefore erroneously conclude that it was an invalid named character
reference.
This commit modifies the approach taken for named character reference
matching by using a trie-like structure (specifically, a deterministic
acyclic finite state automaton or DAFSA), which allows for efficiently
matching one character at a time, and can therefore pick up matching
where it left off after each code point is consumed.
Note: Because it's possible for a partial match to not actually develop
into a full match (e.g. `&notindo`, which could lead to `&notindot;`),
some backtracking is performed after-the-fact in order to only consume
the code points within the longest match found (e.g. `&notindo` would
backtrack to `&not`).
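The matching loop then looks conceptually like this (a sketch with
hypothetical method and helper names, not the exact tokenizer code):

```cpp
#include <cstddef>
#include <optional>

// Stand-ins for the tokenizer's input plumbing:
std::optional<char32_t> peek_code_point();
void consume_code_point();
void restore_code_points(std::size_t count);

// Stand-in for the DAFSA-backed matcher:
struct Matcher {
    bool try_consume(char32_t code_point); // advance if a match is still possible
    bool at_end_of_full_match() const;     // current position ends a full reference
};

void match_named_character_reference(Matcher& matcher)
{
    std::size_t overconsumed = 0; // code points past the longest full match
    while (auto code_point = peek_code_point()) {
        if (!matcher.try_consume(*code_point))
            break; // no possible match can continue from here
        consume_code_point();
        overconsumed = matcher.at_end_of_full_match() ? 0 : overconsumed + 1;
    }
    // e.g. after consuming `&notindo`, back up 4 code points to `&not`
    restore_code_points(overconsumed);
}
```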
With this new approach, `document.write` being called one-byte-at-a-time
is handled correctly, which allows for passing more WPT tests, with the
most directly relevant tests being
`/html/syntax/parsing/html5lib_entities01.html`
and
`/html/syntax/parsing/html5lib_entities02.html`
when run with `?run_type=write_single`. Additionally, the implementation
now better conforms to the language of the spec (and resolves a FIXME)
because exactly the matched characters are consumed and nothing more, so
SWITCH_TO can be used as the spec says, instead of RECONSUME_IN.
The new approach is also an optimization:
- Instead of a linear search using `starts_with`, the usage of a DAFSA
means that it is always aware of which characters can lead to a match
at any given point, and will bail out whenever a match is no longer
possible.
- The DAFSA is able to take advantage of the note in the section
`13.5 Named character references` that says "This list is static and
will not be expanded or changed in the future." and tailor its Node
struct accordingly, tightly packing each node's data into 32 bits
(see the sketch after this list). Together with the inherent DAFSA
property of deduplicating redundant nodes, the amount of data stored
for named character reference matching is minimized.
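As a sketch of the kind of packing this enables (the field widths below
are illustrative assumptions, not the exact layout in the codebase):

```cpp
#include <cstdint>

// One DAFSA node squeezed into 32 bits. The static, ASCII-only set of
// named character reference names makes small fixed field widths viable.
struct Node {
    uint32_t character : 8;    // the character this node represents
    uint32_t number : 8;       // number of words reachable through this node
    uint32_t end_of_word : 1;  // a full reference name may end here
    uint32_t end_of_list : 1;  // last node in its sibling list
    uint32_t child_index : 14; // index of the first child in the flat node array
};
static_assert(sizeof(Node) == 4);
```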
In my testing:
- A benchmark tokenizing an arbitrary set of HTML test files was about
1.23x faster (2070ms to 1682ms).
- A benchmark tokenizing a file with tens of thousands of named
character references mixed in with truncated named character
references and arbitrary ASCII characters/ampersands runs about 8x
faster (758ms to 93ms).
- The size of `liblagom-web.so` was reduced by 94.96KiB.
Some technical details:
A DAFSA (deterministic acyclic finite state automaton) is essentially a
trie flattened into an array, but it also uses techniques to minimize
redundant nodes. This provides fast lookups while minimizing the
required data size, but normally does not allow for associating data
related to each word. However, by adding a count of the number of
possible words from each node, it becomes possible to also use it to
achieve minimal perfect hashing for the set of words (which allows going
from word -> unique index as well as unique index -> word). This allows
us to store a second array of data so that the DAFSA can be used as a
lookup for e.g. the associated code points.
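Concretely, the word-to-index direction can be computed during traversal
roughly like this, using the illustrative Node layout from the earlier
sketch (the array layout and helper names are assumptions, not the
actual code):

```cpp
#include <cstddef>
#include <optional>
#include <string_view>

extern Node const dafsa[]; // flat node array; assume index 0 is the root's first child

// Returns the unique index of `word` within the stored set, if present.
// Skipped siblings contribute their subtree word counts, and every
// shorter word that ends along the path counts once; together this
// yields a minimal perfect hash.
std::optional<std::size_t> unique_index(std::string_view word)
{
    std::size_t index = 0;
    std::size_t node = 0;
    for (std::size_t i = 0; i < word.size(); ++i) {
        // Find the child matching word[i], summing counts of earlier siblings.
        while (dafsa[node].character != static_cast<unsigned char>(word[i])) {
            index += dafsa[node].number;
            if (dafsa[node].end_of_list)
                return std::nullopt; // no such word
            ++node;
        }
        if (dafsa[node].end_of_word) {
            if (i + 1 == word.size())
                return index; // found it; `index` is unique to this word
            ++index; // a shorter word ends here and precedes ours
        }
        if (dafsa[node].child_index == 0)
            return std::nullopt; // nothing follows, but our word continues
        node = dafsa[node].child_index;
    }
    return std::nullopt;
}
```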
For the Swift implementation, the new NamedCharacterReferenceMatcher
was used to satisfy the previous API and the tokenizer was left alone
otherwise. In the future, the Swift implementation should be updated to
use the same implementation for its NamedCharacterReference state as
the updated C++ implementation.
This change implements the requirements for the “suffering from an
overflow” and “suffering from an underflow” algorithms for
HTMLInputElement constraint validation.
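The spec's conditions boil down to roughly the following sketch (it
assumes the value has already been converted to a number per the input
type, and glosses over details such as reversed ranges):

```cpp
#include <optional>

// An input suffers from overflow when it has a value above its maximum,
// and from underflow when it has a value below its minimum; both only
// apply when the control has a value and the relevant bound.
bool suffers_from_overflow(std::optional<double> value, std::optional<double> maximum)
{
    return value.has_value() && maximum.has_value() && *value > *maximum;
}

bool suffers_from_underflow(std::optional<double> value, std::optional<double> minimum)
{
    return value.has_value() && minimum.has_value() && *value < *minimum;
}
```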
This change fixes a bug in our implementation of the “step base”
algorithm at https://html.spec.whatwg.org/#concept-input-min-zero. We
were using the “value” IDL attribute in a particular step where the spec
actually requires using the “value” content attribute.
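In code terms, the difference is roughly the following (a contrived
sketch with stand-in types, purely to show which “value” is meant):

```cpp
#include <optional>
#include <string>

// Stand-in element with both notions of "value":
struct InputElement {
    std::string idl_value;                              // the value IDL attribute (may be dirty)
    std::optional<std::string> value_content_attribute; // getAttribute("value")
};

// The affected step of the step base algorithm must consult the content
// attribute, not the IDL attribute:
std::optional<std::string> step_base_value_source(InputElement const& element)
{
    return element.value_content_attribute; // not element.idl_value
}
```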
This change implements HTMLInputElement type=email constraint validation
in conformance with the current spec requirements (which happens to also
produce behavior that’s interoperable with other existing engines).
This change implements HTMLInputElement type=url constraint validation
in such a way as to match the behavior in other existing engines (which
is, however, very different from what the spec currently requires).
This change, part of the HTML constraint-validation API (aka
“client-side form validation”), implements the willValidate IDL
attribute for all form controls that support it.
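Roughly, the condition looks like this. This is a simplified sketch:
the spec's “barred from constraint validation” conditions are
summarized with illustrative flags, not exhaustively listed:

```cpp
// A control's willValidate is true iff it is a candidate for constraint
// validation, i.e. none of the "barred from constraint validation"
// conditions apply.
struct FormControl {
    bool disabled { false };
    bool readonly_applies_and_set { false };
    bool has_datalist_ancestor { false };
    bool type_supports_validation { true }; // e.g. hidden/reset/button do not
};

bool will_validate(FormControl const& control)
{
    if (!control.type_supports_validation)
        return false;
    if (control.disabled || control.readonly_applies_and_set)
        return false;
    if (control.has_datalist_ancestor)
        return false;
    return true;
}
```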
One point to note is that I am not entirely sure what the result
of the pre-existing valueAsNumber test should be for this strange
case, which does not lie exactly on a week/day boundary. Chrome
gives a negative timestamp, which seems more wrong than the result
we give, and neither Gecko nor WebKit appears to support the 'week'
type. So I'm considering this result acceptable for now; this may be
something that needs more WPT tests added in the future.
Corresponds to part of https://github.com/whatwg/html/pull/9841 and then
https://github.com/whatwg/html/pull/11047
Adding `Auto` as a type state feels a little odd, as it's not an actual
type allowed in HTML. However, it's the default state when the value is
missing or invalid, which works out the same, as long as we never
serialize "auto", which we don't.
There's a quirk in HTML where the parser should ignore any line feed
character immediately following a `pre` or `textarea` start tag.
This was working fine when we could peek ahead in the input stream and
see the next token, but didn't work in character-at-a-time parsing with
document.write().
This commit adds a "can ignore next line feed character" parser flag
that is maintained across invocations, making the quirk work in this
parsing mode as well.
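A sketch of the flag-based approach (member names are hypothetical):
because the flag lives on the parser, it survives across invocations
and works even when document.write() delivers one character at a time.

```cpp
#include <string_view>

struct Parser {
    bool can_ignore_next_line_feed { false };

    void handle_start_tag(std::string_view tag_name)
    {
        if (tag_name == "pre" || tag_name == "textarea")
            can_ignore_next_line_feed = true;
    }

    void handle_character(char32_t code_point)
    {
        bool ignore = can_ignore_next_line_feed && code_point == U'\n';
        can_ignore_next_line_feed = false; // only the *immediately* following character
        if (ignore)
            return;
        insert_character(code_point);
    }

    void insert_character(char32_t);
};
```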
20 new passes in WPT/html/syntax/parsing/ :^)
Instead of always inserting a new text node, we now continue appending
to an existing text node if the parser's character insertion point is
a suitable text node.
This fixes an issue where multiple invocations of document.write() would
create unnecessary sequences of text nodes. Such sequences are now
merged automatically.
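A hedged sketch of the idea (the types and names are illustrative, not
the real DOM classes): if the insertion point already ends with a
suitable text node, append to it instead of creating a new sibling.

```cpp
#include <memory>
#include <string>
#include <vector>

struct Node {
    bool is_text { false };
    std::u32string data;
    std::vector<std::unique_ptr<Node>> children;
};

void insert_character(Node& parent, char32_t code_point)
{
    // Reuse the trailing text node when there is one...
    if (!parent.children.empty() && parent.children.back()->is_text) {
        parent.children.back()->data += code_point;
        return;
    }
    // ...and only create a new text node otherwise.
    auto text = std::make_unique<Node>();
    text->is_text = true;
    text->data += code_point;
    parent.children.push_back(std::move(text));
}
```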
19 new passes in WPT/html/syntax/parsing/ :^)