Pseudo-elements have specific rules about which CSS properties can be
applied to them. This is a first step to supporting that.
- If a property whitelist isn't present, all properties are allowed.
- Properties are named as in CSS.
- Names of property groups are prefixed with `#`, which makes this match
the spec more clearly. These groups are implemented directly in the
code generator for now.
- Any property name beginning with "FIXME:" is ignored, so we can mark
properties we don't implement yet. (A hypothetical example follows
this list.)
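To make those rules concrete, a hypothetical whitelist entry could look
like the following (the pseudo-element name, properties, and group name
are all invented for illustration):
```
"::example-pseudo": {
    "property-whitelist": [
        "color",
        "background-color",
        "#custom-properties",
        "FIXME:text-shadow"
    ]
}
```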
We previously supported a few -webkit vendor-prefixed pseudo-elements.
This patch adds those back, along with -moz equivalents, by aliasing
them to standard ones. They behave identically, except that they
serialize with their original name, just as unrecognized -webkit
pseudo-elements do.
It's likely to be a while before the forms spec settles and authors
start using the new pseudo-elements, so until then, we can still make
use of styles they've written for the non-standard ones.
There are two changes happening here: a correctness fix, and an
optimization. In theory they are unrelated, but the optimization
actually paves the way for the correctness fix.
Before this commit, the HTML tokenizer would look for named character
references by checking from after the `&` until the end of
m_decoded_input, which meant it was unable to recognize named character
references inserted via `document.write` one byte at a time. For
example, if `&notin;` was written one-byte-at-a-time with
`document.write`, the tokenizer would only check against `n`, since
that's all that would exist at the time of the check, and would
therefore erroneously conclude that it was an invalid named character
reference.
This commit changes the approach taken for named character reference
matching to use a trie-like structure (specifically, a deterministic
acyclic finite state automaton, or DAFSA), which allows for efficient
matching one character at a time and can therefore pick up matching
where it left off after each code point is consumed.
Note: Because it's possible for a partial match to not actually develop
into a full match (e.g. `&notindo`, which could lead to `&notindot;`),
some backtracking is performed after the fact in order to consume only
the code points within the longest match found (e.g. `&notindo` would
backtrack to `&not`).
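To illustrate the incremental matching and backtracking described
above, here is a minimal, self-contained sketch that uses a plain trie
in place of the real DAFSA; the names and the two-entry reference set
are invented for the example:
```cpp
#include <cstdio>
#include <map>
#include <memory>
#include <string>

struct TrieNode {
    std::map<char, std::unique_ptr<TrieNode>> children;
    bool is_terminal { false };
};

struct Matcher {
    TrieNode const* current { nullptr };
    size_t consumed { 0 };      // code points fed so far
    size_t longest_match { 0 }; // length of the last full match seen

    // Feed one character; returns false once a match is no longer possible.
    bool try_consume(char c)
    {
        auto it = current->children.find(c);
        if (it == current->children.end())
            return false;
        current = it->second.get();
        ++consumed;
        if (current->is_terminal)
            longest_match = consumed;
        return true;
    }

    // How many characters must be put back once matching stops.
    size_t overconsumed() const { return consumed - longest_match; }
};

int main()
{
    TrieNode root;
    for (auto const* word : { "not", "notindot;" }) {
        auto* node = &root;
        for (char c : std::string(word)) {
            auto& child = node->children[c];
            if (!child)
                child = std::make_unique<TrieNode>();
            node = child.get();
        }
        node->is_terminal = true;
    }

    // "notindo" walks seven nodes, but the longest full entry seen is
    // "not", so four characters ("indo") must be unconsumed.
    Matcher matcher { &root };
    for (char c : std::string("notindo")) {
        if (!matcher.try_consume(c))
            break;
    }
    std::printf("longest match: %zu, back up: %zu\n",
        matcher.longest_match, matcher.overconsumed());
}
```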
With this new approach, `document.write` being called one-byte-at-a-time
is handled correctly, which allows for passing more WPT tests, with the
most directly relevant tests being
`/html/syntax/parsing/html5lib_entities01.html`
and
`/html/syntax/parsing/html5lib_entities02.html`
when run with `?run_type=write_single`. Additionally, the implementation
now conforms better to the language of the spec (and resolves a FIXME)
because exactly the matched characters are consumed and nothing more, so
SWITCH_TO can be used as the spec says instead of RECONSUME_IN.
The new approach is also an optimization:
- Instead of a linear search using `starts_with`, the usage of a DAFSA
means that it is always aware of which characters can lead to a match
at any given point, and will bail out whenever a match is no longer
possible.
- The DAFSA is able to take advantage of the note in the section
`13.5 Named character references` that says "This list is static and
will not be expanded or changed in the future." and tailor its Node
struct accordingly to tightly pack each node's data into 32 bits
(see the sketch after this list). Together with the inherent DAFSA
property of redundant node deduplication, the amount of data stored
for named character reference matching is minimized.
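As a rough sketch of what that packing can look like, here is a
hypothetical 32-bit node layout; the field names and bit widths are
illustrative, not the actual ones used:
```cpp
#include <cstdint>

// Illustrative packed DAFSA node; real widths depend on the data, e.g.
// how many nodes the child index must be able to address.
struct Node {
    uint32_t character : 8;          // character on the edge into this node
    uint32_t is_word_end : 1;        // at least one entry ends here
    uint32_t is_last_sibling : 1;    // marks the end of a child list
    uint32_t subtree_words : 8;      // word count, used for perfect hashing
    uint32_t first_child_index : 14; // children are stored contiguously
};
static_assert(sizeof(Node) == 4, "all node data fits in 32 bits");
```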
In my testing:
- A benchmark tokenizing an arbitrary set of HTML test files was about
1.23x faster (2070ms to 1682ms).
- A benchmark tokenizing a file with tens of thousands of named
character references mixed in with truncated named character
references and arbitrary ASCII characters/ampersands runs about 8x
faster (758ms to 93ms).
- The size of `liblagom-web.so` was reduced by 94.96KiB.
Some technical details:
A DAFSA (deterministic acyclic finite state automaton) is essentially a
trie flattened into an array, but it also uses techniques to minimize
redundant nodes. This provides fast lookups while minimizing the
required data size, but normally does not allow for associating data
related to each word. However, by adding a count of the number of
possible words from each node, it becomes possible to also use it to
achieve minimal perfect hashing for the set of words (which allows going
from word -> unique index as well as unique index -> word). This allows
us to store a second array of data so that the DAFSA can be used as a
lookup for e.g. the associated code points.
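As a concrete sketch of that index-computation trick (with invented
names, and an unpacked node struct for readability): while walking the
automaton, skipping a sibling adds that sibling's whole word count to
the index, and passing through a word-end adds one.
```cpp
#include <cstdint>
#include <optional>
#include <string_view>
#include <vector>

struct Node {
    char character;
    bool is_word_end;       // at least one entry ends at this node
    bool is_last_sibling;   // terminates a contiguous child list
    uint16_t first_child;   // 0 means "no children"
    uint16_t subtree_words; // number of words reachable through this node
};

// Returns the 1-based unique index of `word` in the set, or nullopt if
// the word isn't present. Node 0 is assumed to be the root.
std::optional<uint32_t> unique_index(std::vector<Node> const& nodes,
    std::string_view word)
{
    uint32_t index = 0;
    uint16_t child = nodes[0].first_child;
    bool ends_on_word = false;
    for (char c : word) {
        bool found = false;
        for (uint16_t i = child; i != 0; ++i) {
            if (nodes[i].character == c) {
                if (nodes[i].is_word_end)
                    ++index;
                ends_on_word = nodes[i].is_word_end;
                child = nodes[i].first_child;
                found = true;
                break;
            }
            // Skip this sibling's entire subtree in index space.
            index += nodes[i].subtree_words;
            if (nodes[i].is_last_sibling)
                break;
        }
        if (!found)
            return std::nullopt;
    }
    // The unique index can then address a parallel array of, e.g., the
    // code points each named character reference maps to.
    return ends_on_word ? std::optional<uint32_t>(index) : std::nullopt;
}
```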
For the Swift implementation, the new NamedCharacterReferenceMatcher
was used to satisfy the previous API and the tokenizer was left alone
otherwise. In the future, the Swift implementation should be updated to
use the same implementation for its NamedCharacterReference state as
the updated C++ implementation.
CSSStyleDeclaration is a base class that's used by various collections
of style properties or descriptors. This commit moves all
style-property-related code into CSSStyleProperties, where it belongs.
As noted in the previous commit, we also apply the CSSStyleProperties
prototype now.
The DOMParsing spec is gradually being merged into the HTML one. The
linked spec change moves XMLSerializer, but many of the algorithms are
still in the DOMParsing spec, so I've left the links to those alone.
I've done my best to update the GN build but since I'm not actually
using it, I might have done that wrong.
Corresponds to 2edb8cc7ee
Having multiple kinds of node that hold numeric values made things more
complicated than they needed to be, and we were already converting
ConstantCalculationNodes to NumericCalculationNodes in the first
simplification pass that happens at parse-time, so they didn't exist
after that.
As noted, the spec allows for other contexts to introduce their own
numeric keywords, which might be resolved later than parse-time. We'll
need a different mechanism to support those, but
ConstantCalculationNode could not have done so anyway.
Before this change, we only parsed fit-content as a standalone keyword,
but CSS-SIZING-3 added it as a function as well
(e.g. `width: fit-content` vs `width: fit-content(20em)`). I don't know
of anything else in CSS that is overloaded like this, so it ends up
looking a little awkward in the implementation.
Note that a lot of code had already been prepped for fit-content values
to have an argument; we just weren't parsing it.
This makes it so that the IDL generator no longer assumes that
dictionary members with a default value are optional, since they will
always have at least the default value.
It might be a good idea to do this on other platforms as well, but at
least on Windows, the command line for GenerateWindowOrWorkerInterfaces
becomes too large.
This is not a very pleasant fix, but it matches a similar const_cast
that we do for JS objects returned in a union. Ideally we would
'simply' remove the const from the value being visited in the variant,
but that opens up a whole can of worms where we are currently relying on
temporary lifetime extension so that interfaces can return a Variant of
GC::Ref's to JS::Objects.
We were previously assuming that dictionary members were always
required when being returned.
This is a bit of a weird case, because unlike _input_ dictionaries,
which the spec marks as required, 'result' dictionaries do not seem to
be marked as required in spec IDL. This is still fine from the point of
view of the spec as written, since it states that we should only put
values into the dictionary if they exist.
We could do this through some constexpr metaprogramming type checks.
For example, if the type in our C++ representation were not an
Optional, we could skip the has_value check.
Instead of doing that, change the IDL of the result dictionaries to
annotate these members so that the IDL generator knows this
information up front. While in all current cases every member is
either always returned or never returned, it is conceivable that the
spec could have a situation where one member is always returned (and
should get marked as required) while the others are optionally
returned. Therefore, this new GenerateAsRequired attribute is
applied to each individual member.
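Roughly, the attribute changes the emitted conversion from a guarded
write to an unconditional one; the names below are placeholders, not
the generator's actual output:
```cpp
// Without the attribute: a returned dictionary member may be absent,
// so the generated code only writes it when it has a value.
if (result.protocol.has_value())
    MUST(object->create_data_property("protocol",
        wrap(result.protocol.value())));

// With [GenerateAsRequired]: the member is known to always be set, so
// the guard (and the Optional in the C++ representation) goes away.
MUST(object->create_data_property("protocol", wrap(result.protocol)));
```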
This fixes a compile error of multiple variables of the same name within
the same scope for the URLPattern IDL, which has a dictionary return
type that contains multiple dictionaries of the same type. Conveniently,
this also makes the complicated generated code of the URLPattern
interface easier to read by adding some more structure :^)
This matches the prototype attributes.
Used by https://chatgpt.com/, where it runs this code:
```js
CSS.supports('animation-timeline: --works')
```
If this returns false, it will attempt to polyfill Animation Timeline
and override CSS.supports to support Animation Timeline properties.
This is not really a context, but more of a set of parameters for
creating a Parser. So, treat it as such: Rename it to ParsingParams,
and store its values and methods directly in the Parser instead of
keeping the ParsingContext around.
This has a nice side-effect of not including DOM/Document.h everywhere
that needs a Parser.
Calc simplification (which I'm working towards) involves repeatedly
deriving a new calculation tree from an existing one, and in many
cases, either the whole result or a portion of it will be identical to
that of the original. Using RefPtr lets us avoid making unnecessary
copies. As a bonus it will also make it easier to return either `this`
or a new node.
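As a hedged sketch of that "return `this` or a new node" shape (the
type and method names here are invented, not the actual calc-tree API):
```cpp
NonnullRefPtr<CalculationNode const> SumNode::with_simplified_children() const
{
    Vector<NonnullRefPtr<CalculationNode const>> children;
    bool any_changed = false;
    for (auto const& child : m_children) {
        auto simplified = child->simplified();
        any_changed |= simplified.ptr() != child.ptr();
        children.append(move(simplified));
    }
    // If no child changed, share the existing node instead of copying.
    if (!any_changed)
        return *this;
    return SumNode::create(move(children));
}
```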
In future we could also cache commonly-used nodes, similar to how we do
so for 1px and 0px LengthStyleValues and various keywords.
This is done by using the combination of format and type to map to the
appropriate Skia bitmap type. We then read the SkImage of the
TexImageSource into a new SkPixmap that carries the destination format
information and holds an appropriately sized buffer. Once the pixmap is
created, readPixels is called to convert the image and write it into
the buffer.
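A hedged sketch of that conversion path, with the WebGL plumbing and
the format/type-to-Skia mapping omitted (the helper name and the
example mapping in the comment are assumptions):
```cpp
#include <core/SkImage.h>
#include <core/SkImageInfo.h>
#include <core/SkPixmap.h>

// `source` is the SkImage of the TexImageSource; `color_type` is the
// Skia color type mapped from the WebGL format/type pair (for example,
// GL_RGBA + GL_UNSIGNED_BYTE could map to kRGBA_8888_SkColorType).
// `destination` must point at a buffer of height * row_bytes bytes.
static bool read_converted_pixels(sk_sp<SkImage> const& source,
    SkColorType color_type, void* destination, size_t row_bytes)
{
    auto info = SkImageInfo::Make(source->width(), source->height(),
        color_type, kUnpremul_SkAlphaType);
    SkPixmap pixmap(info, destination, row_bytes);
    // readPixels converts from the image's native format into the
    // pixmap's format while copying into the destination buffer.
    return source->readPixels(pixmap, 0, 0);
}
```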
This is required so that getAttachedShaders returns the original
references to the shaders attached to a program. Figma (and likely all
other Emscripten-compiled applications that use WebGL) needs this to
get its own generated shader IDs back from the shaders returned by
getAttachedShaders.