Some shorthand properties work differently from normal ones in that the
mapping of provided values to longhands isn't necessarily 1-to-1 and
depends on the number of values provided; examples include `margin`,
`border-width`, and `gap`.
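As an illustration, `margin` distributes one to four values onto its
four longhands positionally; a minimal sketch of that expansion (the
helper below is hypothetical, not the actual Ladybird API):

```cpp
#include <array>
#include <cstdlib>
#include <span>

// Hypothetical helper, for illustration only: expand 1-4 `margin`
// values onto the four longhands in top/right/bottom/left order.
template<typename Value>
std::array<Value, 4> expand_margin(std::span<Value const> values)
{
    switch (values.size()) {
    case 1: return { values[0], values[0], values[0], values[0] }; // all four sides
    case 2: return { values[0], values[1], values[0], values[1] }; // top/bottom, left/right
    case 3: return { values[0], values[1], values[2], values[1] }; // top, left/right, bottom
    case 4: return { values[0], values[1], values[2], values[3] }; // top, right, bottom, left
    default: std::abort(); // `margin` accepts at most four values
    }
}
```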
These properties have distinct behaviors in how they are parsed and
serialized; having them marked allows us to implement these behaviors
in a generic way.
No functionality changes.
Our current implementation of structured serialization has a design
flaw: if the serialized/transferred type was not used in the
destination realm, it would not be seen as exposed and thus we would
not re-create the type on the other side.
This is very common: for example, transferring a MessagePort to a
just-inserted iframe, or a just-inserted iframe transferring a
MessagePort to its parent. This is what Google reCAPTCHA does.
This flaw occurred due to relying on lazily populated HashMaps of
constructors, namespaces and interfaces. This commit changes it so that
per-type "is exposed" implementations are generated.
Since it no longer relies on interface name strings, this commit
changes serializable types to indicate their type with an enum,
in line with how transferable types indicate their type.
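A rough sketch of the shape this takes, with illustrative names and
Ladybird-style spellings (`u8`, `JS::Realm`); the actual enum and
generated checks will differ:

```cpp
// Illustrative only: each serializable type reports a stable tag
// instead of an interface name string.
enum class SerializeType : u8 {
    Blob,
    DOMException,
    DOMPoint,
    // ... one entry per serializable interface
};

// Codegen then emits a per-type "is exposed" check against the
// destination realm instead of consulting lazily populated HashMaps.
bool is_serializable_interface_exposed_on(SerializeType, JS::Realm const&);
```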
This makes Google reCAPTCHA work on https://www.google.com/recaptcha/api2/demo
It currently doesn't work on non-Google origins due to a separate
same-origin policy bug.
Introduces a few ad-hoc modifications to the DAFSA aimed at increasing
performance while keeping the data size small.
- The 'first layer' of nodes is extracted out and replaced with a lookup
table. This turns the search for the first character from O(n) into
O(1), and doesn't increase the data size because all first characters
in the set of named character references have the
values 'a'-'z'/'A'-'Z', so a lookup array of exactly 52 elements can
be used. The lookup table stores the cumulative "number" field that a
linear scan would have accumulated upon matching a given node, thus
allowing the unique index to be built up as normal with an O(1) lookup
instead of a linear scan (see the sketch after this list).
- The 'second layer' of nodes is also extracted out and searches of the
second layer are done using a bit field of 52 bits (the set bits of
the bit field depend on the first character's value), where each set
bit corresponds to one of 'a'-'z'/'A'-'Z' (similar to the first
layer, the second layer can only contain ASCII alphabetic
characters). The bit field is then re-used (along with an offset) to
get the index into the array of second layer nodes. This technique
ultimately allows for storing the minimum number of nodes in the
second layer, and therefore only increasing the size of the data by
the size of the 'first to second layer link' info which is 52 * 8 =
416 bytes.
- After the second layer, the rest of the data is stored using a
mostly-normal DAFSA, but there are still a few differences:
- The "number" field is cumulative, in the same way that the
first/second layer store a cumulative "number" field. This cuts
down slightly on the amount of work done during the search of a
list of children, and we can get away with it because the
cumulative "number" fields of the remaining nodes in the DAFSA
(after the first and second layer nodes were extracted out) happen
to require few enough bits that we can store the cumulative version
while staying under our 32-bit budget.
- Instead of storing a 'last sibling' flag to denote the end of a
list of children, the length of each node's list of children is
stored. Again, this is mostly done just because there are enough
bits available to do so while keeping the DAFSA node within 32
bits.
- Note: Together, these modifications open up the possibility of
using a binary search instead of a linear search over the
children, but due to the consistently small lengths of the lists
of children in the remaining DAFSA, a linear search actually seems
to be the better option.
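Here is a condensed sketch of the first/second layer lookups described
above; the table names, field layout, and bit ordering are all
assumptions rather than the actual generated data:

```cpp
#include <bit>
#include <cstdint>

// Map 'A'-'Z'/'a'-'z' onto 0..51; the exact ordering is an assumption.
constexpr int alpha_index(char c)
{
    return c >= 'a' ? (c - 'a') + 26 : (c - 'A');
}

// First layer: 52 cumulative "number" values, one per possible first
// character, making the first step O(1) instead of a linear scan.
extern uint16_t const first_layer[52];

// Second layer: per first character, a 52-bit mask of which second
// characters exist, plus an offset into the packed second-layer node
// array. (In the real data these presumably pack into the 8 bytes per
// entry that the 52 * 8 = 416 figure implies; they are separate fields
// here for clarity.)
struct SecondLayerLink {
    uint64_t mask;
    uint32_t offset;
};
extern SecondLayerLink const second_layer_links[52];

// Returns the index of the matching second-layer node, or -1 if there
// is no edge for this (first, second) character pair.
int second_layer_node_index(char first, char second)
{
    auto const& link = second_layer_links[alpha_index(first)];
    uint64_t bit = uint64_t(1) << alpha_index(second);
    if (!(link.mask & bit))
        return -1;
    // The number of set bits below ours is our rank among the nodes
    // belonging to this first character; adding the offset yields the
    // index into the packed array.
    return int(link.offset + std::popcount(link.mask & (bit - 1)));
}
```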
The new data size is 24,724 bytes, up from 24,412 bytes (+312 total:
-104 from the 52 first layer nodes going from 4 bytes to 2 bytes, and
+416 from the addition of the 'first to second layer link' data).
In terms of raw matching speed (outside the context of the tokenizer),
this provides about a 1.72x speedup.
In very named-character-reference-heavy tokenizer benchmarks, this
provides about a 1.05x speedup (the effect of named character reference
matching speed is diluted when benchmarking the tokenizer).
Additionally, fixes the size of the named character reference data when
targeting Windows.
The `shorthands_for_longhand`, `longhands_for_shorthand`, and
`expanded_longhands_for_shorthand` methods can be pretty hot in
profiles where we serialize a lot of CSS properties.
By returning a const reference to a static vector instead of allocating
and returning a new vector every time, we can avoid a decent amount of
work.
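A minimal sketch of the pattern, with hypothetical names and std types
standing in for the real ones:

```cpp
#include <cstddef>
#include <vector>

enum class PropertyID : int; // stand-in for the real enum
std::vector<std::vector<PropertyID>> build_longhand_table(); // hypothetical

// Built once on first use; callers borrow the vector instead of
// receiving a freshly allocated copy on every call.
std::vector<PropertyID> const& longhands_for_shorthand(PropertyID shorthand)
{
    static auto const table = build_longhand_table();
    return table[static_cast<std::size_t>(shorthand)];
}
```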
Overall runtime for the particularly serialization-heavy
wpt.live/css/cssom/cssom-getPropertyValue-common-checks.html
decreased by ~20% comparing before and after this change.
We often want to identify a property, but if we have a PropertyID we
don't want to have to convert it to a string to then convert it back
again. However, custom properties don't have a useful PropertyID. So,
here's a type with a verbose name.
To support this, how we declare logical property aliases has changed.
Instead of `logical-alias-for` being a list of properties, it's now an
object with a `group` and `mapping`. The group is the name of a logical
property group in LogicalPropertyGroups.json. The mapping is which
side/dimension/corner this property is. Hopefully it's self-explanatory
enough.
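As an illustration, a declaration might look something like this (the
property, group name, and mapping keyword are made up for the example):

```json
"margin-block-start": {
  "logical-alias-for": {
    "group": "margin",
    "mapping": "block-start"
  }
}
```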
The generated code is very much a copy of what was previously in
`StyleComputer::map_logical_alias_to_physical_property_id()`, so there
should be no behaviour change.
For simplicity, this requires that the setlike Foo class has a
`void on_set_modified_from_js(Badge<Bindings::FooPrototype>)` method.
This will be called after the set is modified from a generated `add()`,
`delete()`, or `clear()` method.
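A minimal sketch, assuming a setlike interface named Foo (the base
class shown is illustrative):

```cpp
// The generated FooPrototype calls this hook after its add(), delete(),
// or clear() implementation has mutated the underlying set; the Badge
// ensures only the generated bindings can call it.
class Foo final : public Bindings::PlatformObject {
public:
    void on_set_modified_from_js(Badge<Bindings::FooPrototype>)
    {
        // React to the set having been modified from JS.
    }
};
```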
This copies the latest generated code in-tree and then removes code
generation for the WebGL rendering contexts. This is because the
generator didn't add much value, and we can maintain the generated
output instead of maintaining both the output and the generator itself.
The primary purpose of these is to add bounds checking to older OpenGL
API calls that take arbitrarily sized buffers, but don't know the size
of the buffer and thus rely on the application being certain the buffer
is large enough.
Since these API calls are exposed to arbitrary JS which can make
arbitrarily sized buffers, it is not safe to use the non-robust
variants, as we cannot know the size of the buffer ahead of time, nor
the amount of data required by the API call.
The robust variants provided by ANGLE add a buffer size parameter:
ANGLE calculates the amount of data it needs for that API call
and returns an error if it's bigger than the given buffer size.
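To illustrate the difference (the prototype is paraphrased from the
GL_ANGLE_robust_client_memory extension; treat the exact signature as
an assumption):

```cpp
// Non-robust: the driver trusts that `data` is large enough for
// whatever the query writes, so an undersized JS-controlled buffer can
// be overflowed.
glGetFloatv(pname, data);

// Robust variant: takes the buffer size up front and reports how many
// values were actually written; it generates an error instead of
// writing past `buf_size`.
glGetFloatvRobustANGLE(pname, buf_size, &length, data);
```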
Credit to https://github.com/s41nt0l3xus for finding this during a CTF
and providing a write-up that exploits this.
See: 92efbaed6c/gpnctf-2025/WebGL-bird
Add OffscreenCanvas to TexImageSource and CanvasImageSource.
Implement all the necessary features to make it work in all cases where
these types are used.
This implements the basic interface, classes and functions for
OffscreenCanvas. Many are still stubbed out and full of FIXMEs, but it
is a basic skeleton.
Previously we would incorrectly map these in
`CSSStyleProperties::convert_declarations_to_specified_order`. Aside
from being too early (it meant we didn't maintain them as distinct
from their physical counterparts in CSSStyleProperties), this also
meant that we didn't yet have the required context to map them
correctly.
We now map them as part of the cascade process. To compute the mapping
context, we do a cascade without mapping and extract the relevant
properties (writing-mode and direction).
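Roughly like this; all of the function and type names below are
hypothetical:

```cpp
// First pass: cascade without mapping logical aliases, purely to learn
// the properties the mapping depends on.
auto unmapped = cascade_style(element, MapLogicalAliases::No, {});

LogicalMappingContext context {
    .writing_mode = unmapped.writing_mode(),
    .direction = unmapped.direction(),
};

// Second pass: cascade again, now mapping each logical alias onto its
// physical property using the computed context.
auto style = cascade_style(element, MapLogicalAliases::Yes, context);
```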
Any optional or nullable attribute will end up in an `if/else` branch
when we collect the attribute values, and this is inherently
incompatible with an `auto {attr}_wrapped = ...` expression. Define the
variable as a `JS::Value` before generating the wrap statement so we
can properly support `toJSON()` for an attribute like:
readonly attribute double? altitude;
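The generated code then takes roughly this shape (illustrative, not the
exact generator output):

```cpp
// Declared up front so both branches can assign into it.
JS::Value altitude_wrapped;
if (altitude.has_value())
    altitude_wrapped = JS::Value(altitude.value());
else
    altitude_wrapped = JS::js_null();
```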
The "longhands" array is populated in the code generator to avoid the
overhead of manually maintaining the list in Properties.json.
There is one subtest that still fails in
'cssstyledeclaration-csstext-all-shorthand'; this is related to
us not maintaining the relative order of CSS declarations for custom vs
non-custom properties.
When serializing CSS declarations we now support combining multiple
properties into a single shorthand property in some cases.
This comes with a healthy dose of FIXMEs, including work to be done
around supporting:
- Nested shorthands (e.g. background, border, etc)
- Shorthands which aren't represented by the ShorthandStyleValue type
- Subproperties pending substitution
This gains us a bunch of new test passes, both for WPT and in-tree.
The spec has a general rule for this, which is roughly that "If it's not
a falsy value, it's true". However, a couple of media-features are
always false, apparently breaking this rule. To handle that, we have an
array of false keywords in the JSON, instead of a single keyword. For
those always-false media-features, we can enter all their values into
this array.
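In the JSON this might look something like the following; the key names
and the chosen media-feature are illustrative, not the actual schema:

```json
"prefers-reduced-motion": {
  "values": ["no-preference", "reduce"],
  "false-keywords": ["no-preference"]
}
```

For an always-false media-feature, every value would simply be listed
in the false-keywords array.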
Gets us 2 more WPT subtest passes.
Which has an optimization when both strings being compared are
FlyStrings; this actually ends up being the case in some places during
selector matching when comparing attribute names.
Instead of maintaining more overloads of
Infra::is_ascii_case_insensitive_match, switch everything over to
equals_ignoring_ascii_case.