This adds a new IDL type, Utf16DOMString. This is the same as DOMString,
except it is UTF-16. This type is temporary: we will want DOMString to
be UTF-16 by default once we've ported enough of LibWeb.
To make this easier to support, some string handling in the IDL
generator is moved from the call sites directly into
`generate_to_string`.
flatpak-builder doesn't respect .gitignore when creating its local
build directory, so we need to explicitly skip potentially large
ignored directories to avoid bloating it during builds.
This uses a `foo>bar` notation in the `valid-identifiers` field of
Properties.json to mean "replace `foo` with `bar`".
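A hypothetical sketch of how a generator might split such an entry
(purely illustrative; the actual generator code is not shown here):

```cpp
#include <string>
#include <utility>

// Splits "foo>bar" into { "foo", "bar" }; a plain "foo" maps to itself.
static std::pair<std::string, std::string> split_identifier_mapping(std::string const& entry)
{
    auto separator = entry.find('>');
    if (separator == std::string::npos)
        return { entry, entry };
    return { entry.substr(0, separator), entry.substr(separator + 1) };
}
```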
The motivation here is to avoid calling `parse_css_value_for_property()`
inside the per-property switch in `parse_css_value()`. Eventually we'll
need to be able to call that switch from
`parse_css_value_for_properties()` so that shorthands can make use of
any bespoke parsing code to parse their longhands.
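A toy sketch of the intended call structure (the two function names come
from this message; everything else, including the signatures, is
hypothetical):

```cpp
#include <optional>
#include <span>
#include <string>

enum class PropertyID { MarginTop, MarginRight, MarginBottom, MarginLeft };
using StyleValue = std::string; // stand-in for the real style value type

// Owns the bespoke per-property switch.
static std::optional<StyleValue> parse_css_value(PropertyID property_id, std::string const& input)
{
    switch (property_id) {
    // ...bespoke parsing code per property lives here...
    default:
        return input.empty() ? std::nullopt : std::optional<StyleValue> { input };
    }
}

// Shorthands hand each candidate longhand to the bespoke switch above.
static std::optional<StyleValue> parse_css_value_for_properties(
    std::span<PropertyID const> longhands, std::string const& input)
{
    for (auto longhand : longhands) {
        if (auto value = parse_css_value(longhand, input))
            return value;
    }
    return std::nullopt;
}
```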
We now clean up installed helper tools, includes from dependencies, and
pkgconfig/CMake files. This decreases the size of the flatpak from
~206 MiB to ~150 MiB on my machine.
The manifest is also now mostly free of warnings from the
flatpak-builder manifest linter, with the exception of the overly broad
session bus policy.
The docs at https://docs.flathub.org/docs/for-app-authors/linter
describe a method for selecting the correct session bus policies, but
it is unclear how to actually determine the full set.
On FreeBSD, some symbols like `environ` or `__progname` are not
exported anywhere and are filled in by the dynamic loader. `environ` is
a special case: because we make use of it explicitly, we need to mark
it as a weak symbol so the linker doesn't complain.
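A minimal sketch of the weak declaration (the exact spelling used in
the codebase may differ):

```cpp
// With the weak attribute, the static linker no longer requires a definition
// at link time; on FreeBSD the dynamic loader fills `environ` in at load time.
extern "C" char** environ __attribute__((weak));
```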
Some shorthand properties work differently from normal ones, in that
the mapping of provided values to longhands isn't necessarily 1-to-1
and depends on the number of values provided; examples include
`margin`, `border-width`, and `gap`.
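For instance, `margin` uses the classic 1-to-4-value box expansion.
A minimal runnable sketch (illustrative, not Ladybird's
implementation):

```cpp
#include <array>
#include <cassert>
#include <span>
#include <string>

// Expands the 1-4 provided values of a box shorthand like `margin` to its
// four longhands, in { top, right, bottom, left } order.
std::array<std::string, 4> expand_box_shorthand(std::span<std::string const> values)
{
    assert(!values.empty() && values.size() <= 4);
    switch (values.size()) {
    case 1: return { values[0], values[0], values[0], values[0] };
    case 2: return { values[0], values[1], values[0], values[1] };
    case 3: return { values[0], values[1], values[2], values[1] };
    default: return { values[0], values[1], values[2], values[3] };
    }
}
```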
These properties have distinct behaviors in how they are parsed and
serialized; marking them allows us to implement these behaviors in a
generic way.
No functionality changes.
Our current implementation of structured serialization has a design
flaw: if the serialized/transferred type was not used in the
destination realm, it would not be seen as exposed, and thus we would
not re-create the type on the other side.
This is very common; for example, transferring a MessagePort to a
just-inserted iframe, or the just-inserted iframe transferring a
MessagePort to its parent. This is what Google reCAPTCHA does.
This flaw occurred due to relying on lazily populated HashMaps of
constructors, namespaces and interfaces. This commit changes it so that
per-type "is exposed" implementations are generated.
Since it no longer relies on interface name strings, this commit
changes serializable types to indicate their type with an enum, in
line with how transferable types indicate their type.
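A conceptual sketch of the flaw and the fix, using toy types (names,
fields, and the exposure set are illustrative, not the actual generated
code):

```cpp
#include <string>
#include <unordered_map>

// Before: exposure was answered from a lazily populated map, which is empty
// in a realm where the type has never been used.
struct Realm {
    std::unordered_map<std::string, bool> cached_constructors; // filled on first use
};

static bool is_exposed_lazy(Realm const& realm, std::string const& interface_name)
{
    // A just-inserted iframe's realm has no "MessagePort" entry yet, so
    // deserialization wrongly concludes the type is not exposed.
    return realm.cached_constructors.contains(interface_name);
}

// After: a per-type check generated from the IDL, independent of what the
// realm has instantiated so far.
enum class GlobalKind { Window, Worker };

static bool is_message_port_exposed(GlobalKind global)
{
    return global == GlobalKind::Window || global == GlobalKind::Worker;
}

// Serialized records are likewise tagged with an enum rather than an
// interface-name string (enumerator names hypothetical).
enum class SerializeType { Blob, File /* , ... */ };
```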
This makes Google reCAPTCHA work on https://www.google.com/recaptcha/api2/demo
It currently doesn't work on non-Google origins due to a separate
same-origin policy bug.
Introduces a few ad-hoc modifications to the DAFSA, aimed at increasing
performance while keeping the data size small.
- The 'first layer' of nodes is extracted out and replaced with a lookup
table. This turns the search for the first character from O(n) into
O(1), and doesn't increase the data size, because all first characters
in the set of named character references are in 'a'-'z'/'A'-'Z', so a
lookup array of exactly 52 elements can be used. The lookup table
stores the cumulative "number" fields that would be calculated by a
linear scan matching a given node, thus allowing the unique index to be
built up as normal with an O(1) search instead of a linear scan (see
the sketch after this list).
- The 'second layer' of nodes is also extracted out, and searches of the
second layer are done using a bit field of 52 bits (the set bits of the
bit field depend on the first character's value), where each set bit
corresponds to one of 'a'-'z'/'A'-'Z' (like the first layer, the second
layer can only contain ASCII alphabetic characters). The bit field is
then re-used (along with an offset) to get the index into the array of
second-layer nodes; this is also shown in the sketch after this list.
This technique allows storing the minimum number of nodes in the second
layer, and therefore increases the data size only by the size of the
'first to second layer link' info, which is 52 * 8 = 416 bytes.
- After the second layer, the rest of the data is stored using a
mostly-normal DAFSA, but there are still a few differences:
- The "number" field is cumulative, in the same way that the
first/second layer store a cumulative "number" field. This cuts
down slightly on the amount of work done during the search of a
list of children, and we can get away with it because the
cumulative "number" fields of the remaining nodes in the DAFSA
(after the first and second layer nodes were extracted out) happens
to require few enough bits that we can store the cumulative version
while staying under our 32-bit budget.
- Instead of storing a 'last sibling' flag to denote the end of a
list of children, the length of each node's list of children is
stored. Again, this is mostly done just because there are enough
bits available to do so while keeping the DAFSA node within 32
bits.
- Note: Together, these modifications open up the possibility of
using a binary search instead of a linear search over the
children, but due to the consistently small lengths of the lists
of children in the remaining DAFSA, a linear search actually seems
to be the better option.
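A minimal sketch of the first-layer lookup table and the second-layer
bit-field indexing described in the first two bullets (field widths,
packing, and names are guesses consistent with the sizes quoted here,
not the actual generated data):

```cpp
#include <bit>
#include <cstddef>
#include <cstdint>
#include <optional>

// Maps 'a'-'z' to 0-25 and 'A'-'Z' to 26-51; all first and second characters
// of named character references are ASCII alphabetic, so 52 slots suffice.
static std::optional<std::size_t> alphabet_index(char c)
{
    if (c >= 'a' && c <= 'z')
        return c - 'a';
    if (c >= 'A' && c <= 'Z')
        return (c - 'A') + 26;
    return std::nullopt;
}

// First layer: one 2-byte cumulative "number" per first character.
static uint16_t const first_layer_cumulative_numbers[52] = { /* generated */ };

// First-to-second-layer links: a 52-bit presence mask plus a 12-bit offset
// into the packed second-layer node array, here packed into 8 bytes per
// entry (52 entries * 8 bytes = the 416 bytes quoted above).
static uint64_t const first_to_second_layer_links[52] = { /* generated */ };

static constexpr uint64_t mask_of(uint64_t link) { return link & ((uint64_t { 1 } << 52) - 1); }
static constexpr uint32_t offset_of(uint64_t link) { return static_cast<uint32_t>(link >> 52); }

// O(1) second-layer lookup: the popcount of the mask bits below ours gives
// our position within this first character's group, so only second-layer
// nodes that actually exist need to be stored.
static std::optional<std::size_t> second_layer_node_index(char first, char second)
{
    auto first_index = alphabet_index(first);
    auto second_index = alphabet_index(second);
    if (!first_index.has_value() || !second_index.has_value())
        return std::nullopt;
    uint64_t link = first_to_second_layer_links[*first_index];
    uint64_t bit = uint64_t { 1 } << *second_index;
    if (!(mask_of(link) & bit))
        return std::nullopt;
    return offset_of(link) + std::popcount(mask_of(link) & (bit - 1));
}
```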
The new data size is 24,724 bytes, up from 24,412 bytes (+312 total:
-104 from the 52 first-layer nodes going from 4 bytes to 2 bytes, and
+416 from the addition of the 'first to second layer link' data).
In terms of raw matching speed (outside the context of the tokenizer),
this provides about a 1.72x speedup.
In very named-character-reference-heavy tokenizer benchmarks, this
provides about a 1.05x speedup (the effect of named character reference
matching speed is diluted when benchmarking the tokenizer).
Additionally, this fixes the size of the named character reference data
when targeting Windows.
The `shorthands_for_longhand`, `longhands_for_shorthand`, and
`expanded_longhands_for_shorthand` methods can be pretty hot in
profiles where we serialize a lot of CSS properties.
By returning a const reference to a static vector, instead of
allocating and returning a new vector every time, we can avoid a decent
amount of work.
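A minimal sketch of the pattern with toy names (Ladybird's actual types
and property lists differ):

```cpp
#include <vector>

enum class PropertyID { Margin, MarginTop, MarginRight, MarginBottom, MarginLeft };

// The vector is built once on first call and handed out by const reference,
// so callers share one allocation instead of receiving a fresh copy per call.
static std::vector<PropertyID> const& longhands_for_shorthand(PropertyID shorthand)
{
    switch (shorthand) {
    case PropertyID::Margin: {
        static std::vector<PropertyID> const longhands {
            PropertyID::MarginTop, PropertyID::MarginRight,
            PropertyID::MarginBottom, PropertyID::MarginLeft
        };
        return longhands;
    }
    default: {
        static std::vector<PropertyID> const empty;
        return empty;
    }
    }
}
```

Function-local statics are initialized thread-safely on first use in
C++11 and later, so concurrent readers are fine as long as nobody
mutates the shared vectors.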
Overall runtime for the particularly serialization-heavy
wpt.live/css/cssom/cssom-getPropertyValue-common-checks.html decreased
by ~20% comparing before and after this change.