Most locations already make this assumption, but we had a few that would
silently ignore data mismatches. Let's just always assume the type is
correct for now. If a bad actor has a hold of our transport socket, it's
probably better to crash WebContent than to continue on with incorrect
data.
In the future, maybe we will want to throw an exception instead.
Now that these serializers are internal to StructuredSerialize.cpp,
let's put them above Serializer so they don't have to be forward-
declared and explicitly instantiated.
Our structured serialization implementation had its own bespoke encoder
and decoder to serialize JS values. It also used a u32 buffer under the
hood, which made using its structures a bit awkward. We had previously
worked around its data structures in transferable streams, which nested
transfers of MessagePort instances. We basically had to add hooks into
the MessagePort to route to the correct transfer receiving steps, and
we could not invoke the correct AOs directly as the spec dictates.
We now use IPC mechanics to encode and decode data. This works because,
although we are encoding JS values, we are only ultimately encoding
primitive and basic AK types. The resulting data structures actually
enforce that we implement transferable streams exactly as the spec is
worded (I had planned to do that in a separate commit, but the fallout
of this patch actually required that change).
No need to manually prepare / clean up a context. We also previously
would not have done the clean up steps if structured deserialization
threw an exception.
Instead of every branch being of the form:
    if (value.is_object() && is<SomeType>(value.as_object())) {
        auto& some_type = static_cast<SomeType&>(value.as_object());
    }
Let's extract the `is_object` check to an outer branch, and use `as_if`
to check the type.
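For example (a sketch only; SomeType stands in for the concrete types,
as above):

    if (value.is_object()) {
        auto& object = value.as_object();

        if (auto* some_type = as_if<SomeType>(object)) {
            // ...
        }
    }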
No functional change, but this makes a future change simpler to review.
More things need this than just the `<path>` element, so let's avoid
having to include `SVGPathElement.h` in places that don't need it.
Minor changes at the same time:
- Wrap it in a Path class
- Specify underlying type for PathInstructionType
- Make a couple of free functions into methods
- Give PathInstruction an operator==
No functionality changes.
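A rough sketch of the resulting shape (illustrative only; the exact
names and members may differ):

    enum class PathInstructionType : u8 {
        Move,
        ClosePath,
        Line,
        // ...
    };

    struct PathInstruction {
        PathInstructionType type;
        Vector<float> data;

        bool operator==(PathInstruction const&) const = default;
    };

    class Path {
    public:
        // ...the former free functions become methods here...

    private:
        Vector<PathInstruction> m_instructions;
    };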
Note that it's not actually executing tasks in parallel; it's still
throwing them on the HTML event loop task queue, each with its own
unique task source.
This makes our fetch implementation a lot more robust when HTTP caching
is enabled, and you can now click links on https://terminal.shop/
without hitting TODO assertions in fetch.
`CSSColorValue`s with unresolved `calc` components should be
resolvable. Previously we would always resolve them, but with incorrect
values.
This is useful as we will now be able to know whether we should
serialize colors in their normalized form or not.
Slight regression in that we now serialize (RGB, HSL and HWB) colors
with components that rely on compute-time information as an empty
string, but that will be fixed in the next commit.
Our current implementation of structured serialization has a design
flaw, where if the serialized/transferred type was not used in the
destination realm, it would not be seen as exposed and thus we would
not re-create the type on the other side.
This is very common: for example, transferring a MessagePort to a
just-inserted iframe, or the just-inserted iframe transferring a
MessagePort to its parent. This is what Google reCAPTCHA does.
This flaw occurred due to relying on lazily populated HashMaps of
constructors, namespaces and interfaces. This commit changes it so that
per-type "is exposed" implementations are generated.
Since it no longer relies on interface name strings, this commit
changes serializable types to indicate their type with an enum,
in line with how transferable types indicate their type.
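Roughly, the shape of the change is (names are illustrative; the real
definitions are emitted by the bindings generator):

    enum class TransferType : u8 {
        MessagePort,
        // ...one entry per transferable interface
    };

    enum class SerializeType : u8 {
        Blob,
        File,
        // ...one entry per serializable interface
    };

plus a generated per-interface "is exposed" check that consults the
destination realm directly, rather than a lazily populated map keyed on
interface name strings.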
This makes Google reCAPTCHA work on https://www.google.com/recaptcha/api2/demo
It currently doesn't work on non-Google origins due to a separate
same-origin policy bug.
If the MessagePort is not entangled, then m_transport is null, meaning
it's not valid to read from the transport.
This fixes the cross-piping streams WPT crash test crashing in the
upcoming commit.
Introduces a few ad-hoc modifications to the DAFSA aimed at increasing
performance while keeping the data size small.
- The 'first layer' of nodes is extracted out and replaced with a lookup
table. This turns the search for the first character from O(n) to
O(1), and doesn't increase the data size because all first characters
in the set of named character references have the
values 'a'-'z'/'A'-'Z', so a lookup array of exactly 52 elements can
be used. The lookup table stores the cumulative "number" fields that
would be calculated by a linear scan that matches a given node, thus
allowing the unique index to be built-up as normal with a O(1) search
instead of a linear scan.
- The 'second layer' of nodes is also extracted out and searches of the
second layer are done using a bit field of 52 bits (the set bits of
the bit field depend on the first character's value), where each set
bit corresponds to one of 'a'-'z'/'A'-'Z' (similar to the first
layer, the second layer can only contain ASCII alphabetic
characters). The bit field is then re-used (along with an offset) to
get the index into the array of second layer nodes (see the sketch
after this list). This technique
ultimately allows for storing the minimum number of nodes in the
second layer, and therefore only increasing the size of the data by
the size of the 'first to second layer link' info which is 52 * 8 =
416 bytes.
- After the second layer, the rest of the data is stored using a
mostly-normal DAFSA, but there are still a few differences:
- The "number" field is cumulative, in the same way that the
first/second layer store a cumulative "number" field. This cuts
down slightly on the amount of work done during the search of a
list of children, and we can get away with it because the
cumulative "number" fields of the remaining nodes in the DAFSA
(after the first and second layer nodes were extracted out) happen
to require few enough bits that we can store the cumulative version
while staying under our 32-bit budget.
- Instead of storing a 'last sibling' flag to denote the end of a
list of children, the length of each node's list of children is
stored. Again, this is mostly done just because there are enough
bits available to do so while keeping the DAFSA node within 32
bits.
- Note: Together, these modifications open up the possibility of
using a binary search instead of a linear search over the
children, but due to the consistently small lengths of the lists
of children in the remaining DAFSA, a linear search actually seems
to be the better option.
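As a generic illustration of the first-layer lookup and the
second-layer bit field technique (a sketch of the idea only; all names
and field layouts here are assumptions, not the actual generated data):

    // Map 'a'-'z' / 'A'-'Z' to 0-51.
    static int alpha_index(char c)
    {
        if (c >= 'a' && c <= 'z')
            return c - 'a';
        if (c >= 'A' && c <= 'Z')
            return 26 + (c - 'A');
        return -1;
    }

    struct FirstLayerEntry {
        u64 second_layer_bits;   // which second characters can follow this first character
        u16 second_layer_offset; // start of this entry's slice of the second-layer node array
        u16 cumulative_number;   // "number" accumulated as if by a linear scan
    };

    // O(1) lookup of a second-layer node instead of a linear scan of siblings.
    static Optional<size_t> find_second_layer_node(FirstLayerEntry const& entry, char c)
    {
        int index = alpha_index(c);
        if (index < 0)
            return {};
        u64 bit = 1ull << index;
        if (!(entry.second_layer_bits & bit))
            return {};
        // The rank of the set bit (number of set bits below it) gives the
        // position within this entry's slice of second-layer nodes.
        auto rank = popcount(entry.second_layer_bits & (bit - 1));
        return entry.second_layer_offset + rank;
    }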
The new data size is 24,724 bytes, up from 24,412 bytes (+312, -104 from
the 52 first layer nodes going from 4-bytes to 2-bytes, and +416 from
the addition of the 'first to second layer link' data).
In terms of raw matching speed (outside the context of the tokenizer),
this provides about a 1.72x speedup.
In very named-character-reference-heavy tokenizer benchmarks, this
provides about a 1.05x speedup (the effect of named character reference
matching speed is diluted when benchmarking the tokenizer).
Additionally, fixes the size of the named character reference data when
targeting Windows.
Recently, we moved the backing store manager into Navigable, which means
we now try to allocate a backing store for all navigables, including
those corresponding to SVG image documents. This change disables that
behavior for all navigables except top-level non-SVG traversables,
because otherwise it causes issues when we stop repainting: the browser
process was notified about an allocated backing store that does not
correspond to the page, and then all subsequent repaints are ignored
until the window is resized.
We forgot to implement a couple of "otherwise," statements from the
"populating a session history entry" spec. While we're here, let's
update the spec copy where relevant.
A "namespace prefix map", see:
https://w3c.github.io/DOM-Parsing/#the-namespace-prefix-map
Is meant to also hold null namespaces:
> where namespaceURI values are the map's unique keys
> (which can include the null value representing no namespace)
which we previously neglected. This resulted in a crash in the
updated WPT test.
The included WPT test passes -1, which wraps around to u32 max at the
IDL conversion layer, resulting in an unsigned overflow when checking
bounds.
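To illustrate the failure mode (a generic sketch; the parameter names
are placeholders, not the actual code):

    // WebIDL conversion of -1 to "unsigned long" wraps modulo 2^32,
    // so offset arrives here as 4294967295.
    static bool is_in_bounds(u32 offset, u32 length, u32 buffer_size)
    {
        // A naive "offset + length > buffer_size" check can itself wrap
        // around and wrongly report the range as in bounds; this
        // formulation avoids the overflow.
        return offset <= buffer_size && length <= buffer_size - offset;
    }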
Now that the Skia backend context is available by the time backing
stores are allocated, there is no need for a separate BackingStore
class. This allows us to get rid of the BackingStore -> PaintingSurface
cache.