Our structured serialization implementation had its own bespoke encoder
and decoder to serialize JS values. It also used a u32 buffer under the
hood, which made its data structures a bit awkward to use. We had
previously worked around those data structures in transferable streams,
which involve nested transfers of MessagePort instances. We essentially
had to add hooks into MessagePort to route to the correct
transfer-receiving steps, and we could not invoke the correct AOs
directly as the spec dictates.
We now use IPC mechanics to encode and decode data. This works because,
although we are encoding JS values, we are only ultimately encoding
primitive and basic AK types. The resulting data structures actually
enforce that we implement transferable streams exactly as the spec is
worded (I had planned to do that in a separate commit, but the fallout
of this patch actually required that change).
Our current implementation of structured serialization has a design
flaw: if the serialized/transferred type was not used in the
destination realm, it would not be seen as exposed, and thus we would
not re-create the type on the other side.
This is very common: for example, transferring a MessagePort to a
just-inserted iframe, or the just-inserted iframe transferring a
MessagePort back to its parent. This is what Google reCAPTCHA does.
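As an illustration, a minimal sketch of the pattern (the child URL and
message shape are hypothetical); the transferred port is what we
previously failed to re-create in the iframe's realm:
```js
// Transfer one end of a channel to a freshly inserted iframe whose
// realm has never used MessagePort.
const iframe = document.createElement("iframe");
iframe.src = "child.html"; // hypothetical child document
document.body.appendChild(iframe);

const channel = new MessageChannel();
iframe.addEventListener("load", () => {
    iframe.contentWindow.postMessage("port", "*", [channel.port2]);
});
channel.port1.onmessage = (event) => console.log(event.data);
```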
This flaw occurred due to relying on lazily populated HashMaps of
constructors, namespaces and interfaces. This commit changes it so that
per-type "is exposed" implementations are generated.
Since it no longer relies on interface name strings, this commit
changes serializable types to indicate their type with an enum,
in line with how transferable types indicate their type.
This makes Google reCAPTCHA work on https://www.google.com/recaptcha/api2/demo
It currently doesn't work on non-Google origins due to a separate
same-origin policy bug.
...when Array.prototype and Object.prototype are intact.
If `internal_set()` is called on an array exotic object with a numeric
PropertyKey, and:
- the prototype chain has not been modified (i.e., there are no getters
or setters for indexed properties), and
- the array is not the target of a Proxy object,
then we can directly store the value in the receiver's indexed
properties, without checking whether it already exists somewhere in the
prototype chain.
1.7x improvement on the following program:
```js
function f() {
    let a = [];
    let i = 0;
    while (i < 10_000_000) {
        a.push(i);
        i++;
    }
}
f();
```
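For contrast, a sketch of a program that disqualifies itself from the
fast path by installing an indexed accessor on the prototype chain:
```js
// Once the prototype chain has an indexed accessor, every indexed
// store must consult it, so the direct store is no longer valid.
Object.defineProperty(Array.prototype, 0, {
    set(value) {
        console.log("intercepted store of", value);
    },
});

const a = [];
a[0] = 42;             // invokes the inherited setter
console.log(a.length); // 0 - no own property was created
```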
This is *extremely* common on the web, but barely shows up at all in
JavaScript benchmarks.
A typical example is setting Element.innerHTML on an HTMLDivElement.
HTMLDivElement doesn't have an own innerHTML setter, so the lookup has
to travel up the prototype chain until it finds one.
Before this change, we didn't cache this at all, so we had to travel
the prototype chain every time a setter like this was used.
We now use the same mechanism we already had for GetById and cache
PutById setter accesses in the prototype chain as well.
1.74x speedup on MicroBench/setter-in-prototype-chain.js
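The cached pattern, in the spirit of that microbenchmark (a sketch,
not its actual source):
```js
class Base {
    set value(v) {
        this._value = v;
    }
}
class Derived extends Base {}

const d = new Derived();
// Each assignment is a PutById whose setter lives one level up the
// prototype chain; its location can now be cached after the first
// lookup instead of being re-resolved on every iteration.
for (let i = 0; i < 1_000_000; i++)
    d.value = i;
```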
For attributes like Element.ariaControlsElements, which are a reflection
of FrozenArray<Element>, we must return the same JS::Array object every
time the attribute is read - until its contents have changed. This
patch implements caching of the reflected array in accordance with the
spec.
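A sketch of the observable behavior (assuming the setter side of the
reflection is wired up as well):
```js
const button = document.createElement("button");
const panel = document.createElement("div");
document.body.append(button, panel);

button.ariaControlsElements = [panel];
const first = button.ariaControlsElements;
// Repeat reads return the same cached JS::Array object...
console.log(first === button.ariaControlsElements); // true
// ...but a fresh array is created once the contents change.
button.ariaControlsElements = [];
console.log(first === button.ariaControlsElements); // false
```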
Before this change, we were going through the chain of base classes for
each IDL interface object and having each base class set the prototype
to its own prototype.
Instead of doing that, reorder things so that we set the right prototype
immediately in Foo::initialize(), and then don't bother in all the base
class overrides.
This knocks off a ~1% profile item on Speedometer 3.
These callbacks are evaluated synchronously via JS::Call. We do not need
to construct an expensive RootVector container just to immediately
invoke the callbacks.
Stylistically, this also helps indicate where the actual arguments start
at the call sites, by wrapping the arguments in braces.
When we need the callback to return a promise, we can use this alternate
invoker to construct the WebIDL::Promise for us. Currently, the Streams
API will use WebIDL::invoke_callback to create a JS::Promise, and then
wrap that result in a resolved WebIDL::Promise. This results in rejected
JS::Promise instances not being propagated.
This adds support for async iterators of the form:

    async iterable<value_type>;
    async iterable<value_type>(/* arguments... */);

It does not yet support the value pairs of the form:

    async iterable<key_type, value_type>;
    async iterable<key_type, value_type>(/* arguments... */);
Async iterators have an optional `return` data property. There's not a
particularly good way to know what interfaces implement this property.
So this adds a new extended attribute, DefinesAsyncIteratorReturn, which
interfaces can use to declare their support.
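From script, such a declaration makes instances usable with
`for await`; a sketch against a hypothetical async-iterable instance
`foo`:
```js
// `for await` drives the generated async iterator. Breaking out of the
// loop early invokes the iterator's `return` method, when the
// interface opts in via [DefinesAsyncIteratorReturn].
async function drain(foo) {
    let count = 0;
    for await (const value of foo) {
        console.log(value);
        if (++count === 10)
            break; // triggers the iterator's `return`, if defined
    }
}
```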
Our existing implementation of stream piping was extremely ad-hoc. It
did nothing to handle closed/errored streams, and did not read from or
write to streams in the manner required by the spec.
This new implementation uses a custom JS::Cell to drive the read/write
loop.
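For reference, a minimal sketch of the operation this loop implements,
as seen from script:
```js
const readable = new ReadableStream({
    start(controller) {
        controller.enqueue("hello");
        controller.close();
    },
});
const writable = new WritableStream({
    write(chunk) {
        console.log("wrote", chunk);
    },
});
// The piping loop reads each chunk via a reader, writes it via a
// writer, and settles the returned promise on close or error.
await readable.pipeTo(writable);
```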
This adds formatters for `WebIDL::Exception`, `WebIDL::SimpleException`
and `WebIDL::DOMException`. These are useful for displaying the content
of errors when debugging.
`WebIDL::invoke_callback()` now takes an `exception_behavior`
parameter, which can be set to `report` to report an exception before
returning. Setting `exception_behavior` to `rethrow` retains the
previous behavior.
We are currently returning LibJS's invalid code point message, but not
formatting it with the bad value. So we get something like:

    Unhandled JavaScript exception: [TypeError] Invalid code point {},
    must be an integer no less than 0 and no greater than 0x10FFFF

So not only is the error unformatted, but it's inaccurate; in this case,
the byte cannot be larger than 255.
In line with the ShadowRealm proposal changes in the WebIDL spec
(webidl#1437) and the supporting changes in the HTML spec.
This is required for ShadowRealms, as there is no relevant settings
object on the shadow realm; it fixes a crash in the QueueingStrategy
test in this commit.
Resulting in a massive rename across almost the entire codebase!
Alongside the namespace change, we now have the following names:
* JS::NonnullGCPtr -> GC::Ref
* JS::GCPtr -> GC::Ptr
* JS::HeapFunction -> GC::Function
* JS::CellImpl -> GC::Cell
* JS::Handle -> GC::Root
The main motivation behind this is to remove JS specifics of the Realm
from the implementation of the Heap.
As a side effect, this change is a bit nicer to read than the previous
approach and, in my opinion, makes it a little clearer that this method
is specific to a JavaScript Realm.