"return" method is not defined on any of builtin iterators, so we could
skip it, avoiding method lookup.
I measured 10% improvement in array-destructuring-assignment.js micro
benchmark with this change.
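A minimal self-contained sketch of the idea (the types and the flag below
are illustrative, not the actual LibJS code): when the iterator is one of
the engine's built-in iterators, the "return" lookup can be skipped
outright.

    #include <optional>

    // Illustrative placeholder types, not the real LibJS classes.
    struct Value { };

    struct IteratorRecord {
        // Hypothetical flag, set when the iterator is engine-provided.
        bool iterator_is_builtin { false };
    };

    std::optional<Value> get_return_method(IteratorRecord const& iterator)
    {
        // None of the built-in iterators define "return", so the method
        // lookup can be skipped for them entirely.
        if (iterator.iterator_is_builtin)
            return std::nullopt;

        // Slow path: the usual GetMethod(iterator, "return") lookup would
        // go here; stubbed out in this sketch.
        return std::nullopt;
    }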
Expose a method on built-in iterators that allows retrieving the next
iteration result without allocating a JS::Object. This change is
preparation for optimizing for..of and for..in loops.
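As a rough sketch of the shape of such an interface (all names here are
assumptions, not the method that was actually added): built-in iterators
return a plain (value, done) pair, so callers that only need those two
fields never allocate an iterator result object.

    // Illustrative placeholder type.
    struct Value { };

    // Hypothetical plain-data result: no JS::Object allocation needed to
    // carry { value, done } back to the caller.
    struct IterationResult {
        Value value;
        bool done { false };
    };

    struct BuiltinIterator {
        // Hypothetical hook on built-in iterators: produce the next result
        // directly, bypassing the allocation of an iterator result object.
        virtual IterationResult next_iteration_result() = 0;
        virtual ~BuiltinIterator() = default;
    };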
This is a normative change in the ECMA-262 spec. See:
de62e8d
This did not actually seem to affect our implementation as we were not
using generators here to begin with. So this patch is basically just
adding spec comments.
This way it's always automatically correct, and we don't have to
manually flush it in push_execution_context().
~7% speedup on the MicroBench/call* tests :^)
Most of the time there are no queued promise jobs to run after exiting
a stack frame. By moving the check inline, leaving a function call gets
a measurable speedup in the common case.
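A minimal sketch of the shape of the change (the member and helper names
are assumptions, not the actual VM interface): the emptiness check is
inlined at the call site, and only the rare non-empty case pays for an
out-of-line call.

    #include <functional>
    #include <vector>

    // Illustrative stand-in for the VM and its promise job queue.
    struct VM {
        std::vector<std::function<void()>> queued_promise_jobs;

        // Common case: nothing is queued, so this boils down to one branch
        // that the compiler can inline at every call site.
        void run_queued_promise_jobs()
        {
            if (queued_promise_jobs.empty())
                return;
            run_queued_promise_jobs_slow();
        }

        // Rare slow path; in real code this would live out of line in a
        // .cpp file so only the emptiness check gets inlined.
        void run_queued_promise_jobs_slow()
        {
            while (!queued_promise_jobs.empty()) {
                auto job = std::move(queued_promise_jobs.front());
                queued_promise_jobs.erase(queued_promise_jobs.begin());
                job();
            }
        }
    };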
This concept is rarely used in the codebase and is error-prone if you
forget to check it.
Instead, make operations that would produce an invalid integer return
an error.
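The exact integer type involved isn't shown here; purely as an
illustration of the pattern, an operation that could produce an invalid
value reports failure through its return type instead of handing back a
value the caller has to remember to validate.

    #include <cstdint>
    #include <expected>

    enum class IntegerError {
        Overflow,
    };

    // Pattern sketch: failure is part of the return type, so a caller
    // cannot silently continue with an "invalid" integer it forgot to check.
    std::expected<int64_t, IntegerError> checked_add(int64_t a, int64_t b)
    {
        int64_t result = 0;
        if (__builtin_add_overflow(a, b, &result))
            return std::unexpected(IntegerError::Overflow);
        return result;
    }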
Instead of letting every [[Call]] implementation allocate an
ExecutionContext, we now make that the caller's responsibility.
The main point of this exercise is to allow the Call instruction
to write function arguments directly into the callee ExecutionContext
instead of copying them later.
This makes function calls significantly faster:
- 10-20% faster on micro-benchmarks (depending on argument count)
- 4% speedup on Kraken
- 2% speedup on Octane
- 5% speedup on JetStream
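A simplified sketch of the new calling convention (the types and function
names are illustrative, not the real LibJS interfaces): the caller builds
the ExecutionContext and writes the arguments straight into it, and
[[Call]] receives an already-populated context instead of an argument
list to copy.

    #include <cstddef>
    #include <vector>

    // Illustrative placeholder types.
    struct Value { };

    struct ExecutionContext {
        std::vector<Value> arguments;
    };

    struct FunctionObject {
        // [[Call]] no longer allocates a context; it gets one from the caller.
        virtual Value internal_call(ExecutionContext&, Value this_value) = 0;
        virtual ~FunctionObject() = default;
    };

    // Sketch of the Call instruction: argument values are written directly
    // into the callee's context, with no intermediate argument vector.
    Value do_call(FunctionObject& callee, Value this_value,
                  std::vector<Value> const& argument_values)
    {
        ExecutionContext context;
        context.arguments.resize(argument_values.size());
        for (size_t i = 0; i < argument_values.size(); ++i)
            context.arguments[i] = argument_values[i];
        return callee.internal_call(context, this_value);
    }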
Object defines an is_error virtual method to be overridden by Error for
fast-is. This is the same name as the Error.isError constructor method.
Rename the former to avoid conflicts, as GCC 15 just started warning on
this.
It turns out it was a mistake to make this a virtual, since
ServiceWorkerAgents are effectively the same as DedicatedWorkerAgents
and SharedWorkerAgents, just with [[CanBlock]] set to false.
This helps unwind a niggly dependency where the VM owns and constructs
the Heap and the Agent, but the Agent wants customized construction
that depends on the Heap. Solve this by deferring the initialization
of the Agent until after we have constructed the VM and the Heap.
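A rough sketch of the resulting construction order (everything beyond the
VM/Heap/Agent names is a placeholder): the VM and Heap are built first,
and the Agent is installed in a second step once the Heap it depends on
exists.

    #include <memory>

    // Illustrative placeholder types.
    struct Heap { };

    struct Agent {
        explicit Agent(Heap& heap)
            : heap(heap)
        {
            // Customized construction that needs a fully constructed Heap.
        }
        Heap& heap;
    };

    struct VM {
        Heap heap;
        std::unique_ptr<Agent> agent;

        // Step 2: once the VM (and its Heap) exist, install the Agent.
        void set_agent(std::unique_ptr<Agent> new_agent) { agent = std::move(new_agent); }
    };

    // Usage: construct the VM first, then hand it an Agent built against
    // the now-existing Heap.
    // auto vm = std::make_unique<VM>();
    // vm->set_agent(std::make_unique<Agent>(vm->heap));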
By doing that, we avoid a separate allocation for each such vector,
which was really expensive on JS-heavy websites. For example, this
change helps get EC allocation down from ~17% to ~2% on Google Maps.
This comes at the cost of extra complexity in the custom execution
context allocator, because an EC no longer has a fixed size and we
need to maintain a list of buckets.
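A compact sketch of what such a bucketed allocator can look like (this is
an illustration of the approach, not the actual allocator): freed contexts
are kept on per-size free lists and reused for later contexts of the same
rounded-up size.

    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    // Illustrative size-bucketed allocator: execution contexts no longer
    // have a fixed size, so freed blocks are binned by rounded-up size and
    // handed back out to later allocations of the same bucket.
    class ExecutionContextAllocator {
    public:
        void* allocate(size_t size)
        {
            auto& bucket = bucket_for(round_up(size));
            if (!bucket.empty()) {
                void* memory = bucket.back();
                bucket.pop_back();
                return memory;
            }
            return std::malloc(round_up(size));
        }

        void deallocate(void* memory, size_t size)
        {
            bucket_for(round_up(size)).push_back(memory);
        }

    private:
        static constexpr size_t bucket_granularity = 64;

        static size_t round_up(size_t size)
        {
            return (size + bucket_granularity - 1) / bucket_granularity * bucket_granularity;
        }

        std::vector<void*>& bucket_for(size_t size)
        {
            size_t index = size / bucket_granularity;
            if (m_buckets.size() <= index)
                m_buckets.resize(index + 1);
            return m_buckets[index];
        }

        std::vector<std::vector<void*>> m_buckets;
    };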
This is better because:
- Better data locality
- Allocate vector for registers+constants+locals+arguments in one go
instead of allocating two vectors separately
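An illustrative layout sketch (the field names are not the actual
ExecutionContext members): one contiguous buffer holds registers,
constants, locals and arguments, with the arguments exposed as a span
into its tail.

    #include <cstddef>
    #include <span>
    #include <vector>

    // Illustrative placeholder type.
    struct Value { };

    struct ExecutionContext {
        // One allocation covers registers, constants, locals and arguments,
        // which also keeps them adjacent in memory.
        std::vector<Value> registers_constants_locals_and_arguments;
        size_t arguments_offset { 0 }; // arguments live at the tail of the buffer
        size_t argument_count { 0 };

        std::span<Value> arguments()
        {
            return { registers_constants_locals_and_arguments.data() + arguments_offset,
                     argument_count };
        }
    };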
For the slight cost of counting code points when converting between
encodings, and a teeny bit of memory, this commit adds a fast path for
all-happy UTF-16 substrings and code point operations.
This seems to be a significant chunk of the time spent in many regex
benchmarks.
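A sketch of the kind of fast path this enables (the field names are
illustrative): if a UTF-16 string is known to contain no surrogate pairs,
code unit and code point indices coincide, so length and substring
operations become plain offset math.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Illustrative UTF-16 view that caches whether every code unit is its
    // own code point (i.e. the string contains no surrogate pairs).
    struct Utf16View {
        std::vector<uint16_t> code_units;
        bool all_single_code_unit_code_points { true }; // computed once, e.g. during conversion

        size_t length_in_code_points() const
        {
            if (all_single_code_unit_code_points)
                return code_units.size(); // fast path: 1 code unit == 1 code point
            return count_code_points_slow();
        }

        size_t count_code_points_slow() const
        {
            size_t count = 0;
            for (size_t i = 0; i < code_units.size(); ++i) {
                bool is_high_surrogate = code_units[i] >= 0xD800 && code_units[i] <= 0xDBFF;
                bool followed_by_low_surrogate = i + 1 < code_units.size()
                    && code_units[i + 1] >= 0xDC00 && code_units[i + 1] <= 0xDFFF;
                if (is_high_surrogate && followed_by_low_surrogate)
                    ++i; // a surrogate pair is a single code point
                ++count;
            }
            return count;
        }
    };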
Similar-origin window agents have the [[CanBlock]] flag set to false.
Achieve this by hooking up JS's concept of an agent to HTML::Agent.
For now, this is only hooked up for the similar-origin window agent
case, but it should be extended to the other agent types in the future.
Instead of using the more generic define_native_accessor() here,
we poke directly at indexed property storage for the parameter map.
We can also construct the NativeFunction objects directly, without
giving them names like "get 0" etc., since these are not observable
by userspace.
Finally, by using default property attributes (not observable anyway),
we get simple indexed storage instead of generic (hash map) storage.
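Purely as an illustration of the storage shape involved (these are not
the real LibJS classes): the unnamed getter/setter pairs go straight into
a dense, index-keyed vector rather than through the generic named-property
path.

    #include <cstddef>
    #include <functional>
    #include <vector>

    // Illustrative placeholder type.
    struct Value { };

    // Unnamed accessor pair; nothing here is observable from userspace,
    // so there is no need for names like "get 0" / "set 0".
    struct Accessor {
        std::function<Value()> getter;
        std::function<void(Value)> setter;
    };

    // Sketch of a parameter map backed by simple dense indexed storage
    // (index == parameter number) instead of a generic hash-map-backed
    // property table.
    struct ParameterMap {
        std::vector<Accessor> indexed_storage;

        void install(size_t index, Accessor accessor)
        {
            if (indexed_storage.size() <= index)
                indexed_storage.resize(index + 1);
            indexed_storage[index] = std::move(accessor);
        }
    };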
Instead, let JS::NativeFunction store the AK::Function directly, and
take care of conservatively marking its captured data.
This avoids an extra GC allocation for every JS::NativeFunction.
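A rough sketch of the ownership change (placeholder types, not the real
GC interface): the callable is stored inline in the function object, and
GC tracing hands its storage to a conservative scanner so captured GC
pointers stay alive, assuming the callable keeps its captures in inline
storage the way AK::Function does.

    #include <cstddef>
    #include <functional>

    // Illustrative placeholder types.
    struct Value { };

    struct GCVisitor {
        // Conservative scan: treat every pointer-sized word in the range as
        // a potential GC pointer (stubbed out in this sketch).
        void visit_possible_values(void const*, size_t) { }
    };

    struct NativeFunction {
        // The callable lives inline in the function object instead of in a
        // separately GC-allocated wrapper cell.
        std::function<Value()> native_function;

        void visit_edges(GCVisitor& visitor)
        {
            // Assumes captures are stored inline in the callable (as with
            // AK::Function); captures spilled to a separate heap allocation
            // would need that allocation scanned as well.
            visitor.visit_possible_values(&native_function, sizeof(native_function));
        }
    };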