Instead of SetVariable having 2x2 modes for variable/lexical and
initialize/set, those 4 modes are now separate instructions, which
makes each instruction much less branchy.
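As a rough illustration, the four modes correspond to source constructs
like the following (a sketch; the actual instruction names emitted by the
bytecode generator may differ):
let a = 1; // initialize a lexical binding
a = 2;     // set an existing lexical binding
var b = 1; // initialize a variable binding
b = 2;     // set an existing variable binding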
These were out-of-line because we had some ideas about marking
instruction streams PROT_READ-only, but that seems pretty arbitrary, and
there's a lot of performance to be gained by putting them inline.
Instead, generate bytecode to execute their AST nodes and save the
resulting operands inside the NewClass instruction.
Moving property expression evaluation to happen before NewClass
execution also means the new private environment is created and
populated with private members earlier (private members should be
visible during property evaluation).
Before:
- NewClass
After:
- CreatePrivateEnvironment
- AddPrivateName
- ...
- AddPrivateName
- NewClass
- LeavePrivateEnvironment
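For example (a sketch of the intent, not the exact codegen), a class like
the one below would now register its private names via AddPrivateName and
evaluate the computed key expression before NewClass runs, so the key is
evaluated with the class's private environment already in place:
const key = "computedName";
class C {
    #x = 1;
    #y = 2;
    [key]() {
        return this.#x + this.#y;
    }
}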
We already had fast access to own properties via shape-based IC.
This patch extends the mechanism to properties on the prototype chain,
using the "validity cell" technique from V8.
- Prototype objects now have a unique shape
- Each prototype has an associated PrototypeChainValidity
- When a prototype shape is mutated, every prototype shape "below" it
in any prototype chain is invalidated.
- Invalidation happens by marking the validity object as invalid,
and then replacing it with a new validity object.
- Property caches keep a pointer to the last-seen valid validity object.
If there is no validity object, or it has been invalidated, the cache
misses and gets repopulated.
This is very helpful when using JavaScript to access DOM objects,
as we frequently have to traverse 4+ prototype objects before finding
the property we're interested in on e.g. EventTarget or Node.
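A sketch of the access pattern this helps with and of how invalidation
behaves (illustrative only; the list above is the authoritative
description of the caching policy):
class A { ping() { return 1; } }
class B extends A {}
class C extends B {}
const obj = new C();
// The first lookup walks obj -> C.prototype -> B.prototype -> A.prototype.
// Later lookups can hit the property cache as long as the validity cell
// recorded for that chain is still valid.
for (let i = 0; i < 100_000; i++)
    obj.ping();
// Mutating a prototype invalidates the validity of prototype shapes
// "below" it, so the next lookup misses the cache and repopulates it.
B.prototype.ping = function () { return 2; };
obj.ping(); // 2, via the repopulated cache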
If a function has the following properties:
- uses only local variables and registers
- does not use `this`
- does not use `new.target`
- does not use `super`
- does not use direct eval() calls
then it is possible to skip function environment allocation entirely,
because the environment will never be used.
This change adds tracking of whether a function needs to access `this`
from its environment, and updates `prepare_for_ordinary_call()` to skip
allocation when possible.
For now, this optimization is blocked too aggressively; e.g. if `this`
is used in a function scope, then all functions in outer scopes have to
allocate an environment. It could be improved in the future, although
this implementation already allows skipping >80% of environment
allocations on Discord, GitHub and Twitter.
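To illustrate the conditions above (a sketch; whether a given function
actually qualifies depends on the analysis described here), the first
function below can run without a function environment, while the second
cannot:
// Only locals and registers; no this/new.target/super/eval, no captures.
function sum(n) {
    let total = 0;
    for (let i = 0; i < n; i++)
        total += i;
    return total;
}
// Needs an environment: the arrow function captures `this` and `x`.
function makeGetter(x) {
    return () => this.base + x;
}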
Before this change, both an ExecutionContext and a CallFrame were created
before executing a function/module/script, with a couple of exceptions:
- the executable created for default function argument evaluation has to
run in the function's execution context.
- `execute_ast_node()`, where the executable compiled for an ASTNode has
to be executed in the running execution context.
This change moves all members previously owned by CallFrame into
ExecutionContext, and makes two exceptions where an executable that does
not have a corresponding execution context saves and restores registers
before running.
Now, all execution state lives in a single entity, which makes it a bit
easier to reason about and opens opportunities for optimizations, such
as moving registers and local variables into a single array.
When a PutById / PutByValue bytecode operation results in accessing a
nullish object, we now include the name of the property and the object
being accessed in the exception message (if available). This should make
it easier to debug live websites.
For example, the following errors would all previously produce a generic
error message of "ToObject on null or undefined":
> foo = null
> foo.bar = 1
Uncaught exception:
[TypeError] Cannot access property "bar" on null object "foo"
at <unknown>
> foo = { bar: undefined }
> foo.bar.baz = 1
Uncaught exception:
[TypeError] Cannot access property "baz" on undefined object "foo.bar"
at <unknown>
Note we certainly don't capture all possible nullish property write
accesses here. This just covers cases I've seen most on live websites;
we can cover more cases as they arise.
When a GetById / GetByValue bytecode operation results in accessing a
nullish object, we now include the name of the property and the object
being accessed in the exception message (if available). This should make
it easier to debug live websites.
For example, the following errors would all previously produce a generic
error message of "ToObject on null or undefined":
> foo = null
> foo.bar
Uncaught exception:
[TypeError] Cannot access property "bar" on null object "foo"
at <unknown>
> foo = { bar: undefined }
> foo.bar.baz
Uncaught exception:
[TypeError] Cannot access property "baz" on undefined object "foo.bar"
at <unknown>
Note we certainly don't capture all possible nullish property read
accesses here. This just covers cases I've seen most on live websites;
we can cover more cases as they arise.
Since get() returns an empty Optional if the index is not present, we can
combine the separate presence check and lookup into a single get()
operation and save the cost of a virtual call.
Instead of looking these up in the VM execution context stack whenever
we need them, we now just cache them in the interpreter when entering
a new call frame.
The existing fast path only permits valid i32 values.
On https://cyxx.github.io/another_js, this eliminates the runtime of
typed_array_set_element, and reduces the runtime of put_by_value from
11.1% to 7.7%.
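For example (assuming, based on the description above, that the fast path
now also covers values that are not valid i32s):
const pixels = new Uint32Array(320 * 200);
for (let i = 0; i < pixels.length; i++) {
    // 0xff00ff00 does not fit in an i32, so stores like this previously
    // fell off the fast path into the generic put_by_value machinery.
    pixels[i] = 0xff00ff00;
}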
This patch moves us away from the accumulator-based bytecode format to
one with explicit source and destination registers.
The new format has multiple benefits:
- ~25% faster on the Kraken and Octane benchmarks :^)
- Fewer instructions to accomplish the same thing
- Much easier for humans to read(!)
Because this change requires a fundamental shift in how bytecode is
generated, it is quite comprehensive.
Main implementation mechanism: generate_bytecode() virtual function now
takes an optional "preferred dst" operand, which allows callers to
communicate when they have an operand that would be optimal for the
result to go into. It also returns an optional "actual dst" operand,
which is where the completion value (if any) of the AST node is stored
after the node has "executed".
One new thing of note: because instructions can now take locals as
operands, we were able to get rid of the GetLocal instruction.
A side-effect of that is we have to think about the temporal deadzone
(TDZ) a bit differently for locals (GetLocal would previously check
for empty values and interpret that as a TDZ access and throw).
We now insert special ThrowIfTDZ instructions in places where a local
access may be in the TDZ, to maintain the correct behavior.
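For instance (a sketch; the exact placement of ThrowIfTDZ is up to the
bytecode generator), a read of a local that can execute before its `let`
declaration must still throw a ReferenceError:
function f(cond) {
    if (cond)
        return x; // TDZ access if this runs before `let x` executes
    let x = 1;
    return x;
}
f(true); // throws ReferenceError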
There are a number of test262 progressions and regressions from this
change:
A number of async generator tests have been accidentally fixed while
converting the implementation to the new bytecode format. It didn't
seem useful to preserve bugs in the original code when converting it.
Some "does eval() return the correct completion value" tests have
regressed, in particular ones related to propagating the appropriate
completion after control flow statements like continue and break.
These are all fairly obscure issues, and I believe we can continue
working on them separately.
The net test262 result is a progression though. :^)
This commit un-deprecates DeprecatedString, and repurposes it as a byte
string.
As the null state has already been removed, there are no other
particularly hairy blockers in repurposing this type as a byte string
(what it _really_ is).
This commit is auto-generated:
$ xs=$(ack -l \bDeprecatedString\b\|deprecated_string AK Userland \
      Meta Ports Ladybird Tests Kernel)
$ perl -pie 's/\bDeprecatedString\b/ByteString/g;
      s/deprecated_string/byte_string/g' $xs
$ clang-format --style=file -i \
      $(git diff --name-only | grep \.cpp\|\.h)
$ gn format $(git ls-files '*.gn' '*.gni')
This patch makes IteratorRecord an Object. Although it's not exposed to
author code, this does allow us to store it in a VM register.
Now that we can store it in a VM register, we don't need to convert it
back and forth between IteratorRecord and Object when accessing it from
bytecode.
The big win here is avoiding 3 [[Get]] accesses on every iteration step
of for..of loops. There are also a bunch of smaller efficiencies gained.
20% speed-up on this microbenchmark:
function go(a) {
    for (const p of a) {
    }
}
const a = [];
a.length = 1_000_000;
go(a);
(Instead of MarkedVector<Value>.) This is a step towards not storing
argument lists in MarkedVector<Value> at all. Note that they still end
up in MarkedVectors since that's what ExecutionContext has.
This patch makes it possible for JS::Object::internal_set() to populate
a CacheablePropertyMetadata, and uses this to implement a basic
monomorphic cache for the most common form of property write access.
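A sketch of the kind of write site this cache targets (assuming the cache
keys on the object's shape at the property write site):
function fill(points) {
    for (let i = 0; i < points.length; i++) {
        const p = points[i];
        // Every `p` has the same shape ({ x, y }), so these property
        // writes stay monomorphic and can reuse the cached metadata.
        p.x = i;
        p.y = i * 2;
    }
}
fill(Array.from({ length: 10000 }, () => ({ x: 0, y: 0 })));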