1.25x speed-up on this microbenchmark:
let o = { get x() { return 1; } };
for (let i = 0; i < 10_000_000; ++i)
    o.x;
I looked into this because I noticed getter invocation showing up when
profiling long-running WPT tests. We already had the caching mechanism
for non-getter properties, and the change to support getters turned out
to be trivial.
Instead of re-symbolicating entire stacks from scratch every time
we want a JS VM backtrace, we now use the ExecutionContext object as
cache storage via a new CachedSourceRange object.
This means that once a stack frame has been symbolicated, we don't
have to symbolicate it again (unless the program counter moves
within that stack frame).
This drastically reduces time spent in symbolication in some WPT tests.
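As an illustration, a sketch of the kind of workload this helps, since
accessing Error.prototype.stack is the usual way to trigger
symbolication from JS (the function names here are made up):

function leaf() {
    return new Error().stack;
}
for (let i = 0; i < 100_000; ++i)
    leaf();

Each stack access used to symbolicate every frame from scratch; now
frames whose program counter hasn't moved (like the top-level script
frame here) reuse their CachedSourceRange.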
Loading Ladybird on GitHub results in 37 debug logs about being unable
to parse an empty Date string. This log is intended to catch Date
formats we do not support, to help detect web compatibility problems,
which makes the empty string not particularly useful to log.
Instead of trying to parse all of the different date formats and
logging that the string is not valid, let's just return NaN immediately.
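For reference, the behavior itself is unchanged and is just standard
ECMAScript:

Date.parse("");         // NaN
new Date("").getTime(); // NaN (an "Invalid Date")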
Instead of creating a unique new prototype shape every time a function
object is instantiated, we now keep one cached with the intrinsics.
This avoids a whole lot of shape allocations, reducing GC pressure.
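A sketch of the pattern this helps (a hypothetical microbenchmark, not
one from the commit): every function expression instantiated below gets
a fresh .prototype object, which previously meant a freshly allocated
shape each time:

for (let i = 0; i < 1_000_000; ++i) {
    const f = function () {};
    // f.prototype is a new object, but its shape is now the one
    // cached with the intrinsics rather than a unique allocation.
}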
This didn't make any sense, and was already handled by pushing a new
execution context anyway.
By simply removing these bogus lines of code, we fix a bug where
throwing inside a function whose bytecode was shorter than the calling
function would crash trying to generate an Error stack trace (because
the bytecode offset we were trying to symbolicate was actually from
the longer caller function, and not valid in the callee function).
This makes --log-all-js-exceptions less crash-prone and more helpful.
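Purely as an illustration (not a verbatim regression test), the
crashing shape looked roughly like this: a callee whose bytecode is
shorter than the caller's current bytecode offset, throwing while an
Error stack trace is being built:

function shortCallee() {
    throw new Error("oops"); // the caller's offset is out of range here
}
function longCaller() {
    let x = 0;
    x += 1; x += 2; x += 3; x += 4; // filler so the call site sits at
    x += 5; x += 6; x += 7; x += 8; // a large bytecode offset
    shortCallee();
}
try { longCaller(); } catch {}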
In some cases, we have a timestamp as a double in milliseconds. We
would then convert it to nanoseconds as a BigInt, just to bring it back
to a double for TZDB lookups. Add an overload to avoid this needless
round trip.
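In JavaScript terms, the old path was equivalent to this round trip
(the names are stand-ins; the real code is C++):

const epochMs = 1_700_000_000_123.0;                      // what we start with
const epochNs = BigInt(Math.trunc(epochMs)) * 1_000_000n; // needless detour
const msAgain = Number(epochNs) / 1e6;                    // what the TZDB lookup wanted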
Even though the underlying time zone is already cached by LibUnicode, JS
performs additional expensive lookups with that time zone. There's no
need to do those lookups again until the system time zone has changed.
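A sketch of the kind of code that used to repeat those lookups (any API
that formats local time qualifies; this is just one example):

for (let i = 0; i < 100_000; ++i)
    new Date().toString(); // needs the system time zone's offset and name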
Note that we can currently only use simdutf for Base64 decoding if the
provided lastChunkHandling option is "loose", which is the default.
There is an open issue for simdutf to implement the "strict" and
"stop-before-partial" options. Until then, for those options, we fall
back to a slow decoder that is written exactly as the spec steps
dictate.
See: https://github.com/simdutf/simdutf/issues/440
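For context, this is the lastChunkHandling option on
Uint8Array.fromBase64 from the ArrayBuffer base64 proposal:

Uint8Array.fromBase64("AAAA");                                  // "loose" (default): simdutf fast path
Uint8Array.fromBase64("AAAA", { lastChunkHandling: "strict" }); // spec-step slow path for now
Uint8Array.fromBase64("AAA", { lastChunkHandling: "stop-before-partial" }); // likewise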
- Expose table from console object (usage sketched below)
- Add new Table log level
- Create a JS object that represents table rows and columns
- Print table as HTML using WebContentConsoleClient
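A minimal usage example (standard console.table, nothing
Ladybird-specific):

console.table([
    { name: "Ladybird", language: "C++" },
    { name: "LibJS",    language: "C++" },
]);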
The Initialize* AOs for Intl formatters were removed some time ago, and
the formatter construction steps are now inlined in the constructors
themselves. InitializeNumberFormat was the one remaining initializer we
still had lying around.
This constructor has undergone a handful of editorial changes that we
fell behind on. But we weren't able to take the updates until now due to
a spec bug in those updates. See:
3f029b0
The result is that we can remove the inheritance of Intl::DateTimeFormat
from Unicode::DateTimeFormat; the former now contains the latter as an
internal slot.
The steps have been updated to indicate a few options that should be
defaulted based on locale preferences. Those steps have not yet been
implemented in this patch.
We were never able to implement anything other than a basic, locale-
unaware collator with the JSON export of the CLDR as it did not have
collation data. We can now use ICU to implement collation.
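For example, locale-aware ordering that the CLDR JSON export could not
support:

const words = ["zebra", "Äpfel", "apple"];
words.sort(new Intl.Collator("de").compare);
// "Äpfel" now sorts with the a-words instead of after "zebra" by code point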
This is a normative change in the Temporal proposal. See:
1104cad45462c4
Although our Temporal implementation is wildly out of date, this AO and
the changes to it are relied on in Intl.DurationFormat.
The typeof operator has a very small set of possible resulting strings,
so let's make it much faster by caching those strings on the VM.
~8x speed-up on this microbenchmark:
for (let i = 0; i < 10_000_000; ++i) {
    typeof i;
}
For performance, rather than slowly growing the capacity of the rope
string's buffer, compute an approximate length for the final string and
reserve that buffer capacity up front.
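A sketch of a workload where this matters; repeated concatenation is
the typical way rope strings arise, and resolving the rope at the end
is where the up-front reservation pays off:

let s = "";
for (let i = 0; i < 100_000; ++i)
    s += "abc";
s.length; // forces the rope to be flattened into a single buffer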
The JS::Error types all store their exception messages as a String, so
by passing a ByteString we hit the StringView constructor and end up
allocating the same string twice.