ParsedFontFace and FontLoader now both keep track of which
CSSStyleSheet (if any) was the source of the font-face, so the URLs can
be completed correctly.
This is the simplest fix I could find that resolves a buggy interaction
between this and the CSS fetch algorithms without regressing anything
(as far as I can tell).
Convert FontLoader to use fetch_a_style_resource(). ResourceLoader used
to keep its downloaded data around for us, but fetch doesn't, so we use
Gfx::Typeface::try_load_from_temporary_memory() to ensure the font has a
permanent copy of that data.
Typeface::try_load_from_externally_owned_memory() relies on that
external owner keeping the memory around. However, neither WOFF nor
WOFF2 does so - they both create separate ByteBuffers to hold the TTF
data. So, rename them to make it clearer that they don't have any
requirements on the byte owner.
Even though this code was already optimized to re-use a single result
object, returning { value, done } directly in output parameters still
provides a substantial speedup.
1.21x speedup on MicroBench/for-in-indexed-properties.js
Apply a little ensure_capacity() to avoid excessive rehashing of the
property key table when enumerating a large number of properties.
1.23x speedup on MicroBench/for-in-indexed-properties.js
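For context, here is a hedged sketch of the kind of workload
MicroBench/for-in-indexed-properties.js represents (the actual benchmark
source is not quoted here); both of the changes above target this shape
of loop:

```js
// Hypothetical sketch of a for..in workload over indexed properties;
// not the actual contents of MicroBench/for-in-indexed-properties.js.
const arr = new Array(1_000_000).fill(0);
let count = 0;
for (const key in arr) {
    // Each iteration yields one string property key ("0", "1", ...),
    // so both the iteration-result handling and the property key table
    // sit on the hot path.
    count++;
}
```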
Shared workers are essentially just workers that may be accessed from
scripts within the same origin. There are plenty of FIXMEs here (mostly
building on existing worker FIXMEs that are already in place), but this
lets us run the shared worker variants of WPT tests.
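For illustration, a minimal SharedWorker sketch of the pattern those WPT
variants exercise; the script name and message contents here are made up:

```js
// main.js — connect to a shared worker; "shared.js" is a hypothetical script name.
const worker = new SharedWorker("shared.js");
worker.port.start();
worker.port.postMessage("hello");
worker.port.onmessage = (event) => {
    console.log("reply from shared worker:", event.data);
};

// shared.js — every same-origin page that connects gets its own port.
onconnect = (event) => {
    const port = event.ports[0];
    port.onmessage = (e) => {
        port.postMessage(`echo: ${e.data}`);
    };
};
```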
We currently rely on UI-specific APIs to interact with the system
clipboard from AppKit and Qt. We do not have access to these from
headless-browser.
We should ultimately implement these APIs without relying on the UI as
a middle-man. For now, store a clipboard item so that we may exercise
the clipboard WPT tests.
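As an illustration, this is the shape of async clipboard API usage those
tests exercise (a hedged sketch, not taken from the tests themselves):

```js
// Write a clipboard item and read it back; in a headless run, the stored
// clipboard item stands in for the system clipboard.
async function roundTripClipboard() {
    await navigator.clipboard.writeText("hello from LibWeb");
    return navigator.clipboard.readText();
}

roundTripClipboard().then((text) => console.log(text));
```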
We currently have a single IPC to set clipboard data. We will also need
an IPC to retrieve that data from the UI. This defines system clipboard
data in LibWeb to handle this transfer, and adds the IPC to provide it.
Instead of going through String::formatted(), we now have a specialized
code path for base-10 serialization directly to UTF-8.
This is roughly 5-10x faster than the previous implementation, depending
on how many digits we end up outputting.
1.07x speedup on MicroBench/for-in-indexed-properties.js
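Why number serialization shows up in that benchmark: `for..in` over
indexed properties hands each index back as a string property key, so
every iteration performs a base-10 number-to-string conversion. A hedged
sketch (not the actual benchmark source):

```js
// Every own index of `arr` becomes a decimal string key ("0", "1", ...,
// "99999") as it is enumerated, hitting the base-10 serialization path
// on each iteration.
const arr = new Array(100_000).fill(1);
let keyLengths = 0;
for (const key in arr)
    keyLengths += key.length; // `key` is always a string
```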
"return" method is not defined on any of builtin iterators, so we could
skip it, avoiding method lookup.
I measured 10% improvement in array-destructuring-assignment.js micro
benchmark with this change.
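For context, a hedged sketch of the pattern that benchmark covers:
destructuring that stops before the iterable is exhausted runs
IteratorClose, which normally looks up a `return` method on the
iterator; the built-in array iterator has none, so that lookup can be
skipped.

```js
// Destructuring only the first element leaves the inner iterator
// unfinished, which triggers IteratorClose and (without the
// optimization) a lookup of "return" on the array iterator.
const pairs = [];
for (let i = 0; i < 1_000_000; i++)
    pairs.push([i, i * 2]);

let sum = 0;
for (const [first] of pairs)
    sum += first;
```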
...by avoiding `{ value, done }` iterator result value allocation. This
change applies the same optimization that 81b6a11 added for `for..in`
and `for..of`.
Makes the following micro-benchmark run 22% faster on my computer:
```js
function f() {
    const arr = [];
    for (let i = 0; i < 10_000_000; i++) {
        arr.push([i]);
    }
    let sum = 0;
    for (let [i] of arr) {
        sum += i;
    }
}
f();
```
The editing command that relies the most on this, `insertLinebreak`,
did not perform a layout update after inserting a `<br>`, which caused
this algorithm to always return false. Rather than building the layout
tree needlessly, we can check the DOM tree instead.
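For illustration, a hedged sketch of how that editing command is reached
from script; the contenteditable markup and selector are assumptions:

```js
// Place the caret inside an editable element and insert a line break,
// which ends up inserting a <br> that the algorithm then looks for in
// the DOM tree.
const editor = document.querySelector("[contenteditable]");
editor.focus();
document.execCommand("insertLineBreak");
```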
Before this change, IDAT data was mistakenly always included in the
animation. Now we only include frames with explicit fcTL chunks.
As per the PNG spec (third edition):
"The static image may be included as the first frame of the animation
by the presence of a single fcTL chunk before IDAT. Otherwise, the
static image is not part of the animation."
We also fall back to the IDAT data when an APNG has an acTL chunk but no
fcTL chunks. The test image is 062.png from fDAT-inherits-cICP.html in WPT.
Since commit 14ebcd4, the assertion in `WebSocketImplCurl::did_connect()`
keeps failing for multiple WebSockets when loading
`https://www.speedtest.net/`. Fix this by checking whether something went
wrong, returning false if so, and letting the caller handle it.
Introduce a special instruction for `for..of` and `for..in` loops that
skips the `{ value, done }` result object allocation if the iterator is
built-in (array, map, set, string). This reduces GC pressure
significantly and avoids extracting the `value` and `done` properties.
This change makes the following micro-benchmark 48% faster on my computer:
```js
const arr = new Array(10_000_000);
let counter = 0;
for (let _ of arr) {
    counter++;
}
```
Expose a method on built-in iterators that allows retrieving the next
iteration result without allocating a JS::Object. This change is
preparation for optimizing `for..of` and `for..in` loops.
This is a LibWeb special. We keep running into cases where we end up
with one or more Platform or event loop spin_until() calls on the stack
after the event loop has been cancelled and the WebContent process has
been asked to exit.
To keep the nonsense caused by processes exiting early from affecting
our other, more well-behaved processes, put this special logic in the
critical path of such Web-specific event loop spins.