The Windows flavor of non-blocking IO, overlapped IO, differs from
that on Linux. On Windows, the OS handles writing to the overlapped
buffer, while on Linux the user must do it manually.
Additionally, we can only have overlapped sockets, because that is a
requirement for being able to wait on them - WSAEventSelect
automatically sets the socket to nonblocking mode.
So we end up emulating Linux-nonblocking sockets with
Windows-nonblocking sockets.
The pending IO state (ERROR_IO_PENDING) must not escape the
read/write functions. If it does, all synchronization such as WSAPoll
and WaitForMultipleObjects stops working (WaitForMultipleObjects
stops working because with overlapped IO you are supposed to wait on
the event in the OVERLAPPED structure, while we wait on a WSA Event;
see EventLoopImplementationWindows.cpp).
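Roughly, the containment looks like this (a hypothetical wrapper
sketch, not the actual Ladybird code):

```cpp
#include <windows.h>

// Hypothetical wrapper: perform an overlapped read without ever
// letting ERROR_IO_PENDING escape to the caller.
static bool read_fully(HANDLE handle, void* buffer, DWORD size, DWORD& bytes_read)
{
    OVERLAPPED overlapped {};
    overlapped.hEvent = CreateEvent(nullptr, TRUE, FALSE, nullptr);

    if (!ReadFile(handle, buffer, size, nullptr, &overlapped)
        && GetLastError() != ERROR_IO_PENDING) {
        CloseHandle(overlapped.hEvent);
        return false; // a real error
    }

    // Resolve the pending state here (bWait = TRUE) instead of
    // leaking it to WSAPoll / WaitForMultipleObjects.
    bool ok = GetOverlappedResult(handle, &overlapped, &bytes_read, TRUE);
    CloseHandle(overlapped.hEvent);
    return ok;
}
```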
We were previously assuming that dictionary members were always
required when being returned.
This is a bit of a weird case: unlike _input_ dictionaries, which the
spec marks as required, 'result' dictionaries do not seem to be
marked as required in the spec IDL. This is still fine from the point
of view of how the spec is written, as it states that we should only
put values into the dictionary if they exist.
We could handle this through some metaprogramming constexpr type
checks. For example, if the type in our C++ representation is not an
Optional, we can skip the has_value check.
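For illustration, a minimal sketch of that rejected constexpr
approach, using std::optional and a stand-in output in place of the
generated dictionary code:

```cpp
#include <iostream>
#include <optional>
#include <string_view>
#include <type_traits>

template<typename T>
struct IsOptional : std::false_type { };
template<typename T>
struct IsOptional<std::optional<T>> : std::true_type { };

// Hypothetical generated helper: the has_value() check is only
// emitted when the member's C++ representation is an Optional.
template<typename T>
void put_member(std::string_view key, T const& value)
{
    if constexpr (IsOptional<T>::value) {
        if (!value.has_value())
            return; // absent member: leave it out of the dictionary
        std::cout << key << " = " << *value << '\n';
    } else {
        std::cout << key << " = " << value << '\n'; // always present
    }
}

int main()
{
    put_member("protocol", std::optional<std::string_view> {}); // skipped
    put_member("port", 8080);                                   // written
}
```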
Instead of doing that, change the IDL of the result dictionaries to
annotate these members so that the IDL generator knows this
information up front. While in all current cases the members are
either all returned or all omitted, it is conceivable that the spec
could have a situation where one member is always returned (and
should be marked as required) while the others are returned
optionally. Therefore, the new GenerateAsRequired attribute is
applied to each individual member.
This is the return value of a URLPattern after `exec` is called on it.
It conveys information about the named (or unnamed) regex groups
matched for each component of the URL. For example,
```
const pattern = new URLPattern({ hostname: "{:subdomain.}*example.com" });
const result = pattern.exec({ hostname: "foo.bar.example.com" });
console.log(result.hostname.groups.subdomain);
```
This will log 'foo.bar'.
We don't yet support a proxy configuration, but we can still validate
the capability received from the WebDriver client. We should also fail
to create a WebDriver session if a proxy configuration is present.
We currently define our custom WebDriver capabilities with a dictionary
of the form:
"serenity:ladybird": {
"headless": true
}
This patch flattens the configuration, such that each Ladybird option
will be its own capability. This matches how Firefox configures its
own options with geckodriver. So we now have:
"ladybird:headless": true
When a BackgroundAction completes, it resolves a Promise (stored on the
BackgroundAction object) with a reference to itself. The Promise will
never unset this resolved value, thus it will hold a strong reference to
the BackgroundAction until it is destroyed. But because the Promise is
owned by the BackgroundAction itself, we have a reference cycle, and
neither object can be destroyed.
The only user of BackgroundAction is the ImageDecoder process. The
consequence was that the ImageDecoder process would never release any
image data for successfully decoded images.
To fix this, instead of storing the promise on the class itself, we can
just create it as a local variable and pass it around.
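A self-contained sketch of the cycle, with std::shared_ptr standing
in for the project's ref-counted types (names hypothetical):

```cpp
#include <memory>

struct BackgroundAction;

struct Promise {
    // Once resolved, this value is never unset.
    std::shared_ptr<BackgroundAction> resolved_value;
};

struct BackgroundAction {
    // The action owns its promise...
    std::shared_ptr<Promise> promise;
};

int main()
{
    auto action = std::make_shared<BackgroundAction>();
    action->promise = std::make_shared<Promise>();
    // ...and resolving the promise with the action itself closes the
    // cycle: action -> promise -> action, so neither is destroyed.
    action->promise->resolved_value = action;

    // The fix: create the promise as a local and pass it around, so
    // the action never owns the promise that holds it alive.
}
```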
We have to be careful to always destroy the jpeglib decompression struct
before returning from JPEGLoadingContext::decode. We were doing this in
jpeglib error handlers, but we have a couple of paths that bail from the
decoder via TRY. These paths were neither cleaning up memory nor setting
the image decoder to an error state.
So this patch sets up a scope guard to ensure we free the decompressor
upon exit from the function. And it delegates the responsibility of
setting the decoder state to the caller (of which there is only one),
to ensure all error paths result in an error state.
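A minimal sketch of the pattern, with a hand-rolled RAII guard
standing in for the project's scope guard helper:

```cpp
#include <stdio.h>
#include <jpeglib.h>

// Hand-rolled RAII guard: runs its callback on scope exit.
template<typename F>
struct ScopeGuard {
    F on_exit;
    ~ScopeGuard() { on_exit(); }
};
template<typename F>
ScopeGuard(F) -> ScopeGuard<F>;

static void decode_sketch(jpeg_decompress_struct& cinfo)
{
    // Runs on every path out of the function, including the early
    // TRY-based bail-outs mentioned above.
    ScopeGuard guard { [&] { jpeg_destroy_decompress(&cinfo); } };

    // ... jpeg_read_header / jpeg_start_decompress / scanline loop ...
}
```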
Moves the pseudo-class matching helpers into Element methods, so they
don't have to be duplicated between SelectorEngine and the function
that checks whether an element is included in an invalidation set.
This was an old hack from before we understood how and when to resolve
percentages in flex layout. Removing it should not change anything,
but it does avoid a lot of redundant layout work on many pages.
The current implementation of `:has()` style invalidation is divided
into two cases:
- When used in subject position (e.g., `.a:has(.b)`).
- When in a non-subject position (e.g., `.a > .b:has(.c)`).
This change focuses on improving the first case. For non-subject usage,
we still perform a full tree traversal and invalidate all elements
affected by the `:has()` pseudo-class invalidation set.
We already optimize subject `:has()` invalidations by limiting
invalidated elements to ones that were tested against `:has()` selectors
during selector matching. However, selectors like `div:has(.a)`
currently cause every div element in the document to be invalidated.
By modifying the invalidation traversal to consider only ancestor nodes
(and, optionally, their siblings), we can drastically reduce the number
of invalidated elements for broad selectors like the example above.
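A sketch of the narrowed traversal (all types and flags here are
hypothetical; the real code also consults the invalidation sets):

```cpp
struct Element {
    Element* parent { nullptr };
    Element* previous_sibling { nullptr };
    Element* next_sibling { nullptr };
    bool tested_against_has { false }; // set during selector matching
    bool needs_style_update { false };
};

// Visit only the mutated element's ancestors, and optionally their
// siblings, instead of the whole document.
static void invalidate_for_subject_has(Element& changed, bool include_siblings)
{
    for (auto* ancestor = changed.parent; ancestor; ancestor = ancestor->parent) {
        if (ancestor->tested_against_has)
            ancestor->needs_style_update = true;
        if (!include_siblings)
            continue;
        for (auto* sibling = ancestor->previous_sibling; sibling; sibling = sibling->previous_sibling) {
            if (sibling->tested_against_has)
                sibling->needs_style_update = true;
        }
        for (auto* sibling = ancestor->next_sibling; sibling; sibling = sibling->next_sibling) {
            if (sibling->tested_against_has)
                sibling->needs_style_update = true;
        }
    }
}
```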
On Discord, when scrolling through message history, this change
reduces the number of invalidated elements from ~1k to ~5.
This fixes a crash in the included test that regressed in 0adf261,
and is hit by the following HTML:
```html
<body></body>
<script>
const frame = document.body.appendChild(document.createElement("iframe"));
frame.contentDocument.open();
const child = frame.contentDocument.createElement("html");
const html = frame.contentDocument.appendChild(child);
frame.contentDocument.close();
</script>
```
I am not 100% sure this is fully the correct fix, and there may be
other cases that still do not work properly. But it's definitely an
improvement to make the confusingly named 'insert_an_eof' function of
the tokenizer actually do something.
Previously, if the user made a find-in-page query, then cleared the
selection made by that query, subsequent queries would inadvertently
advance to the next match instead of reselecting the first match.
The implementation was removed with the migration to ANGLE. This
reimplements it. This is required by Stimulation Clicker on neal.fun,
which does not clear the framebuffer itself, instead relying on the
browser doing it.
These properties are always substrings of the RegExp input string,
and so we can store them as views and lazily construct strings if
they're actually accessed (which most of the time they aren't).
This avoids a bunch of unnecessary memory copying, saving roughly
2.1 seconds per iteration of Speedometer.
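Roughly the shape of the change, sketched with std:: types (the
engine's actual string types differ):

```cpp
#include <optional>
#include <string>
#include <string_view>

class LazyMatchProperty {
public:
    explicit LazyMatchProperty(std::string_view substring)
        : m_view(substring)
    {
    }

    // Most callers never get here, so most matches never allocate.
    std::string const& string() const
    {
        if (!m_materialized)
            m_materialized = std::string { m_view };
        return *m_materialized;
    }

private:
    std::string_view m_view; // always a substring of the RegExp input
    mutable std::optional<std::string> m_materialized;
};
```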
Required by the server-side rendering mode of React Router, used by
https://chatgpt.com/
Note that the imported tests do not have the worker variants to prevent
freezing on macOS.
ReadLoop requests require the chunks to be Uint8Array objects;
however, TextEncoderStream requires a String (Convertible) value.
This is fixed by implementing read_all_chunks as a loop of
DefaultReader requests instead, which is an identity transformation.
This should be okay to do, as stream chunk steps expect a JS::Value
and convert it to the type they want.
Before this change, tasks associated with a destroyed document would get
stuck in the task queue forever, since document-associated tasks are not
allowed to run when their document isn't fully active (and destroyed
documents never become fully active again). This caused everything
captured by task callbacks to leak.
We now treat tasks for destroyed documents as runnable immediately,
which gets them out of the queue.
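A minimal sketch of the new runnability rule (types hypothetical):

```cpp
struct Document {
    bool destroyed { false };
    bool fully_active { false };
};

struct Task {
    Document* document { nullptr }; // null if not document-associated

    bool is_runnable() const
    {
        if (!document)
            return true;
        if (document->destroyed)
            return true; // new: runnable immediately, so the queue drops it
        return document->fully_active; // unchanged for live documents
    }
};
```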
This fixes another massive GC leak on Speedometer.
Before this change, Agent held on to all of the live MutationObserver
objects via GC::Root. This prevented them from ever getting
garbage-collected.
Instead of roots, we now use a simple IntrusiveList and remove them
from it in the finalizer for MutationObserver.
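Sketched with a std::list of raw pointers standing in for the
IntrusiveList (names hypothetical):

```cpp
#include <list>

struct MutationObserver;

struct Agent {
    std::list<MutationObserver*> mutation_observers; // non-owning
};

struct MutationObserver {
    Agent& agent;
    std::list<MutationObserver*>::iterator position;

    explicit MutationObserver(Agent& a)
        : agent(a)
        , position(agent.mutation_observers.insert(agent.mutation_observers.end(), this))
    {
    }

    // In the real code this unlinking happens in the GC finalizer.
    ~MutationObserver() { agent.mutation_observers.erase(position); }
};
```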
This fixes a massive GC leak on Speedometer.
f7a3f78 made the layout tree invalidate only the inserted nodes
themselves, but it turned out that CSS containment invalidation relies
on the parent being invalidated as well.