Performing a lookup in the blob URL registry does not work for web
workers, as the registry is not shared between processes. However, the
URL passed to a worker has the blob itself attached to it, which we can
pull out of the URL during a fetch.
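A rough sketch of the idea (resolve_blob_url, blob_url_entry(), and
blob_url_store() are illustrative names, not the actual API):

    // Resolving a blob: URL during a fetch.
    static Optional<NonnullGCPtr<FileAPI::Blob>> resolve_blob_url(URL const& url)
    {
        // Prefer the blob carried on the URL itself; this works even in a
        // worker process, where the registry lookup below would fail.
        if (auto entry = url.blob_url_entry(); entry.has_value())
            return entry->blob;

        // Fall back to the process-local blob URL registry.
        return FileAPI::blob_url_store().get(url.serialize());
    }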
There were several instances where the spec marks an AO invocation as
infallible, but we were propagating WebIDL::ExceptionOr anyway. Most of
these cannot throw in practice, because we know the values they are
given. By unwinding these, we can remove a decent amount of exception
handling.
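An illustrative before/after, using a hypothetical "append a header" AO:

    // Before: callers had to propagate a WebIDL::ExceptionOr that could
    // never actually carry an exception for the values we pass:
    //   WebIDL::ExceptionOr<void> HeaderList::append(Header);
    //   TRY(header_list->append(move(header)));

    // After: a plain void return, and the TRY()/ExceptionOr plumbing at
    // every call site disappears:
    //   void HeaderList::append(Header);
    //   header_list->append(move(header));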
This was resulting in a whole lot of rebuilding whenever a new IDL
interface was added.
Instead, just directly include the prototype header in every C++ file
that needs it. While we only really need a forward declaration in each
cpp file, including the full prototype header (which itself only
includes LibJS/Object.h, already transitively brought in by
PlatformObject) seems like a small price to pay compared to what feels
like a full rebuild of LibWeb whenever a new IDL file is added.
Given all of these includes are only needed for the ::initialize
method, there is probably a smarter way of avoiding this problem
altogether. I've considered using some macro trickery or generating
these functions somehow instead.
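Using Screen as a stand-in example (and assuming the
ensure_web_prototype helper), the per-file pattern now looks something
like:

    #include <LibWeb/Bindings/ScreenPrototype.h>

    void Screen::initialize(JS::Realm& realm)
    {
        Base::initialize(realm);
        set_prototype(&Bindings::ensure_web_prototype<Bindings::ScreenPrototype>(realm, "Screen"_fly_string));
    }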
Fetched bodies can be on the order of gigabytes, so rather than crashing
when we hit OOM here, we can simply invoke the error callback with a DOM
exception. We use "UnknownError" here as the spec directly supports this
for OOM errors:
UnknownError: The operation failed for an unknown transient reason
(e.g. out of memory).
This is still an ad-hoc implementation. We should be using streams, and
we do have the AOs available to do so. But they need to be massaged to
be compatible with callers of Body::fully_read. And once we do use
streams, this function will become infallible - so making it infallible
here is at least a step in the right direction.
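A sketch of the error path (process_body_error and the exact
DOMException helper are assumptions here):

    auto buffer_or_error = ByteBuffer::copy(body_bytes);
    if (buffer_or_error.is_error()) {
        // WebIDL: "UnknownError: The operation failed for an unknown
        // transient reason (e.g. out of memory)."
        process_body_error(WebIDL::UnknownError::create(realm, "Out of memory"_string));
        return;
    }
    process_body(buffer_or_error.release_value());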
The only subclass was already GC-allocated, so let's hoist the JS::Cell
inheritance up one level. This ends up simplifying a bit of rather
dubious-looking code where we were previously slicing ESOs (environment
settings objects).
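Schematically (Base and Subclass stand in for the real classes):

    // Before: only the subclass was a Cell, so storing or copying a
    // Subclass as a plain Base sliced away its GC parts.
    //   class Base { ... };
    //   class Subclass : public Base, public JS::Cell { ... };

    // After: the JS::Cell inheritance is hoisted into the base.
    //   class Base : public JS::Cell { ... };
    //   class Subclass final : public Base { ... };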
Changes the signature of queue_fetch_task() from AK::Function to
JS::HeapFunction, to make it clear to users of the function that this
is what it uses internally.
Changes the signature of queue_global_task() from AK::Function to
JS::HeapFunction, to make it clear to users of the function that this
is what it uses internally.
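Roughly, with other parameters elided (create_heap_function is LibJS's
helper for allocating a HeapFunction on the GC heap):

    // Before: the GC-aware wrapping happened behind the caller's back.
    //   void queue_fetch_task(..., Function<void()> algorithm);
    //   void queue_global_task(..., Function<void()> steps);

    // After: callers allocate the HeapFunction themselves, so the GC
    // ownership of the task steps is explicit at the call site:
    //   void queue_global_task(..., JS::NonnullGCPtr<JS::HeapFunction<void()>> steps);
    queue_global_task(HTML::Task::Source::Networking, global,
        JS::create_heap_function(vm.heap(), []() {
            // ... task steps ...
        }));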
Previously, calling fetch with a signal object provided by
`AbortSignal.timeout()` would cause a crash when the signal timed out.
We now push a `TemporaryExecutionContext` to the stack when we invoke
the signal's abort steps, as an execution context is required when
calling native functions.
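A sketch of the fix (relevant_settings_object and the member names are
assumptions):

    void AbortSignal::signal_abort(JS::Value reason)
    {
        // ... set the abort reason, etc. ...

        // The abort steps may invoke native functions, which require an
        // execution context on the VM stack.
        HTML::TemporaryExecutionContext execution_context { HTML::relevant_settings_object(*this) };
        for (auto& abort_algorithm : m_abort_algorithms)
            abort_algorithm();

        // ... fire the "abort" event ...
    }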
...and use HeapFunction instead of SafeFunction for task steps.
Since there is only one EventLoop per process, it lives as a global
handle in the VM custom data.
This makes it much easier to reason about lifetimes of tasks, task
steps, and random stuff captured by them.
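Retrieving the loop then looks something like this (member names
assumed):

    HTML::EventLoop& main_thread_event_loop()
    {
        auto& custom_data = verify_cast<Bindings::WebEngineCustomData>(*Bindings::main_thread_vm().custom_data());
        return *custom_data.event_loop;
    }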
We were off by one when returning the result of parsing a quoted string
in Web::Fetch::Infrastructure::collect_an_http_quoted_string. Instead of
backtracking the lexer and consuming the backtracked string, we now do a
simple substring operation.
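The shape of the change (lexer API approximate):

    // Before: rewind and re-consume, with fiddly length bookkeeping.
    //   lexer.retreat(consumed_length);
    //   auto value = lexer.consume(consumed_length);

    // After: slice directly between the recorded start position and the
    // lexer's current position.
    auto value = input.substring_view(start_index, lexer.tell() - start_index);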
Since the introduction of `AbortSignal::any()`, the specification says
`AbortSignal::create_dependent_abort_signal()` should be used where
`AbortSignal::follow` was used previously.
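An illustrative call-site change (the exact signatures may differ):

    // Before:
    //   signal->follow(realm, request_signal);

    // After:
    auto signal = TRY(AbortSignal::create_dependent_abort_signal(realm, { request_signal }));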
This is a fetching AO and is only used by LibWeb in the context of
fetch tasks. Move it into LibWeb alongside the other fetch methods.
The main reason for this is that it requires the use of other LibWeb AOs
such as the forgiving Base64 decoder and MIME sniffing. These AOs aren't
available within LibURL.
The HTMLMediaElement spec, for example, states that any ongoing fetch
process must be "stopped". The spec does not indicate how to do this, so
our implementation is rather ad-hoc.
Our current implementation may cause a crash in places that assume one
of the fetch algorithms that we set to null is *not* null. For example:
    if (fetch_params.process_response) {
        queue_fetch_task([&fetch_params]() {
            fetch_params.process_response();
        });
    }
If the fetch process is stopped after queuing the fetch task, but not
before the fetch task is run, we will crash when running this fetch
algorithm.
We now track queued fetch tasks on the fetch controller. When the fetch
process is stopped, we cancel any such pending task.
It is a little awkward to maintain a separate fetch task ID. Ideally,
we could use the underlying task ID throughout. But we do not have
access to the underlying task or its ID when the task is running, at
which point we need some ID to remove from the pending task list.
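A sketch of the bookkeeping (the member names and task-queue API here
are assumptions, not the exact implementation):

    void FetchController::queue_fetch_task(JS::Object& task_destination, JS::NonnullGCPtr<JS::HeapFunction<void()>> algorithm)
    {
        auto fetch_task_id = m_next_fetch_task_id++;
        auto task_id = HTML::queue_global_task(HTML::Task::Source::Networking, task_destination,
            JS::create_heap_function(heap(), [this, fetch_task_id, algorithm]() {
                // The task is running, so it is no longer pending.
                m_ongoing_fetch_tasks.remove(fetch_task_id);
                algorithm->function()();
            }));
        m_ongoing_fetch_tasks.set(fetch_task_id, task_id);
    }

    void FetchController::stop_fetch()
    {
        // ... null out the fetch algorithms, then cancel any still-pending
        // task so that a nulled algorithm is never invoked ...
        for (auto const& it : m_ongoing_fetch_tasks)
            HTML::main_thread_event_loop().task_queue().remove_task(it.value);
        m_ongoing_fetch_tasks.clear();
    }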
This URL library ends up being a relatively fundamental base library of
the system, as LibCore depends on LibURL.
This change has two main benefits:
* Moving AK back more towards being an agnostic library that can
  be shared between the kernel and userspace. URL has never really fit
  that description - and is not used in the kernel.
* URL _should_ depend on LibUnicode, as it needs punycode support.
However, it's not really possible to do this inside of AK as it can't
depend on any external library. This change brings us a little closer
to being able to do that, but unfortunately we aren't there quite
yet, as the code generators depend on LibCore.
This commit introduces a WEB_SET_PROTOTYPE_FOR_INTERFACE macro that
caches the interface name in a function-local static FlyString. This
means we only pay for the FlyString-from-literal conversion once per
browser lifetime, instead of every time the interface is instantiated.
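A sketch of the idea (not the verbatim macro): the function-local static
means the FlyString is constructed from the literal once, on first use.

    #define WEB_SET_PROTOTYPE_FOR_INTERFACE(interface_name)                  \
        do {                                                                  \
            static auto prototype_name = #interface_name##_fly_string;        \
            set_prototype(&Bindings::ensure_web_prototype<                     \
                Bindings::interface_name##Prototype>(realm, prototype_name));  \
        } while (0)

Call sites inside ::initialize then reduce to
WEB_SET_PROTOTYPE_FOR_INTERFACE(Screen);.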