Because of the previous awkward factoring of Origin we had two
implementations of Origin serialization and creation. Move the
implementation of DOMURL::url_origin into URL::origin, and instead use
the implementation of URL::Origin::serialize for serialization
(replacing URL::serialize_origin).
This happens to fix 8 URL subtests, as the two implementations had
diverged and URL::serialize_origin was missing the spec changes from
whatwg/url@eee49fd and whatwg/url@fff33c3.
While Origin is defined in the HTML spec, this leaves us with quite an
awkward relationship, as the URL spec makes use of AOs that are defined
in the HTML spec.
To simplify this factoring, relocate Origin into LibURL.
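For illustration, a minimal standalone sketch of the new call shape
(the structs below are stand-ins, not the actual LibURL types, and the
serialization body is deliberately simplified):

    #include <string>

    // Stand-ins for the LibURL types; only the call shape matters here.
    struct Origin {
        std::string scheme;
        std::string host;

        // Was URL::serialize_origin(url); the Origin now serializes itself.
        // (The real implementation follows the URL spec's origin
        // serialization, including ports and opaque origins.)
        std::string serialize() const { return scheme + "://" + host; }
    };

    struct URL {
        std::string scheme;
        std::string host;

        // Was DOMURL::url_origin(url); origin creation now lives on the URL.
        Origin origin() const { return { scheme, host }; }
    };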
This change causes HTTP status codes to be set on cached HTTP responses.
Without it, no status code at all is set on a cached response, so every
cached response defaults to being served with a 200 status code. As a
result, if the cached response came from a 30x redirect, loading it does
not follow the redirect, because we see a 200 status rather than the
original 30x.
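As a rough standalone sketch of the fix (plain C++ with made-up struct
and field names, not the actual RequestServer types), the key line is
copying the stored status onto the response served from the cache:

    #include <cstdint>
    #include <string>
    #include <utility>
    #include <vector>

    struct CachedResponse {
        std::uint32_t status_code = 0; // e.g. 301 for a cached redirect
        std::vector<std::pair<std::string, std::string>> headers;
        std::vector<std::uint8_t> body;
    };

    struct OutgoingResponse {
        std::uint32_t status_code = 200; // defaults to 200 if nobody sets it
        std::vector<std::uint8_t> body;
    };

    OutgoingResponse serve_from_cache(CachedResponse const& cached)
    {
        OutgoingResponse response;
        // The fix: propagate the cached status. Without this line a cached
        // 301/302 is replayed as a 200 and the redirect is never followed.
        response.status_code = cached.status_code;
        response.body = cached.body;
        return response;
    }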
Fixes https://github.com/LadybirdBrowser/ladybird/issues/863
Note that this change also reverts the temporary workaround added in
f735c464d3
(https://github.com/LadybirdBrowser/ladybird/pull/899).
If an HTTP 401 response does not contain a `WWW-Authenticate`
header, we should not trigger the logic to ask the user for credentials
and retry the request.
This part is hinted at in a TODO / 'Needs testing' remark in the spec
but needs to be fleshed out. Raised an upstream issue to do so:
https://github.com/whatwg/fetch/issues/1766
This fixes login forms triggering an infinite fetch loop when providing
incorrect credentials.
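Roughly, the new guard looks like the standalone sketch below (helper
names and the header representation are assumptions, not the actual
LibWeb fetch code):

    #include <cctype>
    #include <cstddef>
    #include <string>
    #include <utility>
    #include <vector>

    using HeaderList = std::vector<std::pair<std::string, std::string>>;

    static bool equals_ignoring_case(std::string const& a, std::string const& b)
    {
        if (a.size() != b.size())
            return false;
        for (std::size_t i = 0; i < a.size(); ++i) {
            if (std::tolower(static_cast<unsigned char>(a[i]))
                != std::tolower(static_cast<unsigned char>(b[i])))
                return false;
        }
        return true;
    }

    static bool has_header(HeaderList const& headers, std::string const& name)
    {
        for (auto const& header : headers) {
            if (equals_ignoring_case(header.first, name))
                return true;
        }
        return false;
    }

    // Only ask the user for credentials (and retry) when the 401 actually
    // carries a WWW-Authenticate challenge; otherwise there is nothing to
    // respond to and retrying would loop forever.
    bool should_prompt_for_credentials(unsigned status, HeaderList const& headers)
    {
        return status == 401 && has_header(headers, "WWW-Authenticate");
    }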
Co-Authored-By: Victor Tran <vicr12345@gmail.com>
They are now blocked on pages which:
- Don't have an opaque origin (should be only user-initiated or about:)
- Aren't other file: pages
- Aren't other resource: pages
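In rough standalone C++ (with assumed field names rather than the actual
LibWeb page/origin types), the check reads:

    #include <string>

    struct PageContext {
        std::string url_scheme;         // scheme of the page doing the load
        bool has_opaque_origin = false; // true for user-initiated / about: pages
    };

    // file: and resource: loads are blocked unless the requesting page is
    // itself one of the trusted cases listed above.
    bool should_block_local_load(PageContext const& page)
    {
        bool is_trusted = page.has_opaque_origin
            || page.url_scheme == "file"
            || page.url_scheme == "resource";
        return !is_trusted;
    }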
This patch adds a simple in-memory HTTP cache to each WebContent
process.
It's currently off by default (turn it on with --enable-http-cache)
since the validation logic is still incomplete.
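Conceptually the cache is just an in-memory map from a request key to a
stored response, along the lines of this standalone sketch (the key
shape and names are assumptions; the real cache still needs proper
freshness/validation logic before it can be enabled by default):

    #include <cstdint>
    #include <map>
    #include <optional>
    #include <string>
    #include <tuple>
    #include <utility>
    #include <vector>

    struct CacheKey {
        std::string method;
        std::string url;

        bool operator<(CacheKey const& other) const
        {
            return std::tie(method, url) < std::tie(other.method, other.url);
        }
    };

    struct CacheEntry {
        std::uint32_t status_code = 0;
        std::vector<std::pair<std::string, std::string>> headers;
        std::vector<std::uint8_t> body;
    };

    class HttpCache {
    public:
        std::optional<CacheEntry> get(CacheKey const& key) const
        {
            auto it = m_entries.find(key);
            if (it == m_entries.end())
                return std::nullopt;
            return it->second;
        }

        void put(CacheKey key, CacheEntry entry)
        {
            m_entries[std::move(key)] = std::move(entry);
        }

    private:
        // In-memory only, and per WebContent process: nothing is shared
        // between processes or persisted to disk.
        std::map<CacheKey, CacheEntry> m_entries;
    };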
Instead of using a HashMap<ByteString, ByteString, CaseInsensitive...>
everywhere, we now encapsulate this in a class.
Even better, the new class also allows keeping track of multiple headers
with the same name! This will make it possible for HTTP responses to
actually retain all their headers on the perilous journey from
RequestServer to LibWeb.
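A simplified standalone sketch of the idea (the real class uses AK
containers; the names here are only illustrative): keep an ordered list
of name/value pairs so duplicate header names survive, and answer
lookups case-insensitively.

    #include <cctype>
    #include <cstddef>
    #include <optional>
    #include <string>
    #include <utility>
    #include <vector>

    class HeaderMap {
    public:
        void set(std::string name, std::string value)
        {
            m_headers.emplace_back(std::move(name), std::move(value));
        }

        // First value for a name, compared case-insensitively.
        std::optional<std::string> get(std::string const& name) const
        {
            for (auto const& [header_name, value] : m_headers) {
                if (equals_ignoring_case(header_name, name))
                    return value;
            }
            return std::nullopt;
        }

        // Unlike a plain map, duplicates (e.g. several Set-Cookie headers)
        // are preserved and can all be retrieved.
        std::vector<std::string> get_all(std::string const& name) const
        {
            std::vector<std::string> values;
            for (auto const& [header_name, value] : m_headers) {
                if (equals_ignoring_case(header_name, name))
                    values.push_back(value);
            }
            return values;
        }

    private:
        static bool equals_ignoring_case(std::string const& a, std::string const& b)
        {
            if (a.size() != b.size())
                return false;
            for (std::size_t i = 0; i < a.size(); ++i) {
                if (std::tolower(static_cast<unsigned char>(a[i]))
                    != std::tolower(static_cast<unsigned char>(b[i])))
                    return false;
            }
            return true;
        }

        std::vector<std::pair<std::string, std::string>> m_headers;
    };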
Supporting unbuffered fetches is actually part of the fetch spec in its
HTTP-network-fetch algorithm. We had previously implemented this method
in a very ad-hoc manner as a simple wrapper around ResourceLoader. This
is still the case, but we now implement a good amount of these steps
according to spec, using ResourceLoader's unbuffered API. The response
data is forwarded through to the fetch response using streams.
This will eventually let us remove the use of ResourceLoader's buffered
API, as all responses should just be streamed this way. The streams spec
then supplies ways to wait for completion, thus allowing fully buffered
responses. However, we have more work to do to make the other parts of
our fetch implementation (namely, Body::fully_read) use streams before
we can do this.
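Heavily simplified into standalone C++ (the real code wires
ResourceLoader's unbuffered callbacks into the response's ReadableStream;
the callback and type names here are assumptions), the plumbing looks
like this: enqueue each network chunk into the stream as it arrives,
then close or error the stream on completion.

    #include <cstdint>
    #include <functional>
    #include <string>
    #include <vector>

    // Stand-in for the fetch response's body stream.
    struct BodyStream {
        std::function<void(std::vector<std::uint8_t> const&)> enqueue;
        std::function<void()> close;
        std::function<void(std::string const&)> error;
    };

    // Stand-in for ResourceLoader's unbuffered API: data chunks arrive as
    // the network delivers them, followed by a completion (or failure)
    // signal.
    struct UnbufferedRequest {
        std::function<void(std::vector<std::uint8_t> const&)> on_data_received;
        std::function<void(bool success)> on_complete;
    };

    // Forward network data into the response's stream as it arrives,
    // instead of buffering the whole body first.
    void wire_unbuffered_fetch(UnbufferedRequest& request, BodyStream stream)
    {
        request.on_data_received = [stream](std::vector<std::uint8_t> const& chunk) {
            stream.enqueue(chunk);
        };
        request.on_complete = [stream](bool success) {
            if (success)
                stream.close();
            else
                stream.error("network failure");
        };
    }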
This callback is meant to be triggered by streams, which do not always
provide a WebIDL::DOMException. Pass a plain value instead. Of all the
users of this callback, only one actually uses the value, and already
converts the DOMException to a plain value.
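In other words, the callback's parameter changes from a DOMException to
a plain value; a minimal standalone sketch of the shape (stand-in types,
not the actual WebIDL/JS declarations):

    #include <functional>
    #include <string>

    struct DOMExceptionLike { std::string name; std::string message; };
    struct PlainValue { /* opaque stand-in for a JS value */ };

    // Before: every caller had to wrap its error in a DOMException.
    using ErrorCallbackBefore = std::function<void(DOMExceptionLike)>;

    // After: streams pass along whatever plain value they already hold;
    // the one user that inspects the value was already converting the
    // DOMException to a plain value anyway.
    using ErrorCallbackAfter = std::function<void(PlainValue)>;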
Performing a lookup in the blob URL registry does not work in the case
of a web worker, as the registry is not shared between processes.
However, the URL passed to a worker has the blob attached to it, which
we can pull out of the URL when fetching.
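A rough standalone sketch of the fallback (struct and field names are
assumptions; in the real code the blob data rides along on the URL
object that was serialized over to the worker):

    #include <cstdint>
    #include <map>
    #include <optional>
    #include <string>
    #include <vector>

    struct BlobEntry {
        std::string type;                // MIME type
        std::vector<std::uint8_t> bytes; // the blob's data
    };

    struct BlobURL {
        std::string serialized;
        // Filled in when the URL is sent to a worker, so the receiving
        // process can resolve the blob without the originating registry.
        std::optional<BlobEntry> blob_entry;
    };

    using BlobRegistry = std::map<std::string, BlobEntry>;

    std::optional<BlobEntry> resolve_blob_url(BlobRegistry const& registry, BlobURL const& url)
    {
        // The registry only works within the process that created the
        // blob URL.
        if (auto it = registry.find(url.serialized); it != registry.end())
            return it->second;
        // In a worker, fall back to the entry carried by the URL itself.
        return url.blob_entry;
    }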
Fetched bodies can be on the order of gigabytes, so rather than crashing
when we hit OOM here, we can simply invoke the error callback with a DOM
exception. We use "UnknownError" here as the spec directly supports this
for OOM errors:
UnknownError: The operation failed for an unknown transient reason
(e.g. out of memory).
This is still an ad-hoc implementation. We should be using streams, and
we do have the AOs available to do so. But they need to be massaged to
be compatible with callers of Body::fully_read. And once we do use
streams, this function will become infallible - so making it infallible
here is at least a step in the right direction.
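A hedged standalone sketch of the error path (the real code reports a
WebIDL "UnknownError" DOMException through the fetch error callback
rather than using C++ exceptions): attempt the large copy, and on
allocation failure hand an error to the caller instead of crashing.

    #include <cstdint>
    #include <functional>
    #include <new>
    #include <string>
    #include <utility>
    #include <vector>

    struct DOMExceptionLike {
        std::string name;
        std::string message;
    };

    using SuccessCallback = std::function<void(std::vector<std::uint8_t>)>;
    using ErrorCallback = std::function<void(DOMExceptionLike)>;

    // Deliver a (possibly multi-gigabyte) body to a callback, reporting
    // OOM instead of crashing the process.
    void deliver_body(std::vector<std::uint8_t> const& body, SuccessCallback on_success, ErrorCallback on_error)
    {
        try {
            auto copy = body; // may throw std::bad_alloc for very large bodies
            on_success(std::move(copy));
        } catch (std::bad_alloc const&) {
            // "UnknownError: The operation failed for an unknown transient
            //  reason (e.g. out of memory)."
            on_error({ "UnknownError", "Out of memory while reading the fetched body" });
        }
    }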
Changes the signature of queue_fetch_task() from AK::Function to
JS::HeapFunction, to make it clearer to users of the function that this
is what it uses internally.
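Approximately (parameter list simplified and not copied from the actual
header), the declaration change looks like:

    // Before: the callback type hid the fact that it ends up stored in a
    // GC-aware wrapper.
    //   void queue_fetch_task(JS::Object& task_destination, Function<void()> algorithm);
    //
    // After: the signature states it directly.
    //   void queue_fetch_task(JS::Object& task_destination, JS::NonnullGCPtr<JS::HeapFunction<void()>> algorithm);
    //
    // HeapFunction keeps any GC-allocated captures visible to the garbage
    // collector, which is what the function was already relying on
    // internally.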