Aligning the name with the PR implementing the JavaScript
ShadowRealm proposal into the web platform. This commit
simply performs the rename before implementing the behaviour
change.
The actual behaviour change to the AO, supporting 'synthetic'
shadow realms, is not implemented in this commit, as the
surrounding infrastructure is not in place yet.
Not all specs have an MR open to align with this proposed change to the
HTML standard. But in this case we can just apply the same mechanical
change everywhere.
Reading the RFC9111 spec makes it clear that the stored response was
not intended to be cloned. This is because there is a "clone response"
operation that is used in other places, but never for stored responses.
Responses returned from `http_network_or_cache_fetch` were copied
directly from the cache, which is incorrect, since revalidation may
later modify the response, or even invalidate it, such as when the
`Access-Control-Allow-Origin` header is changed.
This fixes WPT test [wpt/cors/304.htm](http://wpt.live/cors/304.htm).
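To illustrate the distinction, a before/after sketch with hypothetical names (not the actual fetch code):

```cpp
// Hypothetical sketch; names are illustrative, not Ladybird's actual API.

// Before (incorrect): callers received a snapshot, so headers merged in by
// a later 304 revalidation (e.g. Access-Control-Allow-Origin) never
// reached them.
auto snapshot = stored_response->clone();

// After: callers receive the stored response itself, so revalidation's
// updates (or its invalidation) remain visible to them.
auto response = stored_response;
```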
This change also removes as much direct use of JS::Promise in LibWeb
as possible. When specs refer to `Promise<T>` they should be assumed
to be referring to the WebIDL Promise type, not the JS::Promise type.
The one exception is the HostPromiseRejectionTracker hook on the JS
VM. This facility and its associated sets and events are intended to
expose the exact opaque object handles that were rejected to author
code. This is not possible with the WebIDL Promise type, so we have
to use JS::Promise or JS::Object to hold onto the promises.
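As a sketch of the direction, assuming WebIDL promise helpers along the lines of LibWeb's WebIDL/Promise.h (treat the exact names as illustrative):

```cpp
// Sketch only; assumes helpers shaped like LibWeb's WebIDL promise AOs.
// Spec text such as "resolve promise with x" maps onto WebIDL promise
// operations rather than raw JS::Promise methods:
auto promise = WebIDL::create_promise(realm);
WebIDL::resolve_promise(realm, promise, JS::js_undefined());

// The exception: HostPromiseRejectionTracker must report the exact
// JS-level promise objects seen by author code, so it keeps JS::Promise
// handles rather than WebIDL promises.
```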
It also exposes which specs need some updates in the area of
promises. WebDriver stands out in this regard. WebAudio could use
some more cross-references to WebIDL as well to clarify things.
There was no need to use FlyString for error messages, and it just
caused a bunch of churn since these strings typically only existed
during the lifetime of the error.
Because of the previous awkward factoring of Origin we had two
implementations of Origin serialization and creation. Move the
implementation of DOMURL::url_origin into URL::origin, and
instead use the implementation of URL::Origin::serialize for
serialization (replacing URL::serialize_origin).
This happens to fix 8 URL subtests, as the two implementations had
diverged and URL::serialize_origin was previously missing the spec
changes from whatwg/url@eee49fd and whatwg/url@fff33c3.
Origin is defined in the HTML spec, which leaves us with quite an
awkward layering: the URL spec makes use of AOs that are defined in
the HTML spec.
To simplify this factoring, relocate Origin into LibURL.
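For reference, the URL spec's origin serialization that both call sites now share boils down to this shape (a minimal sketch using plain std::string and an illustrative Origin struct; the real code uses AK and LibURL types):

```cpp
#include <cstdint>
#include <optional>
#include <string>

// Illustrative stand-in for LibURL's actual Origin type.
struct Origin {
    bool is_opaque = false;
    std::string scheme;
    std::string host;
    std::optional<std::uint16_t> port;
};

// Minimal sketch of the URL spec's "serialization of an origin".
std::string serialize(Origin const& origin)
{
    // An opaque origin serializes to "null".
    if (origin.is_opaque)
        return "null";
    std::string result = origin.scheme + "://" + origin.host;
    if (origin.port.has_value())
        result += ":" + std::to_string(*origin.port);
    return result;
}
```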
This change causes HTTP status codes to be set on cached HTTP responses.
Previously, no status code at all was set on a cached response, so every
cached response defaulted to being loaded/served with a 200 status code.
As a result, if the cached response came from a 30x redirect, loading it
would not follow the redirect, because we saw a 200 status rather than
the expected/original 30x.
Fixes https://github.com/LadybirdBrowser/ladybird/issues/863
Note that this change also reverts the temporary workaround added in
f735c464d3
(https://github.com/LadybirdBrowser/ladybird/pull/899).
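The shape of the fix, with illustrative names rather than the actual cache code:

```cpp
// Illustrative sketch: propagate the stored status when serving from the
// cache, instead of leaving the response at its default of 200.
response->set_status(cached_entry.status_code);
// A cached 301/302 now reports its real status, so redirect handling in
// the fetch logic runs as expected.
```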
If an HTTP 401 response does not contain a `WWW-Authenticate`
header, we should not trigger the logic to ask the user for credentials
and retry the request.
This part is hinted at in a TODO / 'Needs testing' remark in the spec
but needs to be fleshed out. Raised an upstream issue to do so:
https://github.com/whatwg/fetch/issues/1766
This fixes login forms triggering an infinite fetch loop when providing
incorrect credentials.
Co-Authored-By: Victor Tran <vicr12345@gmail.com>
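A sketch of the added guard (hypothetical names; the real check sits in the HTTP-network-or-cache-fetch logic):

```cpp
// Hypothetical sketch; names are illustrative.
if (response->status() == 401
    && response->headers().contains("WWW-Authenticate")
    && request->has_associated_window()) {
    // The server issued a challenge: ask the user for credentials
    // and retry the request.
} else if (response->status() == 401) {
    // No WWW-Authenticate header: surface the 401 to the page rather
    // than re-prompting, which is what sent login forms into a loop.
}
```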
This also fixes a bug where task IDs were being deallocated from the
wrong IDAllocator. I don't know if it was actually possible to cause any
real trouble with that mistake, nor do I know how to write a test for
it, but this makes the bug go away.
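The ID fix amounts to returning each ID to the allocator that issued it (a sketch assuming AK's IDAllocator interface of allocate()/deallocate(int); the surrounding names are illustrative):

```cpp
#include <AK/IDAllocator.h>

// Illustrative sketch: IDs must go back to the allocator that issued them.
static IDAllocator s_task_id_allocator;
static IDAllocator s_timer_id_allocator;

void run_example_task()
{
    int task_id = s_task_id_allocator.allocate();
    // ... run the task ...
    s_task_id_allocator.deallocate(task_id);     // correct allocator
    // s_timer_id_allocator.deallocate(task_id); // the kind of mix-up fixed here
}
```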
We currently have 2 base64 coders: one in AK, another in LibWeb for a
"forgiving" implementation. ECMA-262 has an upcoming proposal which will
require a third implementation.
Instead, let's use the base64 implementation that is used by Node.js and
recommended by the upcoming proposal. It handles forgiving decoding as
well.
Our users of AK's implementation should be fine with the forgiving
implementation. The AK impl originally had naive forgiving behavior, but
that was removed solely for performance reasons.
Using http://mattmahoney.net/dc/enwik8.zip (100MB unzipped) as a test,
performance of our old home-grown implementations vs. the simdutf
implementation (on Linux x64):
                Encode    Decode
AK base64       0.226s    0.169s
LibWeb base64   N/A       1.244s
simdutf         0.161s    0.047s
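For illustration, a round-trip through simdutf's base64 API as documented for simdutf 5.x (error handling simplified; the option parameters are left at their defaults):

```cpp
#include <simdutf.h>
#include <string>
#include <vector>

// Encode raw bytes to base64 using simdutf's documented API.
std::string encode(std::vector<char> const& data)
{
    std::string out(simdutf::base64_length_from_binary(data.size()), '\0');
    size_t written = simdutf::binary_to_base64(data.data(), data.size(), out.data());
    out.resize(written);
    return out;
}

// Decode base64 back to raw bytes; the default mode tolerates whitespace.
std::vector<char> decode(std::string const& base64)
{
    std::vector<char> out(simdutf::maximal_binary_length_from_base64(base64.data(), base64.size()));
    simdutf::result r = simdutf::base64_to_binary(base64.data(), base64.size(), out.data());
    if (r.error != simdutf::SUCCESS)
        return {}; // invalid input: forgiving decoding still rejects bad characters
    out.resize(r.count);
    return out;
}
```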
file: and resource: URLs are now blocked on pages which:
- Don't have an opaque origin (should be only user-initiated or about:)
- Aren't other file: pages
- Aren't other resource: pages
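Roughly, the policy reads like this (a hypothetical sketch, not the actual loader code):

```cpp
#include <string>

// Hypothetical sketch of the policy above; names are illustrative.
bool is_local_scheme_load_allowed(std::string const& target_scheme,
    bool page_has_opaque_origin, std::string const& page_scheme)
{
    if (target_scheme != "file" && target_scheme != "resource")
        return true; // the policy only covers file:/resource: loads
    if (page_has_opaque_origin)
        return true; // user-initiated or about: pages
    return page_scheme == target_scheme; // other file:/resource: pages
}
```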
This patch adds a simple in-memory HTTP cache to each WebContent
process.
It's currently off by default (turn it on with --enable-http-cache)
since the validation logic is lacking and incomplete.
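The shape is deliberately simple: an in-memory map from request key to stored response. A toy sketch with hypothetical types:

```cpp
#include <map>
#include <string>
#include <tuple>
#include <utility>
#include <vector>

// Toy sketch of a per-process in-memory HTTP cache; every type here is
// hypothetical, not the actual WebContent code.
struct CacheKey {
    std::string method;
    std::string url;

    bool operator<(CacheKey const& other) const
    {
        return std::tie(method, url) < std::tie(other.method, other.url);
    }
};

struct CachedResponse {
    int status_code = 0;
    std::vector<std::pair<std::string, std::string>> headers;
    std::vector<char> body;
};

// One cache per WebContent process; nothing is shared across processes.
static std::map<CacheKey, CachedResponse> s_http_cache;
```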
Instead of using a HashMap<ByteString, ByteString, CaseInsensitive...>
everywhere, we now encapsulate this in a class.
Even better, the new class also allows keeping track of multiple headers
with the same name! This will make it possible for HTTP responses to
actually retain all their headers on the perilous journey from
RequestServer to LibWeb.
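A toy version of the idea: keep headers as an ordered list and do case-insensitive lookups, so repeated names survive intact (hypothetical class, not the real one):

```cpp
#include <algorithm>
#include <cctype>
#include <string>
#include <vector>

// Toy sketch, not the actual class: an ordered list of headers with
// case-insensitive name lookup, preserving duplicates.
struct Header {
    std::string name;
    std::string value;
};

class HeaderList {
public:
    void append(std::string name, std::string value)
    {
        m_headers.push_back({ std::move(name), std::move(value) });
    }

    // Every value whose name matches case-insensitively, in insertion order.
    std::vector<std::string> get_all(std::string const& name) const
    {
        std::vector<std::string> values;
        for (auto const& header : m_headers) {
            if (equals_ignoring_case(header.name, name))
                values.push_back(header.value);
        }
        return values;
    }

private:
    static bool equals_ignoring_case(std::string const& a, std::string const& b)
    {
        return std::equal(a.begin(), a.end(), b.begin(), b.end(),
            [](unsigned char x, unsigned char y) { return std::tolower(x) == std::tolower(y); });
    }

    std::vector<Header> m_headers;
};
```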