USVString is defined in the IDL spec as:
> The USVString type corresponds to scalar value strings. Depending on
> the context, these can be treated as sequences of either 16-bit
> unsigned integer code units or scalar values.
This means we need to account for unpaired surrogate code points by
replacing them with the replacement character (U+FFFD).
This fixes the last test in https://wpt.live/url/url-constructor.any.html
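A minimal, self-contained sketch of that conversion, using plain standard-library types rather than the actual LibWeb string classes (the function name is illustrative): unpaired surrogates in the UTF-16 input become U+FFFD, while surrogate pairs decode normally.
```cpp
#include <cstddef>
#include <string>
#include <string_view>

// Convert UTF-16 code units to a scalar value string by replacing unpaired
// surrogates with U+FFFD. Illustrative sketch, not the actual LibWeb helper.
static std::u32string to_scalar_value_string(std::u16string_view code_units)
{
    std::u32string result;
    for (size_t i = 0; i < code_units.size(); ++i) {
        char16_t unit = code_units[i];
        if (unit >= 0xD800 && unit <= 0xDBFF) {
            // High surrogate: only valid when followed by a low surrogate.
            if (i + 1 < code_units.size() && code_units[i + 1] >= 0xDC00 && code_units[i + 1] <= 0xDFFF) {
                char32_t high = unit - 0xD800;
                char32_t low = code_units[i + 1] - 0xDC00;
                result.push_back(0x10000 + (high << 10) + low);
                ++i; // Consume the low surrogate as well.
            } else {
                result.push_back(0xFFFD); // Lone high surrogate.
            }
        } else if (unit >= 0xDC00 && unit <= 0xDFFF) {
            result.push_back(0xFFFD); // Lone low surrogate.
        } else {
            result.push_back(unit); // Regular BMP code unit.
        }
    }
    return result;
}
```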
We can't simply reuse the base URL, as it may need to be modified in some
form. For example, in the included test, the base URL's fragment was
previously being included in the resulting URL.
This fixes 1 test on https://wpt.live/url/url-constructor.any.html
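A hypothetical sketch of the idea, with an illustrative URL record type rather than the actual LibWeb URL class: the result may start out as a copy of the base, but the spec only carries over specific fields, and the fragment is not one of them.
```cpp
#include <optional>
#include <string>
#include <vector>

// Illustrative URL record; not the actual LibWeb URL class.
struct URLRecord {
    std::string scheme;
    std::string host;
    std::vector<std::string> path;
    std::optional<std::string> query;
    std::optional<std::string> fragment;
};

// Sketch: when a relative input resolves to the base URL, the base's scheme,
// host, path and query carry over, but its fragment must not leak into the
// resulting URL.
static URLRecord resolve_to_base(URLRecord const& base)
{
    URLRecord url = base;
    url.fragment = {};
    return url;
}
```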
There were some extra steps in this code path which produced incorrect
results for relative file URLs.
Fixes 7 test cases in: https://wpt.live/url/url-constructor.any.html
We also need to adjust the expected results in TestURL. The behaviour
tested does not match how URL is specified to work when an absolute URL
is given as the relative input.
The check
```
if (start_index >= end_index)
    return {};
```
which was intended to prevent an out-of-bounds access when trimming
whitespace from the start and end of the input, was also preventing
otherwise valid inputs consisting only of whitespace from being parsed.
Instead, prevent start_index from ever getting above end_index in the
first place, and don't treat empty inputs as an error.
Fixes one WPT test on:
https://wpt.live/url/url-constructor.any.html
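A minimal sketch of the revised trimming, assuming hypothetical names (raw_input, start_index, end_index) rather than the actual parser code: the two loops can never cross, and an empty remainder is handed on to the rest of the parser instead of being rejected.
```cpp
#include <cstddef>
#include <string_view>

// Trim leading and trailing C0 controls and spaces (code points <= U+0020)
// without letting start_index move past end_index. Illustrative sketch only.
static std::string_view trim_control_and_space(std::string_view raw_input)
{
    size_t start_index = 0;
    size_t end_index = raw_input.size();

    while (start_index < end_index && static_cast<unsigned char>(raw_input[start_index]) <= ' ')
        ++start_index;
    while (end_index > start_index && static_cast<unsigned char>(raw_input[end_index - 1]) <= ' ')
        --end_index;

    // start_index <= end_index always holds here, so no out-of-bounds access is
    // possible, and an all-whitespace input simply yields an empty view.
    return raw_input.substr(start_index, end_index - start_index);
}
```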
Instead of implementing this inline, move it into a function. Use this
new function to correctly implement path shortening in some places
where this logic was previously missing.
Before these changes, the pathname for the included test was incorrectly
being set to '/', as we were not considering the Windows drive letter.
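A minimal sketch of the spec's "shorten a URL's path" steps that such a function implements (the record type and names here are illustrative, not the actual LibWeb ones): a file URL whose path is just a drive letter keeps it; anything else drops its last segment.
```cpp
#include <cctype>
#include <string>
#include <vector>

// Illustrative URL record; not the actual LibWeb type.
struct URLRecord {
    std::string scheme;
    std::vector<std::string> path;
};

// An ASCII alpha followed by ':' (e.g. "C:").
static bool is_normalized_windows_drive_letter(std::string const& segment)
{
    return segment.size() == 2
        && std::isalpha(static_cast<unsigned char>(segment[0]))
        && segment[1] == ':';
}

// Shorten a URL's path, as the URL standard defines it: for a file URL whose
// path is only a Windows drive letter, do nothing; otherwise remove the last
// path segment, if any.
static void shorten_url_path(URLRecord& url)
{
    if (url.scheme == "file" && url.path.size() == 1 && is_normalized_windows_drive_letter(url.path[0]))
        return;
    if (!url.path.empty())
        url.path.pop_back();
}
```
The drive-letter special case is what keeps file URL paths from collapsing all the way down to '/'.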
None of the existing tests contain a URL with a fragment, but this does
verify that the URL parser does not spuriously find one!
It should also let us verify the correctness of URLs which actually do
contain fragments.
This could happen if a sequence of '0' parts was followed by a longer
sequence of '0' parts at the end of the host. The first sequence was
being used for the compression instead of the second.
For example, [1:1:0:0:1:0:0:0] was being serialized as [1:1::1:0:0:0]
instead of [1:1:0:0:1::].
Fix this by checking, at the end of the loop, whether we are in the
middle of a sequence of '0' parts that is longer than the current longest.
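A minimal sketch of the fixed search for the longest run of zero pieces, including the check after the loop (the array type and names are illustrative, not the actual LibWeb serializer):
```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <optional>

// Find the start of the longest run of zero-valued 16-bit pieces for the "::"
// compression; runs must be at least two pieces long. Illustrative sketch only.
static std::optional<size_t> find_compress_index(std::array<uint16_t, 8> const& pieces)
{
    std::optional<size_t> longest_start;
    size_t longest_length = 1; // Only runs of two or more pieces are compressed.
    std::optional<size_t> current_start;
    size_t current_length = 0;

    for (size_t i = 0; i < pieces.size(); ++i) {
        if (pieces[i] == 0) {
            if (!current_start.has_value())
                current_start = i;
            ++current_length;
        } else {
            if (current_length > longest_length) {
                longest_start = current_start;
                longest_length = current_length;
            }
            current_start = {};
            current_length = 0;
        }
    }

    // The fix: without this final check, a run of zeros that extends to the end
    // of the address (the trailing zeros of 1:1:0:0:1:0:0:0) could never replace
    // an earlier, shorter run.
    if (current_length > longest_length)
        longest_start = current_start;

    return longest_start;
}
```
For 1:1:0:0:1:0:0:0 this returns index 5, so the serializer compresses the trailing run and produces [1:1:0:0:1::].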
This is defined in the spec, but was missing from our table. Fix this, and
add a spec comment for what is still missing. Also begin a basic text-based
test for URL, so we can get some coverage of LibWeb's usage of URL too.