This was actually an older change to the Streams spec that we missed
when we implemented TransformStreams. This fixes a crash in the imported
WPT tests.
See: 007d729
These callbacks are evaluated synchronously via JS::Call. We do not need
to construct an expensive RootVector container just to immediately
invoke the callbacks.
Stylistically, wrapping the arguments in braces also helps indicate
where the actual arguments start at the call sites.
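A sketch of the call-site shape this enables (names illustrative, and
the exact parameter type taken by the invoker may differ):

    // The inner braces make it obvious which values are the
    // callback's arguments, with no RootVector allocation needed.
    auto result = TRY(WebIDL::invoke_callback(*callback, this_value,
        { { chunk_value, controller_value } }));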
This is preparation for removing the endianness override, since it was
only used by a single client: LibTextCodec.
While here, add helpers and make use of simdutf for fast conversion.
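For instance, one such helper might look roughly like this (a sketch;
it assumes simdutf's counted length/convert APIs, not necessarily the
exact helpers added here):

    #include <simdutf.h>
    #include <string>

    // Convert UTF-16LE text to UTF-8 using simdutf's vectorized
    // routines: size the output first, then convert in one pass.
    std::string utf16le_to_utf8(char16_t const* input, size_t length)
    {
        std::string output;
        output.resize(simdutf::utf8_length_from_utf16le(input, length));
        simdutf::convert_utf16le_to_utf8(input, length, output.data());
        return output;
    }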
When we need the callback to return a promise, we can use this alternate
invoker to construct the WebIDL::Promise for us. Currently, the Streams
API will use WebIDL::invoke_callback to create a JS::Promise, and then
wrap that result in a resolved WebIDL::Promise. This results in rejected
JS::Promise instances not being propagated.
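Roughly, the problematic pattern looks like this (a simplified
sketch, not the exact code):

    // invoke_callback() yields a JS::Value that may itself be a
    // rejected JS::Promise...
    auto value = TRY(WebIDL::invoke_callback(*callback, this_value));
    // ...but wrapping it in a freshly-resolved WebIDL::Promise
    // swallows that rejection instead of propagating it.
    auto promise = WebIDL::create_resolved_promise(realm, value);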
There is an open issue to clarify exactly what realm is to be used when
creating promises. There are surely many more places we will need to
update to use the correct realm (which will be the realm of `this`'s
relevant global object).
A memory stream is a more suitable container for the socket send
queue, as using it results in fewer allocations than emulating a
stream with a Vector.
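A sketch of the resulting usage, assuming AK's AllocatingMemoryStream
(payload and buffer stand in for the caller's data):

    // Enqueue outgoing bytes; the stream grows as needed instead of
    // shifting a Vector's contents around.
    AllocatingMemoryStream send_queue;
    TRY(send_queue.write_until_depleted(payload));

    // Dequeue as much as fits into the socket's write buffer.
    auto chunk = TRY(send_queue.read_some(buffer));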
Chaining workflows does not cause subsequently spawned workflow runs
to use the same event; instead, each run uses the latest head SHA of
the branch it runs on. This could cause the JS benchmarks jobs to
fail to find artifacts (if a new JS repl workflow was started before
the previous one could finish) and/or to assign the wrong commit SHA
to the benchmark results.
Since `github.event` contains information about the original workflow
run that spawned the JS benchmarks jobs, we can take the commit SHA from
there and use it to download the correct artifact.
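Concretely, something along these lines in the benchmarks workflow
(step name and script are illustrative):

    # For workflow_run-triggered workflows, github.event carries the
    # run that spawned us, including the commit it was built from.
    - name: Download artifact for the triggering commit
      env:
        HEAD_SHA: ${{ github.event.workflow_run.head_sha }}
      run: ./download-js-artifact.sh "$HEAD_SHA"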
If the user clicked directly on the input inside a label, then it
already received a click event. Dispatching a second one via the label
is redundant, and means that if the input is a checkbox, it gets its
value toggled twice.
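A minimal sketch of the guard (names hypothetical):

    // Only synthesize a click on the labeled control when the user's
    // click did not already land on that control itself.
    if (control && event.target() != control.ptr())
        control->dispatch_click();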
This is very frequently returned by Object.prototype.toString(),
so we benefit from caching it instead of recreating it every time.
Takes Speedometer2.1/EmberJS-Debug-TodoMVC from ~4000ms to ~3700ms
on my M3 MacBook Pro.
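A sketch of the caching, assuming the hot value is the common
"[object Object]" result (member name hypothetical):

    // Lazily create the string once, then hand out the cached
    // PrimitiveString on every subsequent call.
    if (!m_object_object_string)
        m_object_object_string
            = PrimitiveString::create(vm, "[object Object]"_string);
    return m_object_object_string;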
We had concurrency set on the JS artifacts and JS benchmarks
workflows, causing them not to run in parallel for the same
(workflow, OS name) combination. You'd expect this to create a FIFO
queue of jobs that run sequentially, but in reality GitHub keeps only
a single job pending and cancels all the others. We don't want that
for our artifacts and benchmarks: we want them to run on each push.
For example, a new push could have workflows getting cancelled because
someone restarted a previously failed workflow, resulting in the
following message:
"Canceling since a higher priority waiting request for [..] exists"
By removing the concurrency setting from these workflows, we make use
of all available runners to execute the jobs and potentially run some
of them in parallel. For the benchmarks, however, we currently have
only one matching self-hosted runner per job, so they are still not
run in parallel.
We no longer absolutize the URL during parsing; instead, we keep it
as a CSS::URL object, which we then pass to the "fetch an external
image for a stylesheet" algorithm. Our version of this algorithm is a
bit ad-hoc in order to make use of SharedResourceRequest. To reduce
duplication, I've pulled all of fetch_a_style_resource() into a
static function, apart from the "actually do the fetch" step.
The CSS `fetch_foo()` functions resolve the URL relative to the
CSSStyleSheet if one is provided. So, style values that do so need to
know which CSSStyleSheet they are part of so that, for example,
`url(foo.png)` is loaded relative to the style sheet's URL instead of
the document's.
That all works without this change because we currently absolutize URLs
during parsing, but we're in the process of stopping that.
This commit adds the infrastructure for telling style values what their
CSSStyleSheet is.
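The shape of that plumbing is roughly (a sketch; the actual method
and traversal names may differ):

    // A style value records its owning sheet and forwards it to any
    // nested child values, so url(...) can resolve against it later.
    void StyleValue::set_style_sheet(GC::Ptr<CSSStyleSheet> sheet)
    {
        m_style_sheet = sheet;
        for_each_child_style_value([&](auto& child) {
            child.set_style_sheet(sheet);
        });
    }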
See the spec issue in the comments for details. There are situations
where we will need to call this but don't have a CSSStyleSheet to
pass in; however, we always have a Document, so use that where
possible.
These actually always return a value, despite the `CSSStyleSheet*`
return type. So, make that clearer by returning `GC::Ref<CSSStyleSheet>`
instead. This also means we can remove some ad-hoc error-checking code.
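In sketch form (signatures illustrative):

    // Before: a nullable return type, despite never being null.
    CSSStyleSheet* create_a_css_style_sheet(Params const&);

    // After: the type now promises what the code always did, and
    // callers can drop their null checks.
    GC::Ref<CSSStyleSheet> create_a_css_style_sheet(Params const&);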
The spec is unclear about when exactly we should parse the style
sheet. Previously we'd do so before calling this algorithm, which was
error-prone, as demonstrated by the bug fixed in the previous commit.
The spec for step 1 of "create a CSS style sheet" says:
1. Create a new CSS style sheet object and set its properties as
specified.
The definitions linked are UA-defined enough that it seems reasonable to
put the parsing here. That simplifies the user code a little and makes
it harder to mess up. It does raise the question of what to do if
parsing fails. I've matched our previous behaviour by just logging and
returning in that case.
While I'm modifying this method, I've also converted the bool params to
enums so they're a little clearer to read.
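A sketch of the enum-ified parameters (names hypothetical):

    enum class Alternate { No, Yes };
    enum class OriginClean { No, Yes };

    // A call site now reads:
    //     create_a_css_style_sheet(..., Alternate::No, OriginClean::Yes);
    // instead of the inscrutable:
    //     create_a_css_style_sheet(..., false, true);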
Before this change, we assigned the style sheet's location *after* its
content rules were parsed and added to it. This meant any `@import`s
would try to fetch their style sheet before they knew the URL they
should fetch it relative to.
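In other words (sketch, names illustrative):

    // Set the location first, then parse and add the rules, so that
    // any @import encountered during parsing can resolve its URL
    // relative to the sheet's location right away.
    sheet->set_location(move(location));
    parse_and_add_content_rules(*sheet, source_text);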
Previously, if we encountered any attribute with a namespace other
than `xml` or `xmlns`, we treated it as a parse error. Now, we allow
it as long as it's been declared in the current context.
We also handle errors more gracefully: instead of exploding if
setting the namespace fails, we treat it as an error and carry on.
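A sketch of the lookup (helper names hypothetical):

    // Resolve the attribute's prefix against the namespace
    // declarations currently in scope; undeclared prefixes are still
    // a parse error.
    auto namespace_uri = lookup_namespace_for_prefix(prefix);
    if (!namespace_uri.has_value())
        return report_parse_error();
    attribute.set_namespace(*namespace_uri);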
Names like ":hi", "wow:", or "a🅱️c" are invalid, so early-out instead
of searching our namespace stack for matching prefixes.
Also rename the function, since it's relevant to attributes too.
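The early-out amounts to something like this (a sketch; the
character-class check for names like "a🅱️c" is separate):

    // A valid qualified name has at most one colon, separating two
    // non-empty parts; ":hi" and "wow:" can't match any prefix.
    auto colon_index = name.find(':');
    if (colon_index.has_value()
        && (*colon_index == 0 || *colon_index == name.length() - 1))
        return {};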
With this change, we ensure that Skia surfaces are created or
destroyed by only one thread at a time. Not enforcing this before
resulted in "Single owner failure." assertion crashes in debug mode
on pages with canvas elements.
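A sketch of the enforcement (lock placement illustrative):

    // Serialize surface creation/destruction across threads to
    // satisfy Skia's single-owner assertions.
    static Threading::Mutex s_surface_mutex;

    sk_sp<SkSurface> create_surface(SkImageInfo const& info)
    {
        Threading::MutexLocker locker { s_surface_mutex };
        return SkSurfaces::Raster(info);
    }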
A couple of minor things I came across while debugging BYOB streams.
Adjust some spec text to match the latest spec, and use GC::Ref instead
of a raw pointer where applicable.