A memory stream is a more suitable container for the socket send queue:
using it results in fewer allocations than emulating a stream with a
Vector.
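As a rough sketch of the difference (illustrative C++ only, not the actual
socket code; the real change uses the project's memory stream rather than a
hand-rolled class): a Vector-backed queue has to shift or reallocate the
unsent bytes on every partial send, while a chunked queue only allocates a
fresh chunk when the current one is full and simply drops chunks once they
have been fully sent.

```cpp
#include <algorithm>
#include <cstddef>
#include <deque>
#include <vector>

class ChunkedSendQueue {
public:
    void enqueue(unsigned char const* data, size_t size)
    {
        // Only allocates when the last chunk is full; never moves queued data.
        while (size > 0) {
            if (m_chunks.empty() || m_chunks.back().size() == chunk_size)
                m_chunks.emplace_back();
            auto& chunk = m_chunks.back();
            size_t to_copy = std::min(size, chunk_size - chunk.size());
            chunk.insert(chunk.end(), data, data + to_copy);
            data += to_copy;
            size -= to_copy;
        }
    }

    void did_send(size_t size)
    {
        // Consuming sent bytes never shifts the remaining data around,
        // unlike erasing from the front of a single Vector.
        while (size > 0 && !m_chunks.empty()) {
            size_t remaining = m_chunks.front().size() - m_offset;
            if (size >= remaining) {
                size -= remaining;
                m_offset = 0;
                m_chunks.pop_front();
            } else {
                m_offset += size;
                size = 0;
            }
        }
    }

private:
    static constexpr size_t chunk_size = 4096;
    std::deque<std::vector<unsigned char>> m_chunks;
    size_t m_offset { 0 }; // bytes of the front chunk already sent
};
```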
Chaining workflows does not cause the subsequently spawned workflow runs
to use the same event; instead, each run uses the latest head SHA of the
branch it runs on. This would cause the JS benchmarks jobs to be unable
to find artifacts (if a new JS repl workflow was started before the
previous one could finish) and/or to assign the wrong commit SHA to the
benchmark results.
Since `github.event` contains information about the original workflow
run that spawned the JS benchmarks jobs, we can take the commit SHA from
there and use it to download the correct artifact.
If the user clicked directly on the input inside a label, then the input
has already received a click event. Dispatching a second one via the
label is redundant and, if the input is a checkbox, toggles its value
twice.
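A minimal sketch of the check, using a stripped-down node type and a
hypothetical helper name (this is not the actual HTMLLabelElement code):

```cpp
// Stand-in for a DOM node; the real code uses LibWeb's Node hierarchy.
struct NodeSketch {
    NodeSketch* parent { nullptr };
};

// Only forward a synthetic click to the labeled control when the original
// click did not already land on the control (or inside it); otherwise the
// control receives two clicks and a checkbox toggles twice.
bool should_forward_click_to_control(NodeSketch const* event_target, NodeSketch const* control)
{
    for (auto const* node = event_target; node != nullptr; node = node->parent)
        if (node == control)
            return false;
    return true;
}
```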
This is very frequently returned by Object.prototype.toString(),
so we benefit from caching it instead of recreating it every time.
Takes Speedometer2.1/EmberJS-Debug-TodoMVC from ~4000ms to ~3700ms
on my M3 MacBook Pro.
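As a minimal sketch of the caching idea, assuming the hot value is the common
"[object Object]" result (the text above does not name it, and the real change
would cache the value on the VM rather than in a local static):

```cpp
#include <string>

// Build the frequently-returned string once and hand out a reference on
// subsequent calls instead of reconstructing it every time.
std::string const& cached_object_to_string_result()
{
    static std::string const cached = "[object Object]";
    return cached;
}
```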
We had concurrency set on the JS artifacts and JS benchmarks workflows,
causing them not to run in parallel for the same (workflow, OS name)
combination. You'd expect this to create a FIFO queue of jobs that run
sequentially, but in reality GitHub keeps only a single pending job per
concurrency group and cancels all the others. We don't want that for our
artifacts and benchmarks: we want them to run on every push.
For example, a new push could have its workflows cancelled because
someone restarted a previously failed workflow, resulting in the
following message:
"Canceling since a higher priority waiting request for [..] exists"
By removing the concurrency setting from these workflows, we make use of
all available runners to execute the jobs and potentially run some of
them in parallel. For the benchmarks, however, we currently only have one
matching self-hosted runner per job, so they still do not run in
parallel.
We no longer absolutize the URL during parsing; instead we keep it as a
CSS::URL object, which we then pass to the "fetch an external image for a
stylesheet" algorithm. Our version of this algorithm is a bit ad-hoc,
in order to make use of SharedResourceRequest. To reduce duplication,
I've pulled all of fetch_a_style_resource() into a static function, apart
from the "actually do the fetch" step.
The CSS `fetch_foo()` functions resolve the URL relative to the
CSSStyleSheet if one is provided. So, style values that call them need to
know which CSSStyleSheet they are part of, so that, for example,
`url(foo.png)` is loaded relative to the style sheet's URL instead of the
document's one.
That all works without this change because we currently absolutize URLs
during parsing, but we're in the process of stopping that.
This commit adds the infrastructure for telling style values what their
CSSStyleSheet is.
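A small sketch of the resolution rule this enables (the helper and parameter
names are made up, and real code uses proper URL types rather than strings):

```cpp
#include <string>

// Prefer the style sheet's URL as the base for a relative url() when the
// style value knows its CSSStyleSheet; otherwise fall back to the document.
std::string resolve_style_value_url(std::string const& relative_url,
    std::string const* style_sheet_url, std::string const& document_url)
{
    std::string const& base = style_sheet_url ? *style_sheet_url : document_url;
    // String slicing stands in for real URL resolution, purely for illustration.
    return base.substr(0, base.find_last_of('/') + 1) + relative_url;
}
```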
See the spec issue in the comments for details. There are situations
where we will need to call this but don't have a CSSStyleSheet to pass
in; we always have a Document, though, so use that where possible.
These actually always return a value, despite the `CSSStyleSheet*`
return type. So, make that clearer by returning `GC::Ref<CSSStyleSheet>`
instead. This also means we can remove some ad-hoc error-checking code.
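For illustration, a stand-in for the non-null return type (GC::Ref is
Ladybird's non-null GC pointer; this sketch only captures the "cannot be
null" property that lets the call-site checks go away):

```cpp
struct CSSStyleSheetSketch { };

// Minimal non-null reference wrapper, standing in for GC::Ref<T>.
template<typename T>
class NonNullRef {
public:
    explicit NonNullRef(T& object) : m_ptr(&object) { }
    T& operator*() const { return *m_ptr; }
    T* operator->() const { return m_ptr; }
private:
    T* m_ptr; // never null by construction
};

// Before: CSSStyleSheetSketch* make_sheet();            // callers null-check anyway
// After:  NonNullRef<CSSStyleSheetSketch> make_sheet(); // the type carries the guarantee
```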
The spec is unclear about when exactly we should parse the style sheet.
Previously we'd do so before calling this algorithm, which was
error-prone, as seen by the bug fixed by the previous commit. The spec
for step 1 of "create a CSS style sheet" says:
1. Create a new CSS style sheet object and set its properties as
specified.
The definitions linked are UA-defined enough that it seems reasonable to
put the parsing here. That simplifies the user code a little and makes
it harder to mess up. It does raise the question of what to do if
parsing fails. I've matched our previous behaviour by just logging and
returning in that case.
While I'm modifying this method, I've also converted the bool params to
enums so they're a little clearer to read.
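As an illustration of the bool-to-enum conversion (the enum and parameter
names here are hypothetical, not the actual ones used by "create a CSS style
sheet"):

```cpp
#include <string>

enum class Alternate { No, Yes };
enum class OriginClean { No, Yes };

// A call like create_a_css_style_sheet(css, true, false) leaves the reader
// guessing which flag is which; with enums the call site reads
// create_a_css_style_sheet(css, Alternate::Yes, OriginClean::No).
void create_a_css_style_sheet_sketch(std::string const& css, Alternate alternate, OriginClean origin_clean)
{
    (void)css;
    (void)alternate;
    (void)origin_clean;
    // ... create the sheet and set the corresponding properties ...
}
```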
Before this change, we assigned the style sheet's location *after* its
content rules were parsed and added to it. This meant any `@import`s
would try to fetch their style sheet before they knew the URL they
should fetch it relative to.
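A simplified sketch of the ordering fix (the types and helpers below are
stand-ins for LibWeb's CSSStyleSheet and CSS parser):

```cpp
#include <string>
#include <vector>

struct StyleSheetSketch {
    std::string location;
    std::vector<std::string> rules;
};

// Stand-in for the CSS parser; an @import parsed here would immediately
// start fetching relative to base_url.
static std::vector<std::string> parse_rules(std::string const& /* css */, std::string const& /* base_url */)
{
    return {};
}

void fill_style_sheet(StyleSheetSketch& sheet, std::string const& css, std::string const& url)
{
    sheet.location = url;                           // assign the location first...
    sheet.rules = parse_rules(css, sheet.location); // ...so @import fetches resolve against it
}
```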
Previously, if we encountered an attribute with a namespace other than
`xml` or `xmlns`, we treated it as a parse error. Now we allow it as long
as it has been declared in the current context.
We also handle errors more gracefully: instead of exploding when setting
the namespace fails, we treat it as an error and carry on.
Names like ":hi", "wow:", or "a🅱️c" are invalid, so early-out instead
of searching our namespace stack for matching prefixes.
Also rename the function, since it's relevant to attributes too.
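A rough sketch of the early-out (illustrative only, not the actual parser
code):

```cpp
#include <string>

// A qualified name with a leading or trailing ':' can never match a declared
// prefix, so reject it before walking the namespace stack. The real code also
// rejects names containing characters outside the XML Name grammar.
bool prefix_lookup_could_succeed(std::string const& qualified_name)
{
    if (qualified_name.empty())
        return false;
    if (qualified_name.front() == ':' || qualified_name.back() == ':')
        return false;
    return true;
}
```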
With this change, we ensure that Skia surfaces are created or destroyed
by only one thread at a time. Not enforcing this previously resulted in
"Single owner failure." assertion crashes in debug mode on pages with
canvas elements.
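A hedged sketch of the approach, with illustrative names only (the real
change guards the actual Skia calls in the painting code):

```cpp
#include <mutex>

// One mutex serializes surface creation and destruction so that only a
// single thread runs those paths at a time, avoiding the
// "Single owner failure." assertion in debug builds.
static std::mutex s_skia_surface_mutex;

template<typename CreateFn>
auto create_surface_guarded(CreateFn create)
{
    std::lock_guard<std::mutex> lock(s_skia_surface_mutex);
    return create(); // the call into Skia that creates the surface
}

template<typename DestroyFn>
void destroy_surface_guarded(DestroyFn destroy)
{
    std::lock_guard<std::mutex> lock(s_skia_surface_mutex);
    destroy(); // the call that releases the surface
}
```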
A couple of minor things I came across while debugging BYOB streams.
Adjust some spec text to match the latest spec, and use GC::Ref instead
of a raw pointer where applicable.
This adds support for async iterators of the form:
async iterable<value_type>;
async iterable<value_type>(/* arguments... */);
It does not yet support the value pairs of the form:
async iterable<key_type, value_type>;
async iterable<key_type, value_type>(/* arguments... */);
Async iterators have an optional `return` data property. There's not a
particularly good way to know what interfaces implement this property.
So this adds a new extended attribute, DefinesAsyncIteratorReturn, which
interfaces can use to declare their support.
I don't quite see what spec text requires this, but it is explicitly
checked by WPT. We used to pass this test, but that regressed after
commit 3c6010c663.
This required a bit of manual manipulation. These tests dynamically
fetch generated IDL files, e.g.:
https://github.com/web-platform-tests/wpt/blob/master/interfaces/streams.idl
Our WPT importer is not able to detect the IDL files that need to be
imported, so dom.idl and streams.idl were copied over manually. Further,
idlharness.js would create URLs of the form "file://interfaces/dom.idl",
so it was adapted to create a URL relative to the test file.
If we don't do this, then we endlessly spin trying to read data, which
ends up in a deadlock.
The description for SSL_ERROR_ZERO_RETURN states:
> The TLS/SSL connection has been closed. If the protocol version is SSL
> 3.0 or TLS 1.0, this result code is returned only if a closure alert
> has occurred in the protocol, i.e., if the connection has been closed
> cleanly. Note that in this case SSL_ERROR_ZERO_RETURN does not
> necessarily indicate that the underlying transport has been closed.
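A simplified sketch of the handling (error cases other than
SSL_ERROR_ZERO_RETURN are collapsed; the real socket code does more):

```cpp
#include <openssl/ssl.h>

// Treat SSL_ERROR_ZERO_RETURN as end-of-stream instead of "try again";
// retrying here is what previously caused the read loop to spin and deadlock.
int read_some(SSL* ssl, unsigned char* buffer, int size)
{
    int ret = SSL_read(ssl, buffer, size);
    if (ret > 0)
        return ret;

    switch (SSL_get_error(ssl, ret)) {
    case SSL_ERROR_ZERO_RETURN:
        // The peer sent a close_notify alert: the TLS connection is closed
        // cleanly, so report EOF to the caller.
        return 0;
    case SSL_ERROR_WANT_READ:
    case SSL_ERROR_WANT_WRITE:
        return -1; // retry once the underlying socket is ready
    default:
        return -1; // genuine error
    }
}
```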
This widens the assertion from only checking if the WritableStream's
state is Errored or Erroring to asserting that the WritableStream is not
in a Writable state.
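In sketch form (a simplified enum and a plain assert; the real code uses
LibWeb's WritableStream state and VERIFY()):

```cpp
#include <cassert>

enum class WritableStreamState { Writable, Erroring, Errored, Closed };

void assert_stream_not_writable(WritableStreamState state)
{
    // Previously: assert(state == Errored || state == Erroring).
    // The widened form also accepts the Closed state.
    assert(state != WritableStreamState::Writable);
}
```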
By the time we're executing bytecode, we know the bytecode will be
flattened. This means we can use ReadonlySpan to look into it instead of
DisjointChunks::spans(), which allocates.
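Illustratively, with standard types standing in for ReadonlySpan and the
flattened buffer:

```cpp
#include <cstdint>
#include <span>
#include <vector>

// Once the bytecode is known to be one contiguous buffer, a non-owning span
// view is enough to read it, avoiding the allocation that building a list of
// spans over disjoint chunks would incur.
std::span<uint8_t const> bytecode_view(std::vector<uint8_t> const& flattened_bytecode)
{
    return { flattened_bytecode.data(), flattened_bytecode.size() };
}
```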