For text, we always ended up with a leading \0 byte (on little-endian),
which prints as nothing. Since that's the only thing we do with this
data, there's no actual behavior change.
From the HTML spec (https://html.spec.whatwg.org/#definitions-3):
"... Note that in this setup, the processing model still enforces that
the user agent would never process events from any one task source out
of order."
I can't come up with an example that is fixed by this change. However,
debugging a bug caused by violating this assumption from the spec is
likely to be very painful.
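
To make the guarantee concrete, here is a minimal sketch of the data
structure it implies, assuming nothing about Ladybird's actual event
loop types: one FIFO queue per task source, where the user agent may
pick any source to service next, but tasks within a source always run
oldest-first.

    #include <deque>
    #include <functional>
    #include <map>

    enum class TaskSource { DOMManipulation, Networking, HistoryTraversal };

    struct EventLoopSketch {
        // Queue a task on its source; order within a source is preserved.
        void queue_task(TaskSource source, std::function<void()> steps)
        {
            queues[source].push_back(std::move(steps));
        }

        // The user agent is free to choose which source to service next,
        // but within that source the oldest task must run first.
        void run_one_task_from(TaskSource source)
        {
            auto& queue = queues[source];
            if (queue.empty())
                return;
            auto task = std::move(queue.front());
            queue.pop_front();
            task();
        }

        std::map<TaskSource, std::deque<std::function<void()>>> queues;
    };
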
...callback, otherwise the networking task source will be blocked until
the end of HTML parsing.
This is preparation before we forbid interleaving HTML tasks with the
same source.
On macOS, it's not trivial to get a Mach task port for your children.
This implementation registers the chrome process as a well-known
service with launchd based on its pid, and lets each child process
send a reference to its mach_task_self() back to the chrome process.
We'll need this Mach task port right to get process statistics.
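
A rough sketch of what the child-process side of this can look like,
with a made-up service name and minimal error handling; this is not the
actual Ladybird implementation. The chrome side would check in the same
name with bootstrap_check_in() and receive the port descriptor.

    #include <mach/mach.h>
    #include <servers/bootstrap.h>

    struct TaskPortMessage {
        mach_msg_header_t header;
        mach_msg_body_t body;
        mach_msg_port_descriptor_t task_port;
    };

    // Child side: look up the chrome's well-known launchd service and send
    // a copy of our task port (mach_task_self()) to it.
    kern_return_t send_task_port_to_chrome(char const* service_name)
    {
        mach_port_t chrome_port = MACH_PORT_NULL;
        kern_return_t kr = bootstrap_look_up(bootstrap_port, service_name, &chrome_port);
        if (kr != KERN_SUCCESS)
            return kr;

        TaskPortMessage message {};
        message.header.msgh_bits = MACH_MSGH_BITS(MACH_MSG_TYPE_COPY_SEND, 0) | MACH_MSGH_BITS_COMPLEX;
        message.header.msgh_size = sizeof(message);
        message.header.msgh_remote_port = chrome_port;
        message.header.msgh_local_port = MACH_PORT_NULL;
        message.body.msgh_descriptor_count = 1;
        message.task_port.name = mach_task_self();
        message.task_port.disposition = MACH_MSG_TYPE_COPY_SEND;
        message.task_port.type = MACH_MSG_PORT_DESCRIPTOR;

        return mach_msg(&message.header, MACH_SEND_MSG, sizeof(message), 0,
            MACH_PORT_NULL, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
    }
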
This piggybacks on the same fragment serialization code that innerHTML
uses, but instead of constructing an imaginary parent element like the
spec asks us to, we just add a separate serialization mode that includes
the context element in the serialized markup.
This makes the image carousel on https://utah.edu/ show up :^)
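
A simplified illustration of the idea, using hypothetical types rather
than the actual LibWeb serializer: innerHTML-style serialization walks
only the children, while the new mode also wraps the output in the
context element's own tags.

    #include <string>
    #include <vector>

    struct Element {
        std::string tag_name;
        std::vector<Element> children;
    };

    enum class SerializationMode { ChildrenOnly, IncludeContextElement };

    std::string serialize(Element const& element, SerializationMode mode)
    {
        std::string markup;
        // Only the context (root) element is affected by the mode; its
        // descendants are always serialized in full.
        if (mode == SerializationMode::IncludeContextElement)
            markup += "<" + element.tag_name + ">";
        for (auto const& child : element.children)
            markup += serialize(child, SerializationMode::IncludeContextElement);
        if (mode == SerializationMode::IncludeContextElement)
            markup += "</" + element.tag_name + ">";
        return markup;
    }
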
We don't do anything with this (except log the contents if
JPEG2000_DEBUG is 1).
The motivation is that we now decode all marker segments that are
used in all JPEG2000 files I've seen so far, allowing us to make
remaining unknown marker types fatal.
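
For reference, a minimal sketch of scanning codestream marker segments,
assuming the usual JPEG-family layout (a 2-byte marker 0xFFxx, then, for
markers that carry a segment, a 2-byte big-endian length that counts
itself but not the marker). The SOC/SOD/EOC codes below are the standard
values; everything else is simplified and not the decoder's real code.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Walk marker segments until the start of tile-part data (SOD) or the
    // end of the codestream (EOC), printing each marker and segment length.
    bool dump_marker_segments(std::vector<uint8_t> const& data)
    {
        size_t i = 0;
        while (i + 2 <= data.size()) {
            if (data[i] != 0xFF)
                return false; // expected a marker; codestream is malformed
            uint8_t marker = data[i + 1];
            i += 2;
            if (marker == 0x4F) // SOC: no segment body
                continue;
            if (marker == 0x93 || marker == 0xD9) // SOD / EOC: stop scanning
                break;
            if (i + 2 > data.size())
                return false;
            uint16_t length = static_cast<uint16_t>((data[i] << 8) | data[i + 1]);
            std::printf("marker 0xFF%02X, segment length %u\n",
                static_cast<unsigned>(marker), static_cast<unsigned>(length));
            i += length; // length includes its own two bytes, not the marker
        }
        return true;
    }
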
C++ classes that inherit from JS::Cell and are leaf classes should have
their own type-specific allocator. We also do this for non-leaf classes
that are constructible from JS.
To do this, JSON messages are passed to communicate information about
each class the Clang tool comes across. This is the only kind of message
we have to worry about for now, but in the future if we want to transmit
different kinds of information, we can make this message format more
generic.
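
As an illustration only (the field names here are invented, not the
tool's actual schema), each class could be reported as one JSON object
per line on stdout, which keeps parsing on the other end trivial:

    #include <cstdio>

    // Emit one line of JSON describing a class the tool encountered.
    // One object per line makes aggregation easy for the orchestrator.
    void emit_class_message(char const* class_name, bool is_leaf, bool has_allocator)
    {
        std::printf(R"({"type":"class","name":"%s","is_leaf":%s,"has_allocator":%s})" "\n",
            class_name, is_leaf ? "true" : "false", has_allocator ? "true" : "false");
    }
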
This allows each Clang process to send JSON messages to the
orchestrating Python process, which aggregates the messages and can do
something with them all at the end. This is required because we run
Clang multithreaded to speed up the tool execution.
I did try to add a second frontend tool that accepts all the files at
once, but it was _extremely_ slow, so this is the next best thing.