For now, this slot is always 0 (the default value per spec). But once
we start actually processing audio streams, this internal slot should
be updated accordingly.
This will help us detect potential web compatibility issues from not
having this implemented.
While we're at it, update the spec link, as it was moved from the DOM
parsing spec to the HTML one, and implement this function in a manner
that more closely resembles the spec text.
To determine the palette of colors, we use the median cut algorithm.
While this is a correct implementation, there is obviously room for
improvement in both the median cut algorithm itself and on the
encoding side.
This is useful to find the best matching color palette from an existing
bitmap. It can be used in PixelPaint but also in encoders of old image
formats that only support indexed colors, e.g. GIF.
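
For illustration, here is a minimal standalone sketch of the median cut
idea: repeatedly split the bucket whose widest channel has the largest
value range at its median, then average each bucket into a palette
entry. This is not the LibGfx code; the `Rgb` struct and the
`median_cut_palette()` name are hypothetical.

```cpp
// Minimal standalone sketch of median cut; not the LibGfx implementation.
#include <algorithm>
#include <cstdint>
#include <vector>

struct Rgb {
    uint8_t r, g, b;
};

static int channel(Rgb const& p, int c) { return c == 0 ? p.r : c == 1 ? p.g : p.b; }

static std::vector<Rgb> median_cut_palette(std::vector<Rgb> pixels, size_t palette_size)
{
    std::vector<std::vector<Rgb>> buckets;
    buckets.push_back(std::move(pixels));

    while (buckets.size() < palette_size) {
        // Find the bucket/channel combination with the largest value range.
        size_t bucket_to_split = 0;
        int split_channel = 0;
        int widest_range = 0;
        for (size_t i = 0; i < buckets.size(); ++i) {
            for (int c = 0; c < 3; ++c) {
                int min = 255, max = 0;
                for (auto const& p : buckets[i]) {
                    min = std::min(min, channel(p, c));
                    max = std::max(max, channel(p, c));
                }
                if (!buckets[i].empty() && max - min > widest_range) {
                    widest_range = max - min;
                    bucket_to_split = i;
                    split_channel = c;
                }
            }
        }
        if (widest_range == 0)
            break; // Every bucket is a single color; nothing left to split.

        // Split that bucket at the median of its widest channel.
        auto& bucket = buckets[bucket_to_split];
        auto middle = bucket.begin() + bucket.size() / 2;
        std::nth_element(bucket.begin(), middle, bucket.end(),
            [&](Rgb const& a, Rgb const& b) { return channel(a, split_channel) < channel(b, split_channel); });
        std::vector<Rgb> upper(middle, bucket.end());
        bucket.erase(middle, bucket.end());
        buckets.push_back(std::move(upper));
    }

    // Each palette entry is the average color of one bucket.
    std::vector<Rgb> palette;
    for (auto const& bucket : buckets) {
        uint64_t r = 0, g = 0, b = 0;
        for (auto const& p : bucket) {
            r += p.r;
            g += p.g;
            b += p.b;
        }
        size_t n = bucket.empty() ? 1 : bucket.size();
        palette.push_back({ uint8_t(r / n), uint8_t(g / n), uint8_t(b / n) });
    }
    return palette;
}
```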
It wasn't safe to use addition_would_overflow(a, -b) to check if the
subtraction (a - b) would overflow, since it doesn't cover the case
where negating b itself overflows.
I don't know why we didn't have subtraction_would_overflow(), so this
patch adds it. :^)
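
As a hedged illustration (this is not the actual AK::Checked code),
here is one way such a check can be written, together with the case
that makes the addition-based workaround unsafe:

```cpp
// Minimal sketch of a dedicated subtraction-overflow check; not the AK code.
#include <limits>

template<typename T>
constexpr bool subtraction_would_overflow(T a, T b)
{
    if (b < 0)
        return a > std::numeric_limits<T>::max() + b; // a - b would exceed max.
    return a < std::numeric_limits<T>::min() + b;     // a - b would drop below min.
}

int main()
{
    // With b == INT_MIN, computing -b already overflows, so
    // addition_would_overflow(a, -b) cannot be evaluated safely.
    static_assert(subtraction_would_overflow(0, std::numeric_limits<int>::min()));
    static_assert(!subtraction_would_overflow(-1, std::numeric_limits<int>::min()));
    return 0;
}
```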
Previously, when looking for the labeled control of a label element, we
were only checking its child elements. The specification says we should
check all elements in the same tree as the label element.
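
A toy model of the difference, using a hypothetical `Element` tree
rather than the actual LibWeb types:

```cpp
// Toy model of the fix; none of this is the real LibWeb code.
#include <memory>
#include <string>
#include <vector>

struct Element {
    std::string tag;
    std::string id;
    Element* parent = nullptr;
    std::vector<std::unique_ptr<Element>> children;

    Element& append_child(std::string tag_name, std::string element_id = {})
    {
        auto child = std::make_unique<Element>();
        child->tag = std::move(tag_name);
        child->id = std::move(element_id);
        child->parent = this;
        children.push_back(std::move(child));
        return *children.back();
    }

    // The root of the tree this element lives in.
    Element& root()
    {
        auto* node = this;
        while (node->parent)
            node = node->parent;
        return *node;
    }
};

// Depth-first search in tree order for the first element with a given id.
static Element* find_element_by_id(Element& scope, std::string const& id)
{
    if (scope.id == id)
        return &scope;
    for (auto& child : scope.children)
        if (auto* found = find_element_by_id(*child, id))
            return found;
    return nullptr;
}

// The fix in a nutshell: search from the label's root (the whole tree),
// not just among the label's own children.
static Element* labeled_control(Element& label, std::string const& for_attribute)
{
    return find_element_by_id(label.root(), for_attribute);
}

int main()
{
    Element document;
    auto& label = document.append_child("label");
    document.append_child("input", "age"); // A sibling of the label, not a descendant.

    // A children-only search starting at the label would miss this control.
    return labeled_control(label, "age") ? 0 : 1;
}
```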
As MMIO is placed at fixed physical addresses, and does not need to be
backed by real RAM physical pages, there's no need to use PhysicalPage
instances to track its pages.
This results in slightly fewer allocations, but more importantly it
makes MMIO addresses which end up above the normal RAM ranges work,
as 64-bit PCI BARs usually do.
* Matches how the loader is organized
* `compress_VP8L_image_data()` will grow longer when we add actual
compression
* Maybe someone wants to write a lossy compressor one day
No behavior change.
This code path now also compresses to memory once, and then writes to
the output stream.
Since the animation writer has a SeekableStream, it could compress to
the stream directly and fix up offsets later. That's more complicated
though, and keeping the animated and non-animated code paths similar
seems nice. The drawback is just temporarily higher memory use, and
the memory used is smaller than the memory needed by the input bitmap.
Previously, we compressed the image data to memory, then made another
copy in memory, and then wrote to the output stream.
Now, we compress to memory once and then write to the output stream.
No behavior change.
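
The general shape of the pattern both paths now follow, sketched with
stand-in types (this is not the actual WebP writer code, and
`compress_image_data()` is a hypothetical placeholder):

```cpp
// Generic sketch of "compress to memory once, then write"; not the WebP code.
#include <cstdint>
#include <ostream>
#include <vector>

// Stand-in for the real VP8L compressor.
static std::vector<uint8_t> compress_image_data(std::vector<uint8_t> const& raw_pixels)
{
    // ... actual compression would happen here ...
    return raw_pixels;
}

static void write_image(std::ostream& output, std::vector<uint8_t> const& raw_pixels)
{
    // Compress into a single in-memory buffer once...
    auto compressed = compress_image_data(raw_pixels);

    // ...then write that buffer straight to the output stream, without the
    // extra intermediate copy the old code path made.
    output.write(reinterpret_cast<char const*>(compressed.data()),
        static_cast<std::streamsize>(compressed.size()));
}
```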
It is now possible to pass an optional `ImageDataSettings` object to
the `CanvasImageData.createImageData()` and
`CanvasImageData.getImageData()` methods.
We'll want to explicitly load fonts from FontFace and other Web APIs
in the future. A future refactor should also move this completely away
from StyleComputer and call it something like 'FontCache'.