Now both /bin/zcat and /bin/gunzip are symlinks to /bin/gzip, and we
essentially run it in decompression mode through these symlinks.
This means we no longer maintain two copies of the code for decompressing
gzipped data, and the gzipped-streaming input use case is handled only
once in the codebase.
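A minimal sketch of how this kind of name-based dispatch usually works
(illustrative only; the actual gzip main may differ):

    #include <string.h>

    // Decide the mode from the name the program was invoked under: when run
    // through the "zcat" or "gunzip" symlinks, behave as if -d had been passed.
    static bool should_decompress(char const* argv0)
    {
        char const* base = strrchr(argv0, '/');
        base = base ? base + 1 : argv0;
        return strcmp(base, "zcat") == 0 || strcmp(base, "gunzip") == 0;
    }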
For example, for 7z7c.gif, we now store one 500x500 frame and then
a 94x78 frame at (196, 208) and a 91x78 frame at (198, 208).
This reduces how much data we have to store.
We currently store all pixels in the rect with changed pixels.
We could in the future store pixels that are equal in that rect
as transparent pixels. When inputs are gif files, this would
guarantee that new frames have at most 256 distinct colors
(since GIFs require that), which would help a future color indexing
transform. For now, we don't do that though.
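A rough sketch of how the changed rect can be computed (illustrative,
assuming Gfx::Bitmap's get_pixel; the actual encoder code may differ):

    // Bounding rectangle of all pixels that differ between two equally-sized frames.
    Gfx::IntRect changed_rect(Gfx::Bitmap const& previous, Gfx::Bitmap const& current)
    {
        int min_x = current.width(), min_y = current.height();
        int max_x = -1, max_y = -1;
        for (int y = 0; y < current.height(); ++y) {
            for (int x = 0; x < current.width(); ++x) {
                if (previous.get_pixel(x, y) == current.get_pixel(x, y))
                    continue;
                if (x < min_x) min_x = x;
                if (y < min_y) min_y = y;
                if (x > max_x) max_x = x;
                if (y > max_y) max_y = y;
            }
        }
        if (max_x < min_x)
            return {}; // Frames are identical.
        return { min_x, min_y, max_x - min_x + 1, max_y - min_y + 1 };
    }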
The API I'm adding here is a bit ugly:
* WebPs can only store x/y offsets that are a multiple of 2. This
currently leaks into the AnimationWriter base class.
(Since we potentially have to make a webp frame 1 pixel wider
and higher due to this, it's possible to have a frame that has
<= 256 colors in a gif input but > 256 colors in the webp,
if we do the technique above.)
* Every client writing animations has to have logic to track
previous frames, decide which of the two functions to call, etc.
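A hedged sketch of the two-function shape this refers to (hypothetical
signatures, not the actual AnimationWriter API):

    class AnimationWriter {
    public:
        virtual ~AnimationWriter() = default;

        // Write `frame` as a complete image.
        virtual ErrorOr<void> add_frame(Gfx::Bitmap& frame, int duration_ms) = 0;

        // Write only the pixels of `frame` inside `rect`, to be composited
        // over the previously written frame.
        virtual ErrorOr<void> add_frame_relative_to_last_frame(Gfx::Bitmap& frame,
            int duration_ms, Gfx::IntRect rect) = 0;
    };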
This also adds an opt-out flag to `animation`, because:
1. Some clients apparently assume the size of the last VP8L
chunk is the size of the image
(see https://github.com/discord/lilliput/issues/159).
2. Having incremental frames is good for filesize and for
playing the animation start-to-end, but it makes it hard
to extract arbitrary frames (have to extract all frames
from start to target frame) -- but this is meant to be a
delivery codec, not an editing codec. It's also more vulnerable to
corrupted bytes in the middle of the file -- but transport
protocols are good these days.
(It'd also be an idea to write a full frame every N frames.)
For https://giphy.com/gifs/XT9HMdwmpHqqOu1f1a (a 184K gif),
output webp size goes from 21M to 11M.
For 7z7c.gif (an 11K gif), output webp size goes from 2.1M to 775K.
(The webp image data still isn't compressed at all.)
This means that SetVariable instructions will now remember which
(relative) environment contains the targeted binding, letting it bypass
the full binding resolution machinery on subsequent accesses.
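A rough sketch of the caching idea (names are illustrative, not the actual
LibJS types):

    #include <stdint.h>

    // The instruction caches where the binding lives: how many environments up
    // the chain ("hops") and which slot within that environment ("index").
    struct EnvironmentCoordinate {
        uint32_t hops { 0 };
        uint32_t index { 0 };
    };

    // First execution: resolve the binding the slow way and store its coordinate
    // on the instruction. Later executions walk `hops` environments and write
    // slot `index` directly, skipping the full binding resolution machinery.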
This is a more general and robust replacement for the LibJSGCVerifier.
We want to add more generic static analysis, and this new plugin will
be built in a way that integrates into the rest of the system.
Truncating the value is mathematically incorrect; this error made the
conversion to grayscale unstable. In other words, calling `to_grayscale`
on a gray value would return a different value. As an example,
`Color::from_string("#686868ff"sv).to_grayscale()` used to return
#676767ff.
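A hedged sketch of the fix (the weights shown are the common Rec. 709 luma
coefficients; the exact formula in Color may differ): round the weighted
luminosity instead of truncating it, so a gray input maps back onto itself.

    #include <math.h>
    #include <stdint.h>

    uint8_t grayscale_component(uint8_t r, uint8_t g, uint8_t b)
    {
        float luminosity = 0.2126f * r + 0.7152f * g + 0.0722f * b;
        // Truncation can turn 103.9999... into 103 (0x67); rounding keeps
        // a 0x68 gray at 0x68.
        return static_cast<uint8_t>(roundf(luminosity));
    }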
Merging registers, constants and locals into a single vector means:
- Better data locality
- No need to check type in Interpreter::get() and Interpreter::set()
which are very hot functions
Performance improvement is visible in almost all Octane and Kraken
tests.
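A rough sketch of the layout and accessors (illustrative; std::vector and
double stand in for the interpreter's own Vector and Value types):

    #include <cstddef>
    #include <vector>

    // One contiguous vector: [ registers... | constants... | locals... ].
    // An operand index addresses directly into it, so get()/set() are plain
    // array accesses with no "is this a register, constant, or local?" branch.
    struct ExecutionState {
        std::vector<double> registers_constants_and_locals;

        double get(size_t index) const { return registers_constants_and_locals[index]; }
        void set(size_t index, double value) { registers_constants_and_locals[index] = value; }
    };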
Instead of copying the image data pixel-by-pixel, we can memcpy full
scanlines at a time.
This knocks a 4% item down to <1% in profiles of Another World JS.
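A minimal sketch of the idea (hypothetical buffers and strides, measured in
pixels; the real code may use byte strides):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    // Copy a whole scanline per iteration instead of one pixel at a time.
    void copy_image(uint32_t* dst, size_t dst_stride,
                    uint32_t const* src, size_t src_stride,
                    size_t width, size_t height)
    {
        for (size_t y = 0; y < height; ++y)
            memcpy(dst + y * dst_stride, src + y * src_stride, width * sizeof(uint32_t));
    }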
Filling a typed array with an integer shouldn't have to go through the
generic Set for every element.
This knocks a 7% item down to <1% in profiles of Another World JS at
https://cyxx.github.io/another_js/
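A rough sketch of the fast path (illustrative names; shown for an Int32
array, with the value already converted once up front):

    #include <stddef.h>
    #include <stdint.h>

    // Store raw elements directly instead of going through the generic Set
    // machinery for every index.
    void fill_int32_elements(int32_t* data, size_t start, size_t end, int32_t value)
    {
        for (size_t i = start; i < end; ++i)
            data[i] = value;
    }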
When comparing two numbers, we can avoid a lot of implicit type
conversion nonsense and go straight to comparison, saving time in the
most common case.
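A hedged sketch of the shape of such a fast path (illustrative names, not the
actual LibJS helpers):

    // If both operands are already numbers, compare them directly and skip the
    // ToPrimitive/ToNumeric conversion steps of the generic comparison.
    struct Value { bool is_number; double number; };

    bool try_fast_less_than(Value lhs, Value rhs, bool& result)
    {
        if (lhs.is_number && rhs.is_number) {
            result = lhs.number < rhs.number;
            return true;  // handled by the fast path
        }
        return false;     // fall back to the generic abstract relational comparison
    }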
To align the cursor theme tab with how the DisplaySettings themes tab
works, this change makes the theme combo box disallow free-text input.
Previously, clicking the box placed a text cursor in it and allowed
typing anything, rather than acting as a dropdown when clicking anywhere
on the field.
Fixes #24306
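A minimal sketch of the change, assuming GUI::ComboBox's
set_only_allow_values_from_model setter and a hypothetical member name:

    // Disallow free-text editing so a click anywhere on the field opens the
    // dropdown instead of placing a text cursor.
    m_cursor_theme_combo->set_only_allow_values_from_model(true);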
These were out-of-line because we had some ideas about marking
instruction streams PROT_READ only, but that seems pretty arbitrary and
there's a lot of performance to be gained by putting these inline.
Performing a lookup in the blob URL registry does not work in the case
of a web worker, as the registry is not shared between processes.
However, the URL passed to a worker has the blob attached to it, which
we can pull out of the URL on fetch.
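A rough sketch of the fallback described above (hypothetical names, not the
actual LibWeb API):

    // Prefer the registry; if that fails (e.g. inside a worker process), use
    // the blob entry that travelled with the URL object itself.
    Optional<BlobEntry> resolve_blob_url(URL const& url)
    {
        if (auto entry = blob_registry_lookup(url); entry.has_value())
            return entry;
        return url.blob_url_entry();
    }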