Previously, we compressed the image data to memory, made another copy in
memory, and then wrote to the output stream.
Now we compress to memory once and then write to the output stream.
No behavior change.
Once we see a frame with transparent pixels, we now set the
"has alpha" bit in the header.
To avoid requiring a SeekableStream opened for reading, we now pass the
unmodified original flag bit to WebPAnimationWriter.
The high-level design is that we have a static method on WebPWriter that
returns an AnimationWriter object. AnimationWriter has a virtual method
for writing individual frames. This allows streaming animations to disk,
without having to buffer up the entire animation in memory first.
The semantics of this function, add_frame(), are that data is flushed
to disk every time the function is called, so that no explicit `close()`
method is needed.
For some formats that store the animation length at the start of the
file, including WebP, this means the writer needs to write to a
SeekableStream, so that add_frame() can seek back to the start and
update the size each time a frame is written.
This design should work for GIF and APNG writing as well. We can move
AnimationWriter to a new header if we add writers for these.
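
To make the shape concrete, here is a rough sketch of that interface and
of the size-patching add_frame() does. The class and method names
(AnimationWriter, WebPAnimationWriter, add_frame()) are the ones described
above; everything else (the add_frame() signature, std::ofstream standing
in for SeekableStream, the write_anmf_chunk() helper, the member names) is
made up for illustration and is not the actual LibGfx code.

```cpp
#include <cstdint>
#include <fstream>

struct Bitmap; // stand-in for the real bitmap type

class AnimationWriter {
public:
    virtual ~AnimationWriter() = default;

    // Encodes one frame and flushes it to the stream right away,
    // so no explicit close() is needed.
    virtual bool add_frame(Bitmap const& frame, int duration_ms) = 0;
};

// In the real design, a static method on WebPWriter writes the initial
// headers and then returns one of these; that part is elided here.
class WebPAnimationWriter final : public AnimationWriter {
public:
    WebPAnimationWriter(std::ofstream& stream, std::streamoff riff_size_offset)
        : m_stream(stream)
        , m_riff_size_offset(riff_size_offset)
    {
    }

    virtual bool add_frame(Bitmap const& frame, int duration_ms) override
    {
        write_anmf_chunk(frame, duration_ms); // hypothetical helper

        // WebP stores the file size (minus 8) in the RIFF header, so after
        // appending a frame, seek back, patch the size field, and restore
        // the write position.
        std::streamoff end = m_stream.tellp();
        uint32_t riff_size = static_cast<uint32_t>(end - 8);
        m_stream.seekp(m_riff_size_offset);
        m_stream.write(reinterpret_cast<char const*>(&riff_size), sizeof(riff_size)); // assumes a little-endian host
        m_stream.seekp(end);
        m_stream.flush();
        return m_stream.good();
    }

private:
    void write_anmf_chunk(Bitmap const&, int) { /* encode and append one ANMF chunk */ }

    std::ofstream& m_stream;
    std::streamoff m_riff_size_offset { 0 };
};
```

Since the size is re-patched on every call, the file on disk is already
consistent after each add_frame(), which is what lets the design skip a
close() method.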
Currently, `animation` can read any animated image format we can read
(apng, gif, webp) and convert it to an animated webp file.
The written animated webp file is not compressed whatsoever, so this
creates large output files at the moment.
Two bug fixes:
1. Correctly set the bits in the VP8X header.
It turns out these were set in the wrong order.
2. Correctly set the `has_alpha` flag.
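
For reference, the flag byte at the start of the VP8X chunk is laid out
like this in the WebP container spec (MSB first: two reserved bits, then
ICC, Alpha, EXIF, XMP, Animation, and one more reserved bit). The enum and
helper below only illustrate that layout; they are not the code in
WebPWriter.

```cpp
#include <cstdint>

// Flag bits in the first byte of the VP8X chunk, per the WebP container
// spec. Names are illustrative.
enum VP8XFlag : uint8_t {
    HasICC = 1 << 5,
    HasAlpha = 1 << 4,
    HasEXIF = 1 << 3,
    HasXMP = 1 << 2,
    HasAnimation = 1 << 1,
};

uint8_t vp8x_flags(bool has_icc, bool has_alpha, bool is_animated)
{
    uint8_t flags = 0;
    if (has_icc)
        flags |= HasICC;
    if (has_alpha)
        flags |= HasAlpha;
    if (is_animated)
        flags |= HasAnimation;
    return flags;
}
```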
Also add a test for writing webp files with icc data. With the
additional checks in other commits in this PR, this test catches
the bug in WebPWriter.
Rearrange some existing functions to make it easier to write this test:
* Extract encode_bitmap() from get_roundtrip_bitmap().
encode_bitmap() allows passing extra_args that the test uses to pass
in ICC data.
* Extract expect_bitmaps_equal() from test_roundtrip()
For images where every pixel is opaque, we can now write 0 instead of
8 bits for every alpha byte.
This reduces the size of sunset-retro.png when saved as a webp file
from 3 MiB to 2.25 MiB, without affecting encode speed.
Once we use CanonicalCodes we'll get this for free for all channels,
but opaque images are common enough that it feels worth it to do this
before then.
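
A small sketch of the kind of check this relies on; the helper name is
hypothetical and it assumes 32-bit ARGB pixels with the alpha value in the
top byte. If it returns true, the writer can emit a trivial prefix code
for the alpha channel instead of spending 8 bits per pixel on it.

```cpp
#include <cstddef>
#include <cstdint>

bool all_pixels_opaque(uint32_t const* argb_pixels, size_t pixel_count)
{
    for (size_t i = 0; i < pixel_count; ++i) {
        if ((argb_pixels[i] >> 24) != 0xff)
            return false;
    }
    return true;
}
```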
This doesn't use any transforms yet (in particular not the predictor
transform), and doesn't do anything else that actually compresses the
data.
It also gives all 256 values code length 8 unconditionally. This means
the huffman trees are not data-dependent at all and provide no
compression. It also means we can just write out the image data
unmodified.
So the output is fairly large. But it _is_ a valid lossless webp file.
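
To spell out why uniform code length 8 lets the data pass through
unmodified: canonical prefix code assignment hands out codes in increasing
symbol order within each length, so when every one of the 256 symbols has
length 8, symbol i ends up with the 8-bit code i. A generic sketch of that
construction (not WebPWriter code):

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

std::array<uint16_t, 256> assign_canonical_codes(std::array<uint8_t, 256> const& code_lengths)
{
    // Count how many symbols use each code length (lengths up to 15 here).
    std::array<uint16_t, 16> length_counts {};
    for (auto length : code_lengths)
        ++length_counts[length];
    length_counts[0] = 0;

    // Compute the first code for each length, DEFLATE-style.
    std::array<uint16_t, 16> next_code {};
    uint16_t code = 0;
    for (size_t length = 1; length < next_code.size(); ++length) {
        code = static_cast<uint16_t>((code + length_counts[length - 1]) << 1);
        next_code[length] = code;
    }

    // Hand out codes in increasing symbol order. With all lengths equal
    // to 8, this yields codes[i] == i for every i.
    std::array<uint16_t, 256> codes {};
    for (size_t symbol = 0; symbol < codes.size(); ++symbol)
        codes[symbol] = next_code[code_lengths[symbol]]++;
    return codes;
}
```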
Possible follow-ups, to improve compression later:
1. Use actual byte distributions to create huffman trees, to get
huffman compression (see the sketch after this list).
2. If the distribution has just 1 element, write a simple code length
code (that way, images that have alpha 0xff everywhere need to store
no data for alpha).
3. Add backref support, to get full deflate(ish) compression of pixels.
4. Add predictor transform.
5. Consider writing different sets of prefix codes for every 16x16 tile
(by writing a meta prefix code image).
(It might be nice to make the follow-ups optional, so that this can also
be used as a webp example file generator.)
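
For follow-up 1, sketched here generically (not WebPWriter code): build a
histogram of a channel's bytes and derive per-symbol code lengths from a
Huffman tree over those counts. This skips the length-limiting a real
implementation needs, since WebP lossless caps code lengths at 15.

```cpp
#include <array>
#include <cstdint>
#include <queue>
#include <vector>

std::array<uint8_t, 256> code_lengths_from_histogram(std::vector<uint8_t> const& bytes)
{
    std::array<uint64_t, 256> counts {};
    for (uint8_t byte : bytes)
        ++counts[byte];

    // Leaves for used symbols, plus internal nodes as the tree is built.
    struct Node {
        uint64_t weight { 0 };
        int left { -1 };   // child indices into `nodes`, -1 for a leaf
        int right { -1 };
        int symbol { -1 }; // only meaningful for leaves
    };
    std::vector<Node> nodes;
    auto heavier = [&nodes](int a, int b) { return nodes[a].weight > nodes[b].weight; };
    std::priority_queue<int, std::vector<int>, decltype(heavier)> queue(heavier);

    for (int symbol = 0; symbol < 256; ++symbol) {
        if (counts[symbol] == 0)
            continue;
        nodes.push_back({ counts[symbol], -1, -1, symbol });
        queue.push(static_cast<int>(nodes.size()) - 1);
    }

    std::array<uint8_t, 256> lengths {};
    if (queue.empty())
        return lengths;
    if (queue.size() == 1) {
        // Degenerate single-symbol alphabet: give it a 1-bit code.
        lengths[nodes[queue.top()].symbol] = 1;
        return lengths;
    }

    // Repeatedly merge the two lightest nodes.
    while (queue.size() > 1) {
        int a = queue.top();
        queue.pop();
        int b = queue.top();
        queue.pop();
        nodes.push_back({ nodes[a].weight + nodes[b].weight, a, b, -1 });
        queue.push(static_cast<int>(nodes.size()) - 1);
    }

    // A leaf's depth in the tree is its code length.
    struct Item {
        int node;
        uint8_t depth;
    };
    std::vector<Item> stack { { queue.top(), 0 } };
    while (!stack.empty()) {
        auto [node, depth] = stack.back();
        stack.pop_back();
        if (nodes[node].left == -1) {
            lengths[nodes[node].symbol] = depth;
            continue;
        }
        stack.push_back({ nodes[node].left, static_cast<uint8_t>(depth + 1) });
        stack.push_back({ nodes[node].right, static_cast<uint8_t>(depth + 1) });
    }
    return lengths;
}
```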