That way, we can write 0 bits instead of 8 for every alpha byte.
Reduces the size of sunset-retro.png when saved as a webp file
from 3 MiB to 2.25 MiB, without affecting encode speed.
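
For illustration, the opacity check this builds on could look like the
sketch below; `Pixel` and `is_fully_opaque` are made-up names for this
example, not the encoder's actual types:

    #include <cstdint>
    #include <vector>

    struct Pixel {
        uint8_t r, g, b, a;
    };

    // If every pixel is fully opaque, the alpha channel's prefix code can
    // hold the single symbol 0xff, and each pixel's alpha then costs
    // 0 bits in the bitstream.
    static bool is_fully_opaque(std::vector<Pixel> const& pixels)
    {
        for (auto const& pixel : pixels) {
            if (pixel.a != 0xff)
                return false;
        }
        return true;
    }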
Once we use CanonicalCodes we'll get this for free for all channels,
but opaque images are common enough that it feels worth it to do this
before then.
This doesn't use any transforms yet (in particular not the predictor
transform), and doesn't do anything else that actually compresses the
data.
It also gives all 256 values code length 8 unconditionally. This means
the Huffman trees are not data-dependent at all and provide no
compression. It also means we can just write out the image data
unmodified.
So the output is fairly large. But it _is_ a valid lossless webp file.
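
To see why unconditional length-8 codes let the data go out unmodified:
canonical code construction hands out consecutive code words to symbols
in increasing order within each code length, so with all 256 lengths
equal to 8, symbol i simply gets the 8-bit code word i. A standalone
sketch (not taken from the encoder):

    #include <cassert>
    #include <cstdint>
    #include <vector>

    // Builds canonical prefix code words from code lengths, as in DEFLATE:
    // count codes per length, compute the first code word of each length,
    // then hand out consecutive code words in increasing symbol order.
    static std::vector<uint32_t> canonical_codes(std::vector<uint8_t> const& lengths)
    {
        int constexpr max_length = 15;
        std::vector<uint32_t> length_count(max_length + 1, 0);
        for (uint8_t length : lengths)
            length_count[length]++;
        length_count[0] = 0;

        std::vector<uint32_t> next_code(max_length + 1, 0);
        uint32_t code = 0;
        for (int length = 1; length <= max_length; ++length) {
            code = (code + length_count[length - 1]) << 1;
            next_code[length] = code;
        }

        std::vector<uint32_t> codes(lengths.size(), 0);
        for (size_t symbol = 0; symbol < lengths.size(); ++symbol) {
            if (lengths[symbol] != 0)
                codes[symbol] = next_code[lengths[symbol]]++;
        }
        return codes;
    }

    int main()
    {
        // With all 256 lengths equal to 8, the canonical code maps each
        // symbol to its own value, so pixel bytes can be written as-is.
        std::vector<uint8_t> lengths(256, 8);
        auto codes = canonical_codes(lengths);
        for (uint32_t symbol = 0; symbol < 256; ++symbol)
            assert(codes[symbol] == symbol);
    }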
Possible follow-ups, to improve compression later:
1. Use actual byte distributions to create Huffman trees, to get
Huffman compression (first sketch below).
2. If the distribution has just 1 element, write a simple code length
code (that way, images that have alpha 0xff everywhere need to store
no data for alpha; second sketch below).
3. Add backref support, to get full deflate(ish) compression of pixels.
4. Add the predictor transform (third sketch below).
5. Consider writing different sets of prefix codes for every 16x16 tile
(by writing a meta prefix code image).
(It might be nice to make the follow-ups optional, so that this can also
be used as a webp example file generator.)
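
First sketch, for follow-up 1: one way to turn a byte histogram into
Huffman code lengths. This is a generic tree-building sketch, not the
encoder's code, and it ignores WebP's 15-bit length cap, which a real
encoder has to enforce (e.g. with package-merge):

    #include <cstdint>
    #include <queue>
    #include <vector>

    // Builds the Huffman tree with a min-heap over frequencies; a leaf's
    // depth in the finished tree is its code length.
    static std::vector<uint8_t> huffman_code_lengths(std::vector<uint64_t> const& histogram)
    {
        struct Node {
            uint64_t frequency;
            int left;
            int right;
            int symbol; // -1 for internal nodes.
        };
        std::vector<Node> nodes;
        auto compare = [&nodes](int a, int b) { return nodes[a].frequency > nodes[b].frequency; };
        std::priority_queue<int, std::vector<int>, decltype(compare)> heap(compare);

        for (int symbol = 0; symbol < (int)histogram.size(); ++symbol) {
            if (histogram[symbol] == 0)
                continue;
            nodes.push_back({ histogram[symbol], -1, -1, symbol });
            heap.push((int)nodes.size() - 1);
        }

        std::vector<uint8_t> lengths(histogram.size(), 0);
        if (heap.empty())
            return lengths;
        if (heap.size() == 1) {
            // A lone symbol still gets a 1-bit code here; the "simple"
            // code in the next sketch stores this case in 0 bits instead.
            lengths[nodes[heap.top()].symbol] = 1;
            return lengths;
        }

        while (heap.size() > 1) {
            int a = heap.top(); heap.pop();
            int b = heap.top(); heap.pop();
            nodes.push_back({ nodes[a].frequency + nodes[b].frequency, a, b, -1 });
            heap.push((int)nodes.size() - 1);
        }

        struct StackEntry {
            int node;
            uint8_t depth;
        };
        std::vector<StackEntry> stack { { heap.top(), 0 } };
        while (!stack.empty()) {
            auto [node, depth] = stack.back();
            stack.pop_back();
            if (nodes[node].symbol >= 0) {
                lengths[nodes[node].symbol] = depth;
                continue;
            }
            stack.push_back({ nodes[node].left, (uint8_t)(depth + 1) });
            stack.push_back({ nodes[node].right, (uint8_t)(depth + 1) });
        }
        return lengths;
    }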
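Second sketch, for follow-up 2: writing a single-symbol simple code.
The `BitWriter` here is a made-up stand-in for the encoder's real one,
and the field layout follows my reading of the WebP lossless spec, so
treat it as illustrative:

    #include <cstdint>
    #include <vector>

    // Minimal LSB-first bit writer. (A real writer also needs a final
    // flush for the remaining buffered bits; omitted here.)
    struct BitWriter {
        std::vector<uint8_t> bytes;
        uint32_t bit_buffer = 0;
        int bit_count = 0;

        void write_bits(uint32_t value, int count)
        {
            bit_buffer |= value << bit_count;
            bit_count += count;
            while (bit_count >= 8) {
                bytes.push_back(bit_buffer & 0xff);
                bit_buffer >>= 8;
                bit_count -= 8;
            }
        }
    };

    // Writes a "simple" prefix code containing just `symbol`. Decoding
    // this code consumes 0 bits per pixel, which is the point of
    // follow-up 2: an all-0xff alpha channel costs no per-pixel data.
    static void write_single_symbol_code(BitWriter& writer, uint8_t symbol)
    {
        writer.write_bits(1, 1);      // This prefix code is "simple".
        writer.write_bits(0, 1);      // num_symbols - 1 == 0, i.e. one symbol.
        writer.write_bits(1, 1);      // First symbol is stored in 8 bits, not 1.
        writer.write_bits(symbol, 8); // The symbol itself, e.g. 0xff.
    }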
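Third sketch, for follow-up 4: the general shape of a predictor
transform, shown for just the left-pixel predictor applied to one row.
The actual WebP transform defines 14 predictor modes (black, left, top,
averages, ...) and picks one per tile; this only shows the residual
computation:

    #include <cstdint>
    #include <vector>

    struct Pixel {
        uint8_t r, g, b, a;
    };

    // Predicts each pixel from its left neighbor and stores the
    // per-channel difference. Walks right to left so every subtraction
    // still sees the original value of its left neighbor.
    static void apply_left_predictor(std::vector<Pixel>& row)
    {
        for (size_t x = row.size(); x-- > 1;) {
            row[x].r -= row[x - 1].r; // uint8_t arithmetic wraps mod 256,
            row[x].g -= row[x - 1].g; // which is exactly what the inverse
            row[x].b -= row[x - 1].b; // transform relies on to undo this.
            row[x].a -= row[x - 1].a;
        }
    }

The residuals cluster near 0 for smooth images, which is what makes the
Huffman coding from follow-up 1 pay off.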