When buffering is enabled for the entire protocol request, the entirety
of the file data may already be available on the first read of the pipe
passed from RequestServer. If the file is then closed on the
RequestServer end, the client would never realize that the file has
reached EOF, and would never call the user-provided on_finish callback.
By reading until we hit an error, we expect to get EAGAIN (or a similar
non-blocking pipe error) if there is still more data to come.
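As a rough sketch of the idea (plain POSIX calls and made-up names here,
not the actual client code), the read loop looks something like this:

    #include <errno.h>
    #include <unistd.h>
    #include <vector>

    // Keep reading until the pipe is drained. On a non-blocking pipe,
    // read() returning -1 with EAGAIN/EWOULDBLOCK means "no more data
    // right now", while 0 means the writer closed its end (EOF).
    static bool drain_pipe(int fd, std::vector<unsigned char>& out, bool& reached_eof)
    {
        for (;;) {
            unsigned char buffer[4096];
            ssize_t nread = read(fd, buffer, sizeof(buffer));
            if (nread > 0) {
                out.insert(out.end(), buffer, buffer + nread);
                continue;
            }
            if (nread == 0) {
                reached_eof = true; // writer closed; safe to fire on_finish
                return true;
            }
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                return true; // drained for now; more data may still arrive
            if (errno == EINTR)
                continue;
            return false; // a real error
        }
    }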
Also add a note to the Concepts header that the reason we have all the
strange concepts in place for container types is to work around the
language limitation that we cannot partially specialize function
templates.
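For context, a tiny standalone illustration of that limitation and the
workaround (not code from the header; the names here are made up):

    #include <vector>

    // template<typename T>
    // void dump(T const&);
    //
    // template<typename T>
    // void dump<std::vector<T>>(std::vector<T> const&); // error: function templates
    //                                                    // cannot be partially specialized

    // Workaround: describe the container shape with a concept and overload on it.
    template<typename T>
    concept VectorLike = requires(T t) {
        typename T::value_type;
        t.size();
        t[0];
    };

    template<typename T>
    void dump(T const&) { /* generic case */ }

    template<VectorLike T>
    void dump(T const&) { /* container case, preferred when the constraint holds */ }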
The semantics of BGRx8888 aren't super clear, and the format means
different things in different parts of the codebase. In particular, the
PNG writer still writes the x channel to the alpha channel of its
output. In BMPs, the 4th palette byte is usually 0, which means that
after #21412 we started writing all .bmp files with <= 8bpp as
completely transparent PNGs.
This works around that.
(See also #19464 for previous similar workarounds.)
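What the workaround boils down to (an assumption on my part; the helper
and its name are invented, not the actual decoder code) is treating the
unused 4th palette byte as opaque:

    #include <cstdint>

    // In a paletted BMP, each palette entry is B, G, R plus a reserved 4th byte
    // that is usually 0. Since downstream code may treat the 4th channel as
    // alpha, force it to 0xFF so <= 8bpp images don't come out fully transparent.
    static uint32_t palette_entry_to_argb(uint8_t const* entry)
    {
        uint8_t b = entry[0];
        uint8_t g = entry[1];
        uint8_t r = entry[2];
        // entry[3] is the reserved byte; deliberately ignored.
        return 0xFF000000u | (uint32_t(r) << 16) | (uint32_t(g) << 8) | b;
    }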
The added `bitmap.bmp` is a 1bpp file I drew in Photoshop and saved
using its "Save as..." code path.
As long as the inputs are Int32, we can convert them to UInt32 in a
spec-compliant way with a simple static_cast<u32>.
This allows calculations like `-3 >>> 2` to take the fast path as well,
which is extremely valuable for stuff like crypto code.
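A sketch of the fast path this enables (a hypothetical helper, not the
actual interpreter code):

    #include <cstdint>

    // ToUint32 on a value that is already a 32-bit integer is just a
    // reinterpretation of the bits, so static_cast is spec-compliant.
    // The shift count is taken modulo 32, as the spec requires.
    static uint32_t unsigned_right_shift_int32(int32_t lhs, int32_t rhs)
    {
        return static_cast<uint32_t>(lhs) >> (static_cast<uint32_t>(rhs) & 31);
    }

    // Example: unsigned_right_shift_int32(-3, 2) == 0x3FFFFFFF, matching -3 >>> 2.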
While we're doing this, also remove the fast paths from the generic
shift functions in Value.cpp, since we only end up there if we *didn't*
take the same fast path in the interpreter.
Since get() returns an empty Optional if the index is not present, we
can combine these two into a single get() operation and save the cost of
a virtual call.
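The shape of the change, as a small sketch with made-up types:

    #include <cstddef>
    #include <optional>

    struct Store {
        // get() returns an empty optional when the index is not present.
        virtual std::optional<int> get(std::size_t index) const = 0;
        virtual ~Store() = default;
    };

    static int lookup_or_zero(Store const& store, std::size_t index)
    {
        // Before: store.contains(index) followed by store.get(index) -- two virtual calls.
        // After: a single get() covers both the existence check and the value.
        if (auto maybe = store.get(index); maybe.has_value())
            return *maybe;
        return 0;
    }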
This patch adds a new "Peephole" pass for performing small, local
optimizations to bytecode.
We also introduce the first such optimization, fusing a sequence of
some comparison instruction FooCompare followed by a JumpIf into a
new set of JumpFooCompare instructions.
This gives a ~50% speed-up on the following microbenchmark:
for (let i = 0; i < 10_000_000; ++i) {
}
But more traditional benchmarks see a pretty sizable speed-up as well,
for example 15% on Kraken/ai-astar.js and 16% on Kraken/audio-dft.js :^)
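A heavily simplified sketch of the fusion, with hypothetical opcode
names standing in for the real bytecode instructions:

    #include <cstddef>
    #include <utility>
    #include <vector>

    // Invented, minimal opcode set for illustration only.
    enum class Op { LessThan, GreaterThan, JumpIf, JumpLessThan, JumpGreaterThan, Other };

    struct Instruction {
        Op op;
        int jump_target { -1 };
    };

    // Peephole pass: a comparison immediately followed by JumpIf becomes a single
    // fused jump-on-comparison instruction, avoiding the intermediate boolean.
    static void fuse_compare_and_jump(std::vector<Instruction>& block)
    {
        std::vector<Instruction> result;
        for (std::size_t i = 0; i < block.size(); ++i) {
            if (i + 1 < block.size() && block[i + 1].op == Op::JumpIf) {
                if (block[i].op == Op::LessThan) {
                    result.push_back({ Op::JumpLessThan, block[i + 1].jump_target });
                    ++i;
                    continue;
                }
                if (block[i].op == Op::GreaterThan) {
                    result.push_back({ Op::JumpGreaterThan, block[i + 1].jump_target });
                    ++i;
                    continue;
                }
            }
            result.push_back(block[i]);
        }
        block = std::move(result);
    }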
The entry block must stay in place, although it's okay to merge stuff
into it.
This fixes 4 test262 tests and brings us to parity with optimization
disabled. :^)
With this, the `<circle>` element now correctly parses percentage
sizes and resolves them relative to the viewport.
The rest of the geometry elements are still left TODO.
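For reference, the spec resolves these percentages as follows: cx
against the viewport width, cy against the height, and r against the
normalized diagonal. A small illustrative helper (not the LibWeb code):

    #include <cmath>

    struct Viewport {
        float width { 0 };
        float height { 0 };
    };

    // Percentages on a <circle> radius resolve against the viewport's
    // "normalized diagonal": sqrt(width^2 + height^2) / sqrt(2).
    static float resolve_circle_radius_percentage(float percentage, Viewport const& viewport)
    {
        float normalized_diagonal = std::sqrt(viewport.width * viewport.width
                                        + viewport.height * viewport.height)
            / std::sqrt(2.0f);
        return percentage / 100.0f * normalized_diagonal;
    }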
This will allow resolving paths that use sizes relative to the
viewport. This necessarily removes the on-element caching, which has
been redundant for a while, as computed paths are stored on the
paintable.
This was seen in Browser when hotkey activations were processed twice.
If we open the Inspector with a hotkey (F12) and quickly activate that
hotkey again, we could try sending a JS command (inspector.loadDOMTree)
before the inspector.js file was actually loaded in the WebContent.
The window for this bug is larger on Serenity, where loading WebContent
is a bit slower than on Linux. So even with the Browser bug fixed, it
is still pretty easy to hit this window.
Instead of emitting a NewBigInt instruction to construct a primitive
bigint from a parsed literal, we now instantiate the BigInt on the heap
during codegen.
Instead of emitting a NewString instruction to construct a primitive
string from a parsed literal, we now instantiate the PrimitiveString on
the heap during codegen.
The property values here will always be StyleValueLists and not
TransformationStyleValues. The handling of interpolation in this case
gets quite a bit more complex, so let's just remove the dead code for
now and attempt this optimization again in the future if it's needed.
An array image mask contains a min/max range for each channel,
and if each channel of a given pixel is in that channel's range,
that pixel is masked out (i.e. transparent). (It's similar to
having a single color or palette index be transparent, but it
supports a range of transparent colors if desired.)
What makes this a bit awkward is that the range is relative to the
original bits per pixel and the inputs to the image's color space.
So an indexed (palettized) image with 4bpp has a 2-element mask
array where both entries are between 0 and 15.
We currently apply masks after converting images to a Gfx::Bitmap,
that is after converting to 8bpp sRGB. And we do this by mapping
everything to 8bpp very early on in load_image().
This leaves us with a bunch of options that are all a bit awkward:
1. Make load_image() store the up- (or, for 16bpp inputs, down-)
sampled-to-8bpp pixel data, and also return whether we expanded the
pixel range while resampling (for color values) or not (for
palettized images). Then, when applying the image filter,
resample the array bounds in exactly the same way. This requires
passing around more stuff.
2. Like 1, but pass in the mask array to load_image() and apply
the mask right there and then. This means we'd apply mask arrays
at a different time than other masks.
3. Make the function that computes the mask from the mask array
work from the original, unprocessed image data. This is the most
local change, but probably also requires the largest amount of
code (in return, the color mask for 16bpp images is precise, and
this approach separates concerns the most cleanly).
This patch goes with option 3 for now.
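A rough sketch of what option 3 looks like, operating on the raw
(pre-8bpp) component values; the helper and its parameters are made up:

    #include <cstddef>
    #include <vector>

    // The mask array holds a min/max pair per color component, expressed in the
    // image's original bit depth (e.g. 0..15 for a 4bpp indexed image). A pixel
    // is masked out (fully transparent) only if every component is in its range.
    static bool is_pixel_masked_out(std::vector<unsigned> const& raw_components,
        std::vector<unsigned> const& mask_ranges) // [min0, max0, min1, max1, ...]
    {
        for (std::size_t i = 0; i < raw_components.size(); ++i) {
            unsigned min = mask_ranges[2 * i];
            unsigned max = mask_ranges[2 * i + 1];
            if (raw_components[i] < min || raw_components[i] > max)
                return false; // outside any one range -> the pixel stays visible
        }
        return true;
    }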
From https://drafts.csswg.org/css-backgrounds-4/#background-clip
"The background is painted within (clipped to) the intersection of the
border box and the geometry of the text in the element and its in-flow
and floated descendants"
This change implements it in the following way:
1. Traverse the descendants of the element, collecting the Gfx::Path of
glyphs into a vector.
2. The vector of collected paths is saved in the background painting
command.
3. The painting commands executor uses the list of glyphs to paint a
mask for background clipping.
Co-authored-by: Aliaksandr Kalenik <kalenik.aliaksandr@gmail.com>
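Very roughly, the traversal in step 1 looks something like this
(hypothetical stand-in types, just to show the shape):

    #include <vector>

    // Invented stand-ins for the real layout/paintable node and path types.
    struct GlyphPath { /* outline of one glyph run, in absolute coordinates */ };

    struct Node {
        bool is_text { false };
        std::vector<GlyphPath> glyph_paths;
        std::vector<Node*> children;
    };

    // Collect the outlines of every glyph painted by the element's descendants.
    // The resulting vector travels with the background painting command (step 2)
    // and is rasterized into a clip mask by the command executor (step 3).
    static void collect_text_clip_paths(Node const& node, std::vector<GlyphPath>& out)
    {
        if (node.is_text)
            out.insert(out.end(), node.glyph_paths.begin(), node.glyph_paths.end());
        for (auto const* child : node.children)
            collect_text_clip_paths(*child, out);
    }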
In the situation where the amount of content preceding the hunk was
greater than the max context of the hunk, there would be an unsigned
underflow, as the logic was assuming signed arithmetic.
This underflow would result in the patch not applying, as patch would
assume the massive calculated fuzz would result in the patch matching
against any file.
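A sketch of the fix with invented names: clamp the subtraction so it
cannot wrap, instead of relying on unsigned arithmetic behaving like
signed arithmetic.

    #include <cstddef>

    // Before (buggy): both operands are size_t, so when content_before_hunk is
    // greater than max_context the subtraction wraps around to a huge value and
    // the resulting fuzz makes the hunk appear to match anywhere.
    //
    // After: clamp the difference at zero so the fuzz stays meaningful.
    static std::size_t compute_leading_fuzz(std::size_t max_context, std::size_t content_before_hunk)
    {
        if (content_before_hunk >= max_context)
            return 0;
        return max_context - content_before_hunk;
    }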
While StringView does have a null state, we have been moving away from
this in our other String classes. To represent a StringView not being
given at all on the command line, use an Optional.
This is still a very naive implementation and there are plenty of other
cases that we should handle (like a quoted path) - but just looking for
a tab handles the common case.
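A hedged sketch of that common case (not the actual parser): take
everything before the first tab as the path.

    #include <string_view>

    // Naive path extraction: the path runs up to the first tab, which typically
    // separates it from trailing metadata such as a timestamp. Quoted paths and
    // other edge cases are not handled yet.
    static std::string_view extract_path(std::string_view line)
    {
        auto tab_index = line.find('\t');
        if (tab_index == std::string_view::npos)
            return line;
        return line.substr(0, tab_index);
    }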