While introducing clip and scroll frame trees, I made a mistake by
assuming that the paintable tree includes boxes from nested navigables.
Therefore, this comment in the code was incorrect, and clip/scroll
frames were simply not assigned for iframes:
// NOTE: We only need to refresh the scroll state for traversables
// because they are responsible for tracking the state of all
// nested navigables.
As a result, anything with "overflow: scroll" is currently not
scrollable inside an iframe.
This change fixes that by ensuring clip and scroll frames are assigned
and refreshed for each navigable. To achieve this, I had to modify the
display list building process to record a separate display list for each
navigable. This is necessary because scroll frame ids are local to a
navigable, making it impossible to call
`DisplayList::apply_scroll_offsets()` on a display list that contains
ids from multiple navigables.
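As a rough sketch of the resulting structure (the types and names below
are illustrative, not the actual LibWeb ones):

    #include <cstddef>
    #include <map>
    #include <vector>

    struct ScrollOffset { float x { 0 }, y { 0 }; };

    // A display list whose commands reference scroll frames by ids that
    // are local to a single navigable.
    struct DisplayList {
        struct Command {
            int scroll_frame_id { -1 }; // -1: not attached to a scroll frame
            ScrollOffset offset;        // filled in by apply_scroll_offsets()
        };
        std::vector<Command> commands;

        // Only valid if offsets_by_scroll_frame_id was built for the same
        // navigable that produced this list.
        void apply_scroll_offsets(std::vector<ScrollOffset> const& offsets_by_scroll_frame_id)
        {
            for (auto& command : commands) {
                if (command.scroll_frame_id >= 0)
                    command.offset = offsets_by_scroll_frame_id[static_cast<std::size_t>(command.scroll_frame_id)];
            }
        }
    };

    // Hence one display list per navigable, instead of a single list that
    // mixes scroll frame ids from several navigables.
    using NavigableId = int;
    std::map<NavigableId, DisplayList> display_list_per_navigable;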
Implements the corresponding encoders and selects the appropriate one
when encoding URL search params. If an encoder for the given encoding
cannot be found, fall back to UTF-8.
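Roughly, the lookup-with-fallback looks like this (the registry and
names below are hypothetical, not the actual LibTextCodec API):

    #include <functional>
    #include <map>
    #include <string>

    using Encoder = std::function<std::string(std::string const&)>;

    // Hypothetical registry; the real encoders live in LibTextCodec.
    static std::map<std::string, Encoder> const s_encoders = {
        { "utf-8", [](std::string const& input) { return input; } },
        // ... one entry per supported encoding label ...
    };

    // Return the encoder for the requested label, or the UTF-8 encoder if
    // the label is unknown.
    Encoder const& encoder_for_or_fallback(std::string const& label)
    {
        if (auto it = s_encoders.find(label); it != s_encoders.end())
            return it->second;
        return s_encoders.at("utf-8");
    }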
`m_needs_repaint = true` is not enough because it doesn't schedule a
repaint of the parent navigable.
Fixes a bug where an iframe was not repainted after scrolling.
When converting a `Gfx::Bitmap` to a Skia bitmap, we cannot assume the
color data is unpremultiplied. For example, everything canvas-related
uses premultiplied color data:
https://html.spec.whatwg.org/multipage/canvas.html#premultiplied-alpha-and-the-2d-rendering-context
We were probably assuming unpremultiplied since that is what the PNG
decoder gives us. Since we now make `Gfx::Bitmap` identify what alpha
type is being used, we can instruct Skia a bit better :^)
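A minimal pure-Skia sketch of the idea (not the actual conversion code
in Ladybird): the alpha type is part of the SkImageInfo we hand to Skia,
so the wrapper can pick it instead of hard-coding unpremultiplied.

    #include "include/core/SkBitmap.h"
    #include "include/core/SkImageInfo.h"

    // Wrap raw BGRA pixels in an SkBitmap, telling Skia whether the color
    // data is premultiplied.
    SkBitmap wrap_pixels(void* pixels, int width, int height, size_t row_bytes, bool premultiplied)
    {
        auto info = SkImageInfo::Make(width, height, kBGRA_8888_SkColorType,
            premultiplied ? kPremul_SkAlphaType : kUnpremul_SkAlphaType);
        SkBitmap bitmap;
        bitmap.installPixels(info, pixels, row_bytes);
        return bitmap;
    }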
Update our `EdgeFlagPathRasterizer` to use premultiplied alpha instead
of unpremultiplied so we can apply alpha correctly for path masks.
This fixes the dark borders sometimes visible when SVGs are blended
with a colored background.
This also exposed an issue with our `CanvasRenderingContext2D`, which is
supposed to hold a bitmap with premultiplied alpha internally but expose
a bitmap with unpremultiplied alpha in `CanvasImageData`. Expand our C2D
test to include the alpha channel as well.
Finally, this also exposed an off-by-one issue in
`EdgeFlagPathRasterizer` which caused the last scanlines for edges to
render incorrectly. We had some reference images which included these
corruptions (they were almost unnoticeable), so update them as well.
We use instances of `Gfx::Bitmap` to move pixel data all the way from
raw image bytes up to the Skia renderer. A vital piece of information
for correct blending of bitmaps is the alpha type, i.e. are we dealing
with premultiplied or unpremultiplied color values?
Premultiplied means that the RGB colors have been multiplied by the
associated alpha value, i.e. RGB(255, 255, 255) with an alpha of 2% is
stored as RGBA(5, 5, 5, 2%).
Unpremultiplied means that the original RGB colors are stored,
regardless of the alpha value. I.e. RGB(255, 255, 255) with an alpha of
2% is stored as RGBA(255, 255, 255, 2%).
It is important to know how the color data is stored in a
`Gfx::Bitmap`, because correct blending depends on knowing the alpha
type: premultiplied blending uses `S + (1 - A) * D`, while
unpremultiplied blending uses `A * S + (1 - A) * D`.
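As a worked example in plain C++ (not Ladybird code), here is the
white-at-2%-alpha pixel from above blended over a black destination,
once per storage convention, with alpha normalized to [0, 1]:

    #include <cstdio>

    // Premultiplied source: RGB already multiplied by alpha, so blending
    // is S + (1 - A) * D.
    static double blend_premultiplied(double source, double source_alpha, double dest)
    {
        return source + (1.0 - source_alpha) * dest;
    }

    // Unpremultiplied source: multiply by alpha at blend time,
    // A * S + (1 - A) * D.
    static double blend_unpremultiplied(double source, double source_alpha, double dest)
    {
        return source_alpha * source + (1.0 - source_alpha) * dest;
    }

    int main()
    {
        double alpha = 0.02; // the 2% alpha from the example above
        double dest = 0.0;   // a black destination channel

        // White stored premultiplied: 255 * 0.02, rounded to 5.
        std::printf("premultiplied:   %.2f\n", blend_premultiplied(5.0, alpha, dest));
        // White stored unpremultiplied: the original 255.
        std::printf("unpremultiplied: %.2f\n", blend_unpremultiplied(255.0, alpha, dest));
        // Both print roughly 5; the tiny difference is only the rounding
        // that happened when 255 * 0.02 was stored as 5. Using the wrong
        // blend for the storage over- or under-weights the source, which
        // shows up as visible blending artifacts.
        return 0;
    }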
This adds the alpha type information to `Gfx::Bitmap` across the board.
It isn't used anywhere yet.
Right now, we deviate from the CSSOM spec regarding our
CSSStyleDeclaration classes, so this is not as close to the spec as I'd
like. But it works, which means we'll be able to test pseudo-element
styling a lot more easily. :^)
The spec didn't match how other browsers behave, and we dutifully did
what the spec said. A spec bug has been filed, so let's fix this locally
for now with a FIXME.
For the SVG <use> element, we want to support loading HTML documents
that have an SVG element inside of them, pointed to by the URL fragment.
In this situation we would need to fetch and parse the entire document
in SharedImageRequest (so that we can still cache the SVGs). Rename
SharedImageRequest to SharedResourceRequest to make the class a little
more generic for future use cases.
This more closely mirrors handle_failed_fetch, making the handling
slightly easier to understand. It also means that we don't need to take
a strong reference to `this` on a successful SVG resource load.
...When loading images through SharedImageRequest.
There is a small behavioural difference here: handle_failed_fetch
clears the pending callbacks whereas handle_failed_decode previously
did not. This does not seem intentional, and appears to be a bug.
Implementing it this way is also _slightly_ simpler, and means we
don't need to take a strong handle to `this` when loading an SVG
image.
Web specs do not expose percent-decoded URL path components through
JavaScript, but we were doing this in a number of places due to the
default behaviour of URL::serialize_path.
Since a percent-encoded URL path may not decode to valid UTF-8, this
was resulting in crashes in these places.
For example, on an HTMLAnchorElement when retrieving the pathname for
the URL:
http://ladybird.org/foo%C2%91%91
To fix this, make the URL class only return the percent-encoded
serialized path, matching the URL spec. When the decoded path is
required, explicitly call URL::percent_decode instead.
This fixes a crash running WPT URL tests for the anchor element on:
https://wpt.live/url/a-element.html
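To illustrate the split, a tiny standalone sketch (the decoder below is
illustrative only, not LibURL's implementation):

    #include <cstdio>
    #include <string>

    // Minimal percent-decoder for illustration. Note that the output is
    // raw bytes, not necessarily valid UTF-8.
    static std::string percent_decode(std::string const& input)
    {
        std::string output;
        for (size_t i = 0; i < input.size(); ++i) {
            if (input[i] == '%' && i + 2 < input.size()) {
                output += static_cast<char>(std::stoi(input.substr(i + 1, 2), nullptr, 16));
                i += 2;
            } else {
                output += input[i];
            }
        }
        return output;
    }

    int main()
    {
        // What pathname should now return for the URL above: still
        // percent encoded.
        std::string serialized_path = "/foo%C2%91%91";

        // Decoding is a separate, explicit step. The result contains the
        // bytes C2 91 91; the trailing 0x91 is a stray continuation byte,
        // so the decoded string is not valid UTF-8.
        std::string decoded = percent_decode(serialized_path);
        std::printf("%zu bytes after decoding\n", decoded.size());
        return 0;
    }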
This also fixes a bug where task IDs were being deallocated from the
wrong IDAllocator. I don't know if it was actually possible to cause any
real trouble with that mistake, nor do I know how to write a test for
it, but this makes the bug go away.
We were saving the source declaration for *every* property, even though
we only ever looked it up for animation-name.
This patch gets rid of the per-property source pointer and we now keep
a single pointer to the animation-name source only.
This shrinks StyleProperties from 6512 bytes to 4368 bytes per instance.
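Roughly, the layout change looks like this (stand-in types and an
illustrative property count, not the real StyleProperties members):

    #include <array>
    #include <cstddef>
    #include <memory>

    // Stand-ins for the real types, for illustration only.
    struct StyleValue { };
    struct CSSStyleDeclaration { };
    constexpr std::size_t property_count = 200; // illustrative, not the real count

    // Before: every property slot paid for a "source declaration" pointer.
    struct Before {
        struct Slot {
            std::shared_ptr<StyleValue> value;
            std::shared_ptr<CSSStyleDeclaration> source;
        };
        std::array<Slot, property_count> properties;
    };

    // After: values only, plus one pointer for animation-name, the only
    // property whose source declaration we ever look up.
    struct After {
        std::array<std::shared_ptr<StyleValue>, property_count> properties;
        std::shared_ptr<CSSStyleDeclaration> animation_name_source;
    };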
Navigables are re-used for navigations within the same tab. Their
current ownership of the cursor position is a bit ad-hoc: nothing in
the spec indicates when to reset the cursor, nor do we manually do so.
So when a cursor update happens on one page, that cursor is retained on
the next page.
Instead, let's have the document own the cursor. Each navigation results
in a new document, thus we don't need to worry about resetting cursors.
This also makes many of the callsites feel nicer. We were previously
often going from the node, to the document, to the navigable, to the
cursor. This patch removes the navigable hop.
The spec does not define activation behavior of ctrl/cmd clicks, so we
have to go a bit ad-hoc here. When an anchor element is clicked with the
cmd (on macOS) or ctrl (on other platforms) modifier pressed, we will
skip activation of that element and pass the event on to the chrome. We
still dispatch the event to allow scripts to cancel the event.
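A minimal sketch of the decision, with made-up names rather than the
actual LibWeb code:

    // Assumes the click event has already been dispatched and was not
    // canceled by script.
    enum class LinkClickAction {
        Activate,        // run the anchor's activation behaviour (navigate)
        ForwardToChrome, // skip activation; the chrome decides (new tab, etc.)
    };

    struct Modifiers {
        bool ctrl { false };
        bool meta { false }; // cmd on macOS
    };

    LinkClickAction action_for_link_click(Modifiers modifiers, bool is_macos)
    {
        bool new_tab_modifier = is_macos ? modifiers.meta : modifiers.ctrl;
        return new_tab_modifier ? LinkClickAction::ForwardToChrome : LinkClickAction::Activate;
    }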
Now that the implementation is in FormAssociatedElement, the
implementation in HTMLInputElement is effectively just a passthrough,
with some minor differences to handle small behavioural quirks between
the two (such as the difference in nullability of types).
Unlike LibGfx, where corner clipping was implemented by sampling and
blitting pixels under corners into a temporary bitmap, Skia allows us
to simply apply a mask. As a result, we no longer need the
BlitCornerClipping command, which has become a no-op.
- SampleUnderCorners is renamed to AddRoundedRectClip
- The optimization that skipped unnecessary blit and sample commands has
been removed. However, this should not result in a performance
regression because Skia seems to perform mask rasterization lazily.
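In pure-Skia terms, the replacement boils down to something like this
(illustrative, not the actual display list player code):

    #include "include/core/SkCanvas.h"
    #include "include/core/SkRRect.h"

    // Everything painted between save() and restore() is masked by the
    // anti-aliased rounded-rect clip; no sampling or blitting of corner
    // pixels is needed.
    void paint_with_rounded_corners(SkCanvas& canvas, SkRect const& rect, float radius)
    {
        canvas.save();
        canvas.clipRRect(SkRRect::MakeRectXY(rect, radius, radius), /* doAntiAlias */ true);
        // ... emit the painting commands for the clipped content here ...
        canvas.restore();
    }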
In the HTML parser spec, there are 2 instances of the following text:
add the attribute and its corresponding value to that element
The "add the attribute" text does not have a corresponding spec link to
actually specify what to do. We currently use `set_attribute`, which can
throw an exception if the attribute name contains an invalid character
(such as '<'). Instead, switch to `append_attribute`, which allows such
attribute names. This behavior matches Firefox.
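To make the distinction concrete, a simplified hypothetical sketch (not
the real DOM::Element API):

    #include <cctype>
    #include <string>
    #include <utility>
    #include <vector>

    struct Element {
        std::vector<std::pair<std::string, std::string>> attributes;

        // set_attribute(): validates the name, so a tokenizer-produced name
        // containing '<' is rejected (the real method throws
        // InvalidCharacterError). The validation here is simplified.
        bool set_attribute(std::string const& name, std::string const& value)
        {
            for (char c : name) {
                if (c == '<' || std::isspace(static_cast<unsigned char>(c)))
                    return false;
            }
            attributes.emplace_back(name, value);
            return true;
        }

        // append_attribute(): stores whatever the tokenizer produced,
        // verbatim, which is what "add the attribute" needs to mean here.
        void append_attribute(std::string name, std::string value)
        {
            attributes.emplace_back(std::move(name), std::move(value));
        }
    };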
Note we cannot yet make the unclosed-html-element.html test match the
expectations of the unclosed-body-element.html test due to another bug
that prevents checking whether the expected element has the right
attribute. That will be fixed in an upcoming commit.
This removes some ambiguity about what the return value should be if
the index is out of range.
Previously, we would sometimes return a JS null, and other times a JS
undefined.
It will also let us fold the check for whether an index is a supported
property index together with retrieving the value just afterwards.
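Something along these lines is the shape this enables (illustrative
only, with std::optional standing in for the engine's own types):

    #include <optional>
    #include <vector>

    // One query either yields the item or signals "not a supported
    // property index"; the binding layer can then map the empty case to a
    // single, consistent value instead of sometimes null and sometimes
    // undefined.
    template<typename T>
    std::optional<T> item_value(std::vector<T> const& items, size_t index)
    {
        if (index >= items.size())
            return std::nullopt;
        return items[index];
    }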