For a long time, we've used two terms, inconsistently:
- "Identifier" is a spec term, but refers to a sequence of alphanumeric
characters, which may or may not be a keyword. (Keywords are a
subset of all identifiers.)
- "ValueID" is entirely non-spec, and is directly called a "keyword" in
the CSS specs.
So to avoid confusion as much as possible, let's align with the spec
terminology. I've attempted to change variable names as well, but
obviously we use Keywords in a lot of places in LibWeb and so I may
have missed some.
One exception is that I've not renamed "valid-identifiers" in
Properties.json... I'd like to combine that and the "valid-types" array
together eventually, so there's no benefit to doing an extra rename
now.
If an input URL is given, the spec expects it to be manipulated on the
fly as it is being parsed, and callers may ignore any errors thrown by
the URL parser.
Previously, we were not exactly following the spec's assumption here,
which resulted in us needing to make awkward copies of the URL in these
situations.
In most cases this is not an issue. But it does cause problems in
situations where URL parsing results in a failure (which is ignored by
the caller) and the URL has been _partially_ updated while parsing.
Such a situation can occur when setting the host of an href alongside a
port number which is not valid. It is expected that this situation will
result in the host being updated - but not the port number.
Adjust the URL parser API so that it mutates the URL given (if any), and
adjust the callers accordingly.
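As a minimal sketch of that behaviour (using hypothetical names, not the
actual LibURL parser API): the URL is written to as components are
consumed, so a later failure leaves any earlier updates in place.

```cpp
#include <cctype>
#include <cstdint>
#include <optional>
#include <string>

struct URL {
    std::string host;
    std::optional<uint16_t> port;
};

// Parses `input` as "host[:port]", writing into `url` as it goes. A failure is
// reported to the caller, but anything already written to `url` stays written.
bool parse_host_and_port(std::string const& input, URL& url)
{
    auto colon = input.find(':');
    url.host = input.substr(0, colon); // the host update is committed immediately

    if (colon == std::string::npos)
        return true;

    auto port_text = input.substr(colon + 1);
    if (port_text.empty())
        return false; // invalid port: error ignored by the caller, host update kept
    for (char c : port_text) {
        if (!std::isdigit(static_cast<unsigned char>(c)))
            return false;
    }
    auto value = std::stoul(port_text);
    if (value > 65535)
        return false;
    url.port = static_cast<uint16_t>(value);
    return true;
}
```

With this shape, setting a host alongside a malformed port updates the
host and leaves the previous port untouched, which is the setter
behaviour described above.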
Fixes two tests on https://wpt.live/url/url-setters-a-area.window.html
This change causes the viewport to be treated as a "scroll frame,"
similar to how it already works for boxes with "overflow: scroll."
This means that, instead of encoding the viewport translation into a
display list, the items will be assigned the scroll frame id of the
viewport and then shifted by the scroll offset before execution. In the
future, this will allow us to reuse a display list for repainting if
only the scroll offset has changed.
As a side effect, it also removes the need for special handling of
"position: fixed" because compensating for the viewport offset while
painting or hit-testing is no longer necessary. Instead, anything
contained within a "position: fixed" element is simply not assigned
a scroll frame id, which means it is not shifted by the scroll offset.
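Roughly, the idea looks like this (illustrative types only, not the real
DisplayList structures): each item carries an optional scroll frame id
and is shifted by the matching offset just before execution, while items
without an id - such as "position: fixed" content - are left untouched.

```cpp
#include <optional>
#include <unordered_map>
#include <vector>

struct Offset { int x { 0 }, y { 0 }; };

struct DisplayListItem {
    std::optional<int> scroll_frame_id; // absent for "position: fixed" content
    Offset position;
};

void apply_scroll_offsets(std::vector<DisplayListItem>& items,
                          std::unordered_map<int, Offset> const& offset_by_scroll_frame)
{
    for (auto& item : items) {
        if (!item.scroll_frame_id.has_value())
            continue; // fixed content is never shifted
        auto const& offset = offset_by_scroll_frame.at(*item.scroll_frame_id);
        item.position.x -= offset.x;
        item.position.y -= offset.y;
    }
}
```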
While introducing clip and scroll frame trees, I made a mistake by
assuming that the paintable tree includes boxes from nested navigables.
Therefore, this comment in the code was incorrect, and clip/scroll
frames were simply not assigned for iframes:
// NOTE: We only need to refresh the scroll state for traversables
// because they are responsible for tracking the state of all
// nested navigables.
As a result, anything with "overflow: scroll" is currently not
scrollable inside an iframe.
This change fixes that by ensuring clip and scroll frames are assigned
and refreshed for each navigable. To achieve this, I had to modify the
display list building process to record a separate display list for each
navigable. This is necessary because scroll frame ids are local to a
navigable, making it impossible to call
`DisplayList::apply_scroll_offsets()` on a display list that contains
ids from multiple navigables.
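A sketch of the resulting shape, with placeholder types rather than the
real LibWeb ones: one display list is recorded per navigable, so every
scroll frame id inside a given list refers only to that navigable's own
scroll state.

```cpp
#include <map>
#include <memory>

struct Navigable { };
struct ScrollState { /* offsets keyed by scroll frame ids local to one navigable */ };

struct DisplayList {
    void apply_scroll_offsets(ScrollState const&)
    {
        // Every id recorded in this list comes from a single navigable, so it
        // can be resolved against that navigable's scroll state without clashes.
    }
};

// One recorded display list per navigable, instead of a single global list.
std::map<Navigable const*, std::unique_ptr<DisplayList>> display_list_per_navigable;

void repaint(Navigable const& navigable, ScrollState const& scroll_state)
{
    auto& display_list = display_list_per_navigable[&navigable];
    if (!display_list)
        display_list = std::make_unique<DisplayList>();
    display_list->apply_scroll_offsets(scroll_state);
}
```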
Implements the corresponding encoders and selects the appropriate one
when encoding URL search params. If an encoder for the given encoding
cannot be found, fall back to UTF-8.
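A small sketch of that fallback, with placeholder names standing in for
the real encoder lookup:

```cpp
#include <optional>
#include <string>
#include <string_view>

struct Encoder {
    std::string name;
};

// Stand-in for an encoder-for-label lookup; returns an empty optional when no
// encoder implements the requested encoding.
std::optional<Encoder> encoder_for_label(std::string_view label)
{
    if (label == "UTF-8" || label == "windows-1252")
        return Encoder { std::string(label) };
    return std::nullopt;
}

Encoder encoder_for_url_search_params(std::string_view encoding_label)
{
    if (auto encoder = encoder_for_label(encoding_label); encoder.has_value())
        return *encoder;
    // No encoder found for the given encoding: fall back to UTF-8.
    return Encoder { "UTF-8" };
}
```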
`m_needs_repaint = true` is not enough because it doesn't schedule a
repaint of the parent navigable.
Fixes a bug where an iframe is not repainted after scrolling.
When converting a `Gfx::Bitmap` to a Skia bitmap, we cannot assume the
color data is unpremultiplied. For example, everything canvas-related
uses premultiplied color data:
https://html.spec.whatwg.org/multipage/canvas.html#premultiplied-alpha-and-the-2d-rendering-context
We were probably assuming unpremultiplied since that is what the PNG
decoder gives us. Since we now make `Gfx::Bitmap` identify what alpha
type is being used, we can instruct Skia a bit better :^)
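For illustration, the conversion presumably ends up as a mapping along
these lines; the AlphaType enum below is an assumption based on the
description above, while kPremul_SkAlphaType and kUnpremul_SkAlphaType
are real Skia values.

```cpp
#include <core/SkAlphaType.h> // include path depends on how Skia is set up

// Assumed Gfx-side enum mirroring the two alpha types discussed here.
enum class AlphaType {
    Premultiplied,
    Unpremultiplied,
};

// Tell Skia how the bitmap's color data is stored when building image info.
SkAlphaType to_skia_alpha_type(AlphaType alpha_type)
{
    switch (alpha_type) {
    case AlphaType::Premultiplied:
        return kPremul_SkAlphaType;
    case AlphaType::Unpremultiplied:
        return kUnpremul_SkAlphaType;
    }
    return kUnknown_SkAlphaType;
}
```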
Update our `EdgeFlagPathRasterizer` to use premultiplied alpha instead
of unpremultiplied so we can apply alpha correctly for path masks.
This fixes the dark borders sometimes visible when SVGs are blended
with a colored background.
This also exposed an issue with our `CanvasRenderingContext2D`, which is
supposed to hold a bitmap with premultiplied alpha internally but expose
a bitmap with unpremultiplied alpha in `CanvasImageData`. Expand our C2D
test to include the alpha channel as well.
Finally, this also exposed an off-by-one issue in
`EdgeFlagPathRasterizer` which caused the last scanlines for edges to
render incorrectly. We had some reference images which included these
corruptions (they were almost unnoticeable), so update them as well.
We use instances of `Gfx::Bitmap` to move pixel data all the way from
raw image bytes up to the Skia renderer. A vital piece of information
for correct blending of bitmaps is the alpha type, i.e. are we dealing
with premultiplied or unpremultiplied color values?
Premultiplied means that the RGB colors have been multiplied with the
associated alpha value, i.e. RGB(255, 255, 255) with an alpha of 2% is
stored as RGBA(5, 5, 5, 2%).
Unpremultiplied means that the original RGB colors are stored,
regardless of the alpha value. I.e. RGB(255, 255, 255) with an alpha of
2% is stored as RGBA(255, 255, 255, 2%).
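In code, producing the premultiplied form is just scaling each channel
by the alpha, matching the numbers above (a standalone sketch, not the
Gfx::Bitmap API):

```cpp
#include <cstdint>

struct RGBA {
    uint8_t r, g, b, a;
};

// Scale each color channel by alpha / 255 to get premultiplied storage.
RGBA premultiply(RGBA color)
{
    auto scale = [&](uint8_t channel) {
        return static_cast<uint8_t>(channel * color.a / 255);
    };
    return { scale(color.r), scale(color.g), scale(color.b), color.a };
}

// Example: white at ~2% alpha, premultiply({ 255, 255, 255, 5 }) == { 5, 5, 5, 5 }.
```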
It is important to know how the color data is stored in a
`Gfx::Bitmap`, because correct blending depends on knowing the alpha
type: premultiplied blending uses `S + (1 - A) * D`, while
unpremultiplied blending uses `A * S + (1 - A) * D`.
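As a single-channel sketch of those two equations, with S as the source
value, D as the destination and A as the source alpha in [0, 1]:

```cpp
// Premultiplied source: already scaled by its alpha, so it is added directly.
float blend_premultiplied(float S, float D, float A)
{
    return S + (1.0f - A) * D;
}

// Unpremultiplied source: still carries the original color, so scale it here.
float blend_unpremultiplied(float S, float D, float A)
{
    return A * S + (1.0f - A) * D;
}

// White at 2% alpha over black comes out around 5/255 either way:
//   premultiplied:   5 + (1 - 0.02) * 0 = 5
//   unpremultiplied: 0.02 * 255 + (1 - 0.02) * 0 = 5.1
```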
This adds the alpha type information to `Gfx::Bitmap` across the board.
It isn't used anywhere yet.
Right now, we deviate from the CSSOM spec regarding our
CSSStyleDeclaration classes, so this is not as close to the spec as I'd
like. But it works, which means we'll be able to test pseudo-element
styling a lot more easily. :^)
The spec didn't match how other browsers behave, and we dutifully did
what the spec said. A spec bug has been filed, so let's fix this locally
for now with a FIXME.
For the SVG <use> element, we want to support loading HTML documents
that have an SVG element inside of them, pointed to by the URL fragment.
In this situation we would need to fetch and parse the entire document
in SharedImageRequest (so that we can still cache the SVGs). Rename
SharedImageRequest to SharedResourceRequest to make the class a little
more generic for future use cases.
This more closely mirrors handle_failed_fetch, making the handling
slightly clearer. It also means that we don't need to take a strong
reference to this on a successful SVG resource load.
...When loading images through SharedImageRequest.
There is a small behavioural difference here - handle_failed_fetch
clears the pending callbacks, whereas handle_failed_decode previously
did not. This does not seem intentional, and appears to be a bug.
Implementing it this way is _slightly_ simpler - and also means we
don't need to take a strong handle to this in the case of loading an
SVG image.
Web specs do not return percent-decoded URL path components through
JavaScript, but we were doing this in a number of places due to the
default behaviour of URL::serialize_path.
Since percent-encoded URL paths may not decode to valid UTF-8, this was
resulting in us crashing in these places.
For example, on an HTMLAnchorElement when retrieving the pathname for
the URL:
http://ladybird.org/foo%C2%91%91
To fix this, make the URL class only return the percent-encoded
serialized path, matching the URL spec. When the decoded path is
required, explicitly call URL::percent_decode instead.
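A rough sketch of what that explicit opt-in looks like; the
percent_decode stand-in below is simplified, and the real LibURL
signatures may differ.

```cpp
#include <cctype>
#include <string>

// Simplified stand-in for URL::percent_decode: it turns "%XY" sequences back
// into raw bytes, which are not guaranteed to form valid UTF-8.
std::string percent_decode(std::string const& input)
{
    std::string output;
    for (size_t i = 0; i < input.size(); ++i) {
        if (input[i] == '%' && i + 2 < input.size()
            && std::isxdigit(static_cast<unsigned char>(input[i + 1]))
            && std::isxdigit(static_cast<unsigned char>(input[i + 2]))) {
            auto byte = std::stoi(input.substr(i + 1, 2), nullptr, 16);
            output += static_cast<char>(byte);
            i += 2;
        } else {
            output += input[i];
        }
    }
    return output;
}

int main()
{
    // What JavaScript-facing getters like HTMLAnchorElement.pathname now see:
    // the still-percent-encoded path, which is always valid UTF-8.
    std::string pathname = "/foo%C2%91%91";

    // Decoding is an explicit opt-in for callers that really want raw bytes;
    // here it yields "/foo" followed by the bytes 0xC2 0x91 0x91.
    std::string decoded = percent_decode(pathname);
    return decoded.size() < pathname.size() ? 0 : 1;
}
```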
This fixes a crash running WPT URL tests for the anchor element on:
https://wpt.live/url/a-element.html