The upcoming generated types will match those for pseudo-classes: a
PseudoElementSelector type, which holds a PseudoElement enum defining
what it is. That enum will live at the top level of the Web::CSS
namespace.
In order to keep the diffs clearer, this commit renames and moves the
types, and then a following one will replace the handwritten enum with
a generated one.
This allows us to parse the Content-Security-Policy and Referrer-Policy
headers from navigation responses, so they can actually start having an
effect.
This isn't actually necessary, since we already invalidate style for the
entire document, and the subsequent style update will discover any
additional layout invalidation needed as well.
This is 12c6ac78e2 again, with a fix for the mistake where the cache
slot was copied instead of being referenced:
```cpp
auto cache =
box.cached_intrinsic_sizes().min_content_height.ensure(width);
```
while it should've been:
```cpp
auto& cache =
box.cached_intrinsic_sizes().min_content_height.ensure(width);
```
This change moves the intrinsic sizes cache from LayoutState, which is
local to the current layout run, to the layout nodes, so it can be
reused between layout runs. This optimization is possible because we
can guarantee that these measurements will remain unchanged unless the
style of the element or any of its descendants changes.
For now, invalidation is implemented simply by resetting the cache
along the whole ancestor chain once we determine that an element needs
a layout update.
The case where layout is invalidated by structural DOM changes is
covered by layout tree invalidation, which drops the intrinsic sizes
cache along with the layout nodes.
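Roughly, the idea looks like this (a simplified sketch with hypothetical
types, not the actual LibWeb structures): the cache lives on the layout
node, keyed by the available width, and marking an element for layout
clears the cached sizes up the ancestor chain:
```cpp
#include <functional>
#include <map>

// Hypothetical stand-ins for a layout node and its per-node cache.
struct IntrinsicSizesCache {
    // Keyed by the available width the measurement was computed against.
    std::map<double, double> min_content_height;
    void clear() { min_content_height.clear(); }
};

struct LayoutNode {
    LayoutNode* parent = nullptr;
    IntrinsicSizesCache cache;

    // Reuse a measurement across layout runs when it is already cached.
    double ensure_min_content_height(double width, std::function<double()> const& compute)
    {
        auto it = cache.min_content_height.find(width);
        if (it == cache.min_content_height.end())
            it = cache.min_content_height.emplace(width, compute()).first;
        return it->second;
    }
};

// Invalidation: once we know `node` needs a layout update, reset the cache on
// the node itself and on every ancestor, since their measurements may depend
// on this subtree.
inline void invalidate_intrinsic_sizes(LayoutNode& node)
{
    for (auto* ancestor = &node; ancestor; ancestor = ancestor->parent)
        ancestor->cache.clear();
}
```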
I measured the improvement on a couple of websites:
- Mail list on GMail 28ms -> 6ms
- GitHub large code page 47ms -> 36ms
- Discord chat history 15ms -> 8ms
(Time does not include `commit()`)
This allows us to avoid a full layout tree rebuild after a change to the
"display" property, which happens frequently in practice. It also
allows us to avoid a full rebuild after DOM node insertion, since
previously, computing styles for newly inserted nodes would trigger a
complete layout tree rebuild.
All necessary invalidations are issued while invalidating animated
style. There is no need to drop the display list simply because there
are some animations that might need an update.
We set the page's focused navigable upon mouse-down events from the UI.
However, we neglected to ever clear that focused navigable upon events
such as subsequent page navigations. This left the page with a stale
reference to a no-longer-active navigable. The effect was that any key
events from the UI would not be sent to the new page until either the
reference was collected by GC, or another mouse-down event occurred.
In the test added here, without this fix, the text sent to the input
element would not be received, and the change event would not fire.
This commit implements the main "render blocking" behavior for link
elements, drastically reducing the amount of FOUC (flash of unstyled
content) we subject our users to.
The document will now block rendering until linked style sheets
referenced by parser-created link elements have loaded (or failed).
Note that we don't yet extend the blocking period until "critical
subresources" such as imported style sheets have been downloaded
as well.
This change fixes a bug that can be reproduced with the following steps:
```js
const iframe = document.createElement("iframe");
document.body.appendChild(iframe);
iframe.contentWindow.location.href = "http://localhost:8080/demo.html";
```
These steps are executed in the following order:
1. Create iframe and schedule session history traversal task that adds
session history entry for the iframe.
2. Generate navigation id for scheduled navigation to
`http://localhost:8080/demo.html`.
3. Execute the scheduled session history traversal task, which adds
session history entry for the iframe.
4. Oops, the navigation to `http://localhost:8080/demo.html` is aborted,
because adding the SHE for the iframe resets the navigation id.
This change fixes that by delaying all navigations until the SHE for a
navigable has been created.
Previously, they would stay open for the entire WebContent lifetime,
or until the server closed the connection. This was particularly
noticeable on collaborative websites/games such as
https://jigsawpuzzles.io/, where the user using Ladybird would stick
around even after they had navigated away.
The spec changes seem to mostly be about introducing a TrustedHTML type
which we do not yet support, so we have a couple of FIXMEs.
TrustedTypes::InjectionSink is an attempt at matching the spec, but it's
not entirely clear to me how it should work. I'm sure it'll get
revisited once we start implementing trusted types.
Our own Inspector differs from most other DevTools implementations with
regard to highlighting DOM nodes as you hover elements in the inspected
DOM tree. In other implementations, as you change the hovered node, the
browser will render a box model overlay onto the page for that node. We
currently don't do this; we wait until you click the node, at which
point we both paint the overlay and inspect the node's properties.
This patch does not change that behavior, but separates the IPCs and
internal tracking of inspected nodes to support the standard DevTools
behavior. So the DOM document now stores an inspected node and a
highlighted node. The former is used for features such as "$0" in the
JavaScript console, and the latter is used for the box model overlay.
Our Inspector continues to set these to the same node.
This reduces the number of `.cpp` files that need to be recompiled when
one of the below header files changes as follows:
CSS/ComputedProperties.h: 1113 -> 49
CSS/ComputedValues.h: 1120 -> 209
Analysis of selectors on modern websites shows that the `:hover`
pseudo-class is mostly used in the subject position within relatively
simple selectors like `.a:hover`. This suggests that we can greatly
benefit from segregating them by id/class/tag name, thereby reducing
the number of selectors tested during hover style invalidation.
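As a rough sketch of the segregation (hypothetical buckets, not the real
LibWeb data structures): `:hover` rules get grouped by the id/class/tag
name that sits next to `:hover` in the subject compound, so only the
buckets relevant to the hovered element are tested:
```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical stand-in for a style rule whose subject compound uses :hover.
struct HoverRule {
    std::string selector_text;
};

// Buckets keyed by the id / class name / tag name that sits next to :hover in
// the subject compound, e.g. ".a:hover" goes into the class bucket under "a".
struct HoverRuleBuckets {
    std::unordered_map<std::string, std::vector<HoverRule>> by_id;
    std::unordered_map<std::string, std::vector<HoverRule>> by_class;
    std::unordered_map<std::string, std::vector<HoverRule>> by_tag_name;
    std::vector<HoverRule> other; // e.g. bare ":hover" or "*:hover"
};

// Hypothetical, minimal element description.
struct Element {
    std::string tag_name;
    std::string id;
    std::vector<std::string> classes;
};

// Gather only the rules that could possibly match `element`, instead of
// testing every :hover rule in the document during hover invalidation.
std::vector<HoverRule> candidate_hover_rules(HoverRuleBuckets const& buckets, Element const& element)
{
    std::vector<HoverRule> candidates = buckets.other;
    auto append_bucket = [&](auto const& map, std::string const& key) {
        if (auto it = map.find(key); it != map.end())
            candidates.insert(candidates.end(), it->second.begin(), it->second.end());
    };
    append_bucket(buckets.by_id, element.id);
    append_bucket(buckets.by_tag_name, element.tag_name);
    for (auto const& class_name : element.classes)
        append_bucket(buckets.by_class, class_name);
    return candidates;
}
```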
With this change, hover invalidation on Discord goes down from 70ms to
3ms on my machine. I also tested GMail and GitHub, where this change
shows a nice 2x-3x speedup.
This is required to store Content Security Policies, as their
Directives are implemented as subclasses with overridden virtual
functions. Thus, they cannot be stored as plain Directive values:
copying them would slice off the subclass and lose the ability to call
the overridden functions.
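The underlying issue is plain C++ object slicing; a minimal
illustration (simplified, hypothetical directive types):
```cpp
#include <memory>
#include <string>
#include <vector>

struct Directive {
    virtual ~Directive() = default;
    virtual std::string name() const { return "directive"; }
};

struct ScriptSrcDirective : Directive {
    std::string name() const override { return "script-src"; }
};

int main()
{
    // Sliced: copying into a Directive value keeps only the base-class part.
    std::vector<Directive> sliced;
    sliced.push_back(ScriptSrcDirective {});
    // sliced[0].name() == "directive"

    // Preserved: holding directives behind pointers (owning smart pointers
    // here) keeps the subclass and its overridden virtual functions intact.
    std::vector<std::unique_ptr<Directive>> preserved;
    preserved.push_back(std::make_unique<ScriptSrcDirective>());
    // preserved[0]->name() == "script-src"
}
```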
If a selector does not have any descendant combinators, then we know
for sure it won't be rejected by the ancestor filter, which means there
is no need to check it.
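Roughly (a simplified sketch with hypothetical types, not the real
SelectorEngine code), the fast path scans the selector's combinators
once and skips the ancestor-filter probe when no descendant combinator
is present:
```cpp
#include <vector>

// Hypothetical, simplified selector representation.
enum class Combinator {
    None,
    Descendant,        // "a b"
    ImmediateChild,    // "a > b"
    NextSibling,       // "a + b"
    SubsequentSibling, // "a ~ b"
};

struct CompoundSelector {
    Combinator combinator { Combinator::None };
};

struct Selector {
    std::vector<CompoundSelector> compound_selectors;

    bool contains_descendant_combinator() const
    {
        for (auto const& compound : compound_selectors)
            if (compound.combinator == Combinator::Descendant)
                return true;
        return false;
    }
};

// The ancestor filter can only reject selectors that require something up the
// ancestor chain to match, i.e. ones with a descendant combinator. For any
// other selector, probing the filter is wasted work.
bool should_reject_with_ancestor_filter(Selector const& selector /*, AncestorFilter const& filter */)
{
    if (!selector.contains_descendant_combinator())
        return false;
    // ...otherwise, probe the ancestor filter as before.
    return false;
}
```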
This change makes hover style invalidation faster on Discord, where we
now spend 4-5% of the time in `should_reject_with_ancestor_filter()`
instead of 20%.
Before this change, we did the following:
1. Created a bitmap with the matching state for each rule containing
`:hover`.
2. Changed the actively hovered element in the document.
3. Created another bitmap with the matching state for each rule
containing `:hover`.
With this change, we iterate rule by rule and compare the matching
state. This allows us to break early once we find the first rule whose
matching state changes after the hovered element update. Additionally,
this removes the need to allocate a bitmap.
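In rough pseudocode (hypothetical types and a stand-in matching
callback, not the real implementation), the new approach looks like
this; the point is that it stops at the first rule whose matching state
differs and never allocates the two bitmaps:
```cpp
#include <functional>
#include <vector>

// Hypothetical stand-ins; `rule_matches` represents the real selector match.
struct HoverRule { };
struct Element { };

// New approach: walk the :hover rules once, evaluating each against the old
// and the new hovered element, and stop at the first rule whose matching
// state differs.
bool hover_matching_state_changed(std::vector<HoverRule> const& hover_rules,
    Element const* old_hovered_element, Element const* new_hovered_element,
    std::function<bool(HoverRule const&, Element const*)> const& rule_matches)
{
    for (auto const& rule : hover_rules) {
        bool matched_before = rule_matches(rule, old_hovered_element);
        bool matches_now = rule_matches(rule, new_hovered_element);
        if (matched_before != matches_now)
            return true; // First difference found; invalidate and stop early.
    }
    return false; // No rule changed its matching state; nothing to invalidate.
}
```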
Instead of using `has_pseudo_elements()`, which iterates over all
pseudo-elements, only check whether `::before` or `::after` is present.
Before this change, `has_pseudo_elements()` accounted for 10% of
profiles on Discord; now it's 1-2%.
MediaQueryList will now remember if a state change occurred when
evaluating its match state. This memory can then be used by the document
later on when it's updating all queries, to ensure that we don't forget
to fire at least one change event.
This also required plumbing the system visibility state to initial
about:blank documents, since otherwise they would be stuck in "hidden"
state indefinitely and never evaluate their media queries.
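A rough sketch of the mechanism (hypothetical, simplified member names):
evaluating a query records whether its match state flipped, and the
document's pass over all queries consumes that flag, so a change event
is fired even if the state change was observed during an earlier
evaluation:
```cpp
#include <functional>
#include <utility>

// Hypothetical, simplified stand-in for MediaQueryList.
class MediaQueryList {
public:
    explicit MediaQueryList(std::function<bool()> evaluate_match)
        : m_evaluate_match(std::move(evaluate_match))
    {
    }

    // Re-evaluate the match state; remember that it changed, even if this
    // evaluation happens outside the document-wide update.
    void evaluate()
    {
        bool now_matches = m_evaluate_match();
        if (now_matches != m_matches)
            m_state_changed_since_last_report = true;
        m_matches = now_matches;
    }

    // Called when the document updates all queries: fire a change event if
    // any evaluation since the last report observed a state change.
    bool take_pending_change()
    {
        return std::exchange(m_state_changed_since_last_report, false);
    }

    bool matches() const { return m_matches; }

private:
    std::function<bool()> m_evaluate_match;
    bool m_matches { false };
    bool m_state_changed_since_last_report { false };
};
```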
Instead of checking all elements in a document for containment in the
`:has()` invalidation set, we can narrow this down to ancestors and
ancestor siblings, like we already do for subject `:has()` invalidation.
This change brings a great improvement on GitHub, which has selectors
with non-subject `:has()` and sibling combinators (e.g.,
`.a:has(.b) ~ .c`) that prior to this change meant style invalidation
for the whole document.
This commit changes the strategy for updating inherited styles. Instead
of marking all potentially affected nodes during style invalidation, the
decision is now made on-the-fly during style recalculation. Child nodes
will only have their inherited styles recalculated if their parent's
properties have changed.
On Discord, this reduces the number of nodes with recalculated
inherited style by ~1000x.
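As a rough sketch of the new strategy (heavily simplified, hypothetical
style and node types): during style recalculation we compare the
parent's old and new inherited properties and only descend into the
children when something actually changed:
```cpp
#include <string>
#include <utility>
#include <vector>

// Hypothetical, heavily simplified style and node types.
struct ComputedStyle {
    std::string color { "black" };       // example inherited property
    std::string font_family { "serif" }; // example inherited property
    int border_width { 0 };              // example non-inherited property

    bool inherited_properties_differ_from(ComputedStyle const& other) const
    {
        return color != other.color || font_family != other.font_family;
    }
};

struct Node {
    ComputedStyle style;
    std::vector<Node*> children;
};

// Decide on the fly, during style recalculation, whether children need their
// inherited style refreshed: only descend when the parent's inherited
// properties actually changed.
void apply_new_style(Node& node, ComputedStyle new_style)
{
    bool inherited_changed = new_style.inherited_properties_differ_from(node.style);
    node.style = std::move(new_style);
    if (!inherited_changed)
        return; // Children's inherited values are still valid; skip the subtree.

    for (auto* child : node.children) {
        ComputedStyle recalculated = child->style;
        // Simplified: re-inherit from the parent, ignoring values the child
        // may have specified itself.
        recalculated.color = node.style.color;
        recalculated.font_family = node.style.font_family;
        apply_new_style(*child, std::move(recalculated));
    }
}
```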
`invalidate_style()` already tries to avoid scheduling invalidation for
`:has()` by checking the result of `may_have_has_selectors()`, but it
might still result in unnecessary work, because
`may_have_has_selectors()` does not force a build of the rules cache.
This change adds `have_has_selectors()`, which forces a build of the
rules cache and is invoked in `update_style()` to double-check whether
we actually need to process the scheduled `:has()` invalidations.
This allows us to skip ~100000 ancestor traversals on this WPT test:
https://wpt.live/html/select/options-length-too-large.html
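A rough sketch of the split (hypothetical method bodies, not the actual
LibWeb code): `may_have_has_selectors()` stays cheap and conservative
when the rules cache hasn't been built yet, while `have_has_selectors()`
builds the cache so `update_style()` gets a definitive answer before
processing the scheduled invalidations:
```cpp
// Hypothetical, simplified sketch of the two queries.
class StyleScope {
public:
    // Cheap and conservative: without a built rules cache we must assume
    // :has() might be present, so invalidate_style() still schedules the
    // invalidation.
    bool may_have_has_selectors() const
    {
        if (!m_rules_cache_built)
            return true;
        return m_has_has_selectors;
    }

    // Definitive: force the rules cache to be built, then answer for real.
    // update_style() uses this to decide whether the scheduled :has()
    // invalidations actually need to be processed.
    bool have_has_selectors()
    {
        build_rules_cache_if_needed();
        return m_has_has_selectors;
    }

private:
    void build_rules_cache_if_needed()
    {
        if (m_rules_cache_built)
            return;
        // ...walk all style sheets and record whether any selector uses :has().
        m_has_has_selectors = false; // stand-in result
        m_rules_cache_built = true;
    }

    bool m_rules_cache_built { false };
    bool m_has_has_selectors { false };
};
```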
The current implementation of `:has()` style invalidation is divided
into two cases:
- When used in subject position (e.g., `.a:has(.b)`).
- When in a non-subject position (e.g., `.a > .b:has(.c)`).
This change focuses on improving the first case. For non-subject usage,
we still perform a full tree traversal and invalidate all elements
affected by the `:has()` pseudo-class invalidation set.
We already optimize subject `:has()` invalidations by limiting the
invalidated elements to ones that were tested against `:has()` selectors
during selector matching. However, selectors like `div:has(.a)`
currently cause every div element in the document to be invalidated.
By modifying the invalidation traversal to consider only ancestor nodes
(and, optionally, their siblings), we can drastically reduce the number
of invalidated elements for broad selectors like the example above.
On Discord, when scrolling through message history, this change
reduces the number of invalidated elements from ~1k to ~5.
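A minimal sketch of the changed traversal (hypothetical node fields):
instead of scanning the whole document, only the changed element's
ancestors (and, when needed, their siblings) are considered, and of
those only the ones actually tested against subject `:has()` selectors
get invalidated:
```cpp
// Hypothetical, simplified DOM node.
struct Element {
    Element* parent = nullptr;
    Element* previous_sibling = nullptr;
    Element* next_sibling = nullptr;
    bool tested_against_subject_has = false; // recorded during selector matching
    bool needs_style_update = false;
};

// :has() in subject position only looks downwards into a subtree, so after a
// change inside `changed_element`, only its ancestors (and, when sibling
// combinators are involved, their siblings) can change their match result.
void invalidate_subject_has(Element& changed_element, bool include_ancestor_siblings)
{
    auto invalidate_if_needed = [](Element& element) {
        if (element.tested_against_subject_has)
            element.needs_style_update = true;
    };

    for (auto* ancestor = changed_element.parent; ancestor; ancestor = ancestor->parent) {
        invalidate_if_needed(*ancestor);
        if (!include_ancestor_siblings)
            continue;
        for (auto* sibling = ancestor->previous_sibling; sibling; sibling = sibling->previous_sibling)
            invalidate_if_needed(*sibling);
        for (auto* sibling = ancestor->next_sibling; sibling; sibling = sibling->next_sibling)
            invalidate_if_needed(*sibling);
    }
}
```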
Before this change, tasks associated with a destroyed document would get
stuck in the task queue forever, since document-associated tasks are not
allowed to run when their document isn't fully active (and destroyed
documents never become fully active again). This caused everything
captured by task callbacks to leak.
We now treat tasks for destroyed documents as runnable immediately,
which gets them out of the queue.
This fixes another massive GC leak on Speedometer.
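A rough sketch of the runnability check (hypothetical task/document
fields): a document-associated task is runnable when its document is
fully active, and now also when the document has been destroyed, so the
queue can pop and discard it instead of keeping it (and everything its
callback captures) alive forever:
```cpp
// Hypothetical, simplified stand-ins for the task queue's runnability check.
struct Document {
    bool is_fully_active = false;
    bool has_been_destroyed = false;
};

struct Task {
    Document* document = nullptr; // null for tasks not tied to a document
};

// A document-associated task may normally only run while its document is
// fully active. Treating tasks of destroyed documents as runnable lets the
// queue pop them (their callbacks are simply dropped), so they no longer pin
// everything they capture.
bool is_runnable(Task const& task)
{
    if (!task.document)
        return true;
    if (task.document->has_been_destroyed)
        return true;
    return task.document->is_fully_active;
}
```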
This is not really a context, but more of a set of parameters for
creating a Parser. So, treat it as such: Rename it to ParsingParams,
and store its values and methods directly in the Parser instead of
keeping the ParsingContext around.
This has a nice side-effect of not including DOM/Document.h everywhere
that needs a Parser.
This ad-hoc code informs the client of a potentially changed page
title. But because we always update the title element (either the SVG
or HTML title), the client is already informed that way, causing the
notification to happen twice.
Before this change, checking whether fast selector matching could be
used was only enabled in style recalculation and hover invalidation.
With this change, it's enabled by default for all callers of
SelectorEngine::matches(), so APIs like `Element.matches()` and
`querySelector()` can take advantage of this optimization as well.
This is "update document for history step application" but that's too
long for the commit title. :^)
No code changes, just adding more FIXME comments for the new steps.
(And indented step 7's substeps for clarity.)
Corresponds to https://github.com/whatwg/html/pull/10910