The upcoming generated types will match those for pseudo-classes: a
PseudoElementSelector type that holds a PseudoElement enum defining
what it is. That enum will be at the top level in the Web::CSS
namespace.
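A rough sketch of the intended shape (the enum members and class
internals here are assumptions; only the type names and namespace come
from the description above):

```cpp
namespace Web::CSS {

// The enum lives at the top level of Web::CSS rather than nested inside
// the selector type, and will eventually be generated.
enum class PseudoElement {
    Before,
    After,
    FirstLine,
    FirstLetter,
    // ...
};

// The selector type just records which pseudo-element it refers to.
class PseudoElementSelector {
public:
    explicit PseudoElementSelector(PseudoElement type)
        : m_type(type)
    {
    }

    PseudoElement type() const { return m_type; }

private:
    PseudoElement m_type;
};

}
```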
In order to keep the diffs clearer, this commit renames and moves the
types, and then a following one will replace the handwritten enum with
a generated one.
We previously had PropertyOwningCSSStyleDeclaration and
ResolvedCSSStyleDeclaration, representing the current style properties
and resolved style respectively. Both of these were the
CSSStyleDeclaration type in the CSSOM. (We also had
ElementInlineCSSStyleDeclaration but I removed that in a previous
commit.)
In the meantime, the spec has changed so that these should now be a new
CSSStyleProperties type in the CSSOM. Also, we need to subclass
CSSStyleDeclaration for things like CSSFontFaceRule's list of
descriptors, which means it wouldn't hold style properties.
So, this commit does the fairly messy work of combining these two types
into a new CSSStyleProperties class. A lot of what was previously done
as separate methods in the two classes now follows the spec steps of
"if the readonly flag is set, do X" instead, which is hopefully easier
to follow too.
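A minimal sketch of that pattern, with hypothetical member and error
types standing in for the real LibWeb ones:

```cpp
#include <optional>
#include <string>

// Sketch only: names here are assumptions, not the actual LibWeb API.
struct DOMException {
    std::string name;
    std::string message;
};

class CSSStyleProperties {
public:
    std::optional<DOMException> set_property(std::string const& property, std::string const& value)
    {
        // Spec-style step: if the readonly flag is set, throw a
        // NoModificationAllowedError exception.
        if (m_readonly)
            return DOMException { "NoModificationAllowedError", "Cannot modify a readonly CSSStyleProperties" };

        // ... otherwise continue with the mutable path: parse the value,
        // update the declaration block, and so on ...
        (void)property;
        (void)value;
        return {};
    }

private:
    bool m_readonly { false };
};
```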
There is still some functionality in CSSStyleDeclaration that belongs in
CSSStyleProperties, but I'll do that next. To avoid a huge diff for
"CSSStyleDeclaration-all-supported-properties-and-default-values.txt"
both here and in the following commit, we don't apply the (currently
empty) CSSStyleProperties prototype yet.
A font-size with rem units needs to resolve against the default font
metrics for the root element, otherwise every time we compute style,
the reference value for rem units grows.
This fixes an issue where text on some web pages would grow every time
there was a relayout. This was very noticeable on https://proton.me/
Fixes #339
We already have logic to play or cancel animations in an element's
subtree when the display property changes to or from none. However,
this was not sufficient to cover the case where an element starts or
stops being nested inside `display: none` after insertion.
When setting `font-family: monospace;` in CSS, we have to interpret
the keyword font sizes (small, medium, large, etc) as slightly smaller
for historical reasons. Normally the medium font size is 16px, but
for monospace it's 13px.
The way this needs to behave is extremely strange:
When encountering `font-family: monospace`, we have to go back and
replay the CSS cascade as if the medium font size had been 13px all
along. Otherwise relative values like 2em/200%/etc could have gotten
lost in the inheritance chain.
We implement this in a fairly naive way by explicitly checking for
`font-family: monospace` (note: it has to be *exactly* like that,
it can't be `font-family: monospace, Courier` or similar.)
When encountered, we simply walk the element ancestors and re-run the
cascade for the font-size property. This is clumsy and inefficient,
but it does work for the common cases.
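A self-contained sketch of the replay; all types and helpers below are
simplified stand-ins, not the real StyleComputer code:

```cpp
#include <string>
#include <vector>

struct Element {
    Element* parent { nullptr };
    std::string specified_font_family; // e.g. "monospace" or "monospace, Courier"
    std::string specified_font_size;   // e.g. "medium", "2em", "200%"
    double computed_font_size_px { 16 };
};

// Crude stand-in for re-resolving one element's font-size against a
// given "medium" value.
static double compute_font_size(Element const& element, double medium_px)
{
    double parent_px = element.parent ? element.parent->computed_font_size_px : medium_px;
    if (element.specified_font_size == "medium")
        return medium_px;
    if (element.specified_font_size.ends_with("em"))
        return std::stod(element.specified_font_size) * parent_px;
    if (element.specified_font_size.ends_with("%"))
        return std::stod(element.specified_font_size) / 100.0 * parent_px;
    return element.computed_font_size_px; // absolute values don't depend on medium
}

void maybe_replay_font_size_cascade(Element& element)
{
    // Only the exact value `monospace` triggers the legacy 13px behavior.
    if (element.specified_font_family != "monospace")
        return;

    // Walk the inclusive ancestors, then re-run the font-size cascade
    // root-first as if medium had been 13px all along, so relative values
    // (2em, 200%, ...) are computed against the right base.
    std::vector<Element*> chain;
    for (auto* ancestor = &element; ancestor; ancestor = ancestor->parent)
        chain.push_back(ancestor);
    for (auto it = chain.rbegin(); it != chain.rend(); ++it)
        (*it)->computed_font_size_px = compute_font_size(**it, /* medium */ 13.0);
}
```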
Other browsers do more elaborate things that we should eventually care
about as well, such as user-configurable font settings, per-language
behavior, etc. For now, this is just something that allows us to handle
more WPT tests where things fall apart due to unexpected font sizes.
To learn more about the wonders of font-size, see this blog post:
https://manishearth.github.io/blog/2017/08/10/font-size-an-unexpectedly-complex-css-property/
This reduces the number of `.cpp` files that need to be recompiled when
one of the below header files changes as follows:
CSS/ComputedProperties.h: 1113 -> 49
CSS/ComputedValues.h: 1120 -> 209
Analysis of selectors on modern websites shows that the `:hover`
pseudo-class is mostly used in the subject position within relatively
simple selectors like `.a:hover`. This suggests that we could greatly
benefit from segregating them by id/class/tag name, thereby reducing
the number of selectors tested during hover style invalidation.
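A self-contained sketch of the bucketing, using simplified stand-in
types rather than LibWeb's real Selector and rule cache:

```cpp
#include <string>
#include <unordered_map>
#include <vector>

struct Selector { std::string text; };
struct Element {
    std::string tag_name;
    std::string id;
    std::vector<std::string> classes;
};

// Each :hover selector is filed under the id, class, or tag name of its
// subject compound when the cache is built; `.a:hover` goes into
// by_class["a"]. Hover invalidation then only tests the buckets that can
// possibly match the element at hand.
struct HoverSelectorBuckets {
    std::unordered_map<std::string, std::vector<Selector const*>> by_id;
    std::unordered_map<std::string, std::vector<Selector const*>> by_class;
    std::unordered_map<std::string, std::vector<Selector const*>> by_tag_name;
    std::vector<Selector const*> uncategorized; // everything we couldn't bucket

    std::vector<Selector const*> candidates_for(Element const& element) const
    {
        auto candidates = uncategorized;
        auto append = [&](auto const& map, std::string const& key) {
            if (auto it = map.find(key); it != map.end())
                candidates.insert(candidates.end(), it->second.begin(), it->second.end());
        };
        append(by_id, element.id);
        for (auto const& class_name : element.classes)
            append(by_class, class_name);
        append(by_tag_name, element.tag_name);
        return candidates;
    }
};
```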
With this change, hover invalidation on Discord goes down from 70ms to
3ms on my machine. I also tested GMail and GitHub, where this change
shows a nice 2x-3x speedup.
If a selector does not have any descendant combinators, then we know
for sure it won't be filtered out by the ancestor filter, which means
there is no need to check it.
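Roughly, with stand-in selector types:

```cpp
#include <vector>

enum class Combinator { None, Descendant, Child, NextSibling, SubsequentSibling };

struct CompoundSelector { Combinator combinator { Combinator::None }; };
struct Selector { std::vector<CompoundSelector> compound_selectors; };

// A compound-only selector like `.a:hover` places no requirements on the
// element's ancestors, so the ancestor filter can never reject it and we
// skip the check entirely.
bool worth_checking_ancestor_filter(Selector const& selector)
{
    for (auto const& compound : selector.compound_selectors)
        if (compound.combinator == Combinator::Descendant)
            return true;
    return false;
}
```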
This change makes hover style invalidation go faster on Discord, where
we now spend 4-5% of the time in `should_reject_with_ancestor_filter()`
instead of 20%.
We only need a Page for file:// URLs. At some point we probably
needed it for other kinds of requests, but the current functionality
doesn't need to store the Page pointer on the ResourceLoader.
`invalidate_style()` already tries to avoid scheduling invalidation for
`:has()` by checking the result of `may_have_has_selectors()`, but it
might still result in unnecessary work because `may_have_has_selectors()`
does not force building of the rule cache. This change adds
`have_has_selectors()`, which forces building of the rule cache and is
invoked in `update_style()` to double-check whether we actually need to
process scheduled `:has()` invalidations.
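In sketch form, with assumed class and member names, the cheap check
used when scheduling versus the exact check used from `update_style()`:

```cpp
class StyleInvalidator {
public:
    // Cheap: never builds the rule cache, so it errs on the side of "maybe".
    bool may_have_has_selectors() const
    {
        if (!m_rule_cache_valid)
            return true; // unknown; assume yes and schedule the invalidation
        return m_has_has_selectors;
    }

    // Exact: forces the rule cache to be (re)built, then answers precisely.
    // Called from update_style() before processing scheduled :has() work.
    bool have_has_selectors()
    {
        if (!m_rule_cache_valid)
            build_rule_cache();
        return m_has_has_selectors;
    }

private:
    void build_rule_cache()
    {
        // ... scan all style rules and record whether any of them use :has() ...
        m_rule_cache_valid = true;
    }

    bool m_rule_cache_valid { false };
    bool m_has_has_selectors { false };
};
```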
This allows us to skip ~100,000 ancestor traversals on this WPT test:
https://wpt.live/html/select/options-length-too-large.html
This is not really a context, but more of a set of parameters for
creating a Parser. So, treat it as such: Rename it to ParsingParams,
and store its values and methods directly in the Parser instead of
keeping the ParsingContext around.
This has a nice side-effect of not including DOM/Document.h everywhere
that needs a Parser.
At computed-value time, this is converted to whatever the parent's
computed value is. So it behaves a little like `inherit`, except that
an inherited start/end value uses the parent's start/end, which might
be different from the child's.
Before this change, the check for whether fast selector matching can be
used was only performed in style recalculation and hover invalidation.
With this change, it's enabled for all callers of SelectorEngine::matches()
by default. This way, APIs like `Element.matches()` and `querySelector()`
can take advantage of this optimization.
Initially I added this to the existing CalculationContext, but in
reality, we have some data at parse-time and different data at
resolve-time, so it made more sense to keep those separate.
Instead of needing a variety of methods for resolving a Foo, depending
on whether we have a Layout::Node available, or a percentage basis, or
a length resolution context... put those in a
CalculationResolutionContext, and just pass that one thing to these
methods. This also removes the need for separate resolve_*_percentage()
methods, because we can just pass the percentage basis into the
regular resolve_foo() method.
This also corrects the issue that *any* calculation may need to resolve
lengths, but we previously only passed a length resolution context to
specific types in some situations. Now, they can all have one available,
though it's up to the caller to provide it.
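A sketch of the resulting shape, with stand-in types and assumed member
names in place of the real LibWeb ones:

```cpp
#include <optional>

// Stand-ins for this sketch.
struct LayoutNode;
struct LengthResolutionContext {
    double font_size_px;
    double root_font_size_px;
    double viewport_width_px;
    double viewport_height_px;
};

// Everything a calculation might need in order to resolve, gathered in one
// place. Callers fill in whatever they have; resolution fails gracefully if
// a needed piece is missing.
struct CalculationResolutionContext {
    std::optional<double> percentage_basis;                           // what 100% means here, if known
    std::optional<LengthResolutionContext> length_resolution_context; // for resolving em/rem/vw/...
    LayoutNode const* layout_node { nullptr };                        // if layout information is available
};

struct LengthCalculation {
    double percentage { 0 };
    double pixels { 0 };

    // One resolve method instead of resolve_length(), resolve_length_percentage(),
    // resolve_length(Layout::Node const&), and so on.
    std::optional<double> resolve_px(CalculationResolutionContext const& context) const
    {
        if (percentage != 0 && !context.percentage_basis.has_value())
            return std::nullopt; // can't resolve a percentage without a basis
        double result = pixels;
        if (context.percentage_basis.has_value())
            result += percentage / 100.0 * *context.percentage_basis;
        return result;
    }
};
```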
Prior to this change, we invalidated all elements in the document if it
used any selectors with :has(). This change aims to improve that by
applying a combination of techniques:
- Collect metadata for each element if it was matched against a selector
with :has() in the subject position. This is needed to invalidate all
elements that could be affected by selectors like `div:has(.a:empty)`
because they are not covered by the invalidation sets.
- Use invalidation sets to invalidate elements that are affected by
selectors with :has() in a non-subject position.
Selectors like `.a:has(.b) + .c` still cause whole-document invalidation
because invalidation sets cover only descendants, not siblings. As a
result, there is no performance improvement on github.com due to this
limitation. However, youtube.com and discord.com benefit from this
change.
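A simplified sketch of how the two techniques combine (all names are
stand-ins; the real logic lives in the style invalidation code):

```cpp
struct Element {
    Element* parent { nullptr };
    // Set during the last style computation if this element was matched
    // against a selector with :has() in the subject position.
    bool affected_by_has_in_subject_position { false };
    bool needs_style_update { false };
};

// Stand-in for the regular invalidation-set path, which covers selectors
// with :has() in a non-subject position (descendants only).
static void invalidate_with_invalidation_sets(Element&) { /* elided in this sketch */ }

void invalidate_for_has(Element& changed_element)
{
    // (1) Subject-position :has(), e.g. `div:has(.a:empty)`: walk the
    // ancestors and invalidate anything flagged during the last match,
    // since invalidation sets can't express this dependency.
    for (auto* ancestor = changed_element.parent; ancestor; ancestor = ancestor->parent)
        if (ancestor->affected_by_has_in_subject_position)
            ancestor->needs_style_update = true;

    // (2) Non-subject :has() is covered by invalidation sets. Sibling
    // combinators after :has() (like `.a:has(.b) + .c`) are not, and those
    // still fall back to whole-document invalidation.
    invalidate_with_invalidation_sets(changed_element);
}
```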
This means we only need to consider rules from the document and the
current shadow root, instead of the document and *every* shadow root.
Dramatically reduces the amount of rules processed on many pages.
Shaves 2.5 seconds of load time off of https://wpt.fyi/ :^)
Instead of creating and passing around Vector<MatchingRule> inside
StyleComputer (internally, not exposed in the API), we now use vectors
of pointers/references.
Note that we use pointers (rather than references) in case we want to
quick_sort() the vectors.
Knocks 4 seconds of loading time off of https://wpt.fyi/
Instead, change the APIs from "has :foo" to "may have :foo" and return
true if we don't have a valid rule cache at the moment.
This allows us to defer the rebuilding of the rule cache until a later
time, at the cost of a wider invalidation in the meantime.
Do note that if our rule cache is invalid, the whole document has
invalid style anyway! So this is actually always less work. :^)
Knocks ~1 second of loading time off of https://wpt.fyi/
Implements the idea described in
https://docs.google.com/document/d/1vEW86DaeVs4uQzNFI5R-_xS9TcS1Cs_EUsHRSgCHGu8
Invalidation sets are used to reduce the number of elements marked for
style recalculation by collecting metadata from style rules about the
dependencies between properties that could affect an element’s style.
Currently, this optimization is only applied to style invalidation
triggered by class list mutations on an element.
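As a (simplified) illustration of the kind of metadata this collects,
assuming a stand-in representation keyed on class names:

```cpp
#include <string>
#include <unordered_map>
#include <unordered_set>

// For a rule like `.a .b { ... }`, the invalidation set keyed on class "a"
// records that only descendants with class "b" can be affected when an
// element gains or loses class "a". A rule like `.a { ... }` marks the set
// as "invalidates self".
struct InvalidationSet {
    bool invalidates_self { false };
    std::unordered_set<std::string> descendant_classes;
};

using InvalidationSetsByClass = std::unordered_map<std::string, InvalidationSet>;

// Built while indexing style rules; consulted when a class list mutates.
InvalidationSetsByClass build_example_sets()
{
    InvalidationSetsByClass sets;
    sets["a"].invalidates_self = true;        // from `.a { ... }`
    sets["a"].descendant_classes.insert("b"); // from `.a .b { ... }`
    return sets;
}
```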
Previously, we optimized hover style invalidation to mark for style
updates only those elements that were matched by :hover selectors in the
last style calculation.
This change takes it a step further by invalidating only those elements
for which the set of matching :hover selectors changes when the hovered
element changes. The implementation is as follows:
1. Collect all elements whose styles might be affected by a change in
the hovered element.
2. Retrieve a list of all selectors that use :hover.
3. Test each selector against each element and record which selectors
match.
4. Update m_hovered_node to the newly hovered element.
5. Repeat step 3.
6. For each element, compare the previous and current sets of matched
selectors. If they differ, mark the element for style recalculation.
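A self-contained sketch of steps 1 through 6 (stand-in types; the
matching function is a stub standing in for SelectorEngine::matches()):

```cpp
#include <functional>
#include <set>
#include <unordered_map>
#include <vector>

struct Selector { };
struct Element { bool needs_style_update { false }; };

// Stub for SelectorEngine::matches() in this sketch.
static bool selector_matches(Selector const&, Element const&) { return false; }

using MatchedSelectors = std::set<Selector const*>;

static MatchedSelectors matched_hover_selectors(Element const& element, std::vector<Selector const*> const& hover_selectors)
{
    MatchedSelectors matched;
    for (auto const* selector : hover_selectors)
        if (selector_matches(*selector, element))
            matched.insert(selector);
    return matched;
}

void update_hover_invalidation(std::vector<Element*> const& affected_elements,
                               std::vector<Selector const*> const& hover_selectors,
                               std::function<void()> const& set_hovered_node)
{
    // Steps 1-3: record which :hover selectors matched each affected element.
    std::unordered_map<Element*, MatchedSelectors> matched_before;
    for (auto* element : affected_elements)
        matched_before[element] = matched_hover_selectors(*element, hover_selectors);

    // Step 4: m_hovered_node = newly hovered element.
    set_hovered_node();

    // Steps 5-6: re-test, and invalidate only where the matched set changed.
    for (auto* element : affected_elements)
        if (matched_hover_selectors(*element, hover_selectors) != matched_before[element])
            element->needs_style_update = true;
}
```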
Instead of recalculating styles for all nodes in the common ancestor of
the new and old hovered nodes' subtrees, this change introduces the
following approach:
- While calculating ComputedProperties, a flag is saved if any rule
applied to an element is affected by the hover state during the
execution of SelectorEngine::matches().
- When the hovered element changes, styles are marked for recalculation
only if the flag saved in ComputedProperties indicates that the
element could be affected by the hover state.
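A minimal sketch of the flag-based filtering (type and member names are
assumptions):

```cpp
#include <vector>

struct ComputedProperties {
    // Set while computing style if SelectorEngine::matches() saw any rule
    // whose outcome depended on the hover state.
    bool affected_by_hover { false };
};

struct Element {
    ComputedProperties computed_properties;
    bool needs_style_update { false };
};

// When the hovered element changes, only elements whose last style
// computation touched :hover need to be recalculated.
void invalidate_after_hover_change(std::vector<Element*> const& candidates)
{
    for (auto* element : candidates)
        if (element->computed_properties.affected_by_hover)
            element->needs_style_update = true;
}
```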
Some websites (Reddit, for example) create lots of temporary documents
for fragment parsing. Before this change, we had to build a rule cache
for these documents just to determine whether there are :has, :defined,
or attribute selectors, when it should be safe to simply return `false`
right away.
This fixes a bug where, if a non-existent font family is specified in
CSS, whitespace would be rendered using the emoji font, while letters
would use the default font. This issue occurred because the font was
resolved separately for each code point. Since the emoji font was listed
before the default font, it was chosen for whitespace characters due to
its inclusion of whitespace glyphs (at least in the Apple Color Emoji
font on macOS). This change resolves the issue by placing the default
font before the emoji font in the list.
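A tiny sketch of the corrected ordering (stand-in types and function
name; the real list is built elsewhere in the font-matching code):

```cpp
#include <vector>

struct Font;

// Build the fallback list consulted per code point. The default font now
// comes before the emoji font, so glyphs both fonts can supply (such as
// whitespace) resolve to the default font rather than the emoji font.
std::vector<Font const*> build_fallback_list(Font const* matched_family, Font const* default_font, Font const* emoji_font)
{
    std::vector<Font const*> fallback_list;
    if (matched_family)
        fallback_list.push_back(matched_family); // null if the family doesn't exist
    fallback_list.push_back(default_font);       // before the emoji font (the fix)
    fallback_list.push_back(emoji_font);
    return fallback_list;
}
```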
If there are no :defined pseudo-class selectors anywhere in the
document, we don't have to invalidate style at all when an element's
custom element state changes.
Many times, attribute mutation doesn't necessitate a full style
invalidation on the element. However, the conditions are pretty
elaborate, so this first version has a lot of false positives.
We only need to invalidate style when any of these things apply:
1. The change may affect the match state of a selector somewhere.
2. The change may affect presentational hints applied to the element.
For (1) in this first version, we have a fixed list of attribute names
that may affect selectors. We also collect all names referenced by
attribute selectors anywhere in the document.
For (2), we add a new Element::is_presentational_hint() virtual that
tells us whether a given attribute name is a presentational hint.
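A small sketch of the combined check (names and the fixed list below are
illustrative assumptions):

```cpp
#include <string>
#include <unordered_set>

struct Element {
    virtual ~Element() = default;
    // (2): does this attribute map to a presentational hint on this element type?
    virtual bool is_presentational_hint(std::string const& attribute_name) const
    {
        (void)attribute_name;
        return false;
    }
};

bool attribute_change_requires_style_invalidation(
    Element const& element,
    std::string const& attribute_name,
    std::unordered_set<std::string> const& names_used_in_attribute_selectors)
{
    // (1) The change may affect the match state of a selector somewhere:
    // either via the fixed list of selector-affecting attribute names...
    static const std::unordered_set<std::string> s_selector_affecting_names {
        "id", "class", "disabled", "checked" // illustrative, not the real list
    };
    if (s_selector_affecting_names.contains(attribute_name))
        return true;
    // ...or because an attribute selector somewhere in the document names it.
    if (names_used_in_attribute_selectors.contains(attribute_name))
        return true;

    // (2) The change may affect presentational hints applied to the element.
    if (element.is_presentational_hint(attribute_name))
        return true;

    return false;
}
```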
This drastically reduces style work on many websites. As an example,
https://cnn.com/ is once again browseable.