If a calculation is simplified down to a single numeric node, then most
of the time we can instead return a regular StyleValue; for example,
`calc(2px + 3px)` would be simplified down to a `5px` LengthStyleValue.
This means that parse_calculated_value() can't return a
CalculatedStyleValue directly, and its callers all have to handle
non-calculated values as well as calculated ones.
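Roughly, the shape is something like this (the names here are my own
placeholders, not the real signatures):

```cpp
// Hypothetical sketch, not the actual parser code.
RefPtr<StyleValue const> Parser::parse_calculated_value(ComponentValue const& component_value)
{
    auto calculation_tree = parse_a_calculation(component_value);
    if (!calculation_tree)
        return nullptr;

    auto simplified = simplify_a_calculation_tree(*calculation_tree);

    // If the tree collapsed into a single numeric leaf, hand back a plain value,
    // e.g. calc(2px + 3px) becomes a LengthStyleValue of 5px.
    if (auto numeric_value = simplified->to_style_value())
        return numeric_value;

    return CalculatedStyleValue::create(move(simplified));
}
```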
This simplification is reflected in the new test results. Serialization
is not yet correct in all cases but we're closer than we were. :^)
Calc simplification eventually produces a single style-value as its
output. This extra context data will let us know whether a calculated
number should be treated as a `<number>` or an `<integer>`, so that for
example, `z-index: 12` and `z-index: calc(12)` both produce an
`IntegerStyleValue` containing 12.
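A minimal sketch of the kind of context data involved (field names are
placeholders):

```cpp
// Hypothetical sketch of the parse-time calculation context.
struct CalculationContext {
    // What a bare percentage resolves to for the surrounding property, if anything.
    Optional<ValueType> percentages_resolve_as {};
    // Whether a numeric result should become an IntegerStyleValue instead of a
    // NumberStyleValue, e.g. for `z-index: calc(12)`.
    bool resolve_numbers_as_integers { false };
};
```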
The goal of this VERIFY was to ensure that we didn't mess up the logic
for calculating the correct type. However, it's now unable to do so
because it doesn't have access to the CalculationContext, which
determines what type percentages are. This makes it crash when running
the simplification algorithm. The benefits of this check are small, and
it meant doing extra work, so let's just remove it.
Calc simplification (which I'm working towards) involves repeatedly
deriving a new calculation tree from an existing one, and in many
cases, either the whole result or a portion of it will be identical to
that of the original. Using RefPtr lets us avoid making unnecessary
copies. As a bonus it will also make it easier to return either `this`
or a new node.
In future we could also cache commonly-used nodes, similar to how we do
so for 1px and 0px LengthStyleValues and various keywords.
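For illustration, the pattern looks roughly like this (hypothetical node
type and method names):

```cpp
// Hypothetical sketch: return `this` unchanged when simplification changed nothing.
NonnullRefPtr<CalculationNode const> NegateCalculationNode::simplified(CalculationContext const& context) const
{
    auto simplified_child = m_child->simplified(context);

    // Nothing changed, so no new node (and no copy) is needed.
    if (simplified_child.ptr() == m_child.ptr())
        return *this;

    return NegateCalculationNode::create(move(simplified_child));
}
```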
To be properly compatible with calc(), the resolved() methods all need:
- A length resolution context
- To return an Optional, as the calculation might not be resolvable
A bonus of this is that we can get rid of the overloads of `resolved()`
as they now all behave the same way.
A downside is a scattering of `value_or()` wherever these are used. It
might be the case that all unresolvable calculations have been rejected
before this point, but I'm not confident, and so I'll leave it like
this for now.
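In rough terms, the shape changes like this (hypothetical declarations):

```cpp
// Before (hypothetical): a set of infallible overloads.
Length resolved(Layout::Node const&) const;
Length resolved(Length const& reference_value) const;

// After (hypothetical): one fallible method taking a resolution context.
Optional<Length> resolved(CalculationResolutionContext const&) const;

// Call sites grow a value_or() to cope with unresolvable calculations:
auto width = length.resolved(context).value_or(Length::make_px(0));
```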
Initially I added this to the existing CalculationContext, but in
reality, we have some data at parse-time and different data at
resolve-time, so it made more sense to keep those separate.
Instead of needing a variety of methods for resolving a Foo, depending
on whether we have a Layout::Node available, or a percentage basis, or
a length resolution context... put those in a
CalculationResolutionContext, and just pass that one thing to these
methods. This also removes the need for separate resolve_*_percentage()
methods, because we can just pass the percentage basis in to the regular
resolve_foo() method.
This also corrects the issue that *any* calculation may need to resolve
lengths, but we previously only passed a length resolution context to
specific types in some situations. Now, they can all have one available,
though it's up to the caller to provide it.
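Something like the following captures the idea (member names are
placeholders):

```cpp
// Hypothetical sketch of the resolve-time context.
struct CalculationResolutionContext {
    // Basis for resolving percentages, when the surrounding property provides one.
    Optional<PercentageBasis> percentage_basis {};
    // Everything needed to resolve font-relative and viewport-relative lengths.
    Optional<Length::ResolutionContext> length_resolution_context {};
};

// One method per result type, instead of overloads plus resolve_*_percentage() variants:
Optional<Length> CalculatedStyleValue::resolve_length(CalculationResolutionContext const&) const;
```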
Prior to this change, we invalidated all elements in the document if it
used any selectors with :has(). This change aims to improve that by
applying a combination of techniques:
- Collect metadata for each element if it was matched against a selector
with :has() in the subject position. This is needed to invalidate all
elements that could be affected by selectors like `div:has(.a:empty)`
because they are not covered by the invalidation sets.
- Use invalidation sets to invalidate elements that are affected by
selectors with :has() in a non-subject position.
Selectors like `.a:has(.b) + .c` still cause whole-document invalidation
because invalidation sets cover only descendants, not siblings. As a
result, there is no performance improvement on github.com due to this
limitation. However, youtube.com and discord.com benefit from this
change.
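Roughly, the decision on a DOM mutation looks like this (a sketch with
made-up names, not the actual invalidation code):

```cpp
// Hypothetical sketch of the combined approach.
void Document::invalidate_style_for_has(DOM::Element& changed_element)
{
    // Elements that were matched against a selector with :has() in the subject
    // position (e.g. div:has(.a:empty)) can depend on arbitrary parts of their
    // subtree, so they carry a flag and are invalidated directly.
    for_each_element_with_has_in_subject_position([](DOM::Element& element) {
        element.set_needs_style_update(true);
    });

    // :has() in a non-subject position (e.g. .menu:has(.open) .item) is covered
    // by invalidation sets, so only the elements they describe are invalidated.
    invalidate_elements_affected_by_invalidation_sets(changed_element);
}
```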
By using ancestor filters, some selectors can be rejected early,
skipping the selector engine invocation entirely. According to my
measurements, this applies to 30-80% of hover selectors, depending on
the website.
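As a sketch of the idea (the filter and method names are placeholders):
the ancestor filter is a probabilistic set of the ancestors' ids,
classes, and tag names, so a negative lookup is definitive and lets us
skip running the selector entirely.

```cpp
// Hypothetical sketch of early rejection using the ancestor filter.
bool can_reject_with_ancestor_filter(Selector const& selector, AncestorFilter const& filter)
{
    // Each hash describes something the selector requires of an ancestor
    // (an id, a class name, or a tag name).
    for (auto hash : selector.ancestor_hashes()) {
        if (!filter.may_contain(hash))
            return true; // No ancestor can possibly satisfy this, so skip matching.
    }
    return false;
}
```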
We have an optimization that allows us to invalidate only the style of
the element itself and mark descendants for inherited properties update
when the "style" attribute changes (unless there are any CSS rules that
use the "style" attribute, then we also invalidate all descendants that
might be affected by those rules). This optimization was not taking into
account that when the inline style has custom properties, we also need
to invalidate all descendants whose style might be affected by them.
This change fixes this bug by saving a flag in Element that indicates
whether its style depends on any custom properties and then invalidating
all descendants with this flag set when the "style" attribute changes.
Unlike font-relative length invalidation, for elements that depend on
custom properties we need to recompute the whole style rather than
individual properties, because the values with still-unexpanded custom
properties are gone after cascading, so the cascade has to be done again.
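A sketch of the shape of the fix (the flag and helper names are
placeholders):

```cpp
// Hypothetical sketch of invalidation after a "style" attribute change.
void Element::invalidate_style_for_style_attribute_change()
{
    set_needs_style_update(true);

    // Custom properties in the inline style can affect any descendant that uses
    // var(); those descendants need a full style recomputation, because the
    // unexpanded var() values are no longer around after cascading.
    if (!inline_style_contains_custom_properties())
        return;

    for_each_in_subtree_of_type<Element>([](Element& descendant) {
        if (descendant.style_uses_custom_properties())
            descendant.set_needs_style_update(true);
        return TraversalDecision::Continue;
    });
}
```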
The test added for this change is a version of an existing test we had,
restructured so that it doesn't trigger the aggressive style
invalidation caused by DOM structure changes until the last moment,
when the test results are printed.
This means we only need to consider rules from the document and the
current shadow root, instead of the document and *every* shadow root.
Dramatically reduces the amount of rules processed on many pages.
Shaves 2.5 seconds of load time off of https://wpt.fyi/ :^)
For all invalidation properties nested inside an :nth-child() argument
list, we need to invalidate the whole subtree to make sure the style of
sibling elements will be recalculated.
Instead of creating and passing around Vector<MatchingRule> inside
StyleComputer (internally, not exposed in the API), we now use vectors
of pointers/references.
Note that we use pointers, rather than references, in case we want to
quick_sort() the vectors.
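Roughly, this is the kind of change (the sorting key here is just
illustrative):

```cpp
// Hypothetical sketch: collect non-owning pointers instead of copying MatchingRule values.
Vector<MatchingRule const*> rules_to_run;
rules_to_run.ensure_capacity(matching_rules.size());
for (auto const& matching_rule : matching_rules)
    rules_to_run.unchecked_append(&matching_rule);

// Pointers (not references) keep the elements assignable, so quick_sort() still works.
quick_sort(rules_to_run, [](auto const* a, auto const* b) {
    return a->specificity < b->specificity;
});
```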
Knocks 4 seconds of loading time off of https://wpt.fyi/
Instead, change the APIs from "has :foo" to "may have :foo" and return
true if we don't have a valid rule cache at the moment.
This allows us to defer rebuilding the rule cache until a later time,
at the cost of a wider invalidation right now.
Do note that if our rule cache is invalid, the whole document has
invalid style anyway! So this is actually always less work. :^)
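For example, something shaped like this (hypothetical names):

```cpp
// Hypothetical sketch of the "has" -> "may have" change.
bool StyleComputer::may_have_has_selectors() const
{
    // Without a valid rule cache we can't answer precisely, so answer conservatively
    // rather than rebuilding the cache right away.
    if (!m_rule_cache)
        return true;
    return m_rule_cache->has_has_selectors;
}
```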
Knocks ~1 second of loading time off of https://wpt.fyi/
Implements idea described in
https://docs.google.com/document/d/1vEW86DaeVs4uQzNFI5R-_xS9TcS1Cs_EUsHRSgCHGu8
Invalidation sets are used to reduce the number of elements marked for
style recalculation by collecting metadata from style rules about the
dependencies between properties that could affect an element’s style.
Currently, this optimization is only applied to style invalidation
triggered by class list mutations on an element.
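As a sketch of the kind of metadata involved (member names are
placeholders): for a rule like `.menu .item { color: red; }`, toggling
class `menu` on an element only needs to invalidate descendants with
class `item`, and the invalidation set for `menu` records that.

```cpp
// Hypothetical sketch of what an invalidation set stores for one class name.
struct InvalidationSet {
    HashTable<FlyString> classes;   // descendant classes that may be affected
    HashTable<FlyString> ids;       // descendant ids that may be affected
    HashTable<FlyString> tag_names; // descendant tag names that may be affected
    bool invalidates_self { false };          // the element whose class list changed
    bool invalidates_whole_subtree { false }; // fallback when a selector can't be analyzed
};
```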
Same again, although rotation is more complicated: `rotate`
is "equivalent to" multiple different transform functions depending on
its arguments. So we can parse it as one of those instead of the full
`rotate3d()`, but we then need to handle this when serializing.
The only ways this varies from the `scale()` function are in parsing
and serialization. Parsing stays separate, and serialization is done by
telling `TransformationStyleValue` which property it is, and overriding
its normal `to_string()` code for properties other than `transform`.
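A sketch of the serialization side (method and member names are
placeholders):

```cpp
// Hypothetical sketch: serialization depends on which property the value belongs to.
String TransformationStyleValue::to_string(SerializationMode mode) const
{
    // `scale: 2` must serialize as "2", not as "scale(2)".
    if (m_property == PropertyID::Scale)
        return serialize_as_scale_property(mode);
    if (m_property == PropertyID::Rotate)
        return serialize_as_rotate_property(mode);
    // The `transform` property keeps the usual function serialization.
    return serialize_as_transform_function(mode);
}
```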
Invalidation for adopted style sheets was broken because we assumed
that an "active" style sheet is always attached to a StyleSheetList,
which is not true for adopted style sheets. This change addresses that
by keeping track of all documents/shadow roots that own a style sheet
and notifying them about invalidation, instead of going through the
StyleSheetList.
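The shape of the fix is roughly this (hypothetical names):

```cpp
// Hypothetical sketch: a style sheet knows every document / shadow root that has
// adopted it, so invalidation no longer has to go through a StyleSheetList.
void CSSStyleSheet::invalidate_owners()
{
    for (auto& owner : m_owning_documents_or_shadow_roots)
        owner->invalidate_style_for_adopted_style_sheet_change();
}
```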