Initially I added this to the existing CalculationContext, but in
reality, we have some data at parse-time and different data at
resolve-time, so it made more sense to keep those separate.
Instead of needing a variety of methods for resolving a Foo, depending
on whether we have a Layout::Node available, a percentage basis, or a
length resolution context, we now put those inputs into a
CalculationResolutionContext and pass that one thing to these methods.
This also removes the need for separate resolve_*_percentage()
methods, because we can just pass the percentage basis into the regular
resolve_foo() method.
This also corrects the issue that *any* calculation may need to resolve
lengths, but we previously only passed a length resolution context to
specific types in some situations. Now, they can all have one available,
though it's up to the caller to provide it.
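In sketch form, the context ends up looking something like this; the
member list below is illustrative, inferred from the inputs mentioned
above, and not the exact fields of the real class:

```cpp
// Illustrative sketch only; the real CalculationResolutionContext in
// LibWeb may differ. The idea is to bundle every input a calculation
// might need into one context object and pass just that.
#include <optional>

namespace Layout { class Node; }
struct CSSPixels { double value { 0 }; };

struct CalculationResolutionContext {
    // Present when a layout node is available (for font metrics and similar).
    Layout::Node const* layout_node { nullptr };
    // Present when there is a percentage basis to resolve against; this is
    // what replaces the separate resolve_*_percentage() overloads.
    std::optional<CSSPixels> percentage_basis;
    // Whatever else length resolution needs can be carried here as well.
};
```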
BFC roots behave differently in that their height is computed twice,
before and after inside layout, since automatic height depends on the
results of inside layout. Other formatting contexts only require the
"before" pass, and so we can treat their content sizes as definite
before proceeding with inside layout.
This makes https://play.tailwind.com/ look beautiful. :^)
The focus chain always consists of newly created GC::Root objects, so
the condition always produced `false`. The fix is to use GC::Root's
overloaded operator== method, which compares the pointers of the stored
type.
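A minimal, self-contained illustration of the comparison bug, using a
simplified stand-in for GC::Root rather than the real class:

```cpp
// Handle is a simplified stand-in for GC::Root, for illustration only.
#include <cassert>

template<typename T>
class Handle {
public:
    explicit Handle(T* ptr) : m_ptr(ptr) {}
    T* ptr() const { return m_ptr; }
    // Compare the stored pointers, not the handle objects themselves.
    bool operator==(Handle const& other) const { return m_ptr == other.m_ptr; }
private:
    T* m_ptr { nullptr };
};

int main()
{
    int node = 42;
    Handle<int> a { &node };
    Handle<int> b { &node }; // a freshly created handle to the same object
    assert(&a != &b);        // distinct handle objects, so identity checks fail...
    assert(a == b);          // ...but comparing the stored pointers succeeds
}
```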
This fixes Figma dropdowns and context menus instantly disappearing as
soon as they are opened. Those elements become focused when Figma
inserts them, and because of this bug, blur events would fire all the
way up to and including the window. Figma listens for the blur event on
the window and, when it is received, instantly hides dropdowns and
context menus. The intention behind this seems to be hiding them when
the user clicks off the browser window or switches tabs.
Several interfaces that return a high resolution time require that
time to be coarsened, in order to prevent timing attacks. This
implementation simply reduces the resolution of the returned timestamp
to the minimum values given in the specification. Further work may be
needed to make our implementation more robust to the kind of attacks
that this mechanism is designed to prevent.
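As a rough sketch of what the coarsening amounts to (illustrative code,
not the actual LibWeb implementation; per the HR-Time spec, the minimum
resolution is 100 microseconds, or 5 microseconds for cross-origin
isolated environments):

```cpp
// Illustrative sketch, not the actual LibWeb code. The HR-Time spec's
// "coarsen time" clamps resolution to 100 microseconds, or 5 microseconds
// when the environment is cross-origin isolated.
#include <cmath>

double coarsen_time(double timestamp_ms, bool cross_origin_isolated_capability)
{
    double const resolution_ms = cross_origin_isolated_capability ? 0.005 : 0.1;
    return std::floor(timestamp_ms / resolution_ms) * resolution_ms;
}
```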
Corresponds to https://github.com/whatwg/html/pull/10904
However, we don't implement most of what is changed in that PR, so this
simply moves the FIXME'd getContextAttributes() method from one place
to another.
...until Document::update_style(). This lets us avoid a full document
DOM tree traversal on each Node::invalidate_style() call.
Fixes a performance regression on wpt.fyi
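The general shape of the deferral, sketched with hypothetical names
rather than the real LibWeb classes:

```cpp
#include <vector>

struct Node {
    bool needs_style_update { false };
    std::vector<Node*> children;
};

struct Document {
    Node root;
    bool style_invalidation_pending { false };

    // Cheap: each invalidate_style() call only records that work is pending.
    void invalidate_style() { style_invalidation_pending = true; }

    // The full-document traversal happens at most once, here.
    void update_style()
    {
        if (!style_invalidation_pending)
            return;
        style_invalidation_pending = false;
        mark_subtree(root);
        // ...recompute styles for marked nodes...
    }

    static void mark_subtree(Node& node)
    {
        node.needs_style_update = true;
        for (auto* child : node.children)
            mark_subtree(*child);
    }
};
```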
It may happen that the scalars used by SECPxxxr1 turn out to be slightly
smaller than their expected size when serialized to `UnsignedBigInteger`,
especially for P-521. Handle this case by serializing zeros for the
missing bytes instead of failing.
Originally discovered as a flaky WPT test.
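A sketch of the padding idea, using illustrative names rather than
LibCrypto's actual API:

```cpp
// Illustrative sketch, not LibCrypto's actual API. If a scalar's big-endian
// encoding is shorter than the curve's scalar size (66 bytes for P-521),
// emit leading zero bytes instead of failing.
#include <cstdint>
#include <vector>

std::vector<uint8_t> export_scalar(std::vector<uint8_t> const& big_endian_value, size_t scalar_size)
{
    // Assumes big_endian_value.size() <= scalar_size.
    std::vector<uint8_t> out(scalar_size, 0);
    size_t const offset = scalar_size - big_endian_value.size();
    for (size_t i = 0; i < big_endian_value.size(); ++i)
        out[offset + i] = big_endian_value[i];
    return out;
}
```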
Now that we have implemented most of the element types, this dbgln
is a lot less useful for detecting issues on live sites, and is in
my experience more likely to log cases that are not a FIXME. It also
cleans up some of the log spam from running tests.
Prior to this change, we invalidated all elements in the document if it
used any selectors with :has(). This change aims to improve that by
applying a combination of techniques:
- Collect metadata for each element if it was matched against a selector
with :has() in the subject position. This is needed to invalidate all
elements that could be affected by selectors like `div:has(.a:empty)`
because they are not covered by the invalidation sets.
- Use invalidation sets to invalidate elements that are affected by
selectors with :has() in a non-subject position.
Selectors like `.a:has(.b) + .c` still cause whole-document invalidation
because invalidation sets cover only descendants, not siblings. As a
result, there is no performance improvement on github.com due to this
limitation. However, youtube.com and discord.com benefit from this
change.
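Roughly, the two invalidation paths combine like this; the sketch below
uses hypothetical names, not the actual LibWeb data structures:

```cpp
#include <vector>

struct Element {
    // Metadata collected during matching: this element was matched against a
    // selector with :has() in the subject position, e.g. div:has(.a:empty).
    bool matched_has_in_subject_position { false };
    bool needs_style_update { false };
};

// On a DOM mutation, invalidate the elements the invalidation sets point at
// (covers :has() in non-subject position) plus every element flagged above,
// instead of invalidating the whole document.
void invalidate_for_has(std::vector<Element*> const& from_invalidation_sets,
                        std::vector<Element*> const& all_elements)
{
    for (auto* element : from_invalidation_sets)
        element->needs_style_update = true;
    for (auto* element : all_elements)
        if (element->matched_has_in_subject_position)
            element->needs_style_update = true;
}
```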
...by replacing the existing method that checks whether an element is
affected by a given invalidation property. It turned out there is no
need to check whether an element is affected by one specific property,
so it's more convenient to have a method that accepts the whole set.
...in hover style invalidation. Instead, pass down a flag indicating
that all subsequent nodes in the tree traversal have to be marked for
an inherited style update.
By using ancestor filters, some selectors can be rejected early,
skipping selector engine invocation entirely. According to my
measurements, this applies to 30-80% of hover selectors, depending on
the website.
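The early-rejection idea, sketched with an exact multiset standing in
for the real (probabilistic) ancestor filter:

```cpp
// Simplified sketch: a real ancestor filter is typically a probabilistic
// structure (e.g. a counting Bloom filter) over ancestor tag names, ids and
// classes; an exact multiset stands in for it here.
#include <string>
#include <unordered_set>

struct AncestorFilter {
    std::unordered_multiset<std::string> entries;

    void push_ancestor(std::string const& key) { entries.insert(key); }
    void pop_ancestor(std::string const& key)
    {
        if (auto it = entries.find(key); it != entries.end())
            entries.erase(it);
    }
    bool may_contain(std::string const& key) const { return entries.count(key) > 0; }
};

// If a hover selector requires an ancestor with a given class and the filter
// proves no such ancestor exists, skip the selector engine entirely.
bool fast_reject(AncestorFilter const& filter, std::string const& required_ancestor_class)
{
    return !filter.may_contain("." + required_ancestor_class);
}
```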
Instead of allocating 3 vectors with size equal to the number of
elements potentially affected by hover:
- for the elements themselves
- for selector match state of each element before hovered node change
- for selector match state of each element after hovered node change
we now allocate none of them, and instead mark elements for style
recalculation as we traverse the tree.
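Sketched with hypothetical names, the new approach boils down to marking
during the traversal itself:

```cpp
#include <vector>

struct Element {
    bool affected_by_hover { false };
    bool needs_style_update { false };
    std::vector<Element*> children;
};

// Previously: three vectors (the elements, match state before the hovered
// node changed, match state after) sized to the number of potentially
// affected elements. Now: no allocations; elements are marked for style
// recalculation as the tree is walked.
void mark_affected_by_hover(Element& element)
{
    if (element.affected_by_hover)
        element.needs_style_update = true;
    for (auto* child : element.children)
        mark_affected_by_hover(*child);
}
```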
We have an optimization that allows us to invalidate only the style of
the element itself and mark descendants for an inherited-properties
update when the "style" attribute changes (unless there are CSS rules
that use the "style" attribute, in which case we also invalidate all
descendants that might be affected by those rules). This optimization
did not take into account that when the inline style has custom
properties, we also need to invalidate all descendants whose style
might be affected by them.
This change fixes this bug by saving a flag in Element that indicates
whether its style depends on any custom properties and then invalidating
all descendants with this flag set when the "style" attribute changes.
Unlike font-relative length invalidation, for elements that depend on
custom properties we need to recompute the whole style rather than
individual properties, because the values with unexpanded custom
properties are gone after cascading, so the cascade has to be done
again.
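Sketched with hypothetical names (not the real Element members), the
invalidation looks roughly like this:

```cpp
#include <vector>

struct Element {
    // Saved during style computation: this element's style consumed
    // custom properties (var() references).
    bool style_depends_on_custom_properties { false };
    bool needs_full_style_recompute { false };
    std::vector<Element*> descendants;
};

// When the "style" attribute changes and the inline style defines custom
// properties, descendants that consume custom properties need a full style
// recomputation (re-running the cascade), not just individual properties.
void invalidate_for_inline_style_change(Element& element, bool inline_style_has_custom_properties)
{
    element.needs_full_style_recompute = true;
    if (!inline_style_has_custom_properties)
        return;
    for (auto* descendant : element.descendants)
        if (descendant->style_depends_on_custom_properties)
            descendant->needs_full_style_recompute = true;
}
```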
The test added for this change is a version of an existing test,
restructured so that it doesn't trigger the aggressive style
invalidation caused by DOM structure changes until the last moment,
when test results are printed.
Previously, we were generating timestamps relative to the current time
of the monotonic clock. We now generate timestamps relative to the
event loop's last render opportunity time, per the spec.
This means we only need to consider rules from the document and the
current shadow root, instead of the document and *every* shadow root.
Dramatically reduces the amount of rules processed on many pages.
Shaves 2.5 seconds of load time off of https://wpt.fyi/ :^)
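Sketched with hypothetical names, rule collection now looks roughly like
this:

```cpp
#include <vector>

struct StyleRule { /* ... */ };
struct RuleSet { std::vector<StyleRule const*> rules; };
struct ShadowRoot { RuleSet rule_set; };
struct Document {
    RuleSet rule_set;
    std::vector<ShadowRoot const*> all_shadow_roots; // no longer consulted below
};

// Collect candidate rules for matching an element: only the document's rules
// plus the rules of the shadow root containing the element (if any), instead
// of the rules from every shadow root in the document.
std::vector<StyleRule const*> collect_candidate_rules(Document const& document, ShadowRoot const* containing_shadow_root)
{
    auto candidates = document.rule_set.rules;
    if (containing_shadow_root) {
        auto const& shadow_rules = containing_shadow_root->rule_set.rules;
        candidates.insert(candidates.end(), shadow_rules.begin(), shadow_rules.end());
    }
    return candidates;
}
```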
When positioning floats against an edge, we take all currently relevant
floats on that side into account to determine the Y offset at which to
place the new float. However, we were using the margin box height
instead of the absolute bottom position, which disregards each existing
float's Y position within the root, and we were setting the Y offset to
that height instead of taking the new float's Y position inside the
root into account.
The new code determines the lowest margin-box bottom of the current
floats within the root, and adds the difference between that value and
the new float's Y position to the Y offset.
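In sketch form (hypothetical names and simplified coordinate handling),
the delta added to the Y offset is:

```cpp
#include <algorithm>
#include <vector>

struct FloatBox {
    double margin_box_top_in_root { 0 };
    double margin_box_height { 0 };
    double margin_box_bottom_in_root() const { return margin_box_top_in_root + margin_box_height; }
};

// The delta added to the Y offset: the lowest (largest) margin-box bottom of
// the current floats, in root coordinates, minus the new float's Y position
// in the root. Assumes at least one relevant float is present.
double y_offset_delta(std::vector<FloatBox> const& current_floats, double new_float_y_in_root)
{
    double lowest_bottom = current_floats.front().margin_box_bottom_in_root();
    for (auto const& existing : current_floats)
        lowest_bottom = std::max(lowest_bottom, existing.margin_box_bottom_in_root());
    return lowest_bottom - new_float_y_in_root;
}
```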
I see no good reason to keep them out of line, and having the methods
named differently than the property they're updating caused me some
confusion initially.
The URLPattern spec is intended to be implemented inside of LibURL, with
LibWeb only responsible for the IDL conversion layer, in a similar
manner to how URL is implemented.
This currently uses a non-spec-compliant property on the Response
object, which represents the time that the Response was created.
Setting this value allows `Performance.timeOrigin` to return a
reasonable value.
Our `UnsignedBigInteger` implementation cannot handle numbers whose
size is not a multiple of 4. For this reason we need to carry the real
size around for P-521 support.