These commands are used for the "Edit As HTML" feature in DevTools. This
renames our existing HTML getter IPC to indicate that it is for outer
HTML. DevTools will need a separate inner HTML getter.
Since cross-site navigation is a fairly frequent operation, other
browsers commonly create a spare process ahead of time to reduce the
overhead of moving the target site to a new process.
We store this spare process on the WebView application. If it is
unavailable, we queue a task to create it later.
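As a rough sketch (the helper and member names here are assumptions, not the actual API), the spare process handling could look like this:

```cpp
// Keep one spare WebContent process around; if launching fails right now,
// retry on a later event loop iteration.
void Application::ensure_spare_web_content_process()
{
    if (m_spare_web_content_process)
        return;

    if (auto process_or_error = launch_web_content_process(); !process_or_error.is_error()) {
        m_spare_web_content_process = process_or_error.release_value();
        return;
    }

    // Creation failed; queue a task to try again later.
    Core::deferred_invoke([this] { ensure_spare_web_content_process(); });
}
```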
Site isolation is a common technique to reduce the chance that
malicious sites can access data from other sites. When the user
navigates, we now check if the target site is the same as the current
site. If not, we instruct the UI to perform the navigation in a new
WebContent process.
The term "site" here means the URL's public suffix plus one label, i.e.
its registrable domain. This means that navigating from
"www.example.com" to "sub.example.com" remains in the same process.
There's plenty of room for optimization around this. For example, we can
create a spare WebContent process ahead of time to hot-swap the target
site. We can also create a policy to keep the navigated-from process
around, in case the user quickly navigates back.
We were using the public suffix of the URL's host as its registrable
domain. But the registrable domain is actually the public suffix plus
one additional label: for "www.example.com", the public suffix is "com"
and the registrable domain is "example.com".
If an element is affected only by selectors using the direct sibling
combinator `+`, we can calculate the maximum invalidation distance and
use it to limit style invalidation. For example, the selector
`.a + .b + .c` has a maximum invalidation distance of 2, meaning we can
skip invalidating any element affected by this selector if it's more
than two siblings away from the element that triggered the style
invalidation.
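As a sketch (not the actual implementation; the selector API names are assumptions), the distance can be computed by counting the `+` combinators in the selector:

```cpp
// Count the next-sibling (+) steps in the selector; an element further than
// this many siblings away from the mutated element cannot be affected.
static size_t max_sibling_invalidation_distance(CSS::Selector const& selector)
{
    size_t distance = 0;
    for (auto const& compound_selector : selector.compound_selectors()) {
        if (compound_selector.combinator == CSS::Selector::Combinator::NextSibling)
            ++distance;
    }
    return distance; // ".a + .b + .c" yields 2.
}
```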
This change results in a visible performance improvement when hovering
over the PR list on GitHub.
The public one at echo.websocket.org is often slow, frequently causing
timeouts. Ideally we would run a local WebSocket server, but for now we
can make do with a self-hosted alternative.
Instead of trying to manually determine which parts of a bitmap fall
within the box of the `<img>` element, just draw the whole bitmap and
let Skia clip the draw-area to the correct rectangle.
This fixes a bug where the entire bitmap was squashed into the rectangle
of the image box instead of being clipped.
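Roughly, the draw path now looks like this (a sketch assuming a Skia canvas; not the exact painter code):

```cpp
// Clip to the image box, then draw the whole bitmap into its destination
// rect; Skia discards everything outside the clip.
static void draw_image(SkCanvas& canvas, sk_sp<SkImage> const& image,
                       SkRect const& image_box, SkRect const& destination)
{
    canvas.save();
    canvas.clipRect(image_box);
    canvas.drawImageRect(image, destination, SkSamplingOptions(), nullptr);
    canvas.restore();
}
```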
With this change, image rendering is now correct enough to import some
of the WPT tests for object-fit and object-position. To get some good
coverage, I have imported all tests for the `<img>` tag. I also wanted
to import a subset of the tests for the `<object>` tag, since those are
now passing as well. Unfortunately, they are flaky for unknown reasons.
This is the second attempt at this bugfix. The prior one, e055927ead,
broke image rendering whenever the page was scrolled and was
subsequently reverted in 16b14273d1. Hopefully this time it is not
horribly broken.
e055927ead introduced a regression that caused images to not be rendered
when the page was scrolled. This was subsequently reverted in
16b14273d1. To ensure this mistake cannot happen again, this commit
introduces a regression test for that error case.
When setting the scroll position during page load, we need to consider
whether we actually have a fragment to scroll to. A script may already
have run at that point and set a scroll position.
If there is an actual fragment to scroll to, it is fine to scroll to
that fragment, since it should take precedence. If we don't have a
fragment, however, we should not needlessly reset the scroll position
set by the script back to (0, 0).
Since this problem is caused by a spec bug, I have tested the behavior
in the three major browser engines. Unfortunately, they do not fully
agree with each other. If there is no fragment at all (e.g. `foo.html`),
all browsers respect the scroll position set by the script. If there is
a fragment (e.g. `foo.html#bar`), all browsers set the scroll position
to the fragment element and ignore the one set by script. However, when
the fragment is empty (e.g. `foo.html#`), Blink and WebKit set the
scroll position to the fragment, while Gecko keeps the one set by
script. Since all of this is ad-hoc behavior anyway, I implemented the
Blink/WebKit behavior for now, as that is the majority vote.
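In sketch form, the implemented behavior amounts to an early return when there is no fragment (the method and helper names are illustrative):

```cpp
// Only let the fragment win if there actually is one; otherwise keep whatever
// scroll position script may have set during load.
void Document::set_scroll_position_after_load()
{
    auto fragment = url().fragment();
    if (!fragment.has_value()) {
        // No fragment at all (e.g. "foo.html"): keep the script-set position.
        return;
    }
    // A fragment is present, even an empty one (e.g. "foo.html#"): scrolling
    // to it takes precedence over script, matching Blink and WebKit.
    scroll_to_fragment(*fragment);
}
```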
This fixes a regression introduced in 51102254b5.
This has been a longstanding ergonomic issue with our IPC compiler. Non-
trivial types were previously passed by const&. So if we wanted to avoid
expensive copies, we would have to const_cast and move the data.
We now pass ownership of all transferred data to the client subclasses.
This allows us to remove the const_casts from these methods, and lets
us avoid some expensive copies that we previously hadn't bothered to
const_cast around.
This removes a couple of places where we were constructing strings or
vectors just to transfer data over IPC, and passes some values by
const& to reduce clangd noise.
For example, consider the following IPC message:
do_something(u64 page_id, String string, Vector<Data> data) =|
We would previously generate the following C++ method to encode/transfer
this message:
void do_something(u64 page_id, String string, Vector<Data> data);
This required the caller to either copy the non-trivial types or `move`
them in. In some places, this meant we had to construct temporary
vectors just to send an IPC.
This isn't necessary because we weren't holding onto these parameters
anyway. We would construct an IPC::Message subclass with them (which
does require owning types), but then immediately encode the message to
an IPC::MessageBuffer and send it.
We now generate code such that we don't need to construct a Message. We
can simply encode the parameters directly without needing ownership.
This allows us to take view-types to IPC parameters.
So the above example now becomes:
void do_something(u64, StringView, ReadonlySpan<Data>);
This will be needed in an upcoming commit so that this method may call
itself recursively to generate overloads. Doing this extraction ahead of
time will simply make that diff easier to grok.
This isn't particularly important, but when staring at generated IPC
files, it's nice not to have an extra newline after every line of code
throughout the files.
This isn't actually necessary, since we already invalidate style for the
entire document, and the subsequent style update will discover any
additional layout invalidation needed as well.
Because we cache the transformed text string in text nodes affected by
text-transform, we have to actually update the layout tree when this
property value changes.
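For illustration (the style-diffing helper here is hypothetical, not the real invalidation API), the property change check has to report that a layout tree update is needed:

```cpp
// Because the transformed text is cached on the text node's layout node,
// a change to text-transform must rebuild that part of the layout tree.
static bool requires_layout_tree_update(CSS::ComputedValues const& old_style,
                                        CSS::ComputedValues const& new_style)
{
    return old_style.text_transform() != new_style.text_transform();
}
```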
If a CSS animation or transition was being used to manipulate a
property that itself does not affect layout, we were still doing a full
relayout whenever any animation- or transition-related property changed.
As it turns out, we can just not do that, and we avoid a bunch of
unnecessary layout work on many pages. When a layout-affecting property
is being animated, the animation/transition update code takes care to
invalidate layout as appropriate anyway!
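A simplified sketch of the decision (the helper names are assumptions, not the exact API):

```cpp
// Only animated properties that can affect layout schedule a relayout;
// paint-only properties (opacity, color, ...) just need their values
// re-resolved and a repaint.
static void invalidate_for_animated_property(DOM::Document& document, CSS::PropertyID property_id)
{
    if (CSS::property_affects_layout(property_id)) {
        document.set_needs_layout();
        return;
    }
    document.set_needs_to_resolve_paint_only_properties();
}
```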
This was very noticeable on GitHub, where moving the mouse cursor
between "Issues" and "Pull requests" would trigger a full relayout every
time. Now that doesn't happen, and it's much more responsive. :^)
This is a reland of 12c6ac78e2 with a mistake fixed: the cache slot was
copied instead of being referenced:
```cpp
auto cache = box.cached_intrinsic_sizes().min_content_height.ensure(width);
```
while it should've been:
```cpp
auto& cache = box.cached_intrinsic_sizes().min_content_height.ensure(width);
```
This change moves the intrinsic sizes cache from LayoutState, which is
local to the current layout run, to layout nodes, so it can be reused
between layout runs. This optimization is possible because we can
guarantee that these measurements remain unchanged unless the style of
the element or any of its descendants changes.
For now, invalidation is implemented simply by resetting the cache on
the whole ancestor chain once we determine that an element needs a
layout update. The case where layout is invalidated by structural DOM
changes is covered by layout tree invalidation, which drops the
intrinsic sizes cache along with the layout nodes.
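A sketch of that invalidation, assuming the cache lives in a member on each layout node (the names are illustrative):

```cpp
// Walk from the invalidated node up to the root, dropping cached intrinsic
// sizes, since an ancestor's intrinsic size may depend on this subtree.
void Layout::Node::reset_cached_intrinsic_sizes_in_ancestor_chain()
{
    for (auto* node = this; node; node = node->parent())
        node->m_cached_intrinsic_sizes = {};
}
```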
I measured the improvement on a couple of websites:
- GMail mail list: 28ms -> 6ms
- GitHub large code page: 47ms -> 36ms
- Discord chat history: 15ms -> 8ms
(Times do not include `commit()`)
This was completely unnecessary, and we can just let the internal
DOM tree changes trigger partial layout updates instead.
I noticed we were repeatedly dropping layout trees on ChatGPT, and this
was one of the culprits.
If the CharacterData node has no layout node when we're changing its
text, we don't need to mark the document for relayout.
This is fine, because if the node ends up getting a layout node attached
to it, we'll naturally perform relayout after that anyway.
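In sketch form (heavily simplified; not the exact method), the change boils down to an early return:

```cpp
// If there is no layout node yet, there is nothing to relayout; once a layout
// node is attached, layout will be invalidated anyway.
void CharacterData::set_data(String const& data)
{
    m_data = data;
    if (!layout_node())
        return;
    document().set_needs_layout();
}
```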