This allows us to inspect its properties. To avoid wasted work, we only
compute and cache the properties if the originating element was, or is,
displaying as a list item.
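A small sketch of the caching condition; the type and member names here are hypothetical:

```
// Only compute and cache the properties when the originating element was,
// or now is, displaying as a list item; other elements never need them.
void Element::maybe_cache_list_item_properties(CSS::Display old_display, CSS::Display new_display)
{
    if (!old_display.is_list_item() && !new_display.is_list_item())
        return;
    m_cached_list_item_properties = compute_list_item_properties(*this);
}
```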
This commit changes the strategy for updating inherited styles. Instead
of marking all potentially affected nodes during style invalidation, the
decision is now made on-the-fly during style recalculation. Child nodes
will only have their inherited styles recalculated if their parent's
properties have changed.
On Discord, this reduces the number of nodes with recalculated inherited
style by 1000x.
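A rough sketch of the new strategy, with hypothetical names (the actual traversal lives in the style update code):

```
// Decide during recalculation whether a child needs its inherited style
// recomputed, instead of pre-marking descendants during invalidation.
static void recompute_inherited_style(DOM::Node& node, bool parent_changed)
{
    bool changed = false;
    if (parent_changed) {
        // Hypothetical helper: returns true if any inherited property
        // actually changed on this node.
        changed = node.recompute_inherited_properties();
    }
    for (auto* child = node.first_child(); child; child = child->next_sibling())
        recompute_inherited_style(*child, changed);
}
```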
This was an old hack intended to make percentage sizes on flex items work
before we had implemented the appropriate special behavior of definite
sizes in flex layout.
Removing it makes flex layout less magical and should not change
behavior in any observable way.
The spec tells us to treat `auto` as `fit-content` when determining
flex item cross sizes, so let's just do *that* instead of awkwardly
doing an uncacheable nested layout of the item.
This was the only instance of `LayoutState` nesting outside of intrinsic
sizing, so removing it is an important step towards simplifying layout.
Turns out it was a lot easier than expected.
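A minimal sketch of the substitution, with hypothetical helper names; fit-content here is the usual clamp of the available space between the min- and max-content sizes:

```
// Per spec, an `auto` cross size on a flex item behaves as `fit-content`,
// so we can compute it directly instead of doing a nested layout of the item.
CSSPixels used_cross_size(FlexItem const& item, CSSPixels available_cross_space)
{
    if (item.computed_cross_size().is_auto())
        return max(item.min_content_cross_size(),
                   min(item.max_content_cross_size(), available_cross_space));
    return item.computed_cross_size().to_px(item.containing_block());
}
```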
We've long claimed to support this, but silently ignored string values
until 4cb2063577, which would
not-so-silently crash instead. (Oops)
So, actually pass the string value along and use it in the list marker.
As part of this, rename our `list-style-type` enum to
`counter-style-name-keyword`. This is an awkward name, attempting to be
spec-based. (The spec says `<counter-style>`, which is either a
`<counter-style-name>` or a function, and the `<counter-style-name>` is
a `<custom-ident>` that also has a few predefined values. So this is the
best I could come up with.)
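A rough sketch of the resulting value shape, with hypothetical type and helper names:

```
// `list-style-type` now carries either a predefined counter style keyword
// (the renamed counter-style-name-keyword enum) or an arbitrary string,
// which the list marker renders verbatim.
using ListStyleType = Variant<CounterStyleNameKeyword, String>;

String marker_text_for(ListStyleType const& style, size_t item_index)
{
    return style.visit(
        [&](CounterStyleNameKeyword keyword) { return generate_counter_text(keyword, item_index); },
        [&](String const& string) { return string; });
}
```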
Unfortunately only one WPT test for this passes - the others fail
because we produce a different layout when text is in `::before` than
when it's in `::marker`, and similar issues.
These are copied and modified from ByteString, with the addition of a
Case parameter so that we can construct them in lowercase instead of
having to make a copy afterwards.
`invalidate_style()` already tries to avoid scheduling invalidation for
`:has()` by checking the result of `may_have_has_selectors()`, but this
might still result in unnecessary work because `may_have_has_selectors()`
does not force the rules cache to be built. This change adds
`have_has_selectors()`, which does force the rules cache to be built, and
is invoked in `update_style()` to double-check whether we actually need to
process scheduled `:has()` invalidations.
This allows us to skip ~100000 ancestor traversals on this WPT test:
https://wpt.live/html/select/options-length-too-large.html
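A minimal sketch of where the new check slots in; `have_has_selectors()` and `update_style()` are from this change, while the member names for the scheduled-invalidation state are hypothetical:

```
void Document::update_style()
{
    if (m_needs_has_invalidation) {
        // have_has_selectors() builds the rules cache if needed, so the
        // answer is authoritative (unlike may_have_has_selectors()).
        if (have_has_selectors())
            process_pending_has_invalidations();
        m_needs_has_invalidation = false;
    }
    // ... regular style update continues below ...
}
```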
It fixes a bug in which ImageDecoder and RequestServer
do not exit because their connections don't close.
This makes the shutdown behavior match the Linux version, which receives
FD_READ | FD_HANGUP on socket close, and where
TransportSocket::read_as_much_as_possible_without_blocking calls
schedule_shutdown when a read from a socket returns 0 bytes.
On Windows, we have to explicitly call the Win32 shutdown() to receive the
FD_CLOSE notification.
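A small sketch of the idea (the function name is hypothetical): shut down the send side explicitly so the peer's event loop receives FD_CLOSE, mirroring the 0-byte read that schedule_shutdown reacts to on Linux.

```
// Without the explicit shutdown() the peer never sees FD_CLOSE, so
// ImageDecoder and RequestServer keep waiting on a dead connection.
static void notify_peer_of_shutdown(SOCKET socket)
{
    if (shutdown(socket, SD_SEND) == SOCKET_ERROR)
        dbgln("shutdown() failed: {}", WSAGetLastError());
}
```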
The Windows flavor of non-blocking IO, overlapped IO, differs from Linux's.
On Windows, the OS handles writing into the overlapped buffer, while on
Linux the user must do it manually.
Additionally, we can only have overlapped sockets, because that is a
requirement for being able to wait on them - WSAEventSelect automatically
sets the socket to non-blocking mode.
So we end up emulating Linux-nonblocking sockets with
Windows-nonblocking sockets.
The pending IO state (ERROR_IO_PENDING) must not escape the read/write
functions. If it does, all synchronization such as WSAPoll and
WaitForMultipleObjects stops working (WaitForMultipleObjects stops working
because with overlapped IO you are supposed to wait on the event in the
OVERLAPPED structure, while we are waiting on a WSA event; see
EventLoopImplementationWindows.cpp).
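A simplified sketch, with a hypothetical function name, of how a read keeps ERROR_IO_PENDING from escaping: if the overlapped operation is still in flight, block on its completion before returning.

```
// Returns the number of bytes read, or -1 on error. Callers (and the
// WSAPoll / WaitForMultipleObjects based event loop) never observe a
// pending overlapped operation.
static int blocking_overlapped_read(HANDLE handle, void* buffer, DWORD size)
{
    OVERLAPPED overlapped {};
    DWORD bytes_read = 0;
    if (!ReadFile(handle, buffer, size, &bytes_read, &overlapped)) {
        if (GetLastError() != ERROR_IO_PENDING)
            return -1;
        // Wait for the pending operation to finish inside the read itself.
        if (!GetOverlappedResult(handle, &overlapped, &bytes_read, TRUE))
            return -1;
    }
    return static_cast<int>(bytes_read);
}
```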
This is not a very pleasant fix, but matches a similar const_cast that
we do to return JS objects returned in a union. Ideally we would
'simply' remove the const from the value being visited in the variant,
but that opens up a whole can of worms where we are currently relying on
temporary lifetime extension so that interfaces can return a Variant of
GC::Ref's to JS::Objects.
We were previously assuming that dictionary members were always
required when being returned.
This is a bit of a weird case, because unlike _input_ dictionaries
which the spec marks as required, 'result' dictionaries do not seem to
be marked as required in spec IDL. This is still fine from the point of
view of how the spec is written, as it states that we should only put
values into the dictionary if they exist.
We could handle this through some constexpr metaprogramming type checks.
For example, if the type in our C++ representation is not an Optional, we
could skip the has_value check.
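To make that rejected alternative concrete, here is a small self-contained illustration (std::optional standing in for our Optional; names are hypothetical): the has_value() guard is only emitted when the member's C++ type is an optional.

```
#include <optional>
#include <type_traits>

template<typename T>
inline constexpr bool IsOptional = false;
template<typename T>
inline constexpr bool IsOptional<std::optional<T>> = true;

// Call `emit` with the member's value, skipping the has_value() check
// whenever the member is not an Optional.
template<typename Member, typename Emit>
void emit_dictionary_member(Member const& member, Emit&& emit)
{
    if constexpr (IsOptional<Member>) {
        if (member.has_value())
            emit(*member);
    } else {
        emit(member);
    }
}
```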
Instead of doing that, change the IDL of the result dictionaries to
annotate these members so that the IDL generator knows this
information up front. While in all current cases the members are either
all returned or all omitted together, it is conceivable that the spec
could have a situation where one member is always returned (and should be
marked as required) while the others are returned optionally. Therefore,
the new GenerateAsRequired attribute is applied to each individual member.
This fixes a compile error of multiple variables of the same name within
the same scope for the URLPattern IDL, which has a dictionary return
type that contains multiple dictionaries of the same type. Conveniently,
this also makes the complicated generated code of the URLPattern
interface easier to read by adding some more structure :^)
This is the return value of a URLPattern after `exec` is called on it.
It conveys information about the named (or unnamed) regex groups
matched for each component of the URL. For example,
```
const pattern = new URLPattern({ hostname: "{:subdomain.}*example.com" });
const result = pattern.exec({ hostname: "foo.bar.example.com" });
console.log(result.hostname.groups.subdomain);
```
will log 'foo.bar'.
We don't yet support a proxy configuration, but we can still validate
the capability received from the WebDriver client. We should also fail
to create a WebDriver session if a proxy configuration is present.
We currently define our custom WebDriver capabilities with a dictionary
of the form:
"serenity:ladybird": {
"headless": true
}
This patch flattens the configuration, such that each Ladybird option
will be its own capability. This matches how Firefox configures their
own options with geckodriver. So we now have:
"ladybird:headless": true
The WebDriver spec now separately tracks an active HTTP session list,
which will contain all non-BiDi WebDriver sessions by default. There
may only be one active HTTP session at a time.
See: 63a397f
Session management is a bit awkward right now in that the list of active
sessions is managed by Client, resulting in operations like closing a
session being split between several functions in Client and Session.
This patch moves all session management to the Session class. Closing a
session is now entirely in Session::close().
This will make managing a separate HTTP session list a bit simpler.