The special empty value (which we use for array holes, empty
Optional<Value>, and a few other placeholder/sentinel tasks) still
exists, but you now create one via JS::js_special_empty_value() and
check for it with Value::is_special_empty_value().
The main idea here is to make it very unlikely to accidentally create an
unexpected special empty value.
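A minimal sketch of the new usage, assuming a LibJS caller (the helper
functions here are purely illustrative):

```cpp
#include <LibJS/Runtime/Value.h>

// Illustrative helpers: placeholder slots are now created and checked
// explicitly, so an "empty" Value can no longer appear by accident.
JS::Value value_or_hole(bool has_value, double number)
{
    if (!has_value)
        return JS::js_special_empty_value();
    return JS::Value(number);
}

bool is_hole(JS::Value value)
{
    return value.is_special_empty_value();
}
```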
Start work on a speculative HTML Parser in Swift. This component will
walk ahead of the normal HTML parser looking for fetch() requests to
make while the normal parser is blocked. This work exposed many holes in
the Swift C++ interop component, which have been reported upstream.
Skia has a check in debug mode to verify that a surface is only used
within one thread. Before this change, we were violating this by
allocating surfaces on the main thread while using and destroying them
on the rendering thread.
It turned out that some web applications want to send fairly large
messages to a WebWorker through IPC (for example, MapLibre GL sends
~1200 KiB), which led to failures (at least on macOS) because the
buffer size of TransportSocket is limited to 128 KiB. This change
solves the problem by wrapping any message that exceeds the socket
buffer size in another message that carries the wrapped message's
content in shared memory.
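A rough sketch of that wrapping, assuming a 128 KiB socket limit;
`Transport`, `send_wrapped()` and `LargeMessageWrapper` are illustrative
stand-ins rather than the actual IPC types:

```cpp
#include <AK/Error.h>
#include <AK/Span.h>
#include <LibCore/AnonymousBuffer.h>

static constexpr size_t socket_buffer_limit = 128 * 1024; // 128 KiB

ErrorOr<void> send_message(Transport& transport, ReadonlyBytes serialized)
{
    // Small messages go over the socket as before.
    if (serialized.size() <= socket_buffer_limit)
        return transport.send(serialized);

    // Large messages are copied into anonymous shared memory; only a small
    // wrapper message (carrying the shared buffer) crosses the socket, and
    // the receiver unwraps the original message from it.
    auto buffer = TRY(Core::AnonymousBuffer::create_with_size(serialized.size()));
    serialized.copy_to({ buffer.data<u8>(), buffer.size() });
    return transport.send_wrapped(LargeMessageWrapper { move(buffer) });
}
```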
Co-Authored-By: Luke Wilde <luke@ladybird.org>
This is clearly a very dangerous API to have, and it was causing
a crash on Linux as a result of a stack use-after-free when visiting
https://www.index.hr/.
Fixes #3901
Deleting set_surface() makes the DisplayListPlayer API a bit more
intuitive: the caller no longer has to think about whether it's
necessary to restore the previous surface after execution; instead,
DisplayListPlayer takes care of it by maintaining a stack of surfaces.
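The shape of that API is roughly the following (a sketch, not the exact
code):

```cpp
// Illustrative: the player owns the surface stack, so callers never have to
// save and restore the previous surface themselves.
void DisplayListPlayer::execute_with_surface(DisplayList const& display_list,
                                             NonnullRefPtr<PaintingSurface> surface)
{
    m_surfaces.append(move(surface));
    execute(display_list);   // commands draw onto m_surfaces.last()
    m_surfaces.take_last();  // the previous surface is current again
}
```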
The display list is an immutable data structure, so once it's created,
rasterization can be moved to a separate thread. This allows more room
for performing other tasks between processing HTML rendering tasks.
This change makes PaintingSurface, ImmutableBitmap, and GlyphRun atomic
ref-counted, as they are shared between the main and rendering threads
by being included in the display list.
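In AK terms that is essentially a switch of the base class for those
types, roughly (sketch):

```cpp
#include <AK/AtomicRefCounted.h>

// Illustrative: types that cross the main/rendering thread boundary inherit
// from AtomicRefCounted, so ref()/unref() use atomic operations and releasing
// the last reference on either thread is safe.
class PaintingSurface : public AtomicRefCounted<PaintingSurface> {
    // ...
};
```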
Previously, all `GC::Cell` derived classes were Weakable. Marking only
those classes that require this functionality as Weakable allows us to
reduce the memory footprint of some frequently used classes.
PrimitiveString is now internally either UTF-8, UTF-16, or both.
We no longer convert them to/from ByteString anywhere, nor does the VM
have a ByteString cache.
This stops the frame-ancestors WPT tests from crashing when an iframe
is blocked from loading. The crash happened because location.href would
come back undefined from the cross-origin iframe, while the code
expected it to be present.
Instead of reparsing the style attributes every time we instantiate
the internal shadow tree for a text input element, we now parse them
once (in the internal CSS realm) and reuse them for all elements.
This is roughly a 10% speedup on Speedometer 2.1 :^)
The upcoming generated types will match those for pseudo-classes: A
PseudoElementSelector type, that then holds a PseudoElement enum
defining what it is. That enum will be at the top level in the Web::CSS
namespace.
In order to keep the diffs clearer, this commit renames and moves the
types, and then a following one will replace the handwritten enum with
a generated one.
Instead of bothering the GC heap with a bunch of DOMRect allocations,
we can just pass around CSSPixelRect internally in many cases.
Before this change, we were generating so much DOMRect garbage that
we had to do a garbage collection *every frame* on the Immich demo.
This was due to the large number of intersection observers checked.
We still need to relax way more when idle, but for comparison, before
this change, when doing nothing for 10 seconds on Immich, we'd spend
2.5 seconds updating intersection observers. After this change, we now
spend 600 ms.
When decoding data into bitmaps, we end up with different alpha types
(premultiplied vs. unpremultiplied color data). Unfortunately, Skia only
seems to handle premultiplied color data well when scaling bitmaps with
an alpha channel. This might be due to Skia historically only supporting
premultiplied color blending, with unpremultiplied support having been
added more recently.
When using Skia to blend bitmaps, we need the color data to be
premultiplied. ImmutableBitmap gains a new method to enforce the alpha
type to be used, which is now used by SharedResourceRequest and
CanvasRenderingContext2D to enforce the right alpha type.
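A hedged usage sketch (the exact method name and signature on
ImmutableBitmap are illustrative; only the idea of forcing an alpha type
comes from this change):

```cpp
#include <LibGfx/Bitmap.h>
#include <LibGfx/ImmutableBitmap.h>

// Illustrative: a freshly decoded bitmap (PNGs are unpremultiplied by nature)
// gets wrapped with premultiplied alpha enforced, so Skia can blend and scale
// it without the glitches described below.
NonnullRefPtr<Gfx::ImmutableBitmap> make_blendable(NonnullRefPtr<Gfx::Bitmap> decoded)
{
    return Gfx::ImmutableBitmap::create(decoded, Gfx::AlphaType::Premultiplied);
}
```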
Our LibWeb tests actually had a couple of screenshot tests that exposed
the graphical glitches caused by Skia; see the big smiley faces in the
CSS backgrounds tests for example. The failing tests are now updated to
accommodate the new behavior.
Chromium and Firefox both seem to apply the same behavior; e.g. they
actively decode PNGs (which are unpremultiplied in nature) to a
premultiplied bitmap.
Fixes #3691.
This adds a basic settings page to manage persistent Ladybird settings.
As a first pass, this exposes settings for the new tab page URL and the
default search engine.
The way the search engine option works is that once search is enabled,
the user must choose their default search engine; we do not apply any
default automatically, and search remains disabled until an engine has
been chosen.
There are a couple of improvements that we should make here:
* Settings changes are not broadcast to all open about:settings pages.
So if two instances are open, and the user changes the search engine
in one instance, the other instance will have a stale UI.
* Adding an IPC per setting is going to get annoying. It would be nice
if we could come up with a smaller set of IPCs to send only the
relevant changed settings.
When the return key is pressed, we try to handle it as a commit action
for input elements. However, we would then go on to actually insert the
return key's code point (U+000D) into the input element. This would be
sanitized out, but would leave the input element in a state where it
thinks it has text to commit. This would result in a change event being
fired when the return key is pressed multiple times in a row.
We were also firing the beforeinput/input events twice for all return
key presses.
To fix this, this patch changes the input event target to signal
whether it actually handled the return key. Only if it did not (i.e.
for textarea elements) do we insert the code point. We also must not
fall through to the generic key handler, to avoid the repeated input
events.
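The resulting control flow, sketched with illustrative names (not the
actual classes involved):

```cpp
// Sketch only; target type and method names are illustrative.
bool commit_or_insert_return_key(InputEventsTarget& target)
{
    if (target.handle_return_key())
        return true; // e.g. an <input> committed its value; nothing to insert

    // Targets that did not handle it themselves (e.g. <textarea>) get the
    // actual U+000D code point inserted.
    target.insert_code_point(0x000D);

    // Either way, stop here instead of falling through to the generic key
    // handler, so beforeinput/input fire only once per key press.
    return true;
}
```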
Previously, we would only invalidate style when setting the `media` IDL
attribute; changing the attribute via `setAttribute()` and
`removeAttribute()` had no immediate effect.
There are two changes happening here: a correctness fix, and an
optimization. In theory they are unrelated, but the optimization
actually paves the way for the correctness fix.
Before this commit, the HTML tokenizer would attempt to look for named
character references by checking from after the `&` until the end of
m_decoded_input, which meant that it was unable to recognize things like
named character references that are inserted via `document.write` one
byte at a time. For example, if `&notin;` was written one byte at a time
with `document.write`, then the tokenizer would only check against `n`,
since that's all that would exist at the time of the check, and would
therefore erroneously conclude that it was an invalid named character
reference.
This commit modifies the approach taken for named character reference
matching by using a trie-like structure (specifically, a deterministic
acyclic finite state automaton, or DAFSA), which allows for efficient
matching one character at a time and is therefore able to pick up
where it left off after each code point is consumed.
Note: Because it's possible for a partial match to not actually develop
into a full match (e.g. `¬indo` which could lead to `⋵̸`),
some backtracking is performed after-the-fact in order to only consume
the code points within the longest match found (e.g. `¬indo` would
backtrack back to `¬`).
With this new approach, `document.write` being called one-byte-at-a-time
is handled correctly, which allows for passing more WPT tests, with the
most directly relevant tests being
`/html/syntax/parsing/html5lib_entities01.html`
and
`/html/syntax/parsing/html5lib_entities02.html`
when run with `?run_type=write_single`. Additionally, the implementation
now better conforms to the language of the spec (and resolves a FIXME)
because exactly the matched characters are consumed and nothing more, so
SWITCH_TO can be used as the spec says instead of RECONSUME_IN.
The new approach is also an optimization:
- Instead of a linear search using `starts_with`, the usage of a DAFSA
means that it is always aware of which characters can lead to a match
at any given point, and will bail out whenever a match is no longer
possible.
- The DAFSA is able to take advantage of the note in the section
`13.5 Named character references` that says "This list is static and
will not be expanded or changed in the future." and tailor its Node
struct accordingly to tightly pack each node's data into 32 bits
(sketched below). Together with the inherent DAFSA property of
redundant node deduplication, the amount of data stored for named
character reference matching is minimized.
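For illustration, a packed node might look something like this (field
names and bit widths here are approximate, not the exact ones used):

```cpp
#include <AK/Types.h>

struct Node {
    u32 character : 7;          // ASCII character on the edge into this node
    u32 end_of_word : 1;        // a named character reference ends at this node
    u32 last_sibling : 1;       // marks the end of a sibling list
    u32 number : 9;             // words reachable from here (enables perfect hashing)
    u32 first_child_index : 14; // index of this node's first child in the flat array
};
static_assert(sizeof(Node) == 4, "each node packs into 32 bits");
```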
In my testing:
- A benchmark tokenizing an arbitrary set of HTML test files was about
1.23x faster (2070ms to 1682ms).
- A benchmark tokenizing a file with tens of thousands of named
character references mixed in with truncated named character
references and arbitrary ASCII characters/ampersands runs about 8x
faster (758ms to 93ms).
- The size of `liblagom-web.so` was reduced by 94.96KiB.
Some technical details:
A DAFSA (deterministic acyclic finite state automaton) is essentially a
trie flattened into an array, but it also uses techniques to minimize
redundant nodes. This provides fast lookups while minimizing the
required data size, but normally does not allow for associating data
related to each word. However, by adding a count of the number of
possible words from each node, it becomes possible to also use it to
achieve minimal perfect hashing for the set of words (which allows going
from word -> unique index as well as unique index -> word). This allows
us to store a second array of data so that the DAFSA can be used as a
lookup for e.g. the associated code points.
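Using the illustrative Node layout sketched earlier, the word-to-index
mapping can be computed roughly like this (a sketch under those
assumptions; the real implementation may differ in details such as
sentinel handling):

```cpp
#include <AK/Optional.h>
#include <AK/Span.h>
#include <AK/StringView.h>

// Walk the DAFSA: every non-matching sibling skipped over contributes its
// `number` (the count of words in its subtree), and every end-of-word node on
// the matched path contributes 1. The result is a 1-based unique index that
// can key a parallel array of code point data. Assumes nodes[0] is a sentinel
// "no children" entry.
Optional<size_t> unique_index_for(StringView word, ReadonlySpan<Node> nodes, size_t root_first_child)
{
    size_t index = 0;
    size_t next_child = root_first_child;
    Node const* matched = nullptr;

    for (char wanted : word) {
        if (next_child == 0)
            return {}; // ran out of children before the word ended
        matched = nullptr;
        for (size_t i = next_child;; ++i) {
            auto const& node = nodes[i];
            if (node.character == static_cast<u8>(wanted)) {
                if (node.end_of_word)
                    ++index;
                next_child = node.first_child_index;
                matched = &node;
                break;
            }
            index += node.number; // skip this sibling's entire subtree
            if (node.last_sibling)
                return {}; // no sibling matched this character
        }
    }
    if (matched == nullptr || !matched->end_of_word)
        return {}; // empty word, or word is only a prefix of entries
    return index;
}
```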
For the Swift implementation, the new NamedCharacterReferenceMatcher
was used to satisfy the previous API and the tokenizer was left alone
otherwise. In the future, the Swift implementation should be updated to
use the same implementation for its NamedCharacterReference state as
the updated C++ implementation.
The intent is that this will replace the separate Task Manager window.
This will allow us to more easily add features such as actual process
management, better rendering of the process table, etc. Included in this
page is the ability to sort table rows.
This also lays the groundwork for more internal `about` pages, such as
about:config.
We previously had PropertyOwningCSSStyleDeclaration and
ResolvedCSSStyleDeclaration, representing the current style properties
and resolved style respectively. Both of these were the
CSSStyleDeclaration type in the CSSOM. (We also had
ElementInlineCSSStyleDeclaration but I removed that in a previous
commit.)
In the meantime, the spec has changed so that these should now be a new
CSSStyleProperties type in the CSSOM. Also, we need to subclass
CSSStyleDeclaration for things like CSSFontFaceRule's list of
descriptors, which means it wouldn't hold style properties.
So, this commit does the fairly messy work of combining these two types
into a new CSSStyleProperties class. A lot of what was previously done
as separate methods in the two classes now follows the spec steps of
"if the readonly flag is set, do X" instead, which is hopefully easier
to follow too.
There is still some functionality in CSSStyleDeclaration that belongs in
CSSStyleProperties, but I'll do that next. To avoid a huge diff for
"CSSStyleDeclaration-all-supported-properties-and-default-values.txt"
both here and in the following commit, we don't apply the (currently
empty) CSSStyleProperties prototype yet.
This also implements the `:high-value` and `:low-value` pseudo-classes
that are in the spec.
Same note as before about this being based on the very-drafty CSS Forms
spec. In fact, some of this isn't even in that spec yet. Specifically,
the `:suboptimal-value` and `:even-less-good-value` names are undecided
and subject to change. However, it's clear that this is a pseudo-class
situation, not a pseudo-element one, so I think this is still an
improvement, as it allows styling of the `::fill` pseudo-element
regardless of what state it is in.
Relevant spec issue: https://github.com/openui/open-ui/issues/1130
This spec is very early on, and likely to change. However, it still
feels preferable to use these rather than the prefixed -webkit ones.
Plus, as we have a `::fill` on range inputs, we can use that for styling
the bar instead of inserting CSS from C++.
A Storage object may be created with an existing storage bottle. For
example, if you navigate from site.com/page1 to site.com/page2, they
will have different localStorage objects, but will use the same bottle
for actual storage.
Previously, if page1 set some key/value item, we would initialize the
byte count to 0 on page2 despite having a non-empty bottle. Thus, if
page2 set a smaller value with the same key, we would overflow the
computed byte count, and all subsequent writes would be rejected.
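A sketch of the fix under that framing (type and accessor names like
`StorageBottle::map()` are illustrative): seed the byte count from the
bottle's existing contents instead of starting at zero.

```cpp
// Illustrative: compute the initial byte count for a Storage object created
// over an already-populated bottle, so replacing an existing value with a
// smaller one can no longer wrap the counter.
static u64 initial_byte_count(StorageBottle const& bottle)
{
    u64 total = 0;
    for (auto const& [key, value] : bottle.map())
        total += key.bytes().size() + value.bytes().size();
    return total;
}
```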
This was seen navigating from the chess.com home page to the daily
puzzle page.