This commit implements the main "render blocking" behavior for link
elements, drastically reducing the amount of FOUC (flash of unstyled
content) we subject our users to.
The document will now block rendering until linked style sheets
referenced by parser-created link elements have loaded (or failed).
Note that we don't yet extend the blocking period until "critical
subresources" such as imported style sheets have been downloaded
as well.
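A minimal example of the new behavior (style.css is a placeholder name):
```html
<!DOCTYPE html>
<html>
<head>
    <!-- Parser-created, so it now blocks rendering until it has
         loaded or failed. -->
    <link rel="stylesheet" href="style.css">
</head>
<body>
    <!-- No longer painted unstyled while style.css is in flight. -->
    <p>Hello!</p>
</body>
</html>
```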
Previously, we checked for a charset with the literal name "UTF-16BE/LE"
when following the standard's steps for converting UTF-16 charsets to
UTF-8. In reality, the charsets "UTF-16BE" and "UTF-16LE" should each
be checked for separately.
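For instance, per the spec's "change the encoding" steps, a charset of
UTF-16BE or UTF-16LE coming from a meta element must be changed to UTF-8
("UTF-16BE/LE" is only the spec's shorthand for the pair). A minimal
sketch of an input that now takes the intended path:
```html
<!-- "utf-16le" now matches the UTF-16LE check on its own and is
     converted to UTF-8, as the standard requires. -->
<meta charset="utf-16le">
```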
Co-authored-by: Jelle Raaijmakers <jelle@ladybird.org>
There's a quirk in HTML where the parser should ignore any line feed
character immediately following a `pre` or `textarea` start tag.
This was working fine when we could peek ahead in the input stream and
see the next token, but didn't work in character-at-a-time parsing with
document.write().
This commit adds "can ignore next line feed character" as a parser
flag that is maintained across invocations, making the quirk work in
this parsing mode as well.
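A sketch of the character-at-a-time case that now works (hypothetical
input):
```html
<script>
// The line feed immediately following the <pre> start tag must be
// ignored, even when it arrives in a separate document.write() call.
document.write("<pre>");
document.write("\n");
document.write("text</pre>");
// The <pre> element should contain "text" with no leading newline.
</script>
```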
20 new passes in WPT/html/syntax/parsing/ :^)
Instead of always inserting a new text node, we now continue appending
to an existing text node if the parser's character insertion point is
a suitable text node.
This fixes an issue where multiple invocations of document.write() would
create unnecessary sequences of text nodes. Such sequences are now
merged automatically.
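For example (hypothetical input):
```html
<script>
document.write("Hello, ");
document.write("world!");
// Both writes now append to a single text node, instead of leaving
// two adjacent text nodes in the DOM.
</script>
```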
19 new passes in WPT/html/syntax/parsing/ :^)
We were neglecting to return after handling the `frameset` start tag,
which caused us to process it twice, once properly and once generically.
54 new passes in WPT/html/syntax/parsing/ :^)
Before this change, the explicit EOF inserted by document.close() would
instantly abort the parser. This meant that parsing algorithms that ran
as part of the parser unwinding on EOF would never actually run.
591 new passes in WPT/html/syntax/parsing/ :^)
This exposed a problem where the parser would try to insert a root
<html> element on EOF in a document where someone already inserted such
an element via direct DOM manipulation. The parser now gracefully
handles this scenario. It's covered by existing tests (which would
crash without this change).
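A sketch of the kind of input this affects (hypothetical test):
```html
<iframe></iframe>
<script>
const doc = document.querySelector("iframe").contentDocument;
doc.open();
doc.write("<!DOCTYPE html><p>partial");
// close() inserts an explicit EOF; the parser now unwinds on it,
// closing open elements and finishing the tree instead of aborting.
doc.close();
</script>
```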
This fixes a crash in the included test that regressed in 0adf261,
and is hit by the following HTML:
```html
<body></body>
<script>
const frame = document.body.appendChild(document.createElement("iframe"));
frame.contentDocument.open();
const child = frame.contentDocument.createElement("html");
const html = frame.contentDocument.appendChild(child);
frame.contentDocument.close();
</script>
```
I am not 100% sure this is fully the correct fix, and there may be
other cases which still don't work properly. But it's definitely an
improvement to make the confusingly named 'insert_an_eof' function of
the tokenizer actually do something.
We've historically asserted that no "saturated" size values end up as
final metrics for boxes in layout. This always had a chance of producing
false positives, since you can trivially create extremely large boxes
with CSS.
The reason we had those assertions was to catch bugs in our own engine
code where we'd incorrectly end up with non-finite values in layout
algorithms. At this point, we've found and fixed all known bugs of that
nature, and what remains are a bunch of false positives on pages that
create very large scrollable areas, iframes etc.
So, let's change it! We now clamp content width and height of boxes to
17895700 pixels, apparently the same cap as Firefox uses.
There's also the issue of calc() being able to produce non-finite
values. Note that we don't clamp the result of calc() directly, but
instead just clamp values when assigning them to content sizes.
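For example, pages like this (a sketch) used to be able to saturate
layout metrics; both boxes now simply get clamped content sizes:
```html
<!-- An absurdly large box: clamps to 17895700px per axis. -->
<div style="width: 999999999999px; height: 999999999999px;"></div>
<!-- calc() producing a non-finite value: clamped on assignment. -->
<div style="width: calc(1px * infinity);"></div>
```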
Fixes #645.
Fixes #1236.
Fixes #1249.
Fixes #1908.
Fixes #3057.
This makes it more convenient to use the 'relevant agent' concept,
instead of the awkward dynamic casts we needed to do for every call
site.
mutation_observers is also changed to hold a GC::Root instead of raw
GC::Ptr. Somehow this was not causing problems before, but trips up CI
after these changes.
Previously, if the NumericCharacterReferenceEnd state was reached when
current_input_character was None, then the
DONT_CONSUME_NEXT_INPUT_CHARACTER macro would restore back before the
EOF, and allow the next state (after the SWITCH_TO_RETURN_STATE) to
proceed with the last digit of the numeric character reference.
For example, with something like `&#1111`, before this commit the
output would incorrectly be `<code point with the value 1111>1` instead
of just `<code point with the value 1111>`.
Instead of putting the `if (current_input_character.has_value())` check
inside NumericCharacterReferenceEnd directly, it was added to
DONT_CONSUME_NEXT_INPUT_CHARACTER, because all usages of the macro
benefit from this check, even if the other existing usage sites don't
exhibit any bugs without it:
- In MarkupDeclarationOpen, if the current_input_character is EOF, then
the previous character is always `!`, so restoring and then checking
forward for strings like `--`, `DOCTYPE`, etc won't match and the
BogusComment state will run one extra time (once for `!` and once
for EOF) with no practical consequences. With the `has_value()` check,
BogusComment will only run once with EOF.
- In AfterDOCTYPEName, ConsumeNextResult::RanOutOfCharacters can only
occur when stopping at the insertion point, and because of how
the code is structured, it is guaranteed that current_input_character
is either `P` or `S`, so the `has_value()` check is irrelevant.
Using a default reference capture for these kinds of tasks is dangerous
and prone to error. Some of the variables should for sure be captured
by value so that we can keep a GC object alive rather than trying to
refer to stack objects.
In particular, input character lookahead now knows how to stop at the
insertion point marker if needed.
This makes it possible to do amazing things like having document.write()
insert doctypes one character at a time.
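For example (hypothetical test):
```html
<iframe></iframe>
<script>
const doc = document.querySelector("iframe").contentDocument;
doc.open();
// Lookahead now stops at the insertion point marker instead of
// treating the end of written data as the end of the input stream.
for (const ch of "<!DOCTYPE html>")
    doc.write(ch);
doc.close();
</script>
```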
If we reach the insertion point at the same time as we switch to another
tokenizer state, we have to bail immediately without proceeding with the
next code point. Otherwise we'd fetch the next token, get an EOF marker,
and then proceed as if we're at the end of the input stream, even though
more data may be coming (via more calls to document.write()).
This results in a massive rename across almost the entire codebase!
Alongside the namespace change, we now have the following names:
* JS::NonnullGCPtr -> GC::Ref
* JS::GCPtr -> GC::Ptr
* JS::HeapFunction -> GC::Function
* JS::CellImpl -> GC::Cell
* JS::Handle -> GC::Root
Now that the heap has no knowledge of a JavaScript realm and is
purely responsible for managing its own memory, it does not make sense
for this function's name to call it a non-realm variant.
The main motivation behind this is to remove JS specifics of the Realm
from the implementation of the Heap.
As a side effect of this change, this is a bit nicer to read than the
previous approach, and in my opinion, also makes it a little more clear
that this method is specific to a JavaScript Realm.
This fixes 4 issues (see the example below):
- RECONSUME_IN_RETURN_STATE was functionally equivalent to
SWITCH_TO_RETURN_STATE, which caused us to lose characters.
For example, &test= would lose the =
- & characters by themselves would be lost. For example, 1 & 2
would become 1 2. This is because we forgot to flush
characters in the ANYTHING_ELSE path in CharacterReference
- Named character references didn't work at all in attributes.
This is because there was a path that was checking the entity
code points instead of the entity itself. Plus, the path that
was checking the entity itself wasn't quite spec compliant.
- If we failed to match a named character reference, the first
character was lost. For example, &test would become &est.
However, this relies on a little hack, since I can't wrap my
head around how to change the code to do as the spec says.
The hack is to reconsume in AmbiguousAmpersand instead of
just switching to it.
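A sketch of inputs exercising each fix (hypothetical test cases):
```html
<p>&test=</p>                    <!-- the '=' is no longer dropped -->
<p>1 & 2</p>                     <!-- the bare '&' is no longer lost -->
<a href="?a=1&amp;b=2">link</a>  <!-- named references now work in attributes -->
<p>&test</p>                     <!-- renders as "&test", not "&est" -->
```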
Fixes #3957