This allows WindowServer to use multiple framebuffer devices and
compose the desktop with an arbitrary layout. Currently it is assumed
that the layout is contiguous and non-overlapping, but this should
eventually be enforced.
To make rendering efficient, each window now also tracks on which
screens it needs to be rendered on. This way we don't have to iterate
over all the windows for each screen; instead we use the same rendering
loop and only render to the screen (or screens) that the window
actually uses.
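Roughly, the idea looks like this; the types and member names below are
hypothetical stand-ins for the actual WindowServer code:

```cpp
#include <AK/Vector.h>
#include <LibGfx/Rect.h>

// Hypothetical sketch: each window caches the indices of the screens it
// intersects, so the compositor can skip screens the window never touches.
struct Window {
    Gfx::IntRect frame_rect;
    Vector<size_t> screen_indices; // screens this window overlaps
};

static void recompute_screens(Window& window, Vector<Gfx::IntRect> const& screen_rects)
{
    window.screen_indices.clear();
    for (size_t i = 0; i < screen_rects.size(); ++i) {
        if (window.frame_rect.intersects(screen_rects[i]))
            window.screen_indices.append(i);
    }
}
```

The compositing loop can then walk the windows once and blit each one
only to the screens recorded for it.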
This now matches the spec's OrdinaryObjectCreate() across the board:
instead of implicitly setting the created object's prototype to
%Object.prototype% and then in many cases setting it to a nullptr right
away, it now has an 'Object* prototype' parameter with _no default
value_. This makes the code easier to compare with the spec, makes it
very clear which prototype is being used, and avoids unnecessary shape
transitions.
Also fixes a couple of cases where we weren't setting the correct
prototype.
There's no reason to assume that the object would not be empty (i.e.
that it would already have own properties), so let's follow our
existing pattern of Type::create(...) and simply call it 'create'.
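A sketch of the resulting factory and a typical call site, close to but
not necessarily identical to the actual LibJS code:

```cpp
// The prototype parameter has no default value, so every call site has to
// state which prototype it wants (possibly nullptr).
Object* Object::create(GlobalObject& global_object, Object* prototype)
{
    if (!prototype)
        return global_object.heap().allocate<Object>(global_object, *global_object.empty_object_shape());
    return global_object.heap().allocate<Object>(global_object, *prototype);
}

// Call sites now read like the spec's OrdinaryObjectCreate(proto):
//     auto* ordinary = Object::create(global_object, global_object.object_prototype());
//     auto* prototype_less = Object::create(global_object, nullptr);
```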
This counter is increased each time a synchronous execution sequence
completes, and will allow us to emulate the abstract operations
AddToKeptObjects & ClearKeptObjects efficiently.
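A hedged sketch of the mechanism, with hypothetical member names; the
actual VM bookkeeping may differ:

```cpp
#include <AK/Types.h>

// A bare counter can stand in for AddToKeptObjects/ClearKeptObjects: "kept"
// status simply lapses once the generation moves on, with no eager clearing.
class ExecutionGenerationSketch {
public:
    u32 execution_generation() const { return m_execution_generation; }

    // Called whenever a synchronous execution sequence completes.
    void on_synchronous_execution_complete() { ++m_execution_generation; }

private:
    u32 m_execution_generation { 0 };
};

// A consumer (e.g. a WeakRef implementation) records execution_generation()
// when it touches its target; once the saved value no longer matches, the
// target is no longer treated as kept.
```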
And use them to highlight JavaScript in HTML source.
This commit also changes how TextDocumentSpan::data is interpreted,
as it used to be an opaque pointer, but every highlighter stuffed an
enum value inside it, which meant the values were not unique across
highlighters; that field is now a u64 serial id.
The syntax highlighters don't need to change the way they stuff token
types into that field, but a highlighter that embeds another (nested)
highlighter needs to register the nested token types for use with
token pairs.
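A hedged sketch of what that looks like from a highlighter's point of
view; the registration helper named below is hypothetical:

```cpp
#include <AK/Types.h>

// Token types are still simply stuffed into the span's data field, but the
// field is now a plain u64 serial id rather than an opaque pointer.
enum class HtmlTokenKind : u64 {
    Tag,
    AttributeName,
    AttributeValue,
    Comment,
};

static constexpr u64 to_span_data(HtmlTokenKind kind)
{
    return static_cast<u64>(kind);
}

// An outer highlighter that embeds a nested one (HTML embedding JavaScript,
// say) additionally registers the nested token types so that token-pair
// matching keeps working, along the lines of:
//     register_nested_token_pairs(javascript_highlighter.matching_token_pairs());
```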
This changes the RequestClient::start_request() method to take a URL
object instead of a URL string as its argument. All callers of the
method already had a URL object anyway, and start_request() would in
turn parse the URL string back into a URL object. This removes that
unnecessary conversion.
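Approximately, with the unrelated parameters elided (the exact
surrounding types are from memory):

```cpp
// Before: callers serialized their URL, and start_request() re-parsed it.
RefPtr<Protocol::Request> start_request(String const& method, String const& url_string /*, headers, body */);

// After: the URL object is passed straight through, with no string round-trip.
RefPtr<Protocol::Request> start_request(String const& method, URL const& url /*, headers, body */);
```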
This patch removes some FIXMEs from the StyleResolver, specifically
adding the proper float-parsing to the flex: shorthand. The
functionality was already there; it just didn't get plumbed in before.
Previously, AK::Function would accept _any_ callable type and, when
invoked, try to call it first with the given set of arguments, then
with zero arguments; if both of those failed, it would simply not call
the function at all and **return a value-constructed Out type**.
This led to many, many, many hard-to-debug situations when someone
forgot a `const` in their lambda argument types, and many cases of
people taking zero arguments in their lambdas to ignore them.
This commit reworks the Function interface to not include any such
surprising behaviour: if your callable is not invocable with the
declared argument set of the Function, it simply cannot be assigned to
that Function instance, end of story.
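A standalone illustration of the principle using standard type traits;
AK's actual implementation differs in the details:

```cpp
#include <functional>
#include <type_traits>
#include <utility>

template<typename>
class Function;

template<typename Out, typename... In>
class Function<Out(In...)> {
public:
    // Only callables invocable with exactly the declared argument types (and
    // convertible to the declared return type) can be assigned.
    template<typename CallableType,
        typename = std::enable_if_t<std::is_invocable_r_v<Out, CallableType, In...>>>
    Function(CallableType&& callable)
        : m_wrapper(std::forward<CallableType>(callable))
    {
    }

    Out operator()(In... in) { return m_wrapper(std::forward<In>(in)...); }

private:
    std::function<Out(In...)> m_wrapper;
};

// Function<void(int const&)> f = [](int&) {};  // now a compile error, not a silent no-op
// Function<void(int)> g = []() {};             // likewise rejected at assignment time
```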
This patch aims to fix wrong highlighting for some cases in HTML's
syntax highlighter. The values were somewhat experimentally determined
and are subject to change. Regardless, it should be more correct with
this patch than without it. :^)
This changes the HTML SyntaxHighlighter to conform to the now-fixed
rendering of syntax highlighting spans in GUI::TextEditor. It also
avoids emitting tokens if they have a zero or negative length.
This fixes a bug where single-character tokens were not highlighted
properly.
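Inside the token loop, the guard looks roughly like this (accessor
names approximate):

```cpp
// Skip tokens that would produce an empty (or inverted) highlighting span.
auto start = token.start_position();
auto end = token.end_position();
if (end.line < start.line || (end.line == start.line && end.column <= start.column))
    continue; // zero- or negative-length token, nothing to highlight
```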
This patch changes HTMLTokenizer::nth_last_position to not fail if the
requested position is not available. Rather, it will just return (0-0).
While this is not the correct solution, it prevents the tokenizer from
crashing just because it cannot find a source position. This should only
affect SyntaxHighlighter.
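A hedged sketch of the fallback; the surrounding types are approximated
from memory:

```cpp
// Return a (0-0) position instead of asserting when fewer than n + 1 source
// positions have been recorded so far.
HTMLToken::Position HTMLTokenizer::nth_last_position(size_t n)
{
    if (n + 1 > m_source_positions.size())
        return HTMLToken::Position { 0, 0 };
    return m_source_positions.at(m_source_positions.size() - 1 - n);
}
```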
This patch completely reworks TextNode::compute_text_for_rendering(). It
removes the unnecessary usage of Utf8View to find spaces in a String.
Furthermore, it adds a couple of fast return paths for common but trivial
cases such as empty, single-character and whitespace-less strings.
For the HTML spec bookmarks, around two thirds of all function calls
(which amounts to around 10'000) use the fast paths and thus avoid
allocating a StringBuilder just to build a copy of the already present
String.
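A hedged sketch of the shape of those fast paths (not the exact code;
the whitespace-collapsing slow path is elided):

```cpp
#include <AK/CharacterTypes.h>
#include <AK/String.h>

static bool contains_ascii_whitespace(StringView string)
{
    for (auto ch : string) {
        if (is_ascii_space(ch))
            return true;
    }
    return false;
}

static String compute_text_for_rendering_sketch(String const& data)
{
    if (data.is_empty())
        return data; // empty string: nothing to do
    if (data.length() == 1)
        return is_ascii_space(data[0]) ? String(" ") : data; // single character
    if (!contains_ascii_whitespace(data.view()))
        return data; // no whitespace at all: nothing to collapse
    // Slow path: collapse whitespace runs into single spaces with a StringBuilder.
    // ...
    return data;
}
```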
The previous behavior was to always VERIFY that the UTF-8 bytes were
valid when iterating over the code points of a Utf8View. This change
makes it so we instead output the 0xFFFD 'REPLACEMENT CHARACTER'
code point when encountering invalid bytes, and keep iterating the
view after skipping one byte.
Leaving the decision to the consumer would break symmetry with the
Utf32View API, which would in turn require heavy refactoring and/or
code duplication in generic code such as that found in Gfx::Painter
and the Shell.
To make it easier for the consumers to detect the original bytes, we
provide a new method on the iterator that returns a Span over the
data that has been decoded. This method is immediately used in the
TextNode::compute_text_for_rendering method, which previously did
this in an ad-hoc way.
This also adds tests for the new behavior in TestUtf8.cpp, as well
as reinforcements to the existing tests to check if the underlying
bytes match up with their expected values.
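A hedged usage sketch; the name of the new accessor is given as I
recall it and may differ slightly:

```cpp
#include <AK/Format.h>
#include <AK/Utf8View.h>

static void walk(StringView input)
{
    Utf8View view { input };
    for (auto it = view.begin(); it != view.end(); ++it) {
        u32 code_point = *it; // 0xFFFD for invalid byte sequences, no VERIFY
        auto bytes = it.underlying_code_point_bytes(); // the raw bytes behind this code point
        dbgln("U+{:04X} decoded from {} byte(s)", code_point, bytes.size());
    }
}
```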
This replaces ctype.h with CharacterType.h everywhere I could find
issues with narrowing conversions. While using it will probably make
sense almost everywhere in the future, the most critical places should
now have been addressed.
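As an illustration of the class of bug being fixed (assuming the
AK/CharacterTypes.h helpers; this is not a specific call site from the
change):

```cpp
#include <AK/CharacterTypes.h>
#include <ctype.h>

static bool old_way(char ch)
{
    // Undefined/implementation-defined if ch is negative, which happens for
    // any non-ASCII byte when char is signed.
    return isdigit(ch);
}

static bool new_way(u32 code_point)
{
    // Takes a u32 code point; well-defined for any input.
    return AK::is_ascii_digit(code_point);
}
```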
We currently only support application/x-www-form-urlencoded for
form submissions, which uses a special percent encode set when
percent encoding the body/query. However, we were not using this
percent encode set.
With the new URL implementation, we can now specify the percent encode
set to be used, allowing us to use this special percent encode set.
This is one of the fixes needed to make the Google cookie consent work.
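Roughly what the call looks like; the enum member name is an
assumption, and the real serializer additionally turns spaces into '+':

```cpp
#include <AK/String.h>
#include <AK/URL.h>

static String encode_form_pair(String const& name, String const& value)
{
    auto encoded_name = URL::percent_encode(name, URL::PercentEncodeSet::ApplicationXWWWFormUrlencoded);
    auto encoded_value = URL::percent_encode(value, URL::PercentEncodeSet::ApplicationXWWWFormUrlencoded);
    return String::formatted("{}={}", encoded_name, encoded_value);
}
```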
This removes URLParser, because its two exposed functions, urlencode()
and urldecode(), have been superseded by URL::percent_encode() and
URL::percent_decode(). This is in preparation for the introduction of a
new URL parser.
This replaces all occurrences of those functions with the newly
implemented functions URL::percent_encode() and URL::percent_decode().
The old functions will be removed in a further commit.
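A before/after sketch of a typical call site (default percent-encode
set assumed):

```cpp
#include <AK/Format.h>
#include <AK/URL.h>

static void example(StringView input)
{
    // Before: free functions from URLParser.h
    //     auto encoded = urlencode(input);
    //     auto decoded = urldecode(encoded);
    // After: the AK::URL equivalents
    auto encoded = URL::percent_encode(input);
    auto decoded = URL::percent_decode(encoded);
    dbgln("{} -> {} -> {}", input, encoded, decoded);
}
```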