We use instances of `Gfx::Bitmap` to move pixel data all the way from
raw image bytes up to the Skia renderer. A vital piece of information
for correct blending of bitmaps is the alpha type, i.e. are we dealing
with premultiplied or unpremultiplied color values?
Premultiplied means that the RGB channels have been multiplied by the
associated alpha value, e.g. RGB(255, 255, 255) with an alpha of 2% is
stored as RGBA(5, 5, 5, 2%).
Unpremultiplied means that the original RGB channels are stored
regardless of the alpha value, e.g. RGB(255, 255, 255) with an alpha of
2% is stored as RGBA(255, 255, 255, 2%).
It is important to know how the color data is stored in a
`Gfx::Bitmap`, because correct blending depends on knowing the alpha
type: premultiplied blending uses `S + (1 - A) * D`, while
unpremultiplied blending uses `A * S + (1 - A) * D`.
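To make the difference concrete, here is a rough sketch of the two
blend equations (illustrative only, with channels normalized to [0, 1];
the actual blending is done by the painters/Skia):

    struct Color {
        float r, g, b, a;
    };

    // Premultiplied source over destination: S + (1 - A) * D
    static Color blend_premultiplied(Color src, Color dst)
    {
        float inv = 1.0f - src.a;
        return { src.r + inv * dst.r, src.g + inv * dst.g,
                 src.b + inv * dst.b, src.a + inv * dst.a };
    }

    // Unpremultiplied source over destination: A * S + (1 - A) * D
    static Color blend_unpremultiplied(Color src, Color dst)
    {
        float inv = 1.0f - src.a;
        return { src.a * src.r + inv * dst.r, src.a * src.g + inv * dst.g,
                 src.a * src.b + inv * dst.b, src.a + inv * dst.a };
    }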
This adds the alpha type information to `Gfx::Bitmap` across the board.
It isn't used anywhere yet.
Web specs do not return percent-decoded URL path components through
JavaScript, but we were doing so in a number of places due to the
default behaviour of URL::serialize_path.
Since a percent-decoded URL path may not be valid UTF-8, this was
causing crashes in those places.
For example - on an HTMLAnchorElement when retrieving the pathname for
the URL of:
http://ladybird.org/foo%C2%91%91
To fix this, make the URL class return only the percent-encoded
serialized path, matching the URL spec. When the decoded path is
required, explicitly call URL::percent_decode.
This fixes a crash running WPT URL tests for the anchor element on:
https://wpt.live/url/a-element.html
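For illustration, the intended call pattern looks roughly like this
(a sketch; the exact LibURL signatures may differ):

    // serialize_path() now always returns the percent-encoded path.
    auto encoded_path = url.serialize_path(); // e.g. "/foo%C2%91%91"

    // Only decode when the caller really needs the decoded form; the result
    // may contain bytes that are not valid UTF-8, so treat it as raw bytes.
    auto decoded_path = URL::percent_decode(encoded_path);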
This removes the --enable-callgrind-profiling flag, and replaces it with
a --profile-process=<process-name> flag. For example:
ladybird --profile-process=WebContent
ladybird --profile-process=RequestServer
This allows profiling any helper process with callgrind.
This removes the --debug-web-content flag, and replaces it with a
--debug-process=<process-name> flag. For example:
ladybird --debug-process=WebContent
ladybird --debug-process=RequestServer
This allows attaching gdb to any helper process.
Currently, if we want to add a new command line option (e.g. for
WebContent), we have to add it to all of Qt, AppKit, and
headless-browser. (Or worse, we add it to only one of these and end up
with feature disparity.)
To prevent this, this change moves command line flags to
WebView::Application. The flags are assigned to ChromeOptions and
WebContentOptions structs.
Each chrome can still add its platform-specific options; for example,
the Qt chrome has a flag to enable Qt networking.
There should be no behavior change here, other than that AppKit will now
support command line flags that were previously only supported by Qt.
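The rough shape of the split (the member names below are illustrative
examples, not the actual LibWebView definitions):

    #include <AK/String.h>
    #include <AK/Vector.h>

    // Options consumed by the chrome itself, shared by Qt/AppKit/headless.
    struct ChromeOptions {
        Vector<String> raw_urls;   // hypothetical: initial tabs from the command line
        bool new_window { false }; // hypothetical example of a shared chrome flag
    };

    // Options forwarded to each spawned WebContent process.
    struct WebContentOptions {
        String command_line;                // hypothetical: data passed through to WebContent
        bool is_layout_test_mode { false }; // hypothetical example of a WebContent flag
    };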
The default limit (at least on Linux) causes us to run out of file
descriptors at around 15 tabs. Increase this limit to 8k. This is a
rather arbitrary number, but matches the limit set by Chrome.
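For reference, bumping the soft limit comes down to a setrlimit() call
along these lines (a sketch, not the exact code):

    #include <sys/resource.h>

    // Raise the soft RLIMIT_NOFILE limit to 8192, clamped to the hard limit.
    static void raise_open_file_limit()
    {
        rlimit limit {};
        if (getrlimit(RLIMIT_NOFILE, &limit) != 0)
            return;
        rlim_t desired = 8192;
        limit.rlim_cur = desired < limit.rlim_max ? desired : limit.rlim_max;
        setrlimit(RLIMIT_NOFILE, &limit);
    }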
Previously, the presence of surrounding whitespace would give file
paths the `https` scheme instead of the `file` scheme, making
navigation unsuccessful.
This call is used to inform the chrome that it should display a
tooltip immediately, bypassing any hover timers. It is used by <video>
tags to display the volume percentage when it is changed.
Now, instead of sending the position at which the user entered the
tooltip area, send just the text, and let the chrome figure out how to
display it.
In the case of Qt, wait for 600 milliseconds of no mouse movement, then
display it under the mouse cursor.
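In Qt terms, the chrome side can be as small as a single-shot timer
plus QToolTip (a sketch with hypothetical class and member names, not
the exact Ladybird code):

    #include <QCursor>
    #include <QObject>
    #include <QTimer>
    #include <QToolTip>

    // Sketch: show tooltip text only after 600 ms without further updates
    // (i.e. without further mouse movement inside the tooltip area).
    class TooltipHelper : public QObject {
    public:
        TooltipHelper()
        {
            m_timer.setSingleShot(true);
            m_timer.setInterval(600);
            connect(&m_timer, &QTimer::timeout, this, [this] {
                QToolTip::showText(QCursor::pos(), m_text);
            });
        }

        // Called whenever WebContent sends new tooltip text.
        void set_tooltip_text(QString const& text)
        {
            m_text = text;
            m_timer.start(); // restart the 600 ms countdown
        }

    private:
        QTimer m_timer;
        QString m_text;
    };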
Since "Character" tokens do not have an "end position", viewing source
drops the source contents following the final non-"Character" token.
By handling the EOF token and breaking out of the loop, we avoid this
issue.
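A hedged sketch of the shape of the fix (the tokenizer and helper names
here are illustrative, not the exact LibWeb API):

    for (auto token = tokenizer.next_token(); token.has_value(); token = tokenizer.next_token()) {
        if (token->is_end_of_file()) {
            // Flush everything between the last known end position and the
            // end of the source, then stop, instead of silently dropping it.
            append_source_range(builder, last_position, source.length()); // hypothetical helper
            break;
        }
        // ... highlight the token and advance last_position where possible ...
    }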
This ensures that removing the last view from a WebContentClient will
close its associated process, assuming the WebContent process is not
hung. A more drastic measure will be needed to forcefully kill the
process when it doesn't respond to this request.
This large commit also refactors LibWebView's process handling to use
a top-level Application class that uses a new WebView::Process class to
encapsulate the IPC-centric nature of each helper process.
This is mostly useful when some application-level logic needs to
iterate over all child processes. A more robust Process abstraction
would make this easier.
Using mmap-allocated memory for backing stores does not let us benefit
from GPU-accelerated painting, because most of the performance gain is
negated by reading the GPU-allocated texture back into RAM so it can be
shared with the browser process.
With IOSurface, we get a framebuffer that is both shareable between
processes and can be used as underlying memory for an OpenGL/Metal
texture.
This change does not yet benefit from using IOSurface and merely wraps
them into Gfx::Bitmap to be used by the CPU painter.
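For context, allocating a shareable IOSurface looks roughly like this;
the Gfx::Bitmap wrapping itself is elided since that API is ours and
may change:

    #include <IOSurface/IOSurface.h>

    // Sketch: allocate a 32-bit-per-pixel IOSurface that can back a Gfx::Bitmap
    // now and be bound as an OpenGL/Metal texture later.
    static IOSurfaceRef create_surface(int width, int height)
    {
        int bytes_per_element = 4;
        CFNumberRef w = CFNumberCreate(nullptr, kCFNumberIntType, &width);
        CFNumberRef h = CFNumberCreate(nullptr, kCFNumberIntType, &height);
        CFNumberRef bpe = CFNumberCreate(nullptr, kCFNumberIntType, &bytes_per_element);

        void const* keys[] = { kIOSurfaceWidth, kIOSurfaceHeight, kIOSurfaceBytesPerElement };
        void const* values[] = { w, h, bpe };
        CFDictionaryRef properties = CFDictionaryCreate(nullptr, keys, values, 3,
            &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

        IOSurfaceRef surface = IOSurfaceCreate(properties);
        CFRelease(properties);
        CFRelease(w);
        CFRelease(h);
        CFRelease(bpe);
        return surface;
    }

    // The CPU painter can then draw into IOSurfaceGetBaseAddress(surface)
    // between IOSurfaceLock()/IOSurfaceUnlock() calls, while the same surface
    // is shared with the browser process (e.g. over a Mach port).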
Allows WebContentClient to get the pid of the WebContent process right
after creation, so there is no window between forking and the
notify_process_information() IPC response during which the client
doesn't know the pid.
In the upcoming changes, we are going to switch macOS to using an
IOSurface for the backing store. This change will simplify the process
of sharing an IOSurface between processes because we already have the
MachPortServer running in the browser, and WebContent knows how to
locate the corresponding server.
This adds a motion preference to the browser UI similar to the existing
ones for color scheme and contrast.
Both the AppKit UI and the Qt UI have this new preference.
The auto value is currently the same as NoPreference; follow-ups can
address wiring that up to the actual OS preference.
Previously, part of the procedure we used to sanitize URLs entered via
the command line would check the host against the public suffix
database. This led to some valid, but not publicly accessible URLs
being treated as invalid.
This change allows the results of a find in page query to be reported
back to the user interface. Currently, the number of results found and
the current match index are reported.
This makes WebView::Database wrap around sqlite3 instead of LibSQL. The
effect on outside callers is pretty minimal. The main consequences are:
1. We must ensure the Cookie table exists before preparing any SQL
statements involving that table.
2. We can use an INSERT OR REPLACE statement instead of separate INSERT
   and UPDATE statements, as sketched below.
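For illustration, the upsert through the sqlite3 C API looks roughly
like this (a sketch with a simplified column list; it assumes the
Cookie table has a suitable unique key for REPLACE to act on):

    #include <sqlite3.h>

    // One prepared statement covers both the insert and the update path.
    static bool upsert_cookie(sqlite3* db, char const* name, char const* value)
    {
        char const* sql = "INSERT OR REPLACE INTO Cookie (name, value) VALUES (?, ?);";

        sqlite3_stmt* statement = nullptr;
        if (sqlite3_prepare_v2(db, sql, -1, &statement, nullptr) != SQLITE_OK)
            return false;

        sqlite3_bind_text(statement, 1, name, -1, SQLITE_TRANSIENT);
        sqlite3_bind_text(statement, 2, value, -1, SQLITE_TRANSIENT);

        bool ok = sqlite3_step(statement) == SQLITE_DONE;
        sqlite3_finalize(statement);
        return ok;
    }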
We will use sqlite3 as a replacement for LibSQL. Using a tried-and-true
database will allow us to avoid maintaining our own incomplete,
non-ACID, and less performant implementation. It also means we do not
have to launch and manage the singleton SQLServer process.
The main intention of this change is to have a consistent look and
behavior across all scrollbars, including elements with
`overflow: scroll` and `overflow: auto`, iframes, and a page.
Before:
- Page's scrollbar is painted by Browser (Qt/AppKit) using the
  corresponding UI framework style.
- Both WebContent and Browser know the scroll position offset.
- WebContent uses did_request_scroll_to() IPC call to send updates.
- Browser uses set_viewport_rect() to send updates.
After:
- Page's scrollbar is painted on the WebContent side using the same
  style as currently used for elements with `overflow: scroll` and
  `overflow: auto`. A nice side effect: scrollbars are now painted for
  iframes, and the page's scrollbar respects the scrollbar-width CSS
  property.
- Only WebContent knows scroll position offset.
- did_request_scroll_to() is no longer used.
- set_viewport_rect() is changed to set_viewport_size().
This allows searching for text case-insensitively. As this is probably
what most users expect, the default behavior is changed to perform
case-insensitive lookups. Chromes may add UI to change the behavior as
they see fit.
This allows the browser to send a query to the WebContent process,
which will search the page for the given string and highlight any
occurrences of that string.
LibWeb will need to use unbuffered requests to support server-sent
events. Connections for such events remain open and the remote end
sends data as HTTP bodies at its leisure. The browser needs to be able
to handle this data as it arrives, as the request essentially never
finishes.
To support this, this makes Protocol::Request operate in one of two
modes: buffered or unbuffered. The existing mechanism for setting up a
buffered request was a bit awkward; you had to set specific callbacks,
but be sure not to set some others, and then set a flag. The new
mechanism is to set the mode and the callbacks that the mode needs in
one API.
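A hedged sketch of what setting up each mode in one call might look
like (the names and signatures here are illustrative, not the actual
LibProtocol API):

    // Buffered mode: one callback, invoked once the whole body has arrived.
    request->set_buffered_request_finished_callback(
        [](bool success, auto const& headers, auto status_code, ReadonlyBytes payload) {
            // Entire body is available at once.
        });

    // Unbuffered mode: callbacks for headers, each chunk of body data, and
    // completion, suitable for server-sent events that never really finish.
    request->set_unbuffered_request_callbacks(
        [](auto const& headers, auto status_code) { /* headers arrived */ },
        [](ReadonlyBytes chunk) { /* body data as it streams in */ },
        [](bool success) { /* the connection eventually closed */ });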