Due to the removal of the local ca-certificates, we need to use the system's
certificates. However, on Android they are stored across multiple files.
Ladybird doesn't support multiple certificate files yet, so we just
concatenate all of them into one big file.
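For illustration only (the real concatenation happens at packaging time, not necessarily in C++), a sketch of the idea using std::filesystem; the output path is made up, and /system/etc/security/cacerts is typically where Android keeps the system store:

```cpp
#include <filesystem>
#include <fstream>

// Sketch: merge every PEM file in Android's system CA directory into a
// single bundle that can be handed to a TLS backend expecting one file.
int main()
{
    std::filesystem::path cert_dir = "/system/etc/security/cacerts";
    std::ofstream bundle("/data/local/tmp/cacert-bundle.pem", std::ios::binary);

    for (auto const& entry : std::filesystem::directory_iterator(cert_dir)) {
        if (!entry.is_regular_file())
            continue;
        std::ifstream cert(entry.path(), std::ios::binary);
        bundle << cert.rdbuf() << '\n'; // keep PEM blocks separated
    }
    return bundle.good() ? 0 : 1;
}
```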
This adds an overlay port for curl that enables the features required for
HTTP/3.
This is not quite compatible with the upstream vcpkg.json, because
enabling HTTP/2 makes it use the default SSL backend, which is
sectransp on macOS and schannel on Windows. These backends are not
compatible with ngtcp2. Additionally, we cannot build curl with
multiple SSL backends when using ngtcp2.
I couldn't find a way to selectively enable or disable dependencies based
on which features are enabled, so I made HTTP/2 pick OpenSSL in our
overlay port. Upstream vcpkg will likely want to support the wolfSSL and
GnuTLS backends for ngtcp2, so there'll be additional work to get this
into upstream.
Using ladybird_lib() adds all sorts of extra goodies to the target, such
as installation, soname setting, a custom target name, adding lagom- to
the name of the library, etc. All we need for this impl lib is the
generated sources support, so move to a bare add_library() call instead.
The previous call was also wrong, and always created liblagom-TYPE.so.
By default, when multiple requests are started to a newly seen origin, curl
will not wait for the first connection to be established to find out whether
the server supports multiplexing; instead it opens a new connection for each
request (including a new TLS session and so on).
This is particularly an issue for initial page load, where a complex
website could, for example, request tens of items at once (e.g. a bunch
of scripts).
We can be kinder to servers that support multiplexing by telling curl
to wait until an initial connection is established to determine whether
multiplexing is supported.
On my machine and internet connection, this reduces the number of
connections to github.githubassets.com on initial load of
https://github.com/LadybirdBrowser/ladybird from 12 to 2.
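The curl option that controls this behavior is `CURLOPT_PIPEWAIT`; a minimal sketch of enabling it on an easy handle (the rest of the handle setup is omitted):

```cpp
#include <curl/curl.h>

// Sketch: ask curl to wait for an in-progress connection to the same host
// to settle (so multiplexing support is known) instead of opening a new
// connection per request. Handles added to the same multi handle can then
// share one HTTP/2 or HTTP/3 connection.
static CURL* make_request_handle(char const* url)
{
    CURL* easy = curl_easy_init();
    if (!easy)
        return nullptr;
    curl_easy_setopt(easy, CURLOPT_URL, url);
    curl_easy_setopt(easy, CURLOPT_PIPEWAIT, 1L); // prefer multiplexing over new connections
    return easy;
}
```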
We haven't required a local copy of the ca-certificates since switching
to OpenSSL as the backend for TLS. Remove the script to download the
PEM file, and update the tests to use the system's CA certificates.
Without these fixes, RequestServer was likely to crash if a client
(e.g. WebContent) crashed, because there was no error handling for when
writing to the client failed.
This is particularly an issue because RequestServer instances are shared,
so the crash would then take down every other client of RequestServer.
Then, because a replacement RequestServer instance is not currently spun up,
it becomes impossible to start any clients that need a RequestServer
instance. Recreating a RequestServer should also be handled, but that's
not in the scope of this change.
We can tell curl that we failed to write data to the client and that
the request should be aborted by returning `CURL_WRITEFUNC_ERROR` from
the write callback.
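A sketch of what such a write callback looks like; `write_to_client` is a stand-in for RequestServer's actual client-write path (here it just writes to a FILE* so the example is self-contained):

```cpp
#include <cstdio>
#include <curl/curl.h>

// Stand-in for the real client-write path.
static bool write_to_client(void* user_data, char const* data, size_t length)
{
    auto* out = static_cast<FILE*>(user_data);
    return fwrite(data, 1, length, out) == length;
}

static size_t on_data_received(char* ptr, size_t size, size_t nmemb, void* user_data)
{
    size_t total = size * nmemb;
    if (!write_to_client(user_data, ptr, total)) {
        // Returning CURL_WRITEFUNC_ERROR makes curl abort the transfer with
        // CURLE_WRITE_ERROR instead of continuing to feed us data.
        return CURL_WRITEFUNC_ERROR;
    }
    return total;
}

// Registration on the easy handle:
// curl_easy_setopt(easy, CURLOPT_WRITEFUNCTION, on_data_received);
// curl_easy_setopt(easy, CURLOPT_WRITEDATA, some_file_pointer);
```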
It is also possible for requests to be destroyed while they still have
buffered data; this is normal if the client disappears
(i.e. ConnectionFromClient is destroyed) or the request is cancelled by
the client. We log a warning in case this is not expected, to assist
with debugging related issues.
This makes the IPC send path wait for the socket to become writable.
This solves a triangular deadlock problem that happened in the following case
(copied from https://github.com/LadybirdBrowser/ladybird/issues/1816):
- The WebContent process is spinning on
`send_sync_but_allow_failure` waiting for the UI process to respond
- The UI process is spinning on `send_sync_but_allow_failure`, waiting
for RequestServer to respond
- RequestServer is stuck in this loop, trying to write to the
WebContent's socket file (when I attach to RS, we are always in the
sched_yield call, so we're spinning on EAGAIN).
For me, the issue was reliably reproducible on Google Maps, and with this
change we no longer deadlock there.
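As a generic sketch (not the actual LibIPC code) of what "wait for the socket to become writable" means, here is a blocking write that waits in poll(2) rather than spinning on EAGAIN:

```cpp
#include <cerrno>
#include <poll.h>
#include <unistd.h>

// Sketch: retry a non-blocking write, but block in poll() until the socket
// is writable again instead of busy-looping on EAGAIN.
static ssize_t write_fully(int fd, char const* data, size_t length)
{
    size_t written = 0;
    while (written < length) {
        ssize_t rc = ::write(fd, data + written, length - written);
        if (rc >= 0) {
            written += static_cast<size_t>(rc);
            continue;
        }
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            return -1; // real error, propagate to the caller

        struct pollfd pfd;
        pfd.fd = fd;
        pfd.events = POLLOUT;
        pfd.revents = 0;
        if (::poll(&pfd, 1, -1) < 0 && errno != EINTR)
            return -1;
    }
    return static_cast<ssize_t>(written);
}
```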
The assertion in `WebSocketImplCurl::did_connect()` keeps failing for
multiple websockets when loading `https://www.speedtest.net/` since
commit 14ebcd4. This fixes that by checking whether something went wrong
and returning false, letting the caller handle the failure.
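A sketch of the pattern, with `CURLINFO_ACTIVESOCKET` standing in for whichever query can fail during setup; the point is to return false to the caller instead of asserting:

```cpp
#include <curl/curl.h>

// Sketch (not the exact did_connect() body): query curl for the information
// we need and report failure instead of asserting on it.
static bool setup_notifiers_for(CURL* easy)
{
    curl_socket_t socket_fd = CURL_SOCKET_BAD;
    if (curl_easy_getinfo(easy, CURLINFO_ACTIVESOCKET, &socket_fd) != CURLE_OK)
        return false; // previously this would have tripped a VERIFY()
    if (socket_fd == CURL_SOCKET_BAD)
        return false;
    // ... register read/write notifiers on socket_fd ...
    return true;
}
```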
The HTTPS server used by WPT will close TLS connections without sending
a "close notify" alert. For responses that did not have a Content-Length
header, curl treats this as an error.
Instead of wrapping all non-movable members of TransportSocket in OwnPtr
to keep it movable, make TransportSocket itself non-movable and wrap it
in OwnPtr.
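A sketch of the resulting shape, using AK's non-copyable/non-movable markers (the member and owner names here are illustrative):

```cpp
#include <AK/Noncopyable.h>
#include <AK/OwnPtr.h>

// Sketch: instead of wrapping each non-movable member of TransportSocket in
// an OwnPtr just to keep the whole type movable, mark the type itself as
// non-movable and let owners hold it through an OwnPtr.
class TransportSocket {
    AK_MAKE_NONCOPYABLE(TransportSocket);
    AK_MAKE_NONMOVABLE(TransportSocket);

public:
    TransportSocket() = default;
    // ... non-movable members can now live here directly ...
};

struct Connection {
    OwnPtr<TransportSocket> transport; // ownership moves by moving the pointer
};
```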
This is no longer needed since `IPv6Address::from_string` supports
square brackets. After the update to curl, `CURLOPT_RESOLVE` now
supports replacing IPv6 hosts as well.
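For reference, a sketch of the documented `CURLOPT_RESOLVE` entry format ("HOST:PORT:ADDRESS[,ADDRESS]"), using placeholder addresses from the documentation ranges:

```cpp
#include <curl/curl.h>

// Sketch: pre-populate curl's resolver cache so a request to example.com:443
// uses addresses we resolved ourselves, including an IPv6 one.
static struct curl_slist* build_resolve_list()
{
    struct curl_slist* resolve_list = nullptr;
    resolve_list = curl_slist_append(resolve_list,
        "example.com:443:192.0.2.1,2001:db8::1");
    return resolve_list;
}

// Usage:
// curl_easy_setopt(easy, CURLOPT_RESOLVE, resolve_list);
// ...and once the request is done: curl_slist_free_all(resolve_list);
```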
This has been a longstanding ergonomic issue with our IPC compiler. Non-
trivial types were previously passed by const&. So if we wanted to avoid
expensive copies, we would have to const_cast and move the data.
We now pass ownership of all transferred data to the client subclasses.
This lets us remove the const_casts from these methods, and also avoids
some expensive copies that we previously didn't bother to const_cast away.
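A sketch of the before/after calling convention with a hypothetical handler name; the real signatures are generated by the IPC compiler:

```cpp
#include <AK/ByteBuffer.h>
#include <AK/Vector.h>
#include <utility>

// Before: non-trivial arguments arrived as const&, so avoiding a copy meant
// const_cast-ing and moving out of a const reference:
// virtual void did_receive_data(Vector<ByteBuffer> const& chunks);

// After: the generated code hands the decoded data over by value, so the
// handler can take ownership (and move it onward) without a const_cast.
void did_receive_data(Vector<ByteBuffer> chunks)
{
    auto owned = std::move(chunks); // we own the data; no copy was made
    (void)owned;
}
```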
This implementation can be further improved in the future by ripping
out a lot of the manual logic in LibWebSocket and relying on libcurl to
parse our message payloads. But for now, this uses the 'raw mode' of
curl websockets in connect-only mode to allow for somewhat seamless
integration into our event loop.
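A sketch of the connect-only raw-mode setup (error handling and the event-loop integration are omitted, and the actual implementation may drive this through the multi interface instead):

```cpp
#include <curl/curl.h>

// Sketch: let curl perform the connection and WebSocket upgrade, then hand
// us the raw byte stream; frame parsing stays in LibWebSocket.
static CURL* open_raw_websocket(char const* url)
{
    CURL* easy = curl_easy_init();
    if (!easy)
        return nullptr;
    curl_easy_setopt(easy, CURLOPT_URL, url);                         // ws:// or wss://
    curl_easy_setopt(easy, CURLOPT_CONNECT_ONLY, 2L);                 // stop after the WebSocket upgrade
    curl_easy_setopt(easy, CURLOPT_WS_OPTIONS, (long)CURLWS_RAW_MODE); // no frame decoding in curl

    if (curl_easy_perform(easy) != CURLE_OK) {
        curl_easy_cleanup(easy);
        return nullptr;
    }
    // After the upgrade, raw bytes can be exchanged on the handle (e.g. via
    // curl_easy_send()/curl_easy_recv()), driven by notifiers on its socket.
    return easy;
}
```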
Explicitly link final targets with OpenSSL to ensure that the vcpkg
version is loaded instead of the system one.
Before this change we would inherit `libcrypto.so` and `libssl.so` from
other dependencies, like Qt, that do not have their RPATH rewritten.
This would cause the loader to prefer the system libraries over the
vcpkg ones, causing all sorts of version mismatch issues.
The effectiveness of this change can be verified with
`readelf -d ./bin/Ladybird`: `libcrypto.so` and `libssl.so` now show up as
direct dependencies, whereas before they did not appear. Additionally, `ldd`
will show `libcrypto.so` and `libssl.so` resolving to the vcpkg builds.
Before, if something went wrong with a DNS lookup and the response contained
unrelated records (i.e. not A or AAAA), we would still attempt to build a
resolve list. This resulted in curl errors related to the option itself,
which were displayed to the user as "unknown network error".
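A sketch with illustrative types (the real ones live in LibDNS) of only letting A/AAAA answers feed the resolve list, and reporting a proper DNS failure otherwise:

```cpp
#include <AK/Error.h>
#include <AK/String.h>
#include <AK/Try.h>
#include <AK/Vector.h>

// Illustrative record shape; the real types come from LibDNS.
enum class RecordType { A, AAAA, CNAME, TXT };
struct Record {
    RecordType type;
    String address;
};

// Only A/AAAA answers contribute addresses; anything else (CNAME chains,
// TXT, ...) is skipped, and a lookup with no usable address becomes a DNS
// error instead of a malformed CURLOPT_RESOLVE entry.
static ErrorOr<Vector<String>> usable_addresses(Vector<Record> const& records)
{
    Vector<String> addresses;
    for (auto const& record : records) {
        if (record.type != RecordType::A && record.type != RecordType::AAAA)
            continue;
        TRY(addresses.try_append(record.address));
    }
    if (addresses.is_empty())
        return Error::from_string_literal("DNS lookup returned no usable addresses");
    return addresses;
}
```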
Previously, we leaked the `curl_slist`s on every request. This change also
validates the pointer we get from `curl_slist_append` before setting the
option.
Also, use the `set_option` helper for CURLOPT_RESOLVE, since it logs an
error when setting the option fails.
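A sketch of the intended lifetime handling: check the pointer returned by `curl_slist_append` and free the list once the request is done, rather than leaking it per request:

```cpp
#include <curl/curl.h>

// Sketch: small RAII owner for a curl_slist used with CURLOPT_RESOLVE.
struct ResolveList {
    struct curl_slist* list { nullptr };

    bool append(char const* entry)
    {
        auto* new_list = curl_slist_append(list, entry);
        if (!new_list)
            return false; // allocation failed; the old list is left intact
        list = new_list;
        return true;
    }

    ~ResolveList()
    {
        if (list)
            curl_slist_free_all(list);
    }
};
```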