This commit removes the /J flag when compiling with clang-cl. The /J
flag changes the default char type from signed char to unsigned char
which is the opposite of what we want. Now char will be signed, which is
the default on MSVC and what we set on Linux.
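As a quick illustration (a sketch, not part of the change itself), the
new expectation can be asserted directly:

#include <type_traits>

// Plain char should be signed now that /J is no longer passed.
static_assert(std::is_signed_v<char>, "plain char must be signed");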
This commit increases compilation speed when using clang-cl. It also
decreases object size and adds -fstrict-aliasing to match the default
on Linux. The linker option added is not recognized by link.exe, so we
enable lld-link by default, which will also increase build speed by
itself and allow for LTO.
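For illustration only (not code from this change), -fstrict-aliasing
lets the optimizer assume that accesses through unrelated pointer
types never alias:

int example(int* i, float* f)
{
    *i = 1;
    *f = 2.0f;
    // Under strict aliasing the store through f cannot modify *i,
    // so the compiler may fold this into 'return 1;'.
    return *i;
}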
This makes the build system aware of which macOS version we're
targeting, and will make it an error to use newer APIs without
explicitly checking their availability.
Note that the js REPL CI job still sets the deployment target to 11.0
explicitly.
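Guarded uses of newer APIs then look roughly like this (illustrative
sketch; the API version is just an example):

void use_newer_api()
{
    if (__builtin_available(macOS 12.0, *)) {
        // Call the API introduced after the deployment target here.
    } else {
        // Fall back to what the deployment target guarantees.
    }
}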
An overlay port is required to add the `stdc-iec-559` and `install-pc`
patches.
The `stdc-iec-559` patch is required because Clang doesn't define
`__STDC_IEC_559__`. However, glibc and musl define it if `__GCC_IEC_559`
is not defined. The macro is taken from glibc source code.
The `install-pc` patch is required because libtommath doesn't install
the pkg-config files when building statically, compromising our
ability to find it during the build.
Clang: https://clang.llvm.org/c_status.html#:~:text=Yes-,IEC%2060559%20support,-Unknown
glibc: https://sourceware.org/git/?p=glibc.git;a=blob;f=include/stdc-predef.h
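The relevant glibc logic is roughly the following (paraphrased sketch,
not the exact patch contents):

/* __STDC_IEC_559__ ends up defined whenever the compiler does not
   provide __GCC_IEC_559 at all, which is the Clang case. */
#ifdef __GCC_IEC_559
# if __GCC_IEC_559 > 0
#  define __STDC_IEC_559__ 1
# endif
#else
# define __STDC_IEC_559__ 1
#endif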
This will allow us to gradually build up official support for Windows,
as only targets we have explicitly marked as supported on Windows will
be built and run during CI.
If we are doing a statically linked build, there is no need for full
`-fPIC`, just `-fpie` is enough (which lets the compiler assume that
global variables can be accessed directly without the GOT, etc.). CMake
does the right thing already when we set the `POSITION_INDEPENDENT_CODE`
property.
The root had an identical copy of what was being done in Meta/Lagom,
so now we ensure this is still included globally but isolate it in its
own CMake module to make the sanitizer config easier to discover.
We don't use C++20 modules, and CMake will look for clang-scan-deps in
clang builds if this setting is not disabled. This is a problem when
clang-scan-deps is not installed, as it will cause CMake to fail to link
hidden visibility libraries properly.
The implicit default CMAKE_OSX_SYSROOT was a workaround in CMake for
macOS ~10.8. As of CMake 4.x, CMake expects macOS host compilers to have
their own default sysroot detection logic. However, upstream LLVM does
not actually do this; only Apple Clang does. To keep supporting
Homebrew Clang and manually compiled Clang from llvm/llvm-project, we
need to
set the sysroot explicitly.
The behavior difference and lack of default detection logic in the clang
driver is tracked at https://gitlab.kitware.com/cmake/cmake/-/issues/26863
This adds support for async iterators of the form:
async iterable<value_type>;
async iterable<value_type>(/* arguments... */);
It does not yet support the value pairs of the form:
async iterable<key_type, value_type>;
async iterable<key_type, value_type>(/* arguments... */);
Async iterators have an optional `return` data property. There's not a
particularly good way to know what interfaces implement this property.
So this adds a new extended attribute, DefinesAsyncIteratorReturn, which
interfaces can use to declare their support.
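For example, an interface could opt in like this (hypothetical
interface; only the extended attribute and the iterable declaration
come from this change):

[Exposed=Window, DefinesAsyncIteratorReturn]
interface ExampleStream {
    async iterable<any>;
};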
These generate what seem to be nonsense warnings on Function and
ByteBuffer; they *should* be investigated at some point, but they
don't provide anything useful right now.
Similar to the existing macros for compile options and link options,
this macro wraps the command line definitions for swiftc in a way that
avoids warnings about conditional compilation flags not having values.
Add a new JSON file describing at-rule descriptors, and then use it to
generate a DescriptorID enum, plus code to check whether a given
descriptor is accepted in a given at-rule.
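The generated code presumably looks something like this (hypothetical
sketch; only the DescriptorID name comes from this change):

enum class AtRuleID {
    // ... one enumerator per supported at-rule ...
};

enum class DescriptorID {
    // ... one enumerator per descriptor in the JSON file ...
};

// Generated check: may this descriptor appear in the given at-rule?
bool at_rule_supports_descriptor(AtRuleID, DescriptorID);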
Turns out we need this directory to pass to the -frontend command
for creating the interop header, so refactor the whole find module
to find it on each platform.
Add the proper annotations for the Cell and Cell::Visitor classes to be
visible in Swift. This lets us remove some OpaquePointer shenanigans in
the Swift bindings.
This patch adds a workaround for a Swift issue where boolean bitfields
with getters and setters in SWIFT_UNSAFE_REFERENCE types are improperly
imported, causing an ICE.
In CMake 4.0, having a minimum policy version set to less than 3.5 is
a hard error at configure time. Add an override until the issue can be
resolved in vcpkg itself.
This involves yeeting the 'invalid' union member as it was not really
checked against properly anyway; now the 'invalid' state is simply
StringData*{nullptr}, which was assumed to not exist previously.
Note that this is still accessing inactive union members, but is
promising to the compiler that they're fine where they are (the provided
debug macro AK_STRINGBASE_VERIFY_LAUNDER_DEBUG makes the
would-be-UB-if-not-for-launder ops verify that the operation is
correct).
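A rough illustration of the pattern being described (type and member
names are stand-ins, not the real AK code):

#include <new> // std::launder

struct StringData;
struct ShortString { unsigned char bytes[sizeof(StringData*)]; };

union Repr {
    StringData* data; // nullptr doubles as the 'invalid' state
    ShortString short_string;
};

// Read the pointer member no matter which member was written last,
// telling the compiler to treat whatever lives at this address as a
// valid StringData*. As noted above, this is still formally an
// inactive-member access.
inline StringData* raw_data(Repr const& repr)
{
    return *std::launder(&repr.data);
}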
Should fix the GCC build.
There are two changes happening here: a correctness fix, and an
optimization. In theory they are unrelated, but the optimization
actually paves the way for the correctness fix.
Before this commit, the HTML tokenizer would attempt to look for named
character references by checking from after the `&` until the end of
m_decoded_input, which meant that it was unable to recognize things like
named character references that are inserted via `document.write` one
byte at a time. For example, if `&notin;` was written one-byte-at-a-time
with `document.write`, then the tokenizer would only check against `n`
since that's all that would exist at the time of the check and therefore
erroneously conclude that it was an invalid named character reference.
This commit modifies the approach taken for named character reference
matching by using a trie-like structure (specifically, a deterministic
acyclic finite state automaton or DAFSA), which allows for efficiently
matching one-character-at-a-time and therefore it is able to pick up
matching where it left off after each code point is consumed.
Note: Because it's possible for a partial match to not actually develop
into a full match (e.g. `&notindo` which could lead to `&notindot;`),
some backtracking is performed after-the-fact in order to only consume
the code points within the longest match found (e.g. `&notindo` would
backtrack back to `&not`).
With this new approach, `document.write` being called one-byte-at-a-time
is handled correctly, which allows for passing more WPT tests, with the
most directly relevant tests being
`/html/syntax/parsing/html5lib_entities01.html`
and
`/html/syntax/parsing/html5lib_entities02.html`
when run with `?run_type=write_single`. Additionally, the implementation
now better conforms to the language of the spec (and resolves a FIXME)
because exactly the matched characters are consumed and nothing more, so
SWITCH_TO is able to be used as the spec says instead of RECONSUME_IN.
The new approach is also an optimization:
- Instead of a linear search using `starts_with`, the usage of a DAFSA
means that it is always aware of which characters can lead to a match
at any given point, and will bail out whenever a match is no longer
possible.
- The DAFSA is able to take advantage of the note in the section
`13.5 Named character references` that says "This list is static and
will not be expanded or changed in the future." and tailor its Node
struct accordingly to tightly pack each node's data into 32-bits.
Together with the inherent DAFSA property of redundant node
deduplication, the amount of data stored for named character reference
matching is minimized.
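A sketch of what such a tightly packed node can look like (field names
and exact bit widths here are illustrative, not the actual layout):

#include <cstdint>

struct Node {
    uint32_t character : 8;          // code unit for this edge
    uint32_t number : 8;             // words reachable through this node
    uint32_t end_of_word : 1;        // a named char ref ends here
    uint32_t last_sibling : 1;       // last entry in the child list
    uint32_t first_child_index : 14; // start of this node's child list
};
static_assert(sizeof(Node) == 4);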
In my testing:
- A benchmark tokenizing an arbitrary set of HTML test files was about
1.23x faster (2070ms to 1682ms).
- A benchmark tokenizing a file with tens of thousands of named
character references mixed in with truncated named character
references and arbitrary ASCII characters/ampersands runs about 8x
faster (758ms to 93ms).
- The size of `liblagom-web.so` was reduced by 94.96KiB.
Some technical details:
A DAFSA (deterministic acyclic finite state automaton) is essentially a
trie flattened into an array, but it also uses techniques to minimize
redundant nodes. This provides fast lookups while minimizing the
required data size, but normally does not allow for associating data
related to each word. However, by adding a count of the number of
possible words from each node, it becomes possible to also use it to
achieve minimal perfect hashing for the set of words (which allows going
from word -> unique index as well as unique index -> word). This allows
us to store a second array of data so that the DAFSA can be used as a
lookup for e.g. the associated code points.
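As an illustration of the index computation (a sketch reusing the Node
layout from the earlier snippet, not the actual implementation):

#include <cstddef>
#include <cstdint>
#include <span>
#include <string_view>

// Returns the 1-based unique index of `word` among the words in the
// DAFSA, or 0 if `word` is not in the set. Children of a node are
// stored contiguously starting at first_child_index, the child list is
// terminated by a node with last_sibling set, index 0 is a pseudo-root
// whose children hold the first characters, and a first_child_index of
// 0 means "no children".
size_t unique_index(std::span<Node const> nodes, std::string_view word)
{
    size_t index = 0;
    uint32_t child = nodes[0].first_child_index;
    bool ends_word = false;
    for (char c : word) {
        if (child == 0)
            return 0; // ran out of children before the word ended
        uint32_t i = child;
        while (nodes[i].character != static_cast<uint8_t>(c)) {
            index += nodes[i].number; // skip this sibling's subtree
            if (nodes[i].last_sibling)
                return 0; // no child matches this character
            ++i;
        }
        ends_word = nodes[i].end_of_word;
        if (ends_word)
            ++index; // counts prefixes, and finally the word itself
        child = nodes[i].first_child_index;
    }
    return ends_word ? index : 0;
}

The second array mentioned above is then indexed with this value to
retrieve the associated code point(s).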
For the Swift implementation, the new NamedCharacterReferenceMatcher
was used to satisfy the previous API and the tokenizer was left alone
otherwise. In the future, the Swift implementation should be updated to
use the same implementation for its NamedCharacterReference state as
the updated C++ implementation.
LibCore's list of ignored header files for Swift was missing the
Apple-only files on non-Apple platforms. Additionally, any generic
glue code
cannot use -fobjc-arc, so we need to rely on -fblocks only.
This has two slightly different implementations for ARC and non-ARC
compiler modes. The main idea is to store a block pointer as our
closure and use either ARC magic or BlockRuntime methods to manage
the memory for the block. Things are complicated by the fact that
we don't yet force-enable Swift, so we can't count on the swift.org
LLVM fork being our compiler toolchain. The patch adds some CMake
checks and ifdefs to still support environments without support
for blocks or ARC.
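A rough sketch of the idea (not the actual class; this assumes a
Clang-based toolchain with -fblocks and, for the non-ARC path, the
Blocks runtime headers):

#include <Block.h>

class BlockClosure {
public:
    using BlockType = void (^)();

#if __has_feature(objc_arc)
    // Under ARC, retaining a block pointer has Block_copy semantics,
    // so a strong member is all we need; release is automatic.
    explicit BlockClosure(BlockType block)
        : m_block(block)
    {
    }
#else
    // Without ARC, copy the block off the stack and release it
    // ourselves through the BlockRuntime API.
    explicit BlockClosure(BlockType block)
        : m_block(Block_copy(block))
    {
    }
    ~BlockClosure() { Block_release(m_block); }
    BlockClosure(BlockClosure const&) = delete;
    BlockClosure& operator=(BlockClosure const&) = delete;
#endif

    void operator()() const { m_block(); }

private:
    BlockType m_block;
};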