diff --git a/Documentation/AdvancedBuildInstructions.md b/Documentation/AdvancedBuildInstructions.md
index 87e86be85fa..1c2b651a754 100644
--- a/Documentation/AdvancedBuildInstructions.md
+++ b/Documentation/AdvancedBuildInstructions.md
@@ -60,7 +60,7 @@ For more information on how the CMake cache works, see the CMake guide for [Runn
## Tests
-For information on running host and target tests, see [Running Tests](RunningTests.md). The documentation there also contains useful information for debugging CI test failures.
+For information on running host and target tests, see [Testing](Testing.md). The documentation there also contains useful information for debugging CI test failures.
## Clang-format updates
diff --git a/Documentation/Browser/Patterns.md b/Documentation/Browser/Patterns.md
index 520bf2de3cb..673b4a60df8 100644
--- a/Documentation/Browser/Patterns.md
+++ b/Documentation/Browser/Patterns.md
@@ -140,36 +140,3 @@ namespace, e.g. `Fetch::Request` vs `Fetch::Infrastructure::Request`.
The `.cpp`, `.h`, and `.idl` files for a given interface should all be in the same directory, unless
the implementation is hand-written when it cannot be generated from IDL. In those cases, no IDL file
is present and code should be placed in `Bindings/`.
-
-## Testing
-
-Every feature or bug fix added to LibWeb should have a corresponding test in `Tests/LibWeb`.
-The test should be either a Text, Layout or Ref test depending on the feature.
-
-LibWeb tests can be run in one of two ways. The easiest is to use the `ladybird.sh` script. The LibWeb tests are
-registered with CMake as a test in `Ladybird/CMakeLists.txt`. Using the builtin test filtering, you can run all tests
-with `Meta/ladybird.sh test` or run just the LibWeb tests with `Meta/ladybird.sh test LibWeb`. The second
-way is to invoke the headless browser test runner directly. See the invocation in `Ladybird/CMakeLists.txt` for the
-expected command line arguments.
-
-Running `Tests/LibWeb/add_libweb_test.py your-new-test-name` will create a new test HTML file in
-`Tests/LibWeb/Text/input/your-new-test-name.html` with the correct boilerplate code for a Text test — along with
-a corresponding expectations file in `Tests/LibWeb/Text/expected/your-new-test-name.txt`.
-
-After you update/replace the generated boilerplate in your `your-new-test-name.html` test file with your actual test,
-running `./Meta/ladybird.sh run headless-browser --run-tests "${LADYBIRD_SOURCE_DIR}/Tests/LibWeb" --rebaseline -f Text/input/foobar.html` will
-regenerate the corresponding expectations file — to match the actual output from your updated test (where
-`/opt/ladybird` should be replaced with the absolute path your ladybird clone in your local environment).
-
-Future versions of the `add_libweb_test.py` script will support Layout and Ref tests.
-
-### Text tests
-
-Text tests are intended to test Web APIs that don't have a visual representation. They are written in JavaScript and
-run in a headless browser. Each test has a test function in a script tag that exercises the API and prints expected
-results using the `println` function. `println` calls are accumulated into an output test file, which is then
-compared to the expected output file by the test runner.
-
-Text tests can be either sync or async. Async tests should use the `done` callback to signal completion.
-Async tests are not necessarily run in an async context, they simply require the test function to signal completion
-when it is done. If an async context is needed to test the API, the lambda passed to `test` can be async.
diff --git a/Documentation/README.md b/Documentation/README.md
index 1d2e428f91d..1457fb671c1 100644
--- a/Documentation/README.md
+++ b/Documentation/README.md
@@ -8,7 +8,7 @@ you are welcome to ask on [Discord](../README.md#get-in-touch-and-participate).
* [Build Instructions](BuildInstructionsLadybird.md)
* [Advanced Build Instructions](AdvancedBuildInstructions.md)
* [Troubleshooting](Troubleshooting.md)
-* [Running Tests](RunningTests.md)
+* [Testing](Testing.md)
* [Profiling the Build](BuildProfilingInstructions.md)
## Configuring Editors
diff --git a/Documentation/RunningTests.md b/Documentation/RunningTests.md
deleted file mode 100644
index 7f46bea7efe..00000000000
--- a/Documentation/RunningTests.md
+++ /dev/null
@@ -1,97 +0,0 @@
-# Running Tests
-
-To reproduce a CI failure, see the section on [Running with Sanitizers](#running-with-sanitizers).
-
-The simplest way to run tests locally is to use the `default` preset from ``CMakePresets.json``:
-
-```sh
-cmake --preset default
-cmake --build --preset default
-ctest --preset default
-```
-
-If you want to avoid building and running LibWeb tests, you can use a Lagom-only build.
-
-```sh
-cmake -GNinja -S Meta/Lagom -B Build/lagom
-```
-
-The tests can be run via ninja after doing a build. Note that `test-js` requires the `LADYBIRD_SOURCE_DIR` environment variable to be set
-to the root of the ladybird source tree.
-
-```sh
-# /path/to/ladybird repository
-export LADYBIRD_SOURCE_DIR=${PWD}
-cd Build/lagom
-ninja
-ninja test
-```
-
-To see the stdout/stderr output of failing tests, the recommended way is to set the environment variable [`CTEST_OUTPUT_ON_FAILURE`](https://cmake.org/cmake/help/latest/manual/ctest.1.html#options) to 1.
-
-```sh
-CTEST_OUTPUT_ON_FAILURE=1 ninja test
-
-# or, using ctest directly...
-ctest --output-on-failure
-```
-
-# Running with Sanitizers
-
-CI runs host tests with Address Sanitizer and Undefined Sanitizer instrumentation enabled. These tools catch many
-classes of common C++ errors, including memory leaks, out of bounds access to stack and heap allocations, and
-signed integer overflow. For more info on the sanitizers, check out the Address Sanitizer [wiki page](https://github.com/google/sanitizers/wiki),
-or the Undefined Sanitizer [documentation](https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html) from clang.
-
-Note that a sanitizer build will take significantly longer than a non-sanitizer build, and will mess with caches in tools such as `ccache`.
-The sanitizers can be enabled with the `-DENABLE_FOO_SANITIZER` set of flags.
-
-The simplest way to enable sanitizers is to use the `Sanitizer` preset.
-
-```sh
-cmake --preset Sanitizer
-cmake --build --preset Sanitizer
-ctest --preset Sanitizer
-```
-
-Or from a Lagom build:
-
-To ensure that the test behaves the same way as CI, make sure to set the ASAN_OPTIONS and UBSAN_OPTIONS appropriately.
-The Sanitizer test preset already sets these environment variables.
-
-```sh
-export ASAN_OPTIONS='strict_string_checks=1:check_initialization_order=1:strict_init_order=1:detect_stack_use_after_return=1:allocator_may_return_null=1'
-export UBSAN_OPTIONS='print_stacktrace=1:print_summary=1:halt_on_error=1'
-cmake -GNinja -S Meta/Lagom -B Build/lagom -DENABLE_ADDRESS_SANITIZER=ON -DENABLE_UNDEFINED_SANITIZER=ON
-cd Build/lagom
-ninja
-CTEST_OUTPUT_ON_FAILURE=1 LADYBIRD_SOURCE_DIR=${PWD}/../.. ninja test
-```
-
-# Running the Web Platform Tests
-
-The Web Platform Tests can be run with the `WPT.sh` script. This script can also be used to compare the results of two
-test runs.
-
-Enabling the Qt chrome is recommended when running the Web Platform Tests on MacOS. This can be done by running the
-following command:
-
-```sh
-cmake -GNinja Build/ladybird -DENABLE_QT=ON
-```
-
-Example usage:
-
-```sh
-# Run the WPT tests then run them again, comparing the results from the two runs
-./Meta/WPT.sh run --log expectations.log css
-git checkout my-css-change
-./Meta/WPT.sh compare --log results.log expectations.log css
-```
-
-```sh
-# Pull the latest changes from the upstream WPT repository
-./Meta/WPT.sh update
-# Run all of the Web Platform Tests, outputting the results to results.log
-./Meta/WPT.sh run --log results.log
-```
diff --git a/Documentation/Testing.md b/Documentation/Testing.md
new file mode 100644
index 00000000000..2c503cbec89
--- /dev/null
+++ b/Documentation/Testing.md
@@ -0,0 +1,160 @@
+# Testing Ladybird
+
+Tests are located in `Tests/`, with a directory for each library.
+
+Every feature or bug fix added to LibWeb should have a corresponding test in `Tests/LibWeb`.
+The test should be either a Text, Layout, Ref, or Screenshot test depending on the feature.
+Tests of internal C++ code go in their own `TestFoo.cpp` file in `Tests/LibWeb`.
+
+## Running Tests
+
+> [!NOTE]
+> To reproduce a CI failure, see the section on [Running with Sanitizers](#running-with-sanitizers).
+
+The easiest way to run tests is to use the `ladybird.sh` script. The LibWeb tests are registered with CMake as a test in
+`Ladybird/CMakeLists.txt`. Using the built-in test filtering, you can run all tests with `Meta/ladybird.sh test` or run
+just the LibWeb tests with `Meta/ladybird.sh test LibWeb`. The second way is to invoke the headless browser test runner
+directly. See the invocation in `Ladybird/CMakeLists.txt` for the expected command line arguments.
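+
+For example, using the first approach from the repository root:
+
+```sh
+# Run all registered tests
+Meta/ladybird.sh test
+
+# Run only the LibWeb tests
+Meta/ladybird.sh test LibWeb
+```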
+
+A third way is to invoke `ctest` directly. The simplest method is to use the `default` preset from `CMakePresets.json`:
+
+```sh
+cmake --preset default
+cmake --build --preset default
+ctest --preset default
+```
+
+If you want to avoid building and running LibWeb tests, you can use a Lagom-only build.
+
+```sh
+cmake -GNinja -S Meta/Lagom -B Build/lagom
+```
+
+The tests can be run via ninja after doing a build. Note that `test-js` requires the `LADYBIRD_SOURCE_DIR` environment variable to be set
+to the root of the ladybird source tree.
+
+```sh
+# /path/to/ladybird repository
+export LADYBIRD_SOURCE_DIR=${PWD}
+cd Build/lagom
+ninja
+ninja test
+```
+
+To see the stdout/stderr output of failing tests, the recommended way is to set the environment variable [`CTEST_OUTPUT_ON_FAILURE`](https://cmake.org/cmake/help/latest/manual/ctest.1.html#options) to 1.
+
+```sh
+CTEST_OUTPUT_ON_FAILURE=1 ninja test
+
+# or, using ctest directly...
+ctest --output-on-failure
+```
+
+### Running with Sanitizers
+
+CI runs host tests with Address Sanitizer and Undefined Sanitizer instrumentation enabled. These tools catch many
+classes of common C++ errors, including memory leaks, out of bounds access to stack and heap allocations, and
+signed integer overflow. For more info on the sanitizers, check out the Address Sanitizer [wiki page](https://github.com/google/sanitizers/wiki),
+or the Undefined Sanitizer [documentation](https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html) from clang.
+
+Note that a sanitizer build will take significantly longer than a non-sanitizer build, and will mess with caches in tools such as `ccache`.
+The sanitizers can be enabled with the `-DENABLE_FOO_SANITIZER` set of flags.
+
+The simplest way to enable sanitizers is to use the `Sanitizer` preset.
+
+```sh
+cmake --preset Sanitizer
+cmake --build --preset Sanitizer
+ctest --preset Sanitizer
+```
+
+Alternatively, sanitizers can be enabled in a Lagom build. To ensure that the tests behave the same way as CI,
+make sure to set `ASAN_OPTIONS` and `UBSAN_OPTIONS` appropriately; the Sanitizer test preset already sets these
+environment variables.
+
+```sh
+export ASAN_OPTIONS='strict_string_checks=1:check_initialization_order=1:strict_init_order=1:detect_stack_use_after_return=1:allocator_may_return_null=1'
+export UBSAN_OPTIONS='print_stacktrace=1:print_summary=1:halt_on_error=1'
+cmake -GNinja -S Meta/Lagom -B Build/lagom -DENABLE_ADDRESS_SANITIZER=ON -DENABLE_UNDEFINED_SANITIZER=ON
+cd Build/lagom
+ninja
+CTEST_OUTPUT_ON_FAILURE=1 LADYBIRD_SOURCE_DIR=${PWD}/../.. ninja test
+```
+
+### Running the Web Platform Tests
+
+The Web Platform Tests can be run with the `WPT.sh` script. This script can also be used to compare the results of two
+test runs.
+
+Enabling the Qt chrome is recommended when running the Web Platform Tests on macOS. This can be done by running the
+following command:
+
+```sh
+cmake -GNinja Build/ladybird -DENABLE_QT=ON
+```
+
+Example usage:
+
+```sh
+# Run the WPT tests then run them again, comparing the results from the two runs
+./Meta/WPT.sh run --log expectations.log css
+git checkout my-css-change
+./Meta/WPT.sh compare --log results.log expectations.log css
+```
+
+```sh
+# Pull the latest changes from the upstream WPT repository
+./Meta/WPT.sh update
+# Run all of the Web Platform Tests, outputting the results to results.log
+./Meta/WPT.sh run --log results.log
+```
+
+## Writing tests
+
+Running `Tests/LibWeb/add_libweb_test.py your-new-test-name` will create a new test HTML file in
+`Tests/LibWeb/Text/input/your-new-test-name.html` with the correct boilerplate code for a Text test — along with
+a corresponding expectations file in `Tests/LibWeb/Text/expected/your-new-test-name.txt`.
+
+After you update/replace the generated boilerplate in your `your-new-test-name.html` test file with your actual test,
+running `./Meta/ladybird.sh run headless-browser --run-tests "${LADYBIRD_SOURCE_DIR}/Tests/LibWeb" --rebaseline -f Text/input/your-new-test-name.html`
+will regenerate the corresponding expectations file to match the actual output of your updated test (where
+`LADYBIRD_SOURCE_DIR` should be set to the absolute path of your Ladybird clone).
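+
+Putting the two steps together, the workflow for a new Text test is sketched below (the test name is a placeholder):
+
+```sh
+# Create the test file and its expectations file with Text test boilerplate
+Tests/LibWeb/add_libweb_test.py your-new-test-name
+
+# Edit Tests/LibWeb/Text/input/your-new-test-name.html, then regenerate the expectations file
+./Meta/ladybird.sh run headless-browser --run-tests "${LADYBIRD_SOURCE_DIR}/Tests/LibWeb" --rebaseline -f Text/input/your-new-test-name.html
+```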
+
+Future versions of the `add_libweb_test.py` script will support other test types.
+
+### Text tests
+
+Text tests are intended to test Web APIs that don't have a visual representation. They are written in JavaScript and
+run in a headless browser. Each test has a test function in a script tag that exercises the API and prints expected
+results using the `println` function. `println` calls are accumulated into an output test file, which is then
+compared to the expected output file by the test runner.
+
+Text tests can be either sync or async. Async tests should use the `done` callback to signal completion.
+Async tests are not necessarily run in an async context; they simply require the test function to signal completion
+when it is done. If an async context is needed to test the API, the lambda passed to `test` can be async.
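+
+A minimal sketch of a sync Text test is shown below. The `test` and `println` helpers are described above; the
+helper-script path is an assumption based on the boilerplate generated by `add_libweb_test.py`:
+
+```html
+<!-- The include.js path is assumed from the generated boilerplate and may differ. -->
+<script src="include.js"></script>
+<script>
+    test(() => {
+        // Exercise the API under test and print its observable results.
+        println(new URL("https://example.com/a/b").pathname);
+    });
+</script>
+```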
+
+### Layout tests
+
+Layout tests compare the layout tree of a page with an expected one. They are best suited for testing layout code, but
+are also used for testing some other features that have an observable effect on the layout. No JavaScript is needed —
+once the page loads, the layout tree will be dumped automatically.
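+
+A Layout test is an ordinary HTML page with no scripting; a sketch (with placeholder content) might look like this,
+and its expectations file would contain the dump of the resulting layout tree:
+
+```html
+<!DOCTYPE html>
+<!-- The layout tree for this page is dumped automatically once it has loaded. -->
+<style>
+    .box { width: 100px; height: 50px; }
+</style>
+<div class="box"></div>
+```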
+
+### Ref tests
+
+Reference or "ref" tests compare a screenshot of the test page with one of a reference page. The test passes if the two
+are identical. These are ideal for testing visual effects such as background images or shadows. If you're finding it
+difficult to recreate the effect in the reference page (such as for SVG or canvas), consider using a Screenshot test
+instead.
+
+Each Ref test includes a special `<link rel="match" href="...">` tag, which the test runner
+uses to locate the reference page. In this way, multiple tests can use the same reference.
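+
+A hypothetical Ref test page might look like the sketch below; the file names are placeholders, and the referenced
+page would draw the same solid green square without using `box-shadow`:
+
+```html
+<!DOCTYPE html>
+<!-- The test passes if this page renders identically to the reference page. -->
+<link rel="match" href="../expected/green-square-ref.html" />
+<style>
+    div { width: 100px; height: 100px; box-shadow: inset 0 0 0 100px green; }
+</style>
+<div></div>
+```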
+
+### Screenshot tests
+
+Screenshot tests can be thought of as a subtype of Ref tests, where the reference page is a single `<img>` tag linking
+to a screenshot of the expected output. In general, try to avoid using them if a regular Ref test would do, as they are
+sensitive to small rendering changes, and won't work on all platforms.
+
+Like Ref tests, they require a `<link rel="match" href="...">` tag to indicate the reference
+page to use.
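+
+A Screenshot test's reference page might be sketched as follows (the image path is a placeholder):
+
+```html
+<!-- A single image containing a screenshot of the expected output. -->
+<img src="../images/my-canvas-test.png">
+```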
diff --git a/Meta/Lagom/ReadMe.md b/Meta/Lagom/ReadMe.md
index c4fa0e918be..99492a398f7 100644
--- a/Meta/Lagom/ReadMe.md
+++ b/Meta/Lagom/ReadMe.md
@@ -11,7 +11,7 @@ If you want to bring the comfortable Serenity classes with you to another system
Lagom is used by the Serenity project in the following ways:
- [Build tools](./Tools) required to build Serenity itself using Serenity's own C++ libraries are in Lagom.
-- [Unit tests](../../Documentation/RunningTests.md) in CI are built using the Lagom build for host systems to ensure portability.
+- [Unit tests](../../Documentation/Testing.md) in CI are built using the Lagom build for host systems to ensure portability.
- [Continuous fuzzing](#fuzzing-on-oss-fuzz) is done with the help of OSS-fuzz using the Lagom build.
- [The Ladybird browser](../../Ladybird/README.md) uses Lagom to provide LibWeb and LibJS for non-Serenity systems.
- [ECMA 262 spec tests](https://ladybirdbrowser.github.io/libjs-website/test262) for LibJS are run per-commit and tracked on [LibJS website](https://ladybirdbrowser.github.io/libjs-website/).