author     Daniel Baumann <daniel.baumann@progress-linux.org>  2024-04-19 01:47:29 +0000
committer  Daniel Baumann <daniel.baumann@progress-linux.org>  2024-04-19 01:47:29 +0000
commit     0ebf5bdf043a27fd3dfb7f92e0cb63d88954c44d (patch)
tree       a31f07c9bcca9d56ce61e9a1ffd30ef350d513aa /testing/web-platform/tests/docs
parent     Initial commit. (diff)
Adding upstream version 115.8.0esr. (upstream/115.8.0esr)

Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'testing/web-platform/tests/docs')
-rw-r--r--  testing/web-platform/tests/docs/.gitignore | 4
-rw-r--r--  testing/web-platform/tests/docs/.ruby-version | 1
-rw-r--r--  testing/web-platform/tests/docs/Dockerfile | 25
-rw-r--r--  testing/web-platform/tests/docs/META.yml | 2
-rw-r--r--  testing/web-platform/tests/docs/README.md | 24
-rw-r--r--  testing/web-platform/tests/docs/__init__.py | 0
-rw-r--r--  testing/web-platform/tests/docs/admin/index.md | 105
-rw-r--r--  testing/web-platform/tests/docs/admin/pywebsocket3.rst | 91
-rw-r--r--  testing/web-platform/tests/docs/assets/commit-directly.png | bin 0 -> 8546 bytes
-rw-r--r--  testing/web-platform/tests/docs/assets/commitbtn.png | bin 0 -> 5642 bytes
-rw-r--r--  testing/web-platform/tests/docs/assets/createpr.png | bin 0 -> 2118 bytes
-rw-r--r--  testing/web-platform/tests/docs/assets/custom.css | 11
-rw-r--r--  testing/web-platform/tests/docs/assets/files-changed.png | bin 0 -> 2585 bytes
-rw-r--r--  testing/web-platform/tests/docs/assets/forkbtn.png | bin 0 -> 631 bytes
-rw-r--r--  testing/web-platform/tests/docs/assets/gh-fork-ribbon.scss | 124
-rw-r--r--  testing/web-platform/tests/docs/assets/more-commits.png | bin 0 -> 10927 bytes
-rw-r--r--  testing/web-platform/tests/docs/assets/pencil-icon.png | bin 0 -> 178 bytes
-rw-r--r--  testing/web-platform/tests/docs/assets/praccepteddelete.png | bin 0 -> 18087 bytes
-rw-r--r--  testing/web-platform/tests/docs/assets/pullrequestbtn.png | bin 0 -> 1604 bytes
-rw-r--r--  testing/web-platform/tests/docs/assets/reftest-tutorial-test-screenshot.png | bin 0 -> 14308 bytes
-rw-r--r--  testing/web-platform/tests/docs/assets/testharness-tutorial-test-screenshot-1.png | bin 0 -> 20826 bytes
-rw-r--r--  testing/web-platform/tests/docs/assets/testharness-tutorial-test-screenshot-2.png | bin 0 -> 24180 bytes
-rw-r--r--  testing/web-platform/tests/docs/assets/web-platform.png | bin 0 -> 11819 bytes
-rw-r--r--  testing/web-platform/tests/docs/commands.json | 12
-rw-r--r--  testing/web-platform/tests/docs/conf.py | 225
-rw-r--r--  testing/web-platform/tests/docs/frontend.py | 127
-rw-r--r--  testing/web-platform/tests/docs/index.md | 79
-rw-r--r--  testing/web-platform/tests/docs/intro-video-transcript.md | 232
-rw-r--r--  testing/web-platform/tests/docs/package.json | 11
-rw-r--r--  testing/web-platform/tests/docs/requirements.txt | 6
-rw-r--r--  testing/web-platform/tests/docs/reviewing-tests/checklist.md | 157
-rw-r--r--  testing/web-platform/tests/docs/reviewing-tests/email.md | 7
-rw-r--r--  testing/web-platform/tests/docs/reviewing-tests/git.md | 83
-rw-r--r--  testing/web-platform/tests/docs/reviewing-tests/index.md | 62
-rw-r--r--  testing/web-platform/tests/docs/reviewing-tests/reverting.md | 23
-rw-r--r--  testing/web-platform/tests/docs/running-tests/android_webview.md | 51
-rw-r--r--  testing/web-platform/tests/docs/running-tests/chrome-chromium-installation-detection.md | 96
-rw-r--r--  testing/web-platform/tests/docs/running-tests/chrome.md | 33
-rw-r--r--  testing/web-platform/tests/docs/running-tests/chrome_android.md | 22
-rw-r--r--  testing/web-platform/tests/docs/running-tests/command-line-arguments.md | 14
-rw-r--r--  testing/web-platform/tests/docs/running-tests/custom-runner.md | 21
-rw-r--r--  testing/web-platform/tests/docs/running-tests/from-ci.md | 32
-rw-r--r--  testing/web-platform/tests/docs/running-tests/from-local-system.md | 215
-rw-r--r--  testing/web-platform/tests/docs/running-tests/from-web.md | 27
-rw-r--r--  testing/web-platform/tests/docs/running-tests/index.md | 24
-rw-r--r--  testing/web-platform/tests/docs/running-tests/safari.md | 47
-rw-r--r--  testing/web-platform/tests/docs/running-tests/testing-polyfills.md | 67
-rw-r--r--  testing/web-platform/tests/docs/running-tests/webkitgtk_minibrowser.md | 20
-rw-r--r--  testing/web-platform/tests/docs/test-suite-design.md | 79
-rw-r--r--  testing/web-platform/tests/docs/wpt_lint_rules.py | 78
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/ahem.md | 78
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/assumptions.md | 40
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/channels.md | 159
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/crashtest.md | 29
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/css-metadata.md | 188
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/css-user-styles.md | 90
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/file-names.md | 78
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/general-guidelines.md | 230
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/github-intro.md | 318
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/h2tests.md | 152
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/idlharness.md | 101
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/index.md | 91
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/interacting-features.md | 25
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/lint-tool.md | 78
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/making-a-testing-plan.md | 540
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/manual.md | 77
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/print-reftests.md | 45
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/python-handlers/index.md | 116
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/reftest-tutorial.md | 276
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/reftests.md | 192
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/rendering.md | 84
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/server-features.md | 157
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/server-pipes.md | 155
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/submission-process.md | 41
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/test-templates.md | 168
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/testdriver-extension-tutorial.md | 300
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/testdriver.md | 235
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/testharness-api.md | 839
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/testharness-tutorial.md | 395
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/testharness.md | 285
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/tools.md | 25
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/visual.md | 27
-rw-r--r--  testing/web-platform/tests/docs/writing-tests/wdspec.md | 68
83 files changed, 7919 insertions, 0 deletions
diff --git a/testing/web-platform/tests/docs/.gitignore b/testing/web-platform/tests/docs/.gitignore
new file mode 100644
index 0000000000..ed1740de54
--- /dev/null
+++ b/testing/web-platform/tests/docs/.gitignore
@@ -0,0 +1,4 @@
+_build/
+
+# This directory is created to store symbolic links to additional input
+tools/
diff --git a/testing/web-platform/tests/docs/.ruby-version b/testing/web-platform/tests/docs/.ruby-version
new file mode 100644
index 0000000000..262714f1d7
--- /dev/null
+++ b/testing/web-platform/tests/docs/.ruby-version
@@ -0,0 +1 @@
+ruby-2.4.0
diff --git a/testing/web-platform/tests/docs/Dockerfile b/testing/web-platform/tests/docs/Dockerfile
new file mode 100644
index 0000000000..bf5b7088a5
--- /dev/null
+++ b/testing/web-platform/tests/docs/Dockerfile
@@ -0,0 +1,25 @@
+FROM ubuntu:20.04
+
+# No interactive frontend during docker build
+ENV DEBIAN_FRONTEND=noninteractive \
+ DEBCONF_NONINTERACTIVE_SEEN=true
+
+# Documentation is generated using Python 3.9 due to sphinx-js not
+# supporting 3.10: https://github.com/mozilla/sphinx-js/issues/186
+RUN apt-get -qqy update \
+ && apt-get -qqy install git npm python3.9 python3.9-venv
+
+WORKDIR /app/
+
+COPY package.json requirements.txt ./
+
+RUN npm install .
+ENV PATH=/app/node_modules/.bin:$PATH
+
+# Use venv to create a virtual environment with the docs dependencies installed,
+# setting the environment variables needed for this to always be active. The
+# `./wpt build-docs` then uses this venv with --skip-venv-setup.
+ENV VIRTUAL_ENV=/app/venv
+RUN python3.9 -m venv $VIRTUAL_ENV
+ENV PATH=$VIRTUAL_ENV/bin:$PATH
+RUN pip install -r requirements.txt
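
For reference, the `--docker` path of the `build-docs` command (see `frontend.py` later in this diff) builds and runs this image roughly as follows; the `wpt:docs` tag and the container paths come from that script, and the bind-mount source is assumed to be the WPT checkout:

```sh
# Build the docs image from the docs/ directory.
docker build --pull --tag wpt:docs testing/web-platform/tests/docs

# Run the Sphinx build inside the container, mounting the checkout at
# /app/web-platform-tests and reusing the venv baked into the image.
docker run \
  --mount "type=bind,source=$PWD/testing/web-platform/tests,target=/app/web-platform-tests" \
  -w /app/web-platform-tests -it wpt:docs \
  ./wpt --venv /app/venv --skip-venv-setup build-docs --type html
```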
diff --git a/testing/web-platform/tests/docs/META.yml b/testing/web-platform/tests/docs/META.yml
new file mode 100644
index 0000000000..978b5c8572
--- /dev/null
+++ b/testing/web-platform/tests/docs/META.yml
@@ -0,0 +1,2 @@
+suggested_reviewers:
+ - sideshowbarker
diff --git a/testing/web-platform/tests/docs/README.md b/testing/web-platform/tests/docs/README.md
new file mode 100644
index 0000000000..a753462429
--- /dev/null
+++ b/testing/web-platform/tests/docs/README.md
@@ -0,0 +1,24 @@
+# Project documentation tooling
+
+The documentation for the web-platform-tests project is built using [the Sphinx
+documentation generator](http://www.sphinx-doc.org). [The GitHub Actions
+service](https://github.com/features/actions) is configured to automatically
+update the public website each time changes are merged to the repository.
+
+## Local Development
+
+If you would like to build the site locally, follow these instructions.
+
+1. Install the system dependencies. The free and open source software tools
+ [Python](https://www.python.org/) and [Git](https://git-scm.com/) are
+ required. Each website has instructions for downloading and installing on a
+ variety of systems.
+2. Download the source code. Clone this repository using the `git clone`
+ command.
+3. Install the Python dependencies. Run the following command in a terminal
+ from the "docs" directory of the WPT repository:
+
+ pip install -r requirements.txt
+
+4. Build the documentation. Windows users should execute the `make.bat` batch
+ file. GNU/Linux and macOS users should use the `make` command.
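
A condensed sketch of these steps on a Unix-like system (the clone URL matches the project's GitHub repository; on Windows, substitute `make.bat` for `make`):

```sh
git clone https://github.com/web-platform-tests/wpt.git
cd wpt/docs
pip install -r requirements.txt
make
```

Alternatively, `./wpt build-docs` (registered in `commands.json` below) wraps the same Sphinx build and sets up its own virtualenv.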
diff --git a/testing/web-platform/tests/docs/__init__.py b/testing/web-platform/tests/docs/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
--- /dev/null
+++ b/testing/web-platform/tests/docs/__init__.py
diff --git a/testing/web-platform/tests/docs/admin/index.md b/testing/web-platform/tests/docs/admin/index.md
new file mode 100644
index 0000000000..dd7dfe2e72
--- /dev/null
+++ b/testing/web-platform/tests/docs/admin/index.md
@@ -0,0 +1,105 @@
+# Project Administration
+
+This section documents all the information necessary to administer the
+infrastructure which makes the project possible.
+
+## Tooling
+
+```eval_rst
+.. toctree::
+ :titlesonly:
+
+ ../README
+ /tools/wptrunner/README.rst
+ /tools/wptserve/docs/index.rst
+ pywebsocket3
+
+.. toctree::
+ :hidden:
+
+ ../tools/wptserve/README
+ ../tools/third_party/pywebsocket3/README
+```
+
+### Indices and tables
+
+```eval_rst
+* :ref:`modindex`
+* :ref:`genindex`
+* :ref:`search`
+```
+
+## Secrets
+
+SSL certificates for all HTTPS-enabled domains are retrieved via [Let's
+Encrypt](https://letsencrypt.org/), so that data does not represent an
+explicitly-managed secret.
+
+## Third-party account owners
+
+- (unknown registrar): https://web-platform-tests.org
+ - jgraham@hoppipolla.co.uk
+- (unknown registrar): https://w3c-test.org
+ - mike@w3.org
+- (unknown registrar): http://testthewebforward.org
+ - web-human@w3.org
+- [Google Domains](https://domains.google/): https://wpt.fyi
+ - danielrsmith@google.com
+ - foolip@google.com
+ - kyleju@google.com
+ - pastithas@google.com
+- [GitHub](https://github.com/): web-platform-tests
+ - [@foolip](https://github.com/foolip)
+ - [@jgraham](https://github.com/jgraham)
+ - [@plehegar](https://github.com/plehegar)
+ - [@thejohnjansen](https://github.com/thejohnjansen)
+ - [@youennf](https://github.com/youennf)
+ - [@zcorpan](https://github.com/zcorpan)
+- [GitHub](https://github.com/): w3c
+ - [@plehegar](https://github.com/plehegar)
+ - [@sideshowbarker](https://github.com/sideshowbarker)
+- [Google Cloud Platform](https://cloud.google.com/): wptdashboard{-staging}
+ - danielrsmith@google.com
+ - foolip@google.com
+ - kyleju@google.com
+ - pastithas@google.com
+- [Google Cloud Platform](https://cloud.google.com/): wpt-live
+ - danielrsmith@chromium.org
+ - foolip@chromium.org
+ - kyleju@chromium.org
+ - mike@bocoup.com
+ - pastithas@chromium.org
+ - The DNS for wpt.live, not-wpt.live, wptpr.live, and not-wptpr.live is also managed in this project, while the domains are registered with a Google-internal mechanism.
+- [Google Cloud Platform](https://cloud.google.com/): wpt-pr-bot
+ - danielrsmith@google.com
+ - foolip@google.com
+ - kyleju@google.com
+ - pastithas@google.com
+- E-mail address: wpt.pr.bot@gmail.com
+ - smcgruer@google.com
+ - boaz@bocoup.com
+ - mike@bocoup.com
+ - simon@bocoup.com
+- [GitHub](https://github.com/): @wpt-pr-bot account
+ - smcgruer@google.com
+ - boaz@bocoup.com
+ - mike@bocoup.com
+ - simon@bocoup.com
+
+## Emergency playbook
+
+### Lock down write access to the repo
+
+**Recommended but not yet verified approach:** Create a [new branch protection
+rule](https://github.com/web-platform-tests/wpt/settings/branch_protection_rules/new)
+that applies to `*` (i.e. all branches), and check "Restrict who can push to
+matching branches". This should prevent everyone except those with the
+"Maintain" role (currently only the GitHub admins listed above) from pushing
+to *any* branch. To lift the limit, delete this branch protection rule.
+
+**Alternative approach proven to work in
+[#21424](https://github.com/web-platform-tests/wpt/issues/21424):** Go to
+[manage access](https://github.com/web-platform-tests/wpt/settings/access),
+and change the permission of "reviewers" to "Read". To lift the limit, change
+it back to "Write". This has the known downside of *resubscribing all reviewers
+to repo notifications*.
diff --git a/testing/web-platform/tests/docs/admin/pywebsocket3.rst b/testing/web-platform/tests/docs/admin/pywebsocket3.rst
new file mode 100644
index 0000000000..af768d6b35
--- /dev/null
+++ b/testing/web-platform/tests/docs/admin/pywebsocket3.rst
@@ -0,0 +1,91 @@
+pywebsocket3: A Standalone WebSocket Server for testing purposes
+================================================================
+
+.. contents::
+ :local:
+
+:mod:`mod_pywebsocket`
+---------------------------------------------
+
+.. automodule:: mod_pywebsocket
+ :members:
+
+:mod:`mod_pywebsocket.common`
+---------------------------------------------
+
+.. automodule:: mod_pywebsocket.common
+ :members:
+
+:mod:`mod_pywebsocket.dispatch`
+---------------------------------------------
+
+.. automodule:: mod_pywebsocket.dispatch
+ :members:
+
+:mod:`mod_pywebsocket.extensions`
+---------------------------------------------
+
+.. automodule:: mod_pywebsocket.extensions
+ :members:
+
+:mod:`mod_pywebsocket.handshake`
+---------------------------------------------
+
+.. automodule:: mod_pywebsocket.handshake
+ :members:
+ :imported-members:
+
+:mod:`mod_pywebsocket.request_handler`
+---------------------------------------------
+
+.. automodule:: mod_pywebsocket.request_handler
+ :members:
+
+:mod:`mod_pywebsocket.stream`
+---------------------------------------------
+
+.. automodule:: mod_pywebsocket.stream
+ :members:
+ :imported-members:
+
+:mod:`mod_pywebsocket.http_header_util`
+---------------------------------------------
+
+.. automodule:: mod_pywebsocket.http_header_util
+ :members:
+
+:mod:`mod_pywebsocket.msgutil`
+---------------------------------------------
+
+.. automodule:: mod_pywebsocket.msgutil
+ :members:
+
+:mod:`mod_pywebsocket.util`
+---------------------------------------------
+
+.. automodule:: mod_pywebsocket.util
+ :members:
+
+:mod:`mod_pywebsocket.memorizingfile`
+---------------------------------------------
+
+.. automodule:: mod_pywebsocket.memorizingfile
+ :members:
+
+:mod:`mod_pywebsocket.websocket_server`
+---------------------------------------------
+
+.. automodule:: mod_pywebsocket.websocket_server
+ :members:
+
+:mod:`mod_pywebsocket.server_util`
+---------------------------------------------
+
+.. automodule:: mod_pywebsocket.server_util
+ :members:
+
+:mod:`mod_pywebsocket.standalone`
+---------------------------------------------
+
+.. automodule:: mod_pywebsocket.standalone
+ :members:
diff --git a/testing/web-platform/tests/docs/assets/commit-directly.png b/testing/web-platform/tests/docs/assets/commit-directly.png
new file mode 100644
index 0000000000..d02eef4ab5
--- /dev/null
+++ b/testing/web-platform/tests/docs/assets/commit-directly.png
Binary files differ
diff --git a/testing/web-platform/tests/docs/assets/commitbtn.png b/testing/web-platform/tests/docs/assets/commitbtn.png
new file mode 100644
index 0000000000..2008489075
--- /dev/null
+++ b/testing/web-platform/tests/docs/assets/commitbtn.png
Binary files differ
diff --git a/testing/web-platform/tests/docs/assets/createpr.png b/testing/web-platform/tests/docs/assets/createpr.png
new file mode 100644
index 0000000000..4403a95c14
--- /dev/null
+++ b/testing/web-platform/tests/docs/assets/createpr.png
Binary files differ
diff --git a/testing/web-platform/tests/docs/assets/custom.css b/testing/web-platform/tests/docs/assets/custom.css
new file mode 100644
index 0000000000..58a982579f
--- /dev/null
+++ b/testing/web-platform/tests/docs/assets/custom.css
@@ -0,0 +1,11 @@
+div.body {
+ min-width: auto;
+}
+
+#video-introduction-transcript iframe {
+ max-width: 100%;
+}
+
+.table-container {
+ overflow: auto;
+}
\ No newline at end of file
diff --git a/testing/web-platform/tests/docs/assets/files-changed.png b/testing/web-platform/tests/docs/assets/files-changed.png
new file mode 100644
index 0000000000..7472edcb96
--- /dev/null
+++ b/testing/web-platform/tests/docs/assets/files-changed.png
Binary files differ
diff --git a/testing/web-platform/tests/docs/assets/forkbtn.png b/testing/web-platform/tests/docs/assets/forkbtn.png
new file mode 100644
index 0000000000..f33cdd05ac
--- /dev/null
+++ b/testing/web-platform/tests/docs/assets/forkbtn.png
Binary files differ
diff --git a/testing/web-platform/tests/docs/assets/gh-fork-ribbon.scss b/testing/web-platform/tests/docs/assets/gh-fork-ribbon.scss
new file mode 100644
index 0000000000..c81530bfdb
--- /dev/null
+++ b/testing/web-platform/tests/docs/assets/gh-fork-ribbon.scss
@@ -0,0 +1,124 @@
+/*!
+ * "Fork me on GitHub" CSS ribbon v0.2.2 | MIT License
+ * https://github.com/simonwhitaker/github-fork-ribbon-css
+*/
+
+.github-fork-ribbon {
+ width: 12.1em;
+ height: 12.1em;
+ position: absolute;
+ overflow: hidden;
+ top: 0;
+ right: 0;
+ z-index: 9999;
+ pointer-events: none;
+ font-size: 13px;
+ text-decoration: none;
+ text-indent: -999999px;
+}
+
+.github-fork-ribbon.fixed {
+ position: fixed;
+}
+
+.github-fork-ribbon:hover, .github-fork-ribbon:active {
+ background-color: rgba(0, 0, 0, 0.0);
+}
+
+.github-fork-ribbon:before, .github-fork-ribbon:after {
+ /* The right and left classes determine the side we attach our banner to */
+ position: absolute;
+ display: block;
+ width: 15.38em;
+ height: 1.54em;
+
+ top: 3.23em;
+ right: -3.23em;
+
+ -webkit-box-sizing: content-box;
+ -moz-box-sizing: content-box;
+ box-sizing: content-box;
+
+ -webkit-transform: rotate(45deg);
+ -moz-transform: rotate(45deg);
+ -ms-transform: rotate(45deg);
+ -o-transform: rotate(45deg);
+ transform: rotate(45deg);
+}
+
+.github-fork-ribbon:before {
+ content: "";
+
+ /* Add a bit of padding to give some substance outside the "stitching" */
+ padding: .38em 0;
+
+ /* Set the base colour */
+ background-color: #a00;
+
+ /* Set a gradient: transparent black at the top to almost-transparent black at the bottom */
+ background-image: -webkit-gradient(linear, left top, left bottom, from(rgba(0, 0, 0, 0)), to(rgba(0, 0, 0, 0.15)));
+ background-image: -webkit-linear-gradient(top, rgba(0, 0, 0, 0), rgba(0, 0, 0, 0.15));
+ background-image: -moz-linear-gradient(top, rgba(0, 0, 0, 0), rgba(0, 0, 0, 0.15));
+ background-image: -ms-linear-gradient(top, rgba(0, 0, 0, 0), rgba(0, 0, 0, 0.15));
+ background-image: -o-linear-gradient(top, rgba(0, 0, 0, 0), rgba(0, 0, 0, 0.15));
+ background-image: linear-gradient(to bottom, rgba(0, 0, 0, 0), rgba(0, 0, 0, 0.15));
+
+ /* Add a drop shadow */
+ -webkit-box-shadow: 0 .15em .23em 0 rgba(0, 0, 0, 0.5);
+ -moz-box-shadow: 0 .15em .23em 0 rgba(0, 0, 0, 0.5);
+ box-shadow: 0 .15em .23em 0 rgba(0, 0, 0, 0.5);
+
+ pointer-events: auto;
+}
+
+.github-fork-ribbon:after {
+ /* Set the text from the data-ribbon attribute */
+ content: attr(data-ribbon);
+
+ /* Set the text properties */
+ color: #fff;
+ font: 700 1em "Helvetica Neue", Helvetica, Arial, sans-serif;
+ line-height: 1.54em;
+ text-decoration: none;
+ text-shadow: 0 -.08em rgba(0, 0, 0, 0.5);
+ text-align: center;
+ text-indent: 0;
+
+ /* Set the layout properties */
+ padding: .15em 0;
+ margin: .15em 0;
+
+ /* Add "stitching" effect */
+ border-width: .08em 0;
+ border-style: dotted;
+ border-color: #fff;
+ border-color: rgba(255, 255, 255, 0.7);
+}
+
+.github-fork-ribbon.left-top, .github-fork-ribbon.left-bottom {
+ right: auto;
+ left: 0;
+}
+
+.github-fork-ribbon.left-bottom, .github-fork-ribbon.right-bottom {
+ top: auto;
+ bottom: 0;
+}
+
+.github-fork-ribbon.left-top:before, .github-fork-ribbon.left-top:after, .github-fork-ribbon.left-bottom:before, .github-fork-ribbon.left-bottom:after {
+ right: auto;
+ left: -3.23em;
+}
+
+.github-fork-ribbon.left-bottom:before, .github-fork-ribbon.left-bottom:after, .github-fork-ribbon.right-bottom:before, .github-fork-ribbon.right-bottom:after {
+ top: auto;
+ bottom: 3.23em;
+}
+
+.github-fork-ribbon.left-top:before, .github-fork-ribbon.left-top:after, .github-fork-ribbon.right-bottom:before, .github-fork-ribbon.right-bottom:after {
+ -webkit-transform: rotate(-45deg);
+ -moz-transform: rotate(-45deg);
+ -ms-transform: rotate(-45deg);
+ -o-transform: rotate(-45deg);
+ transform: rotate(-45deg);
+}
diff --git a/testing/web-platform/tests/docs/assets/more-commits.png b/testing/web-platform/tests/docs/assets/more-commits.png
new file mode 100644
index 0000000000..0d6b1a9794
--- /dev/null
+++ b/testing/web-platform/tests/docs/assets/more-commits.png
Binary files differ
diff --git a/testing/web-platform/tests/docs/assets/pencil-icon.png b/testing/web-platform/tests/docs/assets/pencil-icon.png
new file mode 100644
index 0000000000..ea347dd43b
--- /dev/null
+++ b/testing/web-platform/tests/docs/assets/pencil-icon.png
Binary files differ
diff --git a/testing/web-platform/tests/docs/assets/praccepteddelete.png b/testing/web-platform/tests/docs/assets/praccepteddelete.png
new file mode 100644
index 0000000000..efb8e0798b
--- /dev/null
+++ b/testing/web-platform/tests/docs/assets/praccepteddelete.png
Binary files differ
diff --git a/testing/web-platform/tests/docs/assets/pullrequestbtn.png b/testing/web-platform/tests/docs/assets/pullrequestbtn.png
new file mode 100644
index 0000000000..07d9c6a2e9
--- /dev/null
+++ b/testing/web-platform/tests/docs/assets/pullrequestbtn.png
Binary files differ
diff --git a/testing/web-platform/tests/docs/assets/reftest-tutorial-test-screenshot.png b/testing/web-platform/tests/docs/assets/reftest-tutorial-test-screenshot.png
new file mode 100644
index 0000000000..8d882822e1
--- /dev/null
+++ b/testing/web-platform/tests/docs/assets/reftest-tutorial-test-screenshot.png
Binary files differ
diff --git a/testing/web-platform/tests/docs/assets/testharness-tutorial-test-screenshot-1.png b/testing/web-platform/tests/docs/assets/testharness-tutorial-test-screenshot-1.png
new file mode 100644
index 0000000000..c469e94a55
--- /dev/null
+++ b/testing/web-platform/tests/docs/assets/testharness-tutorial-test-screenshot-1.png
Binary files differ
diff --git a/testing/web-platform/tests/docs/assets/testharness-tutorial-test-screenshot-2.png b/testing/web-platform/tests/docs/assets/testharness-tutorial-test-screenshot-2.png
new file mode 100644
index 0000000000..612eda5448
--- /dev/null
+++ b/testing/web-platform/tests/docs/assets/testharness-tutorial-test-screenshot-2.png
Binary files differ
diff --git a/testing/web-platform/tests/docs/assets/web-platform.png b/testing/web-platform/tests/docs/assets/web-platform.png
new file mode 100644
index 0000000000..8547f49183
--- /dev/null
+++ b/testing/web-platform/tests/docs/assets/web-platform.png
Binary files differ
diff --git a/testing/web-platform/tests/docs/commands.json b/testing/web-platform/tests/docs/commands.json
new file mode 100644
index 0000000000..b908485c43
--- /dev/null
+++ b/testing/web-platform/tests/docs/commands.json
@@ -0,0 +1,12 @@
+{
+ "build-docs": {
+ "path": "frontend.py",
+ "parser": "get_parser",
+ "script": "build",
+ "help": "Build documentation",
+ "virtualenv": true,
+ "requirements": [
+ "requirements.txt"
+ ]
+ }
+}
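
This registers `build-docs` as a `./wpt` subcommand backed by `frontend.py`, with the tool creating a virtualenv from `requirements.txt` before dispatching to `build()`. A typical invocation from the root of the checkout (the output type defaults to `html` and the build lands in `docs/_build`, per `frontend.py` below):

```sh
./wpt build-docs              # build the HTML docs into docs/_build
./wpt build-docs --type html  # equivalent, with the type made explicit
```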
diff --git a/testing/web-platform/tests/docs/conf.py b/testing/web-platform/tests/docs/conf.py
new file mode 100644
index 0000000000..bd2ef6be6d
--- /dev/null
+++ b/testing/web-platform/tests/docs/conf.py
@@ -0,0 +1,225 @@
+# -*- coding: utf-8 -*-
+#
+# Configuration file for the Sphinx documentation builder.
+#
+# This file does only contain a selection of the most common options. For a
+# full list see the documentation:
+# http://www.sphinx-doc.org/en/master/config
+
+# -- Path setup --------------------------------------------------------------
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#
+import os
+import sys
+sys.path.insert(0, os.path.abspath('..'))
+sys.path.insert(0, os.path.abspath('../tools/wptserve'))
+sys.path.insert(0, os.path.abspath('../tools/third_party/pywebsocket3'))
+sys.path.insert(0, os.path.abspath('../tools'))
+import localpaths
+
+# -- Project information -----------------------------------------------------
+
+project = u'web-platform-tests'
+copyright = u'2019, wpt contributors'
+author = u'wpt contributors'
+
+# The short X.Y version
+version = u''
+# The full version, including alpha/beta/rc tags
+release = u''
+
+
+# -- General configuration ---------------------------------------------------
+
+# If your documentation needs a minimal Sphinx version, state it here.
+#
+# needs_sphinx = '1.0'
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = [
+ 'recommonmark',
+ 'sphinxarg.ext',
+ 'sphinx.ext.autodoc',
+ 'sphinx.ext.intersphinx',
+ # Google-style Python docs
+ 'sphinx.ext.napoleon',
+ 'sphinx_js'
+]
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# The suffix(es) of source filenames.
+# You can specify multiple suffix as a list of string:
+#
+# source_suffix = ['.rst', '.md']
+source_suffix = '.rst'
+
+# The master toctree document.
+master_doc = 'index'
+
+# These values are used in third-party documentation not recognized by Sphinx.
+# https://stackoverflow.com/questions/51824453/how-to-document-parameter-of-type-function-in-sphinx
+nitpick_ignore = [
+ # wptserve
+ ('py:class', 'Callable'),
+ ('py:obj', 'None'),
+ # pywebsocket3
+ ('py:exc', 'AbortedByUserException'),
+ ('py:exc', 'HandshakeException'),
+ ('py:exc', 'InvalidFrameException'),
+ ('py:exc', 'UnsupportedFrameException'),
+ ('py:exc', 'InvalidUTF8Exception'),
+ ('py:exc', 'ConnectionTerminatedException'),
+ ('py:exc', 'BadOperationException'),
+ ('py:exc', 'Exception'),
+ ('py:exc', 'ValueError'),
+ ('py:class', 'http.client.HTTPMessage')
+]
+
+# The language for content autogenerated by Sphinx. Refer to documentation
+# for a list of supported languages.
+#
+# This is also used if you do content translation via gettext catalogs.
+# Usually you set "language" from the command line for these cases.
+language = None
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This pattern also affects html_static_path and html_extra_path.
+exclude_patterns = [
+ '**/.tox',
+ '**/.DS_Store',
+ '**/Thumbs.db',
+ '_build',
+ 'node_modules',
+ 'package.json',
+ 'package-lock.json',
+]
+
+from docs.wpt_lint_rules import WPTLintRules
+# Enable inline reStructured Text within Markdown-formatted files
+# https://recommonmark.readthedocs.io/en/latest/auto_structify.html
+from recommonmark.transform import AutoStructify
+def setup(app):
+ app.add_transform(AutoStructify)
+ app.add_directive('wpt-lint-rules', WPTLintRules)
+
+# The name of the Pygments (syntax highlighting) style to use.
+pygments_style = None
+
+
+# -- Options for HTML output -------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. See the documentation for
+# a list of builtin themes.
+#
+html_theme = 'alabaster'
+
+# Theme options are theme-specific and customize the look and feel of a theme
+# further. For a list of options available for each theme, see the
+# documentation.
+#
+# html_theme_options = {}
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = ['assets']
+
+# Custom sidebar templates, must be a dictionary that maps document names
+# to template names.
+#
+# The default sidebars (for documents that don't match any pattern) are
+# defined by theme itself. Builtin themes are using these templates by
+# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
+# 'searchbox.html']``.
+#
+# html_sidebars = {}
+
+# Sphinx-js configuration
+
+# Only document things under resources/ for now
+js_source_path = '../resources'
+
+# -- Options for HTMLHelp output ---------------------------------------------
+
+# Output file base name for HTML help builder.
+htmlhelp_basename = 'web-platform-testsdoc'
+
+
+# -- Options for LaTeX output ------------------------------------------------
+
+latex_elements = {
+ # The paper size ('letterpaper' or 'a4paper').
+ #
+ # 'papersize': 'letterpaper',
+
+ # The font size ('10pt', '11pt' or '12pt').
+ #
+ # 'pointsize': '10pt',
+
+ # Additional stuff for the LaTeX preamble.
+ #
+ # 'preamble': '',
+
+ # Latex figure (float) alignment
+ #
+ # 'figure_align': 'htbp',
+}
+
+# Grouping the document tree into LaTeX files. List of tuples
+# (source start file, target name, title,
+# author, documentclass [howto, manual, or own class]).
+latex_documents = [
+ (master_doc, 'web-platform-tests.tex', u'web-platform-tests Documentation',
+ u'wpt contributors', 'manual'),
+]
+
+
+# -- Options for manual page output ------------------------------------------
+
+# One entry per manual page. List of tuples
+# (source start file, name, description, authors, manual section).
+man_pages = [
+ (master_doc, 'web-platform-tests', u'web-platform-tests Documentation',
+ [author], 1)
+]
+
+
+# -- Options for Texinfo output ----------------------------------------------
+
+# Grouping the document tree into Texinfo files. List of tuples
+# (source start file, target name, title, author,
+# dir menu entry, description, category)
+texinfo_documents = [
+ (master_doc, 'web-platform-tests', u'web-platform-tests Documentation',
+ author, 'web-platform-tests', 'One line description of project.',
+ 'Miscellaneous'),
+]
+
+
+# -- Options for Epub output -------------------------------------------------
+
+# Bibliographic Dublin Core info.
+epub_title = project
+
+# The unique identifier of the text. This can be a ISBN number
+# or the project homepage.
+#
+# epub_identifier = ''
+
+# A unique identification for the text.
+#
+# epub_uid = ''
+
+# A list of files that should not be packed into the epub file.
+epub_exclude_files = ['search.html']
+
+intersphinx_mapping = {'python': ('https://docs.python.org/3/', None),
+ 'mozilla': ('https://firefox-source-docs.mozilla.org/', None)}
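
This configuration is normally driven by `./wpt build-docs`; per `frontend.py` (next in this diff), the underlying Sphinx call is roughly the following sketch, assuming the temporary `tools/` symlinks that the script creates under `docs/` are already in place:

```sh
sphinx-build -n -v -b html -j auto \
    testing/web-platform/tests/docs \
    testing/web-platform/tests/docs/_build
```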
diff --git a/testing/web-platform/tests/docs/frontend.py b/testing/web-platform/tests/docs/frontend.py
new file mode 100644
index 0000000000..c8d114b39f
--- /dev/null
+++ b/testing/web-platform/tests/docs/frontend.py
@@ -0,0 +1,127 @@
+import argparse
+import logging
+import os
+import subprocess
+import sys
+
+here = os.path.dirname(__file__)
+wpt_root = os.path.abspath(os.path.join(here, ".."))
+
+# Directories relative to the wpt root that we want to include in the docs
+# Sphinx doesn't support including files outside of docs/ so we temporarily symlink
+# these directories under docs/ whilst running the build.
+link_dirs = [
+ "tools/wptserve",
+ "tools/certs",
+ "tools/wptrunner",
+ "tools/webtransport",
+ "tools/third_party/pywebsocket3",
+]
+
+logger = logging.getLogger()
+
+
+def link_source_dirs():
+ created = set()
+ failed = []
+ for rel_path in link_dirs:
+ rel_path = rel_path.replace("/", os.path.sep)
+ src = os.path.join(wpt_root, rel_path)
+ dest = os.path.join(here, rel_path)
+ try:
+ dest_dir = os.path.dirname(dest)
+ if not os.path.exists(dest_dir):
+ os.makedirs(dest_dir)
+ created.add(dest_dir)
+ if not os.path.exists(dest):
+ os.symlink(src, dest, target_is_directory=True)
+ else:
+ if (not os.path.islink(dest) or
+ os.path.join(os.path.dirname(dest), os.readlink(dest)) != src):
+ # The file exists but it isn't a link or points at the wrong target
+ raise OSError("File exists")
+ except Exception as e:
+ failed.append((dest, e))
+ else:
+ created.add(dest)
+ return created, failed
+
+
+def unlink_source_dirs(created):
+ # Sort backwards in length to remove all files before getting to directory
+ for path in sorted(created, key=lambda x: -len(x)):
+ # This will also remove empty parent directories
+ if not os.path.islink(path) and os.path.isdir(path):
+ os.removedirs(path)
+ else:
+ os.unlink(path)
+
+
+def get_parser():
+ p = argparse.ArgumentParser()
+ p.add_argument("--type", default="html", help="Output type (default: html)")
+ p.add_argument("--docker", action="store_true", help="Run inside the docs docker image")
+ p.add_argument("--serve", default=None, nargs="?", const=8000,
+ type=int, help="Run a server on the specified port (default: 8000)")
+ return p
+
+
+def docker_build(tag="wpt:docs"):
+ subprocess.check_call(["docker",
+ "build",
+ "--pull",
+ "--tag", tag,
+ here])
+
+def docker_run(**kwargs):
+ cmd = ["docker", "run"]
+ cmd.extend(["--mount",
+ "type=bind,source=%s,target=/app/web-platform-tests" % wpt_root])
+ if kwargs["serve"] is not None:
+ serve = str(kwargs["serve"])
+ cmd.extend(["--expose", serve, "--publish", f"{serve}:{serve}"])
+ cmd.extend(["-w", "/app/web-platform-tests"])
+ if os.isatty(sys.stdout.fileno()):
+ cmd.append("-it")
+ cmd.extend(["wpt:docs", "./wpt"])
+ # /app/venv is created during docker build and is always active inside the
+ # container.
+ cmd.extend(["--venv", "/app/venv", "--skip-venv-setup"])
+ cmd.extend(["build-docs", "--type", kwargs["type"]])
+ if kwargs["serve"] is not None:
+ cmd.extend(["--serve", str(kwargs["serve"])])
+ logger.debug(" ".join(cmd))
+ return subprocess.call(cmd)
+
+
+def build(_venv, **kwargs):
+ if kwargs["docker"]:
+ docker_build()
+ return docker_run(**kwargs)
+
+ out_dir = os.path.join(here, "_build")
+ try:
+ created, failed = link_source_dirs()
+ if failed:
+ failure_msg = "\n".join(f"{dest}: {err}" for (dest, err) in failed)
+ logger.error(f"Failed to create source symlinks:\n{failure_msg}")
+ sys.exit(1)
+ if kwargs["serve"] is not None:
+ executable = "sphinx-autobuild"
+ extras = ["--port", str(kwargs["serve"]),
+ "--host", "0.0.0.0",
+ "--watch", os.path.abspath(os.path.join(here, os.pardir, "resources")),
+ # Ignore changes to files specified with glob pattern
+ "--ignore", "**/flycheck_*",
+ "--ignore", "**/.*",
+ "--ignore", "**/#*",
+ "--ignore", "docs/frontend.py",
+ "--ignore", "docs/Dockerfile"]
+ else:
+ executable = "sphinx-build"
+ extras = []
+ cmd = [executable, "-n", "-v", "-b", kwargs["type"], "-j", "auto"] + extras + [here, out_dir]
+ logger.debug(" ".join(cmd))
+ subprocess.check_call(cmd)
+ finally:
+ unlink_source_dirs(created)
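
A sketch of the two convenience modes this script exposes; the port and the local URL are assumptions based on the `--serve` default and sphinx-autobuild's usual behaviour:

```sh
# Live-reloading build: sphinx-autobuild watches docs/ and resources/ and
# serves the result (then open http://localhost:8000/ in a browser).
./wpt build-docs --serve 8000

# Build inside the Docker image instead of installing dependencies locally.
./wpt build-docs --docker
```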
diff --git a/testing/web-platform/tests/docs/index.md b/testing/web-platform/tests/docs/index.md
new file mode 100644
index 0000000000..799c8e44b8
--- /dev/null
+++ b/testing/web-platform/tests/docs/index.md
@@ -0,0 +1,79 @@
+# web-platform-tests documentation
+
+The web-platform-tests project is a cross-browser test suite for [the
+Web-platform stack](https://platform.html5.org). Writing tests in a way that
+allows them to be run in all browsers gives browser projects confidence that
+they are shipping software which is compatible with other implementations, and
+that later implementations will be compatible with their implementations. This
+in turn gives Web authors/developers confidence that they can actually rely on
+the Web platform to deliver on the promise of working across browsers and
+devices without needing extra layers of abstraction to paper over the gaps left
+by specification editors and implementors.
+
+
+The most important sources of information and activity are:
+
+- [github.com/web-platform-tests/wpt](https://github.com/web-platform-tests/wpt):
+ the canonical location of the project's source code revision history and the
+ discussion forum for changes to the code
+- [web-platform-tests.org](https://web-platform-tests.org): the documentation
+ website; details how to set up the project, how to write tests, how to give
+ and receive peer review, how to serve as an administrator, and more
+- [wpt.live](https://wpt.live): a public deployment of the test suite,
+ allowing anyone to run the tests by visiting from an
+ Internet-enabled browser of their choice
+- [wpt.fyi](https://wpt.fyi): an archive of test results collected from an
+ array of web browsers on a regular basis
+- [Real-time chat room](https://app.element.io/#/room/#wpt:matrix.org): the
+ `wpt:matrix.org` matrix channel; includes participants located
+ around the world, but busiest during the European working day.
+- [Mailing list](https://lists.w3.org/Archives/Public/public-test-infra/): a
+ public and low-traffic discussion list
+
+**If you'd like clarification about anything**, don't hesitate to ask in the
+chat room or on the mailing list.
+
+## Video Introduction ([transcript](intro-video-transcript))
+
+<iframe
+ width="560"
+ height="315"
+ src="https://www.youtube.com/embed/zuK1uyXPZS0"
+ frameborder="0"
+ allow="autoplay; encrypted-media"
+ allowfullscreen></iframe>
+
+See also [this lecture from Web Engines Hackfest 2018 (30
+minutes)](https://www.youtube.com/watch?v=XnfE3MfH5hQ)
+
+## GitHub
+
+[GitHub](https://github.com/web-platform-tests/wpt/) is used both for [issue tracking](https://github.com/web-platform-tests/wpt/issues) and [test submissions](https://github.com/web-platform-tests/wpt/pulls); we
+provide [a limited introduction][github-intro] to both git and
+GitHub.
+
+Pull Requests are automatically labeled based on the directory the
+files they change are in; there are also comments added automatically
+to notify a number of people: this list of people comes from META.yml
+files in those same directories and their parents (i.e., they work
+recursively: `a/META.yml` will get notified for `a/foo.html` and
+`a/b/bar.html`).
+
+If you want to be notified about changes to tests in a directory, feel
+free to add yourself to the META.yml file!
+
+## Table of Contents
+
+```eval_rst
+.. toctree::
+ :maxdepth: 2
+
+ test-suite-design
+ intro-video-transcript
+ running-tests/index
+ writing-tests/index
+ reviewing-tests/index
+ admin/index
+```
+
+[github-intro]: writing-tests/github-intro
diff --git a/testing/web-platform/tests/docs/intro-video-transcript.md b/testing/web-platform/tests/docs/intro-video-transcript.md
new file mode 100644
index 0000000000..b43ebf728f
--- /dev/null
+++ b/testing/web-platform/tests/docs/intro-video-transcript.md
@@ -0,0 +1,232 @@
+# "Introduction to WPT" video transcript
+
+<iframe
+ width="560"
+ height="315"
+ src="https://www.youtube.com/embed/zuK1uyXPZS0"
+ frameborder="0"
+ allow="autoplay; encrypted-media"
+ allowfullscreen></iframe>
+
+**Still image of the WPT logo. The song ["My
+Luck"](http://brokeforfree.bandcamp.com/track/my-luck) by [Broke for
+Free](http://brokeforfree.com/) (licensed under [Creative Commons Attribution
+3.0](https://creativecommons.org/licenses/by/3.0/))
+plays in the background.**
+
+> Hello, and welcome to the Web Platform Tests!
+>
+> The goal of this project is to ensure that all web browsers present websites
+> in exactly the way the authors intended.
+>
+> But what is the web platform, exactly? You can think of it as having three
+> main parts.
+
+**A top-down shot of a blank sheet of graph paper**
+
+> First, there are the web browsers.
+
+A hand places a paper cutout depicting a browser window in the lower-right
+corner of the sheet.
+
+> Applications like Firefox, Chrome, and Safari allow people to interact with
+> pages and with each other.
+>
+> Second, there are the web standards.
+
+A hand places a paper cutout depicting a scroll of parchment paper in the
+lower-left corner of the sheet.
+
+> These documents define how the browsers are supposed to behave.
+
+**A screen recording of a web browser**
+
+`https://platform.html5.org` is entered into the location bar, and the browser
+loads the page.
+
+> That includes everything from how text is rendered to how augmented reality
+> apps are built. Specifying it all takes a lot of work!
+
+The browser clicks through to the Fetch standard and begins scrolling.
+
+> And as you might expect, the standards can get really complicated.
+
+**Return to the graph paper**
+
+A hand draws an arrow from the cutout of the scroll to the cutout of the
+browser window.
+
+> The people who build the browsers use the specifications as a blueprint for
+> their work. A shared set of generic instructions like these makes it possible
+> for people to choose between different browsers, but only if the browsers get
+> it right.
+
+A hand places a cutout representing a stack of papers on the top-center of the
+sheet and draws an arrow from that cutout to the cutout of the browser window.
+
+> To verify their work, the browser maintainers rely on the third part of the
+> web platform: conformance tests.
+
+A hand draws an arrow from the cutout of the scroll to the cutout of the tests.
+
+> We author tests to describe the same behavior as the standards, just
+> formatted in a way that a computer can understand.
+
+A hand draws an arrow from the cutout of the browser window to the cutout of
+the scroll.
+
+> In the process, the maintainers sometimes uncover problems in the design of
+> the specifications, and they recommend changes to fix them.
+
+A hand draws an arrow from the cutout of the tests to the cutout of the scroll.
+
+> Test authors also find and fix these so-called "spec bugs."
+
+A hand draws an arrow from the cutout of the browser window to the cutout of
+the tests.
+
+> ...and as they implement the standards, the maintainers of each browser
+> frequently write new tests that can be shared with the others.
+>
+> This is how thousands of people coordinate to build the cohesive programming
+> platform that we call the world wide web. The web-platform-tests project is
+> one of the test suites that make this possible.
+>
+> That's pretty abstract, though! Let's take a quick look at the tests
+> themselves.
+
+**A screen recording of a web browser**
+
+`http://wpt.live` is entered into the location bar, and the browser loads the
+page.
+
+> The latest version of the tests is publicly hosted in executable form on the
+> web at wpt.live.
+
+The browser begins scrolling through the enormous list of directories.
+
+> There, we can navigate among all the tests for all the different web
+> technologies.
+>
+> Let's take a look at a typical test.
+
+The browser stops scrolling, and a mouse cursor clicks on `fetch`, then `api`,
+then `headers`, and finally `headers-basic.html`.
+
+> This test is written with the web-platform-tests's testing framework,
+> testharness.js. The test completes almost instantly, and testharness.js
+> reports that this browser passes all but one subtest. To understand the
+> failure, we can read the source code.
+
+The mouse opens a context menu, selects "View Source", and scrolls to the
+source of the failing test.
+
+> It looks like the failing subtest is for what happens when a `Headers`
+> instance has a custom JavaScript iterator method. That's a strange edge case,
+> but it's important for browsers to agree on every detail!
+
+The mouse clicks on the browser's "Back" button and then navigates through the
+directory structure to the test at
+`css/css-transforms/transform-transformed-tr-contains-fixed-position.html`. It
+displays text rendered at an angle.
+
+> Many tests don't use testharness.js at all. Let's take a look at a couple
+> other test types.
+>
+> When it comes to the visual appearance of a page, it can be hard to verify
+> the intended behavior using JavaScript alone. For these situations, the
+> web-platform-tests uses what's known as a reftest.
+>
+> Short for "reference test", this type of test uses at least two different web
+> pages.
+>
+> The first page demonstrates the feature under test.
+
+The mouse opens a context menu, selects "View Source", and clicks on the `href`
+value for the matching reference. It looks identical to the previous page.
+
+> Inside of it, we'll find a link to a second page. This second page is the
+> reference page. It's designed to use a different approach to produce the same
+> output.
+
+The mouse clicks back and forth between the browser tabs displaying the test
+page and the reference page.
+
+> When tests like these are run automatically, a computer verifies that
+> screenshots of the two pages are identical.
+
+The mouse clicks on the browser's "Back" button and then navigates through the
+directory structure to the test at
+`css/css-animations/animation-fill-mode-002-manual.html`. The page includes the
+text, "Test passes if there is a filled color square with 'Filler Text', whose
+color gradually changes in the order: YELLOW to GREEN." It also includes the
+described animated square.
+
+> Even with testharness.js and reftests, there are many web platform features
+> that a computer can't automatically verify. In cases like these, we fall back
+> to using manual tests.
+>
+> Manual tests can only be verified by a living, breathing human. They describe
+> their expectations in plain English so that a human operator can easily
+> determine whether the browser is behaving correctly.
+
+`https://web-platform-tests.org` is entered into the location bar, and the
+browser loads the page.
+
+> You can read more about all the test types in the project documentation at
+> [web-platform-tests.org](https://web-platform-tests.org).
+
+`https://wpt.fyi` is entered into the location bar, and the browser loads the
+page.
+
+> [wpt.fyi](https://wpt.fyi) is a great way to see how today's browsers are
+> performing on the web-platform-tests.
+
+The browser scrolls to `fetch`, and a mouse cursor clicks on `fetch`, then
+`api`, then `headers`, and finally `headers-basic.html`.
+
+> Here, you'll find all the same tests, just presented with the results from
+> various browsers.
+
+`https://web-platform-tests.live/LICENSE.md` is entered into the location bar,
+and the browser loads the page.
+
+> The web-platform-tests project is free and open source software. From bug
+> reports to documentation improvements and brand new tests, we welcome all
+> sorts of contributions from everyone.
+
+`https://github.com/web-platform-tests/wpt` is entered into the location bar,
+and the browser loads the page.
+
+> To get involved, you can visit the project management website hosted on
+> GitHub.com.
+
+The browser navigates to the project's "issues" list and filters the list for
+just the ones labeled as "good first issue."
+
+> Some issues are more difficult than others, but many are perfect for people who
+> are just getting started with the project. When we come across an issue like
+> that, we label it as a "good first issue."
+
+`https://lists.w3.org/Archives/Public/public-test-infra` is entered into the
+location bar, and the browser loads the page.
+
+> You can also join the mailing list to receive e-mail with announcements and
+> discussion about the project.
+
+`http://irc.w3.org/` is entered into the location bar, and the browser loads
+the page. `web4all` is entered as the Nickname, and `#testing` is entered as
+the channel name. A mouse clicks on the "Connect" button.
+
+> For more immediate communication, you can join the "testing" channel on the
+> IRC server run by the W3C.
+
+**Return to the graph paper**
+
+A hand places a paper cutout depicting a human silhouette on the sheet. It then
+draws arrows from the new cutout to each of the three previously-introduced
+cutouts.
+
+![](assets/web-platform.png "The diagram drawn in the video")
+
+> We're looking forward to working with you!
diff --git a/testing/web-platform/tests/docs/package.json b/testing/web-platform/tests/docs/package.json
new file mode 100644
index 0000000000..d88de942a4
--- /dev/null
+++ b/testing/web-platform/tests/docs/package.json
@@ -0,0 +1,11 @@
+{
+ "name": "wpt-docs",
+ "description": "This package file is for node modules used in web-platform-tests docs",
+ "repository": {},
+ "license": "BSD",
+ "devDependencies": {
+ "jsdoc": "3.6.7"
+ },
+ "version": "1.0.0",
+ "private": true
+}
diff --git a/testing/web-platform/tests/docs/requirements.txt b/testing/web-platform/tests/docs/requirements.txt
new file mode 100644
index 0000000000..b29747cac1
--- /dev/null
+++ b/testing/web-platform/tests/docs/requirements.txt
@@ -0,0 +1,6 @@
+recommonmark==0.7.1
+sphinx-argparse==0.4.0
+sphinx-autobuild==2021.3.14
+sphinx-js==3.2.1
+sphinx==4.4.0
+markupsafe==2.0.1
diff --git a/testing/web-platform/tests/docs/reviewing-tests/checklist.md b/testing/web-platform/tests/docs/reviewing-tests/checklist.md
new file mode 100644
index 0000000000..be0f4d134e
--- /dev/null
+++ b/testing/web-platform/tests/docs/reviewing-tests/checklist.md
@@ -0,0 +1,157 @@
+# Review Checklist
+
+The following checklist is provided as a guideline to assist in reviewing
+tests; in case of any contradiction with requirements stated elsewhere in the
+documentation, the checklist should be ignored
+(please [file a bug](https://github.com/web-platform-tests/wpt/issues/new)!).
+
+As noted on the [reviewing tests](./index.md) page, nits need not block PRs
+from landing.
+
+
+## All tests
+
+<label><input type="checkbox">
+The CI jobs on the pull request have passed.
+</label>
+
+<label><input type="checkbox">
+It is obvious what the test is trying to test.
+</label>
+
+<label><input type="checkbox">
+The test passes when it's supposed to pass.
+</label>
+
+<label><input type="checkbox">
+The test fails when it's supposed to fail.
+</label>
+
+<label><input type="checkbox">
+The test is testing what it thinks it's testing.
+</label>
+
+<label><input type="checkbox">
+The spec backs up the expected behavior in the test.
+</label>
+
+<label><input type="checkbox">
+The test is automated as either [reftest](../writing-tests/reftests) or
+a [script test](../writing-tests/testharness) unless there's a very good reason for it not to be.
+</label>
+
+<label><input type="checkbox">
+The test does not use external resources.
+</label>
+
+<label><input type="checkbox">
+The test does not use proprietary features (vendor-prefixed or otherwise).
+</label>
+
+<label><input type="checkbox">
+The test does not contain commented-out code.
+</label>
+
+<label><input type="checkbox">
+The test is placed in the relevant directory.
+</label>
+
+<label><input type="checkbox">
+The test has a reasonable and concise filename.
+</label>
+
+<label><input type="checkbox">
+If the test needs code running on the server side, the server code must be
+written in Python, and the Python code must not do anything potentially unsafe.
+</label>
+
+<label><input type="checkbox">
+If the test needs to be run in some non-standard configuration or needs user
+interaction, it is a manual test.
+</label>
+
+<label><input type="checkbox">
+**Nit**: The title is descriptive but not too wordy.
+</label>
+
+
+## Reftests Only
+
+<label><input type="checkbox">
+The reference file is accurate and will render pixel-perfect
+identically to the test on all platforms.
+</label>
+
+<label><input type="checkbox">
+The reference file uses a different technique that won't fail in
+the same way as the test.
+</label>
+
+<label><input type="checkbox">
+The test and reference render within an 800x600 viewport, only displaying
+scrollbars if their presence is being tested.
+</label>
+
+<label><input type="checkbox">
+**Nit**: The test has a self-describing statement.
+</label>
+
+<label><input type="checkbox">
+**Nit**: The self-describing statement is accurate, precise, simple, and
+self-explanatory. Someone with no technical knowledge should be able to say
+whether the test passed or failed within a few seconds, and not need to spend
+several minutes thinking or asking questions.
+</label>
+
+
+## Script Tests Only
+
+<label><input type="checkbox">
+The number of tests in each file and the test names are consistent
+across runs and browsers. It is best to avoid the pattern where there is
+a test that asserts that the feature is supported and bails out without
+running the rest of the tests in the file if it isn't.
+</label>
+
+<label><input type="checkbox">
+The test avoids patterns that make it less likely to be stable.
+In particular, tests should avoid setting internal timeouts, since the
+time taken to run it may vary on different devices; events should be used
+instead (if at all possible).
+</label>
+
+<label><input type="checkbox">
+The test uses the most specific asserts possible (e.g. doesn't use
+`assert_true` for everything).
+</label>
+
+<label><input type="checkbox">
+The test uses `idlharness.js` if it is testing basic IDL-defined behavior.
+</label>
+
+<label><input type="checkbox">
+**Nit**: Tests in a single file are separated by one empty line.
+</label>
+
+
+## Visual Tests Only
+
+<label><input type="checkbox">
+The test has a self-describing statement.
+</label>
+
+<label><input type="checkbox">
+The self-describing statement is accurate, precise, simple, and
+self-explanatory. Someone with no technical knowledge should be able to say
+whether the test passed or failed within a few seconds, and not need to spend
+several minutes thinking or asking questions.
+</label>
+
+<label><input type="checkbox">
+The test renders within an 800x600 viewport, only displaying scrollbars if their
+presence is being tested.
+</label>
+
+<label><input type="checkbox">
+The test renders to a fixed, static page with no animation.
+</label>
diff --git a/testing/web-platform/tests/docs/reviewing-tests/email.md b/testing/web-platform/tests/docs/reviewing-tests/email.md
new file mode 100644
index 0000000000..55bbf44367
--- /dev/null
+++ b/testing/web-platform/tests/docs/reviewing-tests/email.md
@@ -0,0 +1,7 @@
+# Email Filters
+
+See the [GitHub support page](https://help.github.com/articles/about-email-notifications/)
+for how to filter certain types of email notifications. These are the most
+useful `Cc` addresses when reviewing in web-platform-tests:
+- `review_requested@noreply.github.com` when you are added as a (suggested) `reviewer` on a pull request.
+- `assign@noreply.github.com` when you are added as the `assignee` (i.e. as _the_ reviewer) on a pull request.
diff --git a/testing/web-platform/tests/docs/reviewing-tests/git.md b/testing/web-platform/tests/docs/reviewing-tests/git.md
new file mode 100644
index 0000000000..b74b4b77ae
--- /dev/null
+++ b/testing/web-platform/tests/docs/reviewing-tests/git.md
@@ -0,0 +1,83 @@
+# Working with Pull Requests as a reviewer
+
+In order to do a thorough review,
+it is sometimes desirable to have a local copy of the tests one wishes to review.
+
+Reviewing tests also often results in wanting a few things to be changed.
+Generally, the reviewer should ask the author to make the desired changes.
+However, sometimes the original author does not respond to the requests,
+or the changes are so trivial (e.g. fixing a typo)
+that bothering the original author seems like a waste of time.
+
+Here is how to do all that.
+
+## Trivial cases
+
+If it is possible to review the tests without a local copy,
+but the reviewer still wants to make some simple tweaks to the tests before merging,
+it is possible to do so via the GitHub web UI.
+
+1. Open the pull request. E.g. https://github.com/web-platform-tests/wpt/pull/1234
+2. Go to the ![Files changed](../assets/files-changed.png) view (e.g. https://github.com/web-platform-tests/wpt/pull/1234/files)
+3. Locate the files you wish to change, and click the ![pencil](../assets/pencil-icon.png) icon in the upper right corner
+4. Make the desired change
+5. Write a commit message (including a good title) at the bottom
+6. Make sure the ![Commit directly to the [name-of-the-PR-branch] branch.](../assets/commit-directly.png) radio button is selected.
+
+   _Note: If the PR predates the introduction of this feature by GitHub,
+ or if the author of the PR has disabled write-access by reviewers to the PR branch,
+ this may not be available,
+ and your only option would be to commit to a new branch, creating a new PR._
+7. Click the ![Commit Changes](../assets/commitbtn.png) button.
+
+
+## The Normal Way
+
+This is how to import the Pull Request's branch into your existing local
+checkout of the repository. If you don't have one, go [fork][fork],
+[clone][clone], and [configure][configure] it.
+
+1. Move into your local clone: `cd wherever-you-put-your-repo`
+2. Add a remote for the PR author's repo: `git remote add <author-id> git://github.com/<author-id>/<repo-name>.git`
+3. Fetch the PR: `git fetch <author-id> <name-of-the-PR-branch>`
+4. Checkout that branch: `git checkout <name-of-the-PR-branch>`
+
+   _The relevant `<author-id>`, `<repo-name>`, and `<name-of-the-PR-branch>` can be found by looking for this sentence on the GitHub page of the PR:
+ ![Add more commits by pushing to the name-of-the-PR-branch branch on author-id/repo-name.](../assets/more-commits.png)_
+
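+For example, assuming a pull request from the (hypothetical) user `alice`
+whose branch is called `fix-typo`, the whole sequence might look like this:
+
+```bash
+cd wpt
+# Add alice's fork as a remote and fetch the PR branch.
+git remote add alice git://github.com/alice/wpt.git
+git fetch alice fix-typo
+# Check the branch out locally for review.
+git checkout fix-typo
+```
+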
+If all you meant to do was review the files locally, you're all set.
+If you wish to make changes to the PR branch:
+
+1. Make changes and [commit][commit] normally
+2. Push your changes upstream: `git push <author-id> <name-of-the-PR-branch>`
+
+   _Note: If the PR predates the introduction of this feature by GitHub,
+ or if the author of the PR has disabled write-access by reviewers to the PR branch,
+ this will not work, and you will need to use the alternative described below._
+
+If, instead of modifying the existing PR, you wish to make a new one based on it:
+
+1. Set up a new branch that contains the existing PR by doing one of the following:
+ 1. Create a new branch from the tip of the PR:
+ `git branch <your-new-branch> <name-of-the-PR-branch> && git checkout <your-new-branch>`
+ 2. Create a new branch from `master` and merge the PR into it:
+ `git branch <your-new-branch> master && git checkout <your-new-branch> && git merge <name-of-the-PR-branch>`
+2. Make changes and [commit][commit] normally
+3. Push your changes to **your** repo: `git push origin <your-new-branch>`
+4. Go to the GitHub web UI to [submit a new Pull Request][submit].
+
+ _Note: You should also close the original pull request._
+
+When you're done reviewing or making changes,
+you can delete the branch: `git branch -d <name-of-the-PR-branch>`
+(use `-D` instead of `-d` to delete a branch that has not been merged into master yet).
+
+If you do not expect to work with more PRs from the same author,
+you may also discard your connection to their repo:
+`git remote remove <author-id>`
+
+[clone]: ../writing-tests/github-intro.html#clone
+[commit]: ../writing-tests/github-intro.html#commit
+[configure]: ../writing-tests/github-intro.html#configure-remote-upstream
+[fork]: ../writing-tests/github-intro.html#fork-the-test-repository
+[submit]: ../writing-tests/github-intro.html#submit
diff --git a/testing/web-platform/tests/docs/reviewing-tests/index.md b/testing/web-platform/tests/docs/reviewing-tests/index.md
new file mode 100644
index 0000000000..e313f84596
--- /dev/null
+++ b/testing/web-platform/tests/docs/reviewing-tests/index.md
@@ -0,0 +1,62 @@
+# Reviewing Tests
+
+In order to encourage a high level of quality in the W3C test
+suites, test contributions must be reviewed by a peer.
+
+```eval_rst
+.. toctree::
+ :maxdepth: 1
+
+ checklist
+ email
+ git
+ reverting
+```
+
+## Test Review Policy
+
+The reviewer can be anyone (other than the original test author) that
+has the required experience with both the spec under test and with
+the [general test guidelines](../writing-tests/general-guidelines).
+
+The review must happen in public, but there is no requirement for it
+to happen in any specific location. In particular if a vendor is
+submitting tests that have already been publicly reviewed in their own
+review system, that review may be carried forward. For other submissions, we
+recommend using GitHub's built-in review tools.
+
+Regardless of what review tool is used, the review must be clearly
+linked in the pull request.
+
+In general, we err on the side of merging things with nits (i.e.,
+anything sub-optimal that isn't absolutely required to be right) and
+then opening issues to track the remaining fixes, rather than leaving
+pull requests open indefinitely waiting on the original submitter to
+address them; when tests are being upstreamed from vendors it is
+frequently the case that the author has moved on to other work, as
+tests often only get pushed upstream once the code lands in their
+implementation.
+
+To assist with test reviews, a [review checklist](checklist) is available.
+
+[GitHub.com allows reviewers to formally signal their approval of a pull
+request through a dedicated user
+interface.](https://help.github.com/en/articles/about-pull-request-reviews)
+Every pull request submitted to WPT must be approved by at least one project
+collaborator before it can be merged.
+
+## Notifications
+
+META.yml files are used to indicate who should be notified of pull
+requests. If you are interested in receiving notifications of proposed
+changes to tests in a given directory, feel free to add your GitHub account
+username to the `suggested_reviewers` list in the META.yml file.
+
+## Finding contributions to review
+
+Here are a few search filters to find things to review:
+
+* [Open PRs (excluding vendor exports)](https://github.com/web-platform-tests/wpt/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aopen+-label%3A%22mozilla%3Agecko-sync%22+-label%3A%22chromium-export%22+-label%3A%22webkit-export%22+-label%3A%22servo-export%22+-label%3Avendor-imports)
+* [Reviewed but still open PRs (excluding vendor exports)](https://github.com/web-platform-tests/wpt/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aopen+-label%3Amozilla%3Agecko-sync+-label%3Achromium-export+-label%3Awebkit-export+-label%3Aservo-export+-label%3Avendor-imports+review%3Aapproved+-label%3A%22do+not+merge+yet%22+-label%3A%22status%3Aneeds-spec-decision%22) (Merge? Something left to fix? Ping other reviewer?)
+* [Open PRs without reviewers](https://github.com/web-platform-tests/wpt/pulls?q=is%3Apr+is%3Aopen+label%3Astatus%3Aneeds-reviewers)
+* [Open PRs with label `infra` (excluding vendor exports)](https://github.com/web-platform-tests/wpt/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aopen+label%3Ainfra+-label%3A%22mozilla%3Agecko-sync%22+-label%3A%22chromium-export%22+-label%3A%22webkit-export%22+-label%3A%22servo-export%22+-label%3Avendor-imports)
+* [Open PRs with label `docs` (excluding vendor exports)](https://github.com/web-platform-tests/wpt/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aopen+label%3Adocs+-label%3A%22mozilla%3Agecko-sync%22+-label%3A%22chromium-export%22+-label%3A%22webkit-export%22+-label%3A%22servo-export%22+-label%3Avendor-imports)
diff --git a/testing/web-platform/tests/docs/reviewing-tests/reverting.md b/testing/web-platform/tests/docs/reviewing-tests/reverting.md
new file mode 100644
index 0000000000..d374f0558e
--- /dev/null
+++ b/testing/web-platform/tests/docs/reviewing-tests/reverting.md
@@ -0,0 +1,23 @@
+# Reverting Changes
+
+Testing is imperfect and from time to time changes are merged into master which
+break things for users of web-platform-tests. Such breakage can include:
+
+ * Failures in Travis or Taskcluster runs for this repository, either on the
+ master branch or on pull requests following the breaking change.
+
+ * Breakage in browser engine repositories which import and run
+ web-platform-tests, such as Chromium, Edge, Gecko, Servo and WebKit.
+
+ * Breakage in results collections systems for results dashboards, such as
+ [wpt.fyi](https://wpt.fyi).
+
+When such breakage happens, if the maintainers of the affected systems request
+it, pull requests to revert the original change should normally be approved and
+merged as soon as possible. (When the original change itself was fixing a
+serious problem, it's a judgement call, but prefer the fastest path to a stable
+state acceptable to everyone.)
+
+Once a revert has happened, the maintainers of the affected systems are
+expected to work with the original patch author to resolve the problem so that
+the change can be relanded. A reasonable timeframe to do so is within one week.
diff --git a/testing/web-platform/tests/docs/running-tests/android_webview.md b/testing/web-platform/tests/docs/running-tests/android_webview.md
new file mode 100644
index 0000000000..4a86814fdf
--- /dev/null
+++ b/testing/web-platform/tests/docs/running-tests/android_webview.md
@@ -0,0 +1,51 @@
+# Android WebView
+
+To run WPT on WebView on an Android device, some additional set-up is required.
+
+Currently, Android WebView support is experimental.
+
+## Prerequisites
+
+Please check [Chrome for Android](chrome_android.md) for the common instructions for Android support first.
+
+Ensure you have a userdebug or eng Android build installed on the device.
+
+Install an up-to-date version of system webview shell:
+1. Go to [chromium-browser-snapshots](https://commondatastorage.googleapis.com/chromium-browser-snapshots/index.html?prefix=Android/)
+2. Find the subdirectory with the highest number and click it; this number can be found
+   in the "Commit Position" column of the "LAST_CHANGE" row (at the bottom of the page).
+3. Download `chrome-android.zip` file and unzip it.
+4. Install `SystemWebViewShell.apk`.
+5. On an emulator, the system webview shell may already be installed by default. In that case you may need to remove the existing apk:
+    * Choose a userdebug build.
+    * Run an emulator with a
+      [writable system partition from the command line](https://chromium.googlesource.com/chromium/src/+/HEAD/docs/android_emulator.md/)
+
+If you have an issue with ChromeDriver version mismatch, try one of the following:
+ * Try removing `_venv/bin/chromedriver` so that the wpt runner can install a matching version
+   automatically. Failing that, check your environment `PATH` and make
+   sure that no other ChromeDriver is used.
+ * Download the [ChromeDriver binary](https://chromedriver.chromium.org/) matching your WebView's major version and specify it on the command line
+ ```
+ ./wpt run --webdriver-binary <binary path> ...
+ ```
+
+Configure host remap rules in the [webview commandline file](https://cs.chromium.org/chromium/src/android_webview/docs/commandline-flags.md?l=57):
+```
+adb shell "echo '_ --host-resolver-rules=\"MAP nonexistent.*.test ~NOTFOUND, MAP *.test 127.0.0.1\"' > /data/local/tmp/webview-command-line"
+```
+
+Ensure that `adb` can be found on your system's PATH.
+
+## Running Tests
+
+Example command line:
+
+```bash
+./wpt run --test-type=testharness android_webview <TESTS>
+```
+
+* Note that there is no support for channel selection or automatic installation. Tests
+  will be run against the WebView version currently installed on the device.
+
+* Reftests are not supported at the moment.
diff --git a/testing/web-platform/tests/docs/running-tests/chrome-chromium-installation-detection.md b/testing/web-platform/tests/docs/running-tests/chrome-chromium-installation-detection.md
new file mode 100644
index 0000000000..63fcd8688d
--- /dev/null
+++ b/testing/web-platform/tests/docs/running-tests/chrome-chromium-installation-detection.md
@@ -0,0 +1,96 @@
+# Detection and Installation of Browser and WebDriver Binaries for Chrome and Chromium
+
+This is a detailed description of how WPT detects and installs the browser
+components for Chrome and Chromium. The process can seem convoluted at first
+glance, but it exists to ensure that these components are compatible with each
+other and are the items the user actually intends to test.
+
+## Chrome
+
+### Detection
+**Browser**: Because WPT does not offer installation of Chrome browser binaries, it will
+not attempt to detect a Chrome browser binary in the virtual environment directory.
+Instead, commonly-used installation locations on various operating systems are checked to
+detect a valid Chrome binary. This detection process is only used if the user has not passed
+a binary path as an argument using the `--binary` flag.
+
+**WebDriver**: ChromeDriver detection for Chrome will only occur if a valid browser binary
+has been found. Once the browser binary version is detected, the virtual environment
+directory will be checked to see if a matching ChromeDriver version is already installed.
+If the browser and ChromeDriver versions do not match, the ChromeDriver binary will be
+removed from the directory and the user will be prompted to begin the webdriver installation
+process. A ChromeDriver version is considered to match the browser version if it shares
+the same major version, or the next major version when testing Chrome Dev. For example,
+Chrome 98.x.x.x is considered to match ChromeDriver 98.x.x.x, or also ChromeDriver
+99.x.x.x when testing Chrome Dev.
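+
+If detection picks up the wrong components, both can be specified explicitly on
+the command line. A possible invocation (the paths here are placeholders, not
+real defaults):
+
+```bash
+# Skip detection by pointing wpt directly at a Chrome binary and a matching
+# ChromeDriver that you have already downloaded.
+./wpt run --binary /path/to/google-chrome \
+          --webdriver-binary /path/to/chromedriver \
+          chrome [tests]
+```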
+
+Note: Both Chrome’s and Chromium’s versions of ChromeDriver are stored in separate
+directories in the virtual environment directory, i.e.
+`_venv3/bin/{chrome|chromium}/{chromedriver}`. This safeguards against accidentally
+using Chromium’s ChromeDriver for a Chrome run and vice versa. Additionally, there
+is no need to reinstall ChromeDriver when switching between testing Chrome and Chromium.
+
+### Installation
+**Browser**: Browser binary installation is not provided through WPT and will throw a
+`NotImplementedError` if attempted via `./wpt install`. The user will need to
+have a browser binary on their system that can be detected or provide a path explicitly
+using the `--binary` flag.
+
+**WebDriver**: A version of ChromeDriver will only be installed once a Chrome browser binary
+has been given or detected. A `FileNotFoundError` will be raised if the user tries to download
+ChromeDriver via `./wpt install` and a browser binary is not located. After browser binary
+detection, a version of ChromeDriver that matches the browser binary will be installed.
+The download source for this ChromeDriver is
+[described here](https://chromedriver.chromium.org/downloads/version-selection).
+If a matching ChromeDriver version cannot be found using this process, it is assumed that
+the Chrome browser binary is a dev version which does not have a ChromeDriver version available
+through official releases. In this case, the Chromium revision associated with this version is
+detected from [OmahaProxy](https://omahaproxy.appspot.com/) and used to download
+Chromium's version of ChromeDriver for use from Chromium snapshots, as this is currently
+the closest version we can match for Chrome Dev. Finally, if the revision number detected is
+not available in Chromium snapshots, or if the version does not match any revision number,
+the latest revision of Chromium's ChromeDriver is installed from Chromium snapshots.
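+
+As a concrete (hypothetical) example, the following asks wpt to use an existing
+Chrome Dev binary and to install a matching ChromeDriver into the virtual
+environment as described above; the binary path is a placeholder:
+
+```bash
+./wpt run --channel dev --binary /path/to/google-chrome-unstable \
+          --install-webdriver chrome [tests]
+```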
+
+## Chromium
+
+### Detection
+**Browser**: Chromium browser binary detection is only done in the virtual
+environment directory `_venv3/browsers/{channel}/`, not on the user’s system
+outside of this directory. This detection process is only used if the user has
+not passed a binary path as an argument using the `--binary` flag.
+
+**WebDriver**: ChromeDriver detection for Chromium will only occur if a valid browser binary has
+been found. Once the browser binary version is detected, the virtual environment directory will
+be checked to see if a matching ChromeDriver version is already installed. If the versions do not
+match, the ChromeDriver binary will be removed from the directory and the user will be prompted to
+begin the webdriver installation process. For Chromium, the ChromeDriver and browser versions must be
+the same to be considered matching. For example, Chromium 99.0.4844.74 will only match ChromeDriver
+99.0.4844.74.
+
+### Installation
+**Browser**: Chromium’s browser binary will be installed from
+[Chromium snapshots storage](https://storage.googleapis.com/chromium-browser-snapshots/index.html).
+The last revision associated with the user’s operating system will be downloaded
+(this revision is obtained from the `LAST_CHANGE` entry in the snapshots bucket).
+Chromium does not have varying channels, so the installation uses the default `nightly`
+designation. The install path is `_venv3/browsers/nightly/{chromium_binary}`.
+
+Note: If this download process is successful, the Chromium snapshot URL that the browser
+binary was downloaded from will be kept during the current invocation. If a Chromium ChromeDriver
+is also downloaded later to match this browser binary, the same URL is used for that download to
+ensure both components are downloaded from the same source.
+
+**WebDriver**: A version of ChromeDriver will only be installed once a Chromium browser binary
+has been given or detected. A `FileNotFoundError` will be raised if the user tries to download
+ChromeDriver via the install command and a browser binary is not located. A version of
+ChromeDriver that matches the version of the browser binary will be installed. The download
+source for this ChromeDriver will be the Chromium snapshots. If a Chromium browser
+binary and webdriver are installed in the same invocation of `./wpt run`
+(for example, by passing both `--install-browser` and `--install-webdriver` flags), then the
+browser binary and ChromeDriver will be pulled from the same Chromium snapshots URL (see Note
+from browser installation). Although unusual, if a detected Chromium browser binary is not
+the tip-of-tree revision, was not downloaded and installed during this invocation of
+`./wpt run`, and does not match the currently installed ChromeDriver version, then an
+attempt will be made to detect the revision number from the browser binary version using
+[OmahaProxy](https://omahaproxy.appspot.com/) and to download the matching ChromeDriver
+for that revision from the Chromium snapshots.
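+
+A possible invocation that exercises this path end to end, installing both the
+Chromium binary and its matching ChromeDriver from the same snapshot in a
+single run (flags as described above):
+
+```bash
+./wpt run --install-browser --install-webdriver chromium [tests]
+```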
diff --git a/testing/web-platform/tests/docs/running-tests/chrome.md b/testing/web-platform/tests/docs/running-tests/chrome.md
new file mode 100644
index 0000000000..45293af65e
--- /dev/null
+++ b/testing/web-platform/tests/docs/running-tests/chrome.md
@@ -0,0 +1,33 @@
+# Chrome and Chromium
+
+When running Chrome, there are some useful command line arguments.
+
+You can inform `wpt` of the release channel of Chrome using `--channel`.
+`wpt` is able to find the correct binary in the following cases:
+* On Linux for stable, beta and dev channels if
+ `google-chrome-{stable,beta,unstable}` are in `PATH`;
+* On Mac for stable and canary channels if the official DMGs are installed.
+
+In other cases, you will need to specify the path to the Chrome binary with
+`--binary`. For example:
+
+```bash
+./wpt run --channel dev --binary /path/to/non-default/google-chrome chrome
+```
+
+Note: when the channel is "dev", `wpt` will *automatically* enable all
+[experimental web platform features][1]
+(chrome://flags/#enable-experimental-web-platform-features) by passing
+`--enable-experimental-web-platform-features` to Chrome.
+
+If you want to enable a specific [runtime enabled feature][1], use
+`--binary-arg` to specify the flag(s) that you want to pass to Chrome:
+
+```bash
+./wpt run --binary-arg=--enable-blink-features=AsyncClipboard chrome clipboard-apis/
+```
+
+[A detailed explanation is available](chrome-chromium-installation-detection.html)
+of how wpt detects and installs the components for Chrome and Chromium.
+
+[1]: https://chromium.googlesource.com/chromium/src/+/main/third_party/blink/renderer/platform/RuntimeEnabledFeatures.md
diff --git a/testing/web-platform/tests/docs/running-tests/chrome_android.md b/testing/web-platform/tests/docs/running-tests/chrome_android.md
new file mode 100644
index 0000000000..a216a8a68b
--- /dev/null
+++ b/testing/web-platform/tests/docs/running-tests/chrome_android.md
@@ -0,0 +1,22 @@
+# Chrome for Android
+
+To run WPT on Chrome on an Android device, some additional setup is required.
+
+As with usual Android development, you need to have `adb` and be able to
+connect to the device. Run `adb devices` to verify.
+
+Currently, Android support is a prototype with some known issues:
+
+* If you have previously run `./wpt run` against Chrome, you might need to
+ remove `_venv/bin/chromedriver` so that we can install the correct
+ ChromeDriver corresponding to your Chrome for Android version.
+* We do not support reftests at the moment.
+* You will need to manually kill Chrome (all channels) before running tests.
+
+Note: rooting the device or installing a root CA is no longer required.
+
+Example (assuming you have Chrome Canary installed on your phone):
+
+```bash
+./wpt run --test-type=testharness --channel=canary chrome_android TESTS
+```
diff --git a/testing/web-platform/tests/docs/running-tests/command-line-arguments.md b/testing/web-platform/tests/docs/running-tests/command-line-arguments.md
new file mode 100644
index 0000000000..598c9da2a1
--- /dev/null
+++ b/testing/web-platform/tests/docs/running-tests/command-line-arguments.md
@@ -0,0 +1,14 @@
+# Command-Line Arguments
+
+The `wpt` command-line application offers a number of features for interacting
+with WPT. The functionality is organized into "sub-commands", and each accepts
+a different set of command-line arguments.
+
+This page documents all of the available sub-commands and associated arguments.
+
+```eval_rst
+.. argparse::
+ :module: tools.wpt.wpt
+ :func: create_complete_parser
+ :prog: wpt
+```
diff --git a/testing/web-platform/tests/docs/running-tests/custom-runner.md b/testing/web-platform/tests/docs/running-tests/custom-runner.md
new file mode 100644
index 0000000000..029a8771c1
--- /dev/null
+++ b/testing/web-platform/tests/docs/running-tests/custom-runner.md
@@ -0,0 +1,21 @@
+# Writing Your Own Runner
+
+Most test runners have two stages: finding all tests, followed by
+executing them (or a subset thereof).
+
+To find all tests in the repository, it is **strongly** recommended to
+use the included `wpt manifest` tool: the required behaviors are more
+complex than what are documented (especially when it comes to
+precedence of the various possibilities and some undocumented legacy
+ways to define test types), and hence its behavior should be
+considered the canonical definition of how to enumerate tests and find
+their type in the repository.
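+
+As a rough sketch of how that can look in practice (the `--rebuild` and `-p`
+flag spellings are assumptions; check `./wpt manifest --help` for the
+authoritative options):
+
+```bash
+# Regenerate the manifest, then inspect the JSON it writes.
+./wpt manifest --rebuild -p MANIFEST.json
+python3 -c "import json; print(sorted(json.load(open('MANIFEST.json')).keys()))"
+```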
+
+For test execution, please read the documentation for the various test
+types very carefully and then check your understanding on the [mailing
+list][public-test-infra] or [matrix][matrix]. It's possible edge-case
+behavior isn't properly documented!
+
+[public-test-infra]: https://lists.w3.org/Archives/Public/public-test-infra/
+[matrix]: https://app.element.io/#/room/#wpt:matrix.org
+[web irc]: http://irc.w3.org
diff --git a/testing/web-platform/tests/docs/running-tests/from-ci.md b/testing/web-platform/tests/docs/running-tests/from-ci.md
new file mode 100644
index 0000000000..9ea142bb4b
--- /dev/null
+++ b/testing/web-platform/tests/docs/running-tests/from-ci.md
@@ -0,0 +1,32 @@
+# Running Tests on CI
+
+Contributors with write access to the repository can trigger full runs in the
+same CI systems used to produce results for [wpt.fyi](https://wpt.fyi). The runs
+are triggered by pushing to branch names of the form `triggers/$browser_$channel`
+and the results will be automatically submitted to wpt.fyi.
+
+This is useful when making infrastructure changes that could affect very many
+tests, in order to avoid regressions.
+
+Note: Full runs use a lot of CI resources, so please take care to not trigger
+them more than necessary.
+
+Instructions:
+
+ * Base your changes on a commit for which there are already results in wpt.fyi.
+
+ * Determine which branch name to push to by looking for `refs/heads/triggers/`
+ in `.azure-pipelines.yml` and `.taskcluster.yml`. For example, to trigger a
+ full run of Safari Technology Preview, the branch name is
+ `triggers/safari_preview`.
+
+ * Force push to the branch, for example:
+ `git push --force-with-lease origin HEAD:triggers/safari_preview`.
+ The `--force-with-lease` argument is to detect if someone else has just
+ pushed. When this happens wait for the checkout step of their triggered run
+ to finish before you force push again.
+
+You can see if the run started from the commit status on GitHub's commits listing
+([example](https://github.com/web-platform-tests/wpt/commits/triggers/safari_preview))
+and if successful the results will show up on wpt.fyi within 10 minutes
+([example](https://wpt.fyi/runs?product=safari)).
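+
+Putting the steps together, a hypothetical sequence for triggering a Safari
+Technology Preview run might look like this (the choice of base commit is up to
+you, as long as it already has results on wpt.fyi):
+
+```bash
+# Start from a commit that already has results on wpt.fyi, then push it to
+# the trigger branch.
+git checkout <commit-with-results>
+git push --force-with-lease origin HEAD:triggers/safari_preview
+```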
diff --git a/testing/web-platform/tests/docs/running-tests/from-local-system.md b/testing/web-platform/tests/docs/running-tests/from-local-system.md
new file mode 100644
index 0000000000..3865038ef6
--- /dev/null
+++ b/testing/web-platform/tests/docs/running-tests/from-local-system.md
@@ -0,0 +1,215 @@
+# Running Tests from the Local System
+
+The tests are designed to be run from your local computer.
+
+## System Setup
+
+Running the tests requires `python`, `pip` and `virtualenv`, as well as updating
+the system `hosts` file.
+
+WPT requires Python 3.7 or higher.
+
+The required setup is different depending on your operating system.
+
+### Linux Setup
+
+If not already present, use the system package manager to install `python`,
+`pip` and `virtualenv`.
+
+On Debian or Ubuntu:
+
+```bash
+sudo apt-get install python python-pip virtualenv
+```
+
+It is important to have a package that provides a `python` binary. On Fedora,
+for example, that means installing the `python-unversioned-command` package. On
+Ubuntu Focal and later, the package is called `python-is-python3`.
+
+### macOS Setup
+
+The system-provided Python can be used, while `pip` and `virtualenv` can be
+installed for the user only:
+
+```bash
+python -m ensurepip --user
+export PATH="$PATH:$( python3 -m site --user-base )/bin"
+pip install --user virtualenv
+```
+
+To make the `PATH` change persistent, add it to your `~/.bash_profile` file or
+wherever you currently set your PATH.
+
+See also [additional setup required to run Safari](safari.md).
+
+### Windows Setup
+
+Download and install [Python 3](https://www.python.org/downloads). The
+installer includes `pip` by default.
+
+Add `C:\Python39` and `C:\Python39\Scripts` to your `%Path%`
+[environment variable](http://www.computerhope.com/issues/ch000549.htm).
+
+Finally, install `virtualenv`:
+
+```bash
+pip install virtualenv
+```
+
+The standard Windows shell requires that all `wpt` commands be prefixed
+with the Python binary; i.e., assuming `python` is on your path, the server is
+started using:
+
+```bash
+python wpt serve
+```
+
+#### Windows Subsystem for Linux
+
+Optionally on Windows you can use the [Windows Subsystem for
+Linux](https://docs.microsoft.com/en-us/windows/wsl/about) (WSL). If doing so,
+installation and usage are similar to the Linux instructions. Be aware that WSL
+may attempt to override `/etc/hosts` each time it is launched, which would then
+require you to re-run [`hosts` File Setup](#hosts-file-setup). This behavior
+[can be configured](https://docs.microsoft.com/en-us/windows/wsl/wsl-config#network).
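+
+One common way to stop WSL from regenerating `/etc/hosts` (an assumption based
+on the linked documentation, not something the wpt tooling does for you) is to
+disable automatic hosts generation in `/etc/wsl.conf`:
+
+```bash
+sudo tee -a /etc/wsl.conf <<'EOF'
+[network]
+generateHosts = false
+EOF
+```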
+
+### `hosts` File Setup
+
+To get the tests running, you need to set up the test domains in your
+[`hosts` file](http://en.wikipedia.org/wiki/Hosts_%28file%29%23Location_in_the_file_system).
+
+On Linux, macOS or other UNIX-like system:
+
+```bash
+./wpt make-hosts-file | sudo tee -a /etc/hosts
+```
+
+And on Windows (this must be run in a PowerShell session with Administrator privileges):
+
+```
+python wpt make-hosts-file | Out-File $env:SystemRoot\System32\drivers\etc\hosts -Encoding ascii -Append
+```
+
+If you are behind a proxy, you also need to make sure the domains above are
+excluded from your proxy lookups.
+
+## Via the browser
+
+The test environment can then be started using
+
+ ./wpt serve
+
+This will start HTTP servers on two ports and a websockets server on
+one port. By default the web servers start on ports 8000 and 8443 and the other
+ports are randomly-chosen free ports. Tests must be loaded from the
+*first* HTTP server in the output. To change the ports,
+create a `config.json` file in the wpt root directory, and add
+port definitions of your choice e.g.:
+
+```
+{
+ "ports": {
+ "http": [1234, "auto"],
+ "https":[5678]
+ }
+}
+```
+
+After your `hosts` file is configured, the servers will be locally accessible at:
+
+http://web-platform.test:8000/<br>
+https://web-platform.test:8443/ *
+
+To use the web-based runner point your browser to:
+
+http://web-platform.test:8000/tools/runner/index.html<br>
+https://web-platform.test:8443/tools/runner/index.html *
+
+This server has all the capabilities of the publicly-deployed version--see
+[Running the Tests from the Web](from-web.md).
+
+\**See [Trusting Root CA](../tools/certs/README.md)*
+
+## Via the command line
+
+Many tests can be automatically executed in a new browser instance using
+
+ ./wpt run [browsername] [tests]
+
+This will automatically load the tests in the chosen browser and extract the
+test results. For example to run the `dom/historical.html` tests in a local
+copy of Chrome:
+
+ ./wpt run chrome dom/historical.html
+
+Or to run in a specified copy of Firefox:
+
+ ./wpt run --binary ~/local/firefox/firefox firefox dom/historical.html
+
+For details on the supported products and a large number of other options for
+customising the test run:
+
+ ./wpt run --help
+
+[A complete listing of the command-line arguments is available
+here](command-line-arguments.html#run).
+
+```eval_rst
+.. toctree::
+ :hidden:
+
+ command-line-arguments
+```
+
+### Browser-specific instructions
+
+```eval_rst
+.. toctree::
+
+ chrome
+ chrome_android
+ android_webview
+ safari
+ webkitgtk_minibrowser
+```
+
+### Running in parallel
+
+To speed up the testing process, use the `--processes` option to run multiple
+browser instances in parallel. For example, to run the tests in dom/ with six
+Firefox instances in parallel:
+
+ ./wpt run --processes=6 firefox dom/
+
+But note that behaviour in this mode is necessarily less deterministic than with
+a single process (the default), so there may be more noise in the test results.
+
+### Output formats
+
+By default, `./wpt run` outputs test results and a summary in a human readable
+format. For debugging, `--log-mach` can give more verbose output. For example:
+
+ ./wpt run --log-mach=- --log-mach-level=info firefox dom/
+
+A machine readable JSON report can be produced using `--log-wptreport`. This
+together with `--log-wptscreenshot` is what is used to produce results for
+[wpt.fyi](https://wpt.fyi). For example:
+
+ ./wpt run --log-wptreport=report.json --log-wptscreenshot=screenshots.txt firefox css/css-grid/
+
+(See [wpt.fyi documentation](https://github.com/web-platform-tests/wpt.fyi/blob/main/api/README.md#results-creation)
+for how results are uploaded.)
+
+### Expectation data
+
+For use in continuous integration systems, and other scenarios where regression
+tracking is required, the command-line interface supports storing and loading
+the expected result of each test in a test run. See [Expectations
+Data](../../tools/wptrunner/docs/expectation) for more information on creating
+and maintaining these files.
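+
+A possible workflow, shown here only as a sketch (the `update-expectations`
+arguments are assumptions; see `./wpt update-expectations --help`): record a
+machine-readable report from a run, then fold it into the metadata directory.
+
+```bash
+./wpt run --log-wptreport=report.json --metadata=./meta firefox dom/
+./wpt update-expectations --metadata=./meta report.json
+```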
+
+## Testing polyfills
+
+Polyfill scripts can be tested using the `--inject-script` argument to either
+`wpt run` or `wpt serve`. See [Testing Polyfills](testing-polyfills) for
+details.
diff --git a/testing/web-platform/tests/docs/running-tests/from-web.md b/testing/web-platform/tests/docs/running-tests/from-web.md
new file mode 100644
index 0000000000..157598da36
--- /dev/null
+++ b/testing/web-platform/tests/docs/running-tests/from-web.md
@@ -0,0 +1,27 @@
+# Running Tests from the Web
+
+Tests that have been merged on GitHub are mirrored at
+[wpt.live](https://wpt.live) and [w3c-test.org](https://w3c-test.org).
+[On properly-configured systems](from-local-system), local files may also be
+served from the URL [http://web-platform.test](http://web-platform.test).
+
+Not all tests can be executed in-browser, as some tests rely on automation
+(e.g. via [testdriver.js](../writing-tests/testdriver)) that is not available
+when running a browser in a normal user session.
+
+## Web test runner
+
+For running multiple tests inside a browser, there is a test runner
+located at `/tools/runner/index.html`.
+
+This allows all the tests, or those matching a specific prefix
+(e.g. all tests under `/dom/`) to be run. For testharness.js tests,
+the results will be automatically collected, while the runner
+provides a simple UI for manually comparing reftest rendering and
+running manual tests.
+
+Note, however, that it does not currently handle more complex reftests
+involving more than one reference.
+
+Because it runs entirely in-browser, this runner cannot deal with
+edge-cases like tests that cause the browser to crash or hang.
diff --git a/testing/web-platform/tests/docs/running-tests/index.md b/testing/web-platform/tests/docs/running-tests/index.md
new file mode 100644
index 0000000000..17b361dde8
--- /dev/null
+++ b/testing/web-platform/tests/docs/running-tests/index.md
@@ -0,0 +1,24 @@
+# Running Tests
+
+```eval_rst
+.. toctree::
+
+ from-web
+ from-local-system
+ from-ci
+ custom-runner
+ ../tools/certs/README.md
+```
+
+The simplest way to run the tests is via the public website. More detail on
+that approach is available in [Running tests from the Web](from-web).
+
+Contributors who are interested in modifying and creating tests should refer to
+[Running Tests from the Local System](from-local-system).
+
+Contributors with write access to the repository can also trigger full runs
+in our CI setups, see [Running Tests on CI](from-ci).
+
+Advanced use cases may call for a customized method of executing the tests.
+Guidelines for writing a custom "runner" are available at [Writing Your Own
+Runner](custom-runner).
diff --git a/testing/web-platform/tests/docs/running-tests/safari.md b/testing/web-platform/tests/docs/running-tests/safari.md
new file mode 100644
index 0000000000..eed5225471
--- /dev/null
+++ b/testing/web-platform/tests/docs/running-tests/safari.md
@@ -0,0 +1,47 @@
+# Safari
+
+To run Safari on macOS, some manual setup is required. Some steps are different
+for Safari and Safari Technology Preview.
+
+ * Allow Safari to be controlled by SafariDriver:
+ * `safaridriver --enable` or
+ * `"/Applications/Safari Technology Preview.app/Contents/MacOS/safaridriver" --enable`
+
+ * Allow pop-up windows:
+ * `defaults write com.apple.Safari WebKitJavaScriptCanOpenWindowsAutomatically 1` or
+ * `defaults write com.apple.SafariTechnologyPreview WebKitJavaScriptCanOpenWindowsAutomatically 1`
+
+ * Turn on additional experimental features in Safari Technology Preview:
+ * `defaults write com.apple.SafariTechnologyPreview ExperimentalServerTimingEnabled 1`
+
+ * Trust the certificate:
+ * `security add-trusted-cert -k "$(security default-keychain | cut -d\" -f2)" tools/certs/cacert.pem`
+
+ * Set `no_proxy='*'` in your environment. This is a
+ workaround for a known
+ [macOS High Sierra issue](https://github.com/web-platform-tests/wpt/issues/9007).
+
+Now, run the tests using the `safari` product:
+```
+./wpt run safari [test_list]
+```
+
+This will use the `safaridriver` found on the path, which will be stable Safari.
+To run Safari Technology Preview instead, use the `--channel=preview` argument:
+```
+./wpt run --channel=preview safari [test_list]
+```
+
+## Debugging
+
+To debug problems with `safaridriver`, add the `--webdriver-arg=--diagnose`
+argument:
+```
+./wpt run --channel=preview --webdriver-arg=--diagnose safari [test_list]
+```
+
+The logs will be in `~/Library/Logs/com.apple.WebDriver/`.
+See `man safaridriver` for more information.
+
+To enable safaridriver diagnostics in Azure Pipelines, set
+`safaridriver_diagnose` to `true` in `.azure-pipelines.yml`.
diff --git a/testing/web-platform/tests/docs/running-tests/testing-polyfills.md b/testing/web-platform/tests/docs/running-tests/testing-polyfills.md
new file mode 100644
index 0000000000..468bfa2e03
--- /dev/null
+++ b/testing/web-platform/tests/docs/running-tests/testing-polyfills.md
@@ -0,0 +1,67 @@
+# Testing polyfills
+
+## Preparing the polyfill
+
+The polyfill script-injection feature currently only supports scripts which
+are immediately invoked. The script must be prepared as a single file whose
+contents will be inlined into a script tag served as part of every test page.
+
+If your polyfill is only available as an asynchronous module with dependent
+scripts, you can use a tool such as
+[microbundle](https://github.com/developit/microbundle) to repackage it as a
+single synchronous script file, e.g.:
+
+```bash
+microbundle -f iife -i polyfill/src/main.js -o polyfill.js
+```
+
+## Running the tests
+
+Follow the steps for [Running Tests from the Local System](from-local-system) to
+set up your test environment. When running tests via the browser or via the
+command line, add the `--inject-script=polyfill.js` to either command, e.g.
+
+Via the browser:
+
+```bash
+./wpt serve --inject-script=polyfill.js
+```
+
+Then visit http://web-platform.test:8000/ or https://web-platform.test:8443/ to
+run the tests in your browser.
+
+Via the command line:
+
+```bash
+./wpt run --inject-script=polyfill.js [browsername] [tests]
+```
+
+## Limitations
+
+Polyfill scripts are injected to an inline script tag which removes itself from
+the DOM after executing. This is done by modifying the server response for
+documents with a `text/html` MIME type to insert the following before the first tag in
+the served response:
+
+```html
+<script>
+// <-- The polyfill file is inlined here
+// Remove the injected script tag from the DOM.
+document.currentScript.remove();
+</script>
+```
+
+This approach has a couple of limitations:
+* It requires that the polyfill is self-contained and executes
+synchronously in a single inline script. See [Preparing the
+polyfill](#preparing-the-polyfill) for suggestions on transforming polyfills to
+run that way.
+* It does not inject into Python handlers which write directly to the output
+  stream.
+* It does not inject into the worker context of `.any.js` tests.
+
+### Observability
+
+The script tag is removed from the DOM before any other script has run, and the
+polyfill runs from an inline script. As such, it should not affect mutation
+observers on the same page or resource timing APIs, since it is not a separate
+resource. The polyfill may still be observable by a mutation observer added by a
+parent frame before load.
diff --git a/testing/web-platform/tests/docs/running-tests/webkitgtk_minibrowser.md b/testing/web-platform/tests/docs/running-tests/webkitgtk_minibrowser.md
new file mode 100644
index 0000000000..7aac81e5fc
--- /dev/null
+++ b/testing/web-platform/tests/docs/running-tests/webkitgtk_minibrowser.md
@@ -0,0 +1,20 @@
+# WebKitGTK MiniBrowser
+
+
+To be able to run tests with the WebKitGTK MiniBrowser you need the
+following packages installed:
+
+* Fedora: `webkit2gtk3-devel`
+* Debian or Ubuntu: `webkit2gtk-driver`
+
+
+The WebKitGTK MiniBrowser is not installed on the default binary path.
+The `wpt` script will try to automatically locate it, but if you need
+to run it manually you can find it on any of this paths:
+
+* Fedora: `/usr/libexec/webkit2gtk-4.0/MiniBrowser`
+* Debian or Ubuntu: `/usr/lib/x86_64-linux-gnu/webkit2gtk-4.0/MiniBrowser`
+ * Note: if the machine architecture is not `x86_64`, then it will be located
+ inside:
+ `/usr/lib/${TRIPLET}/webkit2gtk-4.0/MiniBrowser`
+ where `TRIPLET=$(gcc -dumpmachine)`
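+
+A small locating sketch for a non-`x86_64` Debian or Ubuntu system (passing the
+resulting path via `--binary` is an assumption; `wpt` will normally find
+MiniBrowser on its own):
+
+```bash
+TRIPLET=$(gcc -dumpmachine)
+MINIBROWSER="/usr/lib/${TRIPLET}/webkit2gtk-4.0/MiniBrowser"
+./wpt run --binary "$MINIBROWSER" webkitgtk_minibrowser [tests]
+```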
diff --git a/testing/web-platform/tests/docs/test-suite-design.md b/testing/web-platform/tests/docs/test-suite-design.md
new file mode 100644
index 0000000000..6a104e2f1d
--- /dev/null
+++ b/testing/web-platform/tests/docs/test-suite-design.md
@@ -0,0 +1,79 @@
+# Test Suite Design
+
+The vast majority of the test suite is formed of HTML pages, which can
+be loaded in a browser and either programmatically provide a result or
+provide a set of steps to run the test and obtain the result.
+
+The tests are, in general, short, cross-platform, and self-contained,
+and should be easy to run in any browser.
+
+
+## Test Layout
+
+Most of the repository's top-level directories hold tests for specific web
+standards. For [W3C specs](https://www.w3.org/standards/), these directories
+are typically named after the shortname of the spec (i.e. the name used for
+snapshot publications under `/TR/`); for [WHATWG
+specs](https://spec.whatwg.org/), they are typically named after the subdomain
+of the spec (i.e. trimming `.spec.whatwg.org` from the URL); for other specs,
+something deemed sensible is used. The `css/` directory contains test suites
+for [the CSS Working Group
+specifications](https://www.w3.org/Style/CSS/current-work).
+
+Within the specification-specific directory there are two common ways
+of laying out tests: the first is a flat structure which is sometimes
+adopted for very short specifications; the alternative is a nested
+structure with each subdirectory corresponding to the id of a heading
+in the specification. The latter provides some implicit metadata about
+the part of a specification being tested according to its location in
+the filesystem, and is preferred for larger specifications.
+
+For example, tests in HTML for ["The History
+interface"](https://html.spec.whatwg.org/multipage/history.html#the-history-interface)
+are located in `html/browsers/history/the-history-interface/`.
+
+Many directories also include a file named `META.yml`. This file may define any
+of the following properties:
+
+- `spec` - a link to the specification covered by the tests in the directory
+- `suggested_reviewers` - a list of GitHub account usernames belonging to
+ people who are notified when pull requests modify files in the directory
+
+Various resources that tests depend on are in `common`, `images`, `fonts`,
+`media`, and `resources`.
+
+## Test Types
+
+Tests in this project use a few different approaches to verify expected
+behavior. The tests can be classified based on the way they express
+expectations:
+
+* Rendering tests ensure that the browser graphically displays pages as
+ expected. There are a few different ways this is done:
+
+ * [Reftests][] render two (or more) web pages and combine them with equality
+ assertions about their rendering (e.g., `A.html` and `B.html` must render
+ identically), run either by the user switching between tabs/windows and
+ trying to observe differences or through [automated
+ scripts][running-from-local-system].
+
+ * [Visual tests][visual] display a page where the result is determined either
+ by a human looking at it or by comparing it with a saved screenshot for
+ that user agent on that platform.
+
+* [testharness.js][] tests verify that JavaScript interfaces behave as
+ expected. They get their name from the JavaScript harness that's used to
+ execute them.
+
+* [wdspec][] tests are written in Python and test [the WebDriver browser
+  automation protocol](https://w3c.github.io/webdriver/).
+
+* [Manual tests][manual] rely on a human to run them and determine their
+ result.
+
+[reftests]: writing-tests/reftests
+[testharness.js]: writing-tests/testharness
+[visual]: writing-tests/visual
+[manual]: writing-tests/manual
+[running-from-local-system]: running-tests/from-local-system
+[wdspec]: writing-tests/wdspec
diff --git a/testing/web-platform/tests/docs/wpt_lint_rules.py b/testing/web-platform/tests/docs/wpt_lint_rules.py
new file mode 100644
index 0000000000..01f965edfd
--- /dev/null
+++ b/testing/web-platform/tests/docs/wpt_lint_rules.py
@@ -0,0 +1,78 @@
+from docutils.parsers.rst import Directive, nodes
+from docutils.utils import new_document
+from recommonmark.parser import CommonMarkParser
+import importlib
+import textwrap
+
+class WPTLintRules(Directive):
+ """A docutils directive to generate documentation for the
+ web-platform-test-test's linting tool from its source code. Requires a
+ single argument: a Python module specifier for a file which declares
+ linting rules."""
+ has_content = True
+ required_arguments = 1
+ optional_arguments = 0
+ _md_parser = CommonMarkParser()
+
+ @staticmethod
+ def _parse_markdown(markdown):
+ WPTLintRules._md_parser.parse(markdown, new_document("<string>"))
+ return WPTLintRules._md_parser.document.children[0]
+
+ @property
+ def module_specifier(self):
+ return self.arguments[0]
+
+ def _get_rules(self):
+ try:
+ module = importlib.import_module(self.module_specifier)
+ except ImportError:
+ raise ImportError(
+ """wpt-lint-rules: unable to resolve the module at "{}".""".format(self.module_specifier)
+ )
+
+ for binding_name, value in module.__dict__.items():
+ if hasattr(value, "__abstractmethods__") and len(value.__abstractmethods__):
+ continue
+
+ description = getattr(value, "description", None)
+ name = getattr(value, "name", None)
+ to_fix = getattr(value, "to_fix", None)
+
+ if description is None:
+ continue
+
+ if to_fix is not None:
+ to_fix = textwrap.dedent(to_fix)
+
+ yield {
+ "name": name,
+ "description": textwrap.dedent(description),
+ "to_fix": to_fix
+ }
+
+
+ def run(self):
+ definition_list = nodes.definition_list()
+
+ for rule in sorted(self._get_rules(), key=lambda rule: rule['name']):
+ item = nodes.definition_list_item()
+ definition = nodes.definition()
+ term = nodes.term()
+ item += term
+ item += definition
+ definition_list += item
+
+ term += nodes.literal(text=rule["name"])
+ definition += WPTLintRules._parse_markdown(rule["description"])
+
+ if rule["to_fix"]:
+ definition += nodes.strong(text="To fix:")
+ definition += WPTLintRules._parse_markdown(rule["to_fix"])
+
+ if len(definition_list.children) == 0:
+ raise Exception(
+ """wpt-lint-rules: no linting rules found at "{}".""".format(self.module_specifier)
+ )
+
+ return [definition_list]
diff --git a/testing/web-platform/tests/docs/writing-tests/ahem.md b/testing/web-platform/tests/docs/writing-tests/ahem.md
new file mode 100644
index 0000000000..30a3fcde26
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/ahem.md
@@ -0,0 +1,78 @@
+# The Ahem Font
+
+A font called [Ahem][ahem-readme] has been developed which consists of
+some very well defined glyphs of precise sizes and shapes; it is
+especially useful for testing font and text properties. Installation
+instructions are available in [Running Tests from the Local
+System](../running-tests/from-local-system).
+
+The font's em-square is exactly square. Its ascent and descent
+combined is exactly the size of the em square; this means that the
+font's extent is exactly the same as its line-height, meaning that it
+can be exactly aligned with padding, borders, margins, and so
+forth. Its alphabetic baseline is 0.2em above its bottom, and 0.8em
+below its top.
+
+The font has four glyphs:
+
+* X (U+0058): A square exactly 1em in height and width.
+* p (U+0070): A rectangle exactly 0.2em high, 1em wide, and aligned so
+that its top is flush with the baseline.
+* É (U+00C9): A rectangle exactly 0.8em high, 1em wide, and aligned so
+that its bottom is flush with the baseline.
+* [space] (U+0020): A transparent space exactly 1em high and wide.
+
+Most other US-ASCII characters in the font have the same glyph as X.
+
+## Usage
+Ahem should be loaded in tests as a web font. To simplify this, a test can
+link to the `/fonts/ahem.css` stylesheet:
+
+```
+<link rel="stylesheet" type="text/css" href="/fonts/ahem.css" />
+```
+
+If the test uses the Ahem font, make sure its computed font-size is a
+multiple of 5px; otherwise, baseline alignment may be rendered
+inconsistently. A minimum computed font-size of 20px is suggested.
+
+An explicit (i.e., not `normal`) line-height should also always be
+used, with the difference between the computed line-height and
+font-size being divisible by 2. In the common case, having the same
+value for both is desirable.
+
+Other font properties should make sure they have their default values;
+as such, the `font` shorthand should normally be used.
+
+As a result, what is typically recommended is:
+
+
+``` css
+div {
+ font: 25px/1 Ahem;
+}
+```
+
+Some things to avoid:
+
+``` css
+div {
+ font: 1em/1em Ahem; /* computed font-size is typically 16px and potentially
+ affected by parent elements */
+}
+
+div {
+ font: 20px Ahem; /* computed line-height value is normal */
+}
+
+div {
+ /* doesn't use font shorthand; font-weight and font-style are inherited */
+ font-family: Ahem;
+ font-size: 25px;
+ line-height: 50px; /* the difference between computed line-height and
+ computed font-size is not divisible by 2
+ (50 - 25 = 25; 25 / 2 = 12.5). */
+}
+```
+
+[ahem-readme]: https://www.w3.org/Style/CSS/Test/Fonts/Ahem/README
diff --git a/testing/web-platform/tests/docs/writing-tests/assumptions.md b/testing/web-platform/tests/docs/writing-tests/assumptions.md
new file mode 100644
index 0000000000..5afa416121
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/assumptions.md
@@ -0,0 +1,40 @@
+# Test Assumptions
+
+The tests make a number of assumptions of the user agent, and new
+tests can freely rely on these assumptions being true:
+
+ * The device is a full-color device.
+ * The device has a viewport width of at least 800px.
+ * The UA imposes no minimum font size.
+ * The `medium` `font-size` computes to 16px.
+ * The canvas background is `white`.
+ * The initial value of `color` is `black`.
+ * The user stylesheet is empty (except where indicated by the tests).
+ * The device is interactive and uses scroll bars.
+ * The HTML `div` element is assigned `display: block;`, the
+ `unicode-bidi` property may be declared, and no other property
+ declarations.
+ <!-- unicode-bidi: isolate should be required; we currently don't
+ assume this because Chrome and Safari are yet to ship this: see
+ https://bugs.chromium.org/p/chromium/issues/detail?id=296863 and
+ https://bugs.webkit.org/show_bug.cgi?id=65617 -->
+ * The HTML `span` element is assigned `display: inline;` and no other
+ property declaration.
+ * The HTML `p` element is assigned `display: block;`
+ * The HTML `li` element is assigned `display: list-item;`
+ * The HTML `table` elements `table`, `tbody`, `tr`, and `td` are
+ assigned the `display` values `table`, `table-row-group`,
+ `table-row`, and `table-cell`, respectively.
+ * The UA implements reasonable line-breaking behavior; e.g., it is
+ assumed that spaces between alphanumeric characters provide line
+ breaking opportunities and that UAs will not break at every
+ opportunity, but only near the end of a line unless a line break is
+ forced.
+
+Tests for printing behavior make some further assumptions:
+
+ * The UA is set to print background colors and, if it supports
+ graphics, background images.
+ * The UA implements reasonable page-breaking behavior; e.g., it is
+ assumed that UAs will not break at every opportunity, but only near
+ the end of a page unless a page break is forced.
diff --git a/testing/web-platform/tests/docs/writing-tests/channels.md b/testing/web-platform/tests/docs/writing-tests/channels.md
new file mode 100644
index 0000000000..9296247fca
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/channels.md
@@ -0,0 +1,159 @@
+# Message Channels
+
+```eval_rst
+
+.. contents:: Table of Contents
+ :depth: 3
+ :local:
+ :backlinks: none
+```
+
+Message channels provide a mechanism to communicate across globals,
+including in cases where there is no client-side mechanism to
+establish a communication channel (i.e. when the globals are in
+different browsing context groups).
+
+## Markup ##
+
+```html
+<script src="/resources/channels.sub.js"></script>
+```
+
+Channels can be used in any global and are not specifically linked to
+`testharness.js`.
+
+### High Level API ###
+
+The high level API provides a way to message another global, and to
+execute functions in that global and return the result.
+
+Globals wanting to recieve messages using the high level API have to
+be loaded with a `uuid` query parameter in their URL, with a value
+that's a UUID. This will be used to identify the channel dedicated to
+messages sent to that context.
+
+The context must call either `global_channel` or
+`start_global_channel` when it's ready to receive messages. This
+returns a `RecvChannel` that can be used to add message handlers.
+
+```eval_rst
+
+.. js:autofunction:: global_channel
+ :short-name:
+.. js:autofunction:: start_global_channel
+ :short-name:
+.. js:autoclass:: RemoteGlobalCommandRecvChannel
+ :members:
+```
+
+Contexts wanting to communicate with the remote context do so using a
+`RemoteGlobal` object.
+
+```eval_rst
+
+.. js:autoclass:: RemoteGlobal
+ :members:
+```
+
+#### Remote Objects ####
+
+By default objects (e.g. script arguments) sent to the remote global
+are cloned. In order to support referencing objects owned by the
+originating global, there is a `RemoteObject` type which can pass a
+reference to an object across a channel.
+
+```eval_rst
+
+.. js:autoclass:: RemoteObject
+ :members:
+```
+
+#### Example ####
+
+test.html
+
+```html
+<!doctype html>
+<title>call example</title>
+<script src="/resources/testharness.js">
+<script src="/resources/testharnessreport.js">
+<script src="/resources/channel.js">
+
+<script>
+promise_test(async t => {
+ let remote = new RemoteGlobal();
+ window.open(`child.html?uuid=${remote.uuid}`, "_blank", "noopener");
+ let result = await remote.call(id => {
+ return document.getElementById(id).textContent;
+ }, "test");
+  assert_equals(result, "PASS");
+});
+</script>
+```
+
+child.html
+
+```html
+<script src="/resources/channel.js">
+
+<p id="nottest">FAIL</p>
+<p id="test">PASS</p>
+<script>
+start_global_channel();
+</script>
+```
+
+### Low Level API ###
+
+The high level API is implemented in terms of a channel
+abstraction. Each channel is identified by a UUID, and corresponds to
+a message queue hosted by the server. Channels are multiple producer,
+single consumer, so there's only only entity responsible for
+processing messages sent to the channel. This is designed to
+discourage race conditions where multiple consumers try to process the
+same message.
+
+On the client side, the read side of a channel is represented by a
+`RecvChannel` object, and the send side by `SendChannel`. An initial
+channel pair is created with the `channel()` function.
+
+```eval_rst
+
+.. js:autofunction:: channel
+ :members:
+.. js:autoclass:: Channel
+ :members:
+.. js:autoclass:: SendChannel
+ :members:
+.. js:autoclass:: RecvChannel
+ :members:
+```
+
+### Navigation and bfcache
+
+For specific use cases around bfcache, it's important to be able to
+ensure that no network connections (including websockets) remain open
+at the time of navigation, otherwise the page will be excluded from
+bfcache. This is handled as follows:
+
+* A `disconnectReader` method on `SendChannel`. This causes a
+ server-initiated disconnect of the corresponding `RecvChannel`
+ websocket. The idea is to allow a page to send a command that will
+ initiate a navigation, then without knowing when the navigation is
+ done, send further commands that will be processed when the
+ `RecvChannel` reconnects. If the commands are sent before the
+ navigation, but not processed, they can be buffered by the remote
+ and then lost during navigation.
+
+* A `close_all_channel_sockets()` function. This just closes all the open
+ websockets associated with channels in the global in which it's
+ called. Any channel then has to be reconnected to be used
+  again. Calling `close_all_channel_sockets()` right before navigating
+ will leave you in a state with no open websocket connections (unless
+ something happens to reopen one before the navigation starts).
+
+```eval_rst
+
+.. js:autofunction:: close_all_channel_sockets
+ :members:
+```
diff --git a/testing/web-platform/tests/docs/writing-tests/crashtest.md b/testing/web-platform/tests/docs/writing-tests/crashtest.md
new file mode 100644
index 0000000000..0166bdeb75
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/crashtest.md
@@ -0,0 +1,29 @@
+# crashtest tests
+
+Crash tests are used to ensure that a document can be loaded without
+crashing or experiencing other low-level issues that may be checked by
+implementation-specific tooling (e.g. leaks, asserts, or sanitizer
+failures).
+
+Crashtests are identified by the string `-crash` in the filename immediately
+before the extension, or by being in a directory called `crashtests`. Examples:
+
+- `css/css-foo/bar-crash.html` is a crash test
+- `css/css-foo/crashtests/bar.html` is a crash test
+- `css/css-foo/bar-crash-001.html` is **not** a crash test
+
+The simplest crashtest is a single HTML file with any content. The
+test passes if the load event is reached and the browser finishes
+painting without terminating abnormally.
+
+In some cases crashtests may need to perform work after the initial page load.
+In this case the test may specify a `class=test-wait` attribute on the root
+element. The test will not complete until that attribute is removed from the
+root. At the time when the test would otherwise have ended a `TestRendered`
+event is emitted; test authors can use this event to perform modifications that
+are guaranteed not to be batched with the initial paint. This matches the
+behaviour of [reftests](reftests).
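+
+For example, a minimal (illustrative) crashtest that defers completion
+until after some post-load work might look like the following; the DOM
+mutation is just a placeholder for whatever operation is being exercised:
+
+```html
+<!doctype html>
+<html class="test-wait">
+<title>Example crashtest doing work after load</title>
+<script>
+addEventListener("load", () => {
+  // Placeholder for the operation under test.
+  document.body.appendChild(document.createElement("canvas"));
+  // Signal completion by removing test-wait from the root element.
+  requestAnimationFrame(() =>
+      document.documentElement.classList.remove("test-wait"));
+});
+</script>
+```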
+
+Note that crash tests **do not** need to include `testharness.js` or use any of
+the [testharness API](testharness-api.md) (e.g. they do not need to declare a
+`test(..)`).
diff --git a/testing/web-platform/tests/docs/writing-tests/css-metadata.md b/testing/web-platform/tests/docs/writing-tests/css-metadata.md
new file mode 100644
index 0000000000..9d8ebeddff
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/css-metadata.md
@@ -0,0 +1,188 @@
+# CSS Metadata
+
+CSS tests have some additional metadata.
+
+### Specification Links
+
+Each test **requires** at least one link to specifications:
+
+``` html
+<link rel="help" href="RELEVANT_SPEC_SECTION" />
+```
+
+The specification link elements provide a way to align the test with
+information in the specification being tested.
+
+* Links should link to relevant sections within the specification
+* Use the anchors from the specification's Table of Contents
+* A test can have multiple specification links
+ * Always list the primary section that is being tested as the
+ first item in the list of specification links
+ * Order the list from the most used/specific to least used/specific
+ * There is no need to list common incidental features like the
+ color green if it is being used to validate the test unless the
+ case is specifically testing the color green
+* If the test is part of multiple test suites, link to the relevant
+ sections of each spec.
+
+Example 1:
+
+``` html
+<link rel="help"
+href="https://www.w3.org/TR/CSS21/text.html#alignment-prop" />
+```
+
+Example 2:
+
+``` html
+<link rel="help"
+href="https://www.w3.org/TR/CSS21/text.html#alignment-prop" />
+<link rel="help" href="https://www.w3.org/TR/CSS21/visudet.html#q7" />
+<link rel="help"
+href="https://www.w3.org/TR/CSS21/visudet.html#line-height" />
+<link rel="help"
+href="https://www.w3.org/TR/CSS21/colors.html#background-properties" />
+```
+
+### Requirement Flags
+
+If a test has any of the following requirements, a meta element can be added
+to include the corresponding flags (tokens):
+
+<table>
+<tr>
+ <th>Token</th>
+ <th>Description</th>
+</tr>
+<tr>
+ <td>asis</td>
+ <td>The test has particular markup formatting requirements and
+ cannot be re-serialized.</td>
+</tr>
+<tr>
+ <td>HTMLonly</td>
+ <td>Test case is only valid for HTML</td>
+</tr>
+<tr>
+ <td>invalid</td>
+ <td>Tests handling of invalid CSS. Note: This case contains CSS
+ properties and syntax that may not validate.</td>
+</tr>
+<tr>
+ <td>may</td>
+ <td>Behavior tested is preferred but OPTIONAL.
+ <a href="https://www.ietf.org/rfc/rfc2119.txt">[RFC2119]</a></td>
+</tr>
+<tr>
+ <td>nonHTML</td>
+ <td>Test case is only valid for formats besides HTML (e.g. XHTML
+ or arbitrary XML)</td>
+</tr>
+<tr>
+ <td>paged</td>
+ <td>Only valid for paged media</td>
+</tr>
+<tr>
+ <td>scroll</td>
+ <td>Only valid for continuous (scrolling) media</td>
+</tr>
+<tr>
+ <td>should</td>
+ <td>Behavior tested is RECOMMENDED, but not REQUIRED. <a
+ href="https://www.ietf.org/rfc/rfc2119.txt">[RFC2119]</a></td>
+</tr>
+</table>
+
+The following flags are **deprecated** and should not be declared by new tests.
+Tests which satisfy the described criteria should simply be designated as
+"manual" using [the `-manual` file name flag](file-names).
+
+<table>
+<tr>
+ <th>Token</th>
+ <th>Description</th>
+</tr>
+<tr>
+ <td>animated</td>
+ <td>Test is animated in final state. (Cannot be verified using
+ reftests/screenshots.)</td>
+</tr>
+<tr>
+ <td>font</td>
+ <td>Requires a specific font to be installed at the OS level. (A link to the
+ font to be installed must be provided; this is not needed if only web
+ fonts are used.)</td>
+</tr>
+<tr>
+ <td>history</td>
+ <td>User agent session history is required. Testing :visited is a
+ good example where this may be used.</td>
+</tr>
+<tr>
+ <td>interact</td>
+ <td>Requires human interaction (such as for testing scrolling
+ behavior)</td>
+</tr>
+<tr>
+ <td>speech</td>
+ <td>Device supports audio output. Text-to-speech (TTS) engine
+ installed</td>
+</tr>
+<tr>
+ <td>userstyle</td>
+ <td>Requires a user style sheet to be set</td>
+</tr>
+</table>
+
+
+Example 1 (one token applies):
+
+``` html
+<meta name="flags" content="invalid" />
+```
+
+Example 2 (multiple tokens apply):
+
+``` html
+<meta name="flags" content="asis HTMLonly may" />
+```
+
+### Test Assertions
+
+``` html
+<meta name="assert" content="TEST ASSERTION" />
+```
+
+This element should contain a complete detailed statement expressing
+what specifically the test is attempting to prove. If the assertion
+is only valid in certain cases, those conditions should be described
+in the statement.
+
+The assertion should not be:
+
+* A copy of the title text
+* A copy of the test verification instructions
+* A duplicate of another assertion in the test suite
+* A line or reference from the CSS specification unless that line is
+ a complete assertion when taken out of context.
+
+The test assertion is **optional**, but is highly recommended.
+It helps the reviewer understand
+the goal of the test so that they can make sure it is being
+tested correctly. Also, in case a problem is found with the test
+later, the testing method (e.g. using `color` to determine pass/fail)
+can be changed (e.g. to using `background-color`) while preserving
+the intent of the test (e.g. testing support for ID selectors).
+
+Examples of good test assertions:
+
+* "This test checks that a background image with no intrinsic size
+ covers the entire padding box."
+* "This test checks that 'word-spacing' affects each space (U+0020)
+ and non-breaking space (U+00A0)."
+* "This test checks that if 'top' and 'bottom' offsets are specified
+ on an absolutely-positioned replaced element, then any remaining
+ space is split amongst the 'auto' vertical margins."
+* "This test checks that 'text-indent' affects only the first line
+ of a block container if that line is also the first formatted line
+ of an element."
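+
+Expressed as markup, the first of these would simply be placed in the
+assert meta element shown above:
+
+``` html
+<meta name="assert" content="This test checks that a background image
+with no intrinsic size covers the entire padding box." />
+```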
diff --git a/testing/web-platform/tests/docs/writing-tests/css-user-styles.md b/testing/web-platform/tests/docs/writing-tests/css-user-styles.md
new file mode 100644
index 0000000000..9dac5af651
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/css-user-styles.md
@@ -0,0 +1,90 @@
+# CSS User Stylesheets
+
+Some tests may require a special user style sheet to be applied in order
+for the case to be verified. In order for proper indications and
+prerequisites to be displayed, every user style sheet should contain the
+following rule:
+
+``` css
+#user-stylesheet-indication
+{
+ /* Used by the harness to display an indication there is a user
+ style sheet applied */
+ display: block!important;
+}
+```
+
+The rule `#user-stylesheet-indication` is to be used by any
+harness running the test suite.
+
+A harness should identify tests that need a user style sheet by
+looking at their flags meta tag. It should then display appropriate
+messages indicating if a style sheet is applied or if a style sheet
+should not be applied.
+
+Harness style sheet rules:
+
+``` css
+.userstyle
+{
+ color: green;
+ display: none;
+}
+.nouserstyle
+{
+ color: red;
+ display: none;
+}
+```
+
+Harness userstyle flag found:
+
+``` html
+<p id="user-stylesheet-indication" class="userstyle">A user style
+sheet is applied.</p>
+```
+
+Harness userstyle flag NOT found:
+
+``` html
+<p id="user-stylesheet-indication" class="nouserstyle">A user style
+sheet is applied.</p>
+```
+
+Within the test case it is recommended that the case itself indicate
+which user style sheet is required.
+
+Examples: (code for the [`cascade.css`][cascade-css] file)
+
+``` css
+#cascade /* ID name should match user style sheet file name */
+{
+ /* Used by the test to hide the prerequisite */
+ display: none;
+}
+```
+
+The rule `#cascade` in the example above is used by the test
+page to hide the prerequisite text. The rule name should match the
+user style sheet CSS file name in order to keep this orderly.
+
+Examples: (code for [the `cascade-###.xht` files][cascade-xht])
+
+``` html
+<p id="cascade">
+ PREREQUISITE: The <a href="support/cascade.css">
+ "cascade.css"</a> file is enabled as the user agent's user style
+ sheet.
+</p>
+```
+
+The id value should match the user style sheet CSS file name and the
+user style sheet rule that is used to hide this text when the style
+sheet is properly applied.
+
+Please flag tests that require user style sheets with the userstyle
+flag so people running the tests know that a user style sheet is
+required.
+
+[cascade-css]: https://github.com/w3c/csswg-test/blob/master/css21/cascade/support/cascade.css
+[cascade-xht]: https://github.com/w3c/csswg-test/blob/master/css21/cascade/cascade-001.xht
diff --git a/testing/web-platform/tests/docs/writing-tests/file-names.md b/testing/web-platform/tests/docs/writing-tests/file-names.md
new file mode 100644
index 0000000000..96296c4ff6
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/file-names.md
@@ -0,0 +1,78 @@
+# File Name Flags
+
+The test filename is significant in determining the type of test it
+contains, and enabling specific optional features. This page documents
+the various flags available and their meaning.
+
+In some cases flags can also be set via a directory name, such that any file
+that is a (recursive) descendent of the directory inherits the flag value.
+These are individually documented for each flag that supports it.
+
+
+### Test Type
+
+These flags must be the last element in the filename before the
+extension e.g. `foo-manual.html` will indicate a manual test, but
+`foo-manual-other.html` will not. Unlike test features, test types
+are mutually exclusive.
+
+
+`-manual`
+ : Indicates that a test is a non-automated test.
+
+`-visual`
+ : Indicates that a file is a visual test.
+
+
+### Test Features
+
+These flags are preceded by a `.` in the filename, and must
+themselves precede any test type flag, but are otherwise unordered.
+
+
+`.https`
+ : Indicates that a test is loaded over HTTPS.
+
+`.h2`
+ : Indicates that a test is loaded over HTTP/2.
+
+`.www`
+ : Indicates that a test is run on the `www` subdomain.
+
+`.sub`
+ : Indicates that a test uses the [server-side substitution](server-pipes.html#sub)
+ feature.
+
+`.window`
+ : (js files only) Indicates that the file generates a test in which
+ it is run in a Window environment.
+
+`.worker`
+ : (js files only) Indicates that the file generates a test in which
+ it is run in a dedicated worker environment.
+
+`.any`
+ : (js files only) Indicates that the file generates tests in which it
+ is [run in multiple scopes](testharness).
+
+`.optional`
+ : Indicates that a test makes assertions about optional behavior in a
+ specification, typically marked by the [RFC 2119] "MAY" or "OPTIONAL"
+ keywords. This flag should not be used for "SHOULD"; such requirements
+ can be tested with regular tests, like "MUST".
+
+`.tentative`
+ : Indicates that a test makes assertions not yet required by any specification,
+ or in contradiction to some specification. This is useful when implementation
+ experience is needed to inform the specification. It should be apparent in
+ context why the test is tentative and what needs to be resolved to make it
+ non-tentative.
+
+ This flag can be enabled for an entire directory (and all its descendents),
+ by naming the directory 'tentative'. For example, every test underneath
+ 'foo/tentative/' will be considered tentative.
+
+It's preferable that `.window`, `.worker`, and `.any` are immediately followed
+by their final `.js` extension.
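+
+For example (illustrative names only), combining the flags described
+above:
+
+```
+foo.https.sub.html          HTTPS test using server-side substitution
+foo.any.js                  generates tests for multiple global scopes
+foo-drag-manual.html        manual (non-automated) test
+css/foo/tentative/bar.html  tentative by virtue of its directory name
+```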
+
+[RFC 2119]: https://tools.ietf.org/html/rfc2119
diff --git a/testing/web-platform/tests/docs/writing-tests/general-guidelines.md b/testing/web-platform/tests/docs/writing-tests/general-guidelines.md
new file mode 100644
index 0000000000..1689c064a3
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/general-guidelines.md
@@ -0,0 +1,230 @@
+# General Test Guidelines
+
+### File Paths and Names
+
+When choosing where in the directory structure to put any new tests,
+try to follow the structure of existing tests for that specification;
+if there are no existing tests, it is generally recommended to create
+subdirectories for each section.
+
+Due to path length limitations on Windows, test paths must be less
+than 150 characters relative to the test root directory (this gives
+vendors just over 100 characters for their own paths when running in
+automation).
+
+File names should generally be somewhat descriptive of what is being
+tested; very generic names like `001.html` are discouraged. A common
+format is `test-topic-001.html`, where `test-topic` is a short
+identifier that describes the test. It should avoid conjunctions,
+articles, and prepositions as it should be as concise as possible. The
+integer that follows is normally just increased incrementally, and
+padded to three digits. (If you'd end up with more than 999 tests,
+your `test-topic` is probably too broad!)
+
+The test filename is significant in enabling specific optional features, such as HTTPS
+or server-side substitution. See the documentation on [file name flags][file-name-flags]
+for more details.
+
+In the css directory, the file names should be unique within the whole
+css/ directory, regardless of where they are in the directory structure.
+
+### HTTPS
+
+By default, tests are served over plain HTTP. If a test requires HTTPS
+it must be given a filename containing `.https.` e.g.,
+`test-secure.https.html`, or be the generated service worker test of a
+`.https`-less `.any` test. For more details see the documentation on
+[file names][file-name-flags].
+
+### HTTP2
+
+If a test must be served from an HTTP/2 server, it must be given a
+filename containing `.h2`.
+
+#### Support Files
+
+Various support files are available in the directories named `/common/`,
+`/media/`, and `/css/support/`. Reusing existing resources is encouraged where
+possible, as is adding generally-useful files to these common areas rather than
+to specific test suites.
+
+
+#### Tools
+
+Sometimes you may want to add a script to the repository that's meant
+to be used from the command line, not from a browser (e.g., a script
+for generating test files). If you want to ensure (e.g., for security
+reasons) that such scripts will only be usable from the command line
+but won't be handled by the HTTP server then place them in a `tools`
+subdirectory at the appropriate level—the server will then return a
+404 if they are requested.
+
+For example, if you wanted to add a script for use with tests in the
+`notifications` directory, create the `notifications/tools`
+subdirectory and put your script there.
+
+
+### File Formats
+
+Tests are generally formatted as HTML (including XHTML) or XML (including SVG).
+Some test types support other formats:
+
+- [testharness.js tests](testharness) may be expressed as JavaScript files
+ ([the WPT server automatically generates the HTML documents for these][server
+ features])
+- [WebDriver specification tests](wdspec) are expressed as Python files
+
+The best way to determine how to format a new test is to look at how
+similar tests have been formatted. You can also ask for advice in [the
+project's matrix channel][matrix].
+
+
+### Character Encoding
+
+Except when specifically testing encoding, files must be encoded in
+UTF-8. In file formats where UTF-8 is not the default encoding, they
+must contain metadata to mark them as such (e.g., `<meta
+charset=utf-8>` in HTML files) or be pure ASCII.
+
+
+### Server Side Support
+
+The custom web server
+supports [a variety of features][server features] useful for testing
+browsers, including (but not limited to!) support for writing out
+appropriate domains and custom (per-file and per-directory) HTTP
+headers.
+
+
+### Be Short
+
+Tests should be as short as possible. For reftests in particular
+scrollbars at 800&#xD7;600px window size must be avoided unless scrolling
+behavior is specifically being tested. For all tests extraneous
+elements on the page should be avoided so it is clear what is part of
+the test (for a typical testharness test, the only content on the page
+will be rendered by the harness itself).
+
+
+### Be Conservative
+
+Tests should generally avoid depending on edge case behavior of
+features that they don't explicitly intend to test. For example,
+except where testing parsing, tests should contain
+no [parse errors](https://validator.nu).
+
+This is not, however, to discourage testing of edge cases or
+interactions between multiple features; such tests are an essential
+part of ensuring interoperability of the web platform. When possible, use the
+canonical support libraries provided by features; for more information, see the documentation on [testing interactions between features][interacting-features].
+
+Tests should pass when the feature under test exposes the expected behavior,
+and they should fail when the feature under test is not implemented or is
+implemented incorrectly. Tests should not rely on unrelated features if doing
+so causes failures in the latest stable release of [Apple
+Safari][apple-safari], [Google Chrome][google-chrome], or [Mozilla
+Firefox][mozilla-firefox]. They should, therefore, not rely on any features
+aside from the one under test unless they are supported in all three browsers.
+
+Existing tests can be used as a guide to identify acceptable features. For
+language features that are not used in existing tests, community-maintained
+projects such as [the ECMAScript compatibility tables][es-compat] and
+[caniuse.com][caniuse] provide an overview of basic feature support across the
+browsers listed above.
+
+For JavaScript code that is re-used across many tests (e.g. `testharness.js`
+and the files located in the directory named `common`), only use language
+features that have been supported by each of the major browser engines above
+for over a year. This practice avoids introducing test failures for consumers
+maintaining older JavaScript runtimes.
+
+Patches to make tests run on older versions or other browsers will be accepted
+provided they are relatively simple and do not add undue complexity to the
+test.
+
+
+### Be Cross-Platform
+
+Tests should be as cross-platform as reasonably possible, working
+across different devices, screen resolutions, paper sizes, etc. The
+assumptions that can be relied on are documented [here][assumptions];
+tests that rely on anything else should be manual tests that document
+their assumptions.
+
+Fonts cannot be relied on to be either installed or to have specific
+metrics. As such, in most cases when a known font is needed, [Ahem][ahem]
+should be used and loaded as a web font. In other cases, `@font-face`
+should be used.
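+
+For instance, a sketch of loading Ahem as a web font; the
+`/fonts/ahem.css` stylesheet path reflects common usage in the
+repository (see the [Ahem][ahem] documentation for the canonical setup):
+
+```html
+<link rel="stylesheet" type="text/css" href="/fonts/ahem.css">
+<style>
+  /* Ahem glyphs are solid squares, giving predictable metrics. */
+  .probe { font: 25px/1 Ahem; }
+</style>
+<div class="probe">XXXX</div>
+```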
+
+
+### Be Self-Contained
+
+Tests must not depend on external network resources. When these tests
+are run on CI systems, they are typically configured with access to
+external resources disabled, so tests that try to access them will
+fail. Where tests want to use multiple hosts, this is possible through
+a known set of subdomains and the [text substitution features of
+wptserve](server-features).
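+
+As a rough illustration (substitution names shown as commonly used; see
+the server features documentation for the full syntax), a `.sub.html`
+test can reference another host without hard-coding it:
+
+```html
+<!-- example.sub.html: wptserve fills in the substitutions below -->
+<script>
+  // A same-site but cross-origin URL on the www subdomain.
+  const crossOriginBase =
+      "http://{{domains[www]}}:{{ports[http][0]}}/common/blank.html";
+</script>
+```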
+
+
+### Be Self-Describing
+
+Tests should make it obvious when they pass and when they fail. It
+shouldn't be necessary to consult the specification to figure out
+whether a test has passed or failed.
+
+
+### Style Rules
+
+A number of style rules should be applied to the test file. These are
+not uniformly enforced throughout the existing tests, but will be for
+new tests. Any of these rules may be broken if the test demands it:
+
+ * No trailing whitespace
+ * Use spaces rather than tabs for indentation
+ * Use UNIX-style line endings (i.e. no CR characters at EOL)
+
+We have a lint tool for catching these and other common mistakes. You
+can run it manually by starting the `wpt` executable from the root of
+your local web-platform-tests working directory, and invoking the
+`lint` subcommand, like this:
+
+```
+./wpt lint
+```
+
+The lint tool is also run automatically for every submitted pull request,
+and reviewers will not merge branches with tests that have lint errors, so
+you must fix any errors the lint tool reports. For details on doing that,
+see the [lint-tool documentation][lint-tool].
+
+But in the unusual case of error reports for things essential to a certain
+test or that for other exceptional reasons shouldn't prevent a merge of a
+test, update and commit the `lint.ignore` file in the web-platform-tests
+root directory to suppress the error reports. For details on doing that,
+see the [lint-tool documentation][lint-tool].
+
+
+## CSS-Specific Requirements
+
+In order to be included in an official specification test suite, tests
+for CSS have some additional requirements for:
+
+* [Metadata][css-metadata], and
+* [User style sheets][css-user-styles].
+
+
+[server features]: server-features
+[assumptions]: assumptions
+[ahem]: ahem
+[matrix]: https://app.element.io/#/room/#wpt:matrix.org
+[lint-tool]: lint-tool
+[css-metadata]: css-metadata
+[css-user-styles]: css-user-styles
+[file-name-flags]: file-names
+[interacting-features]: interacting-features
+[mozilla-firefox]: https://mozilla.org/firefox
+[google-chrome]: https://google.com/chrome/browser/desktop/
+[apple-safari]: https://apple.com/safari
+[es-compat]: https://kangax.github.io/compat-table/
+[caniuse]: https://caniuse.com/
diff --git a/testing/web-platform/tests/docs/writing-tests/github-intro.md b/testing/web-platform/tests/docs/writing-tests/github-intro.md
new file mode 100644
index 0000000000..f1fc161a8a
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/github-intro.md
@@ -0,0 +1,318 @@
+# Introduction to GitHub
+
+All the basics that you need to know are documented on this page, but for the
+full GitHub documentation, visit [help.github.com][help].
+
+If you are already an experienced Git/GitHub user, all you need to know is that
+we use the [normal GitHub Pull Request workflow][github flow] for test
+submissions.
+
+If you are a first-time GitHub user, read on for more details of the workflow.
+
+## Setup
+
+1. Create a GitHub account if you do not already have one on
+ [github.com][github].
+
+2. Download and install the latest version of Git:
+ [https://git-scm.com/downloads][git]; please refer to the instructions there
+ for different platforms.
+
+3. Configure your settings so your commits are properly labeled:
+
+ On Mac or Linux or Solaris, open the Terminal.
+
+ On Windows, open Git Bash (From the Start Menu > Git > Git Bash).
+
+ At the prompt, type:
+
+ $ git config --global user.name "Your Name"
+
+ _This will be the name that is displayed with your test submissions_
+
+ Next, type:
+
+ $ git config --global user.email "your_email@address.com"
+
+ _This should be the email address you used to create the account in Step 1._
+
+4. (Optional) If you don't want to enter your username and password every
+ time you talk to the remote server, you'll need to set up password caching.
+ See [Caching your GitHub password in Git][password-caching].
+
+## Fork the test repository
+
+Now that you have Git set up, you will need to "fork" the test repository. Your
+fork will be a completely independent version of the repository, hosted on
+GitHub.com. This will enable you to [submit](#submit) your tests using a pull
+request (more on this [below](#submit)).
+
+1. In the browser, go to [web-platform-tests on GitHub][main-repo].
+
+2. Click the ![fork](../assets/forkbtn.png) button in the upper right.
+
+3. The fork will take several seconds, then you will be redirected to your
+ GitHub page for this forked repository.
+ You will now be at
+ **https://github.com/username/wpt**.
+
+4. After the fork is complete, you're ready to [clone](#clone).
+
+## Clone
+
+If your [fork](#fork) was successful, the next step is to clone (download a copy of the files).
+
+### Clone the test repository
+
+Open a command prompt in the directory where you want to keep the tests. Then
+execute the following command:
+
+ $ git clone https://github.com/username/wpt.git
+
+This will download the tests into a directory named for the repository: `wpt/`.
+
+You should now have a full copy of the test repository on your local
+machine. Feel free to browse the directories on your hard drive. You can also
+[browse them on github.com][main-repo] and see the full history of
+contributions there.
+
+## Configure Remote / Upstream
+
+Your forked repository is completely independent of the canonical repository,
+which is commonly referred to as the "upstream" repository. Synchronizing your
+forked repository with the upstream repository will keep your forked local copy
+up-to-date with the latest commits.
+
+In the vast majority of cases, the **only** upstream branch that you should
+need to care about is `master`. If you see other branches in the repository,
+you can generally safely ignore them.
+
+1. On the command line, navigate to the directory where your forked copy of
+ the repository is located.
+
+2. Make sure that you are on the master branch. This will be the case if you
+ just forked, otherwise switch to master.
+
+ $ git checkout master
+
+3. Next, add a remote for the repository you forked from. This assigns the
+ original repository to a remote called "upstream":
+
+ $ git remote add upstream https://github.com/web-platform-tests/wpt.git
+
+4. To pull in changes in the original repository that are not present in your
+ local repository first fetch them:
+
+ $ git fetch -p upstream
+
+ Then merge them into your local repository:
+
+ $ git merge upstream/master
+
+ We recommend using `-p` to "prune" the outdated branches that would
+ otherwise accumulate in your local repository.
+
+For additional information, please see the [GitHub docs][github-fork-docs].
+
+## Configure your environment
+
+If all you intend to do is to load [manual tests](../writing-tests/manual) or [reftests](../writing-tests/reftests) from your local file system,
+the above setup should be sufficient.
+But many tests (and in particular, all [testharness.js tests](../writing-tests/testharness)) require a local web server.
+
+See [Local Setup][local-setup] for more information.
+
+## Branch
+
+Now that you have everything locally, create a branch for your tests.
+
+_Note: If you have already been through these steps and created a branch
+and now want to create another branch, you should always do so from the
+master branch. To do this follow the steps from the beginning of the [previous
+section](#configure-remote-upstream). If you don't start with a clean master
+branch you will end up with a big nested mess._
+
+At the command line:
+
+ $ git checkout -b topic
+
+This will create a branch named `topic` and immediately
+switch this to be your active working branch.
+
+The branch name should describe specifically what you are testing. For example:
+
+ $ git checkout -b flexbox-flex-direction-prop
+
+You're ready to start writing tests! Come back to this page when you're ready to
+[commit](#commit) them or [submit](#submit) them for review.
+
+
+## Commit
+
+Before you submit your tests for review and contribution to the main test
+repository, you'll need to first commit them locally, where you now have your
+own personal version control system with git. In fact, as you are writing your
+tests, you may want to save versions of your work as you go before you submit
+them to be reviewed and merged.
+
+1. When you're ready to save a version of your work, open a command
+ prompt and change to the directory where your files are.
+
+2. First, ask git what new or modified files you have:
+
+ $ git status
+
+ _This will show you files that have been added or modified_.
+
+3. For all new or modified files, you need to tell git to add them to the
+ list of things you'd like to commit:
+
+ $ git add [file1] [file2] ... [fileN]
+
+ Or:
+
+ $ git add [directory_of_files]
+
+4. Run `git status` again to see what you have on the 'Changes to be
+ committed' list. These files are now 'staged'. Alternatively, you can run
+ `git diff --staged` to see a visual representation of the changes to be
+ committed.
+
+5. Once you've added everything, you can commit and add a message to this
+ set of changes:
+
+ $ git commit -m "Tests for indexed getters in the HTMLExampleInterface"
+
+6. Repeat these steps as many times as you'd like before you submit.
+
+## Verify
+
+The Web Platform Test project has an automated tool
+to verify that coding conventions have been followed,
+and to catch a number of common mistakes.
+
+We recommend running this tool locally. That will help you discover and fix
+issues that would make it hard for us to accept your contribution.
+
+1. On the command line, navigate to the directory where your clone
+of the repository is located.
+
+2. Run `./wpt lint`
+
+3. Fix any mistake it reports and [commit](#commit) again.
+
+For more details, see the [documentation about the lint tool](../writing-tests/lint-tool).
+
+## Submit
+
+If you're here now looking for more instructions, that means you've written
+some awesome tests and are ready to submit them. Congratulations and welcome
+back!
+
+1. The first thing you do before submitting them to the web-platform-tests
+ repository is to push them back up to your fork:
+
+ $ git push origin topic
+
+   _Note: Here,_ `origin` _refers to the remote repository from which you
+   cloned (downloaded) the files after you forked, referred to as
+   wpt.git in the previous example;_
+ `topic` _refers to the name of your local branch that
+ you want to share_.
+
+2. Now you can send a message that you have changes or additions you'd like
+ to be reviewed and merged into the main (original) test repository. You do
+ this by creating a pull request. In a browser, open the GitHub page for
+ your forked repository: **https://github.com/username/wpt**.
+
+3. Now create the pull request. There are several ways to create a PR in the
+GitHub UI. Below is one method; others can be found on
+[GitHub.com][github-createpr].
+
+ 1. Click the ![new pull request](../assets/pullrequestbtn.png) button.
+
+   2. On the left, you should see that the base repository is
+ web-platform-tests/wpt. On the right, you should see your fork of that
+ repository. In the branch menu of your forked repository, switch to `topic`
+
+ If you see "There isn't anything to compare", make sure your fork and
+      your `topic` branch are selected on the right side.
+
+ 3. Select the ![create pull request](../assets/createpr.png) button at the top.
+
+ 4. Scroll down and review the summary of changes.
+
+ 5. Scroll back up and in the Title field, enter a brief description for
+ your submission.
+
+ Example: "Tests for CSS Transforms skew() function."
+
+ 6. If you'd like to add more detailed comments, use the comment field
+ below.
+
+ 7. Click ![the create pull request button](../assets/createpr.png)
+
+
+4. Wait for feedback on your pull request and once your pull request is
+accepted, delete your branch (see '[When Pull Request is Accepted](#cleanup)').
+
+[This page on the submissions process](submission-process) has more detail
+about what to expect when contributing code to WPT.
+
+## Refine
+
+Once you submit your pull request, a reviewer will check your proposed changes
+for correctness and style. They may ask you to modify your code. When you are
+ready to make the changes, follow these steps:
+
+1. Check out the branch corresponding to your changes e.g. if your branch was
+ called `topic`
+ run:
+
+ $ git checkout topic
+
+2. Make the changes needed to address the comments, and commit them just like
+ before.
+
+3. Push the changes to the remote branch containing the pull request:
+
+ $ git push origin topic
+
+4. The pull request will automatically be updated with the new commit.
+
+Sometimes it takes multiple iterations through a review before the changes are
+finally accepted. Don't worry about this; it's totally normal. The goal of test
+review is to work together to create the best possible set of tests for the web
+platform.
+
+## Cleanup
+Once your pull request has been accepted, you will be notified in the GitHub
+user interface, and you may get an email. At this point, your changes have been merged
+into the main test repository. You do not need to take any further action
+on the test but you should delete your branch. This can easily be done in
+the GitHub user interface by navigating to the pull request and clicking the
+"Delete Branch" button.
+
+![pull request accepted delete branch](../assets/praccepteddelete.png)
+
+Alternatively, you can delete the branch on the command line.
+
+ $ git push origin --delete <branchName>
+
+## Further Reading
+
+Git is a very powerful tool, and there are many ways to achieve subtly
+different results. Recognizing when (and understanding how) to use other
+approaches is beyond the scope of this tutorial. [The Pro Git Book][git-book]
+is a free digital resource that can help you learn more.
+
+[local-setup]: ../running-tests/from-local-system
+[git]: https://git-scm.com/downloads
+[git-book]: https://git-scm.com/book
+[github]: https://github.com/
+[github-fork-docs]: https://help.github.com/articles/fork-a-repo
+[github-createpr]: https://help.github.com/articles/creating-a-pull-request
+[help]: https://help.github.com/
+[main-repo]: https://github.com/web-platform-tests/wpt
+[password-caching]: https://help.github.com/articles/caching-your-github-password-in-git
+[github flow]: https://guides.github.com/introduction/flow/
diff --git a/testing/web-platform/tests/docs/writing-tests/h2tests.md b/testing/web-platform/tests/docs/writing-tests/h2tests.md
new file mode 100644
index 0000000000..7745dca55d
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/h2tests.md
@@ -0,0 +1,152 @@
+# Writing H2 Tests
+
+These instructions assume you are already familiar with the testing
+infrastructure and know how to write a standard HTTP/1.1 test.
+
+On top of the standard `main` handler that the H1 server offers, the
+H2 server also offers support for specific frame handlers in the Python
+scripts. Currently there is support for `handle_headers` and `handle_data`.
+Unlike the `main` handler, these are run whenever the server receives a
+HEADERS frame (RequestReceived event) or a DATA frame (DataReceived event).
+`main` can still be used, but it will be run after the server has received
+the request in its entirety.
+
+Here is what a Python script for a test might look like:
+```python
+def handle_headers(frame, request, response):
+ if request.headers["test"] == "pass":
+ response.status = 200
+ response.headers.update([('test', 'passed')])
+ response.write_status_headers()
+ else:
+ response.status = 403
+ response.headers.update([('test', 'failed')])
+ response.write_status_headers()
+ response.writer.end_stream()
+
+def handle_data(frame, request, response):
+ response.writer.write_data(frame.data[::-1])
+
+def main(request, response):
+ response.writer.write_data('\nEnd of File', last=True)
+```
+
+The above script is fairly simple:
+1. Upon receiving the HEADERS frame, `handle_headers` is run.
+ - This checks for a header called 'test' and checks if it is set to 'pass'.
+ If true, it will immediately send a response header, otherwise it responds
+ with a 403 and ends the stream.
+2. Any DATA frames received will then be handled by `handle_data`. This will
+simply reverse the data and send it back.
+3. Once the request has been fully received, `main` is run which will send
+one last DATA frame and signal the end of the stream.
+
+## Response Writer API ##
+
+The H2Response API is pretty much the same as the H1 variant; the main API
+difference lies in the H2ResponseWriter, which is accessed through `response.writer`.
+
+---
+
+#### `write_headers(self, headers, status_code, status_message=None, stream_id=None, last=False):`
+Write a HEADER frame using the H2 Connection object; this will only work if the
+stream is in a state to send HEADER frames. This will automatically format
+the headers so that pseudo headers are at the start of the list and correctly
+prefixed with ':'. Since this uses the H2 Connection object, it requires that
+the stream is in the correct state to be sending this frame.
+
+> <b>Note</b>: Will raise ProtocolErrors if pseudo headers are missing.
+
+- <b>Parameters</b>
+
+ - <b>headers</b>: List of (header, value) tuples
+ - <b>status_code</b>: The HTTP status code of the response
+ - <b>stream_id</b>: Id of stream to send frame on. Will use the request stream ID if None
+ - <b>last</b>: Flag to signal if this is the last frame in stream.
+
+---
+
+#### `write_data(self, item, last=False, stream_id=None):`
+Write a DATA frame using the H2 Connection object; this will only work if the
+stream is in a state to send DATA frames. Uses flow control to split data
+into multiple data frames if it exceeds the size that can be in a single frame.
+Since this uses the H2 Connection object, it requires that the stream is in
+the correct state to be sending this frame.
+
+- <b>Parameters</b>
+
+ - <b>item</b>: The content of the DATA frame
+ - <b>last</b>: Flag to signal if this is the last frame in stream.
+ - <b>stream_id</b>: Id of stream to send frame on. Will use the request stream ID if None
+
+---
+
+#### `write_push(self, promise_headers, push_stream_id=None, status=None, response_headers=None, response_data=None):`
+This will write a push promise to the request stream. If you do not provide
+headers and data for the response, then no response will be pushed, and you
+should send them yourself using the ID returned from this function.
+
+- <b>Parameters</b>
+ - <b>promise_headers</b>: A list of header tuples that matches what the client would use to
+ request the pushed response
+ - <b>push_stream_id</b>: The ID of the stream the response should be pushed to. If none given, will
+ use the next available id.
+ - <b>status</b>: The status code of the response, REQUIRED if response_headers given
+ - <b>response_headers</b>: The headers of the response
+ - <b>response_data</b>: The response data.
+
+- <b>Returns</b>: The ID of the push stream
+
+---
+
+#### `write_raw_header_frame(self, headers, stream_id=None, end_stream=False, end_headers=False, frame_cls=HeadersFrame):`
+Unlike `write_headers`, this does not check to see if a stream is in the
+correct state to have HEADER frames sent through to it. It also won't force
+the order of the headers or make sure pseudo headers are prefixed with ':'.
+It will build a HEADER frame and send it without using the H2 Connection
+object other than to HPACK encode the headers.
+
+> <b>Note</b>: The `frame_cls` parameter is so that this class can be reused
+by `write_raw_continuation_frame`, as their construction is identical.
+
+- <b>Parameters</b>
+ - <b>headers</b>: List of (header, value) tuples
+ - <b>stream_id</b>: Id of stream to send frame on. Will use the request stream ID if None
+ - <b>end_stream</b>: Set to `True` to add END_STREAM flag to frame
+ - <b>end_headers</b>: Set to `True` to add END_HEADERS flag to frame
+
+---
+
+#### `write_raw_data_frame(self, data, stream_id=None, end_stream=False):`
+Unlike `write_data`, this does not check to see if a stream is in the correct
+state to have DATA frames sent through to it. It will build a DATA frame and
+send it without using the H2 Connection object. It will not perform any flow control checks.
+
+- <b>Parameters</b>
+ - <b>data</b>: The data to be sent in the frame
+ - <b>stream_id</b>: Id of stream to send frame on. Will use the request stream ID if None
+ - <b>end_stream</b>: Set to True to add END_STREAM flag to frame
+
+---
+
+#### `write_raw_continuation_frame(self, headers, stream_id=None, end_headers=False):`
+This provides the ability to create and write a CONTINUATION frame to the
+stream, which is not exposed by `write_headers` as the h2 library handles
+the split between HEADER and CONTINUATION internally. Will perform HPACK
+encoding on the headers. It also ignores the state of the stream.
+
+This calls `write_raw_header_frame` with `frame_cls=ContinuationFrame` since
+the HEADER and CONTINUATION frames are constructed in the same way.
+
+- <b>Parameters</b>:
+ - <b>headers</b>: List of (header, value) tuples
+ - <b>stream_id</b>: Id of stream to send frame on. Will use the request stream ID if None
+ - <b>end_headers</b>: Set to True to add END_HEADERS flag to frame
+
+---
+
+#### `end_stream(self, stream_id=None):`
+Ends the stream with the given ID, or the one that request was made on if no ID given.
+
+- <b>Parameters</b>
+ - <b>stream_id</b>: Id of stream to send frame on. Will use the request stream ID if None
diff --git a/testing/web-platform/tests/docs/writing-tests/idlharness.md b/testing/web-platform/tests/docs/writing-tests/idlharness.md
new file mode 100644
index 0000000000..e2abce0a48
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/idlharness.md
@@ -0,0 +1,101 @@
+# IDL Tests (idlharness.js)
+
+## Introduction ##
+
+`idlharness.js` generates tests for Web IDL fragments, using the
+[JavaScript Tests (`testharness.js`)](testharness.md) infrastructure. You typically want to use
+`.any.js` or `.window.js` for this to avoid having to write unnecessary boilerplate.
+
+## Adding IDL fragments
+
+Web IDL is automatically scraped from specifications and added to the `/interfaces/` directory. See
+the [README](https://github.com/web-platform-tests/wpt/blob/master/interfaces/README.md) there for
+details.
+
+## Testing IDL fragments
+
+For example, the Fetch API's IDL is tested in
+[`/fetch/api/idlharness.any.js`](https://github.com/web-platform-tests/wpt/blob/master/fetch/api/idlharness.any.js):
+```js
+// META: global=window,worker
+// META: script=/resources/WebIDLParser.js
+// META: script=/resources/idlharness.js
+// META: timeout=long
+
+idl_test(
+ ['fetch'],
+ ['referrer-policy', 'html', 'dom'],
+ idl_array => {
+ idl_array.add_objects({
+ Headers: ["new Headers()"],
+ Request: ["new Request('about:blank')"],
+ Response: ["new Response()"],
+ });
+ if (self.GLOBAL.isWindow()) {
+ idl_array.add_objects({ Window: ['window'] });
+ } else if (self.GLOBAL.isWorker()) {
+ idl_array.add_objects({ WorkerGlobalScope: ['self'] });
+ }
+ }
+);
+```
+Note how it includes `/resources/WebIDLParser.js` and `/resources/idlharness.js` in addition to
+`testharness.js` and `testharnessreport.js` (automatically included due to usage of `.any.js`).
+These are needed to make the `idl_test` function work.
+
+The `idl_test` function takes three arguments:
+
+* _srcs_: a list of specifications whose IDL you want to test. The names here need to match the filenames (excluding the extension) in `/interfaces/`.
+* _deps_: a list of specifications the IDL listed in _srcs_ depends upon. Be careful to list them in the order that the dependencies are revealed.
+* _setup_func_: a function or async function that takes care of creating the various objects that you want to test.
+
+## Methods of `IdlArray` ##
+
+`IdlArray` objects can be obtained through the _setup_func_ argument of `idl_test`. Anything not
+documented in this section should be considered an implementation detail, and outside callers should
+not use it.
+
+### `add_objects(dict)`
+
+_dict_ should be an object whose keys are the names of interfaces or exceptions, and whose values
+are arrays of strings. When an interface or exception is tested, every string registered for it
+with `add_objects()` will be evaluated, and tests will be run on the result to verify that it
+correctly implements that interface or exception. This is the only way to test anything about
+`[LegacyNoInterfaceObject]` interfaces, and there are many tests that can't be run on any interface
+without an object to fiddle with.
+
+The interface has to be the *primary* interface of all the objects provided. For example, don't
+pass `{Node: ["document"]}`, but rather `{Document: ["document"]}`. Assuming the `Document`
+interface was declared to inherit from `Node`, this will automatically test that document implements
+the `Node` interface too.
+
+Warning: methods will be called on any provided objects, in a manner that WebIDL requires be safe.
+For instance, if a method has mandatory arguments, the test suite will try calling it with too few
+arguments to see if it throws an exception. If an implementation incorrectly runs the function
+instead of throwing, this might have side effects, possibly even preventing the test suite from
+running correctly.
+
+### `prevent_multiple_testing(name)`
+
+This is a niche method for use in case you're testing many objects that implement the same
+interfaces, and don't want to retest the same interfaces every single time. For instance, HTML
+defines many interfaces that all inherit from `HTMLElement`, so the HTML test suite has something
+like
+
+```js
+.add_objects({
+ HTMLHtmlElement: ['document.documentElement'],
+ HTMLHeadElement: ['document.head'],
+ HTMLBodyElement: ['document.body'],
+ ...
+})
+```
+
+and so on for dozens of element types. This would mean that it would retest that each and every one
+of those elements implements `HTMLElement`, `Element`, and `Node`, which would be thousands of
+basically redundant tests. The test suite therefore calls `prevent_multiple_testing("HTMLElement")`.
+This means that once one object has been tested to implement `HTMLElement` and its ancestors, no
+other object will be. Thus in the example code above, the harness would test that
+`document.documentElement` correctly implements `HTMLHtmlElement`, `HTMLElement`, `Element`, and
+`Node`; but `document.head` would only be tested for `HTMLHeadElement`, and so on for further
+objects.
diff --git a/testing/web-platform/tests/docs/writing-tests/index.md b/testing/web-platform/tests/docs/writing-tests/index.md
new file mode 100644
index 0000000000..e5739c9a6e
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/index.md
@@ -0,0 +1,91 @@
+# Writing Tests
+
+So you'd like to write new tests for WPT? Great! For starters, we recommend
+reading [the introduction](../index) to learn how the tests are organized and
+interpreted. You might already have an idea about what needs testing, but it's
+okay if you don't know where to begin. In either case, [the guide on making a
+testing plan](making-a-testing-plan) will help you decide what to write.
+
+There's also a load of [general guidelines](general-guidelines) that apply to all tests.
+
+## Test Types
+
+There are various different ways of writing tests:
+
+* [JavaScript tests (testharness.js)](testharness) are preferred for testing APIs and may be used
+ for other features too. They are built with the testharness.js unit testing framework, and consist
+ of assertions written in JavaScript. A high-level [testharness.js tutorial](testharness-tutorial)
+ is available.
+
+* Rendering tests should be used to verify that the browser graphically
+ displays pages as expected. See the [rendering test guidelines](rendering)
+ for tips on how to write great rendering tests. There are a few different
+ ways to write rendering tests:
+
+ * [Reftests](reftests) should be used to test rendering and layout. They
+ consist of two or more pages with assertions as to whether they render
+ identically or not. A high-level [reftest tutorial](reftest-tutorial) is available. A
+ [print reftests](print-reftests) variant is available too.
+
+ * [Visual tests](visual) should be used for checking rendering where there is
+ a large number of conforming renderings such that reftests are impractical.
+ They consist of a page that renders to final state at which point a
+ screenshot can be taken and compared to an expected rendering for that user
+ agent on that platform.
+
+* [Crashtests](crashtest) are used to check that the browser is
+ able to load a given document without crashing or experiencing other
+ low-level issues (asserts, leaks, etc.). They pass if the load
+ completes without error.
+
+* [wdspec](wdspec) tests are written in Python using
+ [pytest](https://docs.pytest.org/en/latest/) and test [the WebDriver browser
+ automation protocol](https://w3c.github.io/webdriver/)
+
+* [Manual tests](manual) are used as a last resort for anything that can't be
+ tested using any of the above. They consist of a page that needs manual
+ interaction or verification of the final result.
+
+See [file names](file-names) for test types and features determined by the file names,
+and [server features](server-features) for advanced testing features.
+
+## Submitting Tests
+
+Once you've written tests, please submit them using
+the [typical GitHub Pull Request workflow](submission-process); please
+make sure you run the [`lint` script](lint-tool) before opening a pull request!
+
+## Table of Contents
+
+```eval_rst
+.. toctree::
+ :maxdepth: 1
+
+ general-guidelines
+ making-a-testing-plan
+ testharness
+ testharness-tutorial
+ rendering
+ reftests
+ reftest-tutorial
+ print-reftests
+ visual
+ crashtest
+ wdspec
+ manual
+ file-names
+ server-features
+ submission-process
+ lint-tool
+ ahem
+ assumptions
+ css-metadata
+ css-user-styles
+ h2tests
+ testdriver
+ testdriver-extension-tutorial
+ tools
+ test-templates
+ github-intro
+ ../tools/webtransport/README.md
+```
diff --git a/testing/web-platform/tests/docs/writing-tests/interacting-features.md b/testing/web-platform/tests/docs/writing-tests/interacting-features.md
new file mode 100644
index 0000000000..b8c6ce3895
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/interacting-features.md
@@ -0,0 +1,25 @@
+# Testing interactions between features
+
+Web platform features do not exist in isolation, and it is often necessary to test the interaction between two features.
+To support this, many directories contain libraries which are intended to be used from other directories.
+
+These are not WPT server features, but are canonical usage of one feature intended for other features to test against.
+This allows the tests for a feature to be decoupled as much as possible from the specifics of another feature which it should integrate with.
+
+## Web Platform Feature Testing Support Libraries
+
+### Common
+
+There are several useful utilities in the `/common/` directory.
+
+### Cookies
+
+Features which need to test their interaction with cookies can use the scripts in `cookies/resources` to control which cookies are set on a given request.
+
+### Permissions Policy
+
+Features which integrate with Permissions Policy can make use of the `permissions-policy.js` support library to generate a set of tests for that integration.
+
+### Reporting
+
+Testing integration with the Reporting API can be done with the help of the common report collector. This service will collect reports sent from tests and provides an API to retrieve them. See documentation at `reporting/resources/README.md`.
diff --git a/testing/web-platform/tests/docs/writing-tests/lint-tool.md b/testing/web-platform/tests/docs/writing-tests/lint-tool.md
new file mode 100644
index 0000000000..95f8b57415
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/lint-tool.md
@@ -0,0 +1,78 @@
+# Lint Tool
+
+We have a lint tool for catching common mistakes in test files. You can run
+it manually by running the `wpt lint` command from the root of your local
+web-platform-tests working directory like this:
+
+```
+./wpt lint
+```
+
+The lint tool is also run automatically for every submitted pull request,
+and reviewers will not merge branches with tests that have lint errors, so
+you must either [fix all lint errors](#fixing-lint-errors), or you must
+[add an exception](#updating-the-ignored-files) to suppress the errors.
+
+## Fixing lint errors
+
+You must fix any errors the lint tool reports, unless an error is for something
+essential to a certain test or that for some other exceptional reason shouldn't
+prevent the test from being merged; in those cases you can [add an
+exception](#updating-the-ignored-files) to suppress the errors. In all other
+cases, follow the instructions below to fix all errors reported.
+
+<!--
+ This listing is automatically generated from the linting tool's Python source
+ code.
+-->
+
+```eval_rst
+.. wpt-lint-rules:: tools.lint.rules
+```
+
+## Updating the ignored files
+
+Normally you must [fix all lint errors](#fixing-lint-errors). But in the
+unusual case of error reports for things essential to certain tests or that
+for other exceptional reasons shouldn't prevent a merge of a test, you can
+update and commit the `lint.ignore` file in the web-platform-tests root
+directory to suppress errors the lint tool would report for a test file.
+
+To add a test file or directory to the list, use the following format:
+
+```
+ERROR TYPE:file/name/pattern
+```
+
+For example, to ignore all `TRAILING WHITESPACE` errors in the file
+`example/file.html`, add the following line to the `lint.ignore` file:
+
+```
+TRAILING WHITESPACE:example/file.html
+```
+
+To ignore errors for an entire directory rather than just one file, use the `*`
+wildcard. For example, to ignore all `TRAILING WHITESPACE` errors in the
+`example` directory, add the following line to the `lint.ignore` file:
+
+```
+TRAILING WHITESPACE:example/*
+```
+
+Similarly, you can also
+use
+[shell-style wildcards](https://docs.python.org/library/fnmatch.html) to
+express other filename patterns or directory-name patterns.
+
+Finally, to ignore just one line in a file, use the following format:
+
+```
+ERROR TYPE:file/name/pattern:line_number
+```
+
+For example, to ignore the `TRAILING WHITESPACE` error for just line 128 of the
+file `example/file.html`, add the following to the `lint.ignore` file:
+
+```
+TRAILING WHITESPACE:example/file.html:128
+```
diff --git a/testing/web-platform/tests/docs/writing-tests/making-a-testing-plan.md b/testing/web-platform/tests/docs/writing-tests/making-a-testing-plan.md
new file mode 100644
index 0000000000..a4007039ae
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/making-a-testing-plan.md
@@ -0,0 +1,540 @@
+# Making a Testing Plan
+
+When contributing to a project as large and open-ended as WPT, it's easy to get
+lost in the details. It can be helpful to start by making a rough list of tests
+you intend to write. That plan will let you anticipate how much work will be
+involved, and it will help you stay focused once you begin.
+
+Many people come to WPT with a general testing goal in mind:
+
+- specification authors often want to test for new spec text
+- browser maintainers often want to test new features or fixes to existing
+ features
+- web developers often want to test discrepancies between browsers on their web
+ applications
+
+(If you don't have any particular goal, we can help you get started. Check out
+[the issues labeled with `type:missing-coverage` on
+GitHub.com](https://github.com/web-platform-tests/wpt/labels/type%3Amissing-coverage).
+Leave a comment if you'd like to get started with one, and don't hesitate to
+ask clarifying questions!)
+
+This guide will help you write a testing plan by:
+
+1. showing you how to use the specifications to learn what kinds of tests will
+ be most helpful
+2. developing your sense for what *doesn't* need to be tested
+3. demonstrating methods for figuring out which tests (if any) have already
+ been written for WPT
+
+The level of detail in useful testing plans can vary widely. From [a list of
+specific
+cases](https://github.com/web-platform-tests/wpt/issues/6980#issue-252255894),
+to [an outline of important coverage
+areas](https://github.com/web-platform-tests/wpt/issues/18549#issuecomment-522631537),
+to [an annotated version of the specification under
+test](https://rwaldron.github.io/webrtc-pc/), the appropriate fidelity depends
+on your needs, so you can be as precise as you feel is helpful.
+
+## Understanding the "testing surface"
+
+Web platform specifications are instructions about how a feature should work.
+They're critical for implementers to "build the right thing," but they are also
+important for anyone writing tests. We can use the same instructions to infer
+what kinds of tests would be likely to detect mistakes. Here are a few common
+patterns in specification text and the kind of tests they suggest.
+
+### Input sources
+
+Algorithms may accept input from many sources. Modifying the input is the most
+direct way we can influence the browser's behavior and verify that it matches
+the specifications. That's why it's helpful to be able to recognize different
+sources of input.
+
+```eval_rst
+================ ==============================================================
+Type of feature Potential input sources
+================ ==============================================================
+JavaScript parameters, `context object <https://dom.spec.whatwg.org/#context-object>`_
+HTML element content, attributes, attribute values
+CSS selector strings, property values, markup
+================ ==============================================================
+```
+
+Determine which input sources are relevant for your chosen feature, and build a
+list of values which seem worthwhile to test (keep reading for advice on
+identifying worthwhile values). For features that accept multiple sources of
+input, remember that the interaction between values can often produce
+interesting results. Every value you identify should go into your testing plan.
+
+*Example:* This is the first step of the `Notification` constructor from [the
+Notifications standard](https://notifications.spec.whatwg.org/#constructors):
+
+> The Notification(title, options) constructor, when invoked, must run these steps:
+>
+> 1. If the [current global
+> object](https://html.spec.whatwg.org/multipage/webappapis.html#current-global-object)
+> is a
+> [ServiceWorkerGlobalScope](https://w3c.github.io/ServiceWorker/#serviceworkerglobalscope)
+> object, then [throw](https://webidl.spec.whatwg.org/#dfn-throw) a
+> `TypeError` exception.
+> 2. Let *notification* be the result of [creating a
+> notification](https://notifications.spec.whatwg.org/#create-a-notification)
+> given *title* and *options*. Rethrow any exceptions.
+>
+> [...]
+
+A thorough test suite for this constructor will include tests for the behavior
+of many different values of the *title* parameter and the *options* parameter.
+Choosing those values can be a challenge unto itself--see [Avoid Excessive
+Breadth](#avoid-excessive-breadth) for advice.
+
+### Browser state
+
+The state of the browser may also influence algorithm behavior. Examples
+include the current document, the dimensions of the viewport, and the entries
+in the browsing history. Just like with direct input, a thorough set of tests
+will likely need to control these values. Browser state is often more expensive
+to manipulate (whether in terms of code, execution time, or system resources),
+and you may want to design your tests to mitigate these costs (e.g. by writing
+many subtests from the same state).
+
+You may not be able to control all relevant aspects of the browser's state.
+[The `type:untestable`
+label](https://github.com/web-platform-tests/wpt/issues?q=is%3Aopen+is%3Aissue+label%3Atype%3Auntestable)
+includes issues for web platform features which cannot be controlled in a
+cross-browser way. You should include tests like these in your plan both to
+communicate your intention and to remind you when/if testing solutions become
+available.
+
+*Example:* In [the `Notification` constructor referenced
+above](https://notifications.spec.whatwg.org/#constructors), the type of "the
+current global object" is also a form of input. The test suite should include
+tests which execute with different types of global objects.
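+
+One way WPT lets a single test run against several global objects is the
+multi-global (`.any.js`) file format. The sketch below is only illustrative;
+the file name, the chosen globals, and the assertions are assumptions rather
+than anything taken from the specification text above:
+
+```js
+// notification-constructor.https.any.js (hypothetical file name)
+// META: global=window,serviceworker
+test(() => {
+  const inServiceWorker = typeof ServiceWorkerGlobalScope !== "undefined" &&
+      self instanceof ServiceWorkerGlobalScope;
+  if (inServiceWorker) {
+    // Step 1: the constructor must throw in a ServiceWorkerGlobalScope.
+    assert_throws_js(TypeError, () => new Notification("title"));
+  } else {
+    // Elsewhere, step 1 does not apply and construction proceeds.
+    assert_equals(new Notification("title").title, "title");
+  }
+}, "Notification constructor and the current global object");
+```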
+
+### Branches
+
+When an algorithm branches based on some condition, that's an indication of an
+interesting behavior that might be missed. Your testing plan should have at
+least one test that verifies the behavior when the branch is taken and at least
+one more test that verifies the behavior when the branch is *not* taken.
+
+*Example:* The following algorithm from [the HTML
+standard](https://html.spec.whatwg.org/) describes how the
+`localStorage.getItem` method works:
+
+> The `getItem`(*key*) method must return the current value associated with the
+> given *key*. If the given *key* does not exist in the list associated with
+> the object then this method must return null.
+
+This algorithm exhibits different behavior depending on whether or not an item
+exists at the provided key. To test this thoroughly, we would write two tests:
+one test would verify that `null` is returned when there is no item at the
+provided key, and the other test would verify that an item we previously stored
+was correctly retrieved when we called the method with its name.
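+
+As a minimal sketch using [testharness.js](./testharness) (the key names below
+are arbitrary), those two tests could look like this:
+
+```html
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>localStorage.getItem: missing and present keys</title>
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<script>
+// One test for each branch of the algorithm quoted above.
+test(() => {
+  localStorage.removeItem("missing-key");
+  assert_equals(localStorage.getItem("missing-key"), null);
+}, "getItem returns null when no item exists for the given key");
+
+test(() => {
+  localStorage.setItem("present-key", "stored value");
+  assert_equals(localStorage.getItem("present-key"), "stored value");
+}, "getItem returns the value previously stored for the given key");
+</script>
+```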
+
+### Sequence
+
+Even without branching, the interplay between sequential algorithm steps can
+suggest interesting test cases. If two steps have observable side-effects, then
+it can be useful to verify they happen in the correct order.
+
+Most of the time, step sequence is implicit in the nature of the
+algorithm--each step operates on the result of the step that precedes it, so
+verifying the end result implicitly verifies the sequence of the steps. But
+sometimes, the order of two steps isn't particularly relevant to the result of
+the overall algorithm. This makes it easier for implementations to diverge.
+
+There are many common patterns where step sequence is observable but not
+necessarily inherent to the correctness of the algorithm:
+
+- input validation (when an algorithm verifies that two or more input values
+ satisfy some criteria)
+- event dispatch (when an algorithm
+ [fires](https://dom.spec.whatwg.org/#concept-event-fire) two or more events)
+- object property access (when an algorithm retrieves two or more property
+ values from an object provided as input)
+
+*Example:* The following text is an abbreviated excerpt of the algorithm that
+runs during drag operations (from [the HTML
+specification](https://html.spec.whatwg.org/multipage/dnd.html#dnd)):
+
+> [...]
+> 4. Otherwise, if the user ended the drag-and-drop operation (e.g. by
+> releasing the mouse button in a mouse-driven drag-and-drop interface), or
+> if the `drag` event was canceled, then this will be the last iteration.
+> Run the following steps, then stop the drag-and-drop operation:
+> 1. If the [current drag
+> operation](https://html.spec.whatwg.org/multipage/dnd.html#current-drag-operation)
+> is "`none`" (no drag operation) [...] Otherwise, the drag operation
+> might be a success; run these substeps:
+> 1. Let *dropped* be true.
+> 2. If the [current target
+> element](https://html.spec.whatwg.org/multipage/dnd.html#current-target-element)
+> is a DOM element, [fire a DND
+> event](https://html.spec.whatwg.org/multipage/dnd.html#fire-a-dnd-event)
+> named `drop` at it; otherwise, use platform-specific conventions for
+> indicating a drop.
+> 3. [...]
+> 2. [Fire a DND
+> event](https://html.spec.whatwg.org/multipage/dnd.html#fire-a-dnd-event)
+> named `dragend` at the [source
+> node](https://html.spec.whatwg.org/multipage/dnd.html#source-node).
+> 3. [...]
+
+A thorough test suite will verify that the `drop` event is fired as specified,
+and it will also verify that the `dragend` event is fired as specified. An even
+better test suite will also verify that the `drop` event is fired *before* the
+`dragend` event.
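+
+As a rough sketch, the ordering could be asserted like this with
+[testharness.js](./testharness); note that actually performing the drag
+requires a human operator or automation that isn't shown here, and the markup
+and element IDs are arbitrary:
+
+```html
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>drop fires before dragend</title>
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<script>
+setup({explicit_timeout: true});
+</script>
+<div id="source" draggable="true">Drag this text onto the box below.</div>
+<div id="target" style="width: 200px; height: 100px; border: 1px solid">Drop here</div>
+<script>
+async_test(t => {
+  const events = [];
+  const target = document.getElementById("target");
+  target.addEventListener("dragover", e => e.preventDefault());
+  target.addEventListener("drop", t.step_func(e => {
+    e.preventDefault();
+    events.push("drop");
+  }));
+  document.getElementById("source").addEventListener("dragend",
+    t.step_func_done(() => {
+      events.push("dragend");
+      assert_array_equals(events, ["drop", "dragend"]);
+    }));
+}, "drop is dispatched before dragend");
+</script>
+```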
+
+In September of 2019, [Chromium accidentally changed the ordering of the `drop`
+and `dragend`
+events](https://bugs.chromium.org/p/chromium/issues/detail?id=1005747), and as
+a result, real web applications stopped functioning. If there had been a test
+for the sequence of these events, then this confusion would have been avoided.
+
+When making your testing plan, be sure to look carefully for event dispatch and
+the other patterns listed above. They won't always be as clear as the "drag"
+example!
+
+### Optional behavior
+
+Specifications occasionally allow browsers discretion in how they implement
+certain features. These are described using [RFC
+2119](https://tools.ietf.org/html/rfc2119) terms like "MAY" and "OPTIONAL".
+Although browsers should not be penalized for deciding not to implement such
+behavior, WPT offers tests that verify the correctness of the browsers which
+do. Be sure to [label the test as optional according to WPT's
+conventions](file-names) so that people reviewing test results know how to
+interpret failures.
+
+*Example:* The algorithm underpinning
+[`document.getElementsByTagName`](https://developer.mozilla.org/en-US/docs/Web/API/Document/getElementsByTagName)
+includes the following paragraph:
+
+> When invoked with the same argument, and as long as *root*'s [node
+> document](https://dom.spec.whatwg.org/#concept-node-document)'s
+> [type](https://dom.spec.whatwg.org/#concept-document-type) has not changed,
+> the same [HTMLCollection](https://dom.spec.whatwg.org/#htmlcollection) object
+> may be returned as returned by an earlier call.
+
+That statement uses the word "may," so even though it modifies the behavior of
+the preceding algorithm, it is strictly optional. The test we write for this
+should be designated accordingly.
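+
+A sketch of the optional part might look as follows (the tag name is
+arbitrary, and the file would need to be labeled as optional as described
+above); note that `assert_equals` compares object identity here:
+
+```html
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>getElementsByTagName may reuse the returned HTMLCollection</title>
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<script>
+// Optional behavior: the browser *may* return the same HTMLCollection
+// object for repeated calls with the same argument.
+test(() => {
+  const first = document.getElementsByTagName("script");
+  const second = document.getElementsByTagName("script");
+  assert_equals(second, first, "expected the same HTMLCollection object");
+}, "Repeated getElementsByTagName calls return the same collection");
+</script>
+```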
+
+It's important to read these sections carefully because the distinction between
+"mandatory" behavior and "optional" behavior can be nuanced. In this case, the
+optional behavior is never allowed if the document's type has changed. That
+makes for a mandatory test, one that verifies browsers don't return the same
+result when the document's type changes.
+
+## Exercising Restraint
+
+When writing conformance tests, choosing what *not* to test is sometimes just
+as hard as finding what needs testing.
+
+### Don't dive too deep
+
+Algorithms are composed of many other algorithms which themselves are defined
+in terms of still more algorithms. It can be intimidating to consider
+exhaustively testing one of those "nested" algorithms, especially when they are
+shared by many different APIs.
+
+In general, you should plan to write "surface tests" for the nested algorithms.
+That means only verifying that they exhibit the basic behavior you are
+expecting.
+
+It's definitely important for those nested algorithms to be tested exhaustively
+somewhere, but it's just as important to do so in a structured way. Reach out
+to the test suite's maintainers to learn
+if and how they have already tested those algorithms. In many cases, it's
+acceptable to test them in just one place (and maybe through a different API
+entirely), and rely only on surface-level testing everywhere else. While it's
+always possible for more tests to uncover new bugs, the chances may be slim.
+The time we spend writing tests is highly valuable, so we have to be efficient!
+
+*Example:* The following algorithm from [the DOM
+standard](https://dom.spec.whatwg.org/) powers
+[`document.querySelector`](https://developer.mozilla.org/en-US/docs/Web/API/Document/querySelector):
+
+> To **scope-match a selectors string** *selectors* against a *node*, run these
+> steps:
+>
+> 1. Let *s* be the result of [parse a
+> selector](https://drafts.csswg.org/selectors-4/#parse-a-selector)
+> *selectors*.
+> 2. If *s* is failure, then
+> [throw](https://webidl.spec.whatwg.org/#dfn-throw) a
+> "[`SyntaxError`](https://webidl.spec.whatwg.org/#syntaxerror)"
+> [DOMException](https://webidl.spec.whatwg.org/#idl-DOMException).
+> 3. Return the result of [match a selector against a
+> tree](https://drafts.csswg.org/selectors-4/#match-a-selector-against-a-tree)
+> with *s* and *node*'s
+> [root](https://dom.spec.whatwg.org/#concept-tree-root) using [scoping
+> root](https://drafts.csswg.org/selectors-4/#scoping-root) *node*.
+
+As described earlier in this guide, we'd certainly want to test the branch
+regarding the parsing failure. However, there are many ways a string might fail
+to parse--should we verify them all in the tests for `document.querySelector`?
+What about `document.querySelectorAll`? Should we test them all there, too?
+
+The answers depend on the current state of the test suite: whether or not tests
+for selector parsing exist and where they are located. That's why it's best to
+confer with the people who are maintaining the tests.
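+
+For instance, a surface-level check of the parse-failure branch might use a
+single, clearly invalid selector rather than re-testing the whole selector
+grammar (the selector chosen here is arbitrary):
+
+```html
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>querySelector rejects an unparseable selector</title>
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<script>
+test(() => {
+  assert_throws_dom("SyntaxError", () => document.querySelector("!!"));
+}, "querySelector throws SyntaxError when the selector fails to parse");
+</script>
+```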
+
+### Avoid excessive breadth
+
+When the set of input values is finite, it can be tempting to test them all
+exhaustively. When the set is very large, test authors can reduce repetition by
+defining tests programmatically in loops.
+
+Using advanced control flow techniques to dynamically generate tests can
+actually *reduce* test quality. It may obscure the intent of the tests since
+readers have to mentally "unwind" the iteration to determine what is actually
+being verified. The practice is more susceptible to bugs. These bugs may not be
+obvious--they may not cause failures, and they may exercise fewer cases than
+intended. Finally, tests authored using this approach often take a relatively
+long time to complete, and that puts a burden on people who collect test
+results in large numbers.
+
+The severity of these drawbacks varies with the complexity of the generation
+logic. For example, it would be pronounced in a test which conditionally made
+different assertions within many nested loops. Conversely, the severity would
+be low in a test which only iterated over a list of values in order to make the
+same assertions about each. Recognizing when the benefits outweigh the risks
+requires discretion, so once you understand them, you should use your best
+judgement.
+
+*Example:* We can see this consideration in the very first step of the
+`Response` constructor from [the Fetch
+standard](https://fetch.spec.whatwg.org/):
+
+> The `Response`(*body*, *init*) constructor, when invoked, must run these
+> steps:
+>
+> 1. If *init*["`status`"] is not in the range `200` to `599`, inclusive, then
+> [throw](https://webidl.spec.whatwg.org/#dfn-throw) a `RangeError`.
+>
+> [...]
+
+This function accepts exactly 400 values for the "status." With [WPT's
+testharness.js](./testharness), it's easy to dynamically create one test for
+each value. Unless we have reason to believe that a browser may exhibit
+drastically different behavior for any of those values (e.g. correctly
+accepting `546` but incorrectly rejecting `547`), the complexity of
+testing those cases probably isn't warranted.
+
+Instead, focus on writing declarative tests for specific values which are novel
+in the context of the algorithm. For ranges like in this example, testing the
+boundaries is a good idea. `200` and `599` should not produce an error while
+`199` and `600` should produce an error. Feel free to use what you know about
+the feature to choose additional values. In this case, HTTP response status
+codes are classified by the "hundred" order of magnitude, so we might also want
+to test a "3xx" value and a "4xx" value.
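+
+Here is a sketch of those declarative boundary tests with
+[testharness.js](./testharness); the exact selection of additional values is
+up to you:
+
+```html
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Response constructor: status boundaries</title>
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<script>
+test(() => {
+  assert_equals(new Response(null, {status: 200}).status, 200);
+}, "lower bound of the valid range (200) is accepted");
+
+test(() => {
+  assert_equals(new Response(null, {status: 599}).status, 599);
+}, "upper bound of the valid range (599) is accepted");
+
+test(() => {
+  assert_throws_js(RangeError, () => new Response(null, {status: 199}));
+}, "status just below the valid range (199) is rejected");
+
+test(() => {
+  assert_throws_js(RangeError, () => new Response(null, {status: 600}));
+}, "status just above the valid range (600) is rejected");
+</script>
+```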
+
+## Assessing coverage
+
+It's very likely that WPT already has some tests for the feature (or at least
+the specification) that you're interested in testing. In that case, you'll
+have to learn what's already been done before starting to write new tests.
+Understanding the design of existing tests will let you avoid duplicating
+effort, and it will also help you integrate your work more logically.
+
+Even if the feature you're testing does *not* have any tests, you should still
+keep these guidelines in mind. Sooner or later, someone else will want to
+extend your work, so you ought to give them a good starting point!
+
+### File names
+
+The names of existing files and folders in the repository can help you find
+tests that are relevant to your work. [This page on the design of
+WPT](../test-suite-design) goes into detail about how files are generally laid
+out in the repository.
+
+Generally speaking, every conformance test is stored in a subdirectory
+dedicated to the specification it verifies. The structure of these
+subdirectories varies. Some organize tests in directories related to algorithms
+or behaviors. Others have a more "flat" layout, where all tests are listed
+together.
+
+Whatever the case, test authors try to choose names that communicate the
+behavior under test, so you can use them to make an educated guess about where
+your tests should go.
+
+*Example:* Imagine you wanted to write a test to verify that headers were made
+immutable by the `Response.error` method defined in [the Fetch
+standard](https://fetch.spec.whatwg.org). Here's the algorithm:
+
+> The static error() method, when invoked, must run these steps:
+>
+> 1. Let *r* be a new [Response](https://fetch.spec.whatwg.org/#response)
+> object, whose
+> [response](https://fetch.spec.whatwg.org/#concept-response-response) is a
+> new [network error](https://fetch.spec.whatwg.org/#concept-network-error).
+> 2. Set *r*'s [headers](https://fetch.spec.whatwg.org/#response-headers) to a
+> new [Headers](https://fetch.spec.whatwg.org/#headers) object whose
+> [guard](https://fetch.spec.whatwg.org/#concept-headers-guard) is
+> "`immutable`".
+> 3. Return *r*.
+
+In order to figure out where to write the test (and whether it's needed at
+all), you can review the contents of the `fetch/` directory in WPT. Here's how
+that looks on a UNIX-like command line:
+
+ $ ls fetch
+ api/ DIR_METADATA OWNERS
+ connection-pool/ h1-parsing/ local-network-access/
+ content-encoding/ http-cache/ range/
+ content-length/ images/ README.md
+ content-type/ metadata/ redirect-navigate/
+ corb/ META.yml redirects/
+ cross-origin-resource-policy/ nosniff/ security/
+ data-urls/ origin/ stale-while-revalidate/
+
+This test is for a behavior directly exposed through the API, so we should look
+in the `api/` directory:
+
+ $ ls fetch/api
+ abort/ cors/ headers/ policies/ request/ response/
+ basic/ credentials/ idlharness.any.js redirect/ resources/
+
+And since this is a static method on the `Response` constructor, we would
+expect the test to belong in the `response/` directory:
+
+ $ ls fetch/api/response
+ multi-globals/ response-static-error.html
+ response-cancel-stream.html response-static-redirect.html
+ response-clone.html response-stream-disturbed-1.html
+ response-consume-empty.html response-stream-disturbed-2.html
+ response-consume.html response-stream-disturbed-3.html
+ response-consume-stream.html response-stream-disturbed-4.html
+ response-error-from-stream.html response-stream-disturbed-5.html
+ response-error.html response-stream-disturbed-6.html
+ response-from-stream.any.js response-stream-with-broken-then.any.js
+ response-init-001.html response-trailer.html
+ response-init-002.html
+
+There seems to be a test file for the `error` method:
+`response-static-error.html`. We can open that to decide if the behavior is
+already covered. If not, then we know where to [write the
+test](https://github.com/web-platform-tests/wpt/pull/19601)!
+
+### Failures on wpt.fyi
+
+There are many behaviors that are difficult to describe in a succinct file
+name. That's commonly the case with low-level rendering details of CSS
+specifications. Test authors may resort to generic number-based naming schemes
+for their files, e.g. `feature-001.html`, `feature-002.html`, etc. This makes
+it difficult to determine if a test case exists judging only by the names of
+files.
+
+If the behavior you want to test is demonstrated by some browsers but not by
+others, you may be able to use the *results* of the tests to locate the
+relevant test.
+
+[wpt.fyi](https://wpt.fyi) is a website which publishes results of WPT in
+various browsers. Because most browsers pass most tests, the pass/fail
+characteristics of the behavior you're testing can help you filter through a
+large number of highly similar tests.
+
+*Example:* Imagine you've found a bug in the way Safari renders the top CSS
+border of HTML tables. By searching through directory names and file names,
+you've determined the probable location for the test: the `css/CSS2/borders/`
+directory. However, there are *three hundred* files that begin with
+`border-top-`! None of the names mention the `<table>` element, so any one of
+the files may already be testing the case you found.
+
+Luckily, you also know that Firefox and Chrome do not exhibit this bug. You
+could find such tests by visual inspection of the [wpt.fyi](https://wpt.fyi)
+results overview, but [the website's "search" feature includes operators that
+let you query for this information
+directly](https://github.com/web-platform-tests/wpt.fyi/blob/master/api/query/README.md).
+To find the tests which begin with `border-top-`, pass in Chrome, pass in
+Firefox, and fail in Safari, you could write [`border-top- chrome:pass
+firefox:pass
+safari:fail`](https://wpt.fyi/results/?label=master&label=experimental&aligned&q=border-top-%20safari%3Afail%20firefox%3Apass%20chrome%3Apass).
+The results show only three such tests exist:
+
+- `border-top-applies-to-005.xht`
+- `border-top-color-applies-to-005.xht`
+- `border-top-width-applies-to-005.xht`
+
+These may not describe the behavior you're interested in testing; the only way
+to know for sure is to review their contents. However, this is a much more
+manageable set to work with!
+
+### Querying file contents
+
+Some web platform features are enabled with a predictable pattern. For example,
+HTML attributes follow a fairly consistent format. If you're interested in
+testing a feature like this, you may be able to learn where your tests belong
+by querying the contents of the files in WPT.
+
+You may be able to perform such a search on the web. WPT is hosted on
+GitHub.com, and [GitHub offers some basic functionality for querying
+code](https://help.github.com/en/articles/about-searching-on-github). If your
+search criteria are short and distinctive (e.g. all files containing
+"querySelectorAll"), then this interface may be sufficient. However, more
+complicated criteria may require [regular
+expressions](https://www.regular-expressions.info/). For that, you can
+[download the WPT
+repository](https://web-platform-tests.org/writing-tests/github-intro.html) and
+use [git](https://git-scm.com) to perform more powerful searches.
+
+The following table lists some common search criteria and examples of how they
+can be expressed using regular expressions:
+
+<div class="table-container">
+
+```eval_rst
+================================= ================== ==========================
+Criteria Example match Example regular expression
+================================= ================== ==========================
+JavaScript identifier references ``obj.foo()`` ``\bfoo\b``
+JavaScript string literals ``x = "foo";`` ``(["'])foo\1``
+HTML tag names ``<foo attr>`` ``<foo(\s|>|$)``
+HTML attributes ``<div foo=3>`` ``<[a-zA-Z][^>]*\sfoo(\s|>|=|$)``
+CSS property name ``style="foo: 4"`` ``([{;=\"']|\s|^)foo\s+:``
+================================= ================== ==========================
+```
+
+</div>
+
+Bear in mind that searches like this are not necessarily exhaustive. Depending
+on the feature, it may be difficult (or even impossible) to write a query that
+correctly identifies all relevant tests. This strategy can give a helpful
+guide, but the results may not be conclusive.
+
+*Example:* Imagine you're interested in testing how the `src` attribute of the
+`iframe` element works with `javascript:` URLs. Judging only from the names of
+directories, you've found a lot of potential locations for such a test. You
+also know many tests use `javascript:` URLs without describing that in their
+name. How can you find where to contribute new tests?
+
+You can design a regular expression that matches many cases where a
+`javascript:` URL is assigned to the `src` property in HTML. You can use the
+`git grep` command to query the contents of the `html/` directory:
+
+ $ git grep -lE "src\s*=\s*[\"']?javascript:" html
+ html/browsers/browsing-the-web/navigating-across-documents/javascript-url-query-fragment-components.html
+ html/browsers/browsing-the-web/navigating-across-documents/javascript-url-return-value-handling.html
+ html/dom/documents/dom-tree-accessors/Document.currentScript.html
+ html/dom/self-origin.sub.html
+ html/editing/dnd/target-origin/114-manual.html
+ html/semantics/embedded-content/media-elements/track/track-element/cloneNode.html
+ html/semantics/scripting-1/the-script-element/execution-timing/040.html
+ html/semantics/scripting-1/the-script-element/execution-timing/080.html
+ html/semantics/scripting-1/the-script-element/execution-timing/108.html
+ html/semantics/scripting-1/the-script-element/execution-timing/109.html
+ html/webappapis/dynamic-markup-insertion/opening-the-input-stream/document-open-cancels-javascript-url-navigation.html
+
+You will still have to review the contents to know which are relevant for your
+purposes (if any), but compared to the 5,000 files in the `html/` directory,
+this list is far more approachable!
+
+## Writing the Tests
+
+With a complete testing plan in hand, you now have a good idea of the scope of
+your work. It's finally time to write the tests! There's a lot to say about how
+this is done technically. To learn more, check out [the WPT "reftest"
+tutorial](./reftest-tutorial) and [the testharness.js
+tutorial](./testharness-tutorial).
diff --git a/testing/web-platform/tests/docs/writing-tests/manual.md b/testing/web-platform/tests/docs/writing-tests/manual.md
new file mode 100644
index 0000000000..122a22b3f3
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/manual.md
@@ -0,0 +1,77 @@
+# Manual Tests
+
+Some testing scenarios are intrinsically difficult to automate and
+require a human to run the test and check the pass condition.
+
+## When to Write Manual Tests
+
+Whenever possible it's best to write a fully automated test. For a
+browser vendor it's possible to run an automated test hundreds of
+times a day, but manual tests are likely to be run at most a handful
+of times a year (and quite possibly approximately never!). This makes
+them significantly less useful for catching regressions than automated
+tests.
+
+However, there are certain scenarios in which this is not yet
+possible. For example:
+
+* Test which require observing animation (e.g., a test for CSS
+ animation or for video playback),
+
+* Tests that require interaction with browser security UI (e.g., a
+ test in which a user refuses a geolocation permissions grant),
+
+* Tests that require interaction with the underlying OS (e.g., tests
+ for drag and drop from the desktop onto the browser),
+
+* Tests that require non-default browser configuration (e.g., images
+ disabled), and
+
+* Tests that require interaction with the physical environment (e.g.,
+ tests that the vibration API causes the device to vibrate or that
+ various sensor APIs respond in the expected way).
+
+## Requirements for a Manual Test
+
+Manual tests are distinguished by their filename; all manual tests
+have filenames of the form `name-manual.ext` (i.e., a `-manual` suffix
+after the main filename but before the extension).
+
+Manual tests must be fully [self-describing](general-guidelines).
+It is particularly important for these tests that it is easy to
+determine the result from the information provided in the page to the
+tester, because a tester may have hundreds of tests to get through and
+little understanding of the features that they are testing. As a
+result, minimalism is especially a virtue for manual tests.
+
+A test should have, at a minimum, step-by-step instructions for
+performing the test, and a clear statement of either the test result
+(if it can be automatically determined after some setup) or how to
+otherwise determine the outcome.
+
+Any information other than this (e.g., quotes from the spec) should be
+avoided (though, as always, it can be provided in
+HTML/CSS/JS/etc. comments).
+
+## Using testharness.js for Manual Tests
+
+A convenient way to present the results of a test that can have the
+result determined by script after some manual setup steps is to use
+testharness.js to determine and present the result. In this case one
+must pass `{explicit_timeout: true}` in a call to `setup()` in order
+to disable the automatic timeout of the test. For example:
+
+```html
+<!doctype html>
+<title>Manual click on button triggers onclick handler</title>
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<script>
+setup({explicit_timeout: true})
+</script>
+<p>Click on the button below. If a "PASS" result appears the test
+passes, otherwise it fails</p>
+<button onclick="done()">Click Here</button>
+```
diff --git a/testing/web-platform/tests/docs/writing-tests/print-reftests.md b/testing/web-platform/tests/docs/writing-tests/print-reftests.md
new file mode 100644
index 0000000000..62a037da12
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/print-reftests.md
@@ -0,0 +1,45 @@
+# Print Reftests
+
+Print reftests are like ordinary [reftests](reftests), except that the
+output is rendered to paginated form and then compared page-by-page
+with the reference.
+
+Print reftests are distinguished by the string `-print` in the
+filename immediately before the extension, or by being under a
+directory named `print`. Examples:
+
+- `css/css-foo/bar-print.html` is a print reftest
+- `css/css-foo/print/bar.html` is a print reftest
+- `css/css-foo/bar-print-001.html` is **not** a print reftest
+
+
+Like ordinary reftests, the reference is specified using a `<link
+rel=match>` element.
+
+The default page size for print reftests is 12.7 cm by 7.62 cm (5
+inches by 3 inches).
+
+All the features of ordinary reftests also work with print reftests
+including [fuzzy matching](reftests.html#fuzzy-matching). Any fuzzy
+specifier applies to each image comparison performed, i.e. separately
+for each page.
+
+## Page Ranges
+
+In some cases it may be desirable to only compare a subset of the
+output pages in the reftest. This is possible using
+```
+<meta name=reftest-pages content=[range-specifier]>
+```
+Where a range specifier has the form
+```
+range-specifier = <specifier-item> ["," <specifier-item>]*
+specifier-item = <int> | <int>? "-" <int>?
+```
+
+For example, to specify rendering pages 1 and 2, 4, 6 and 7, and 9 and
+10 of a 10 page document one could write:
+
+```
+<meta name=reftest-pages content="-2,4,6,7,9-">
+```
diff --git a/testing/web-platform/tests/docs/writing-tests/python-handlers/index.md b/testing/web-platform/tests/docs/writing-tests/python-handlers/index.md
new file mode 100644
index 0000000000..e52e137179
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/python-handlers/index.md
@@ -0,0 +1,116 @@
+# Python Handlers
+
+Python file handlers are Python files which the server executes in response to
+requests made to the corresponding URL. This is hooked up to a route like
+`("*", "*.py", python_file_handler)`, meaning that any .py file will be
+treated as a handler file (note that this makes it easy to write unsafe
+handlers, particularly when running the server in a web-exposed setting).
+
+The Python files must define a function named `main` with the signature:
+
+ main(request, response)
+
+...where `request` is [a wptserve `Request`
+object](/tools/wptserve/docs/request) and `response` is [a wptserve `Response`
+object](/tools/wptserve/docs/response).
+
+This function must return a value in one of the following four formats:
+
+ ((status_code, reason), headers, content)
+ (status_code, headers, content)
+ (headers, content)
+ content
+
+Above, `headers` is a list of (field name, value) pairs, and `content` is a
+string or an iterable returning strings.
+
+The `main` function may also update the response manually. For example, one may
+use `response.headers.set` to set a response header, and only return the
+content. One may even use this kind of handler to manipulate the output
+socket directly: the `writer` property of the response exposes a
+`ResponseWriter` object that allows writing specific parts of the response or
+direct access to the underlying socket. If used, the return value of the
+`main` function and the properties of the `response` object will be ignored.
+
+The wptserver implements a number of Python APIs for controlling traffic.
+
+```eval_rst
+.. toctree::
+ :maxdepth: 1
+
+ /tools/wptserve/docs/request
+ /tools/wptserve/docs/response
+ /tools/wptserve/docs/stash
+```
+
+### Importing local helper scripts
+
+Python file handlers may import local helper scripts, e.g. to share logic
+across multiple handlers. To avoid module name collision, however, imports must
+be relative to the root of WPT. For example, in an imaginary
+`cookies/resources/myhandler.py`:
+
+```python
+# DON'T DO THIS
+import myhelper
+
+# DO THIS
+from cookies.resources import myhelper
+```
+
+Only absolute imports are allowed; do not use relative imports. If the path to
+your helper script includes a hyphen (`-`), you can use `import_module` from
+`importlib` to import it. For example:
+
+```python
+import importlib
+myhelper = importlib.import_module('common.security-features.myhelper')
+```
+
+**Note on `__init__` files**: Importing helper scripts like this
+requires a 'path' of empty `__init__.py` files in every directory down
+to the helper. For example, if your helper is
+`css/css-align/resources/myhelper.py`, you need to have:
+
+```
+css/__init__.py
+css/css-align/__init__.py
+css/css-align/resources/__init__.py
+```
+
+## Example: Dynamic HTTP headers
+
+The following code defines a Python handler that allows the requester to
+control the value of the `Content-Type` HTTP response header:
+
+```python
+def main(request, response):
+ content_type = request.GET.first('content-type')
+ headers = [('Content-Type', content_type)]
+
+ return (200, 'my status text'), headers, 'my response content'
+```
+
+If saved to a file named `resources/control-content-type.py`, the WPT server
+will respond to requests for `resources/control-content-type.py` by executing
+that code.
+
+This could be used from a [testharness.js test](../testharness) like so:
+
+```html
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Demonstrating the WPT server's Python handler feature</title>
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<script>
+promise_test(function() {
+ return fetch('resources/control-content-type.py?content-type=text/foobar')
+ .then(function(response) {
+ assert_equals(response.status, 200);
+ assert_equals(response.statusText, 'my status text');
+ assert_equals(response.headers.get('Content-Type'), 'text/foobar');
+ });
+});
+</script>
+```
diff --git a/testing/web-platform/tests/docs/writing-tests/reftest-tutorial.md b/testing/web-platform/tests/docs/writing-tests/reftest-tutorial.md
new file mode 100644
index 0000000000..a51430942c
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/reftest-tutorial.md
@@ -0,0 +1,276 @@
+# Writing a reftest
+
+<!--
+Note to maintainers:
+
+This tutorial is designed to be an authentic depiction of the WPT contribution
+experience. It is not intended to be comprehensive; its scope is intentionally
+limited in order to demonstrate authoring a complete test without overwhelming
+the reader with features. Because typical WPT usage patterns change over time,
+this should be updated periodically; please weigh extensions against the
+demotivating effect that a lengthy guide can have on new contributors.
+-->
+
+Let's say you've discovered that WPT doesn't have any tests for the `dir`
+attribute of [the `<bdo>`
+element](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/bdo). This
+tutorial will guide you through the process of writing and submitting a test.
+You'll need to [configure your system to use WPT's
+tools](../running-tests/from-local-system), but you won't need them until
+towards the end of this tutorial. Although it includes some very brief
+instructions on using git, you can find more guidance in [the tutorial for git
+and GitHub](../writing-tests/github-intro).
+
+WPT's reftests are great for testing web-platform features that have some
+visual effect. [The reftests reference page](reftests) describes them in the
+abstract, but for the purposes of this guide, we'll only consider the features
+we need to test the `<bdo>` element.
+
+```eval_rst
+.. contents::
+ :local:
+```
+
+## Setting up your workspace
+
+To make sure you have the latest code, first type the following into a terminal
+located in the root of the WPT git repository:
+
+ $ git fetch git@github.com:web-platform-tests/wpt.git
+
+Next, we need a place to store the change set we're about to author. Here's how
+to create a new git branch named `reftest-for-bdo` from the revision of WPT we
+just downloaded:
+
+ $ git checkout -b reftest-for-bdo FETCH_HEAD
+
+Now you're ready to create your patch.
+
+## Writing the test file
+
+First, we'll create a file that demonstrates the "feature under test." That is:
+we'll write an HTML document that displays some text using a `<bdo>` element.
+
+WPT has thousands of tests, so it can be daunting to decide where to put a new
+one. Generally speaking, [test files should be placed in directories
+corresponding to the specification text they are
+verifying](../test-suite-design). `<bdo>` is defined in [the "text-level
+semantics" chapter of the HTML
+specification](https://html.spec.whatwg.org/multipage/text-level-semantics.html),
+so we'll want to create our new test in the directory
+`html/semantics/text-level-semantics/the-bdo-element/`. Create a file named
+`rtl.html` and open it in your text editor.
+
+Here's one way to demonstrate the feature:
+
+```html
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>BDO element dir=rtl</title>
+<link rel="help" href="https://html.spec.whatwg.org/#the-bdo-element">
+<meta name="assert" content="BDO element's DIR content attribute renders correctly given value of 'rtl'.">
+
+<p>Test passes if WAS is displayed below.</p>
+<bdo dir="rtl">SAW</bdo>
+```
+
+That's pretty dense! Let's break it down:
+
+- ```html
+ <!DOCTYPE html>
+ <meta charset="utf-8">
+ ```
+
+ We explicitly set the DOCTYPE and character set to be sure that browsers
+ don't infer them to be something we aren't expecting. We're omitting the
+ `<html>` and `<head>` tags. That's a common practice in WPT, preferred
+ because it makes tests more concise.
+
+- ```html
+ <title>BDO element dir=rtl</title>
+ ```
+ The document's title should succinctly describe the feature under test.
+
+- ```html
+ <link rel="help" href="https://html.spec.whatwg.org/#the-bdo-element">
+ ```
+
+ The "help" metadata should reference the specification under test so that
+ everyone understands the motivation. This is so helpful that [the CSS Working
+ Group requires it for CSS tests](css-metadata)! If you're writing a reftest
+ for a feature outside of CSS, feel free to omit this tag.
+
+- ```html
+  <meta name="assert" content="BDO element's DIR content attribute renders correctly given value of 'rtl'.">
+ ```
+
+ The "assert" metadata is a structured way for you to describe exactly what
+ you want your reftest to verify. For a direct test like the one we're writing
+ here, it might seem a little superfluous. It's much more helpful for
+ more-involved tests where reviewers might need some help understanding your
+ intentions.
+
+ This tag is optional, so you can skip it if you think it's unnecessary. We
+ recommend using it for your first few tests since it may let reviewers give
+ you more helpful feedback. As you get more familiar with WPT and the
+ specifications, you'll get a sense for when and where it's better to leave it
+ out.
+
+- ```html
+ <p>Test passes if WAS is displayed below.</p>
+ ```
+
+ We're communicating the "pass" condition in plain English to make the test
+ self-describing.
+
+- ```html
+ <bdo dir="rtl">SAW</bdo>
+ ```
+
+ This is the real focus of the test. We're including some text inside a
+ `<bdo>` element in order to demonstrate the feature under test.
+
+Since this page doesn't rely on any [special WPT server
+features](server-features), we can view it by loading the HTML file directly.
+There are a bunch of ways to do this; one is to navigate to the
+`html/semantics/text-level-semantics/the-bdo-element/` directory in a file
+browser and drag the new `rtl.html` file into an open web browser window.
+
+![](/assets/reftest-tutorial-test-screenshot.png "screen shot of the new test")
+
+Sighted people can open that document and verify whether or not the stated
+expectation is satisfied. If we were writing a [manual test](manual), we'd be
+done. However, it's time-consuming for a human to run tests, so we should
+prefer making tests automatic whenever possible. Remember that we set out to
+write a "reference test." Now it's time to write the reference file.
+
+## Writing a "match" reference
+
+The "match" reference file describes what the test file is supposed to look
+like. Critically, it *must not* use the technology that we are testing. The
+reference file is what allows the test to be run by a computer--the computer
+can verify that each pixel in the test document exactly matches the
+corresponding pixel in the reference document.
+
+Make a new file in the same
+`html/semantics/text-level-semantics/the-bdo-element/` directory named
+`rtl-ref.html`, and save the following markup into it:
+
+```html
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>BDO element dir=rtl reference</title>
+
+<p>Test passes if WAS is displayed below.</p>
+<p>WAS</p>
+```
+
+This is like a stripped-down version of the test file. In order to produce a
+visual rendering which is the same as the expected rendering, it uses a `<p>`
+element whose contents is the characters in right-to-left order. That way, if
+the browser doesn't support the `<bdo>` element, this file will still show text
+in the correct sequence.
+
+This file is also completely functional without the WPT server, so you can open
+it in a browser directly from your hard drive.
+
+Currently, there's no way for a human operator or an automated script to know
+that the two files we've created are supposed to match visually. We'll need to
+add one more piece of metadata to the test file we created earlier. Open
+`html/semantics/text-level-semantics/the-bdo-element/rtl.html` in your text
+editor and add another `<link>` tag as described by the following change
+summary:
+
+```diff
+ <!DOCTYPE html>
+ <meta charset="utf-8">
+ <title>BDO element dir=rtl</title>
+ <link rel="help" href="https://html.spec.whatwg.org/#the-bdo-element">
++<link rel="match" href="rtl-ref.html">
+ <meta name="assert" content="BDO element's DIR content attribute renders correctly given value of 'rtl'.">
+
+ <p>Test passes if WAS is displayed below.</p>
+ <bdo dir="rtl">SAW</bdo>
+```
+
+Now, anyone (human or computer) reviewing the test file will know where to find
+the associated reference file.
+
+## Verifying our work
+
+We're done writing the test, but we should make sure it fits in with the rest
+of WPT before we submit it. This involves using some of the project's tools, so
+this is the point you'll need to [configure your system to run
+WPT](../running-tests/from-local-system).
+
+[The lint tool](lint-tool) can detect some of the common mistakes people make
+when contributing to WPT. To run it, open a command-line terminal, navigate to
+the root of the WPT repository, and enter the following command:
+
+ python ./wpt lint html/semantics/text-level-semantics/the-bdo-element
+
+If this recognizes any of those common mistakes in the new files, it will tell
+you where they are and how to fix them. If you do have changes to make, you can
+run the command again to make sure you got them right.
+
+Now, we'll run the test using the automated pixel-by-pixel comparison approach
+mentioned earlier. This is important for reftests because the test and the
+reference may differ in very subtle ways that are hard to catch with the naked
+eye. That's not to say your test has to pass in all browsers (or even in *any*
+browser). But if we expect the test to pass, then running it this way will help
+us catch other kinds of mistakes.
+
+The tools support running the tests in many different browsers. We'll use
+Firefox this time:
+
+ python ./wpt run firefox html/semantics/text-level-semantics/the-bdo-element/rtl.html
+
+We expect this test to pass, so if it does, we're ready to submit it. If we
+were testing a web platform feature that Firefox didn't support, we would
+expect the test to fail instead.
+
+There are a few problems to look out for in addition to passing/failing status.
+The report will describe fewer tests than we expect if the test isn't run at
+all. That's usually a sign of a formatting mistake, so you'll want to make sure
+you've used the right file names and metadata. Separately, the web browser
+might crash. That's often a sign of a browser bug, so you should consider
+[reporting it to the browser's
+maintainers](https://rachelandrew.co.uk/archives/2017/01/30/reporting-browser-bugs/)!
+
+## Submitting the test
+
+First, let's stage the new files for committing:
+
+ $ git add html/semantics/text-level-semantics/the-bdo-element/rtl.html
+ $ git add html/semantics/text-level-semantics/the-bdo-element/rtl-ref.html
+
+We can make sure the commit has everything we want to submit (and nothing we
+don't) by using `git diff`:
+
+ $ git diff --staged
+
+On most systems, you can use the arrow keys to navigate through the changes,
+and you can press the `q` key when you're done reviewing.
+
+Next, we'll create a commit with the staged changes:
+
+ $ git commit -m '[html] Add test for the `<bdo>` element'
+
+And now we can push the commit to our fork of WPT:
+
+ $ git push origin reftest-for-bdo
+
+The last step is to submit the test for review. WPT doesn't actually need the
+test we wrote in this tutorial, but if we wanted to submit it for inclusion in
+the repository, we would create a pull request on GitHub. [The guide on git and
+GitHub](../writing-tests/github-intro) has all the details on how to do that.
+
+## More practice
+
+Here are some ways you can keep experimenting with WPT using this test:
+
+- Improve coverage by adding more tests for related behaviors (e.g. nested
+ `<bdo>` elements)
+- Add another reference document which describes what the test should *not*
+ look like using [`rel=mismatch`](reftests)
diff --git a/testing/web-platform/tests/docs/writing-tests/reftests.md b/testing/web-platform/tests/docs/writing-tests/reftests.md
new file mode 100644
index 0000000000..219e5887a0
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/reftests.md
@@ -0,0 +1,192 @@
+# Reftests
+
+Reftests are one of the primary tools for testing things relating to
+rendering; they are made up of the test and one or more other pages
+("references") with assertions as to whether they render identically
+or not. This page describes their aspects exhaustively; [the tutorial
+on writing a reftest](reftest-tutorial) offers a more limited but
+grounded guide to the process.
+
+## How to Run Reftests
+
+Reftests can be run manually simply by opening the test and the
+reference file in multiple windows or tabs and flipping between the
+two. When run in automation the comparison is exact, so differences
+too subtle for the human eye to notice can still cause the test to
+fail.
+
+## Components of a Reftest
+
+In the simplest case, a reftest consists of a pair of files called the
+*test* and the *reference*.
+
+The *test* file is the one that makes use of the technology being
+tested. It also contains a `link` element with `rel="match"` or
+`rel="mismatch"` and `href` attribute pointing to the *reference*
+file, e.g. `<link rel=match href=references/green-box-ref.html>`. A
+`match` test only passes if the two files render pixel-for-pixel
+identically within an 800x600 window *including* scroll-bars if
+present; a `mismatch` test only passes if they *don't* render
+identically.
+
+The *reference* file is typically written to be as simple as possible,
+and does not use the technology under test. It is desirable that the
+reference be rendered correctly even in UAs with relatively poor
+support for CSS and no support for the technology under test.
+
+## Writing a Good Reftest
+
+In general the files used in a reftest should follow
+the [general guidelines][] and
+the [rendering test guidelines][rendering]. They should also be
+self-describing, to allow a human to determine whether the
+rendering is as expected.
+
+References can be shared between tests; this is strongly encouraged as
+it makes it easier to tell at a glance whether a test passes (through
+familiarity) and enables some optimizations in automated test
+runners. Shared references are typically placed in `references`
+directories, either alongside the tests they are expected to be useful
+for or at the top level if expected to be generally applicable (e.g.,
+many layout tests can be written such that the correct rendering is a
+100x100 green square!). For references that are applicable only to a
+single test, it is recommended to use the test name with a suffix of
+`-ref` as their filename; e.g., `test.html` would have `test-ref.html`
+as a reference.
+
+## Multiple References
+
+Sometimes, a test's pass condition cannot be captured in a single
+reference.
+
+If a test has multiple reference links, then the test passes if:
+
+ * at least one match reference matches (when there are any match
+   references), and
+ * every mismatch reference mismatches (when there are any mismatch
+   references).
+
+If you need multiple matches to succeed, these can be turned into
+multiple tests (for example, by just having a reference be a test
+itself!). If this seems like an unreasonable restriction, please file
+a bug and let us know!
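+
+For instance, a test containing the following links (the file names here are
+illustrative) passes only if its rendering matches at least one of the two
+match references and differs from the mismatch reference:
+
+```html
+<link rel="match" href="green-box-ref.html">
+<link rel="match" href="green-box-alt-ref.html">
+<link rel="mismatch" href="red-box-notref.html">
+```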
+
+## Controlling When Comparison Occurs
+
+By default, reftest screenshots are taken after the following
+conditions are met:
+
+* The `load` event has fired
+* Web fonts (if any) are loaded
+* Pending paints have completed
+
+In some cases it is necessary to delay the screenshot later than this,
+for example because some DOM manipulation is required to set up the
+desired test conditions. To enable this, the test may have a
+`class="reftest-wait"` attribute specified on the root element. In
+this case the harness will run the following sequence of steps:
+
+* Wait for the `load` event to fire and fonts to load.
+* Wait for pending paints to complete.
+* Fire an event named `TestRendered` at the root element, with the
+ `bubbles` attribute set to true.
+* Wait for the `reftest-wait` class to be removed from the root
+ element.
+* Wait for pending paints to complete.
+* Screenshot the viewport.
+
+The `TestRendered` event provides a hook for tests to make
+modifications to the test document that are not batched into the
+initial layout/paint.
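+
+A minimal sketch of a test using this hook (the reference file name and the
+particular DOM change are illustrative):
+
+```html
+<!DOCTYPE html>
+<html class="reftest-wait">
+<meta charset="utf-8">
+<title>reftest-wait example</title>
+<link rel="match" href="example-ref.html">
+<div id="box" style="width: 100px; height: 100px; background: red"></div>
+<script>
+document.documentElement.addEventListener("TestRendered", () => {
+  // Make the DOM change that the screenshot should capture...
+  document.getElementById("box").style.background = "green";
+  // ...then allow the screenshot to be taken.
+  document.documentElement.classList.remove("reftest-wait");
+});
+</script>
+</html>
+```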
+
+## Fuzzy Matching
+
+In some situations a test may have subtle differences in rendering
+compared to the reference due to, e.g., anti-aliasing. To allow for
+these small differences, we allow tests to specify a fuzziness
+characterised by two parameters, both of which must be specified:
+
+ * A maximum difference in the per-channel color value for any pixel.
+ * A number of total pixels that may be different.
+
+The maximum difference in the per-channel color value is formally
+defined as follows: let <code>T<sub>x,y,c</sub></code> be the value of
+colour channel `c` at pixel coordinates `x`, `y` in the test image and
+<code>R<sub>x,y,c</sub></code> be the corresponding value in the
+reference image, and let <code>width</code> and <code>height</code> be
+the dimensions of the image in pixels. Then <code>maxDifference =
+max<sub>x=[0,width) y=[0,height), c={r,g,b}</sub>(|T<sub>x,y,c</sub> -
+R<sub>x,y,c</sub>|)</code>.
+
+To specify the fuzziness, one may add a `<meta name=fuzzy>` element to
+the test file (or, in the case of more complex tests, to any
+page containing the `<link rel=[mis]match>` elements). In the simplest
+case this has a `content` attribute containing the parameters above,
+separated by a semicolon e.g.
+
+```
+<meta name=fuzzy content="maxDifference=15;totalPixels=300">
+```
+
+would allow for a difference of exactly 15 / 255 on any color channel
+and exactly 300 pixels total difference. The argument names are optional
+and may be elided; the above is the same as:
+
+```
+<meta name=fuzzy content="15;300">
+```
+
+The values may also be given as ranges e.g.
+
+```
+<meta name=fuzzy content="maxDifference=10-15;totalPixels=200-300">
+```
+
+or
+
+```
+<meta name=fuzzy content="10-15;200-300">
+```
+
+In this case the maximum pixel difference must be in the range
+`10-15` and the total number of different pixels must be in the range
+`200-300`. These range checks are inclusive.
+
+In cases where a single test has multiple possible refs and the
+fuzziness is not the same for all refs, a ref may be specified by
+prefixing the `content` value with the relative url for the ref e.g.
+
+```
+<meta name=fuzzy content="option1-ref.html:10-15;200-300">
+```
+
+One meta element is required per reference requiring a unique
+fuzziness value, but any unprefixed value will automatically be
+applied to any ref that doesn't have a more specific value.
+
+### Debugging fuzzy reftests
+
+When debugging a fuzzy reftest via `wpt run`, it can be useful to know what the
+allowed and detected differences were. Many of the output logger options will
+provide this information. For example, by passing `--log-mach=-` for a run of a
+hypothetical failing test, one might get:
+
+```
+ 0:08.15 TEST_START: /foo/bar.html
+ 0:09.70 INFO Found 250 pixels different, maximum difference per channel 6 on page 1
+ 0:09.70 INFO Allowed 0-100 pixels different, maximum difference per channel 0-0
+ 0:09.70 TEST_END: FAIL, expected PASS - /foo/bar.html ['f83385ed9c9bea168108b8c448366678c7941627']
+```
+
+For other logging flags, see the output of `wpt run --help`.
+
+## Limitations
+
+In some cases, a test cannot be a reftest. For example, there is no
+way to create a reference for underlining, since the position and
+thickness of the underline depends on the UA, the font, and/or the
+platform. However, once it's established that underlining an inline
+element works, it's possible to construct a reftest for underlining
+a block element, by constructing a reference using underlines on a
+```<span>``` that wraps all the content inside the block.
+
+[general guidelines]: general-guidelines
+[rendering]: rendering
diff --git a/testing/web-platform/tests/docs/writing-tests/rendering.md b/testing/web-platform/tests/docs/writing-tests/rendering.md
new file mode 100644
index 0000000000..e17b6ef879
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/rendering.md
@@ -0,0 +1,84 @@
+# Rendering Test Guidelines
+
+There are a number of techniques typically used when writing rendering tests;
+these are especially useful for [visual](visual) tests, which need to be
+manually judged; following common patterns makes it easier to correctly tell
+if a given test passed or not.
+
+## Indicating success
+
+Success is largely indicated by the color green; typically in one of
+two ways:
+
+ * **The green paragraph**: arguably the simplest form of test, this
+ typically consists of single line of text with a pass condition of,
+ "This text should be green". A variant of this is using the
+ background instead, with a pass condition of, "This should have a
+ green background".
+
+ * **The green square**: applicable to many block layout tests, the test
+ renders a green square when it passes; these can mostly be written to
+ match [this][ref-filled-green-100px-square] reference. This green square is
+ often rendered over a red square, such that when the test fails there is red
+ visible on the page; this can even be done using text by using the
+ [Ahem][ahem] font.
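+
+For instance, a test following the green square pattern might overlay a green
+box on a red one, so that any rendering error exposes red. The markup below is
+only a sketch, and the reference path is an assumption based on the shared
+green-square reference mentioned above:
+
+```html
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Example of the green square pattern</title>
+<link rel="match" href="/css/reference/ref-filled-green-100px-square.xht">
+<style>
+  #test { width: 100px; height: 100px; background: red; }
+  #test > div { width: 100px; height: 100px; background: green; }
+</style>
+<p>Test passes if there is a filled green square.</p>
+<div id="test"><div></div></div>
+```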
+
+More occasionally, the entire canvas is rendered green, typically when
+testing parts of CSS that affect the entire page. Care has to be taken
+when writing tests like this that the test will not result in a single
+green paragraph if it fails. This is usually done by forcing the short
+descriptive paragraph to have a neutral color (e.g., white).
+
+Sometimes instead of a green square, a white square is used to ensure
+any red is obvious. To ensure the stylesheet has loaded, it is
+recommended to make the pass condition paragraph green and require
+that in addition to there being no red on the page.
+
+## Indicating failure
+
+In addition to having clearly defined characteristics when
+they pass, well designed tests should have some clear signs when
+they fail. It can sometimes be hard to make a test do something only
+when the test fails, because it is very hard to predict how user
+agents will fail! Furthermore, in a rather ironic twist, the best
+tests are those that catch the most unpredictable failures!
+
+Having said that, here are the best ways to indicate failures:
+
+ * Using the color red is probably the best way of highlighting
+ failures. Tests should be designed so that if the rendering is a
+ few pixels off some red is uncovered or otherwise rendered on the
+ page.
+
+ * Tests of the `line-height`, `font-size` and similar properties can
+ sometimes be devised in such a way that a failure will result in
+ the text overlapping.
+
+ * Some properties lend themselves well to making "FAIL" render in the
+ case of something going wrong, for example `quotes` and
+ `content`.
+
+## Other Colors
+
+Aside from green and red, other colors are generally used in specific
+ways:
+
+ * Black is typically used for descriptive text,
+
+ * Blue is frequently used as an obvious color for tests with complex
+ pass conditions,
+
+ * Fuchsia, yellow, teal, and orange are typically used when multiple
+ colors are needed,
+
+ * Dark gray is often used for descriptive lines, and
+
+ * Silver or light gray is often used for irrelevant content, such as
+ filler text.
+
+None of these rules are absolute because testing
+color-related functionality will necessitate using some of these
+colors!
+
+[ref-filled-green-100px-square]: https://github.com/w3c/csswg-test/blob/master/reference/ref-filled-green-100px-square.xht
+[ahem]: ahem \ No newline at end of file
diff --git a/testing/web-platform/tests/docs/writing-tests/server-features.md b/testing/web-platform/tests/docs/writing-tests/server-features.md
new file mode 100644
index 0000000000..b50b495212
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/server-features.md
@@ -0,0 +1,157 @@
+# Server Features
+
+For many tests, writing one or more static HTML files is
+sufficient. However, there is a large class of tests for which this
+approach is insufficient, including:
+
+* Tests that require cross-domain access
+
+* Tests that depend on setting specific headers or status codes
+
+* Tests that need to inspect the browser-sent request
+
+* Tests that require state to be stored on the server
+
+* Tests that require precise timing of the response.
+
+To make writing such tests possible, we are using a number of
+server-side components designed to make it easy to manipulate the
+precise details of the response:
+
+* *wptserve*, a custom Python HTTP server
+
+* *pywebsocket*, an existing websockets server
+
+wptserve is a Python-based web server. By default it serves static
+files in the test suite. For more sophisticated requirements, several
+mechanisms are available to take control of the response. These are
+outlined below.
+
+### Tests Involving Multiple Origins
+
+Our test servers are guaranteed to be accessible through two domains
+and five subdomains under each. The 'main' domain is unnamed; the
+other is called 'alt'. These subdomains are: `www`, `www1`, `www2`,
+`天気の良い日`, and `élève`; there is also `nonexistent` which is
+guaranteed not to resolve. In addition, the HTTP server listens on two
+ports, and the WebSockets server on one. These subdomains and ports
+must be used for cross-origin tests.
+
+Tests must not hardcode the hostname of the server that they expect to
+be running on or the port numbers, as these are not guaranteed by the
+test environment. Instead they can get this information in one of two
+ways:
+
+* From script, using the `location` API.
+
+* By using a textual substitution feature of the server.
+
+In order for the latter to work, a file must either have a name of the form
+`{name}.sub.{ext}` e.g. `example-test.sub.html` or be referenced through a URL
+containing `pipe=sub` in the query string e.g. `example-test.html?pipe=sub`.
+The substitution syntax uses `{{ }}` to delimit items for substitution. For
+example to substitute in the main host name, one would write: `{{host}}`.
+
+To get full domains, including subdomains, there is the `hosts` dictionary,
+where the first dimension is the name of the domain, and the second the
+subdomain. For example, `{{hosts[][www]}}` would give the `www` subdomain under
+the main (unnamed) domain, and `{{hosts[alt][élève]}}` would give the `élève`
+subdomain under the alt domain.
+
+For mostly historic reasons, the subdomains of the main domain are
+also available under the `domains` dictionary; this is identical to
+`hosts[]`.
+
+Ports are also available on a per-protocol basis. For example,
+`{{ports[ws][0]}}` is replaced with the first (and only) WebSockets port, while
+`{{ports[http][1]}}` is replaced with the second HTTP port.
+
+The request URL itself can be used as part of the substitution using the
+`location` dictionary, which has entries matching the `window.location` API.
+For example, `{{location[host]}}` is replaced by `hostname:port` for the
+current request, matching `location.host`.
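+
+Putting these together, a hypothetical `frame-on-alt-origin.sub.html` (the file
+and helper names are illustrative) could read:
+
+```html
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Substitution example</title>
+<!-- Load a helper document from the www subdomain of the alt domain, on the
+     second HTTP port. -->
+<iframe src="http://{{hosts[alt][www]}}:{{ports[http][1]}}/example/helper.html"></iframe>
+<script>
+  // The origin this document was served from, assembled from substitutions;
+  // {{location[host]}} would give the equivalent hostname:port directly.
+  var own_origin = "http://{{host}}:{{ports[http][0]}}";
+</script>
+```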
+
+
+### Tests Requiring Special Headers
+
+For tests requiring that a certain HTTP header is set to some static
+value, a file with the same path as the test file except for an
+additional `.headers` suffix may be created. For example for
+`/example/test.html`, the headers file would be
+`/example/test.html.headers`. This file consists of lines of the form
+
+ header-name: header-value
+
+For example
+
+ Content-Type: text/html; charset=big5
+
+To apply the same headers to all files in a directory use a
+`__dir__.headers` file. This will only apply to the immediate
+directory and not subdirectories.
+
+Headers files may be used in combination with substitutions by naming
+the file e.g. `test.html.sub.headers`.
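+
+For instance, a hypothetical `cors.html.sub.headers` file could use a
+substitution to allow requests from the alternate domain:
+
+    Access-Control-Allow-Origin: http://{{hosts[alt][]}}:{{ports[http][0]}}
+    Access-Control-Allow-Credentials: true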
+
+
+### Tests Requiring Full Control Over The HTTP Response
+
+```eval_rst
+.. toctree::
+ :maxdepth: 1
+
+ python-handlers/index
+ server-pipes
+```
+
+For full control over the request and response, the server provides the ability
+to write `.asis` files; these are served as literal HTTP responses. In other
+words, they are sent byte-for-byte to the client without the server adding an
+HTTP status line, headers, or anything else. This makes them suitable for testing
+situations where the precise bytes on the wire are static, and control over the
+timing is unnecessary, but the response does not conform to HTTP requirements.
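+
+For example, a hypothetical `unusual-status.asis` file might contain the
+complete response to send, byte for byte:
+
+```
+HTTP/1.1 738 Unusual Status
+Content-Type: text/plain
+Content-Length: 5
+
+hello
+```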
+
+The server also provides the ability to write [Python
+"handlers"](python-handlers/index)--Python scripts that have access to request
+data and can manipulate the content and timing of the response. Responses are
+also influenced by [the `pipe` query string parameter](server-pipes).
+
+
+### Tests Requiring HTTP/2.0
+
+To make a test run over an HTTP/2.0 connection, use `.h2.` in the filename.
+By default the HTTP/2.0 server can be accessed using port 9000. At the moment
+accessing tests that use `.h2.` over ports that do not use an HTTP/2.0 server
+also succeeds, so beware of that when creating them.
+
+The HTTP/2.0 server supports handlers that work per-frame; these, along with
+the API, are documented in [Writing H2 Tests](h2tests).
+
+
+### Tests Requiring WebTransport over HTTP/3
+
+We do not support loading a test over WebTransport over HTTP/3 yet, but a test
+can establish a WebTransport session to the test server.
+
+The WebTransport over HTTP/3 server is not yet enabled by default, so
+WebTransport tests will fail unless `--enable-webtransport` is passed to
+`./wpt run`.
+
+### Test Features specified as query params
+
+As an alternative to specifying [Test Features](file-names.html#test-features)
+in the test filename, they can be specified by setting `wpt_flags` in the
+[test variant](testharness.html#variants). For example, the following variant
+will be loaded over HTTPS:
+```html
+<meta name="variant" content="?wpt_flags=https">
+```
+
+`https`, `h2` and `www` features are supported by `wpt_flags`.
+
+Multiple features can be specified by having multiple `wpt_flags`. For example,
+the following variant will be loaded over HTTPS and run on the www subdomain.
+
+```html
+<meta name="variant" content="wpt_flags=www&wpt_flags=https">
+```
diff --git a/testing/web-platform/tests/docs/writing-tests/server-pipes.md b/testing/web-platform/tests/docs/writing-tests/server-pipes.md
new file mode 100644
index 0000000000..dc376ddacf
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/server-pipes.md
@@ -0,0 +1,155 @@
+# wptserve Pipes
+
+Pipes are designed to allow simple manipulation of the way that
+static files are sent without requiring any custom code. They are also
+useful for cross-origin tests because they can be used to activate a
+substitution mechanism which can fill in details of ports and server
+names in the setup on which the tests are being run.
+
+## Enabling
+
+Pipes are functions that may be used when serving files to alter parts
+of the response. These are invoked by adding a `pipe=` query parameter
+taking a `|`-separated list of pipe functions and parameters. The pipe
+functions are applied to the response from left to right. For example:
+
+    GET /sample.txt?pipe=slice(1,200)|status(404)
+
+This would serve bytes 1 to 199, inclusive, of sample.txt with the HTTP status
+code 404.
+
+Note: If you write directly to the response socket using `ResponseWriter`, or
+when using the `asis` handler, only the `trickle` pipe will affect the response.
+
+There are several built-in pipe functions, and it is possible to add
+more using the `@pipe` decorator on a function, if required.
+
+Note: Because of the way pipes compose, using some pipe functions prevents the
+content-length of the response from being known in advance. In these cases the
+server will close the connection to indicate the end of the response,
+preventing the use of HTTP 1.1 keepalive.
+
+## Built-In Pipes
+
+### `sub`
+
+Used to substitute variables from the server environment, or from the
+request, into the response. A typical use case is cross-domain testing,
+since the exact domain names and ports of the servers are
+generally unknown.
+
+Substitutions are marked in a file using a block delimited by `{{`
+and `}}`. Inside the block the following variables are available:
+
+- `{{host}}` - The host name of the server excluding any subdomain part.
+- `{{domains[]}}` - The domain name of a particular subdomain e.g.
+ `{{domains[www]}}` for the `www` subdomain.
+- `{{hosts[][]}}` - The domain name of a particular subdomain for a particular
+ host. The first key may be empty (designating the "default" host) or the
+ value `alt`; i.e., `{{hosts[alt][]}}` (designating the alternate host).
+- `{{ports[][]}}` - The port number of servers, by protocol e.g.
+ `{{ports[http][0]}}` for the first (and, depending on setup, possibly only)
+  HTTP server.
+- `{{headers[]}}` The HTTP headers in the request e.g. `{{headers[X-Test]}}`
+ for a hypothetical `X-Test` header.
+- `{{header_or_default(header, default)}}` The value of an HTTP header, or a
+ default value if it is absent. e.g. `{{header_or_default(X-Test,
+ test-header-absent)}}`
+- `{{GET[]}}` The query parameters for the request e.g. `{{GET[id]}}` for an id
+ parameter sent with the request.
+
+So, for example, to write a JavaScript file called `xhr.js` that
+depends on the host name of the server, without hardcoding, one might
+write:
+
+    var server_url = "http://{{host}}:{{ports[http][0]}}/path/to/resource";
+    // Create the actual XHR and so on
+
+The file would then be included as:
+
+ <script src="xhr.js?pipe=sub"></script>
+
+This pipe can also be enabled by using a filename `*.sub.ext`, e.g. the file above could be called `xhr.sub.js`.
+
+### `status`
+
+Used to set the HTTP status of the response, for example:
+
+ example.js?pipe=status(410)
+
+### `header`
+
+Used to add or replace HTTP headers in the response. Takes two or
+three arguments: the header name, the header value, and whether to
+append the header rather than replace an existing one (default:
+False). So, for example, a request for:
+
+ example.html?pipe=header(Content-Type,text/plain)
+
+causes example.html to be returned with a text/plain content type
+whereas:
+
+ example.html?pipe=header(Content-Type,text/plain,True)
+
+causes example.html to be returned with both text/html and
+text/plain Content-Type headers.
+
+If the comma (`,`) or closing parenthesis (`)`) characters appear in the header
+value, those characters must be escaped with a backslash (`\`):
+
+ example?pipe=header(Expires,Thu\,%2014%20Aug%201986%2018:00:00%20GMT)
+
+(Note that the programming environment from which the request is issued may
+require that the backslash character itself be escaped.)
+
+### `slice`
+
+Used to send only part of a response body. Takes the start and,
+optionally, end bytes as arguments, although either can be null to
+indicate the start or end of the file, respectively. So for example:
+
+ example.txt?pipe=slice(10,20)
+
+Would result in a response with a body containing 10 bytes of
+example.txt including byte 10 but excluding byte 20.
+
+ example.txt?pipe=slice(10)
+
+Would cause all bytes from byte 10 of example.txt to be sent, but:
+
+ example.txt?pipe=slice(null,20)
+
+Would send the first 20 bytes of example.txt.
+
+### `trickle`
+
+Note: Using this function will force a connection close.
+
+Used to send the body of a response in chunks with delays. Takes a
+single argument that is a microsyntax consisting of colon-separated
+commands. There are three types of commands:
+
+* Bare numbers represent a number of bytes to send
+
+* Numbers prefixed `d` indicate a delay in seconds
+
+* Numbers prefixed `r` must only appear at the end of the command list, and
+ indicate that the preceding N items must be repeated until there is
+ no more content to send. The number of items to repeat must be even.
+
+In the absence of a repetition command, the entire remainder of the content is
+sent at once when the command list is exhausted. So for example:
+
+ example.txt?pipe=trickle(d1)
+
+causes a 1s delay before sending the entirety of example.txt.
+
+ example.txt?pipe=trickle(100:d1)
+
+causes 100 bytes of example.txt to be sent, followed by a 1s delay,
+and then the remainder of the file to be sent. On the other hand:
+
+ example.txt?pipe=trickle(100:d1:r2)
+
+causes the file to be sent in 100-byte chunks separated by 1s
+delays until the whole content has been sent.
diff --git a/testing/web-platform/tests/docs/writing-tests/submission-process.md b/testing/web-platform/tests/docs/writing-tests/submission-process.md
new file mode 100644
index 0000000000..73161cd170
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/submission-process.md
@@ -0,0 +1,41 @@
+# Submitting Tests
+
+Test submission is via the typical [GitHub workflow][github flow]. For detailed
+guidelines on setup and each of these steps, please refer to the [GitHub Test
+Submission](github-intro) documentation.
+
+* Fork the [GitHub repository][repo].
+
+* Create a feature branch for your changes.
+
+* Make your changes.
+
+* Run the `lint` script in the root of your checkout to detect common
+ mistakes in test submissions. There is [detailed documentation for the lint
+ tool](lint-tool).
+
+* Commit your changes.
+
+* Push your local branch to your GitHub repository.
+
+* Using the GitHub UI, create a Pull Request for your branch.
+
+* When you get review comments, make more commits to your branch to
+ address the comments.
+
+* Once everything is reviewed and all issues are addressed, your pull
+ request will be automatically merged.
+
+We can sometimes take a little while to go through pull requests because we
+have to go through all the tests and ensure that they match the specification
+correctly. But we look at all of them, and take everything that we can.
+
+Hop on to the [mailing list][public-test-infra] or [matrix
+channel][matrix] if you have an issue. There is no need to announce
+your review request; as soon as you make a Pull Request, GitHub will
+inform interested parties.
+
+[repo]: https://github.com/web-platform-tests/wpt/
+[github flow]: https://guides.github.com/introduction/flow/
+[public-test-infra]: https://lists.w3.org/Archives/Public/public-test-infra/
+[matrix]: https://app.element.io/#/room/#wpt:matrix.org
diff --git a/testing/web-platform/tests/docs/writing-tests/test-templates.md b/testing/web-platform/tests/docs/writing-tests/test-templates.md
new file mode 100644
index 0000000000..e8f4bfe77f
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/test-templates.md
@@ -0,0 +1,168 @@
+# Test Templates
+
+This page contains templates for creating tests. The template syntax
+is compatible with several popular editors including TextMate, Sublime
+Text, and emacs' YASnippet mode.
+
+Templates for filenames are also given. In this case `{}` is used to
+delimit text to be replaced and `#` represents a digit.
+
+## Reftests
+
+### HTML test
+
+<!--
+ Syntax highlighting cannot be enabled for the following template because it
+ contains invalid CSS.
+-->
+
+```
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>${1:Test title}</title>
+<link rel="match" href="${2:URL of match}">
+<style>
+ ${3:Test CSS}
+</style>
+<body>
+ ${4:Test content}
+</body>
+```
+
+Filename: `{test-topic}-###.html`
+
+### HTML reference
+
+<!--
+ Syntax highlighting cannot be enabled for the following template because it
+ contains invalid CSS.
+-->
+
+```
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>${1:Reference title}</title>
+<style>
+ ${2:Reference CSS}
+</style>
+<body>
+ ${3:Reference content}
+</body>
+```
+
+Filename: `{description}.html` or `{test-topic}-###-ref.html`
+
+### SVG test
+
+``` xml
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:h="http://www.w3.org/1999/xhtml">
+ <title>${1:Test title}</title>
+ <metadata>
+ <h:link rel="help" href="${2:Specification link}"/>
+ <h:link rel="match" href="${3:URL of match}"/>
+ </metadata>
+ ${4:Test body}
+</svg>
+```
+
+Filename: `{test-topic}-###.svg`
+
+### SVG reference
+
+``` xml
+<svg xmlns="http://www.w3.org/2000/svg">
+ <title>${1:Reference title}</title>
+ ${2:Reference content}
+</svg>
+```
+
+Filename: `{description}.svg` or `{test-topic}-###-ref.svg`
+
+## testharness.js tests
+
+### HTML
+
+``` html
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>${1:Test title}</title>
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<script>
+${2:Test body}
+</script>
+```
+
+Filename: `{test-topic}-###.html`
+
+### HTML with [testdriver automation](testdriver)
+``` html
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>${1:Test title}</title>
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<script src="/resources/testdriver.js"></script>
+<script src="/resources/testdriver-vendor.js"></script>
+
+<script>
+${2:Test body}
+</script>
+```
+
+Filename: `{test-topic}-###.html`
+
+### SVG
+
+``` xml
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:h="http://www.w3.org/1999/xhtml">
+ <title>${1:Test title}</title>
+ <metadata>
+ <h:link rel="help" href="${2:Specification link}"/>
+ </metadata>
+ <h:script src="/resources/testharness.js"/>
+ <h:script src="/resources/testharnessreport.js"/>
+ <script><![CDATA[
+ ${4:Test body}
+ ]]></script>
+</svg>
+```
+
+Filename: `{test-topic}-###.svg`
+
+### Manual Test
+
+#### HTML
+
+``` html
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>${1:Test title}</title>
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<script>
+setup({explicit_timeout: true});
+${2:Test body}
+</script>
+```
+
+Filename: `{test-topic}-###-manual.html`
+
+#### SVG
+
+``` xml
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:h="http://www.w3.org/1999/xhtml">
+ <title>${1:Test title}</title>
+ <metadata>
+ <h:link rel="help" href="${2:Specification link}"/>
+ </metadata>
+ <h:script src="/resources/testharness.js"/>
+ <h:script src="/resources/testharnessreport.js"/>
+ <script><![CDATA[
+ setup({explicit_timeout: true});
+ ${4:Test body}
+ ]]></script>
+</svg>
+```
+
+Filename: `{test-topic}-###-manual.svg`
diff --git a/testing/web-platform/tests/docs/writing-tests/testdriver-extension-tutorial.md b/testing/web-platform/tests/docs/writing-tests/testdriver-extension-tutorial.md
new file mode 100644
index 0000000000..185a27f1a4
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/testdriver-extension-tutorial.md
@@ -0,0 +1,300 @@
+# Testdriver extension tutorial
+Adding new commands to testdriver.js
+
+## Assumptions
+We assume the following in this writeup:
+ - You know what web-platform-tests is, have a working checkout, and can run tests
+ - You know what WebDriver is
+ - You are familiar with JavaScript and Python
+
+## Introduction!
+
+Let's implement window resizing. We can do this via the [Set Window Rect](https://w3c.github.io/webdriver/#set-window-rect) command in WebDriver.
+
+First, we need to think a little about what the API will look like. We will be using WebDriver and Marionette for this, so we can look at those APIs and see that they take x and y coordinates and width and height integers.
+
+The first part of this will be browser-agnostic, but later we will need to implement a specific layer for each browser (here we will do Firefox and Chrome).
+
+## RFC Process
+
+Before we invest any significant work into extending the testdriver.js API, we should check in with other stakeholders of the Web Platform Tests community on the proposed changes, by writing an [RFC](https://github.com/web-platform-tests/rfcs) ("request for comments"). This is especially useful for changes that may affect test authors or downstream users of web-platform-tests.
+
+The [process is given in more detail in the RFC repo](https://github.com/web-platform-tests/rfcs#the-rfc-process), but to start let's send in a PR to the RFCs repo by adding a file named `rfcs/testdriver_set_window_rect.md`:
+
+```md
+# RFC N: Add window resizing to testdriver.js
+(*Note: N should be replaced by the PR number*)
+
+## Summary
+
+Add testdriver.js support for the [Set Window Rect command](https://w3c.github.io/webdriver/#set-window-rect).
+
+## Details
+(*add details here*)
+
+## Risks
+(*add risks here*)
+```
+
+Members of the community will then have the opportunity to comment on our proposed changes, and perhaps suggest improvements to our ideas. If all goes well it will be approved and merged in.
+
+With that said, developing a prototype implementation may help others evaluate the proposal during the RFC process, so let's move on to writing some code.
+
+## Code!
+
+### [resources/testdriver.js](https://github.com/web-platform-tests/wpt/blob/master/resources/testdriver.js)
+
+This is the main entry point that tests use. Here we need to add a function to the `test_driver` object that calls into the `test_driver_internal` object.
+
+```javascript
+window.test_driver = {
+
+ // other commands...
+
+ /**
+ * Triggers browser window to be resized and relocated
+ *
+ * This matches the behaviour of the {@link
+ * https://w3c.github.io/webdriver/#set-window-rect|WebDriver
+ * Set Window Rect command}.
+ *
+ * @param {Integer} x - The x coordinate of the top left of the window
+ * @param {Integer} y - The y coordinate of the top left of the window
+ * @param {Integer} width - The width of the window
+     * @param {Integer} height - The height of the window
+     * @returns {Promise} fulfilled after the window rect is set, or rejected
+     *                    if the WebDriver command errors
+ */
+ set_window_rect: function(x, y, width, height) {
+        return window.test_driver_internal.set_window_rect(x, y, width, height);
+ }
+```
+
+In the same file, let's add an unimplemented default to the internal object. (Make sure to do this if the internal call has different arguments than the external call, especially if it calls multiple internal calls.)
+
+```javascript
+window.test_driver_internal = {
+
+ // other commands...
+
+ set_window_rect: function(x, y, width, height) {
+ return Promise.reject(new Error("unimplemented"))
+ }
+```
+We will leave this unimplemented and override it in another file. Let's do that now!
+
+### [tools/wptrunner/wptrunner/testdriver-extra.js](https://github.com/web-platform-tests/wpt/blob/master/tools/wptrunner/wptrunner/testdriver-extra.js)
+
+This will be the default function called when invoking the test driver commands (sometimes it is overridden by testdriver-vendor.js, but that is outside the scope of this tutorial). In most cases this is just boilerplate:
+
+```javascript
+window.test_driver_internal.set_window_rect = function(x, y, width, height) {
+    return create_action("set_window_rect", {x, y, width, height});
+};
+```
+
+The `create_action` helper function does the heavy lifting of setting up a postMessage to the wptrunner internals as well as returning a promise that will resolve once the call is complete.
+
+Next, this is passed to the executor and protocol in wptrunner. Time to switch to Python!
+
+### [tools/wptrunner/wptrunner/executors/protocol.py](https://github.com/web-platform-tests/wpt/blob/master/tools/wptrunner/wptrunner/executors/protocol.py)
+
+```python
+class SetWindowRectProtocolPart(ProtocolPart):
+ """Protocol part for resizing and changing location of window"""
+ __metaclass__ = ABCMeta
+
+ name = "set_window_rect"
+
+ @abstractmethod
+ def set_window_rect(self, x, y, width, height):
+ """Change the window rect
+
+ :param x: The x coordinate of the top left of the window.
+ :param y: The y coordinate of the top left of the window.
+ :param width: The width of the window.
+ :param height: The height of the window."""
+ pass
+```
+
+Next we create a representation of our new action.
+
+### [tools/wptrunner/wptrunner/executors/actions.py](https://github.com/web-platform-tests/wpt/blob/master/tools/wptrunner/wptrunner/executors/actions.py)
+
+```python
+class SetWindowRectAction(object):
+ def __init__(self, logger, protocol):
+ self.logger = logger
+ self.protocol = protocol
+
+ def __call__(self, payload):
+ x, y, width, height = payload["x"], payload["y"], payload["width"], payload["height"]
+        self.logger.debug("Setting window rect to be: x={}, y={}, width={}, height={}"
+                          .format(x, y, width, height))
+ self.protocol.set_window_rect.set_window_rect(x, y, width, height)
+```
+
+Then add your new class to the `actions = [...]` list at the end of the file.
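+
+For example (the surrounding entries shown here are illustrative):
+
+```python
+actions = [ClickAction,
+           SendKeysAction,
+           # ... other existing actions ...
+           SetWindowRectAction]  # add this!
+```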
+
+Don't forget to write docs in `testdriver.md`.
+Now we write the browser-specific implementations.
+
+### Chrome
+
+We will modify [executorwebdriver.py](https://github.com/web-platform-tests/wpt/blob/master/tools/wptrunner/wptrunner/executors/executorwebdriver.py) and use the WebDriver API.
+
+There isn't too much work to do here; we just need to define a subclass of the protocol part we defined earlier.
+
+```python
+class WebDriverSetWindowRectProtocolPart(SetWindowRectProtocolPart):
+ def setup(self):
+ self.webdriver = self.parent.webdriver
+
+ def set_window_rect(self, x, y, width, height):
+ return self.webdriver.set_window_rect(x, y, width, height)
+```
+
+Make sure to import the protocol part too!
+
+```python
+from .protocol import (BaseProtocolPart,
+ TestharnessProtocolPart,
+ Protocol,
+ SelectorProtocolPart,
+ ClickProtocolPart,
+ SendKeysProtocolPart,
+ {... other protocol parts}
+ SetWindowRectProtocolPart, # add this!
+ TestDriverProtocolPart)
+```
+
+Here we have the setup method which just redefines the webdriver object at this level. The important part is the `set_window_rect` function (and it's important it is named that since we called it that earlier). This will call the WebDriver API for [set window rect](https://w3c.github.io/webdriver/#set-window-rect).
+
+Finally, we just need to tell the WebDriverProtocol to implement this part.
+
+```python
+class WebDriverProtocol(Protocol):
+ implements = [WebDriverBaseProtocolPart,
+ WebDriverTestharnessProtocolPart,
+ WebDriverSelectorProtocolPart,
+ WebDriverClickProtocolPart,
+ WebDriverSendKeysProtocolPart,
+ {... other protocol parts}
+ WebDriverSetWindowRectProtocolPart, # add this!
+ WebDriverTestDriverProtocolPart]
+```
+
+
+### Firefox
+We use the [set window rect](https://firefox-source-docs.mozilla.org/python/marionette_driver.html#marionette_driver.marionette.Marionette.set_window_rect) Marionette command.
+
+We will modify [executormarionette.py](https://github.com/web-platform-tests/wpt/blob/master/tools/wptrunner/wptrunner/executors/executormarionette.py) and use the Marionette Python API.
+
+We have little actual work to do here! We just need to define a subclass of the protocol part we defined earlier.
+
+```python
+class MarionetteSetWindowRectProtocolPart(SetWindowRectProtocolPart):
+ def setup(self):
+ self.marionette = self.parent.marionette
+
+ def set_window_rect(self, x, y, width, height):
+ return self.marionette.set_window_rect(x, y, width, height)
+```
+
+Make sure to import the protocol part too!
+
+```python
+from .protocol import (BaseProtocolPart,
+ TestharnessProtocolPart,
+ Protocol,
+ SelectorProtocolPart,
+ ClickProtocolPart,
+ SendKeysProtocolPart,
+ {... other protocol parts}
+ SetWindowRectProtocolPart, # add this!
+ TestDriverProtocolPart)
+```
+
+Here we have the setup method, which just redefines the marionette object at this level. The important part is the `set_window_rect` function (and it's important it is named that since we called it that earlier). This will call the Marionette API for [set window rect](https://firefox-source-docs.mozilla.org/python/marionette_driver.html#marionette_driver.marionette.Marionette.set_window_rect) (`self.marionette` is a Marionette instance here).
+
+Finally, we just need to tell the MarionetteProtocol to implement this part.
+
+```python
+class MarionetteProtocol(Protocol):
+ implements = [MarionetteBaseProtocolPart,
+ MarionetteTestharnessProtocolPart,
+ MarionettePrefsProtocolPart,
+ MarionetteStorageProtocolPart,
+ MarionetteSelectorProtocolPart,
+ MarionetteClickProtocolPart,
+ MarionetteSendKeysProtocolPart,
+ {... other protocol parts}
+ MarionetteSetWindowRectProtocolPart, # add this
+ MarionetteTestDriverProtocolPart]
+```
+
+### Other Browsers
+
+Other browsers (such as Safari) may use executorselenium, or a completely new executor (such as Servo). For these, you must change the executor in the same way as we did for Chrome and Firefox.
+
+### Write an infra test
+
+Make sure to add a test to `infrastructure/testdriver` :)
+
+Here is some template code!
+
+```html
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>TestDriver set window rect method</title>
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<script src="/resources/testdriver.js"></script>
+<script src="/resources/testdriver-vendor.js"></script>
+
+<script>
+promise_test(async t => {
+ await test_driver.set_window_rect(100, 100, 100, 100);
+ // do something
+});
+</script>
+```
+
+### What about testdriver-vendor.js?
+
+The file [testdriver-vendor.js](https://github.com/web-platform-tests/wpt/blob/master/resources/testdriver-vendor.js) is the equivalent of testdriver-extra.js above, except it is
+run instead of testdriver-extra.js in browser-specific test environments. For example, in [Chromium web_tests](https://cs.chromium.org/chromium/src/third_party/blink/web_tests/).
+
+### What if I need to return a value from my testdriver API?
+
+You can return values from testdriver by just making your Action and Protocol classes use return statements. The data being returned will be serialized into JSON and passed
+back to the test on the resolving promise. The test can then deserialize the JSON to access the return values. Here is an example of a theoretical GetWindowRect API:
+
+```python
+class GetWindowRectAction(object):
+ def __call__(self, payload):
+ return self.protocol.get_window_rect.get_window_rect()
+```
+
+The WebDriver command will return a [WindowRect object](https://w3c.github.io/webdriver/#dfn-window-rect), which is a dictionary with keys `x`, `y`, `width`, and `height`.
+```python
+class WebDriverGetWindowRectProtocolPart(GetWindowRectProtocolPart):
+ def get_window_rect(self):
+ return self.webdriver.get_window_rect()
+```
+
+Then a test can access the return value as follows:
+```html
+<script>
+async_test(t => {
+ test_driver.get_window_rect()
+    .then(t.step_func_done((result) => {
+      assert_equals(result.x, 0);
+      assert_equals(result.y, 10);
+      assert_equals(result.width, 800);
+      assert_equals(result.height, 600);
+    }));
+});
+</script>
+```
diff --git a/testing/web-platform/tests/docs/writing-tests/testdriver.md b/testing/web-platform/tests/docs/writing-tests/testdriver.md
new file mode 100644
index 0000000000..24159e82cc
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/testdriver.md
@@ -0,0 +1,235 @@
+# testdriver.js Automation
+
+```eval_rst
+
+.. contents:: Table of Contents
+ :depth: 3
+ :local:
+ :backlinks: none
+```
+
+testdriver.js provides a means to automate tests that cannot be
+written purely using web platform APIs. Outside of automation
+contexts, it allows human operators to provide expected input
+manually (for operations which may be described in simple terms).
+
+It is currently supported only for [testharness.js](testharness)
+tests.
+
+## Markup ##
+
+The `testdriver.js` and `testdriver-vendor.js` scripts must both be included
+in any document that uses testdriver (and in the top-level test
+document when using testdriver from a different context):
+
+```html
+<script src="/resources/testdriver.js"></script>
+<script src="/resources/testdriver-vendor.js"></script>
+```
+
+## API ##
+
+testdriver.js exposes its API through the `test_driver` variable in
+the global scope.
+
+### User Interaction ###
+
+```eval_rst
+.. js:autofunction:: test_driver.click
+.. js:autofunction:: test_driver.send_keys
+.. js:autofunction:: test_driver.action_sequence
+.. js:autofunction:: test_driver.bless
+```
+
+### Window State ###
+```eval_rst
+.. js:autofunction:: test_driver.minimize_window
+.. js:autofunction:: test_driver.set_window_rect
+```
+
+### Cookies ###
+```eval_rst
+.. js:autofunction:: test_driver.delete_all_cookies
+.. js:autofunction:: test_driver.get_all_cookies
+.. js:autofunction:: test_driver.get_named_cookie
+```
+
+### Permissions ###
+```eval_rst
+.. js:autofunction:: test_driver.set_permission
+```
+
+### Authentication ###
+
+```eval_rst
+.. js:autofunction:: test_driver.add_virtual_authenticator
+.. js:autofunction:: test_driver.remove_virtual_authenticator
+.. js:autofunction:: test_driver.add_credential
+.. js:autofunction:: test_driver.get_credentials
+.. js:autofunction:: test_driver.remove_credential
+.. js:autofunction:: test_driver.remove_all_credentials
+.. js:autofunction:: test_driver.set_user_verified
+```
+
+### Page Lifecycle ###
+```eval_rst
+.. js:autofunction:: test_driver.freeze
+```
+
+### Reporting Observer ###
+```eval_rst
+.. js:autofunction:: test_driver.generate_test_report
+```
+
+### Storage ###
+```eval_rst
+.. js:autofunction:: test_driver.set_storage_access
+
+```
+
+### Accessibility ###
+```eval_rst
+.. js:autofunction:: test_driver.get_computed_label
+.. js:autofunction:: test_driver.get_computed_role
+
+```
+
+### Secure Payment Confirmation ###
+```eval_rst
+.. js:autofunction:: test_driver.set_spc_transaction_mode
+```
+
+### Using test_driver in other browsing contexts ###
+
+Testdriver can be used in browsing contexts (i.e. windows or frames)
+from which it's possible to get a reference to the top-level test
+context. There are two basic approaches depending on whether the
+context in which testdriver is used is same-origin with the test
+context, or different origin.
+
+For same-origin contexts, the context can be passed directly into the
+testdriver API calls. For functions that take an element argument this
+is done implicitly using the owner document of the element. For
+functions that don't take an element, this is done via an explicit
+context argument, which takes a WindowProxy object.
+
+Example:
+```js
+let win = window.open("example.html")
+win.onload = async () => {
+ await test_driver.set_permission({ name: "background-fetch" }, "denied", win);
+}
+```
+
+```eval_rst
+.. js:autofunction:: test_driver.set_test_context
+.. js:autofunction:: test_driver.message_test
+```
+
+For cross-origin cases, passing in the `context` doesn't work because
+of limitations in the WebDriver protocol used to implement testdriver
+in a cross-browser fashion. Instead one may include the testdriver
+scripts directly in the relevant document, and use the
+[`test_driver.set_test_context`](#test_driver.set_test_context) API to
+specify the browsing context containing testharness.js. Commands are
+then sent via `postMessage` to the test context. For convenience there
+is also a [`test_driver.message_test`](#test_driver.message_test)
+function that can be used to send arbitrary messages to the test
+window. For example, in an auxiliary browsing context:
+
+```js
+test_driver.set_test_context(window.opener)
+await test_driver.click(document.getElementsByTagName("button")[0])
+test_driver.message_test("click complete")
+```
+
+The requirement to have a handle to the test window does mean it's
+currently not possible to write tests where such handles can't be
+obtained e.g. in the case of `rel=noopener`.
+
+## Actions ##
+
+### Markup ###
+
+To use the [Actions](#Actions) API `testdriver-actions.js` must be
+included in the document, in addition to `testdriver.js`:
+
+```html
+<script src="/resources/testdriver-actions.js"></script>
+```
+
+### API ###
+
+```eval_rst
+.. js:autoclass:: Actions
+ :members:
+```
+
+
+### Using in other browsing contexts ###
+
+For the actions API, the context can be set using the `setContext`
+method on the builder:
+
+```js
+let actions = new test_driver.Actions()
+ .setContext(frames[0])
+ .keyDown("p")
+ .keyUp("p");
+await actions.send();
+```
+
+Note that if an action uses an element reference, the context will be
+derived from that element, and must match any explicitly set
+context. Using elements in multiple contexts in a single action chain
+is not supported.
+
+### send_keys
+
+Usage: `test_driver.send_keys(element, keys)`
+ * _element_: a DOM Element object
+ * _keys_: string to send to the element
+
+This function causes the string _keys_ to be sent to the target
+element (an `Element` object), potentially scrolling the document to
+make it possible to send keys. It returns a promise that resolves
+after the keys have been sent, or rejects if the keys cannot be sent
+to the element.
+
+This works with elements in other frames/windows as long as they are
+same-origin with the test, and the test does not depend on the
+window.name property remaining unset on the target window.
+
+Note that if the element that the keys need to be sent to does not have
+a unique ID, the document must not have any DOM mutations made
+between the function being called and the promise settling.
+
+To send special keys, one must send the respective key's codepoint. Since this uses the WebDriver protocol, you can find a [list for code points to special keys in the spec](https://w3c.github.io/webdriver/#keyboard-actions).
+For example, to send the tab key you would send "\uE004".
+
+_Note: these special-key codepoints are not necessarily what you would expect. For example, <kbd>Esc</kbd> is the Private Use Area codepoint `\uE00C`, not the `\u001B` Escape character from ASCII._
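+
+A short usage sketch (the element ID and expected value are hypothetical):
+
+```js
+promise_test(async () => {
+  const input = document.getElementById("text-input");
+  // Type "wpt", then press Tab (\uE004 is the WebDriver Tab codepoint).
+  await test_driver.send_keys(input, "wpt\uE004");
+  assert_equals(input.value, "wpt");
+}, "send_keys example");
+```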
+
+[activation]: https://html.spec.whatwg.org/multipage/interaction.html#activation
+
+### set_permission
+
+Usage: `test_driver.set_permission(descriptor, state, context=null)`
+ * _descriptor_: a
+ [PermissionDescriptor](https://w3c.github.io/permissions/#dictdef-permissiondescriptor)
+ or derived object
+ * _state_: a
+ [PermissionState](https://w3c.github.io/permissions/#enumdef-permissionstate)
+ value
+ * _context_ (optional): a WindowProxy for the browsing context in which to perform the call
+
+This function causes permission requests and queries for the status of a
+certain permission type (e.g. "push", or "background-fetch") to always
+return _state_. It returns a promise that resolves after the permission has
+been set to be overridden with _state_.
+
+Example:
+
+``` js
+await test_driver.set_permission({ name: "background-fetch" }, "denied");
+await test_driver.set_permission({ name: "push", userVisibleOnly: true }, "granted");
+```
diff --git a/testing/web-platform/tests/docs/writing-tests/testharness-api.md b/testing/web-platform/tests/docs/writing-tests/testharness-api.md
new file mode 100644
index 0000000000..339815c5ff
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/testharness-api.md
@@ -0,0 +1,839 @@
+# testharness.js API
+
+```eval_rst
+
+.. contents:: Table of Contents
+ :depth: 3
+ :local:
+ :backlinks: none
+```
+
+testharness.js provides a framework for writing testcases. It is intended to
+provide a convenient API for making common assertions, and to work for
+testing both synchronous and asynchronous DOM features in a way that
+promotes clear, robust tests.
+
+## Markup ##
+
+The test harness script can be used from HTML or SVG documents and workers.
+
+From an HTML or SVG document, start by importing both `testharness.js` and
+`testharnessreport.js` scripts into the document:
+
+```html
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+```
+
+Refer to the [Web Workers](#web-workers) section for details and an example on
+testing within a web worker.
+
+Within each file one may define one or more tests. Each test is atomic in the
+sense that a single test has a single status (`PASS`/`FAIL`/`TIMEOUT`/`NOTRUN`).
+Within each test one may have a number of asserts. The test fails at the first
+failing assert, and the remainder of the test is (typically) not run.
+
+**Note:** From the point of view of a test harness, each document
+using testharness.js is a single "test" and each js-defined
+[`Test`](#Test) is referred to as a "subtest".
+
+By default tests must be created before the load event fires. For ways
+to create tests after the load event, see [determining when all tests
+are complete](#determining-when-all-tests-are-complete).
+
+### Harness Timeout ###
+
+Execution of tests on a page is subject to a global timeout. By
+default this is 10s, but a test runner may set a timeout multiplier
+which alters the value according to the requirements of the test
+environment (e.g. to give a longer timeout for debug builds).
+
+Long-running tests may opt into a longer timeout by providing a
+`<meta>` element:
+
+```html
+<meta name="timeout" content="long">
+```
+
+By default this increases the timeout to 60s, again subject to the
+timeout multiplier.
+
+Tests which define a large number of subtests may need to use the
+[variant](testharness.html#specifying-test-variants) feature to break
+a single test document into several chunks that complete inside the
+timeout.
+
+Occasionally tests may have a race between the harness timing out and
+a particular test failing; typically when the test waits for some
+event that never occurs. In this case it is possible to use
+[`Test.force_timeout()`](#Test.force_timeout) in place of
+[`assert_unreached()`](#assert_unreached), to immediately fail the
+test but with a status of `TIMEOUT`. This should only be used as a
+last resort when it is not possible to make the test reliable in some
+other way.
+
+## Defining Tests ##
+
+### Synchronous Tests ###
+
+```eval_rst
+.. js:autofunction:: <anonymous>~test
+ :short-name:
+```
+A trivial test for the DOM [`hasFeature()`](https://dom.spec.whatwg.org/#dom-domimplementation-hasfeature)
+method (which is defined to always return true) would be:
+
+```js
+test(function() {
+ assert_true(document.implementation.hasFeature());
+}, "hasFeature() with no arguments")
+```
+
+### Asynchronous Tests ###
+
+Testing asynchronous features is somewhat more complex since the
+result of a test may depend on one or more events or other
+callbacks. The API provided for testing these features is intended to
+be rather low-level but applicable to many situations.
+
+```eval_rst
+.. js:autofunction:: async_test
+
+```
+
+Create a [`Test`](#Test):
+
+```js
+var t = async_test("DOMContentLoaded")
+```
+
+Code is run as part of the test by calling the [`step`](#Test.step)
+method with a function containing the test
+[assertions](#assert-functions):
+
+```js
+document.addEventListener("DOMContentLoaded", function(e) {
+ t.step(function() {
+ assert_true(e.bubbles, "bubbles should be true");
+ });
+});
+```
+
+When all the steps are complete, the [`done`](#Test.done) method must
+be called:
+
+```js
+t.done();
+```
+
+`async_test` can also take a function as its first argument. This
+function is called with the test object as both its `this` object and
+first argument. The above example can be rewritten as:
+
+```js
+async_test(function(t) {
+ document.addEventListener("DOMContentLoaded", function(e) {
+ t.step(function() {
+ assert_true(e.bubbles, "bubbles should be true");
+ });
+ t.done();
+ });
+}, "DOMContentLoaded");
+```
+
+In many cases it is convenient to run a step in response to an event or a
+callback. A convenient way of doing this is through the `step_func` method,
+which returns a function that, when called, runs a test step. For example:
+
+```js
+document.addEventListener("DOMContentLoaded", t.step_func(function(e) {
+ assert_true(e.bubbles, "bubbles should be true");
+ t.done();
+}));
+```
+
+As a further convenience, the `step_func` that calls
+[`done`](#Test.done) can instead use
+[`step_func_done`](#Test.step_func_done), as follows:
+
+```js
+document.addEventListener("DOMContentLoaded", t.step_func_done(function(e) {
+ assert_true(e.bubbles, "bubbles should be true");
+}));
+```
+
+For asynchronous callbacks that should never execute,
+[`unreached_func`](#Test.unreached_func) can be used. For example:
+
+```js
+document.documentElement.addEventListener("DOMContentLoaded",
+ t.unreached_func("DOMContentLoaded should not be fired on the document element"));
+```
+
+**Note:** testharness.js doesn't impose any scheduling on async
+tests; they run whenever the step functions are invoked. This means
+multiple tests in the same global can be running concurrently and must
+take care not to interfere with each other.
+
+### Promise Tests ###
+
+```eval_rst
+.. js:autofunction:: promise_test
+```
+
+`test_function` is a function that receives a new [Test](#Test) as an
+argument. It must return a promise. The test completes when the
+returned promise settles. The test fails if the returned promise
+rejects.
+
+E.g.:
+
+```js
+function foo() {
+ return Promise.resolve("foo");
+}
+
+promise_test(function() {
+ return foo()
+ .then(function(result) {
+ assert_equals(result, "foo", "foo should return 'foo'");
+ });
+}, "Simple example");
+```
+
+In the example above, `foo()` returns a Promise that resolves with the string
+"foo". The `test_function` passed into `promise_test` invokes `foo` and attaches
+a resolve reaction that verifies the returned value.
+
+Note that in the promise chain constructed in `test_function`
+assertions don't need to be wrapped in [`step`](#Test.step) or
+[`step_func`](#Test.step_func) calls.
+
+It is possible to mix promise tests with callback functions using
+[`step`](#Test.step). However this tends to produce confusing tests;
+it's recommended to convert any asynchronous behaviour into part of
+the promise chain. For example, instead of
+
+```js
+promise_test(t => {
+ return new Promise(resolve => {
+ window.addEventListener("DOMContentLoaded", t.step_func(event => {
+ assert_true(event.bubbles, "bubbles should be true");
+ resolve();
+ }));
+ });
+}, "DOMContentLoaded");
+```
+
+Try,
+
+```js
+promise_test(() => {
+ return new Promise(resolve => {
+ window.addEventListener("DOMContentLoaded", resolve);
+ }).then(event => {
+ assert_true(event.bubbles, "bubbles should be true");
+ });
+}, "DOMContentLoaded");
+```
+
+**Note:** Unlike asynchronous tests, testharness.js queues promise
+tests so the next test won't start running until after the previous
+promise test finishes. [When mixing promise-based logic and async
+steps](https://github.com/web-platform-tests/wpt/pull/17924), the next
+test may begin to execute before the returned promise has settled. Use
+[add_cleanup](#cleanup) to register any necessary cleanup actions such
+as resetting global state that need to happen consistently before the
+next test starts.
+
+To test that a promise rejects with a specified exception see [promise
+rejection].
+
+### Single Page Tests ###
+
+Sometimes, particularly when dealing with asynchronous behaviour,
+having exactly one test per page is desirable, and the overhead of
+wrapping everything in functions for isolation becomes
+burdensome. For these cases `testharness.js` supports "single page
+tests".
+
+In order for a test to be interpreted as a single page test, it should set the
+`single_test` [setup option](#setup) to `true`.
+
+```html
+<!doctype html>
+<title>Basic document.body test</title>
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<body>
+ <script>
+ setup({ single_test: true });
+ assert_equals(document.body, document.getElementsByTagName("body")[0])
+ done()
+ </script>
+```
+
+The test title for single page tests is always taken from `document.title`.
+
+## Making assertions ##
+
+Functions for making assertions start `assert_`. The full list of
+asserts available is documented in the [asserts](#assert-functions)
+section. The general signature is:
+
+```js
+assert_something(actual, expected, description)
+```
+
+although not all assertions precisely match this pattern
+e.g. [`assert_true`](#assert_true) only takes `actual` and
+`description` as arguments.
+
+The description parameter is used to present more useful error
+messages when a test fails.
+
+When assertions are violated, they throw an
+[`AssertionError`](#AssertionError) exception. This interrupts test
+execution, so subsequent statements are not evaluated. A given test
+can only fail due to one such violation, so if you would like to
+assert multiple behaviors independently, you should use multiple
+tests.
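+
+For instance, rather than asserting two unrelated behaviours in one test,
+prefer one test per behaviour so that each is reported independently (a simple
+illustrative sketch):
+
+```js
+test(() => {
+  assert_true("platform".startsWith("plat"), "prefix should match");
+}, "String.prototype.startsWith");
+
+test(() => {
+  assert_true("platform".endsWith("form"), "suffix should match");
+}, "String.prototype.endsWith");
+```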
+
+**Note:** Unless the test is a [single page test](#single-page-tests),
+assert functions must only be called in the context of a
+[`Test`](#Test).
+
+### Optional Features ###
+
+If a test depends on a specification or specification feature that is
+OPTIONAL (in the [RFC 2119
+sense](https://tools.ietf.org/html/rfc2119)),
+[`assert_implements_optional`](#assert_implements_optional) can be
+used to indicate that failing the test does not mean violating a web
+standard. For example:
+
+```js
+async_test((t) => {
+ const video = document.createElement("video");
+ assert_implements_optional(video.canPlayType("video/webm"));
+ video.src = "multitrack.webm";
+ // test something specific to multiple audio tracks in a WebM container
+ t.done();
+}, "WebM with multiple audio tracks");
+```
+
+A failing [`assert_implements_optional`](#assert_implements_optional)
+call is reported as a status of `PRECONDITION_FAILED` for the
+subtest. This unusual status code is a legacy leftover; see the [RFC
+that introduced
+`assert_implements_optional`](https://github.com/web-platform-tests/rfcs/pull/48).
+
+[`assert_implements_optional`](#assert_implements_optional) can also
+be used during [test setup](#setup). For example:
+
+```js
+setup(() => {
+ assert_implements_optional("optionalfeature" in document.body,
+ "'optionalfeature' event supported");
+});
+async_test(() => { /* test #1 waiting for "optionalfeature" event */ });
+async_test(() => { /* test #2 waiting for "optionalfeature" event */ });
+```
+
+A failing [`assert_implements_optional`](#assert_implements_optional)
+during setup is reported as a status of `PRECONDITION_FAILED` for the
+test, and the subtests will not run.
+
+See also the `.optional` [file name convention](file-names.md), which may be
+preferable if the entire test is optional.
+
+## Testing Across Globals ##
+
+### Consolidating tests from other documents ###
+
+```eval_rst
+.. js:autofunction:: fetch_tests_from_window
+```
+
+**Note:** By default any markup file referencing `testharness.js` will
+be detected as a test. To avoid this, it must be put in a `support`
+directory.
+
+The current test suite will not report completion until all fetched
+tests are complete, and errors in the child contexts will result in
+failures for the suite in the current context.
+
+Here's an example that uses `window.open`.
+
+`support/child.html`:
+
+```html
+<!DOCTYPE html>
+<html>
+<title>Child context test(s)</title>
+<head>
+ <script src="/resources/testharness.js"></script>
+</head>
+<body>
+ <div id="log"></div>
+ <script>
+ test(function(t) {
+ assert_true(true, "true is true");
+ }, "Simple test");
+ </script>
+</body>
+</html>
+```
+
+`test.html`:
+
+```html
+<!DOCTYPE html>
+<html>
+<title>Primary test context</title>
+<head>
+ <script src="/resources/testharness.js"></script>
+ <script src="/resources/testharnessreport.js"></script>
+</head>
+<body>
+ <div id="log"></div>
+ <script>
+ var child_window = window.open("support/child.html");
+ fetch_tests_from_window(child_window);
+ </script>
+</body>
+</html>
+```
+
+### Web Workers ###
+
+```eval_rst
+.. js:autofunction:: fetch_tests_from_worker
+```
+
+The `testharness.js` script can be used from within [dedicated workers, shared
+workers](https://html.spec.whatwg.org/multipage/workers.html) and [service
+workers](https://w3c.github.io/ServiceWorker/).
+
+Testing from a worker script is different from testing from an HTML document in
+several ways:
+
+* Workers have no reporting capability since they are running in the background.
+ Hence they rely on `testharness.js` running in a companion client HTML document
+ for reporting.
+
+* Shared and service workers do not have a unique client document
+ since there could be more than one document that communicates with
+ these workers. So a client document needs to explicitly connect to a
+ worker and fetch test results from it using
+ [`fetch_tests_from_worker`](#fetch_tests_from_worker). This is true
+ even for a dedicated worker. Once connected, the individual tests
+ running in the worker (or those that have already run to completion)
+ will be automatically reflected in the client document.
+
+* The client document controls the timeout of the tests. All worker
+ scripts act as if they were started with the
+ [`explicit_timeout`](#setup) option.
+
+* Dedicated and shared workers don't have an equivalent of an `onload`
+ event. Thus the test harness has no way to know when all tests have
+ completed (see [Determining when all tests are
+ complete](#determining-when-all-tests-are-complete)). So these
+ worker tests behave as if they were started with the
+ [`explicit_done`](#setup) option. Service workers depend on the
+ [oninstall](https://w3c.github.io/ServiceWorker/#service-worker-global-scope-install-event)
+ event and don't require an explicit [`done`](#done) call.
+
+Here's an example that uses a dedicated worker.
+
+`worker.js`:
+
+```js
+importScripts("/resources/testharness.js");
+
+test(function(t) {
+ assert_true(true, "true is true");
+}, "Simple test");
+
+// done() is needed because the testharness is running as if explicit_done
+// was specified.
+done();
+```
+
+`test.html`:
+
+```html
+<!DOCTYPE html>
+<title>Simple test</title>
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<div id="log"></div>
+<script>
+
+fetch_tests_from_worker(new Worker("worker.js"));
+
+</script>
+```
+
+
+`fetch_tests_from_worker` returns a promise that resolves once all the remote
+tests have completed. This is useful if you're importing tests from multiple
+workers and want to ensure they run in series:
+
+```js
+(async function() {
+ await fetch_tests_from_worker(new Worker("worker-1.js"));
+ await fetch_tests_from_worker(new Worker("worker-2.js"));
+})();
+```
+
+## Cleanup ##
+
+Occasionally tests may create state that will persist beyond the test
+itself. In order to ensure that tests are independent, such state
+should be cleaned up once the test has a result. This can be achieved
+by adding cleanup callbacks to the test. Such callbacks are registered
+using the [`add_cleanup`](#Test.add_cleanup) method. All registered
+callbacks will be run as soon as the test result is known. For
+example:
+
+```js
+ test(function() {
+ var element = document.createElement("div");
+ element.setAttribute("id", "null");
+ document.body.appendChild(element);
+ this.add_cleanup(function() { document.body.removeChild(element) });
+ assert_equals(document.getElementById(null), element);
+ }, "Calling document.getElementById with a null argument.");
+```
+
+If the test was created using the [`promise_test`](#promise_test) API,
+then cleanup functions may optionally return a Promise and delay the
+completion of the test until all cleanup promises have settled.
+
+All callbacks will be invoked synchronously; tests that require more
+complex cleanup behavior should manage execution order explicitly. If
+any of the eventual values are rejected, the test runner will report
+an error.
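+
+For example, a cleanup function registered on a `promise_test` can return the
+promise from an asynchronous teardown step. A minimal sketch, assuming the test
+runs in a secure context where the Cache Storage API is available (the cache
+name is purely illustrative):
+
+```js
+promise_test(function(t) {
+  const cacheName = "testharness-docs-example";  // illustrative name only
+  t.add_cleanup(function() {
+    // Returning a promise delays completion of the test until the cache
+    // has actually been removed.
+    return caches.delete(cacheName);
+  });
+  return caches.open(cacheName).then(function(cache) {
+    assert_true(cache instanceof Cache, "a cache was created");
+  });
+}, "Cleanup functions in promise_test may return a promise");
+```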
+
+### AbortSignal support ###
+
+[`Test.get_signal`](#Test.get_signal) gives an AbortSignal that is aborted when
+the test finishes. This can be useful when dealing with AbortSignal-supported
+APIs.
+
+```js
+promise_test(async t => {
+  // get_signal() throws when the user agent does not support AbortSignal.
+  const signal = t.get_signal();
+  const event = await new Promise(resolve => {
+    document.body.addEventListener("click", resolve, { once: true, signal });
+    document.body.click();
+  });
+  assert_equals(event.type, "click");
+}, "Listening for a click with the test's AbortSignal");
+```
+
+## Timers in Tests ##
+
+In general the use of timers (i.e. `setTimeout`) in tests is
+discouraged because they are an observed source of instability when
+tests run in CI. In particular, if a test should fail when
+something doesn't happen, it is good practice to simply let the test
+run to the full timeout rather than trying to guess an appropriate
+shorter timeout to use.
+
+In other cases it may be necessary to use a timeout (e.g., for a test
+that only passes if some event is *not* fired). In this case it is
+*not* permitted to use the standard `setTimeout` function. Instead use
+either [`Test.step_wait()`](#Test.step_wait),
+[`Test.step_wait_func()`](#Test.step_wait_func), or
+[`Test.step_timeout()`](#Test.step_timeout). [`Test.step_wait()`](#Test.step_wait)
+and [`Test.step_wait_func()`](#Test.step_wait_func) are preferred
+when there's a specific condition that needs to be met for the test to
+proceed. [`Test.step_timeout()`](#Test.step_timeout) is preferred in other cases.
+
+Note that timeouts generally need to be a few seconds long in order to
+produce stable results in all test environments.
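+
+For example, a test that passes only if some event is *not* fired could be
+written roughly as follows (a sketch; the element and event are placeholders,
+and `step_timeout` is used instead of `setTimeout` so the harness can apply
+its timeout multiplier):
+
+```js
+async_test(function(t) {
+  const target = document.createElement("div");
+  document.body.appendChild(target);
+  t.add_cleanup(function() { target.remove(); });
+
+  // Fail immediately if the unwanted event ever fires.
+  target.addEventListener("animationstart",
+                          t.unreached_func("no animation should start"));
+
+  // Wait a conservative few seconds before declaring success.
+  t.step_timeout(function() { t.done(); }, 3000);
+}, "No animation starts on an element without animation styles");
+```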
+
+For [single page tests](#single-page-tests),
+[step_timeout](#step_timeout) is also available as a global function.
+
+```eval_rst
+
+.. js:autofunction:: <anonymous>~step_timeout
+ :short-name:
+```
+
+## Harness Configuration ##
+
+### Setup ###
+
+<!-- sphinx-js doesn't support documenting types so we have to copy in
+ the SettingsObject documentation by hand -->
+
+```eval_rst
+.. js:autofunction:: setup
+
+.. js:autofunction:: promise_setup
+
+:SettingsObject:
+
+ :Properties:
+ - **single_test** (*bool*) - Use the single-page-test mode. In this
+ mode the Document represents a single :js:class:`Test`. Asserts may be
+ used directly without requiring :js:func:`Test.step` or similar wrappers,
+ and any exceptions set the status of the test rather than the status
+ of the harness.
+
+ - **allow_uncaught_exception** (*bool*) - don't treat an
+ uncaught exception as an error; needed when e.g. testing the
+ `window.onerror` handler.
+
+ - **explicit_done** (*bool*) - Wait for a call to :js:func:`done`
+ before declaring all tests complete (this is always true for
+ single-page tests).
+
+    - **hide_test_state** (*bool*) - Hide the test state output while
+      the test is running. This is helpful when the test state output
+      may interfere with the test results.
+
+ - **explicit_timeout** (*bool*) - disable file timeout; only
+ stop waiting for results when the :js:func:`timeout` function is
+ called. This should typically only be set for manual tests, or
+ by a test runner that provides its own timeout mechanism.
+
+ - **timeout_multiplier** (*Number*) - Multiplier to apply to
+ timeouts. This should only be set by a test runner.
+
+ - **output** (*bool*) - (default: `true`) Whether to output a table
+ containing a summary of test results. This should typically
+ only be set by a test runner, and is typically set to false
+ for performance reasons when running in CI.
+
+    - **output_document** (*Document*) - The document to which
+      results should be logged. By default this is the current
+      document, but it could be an ancestor document in some cases,
+      e.g. an SVG test loaded in an HTML wrapper.
+
+ - **debug** (*bool*) - (default: `false`) Whether to output
+ additional debugging information such as a list of
+ asserts. This should typically only be set by a test runner.
+```
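+
+For instance, a page that deliberately triggers `window.onerror` might
+configure the harness as follows (a minimal sketch using the settings
+documented above):
+
+```js
+setup({
+  // The uncaught exception is the behaviour under test, so it must not be
+  // treated as a harness error.
+  allow_uncaught_exception: true
+});
+
+var onerrorTest = async_test("window.onerror is invoked for an uncaught exception");
+window.onerror = onerrorTest.step_func_done(function(message) {
+  assert_true(message.length > 0, "an error message is reported");
+});
+
+// This uncaught exception would otherwise fail the whole file.
+throw new Error("boom");
+```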
+
+### Output ###
+
+If the file containing the tests is an HTML file, a table containing
+the test results will be added to the document after all tests have
+run. By default this will be added to a `div` element with `id=log` if
+it exists, or a new `div` element appended to `document.body` if it
+does not. This can be suppressed by setting the [`output`](#setup)
+setting to `false`.
+
+If [`output`](#setup) is `true`, the test will, by default, report
+progress during execution. In some cases this progress report will
+invalidate the test. In this case the test should set the
+[`hide_test_state`](#setup) setting to `true`.
+
+
+### Determining when all tests are complete ###
+
+By default, for tests running in a `Window` global scope that are not
+configured as a [single page test](#single-page-tests), the test
+harness will assume there are no more results to come when:
+
+ 1. There are no `Test` objects that have been created but not completed
+ 2. The load event on the document has fired
+
+For single page tests, or when the `explicit_done` property has been
+set in the [setup](#setup), the [`done`](#done) function must be used.
+
+```eval_rst
+
+.. js:autofunction:: <anonymous>~done
+ :short-name:
+.. js:autofunction:: <anonymous>~timeout
+ :short-name:
+```
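+
+For example, a page using the `explicit_done` setting might look like this
+(a sketch; `/common/blank.html` is used only as a convenient same-origin
+resource):
+
+```js
+setup({ explicit_done: true });
+
+fetch("/common/blank.html").then(function(response) {
+  test(function() {
+    assert_equals(response.status, 200);
+  }, "resource fetched after harness setup");
+  // Signal that no further tests will be added.
+  done();
+});
+```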
+
+Dedicated and shared workers don't have an event that corresponds to
+the `load` event in a document. Therefore these worker tests always
+behave as if the `explicit_done` property is set to true (unless they
+are defined using [the "multi-global"
+pattern](testharness.html#multi-global-tests)). Service workers depend
+on the
+[install](https://w3c.github.io/ServiceWorker/#service-worker-global-scope-install-event)
+event which is fired following the completion of [running the
+worker](https://html.spec.whatwg.org/multipage/workers.html#run-a-worker).
+
+## Reporting API ##
+
+### Callbacks ###
+
+The framework provides callbacks corresponding to 4 events:
+
+ * `start` - triggered when the first Test is created
+ * `test_state` - triggered when a test state changes
+ * `result` - triggered when a test result is received
+ * `complete` - triggered when all results are received
+
+```eval_rst
+.. js:autofunction:: add_start_callback
+.. js:autofunction:: add_test_state_callback
+.. js:autofunction:: add_result_callback
+.. js:autofunction:: add_completion_callback
+.. js:autoclass:: TestsStatus
+ :members:
+.. js:autoclass:: AssertRecord
+ :members:
+```
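+
+A custom reporting script might use these hooks roughly as follows (a sketch;
+the callback arguments are instances of the classes documented above):
+
+```js
+add_result_callback(function(test) {
+  // Invoked once per subtest as soon as its result is known.
+  console.log("subtest:", test.name, "status:", test.status);
+});
+
+add_completion_callback(function(tests, harness_status) {
+  // Invoked once, after every subtest has reported a result.
+  console.log("harness status:", harness_status.status,
+              "total subtests:", tests.length);
+});
+```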
+
+### External API ###
+
+In order to collect the results of multiple pages containing tests, the test
+harness will, when loaded in a nested browsing context, attempt to call
+certain functions in each ancestor and opener browsing context:
+
+ * start - `start_callback`
+ * test\_state - `test_state_callback`
+ * result - `result_callback`
+ * complete - `completion_callback`
+
+These are given the same arguments as the corresponding internal callbacks
+described above.
+
+The test harness will also send messages using cross-document
+messaging to each ancestor and opener browsing context. Since it uses the
+wildcard keyword (\*), cross-origin communication is enabled and script on
+different origins can collect the results.
+
+This API follows conventions similar to those described above, only slightly
+modified to accommodate the message event API. Each message sent by the harness
+is a single plain object, available as the `data` property of the event
+object. These objects are structured as follows:
+
+ * start - `{ type: "start" }`
+ * test\_state - `{ type: "test_state", test: Test }`
+ * result - `{ type: "result", test: Test }`
+ * complete - `{ type: "complete", tests: [Test, ...], status: TestsStatus }`
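+
+For example, a document that embeds test pages in an iframe could collect
+results from these messages roughly as follows (a sketch; in the serialised
+`Test` objects a `status` of `0` corresponds to `PASS`):
+
+```js
+window.addEventListener("message", function(event) {
+  var data = event.data;
+  if (!data || data.type !== "complete") {
+    return;
+  }
+  // Each entry in data.tests is a cloned Test object.
+  var failures = data.tests.filter(function(test) { return test.status !== 0; });
+  console.log(failures.length + " failing subtest(s) out of " + data.tests.length);
+});
+```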
+
+
+## Assert Functions ##
+
+```eval_rst
+.. js:autofunction:: assert_true
+.. js:autofunction:: assert_false
+.. js:autofunction:: assert_equals
+.. js:autofunction:: assert_not_equals
+.. js:autofunction:: assert_in_array
+.. js:autofunction:: assert_array_equals
+.. js:autofunction:: assert_approx_equals
+.. js:autofunction:: assert_array_approx_equals
+.. js:autofunction:: assert_less_than
+.. js:autofunction:: assert_greater_than
+.. js:autofunction:: assert_between_exclusive
+.. js:autofunction:: assert_less_than_equal
+.. js:autofunction:: assert_greater_than_equal
+.. js:autofunction:: assert_between_inclusive
+.. js:autofunction:: assert_regexp_match
+.. js:autofunction:: assert_class_string
+.. js:autofunction:: assert_own_property
+.. js:autofunction:: assert_not_own_property
+.. js:autofunction:: assert_inherits
+.. js:autofunction:: assert_idl_attribute
+.. js:autofunction:: assert_readonly
+.. js:autofunction:: assert_throws_dom
+.. js:autofunction:: assert_throws_js
+.. js:autofunction:: assert_throws_exactly
+.. js:autofunction:: assert_implements
+.. js:autofunction:: assert_implements_optional
+.. js:autofunction:: assert_unreached
+.. js:autofunction:: assert_any
+
+```
+
+Assertions fail by throwing an `AssertionError`:
+
+```eval_rst
+.. js:autoclass:: AssertionError
+```
+
+### Promise Rejection ###
+
+```eval_rst
+.. js:autofunction:: promise_rejects_dom
+.. js:autofunction:: promise_rejects_js
+.. js:autofunction:: promise_rejects_exactly
+```
+
+`promise_rejects_dom`, `promise_rejects_js`, and `promise_rejects_exactly` can
+be used to test Promises that need to reject.
+
+Here's an example where the `bar()` function returns a Promise that rejects
+with a TypeError:
+
+```js
+function bar() {
+ return Promise.reject(new TypeError());
+}
+
+promise_test(function(t) {
+ return promise_rejects_js(t, TypeError, bar());
+}, "Another example");
+```
+
+## Test Objects ##
+
+```eval_rst
+
+.. js:autoclass:: Test
+ :members:
+```
+
+## Helpers ##
+
+### Waiting for events ###
+
+```eval_rst
+.. js:autoclass:: EventWatcher
+ :members:
+```
+
+Here's an example of how to use `EventWatcher`:
+
+```js
+var t = async_test("Event order on animation start");
+
+var animation = watchedNode.getAnimations()[0];
+var eventWatcher = new EventWatcher(t, watchedNode, ['animationstart',
+ 'animationiteration',
+ 'animationend']);
+
+eventWatcher.wait_for('animationstart').then(t.step_func(function() {
+ assertExpectedStateAtStartOfAnimation();
+ animation.currentTime = END_TIME; // skip to end
+ // We expect two animationiteration events then an animationend event on
+ // skipping to the end of the animation.
+ return eventWatcher.wait_for(['animationiteration',
+ 'animationiteration',
+ 'animationend']);
+})).then(t.step_func(function() {
+ assertExpectedStateAtEndOfAnimation();
+ t.done();
+}));
+```
+
+### Utility Functions ###
+
+```eval_rst
+.. js:autofunction:: format_value
+```
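+
+For instance, `format_value` can be used to build readable assertion
+descriptions when looping over inputs (a small sketch):
+
+```js
+test(function() {
+  ["", "\0", "a\tb"].forEach(function(input) {
+    // format_value quotes strings and escapes control characters, so the
+    // failing input is easy to identify in the results table.
+    assert_equals(typeof input, "string",
+                  "input " + format_value(input) + " should be a string");
+  });
+}, "format_value in assertion descriptions");
+```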
+
+## Deprecated APIs ##
+
+```eval_rst
+.. js:autofunction:: generate_tests
+.. js:autofunction:: on_event
+```
+
+
diff --git a/testing/web-platform/tests/docs/writing-tests/testharness-tutorial.md b/testing/web-platform/tests/docs/writing-tests/testharness-tutorial.md
new file mode 100644
index 0000000000..6689ad5341
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/testharness-tutorial.md
@@ -0,0 +1,395 @@
+# testharness.js tutorial
+
+<!--
+Note to maintainers:
+
+This tutorial is designed to be an authentic depiction of the WPT contribution
+experience. It is not intended to be comprehensive; its scope is intentionally
+limited in order to demonstrate authoring a complete test without overwhelming
+the reader with features. Because typical WPT usage patterns change over time,
+this should be updated periodically; please weigh extensions against the
+demotivating effect that a lengthy guide can have on new contributors.
+-->
+
+Let's say you've discovered that WPT doesn't have any tests for how [the Fetch
+API](https://fetch.spec.whatwg.org/) sets cookies from an HTTP response. This
+tutorial will guide you through the process of writing a test for the
+web-platform, verifying it, and submitting it back to WPT. Although it includes
+some very brief instructions on using git, you can find more guidance in [the
+tutorial for git and GitHub](github-intro).
+
+WPT's testharness.js is a framework designed to help people write tests for the
+web platform's JavaScript APIs. [The testharness.js reference
+page](testharness) describes the framework in the abstract, but for the
+purposes of this guide, we'll only consider the features we need to test the
+behavior of `fetch`.
+
+```eval_rst
+.. contents:: Table of Contents
+ :depth: 3
+ :local:
+ :backlinks: none
+```
+
+## Setting up your workspace
+
+To make sure you have the latest code, first type the following into a terminal
+located in the root of the WPT git repository:
+
+ $ git fetch git@github.com:web-platform-tests/wpt.git
+
+Next, we need a place to store the change set we're about to author. Here's how
+to create a new git branch named `fetch-cookie` from the revision of WPT we
+just downloaded:
+
+ $ git checkout -b fetch-cookie FETCH_HEAD
+
+The tests we're going to write will rely on special abilities of the WPT
+server, so you'll also need to [configure your system to run
+WPT](../running-tests/from-local-system) before you continue.
+
+With that out of the way, you're ready to create your patch.
+
+## Writing a subtest
+
+<!--
+Goals of this section:
+
+- demonstrate asynchronous testing with Promises
+- motivate non-trivial integration with WPT server
+- use web technology likely to be familiar to web developers
+- use web technology likely to be supported in the reader's browser
+-->
+
+The first thing we'll do is configure the server to respond to a certain request
+by setting a cookie. Once that's done, we'll be able to make the request with
+`fetch` and verify that it interpreted the response correctly.
+
+We'll configure the server with an "asis" file. That's the WPT convention for
+controlling the contents of an HTTP response. [You can read more about it
+here](server-features), but for now, we'll save the following text into a file
+named `set-cookie.asis` in the `fetch/api/basic/` directory of WPT:
+
+```
+HTTP/1.1 204 No Content
+Set-Cookie: test1=t1
+```
+
+With this in place, any requests to `/fetch/api/basic/set-cookie.asis` will
+receive an HTTP 204 response that sets the cookie named `test1`. When writing
+more tests in the future, you may want the server to behave more dynamically.
+In that case, [you can write Python code to control how the server
+responds](python-handlers/index).
+
+Now, we can write the test! Create a new file named `set-cookie.html` in the
+same directory and insert the following text:
+
+```html
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>fetch: setting cookies</title>
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+
+<script>
+promise_test(function() {
+ return fetch('set-cookie.asis')
+ .then(function() {
+ assert_equals(document.cookie, 'test1=t1');
+ });
+});
+</script>
+```
+
+Let's step through each part of this file.
+
+- ```html
+ <!DOCTYPE html>
+ <meta charset="utf-8">
+ ```
+
+ We explicitly set the DOCTYPE and character set to be sure that browsers
+ don't infer them to be something we aren't expecting. We're omitting the
+ `<html>` and `<head>` tags. That's a common practice in WPT, preferred
+ because it makes tests more concise.
+
+- ```html
+ <title>fetch: setting cookies</title>
+ ```
+ The document's title should succinctly describe the feature under test.
+
+- ```html
+ <script src="/resources/testharness.js"></script>
+ <script src="/resources/testharnessreport.js"></script>
+ ```
+
+ These two `<script>` tags retrieve the code that powers testharness.js. A
+ testharness.js test can't run without them!
+
+- ```html
+ <script>
+ promise_test(function() {
+    return fetch('set-cookie.asis')
+ .then(function() {
+ assert_equals(document.cookie, 'test1=t1');
+ });
+ });
+ </script>
+ ```
+
+ This script uses the testharness.js function `promise_test` to define a
+ "subtest". We're using that because the behavior we're testing is
+ asynchronous. By returning a Promise value, we tell the harness to wait until
+ that Promise settles. The harness will report that the test has passed if
+ the Promise is fulfilled, and it will report that the test has failed if the
+ Promise is rejected.
+
+ We invoke the global `fetch` function to exercise the "behavior under test,"
+ and in the fulfillment handler, we verify that the expected cookie is set.
+ We're using the testharness.js `assert_equals` function to verify that the
+ value is correct; the function will throw an error otherwise. That will cause
+ the Promise to be rejected, and *that* will cause the harness to report a
+ failure.
+
+If you run the server according to the instructions in [the guide for local
+configuration](../running-tests/from-local-system), you can access the test at
+[http://web-platform.test:8000/fetch/api/basic/set-cookie.html](http://web-platform.test:8000/fetch/api/basic/set-cookie.html).
+You should see something like this:
+
+![](../assets/testharness-tutorial-test-screenshot-1.png "screen shot of testharness.js reporting the test results")
+
+## Refining the subtest
+
+<!--
+Goals of this section:
+
+- explain the motivation for "clean up" logic and demonstrate its usage
+- motivate explicit test naming
+-->
+
+We'd like to test a little more about `fetch` and cookies, but before we do,
+there are some improvements we can make to what we've written so far.
+
+For instance, we should remove the cookie after the subtest is complete. This
+ensures a consistent state for any additional subtests we may add and also for
+any tests that follow. We'll use the `add_cleanup` method to ensure that the
+cookie is deleted even if the test fails.
+
+```diff
+-promise_test(function() {
++promise_test(function(t) {
++ t.add_cleanup(function() {
++ document.cookie = 'test1=;expires=Thu, 01 Jan 1970 00:00:01 GMT;';
++ });
++
+   return fetch('set-cookie.asis')
+ .then(function() {
+ assert_equals(document.cookie, 'test1=t1');
+ });
+ });
+```
+
+Although we'd prefer it if there were no other cookies defined during our test,
+we shouldn't take that for granted. As written, the test will fail if the
+`document.cookie` includes additional cookies. We'll use slightly more
+complicated logic to test for the presence of the expected cookie.
+
+
+```diff
+ promise_test(function(t) {
+ t.add_cleanup(function() {
+ document.cookie = 'test1=;expires=Thu, 01 Jan 1970 00:00:01 GMT;';
+ });
+
+   return fetch('set-cookie.asis')
+ .then(function() {
+- assert_equals(document.cookie, 'test1=t1');
++      assert_true(/(^|; )test1=t1($|;)/.test(document.cookie));
+ });
+ });
+```
+
+In the screen shot above, the subtest's result was reported using the
+document's title, "fetch: setting cookies". Since we expect to add another
+subtest, we should give this one a more specific name:
+
+```diff
+ promise_test(function(t) {
+ t.add_cleanup(function() {
+ document.cookie = 'test1=;expires=Thu, 01 Jan 1970 00:00:01 GMT;';
+ });
+
+   return fetch('set-cookie.asis')
+ .then(function() {
+ assert_true(/(^|; )test1=t1($|;)/.test(document.cookie));
+ });
+-});
++}, 'cookie set for successful request');
+```
+
+## Writing a second subtest
+
+<!--
+Goals of this section:
+
+- introduce the concept of cross-domain testing and the associated tooling
+- demonstrate how to verify promise rejection
+- demonstrate additional assertion functions
+-->
+
+There are many things we might want to verify about how `fetch` sets cookies.
+For instance, it should *not* set a cookie if the request fails due to
+cross-origin security restrictions. Let's write a subtest which verifies that.
+
+We'll add another `<script>` tag for a JavaScript support file:
+
+```diff
+ <!DOCTYPE html>
+ <meta charset="utf-8">
+ <title>fetch: setting cookies</title>
+ <script src="/resources/testharness.js"></script>
+ <script src="/resources/testharnessreport.js"></script>
++<script src="/common/get-host-info.sub.js"></script>
+```
+
+`get-host-info.sub.js` is a general-purpose script provided by WPT. It's
+designed to help with testing cross-domain functionality. Since it's stored in
+WPT's `common/` directory, tests from all sorts of specifications rely on it.
+
+Next, we'll define the new subtest inside the same `<script>` tag that holds
+our first subtest.
+
+```js
+promise_test(function(t) {
+ t.add_cleanup(function() {
+ document.cookie = 'test1=;expires=Thu, 01 Jan 1970 00:00:01 GMT;';
+ });
+ const url = get_host_info().HTTP_NOTSAMESITE_ORIGIN +
+ '/fetch/api/basic/set-cookie.asis';
+
+ return fetch(url)
+ .then(function() {
+      assert_unreached('The promise for the cross-domain fetch operation should reject.');
+ }, function() {
+ assert_false(/(^|; )test1=t1($|;)/.test(document.cookie));
+ });
+}, 'no cookie is set for cross-domain fetch operations');
+```
+
+This may look familiar from the previous subtest, but there are some important
+differences.
+
+- ```js
+ const url = get_host_info().HTTP_NOTSAMESITE_ORIGIN +
+ '/fetch/api/basic/set-cookie.asis';
+ ```
+
+ We're requesting the same resource, but we're referring to it with an
+ alternate host name. The name of the host depends on how the WPT server has
+ been configured, so we rely on the helper to provide an appropriate value.
+
+- ```js
+ return fetch(url)
+ .then(function() {
+      assert_unreached('The promise for the cross-domain fetch operation should reject.');
+ }, function() {
+ assert_false(/(^|; )test1=t1($|;)/.test(document.cookie));
+ });
+ ```
+
+ We're returning a Promise value, just like the first subtest. This time, we
+ expect the operation to fail, so the Promise should be rejected. To express
+ this, we've used `assert_unreached` *in the fulfillment handler*.
+ `assert_unreached` is a testharness.js utility function which always throws
+ an error. With this in place, if fetch does *not* produce an error, then this
+ subtest will fail.
+
+ We've moved the assertion about the cookie to the rejection handler. We also
+ switched from `assert_true` to `assert_false` because the test should only
+ pass if the cookie is *not* set. It's a good thing we have the cleanup logic
+ in the previous subtest, right?
+
+If you run the test in your browser now, you can expect to see both tests
+reported as passing with their distinct names.
+
+![](../assets/testharness-tutorial-test-screenshot-2.png "screen shot of testharness.js reporting the test results")
+
+## Verifying our work
+
+We're done writing the test, but we should make sure it fits in with the rest
+of WPT before we submit it.
+
+[The lint tool](lint-tool) can detect some of the common mistakes people make
+when contributing to WPT. You enabled it when you [configured your system to
+work with WPT](../running-tests/from-local-system). To run it, open a
+command-line terminal, navigate to the root of the WPT repository, and enter
+the following command:
+
+ python ./wpt lint fetch/api/basic
+
+If this recognizes any of those common mistakes in the new files, it will tell
+you where they are and how to fix them. If you do have changes to make, you can
+run the command again to make sure you got them right.
+
+Now, we'll run the test using the automated test runner. This is important for
+testharness.js tests because there are subtleties of the automated test runner
+which can influence how the test behaves. That's not to say your test has to
+pass in all browsers (or even in *any* browser). But if we expect the test to
+pass, then running it this way will help us catch other kinds of mistakes.
+
+The tools support running the tests in many different browsers. We'll use
+Firefox this time:
+
+ python ./wpt run firefox fetch/api/basic/set-cookie.html
+
+We expect this test to pass, so if it does, we're ready to submit it. If we
+were testing a web-platform feature that Firefox didn't support, we would
+expect the test to fail instead.
+
+There are a few problems to look out for in addition to passing/failing status.
+The report will describe fewer tests than we expect if the test isn't run at
+all. That's usually a sign of a formatting mistake, so you'll want to make sure
+you've used the right file names and metadata. Separately, the web browser
+might crash. That's often a sign of a browser bug, so you should consider
+[reporting it to the browser's
+maintainers](https://rachelandrew.co.uk/archives/2017/01/30/reporting-browser-bugs/)!
+
+## Submitting the test
+
+First, let's stage the new files for committing:
+
+ $ git add fetch/api/basic/set-cookie.asis
+ $ git add fetch/api/basic/set-cookie.html
+
+We can make sure the commit has everything we want to submit (and nothing we
+don't) by using `git diff`:
+
+ $ git diff --staged
+
+On most systems, you can use the arrow keys to navigate through the changes,
+and you can press the `q` key when you're done reviewing.
+
+Next, we'll create a commit with the staged changes:
+
+ $ git commit -m '[fetch] Add test for setting cookies'
+
+And now we can push the commit to our fork of WPT:
+
+ $ git push origin fetch-cookie
+
+The last step is to submit the test for review. WPT doesn't actually need the
+test we wrote in this tutorial, but if we wanted to submit it for inclusion in
+the repository, we would create a pull request on GitHub. [The guide on git and
+GitHub](github-intro) has all the details on how to do that.
+
+## More practice
+
+Here are some ways you can keep experimenting with WPT using this test:
+
+- Improve the test's readability by defining helper functions like
+  `cookieIsSet` and `deleteCookie` (see the sketch after this list)
+- Improve the test's coverage by refactoring it into [a "multi-global"
+ test](testharness)
+- Improve the test's coverage by writing more subtests (e.g. the behavior when
+ the fetch operation is aborted by `window.stop`, or the behavior when the
+ HTTP response sets multiple cookies)
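+
+For instance, the helpers suggested in the first bullet might look roughly like
+this (the names are only illustrative and not part of any WPT library; the
+regular expression assumes cookie names and values without metacharacters):
+
+```js
+function cookieIsSet(name, value) {
+  return new RegExp('(^|; )' + name + '=' + value + '($|;)').test(document.cookie);
+}
+
+function deleteCookie(name) {
+  document.cookie = name + '=;expires=Thu, 01 Jan 1970 00:00:01 GMT;';
+}
+```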
diff --git a/testing/web-platform/tests/docs/writing-tests/testharness.md b/testing/web-platform/tests/docs/writing-tests/testharness.md
new file mode 100644
index 0000000000..fd4450f440
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/testharness.md
@@ -0,0 +1,285 @@
+# JavaScript Tests (testharness.js)
+
+JavaScript tests are the correct type of test to write in any
+situation where you are not specifically interested in the rendering
+of a page, and where human interaction isn't required; these tests are
+written in JavaScript using a framework called `testharness.js`.
+
+A high-level overview is provided below and more information can be found here:
+
+ * [testharness.js Documentation](testharness-api.md) — An introduction
+ to the library and a detailed API reference. [The tutorial on writing a
+ testharness.js test](testharness-tutorial) provides a concise guide to writing
+ a test — a good place to start for newcomers to the project.
+
+ * [testdriver.js Automation](testdriver.md) — Automating end user actions, such as moving or
+ clicking a mouse. See also the
+ [testdriver.js extension tutorial](testdriver-extension-tutorial.md) for adding new commands.
+
+ * [idlharness.js](idlharness.md) — A library for testing
+ IDL interfaces using `testharness.js`.
+
+ * [Message Channels](channels.md) - A way to communicate between
+ different globals, including window globals not in the same
+ browsing context group.
+
+ * [Server features](server-features.md) - Advanced testing features
+ that are commonly used with JavaScript tests.
+
+See also the [general guidelines](general-guidelines.md) for all test types.
+
+## Window tests
+
+### Without HTML boilerplate (`.window.js`)
+
+Create a JavaScript file whose filename ends in `.window.js` to have the necessary HTML boilerplate
+generated for you at `.window.html`. I.e., for `test.window.js` the server will ensure
+`test.window.html` is available.
+
+In this JavaScript file you can place one or more tests, as follows:
+```js
+test(() => {
+ // Place assertions and logic here
+ assert_equals(document.characterSet, "UTF-8");
+}, "Ensure HTML boilerplate uses UTF-8"); // This is the title of the test
+```
+
+If you only need to test a [single thing](testharness-api.html#single-page-tests), you could also use:
+```js
+// META: title=Ensure HTML boilerplate uses UTF-8
+setup({ single_test: true });
+assert_equals(document.characterSet, "UTF-8");
+done();
+```
+
+See [asynchronous (`async_test()`)](testharness-api.html#asynchronous-tests) and
+[promise tests (`promise_test()`)](testharness-api.html#promise-tests) for more involved setups.
+
+### With HTML boilerplate
+
+You need to be a bit more explicit and include the `testharness.js` framework directly as well as an
+additional file used by implementations:
+
+```html
+<!doctype html>
+<meta charset=utf-8>
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<body>
+ <script>
+ test(() => {
+ assert_equals(document.characterSet, "UTF-8");
+ }, "Ensure UTF-8 declaration is observed");
+ </script>
+```
+
+Here too you could avoid the wrapper `test()` function:
+
+```html
+<!doctype html>
+<meta charset=utf-8>
+<title>Ensure UTF-8 declaration is observed</title>
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<body>
+ <script>
+ setup({ single_test: true });
+ assert_equals(document.characterSet, "UTF-8");
+ done();
+ </script>
+```
+
+In this case the test title is taken from the `title` element.
+
+## Dedicated worker tests (`.worker.js`)
+
+Create a JavaScript file that imports `testharness.js` and whose filename ends in `.worker.js` to
+have the necessary HTML boilerplate generated for you at `.worker.html`.
+
+For example, one could write a test for the `FileReaderSync` API by
+creating a `FileAPI/FileReaderSync.worker.js` as follows:
+
+```js
+importScripts("/resources/testharness.js");
+test(function () {
+ const blob = new Blob(["Hello"]);
+ const fr = new FileReaderSync();
+ assert_equals(fr.readAsText(blob), "Hello");
+}, "FileReaderSync#readAsText.");
+done();
+```
+
+This test could then be run from `FileAPI/FileReaderSync.worker.html`.
+
+(Removing the need for `importScripts()` and `done()` is tracked in
+[issue #11529](https://github.com/web-platform-tests/wpt/issues/11529).)
+
+## Tests for other or multiple globals (`.any.js`)
+
+Tests for features that exist in multiple global scopes can be written in a way
+that they are automatically run in several scopes. In this case, the test is a
+JavaScript file with extension `.any.js`, which can use all the usual APIs.
+
+By default, the test runs in a window scope and a dedicated worker scope.
+
+For example, one could write a test for the `Blob` constructor by
+creating a `FileAPI/Blob-constructor.any.js` as follows:
+
+```js
+test(function () {
+ const blob = new Blob();
+ assert_equals(blob.size, 0);
+ assert_equals(blob.type, "");
+}, "The Blob constructor.");
+```
+
+This test could then be run from `FileAPI/Blob-constructor.any.worker.html` as well
+as `FileAPI/Blob-constructor.any.html`.
+
+It is possible to customize the set of scopes with a metadata comment, such as
+
+```
+// META: global=sharedworker
+// ==> would run in the shared worker scope
+// META: global=window,serviceworker
+// ==> would only run in the window and service worker scope
+// META: global=dedicatedworker
+// ==> would run in the default dedicated worker scope
+// META: global=dedicatedworker-module
+// ==> would run in the dedicated worker scope as a module
+// META: global=worker
+// ==> would run in the dedicated, shared, and service worker scopes
+```
+
+For a test file <code><var>x</var>.any.js</code>, the available scope keywords
+are:
+
+* `window` (default): to be run at <code><var>x</var>.any.html</code>
+* `dedicatedworker` (default): to be run at <code><var>x</var>.any.worker.html</code>
+* `dedicatedworker-module` to be run at <code><var>x</var>.any.worker-module.html</code>
+* `serviceworker`: to be run at <code><var>x</var>.any.serviceworker.html</code> (`.https` is
+ implied)
+* `serviceworker-module`: to be run at <code><var>x</var>.any.serviceworker-module.html</code>
+ (`.https` is implied)
+* `sharedworker`: to be run at <code><var>x</var>.any.sharedworker.html</code>
+* `sharedworker-module`: to be run at <code><var>x</var>.any.sharedworker-module.html</code>
+* `jsshell`: to be run in a JavaScript shell, without access to the DOM
+ (currently only supported in SpiderMonkey, and skipped in wptrunner)
+* `worker`: shorthand for the dedicated, shared, and service worker scopes
+* `shadowrealm`: runs the test code in a
+ [ShadowRealm](https://github.com/tc39/proposal-shadowrealm) context hosted in
+ an ordinary Window context; to be run at <code><var>x</var>.any.shadowrealm.html</code>
+
+To check if your test is run from a window or worker you can use the following two methods that will
+be made available by the framework:
+
+ self.GLOBAL.isWindow()
+ self.GLOBAL.isWorker()
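+
+For example, a `.any.js` test can guard document-specific checks like this
+(a small sketch):
+
+```js
+// Only meaningful where a document exists, i.e. in the window scope.
+if (self.GLOBAL.isWindow()) {
+  test(() => {
+    assert_equals(document.characterSet, "UTF-8");
+  }, "window scopes have a document");
+}
+```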
+
+Although [the global `done()` function must be explicitly invoked for most
+dedicated worker tests and shared worker
+tests](testharness-api.html#determining-when-all-tests-are-complete), it is
+automatically invoked for tests defined using the "multi-global" pattern.
+
+## Other features of `.window.js`, `.worker.js` and `.any.js`
+
+### Specifying a test title
+
+Use `// META: title=This is the title of the test` at the beginning of the resource.
+
+### Including other JavaScript files
+
+Use `// META: script=link/to/resource.js` at the beginning of the resource. For example,
+
+```
+// META: script=/common/utils.js
+// META: script=resources/utils.js
+```
+
+can be used to include both the global and a local `utils.js` in a test.
+
+In window environments, the script will be included using a classic `<script>` tag. In classic
+worker environments, the script will be imported using `importScripts()`. In module worker
+environments, the script will be imported using a static `import`.
+
+### Specifying a timeout of long
+
+Use `// META: timeout=long` at the beginning of the resource.
+
+### Specifying test [variants](#variants)
+
+Use `// META: variant=url-suffix` at the beginning of the resource. For example,
+
+```
+// META: variant=
+// META: variant=?wss
+```
+
+## Variants
+
+A test file can have multiple variants by including `meta` elements,
+for example:
+
+```html
+<meta name="variant" content="">
+<meta name="variant" content="?wss">
+```
+
+Test runners will execute the test for each variant specified, appending the corresponding content
+attribute value to the URL of the test as they do so.
+
+`/common/subset-tests.js` and `/common/subset-tests-by-key.js` are two utility scripts that work
+well together with variants, allowing a test to be split up into subtests in cases when there are
+otherwise too many tests to complete inside the timeout. For example:
+
+```html
+<!doctype html>
+<title>Testing variants</title>
+<meta name="variant" content="?1-1000">
+<meta name="variant" content="?1001-2000">
+<meta name="variant" content="?2001-last">
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<script src="/common/subset-tests.js"></script>
+<script>
+ const tests = [
+ { fn: t => { ... }, name: "..." },
+ ... lots of tests ...
+ ];
+ for (const test of tests) {
+ subsetTest(async_test, test.fn, test.name);
+ }
+</script>
+```
+
+With `subsetTestByKey`, the key is given as the first argument, and the
+query string can include or exclude a key (which will be matched as a regular
+expression).
+
+```html
+<!doctype html>
+<title>Testing variants by key</title>
+<meta name="variant" content="?include=Foo">
+<meta name="variant" content="?include=Bar">
+<meta name="variant" content="?exclude=(Foo|Bar)">
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<script src="/common/subset-tests-by-key.js"></script>
+<script>
+ subsetTestByKey("Foo", async_test, () => { ... }, "Testing foo");
+ ...
+</script>
+```
+
+## Table of Contents
+
+```eval_rst
+.. toctree::
+ :maxdepth: 1
+
+ testharness-api
+ testdriver
+ testdriver-extension-tutorial
+ idlharness
+```
diff --git a/testing/web-platform/tests/docs/writing-tests/tools.md b/testing/web-platform/tests/docs/writing-tests/tools.md
new file mode 100644
index 0000000000..0a9a7dcfd5
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/tools.md
@@ -0,0 +1,25 @@
+# Command-line utility scripts
+
+Sometimes you may want to add a script to the repository that's meant to be
+used from the command line, not from a browser (e.g., a script for generating
+test files). If you want to ensure (e.g., for security reasons) that such
+scripts won't be handled by the HTTP server, but will instead only be usable
+from the command line, then place them in either:
+
+* the `tools` subdir at the root of the repository, or
+
+* the `tools` subdir at the root of any top-level directory in the repository
+ which contains the tests the script is meant to be used with
+
+Any files in those `tools` directories won't be handled by the HTTP server;
+instead the server will return a 404 if a user navigates to the URL for a file
+within them.
+
+If you want to add a script for use with a particular set of tests but there
+isn't yet any `tools` subdir at the root of a top-level directory in the
+repository containing those tests, you can create a `tools` subdir at the root
+of that top-level directory and place your scripts there.
+
+For example, if you wanted to add a script for use with tests in the
+`notifications` directory, create the `notifications/tools` subdir and put your
+script there.
diff --git a/testing/web-platform/tests/docs/writing-tests/visual.md b/testing/web-platform/tests/docs/writing-tests/visual.md
new file mode 100644
index 0000000000..a8ae53d071
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/visual.md
@@ -0,0 +1,27 @@
+# Visual Tests
+
+Visual tests are typically used when testing rendering of things that
+cannot be tested with [reftests](reftests).
+
+Their main advantage over manual tests is that they can be verified using
+browser-specific and platform-specific screenshots. Note, however, that many
+browser vendors treat them identically to manual tests, so they are
+similarly discouraged: they very infrequently, if ever, get run.
+
+## Writing a Visual Test
+
+Visual tests are test files which have `-visual` at the end of their
+filename, before the extension. There is nothing needed in them to
+make them work.
+
+They should follow the [general test guidelines](general-guidelines),
+especially noting the requirement to be self-describing (i.e., they
+must give a clear pass condition in their rendering).
+
+Similarly, they should consider the [rendering test guidelines](rendering),
+especially those about color, to ensure those running the test don't
+incorrectly judge its result.
+
+The screenshot for comparison is taken at the same point as when screenshots
+for [reftest comparisons](reftests) are taken, including potentially waiting
+for any `class="reftest-wait"` to be removed from the root element.
diff --git a/testing/web-platform/tests/docs/writing-tests/wdspec.md b/testing/web-platform/tests/docs/writing-tests/wdspec.md
new file mode 100644
index 0000000000..1943fb9e94
--- /dev/null
+++ b/testing/web-platform/tests/docs/writing-tests/wdspec.md
@@ -0,0 +1,68 @@
+# wdspec tests
+
+The term "wdspec" describes a type of test in WPT which verifies some aspect of
+[the WebDriver protocol](https://w3c.github.io/webdriver/). These tests are
+written in [the Python programming language](https://www.python.org/) and
+structured with [the pytest testing
+framework](https://docs.pytest.org/en/latest/).
+
+The test files are organized into subdirectories based on the WebDriver
+command under test. For example, tests for [the Close Window
+command](https://w3c.github.io/webdriver/#close-window) are located in the
+`close_window` directory.
+
+Similar to [testharness.js](testharness) tests, wdspec tests contain within
+them any number of "sub-tests." Sub-tests are defined as Python functions whose
+name begins with `test_`, e.g. `test_stale_element`.
+
+## The `webdriver` client library
+
+web-platform-tests maintains a WebDriver client library called `webdriver`
+located in the `tools/webdriver/` directory. Like other client libraries, it
+makes it easier to write code which interfaces with a browser using the
+protocol.
+
+Many tests require some "set up" code--logic intended to bring the browser to a
+known state from which the expected behavior can be verified. The convenience
+methods in the `webdriver` library **should** be used to perform this task
+because they reduce duplication.
+
+However, the same methods **should not** be used to issue the command under
+test. Instead, the HTTP request describing the command should be sent directly.
+This practice promotes the descriptive quality of the tests and limits
+indirection that tends to obfuscate test failures.
+
+Here is an example of a test for [the Element Click
+command](https://w3c.github.io/webdriver/#element-click):
+
+```python
+from tests.support.asserts import assert_success
+
+def test_null_response_value(session, inline):
+ # The high-level API is used to set up a document and locate a click target
+ session.url = inline("<p>foo")
+ element = session.find.css("p", all=False)
+
+ # An HTTP request is explicitly constructed for the "click" command itself
+ response = session.transport.send(
+ "POST", "session/{session_id}/element/{element_id}/click".format(
+ session_id=session.session_id,
+ element_id=element.id))
+
+ assert_success(response)
+```
+
+## Utility functions
+
+The `webdriver` library is minimal by design. It mimics the structure of the
+WebDriver specification. Many conformance tests perform similar operations
+(e.g. calculating the center point of an element or creating a document), but
+the library does not expose methods to facilitate them. Instead, wdspec tests
+define shared functionality in the form of "support" files.
+
+Many of these functions are intended to be used directly from the tests using
+Python's built-in `import` keyword. Others (particularly those that operate on
+a WebDriver session) are defined in terms of Pytest "fixtures" and must be
+loaded accordingly. For more detail on how to define and use test fixtures,
+please refer to [the pytest project's documentation on the
+topic](https://docs.pytest.org/en/latest/fixture.html).
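+
+As an illustration, a session-based helper exposed as a fixture might look
+roughly like this (the names are hypothetical; real fixtures live under
+`tests/support/` and are wired up through `conftest.py` files):
+
+```python
+import pytest
+
+
+@pytest.fixture
+def document_title(session):
+    """Return a helper that reads the current document's title via the session."""
+    def get():
+        return session.execute_script("return document.title;")
+    return get
+
+
+def test_title_is_reported(session, inline, document_title):
+    session.url = inline("<title>hello</title>")
+    assert document_title() == "hello"
+```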