authorDaniel Baumann <daniel.baumann@progress-linux.org>2024-04-19 00:47:55 +0000
committerDaniel Baumann <daniel.baumann@progress-linux.org>2024-04-19 00:47:55 +0000
commit26a029d407be480d791972afb5975cf62c9360a6 (patch)
treef435a8308119effd964b339f76abb83a57c29483 /testing/docs
parentInitial commit. (diff)
downloadfirefox-26a029d407be480d791972afb5975cf62c9360a6.tar.xz
firefox-26a029d407be480d791972afb5975cf62c9360a6.zip
Adding upstream version 124.0.1.upstream/124.0.1
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'testing/docs')
-rw-r--r--  testing/docs/assert.rst | 19
-rw-r--r--  testing/docs/automated-testing/index.rst | 348
-rw-r--r--  testing/docs/automated-testing/manifest-sandbox.rst | 103
-rw-r--r--  testing/docs/browser-chrome/browsertestutils.rst | 5
-rw-r--r--  testing/docs/browser-chrome/index.md | 89
-rw-r--r--  testing/docs/browser-chrome/writing.md | 149
-rw-r--r--  testing/docs/chrome-tests/index.rst | 120
-rw-r--r--  testing/docs/ci-configs/index.md | 64
-rw-r--r--  testing/docs/ci-configs/schedule.md | 50
-rw-r--r--  testing/docs/eventutils.rst | 45
-rw-r--r--  testing/docs/intermittent/index.rst | 375
-rw-r--r--  testing/docs/mochitest-plain/faq.md | 326
-rw-r--r--  testing/docs/mochitest-plain/index.md | 301
-rw-r--r--  testing/docs/sheriffed-intermittents/index.md | 44
-rw-r--r--  testing/docs/simpletest.rst | 5
-rw-r--r--  testing/docs/test-verification/index.rst | 239
-rw-r--r--  testing/docs/testing-policy/index.md | 26
-rw-r--r--  testing/docs/tests-for-new-config/index.rst | 130
-rw-r--r--  testing/docs/tests-for-new-config/manual.rst | 224
-rw-r--r--  testing/docs/testutils.rst | 5
-rw-r--r--  testing/docs/treeherder-try/img/th_bug_suggestions.png | bin 0 -> 93426 bytes
-rw-r--r--  testing/docs/treeherder-try/img/th_confirm_failures.png | bin 0 -> 54258 bytes
-rw-r--r--  testing/docs/treeherder-try/img/th_filter.png | bin 0 -> 31047 bytes
-rw-r--r--  testing/docs/treeherder-try/img/th_filter_add.png | bin 0 -> 41025 bytes
-rw-r--r--  testing/docs/treeherder-try/img/th_filter_classifications.png | bin 0 -> 40698 bytes
-rw-r--r--  testing/docs/treeherder-try/img/th_mitten.png | bin 0 -> 1793 bytes
-rw-r--r--  testing/docs/treeherder-try/img/th_new.png | bin 0 -> 35114 bytes
-rw-r--r--  testing/docs/treeherder-try/img/th_retrigger.png | bin 0 -> 49838 bytes
-rw-r--r--  testing/docs/treeherder-try/img/th_select_task.png | bin 0 -> 52732 bytes
-rw-r--r--  testing/docs/treeherder-try/img/th_task_action.png | bin 0 -> 48999 bytes
-rw-r--r--  testing/docs/treeherder-try/index.rst | 139
-rw-r--r--  testing/docs/webrender/index.rst | 90
-rw-r--r--  testing/docs/xpcshell/index.rst | 822
33 files changed, 3718 insertions, 0 deletions
diff --git a/testing/docs/assert.rst b/testing/docs/assert.rst
new file mode 100644
index 0000000000..c3ef1aab9d
--- /dev/null
+++ b/testing/docs/assert.rst
@@ -0,0 +1,19 @@
+.. _assert-module:
+
+Assert module
+=============
+
+For XPCShell tests and mochitests, ``Assert`` is already present as an
+instantiated global to which you can refer - you don't need to construct it
+yourself. You can immediately start using ``Assert.ok`` and similar methods as
+test assertions.
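+
+For example, inside a test task:
+
+.. code-block:: js
+
+   // Both methods take an optional message as the last argument.
+   Assert.ok(true, "ok() checks that a value is truthy");
+   Assert.equal(21 * 2, 42, "equal() compares with loose (==) equality");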
+
+The full class documentation follows, but it is perhaps worth noting that this
+API is largely identical to
+`NodeJS' assert module <https://nodejs.org/api/assert.html>`_, with some
+omissions/changes including strict mode and string matching.
+
+.. js:autoclass:: Assert
+ :members: ok, equal, notEqual, strictEqual, notStrictEqual, deepEqual, notDeepEqual,
+ greater, less, greaterOrEqual, lessOrEqual, stringContains, stringMatches,
+ throws, rejects, *
diff --git a/testing/docs/automated-testing/index.rst b/testing/docs/automated-testing/index.rst
new file mode 100644
index 0000000000..632b3acf59
--- /dev/null
+++ b/testing/docs/automated-testing/index.rst
@@ -0,0 +1,348 @@
+Automated Testing
+=================
+
+You've just written a feature and (hopefully!) want to test it. Or you've
+decided that an existing feature doesn't have enough tests and want to contribute
+some. But where do you start? You've looked around and found references to things
+like "xpcshell" or "web-platform-tests" or "talos". What code, features or
+platforms do they all test? Where do their feature sets overlap? In short, where
+should your new tests go? This document is a starting point for those who want
+to start to learn about Mozilla's automated testing tools and procedures. Below
+you'll find a short summary of each framework we use, and some questions to help
+you pick the framework(s) you need for your purposes.
+
+If you still have questions, ask on `Matrix <https://wiki.mozilla.org/Matrix>`__
+or on the relevant bug.
+
+Firefox Production
+------------------
+These tests are found within the `mozilla-central <https://hg.mozilla.org/mozilla-central>`__
+tree, along with the product code.
+
+They are run when a changeset is pushed
+to `mozilla-central <https://hg.mozilla.org/mozilla-central>`__,
+`autoland <https://hg.mozilla.org/integration/autoland/>`__, or
+`try </tools/try/index.html>`_, with the results showing up on
+`Treeherder <https://treeherder.mozilla.org/>`__. Not all tests are run on
+every changeset; scheduling algorithms select the tests most likely to fail,
+while the full set of tests still runs on a regular basis.
+
+They can also be run on local builds.
+Note: Most of the mobile tests run on emulators, but some of the tests
+(notably, performance tests) run on hardware devices.
+We try to avoid running mobile tests on hardware devices unnecessarily.
+In Treeherder, tests with names that start with "hw-" run on hardware.
+
+Linting
+~~~~~~~
+
+Lint tests help to ensure better quality, less error-prone code by
+analysing the code with a linter.
+
+
+.. csv-table:: Linters
+ :header-rows: 1
+
+ "Treeherder Symbol", "Name", "Platforms", "What is Tested"
+ "``ES``", "`ESLint </code-quality/lint/linters/eslint.html>`__", "All", "JavaScript is analyzed for correctness."
+ "``ES-build``", "`eslint-build </code-quality/lint/linters/eslint.html#eslint-build-es-b>`_", "All", "Extended javascript analysis that uses build artifacts."
+ "``mocha(EPM)``", "`ESLint-plugin-mozilla </code-quality/lint/linters/eslint-plugin-mozilla.html>`__", "Desktop", "The ESLint plugin rules."
+ "``f8``", "`flake8 </code-quality/lint/linters/flake8.html>`__", "All", "Python analyzed for style and correctness."
+ "``stylelint``", "`Stylelint </code-quality/lint/linters/stylelint.html>`__", "All", "CSS is analyzed for correctness."
+ "``W``", "`wpt lint </web-platform/index.html>`__", "Desktop", "web-platform-tests analyzed for style and manifest correctness"
+ "``WR(tidy)``", "`WebRender servo-tidy </testing/webrender/index.html>`__", "Desktop", "Code in gfx/wr is run through servo-tidy."
+ "``A``", "`Spotless </code-quality/lint/linters/android-format.html>`_", "Android", "Java is analyzed for style and correctness."
+
+.. _Functional_testing:
+
+Functional testing
+~~~~~~~~~~~~~~~~~~
+
+.. csv-table:: Automated Test Suites
+ :header-rows: 2
+
+ "Treeherder Symbol", "Name", "Platform", "Process", "Environment", "", "Privilege", "What is Tested"
+ "", "", "", "", "Shell", "Browser Profile", "",
+ "``R(J)``", "JS Reftest", "Desktop", "N/A", "JSShell", "N/A", "N/A", "The JavaScript engine's implementation of the JavaScript language."
+ "``R(C)``", "`Crashtest </web-platform/index.html>`__", "All", "Child", "Content", "Yes", "Low", "That pages load without crashing, asserting, or leaking."
+   "``R(R)``", "`Reftest </web-platform/index.html>`__", "All", "Child", "Content", "Yes", "Low", "That pages are rendered (and thus also laid out) correctly."
+ "``GTest``", "`GTest </gtest/index.html>`__", "All", "N/A", "Terminal", "N/A", "N/A", "Code that is not exposed to JavaScript."
+ "``X``", "`xpcshell </testing/xpcshell/index.html>`__", "All", "Parent, Allow", "XPCShell", "Allow", "High", "Low-level code exposed to JavaScript, such as XPCOM components."
+ "``M(a11y)``", "Accessibility (mochitest-a11y)", "Desktop", "Child", "Content", "Yes", "?", "Accessibility interfaces."
+ "``M(1), M(2), M(...)``", "`Mochitest plain </testing/mochitest-plain/index.html>`__", "All", "Child", "Content", "Yes", "Low, Allow", "Features exposed to JavaScript in web content, like DOM and other Web APIs, where the APIs do not require elevated permissions to test."
+ "``M(c1/c2/c3)``", "`Mochitest chrome </testing/chrome-tests/index.html>`__", "All", "Child, Allow", "Content", "Yes", "High", "Code requiring UI or JavaScript interactions with privileged objects."
+ "``M(bc)``", "`Mochitest browser-chrome </testing/mochitest-plain/index.html>`__", "All", "Parent, Allow", "Browser", "Yes", "High", "How the browser UI interacts with itself and with content."
+ "``M(remote)``", "Mochitest Remote Protocol", "All", "Parent, Allow", "Browser", "Yes", "High", "Firefox Remote Protocol (Implements parts of Chrome dev-tools protocol). Based on Mochitest browser-chrome."
+ "``SM(...), SM(pkg)``", "`SpiderMonkey automation <https://wiki.mozilla.org/Javascript:Automation_Builds>`__", "Desktop", "N/A", "JSShell", "N/A", "Low", "SpiderMonkey engine shell tests and JSAPI tests."
+ "``W``", "`web-platform-tests </web-platform/index.html>`__", "Desktop", "Child", "Content", "Yes", "Low", "Standardized features exposed to ECMAScript in web content; tests are shared with other vendors."
+ "``Wr``", "`web-platform-tests </web-platform/writing-tests/reftests.html>`__", "All", "Child", "Content", "Yes", "Low", "Layout and graphic correctness for standardized features; tests are shared with other vendors."
+ "``Mn``", "`Marionette </testing/marionette/Testing.html>`__", "Desktop", "?", "Content, Browser", "?", "High", "Large out-of-process function integration tests and tests that do communication with multiple remote Gecko processes."
+ "``Fxfn``", "`Firefox UI Tests </remote/Testing.html#puppeteer-tests>`__", "Desktop", "?", "Content, Browser", "Yes", "High", "Integration tests with a focus on the user interface and localization."
+ "``tt(c)``", "`telemetry-tests-client </toolkit/components/telemetry/internals/tests.html>`__", "Desktop", "N/A", "Content, Browser", "Yes", "High", "Integration tests for the Firefox Telemetry client."
+ "``TV``", "`Test Verification (test-verify) </testing/test-verification/index.html>`__", "All", "Depends on test harness", "?", "?", "?", "Uses other test harnesses - mochitest, reftest, xpcshell - to perform extra testing on new/modified tests."
+ "``TVw``", "`Test Verification for wpt (test-verify-wpt) </testing/test-verification/index.html>`__", "Desktop", "Child", "?", "?", "?", "Uses wpt test harnesses to perform extra testing on new/modified web-platform tests."
+ "``WR(wrench)``", "`WebRender standalone tests </testing/webrender/index.html>`__", "All", "N/A", "Terminal", "N/A", "N/A", "WebRender rust code (as a standalone module, with Gecko integration)."
+
+Note: there are preference-based variations of the previous testing suites.
+For example, mochitests on Treeherder can have ``gli``, ``swr``, ``spi``,
+``nofis``, ``a11y-checks``, ``spi-nw-1proc``, and many others. Another
+example is GTest, which can use ``GTest-1proc``. To learn more about
+these variations, hover over the job symbols in Treeherder to read a
+description of what the abbreviations mean.
+
+.. _Table_key:
+
+Table key
+^^^^^^^^^
+
+Symbol
+ Abbreviation for the test suite used by
+ `Treeherder <https://treeherder.mozilla.org/>`__. The first letter
+ generally indicates which of the test harnesses is used to execute
+ the test. The letter in parentheses identifies the actual test suite.
+Name
+ Common name used when referring to the test suite.
+File type
+ When adding a new test, you will generally create a file of this type
+ in the source tree and then declare it in a manifest or makefile.
+Platform
+ Most test suites are supported only on a subset of the available
+   platforms and operating systems. Unless otherwise noted:
+
+ - **Desktop** tests run on Windows, Mac OS X, and Linux.
+ - **Mobile** tests run on Android emulators or remotely on Android
+ devices.
+
+Process
+ - When **Parent** is indicated, the test file will always run in the
+ parent process, even when the browser is running in Electrolysis
+ (e10s) mode.
+ - When **Child** is indicated, the test file will run in the child
+ process when the browser is running in Electrolysis (e10s) mode.
+ - The **Allow** label indicates that the test has access to
+ mechanisms to run code in the other process.
+
+Environment
+ - The **JSShell** and **XPCShell** environments are limited
+ JavaScript execution environments with no windows or user
+ interface (note however that XPCShell tests on Android are run
+ within a browser window.)
+ - The **Content** indication means that the test is run inside a
+ content page loaded by a browser window.
+ - The **Browser** indication means that the test is loaded in the
+ JavaScript context of a browser XUL window.
+ - The **Browser Profile** column indicates whether a browser profile
+ is loaded when the test starts. The **Allow** label means that the
+ test can optionally load a profile using a special command.
+
+Privilege
+ Indicates whether the tests normally run with low (content) or high
+ (chrome) JavaScript privileges. The **Allow** label means that the
+ test can optionally run code in a privileged environment using a
+ special command.
+
+.. _Performance_testing:
+
+Performance testing
+~~~~~~~~~~~~~~~~~~~
+
+There are many test harnesses used to test performance.
+`For more information on the various performance harnesses,
+check out the perf docs. </testing/perfdocs>`_
+
+
+.. _So_which_should_I_use:
+
+So which should I use?
+----------------------
+
+Generally, you should pick the lowest-level framework that you can. If
+you are testing JavaScript but don't need a window, use XPCShell or even
+JSShell. If you're testing page layout, try to use
+`web-platform-test reftest.
+<https://web-platform-tests.org/writing-tests/reftests.html>`_
+The advantage in lower level testing is that you don't drag in a lot of
+other components that might have their own problems, so you can home in
+quickly on any bugs in what you are specifically testing.
+
+Here's a series of questions to ask about your work when you want to
+write some tests.
+
+.. _Is_it_low-level_code:
+
+Is it low-level code?
+~~~~~~~~~~~~~~~~~~~~~
+
+If the functionality is exposed to JavaScript, and you don't need a
+window, consider `XPCShell </testing/xpcshell/index.html>`__. If not,
+you'll probably have to use `GTest </gtest/index.html>`__, which can
+test pretty much anything. In general, this should be your
+last option for a new test, unless you have to test something that is
+not exposed to JavaScript.
+
+.. _Does_it_cause_a_crash:
+
+Does it cause a crash?
+~~~~~~~~~~~~~~~~~~~~~~
+
+If you've found pages that crash Firefox, add a
+`crashtest </web-platform/index.html>`__ to
+make sure future versions don't experience this crash (assertion or
+leak) again. Note that this may lead to more tests once the core
+problem is found.
+
+.. _Is_it_a_layoutgraphics_feature:
+
+Is it a layout/graphics feature?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+`Reftest </layout/Reftest.html#writing-tests>`__ is your best bet, if possible.
+
+.. _Do_you_need_to_verify_performance:
+
+Do you need to verify performance?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+`Use an appropriate performance test suite from this list </testing/perfdocs>`_.
+
+.. _Are_you_testing_UI_features:
+
+Are you testing UI features?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Try one of the flavors of
+`mochitest </testing/mochitest-plain/index.html>`__, or
+`Marionette </docs/Marionette>`__ if the application also needs to be
+restarted, or tested with localized builds.
+
+.. _Are_you_testing_MobileAndroid:
+
+Are you testing Mobile/Android?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If you are testing GeckoView, you will need to use
+`JUnit integration tests
+</mobile/android/geckoview/contributor/junit.html#testing-overview>`__.
+
+There are some specific features that
+`Mochitest </testing/mochitest-plain/index.html>`__ or
+`Reftest </layout/Reftest.html>`__ can cover. Browser-chrome tests do not run on
+Android. If you want to test performance, `Raptor </testing/perfdocs/raptor.html>`__ will
+be a good choice.
+
+
+.. _Are_you_doing_none_of_the_above:
+
+Are you doing none of the above?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+- To get your tests running in continuous integration, try
+ `web-platform-tests </web-platform/index.html>`_, or
+ `Mochitest </testing/mochitest-plain/index.html>`__, or,
+ if higher privileges are required, try
+ `Mochitest browser chrome tests </testing/mochitest-plain/index.html>`__.
+- For Desktop Firefox, or if you just want to see the future of Gecko
+ testing, look into the on-going
+ `Marionette </testing/marionette/Testing.html#harness-tests>`__ project.
+
+.. _Need_to_get_more_data_out_of_your_tests:
+
+Need to get more data out of your tests?
+----------------------------------------
+
+Most test jobs now expose an environment variable named
+``$MOZ_UPLOAD_DIR``. If this variable is set during automated test runs,
+you can drop additional files into this directory, and they will be
+uploaded to a web server when the test finishes. The URLs to retrieve
+the files will be output in the test log.
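+
+As a rough sketch for a privileged (chrome-scope) test, where ``Services``,
+``PathUtils`` and ``IOUtils`` are available (the file name and contents here
+are made up):
+
+.. code-block:: js
+
+   const uploadDir = Services.env.get("MOZ_UPLOAD_DIR");
+   if (uploadDir) {
+     // Anything written here gets uploaded alongside the test's artifacts.
+     const dest = PathUtils.join(uploadDir, "my-extra-debug-info.txt");
+     await IOUtils.writeUTF8(dest, "extra diagnostic output\n");
+   }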
+
+.. _Need_to_set_preferences_for_test-suites:
+
+Need to set preferences for test-suites?
+----------------------------------------
+
+First ask yourself if these prefs need to be enabled for all tests or
+just a subset of tests (e.g. to enable a feature).
+
+.. _Setting_prefs_that_only_apply_to_certain_tests:
+
+Setting prefs that only apply to certain tests
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If the answer is the latter, try to scope the pref as locally as possible to
+the tests that need it. Here are some options:
+
+- If the test runs in chrome scope (e.g. mochitest chrome or
+  browser-chrome), you can use
+  `Services.prefs
+  <https://searchfox.org/mozilla-central/source/modules/libpref/nsIPrefBranch.idl>`__
+  to set the prefs in your test's setup function. Be sure to reset the
+  pref back to its original value during teardown! A sketch of this
+  pattern follows this list.
+
+- Mochitest plain tests can use
+ `SpecialPowers
+ <https://developer.mozilla.org/en-US/docs/Mozilla/Projects/Mochitest/SpecialPowers>`__
+ to set prefs.
+
+- All variants of mochitest can set prefs in their manifests. For
+ example, to set a pref for all tests in a manifest:
+
+ ::
+
+ [DEFAULT]
+ prefs =
+ my.awesome.pref=foo,
+ my.other.awesome.pref=bar,
+ [test_foo.js]
+ [test_bar.js]
+
+- All variants of reftest can also set prefs in their
+ `manifests </layout/Reftest.html>`__.
+
+- All variants of web-platform-tests can also `set prefs in their
+ manifests </web-platform/index.html#enabling-prefs>`__.
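+
+For the first case (chrome-scope tests using ``Services.prefs``), a minimal
+sketch might look like this (the pref name is hypothetical):
+
+.. code-block:: js
+
+   add_setup(function () {
+     const PREF = "my.feature.enabled";
+     const oldValue = Services.prefs.getBoolPref(PREF, false);
+     Services.prefs.setBoolPref(PREF, true);
+     registerCleanupFunction(() => {
+       // Restore the original value during teardown.
+       Services.prefs.setBoolPref(PREF, oldValue);
+     });
+   });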
+
+.. _Setting_prefs_that_apply_to_the_entire_suite:
+
+Setting prefs that apply to the entire suite
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Most test suites define prefs in user.js files that live under
+`testing/profiles
+<https://searchfox.org/mozilla-central/source/testing/profiles>`__.
+Each directory is a profile that contains a ``user.js`` file with a
+number of prefs defined in it. Test suites will then merge one or more
+of these basic profiles into their own profile at runtime. To see which
+profiles apply to which test suites, you can inspect
+`testing/profiles/profiles.json
+<https://searchfox.org/mozilla-central/source/testing/profiles/profiles.json>`__.
+Profiles at the beginning of the list get overridden by profiles at the
+end of the list.
+
+Because this system makes it hard to get an overall view of which
+profiles are set for any given test suite, a handy ``profile`` utility
+was created:
+
+::
+
+ $ cd testing/profiles
+ $ ./profile -- --help
+ usage: profile [-h] {diff,sort,show,rm} ...
+ $ ./profile show mochitest # prints all prefs that will be set in mochitest
+ $ ./profile diff mochitest reftest # prints differences between the mochitest and reftest suites
+
+.. container:: blockIndicator note
+
+ **Note:** JS engine tests do not use testing/profiles yet, instead
+ `set prefs
+ here <https://searchfox.org/mozilla-central/source/js/src/tests/user.js>`__.
+
+Adding New Context to Skip Conditions
+-------------------------------------
+
+Often when standing up new test configurations, it's necessary to add new keys
+that can be used in ``skip-if`` annotations.
+
+.. toctree::
+
+ manifest-sandbox
diff --git a/testing/docs/automated-testing/manifest-sandbox.rst b/testing/docs/automated-testing/manifest-sandbox.rst
new file mode 100644
index 0000000000..19df4ed883
--- /dev/null
+++ b/testing/docs/automated-testing/manifest-sandbox.rst
@@ -0,0 +1,103 @@
+Adding Context to ``manifestparser`` based Manifests
+----------------------------------------------------
+
+Suites that use ``manifestparser``, like Mochitest and XPCShell, have test
+manifests that denote whether a given test should be skipped or not, based
+on a set of contextual keys.
+
+Gecko builds generate a ``target.mozinfo.json`` with metadata about the build.
+An example ``target.mozinfo.json`` might look like :download:`this
+<target.mozinfo.json>`. These keys can then be used in ``skip-if`` annotations in test manifests:
+
+.. code-block::
+
+ skip-if = e10s && os == 'win'
+
+In this case, ``e10s`` is a boolean.
+
+The test harness will download the build's ``target.mozinfo.json``, then update the
+mozinfo dictionary with additional runtime information based on the task or
+runtime environment. This logic lives in `mozinfo
+<https://hg.mozilla.org/mozilla-central/file/default/testing/mozbase/mozinfo/mozinfo/mozinfo.py>`__.
+
+How to Add a Keyword
+~~~~~~~~~~~~~~~~~~~~
+
+Where to add the new key depends on what type of information it is.
+
+1. If the key is a property of the build, you'll need to patch `this file
+ <https://searchfox.org/mozilla-central/source/python/mozbuild/mozbuild/mozinfo.py>`_.
+2. If the key is a property of the test environment, you'll need to patch
+ `mozinfo <https://firefox-source-docs.mozilla.org/mozbase/mozinfo.html>`_.
+3. If the key is a runtime configuration, for example based on a pref that is
+ passed in via mach or the task configuration, then you'll need to update the
+ individual test harnesses. For example, `this location
+ <https://searchfox.org/mozilla-central/rev/a7e33b7f61e7729e2b1051d2a7a27799f11a5de6/testing/mochitest/runtests.py#3341>`_
+ for Mochitest. Currently there is no shared location to set runtime keys
+ across test harnesses.
+
+Adding a Context to Reftest Style Manifests
+-------------------------------------------
+
+Reftests and Crashtests use a different kind of manifest, but the general idea
+is the same.
+
+As before, Gecko builds generate a ``target.mozinfo.json`` with metadata about
+the build. An example ``target.mozinfo.json`` might look like :download:`this
+<target.mozinfo.json>`. This is consumed in the Reftest harness and translated
+into keywords that can be used like:
+
+.. code-block::
+
+   fuzzy-if(cocoaWidget&&isDebugBuild,1-1,85-88)
+
+In this case, ``cocoaWidget`` and ``isDebugBuild`` are booleans.
+
+The test harness will download the build's ``target.mozinfo.json``, then in addition to
+the mozinfo, will query runtime info from the browser to build a sandbox of
+keywords. This logic lives in `manifest.sys.mjs
+<https://searchfox.org/mozilla-central/source/layout/tools/reftest/manifest.sys.mjs#439>`__.
+
+How to Add a Keyword
+~~~~~~~~~~~~~~~~~~~~
+
+Where to add the new key depends on what type of information it is.
+
+1. If the key is a property of the build, you'll need to patch `this file
+ <https://searchfox.org/mozilla-central/source/python/mozbuild/mozbuild/mozinfo.py>`_.
+2. If the key is a property of the test environment or a runtime configuration,
+   then you'll need to update the manifest sandbox.
+
+For example, for Apple Silicon, we can add an ``apple_silicon`` keyword with a
+patch like this:
+
+.. code-block:: diff
+
+ --- a/layout/tools/reftest/manifest.sys.mjs
+ +++ b/layout/tools/reftest/manifest.sys.mjs
+ @@ -572,16 +572,18 @@ function BuildConditionSandbox(aURL) {
+
+ // Set OSX to be the Mac OS X version, as an integer, or undefined
+ // for other platforms. The integer is formed by 100 times the
+ // major version plus the minor version, so 1006 for 10.6, 1010 for
+ // 10.10, etc.
+ var osxmatch = /Mac OS X (\d+).(\d+)$/.exec(hh.oscpu);
+ sandbox.OSX = osxmatch ? parseInt(osxmatch[1]) * 100 + parseInt(osxmatch[2]) : undefined;
+
+ + sandbox.apple_silicon = sandbox.cocoaWidget && sandbox.OSX>=11;
+ +
+ // Plugins are no longer supported. Don't try to use TestPlugin.
+ sandbox.haveTestPlugin = false;
+
+ // Set a flag on sandbox if the windows default theme is active
+ sandbox.windowsDefaultTheme = g.containingWindow.matchMedia("(-moz-windows-default-theme)").matches;
+
+ try {
+ sandbox.nativeThemePref = !prefs.getBoolPref("widget.disable-native-theme-for-content");
+
+
+Then to use this:
+
+.. code-block::
+
+ fuzzy-if(apple_silicon,1-1,281-281) == frame_above_rules_none.html frame_above_rules_none_ref.html
diff --git a/testing/docs/browser-chrome/browsertestutils.rst b/testing/docs/browser-chrome/browsertestutils.rst
new file mode 100644
index 0000000000..96b375fbf5
--- /dev/null
+++ b/testing/docs/browser-chrome/browsertestutils.rst
@@ -0,0 +1,5 @@
+BrowserTestUtils module
+=======================
+
+.. js:autoclass:: BrowserTestUtils
+ :members:
diff --git a/testing/docs/browser-chrome/index.md b/testing/docs/browser-chrome/index.md
new file mode 100644
index 0000000000..d327c8a6f0
--- /dev/null
+++ b/testing/docs/browser-chrome/index.md
@@ -0,0 +1,89 @@
+Browser chrome mochitests
+=========================
+
+Browser chrome mochitests are mochitests that run in the context of the desktop
+Firefox browser window. The test files are named `browser_something.js` by
+convention, and in addition to mochitest assertions support the
+[CommonJS standard assertions](http://wiki.commonjs.org/wiki/Unit_Testing/1.1),
+like [nodejs' assert module](https://nodejs.org/api/assert.html#assert) but
+implemented in [`Assert.sys.mjs`](../assert.rst).
+
+These tests are used to test UI-related behaviour in Firefox for
+Desktop. They do not run on Android. If you're testing internal code that
+does not directly interact with the user interface,
+[xpcshell tests](../xpcshell/index.rst) are probably a better fit for your needs.
+
+
+Running the tests
+-----------------
+
+You can run individual tests locally using the standard `./mach test` command:
+`./mach test path/to/browser_test.js`. You can omit the path if the filename
+is unique. You can also run entire directories, or specific test manifests:
+
+```
+./mach test path/to/browser.toml
+```
+
+You can also use the more specific `./mach mochitest` command in the same way.
+Using `./mach mochitest --help` will give you an exhaustive overview of useful
+other available flags relating to running, debugging and evaluating tests.
+
+For both commands, you can use the `--verify` flag to run the test under
+[test verification](../test-verification/index.rst). This helps flush out
+intermittent issues with the test.
+
+
+On our infrastructure, these tests run in the mochitest-browser-chrome jobs.
+There, they run on a per-manifest basis (so for most manifests, more than one
+test will run while the browser stays open).
+
+The tests also get run in `verify` mode in the `test-verify` jobs, whenever
+the test itself is changed.
+
+Note that these tests use "real" focus and input, so you'll need to avoid
+touching your machine while they run. You can run them with the `--headless`
+flag to avoid this, but some tests may break in this mode.
+
+
+Adding new tests
+----------------
+
+You can use the standard `./mach addtest path/to/new/browser_test.js` command
+to generate a new browser test, and add it to the relevant manifest, if tests
+already exist in that directory. This automatically creates a test file using
+the right template for you, and adds it to the manifest.
+
+If there are no tests in the directory yet (for example, for an entirely new
+feature and directory) you will need to:
+
+1. create an empty `browser.toml` file
+2. add it to the `BROWSER_CHROME_MANIFESTS` collection in a `moz.build` file.
+3. then run the `./mach addtest` command as before.
+
+In terms of the contents of the test, please see [Writing new browser
+mochitests](writing.md).
+
+Debugging tests
+---------------
+
+The `./mach test` and `./mach mochitest` commands support a `--jsdebugger`
+flag which will open the browser toolbox. If you add the
+[`debugger;` keyword](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/debugger)
+in your test, the debugger will pause there.
+
+Alternatively, you can set breakpoints using the debugger yourself. If you want
+to pause the debugger before running the test, you can use the `--no-autorun`
+flag. Alternatively, if you want to pause the debugger on failure, you can use
+`--debug-on-failure`.
+
+For more details, see [Avoiding intermittent tests](../intermittent/index.rst).
+
+Reference material
+------------------
+
+- [Assert module](../assert.rst)
+- [TestUtils module](../testutils.rst)
+- [BrowserTestUtils module](browsertestutils.rst)
+- [SimpleTest utilities](../simpletest.rst)
+- [EventUtils utilities](../eventutils.rst)
diff --git a/testing/docs/browser-chrome/writing.md b/testing/docs/browser-chrome/writing.md
new file mode 100644
index 0000000000..3ae51beb47
--- /dev/null
+++ b/testing/docs/browser-chrome/writing.md
@@ -0,0 +1,149 @@
+# Writing new browser mochitests
+
+After [creating a new empty test file](index.md#adding-new-tests), you will
+have an empty `add_task` into which you can write your test.
+
+## General guidance
+
+The test can use `ok`, `is`, `isnot`, as well as all the regular
+[CommonJS standard assertions](http://wiki.commonjs.org/wiki/Unit_Testing/1.1),
+to make test assertions.
+
+The test can use `info` to log strings into the test output.
+``console.log`` will work for local runs of individual tests, but isn't
+normally used for checked-in tests.
+
+The test will run in a separate scope inside the browser window.
+`gBrowser`, `gURLBar`, `document`, and various other globals are thus
+accessible just as they are for non-test code in the same window. However,
+variables declared in the test file will not outlive the test.
+
+## Test architecture
+
+It is the responsibility of individual tests to leave the browser as they
+found it. If the test changes prefs, opens tabs, customizes the UI, or makes
+other changes, it should revert those when it is done.
+
+To help do this, a number of useful primitives are available:
+
+- `add_setup` allows you to add setup tasks that run before any `add_task` tasks.
+- `SpecialPowers.pushPrefEnv` ([see below](#changing-preferences)) allows you to set prefs that will be automatically
+ reverted when the test file has finished running.
+- [`BrowserTestUtils.withNewTab`](browsertestutils.rst#BrowserTestUtils.withNewTab) allows you to easily run async code
+  against a tab that it opens for you and closes when done.
+- `registerCleanupFunction` takes an async callback function that you can use
+ to do any other cleanup your test might need.
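+
+As a small sketch combining a couple of these (the pref name is made up):
+
+```js
+add_setup(async function () {
+  // pushPrefEnv reverts this pref automatically when the test file finishes.
+  await SpecialPowers.pushPrefEnv({ set: [["my.feature.enabled", true]] });
+
+  // For anything pushPrefEnv can't cover, register explicit cleanup.
+  registerCleanupFunction(() => {
+    // e.g. close windows you opened, remove observers, etc.
+  });
+});
+```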
+
+## Common operations
+
+### Opening new tabs and new windows, and closing them
+
+Should be done using the relevant methods in `BrowserTestUtils` (which
+is available without any additional work).
+
+Typical would be something like:
+
+```js
+add_task(async function() {
+  await BrowserTestUtils.withNewTab("https://example.com/mypage", async (browser) => {
+ // `browser` will have finished loading the passed URL when this code runs.
+ // Do stuff with `browser` in here. When the async function exits,
+ // the test framework will clean up the tab.
+ });
+});
+```
+
+### Executing code in the content process associated with a tab or its subframes
+
+Should be done using `SpecialPowers.spawn`:
+
+```js
+let result = await SpecialPowers.spawn(browser, [42, 100], async (val, val2) => {
+ // Replaces the document body with '42':
+ content.document.body.textContent = val;
+ // Optionally, return a result. Has to be serializable to make it back to
+ // the parent process (so DOM nodes or similar won't work!).
+ return Promise.resolve(val2 * 2);
+});
+```
+
+You can pass a BrowsingContext reference instead of `browser` to directly execute
+code in subframes.
+
+Inside the function argument passed to `SpecialPowers.spawn`, `content` refers
+to the `window` of the web content in that browser/BrowsingContext.
+
+For some operations, like mouse clicks, convenience helpers are available on
+`BrowserTestUtils`:
+
+```js
+await BrowserTestUtils.synthesizeMouseAtCenter("#my.css.selector", {accelKey: true}, browser);
+```
+
+### Changing preferences
+
+Use `SpecialPowers.pushPrefEnv`:
+
+```js
+await SpecialPowers.pushPrefEnv({
+ set: [["accessibility.tabfocus", 7]]
+});
+```
+This example sets the pref allowing buttons and other controls to receive tab focus -
+this is the default on Windows and Linux but not on macOS, so it can be necessary in
+order for your test to pass reliably on macOS if it uses keyboard focus.
+
+### Wait for an observer service notification topic or DOM event
+
+Use the utilities for this on [`TestUtils`](../testutils.rst#TestUtils.topicObserved):
+
+```js
+await TestUtils.topicObserved("sync-pane-loaded");
+```
+
+and [`BrowserTestUtils`](browsertestutils.rst#BrowserTestUtils.waitForEvent), respectively:
+
+```js
+await BrowserTestUtils.waitForEvent(domElement, "click");
+```
+
+### Wait for some DOM to update.
+
+Use [`BrowserTestUtils.waitForMutationCondition`](browsertestutils.rst#BrowserTestUtils.waitForMutationCondition).
+Do **not** use `waitForCondition`, which uses a timeout loop and often
+leads to intermittent failures.
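+
+For example, a sketch that waits for a (hypothetical) element to lose its
+`disabled` attribute, driven by DOM mutations rather than polling:
+
+```js
+const item = document.getElementById("my-menu-item"); // hypothetical id
+await BrowserTestUtils.waitForMutationCondition(
+  item,
+  { attributes: true, attributeFilter: ["disabled"] },
+  () => !item.hasAttribute("disabled")
+);
+```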
+
+### Mocking code not under test
+
+The [`Sinon`](https://sinonjs.org/) mocking framework is available. You can import it
+using something like:
+
+```js
+const { sinon } = ChromeUtils.importESModule("resource://testing-common/Sinon.sys.mjs");
+```
+
+More details on how to do mocking are available on the Sinon website.
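+
+As a rough sketch (the module and method below are placeholders for whatever
+your test needs to mock):
+
+```js
+const sandbox = sinon.createSandbox();
+// Stub out a helper that is not under test so it returns a canned value.
+sandbox.stub(SomeModule, "expensiveLookup").resolves("fake result");
+// Always undo the stubbing when the test file is done.
+registerCleanupFunction(() => sandbox.restore());
+```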
+
+## Additional files
+
+You can use extra files (e.g. webpages to load) by adding them to a `support-files`
+property in the `browser.toml` file:
+
+```toml
+["browser_foo.js"]
+support-files = [
+ "bar.html",
+ "baz.js",
+]
+```
+
+## Reusing code across tests
+
+For operations that are common to a specific set of tests, you can use the `head.js`
+file to share JS code.
+
+Where code is needed across various directories of tests, you should consider if it's
+common enough to warrant being in `BrowserTestUtils.sys.mjs`, or if not, setting up
+a separate `jsm` module containing your test helpers. You can add these to
+`TESTING_JS_MODULES` in `moz.build` to avoid packaging them with Firefox. They
+will be available in `resource://testing-common/` to all tests.
diff --git a/testing/docs/chrome-tests/index.rst b/testing/docs/chrome-tests/index.rst
new file mode 100644
index 0000000000..5227c7228f
--- /dev/null
+++ b/testing/docs/chrome-tests/index.rst
@@ -0,0 +1,120 @@
+Chrome Tests
+============
+
+.. _DISCLAIMER:
+
+**DISCLAIMER**
+~~~~~~~~~~~~~~
+
+**NOTE: Please use this document as a reference for existing chrome tests; new chrome tests should not be added.
+If you're trying to test privileged browser code, write a browser mochitest instead;
+if you are testing web platform code, use a wpt test, or a "plain" mochitest if you are unable to use a wpt test.**
+
+.. _Introduction:
+
+Introduction
+~~~~~~~~~~~~
+
+A chrome test is similar but not equivalent to a Mochitest running with chrome privileges.
+
+The chrome test suite is an automated testing framework designed to
+allow testing of application chrome windows using JavaScript.
+It allows you to run JavaScript code in the non-electrolysis (e10s) content area
+with chrome privileges, instead of directly in the browser window (as browser-chrome tests do).
+These tests report results using the same functions as the Mochitest test framework.
+The chrome test suite depends on runtests.py from the Mochitest framework.
+
+.. _Running_the_chrome_tests:
+
+Running the chrome tests
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+To run chrome tests, you need to `build
+Firefox </setup>`__ with your
+changes and find the test or test manifest you want to run.
+
+For example, to run all chrome tests under `toolkit/content`, run the following command:
+
+::
+
+   ./mach test toolkit/content/tests/chrome/chrome.ini
+
+To run a single test, just pass the path to the test into mach:
+
+::
+
+ ./mach test toolkit/content/tests/chrome/test_largemenu.html
+
+You can also pass the path to a directory containing many tests. Run
+`./mach test --help` for full documentation.
+
+.. _Writing_chrome_tests:
+
+Writing chrome tests
+~~~~~~~~~~~~~~~~~~~~
+
+A chrome test is similar but not equivalent to a Mochitest
+running with chrome privileges, i.e. code and UI are referenced by
+``chrome://`` URIs. A basic XHTML test file could look like this:
+
+.. code:: xml
+
+ <?xml version="1.0"?>
+ <?xml-stylesheet href="chrome://global/skin" type="text/css"?>
+ <?xml-stylesheet href="chrome://mochikit/content/tests/SimpleTest/test.css" type="text/css"?>
+
+ <window title="Demo Test"
+ xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
+ <title>Demo Test</title>
+
+ <script type="application/javascript"
+ src="chrome://mochikit/content/tests/SimpleTest/SimpleTest.js"/>
+
+ <script type="application/javascript">
+ <![CDATA[
+ add_task(async function runTest() {
+ ok (true == 1, "this passes");
+ todo(true === 1, "this fails");
+ });
+ ]]>
+ </script>
+
+ <body xmlns="http://www.w3.org/1999/xhtml">
+ <p id="display"></p>
+ <div id="content" style="display: none"></div>
+ <pre id="test"></pre>
+ </body>
+ </window>
+
+
+The comparison functions are identical to those supported by Mochitests;
+see the Mochitest documentation for more details on how the comparison
+functions work. The `EventUtils helper
+functions <https://searchfox.org/mozilla-central/source/testing/mochitest/tests/SimpleTest/EventUtils.js>`__
+are available on the "EventUtils" object defined in the global scope.
+
+The test suite also supports asynchronous tests.
+To use these asynchronous tests, please use the `add_task() <https://searchfox.org/mozilla-central/source/testing/mochitest/tests/SimpleTest/SimpleTest.js#2025>`__ functionality.
+
+Any exceptions thrown while running a test will be caught and reported
+in the test output as a failure. Exceptions thrown outside of the test's
+context (e.g. in a timeout, event handler, etc) will not be caught, but
+will result in a timed out test.
+
+The test file name must be prefixed with "test_", and must have a file
+extension of ".xhtml". Files that don't match this pattern will be ignored
+by the test harness, but you can still include them. For example, a XUL
+window file opened by your test_demo.xhtml via openDialog should be named
+window_demo.xhtml. Putting the bug number in the file name is recommended
+if your test verifies a bugfix, e.g. "test_bug123456.xhtml".
+
+Helper files can be included, for example, from
+``https://example.com/chrome/dom/workers/test/serviceworkers/serviceworkermanager_iframe.html``.
+
+.. _Adding_a_new_chrome_test_to_the_tree:
+
+Adding a new chrome test to the tree
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To add a new chrome test to the tree, please use `./mach addtest the_test_directory/the_test_you_want_to_create.xhtml`.
+For more information about `addtest`, please run `./mach addtest --help`.
diff --git a/testing/docs/ci-configs/index.md b/testing/docs/ci-configs/index.md
new file mode 100644
index 0000000000..9987cec7e1
--- /dev/null
+++ b/testing/docs/ci-configs/index.md
@@ -0,0 +1,64 @@
+# Configuration Changes
+
+This process outlines how Mozilla will handle configuration changes. For a list of configuration changes, please see the [schedule](schedule.md)
+
+## Infrastructure setup (2-4 weeks)
+
+This happens behind the scenes. When there is a need for a configuration change (an upgrade or the addition of a new platform), the first step
+is to build a machine and get the OS working with taskcluster. This hardware/cloud work is done by IT. Sometimes
+this is as simple as installing a package or changing an OS setting on an existing machine, but it still requires automation and documentation.
+
+In some cases there is little to no work as the CI change is running tests with different runtime settings (environment variables or preferences).
+
+
+## Setting up a pool on try server (1 week)
+
+The next step is getting some machines available on try server. This is where we add some code in tree to support the new config
+(a new worker type, test variant, etc.) and validate that any setup done by IT works with the taskcluster client. Then Releng ensures the target tests
+can run at a basic level (mozharness, testharness, os environment, logging, something passes).
+
+
+## Green up tests (1 week)
+
+This is a stage where Releng will run all the target tests on try server and disable, skip, fail-if all tests that are not passing or frequently
+intermittent. Typically there are a dozen or so iterations of this because a crash on one test means we don't run the rest of the tests in the
+manifest.
+
+
+## Turn on new config as tier-2 (1/2 week)
+
+We will time this at the start of a new release.
+
+Releng will land changes to manifests for all non-passing tests and then schedule the new jobs by default. This will be tier-2 for a couple of reasons:
+ * it is a new config with a lot of tests that still need attention
+ * in many cases there is a previous config (let's say upgrading Windows 10 from 1803 -> 1903) which is still running in parallel as tier-1
+
+This will now run on central and integration and be available on try server. In a few cases where there are limited machines (android phones),
+there may be a need to turn off the old config, or to hide try server access behind `./mach try --full`.
+
+
+## Turn on new backstop jobs which run the skipped tests (1/2 week)
+
+Releng will turn on a new temporary job that will run the tests which are not green by default. These will run as tier-2 on mozilla-central and be sheriffed.
+
+The goal here is to find tests that are now passing and should be run by default. By doing this we are effectively running all the tests instead of
+disabling dozens of tests and forgetting about them.
+
+
+## Handoff to developers (1 week)
+
+Releng will file bugs for all failing tests (one bug per manifest) and needinfo the triage owner to raise awareness that one or more tests in their area need
+attention. At this point, Releng is done and will move on to other work. Developers can reproduce the failures on try server and, once fixed, edit the manifest
+as appropriate.
+
+There will be at least 6 weeks to investigate and fix the tests before they are promoted to tier-1.
+
+
+## Move config to tier-1 (6-7 weeks later)
+
+After the config that has been running as tier-2 makes it to beta and then to the release branch (i.e. 2 new releases later), Releng will:
+ * turn off the old tier-1 tests (if applicable)
+ * promote the tier-2 jobs to tier-1
+ * turn off the backstop jobs
+
+This allows developers to schedule time within a 6-week period to investigate and fix any test failures.
diff --git a/testing/docs/ci-configs/schedule.md b/testing/docs/ci-configs/schedule.md
new file mode 100644
index 0000000000..6b95b589ab
--- /dev/null
+++ b/testing/docs/ci-configs/schedule.md
@@ -0,0 +1,50 @@
+# Schedule
+
+For each CI config change, we need to follow:
+ * scope of work (what will run, how frequently)
+ * capacity planning (cost, physical space limitations)
+ * will this replace anything or is this 100% new
+ * puppet/deployment scripts or documentation
 * set up a pool on try server
+ * document updates on this page, and communicate with release management and others as appropriate
+
+
+## Current / Future CI config changes
+
+Start Date | Completed | Tracking Bug | Description
+--- | --- | --- | ---
+TBD | TBD | TBD | Upgrade Ubuntu 18.04 -> Ubuntu 22.04 X11
+TBD | TBD | TBD | Add Ubuntu 22.04 Wayland
+TBD | TBD | TBD | Upgrade Mac M1 from 11.2.3 -> 13.2.1
+TBD | TBD | TBD | replace 2017 acer perf laptops with lower end NUCs
+TBD | TBD | TBD | replace windows moonshots with mid level NUCs
+TBD | TBD | TBD | Upgrade android emulators to modern version
+
+
+## Completed CI config changes
+
+Start Date | Completed | Tracking Bug | Description
+--- | --- | --- | ---
+October 2022 | March 2023 | [Bug 1794900](https://bugzilla.mozilla.org/show_bug.cgi?id=1794900) | Migrate from win10 -> win11
+November 2022 | February 2023 | [Bug 1804790](https://bugzilla.mozilla.org/show_bug.cgi?id=1804790) | Migrate Win7 unittests from AWS -> Azure
+October 2022 | February 2023 | [Bug 1794895](https://bugzilla.mozilla.org/show_bug.cgi?id=1794895) | Migrate unittests from pixel2 -> pixel5
+November 2020 | August 2021 | [Bug 1676850](https://bugzilla.mozilla.org/show_bug.cgi?id=1676850) | Windows tests migrate from AWS -> Datacenter/Azure and 1803 -> 20.04
+May 2022 | July 2022 | [Bug 1767486](https://bugzilla.mozilla.org/show_bug.cgi?id=1767486) | Migrate perftests from Moto G5 phones to Samsung A51 phones
+March 2021 | October 2021 | [Bug 1699541](https://bugzilla.mozilla.org/show_bug.cgi?id=1699541) | Migrate from OSX 10.14 -> 10.15
+July 2020 | March 2021 | [Bug 1572739](https://bugzilla.mozilla.org/show_bug.cgi?id=1572739) | upgrade datacenter linux perf machines from ubuntu 16.04 to 18.04
+September 2020 | January 2021 | [Bug 1665012](https://bugzilla.mozilla.org/show_bug.cgi?id=1665012) | Android phones upgrade from version 7 -> 10
+October 2020 | February 2021 | [Bug 1673067](https://bugzilla.mozilla.org/show_bug.cgi?id=1673067) | Run tests on MacOSX Aarch64 (subset in parallel)
+September 2020 | March 2021 | [Bug 1548264](https://bugzilla.mozilla.org/show_bug.cgi?id=1548264) | Python 2.7 -> 3.6 migration in CI
+July 2020 | October 2020| [Bug 1653344](https://bugzilla.mozilla.org/show_bug.cgi?id=1653344) | Remove EDID dongles from MacOSX machines
+August 2020 | September 2020 | [Bug 1643689](https://bugzilla.mozilla.org/show_bug.cgi?id=1643689) | Schedule tests by test selection/manifest
+June 2020 | August 2020 | [Bug 1486004](https://bugzilla.mozilla.org/show_bug.cgi?id=1486004) | Android hardware tests running without rooted phones
+August 2019 | January 2020 | [Bug 1572242](https://bugzilla.mozilla.org/show_bug.cgi?id=1572242) | Upgrade Ubuntu from 16.04 to 18.04 (finished in January)
+
+
+## Appendix:
+ * *OS*: base operating system such as Android, Linux, Mac OSX, Windows
+ * *Hardware*: specific cpu/memory/disk/graphics/display/inputs that we are using, could be physical hardware we own or manage, or it could be a cloud provider.
+ * *Platform*: a combination of hardware and OS
+ * *Configuration*: what we change on a platform (can be runtime with flags), installed OS software updates (service pack), tools (python/node/etc.), hardware or OS settings (anti-aliasing, display resolution, background processes, clipboard), environment variables, etc.
+ * *Test Failure*: a test doesn’t report the expected result (if we expect fail and we crash, that is unexpected). Typically this is a failure, but it can be a timeout, crash, not run, or even pass
+ * *Greening up*: Assuming all tests return expected results (passing), they are green. When tests fail, they are orange. We need to find a way to get all tests green by investigating test failures.
diff --git a/testing/docs/eventutils.rst b/testing/docs/eventutils.rst
new file mode 100644
index 0000000000..37eaa4bd91
--- /dev/null
+++ b/testing/docs/eventutils.rst
@@ -0,0 +1,45 @@
+EventUtils documentation
+========================
+
+``EventUtils``' methods are available in all browser mochitests on the ``EventUtils``
+object.
+
+In mochitest-plain and mochitest-chrome, you can load
+``"chrome://mochikit/content/tests/SimpleTest/EventUtils.js"`` using a regular
+HTML script tag to gain access to this set of utilities. In this case, all the
+documented methods here are **not** on a separate object, but available as global
+functions.
+
+Mouse input
+-----------
+
+.. js:autofunction:: sendMouseEvent
+.. js:autofunction:: EventUtils.synthesizeMouse
+.. js:autofunction:: synthesizeMouseAtCenter
+.. js:autofunction:: synthesizeNativeMouseEvent
+.. js:autofunction:: synthesizeMouseExpectEvent
+
+.. js:autofunction:: synthesizeWheel
+.. js:autofunction:: EventUtils.synthesizeWheelAtPoint
+.. js:autofunction:: sendWheelAndPaint
+.. js:autofunction:: sendWheelAndPaintNoFlush
+
+Keyboard input
+--------------
+
+.. js:autofunction:: sendKey
+.. js:autofunction:: EventUtils.sendChar
+.. js:autofunction:: sendString
+.. js:autofunction:: EventUtils.synthesizeKey
+.. js:autofunction:: synthesizeNativeKey
+.. js:autofunction:: synthesizeKeyExpectEvent
+
+Drag and drop
+-------------
+
+.. js:autofunction:: synthesizeDragOver
+.. js:autofunction:: synthesizeDrop
+.. js:autofunction:: synthesizeDropAfterDragOver
+.. js:autofunction:: synthesizePlainDragAndDrop
+.. js:autofunction:: synthesizePlainDragAndCancel
+.. js:autofunction:: sendDragEvent
diff --git a/testing/docs/intermittent/index.rst b/testing/docs/intermittent/index.rst
new file mode 100644
index 0000000000..93acfd980e
--- /dev/null
+++ b/testing/docs/intermittent/index.rst
@@ -0,0 +1,375 @@
+Avoiding intermittent tests
+===========================
+
+Intermittent oranges are test failures which happen intermittently,
+in a seemingly random way. Many of such failures could be avoided by
+good test writing principles. This page tries to explain some of
+those principles for use by people who contribute tests, and also
+those who review them for inclusion into mozilla-central.
+
+They are also called flaky tests in other projects.
+
+A list of patterns which have been known to cause intermittent failures
+comes next, with a description of why each one causes test failures, and
+how to avoid it.
+
+After writing a successful test case, make sure to run it locally,
+preferably in a debug build. Tests may depend on the state of another
+test, or on some future test or browser operation to clean up what they
+leave behind. This is a common problem in browser-chrome; here are things
+to try:
+
+- debug mode, run the test standalone: `./mach <test> <path>/<to>/<test>/test.html|js`
+- debug mode, run the test's directory: `./mach <test> <path>/<to>/<test>`
+- debug mode, run a larger enclosing directory: `./mach <test> <path>/<to>`
+
+
+Accessing DOM elements too soon
+-------------------------------
+
+``data:`` URLs load asynchronously. You should wait for the load event
+of an `<iframe> <https://developer.mozilla.org/docs/Web/HTML/Element/iframe>`__ that is
+loading a ``data:`` URL before trying to access the DOM of the
+subdocument.
+
+For example, the following code pattern is bad:
+
+.. code:: html
+
+ <html>
+ <body>
+ <iframe id="x" src="data:text/html,<div id='y'>"></iframe>
+ <script>
+ var elem = document.getElementById("x").
+ contentDocument.
+ getElementById("y"); // might fail
+ // ...
+ </script>
+ </body>
+ </html>
+
+Instead, write the code like this:
+
+.. code:: html
+
+ <html>
+ <body>
+ <script>
+     function onLoad(iframe) {
+       var elem = iframe.contentDocument.
+                         getElementById("y");
+       // ...
+     }
+   </script>
+   <iframe src="data:text/html,<div id='y'>"
+           onload="onLoad(this)"></iframe>
+ </body>
+ </html>
+
+
+Using script functions before they're defined
+---------------------------------------------
+
+This may be relevant to event handlers, more than anything else. Let's
+say that you have an `<iframe>` and you want to
+do something after it's been loaded, so you might write code like this:
+
+.. code:: html
+
+ <iframe src="..." onload="onLoad()"></iframe>
+ <script>
+ function onLoad() { // oops, too late!
+ // ...
+ }
+ </script>
+
+This is bad, because the
+`<iframe>`'s load may be
+completed before the script gets parsed, and therefore before the
+``onLoad`` function comes into existence. This will cause you to miss
+the `<iframe>` load, which
+may cause your test to time out, for example. The best way to fix this
+is to move the function definition before where it's used in the DOM,
+like this:
+
+.. code:: html
+
+ <script>
+ function onLoad() {
+ // ...
+ }
+ </script>
+ <iframe src="..." onload="onLoad()"></iframe>
+
+
+Relying on the order of asynchronous operations
+-----------------------------------------------
+
+In general, when you have two asynchronous operations, you cannot assume
+any order between them. For example, let's say you have two
+`<iframe>`'s like this:
+
+.. code:: html
+
+ <script>
+ var f1Doc;
+ function f1Loaded() {
+ f1Doc = document.getElementById("f1").contentDocument;
+ }
+ function f2Loaded() {
+ var elem = f1Doc.getElementById("foo"); // oops, f1Doc might not be set yet!
+ }
+ </script>
+ <iframe id="f1" src="..." onload="f1Loaded()"></iframe>
+ <iframe id="f2" src="..." onload="f2Loaded()"></iframe>
+
+This code is implicitly assuming that ``f1`` will be loaded before
+``f2``, but this assumption is incorrect. A simple fix is to just
+detect when all of the asynchronous operations have been finished, and
+then do what you need to do, like this:
+
+.. code:: html
+
+ <script>
+ var f1Doc, loadCounter = 0;
+ function process() {
+ var elem = f1Doc.getElementById("foo");
+ }
+ function f1Loaded() {
+ f1Doc = document.getElementById("f1").contentDocument;
+ if (++loadCounter == 2) process();
+ }
+ function f2Loaded() {
+ if (++loadCounter == 2) process();
+ }
+ </script>
+ <iframe id="f1" src="..." onload="f1Loaded()"></iframe>
+ <iframe id="f2" src="..." onload="f2Loaded()"></iframe>
+
+
+Using magical timeouts to cause delays
+--------------------------------------
+
+Sometimes when there is an asynchronous operation going on, it may be
+tempting to use a timeout to wait a while, hoping that the operation has
+been finished by then and that it's then safe to continue. Such code
+uses patterns like this:
+
+.. code:: js
+
+ setTimeout(handler, 500);
+
+This should raise an alarm in your head. As soon as you see such code,
+you should ask yourself: "Why 500, and not 100? Why not 1000? Why not
+328, for that matter?" You can never answer this question, so you
+should always avoid code like this!
+
+What's wrong with this code is that you're assuming that 500ms is enough
+for whatever operation you're waiting for. This may stop being true
+depending on the platform, whether it's a debug or optimized build of
+Firefox running this code, machine load, whether the test is run on a
+VM, etc. And it will start failing, sooner or later.
+
+Instead of code like this, you should wait for the operation to be
+completed explicitly. Most of the time this can be done by listening
+for an event. Some of the time there is no good event to listen for, in
+which case you can add one to the code responsible for the completion of
+the task at hand.
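+
+As a sketch, instead of the timeout above, run ``handler`` when the
+operation actually signals completion (the event name here is made up):
+
+.. code:: js
+
+   // No guessing how long the operation takes: react to its completion event.
+   window.addEventListener("my-operation-done", handler, { once: true });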
+
+Ideally magical timeouts are never necessary, but there are a couple
+cases, in particular when writing web-platform-tests, where you might
+need them. In such cases consider documenting why a timer was used so it
+can be removed if in the future it turns out to be no longer needed.
+
+
+Using objects without accounting for the possibility of their death
+-------------------------------------------------------------------
+
+This is a very common pattern in our test suite, which was recently
+discovered to be responsible for many intermittent failures:
+
+.. code:: js
+
+ function runLater(func) {
+ var timer = Cc["@mozilla.org/timer;1"].createInstance(Ci.nsITimer);
+ timer.initWithCallback(func, 0, Ci.nsITimer.TYPE_ONE_SHOT);
+ }
+
+The problem with this code is that it assumes that the ``timer`` object
+will live long enough for the timer to fire. That may not be the case
+if a garbage collection is performed before the timer needs to fire. If
+that happens, the ``timer`` object will get garbage collected and will
+go away before the timer has had a chance to fire. A simple way to fix
+this is to make the ``timer`` object global, so that an outstanding
+reference to the object would still exist by the time that the garbage
+collection code attempts to collect it.
+
+.. code:: js
+
+ var timer;
+ function runLater(func) {
+ timer = Cc["@mozilla.org/timer;1"].createInstance(Ci.nsITimer);
+ timer.initWithCallback(func, 0, Ci.nsITimer.TYPE_ONE_SHOT);
+ }
+
+A similar problem may happen with ``nsIWebProgressListener`` objects
+passed to the ``nsIWebProgress.addProgressListener()`` method, because
+the web progress object stores a weak reference to the
+``nsIWebProgressListener`` object, which does not prevent it from being
+garbage collected.
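+
+A sketch of the usual workaround in a browser-chrome test: keep a strong
+reference to the listener in a variable that outlives the asynchronous
+operation, and remove the listener during cleanup.
+
+.. code:: js
+
+   // A file-level variable keeps the listener alive while we still expect
+   // progress notifications, despite the weak reference held by webProgress.
+   var gProgressListener = {
+     QueryInterface: ChromeUtils.generateQI([
+       "nsIWebProgressListener",
+       "nsISupportsWeakReference",
+     ]),
+     onStateChange(webProgress, request, stateFlags, status) {
+       // ...
+     },
+   };
+   gBrowser.selectedBrowser.webProgress.addProgressListener(
+     gProgressListener,
+     Ci.nsIWebProgress.NOTIFY_STATE_ALL
+   );
+   registerCleanupFunction(() => {
+     gBrowser.selectedBrowser.webProgress.removeProgressListener(
+       gProgressListener
+     );
+   });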
+
+
+Tests which require focus
+-------------------------
+
+Some tests require the application window to be focused in order to
+function properly.
+
+For example if you're writing a crashtest or reftest which tests an
+element which is focused, you need to specify it in the manifest file,
+like this:
+
+::
+
+ needs-focus load my-crashtest.html
+ needs-focus == my-reftest.html my-reftest-ref.html
+
+Also, if you're writing a mochitest which synthesizes keyboard events
+using ``synthesizeKey()``, the window needs to be focused; otherwise the
+test will fail intermittently on Linux. You can ensure this by using
+``SimpleTest.waitForFocus()`` and starting your test's work from inside
+that function's callback, as below:
+
+.. code:: js
+
+ SimpleTest.waitForFocus(function() {
+ synthesizeKey("x", {});
+ // ...
+ });
+
+Tests which require mouse interaction, open context menus, etc. may also
+require focus. Note that waitForFocus implicitly waits for a load event
+as well, so it's safe to call it for a window which has not finished
+loading yet.
+
+
+Tests which take too long
+-------------------------
+
+Sometimes what happens in a single unit test is just too much. This
+will cause the test to time out in random places during its execution if
+the running machine is under a heavy load, which is a sign that the test
+needs more time to execute. This may only happen on debug builds, as they are
+slower in general. There are two ways to solve this problem: split the test
+into multiple smaller tests (which might have other advantages as well,
+including better readability), or ask the test runner framework to give the
+test more time to finish correctly. The latter can be done using the
+``requestLongerTimeout`` function.
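+
+For example:
+
+.. code:: js
+
+   // Ask the harness for twice the default timeout for this long-running test.
+   requestLongerTimeout(2);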
+
+
+Tests that do not clean up properly
+-----------------------------------
+
+Sometimes, tests register event handlers for various events, but they
+don't clean up after themselves correctly. Alternatively, sometimes
+tests do things which have persistent effects in the browser running the
+test suite. Examples include opening a new window, adding a bookmark,
+changing the value of a preference, etc.
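+
+A hedged sketch for a browser-chrome test, assuming the test opens a tab and
+sets a made-up preference: registering a cleanup function ensures the changes
+are undone even if an assertion fails partway through.
+
+.. code:: js
+
+   let newTab = BrowserTestUtils.addTab(gBrowser, "https://example.com/");
+   Services.prefs.setBoolPref("my.test.pref", true); // hypothetical pref
+
+   registerCleanupFunction(() => {
+     // Undo everything this test changed, even on failure.
+     gBrowser.removeTab(newTab);
+     Services.prefs.clearUserPref("my.test.pref");
+   });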
+
+In these situations, sometimes the problem is caught as soon as the test
+is checked into the tree. But it's also possible for the thing which
+was not cleaned up properly to have an intermittent effect on future
+(and perhaps seemingly unrelated) tests. These types of intermittent
+failures may be extremely hard to debug, and not obvious at first
+because most people only look at the test in which the failure happens
+instead of previous tests. How the failure would look varies on a case
+by case basis, but one example is `bug
+612625 <https://bugzilla.mozilla.org/show_bug.cgi?id=612625>`__.
+
+
+Not waiting on the specific event that you need
+-----------------------------------------------
+
+Sometimes, instead of waiting for event A, tests wait on event B,
+implicitly hoping that B occurring means that A has occurred too. `Bug
+626168 <https://bugzilla.mozilla.org/show_bug.cgi?id=626168>`__ was an
+example of this. The test really needed to wait for a paint in the
+middle of its execution, but instead it would wait for an event loop
+hit, hoping that by the time that we hit the event loop, a paint has
+also occurred. While these types of assumptions may hold true when
+developing the test, they're not guaranteed to be true every time that
+the test is run. When writing a test, if you have to wait for an event,
+you need to take note of why you're waiting for the event, and what
+exactly you're waiting on, and then make sure that you're really waiting
+on the correct event.
+
+
+Tests that rely on external sites
+---------------------------------
+
+Even if the external site is not actually down, variable performance of
+the external site and external networks can add enough variation to
+test duration to easily cause a test to fail intermittently.
+
+External sites should NOT be used for testing.
+
+
+Tests that rely on Math.random() to create unique values
+--------------------------------------------------------
+
+Sometimes you need unique values in your test. Using ``Math.random()``
+to get unique values works most of the time, but this function actually
+doesn't guarantee that its return values are unique, so your test might
+get repeated values from this function, which means that it may fail
+intermittently. You can use the following pattern instead of calling
+``Math.random()`` if you need values that have to be unique for your
+test:
+
+.. code:: js
+
+ var gUniqueCounter = 0;
+ function generateUniqueValues() {
+ return Date.now() + "-" + (++gUniqueCounter);
+ }
+
+Tests that depend on the current time
+-------------------------------------
+
+When writing a test which depends on the current time, extra attention
+should be paid to different types of behavior depending on when a test
+runs. For example, how does your test handle the case where the
+daylight saving (DST) settings change while it's running? If you're
+testing for a time concept relative to now (like today, yesterday,
+tomorrow, etc) does your test handle the case where these concepts
+change their meaning in the middle of the test (for example, what if
+your test starts at 23:59:36 on a given day and finishes at 00:01:13)?
+
+
+Tests that depend on time differences or comparison
+---------------------------------------------------
+
+When computing time differences, the operating system's timer resolution
+should be taken into account. For example, consecutive calls to
+`Date() <https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Date>`__ are not
+guaranteed to return different values. Also, when crossing XPCOM boundaries,
+different time implementations can give surprising results. For example, when
+comparing a timestamp obtained through :ref:`PR_Now` with one
+obtained through a JavaScript date, the latter may appear to be in the past
+relative to the former! These differences are more pronounced on Windows, where
+the skew can be up to 16ms. In general, timer resolutions are best-effort
+guesses that are not guaranteed (also due to bogus resolutions on
+virtual machines), so it's better to use larger brackets when the
+comparison is really needed.
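+
+For example, a sketch of a comparison that allows some slack for timer skew
+(the helper and the constants are illustrative, not real test APIs):
+
+.. code:: js
+
+   const EXPECTED_DELAY_MS = 200;   // what the code under test should take
+   let start = Date.now();
+   runOperationUnderTest();         // hypothetical synchronous operation
+   let elapsed = Date.now() - start;
+   // Allow ~20ms of slack rather than requiring an exact or strict comparison.
+   ok(elapsed >= EXPECTED_DELAY_MS - 20,
+      "operation took roughly the expected time (allowing for timer skew)");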
+
+
+Tests that destroy the original tab
+-----------------------------------
+
+Tests that remove the original tab from the browser chrome test window
+can cause intermittent oranges, or can, in and of themselves, be
+intermittent oranges. Obviously, both of these outcomes are undesirable.
+You should neither write tests that do this nor r+ tests that do.
+As a general rule, if you call ``addTab`` or other tab-opening methods
+in your test cleanup code, you're probably doing something you shouldn't
+be.
diff --git a/testing/docs/mochitest-plain/faq.md b/testing/docs/mochitest-plain/faq.md
new file mode 100644
index 0000000000..47a8bbed18
--- /dev/null
+++ b/testing/docs/mochitest-plain/faq.md
@@ -0,0 +1,326 @@
+# Mochitest FAQ
+
+## SSL and https-enabled tests
+
+Mochitests must be run from http://mochi.test/ to succeed. However, some tests
+may require use of additional protocols, hosts, or ports to test cross-origin
+functionality.
+
+The Mochitest harness addresses this need by mirroring all content of the
+original server onto a variety of other servers through the magic of proxy
+autoconfig and SSL tunneling. The full list of schemes, hosts, and ports on
+which tests are served is specified in `build/pgo/server-locations.txt`.
+
+The origins described there are not the same, as some of them specify
+particular SSL certificates for testing purposes, while some allow pages on
+that server to request elevated privileges; read the file for full details.
+
+It works as follows: The Mochitest harness includes preference values which
+cause the browser to use proxy autoconfig to match requested URLs with servers.
+The `network.proxy.autoconfig_url` preference is set to a data: URL that
+encodes the JavaScript function, `FindProxyForURL`, which determines the host
+of the given URL. In the case of SSL sites to be mirrored, the function maps
+them to an SSL tunnel, which transparently forwards the traffic to the actual
+server, as per the description of the CONNECT method given in RFC 2817. In this
+manner a single HTTP server at http://127.0.0.1:8888 can successfully emulate
+dozens of servers at distinct locations.
+
+## What if my tests aren't done when onload fires?
+
+Use `add_task()`, or call `SimpleTest.waitForExplicitFinish()` before onload
+fires (and `SimpleTest.finish()` when you're done).
+
+## How can I get the full log output for my test in automation for debugging?
+
+Add the following to your test:
+
+```
+SimpleTest.requestCompleteLog();
+```
+
+## What if I need to change a preference to run my test?
+
+The `SpecialPowers` object provides APIs to get and set preferences:
+
+```js
+await SpecialPowers.pushPrefEnv({ set: [["your-preference", "your-value" ]] });
+// ...
+await SpecialPowers.popPrefEnv(); // Implicit at the end of the test too.
+```
+
+You can also set prefs directly in the manifest:
+
+```ini
+[DEFAULT]
+prefs =
+ browser.chrome.guess_favicon=true
+```
+
+If you need to change a pref when running a test locally, you can use the
+`--setpref` flag:
+
+```
+./mach mochitest --setpref="javascript.options.jit.chrome=false" somePath/someTestFile.html
+```
+
+Equally, if you need to change a string pref:
+
+```
+./mach mochitest --setpref="webgl.osmesa=string with whitespace" somePath/someTestFile.html
+```
+
+## Can tests be run under a chrome URL?
+
+Yes, use [mochitest-chrome](../chrome-tests/index.rst).
+
+## How do I change the HTTP headers or status sent with a file used in a Mochitest?
+
+Create a text file next to the file whose headers you want to modify. The name
+of the text file should be the name of the file whose headers you're modifying
+followed by `^headers^`. For example, if you have a file `foo.jpg`, the
+text file should be named `foo.jpg^headers^`. (Don't try to actually use the
+headers file in any other way in the test, because the HTTP server's
+hidden-file functionality prevents any file ending in exactly one ^ from being
+served.)
+
+Edit the file to contain the headers and/or status you want to set, like so:
+
+```
+HTTP 404 Not Found
+Content-Type: text/html
+Random-Header-of-Doom: 17
+```
+
+The first line sets the HTTP status and a description (optional) associated
+with the file. This line is optional; you don't need it if you're fine with the
+normal response status and description.
+
+Any other lines in the file describe additional headers which you want to add
+or overwrite (most typically the Content-Type header, for the latter case) on
+the response. The format follows the conventions of HTTP, except that you don't
+need to have HTTP line endings and you can't use a header more than once (the
+last line for a particular header wins). The file may end with at most one
+blank line to match Unix text file conventions, but the trailing newline isn't
+strictly necessary.
+
+## How do I write tests that check header values, method types, etc. of HTTP requests?
+
+To write such a test, you simply need to write an SJS (server-side JavaScript)
+for it. See the [testing HTTP server](/networking/http_server_for_testing.rst)
+docs for less mochitest-specific documentation of what you can do in SJS
+scripts.
+
+An SJS is simply a JavaScript file with the extension .sjs which is loaded in a
+sandbox. Don't forget to reference it from your `mochitest.ini` file too!
+
+```ini
+[DEFAULT]
+support-files =
+ test_file.sjs
+```
+
+The global property `handleRequest` defined by the script is then executed with
+request and response objects, and the script populates the response based on the
+information in the request.
+
+Here's an example of a simple SJS:
+
+```js
+function handleRequest(request, response) {
+ // Allow cross-origin, so you can XHR to it!
+ response.setHeader("Access-Control-Allow-Origin", "*", false);
+ // Avoid confusing cache behaviors
+ response.setHeader("Cache-Control", "no-cache", false);
+ response.setHeader("Content-Type", "text/plain", false);
+ response.write("Hello world!");
+}
+```
+
+The script is served at, for example,
+http://mochi.test:8888/tests/PATH/TO/YOUR/test_file.sjs or
+http://{server-location}/tests/PATH/TO/YOUR/test_file.sjs - see
+`build/pgo/server-locations.txt` for the list of server locations.
+
+If you want to actually execute the file, you need to reference it somehow. For
+instance, you can XHR to it or use an HTML element that loads it:
+
+```js
+var xhr = new XMLHttpRequest();
+xhr.open("GET", "http://test/tests/dom/manifest/test/test_file.sjs");
+xhr.onload = function(e){ console.log("loaded!", this.responseText)}
+xhr.send();
+```
+
+The exact properties of the request and response parameters are defined in the
+`nsIHttpRequestMetadata` and `nsIHttpResponse` interfaces in
+`nsIHttpServer.idl`. However, here are a few useful ones:
+
+
+ * `.scheme` (string). The scheme of the request.
+ * `.host` (string). The host of the request.
+ * `.port` (string). The port of the request.
+ * `.method` (string). The HTTP method.
+ * `.httpVersion` (string). The protocol version, typically "1.1".
+ * `.path` (string). The path of the request.
+ * `.headers` (object). Names and values representing the headers.
+ * `.queryString` (string). The query string of the requested URL.
+ * `.bodyInputStream` (nsIInputStream). The request body, readable as an input stream.
+ * `.getHeader(name)`. Gets a request header by name.
+ * `.hasHeader(name)` (boolean). Whether the request has a header with the given name.
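+
+For example, a small SJS that echoes a few of these properties back to the
+test might look like this (a sketch; the custom header name is made up):
+
+```js
+function handleRequest(request, response) {
+  response.setHeader("Content-Type", "text/plain", false);
+  response.setHeader("Cache-Control", "no-cache", false);
+  // Echo the request line back to the caller.
+  response.write(request.method + " " + request.path +
+                 (request.queryString ? "?" + request.queryString : "") + "\n");
+  // Echo a custom header, if the test sent one.
+  if (request.hasHeader("X-Test-Header")) {
+    response.write("X-Test-Header: " + request.getHeader("X-Test-Header") + "\n");
+  }
+}
+```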
+
+**Note**: The browser is free to cache responses generated by your script. If
+you ever want an SJS to return different data for multiple requests to the same
+URL, you should add a `Cache-Control: no-cache` header to the response to
+prevent the test from accidentally failing, especially if it's manually run
+multiple times in the same Mochitest session.
+
+## How do I keep state across loads of different server-side scripts?
+
+Server-side scripts in Mochitest are run inside sandboxes, with a new sandbox
+created for each new load. Consequently, any variables set in a handler don't
+persist across loads. To support state storage, use the `getState(k)` and
+`setState(k, v)` methods defined on the global object. These methods expose a
+key-value storage mechanism for the server, with keys and values as strings.
+(Use JSON to store objects and other structured data.) The myriad servers in
+Mochitest are in reality a single server with some proxying and tunnelling
+magic, so a stored state is the same in all servers at all times.
+
+The `getState` and `setState` methods are scoped to the path being loaded. For
+example, the absolute URLs `/foo/bar/baz`, `/foo/bar/baz?quux`, and
+`/foo/bar/baz#fnord` all share the same state; the state for `/foo/bar` is
+entirely separate.
+
+You should use per-path state whenever possible to avoid inter-test dependencies
+and bugs.
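+
+For example, a per-path request counter might look like this (a minimal
+sketch):
+
+```js
+function handleRequest(request, response) {
+  // Keys and values are strings; the value persists across loads of this path.
+  let hits = Number(getState("hits") || "0") + 1;
+  setState("hits", String(hits));
+  response.setHeader("Content-Type", "text/plain", false);
+  response.setHeader("Cache-Control", "no-cache", false);
+  response.write("request number " + hits);
+}
+```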
+
+However, in rare cases it may be necessary for two scripts to collaborate in
+some manner, and it may not be possible to use a custom query string to request
+divergent behaviors from the script.
+
+For this use case only, you should use the `getSharedState(k)` and
+`setSharedState(k, v)` methods defined on the global object. No restrictions
+are placed on access to this whole-server shared state, and any script may add
+new state that any other script may delete. To avoid accidental conflicts, you
+should use a key within a faux namespace. For example, if you needed shared
+state for an HTML5 video test, you might use a key like
+`dom.media.video:sharedState`.
+
+A further form of state storage is provided by the `getObjectState(k)` and
+`setObjectState(k, v)` methods, which will store any `nsISupports` object.
+These methods reside on the `nsIHttpServer` interface, but a limitation of
+the sandbox object used by the server to process SJS responses means that the
+former is present in the SJS request handler's global environment with the
+signature `getObjectState(k, callback)`, where callback is a function to be
+invoked by `getObjectState` with the object corresponding to the provided key
+as the sole argument.
+
+Note that this value mapping requires the value to be an XPCOM object; an
+arbitrary JavaScript object with no `QueryInterface` method is insufficient.
+If you wish to store a JavaScript object, you may find it useful
+to provide the object with a `QueryInterface` implementation and then make
+use of `wrappedJSObject` to reveal the actual JavaScript object through the
+wrapping performed by XPConnect.
+
+For further details on state-saving mechanisms provided by `httpd.js`, see
+`netwerk/test/httpserver/nsIHttpServer.idl` and the
+`nsIHttpServer.get(Shared|Object)?State` methods.
+
+## How do I write a SJS script that responds asynchronously?
+
+Sometimes you need to respond to a request asynchronously, for example after
+waiting for a short period of time. You can do this by using the
+`processAsync()` and `finish()` functions on the response object passed to the
+`handleRequest()` function.
+
+`processAsync()` must be called before returning from `handleRequest()`. Once
+called, you can at any point call methods on the response object to send more
+of the response. Once you are done, call the `finish()` function. For example,
+you can use the `setState()` / `getState()` functions described above to
+store a request and later retrieve and finish it. However, be aware that the
+browser often reorders requests, so your code must be resilient to that to
+avoid intermittent failures.
+
+```js
+let { setTimeout } = ChromeUtils.importESModule("resource://gre/modules/Timer.sys.mjs");
+
+function handleRequest(request, response) {
+ response.processAsync();
+ response.setHeader("Content-Type", "text/plain", false);
+ response.write("hello...");
+
+ setTimeout(function() {
+ response.write("world!");
+ response.finish();
+ }, 5 * 1000);
+}
+```
+
+For more details, see the `processAsync()` function documentation in
+`netwerk/test/httpserver/nsIHttpServer.idl`.
+
+## How do I get access to the files on the server as XPCOM objects from an SJS script?
+
+If you need access to a file, because it's easier to store image data in a file
+than directly in an SJS script, use the presupplied `SERVER_ROOT` object
+state available to SJS scripts running in Mochitest:
+
+```js
+function handleRequest(req, res) {
+ var file;
+ getObjectState("SERVER_ROOT", function(serverRoot) {
+ file = serverRoot.getFile("tests/content/media/test/320x240.ogv");
+ });
+ // file is now an XPCOM object referring to the given file
+ res.write("file: " + file);
+}
+```
+
+The path you specify is used as a path relative to the root directory served by
+`httpd.js`, and an `nsIFile` corresponding to the file at that location is
+returned.
+
+Beware of typos: the file you specify doesn't actually have to exist
+because file objects are mere encapsulations of string paths.
+
+## Diagnosing and fixing leakcheck failures
+
+Mochitests output a log of the windows and docshells that are created during the
+test during debug builds. At the end of the test, the test runner runs a
+leakcheck analysis to determine if any of them did not get cleaned up before the
+test was ended.
+
+Leaks can happen for a variety of reasons. One common one is that a JavaScript
+event listener is retaining a reference that keeps the window alive.
+
+```js
+// Add an observer.
+Services.obs.addObserver(myObserver, "event-name");
+
+// Make sure to clean it up, or it may leak!
+Services.obs.removeObserver(myObserver, "event-name");
+```
+
+Other sources of issues include accidentally leaving a window or iframe
+attached to the DOM, or setting an iframe's src to a blank string (which
+creates an about:blank page) rather than removing the iframe.
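+
+For example, a sketch of the cleanup at the end of a test (the element id is
+made up):
+
+```js
+// Remove the iframe outright instead of pointing it at about:blank.
+let frame = document.getElementById("test-frame");
+frame.remove();
+```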
+
+Finding the leak can be difficult, but the first step is to reproduce it
+locally. Ensure you are on a debug build and the `MOZ_QUIET` environment
+variable is not set, since the leakcheck analysis works from the test output.
+After reproducing the leak, start commenting out code until the leak goes
+away; once it stops reproducing, narrow down the exact location where it is
+happening.
+
+See [this post](https://crisal.io/words/2019/11/13/shutdown-leak-hunting.html)
+for more advanced debugging techniques involving CC and GC logs.
+
+## How can I run accessibility tests (a11y-checks)?
+
+The accessibility tests could be run locally with the `--enable-a11y-checks` flag:
+
+```
+./mach mochitest --enable-a11y-checks somePath/someTestFile.html
+```
+
+On CI, a11y-checks only run on tier 2 Linux 18.04 x64 WebRender (Opt and Shippable) builds. If you'd like to run only a11y-checks on Try, you can run the `./mach try fuzzy --full` command with the query `a11y-checks linux !wayland !tsan !asan !ccov !debug !devedition` for all checks. Alternatively, to exclude devtools chrome tests, pass the query `swr-a11y-checks` to `./mach try fuzzy --full`.
+
+If you have questions on the results of a11y-checks and the ways to remediate any issues, reach out to the Accessibility team in the [#accessibility room on Matrix](https://matrix.to/#/#accessibility:mozilla.org).
diff --git a/testing/docs/mochitest-plain/index.md b/testing/docs/mochitest-plain/index.md
new file mode 100644
index 0000000000..0bfce6a2fe
--- /dev/null
+++ b/testing/docs/mochitest-plain/index.md
@@ -0,0 +1,301 @@
+# Mochitest
+
+## DISCLAIMER
+
+If you are testing web platform code, prefer writing a [wpt
+test](/web-platform/index.rst) (preferably an upstreamable one).
+
+## Introduction
+
+Mochitest is an automated testing framework built on top of the
+[MochiKit](https://mochi.github.io/mochikit/) JavaScript libraries.
+
+Only things that can be tested using JavaScript (with chrome privileges!) can be
+tested with this framework. Given some creativity, that's actually much more
+than you might first think, but it's not possible to write Mochitest tests to
+directly test a non-scripted C++ component, for example. (Use a compiled-code
+test like [GTest](/gtest/index.rst) to do that.)
+
+## Running tests
+
+To run a single test (perhaps a new test you just added) or a subset of the
+entire Mochitest suite, pass a path parameter to the `mach` command.
+
+For example, to run only the test `test_CrossSiteXHR.html` in the Mozilla source
+tree, you would run this command:
+
+```
+./mach test dom/security/test/cors/test_CrossSiteXHR.html
+```
+
+To run all the tests in `dom/svg/`, this command would work:
+
+```
+./mach test dom/svg/
+```
+
+You can also pass a manifest path to run all tests on that manifest:
+
+```
+./mach test dom/base/test/mochitest.ini
+```
+
+## Running flavors and subsuites
+
+Flavors are variations of the default configuration used to run Mochitest. For
+example, a flavor might have a slightly different set of prefs set for it, a
+custom extension installed or even run in a completely different scope.
+
+The Mochitest flavors are:
+
+ * **plain** - The most basic and common Mochitest. They run in content scope,
+ but can access certain privileged APIs with SpecialPowers.
+
+ * **browser** - These often test the browser UI itself and run in browser
+ window scope.
+
+ * **chrome** - These run in chrome scope and are typically used for testing
+ privileged JavaScript APIs. More information can be found
+ [here](../chrome-tests/index.rst).
+
+ * **a11y** - These test the accessibility interfaces. They can be found under
+ the top `accessible` directory and run in chrome scope. Note that these run
+ without e10s / fission.
+
+A subsuite is similar to a flavor, except that it has an identical
+configuration. It is just logically separated from the "default" subsuite for
+display purposes. For example, devtools is a subsuite of the browser flavor.
+There is no difference in how these two jobs are run. It exists so that the
+devtools team can easily see and run their tests.
+
+**Note**: There are also tags, which are similar to subsuites. Although they
+both are used to logically group related sets of tests, they behave
+differently. For example, applying a subsuite to a test removes that test from
+the default set, whereas a tag does not remove it.
+
+By default, mach finds and runs every test in the given subdirectory no matter
+which flavor or subsuite it belongs to. But sometimes, you might only want to
+run a specific flavor or subsuite. This can be accomplished using the `--flavor`
+(or `-f`) and `--subsuite` options respectively. For example:
+
+
+```
+./mach mochitest -f plain # runs all plain tests
+./mach mochitest -f browser --subsuite devtools # runs all browser tests in the devtools subsuite
+./mach mochitest -f chrome dom/indexedDB # runs all chrome tests in the dom/indexedDB subdirectory
+```
+
+In many cases, it won't be necessary to filter by flavor or subsuite as running
+specific directories will do it implicitly. For example running:
+
+```
+./mach mochitest devtools/
+```
+
+is roughly equivalent to running the `devtools` subsuite. There might be
+situations where you want to run tests that don't belong to any subsuite.
+To do this, use:
+
+```
+./mach mochitest --subsuite default
+```
+
+## Debugging individual tests
+
+If you need to debug an individual test, you could reload the page containing
+the test with the debugger attached. If attaching a debugger before the problem
+shows up is hard (for example, if the browser crashes as the test is loading),
+you can specify a debugger when you run mochitest:
+
+```
+./mach mochitest --debugger=gdb ...
+```
+
+See also the `--debugger-args` and `--debugger-interactive` arguments. You can
+also use the `--jsdebugger` argument to debug JavaScript.
+
+## Finding errors
+
+Search for the string `TEST-UNEXPECTED-FAIL` to find unexpected failures. You
+can also search for `SimpleTest FINISHED` to see the final test summary.
+
+## Logging results
+
+The output from a test run can be sent to the console and/or a file (by default
+the results are only displayed in the browser). There are several levels of
+detail to choose from. The levels are `DEBUG`, `INFO`, `WARNING`, `ERROR` and
+`CRITICAL`, where `DEBUG` produces the highest detail (everything), and
+`CRITICAL` produces the least.
+
+Mochitest uses structured logging. This means that you can use a set of command
+line arguments to configure the log output. To log to stdout using the mach
+formatter and log to a file in JSON format, you can use `--log-mach=-`
+`--log-raw=mochitest.log`. By default the file logging level for all your
+formatters is `INFO` but you can change this using `--log-mach-level=<level>`.
+
+To turn on logging to the console use `--console-level=<level>`.
+
+For example, to log test run output with the default (tbpl) formatter to the
+file `~/mochitest.log` at `DEBUG` level detail you would use:
+
+```
+./mach mochitest --log-tbpl=~/mochitest.log --log-tbpl-level=DEBUG
+```
+
+## Headless mode
+
+The tests must run in a focused window, which effectively prevents any other
+user activity on the machine running them. You can avoid this by using the
+`--headless` argument or `MOZ_HEADLESS=1` environment variable.
+
+```
+./mach mochitest --headless ...
+```
+
+## Writing tests
+
+A Mochitest plain test is simply an HTML or XHTML file that contains some
+JavaScript to test for some condition.
+
+### Asynchronous Tests
+
+Sometimes tests involve asynchronous patterns, such as waiting for events or
+observers. In these cases, you need to use `add_task`:
+
+```js
+add_task(async function my_test() {
+ let keypress = new Promise(...);
+ // .. simulate keypress
+ await keypress;
+ // .. run test
+});
+```
+
+Use `add_setup()` when an asynchronous task is meant to prepare the test for
+running. All setup tasks are executed once, in the order they appear, before
+any test tasks.
+
+```js
+add_setup(async () => {
+ await clearStorage();
+});
+```
+
+Or alternatively, manually call `waitForExplicitFinish` and `finish`:
+
+```js
+SimpleTest.waitForExplicitFinish();
+addEventListener("keypress", function() {
+ // ... run test ...
+ SimpleTest.finish();
+}, false);
+// ... simulate key press ...
+```
+
+
+If you need more time, `requestLongerTimeout(number)` can be quite useful.
+`requestLongerTimeout()` takes an integer factor that is a multiplier for the
+default 45 second timeout. So a factor of 2 means: "Wait for at least 90s
+(2*45s)". This is really useful if you want to pause execution to do a little
+debugging.
+
+### Test functions
+
+Each test must contain some JavaScript that will run and tell Mochitest whether
+the test has passed or failed. `SimpleTest.js` provides a number of functions
+for the test to use, to communicate the results back to Mochitest. These
+include:
+
+
+ * `ok(expressionThatShouldBeTrue, "Description of the check")` -- tests a value for truthiness
+ * `is(actualValue, expectedValue, "Description of the check")` -- compares two values (using Object.is)
+ * `isnot(actualValue, unexpectedValue, "Description of the check")` -- opposite of is()
+
+If you want to include a test for something that currently fails, don't just
+comment it out! Instead, use one of the "todo" equivalents so we notice if it
+suddenly starts passing (at which point the test can be re-enabled):
+
+ * `todo(falseButShouldBeTrue, "Description of the check")`
+ * `todo_is(actualValue, expectedValue, "Description of the check")`
+ * `todo_isnot(actualValue, unexpectedValue, "Description of the check")`
+
+Tests can call a function `info("Message string")` to write a message to the
+test log.
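+
+For example, a minimal sketch exercising a few of these helpers (the
+selectors and expected values are made up):
+
+```js
+const items = document.querySelectorAll("li");
+ok(items.length > 0, "the test page contains at least one list item");
+is(items.length, 3, "the test page contains exactly three list items");
+isnot(document.title, "", "the document has a non-empty title");
+todo(false, "this check is for a feature that is not implemented yet");
+info("finished checking the static DOM");
+```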
+
+In addition to mochitest assertions, mochitest supports the
+[CommonJS standard assertions](http://wiki.commonjs.org/wiki/Unit_Testing/1.1),
+like [nodejs' assert module](https://nodejs.org/api/assert.html#assert) but
+implemented in `Assert.sys.mjs`. These are auto-imported in the browser flavor, but
+need to be imported manually in other flavors.
+
+### Helper functions
+
+Right now, useful helpers derived from MochiKit are available in
+[`testing/mochitest/tests/SimpleTest/SimpleTest.js`](https://searchfox.org/mozilla-central/source/testing/mochitest/tests/SimpleTest/SimpleTest.js).
+
+Although all of MochiKit is available at `testing/mochitest/MochiKit`, only
+include the files that you require, to minimize test load times. Bug 367569
+added `sendChar`, `sendKey`, and `sendString` helpers.
+These are available in [`testing/mochitest/tests/SimpleTest/EventUtils.js`](https://searchfox.org/mozilla-central/source/testing/mochitest/tests/SimpleTest/EventUtils.js).
+
+If you need to access some data files from your Mochitest, you can get a URI
+for them by using `SimpleTest.getTestFileURL("relative/path/to/data.file")`.
+You can then fetch their content using `fetch()` or `XMLHttpRequest`, for example.
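+
+A minimal sketch (the file name is a made-up support file listed in the
+manifest):
+
+```js
+add_task(async function test_load_data_file() {
+  const url = SimpleTest.getTestFileURL("data/example.json");
+  const data = await (await fetch(url)).json();
+  ok(data, "loaded the data file next to the test");
+});
+```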
+
+### Adding tests to the tree
+
+`mach addtest` is the preferred way to add a test to the tree:
+
+```
+./mach addtest --suite mochitest-{plain,chrome,browser-chrome} path/to/new/test
+```
+
+That will add the manifest entry to the relevant manifest (`mochitest.ini`,
+`chrome.ini`, etc. depending on the flavor) to tell the build system about your
+new test, as well as creating the file based on a template.
+
+```ini
+[test_new_feature.html]
+```
+
+Optionally, you can specify metadata for your test, like whether to skip the
+test on certain platforms:
+
+```ini
+[test_new_feature.html]
+skip-if = os == 'win'
+```
+
+The [mochitest.ini format](/build/buildsystem/test_manifests.rst), which is
+recognized by the parser, defines a long list of metadata.
+
+### Adding a new mochitest.ini or chrome.ini file
+
+If a `mochitest.ini` or `chrome.ini` file does not exist in the directory
+where you want to add a test, create it and reference it from the `moz.build`
+file covering that directory. For example, in `gfx/layers/moz.build`, we add
+these two manifest files:
+
+```python
+MOCHITEST_MANIFESTS += ['apz/test/mochitest.ini']
+MOCHITEST_CHROME_MANIFESTS += ['apz/test/chrome.ini']
+```
+
+<!-- TODO: This might be outdated.*
+
+## Getting Stack Traces
+
+
+To get stack when Mochitest crashes:
+
+ * Get a minidump_stackwalk binary for your platform from http://hg.mozilla.org/build/tools/file/tip/breakpad/
+ * Set the MINIDUMP_STACKWALK environment variable to point to the absolute path of the binary.
+
+If the resulting stack trace doesn't have line numbers, run `mach buildsymbols`
+to generate the requisite symbol files.
+
+-->
+
+## FAQ
+
+See the [Mochitest FAQ page](faq.md) for other features and such that you may
+want to use, such as SSL-enabled tests, custom http headers, async tests, leak
+debugging, prefs...
diff --git a/testing/docs/sheriffed-intermittents/index.md b/testing/docs/sheriffed-intermittents/index.md
new file mode 100644
index 0000000000..1fbe2be670
--- /dev/null
+++ b/testing/docs/sheriffed-intermittents/index.md
@@ -0,0 +1,44 @@
+Sheriffed intermittent failures
+===============================
+The Firefox Sheriff team looks at all failures for tasks which are visible by default
+in Treeherder (tier 1 and 2) and are part of a push on a sheriffed tree (autoland,
+mozilla-central, mozilla-beta, mozilla-release, mozilla-esr trees) and determines if the
+failure is a regression or an intermittent failure. In the case of an intermittent
+failure, the sheriffs annotate the failure and the annotation is logged in
+[Treeherder](https://treeherder.mozilla.org/intermittent-failures).
+
+The sheriffs will determine whether a new bug is needed or an existing bug
+already tracks this kind of failure. In most cases sheriffs will annotate the failure
+using a "Single Tracking Bug"; in other cases there will be specific failures that are
+tracked separately.
+
+Single tracking bugs
+--------------------
+Single tracking bugs are used to track test failures seen in CI. These are tracked
+at the test case level (typically path/filename) instead of the error message.
+We have found that many times we have >1 bug tracking failures on a test case,
+but none of the bugs are frequent enough to get attention of the test owners.
+In addition, when a developer is looking into fixing an intermittent failure, they
+are debugging the file, and it is great to see ALL the related failures in one place.
+
+There are two ways to get detailed information about test failures:
+1. In [Treeherder](https://treeherder.mozilla.org/intermittent-failures), when viewing a specific issue,
+there is a table; the far right column of that table is titled `Log`.
+Clicking the text box underneath it populates a drop-down of all failure types,
+and selecting a failure filters on it so you can see the relevant logs.
+2. From `mach test-info`, one can view a breakdown of all failures and where they occur
+(example: `./mach test-info failure-report --bugid 1781668`).
+
+
+The workflow of single tracking bugs is as follows:
+ - Sheriffs find new failures in CI and create new bugs. If Treeherder can find the path,
+ Treeherder will recommend a `single tracking bug` and strip out the error message.
+ - Sheriffs will annotate existing bugs if the failure to annotate has a test path and in
+ the list of bug suggestions there is a `single tracking bug`. Treeherder will offer
+ that up as the choice as long as the test paths match (and other criteria like crashes,
+ assertions, leaks, etc. are met)
+ - Sheriffs will needinfo the triage owner if enough failures occur
+ (currently 30 failures a week, etc.)
+ - Developers will be able to investigate the set of failures and file specific bugs
+ for the ones they are fixing (for some or all of the conditions). It is best practice to use the
+ `single tracking bug` as a META bug and file a new bug blocking the META bug with the specific fix.
diff --git a/testing/docs/simpletest.rst b/testing/docs/simpletest.rst
new file mode 100644
index 0000000000..c137f61329
--- /dev/null
+++ b/testing/docs/simpletest.rst
@@ -0,0 +1,5 @@
+SimpleTest framework
+====================
+
+.. js:autoclass:: SimpleTest
+ :members:
diff --git a/testing/docs/test-verification/index.rst b/testing/docs/test-verification/index.rst
new file mode 100644
index 0000000000..edade4fbab
--- /dev/null
+++ b/testing/docs/test-verification/index.rst
@@ -0,0 +1,239 @@
+Test Verification
+=================
+
+When a changeset adds a new test, or modifies an existing test, the test
+verification (TV) test suite performs additional testing to help find
+intermittent failures in the modified test as quickly as possible. TV
+uses other test harnesses to run the test multiple times, sometimes in a
+variety of configurations. For instance, when a mochitest is
+modified, TV runs the mochitest harness in a verify mode on the modified
+mochitest. That test will be run 10 times, then the same test will be
+run another 5 times, each time in a new browser instance. Once this is
+done, the whole sequence will be repeated in the test chaos mode
+(setting MOZ_CHAOSMODE). If any test run fails then the failure is
+reported normally, testing ends, and the test suite reports the failure.
+
+Initially, there are some limitations:
+
+- TV only applies to mochitests (all flavors and subsuites), reftests
+ (including crashtests and js-reftests) and xpcshell tests; a separate
+ job, TVw, handles web-platform tests.
+- Only some of the test chaos mode features are enabled
+
+.. _Running_test_verification_with_mach:
+
+Running test verification with mach
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Supported test harnesses accept the --verify option:
+
+::
+
+ mach web-platform-test <test> --verify
+
+ mach mochitest <test> --verify
+
+ mach reftest <test> --verify
+
+ mach xpcshell-test <test> --verify
+
+Multiple tests, even manifests or directories, can be verified at once,
+but this is generally not recommended. Verification is easier to
+understand one test at a time!
+
+.. _Verification_steps:
+
+Verification steps
+~~~~~~~~~~~~~~~~~~
+
+Each test harness implements --verify behavior in one or more "steps".
+Each step uses a different strategy for finding intermittent failures.
+For instance, the first step in mochitest verification is running the
+test with --repeat=20; the second step is running the test just once in
+a separate browser session, closing the browser, and repeating that
+sequence several times. If a failure is found in one step, later steps
+are skipped.
+
+.. _Verification_summary:
+
+Verification summary
+~~~~~~~~~~~~~~~~~~~~
+
+Test verification can produce a lot of output, much of it repetitive.
+To summarize what verification has found, each test harness
+prints a summary for each file which has been verified. Each
+verification step has a pass or fail status, and there is an overall
+verification status, such as:
+
+::
+
+ :::
+ ::: Test verification summary for:
+ :::
+ ::: dom/base/test/test_data_uri.html
+ :::
+ ::: 1. Run each test 20 times in one browser. : FAIL
+ ::: 2. Run each test 10 times in a new browser each time. : not run / incomplete
+ :::
+ ::: Test verification FAILED!
+ :::
+
+.. _Long-running_tests_and_verification_duration:
+
+Long-running tests and verification duration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Test verification is intended to be quick: Determine if this test fails
+intermittently as soon as possible, so that a pass or fail result is
+communicated quickly and test resources are not wasted.
+
+Tests have a wide range of run-times, from milliseconds up to many
+minutes. Of course, a test that takes 5 minutes to run may take a very
+long time to verify. There may also be cases where many tests are being
+verified at one time. For instance, in automation a changeset might make
+a trivial change to hundreds of tests at once, or a merge might result
+in a similar situation. Even if each test is reasonably quick to verify,
+the time required to verify all these files may be considerable.
+
+Each test harness which supports the --verify option also supports the
+--max-verify-time option:
+
+::
+
+ mach mochitest <test> --verify --max-verify-time=7200
+
+The default max-verify-time is 3600 seconds (1 hour). If a verification
+step exceeds the max-verify-time, later steps are not run.
+
+In automation, the TV task uses --max-verify-time to try to limit
+verification to about 1 hour, regardless of how many tests are to be
+verified or how long each one runs. If verification is incomplete, the
+task does not fail. It reports success and is green in the treeherder,
+in addition the treeherder "Job Status" pane will also report
+"Verification too long! Not all tests were verified."
+
+.. _Test_Verification_in_Automation:
+
+Test Verification in Automation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In automation, the TV and TVw tasks run whenever a changeset contains
+modifications to a .js, .html, .xhtml or .xul file. The TV/TVw task
+itself checks test manifests to determine if any of the modified files
+are test files; if any of the files are tests, TV/TVw will verify those
+tests.
+
+Treeherder status is:
+
+- **Green**: All modified tests in supported suites were verified with
+ no test failures, or test verification did not have enough time to
+ verify one or more tests.
+- **Orange**: One or more tests modified by this changeset failed
+ verification. **Backout should be considered (but is not
+ mandatory)**, to avoid future intermittent failures in these tests.
+
+There are some limitations:
+
+- Pre-existing conditions: A test may be failing, then updated on a
+ push in a net-positive way, but continue failing intermittently. If
+ the author is aware of the remaining issues, it is probably best not
+ to backout.
+- Failures due to test-verify conditions: In some cases, a test may
+ fail because test-verify runs a test with --repeat, or because
+ test-verify uses chaos mode, but those failures might not arise in
+ "normal" runs of the test. Ideally, all tests should be able to run
+ successfully in test-verify, but there may be exceptions.
+
+.. _Test_Verification_on_try:
+
+Test Verification on try
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+To use test verification on try, use something like:
+
+::
+
+ mach try -b do -p linux64 -u test-verify-e10s --artifact
+
+Tests modified in the push will be verified.
+
+For TVw, use something like:
+
+::
+
+ mach try -b do -p linux64 -u test-verify-wpt-e10s --artifact
+
+Web-platform tests modified in the push will be verified.
+
+You can also run test verification on a test without modifying the test
+using something like:
+
+::
+
+ mach try fuzzy <path-to-test>
+
+
+.. _Skipping_Verification:
+
+Skipping Verification
+~~~~~~~~~~~~~~~~~~~~~
+
+In the great majority of cases, test-verify failures indicate test
+weaknesses that should be addressed.
+
+In unusual cases, where test-verify failures do not provide value,
+test-verify may be "skipped" on a test: In subsequent pushes where the
+test is modified, the test-verify job will not try to verify the skipped
+test.
+
+For mochitests, xpcshell tests, and other tests using the .ini manifest
+format, use something like:
+
+::
+
+ [sometest.html]
+ skip-if = verify
+
+For reftests (including crashtests and jsreftests), use something like:
+
+::
+
+ skip-if(verify) == sometest.html ...
+
+At this time, there is no corresponding support for skipping
+web-platform tests in verify mode.
+
+.. _FAQ:
+
+FAQ
+~~~
+
+**Why is there a "spike" of test-verify failures for my test? Why did it
+stop?**
+
+Bug reports for test-verify failures usually show a "spike" of failures
+on one day. That's because TV only runs against a particular test when
+that test is modified. A particular push modifies the test, TV runs and
+the test fails, and then TV doesn't run again for that test on
+subsequent pushes (until/unless the test files are modified again). Of
+course, when that push is merged to other trees, TV is triggered again,
+so the same failure is usually noted on multiple trees in succession:
+say, one revision on mozilla-inbound, then again when that revision is
+merged to mozilla-central and again on autoland.
+
+**When TV fails, is it worth retriggering?**
+
+No - usually not. TV runs specific tests over and over again - sometimes
+50 times in a single run. Retriggering on treeherder is generally
+unnecessary and will very likely produce the same pass/fail result as
+the original run. In this sense, TV failures are almost always
+"perma-fail".
+
+.. _Contact_information:
+
+Contact information
+~~~~~~~~~~~~~~~~~~~
+
+Test verification is maintained by **gbrown** and **jmaher**. Bugs
+should be filed in **Testing :: General**. You may want to reference
+`bug 1357513 <https://bugzilla.mozilla.org/show_bug.cgi?id=1357513>`__.
diff --git a/testing/docs/testing-policy/index.md b/testing/docs/testing-policy/index.md
new file mode 100644
index 0000000000..c94cca39c4
--- /dev/null
+++ b/testing/docs/testing-policy/index.md
@@ -0,0 +1,26 @@
+# Testing Policy
+
+**Everything that lands in mozilla-central includes automated tests by default**. Every commit has tests that cover every major piece of functionality and expected input conditions.
+
+One of the following Project Tags must be applied in Phabricator before landing, at the discretion of the reviewer:
+* `testing-approved` if it has sufficient automated test coverage.
+* One of `testing-exception-*` if not. After speaking with many teams across the project we’ve identified the most common exceptions, which are detailed below.
+
+## Exceptions
+
+* **testing-exception-unchanged**: Commits that don’t change behavior for end users. For example:
+ * Refactors, mechanical changes, and deleting dead code as long as they aren’t meaningfully changing or removing any existing tests. Authors should consider checking for and adding missing test coverage in a separate commit before a refactor.
+ * Code that doesn’t ship to users (for example: documentation, build scripts and manifest files, mach commands). Effort should be made to test these when regressions are likely to cause bustage or confusion for developers, but it’s left to the discretion of the reviewer.
+* **testing-exception-ui**: Commits that change UI styling, images, or localized strings. While we have end-to-end automated tests that ensure the frontend isn’t totally broken, and screenshot-based tracking of changes over time, we currently rely only on manual testing and bug reports to surface style regressions.
+* **testing-exception-elsewhere**: Commits where tests exist but are somewhere else. This **requires a comment** from the reviewer explaining where the tests are. For example:
+ * In another commit in the Stack.
+ * In a followup bug.
+ * In an external repository for third party code.
+ * When following the [Security Bug Approval Process](https://firefox-source-docs.mozilla.org/bug-mgmt/processes/security-approval.html) tests are usually landed later, but should be written and reviewed at the same time as the commit.
+* **testing-exception-other**: Commits where none of the defined exceptions above apply but it should still be landed. This should be scrutinized by the reviewer before using it - consider whether an exception is actually required or if a test could be reasonably added before using it. This **requires a comment** from the reviewer explaining why it’s appropriate to land without tests. Some examples that have been identified include:
+ * Interacting with external hardware or software and our code is missing abstractions to mock the interaction out.
+ * Inability to reproduce a reported problem, so landing something to test a fix in Nightly.
+
+## Phabricator WebExtension
+
+When accepting a patch on Phabricator, the [phab-test-policy](https://addons.mozilla.org/en-US/firefox/addon/phab-test-policy/) webextension will show the list of available testing tags so you can add one faster.
diff --git a/testing/docs/tests-for-new-config/index.rst b/testing/docs/tests-for-new-config/index.rst
new file mode 100644
index 0000000000..32fdf62b7f
--- /dev/null
+++ b/testing/docs/tests-for-new-config/index.rst
@@ -0,0 +1,130 @@
+Turning on Firefox tests for a new configuration
+================================================
+
+You are ready to go with turning on Firefox tests for a new config. Once you
+get to this stage, you will have seen a try push with all the tests running
+(many not green) to verify some tests pass and there are enough machines
+available to run tests.
+
+For the purpose of this document, assume you are tasked with upgrading Windows
+10 OS from version 1803 -> 1903. To simplify this we can call this `windows_1903`,
+and we need to:
+
+ * create meta bug
+ * push to try
+ * run skip-fails
+ * repeat 2 more times
+ * land changes and turn on tests
+ * turn on run only failures
+
+If you are running this manually or on configs/tests that are not supported with
+`./mach try --new-test-config`, then please follow the steps `here <manual.html>`__
+
+
+Create Meta Bug
+---------------
+
+This is a simple step where you create a meta bug to track the failures associated
+with the tests you are greening up. If this is a test suite (e.g. ``devtools``), it
+is ok to have a meta bug just for the test suite and the new platform.
+
+All bugs related to tests skipped or failing will be blocking this meta bug.
+
+Push to Try Server
+------------------
+
+Now that you have a configuration setup and machines available via try server, it
+is time to run try. If you are migrating mochitest or xpcshell, then you can do:
+
+ ``./mach try fuzzy --no-artifact --full --rebuild 10 --new-test-config -q 'test-windows10-64-1903 mochitest-browser-chrome !ccov !ship !browsertime !talos !asan'``
+
+This will run many tests (thanks to --full and --rebuild 10), but will give plenty
+of useful data.
+
+If you are migrating tests such as:
+
+ * performance
+ * web-platform-tests
+ * reftest / crashtest / jsreftest
+ * mochitest-webgl (has a different process for test skipping)
+ * cppunittest / gtest / junit
+ * marionette / firefox-ui / telemetry
+
+then please follow the steps `here <manual.html>`__ instead.
+
+If you are migrating to a small machine pool, it is best to avoid ``--rebuild 10``
+and instead use ``--rebuild 3``. Likewise, please limit your jobs to the specific
+test suite and variant. The size of a worker pool is shown on the Workers page of
+the Taskcluster instance.
+
+Run skip-fails
+--------------
+
+When the try push is completed it is time to run skip-fails. Skip-fails will
+look at all the test results and automatically create a set of local changes
+with skip-if conditions to green up the tests faster.
+
+``./mach manifest skip-fails --b bugzilla.mozilla.org -m <meta_bug_id> --turbo "https://treeherder.mozilla.org/jobs?repo=try&revision=<rev>"``
+
+Please input the proper `meta_bug_id` and `rev` into the above command.
+
+The first time running this, you will need to get a `Bugzilla API key <https://bugzilla.mozilla.org/userprefs.cgi?tab=apikey>`__. Copy
+this key and add it to your ``~/.config/python-bugzilla/bugzillarc`` file:
+
+.. code-block:: none
+
+ cat bugzillarc
+ [DEFAULT]
+ url = https://bugzilla.mozilla.org
+ [bugzilla.mozilla.org]
+ api_key = <key>
+
+When the command finishes, you will have new bugs created that block the
+meta bug. In addition you will have many changes to manifests adding skip-if
+conditions for tests that fail 40% of the time, or for entire manifests that
+take >20 minutes to run on opt or >40 minutes on debug.
+
+You will need to create a commit (or `--amend` your previous commit if this is round 2 or 3):
+
+``hg commit -m "Bug <meta_bug_id> - Green up tests for <suite> on <platform>"``
+
+
+Repeat 2 More Times
+-------------------
+
+In 3 rounds this should be complete and ready to submit for review and turn on
+the new tests.
+
+There will be additional failures, those will follow the normal process of
+intermittents.
+
+
+Land Changes and Turn on Tests
+------------------------------
+
+After you have a green test run, it is time to land the patches. There could
+be changes needed to the taskgraph in order to add the new hardware type and
+duplicate tests to run on both the old and the new, or create a new variant and
+denote which tests to run on that variant.
+
+Using our example of ``windows_1903``, this would be a new worker type that
+would require these edits:
+
+ * `transforms/tests.py <https://searchfox.org/mozilla-central/source/taskcluster/taskgraph/transforms/tests.py#97>`__ (duplicate windows 10 entries)
+ * `test-platforms.yml <https://searchfox.org/mozilla-central/source/taskcluster/ci/test/test-platforms.yml#229>`__ (copy windows10 debug/opt/shippable/asan entries and make win10_1903)
+ * `test-sets.yml <https://searchfox.org/mozilla-central/source/taskcluster/ci/test/test-sets.yml#293>`__ (ideally you need nothing; otherwise copy ``windows-tests`` and edit the test list)
+
+In general this should allow you to have tests scheduled with no custom flags
+in try server and all of these will be scheduled by default on
+``mozilla-central``, ``autoland``, and ``release-branches``.
+
+Turn on Run Only Failures
+-------------------------
+
+Now that we have tests running regularly, the next step is to take all the
+disabled tests and run them in the special failures job.
+
+We have a basic framework created, but for every test harness (i.e. xpcshell,
+mochitest-gpu, browser-chrome, devtools, web-platform-tests, crashtest, etc.),
+there will need to be a corresponding tier-3 job that is created.
+
+TODO: point to examples of how to add this after we get our first jobs running.
diff --git a/testing/docs/tests-for-new-config/manual.rst b/testing/docs/tests-for-new-config/manual.rst
new file mode 100644
index 0000000000..cf2485251a
--- /dev/null
+++ b/testing/docs/tests-for-new-config/manual.rst
@@ -0,0 +1,224 @@
+:orphan:
+
+Turning on Firefox tests for a new configuration (manual)
+=========================================================
+
+You are ready to go with turning on Firefox tests for a new config. Once you
+get to this stage, you will have seen a try push with all the tests running
+(many not green) to verify some tests pass and there are enough machines
+available to run tests.
+
+For the purpose of this document, assume you are tasked with upgrading Windows
+10 OS from 1803 -> 1903. To simplify this we can call this `windows_1903`, and
+we need to:
+
+ * push to try
+ * analyze test failures
+ * disable tests in manifests
+ * repeat try push until no failures
+ * file bugs for test failures
+ * land changes and turn on tests
+ * turn on run only failures
+
+There are many edge cases, and I will outline them inside each step.
+
+
+Push to Try Server
+------------------
+
+As you have new machines (or cloud instances) available with the updated
+OS/config, it is time to push to try.
+
+In order to run all tests, we would need to execute:
+ ``./mach try fuzzy --no-artifact -q 'test-windows !-raptor- !-talos-' --rebuild 10``
+
+There are a few exceptions here:
+
+ * Perf tests don't need to be run (hence the ``!-raptor- !-talos-``)
+ * Need to make sure we are not building with artifact builds (hence the
+   ``--no-artifact``)
+ * There are jobs hidden behind tier-3, some for a good reason (code coverage is
+   a good example, but fission tests might not be green)
+
+The last piece to sort out is running on the new config; here are some
+considerations for new configs:
+
+ * duplicated jobs (e.g. fission, a11y-checks): you can just run those specific
+   tasks: ``./mach try fuzzy --no-artifact -q 'test-windows fission' --rebuild 5``
+ * new OS/hardware (e.g. aarch64, OS upgrade): you need to reference the new
+   hardware, typically with ``--worker-override``: ``./mach try fuzzy
+   --no-artifact -q 'test-windows' --rebuild 10 --worker-override
+   t-win10-64=gecko-t/t-win10-64-1903``
+
+   * the risk here is a scenario where hardware is limited; then ``--rebuild 10``
+     will create too many tasks and some will expire.
+   * in low hardware situations, either run a subset of tests (e.g.
+     web-platform-tests, mochitest), or use ``--rebuild 3`` and repeat.
+
+
+Analyze Test Failures
+---------------------
+
+A try push will take many hours, it is best to push when you start work and
+then results will be ready later in your day, or push at the end of your day
+and results will be ready when you come back to work the next day. Please
+make sure some tasks start before walking away, otherwise a small typo can
+delay this process by hours or a full day.
+
+The best way to look at test failures is to use Push Health to avoid misleading
+data. Push Health will bucket failures into possible regressions, known
+regression, etc. When looking at 5 data points (from ``--rebuild 10``), this
+will filter out intermittent failures.
+
+There are many reasons you might have invalid or misleading data:
+
+ #. Tests fail intermittently; we need a pattern to know whether a failure is
+    consistent or intermittent.
+ #. We still want to disable high frequency intermittent tests; those are just
+    annoying.
+ #. You could be pushing off a bad base revision (a regression or intermittent
+    failure that comes from the base revision).
+ #. The machines you run on could be bad, skewing the data.
+ #. Infrastructure problems could cause jobs to fail at random places; repeated
+    jobs filter that out.
+ #. Some failures could affect future tests in the same browser session or task.
+ #. If a crash or timeout occurs, it is possible that we will not run all of
+    the tests in the task, therefore believing a test was run 5 times when it
+    was only run once (and failed), or never run at all.
+ #. Task failures that do not have a test name (leak on shutdown, crash on
+    shutdown, timeout on shutdown, etc.).
+
+That is a long list of reasons not to trust the data. Luckily, most of the
+time ``--rebuild 10`` will give us enough data to be confident that we have
+found all the failures and can ignore the random/intermittent ones.
+
+Knowing the reasons for misleading data, here is a way to use `Push Health
+<https://treeherder.mozilla.org/push-health/push?revision=abaff26f8e084ac719bea0438dba741ace3cf5d8&repo=try&testGroup=pr>`__.
+
+ * Alternatively, you could use the `API
+   <https://treeherder.mozilla.org/api/project/try/push/health/?revision=abaff26f8e084ac719bea0438dba741ace3cf5d8>`__
+   to get the raw data and work towards building a tool (see the sketch after
+   this list).
+ * If you write a tool, you will need to parse the resulting JSON, build a list
+   of failures, and match it against the list of job names to find how many
+   times each job ran and how often it failed or passed.
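+
+If you prefer working from the command line, a minimal sketch (assuming
+``curl`` and ``python3`` are available; substitute the revision of your own
+try push) for pulling down the raw data looks like:
+
+.. code:: bash
+
+   # Fetch the raw Push Health data for a try push (substitute your revision).
+   REVISION=abaff26f8e084ac719bea0438dba741ace3cf5d8
+   curl -s "https://treeherder.mozilla.org/api/project/try/push/health/?revision=${REVISION}" \
+     -o push-health.json
+
+   # Pretty-print the JSON so the failure buckets are easier to inspect.
+   python3 -m json.tool push-health.json | less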
+
+The main goal here is to know which ``<path>/<filename>`` entries are failing,
+and to have a list of them. Ideally you would also record some additional
+information like timeout, crash, failure, etc. In the end you might end up with::
+
+ dom/html/test/test_fullscreen-api.html, scrollbar
+ gfx/layers/apz/test/mochitest/test_group_hittest.html, scrollbar
+ image/test/mochitest/test_animSVGImage.html, timeout
+ browser/base/content/test/general/browser_restore_isAppTab.js, crashed
+
+
+
+
+Disable Tests in the Manifest Files
+-----------------------------------
+
+The code sheriffs have been using `this documentation
+<https://wiki.mozilla.org/Auto-tools/Projects/Stockwell/disable-recommended>`__
+for training and reference when they disable intermittents.
+
+First you need to add a new keyword so that it is available for use in
+manifest conditions (e.g. ``skip-if = windows_1903``).
+
+There are many exceptions, but the bulk of the work will fall into one of 4
+categories:
+
+ #. `manifestparser <mochitest_xpcshell_manifest_keywords>`_: \*.toml
+    (mochitest*, firefox-ui, marionette, xpcshell); easy to edit by adding
+    ``skip-if = windows_1903 # <comment>``, with a few exceptions.
+ #. `reftest <reftest_manifest_keywords>`_: \*.list (reftest, crashtest); you
+    need to add a ``fuzzy-if(windows_1903, A, B)``, which is more specific.
+ #. web-platform-test: testing/web-platform/meta/\*\*.ini (wpt, wpt-reftest,
+    etc.); you need to edit/add testing/web-platform/meta/<path>/<testname>.ini
+    and add expected results.
+ #. Other (compiled tests, jsreftest, etc.): edit source code, ask for help.
+
+Basically we want to take every non-intermittent failure found by Push Health
+and edit the corresponding manifest, which typically means:
+
+ * Finding the proper manifest.
+ * Adding the right text to the manifest.
+
+To find the proper manifest: it is typically ``<path>/<harness>.[toml|list]``.
+There are exceptions; if in doubt, use https://searchfox.org to find the
+manifest which contains the test name.
+
+Once you have the manifest, open it in an editor, and search for the exact test
+name (there could be similar named tests).
+
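+As an illustration only (the exact syntax depends on the manifest flavor, and
+the test name and bug number here are placeholders taken from the failure list
+above), a manifestparser-style annotation might look like:
+
+.. code:: ini
+
+   [test_fullscreen-api.html]
+   # Bug xxxxxx - fails on windows_1903 (scrollbar differences)
+   skip-if = windows_1903
+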
+Rerun Try Push, Repeat as Necessary
+-----------------------------------
+
+It is important to test your changes and, for a new platform that will be
+sheriffed, to rerun all the tests at scale.
+
+With your change in a commit, push again to try with ``--rebuild 10`` and come
+back the next day.
+
+As there are so many edge cases, it is quite likely that you will have more
+failures. Mentally plan on 3 iterations of this, where each iteration has
+fewer failures.
+
+Once you get a full push showing no persistent failures, it is time to land
+those changes and turn on the new tests. There is a large risk here: the
+longer you take to find all the failures, the greater the chance of:
+
+ * Bitrot of your patch
+ * New tests being added which could fail on your config
+ * Other edits to tests/tools which could affect your new config
+
+Since the new config process is designed to find failures fast and get the
+changes landed fast, we do not need to ask developers for review; that comes
+after the new config is running successfully, when we notify the teams of
+which tests are failing.
+
+File Bugs for Test Failures
+---------------------------
+
+Once the failure jobs are running on mozilla-central, we have full coverage
+and the ability to run tests on the try server. There could be >100 tests
+marked as ``skip-if``, and filing a bug for each would take a lot of time.
+Instead we file a bug for each manifest that is edited; typically this reduces
+the number of bugs to about 40% of the total tests (averaging out to 2.5 test
+failures per manifest).
+
+When filing the bug, indicate the timeline, how to reproduce the failure, a
+link to the bug where we created the config, and a brief description of the
+config change (e.g. upgrading Windows 10 from version 1803 to 1903); finally,
+needinfo the triage owner indicating that this is a heads up and that these
+tests will be running regularly on mozilla-central for the next 6-7 weeks.
+
+Land Changes and Turn on Tests
+------------------------------
+
+After you have a green test run, it is time to land the patches. Changes may
+be needed in the taskgraph in order to add the new hardware type and duplicate
+tests to run on both the old and the new config, or to create a new variant
+and denote which tests to run on that variant.
+
+Using our example of ``windows_1903``, this would be a new worker type that
+would require these edits:
+
+ * `transforms/tests.py <https://searchfox.org/mozilla-central/source/taskcluster/taskgraph/transforms/tests.py#97>`__ (duplicate windows 10 entries)
+ * `test-platforms.yml <https://searchfox.org/mozilla-central/source/taskcluster/ci/test/test-platforms.yml#229>`__ (copy the windows10 debug/opt/shippable/asan entries and make win10_1903 versions)
+ * `test-sets.yml <https://searchfox.org/mozilla-central/source/taskcluster/ci/test/test-sets.yml#293>`__ (ideally you need nothing; otherwise copy ``windows-tests`` and edit the test list)
+
+In general this should allow tests to be scheduled on the try server with no
+custom flags, and all of these will be scheduled by default on
+``mozilla-central``, ``autoland``, and ``release-branches``.
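+
+Once those changes are in your patch stack, a quick sanity check (a sketch;
+the exact platform name shown in the fuzzy chooser, such as
+``windows10-64-1903``, is hypothetical) is to confirm that the new tasks now
+appear in ``./mach try fuzzy`` without any worker overrides:
+
+.. code:: bash
+
+   # The new platform should now show up in the fuzzy chooser without any
+   # custom flags or --worker-override arguments.
+   ./mach try fuzzy --no-artifact -q 'test-windows10-64-1903'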
+
+Turn on Run Only Failures
+-------------------------
+
+Now that we have tests running regularly, the next step is to take all the
+disabled tests and run them in the special failures job.
+
+We have a basic framework created, but for every test harness (i.e. xpcshell,
+mochitest-gpu, browser-chrome, devtools, web-platform-tests, crashtest, etc.),
+there will need to be a corresponding tier-3 job that is created.
+
+TODO: point to examples of how to add this after we get our first jobs running.
diff --git a/testing/docs/testutils.rst b/testing/docs/testutils.rst
new file mode 100644
index 0000000000..32294e9708
--- /dev/null
+++ b/testing/docs/testutils.rst
@@ -0,0 +1,5 @@
+TestUtils module
+================
+
+.. js:autoclass:: TestUtils
+ :members:
diff --git a/testing/docs/treeherder-try/img/th_bug_suggestions.png b/testing/docs/treeherder-try/img/th_bug_suggestions.png
new file mode 100644
index 0000000000..e0581a653e
--- /dev/null
+++ b/testing/docs/treeherder-try/img/th_bug_suggestions.png
Binary files differ
diff --git a/testing/docs/treeherder-try/img/th_confirm_failures.png b/testing/docs/treeherder-try/img/th_confirm_failures.png
new file mode 100644
index 0000000000..863b304934
--- /dev/null
+++ b/testing/docs/treeherder-try/img/th_confirm_failures.png
Binary files differ
diff --git a/testing/docs/treeherder-try/img/th_filter.png b/testing/docs/treeherder-try/img/th_filter.png
new file mode 100644
index 0000000000..45a98af2d6
--- /dev/null
+++ b/testing/docs/treeherder-try/img/th_filter.png
Binary files differ
diff --git a/testing/docs/treeherder-try/img/th_filter_add.png b/testing/docs/treeherder-try/img/th_filter_add.png
new file mode 100644
index 0000000000..00c631f824
--- /dev/null
+++ b/testing/docs/treeherder-try/img/th_filter_add.png
Binary files differ
diff --git a/testing/docs/treeherder-try/img/th_filter_classifications.png b/testing/docs/treeherder-try/img/th_filter_classifications.png
new file mode 100644
index 0000000000..cca520a02f
--- /dev/null
+++ b/testing/docs/treeherder-try/img/th_filter_classifications.png
Binary files differ
diff --git a/testing/docs/treeherder-try/img/th_mitten.png b/testing/docs/treeherder-try/img/th_mitten.png
new file mode 100644
index 0000000000..5ea3a93fe2
--- /dev/null
+++ b/testing/docs/treeherder-try/img/th_mitten.png
Binary files differ
diff --git a/testing/docs/treeherder-try/img/th_new.png b/testing/docs/treeherder-try/img/th_new.png
new file mode 100644
index 0000000000..388afaf30a
--- /dev/null
+++ b/testing/docs/treeherder-try/img/th_new.png
Binary files differ
diff --git a/testing/docs/treeherder-try/img/th_retrigger.png b/testing/docs/treeherder-try/img/th_retrigger.png
new file mode 100644
index 0000000000..7eb60bab9a
--- /dev/null
+++ b/testing/docs/treeherder-try/img/th_retrigger.png
Binary files differ
diff --git a/testing/docs/treeherder-try/img/th_select_task.png b/testing/docs/treeherder-try/img/th_select_task.png
new file mode 100644
index 0000000000..10bfe22f1f
--- /dev/null
+++ b/testing/docs/treeherder-try/img/th_select_task.png
Binary files differ
diff --git a/testing/docs/treeherder-try/img/th_task_action.png b/testing/docs/treeherder-try/img/th_task_action.png
new file mode 100644
index 0000000000..f8b4a02840
--- /dev/null
+++ b/testing/docs/treeherder-try/img/th_task_action.png
Binary files differ
diff --git a/testing/docs/treeherder-try/index.rst b/testing/docs/treeherder-try/index.rst
new file mode 100644
index 0000000000..995749f77b
--- /dev/null
+++ b/testing/docs/treeherder-try/index.rst
@@ -0,0 +1,139 @@
+Understanding Treeherder Results
+================================
+
+`Treeherder <https://treeherder.mozilla.org/userguide>`__ serves as the primary dashboard for developers looking to view CI test results from their try pushes. The main purpose of the dashboard is to display all the tasks and their status, along with tools to view logs, see failures, add jobs, cancel jobs, or connect to other tools.
+
+When a test fails, it is important to figure out whether this is a regression or an intermittent failure. It is typical to see 3-10% of the jobs fail, although the majority of the failures are intermittent (infrastructure failed, timing is off, the test is somewhat flaky, etc.). Sometimes a regression occurs but the failure is assumed to be intermittent.
+
+There are a few tools to use when a test fails:
+ * Confirm Failure (CF)
+ * Retrigger (R)
+ * NEW annotation (NEW)
+ * MITTEN icon
+ * Bug Suggestions
+
+The quick answer is: use confirm failure on all failures; when all results are in, whichever tests are orange and do not have a mitten icon need further investigation.
+
+It is best to understand how each of these tools works and to use them in combination, so that you are efficient with your time and can find a regression.
+
+Confirm Failure (CF)
+--------------------
+This tool will give a strong signal when it is applicable. Confirm failure works by running the failing test path 10x in the same browser. We have found that this is the strongest signal compared to other methods, but there are limitations:
+ * This requires a test failure that is discoverable in the failure line.
+ * This requires the failure to be in a supported test harness (web-platform-test, mochitest*, reftest, xpcshell).
+ * Some exceptions exist around specific hardware (e.g. android reftests) or required machines.
+ * Running this can result in infrastructure failure.
+ * Some specific tests do not work well when run in this method.
+ * This launches a CF task for every failure (up to 10) discovered in the failing task, so often you end up with >1 CF job.
+
+When this runs, a new task is scheduled and typically only takes a few minutes to complete (a retrigger can take 15-90 minutes depending on the task). If you run confirm failure on a failure and it lacks a test path, uses an unsupported harness, or hits another well known limitation, a retrigger will automatically be scheduled.
+
+To launch confirm failure you need to select the task
+
+ .. image:: img/th_select_task.png
+ :width: 300
+
+
+then click the task action menu
+
+ .. image:: img/th_task_action.png
+ :width: 300
+
+
+and finally select "Confirm Test Failures"
+
+ .. image:: img/th_confirm_failures.png
+ :width: 300
+
+
+When the jobs are done, Treeherder will determine if the CF job passes and add a |MITTEN| icon to the original failure if it is intermittent.
+
+In the future we are planning to run confirm failure automatically on NEW failures only.
+
+
+Retrigger (R)
+-------------
+When a retrigger happens, the entire task is rerun. This sounds straightforward: if a test fails once, rerun it, and if it passes it is intermittent. In reality that isn't the strongest signal, but sometimes it is the best one available. Here are some limitations:
+ * a retrigger can hit infrastructure failures
+ * a retrigger can produce a different set of test failures (maybe not even running your test)
+ * a retrigger runs other tests that could influence the state of the browser/machine, causing uncertain results
+ * a retrigger can take a long time (many tasks run >30 minutes)
+
+Given these limitations, a retrigger is a useful tool for gathering more information, just be prepared to double check the logs and failures in more detail if the task doesn't succeed.
+
+To launch a retrigger, you can select a task:
+
+ .. image:: img/th_select_task.png
+ :width: 300
+
+
+click the rotating arrow icon in the task action bar:
+
+ .. image:: img/th_retrigger.png
+ :width: 300
+
+
+
+OR press the 'r' key on your keyboard.
+
+If a task is retriggered and it is green, the original failing task will have a |MITTEN| icon.
+
+
+NEW annotations (NEW)
+---------------------
+Treeherder keeps a cache of every failure line seen on Autoland and Mozilla-Central in the last 3 weeks. When a new failure line shows up, it is flagged in the task failure summary with a NEW tag. The NEW tag is very successful at finding nearly all regressions; the downside is that many of the NEW tags are seen on intermittents (if an intermittent wasn't seen recently, or the failure line is slightly different).
+
+NEW failures are flagged for all tasks (build, lint, test, etc.) and for all failures (infrastructure failures as well).
+
+.. image:: img/th_new.png
+ :width: 400
+
+
+On the try server, the NEW annotations are shown and can act as a way to quickly filter a large number of failing tasks down to a more manageable number. It is best practice to run confirm failure or retrigger on the failing NEW tasks. To view only tasks with NEW failures, you can:
+
+click on the filter icon for the entire page
+
+.. image:: img/th_filter.png
+ :width: 300
+
+
+select the field "failure classification" and select the value "new failure not classified"
+
+.. image:: img/th_filter_classifications.png
+ :width: 300
+
+
+then click "Add"
+
+.. image:: img/th_filter_add.png
+ :width: 300
+
+
+The above will add ``&failure_classification=6`` to the URL; you can add that manually if you wish.
+
+
+MITTEN icon
+-----------
+Another feature in Treeherder is the |MITTEN| icon. This is added to an orange (failed) job if a retrigger or confirm failure for that job came back green. This is a great visual shortcut for spotting job failures that can be ignored.
+
+
+Bug Suggestions
+---------------
+Treeherder has a long-standing built-in feature: when looking at the "Failure Summary", there will be bug suggestions showing you similar bugs that match the failure.
+
+.. image:: img/th_bug_suggestions.png
+ :width: 300
+
+
+If there is a `Single Tracking Bug <../sheriffed-intermittents/index.html#single-tracking-bugs>`__, only that will be shown.
+
+Some caveats to keep in mind:
+ * It is easy to assume that if there is a bug, the failure is an intermittent.
+ * This doesn't tell you if you have made an intermittent into a permanent failure.
+ * The bug could be for a different configuration (look at the failure history and compare platform, build type, and test variant to make sure this failure isn't spreading).
+ * The bug could have been inactive for months.
+
+
+
+.. |MITTEN| image:: img/th_mitten.png
+ :width: 30
diff --git a/testing/docs/webrender/index.rst b/testing/docs/webrender/index.rst
new file mode 100644
index 0000000000..c5a0358168
--- /dev/null
+++ b/testing/docs/webrender/index.rst
@@ -0,0 +1,90 @@
+WebRender Tests
+===============
+
+The WebRender class of tests are used to test the WebRender module
+(lives in gfx/wr) in a standalone way, without being pulled into Gecko.
+WebRender is written entirely in Rust code, and has its own test suites.
+
+If you are having trouble with these test suites, please contact the
+Graphics team (#gfx on Matrix/Element or Slack) and they will be able to
+point you in the right direction. Bugs against these test suites should
+be filed in the `Core :: Graphics: WebRender`__ component.
+
+__ https://bugzilla.mozilla.org/enter_bug.cgi?product=Core&component=Graphics%3A%20WebRender
+
+WebRender
+---------
+
+The WebRender suite has one linting job, ``WR(tidy)``, and a
+``WR(wrench)`` test job per platform. Generally these test jobs are only
+run if code inside the ``gfx/wr`` subtree is touched, although they may
+also run if upstream files they depend on (e.g. docker images) are
+modified.
+
+WR(tidy)
+~~~~~~~~
+
+The tidy lint job basically runs the ``servo-tidy`` tool on the code in
+the ``gfx/wr`` subtree. This tool checks a number of code style and
+licensing things, and is good at emitting useful error messages if it
+encounters problems. To run this locally, you can do something like
+this:
+
+.. code:: shell
+
+ cd gfx/wr
+ pip install servo-tidy
+ servo-tidy
+
+To run on tryserver, use ``./mach try fuzzy`` and select the
+``webrender-lint-tidy`` job.
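+
+For example, a sketch using the non-interactive query flag:
+
+.. code:: shell
+
+   ./mach try fuzzy -q "webrender-lint-tidy"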
+
+WR(wrench)
+~~~~~~~~~~
+
+The exact commands run by this test job vary per-platform. Generally,
+the commands do some subset of these things:
+
+- build the different webrender crates with different features
+ enabled/disabled to make sure they build without errors
+- run ``cargo test`` to run the built-in rust tests
+- run the reftests to ensure that the rendering produced by WebRender
+ matches the expectations
+- run the rawtests (scenarios hand-written in Rust code) to ensure the
+ behaviour exhibited by WebRender is correct
+
+Running locally (Desktop platforms)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The test scripts can be found in the ``gfx/wr/ci-scripts/`` folder and
+can be run directly from the ``gfx/wr`` folder if you have the
+prerequisite tools (compilers, libraries, etc.) installed. If you build
+mozilla-central you should already have these tools. On MacOS you may
+need to do a ``brew install cmake pkg-config`` in order to get
+additional dependencies needed for building osmesa-src.
+
+.. code:: shell
+
+ cd gfx/wr
+ ci-scripts/linux-debug-tests.sh # use the script for your platform as needed
+
+Note that when running these tests locally, you might get small
+antialiasing differences in the reftests, depending on your local
+freetype library. This may cause a few tests from the ``reftests/text``
+folder to fail. Usually as long as they fail the same before/after your
+patch it shouldn't be a problem, but doing a try push will confirm that.
+
+Running locally (Android emulator/device)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To run the wrench reftests locally on an Android platform, you have to
+first build the wrench tool for Android, and then run the mozharness
+script that will control the emulator/device, install the APK, and run
+the reftests. Steps for doing this are documented in more detail in the
+``gfx/wr/wrench/android.txt`` file.
+
+Running on tryserver
+^^^^^^^^^^^^^^^^^^^^
+
+To run on tryserver, use ``./mach try fuzzy`` and select the appropriate
+``webrender-<platform>-(release|debug)`` job.
diff --git a/testing/docs/xpcshell/index.rst b/testing/docs/xpcshell/index.rst
new file mode 100644
index 0000000000..e9a8e93aca
--- /dev/null
+++ b/testing/docs/xpcshell/index.rst
@@ -0,0 +1,822 @@
+XPCShell tests
+==============
+
+xpcshell tests are quick-to-run tests that are generally used to write
+unit tests. They do not have access to the full browser chrome like
+``browser chrome tests``, and so have much
+lower overhead. They are typically run by using ``./mach xpcshell-test``,
+which initiates a new ``xpcshell`` session with
+the xpcshell testing harness. Anything available to the XPCOM layer
+(through scriptable interfaces) can be tested with xpcshell. See
+``Mozilla automated testing`` and ``pages
+tagged "automated testing"`` for more
+information.
+
+Introducing xpcshell testing
+----------------------------
+
+xpcshell test filenames must start with ``test_``.
+
+Creating a new test directory
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If you need to create a new test directory, then follow the steps here.
+The test runner needs to know about the existence of the tests and how
+to configure them through the use of the ``xpcshell.ini`` manifest file.
+
+First add a ``XPCSHELL_TESTS_MANIFESTS += ['xpcshell.ini']`` declaration
+(with the correct relative ``xpcshell.ini`` path) to the ``moz.build``
+file located in or above the directory.
+
+Then create an empty ``xpcshell.ini`` file to tell the build system
+about the individual tests, and provide any additional configuration
+options.
+
+Creating a new test in an existing directory
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If you're creating a new test in an existing directory, you can simply
+run:
+
+.. code:: bash
+
+ $ ./mach addtest path/to/test/test_example.js
+ $ hg add path/to/test/test_example.js
+
+This will automatically create the test file and add it to
+``xpcshell.ini``; the second line adds the new file to version control.
+
+The test file contains an empty test which will give you an idea of how
+to write a test. There are plenty more examples throughout
+mozilla-central.
+
+Running tests
+-------------
+
+To run the test, execute it by running the ``mach`` command from the
+root of the Gecko source code directory.
+
+.. code:: bash
+
+ # Run a single test:
+ $ ./mach xpcshell-test path/to/tests/test_example.js
+
+ # Test an entire test suite in a folder:
+ $ ./mach xpcshell-test path/to/tests/
+
+ # Or run any type of test, including both xpcshell and browser chrome tests:
+ $ ./mach test path/to/tests/test_example.js
+
+The test is executed by the testing harness. It will call in turn:
+
+- ``run_test`` (if it exists).
+- Any functions added with ``add_task`` or ``add_test`` in the order
+ they were defined in the file.
+
+See also the notes below around ``add_task`` and ``add_test``.
+
+xpcshell Testing API
+--------------------
+
+xpcshell tests have access to the following functions. They are defined
+in
+:searchfox:`testing/xpcshell/head.js <testing/xpcshell/head.js>`
+and
+:searchfox:`testing/modules/Assert.sys.mjs <testing/modules/Assert.sys.mjs>`.
+
+Assertions
+^^^^^^^^^^
+
+- ``Assert.ok(truthyOrFalsy[, message])``
+- ``Assert.equal(actual, expected[, message])``
+- ``Assert.notEqual(actual, expected[, message])``
+- ``Assert.deepEqual(actual, expected[, message])``
+- ``Assert.notDeepEqual(actual, expected[, message])``
+- ``Assert.strictEqual(actual, expected[, message])``
+- ``Assert.notStrictEqual(actual, expected[, message])``
+- ``Assert.rejects(actual, expected[, message])``
+- ``Assert.greater(actual, expected[, message])``
+- ``Assert.greaterOrEqual(actual, expected[, message])``
+- ``Assert.less(actual, expected[, message])``
+- ``Assert.lessOrEqual(actual, expected[, message])``
+
+
+These assertion methods are provided by
+:searchfox:`testing/modules/Assert.sys.mjs <testing/modules/Assert.sys.mjs>`.
+It implements the `CommonJS Unit Testing specification version
+1.1 <http://wiki.commonjs.org/wiki/Unit_Testing/1.1>`__, which
+provides a basic, standardized interface for performing in-code
+logical assertions with optional, customizable error reporting. It is
+*highly* recommended to use these assertion methods. On all of these methods
+you can drop the ``Assert.`` prefix, e.g. ``ok(true)`` rather than
+``Assert.ok(true)``; however, keeping the ``Assert.`` prefix is more
+descriptive and makes it easier to spot where the assertions are.
+
+``Assert.throws(callback, expectedException[, message])``, ``Assert.throws(callback[, message])``
+  Asserts that the provided callback function throws an exception. The
+  ``expectedException`` argument can be an ``Error`` instance, or a
+  regular expression matching part of the error message (like in
+  ``Assert.throws(() => a.b, /is not defined/)``).
+``Assert.rejects(promise, expectedException[, message])``
+  Asserts that the provided promise is rejected. Note that this should
+  be called prefixed with an ``await``. The ``expectedException``
+  argument can be an ``Error`` instance, or a regular expression
+  matching part of the error message. Example:
+  ``await Assert.rejects(myPromise, /bad response/);``
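+
+For illustration, a small sketch (the values here are arbitrary) that
+exercises several of these assertions inside a task:
+
+.. code:: js
+
+   add_task(async function test_assertion_examples() {
+     // Simple value checks.
+     Assert.ok(true, "ok() accepts any truthy value");
+     Assert.equal(1 + 1, 2, "equal() compares with loose (==) equality");
+     Assert.strictEqual("a", "a", "strictEqual() uses strict (===) equality");
+
+     // The expected exception can be a regular expression matching part of
+     // the error message.
+     Assert.throws(() => {
+       throw new Error("boom");
+     }, /boom/, "throws() checks that the callback raises");
+
+     // Promise rejections must be awaited.
+     await Assert.rejects(
+       Promise.reject(new Error("bad response")),
+       /bad response/,
+       "rejects() checks that the promise is rejected"
+     );
+   });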
+
+Test case registration and execution
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+``add_task([condition, ]testFunc)``
+ Add an asynchronous function or to the list of tests that are to be
+ run asynchronously. Whenever the function ``await``\ s a
+ `Promise </en-US/docs/Mozilla/JavaScript_code_modules/Promise.jsm>`__,
+ the test runner waits until the promise is resolved or rejected
+ before proceeding. Rejected promises are converted into exceptions,
+ and resolved promises are converted into values.
+ You can optionally specify a condition which causes the test function
+ to be skipped; see `Adding conditions through the add_task or
+ add_test
+ function <#adding-conditions-through-the-add-task-or-add-test-function>`__
+ for details.
+ For tests that use ``add_task()``, the ``run_test()`` function is
+ optional, but if present, it should also call ``run_next_test()`` to
+ start execution of all asynchronous test functions. The test cases
+ must not call ``run_next_test()``, it is called automatically when
+ the task finishes. See `Async tests <#async-tests>`__, below, for
+ more information.
+``add_test([condition, ]testFunction)``
+ Add a test function to the list of tests that are to be run
+ asynchronously.
+ You can optionally specify a condition which causes the test function
+ to be skipped; see `Adding conditions through the add_task or
+ add_test
+ function <#adding-conditions-through-the-add-task-or-add-test-function>`__
+ for details.
+ Each test function must call ``run_next_test()`` when it's done. For
+ tests that use ``add_test()``, ``the run_test()`` function is
+ optional, but if present, it should also call ``run_next_test()`` to
+ start execution of all asynchronous test functions. In most cases,
+ you should rather use the more readable variant ``add_task()``. See
+ `Async tests <#async-tests>`__, below, for more information.
+``run_next_test()``
+ Run the next test function from the list of asynchronous tests. Each
+ test function must call ``run_next_test()`` when it's done.
+ ``run_test()`` should also call ``run_next_test()`` to start
+ execution of all asynchronous test functions. See `Async
+ tests <#async-tests>`__, below, for more information.
+**``registerCleanupFunction``**\ ``(callback)``
+ Executes the function ``callback`` after the current JS test file has
+ finished running, regardless of whether the tests inside it pass or
+ fail. You can use this to clean up anything that might otherwise
+ cause problems between test runs.
+ If ``callback`` returns a ``Promise``, the test will not finish until
+ the promise is fulfilled or rejected (making the termination function
+ asynchronous).
+ Cleanup functions are called in reverse order of registration.
+``do_test_pending()``
+ Delay exit of the test until do_test_finished() is called.
+ do_test_pending() may be called multiple times, and
+ do_test_finished() must be paired with each before the unit test will
+ exit.
+``do_test_finished()``
+ Call this function to inform the test framework that an asynchronous
+ operation has completed. If all asynchronous operations have
+ completed (i.e., every do_test_pending() has been matched with a
+ do_test_finished() in execution), then the unit test will exit.
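+
+As a brief sketch of ``registerCleanupFunction`` (this assumes the ``Services``
+global available to xpcshell tests, and the preference name is hypothetical):
+
+.. code:: js
+
+   add_task(async function test_with_cleanup() {
+     // Flip a (hypothetical) pref for the duration of this test file.
+     Services.prefs.setBoolPref("test.example.enabled", true);
+
+     // Guarantee the pref is reset even if an assertion below fails.
+     registerCleanupFunction(() => {
+       Services.prefs.clearUserPref("test.example.enabled");
+     });
+
+     Assert.ok(Services.prefs.getBoolPref("test.example.enabled"));
+   });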
+
+Environment
+^^^^^^^^^^^
+
+``do_get_file(testdirRelativePath, allowNonexistent)``
+ Returns an ``nsILocalFile`` object representing the given file (or
+ directory) in the test directory. For example, if your test is
+ unit/test_something.js, and you need to access unit/data/somefile,
+ you would call ``do_get_file('data/somefile')``. The given path must
+ be delimited with forward slashes. You can use this to access
+ test-specific auxiliary files if your test requires access to
+ external files. Note that you can also use this function to get
+ directories.
+
+ .. note::
+
+ **Note:** If your test needs access to one or more files that
+ aren't in the test directory, you should install those files to
+ the test directory in the Makefile where you specify
+ ``XPCSHELL_TESTS``. For an example, see
+ ``netwerk/test/Makefile.in#117``.
+``do_get_profile()``
+ Registers a directory with the profile service and returns an
+ ``nsILocalFile`` object representing that directory. It also makes
+ sure that the **profile-change-net-teardown**,
+ **profile-change-teardown**, and **profile-before-change** `observer
+ notifications </en/Observer_Notifications#Application_shutdown>`__
+ are sent before the test finishes. This is useful if the components
+ loaded in the test observe them to do cleanup on shutdown (e.g.,
+ places).
+
+ .. note::
+
+ **Note:** ``do_register_cleanup`` will perform any cleanup
+ operation *before* the profile and the network is shut down by the
+ observer notifications.
+``do_get_idle()``
+ By default xpcshell tests will disable the idle service, so that idle
+ time will always be reported as 0. Calling this function will
+ re-enable the service and return a handle to it; the idle time will
+ then be correctly requested to the underlying OS. The idle-daily
+ notification could be fired when requesting idle service. It is
+ suggested to always get the service through this method if the test
+ has to use idle.
+``do_get_cwd()``
+ Returns an ``nsILocalFile`` object representing the test directory.
+ This is the directory containing the test file when it is currently
+ being run. Your test can write to this directory as well as read any
+ files located alongside your test. Your test should be careful to
+ ensure that it will not fail if a file it intends to write already
+ exists, however.
+``load(testdirRelativePath)``
+ Imports the JavaScript file referenced by ``testdirRelativePath``
+ into the global script context, executing the code inside it. The
+ file specified is a file within the test directory. For example, if
+ your test is unit/test_something.js and you have another file
+ unit/extra_helpers.js, you can load the second file from the first by
+ calling ``load('extra_helpers.js')``.
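+
+A short sketch tying a few of these together (the ``data/somefile`` path is
+the hypothetical auxiliary file mentioned above):
+
+.. code:: js
+
+   add_task(async function test_environment_helpers() {
+     // Register a profile directory before using services that need one
+     // (e.g. places).
+     do_get_profile();
+
+     // Auxiliary files live next to the test and are addressed with forward
+     // slashes (this assumes data/somefile exists beside the test).
+     let dataFile = do_get_file("data/somefile");
+     Assert.ok(dataFile.exists(), "the auxiliary data file should exist");
+   });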
+
+Utility
+^^^^^^^
+
+``do_parse_document(path, type)``
+ Parses and returns a DOM document.
+``executeSoon(callback)``
+ Executes the function ``callback`` on a later pass through the event
+ loop. Use this when you want some code to execute after the current
+ function has finished executing, but you don't care about a specific
+ time delay. This function will automatically insert a
+ ``do_test_pending`` / ``do_test_finished`` pair for you.
+``do_timeout(delay, fun)``
+ Call this function to schedule a timeout. The given function will be
+ called with no arguments provided after the specified delay (in
+ milliseconds). Note that you must call ``do_test_pending`` so that
+ the test isn't completed before your timer fires, and you must call
+ ``do_test_finished`` when the actions you perform in the timeout
+ complete, if you have no other functionality to test. (Note: the
+ function argument used to be a string argument to be passed to eval,
+ and some older branches support only a string argument or support
+ both string and function.)
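+
+For example, a minimal sketch pairing ``do_timeout`` with
+``do_test_pending``/``do_test_finished``:
+
+.. code:: js
+
+   function run_test() {
+     // Keep the harness alive until do_test_finished() is called.
+     do_test_pending();
+
+     do_timeout(100, function timerFired() {
+       Assert.ok(true, "the timer fired after roughly 100ms");
+       do_test_finished();
+     });
+   }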
+
+Multiprocess communication
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+``do_send_remote_message(name, optionalData)``
+ Asynchronously send a message to all remote processes. Pairs with
+ ``do_await_remote_message`` or equivalent ProcessMessageManager
+ listeners.
+``do_await_remote_message(name, optionalCallback)``
+ Returns a promise that is resolved when the message is received. Must
+ be paired with\ ``do_send_remote_message`` or equivalent
+ ProcessMessageManager calls. If **optionalCallback** is provided, the
+ callback must call ``do_test_finished``. If optionalData is passed
+ to ``do_send_remote_message`` then that data is the first argument to
+ **optionalCallback** or the value to which the promise resolves.
+
+
+xpcshell.ini manifest
+---------------------
+
+The manifest controls what tests are included in a test suite, and the
+configuration of the tests. It is loaded via the ``XPCSHELL_TESTS_MANIFESTS``
+configuration property in a ``moz.build`` file.
+
+The following are all of the configuration options for a test suite as
+listed under the ``[DEFAULT]`` section of the manifest.
+
+``tags``
+ Tests can be filtered by tags when running multiple tests. The
+ command for mach is ``./mach xpcshell-test --tag TAGNAME``
+``head``
+ The relative path to the head JavaScript file, which is run once
+ before a test suite is run. The variables declared in the root scope
+ are available as globals in the test files. See `Test head and
+ support files <#test-head-and-support-files>`__ for more information
+ and usage.
+``firefox-appdir``
+ Set this to "browser" if your tests need access to things in the
+ browser/ directory (e.g. additional XPCOM services that live there)
+``skip-if`` ``run-if`` ``fail-if``
+  For this entire test suite, run the tests only if they meet certain
+  conditions. See `Adding conditions in the xpcshell.ini manifest
+  <#adding-conditions-in-the-xpcshell-ini-manifest>`__ for how to use
+  these properties.
+``support-files``
+ Make files available via the ``resource://test/[filename]`` path to
+ the tests. The path can be relative to other directories, but it will
+ be served only with the filename. See `Test head and support
+ files <#test-head-and-support-files>`__ for more information and
+ usage.
+``[test_*]``
+ Test file names must start with ``test_`` and are listed in square
+ brackets
+
+
+Creating a new xpcshell.ini file
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When creating a new directory and new xpcshell.ini manifest file, the
+following must be added to a moz.build file near that file in the
+directory hierarchy:
+
+.. code:: python
+
+ XPCSHELL_TESTS_MANIFESTS += ['path/to/xpcshell.ini']
+
+Typically, the moz.build containing *XPCSHELL_TESTS_MANIFESTS* is not in
+the same directory as *xpcshell.ini*, but rather in a parent directory.
+Common directory structures look like:
+
+.. code:: bash
+
+ feature
+ ├──moz.build
+ └──tests/xpcshell
+ └──xpcshell.ini
+
+ # or
+
+ feature
+ ├──moz.build
+ └──tests
+ ├──moz.build
+ └──xpcshell
+ └──xpcshell.ini
+
+
+Test head and support files
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Typically in a test suite, similar setup code and dependencies will need
+to be loaded in across each test. This can be done through the test
+head, which is the file declared in the ``xpcshell.ini`` manifest file
+under the ``head`` property. The file itself is typically called
+``head.js``. Any variable declared in the test head will be in the
+global scope of each test in that test suite.
+
+In addition to the test head, other support files can be declared in the
+``xpcshell.ini`` manifest file. This is done through the
+``support-files`` declaration. These files will be made available
+through the url ``resource://test`` plus the name of the file. These
+files can then be loaded in using the
+``ChromeUtils.import`` function
+or other loaders. The support files can be located in other directories as
+well, and they will be made available by their filenames.
+
+.. code:: bash
+
+ # File structure:
+
+ path/to/tests
+ ├──head.js
+ ├──module.jsm
+ ├──moz.build
+ ├──test_example.js
+ └──xpcshell.ini
+
+.. code:: ini
+
+   # xpcshell.ini
+   [DEFAULT]
+   head = head.js
+   support-files =
+     ./module.jsm
+     ../../some/other/file.jsm
+   [test_component_state.js]
+
+.. code:: js
+
+ // head.js
+ var globalValue = "A global value.";
+
+ // Import support-files.
+ const { foo } = ChromeUtils.import("resource://test/module.jsm");
+ const { bar } = ChromeUtils.import("resource://test/file.jsm");
+
+.. code:: js
+
+ // test_example.js
+ function run_test() {
+ equal(globalValue, "A global value.", "Declarations in head.js can be accessed");
+ }
+
+
+Additional testing considerations
+---------------------------------
+
+Async tests
+^^^^^^^^^^^
+
+Asynchronous tests (that is, those whose success cannot be determined
+until after ``run_test`` finishes) can be written in a variety of ways.
+
+Task-based asynchronous tests
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The easiest is using the ``add_task`` helper. ``add_task`` can take an
+asynchronous function as a parameter. ``add_task`` tests are run
+automatically if you don't have a ``run_test`` function.
+
+.. code:: js
+
+   add_task(async function test_foo() {
+     let foo = await makeFoo(); // makeFoo() returns a Promise<foo>
+     equal(foo, expectedFoo, "Should have received the expected object");
+   });
+
+   add_task(async function test_bar() {
+     let bar = await makeBar(); // makeBar() returns a Promise<bar>
+     Assert.equal(bar, expectedBar, "Should have received the expected object");
+   });
+
+Callback-based asynchronous tests
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+You can also use ``add_test``, which takes a function and adds it to the
+list of asynchronously-run functions. Each function given to
+``add_test`` must also call ``run_next_test`` at its end. You should
+normally use ``add_task`` instead of ``add_test``, but you may see
+``add_test`` in existing tests.
+
+.. code:: js
+
+ add_test(function test_foo() {
+ makeFoo(function callback(foo) { // makeFoo invokes a callback<foo> once completed
+ equal(foo, expectedFoo);
+ run_next_test();
+ });
+ });
+
+ add_test(function test_bar() {
+ makeBar(function callback(bar) {
+ equal(bar, expectedBar);
+ run_next_test();
+ });
+ });
+
+
+Other tests
+^^^^^^^^^^^
+
+We can also tell the test harness not to kill the test process once
+``run_test()`` is finished, but to keep spinning the event loop until
+our callbacks have been called and our test has completed. Newer tests
+prefer the use of ``add_task`` rather than this method. This can be
+achieved with ``do_test_pending()`` and ``do_test_finished()``:
+
+.. code:: js
+
+ function run_test() {
+ // Tell the harness to keep spinning the event loop at least
+ // until the next do_test_finished() call.
+ do_test_pending();
+
+ someAsyncProcess(function callback(result) {
+ equal(result, expectedResult);
+
+ // Close previous do_test_pending() call.
+ do_test_finished();
+ });
+ }
+
+
+Testing in child processes
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+By default xpcshell tests run in the parent process. If you wish to run
+test logic in the child, you have several ways to do it:
+
+#. Create a regular test_foo.js test, and then write a wrapper
+   test_foo_wrap.js file that uses the ``run_test_in_child()`` function
+   to run an entire script file in the child (see the sketch after this
+   list). This is an easy way to arrange for a test to be run twice, once
+   in chrome and then later (via the \_wrap.js file) in content. See
+   netwerk/test/unit_ipc for examples. The ``run_test_in_child()``
+   function takes a callback, so you should be able to call it multiple
+   times with different files, if that's useful.
+#. For tests that need to run logic in both the parent + child processes
+ during a single test run, you may use the poorly documented
+ ``sendCommand()`` function, which takes a code string to be executed
+ on the child, and a callback function to be run on the parent when it
+ has completed. You will want to first call
+ do_load_child_test_harness() to set up a reasonable test environment
+ on the child. ``sendCommand`` returns immediately, so you will
+ generally want to use ``do_test_pending``/``do_test_finished`` with
+ it. NOTE: this method of test has not been used much, and your level
+ of pain may be significant. Consider option #1 if possible.
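+
+A minimal sketch of option #1 (the file names are hypothetical):
+
+.. code:: js
+
+   // test_foo_wrap.js - runs all of test_foo.js again, in a child process.
+   function run_test() {
+     run_test_in_child("test_foo.js");
+   }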
+
+See the documentation for ``run_test_in_child()`` and
+``do_load_child_test_harness()`` in testing/xpcshell/head.js for more
+information.
+
+
+Platform-specific tests
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Sometimes you might want a test to know what platform it's running on
+(to test platform-specific features, or allow different behaviors). Unit
+tests are not normally invoked from a Makefile (unlike Mochitests), or
+preprocessed (so not #ifdefs), so platform detection with those methods
+isn't trivial.
+
+
+Runtime detection
+^^^^^^^^^^^^^^^^^
+
+Some tests will want to only execute certain portions on specific
+platforms. Use
+`AppConstants.jsm <https://searchfox.org/mozilla-central/rev/a0333927deabfe980094a14d0549b589f34cbe49/toolkit/modules/AppConstants.jsm#148>`__
+for determining the platform, for example:
+
+.. code:: js
+
+   const { AppConstants } = ChromeUtils.import(
+     "resource://gre/modules/AppConstants.jsm"
+   );
+
+   let isMac = AppConstants.platform == "macosx";
+
+
+Conditionally running a test
+----------------------------
+
+There are two different ways to conditionally skip a test: through the
+``add_task``/``add_test`` function, or through the xpcshell.ini manifest.
+
+
+Adding conditions through the ``add_task`` or ``add_test`` function
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+You can use conditionals on individual test functions instead of entire
+files. The condition is provided as an optional first parameter passed
+into ``add_task()`` or ``add_test()``. The condition is an object which
+contains a function named ``skip_if()``, which is an `arrow
+function </en-US/docs/Web/JavaScript/Reference/Functions/Arrow_functions>`__
+returning a boolean value which is ``true`` if the test should be
+skipped.
+
+For example, you can provide a test which only runs on Mac OS X like
+this:
+
+.. code:: js
+
+   const { AppConstants } = ChromeUtils.import(
+     "resource://gre/modules/AppConstants.jsm"
+   );
+
+   add_task({
+     skip_if: () => AppConstants.platform != "macosx"
+   }, async function some_test() {
+     // Test code goes here
+   });
+
+Since ``AppConstants.platform != "macosx"`` is ``true`` on every platform
+except Mac OS X, the test will be skipped on all other platforms.
+
+.. note::
+
+ **Note:** Arrow functions are ideal here because if your condition
+ compares constants, it will already have been evaluated before the
+ test is even run, meaning your output will not be able to show the
+ specifics of what the condition is.
+
+
+Adding conditions in the xpcshell.ini manifest
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Sometimes you may want to add conditions to specify that a test should
+be skipped in certain configurations, or that a test is known to fail on
+certain platforms. You can do this in xpcshell manifests by adding
+annotations below the test file entry in the manifest, for example:
+
+.. code:: ini
+
+ [test_example.js]
+ skip-if = os == 'win'
+
+This example would skip running ``test_example.js`` on Windows.
+
+.. note::
+
+ **Note:** Starting with Gecko (Firefox 40 / Thunderbird 40 /
+ SeaMonkey 2.37), you can use conditionals on individual test
+ functions instead of on entire files. See `Adding conditions through
+ the add_task or add_test
+ function <#adding-conditions-through-the-add-task-or-add-test-function>`__
+ above for details.
+
+There are currently four conditionals you can specify:
+
+skip-if
+"""""""
+
+``skip-if`` tells the harness to skip running this test if the condition
+evaluates to true. You should use this only if the test has no meaning
+on a certain platform, or causes undue problems like hanging the test
+suite for a long time.
+
+run-if
+""""""
+
+``run-if`` tells the harness to only run this test if the condition
+evaluates to true. It functions as the inverse of ``skip-if``.
+
+fail-if
+"""""""
+
+``fail-if`` tells the harness that this test is expected to fail if the
+condition is true. If you add this to a test, make sure you file a bug
+on the failure and include the bug number in a comment in the manifest,
+like:
+
+.. code:: ini
+
+ [test_example.js]
+ # bug xxxxxx
+ fail-if = os == 'linux'
+
+run-sequentially
+""""""""""""""""
+
+``run-sequentially`` basically tells the harness to run the respective
+test in isolation. This is required for tests that are not
+"thread-safe". You should do all you can to avoid using this option,
+since this will kill performance. However, we understand that there are
+some cases where this is imperative, so we made this option available.
+If you add this to a test, make sure you specify a reason and possibly
+even a bug number, like:
+
+.. code:: ini
+
+ [test_example.js]
+ run-sequentially = Has to launch Firefox binary, bug 123456.
+
+
+Manifest conditional expressions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+For a more detailed description of the syntax of the conditional
+expressions, as well as what variables are available, `see this
+page </en/XPCshell_Test_Manifest_Expressions>`__.
+
+
+Running a specific test only
+----------------------------
+
+When working on a specific feature or issue, it is convenient to only
+run a specific task from a whole test suite. Use ``.only()`` for that
+purpose:
+
+.. code:: js
+
+ add_task(async function some_test() {
+ // Some test.
+ });
+
+ add_task(async function some_interesting_test() {
+ // Only this test will be executed.
+ }).only();
+
+
+Problems with pending events and shutdown
+-----------------------------------------
+
+Events are not processed during test execution if not explicitly
+triggered. This sometimes causes issues during shutdown, when code is
+run that expects previously created events to have been already
+processed. In such cases, this code at the end of a test can help:
+
+.. code:: js
+
+ let thread = gThreadManager.currentThread;
+ while (thread.hasPendingEvents())
+ thread.processNextEvent(true);
+
+
+Debugging xpcshell-tests
+------------------------
+
+
+Running unit tests under the javascript debugger
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+
+Via --jsdebugger
+""""""""""""""""""
+
+You can specify flags when issuing the ``xpcshell-test`` command that
+will cause your test to stop right before running so you can attach the
+`javascript debugger </docs/Tools/Tools_Toolbox>`__.
+
+Example:
+
+.. code:: bash
+
+ $ ./mach xpcshell-test --jsdebugger browser/components/tests/unit/test_browserGlue_pingcentre.js
+ 0:00.50 INFO Running tests sequentially.
+ ...
+ 0:00.68 INFO ""
+ 0:00.68 INFO "*******************************************************************"
+ 0:00.68 INFO "Waiting for the debugger to connect on port 6000"
+ 0:00.68 INFO ""
+ 0:00.68 INFO "To connect the debugger, open a Firefox instance, select 'Connect'"
+ 0:00.68 INFO "from the Developer menu and specify the port as 6000"
+ 0:00.68 INFO "*******************************************************************"
+ 0:00.68 INFO ""
+ 0:00.71 INFO "Still waiting for debugger to connect..."
+ ...
+
+At this stage in a running Firefox instance:
+
+- Go to the three-bar menu, then select ``More tools`` ->
+ ``Remote Debugging``
+- A new tab is opened. In the Network Location box, enter
+ ``localhost:6000`` and select ``Connect``
+- You should then get a link to *``Main Process``*, click it and the
+ Developer Tools debugger window will open.
+- It will be paused at the start of the test, so you can add
+ breakpoints, or start running as appropriate.
+
+If you get a message such as:
+
+::
+
+ 0:00.62 ERROR Failed to initialize debugging: Error: resource://devtools appears to be inaccessible from the xpcshell environment.
+ This can usually be resolved by adding:
+ firefox-appdir = browser
+ to the xpcshell.ini manifest.
+ It is possible for this to alter test behevior by triggering additional browser code to run, so check test behavior after making this change.
+
+This typically happens for a test in core code. You can attempt to add that
+line to the xpcshell.ini; however, as the message says, it might affect how
+the test runs and cause failures. Generally, ``firefox-appdir`` should only be
+left in xpcshell.ini for tests that are in the browser/ directory or are
+Firefox-only.
+
+
+Running unit tests under a C++ debugger
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+
+Via ``--debugger`` and ``--debugger-interactive``
+""""""""""""""""""""""""""""""""""""""""""""""""""""""
+
+You can specify flags when issuing the ``xpcshell-test`` command that
+will launch xpcshell in the specified debugger (implemented in
+`bug 382682 <https://bugzilla.mozilla.org/show_bug.cgi?id=382682>`__).
+Provide the full path to the debugger, or ensure that the named debugger
+is in your system PATH.
+
+Example:
+
+.. code:: bash
+
+ $ ./mach xpcshell-test --debugger gdb --debugger-interactive netwerk/test/unit/test_resumable_channel.js
+ # js>_execute_test();
+ ...failure or success messages are printed to the console...
+ # js>quit();
+
+On Windows with the VS debugger:
+
+.. code:: bash
+
+ $ ./mach xpcshell-test --debugger devenv --debugger-interactive netwerk/test/test_resumable_channel.js
+
+Or with WinDBG:
+
+.. code:: bash
+
+ $ ./mach xpcshell-test --debugger windbg --debugger-interactive netwerk/test/test_resumable_channel.js
+
+Or with modern WinDbg (WinDbg Preview as of April 2020):
+
+.. code:: bash
+
+ $ ./mach xpcshell-test --debugger WinDbgX --debugger-interactive netwerk/test/test_resumable_channel.js
+
+
+Debugging xpcshell tests in a child process
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To debug the child process, where the interesting code is often running,
+set ``MOZ_DEBUG_CHILD_PROCESS=1`` in your environment (or on the command
+line) and run the test. You will see the child process emit a printf
+with its process ID, then sleep. Attach a debugger to the child's pid,
+and when it wakes up you can debug it:
+
+.. code:: bash
+
+ $ MOZ_DEBUG_CHILD_PROCESS=1 ./mach xpcshell-test test_simple_wrap.js
+ CHILDCHILDCHILDCHILD
+ debug me @13476
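+
+For example, with gdb (a sketch; use the pid actually printed by your own run):
+
+.. code:: bash
+
+   # In another terminal, attach to the child process id printed above
+   # (13476 in this example), set breakpoints, then continue execution.
+   gdb -p 13476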
+
+
+Debug both parent and child processes
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Use MOZ_DEBUG_CHILD_PROCESS=1 to attach debuggers to each process. (For
+gdb at least, this means running separate copies of gdb, one for each
+process.)