author    Daniel Baumann <daniel.baumann@progress-linux.org> 2024-04-07 19:33:14 +0000
committer Daniel Baumann <daniel.baumann@progress-linux.org> 2024-04-07 19:33:14 +0000
commit    36d22d82aa202bb199967e9512281e9a53db42c9 (patch)
tree      105e8c98ddea1c1e4784a60a5a6410fa416be2de /devtools/docs/contributor/tests
parent    Initial commit. (diff)
Adding upstream version 115.7.0esr. (upstream/115.7.0esr, upstream)
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'devtools/docs/contributor/tests')
-rw-r--r--  devtools/docs/contributor/tests/README.md                           |  22
-rw-r--r--  devtools/docs/contributor/tests/debugging-intermittents.md          |  84
-rw-r--r--  devtools/docs/contributor/tests/mochitest-chrome.md                 |  13
-rw-r--r--  devtools/docs/contributor/tests/mochitest-devtools.md               |  36
-rw-r--r--  devtools/docs/contributor/tests/node-tests.md                       |  78
-rw-r--r--  devtools/docs/contributor/tests/perfherder-compare-link.png         | bin 0 -> 197741 bytes
-rw-r--r--  devtools/docs/contributor/tests/perfherder-compare.png              | bin 0 -> 86289 bytes
-rw-r--r--  devtools/docs/contributor/tests/perfherder-create-gecko-profile.png | bin 0 -> 268065 bytes
-rw-r--r--  devtools/docs/contributor/tests/perfherder-damp.png                 | bin 0 -> 75934 bytes
-rw-r--r--  devtools/docs/contributor/tests/perfherder-filter-subtests.png      | bin 0 -> 132350 bytes
-rw-r--r--  devtools/docs/contributor/tests/perfherder-subtests.png             | bin 0 -> 136288 bytes
-rw-r--r--  devtools/docs/contributor/tests/performance-tests-damp.md           | 193
-rw-r--r--  devtools/docs/contributor/tests/performance-tests-overview.md       | 103
-rw-r--r--  devtools/docs/contributor/tests/regression-graph.png                | bin 0 -> 96461 bytes
-rw-r--r--  devtools/docs/contributor/tests/regression-popup.png                | bin 0 -> 75353 bytes
-rw-r--r--  devtools/docs/contributor/tests/writing-perf-tests-example.md       |  68
-rw-r--r--  devtools/docs/contributor/tests/writing-perf-tests-tips.md          |  41
-rw-r--r--  devtools/docs/contributor/tests/writing-perf-tests.md               | 140
-rw-r--r--  devtools/docs/contributor/tests/writing-tests.md                    | 237
-rw-r--r--  devtools/docs/contributor/tests/xpcshell.md                         |  13
20 files changed, 1028 insertions, 0 deletions
diff --git a/devtools/docs/contributor/tests/README.md b/devtools/docs/contributor/tests/README.md
new file mode 100644
index 0000000000..d981361b28
--- /dev/null
+++ b/devtools/docs/contributor/tests/README.md
@@ -0,0 +1,22 @@
+# Automated tests
+
+When working on a patch for DevTools, there's almost never a reason not to add a new test. If you are fixing a bug, you probably should write a new test to prevent this bug from occurring again. If you're implementing a new feature, you should write new tests to cover the aspects of this new feature.
+
+Ask yourself:
+* Are there enough tests for my patch?
+* Are they the right types of tests?
+
+We use three suites of tests:
+
+* [`xpcshell`](xpcshell.md): Unit-test style of tests. No browser window, only a JavaScript shell. Mostly testing APIs directly.
+* [Chrome mochitests](mochitest-chrome.md): Unit-test style of tests, but with a browser window. Mostly testing APIs that interact with the DOM.
+* [DevTools mochitests](mochitest-devtools.md): Integration style of tests. Each test fires up a whole browser window in which you can test clicking on buttons, etc.
+
+
+To run all DevTools tests, regardless of suite type:
+
+```bash
+./mach test devtools/*
+```
+
+Have a look at the child pages for more specific commands for running only a single suite or single test in a suite.
diff --git a/devtools/docs/contributor/tests/debugging-intermittents.md b/devtools/docs/contributor/tests/debugging-intermittents.md
new file mode 100644
index 0000000000..d82c35bdb6
--- /dev/null
+++ b/devtools/docs/contributor/tests/debugging-intermittents.md
@@ -0,0 +1,84 @@
+# Debugging Intermittent Test Failures
+
+## What are Intermittents (aka Oranges)?
+
+Intermittents are test failures which happen intermittently, in a seemingly random way. Often you'll write a test that passes fine locally on your computer, but when run thousands of times on various CI environments (some of them under heavy load) it may start to fail randomly.
+
+Intermittents are also known as Oranges, because the corresponding test jobs are rendered orange on [treeherder](http://treeherder.mozilla.org/).
+
+These intermittent failures are tracked in Bugzilla. When a test starts being intermittent, a bug is filed in Bugzilla (usually by a Mozilla code sheriff).
+
+Once the bug exists for a given test failure, all further similar failures of that test will be reported as comments within that bug.
+These reports are usually posted weekly and look like this:
+
+> 5 failures in 2740 pushes (0.002 failures/push) were associated with this bug in the last 7 days.
+
+See [an example here](https://bugzilla.mozilla.org/show_bug.cgi?id=1250523#c4).
+
+Sometimes, tests start failing more frequently and these reports are then posted daily.
+
+To help with the (unfortunately) ever-growing list of intermittents, the Stockwell project was initiated a while ago (read more about the goals of that project on [their wiki](https://wiki.mozilla.org/Auto-tools/Projects/Stockwell)).
+
+This project defines a scenario where very frequently failing tests get disabled.
+Ideally, we should try to avoid this, because this means reducing our test coverage, but sometimes we do not have time to investigate the failure, and disabling it is the only remaining option.
+
+## Finding Intermittents
+
+You will have no trouble finding out that a particular test is intermittent, because a bug for it will be filed and you will see it in Bugzilla ([watching the Bugzilla component of your choice](https://bugzilla.mozilla.org/userprefs.cgi?tab=component_watch) is a good way to avoid missing the failure reports).
+
+However, it can still be useful to see intermittents in context. The [Intermittent Failures View on Treeherder](https://treeherder.mozilla.org/intermittent-failures.html) shows intermittents ranked by frequency.
+
+You can also see intermittents in Bugzilla. Go to [the settings page](https://bugzilla.mozilla.org/userprefs.cgi?tab=settings) and enable "When viewing a bug, show its corresponding Orange Factor page".
+
+## Reproducing Test Failures locally
+
+The first step to fix an intermittent is to reproduce it.
+
+Sometimes reproducing the failure can only be done in automation, but it's worth trying locally, because this makes it much simpler to debug.
+
+First, try running the test in isolation. You can use the `--repeat` and `--run-until-failure` flags to `mach mochitest` to automate this a bit. It's nice to do this sort of thing in headless mode (`--headless`) or in a VM (or using Xnest on Linux) to avoid locking up your machine.
+
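+For example, a sketch of such a command (the test path is a made-up placeholder; substitute the intermittent test you are investigating):
+
+```bash
+# Re-run the single test repeatedly, headless, stopping as soon as it fails.
+./mach mochitest --headless --repeat 10 --run-until-failure \
+  devtools/client/inspector/test/browser_inspector_some-test.js
+```
+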
+Sometimes, though, a test will only fail if it is run in conjunction with one or more other tests. You can use the `--start-at` and `--end-at` flags with `mach mochitest` to run a group of tests together.
+
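+For example, a sketch with made-up test names, assuming the group of tests lives in a single directory:
+
+```bash
+# Run the contiguous range of tests between (and including) these two files.
+./mach mochitest devtools/client/inspector/test/ \
+  --start-at browser_inspector_first-test.js \
+  --end-at browser_inspector_suspected-test.js
+```
+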
+For some jobs, but not all, you can get an [interactive shell from TaskCluster](https://jonasfj.dk/2016/03/one-click-loaners-with-taskcluster/).
+
+There's also a [handy page of e10s test debugging tips](https://wiki.mozilla.org/Electrolysis/e10s_test_tips) that is worth a read.
+
+Because intermittents are often caused by race conditions, it's sometimes useful to enable Chaos Mode. This changes timings and event orderings a bit. The simplest way to do this is to enable it in a specific test, by
+calling `SimpleTest.testInChaosMode`. You can also set the `MOZ_CHAOSMODE` environment variable, or even edit `mfbt/ChaosMode.cpp` directly.
+
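+For example, a minimal sketch of the per-test approach (check `SimpleTest.js` for the exact API if in doubt):
+
+```javascript
+// Ask the harness to run this particular test with perturbed timings and event
+// orderings, which makes race conditions more likely to show up.
+SimpleTest.testInChaosMode();
+```
+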
+Some tests leak intermittently. Use `ac_add_options --enable-logrefcnt` in your mozconfig to potentially find them.<!--TODO: how? add more detail about this -->
+
+The `rr` tool has [its own chaos mode](http://robert.ocallahan.org/2016/02/introducing-rr-chaos-mode.html). This can also sometimes reproduce a failure that isn't ordinarily reproducible. While it's difficult to debug JS bugs using `rr`, often if you can reliably reproduce the failure you can at least experiment (see below) to attempt a fix.
+
+## That Didn't Work
+
+If you couldn't reproduce locally, there are other options.
+
+One useful approach is to add additional logging to the test, then push again. Sometimes log buffering makes the output weird; you can add a call to `SimpleTest.requestCompleteLog()` to fix this.
+
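+A sketch of what this can look like at the top of a test (the log message is just an example):
+
+```javascript
+// Disable log buffering so that every info() call shows up in the failure log.
+SimpleTest.requestCompleteLog();
+info("About to reload the test page and wait for the inspector to update");
+```
+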
+You can run a single directory of tests on try using `mach try DIR`. You can also use the `--rebuild` flag to retrigger test jobs multiple times, or you can do this easily from treeherder.<!--TODO: how? and why is it easy?-->
+
+## Solving
+
+If a test fails at different places for each failure, it might be a timeout. The current mochitest timeout is 45 seconds, so if successful runs of an intermittent take ~40 seconds, it might just be a
+real timeout. This is particularly true if the failure is most often seen on the slower builds, for example Linux 32 debug. In this case you can either split the test or call `requestLongerTimeout` somewhere at the beginning of the test (here's [an example](https://searchfox.org/mozilla-central/rev/c56977420df7a1b692ce0f7e499ddb364d9fd7b2/devtools/client/framework/test/browser_toolbox_tool_remote_reopen.js#12)).
+
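+For example, a sketch of the `requestLongerTimeout` approach (the factor of 2 is an arbitrary example value):
+
+```javascript
+// Give this test twice the default 45 second timeout.
+requestLongerTimeout(2);
+```
+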
+Sometimes the problem is a race at a specific spot in the test. You can test this theory by adding a short wait to see if the failure goes away, like:
+```javascript
+await new Promise(r => setTimeout(r, 100));
+```
+
+See the `waitForTick` and `waitForTime` functions in `DevToolsUtils` for similar functionality.
+
+You can use a similar trick to "pause" the test at a certain point. This is useful when debugging locally because it will leave Firefox open and responsive, at the specific spot you've chosen. Do this
+using `await new Promise(r => r);`.
+
+`shared-head.js` also has some helpers, like `once`, to bind to events with additional logging.
+
+You can also binary search the test by either commenting out chunks of it, or hacking in early `return`s. You can do a bunch of these experiments in parallel without waiting for the first to complete.
+
+## Verifying
+
+It's difficult to verify that an intermittent has truly been fixed.
+One thing you can do is push to try, and then retrigger the job many times in treeherder. Exactly how many times you should retrigger depends on the frequency of the failure.
diff --git a/devtools/docs/contributor/tests/mochitest-chrome.md b/devtools/docs/contributor/tests/mochitest-chrome.md
new file mode 100644
index 0000000000..5417999baa
--- /dev/null
+++ b/devtools/docs/contributor/tests/mochitest-chrome.md
@@ -0,0 +1,13 @@
+# Automated tests: chrome mochitests
+
+To run the whole suite of chrome mochitests:
+
+```bash
+./mach mochitest -f chrome --tag devtools
+```
+
+To run a specific chrome mochitest:
+
+```bash
+./mach mochitest devtools/path/to/the/test_you_want_to_run.html
+```
diff --git a/devtools/docs/contributor/tests/mochitest-devtools.md b/devtools/docs/contributor/tests/mochitest-devtools.md
new file mode 100644
index 0000000000..e5f44ba1d6
--- /dev/null
+++ b/devtools/docs/contributor/tests/mochitest-devtools.md
@@ -0,0 +1,36 @@
+# Automated tests: DevTools mochitests
+
+To run the whole suite of browser mochitests for DevTools (sit back and relax):
+
+```bash
+./mach mochitest --subsuite devtools --tag devtools
+```
+To run a specific tool's suite of browser mochitests:
+
+```bash
+./mach mochitest devtools/client/<tool>
+```
+
+For example, run all of the debugger browser mochitests:
+
+```bash
+./mach mochitest devtools/client/debugger
+```
+To run a specific DevTools mochitest:
+
+```bash
+./mach mochitest devtools/client/path/to/the/test_you_want_to_run.js
+```
+Note that the mochitests *must* have focus while running. The tests run in a browser window, which makes it look like someone is magically testing your code by hand. If the browser loses focus, the tests will stop and fail after some time. (Again, sit back and relax)
+
+If you'd like to run the mochitests without having to worry about focus, so that you can keep using your computer while they run, use headless mode:
+
+```bash
+./mach mochitest --headless devtools/client/<tool>
+```
+
+You can also run just a single test:
+
+```bash
+./mach mochitest --headless devtools/client/path/to/the/test_you_want_to_run.js
+```
diff --git a/devtools/docs/contributor/tests/node-tests.md b/devtools/docs/contributor/tests/node-tests.md
new file mode 100644
index 0000000000..ab682ef61c
--- /dev/null
+++ b/devtools/docs/contributor/tests/node-tests.md
@@ -0,0 +1,78 @@
+# DevTools node tests
+
+In addition to mochitests and xpcshell tests, some DevTools panels use Node.js test libraries to run unit tests. For instance, several panels use [Jest](https://jestjs.io/) to run React component unit tests.
+
+## Find the node tests on Try
+
+The DevTools node test task, `node(devtools)`, runs on the `Linux 64 opt` platform.
+It is a tier 1 job, which means that any failure will lead to a backout.
+
+## Run Tests On Try
+
+To run the DevTools node tests on try, you can use `./mach try fuzzy` and look for the job named `source-test-node-devtools-tests`.
+
+They are also run when using the "devtools" preset: `./mach try --preset devtools`.
+
+### Node tests try job definition
+
+The definition of those try jobs can be found at [taskcluster/ci/source-test/node.yml](https://searchfox.org/mozilla-central/source/taskcluster/ci/source-test/node.yml).
+
+The definition also contains the list of files that will trigger the node test jobs. Currently the devtools tests run when any file is modified under `devtools/client` or `devtools/shared`.
+
+## Run Tests Locally
+
+You will need yarn to be installed in order to run the DevTools node tests. See [https://yarnpkg.com/getting-started](https://yarnpkg.com/getting-started).
+
+The easiest way to run the DevTools node tests locally is to rely on the same script that is used to run them on try:
+```
+> node devtools/client/bin/devtools-node-test-runner.js --suite={suitename}
+```
+
+At the time of writing, the supported suites for this script are:
+- `aboutdebugging`
+- `accessibility`
+- `application`
+- `compatibility`
+- `debugger`
+- `framework`
+- `netmonitor`
+- `performance`
+- `shared_components`
+- `webconsole`
+
+(You can see the full list and the associated configuration in `devtools/client/bin/devtools-node-test-runner.js`.)
+
+Alternatively, you can locate the `package.json` corresponding to a given suite and run `yarn && yarn test` from that folder.
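+
+For example, assuming the suite you care about keeps its `package.json` directly in the panel folder (the debugger is used here purely as an illustration):
+
+```
+> cd devtools/client/debugger
+> yarn && yarn test
+```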
+
+## Updating snapshots
+
+Some of the node tests are snapshot tests, which means they compare the output of a given component to a previous text snapshot. They might break if you are legitimately modifying a component, in which case the snapshots need to be updated.
+
+A snapshot failure will show up as follows:
+```
+› 1 snapshot failed from 1 test suite
+```
+
+It should also mention the command you can run to update the snapshots:
+```
+Inspect your code changes or run `yarn run test-ci -u` to update them.
+```
+
+For example, if you need to update snapshots in a specific panel, first locate the `package.json` corresponding to the node test folder of the panel. In theory it should be under `devtools/client/{panelname}/test/node/`, but it might be slightly different depending on the panel. Then run `yarn run test-ci -u` in this folder and add the snapshot changes to your commit.
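+
+A sketch of the typical flow, reusing the `{panelname}` placeholder from above (adjust the path if the panel's node tests live elsewhere):
+
+```
+> cd devtools/client/{panelname}/test/node
+> yarn run test-ci -u
+```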
+
+## TypeScript
+
+The "performance" suite performs TypeScript checks. The TypeScript usage in the performance panel is documented at [devtools/client/performance-new/typescript.md](https://searchfox.org/mozilla-central/source/devtools/client/performance-new/typescript.md) ([see rendered version on GitHub](https://github.com/mozilla/gecko-dev/blob/master/devtools/client/performance-new/typescript.md)).
+
+## devtools-bundle
+
+The devtools-bundle job is a tier 2 job which checks whether DevTools bundles are outdated. DevTools bundles are generated JavaScript files built from other in-tree dependencies in order to run in specific environments (typically a worker).
+
+All the bundles used by DevTools are generated by `devtools/client/debugger/bin/bundle.js`. The devtools-bundle job simply runs this script and fails if any versioned file is updated.
+
+In order to fix a failure, you should run the script:
+
+```
+> cd devtools/client/debugger/
+> yarn && node bin/bundle.js
+```
+
+Then commit the changes, either in the commit which updated the bundle dependencies, or in a separate commit in order to keep things separate.
diff --git a/devtools/docs/contributor/tests/perfherder-compare-link.png b/devtools/docs/contributor/tests/perfherder-compare-link.png
new file mode 100644
index 0000000000..8e253bf363
--- /dev/null
+++ b/devtools/docs/contributor/tests/perfherder-compare-link.png
Binary files differ
diff --git a/devtools/docs/contributor/tests/perfherder-compare.png b/devtools/docs/contributor/tests/perfherder-compare.png
new file mode 100644
index 0000000000..55ba3b3c5d
--- /dev/null
+++ b/devtools/docs/contributor/tests/perfherder-compare.png
Binary files differ
diff --git a/devtools/docs/contributor/tests/perfherder-create-gecko-profile.png b/devtools/docs/contributor/tests/perfherder-create-gecko-profile.png
new file mode 100644
index 0000000000..a7526bb25f
--- /dev/null
+++ b/devtools/docs/contributor/tests/perfherder-create-gecko-profile.png
Binary files differ
diff --git a/devtools/docs/contributor/tests/perfherder-damp.png b/devtools/docs/contributor/tests/perfherder-damp.png
new file mode 100644
index 0000000000..e8b853adb7
--- /dev/null
+++ b/devtools/docs/contributor/tests/perfherder-damp.png
Binary files differ
diff --git a/devtools/docs/contributor/tests/perfherder-filter-subtests.png b/devtools/docs/contributor/tests/perfherder-filter-subtests.png
new file mode 100644
index 0000000000..c33187d556
--- /dev/null
+++ b/devtools/docs/contributor/tests/perfherder-filter-subtests.png
Binary files differ
diff --git a/devtools/docs/contributor/tests/perfherder-subtests.png b/devtools/docs/contributor/tests/perfherder-subtests.png
new file mode 100644
index 0000000000..fbe90299ac
--- /dev/null
+++ b/devtools/docs/contributor/tests/perfherder-subtests.png
Binary files differ
diff --git a/devtools/docs/contributor/tests/performance-tests-damp.md b/devtools/docs/contributor/tests/performance-tests-damp.md
new file mode 100644
index 0000000000..c2cdb3a2e8
--- /dev/null
+++ b/devtools/docs/contributor/tests/performance-tests-damp.md
@@ -0,0 +1,193 @@
+# Performance Tests: DAMP
+
+DAMP (DevTools At Maximum Performance) is our test suite to track performance.
+
+## How to run it locally?
+
+```bash
+./mach talos-test --suite damp
+```
+Note that the first run is slower as it pulls a large tarball with various website copies.
+This will run all DAMP tests. You can filter by test name with:
+```bash
+./mach talos-test --suite damp --subtests console
+```
+This command will run all tests which contain "console" in their name.
+
+Note that in continuous integration, DAMP tests are split into smaller test suites: `damp-inspector`, `damp-other` and `damp-webconsole`. In practice, `--suite damp` is only used locally, because it contains all possible tests and is therefore the most convenient. If needed, you can substitute `damp` with any of the other suites to run only the tests associated with that suite. You can find the mapping between tests and test suites in [damp-tests.js](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/damp-tests.js).
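+
+For example, to run only the tests that belong to the `damp-webconsole` suite:
+
+```bash
+./mach talos-test --suite damp-webconsole
+```
+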
+### Command line options
+
+#### Running tests only once
+
+```bash
+./mach talos-test --suite damp --cycles 1 --tppagecycles 1
+```
+`--cycles` will limit the number of Firefox restarts to only one, while
+`--tppagecycles` will limit the number of test re-runs in each Firefox start to one.
+This is often helpful when debugging one particular subtest.
+
+#### Taking screenshots
+
+```bash
+DEBUG_DEVTOOLS_SCREENSHOTS=1 ./mach talos-test --suite damp
+```
+When the `DEBUG_DEVTOOLS_SCREENSHOTS` env variable is set, screenshots will be taken after each subtest
+has run. The screenshots will be opened in new tabs and their titles
+include the subtest label. Firefox won't automatically close, so that you can view the screenshots.
+
+#### Recording a profile
+
+```bash
+./mach talos-test --suite damp --gecko-profile --gecko-profile-entries 100000000
+```
+This will automatically record the tests and open the profile. You may use the following command in order
+to focus on just one subtest run:
+```bash
+./mach talos-test --suite damp --subtests custom.webconsole --cycles 1 --tppagecycles 1 --gecko-profile --gecko-profile-entries 100000000
+```
+
+## How to run it on try?
+
+```bash
+./mach try fuzzy --query "'test-linux1804-64-shippable-qr/ 'damp" --rebuild 6
+```
+* Linux appears to build and run quickly, and offers quite stable results compared to the other OSes.
+The vast majority of performance issues for DevTools are OS agnostic, so it doesn't really matter which one you run them on.
+* "damp" is the talos bucket in which we run DAMP.
+* And 6 is the number of times we run the DAMP tests. Averaging over all 6 runs helps filter out the noise.
+
+## How to get performance profiles on try?
+
+Once you have a successful try job for `damp`:
+* select this job in treeherder
+* click on the `...` menu in the bottom left
+* select "Create Gecko Profile"
+
+![PerfHerder Create Gecko Profile menu](perfherder-create-gecko-profile.png)
+
+This should start a new damp job called `damp-p`. Once `damp-p` is finished:
+* select the `damp-p` job
+* click on `Job Details` tab
+* click on `open in Firefox Profiler`
+
+## What does it do?
+
+DAMP measures three important operations:
+* Open a toolbox
+* Reload the web page
+* Close the toolbox
+
+It measures the time it takes to do each of these operations for the following panels:
+
+inspector, console, netmonitor, debugger, memory, performance.
+
+It runs these three tests twice, each time against a different web page:
+* "simple": an empty webpage. This test highlights the performance of all tools against the simplest possible page.
+* "complicated": a copy of the bild.de website. This is supposed to represent a typical website to debug via DevTools.
+
+Then, there are a couple of extra tests:
+* "cold": we run the three operations (open toolbox, page reload and close toolbox) with the inspector.
+This is run first after Firefox's startup, before any other test.
+This test allows measuring a "cold startup": when a user first interacts with DevTools, many resources are loaded and cached,
+so that all subsequent interactions will be significantly faster.
+* and many other smaller tests, focused on one particular feature or possible slowness for each panel.
+
+## How to see the results from try?
+
+First, open TreeHerder. A link is displayed in your console when executing `./mach try`.
+You should also receive a mail with a link to it.
+
+Look for "T-e10s(+6)", click on "+6", then click on "damp":
+![TreeHerder jobs](perfherder-damp.png)
+
+On the bottom panel that just opened, click on "Compare result against another revision".
+![TreeHerder panel](perfherder-compare-link.png)
+
+You are now on PerfHerder; click on "Compare".
+![PerfHerder compare](perfherder-compare.png)
+
+Next to the "Talos" select menu, type "damp" in the filter textbox.
+Under the "damp opt e10s" item, hover over the "linux64" line and click on the "subtests" link.
+![PerfHerder filter](perfherder-filter-subtests.png)
+
+And here you get the results for each DAMP test:
+![PerfHerder subtests](perfherder-subtests.png)
+
+On this page, you can filter by test name with the filter box on top of the result table.
+This table has the following columns:
+* Base:
+ Average time it took to run the test on the base build (by default, the last 2 days of DAMP runs on mozilla-central revisions)
+* New:
+ Average time it took to run the test on the new build, the one with your patches.
+ Both "Base" and "New" have a "± x.xx%" suffix which tells you the variance of the timings,
+ i.e. the average difference in percent between the median timing and both the slowest and the fastest.
+* Delta:
+ Difference in percent between the base and new runs.
+ The color of this can be red, orange or green:
+ * Red means "certainly regressing"
+ * Orange means "possibly regressing"
+ * Green means "certainly improving"
+ * No colored background means "nothing to conclude"
+ The difference between certainly and possibly is explained by the next column.
+* Confidence:
+ If there is a significant difference between the two runs, this tells you whether the result is trustworthy.
+ * "low" either means there isn't a significant difference between the two runs, or the difference is smaller than the typical variance of the given test.
+ If the test is known to have an execution time varying by 2% between two runs of the same build, and you get a 1% difference between your base and new builds,
+ the confidence will be low. You really can't make any conclusion.
+ * "med" means medium confidence and the delta is around the size of the variance. It may highlight a regression, but it can still be justified by the test noise.
+ * "high" means that this is a high confidence difference. The delta is significantly higher than the typical test variance. A regression is most likely detected.
+
+There is also a "Show only important changes" checkbox, which helps you see whether there is any significant regression.
+It will only display regressions and improvements with a medium or high confidence.
+
+## How to contribute to DAMP?
+
+DAMP is based on top of a more generic test suite called [Talos](https://wiki.mozilla.org/Buildbot/Talos).
+Talos is a Mozilla test suite that tracks the performance of all Firefox components.
+It is written in Python, and here are [the sources](https://searchfox.org/mozilla-central/source/testing/talos/) in mozilla-central.
+Compared to the other test suites, it isn't run in the cloud, but on dedicated hardware.
+This is to ensure performance numbers are stable over time and between two runs.
+Talos runs various types of tests. More specifically, DAMP is a [Page loader test](https://wiki.mozilla.org/Buildbot/Talos/Tests#Page_Load_Tests).
+The [source code](http://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/) for DAMP is also in mozilla-central.
+See [Writing new performance test](./writing-perf-tests.md) for more information about the implementation of DAMP tests.
+
+## How to see the performance trends?
+
+You can find the dedicated performance dashboard for DevTools at http://firefox-dev.tools/performance-dashboard. You will find links to trend charts for various tools:
+* [Inspector dashboard](http://firefox-dev.tools/performance-dashboard/tools/inspector.html?days=60&filterstddev=true)
+* [Console dashboard](http://firefox-dev.tools/performance-dashboard/tools/console.html?days=60&filterstddev=true)
+* [Netmonitor dashboard](http://firefox-dev.tools/performance-dashboard/tools/netmonitor.html?days=60&filterstddev=true)
+* [Debugger dashboard](http://firefox-dev.tools/performance-dashboard/tools/debugger.html?days=60&filterstddev=true)
+
+Each tool page displays charts for all the subtests relevant for a given panel.
+
+Each circle on the chart is a push to mozilla-central. You can hover on a circle to see some additional information about the push, such as the date, the performance impact for the subtest, and the push id. Clicking on a circle will take you to the pushlog.
+
+Colored circles indicate that the push contains a change that was identified as having a performance impact. Those can be categorized as:
+- hardware: hardware change for the machines used to run Talos
+- platform: non-DevTools change that impacts DevTools performance
+- damp: test change in DAMP that impacts test results
+- devtools: identified DevTools change that introduced an improvement or regression
+
+This data is synchronized from a [shared Google doc](https://docs.google.com/spreadsheets/d/12Goo3vq-0X0_Ay-J6gfV56pUB8GC0Nl62I4p8G-UsEA/edit#gid=0).
+
+There is a PerfHerder link on each chart that will take you to the PerfHerder page corresponding to this subtest.
+
+## How to use PerfHerder charts
+
+On PerfHerder charts, each circle is a push on mozilla-central.
+When you see a spike or a drop, you can try to identify the patch that relates to it by clicking the circles.
+Clicking a circle shows a black popup. Click on the changeset hash, like "cb717386aec8", and you will get a mercurial changelog.
+It is then up to you to read the changelog and see which changeset may have affected performance.
+
+For example, open [this page](https://treeherder.mozilla.org/perf.html#/graphs?timerange=31536000&series=mozilla-central,1417969,1,1&series=mozilla-central,1417971,1,1&series=mozilla-central,1417966,1,1&highlightedRevisions=a06f92099a5d&zoom=1482734645161.3916,1483610598216.4773,594.756508587898,969.2883437938906).
+This is tracking inspector opening performance against the "Simple" page.
+![Perfherder graphs](regression-graph.png)
+
+See the regression on Dec 31st?
+Now, click on the first yellow circle of this spike.
+You will get a black popup like this one:
+![Perfherder changeset popup](regression-popup.png)
+
+Click on the [changelog link](https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=9104708cc3ac0ccfe4cf5d518e13736773c565d7&tochange=a06f92099a5d8edeb05e5971967fe8d6cd4c593c) to see which changesets were added during this run. Here, you will see that the regression comes from these patches:
+ * Bug 1245921 - Turn toolbox toolbar into a React component
+ * Bug 1245921 - Monkey patch ReactDOM event system for XUL
diff --git a/devtools/docs/contributor/tests/performance-tests-overview.md b/devtools/docs/contributor/tests/performance-tests-overview.md
new file mode 100644
index 0000000000..93ec771786
--- /dev/null
+++ b/devtools/docs/contributor/tests/performance-tests-overview.md
@@ -0,0 +1,103 @@
+# DevTools Performance Tests overview
+
+This page provides a short overview of the various DevTools performance tests.
+
+## damp
+
+DAMP (short for DevTools At Maximum Performance) is the main DevTools performance test suite, based on the talos framework. It mostly runs end-to-end scenarios: opening the toolbox and various panels, and interacting with the UI. It might regress for a wide variety of reasons: DevTools frontend changes, DevTools server changes, platform changes, etc. To investigate DAMP regressions or improvements, it is usually necessary to analyze DAMP subtests individually.
+
+See [DAMP Performance tests](performance-tests-damp.md) for more details on how to run DAMP, analyze results or add new tests.
+
+## debugger-metrics
+
+debugger-metrics measures the number of modules and the overall size of modules loaded when opening the Debugger in DevTools. This test is a mochitest which can be executed locally with:
+
+```bash
+./mach test devtools/client/framework/test/metrics/browser_metrics_debugger.js --headless
+```
+
+At the end of the test, logs should contain a `PERFHERDER_DATA` entry containing 4 measures. `debugger-modules` is the number of debugger-specific modules loaded, `debugger-chars` is the number of characters in said modules. `all-modules` is the number of modules loaded including shared modules, `all-chars` is the number of characters in said modules.
+
+A significant regression or improvement to this test can indicate that modules are no longer lazy loaded, or a new part of the UI is now loaded upfront.
+
+## inspector-metrics
+
+See the description for debugger-metrics. This test is exactly the same but applied to the inspector panel. It can be executed locally with:
+
+```bash
+./mach test devtools/client/framework/test/metrics/browser_metrics_inspector.js --headless
+```
+
+## netmonitor-metrics
+
+See the description for debugger-metrics. This test is exactly the same but applied to the netmonitor panel. It can be executed locally with:
+
+```bash
+./mach test devtools/client/framework/test/metrics/browser_metrics_netmonitor.js --headless
+```
+
+## webconsole-metrics
+
+See the description for debugger-metrics. This test is exactly the same but applied to the webconsole panel. It can be executed locally with:
+
+```bash
+./mach test devtools/client/framework/test/metrics/browser_metrics_webconsole.js --headless
+```
+
+## server.pool
+
+server.pool measures the performance of the DevTools `Pool` [class](https://searchfox.org/mozilla-central/source/devtools/shared/protocol/Pool.js) which is intensively used by the DevTools server. This test is a mochitest which can be executed with:
+
+```bash
+./mach test devtools/client/framework/test/metrics/browser_metrics_pool.js --headless
+```
+
+At the end of the test, logs should contain a `PERFHERDER_DATA` entry which contains values corresponding to various APIs of the `Pool` class.
+
+A regression or improvement in this test is most likely linked to a change in a file from devtools/shared/protocol.
+
+## toolbox:parent-process
+
+toolbox:parent-process measures the number of objects allocated by DevTools after opening and closing a DevTools toolbox. This test is a mochitest which can be executed with:
+
+```bash
+./mach test devtools/client/framework/test/allocations/browser_allocations_toolbox.js --headless
+```
+
+The test will record allocations while opening and closing the Toolbox several times. The `PERFHERDER_DATA` entry in the logs will contain 3 measures. `objects-with-stacks` is the number of allocated objects for which the allocation site is known and should be easy to fix for developers. You can refer to the [allocation tests documentation](https://searchfox.org/mozilla-central/source/devtools/client/framework/test/allocations/docs/index.md) for a more detailed description of this test and how to use it to investigate and fix memory issues.
+
+A regression here may indicate a leak, for instance a module which no longer cleans its dependencies. It can also indicate that DevTools is loading more singletons or other objects which are not tied to the lifecycle of the DevTools objects.
+
+## target:parent-process
+
+target:parent-process measures the number of objects created by DevTools to create a tab target. It does not involve DevTools frontend. This test is a mochitest which can be executed with:
+
+```bash
+./mach test devtools/client/framework/test/allocations/browser_allocations_target.js --headless
+```
+
+See the description for toolbox:parent-process for more information.
+
+## reload:parent-process
+
+reload:parent-process measures the number of objects created by DevTools when reloading a page inspected by a DevTools Toolbox. This test is a mochitest which can be executed with:
+
+```bash
+./mach test devtools/client/framework/test/allocations/browser_allocations_reload.js --headless
+```
+
+See the description for toolbox:parent-process for more information. Note that this test also records another suite, reload:content-process.
+
+## reload:content-process
+
+See the description for reload:parent-process.
+
+## browser-console:parent-process
+
+browser-console:parent-process measures the number of objects created by DevTools when opening and closing the Browser Console. This test is a mochitest which can be executed with:
+
+```bash
+./mach test devtools/client/framework/test/allocations/browser_allocations_browser_console.js --headless
+```
+
+See the description for toolbox:parent-process for more information.
diff --git a/devtools/docs/contributor/tests/regression-graph.png b/devtools/docs/contributor/tests/regression-graph.png
new file mode 100644
index 0000000000..3212324012
--- /dev/null
+++ b/devtools/docs/contributor/tests/regression-graph.png
Binary files differ
diff --git a/devtools/docs/contributor/tests/regression-popup.png b/devtools/docs/contributor/tests/regression-popup.png
new file mode 100644
index 0000000000..e64d55df0d
--- /dev/null
+++ b/devtools/docs/contributor/tests/regression-popup.png
Binary files differ
diff --git a/devtools/docs/contributor/tests/writing-perf-tests-example.md b/devtools/docs/contributor/tests/writing-perf-tests-example.md
new file mode 100644
index 0000000000..5f68d46406
--- /dev/null
+++ b/devtools/docs/contributor/tests/writing-perf-tests-example.md
@@ -0,0 +1,68 @@
+# Performance test example: performance of click event in the inspector
+
+Let's look at a trivial but practical example and add a simple test to measure the performance of a click in the inspector.
+
+First we create a file under [tests/inspector](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/tests/inspector) since we are writing an inspector test. We call the file `click.js`.
+
+We will use a dummy test document here: `data:text/html,click test document`.
+
+We prepare the imports needed to write the test, from head.js and inspector-helper.js:
+- `testSetup`, `testTeardown`, `openToolbox` and `runTest` from head.js
+- `reloadInspectorAndLog` from inspector-helper.js
+
+The full code for the test looks as follows:
+```
+const {
+ reloadInspectorAndLog,
+} = require("devtools/docs/tests/inspector-helpers");
+
+const {
+ openToolbox,
+ runTest,
+ testSetup,
+ testTeardown,
+} = require("devtools/docs/head");
+
+module.exports = async function() {
+ // Define here your custom document via a data URI:
+ const url = "data:text/html,click test document";
+
+ await testSetup(url);
+ const toolbox = await openToolbox("inspector");
+
+ const inspector = toolbox.getPanel("inspector");
+ const window = inspector.panelWin; // Get inspector's panel window object
+ const body = window.document.body;
+
+ await new Promise(resolve => {
+ const test = runTest("inspector.click");
+ body.addEventListener("click", function () {
+ test.done();
+ resolve();
+ }, { once: true });
+ body.click();
+ });
+
+ // Check if the inspector reload is impacted by click
+ await reloadInspectorAndLog("click", toolbox);
+
+ await testTeardown();
+}
+```
+
+Finally we add an entry in [damp-tests.js](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/damp-tests.js):
+```
+ {
+ name: "inspector.click",
+ path: "inspector/click.js",
+ description:
+ "Measure the time to click in the inspector, and reload the inspector",
+ },
+```
+
+Since this is an inspector test, we add it under `TEST_SUITES.INSPECTOR`, which contains all the tests which will run with the `damp-inspector` test suite in continuous integration. The test is still part of the overall `damp` suite by default; no action is needed to ensure that.
+
+Then we can run our test with:
+```
+./mach talos-test --suite damp --subtest inspector.click
+```
diff --git a/devtools/docs/contributor/tests/writing-perf-tests-tips.md b/devtools/docs/contributor/tests/writing-perf-tests-tips.md
new file mode 100644
index 0000000000..9cb300805d
--- /dev/null
+++ b/devtools/docs/contributor/tests/writing-perf-tests-tips.md
@@ -0,0 +1,41 @@
+# How to write a good performance test?
+
+## Verify that you wait for all asynchronous code
+
+If your test involves asynchronous code, which is very likely given the DevTools codebase, please review your test script carefully.
+You should ensure that _any_ code run directly or indirectly by your test has completed.
+You should not only wait for the functions related to the very precise feature you are trying to measure.
+
+This is to prevent introducing noise in the test that runs after yours. If any asynchronous code is pending,
+it is likely to run in parallel with the next test and increase its variance.
+Noise in the tests makes it hard to detect small regressions.
+
+You should typically wait for:
+* All RDP requests to finish,
+* All DOM Events to fire,
+* Redux actions to be dispatched,
+* React updates,
+* ...
+
+
+## Ensure that its results change when regressing/fixing the code or feature you want to watch.
+
+If you are writing the new test to cover a recent regression and you have a patch to fix it, push your test to try without _and_ with the regression fix.
+Look at the try push and confirm that your fix actually reduces the duration of your perf test significantly.
+If you are introducing a test without any patch to improve the performance, try slowing down the code you are trying to cover with fake slowness: a `setTimeout` for asynchronous code, or a very slow `for` loop for synchronous code. This is to ensure your test would catch a significant regression.
+
+For our click performance test, we could do this from the inspector codebase:
+```
+window.addEventListener("click", function () {
+
+ // This for loop will fake a hang and should slow down the duration of our test
+ for (let i = 0; i < 100000000; i++) {}
+
+}, true); // pass `true` in order to execute before the test click listener
+```
+
+
+## Keep your test execution short.
+
+Running performance tests is expensive. We are currently running them 25 times for each changeset landed in Firefox.
+Aim to run tests in less than a second on try.
diff --git a/devtools/docs/contributor/tests/writing-perf-tests.md b/devtools/docs/contributor/tests/writing-perf-tests.md
new file mode 100644
index 0000000000..5d3a9842da
--- /dev/null
+++ b/devtools/docs/contributor/tests/writing-perf-tests.md
@@ -0,0 +1,140 @@
+# Writing new DAMP performance tests
+
+See [DAMP Performance tests](performance-tests-damp.md) for an overall description of our performance tests.
+Here, we will describe how to write a new test and register it to run in DAMP.
+
+```{note}
+ **Reuse existing tests if possible!**
+ If a `custom` page already exists for the tool you are testing, try to modify the existing `custom` test rather than adding a new individual test.
+ New individual tests run separately, in new tabs, and make DAMP slower than just modifying existing tests. Making `custom` test pages more complex should also help cover more scenarios and catch more regressions. For those reasons, modifying existing tests should be the preferred way of extending DAMP coverage.
+ `custom` tests are using complex documents that should stress a particular tool in various ways. They are all named `custom.${tool}` (for instance `custom.inspector`). The test pages for those tests can be found in [pages/custom](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/pages/custom).
+ If your test case requires a dedicated document or can't run next to the other tests in the current `custom` test, follow the instructions below to add a new individual test.
+```
+
+This page contains the general documentation for writing DAMP tests. See also:
+- [Performance test writing example](writing-perf-tests-example.md) for a practical example of creating a new test
+- [Performance test writing tips](writing-perf-tests-tips.md) for detailed tips on how to write a good and efficient test
+
+## Test location
+
+Tests are located in [testing/talos/talos/tests/devtools/addon/content/tests](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/tests). You will find subfolders for panels already tested in DAMP (debugger, inspector, …) as well as other subfolders for tests not specific to a given panel (server, toolbox).
+
+Tests are isolated in dedicated files. Some examples of tests:
+- [tests/netmonitor/simple.js](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/tests/netmonitor/simple.js)
+- [tests/inspector/mutations.js](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/tests/inspector/mutations.js)
+
+## Basic test
+
+The basic skeleton of a test is:
+
+```
+const {
+ testSetup,
+ testTeardown,
+ SIMPLE_URL,
+} = require("devtools/docs/head");
+
+module.exports = async function() {
+ await testSetup(SIMPLE_URL);
+
+ // Run some measures here
+
+ await testTeardown();
+};
+```
+
+* always start the test by calling `testSetup(url)`, with the `url` of the document to use
+* always end the test with `testTeardown()`
+
+
+## Test documents
+
+DevTools performance heavily depends on the document against which DevTools are opened. There are two "historical" documents you can use for tests for any panel:
+* "Simple", an empty webpage. This one helps highlight the load time of panels,
+* "Complicated", a copy of bild.de, a German newspaper website. This allows us to examine the performance of the tools when inspecting complicated, big websites.
+
+The URLs of those documents are exposed by [tests/head.js](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/tests/head.js). The Simple page can be found at [testing/talos/talos/tests/devtools/addon/content/pages/simple.html](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/pages/simple.html). The Complicated page is downloaded via [tooltool](https://wiki.mozilla.org/ReleaseEngineering/Applications/Tooltool) automatically the first time you run the DAMP tests.
+
+You can also create new test documents under [testing/talos/talos/tests/devtools/addon/content/pages](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/pages). See the pages in the `custom` subfolder for instance. If you create a document in `pages/custom/mypanel/index.html`, the URL of the document in your tests should be `PAGES_BASE_URL + "custom/mypanel/index.html"`. The constant `PAGES_BASE_URL` is exposed by head.js.
+
+Note that modifying any existing test document will most likely impact the baseline for existing tests.
+
+Finally you can also create very simple test documents using data urls. Test documents don't have to contain any specific markup or script to be valid DAMP test documents, so something as simple as `testSetup("data:text/html,my test document");` is valid.
+
+
+## Test helpers
+
+Helper methods have been extracted in shared modules:
+* [tests/head.js](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/tests/head.js) for the most common ones
+* tests/{subfolder}/{subfolder}-helpers.js for folder-specific helpers ([example](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/tests/inspector/inspector-helpers.js))
+
+To measure something which is not covered by an existing helper, you should use `runTest`, exposed by head.js.
+
+```
+module.exports = async function() {
+ await testSetup(SIMPLE_URL);
+
+ // Calling `runTest` will immediately start recording your action duration.
+ // You can execute any necessary setup action you don't want to record before calling it.
+ const test = runTest(`mypanel.mytest.mymeasure`);
+
+ await doSomeThings(); // <== Do an action you want to record here
+
+ // Once your action is completed, call `runTest` returned object's `done` method.
+ // It will automatically record the action duration and appear in PerfHerder as a new subtest.
+ // It also creates markers in the profiler so that you can better inspect this action in
+ // profiler.firefox.com.
+ test.done();
+
+ await testTeardown();
+};
+```
+
+If your measure is not simply the time spent by an asynchronous call (for instance computing an average, counting things…) there is a lower level helper called `logTestResult` which will directly log a value. See [this example](https://searchfox.org/mozilla-central/rev/325c1a707819602feff736f129cb36055ba6d94f/testing/talos/talos/tests/devtools/addon/content/tests/webconsole/streamlog.js#62).
+
+
+## Test runner
+
+If you need to dive into the internals of the DAMP runner, most of the logic is in [testing/talos/talos/tests/devtools/addon/content/damp.js](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/damp.js).
+
+
+## How to name your test and register it?
+
+If a new test file was created, it needs to be registered in the test suite. To register the new test, add it in [damp-tests.js](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/damp-tests.js). This file acts as the manifest for the DAMP test suite.
+
+If you are writing a test that executes against the Simple and Complicated documents, your test name will look like: `(simple|complicated).${tool-name}.${test-name}`.
+So for our example, it would be `simple.inspector.click` and `complicated.inspector.click`.
+For independent tests that don't use the Simple or Complicated documents, the test name only needs to start with the tool name, if the test is specific to that tool.
+For our example, it would be `inspector.click`.
+
+In general, the test name should try to match the path of the test file. As you can see in damp-tests.js this naming convention is not consistently followed. We have discrepancies for simple/complicated/custom tests, as well as for webconsole tests. This is largely for historical reasons.
+
+You will see that tests are split across different subsuites: damp-inspector, damp-other and damp-webconsole. The goal of this split is to run DAMP tests in parallel in CI, so we aim to keep them balanced in terms of number of tests, and mostly running time. Add your test in the suite which makes the most sense. We can add more suites and rearrange tests in the future if needed.
+
+
+## How to run your new test?
+
+You can run any performance test with this command:
+```
+./mach talos-test --suite damp --subtest ${your-test-name}
+```
+
+By default, it will run the test 25 times. In order to run it just once, do:
+```
+./mach talos-test --suite damp --subtest ${your-test-name} --cycles 1 --tppagecycles 1
+```
+* `--cycles` controls the number of times Firefox is restarted.
+* `--tppagecycles` defines the number of times we repeat the test after each Firefox start.
+
+Also, you can record a profile while running the test. To do that, execute:
+```
+./mach talos-test --suite damp --subtest ${your-test-name} --cycles 1 --tppagecycles 1 --gecko-profile --gecko-profile-entries 100000000
+```
+* `--gecko-profile` enables the profiler.
+* `--gecko-profile-entries` defines the profiler buffer size, which needs to be large while recording performance tests.
+
+Once it is done executing, the profile lives in a zip file you have to uncompress like this:
+```
+unzip testing/mozharness/build/blobber_upload_dir/profile_damp.zip
+```
+Then you have to open [https://profiler.firefox.com/](https://profiler.firefox.com/) and manually load the profile file that lives here: `profile_damp/page_0_pagecycle_1/cycle_0.profile`
diff --git a/devtools/docs/contributor/tests/writing-tests.md b/devtools/docs/contributor/tests/writing-tests.md
new file mode 100644
index 0000000000..39dac2a99e
--- /dev/null
+++ b/devtools/docs/contributor/tests/writing-tests.md
@@ -0,0 +1,237 @@
+# Automated tests: writing tests
+
+<!--TODO this file might benefit from being split in other various files. For now it's just taken from the wiki with some edits-->
+
+## Adding a new browser chrome test
+
+It's almost always a better idea to create a new test file rather than to add new test cases to an existing one.
+
+This prevents test files from growing to the point where they time out for running too long. Test machines may be under a lot of stress at times and run a lot slower than your regular local environment.
+
+It also helps with making tests more maintainable: with many small files, it's easier to track a problem rather than in one huge file.
+
+### Creating the new file
+
+The first thing you need to do is create a file. This file should go next to the code it's testing, in the `tests` directory. For example, an inspector test would go into `devtools/inspector/test/`.
+
+### Naming the new file
+
+Naming your file is pretty important to help other people get a feeling of what it is supposed to test.
+Having said that, the name shouldn't be too long either.
+
+A good naming convention is `browser_<panel>_<short-description>[_N].js`
+
+where:
+
+* `<panel>` is one of `debugger`, `markupview`, `inspector`, `ruleview`, etc.
+* `<short-description>` should be about 3 to 4 words, separated by hyphens (-)
+* and optionally add a number at the end if you have several files testing the same thing
+
+For example: `browser_ruleview_completion-existing-property_01.js`
+
+Note that not all existing tests are consistently named. So the rule we try to follow is to **be consistent with how other tests in the same test folder are named**.
+
+### Basic structure of a test
+
+```javascript
+/* Any copyright is dedicated to the Public Domain.
+http://creativecommons.org/publicdomain/zero/1.0/ */
+"use strict";
+
+// A detailed description of what the test is supposed to test
+
+const TEST_URL = TEST_URL_ROOT + "doc_some_test_page.html";
+
+add_task(async function() {
+ await addTab(TEST_URL);
+ let {toolbox, inspector, view} = await openRuleView();
+ await selectNode("#testNode", inspector);
+ await checkSomethingFirst(view);
+ await checkSomethingElse(view);
+});
+
+async function checkSomethingFirst(view) {
+/* ... do something ... this function can await */
+}
+
+async function checkSomethingElse(view) {
+/* ... do something ... this function can await */
+}
+```
+
+### Referencing the new file
+
+For your test to be run, it needs to be referenced in the `browser.ini` file that you'll find in the same directory. For example: `browser/devtools/debugger/test/browser.ini`
+
+Add a line with your file name between square brackets, and make sure that the list of files **is always sorted by alphabetical order** (some lists can be really long, so the alphabetical order helps in finding and reasoning about things).
+
+For example, if you were to add the test from the previous section, you'd add this to `browser.ini`:
+
+```ini
+[browser_ruleview_completion-existing-property_01.js]
+```
+
+### Adding support files
+
+Sometimes your test may need to open an HTML file in a tab, and it may also need to load CSS or JavaScript. For this to work, you'll need to...
+
+1. place these files in the same directory, and also
+2. reference them in the `browser.ini` file.
+
+There's a naming convention for support files: `doc_<support-some-test>.html`
+
+But again, often names do not follow this convention, so try to follow the style of the other support files currently in the same test directory.
+
+To reference your new support file, add its filename in the `support-files` section of `browser.ini`, also making sure this section is in alphabetical order.
+
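+As a sketch, reusing the example file names from the previous sections (the exact layout depends on what is already in the manifest), the relevant parts of `browser.ini` might look like this:
+
+```ini
+[DEFAULT]
+support-files =
+  doc_some_test_page.html
+
+[browser_ruleview_completion-existing-property_01.js]
+```
+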
+Support files can be accessed via a local server that is started while tests are running. This server is accessible at [http://example.com/browser/](http://example.com/browser/). See the `head.js` section below for more information.
+
+## Leveraging helpers in `head.js`
+
+`head.js` is a special support file that is loaded in the scope the test runs in, before the test starts. It contains global helpers that are useful for most tests. Read through the `head.js` file in your test directory to see what functions are there, so you can avoid duplicating code.
+
+Each panel in DevTools has its own test directory with its own `head.js`, so you'll find different things in each panel's `head.js` file.
+
+For example, the head.js files in the `markupview` and `styleinspector` test folders contain these useful functions and constants:
+
+* Base URLs for support files: `TEST_URL_ROOT`. This avoids having to duplicate the http://example.com/browser/browser/devtools/styleinspector/ URL fragment in all tests,
+* `waitForExplicitFinish()` is called in `head.js` once and for all<!--TODO: what does this even mean?-->. All tests are asynchronous, so there's no need to call it again in each and every test,
+* `auto-cleanup`: the toolbox is closed automatically and all tabs are closed,
+* `tab addTab(url)`
+* `{toolbox, inspector} openInspector()`
+* `{toolbox, inspector, view} openRuleView()`
+* `selectNode(selectorOrNode, inspector)`
+* `node getNode(selectorOrNode)`
+* ...
+
+## Shared head.js file
+
+A [shared-head.js](https://searchfox.org/mozilla-central/source/devtools/client/shared/test/shared-head.js) file has been introduced to avoid duplicating code in various `head.js` files.
+
+It's important to know whether or not the `head.js` in your test directory already imports `shared-head.js` (look for a `Services.scriptloader.loadSubScript` call), as common helpers in `shared-head.js` might be useful for your test.
+
+If you're planning to work on a lot of new tests, it might be worth the time actually importing `shared-head.js` in your `head.js` if it isn't there already.
+
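+The import usually looks something like the following sketch (the exact `chrome://` URL can differ between test directories):
+
+```javascript
+// Load the shared helpers into this directory's test scope.
+Services.scriptloader.loadSubScript(
+  "chrome://mochitests/content/browser/devtools/client/shared/test/shared-head.js",
+  this
+);
+```
+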
+## Electrolysis
+
+E10S is the codename for Firefox multi-process, and what that means for us is that the process in which the test runs isn't the same as the one in which the test content page runs.
+
+You can learn more about E10S [from this blog post](https://timtaubert.de/blog/2011/08/firefox-electrolysis-101/), [the Electrolysis wiki page](https://wiki.mozilla.org/Electrolysis) and the page on [tests and E10s](https://wiki.mozilla.org/Electrolysis/e10s_test_tips).
+
+One of the direct consequences of E10S on tests is that you cannot retrieve and manipulate objects from the content page as you'd do without E10S.
+
+So when creating a new test, if this test needs to access the content page in any way, you can use [the message manager or JSActors](https://firefox-source-docs.mozilla.org/dom/ipc/jsactors.html) to communicate with a script loaded in the content process to do things for you instead of accessing objects in the page directly.
+
+You can use the helper `ContentTask.spawn()` for this. See [this list of DevTools tests using this helper](https://searchfox.org/mozilla-central/search?q=ContentTask.spawn%28&path=devtools%2Fclient) for examples.
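+
+As a minimal sketch (the selector and expected text are made up for illustration):
+
+```javascript
+add_task(async function() {
+  // Run a function in the content process and get a value back in the test.
+  let text = await ContentTask.spawn(
+    gBrowser.selectedBrowser,
+    "#output",
+    async function(selector) {
+      // This code runs in the content process and can access the page's DOM directly.
+      return content.document.querySelector(selector).textContent;
+    }
+  );
+  is(text, "expected text", "The content page contains the expected text");
+});
+```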
+
+Note that a lot of tests only need to access the DevTools UI anyway, and don't need to interact with the content process at all. Since the UI lives in the same process as the test, you won't need to use the message manager to access it.
+
+## Asynchronous tests
+
+Most browser chrome DevTools tests are asynchronous. One of the reasons why they are asynchronous is that the code needs to register event handlers for various user interactions in the tools and then simulate these interactions. Another reason is that most DevTools operations are done asynchronously via the debugger protocol.
+
+Here are a few things to keep in mind with regards to asynchronous testing:
+
+* `head.js` already calls `waitForExplicitFinish()` so there's no need for your new test to do it too.
+* Using `add_task` with an async function means that you can await calls to functions that return promises. It also means your main test function can be written to almost look like synchronous code, by adding `await` before calls to asynchronous functions. For example:
+
+```javascript
+for (let test of testData) {
+ await testCompletion(test, editor, view);
+}
+```
+
+Each call to `testCompletion` is asynchronous, but the code doesn't need to rely on nested callbacks or maintain an index; a standard `for` loop can be used.
+
+## Writing clean, maintainable test code
+
+Test code is as important as feature code itself. It helps avoid regressions, of course, but it also helps in understanding complex parts of the code that would otherwise be hard to grasp.
+
+Since we spend a large portion of our time working with test code, we should invest the effort it takes to make that time enjoyable.
+
+### Logs and comments
+
+Reading test output logs isn't exactly fun, and it takes time, but it is sometimes necessary. Make sure your test generates enough logs by using:
+
+```javascript
+info("doing something now");
+```
+
+This helps a lot in knowing around which line the test failed, if it fails.
+
+One good rule of thumb is: if you're about to add a JS line comment in your test to explain what the code below is about to test, write the same comment in an `info()` call instead.
+
+Also add a description at the top of the file to help understand what this test is about. The file name is often not long enough to convey everything you need to know about the test. Understanding a test often teaches you about the feature itself.
+
+Not really a comment, but don't forget to add `"use strict";` at the top of your test file.
+
+### Callbacks and promises
+
+Avoid multiple nested callbacks or chained promises. They make it hard to read the code. Use async/await instead.
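+
+For example (a sketch; `checkSomething` stands in for any hypothetical assertion helper):
+
+```javascript
+// Hard to read: chained promises and nested callbacks.
+openInspector().then(({inspector}) => {
+  return selectNode("#element", inspector).then(() => {
+    return checkSomething(inspector);
+  });
+});
+
+// Easier to read: async/await inside an add_task test.
+add_task(async function() {
+  let {inspector} = await openInspector();
+  await selectNode("#element", inspector);
+  await checkSomething(inspector);
+});
+```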
+
+### Clean up after yourself
+
+Do not expose global variables in your test file; they may end up causing bugs that are hard to track. Most functions in `head.js` return useful instances of the DevTools panels, and you can pass these as arguments to your sub-functions; there is no need to store them in the global scope.
+This also avoids having to remember to nullify them at the end.
+
+If your test needs to toggle user preferences, make sure you reset these preferences when the test ends. Do not reset them at the end of the test function, though: if your test fails, the code after the failure never runs and the preferences would never be reset. Use the `registerCleanupFunction` helper instead.
+
+It may be a good idea to do the reset in `head.js`.
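+
+For example, a minimal sketch of toggling a preference with automatic cleanup (the preference name is made up for illustration):
+
+```javascript
+// Change a pref for the duration of the test (hypothetical pref name).
+Services.prefs.setBoolPref("devtools.some-feature.enabled", true);
+
+// Register the cleanup right away, so the pref is reset even if the test fails.
+registerCleanupFunction(() => {
+  Services.prefs.clearUserPref("devtools.some-feature.enabled");
+});
+```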
+
+### Write small, maintainable code
+
+Split your main test function into smaller test functions with self-explanatory names.
+
+Make sure your test files are small. If you are working on a new feature, you can create a new test each time you add a new piece of functionality, for instance a new button in the UI. This keeps tests small and incremental, and can also help with writing tests while coding.
+
+If your test is just a sequence of functions being called to do the same thing over and over again, it may be better to describe the test steps in an array instead, and have just one function that runs each item of the array. See the following example:
+
+```javascript
+const TESTS = [
+ {desc: "add a class", cssSelector: "#id1", makeChanges: async function() {...}},
+ {desc: "change href", cssSelector: "a.the-link", makeChanges: async function() {...}},
+ ...
+];
+
+add_task(async function() {
+ await addTab("...");
+ let {toolbox, inspector} = await openInspector();
+ for (let step of TESTS) {
+ info("Testing step: " + step.desc);
+ await selectNode(step.cssSelector, inspector);
+ await step.makeChanges();
+ }
+});
+```
+
+As shown in this example, you can add as many test cases as you want to the `TESTS` array, and the actual test code remains very short and easy to understand and maintain. Note that when looping through test arrays, it's always a good idea to add a `desc` property that will be used in an `info()` log output.
+
+### Avoid exceptions
+
+Even when they're not failing the test, exceptions are bad because they pollute the logs and make them harder to read.
+They're also bad because, when your test runs as part of a test suite and another, unrelated test fails, the exceptions may give wrong information to the person fixing that unrelated test.
+
+After your test has run locally, just make sure it doesn't output exceptions by scrolling through the logs.
+
+Often, non-blocking exceptions may be caused by hanging protocol requests that haven't been responded to yet when the tools get closed at the end of the test. Make sure you listen for the right events and give the tools time to update themselves before moving on.
+
+### Avoid test timeouts
+
+When tests fail, it's far better to have them fail and end immediately with an exception that helps fix the problem than to have them hang until they hit the timeout and get killed. This doesn't conflict with the previous recommendation: waiting for the specific events that signal an update is fine; waiting an arbitrary amount of time, or for an event that may never fire, is what leads to hangs and timeouts.
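+
+For example, rather than pausing for an arbitrary amount of time, wait for the event that signals the update you care about. A sketch (a fragment from inside an `add_task` function, using the inspector as an example; `doSomethingThatTriggersAnUpdate` is a hypothetical helper):
+
+```javascript
+// Bad: an arbitrary pause that slows the test down and may still be too short.
+// await new Promise(resolve => setTimeout(resolve, 2000));
+
+// Better: wait for the event that signals the UI has finished updating.
+let onUpdated = inspector.once("inspector-updated");
+await doSomethingThatTriggersAnUpdate();
+await onUpdated;
+```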
+
+## Adding new helpers
+
+In some cases, you may want to extract some common code from your test to use it in another test.
+
+* If this is very common code that all tests could use, then add it to `devtools/client/shared/test/shared-head.js`.
+* If this is common code specific to a given tool, then add it to the corresponding `head.js` file.
+* If it isn't common enough to live in `head.js`, then it may be a good idea to create a helper file to avoid duplication anyway. Here's how to create a helper file:
+  * Create a new file in your test directory. The naming convention should be `helper_<description_of_the_helper>.js`.
+  * Add it to the `browser.ini` `support-files` section, making sure it is sorted alphabetically.
+  * Load the helper file in the tests.
+    * `browser/devtools/markupview/test/head.js` has a handy `loadHelperScript(fileName)` function that you can use (see the sketch after this list).
+    * The file will be loaded in the test global scope, so any global function or variables it defines will be available (just like `head.js`).
+  * Use the special ESLint comment `/* import-globals-from helper_file.js */` to prevent ESLint errors for undefined variables.
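+
+For example (a sketch with a hypothetical helper file name):
+
+```javascript
+/* import-globals-from helper_some_feature.js */
+loadHelperScript("helper_some_feature.js");
+```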
+
+In all cases, new helper functions should be properly documented with a JSDoc comment block.
diff --git a/devtools/docs/contributor/tests/xpcshell.md b/devtools/docs/contributor/tests/xpcshell.md
new file mode 100644
index 0000000000..e98c9deef7
--- /dev/null
+++ b/devtools/docs/contributor/tests/xpcshell.md
@@ -0,0 +1,13 @@
+# Automated tests: `xpcshell` tests
+
+To run all of the xpcshell tests:
+
+```bash
+./mach xpcshell-test --tag devtools
+```
+
+To run a specific xpcshell test:
+
+```bash
+./mach xpcshell-test devtools/path/to/the/test_you_want_to_run.js
+```