Diffstat (limited to 'testing/perfdocs')
-rw-r--r--  testing/perfdocs/generated/Talos.rst        122
-rw-r--r--  testing/perfdocs/generated/developing.rst   154
-rw-r--r--  testing/perfdocs/generated/index.rst         16
-rw-r--r--  testing/perfdocs/generated/mozperftest.rst  204
-rw-r--r--  testing/perfdocs/generated/raptor.rst       127
-rw-r--r--  testing/perfdocs/generated/running.rst       46
-rw-r--r--  testing/perfdocs/generated/writing.rst      176
-rw-r--r--  testing/perfdocs/moz.build                    1
8 files changed, 846 insertions, 0 deletions
diff --git a/testing/perfdocs/generated/Talos.rst b/testing/perfdocs/generated/Talos.rst
new file mode 100644
index 0000000000..3561f250c2
--- /dev/null
+++ b/testing/perfdocs/generated/Talos.rst
@@ -0,0 +1,122 @@
+=====
+Talos
+=====
+
+Talos is a cross-platform Python performance testing framework built specifically for
+Firefox on desktop. New performance tests should be added to the newer
+`mozperftest </testing/perfdocs/mozperftest.html>`_ framework unless limitations
+there (highly unlikely) make it absolutely necessary to add them to Talos. Talos is
+named after the `bronze automaton from Greek myth <https://en.wikipedia.org/wiki/Talos>`_.
+
+Talos tests are run in a similar manner to xpcshell and mochitests. They are started via
+the command :code:`mach talos-test`. A `python script <https://searchfox.org/mozilla-central/source/testing/talos>`_
+then launches Firefox, which runs the tests via JavaScript special powers. The test timing
+information is recorded in a text log file, e.g. :code:`browser_output.txt`, and then processed
+into the `JSON format supported by Perfherder <https://searchfox.org/mozilla-central/source/testing/mozharness/external_tools/performance-artifact-schema.json>`_.
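+
+For orientation, the Perfherder payload is roughly shaped as follows. This is only an
+illustrative sketch with made-up values; the linked schema is the authoritative
+definition.
+
+.. code-block:: python
+
+    # Rough shape of a Perfherder performance artifact (not exhaustive).
+    PERFHERDER_DATA = {
+        "framework": {"name": "talos"},
+        "suites": [
+            {
+                "name": "damp",
+                "value": 250.0,  # suite-level summary value
+                "subtests": [
+                    {"name": "simple.netmonitor", "value": 245.3},
+                ],
+            },
+        ],
+    }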
+
+Talos bugs can be filed in `Testing::Talos <https://bugzilla.mozilla.org/enter_bug.cgi?product=Testing&component=Talos>`_.
+
+Talos infrastructure is still mostly documented `on the Mozilla Wiki <https://wiki.mozilla.org/TestEngineering/Performance/Talos>`_.
+In addition, there are plans to surface all of the individual tests using PerfDocs.
+This work is tracked in `Bug 1674220 <https://bugzilla.mozilla.org/show_bug.cgi?id=1674220>`_.
+
+Examples of current Talos runs can be `found in Treeherder by searching for "Talos" <https://treeherder.mozilla.org/jobs?repo=autoland&searchStr=Talos>`_.
+If none are immediately available, then scroll to the bottom of the page and load more test
+runs. The tests all share a group symbol starting with a :code:`T`, for example
+:code:`T(c d damp g1)` or :code:`T-gli(webgl)`.
+
+Running Talos Locally
+*********************
+
+Running tests locally is most likely only useful for debugging what is going on in a test,
+as the test output is only reported as raw JSON. The CLI is documented via:
+
+.. code-block:: bash
+
+ ./mach talos-test --help
+
+To quickly try out the :code:`./mach talos-test` command, the following can be run to do a
+single run of the DevTools' simple netmonitor test.
+
+.. code-block:: bash
+
+ # Run the "simple.netmonitor" test very quickly with 1 cycle, and 1 page cycle.
+ ./mach talos-test --activeTests damp --subtests simple.netmonitor --cycles 1 --tppagecycles 1
+
+
+The :code:`--print-suites` and :code:`--print-tests` flags are helpful for figuring
+out which suites and tests are available to run.
+
+.. code-block:: bash
+
+ # Print out the suites:
+ ./mach talos-test --print-suites
+
+ # Available suites:
+ # bcv (basic_compositor_video)
+ # chromez (about_preferences_basic:tresize:about_newtab_with_snippets)
+ # dromaeojs (dromaeo_css:kraken)
+ # flex (tart_flex:ts_paint_flex)
+ # ...
+
+ # Run all of the tests in the "bcv" test suite:
+ ./mach talos-test --suite bcv
+
+ # Print out the tests:
+ ./mach talos-test --print-tests
+
+ # Available tests:
+ # ================
+ #
+ # a11yr
+ # -----
+ # This test ensures basic a11y tables and permutations do not cause
+ # performance regressions.
+ #
+ # about_newtab_with_snippets
+ # --------------------------
+ # Load about ActivityStream (about:home and about:newtab) with snippets enabled
+ #
+ # ...
+
+ # Run the tests in "a11yr" listed above
+ ./mach talos-test --activeTests a11yr
+
+Running Talos on Try
+********************
+
+Talos runs can be generated through the mach try fuzzy finder:
+
+.. code-block:: bash
+
+ ./mach try fuzzy
+
+The following is an example output at the time of this writing. Refine the query for the
+platform and test suites of your choosing.
+
+.. code-block::
+
+ | test-windows10-64-qr/opt-talos-bcv-swr-e10s
+ | test-linux64-shippable/opt-talos-webgl-e10s
+ | test-linux64-shippable/opt-talos-other-e10s
+ | test-linux64-shippable-qr/opt-talos-g5-e10s
+ | test-linux64-shippable-qr/opt-talos-g4-e10s
+ | test-linux64-shippable-qr/opt-talos-g3-e10s
+ | test-linux64-shippable-qr/opt-talos-g1-e10s
+ | test-windows10-64/opt-talos-webgl-gli-e10s
+ | test-linux64-shippable/opt-talos-tp5o-e10s
+ | test-linux64-shippable/opt-talos-svgr-e10s
+ | test-linux64-shippable/opt-talos-flex-e10s
+ | test-linux64-shippable/opt-talos-damp-e10s
+ > test-windows7-32/opt-talos-webgl-gli-e10s
+ | test-linux64-shippable/opt-talos-bcv-e10s
+ | test-linux64-shippable/opt-talos-g5-e10s
+ | test-linux64-shippable/opt-talos-g4-e10s
+ | test-linux64-shippable/opt-talos-g3-e10s
+ | test-linux64-shippable/opt-talos-g1-e10s
+ | test-linux64-qr/opt-talos-bcv-swr-e10s
+
+ For more shortcuts, see mach help try fuzzy and man fzf
+ select: <tab>, accept: <enter>, cancel: <ctrl-c>, select-all: <ctrl-a>, cursor-up: <up>, cursor-down: <down>
+ 1379/2967
+ > talos
diff --git a/testing/perfdocs/generated/developing.rst b/testing/perfdocs/generated/developing.rst
new file mode 100644
index 0000000000..e361f1a890
--- /dev/null
+++ b/testing/perfdocs/generated/developing.rst
@@ -0,0 +1,154 @@
+Developing in mozperftest
+=========================
+
+Architecture overview
+---------------------
+
+`mozperftest` implements a mach command that is a thin wrapper on top of
+`runner.py`, which allows us to run the tool without having to go through
+a mach call. Command arguments are prepared in `argparser.py` and then made
+available to the runner.
+
+The runner creates a `MachEnvironment` instance (see `environment.py`) and a
+`Metadata` instance (see `metadata.py`). These two objects persist for the whole
+test run and are used to share data across all parts.
+
+The runner then calls `MachEnvironment.run`, which is in charge of running the test.
+The `MachEnvironment` instance runs a sequence of **layers**.
+
+Layers are classes responsible for a single aspect of a performance test. They
+are organized in three categories:
+
+- **system**: anything that sets up and tears down some resources or services
+  on the system. Existing system layers: **android**, **proxy**
+- **test**: layers that are in charge of running a test to collect metrics.
+  Existing test layers: **browsertime** and **androidlog**
+- **metrics**: layers that process the collected data to turn it into usable
+  metrics. Existing metrics layers: **perfherder** and **console**
+
+The MachEnvironment instance collects a series of layers for each category and
+runs them sequentially.
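+
+As a conceptual sketch only (not the actual implementation; apart from `setup`,
+`teardown` and the call itself, which follow the layer contract described below,
+every name is a placeholder)::
+
+    def run_layers(metadata, system_layers, test_layers, metrics_layers):
+        # Layers run category by category, sharing the same metadata object.
+        for layer in (*system_layers, *test_layers, *metrics_layers):
+            layer.setup()
+            try:
+                # A layer may enrich the metadata; keep the previous object if
+                # it returns nothing.
+                metadata = layer(metadata) or metadata
+            finally:
+                layer.teardown()
+        return metadata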
+
+The goal of this organization is to allow adding new performance test runners
+based on a specific combination of layers. To avoid messy code,
+we need to make sure that each layer represents a single aspect of the process
+and that it is completely independent from other layers (besides sharing data
+through the common environment).
+
+For instance, we could use `perftest` to run a C++ benchmark by implementing a
+new **test** layer.
+
+
+Layer
+-----
+
+A layer is a class that inherits from `mozperftest.layers.Layer` and implements
+a few methods and class variables.
+
+List of methods and variables:
+
+- `name`: name of the layer (class variable, mandatory)
+- `activated`: whether the layer is activated by default (class variable, defaults to False)
+- `user_exception`: when set, the `on_exception` hook is triggered when an exception occurs
+- `arguments`: dict containing the layer's arguments. Each argument follows
+  the `argparser` standard
+- `run(self, metadata)`: called to execute the layer
+- `setup(self)`: called when the layer is about to be executed
+- `teardown(self)`: called when the layer is exiting
+
+Example::
+
+    import smtplib
+
+    from mozperftest.layers import Layer
+
+
+    class EmailSender(Layer):
+ """Sends an email with the results
+ """
+ name = "email"
+ activated = False
+
+ arguments = {
+ "recipient": {
+ "type": str,
+ "default": "tarek@mozilla.com",
+ "help": "Recipient",
+ },
+ }
+
+        def setup(self):
+            # smtp_server and port are assumed to be defined in the module
+            self.server = smtplib.SMTP(smtp_server, port)
+
+        def teardown(self):
+            self.server.quit()
+
+        def __call__(self, metadata):
+            # smtplib has no send_email(); sendmail() takes a sender, the
+            # recipient(s) and a message string
+            self.server.sendmail(
+                "perftest@example.com",  # illustrative sender address
+                self.get_arg("recipient"),
+                str(metadata.results()),
+            )
+
+
+It can then be added to one of the top-level functions that are used to create
+the list of layers for each category:
+
+- **mozperftest.metrics.pick_metrics** for the metrics category
+- **mozperftest.system.pick_system** for the system category
+- **mozperftest.test.pick_browser** for the test category
+
+The layer must also be added to the `get_layers` function of the corresponding
+category. The `get_layers` functions are invoked when building the argument parser.
+
+In our example, adding the `EmailSender` layer will add two new options
+(see the sketch below):
+
+- **--email**: a flag to activate the layer
+- **--email-recipient**: sets the layer's `recipient` argument
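+
+The exact wiring lives in `argparser.py`; the sketch below only illustrates the
+naming convention, and the function name is made up::
+
+    def layer_flags(layer):
+        # The layer name itself becomes an activation flag, e.g. --email.
+        flags = ["--" + layer.name]
+        for arg in layer.arguments:
+            # Each argument is prefixed with the layer name, e.g. --email-recipient.
+            flags.append("--" + layer.name + "-" + arg)
+        return flags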
+
+
+Important layers
+----------------
+
+**mozperftest** can be used to run performance tests against browsers using the
+**browsertime** test layer. It leverages the `browsertime.js
+<https://www.sitespeed.io/documentation/browsertime/>`_ framework and provides
+a full integration into Mozilla's build and CI systems.
+
+Browsertime uses the Selenium WebDriver client to drive the browser, and
+provides some metrics to measure performance during a user journey.
+
+
+Coding style
+------------
+
+For the coding style, we want to:
+
+- Follow `PEP 257 <https://www.python.org/dev/peps/pep-0257/>`_ for docstrings
+- Avoid complexity as much as possible
+- Use modern Python 3 code (for instance `pathlib` instead of `os.path`)
+- Avoid dependencies on Mozilla build projects and frameworks as much as possible
+ (mozharness, mozbuild, etc), or make sure they are isolated and documented
+
+
+Landing patches
+---------------
+
+.. warning::
+
+ It is mandatory for each patch to have a test. Any change without a test
+ will be rejected.
+
+Before landing a patch for mozperftest, make sure you run `perftest-test`::
+
+ % ./mach perftest-test
+ => black [OK]
+ => flake8 [OK]
+ => remove old coverage data [OK]
+ => running tests [OK]
+ => coverage
+ Name Stmts Miss Cover Missing
+ ------------------------------------------------------------------------------------------
+ mozperftest/metrics/notebook/analyzer.py 29 20 31% 26-36, 39-42, 45-51
+ ...
+ mozperftest/system/proxy.py 37 0 100%
+ ------------------------------------------------------------------------------------------
+ TOTAL 1614 240 85%
+
+ [OK]
+
+The command will run `black`, `flake8` and also make sure that the test coverage has not regressed.
+
+You can use the `-s` option to bypass flake8/black and speed up your workflow, but make
+sure you do a full test run before landing. You can also pass the name of a single test module.
diff --git a/testing/perfdocs/generated/index.rst b/testing/perfdocs/generated/index.rst
new file mode 100644
index 0000000000..904cc6fdf0
--- /dev/null
+++ b/testing/perfdocs/generated/index.rst
@@ -0,0 +1,16 @@
+###################
+Performance Testing
+###################
+
+Performance tests are designed to catch performance regressions before they reach our
+end users. At this time, there is no unified approach for these types of tests,
+but `mozperftest </testing/perfdocs/mozperftest.html>`_ aims to provide this in the future.
+
+For more detailed information about each test suite, see each project's documentation:
+
+ * :doc:`Talos`
+ * :doc:`mozperftest`
+ * :doc:`raptor`
+
+For more information about the performance testing team,
+`visit the wiki page <https://wiki.mozilla.org/TestEngineering/Performance>`_.
diff --git a/testing/perfdocs/generated/mozperftest.rst b/testing/perfdocs/generated/mozperftest.rst
new file mode 100644
index 0000000000..0dc6d6657c
--- /dev/null
+++ b/testing/perfdocs/generated/mozperftest.rst
@@ -0,0 +1,204 @@
+===========
+mozperftest
+===========
+
+**mozperftest** can be used to run performance tests.
+
+
+.. toctree::
+
+ running
+ writing
+ developing
+
+The following documents all the tests we have for mozperftest.
+If the owner does not specify the Usage and Description, they are marked N/A.
+
+browser/base/content/test
+-------------------------
+Performance tests from the 'browser/base/content/test' folder.
+
+perftest_browser_xhtml_dom.js
+=============================
+
+:owner: Browser Front-end team
+:name: Dom-size
+
+**Measures the size of the DOM**
+
+
+netwerk/test/perf
+-----------------
+Performance tests from the 'netwerk/test/perf' folder.
+
+perftest_http3_cloudflareblog.js
+================================
+
+:owner: Network Team
+:name: cloudflare
+
+**User-journey live site test for cloudflare blog.**
+
+perftest_http3_controlled.js
+============================
+
+:owner: Network Team
+:name: controlled
+:tags: throttlable
+
+**User-journey live site test for controlled server**
+
+perftest_http3_facebook_scroll.js
+=================================
+
+:owner: Network Team
+:name: facebook-scroll
+
+**Measures the number of requests per second after a scroll.**
+
+perftest_http3_google_image.js
+==============================
+
+:owner: Network Team
+:name: g-image
+
+**Measures the number of images per second after a scroll.**
+
+perftest_http3_google_search.js
+===============================
+
+:owner: Network Team
+:name: g-search
+
+**User-journey live site test for google search**
+
+perftest_http3_lucasquicfetch.js
+================================
+
+:owner: Network Team
+:name: lq-fetch
+
+**Measures the amount of time it takes to load a set of images.**
+
+perftest_http3_youtube_watch.js
+===============================
+
+:owner: Network Team
+:name: youtube-noscroll
+
+**Measures quality of the video being played.**
+
+perftest_http3_youtube_watch_scroll.js
+======================================
+
+:owner: Network Team
+:name: youtube-scroll
+
+**Measures quality of the video being played.**
+
+
+netwerk/test/unit
+-----------------
+Performance tests from the 'netwerk/test/unit' folder.
+
+test_http3_perf.js
+==================
+
+:owner: Network Team
+:name: http3 raw
+:tags: network,http3,quic
+:Default options:
+
+::
+
+ --perfherder
+ --perfherder-metrics name:speed,unit:bps
+ --xpcshell-cycles 13
+ --verbose
+ --try-platform linux, mac
+
+**XPCShell test that verifies the lib integration against a local server**
+
+
+testing/performance
+-------------------
+Performance tests from the 'testing/performance' folder.
+
+perftest_bbc_link.js
+====================
+
+:owner: Performance Team
+:name: BBC Link
+
+**Measures time to load BBC homepage**
+
+perftest_facebook.js
+====================
+
+:owner: Performance Team
+:name: Facebook
+
+**Measures time to log in to Facebook**
+
+perftest_jsconf_cold.js
+=======================
+
+:owner: Performance Team
+:name: JSConf (cold)
+
+**Measures time to load JSConf page (cold)**
+
+perftest_jsconf_warm.js
+=======================
+
+:owner: Performance Team
+:name: JSConf (warm)
+
+**Measures time to load JSConf page (warm)**
+
+perftest_politico_link.js
+=========================
+
+:owner: Performance Team
+:name: Politico Link
+
+**Measures time to load Politico homepage**
+
+perftest_android_view.js
+========================
+
+:owner: Performance Team
+:name: VIEW
+
+**Measures cold process view time**
+
+This test launches the appropriate Android app, simulating opening a link through the VIEW intent workflow. The application is launched with the intent action android.intent.action.VIEW, loading a trivially simple website. The reported metric is the time from process start to navigationStart, reported as processLaunchToNavStart.
+
+perftest_youtube_link.js
+========================
+
+:owner: Performance Team
+:name: YouTube Link
+
+**Measures time to load YouTube video**
+
+perftest_android_main.js
+========================
+
+:owner: Performance Team
+:name: main
+
+**Measures the time from process start until the Fenix main activity (HomeActivity) reports Fully Drawn**
+
+This test launches Fenix to its main activity (HomeActivity). The application logs "Fully Drawn" when the activity is drawn. Using the Android log transformer, we measure the time from process start to this event.
+
+perftest_pageload.js
+====================
+
+:owner: Performance Team
+:name: pageload
+
+**Measures time to load mozilla page**
+
+
+If you have any questions, please see this `wiki page <https://wiki.mozilla.org/TestEngineering/Performance#Where_to_find_us>`_.
diff --git a/testing/perfdocs/generated/raptor.rst b/testing/perfdocs/generated/raptor.rst
new file mode 100644
index 0000000000..4a094da765
--- /dev/null
+++ b/testing/perfdocs/generated/raptor.rst
@@ -0,0 +1,127 @@
+######
+Raptor
+######
+
+The following documents all testing we have for Raptor.
+
+Benchmarks
+----------
+Standard benchmarks are third-party tests (e.g. Speedometer) that we have integrated into Raptor to run per-commit in our production CI.
+
+
+Desktop
+-------
+Tests for page-load performance. The links direct to the actual websites that are being tested. (WX: WebExtension, BT: Browsertime, FF: Firefox, CH: Chrome, CU: Chromium)
+
+* `amazon (BT, FF, CH, CU) <https://www.amazon.com/s?k=laptop&ref=nb_sb_noss_1>`__
+* `apple (BT, FF, CH, CU) <https://www.apple.com/macbook-pro/>`__
+* `bing-search (BT, FF, CH, CU) <https://www.bing.com/search?q=barack+obama>`__
+* `ebay (BT, FF, CH, CU) <https://www.ebay.com/>`__
+* `facebook (BT, FF, CH, CU) <https://www.facebook.com>`__
+* `facebook-redesign (BT, FF, CH, CU) <https://www.facebook.com>`__
+* `fandom (BT, FF, CH, CU) <https://www.fandom.com/articles/fallout-76-will-live-and-die-on-the-creativity-of-its-playerbase>`__
+* `google-docs (BT, FF, CH, CU) <https://docs.google.com/document/d/1US-07msg12slQtI_xchzYxcKlTs6Fp7WqIc6W5GK5M8/edit?usp=sharing>`__
+* `google-mail (BT, FF, CH, CU) <https://mail.google.com/>`__
+* `google-search (BT, FF, CH, CU) <https://www.google.com/search?hl=en&q=barack+obama&cad=h>`__
+* `google-sheets (BT, FF, CH, CU) <https://docs.google.com/spreadsheets/d/1jT9qfZFAeqNoOK97gruc34Zb7y_Q-O_drZ8kSXT-4D4/edit?usp=sharing>`__
+* `google-slides (BT, FF, CH, CU) <https://docs.google.com/presentation/d/1Ici0ceWwpFvmIb3EmKeWSq_vAQdmmdFcWqaiLqUkJng/edit?usp=sharing>`__
+* `imdb (BT, FF, CH, CU) <https://www.imdb.com/title/tt0084967/?ref_=nv_sr_2>`__
+* `imgur (BT, FF, CH, CU) <https://imgur.com/gallery/m5tYJL6>`__
+* `instagram (BT, FF, CH, CU) <https://www.instagram.com/>`__
+* `linkedin (BT, FF, CH, CU) <https://www.linkedin.com/in/thommy-harris-hk-385723106/>`__
+* `microsoft (BT, FF, CH, CU) <https://www.microsoft.com/en-us/>`__
+* `netflix (BT, FF, CH, CU) <https://www.netflix.com/title/80117263>`__
+* `office (BT, FF, CH, CU) <https://office.live.com/start/Word.aspx?omkt=en-US>`__
+* `outlook (BT, FF, CH, CU) <https://outlook.live.com/mail/inbox>`__
+* `paypal (BT, FF, CH, CU) <https://www.paypal.com/myaccount/summary/>`__
+* `pinterest (BT, FF, CH, CU) <https://pinterest.com/>`__
+* `raptor-tp6-amazon (WX, FF, CH, CU) <https://www.amazon.com/s?k=laptop&ref=nb_sb_noss_1>`__
+* `raptor-tp6-apple (WX, FF, CH, CU) <https://www.apple.com/macbook-pro/>`__
+* `raptor-tp6-bing (WX, FF, CH, CU) <https://www.bing.com/search?q=barack+obama>`__
+* `raptor-tp6-cnn-ampstories (WX, FF) <https://cnn.com/ampstories/us/why-hurricane-michael-is-a-monster-unlike-any-other>`__
+* `raptor-tp6-docs (WX, FF, CH, CU) <https://docs.google.com/document/d/1US-07msg12slQtI_xchzYxcKlTs6Fp7WqIc6W5GK5M8/edit?usp=sharing>`__
+* `raptor-tp6-ebay (WX, FF, CH, CU) <https://www.ebay.com/>`__
+* `raptor-tp6-facebook (WX, CH, CU) <https://www.facebook.com>`__
+* `raptor-tp6-fandom (WX, FF, CH, CU) <https://www.fandom.com/articles/fallout-76-will-live-and-die-on-the-creativity-of-its-playerbase>`__
+* `raptor-tp6-google (WX, FF, CH, CU) <https://www.google.com/search?hl=en&q=barack+obama&cad=h>`__
+* `raptor-tp6-google-mail (WX, FF, CH, CU) <https://mail.google.com/>`__
+* `raptor-tp6-imdb (WX, FF, CH, CU) <https://www.imdb.com/title/tt0084967/?ref_=nv_sr_2>`__
+* `raptor-tp6-imgur (WX, FF, CH, CU) <https://imgur.com/gallery/m5tYJL6>`__
+* `raptor-tp6-instagram (WX, FF, CH, CU) <https://www.instagram.com/>`__
+* `raptor-tp6-linkedin (WX, FF, CH, CU) <https://www.linkedin.com/in/thommy-harris-hk-385723106/>`__
+* `raptor-tp6-microsoft (WX, FF, CH, CU) <https://www.microsoft.com/en-us/>`__
+* `raptor-tp6-netflix (WX, FF, CH, CU) <https://www.netflix.com/title/80117263>`__
+* `raptor-tp6-office (WX, FF, CH, CU) <https://office.live.com/start/Word.aspx?omkt=en-US>`__
+* `raptor-tp6-outlook (WX, FF, CH, CU) <https://outlook.live.com/mail/inbox>`__
+* `raptor-tp6-paypal (WX, FF, CH, CU) <https://www.paypal.com/myaccount/summary/>`__
+* `raptor-tp6-pinterest (WX, FF, CH, CU) <https://pinterest.com/>`__
+* `raptor-tp6-reddit (WX, FF, CH, CU) <https://www.reddit.com/r/technology/comments/9sqwyh/we_posed_as_100_senators_to_run_ads_on_facebook/>`__
+* `raptor-tp6-sheets (WX, FF, CH, CU) <https://docs.google.com/spreadsheets/d/1jT9qfZFAeqNoOK97gruc34Zb7y_Q-O_drZ8kSXT-4D4/edit?usp=sharing>`__
+* `raptor-tp6-slides (WX, FF, CH, CU) <https://docs.google.com/presentation/d/1Ici0ceWwpFvmIb3EmKeWSq_vAQdmmdFcWqaiLqUkJng/edit?usp=sharing>`__
+* `raptor-tp6-tumblr (WX, FF, CH, CU) <https://www.tumblr.com/dashboard>`__
+* `raptor-tp6-twitch (WX, FF, CH, CU) <https://www.twitch.tv/videos/326804629>`__
+* `raptor-tp6-twitter (WX, FF, CH, CU) <https://twitter.com/BarackObama>`__
+* `raptor-tp6-wikipedia (WX, FF, CH, CU) <https://en.wikipedia.org/wiki/Barack_Obama>`__
+* `raptor-tp6-yahoo-mail (WX, FF, CH, CU) <https://mail.yahoo.com/>`__
+* `raptor-tp6-yahoo-news (WX, FF, CH, CU) <https://www.yahoo.com/lifestyle/police-respond-noise-complaint-end-playing-video-games-respectful-tenants-002329963.html>`__
+* `raptor-tp6-yandex (WX, FF, CH, CU) <https://yandex.ru/search/?text=barack%20obama&lr=10115>`__
+* `raptor-tp6-youtube (WX, FF, CH, CU) <https://www.youtube.com>`__
+* `reddit (BT, FF, CH, CU) <https://www.reddit.com/r/technology/comments/9sqwyh/we_posed_as_100_senators_to_run_ads_on_facebook/>`__
+* `tumblr (BT, FF, CH, CU) <https://www.tumblr.com/dashboard>`__
+* `twitch (BT, FF, CH, CU) <https://www.twitch.tv/videos/326804629>`__
+* `twitter (BT, FF, CH, CU) <https://twitter.com/BarackObama>`__
+* `wikipedia (BT, FF, CH, CU) <https://en.wikipedia.org/wiki/Barack_Obama>`__
+* `yahoo-mail (BT, FF, CH, CU) <https://mail.yahoo.com/>`__
+* `yahoo-news (BT, FF, CH, CU) <https://www.yahoo.com/lifestyle/police-respond-noise-complaint-end-playing-video-games-respectful-tenants-002329963.html>`__
+* `yandex (BT, FF, CH, CU) <https://yandex.ru/search/?text=barack%20obama&lr=10115>`__
+* `youtube (BT, FF, CH, CU) <https://www.youtube.com>`__
+
+Live
+----
+A set of test pages that are run as live sites instead of recorded versions. These tests are available on all browsers, on all platforms.
+
+
+Mobile
+------
+Page-load performance test suite on Android. The links direct to the actual websites that are being tested. (WX: WebExtension, BT: Browsertime, GV: Geckoview, RB: Refbrow, FE: Fenix, F68: Fennec68, CH-M: Chrome mobile)
+
+* `allrecipes (BT, GV, FE, RB, F68, CH-M) <https://www.allrecipes.com/>`__
+* `amazon (BT, GV, FE, RB, F68, CH-M) <https://www.amazon.com>`__
+* `amazon-search (BT, GV, FE, RB, F68, CH-M) <https://www.amazon.com/s/ref=nb_sb_noss_2/139-6317191-5622045?url=search-alias%3Daps&field-keywords=mobile+phone>`__
+* `bbc (BT, GV, FE, RB, F68, CH-M) <https://www.bbc.com/news/business-47245877>`__
+* `bing (BT, GV, FE, RB, F68, CH-M) <https://www.bing.com/>`__
+* `bing-search-restaurants (BT, GV, FE, RB, F68, CH-M) <https://www.bing.com/search?q=restaurants>`__
+* `booking (BT, GV, FE, RB, F68, CH-M) <https://www.booking.com/>`__
+* `cnn (BT, GV, FE, RB, F68, CH-M) <https://cnn.com>`__
+* `cnn-ampstories (BT, GV, FE, RB, F68, CH-M) <https://cnn.com/ampstories/us/why-hurricane-michael-is-a-monster-unlike-any-other>`__
+* `ebay-kleinanzeigen (BT, GV, FE, RB, F68, CH-M) <https://m.ebay-kleinanzeigen.de>`__
+* `ebay-kleinanzeigen-search (BT, GV, FE, RB, F68, CH-M) <https://m.ebay-kleinanzeigen.de/s-anzeigen/auf-zeit-wg-berlin/zimmer/c199-l3331>`__
+* `espn (BT, GV, FE, RB, F68, CH-M) <http://www.espn.com/nba/story/_/page/allstarweekend25788027/the-comparison-lebron-james-michael-jordan-their-own-words>`__
+* `facebook (BT, GV, FE, RB, F68, CH-M) <https://m.facebook.com>`__
+* `facebook-cristiano (BT, GV, FE, RB, F68, CH-M) <https://m.facebook.com/Cristiano>`__
+* `google (BT, GV, FE, RB, F68, CH-M) <https://www.google.com>`__
+* `google-maps (BT, GV, FE, RB, F68, CH-M) <https://www.google.com/maps?force=pwa>`__
+* `google-search-restaurants (BT, GV, FE, RB, F68, CH-M) <https://www.google.com/search?q=restaurants+near+me>`__
+* `imdb (BT, GV, FE, RB, F68, CH-M) <https://m.imdb.com/>`__
+* `instagram (BT, GV, FE, RB, F68, CH-M) <https://www.instagram.com>`__
+* `jianshu (BT, GV, FE, RB, F68, CH-M) <https://www.jianshu.com/>`__
+* `microsoft-support (BT, GV, FE, RB, F68, CH-M) <https://support.microsoft.com/en-us>`__
+* `reddit (BT, GV, FE, RB, F68, CH-M) <https://www.reddit.com>`__
+* `stackoverflow (BT, GV, FE, RB, F68, CH-M) <https://stackoverflow.com/>`__
+* `web-de (BT, GV, FE, RB, F68, CH-M) <https://web.de/magazine/politik/politologe-glaubt-grossen-koalition-herbst-knallen-33563566>`__
+* `wikipedia (BT, GV, FE, RB, F68, CH-M) <https://en.m.wikipedia.org/wiki/Main_Page>`__
+* `youtube (BT, GV, FE, RB, F68, CH-M) <https://m.youtube.com>`__
+* `youtube-watch (BT, GV, FE, RB, F68, CH-M) <https://www.youtube.com/watch?v=COU5T-Wafa4>`__
+
+Scenario
+--------
+Tests that perform a specific action (a scenario), e.g. an idle application, or an idle application in the background.
+
+
+Unittests
+---------
+These tests aren't used in standard testing; they are only used in the Raptor unit tests (though they are similar to the raptor-tp6 tests).
+
+
+
+The methods for calling the tests can be found in the `Raptor wiki page <https://wiki.mozilla.org/TestEngineering/Performance/Raptor>`_.
diff --git a/testing/perfdocs/generated/running.rst b/testing/perfdocs/generated/running.rst
new file mode 100644
index 0000000000..7325f8ed60
--- /dev/null
+++ b/testing/perfdocs/generated/running.rst
@@ -0,0 +1,46 @@
+Running a performance test
+==========================
+
+You can run `perftest` locally or in Mozilla's CI.
+
+Running locally
+---------------
+
+Running a test is as simple as calling it using `mach perftest` in a mozilla-central source
+checkout::
+
+ $ ./mach perftest
+
+The `mach` command will bootstrap the installation of all required tools for the
+framework to run, and display a selection screen to pick a test. Once the
+selection is done, the performance test will run locally.
+
+If you know what test you want to run, you can use its path explicitly::
+
+ $ ./mach perftest perftest_script.js
+
+`mach perftest` comes with numerous options, and the test script should provide
+decent defaults so you don't have to bother with them. If you need to tweak some
+options, you can use `./mach perftest --help` to learn about them.
+
+
+Running in the CI
+-----------------
+
+You can run in the CI directly from the `mach perftest` command by adding the `--push-to-try`
+option to a perftest call that works locally.
+
+This call will run the fuzzy selector and then send the job into our CI::
+
+ $ ./mach perftest --push-to-try
+
+We have phones on Bitbar that can run your Android tests. Tests are fairly fast
+to run in the CI because they use sparse profiles. Depending on the
+availability of workers, once the task starts, it takes around 15 minutes before
+the test itself begins.
+
+.. warning::
+
+   If you plan to run Android tests often in the CI, you should contact the Android
+   infra team to make sure there's availability in our pool of devices.
+
diff --git a/testing/perfdocs/generated/writing.rst b/testing/perfdocs/generated/writing.rst
new file mode 100644
index 0000000000..2281744852
--- /dev/null
+++ b/testing/perfdocs/generated/writing.rst
@@ -0,0 +1,176 @@
+Performance scripts
+===================
+
+Performance scripts are programs that drive the browser to run a specific
+benchmark (like a page load or a lower level call) and produce metrics.
+
+We support two flavors right now in `perftest` (but it's easy to add
+new ones):
+
+- **xpcshell**: a classic xpcshell test, turned into a performance test
+- **browsertime**: a browsertime script, which runs a full browser and controls
+  it via a Selenium client.
+
+In order to qualify as performance tests, both flavors require metadata.
+
+Both of our supported flavors are JavaScript modules, so this metadata is
+provided in a `perfMetadata` mapping variable in the module, or in
+the `module.exports` variable when using Node.
+
+This is the list of fields:
+
+- **owner**: name of the owner (person or team) [mandatory]
+- **author**: author of the test
+- **name**: name of the test [mandatory]
+- **description**: short description [mandatory]
+- **longDescription**: longer description
+- **options**: options used to run the test
+- **supportedBrowsers**: list of supported browsers (or "Any")
+- **supportedPlatforms**: list of supported platforms (or "Any")
+- **tags**: a list of tags that describe the test
+
+Tests are registered using test manifests and the **PERFTESTS_MANIFESTS**
+variable in `moz.build` files - it's good practice to name the manifest
+`perftest.ini`.
+
+Example of such a file: https://searchfox.org/mozilla-central/source/testing/performance/perftest.ini
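+
+As a sketch only (check an existing `moz.build` for the exact spelling), the
+registration looks roughly like this; `moz.build` files are evaluated as Python::
+
+    # moz.build (sketch): register the perftest manifest for this folder
+    PERFTESTS_MANIFESTS += ["perftest.ini"]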
+
+
+xpcshell
+--------
+
+`xpcshell` tests are plain xpcshell tests, with two more things:
+
+- the `perfMetadata` variable, as described in the previous section
+- calls to `info("perfMetrics", ...)` to send metrics to the `perftest` framework.
+
+Here's an example of such a metrics call::
+
+   // compute some speed metrics
+ let speed = 12345;
+ info("perfMetrics", { speed });
+
+
+Browsertime
+-----------
+
+With the browsertime layer, performance scenarios are Node modules that
+implement at least one async function that will be called by the framework once
+the browser has started. The function gets a webdriver session and can interact
+with the browser.
+
+You can write complex, interactive scenarios to simulate a user journey,
+and collect various metrics.
+
+Full documentation is available `here <https://www.sitespeed.io/documentation/sitespeed.io/scripting/>`_.
+
+The mozilla-central repository has a few performance test scripts in
+`testing/performance`, and more should be added in components in the future.
+
+By convention, a performance test is prefixed with **perftest_** to be
+recognized by the `perftest` command.
+
+A performance test implements at least one async function published in node's
+`module.exports` as `test`. The function receives two objects:
+
+- **context**, which contains:
+
+ - **options** - All the options sent from the CLI to Browsertime
+  - **log** - an instance of the log system so you can log from your navigation script
+ - **index** - the index of the runs, so you can keep track of which run you are currently on
+ - **storageManager** - The Browsertime storage manager that can help you read/store files to disk
+ - **selenium.webdriver** - The Selenium WebDriver public API object
+ - **selenium.driver** - The instantiated version of the WebDriver driving the current version of the browser
+
+- **command** provides an API to interact with the browser. It's a wrapper
+  around the Selenium client; see the `full documentation here <https://www.sitespeed.io/documentation/sitespeed.io/scripting/#commands>`_.
+
+
+Below is an example of a test that visits the BBC homepage and clicks on a link.
+
+.. sourcecode:: javascript
+
+ "use strict";
+
+ async function setUp(context) {
+ context.log.info("setUp example!");
+ }
+
+ async function test(context, commands) {
+ await commands.navigate("https://www.bbc.com/");
+
+ // Wait for browser to settle
+ await commands.wait.byTime(10000);
+
+ // Start the measurement
+ await commands.measure.start("pageload");
+
+ // Click on the link and wait for page complete check to finish.
+ await commands.click.byClassNameAndWait("block-link__overlay-link");
+
+ // Stop and collect the measurement
+ await commands.measure.stop();
+ }
+
+ async function tearDown(context) {
+ context.log.info("tearDown example!");
+ }
+
+ module.exports = {
+ setUp,
+ test,
+ tearDown,
+ owner: "Performance Team",
+ test_name: "BBC",
+ description: "Measures pageload performance when clicking on a link from the bbc.com",
+ supportedBrowsers: "Any",
+    supportedPlatforms: "Any",
+ };
+
+
+Besides the `test` function, scripts can implement a `setUp` and a `tearDown` function to run
+some code before and after the test. Those functions will be called just once, whereas
+the `test` function might be called several times (through the `iterations` option).
+
+
+Hooks
+-----
+
+A Python module can be used to run functions during a run lifecycle. Available hooks are:
+
+- **before_iterations(args)** runs before everything is started. Gets the args, which
+ can be changed. The **args** argument also contains a **virtualenv** variable that
+ can be used for installing Python packages (e.g. through `install_package <https://searchfox.org/mozilla-central/source/python/mozperftest/mozperftest/utils.py#115-144>`_).
+- **before_runs(env)** runs before the test is launched. Can be used to
+ change the running environment.
+- **after_runs(env)** runs after the test is done.
+- **on_exception(env, layer, exception)** called on any exception. Provides the
+  layer in which the exception occurred, and the exception. If the hook returns `True`,
+  the exception is ignored and the test resumes. If the hook returns `False`, the
+  exception is ignored and the test ends immediately. The hook can also re-raise the
+  exception or raise its own exception.
+
+In the example below, the `before_runs` hook sets the options on the fly,
+so users don't have to provide them on the command line::
+
+ from mozperftest.browser.browsertime import add_options
+
+ url = "'https://www.example.com'"
+
+ common_options = [("processStartTime", "true"),
+ ("firefox.disableBrowsertimeExtension", "true"),
+ ("firefox.android.intentArgument", "'-a'"),
+ ("firefox.android.intentArgument", "'android.intent.action.VIEW'"),
+ ("firefox.android.intentArgument", "'-d'"),
+ ("firefox.android.intentArgument", url)]
+
+
+ def before_runs(env, **kw):
+ add_options(env, common_options)
+
+
+To use this hook module, it can be passed to the `--hooks` option::
+
+ $ ./mach perftest --hooks hooks.py perftest_example.js
+
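+The hooks file can also implement `on_exception`. Below is a minimal sketch relying
+only on the behaviour described above (the layer's `name` attribute and the `True`
+return value); singling out the **proxy** layer is purely illustrative::
+
+    def on_exception(env, layer, exception):
+        # Tolerate failures coming from the proxy layer and keep the test running.
+        if layer.name == "proxy":
+            return True
+        # For anything else, re-raise so the run stops with the original error.
+        raise exception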
+
diff --git a/testing/perfdocs/moz.build b/testing/perfdocs/moz.build
new file mode 100644
index 0000000000..9905709937
--- /dev/null
+++ b/testing/perfdocs/moz.build
@@ -0,0 +1 @@
+SPHINX_TREES["/testing/perfdocs"] = "generated"