path: root/python/mozperftest/perfdocs
author     Daniel Baumann <daniel.baumann@progress-linux.org>   2024-04-19 00:47:55 +0000
committer  Daniel Baumann <daniel.baumann@progress-linux.org>   2024-04-19 00:47:55 +0000
commit     26a029d407be480d791972afb5975cf62c9360a6 (patch)
tree       f435a8308119effd964b339f76abb83a57c29483 /python/mozperftest/perfdocs
parent     Initial commit. (diff)
Adding upstream version 124.0.1. (tag: upstream/124.0.1)
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'python/mozperftest/perfdocs')
-rw-r--r--  python/mozperftest/perfdocs/config.yml      |  49
-rw-r--r--  python/mozperftest/perfdocs/developing.rst  | 154
-rw-r--r--  python/mozperftest/perfdocs/index.rst       |  20
-rw-r--r--  python/mozperftest/perfdocs/running.rst     |  51
-rw-r--r--  python/mozperftest/perfdocs/tools.rst       |  21
-rw-r--r--  python/mozperftest/perfdocs/vision.rst      |  66
-rw-r--r--  python/mozperftest/perfdocs/writing.rst     | 228
7 files changed, 589 insertions, 0 deletions
diff --git a/python/mozperftest/perfdocs/config.yml b/python/mozperftest/perfdocs/config.yml
new file mode 100644
index 0000000000..049d004682
--- /dev/null
+++ b/python/mozperftest/perfdocs/config.yml
@@ -0,0 +1,49 @@
+# This Source Code Form is subject to the terms of the Mozilla Public
+# License, v. 2.0. If a copy of the MPL was not distributed with this
+# file, You can obtain one at http://mozilla.org/MPL/2.0/.
+---
+name: mozperftest
+manifest: None
+static-only: False
+suites:
+  netwerk/test/perf:
+    description: "Performance tests from the 'netwerk/test/perf' folder."
+    tests:
+      youtube-scroll: ""
+      facebook-scroll: ""
+      cloudflare: ""
+      controlled: ""
+      g-search: ""
+      g-image: ""
+      lq-fetch: ""
+      youtube-noscroll: ""
+  netwerk/test/unit:
+    description: "Performance tests from the 'netwerk/test/unit' folder."
+    tests:
+      http3 raw: ""
+  testing/performance:
+    description: "Performance tests from the 'testing/performance' folder."
+    tests:
+      Politico Link: ""
+      BBC Link: ""
+      JSConf (cold): ""
+      Facebook: ""
+      YouTube Link: ""
+      pageload: ""
+      JSConf (warm): ""
+      perfstats: ""
+      webpagetest-firefox: ""
+      webpagetest-chrome: ""
+      android-startup: ""
+
+  browser/base/content/test:
+    description: "Performance tests from the 'browser/base/content/test' folder."
+    tests:
+      Dom-size: ""
+
+  dom/serviceworkers/test/performance:
+    description: "Performance tests running through Mochitest for Service Workers"
+    tests:
+      "Service Worker Caching": ""
+      "Service Worker Fetch": ""
+      "Service Worker Registration": ""
diff --git a/python/mozperftest/perfdocs/developing.rst b/python/mozperftest/perfdocs/developing.rst
new file mode 100644
index 0000000000..97125d8729
--- /dev/null
+++ b/python/mozperftest/perfdocs/developing.rst
@@ -0,0 +1,154 @@
+Developing in mozperftest
+=========================
+
+Architecture overview
+---------------------
+
+`mozperftest` implements a mach command that is a thin wrapper on top
+of `runner.py`, which allows us to run the tool without having to go through
+a mach call. Command arguments are prepared in `argparser.py` and then made
+available to the runner.
+
+The runner creates a `MachEnvironment` instance (see `environment.py`) and a
+`Metadata` instance (see `metadata.py`). These two objects are shared during the
+whole test and used to share data across all parts.
+
+The runner then calls `MachEnvironment.run`, which is in charge of running the test.
+The `MachEnvironment` instance runs a sequence of **layers**.
+
+Layers are classes responsible for a single aspect of a performance test. They
+are organized in three categories:
+
+- **system**: anything that sets up and tears down some resources or services
+  on the system. Existing system layers: **android**, **proxy**
+- **test**: layers that are in charge of running a test to collect metrics.
+  Existing test layers: **browsertime** and **androidlog**
+- **metrics**: all layers that process the collected data to turn it into usable
+  metrics. Existing metrics layers: **perfherder** and **console**
+
+The MachEnvironment instance collects a series of layers for each category and
+runs them sequentially.
+
+The goal of this organization is to allow adding new performance test runners
+that will be based on a specific combination of layers. To avoid messy code,
+we need to make sure that each layer represents a single aspect of the process
+and is completely independent from other layers (besides sharing the data
+through the common environment).
+
+For instance, we could use `perftest` to run a C++ benchmark by implementing a
+new **test** layer.
+
+
+Layer
+-----
+
+A layer is a class that inherits from `mozperftest.layers.Layer` and implements
+a few methods and class variables.
+
+List of methods and variables:
+
+- `name`: name of the layer (class variable, mandatory)
+- `activated`: boolean that activates the layer by default (class variable, defaults to False)
+- `user_exception`: will trigger the `on_exception` hook when an exception occurs
+- `arguments`: dict containing the layer's arguments. Each argument follows
+  the `argparser` standard
+- `run(self, metadata)`: called to execute the layer
+- `setup(self)`: called when the layer is about to be executed
+- `teardown(self)`: called when the layer is exiting
+
+Example::
+
+    import smtplib
+
+    from mozperftest.layers import Layer
+
+    # SMTP settings used by this example layer (values are illustrative)
+    SMTP_SERVER = "smtp.example.com"
+    SMTP_PORT = 587
+
+
+    class EmailSender(Layer):
+        """Sends an email with the results."""
+
+        name = "email"
+        activated = False
+
+        arguments = {
+            "recipient": {
+                "type": str,
+                "default": "tarek@mozilla.com",
+                "help": "Recipient",
+            },
+        }
+
+        def setup(self):
+            # open the SMTP connection before the layer runs
+            self.server = smtplib.SMTP(SMTP_SERVER, SMTP_PORT)
+
+        def teardown(self):
+            # close the connection once the layer is done
+            self.server.quit()
+
+        def __call__(self, metadata):
+            # email the collected results to the configured recipient
+            self.server.sendmail(
+                "perftest@example.com",
+                self.get_arg("recipient"),
+                str(metadata.results()),
+            )
+
+
+It can then be added to one of the top functions that are used to create the list
+of layers for each category:
+
+- **mozperftest.metrics.pick_metrics** for the metrics category
+- **mozperftest.system.pick_system** for the system category
+- **mozperftest.test.pick_browser** for the test category
+
+The layer also needs to be added to the `get_layers` function of its category.
+The `get_layers` functions are invoked when building the argument parser.
+
+In our example, adding the `EmailSender` layer will add two new options:
+
+- **--email** a flag to activate the layer
+- **--email-recipient**
+
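+As a rough, hypothetical sketch (the exact shape of `get_layers`, the module path, and
+the class names of the existing layers may differ), registering the `EmailSender`
+example above in the metrics category could look like this::
+
+    # hypothetical sketch of mozperftest/metrics/__init__.py
+    from mozperftest.metrics.email import EmailSender  # assumed module name
+
+    def get_layers():
+        # the metrics layers exposed to the argument parser and the runner;
+        # Perfherder and ConsoleOutput stand in for the existing layer classes
+        return Perfherder, ConsoleOutput, EmailSender
+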
+
+Important layers
+----------------
+
+**mozperftest** can be used to run performance tests against browsers using the
+**browsertime** test layer. It leverages the `browsertime.js
+<https://www.sitespeed.io/documentation/browsertime/>`_ framework and provides
+a full integration into Mozilla's build and CI systems.
+
+Browsertime uses the selenium webdriver client to drive the browser, and
+provides some metrics to measure performance during a user journey.
+
+
+Coding style
+------------
+
+For the coding style, we want to:
+
+- Follow `PEP 257 <https://www.python.org/dev/peps/pep-0257/>`_ for docstrings
+- Avoid complexity as much as possible
+- Use modern Python 3 code (for instance `pathlib` instead of `os.path` - see the sketch below)
+- Avoid dependencies on Mozilla build projects and frameworks as much as possible
+ (mozharness, mozbuild, etc), or make sure they are isolated and documented
+
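+For instance, a minimal sketch of the `pathlib` style we prefer::
+
+    from pathlib import Path
+
+    # build paths with Path objects instead of os.path string manipulation
+    test_dir = Path(__file__).parent / "tests"
+    manifest = test_dir / "perftest.toml"
+    if manifest.exists():
+        content = manifest.read_text()
+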
+
+Landing patches
+---------------
+
+.. warning::
+
+ It is mandatory for each patch to have a test. Any change without a test
+ will be rejected.
+
+Before landing a patch for mozperftest, make sure you run `perftest-test`::
+
+ % ./mach perftest-test
+ => black [OK]
+ => flake8 [OK]
+ => remove old coverage data [OK]
+ => running tests [OK]
+ => coverage
+ Name Stmts Miss Cover Missing
+ ------------------------------------------------------------------------------------------
+ mozperftest/metrics/notebook/analyzer.py 29 20 31% 26-36, 39-42, 45-51
+ ...
+ mozperftest/system/proxy.py 37 0 100%
+ ------------------------------------------------------------------------------------------
+ TOTAL 1614 240 85%
+
+ [OK]
+
+The command will run `black`, `flake8` and also make sure that the test coverage has not regressed.
+
+You can use the `-s` option to bypass flake8/black and speed up your workflow, but make
+sure you do a full test run before landing. You can also pass the name of a single test module.
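+
+For instance, to iterate quickly on a single test module (the module name below is
+only an illustration)::
+
+    $ ./mach perftest-test -s test_proxy.py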
diff --git a/python/mozperftest/perfdocs/index.rst b/python/mozperftest/perfdocs/index.rst
new file mode 100644
index 0000000000..8c313197b3
--- /dev/null
+++ b/python/mozperftest/perfdocs/index.rst
@@ -0,0 +1,20 @@
+===========
+Mozperftest
+===========
+
+**Mozperftest** can be used to run performance tests.
+
+
+.. toctree::
+
+   running
+   tools
+   writing
+   developing
+   vision
+
+The following documents all the tests we have for mozperftest.
+If the test owner did not specify a usage and description, it is marked as N/A.
+
+{documentation}
+If you have any questions, please see this `wiki page <https://wiki.mozilla.org/TestEngineering/Performance#Where_to_find_us>`_.
diff --git a/python/mozperftest/perfdocs/running.rst b/python/mozperftest/perfdocs/running.rst
new file mode 100644
index 0000000000..ed8d9947a9
--- /dev/null
+++ b/python/mozperftest/perfdocs/running.rst
@@ -0,0 +1,51 @@
+Running a performance test
+==========================
+
+You can run `perftest` locally or in Mozilla's CI.
+
+Running locally
+---------------
+
+Running a test is as simple as calling it using `mach perftest` in a mozilla-central source
+checkout::
+
+ $ ./mach perftest
+
+The `mach` command will bootstrap the installation of all required tools for the
+framework to run, and display a selection screen to pick a test. Once the
+selection is done, the performance test will run locally.
+
+If you know what test you want to run, you can use its path explicitly::
+
+ $ ./mach perftest perftest_script.js
+
+`mach perftest` comes with numerous options, and the test script should provide
+decent defaults so you don't have to bother with them. If you need to tweak some
+options, you can use `./mach perftest --help` to learn about them.
+
+
+Running in the CI
+-----------------
+
+.. warning::
+
+   If you are looking for how to run performance tests in CI and ended up here, you might want to check out :ref:`Mach Try Perf`.
+
+.. warning::
+
+   If you plan to run tests often in the CI for Android, you should contact the Android
+   infra team to make sure there's availability in our pool of devices.
+
+You can run in the CI directly from the `mach perftest` command by adding the `--push-to-try` option
+to a perftest call that works locally.
+
+This call will run the fuzzy selector and then send the job into our CI::
+
+ $ ./mach perftest --push-to-try
+
+We have phones on Bitbar that can run your Android tests. Tests are fairly fast
+to run in the CI because they use sparse profiles. Depending on the
+availability of workers, once the task starts, it takes around 15 minutes to start
+the test.
+
+
diff --git a/python/mozperftest/perfdocs/tools.rst b/python/mozperftest/perfdocs/tools.rst
new file mode 100644
index 0000000000..4bb975e9f9
--- /dev/null
+++ b/python/mozperftest/perfdocs/tools.rst
@@ -0,0 +1,21 @@
+Running a performance tool
+==========================
+
+You can run `perftest-tools` locally.
+
+Running locally
+---------------
+
+You can run `mach perftest-tools` in a mozilla-central source
+checkout::
+
+ $ ./mach perftest-tools side-by-side --help
+
+The `mach` command will bootstrap the installation of all required dependencies for the
+side-by-side tool to run.
+
+The following arguments are required: `-t/--test-name`, `--base-revision`, `--new-revision`,
+and `--base-platform`.
+
+The `--help` argument explains in more detail which arguments you need to
+pass in order to use the tool.
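+
+For instance, a hypothetical invocation (all values below are placeholders) could look
+like this::
+
+    $ ./mach perftest-tools side-by-side \
+        -t <test-name> \
+        --base-revision <base-revision> \
+        --new-revision <new-revision> \
+        --base-platform <platform>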
diff --git a/python/mozperftest/perfdocs/vision.rst b/python/mozperftest/perfdocs/vision.rst
new file mode 100644
index 0000000000..64a1f4f92b
--- /dev/null
+++ b/python/mozperftest/perfdocs/vision.rst
@@ -0,0 +1,66 @@
+Vision
+======
+
+The `mozperftest` project was created with the intention of replacing all the
+performance testing frameworks that exist in the mozilla-central source tree
+with a single one, and of making performance tests a standardized, first-class
+citizen alongside mochitests and xpcshell tests.
+
+We want to give any developer the ability to write performance tests in
+their component, both locally and in the CI, exactly as they would with
+`xpcshell` tests and `mochitests`.
+
+Historically, we had `Talos`, which provided a lot of different tests, from
+micro-benchmarks to page load tests. From there we had `Raptor`, a
+fork of Talos focusing on page loads only. Then `mach browsertime` was added,
+a wrapper around the `browsertime` tool.
+
+All of those frameworks besides `mach browsertime` focused mainly on working
+well in the CI and were hard to use locally. `mach browsertime` worked locally but
+not on all platforms, and it was specific to the browsertime framework.
+
+`mozperftest` currently provides the `mach perftest` command, which scans
+for all tests declared in manifest files such as
+https://searchfox.org/mozilla-central/source/netwerk/test/perf/perftest.toml and
+registered under **PERFTESTS_MANIFESTS** in `moz.build` files such as
+https://searchfox.org/mozilla-central/source/netwerk/test/moz.build#17
+
+If you launch `./mach perftest` without any parameters, you will get a full list
+of available tests, and you can pick and run one. Adding `--push-to-try` will
+run it on try.
+
+The framework loads a performance test and reads its metadata, which can be declared
+within the test. We have a parser that is currently able to recognize and load
+**xpcshell** tests and **browsertime** tests, and a runner for each one of those.
+
+But the framework can be extended to support more formats. We would like to add
+support for **jsshell** and any other format we have in m-c.
+
+A performance test is a script that perftest runs and that returns metrics we
+can use. Right now we consume those metrics directly in the console and
+also in perfherder, but other outputs could be added. For instance, a
+new **influxdb** output has been added to push the data into an **influxdb**
+time series database.
+
+What is important is to make sure each performance test lives alongside the component it
+is testing in the source tree. We've learned with Talos that grouping all performance
+tests in a single place is problematic because there's no sense of ownership from
+developers once a test is added there; it becomes the perf team's problem. If the tests
+stay in each component alongside mochitests and xpcshell tests, the component
+maintainers will own and maintain them.
+
+
+Next steps
+----------
+
+We want to rewrite all Talos and Raptor tests into perftest. For Raptor, we need
+the ability to use proxy records, which is a work in progress. From there,
+running a **raptor** test will be a simple, one-liner browsertime script.
+
+For Talos, we'll need to refactor the existing micro-benchmarks into xpcshell tests,
+and if that does not suffice, create a new runner.
+
+For JS benchmarks, once the **jsshell** runner is added to perftest, porting them will be
+straightforward.
+
+
diff --git a/python/mozperftest/perfdocs/writing.rst b/python/mozperftest/perfdocs/writing.rst
new file mode 100644
index 0000000000..232c17eedd
--- /dev/null
+++ b/python/mozperftest/perfdocs/writing.rst
@@ -0,0 +1,228 @@
+Performance scripts
+===================
+
+Performance scripts are programs that drive the browser to run a specific
+benchmark (like a page load or a lower level call) and produce metrics.
+
+We support three flavors right now in `perftest` (and it's easy to add
+new ones):
+
+- **xpcshell**: a classical xpcshell test, turned into a performance test
+- **browsertime**: a browsertime script, which runs a full browser and controls
+  it via a Selenium client
+- **mochitest**: a classical mochitest test, turned into a performance test
+
+In order to qualify as performance tests, all flavors require metadata.
+
+Since our supported flavors are all JavaScript-based, the metadata is
+provided in a `perfMetadata` mapping variable in the test module, or in
+the `module.exports` variable when using Node.
+
+This is the list of fields:
+
+- **owner**: name of the owner (person or team) [mandatory]
+- **author**: author of the test
+- **name**: name of the test [mandatory]
+- **description**: short description [mandatory]
+- **longDescription**: longer description
+- **options**: options used to run the test
+- **supportedBrowsers**: list of supported browsers (or "Any")
+- **supportedPlatforms**: list of supported platforms (or "Any")
+- **tags**: a list of tags that describe the test
+
+Tests are registered using test manifests and the **PERFTESTS_MANIFESTS**
+variable in `moz.build` files - it's good practice to name the manifest file
+`perftest.toml`.
+
+Example of such a file: https://searchfox.org/mozilla-central/source/testing/performance/perftest.toml
+
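+As a rough, hypothetical sketch (file contents and test names are illustrative), a
+manifest could look like this::
+
+    # perftest.toml (illustrative)
+    [DEFAULT]
+
+    ["perftest_example.js"]
+
+and the corresponding `moz.build` entry would register it::
+
+    PERFTESTS_MANIFESTS += ["perftest.toml"]
+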
+
+xpcshell
+--------
+
+`xpcshell` tests are plain xpcshell tests, with two more things:
+
+- the `perfMetadata` variable, as described in the previous section
+- calls to `info("perfMetrics", ...)` to send metrics to the `perftest` framework.
+
+Here's an example of such a metrics call::
+
+    // compute some speed metrics
+    let speed = 12345;
+    info("perfMetrics", JSON.stringify({ speed }));
+
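+A hedged sketch of what the `perfMetadata` variable could look like in an xpcshell
+test (the owner, name and metric names are illustrative)::
+
+    var perfMetadata = {
+      owner: "Example Team",
+      name: "example xpcshell perf test",
+      description: "Measures the speed of an example operation",
+      options: {
+        default: {
+          perfherder: true,
+          perfherder_metrics: [{ name: "speed", unit: "ms" }],
+        },
+      },
+    };
+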
+
+Mochitest
+---------
+
+Similar to ``xpcshell`` tests, these are standard ``mochitest`` tests with some extra things:
+
+- the ``perfMetadata`` variable, as described in the previous section
+- calls to ``info("perfMetrics", ...)`` to send metrics to the ``perftest`` framework
+
+Note that the ``perfMetadata`` variable can exist in any ``<script>...</script>`` element in the Mochitest HTML test file. In Mochitest tests it also needs a couple of additional settings: the ``manifest`` and ``manifest_flavor`` options::
+
+    var perfMetadata = {
+      owner: "Performance Team",
+      name: "Test test",
+      description: "N/A",
+      options: {
+        default: {
+          perfherder: true,
+          perfherder_metrics: [
+            { name: "Registration", unit: "ms" },
+          ],
+          manifest: "perftest.toml",
+          manifest_flavor: "plain",
+          extra_args: [
+            "headless",
+          ],
+        },
+      },
+    };
+
+The ``extra_args`` setting lets you pass custom Mochitest command-line arguments for this test.
+
+Here's an example of a call that will produce metrics::
+
+    // compute some speed metrics
+    let speed = 12345;
+    info("perfMetrics", JSON.stringify({ speed }));
+
+Existing Mochitest unit tests can be modified in this way to make them compatible with mozperftest, but note that some issues exist when doing this:
+
+- unittest issues with mochitest tests running on hardware
+- multiple configurations of a test running in a single manifest
+
+At the top of this document, you can find some information about the recommended approach for adding a new manifest dedicated to running performance tests.
+
+Locally, mozperftest uses ``./mach test`` to run your test. Always ensure that your test works in ``./mach test`` before attempting to run it through ``./mach perftest``. In CI, we use a custom "remote" run that runs Mochitest directly, skipping ``./mach test``.
+
+If everything is setup correctly, running a performance test locally will be as simple as this::
+
+ ./mach perftest <path/to/my/mochitest-test.html>
+
+
+Browsertime
+-----------
+
+With the browsertime layer, performance scenarios are Node modules that
+implement at least one async function that will be called by the framework once
+the browser has started. The function gets a webdriver session and can interact
+with the browser.
+
+You can write complex, interactive scenarios to simulate a user journey,
+and collect various metrics.
+
+Full documentation is available `here <https://www.sitespeed.io/documentation/sitespeed.io/scripting/>`_
+
+The mozilla-central repository has a few performance test scripts in
+`testing/performance`, and more should be added in components in the future.
+
+By convention, a performance test is prefixed with **perftest_** to be
+recognized by the `perftest` command.
+
+A performance test implements at least one async function published in node's
+`module.exports` as `test`. The function receives two objects:
+
+- **context**, which contains:
+
+  - **options** - all the options sent from the CLI to Browsertime
+  - **log** - an instance of the log system so you can log from your navigation script
+  - **index** - the index of the run, so you can keep track of which run you are currently on
+  - **storageManager** - the Browsertime storage manager that can help you read/store files to disk
+  - **selenium.webdriver** - the Selenium WebDriver public API object
+  - **selenium.driver** - the instantiated version of the WebDriver driving the current version of the browser
+
+- **commands**, which provides an API to interact with the browser. It's a wrapper
+  around the Selenium client (`full documentation here <https://www.sitespeed.io/documentation/sitespeed.io/scripting/#commands>`_).
+
+
+Below is an example of a test that visits the BBC homepage and clicks on a link.
+
+.. sourcecode:: javascript
+
+ "use strict";
+
+ async function setUp(context) {
+ context.log.info("setUp example!");
+ }
+
+ async function test(context, commands) {
+ await commands.navigate("https://www.bbc.com/");
+
+ // Wait for browser to settle
+ await commands.wait.byTime(10000);
+
+ // Start the measurement
+ await commands.measure.start("pageload");
+
+ // Click on the link and wait for page complete check to finish.
+ await commands.click.byClassNameAndWait("block-link__overlay-link");
+
+ // Stop and collect the measurement
+ await commands.measure.stop();
+ }
+
+ async function tearDown(context) {
+ context.log.info("tearDown example!");
+ }
+
+ module.exports = {
+ setUp,
+ test,
+ tearDown,
+ owner: "Performance Team",
+ test_name: "BBC",
+ description: "Measures pageload performance when clicking on a link from the bbc.com",
+ supportedBrowsers: "Any",
+ supportePlatforms: "Any",
+ };
+
+
+Besides the `test` function, scripts can implement a `setUp` and a `tearDown` function to run
+some code before and after the test. Those functions will be called just once, whereas
+the `test` function might be called several times (through the `iterations` option).
+
+
+Hooks
+-----
+
+A Python module can be used to run functions during a run lifecycle. Available hooks are:
+
+- **before_iterations(args)** runs before everything is started. Gets the args, which
+ can be changed. The **args** argument also contains a **virtualenv** variable that
+ can be used for installing Python packages (e.g. through `install_package <https://searchfox.org/mozilla-central/source/python/mozperftest/mozperftest/utils.py#115-144>`_).
+- **before_runs(env)** runs before the test is launched. Can be used to
+ change the running environment.
+- **after_runs(env)** runs after the test is done.
+- **on_exception(env, layer, exception)** called on any exception. Provides the
+  layer in which the exception occurred, and the exception. If the hook returns `True`,
+  the exception is ignored and the test resumes. If the hook returns `False`, the
+  exception is ignored and the test ends immediately. The hook can also re-raise the
+  exception or raise its own exception (see the sketch at the end of this section).
+
+In the example below, the `before_runs` hook sets the options on the fly,
+so users don't have to provide them on the command line::
+
+    from mozperftest.browser.browsertime import add_options
+
+    url = "'https://www.example.com'"
+
+    common_options = [
+        ("processStartTime", "true"),
+        ("firefox.disableBrowsertimeExtension", "true"),
+        ("firefox.android.intentArgument", "'-a'"),
+        ("firefox.android.intentArgument", "'android.intent.action.VIEW'"),
+        ("firefox.android.intentArgument", "'-d'"),
+        ("firefox.android.intentArgument", url),
+    ]
+
+
+    def before_runs(env, **kw):
+        add_options(env, common_options)
+
+
+To use this hook module, it can be passed to the `--hooks` option::
+
+ $ ./mach perftest --hooks hooks.py perftest_example.js
+
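+The same hooks module can also define an `on_exception` hook. A minimal, hypothetical
+sketch that simply logs the error and lets the test resume::
+
+    def on_exception(env, layer, exception):
+        # report which layer failed, then ignore the error and resume the test
+        print(f"layer {layer.name} raised {exception!r}")
+        return True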
+