Developing in mozperftest
=========================

Architecture overview
---------------------

`mozperftest` implements a mach command that is a thin wrapper on top of
`runner.py`, which allows us to run the tool without having to go through
a mach call. Command arguments are prepared in `argparser.py` and then made
available to the runner.

The runner creates a `MachEnvironment` instance (see `environment.py`) and a
`Metadata` instance (see `metadata.py`). These two objects live for the whole
test run and are used to share data across all parts.

The runner then calls `MachEnvironment.run`, which is in charge of running the test.
The `MachEnvironment` instance runs a sequence of **layers**.

Layers are classes, each responsible for a single aspect of a performance test.
They are organized in three categories:

- **system**: anything that sets up and tears down resources or services
  on the system. Existing system layers: **android**, **proxy**
- **test**: layers that are in charge of running a test to collect metrics.
  Existing test layers: **browsertime** and **androidlog**
- **metrics**: layers that process the collected data and turn it into usable
  metrics. Existing metrics layers: **perfherder** and **console**

The `MachEnvironment` instance collects a series of layers for each category and
runs them sequentially.
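
Conceptually, the flow looks roughly like the simplified sketch below. The constructor
arguments and the `layers` attribute are illustrative; see `runner.py` and
`environment.py` for the actual implementation::

    # Simplified sketch -- not the real implementation.
    from mozperftest.environment import MachEnvironment
    from mozperftest.metadata import Metadata

    def run_test(mach_cmd, flavor, **kwargs):
        env = MachEnvironment(mach_cmd, flavor=flavor, **kwargs)
        metadata = Metadata(mach_cmd, env, flavor=flavor)
        # Layers from the system, test and metrics categories run in order;
        # each one receives the shared metadata and can enrich it.
        for layer in env.layers:  # attribute name is illustrative
            layer.setup()
            try:
                metadata = layer(metadata)
            finally:
                layer.teardown()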

The goal of this organization is to allow adding new performance test runners
built from a specific combination of layers. To avoid messy code,
we need to make sure that each layer represents a single aspect of the process
and is completely independent from the other layers (besides sharing data
through the common environment).

For instance, we could use `perftest` to run a C++ benchmark by implementing a
new **test** layer.


Layer
-----

A layer is a class that inherits from `mozperftest.layers.Layer` and implements
a few methods and class variables.

List of methods and variables:

- `name`: name of the layer (class variable, mandatory)
- `activated`: boolean that activates the layer by default (class variable, defaults to False)
- `user_exception`: will trigger the `on_exception` hook when an exception occurs
- `arguments`: dict describing the layer's command-line arguments. Each argument
  follows the `argparser` standard
- `run(self, metadata)`: called to execute the layer
- `setup(self)`: called when the layer is about to be executed
- `teardown(self)`: called when the layer is exiting

Example::

    import smtplib

    class EmailSender(Layer):
        """Sends an email with the results."""

        name = "email"
        activated = False

        arguments = {
            "recipient": {
                "type": str,
                "default": "tarek@mozilla.com",
                "help": "Recipient of the results",
            },
        }

        def setup(self):
            # SMTP host and port are illustrative.
            self.server = smtplib.SMTP("localhost", 25)

        def teardown(self):
            self.server.quit()

        def run(self, metadata):
            # Sender address is illustrative.
            self.server.sendmail(
                "perftest@example.com",
                self.get_arg("recipient"),
                str(metadata.results()),
            )
            return metadata


It can then be added to one of the top-level functions that are used to create the list
of layers for each category:

- **mozperftest.metrics.pick_metrics** for the metrics category
- **mozperftest.system.pick_system** for the system category
- **mozperftest.test.pick_browser** for the test category

The layer also needs to be added to the `get_layers` function of the corresponding
category. The `get_layers` functions are invoked when building the argument parser.
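
Registering the new layer might look like the following sketch. The module path and the
other layer names are illustrative; check the actual `get_layers` function of the
category you are extending::

    # mozperftest/metrics/__init__.py (illustrative location)
    from mozperftest.metrics.email import EmailSender

    def get_layers():
        # The other entries depend on the category's existing layers.
        return Perfherder, ConsoleOutput, EmailSender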

In our example, adding the `EmailSender` layer will add two new options:

- **--email** a flag to activate the layer
- **--email-recipient** an option to set the recipient
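
For instance, assuming the layer is registered, the new options could be used like
this (the test path is a placeholder)::

    % ./mach perftest --email --email-recipient qa@example.org <path to the test>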


Important layers
----------------

**mozperftest** can be used to run performance tests against browsers using the
**browsertime** test layer. It leverages the `browsertime.js
<https://www.sitespeed.io/documentation/browsertime/>`_ framework and provides
a full integration into Mozilla's build and CI systems.

Browsertime uses the Selenium WebDriver client to drive the browser, and
provides metrics to measure performance during a user journey.
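
For example, a browsertime-based test is run through the mach command with the path
to a test script (the path below is illustrative)::

    % ./mach perftest testing/performance/perftest_example.js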


Coding style
------------

For the coding style, we want to:

- Follow `PEP 257 <https://www.python.org/dev/peps/pep-0257/>`_ for docstrings
- Avoid complexity as much as possible
- Use modern Python 3 code (for instance `pathlib` instead of `os.path`)
- Avoid dependencies on Mozilla build projects and frameworks as much as possible
  (mozharness, mozbuild, etc.), or make sure they are isolated and documented
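
For example, preferring `pathlib` over `os.path` for path handling (the paths below
are illustrative)::

    from pathlib import Path

    output_dir = Path("artifacts", "perf")
    # Create the directory and any missing parents.
    output_dir.mkdir(parents=True, exist_ok=True)
    results_file = output_dir / "results.json"
    results_file.write_text("{}")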


Landing patches
---------------

.. warning::

   It is mandatory for each patch to have a test. Any change without a test
   will be rejected.

Before landing a patch for mozperftest, make sure you run `perftest-test`::

    % ./mach perftest-test
    => black [OK]
    => flake8 [OK]
    => remove old coverage data [OK]
    => running tests [OK]
    => coverage
    Name                                             Stmts   Miss  Cover   Missing
    ------------------------------------------------------------------------------------------
    mozperftest/metrics/notebook/analyzer.py         29      20     31%    26-36, 39-42, 45-51
    ...
    mozperftest/system/proxy.py                      37      0     100%
    ------------------------------------------------------------------------------------------
    TOTAL                                            1614    240    85%

    [OK]

The command will run `black`, `flake8` and also make sure that the test coverage has not regressed.

You can use the `-s` option to bypass flake8/black to speed up your workflow, but make
sure you do a full test run afterwards. You can also pass the name of a single test module.
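
For example, skipping the linters and running a single module might look like this
(the module path is illustrative)::

    % ./mach perftest-test -s mozperftest/tests/test_layers.py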