author    Daniel Baumann <daniel.baumann@progress-linux.org>  2024-06-12 05:35:29 +0000
committer Daniel Baumann <daniel.baumann@progress-linux.org>  2024-06-12 05:35:29 +0000
commit 59203c63bb777a3bacec32fb8830fba33540e809 (patch)
tree   58298e711c0ff0575818c30485b44a2f21bf28a0 /testing/performance
parent Adding upstream version 126.0.1. (diff)
Adding upstream version 127.0. (tag: upstream/127.0)
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'testing/performance')
-rw-r--r-- testing/performance/mach-try-perf/perfdocs/index.rst             |  2
-rw-r--r-- testing/performance/mach-try-perf/perfdocs/standard-workflow.rst | 30
2 files changed, 30 insertions(+), 2 deletions(-)
diff --git a/testing/performance/mach-try-perf/perfdocs/index.rst b/testing/performance/mach-try-perf/perfdocs/index.rst
index a637788624..6fcd0e70f2 100644
--- a/testing/performance/mach-try-perf/perfdocs/index.rst
+++ b/testing/performance/mach-try-perf/perfdocs/index.rst
@@ -39,7 +39,7 @@ The tool is built to be conservative about the number of tests to run, so if you
at all relative to the existing GeckoView and Fenix
tasks, then you will need to make fixes in the
associated taskcluster files (e.g.
- taskcluster/ci/test/browsertime-mobile.yml).
+ taskcluster/kinds/test/browsertime-mobile.yml).
Alternatively, set MOZ_FIREFOX_ANDROID_APK_OUTPUT to a
path to an APK, and then run the command with
--browsertime-upload-apk firefox-android. This option
diff --git a/testing/performance/mach-try-perf/perfdocs/standard-workflow.rst b/testing/performance/mach-try-perf/perfdocs/standard-workflow.rst
index e874ecc7b8..5b1fd4c667 100644
--- a/testing/performance/mach-try-perf/perfdocs/standard-workflow.rst
+++ b/testing/performance/mach-try-perf/perfdocs/standard-workflow.rst
@@ -39,7 +39,35 @@ Some more information on this tool `can be found here <https://wiki.mozilla.org/
Understanding the Results
-------------------------
-In the image above, the **base**, and **new** columns show the average value across all tests. The number of data points used here can be changed by clicking the **Use replicates** button at the top-right of the table (only for try pushes). Alternatively, you can navigate to the try runs to retrigger the tests, or use the "refresh"-like button mentioned above. Hovering over the values in these columns will show you the spread of the data along with the standard deviation in percentage form.
+In the image above, the **base** and **new** columns show the average value across all tests. You can navigate to the try runs to retrigger the tests, or use the "refresh"-like button mentioned above (only visible while logged in). Hovering over the values in these columns will show you the spread of the data along with the standard deviation as a percentage. The number of data points used here can also be changed by clicking the **Use replicates** button at the top-right of the table (only available for try pushes). This makes the comparison use the individual trials/replicates from the Perfherder data instead of the summary values; only one summary value is generated per task, whereas multiple replicates can be generated per task. Here's an example of where this data comes from in the ``PERFHERDER_DATA`` JSON (output in the performance task logs, or in the ``perfherder-data.json`` artifact):
+
+.. code-block:: none
+
+ ... # This type of data can be found in any PERFHERDER_DATA output
+ "subtests": [
+ {
+ "alertThreshold": 2.0,
+ "lowerIsBetter": true,
+ "name": "Charts-chartjs/Draw opaque scatter/Async",
+ "replicates": [ # These are the trials/replicates (multiple per task)
+ 1.74,
+ 1.36,
+ 1.16,
+ 1.62,
+ 1.42,
+ 1.28,
+ 1.12,
+ 1.4,
+ 1.26,
+ 1.44,
+ 1.22,
+ 3.32
+ ],
+ "unit": "ms",
+ "value": 1.542 # This is the summary value of those replicates (only 1 per task)
+ },
+ ...
+
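+As an illustration, here is a minimal sketch of how the replicates and summary value could be read out of a downloaded ``perfherder-data.json`` file. This is not part of the mach-try-perf tooling itself, and it assumes the standard ``PERFHERDER_DATA`` layout with a top-level ``suites`` list; the ``subtests``, ``replicates``, and ``value`` keys are as shown above:
+
+.. code-block:: python
+
+    import json
+
+    # Load a perfherder-data.json artifact downloaded from a task.
+    with open("perfherder-data.json") as f:
+        data = json.load(f)
+
+    for suite in data["suites"]:
+        for subtest in suite.get("subtests", []):
+            # One summary value per task, but potentially many replicates.
+            replicates = subtest.get("replicates", [])
+            print(subtest["name"], subtest["value"], len(replicates))
+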
The **delta** column shows the difference between the two revisions' averages as a percentage. A negative value here means that the associated metric has decreased, and a positive value means it has increased.
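+To make the sign convention concrete, here is a minimal sketch of the percentage computation; it illustrates the convention described above and is not Perfherder's actual code:
+
+.. code-block:: python
+
+    def delta_percent(base_avg: float, new_avg: float) -> float:
+        """Percentage change from base to new; negative means the metric
+        decreased (an improvement when lower is better)."""
+        return (new_avg - base_avg) / base_avg * 100.0
+
+    # Example: base average 1.542 ms vs. new average 1.480 ms.
+    print(delta_percent(1.542, 1.480))  # roughly -4.02, i.e. a decrease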