| author | Daniel Baumann <daniel.baumann@progress-linux.org> | 2024-04-07 09:22:09 +0000 |
|---|---|---|
| committer | Daniel Baumann <daniel.baumann@progress-linux.org> | 2024-04-07 09:22:09 +0000 |
| commit | 43a97878ce14b72f0981164f87f2e35e14151312 (patch) | |
| tree | 620249daf56c0258faa40cbdcf9cfba06de2a846 /devtools/docs/contributor/tests/writing-perf-tests-tips.md | |
| parent | Initial commit. (diff) | |
Adding upstream version 110.0.1.
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'devtools/docs/contributor/tests/writing-perf-tests-tips.md')
-rw-r--r-- | devtools/docs/contributor/tests/writing-perf-tests-tips.md | 41 |
1 file changed, 41 insertions, 0 deletions
diff --git a/devtools/docs/contributor/tests/writing-perf-tests-tips.md b/devtools/docs/contributor/tests/writing-perf-tests-tips.md
new file mode 100644
index 0000000000..9cb300805d
--- /dev/null
+++ b/devtools/docs/contributor/tests/writing-perf-tests-tips.md
@@ -0,0 +1,41 @@

# How to write a good performance test?

## Verify that you wait for all asynchronous code

If your test involves asynchronous code, which is very likely given the DevTools codebase, review your test script carefully.
Ensure that _any_ code run directly or indirectly by your test has completed, not only the functions related to the specific feature you are trying to measure.

This prevents introducing noise into the test that runs after yours. Any asynchronous code still pending
is likely to run in parallel with the next test and increase its variance.
Noise in the tests makes it hard to detect small regressions.

You should typically wait for:
* All RDP requests to finish,
* All DOM events to fire,
* Redux actions to be dispatched,
* React updates,
* ...

A sketch of this waiting pattern is included at the end of this document.


## Ensure that the results change when regressing or fixing the code or feature you want to watch.

If you are writing the new test to cover a recent regression and you have a patch to fix it, push your test to try without _and_ with the regression fix.
Look at the try push and confirm that your fix significantly reduces the duration of your perf test.
If you are introducing a test without any patch that improves performance, try slowing down the code you are covering with artificial slowness: a `setTimeout` for asynchronous code, or a very slow `for` loop for synchronous code. This ensures that your test would catch a significant regression.

For our click performance test, we could do this from the inspector codebase:
```
window.addEventListener("click", function () {

  // This for loop fakes a hang and should increase the duration of our test
  for (let i = 0; i < 100000000; i++) {}

}, true); // pass `true` in order to execute before the test click listener
```
An asynchronous variant of this trick is sketched at the end of this document.


## Keep your test execution short.

Running performance tests is expensive. We currently run them 25 times for each changeset landed in Firefox.
Aim for each test to run in less than a second on try.
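
As an illustration of the first tip above (waiting for all asynchronous code), here is a minimal sketch of a measured step that waits for every piece of follow-up work before returning. This is not actual DevTools test code: `onDOMEvent` and `onReduxAction` are hypothetical placeholders for whatever waiting helpers your test suite provides, and `toolbox.selectTool("inspector")` simply stands in for the operation you want to measure.

```
// Minimal sketch (hypothetical helpers, not actual DevTools test utilities):
// measure one step, then wait for *all* pending follow-up work so that
// nothing keeps running while the next test starts.
async function measureToolSelection(toolbox) {
  // Start listening *before* triggering the action, so no event is missed.
  const onPanelSelected = onDOMEvent(toolbox.win, "select");          // hypothetical helper
  const onStoreUpdated = onReduxAction(toolbox.store, "PANEL_READY"); // hypothetical helper

  const start = performance.now();
  await toolbox.selectTool("inspector"); // the operation being measured

  // Wait for every piece of asynchronous work the action kicked off,
  // not only the promise returned by the feature under test.
  await Promise.all([onPanelSelected, onStoreUpdated]);

  return performance.now() - start;
}
```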
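
Similarly, for the second tip, the synchronous `for` loop shown above can be replaced by an artificial asynchronous delay when the code you want to cover is asynchronous. This is only a sketch: `panel.update` is an illustrative name for whatever method you would temporarily slow down.

```
// Hypothetical example: artificially delay an asynchronous code path to check
// that the perf test reports a significantly longer duration.
const originalUpdate = panel.update.bind(panel); // `panel.update` is illustrative
panel.update = async function (...args) {
  // Fake a 500ms hang before the real work runs.
  await new Promise(resolve => setTimeout(resolve, 500));
  return originalUpdate(...args);
};
```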