#########
Debugging
#########

.. contents::
   :depth: 2
   :local:

Debugging Desktop Product Failures
**********************************

As of now, there is no easy way to do this. Raptor was not built for debugging functional failures; hitting them in Raptor indicates that we lack functional test coverage, so regression tests should be added for those failures after they are fixed.

To debug a functional failure in Raptor you can follow these steps:

#. If `bug 1653617 <https://bugzilla.mozilla.org/show_bug.cgi?id=1653617>`_ has not landed yet, apply the patch.
#. Add the ``--verbose`` flag to the extra-options list `here <https://searchfox.org/mozilla-central/source/taskcluster/ci/test/raptor.yml#98-101>`__.
#. If the ``--setenv`` option doesn't exist yet (`bug 1494669 <https://bugzilla.mozilla.org/show_bug.cgi?id=1494669>`_), then add your ``MOZ_LOG`` environment variables `here <https://searchfox.org/mozilla-central/source/testing/raptor/raptor/webextension/desktop.py#42>`_ to get additional logging (see the example after this list).
#. If the option does exist, then you can add the ``MOZ_LOG`` variables to the `raptor.yml <https://searchfox.org/mozilla-central/source/taskcluster/ci/test/raptor.yml>`_ configuration file.
#. Push to try if you can't reproduce the failure locally.
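
For local runs you can typically skip the CI configuration changes and simply export ``MOZ_LOG`` in your shell instead. A minimal sketch (the log modules shown are only examples; substitute whichever modules are relevant to your failure):

::

  MOZ_LOG="nsHttp:5,timestamp" ./mach raptor --test raptor-tp6-amazon-firefox --verbose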

You can follow `bug 1655554 <https://bugzilla.mozilla.org/show_bug.cgi?id=1655554>`_ as we work on improving this workflow.

In some cases, you might not be able to get logging for what you are debugging (browser console logging, for instance). In this case, you should add your own debug prints with printf or something along those lines (`see :agi's debugging work for an example <https://matrix.to/#/!LfXZSWEroPFPMQcYmw:mozilla.org/$r_azj7OipkgDzQ75SCns2QIayp4260PIMHLWLApJJNg?via=mozilla.org&via=matrix.org&via=rduce.org>`_).

Debugging the Raptor Web Extension
**********************************

When developing on Raptor and debugging, there's often a need to look at the output coming from the `Raptor Web Extension <https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor>`_. Here are some pointers to help.

Raptor Debug Mode
-----------------

The easiest way to debug the Raptor web extension is to run the Raptor test locally and invoke debug mode, e.g. for Firefox:

::

  ./mach raptor --test raptor-tp6-amazon-firefox --debug-mode

Or on Chrome, for example:

::

  ./mach raptor --test raptor-tp6-amazon-chrome --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" --debug-mode

Running Raptor with debug mode will:

* Automatically set the number of test page-cycles to a maximum of 2
* Reduce the post-browser-startup delay from 30 seconds to 3 seconds
* On Firefox, automatically open the devtools browser console, where you can view all of the console log messages generated by the Raptor web extension
* On Chrome, automatically open the devtools console
* Leave the browser open after the Raptor test has finished; you will be prompted in the terminal to manually shut down the browser when you're finished debugging

Manual Debugging on Firefox Desktop
-----------------------------------

The main Raptor runner is '`runner.js <https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/runner.js>`_', which is inside the web extension. The code that actually captures the performance measures is in the web extension content code '`measure.js <https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/measure.js>`_'.

In order to retrieve the console.log() output from the Raptor runner, do the following:

#. Invoke Raptor locally via ``./mach raptor``
#. During the 30-second Raptor pause that occurs right after Firefox has started up, type "about:debugging" into the URL bar of the ALREADY OPEN current tab.
#. On the debugging page that appears, make sure "Add-ons" is selected on the left (default).
#. Turn ON the "Enable add-on debugging" check-box
#. Then scroll down the page until you see the Raptor web extension in the list of currently-loaded add-ons. Under "Raptor" click the blue "Debug" link.
#. A new window will open shortly; click the "Console" tab.

To retrieve the console.log() output from the Raptor content 'measure.js' code:

#. As soon as Raptor opens the new test tab (and the test starts running or the page starts loading), in Firefox just choose "Tools => Web Developer => Web Console", and select the "Console" tab.

Raptor automatically closes the test tab and the entire browser after test completion, which will close any open debug consoles. To have more time to review the console logs, Raptor can be temporarily hacked locally to prevent the test tab and browser from being closed. Currently this must be done manually, as follows:

#. In the Raptor web extension runner, comment out the line that closes the test tab in the test clean-up. That line of `code is here <https://searchfox.org/mozilla-central/rev/3c85ea2f8700ab17e38b82d77cd44644b4dae703/testing/raptor/webext/raptor/runner.js#357>`_.
#. Add a return statement at the top of the Raptor control server method that shuts down the browser; the browser shutdown `method is here <https://searchfox.org/mozilla-central/rev/924e3d96d81a40d2f0eec1db5f74fc6594337128/testing/raptor/raptor/control_server.py#120>`_.

For **benchmark-type tests** (e.g. speedometer, motionmark, etc.) Raptor doesn't inject 'measure.js' into the test page content; instead it injects '`benchmark-relay.js <https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/benchmark-relay.js>`_' into the benchmark test content. Benchmark-relay is as it sounds: it relays the test results coming from the benchmark test to the Raptor web extension runner. Viewing the console.log() output from benchmark-relay is done the same way as noted for the 'measure.js' content above.

Note: `Bug 1470450 <https://bugzilla.mozilla.org/show_bug.cgi?id=1470450>`_ is on file to add a debug mode to Raptor that will automatically grab the web extension console output and dump it to the terminal (if possible), which would make debugging much easier.

Debugging TP6 and Killing the Mitmproxy Server
----------------------------------------------

This section concerns debugging Raptor pageload tests that use Mitmproxy (e.g. tp6, gdocs). If Raptor doesn't finish naturally and doesn't stop the Mitmproxy tool, the next time you attempt to run Raptor it might fail with this error:

::

    INFO -  Error starting proxy server: OSError(48, 'Address already in use')
    INFO -  raptor-mitmproxy Aborting: mitmproxy playback process failed to start, poll returned: 1

That just means the Mitmproxy server was already running, so a new one couldn't start up. In this case, you need to kill the existing Mitmproxy server processes, e.g.:

::

    mozilla-unified rwood$ ps -ax | grep mitm
    5439 ttys000    0:00.09 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp
    440 ttys000    0:01.64 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp
    5509 ttys000    0:00.01 grep mitm

Then just kill the first mitm process in the list and that's sufficient:

::

    mozilla-unified rwood$ kill 5439

Now when you run Raptor again, the Mitmproxy server will be able to start.
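
Alternatively, if you don't need to inspect the process list first, killing all mitmdump processes by name should also work (``pkill -f`` matches against the full command line):

::

    pkill -f mitmdump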

Manual Debugging on Firefox Android
-----------------------------------

Be sure to first read the section above on how to debug the Raptor web extension when running on Firefox Desktop.

When running Raptor tests on Firefox on Android (i.e. geckoview), to see the console.log() output from the Raptor web extension, do the following:

#. With your Android device all set up and connected to USB, invoke the Raptor test normally via ``./mach raptor``
#. Start up a local copy of the Firefox Nightly Desktop browser
#. In Firefox Desktop choose "Tools => Web Developer => WebIDE"
#. In the Firefox WebIDE dialog that appears, look under "USB Devices" on the top right. If your device is not there, there may be a link to install remote device tools; if that link appears, click it and let the tools install.
#. Under "USB Devices" on the top right your Android device should be listed (e.g. "Firefox Custom on Android"); click on your device.
#. The debugger opens. On the left side click on "Main Process", then click the "Console" tab below; the Raptor runner output will appear there.
#. On the left side under "Tabs" you'll also see an option for the active tab/page; select that and the Raptor content console.log() output should be included there.

Also note: when debugging Raptor on Android, 'adb logcat' is very useful. More specifically for 'geckoview', the output (including that from Raptor) is prefixed with "GeckoConsole", so this command is very handy:

::

  adb logcat | grep GeckoConsole
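
If you want to first confirm that the device is visible to adb, or clear out stale log output before a new Raptor run, the standard adb commands below can help:

::

  adb devices
  adb logcat -c
  adb logcat | grep GeckoConsole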

Manual Debugging on Google Chrome
---------------------------------

Same as on Firefox desktop above, but use the Google Chrome console: View ==> Developer ==> Developer Tools.

Debugging local Python environment
**********************************

Sometimes your local system Python will not behave as expected with some of the performance test suites (like Raptor) due to how the virtual environment gets set up. The underlying reason for this is presently unclear; the issue seems to pop up most frequently on macOS and occasionally on Linux.

If this is the issue, the failure log will likely contain something like ``'/usr/local/lib/Python3' (no such file), '/usr/lib/Python3' (no such file)``.

If clobbering your environment and/or removing your ``obj-*`` directory does not work (see the commands below), consider trying an alternative method of managing your local Python environment, such as `pyenv <https://github.com/pyenv/pyenv>`_. There are some other alternatives `listed here <https://firefox-source-docs.mozilla.org/build/buildsystem/python.html#installing-python-manually>`_ as well.
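
For reference, clobbering and removing the object directory can be done roughly as follows (the exact ``obj-*`` directory name depends on your mozconfig):

::

    ./mach clobber
    rm -rf obj-*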

For example, if you choose to use pyenv, after following the `installation instructions <https://github.com/pyenv/pyenv#installation>`_ you can use it to install and manage multiple Python versions and easily switch back and forth between them. Pyenv uses shim executables to intercept your Python commands, providing a way to work around the system Python/mozilla-central virtual environment issues mentioned above. Further information on how it works can be `found here <https://github.com/pyenv/pyenv#how-it-works>`_. **Note:** you `may` have to re-install moz-phab after installing and switching to a new Python version through pyenv, but this is fairly simple to do.
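
As a rough sketch of the pyenv workflow (the Python version shown is only an example; pick whichever version you need):

::

    pyenv install 3.8.10
    pyenv global 3.8.10
    python --version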

If these suggestions do not work, reach out to #perftest on Element!