.. sectnum::

How to use Deckard
==================
.. contents::

Deckard runs one or more binaries in an isolated network which is described by a so-called *scenario*.
There are four components in play:

- Deckard itself (test orchestrator)
- binary under test (your own)
- configuration for the binary (generated by Deckard from a *template* and a *YAML configuration*, i.e. ``.j2`` and ``.yaml`` files)
- environment description and test data (a Deckard *scenario*, i.e. an ``.rpl`` file)

Running tests is easy if everything is already prepared; it gets harder
as the number of components you have to prepare yourself grows.

Let's start with the easiest case:

First run
---------
The easiest way to run Deckard is to use one of the prepared shell scripts in the Deckard repository (``{kresd,unbound,pdns}_run.sh`` for Knot Resolver, Unbound and PowerDNS Recursor respectively).

Please note that Deckard depends on a couple of modified C libraries.
These will be automatically downloaded and compiled on the first run, so do not be surprised when you see
output from Git and the C compiler:

.. code-block::

   $ ./kresd_run.sh
   Submodule 'contrib/libfaketime' (https://github.com/wolfcw/libfaketime.git) registered for path 'contrib/libfaketime'
   Submodule 'contrib/libswrap' (https://gitlab.labs.nic.cz/labs/socket_wrapper.git) registered for path 'contrib/libswrap'
      [...]
   -- The C compiler identification is GNU 6.3.1
      [...]
   [ 50%] Building C object src/CMakeFiles/socket_wrapper.dir/socket_wrapper.c.o
      [...]
   [100%] Built target socket_wrapper
   …

For details see `README <../README.rst>`_.

Deckard uses `pytest` to generate and run the tests, as well as to collect the results.
The output is therefore produced by `pytest` too (``.`` for a passed test, ``F`` for a failed test and ``s`` for a skipped test) and will look something like this:

.. code-block::

   $ ./kresd_run.sh
   ........s...s...s....................ssss...s.ss.............ssss..s..ss [ 24%]
   ssss.....sssssssssssssss.sssss.......ssssss.ss...s..s.ss.sss.s.s........ [ 49%]
   .............ssss....................................................... [ 73%]
   ........................................................................ [ 98%]
   ....                                                                     [100%]
   229 passed, 62 skipped in 76.50 seconds

.. note:: Many tests are skipped because each scenario is run with query minimization both on and off, and some scenarios work only with query minimization on (or off, respectively). For details see `Scenario guide#Configuration <scenario_guide.rst#configuration-config-end>`_.

          The elapsed time printed by `pytest` is often not accurate (it can even be negative): `pytest` is confused by the time-shifting shenanigans done with ``libfaketime``. This can be overcome by using the ``-n`` command line argument. See below.


Command line arguments
----------------------
As mentioned above, we use `pytest` to run the tests, so all possible command line arguments for the ``*_run.sh`` scripts can be seen by running ``pytest -h`` in the root of the Deckard repository.

Here is a list of the most useful ones:

- ``-n number`` – runs the testing in parallel with ``number`` of processes (this requires `pytest-xdist` to be installed)
- ``-k EXPRESSION`` – only run tests which match the given substring expression (e.g. ``./kresd_run.sh -k "world_"`` will only run the scenarios with `world_` in their file names)
- ``--collect-only`` – only print the names of selected tests; no tests will be run
- ``--log-level DEBUG`` – print all debug information for failed tests
- ``--scenarios path`` – specifies where to look for `.rpl` files (``sets/resolver`` is the default)

YAML configuration
------------------
All ``*_run.sh`` scripts internally call the ``run.sh`` script and pass command line arguments to it. For example:

.. code-block::

   # running ./kresd_run.sh -n 4 -k "iter_" will result in running
   ./run.sh --config configs/kresd.yaml  -n 4 -k "iter_"

As you can see, the path to a YAML configuration file is passed to ``run.sh``. You can edit one of the prepared files stored in `configs/` or write your own.

Commented contents of ``kresd.yaml`` follow:

.. code-block:: yaml

  programs:
  - name: kresd             # symbolic name of this instance
    binary: kresd           # path to the binary under test
    additional:             # list additional parameters for binary under test (e.g. path to configuration files)
      - -f
      - "1"                 # CAUTION: All parameters must be strings.
    templates:
      - template/kresd.j2   # list of Jinja2_ template files to generate configuration files
    configs:
      - config              # list of names of configuration files to be generated from Jinja2_ templates
  noclean: True             # optional, do not remove working dir after a successful test

- files listed in 'configs' will be generated from the respective files in the 'templates' list
- i.e. the first file in the 'configs' list is the result of processing the first file from the 'templates' list, and so on
- generated files are stored in a new working directory created by Deckard for each binary

Most often it is sufficient to use these files for basic configuration changes. Read the next section for details about config file templates.
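
The one-to-one pairing between 'templates' and 'configs' can be illustrated with a short Python sketch (the dictionary below just mirrors the YAML structure above; the pairing code is illustrative, not Deckard's actual internals):

.. code-block:: python

   # Illustrative sketch: the i-th template produces the i-th config file.
   program = {
       "name": "kresd",
       "templates": ["template/kresd.j2"],
       "configs": ["config"],
   }

   # Pair each template with the config file it generates.
   pairs = list(zip(program["templates"], program["configs"]))
   print(pairs)  # [('template/kresd.j2', 'config')]

This is why the 'templates' and 'configs' lists must have the same length and order.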

Running multiple binaries
^^^^^^^^^^^^^^^^^^^^^^^^^
You can specify multiple programs to run in the YAML configuration. Deckard executes all the binaries using parameters from the file. This is handy for testing interoperability of multiple binaries, e.g. when one program is configured as a DNS recursor and another program uses it as a forwarder.

The YAML file contains an **ordered** list of binaries and their parameters. Deckard will send queries to the binary listed first.

.. code-block:: yaml

  programs:
  - name: forwarding            # name of this Knot Resolver instance
    binary: kresd               # kresd is first so it will receive queries from Deckard
    additional: []
    templates:
      - template/kresd_fwd.j2   # this template uses variable IPADDRS['recursor']
    configs:
      - config
  - name: recursor              # name of this Unbound instance
    binary: unbound
    additional:
      - -d
      - -c
      - unbound.conf
    templates:
      - template/unbound.j2
      - template/hints_zone.j2  # this template uses variable ROOT_ADDR
    configs:
      - unbound.conf
      - hints.zone
      - ta.keys

In this setup it is necessary to configure one binary to contact the other. IP addresses assigned by Deckard at run-time are accessible using the ``IPADDRS`` `template variables`_ and the symbolic names assigned to the binaries in the YAML file. For example, the template ``kresd_fwd.j2`` can use the IP address of the binary named ``recursor`` like this:

.. code-block:: lua

   policy.add(policy.all(policy.FORWARD("{{IPADDRS['recursor']}}")))

When all preparations are finished, run Deckard using the following syntax:

.. code-block:: bash

   $ ./run.sh --config path/to/config.yaml

.. note:: You can run multiple configs in one test instance. Just be aware that ``--scenarios`` must be provided for each config:

.. code-block::

  # This will run scenarios from the `scenarios1` folder with configuration from `config1.yaml` and from `scenarios2` with `config2.yaml`, respectively.
  $ ./run.sh --config path/to/config1.yaml --scenarios path/to/scenarios1 --config path/to/config2.yaml --scenarios path/to/scenarios2




Using existing scenarios with a custom configuration template
--------------------------------------------------------------

In some cases it is necessary to modify existing template files or create new ones. Typically this is needed when:

- there are no templates for a particular binary (e.g. if you want to test a brand new program)
- an existing template hardcodes some configuration and you want to change it

Deckard uses the Jinja2_ templating engine (like Ansible or Salt) and supplies several variables that you can use in templates. For simplicity you can imagine that all occurrences of ``{{variable}}`` in a template are replaced with the value of the *variable*. See the Jinja2_ documentation for further details.

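The substitution idea can be sketched in a few lines of Python. This is only an illustration of the concept: Deckard uses the real Jinja2_ library, which additionally supports conditions, loops and filters.

.. code-block:: python

   import re

   def render(template: str, variables: dict) -> str:
       # Replace every {{variable}} occurrence with its value.
       return re.sub(
           r"\{\{\s*(\w+)\s*\}\}",
           lambda m: str(variables[m.group(1)]),
           template,
       )

   line = render("interface: {{SELF_ADDR}}", {"SELF_ADDR": "192.0.2.53"})
   print(line)  # interface: 192.0.2.53
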
Here is an example of a template for Unbound:

.. code-block:: jinja

   server:
	directory: ""                 # do not leave current working directory
	chroot: ""
	pidfile: ""
	username: ""

	interface: {{SELF_ADDR}}      # Deckard will assign an address
	interface-automatic: no
	access-control: ::0/0 allow   # accept queries from Deckard

	do-daemonize: no              # log to stdout & stderr
	use-syslog: no
	verbosity: 3                  # be verbose, it is handy for debugging
	val-log-level: 2
	log-queries: yes

	{% if QMIN == "false" %}      # Jinja2 condition
	qname-minimisation: no        # a constant inside condition
	{% else %}
	qname-minimisation: yes
	{% endif %}
	harden-glue: no               # hardcoded constant, use a variable instead!

	root-hints: "hints.zone"      # reference to other files in working directory
	trust-anchor-file: "ta.keys"  # use separate template to generate these

This configuration snippet refers to the files ``hints.zone`` and ``ta.keys``, which need to be generated as well. Each file uses its own template file. A template for ``hints.zone`` might look like this:

.. code-block:: jinja

   # this is hints file which directs resolver to query
   # fake root server simulated by Deckard
   .                        3600000      NS    K.ROOT-SERVERS.NET.
   # IP address version depends on scenario setting, handle IPv4 & IPv6
   {% if ':' in ROOT_ADDR %}
   K.ROOT-SERVERS.NET.      3600000      AAAA  {{ROOT_ADDR}}
   {% else %}
   K.ROOT-SERVERS.NET.      3600000      A     {{ROOT_ADDR}}
   {% endif %}

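The ``{% if ':' in ROOT_ADDR %}`` branch above relies on the fact that IPv6 addresses contain colons while IPv4 addresses do not. The same record-type choice expressed in Python (a sketch, not part of Deckard):

.. code-block:: python

   def hint_rrtype(root_addr: str) -> str:
       # IPv6 addresses always contain ':', IPv4 addresses never do.
       return "AAAA" if ":" in root_addr else "A"

   print(hint_rrtype("198.41.0.4"))   # A
   print(hint_rrtype("2001:7fd::1"))  # AAAA
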
Templates can use any of the following variables:

.. _`template variables`:

List of variables for templates
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Addresses:

- ``DAEMON_NAME``  - user-specified symbolic name of particular binary under test, e.g. ``recursor``
- ``IPADDRS``      - dictionary with ``{symbolic name: IP address}`` mapping

  - it is handy for cases where configuration for one binary under test has to refer to another binary under test

- ``ROOT_ADDR``    - fake root server hint (Deckard listens on this address; the port is not expressed and must be 53)

  - the IP version depends on the settings in the particular scenario
  - templates must handle both IPv4 and IPv6

- ``SELF_ADDR``    - address assigned to the binary under test (again, the port is not expressed and must be 53)

Path variables:

- ``INSTALL_DIR``  - path to directory containing file ``deckard.py``
- ``WORKING_DIR``  - working directory for binary under test, each binary gets its own directory

DNS specifics:

- ``DO_NOT_QUERY_LOCALHOST`` [bool]_ - allows or disallows querying local addresses
- ``HARDEN_GLUE``     [bool]_ - enables or disables additional checks on glue addresses
- ``QMIN``            [bool]_ - enables or disables query minimization
- ``TRUST_ANCHORS`` - list of trust anchors in the form of DS records, see the `scenario guide <scenario_guide.rst>`_
- ``NEGATIVE_TRUST_ANCHORS`` - list of domain names with explicitly disabled DNSSEC validation

.. [bool] boolean expressed as string ``true``/``false``

It's okay if you don't use all of the variables, but expect some tests to fail. E.g. if you don't set ``TRUST_ANCHORS``,
the DNSSEC tests will not work properly.
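
Note that the [bool]_ variables arrive in templates as the *strings* ``true``/``false``, which is why the Unbound template above compares ``QMIN == "false"`` rather than testing a real boolean. A hypothetical helper for converting such a value (illustrative only, not part of Deckard):

.. code-block:: python

   def as_bool(value: str) -> bool:
       # Deckard-style booleans are the strings "true" / "false".
       if value not in ("true", "false"):
           raise ValueError(f"expected 'true' or 'false', got {value!r}")
       return value == "true"

   print(as_bool("true"))   # True
   print(as_bool("false"))  # False
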


Debugging scenario execution
----------------------------
Output from a failed test looks like this:

.. code-block::

   $ ./kresd_run.sh
   =========================================== FAILURES ===========================================
   _____ test_passes_qmin_off[Scenario(path='sets/resolver/val_ta_sentinel.rpl', qmin=False)] _____
   [...]
   E    ValueError: val_ta_sentinel.rpl step 212 char position 15875, "rcode": expected 'SERVFAIL',
   E    got 'NOERROR' in the response:
   E    id 54873
   E    opcode QUERY
   E    rcode NOERROR
   E    flags QR RD RA AD
   E    edns 0
   E    payload 4096
   E    ;QUESTION
   E    _is-ta-bd19.test. IN A
   E    ;ANSWER
   E    _is-ta-bd19.test. 5 IN A 192.0.2.1
   E    ;AUTHORITY
   E    ;ADDITIONAL

   pydnstest/scenario.py:888: ValueError

In this example, test step ``212`` in scenario ``sets/resolver/val_ta_sentinel.rpl`` is failing with query minimisation off. The binary under test did not produce the expected answer, so either the test scenario or the binary is wrong. If we were debugging this example, we would have to open the file ``val_ta_sentinel.rpl`` at character position ``15875`` and use our brains :-).

Tips:

- details about scenario format are in `the scenario guide <scenario_guide.rst>`_
- network traffic from each binary is logged in PCAP format to a file in its working directory
- standard output and error from each binary are logged into a log file in its working directory
- the working directory can be explicitly specified in the environment variable ``SOCKET_WRAPPER_DIR``
- command line argument ``--log-level DEBUG`` forces extra verbose logging, including logs from all binaries and packets handled by Deckard


Writing your own scenarios
--------------------------
See `the scenario guide <scenario_guide.rst>`_.





.. _`Jinja2`: http://jinja.pocoo.org/