.. sectnum::

How to use Deckard
==================
.. contents::

Deckard runs one or more binaries in an isolated network which is described by a so-called *scenario*.
There are four components in play:

- Deckard itself (test orchestrator)
- binary under test (your own)
- configuration for the binary (generated by Deckard from a *template* and *YAML configuration*, i.e. ``.j2`` and ``.yaml`` files)
- environment description and test data (Deckard *scenario*, i.e. ``.rpl`` file)

Running tests is easy if everything is already prepared, and it gets harder
as the number of components you have to prepare yourself grows.

Let's start with the easiest case:

First run
---------
The easiest way to run Deckard is to use one of the prepared shell scripts in the Deckard repository (``{kresd,unbound,pdns}_run.sh`` for Knot Resolver, Unbound and PowerDNS Recursor respectively).

Please note that Deckard depends on a couple of modified C libraries.
These will be automatically downloaded and compiled on first run, so do not be surprised when you see
output from Git and the C compiler:

.. code-block:: console

    $ ./kresd_run.sh
    Submodule 'contrib/libfaketime' (https://github.com/wolfcw/libfaketime.git) registered for path 'contrib/libfaketime'
    Submodule 'contrib/libswrap' (https://gitlab.labs.nic.cz/labs/socket_wrapper.git) registered for path 'contrib/libswrap'
    [...]
    -- The C compiler identification is GNU 6.3.1
    [...]
    [ 50%] Building C object src/CMakeFiles/socket_wrapper.dir/socket_wrapper.c.o
    [...]
    [100%] Built target socket_wrapper
    …

For details see `README <../README.rst>`_.

Deckard uses `pytest` to generate and run the tests as well as to collect the results.
Output is therefore generated by `pytest` as well (``.`` for a passed test, ``F`` for a failed test and ``s`` for a skipped test) and will look something like this:

.. code-block:: console

    $ ./kresd_run.sh
    ........s...s...s....................ssss...s.ss.............ssss..s..ss [ 24%]
    ssss.....sssssssssssssss.sssss.......ssssss.ss...s..s.ss.sss.s.s........ [ 49%]
    .............ssss....................................................... [ 73%]
    ........................................................................ [ 98%]
    .... [100%]
    229 passed, 62 skipped in 76.50 seconds

.. note:: A lot of tests are skipped because we run them with query minimization both on and off, and some of the scenarios work only with query minimization on (or off, respectively). For details see `Scenario guide#Configuration <scenario_guide.rst#configuration-config-end>`_.

   The elapsed time printed by `pytest` is often not accurate (or even negative). `pytest` is confused by the time-shifting shenanigans we do with ``libfaketime``. This can be overcome by using the ``-n`` command line argument. See below.


Command line arguments
----------------------
As mentioned above, we use `pytest` to run the tests, so all possible command line arguments for the ``*_run.sh`` scripts can be seen by running ``py.test -h`` in the root of the Deckard repository.

Here is a list of the most useful ones:

- ``-n number`` – runs the tests in parallel with ``number`` of processes (this requires `pytest-xdist` to be installed)
- ``-k EXPRESSION`` – only run tests which match the given substring expression (e.g. ``./kresd_run.sh -k "world_"`` will only run the scenarios with `world_` in their file name)
- ``--collectonly`` – only print the names of selected tests; no tests will be run
- ``--log-level DEBUG`` – print all debug information for failed tests
- ``--scenarios path`` – specifies where to look for `.rpl` files (``sets/resolver`` is the default)

YAML configuration
------------------
All ``*_run.sh`` scripts internally call the ``run.sh`` script and pass command line arguments to it. For example:

.. code-block:: console

    # running ./kresd_run.sh -n 4 -k "iter_" will result in running
    ./run.sh --config configs/kresd.yaml -n 4 -k "iter_"

As you can see, the path to a YAML configuration file is passed to ``run.sh``. You can edit one of the prepared files stored in `configs/` or write your own.

Commented contents of ``kresd.yaml`` follow:

.. code-block:: yaml

    programs:
    - name: kresd          # symbolic name of this instance
      binary: kresd        # path to the binary under test
      additional:          # list of additional parameters for the binary under test (e.g. path to configuration files)
      - -f
      - "1"                # CAUTION: All parameters must be strings.
      templates:
      - template/kresd.j2  # list of Jinja2_ template files used to generate configuration files
      configs:
      - config             # list of names of configuration files to be generated from Jinja2_ templates
    noclean: True          # optional, do not remove the working directory after a successful test

- ``configs`` files will be generated from the respective files in the ``templates`` list
- i.e. the first file in the ``configs`` list is the result of processing the first file from the ``templates`` list, and so on
- generated files are stored in a new working directory created by Deckard for each binary

Most often it is sufficient to edit these files for basic configuration changes. Read the next section for details about configuration file templates.

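The positional pairing between the ``templates`` and ``configs`` lists can be sketched in Python. This is an illustrative sketch only, not Deckard's actual code; ``render`` here is a stand-in for the real Jinja2 rendering step:

.. code-block:: python

    # Illustrative sketch (NOT Deckard's actual code): templates and configs
    # are paired by position, like zip() over the two lists.
    templates = ["template/kresd.j2"]
    configs = ["config"]

    def render(template_path, variables):
        # Stand-in for Jinja2 rendering; real Deckard loads the template file
        # and renders it with the supplied template variables.
        return "rendered %s with %s" % (template_path, sorted(variables))

    # The first template produces the first config, the second the second, etc.
    for template_path, config_name in zip(templates, configs):
        content = render(template_path, {"SELF_ADDR": "192.0.2.53"})
        print(config_name, "<-", template_path)
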
Running multiple binaries
^^^^^^^^^^^^^^^^^^^^^^^^^
You can specify multiple programs to run in the YAML configuration. Deckard executes all binaries using the parameters from the file. This is handy for testing interoperability of multiple binaries, e.g. when one program is configured as a DNS recursor and another program uses it as a forwarder.

The YAML file contains an **ordered** list of binaries and their parameters. Deckard will send queries to the binary listed first.

.. code-block:: yaml

    programs:
    - name: forwarding           # name of this Knot Resolver instance
      binary: kresd              # kresd is first, so it will receive queries from Deckard
      additional: []
      templates:
      - template/kresd_fwd.j2    # this template uses the variable IPADDRS['recursor']
      configs:
      - config
    - name: recursor             # name of this Unbound instance
      binary: unbound
      additional:
      - -d
      - -c
      - unbound.conf
      templates:
      - template/unbound.j2
      - template/hints_zone.j2   # this template uses the variable ROOT_ADDR
      configs:
      - unbound.conf
      - hints.zone
      - ta.keys

In this setup it is necessary to configure one binary to contact the other. IP addresses assigned by Deckard at run-time are accessible using the ``IPADDRS`` `template variables`_ and the symbolic names assigned to binaries in the YAML file. For example, the template ``kresd_fwd.j2`` can use the IP address of the binary named ``recursor`` like this:

.. code-block:: lua

    policy.add(policy.all(policy.FORWARD("{{IPADDRS['recursor']}}")))

When all preparations are finished, run Deckard using the following syntax:

.. code-block:: bash

    $ ./run.sh --config path/to/config.yaml

.. note:: You can run multiple configs in one test instance. Just be aware that ``--scenarios`` must be provided for each config.

.. code-block:: console

    # This will run scenarios from the `scenarios1` folder with configuration from `config1.yaml`
    # and scenarios from `scenarios2` with `config2.yaml`, respectively.
    $ ./run.sh --config path/to/config1.yaml --scenarios path/to/scenarios1 --config path/to/config2.yaml --scenarios path/to/scenarios2


Using existing scenarios with a custom configuration template
-------------------------------------------------------------

In some cases it is necessary to modify existing template files or create new ones. Typically this is needed when:

- there are no templates for a particular binary (e.g. if you want to test a brand new program)
- an existing template hardcodes some configuration and you want to change it

Deckard uses the Jinja2_ templating engine (like Ansible or Salt) and supplies several variables that you can use in templates. For simplicity, you can imagine that all occurrences of ``{{variable}}`` in a template are replaced with the value of the *variable*. See the Jinja2_ documentation for further details.

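For illustration only (Deckard uses the real Jinja2 engine, not this code), the basic ``{{variable}}`` substitution can be approximated in a few lines of Python:

.. code-block:: python

    import re

    def substitute(template, variables):
        """Toy stand-in for Jinja2: replace every {{name}} with its value."""
        return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                      lambda match: str(variables[match.group(1)]),
                      template)

    print(substitute("interface: {{SELF_ADDR}}", {"SELF_ADDR": "192.0.2.53"}))
    # prints: interface: 192.0.2.53

Real Jinja2 additionally supports conditions (``{% if %}``), loops and filters, which the templates below rely on.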
Here is an example of a template for Unbound:

.. code-block:: jinja

    server:
        directory: ""            # do not leave the current working directory
        chroot: ""
        pidfile: ""
        username: ""

        interface: {{SELF_ADDR}} # Deckard will assign an address
        interface-automatic: no
        access-control: ::0/0 allow  # accept queries from Deckard

        do-daemonize: no         # log to stdout & stderr
        use-syslog: no
        verbosity: 3             # be verbose, it is handy for debugging
        val-log-level: 2
        log-queries: yes

        {% if QMIN == "false" %} # Jinja2 condition
        qname-minimisation: no   # a constant inside a condition
        {% else %}
        qname-minimisation: yes
        {% endif %}
        harden-glue: no          # hardcoded constant, use a variable instead!

        root-hints: "hints.zone"     # reference to other files in the working directory
        trust-anchor-file: "ta.keys" # use separate templates to generate these

This configuration snippet refers to the files ``hints.zone`` and ``ta.keys``, which need to be generated as well. Each file uses its own template file. A template for ``hints.zone`` might look like this:

.. code-block:: jinja

    # this is a hints file which directs the resolver to query
    # the fake root server simulated by Deckard
    .   3600000   NS   K.ROOT-SERVERS.NET.
    # the IP address version depends on the scenario setting; handle both IPv4 & IPv6
    {% if ':' in ROOT_ADDR %}
    K.ROOT-SERVERS.NET.   3600000   AAAA   {{ROOT_ADDR}}
    {% else %}
    K.ROOT-SERVERS.NET.   3600000   A   {{ROOT_ADDR}}
    {% endif %}

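The template's IPv4/IPv6 branching boils down to a single test: an IPv6 address literal contains a colon. The same decision can be sketched in Python (illustrative only, not Deckard's code):

.. code-block:: python

    def hint_record(root_addr):
        """Pick the record type the same way the template's condition does:
        an IPv6 address literal contains ':'."""
        rrtype = "AAAA" if ":" in root_addr else "A"
        return "K.ROOT-SERVERS.NET. 3600000 %s %s" % (rrtype, root_addr)

    print(hint_record("192.0.2.1"))    # A record for an IPv4 root address
    print(hint_record("2001:db8::1"))  # AAAA record for an IPv6 root address
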
Templates can use any of the following variables:

.. _`template variables`:

List of variables for templates
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Addresses:

- ``DAEMON_NAME`` - user-specified symbolic name of the particular binary under test, e.g. ``recursor``
- ``IPADDRS`` - dictionary with a ``{symbolic name: IP address}`` mapping

  - it is handy for cases where the configuration for one binary under test has to refer to another binary under test

- ``ROOT_ADDR`` - fake root server hint (Deckard is listening here; the port is not expressed and must be 53)

  - the IP version depends on settings in the particular scenario
  - templates must handle both IPv4 and IPv6

- ``SELF_ADDR`` - address assigned to the binary under test (the port is not expressed and must be 53)

Path variables:

- ``INSTALL_DIR`` - path to the directory containing the file ``deckard.py``
- ``WORKING_DIR`` - working directory for the binary under test; each binary gets its own directory

DNS specifics:

- ``DO_NOT_QUERY_LOCALHOST`` [bool]_ - allows or disallows querying local addresses
- ``HARDEN_GLUE`` [bool]_ - enables or disables additional checks on glue addresses
- ``QMIN`` [bool]_ - enables or disables query minimization
- ``TRUST_ANCHORS`` - list of trust anchors in the form of DS records, see the `scenario guide <scenario_guide.rst>`_
- ``NEGATIVE_TRUST_ANCHORS`` - list of domain names with explicitly disabled DNSSEC validation

.. [bool] boolean expressed as the string ``true``/``false``

It's okay if you don't use all of the variables, but expect some tests to fail. E.g. if you don't set ``TRUST_ANCHORS``,
the DNSSEC tests will not work properly.

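Because the boolean variables are the strings ``true``/``false`` rather than native booleans, templates must compare against the string, as the Unbound template above does with ``{% if QMIN == "false" %}``. A small Python sketch of the pitfall (illustrative only):

.. code-block:: python

    # The boolean template variables are the strings "true"/"false", not real
    # booleans. A non-empty string like "false" is truthy, so a naive
    # truthiness test would always take the "enabled" branch.
    def qmin_directive(qmin):
        # Mirrors the template's {% if QMIN == "false" %} condition.
        return "qname-minimisation: " + ("no" if qmin == "false" else "yes")

    print(qmin_directive("false"))  # qname-minimisation: no
    print(qmin_directive("true"))   # qname-minimisation: yes
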
Debugging scenario execution
----------------------------
Output from a failed test looks like this:

.. code-block:: console

    $ ./kresd_run.sh
    =========================================== FAILURES ===========================================
    _____ test_passes_qmin_off[Scenario(path='sets/resolver/val_ta_sentinel.rpl', qmin=False)] _____
    [...]
    E   ValueError: val_ta_sentinel.rpl step 212 char position 15875, "rcode": expected 'SERVFAIL',
    E   got 'NOERROR' in the response:
    E   id 54873
    E   opcode QUERY
    E   rcode NOERROR
    E   flags QR RD RA AD
    E   edns 0
    E   payload 4096
    E   ;QUESTION
    E   _is-ta-bd19.test. IN A
    E   ;ANSWER
    E   _is-ta-bd19.test. 5 IN A 192.0.2.1
    E   ;AUTHORITY
    E   ;ADDITIONAL

    pydnstest/scenario.py:888: ValueError

In this example, test step ``212`` in the scenario ``sets/resolver/val_ta_sentinel.rpl`` is failing with query minimisation off. The binary under test did not produce the expected answer, so either the test scenario or the binary is wrong. If we were debugging this example, we would open the file ``val_ta_sentinel.rpl`` at character position ``15875`` and use our brains :-).

Tips:

- details about the scenario format are in `the scenario guide <scenario_guide.rst>`_
- network traffic from each binary is logged in PCAP format to a file in the working directory
- standard output and error from each binary are logged into a log file in the working directory
- the working directory can be explicitly specified in the environment variable ``SOCKET_WRAPPER_DIR``
- the command line argument ``--log-level DEBUG`` forces extra verbose logging, including logs from all binaries and packets handled by Deckard


Writing your own scenarios
--------------------------
See `the scenario guide <scenario_guide.rst>`_.


.. _`Jinja2`: http://jinja.pocoo.org/