.. sectnum::

How to use Deckard
==================

.. contents::

Deckard runs one or more binaries in an isolated network which is described by a so-called *scenario*. There are four components in play:

- Deckard itself (test orchestrator)
- binary under test (your own)
- configuration for the binary (generated by Deckard from a *template* and a *YAML configuration*, i.e. ``.j2`` and ``.yaml`` files)
- environment description and test data (a Deckard *scenario*, i.e. an ``.rpl`` file)

Running tests is easy when everything is already prepared, and it gets harder as the number of components you have to prepare yourself grows. Let's start with the easiest case:

First run
---------

The easiest way to run Deckard is to use one of the prepared shell scripts in the Deckard repository (``{kresd,named,pdns,unbound}_run.sh`` for Knot Resolver, BIND, PowerDNS, and Unbound Recursor respectively).

Deckard uses `pytest` to generate and run the tests as well as to collect the results. Output is therefore generated by `pytest` as well (``.`` for a passed test, ``F`` for a failed test and ``s`` for a skipped test) and will look something like this:

.. code-block::

   $ ./kresd_run.sh
   deckard_pytest.py::test_passes_qmin_on[Scenario(path='sets/resolver/black_data.rpl', qmin=False, config={'programs': [{'name': 'kresd', 'binary': 'kresd', 'additional': ['-n'], 'templates': ['template/kresd.j2'], 'configs': ['config']}]})-max-retries-3] SKIPPED [  0%]
   […many lines later…]
   deckard_pytest.py::test_passes_qmin_off[Scenario(path='sets/resolver/world_mx_nic_www.rpl', qmin=None, config={'programs': [{'name': 'kresd', 'binary': 'kresd', 'additional': ['-n'], 'templates': ['template/kresd.j2'], 'configs': ['config']}]})-max-retries-3] PASSED [100%]
   ======================= 275 passed, 97 skipped in 316.61s (0:05:16) =======================

.. note:: Many tests are skipped because we run them with query minimization both on and off, and some of the scenarios work only with query minimization on (or off, respectively). For details see `Scenario guide#Configuration `_.

Command line arguments
----------------------

As mentioned above, we use `py.test` to run the tests, so all possible command line arguments for the ``*_run.sh`` scripts can be seen by running ``py.test -h`` in the root of the Deckard repository. Here is a list of the most useful ones:

- ``-n number`` – runs the tests in parallel with ``number`` processes (this requires `pytest-xdist` and `pytest-forked` to be installed)
- ``-k EXPRESSION`` – only run tests which match the given substring expression (e.g. ``./kresd_run.sh -k "world_"`` will only run the scenarios with `world_` in their file name)
- ``--collectonly`` – only print the names of the selected tests; no tests will be run
- ``--log-level DEBUG`` – print all debug information for failed tests
- ``--scenarios path`` – specifies where to look for `.rpl` files (``sets/resolver`` is the default)

YAML configuration
------------------

All ``*_run.sh`` scripts internally call the ``run.sh`` script and pass command line arguments to it. For example:

.. code-block::

   # running
   ./kresd_run.sh -n 4 -k "iter_"
   # will result in running
   ./run.sh --config configs/kresd.yaml -n 4 -k "iter_"

As you can see, the path to a YAML configuration file is passed to ``run.sh``. You can edit one of the prepared files stored in `configs/` or write your own. Commented contents of ``kresd.yaml`` follow:

.. code-block:: yaml

   programs:
   - name: kresd
     binary: kresd      # path to binary under test
     # list of additional parameters for the binary under test (e.g.
     # path to configuration files)
     additional:
       - --noninteractive
     conncheck: True    # wait until TCP port 53 accepts connections (enabled by default)
     templates:
       - template/kresd.j2   # list of Jinja2_ template files to generate configuration files
     configs:
       - config              # list of names of configuration files generated from Jinja2_ templates
     noclean: True      # optional, do not remove the working directory after a successful test

- files listed in ``configs`` will be generated from the respective files in the ``templates`` list, i.e. the first file in the ``configs`` list is the result of processing the first file from the ``templates`` list, and so on
- generated files are stored in a new working directory created by Deckard for each binary

Most often it is sufficient to use these files for basic configuration changes. Read the next section for details about config file templates.

Running multiple binaries
^^^^^^^^^^^^^^^^^^^^^^^^^

You can specify multiple programs to run in the YAML configuration. Deckard executes all the binaries using parameters from the file. This is handy for testing interoperability of multiple binaries, e.g. when one program is configured as a DNS recursor and another program uses it as a forwarder.

The YAML file contains an **ordered** list of binaries and their parameters. Deckard will send queries to the binary listed first.

.. code-block:: yaml

   programs:
   - name: forwarding            # name of this Knot Resolver instance
     binary: kresd               # kresd is first so it will receive queries from Deckard
     additional: []
     templates:
       - template/kresd_fwd.j2   # uses variable PROGRAMS['recursor']['address']
     configs:
       - config
   - name: recursor              # name of this Unbound instance
     binary: unbound
     additional:
       - -d
       - -c
       - unbound.conf
     templates:
       - template/unbound.j2
       - template/hints_zone.j2  # uses variable ROOT_ADDR
     configs:
       - unbound.conf
       - hints.zone
       - ta.keys

In this setup it is necessary to configure one binary to contact the other.
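The generation step described above (the first template produces the first config, and so on) boils down to substituting variables into each template. The following toy Python sketch illustrates only the substitution idea; the ``render`` helper is a stand-in for the real Jinja2 engine, which additionally supports conditions, loops and nested lookups such as ``PROGRAMS['recursor']['address']``, and the variable values here are made up:

```python
import re

def render(template_text, variables):
    """Toy stand-in for the Jinja2 engine: substitute {{NAME}} placeholders.

    The real engine also handles conditions, loops and nested lookups
    such as PROGRAMS['recursor']['address']; this sketch does not.
    """
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda match: str(variables[match.group(1)]),
                  template_text)

# Hypothetical value; Deckard assigns the real addresses at run-time.
variables = {"SELF_ADDR": "192.0.2.1"}
print(render("interface: {{SELF_ADDR}}", variables))  # -> interface: 192.0.2.1
```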
IP addresses assigned by Deckard at run-time are accessible using the ``PROGRAMS`` `template variables`_ and the symbolic names assigned to binaries in the YAML file. For example, the template ``kresd_fwd.j2`` can use the IP address of the binary named ``recursor`` like this:

.. code-block:: lua

   policy.add(policy.all(policy.FORWARD("{{PROGRAMS['recursor']['address']}}")))

When all preparations are finished, run Deckard using the following syntax:

.. code-block:: bash

   $ ./run.sh --config path/to/config.yaml

.. note:: You can run multiple configs in one test instance. Just be aware that ``--scenarios`` must be provided for each config.

   .. code-block::

      # This will run scenarios from the `scenarios1` folder with configuration from
      # `config1.yaml` and scenarios from `scenarios2` with `config2.yaml`, respectively.
      $ ./run.sh --config path/to/config1.yaml --scenarios path/to/scenarios1 --config path/to/config2.yaml --scenarios path/to/scenarios2

Using existing scenarios with a custom configuration template
--------------------------------------------------------------

In some cases it is necessary to modify or create new template files. Typically this is needed when:

- there are no templates for the particular binary (e.g. if you want to test a brand new program)
- an existing template hardcodes some configuration and you want to change it

Deckard uses the Jinja2_ templating engine (like Ansible or Salt) and supplies several variables that you can use in templates. For simplicity you can imagine that all occurrences of ``{{variable}}`` in a template are replaced with the value of the *variable*. See the Jinja2_ documentation for further details.

Here is an example of a template for Unbound:

.. code-block:: jinja

   server:
     directory: ""                 # do not leave the current working directory
     chroot: ""
     pidfile: ""
     username: ""
     interface: {{SELF_ADDR}}      # Deckard will assign an address
     interface-automatic: no
     access-control: ::0/0 allow   # accept queries from Deckard
     do-daemonize: no              # log to stdout & stderr
     use-syslog: no
     verbosity: 3                  # be verbose, it is handy for debugging
     val-log-level: 2
     log-queries: yes
   {% if QMIN == "false" %}        # Jinja2 condition
     qname-minimisation: no        # a constant inside the condition
   {% else %}
     qname-minimisation: yes
   {% endif %}
     harden-glue: no               # hardcoded constant, use a variable instead!
     root-hints: "hints.zone"      # reference to other files in the working directory
     trust-anchor-file: "ta.keys"  # use separate templates to generate these

This configuration snippet refers to the files ``hints.zone`` and ``ta.keys``, which need to be generated as well. Each file uses its own template file. A template for ``hints.zone`` might look like this:

.. code-block:: jinja

   # this is a hints file which directs the resolver to query
   # the fake root server simulated by Deckard
   .   3600000   NS   K.ROOT-SERVERS.NET.
   # the IP address version depends on scenario settings, handle IPv4 & IPv6
   {% if ':' in ROOT_ADDR %}
   K.ROOT-SERVERS.NET.   3600000   AAAA   {{ROOT_ADDR}}
   {% else %}
   K.ROOT-SERVERS.NET.   3600000   A      {{ROOT_ADDR}}
   {% endif %}

Templates can use any of the following variables:

.. _`template variables`:

List of variables for templates
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

- ``DAEMON_NAME`` - user-specified symbolic name of the particular binary under test, e.g.
  ``recursor``

Addresses:

- ``ROOT_ADDR`` - fake root server hint (an address declared in a RANGE)
- ``FORWARD_ADDR`` - IP address where the resolver should forward all queries (an address declared in a RANGE)
- ``SELF_ADDR`` - address assigned to the binary under test

  - the port is not expressed; it must be 53
  - the IP version depends on the settings in the particular scenario - templates must handle both IPv4 and IPv6

Path variables:

- ``INSTALL_DIR`` - path to the directory containing the file ``deckard.py``
- ``WORKING_DIR`` - working directory for the binary under test; each binary gets its own directory

DNS specifics:

- ``DO_NOT_QUERY_LOCALHOST`` [bool]_ - allows or disallows querying local addresses
- ``HARDEN_GLUE`` [bool]_ - enables or disables additional checks on glue addresses
- ``QMIN`` [bool]_ - enables or disables query minimization
- ``TRUST_ANCHORS`` - list of trust anchors in the form of DS records, see the `scenario guide `_
- ``NEGATIVE_TRUST_ANCHORS`` - list of domain names with explicitly disabled DNSSEC validation

Cross references:

- ``PROGRAMS`` - dictionary of dictionaries with parameters for each binary under test; handy for cases where the configuration for one binary under test has to refer to another binary under test, e.g. ``PROGRAMS['recursor']['address']`` and ``PROGRAMS['forwarder']['address']``

.. [bool] boolean expressed as the string ``true``/``false``

It's okay if you don't use all of the variables, but expect some tests to fail. E.g. if you don't set ``TRUST_ANCHORS``, the DNSSEC tests will not work properly.

Debugging scenario execution
----------------------------

Output from a failed test looks like this:

.. code-block::

   $ ./kresd_run.sh
   =========================================== FAILURES ===========================================
   _____ test_passes_qmin_off[Scenario(path='sets/resolver/val_ta_sentinel.rpl', qmin=False)] _____
   [...]
   E   ValueError: val_ta_sentinel.rpl step 212 char position 15875, "rcode": expected 'SERVFAIL',
   E   got 'NOERROR' in the response:
   E   id 54873
   E   opcode QUERY
   E   rcode NOERROR
   E   flags QR RD RA AD
   E   edns 0
   E   payload 4096
   E   ;QUESTION
   E   _is-ta-bd19.test. IN A
   E   ;ANSWER
   E   _is-ta-bd19.test. 5 IN A 192.0.2.1
   E   ;AUTHORITY
   E   ;ADDITIONAL

   pydnstest/scenario.py:888: ValueError

In this example, test step ``212`` in the scenario ``sets/resolver/val_ta_sentinel.rpl`` is failing with query minimisation off. The binary under test did not produce the expected answer, so either the test scenario or the binary is wrong. If we were debugging this example, we would have to open the file ``val_ta_sentinel.rpl`` at character position ``15875`` and use our brains :-).

Tips:

- details about the scenario format are in `the scenario guide `_
- network traffic from each binary is logged in PCAP format to a file in the working directory
- standard output and error from each binary are logged into a log file in the working directory
- the working directory can be explicitly specified in the environment variable ``DECKARD_DIR``
- the command line argument ``--log-level DEBUG`` forces extra verbose logging, including logs from all binaries and packets handled by Deckard
- the environment variable ``DECKARD_NOCLEAN`` instructs Deckard not to remove working directories after successful tests
- the environment variable ``DECKARD_WRAPPER`` is prepended to all commands to be executed; the intended usage is to run the binary under test with ``valgrind`` or ``rr record``

Writing own scenarios
---------------------

See `the scenario guide `_.

.. _`Jinja2`: http://jinja.pocoo.org/
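A final tip for template authors: the IPv4/IPv6 branch used in the ``hints.zone`` template above (the ``':' in ROOT_ADDR`` test) can be sanity-checked outside Deckard with plain Python. This is only an illustrative sketch; the ``hint_record`` helper is not part of Deckard:

```python
def hint_record(root_addr):
    """Build a root-hint record line, choosing AAAA vs A the same way the
    hints.zone template does (a ':' only appears in IPv6 addresses)."""
    rrtype = "AAAA" if ":" in root_addr else "A"
    return "K.ROOT-SERVERS.NET. 3600000 {} {}".format(rrtype, root_addr)

print(hint_record("192.0.2.53"))    # -> K.ROOT-SERVERS.NET. 3600000 A 192.0.2.53
print(hint_record("2001:db8::53"))  # -> K.ROOT-SERVERS.NET. 3600000 AAAA 2001:db8::53
```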