Copyright (C) Internet Systems Consortium, Inc. ("ISC")

SPDX-License-Identifier: MPL-2.0

This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0.  If a copy of the MPL was not distributed with this
file, you can obtain one at https://mozilla.org/MPL/2.0/.

See the COPYRIGHT file distributed with this work for additional
information regarding copyright ownership.

Introduction
===

This directory holds a simple test environment for running bind9 system
tests involving multiple name servers.

With the exception of "common" (which holds configuration information
common to multiple tests), each directory holds a set of scripts and
configuration files to test different parts of BIND.  The directories
are named for the aspect of BIND they test, for example:

    dnssec/     DNSSEC tests
    forward/    Forwarding tests
    glue/       Glue handling tests

etc.

Typically each set of tests sets up 2-5 name servers and then performs
one or more tests against them.  Within the test subdirectory, each
name server has a separate subdirectory containing its configuration
data.  These subdirectories are named "nsN" or "ansN" (where N is a
number between 1 and 8, e.g. ns1, ans2, etc.)

The tests are completely self-contained and do not require access to
the real DNS.  Generally, one of the test servers (usually ns1) is set
up as a root nameserver and is listed in the hints file of the others.

Preparing to Run the Tests
===

To enable all servers to run on the same machine, they bind to separate
virtual IP addresses on the loopback interface: ns1 runs on 10.53.0.1,
ns2 on 10.53.0.2, etc.  Before running any tests, you must set up these
addresses by running the command

    sh ifconfig.sh up

as root.  The interfaces can be removed by executing the command:

    sh ifconfig.sh down

... also as root.

The servers use unprivileged ports (above 1024) instead of the usual
port 53, so they can be run without root privileges once the interfaces
have been set up.
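The address convention above (ns1 on 10.53.0.1 through ns8 on
10.53.0.8) can be enumerated with a quick sketch like the following,
e.g. when sanity-checking the configured loopback aliases with
"ifconfig -a" or "ip addr" (the variable names are illustrative, not
part of the framework):

```shell
# Print the virtual addresses that the test servers expect; after
# "sh ifconfig.sh up" each should be present on the loopback interface.
addrs=""
for n in 1 2 3 4 5 6 7 8; do
    addrs="$addrs 10.53.0.$n"
done
echo "test server addresses:$addrs"
```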
Note for MacOS Users
---

If you wish to make the interfaces survive across reboots, copy
org.isc.bind.system and org.isc.bind.system.plist to
/Library/LaunchDaemons, then run

    launchctl load /Library/LaunchDaemons/org.isc.bind.system.plist

... as root.

Running the System Tests with pytest
===

The pytest system test runner is currently in development, but it is
the recommended way to run tests.  Please report issues to QA.

Running an Individual Test
---

    pytest -k <test-name>

Note that in comparison to the legacy test runner, some additional
tests might be picked up when specifying just the system test directory
name.  To check which tests will be executed, use the `--collect-only`
option.  You might also be able to find a more specific test name to
provide, to ensure only your desired test is executed.  See the help
for the `-k` option in `pytest --help` for more information.

It is also possible to run a single individual pytest test case.  For
example, you can use the name test_sslyze_dot to execute just the
test_sslyze_dot() function from doth/tests_sslyze.py.  All of the
needed setup and teardown is handled by the framework.

Running All the System Tests
---

Issuing the plain `pytest` command without any arguments will execute
all tests sequentially.  To execute them in parallel, ensure you have
pytest-xdist installed and run:

    pytest -n <numproc>

Running the System Tests Using the Legacy Runner
===

!!! WARNING !!!
---

The legacy way to run system tests is currently being reworked into the
pytest system test runner described in the previous section.  The
contents of this section might be out of date and no longer applicable.
Please use the pytest runner if possible, and report issues and missing
features.

Running an Individual Test
---

The tests can be run individually using the following command:

    sh legacy.run.sh [flags] <test-name> [<test-arguments>]

e.g.

    sh legacy.run.sh [flags] notify

Optional flags are:

  -k    Keep servers running after the test completes.
        Each test usually starts a number of nameservers, either
        instances of the "named" being tested, or custom servers
        (written in Python or Perl) that feature test-specific
        behavior.  The servers are automatically started before the
        test is run and stopped after it ends.  This flag leaves them
        running at the end of the test, so that additional queries can
        be sent by hand.  To stop the servers afterwards, use the
        command "sh stop.sh <test-name>".

  -n    Noclean - do not remove the output files if the test completes
        successfully.  By default, files created by the test are
        deleted if it passes; they are not deleted if the test fails.

  -p    Sets the range of ports used by the test.  A block of 100
        ports is available for each test, the number given to the "-p"
        switch being the start of that block (e.g. "-p 7900" means
        that the test is able to use ports 7900 through 7999).  If not
        specified, the test will have ports 5000 to 5099 available to
        it.

Arguments are:

  test-name
        Mandatory.  The name of the test, which is the name of the
        subdirectory in bin/tests/system holding the test files.

  test-arguments
        Optional arguments that are passed to each of the test's
        scripts.

Running All The System Tests
---

To run all the system tests, enter the command:

    sh runall.sh [-c] [-n] [numproc]

The optional flag "-c" forces colored output (by default, system test
output is not printed in color because legacy.run.sh is piped through
"tee").

The optional flag "-n" has the same effect as it does for
"legacy.run.sh": it causes the retention of all output files from all
tests.

The optional "numproc" argument specifies the maximum number of tests
that can run in parallel.  The default is 1, which means that all of
the tests run sequentially.  If greater than 1, up to "numproc" tests
will run simultaneously, with new tests being started as others
finish.  Each test gets a unique set of ports, so there is no danger
of tests interfering with one another.
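The 100-port block given to each test by the "-p" switch can be
sketched with a little shell arithmetic (the variable names below are
illustrative only, not names used by the framework):

```shell
# "-p" supplies the start of a block of 100 ports; the test may use
# any port from the start value through start+99.
base=7900                       # value passed to "-p"
lowport=$base
highport=$((base + 99))
echo "ports $lowport through $highport available to the test"
```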
Parallel running will reduce the total time taken to run the BIND
system tests, but it means that the output from all the tests sent to
the screen will be intermixed.  However, the systests.output file
produced at the end of the run (in the bin/tests/system directory)
contains the output from each test in sequential order.

Note that it is not possible to pass arguments to tests through the
"runall.sh" script.

A run of all the system tests can also be initiated via make:

    make [-j numproc] test

In this case, retention of the output files after a test completes
successfully is specified by setting the environment variable
SYSTEMTEST_NO_CLEAN to 1 prior to running make, e.g.

    SYSTEMTEST_NO_CLEAN=1 make [-j numproc] test

while setting the environment variable SYSTEMTEST_FORCE_COLOR to 1
forces system test output to be printed in color.

Running Multiple System Test Suites Simultaneously
---

In some cases it may be desirable to have multiple instances of the
system test suite running simultaneously (e.g. from different terminal
windows).  To do this:

1. Each installation must have its own directory tree.  The system
   tests create files in the test directories, so separate directory
   trees are required to avoid interference between the same test
   running in the different installations.

2. For one of the test suites, the starting port number must be
   specified by setting the environment variable STARTPORT before
   starting the test suite.  Each test suite comprises about 100
   tests, each being allocated a block of 100 ports.  The port ranges
   for each test are allocated sequentially, so each test suite
   requires about 10,000 ports to itself.  By default, the port
   allocation starts at 5,000.

So the following set of commands:

    Terminal Window 1:
    cd <installation-1>/bin/tests/system
    sh runall.sh 4

    Terminal Window 2:
    cd <installation-2>/bin/tests/system
    STARTPORT=20000 sh runall.sh 4

...
will start the test suite for installation-1 using the default base
port of 5,000, so that test suite will use ports 5,000 through 15,000
(or thereabouts).  The use of "STARTPORT=20000" to prefix the run of
the test suite for installation-2 means that that test suite uses
ports 20,000 through 30,000 or so.

Format of Test Output
---

All output from the system tests is in the form of lines with the
following structure:

    <letter>:<test-name>:<message> [(<number>)]

e.g.

    I:catz:checking that dom1.example is not served by primary (1)

The meanings of the fields are as follows:

<letter>
    This indicates the type of message.  It is one of:

    S   Start of the test
    A   Start of test (retained for backwards compatibility)
    T   Start of test (retained for backwards compatibility)
    E   End of the test
    I   Information.  A test will typically output many of these
        messages during its run, indicating test progress.  Note that
        such a message may be of the form "I:testname:failed",
        indicating that a sub-test has failed.
    R   Result.  Each test will produce one such message, which is of
        the form:

            R:<test-name>:<result>

        where <result> is one of:

        PASS     The test passed
        FAIL     The test failed
        SKIPPED  The test was not run, usually because some
                 prerequisites required to run the test are missing.

<test-name>
    This is the name of the test from which the message emanated,
    which is also the name of the subdirectory holding the test files.

<message>
    This is text output by the test during its execution.

(<number>)
    If present, this correlates with a file created by the test.  The
    tests execute commands and route the output of each command to a
    file.  The name of this file depends on the command and the test,
    but will usually be of the form:

        <command>.out.<suffix>

    e.g. nsupdate.out.test28, dig.out.q3.  This aids diagnosis of
    problems by allowing the output that caused the problem message to
    be identified.

Re-Running the Tests
---

If there is a requirement to re-run a test (or the entire test suite),
the files produced by the tests should be deleted first.  Normally,
these files are deleted if the test succeeds but are retained on
error.
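When deciding which tests need re-running, the "R:" result lines
described above can be summarized with standard tools.  A sketch,
using a small hand-made sample file rather than a real run's
systests.output:

```shell
# Tally results by their "R:<test-name>:<result>" lines.  The sample
# file below is illustrative; a real run's output would be read from
# bin/tests/system/systests.output instead.
cat > sample.output <<'EOF'
I:catz:checking that dom1.example is not served by primary (1)
R:catz:PASS
I:notify:checking notifies are sent (1)
R:notify:FAIL
R:dnssec:SKIPPED
EOF

# Count each result type:
grep '^R:' sample.output | cut -d: -f3 | sort | uniq -c

# List the names of the tests that failed:
grep '^R:.*:FAIL$' sample.output | cut -d: -f2
```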
The legacy.run.sh script automatically calls a given test's clean.sh
script before invoking its setup.sh script.

Deletion of the files produced by the set of tests (e.g. after the
execution of "runall.sh") can be carried out using the command:

    sh cleanall.sh

or

    make testclean

(Note that the Makefile has two other targets for cleaning up files:
"clean" deletes all the files produced by the tests, as well as the
object and executable files used by the tests.  "distclean" does all
the work of "clean" as well as deleting configuration files produced
by "configure".)

Developer Notes
===

This section is intended for developers writing new tests.

Overview
---

As noted above, each test is in a separate directory.  To interact
with the test framework, the directories contain the following
standard files:

prereq.sh   Run at the beginning to determine whether the test can be
            run at all; if not, we see an R:SKIPPED result.  This file
            is optional: if not present, the test is assumed to have
            all its prerequisites met.

setup.sh    Run after prereq.sh, this sets up the preconditions for
            the tests.  Although optional, virtually all tests will
            require such a file to set up the ports they should use
            for the test.

tests.sh    Runs the actual tests.  This file is mandatory.

clean.sh    Run at the end to clean up temporary files, but only if
            the test completed successfully and its running was not
            inhibited by the "-n" switch being passed to
            "legacy.run.sh".  Otherwise the temporary files are left
            in place for inspection.

nsN         These subdirectories contain test name servers that can be
            queried or can interact with each other.  The value of N
            indicates the address the server listens on: for example,
            ns2 listens on 10.53.0.2, and ns4 on 10.53.0.4.  All test
            servers use an unprivileged port, so they don't need to
            run as root.  These servers log at the highest debug
            level, and the log is captured in the file "named.run".

ansN        Like nsN, but these are simple mock name servers
            implemented in Perl or Python.
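As a rough illustration of how a test script fits these conventions,
here is a minimal tests.sh-style sketch.  The test name "sample" and
the check below are hypothetical stand-ins; a real test would query
its name servers (e.g. with dig on the test's assigned ports) rather
than fabricate output:

```shell
#!/bin/sh
# Minimal sketch of a tests.sh script (illustrative only).
status=0
n=0

n=$((n + 1))
echo "I:sample:checking the first condition ($n)"
# A real test would run a command against one of its servers here,
# routing the output to a file named after the command, e.g.:
#     dig -p ${PORT} @10.53.0.1 example SOA > dig.out.test$n
echo "status: NOERROR" > dig.out.test$n      # stand-in for real output
grep "status: NOERROR" dig.out.test$n > /dev/null || status=1

echo "I:sample:exit status: $status"
# A real tests.sh ends with: exit $status
```

The runner reports the final R:PASS or R:FAIL result based on the
script's exit status, so the script itself only emits "I:" progress
lines.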
These mock servers are generally programmed to misbehave in ways that
named would not, so as to exercise named's ability to interoperate
with badly behaved name servers.

Port Usage
---

In order for the tests to run in parallel, each test requires a unique
set of ports.  These are specified by the "-p" option passed to
"legacy.run.sh", which sets environment variables that the scripts
listed above can reference.

The convention used in the system tests is that the number passed is
the start of a range of 100 ports.  The test is free to use the ports
as required, although the first ten ports in the block are named, and
tests generally use the named ports for their intended purpose.  The
names of the environment variables are:

PORT            Number to be used for the query port.
CONTROLPORT     Number to be used as the RNDC control port.
EXTRAPORT1 - EXTRAPORT8
                Eight port numbers that can be used as needed.

Two other environment variables are defined:

LOWPORT         The lowest port number in the range.
HIGHPORT        The highest port number in the range.

Since port ranges usually start on a boundary of 10, the variables are
set such that the last digit of the port number corresponds to the
number of the EXTRAPORTn variable.  For example, if the port range
were to start at 5200, the port assignments would be:

    PORT        = 5200
    EXTRAPORT1  = 5201
         :
    EXTRAPORT8  = 5208
    CONTROLPORT = 5209
    LOWPORT     = 5200
    HIGHPORT    = 5299

When running tests in parallel (i.e. giving a value of "numproc"
greater than 1 in the "make" or "runall.sh" commands listed above), it
is guaranteed that each test will get a set of unique port numbers.

Writing a Test
---

The test framework requires up to four shell scripts (listed above) as
well as a number of nameserver instances to run.  Certain expectations
are put on each script:

General
---

1. Each of the four scripts will be invoked with the command (cd ; sh