diff --git a/bin/tests/system/README b/bin/tests/system/README
new file mode 100644
index 0000000..265a9ef
--- /dev/null
+++ b/bin/tests/system/README
@@ -0,0 +1,843 @@
+Copyright (C) Internet Systems Consortium, Inc. ("ISC")
+
+SPDX-License-Identifier: MPL-2.0
+
+This Source Code Form is subject to the terms of the Mozilla Public
+License, v. 2.0. If a copy of the MPL was not distributed with this
+file, you can obtain one at https://mozilla.org/MPL/2.0/.
+
+See the COPYRIGHT file distributed with this work for additional
+information regarding copyright ownership.
+
+Introduction
+===
+This directory holds a simple test environment for running BIND 9 system
+tests involving multiple name servers.
+
+With the exception of "common" (which holds configuration information common to
+multiple tests), each directory holds a set of scripts and configuration
+files to test different parts of BIND. The directories are named for the
+aspect of BIND they test, for example:
+
+ dnssec/ DNSSEC tests
+ forward/ Forwarding tests
+ glue/ Glue handling tests
+
+etc.
+
+Typically each set of tests sets up 2-5 name servers and then performs one or
+more tests against them. Within the test subdirectory, each name server has a
+separate subdirectory containing its configuration data. These subdirectories
+are named "nsN" or "ansN" (where N is a number between 1 and 8, e.g. ns1, ans2
+etc.)
+
+The tests are completely self-contained and do not require access to the real
+DNS. Generally, one of the test servers (usually ns1) is set up as a root
+nameserver and is listed in the hints file of the others.
+
+
+Preparing to Run the Tests
+===
+To enable all servers to run on the same machine, they bind to separate virtual
+IP addresses on the loopback interface. ns1 runs on 10.53.0.1, ns2 on
+10.53.0.2, etc. Before running any tests, you must set up these addresses by
+running the command
+
+ sh ifconfig.sh up
+
+as root. The interfaces can be removed by executing the command:
+
+ sh ifconfig.sh down
+
+... also as root.
+
+The servers use unprivileged ports (above 1024) instead of the usual port 53,
+so they can be run without root privileges once the interfaces have been set
+up.
+
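+To verify that the addresses are in place, one of them can be pinged (a
+quick sanity check; any of the 10.53.0.N addresses will do):
+
+ ping -c 1 10.53.0.1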
+
+Note for MacOS Users
+---
+If you wish to make the interfaces survive across reboots, copy
+org.isc.bind.system and org.isc.bind.system.plist to /Library/LaunchDaemons
+then run
+
+ launchctl load /Library/LaunchDaemons/org.isc.bind.system.plist
+
+... as root.
+
+
+Running the System Tests with pytest
+===
+
+The pytest system test runner is currently in development, but it is the
+recommended way to run tests. Please report issues to QA.
+
+Running an Individual Test
+---
+
+pytest -k <test-name>
+
+Note that in comparison to the legacy test runner, some additional tests
+might be picked up when specifying just the system test directory name. To
+check which tests will be executed, you can use the `--collect-only` option.
+You might also be able to find a more specific test name to provide to ensure
+only your desired test is executed. See the help for the `-k` option in
+`pytest --help` for more information.
+
+It is also possible to run an individual pytest test case. For example, you
+can use the name test_sslyze_dot to execute just the test_sslyze_dot()
+function from doth/tests_sslyze.py. All of the needed setup and teardown will
+be handled by the framework.
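+
+For example (using the "forward" test and the test_sslyze_dot test case
+mentioned above purely as illustrations):
+
+pytest --collect-only -k forward
+pytest -k forward
+pytest -k test_sslyze_dot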
+
+Running All the System Tests
+---
+
+Issuing the plain `pytest` command without any arguments will execute all
+tests sequentially. To execute them in parallel, ensure you have pytest-xdist
+installed and run:
+
+pytest -n <number-of-workers>
+
+
+Running the System Tests Using the Legacy Runner
+===
+
+!!! WARNING !!!
+---
+The legacy way to run system tests is currently being reworked into the
+pytest system test runner described in the previous section. The contents of
+this section might be out of date and no longer applicable. Please try to use
+the pytest runner if possible, and report any issues and missing features.
+
+Running an Individual Test
+---
+The tests can be run individually using the following command:
+
+ sh legacy.run.sh [flags] <test-name> [<test-arguments>]
+
+e.g.
+
+ sh legacy.run.sh [flags] notify
+
+Optional flags are:
+
+ -k Keep servers running after the test completes. Each test
+ usually starts a number of nameservers, either instances
+ of the "named" being tested, or custom servers (written in
+ Python or Perl) that feature test-specific behavior. The
+ servers are automatically started before the test is run
+ and stopped after it ends. This flag leaves them running
+ at the end of the test, so that additional queries can be
+ sent by hand. To stop the servers afterwards, use the
+ command "sh stop.sh <test-name>".
+
+ -n Noclean - do not remove the output files if the test
+ completes successfully. By default, files created by the
+ test are deleted if it passes; they are not deleted if the
+ test fails.
+
+ -p <number> Sets the range of ports used by the test. A block of 100
+ ports is available for each test, the number given to the
+ "-p" switch being the number of the start of that block
+ (e.g. "-p 7900" will mean that the test is able to use
+ ports 7900 through 7999). If not specified, the test will
+ have ports 5000 to 5099 available to it.
+
+Arguments are:
+
+ test-name Mandatory. The name of the test, which is the name of the
+ subdirectory in bin/tests/system holding the test files.
+
+ test-arguments Optional arguments that are passed to each of the test's
+ scripts.
+
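+For example, to run the "notify" test with a port range starting at 7900,
+keeping both the servers and the output files at the end of the run (the
+flag values here are illustrative):
+
+ sh legacy.run.sh -k -n -p 7900 notify
+ # ...query the servers by hand...
+ sh stop.sh notify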
+
+Running All The System Tests
+---
+To run all the system tests, enter the command:
+
+ sh runall.sh [-c] [-n] [numproc]
+
+The optional flag "-c" forces colored output (by default system test output is
+not printed in color due to legacy.run.sh being piped through "tee").
+
+The optional flag "-n" has the same effect as it does for "legacy.run.sh" - it
+causes the retention of all output files from all tests.
+
+The optional "numproc" argument specifies the maximum number of tests that can
+run in parallel. The default is 1, which means that all of the tests run
+sequentially. If greater than 1, up to "numproc" tests will run simultaneously,
+new tests being started as tests finish. Each test will get a unique set of
+ports, so there is no danger of tests interfering with one another. Parallel
+running will reduce the total time taken to run the BIND system tests, but
+the output sent to the screen by the different tests will be interleaved.
+However, the systests.output file produced at the end of the run (in the
+bin/tests/system directory) will contain the output from each test in
+sequential order.
+
+Note that it is not possible to pass arguments to tests through the
+"runall.sh" script.
+
+A run of all the system tests can also be initiated via make:
+
+ make [-j numproc] test
+
+In this case, retention of the output files after a test completes successfully
+is specified by setting the environment variable SYSTEMTEST_NO_CLEAN to 1 prior
+to running make, e.g.
+
+ SYSTEMTEST_NO_CLEAN=1 make [-j numproc] test
+
+while setting environment variable SYSTEMTEST_FORCE_COLOR to 1 forces system
+test output to be printed in color.
+
+
+Running Multiple System Test Suites Simultaneously
+---
+In some cases it may be desirable to have multiple instances of the system test
+suite running simultaneously (e.g. from different terminal windows). To do
+this:
+
+1. Each installation must have its own directory tree. The system tests create
+files in the test directories, so separate directory trees are required to
+avoid interference between the same test running in the different
+installations.
+
+2. For one of the test suites, the starting port number must be specified by
+setting the environment variable STARTPORT before starting the test suite.
+Each test suite comprises about 100 tests, each being allocated a set of 100
+ports. The port ranges for each test are allocated sequentially, so each test
+suite requires about 10,000 ports to itself. By default, the port allocation
+starts at 5,000. So the following set of commands:
+
+ Terminal Window 1:
+ cd <installation-1>/bin/tests/system
+ sh runall.sh 4
+
+ Terminal Window 2:
+ cd <installation-2>/bin/tests/system
+ STARTPORT=20000 sh runall.sh 4
+
+... will start the test suite for installation-1 using the default base port
+of 5,000, so that test suite will use ports 5,000 through roughly 15,000.
+Prefixing the run for installation-2 with "STARTPORT=20000" means that test
+suite will use ports 20,000 through roughly 30,000.
+
+
+Format of Test Output
+---
+All output from the system tests is in the form of lines with the following
+structure:
+
+ <letter>:<test-name>:<message> [(<number>)]
+
+e.g.
+
+ I:catz:checking that dom1.example is not served by primary (1)
+
+The meanings of the fields are as follows:
+
+<letter>
+This indicates the type of message. This is one of:
+
+ S Start of the test
+ A Start of test (retained for backwards compatibility)
+ T Start of test (retained for backwards compatibility)
+ E End of the test
+ I Information. A test will typically output many of these messages
+ during its run, indicating test progress. Note that such a message may
+ be of the form "I:testname:failed", indicating that a sub-test has
+ failed.
+ R Result. Each test will result in one such message, which is of the
+ form:
+
+ R:<test-name>:<result>
+
+ where <result> is one of:
+
+ PASS The test passed
+ FAIL The test failed
+ SKIPPED The test was not run, usually because some
+ prerequisites required to run the test are missing.
+
+<test-name>
+This is the name of the test from which the message emanated, which is also the
+name of the subdirectory holding the test files.
+
+<message>
+This is text output by the test during its execution.
+
+(<number>)
+If present, this will correlate with a file created by the test. The tests
+execute commands and route the output of each command to a file. The name of
+this file depends on the command and the test, but will usually be of the form:
+
+ <command>.out.<suffix><number>
+
+e.g. nsupdate.out.test28, dig.out.q3. This aids diagnosis of problems by
+allowing the output that caused the problem message to be identified.
+
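+Since all result lines start with "R:", the overall results of a full run
+can be extracted from the systests.output file with commands such as:
+
+ grep '^R:' systests.output # one result line per test
+ grep '^R:.*:FAIL' systests.output # list only the failing tests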
+
+Re-Running the Tests
+---
+If there is a requirement to re-run a test (or the entire test suite), the
+files produced by the tests should be deleted first. Normally, these files are
+deleted if the test succeeds but are retained on error. The legacy.run.sh
+script automatically calls a given test's clean.sh script before invoking its
+setup.sh script.
+
+Deletion of the files produced by the set of tests (e.g. after the execution
+of "runall.sh") can be carried out using the command:
+
+ sh cleanall.sh
+
+or
+
+ make testclean
+
+(Note that the Makefile has two other targets for cleaning up files: "clean"
+will delete all the files produced by the tests, as well as the object and
+executable files used by the tests. "distclean" does all the work of "clean"
+as well as deleting configuration files produced by "configure".)
+
+
+Developer Notes
+===
+This section is intended for developers writing new tests.
+
+
+Overview
+---
+As noted above, each test is in a separate directory. To interact with the
+test framework, the directories contain the following standard files:
+
+prereq.sh Run at the beginning to determine whether the test can be run at
+ all; if not, we see a R:SKIPPED result. This file is optional:
+ if not present, the test is assumed to have all its prerequisites
+ met.
+
+setup.sh Run after prereq.sh, this sets up the preconditions for the tests.
+ Although optional, virtually all tests will require such a file to
+ set up the ports they should use for the test.
+
+tests.sh Runs the actual tests. This file is mandatory.
+
+clean.sh Run at the end to clean up temporary files, but only if the test
+ was completed successfully and its running was not inhibited by the
+ "-n" switch being passed to "legacy.run.sh". Otherwise the
+ temporary files are left in place for inspection.
+
+ns<N> These subdirectories contain test name servers that can be queried
+ or can interact with each other. The value of N indicates the
+ address the server listens on: for example, ns2 listens on
+ 10.53.0.2, and ns4 on 10.53.0.4. All test servers use an
+ unprivileged port, so they don't need to run as root. These
+ servers log at the highest debug level and the log is captured in
+ the file "named.run".
+
+ans<N> Like ns<N>, but these are simple mock name servers implemented in
+ Perl or Python. They are generally programmed to misbehave in ways
+ named would not, so as to exercise named's ability to interoperate
+ with badly behaved name servers.
+
+
+Port Usage
+---
+In order for the tests to run in parallel, each test requires a unique set of
+ports. These are specified by the "-p" option passed to "legacy.run.sh", which
+sets environment variables that the scripts listed above can reference.
+
+The convention used in the system tests is that the number passed is the start
+of a range of 100 ports. The test is free to use the ports as required,
+although the first ten ports in the block are named and generally tests use the
+named ports for their intended purpose. The names of the environment variables
+are:
+
+ PORT Number to be used for the query port.
+ CONTROLPORT Number to be used as the RNDC control port.
+ EXTRAPORT1 - EXTRAPORT8 Eight port numbers that can be used as needed.
+
+Two other environment variables are defined:
+
+ LOWPORT The lowest port number in the range.
+ HIGHPORT The highest port number in the range.
+
+Since port ranges usually start on a boundary of 10, the variables are set such
+that the last digit of the port number corresponds to the number of the
+EXTRAPORTn variable. For example, if the port range were to start at 5200, the
+port assignments would be:
+
+ PORT = 5200
+ EXTRAPORT1 = 5201
+ :
+ EXTRAPORT8 = 5208
+ CONTROLPORT = 5209
+ LOWPORT = 5200
+ HIGHPORT = 5299
+
+When running tests in parallel (i.e. giving a value of "numproc" greater than 1
+in the "make" or "runall.sh" commands listed above), it is guaranteed that each
+test will get a set of unique port numbers.
+
+
+Writing a Test
+---
+The test framework requires up to four shell scripts (listed above) as well as
+a number of nameserver instances to run. Certain expectations are put on each
+script:
+
+
+General
+---
+1. Each of the four scripts will be invoked with the command
+
+ (cd <test-directory> ; sh <script> [<arguments>] )
+
+... so that the working directory when the script starts executing is the
+test directory.
+
+2. Arguments can only be passed to the script if the test is being run as a
+one-off with "legacy.run.sh". In this case, everything on the command line
+after the name of the test is passed to each script. For example, the command:
+
+ sh legacy.run.sh -p 12300 mytest -D xyz
+
+... will run "mytest" with a port range of 12300 to 12399. Each of the
+framework scripts provided by the test will be invoked using the remaining
+arguments, e.g.:
+
+ (cd mytest ; sh prereq.sh -D xyz)
+ (cd mytest ; sh setup.sh -D xyz)
+ (cd mytest ; sh tests.sh -D xyz)
+ (cd mytest ; sh clean.sh -D xyz)
+
+No arguments will be passed to the test scripts if the test is run as part of
+a run of the full test suite (e.g. the tests are started with "runall.sh").
+
+3. Each script should start with the following line:
+
+ . ../conf.sh
+
+"conf.sh" defines a series of environment variables together with functions
+useful for the test scripts.
+
+
+prereq.sh
+---
+As noted above, this is optional. If present, it should check whether specific
+software needed to run the test is available and/or whether BIND has been
+configured with the appropriate options required.
+
+ * If the software required to run the test is present and the BIND
+ configure options are correct, prereq.sh should return with a status code
+ of 0.
+
+ * If the software required to run the test is not available and/or BIND
+ has not been configured with the appropriate options, prereq.sh should
+ return with a status code of 1.
+
+ * If there is some other problem (e.g. prerequisite software is available
+ but is not properly configured), a status code of 255 should be returned.
+
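+As an illustration, a minimal prereq.sh might look like the following sketch
+(checking for perl is purely an example prerequisite):
+
+ . ../conf.sh
+
+ if ! command -v perl >/dev/null 2>&1; then
+     echo_i "This test requires perl."
+     exit 1
+ fi
+ exit 0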
+
+setup.sh
+---
+This is responsible for setting up the configuration files used in the test.
+
+To cope with the varying port number, ports are not hard-coded into
+configuration files (or, for that matter, scripts that emulate nameservers).
+Instead, setup.sh is responsible for editing the configuration files to set the
+port numbers.
+
+To do this, configuration files should be supplied in the form of templates
+containing tokens identifying ports. The tokens have the same name as the
+environment variables listed above, but are prefixed and suffixed by the "@"
+symbol. For example, a fragment of a configuration file template might look
+like:
+
+ controls {
+ inet 10.53.0.1 port @CONTROLPORT@ allow { any; } keys { rndc_key; };
+ };
+
+ options {
+ query-source address 10.53.0.1;
+ notify-source 10.53.0.1;
+ transfer-source 10.53.0.1;
+ port @PORT@;
+ allow-new-zones yes;
+ };
+
+setup.sh should copy the template to the desired filename using the
+"copy_setports" shell function defined in "conf.sh", i.e.
+
+ copy_setports ns1/named.conf.in ns1/named.conf
+
+This replaces the tokens @PORT@, @CONTROLPORT@, @EXTRAPORT1@ through
+@EXTRAPORT8@ with the contents of the environment variables listed above.
+setup.sh should do this for all configuration files required when the test
+starts.
+
+("setup.sh" should also use this method for replacing the tokens in any Perl or
+Python name servers used in the test.)
+
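+Putting this together, a minimal setup.sh is often little more than the
+following sketch (the file names are illustrative):
+
+ . ../conf.sh
+
+ copy_setports ns1/named.conf.in ns1/named.conf
+ copy_setports ns2/named.conf.in ns2/named.conf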
+
+tests.sh
+---
+This is the main test file and the contents depend on the test. The contents
+are completely up to the developer, although most test scripts have a form
+similar to the following for each sub-test:
+
+ 1. n=$((n + 1))
+ 2. echo_i "prime cache nodata.example ($n)"
+ 3. ret=0
+ 4. $DIG -p ${PORT} @10.53.0.1 nodata.example TXT > dig.out.test$n
+ 5. grep "status: NOERROR" dig.out.test$n > /dev/null || ret=1
+ 6. grep "ANSWER: 0," dig.out.test$n > /dev/null || ret=1
+ 7. if [ $ret != 0 ]; then echo_i "failed"; fi
+ 8. status=$((status + ret))
+
+1. Increment the test number "n" (initialized to zero at the start of the
+ script).
+
+2. Indicate that the sub-test is about to begin. Note that "echo_i" instead
+ of "echo" is used. echo_i is a function defined in "conf.sh" which
+ prefixes the message with "I:<testname>:", allowing the output from each
+ test to be identified in the overall output stream. The test number is
+ included in the message in order to tie the sub-test to its output files.
+
+3. Initialize return status.
+
+4 - 6. Carry out the sub-test. In this case, a nameserver is queried (note
+ that the port used is given by the PORT environment variable, which was set
+ by the inclusion of the file "conf.sh" at the start of the script). The
+ output is routed to a file whose suffix includes the test number. The
+ response from the server is examined and, in this case, if the required
+ string is not found, an error is indicated by setting "ret" to 1.
+
+7. If the sub-test failed, a message is printed. "echo_i" is used to print
+ the message to add the prefix "I:<test-name>:" before it is output.
+
+8. "status", used to track how many of the sub-tests have failed, is
+ incremented accordingly. The value of "status" determines the status
+ returned by "tests.sh", which in turn determines whether the framework
+ prints the PASS or FAIL message.
+
+Regardless of the form the test takes, the following rules should be
+followed:
+
+a. Use the environment variables set by conf.sh to determine the ports to use
+ for sending and receiving queries.
+
+b. Use a counter to tag messages and to associate the messages with the output
+ files.
+
+c. Store all output produced by queries/commands into files. These files
+ should be named according to the command that produced them, e.g. "dig"
+ output should be stored in a file "dig.out.<suffix>", the suffix being
+ related to the value of the counter.
+
+d. Use "echo_i" to output informational messages.
+
+e. Retain a count of test failures and return this as the exit status from
+ the script.
+
+
+clean.sh
+---
+The inverse of "setup.sh", this is invoked by the framework to clean up the
+test directory. It should delete all files that have been created by the test
+during its run.
+
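+A minimal sketch (exactly which files need removing depends on what the test
+creates):
+
+ . ../conf.sh
+
+ rm -f dig.out.* ns*/named.conf ns*/named.run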
+
+Starting Nameservers
+---
+As noted earlier, a system test will involve a number of nameservers. These
+will be either instances of named, or special servers written in a language
+such as Perl or Python.
+
+For the former, the version of "named" being run is that in the "bin/named"
+directory in the tree holding the tests (i.e. if "make test" is being run
+immediately after "make", the version of "named" used is that just built). The
+configuration files, zone files etc. for these servers are located in
+subdirectories of the test directory named "nsN", where N is a small integer.
+The latter are special nameservers, mostly used for generating deliberately bad
+responses, located in subdirectories named "ansN" (again, N is an integer).
+In addition to configuration files, these directories should hold the
+appropriate script files as well.
+
+Note that the "N" for a particular test forms a single number space, e.g. if
+there is an "ns2" directory, there cannot be an "ans2" directory as well.
+Ideally, the directory numbers should start at 1 and work upwards.
+
+When running a test, the servers are started using "start.sh" (which is nothing
+more than a wrapper for start.pl). The options for "start.pl" are documented
+in the header for that file, so will not be repeated here. In summary, when
+invoked by "legacy.run.sh", start.pl looks for directories named "nsN" or
+"ansN" in the test directory and starts the servers it finds there.
+
+
+"named" Command-Line Options
+---
+By default, start.pl starts a "named" server with the following options:
+
+ -c named.conf Specifies the configuration file to use (so by implication,
+ each "nsN" nameserver's configuration file must be called
+ named.conf).
+
+ -d 99 Sets the maximum debugging level.
+
+ -D <name> The "-D" option sets a string used to identify the
+ nameserver in a process listing. In this case, the string
+ is the name of the subdirectory.
+
+ -g Runs the server in the foreground and logs everything to
+ stderr.
+
+ -m record Turns on the "record" memory usage debugging flag.
+
+ -U 4 Uses four listeners.
+
+ -X named.lock Acquires a lock on this file in the "nsN" directory, so
+ preventing multiple instances of this named running in this
+ directory (which could possibly interfere with the test).
+
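+Putting the defaults together, the server in (say) the "ns2" subdirectory is
+started with a command line roughly equivalent to the following sketch (the
+exact invocation is assembled by start.pl):
+
+ named -c named.conf -d 99 -D ns2 -g -m record -U 4 -X named.lock
+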
+All output is sent to a file called "named.run" in the nameserver directory.
+
+The options used to start named can be altered. There are three ways of doing
+this. "start.pl" checks the methods in a specific order: if a check succeeds,
+the options are set and any other specification is ignored. In order, these
+are:
+
+1. Specifying options to "start.sh"/"start.pl" after the name of the test
+directory, e.g.
+
+ sh start.sh reclimit ns1 -- "-c n.conf -d 43"
+
+(This is only really useful when running tests interactively.)
+
+2. Including a file called "named.args" in the "nsN" directory. If present,
+the contents of the first non-commented, non-blank line of the file are used
+as the named command-line arguments (replacing the defaults entirely); the
+rest of the file is ignored. An example is given after this list.
+
+3. Tweaking the default command line arguments with "-T" options. This flag is
+used to alter the behavior of BIND for testing and is not documented in the
+ARM. The presence of certain files in the "nsN" directory adds flags to
+the default command line (the content of the files is irrelevant - it
+is only the presence that counts):
+
+ named.noaa Appends "-T noaa" to the command line, which causes
+ "named" to never set the AA bit in an answer.
+
+ named.dropedns Adds "-T dropedns" to the command line, which causes
+ "named" to recognise EDNS options in messages, but drop
+ messages containing them.
+
+ named.maxudp1460 Adds "-T maxudp1460" to the command line, setting the
+ maximum UDP size handled by named to 1460.
+
+ named.maxudp512 Adds "-T maxudp512" to the command line, setting the
+ maximum UDP size handled by named to 512.
+
+ named.noedns Appends "-T noedns" to the command line, which disables
+ recognition of EDNS options in messages.
+
+ named.notcp Adds "-T notcp", which disables TCP in "named".
+
+ named.nosoa Appends "-T nosoa" to the command line, which disables
+ the addition of SOA records to negative responses (or to
+ the additional section if the response is triggered by RPZ
+ rewriting).
+
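+For example (method 2 above), an "ns2/named.args" file whose first
+non-comment line is the following would start that server with just a
+configuration file, debug level 43 and foreground logging (the values are
+illustrative):
+
+ -c named.conf -d 43 -g
+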
+Starting Other Nameservers
+---
+In contrast to "named", nameservers written in Perl or Python (whose script
+file should have the name "ans.pl" or "ans.py" respectively) are started with a
+fixed command line. In essence, the server is given the address and nothing
+else.
+
+(This is not strictly true: Python servers are provided with the number of the
+query port to use. Altering the port used by Perl servers currently requires
+creating a template file containing the "@PORT@" token, and having "setup.sh"
+substitute the actual port being used before the test starts.)
+
+
+Stopping Nameservers
+---
+As might be expected, the test system stops nameservers with the script
+"stop.sh", which is little more than a wrapper for "stop.pl". Like "start.pl",
+the options available are listed in the file's header and will not be repeated
+here.
+
+In summary though, the nameservers for a given test, if left running by
+specifying the "-k" flag to "legacy.run.sh" when the test is started, can be
+stopped by the command:
+
+ sh stop.sh <test-name> [server]
+
+... where if the server (e.g. "ns1", "ans3") is not specified, all servers
+associated with the test are stopped.
+
+
+Adding a Test to the System Test Suite
+---
+Once a test has been created, the following files should be edited:
+
+* conf.sh.in The name of the test should be added to the PARALLELDIRS or
+SEQUENTIALDIRS variables as appropriate. The former is used for tests that
+can run in parallel with other tests, the latter for tests that are unable to
+do so.
+
+* Makefile.in The name of the test should be added to one of the PARALLEL
+or SEQUENTIAL variables.
+
+(It is likely that a future iteration of the system test suite will remove the
+need to edit multiple files to add a test.)
+
+
+Valgrind
+---
+When running system tests, named can be run under Valgrind. The output from
+Valgrind is sent to per-process files that can be reviewed after the test has
+completed. To enable this, set the USE_VALGRIND environment variable to
+"helgrind" to run the Helgrind tool, or to any other value to run the Memcheck
+tool. To use "helgrind" effectively, build BIND with --disable-atomic.
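+
+For example, to run the (illustrative) "mytest" test with named under the
+Memcheck tool:
+
+ USE_VALGRIND=1 sh legacy.run.sh mytest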
+
+Developer Notes for pytest runner
+===
+
+Test discovery and collection
+---
+There are two distinct types of system tests. The first is a shell script,
+tests.sh, containing individual test cases that are executed sequentially;
+success or failure is determined by the return code. The second type is a
+regular pytest file which contains test functions.
+
+Dealing with the regular pytest files doesn't require any special consideration
+as long as the naming conventions are met. Discovering the tests.sh tests is
+more complicated.
+
+The chosen solution is to add a bit of glue for each system test. For every
+tests.sh, there is an accompanying tests_sh_*.py file that contains a test
+function which utilizes a custom run_tests_sh fixture to call the tests.sh
+script. Other solutions were tried and eventually rejected. While this
+introduces a bit of extra glue, it is the most portable, compatible and least
+complex solution.
+
+Module scope
+---
+Pytest fixtures can have a scope. The "module" scope is the most important
+for our use. A module is a Python file which contains test functions. Every
+system test directory may contain multiple modules (i.e. tests_*.py files)!
+
+The server setup/teardown is done for a module. Bundling test cases together
+inside a single module may save some resources. However, test cases inside a
+single module can't be executed in parallel.
+
+It is possible to execute different modules defined within a single system test
+directory in parallel. This is possible thanks to executing the tests inside a
+temporary directory and proper port assignment to ensure there won't be any
+conflicts.
+
+Test logging
+---
+Each module has a separate log which is saved as pytest.log.txt in the
+temporary directory in which the test is executed. This log includes messages
+from the module's setup/teardown as well as any logging from the tests using
+the `logger` fixture. Messages at level DEBUG and above will be present in
+this log.
+
+In general, any log messages using INFO or above will also be printed out
+during pytest execution. In CI, the pytest output is also saved to
+pytest.out.txt in the bin/tests/system directory.
+
+Parallel execution
+---
+As mentioned in the previous section, test cases inside a single module can't
+be executed in parallel. To put it differently, all test cases inside the
+same module must be performed by the same worker/thread. Otherwise, server
+setup/teardown fixtures won't be shared and runtime issues due to port
+collisions are likely to occur.
+
+Pytest-xdist is used for executing pytest test cases in parallel using the `-n
+N_WORKERS` option. By default, xdist would distribute any test case to any
+worker, which would lead to the issue described above. Therefore, conftest.py
+enforces the equivalent of the `--dist loadscope` option, which ensures that
+test cases within the same (module) scope will be handled by the same worker.
+Parallelism is automatically disabled when the xdist.scheduler.loadscope
+library is not available. For example:
+
+pytest -n auto
+
+Test selection
+---
+It is possible to run just a single pytest test case from any module. Use the
+standard pytest facility to select the desired test case(s), i.e. pass a
+sufficiently unique identifier to the `-k` parameter. You can also check
+which tests will be executed by using the `--collect-only` flag to debug your
+`-k` expression.
+
+Compatibility with older pytest version
+---
+Keep in mind that the pytest runner must work with ancient versions of pytest.
+When implementing new features, it is advisable to check feature support in
+pytest and pytest-xdist in older distributions first.
+
+As a general rule, any changes to the pytest runner need to keep working on all
+platforms in CI that use the pytest runner. As of 2023-01-13, the oldest
+supported version is whatever is available in EL8.
+
+We may need to add more compat code eventually to handle breaking upstream
+changes. For example, the request.fspath attribute is already deprecated in
+the latest pytest.
+
+Maintenance Notes for legacy runner
+===
+This section is aimed at developers maintaining BIND's system test framework.
+
+Notes on Parallel Execution
+---
+Although execution of an individual test is controlled by "legacy.run.sh",
+which executes the above shell scripts (and starts the relevant servers) for
+each test, the running of all tests in the test suite is controlled by the
+Makefile. ("runall.sh" does little more than invoke "make" on the Makefile.)
+
+All system tests are capable of being run in parallel. For this to work, each
+test needs to use a unique set of ports. To avoid the need to define which
+tests use which ports (and so risk port clashes as further tests are added),
+the ports are assigned when the tests are run. This is achieved by having the
+"test" target in the Makefile depend on "parallel.mk". That file is created
+when "make check" is run, and contains a target for each test of the form:
+
+ <test-name>:
+ @$(SHELL) legacy.run.sh -p <baseport> <test-name>
+
+The <baseport> is unique and the values of <baseport> for each test are
+separated by at least 100 ports.
+
+
+Cleaning Up From Tests
+---
+When a test is run, up to three different types of files are created:
+
+1. Files generated by the test itself, e.g. output from "dig" and "rndc", are
+stored in the test directory.
+
+2. Files produced by named which may not be cleaned up if named exits
+abnormally, e.g. core files, PID files etc., are stored in the test directory.
+
+3. A file "test.output.<test-name>" containing the text written to stdout by the
+test is written to bin/tests/system/. This file is only produced when the test
+is run as part of the entire test suite (e.g. via "runall.sh").
+
+If the test fails, all these files are retained. But if the test succeeds,
+they are cleaned up at different times:
+
+1. Files generated by the test itself are cleaned up by the test's own
+"clean.sh", which is called from "legacy.run.sh".
+
+2. Files that may not be cleaned up if named exits abnormally can be removed
+using the "cleanall.sh" script.
+
+3. "test.output.*" files are deleted when the test suite ends. At this point,
+the file "testsummary.sh" is called which concatenates all the "test.output.*"
+files into a single "systests.output" file before deleting them.