Diffstat (limited to 'cts/README.md')
-rw-r--r-- | cts/README.md | 110 |
1 file changed, 31 insertions, 79 deletions
diff --git a/cts/README.md b/cts/README.md
index 0ff1065..cbf319a 100644
--- a/cts/README.md
+++ b/cts/README.md
@@ -21,11 +21,10 @@ CTS includes:
 * The CTS lab: This is a cluster exerciser for intensively testing the behavior
   of an entire working cluster. It is primarily for developers and packagers of
   the Pacemaker source code, but it can be useful for users who wish to see how
-  their cluster will react to various situations. In an installed deployment,
-  the CTS lab is in the cts subdirectory of this directory; in a source
-  distibution, it is in cts/lab.
+  their cluster will react to various situations. Most of the lab code is in
+  the Pacemaker Python module. The front end, cts-lab, is in this directory.
 
-  The CTS lab runs a randomized series of predefined tests on the cluster. CTS
+  The CTS lab runs a randomized series of predefined tests on the cluster. It
   can be run against a pre-existing cluster configuration or overwrite the
   existing configuration with a test configuration.
 
@@ -46,15 +45,13 @@ CTS includes:
 
       /usr/libexec/pacemaker/cts-support uninstall
 
+  (The actual directory location may vary depending on how Pacemaker was
+  built.)
+
 * Cluster benchmark: The benchmark subdirectory of this directory contains some
   cluster test environment benchmarking code. It is not particularly useful for
   end users.
 
-* LXC generator: The lxc\_autogen.sh script can be used to create some guest
-  nodes for testing using LXC containers. It is not particularly useful for end
-  users. In an installed deployment, it is in the cts subdirectory of this
-  directory; in a source distribution, it is in this directory.
-
 * Valgrind suppressions: When memory-testing Pacemaker code with valgrind,
   various bugs in non-Pacemaker libraries and such can clutter the results. The
   valgrind-pcmk.suppressions file in this directory can be used with valgrind's
@@ -109,9 +106,11 @@ CTS includes:
 
 ### Run
 
-The primary interface to the CTS lab is the CTSlab.py executable:
+The primary interface to the CTS lab is the cts-lab executable:
 
-    /usr/share/pacemaker/tests/cts/CTSlab.py [options] <number-of-tests-to-run>
+    /usr/share/pacemaker/tests/cts-lab [options] <number-of-tests-to-run>
+
+(The actual directory location may vary depending on how Pacemaker was built.)
 
 As part of the options, specify the cluster nodes with --nodes, for example:
 
@@ -138,13 +137,13 @@ Configure some sort of fencing, for example to use fence\_xvm:
 
 Putting all the above together, a command line might look like:
 
-    /usr/share/pacemaker/tests/cts/CTSlab.py --nodes "pcmk-1 pcmk-2 pcmk-3" \
+    /usr/share/pacemaker/tests/cts-lab --nodes "pcmk-1 pcmk-2 pcmk-3" \
        --outputfile ~/cts.log --clobber-cib --populate-resources \
        --test-ip-base 192.168.9.100 --stonith xvm 50
 
 For more options, run with the --help option.
 
-There are also a couple of wrappers for CTSlab.py that some users may find more
+There are also a couple of wrappers for cts-lab that some users may find more
 convenient: cts, which is typically installed in the same place as the rest of
 the testing code; and cluster\_test, which is in the source directory and
 typically not installed.
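Relating to the fencing prerequisite and the --stonith xvm option above: before starting a run, it can help to confirm that fence\_xvm actually works from every cluster node. A minimal sketch, assuming fence\_xvm and its host-side daemon are already configured; the node names are illustrative, not taken from this commit:

    # Hedged sketch (not part of the README): check fence_xvm from each node
    # before launching the CTS lab.
    for node in pcmk-1 pcmk-2 pcmk-3; do
        ssh -l root "$node" fence_xvm -o list || echo "fencing check failed on $node"
    done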
@@ -172,7 +171,7 @@ setting the following environment variables on all cluster nodes:
                        --gen-suppressions=all"
 
 If running the CTS lab with valgrind enabled on the cluster nodes, add these
-options to CTSlab.py:
+options to cts-lab:
 
     --valgrind-tests --valgrind-procs "pacemaker-attrd pacemaker-based
         pacemaker-controld pacemaker-execd pacemaker-schedulerd pacemaker-fenced"
@@ -217,22 +216,22 @@ lab, but the C library variables may be set differently on different nodes.
 
 ### Optional: Remote node testing
 
-If the pacemaker-remoted daemon is installed on all cluster nodes, CTS will
-enable remote node tests.
+If the pacemaker-remoted daemon is installed on all cluster nodes, the CTS lab
+will enable remote node tests.
 
 The remote node tests choose a random node, stop the cluster on it, start
 pacemaker-remoted on it, and add an ocf:pacemaker:remote resource to turn it
-into a remote node. When the test is done, CTS will turn the node back into
+into a remote node. When the test is done, the lab will turn the node back into
 a cluster node.
 
-To avoid conflicts, CTS will rename the node, prefixing the original node name
-with "remote-". For example, "pcmk-1" will become "remote-pcmk-1". These names
-do not need to be resolvable.
+To avoid conflicts, the lab will rename the node, prefixing the original node
+name with "remote-". For example, "pcmk-1" will become "remote-pcmk-1". These
+names do not need to be resolvable.
 
 The name change may require special fencing configuration, if the fence agent
 expects the node name to be the same as its hostname. A common approach is to
 specify the "remote-" names in pcmk\_host\_list. If you use
-pcmk\_host\_list=all, CTS will expand that to all cluster nodes and their
+pcmk\_host\_list=all, the lab will expand that to all cluster nodes and their
 "remote-" names. You may additionally need a pcmk\_host\_map argument to map
 the "remote-" names to the hostnames. Example:
 
@@ -267,34 +266,9 @@ valgrind. For example:
 
         EOF
 
-### Optional: Container testing
-
-If the --container-tests option is given to CTSlab.py, it will enable
-testing of LXC resources (currently only the RemoteLXC test,
-which starts a remote node using an LXC container).
-
-The container tests have additional package dependencies (see the toplevel
-INSTALL.md). Also, SELinux must be enabled (in either permissive or enforcing
-mode), libvirtd must be enabled and running, and root must be able to ssh
-without a password between all cluster nodes (not just from the exerciser).
-Before running the tests, you can verify your environment with:
-
-    /usr/share/pacemaker/tests/cts/lxc_autogen.sh -v
-
-LXC tests will create two containers with hardcoded parameters: a NAT'ed bridge
-named virbr0 using the IP network 192.168.123.0/24 will be created on the
-cluster node hosting the containers; the host will be assigned
-52:54:00:A8:12:35 as the MAC address and 192.168.123.1 as the IP address.
-Each container will be assigned a random MAC address starting with 52:54:,
-the IP address 192.168.123.11 or 192.168.123.12, the hostname lxc1 or lxc2
-(which will be added to the host's /etc/hosts file), and 196MB RAM.
-
-The test will revert all of the configuration when it is done.
-
-
 ### Mini-HOWTO: Allow passwordless remote SSH connections
 
-The CTS scripts run "ssh -l root" so you don't have to do any of your testing
+The CTS lab runs "ssh -l root" so you don't have to do any of your testing
 logged in as root on the exerciser. Here is how to allow such connections
 without requiring a password to be entered each time:
 
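The concrete key-setup steps of the Mini-HOWTO fall between the hunks shown here. As a hedged sketch only, not necessarily the README's own recipe, passwordless root SSH from the exerciser is commonly arranged like this (node names are illustrative):

    # Hedged sketch: create a key on the exerciser and install it for root on
    # every cluster node.
    ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""
    for node in pcmk-1 pcmk-2 pcmk-3; do
        ssh-copy-id -i ~/.ssh/id_ed25519.pub root@"$node"
    done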
@@ -328,42 +302,20 @@ without requiring a password to be entered each time:
 
 If not, look at the documentation for your version of ssh.
 
-## Note on the maintenance
+## Upgrading scheduler test inputs for new XSLTs
 
-### Tests for scheduler
-
-The source `*.xml` files are preferably kept in sync with the newest
-major (and only major, which is enough) schema version, since these
-tests are not meant to double as schema upgrade ones (except some cases
+The scheduler/xml inputs should be kept in sync with the latest major schema
+version, since these tests are not meant to test schema upgrades (unless
 expressly designated as such).
 
-Currently and unless something goes wrong, the procedure of upgrading
-these tests en masse is as easy as:
+To upgrade the inputs to a new major schema version:
 
-    cd "$(git rev-parse --show-toplevel)/cts" # if not already
-    pushd "$(git rev-parse --show-toplevel)/xml"
+    cd "$(git rev-parse --show-toplevel)/xml"
     ./regression.sh cts_scheduler -G
-    popd
+    cd "$(git rev-parse --show-toplevel)/cts"
     git add --interactive .
-    git commit -m 'XML: upgrade-M.N.xsl: apply on scheduler CTS test cases'
-    git reset HEAD && git checkout . # if some differences still remain
-    ./cts-scheduler # absolutely vital to check nothing got broken!
-
-Now, sadly, there's no proved automated way to minimize instances like this:
-
-    <primitive id="rsc1" class="ocf" provider="heartbeat" type="apache">
-    </primitive>
-
-that may be left behind into more canonical:
-
-    <primitive id="rsc1" class="ocf" provider="heartbeat" type="apache"/>
-
-so manual editing is tasked, or perhaps `--format` or `--c14n`
-to `xmllint` will be of help (without any other side effects).
+    git commit -m 'Test: scheduler: upgrade test inputs to schema $X.$Y'
+    ./cts-scheduler || echo 'Investigate what went wrong'
 
-If the overall process gets stuck anywhere, common sense to the rescue.
-The initial part of the above recipe can be repeated anytime to verify
-there's nothing to upgrade artificially like this, which is a desired
-state. Note that `regression.sh` script performs validation of both
-the input and output, should the upgrade take place, implicitly, so
-there's no need of revalidation in the happy case.
+The first two commands can be run anytime to verify no further upgrades are
+needed.
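As a supplement to the upgrade procedure above, a hedged sketch for spot-checking one upgraded input against the newest schema with xmllint; the schema version and test name below are placeholders, not values taken from this commit:

    # Hedged sketch: validate a single scheduler input against the newest RNG
    # schema in the source tree (substitute a real version and test name).
    cd "$(git rev-parse --show-toplevel)"
    TEST=some-test
    xmllint --noout --relaxng "xml/pacemaker-X.Y.rng" "cts/scheduler/xml/${TEST}.xml"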