Diffstat (limited to 'tools/testing/selftests/tc-testing/creating-testcases/AddingTestCases.txt')
-rw-r--r-- | tools/testing/selftests/tc-testing/creating-testcases/AddingTestCases.txt | 105 |
1 file changed, 105 insertions, 0 deletions
diff --git a/tools/testing/selftests/tc-testing/creating-testcases/AddingTestCases.txt b/tools/testing/selftests/tc-testing/creating-testcases/AddingTestCases.txt
new file mode 100644
index 000000000..a28571aff
--- /dev/null
+++ b/tools/testing/selftests/tc-testing/creating-testcases/AddingTestCases.txt
@@ -0,0 +1,105 @@

tdc - Adding test cases for tdc

Author: Lucas Bates - lucasb@mojatatu.com

ADDING TEST CASES
-----------------

User-defined tests should be added by defining a separate JSON file. This
will help prevent conflicts when updating the repository. Refer to
template.json for the required JSON format for test cases.

Include the 'id' field, but do not assign a value. Running tdc with the -i
option will generate a unique ID for that test case.

tdc will recursively search the 'tc-tests' subdirectory (or the
directories named with the -D option) for .json files. Any test case
files you create in these directories will automatically be included.
If you wish to store your custom test cases elsewhere, be sure to run
tdc with the -f argument and the path to your file, or the -D argument
and the path to your directory(ies).

Be aware of required escape characters in the JSON data - particularly
when defining the match pattern. Refer to the supplied json test files
for examples when in doubt. The match pattern is a Python regular
expression, but it is stored as a JSON string, so it must be written
with JSON escaping - for example, the regex \d has to appear in the
file as \\d.


TEST CASE STRUCTURE
-------------------

Each test case has required data:

id:           A unique alphanumeric value to identify a particular test case
name:         Descriptive name that explains the command under test
skip:         An optional key. If the corresponding value is "yes", tdc will
              not execute the test case in question; however, the test case
              will still appear in the results output, marked as skipped.
              This key can be placed anywhere inside the test case at the
              top level.
category:     A list of single-word descriptions covering what the command
              under test is testing. Example: filter, actions, u32, gact, etc.
setup:        The list of commands required to ensure the command under test
              succeeds. For example: if testing a filter, the command to
              create the qdisc would appear here.
              This list can be empty.
              Each command can be a string to be executed, or a list
              consisting of a string which is a command to be executed,
              followed by 1 or more acceptable exit codes for this command.
              If only a string is given for the command, then an exit code
              of 0 will be expected.
cmdUnderTest: The tc command under test itself.
expExitCode:  The exit code expected from the command under test upon its
              termination. tdc will compare this value against the actual
              returned value.
verifyCmd:    The tc command to be run to verify successful execution.
              For example: if the command under test creates a gact action,
              verifyCmd should be "$TC actions show action gact"
matchPattern: A regular expression to be applied against the output of the
              verifyCmd to prove the command under test succeeded. This
              pattern should be as specific as possible so that a false
              positive is not matched.
matchCount:   How many times the regex in matchPattern should match. A value
              of 0 is acceptable.
teardown:     The list of commands to clean up after the test is completed.
              The environment should be returned to the same state as when
              this test was started: qdiscs deleted, actions flushed, etc.
              This list can be empty.
              Each command can be a string to be executed, or a list
              consisting of a string which is a command to be executed,
              followed by 1 or more acceptable exit codes for this command.
              If only a string is given for the command, then an exit code
              of 0 will be expected.
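
To illustrate the fields above, a complete test case might look like the
following. The id, command and match pattern here are made-up values for
illustration only; refer to template.json and the supplied test files for
authoritative formatting:

    {
        "id": "e89a",
        "name": "Add gact pass action",
        "category": [
            "actions",
            "gact"
        ],
        "setup": [
            [
                "$TC actions flush action gact",
                0,
                1,
                255
            ]
        ],
        "cmdUnderTest": "$TC actions add action pass index 8",
        "expExitCode": "0",
        "verifyCmd": "$TC actions show action gact",
        "matchPattern": "action order [0-9]*: gact action pass.*index 8",
        "matchCount": "1",
        "teardown": [
            "$TC actions flush action gact"
        ]
    }

In practice the "id" value would be left empty and generated by running
tdc with the -i option, as described above.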

SETUP/TEARDOWN ERRORS
---------------------

If an error is detected during the setup/teardown process, execution of the
tests will immediately stop with an error message and the namespace in which
the tests are run will be destroyed. This is to prevent inaccurate results
in the test cases. tdc will output a series of TAP results for the skipped
tests.

Repeated failures of the setup/teardown may indicate a problem with the test
case, or possibly even a bug in one of the commands that are not being tested.

It's possible to include acceptable exit codes with the setup/teardown command
so that it doesn't halt the script for an error that doesn't matter. Turn the
individual command into a list, with the command being first, followed by all
acceptable exit codes for the command.

Example:

A pair of setup commands. The first can have exit code 0, 1 or 255; the
second must have exit code 0.

    "setup": [
        [
            "$TC actions flush action gact",
            0,
            1,
            255
        ],
        "$TC actions add action reclassify index 65536"
    ],
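
The same list form works in teardown. For instance, a teardown that reuses
the flush command from the setup example above (illustrative only) could be
written as:

    "teardown": [
        [
            "$TC actions flush action gact",
            0,
            1,
            255
        ]
    ],

Here the flush is allowed to fail harmlessly - for instance when there is
nothing left to flush - without aborting the run, while any exit code other
than 0, 1 or 255 is still reported as a teardown error.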