LRM regression tests

* WARNING * WARNING * WARNING * WARNING * WARNING * WARNING *
* 
* evaltest.sh uses eval to an extent you don't really want to
* know about. Beware. Beware twice. Any input from the testcases
* directory is considered to be trusted. So, think twice before
* devising your tests lest you kill your precious data. Got it?
* Good.
*
* Furthermore, we deliberately do very little checking of user
* input, so no one should try to predict what will happen on
* random input from the testcases.
* 
* WARNING * WARNING * WARNING * WARNING * WARNING * WARNING *

Manifest

	regression.sh: the top-level program
	evaltest.sh: the test engine

	lrmadmin-interface: interface to lrmd (lrmadmin)
	descriptions: describe what we are about to do
	defaults: the default settings for test commands

	testcases/: here are the testcases and filters
	output/: here goes the output

All volatile data lives in the testcases/ directory.

NB: You should never ever need to edit regression.sh and
evaltest.sh. If you really have to, please talk to me and I will
try to fix it so that you do not have to.

Please write new test cases. The more the merrier :)

Usage

The usage is:

	./regression.sh ["prepare"] ["set:"<setname>|<testcase>]

Test cases are collected in test sets. The default test set is
basicset; running regression.sh without arguments will run all
tests from that set.

To show progress, a '.' is printed for each test and a '+' for
each second of sleep. Once all tests have been evaluated, the
output is checked against the expect file. If successful, "PASS"
is printed, otherwise "FAIL".

Specifying "prepare" will make regression.sh create expect
output files for the given set of tests or testcase.
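
For example (sometest is a made-up testcase name), to run the
default set, a named set, or a single testcase, or to (re)create
the expect output for that testcase:

	./regression.sh
	./regression.sh set:basicset
	./regression.sh sometest
	./regression.sh prepare sometest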

The script will start and stop lrmd itself. stonithd is also
started to test the XML descriptions printed by stonith agents.
No other part of stonithd functionality is tested.

The following files may be generated:

	output/<testcase>.out: the output of the testcase
	output/regression.out: the output of regression.sh
	output/lrmd.out: the output of lrmd

On success, the output from the testcases is removed and
regression.out is empty.

Driving the test cases yourself

evaltest.sh accepts input from stdin, evaluates it immediately,
and prints results to stdout/stderr. One can get a better feel
for what's actually going on by running it interactively.
Please note that you have to start the lrmd yourself beforehand.
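
For instance, with lrmd already running, a quick interactive
session might look like this (a sketch; the commands typed are
the test metalanguage described below, and the session ends on
EOF, i.e. Ctrl-D):

	./evaltest.sh
	add
	list
	del rsc=r1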

Test cases

Tests are written in a simple metalanguage. The list of commands
with a rough translation to lrmadmin's options is in the language
file. The best description of the language is in the
lrmadmin-interface and descriptions scripts:

$ egrep '^lrm|echo' lrmadmin-interface descriptions

A test case is a list of tests, one per line. A few examples:

	add  # add a resource with default name
	list # list all resources
	del rsc=wiwi # remove a wiwi resource

A set of defaults for LRM options is in the defaults file. That's
why we can write short forms instead of

	add rsc=r1 class=ocf type=lrmregtest provider=heartbeat ...
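
So a short but complete testcase could look like this (a sketch
relying on the defaults, where the default resource name is r1):

	add
	list
	del rsc=r1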

Special operations

There are special operations which make it possible to change
the environment and do other useful things. All special ops
start with the '%' sign and may be followed by additional
parameters.

%setenv
	change an environment variable; see the defaults file for
	the set of global variables and resetvars() in evaltest.sh
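
	for example (a sketch: the name=value form and the choice
	of variable are assumptions, check the defaults file for
	what really exists):

	%setenv rsc=r2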

%sleep
	sleep

%stop
	skip the rest of the tests

%extcheck
	feed the output of the next test case to the specified
	external program/filter; the program should either reside in
	testcases/ or be in the PATH, e.g.
	
	%extcheck cat
	
	simulates a null op :)

	see testcases/metadata for some examples

%repeat num
	repeat the next test num times;
	there are several variables which are substituted in the test
	lines, so that we can simulate a for loop:

	s/%t/$test_cnt/g
	s/%l/$line/g
	s/%j/$job_cnt/g
	s/%i/$repeat_cnt/g

	for example, to add 10 resources:

	%repeat 10
	add rsc=r-%i

%bg [num]
	run the next num tests (or just the next one) in the
	background

%bgrepeat [num]
	a combination of %repeat and %bg (used often); see the
	example after this list

%wait
	wait for the last background test to finish

%shell
	feed whatever is in the rest of the line to 'sh -s'
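
For example, to start five add operations in the background,
wait for them to finish, and then run a shell command (a sketch;
the bg-%i resource name pattern is made up):

	%bgrepeat 5
	add rsc=bg-%i
	%wait
	%shell date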

Filters and except files

Some output is necessarily very volatile, such as time stamps.
It is possible to specify a filter for each testcase to get rid
of superfluous information. A filter is a filter in the UNIX
sense: it takes input from stdin and prints results to stdout.

There is a common filter called very inventively
testcases/common.filter which is applied to all test cases.
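
For instance, a per-testcase filter which strips hh:mm:ss time
stamps could be as simple as this sketch (the exact pattern is
only an illustration):

	#!/bin/sh
	sed 's/[0-9][0-9]:[0-9][0-9]:[0-9][0-9]//g'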

An except file is a list of extended regular expressions fed to
egrep(1); that way one can filter out lines which are not
interesting. Again, the one applied to all test cases is
testcases/common.excl.
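
For instance, an except file containing the single extended
regular expression

	debug:

would drop every output line matching "debug:" before the
comparison (a sketch; which lines really count as noise depends
on the testcase).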