@bootstrap
Feature: crmsh bootstrap process - init, join and remove

  Test crmsh bootstrap init/join/remove process
  Need nodes: hanode1 hanode2 hanode3

  Background: Set up a two-node cluster
    Given Nodes ["hanode1", "hanode2", "hanode3"] are cleaned up
    And Cluster service is "stopped" on "hanode1"
    And Cluster service is "stopped" on "hanode2"
    When Run "crm cluster init -y" on "hanode1"
    Then Cluster service is "started" on "hanode1"
    And Show cluster status on "hanode1"
    When Run "crm cluster join -c hanode1 -y" on "hanode2"
    Then Cluster service is "started" on "hanode2"
    And Online nodes are "hanode1 hanode2"
    And Show cluster status on "hanode1"
  Scenario: Init cluster service on node "hanode1", and join on node "hanode2"

  Scenario: Support --all or specific node to manage cluster and nodes
    When Run "crm node standby --all" on "hanode1"
    Then Node "hanode1" is standby
    And Node "hanode2" is standby
    When Run "crm node online --all" on "hanode1"
    Then Node "hanode1" is online
    And Node "hanode2" is online
    When Wait for DC
    When Run "crm cluster stop --all" on "hanode1"
    Then Cluster service is "stopped" on "hanode1"
    And Cluster service is "stopped" on "hanode2"
    When Run "crm cluster start --all" on "hanode1"
    Then Cluster service is "started" on "hanode1"
    And Cluster service is "started" on "hanode2"
    When Wait for DC
    When Run "crm cluster stop hanode2" on "hanode1"
    Then Cluster service is "stopped" on "hanode2"
    When Run "crm cluster start hanode2" on "hanode1"
    Then Cluster service is "started" on "hanode2"
    When Run "crm cluster disable hanode2" on "hanode1"
    Then Cluster service is "disabled" on "hanode2"
    When Run "crm cluster enable hanode2" on "hanode1"
    Then Cluster service is "enabled" on "hanode2"
    When Run "crm cluster restart --all" on "hanode1"
    Then Cluster service is "started" on "hanode1"
    And Cluster service is "started" on "hanode2"

  Scenario: Remove peer node "hanode2"
    When Run "crm configure primitive d1 Dummy" on "hanode1"
    When Run "crm configure primitive d2 Dummy" on "hanode2"
    Then File "/etc/csync2/csync2.cfg" exists on "hanode2"
    Then File "/etc/csync2/key_hagroup" exists on "hanode2"
    Then File "/etc/corosync/authkey" exists on "hanode2"
    Then File "/etc/corosync/corosync.conf" exists on "hanode2"
    Then File "/etc/pacemaker/authkey" exists on "hanode2"
    Then Directory "/var/lib/csync2/" not empty on "hanode2"
    Then Directory "/var/lib/pacemaker/cib/" not empty on "hanode2"
    Then Directory "/var/lib/pacemaker/pengine/" not empty on "hanode2"
    Then Directory "/var/lib/corosync/" not empty on "hanode2"
    When Run "crm cluster remove hanode2 -y" on "hanode1"
    Then Cluster service is "started" on "hanode1"
    And Cluster service is "stopped" on "hanode2"
    And Online nodes are "hanode1"
    And Show cluster status on "hanode1"
    Then File "/etc/csync2/csync2.cfg" not exist on "hanode2"
    Then File "/etc/csync2/key_hagroup" not exist on "hanode2"
    Then File "/etc/corosync/authkey" not exist on "hanode2"
    Then File "/etc/corosync/corosync.conf" not exist on "hanode2"
    Then File "/etc/pacemaker/authkey" not exist on "hanode2"
    Then Directory "/var/lib/csync2/" is empty on "hanode2"
    Then Directory "/var/lib/pacemaker/cib/" is empty on "hanode2"
    Then Directory "/var/lib/pacemaker/pengine/" is empty on "hanode2"
    Then Directory "/var/lib/corosync/" is empty on "hanode2"

  Scenario: Remove local node "hanode1"
    When Run "crm configure primitive d1 Dummy" on "hanode1"
    When Run "crm configure primitive d2 Dummy" on "hanode1"
    Then File "/etc/csync2/csync2.cfg" exists on "hanode1"
    Then File "/etc/csync2/key_hagroup" exists on "hanode1"
    Then File "/etc/corosync/authkey" exists on "hanode1"
    Then File "/etc/corosync/corosync.conf" exists on "hanode1"
    Then File "/etc/pacemaker/authkey" exists on "hanode1"
    Then Directory "/var/lib/csync2/" not empty on "hanode1"
    Then Directory "/var/lib/pacemaker/cib/" not empty on "hanode1"
    Then Directory "/var/lib/pacemaker/pengine/" not empty on "hanode1"
    Then Directory "/var/lib/corosync/" not empty on "hanode1"
    When Run "crm cluster remove hanode1 -y --force" on "hanode1"
    Then Cluster service is "stopped" on "hanode1"
    And Cluster service is "started" on "hanode2"
    And Show cluster status on "hanode2"
    Then File "/etc/csync2/csync2.cfg" not exist on "hanode1"
    Then File "/etc/csync2/key_hagroup" not exist on "hanode1"
    Then File "/etc/corosync/authkey" not exist on "hanode1"
    Then File "/etc/corosync/corosync.conf" not exist on "hanode1"
    Then File "/etc/pacemaker/authkey" not exist on "hanode1"
    Then Directory "/var/lib/csync2/" is empty on "hanode1"
    Then Directory "/var/lib/pacemaker/cib/" is empty on "hanode1"
    Then Directory "/var/lib/pacemaker/pengine/" is empty on "hanode1"
    Then Directory "/var/lib/corosync/" is empty on "hanode1"

  Scenario: Remove peer node "hanode2" with `crm -F node delete`
    When Run "crm configure primitive d1 Dummy" on "hanode1"
    When Run "crm configure primitive d2 Dummy" on "hanode2"
    Then File "/etc/csync2/csync2.cfg" exists on "hanode2"
    Then File "/etc/csync2/key_hagroup" exists on "hanode2"
    Then File "/etc/corosync/authkey" exists on "hanode2"
    Then File "/etc/corosync/corosync.conf" exists on "hanode2"
    Then File "/etc/pacemaker/authkey" exists on "hanode2"
    Then Directory "/var/lib/csync2/" not empty on "hanode2"
    Then Directory "/var/lib/pacemaker/cib/" not empty on "hanode2"
    Then Directory "/var/lib/pacemaker/pengine/" not empty on "hanode2"
    Then Directory "/var/lib/corosync/" not empty on "hanode2"
    When Run "crm -F node delete hanode2" on "hanode1"
    Then Cluster service is "started" on "hanode1"
    And Cluster service is "stopped" on "hanode2"
    And Online nodes are "hanode1"
    And Show cluster status on "hanode1"
    Then File "/etc/csync2/csync2.cfg" not exist on "hanode2"
    Then File "/etc/csync2/key_hagroup" not exist on "hanode2"
    Then File "/etc/corosync/authkey" not exist on "hanode2"
    Then File "/etc/corosync/corosync.conf" not exist on "hanode2"
    Then File "/etc/pacemaker/authkey" not exist on "hanode2"
    Then Directory "/var/lib/csync2/" is empty on "hanode2"
    Then Directory "/var/lib/pacemaker/cib/" is empty on "hanode2"
    Then Directory "/var/lib/pacemaker/pengine/" is empty on "hanode2"
    Then Directory "/var/lib/corosync/" is empty on "hanode2"
    When Run "crm cluster remove hanode1 -y --force" on "hanode1"
    Then File "/etc/corosync/corosync.conf" not exist on "hanode1"

  Scenario: Remove local node "hanode1" with `crm -F node delete`
    When Run "crm configure primitive d1 Dummy" on "hanode1"
    When Run "crm configure primitive d2 Dummy" on "hanode1"
    Then File "/etc/csync2/csync2.cfg" exists on "hanode1"
    Then File "/etc/csync2/key_hagroup" exists on "hanode1"
    Then File "/etc/corosync/authkey" exists on "hanode1"
    Then File "/etc/corosync/corosync.conf" exists on "hanode1"
    Then File "/etc/pacemaker/authkey" exists on "hanode1"
    Then Directory "/var/lib/csync2/" not empty on "hanode1"
    Then Directory "/var/lib/pacemaker/cib/" not empty on "hanode1"
    Then Directory "/var/lib/pacemaker/pengine/" not empty on "hanode1"
    Then Directory "/var/lib/corosync/" not empty on "hanode1"
    When Run "crm -F node delete hanode1" on "hanode1"
    Then Cluster service is "stopped" on "hanode1"
    And Cluster service is "started" on "hanode2"
    And Show cluster status on "hanode2"
    Then File "/etc/csync2/csync2.cfg" not exist on "hanode1"
    Then File "/etc/csync2/key_hagroup" not exist on "hanode1"
    Then File "/etc/corosync/authkey" not exist on "hanode1"
    Then File "/etc/corosync/corosync.conf" not exist on "hanode1"
    Then File "/etc/pacemaker/authkey" not exist on "hanode1"
    Then Directory "/var/lib/csync2/" is empty on "hanode1"
    Then Directory "/var/lib/pacemaker/cib/" is empty on "hanode1"
    Then Directory "/var/lib/pacemaker/pengine/" is empty on "hanode1"
    Then Directory "/var/lib/corosync/" is empty on "hanode1"

  Scenario: Check hacluster's passwordless configuration on 2 nodes
    Then Check user shell for hacluster between "hanode1 hanode2"
    Then Check passwordless for hacluster between "hanode1 hanode2"

  Scenario: Check hacluster's passwordless configuration in old cluster, 2 nodes
    When Run "crm cluster stop --all" on "hanode1"
    Then Cluster service is "stopped" on "hanode1"
    And Cluster service is "stopped" on "hanode2"
    When Run "crm cluster init -y" on "hanode1"
    Then Cluster service is "started" on "hanode1"
    When Run "rm -rf /var/lib/heartbeat/cores/hacluster/.ssh" on "hanode1"
    When Run "crm cluster join -c hanode1 -y" on "hanode2"
    Then Cluster service is "started" on "hanode2"
    And Online nodes are "hanode1 hanode2"
    And Check passwordless for hacluster between "hanode1 hanode2"

  Scenario: Check hacluster's passwordless configuration on 3 nodes
    Given Cluster service is "stopped" on "hanode3"
    When Run "crm cluster join -c hanode1 -y" on "hanode3"
    Then Cluster service is "started" on "hanode3"
    And Online nodes are "hanode1 hanode2 hanode3"
    And Check user shell for hacluster between "hanode1 hanode2 hanode3"
    And Check passwordless for hacluster between "hanode1 hanode2 hanode3"

  Scenario: Check hacluster's passwordless configuration in old cluster, 3 nodes
    Given Cluster service is "stopped" on "hanode3"
    When Run "rm -rf /var/lib/heartbeat/cores/hacluster/.ssh" on "hanode1"
    And Run "rm -rf /var/lib/heartbeat/cores/hacluster/.ssh" on "hanode2"
    When Run "crm cluster join -c hanode1 -y" on "hanode3"
    Then Cluster service is "started" on "hanode3"
    And Online nodes are "hanode1 hanode2 hanode3"
    And Check passwordless for hacluster between "hanode1 hanode2 hanode3"

  Scenario: Check hacluster's user shell
    Given Cluster service is "stopped" on "hanode3"
    When Run "crm cluster join -c hanode1 -y" on "hanode3"
    Then Cluster service is "started" on "hanode3"
    And Online nodes are "hanode1 hanode2 hanode3"
    When Run "rm -rf /var/lib/heartbeat/cores/hacluster/.ssh" on "hanode1"
    And Run "rm -rf /var/lib/heartbeat/cores/hacluster/.ssh" on "hanode2"
    And Run "rm -rf /var/lib/heartbeat/cores/hacluster/.ssh" on "hanode3"
    And Run "usermod -s /usr/sbin/nologin hacluster" on "hanode1"
    And Run "usermod -s /usr/sbin/nologin hacluster" on "hanode2"
    And Run "usermod -s /usr/sbin/nologin hacluster" on "hanode3"
    And Run "rm -f /var/lib/crmsh/upgrade_seq" on "hanode1"
    And Run "rm -f /var/lib/crmsh/upgrade_seq" on "hanode2"
    And Run "rm -f /var/lib/crmsh/upgrade_seq" on "hanode3"
    And Run "crm status" on "hanode1"
    Then Check user shell for hacluster between "hanode1 hanode2 hanode3"
    Then Check passwordless for hacluster between "hanode1 hanode2 hanode3"