Current cluster status:
  * Node List:
    * Online: [ srv01 srv02 srv03 srv04 ]

  * Full List of Resources:
    * Resource Group: UMgroup01:
      * UmVIPcheck	(ocf:pacemaker:Dummy):	 Started srv01
      * UmIPaddr	(ocf:pacemaker:Dummy):	 Started srv01
      * UmDummy01	(ocf:pacemaker:Dummy):	 Started srv01
      * UmDummy02	(ocf:pacemaker:Dummy):	 Started srv01
    * Resource Group: OVDBgroup02-1:
      * prmExPostgreSQLDB1	(ocf:pacemaker:Dummy):	 Started srv01
    * Resource Group: OVDBgroup02-2:
      * prmExPostgreSQLDB2	(ocf:pacemaker:Dummy):	 Started srv02
    * Resource Group: OVDBgroup02-3:
      * prmExPostgreSQLDB3	(ocf:pacemaker:Dummy):	 Started srv03
    * Resource Group: grpStonith1:
      * prmStonithN1	(stonith:external/ssh):	 Started srv04
    * Resource Group: grpStonith2:
      * prmStonithN2	(stonith:external/ssh):	 Started srv01
    * Resource Group: grpStonith3:
      * prmStonithN3	(stonith:external/ssh):	 Started srv02
    * Resource Group: grpStonith4:
      * prmStonithN4	(stonith:external/ssh):	 Started srv03
    * Clone Set: clnUMgroup01 [clnUmResource]:
      * Resource Group: clnUmResource:0:
        * clnUMdummy01	(ocf:pacemaker:Dummy):	 FAILED srv04
        * clnUMdummy02	(ocf:pacemaker:Dummy):	 Started srv04
      * Started: [ srv01 ]
      * Stopped: [ srv02 srv03 ]
    * Clone Set: clnPingd [clnPrmPingd]:
      * Started: [ srv01 srv02 srv03 srv04 ]
    * Clone Set: clnDiskd1 [clnPrmDiskd1]:
      * Started: [ srv01 srv02 srv03 srv04 ]
    * Clone Set: clnG3dummy1 [clnG3dummy01]:
      * Started: [ srv01 srv02 srv03 srv04 ]
    * Clone Set: clnG3dummy2 [clnG3dummy02]:
      * Started: [ srv01 srv02 srv03 srv04 ]

Transition Summary:
  * Move       UmVIPcheck         ( srv01 -> srv04 )
  * Move       UmIPaddr           ( srv01 -> srv04 )
  * Move       UmDummy01          ( srv01 -> srv04 )
  * Move       UmDummy02          ( srv01 -> srv04 )
  * Recover    clnUMdummy01:0     (          srv04 )
  * Restart    clnUMdummy02:0     (          srv04 )  due to required clnUMdummy01:0 start
  * Stop       clnUMdummy01:1     (          srv01 )  due to node availability
  * Stop       clnUMdummy02:1     (          srv01 )  due to node availability

Executing Cluster Transition:
  * Pseudo action:   UMgroup01_stop_0
  * Resource action: UmDummy02       stop on srv01
  * Resource action: UmDummy01       stop on srv01
  * Resource action: UmIPaddr        stop on srv01
  * Resource action: UmVIPcheck      stop on srv01
  * Pseudo action:   UMgroup01_stopped_0
  * Pseudo action:   clnUMgroup01_stop_0
  * Pseudo action:   clnUmResource:0_stop_0
  * Resource action: clnUMdummy02:1  stop on srv04
  * Pseudo action:   clnUmResource:1_stop_0
  * Resource action: clnUMdummy02:0  stop on srv01
  * Resource action: clnUMdummy01:1  stop on srv04
  * Resource action: clnUMdummy01:0  stop on srv01
  * Pseudo action:   clnUmResource:0_stopped_0
  * Pseudo action:   clnUmResource:1_stopped_0
  * Pseudo action:   clnUMgroup01_stopped_0
  * Pseudo action:   clnUMgroup01_start_0
  * Pseudo action:   clnUmResource:0_start_0
  * Resource action: clnUMdummy01:1  start on srv04
  * Resource action: clnUMdummy01:1  monitor=10000 on srv04
  * Resource action: clnUMdummy02:1  start on srv04
  * Resource action: clnUMdummy02:1  monitor=10000 on srv04
  * Pseudo action:   clnUmResource:0_running_0
  * Pseudo action:   clnUMgroup01_running_0
  * Pseudo action:   UMgroup01_start_0
  * Resource action: UmVIPcheck      start on srv04
  * Resource action: UmIPaddr        start on srv04
  * Resource action: UmDummy01       start on srv04
  * Resource action: UmDummy02       start on srv04
  * Pseudo action:   UMgroup01_running_0
  * Resource action: UmIPaddr        monitor=10000 on srv04
  * Resource action: UmDummy01       monitor=10000 on srv04
  * Resource action: UmDummy02       monitor=10000 on srv04

Revised Cluster Status:
  * Node List:
    * Online: [ srv01 srv02 srv03 srv04 ]

  * Full List of Resources:
    * Resource Group: UMgroup01:
      * UmVIPcheck	(ocf:pacemaker:Dummy):	 Started srv04
      * UmIPaddr	(ocf:pacemaker:Dummy):	 Started srv04
      * UmDummy01	(ocf:pacemaker:Dummy):	 Started srv04
      * UmDummy02	(ocf:pacemaker:Dummy):	 Started srv04
    * Resource Group: OVDBgroup02-1:
      * prmExPostgreSQLDB1	(ocf:pacemaker:Dummy):	 Started srv01
    * Resource Group: OVDBgroup02-2:
      * prmExPostgreSQLDB2	(ocf:pacemaker:Dummy):	 Started srv02
    * Resource Group: OVDBgroup02-3:
      * prmExPostgreSQLDB3	(ocf:pacemaker:Dummy):	 Started srv03
    * Resource Group: grpStonith1:
      * prmStonithN1	(stonith:external/ssh):	 Started srv04
    * Resource Group: grpStonith2:
      * prmStonithN2	(stonith:external/ssh):	 Started srv01
    * Resource Group: grpStonith3:
      * prmStonithN3	(stonith:external/ssh):	 Started srv02
    * Resource Group: grpStonith4:
      * prmStonithN4	(stonith:external/ssh):	 Started srv03
    * Clone Set: clnUMgroup01 [clnUmResource]:
      * Started: [ srv04 ]
      * Stopped: [ srv01 srv02 srv03 ]
    * Clone Set: clnPingd [clnPrmPingd]:
      * Started: [ srv01 srv02 srv03 srv04 ]
    * Clone Set: clnDiskd1 [clnPrmDiskd1]:
      * Started: [ srv01 srv02 srv03 srv04 ]
    * Clone Set: clnG3dummy1 [clnG3dummy01]:
      * Started: [ srv01 srv02 srv03 srv04 ]
    * Clone Set: clnG3dummy2 [clnG3dummy02]:
      * Started: [ srv01 srv02 srv03 srv04 ]