author     Daniel Baumann <daniel.baumann@progress-linux.org>  2024-04-17 07:46:09 +0000
committer  Daniel Baumann <daniel.baumann@progress-linux.org>  2024-04-17 07:46:09 +0000
commit     043aa641ad4373e96fd748deb1e7fab3cb579a07 (patch)
tree       f8fde8a97ab5db152043f6c01043672114c0a4df /cts/scheduler/summary
parent     Releasing progress-linux version 2.1.6-5~progress7.99u1. (diff)
download   pacemaker-043aa641ad4373e96fd748deb1e7fab3cb579a07.tar.xz
           pacemaker-043aa641ad4373e96fd748deb1e7fab3cb579a07.zip

Merging upstream version 2.1.7.

Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'cts/scheduler/summary')
-rw-r--r--  cts/scheduler/summary/11-a-then-bm-b-move-a-clone-starting.summary | 2
-rw-r--r--  cts/scheduler/summary/5-am-then-bm-a-not-migratable.summary | 2
-rw-r--r--  cts/scheduler/summary/7-migrate-group-one-unmigratable.summary | 2
-rw-r--r--  cts/scheduler/summary/bundle-interleave-start.summary | 70
-rw-r--r--  cts/scheduler/summary/bundle-order-fencing.summary | 4
-rw-r--r--  cts/scheduler/summary/bundle-order-stop-on-remote.summary | 6
-rw-r--r--  cts/scheduler/summary/bundle-promoted-anticolocation-1.summary | 33
-rw-r--r--  cts/scheduler/summary/bundle-promoted-anticolocation-2.summary | 33
-rw-r--r--  cts/scheduler/summary/bundle-promoted-anticolocation-3.summary | 45
-rw-r--r--  cts/scheduler/summary/bundle-promoted-anticolocation-4.summary | 45
-rw-r--r--  cts/scheduler/summary/bundle-promoted-anticolocation-5.summary | 51
-rw-r--r--  cts/scheduler/summary/bundle-promoted-anticolocation-6.summary | 51
-rw-r--r--  cts/scheduler/summary/bundle-promoted-colocation-1.summary | 33
-rw-r--r--  cts/scheduler/summary/bundle-promoted-colocation-2.summary | 33
-rw-r--r--  cts/scheduler/summary/bundle-promoted-colocation-3.summary | 45
-rw-r--r--  cts/scheduler/summary/bundle-promoted-colocation-4.summary | 45
-rw-r--r--  cts/scheduler/summary/bundle-promoted-colocation-5.summary | 51
-rw-r--r--  cts/scheduler/summary/bundle-promoted-colocation-6.summary | 51
-rw-r--r--  cts/scheduler/summary/bundle-promoted-location-1.summary | 27
-rw-r--r--  cts/scheduler/summary/bundle-promoted-location-2.summary | 54
-rw-r--r--  cts/scheduler/summary/bundle-promoted-location-3.summary | 27
-rw-r--r--  cts/scheduler/summary/bundle-promoted-location-4.summary | 27
-rw-r--r--  cts/scheduler/summary/bundle-promoted-location-5.summary | 27
-rw-r--r--  cts/scheduler/summary/bundle-promoted-location-6.summary | 40
-rw-r--r--  cts/scheduler/summary/cancel-behind-moving-remote.summary | 78
-rw-r--r--  cts/scheduler/summary/clone-recover-no-shuffle-1.summary | 29
-rw-r--r--  cts/scheduler/summary/clone-recover-no-shuffle-10.summary | 29
-rw-r--r--  cts/scheduler/summary/clone-recover-no-shuffle-11.summary | 34
-rw-r--r--  cts/scheduler/summary/clone-recover-no-shuffle-12.summary | 43
-rw-r--r--  cts/scheduler/summary/clone-recover-no-shuffle-2.summary | 32
-rw-r--r--  cts/scheduler/summary/clone-recover-no-shuffle-3.summary | 42
-rw-r--r--  cts/scheduler/summary/clone-recover-no-shuffle-4.summary | 29
-rw-r--r--  cts/scheduler/summary/clone-recover-no-shuffle-5.summary | 32
-rw-r--r--  cts/scheduler/summary/clone-recover-no-shuffle-6.summary | 42
-rw-r--r--  cts/scheduler/summary/clone-recover-no-shuffle-7.summary | 38
-rw-r--r--  cts/scheduler/summary/clone-recover-no-shuffle-8.summary | 52
-rw-r--r--  cts/scheduler/summary/clone-recover-no-shuffle-9.summary | 56
-rw-r--r--  cts/scheduler/summary/coloc-with-inner-group-member.summary | 45
-rw-r--r--  cts/scheduler/summary/group-anticolocation-2.summary | 41
-rw-r--r--  cts/scheduler/summary/group-anticolocation-3.summary | 33
-rw-r--r--  cts/scheduler/summary/group-anticolocation-4.summary | 41
-rw-r--r--  cts/scheduler/summary/group-anticolocation-5.summary | 41
-rw-r--r--  cts/scheduler/summary/group-anticolocation.summary | 16
-rw-r--r--  cts/scheduler/summary/migrate-fencing.summary | 2
-rw-r--r--  cts/scheduler/summary/no-promote-on-unrunnable-guest.summary | 2
-rw-r--r--  cts/scheduler/summary/node-pending-timeout.summary | 26
-rw-r--r--  cts/scheduler/summary/pending-node-no-uname.summary | 23
-rw-r--r--  cts/scheduler/summary/promoted-ordering.summary | 24
-rw-r--r--  cts/scheduler/summary/promoted-probed-score.summary | 124
-rw-r--r--  cts/scheduler/summary/timeout-by-node.summary | 43
-rw-r--r--  cts/scheduler/summary/unfence-definition.summary | 2
-rw-r--r--  cts/scheduler/summary/unfence-parameters.summary | 2
52 files changed, 1637 insertions(+), 168 deletions(-)
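
The .summary files changed below are expected outputs of Pacemaker's scheduler regression suite: each records the transition the scheduler computes for a saved cluster configuration, so upstream behavior changes in 2.1.7 (such as the new bundle promotion handling) appear as diffs against these files. As a minimal sketch of how one such summary is exercised locally, assuming a Pacemaker source checkout and the option names of recent cts-scheduler versions (verify with ./cts/cts-scheduler --help; the test name here is taken from the diffstat above):

    # Run a single scheduler regression test by name; its output is compared
    # against cts/scheduler/summary/<name>.summary (option names assumed).
    ./cts/cts-scheduler --run bundle-promoted-anticolocation-1

    # After an intentional scheduler change, regenerate the expected output
    # so the stored .summary matches the new behavior (flag assumed).
    ./cts/cts-scheduler --run bundle-promoted-anticolocation-1 --update
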
diff --git a/cts/scheduler/summary/11-a-then-bm-b-move-a-clone-starting.summary b/cts/scheduler/summary/11-a-then-bm-b-move-a-clone-starting.summary
index 7bd3b49..7388644 100644
--- a/cts/scheduler/summary/11-a-then-bm-b-move-a-clone-starting.summary
+++ b/cts/scheduler/summary/11-a-then-bm-b-move-a-clone-starting.summary
@@ -11,7 +11,7 @@ Current cluster status:
Transition Summary:
* Move myclone:0 ( f20node1 -> f20node2 )
- * Move vm ( f20node1 -> f20node2 ) due to unrunnable myclone-clone stop
+ * Move vm ( f20node1 -> f20node2 ) due to unmigrateable myclone-clone stop
Executing Cluster Transition:
* Resource action: myclone monitor on f20node2
diff --git a/cts/scheduler/summary/5-am-then-bm-a-not-migratable.summary b/cts/scheduler/summary/5-am-then-bm-a-not-migratable.summary
index 2c88bc3..2a755e1 100644
--- a/cts/scheduler/summary/5-am-then-bm-a-not-migratable.summary
+++ b/cts/scheduler/summary/5-am-then-bm-a-not-migratable.summary
@@ -8,7 +8,7 @@ Current cluster status:
Transition Summary:
* Move A ( 18node1 -> 18node2 )
- * Move B ( 18node2 -> 18node1 ) due to unrunnable A stop
+ * Move B ( 18node2 -> 18node1 ) due to unmigrateable A stop
Executing Cluster Transition:
* Resource action: B stop on 18node2
diff --git a/cts/scheduler/summary/7-migrate-group-one-unmigratable.summary b/cts/scheduler/summary/7-migrate-group-one-unmigratable.summary
index 0d0c7ff..92eecaf 100644
--- a/cts/scheduler/summary/7-migrate-group-one-unmigratable.summary
+++ b/cts/scheduler/summary/7-migrate-group-one-unmigratable.summary
@@ -11,7 +11,7 @@ Current cluster status:
Transition Summary:
* Migrate A ( 18node1 -> 18node2 )
* Move B ( 18node1 -> 18node2 )
- * Move C ( 18node1 -> 18node2 ) due to unrunnable B stop
+ * Move C ( 18node1 -> 18node2 ) due to unmigrateable B stop
Executing Cluster Transition:
* Pseudo action: thegroup_stop_0
diff --git a/cts/scheduler/summary/bundle-interleave-start.summary b/cts/scheduler/summary/bundle-interleave-start.summary
index 1648e92..5a59847 100644
--- a/cts/scheduler/summary/bundle-interleave-start.summary
+++ b/cts/scheduler/summary/bundle-interleave-start.summary
@@ -14,24 +14,24 @@ Current cluster status:
* app-bundle-2 (ocf:pacemaker:Stateful): Stopped
Transition Summary:
- * Start base-bundle-podman-0 ( node2 )
- * Start base-bundle-0 ( node2 )
- * Start base:0 ( base-bundle-0 )
- * Start base-bundle-podman-1 ( node3 )
- * Start base-bundle-1 ( node3 )
- * Start base:1 ( base-bundle-1 )
- * Start base-bundle-podman-2 ( node4 )
- * Start base-bundle-2 ( node4 )
- * Start base:2 ( base-bundle-2 )
- * Start app-bundle-podman-0 ( node2 )
- * Start app-bundle-0 ( node2 )
- * Start app:0 ( app-bundle-0 )
- * Start app-bundle-podman-1 ( node3 )
- * Start app-bundle-1 ( node3 )
- * Start app:1 ( app-bundle-1 )
- * Start app-bundle-podman-2 ( node4 )
- * Start app-bundle-2 ( node4 )
- * Start app:2 ( app-bundle-2 )
+ * Start base-bundle-podman-0 ( node2 )
+ * Start base-bundle-0 ( node2 )
+ * Start base:0 ( base-bundle-0 )
+ * Start base-bundle-podman-1 ( node3 )
+ * Start base-bundle-1 ( node3 )
+ * Start base:1 ( base-bundle-1 )
+ * Start base-bundle-podman-2 ( node4 )
+ * Start base-bundle-2 ( node4 )
+ * Promote base:2 ( Stopped -> Promoted base-bundle-2 )
+ * Start app-bundle-podman-0 ( node2 )
+ * Start app-bundle-0 ( node2 )
+ * Start app:0 ( app-bundle-0 )
+ * Start app-bundle-podman-1 ( node3 )
+ * Start app-bundle-1 ( node3 )
+ * Start app:1 ( app-bundle-1 )
+ * Start app-bundle-podman-2 ( node4 )
+ * Start app-bundle-2 ( node4 )
+ * Promote app:2 ( Stopped -> Promoted app-bundle-2 )
Executing Cluster Transition:
* Resource action: base-bundle-podman-0 monitor on node5
@@ -91,17 +91,18 @@ Executing Cluster Transition:
* Resource action: base-bundle-podman-2 monitor=60000 on node4
* Resource action: base-bundle-2 start on node4
* Resource action: base:0 start on base-bundle-0
- * Resource action: base:1 start on base-bundle-1
- * Resource action: base:2 start on base-bundle-2
- * Pseudo action: base-bundle-clone_running_0
* Resource action: base-bundle-0 monitor=30000 on node2
* Resource action: base-bundle-1 monitor=30000 on node3
* Resource action: base-bundle-2 monitor=30000 on node4
- * Pseudo action: base-bundle_running_0
+ * Resource action: base:1 start on base-bundle-1
* Resource action: base:0 monitor=16000 on base-bundle-0
+ * Resource action: base:2 start on base-bundle-2
* Resource action: base:1 monitor=16000 on base-bundle-1
- * Resource action: base:2 monitor=16000 on base-bundle-2
+ * Pseudo action: base-bundle-clone_running_0
+ * Pseudo action: base-bundle_running_0
* Pseudo action: app-bundle_start_0
+ * Pseudo action: base-bundle_promote_0
+ * Pseudo action: base-bundle-clone_promote_0
* Pseudo action: app-bundle-clone_start_0
* Resource action: app-bundle-podman-0 start on node2
* Resource action: app-bundle-0 monitor on node5
@@ -121,23 +122,32 @@ Executing Cluster Transition:
* Resource action: app-bundle-2 monitor on node3
* Resource action: app-bundle-2 monitor on node2
* Resource action: app-bundle-2 monitor on node1
+ * Resource action: base:2 promote on base-bundle-2
+ * Pseudo action: base-bundle-clone_promoted_0
* Resource action: app-bundle-podman-0 monitor=60000 on node2
* Resource action: app-bundle-0 start on node2
* Resource action: app-bundle-podman-1 monitor=60000 on node3
* Resource action: app-bundle-1 start on node3
* Resource action: app-bundle-podman-2 monitor=60000 on node4
* Resource action: app-bundle-2 start on node4
+ * Pseudo action: base-bundle_promoted_0
+ * Resource action: base:2 monitor=15000 on base-bundle-2
* Resource action: app:0 start on app-bundle-0
- * Resource action: app:1 start on app-bundle-1
- * Resource action: app:2 start on app-bundle-2
- * Pseudo action: app-bundle-clone_running_0
* Resource action: app-bundle-0 monitor=30000 on node2
* Resource action: app-bundle-1 monitor=30000 on node3
* Resource action: app-bundle-2 monitor=30000 on node4
- * Pseudo action: app-bundle_running_0
+ * Resource action: app:1 start on app-bundle-1
* Resource action: app:0 monitor=16000 on app-bundle-0
+ * Resource action: app:2 start on app-bundle-2
* Resource action: app:1 monitor=16000 on app-bundle-1
- * Resource action: app:2 monitor=16000 on app-bundle-2
+ * Pseudo action: app-bundle-clone_running_0
+ * Pseudo action: app-bundle_running_0
+ * Pseudo action: app-bundle_promote_0
+ * Pseudo action: app-bundle-clone_promote_0
+ * Resource action: app:2 promote on app-bundle-2
+ * Pseudo action: app-bundle-clone_promoted_0
+ * Pseudo action: app-bundle_promoted_0
+ * Resource action: app:2 monitor=15000 on app-bundle-2
Revised Cluster Status:
* Node List:
@@ -149,8 +159,8 @@ Revised Cluster Status:
* Container bundle set: base-bundle [localhost/pcmktest:base]:
* base-bundle-0 (ocf:pacemaker:Stateful): Unpromoted node2
* base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node3
- * base-bundle-2 (ocf:pacemaker:Stateful): Unpromoted node4
+ * base-bundle-2 (ocf:pacemaker:Stateful): Promoted node4
* Container bundle set: app-bundle [localhost/pcmktest:app]:
* app-bundle-0 (ocf:pacemaker:Stateful): Unpromoted node2
* app-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node3
- * app-bundle-2 (ocf:pacemaker:Stateful): Unpromoted node4
+ * app-bundle-2 (ocf:pacemaker:Stateful): Promoted node4
diff --git a/cts/scheduler/summary/bundle-order-fencing.summary b/cts/scheduler/summary/bundle-order-fencing.summary
index e3a25c2..4088c15 100644
--- a/cts/scheduler/summary/bundle-order-fencing.summary
+++ b/cts/scheduler/summary/bundle-order-fencing.summary
@@ -145,6 +145,7 @@ Executing Cluster Transition:
* Pseudo action: galera-bundle_stopped_0
* Resource action: rabbitmq notify on rabbitmq-bundle-1
* Resource action: rabbitmq notify on rabbitmq-bundle-2
+ * Pseudo action: rabbitmq_notified_0
* Pseudo action: rabbitmq-bundle-clone_confirmed-post_notify_stopped_0
* Pseudo action: rabbitmq-bundle-clone_pre_notify_start_0
* Pseudo action: galera-bundle-master_running_0
@@ -155,7 +156,6 @@ Executing Cluster Transition:
* Pseudo action: redis-bundle-docker-0_stop_0
* Pseudo action: galera-bundle_running_0
* Pseudo action: rabbitmq-bundle_stopped_0
- * Pseudo action: rabbitmq_notified_0
* Pseudo action: rabbitmq-bundle-clone_confirmed-pre_notify_start_0
* Pseudo action: rabbitmq-bundle-clone_start_0
* Pseudo action: redis_stop_0
@@ -165,11 +165,11 @@ Executing Cluster Transition:
* Pseudo action: rabbitmq-bundle-clone_post_notify_running_0
* Resource action: redis notify on redis-bundle-1
* Resource action: redis notify on redis-bundle-2
+ * Pseudo action: redis_notified_0
* Pseudo action: redis-bundle-master_confirmed-post_notify_stopped_0
* Pseudo action: redis-bundle-master_pre_notify_start_0
* Pseudo action: redis-bundle_stopped_0
* Pseudo action: rabbitmq-bundle-clone_confirmed-post_notify_running_0
- * Pseudo action: redis_notified_0
* Pseudo action: redis-bundle-master_confirmed-pre_notify_start_0
* Pseudo action: redis-bundle-master_start_0
* Pseudo action: rabbitmq-bundle_running_0
diff --git a/cts/scheduler/summary/bundle-order-stop-on-remote.summary b/cts/scheduler/summary/bundle-order-stop-on-remote.summary
index 5e2e367..612e701 100644
--- a/cts/scheduler/summary/bundle-order-stop-on-remote.summary
+++ b/cts/scheduler/summary/bundle-order-stop-on-remote.summary
@@ -140,8 +140,8 @@ Executing Cluster Transition:
* Resource action: galera-bundle-docker-2 monitor=60000 on database-2
* Resource action: galera-bundle-2 start on controller-1
* Resource action: redis notify on redis-bundle-0
- * Resource action: redis notify on redis-bundle-1
* Resource action: redis notify on redis-bundle-2
+ * Resource action: redis notify on redis-bundle-1
* Pseudo action: redis-bundle-master_confirmed-post_notify_running_0
* Pseudo action: redis-bundle_running_0
* Resource action: galera start on galera-bundle-0
@@ -153,8 +153,8 @@ Executing Cluster Transition:
* Pseudo action: redis-bundle_promote_0
* Pseudo action: galera-bundle_running_0
* Resource action: redis notify on redis-bundle-0
- * Resource action: redis notify on redis-bundle-1
* Resource action: redis notify on redis-bundle-2
+ * Resource action: redis notify on redis-bundle-1
* Pseudo action: redis-bundle-master_confirmed-pre_notify_promote_0
* Pseudo action: redis-bundle-master_promote_0
* Pseudo action: galera-bundle_promote_0
@@ -169,8 +169,8 @@ Executing Cluster Transition:
* Resource action: galera monitor=10000 on galera-bundle-0
* Resource action: galera monitor=10000 on galera-bundle-2
* Resource action: redis notify on redis-bundle-0
- * Resource action: redis notify on redis-bundle-1
* Resource action: redis notify on redis-bundle-2
+ * Resource action: redis notify on redis-bundle-1
* Pseudo action: redis-bundle-master_confirmed-post_notify_promoted_0
* Pseudo action: redis-bundle_promoted_0
* Resource action: redis monitor=20000 on redis-bundle-0
diff --git a/cts/scheduler/summary/bundle-promoted-anticolocation-1.summary b/cts/scheduler/summary/bundle-promoted-anticolocation-1.summary
new file mode 100644
index 0000000..ec6cf2b
--- /dev/null
+++ b/cts/scheduler/summary/bundle-promoted-anticolocation-1.summary
@@ -0,0 +1,33 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Promoted node3
+ * vip (ocf:heartbeat:IPaddr2): Started node3
+
+Transition Summary:
+ * Move vip ( node3 -> node1 )
+
+Executing Cluster Transition:
+ * Resource action: vip stop on node3
+ * Resource action: vip start on node1
+ * Resource action: vip monitor=10000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Promoted node3
+ * vip (ocf:heartbeat:IPaddr2): Started node1
diff --git a/cts/scheduler/summary/bundle-promoted-anticolocation-2.summary b/cts/scheduler/summary/bundle-promoted-anticolocation-2.summary
new file mode 100644
index 0000000..ec6cf2b
--- /dev/null
+++ b/cts/scheduler/summary/bundle-promoted-anticolocation-2.summary
@@ -0,0 +1,33 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Promoted node3
+ * vip (ocf:heartbeat:IPaddr2): Started node3
+
+Transition Summary:
+ * Move vip ( node3 -> node1 )
+
+Executing Cluster Transition:
+ * Resource action: vip stop on node3
+ * Resource action: vip start on node1
+ * Resource action: vip monitor=10000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Promoted node3
+ * vip (ocf:heartbeat:IPaddr2): Started node1
diff --git a/cts/scheduler/summary/bundle-promoted-anticolocation-3.summary b/cts/scheduler/summary/bundle-promoted-anticolocation-3.summary
new file mode 100644
index 0000000..e9db462
--- /dev/null
+++ b/cts/scheduler/summary/bundle-promoted-anticolocation-3.summary
@@ -0,0 +1,45 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Promoted node3
+ * vip (ocf:heartbeat:IPaddr2): Started node3
+
+Transition Summary:
+ * Promote base:1 ( Unpromoted -> Promoted base-bundle-1 )
+ * Demote base:2 ( Promoted -> Unpromoted base-bundle-2 )
+
+Executing Cluster Transition:
+ * Resource action: base cancel=16000 on base-bundle-1
+ * Resource action: base cancel=15000 on base-bundle-2
+ * Pseudo action: base-bundle_demote_0
+ * Pseudo action: base-bundle-clone_demote_0
+ * Resource action: base demote on base-bundle-2
+ * Pseudo action: base-bundle-clone_demoted_0
+ * Pseudo action: base-bundle_demoted_0
+ * Pseudo action: base-bundle_promote_0
+ * Resource action: base monitor=16000 on base-bundle-2
+ * Pseudo action: base-bundle-clone_promote_0
+ * Resource action: base promote on base-bundle-1
+ * Pseudo action: base-bundle-clone_promoted_0
+ * Pseudo action: base-bundle_promoted_0
+ * Resource action: base monitor=15000 on base-bundle-1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * base-bundle-1 (ocf:pacemaker:Stateful): Promoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Unpromoted node3
+ * vip (ocf:heartbeat:IPaddr2): Started node3
diff --git a/cts/scheduler/summary/bundle-promoted-anticolocation-4.summary b/cts/scheduler/summary/bundle-promoted-anticolocation-4.summary
new file mode 100644
index 0000000..e9db462
--- /dev/null
+++ b/cts/scheduler/summary/bundle-promoted-anticolocation-4.summary
@@ -0,0 +1,45 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Promoted node3
+ * vip (ocf:heartbeat:IPaddr2): Started node3
+
+Transition Summary:
+ * Promote base:1 ( Unpromoted -> Promoted base-bundle-1 )
+ * Demote base:2 ( Promoted -> Unpromoted base-bundle-2 )
+
+Executing Cluster Transition:
+ * Resource action: base cancel=16000 on base-bundle-1
+ * Resource action: base cancel=15000 on base-bundle-2
+ * Pseudo action: base-bundle_demote_0
+ * Pseudo action: base-bundle-clone_demote_0
+ * Resource action: base demote on base-bundle-2
+ * Pseudo action: base-bundle-clone_demoted_0
+ * Pseudo action: base-bundle_demoted_0
+ * Pseudo action: base-bundle_promote_0
+ * Resource action: base monitor=16000 on base-bundle-2
+ * Pseudo action: base-bundle-clone_promote_0
+ * Resource action: base promote on base-bundle-1
+ * Pseudo action: base-bundle-clone_promoted_0
+ * Pseudo action: base-bundle_promoted_0
+ * Resource action: base monitor=15000 on base-bundle-1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * base-bundle-1 (ocf:pacemaker:Stateful): Promoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Unpromoted node3
+ * vip (ocf:heartbeat:IPaddr2): Started node3
diff --git a/cts/scheduler/summary/bundle-promoted-anticolocation-5.summary b/cts/scheduler/summary/bundle-promoted-anticolocation-5.summary
new file mode 100644
index 0000000..c35f2e0
--- /dev/null
+++ b/cts/scheduler/summary/bundle-promoted-anticolocation-5.summary
@@ -0,0 +1,51 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ bundle-a-0 bundle-a-1 bundle-a-2 bundle-b-0 bundle-b-1 bundle-b-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: bundle-a [localhost/pcmktest]:
+ * bundle-a-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * bundle-a-1 (ocf:pacemaker:Stateful): Promoted node3
+ * bundle-a-2 (ocf:pacemaker:Stateful): Unpromoted node2
+ * Container bundle set: bundle-b [localhost/pcmktest]:
+ * bundle-b-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * bundle-b-1 (ocf:pacemaker:Stateful): Promoted node3
+ * bundle-b-2 (ocf:pacemaker:Stateful): Unpromoted node2
+
+Transition Summary:
+ * Demote bundle-a-rsc:1 ( Promoted -> Unpromoted bundle-a-1 )
+ * Promote bundle-a-rsc:2 ( Unpromoted -> Promoted bundle-a-2 )
+
+Executing Cluster Transition:
+ * Resource action: bundle-a-rsc cancel=16000 on bundle-a-2
+ * Resource action: bundle-a-rsc cancel=15000 on bundle-a-1
+ * Pseudo action: bundle-a_demote_0
+ * Pseudo action: bundle-a-clone_demote_0
+ * Resource action: bundle-a-rsc demote on bundle-a-1
+ * Pseudo action: bundle-a-clone_demoted_0
+ * Pseudo action: bundle-a_demoted_0
+ * Pseudo action: bundle-a_promote_0
+ * Resource action: bundle-a-rsc monitor=16000 on bundle-a-1
+ * Pseudo action: bundle-a-clone_promote_0
+ * Resource action: bundle-a-rsc promote on bundle-a-2
+ * Pseudo action: bundle-a-clone_promoted_0
+ * Pseudo action: bundle-a_promoted_0
+ * Resource action: bundle-a-rsc monitor=15000 on bundle-a-2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ bundle-a-0 bundle-a-1 bundle-a-2 bundle-b-0 bundle-b-1 bundle-b-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: bundle-a [localhost/pcmktest]:
+ * bundle-a-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * bundle-a-1 (ocf:pacemaker:Stateful): Unpromoted node3
+ * bundle-a-2 (ocf:pacemaker:Stateful): Promoted node2
+ * Container bundle set: bundle-b [localhost/pcmktest]:
+ * bundle-b-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * bundle-b-1 (ocf:pacemaker:Stateful): Promoted node3
+ * bundle-b-2 (ocf:pacemaker:Stateful): Unpromoted node2
diff --git a/cts/scheduler/summary/bundle-promoted-anticolocation-6.summary b/cts/scheduler/summary/bundle-promoted-anticolocation-6.summary
new file mode 100644
index 0000000..c35f2e0
--- /dev/null
+++ b/cts/scheduler/summary/bundle-promoted-anticolocation-6.summary
@@ -0,0 +1,51 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ bundle-a-0 bundle-a-1 bundle-a-2 bundle-b-0 bundle-b-1 bundle-b-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: bundle-a [localhost/pcmktest]:
+ * bundle-a-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * bundle-a-1 (ocf:pacemaker:Stateful): Promoted node3
+ * bundle-a-2 (ocf:pacemaker:Stateful): Unpromoted node2
+ * Container bundle set: bundle-b [localhost/pcmktest]:
+ * bundle-b-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * bundle-b-1 (ocf:pacemaker:Stateful): Promoted node3
+ * bundle-b-2 (ocf:pacemaker:Stateful): Unpromoted node2
+
+Transition Summary:
+ * Demote bundle-a-rsc:1 ( Promoted -> Unpromoted bundle-a-1 )
+ * Promote bundle-a-rsc:2 ( Unpromoted -> Promoted bundle-a-2 )
+
+Executing Cluster Transition:
+ * Resource action: bundle-a-rsc cancel=16000 on bundle-a-2
+ * Resource action: bundle-a-rsc cancel=15000 on bundle-a-1
+ * Pseudo action: bundle-a_demote_0
+ * Pseudo action: bundle-a-clone_demote_0
+ * Resource action: bundle-a-rsc demote on bundle-a-1
+ * Pseudo action: bundle-a-clone_demoted_0
+ * Pseudo action: bundle-a_demoted_0
+ * Pseudo action: bundle-a_promote_0
+ * Resource action: bundle-a-rsc monitor=16000 on bundle-a-1
+ * Pseudo action: bundle-a-clone_promote_0
+ * Resource action: bundle-a-rsc promote on bundle-a-2
+ * Pseudo action: bundle-a-clone_promoted_0
+ * Pseudo action: bundle-a_promoted_0
+ * Resource action: bundle-a-rsc monitor=15000 on bundle-a-2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ bundle-a-0 bundle-a-1 bundle-a-2 bundle-b-0 bundle-b-1 bundle-b-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: bundle-a [localhost/pcmktest]:
+ * bundle-a-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * bundle-a-1 (ocf:pacemaker:Stateful): Unpromoted node3
+ * bundle-a-2 (ocf:pacemaker:Stateful): Promoted node2
+ * Container bundle set: bundle-b [localhost/pcmktest]:
+ * bundle-b-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * bundle-b-1 (ocf:pacemaker:Stateful): Promoted node3
+ * bundle-b-2 (ocf:pacemaker:Stateful): Unpromoted node2
diff --git a/cts/scheduler/summary/bundle-promoted-colocation-1.summary b/cts/scheduler/summary/bundle-promoted-colocation-1.summary
new file mode 100644
index 0000000..61cc974
--- /dev/null
+++ b/cts/scheduler/summary/bundle-promoted-colocation-1.summary
@@ -0,0 +1,33 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Promoted node3
+ * vip (ocf:heartbeat:IPaddr2): Started node1
+
+Transition Summary:
+ * Move vip ( node1 -> node3 )
+
+Executing Cluster Transition:
+ * Resource action: vip stop on node1
+ * Resource action: vip start on node3
+ * Resource action: vip monitor=10000 on node3
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Promoted node3
+ * vip (ocf:heartbeat:IPaddr2): Started node3
diff --git a/cts/scheduler/summary/bundle-promoted-colocation-2.summary b/cts/scheduler/summary/bundle-promoted-colocation-2.summary
new file mode 100644
index 0000000..61cc974
--- /dev/null
+++ b/cts/scheduler/summary/bundle-promoted-colocation-2.summary
@@ -0,0 +1,33 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Promoted node3
+ * vip (ocf:heartbeat:IPaddr2): Started node1
+
+Transition Summary:
+ * Move vip ( node1 -> node3 )
+
+Executing Cluster Transition:
+ * Resource action: vip stop on node1
+ * Resource action: vip start on node3
+ * Resource action: vip monitor=10000 on node3
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Promoted node3
+ * vip (ocf:heartbeat:IPaddr2): Started node3
diff --git a/cts/scheduler/summary/bundle-promoted-colocation-3.summary b/cts/scheduler/summary/bundle-promoted-colocation-3.summary
new file mode 100644
index 0000000..64b4157
--- /dev/null
+++ b/cts/scheduler/summary/bundle-promoted-colocation-3.summary
@@ -0,0 +1,45 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Promoted node3
+ * vip (ocf:heartbeat:IPaddr2): Started node1
+
+Transition Summary:
+ * Promote base:0 ( Unpromoted -> Promoted base-bundle-0 )
+ * Demote base:2 ( Promoted -> Unpromoted base-bundle-2 )
+
+Executing Cluster Transition:
+ * Resource action: base cancel=16000 on base-bundle-0
+ * Resource action: base cancel=15000 on base-bundle-2
+ * Pseudo action: base-bundle_demote_0
+ * Pseudo action: base-bundle-clone_demote_0
+ * Resource action: base demote on base-bundle-2
+ * Pseudo action: base-bundle-clone_demoted_0
+ * Pseudo action: base-bundle_demoted_0
+ * Pseudo action: base-bundle_promote_0
+ * Resource action: base monitor=16000 on base-bundle-2
+ * Pseudo action: base-bundle-clone_promote_0
+ * Resource action: base promote on base-bundle-0
+ * Pseudo action: base-bundle-clone_promoted_0
+ * Pseudo action: base-bundle_promoted_0
+ * Resource action: base monitor=15000 on base-bundle-0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Promoted node1
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Unpromoted node3
+ * vip (ocf:heartbeat:IPaddr2): Started node1
diff --git a/cts/scheduler/summary/bundle-promoted-colocation-4.summary b/cts/scheduler/summary/bundle-promoted-colocation-4.summary
new file mode 100644
index 0000000..64b4157
--- /dev/null
+++ b/cts/scheduler/summary/bundle-promoted-colocation-4.summary
@@ -0,0 +1,45 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Promoted node3
+ * vip (ocf:heartbeat:IPaddr2): Started node1
+
+Transition Summary:
+ * Promote base:0 ( Unpromoted -> Promoted base-bundle-0 )
+ * Demote base:2 ( Promoted -> Unpromoted base-bundle-2 )
+
+Executing Cluster Transition:
+ * Resource action: base cancel=16000 on base-bundle-0
+ * Resource action: base cancel=15000 on base-bundle-2
+ * Pseudo action: base-bundle_demote_0
+ * Pseudo action: base-bundle-clone_demote_0
+ * Resource action: base demote on base-bundle-2
+ * Pseudo action: base-bundle-clone_demoted_0
+ * Pseudo action: base-bundle_demoted_0
+ * Pseudo action: base-bundle_promote_0
+ * Resource action: base monitor=16000 on base-bundle-2
+ * Pseudo action: base-bundle-clone_promote_0
+ * Resource action: base promote on base-bundle-0
+ * Pseudo action: base-bundle-clone_promoted_0
+ * Pseudo action: base-bundle_promoted_0
+ * Resource action: base monitor=15000 on base-bundle-0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Promoted node1
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Unpromoted node3
+ * vip (ocf:heartbeat:IPaddr2): Started node1
diff --git a/cts/scheduler/summary/bundle-promoted-colocation-5.summary b/cts/scheduler/summary/bundle-promoted-colocation-5.summary
new file mode 100644
index 0000000..dbcf940
--- /dev/null
+++ b/cts/scheduler/summary/bundle-promoted-colocation-5.summary
@@ -0,0 +1,51 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ bundle-a-0 bundle-a-1 bundle-a-2 bundle-b-0 bundle-b-1 bundle-b-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: bundle-a [localhost/pcmktest]:
+ * bundle-a-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * bundle-a-1 (ocf:pacemaker:Stateful): Promoted node3
+ * bundle-a-2 (ocf:pacemaker:Stateful): Unpromoted node2
+ * Container bundle set: bundle-b [localhost/pcmktest]:
+ * bundle-b-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * bundle-b-1 (ocf:pacemaker:Stateful): Unpromoted node3
+ * bundle-b-2 (ocf:pacemaker:Stateful): Promoted node2
+
+Transition Summary:
+ * Demote bundle-a-rsc:1 ( Promoted -> Unpromoted bundle-a-1 )
+ * Promote bundle-a-rsc:2 ( Unpromoted -> Promoted bundle-a-2 )
+
+Executing Cluster Transition:
+ * Resource action: bundle-a-rsc cancel=16000 on bundle-a-2
+ * Resource action: bundle-a-rsc cancel=15000 on bundle-a-1
+ * Pseudo action: bundle-a_demote_0
+ * Pseudo action: bundle-a-clone_demote_0
+ * Resource action: bundle-a-rsc demote on bundle-a-1
+ * Pseudo action: bundle-a-clone_demoted_0
+ * Pseudo action: bundle-a_demoted_0
+ * Pseudo action: bundle-a_promote_0
+ * Resource action: bundle-a-rsc monitor=16000 on bundle-a-1
+ * Pseudo action: bundle-a-clone_promote_0
+ * Resource action: bundle-a-rsc promote on bundle-a-2
+ * Pseudo action: bundle-a-clone_promoted_0
+ * Pseudo action: bundle-a_promoted_0
+ * Resource action: bundle-a-rsc monitor=15000 on bundle-a-2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ bundle-a-0 bundle-a-1 bundle-a-2 bundle-b-0 bundle-b-1 bundle-b-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: bundle-a [localhost/pcmktest]:
+ * bundle-a-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * bundle-a-1 (ocf:pacemaker:Stateful): Unpromoted node3
+ * bundle-a-2 (ocf:pacemaker:Stateful): Promoted node2
+ * Container bundle set: bundle-b [localhost/pcmktest]:
+ * bundle-b-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * bundle-b-1 (ocf:pacemaker:Stateful): Unpromoted node3
+ * bundle-b-2 (ocf:pacemaker:Stateful): Promoted node2
diff --git a/cts/scheduler/summary/bundle-promoted-colocation-6.summary b/cts/scheduler/summary/bundle-promoted-colocation-6.summary
new file mode 100644
index 0000000..dbcf940
--- /dev/null
+++ b/cts/scheduler/summary/bundle-promoted-colocation-6.summary
@@ -0,0 +1,51 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ bundle-a-0 bundle-a-1 bundle-a-2 bundle-b-0 bundle-b-1 bundle-b-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: bundle-a [localhost/pcmktest]:
+ * bundle-a-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * bundle-a-1 (ocf:pacemaker:Stateful): Promoted node3
+ * bundle-a-2 (ocf:pacemaker:Stateful): Unpromoted node2
+ * Container bundle set: bundle-b [localhost/pcmktest]:
+ * bundle-b-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * bundle-b-1 (ocf:pacemaker:Stateful): Unpromoted node3
+ * bundle-b-2 (ocf:pacemaker:Stateful): Promoted node2
+
+Transition Summary:
+ * Demote bundle-a-rsc:1 ( Promoted -> Unpromoted bundle-a-1 )
+ * Promote bundle-a-rsc:2 ( Unpromoted -> Promoted bundle-a-2 )
+
+Executing Cluster Transition:
+ * Resource action: bundle-a-rsc cancel=16000 on bundle-a-2
+ * Resource action: bundle-a-rsc cancel=15000 on bundle-a-1
+ * Pseudo action: bundle-a_demote_0
+ * Pseudo action: bundle-a-clone_demote_0
+ * Resource action: bundle-a-rsc demote on bundle-a-1
+ * Pseudo action: bundle-a-clone_demoted_0
+ * Pseudo action: bundle-a_demoted_0
+ * Pseudo action: bundle-a_promote_0
+ * Resource action: bundle-a-rsc monitor=16000 on bundle-a-1
+ * Pseudo action: bundle-a-clone_promote_0
+ * Resource action: bundle-a-rsc promote on bundle-a-2
+ * Pseudo action: bundle-a-clone_promoted_0
+ * Pseudo action: bundle-a_promoted_0
+ * Resource action: bundle-a-rsc monitor=15000 on bundle-a-2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ bundle-a-0 bundle-a-1 bundle-a-2 bundle-b-0 bundle-b-1 bundle-b-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: bundle-a [localhost/pcmktest]:
+ * bundle-a-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * bundle-a-1 (ocf:pacemaker:Stateful): Unpromoted node3
+ * bundle-a-2 (ocf:pacemaker:Stateful): Promoted node2
+ * Container bundle set: bundle-b [localhost/pcmktest]:
+ * bundle-b-0 (ocf:pacemaker:Stateful): Unpromoted node1
+ * bundle-b-1 (ocf:pacemaker:Stateful): Unpromoted node3
+ * bundle-b-2 (ocf:pacemaker:Stateful): Promoted node2
diff --git a/cts/scheduler/summary/bundle-promoted-location-1.summary b/cts/scheduler/summary/bundle-promoted-location-1.summary
new file mode 100644
index 0000000..4c0a0ab
--- /dev/null
+++ b/cts/scheduler/summary/bundle-promoted-location-1.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Promoted node3
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Unpromoted node1
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Promoted node3
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Unpromoted node1
diff --git a/cts/scheduler/summary/bundle-promoted-location-2.summary b/cts/scheduler/summary/bundle-promoted-location-2.summary
new file mode 100644
index 0000000..bd3b3a9
--- /dev/null
+++ b/cts/scheduler/summary/bundle-promoted-location-2.summary
@@ -0,0 +1,54 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Promoted node3
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Unpromoted node1
+
+Transition Summary:
+ * Stop base-bundle-podman-0 ( node3 ) due to node availability
+ * Stop base-bundle-0 ( node3 ) due to unrunnable base-bundle-podman-0 start
+ * Stop base:0 ( Promoted base-bundle-0 ) due to unrunnable base-bundle-podman-0 start
+ * Promote base:1 ( Unpromoted -> Promoted base-bundle-1 )
+
+Executing Cluster Transition:
+ * Resource action: base cancel=16000 on base-bundle-1
+ * Resource action: base cancel=15000 on base-bundle-0
+ * Pseudo action: base-bundle_demote_0
+ * Pseudo action: base-bundle-clone_demote_0
+ * Resource action: base demote on base-bundle-0
+ * Pseudo action: base-bundle-clone_demoted_0
+ * Pseudo action: base-bundle_demoted_0
+ * Pseudo action: base-bundle_stop_0
+ * Pseudo action: base-bundle-clone_stop_0
+ * Resource action: base stop on base-bundle-0
+ * Pseudo action: base-bundle-clone_stopped_0
+ * Pseudo action: base-bundle-clone_start_0
+ * Resource action: base-bundle-0 stop on node3
+ * Pseudo action: base-bundle-clone_running_0
+ * Resource action: base-bundle-podman-0 stop on node3
+ * Pseudo action: base-bundle_stopped_0
+ * Pseudo action: base-bundle_running_0
+ * Pseudo action: base-bundle_promote_0
+ * Pseudo action: base-bundle-clone_promote_0
+ * Resource action: base promote on base-bundle-1
+ * Pseudo action: base-bundle-clone_promoted_0
+ * Pseudo action: base-bundle_promoted_0
+ * Resource action: base monitor=15000 on base-bundle-1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Stopped
+ * base-bundle-1 (ocf:pacemaker:Stateful): Promoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Unpromoted node1
diff --git a/cts/scheduler/summary/bundle-promoted-location-3.summary b/cts/scheduler/summary/bundle-promoted-location-3.summary
new file mode 100644
index 0000000..4c0a0ab
--- /dev/null
+++ b/cts/scheduler/summary/bundle-promoted-location-3.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Promoted node3
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Unpromoted node1
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Promoted node3
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Unpromoted node1
diff --git a/cts/scheduler/summary/bundle-promoted-location-4.summary b/cts/scheduler/summary/bundle-promoted-location-4.summary
new file mode 100644
index 0000000..4c0a0ab
--- /dev/null
+++ b/cts/scheduler/summary/bundle-promoted-location-4.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Promoted node3
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Unpromoted node1
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Promoted node3
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Unpromoted node1
diff --git a/cts/scheduler/summary/bundle-promoted-location-5.summary b/cts/scheduler/summary/bundle-promoted-location-5.summary
new file mode 100644
index 0000000..4c0a0ab
--- /dev/null
+++ b/cts/scheduler/summary/bundle-promoted-location-5.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Promoted node3
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Unpromoted node1
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Promoted node3
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Unpromoted node1
diff --git a/cts/scheduler/summary/bundle-promoted-location-6.summary b/cts/scheduler/summary/bundle-promoted-location-6.summary
new file mode 100644
index 0000000..5e1cce2
--- /dev/null
+++ b/cts/scheduler/summary/bundle-promoted-location-6.summary
@@ -0,0 +1,40 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Promoted node3
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Unpromoted node1
+
+Transition Summary:
+ * Stop base-bundle-podman-1 ( node2 ) due to node availability
+ * Stop base-bundle-1 ( node2 ) due to unrunnable base-bundle-podman-1 start
+ * Stop base:1 ( Unpromoted base-bundle-1 ) due to unrunnable base-bundle-podman-1 start
+
+Executing Cluster Transition:
+ * Pseudo action: base-bundle_stop_0
+ * Pseudo action: base-bundle-clone_stop_0
+ * Resource action: base stop on base-bundle-1
+ * Pseudo action: base-bundle-clone_stopped_0
+ * Pseudo action: base-bundle-clone_start_0
+ * Resource action: base-bundle-1 stop on node2
+ * Pseudo action: base-bundle-clone_running_0
+ * Resource action: base-bundle-podman-1 stop on node2
+ * Pseudo action: base-bundle_stopped_0
+ * Pseudo action: base-bundle_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Promoted node3
+ * base-bundle-1 (ocf:pacemaker:Stateful): Stopped
+ * base-bundle-2 (ocf:pacemaker:Stateful): Unpromoted node1
diff --git a/cts/scheduler/summary/cancel-behind-moving-remote.summary b/cts/scheduler/summary/cancel-behind-moving-remote.summary
index 7726876..945f3c8 100644
--- a/cts/scheduler/summary/cancel-behind-moving-remote.summary
+++ b/cts/scheduler/summary/cancel-behind-moving-remote.summary
@@ -58,22 +58,18 @@ Current cluster status:
Transition Summary:
* Start rabbitmq-bundle-1 ( controller-0 ) due to unrunnable rabbitmq-bundle-podman-1 start (blocked)
* Start rabbitmq:1 ( rabbitmq-bundle-1 ) due to unrunnable rabbitmq-bundle-podman-1 start (blocked)
- * Start ovn-dbs-bundle-podman-0 ( controller-2 )
- * Start ovn-dbs-bundle-0 ( controller-2 )
+ * Start ovn-dbs-bundle-podman-0 ( controller-0 )
+ * Start ovn-dbs-bundle-0 ( controller-0 )
* Start ovndb_servers:0 ( ovn-dbs-bundle-0 )
- * Move ovn-dbs-bundle-podman-1 ( controller-2 -> controller-0 )
- * Move ovn-dbs-bundle-1 ( controller-2 -> controller-0 )
- * Restart ovndb_servers:1 ( Unpromoted -> Promoted ovn-dbs-bundle-1 ) due to required ovn-dbs-bundle-podman-1 start
- * Start ip-172.17.1.87 ( controller-0 )
+ * Promote ovndb_servers:2 ( Unpromoted -> Promoted ovn-dbs-bundle-2 )
+ * Start ip-172.17.1.87 ( controller-1 )
* Move stonith-fence_ipmilan-52540040bb56 ( messaging-2 -> database-0 )
* Move stonith-fence_ipmilan-525400e1534e ( database-1 -> messaging-2 )
Executing Cluster Transition:
* Pseudo action: rabbitmq-bundle-clone_pre_notify_start_0
- * Resource action: ovndb_servers cancel=30000 on ovn-dbs-bundle-1
- * Pseudo action: ovn-dbs-bundle-master_pre_notify_stop_0
- * Cluster action: clear_failcount for ovn-dbs-bundle-0 on controller-0
- * Cluster action: clear_failcount for ovn-dbs-bundle-1 on controller-2
+ * Resource action: ovndb_servers cancel=30000 on ovn-dbs-bundle-2
+ * Pseudo action: ovn-dbs-bundle-master_pre_notify_start_0
* Cluster action: clear_failcount for stonith-fence_compute-fence-nova on messaging-0
* Cluster action: clear_failcount for nova-evacuate on messaging-0
* Cluster action: clear_failcount for stonith-fence_ipmilan-525400aa1373 on database-0
@@ -87,71 +83,53 @@ Executing Cluster Transition:
* Cluster action: clear_failcount for stonith-fence_ipmilan-52540060dbba on messaging-0
* Cluster action: clear_failcount for stonith-fence_ipmilan-525400e018b6 on database-0
* Cluster action: clear_failcount for stonith-fence_ipmilan-525400c87cdb on database-2
- * Pseudo action: ovn-dbs-bundle_stop_0
+ * Pseudo action: ovn-dbs-bundle_start_0
* Pseudo action: rabbitmq-bundle_start_0
* Pseudo action: rabbitmq-bundle-clone_confirmed-pre_notify_start_0
* Pseudo action: rabbitmq-bundle-clone_start_0
- * Resource action: ovndb_servers notify on ovn-dbs-bundle-1
* Resource action: ovndb_servers notify on ovn-dbs-bundle-2
- * Pseudo action: ovn-dbs-bundle-master_confirmed-pre_notify_stop_0
- * Pseudo action: ovn-dbs-bundle-master_stop_0
+ * Resource action: ovndb_servers notify on ovn-dbs-bundle-1
+ * Pseudo action: ovn-dbs-bundle-master_confirmed-pre_notify_start_0
+ * Pseudo action: ovn-dbs-bundle-master_start_0
+ * Resource action: ovn-dbs-bundle-podman-0 start on controller-0
+ * Resource action: ovn-dbs-bundle-0 start on controller-0
* Resource action: stonith-fence_ipmilan-52540040bb56 start on database-0
* Resource action: stonith-fence_ipmilan-525400e1534e start on messaging-2
* Pseudo action: rabbitmq-bundle-clone_running_0
- * Resource action: ovndb_servers stop on ovn-dbs-bundle-1
- * Pseudo action: ovn-dbs-bundle-master_stopped_0
- * Resource action: ovn-dbs-bundle-1 stop on controller-2
+ * Resource action: ovndb_servers start on ovn-dbs-bundle-0
+ * Pseudo action: ovn-dbs-bundle-master_running_0
+ * Resource action: ovn-dbs-bundle-podman-0 monitor=60000 on controller-0
+ * Resource action: ovn-dbs-bundle-0 monitor=30000 on controller-0
* Resource action: stonith-fence_ipmilan-52540040bb56 monitor=60000 on database-0
* Resource action: stonith-fence_ipmilan-525400e1534e monitor=60000 on messaging-2
* Pseudo action: rabbitmq-bundle-clone_post_notify_running_0
- * Pseudo action: ovn-dbs-bundle-master_post_notify_stopped_0
- * Resource action: ovn-dbs-bundle-podman-1 stop on controller-2
+ * Pseudo action: ovn-dbs-bundle-master_post_notify_running_0
* Pseudo action: rabbitmq-bundle-clone_confirmed-post_notify_running_0
* Resource action: ovndb_servers notify on ovn-dbs-bundle-2
- * Pseudo action: ovn-dbs-bundle-master_confirmed-post_notify_stopped_0
- * Pseudo action: ovn-dbs-bundle-master_pre_notify_start_0
- * Pseudo action: ovn-dbs-bundle_stopped_0
- * Pseudo action: ovn-dbs-bundle_start_0
- * Pseudo action: rabbitmq-bundle_running_0
- * Resource action: ovndb_servers notify on ovn-dbs-bundle-2
- * Pseudo action: ovn-dbs-bundle-master_confirmed-pre_notify_start_0
- * Pseudo action: ovn-dbs-bundle-master_start_0
- * Resource action: ovn-dbs-bundle-podman-0 start on controller-2
- * Resource action: ovn-dbs-bundle-0 start on controller-2
- * Resource action: ovn-dbs-bundle-podman-1 start on controller-0
- * Resource action: ovn-dbs-bundle-1 start on controller-0
- * Resource action: ovndb_servers start on ovn-dbs-bundle-0
- * Resource action: ovndb_servers start on ovn-dbs-bundle-1
- * Pseudo action: ovn-dbs-bundle-master_running_0
- * Resource action: ovn-dbs-bundle-podman-0 monitor=60000 on controller-2
- * Resource action: ovn-dbs-bundle-0 monitor=30000 on controller-2
- * Resource action: ovn-dbs-bundle-podman-1 monitor=60000 on controller-0
- * Resource action: ovn-dbs-bundle-1 monitor=30000 on controller-0
- * Pseudo action: ovn-dbs-bundle-master_post_notify_running_0
* Resource action: ovndb_servers notify on ovn-dbs-bundle-0
* Resource action: ovndb_servers notify on ovn-dbs-bundle-1
- * Resource action: ovndb_servers notify on ovn-dbs-bundle-2
* Pseudo action: ovn-dbs-bundle-master_confirmed-post_notify_running_0
* Pseudo action: ovn-dbs-bundle_running_0
+ * Pseudo action: rabbitmq-bundle_running_0
* Pseudo action: ovn-dbs-bundle-master_pre_notify_promote_0
* Pseudo action: ovn-dbs-bundle_promote_0
+ * Resource action: ovndb_servers notify on ovn-dbs-bundle-2
* Resource action: ovndb_servers notify on ovn-dbs-bundle-0
* Resource action: ovndb_servers notify on ovn-dbs-bundle-1
- * Resource action: ovndb_servers notify on ovn-dbs-bundle-2
* Pseudo action: ovn-dbs-bundle-master_confirmed-pre_notify_promote_0
* Pseudo action: ovn-dbs-bundle-master_promote_0
- * Resource action: ip-172.17.1.87 start on controller-0
- * Resource action: ovndb_servers promote on ovn-dbs-bundle-1
+ * Resource action: ip-172.17.1.87 start on controller-1
+ * Resource action: ovndb_servers promote on ovn-dbs-bundle-2
* Pseudo action: ovn-dbs-bundle-master_promoted_0
- * Resource action: ip-172.17.1.87 monitor=10000 on controller-0
+ * Resource action: ip-172.17.1.87 monitor=10000 on controller-1
* Pseudo action: ovn-dbs-bundle-master_post_notify_promoted_0
+ * Resource action: ovndb_servers notify on ovn-dbs-bundle-2
* Resource action: ovndb_servers notify on ovn-dbs-bundle-0
* Resource action: ovndb_servers notify on ovn-dbs-bundle-1
- * Resource action: ovndb_servers notify on ovn-dbs-bundle-2
* Pseudo action: ovn-dbs-bundle-master_confirmed-post_notify_promoted_0
* Pseudo action: ovn-dbs-bundle_promoted_0
+ * Resource action: ovndb_servers monitor=10000 on ovn-dbs-bundle-2
* Resource action: ovndb_servers monitor=30000 on ovn-dbs-bundle-0
- * Resource action: ovndb_servers monitor=10000 on ovn-dbs-bundle-1
Using the original execution date of: 2021-02-15 01:40:51Z
Revised Cluster Status:
@@ -187,10 +165,10 @@ Revised Cluster Status:
* haproxy-bundle-podman-1 (ocf:heartbeat:podman): Started controller-0
* haproxy-bundle-podman-2 (ocf:heartbeat:podman): Started controller-1
* Container bundle set: ovn-dbs-bundle [cluster.common.tag/rhosp16-openstack-ovn-northd:pcmklatest]:
- * ovn-dbs-bundle-0 (ocf:ovn:ovndb-servers): Unpromoted controller-2
- * ovn-dbs-bundle-1 (ocf:ovn:ovndb-servers): Promoted controller-0
- * ovn-dbs-bundle-2 (ocf:ovn:ovndb-servers): Unpromoted controller-1
- * ip-172.17.1.87 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ovn-dbs-bundle-0 (ocf:ovn:ovndb-servers): Unpromoted controller-0
+ * ovn-dbs-bundle-1 (ocf:ovn:ovndb-servers): Unpromoted controller-2
+ * ovn-dbs-bundle-2 (ocf:ovn:ovndb-servers): Promoted controller-1
+ * ip-172.17.1.87 (ocf:heartbeat:IPaddr2): Started controller-1
* stonith-fence_compute-fence-nova (stonith:fence_compute): Started database-1
* Clone Set: compute-unfence-trigger-clone [compute-unfence-trigger]:
* Started: [ compute-0 compute-1 ]
diff --git a/cts/scheduler/summary/clone-recover-no-shuffle-1.summary b/cts/scheduler/summary/clone-recover-no-shuffle-1.summary
new file mode 100644
index 0000000..0b6866e
--- /dev/null
+++ b/cts/scheduler/summary/clone-recover-no-shuffle-1.summary
@@ -0,0 +1,29 @@
+Using the original execution date of: 2023-06-21 00:59:59Z
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Clone Set: dummy-clone [dummy]:
+ * Started: [ node2 node3 ]
+ * Stopped: [ node1 ]
+
+Transition Summary:
+ * Start dummy:2 ( node1 )
+
+Executing Cluster Transition:
+ * Pseudo action: dummy-clone_start_0
+ * Resource action: dummy start on node1
+ * Pseudo action: dummy-clone_running_0
+ * Resource action: dummy monitor=10000 on node1
+Using the original execution date of: 2023-06-21 00:59:59Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Clone Set: dummy-clone [dummy]:
+ * Started: [ node1 node2 node3 ]
diff --git a/cts/scheduler/summary/clone-recover-no-shuffle-10.summary b/cts/scheduler/summary/clone-recover-no-shuffle-10.summary
new file mode 100644
index 0000000..5b0f9b6
--- /dev/null
+++ b/cts/scheduler/summary/clone-recover-no-shuffle-10.summary
@@ -0,0 +1,29 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Clone Set: dummy-clone [dummy] (promotable):
+ * Promoted: [ node2 ]
+ * Unpromoted: [ node3 ]
+ * Stopped: [ node1 ]
+
+Transition Summary:
+ * Start dummy:2 ( node1 )
+
+Executing Cluster Transition:
+ * Pseudo action: dummy-clone_start_0
+ * Resource action: dummy start on node1
+ * Pseudo action: dummy-clone_running_0
+ * Resource action: dummy monitor=11000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Clone Set: dummy-clone [dummy] (promotable):
+ * Promoted: [ node2 ]
+ * Unpromoted: [ node1 node3 ]
diff --git a/cts/scheduler/summary/clone-recover-no-shuffle-11.summary b/cts/scheduler/summary/clone-recover-no-shuffle-11.summary
new file mode 100644
index 0000000..e0bdb61
--- /dev/null
+++ b/cts/scheduler/summary/clone-recover-no-shuffle-11.summary
@@ -0,0 +1,34 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Clone Set: grp-clone [grp] (promotable):
+ * Promoted: [ node2 ]
+ * Unpromoted: [ node3 ]
+ * Stopped: [ node1 ]
+
+Transition Summary:
+ * Start rsc1:2 ( node1 )
+ * Start rsc2:2 ( node1 )
+
+Executing Cluster Transition:
+ * Pseudo action: grp-clone_start_0
+ * Pseudo action: grp:2_start_0
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 start on node1
+ * Pseudo action: grp:2_running_0
+ * Resource action: rsc1 monitor=11000 on node1
+ * Resource action: rsc2 monitor=11000 on node1
+ * Pseudo action: grp-clone_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Clone Set: grp-clone [grp] (promotable):
+ * Promoted: [ node2 ]
+ * Unpromoted: [ node1 node3 ]
diff --git a/cts/scheduler/summary/clone-recover-no-shuffle-12.summary b/cts/scheduler/summary/clone-recover-no-shuffle-12.summary
new file mode 100644
index 0000000..6e55a0b
--- /dev/null
+++ b/cts/scheduler/summary/clone-recover-no-shuffle-12.summary
@@ -0,0 +1,43 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Unpromoted node3
+ * base-bundle-1 (ocf:pacemaker:Stateful): Promoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Stopped
+
+Transition Summary:
+ * Start base-bundle-podman-2 ( node1 )
+ * Start base-bundle-2 ( node1 )
+ * Start base:2 ( base-bundle-2 )
+
+Executing Cluster Transition:
+ * Pseudo action: base-bundle_start_0
+ * Pseudo action: base-bundle-clone_start_0
+ * Resource action: base-bundle-podman-2 start on node1
+ * Resource action: base-bundle-2 monitor on node3
+ * Resource action: base-bundle-2 monitor on node2
+ * Resource action: base-bundle-2 monitor on node1
+ * Resource action: base-bundle-podman-2 monitor=60000 on node1
+ * Resource action: base-bundle-2 start on node1
+ * Resource action: base start on base-bundle-2
+ * Pseudo action: base-bundle-clone_running_0
+ * Resource action: base-bundle-2 monitor=30000 on node1
+ * Pseudo action: base-bundle_running_0
+ * Resource action: base monitor=16000 on base-bundle-2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Unpromoted node3
+ * base-bundle-1 (ocf:pacemaker:Stateful): Promoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Unpromoted node1
diff --git a/cts/scheduler/summary/clone-recover-no-shuffle-2.summary b/cts/scheduler/summary/clone-recover-no-shuffle-2.summary
new file mode 100644
index 0000000..8b18120
--- /dev/null
+++ b/cts/scheduler/summary/clone-recover-no-shuffle-2.summary
@@ -0,0 +1,32 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Clone Set: grp-clone [grp]:
+ * Started: [ node2 node3 ]
+ * Stopped: [ node1 ]
+
+Transition Summary:
+ * Start rsc1:2 ( node1 )
+ * Start rsc2:2 ( node1 )
+
+Executing Cluster Transition:
+ * Pseudo action: grp-clone_start_0
+ * Pseudo action: grp:2_start_0
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 start on node1
+ * Pseudo action: grp:2_running_0
+ * Resource action: rsc1 monitor=10000 on node1
+ * Resource action: rsc2 monitor=10000 on node1
+ * Pseudo action: grp-clone_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Clone Set: grp-clone [grp]:
+ * Started: [ node1 node2 node3 ]
diff --git a/cts/scheduler/summary/clone-recover-no-shuffle-3.summary b/cts/scheduler/summary/clone-recover-no-shuffle-3.summary
new file mode 100644
index 0000000..5702177
--- /dev/null
+++ b/cts/scheduler/summary/clone-recover-no-shuffle-3.summary
@@ -0,0 +1,42 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Started node3
+ * base-bundle-1 (ocf:pacemaker:Stateful): Started node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Stopped
+
+Transition Summary:
+ * Start base-bundle-podman-2 ( node1 )
+ * Start base-bundle-2 ( node1 )
+ * Start base:2 ( base-bundle-2 )
+
+Executing Cluster Transition:
+ * Pseudo action: base-bundle_start_0
+ * Pseudo action: base-bundle-clone_start_0
+ * Resource action: base-bundle-podman-2 start on node1
+ * Resource action: base-bundle-2 monitor on node3
+ * Resource action: base-bundle-2 monitor on node2
+ * Resource action: base-bundle-2 monitor on node1
+ * Resource action: base-bundle-podman-2 monitor=60000 on node1
+ * Resource action: base-bundle-2 start on node1
+ * Resource action: base start on base-bundle-2
+ * Pseudo action: base-bundle-clone_running_0
+ * Resource action: base-bundle-2 monitor=30000 on node1
+ * Pseudo action: base-bundle_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Started node3
+ * base-bundle-1 (ocf:pacemaker:Stateful): Started node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Started node1
diff --git a/cts/scheduler/summary/clone-recover-no-shuffle-4.summary b/cts/scheduler/summary/clone-recover-no-shuffle-4.summary
new file mode 100644
index 0000000..0b6866e
--- /dev/null
+++ b/cts/scheduler/summary/clone-recover-no-shuffle-4.summary
@@ -0,0 +1,29 @@
+Using the original execution date of: 2023-06-21 00:59:59Z
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Clone Set: dummy-clone [dummy]:
+ * Started: [ node2 node3 ]
+ * Stopped: [ node1 ]
+
+Transition Summary:
+ * Start dummy:2 ( node1 )
+
+Executing Cluster Transition:
+ * Pseudo action: dummy-clone_start_0
+ * Resource action: dummy start on node1
+ * Pseudo action: dummy-clone_running_0
+ * Resource action: dummy monitor=10000 on node1
+Using the original execution date of: 2023-06-21 00:59:59Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Clone Set: dummy-clone [dummy]:
+ * Started: [ node1 node2 node3 ]
diff --git a/cts/scheduler/summary/clone-recover-no-shuffle-5.summary b/cts/scheduler/summary/clone-recover-no-shuffle-5.summary
new file mode 100644
index 0000000..8b18120
--- /dev/null
+++ b/cts/scheduler/summary/clone-recover-no-shuffle-5.summary
@@ -0,0 +1,32 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Clone Set: grp-clone [grp]:
+ * Started: [ node2 node3 ]
+ * Stopped: [ node1 ]
+
+Transition Summary:
+ * Start rsc1:2 ( node1 )
+ * Start rsc2:2 ( node1 )
+
+Executing Cluster Transition:
+ * Pseudo action: grp-clone_start_0
+ * Pseudo action: grp:2_start_0
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 start on node1
+ * Pseudo action: grp:2_running_0
+ * Resource action: rsc1 monitor=10000 on node1
+ * Resource action: rsc2 monitor=10000 on node1
+ * Pseudo action: grp-clone_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Clone Set: grp-clone [grp]:
+ * Started: [ node1 node2 node3 ]
diff --git a/cts/scheduler/summary/clone-recover-no-shuffle-6.summary b/cts/scheduler/summary/clone-recover-no-shuffle-6.summary
new file mode 100644
index 0000000..5702177
--- /dev/null
+++ b/cts/scheduler/summary/clone-recover-no-shuffle-6.summary
@@ -0,0 +1,42 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Started node3
+ * base-bundle-1 (ocf:pacemaker:Stateful): Started node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Stopped
+
+Transition Summary:
+ * Start base-bundle-podman-2 ( node1 )
+ * Start base-bundle-2 ( node1 )
+ * Start base:2 ( base-bundle-2 )
+
+Executing Cluster Transition:
+ * Pseudo action: base-bundle_start_0
+ * Pseudo action: base-bundle-clone_start_0
+ * Resource action: base-bundle-podman-2 start on node1
+ * Resource action: base-bundle-2 monitor on node3
+ * Resource action: base-bundle-2 monitor on node2
+ * Resource action: base-bundle-2 monitor on node1
+ * Resource action: base-bundle-podman-2 monitor=60000 on node1
+ * Resource action: base-bundle-2 start on node1
+ * Resource action: base start on base-bundle-2
+ * Pseudo action: base-bundle-clone_running_0
+ * Resource action: base-bundle-2 monitor=30000 on node1
+ * Pseudo action: base-bundle_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Started node3
+ * base-bundle-1 (ocf:pacemaker:Stateful): Started node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Started node1
diff --git a/cts/scheduler/summary/clone-recover-no-shuffle-7.summary b/cts/scheduler/summary/clone-recover-no-shuffle-7.summary
new file mode 100644
index 0000000..7744570
--- /dev/null
+++ b/cts/scheduler/summary/clone-recover-no-shuffle-7.summary
@@ -0,0 +1,38 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Clone Set: dummy-clone [dummy] (promotable):
+ * Promoted: [ node2 ]
+ * Unpromoted: [ node3 ]
+ * Stopped: [ node1 ]
+
+Transition Summary:
+ * Demote dummy:1 ( Promoted -> Unpromoted node2 )
+ * Promote dummy:2 ( Stopped -> Promoted node1 )
+
+Executing Cluster Transition:
+ * Resource action: dummy cancel=10000 on node2
+ * Pseudo action: dummy-clone_demote_0
+ * Resource action: dummy demote on node2
+ * Pseudo action: dummy-clone_demoted_0
+ * Pseudo action: dummy-clone_start_0
+ * Resource action: dummy monitor=11000 on node2
+ * Resource action: dummy start on node1
+ * Pseudo action: dummy-clone_running_0
+ * Pseudo action: dummy-clone_promote_0
+ * Resource action: dummy promote on node1
+ * Pseudo action: dummy-clone_promoted_0
+ * Resource action: dummy monitor=10000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Clone Set: dummy-clone [dummy] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 node3 ]
diff --git a/cts/scheduler/summary/clone-recover-no-shuffle-8.summary b/cts/scheduler/summary/clone-recover-no-shuffle-8.summary
new file mode 100644
index 0000000..878f248
--- /dev/null
+++ b/cts/scheduler/summary/clone-recover-no-shuffle-8.summary
@@ -0,0 +1,52 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Clone Set: grp-clone [grp] (promotable):
+ * Promoted: [ node2 ]
+ * Unpromoted: [ node3 ]
+ * Stopped: [ node1 ]
+
+Transition Summary:
+ * Demote rsc1:1 ( Promoted -> Unpromoted node2 )
+ * Demote rsc2:1 ( Promoted -> Unpromoted node2 )
+ * Promote rsc1:2 ( Stopped -> Promoted node1 )
+ * Promote rsc2:2 ( Stopped -> Promoted node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 cancel=10000 on node2
+ * Resource action: rsc2 cancel=10000 on node2
+ * Pseudo action: grp-clone_demote_0
+ * Pseudo action: grp:1_demote_0
+ * Resource action: rsc2 demote on node2
+ * Resource action: rsc1 demote on node2
+ * Resource action: rsc2 monitor=11000 on node2
+ * Pseudo action: grp:1_demoted_0
+ * Resource action: rsc1 monitor=11000 on node2
+ * Pseudo action: grp-clone_demoted_0
+ * Pseudo action: grp-clone_start_0
+ * Pseudo action: grp:2_start_0
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 start on node1
+ * Pseudo action: grp:2_running_0
+ * Pseudo action: grp-clone_running_0
+ * Pseudo action: grp-clone_promote_0
+ * Pseudo action: grp:2_promote_0
+ * Resource action: rsc1 promote on node1
+ * Resource action: rsc2 promote on node1
+ * Pseudo action: grp:2_promoted_0
+ * Resource action: rsc1 monitor=10000 on node1
+ * Resource action: rsc2 monitor=10000 on node1
+ * Pseudo action: grp-clone_promoted_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Clone Set: grp-clone [grp] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 node3 ]
diff --git a/cts/scheduler/summary/clone-recover-no-shuffle-9.summary b/cts/scheduler/summary/clone-recover-no-shuffle-9.summary
new file mode 100644
index 0000000..7ede39a
--- /dev/null
+++ b/cts/scheduler/summary/clone-recover-no-shuffle-9.summary
@@ -0,0 +1,56 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Unpromoted node3
+ * base-bundle-1 (ocf:pacemaker:Stateful): Promoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Stopped
+
+Transition Summary:
+ * Demote base:1 ( Promoted -> Unpromoted base-bundle-1 )
+ * Start base-bundle-podman-2 ( node1 )
+ * Start base-bundle-2 ( node1 )
+ * Promote base:2 ( Stopped -> Promoted base-bundle-2 )
+
+Executing Cluster Transition:
+ * Resource action: base cancel=15000 on base-bundle-1
+ * Pseudo action: base-bundle_demote_0
+ * Pseudo action: base-bundle-clone_demote_0
+ * Resource action: base demote on base-bundle-1
+ * Pseudo action: base-bundle-clone_demoted_0
+ * Pseudo action: base-bundle_demoted_0
+ * Pseudo action: base-bundle_start_0
+ * Resource action: base monitor=16000 on base-bundle-1
+ * Pseudo action: base-bundle-clone_start_0
+ * Resource action: base-bundle-podman-2 start on node1
+ * Resource action: base-bundle-2 monitor on node3
+ * Resource action: base-bundle-2 monitor on node2
+ * Resource action: base-bundle-2 monitor on node1
+ * Resource action: base-bundle-podman-2 monitor=60000 on node1
+ * Resource action: base-bundle-2 start on node1
+ * Resource action: base start on base-bundle-2
+ * Pseudo action: base-bundle-clone_running_0
+ * Resource action: base-bundle-2 monitor=30000 on node1
+ * Pseudo action: base-bundle_running_0
+ * Pseudo action: base-bundle_promote_0
+ * Pseudo action: base-bundle-clone_promote_0
+ * Resource action: base promote on base-bundle-2
+ * Pseudo action: base-bundle-clone_promoted_0
+ * Pseudo action: base-bundle_promoted_0
+ * Resource action: base monitor=15000 on base-bundle-2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+ * GuestOnline: [ base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * Container bundle set: base-bundle [localhost/pcmktest]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Unpromoted node3
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node2
+ * base-bundle-2 (ocf:pacemaker:Stateful): Promoted node1
diff --git a/cts/scheduler/summary/coloc-with-inner-group-member.summary b/cts/scheduler/summary/coloc-with-inner-group-member.summary
new file mode 100644
index 0000000..6659721
--- /dev/null
+++ b/cts/scheduler/summary/coloc-with-inner-group-member.summary
@@ -0,0 +1,45 @@
+Using the original execution date of: 2023-06-20 20:45:06Z
+Current cluster status:
+ * Node List:
+ * Online: [ rhel8-1 rhel8-2 rhel8-3 rhel8-4 rhel8-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel8-1
+ * vip-dep (ocf:pacemaker:Dummy): Started rhel8-3
+ * Resource Group: grp:
+ * foo (ocf:pacemaker:Dummy): Started rhel8-4
+ * bar (ocf:pacemaker:Dummy): Started rhel8-4
+ * vip (ocf:pacemaker:Dummy): Started rhel8-3
+
+Transition Summary:
+ * Move foo ( rhel8-4 -> rhel8-3 )
+ * Move bar ( rhel8-4 -> rhel8-3 )
+ * Restart vip ( rhel8-3 ) due to required bar start
+
+Executing Cluster Transition:
+ * Pseudo action: grp_stop_0
+ * Resource action: vip stop on rhel8-3
+ * Resource action: bar stop on rhel8-4
+ * Resource action: foo stop on rhel8-4
+ * Pseudo action: grp_stopped_0
+ * Pseudo action: grp_start_0
+ * Resource action: foo start on rhel8-3
+ * Resource action: bar start on rhel8-3
+ * Resource action: vip start on rhel8-3
+ * Resource action: vip monitor=10000 on rhel8-3
+ * Pseudo action: grp_running_0
+ * Resource action: foo monitor=10000 on rhel8-3
+ * Resource action: bar monitor=10000 on rhel8-3
+Using the original execution date of: 2023-06-20 20:45:06Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel8-1 rhel8-2 rhel8-3 rhel8-4 rhel8-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel8-1
+ * vip-dep (ocf:pacemaker:Dummy): Started rhel8-3
+ * Resource Group: grp:
+ * foo (ocf:pacemaker:Dummy): Started rhel8-3
+ * bar (ocf:pacemaker:Dummy): Started rhel8-3
+ * vip (ocf:pacemaker:Dummy): Started rhel8-3
diff --git a/cts/scheduler/summary/group-anticolocation-2.summary b/cts/scheduler/summary/group-anticolocation-2.summary
new file mode 100644
index 0000000..3ecb056
--- /dev/null
+++ b/cts/scheduler/summary/group-anticolocation-2.summary
@@ -0,0 +1,41 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * Resource Group: group1:
+ * member1a (ocf:pacemaker:Dummy): Started node2
+ * member1b (ocf:pacemaker:Dummy): Started node2
+ * Resource Group: group2:
+ * member2a (ocf:pacemaker:Dummy): Started node1
+ * member2b (ocf:pacemaker:Dummy): FAILED node1
+
+Transition Summary:
+ * Move member2a ( node1 -> node2 )
+ * Recover member2b ( node1 -> node2 )
+
+Executing Cluster Transition:
+ * Pseudo action: group2_stop_0
+ * Resource action: member2b stop on node1
+ * Resource action: member2a stop on node1
+ * Pseudo action: group2_stopped_0
+ * Pseudo action: group2_start_0
+ * Resource action: member2a start on node2
+ * Resource action: member2b start on node2
+ * Pseudo action: group2_running_0
+ * Resource action: member2a monitor=10000 on node2
+ * Resource action: member2b monitor=10000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * Resource Group: group1:
+ * member1a (ocf:pacemaker:Dummy): Started node2
+ * member1b (ocf:pacemaker:Dummy): Started node2
+ * Resource Group: group2:
+ * member2a (ocf:pacemaker:Dummy): Started node2
+ * member2b (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/group-anticolocation-3.summary b/cts/scheduler/summary/group-anticolocation-3.summary
new file mode 100644
index 0000000..c9d4321
--- /dev/null
+++ b/cts/scheduler/summary/group-anticolocation-3.summary
@@ -0,0 +1,33 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * Resource Group: group1:
+ * member1a (ocf:pacemaker:Dummy): Started node2
+ * member1b (ocf:pacemaker:Dummy): Started node2
+ * Resource Group: group2:
+ * member2a (ocf:pacemaker:Dummy): Started node1
+ * member2b (ocf:pacemaker:Dummy): FAILED node1
+
+Transition Summary:
+ * Stop member2b ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: group2_stop_0
+ * Resource action: member2b stop on node1
+ * Pseudo action: group2_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * Resource Group: group1:
+ * member1a (ocf:pacemaker:Dummy): Started node2
+ * member1b (ocf:pacemaker:Dummy): Started node2
+ * Resource Group: group2:
+ * member2a (ocf:pacemaker:Dummy): Started node1
+ * member2b (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/group-anticolocation-4.summary b/cts/scheduler/summary/group-anticolocation-4.summary
new file mode 100644
index 0000000..3ecb056
--- /dev/null
+++ b/cts/scheduler/summary/group-anticolocation-4.summary
@@ -0,0 +1,41 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * Resource Group: group1:
+ * member1a (ocf:pacemaker:Dummy): Started node2
+ * member1b (ocf:pacemaker:Dummy): Started node2
+ * Resource Group: group2:
+ * member2a (ocf:pacemaker:Dummy): Started node1
+ * member2b (ocf:pacemaker:Dummy): FAILED node1
+
+Transition Summary:
+ * Move member2a ( node1 -> node2 )
+ * Recover member2b ( node1 -> node2 )
+
+Executing Cluster Transition:
+ * Pseudo action: group2_stop_0
+ * Resource action: member2b stop on node1
+ * Resource action: member2a stop on node1
+ * Pseudo action: group2_stopped_0
+ * Pseudo action: group2_start_0
+ * Resource action: member2a start on node2
+ * Resource action: member2b start on node2
+ * Pseudo action: group2_running_0
+ * Resource action: member2a monitor=10000 on node2
+ * Resource action: member2b monitor=10000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * Resource Group: group1:
+ * member1a (ocf:pacemaker:Dummy): Started node2
+ * member1b (ocf:pacemaker:Dummy): Started node2
+ * Resource Group: group2:
+ * member2a (ocf:pacemaker:Dummy): Started node2
+ * member2b (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/group-anticolocation-5.summary b/cts/scheduler/summary/group-anticolocation-5.summary
new file mode 100644
index 0000000..6f83538
--- /dev/null
+++ b/cts/scheduler/summary/group-anticolocation-5.summary
@@ -0,0 +1,41 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * Resource Group: group1:
+ * member1a (ocf:pacemaker:Dummy): Started node2
+ * member1b (ocf:pacemaker:Dummy): Started node2
+ * Resource Group: group2:
+ * member2a (ocf:pacemaker:Dummy): Started node1
+ * member2b (ocf:pacemaker:Dummy): FAILED node1
+
+Transition Summary:
+ * Move member2a ( node1 -> node3 )
+ * Recover member2b ( node1 -> node3 )
+
+Executing Cluster Transition:
+ * Pseudo action: group2_stop_0
+ * Resource action: member2b stop on node1
+ * Resource action: member2a stop on node1
+ * Pseudo action: group2_stopped_0
+ * Pseudo action: group2_start_0
+ * Resource action: member2a start on node3
+ * Resource action: member2b start on node3
+ * Pseudo action: group2_running_0
+ * Resource action: member2a monitor=10000 on node3
+ * Resource action: member2b monitor=10000 on node3
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * Resource Group: group1:
+ * member1a (ocf:pacemaker:Dummy): Started node2
+ * member1b (ocf:pacemaker:Dummy): Started node2
+ * Resource Group: group2:
+ * member2a (ocf:pacemaker:Dummy): Started node3
+ * member2b (ocf:pacemaker:Dummy): Started node3
diff --git a/cts/scheduler/summary/group-anticolocation.summary b/cts/scheduler/summary/group-anticolocation.summary
index 3ecb056..93d2e73 100644
--- a/cts/scheduler/summary/group-anticolocation.summary
+++ b/cts/scheduler/summary/group-anticolocation.summary
@@ -12,17 +12,29 @@ Current cluster status:
* member2b (ocf:pacemaker:Dummy): FAILED node1
 
Transition Summary:
+ * Move member1a ( node2 -> node1 )
+ * Move member1b ( node2 -> node1 )
* Move member2a ( node1 -> node2 )
* Recover member2b ( node1 -> node2 )
 
Executing Cluster Transition:
+ * Pseudo action: group1_stop_0
+ * Resource action: member1b stop on node2
* Pseudo action: group2_stop_0
* Resource action: member2b stop on node1
+ * Resource action: member1a stop on node2
* Resource action: member2a stop on node1
+ * Pseudo action: group1_stopped_0
+ * Pseudo action: group1_start_0
+ * Resource action: member1a start on node1
+ * Resource action: member1b start on node1
* Pseudo action: group2_stopped_0
* Pseudo action: group2_start_0
* Resource action: member2a start on node2
* Resource action: member2b start on node2
+ * Pseudo action: group1_running_0
+ * Resource action: member1a monitor=10000 on node1
+ * Resource action: member1b monitor=10000 on node1
* Pseudo action: group2_running_0
* Resource action: member2a monitor=10000 on node2
* Resource action: member2b monitor=10000 on node2
@@ -34,8 +46,8 @@ Revised Cluster Status:
* Full List of Resources:
* Fencing (stonith:fence_xvm): Started node1
* Resource Group: group1:
- * member1a (ocf:pacemaker:Dummy): Started node2
- * member1b (ocf:pacemaker:Dummy): Started node2
+ * member1a (ocf:pacemaker:Dummy): Started node1
+ * member1b (ocf:pacemaker:Dummy): Started node1
* Resource Group: group2:
* member2a (ocf:pacemaker:Dummy): Started node2
* member2b (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/migrate-fencing.summary b/cts/scheduler/summary/migrate-fencing.summary
index ebc65bd..500c78a 100644
--- a/cts/scheduler/summary/migrate-fencing.summary
+++ b/cts/scheduler/summary/migrate-fencing.summary
@@ -23,7 +23,7 @@ Current cluster status:
* Unpromoted: [ pcmk-1 pcmk-2 pcmk-3 ]
 
Transition Summary:
- * Fence (reboot) pcmk-4 'termination was requested'
+ * Fence (reboot) pcmk-4 'fencing was requested'
* Stop FencingChild:0 ( pcmk-4 ) due to node availability
* Move r192.168.101.181 ( pcmk-4 -> pcmk-1 )
* Move r192.168.101.182 ( pcmk-4 -> pcmk-1 )
diff --git a/cts/scheduler/summary/no-promote-on-unrunnable-guest.summary b/cts/scheduler/summary/no-promote-on-unrunnable-guest.summary
index c06f8f0..ab8f8ff 100644
--- a/cts/scheduler/summary/no-promote-on-unrunnable-guest.summary
+++ b/cts/scheduler/summary/no-promote-on-unrunnable-guest.summary
@@ -37,9 +37,9 @@ Executing Cluster Transition:
* Resource action: ovndb_servers cancel=30000 on ovn-dbs-bundle-1
* Pseudo action: ovn-dbs-bundle-master_pre_notify_stop_0
* Pseudo action: ovn-dbs-bundle_stop_0
- * Resource action: ovndb_servers notify on ovn-dbs-bundle-0
* Resource action: ovndb_servers notify on ovn-dbs-bundle-1
* Resource action: ovndb_servers notify on ovn-dbs-bundle-2
+ * Resource action: ovndb_servers notify on ovn-dbs-bundle-0
* Pseudo action: ovn-dbs-bundle-master_confirmed-pre_notify_stop_0
* Pseudo action: ovn-dbs-bundle-master_stop_0
* Resource action: ovndb_servers stop on ovn-dbs-bundle-0
diff --git a/cts/scheduler/summary/node-pending-timeout.summary b/cts/scheduler/summary/node-pending-timeout.summary
new file mode 100644
index 0000000..0fef982
--- /dev/null
+++ b/cts/scheduler/summary/node-pending-timeout.summary
@@ -0,0 +1,26 @@
+Using the original execution date of: 2023-02-21 12:19:57Z
+Current cluster status:
+ * Node List:
+ * Node node-2: UNCLEAN (online)
+ * Online: [ node-1 ]
+
+ * Full List of Resources:
+ * st-sbd (stonith:external/sbd): Stopped
+
+Transition Summary:
+ * Fence (reboot) node-2 'peer pending timed out on joining the process group'
+ * Start st-sbd ( node-1 )
+
+Executing Cluster Transition:
+ * Resource action: st-sbd monitor on node-1
+ * Fencing node-2 (reboot)
+ * Resource action: st-sbd start on node-1
+Using the original execution date of: 2023-02-21 12:19:57Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node-1 ]
+ * OFFLINE: [ node-2 ]
+
+ * Full List of Resources:
+ * st-sbd (stonith:external/sbd): Started node-1
diff --git a/cts/scheduler/summary/pending-node-no-uname.summary b/cts/scheduler/summary/pending-node-no-uname.summary
new file mode 100644
index 0000000..5f04fc6
--- /dev/null
+++ b/cts/scheduler/summary/pending-node-no-uname.summary
@@ -0,0 +1,23 @@
+Using the original execution date of: 2023-02-21 12:19:57Z
+Current cluster status:
+ * Node List:
+ * Node node-2: pending
+ * Online: [ node-1 ]
+
+ * Full List of Resources:
+ * st-sbd (stonith:external/sbd): Stopped
+
+Transition Summary:
+ * Start st-sbd ( node-1 ) blocked
+
+Executing Cluster Transition:
+ * Resource action: st-sbd monitor on node-1
+Using the original execution date of: 2023-02-21 12:19:57Z
+
+Revised Cluster Status:
+ * Node List:
+ * Node node-2: pending
+ * Online: [ node-1 ]
+
+ * Full List of Resources:
+ * st-sbd (stonith:external/sbd): Stopped
diff --git a/cts/scheduler/summary/promoted-ordering.summary b/cts/scheduler/summary/promoted-ordering.summary
index 3222e18..0ef1bd8 100644
--- a/cts/scheduler/summary/promoted-ordering.summary
+++ b/cts/scheduler/summary/promoted-ordering.summary
@@ -9,8 +9,8 @@ Current cluster status:
* extip_2 (ocf:heartbeat:IPaddr2): Stopped
* Resource Group: group_main:
* intip_0_main (ocf:heartbeat:IPaddr2): Stopped
- * intip_1_master (ocf:heartbeat:IPaddr2): Stopped
- * intip_2_slave (ocf:heartbeat:IPaddr2): Stopped
+ * intip_1_active (ocf:heartbeat:IPaddr2): Stopped
+ * intip_2_passive (ocf:heartbeat:IPaddr2): Stopped
* Clone Set: ms_drbd_www [drbd_www] (promotable):
* Stopped: [ webcluster01 webcluster02 ]
* Clone Set: clone_ocfs2_www [ocfs2_www] (unique):
@@ -25,8 +25,8 @@ Current cluster status:
 
Transition Summary:
* Start extip_1 ( webcluster01 )
* Start extip_2 ( webcluster01 )
- * Start intip_1_master ( webcluster01 )
- * Start intip_2_slave ( webcluster01 )
+ * Start intip_1_active ( webcluster01 )
+ * Start intip_2_passive ( webcluster01 )
* Start drbd_www:0 ( webcluster01 )
* Start drbd_mysql:0 ( webcluster01 )
@@ -35,8 +35,8 @@ Executing Cluster Transition:
* Resource action: extip_1 monitor on webcluster01
* Resource action: extip_2 monitor on webcluster01
* Resource action: intip_0_main monitor on webcluster01
- * Resource action: intip_1_master monitor on webcluster01
- * Resource action: intip_2_slave monitor on webcluster01
+ * Resource action: intip_1_active monitor on webcluster01
+ * Resource action: intip_2_passive monitor on webcluster01
* Resource action: drbd_www:0 monitor on webcluster01
* Pseudo action: ms_drbd_www_pre_notify_start_0
* Resource action: ocfs2_www:0 monitor on webcluster01
@@ -48,16 +48,16 @@ Executing Cluster Transition:
* Resource action: fs_mysql monitor on webcluster01
* Resource action: extip_1 start on webcluster01
* Resource action: extip_2 start on webcluster01
- * Resource action: intip_1_master start on webcluster01
- * Resource action: intip_2_slave start on webcluster01
+ * Resource action: intip_1_active start on webcluster01
+ * Resource action: intip_2_passive start on webcluster01
* Pseudo action: ms_drbd_www_confirmed-pre_notify_start_0
* Pseudo action: ms_drbd_www_start_0
* Pseudo action: ms_drbd_mysql_confirmed-pre_notify_start_0
* Pseudo action: ms_drbd_mysql_start_0
* Resource action: extip_1 monitor=30000 on webcluster01
* Resource action: extip_2 monitor=30000 on webcluster01
- * Resource action: intip_1_master monitor=30000 on webcluster01
- * Resource action: intip_2_slave monitor=30000 on webcluster01
+ * Resource action: intip_1_active monitor=30000 on webcluster01
+ * Resource action: intip_2_passive monitor=30000 on webcluster01
* Resource action: drbd_www:0 start on webcluster01
* Pseudo action: ms_drbd_www_running_0
* Resource action: drbd_mysql:0 start on webcluster01
@@ -80,8 +80,8 @@ Revised Cluster Status:
* extip_2 (ocf:heartbeat:IPaddr2): Started webcluster01
* Resource Group: group_main:
* intip_0_main (ocf:heartbeat:IPaddr2): Stopped
- * intip_1_master (ocf:heartbeat:IPaddr2): Started webcluster01
- * intip_2_slave (ocf:heartbeat:IPaddr2): Started webcluster01
+ * intip_1_active (ocf:heartbeat:IPaddr2): Started webcluster01
+ * intip_2_passive (ocf:heartbeat:IPaddr2): Started webcluster01
* Clone Set: ms_drbd_www [drbd_www] (promotable):
* Unpromoted: [ webcluster01 ]
* Stopped: [ webcluster02 ]
diff --git a/cts/scheduler/summary/promoted-probed-score.summary b/cts/scheduler/summary/promoted-probed-score.summary
index 3c9326c..52487d4 100644
--- a/cts/scheduler/summary/promoted-probed-score.summary
+++ b/cts/scheduler/summary/promoted-probed-score.summary
@@ -39,8 +39,8 @@ Current cluster status:
* Proxy (ocf:heartbeat:VirtualDomain): Stopped
 
Transition Summary:
- * Promote AdminDrbd:0 ( Stopped -> Promoted hypatia-corosync.nevis.columbia.edu )
- * Promote AdminDrbd:1 ( Stopped -> Promoted orestes-corosync.nevis.columbia.edu )
+ * Promote AdminDrbd:0 ( Stopped -> Promoted orestes-corosync.nevis.columbia.edu )
+ * Promote AdminDrbd:1 ( Stopped -> Promoted hypatia-corosync.nevis.columbia.edu )
* Start CronAmbientTemperature ( hypatia-corosync.nevis.columbia.edu )
* Start StonithHypatia ( orestes-corosync.nevis.columbia.edu )
* Start StonithOrestes ( hypatia-corosync.nevis.columbia.edu )
@@ -83,18 +83,18 @@ Transition Summary:
* Start ExportUsrNevis:1 ( orestes-corosync.nevis.columbia.edu )
* Start ExportUsrNevisOffsite:1 ( orestes-corosync.nevis.columbia.edu )
* Start ExportWWW:1 ( orestes-corosync.nevis.columbia.edu )
- * Start AdminLvm:0 ( hypatia-corosync.nevis.columbia.edu )
- * Start FSUsrNevis:0 ( hypatia-corosync.nevis.columbia.edu )
- * Start FSVarNevis:0 ( hypatia-corosync.nevis.columbia.edu )
- * Start FSVirtualMachines:0 ( hypatia-corosync.nevis.columbia.edu )
- * Start FSMail:0 ( hypatia-corosync.nevis.columbia.edu )
- * Start FSWork:0 ( hypatia-corosync.nevis.columbia.edu )
- * Start AdminLvm:1 ( orestes-corosync.nevis.columbia.edu )
- * Start FSUsrNevis:1 ( orestes-corosync.nevis.columbia.edu )
- * Start FSVarNevis:1 ( orestes-corosync.nevis.columbia.edu )
- * Start FSVirtualMachines:1 ( orestes-corosync.nevis.columbia.edu )
- * Start FSMail:1 ( orestes-corosync.nevis.columbia.edu )
- * Start FSWork:1 ( orestes-corosync.nevis.columbia.edu )
+ * Start AdminLvm:0 ( orestes-corosync.nevis.columbia.edu )
+ * Start FSUsrNevis:0 ( orestes-corosync.nevis.columbia.edu )
+ * Start FSVarNevis:0 ( orestes-corosync.nevis.columbia.edu )
+ * Start FSVirtualMachines:0 ( orestes-corosync.nevis.columbia.edu )
+ * Start FSMail:0 ( orestes-corosync.nevis.columbia.edu )
+ * Start FSWork:0 ( orestes-corosync.nevis.columbia.edu )
+ * Start AdminLvm:1 ( hypatia-corosync.nevis.columbia.edu )
+ * Start FSUsrNevis:1 ( hypatia-corosync.nevis.columbia.edu )
+ * Start FSVarNevis:1 ( hypatia-corosync.nevis.columbia.edu )
+ * Start FSVirtualMachines:1 ( hypatia-corosync.nevis.columbia.edu )
+ * Start FSMail:1 ( hypatia-corosync.nevis.columbia.edu )
+ * Start FSWork:1 ( hypatia-corosync.nevis.columbia.edu )
* Start KVM-guest ( hypatia-corosync.nevis.columbia.edu )
* Start Proxy ( orestes-corosync.nevis.columbia.edu )
 
@@ -125,74 +125,74 @@ Executing Cluster Transition:
* Resource action: ExportUsrNevis:1 monitor on orestes-corosync.nevis.columbia.edu
* Resource action: ExportUsrNevisOffsite:1 monitor on orestes-corosync.nevis.columbia.edu
* Resource action: ExportWWW:1 monitor on orestes-corosync.nevis.columbia.edu
- * Resource action: AdminLvm:0 monitor on hypatia-corosync.nevis.columbia.edu
- * Resource action: FSUsrNevis:0 monitor on hypatia-corosync.nevis.columbia.edu
- * Resource action: FSVarNevis:0 monitor on hypatia-corosync.nevis.columbia.edu
- * Resource action: FSVirtualMachines:0 monitor on hypatia-corosync.nevis.columbia.edu
- * Resource action: FSMail:0 monitor on hypatia-corosync.nevis.columbia.edu
- * Resource action: FSWork:0 monitor on hypatia-corosync.nevis.columbia.edu
- * Resource action: AdminLvm:1 monitor on orestes-corosync.nevis.columbia.edu
- * Resource action: FSUsrNevis:1 monitor on orestes-corosync.nevis.columbia.edu
- * Resource action: FSVarNevis:1 monitor on orestes-corosync.nevis.columbia.edu
- * Resource action: FSVirtualMachines:1 monitor on orestes-corosync.nevis.columbia.edu
- * Resource action: FSMail:1 monitor on orestes-corosync.nevis.columbia.edu
- * Resource action: FSWork:1 monitor on orestes-corosync.nevis.columbia.edu
+ * Resource action: AdminLvm:0 monitor on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSUsrNevis:0 monitor on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSVarNevis:0 monitor on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSVirtualMachines:0 monitor on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSMail:0 monitor on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSWork:0 monitor on orestes-corosync.nevis.columbia.edu
+ * Resource action: AdminLvm:1 monitor on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSUsrNevis:1 monitor on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSVarNevis:1 monitor on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSVirtualMachines:1 monitor on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSMail:1 monitor on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSWork:1 monitor on hypatia-corosync.nevis.columbia.edu
* Resource action: KVM-guest monitor on orestes-corosync.nevis.columbia.edu
* Resource action: KVM-guest monitor on hypatia-corosync.nevis.columbia.edu
* Resource action: Proxy monitor on orestes-corosync.nevis.columbia.edu
* Resource action: Proxy monitor on hypatia-corosync.nevis.columbia.edu
* Pseudo action: AdminClone_confirmed-pre_notify_start_0
* Pseudo action: AdminClone_start_0
- * Resource action: AdminDrbd:0 start on hypatia-corosync.nevis.columbia.edu
- * Resource action: AdminDrbd:1 start on orestes-corosync.nevis.columbia.edu
+ * Resource action: AdminDrbd:0 start on orestes-corosync.nevis.columbia.edu
+ * Resource action: AdminDrbd:1 start on hypatia-corosync.nevis.columbia.edu
* Pseudo action: AdminClone_running_0
* Pseudo action: AdminClone_post_notify_running_0
- * Resource action: AdminDrbd:0 notify on hypatia-corosync.nevis.columbia.edu
- * Resource action: AdminDrbd:1 notify on orestes-corosync.nevis.columbia.edu
+ * Resource action: AdminDrbd:0 notify on orestes-corosync.nevis.columbia.edu
+ * Resource action: AdminDrbd:1 notify on hypatia-corosync.nevis.columbia.edu
* Pseudo action: AdminClone_confirmed-post_notify_running_0
* Pseudo action: AdminClone_pre_notify_promote_0
- * Resource action: AdminDrbd:0 notify on hypatia-corosync.nevis.columbia.edu
- * Resource action: AdminDrbd:1 notify on orestes-corosync.nevis.columbia.edu
+ * Resource action: AdminDrbd:0 notify on orestes-corosync.nevis.columbia.edu
+ * Resource action: AdminDrbd:1 notify on hypatia-corosync.nevis.columbia.edu
* Pseudo action: AdminClone_confirmed-pre_notify_promote_0
* Pseudo action: AdminClone_promote_0
- * Resource action: AdminDrbd:0 promote on hypatia-corosync.nevis.columbia.edu
- * Resource action: AdminDrbd:1 promote on orestes-corosync.nevis.columbia.edu
+ * Resource action: AdminDrbd:0 promote on orestes-corosync.nevis.columbia.edu
+ * Resource action: AdminDrbd:1 promote on hypatia-corosync.nevis.columbia.edu
* Pseudo action: AdminClone_promoted_0
* Pseudo action: AdminClone_post_notify_promoted_0
- * Resource action: AdminDrbd:0 notify on hypatia-corosync.nevis.columbia.edu
- * Resource action: AdminDrbd:1 notify on orestes-corosync.nevis.columbia.edu
+ * Resource action: AdminDrbd:0 notify on orestes-corosync.nevis.columbia.edu
+ * Resource action: AdminDrbd:1 notify on hypatia-corosync.nevis.columbia.edu
* Pseudo action: AdminClone_confirmed-post_notify_promoted_0
* Pseudo action: FilesystemClone_start_0
- * Resource action: AdminDrbd:0 monitor=59000 on hypatia-corosync.nevis.columbia.edu
- * Resource action: AdminDrbd:1 monitor=59000 on orestes-corosync.nevis.columbia.edu
+ * Resource action: AdminDrbd:0 monitor=59000 on orestes-corosync.nevis.columbia.edu
+ * Resource action: AdminDrbd:1 monitor=59000 on hypatia-corosync.nevis.columbia.edu
* Pseudo action: FilesystemGroup:0_start_0
- * Resource action: AdminLvm:0 start on hypatia-corosync.nevis.columbia.edu
- * Resource action: FSUsrNevis:0 start on hypatia-corosync.nevis.columbia.edu
- * Resource action: FSVarNevis:0 start on hypatia-corosync.nevis.columbia.edu
- * Resource action: FSVirtualMachines:0 start on hypatia-corosync.nevis.columbia.edu
- * Resource action: FSMail:0 start on hypatia-corosync.nevis.columbia.edu
- * Resource action: FSWork:0 start on hypatia-corosync.nevis.columbia.edu
+ * Resource action: AdminLvm:0 start on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSUsrNevis:0 start on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSVarNevis:0 start on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSVirtualMachines:0 start on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSMail:0 start on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSWork:0 start on orestes-corosync.nevis.columbia.edu
* Pseudo action: FilesystemGroup:1_start_0
- * Resource action: AdminLvm:1 start on orestes-corosync.nevis.columbia.edu
- * Resource action: FSUsrNevis:1 start on orestes-corosync.nevis.columbia.edu
- * Resource action: FSVarNevis:1 start on orestes-corosync.nevis.columbia.edu
- * Resource action: FSVirtualMachines:1 start on orestes-corosync.nevis.columbia.edu
- * Resource action: FSMail:1 start on orestes-corosync.nevis.columbia.edu
- * Resource action: FSWork:1 start on orestes-corosync.nevis.columbia.edu
+ * Resource action: AdminLvm:1 start on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSUsrNevis:1 start on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSVarNevis:1 start on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSVirtualMachines:1 start on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSMail:1 start on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSWork:1 start on hypatia-corosync.nevis.columbia.edu
* Pseudo action: FilesystemGroup:0_running_0
- * Resource action: AdminLvm:0 monitor=30000 on hypatia-corosync.nevis.columbia.edu
- * Resource action: FSUsrNevis:0 monitor=20000 on hypatia-corosync.nevis.columbia.edu
- * Resource action: FSVarNevis:0 monitor=20000 on hypatia-corosync.nevis.columbia.edu
- * Resource action: FSVirtualMachines:0 monitor=20000 on hypatia-corosync.nevis.columbia.edu
- * Resource action: FSMail:0 monitor=20000 on hypatia-corosync.nevis.columbia.edu
- * Resource action: FSWork:0 monitor=20000 on hypatia-corosync.nevis.columbia.edu
+ * Resource action: AdminLvm:0 monitor=30000 on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSUsrNevis:0 monitor=20000 on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSVarNevis:0 monitor=20000 on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSVirtualMachines:0 monitor=20000 on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSMail:0 monitor=20000 on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSWork:0 monitor=20000 on orestes-corosync.nevis.columbia.edu
* Pseudo action: FilesystemGroup:1_running_0
- * Resource action: AdminLvm:1 monitor=30000 on orestes-corosync.nevis.columbia.edu
- * Resource action: FSUsrNevis:1 monitor=20000 on orestes-corosync.nevis.columbia.edu
- * Resource action: FSVarNevis:1 monitor=20000 on orestes-corosync.nevis.columbia.edu
- * Resource action: FSVirtualMachines:1 monitor=20000 on orestes-corosync.nevis.columbia.edu
- * Resource action: FSMail:1 monitor=20000 on orestes-corosync.nevis.columbia.edu
- * Resource action: FSWork:1 monitor=20000 on orestes-corosync.nevis.columbia.edu
+ * Resource action: AdminLvm:1 monitor=30000 on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSUsrNevis:1 monitor=20000 on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSVarNevis:1 monitor=20000 on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSVirtualMachines:1 monitor=20000 on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSMail:1 monitor=20000 on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSWork:1 monitor=20000 on hypatia-corosync.nevis.columbia.edu
* Pseudo action: FilesystemClone_running_0
* Resource action: CronAmbientTemperature start on hypatia-corosync.nevis.columbia.edu
* Pseudo action: DhcpGroup_start_0
diff --git a/cts/scheduler/summary/timeout-by-node.summary b/cts/scheduler/summary/timeout-by-node.summary
new file mode 100644
index 0000000..78f4fcd
--- /dev/null
+++ b/cts/scheduler/summary/timeout-by-node.summary
@@ -0,0 +1,43 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 node4 node5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * Clone Set: rsc1-clone [rsc1]:
+ * Stopped: [ node1 node2 node3 node4 node5 ]
+
+Transition Summary:
+ * Start rsc1:0 ( node2 )
+ * Start rsc1:1 ( node3 )
+ * Start rsc1:2 ( node4 )
+ * Start rsc1:3 ( node5 )
+ * Start rsc1:4 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1:0 monitor on node2
+ * Resource action: rsc1:1 monitor on node3
+ * Resource action: rsc1:2 monitor on node4
+ * Resource action: rsc1:3 monitor on node5
+ * Resource action: rsc1:4 monitor on node1
+ * Pseudo action: rsc1-clone_start_0
+ * Resource action: rsc1:0 start on node2
+ * Resource action: rsc1:1 start on node3
+ * Resource action: rsc1:2 start on node4
+ * Resource action: rsc1:3 start on node5
+ * Resource action: rsc1:4 start on node1
+ * Pseudo action: rsc1-clone_running_0
+ * Resource action: rsc1:0 monitor=10000 on node2
+ * Resource action: rsc1:1 monitor=10000 on node3
+ * Resource action: rsc1:2 monitor=10000 on node4
+ * Resource action: rsc1:3 monitor=10000 on node5
+ * Resource action: rsc1:4 monitor=10000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 node4 node5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * Clone Set: rsc1-clone [rsc1]:
+ * Started: [ node1 node2 node3 node4 node5 ]
diff --git a/cts/scheduler/summary/unfence-definition.summary b/cts/scheduler/summary/unfence-definition.summary
index bb22680..2d94f71 100644
--- a/cts/scheduler/summary/unfence-definition.summary
+++ b/cts/scheduler/summary/unfence-definition.summary
@@ -32,8 +32,8 @@ Executing Cluster Transition:
* Resource action: fencing monitor on virt-3
* Resource action: fencing delete on virt-1
* Resource action: dlm monitor on virt-3
- * Resource action: clvmd stop on virt-1
* Resource action: clvmd monitor on virt-3
+ * Resource action: clvmd stop on virt-1
* Pseudo action: clvmd-clone_stopped_0
* Pseudo action: dlm-clone_stop_0
* Resource action: dlm stop on virt-1
diff --git a/cts/scheduler/summary/unfence-parameters.summary b/cts/scheduler/summary/unfence-parameters.summary
index b872a41..93a65e6 100644
--- a/cts/scheduler/summary/unfence-parameters.summary
+++ b/cts/scheduler/summary/unfence-parameters.summary
@@ -31,8 +31,8 @@ Executing Cluster Transition:
* Fencing virt-3 (on)
* Resource action: fencing monitor on virt-3
* Resource action: dlm monitor on virt-3
- * Resource action: clvmd stop on virt-1
* Resource action: clvmd monitor on virt-3
+ * Resource action: clvmd stop on virt-1
* Pseudo action: clvmd-clone_stopped_0
* Pseudo action: dlm-clone_stop_0
* Resource action: dlm stop on virt-1