author     Daniel Baumann <daniel.baumann@progress-linux.org>  2024-04-17 06:53:20 +0000
committer  Daniel Baumann <daniel.baumann@progress-linux.org>  2024-04-17 06:53:20 +0000
commit     e5a812082ae033afb1eed82c0f2df3d0f6bdc93f (patch)
tree       a6716c9275b4b413f6c9194798b34b91affb3cc7 /cts/scheduler/summary
parent     Initial commit. (diff)
download   pacemaker-e5a812082ae033afb1eed82c0f2df3d0f6bdc93f.tar.xz
           pacemaker-e5a812082ae033afb1eed82c0f2df3d0f6bdc93f.zip
Adding upstream version 2.1.6. (upstream/2.1.6)
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'cts/scheduler/summary')
-rw-r--r--  cts/scheduler/summary/1-a-then-bm-move-b.summary  25
-rw-r--r--  cts/scheduler/summary/10-a-then-bm-b-move-a-clone.summary  33
-rw-r--r--  cts/scheduler/summary/11-a-then-bm-b-move-a-clone-starting.summary  36
-rw-r--r--  cts/scheduler/summary/1360.summary  30
-rw-r--r--  cts/scheduler/summary/1484.summary  21
-rw-r--r--  cts/scheduler/summary/1494.summary  27
-rw-r--r--  cts/scheduler/summary/2-am-then-b-move-a.summary  25
-rw-r--r--  cts/scheduler/summary/3-am-then-bm-both-migrate.summary  31
-rw-r--r--  cts/scheduler/summary/4-am-then-bm-b-not-migratable.summary  29
-rw-r--r--  cts/scheduler/summary/5-am-then-bm-a-not-migratable.summary  27
-rw-r--r--  cts/scheduler/summary/594.summary  55
-rw-r--r--  cts/scheduler/summary/6-migrate-group.summary  45
-rw-r--r--  cts/scheduler/summary/662.summary  67
-rw-r--r--  cts/scheduler/summary/696.summary  62
-rw-r--r--  cts/scheduler/summary/7-migrate-group-one-unmigratable.summary  41
-rw-r--r--  cts/scheduler/summary/726.summary  89
-rw-r--r--  cts/scheduler/summary/735.summary  52
-rw-r--r--  cts/scheduler/summary/764.summary  57
-rw-r--r--  cts/scheduler/summary/797.summary  73
-rw-r--r--  cts/scheduler/summary/8-am-then-bm-a-migrating-b-stopping.summary  29
-rw-r--r--  cts/scheduler/summary/829.summary  64
-rw-r--r--  cts/scheduler/summary/9-am-then-bm-b-migrating-a-stopping.summary  25
-rw-r--r--  cts/scheduler/summary/994-2.summary  38
-rw-r--r--  cts/scheduler/summary/994.summary  33
-rw-r--r--  cts/scheduler/summary/Makefile.am  12
-rw-r--r--  cts/scheduler/summary/a-demote-then-b-migrate.summary  57
-rw-r--r--  cts/scheduler/summary/a-promote-then-b-migrate.summary  42
-rw-r--r--  cts/scheduler/summary/allow-unhealthy-nodes.summary  35
-rw-r--r--  cts/scheduler/summary/anon-instance-pending.summary  224
-rw-r--r--  cts/scheduler/summary/anti-colocation-order.summary  45
-rw-r--r--  cts/scheduler/summary/anti-colocation-promoted.summary  38
-rw-r--r--  cts/scheduler/summary/anti-colocation-unpromoted.summary  36
-rw-r--r--  cts/scheduler/summary/asymmetric.summary  29
-rw-r--r--  cts/scheduler/summary/asymmetrical-order-move.summary  27
-rw-r--r--  cts/scheduler/summary/asymmetrical-order-restart.summary  27
-rw-r--r--  cts/scheduler/summary/attrs1.summary  21
-rw-r--r--  cts/scheduler/summary/attrs2.summary  21
-rw-r--r--  cts/scheduler/summary/attrs3.summary  21
-rw-r--r--  cts/scheduler/summary/attrs4.summary  21
-rw-r--r--  cts/scheduler/summary/attrs5.summary  19
-rw-r--r--  cts/scheduler/summary/attrs6.summary  21
-rw-r--r--  cts/scheduler/summary/attrs7.summary  21
-rw-r--r--  cts/scheduler/summary/attrs8.summary  21
-rw-r--r--  cts/scheduler/summary/balanced.summary  29
-rw-r--r--  cts/scheduler/summary/base-score.summary  23
-rw-r--r--  cts/scheduler/summary/bnc-515172.summary  36
-rw-r--r--  cts/scheduler/summary/bug-1572-1.summary  85
-rw-r--r--  cts/scheduler/summary/bug-1572-2.summary  61
-rw-r--r--  cts/scheduler/summary/bug-1573.summary  34
-rw-r--r--  cts/scheduler/summary/bug-1685.summary  38
-rw-r--r--  cts/scheduler/summary/bug-1718.summary  44
-rw-r--r--  cts/scheduler/summary/bug-1765.summary  38
-rw-r--r--  cts/scheduler/summary/bug-1820-1.summary  44
-rw-r--r--  cts/scheduler/summary/bug-1820.summary  38
-rw-r--r--  cts/scheduler/summary/bug-1822.summary  44
-rw-r--r--  cts/scheduler/summary/bug-5014-A-start-B-start.summary  27
-rw-r--r--  cts/scheduler/summary/bug-5014-A-stop-B-started.summary  23
-rw-r--r--  cts/scheduler/summary/bug-5014-A-stopped-B-stopped.summary  24
-rw-r--r--  cts/scheduler/summary/bug-5014-CLONE-A-start-B-start.summary  35
-rw-r--r--  cts/scheduler/summary/bug-5014-CLONE-A-stop-B-started.summary  29
-rw-r--r--  cts/scheduler/summary/bug-5014-CthenAthenB-C-stopped.summary  28
-rw-r--r--  cts/scheduler/summary/bug-5014-GROUP-A-start-B-start.summary  33
-rw-r--r--  cts/scheduler/summary/bug-5014-GROUP-A-stopped-B-started.summary  29
-rw-r--r--  cts/scheduler/summary/bug-5014-GROUP-A-stopped-B-stopped.summary  26
-rw-r--r--  cts/scheduler/summary/bug-5014-ordered-set-symmetrical-false.summary  27
-rw-r--r--  cts/scheduler/summary/bug-5014-ordered-set-symmetrical-true.summary  29
-rw-r--r--  cts/scheduler/summary/bug-5025-1.summary  25
-rw-r--r--  cts/scheduler/summary/bug-5025-2.summary  23
-rw-r--r--  cts/scheduler/summary/bug-5025-3.summary  28
-rw-r--r--  cts/scheduler/summary/bug-5025-4.summary  24
-rw-r--r--  cts/scheduler/summary/bug-5028-bottom.summary  26
-rw-r--r--  cts/scheduler/summary/bug-5028-detach.summary  28
-rw-r--r--  cts/scheduler/summary/bug-5028.summary  26
-rw-r--r--  cts/scheduler/summary/bug-5038.summary  25
-rw-r--r--  cts/scheduler/summary/bug-5059.summary  77
-rw-r--r--  cts/scheduler/summary/bug-5069-op-disabled.summary  21
-rw-r--r--  cts/scheduler/summary/bug-5069-op-enabled.summary  19
-rw-r--r--  cts/scheduler/summary/bug-5140-require-all-false.summary  83
-rw-r--r--  cts/scheduler/summary/bug-5143-ms-shuffle.summary  78
-rw-r--r--  cts/scheduler/summary/bug-5186-partial-migrate.summary  91
-rw-r--r--  cts/scheduler/summary/bug-cl-5168.summary  76
-rw-r--r--  cts/scheduler/summary/bug-cl-5170.summary  37
-rw-r--r--  cts/scheduler/summary/bug-cl-5212.summary  69
-rw-r--r--  cts/scheduler/summary/bug-cl-5213.summary  22
-rw-r--r--  cts/scheduler/summary/bug-cl-5219.summary  43
-rw-r--r--  cts/scheduler/summary/bug-cl-5247.summary  87
-rw-r--r--  cts/scheduler/summary/bug-lf-1852.summary  40
-rw-r--r--  cts/scheduler/summary/bug-lf-1920.summary  18
-rw-r--r--  cts/scheduler/summary/bug-lf-2106.summary  91
-rw-r--r--  cts/scheduler/summary/bug-lf-2153.summary  59
-rw-r--r--  cts/scheduler/summary/bug-lf-2160.summary  23
-rw-r--r--  cts/scheduler/summary/bug-lf-2171.summary  39
-rw-r--r--  cts/scheduler/summary/bug-lf-2213.summary  30
-rw-r--r--  cts/scheduler/summary/bug-lf-2317.summary  36
-rw-r--r--  cts/scheduler/summary/bug-lf-2358.summary  68
-rw-r--r--  cts/scheduler/summary/bug-lf-2361.summary  44
-rw-r--r--  cts/scheduler/summary/bug-lf-2422.summary  83
-rw-r--r--  cts/scheduler/summary/bug-lf-2435.summary  33
-rw-r--r--  cts/scheduler/summary/bug-lf-2445.summary  28
-rw-r--r--  cts/scheduler/summary/bug-lf-2453.summary  41
-rw-r--r--  cts/scheduler/summary/bug-lf-2474.summary  23
-rw-r--r--  cts/scheduler/summary/bug-lf-2493.summary  66
-rw-r--r--  cts/scheduler/summary/bug-lf-2508.summary  112
-rw-r--r--  cts/scheduler/summary/bug-lf-2544.summary  24
-rw-r--r--  cts/scheduler/summary/bug-lf-2551.summary  226
-rw-r--r--  cts/scheduler/summary/bug-lf-2574.summary  38
-rw-r--r--  cts/scheduler/summary/bug-lf-2581.summary  59
-rw-r--r--  cts/scheduler/summary/bug-lf-2606.summary  46
-rw-r--r--  cts/scheduler/summary/bug-lf-2619.summary  100
-rw-r--r--  cts/scheduler/summary/bug-n-385265-2.summary  33
-rw-r--r--  cts/scheduler/summary/bug-n-385265.summary  25
-rw-r--r--  cts/scheduler/summary/bug-n-387749.summary  59
-rw-r--r--  cts/scheduler/summary/bug-pm-11.summary  48
-rw-r--r--  cts/scheduler/summary/bug-pm-12.summary  57
-rw-r--r--  cts/scheduler/summary/bug-rh-1097457.summary  126
-rw-r--r--  cts/scheduler/summary/bug-rh-880249.summary  29
-rw-r--r--  cts/scheduler/summary/bug-suse-707150.summary  75
-rw-r--r--  cts/scheduler/summary/bundle-connection-with-container.summary  63
-rw-r--r--  cts/scheduler/summary/bundle-interleave-down.summary  91
-rw-r--r--  cts/scheduler/summary/bundle-interleave-promote.summary  51
-rw-r--r--  cts/scheduler/summary/bundle-interleave-start.summary  156
-rw-r--r--  cts/scheduler/summary/bundle-nested-colocation.summary  106
-rw-r--r--  cts/scheduler/summary/bundle-order-fencing.summary  228
-rw-r--r--  cts/scheduler/summary/bundle-order-partial-start-2.summary  100
-rw-r--r--  cts/scheduler/summary/bundle-order-partial-start.summary  97
-rw-r--r--  cts/scheduler/summary/bundle-order-partial-stop.summary  127
-rw-r--r--  cts/scheduler/summary/bundle-order-partial.summary  47
-rw-r--r--  cts/scheduler/summary/bundle-order-startup-clone-2.summary  213
-rw-r--r--  cts/scheduler/summary/bundle-order-startup-clone.summary  79
-rw-r--r--  cts/scheduler/summary/bundle-order-startup.summary  141
-rw-r--r--  cts/scheduler/summary/bundle-order-stop-clone.summary  88
-rw-r--r--  cts/scheduler/summary/bundle-order-stop-on-remote.summary  224
-rw-r--r--  cts/scheduler/summary/bundle-order-stop.summary  127
-rw-r--r--  cts/scheduler/summary/bundle-probe-order-1.summary  34
-rw-r--r--  cts/scheduler/summary/bundle-probe-order-2.summary  34
-rw-r--r--  cts/scheduler/summary/bundle-probe-order-3.summary  33
-rw-r--r--  cts/scheduler/summary/bundle-probe-remotes.summary  168
-rw-r--r--  cts/scheduler/summary/bundle-replicas-change.summary  77
-rw-r--r--  cts/scheduler/summary/cancel-behind-moving-remote.summary  211
-rw-r--r--  cts/scheduler/summary/clbz5007-promotable-colocation.summary  31
-rw-r--r--  cts/scheduler/summary/clone-anon-dup.summary  35
-rw-r--r--  cts/scheduler/summary/clone-anon-failcount.summary  119
-rw-r--r--  cts/scheduler/summary/clone-anon-probe-1.summary  27
-rw-r--r--  cts/scheduler/summary/clone-anon-probe-2.summary  24
-rw-r--r--  cts/scheduler/summary/clone-fail-block-colocation.summary  61
-rw-r--r--  cts/scheduler/summary/clone-interleave-1.summary  53
-rw-r--r--  cts/scheduler/summary/clone-interleave-2.summary  44
-rw-r--r--  cts/scheduler/summary/clone-interleave-3.summary  47
-rw-r--r--  cts/scheduler/summary/clone-max-zero.summary  51
-rw-r--r--  cts/scheduler/summary/clone-no-shuffle.summary  61
-rw-r--r--  cts/scheduler/summary/clone-order-16instances.summary  72
-rw-r--r--  cts/scheduler/summary/clone-order-primitive.summary  29
-rw-r--r--  cts/scheduler/summary/clone-require-all-1.summary  36
-rw-r--r--  cts/scheduler/summary/clone-require-all-2.summary  42
-rw-r--r--  cts/scheduler/summary/clone-require-all-3.summary  47
-rw-r--r--  cts/scheduler/summary/clone-require-all-4.summary  41
-rw-r--r--  cts/scheduler/summary/clone-require-all-5.summary  45
-rw-r--r--  cts/scheduler/summary/clone-require-all-6.summary  37
-rw-r--r--  cts/scheduler/summary/clone-require-all-7.summary  48
-rw-r--r--  cts/scheduler/summary/clone-require-all-no-interleave-1.summary  56
-rw-r--r--  cts/scheduler/summary/clone-require-all-no-interleave-2.summary  56
-rw-r--r--  cts/scheduler/summary/clone-require-all-no-interleave-3.summary  62
-rw-r--r--  cts/scheduler/summary/clone-requires-quorum-recovery.summary  48
-rw-r--r--  cts/scheduler/summary/clone-requires-quorum.summary  42
-rw-r--r--  cts/scheduler/summary/clone_min_interleave_start_one.summary  41
-rw-r--r--  cts/scheduler/summary/clone_min_interleave_start_two.summary  61
-rw-r--r--  cts/scheduler/summary/clone_min_interleave_stop_one.summary  36
-rw-r--r--  cts/scheduler/summary/clone_min_interleave_stop_two.summary  54
-rw-r--r--  cts/scheduler/summary/clone_min_start_one.summary  38
-rw-r--r--  cts/scheduler/summary/clone_min_start_two.summary  38
-rw-r--r--  cts/scheduler/summary/clone_min_stop_all.summary  44
-rw-r--r--  cts/scheduler/summary/clone_min_stop_one.summary  33
-rw-r--r--  cts/scheduler/summary/clone_min_stop_two.summary  43
-rw-r--r--  cts/scheduler/summary/cloned-group-stop.summary  91
-rw-r--r--  cts/scheduler/summary/cloned-group.summary  48
-rw-r--r--  cts/scheduler/summary/cloned_start_one.summary  42
-rw-r--r--  cts/scheduler/summary/cloned_start_two.summary  43
-rw-r--r--  cts/scheduler/summary/cloned_stop_one.summary  41
-rw-r--r--  cts/scheduler/summary/cloned_stop_two.summary  46
-rw-r--r--  cts/scheduler/summary/cluster-specific-params.summary  24
-rw-r--r--  cts/scheduler/summary/colo_promoted_w_native.summary  49
-rw-r--r--  cts/scheduler/summary/colo_unpromoted_w_native.summary  53
-rw-r--r--  cts/scheduler/summary/coloc-attr.summary  31
-rw-r--r--  cts/scheduler/summary/coloc-clone-stays-active.summary  209
-rw-r--r--  cts/scheduler/summary/coloc-dependee-should-move.summary  61
-rw-r--r--  cts/scheduler/summary/coloc-dependee-should-stay.summary  41
-rw-r--r--  cts/scheduler/summary/coloc-group.summary  39
-rw-r--r--  cts/scheduler/summary/coloc-intra-set.summary  38
-rw-r--r--  cts/scheduler/summary/coloc-list.summary  42
-rw-r--r--  cts/scheduler/summary/coloc-loop.summary  36
-rw-r--r--  cts/scheduler/summary/coloc-many-one.summary  38
-rw-r--r--  cts/scheduler/summary/coloc-negative-group.summary  26
-rw-r--r--  cts/scheduler/summary/coloc-unpromoted-anti.summary  48
-rw-r--r--  cts/scheduler/summary/coloc_fp_logic.summary  23
-rw-r--r--  cts/scheduler/summary/colocate-primitive-with-clone.summary  127
-rw-r--r--  cts/scheduler/summary/colocate-unmanaged-group.summary  31
-rw-r--r--  cts/scheduler/summary/colocated-utilization-clone.summary  73
-rw-r--r--  cts/scheduler/summary/colocated-utilization-group.summary  55
-rw-r--r--  cts/scheduler/summary/colocated-utilization-primitive-1.summary  35
-rw-r--r--  cts/scheduler/summary/colocated-utilization-primitive-2.summary  33
-rw-r--r--  cts/scheduler/summary/colocation-influence.summary  170
-rw-r--r--  cts/scheduler/summary/colocation-priority-group.summary  59
-rw-r--r--  cts/scheduler/summary/colocation-vs-stickiness.summary  45
-rw-r--r--  cts/scheduler/summary/colocation_constraint_stops_promoted.summary  38
-rw-r--r--  cts/scheduler/summary/colocation_constraint_stops_unpromoted.summary  36
-rw-r--r--  cts/scheduler/summary/comments.summary  27
-rw-r--r--  cts/scheduler/summary/complex_enforce_colo.summary  455
-rw-r--r--  cts/scheduler/summary/concurrent-fencing.summary  27
-rw-r--r--  cts/scheduler/summary/container-1.summary  32
-rw-r--r--  cts/scheduler/summary/container-2.summary  33
-rw-r--r--  cts/scheduler/summary/container-3.summary  32
-rw-r--r--  cts/scheduler/summary/container-4.summary  33
-rw-r--r--  cts/scheduler/summary/container-group-1.summary  36
-rw-r--r--  cts/scheduler/summary/container-group-2.summary  39
-rw-r--r--  cts/scheduler/summary/container-group-3.summary  37
-rw-r--r--  cts/scheduler/summary/container-group-4.summary  39
-rw-r--r--  cts/scheduler/summary/container-is-remote-node.summary  59
-rw-r--r--  cts/scheduler/summary/date-1.summary  21
-rw-r--r--  cts/scheduler/summary/date-2.summary  21
-rw-r--r--  cts/scheduler/summary/date-3.summary  21
-rw-r--r--  cts/scheduler/summary/dc-fence-ordering.summary  82
-rw-r--r--  cts/scheduler/summary/enforce-colo1.summary  39
-rw-r--r--  cts/scheduler/summary/expire-non-blocked-failure.summary  24
-rw-r--r--  cts/scheduler/summary/expired-failed-probe-primitive.summary  26
-rw-r--r--  cts/scheduler/summary/expired-stop-1.summary  22
-rw-r--r--  cts/scheduler/summary/failcount-block.summary  39
-rw-r--r--  cts/scheduler/summary/failcount.summary  63
-rw-r--r--  cts/scheduler/summary/failed-demote-recovery-promoted.summary  60
-rw-r--r--  cts/scheduler/summary/failed-demote-recovery.summary  48
-rw-r--r--  cts/scheduler/summary/failed-probe-clone.summary  48
-rw-r--r--  cts/scheduler/summary/failed-probe-primitive.summary  27
-rw-r--r--  cts/scheduler/summary/failed-sticky-anticolocated-group.summary  41
-rw-r--r--  cts/scheduler/summary/failed-sticky-group.summary  90
-rw-r--r--  cts/scheduler/summary/force-anon-clone-max.summary  74
-rw-r--r--  cts/scheduler/summary/group-anticolocation.summary  41
-rw-r--r--  cts/scheduler/summary/group-colocation-failure.summary  47
-rw-r--r--  cts/scheduler/summary/group-dependents.summary  196
-rw-r--r--  cts/scheduler/summary/group-fail.summary  39
-rw-r--r--  cts/scheduler/summary/group-stop-ordering.summary  29
-rw-r--r--  cts/scheduler/summary/group-unmanaged-stopped.summary  27
-rw-r--r--  cts/scheduler/summary/group-unmanaged.summary  23
-rw-r--r--  cts/scheduler/summary/group1.summary  37
-rw-r--r--  cts/scheduler/summary/group10.summary  68
-rw-r--r--  cts/scheduler/summary/group11.summary  32
-rw-r--r--  cts/scheduler/summary/group13.summary  27
-rw-r--r--  cts/scheduler/summary/group14.summary  102
-rw-r--r--  cts/scheduler/summary/group15.summary  51
-rw-r--r--  cts/scheduler/summary/group2.summary  49
-rw-r--r--  cts/scheduler/summary/group3.summary  59
-rw-r--r--  cts/scheduler/summary/group4.summary  32
-rw-r--r--  cts/scheduler/summary/group5.summary  51
-rw-r--r--  cts/scheduler/summary/group6.summary  63
-rw-r--r--  cts/scheduler/summary/group7.summary  72
-rw-r--r--  cts/scheduler/summary/group8.summary  52
-rw-r--r--  cts/scheduler/summary/group9.summary  66
-rw-r--r--  cts/scheduler/summary/guest-host-not-fenceable.summary  91
-rw-r--r--  cts/scheduler/summary/guest-node-cleanup.summary  55
-rw-r--r--  cts/scheduler/summary/guest-node-host-dies.summary  82
-rw-r--r--  cts/scheduler/summary/history-1.summary  55
-rw-r--r--  cts/scheduler/summary/honor_stonith_rsc_order1.summary  38
-rw-r--r--  cts/scheduler/summary/honor_stonith_rsc_order2.summary  48
-rw-r--r--  cts/scheduler/summary/honor_stonith_rsc_order3.summary  46
-rw-r--r--  cts/scheduler/summary/honor_stonith_rsc_order4.summary  30
-rw-r--r--  cts/scheduler/summary/ignore_stonith_rsc_order1.summary  25
-rw-r--r--  cts/scheduler/summary/ignore_stonith_rsc_order2.summary  34
-rw-r--r--  cts/scheduler/summary/ignore_stonith_rsc_order3.summary  38
-rw-r--r--  cts/scheduler/summary/ignore_stonith_rsc_order4.summary  38
-rw-r--r--  cts/scheduler/summary/inc0.summary  47
-rw-r--r--  cts/scheduler/summary/inc1.summary  59
-rw-r--r--  cts/scheduler/summary/inc10.summary  46
-rw-r--r--  cts/scheduler/summary/inc11.summary  43
-rw-r--r--  cts/scheduler/summary/inc12.summary  132
-rw-r--r--  cts/scheduler/summary/inc2.summary  44
-rw-r--r--  cts/scheduler/summary/inc3.summary  71
-rw-r--r--  cts/scheduler/summary/inc4.summary  71
-rw-r--r--  cts/scheduler/summary/inc5.summary  139
-rw-r--r--  cts/scheduler/summary/inc6.summary  101
-rw-r--r--  cts/scheduler/summary/inc7.summary  100
-rw-r--r--  cts/scheduler/summary/inc8.summary  71
-rw-r--r--  cts/scheduler/summary/inc9.summary  30
-rw-r--r--  cts/scheduler/summary/interleave-0.summary  241
-rw-r--r--  cts/scheduler/summary/interleave-1.summary  241
-rw-r--r--  cts/scheduler/summary/interleave-2.summary  241
-rw-r--r--  cts/scheduler/summary/interleave-3.summary  241
-rw-r--r--  cts/scheduler/summary/interleave-pseudo-stop.summary  83
-rw-r--r--  cts/scheduler/summary/interleave-restart.summary  97
-rw-r--r--  cts/scheduler/summary/interleave-stop.summary  74
-rw-r--r--  cts/scheduler/summary/intervals.summary  52
-rw-r--r--  cts/scheduler/summary/leftover-pending-monitor.summary  30
-rw-r--r--  cts/scheduler/summary/load-stopped-loop-2.summary  114
-rw-r--r--  cts/scheduler/summary/load-stopped-loop.summary  337
-rw-r--r--  cts/scheduler/summary/location-date-rules-1.summary  36
-rw-r--r--  cts/scheduler/summary/location-date-rules-2.summary  36
-rw-r--r--  cts/scheduler/summary/location-sets-templates.summary  51
-rw-r--r--  cts/scheduler/summary/managed-0.summary  132
-rw-r--r--  cts/scheduler/summary/managed-1.summary  132
-rw-r--r--  cts/scheduler/summary/managed-2.summary  166
-rw-r--r--  cts/scheduler/summary/migrate-1.summary  25
-rw-r--r--  cts/scheduler/summary/migrate-2.summary  19
-rw-r--r--  cts/scheduler/summary/migrate-3.summary  23
-rw-r--r--  cts/scheduler/summary/migrate-4.summary  22
-rw-r--r--  cts/scheduler/summary/migrate-5.summary  35
-rw-r--r--  cts/scheduler/summary/migrate-begin.summary  28
-rw-r--r--  cts/scheduler/summary/migrate-both-vms.summary  102
-rw-r--r--  cts/scheduler/summary/migrate-fail-2.summary  27
-rw-r--r--  cts/scheduler/summary/migrate-fail-3.summary  26
-rw-r--r--  cts/scheduler/summary/migrate-fail-4.summary  26
-rw-r--r--  cts/scheduler/summary/migrate-fail-5.summary  25
-rw-r--r--  cts/scheduler/summary/migrate-fail-6.summary  27
-rw-r--r--  cts/scheduler/summary/migrate-fail-7.summary  25
-rw-r--r--  cts/scheduler/summary/migrate-fail-8.summary  26
-rw-r--r--  cts/scheduler/summary/migrate-fail-9.summary  25
-rw-r--r--  cts/scheduler/summary/migrate-fencing.summary  108
-rw-r--r--  cts/scheduler/summary/migrate-partial-1.summary  24
-rw-r--r--  cts/scheduler/summary/migrate-partial-2.summary  27
-rw-r--r--  cts/scheduler/summary/migrate-partial-3.summary  31
-rw-r--r--  cts/scheduler/summary/migrate-partial-4.summary  126
-rw-r--r--  cts/scheduler/summary/migrate-shutdown.summary  92
-rw-r--r--  cts/scheduler/summary/migrate-start-complex.summary  50
-rw-r--r--  cts/scheduler/summary/migrate-start.summary  33
-rw-r--r--  cts/scheduler/summary/migrate-stop-complex.summary  49
-rw-r--r--  cts/scheduler/summary/migrate-stop-start-complex.summary  50
-rw-r--r--  cts/scheduler/summary/migrate-stop.summary  35
-rw-r--r--  cts/scheduler/summary/migrate-stop_start.summary  41
-rw-r--r--  cts/scheduler/summary/migrate-success.summary  23
-rw-r--r--  cts/scheduler/summary/migration-behind-migrating-remote.summary  39
-rw-r--r--  cts/scheduler/summary/migration-intermediary-cleaned.summary  89
-rw-r--r--  cts/scheduler/summary/migration-ping-pong.summary  27
-rw-r--r--  cts/scheduler/summary/minimal.summary  29
-rw-r--r--  cts/scheduler/summary/mon-rsc-1.summary  22
-rw-r--r--  cts/scheduler/summary/mon-rsc-2.summary  24
-rw-r--r--  cts/scheduler/summary/mon-rsc-3.summary  20
-rw-r--r--  cts/scheduler/summary/mon-rsc-4.summary  24
-rw-r--r--  cts/scheduler/summary/monitor-onfail-restart.summary  23
-rw-r--r--  cts/scheduler/summary/monitor-onfail-stop.summary  21
-rw-r--r--  cts/scheduler/summary/monitor-recovery.summary  32
-rw-r--r--  cts/scheduler/summary/multi1.summary  21
-rw-r--r--  cts/scheduler/summary/multiple-active-block-group.summary  27
-rw-r--r--  cts/scheduler/summary/multiple-monitor-one-failed.summary  22
-rw-r--r--  cts/scheduler/summary/multiply-active-stonith.summary  28
-rw-r--r--  cts/scheduler/summary/nested-remote-recovery.summary  131
-rw-r--r--  cts/scheduler/summary/no-promote-on-unrunnable-guest.summary  103
-rw-r--r--  cts/scheduler/summary/no_quorum_demote.summary  40
-rw-r--r--  cts/scheduler/summary/node-maintenance-1.summary  26
-rw-r--r--  cts/scheduler/summary/node-maintenance-2.summary  25
-rw-r--r--  cts/scheduler/summary/not-installed-agent.summary  29
-rw-r--r--  cts/scheduler/summary/not-installed-tools.summary  25
-rw-r--r--  cts/scheduler/summary/not-reschedule-unneeded-monitor.summary  40
-rw-r--r--  cts/scheduler/summary/notifs-for-unrunnable.summary  99
-rw-r--r--  cts/scheduler/summary/notify-0.summary  39
-rw-r--r--  cts/scheduler/summary/notify-1.summary  51
-rw-r--r--  cts/scheduler/summary/notify-2.summary  51
-rw-r--r--  cts/scheduler/summary/notify-3.summary  62
-rw-r--r--  cts/scheduler/summary/notify-behind-stopping-remote.summary  64
-rw-r--r--  cts/scheduler/summary/novell-239079.summary  33
-rw-r--r--  cts/scheduler/summary/novell-239082.summary  59
-rw-r--r--  cts/scheduler/summary/novell-239087.summary  23
-rw-r--r--  cts/scheduler/summary/novell-251689.summary  49
-rw-r--r--  cts/scheduler/summary/novell-252693-2.summary  103
-rw-r--r--  cts/scheduler/summary/novell-252693-3.summary  112
-rw-r--r--  cts/scheduler/summary/novell-252693.summary  94
-rw-r--r--  cts/scheduler/summary/nvpair-date-rules-1.summary  38
-rw-r--r--  cts/scheduler/summary/nvpair-id-ref.summary  31
-rw-r--r--  cts/scheduler/summary/obsolete-lrm-resource.summary  25
-rw-r--r--  cts/scheduler/summary/ocf_degraded-remap-ocf_ok.summary  21
-rw-r--r--  cts/scheduler/summary/ocf_degraded_promoted-remap-ocf_ok.summary  25
-rw-r--r--  cts/scheduler/summary/on-fail-ignore.summary  23
-rw-r--r--  cts/scheduler/summary/on_fail_demote1.summary  88
-rw-r--r--  cts/scheduler/summary/on_fail_demote2.summary  43
-rw-r--r--  cts/scheduler/summary/on_fail_demote3.summary  36
-rw-r--r--  cts/scheduler/summary/on_fail_demote4.summary  189
-rw-r--r--  cts/scheduler/summary/one-or-more-0.summary  38
-rw-r--r--  cts/scheduler/summary/one-or-more-1.summary  34
-rw-r--r--  cts/scheduler/summary/one-or-more-2.summary  38
-rw-r--r--  cts/scheduler/summary/one-or-more-3.summary  34
-rw-r--r--  cts/scheduler/summary/one-or-more-4.summary  38
-rw-r--r--  cts/scheduler/summary/one-or-more-5.summary  47
-rw-r--r--  cts/scheduler/summary/one-or-more-6.summary  27
-rw-r--r--  cts/scheduler/summary/one-or-more-7.summary  27
-rw-r--r--  cts/scheduler/summary/one-or-more-unrunnable-instances.summary  736
-rw-r--r--  cts/scheduler/summary/op-defaults-2.summary  48
-rw-r--r--  cts/scheduler/summary/op-defaults-3.summary  28
-rw-r--r--  cts/scheduler/summary/op-defaults.summary  48
-rw-r--r--  cts/scheduler/summary/order-clone.summary  45
-rw-r--r--  cts/scheduler/summary/order-expired-failure.summary  112
-rw-r--r--  cts/scheduler/summary/order-first-probes.summary  37
-rw-r--r--  cts/scheduler/summary/order-mandatory.summary  30
-rw-r--r--  cts/scheduler/summary/order-optional-keyword.summary  25
-rw-r--r--  cts/scheduler/summary/order-optional.summary  25
-rw-r--r--  cts/scheduler/summary/order-required.summary  30
-rw-r--r--  cts/scheduler/summary/order-serialize-set.summary  73
-rw-r--r--  cts/scheduler/summary/order-serialize.summary  73
-rw-r--r--  cts/scheduler/summary/order-sets.summary  41
-rw-r--r--  cts/scheduler/summary/order-wrong-kind.summary  29
-rw-r--r--  cts/scheduler/summary/order1.summary  33
-rw-r--r--  cts/scheduler/summary/order2.summary  39
-rw-r--r--  cts/scheduler/summary/order3.summary  39
-rw-r--r--  cts/scheduler/summary/order4.summary  33
-rw-r--r--  cts/scheduler/summary/order5.summary  51
-rw-r--r--  cts/scheduler/summary/order6.summary  51
-rw-r--r--  cts/scheduler/summary/order7.summary  40
-rw-r--r--  cts/scheduler/summary/order_constraint_stops_promoted.summary  44
-rw-r--r--  cts/scheduler/summary/order_constraint_stops_unpromoted.summary  36
-rw-r--r--  cts/scheduler/summary/ordered-set-basic-startup.summary  42
-rw-r--r--  cts/scheduler/summary/ordered-set-natural.summary  55
-rw-r--r--  cts/scheduler/summary/origin.summary  18
-rw-r--r--  cts/scheduler/summary/orphan-0.summary  38
-rw-r--r--  cts/scheduler/summary/orphan-1.summary  42
-rw-r--r--  cts/scheduler/summary/orphan-2.summary  44
-rw-r--r--  cts/scheduler/summary/params-0.summary  40
-rw-r--r--  cts/scheduler/summary/params-1.summary  47
-rw-r--r--  cts/scheduler/summary/params-2.summary  37
-rw-r--r--  cts/scheduler/summary/params-3.summary  47
-rw-r--r--  cts/scheduler/summary/params-4.summary  46
-rw-r--r--  cts/scheduler/summary/params-5.summary  47
-rw-r--r--  cts/scheduler/summary/params-6.summary  379
-rw-r--r--  cts/scheduler/summary/partial-live-migration-multiple-active.summary  25
-rw-r--r--  cts/scheduler/summary/partial-unmanaged-group.summary  41
-rw-r--r--  cts/scheduler/summary/per-node-attrs.summary  22
-rw-r--r--  cts/scheduler/summary/per-op-failcount.summary  34
-rw-r--r--  cts/scheduler/summary/placement-capacity.summary  23
-rw-r--r--  cts/scheduler/summary/placement-location.summary  25
-rw-r--r--  cts/scheduler/summary/placement-priority.summary  25
-rw-r--r--  cts/scheduler/summary/placement-stickiness.summary  25
-rw-r--r--  cts/scheduler/summary/primitive-with-group-with-clone.summary  71
-rw-r--r--  cts/scheduler/summary/primitive-with-group-with-promoted.summary  75
-rw-r--r--  cts/scheduler/summary/primitive-with-unrunnable-group.summary  37
-rw-r--r--  cts/scheduler/summary/priority-fencing-delay.summary  104
-rw-r--r--  cts/scheduler/summary/probe-0.summary  41
-rw-r--r--  cts/scheduler/summary/probe-1.summary  21
-rw-r--r--  cts/scheduler/summary/probe-2.summary  163
-rw-r--r--  cts/scheduler/summary/probe-3.summary  57
-rw-r--r--  cts/scheduler/summary/probe-4.summary  58
-rw-r--r--  cts/scheduler/summary/probe-pending-node.summary  55
-rw-r--r--  cts/scheduler/summary/probe-target-of-failed-migrate_to-1.summary  23
-rw-r--r--  cts/scheduler/summary/probe-target-of-failed-migrate_to-2.summary  19
-rw-r--r--  cts/scheduler/summary/probe-timeout.summary  31
-rw-r--r--  cts/scheduler/summary/promoted-0.summary  47
-rw-r--r--  cts/scheduler/summary/promoted-1.summary  50
-rw-r--r--  cts/scheduler/summary/promoted-10.summary  75
-rw-r--r--  cts/scheduler/summary/promoted-11.summary  40
-rw-r--r--  cts/scheduler/summary/promoted-12.summary  33
-rw-r--r--  cts/scheduler/summary/promoted-13.summary  62
-rw-r--r--  cts/scheduler/summary/promoted-2.summary  71
-rw-r--r--  cts/scheduler/summary/promoted-3.summary  50
-rw-r--r--  cts/scheduler/summary/promoted-4.summary  94
-rw-r--r--  cts/scheduler/summary/promoted-5.summary  88
-rw-r--r--  cts/scheduler/summary/promoted-6.summary  87
-rw-r--r--  cts/scheduler/summary/promoted-7.summary  121
-rw-r--r--  cts/scheduler/summary/promoted-8.summary  124
-rw-r--r--  cts/scheduler/summary/promoted-9.summary  100
-rw-r--r--  cts/scheduler/summary/promoted-allow-start.summary  21
-rw-r--r--  cts/scheduler/summary/promoted-asymmetrical-order.summary  37
-rw-r--r--  cts/scheduler/summary/promoted-colocation.summary  34
-rw-r--r--  cts/scheduler/summary/promoted-demote-2.summary  75
-rw-r--r--  cts/scheduler/summary/promoted-demote-block.summary  26
-rw-r--r--  cts/scheduler/summary/promoted-demote.summary  70
-rw-r--r--  cts/scheduler/summary/promoted-depend.summary  62
-rw-r--r--  cts/scheduler/summary/promoted-dependent-ban.summary  38
-rw-r--r--  cts/scheduler/summary/promoted-failed-demote-2.summary  47
-rw-r--r--  cts/scheduler/summary/promoted-failed-demote.summary  64
-rw-r--r--  cts/scheduler/summary/promoted-group.summary  37
-rw-r--r--  cts/scheduler/summary/promoted-move.summary  72
-rw-r--r--  cts/scheduler/summary/promoted-notify.summary  36
-rw-r--r--  cts/scheduler/summary/promoted-ordering.summary  96
-rw-r--r--  cts/scheduler/summary/promoted-partially-demoted-group.summary  118
-rw-r--r--  cts/scheduler/summary/promoted-probed-score.summary  329
-rw-r--r--  cts/scheduler/summary/promoted-promotion-constraint.summary  36
-rw-r--r--  cts/scheduler/summary/promoted-pseudo.summary  60
-rw-r--r--  cts/scheduler/summary/promoted-reattach.summary  34
-rw-r--r--  cts/scheduler/summary/promoted-role.summary  24
-rw-r--r--  cts/scheduler/summary/promoted-score-startup.summary  54
-rw-r--r--  cts/scheduler/summary/promoted-stop.summary  24
-rw-r--r--  cts/scheduler/summary/promoted-unmanaged-monitor.summary  69
-rw-r--r--  cts/scheduler/summary/promoted-with-blocked.summary  59
-rw-r--r--  cts/scheduler/summary/promoted_monitor_restart.summary  24
-rw-r--r--  cts/scheduler/summary/quorum-1.summary  30
-rw-r--r--  cts/scheduler/summary/quorum-2.summary  29
-rw-r--r--  cts/scheduler/summary/quorum-3.summary  30
-rw-r--r--  cts/scheduler/summary/quorum-4.summary  25
-rw-r--r--  cts/scheduler/summary/quorum-5.summary  35
-rw-r--r--  cts/scheduler/summary/quorum-6.summary  50
-rw-r--r--  cts/scheduler/summary/rebalance-unique-clones.summary  29
-rw-r--r--  cts/scheduler/summary/rec-node-1.summary  27
-rw-r--r--  cts/scheduler/summary/rec-node-10.summary  29
-rw-r--r--  cts/scheduler/summary/rec-node-11.summary  47
-rw-r--r--  cts/scheduler/summary/rec-node-12.summary  92
-rw-r--r--  cts/scheduler/summary/rec-node-13.summary  80
-rw-r--r--  cts/scheduler/summary/rec-node-14.summary  27
-rw-r--r--  cts/scheduler/summary/rec-node-15.summary  88
-rw-r--r--  cts/scheduler/summary/rec-node-2.summary  62
-rw-r--r--  cts/scheduler/summary/rec-node-3.summary  27
-rw-r--r--  cts/scheduler/summary/rec-node-4.summary  36
-rw-r--r--  cts/scheduler/summary/rec-node-5.summary  27
-rw-r--r--  cts/scheduler/summary/rec-node-6.summary  36
-rw-r--r--  cts/scheduler/summary/rec-node-7.summary  36
-rw-r--r--  cts/scheduler/summary/rec-node-8.summary  33
-rw-r--r--  cts/scheduler/summary/rec-node-9.summary  25
-rw-r--r--  cts/scheduler/summary/rec-rsc-0.summary  21
-rw-r--r--  cts/scheduler/summary/rec-rsc-1.summary  21
-rw-r--r--  cts/scheduler/summary/rec-rsc-2.summary  22
-rw-r--r--  cts/scheduler/summary/rec-rsc-3.summary  20
-rw-r--r--  cts/scheduler/summary/rec-rsc-4.summary  20
-rw-r--r--  cts/scheduler/summary/rec-rsc-5.summary  36
-rw-r--r--  cts/scheduler/summary/rec-rsc-6.summary  21
-rw-r--r--  cts/scheduler/summary/rec-rsc-7.summary  21
-rw-r--r--  cts/scheduler/summary/rec-rsc-8.summary  19
-rw-r--r--  cts/scheduler/summary/rec-rsc-9.summary  42
-rw-r--r--  cts/scheduler/summary/reload-becomes-restart.summary  55
-rw-r--r--  cts/scheduler/summary/remote-connection-shutdown.summary  162
-rw-r--r--  cts/scheduler/summary/remote-connection-unrecoverable.summary  54
-rw-r--r--  cts/scheduler/summary/remote-disable.summary  35
-rw-r--r--  cts/scheduler/summary/remote-fence-before-reconnect.summary  39
-rw-r--r--  cts/scheduler/summary/remote-fence-unclean-3.summary  103
-rw-r--r--  cts/scheduler/summary/remote-fence-unclean.summary  47
-rw-r--r--  cts/scheduler/summary/remote-fence-unclean2.summary  31
-rw-r--r--  cts/scheduler/summary/remote-move.summary  39
-rw-r--r--  cts/scheduler/summary/remote-orphaned.summary  69
-rw-r--r--  cts/scheduler/summary/remote-orphaned2.summary  29
-rw-r--r--  cts/scheduler/summary/remote-partial-migrate.summary  190
-rw-r--r--  cts/scheduler/summary/remote-partial-migrate2.summary  208
-rw-r--r--  cts/scheduler/summary/remote-probe-disable.summary  37
-rw-r--r--  cts/scheduler/summary/remote-reconnect-delay.summary  67
-rw-r--r--  cts/scheduler/summary/remote-recover-all.summary  152
-rw-r--r--  cts/scheduler/summary/remote-recover-connection.summary  132
-rw-r--r--  cts/scheduler/summary/remote-recover-fail.summary  54
-rw-r--r--  cts/scheduler/summary/remote-recover-no-resources.summary  143
-rw-r--r--  cts/scheduler/summary/remote-recover-unknown.summary  145
-rw-r--r--  cts/scheduler/summary/remote-recover.summary  34
-rw-r--r--  cts/scheduler/summary/remote-recovery.summary  132
-rw-r--r--  cts/scheduler/summary/remote-stale-node-entry.summary  112
-rw-r--r--  cts/scheduler/summary/remote-start-fail.summary  25
-rw-r--r--  cts/scheduler/summary/remote-startup-probes.summary  44
-rw-r--r--  cts/scheduler/summary/remote-startup.summary  39
-rw-r--r--  cts/scheduler/summary/remote-unclean2.summary  27
-rw-r--r--  cts/scheduler/summary/reprobe-target_rc.summary  55
-rw-r--r--  cts/scheduler/summary/resource-discovery.summary  128
-rw-r--r--  cts/scheduler/summary/restart-with-extra-op-params.summary  25
-rw-r--r--  cts/scheduler/summary/route-remote-notify.summary  98
-rw-r--r--  cts/scheduler/summary/rsc-defaults-2.summary  29
-rw-r--r--  cts/scheduler/summary/rsc-defaults.summary  41
-rw-r--r--  cts/scheduler/summary/rsc-discovery-per-node.summary  130
-rw-r--r--  cts/scheduler/summary/rsc-maintenance.summary  31
-rw-r--r--  cts/scheduler/summary/rsc-sets-clone-1.summary  86
-rw-r--r--  cts/scheduler/summary/rsc-sets-clone.summary  38
-rw-r--r--  cts/scheduler/summary/rsc-sets-promoted.summary  49
-rw-r--r--  cts/scheduler/summary/rsc-sets-seq-false.summary  47
-rw-r--r--  cts/scheduler/summary/rsc-sets-seq-true.summary  47
-rw-r--r--  cts/scheduler/summary/rsc_dep1.summary  27
-rw-r--r--  cts/scheduler/summary/rsc_dep10.summary  25
-rw-r--r--  cts/scheduler/summary/rsc_dep2.summary  33
-rw-r--r--  cts/scheduler/summary/rsc_dep3.summary  27
-rw-r--r--  cts/scheduler/summary/rsc_dep4.summary  36
-rw-r--r--  cts/scheduler/summary/rsc_dep5.summary  31
-rw-r--r--  cts/scheduler/summary/rsc_dep7.summary  33
-rw-r--r--  cts/scheduler/summary/rsc_dep8.summary  33
-rw-r--r--  cts/scheduler/summary/rule-dbl-as-auto-number-match.summary  21
-rw-r--r--  cts/scheduler/summary/rule-dbl-as-auto-number-no-match.summary  19
-rw-r--r--  cts/scheduler/summary/rule-dbl-as-integer-match.summary  21
-rw-r--r--  cts/scheduler/summary/rule-dbl-as-integer-no-match.summary  19
-rw-r--r--  cts/scheduler/summary/rule-dbl-as-number-match.summary  21
-rw-r--r--  cts/scheduler/summary/rule-dbl-as-number-no-match.summary  19
-rw-r--r--  cts/scheduler/summary/rule-dbl-parse-fail-default-str-match.summary  21
-rw-r--r--  cts/scheduler/summary/rule-dbl-parse-fail-default-str-no-match.summary  19
-rw-r--r--  cts/scheduler/summary/rule-int-as-auto-integer-match.summary  21
-rw-r--r--  cts/scheduler/summary/rule-int-as-auto-integer-no-match.summary  19
-rw-r--r--  cts/scheduler/summary/rule-int-as-integer-match.summary  21
-rw-r--r--  cts/scheduler/summary/rule-int-as-integer-no-match.summary  19
-rw-r--r--  cts/scheduler/summary/rule-int-as-number-match.summary  21
-rw-r--r--  cts/scheduler/summary/rule-int-as-number-no-match.summary  19
-rw-r--r--  cts/scheduler/summary/rule-int-parse-fail-default-str-match.summary  21
-rw-r--r--  cts/scheduler/summary/rule-int-parse-fail-default-str-no-match.summary  19
-rw-r--r--  cts/scheduler/summary/shutdown-lock-expiration.summary  33
-rw-r--r--  cts/scheduler/summary/shutdown-lock.summary  32
-rw-r--r--  cts/scheduler/summary/shutdown-maintenance-node.summary  21
-rw-r--r--  cts/scheduler/summary/simple1.summary  17
-rw-r--r--  cts/scheduler/summary/simple11.summary  27
-rw-r--r--  cts/scheduler/summary/simple12.summary  27
-rw-r--r--  cts/scheduler/summary/simple2.summary  21
-rw-r--r--  cts/scheduler/summary/simple3.summary  19
-rw-r--r--  cts/scheduler/summary/simple4.summary  19
-rw-r--r--  cts/scheduler/summary/simple6.summary  24
-rw-r--r--  cts/scheduler/summary/simple7.summary  20
-rw-r--r--  cts/scheduler/summary/simple8.summary  31
-rw-r--r--  cts/scheduler/summary/site-specific-params.summary  25
-rw-r--r--  cts/scheduler/summary/standby.summary  87
-rw-r--r--  cts/scheduler/summary/start-then-stop-with-unfence.summary  44
-rw-r--r--  cts/scheduler/summary/stonith-0.summary  111
-rw-r--r--  cts/scheduler/summary/stonith-1.summary  113
-rw-r--r--  cts/scheduler/summary/stonith-2.summary  78
-rw-r--r--  cts/scheduler/summary/stonith-3.summary  37
-rw-r--r--  cts/scheduler/summary/stonith-4.summary  40
-rw-r--r--  cts/scheduler/summary/stop-all-resources.summary  83
-rw-r--r--  cts/scheduler/summary/stop-failure-no-fencing.summary  27
-rw-r--r--  cts/scheduler/summary/stop-failure-no-quorum.summary  45
-rw-r--r--  cts/scheduler/summary/stop-failure-with-fencing.summary  45
-rw-r--r--  cts/scheduler/summary/stop-unexpected-2.summary  29
-rw-r--r--  cts/scheduler/summary/stop-unexpected.summary  41
-rw-r--r--  cts/scheduler/summary/stopped-monitor-00.summary  23
-rw-r--r--  cts/scheduler/summary/stopped-monitor-01.summary  21
-rw-r--r--  cts/scheduler/summary/stopped-monitor-02.summary  23
-rw-r--r--  cts/scheduler/summary/stopped-monitor-03.summary  22
-rw-r--r--  cts/scheduler/summary/stopped-monitor-04.summary  19
-rw-r--r--  cts/scheduler/summary/stopped-monitor-05.summary  19
-rw-r--r--  cts/scheduler/summary/stopped-monitor-06.summary  19
-rw-r--r--  cts/scheduler/summary/stopped-monitor-07.summary  19
-rw-r--r--  cts/scheduler/summary/stopped-monitor-08.summary  25
-rw-r--r--  cts/scheduler/summary/stopped-monitor-09.summary  17
-rw-r--r--  cts/scheduler/summary/stopped-monitor-10.summary  17
-rw-r--r--  cts/scheduler/summary/stopped-monitor-11.summary  19
-rw-r--r--  cts/scheduler/summary/stopped-monitor-12.summary  19
-rw-r--r--  cts/scheduler/summary/stopped-monitor-20.summary  23
-rw-r--r--  cts/scheduler/summary/stopped-monitor-21.summary  22
-rw-r--r--  cts/scheduler/summary/stopped-monitor-22.summary  25
-rw-r--r--  cts/scheduler/summary/stopped-monitor-23.summary  21
-rw-r--r--  cts/scheduler/summary/stopped-monitor-24.summary  19
-rw-r--r--  cts/scheduler/summary/stopped-monitor-25.summary  21
-rw-r--r--  cts/scheduler/summary/stopped-monitor-26.summary  17
-rw-r--r--  cts/scheduler/summary/stopped-monitor-27.summary  19
-rw-r--r--  cts/scheduler/summary/stopped-monitor-30.summary  19
-rw-r--r--  cts/scheduler/summary/stopped-monitor-31.summary  21
-rw-r--r--  cts/scheduler/summary/suicide-needed-inquorate.summary  27
-rw-r--r--  cts/scheduler/summary/suicide-not-needed-initial-quorum.summary  25
-rw-r--r--  cts/scheduler/summary/suicide-not-needed-never-quorate.summary  23
-rw-r--r--  cts/scheduler/summary/suicide-not-needed-quorate.summary  25
-rw-r--r--  cts/scheduler/summary/systemhealth1.summary  26
-rw-r--r--  cts/scheduler/summary/systemhealth2.summary  36
-rw-r--r--  cts/scheduler/summary/systemhealth3.summary  36
-rw-r--r--  cts/scheduler/summary/systemhealthm1.summary  26
-rw-r--r--  cts/scheduler/summary/systemhealthm2.summary  36
-rw-r--r--  cts/scheduler/summary/systemhealthm3.summary  28
-rw-r--r--  cts/scheduler/summary/systemhealthn1.summary  26
-rw-r--r--  cts/scheduler/summary/systemhealthn2.summary  36
-rw-r--r--  cts/scheduler/summary/systemhealthn3.summary  36
-rw-r--r--  cts/scheduler/summary/systemhealtho1.summary  26
-rw-r--r--  cts/scheduler/summary/systemhealtho2.summary  28
-rw-r--r--  cts/scheduler/summary/systemhealtho3.summary  28
-rw-r--r--  cts/scheduler/summary/systemhealthp1.summary  26
-rw-r--r--  cts/scheduler/summary/systemhealthp2.summary  34
-rw-r--r--  cts/scheduler/summary/systemhealthp3.summary  28
-rw-r--r--  cts/scheduler/summary/tags-coloc-order-1.summary  39
-rw-r--r--  cts/scheduler/summary/tags-coloc-order-2.summary  87
-rw-r--r--  cts/scheduler/summary/tags-location.summary  51
-rw-r--r--  cts/scheduler/summary/tags-ticket.summary  39
-rw-r--r--  cts/scheduler/summary/target-0.summary  40
-rw-r--r--  cts/scheduler/summary/target-1.summary  43
-rw-r--r--  cts/scheduler/summary/target-2.summary  44
-rw-r--r--  cts/scheduler/summary/template-1.summary  30
-rw-r--r--  cts/scheduler/summary/template-2.summary  28
-rw-r--r--  cts/scheduler/summary/template-3.summary  33
-rw-r--r--  cts/scheduler/summary/template-clone-group.summary  37
-rw-r--r--  cts/scheduler/summary/template-clone-primitive.summary  27
-rw-r--r--  cts/scheduler/summary/template-coloc-1.summary  39
-rw-r--r--  cts/scheduler/summary/template-coloc-2.summary  39
-rw-r--r--  cts/scheduler/summary/template-coloc-3.summary  51
-rw-r--r--  cts/scheduler/summary/template-order-1.summary  39
-rw-r--r--  cts/scheduler/summary/template-order-2.summary  39
-rw-r--r--  cts/scheduler/summary/template-order-3.summary  51
-rw-r--r--  cts/scheduler/summary/template-rsc-sets-1.summary  45
-rw-r--r--  cts/scheduler/summary/template-rsc-sets-2.summary  45
-rw-r--r--  cts/scheduler/summary/template-rsc-sets-3.summary  45
-rw-r--r--  cts/scheduler/summary/template-rsc-sets-4.summary  27
-rw-r--r--  cts/scheduler/summary/template-ticket.summary  27
-rw-r--r--  cts/scheduler/summary/ticket-clone-1.summary  23
-rw-r--r--  cts/scheduler/summary/ticket-clone-10.summary  23
-rw-r--r--  cts/scheduler/summary/ticket-clone-11.summary  29
-rw-r--r--  cts/scheduler/summary/ticket-clone-12.summary  21
-rw-r--r--  cts/scheduler/summary/ticket-clone-13.summary  21
-rw-r--r--  cts/scheduler/summary/ticket-clone-14.summary  27
-rw-r--r--  cts/scheduler/summary/ticket-clone-15.summary  27
-rw-r--r--  cts/scheduler/summary/ticket-clone-16.summary  21
-rw-r--r--  cts/scheduler/summary/ticket-clone-17.summary  27
-rw-r--r--  cts/scheduler/summary/ticket-clone-18.summary  27
-rw-r--r--  cts/scheduler/summary/ticket-clone-19.summary  21
-rw-r--r--  cts/scheduler/summary/ticket-clone-2.summary  29
-rw-r--r--  cts/scheduler/summary/ticket-clone-20.summary  27
-rw-r--r--  cts/scheduler/summary/ticket-clone-21.summary  33
-rw-r--r--  cts/scheduler/summary/ticket-clone-22.summary  21
-rw-r--r--  cts/scheduler/summary/ticket-clone-23.summary  27
-rw-r--r--  cts/scheduler/summary/ticket-clone-24.summary  21
-rw-r--r--  cts/scheduler/summary/ticket-clone-3.summary  27
-rw-r--r--  cts/scheduler/summary/ticket-clone-4.summary  23
-rw-r--r--  cts/scheduler/summary/ticket-clone-5.summary  29
-rw-r--r--  cts/scheduler/summary/ticket-clone-6.summary  27
-rw-r--r--  cts/scheduler/summary/ticket-clone-7.summary  23
-rw-r--r--  cts/scheduler/summary/ticket-clone-8.summary  29
-rw-r--r--  cts/scheduler/summary/ticket-clone-9.summary  33
-rw-r--r--  cts/scheduler/summary/ticket-group-1.summary  27
-rw-r--r--  cts/scheduler/summary/ticket-group-10.summary  27
-rw-r--r--  cts/scheduler/summary/ticket-group-11.summary  31
-rw-r--r--  cts/scheduler/summary/ticket-group-12.summary  23
-rw-r--r--  cts/scheduler/summary/ticket-group-13.summary  23
-rw-r--r--  cts/scheduler/summary/ticket-group-14.summary  29
-rw-r--r--  cts/scheduler/summary/ticket-group-15.summary  29
-rw-r--r--  cts/scheduler/summary/ticket-group-16.summary  23
-rw-r--r--  cts/scheduler/summary/ticket-group-17.summary  29
-rw-r--r--  cts/scheduler/summary/ticket-group-18.summary  29
-rw-r--r--  cts/scheduler/summary/ticket-group-19.summary  23
-rw-r--r--  cts/scheduler/summary/ticket-group-2.summary  31
-rw-r--r--  cts/scheduler/summary/ticket-group-20.summary  29
-rw-r--r--  cts/scheduler/summary/ticket-group-21.summary  32
-rw-r--r--  cts/scheduler/summary/ticket-group-22.summary  23
-rw-r--r--  cts/scheduler/summary/ticket-group-23.summary  29
-rw-r--r--  cts/scheduler/summary/ticket-group-24.summary  23
-rw-r--r--  cts/scheduler/summary/ticket-group-3.summary  29
-rw-r--r--  cts/scheduler/summary/ticket-group-4.summary  27
-rw-r--r--  cts/scheduler/summary/ticket-group-5.summary  31
-rw-r--r--  cts/scheduler/summary/ticket-group-6.summary  29
-rw-r--r--  cts/scheduler/summary/ticket-group-7.summary  27
-rw-r--r--  cts/scheduler/summary/ticket-group-8.summary  31
-rw-r--r--  cts/scheduler/summary/ticket-group-9.summary  32
-rw-r--r--  cts/scheduler/summary/ticket-primitive-1.summary  21
-rw-r--r--  cts/scheduler/summary/ticket-primitive-10.summary  21
-rw-r--r--  cts/scheduler/summary/ticket-primitive-11.summary  22
-rw-r--r--  cts/scheduler/summary/ticket-primitive-12.summary  19
-rw-r--r--  cts/scheduler/summary/ticket-primitive-13.summary  19
-rw-r--r--  cts/scheduler/summary/ticket-primitive-14.summary  21
-rw-r--r--  cts/scheduler/summary/ticket-primitive-15.summary  21
-rw-r--r--  cts/scheduler/summary/ticket-primitive-16.summary  19
-rw-r--r--  cts/scheduler/summary/ticket-primitive-17.summary  21
-rw-r--r--  cts/scheduler/summary/ticket-primitive-18.summary  21
-rw-r--r--  cts/scheduler/summary/ticket-primitive-19.summary  19
-rw-r--r--  cts/scheduler/summary/ticket-primitive-2.summary  22
-rw-r--r--  cts/scheduler/summary/ticket-primitive-20.summary  21
-rw-r--r--  cts/scheduler/summary/ticket-primitive-21.summary  24
-rw-r--r--  cts/scheduler/summary/ticket-primitive-22.summary  19
-rw-r--r--  cts/scheduler/summary/ticket-primitive-23.summary  21
-rw-r--r--  cts/scheduler/summary/ticket-primitive-24.summary  19
-rw-r--r--  cts/scheduler/summary/ticket-primitive-3.summary  21
-rw-r--r--  cts/scheduler/summary/ticket-primitive-4.summary  21
-rw-r--r--  cts/scheduler/summary/ticket-primitive-5.summary  22
-rw-r--r--  cts/scheduler/summary/ticket-primitive-6.summary  21
-rw-r--r--  cts/scheduler/summary/ticket-primitive-7.summary  21
-rw-r--r--  cts/scheduler/summary/ticket-primitive-8.summary  22
-rw-r--r--  cts/scheduler/summary/ticket-primitive-9.summary  24
-rw-r--r--  cts/scheduler/summary/ticket-promoted-1.summary  23
-rw-r--r--  cts/scheduler/summary/ticket-promoted-10.summary  29
-rw-r--r--  cts/scheduler/summary/ticket-promoted-11.summary  26
-rw-r--r--  cts/scheduler/summary/ticket-promoted-12.summary  23
-rw-r--r--  cts/scheduler/summary/ticket-promoted-13.summary  21
-rw-r--r--  cts/scheduler/summary/ticket-promoted-14.summary  31
-rw-r--r--  cts/scheduler/summary/ticket-promoted-15.summary  31
-rw-r--r--  cts/scheduler/summary/ticket-promoted-16.summary  21
-rw-r--r--  cts/scheduler/summary/ticket-promoted-17.summary  26
-rw-r--r--  cts/scheduler/summary/ticket-promoted-18.summary  26
-rw-r--r--  cts/scheduler/summary/ticket-promoted-19.summary  21
-rw-r--r--  cts/scheduler/summary/ticket-promoted-2.summary  31
-rw-r--r--  cts/scheduler/summary/ticket-promoted-20.summary  26
-rw-r--r--  cts/scheduler/summary/ticket-promoted-21.summary  36
-rw-r--r--  cts/scheduler/summary/ticket-promoted-22.summary  21
-rw-r--r--  cts/scheduler/summary/ticket-promoted-23.summary  26
-rw-r--r--  cts/scheduler/summary/ticket-promoted-24.summary  23
-rw-r--r--  cts/scheduler/summary/ticket-promoted-3.summary  31
-rw-r--r--  cts/scheduler/summary/ticket-promoted-4.summary  29
-rw-r--r--  cts/scheduler/summary/ticket-promoted-5.summary  26
-rw-r--r--  cts/scheduler/summary/ticket-promoted-6.summary  26
-rw-r--r--  cts/scheduler/summary/ticket-promoted-7.summary  29
-rw-r--r--  cts/scheduler/summary/ticket-promoted-8.summary  26
-rw-r--r--  cts/scheduler/summary/ticket-promoted-9.summary  36
-rw-r--r--  cts/scheduler/summary/ticket-rsc-sets-1.summary  49
-rw-r--r--  cts/scheduler/summary/ticket-rsc-sets-10.summary  52
-rw-r--r--  cts/scheduler/summary/ticket-rsc-sets-11.summary  33
-rw-r--r--  cts/scheduler/summary/ticket-rsc-sets-12.summary  41
-rw-r--r--  cts/scheduler/summary/ticket-rsc-sets-13.summary  52
-rw-r--r--  cts/scheduler/summary/ticket-rsc-sets-14.summary  52
-rw-r--r--  cts/scheduler/summary/ticket-rsc-sets-2.summary  57
-rw-r--r--  cts/scheduler/summary/ticket-rsc-sets-3.summary  52
-rw-r--r--  cts/scheduler/summary/ticket-rsc-sets-4.summary  49
-rw-r--r--  cts/scheduler/summary/ticket-rsc-sets-5.summary  44
-rw-r--r--  cts/scheduler/summary/ticket-rsc-sets-6.summary  46
-rw-r--r--  cts/scheduler/summary/ticket-rsc-sets-7.summary  52
-rw-r--r--  cts/scheduler/summary/ticket-rsc-sets-8.summary  33
-rw-r--r--  cts/scheduler/summary/ticket-rsc-sets-9.summary  52
-rw-r--r--  cts/scheduler/summary/unfence-definition.summary  65
-rw-r--r--  cts/scheduler/summary/unfence-device.summary  31
-rw-r--r--  cts/scheduler/summary/unfence-parameters.summary  64
-rw-r--r--  cts/scheduler/summary/unfence-startup.summary  49
-rw-r--r--  cts/scheduler/summary/unmanaged-block-restart.summary  32
-rw-r--r--  cts/scheduler/summary/unmanaged-promoted.summary  75
-rw-r--r--  cts/scheduler/summary/unmanaged-stop-1.summary  22
-rw-r--r--  cts/scheduler/summary/unmanaged-stop-2.summary  22
-rw-r--r--  cts/scheduler/summary/unmanaged-stop-3.summary  25
-rw-r--r--  cts/scheduler/summary/unmanaged-stop-4.summary  27
-rw-r--r--  cts/scheduler/summary/unrunnable-1.summary  67
-rw-r--r--  cts/scheduler/summary/unrunnable-2.summary  178
-rw-r--r--  cts/scheduler/summary/use-after-free-merge.summary  45
-rw-r--r--  cts/scheduler/summary/utilization-check-allowed-nodes.summary  27
-rw-r--r--  cts/scheduler/summary/utilization-complex.summary  148
-rw-r--r--  cts/scheduler/summary/utilization-order1.summary  25
-rw-r--r--  cts/scheduler/summary/utilization-order2.summary  39
-rw-r--r--  cts/scheduler/summary/utilization-order3.summary  28
-rw-r--r--  cts/scheduler/summary/utilization-order4.summary  63
-rw-r--r--  cts/scheduler/summary/utilization-shuffle.summary  94
-rw-r--r--  cts/scheduler/summary/utilization.summary  27
-rw-r--r--  cts/scheduler/summary/value-source.summary  62
-rw-r--r--  cts/scheduler/summary/whitebox-asymmetric.summary  42
-rw-r--r--  cts/scheduler/summary/whitebox-fail1.summary  59
-rw-r--r--  cts/scheduler/summary/whitebox-fail2.summary  59
-rw-r--r--  cts/scheduler/summary/whitebox-fail3.summary  55
-rw-r--r--  cts/scheduler/summary/whitebox-imply-stop-on-fence.summary  104
-rw-r--r--  cts/scheduler/summary/whitebox-migrate1.summary  56
-rw-r--r--  cts/scheduler/summary/whitebox-move.summary  49
-rw-r--r--  cts/scheduler/summary/whitebox-ms-ordering-move.summary  107
-rw-r--r--  cts/scheduler/summary/whitebox-ms-ordering.summary  73
-rw-r--r--  cts/scheduler/summary/whitebox-nested-group.summary  102
-rw-r--r--  cts/scheduler/summary/whitebox-orphan-ms.summary  87
-rw-r--r--  cts/scheduler/summary/whitebox-orphaned.summary  59
-rw-r--r--  cts/scheduler/summary/whitebox-start.summary  56
-rw-r--r--  cts/scheduler/summary/whitebox-stop.summary  53
-rw-r--r--  cts/scheduler/summary/whitebox-unexpectedly-running.summary  35
-rw-r--r--  cts/scheduler/summary/year-2038.summary  112
811 files changed, 40773 insertions, 0 deletions
diff --git a/cts/scheduler/summary/1-a-then-bm-move-b.summary b/cts/scheduler/summary/1-a-then-bm-move-b.summary
new file mode 100644
index 0000000..b261578
--- /dev/null
+++ b/cts/scheduler/summary/1-a-then-bm-move-b.summary
@@ -0,0 +1,25 @@
+Current cluster status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+
+ * Full List of Resources:
+ * A (ocf:heartbeat:Dummy): Started 18node1
+ * B (ocf:heartbeat:Dummy): Started 18node2
+
+Transition Summary:
+ * Migrate B ( 18node2 -> 18node1 )
+
+Executing Cluster Transition:
+ * Resource action: B migrate_to on 18node2
+ * Resource action: B migrate_from on 18node1
+ * Resource action: B stop on 18node2
+ * Pseudo action: B_start_0
+ * Resource action: B monitor=60000 on 18node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+
+ * Full List of Resources:
+ * A (ocf:heartbeat:Dummy): Started 18node1
+ * B (ocf:heartbeat:Dummy): Started 18node1
diff --git a/cts/scheduler/summary/10-a-then-bm-b-move-a-clone.summary b/cts/scheduler/summary/10-a-then-bm-b-move-a-clone.summary
new file mode 100644
index 0000000..dd14d65
--- /dev/null
+++ b/cts/scheduler/summary/10-a-then-bm-b-move-a-clone.summary
@@ -0,0 +1,33 @@
+Current cluster status:
+ * Node List:
+ * Node f20node1: standby (with active resources)
+ * Online: [ f20node2 ]
+
+ * Full List of Resources:
+ * Clone Set: myclone-clone [myclone]:
+ * Started: [ f20node1 f20node2 ]
+ * vm (ocf:heartbeat:Dummy): Started f20node1
+
+Transition Summary:
+ * Stop myclone:1 ( f20node1 ) due to node availability
+ * Migrate vm ( f20node1 -> f20node2 )
+
+Executing Cluster Transition:
+ * Resource action: vm migrate_to on f20node1
+ * Resource action: vm migrate_from on f20node2
+ * Resource action: vm stop on f20node1
+ * Pseudo action: myclone-clone_stop_0
+ * Pseudo action: vm_start_0
+ * Resource action: myclone stop on f20node1
+ * Pseudo action: myclone-clone_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Node f20node1: standby
+ * Online: [ f20node2 ]
+
+ * Full List of Resources:
+ * Clone Set: myclone-clone [myclone]:
+ * Started: [ f20node2 ]
+ * Stopped: [ f20node1 ]
+ * vm (ocf:heartbeat:Dummy): Started f20node2
diff --git a/cts/scheduler/summary/11-a-then-bm-b-move-a-clone-starting.summary b/cts/scheduler/summary/11-a-then-bm-b-move-a-clone-starting.summary
new file mode 100644
index 0000000..7bd3b49
--- /dev/null
+++ b/cts/scheduler/summary/11-a-then-bm-b-move-a-clone-starting.summary
@@ -0,0 +1,36 @@
+Current cluster status:
+ * Node List:
+ * Node f20node1: standby (with active resources)
+ * Online: [ f20node2 ]
+
+ * Full List of Resources:
+ * Clone Set: myclone-clone [myclone]:
+ * Started: [ f20node1 ]
+ * Stopped: [ f20node2 ]
+ * vm (ocf:heartbeat:Dummy): Started f20node1
+
+Transition Summary:
+ * Move myclone:0 ( f20node1 -> f20node2 )
+ * Move vm ( f20node1 -> f20node2 ) due to unrunnable myclone-clone stop
+
+Executing Cluster Transition:
+ * Resource action: myclone monitor on f20node2
+ * Resource action: vm stop on f20node1
+ * Pseudo action: myclone-clone_stop_0
+ * Resource action: myclone stop on f20node1
+ * Pseudo action: myclone-clone_stopped_0
+ * Pseudo action: myclone-clone_start_0
+ * Resource action: myclone start on f20node2
+ * Pseudo action: myclone-clone_running_0
+ * Resource action: vm start on f20node2
+
+Revised Cluster Status:
+ * Node List:
+ * Node f20node1: standby
+ * Online: [ f20node2 ]
+
+ * Full List of Resources:
+ * Clone Set: myclone-clone [myclone]:
+ * Started: [ f20node2 ]
+ * Stopped: [ f20node1 ]
+ * vm (ocf:heartbeat:Dummy): Started f20node2
diff --git a/cts/scheduler/summary/1360.summary b/cts/scheduler/summary/1360.summary
new file mode 100644
index 0000000..6a08320
--- /dev/null
+++ b/cts/scheduler/summary/1360.summary
@@ -0,0 +1,30 @@
+Current cluster status:
+ * Node List:
+ * Online: [ ssgtest1a ssgtest1b ]
+
+ * Full List of Resources:
+ * Resource Group: ClusterAlias:
+ * VIP (ocf:testing:VIP-RIP.sh): Started ssgtest1a
+ * Clone Set: dolly [dollies]:
+ * Started: [ ssgtest1a ]
+
+Transition Summary:
+ * Move dollies:0 ( ssgtest1a -> ssgtest1b )
+
+Executing Cluster Transition:
+ * Pseudo action: dolly_stop_0
+ * Resource action: dollies:0 stop on ssgtest1a
+ * Pseudo action: dolly_stopped_0
+ * Pseudo action: dolly_start_0
+ * Resource action: dollies:0 start on ssgtest1b
+ * Pseudo action: dolly_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ ssgtest1a ssgtest1b ]
+
+ * Full List of Resources:
+ * Resource Group: ClusterAlias:
+ * VIP (ocf:testing:VIP-RIP.sh): Started ssgtest1a
+ * Clone Set: dolly [dollies]:
+ * Started: [ ssgtest1b ]
diff --git a/cts/scheduler/summary/1484.summary b/cts/scheduler/summary/1484.summary
new file mode 100644
index 0000000..92b6f09
--- /dev/null
+++ b/cts/scheduler/summary/1484.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ hb1 hb2 ]
+ * OFFLINE: [ hb3 ]
+
+ * Full List of Resources:
+ * the-future-of-vaj (ocf:heartbeat:Dummy): FAILED hb2
+
+Transition Summary:
+ * Stop the-future-of-vaj ( hb2 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: the-future-of-vaj stop on hb2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hb1 hb2 ]
+ * OFFLINE: [ hb3 ]
+
+ * Full List of Resources:
+ * the-future-of-vaj (ocf:heartbeat:Dummy): Stopped
diff --git a/cts/scheduler/summary/1494.summary b/cts/scheduler/summary/1494.summary
new file mode 100644
index 0000000..f0792c3
--- /dev/null
+++ b/cts/scheduler/summary/1494.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ hb1 hb2 ]
+ * OFFLINE: [ hb3 ]
+
+ * Full List of Resources:
+ * Clone Set: ima_cloneid [ima_rscid] (unique):
+ * ima_rscid:0 (ocf:heartbeat:Dummy): Started hb1
+ * ima_rscid:1 (ocf:heartbeat:Dummy): Started hb2
+
+Transition Summary:
+ * Stop ima_rscid:0 ( hb1 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: ima_cloneid_stop_0
+ * Resource action: ima_rscid:0 stop on hb1
+ * Pseudo action: ima_cloneid_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hb1 hb2 ]
+ * OFFLINE: [ hb3 ]
+
+ * Full List of Resources:
+ * Clone Set: ima_cloneid [ima_rscid] (unique):
+ * ima_rscid:0 (ocf:heartbeat:Dummy): Stopped
+ * ima_rscid:1 (ocf:heartbeat:Dummy): Started hb2
diff --git a/cts/scheduler/summary/2-am-then-b-move-a.summary b/cts/scheduler/summary/2-am-then-b-move-a.summary
new file mode 100644
index 0000000..4fb45d7
--- /dev/null
+++ b/cts/scheduler/summary/2-am-then-b-move-a.summary
@@ -0,0 +1,25 @@
+Current cluster status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+
+ * Full List of Resources:
+ * A (ocf:heartbeat:Dummy): Started 18node1
+ * B (ocf:heartbeat:Dummy): Started 18node2
+
+Transition Summary:
+ * Migrate A ( 18node1 -> 18node2 )
+
+Executing Cluster Transition:
+ * Resource action: A migrate_to on 18node1
+ * Resource action: A migrate_from on 18node2
+ * Resource action: A stop on 18node1
+ * Pseudo action: A_start_0
+ * Resource action: A monitor=60000 on 18node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+
+ * Full List of Resources:
+ * A (ocf:heartbeat:Dummy): Started 18node2
+ * B (ocf:heartbeat:Dummy): Started 18node2
diff --git a/cts/scheduler/summary/3-am-then-bm-both-migrate.summary b/cts/scheduler/summary/3-am-then-bm-both-migrate.summary
new file mode 100644
index 0000000..4498194
--- /dev/null
+++ b/cts/scheduler/summary/3-am-then-bm-both-migrate.summary
@@ -0,0 +1,31 @@
+Current cluster status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+
+ * Full List of Resources:
+ * A (ocf:heartbeat:Dummy): Started 18node1
+ * B (ocf:heartbeat:Dummy): Started 18node2
+
+Transition Summary:
+ * Migrate A ( 18node1 -> 18node2 )
+ * Migrate B ( 18node2 -> 18node1 )
+
+Executing Cluster Transition:
+ * Resource action: A migrate_to on 18node1
+ * Resource action: A migrate_from on 18node2
+ * Resource action: B migrate_to on 18node2
+ * Resource action: B migrate_from on 18node1
+ * Resource action: B stop on 18node2
+ * Resource action: A stop on 18node1
+ * Pseudo action: A_start_0
+ * Pseudo action: B_start_0
+ * Resource action: A monitor=60000 on 18node2
+ * Resource action: B monitor=60000 on 18node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+
+ * Full List of Resources:
+ * A (ocf:heartbeat:Dummy): Started 18node2
+ * B (ocf:heartbeat:Dummy): Started 18node1
diff --git a/cts/scheduler/summary/4-am-then-bm-b-not-migratable.summary b/cts/scheduler/summary/4-am-then-bm-b-not-migratable.summary
new file mode 100644
index 0000000..6459c74
--- /dev/null
+++ b/cts/scheduler/summary/4-am-then-bm-b-not-migratable.summary
@@ -0,0 +1,29 @@
+Current cluster status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+
+ * Full List of Resources:
+ * A (ocf:heartbeat:Dummy): Started 18node1
+ * B (ocf:heartbeat:Dummy): Started 18node2
+
+Transition Summary:
+ * Migrate A ( 18node1 -> 18node2 )
+ * Move B ( 18node2 -> 18node1 )
+
+Executing Cluster Transition:
+ * Resource action: B stop on 18node2
+ * Resource action: A migrate_to on 18node1
+ * Resource action: A migrate_from on 18node2
+ * Resource action: A stop on 18node1
+ * Pseudo action: A_start_0
+ * Resource action: B start on 18node1
+ * Resource action: A monitor=60000 on 18node2
+ * Resource action: B monitor=60000 on 18node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+
+ * Full List of Resources:
+ * A (ocf:heartbeat:Dummy): Started 18node2
+ * B (ocf:heartbeat:Dummy): Started 18node1
diff --git a/cts/scheduler/summary/5-am-then-bm-a-not-migratable.summary b/cts/scheduler/summary/5-am-then-bm-a-not-migratable.summary
new file mode 100644
index 0000000..2c88bc3
--- /dev/null
+++ b/cts/scheduler/summary/5-am-then-bm-a-not-migratable.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+
+ * Full List of Resources:
+ * A (ocf:heartbeat:Dummy): Started 18node1
+ * B (ocf:heartbeat:Dummy): Started 18node2
+
+Transition Summary:
+ * Move A ( 18node1 -> 18node2 )
+ * Move B ( 18node2 -> 18node1 ) due to unrunnable A stop
+
+Executing Cluster Transition:
+ * Resource action: B stop on 18node2
+ * Resource action: A stop on 18node1
+ * Resource action: A start on 18node2
+ * Resource action: B start on 18node1
+ * Resource action: A monitor=60000 on 18node2
+ * Resource action: B monitor=60000 on 18node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+
+ * Full List of Resources:
+ * A (ocf:heartbeat:Dummy): Started 18node2
+ * B (ocf:heartbeat:Dummy): Started 18node1
diff --git a/cts/scheduler/summary/594.summary b/cts/scheduler/summary/594.summary
new file mode 100644
index 0000000..dc6db75
--- /dev/null
+++ b/cts/scheduler/summary/594.summary
@@ -0,0 +1,55 @@
+Current cluster status:
+ * Node List:
+ * Node hadev3: UNCLEAN (offline)
+ * Online: [ hadev1 hadev2 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started hadev2
+ * rsc_hadev3 (ocf:heartbeat:IPaddr): Started hadev1
+ * rsc_hadev2 (ocf:heartbeat:IPaddr): Started hadev2
+ * rsc_hadev1 (ocf:heartbeat:IPaddr): Started hadev1
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started hadev2
+ * child_DoFencing:1 (stonith:ssh): Started hadev1
+ * child_DoFencing:2 (stonith:ssh): Started hadev1
+
+Transition Summary:
+ * Fence (reboot) hadev3 'peer is no longer part of the cluster'
+ * Move DcIPaddr ( hadev2 -> hadev1 )
+ * Move rsc_hadev2 ( hadev2 -> hadev1 )
+ * Stop child_DoFencing:0 ( hadev2 ) due to node availability
+ * Stop child_DoFencing:2 ( hadev1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr stop on hadev2
+ * Resource action: DcIPaddr monitor on hadev1
+ * Resource action: rsc_hadev3 monitor on hadev2
+ * Resource action: rsc_hadev2 stop on hadev2
+ * Resource action: rsc_hadev2 monitor on hadev1
+ * Resource action: child_DoFencing:0 monitor on hadev1
+ * Resource action: child_DoFencing:2 monitor on hadev2
+ * Pseudo action: DoFencing_stop_0
+ * Fencing hadev3 (reboot)
+ * Resource action: DcIPaddr start on hadev1
+ * Resource action: rsc_hadev2 start on hadev1
+ * Resource action: child_DoFencing:0 stop on hadev2
+ * Resource action: child_DoFencing:2 stop on hadev1
+ * Pseudo action: DoFencing_stopped_0
+ * Cluster action: do_shutdown on hadev2
+ * Resource action: DcIPaddr monitor=5000 on hadev1
+ * Resource action: rsc_hadev2 monitor=5000 on hadev1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hadev1 hadev2 ]
+ * OFFLINE: [ hadev3 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started hadev1
+ * rsc_hadev3 (ocf:heartbeat:IPaddr): Started hadev1
+ * rsc_hadev2 (ocf:heartbeat:IPaddr): Started hadev1
+ * rsc_hadev1 (ocf:heartbeat:IPaddr): Started hadev1
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Stopped
+ * child_DoFencing:1 (stonith:ssh): Started hadev1
+ * child_DoFencing:2 (stonith:ssh): Stopped
diff --git a/cts/scheduler/summary/6-migrate-group.summary b/cts/scheduler/summary/6-migrate-group.summary
new file mode 100644
index 0000000..bfa374b
--- /dev/null
+++ b/cts/scheduler/summary/6-migrate-group.summary
@@ -0,0 +1,45 @@
+Current cluster status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+
+ * Full List of Resources:
+ * Resource Group: thegroup:
+ * A (ocf:heartbeat:Dummy): Started 18node1
+ * B (ocf:heartbeat:Dummy): Started 18node1
+ * C (ocf:heartbeat:Dummy): Started 18node1
+
+Transition Summary:
+ * Migrate A ( 18node1 -> 18node2 )
+ * Migrate B ( 18node1 -> 18node2 )
+ * Migrate C ( 18node1 -> 18node2 )
+
+Executing Cluster Transition:
+ * Pseudo action: thegroup_stop_0
+ * Resource action: A migrate_to on 18node1
+ * Resource action: A migrate_from on 18node2
+ * Resource action: B migrate_to on 18node1
+ * Resource action: B migrate_from on 18node2
+ * Resource action: C migrate_to on 18node1
+ * Resource action: C migrate_from on 18node2
+ * Resource action: C stop on 18node1
+ * Resource action: B stop on 18node1
+ * Resource action: A stop on 18node1
+ * Pseudo action: thegroup_stopped_0
+ * Pseudo action: thegroup_start_0
+ * Pseudo action: A_start_0
+ * Pseudo action: B_start_0
+ * Pseudo action: C_start_0
+ * Pseudo action: thegroup_running_0
+ * Resource action: A monitor=60000 on 18node2
+ * Resource action: B monitor=60000 on 18node2
+ * Resource action: C monitor=60000 on 18node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+
+ * Full List of Resources:
+ * Resource Group: thegroup:
+ * A (ocf:heartbeat:Dummy): Started 18node2
+ * B (ocf:heartbeat:Dummy): Started 18node2
+ * C (ocf:heartbeat:Dummy): Started 18node2
diff --git a/cts/scheduler/summary/662.summary b/cts/scheduler/summary/662.summary
new file mode 100644
index 0000000..1ad51a4
--- /dev/null
+++ b/cts/scheduler/summary/662.summary
@@ -0,0 +1,67 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c001n02 c001n03 c001n04 c001n09 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n09
+ * rsc_c001n09 (ocf:heartbeat:IPaddr): Started c001n09
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n04 (ocf:heartbeat:IPaddr): Started c001n04
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started c001n02
+ * child_DoFencing:1 (stonith:ssh): Started c001n03
+ * child_DoFencing:2 (stonith:ssh): Started c001n04
+ * child_DoFencing:3 (stonith:ssh): Started c001n09
+
+Transition Summary:
+ * Move rsc_c001n02 ( c001n02 -> c001n03 )
+ * Stop child_DoFencing:0 ( c001n02 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr monitor on c001n04
+ * Resource action: DcIPaddr monitor on c001n03
+ * Resource action: DcIPaddr monitor on c001n02
+ * Resource action: rsc_c001n09 monitor on c001n04
+ * Resource action: rsc_c001n09 monitor on c001n03
+ * Resource action: rsc_c001n09 monitor on c001n02
+ * Resource action: rsc_c001n02 stop on c001n02
+ * Resource action: rsc_c001n02 monitor on c001n09
+ * Resource action: rsc_c001n02 monitor on c001n04
+ * Resource action: rsc_c001n02 monitor on c001n03
+ * Resource action: rsc_c001n03 monitor on c001n09
+ * Resource action: rsc_c001n03 monitor on c001n04
+ * Resource action: rsc_c001n03 monitor on c001n02
+ * Resource action: rsc_c001n04 monitor on c001n09
+ * Resource action: rsc_c001n04 monitor on c001n03
+ * Resource action: child_DoFencing:0 monitor on c001n09
+ * Resource action: child_DoFencing:0 monitor on c001n04
+ * Resource action: child_DoFencing:1 monitor on c001n04
+ * Resource action: child_DoFencing:1 monitor on c001n02
+ * Resource action: child_DoFencing:2 monitor on c001n09
+ * Resource action: child_DoFencing:2 monitor on c001n03
+ * Resource action: child_DoFencing:3 monitor on c001n04
+ * Resource action: child_DoFencing:3 monitor on c001n03
+ * Resource action: child_DoFencing:3 monitor on c001n02
+ * Pseudo action: DoFencing_stop_0
+ * Resource action: rsc_c001n02 start on c001n03
+ * Resource action: child_DoFencing:0 stop on c001n02
+ * Pseudo action: DoFencing_stopped_0
+ * Cluster action: do_shutdown on c001n02
+ * Resource action: rsc_c001n02 monitor=5000 on c001n03
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n02 c001n03 c001n04 c001n09 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n09
+ * rsc_c001n09 (ocf:heartbeat:IPaddr): Started c001n09
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n04 (ocf:heartbeat:IPaddr): Started c001n04
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Stopped
+ * child_DoFencing:1 (stonith:ssh): Started c001n03
+ * child_DoFencing:2 (stonith:ssh): Started c001n04
+ * child_DoFencing:3 (stonith:ssh): Started c001n09
diff --git a/cts/scheduler/summary/696.summary b/cts/scheduler/summary/696.summary
new file mode 100644
index 0000000..3090cae
--- /dev/null
+++ b/cts/scheduler/summary/696.summary
@@ -0,0 +1,62 @@
+Current cluster status:
+ * Node List:
+ * Online: [ hadev1 hadev2 hadev3 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Starting hadev2
+ * rsc_hadev1 (ocf:heartbeat:IPaddr): Started hadev3 (Monitoring)
+ * rsc_hadev2 (ocf:heartbeat:IPaddr): Starting hadev2
+ * rsc_hadev3 (ocf:heartbeat:IPaddr): Started hadev3 (Monitoring)
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started hadev2 (Monitoring)
+ * child_DoFencing:1 (stonith:ssh): Started hadev3 (Monitoring)
+ * child_DoFencing:2 (stonith:ssh): Stopped
+
+Transition Summary:
+ * Move rsc_hadev1 ( hadev3 -> hadev1 )
+ * Start child_DoFencing:2 ( hadev1 )
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr monitor on hadev3
+ * Resource action: DcIPaddr monitor on hadev1
+ * Resource action: rsc_hadev1 stop on hadev3
+ * Resource action: rsc_hadev1 monitor on hadev2
+ * Resource action: rsc_hadev1 monitor on hadev1
+ * Resource action: rsc_hadev2 monitor on hadev3
+ * Resource action: rsc_hadev2 monitor on hadev1
+ * Resource action: rsc_hadev3 monitor=5000 on hadev3
+ * Resource action: rsc_hadev3 monitor on hadev2
+ * Resource action: rsc_hadev3 monitor on hadev1
+ * Resource action: child_DoFencing:0 monitor=5000 on hadev2
+ * Resource action: child_DoFencing:0 monitor on hadev3
+ * Resource action: child_DoFencing:0 monitor on hadev1
+ * Resource action: child_DoFencing:1 monitor=5000 on hadev3
+ * Resource action: child_DoFencing:1 monitor on hadev2
+ * Resource action: child_DoFencing:1 monitor on hadev1
+ * Resource action: child_DoFencing:2 monitor on hadev3
+ * Resource action: child_DoFencing:2 monitor on hadev2
+ * Resource action: child_DoFencing:2 monitor on hadev1
+ * Pseudo action: DoFencing_start_0
+ * Resource action: DcIPaddr start on hadev2
+ * Resource action: rsc_hadev1 start on hadev1
+ * Resource action: rsc_hadev2 start on hadev2
+ * Resource action: child_DoFencing:2 start on hadev1
+ * Pseudo action: DoFencing_running_0
+ * Resource action: DcIPaddr monitor=5000 on hadev2
+ * Resource action: rsc_hadev1 monitor=5000 on hadev1
+ * Resource action: rsc_hadev2 monitor=5000 on hadev2
+ * Resource action: child_DoFencing:2 monitor=5000 on hadev1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hadev1 hadev2 hadev3 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started hadev2
+ * rsc_hadev1 (ocf:heartbeat:IPaddr): Started hadev1
+ * rsc_hadev2 (ocf:heartbeat:IPaddr): Started hadev2
+ * rsc_hadev3 (ocf:heartbeat:IPaddr): Started hadev3
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started hadev2
+ * child_DoFencing:1 (stonith:ssh): Started hadev3
+ * child_DoFencing:2 (stonith:ssh): Started hadev1
diff --git a/cts/scheduler/summary/7-migrate-group-one-unmigratable.summary b/cts/scheduler/summary/7-migrate-group-one-unmigratable.summary
new file mode 100644
index 0000000..0d0c7ff
--- /dev/null
+++ b/cts/scheduler/summary/7-migrate-group-one-unmigratable.summary
@@ -0,0 +1,41 @@
+Current cluster status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+
+ * Full List of Resources:
+ * Resource Group: thegroup:
+ * A (ocf:heartbeat:Dummy): Started 18node1
+ * B (ocf:heartbeat:Dummy): Started 18node1
+ * C (ocf:heartbeat:Dummy): Started 18node1
+
+Transition Summary:
+ * Migrate A ( 18node1 -> 18node2 )
+ * Move B ( 18node1 -> 18node2 )
+ * Move C ( 18node1 -> 18node2 ) due to unrunnable B stop
+
+Executing Cluster Transition:
+ * Pseudo action: thegroup_stop_0
+ * Resource action: C stop on 18node1
+ * Resource action: B stop on 18node1
+ * Resource action: A migrate_to on 18node1
+ * Resource action: A migrate_from on 18node2
+ * Resource action: A stop on 18node1
+ * Pseudo action: thegroup_stopped_0
+ * Pseudo action: thegroup_start_0
+ * Pseudo action: A_start_0
+ * Resource action: B start on 18node2
+ * Resource action: C start on 18node2
+ * Pseudo action: thegroup_running_0
+ * Resource action: A monitor=60000 on 18node2
+ * Resource action: B monitor=60000 on 18node2
+ * Resource action: C monitor=60000 on 18node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+
+ * Full List of Resources:
+ * Resource Group: thegroup:
+ * A (ocf:heartbeat:Dummy): Started 18node2
+ * B (ocf:heartbeat:Dummy): Started 18node2
+ * C (ocf:heartbeat:Dummy): Started 18node2
diff --git a/cts/scheduler/summary/726.summary b/cts/scheduler/summary/726.summary
new file mode 100644
index 0000000..4bd880e
--- /dev/null
+++ b/cts/scheduler/summary/726.summary
@@ -0,0 +1,89 @@
+Current cluster status:
+ * Node List:
+ * Online: [ ibm1 sgi2 test02 test03 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started test03 (Monitoring)
+ * rsc_sgi2 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_ibm1 (ocf:heartbeat:IPaddr): Started test03 (Monitoring)
+ * rsc_test02 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_test03 (ocf:heartbeat:IPaddr): Started test03 (Monitoring)
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Starting test02
+ * child_DoFencing:1 (stonith:ssh): Starting test03
+ * child_DoFencing:2 (stonith:ssh): Stopped
+ * child_DoFencing:3 (stonith:ssh): Stopped
+
+Transition Summary:
+ * Start rsc_sgi2 ( sgi2 )
+ * Move rsc_ibm1 ( test03 -> ibm1 )
+ * Start rsc_test02 ( test02 )
+ * Start child_DoFencing:2 ( ibm1 )
+ * Start child_DoFencing:3 ( sgi2 )
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr monitor=5000 on test03
+ * Resource action: DcIPaddr monitor on test02
+ * Resource action: DcIPaddr monitor on sgi2
+ * Resource action: DcIPaddr monitor on ibm1
+ * Resource action: rsc_sgi2 monitor on test03
+ * Resource action: rsc_sgi2 monitor on test02
+ * Resource action: rsc_sgi2 monitor on sgi2
+ * Resource action: rsc_sgi2 monitor on ibm1
+ * Resource action: rsc_ibm1 stop on test03
+ * Resource action: rsc_ibm1 monitor on test02
+ * Resource action: rsc_ibm1 monitor on sgi2
+ * Resource action: rsc_ibm1 monitor on ibm1
+ * Resource action: rsc_test02 monitor on test03
+ * Resource action: rsc_test02 monitor on test02
+ * Resource action: rsc_test02 monitor on sgi2
+ * Resource action: rsc_test02 monitor on ibm1
+ * Resource action: rsc_test03 monitor=5000 on test03
+ * Resource action: rsc_test03 monitor on test02
+ * Resource action: rsc_test03 monitor on sgi2
+ * Resource action: rsc_test03 monitor on ibm1
+ * Resource action: child_DoFencing:0 monitor on sgi2
+ * Resource action: child_DoFencing:0 monitor on ibm1
+ * Resource action: child_DoFencing:1 monitor on test02
+ * Resource action: child_DoFencing:1 monitor on sgi2
+ * Resource action: child_DoFencing:1 monitor on ibm1
+ * Resource action: child_DoFencing:2 monitor on test03
+ * Resource action: child_DoFencing:2 monitor on test02
+ * Resource action: child_DoFencing:2 monitor on sgi2
+ * Resource action: child_DoFencing:2 monitor on ibm1
+ * Resource action: child_DoFencing:3 monitor on test03
+ * Resource action: child_DoFencing:3 monitor on test02
+ * Resource action: child_DoFencing:3 monitor on sgi2
+ * Resource action: child_DoFencing:3 monitor on ibm1
+ * Pseudo action: DoFencing_start_0
+ * Resource action: rsc_sgi2 start on sgi2
+ * Resource action: rsc_ibm1 start on ibm1
+ * Resource action: rsc_test02 start on test02
+ * Resource action: child_DoFencing:0 start on test02
+ * Resource action: child_DoFencing:1 start on test03
+ * Resource action: child_DoFencing:2 start on ibm1
+ * Resource action: child_DoFencing:3 start on sgi2
+ * Pseudo action: DoFencing_running_0
+ * Resource action: rsc_sgi2 monitor=5000 on sgi2
+ * Resource action: rsc_ibm1 monitor=5000 on ibm1
+ * Resource action: rsc_test02 monitor=5000 on test02
+ * Resource action: child_DoFencing:0 monitor=5000 on test02
+ * Resource action: child_DoFencing:1 monitor=5000 on test03
+ * Resource action: child_DoFencing:2 monitor=5000 on ibm1
+ * Resource action: child_DoFencing:3 monitor=5000 on sgi2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ ibm1 sgi2 test02 test03 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started test03
+ * rsc_sgi2 (ocf:heartbeat:IPaddr): Started sgi2
+ * rsc_ibm1 (ocf:heartbeat:IPaddr): Started ibm1
+ * rsc_test02 (ocf:heartbeat:IPaddr): Started test02
+ * rsc_test03 (ocf:heartbeat:IPaddr): Started test03
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started test02
+ * child_DoFencing:1 (stonith:ssh): Started test03
+ * child_DoFencing:2 (stonith:ssh): Started ibm1
+ * child_DoFencing:3 (stonith:ssh): Started sgi2
diff --git a/cts/scheduler/summary/735.summary b/cts/scheduler/summary/735.summary
new file mode 100644
index 0000000..8489a21
--- /dev/null
+++ b/cts/scheduler/summary/735.summary
@@ -0,0 +1,52 @@
+Current cluster status:
+ * Node List:
+ * Online: [ hadev2 hadev3 ]
+ * OFFLINE: [ hadev1 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started hadev2
+ * rsc_hadev1 (ocf:heartbeat:IPaddr): Starting hadev2
+ * rsc_hadev2 (ocf:heartbeat:IPaddr): Started hadev2
+ * rsc_hadev3 (ocf:heartbeat:IPaddr): Starting hadev2
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Starting hadev2
+ * child_DoFencing:1 (stonith:ssh): Stopped
+ * child_DoFencing:2 (stonith:ssh): Stopped
+
+Transition Summary:
+ * Move rsc_hadev1 ( hadev2 -> hadev3 )
+ * Start rsc_hadev3 ( hadev3 )
+ * Start child_DoFencing:0 ( hadev2 )
+ * Start child_DoFencing:1 ( hadev3 )
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr monitor on hadev3
+ * Resource action: rsc_hadev1 stop on hadev2
+ * Resource action: rsc_hadev1 start on hadev3
+ * Resource action: rsc_hadev2 monitor on hadev3
+ * Resource action: rsc_hadev3 start on hadev3
+ * Resource action: child_DoFencing:0 monitor on hadev3
+ * Resource action: child_DoFencing:2 monitor on hadev3
+ * Pseudo action: DoFencing_start_0
+ * Resource action: rsc_hadev1 monitor=5000 on hadev3
+ * Resource action: rsc_hadev3 monitor=5000 on hadev3
+ * Resource action: child_DoFencing:0 start on hadev2
+ * Resource action: child_DoFencing:1 start on hadev3
+ * Pseudo action: DoFencing_running_0
+ * Resource action: child_DoFencing:0 monitor=5000 on hadev2
+ * Resource action: child_DoFencing:1 monitor=5000 on hadev3
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hadev2 hadev3 ]
+ * OFFLINE: [ hadev1 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started hadev2
+ * rsc_hadev1 (ocf:heartbeat:IPaddr): Started hadev3
+ * rsc_hadev2 (ocf:heartbeat:IPaddr): Started hadev2
+ * rsc_hadev3 (ocf:heartbeat:IPaddr): Started hadev3
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started hadev2
+ * child_DoFencing:1 (stonith:ssh): Started hadev3
+ * child_DoFencing:2 (stonith:ssh): Stopped
diff --git a/cts/scheduler/summary/764.summary b/cts/scheduler/summary/764.summary
new file mode 100644
index 0000000..158a064
--- /dev/null
+++ b/cts/scheduler/summary/764.summary
@@ -0,0 +1,57 @@
+Current cluster status:
+ * Node List:
+ * Online: [ posic041 posic043 ]
+ * OFFLINE: [ posic042 posic044 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started posic043
+ * rsc_posic041 (ocf:heartbeat:IPaddr): Started posic041
+ * rsc_posic042 (ocf:heartbeat:IPaddr): Started posic041
+ * rsc_posic043 (ocf:heartbeat:IPaddr): Started posic043
+ * rsc_posic044 (ocf:heartbeat:IPaddr): Starting posic041
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started posic043
+ * child_DoFencing:1 (stonith:ssh): Started posic041
+ * child_DoFencing:2 (stonith:ssh): Stopped
+ * child_DoFencing:3 (stonith:ssh): Stopped
+
+Transition Summary:
+ * Stop DcIPaddr ( posic043 ) due to no quorum
+ * Stop rsc_posic041 ( posic041 ) due to no quorum
+ * Stop rsc_posic042 ( posic041 ) due to no quorum
+ * Stop rsc_posic043 ( posic043 ) due to no quorum
+ * Stop rsc_posic044 ( posic041 ) due to no quorum
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr stop on posic043
+ * Resource action: DcIPaddr monitor on posic041
+ * Resource action: rsc_posic041 stop on posic041
+ * Resource action: rsc_posic041 monitor on posic043
+ * Resource action: rsc_posic042 stop on posic041
+ * Resource action: rsc_posic042 monitor on posic043
+ * Resource action: rsc_posic043 stop on posic043
+ * Resource action: rsc_posic043 monitor on posic041
+ * Resource action: rsc_posic044 stop on posic041
+ * Resource action: rsc_posic044 monitor on posic043
+ * Resource action: child_DoFencing:0 monitor=5000 on posic043
+ * Resource action: child_DoFencing:1 monitor=5000 on posic041
+ * Resource action: child_DoFencing:1 monitor on posic043
+ * Resource action: child_DoFencing:2 monitor on posic041
+ * Resource action: child_DoFencing:3 monitor on posic041
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ posic041 posic043 ]
+ * OFFLINE: [ posic042 posic044 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Stopped
+ * rsc_posic041 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_posic042 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_posic043 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_posic044 (ocf:heartbeat:IPaddr): Started posic041
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started posic043
+ * child_DoFencing:1 (stonith:ssh): Started posic041
+ * child_DoFencing:2 (stonith:ssh): Stopped
+ * child_DoFencing:3 (stonith:ssh): Stopped
diff --git a/cts/scheduler/summary/797.summary b/cts/scheduler/summary/797.summary
new file mode 100644
index 0000000..d31572b
--- /dev/null
+++ b/cts/scheduler/summary/797.summary
@@ -0,0 +1,73 @@
+Current cluster status:
+ * Node List:
+ * Node c001n08: UNCLEAN (offline)
+ * Online: [ c001n01 c001n02 c001n03 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started (Monitoring) [ c001n01 c001n03 ]
+ * child_DoFencing:1 (stonith:ssh): Started c001n02
+ * child_DoFencing:2 (stonith:ssh): Started c001n03
+ * child_DoFencing:3 (stonith:ssh): Stopped
+
+Transition Summary:
+ * Stop DcIPaddr ( c001n03 ) due to no quorum
+ * Stop rsc_c001n08 ( c001n02 ) due to no quorum
+ * Stop rsc_c001n02 ( c001n02 ) due to no quorum
+ * Stop rsc_c001n03 ( c001n03 ) due to no quorum
+ * Stop rsc_c001n01 ( c001n01 ) due to no quorum
+ * Restart child_DoFencing:0 ( c001n01 )
+ * Stop child_DoFencing:1 ( c001n02 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr monitor on c001n02
+ * Resource action: DcIPaddr monitor on c001n01
+ * Resource action: DcIPaddr stop on c001n03
+ * Resource action: rsc_c001n08 stop on c001n02
+ * Resource action: rsc_c001n08 monitor on c001n03
+ * Resource action: rsc_c001n08 monitor on c001n01
+ * Resource action: rsc_c001n02 stop on c001n02
+ * Resource action: rsc_c001n02 monitor on c001n03
+ * Resource action: rsc_c001n02 monitor on c001n01
+ * Resource action: rsc_c001n03 stop on c001n03
+ * Resource action: rsc_c001n03 monitor on c001n02
+ * Resource action: rsc_c001n03 monitor on c001n01
+ * Resource action: rsc_c001n01 stop on c001n01
+ * Resource action: rsc_c001n01 monitor on c001n03
+ * Resource action: child_DoFencing:2 monitor on c001n01
+ * Resource action: child_DoFencing:3 monitor on c001n03
+ * Resource action: child_DoFencing:3 monitor on c001n02
+ * Resource action: child_DoFencing:3 monitor on c001n01
+ * Pseudo action: DoFencing_stop_0
+ * Resource action: DcIPaddr delete on c001n03
+ * Resource action: child_DoFencing:0 stop on c001n03
+ * Resource action: child_DoFencing:0 stop on c001n01
+ * Resource action: child_DoFencing:1 stop on c001n02
+ * Pseudo action: DoFencing_stopped_0
+ * Pseudo action: DoFencing_start_0
+ * Cluster action: do_shutdown on c001n02
+ * Resource action: child_DoFencing:0 start on c001n01
+ * Resource action: child_DoFencing:0 monitor=5000 on c001n01
+ * Pseudo action: DoFencing_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Node c001n08: UNCLEAN (offline)
+ * Online: [ c001n01 c001n02 c001n03 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Stopped
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started c001n01
+ * child_DoFencing:1 (stonith:ssh): Stopped
+ * child_DoFencing:2 (stonith:ssh): Started c001n03
+ * child_DoFencing:3 (stonith:ssh): Stopped
diff --git a/cts/scheduler/summary/8-am-then-bm-a-migrating-b-stopping.summary b/cts/scheduler/summary/8-am-then-bm-a-migrating-b-stopping.summary
new file mode 100644
index 0000000..54c19eb
--- /dev/null
+++ b/cts/scheduler/summary/8-am-then-bm-a-migrating-b-stopping.summary
@@ -0,0 +1,29 @@
+1 of 2 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+
+ * Full List of Resources:
+ * A (ocf:heartbeat:Dummy): Started 18node1
+ * B (ocf:heartbeat:Dummy): Started 18node2 (disabled)
+
+Transition Summary:
+ * Migrate A ( 18node1 -> 18node2 )
+ * Stop B ( 18node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: B stop on 18node2
+ * Resource action: A migrate_to on 18node1
+ * Resource action: A migrate_from on 18node2
+ * Resource action: A stop on 18node1
+ * Pseudo action: A_start_0
+ * Resource action: A monitor=60000 on 18node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+
+ * Full List of Resources:
+ * A (ocf:heartbeat:Dummy): Started 18node2
+ * B (ocf:heartbeat:Dummy): Stopped (disabled)
diff --git a/cts/scheduler/summary/829.summary b/cts/scheduler/summary/829.summary
new file mode 100644
index 0000000..f51849e
--- /dev/null
+++ b/cts/scheduler/summary/829.summary
@@ -0,0 +1,64 @@
+Current cluster status:
+ * Node List:
+ * Node c001n02: UNCLEAN (offline)
+ * Online: [ c001n01 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02 (UNCLEAN)
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started c001n02 (UNCLEAN)
+ * child_DoFencing:1 (stonith:ssh): Started c001n03
+ * child_DoFencing:2 (stonith:ssh): Started c001n01
+ * child_DoFencing:3 (stonith:ssh): Started c001n08
+
+Transition Summary:
+ * Fence (reboot) c001n02 'peer is no longer part of the cluster'
+ * Move rsc_c001n02 ( c001n02 -> c001n01 )
+ * Stop child_DoFencing:0 ( c001n02 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr monitor on c001n03
+ * Resource action: DcIPaddr monitor on c001n01
+ * Resource action: rsc_c001n08 monitor on c001n03
+ * Resource action: rsc_c001n08 monitor on c001n01
+ * Resource action: rsc_c001n02 monitor on c001n08
+ * Resource action: rsc_c001n02 monitor on c001n03
+ * Resource action: rsc_c001n02 monitor on c001n01
+ * Resource action: rsc_c001n03 monitor on c001n08
+ * Resource action: rsc_c001n03 monitor on c001n01
+ * Resource action: rsc_c001n01 monitor on c001n08
+ * Resource action: rsc_c001n01 monitor on c001n03
+ * Resource action: child_DoFencing:0 monitor on c001n01
+ * Resource action: child_DoFencing:1 monitor on c001n01
+ * Resource action: child_DoFencing:2 monitor on c001n08
+ * Resource action: child_DoFencing:2 monitor on c001n03
+ * Resource action: child_DoFencing:3 monitor on c001n03
+ * Resource action: child_DoFencing:3 monitor on c001n01
+ * Pseudo action: DoFencing_stop_0
+ * Fencing c001n02 (reboot)
+ * Pseudo action: rsc_c001n02_stop_0
+ * Pseudo action: child_DoFencing:0_stop_0
+ * Pseudo action: DoFencing_stopped_0
+ * Resource action: rsc_c001n02 start on c001n01
+ * Resource action: rsc_c001n02 monitor=5000 on c001n01
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n01 c001n03 c001n08 ]
+ * OFFLINE: [ c001n02 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n01
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Stopped
+ * child_DoFencing:1 (stonith:ssh): Started c001n03
+ * child_DoFencing:2 (stonith:ssh): Started c001n01
+ * child_DoFencing:3 (stonith:ssh): Started c001n08
diff --git a/cts/scheduler/summary/9-am-then-bm-b-migrating-a-stopping.summary b/cts/scheduler/summary/9-am-then-bm-b-migrating-a-stopping.summary
new file mode 100644
index 0000000..e37689e
--- /dev/null
+++ b/cts/scheduler/summary/9-am-then-bm-b-migrating-a-stopping.summary
@@ -0,0 +1,25 @@
+1 of 2 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+
+ * Full List of Resources:
+ * A (ocf:heartbeat:Dummy): Started 18node1 (disabled)
+ * B (ocf:heartbeat:Dummy): Started 18node2
+
+Transition Summary:
+ * Stop A ( 18node1 ) due to node availability
+ * Stop B ( 18node2 ) due to unrunnable A start
+
+Executing Cluster Transition:
+ * Resource action: B stop on 18node2
+ * Resource action: A stop on 18node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+
+ * Full List of Resources:
+ * A (ocf:heartbeat:Dummy): Stopped (disabled)
+ * B (ocf:heartbeat:Dummy): Stopped
diff --git a/cts/scheduler/summary/994-2.summary b/cts/scheduler/summary/994-2.summary
new file mode 100644
index 0000000..cac43b9
--- /dev/null
+++ b/cts/scheduler/summary/994-2.summary
@@ -0,0 +1,38 @@
+Current cluster status:
+ * Node List:
+ * Online: [ paul ]
+
+ * Full List of Resources:
+ * Resource Group: group_1:
+ * datadisk_1 (ocf:heartbeat:datadisk): Started paul
+ * Filesystem_2 (ocf:heartbeat:Filesystem): Started paul
+ * IPaddr_5 (ocf:heartbeat:IPaddr): Started paul
+ * postfix_9 (lsb:postfix): FAILED paul
+ * depends (lsb:postfix): Started paul
+
+Transition Summary:
+ * Recover postfix_9 ( paul )
+ * Restart depends ( paul ) due to required group_1 running
+
+Executing Cluster Transition:
+ * Resource action: depends stop on paul
+ * Pseudo action: group_1_stop_0
+ * Resource action: postfix_9 stop on paul
+ * Pseudo action: group_1_stopped_0
+ * Pseudo action: group_1_start_0
+ * Resource action: postfix_9 start on paul
+ * Resource action: postfix_9 monitor=120000 on paul
+ * Pseudo action: group_1_running_0
+ * Resource action: depends start on paul
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ paul ]
+
+ * Full List of Resources:
+ * Resource Group: group_1:
+ * datadisk_1 (ocf:heartbeat:datadisk): Started paul
+ * Filesystem_2 (ocf:heartbeat:Filesystem): Started paul
+ * IPaddr_5 (ocf:heartbeat:IPaddr): Started paul
+ * postfix_9 (lsb:postfix): Started paul
+ * depends (lsb:postfix): Started paul
diff --git a/cts/scheduler/summary/994.summary b/cts/scheduler/summary/994.summary
new file mode 100644
index 0000000..5d8efdf
--- /dev/null
+++ b/cts/scheduler/summary/994.summary
@@ -0,0 +1,33 @@
+Current cluster status:
+ * Node List:
+ * Online: [ paul ]
+
+ * Full List of Resources:
+ * Resource Group: group_1:
+ * datadisk_1 (ocf:heartbeat:datadisk): Started paul
+ * Filesystem_2 (ocf:heartbeat:Filesystem): Started paul
+ * IPaddr_5 (ocf:heartbeat:IPaddr): Started paul
+ * postfix_9 (lsb:postfix): FAILED paul
+
+Transition Summary:
+ * Recover postfix_9 ( paul )
+
+Executing Cluster Transition:
+ * Pseudo action: group_1_stop_0
+ * Resource action: postfix_9 stop on paul
+ * Pseudo action: group_1_stopped_0
+ * Pseudo action: group_1_start_0
+ * Resource action: postfix_9 start on paul
+ * Resource action: postfix_9 monitor=120000 on paul
+ * Pseudo action: group_1_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ paul ]
+
+ * Full List of Resources:
+ * Resource Group: group_1:
+ * datadisk_1 (ocf:heartbeat:datadisk): Started paul
+ * Filesystem_2 (ocf:heartbeat:Filesystem): Started paul
+ * IPaddr_5 (ocf:heartbeat:IPaddr): Started paul
+ * postfix_9 (lsb:postfix): Started paul
diff --git a/cts/scheduler/summary/Makefile.am b/cts/scheduler/summary/Makefile.am
new file mode 100644
index 0000000..f89c904
--- /dev/null
+++ b/cts/scheduler/summary/Makefile.am
@@ -0,0 +1,13 @@
+#
+# Copyright 2001-2021 the Pacemaker project contributors
+#
+# The version control history for this file may have further details.
+#
+# This source code is licensed under the GNU General Public License version 2
+# or later (GPLv2+) WITHOUT ANY WARRANTY.
+#
+MAINTAINERCLEANFILES = Makefile.in
+
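+# Distribute and install every expected scheduler output (*.summary) as test data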
+summarydir = $(datadir)/$(PACKAGE)/tests/scheduler/summary
+dist_summary_DATA = $(wildcard *.summary)
diff --git a/cts/scheduler/summary/a-demote-then-b-migrate.summary b/cts/scheduler/summary/a-demote-then-b-migrate.summary
new file mode 100644
index 0000000..32c136e
--- /dev/null
+++ b/cts/scheduler/summary/a-demote-then-b-migrate.summary
@@ -0,0 +1,57 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+
+Transition Summary:
+ * Demote rsc1:0 ( Promoted -> Unpromoted node1 )
+ * Promote rsc1:1 ( Unpromoted -> Promoted node2 )
+ * Migrate rsc2 ( node1 -> node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1:1 cancel=5000 on node1
+ * Resource action: rsc1:0 cancel=10000 on node2
+ * Pseudo action: ms1_pre_notify_demote_0
+ * Resource action: rsc1:1 notify on node1
+ * Resource action: rsc1:0 notify on node2
+ * Pseudo action: ms1_confirmed-pre_notify_demote_0
+ * Pseudo action: ms1_demote_0
+ * Resource action: rsc1:1 demote on node1
+ * Pseudo action: ms1_demoted_0
+ * Pseudo action: ms1_post_notify_demoted_0
+ * Resource action: rsc1:1 notify on node1
+ * Resource action: rsc1:0 notify on node2
+ * Pseudo action: ms1_confirmed-post_notify_demoted_0
+ * Pseudo action: ms1_pre_notify_promote_0
+ * Resource action: rsc2 migrate_to on node1
+ * Resource action: rsc1:1 notify on node1
+ * Resource action: rsc1:0 notify on node2
+ * Pseudo action: ms1_confirmed-pre_notify_promote_0
+ * Resource action: rsc2 migrate_from on node2
+ * Resource action: rsc2 stop on node1
+ * Pseudo action: rsc2_start_0
+ * Pseudo action: ms1_promote_0
+ * Resource action: rsc2 monitor=5000 on node2
+ * Resource action: rsc1:0 promote on node2
+ * Pseudo action: ms1_promoted_0
+ * Pseudo action: ms1_post_notify_promoted_0
+ * Resource action: rsc1:1 notify on node1
+ * Resource action: rsc1:0 notify on node2
+ * Pseudo action: ms1_confirmed-post_notify_promoted_0
+ * Resource action: rsc1:1 monitor=10000 on node1
+ * Resource action: rsc1:0 monitor=5000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Promoted: [ node2 ]
+ * Unpromoted: [ node1 ]
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/a-promote-then-b-migrate.summary b/cts/scheduler/summary/a-promote-then-b-migrate.summary
new file mode 100644
index 0000000..6489a4f
--- /dev/null
+++ b/cts/scheduler/summary/a-promote-then-b-migrate.summary
@@ -0,0 +1,42 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+
+Transition Summary:
+ * Promote rsc1:1 ( Unpromoted -> Promoted node2 )
+ * Migrate rsc2 ( node1 -> node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1:1 cancel=10000 on node2
+ * Pseudo action: ms1_pre_notify_promote_0
+ * Resource action: rsc1:0 notify on node1
+ * Resource action: rsc1:1 notify on node2
+ * Pseudo action: ms1_confirmed-pre_notify_promote_0
+ * Pseudo action: ms1_promote_0
+ * Resource action: rsc1:1 promote on node2
+ * Pseudo action: ms1_promoted_0
+ * Pseudo action: ms1_post_notify_promoted_0
+ * Resource action: rsc1:0 notify on node1
+ * Resource action: rsc1:1 notify on node2
+ * Pseudo action: ms1_confirmed-post_notify_promoted_0
+ * Resource action: rsc2 migrate_to on node1
+ * Resource action: rsc1:1 monitor=5000 on node2
+ * Resource action: rsc2 migrate_from on node2
+ * Resource action: rsc2 stop on node1
+ * Pseudo action: rsc2_start_0
+ * Resource action: rsc2 monitor=5000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Promoted: [ node1 node2 ]
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/allow-unhealthy-nodes.summary b/cts/scheduler/summary/allow-unhealthy-nodes.summary
new file mode 100644
index 0000000..5d7ac0b
--- /dev/null
+++ b/cts/scheduler/summary/allow-unhealthy-nodes.summary
@@ -0,0 +1,35 @@
+Using the original execution date of: 2022-04-01 17:57:38Z
+Current cluster status:
+ * Node List:
+ * Node rhel8-5: online (health is RED)
+ * Online: [ rhel8-1 rhel8-2 rhel8-3 rhel8-4 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel8-1
+ * FencingPass (stonith:fence_dummy): Started rhel8-2
+ * FencingFail (stonith:fence_dummy): Started rhel8-3
+ * dummy (ocf:pacemaker:Dummy): Started rhel8-5
+ * Clone Set: health-clone [health]:
+ * Started: [ rhel8-1 rhel8-2 rhel8-3 rhel8-4 rhel8-5 ]
+
+Transition Summary:
+ * Move dummy ( rhel8-5 -> rhel8-3 )
+
+Executing Cluster Transition:
+ * Resource action: dummy stop on rhel8-5
+ * Resource action: dummy start on rhel8-3
+ * Resource action: dummy monitor=10000 on rhel8-3
+Using the original execution date of: 2022-04-01 17:57:38Z
+
+Revised Cluster Status:
+ * Node List:
+ * Node rhel8-5: online (health is RED)
+ * Online: [ rhel8-1 rhel8-2 rhel8-3 rhel8-4 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel8-1
+ * FencingPass (stonith:fence_dummy): Started rhel8-2
+ * FencingFail (stonith:fence_dummy): Started rhel8-3
+ * dummy (ocf:pacemaker:Dummy): Started rhel8-3
+ * Clone Set: health-clone [health]:
+ * Started: [ rhel8-1 rhel8-2 rhel8-3 rhel8-4 rhel8-5 ]
diff --git a/cts/scheduler/summary/anon-instance-pending.summary b/cts/scheduler/summary/anon-instance-pending.summary
new file mode 100644
index 0000000..379fbce
--- /dev/null
+++ b/cts/scheduler/summary/anon-instance-pending.summary
@@ -0,0 +1,224 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 node4 node5 node6 node7 node8 node9 node10 node11 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_imaginary): Started node1
+ * Clone Set: clone1 [clone1rsc] (promotable):
+ * clone1rsc (ocf:pacemaker:Stateful): Starting node4
+ * Promoted: [ node3 ]
+ * Unpromoted: [ node1 node2 ]
+ * Stopped: [ node5 node6 node7 node8 node9 node10 node11 ]
+ * Clone Set: clone2 [clone2rsc]:
+ * clone2rsc (ocf:pacemaker:Dummy): Starting node4
+ * Started: [ node2 ]
+ * Stopped: [ node1 node3 node5 node6 node7 node8 node9 node10 node11 ]
+ * Clone Set: clone3 [clone3rsc]:
+ * Started: [ node3 ]
+ * Stopped: [ node1 node2 node4 node5 node6 node7 node8 node9 node10 node11 ]
+ * Clone Set: clone4 [clone4rsc]:
+ * clone4rsc (ocf:pacemaker:Dummy): Stopping node8
+ * clone4rsc (ocf:pacemaker:Dummy): ORPHANED Started node9
+ * Started: [ node1 node5 node6 node7 ]
+ * Stopped: [ node2 node3 node4 node10 node11 ]
+ * Clone Set: clone5 [clone5group]:
+ * Resource Group: clone5group:2:
+ * clone5rsc1 (ocf:pacemaker:Dummy): Started node3
+ * clone5rsc2 (ocf:pacemaker:Dummy): Starting node3
+ * clone5rsc3 (ocf:pacemaker:Dummy): Stopped
+ * Started: [ node1 node2 ]
+ * Stopped: [ node4 node5 node6 node7 node8 node9 node10 node11 ]
+
+Transition Summary:
+ * Start clone1rsc:4 ( node9 )
+ * Start clone1rsc:5 ( node10 )
+ * Start clone1rsc:6 ( node11 )
+ * Start clone1rsc:7 ( node5 )
+ * Start clone1rsc:8 ( node6 )
+ * Start clone1rsc:9 ( node7 )
+ * Start clone1rsc:10 ( node8 )
+ * Start clone2rsc:2 ( node10 )
+ * Start clone2rsc:3 ( node11 )
+ * Start clone2rsc:4 ( node3 )
+ * Start clone3rsc:1 ( node5 )
+ * Start clone3rsc:2 ( node6 )
+ * Start clone3rsc:3 ( node7 )
+ * Start clone3rsc:4 ( node8 )
+ * Start clone3rsc:5 ( node9 )
+ * Start clone3rsc:6 ( node1 )
+ * Start clone3rsc:7 ( node10 )
+ * Start clone3rsc:8 ( node11 )
+ * Start clone3rsc:9 ( node2 )
+ * Start clone3rsc:10 ( node4 )
+ * Stop clone4rsc:5 ( node9 ) due to node availability
+ * Start clone5rsc3:2 ( node3 )
+ * Start clone5rsc1:3 ( node9 )
+ * Start clone5rsc2:3 ( node9 )
+ * Start clone5rsc3:3 ( node9 )
+ * Start clone5rsc1:4 ( node10 )
+ * Start clone5rsc2:4 ( node10 )
+ * Start clone5rsc3:4 ( node10 )
+ * Start clone5rsc1:5 ( node11 )
+ * Start clone5rsc2:5 ( node11 )
+ * Start clone5rsc3:5 ( node11 )
+ * Start clone5rsc1:6 ( node4 )
+ * Start clone5rsc2:6 ( node4 )
+ * Start clone5rsc3:6 ( node4 )
+ * Start clone5rsc1:7 ( node5 )
+ * Start clone5rsc2:7 ( node5 )
+ * Start clone5rsc3:7 ( node5 )
+ * Start clone5rsc1:8 ( node6 )
+ * Start clone5rsc2:8 ( node6 )
+ * Start clone5rsc3:8 ( node6 )
+ * Start clone5rsc1:9 ( node7 )
+ * Start clone5rsc2:9 ( node7 )
+ * Start clone5rsc3:9 ( node7 )
+ * Start clone5rsc1:10 ( node8 )
+ * Start clone5rsc2:10 ( node8 )
+ * Start clone5rsc3:10 ( node8 )
+
+Executing Cluster Transition:
+ * Pseudo action: clone1_start_0
+ * Pseudo action: clone2_start_0
+ * Resource action: clone3rsc monitor on node2
+ * Pseudo action: clone3_start_0
+ * Pseudo action: clone4_stop_0
+ * Pseudo action: clone5_start_0
+ * Resource action: clone1rsc start on node4
+ * Resource action: clone1rsc start on node9
+ * Resource action: clone1rsc start on node10
+ * Resource action: clone1rsc start on node11
+ * Resource action: clone1rsc start on node5
+ * Resource action: clone1rsc start on node6
+ * Resource action: clone1rsc start on node7
+ * Resource action: clone1rsc start on node8
+ * Pseudo action: clone1_running_0
+ * Resource action: clone2rsc start on node4
+ * Resource action: clone2rsc start on node10
+ * Resource action: clone2rsc start on node11
+ * Resource action: clone2rsc start on node3
+ * Pseudo action: clone2_running_0
+ * Resource action: clone3rsc start on node5
+ * Resource action: clone3rsc start on node6
+ * Resource action: clone3rsc start on node7
+ * Resource action: clone3rsc start on node8
+ * Resource action: clone3rsc start on node9
+ * Resource action: clone3rsc start on node1
+ * Resource action: clone3rsc start on node10
+ * Resource action: clone3rsc start on node11
+ * Resource action: clone3rsc start on node2
+ * Resource action: clone3rsc start on node4
+ * Pseudo action: clone3_running_0
+ * Resource action: clone4rsc stop on node9
+ * Pseudo action: clone4_stopped_0
+ * Pseudo action: clone5group:2_start_0
+ * Resource action: clone5rsc2 start on node3
+ * Resource action: clone5rsc3 start on node3
+ * Pseudo action: clone5group:3_start_0
+ * Resource action: clone5rsc1 start on node9
+ * Resource action: clone5rsc2 start on node9
+ * Resource action: clone5rsc3 start on node9
+ * Pseudo action: clone5group:4_start_0
+ * Resource action: clone5rsc1 start on node10
+ * Resource action: clone5rsc2 start on node10
+ * Resource action: clone5rsc3 start on node10
+ * Pseudo action: clone5group:5_start_0
+ * Resource action: clone5rsc1 start on node11
+ * Resource action: clone5rsc2 start on node11
+ * Resource action: clone5rsc3 start on node11
+ * Pseudo action: clone5group:6_start_0
+ * Resource action: clone5rsc1 start on node4
+ * Resource action: clone5rsc2 start on node4
+ * Resource action: clone5rsc3 start on node4
+ * Pseudo action: clone5group:7_start_0
+ * Resource action: clone5rsc1 start on node5
+ * Resource action: clone5rsc2 start on node5
+ * Resource action: clone5rsc3 start on node5
+ * Pseudo action: clone5group:8_start_0
+ * Resource action: clone5rsc1 start on node6
+ * Resource action: clone5rsc2 start on node6
+ * Resource action: clone5rsc3 start on node6
+ * Pseudo action: clone5group:9_start_0
+ * Resource action: clone5rsc1 start on node7
+ * Resource action: clone5rsc2 start on node7
+ * Resource action: clone5rsc3 start on node7
+ * Pseudo action: clone5group:10_start_0
+ * Resource action: clone5rsc1 start on node8
+ * Resource action: clone5rsc2 start on node8
+ * Resource action: clone5rsc3 start on node8
+ * Resource action: clone1rsc monitor=10000 on node4
+ * Resource action: clone1rsc monitor=10000 on node9
+ * Resource action: clone1rsc monitor=10000 on node10
+ * Resource action: clone1rsc monitor=10000 on node11
+ * Resource action: clone1rsc monitor=10000 on node5
+ * Resource action: clone1rsc monitor=10000 on node6
+ * Resource action: clone1rsc monitor=10000 on node7
+ * Resource action: clone1rsc monitor=10000 on node8
+ * Resource action: clone2rsc monitor=10000 on node4
+ * Resource action: clone2rsc monitor=10000 on node10
+ * Resource action: clone2rsc monitor=10000 on node11
+ * Resource action: clone2rsc monitor=10000 on node3
+ * Resource action: clone3rsc monitor=10000 on node5
+ * Resource action: clone3rsc monitor=10000 on node6
+ * Resource action: clone3rsc monitor=10000 on node7
+ * Resource action: clone3rsc monitor=10000 on node8
+ * Resource action: clone3rsc monitor=10000 on node9
+ * Resource action: clone3rsc monitor=10000 on node1
+ * Resource action: clone3rsc monitor=10000 on node10
+ * Resource action: clone3rsc monitor=10000 on node11
+ * Resource action: clone3rsc monitor=10000 on node2
+ * Resource action: clone3rsc monitor=10000 on node4
+ * Pseudo action: clone5group:2_running_0
+ * Resource action: clone5rsc2 monitor=10000 on node3
+ * Resource action: clone5rsc3 monitor=10000 on node3
+ * Pseudo action: clone5group:3_running_0
+ * Resource action: clone5rsc1 monitor=10000 on node9
+ * Resource action: clone5rsc2 monitor=10000 on node9
+ * Resource action: clone5rsc3 monitor=10000 on node9
+ * Pseudo action: clone5group:4_running_0
+ * Resource action: clone5rsc1 monitor=10000 on node10
+ * Resource action: clone5rsc2 monitor=10000 on node10
+ * Resource action: clone5rsc3 monitor=10000 on node10
+ * Pseudo action: clone5group:5_running_0
+ * Resource action: clone5rsc1 monitor=10000 on node11
+ * Resource action: clone5rsc2 monitor=10000 on node11
+ * Resource action: clone5rsc3 monitor=10000 on node11
+ * Pseudo action: clone5group:6_running_0
+ * Resource action: clone5rsc1 monitor=10000 on node4
+ * Resource action: clone5rsc2 monitor=10000 on node4
+ * Resource action: clone5rsc3 monitor=10000 on node4
+ * Pseudo action: clone5group:7_running_0
+ * Resource action: clone5rsc1 monitor=10000 on node5
+ * Resource action: clone5rsc2 monitor=10000 on node5
+ * Resource action: clone5rsc3 monitor=10000 on node5
+ * Pseudo action: clone5group:8_running_0
+ * Resource action: clone5rsc1 monitor=10000 on node6
+ * Resource action: clone5rsc2 monitor=10000 on node6
+ * Resource action: clone5rsc3 monitor=10000 on node6
+ * Pseudo action: clone5group:9_running_0
+ * Resource action: clone5rsc1 monitor=10000 on node7
+ * Resource action: clone5rsc2 monitor=10000 on node7
+ * Resource action: clone5rsc3 monitor=10000 on node7
+ * Pseudo action: clone5group:10_running_0
+ * Resource action: clone5rsc1 monitor=10000 on node8
+ * Resource action: clone5rsc2 monitor=10000 on node8
+ * Resource action: clone5rsc3 monitor=10000 on node8
+ * Pseudo action: clone5_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 node4 node5 node6 node7 node8 node9 node10 node11 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_imaginary): Started node1
+ * Clone Set: clone1 [clone1rsc] (promotable):
+ * Promoted: [ node3 ]
+ * Unpromoted: [ node1 node2 node4 node5 node6 node7 node8 node9 node10 node11 ]
+ * Clone Set: clone2 [clone2rsc]:
+ * Started: [ node2 node3 node4 node10 node11 ]
+ * Clone Set: clone3 [clone3rsc]:
+ * Started: [ node1 node2 node3 node4 node5 node6 node7 node8 node9 node10 node11 ]
+ * Clone Set: clone4 [clone4rsc]:
+ * Started: [ node1 node5 node6 node7 node8 ]
+ * Clone Set: clone5 [clone5group]:
+ * Started: [ node1 node2 node3 node4 node5 node6 node7 node8 node9 node10 node11 ]
diff --git a/cts/scheduler/summary/anti-colocation-order.summary b/cts/scheduler/summary/anti-colocation-order.summary
new file mode 100644
index 0000000..774942d
--- /dev/null
+++ b/cts/scheduler/summary/anti-colocation-order.summary
@@ -0,0 +1,45 @@
+Current cluster status:
+ * Node List:
+ * Node node1: standby (with active resources)
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+ * Resource Group: group2:
+ * rsc3 (ocf:pacemaker:Dummy): Started node2
+ * rsc4 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Move rsc1 ( node1 -> node2 )
+ * Move rsc2 ( node1 -> node2 )
+ * Stop rsc3 ( node2 ) due to node availability
+ * Stop rsc4 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: group1_stop_0
+ * Resource action: rsc2 stop on node1
+ * Pseudo action: group2_stop_0
+ * Resource action: rsc4 stop on node2
+ * Resource action: rsc1 stop on node1
+ * Resource action: rsc3 stop on node2
+ * Pseudo action: group1_stopped_0
+ * Pseudo action: group2_stopped_0
+ * Pseudo action: group1_start_0
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 start on node2
+ * Pseudo action: group1_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Node node1: standby
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+ * Resource Group: group2:
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * rsc4 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/anti-colocation-promoted.summary b/cts/scheduler/summary/anti-colocation-promoted.summary
new file mode 100644
index 0000000..2348f76
--- /dev/null
+++ b/cts/scheduler/summary/anti-colocation-promoted.summary
@@ -0,0 +1,38 @@
+Using the original execution date of: 2016-04-29 09:06:59Z
+Current cluster status:
+ * Node List:
+ * Online: [ sle12sp2-1 sle12sp2-2 ]
+
+ * Full List of Resources:
+ * st_sbd (stonith:external/sbd): Started sle12sp2-2
+ * dummy1 (ocf:pacemaker:Dummy): Started sle12sp2-2
+ * Clone Set: ms1 [state1] (promotable):
+ * Promoted: [ sle12sp2-1 ]
+ * Unpromoted: [ sle12sp2-2 ]
+
+Transition Summary:
+ * Move dummy1 ( sle12sp2-2 -> sle12sp2-1 )
+ * Promote state1:0 ( Unpromoted -> Promoted sle12sp2-2 )
+ * Demote state1:1 ( Promoted -> Unpromoted sle12sp2-1 )
+
+Executing Cluster Transition:
+ * Resource action: dummy1 stop on sle12sp2-2
+ * Pseudo action: ms1_demote_0
+ * Resource action: state1 demote on sle12sp2-1
+ * Pseudo action: ms1_demoted_0
+ * Pseudo action: ms1_promote_0
+ * Resource action: dummy1 start on sle12sp2-1
+ * Resource action: state1 promote on sle12sp2-2
+ * Pseudo action: ms1_promoted_0
+Using the original execution date of: 2016-04-29 09:06:59Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ sle12sp2-1 sle12sp2-2 ]
+
+ * Full List of Resources:
+ * st_sbd (stonith:external/sbd): Started sle12sp2-2
+ * dummy1 (ocf:pacemaker:Dummy): Started sle12sp2-1
+ * Clone Set: ms1 [state1] (promotable):
+ * Promoted: [ sle12sp2-2 ]
+ * Unpromoted: [ sle12sp2-1 ]
diff --git a/cts/scheduler/summary/anti-colocation-unpromoted.summary b/cts/scheduler/summary/anti-colocation-unpromoted.summary
new file mode 100644
index 0000000..a7087bc
--- /dev/null
+++ b/cts/scheduler/summary/anti-colocation-unpromoted.summary
@@ -0,0 +1,36 @@
+Current cluster status:
+ * Node List:
+ * Online: [ sle12sp2-1 sle12sp2-2 ]
+
+ * Full List of Resources:
+ * st_sbd (stonith:external/sbd): Started sle12sp2-1
+ * Clone Set: ms1 [state1] (promotable):
+ * Promoted: [ sle12sp2-1 ]
+ * Unpromoted: [ sle12sp2-2 ]
+ * dummy1 (ocf:pacemaker:Dummy): Started sle12sp2-1
+
+Transition Summary:
+ * Demote state1:0 ( Promoted -> Unpromoted sle12sp2-1 )
+ * Promote state1:1 ( Unpromoted -> Promoted sle12sp2-2 )
+ * Move dummy1 ( sle12sp2-1 -> sle12sp2-2 )
+
+Executing Cluster Transition:
+ * Resource action: dummy1 stop on sle12sp2-1
+ * Pseudo action: ms1_demote_0
+ * Resource action: state1 demote on sle12sp2-1
+ * Pseudo action: ms1_demoted_0
+ * Pseudo action: ms1_promote_0
+ * Resource action: state1 promote on sle12sp2-2
+ * Pseudo action: ms1_promoted_0
+ * Resource action: dummy1 start on sle12sp2-2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ sle12sp2-1 sle12sp2-2 ]
+
+ * Full List of Resources:
+ * st_sbd (stonith:external/sbd): Started sle12sp2-1
+ * Clone Set: ms1 [state1] (promotable):
+ * Promoted: [ sle12sp2-2 ]
+ * Unpromoted: [ sle12sp2-1 ]
+ * dummy1 (ocf:pacemaker:Dummy): Started sle12sp2-2
diff --git a/cts/scheduler/summary/asymmetric.summary b/cts/scheduler/summary/asymmetric.summary
new file mode 100644
index 0000000..f9c8f7e
--- /dev/null
+++ b/cts/scheduler/summary/asymmetric.summary
@@ -0,0 +1,29 @@
+Current cluster status:
+ * Node List:
+ * Online: [ puma1 puma3 ]
+
+ * Full List of Resources:
+ * Clone Set: ms_drbd_poolA [ebe3fb6e-7778-426e-be58-190ab1ff3dd3] (promotable):
+ * Promoted: [ puma3 ]
+ * Unpromoted: [ puma1 ]
+ * vpool_ip_poolA (ocf:heartbeat:IPaddr2): Stopped
+ * drbd_target_poolA (ocf:vpools:iscsi_target): Stopped
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: ebe3fb6e-7778-426e-be58-190ab1ff3dd3:1 monitor=19000 on puma1
+ * Resource action: ebe3fb6e-7778-426e-be58-190ab1ff3dd3:0 monitor=20000 on puma3
+ * Resource action: drbd_target_poolA monitor on puma3
+ * Resource action: drbd_target_poolA monitor on puma1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ puma1 puma3 ]
+
+ * Full List of Resources:
+ * Clone Set: ms_drbd_poolA [ebe3fb6e-7778-426e-be58-190ab1ff3dd3] (promotable):
+ * Promoted: [ puma3 ]
+ * Unpromoted: [ puma1 ]
+ * vpool_ip_poolA (ocf:heartbeat:IPaddr2): Stopped
+ * drbd_target_poolA (ocf:vpools:iscsi_target): Stopped
diff --git a/cts/scheduler/summary/asymmetrical-order-move.summary b/cts/scheduler/summary/asymmetrical-order-move.summary
new file mode 100644
index 0000000..dc72e43
--- /dev/null
+++ b/cts/scheduler/summary/asymmetrical-order-move.summary
@@ -0,0 +1,27 @@
+Using the original execution date of: 2016-04-28 11:50:29Z
+1 of 3 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ sle12sp2-1 sle12sp2-2 ]
+
+ * Full List of Resources:
+ * st_sbd (stonith:external/sbd): Started sle12sp2-2
+ * dummy1 (ocf:pacemaker:Dummy): Stopped (disabled)
+ * dummy2 (ocf:pacemaker:Dummy): Started sle12sp2-1
+
+Transition Summary:
+ * Stop dummy2 ( sle12sp2-1 ) due to unrunnable dummy1 start
+
+Executing Cluster Transition:
+ * Resource action: dummy2 stop on sle12sp2-1
+Using the original execution date of: 2016-04-28 11:50:29Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ sle12sp2-1 sle12sp2-2 ]
+
+ * Full List of Resources:
+ * st_sbd (stonith:external/sbd): Started sle12sp2-2
+ * dummy1 (ocf:pacemaker:Dummy): Stopped (disabled)
+ * dummy2 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/asymmetrical-order-restart.summary b/cts/scheduler/summary/asymmetrical-order-restart.summary
new file mode 100644
index 0000000..fe55c52
--- /dev/null
+++ b/cts/scheduler/summary/asymmetrical-order-restart.summary
@@ -0,0 +1,27 @@
+Using the original execution date of: 2018-08-09 18:55:41Z
+1 of 3 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ cesr105-p16 cesr109-p16 ]
+
+ * Full List of Resources:
+ * cesr104ipmi (stonith:fence_ipmilan): Started cesr105-p16
+ * sleep_a (ocf:classe:anything): Stopped (disabled)
+ * sleep_b (ocf:classe:anything): FAILED cesr109-p16
+
+Transition Summary:
+ * Stop sleep_b ( cesr109-p16 ) due to unrunnable sleep_a start
+
+Executing Cluster Transition:
+ * Resource action: sleep_b stop on cesr109-p16
+Using the original execution date of: 2018-08-09 18:55:41Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ cesr105-p16 cesr109-p16 ]
+
+ * Full List of Resources:
+ * cesr104ipmi (stonith:fence_ipmilan): Started cesr105-p16
+ * sleep_a (ocf:classe:anything): Stopped (disabled)
+ * sleep_b (ocf:classe:anything): Stopped
diff --git a/cts/scheduler/summary/attrs1.summary b/cts/scheduler/summary/attrs1.summary
new file mode 100644
index 0000000..794b3c6
--- /dev/null
+++ b/cts/scheduler/summary/attrs1.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc1 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/attrs2.summary b/cts/scheduler/summary/attrs2.summary
new file mode 100644
index 0000000..794b3c6
--- /dev/null
+++ b/cts/scheduler/summary/attrs2.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc1 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/attrs3.summary b/cts/scheduler/summary/attrs3.summary
new file mode 100644
index 0000000..7d133a8
--- /dev/null
+++ b/cts/scheduler/summary/attrs3.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc1 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/attrs4.summary b/cts/scheduler/summary/attrs4.summary
new file mode 100644
index 0000000..7d133a8
--- /dev/null
+++ b/cts/scheduler/summary/attrs4.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc1 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/attrs5.summary b/cts/scheduler/summary/attrs5.summary
new file mode 100644
index 0000000..7209be2
--- /dev/null
+++ b/cts/scheduler/summary/attrs5.summary
@@ -0,0 +1,19 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/attrs6.summary b/cts/scheduler/summary/attrs6.summary
new file mode 100644
index 0000000..7d133a8
--- /dev/null
+++ b/cts/scheduler/summary/attrs6.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc1 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/attrs7.summary b/cts/scheduler/summary/attrs7.summary
new file mode 100644
index 0000000..794b3c6
--- /dev/null
+++ b/cts/scheduler/summary/attrs7.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc1 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/attrs8.summary b/cts/scheduler/summary/attrs8.summary
new file mode 100644
index 0000000..794b3c6
--- /dev/null
+++ b/cts/scheduler/summary/attrs8.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc1 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/balanced.summary b/cts/scheduler/summary/balanced.summary
new file mode 100644
index 0000000..78d7ab3
--- /dev/null
+++ b/cts/scheduler/summary/balanced.summary
@@ -0,0 +1,29 @@
+Current cluster status:
+ * Node List:
+ * Online: [ host1 host2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( host2 )
+ * Start rsc2 ( host1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on host2
+ * Resource action: rsc1 monitor on host1
+ * Resource action: rsc2 monitor on host2
+ * Resource action: rsc2 monitor on host1
+ * Pseudo action: load_stopped_host2
+ * Pseudo action: load_stopped_host1
+ * Resource action: rsc1 start on host2
+ * Resource action: rsc2 start on host1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ host1 host2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started host2
+ * rsc2 (ocf:pacemaker:Dummy): Started host1
diff --git a/cts/scheduler/summary/base-score.summary b/cts/scheduler/summary/base-score.summary
new file mode 100644
index 0000000..aeec6c6
--- /dev/null
+++ b/cts/scheduler/summary/base-score.summary
@@ -0,0 +1,23 @@
+Current cluster status:
+ * Node List:
+ * Online: [ puma1 puma2 puma3 puma4 ]
+
+ * Full List of Resources:
+ * Dummy (ocf:heartbeat:Dummy): Stopped
+
+Transition Summary:
+ * Start Dummy ( puma1 )
+
+Executing Cluster Transition:
+ * Resource action: Dummy monitor on puma4
+ * Resource action: Dummy monitor on puma3
+ * Resource action: Dummy monitor on puma2
+ * Resource action: Dummy monitor on puma1
+ * Resource action: Dummy start on puma1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ puma1 puma2 puma3 puma4 ]
+
+ * Full List of Resources:
+ * Dummy (ocf:heartbeat:Dummy): Started puma1
diff --git a/cts/scheduler/summary/bnc-515172.summary b/cts/scheduler/summary/bnc-515172.summary
new file mode 100644
index 0000000..b758338
--- /dev/null
+++ b/cts/scheduler/summary/bnc-515172.summary
@@ -0,0 +1,36 @@
+Current cluster status:
+ * Node List:
+ * Online: [ sles11-ha1 sles11-ha2 sles11-ha3 ]
+
+ * Full List of Resources:
+ * Clone Set: Stinith_Clone_Resource [Stonith_Resource]:
+ * Started: [ sles11-ha1 sles11-ha2 sles11-ha3 ]
+ * Resource Group: GRP_Web_Server:
+ * PRIM_Web_IP1 (ocf:heartbeat:IPaddr): Stopped
+ * Clone Set: pingd_Gateway [Res_Pingd_Gateway]:
+ * Started: [ sles11-ha1 sles11-ha2 sles11-ha3 ]
+ * Clone Set: Pingd_Public [Res_Pingd_Public]:
+ * Started: [ sles11-ha1 sles11-ha2 sles11-ha3 ]
+
+Transition Summary:
+ * Start PRIM_Web_IP1 ( sles11-ha2 )
+
+Executing Cluster Transition:
+ * Pseudo action: GRP_Web_Server_start_0
+ * Resource action: PRIM_Web_IP1 start on sles11-ha2
+ * Pseudo action: GRP_Web_Server_running_0
+ * Resource action: PRIM_Web_IP1 monitor=5000 on sles11-ha2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ sles11-ha1 sles11-ha2 sles11-ha3 ]
+
+ * Full List of Resources:
+ * Clone Set: Stinith_Clone_Resource [Stonith_Resource]:
+ * Started: [ sles11-ha1 sles11-ha2 sles11-ha3 ]
+ * Resource Group: GRP_Web_Server:
+ * PRIM_Web_IP1 (ocf:heartbeat:IPaddr): Started sles11-ha2
+ * Clone Set: pingd_Gateway [Res_Pingd_Gateway]:
+ * Started: [ sles11-ha1 sles11-ha2 sles11-ha3 ]
+ * Clone Set: Pingd_Public [Res_Pingd_Public]:
+ * Started: [ sles11-ha1 sles11-ha2 sles11-ha3 ]
diff --git a/cts/scheduler/summary/bug-1572-1.summary b/cts/scheduler/summary/bug-1572-1.summary
new file mode 100644
index 0000000..16870b2
--- /dev/null
+++ b/cts/scheduler/summary/bug-1572-1.summary
@@ -0,0 +1,85 @@
+Current cluster status:
+ * Node List:
+ * Online: [ arc-dknightlx arc-tkincaidlx.wsicorp.com ]
+
+ * Full List of Resources:
+ * Clone Set: ms_drbd_7788 [rsc_drbd_7788] (promotable):
+ * Promoted: [ arc-tkincaidlx.wsicorp.com ]
+ * Unpromoted: [ arc-dknightlx ]
+ * Resource Group: grp_pgsql_mirror:
+ * fs_mirror (ocf:heartbeat:Filesystem): Started arc-tkincaidlx.wsicorp.com
+ * pgsql_5555 (ocf:heartbeat:pgsql): Started arc-tkincaidlx.wsicorp.com
+ * IPaddr_147_81_84_133 (ocf:heartbeat:IPaddr): Started arc-tkincaidlx.wsicorp.com
+
+Transition Summary:
+ * Stop rsc_drbd_7788:0 ( Unpromoted arc-dknightlx ) due to node availability
+ * Restart rsc_drbd_7788:1 ( Promoted arc-tkincaidlx.wsicorp.com ) due to resource definition change
+ * Restart fs_mirror ( arc-tkincaidlx.wsicorp.com ) due to required ms_drbd_7788 notified
+ * Restart pgsql_5555 ( arc-tkincaidlx.wsicorp.com ) due to required fs_mirror start
+ * Restart IPaddr_147_81_84_133 ( arc-tkincaidlx.wsicorp.com ) due to required pgsql_5555 start
+
+Executing Cluster Transition:
+ * Pseudo action: ms_drbd_7788_pre_notify_demote_0
+ * Pseudo action: grp_pgsql_mirror_stop_0
+ * Resource action: IPaddr_147_81_84_133 stop on arc-tkincaidlx.wsicorp.com
+ * Resource action: rsc_drbd_7788:0 notify on arc-dknightlx
+ * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
+ * Pseudo action: ms_drbd_7788_confirmed-pre_notify_demote_0
+ * Resource action: pgsql_5555 stop on arc-tkincaidlx.wsicorp.com
+ * Resource action: fs_mirror stop on arc-tkincaidlx.wsicorp.com
+ * Pseudo action: grp_pgsql_mirror_stopped_0
+ * Pseudo action: ms_drbd_7788_demote_0
+ * Resource action: rsc_drbd_7788:1 demote on arc-tkincaidlx.wsicorp.com
+ * Pseudo action: ms_drbd_7788_demoted_0
+ * Pseudo action: ms_drbd_7788_post_notify_demoted_0
+ * Resource action: rsc_drbd_7788:0 notify on arc-dknightlx
+ * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
+ * Pseudo action: ms_drbd_7788_confirmed-post_notify_demoted_0
+ * Pseudo action: ms_drbd_7788_pre_notify_stop_0
+ * Resource action: rsc_drbd_7788:0 notify on arc-dknightlx
+ * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
+ * Pseudo action: ms_drbd_7788_confirmed-pre_notify_stop_0
+ * Pseudo action: ms_drbd_7788_stop_0
+ * Resource action: rsc_drbd_7788:0 stop on arc-dknightlx
+ * Resource action: rsc_drbd_7788:1 stop on arc-tkincaidlx.wsicorp.com
+ * Pseudo action: ms_drbd_7788_stopped_0
+ * Cluster action: do_shutdown on arc-dknightlx
+ * Pseudo action: ms_drbd_7788_post_notify_stopped_0
+ * Pseudo action: ms_drbd_7788_confirmed-post_notify_stopped_0
+ * Pseudo action: ms_drbd_7788_pre_notify_start_0
+ * Pseudo action: ms_drbd_7788_confirmed-pre_notify_start_0
+ * Pseudo action: ms_drbd_7788_start_0
+ * Resource action: rsc_drbd_7788:1 start on arc-tkincaidlx.wsicorp.com
+ * Pseudo action: ms_drbd_7788_running_0
+ * Pseudo action: ms_drbd_7788_post_notify_running_0
+ * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
+ * Pseudo action: ms_drbd_7788_confirmed-post_notify_running_0
+ * Pseudo action: ms_drbd_7788_pre_notify_promote_0
+ * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
+ * Pseudo action: ms_drbd_7788_confirmed-pre_notify_promote_0
+ * Pseudo action: ms_drbd_7788_promote_0
+ * Resource action: rsc_drbd_7788:1 promote on arc-tkincaidlx.wsicorp.com
+ * Pseudo action: ms_drbd_7788_promoted_0
+ * Pseudo action: ms_drbd_7788_post_notify_promoted_0
+ * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
+ * Pseudo action: ms_drbd_7788_confirmed-post_notify_promoted_0
+ * Pseudo action: grp_pgsql_mirror_start_0
+ * Resource action: fs_mirror start on arc-tkincaidlx.wsicorp.com
+ * Resource action: pgsql_5555 start on arc-tkincaidlx.wsicorp.com
+ * Resource action: pgsql_5555 monitor=30000 on arc-tkincaidlx.wsicorp.com
+ * Resource action: IPaddr_147_81_84_133 start on arc-tkincaidlx.wsicorp.com
+ * Resource action: IPaddr_147_81_84_133 monitor=25000 on arc-tkincaidlx.wsicorp.com
+ * Pseudo action: grp_pgsql_mirror_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ arc-dknightlx arc-tkincaidlx.wsicorp.com ]
+
+ * Full List of Resources:
+ * Clone Set: ms_drbd_7788 [rsc_drbd_7788] (promotable):
+ * Promoted: [ arc-tkincaidlx.wsicorp.com ]
+ * Stopped: [ arc-dknightlx ]
+ * Resource Group: grp_pgsql_mirror:
+ * fs_mirror (ocf:heartbeat:Filesystem): Started arc-tkincaidlx.wsicorp.com
+ * pgsql_5555 (ocf:heartbeat:pgsql): Started arc-tkincaidlx.wsicorp.com
+ * IPaddr_147_81_84_133 (ocf:heartbeat:IPaddr): Started arc-tkincaidlx.wsicorp.com
diff --git a/cts/scheduler/summary/bug-1572-2.summary b/cts/scheduler/summary/bug-1572-2.summary
new file mode 100644
index 0000000..c161239
--- /dev/null
+++ b/cts/scheduler/summary/bug-1572-2.summary
@@ -0,0 +1,61 @@
+Current cluster status:
+ * Node List:
+ * Online: [ arc-dknightlx arc-tkincaidlx.wsicorp.com ]
+
+ * Full List of Resources:
+ * Clone Set: ms_drbd_7788 [rsc_drbd_7788] (promotable):
+ * Promoted: [ arc-tkincaidlx.wsicorp.com ]
+ * Unpromoted: [ arc-dknightlx ]
+ * Resource Group: grp_pgsql_mirror:
+ * fs_mirror (ocf:heartbeat:Filesystem): Started arc-tkincaidlx.wsicorp.com
+ * pgsql_5555 (ocf:heartbeat:pgsql): Started arc-tkincaidlx.wsicorp.com
+ * IPaddr_147_81_84_133 (ocf:heartbeat:IPaddr): Started arc-tkincaidlx.wsicorp.com
+
+Transition Summary:
+ * Stop rsc_drbd_7788:0 ( Unpromoted arc-dknightlx ) due to node availability
+ * Demote rsc_drbd_7788:1 ( Promoted -> Unpromoted arc-tkincaidlx.wsicorp.com )
+ * Stop fs_mirror ( arc-tkincaidlx.wsicorp.com ) due to node availability
+ * Stop pgsql_5555 ( arc-tkincaidlx.wsicorp.com ) due to node availability
+ * Stop IPaddr_147_81_84_133 ( arc-tkincaidlx.wsicorp.com ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: ms_drbd_7788_pre_notify_demote_0
+ * Pseudo action: grp_pgsql_mirror_stop_0
+ * Resource action: IPaddr_147_81_84_133 stop on arc-tkincaidlx.wsicorp.com
+ * Resource action: rsc_drbd_7788:0 notify on arc-dknightlx
+ * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
+ * Pseudo action: ms_drbd_7788_confirmed-pre_notify_demote_0
+ * Resource action: pgsql_5555 stop on arc-tkincaidlx.wsicorp.com
+ * Resource action: fs_mirror stop on arc-tkincaidlx.wsicorp.com
+ * Pseudo action: grp_pgsql_mirror_stopped_0
+ * Pseudo action: ms_drbd_7788_demote_0
+ * Resource action: rsc_drbd_7788:1 demote on arc-tkincaidlx.wsicorp.com
+ * Pseudo action: ms_drbd_7788_demoted_0
+ * Pseudo action: ms_drbd_7788_post_notify_demoted_0
+ * Resource action: rsc_drbd_7788:0 notify on arc-dknightlx
+ * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
+ * Pseudo action: ms_drbd_7788_confirmed-post_notify_demoted_0
+ * Pseudo action: ms_drbd_7788_pre_notify_stop_0
+ * Resource action: rsc_drbd_7788:0 notify on arc-dknightlx
+ * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
+ * Pseudo action: ms_drbd_7788_confirmed-pre_notify_stop_0
+ * Pseudo action: ms_drbd_7788_stop_0
+ * Resource action: rsc_drbd_7788:0 stop on arc-dknightlx
+ * Pseudo action: ms_drbd_7788_stopped_0
+ * Cluster action: do_shutdown on arc-dknightlx
+ * Pseudo action: ms_drbd_7788_post_notify_stopped_0
+ * Resource action: rsc_drbd_7788:1 notify on arc-tkincaidlx.wsicorp.com
+ * Pseudo action: ms_drbd_7788_confirmed-post_notify_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ arc-dknightlx arc-tkincaidlx.wsicorp.com ]
+
+ * Full List of Resources:
+ * Clone Set: ms_drbd_7788 [rsc_drbd_7788] (promotable):
+ * Unpromoted: [ arc-tkincaidlx.wsicorp.com ]
+ * Stopped: [ arc-dknightlx ]
+ * Resource Group: grp_pgsql_mirror:
+ * fs_mirror (ocf:heartbeat:Filesystem): Stopped
+ * pgsql_5555 (ocf:heartbeat:pgsql): Stopped
+ * IPaddr_147_81_84_133 (ocf:heartbeat:IPaddr): Stopped
diff --git a/cts/scheduler/summary/bug-1573.summary b/cts/scheduler/summary/bug-1573.summary
new file mode 100644
index 0000000..c40d96b
--- /dev/null
+++ b/cts/scheduler/summary/bug-1573.summary
@@ -0,0 +1,34 @@
+Current cluster status:
+ * Node List:
+ * Online: [ xen-b ]
+ * OFFLINE: [ xen-c ]
+
+ * Full List of Resources:
+ * Resource Group: group_1:
+ * IPaddr_192_168_1_101 (ocf:heartbeat:IPaddr): Stopped
+ * apache_2 (ocf:heartbeat:apache): Stopped
+ * Resource Group: group_11:
+ * IPaddr_192_168_1_102 (ocf:heartbeat:IPaddr): Started xen-b
+ * apache_6 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Stop IPaddr_192_168_1_102 ( xen-b ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: group_11_stop_0
+ * Resource action: IPaddr_192_168_1_102 stop on xen-b
+ * Cluster action: do_shutdown on xen-b
+ * Pseudo action: group_11_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ xen-b ]
+ * OFFLINE: [ xen-c ]
+
+ * Full List of Resources:
+ * Resource Group: group_1:
+ * IPaddr_192_168_1_101 (ocf:heartbeat:IPaddr): Stopped
+ * apache_2 (ocf:heartbeat:apache): Stopped
+ * Resource Group: group_11:
+ * IPaddr_192_168_1_102 (ocf:heartbeat:IPaddr): Stopped
+ * apache_6 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/bug-1685.summary b/cts/scheduler/summary/bug-1685.summary
new file mode 100644
index 0000000..2ed29bc
--- /dev/null
+++ b/cts/scheduler/summary/bug-1685.summary
@@ -0,0 +1,38 @@
+Current cluster status:
+ * Node List:
+ * Online: [ redun1 redun2 ]
+
+ * Full List of Resources:
+ * Clone Set: shared_storage [prim_shared_storage] (promotable):
+ * Unpromoted: [ redun1 redun2 ]
+ * shared_filesystem (ocf:heartbeat:Filesystem): Stopped
+
+Transition Summary:
+ * Promote prim_shared_storage:0 ( Unpromoted -> Promoted redun2 )
+ * Start shared_filesystem ( redun2 )
+
+Executing Cluster Transition:
+ * Pseudo action: shared_storage_pre_notify_promote_0
+ * Resource action: prim_shared_storage:0 notify on redun2
+ * Resource action: prim_shared_storage:1 notify on redun1
+ * Pseudo action: shared_storage_confirmed-pre_notify_promote_0
+ * Pseudo action: shared_storage_promote_0
+ * Resource action: prim_shared_storage:0 promote on redun2
+ * Pseudo action: shared_storage_promoted_0
+ * Pseudo action: shared_storage_post_notify_promoted_0
+ * Resource action: prim_shared_storage:0 notify on redun2
+ * Resource action: prim_shared_storage:1 notify on redun1
+ * Pseudo action: shared_storage_confirmed-post_notify_promoted_0
+ * Resource action: shared_filesystem start on redun2
+ * Resource action: prim_shared_storage:1 monitor=120000 on redun1
+ * Resource action: shared_filesystem monitor=120000 on redun2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ redun1 redun2 ]
+
+ * Full List of Resources:
+ * Clone Set: shared_storage [prim_shared_storage] (promotable):
+ * Promoted: [ redun2 ]
+ * Unpromoted: [ redun1 ]
+ * shared_filesystem (ocf:heartbeat:Filesystem): Started redun2
diff --git a/cts/scheduler/summary/bug-1718.summary b/cts/scheduler/summary/bug-1718.summary
new file mode 100644
index 0000000..76beca0
--- /dev/null
+++ b/cts/scheduler/summary/bug-1718.summary
@@ -0,0 +1,44 @@
+1 of 5 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ biggame.ds9 heartbeat.ds9 ops.ds9 ]
+ * OFFLINE: [ defiant.ds9 warbird.ds9 ]
+
+ * Full List of Resources:
+ * Resource Group: Web_Group:
+ * Apache_IP (ocf:heartbeat:IPaddr): Started heartbeat.ds9
+ * resource_IP2 (ocf:heartbeat:IPaddr): Stopped (disabled)
+ * resource_dummyweb (ocf:heartbeat:Dummy): Stopped
+ * Resource Group: group_fUN:
+ * resource_IP3 (ocf:heartbeat:IPaddr): Started ops.ds9
+ * resource_dummy (ocf:heartbeat:Dummy): Started ops.ds9
+
+Transition Summary:
+ * Stop resource_IP3 ( ops.ds9 ) due to unrunnable Web_Group running
+ * Stop resource_dummy ( ops.ds9 ) due to required resource_IP3 start
+
+Executing Cluster Transition:
+ * Pseudo action: group_fUN_stop_0
+ * Resource action: resource_dummy stop on ops.ds9
+ * Resource action: OpenVPN_IP delete on ops.ds9
+ * Resource action: OpenVPN_IP delete on heartbeat.ds9
+ * Resource action: Apache delete on ops.ds9
+ * Resource action: Apache delete on heartbeat.ds9
+ * Resource action: Apache delete on biggame.ds9
+ * Resource action: resource_IP3 stop on ops.ds9
+ * Pseudo action: group_fUN_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ biggame.ds9 heartbeat.ds9 ops.ds9 ]
+ * OFFLINE: [ defiant.ds9 warbird.ds9 ]
+
+ * Full List of Resources:
+ * Resource Group: Web_Group:
+ * Apache_IP (ocf:heartbeat:IPaddr): Started heartbeat.ds9
+ * resource_IP2 (ocf:heartbeat:IPaddr): Stopped (disabled)
+ * resource_dummyweb (ocf:heartbeat:Dummy): Stopped
+ * Resource Group: group_fUN:
+ * resource_IP3 (ocf:heartbeat:IPaddr): Stopped
+ * resource_dummy (ocf:heartbeat:Dummy): Stopped
diff --git a/cts/scheduler/summary/bug-1765.summary b/cts/scheduler/summary/bug-1765.summary
new file mode 100644
index 0000000..ae851fe
--- /dev/null
+++ b/cts/scheduler/summary/bug-1765.summary
@@ -0,0 +1,38 @@
+Current cluster status:
+ * Node List:
+ * Online: [ sles236 sles238 ]
+
+ * Full List of Resources:
+ * Clone Set: ms-drbd0 [drbd0] (promotable):
+ * Promoted: [ sles236 ]
+ * Stopped: [ sles238 ]
+ * Clone Set: ms-drbd1 [drbd1] (promotable):
+ * Promoted: [ sles236 ]
+ * Unpromoted: [ sles238 ]
+
+Transition Summary:
+ * Start drbd0:1 ( sles238 )
+
+Executing Cluster Transition:
+ * Pseudo action: ms-drbd0_pre_notify_start_0
+ * Resource action: drbd0:0 notify on sles236
+ * Pseudo action: ms-drbd0_confirmed-pre_notify_start_0
+ * Pseudo action: ms-drbd0_start_0
+ * Resource action: drbd0:1 start on sles238
+ * Pseudo action: ms-drbd0_running_0
+ * Pseudo action: ms-drbd0_post_notify_running_0
+ * Resource action: drbd0:0 notify on sles236
+ * Resource action: drbd0:1 notify on sles238
+ * Pseudo action: ms-drbd0_confirmed-post_notify_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ sles236 sles238 ]
+
+ * Full List of Resources:
+ * Clone Set: ms-drbd0 [drbd0] (promotable):
+ * Promoted: [ sles236 ]
+ * Unpromoted: [ sles238 ]
+ * Clone Set: ms-drbd1 [drbd1] (promotable):
+ * Promoted: [ sles236 ]
+ * Unpromoted: [ sles238 ]
diff --git a/cts/scheduler/summary/bug-1820-1.summary b/cts/scheduler/summary/bug-1820-1.summary
new file mode 100644
index 0000000..5142348
--- /dev/null
+++ b/cts/scheduler/summary/bug-1820-1.summary
@@ -0,0 +1,44 @@
+Current cluster status:
+ * Node List:
+ * Online: [ star world ]
+
+ * Full List of Resources:
+ * p1 (ocf:heartbeat:Xen): Stopped
+ * Resource Group: gr1:
+ * test1 (ocf:heartbeat:Xen): Started star
+ * test2 (ocf:heartbeat:Xen): Started star
+
+Transition Summary:
+ * Start p1 ( world )
+ * Migrate test1 ( star -> world )
+ * Migrate test2 ( star -> world )
+
+Executing Cluster Transition:
+ * Resource action: p1 monitor on world
+ * Resource action: p1 monitor on star
+ * Pseudo action: gr1_stop_0
+ * Resource action: test1 migrate_to on star
+ * Resource action: p1 start on world
+ * Resource action: test1 migrate_from on world
+ * Resource action: test2 migrate_to on star
+ * Resource action: test2 migrate_from on world
+ * Resource action: test2 stop on star
+ * Resource action: test1 stop on star
+ * Cluster action: do_shutdown on star
+ * Pseudo action: gr1_stopped_0
+ * Pseudo action: gr1_start_0
+ * Pseudo action: test1_start_0
+ * Pseudo action: test2_start_0
+ * Pseudo action: gr1_running_0
+ * Resource action: test1 monitor=10000 on world
+ * Resource action: test2 monitor=10000 on world
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ star world ]
+
+ * Full List of Resources:
+ * p1 (ocf:heartbeat:Xen): Started world
+ * Resource Group: gr1:
+ * test1 (ocf:heartbeat:Xen): Started world
+ * test2 (ocf:heartbeat:Xen): Started world
diff --git a/cts/scheduler/summary/bug-1820.summary b/cts/scheduler/summary/bug-1820.summary
new file mode 100644
index 0000000..1862ac1
--- /dev/null
+++ b/cts/scheduler/summary/bug-1820.summary
@@ -0,0 +1,38 @@
+Current cluster status:
+ * Node List:
+ * Online: [ star world ]
+
+ * Full List of Resources:
+ * Resource Group: gr1:
+ * test1 (ocf:heartbeat:Xen): Started star
+ * test2 (ocf:heartbeat:Xen): Started star
+
+Transition Summary:
+ * Migrate test1 ( star -> world )
+ * Migrate test2 ( star -> world )
+
+Executing Cluster Transition:
+ * Pseudo action: gr1_stop_0
+ * Resource action: test1 migrate_to on star
+ * Resource action: test1 migrate_from on world
+ * Resource action: test2 migrate_to on star
+ * Resource action: test2 migrate_from on world
+ * Resource action: test2 stop on star
+ * Resource action: test1 stop on star
+ * Cluster action: do_shutdown on star
+ * Pseudo action: gr1_stopped_0
+ * Pseudo action: gr1_start_0
+ * Pseudo action: test1_start_0
+ * Pseudo action: test2_start_0
+ * Pseudo action: gr1_running_0
+ * Resource action: test1 monitor=10000 on world
+ * Resource action: test2 monitor=10000 on world
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ star world ]
+
+ * Full List of Resources:
+ * Resource Group: gr1:
+ * test1 (ocf:heartbeat:Xen): Started world
+ * test2 (ocf:heartbeat:Xen): Started world
diff --git a/cts/scheduler/summary/bug-1822.summary b/cts/scheduler/summary/bug-1822.summary
new file mode 100644
index 0000000..3890a02
--- /dev/null
+++ b/cts/scheduler/summary/bug-1822.summary
@@ -0,0 +1,44 @@
+Current cluster status:
+ * Node List:
+ * Online: [ process1a process2b ]
+
+ * Full List of Resources:
+ * Clone Set: ms-sf [ms-sf_group] (promotable, unique):
+ * Resource Group: ms-sf_group:0:
+ * promotable_Stateful:0 (ocf:heartbeat:Dummy-statful): Unpromoted process2b
+ * promotable_procdctl:0 (ocf:heartbeat:procdctl): Stopped
+ * Resource Group: ms-sf_group:1:
+ * promotable_Stateful:1 (ocf:heartbeat:Dummy-statful): Promoted process1a
+ * promotable_procdctl:1 (ocf:heartbeat:procdctl): Promoted process1a
+
+Transition Summary:
+ * Stop promotable_Stateful:1 ( Promoted process1a ) due to node availability
+ * Stop promotable_procdctl:1 ( Promoted process1a ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: ms-sf_demote_0
+ * Pseudo action: ms-sf_group:1_demote_0
+ * Resource action: promotable_Stateful:1 demote on process1a
+ * Resource action: promotable_procdctl:1 demote on process1a
+ * Pseudo action: ms-sf_group:1_demoted_0
+ * Pseudo action: ms-sf_demoted_0
+ * Pseudo action: ms-sf_stop_0
+ * Pseudo action: ms-sf_group:1_stop_0
+ * Resource action: promotable_Stateful:1 stop on process1a
+ * Resource action: promotable_procdctl:1 stop on process1a
+ * Cluster action: do_shutdown on process1a
+ * Pseudo action: ms-sf_group:1_stopped_0
+ * Pseudo action: ms-sf_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ process1a process2b ]
+
+ * Full List of Resources:
+ * Clone Set: ms-sf [ms-sf_group] (promotable, unique):
+ * Resource Group: ms-sf_group:0:
+ * promotable_Stateful:0 (ocf:heartbeat:Dummy-statful): Unpromoted process2b
+ * promotable_procdctl:0 (ocf:heartbeat:procdctl): Stopped
+ * Resource Group: ms-sf_group:1:
+ * promotable_Stateful:1 (ocf:heartbeat:Dummy-statful): Stopped
+ * promotable_procdctl:1 (ocf:heartbeat:procdctl): Stopped
diff --git a/cts/scheduler/summary/bug-5014-A-start-B-start.summary b/cts/scheduler/summary/bug-5014-A-start-B-start.summary
new file mode 100644
index 0000000..fdc06b0
--- /dev/null
+++ b/cts/scheduler/summary/bug-5014-A-start-B-start.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * ClusterIP (ocf:heartbeat:IPaddr2): Stopped
+ * ClusterIP2 (ocf:heartbeat:IPaddr2): Stopped
+
+Transition Summary:
+ * Start ClusterIP ( fc16-builder )
+ * Start ClusterIP2 ( fc16-builder )
+
+Executing Cluster Transition:
+ * Resource action: ClusterIP monitor on fc16-builder
+ * Resource action: ClusterIP2 monitor on fc16-builder
+ * Resource action: ClusterIP start on fc16-builder
+ * Resource action: ClusterIP2 start on fc16-builder
+ * Resource action: ClusterIP monitor=30000 on fc16-builder
+ * Resource action: ClusterIP2 monitor=30000 on fc16-builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * ClusterIP (ocf:heartbeat:IPaddr2): Started fc16-builder
+ * ClusterIP2 (ocf:heartbeat:IPaddr2): Started fc16-builder
diff --git a/cts/scheduler/summary/bug-5014-A-stop-B-started.summary b/cts/scheduler/summary/bug-5014-A-stop-B-started.summary
new file mode 100644
index 0000000..025fc67
--- /dev/null
+++ b/cts/scheduler/summary/bug-5014-A-stop-B-started.summary
@@ -0,0 +1,23 @@
+1 of 2 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * ClusterIP (ocf:heartbeat:IPaddr2): Started fc16-builder (disabled)
+ * ClusterIP2 (ocf:heartbeat:IPaddr2): Started fc16-builder
+
+Transition Summary:
+ * Stop ClusterIP ( fc16-builder ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: ClusterIP stop on fc16-builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * ClusterIP (ocf:heartbeat:IPaddr2): Stopped (disabled)
+ * ClusterIP2 (ocf:heartbeat:IPaddr2): Started fc16-builder
diff --git a/cts/scheduler/summary/bug-5014-A-stopped-B-stopped.summary b/cts/scheduler/summary/bug-5014-A-stopped-B-stopped.summary
new file mode 100644
index 0000000..ced70e7
--- /dev/null
+++ b/cts/scheduler/summary/bug-5014-A-stopped-B-stopped.summary
@@ -0,0 +1,24 @@
+1 of 2 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * ClusterIP (ocf:heartbeat:IPaddr2): Stopped (disabled)
+ * ClusterIP2 (ocf:heartbeat:IPaddr2): Stopped
+
+Transition Summary:
+ * Start ClusterIP2 ( fc16-builder ) due to unrunnable ClusterIP start (blocked)
+
+Executing Cluster Transition:
+ * Resource action: ClusterIP monitor on fc16-builder
+ * Resource action: ClusterIP2 monitor on fc16-builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * ClusterIP (ocf:heartbeat:IPaddr2): Stopped (disabled)
+ * ClusterIP2 (ocf:heartbeat:IPaddr2): Stopped
diff --git a/cts/scheduler/summary/bug-5014-CLONE-A-start-B-start.summary b/cts/scheduler/summary/bug-5014-CLONE-A-start-B-start.summary
new file mode 100644
index 0000000..fc93e4c
--- /dev/null
+++ b/cts/scheduler/summary/bug-5014-CLONE-A-start-B-start.summary
@@ -0,0 +1,35 @@
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * Clone Set: clone1 [ClusterIP]:
+ * Stopped: [ fc16-builder ]
+ * Clone Set: clone2 [ClusterIP2]:
+ * Stopped: [ fc16-builder ]
+
+Transition Summary:
+ * Start ClusterIP:0 ( fc16-builder )
+ * Start ClusterIP2:0 ( fc16-builder )
+
+Executing Cluster Transition:
+ * Resource action: ClusterIP:0 monitor on fc16-builder
+ * Pseudo action: clone1_start_0
+ * Resource action: ClusterIP2:0 monitor on fc16-builder
+ * Resource action: ClusterIP:0 start on fc16-builder
+ * Pseudo action: clone1_running_0
+ * Pseudo action: clone2_start_0
+ * Resource action: ClusterIP:0 monitor=30000 on fc16-builder
+ * Resource action: ClusterIP2:0 start on fc16-builder
+ * Pseudo action: clone2_running_0
+ * Resource action: ClusterIP2:0 monitor=30000 on fc16-builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * Clone Set: clone1 [ClusterIP]:
+ * Started: [ fc16-builder ]
+ * Clone Set: clone2 [ClusterIP2]:
+ * Started: [ fc16-builder ]
diff --git a/cts/scheduler/summary/bug-5014-CLONE-A-stop-B-started.summary b/cts/scheduler/summary/bug-5014-CLONE-A-stop-B-started.summary
new file mode 100644
index 0000000..a0c5e54
--- /dev/null
+++ b/cts/scheduler/summary/bug-5014-CLONE-A-stop-B-started.summary
@@ -0,0 +1,29 @@
+1 of 2 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * Clone Set: clone1 [ClusterIP] (disabled):
+ * Started: [ fc16-builder ]
+ * Clone Set: clone2 [ClusterIP2]:
+ * Started: [ fc16-builder ]
+
+Transition Summary:
+ * Stop ClusterIP:0 ( fc16-builder ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: clone1_stop_0
+ * Resource action: ClusterIP:0 stop on fc16-builder
+ * Pseudo action: clone1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * Clone Set: clone1 [ClusterIP] (disabled):
+ * Stopped (disabled): [ fc16-builder ]
+ * Clone Set: clone2 [ClusterIP2]:
+ * Started: [ fc16-builder ]
diff --git a/cts/scheduler/summary/bug-5014-CthenAthenB-C-stopped.summary b/cts/scheduler/summary/bug-5014-CthenAthenB-C-stopped.summary
new file mode 100644
index 0000000..b166377
--- /dev/null
+++ b/cts/scheduler/summary/bug-5014-CthenAthenB-C-stopped.summary
@@ -0,0 +1,28 @@
+1 of 3 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * ClusterIP (ocf:heartbeat:IPaddr2): Stopped
+ * ClusterIP2 (ocf:heartbeat:IPaddr2): Stopped
+ * ClusterIP3 (ocf:heartbeat:IPaddr2): Stopped (disabled)
+
+Transition Summary:
+ * Start ClusterIP ( fc16-builder ) due to unrunnable ClusterIP3 start (blocked)
+ * Start ClusterIP2 ( fc16-builder ) due to unrunnable ClusterIP start (blocked)
+
+Executing Cluster Transition:
+ * Resource action: ClusterIP monitor on fc16-builder
+ * Resource action: ClusterIP2 monitor on fc16-builder
+ * Resource action: ClusterIP3 monitor on fc16-builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * ClusterIP (ocf:heartbeat:IPaddr2): Stopped
+ * ClusterIP2 (ocf:heartbeat:IPaddr2): Stopped
+ * ClusterIP3 (ocf:heartbeat:IPaddr2): Stopped (disabled)
diff --git a/cts/scheduler/summary/bug-5014-GROUP-A-start-B-start.summary b/cts/scheduler/summary/bug-5014-GROUP-A-start-B-start.summary
new file mode 100644
index 0000000..7fd1568
--- /dev/null
+++ b/cts/scheduler/summary/bug-5014-GROUP-A-start-B-start.summary
@@ -0,0 +1,33 @@
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * Resource Group: group1:
+ * ClusterIP (ocf:heartbeat:IPaddr2): Stopped
+ * Resource Group: group2:
+ * ClusterIP2 (ocf:heartbeat:IPaddr2): Stopped
+
+Transition Summary:
+ * Start ClusterIP ( fc16-builder )
+ * Start ClusterIP2 ( fc16-builder )
+
+Executing Cluster Transition:
+ * Pseudo action: group1_start_0
+ * Resource action: ClusterIP start on fc16-builder
+ * Pseudo action: group1_running_0
+ * Resource action: ClusterIP monitor=30000 on fc16-builder
+ * Pseudo action: group2_start_0
+ * Resource action: ClusterIP2 start on fc16-builder
+ * Pseudo action: group2_running_0
+ * Resource action: ClusterIP2 monitor=30000 on fc16-builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * Resource Group: group1:
+ * ClusterIP (ocf:heartbeat:IPaddr2): Started fc16-builder
+ * Resource Group: group2:
+ * ClusterIP2 (ocf:heartbeat:IPaddr2): Started fc16-builder
diff --git a/cts/scheduler/summary/bug-5014-GROUP-A-stopped-B-started.summary b/cts/scheduler/summary/bug-5014-GROUP-A-stopped-B-started.summary
new file mode 100644
index 0000000..7bf976c
--- /dev/null
+++ b/cts/scheduler/summary/bug-5014-GROUP-A-stopped-B-started.summary
@@ -0,0 +1,29 @@
+1 of 2 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * Resource Group: group1 (disabled):
+ * ClusterIP (ocf:heartbeat:IPaddr2): Started fc16-builder (disabled)
+ * Resource Group: group2:
+ * ClusterIP2 (ocf:heartbeat:IPaddr2): Started fc16-builder
+
+Transition Summary:
+ * Stop ClusterIP ( fc16-builder ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: group1_stop_0
+ * Resource action: ClusterIP stop on fc16-builder
+ * Pseudo action: group1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * Resource Group: group1 (disabled):
+ * ClusterIP (ocf:heartbeat:IPaddr2): Stopped (disabled)
+ * Resource Group: group2:
+ * ClusterIP2 (ocf:heartbeat:IPaddr2): Started fc16-builder
diff --git a/cts/scheduler/summary/bug-5014-GROUP-A-stopped-B-stopped.summary b/cts/scheduler/summary/bug-5014-GROUP-A-stopped-B-stopped.summary
new file mode 100644
index 0000000..426b576
--- /dev/null
+++ b/cts/scheduler/summary/bug-5014-GROUP-A-stopped-B-stopped.summary
@@ -0,0 +1,26 @@
+1 of 2 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * Resource Group: group1:
+ * ClusterIP (ocf:heartbeat:IPaddr2): Stopped (disabled)
+ * Resource Group: group2:
+ * ClusterIP2 (ocf:heartbeat:IPaddr2): Stopped
+
+Transition Summary:
+ * Start ClusterIP2 ( fc16-builder ) due to unrunnable group1 running (blocked)
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * Resource Group: group1:
+ * ClusterIP (ocf:heartbeat:IPaddr2): Stopped (disabled)
+ * Resource Group: group2:
+ * ClusterIP2 (ocf:heartbeat:IPaddr2): Stopped
diff --git a/cts/scheduler/summary/bug-5014-ordered-set-symmetrical-false.summary b/cts/scheduler/summary/bug-5014-ordered-set-symmetrical-false.summary
new file mode 100644
index 0000000..a25c618
--- /dev/null
+++ b/cts/scheduler/summary/bug-5014-ordered-set-symmetrical-false.summary
@@ -0,0 +1,27 @@
+1 of 3 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Started fc16-builder
+ * B (ocf:pacemaker:Dummy): Started fc16-builder
+ * C (ocf:pacemaker:Dummy): Started fc16-builder (disabled)
+
+Transition Summary:
+ * Stop C ( fc16-builder ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: C stop on fc16-builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Started fc16-builder
+ * B (ocf:pacemaker:Dummy): Started fc16-builder
+ * C (ocf:pacemaker:Dummy): Stopped (disabled)
diff --git a/cts/scheduler/summary/bug-5014-ordered-set-symmetrical-true.summary b/cts/scheduler/summary/bug-5014-ordered-set-symmetrical-true.summary
new file mode 100644
index 0000000..70159d1
--- /dev/null
+++ b/cts/scheduler/summary/bug-5014-ordered-set-symmetrical-true.summary
@@ -0,0 +1,29 @@
+1 of 3 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Started fc16-builder
+ * B (ocf:pacemaker:Dummy): Started fc16-builder
+ * C (ocf:pacemaker:Dummy): Started fc16-builder (disabled)
+
+Transition Summary:
+ * Stop A ( fc16-builder ) due to required C start
+ * Stop C ( fc16-builder ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: A stop on fc16-builder
+ * Resource action: C stop on fc16-builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Stopped
+ * B (ocf:pacemaker:Dummy): Started fc16-builder
+ * C (ocf:pacemaker:Dummy): Stopped (disabled)
diff --git a/cts/scheduler/summary/bug-5025-1.summary b/cts/scheduler/summary/bug-5025-1.summary
new file mode 100644
index 0000000..f83116e
--- /dev/null
+++ b/cts/scheduler/summary/bug-5025-1.summary
@@ -0,0 +1,25 @@
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 fc16-builder3 ]
+
+ * Full List of Resources:
+ * virt-fencing (stonith:fence_xvm): Started fc16-builder
+ * A (ocf:pacemaker:Dummy): Started fc16-builder
+
+Transition Summary:
+ * Reload A ( fc16-builder )
+
+Executing Cluster Transition:
+ * Cluster action: clear_failcount for A on fc16-builder
+ * Resource action: A reload-agent on fc16-builder
+ * Resource action: A monitor=30000 on fc16-builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 fc16-builder3 ]
+
+ * Full List of Resources:
+ * virt-fencing (stonith:fence_xvm): Started fc16-builder
+ * A (ocf:pacemaker:Dummy): Started fc16-builder
diff --git a/cts/scheduler/summary/bug-5025-2.summary b/cts/scheduler/summary/bug-5025-2.summary
new file mode 100644
index 0000000..9e0bdfd
--- /dev/null
+++ b/cts/scheduler/summary/bug-5025-2.summary
@@ -0,0 +1,23 @@
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 fc16-builder3 ]
+
+ * Full List of Resources:
+ * virt-fencing (stonith:fence_xvm): Stopped
+ * A (ocf:pacemaker:Dummy): Started fc16-builder
+ * B (ocf:pacemaker:Dummy): Started fc16-builder
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 fc16-builder3 ]
+
+ * Full List of Resources:
+ * virt-fencing (stonith:fence_xvm): Stopped
+ * A (ocf:pacemaker:Dummy): Started fc16-builder
+ * B (ocf:pacemaker:Dummy): Started fc16-builder
diff --git a/cts/scheduler/summary/bug-5025-3.summary b/cts/scheduler/summary/bug-5025-3.summary
new file mode 100644
index 0000000..68a471a
--- /dev/null
+++ b/cts/scheduler/summary/bug-5025-3.summary
@@ -0,0 +1,28 @@
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 fc16-builder3 ]
+
+ * Full List of Resources:
+ * virt-fencing (stonith:fence_xvm): Stopped
+ * A (ocf:pacemaker:Dummy): Started fc16-builder
+ * B (ocf:pacemaker:Dummy): Started fc16-builder
+
+Transition Summary:
+ * Restart A ( fc16-builder ) due to resource definition change
+
+Executing Cluster Transition:
+ * Resource action: A stop on fc16-builder
+ * Cluster action: clear_failcount for A on fc16-builder
+ * Resource action: A start on fc16-builder
+ * Resource action: A monitor=30000 on fc16-builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 fc16-builder3 ]
+
+ * Full List of Resources:
+ * virt-fencing (stonith:fence_xvm): Stopped
+ * A (ocf:pacemaker:Dummy): Started fc16-builder
+ * B (ocf:pacemaker:Dummy): Started fc16-builder
diff --git a/cts/scheduler/summary/bug-5025-4.summary b/cts/scheduler/summary/bug-5025-4.summary
new file mode 100644
index 0000000..2456018
--- /dev/null
+++ b/cts/scheduler/summary/bug-5025-4.summary
@@ -0,0 +1,24 @@
+Current cluster status:
+ * Node List:
+ * Online: [ 18builder ]
+ * OFFLINE: [ 18node1 18node2 18node3 ]
+
+ * Full List of Resources:
+ * remote-node (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start remote-node ( 18builder )
+
+Executing Cluster Transition:
+ * Resource action: remote-node delete on 18builder
+ * Cluster action: clear_failcount for remote-node on 18builder
+ * Resource action: remote-node start on 18builder
+ * Resource action: remote-node monitor=30000 on 18builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18builder ]
+ * OFFLINE: [ 18node1 18node2 18node3 ]
+
+ * Full List of Resources:
+ * remote-node (ocf:pacemaker:Dummy): Started 18builder
diff --git a/cts/scheduler/summary/bug-5028-bottom.summary b/cts/scheduler/summary/bug-5028-bottom.summary
new file mode 100644
index 0000000..060b133
--- /dev/null
+++ b/cts/scheduler/summary/bug-5028-bottom.summary
@@ -0,0 +1,26 @@
+0 of 2 resource instances DISABLED and 1 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ bl460g6a bl460g6b ]
+
+ * Full List of Resources:
+ * Resource Group: dummy-g:
+ * dummy01 (ocf:heartbeat:Dummy): FAILED bl460g6a (blocked)
+ * dummy02 (ocf:heartbeat:Dummy-stop-NG): Started bl460g6a
+
+Transition Summary:
+ * Stop dummy02 ( bl460g6a ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: dummy-g_stop_0
+ * Resource action: dummy02 stop on bl460g6a
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ bl460g6a bl460g6b ]
+
+ * Full List of Resources:
+ * Resource Group: dummy-g:
+ * dummy01 (ocf:heartbeat:Dummy): FAILED bl460g6a (blocked)
+ * dummy02 (ocf:heartbeat:Dummy-stop-NG): Stopped
diff --git a/cts/scheduler/summary/bug-5028-detach.summary b/cts/scheduler/summary/bug-5028-detach.summary
new file mode 100644
index 0000000..ab5a278
--- /dev/null
+++ b/cts/scheduler/summary/bug-5028-detach.summary
@@ -0,0 +1,28 @@
+
+ *** Resource management is DISABLED ***
+ The cluster will not attempt to start, stop or recover services
+
+0 of 2 resource instances DISABLED and 1 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ bl460g6a bl460g6b ]
+
+ * Full List of Resources:
+ * Resource Group: dummy-g (maintenance):
+ * dummy01 (ocf:heartbeat:Dummy): Started bl460g6a (maintenance)
+ * dummy02 (ocf:heartbeat:Dummy-stop-NG): FAILED bl460g6a (blocked)
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Cluster action: do_shutdown on bl460g6a
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ bl460g6a bl460g6b ]
+
+ * Full List of Resources:
+ * Resource Group: dummy-g (maintenance):
+ * dummy01 (ocf:heartbeat:Dummy): Started bl460g6a (maintenance)
+ * dummy02 (ocf:heartbeat:Dummy-stop-NG): FAILED bl460g6a (blocked)
diff --git a/cts/scheduler/summary/bug-5028.summary b/cts/scheduler/summary/bug-5028.summary
new file mode 100644
index 0000000..b8eb46a
--- /dev/null
+++ b/cts/scheduler/summary/bug-5028.summary
@@ -0,0 +1,26 @@
+0 of 2 resource instances DISABLED and 1 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ bl460g6a bl460g6b ]
+
+ * Full List of Resources:
+ * Resource Group: dummy-g:
+ * dummy01 (ocf:heartbeat:Dummy): Started bl460g6a
+ * dummy02 (ocf:heartbeat:Dummy-stop-NG): FAILED bl460g6a (blocked)
+
+Transition Summary:
+ * Stop dummy01 ( bl460g6a ) due to unrunnable dummy02 stop (blocked)
+
+Executing Cluster Transition:
+ * Pseudo action: dummy-g_stop_0
+ * Pseudo action: dummy-g_start_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ bl460g6a bl460g6b ]
+
+ * Full List of Resources:
+ * Resource Group: dummy-g:
+ * dummy01 (ocf:heartbeat:Dummy): Started bl460g6a
+ * dummy02 (ocf:heartbeat:Dummy-stop-NG): FAILED bl460g6a (blocked)
diff --git a/cts/scheduler/summary/bug-5038.summary b/cts/scheduler/summary/bug-5038.summary
new file mode 100644
index 0000000..f7f8a7b
--- /dev/null
+++ b/cts/scheduler/summary/bug-5038.summary
@@ -0,0 +1,25 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node-0 node-2 ]
+
+ * Full List of Resources:
+ * Clone Set: clone-node-app-rsc [node-app-rsc]:
+ * Started: [ node-0 node-2 ]
+ * Resource Group: group-dc:
+ * failover-ip (ocf:heartbeat:IPaddr2): Started node-0
+ * master-app-rsc (lsb:cluster-master): Started node-0
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node-0 node-2 ]
+
+ * Full List of Resources:
+ * Clone Set: clone-node-app-rsc [node-app-rsc]:
+ * Started: [ node-0 node-2 ]
+ * Resource Group: group-dc:
+ * failover-ip (ocf:heartbeat:IPaddr2): Started node-0
+ * master-app-rsc (lsb:cluster-master): Started node-0
diff --git a/cts/scheduler/summary/bug-5059.summary b/cts/scheduler/summary/bug-5059.summary
new file mode 100644
index 0000000..b3661e0
--- /dev/null
+++ b/cts/scheduler/summary/bug-5059.summary
@@ -0,0 +1,77 @@
+Current cluster status:
+ * Node List:
+ * Node gluster03.h: standby
+ * Online: [ gluster01.h gluster02.h ]
+ * OFFLINE: [ gluster04.h ]
+
+ * Full List of Resources:
+ * Clone Set: ms_stateful [g_stateful] (promotable):
+ * Resource Group: g_stateful:0:
+ * p_stateful1 (ocf:pacemaker:Stateful): Unpromoted gluster01.h
+ * p_stateful2 (ocf:pacemaker:Stateful): Stopped
+ * Resource Group: g_stateful:1:
+ * p_stateful1 (ocf:pacemaker:Stateful): Unpromoted gluster02.h
+ * p_stateful2 (ocf:pacemaker:Stateful): Stopped
+ * Stopped: [ gluster03.h gluster04.h ]
+ * Clone Set: c_dummy [p_dummy1]:
+ * Started: [ gluster01.h gluster02.h ]
+
+Transition Summary:
+ * Promote p_stateful1:0 ( Unpromoted -> Promoted gluster01.h )
+ * Promote p_stateful2:0 ( Stopped -> Promoted gluster01.h )
+ * Start p_stateful2:1 ( gluster02.h )
+
+Executing Cluster Transition:
+ * Pseudo action: ms_stateful_pre_notify_start_0
+ * Resource action: iptest delete on gluster02.h
+ * Resource action: ipsrc2 delete on gluster02.h
+ * Resource action: p_stateful1:0 notify on gluster01.h
+ * Resource action: p_stateful1:1 notify on gluster02.h
+ * Pseudo action: ms_stateful_confirmed-pre_notify_start_0
+ * Pseudo action: ms_stateful_start_0
+ * Pseudo action: g_stateful:0_start_0
+ * Resource action: p_stateful2:0 start on gluster01.h
+ * Pseudo action: g_stateful:1_start_0
+ * Resource action: p_stateful2:1 start on gluster02.h
+ * Pseudo action: g_stateful:0_running_0
+ * Pseudo action: g_stateful:1_running_0
+ * Pseudo action: ms_stateful_running_0
+ * Pseudo action: ms_stateful_post_notify_running_0
+ * Resource action: p_stateful1:0 notify on gluster01.h
+ * Resource action: p_stateful2:0 notify on gluster01.h
+ * Resource action: p_stateful1:1 notify on gluster02.h
+ * Resource action: p_stateful2:1 notify on gluster02.h
+ * Pseudo action: ms_stateful_confirmed-post_notify_running_0
+ * Pseudo action: ms_stateful_pre_notify_promote_0
+ * Resource action: p_stateful1:0 notify on gluster01.h
+ * Resource action: p_stateful2:0 notify on gluster01.h
+ * Resource action: p_stateful1:1 notify on gluster02.h
+ * Resource action: p_stateful2:1 notify on gluster02.h
+ * Pseudo action: ms_stateful_confirmed-pre_notify_promote_0
+ * Pseudo action: ms_stateful_promote_0
+ * Pseudo action: g_stateful:0_promote_0
+ * Resource action: p_stateful1:0 promote on gluster01.h
+ * Resource action: p_stateful2:0 promote on gluster01.h
+ * Pseudo action: g_stateful:0_promoted_0
+ * Pseudo action: ms_stateful_promoted_0
+ * Pseudo action: ms_stateful_post_notify_promoted_0
+ * Resource action: p_stateful1:0 notify on gluster01.h
+ * Resource action: p_stateful2:0 notify on gluster01.h
+ * Resource action: p_stateful1:1 notify on gluster02.h
+ * Resource action: p_stateful2:1 notify on gluster02.h
+ * Pseudo action: ms_stateful_confirmed-post_notify_promoted_0
+ * Resource action: p_stateful1:1 monitor=10000 on gluster02.h
+ * Resource action: p_stateful2:1 monitor=10000 on gluster02.h
+
+Revised Cluster Status:
+ * Node List:
+ * Node gluster03.h: standby
+ * Online: [ gluster01.h gluster02.h ]
+ * OFFLINE: [ gluster04.h ]
+
+ * Full List of Resources:
+ * Clone Set: ms_stateful [g_stateful] (promotable):
+ * Promoted: [ gluster01.h ]
+ * Unpromoted: [ gluster02.h ]
+ * Clone Set: c_dummy [p_dummy1]:
+ * Started: [ gluster01.h gluster02.h ]
diff --git a/cts/scheduler/summary/bug-5069-op-disabled.summary b/cts/scheduler/summary/bug-5069-op-disabled.summary
new file mode 100644
index 0000000..f77b9cc
--- /dev/null
+++ b/cts/scheduler/summary/bug-5069-op-disabled.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder2 ]
+ * OFFLINE: [ fc16-builder fc16-builder3 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Started fc16-builder2 (failure ignored)
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Cluster action: clear_failcount for A on fc16-builder2
+ * Resource action: A cancel=10000 on fc16-builder2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder2 ]
+ * OFFLINE: [ fc16-builder fc16-builder3 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Started fc16-builder2 (failure ignored)
diff --git a/cts/scheduler/summary/bug-5069-op-enabled.summary b/cts/scheduler/summary/bug-5069-op-enabled.summary
new file mode 100644
index 0000000..ec1dde3
--- /dev/null
+++ b/cts/scheduler/summary/bug-5069-op-enabled.summary
@@ -0,0 +1,19 @@
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder2 ]
+ * OFFLINE: [ fc16-builder fc16-builder3 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Started fc16-builder2 (failure ignored)
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder2 ]
+ * OFFLINE: [ fc16-builder fc16-builder3 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Started fc16-builder2 (failure ignored)
diff --git a/cts/scheduler/summary/bug-5140-require-all-false.summary b/cts/scheduler/summary/bug-5140-require-all-false.summary
new file mode 100644
index 0000000..a56fe6d
--- /dev/null
+++ b/cts/scheduler/summary/bug-5140-require-all-false.summary
@@ -0,0 +1,83 @@
+4 of 35 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Node hex-1: standby
+ * Node hex-2: standby (with active resources)
+ * Node hex-3: OFFLINE (standby)
+
+ * Full List of Resources:
+ * fencing (stonith:external/sbd): Stopped
+ * Clone Set: baseclone [basegrp]:
+ * Resource Group: basegrp:0:
+ * dlm (ocf:pacemaker:controld): Started hex-2
+ * clvmd (ocf:lvm2:clvmd): Started hex-2
+ * o2cb (ocf:ocfs2:o2cb): Started hex-2
+ * vg1 (ocf:heartbeat:LVM): Stopped
+ * fs-ocfs-1 (ocf:heartbeat:Filesystem): Stopped
+ * Stopped: [ hex-1 hex-3 ]
+ * fs-xfs-1 (ocf:heartbeat:Filesystem): Stopped
+ * Clone Set: fs2 [fs-ocfs-2]:
+ * Stopped: [ hex-1 hex-2 hex-3 ]
+ * Clone Set: ms-r0 [drbd-r0] (promotable, disabled):
+ * Stopped (disabled): [ hex-1 hex-2 hex-3 ]
+ * Clone Set: ms-r1 [drbd-r1] (promotable, disabled):
+ * Stopped (disabled): [ hex-1 hex-2 hex-3 ]
+ * Resource Group: md0-group:
+ * md0 (ocf:heartbeat:Raid1): Stopped
+ * vg-md0 (ocf:heartbeat:LVM): Stopped
+ * fs-md0 (ocf:heartbeat:Filesystem): Stopped
+ * dummy1 (ocf:heartbeat:Delay): Stopped
+ * dummy3 (ocf:heartbeat:Delay): Stopped
+ * dummy4 (ocf:heartbeat:Delay): Stopped
+ * dummy5 (ocf:heartbeat:Delay): Stopped
+ * dummy6 (ocf:heartbeat:Delay): Stopped
+ * Resource Group: r0-group:
+ * fs-r0 (ocf:heartbeat:Filesystem): Stopped
+ * dummy2 (ocf:heartbeat:Delay): Stopped
+ * cluster-md0 (ocf:heartbeat:Raid1): Stopped
+
+Transition Summary:
+ * Stop dlm:0 ( hex-2 ) due to node availability
+ * Stop clvmd:0 ( hex-2 ) due to node availability
+ * Stop o2cb:0 ( hex-2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: baseclone_stop_0
+ * Pseudo action: basegrp:0_stop_0
+ * Resource action: o2cb stop on hex-2
+ * Resource action: clvmd stop on hex-2
+ * Resource action: dlm stop on hex-2
+ * Pseudo action: basegrp:0_stopped_0
+ * Pseudo action: baseclone_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Node hex-1: standby
+ * Node hex-2: standby
+ * Node hex-3: OFFLINE (standby)
+
+ * Full List of Resources:
+ * fencing (stonith:external/sbd): Stopped
+ * Clone Set: baseclone [basegrp]:
+ * Stopped: [ hex-1 hex-2 hex-3 ]
+ * fs-xfs-1 (ocf:heartbeat:Filesystem): Stopped
+ * Clone Set: fs2 [fs-ocfs-2]:
+ * Stopped: [ hex-1 hex-2 hex-3 ]
+ * Clone Set: ms-r0 [drbd-r0] (promotable, disabled):
+ * Stopped (disabled): [ hex-1 hex-2 hex-3 ]
+ * Clone Set: ms-r1 [drbd-r1] (promotable, disabled):
+ * Stopped (disabled): [ hex-1 hex-2 hex-3 ]
+ * Resource Group: md0-group:
+ * md0 (ocf:heartbeat:Raid1): Stopped
+ * vg-md0 (ocf:heartbeat:LVM): Stopped
+ * fs-md0 (ocf:heartbeat:Filesystem): Stopped
+ * dummy1 (ocf:heartbeat:Delay): Stopped
+ * dummy3 (ocf:heartbeat:Delay): Stopped
+ * dummy4 (ocf:heartbeat:Delay): Stopped
+ * dummy5 (ocf:heartbeat:Delay): Stopped
+ * dummy6 (ocf:heartbeat:Delay): Stopped
+ * Resource Group: r0-group:
+ * fs-r0 (ocf:heartbeat:Filesystem): Stopped
+ * dummy2 (ocf:heartbeat:Delay): Stopped
+ * cluster-md0 (ocf:heartbeat:Raid1): Stopped
diff --git a/cts/scheduler/summary/bug-5143-ms-shuffle.summary b/cts/scheduler/summary/bug-5143-ms-shuffle.summary
new file mode 100644
index 0000000..18f2566
--- /dev/null
+++ b/cts/scheduler/summary/bug-5143-ms-shuffle.summary
@@ -0,0 +1,78 @@
+1 of 34 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ hex-1 hex-2 hex-3 ]
+
+ * Full List of Resources:
+ * fencing (stonith:external/sbd): Started hex-1
+ * Clone Set: baseclone [basegrp]:
+ * Started: [ hex-1 hex-2 hex-3 ]
+ * fs-xfs-1 (ocf:heartbeat:Filesystem): Started hex-2
+ * Clone Set: fs2 [fs-ocfs-2]:
+ * Started: [ hex-1 hex-2 hex-3 ]
+ * Clone Set: ms-r0 [drbd-r0] (promotable):
+ * Promoted: [ hex-1 ]
+ * Unpromoted: [ hex-2 ]
+ * Clone Set: ms-r1 [drbd-r1] (promotable):
+ * Unpromoted: [ hex-2 hex-3 ]
+ * Resource Group: md0-group:
+ * md0 (ocf:heartbeat:Raid1): Started hex-3
+ * vg-md0 (ocf:heartbeat:LVM): Started hex-3
+ * fs-md0 (ocf:heartbeat:Filesystem): Started hex-3
+ * dummy1 (ocf:heartbeat:Delay): Started hex-3
+ * dummy3 (ocf:heartbeat:Delay): Started hex-1
+ * dummy4 (ocf:heartbeat:Delay): Started hex-2
+ * dummy5 (ocf:heartbeat:Delay): Started hex-1
+ * dummy6 (ocf:heartbeat:Delay): Started hex-2
+ * Resource Group: r0-group:
+ * fs-r0 (ocf:heartbeat:Filesystem): Stopped (disabled)
+ * dummy2 (ocf:heartbeat:Delay): Stopped
+
+Transition Summary:
+ * Promote drbd-r1:1 ( Unpromoted -> Promoted hex-3 )
+
+Executing Cluster Transition:
+ * Pseudo action: ms-r1_pre_notify_promote_0
+ * Resource action: drbd-r1 notify on hex-2
+ * Resource action: drbd-r1 notify on hex-3
+ * Pseudo action: ms-r1_confirmed-pre_notify_promote_0
+ * Pseudo action: ms-r1_promote_0
+ * Resource action: drbd-r1 promote on hex-3
+ * Pseudo action: ms-r1_promoted_0
+ * Pseudo action: ms-r1_post_notify_promoted_0
+ * Resource action: drbd-r1 notify on hex-2
+ * Resource action: drbd-r1 notify on hex-3
+ * Pseudo action: ms-r1_confirmed-post_notify_promoted_0
+ * Resource action: drbd-r1 monitor=29000 on hex-2
+ * Resource action: drbd-r1 monitor=31000 on hex-3
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hex-1 hex-2 hex-3 ]
+
+ * Full List of Resources:
+ * fencing (stonith:external/sbd): Started hex-1
+ * Clone Set: baseclone [basegrp]:
+ * Started: [ hex-1 hex-2 hex-3 ]
+ * fs-xfs-1 (ocf:heartbeat:Filesystem): Started hex-2
+ * Clone Set: fs2 [fs-ocfs-2]:
+ * Started: [ hex-1 hex-2 hex-3 ]
+ * Clone Set: ms-r0 [drbd-r0] (promotable):
+ * Promoted: [ hex-1 ]
+ * Unpromoted: [ hex-2 ]
+ * Clone Set: ms-r1 [drbd-r1] (promotable):
+ * Promoted: [ hex-3 ]
+ * Unpromoted: [ hex-2 ]
+ * Resource Group: md0-group:
+ * md0 (ocf:heartbeat:Raid1): Started hex-3
+ * vg-md0 (ocf:heartbeat:LVM): Started hex-3
+ * fs-md0 (ocf:heartbeat:Filesystem): Started hex-3
+ * dummy1 (ocf:heartbeat:Delay): Started hex-3
+ * dummy3 (ocf:heartbeat:Delay): Started hex-1
+ * dummy4 (ocf:heartbeat:Delay): Started hex-2
+ * dummy5 (ocf:heartbeat:Delay): Started hex-1
+ * dummy6 (ocf:heartbeat:Delay): Started hex-2
+ * Resource Group: r0-group:
+ * fs-r0 (ocf:heartbeat:Filesystem): Stopped (disabled)
+ * dummy2 (ocf:heartbeat:Delay): Stopped
diff --git a/cts/scheduler/summary/bug-5186-partial-migrate.summary b/cts/scheduler/summary/bug-5186-partial-migrate.summary
new file mode 100644
index 0000000..daa64e3
--- /dev/null
+++ b/cts/scheduler/summary/bug-5186-partial-migrate.summary
@@ -0,0 +1,91 @@
+Current cluster status:
+ * Node List:
+ * Node bl460g1n7: UNCLEAN (offline)
+ * Online: [ bl460g1n6 bl460g1n8 ]
+
+ * Full List of Resources:
+ * prmDummy (ocf:pacemaker:Dummy): Started bl460g1n7 (UNCLEAN)
+ * prmVM2 (ocf:heartbeat:VirtualDomain): Migrating bl460g1n7 (UNCLEAN)
+ * Resource Group: grpStonith6:
+ * prmStonith6-1 (stonith:external/stonith-helper): Started bl460g1n8
+ * prmStonith6-2 (stonith:external/ipmi): Started bl460g1n8
+ * Resource Group: grpStonith7:
+ * prmStonith7-1 (stonith:external/stonith-helper): Started bl460g1n6
+ * prmStonith7-2 (stonith:external/ipmi): Started bl460g1n6
+ * Resource Group: grpStonith8:
+ * prmStonith8-1 (stonith:external/stonith-helper): Started bl460g1n7 (UNCLEAN)
+ * prmStonith8-2 (stonith:external/ipmi): Started bl460g1n7 (UNCLEAN)
+ * Clone Set: clnDiskd1 [prmDiskd1]:
+ * prmDiskd1 (ocf:pacemaker:diskd): Started bl460g1n7 (UNCLEAN)
+ * Started: [ bl460g1n6 bl460g1n8 ]
+ * Clone Set: clnDiskd2 [prmDiskd2]:
+ * prmDiskd2 (ocf:pacemaker:diskd): Started bl460g1n7 (UNCLEAN)
+ * Started: [ bl460g1n6 bl460g1n8 ]
+ * Clone Set: clnPing [prmPing]:
+ * prmPing (ocf:pacemaker:ping): Started bl460g1n7 (UNCLEAN)
+ * Started: [ bl460g1n6 bl460g1n8 ]
+
+Transition Summary:
+ * Fence (reboot) bl460g1n7 'prmDummy is thought to be active there'
+ * Move prmDummy ( bl460g1n7 -> bl460g1n6 )
+ * Move prmVM2 ( bl460g1n7 -> bl460g1n8 )
+ * Move prmStonith8-1 ( bl460g1n7 -> bl460g1n6 )
+ * Move prmStonith8-2 ( bl460g1n7 -> bl460g1n6 )
+ * Stop prmDiskd1:0 ( bl460g1n7 ) due to node availability
+ * Stop prmDiskd2:0 ( bl460g1n7 ) due to node availability
+ * Stop prmPing:0 ( bl460g1n7 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: prmVM2 stop on bl460g1n6
+ * Pseudo action: grpStonith8_stop_0
+ * Pseudo action: prmStonith8-2_stop_0
+ * Fencing bl460g1n7 (reboot)
+ * Pseudo action: prmDummy_stop_0
+ * Pseudo action: prmVM2_stop_0
+ * Pseudo action: prmStonith8-1_stop_0
+ * Pseudo action: clnDiskd1_stop_0
+ * Pseudo action: clnDiskd2_stop_0
+ * Pseudo action: clnPing_stop_0
+ * Resource action: prmDummy start on bl460g1n6
+ * Resource action: prmVM2 start on bl460g1n8
+ * Pseudo action: grpStonith8_stopped_0
+ * Pseudo action: grpStonith8_start_0
+ * Resource action: prmStonith8-1 start on bl460g1n6
+ * Resource action: prmStonith8-2 start on bl460g1n6
+ * Pseudo action: prmDiskd1_stop_0
+ * Pseudo action: clnDiskd1_stopped_0
+ * Pseudo action: prmDiskd2_stop_0
+ * Pseudo action: clnDiskd2_stopped_0
+ * Pseudo action: prmPing_stop_0
+ * Pseudo action: clnPing_stopped_0
+ * Resource action: prmVM2 monitor=10000 on bl460g1n8
+ * Pseudo action: grpStonith8_running_0
+ * Resource action: prmStonith8-1 monitor=10000 on bl460g1n6
+ * Resource action: prmStonith8-2 monitor=3600000 on bl460g1n6
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ bl460g1n6 bl460g1n8 ]
+ * OFFLINE: [ bl460g1n7 ]
+
+ * Full List of Resources:
+ * prmDummy (ocf:pacemaker:Dummy): Started bl460g1n6
+ * prmVM2 (ocf:heartbeat:VirtualDomain): Started bl460g1n8
+ * Resource Group: grpStonith6:
+ * prmStonith6-1 (stonith:external/stonith-helper): Started bl460g1n8
+ * prmStonith6-2 (stonith:external/ipmi): Started bl460g1n8
+ * Resource Group: grpStonith7:
+ * prmStonith7-1 (stonith:external/stonith-helper): Started bl460g1n6
+ * prmStonith7-2 (stonith:external/ipmi): Started bl460g1n6
+ * Resource Group: grpStonith8:
+ * prmStonith8-1 (stonith:external/stonith-helper): Started bl460g1n6
+ * prmStonith8-2 (stonith:external/ipmi): Started bl460g1n6
+ * Clone Set: clnDiskd1 [prmDiskd1]:
+ * Started: [ bl460g1n6 bl460g1n8 ]
+ * Stopped: [ bl460g1n7 ]
+ * Clone Set: clnDiskd2 [prmDiskd2]:
+ * Started: [ bl460g1n6 bl460g1n8 ]
+ * Stopped: [ bl460g1n7 ]
+ * Clone Set: clnPing [prmPing]:
+ * Started: [ bl460g1n6 bl460g1n8 ]
+ * Stopped: [ bl460g1n7 ]
diff --git a/cts/scheduler/summary/bug-cl-5168.summary b/cts/scheduler/summary/bug-cl-5168.summary
new file mode 100644
index 0000000..11064b0
--- /dev/null
+++ b/cts/scheduler/summary/bug-cl-5168.summary
@@ -0,0 +1,76 @@
+Current cluster status:
+ * Node List:
+ * Online: [ hex-1 hex-2 hex-3 ]
+
+ * Full List of Resources:
+ * fencing (stonith:external/sbd): Started hex-1
+ * Clone Set: baseclone [basegrp]:
+ * Started: [ hex-1 hex-2 hex-3 ]
+ * fs-xfs-1 (ocf:heartbeat:Filesystem): Started hex-2
+ * Clone Set: fs2 [fs-ocfs-2]:
+ * Started: [ hex-1 hex-2 hex-3 ]
+ * Clone Set: ms-r0 [drbd-r0] (promotable):
+ * Promoted: [ hex-1 ]
+ * Unpromoted: [ hex-2 ]
+ * Resource Group: md0-group:
+ * md0 (ocf:heartbeat:Raid1): Started hex-3
+ * vg-md0 (ocf:heartbeat:LVM): Started hex-3
+ * fs-md0 (ocf:heartbeat:Filesystem): Started hex-3
+ * dummy1 (ocf:heartbeat:Delay): Started hex-3
+ * dummy3 (ocf:heartbeat:Delay): Started hex-1
+ * dummy4 (ocf:heartbeat:Delay): Started hex-2
+ * dummy5 (ocf:heartbeat:Delay): Started hex-1
+ * dummy6 (ocf:heartbeat:Delay): Started hex-2
+ * Resource Group: r0-group:
+ * fs-r0 (ocf:heartbeat:Filesystem): Started hex-1
+ * dummy2 (ocf:heartbeat:Delay): Started hex-1
+ * Clone Set: ms-r1 [drbd-r1] (promotable):
+ * Unpromoted: [ hex-2 hex-3 ]
+
+Transition Summary:
+ * Promote drbd-r1:1 ( Unpromoted -> Promoted hex-3 )
+
+Executing Cluster Transition:
+ * Pseudo action: ms-r1_pre_notify_promote_0
+ * Resource action: drbd-r1 notify on hex-2
+ * Resource action: drbd-r1 notify on hex-3
+ * Pseudo action: ms-r1_confirmed-pre_notify_promote_0
+ * Pseudo action: ms-r1_promote_0
+ * Resource action: drbd-r1 promote on hex-3
+ * Pseudo action: ms-r1_promoted_0
+ * Pseudo action: ms-r1_post_notify_promoted_0
+ * Resource action: drbd-r1 notify on hex-2
+ * Resource action: drbd-r1 notify on hex-3
+ * Pseudo action: ms-r1_confirmed-post_notify_promoted_0
+ * Resource action: drbd-r1 monitor=29000 on hex-2
+ * Resource action: drbd-r1 monitor=31000 on hex-3
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hex-1 hex-2 hex-3 ]
+
+ * Full List of Resources:
+ * fencing (stonith:external/sbd): Started hex-1
+ * Clone Set: baseclone [basegrp]:
+ * Started: [ hex-1 hex-2 hex-3 ]
+ * fs-xfs-1 (ocf:heartbeat:Filesystem): Started hex-2
+ * Clone Set: fs2 [fs-ocfs-2]:
+ * Started: [ hex-1 hex-2 hex-3 ]
+ * Clone Set: ms-r0 [drbd-r0] (promotable):
+ * Promoted: [ hex-1 ]
+ * Unpromoted: [ hex-2 ]
+ * Resource Group: md0-group:
+ * md0 (ocf:heartbeat:Raid1): Started hex-3
+ * vg-md0 (ocf:heartbeat:LVM): Started hex-3
+ * fs-md0 (ocf:heartbeat:Filesystem): Started hex-3
+ * dummy1 (ocf:heartbeat:Delay): Started hex-3
+ * dummy3 (ocf:heartbeat:Delay): Started hex-1
+ * dummy4 (ocf:heartbeat:Delay): Started hex-2
+ * dummy5 (ocf:heartbeat:Delay): Started hex-1
+ * dummy6 (ocf:heartbeat:Delay): Started hex-2
+ * Resource Group: r0-group:
+ * fs-r0 (ocf:heartbeat:Filesystem): Started hex-1
+ * dummy2 (ocf:heartbeat:Delay): Started hex-1
+ * Clone Set: ms-r1 [drbd-r1] (promotable):
+ * Promoted: [ hex-3 ]
+ * Unpromoted: [ hex-2 ]
diff --git a/cts/scheduler/summary/bug-cl-5170.summary b/cts/scheduler/summary/bug-cl-5170.summary
new file mode 100644
index 0000000..3129376
--- /dev/null
+++ b/cts/scheduler/summary/bug-cl-5170.summary
@@ -0,0 +1,37 @@
+0 of 4 resource instances DISABLED and 1 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Node TCS-1: OFFLINE (standby)
+ * Online: [ TCS-2 ]
+
+ * Full List of Resources:
+ * Resource Group: svc:
+ * ip_trf (ocf:heartbeat:IPaddr2): Started TCS-2
+ * ip_mgmt (ocf:heartbeat:IPaddr2): Started TCS-2
+ * Clone Set: cl_tomcat_nms [d_tomcat_nms]:
+ * d_tomcat_nms (ocf:ntc:tomcat): FAILED TCS-2 (blocked)
+ * Stopped: [ TCS-1 ]
+
+Transition Summary:
+ * Stop ip_trf ( TCS-2 ) due to node availability
+ * Stop ip_mgmt ( TCS-2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: svc_stop_0
+ * Resource action: ip_mgmt stop on TCS-2
+ * Resource action: ip_trf stop on TCS-2
+ * Pseudo action: svc_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Node TCS-1: OFFLINE (standby)
+ * Online: [ TCS-2 ]
+
+ * Full List of Resources:
+ * Resource Group: svc:
+ * ip_trf (ocf:heartbeat:IPaddr2): Stopped
+ * ip_mgmt (ocf:heartbeat:IPaddr2): Stopped
+ * Clone Set: cl_tomcat_nms [d_tomcat_nms]:
+ * d_tomcat_nms (ocf:ntc:tomcat): FAILED TCS-2 (blocked)
+ * Stopped: [ TCS-1 ]
diff --git a/cts/scheduler/summary/bug-cl-5212.summary b/cts/scheduler/summary/bug-cl-5212.summary
new file mode 100644
index 0000000..7cbe975
--- /dev/null
+++ b/cts/scheduler/summary/bug-cl-5212.summary
@@ -0,0 +1,69 @@
+Current cluster status:
+ * Node List:
+ * Node srv01: UNCLEAN (offline)
+ * Node srv02: UNCLEAN (offline)
+ * Online: [ srv03 ]
+
+ * Full List of Resources:
+ * Resource Group: grpStonith1:
+ * prmStonith1-1 (stonith:external/ssh): Started srv02 (UNCLEAN)
+ * Resource Group: grpStonith2:
+ * prmStonith2-1 (stonith:external/ssh): Started srv01 (UNCLEAN)
+ * Resource Group: grpStonith3:
+ * prmStonith3-1 (stonith:external/ssh): Started srv01 (UNCLEAN)
+ * Clone Set: msPostgresql [pgsql] (promotable):
+ * pgsql (ocf:pacemaker:Stateful): Unpromoted srv02 (UNCLEAN)
+ * pgsql (ocf:pacemaker:Stateful): Promoted srv01 (UNCLEAN)
+ * Unpromoted: [ srv03 ]
+ * Clone Set: clnPingd [prmPingd]:
+ * prmPingd (ocf:pacemaker:ping): Started srv02 (UNCLEAN)
+ * prmPingd (ocf:pacemaker:ping): Started srv01 (UNCLEAN)
+ * Started: [ srv03 ]
+
+Transition Summary:
+ * Stop prmStonith1-1 ( srv02 ) blocked
+ * Stop prmStonith2-1 ( srv01 ) blocked
+ * Stop prmStonith3-1 ( srv01 ) due to node availability (blocked)
+ * Stop pgsql:0 ( Unpromoted srv02 ) due to node availability (blocked)
+ * Stop pgsql:1 ( Promoted srv01 ) due to node availability (blocked)
+ * Stop prmPingd:0 ( srv02 ) due to node availability (blocked)
+ * Stop prmPingd:1 ( srv01 ) due to node availability (blocked)
+
+Executing Cluster Transition:
+ * Pseudo action: grpStonith1_stop_0
+ * Pseudo action: grpStonith1_start_0
+ * Pseudo action: grpStonith2_stop_0
+ * Pseudo action: grpStonith2_start_0
+ * Pseudo action: grpStonith3_stop_0
+ * Pseudo action: msPostgresql_pre_notify_stop_0
+ * Pseudo action: clnPingd_stop_0
+ * Resource action: pgsql notify on srv03
+ * Pseudo action: msPostgresql_confirmed-pre_notify_stop_0
+ * Pseudo action: msPostgresql_stop_0
+ * Pseudo action: clnPingd_stopped_0
+ * Pseudo action: msPostgresql_stopped_0
+ * Pseudo action: msPostgresql_post_notify_stopped_0
+ * Resource action: pgsql notify on srv03
+ * Pseudo action: msPostgresql_confirmed-post_notify_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Node srv01: UNCLEAN (offline)
+ * Node srv02: UNCLEAN (offline)
+ * Online: [ srv03 ]
+
+ * Full List of Resources:
+ * Resource Group: grpStonith1:
+ * prmStonith1-1 (stonith:external/ssh): Started srv02 (UNCLEAN)
+ * Resource Group: grpStonith2:
+ * prmStonith2-1 (stonith:external/ssh): Started srv01 (UNCLEAN)
+ * Resource Group: grpStonith3:
+ * prmStonith3-1 (stonith:external/ssh): Started srv01 (UNCLEAN)
+ * Clone Set: msPostgresql [pgsql] (promotable):
+ * pgsql (ocf:pacemaker:Stateful): Unpromoted srv02 (UNCLEAN)
+ * pgsql (ocf:pacemaker:Stateful): Promoted srv01 (UNCLEAN)
+ * Unpromoted: [ srv03 ]
+ * Clone Set: clnPingd [prmPingd]:
+ * prmPingd (ocf:pacemaker:ping): Started srv02 (UNCLEAN)
+ * prmPingd (ocf:pacemaker:ping): Started srv01 (UNCLEAN)
+ * Started: [ srv03 ]
diff --git a/cts/scheduler/summary/bug-cl-5213.summary b/cts/scheduler/summary/bug-cl-5213.summary
new file mode 100644
index 0000000..047f75d
--- /dev/null
+++ b/cts/scheduler/summary/bug-cl-5213.summary
@@ -0,0 +1,22 @@
+Current cluster status:
+ * Node List:
+ * Online: [ srv01 srv02 ]
+
+ * Full List of Resources:
+ * A-master (ocf:heartbeat:Dummy): Started srv02
+ * Clone Set: msPostgresql [pgsql] (promotable):
+ * Unpromoted: [ srv01 srv02 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: pgsql monitor=10000 on srv01
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ srv01 srv02 ]
+
+ * Full List of Resources:
+ * A-master (ocf:heartbeat:Dummy): Started srv02
+ * Clone Set: msPostgresql [pgsql] (promotable):
+ * Unpromoted: [ srv01 srv02 ]
diff --git a/cts/scheduler/summary/bug-cl-5219.summary b/cts/scheduler/summary/bug-cl-5219.summary
new file mode 100644
index 0000000..c5935e1
--- /dev/null
+++ b/cts/scheduler/summary/bug-cl-5219.summary
@@ -0,0 +1,43 @@
+1 of 9 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ ha1.test.anchor.net.au ha2.test.anchor.net.au ]
+
+ * Full List of Resources:
+ * child1-service (ocf:pacemaker:Dummy): Started ha2.test.anchor.net.au (disabled)
+ * child2-service (ocf:pacemaker:Dummy): Started ha2.test.anchor.net.au
+ * parent-service (ocf:pacemaker:Dummy): Started ha2.test.anchor.net.au
+ * Clone Set: child1 [stateful-child1] (promotable):
+ * Promoted: [ ha2.test.anchor.net.au ]
+ * Unpromoted: [ ha1.test.anchor.net.au ]
+ * Clone Set: child2 [stateful-child2] (promotable):
+ * Promoted: [ ha2.test.anchor.net.au ]
+ * Unpromoted: [ ha1.test.anchor.net.au ]
+ * Clone Set: parent [stateful-parent] (promotable):
+ * Promoted: [ ha2.test.anchor.net.au ]
+ * Unpromoted: [ ha1.test.anchor.net.au ]
+
+Transition Summary:
+ * Stop child1-service ( ha2.test.anchor.net.au ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: child1-service stop on ha2.test.anchor.net.au
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ ha1.test.anchor.net.au ha2.test.anchor.net.au ]
+
+ * Full List of Resources:
+ * child1-service (ocf:pacemaker:Dummy): Stopped (disabled)
+ * child2-service (ocf:pacemaker:Dummy): Started ha2.test.anchor.net.au
+ * parent-service (ocf:pacemaker:Dummy): Started ha2.test.anchor.net.au
+ * Clone Set: child1 [stateful-child1] (promotable):
+ * Promoted: [ ha2.test.anchor.net.au ]
+ * Unpromoted: [ ha1.test.anchor.net.au ]
+ * Clone Set: child2 [stateful-child2] (promotable):
+ * Promoted: [ ha2.test.anchor.net.au ]
+ * Unpromoted: [ ha1.test.anchor.net.au ]
+ * Clone Set: parent [stateful-parent] (promotable):
+ * Promoted: [ ha2.test.anchor.net.au ]
+ * Unpromoted: [ ha1.test.anchor.net.au ]
diff --git a/cts/scheduler/summary/bug-cl-5247.summary b/cts/scheduler/summary/bug-cl-5247.summary
new file mode 100644
index 0000000..b18bdd8
--- /dev/null
+++ b/cts/scheduler/summary/bug-cl-5247.summary
@@ -0,0 +1,87 @@
+Using the original execution date of: 2015-08-12 02:53:40Z
+Current cluster status:
+ * Node List:
+ * Online: [ bl460g8n3 bl460g8n4 ]
+ * GuestOnline: [ pgsr01 ]
+
+ * Full List of Resources:
+ * prmDB1 (ocf:heartbeat:VirtualDomain): Started bl460g8n3
+ * prmDB2 (ocf:heartbeat:VirtualDomain): FAILED bl460g8n4
+ * Resource Group: grpStonith1:
+ * prmStonith1-2 (stonith:external/ipmi): Started bl460g8n4
+ * Resource Group: grpStonith2:
+ * prmStonith2-2 (stonith:external/ipmi): Started bl460g8n3
+ * Resource Group: master-group:
+ * vip-master (ocf:heartbeat:Dummy): FAILED pgsr02
+ * vip-rep (ocf:heartbeat:Dummy): FAILED pgsr02
+ * Clone Set: msPostgresql [pgsql] (promotable):
+ * Promoted: [ pgsr01 ]
+ * Stopped: [ bl460g8n3 bl460g8n4 ]
+
+Transition Summary:
+ * Fence (off) pgsr02 (resource: prmDB2) 'guest is unclean'
+ * Stop prmDB2 ( bl460g8n4 ) due to node availability
+ * Recover vip-master ( pgsr02 -> pgsr01 )
+ * Recover vip-rep ( pgsr02 -> pgsr01 )
+ * Stop pgsql:0 ( Promoted pgsr02 ) due to node availability
+ * Stop pgsr02 ( bl460g8n4 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: vip-master monitor on pgsr01
+ * Resource action: vip-rep monitor on pgsr01
+ * Pseudo action: msPostgresql_pre_notify_demote_0
+ * Resource action: pgsr01 monitor on bl460g8n4
+ * Resource action: pgsr02 stop on bl460g8n4
+ * Resource action: pgsr02 monitor on bl460g8n3
+ * Resource action: prmDB2 stop on bl460g8n4
+ * Resource action: pgsql notify on pgsr01
+ * Pseudo action: msPostgresql_confirmed-pre_notify_demote_0
+ * Pseudo action: msPostgresql_demote_0
+ * Pseudo action: stonith-pgsr02-off on pgsr02
+ * Pseudo action: pgsql_post_notify_stop_0
+ * Pseudo action: pgsql_demote_0
+ * Pseudo action: msPostgresql_demoted_0
+ * Pseudo action: msPostgresql_post_notify_demoted_0
+ * Resource action: pgsql notify on pgsr01
+ * Pseudo action: msPostgresql_confirmed-post_notify_demoted_0
+ * Pseudo action: msPostgresql_pre_notify_stop_0
+ * Pseudo action: master-group_stop_0
+ * Pseudo action: vip-rep_stop_0
+ * Resource action: pgsql notify on pgsr01
+ * Pseudo action: msPostgresql_confirmed-pre_notify_stop_0
+ * Pseudo action: msPostgresql_stop_0
+ * Pseudo action: vip-master_stop_0
+ * Pseudo action: pgsql_stop_0
+ * Pseudo action: msPostgresql_stopped_0
+ * Pseudo action: master-group_stopped_0
+ * Pseudo action: master-group_start_0
+ * Resource action: vip-master start on pgsr01
+ * Resource action: vip-rep start on pgsr01
+ * Pseudo action: msPostgresql_post_notify_stopped_0
+ * Pseudo action: master-group_running_0
+ * Resource action: vip-master monitor=10000 on pgsr01
+ * Resource action: vip-rep monitor=10000 on pgsr01
+ * Resource action: pgsql notify on pgsr01
+ * Pseudo action: msPostgresql_confirmed-post_notify_stopped_0
+ * Pseudo action: pgsql_notified_0
+ * Resource action: pgsql monitor=9000 on pgsr01
+Using the original execution date of: 2015-08-12 02:53:40Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ bl460g8n3 bl460g8n4 ]
+ * GuestOnline: [ pgsr01 ]
+
+ * Full List of Resources:
+ * prmDB1 (ocf:heartbeat:VirtualDomain): Started bl460g8n3
+ * prmDB2 (ocf:heartbeat:VirtualDomain): FAILED
+ * Resource Group: grpStonith1:
+ * prmStonith1-2 (stonith:external/ipmi): Started bl460g8n4
+ * Resource Group: grpStonith2:
+ * prmStonith2-2 (stonith:external/ipmi): Started bl460g8n3
+ * Resource Group: master-group:
+ * vip-master (ocf:heartbeat:Dummy): FAILED [ pgsr01 pgsr02 ]
+ * vip-rep (ocf:heartbeat:Dummy): FAILED [ pgsr01 pgsr02 ]
+ * Clone Set: msPostgresql [pgsql] (promotable):
+ * Promoted: [ pgsr01 ]
+ * Stopped: [ bl460g8n3 bl460g8n4 ]
diff --git a/cts/scheduler/summary/bug-lf-1852.summary b/cts/scheduler/summary/bug-lf-1852.summary
new file mode 100644
index 0000000..26c73e1
--- /dev/null
+++ b/cts/scheduler/summary/bug-lf-1852.summary
@@ -0,0 +1,40 @@
+Current cluster status:
+ * Node List:
+ * Online: [ mysql-01 mysql-02 ]
+
+ * Full List of Resources:
+ * Clone Set: ms-drbd0 [drbd0] (promotable):
+ * Promoted: [ mysql-02 ]
+ * Stopped: [ mysql-01 ]
+ * Resource Group: fs_mysql_ip:
+ * fs0 (ocf:heartbeat:Filesystem): Started mysql-02
+ * mysqlid (lsb:mysql): Started mysql-02
+ * ip_resource (ocf:heartbeat:IPaddr2): Started mysql-02
+
+Transition Summary:
+ * Start drbd0:1 ( mysql-01 )
+
+Executing Cluster Transition:
+ * Pseudo action: ms-drbd0_pre_notify_start_0
+ * Resource action: drbd0:0 notify on mysql-02
+ * Pseudo action: ms-drbd0_confirmed-pre_notify_start_0
+ * Pseudo action: ms-drbd0_start_0
+ * Resource action: drbd0:1 start on mysql-01
+ * Pseudo action: ms-drbd0_running_0
+ * Pseudo action: ms-drbd0_post_notify_running_0
+ * Resource action: drbd0:0 notify on mysql-02
+ * Resource action: drbd0:1 notify on mysql-01
+ * Pseudo action: ms-drbd0_confirmed-post_notify_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ mysql-01 mysql-02 ]
+
+ * Full List of Resources:
+ * Clone Set: ms-drbd0 [drbd0] (promotable):
+ * Promoted: [ mysql-02 ]
+ * Unpromoted: [ mysql-01 ]
+ * Resource Group: fs_mysql_ip:
+ * fs0 (ocf:heartbeat:Filesystem): Started mysql-02
+ * mysqlid (lsb:mysql): Started mysql-02
+ * ip_resource (ocf:heartbeat:IPaddr2): Started mysql-02
diff --git a/cts/scheduler/summary/bug-lf-1920.summary b/cts/scheduler/summary/bug-lf-1920.summary
new file mode 100644
index 0000000..e8dd985
--- /dev/null
+++ b/cts/scheduler/summary/bug-lf-1920.summary
@@ -0,0 +1,18 @@
+Current cluster status:
+ * Node List:
+ * Online: [ dktest1sles10 dktest2sles10 ]
+
+ * Full List of Resources:
+ * mysql-bin (ocf:heartbeat:mysql): Started dktest2sles10
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: mysql-bin monitor=30000 on dktest2sles10
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ dktest1sles10 dktest2sles10 ]
+
+ * Full List of Resources:
+ * mysql-bin (ocf:heartbeat:mysql): Started dktest2sles10
diff --git a/cts/scheduler/summary/bug-lf-2106.summary b/cts/scheduler/summary/bug-lf-2106.summary
new file mode 100644
index 0000000..391b5fb
--- /dev/null
+++ b/cts/scheduler/summary/bug-lf-2106.summary
@@ -0,0 +1,91 @@
+Current cluster status:
+ * Node List:
+ * Online: [ cl-virt-1 cl-virt-2 ]
+
+ * Full List of Resources:
+ * apcstonith (stonith:apcmastersnmp): Started cl-virt-1
+ * Clone Set: pingdclone [pingd]:
+ * Started: [ cl-virt-1 cl-virt-2 ]
+ * Resource Group: ssh:
+ * ssh-ip1 (ocf:heartbeat:IPaddr2): Started cl-virt-2
+ * ssh-ip2 (ocf:heartbeat:IPaddr2): Started cl-virt-2
+ * ssh-bin (ocf:dk:opensshd): Started cl-virt-2
+ * itwiki (ocf:heartbeat:VirtualDomain): Started cl-virt-2
+ * Clone Set: ms-itwiki [drbd-itwiki] (promotable):
+ * Promoted: [ cl-virt-2 ]
+ * Unpromoted: [ cl-virt-1 ]
+ * bugtrack (ocf:heartbeat:VirtualDomain): Started cl-virt-2
+ * Clone Set: ms-bugtrack [drbd-bugtrack] (promotable):
+ * Promoted: [ cl-virt-2 ]
+ * Unpromoted: [ cl-virt-1 ]
+ * servsyslog (ocf:heartbeat:VirtualDomain): Started cl-virt-2
+ * Clone Set: ms-servsyslog [drbd-servsyslog] (promotable):
+ * Promoted: [ cl-virt-2 ]
+ * Unpromoted: [ cl-virt-1 ]
+ * smsprod2 (ocf:heartbeat:VirtualDomain): Started cl-virt-2
+ * Clone Set: ms-smsprod2 [drbd-smsprod2] (promotable):
+ * Promoted: [ cl-virt-2 ]
+ * Unpromoted: [ cl-virt-1 ]
+ * medomus-cvs (ocf:heartbeat:VirtualDomain): Started cl-virt-2
+ * Clone Set: ms-medomus-cvs [drbd-medomus-cvs] (promotable):
+ * Promoted: [ cl-virt-2 ]
+ * Unpromoted: [ cl-virt-1 ]
+ * infotos (ocf:heartbeat:VirtualDomain): Started cl-virt-2
+ * Clone Set: ms-infotos [drbd-infotos] (promotable):
+ * Promoted: [ cl-virt-2 ]
+ * Unpromoted: [ cl-virt-1 ]
+
+Transition Summary:
+ * Restart pingd:0 ( cl-virt-1 ) due to resource definition change
+ * Restart pingd:1 ( cl-virt-2 ) due to resource definition change
+
+Executing Cluster Transition:
+ * Cluster action: clear_failcount for pingd on cl-virt-1
+ * Cluster action: clear_failcount for pingd on cl-virt-2
+ * Pseudo action: pingdclone_stop_0
+ * Resource action: pingd:0 stop on cl-virt-1
+ * Resource action: pingd:0 stop on cl-virt-2
+ * Pseudo action: pingdclone_stopped_0
+ * Pseudo action: pingdclone_start_0
+ * Resource action: pingd:0 start on cl-virt-1
+ * Resource action: pingd:0 monitor=30000 on cl-virt-1
+ * Resource action: pingd:0 start on cl-virt-2
+ * Resource action: pingd:0 monitor=30000 on cl-virt-2
+ * Pseudo action: pingdclone_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ cl-virt-1 cl-virt-2 ]
+
+ * Full List of Resources:
+ * apcstonith (stonith:apcmastersnmp): Started cl-virt-1
+ * Clone Set: pingdclone [pingd]:
+ * Started: [ cl-virt-1 cl-virt-2 ]
+ * Resource Group: ssh:
+ * ssh-ip1 (ocf:heartbeat:IPaddr2): Started cl-virt-2
+ * ssh-ip2 (ocf:heartbeat:IPaddr2): Started cl-virt-2
+ * ssh-bin (ocf:dk:opensshd): Started cl-virt-2
+ * itwiki (ocf:heartbeat:VirtualDomain): Started cl-virt-2
+ * Clone Set: ms-itwiki [drbd-itwiki] (promotable):
+ * Promoted: [ cl-virt-2 ]
+ * Unpromoted: [ cl-virt-1 ]
+ * bugtrack (ocf:heartbeat:VirtualDomain): Started cl-virt-2
+ * Clone Set: ms-bugtrack [drbd-bugtrack] (promotable):
+ * Promoted: [ cl-virt-2 ]
+ * Unpromoted: [ cl-virt-1 ]
+ * servsyslog (ocf:heartbeat:VirtualDomain): Started cl-virt-2
+ * Clone Set: ms-servsyslog [drbd-servsyslog] (promotable):
+ * Promoted: [ cl-virt-2 ]
+ * Unpromoted: [ cl-virt-1 ]
+ * smsprod2 (ocf:heartbeat:VirtualDomain): Started cl-virt-2
+ * Clone Set: ms-smsprod2 [drbd-smsprod2] (promotable):
+ * Promoted: [ cl-virt-2 ]
+ * Unpromoted: [ cl-virt-1 ]
+ * medomus-cvs (ocf:heartbeat:VirtualDomain): Started cl-virt-2
+ * Clone Set: ms-medomus-cvs [drbd-medomus-cvs] (promotable):
+ * Promoted: [ cl-virt-2 ]
+ * Unpromoted: [ cl-virt-1 ]
+ * infotos (ocf:heartbeat:VirtualDomain): Started cl-virt-2
+ * Clone Set: ms-infotos [drbd-infotos] (promotable):
+ * Promoted: [ cl-virt-2 ]
+ * Unpromoted: [ cl-virt-1 ]
diff --git a/cts/scheduler/summary/bug-lf-2153.summary b/cts/scheduler/summary/bug-lf-2153.summary
new file mode 100644
index 0000000..631e73a
--- /dev/null
+++ b/cts/scheduler/summary/bug-lf-2153.summary
@@ -0,0 +1,59 @@
+Current cluster status:
+ * Node List:
+ * Node bob: standby (with active resources)
+ * Online: [ alice ]
+
+ * Full List of Resources:
+ * Clone Set: ms_drbd_iscsivg01 [res_drbd_iscsivg01] (promotable):
+ * Promoted: [ alice ]
+ * Unpromoted: [ bob ]
+ * Clone Set: cl_tgtd [res_tgtd]:
+ * Started: [ alice bob ]
+ * Resource Group: rg_iscsivg01:
+ * res_portblock_iscsivg01_block (ocf:heartbeat:portblock): Started alice
+ * res_lvm_iscsivg01 (ocf:heartbeat:LVM): Started alice
+ * res_target_iscsivg01 (ocf:heartbeat:iSCSITarget): Started alice
+ * res_lu_iscsivg01_lun1 (ocf:heartbeat:iSCSILogicalUnit): Started alice
+ * res_lu_iscsivg01_lun2 (ocf:heartbeat:iSCSILogicalUnit): Started alice
+ * res_ip_alicebob01 (ocf:heartbeat:IPaddr2): Started alice
+ * res_portblock_iscsivg01_unblock (ocf:heartbeat:portblock): Started alice
+
+Transition Summary:
+ * Stop res_drbd_iscsivg01:0 ( Unpromoted bob ) due to node availability
+ * Stop res_tgtd:0 ( bob ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: ms_drbd_iscsivg01_pre_notify_stop_0
+ * Pseudo action: cl_tgtd_stop_0
+ * Resource action: res_drbd_iscsivg01:0 notify on bob
+ * Resource action: res_drbd_iscsivg01:1 notify on alice
+ * Pseudo action: ms_drbd_iscsivg01_confirmed-pre_notify_stop_0
+ * Pseudo action: ms_drbd_iscsivg01_stop_0
+ * Resource action: res_tgtd:0 stop on bob
+ * Pseudo action: cl_tgtd_stopped_0
+ * Resource action: res_drbd_iscsivg01:0 stop on bob
+ * Pseudo action: ms_drbd_iscsivg01_stopped_0
+ * Pseudo action: ms_drbd_iscsivg01_post_notify_stopped_0
+ * Resource action: res_drbd_iscsivg01:1 notify on alice
+ * Pseudo action: ms_drbd_iscsivg01_confirmed-post_notify_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Node bob: standby
+ * Online: [ alice ]
+
+ * Full List of Resources:
+ * Clone Set: ms_drbd_iscsivg01 [res_drbd_iscsivg01] (promotable):
+ * Promoted: [ alice ]
+ * Stopped: [ bob ]
+ * Clone Set: cl_tgtd [res_tgtd]:
+ * Started: [ alice ]
+ * Stopped: [ bob ]
+ * Resource Group: rg_iscsivg01:
+ * res_portblock_iscsivg01_block (ocf:heartbeat:portblock): Started alice
+ * res_lvm_iscsivg01 (ocf:heartbeat:LVM): Started alice
+ * res_target_iscsivg01 (ocf:heartbeat:iSCSITarget): Started alice
+ * res_lu_iscsivg01_lun1 (ocf:heartbeat:iSCSILogicalUnit): Started alice
+ * res_lu_iscsivg01_lun2 (ocf:heartbeat:iSCSILogicalUnit): Started alice
+ * res_ip_alicebob01 (ocf:heartbeat:IPaddr2): Started alice
+ * res_portblock_iscsivg01_unblock (ocf:heartbeat:portblock): Started alice
diff --git a/cts/scheduler/summary/bug-lf-2160.summary b/cts/scheduler/summary/bug-lf-2160.summary
new file mode 100644
index 0000000..f7fb9ed
--- /dev/null
+++ b/cts/scheduler/summary/bug-lf-2160.summary
@@ -0,0 +1,23 @@
+Current cluster status:
+ * Node List:
+ * Online: [ cardhu dualamd1 dualamd3 ]
+
+ * Full List of Resources:
+ * domU-test01 (ocf:heartbeat:Xen): Started dualamd1
+ * Clone Set: clone-dom0-iscsi1 [dom0-iscsi1-cnx1]:
+ * Started: [ dualamd1 dualamd3 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: domU-test01 monitor on cardhu
+ * Resource action: dom0-iscsi1-cnx1:0 monitor on cardhu
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ cardhu dualamd1 dualamd3 ]
+
+ * Full List of Resources:
+ * domU-test01 (ocf:heartbeat:Xen): Started dualamd1
+ * Clone Set: clone-dom0-iscsi1 [dom0-iscsi1-cnx1]:
+ * Started: [ dualamd1 dualamd3 ]
diff --git a/cts/scheduler/summary/bug-lf-2171.summary b/cts/scheduler/summary/bug-lf-2171.summary
new file mode 100644
index 0000000..5117608
--- /dev/null
+++ b/cts/scheduler/summary/bug-lf-2171.summary
@@ -0,0 +1,39 @@
+2 of 4 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ xenserver1 xenserver2 ]
+
+ * Full List of Resources:
+ * Clone Set: cl_res_Dummy1 [res_Dummy1] (disabled):
+ * Started: [ xenserver1 xenserver2 ]
+ * Resource Group: gr_Dummy (disabled):
+ * res_Dummy2 (ocf:heartbeat:Dummy): Started xenserver1
+ * res_Dummy3 (ocf:heartbeat:Dummy): Started xenserver1
+
+Transition Summary:
+ * Stop res_Dummy1:0 ( xenserver1 ) due to node availability
+ * Stop res_Dummy1:1 ( xenserver2 ) due to node availability
+ * Stop res_Dummy2 ( xenserver1 ) due to unrunnable cl_res_Dummy1 running
+ * Stop res_Dummy3 ( xenserver1 ) due to unrunnable cl_res_Dummy1 running
+
+Executing Cluster Transition:
+ * Pseudo action: gr_Dummy_stop_0
+ * Resource action: res_Dummy2 stop on xenserver1
+ * Resource action: res_Dummy3 stop on xenserver1
+ * Pseudo action: gr_Dummy_stopped_0
+ * Pseudo action: cl_res_Dummy1_stop_0
+ * Resource action: res_Dummy1:1 stop on xenserver1
+ * Resource action: res_Dummy1:0 stop on xenserver2
+ * Pseudo action: cl_res_Dummy1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ xenserver1 xenserver2 ]
+
+ * Full List of Resources:
+ * Clone Set: cl_res_Dummy1 [res_Dummy1] (disabled):
+ * Stopped (disabled): [ xenserver1 xenserver2 ]
+ * Resource Group: gr_Dummy (disabled):
+ * res_Dummy2 (ocf:heartbeat:Dummy): Stopped
+ * res_Dummy3 (ocf:heartbeat:Dummy): Stopped
diff --git a/cts/scheduler/summary/bug-lf-2213.summary b/cts/scheduler/summary/bug-lf-2213.summary
new file mode 100644
index 0000000..83b0f17
--- /dev/null
+++ b/cts/scheduler/summary/bug-lf-2213.summary
@@ -0,0 +1,30 @@
+Current cluster status:
+ * Node List:
+ * Online: [ fs1 fs2 web1 web2 ]
+
+ * Full List of Resources:
+ * Clone Set: cl-test [gr-test]:
+ * Stopped: [ fs1 fs2 web1 web2 ]
+
+Transition Summary:
+ * Start test:0 ( web1 )
+ * Start test:1 ( web2 )
+
+Executing Cluster Transition:
+ * Pseudo action: cl-test_start_0
+ * Pseudo action: gr-test:0_start_0
+ * Resource action: test:0 start on web1
+ * Pseudo action: gr-test:1_start_0
+ * Resource action: test:1 start on web2
+ * Pseudo action: gr-test:0_running_0
+ * Pseudo action: gr-test:1_running_0
+ * Pseudo action: cl-test_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fs1 fs2 web1 web2 ]
+
+ * Full List of Resources:
+ * Clone Set: cl-test [gr-test]:
+ * Started: [ web1 web2 ]
+ * Stopped: [ fs1 fs2 ]
diff --git a/cts/scheduler/summary/bug-lf-2317.summary b/cts/scheduler/summary/bug-lf-2317.summary
new file mode 100644
index 0000000..96603fd
--- /dev/null
+++ b/cts/scheduler/summary/bug-lf-2317.summary
@@ -0,0 +1,36 @@
+Current cluster status:
+ * Node List:
+ * Online: [ ibm1.isg.si ibm2.isg.si ]
+
+ * Full List of Resources:
+ * HostingIsg (ocf:heartbeat:Xen): Started ibm2.isg.si
+ * Clone Set: ms_drbd_r0 [drbd_r0] (promotable):
+ * Promoted: [ ibm2.isg.si ]
+ * Unpromoted: [ ibm1.isg.si ]
+
+Transition Summary:
+ * Promote drbd_r0:1 ( Unpromoted -> Promoted ibm1.isg.si )
+
+Executing Cluster Transition:
+ * Resource action: drbd_r0:0 cancel=30000 on ibm1.isg.si
+ * Pseudo action: ms_drbd_r0_pre_notify_promote_0
+ * Resource action: drbd_r0:1 notify on ibm2.isg.si
+ * Resource action: drbd_r0:0 notify on ibm1.isg.si
+ * Pseudo action: ms_drbd_r0_confirmed-pre_notify_promote_0
+ * Pseudo action: ms_drbd_r0_promote_0
+ * Resource action: drbd_r0:0 promote on ibm1.isg.si
+ * Pseudo action: ms_drbd_r0_promoted_0
+ * Pseudo action: ms_drbd_r0_post_notify_promoted_0
+ * Resource action: drbd_r0:1 notify on ibm2.isg.si
+ * Resource action: drbd_r0:0 notify on ibm1.isg.si
+ * Pseudo action: ms_drbd_r0_confirmed-post_notify_promoted_0
+ * Resource action: drbd_r0:0 monitor=15000 on ibm1.isg.si
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ ibm1.isg.si ibm2.isg.si ]
+
+ * Full List of Resources:
+ * HostingIsg (ocf:heartbeat:Xen): Started ibm2.isg.si
+ * Clone Set: ms_drbd_r0 [drbd_r0] (promotable):
+ * Promoted: [ ibm1.isg.si ibm2.isg.si ]
diff --git a/cts/scheduler/summary/bug-lf-2358.summary b/cts/scheduler/summary/bug-lf-2358.summary
new file mode 100644
index 0000000..b89aadc
--- /dev/null
+++ b/cts/scheduler/summary/bug-lf-2358.summary
@@ -0,0 +1,68 @@
+2 of 15 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ alice.demo bob.demo ]
+
+ * Full List of Resources:
+ * Clone Set: ms_drbd_nfsexport [res_drbd_nfsexport] (promotable, disabled):
+ * Stopped (disabled): [ alice.demo bob.demo ]
+ * Resource Group: rg_nfs:
+ * res_fs_nfsexport (ocf:heartbeat:Filesystem): Stopped
+ * res_ip_nfs (ocf:heartbeat:IPaddr2): Stopped
+ * res_nfs (lsb:nfs): Stopped
+ * Resource Group: rg_mysql1:
+ * res_fs_mysql1 (ocf:heartbeat:Filesystem): Started bob.demo
+ * res_ip_mysql1 (ocf:heartbeat:IPaddr2): Started bob.demo
+ * res_mysql1 (ocf:heartbeat:mysql): Started bob.demo
+ * Clone Set: ms_drbd_mysql1 [res_drbd_mysql1] (promotable):
+ * Promoted: [ bob.demo ]
+ * Stopped: [ alice.demo ]
+ * Clone Set: ms_drbd_mysql2 [res_drbd_mysql2] (promotable):
+ * Promoted: [ alice.demo ]
+ * Unpromoted: [ bob.demo ]
+ * Resource Group: rg_mysql2:
+ * res_fs_mysql2 (ocf:heartbeat:Filesystem): Started alice.demo
+ * res_ip_mysql2 (ocf:heartbeat:IPaddr2): Started alice.demo
+ * res_mysql2 (ocf:heartbeat:mysql): Started alice.demo
+
+Transition Summary:
+ * Start res_drbd_mysql1:1 ( alice.demo )
+
+Executing Cluster Transition:
+ * Pseudo action: ms_drbd_mysql1_pre_notify_start_0
+ * Resource action: res_drbd_mysql1:0 notify on bob.demo
+ * Pseudo action: ms_drbd_mysql1_confirmed-pre_notify_start_0
+ * Pseudo action: ms_drbd_mysql1_start_0
+ * Resource action: res_drbd_mysql1:1 start on alice.demo
+ * Pseudo action: ms_drbd_mysql1_running_0
+ * Pseudo action: ms_drbd_mysql1_post_notify_running_0
+ * Resource action: res_drbd_mysql1:0 notify on bob.demo
+ * Resource action: res_drbd_mysql1:1 notify on alice.demo
+ * Pseudo action: ms_drbd_mysql1_confirmed-post_notify_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ alice.demo bob.demo ]
+
+ * Full List of Resources:
+ * Clone Set: ms_drbd_nfsexport [res_drbd_nfsexport] (promotable, disabled):
+ * Stopped (disabled): [ alice.demo bob.demo ]
+ * Resource Group: rg_nfs:
+ * res_fs_nfsexport (ocf:heartbeat:Filesystem): Stopped
+ * res_ip_nfs (ocf:heartbeat:IPaddr2): Stopped
+ * res_nfs (lsb:nfs): Stopped
+ * Resource Group: rg_mysql1:
+ * res_fs_mysql1 (ocf:heartbeat:Filesystem): Started bob.demo
+ * res_ip_mysql1 (ocf:heartbeat:IPaddr2): Started bob.demo
+ * res_mysql1 (ocf:heartbeat:mysql): Started bob.demo
+ * Clone Set: ms_drbd_mysql1 [res_drbd_mysql1] (promotable):
+ * Promoted: [ bob.demo ]
+ * Unpromoted: [ alice.demo ]
+ * Clone Set: ms_drbd_mysql2 [res_drbd_mysql2] (promotable):
+ * Promoted: [ alice.demo ]
+ * Unpromoted: [ bob.demo ]
+ * Resource Group: rg_mysql2:
+ * res_fs_mysql2 (ocf:heartbeat:Filesystem): Started alice.demo
+ * res_ip_mysql2 (ocf:heartbeat:IPaddr2): Started alice.demo
+ * res_mysql2 (ocf:heartbeat:mysql): Started alice.demo
diff --git a/cts/scheduler/summary/bug-lf-2361.summary b/cts/scheduler/summary/bug-lf-2361.summary
new file mode 100644
index 0000000..4ea272d
--- /dev/null
+++ b/cts/scheduler/summary/bug-lf-2361.summary
@@ -0,0 +1,44 @@
+Current cluster status:
+ * Node List:
+ * Online: [ alice.demo bob.demo ]
+
+ * Full List of Resources:
+ * dummy1 (ocf:heartbeat:Dummy): Stopped
+ * Clone Set: ms_stateful [stateful] (promotable):
+ * Stopped: [ alice.demo bob.demo ]
+ * Clone Set: cl_dummy2 [dummy2]:
+ * Stopped: [ alice.demo bob.demo ]
+
+Transition Summary:
+ * Start stateful:0 ( alice.demo )
+ * Start stateful:1 ( bob.demo )
+ * Start dummy2:0 ( alice.demo ) due to unrunnable dummy1 start (blocked)
+ * Start dummy2:1 ( bob.demo ) due to unrunnable dummy1 start (blocked)
+
+Executing Cluster Transition:
+ * Pseudo action: ms_stateful_pre_notify_start_0
+ * Resource action: service2:0 delete on bob.demo
+ * Resource action: service2:0 delete on alice.demo
+ * Resource action: service2:1 delete on bob.demo
+ * Resource action: service1 delete on bob.demo
+ * Resource action: service1 delete on alice.demo
+ * Pseudo action: ms_stateful_confirmed-pre_notify_start_0
+ * Pseudo action: ms_stateful_start_0
+ * Resource action: stateful:0 start on alice.demo
+ * Resource action: stateful:1 start on bob.demo
+ * Pseudo action: ms_stateful_running_0
+ * Pseudo action: ms_stateful_post_notify_running_0
+ * Resource action: stateful:0 notify on alice.demo
+ * Resource action: stateful:1 notify on bob.demo
+ * Pseudo action: ms_stateful_confirmed-post_notify_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ alice.demo bob.demo ]
+
+ * Full List of Resources:
+ * dummy1 (ocf:heartbeat:Dummy): Stopped
+ * Clone Set: ms_stateful [stateful] (promotable):
+ * Unpromoted: [ alice.demo bob.demo ]
+ * Clone Set: cl_dummy2 [dummy2]:
+ * Stopped: [ alice.demo bob.demo ]
diff --git a/cts/scheduler/summary/bug-lf-2422.summary b/cts/scheduler/summary/bug-lf-2422.summary
new file mode 100644
index 0000000..023d07d
--- /dev/null
+++ b/cts/scheduler/summary/bug-lf-2422.summary
@@ -0,0 +1,83 @@
+4 of 21 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ qa-suse-1 qa-suse-2 qa-suse-3 qa-suse-4 ]
+
+ * Full List of Resources:
+ * sbd_stonith (stonith:external/sbd): Started qa-suse-2
+ * Clone Set: c-o2stage [o2stage]:
+ * Started: [ qa-suse-1 qa-suse-2 qa-suse-3 qa-suse-4 ]
+ * Clone Set: c-ocfs [ocfs]:
+ * Started: [ qa-suse-1 qa-suse-2 qa-suse-3 qa-suse-4 ]
+
+Transition Summary:
+ * Stop o2cb:0 ( qa-suse-1 ) due to node availability
+ * Stop cmirror:0 ( qa-suse-1 ) due to node availability
+ * Stop o2cb:1 ( qa-suse-4 ) due to node availability
+ * Stop cmirror:1 ( qa-suse-4 ) due to node availability
+ * Stop o2cb:2 ( qa-suse-3 ) due to node availability
+ * Stop cmirror:2 ( qa-suse-3 ) due to node availability
+ * Stop o2cb:3 ( qa-suse-2 ) due to node availability
+ * Stop cmirror:3 ( qa-suse-2 ) due to node availability
+ * Stop ocfs:0 ( qa-suse-1 ) due to node availability
+ * Stop ocfs:1 ( qa-suse-4 ) due to node availability
+ * Stop ocfs:2 ( qa-suse-3 ) due to node availability
+ * Stop ocfs:3 ( qa-suse-2 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: sbd_stonith monitor=15000 on qa-suse-2
+ * Pseudo action: c-ocfs_stop_0
+ * Resource action: ocfs:3 stop on qa-suse-2
+ * Resource action: ocfs:2 stop on qa-suse-3
+ * Resource action: ocfs:0 stop on qa-suse-4
+ * Resource action: ocfs:1 stop on qa-suse-1
+ * Pseudo action: c-ocfs_stopped_0
+ * Pseudo action: c-o2stage_stop_0
+ * Pseudo action: o2stage:0_stop_0
+ * Resource action: cmirror:1 stop on qa-suse-1
+ * Pseudo action: o2stage:1_stop_0
+ * Resource action: cmirror:0 stop on qa-suse-4
+ * Pseudo action: o2stage:2_stop_0
+ * Resource action: cmirror:2 stop on qa-suse-3
+ * Pseudo action: o2stage:3_stop_0
+ * Resource action: cmirror:3 stop on qa-suse-2
+ * Resource action: o2cb:1 stop on qa-suse-1
+ * Resource action: o2cb:0 stop on qa-suse-4
+ * Resource action: o2cb:2 stop on qa-suse-3
+ * Resource action: o2cb:3 stop on qa-suse-2
+ * Pseudo action: o2stage:0_stopped_0
+ * Pseudo action: o2stage:1_stopped_0
+ * Pseudo action: o2stage:2_stopped_0
+ * Pseudo action: o2stage:3_stopped_0
+ * Pseudo action: c-o2stage_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ qa-suse-1 qa-suse-2 qa-suse-3 qa-suse-4 ]
+
+ * Full List of Resources:
+ * sbd_stonith (stonith:external/sbd): Started qa-suse-2
+ * Clone Set: c-o2stage [o2stage]:
+ * Resource Group: o2stage:0:
+ * dlm (ocf:pacemaker:controld): Started qa-suse-1
+ * clvm (ocf:lvm2:clvmd): Started qa-suse-1
+ * o2cb (ocf:ocfs2:o2cb): Stopped (disabled)
+ * cmirror (ocf:lvm2:cmirrord): Stopped
+ * Resource Group: o2stage:1:
+ * dlm (ocf:pacemaker:controld): Started qa-suse-4
+ * clvm (ocf:lvm2:clvmd): Started qa-suse-4
+ * o2cb (ocf:ocfs2:o2cb): Stopped (disabled)
+ * cmirror (ocf:lvm2:cmirrord): Stopped
+ * Resource Group: o2stage:2:
+ * dlm (ocf:pacemaker:controld): Started qa-suse-3
+ * clvm (ocf:lvm2:clvmd): Started qa-suse-3
+ * o2cb (ocf:ocfs2:o2cb): Stopped (disabled)
+ * cmirror (ocf:lvm2:cmirrord): Stopped
+ * Resource Group: o2stage:3:
+ * dlm (ocf:pacemaker:controld): Started qa-suse-2
+ * clvm (ocf:lvm2:clvmd): Started qa-suse-2
+ * o2cb (ocf:ocfs2:o2cb): Stopped (disabled)
+ * cmirror (ocf:lvm2:cmirrord): Stopped
+ * Clone Set: c-ocfs [ocfs]:
+ * Stopped: [ qa-suse-1 qa-suse-2 qa-suse-3 qa-suse-4 ]
diff --git a/cts/scheduler/summary/bug-lf-2435.summary b/cts/scheduler/summary/bug-lf-2435.summary
new file mode 100644
index 0000000..2077c2d
--- /dev/null
+++ b/cts/scheduler/summary/bug-lf-2435.summary
@@ -0,0 +1,33 @@
+Current cluster status:
+ * Node List:
+ * Node c20.chepkov.lan: standby (with active resources)
+ * Online: [ c19.chepkov.lan c21.chepkov.lan ]
+
+ * Full List of Resources:
+ * dummy1 (ocf:pacemaker:Dummy): Started c19.chepkov.lan
+ * dummy2 (ocf:pacemaker:Dummy): Started c20.chepkov.lan
+ * dummy4 (ocf:pacemaker:Dummy): Stopped
+ * dummy3 (ocf:pacemaker:Dummy): Started c21.chepkov.lan
+
+Transition Summary:
+ * Move dummy2 ( c20.chepkov.lan -> c21.chepkov.lan )
+ * Stop dummy3 ( c21.chepkov.lan ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: dummy2 stop on c20.chepkov.lan
+ * Resource action: dummy4 monitor on c21.chepkov.lan
+ * Resource action: dummy4 monitor on c20.chepkov.lan
+ * Resource action: dummy4 monitor on c19.chepkov.lan
+ * Resource action: dummy3 stop on c21.chepkov.lan
+ * Resource action: dummy2 start on c21.chepkov.lan
+
+Revised Cluster Status:
+ * Node List:
+ * Node c20.chepkov.lan: standby
+ * Online: [ c19.chepkov.lan c21.chepkov.lan ]
+
+ * Full List of Resources:
+ * dummy1 (ocf:pacemaker:Dummy): Started c19.chepkov.lan
+ * dummy2 (ocf:pacemaker:Dummy): Started c21.chepkov.lan
+ * dummy4 (ocf:pacemaker:Dummy): Stopped
+ * dummy3 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/bug-lf-2445.summary b/cts/scheduler/summary/bug-lf-2445.summary
new file mode 100644
index 0000000..6888938
--- /dev/null
+++ b/cts/scheduler/summary/bug-lf-2445.summary
@@ -0,0 +1,28 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: C [P] (unique):
+ * P:0 (ocf:pacemaker:Dummy): Started node1
+ * P:1 (ocf:pacemaker:Dummy): Started node1
+
+Transition Summary:
+ * Move P:1 ( node1 -> node2 )
+
+Executing Cluster Transition:
+ * Pseudo action: C_stop_0
+ * Resource action: P:1 stop on node1
+ * Pseudo action: C_stopped_0
+ * Pseudo action: C_start_0
+ * Resource action: P:1 start on node2
+ * Pseudo action: C_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: C [P] (unique):
+ * P:0 (ocf:pacemaker:Dummy): Started node1
+ * P:1 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/bug-lf-2453.summary b/cts/scheduler/summary/bug-lf-2453.summary
new file mode 100644
index 0000000..c8d1bdf
--- /dev/null
+++ b/cts/scheduler/summary/bug-lf-2453.summary
@@ -0,0 +1,41 @@
+2 of 5 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ domu1 domu2 ]
+
+ * Full List of Resources:
+ * PrimitiveResource1 (ocf:heartbeat:IPaddr2): Started domu1
+ * Clone Set: CloneResource1 [apache] (disabled):
+ * Started: [ domu1 domu2 ]
+ * Clone Set: CloneResource2 [DummyResource]:
+ * Started: [ domu1 domu2 ]
+
+Transition Summary:
+ * Stop PrimitiveResource1 ( domu1 ) due to required CloneResource2 running
+ * Stop apache:0 ( domu1 ) due to node availability
+ * Stop apache:1 ( domu2 ) due to node availability
+ * Stop DummyResource:0 ( domu1 ) due to unrunnable CloneResource1 running
+ * Stop DummyResource:1 ( domu2 ) due to unrunnable CloneResource1 running
+
+Executing Cluster Transition:
+ * Resource action: PrimitiveResource1 stop on domu1
+ * Pseudo action: CloneResource2_stop_0
+ * Resource action: DummyResource:1 stop on domu1
+ * Resource action: DummyResource:0 stop on domu2
+ * Pseudo action: CloneResource2_stopped_0
+ * Pseudo action: CloneResource1_stop_0
+ * Resource action: apache:1 stop on domu1
+ * Resource action: apache:0 stop on domu2
+ * Pseudo action: CloneResource1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ domu1 domu2 ]
+
+ * Full List of Resources:
+ * PrimitiveResource1 (ocf:heartbeat:IPaddr2): Stopped
+ * Clone Set: CloneResource1 [apache] (disabled):
+ * Stopped (disabled): [ domu1 domu2 ]
+ * Clone Set: CloneResource2 [DummyResource]:
+ * Stopped: [ domu1 domu2 ]
diff --git a/cts/scheduler/summary/bug-lf-2474.summary b/cts/scheduler/summary/bug-lf-2474.summary
new file mode 100644
index 0000000..6e2a072
--- /dev/null
+++ b/cts/scheduler/summary/bug-lf-2474.summary
@@ -0,0 +1,23 @@
+Current cluster status:
+ * Node List:
+ * Online: [ hex-14 ]
+
+ * Full List of Resources:
+ * dummy-10s-timeout (ocf:pacemaker:Dummy): Stopped
+ * dummy-default-timeout (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start dummy-10s-timeout ( hex-14 )
+ * Start dummy-default-timeout ( hex-14 )
+
+Executing Cluster Transition:
+ * Resource action: dummy-10s-timeout start on hex-14
+ * Resource action: dummy-default-timeout start on hex-14
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hex-14 ]
+
+ * Full List of Resources:
+ * dummy-10s-timeout (ocf:pacemaker:Dummy): Started hex-14
+ * dummy-default-timeout (ocf:pacemaker:Dummy): Started hex-14
diff --git a/cts/scheduler/summary/bug-lf-2493.summary b/cts/scheduler/summary/bug-lf-2493.summary
new file mode 100644
index 0000000..35749b2
--- /dev/null
+++ b/cts/scheduler/summary/bug-lf-2493.summary
@@ -0,0 +1,66 @@
+Current cluster status:
+ * Node List:
+ * Online: [ hpn07 hpn08 ]
+
+ * Full List of Resources:
+ * p_dummy1 (ocf:pacemaker:Dummy): Started hpn07
+ * p_dummy2 (ocf:pacemaker:Dummy): Stopped
+ * p_dummy4 (ocf:pacemaker:Dummy): Stopped
+ * p_dummy3 (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: ms_stateful1 [p_stateful1] (promotable):
+ * Promoted: [ hpn07 ]
+ * Unpromoted: [ hpn08 ]
+
+Transition Summary:
+ * Start p_dummy2 ( hpn08 )
+ * Start p_dummy4 ( hpn07 )
+ * Start p_dummy3 ( hpn08 )
+
+Executing Cluster Transition:
+ * Resource action: p_dummy2 start on hpn08
+ * Resource action: p_dummy3 start on hpn08
+ * Resource action: res_Filesystem_nfs_fs1 delete on hpn08
+ * Resource action: res_Filesystem_nfs_fs1 delete on hpn07
+ * Resource action: res_drbd_nfs:0 delete on hpn08
+ * Resource action: res_drbd_nfs:0 delete on hpn07
+ * Resource action: res_Filesystem_nfs_fs2 delete on hpn08
+ * Resource action: res_Filesystem_nfs_fs2 delete on hpn07
+ * Resource action: res_Filesystem_nfs_fs3 delete on hpn08
+ * Resource action: res_Filesystem_nfs_fs3 delete on hpn07
+ * Resource action: res_exportfs_fs1 delete on hpn08
+ * Resource action: res_exportfs_fs1 delete on hpn07
+ * Resource action: res_exportfs_fs2 delete on hpn08
+ * Resource action: res_exportfs_fs2 delete on hpn07
+ * Resource action: res_exportfs_fs3 delete on hpn08
+ * Resource action: res_exportfs_fs3 delete on hpn07
+ * Resource action: res_drbd_nfs:1 delete on hpn08
+ * Resource action: res_drbd_nfs:1 delete on hpn07
+ * Resource action: res_LVM_nfs delete on hpn08
+ * Resource action: res_LVM_nfs delete on hpn07
+ * Resource action: res_LVM_p_vg-sap delete on hpn08
+ * Resource action: res_LVM_p_vg-sap delete on hpn07
+ * Resource action: res_exportfs_rootfs:0 delete on hpn07
+ * Resource action: res_IPaddr2_nfs delete on hpn08
+ * Resource action: res_IPaddr2_nfs delete on hpn07
+ * Resource action: res_drbd_hpn78:0 delete on hpn08
+ * Resource action: res_drbd_hpn78:0 delete on hpn07
+ * Resource action: res_Filesystem_sap_db delete on hpn08
+ * Resource action: res_Filesystem_sap_db delete on hpn07
+ * Resource action: res_Filesystem_sap_ci delete on hpn08
+ * Resource action: res_Filesystem_sap_ci delete on hpn07
+ * Resource action: res_exportfs_rootfs:1 delete on hpn08
+ * Resource action: res_drbd_hpn78:1 delete on hpn08
+ * Resource action: p_dummy4 start on hpn07
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hpn07 hpn08 ]
+
+ * Full List of Resources:
+ * p_dummy1 (ocf:pacemaker:Dummy): Started hpn07
+ * p_dummy2 (ocf:pacemaker:Dummy): Started hpn08
+ * p_dummy4 (ocf:pacemaker:Dummy): Started hpn07
+ * p_dummy3 (ocf:pacemaker:Dummy): Started hpn08
+ * Clone Set: ms_stateful1 [p_stateful1] (promotable):
+ * Promoted: [ hpn07 ]
+ * Unpromoted: [ hpn08 ]
diff --git a/cts/scheduler/summary/bug-lf-2508.summary b/cts/scheduler/summary/bug-lf-2508.summary
new file mode 100644
index 0000000..0563f73
--- /dev/null
+++ b/cts/scheduler/summary/bug-lf-2508.summary
@@ -0,0 +1,112 @@
+Current cluster status:
+ * Node List:
+ * Node srv02: UNCLEAN (offline)
+ * Online: [ srv01 srv03 srv04 ]
+
+ * Full List of Resources:
+ * Resource Group: Group01:
+ * Dummy01 (ocf:heartbeat:Dummy): Stopped
+ * Resource Group: Group02:
+ * Dummy02 (ocf:heartbeat:Dummy): Started srv02 (UNCLEAN)
+ * Resource Group: Group03:
+ * Dummy03 (ocf:heartbeat:Dummy): Started srv03
+ * Clone Set: clnStonith1 [grpStonith1]:
+ * Resource Group: grpStonith1:1:
+ * prmStonith1-1 (stonith:external/stonith-helper): Started srv02 (UNCLEAN)
+ * prmStonith1-3 (stonith:external/ssh): Started srv02 (UNCLEAN)
+ * Started: [ srv03 srv04 ]
+ * Stopped: [ srv01 ]
+ * Clone Set: clnStonith2 [grpStonith2]:
+ * Started: [ srv01 srv03 srv04 ]
+ * Stopped: [ srv02 ]
+ * Clone Set: clnStonith3 [grpStonith3]:
+ * Resource Group: grpStonith3:0:
+ * prmStonith3-1 (stonith:external/stonith-helper): Started srv02 (UNCLEAN)
+ * prmStonith3-3 (stonith:external/ssh): Started srv02 (UNCLEAN)
+ * Resource Group: grpStonith3:1:
+ * prmStonith3-1 (stonith:external/stonith-helper): Started srv01
+ * prmStonith3-3 (stonith:external/ssh): Stopped
+ * Started: [ srv04 ]
+ * Stopped: [ srv03 ]
+ * Clone Set: clnStonith4 [grpStonith4]:
+ * Resource Group: grpStonith4:1:
+ * prmStonith4-1 (stonith:external/stonith-helper): Started srv02 (UNCLEAN)
+ * prmStonith4-3 (stonith:external/ssh): Started srv02 (UNCLEAN)
+ * Started: [ srv01 srv03 ]
+ * Stopped: [ srv04 ]
+
+Transition Summary:
+ * Fence (reboot) srv02 'peer is no longer part of the cluster'
+ * Start Dummy01 ( srv01 )
+ * Move Dummy02 ( srv02 -> srv04 )
+ * Stop prmStonith1-1:1 ( srv02 ) due to node availability
+ * Stop prmStonith1-3:1 ( srv02 ) due to node availability
+ * Stop prmStonith3-1:0 ( srv02 ) due to node availability
+ * Stop prmStonith3-3:0 ( srv02 ) due to node availability
+ * Start prmStonith3-3:1 ( srv01 )
+ * Stop prmStonith4-1:1 ( srv02 ) due to node availability
+ * Stop prmStonith4-3:1 ( srv02 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: Group01_start_0
+ * Pseudo action: clnStonith1_stop_0
+ * Resource action: prmStonith3-1:1 monitor=3600000 on srv01
+ * Pseudo action: clnStonith3_stop_0
+ * Pseudo action: clnStonith4_stop_0
+ * Fencing srv02 (reboot)
+ * Resource action: Dummy01 start on srv01
+ * Pseudo action: Group02_stop_0
+ * Pseudo action: Dummy02_stop_0
+ * Pseudo action: grpStonith1:1_stop_0
+ * Pseudo action: prmStonith1-3:1_stop_0
+ * Pseudo action: grpStonith3:0_stop_0
+ * Pseudo action: prmStonith3-3:1_stop_0
+ * Pseudo action: grpStonith4:1_stop_0
+ * Pseudo action: prmStonith4-3:1_stop_0
+ * Pseudo action: Group01_running_0
+ * Resource action: Dummy01 monitor=10000 on srv01
+ * Pseudo action: Group02_stopped_0
+ * Pseudo action: Group02_start_0
+ * Resource action: Dummy02 start on srv04
+ * Pseudo action: prmStonith1-1:1_stop_0
+ * Pseudo action: prmStonith3-1:1_stop_0
+ * Pseudo action: prmStonith4-1:1_stop_0
+ * Pseudo action: Group02_running_0
+ * Resource action: Dummy02 monitor=10000 on srv04
+ * Pseudo action: grpStonith1:1_stopped_0
+ * Pseudo action: clnStonith1_stopped_0
+ * Pseudo action: grpStonith3:0_stopped_0
+ * Pseudo action: clnStonith3_stopped_0
+ * Pseudo action: clnStonith3_start_0
+ * Pseudo action: grpStonith4:1_stopped_0
+ * Pseudo action: clnStonith4_stopped_0
+ * Pseudo action: grpStonith3:1_start_0
+ * Resource action: prmStonith3-3:1 start on srv01
+ * Pseudo action: grpStonith3:1_running_0
+ * Resource action: prmStonith3-3:1 monitor=3600000 on srv01
+ * Pseudo action: clnStonith3_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ srv01 srv03 srv04 ]
+ * OFFLINE: [ srv02 ]
+
+ * Full List of Resources:
+ * Resource Group: Group01:
+ * Dummy01 (ocf:heartbeat:Dummy): Started srv01
+ * Resource Group: Group02:
+ * Dummy02 (ocf:heartbeat:Dummy): Started srv04
+ * Resource Group: Group03:
+ * Dummy03 (ocf:heartbeat:Dummy): Started srv03
+ * Clone Set: clnStonith1 [grpStonith1]:
+ * Started: [ srv03 srv04 ]
+ * Stopped: [ srv01 srv02 ]
+ * Clone Set: clnStonith2 [grpStonith2]:
+ * Started: [ srv01 srv03 srv04 ]
+ * Stopped: [ srv02 ]
+ * Clone Set: clnStonith3 [grpStonith3]:
+ * Started: [ srv01 srv04 ]
+ * Stopped: [ srv02 srv03 ]
+ * Clone Set: clnStonith4 [grpStonith4]:
+ * Started: [ srv01 srv03 ]
+ * Stopped: [ srv02 srv04 ]
diff --git a/cts/scheduler/summary/bug-lf-2544.summary b/cts/scheduler/summary/bug-lf-2544.summary
new file mode 100644
index 0000000..b21de80
--- /dev/null
+++ b/cts/scheduler/summary/bug-lf-2544.summary
@@ -0,0 +1,24 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node-0 node-1 ]
+
+ * Full List of Resources:
+ * Clone Set: ms0 [s0] (promotable):
+ * Unpromoted: [ node-0 node-1 ]
+
+Transition Summary:
+ * Promote s0:1 ( Unpromoted -> Promoted node-1 )
+
+Executing Cluster Transition:
+ * Pseudo action: ms0_promote_0
+ * Resource action: s0:1 promote on node-1
+ * Pseudo action: ms0_promoted_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node-0 node-1 ]
+
+ * Full List of Resources:
+ * Clone Set: ms0 [s0] (promotable):
+ * Promoted: [ node-1 ]
+ * Unpromoted: [ node-0 ]
diff --git a/cts/scheduler/summary/bug-lf-2551.summary b/cts/scheduler/summary/bug-lf-2551.summary
new file mode 100644
index 0000000..ebfe1ad
--- /dev/null
+++ b/cts/scheduler/summary/bug-lf-2551.summary
@@ -0,0 +1,226 @@
+Current cluster status:
+ * Node List:
+ * Node hex-9: UNCLEAN (offline)
+ * Online: [ hex-0 hex-7 hex-8 ]
+
+ * Full List of Resources:
+ * vm-00 (ocf:heartbeat:Xen): Started hex-0
+ * Clone Set: base-clone [base-group]:
+ * Resource Group: base-group:3:
+ * dlm (ocf:pacemaker:controld): Started hex-9 (UNCLEAN)
+ * o2cb (ocf:ocfs2:o2cb): Started hex-9 (UNCLEAN)
+ * clvm (ocf:lvm2:clvmd): Started hex-9 (UNCLEAN)
+ * cmirrord (ocf:lvm2:cmirrord): Started hex-9 (UNCLEAN)
+ * vg1 (ocf:heartbeat:LVM): Started hex-9 (UNCLEAN)
+ * ocfs2-1 (ocf:heartbeat:Filesystem): Started hex-9 (UNCLEAN)
+ * Started: [ hex-0 hex-7 hex-8 ]
+ * vm-01 (ocf:heartbeat:Xen): Started hex-7
+ * vm-02 (ocf:heartbeat:Xen): Started hex-8
+ * vm-03 (ocf:heartbeat:Xen): Started hex-9 (UNCLEAN)
+ * vm-04 (ocf:heartbeat:Xen): Started hex-7
+ * vm-05 (ocf:heartbeat:Xen): Started hex-8
+ * fencing-sbd (stonith:external/sbd): Started hex-9 (UNCLEAN)
+ * vm-06 (ocf:heartbeat:Xen): Started hex-9 (UNCLEAN)
+ * vm-07 (ocf:heartbeat:Xen): Started hex-7
+ * vm-08 (ocf:heartbeat:Xen): Started hex-8
+ * vm-09 (ocf:heartbeat:Xen): Started hex-9 (UNCLEAN)
+ * vm-10 (ocf:heartbeat:Xen): Started hex-0
+ * vm-11 (ocf:heartbeat:Xen): Started hex-7
+ * vm-12 (ocf:heartbeat:Xen): Started hex-8
+ * vm-13 (ocf:heartbeat:Xen): Started hex-9 (UNCLEAN)
+ * vm-14 (ocf:heartbeat:Xen): Started hex-0
+ * vm-15 (ocf:heartbeat:Xen): Started hex-7
+ * vm-16 (ocf:heartbeat:Xen): Started hex-8
+ * vm-17 (ocf:heartbeat:Xen): Started hex-9 (UNCLEAN)
+ * vm-18 (ocf:heartbeat:Xen): Started hex-0
+ * vm-19 (ocf:heartbeat:Xen): Started hex-7
+ * vm-20 (ocf:heartbeat:Xen): Started hex-8
+ * vm-21 (ocf:heartbeat:Xen): Started hex-9 (UNCLEAN)
+ * vm-22 (ocf:heartbeat:Xen): Started hex-0
+ * vm-23 (ocf:heartbeat:Xen): Started hex-7
+ * vm-24 (ocf:heartbeat:Xen): Started hex-8
+ * vm-25 (ocf:heartbeat:Xen): Started hex-9 (UNCLEAN)
+ * vm-26 (ocf:heartbeat:Xen): Started hex-0
+ * vm-27 (ocf:heartbeat:Xen): Started hex-7
+ * vm-28 (ocf:heartbeat:Xen): Started hex-8
+ * vm-29 (ocf:heartbeat:Xen): Started hex-9 (UNCLEAN)
+ * vm-30 (ocf:heartbeat:Xen): Started hex-0
+ * vm-31 (ocf:heartbeat:Xen): Started hex-7
+ * vm-32 (ocf:heartbeat:Xen): Started hex-8
+ * dummy1 (ocf:heartbeat:Dummy): Started hex-9 (UNCLEAN)
+ * vm-33 (ocf:heartbeat:Xen): Started hex-9 (UNCLEAN)
+ * vm-34 (ocf:heartbeat:Xen): Started hex-0
+ * vm-35 (ocf:heartbeat:Xen): Started hex-7
+ * vm-36 (ocf:heartbeat:Xen): Started hex-8
+ * vm-37 (ocf:heartbeat:Xen): Started hex-9 (UNCLEAN)
+ * vm-38 (ocf:heartbeat:Xen): Started hex-0
+ * vm-39 (ocf:heartbeat:Xen): Started hex-7
+ * vm-40 (ocf:heartbeat:Xen): Started hex-8
+ * vm-41 (ocf:heartbeat:Xen): Started hex-9 (UNCLEAN)
+ * vm-42 (ocf:heartbeat:Xen): Started hex-0
+ * vm-43 (ocf:heartbeat:Xen): Started hex-7
+ * vm-44 (ocf:heartbeat:Xen): Started hex-8
+ * vm-45 (ocf:heartbeat:Xen): Started hex-9 (UNCLEAN)
+ * vm-46 (ocf:heartbeat:Xen): Started hex-0
+ * vm-47 (ocf:heartbeat:Xen): Started hex-7
+ * vm-48 (ocf:heartbeat:Xen): Started hex-8
+ * vm-49 (ocf:heartbeat:Xen): Started hex-9 (UNCLEAN)
+ * vm-50 (ocf:heartbeat:Xen): Started hex-0
+ * vm-51 (ocf:heartbeat:Xen): Started hex-7
+ * vm-52 (ocf:heartbeat:Xen): Started hex-8
+ * vm-53 (ocf:heartbeat:Xen): Started hex-9 (UNCLEAN)
+ * vm-54 (ocf:heartbeat:Xen): Started hex-0
+ * vm-55 (ocf:heartbeat:Xen): Started hex-7
+ * vm-56 (ocf:heartbeat:Xen): Started hex-8
+ * vm-57 (ocf:heartbeat:Xen): Started hex-9 (UNCLEAN)
+ * vm-58 (ocf:heartbeat:Xen): Started hex-0
+ * vm-59 (ocf:heartbeat:Xen): Started hex-7
+ * vm-60 (ocf:heartbeat:Xen): Started hex-8
+ * vm-61 (ocf:heartbeat:Xen): Started hex-9 (UNCLEAN)
+ * vm-62 (ocf:heartbeat:Xen): Stopped
+ * vm-63 (ocf:heartbeat:Xen): Stopped
+ * vm-64 (ocf:heartbeat:Xen): Stopped
+
+Transition Summary:
+ * Fence (reboot) hex-9 'peer is no longer part of the cluster'
+ * Move fencing-sbd ( hex-9 -> hex-0 )
+ * Move dummy1 ( hex-9 -> hex-0 )
+ * Stop dlm:3 ( hex-9 ) due to node availability
+ * Stop o2cb:3 ( hex-9 ) due to node availability
+ * Stop clvm:3 ( hex-9 ) due to node availability
+ * Stop cmirrord:3 ( hex-9 ) due to node availability
+ * Stop vg1:3 ( hex-9 ) due to node availability
+ * Stop ocfs2-1:3 ( hex-9 ) due to node availability
+ * Stop vm-03 ( hex-9 ) due to node availability
+ * Stop vm-06 ( hex-9 ) due to node availability
+ * Stop vm-09 ( hex-9 ) due to node availability
+ * Stop vm-13 ( hex-9 ) due to node availability
+ * Stop vm-17 ( hex-9 ) due to node availability
+ * Stop vm-21 ( hex-9 ) due to node availability
+ * Stop vm-25 ( hex-9 ) due to node availability
+ * Stop vm-29 ( hex-9 ) due to node availability
+ * Stop vm-33 ( hex-9 ) due to node availability
+ * Stop vm-37 ( hex-9 ) due to node availability
+ * Stop vm-41 ( hex-9 ) due to node availability
+ * Stop vm-45 ( hex-9 ) due to node availability
+ * Stop vm-49 ( hex-9 ) due to node availability
+ * Stop vm-53 ( hex-9 ) due to node availability
+ * Stop vm-57 ( hex-9 ) due to node availability
+ * Stop vm-61 ( hex-9 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: fencing-sbd_stop_0
+ * Resource action: dummy1 monitor=300000 on hex-8
+ * Resource action: dummy1 monitor=300000 on hex-7
+ * Pseudo action: load_stopped_hex-8
+ * Pseudo action: load_stopped_hex-7
+ * Pseudo action: load_stopped_hex-0
+ * Fencing hex-9 (reboot)
+ * Resource action: fencing-sbd start on hex-0
+ * Pseudo action: dummy1_stop_0
+ * Pseudo action: vm-03_stop_0
+ * Pseudo action: vm-06_stop_0
+ * Pseudo action: vm-09_stop_0
+ * Pseudo action: vm-13_stop_0
+ * Pseudo action: vm-17_stop_0
+ * Pseudo action: vm-21_stop_0
+ * Pseudo action: vm-25_stop_0
+ * Pseudo action: vm-29_stop_0
+ * Pseudo action: vm-33_stop_0
+ * Pseudo action: vm-37_stop_0
+ * Pseudo action: vm-41_stop_0
+ * Pseudo action: vm-45_stop_0
+ * Pseudo action: vm-49_stop_0
+ * Pseudo action: vm-53_stop_0
+ * Pseudo action: vm-57_stop_0
+ * Pseudo action: vm-61_stop_0
+ * Pseudo action: load_stopped_hex-9
+ * Resource action: dummy1 start on hex-0
+ * Pseudo action: base-clone_stop_0
+ * Resource action: dummy1 monitor=30000 on hex-0
+ * Pseudo action: base-group:3_stop_0
+ * Pseudo action: ocfs2-1:3_stop_0
+ * Pseudo action: vg1:3_stop_0
+ * Pseudo action: cmirrord:3_stop_0
+ * Pseudo action: clvm:3_stop_0
+ * Pseudo action: o2cb:3_stop_0
+ * Pseudo action: dlm:3_stop_0
+ * Pseudo action: base-group:3_stopped_0
+ * Pseudo action: base-clone_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hex-0 hex-7 hex-8 ]
+ * OFFLINE: [ hex-9 ]
+
+ * Full List of Resources:
+ * vm-00 (ocf:heartbeat:Xen): Started hex-0
+ * Clone Set: base-clone [base-group]:
+ * Started: [ hex-0 hex-7 hex-8 ]
+ * Stopped: [ hex-9 ]
+ * vm-01 (ocf:heartbeat:Xen): Started hex-7
+ * vm-02 (ocf:heartbeat:Xen): Started hex-8
+ * vm-03 (ocf:heartbeat:Xen): Stopped
+ * vm-04 (ocf:heartbeat:Xen): Started hex-7
+ * vm-05 (ocf:heartbeat:Xen): Started hex-8
+ * fencing-sbd (stonith:external/sbd): Started hex-0
+ * vm-06 (ocf:heartbeat:Xen): Stopped
+ * vm-07 (ocf:heartbeat:Xen): Started hex-7
+ * vm-08 (ocf:heartbeat:Xen): Started hex-8
+ * vm-09 (ocf:heartbeat:Xen): Stopped
+ * vm-10 (ocf:heartbeat:Xen): Started hex-0
+ * vm-11 (ocf:heartbeat:Xen): Started hex-7
+ * vm-12 (ocf:heartbeat:Xen): Started hex-8
+ * vm-13 (ocf:heartbeat:Xen): Stopped
+ * vm-14 (ocf:heartbeat:Xen): Started hex-0
+ * vm-15 (ocf:heartbeat:Xen): Started hex-7
+ * vm-16 (ocf:heartbeat:Xen): Started hex-8
+ * vm-17 (ocf:heartbeat:Xen): Stopped
+ * vm-18 (ocf:heartbeat:Xen): Started hex-0
+ * vm-19 (ocf:heartbeat:Xen): Started hex-7
+ * vm-20 (ocf:heartbeat:Xen): Started hex-8
+ * vm-21 (ocf:heartbeat:Xen): Stopped
+ * vm-22 (ocf:heartbeat:Xen): Started hex-0
+ * vm-23 (ocf:heartbeat:Xen): Started hex-7
+ * vm-24 (ocf:heartbeat:Xen): Started hex-8
+ * vm-25 (ocf:heartbeat:Xen): Stopped
+ * vm-26 (ocf:heartbeat:Xen): Started hex-0
+ * vm-27 (ocf:heartbeat:Xen): Started hex-7
+ * vm-28 (ocf:heartbeat:Xen): Started hex-8
+ * vm-29 (ocf:heartbeat:Xen): Stopped
+ * vm-30 (ocf:heartbeat:Xen): Started hex-0
+ * vm-31 (ocf:heartbeat:Xen): Started hex-7
+ * vm-32 (ocf:heartbeat:Xen): Started hex-8
+ * dummy1 (ocf:heartbeat:Dummy): Started hex-0
+ * vm-33 (ocf:heartbeat:Xen): Stopped
+ * vm-34 (ocf:heartbeat:Xen): Started hex-0
+ * vm-35 (ocf:heartbeat:Xen): Started hex-7
+ * vm-36 (ocf:heartbeat:Xen): Started hex-8
+ * vm-37 (ocf:heartbeat:Xen): Stopped
+ * vm-38 (ocf:heartbeat:Xen): Started hex-0
+ * vm-39 (ocf:heartbeat:Xen): Started hex-7
+ * vm-40 (ocf:heartbeat:Xen): Started hex-8
+ * vm-41 (ocf:heartbeat:Xen): Stopped
+ * vm-42 (ocf:heartbeat:Xen): Started hex-0
+ * vm-43 (ocf:heartbeat:Xen): Started hex-7
+ * vm-44 (ocf:heartbeat:Xen): Started hex-8
+ * vm-45 (ocf:heartbeat:Xen): Stopped
+ * vm-46 (ocf:heartbeat:Xen): Started hex-0
+ * vm-47 (ocf:heartbeat:Xen): Started hex-7
+ * vm-48 (ocf:heartbeat:Xen): Started hex-8
+ * vm-49 (ocf:heartbeat:Xen): Stopped
+ * vm-50 (ocf:heartbeat:Xen): Started hex-0
+ * vm-51 (ocf:heartbeat:Xen): Started hex-7
+ * vm-52 (ocf:heartbeat:Xen): Started hex-8
+ * vm-53 (ocf:heartbeat:Xen): Stopped
+ * vm-54 (ocf:heartbeat:Xen): Started hex-0
+ * vm-55 (ocf:heartbeat:Xen): Started hex-7
+ * vm-56 (ocf:heartbeat:Xen): Started hex-8
+ * vm-57 (ocf:heartbeat:Xen): Stopped
+ * vm-58 (ocf:heartbeat:Xen): Started hex-0
+ * vm-59 (ocf:heartbeat:Xen): Started hex-7
+ * vm-60 (ocf:heartbeat:Xen): Started hex-8
+ * vm-61 (ocf:heartbeat:Xen): Stopped
+ * vm-62 (ocf:heartbeat:Xen): Stopped
+ * vm-63 (ocf:heartbeat:Xen): Stopped
+ * vm-64 (ocf:heartbeat:Xen): Stopped
diff --git a/cts/scheduler/summary/bug-lf-2574.summary b/cts/scheduler/summary/bug-lf-2574.summary
new file mode 100644
index 0000000..fb01cde
--- /dev/null
+++ b/cts/scheduler/summary/bug-lf-2574.summary
@@ -0,0 +1,38 @@
+Current cluster status:
+ * Node List:
+ * Online: [ srv01 srv02 srv03 ]
+
+ * Full List of Resources:
+ * main_rsc (ocf:pacemaker:Dummy): Started srv01
+ * main_rsc2 (ocf:pacemaker:Dummy): Started srv02
+ * Clone Set: clnDummy1 [prmDummy1]:
+ * Started: [ srv02 srv03 ]
+ * Stopped: [ srv01 ]
+ * Clone Set: clnPingd [prmPingd]:
+ * Started: [ srv01 srv02 srv03 ]
+
+Transition Summary:
+ * Move main_rsc ( srv01 -> srv03 )
+ * Stop prmPingd:0 ( srv01 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: main_rsc stop on srv01
+ * Pseudo action: clnPingd_stop_0
+ * Resource action: main_rsc start on srv03
+ * Resource action: prmPingd:0 stop on srv01
+ * Pseudo action: clnPingd_stopped_0
+ * Resource action: main_rsc monitor=10000 on srv03
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ srv01 srv02 srv03 ]
+
+ * Full List of Resources:
+ * main_rsc (ocf:pacemaker:Dummy): Started srv03
+ * main_rsc2 (ocf:pacemaker:Dummy): Started srv02
+ * Clone Set: clnDummy1 [prmDummy1]:
+ * Started: [ srv02 srv03 ]
+ * Stopped: [ srv01 ]
+ * Clone Set: clnPingd [prmPingd]:
+ * Started: [ srv02 srv03 ]
+ * Stopped: [ srv01 ]
diff --git a/cts/scheduler/summary/bug-lf-2581.summary b/cts/scheduler/summary/bug-lf-2581.summary
new file mode 100644
index 0000000..dbcf545
--- /dev/null
+++ b/cts/scheduler/summary/bug-lf-2581.summary
@@ -0,0 +1,59 @@
+Current cluster status:
+ * Node List:
+ * Online: [ elvis queen ]
+
+ * Full List of Resources:
+ * Clone Set: AZ-clone [AZ-group]:
+ * Started: [ elvis ]
+ * Stopped: [ queen ]
+ * Resource Group: BC-group-1:
+ * B-1 (ocf:rgk:typeB): Started elvis
+ * C-1 (ocf:rgk:typeC): Started elvis
+ * Resource Group: BC-group-2:
+ * B-2 (ocf:rgk:typeB): Started elvis
+ * C-2 (ocf:rgk:typeC): Started elvis
+ * Clone Set: stonith-l2network-set [stonith-l2network]:
+ * Started: [ elvis ]
+ * Stopped: [ queen ]
+
+Transition Summary:
+ * Start A:1 ( queen )
+ * Start Z:1 ( queen )
+ * Start stonith-l2network:1 ( queen )
+
+Executing Cluster Transition:
+ * Resource action: A:1 monitor on queen
+ * Resource action: Z:1 monitor on queen
+ * Pseudo action: AZ-clone_start_0
+ * Resource action: B-1 monitor on queen
+ * Resource action: C-1 monitor on queen
+ * Resource action: B-2 monitor on queen
+ * Resource action: C-2 monitor on queen
+ * Resource action: stonith-l2network:1 monitor on queen
+ * Pseudo action: stonith-l2network-set_start_0
+ * Pseudo action: AZ-group:1_start_0
+ * Resource action: A:1 start on queen
+ * Resource action: Z:1 start on queen
+ * Resource action: stonith-l2network:1 start on queen
+ * Pseudo action: stonith-l2network-set_running_0
+ * Pseudo action: AZ-group:1_running_0
+ * Resource action: A:1 monitor=120000 on queen
+ * Resource action: Z:1 monitor=120000 on queen
+ * Pseudo action: AZ-clone_running_0
+ * Resource action: stonith-l2network:1 monitor=300000 on queen
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ elvis queen ]
+
+ * Full List of Resources:
+ * Clone Set: AZ-clone [AZ-group]:
+ * Started: [ elvis queen ]
+ * Resource Group: BC-group-1:
+ * B-1 (ocf:rgk:typeB): Started elvis
+ * C-1 (ocf:rgk:typeC): Started elvis
+ * Resource Group: BC-group-2:
+ * B-2 (ocf:rgk:typeB): Started elvis
+ * C-2 (ocf:rgk:typeC): Started elvis
+ * Clone Set: stonith-l2network-set [stonith-l2network]:
+ * Started: [ elvis queen ]
diff --git a/cts/scheduler/summary/bug-lf-2606.summary b/cts/scheduler/summary/bug-lf-2606.summary
new file mode 100644
index 0000000..e0b7ebf
--- /dev/null
+++ b/cts/scheduler/summary/bug-lf-2606.summary
@@ -0,0 +1,46 @@
+1 of 5 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Node node2: UNCLEAN (online)
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): FAILED node2 (disabled)
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+ * Clone Set: ms3 [rsc3] (promotable):
+ * Promoted: [ node2 ]
+ * Unpromoted: [ node1 ]
+
+Transition Summary:
+ * Fence (reboot) node2 'rsc1 failed there'
+ * Stop rsc1 ( node2 ) due to node availability
+ * Move rsc2 ( node2 -> node1 )
+ * Stop rsc3:1 ( Promoted node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: ms3_demote_0
+ * Fencing node2 (reboot)
+ * Pseudo action: rsc1_stop_0
+ * Pseudo action: rsc2_stop_0
+ * Pseudo action: rsc3:1_demote_0
+ * Pseudo action: ms3_demoted_0
+ * Pseudo action: ms3_stop_0
+ * Resource action: rsc2 start on node1
+ * Pseudo action: rsc3:1_stop_0
+ * Pseudo action: ms3_stopped_0
+ * Resource action: rsc2 monitor=10000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 ]
+ * OFFLINE: [ node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped (disabled)
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+ * Clone Set: ms3 [rsc3] (promotable):
+ * Unpromoted: [ node1 ]
+ * Stopped: [ node2 ]
diff --git a/cts/scheduler/summary/bug-lf-2619.summary b/cts/scheduler/summary/bug-lf-2619.summary
new file mode 100644
index 0000000..5eeb72e
--- /dev/null
+++ b/cts/scheduler/summary/bug-lf-2619.summary
@@ -0,0 +1,100 @@
+Current cluster status:
+ * Node List:
+ * Online: [ act1 act2 act3 sby1 sby2 ]
+
+ * Full List of Resources:
+ * Resource Group: grpPostgreSQLDB1:
+ * prmExPostgreSQLDB1 (ocf:pacemaker:Dummy): Started act1
+ * prmFsPostgreSQLDB1-1 (ocf:pacemaker:Dummy): Started act1
+ * prmFsPostgreSQLDB1-2 (ocf:pacemaker:Dummy): Started act1
+ * prmFsPostgreSQLDB1-3 (ocf:pacemaker:Dummy): Started act1
+ * prmIpPostgreSQLDB1 (ocf:pacemaker:Dummy): Started act1
+ * prmApPostgreSQLDB1 (ocf:pacemaker:Dummy): Started act1
+ * Resource Group: grpPostgreSQLDB2:
+ * prmExPostgreSQLDB2 (ocf:pacemaker:Dummy): Started act2
+ * prmFsPostgreSQLDB2-1 (ocf:pacemaker:Dummy): Started act2
+ * prmFsPostgreSQLDB2-2 (ocf:pacemaker:Dummy): Started act2
+ * prmFsPostgreSQLDB2-3 (ocf:pacemaker:Dummy): Started act2
+ * prmIpPostgreSQLDB2 (ocf:pacemaker:Dummy): Started act2
+ * prmApPostgreSQLDB2 (ocf:pacemaker:Dummy): Started act2
+ * Resource Group: grpPostgreSQLDB3:
+ * prmExPostgreSQLDB3 (ocf:pacemaker:Dummy): Started act3
+ * prmFsPostgreSQLDB3-1 (ocf:pacemaker:Dummy): Started act3
+ * prmFsPostgreSQLDB3-2 (ocf:pacemaker:Dummy): Started act3
+ * prmFsPostgreSQLDB3-3 (ocf:pacemaker:Dummy): Started act3
+ * prmIpPostgreSQLDB3 (ocf:pacemaker:Dummy): Started act3
+ * prmApPostgreSQLDB3 (ocf:pacemaker:Dummy): Started act3
+ * Clone Set: clnPingd [prmPingd]:
+ * prmPingd (ocf:pacemaker:ping): FAILED act1
+ * Started: [ act2 act3 sby1 sby2 ]
+
+Transition Summary:
+ * Move prmExPostgreSQLDB1 ( act1 -> sby1 )
+ * Move prmFsPostgreSQLDB1-1 ( act1 -> sby1 )
+ * Move prmFsPostgreSQLDB1-2 ( act1 -> sby1 )
+ * Move prmFsPostgreSQLDB1-3 ( act1 -> sby1 )
+ * Move prmIpPostgreSQLDB1 ( act1 -> sby1 )
+ * Move prmApPostgreSQLDB1 ( act1 -> sby1 )
+ * Stop prmPingd:0 ( act1 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: grpPostgreSQLDB1_stop_0
+ * Resource action: prmApPostgreSQLDB1 stop on act1
+ * Pseudo action: load_stopped_sby2
+ * Pseudo action: load_stopped_sby1
+ * Pseudo action: load_stopped_act3
+ * Pseudo action: load_stopped_act2
+ * Resource action: prmIpPostgreSQLDB1 stop on act1
+ * Resource action: prmFsPostgreSQLDB1-3 stop on act1
+ * Resource action: prmFsPostgreSQLDB1-2 stop on act1
+ * Resource action: prmFsPostgreSQLDB1-1 stop on act1
+ * Resource action: prmExPostgreSQLDB1 stop on act1
+ * Pseudo action: load_stopped_act1
+ * Pseudo action: grpPostgreSQLDB1_stopped_0
+ * Pseudo action: grpPostgreSQLDB1_start_0
+ * Resource action: prmExPostgreSQLDB1 start on sby1
+ * Resource action: prmFsPostgreSQLDB1-1 start on sby1
+ * Resource action: prmFsPostgreSQLDB1-2 start on sby1
+ * Resource action: prmFsPostgreSQLDB1-3 start on sby1
+ * Resource action: prmIpPostgreSQLDB1 start on sby1
+ * Resource action: prmApPostgreSQLDB1 start on sby1
+ * Pseudo action: clnPingd_stop_0
+ * Pseudo action: grpPostgreSQLDB1_running_0
+ * Resource action: prmExPostgreSQLDB1 monitor=5000 on sby1
+ * Resource action: prmFsPostgreSQLDB1-1 monitor=5000 on sby1
+ * Resource action: prmFsPostgreSQLDB1-2 monitor=5000 on sby1
+ * Resource action: prmFsPostgreSQLDB1-3 monitor=5000 on sby1
+ * Resource action: prmIpPostgreSQLDB1 monitor=5000 on sby1
+ * Resource action: prmApPostgreSQLDB1 monitor=5000 on sby1
+ * Resource action: prmPingd:0 stop on act1
+ * Pseudo action: clnPingd_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ act1 act2 act3 sby1 sby2 ]
+
+ * Full List of Resources:
+ * Resource Group: grpPostgreSQLDB1:
+ * prmExPostgreSQLDB1 (ocf:pacemaker:Dummy): Started sby1
+ * prmFsPostgreSQLDB1-1 (ocf:pacemaker:Dummy): Started sby1
+ * prmFsPostgreSQLDB1-2 (ocf:pacemaker:Dummy): Started sby1
+ * prmFsPostgreSQLDB1-3 (ocf:pacemaker:Dummy): Started sby1
+ * prmIpPostgreSQLDB1 (ocf:pacemaker:Dummy): Started sby1
+ * prmApPostgreSQLDB1 (ocf:pacemaker:Dummy): Started sby1
+ * Resource Group: grpPostgreSQLDB2:
+ * prmExPostgreSQLDB2 (ocf:pacemaker:Dummy): Started act2
+ * prmFsPostgreSQLDB2-1 (ocf:pacemaker:Dummy): Started act2
+ * prmFsPostgreSQLDB2-2 (ocf:pacemaker:Dummy): Started act2
+ * prmFsPostgreSQLDB2-3 (ocf:pacemaker:Dummy): Started act2
+ * prmIpPostgreSQLDB2 (ocf:pacemaker:Dummy): Started act2
+ * prmApPostgreSQLDB2 (ocf:pacemaker:Dummy): Started act2
+ * Resource Group: grpPostgreSQLDB3:
+ * prmExPostgreSQLDB3 (ocf:pacemaker:Dummy): Started act3
+ * prmFsPostgreSQLDB3-1 (ocf:pacemaker:Dummy): Started act3
+ * prmFsPostgreSQLDB3-2 (ocf:pacemaker:Dummy): Started act3
+ * prmFsPostgreSQLDB3-3 (ocf:pacemaker:Dummy): Started act3
+ * prmIpPostgreSQLDB3 (ocf:pacemaker:Dummy): Started act3
+ * prmApPostgreSQLDB3 (ocf:pacemaker:Dummy): Started act3
+ * Clone Set: clnPingd [prmPingd]:
+ * Started: [ act2 act3 sby1 sby2 ]
+ * Stopped: [ act1 ]
diff --git a/cts/scheduler/summary/bug-n-385265-2.summary b/cts/scheduler/summary/bug-n-385265-2.summary
new file mode 100644
index 0000000..8fe5130
--- /dev/null
+++ b/cts/scheduler/summary/bug-n-385265-2.summary
@@ -0,0 +1,33 @@
+Current cluster status:
+ * Node List:
+ * Online: [ ih01 ih02 ]
+
+ * Full List of Resources:
+ * Resource Group: group_common:
+ * resource_ip_common (ocf:heartbeat:IPaddr2): FAILED ih02
+ * resource_idvscommon (ocf:dfs:idvs): Started ih02
+
+Transition Summary:
+ * Recover resource_ip_common ( ih02 -> ih01 )
+ * Move resource_idvscommon ( ih02 -> ih01 )
+
+Executing Cluster Transition:
+ * Pseudo action: group_common_stop_0
+ * Resource action: resource_idvscommon stop on ih02
+ * Resource action: resource_ip_common stop on ih02
+ * Pseudo action: group_common_stopped_0
+ * Pseudo action: group_common_start_0
+ * Resource action: resource_ip_common start on ih01
+ * Resource action: resource_idvscommon start on ih01
+ * Pseudo action: group_common_running_0
+ * Resource action: resource_ip_common monitor=30000 on ih01
+ * Resource action: resource_idvscommon monitor=30000 on ih01
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ ih01 ih02 ]
+
+ * Full List of Resources:
+ * Resource Group: group_common:
+ * resource_ip_common (ocf:heartbeat:IPaddr2): Started ih01
+ * resource_idvscommon (ocf:dfs:idvs): Started ih01
diff --git a/cts/scheduler/summary/bug-n-385265.summary b/cts/scheduler/summary/bug-n-385265.summary
new file mode 100644
index 0000000..56b3924
--- /dev/null
+++ b/cts/scheduler/summary/bug-n-385265.summary
@@ -0,0 +1,25 @@
+Current cluster status:
+ * Node List:
+ * Online: [ ih01 ih02 ]
+
+ * Full List of Resources:
+ * Resource Group: group_common:
+ * resource_ip_common (ocf:heartbeat:IPaddr2): Started ih02
+ * resource_idvscommon (ocf:dfs:idvs): FAILED ih02
+
+Transition Summary:
+ * Stop resource_idvscommon ( ih02 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: group_common_stop_0
+ * Resource action: resource_idvscommon stop on ih02
+ * Pseudo action: group_common_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ ih01 ih02 ]
+
+ * Full List of Resources:
+ * Resource Group: group_common:
+ * resource_ip_common (ocf:heartbeat:IPaddr2): Started ih02
+ * resource_idvscommon (ocf:dfs:idvs): Stopped
diff --git a/cts/scheduler/summary/bug-n-387749.summary b/cts/scheduler/summary/bug-n-387749.summary
new file mode 100644
index 0000000..17275a1
--- /dev/null
+++ b/cts/scheduler/summary/bug-n-387749.summary
@@ -0,0 +1,59 @@
+Current cluster status:
+ * Node List:
+ * Online: [ power720-1 power720-2 ]
+ * OFFLINE: [ power720-4 ]
+
+ * Full List of Resources:
+ * Clone Set: export_home_ocfs2_clone_set [export_home_ocfs2] (unique):
+ * export_home_ocfs2:0 (ocf:heartbeat:Filesystem): Stopped
+ * export_home_ocfs2:1 (ocf:heartbeat:Filesystem): Started power720-2
+ * export_home_ocfs2:2 (ocf:heartbeat:Filesystem): Stopped
+ * Resource Group: group_nfs:
+ * resource_ipaddr1_single (ocf:heartbeat:IPaddr): Started power720-2
+ * resource_nfsserver_single (lsb:nfsserver): Started power720-2
+
+Transition Summary:
+ * Start export_home_ocfs2:0 ( power720-1 )
+ * Move resource_ipaddr1_single ( power720-2 -> power720-1 )
+ * Move resource_nfsserver_single ( power720-2 -> power720-1 )
+
+Executing Cluster Transition:
+ * Resource action: export_home_ocfs2:0 monitor on power720-1
+ * Resource action: export_home_ocfs2:1 monitor on power720-1
+ * Resource action: export_home_ocfs2:2 monitor on power720-1
+ * Pseudo action: export_home_ocfs2_clone_set_pre_notify_start_0
+ * Pseudo action: group_nfs_stop_0
+ * Resource action: resource_ipaddr1_single monitor on power720-1
+ * Resource action: resource_nfsserver_single stop on power720-2
+ * Resource action: resource_nfsserver_single monitor on power720-1
+ * Resource action: export_home_ocfs2:1 notify on power720-2
+ * Pseudo action: export_home_ocfs2_clone_set_confirmed-pre_notify_start_0
+ * Pseudo action: export_home_ocfs2_clone_set_start_0
+ * Resource action: resource_ipaddr1_single stop on power720-2
+ * Resource action: export_home_ocfs2:0 start on power720-1
+ * Pseudo action: export_home_ocfs2_clone_set_running_0
+ * Pseudo action: group_nfs_stopped_0
+ * Pseudo action: export_home_ocfs2_clone_set_post_notify_running_0
+ * Resource action: export_home_ocfs2:0 notify on power720-1
+ * Resource action: export_home_ocfs2:1 notify on power720-2
+ * Pseudo action: export_home_ocfs2_clone_set_confirmed-post_notify_running_0
+ * Pseudo action: group_nfs_start_0
+ * Resource action: resource_ipaddr1_single start on power720-1
+ * Resource action: resource_nfsserver_single start on power720-1
+ * Pseudo action: group_nfs_running_0
+ * Resource action: resource_ipaddr1_single monitor=5000 on power720-1
+ * Resource action: resource_nfsserver_single monitor=15000 on power720-1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ power720-1 power720-2 ]
+ * OFFLINE: [ power720-4 ]
+
+ * Full List of Resources:
+ * Clone Set: export_home_ocfs2_clone_set [export_home_ocfs2] (unique):
+ * export_home_ocfs2:0 (ocf:heartbeat:Filesystem): Started power720-1
+ * export_home_ocfs2:1 (ocf:heartbeat:Filesystem): Started power720-2
+ * export_home_ocfs2:2 (ocf:heartbeat:Filesystem): Stopped
+ * Resource Group: group_nfs:
+ * resource_ipaddr1_single (ocf:heartbeat:IPaddr): Started power720-1
+ * resource_nfsserver_single (lsb:nfsserver): Started power720-1
diff --git a/cts/scheduler/summary/bug-pm-11.summary b/cts/scheduler/summary/bug-pm-11.summary
new file mode 100644
index 0000000..c3f8f5b
--- /dev/null
+++ b/cts/scheduler/summary/bug-pm-11.summary
@@ -0,0 +1,48 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node-a node-b ]
+
+ * Full List of Resources:
+ * Clone Set: ms-sf [group] (promotable, unique):
+ * Resource Group: group:0:
+ * stateful-1:0 (ocf:heartbeat:Stateful): Unpromoted node-b
+ * stateful-2:0 (ocf:heartbeat:Stateful): Stopped
+ * Resource Group: group:1:
+ * stateful-1:1 (ocf:heartbeat:Stateful): Promoted node-a
+ * stateful-2:1 (ocf:heartbeat:Stateful): Stopped
+
+Transition Summary:
+ * Start stateful-2:0 ( node-b )
+ * Promote stateful-2:1 ( Stopped -> Promoted node-a )
+
+Executing Cluster Transition:
+ * Resource action: stateful-2:0 monitor on node-b
+ * Resource action: stateful-2:0 monitor on node-a
+ * Resource action: stateful-2:1 monitor on node-b
+ * Resource action: stateful-2:1 monitor on node-a
+ * Pseudo action: ms-sf_start_0
+ * Pseudo action: group:0_start_0
+ * Resource action: stateful-2:0 start on node-b
+ * Pseudo action: group:1_start_0
+ * Resource action: stateful-2:1 start on node-a
+ * Pseudo action: group:0_running_0
+ * Pseudo action: group:1_running_0
+ * Pseudo action: ms-sf_running_0
+ * Pseudo action: ms-sf_promote_0
+ * Pseudo action: group:1_promote_0
+ * Resource action: stateful-2:1 promote on node-a
+ * Pseudo action: group:1_promoted_0
+ * Pseudo action: ms-sf_promoted_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node-a node-b ]
+
+ * Full List of Resources:
+ * Clone Set: ms-sf [group] (promotable, unique):
+ * Resource Group: group:0:
+ * stateful-1:0 (ocf:heartbeat:Stateful): Unpromoted node-b
+ * stateful-2:0 (ocf:heartbeat:Stateful): Unpromoted node-b
+ * Resource Group: group:1:
+ * stateful-1:1 (ocf:heartbeat:Stateful): Promoted node-a
+ * stateful-2:1 (ocf:heartbeat:Stateful): Promoted node-a
diff --git a/cts/scheduler/summary/bug-pm-12.summary b/cts/scheduler/summary/bug-pm-12.summary
new file mode 100644
index 0000000..8defffe
--- /dev/null
+++ b/cts/scheduler/summary/bug-pm-12.summary
@@ -0,0 +1,57 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node-a node-b ]
+
+ * Full List of Resources:
+ * Clone Set: ms-sf [group] (promotable, unique):
+ * Resource Group: group:0:
+ * stateful-1:0 (ocf:heartbeat:Stateful): Unpromoted node-b
+ * stateful-2:0 (ocf:heartbeat:Stateful): Unpromoted node-b
+ * Resource Group: group:1:
+ * stateful-1:1 (ocf:heartbeat:Stateful): Promoted node-a
+ * stateful-2:1 (ocf:heartbeat:Stateful): Promoted node-a
+
+Transition Summary:
+ * Restart stateful-2:0 ( Unpromoted node-b ) due to resource definition change
+ * Restart stateful-2:1 ( Promoted node-a ) due to resource definition change
+
+Executing Cluster Transition:
+ * Pseudo action: ms-sf_demote_0
+ * Pseudo action: group:1_demote_0
+ * Resource action: stateful-2:1 demote on node-a
+ * Pseudo action: group:1_demoted_0
+ * Pseudo action: ms-sf_demoted_0
+ * Pseudo action: ms-sf_stop_0
+ * Pseudo action: group:0_stop_0
+ * Resource action: stateful-2:0 stop on node-b
+ * Pseudo action: group:1_stop_0
+ * Resource action: stateful-2:1 stop on node-a
+ * Pseudo action: group:0_stopped_0
+ * Pseudo action: group:1_stopped_0
+ * Pseudo action: ms-sf_stopped_0
+ * Pseudo action: ms-sf_start_0
+ * Pseudo action: group:0_start_0
+ * Resource action: stateful-2:0 start on node-b
+ * Pseudo action: group:1_start_0
+ * Resource action: stateful-2:1 start on node-a
+ * Pseudo action: group:0_running_0
+ * Pseudo action: group:1_running_0
+ * Pseudo action: ms-sf_running_0
+ * Pseudo action: ms-sf_promote_0
+ * Pseudo action: group:1_promote_0
+ * Resource action: stateful-2:1 promote on node-a
+ * Pseudo action: group:1_promoted_0
+ * Pseudo action: ms-sf_promoted_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node-a node-b ]
+
+ * Full List of Resources:
+ * Clone Set: ms-sf [group] (promotable, unique):
+ * Resource Group: group:0:
+ * stateful-1:0 (ocf:heartbeat:Stateful): Unpromoted node-b
+ * stateful-2:0 (ocf:heartbeat:Stateful): Unpromoted node-b
+ * Resource Group: group:1:
+ * stateful-1:1 (ocf:heartbeat:Stateful): Promoted node-a
+ * stateful-2:1 (ocf:heartbeat:Stateful): Promoted node-a
diff --git a/cts/scheduler/summary/bug-rh-1097457.summary b/cts/scheduler/summary/bug-rh-1097457.summary
new file mode 100644
index 0000000..f68a509
--- /dev/null
+++ b/cts/scheduler/summary/bug-rh-1097457.summary
@@ -0,0 +1,126 @@
+2 of 26 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ lama2 lama3 ]
+ * GuestOnline: [ lamaVM1 lamaVM2 lamaVM3 ]
+
+ * Full List of Resources:
+ * restofencelama2 (stonith:fence_ipmilan): Started lama3
+ * restofencelama3 (stonith:fence_ipmilan): Started lama2
+ * VM1 (ocf:heartbeat:VirtualDomain): Started lama2
+ * FSlun1 (ocf:heartbeat:Filesystem): Started lamaVM1
+ * FSlun2 (ocf:heartbeat:Filesystem): Started lamaVM1
+ * VM2 (ocf:heartbeat:VirtualDomain): FAILED lama3
+ * VM3 (ocf:heartbeat:VirtualDomain): Started lama3
+ * FSlun3 (ocf:heartbeat:Filesystem): FAILED lamaVM2
+ * FSlun4 (ocf:heartbeat:Filesystem): Started lamaVM3
+ * FAKE5-IP (ocf:heartbeat:IPaddr2): Stopped (disabled)
+ * FAKE6-IP (ocf:heartbeat:IPaddr2): Stopped (disabled)
+ * FAKE5 (ocf:heartbeat:Dummy): Started lamaVM3
+ * Resource Group: lamaVM1-G1:
+ * FAKE1 (ocf:heartbeat:Dummy): Started lamaVM1
+ * FAKE1-IP (ocf:heartbeat:IPaddr2): Started lamaVM1
+ * Resource Group: lamaVM1-G2:
+ * FAKE2 (ocf:heartbeat:Dummy): Started lamaVM1
+ * FAKE2-IP (ocf:heartbeat:IPaddr2): Started lamaVM1
+ * Resource Group: lamaVM1-G3:
+ * FAKE3 (ocf:heartbeat:Dummy): Started lamaVM1
+ * FAKE3-IP (ocf:heartbeat:IPaddr2): Started lamaVM1
+ * Resource Group: lamaVM2-G4:
+ * FAKE4 (ocf:heartbeat:Dummy): Started lamaVM2
+ * FAKE4-IP (ocf:heartbeat:IPaddr2): Started lamaVM2
+ * Clone Set: FAKE6-clone [FAKE6]:
+ * Started: [ lamaVM1 lamaVM2 lamaVM3 ]
+
+Transition Summary:
+ * Fence (reboot) lamaVM2 (resource: VM2) 'guest is unclean'
+ * Recover VM2 ( lama3 )
+ * Recover FSlun3 ( lamaVM2 -> lama2 )
+ * Restart FAKE4 ( lamaVM2 ) due to required VM2 start
+ * Restart FAKE4-IP ( lamaVM2 ) due to required VM2 start
+ * Restart FAKE6:2 ( lamaVM2 ) due to required VM2 start
+ * Restart lamaVM2 ( lama3 ) due to required VM2 start
+
+Executing Cluster Transition:
+ * Resource action: FSlun1 monitor on lamaVM3
+ * Resource action: FSlun2 monitor on lamaVM3
+ * Resource action: FSlun3 monitor on lamaVM3
+ * Resource action: FSlun3 monitor on lamaVM1
+ * Resource action: FSlun4 monitor on lamaVM1
+ * Resource action: FAKE5-IP monitor on lamaVM3
+ * Resource action: FAKE5-IP monitor on lamaVM1
+ * Resource action: FAKE6-IP monitor on lamaVM3
+ * Resource action: FAKE6-IP monitor on lamaVM1
+ * Resource action: FAKE5 monitor on lamaVM1
+ * Resource action: FAKE1 monitor on lamaVM3
+ * Resource action: FAKE1-IP monitor on lamaVM3
+ * Resource action: FAKE2 monitor on lamaVM3
+ * Resource action: FAKE2-IP monitor on lamaVM3
+ * Resource action: FAKE3 monitor on lamaVM3
+ * Resource action: FAKE3-IP monitor on lamaVM3
+ * Resource action: FAKE4 monitor on lamaVM3
+ * Resource action: FAKE4 monitor on lamaVM1
+ * Resource action: FAKE4-IP monitor on lamaVM3
+ * Resource action: FAKE4-IP monitor on lamaVM1
+ * Resource action: lamaVM2 stop on lama3
+ * Resource action: VM2 stop on lama3
+ * Pseudo action: stonith-lamaVM2-reboot on lamaVM2
+ * Resource action: VM2 start on lama3
+ * Resource action: VM2 monitor=10000 on lama3
+ * Pseudo action: lamaVM2-G4_stop_0
+ * Pseudo action: FAKE4-IP_stop_0
+ * Pseudo action: FAKE6-clone_stop_0
+ * Resource action: lamaVM2 start on lama3
+ * Resource action: lamaVM2 monitor=30000 on lama3
+ * Resource action: FSlun3 monitor=10000 on lamaVM2
+ * Pseudo action: FAKE4_stop_0
+ * Pseudo action: FAKE6_stop_0
+ * Pseudo action: FAKE6-clone_stopped_0
+ * Pseudo action: FAKE6-clone_start_0
+ * Pseudo action: lamaVM2-G4_stopped_0
+ * Resource action: FAKE6 start on lamaVM2
+ * Resource action: FAKE6 monitor=30000 on lamaVM2
+ * Pseudo action: FAKE6-clone_running_0
+ * Pseudo action: FSlun3_stop_0
+ * Resource action: FSlun3 start on lama2
+ * Pseudo action: lamaVM2-G4_start_0
+ * Resource action: FAKE4 start on lamaVM2
+ * Resource action: FAKE4 monitor=30000 on lamaVM2
+ * Resource action: FAKE4-IP start on lamaVM2
+ * Resource action: FAKE4-IP monitor=30000 on lamaVM2
+ * Resource action: FSlun3 monitor=10000 on lama2
+ * Pseudo action: lamaVM2-G4_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ lama2 lama3 ]
+ * GuestOnline: [ lamaVM1 lamaVM2 lamaVM3 ]
+
+ * Full List of Resources:
+ * restofencelama2 (stonith:fence_ipmilan): Started lama3
+ * restofencelama3 (stonith:fence_ipmilan): Started lama2
+ * VM1 (ocf:heartbeat:VirtualDomain): Started lama2
+ * FSlun1 (ocf:heartbeat:Filesystem): Started lamaVM1
+ * FSlun2 (ocf:heartbeat:Filesystem): Started lamaVM1
+ * VM2 (ocf:heartbeat:VirtualDomain): FAILED lama3
+ * VM3 (ocf:heartbeat:VirtualDomain): Started lama3
+ * FSlun3 (ocf:heartbeat:Filesystem): FAILED [ lama2 lamaVM2 ]
+ * FSlun4 (ocf:heartbeat:Filesystem): Started lamaVM3
+ * FAKE5-IP (ocf:heartbeat:IPaddr2): Stopped (disabled)
+ * FAKE6-IP (ocf:heartbeat:IPaddr2): Stopped (disabled)
+ * FAKE5 (ocf:heartbeat:Dummy): Started lamaVM3
+ * Resource Group: lamaVM1-G1:
+ * FAKE1 (ocf:heartbeat:Dummy): Started lamaVM1
+ * FAKE1-IP (ocf:heartbeat:IPaddr2): Started lamaVM1
+ * Resource Group: lamaVM1-G2:
+ * FAKE2 (ocf:heartbeat:Dummy): Started lamaVM1
+ * FAKE2-IP (ocf:heartbeat:IPaddr2): Started lamaVM1
+ * Resource Group: lamaVM1-G3:
+ * FAKE3 (ocf:heartbeat:Dummy): Started lamaVM1
+ * FAKE3-IP (ocf:heartbeat:IPaddr2): Started lamaVM1
+ * Resource Group: lamaVM2-G4:
+ * FAKE4 (ocf:heartbeat:Dummy): Started lamaVM2
+ * FAKE4-IP (ocf:heartbeat:IPaddr2): Started lamaVM2
+ * Clone Set: FAKE6-clone [FAKE6]:
+ * Started: [ lamaVM1 lamaVM2 lamaVM3 ]
diff --git a/cts/scheduler/summary/bug-rh-880249.summary b/cts/scheduler/summary/bug-rh-880249.summary
new file mode 100644
index 0000000..4cf3fe8
--- /dev/null
+++ b/cts/scheduler/summary/bug-rh-880249.summary
@@ -0,0 +1,29 @@
+Current cluster status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+
+ * Full List of Resources:
+ * shoot1 (stonith:fence_xvm): Started 18node1
+ * shoot2 (stonith:fence_xvm): Started 18node2
+ * dummystateful (ocf:pacemaker:Stateful): Promoted [ 18node2 18node1 18node3 ]
+
+Transition Summary:
+ * Move dummystateful ( Promoted 18node2 -> Started 18node3 )
+
+Executing Cluster Transition:
+ * Resource action: dummystateful demote on 18node3
+ * Resource action: dummystateful demote on 18node1
+ * Resource action: dummystateful demote on 18node2
+ * Resource action: dummystateful stop on 18node3
+ * Resource action: dummystateful stop on 18node1
+ * Resource action: dummystateful stop on 18node2
+ * Resource action: dummystateful start on 18node3
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+
+ * Full List of Resources:
+ * shoot1 (stonith:fence_xvm): Started 18node1
+ * shoot2 (stonith:fence_xvm): Started 18node2
+ * dummystateful (ocf:pacemaker:Stateful): Started 18node3
diff --git a/cts/scheduler/summary/bug-suse-707150.summary b/cts/scheduler/summary/bug-suse-707150.summary
new file mode 100644
index 0000000..37e9f5b
--- /dev/null
+++ b/cts/scheduler/summary/bug-suse-707150.summary
@@ -0,0 +1,75 @@
+5 of 28 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ hex-0 hex-9 ]
+ * OFFLINE: [ hex-7 hex-8 ]
+
+ * Full List of Resources:
+ * vm-00 (ocf:heartbeat:Xen): Stopped (disabled)
+ * Clone Set: base-clone [base-group]:
+ * Resource Group: base-group:0:
+ * dlm (ocf:pacemaker:controld): Started hex-0
+ * o2cb (ocf:ocfs2:o2cb): Stopped
+ * clvm (ocf:lvm2:clvmd): Stopped
+ * cmirrord (ocf:lvm2:cmirrord): Stopped
+ * vg1 (ocf:heartbeat:LVM): Stopped (disabled)
+ * ocfs2-1 (ocf:heartbeat:Filesystem): Stopped
+ * Stopped: [ hex-7 hex-8 hex-9 ]
+ * vm-01 (ocf:heartbeat:Xen): Stopped
+ * fencing-sbd (stonith:external/sbd): Started hex-9
+ * dummy1 (ocf:heartbeat:Dummy): Started hex-0
+
+Transition Summary:
+ * Start o2cb:0 ( hex-0 )
+ * Start clvm:0 ( hex-0 )
+ * Start cmirrord:0 ( hex-0 )
+ * Start dlm:1 ( hex-9 )
+ * Start o2cb:1 ( hex-9 )
+ * Start clvm:1 ( hex-9 )
+ * Start cmirrord:1 ( hex-9 )
+ * Start vm-01 ( hex-9 ) due to unrunnable base-clone running (blocked)
+
+Executing Cluster Transition:
+ * Resource action: vg1:1 monitor on hex-9
+ * Pseudo action: base-clone_start_0
+ * Pseudo action: load_stopped_hex-9
+ * Pseudo action: load_stopped_hex-8
+ * Pseudo action: load_stopped_hex-7
+ * Pseudo action: load_stopped_hex-0
+ * Pseudo action: base-group:0_start_0
+ * Resource action: o2cb:0 start on hex-0
+ * Resource action: clvm:0 start on hex-0
+ * Resource action: cmirrord:0 start on hex-0
+ * Pseudo action: base-group:1_start_0
+ * Resource action: dlm:1 start on hex-9
+ * Resource action: o2cb:1 start on hex-9
+ * Resource action: clvm:1 start on hex-9
+ * Resource action: cmirrord:1 start on hex-9
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hex-0 hex-9 ]
+ * OFFLINE: [ hex-7 hex-8 ]
+
+ * Full List of Resources:
+ * vm-00 (ocf:heartbeat:Xen): Stopped (disabled)
+ * Clone Set: base-clone [base-group]:
+ * Resource Group: base-group:0:
+ * dlm (ocf:pacemaker:controld): Started hex-0
+ * o2cb (ocf:ocfs2:o2cb): Started hex-0
+ * clvm (ocf:lvm2:clvmd): Started hex-0
+ * cmirrord (ocf:lvm2:cmirrord): Started hex-0
+ * vg1 (ocf:heartbeat:LVM): Stopped (disabled)
+ * ocfs2-1 (ocf:heartbeat:Filesystem): Stopped
+ * Resource Group: base-group:1:
+ * dlm (ocf:pacemaker:controld): Started hex-9
+ * o2cb (ocf:ocfs2:o2cb): Started hex-9
+ * clvm (ocf:lvm2:clvmd): Started hex-9
+ * cmirrord (ocf:lvm2:cmirrord): Started hex-9
+ * vg1 (ocf:heartbeat:LVM): Stopped (disabled)
+ * ocfs2-1 (ocf:heartbeat:Filesystem): Stopped
+ * Stopped: [ hex-7 hex-8 ]
+ * vm-01 (ocf:heartbeat:Xen): Stopped
+ * fencing-sbd (stonith:external/sbd): Started hex-9
+ * dummy1 (ocf:heartbeat:Dummy): Started hex-0
diff --git a/cts/scheduler/summary/bundle-connection-with-container.summary b/cts/scheduler/summary/bundle-connection-with-container.summary
new file mode 100644
index 0000000..62e0ec6
--- /dev/null
+++ b/cts/scheduler/summary/bundle-connection-with-container.summary
@@ -0,0 +1,63 @@
+Using the original execution date of: 2022-07-13 22:13:26Z
+Current cluster status:
+ * Node List:
+ * Online: [ rhel8-1 rhel8-3 rhel8-4 rhel8-5 ]
+ * OFFLINE: [ rhel8-2 ]
+ * RemoteOnline: [ remote-rhel8-2 ]
+ * GuestOnline: [ httpd-bundle-1 httpd-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel8-3
+ * FencingPass (stonith:fence_dummy): Started rhel8-4
+ * FencingFail (stonith:fence_dummy): Started rhel8-5
+ * remote-rhel8-2 (ocf:pacemaker:remote): Started rhel8-1
+ * remote-rsc (ocf:pacemaker:Dummy): Started remote-rhel8-2
+ * Container bundle set: httpd-bundle [localhost/pcmktest:http]:
+ * httpd-bundle-0 (192.168.122.131) (ocf:heartbeat:apache): FAILED rhel8-1
+ * httpd-bundle-1 (192.168.122.132) (ocf:heartbeat:apache): Started rhel8-3
+ * httpd-bundle-2 (192.168.122.133) (ocf:heartbeat:apache): Started remote-rhel8-2
+
+Transition Summary:
+ * Fence (reboot) httpd-bundle-0 (resource: httpd-bundle-podman-0) 'guest is unclean'
+ * Recover httpd-bundle-podman-0 ( rhel8-1 )
+ * Recover httpd-bundle-0 ( rhel8-1 )
+ * Recover httpd:0 ( httpd-bundle-0 )
+
+Executing Cluster Transition:
+ * Resource action: httpd-bundle-0 stop on rhel8-1
+ * Pseudo action: httpd-bundle_stop_0
+ * Pseudo action: httpd-bundle_start_0
+ * Resource action: httpd-bundle-podman-0 stop on rhel8-1
+ * Pseudo action: stonith-httpd-bundle-0-reboot on httpd-bundle-0
+ * Pseudo action: httpd-bundle-clone_stop_0
+ * Resource action: httpd-bundle-podman-0 start on rhel8-1
+ * Resource action: httpd-bundle-podman-0 monitor=60000 on rhel8-1
+ * Resource action: httpd-bundle-0 start on rhel8-1
+ * Resource action: httpd-bundle-0 monitor=30000 on rhel8-1
+ * Pseudo action: httpd_stop_0
+ * Pseudo action: httpd-bundle-clone_stopped_0
+ * Pseudo action: httpd-bundle-clone_start_0
+ * Pseudo action: httpd-bundle_stopped_0
+ * Resource action: httpd start on httpd-bundle-0
+ * Pseudo action: httpd-bundle-clone_running_0
+ * Pseudo action: httpd-bundle_running_0
+ * Resource action: httpd monitor=15000 on httpd-bundle-0
+Using the original execution date of: 2022-07-13 22:13:26Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel8-1 rhel8-3 rhel8-4 rhel8-5 ]
+ * OFFLINE: [ rhel8-2 ]
+ * RemoteOnline: [ remote-rhel8-2 ]
+ * GuestOnline: [ httpd-bundle-0 httpd-bundle-1 httpd-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel8-3
+ * FencingPass (stonith:fence_dummy): Started rhel8-4
+ * FencingFail (stonith:fence_dummy): Started rhel8-5
+ * remote-rhel8-2 (ocf:pacemaker:remote): Started rhel8-1
+ * remote-rsc (ocf:pacemaker:Dummy): Started remote-rhel8-2
+ * Container bundle set: httpd-bundle [localhost/pcmktest:http]:
+ * httpd-bundle-0 (192.168.122.131) (ocf:heartbeat:apache): Started rhel8-1
+ * httpd-bundle-1 (192.168.122.132) (ocf:heartbeat:apache): Started rhel8-3
+ * httpd-bundle-2 (192.168.122.133) (ocf:heartbeat:apache): Started remote-rhel8-2
diff --git a/cts/scheduler/summary/bundle-interleave-down.summary b/cts/scheduler/summary/bundle-interleave-down.summary
new file mode 100644
index 0000000..ca99ae0
--- /dev/null
+++ b/cts/scheduler/summary/bundle-interleave-down.summary
@@ -0,0 +1,91 @@
+9 of 19 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 node4 node5 ]
+ * GuestOnline: [ app-bundle-0 app-bundle-1 app-bundle-2 base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * Container bundle set: base-bundle [localhost/pcmktest:base]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Unpromoted node2 (disabled)
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node3 (disabled)
+ * base-bundle-2 (ocf:pacemaker:Stateful): Promoted node4 (disabled)
+ * Container bundle set: app-bundle [localhost/pcmktest:app]:
+ * app-bundle-0 (ocf:pacemaker:Stateful): Unpromoted node2
+ * app-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node3
+ * app-bundle-2 (ocf:pacemaker:Stateful): Promoted node4
+
+Transition Summary:
+ * Stop base-bundle-podman-0 ( node2 ) due to node availability
+ * Stop base-bundle-0 ( node2 ) due to node availability
+ * Stop base:0 ( Unpromoted base-bundle-0 ) due to node availability
+ * Stop base-bundle-podman-1 ( node3 ) due to node availability
+ * Stop base-bundle-1 ( node3 ) due to node availability
+ * Stop base:1 ( Unpromoted base-bundle-1 ) due to node availability
+ * Stop base-bundle-podman-2 ( node4 ) due to node availability
+ * Stop base-bundle-2 ( node4 ) due to node availability
+ * Stop base:2 ( Promoted base-bundle-2 ) due to node availability
+ * Stop app-bundle-podman-0 ( node2 ) due to node availability
+ * Stop app-bundle-0 ( node2 ) due to unrunnable app-bundle-podman-0 start
+ * Stop app:0 ( Unpromoted app-bundle-0 ) due to unrunnable app-bundle-podman-0 start
+ * Stop app-bundle-podman-1 ( node3 ) due to node availability
+ * Stop app-bundle-1 ( node3 ) due to unrunnable app-bundle-podman-1 start
+ * Stop app:1 ( Unpromoted app-bundle-1 ) due to unrunnable app-bundle-podman-1 start
+ * Stop app-bundle-podman-2 ( node4 ) due to node availability
+ * Stop app-bundle-2 ( node4 ) due to unrunnable app-bundle-podman-2 start
+ * Stop app:2 ( Promoted app-bundle-2 ) due to unrunnable app-bundle-podman-2 start
+
+Executing Cluster Transition:
+ * Resource action: app cancel=15000 on app-bundle-2
+ * Pseudo action: app-bundle_demote_0
+ * Pseudo action: app-bundle-clone_demote_0
+ * Resource action: app demote on app-bundle-2
+ * Pseudo action: app-bundle-clone_demoted_0
+ * Pseudo action: app-bundle_demoted_0
+ * Pseudo action: app-bundle_stop_0
+ * Pseudo action: base-bundle_demote_0
+ * Pseudo action: base-bundle-clone_demote_0
+ * Pseudo action: app-bundle-clone_stop_0
+ * Resource action: base demote on base-bundle-2
+ * Pseudo action: base-bundle-clone_demoted_0
+ * Resource action: app stop on app-bundle-2
+ * Resource action: app-bundle-2 stop on node4
+ * Pseudo action: base-bundle_demoted_0
+ * Resource action: app stop on app-bundle-1
+ * Resource action: app-bundle-1 stop on node3
+ * Resource action: app-bundle-podman-2 stop on node4
+ * Resource action: app stop on app-bundle-0
+ * Pseudo action: app-bundle-clone_stopped_0
+ * Resource action: app-bundle-0 stop on node2
+ * Resource action: app-bundle-podman-1 stop on node3
+ * Resource action: app-bundle-podman-0 stop on node2
+ * Pseudo action: app-bundle_stopped_0
+ * Pseudo action: base-bundle_stop_0
+ * Pseudo action: base-bundle-clone_stop_0
+ * Resource action: base stop on base-bundle-2
+ * Resource action: base-bundle-2 stop on node4
+ * Resource action: base stop on base-bundle-1
+ * Resource action: base-bundle-1 stop on node3
+ * Resource action: base-bundle-podman-2 stop on node4
+ * Resource action: base stop on base-bundle-0
+ * Pseudo action: base-bundle-clone_stopped_0
+ * Resource action: base-bundle-0 stop on node2
+ * Resource action: base-bundle-podman-1 stop on node3
+ * Resource action: base-bundle-podman-0 stop on node2
+ * Pseudo action: base-bundle_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 node4 node5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * Container bundle set: base-bundle [localhost/pcmktest:base]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Stopped (disabled)
+ * base-bundle-1 (ocf:pacemaker:Stateful): Stopped (disabled)
+ * base-bundle-2 (ocf:pacemaker:Stateful): Stopped (disabled)
+ * Container bundle set: app-bundle [localhost/pcmktest:app]:
+ * app-bundle-0 (ocf:pacemaker:Stateful): Stopped
+ * app-bundle-1 (ocf:pacemaker:Stateful): Stopped
+ * app-bundle-2 (ocf:pacemaker:Stateful): Stopped
diff --git a/cts/scheduler/summary/bundle-interleave-promote.summary b/cts/scheduler/summary/bundle-interleave-promote.summary
new file mode 100644
index 0000000..8e8725e
--- /dev/null
+++ b/cts/scheduler/summary/bundle-interleave-promote.summary
@@ -0,0 +1,51 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 node4 node5 ]
+ * GuestOnline: [ app-bundle-0 app-bundle-1 app-bundle-2 base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * Container bundle set: base-bundle [localhost/pcmktest:base]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Unpromoted node2
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node3
+ * base-bundle-2 (ocf:pacemaker:Stateful): Unpromoted node4
+ * Container bundle set: app-bundle [localhost/pcmktest:app]:
+ * app-bundle-0 (ocf:pacemaker:Stateful): Unpromoted node2
+ * app-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node3
+ * app-bundle-2 (ocf:pacemaker:Stateful): Unpromoted node4
+
+Transition Summary:
+ * Promote base:2 ( Unpromoted -> Promoted base-bundle-2 )
+ * Promote app:2 ( Unpromoted -> Promoted app-bundle-2 )
+
+Executing Cluster Transition:
+ * Resource action: base cancel=16000 on base-bundle-2
+ * Resource action: app cancel=16000 on app-bundle-2
+ * Pseudo action: base-bundle_promote_0
+ * Pseudo action: base-bundle-clone_promote_0
+ * Resource action: base promote on base-bundle-2
+ * Pseudo action: base-bundle-clone_promoted_0
+ * Pseudo action: base-bundle_promoted_0
+ * Resource action: base monitor=15000 on base-bundle-2
+ * Pseudo action: app-bundle_promote_0
+ * Pseudo action: app-bundle-clone_promote_0
+ * Resource action: app promote on app-bundle-2
+ * Pseudo action: app-bundle-clone_promoted_0
+ * Pseudo action: app-bundle_promoted_0
+ * Resource action: app monitor=15000 on app-bundle-2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 node4 node5 ]
+ * GuestOnline: [ app-bundle-0 app-bundle-1 app-bundle-2 base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * Container bundle set: base-bundle [localhost/pcmktest:base]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Unpromoted node2
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node3
+ * base-bundle-2 (ocf:pacemaker:Stateful): Promoted node4
+ * Container bundle set: app-bundle [localhost/pcmktest:app]:
+ * app-bundle-0 (ocf:pacemaker:Stateful): Unpromoted node2
+ * app-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node3
+ * app-bundle-2 (ocf:pacemaker:Stateful): Promoted node4
diff --git a/cts/scheduler/summary/bundle-interleave-start.summary b/cts/scheduler/summary/bundle-interleave-start.summary
new file mode 100644
index 0000000..1648e92
--- /dev/null
+++ b/cts/scheduler/summary/bundle-interleave-start.summary
@@ -0,0 +1,156 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 node4 node5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * Container bundle set: base-bundle [localhost/pcmktest:base]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Stopped
+ * base-bundle-1 (ocf:pacemaker:Stateful): Stopped
+ * base-bundle-2 (ocf:pacemaker:Stateful): Stopped
+ * Container bundle set: app-bundle [localhost/pcmktest:app]:
+ * app-bundle-0 (ocf:pacemaker:Stateful): Stopped
+ * app-bundle-1 (ocf:pacemaker:Stateful): Stopped
+ * app-bundle-2 (ocf:pacemaker:Stateful): Stopped
+
+Transition Summary:
+ * Start base-bundle-podman-0 ( node2 )
+ * Start base-bundle-0 ( node2 )
+ * Start base:0 ( base-bundle-0 )
+ * Start base-bundle-podman-1 ( node3 )
+ * Start base-bundle-1 ( node3 )
+ * Start base:1 ( base-bundle-1 )
+ * Start base-bundle-podman-2 ( node4 )
+ * Start base-bundle-2 ( node4 )
+ * Start base:2 ( base-bundle-2 )
+ * Start app-bundle-podman-0 ( node2 )
+ * Start app-bundle-0 ( node2 )
+ * Start app:0 ( app-bundle-0 )
+ * Start app-bundle-podman-1 ( node3 )
+ * Start app-bundle-1 ( node3 )
+ * Start app:1 ( app-bundle-1 )
+ * Start app-bundle-podman-2 ( node4 )
+ * Start app-bundle-2 ( node4 )
+ * Start app:2 ( app-bundle-2 )
+
+Executing Cluster Transition:
+ * Resource action: base-bundle-podman-0 monitor on node5
+ * Resource action: base-bundle-podman-0 monitor on node4
+ * Resource action: base-bundle-podman-0 monitor on node3
+ * Resource action: base-bundle-podman-0 monitor on node2
+ * Resource action: base-bundle-podman-0 monitor on node1
+ * Resource action: base-bundle-podman-1 monitor on node5
+ * Resource action: base-bundle-podman-1 monitor on node4
+ * Resource action: base-bundle-podman-1 monitor on node3
+ * Resource action: base-bundle-podman-1 monitor on node2
+ * Resource action: base-bundle-podman-1 monitor on node1
+ * Resource action: base-bundle-podman-2 monitor on node5
+ * Resource action: base-bundle-podman-2 monitor on node4
+ * Resource action: base-bundle-podman-2 monitor on node3
+ * Resource action: base-bundle-podman-2 monitor on node2
+ * Resource action: base-bundle-podman-2 monitor on node1
+ * Resource action: app-bundle-podman-0 monitor on node5
+ * Resource action: app-bundle-podman-0 monitor on node4
+ * Resource action: app-bundle-podman-0 monitor on node3
+ * Resource action: app-bundle-podman-0 monitor on node2
+ * Resource action: app-bundle-podman-0 monitor on node1
+ * Resource action: app-bundle-podman-1 monitor on node5
+ * Resource action: app-bundle-podman-1 monitor on node4
+ * Resource action: app-bundle-podman-1 monitor on node3
+ * Resource action: app-bundle-podman-1 monitor on node2
+ * Resource action: app-bundle-podman-1 monitor on node1
+ * Resource action: app-bundle-podman-2 monitor on node5
+ * Resource action: app-bundle-podman-2 monitor on node4
+ * Resource action: app-bundle-podman-2 monitor on node3
+ * Resource action: app-bundle-podman-2 monitor on node2
+ * Resource action: app-bundle-podman-2 monitor on node1
+ * Pseudo action: base-bundle_start_0
+ * Pseudo action: base-bundle-clone_start_0
+ * Resource action: base-bundle-podman-0 start on node2
+ * Resource action: base-bundle-0 monitor on node5
+ * Resource action: base-bundle-0 monitor on node4
+ * Resource action: base-bundle-0 monitor on node3
+ * Resource action: base-bundle-0 monitor on node2
+ * Resource action: base-bundle-0 monitor on node1
+ * Resource action: base-bundle-podman-1 start on node3
+ * Resource action: base-bundle-1 monitor on node5
+ * Resource action: base-bundle-1 monitor on node4
+ * Resource action: base-bundle-1 monitor on node3
+ * Resource action: base-bundle-1 monitor on node2
+ * Resource action: base-bundle-1 monitor on node1
+ * Resource action: base-bundle-podman-2 start on node4
+ * Resource action: base-bundle-2 monitor on node5
+ * Resource action: base-bundle-2 monitor on node4
+ * Resource action: base-bundle-2 monitor on node3
+ * Resource action: base-bundle-2 monitor on node2
+ * Resource action: base-bundle-2 monitor on node1
+ * Resource action: base-bundle-podman-0 monitor=60000 on node2
+ * Resource action: base-bundle-0 start on node2
+ * Resource action: base-bundle-podman-1 monitor=60000 on node3
+ * Resource action: base-bundle-1 start on node3
+ * Resource action: base-bundle-podman-2 monitor=60000 on node4
+ * Resource action: base-bundle-2 start on node4
+ * Resource action: base:0 start on base-bundle-0
+ * Resource action: base:1 start on base-bundle-1
+ * Resource action: base:2 start on base-bundle-2
+ * Pseudo action: base-bundle-clone_running_0
+ * Resource action: base-bundle-0 monitor=30000 on node2
+ * Resource action: base-bundle-1 monitor=30000 on node3
+ * Resource action: base-bundle-2 monitor=30000 on node4
+ * Pseudo action: base-bundle_running_0
+ * Resource action: base:0 monitor=16000 on base-bundle-0
+ * Resource action: base:1 monitor=16000 on base-bundle-1
+ * Resource action: base:2 monitor=16000 on base-bundle-2
+ * Pseudo action: app-bundle_start_0
+ * Pseudo action: app-bundle-clone_start_0
+ * Resource action: app-bundle-podman-0 start on node2
+ * Resource action: app-bundle-0 monitor on node5
+ * Resource action: app-bundle-0 monitor on node4
+ * Resource action: app-bundle-0 monitor on node3
+ * Resource action: app-bundle-0 monitor on node2
+ * Resource action: app-bundle-0 monitor on node1
+ * Resource action: app-bundle-podman-1 start on node3
+ * Resource action: app-bundle-1 monitor on node5
+ * Resource action: app-bundle-1 monitor on node4
+ * Resource action: app-bundle-1 monitor on node3
+ * Resource action: app-bundle-1 monitor on node2
+ * Resource action: app-bundle-1 monitor on node1
+ * Resource action: app-bundle-podman-2 start on node4
+ * Resource action: app-bundle-2 monitor on node5
+ * Resource action: app-bundle-2 monitor on node4
+ * Resource action: app-bundle-2 monitor on node3
+ * Resource action: app-bundle-2 monitor on node2
+ * Resource action: app-bundle-2 monitor on node1
+ * Resource action: app-bundle-podman-0 monitor=60000 on node2
+ * Resource action: app-bundle-0 start on node2
+ * Resource action: app-bundle-podman-1 monitor=60000 on node3
+ * Resource action: app-bundle-1 start on node3
+ * Resource action: app-bundle-podman-2 monitor=60000 on node4
+ * Resource action: app-bundle-2 start on node4
+ * Resource action: app:0 start on app-bundle-0
+ * Resource action: app:1 start on app-bundle-1
+ * Resource action: app:2 start on app-bundle-2
+ * Pseudo action: app-bundle-clone_running_0
+ * Resource action: app-bundle-0 monitor=30000 on node2
+ * Resource action: app-bundle-1 monitor=30000 on node3
+ * Resource action: app-bundle-2 monitor=30000 on node4
+ * Pseudo action: app-bundle_running_0
+ * Resource action: app:0 monitor=16000 on app-bundle-0
+ * Resource action: app:1 monitor=16000 on app-bundle-1
+ * Resource action: app:2 monitor=16000 on app-bundle-2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 node4 node5 ]
+ * GuestOnline: [ app-bundle-0 app-bundle-1 app-bundle-2 base-bundle-0 base-bundle-1 base-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * Container bundle set: base-bundle [localhost/pcmktest:base]:
+ * base-bundle-0 (ocf:pacemaker:Stateful): Unpromoted node2
+ * base-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node3
+ * base-bundle-2 (ocf:pacemaker:Stateful): Unpromoted node4
+ * Container bundle set: app-bundle [localhost/pcmktest:app]:
+ * app-bundle-0 (ocf:pacemaker:Stateful): Unpromoted node2
+ * app-bundle-1 (ocf:pacemaker:Stateful): Unpromoted node3
+ * app-bundle-2 (ocf:pacemaker:Stateful): Unpromoted node4
diff --git a/cts/scheduler/summary/bundle-nested-colocation.summary b/cts/scheduler/summary/bundle-nested-colocation.summary
new file mode 100644
index 0000000..1949096
--- /dev/null
+++ b/cts/scheduler/summary/bundle-nested-colocation.summary
@@ -0,0 +1,106 @@
+Using the original execution date of: 2017-07-14 08:50:25Z
+Current cluster status:
+ * Node List:
+ * Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 overcloud-galera-0 overcloud-galera-1 overcloud-galera-2 ]
+ * RemoteOnline: [ overcloud-rabbit-0 overcloud-rabbit-1 overcloud-rabbit-2 ]
+
+ * Full List of Resources:
+ * overcloud-rabbit-0 (ocf:pacemaker:remote): Started overcloud-controller-0
+ * overcloud-rabbit-1 (ocf:pacemaker:remote): Started overcloud-controller-1
+ * overcloud-rabbit-2 (ocf:pacemaker:remote): Started overcloud-controller-2
+ * Container bundle set: rabbitmq-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-rabbitmq:latest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Stopped overcloud-rabbit-0
+ * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Stopped overcloud-rabbit-1
+ * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Stopped overcloud-rabbit-2
+ * Container bundle set: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
+ * galera-bundle-docker-0 (ocf:heartbeat:docker): Started overcloud-galera-0
+ * galera-bundle-docker-1 (ocf:heartbeat:docker): Started overcloud-galera-1
+ * galera-bundle-docker-2 (ocf:heartbeat:docker): Started overcloud-galera-2
+
+Transition Summary:
+ * Restart rabbitmq-bundle-docker-0 ( overcloud-rabbit-0 ) due to resource definition change
+ * Start rabbitmq-bundle-0 ( overcloud-controller-0 )
+ * Start rabbitmq:0 ( rabbitmq-bundle-0 )
+ * Restart rabbitmq-bundle-docker-1 ( overcloud-rabbit-1 ) due to resource definition change
+ * Start rabbitmq-bundle-1 ( overcloud-controller-1 )
+ * Start rabbitmq:1 ( rabbitmq-bundle-1 )
+ * Restart rabbitmq-bundle-docker-2 ( overcloud-rabbit-2 ) due to resource definition change
+ * Start rabbitmq-bundle-2 ( overcloud-controller-2 )
+ * Start rabbitmq:2 ( rabbitmq-bundle-2 )
+
+Executing Cluster Transition:
+ * Pseudo action: rabbitmq-bundle-clone_pre_notify_start_0
+ * Pseudo action: rabbitmq-bundle_stop_0
+ * Pseudo action: rabbitmq-bundle_start_0
+ * Pseudo action: rabbitmq-bundle-clone_confirmed-pre_notify_start_0
+ * Resource action: rabbitmq-bundle-docker-0 stop on overcloud-rabbit-0
+ * Resource action: rabbitmq-bundle-docker-0 start on overcloud-rabbit-0
+ * Resource action: rabbitmq-bundle-docker-0 monitor=60000 on overcloud-rabbit-0
+ * Resource action: rabbitmq-bundle-0 monitor on overcloud-galera-2
+ * Resource action: rabbitmq-bundle-0 monitor on overcloud-galera-1
+ * Resource action: rabbitmq-bundle-0 monitor on overcloud-galera-0
+ * Resource action: rabbitmq-bundle-0 monitor on overcloud-controller-2
+ * Resource action: rabbitmq-bundle-0 monitor on overcloud-controller-1
+ * Resource action: rabbitmq-bundle-0 monitor on overcloud-controller-0
+ * Resource action: rabbitmq-bundle-docker-1 stop on overcloud-rabbit-1
+ * Resource action: rabbitmq-bundle-docker-1 start on overcloud-rabbit-1
+ * Resource action: rabbitmq-bundle-docker-1 monitor=60000 on overcloud-rabbit-1
+ * Resource action: rabbitmq-bundle-1 monitor on overcloud-galera-2
+ * Resource action: rabbitmq-bundle-1 monitor on overcloud-galera-1
+ * Resource action: rabbitmq-bundle-1 monitor on overcloud-galera-0
+ * Resource action: rabbitmq-bundle-1 monitor on overcloud-controller-2
+ * Resource action: rabbitmq-bundle-1 monitor on overcloud-controller-1
+ * Resource action: rabbitmq-bundle-1 monitor on overcloud-controller-0
+ * Resource action: rabbitmq-bundle-docker-2 stop on overcloud-rabbit-2
+ * Resource action: rabbitmq-bundle-docker-2 start on overcloud-rabbit-2
+ * Resource action: rabbitmq-bundle-docker-2 monitor=60000 on overcloud-rabbit-2
+ * Resource action: rabbitmq-bundle-2 monitor on overcloud-galera-2
+ * Resource action: rabbitmq-bundle-2 monitor on overcloud-galera-1
+ * Resource action: rabbitmq-bundle-2 monitor on overcloud-galera-0
+ * Resource action: rabbitmq-bundle-2 monitor on overcloud-controller-2
+ * Resource action: rabbitmq-bundle-2 monitor on overcloud-controller-1
+ * Resource action: rabbitmq-bundle-2 monitor on overcloud-controller-0
+ * Pseudo action: rabbitmq-bundle_stopped_0
+ * Resource action: rabbitmq-bundle-0 start on overcloud-controller-0
+ * Resource action: rabbitmq-bundle-1 start on overcloud-controller-1
+ * Resource action: rabbitmq-bundle-2 start on overcloud-controller-2
+ * Resource action: rabbitmq:0 monitor on rabbitmq-bundle-0
+ * Resource action: rabbitmq:1 monitor on rabbitmq-bundle-1
+ * Resource action: rabbitmq:2 monitor on rabbitmq-bundle-2
+ * Pseudo action: rabbitmq-bundle-clone_start_0
+ * Resource action: rabbitmq-bundle-0 monitor=30000 on overcloud-controller-0
+ * Resource action: rabbitmq-bundle-1 monitor=30000 on overcloud-controller-1
+ * Resource action: rabbitmq-bundle-2 monitor=30000 on overcloud-controller-2
+ * Resource action: rabbitmq:0 start on rabbitmq-bundle-0
+ * Resource action: rabbitmq:1 start on rabbitmq-bundle-1
+ * Resource action: rabbitmq:2 start on rabbitmq-bundle-2
+ * Pseudo action: rabbitmq-bundle-clone_running_0
+ * Pseudo action: rabbitmq-bundle-clone_post_notify_running_0
+ * Resource action: rabbitmq:0 notify on rabbitmq-bundle-0
+ * Resource action: rabbitmq:1 notify on rabbitmq-bundle-1
+ * Resource action: rabbitmq:2 notify on rabbitmq-bundle-2
+ * Pseudo action: rabbitmq-bundle-clone_confirmed-post_notify_running_0
+ * Pseudo action: rabbitmq-bundle_running_0
+ * Resource action: rabbitmq:0 monitor=10000 on rabbitmq-bundle-0
+ * Resource action: rabbitmq:1 monitor=10000 on rabbitmq-bundle-1
+ * Resource action: rabbitmq:2 monitor=10000 on rabbitmq-bundle-2
+Using the original execution date of: 2017-07-14 08:50:25Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 overcloud-galera-0 overcloud-galera-1 overcloud-galera-2 ]
+ * RemoteOnline: [ overcloud-rabbit-0 overcloud-rabbit-1 overcloud-rabbit-2 ]
+ * GuestOnline: [ rabbitmq-bundle-0 rabbitmq-bundle-1 rabbitmq-bundle-2 ]
+
+ * Full List of Resources:
+ * overcloud-rabbit-0 (ocf:pacemaker:remote): Started overcloud-controller-0
+ * overcloud-rabbit-1 (ocf:pacemaker:remote): Started overcloud-controller-1
+ * overcloud-rabbit-2 (ocf:pacemaker:remote): Started overcloud-controller-2
+ * Container bundle set: rabbitmq-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-rabbitmq:latest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started overcloud-rabbit-0
+ * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started overcloud-rabbit-1
+ * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started overcloud-rabbit-2
+ * Container bundle set: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
+ * galera-bundle-docker-0 (ocf:heartbeat:docker): Started overcloud-galera-0
+ * galera-bundle-docker-1 (ocf:heartbeat:docker): Started overcloud-galera-1
+ * galera-bundle-docker-2 (ocf:heartbeat:docker): Started overcloud-galera-2
diff --git a/cts/scheduler/summary/bundle-order-fencing.summary b/cts/scheduler/summary/bundle-order-fencing.summary
new file mode 100644
index 0000000..e3a25c2
--- /dev/null
+++ b/cts/scheduler/summary/bundle-order-fencing.summary
@@ -0,0 +1,228 @@
+Using the original execution date of: 2017-09-12 10:51:59Z
+Current cluster status:
+ * Node List:
+ * Node controller-0: UNCLEAN (offline)
+ * Online: [ controller-1 controller-2 ]
+ * GuestOnline: [ galera-bundle-1 galera-bundle-2 rabbitmq-bundle-1 rabbitmq-bundle-2 redis-bundle-1 redis-bundle-2 ]
+
+ * Full List of Resources:
+ * Container bundle set: rabbitmq-bundle [192.168.24.1:8787/rhosp12/openstack-rabbitmq-docker:pcmklatest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): FAILED controller-0 (UNCLEAN)
+ * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started controller-1
+ * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started controller-2
+ * Container bundle set: galera-bundle [192.168.24.1:8787/rhosp12/openstack-mariadb-docker:pcmklatest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): FAILED Promoted controller-0 (UNCLEAN)
+ * galera-bundle-1 (ocf:heartbeat:galera): Promoted controller-1
+ * galera-bundle-2 (ocf:heartbeat:galera): Promoted controller-2
+ * Container bundle set: redis-bundle [192.168.24.1:8787/rhosp12/openstack-redis-docker:pcmklatest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): FAILED Promoted controller-0 (UNCLEAN)
+ * redis-bundle-1 (ocf:heartbeat:redis): Unpromoted controller-1
+ * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted controller-2
+ * ip-192.168.24.7 (ocf:heartbeat:IPaddr2): Started controller-0 (UNCLEAN)
+ * ip-10.0.0.109 (ocf:heartbeat:IPaddr2): Started controller-0 (UNCLEAN)
+ * ip-172.17.1.14 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.1.19 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.3.19 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.4.11 (ocf:heartbeat:IPaddr2): Started controller-0 (UNCLEAN)
+ * Container bundle set: haproxy-bundle [192.168.24.1:8787/rhosp12/openstack-haproxy-docker:pcmklatest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started controller-0 (UNCLEAN)
+ * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Started controller-2
+ * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Started controller-1
+ * openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-2
+ * stonith-fence_ipmilan-525400efba5c (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-5254003e8e97 (stonith:fence_ipmilan): Started controller-0 (UNCLEAN)
+ * stonith-fence_ipmilan-5254000dcb3f (stonith:fence_ipmilan): Started controller-0 (UNCLEAN)
+
+Transition Summary:
+ * Fence (off) redis-bundle-0 (resource: redis-bundle-docker-0) 'guest is unclean'
+ * Fence (off) rabbitmq-bundle-0 (resource: rabbitmq-bundle-docker-0) 'guest is unclean'
+ * Fence (off) galera-bundle-0 (resource: galera-bundle-docker-0) 'guest is unclean'
+ * Fence (reboot) controller-0 'peer is no longer part of the cluster'
+ * Stop rabbitmq-bundle-docker-0 ( controller-0 ) due to node availability
+ * Stop rabbitmq-bundle-0 ( controller-0 ) due to unrunnable rabbitmq-bundle-docker-0 start
+ * Stop rabbitmq:0 ( rabbitmq-bundle-0 ) due to unrunnable rabbitmq-bundle-docker-0 start
+ * Stop galera-bundle-docker-0 ( controller-0 ) due to node availability
+ * Stop galera-bundle-0 ( controller-0 ) due to unrunnable galera-bundle-docker-0 start
+ * Stop galera:0 ( Promoted galera-bundle-0 ) due to unrunnable galera-bundle-docker-0 start
+ * Stop redis-bundle-docker-0 ( controller-0 ) due to node availability
+ * Stop redis-bundle-0 ( controller-0 ) due to unrunnable redis-bundle-docker-0 start
+ * Stop redis:0 ( Promoted redis-bundle-0 ) due to unrunnable redis-bundle-docker-0 start
+ * Promote redis:1 ( Unpromoted -> Promoted redis-bundle-1 )
+ * Move ip-192.168.24.7 ( controller-0 -> controller-2 )
+ * Move ip-10.0.0.109 ( controller-0 -> controller-1 )
+ * Move ip-172.17.4.11 ( controller-0 -> controller-1 )
+ * Stop haproxy-bundle-docker-0 ( controller-0 ) due to node availability
+ * Move stonith-fence_ipmilan-5254003e8e97 ( controller-0 -> controller-1 )
+ * Move stonith-fence_ipmilan-5254000dcb3f ( controller-0 -> controller-2 )
+
+Executing Cluster Transition:
+ * Pseudo action: rabbitmq-bundle-clone_pre_notify_stop_0
+ * Pseudo action: rabbitmq-bundle-0_stop_0
+ * Resource action: rabbitmq-bundle-0 monitor on controller-2
+ * Resource action: rabbitmq-bundle-0 monitor on controller-1
+ * Resource action: rabbitmq-bundle-1 monitor on controller-2
+ * Resource action: rabbitmq-bundle-2 monitor on controller-1
+ * Pseudo action: galera-bundle-0_stop_0
+ * Resource action: galera-bundle-0 monitor on controller-2
+ * Resource action: galera-bundle-0 monitor on controller-1
+ * Resource action: galera-bundle-1 monitor on controller-2
+ * Resource action: galera-bundle-2 monitor on controller-1
+ * Resource action: redis cancel=45000 on redis-bundle-1
+ * Resource action: redis cancel=60000 on redis-bundle-1
+ * Pseudo action: redis-bundle-master_pre_notify_demote_0
+ * Pseudo action: redis-bundle-0_stop_0
+ * Resource action: redis-bundle-0 monitor on controller-2
+ * Resource action: redis-bundle-0 monitor on controller-1
+ * Resource action: redis-bundle-1 monitor on controller-2
+ * Resource action: redis-bundle-2 monitor on controller-1
+ * Pseudo action: stonith-fence_ipmilan-5254003e8e97_stop_0
+ * Pseudo action: stonith-fence_ipmilan-5254000dcb3f_stop_0
+ * Pseudo action: haproxy-bundle_stop_0
+ * Pseudo action: redis-bundle_demote_0
+ * Pseudo action: galera-bundle_demote_0
+ * Pseudo action: rabbitmq-bundle_stop_0
+ * Pseudo action: rabbitmq-bundle_start_0
+ * Fencing controller-0 (reboot)
+ * Resource action: rabbitmq notify on rabbitmq-bundle-1
+ * Resource action: rabbitmq notify on rabbitmq-bundle-2
+ * Pseudo action: rabbitmq-bundle-clone_confirmed-pre_notify_stop_0
+ * Pseudo action: rabbitmq-bundle-docker-0_post_notify_stonith_0
+ * Pseudo action: rabbitmq-bundle-docker-0_stop_0
+ * Pseudo action: rabbitmq-bundle-0_post_notify_stonith_0
+ * Pseudo action: galera-bundle-master_demote_0
+ * Resource action: redis notify on redis-bundle-1
+ * Resource action: redis notify on redis-bundle-2
+ * Pseudo action: redis-bundle-master_confirmed-pre_notify_demote_0
+ * Pseudo action: redis-bundle-master_demote_0
+ * Pseudo action: redis-bundle-docker-0_post_notify_stonith_0
+ * Pseudo action: redis-bundle-0_post_notify_stonith_0
+ * Pseudo action: haproxy-bundle-docker-0_stop_0
+ * Resource action: stonith-fence_ipmilan-5254003e8e97 start on controller-1
+ * Resource action: stonith-fence_ipmilan-5254000dcb3f start on controller-2
+ * Pseudo action: stonith-redis-bundle-0-off on redis-bundle-0
+ * Pseudo action: stonith-rabbitmq-bundle-0-off on rabbitmq-bundle-0
+ * Pseudo action: stonith-galera-bundle-0-off on galera-bundle-0
+ * Pseudo action: haproxy-bundle_stopped_0
+ * Pseudo action: rabbitmq_post_notify_stop_0
+ * Pseudo action: rabbitmq-bundle-clone_stop_0
+ * Pseudo action: rabbitmq-bundle-docker-0_confirmed-post_notify_stonith_0
+ * Pseudo action: rabbitmq-bundle-0_confirmed-post_notify_stonith_0
+ * Pseudo action: galera_demote_0
+ * Pseudo action: galera-bundle-master_demoted_0
+ * Pseudo action: redis_post_notify_stop_0
+ * Pseudo action: redis_demote_0
+ * Pseudo action: redis-bundle-master_demoted_0
+ * Pseudo action: redis-bundle-docker-0_confirmed-post_notify_stonith_0
+ * Pseudo action: redis-bundle-0_confirmed-post_notify_stonith_0
+ * Pseudo action: ip-192.168.24.7_stop_0
+ * Pseudo action: ip-10.0.0.109_stop_0
+ * Pseudo action: ip-172.17.4.11_stop_0
+ * Resource action: stonith-fence_ipmilan-5254003e8e97 monitor=60000 on controller-1
+ * Resource action: stonith-fence_ipmilan-5254000dcb3f monitor=60000 on controller-2
+ * Pseudo action: galera-bundle_demoted_0
+ * Pseudo action: galera-bundle_stop_0
+ * Pseudo action: rabbitmq_stop_0
+ * Pseudo action: rabbitmq-bundle-clone_stopped_0
+ * Pseudo action: galera-bundle-master_stop_0
+ * Pseudo action: galera-bundle-docker-0_stop_0
+ * Pseudo action: redis-bundle-master_post_notify_demoted_0
+ * Resource action: ip-192.168.24.7 start on controller-2
+ * Resource action: ip-10.0.0.109 start on controller-1
+ * Resource action: ip-172.17.4.11 start on controller-1
+ * Pseudo action: rabbitmq-bundle-clone_post_notify_stopped_0
+ * Pseudo action: galera_stop_0
+ * Pseudo action: galera-bundle-master_stopped_0
+ * Pseudo action: galera-bundle-master_start_0
+ * Resource action: redis notify on redis-bundle-1
+ * Resource action: redis notify on redis-bundle-2
+ * Pseudo action: redis-bundle-master_confirmed-post_notify_demoted_0
+ * Pseudo action: redis-bundle-master_pre_notify_stop_0
+ * Resource action: ip-192.168.24.7 monitor=10000 on controller-2
+ * Resource action: ip-10.0.0.109 monitor=10000 on controller-1
+ * Resource action: ip-172.17.4.11 monitor=10000 on controller-1
+ * Pseudo action: redis-bundle_demoted_0
+ * Pseudo action: redis-bundle_stop_0
+ * Pseudo action: galera-bundle_stopped_0
+ * Resource action: rabbitmq notify on rabbitmq-bundle-1
+ * Resource action: rabbitmq notify on rabbitmq-bundle-2
+ * Pseudo action: rabbitmq-bundle-clone_confirmed-post_notify_stopped_0
+ * Pseudo action: rabbitmq-bundle-clone_pre_notify_start_0
+ * Pseudo action: galera-bundle-master_running_0
+ * Resource action: redis notify on redis-bundle-1
+ * Resource action: redis notify on redis-bundle-2
+ * Pseudo action: redis-bundle-master_confirmed-pre_notify_stop_0
+ * Pseudo action: redis-bundle-master_stop_0
+ * Pseudo action: redis-bundle-docker-0_stop_0
+ * Pseudo action: galera-bundle_running_0
+ * Pseudo action: rabbitmq-bundle_stopped_0
+ * Pseudo action: rabbitmq_notified_0
+ * Pseudo action: rabbitmq-bundle-clone_confirmed-pre_notify_start_0
+ * Pseudo action: rabbitmq-bundle-clone_start_0
+ * Pseudo action: redis_stop_0
+ * Pseudo action: redis-bundle-master_stopped_0
+ * Pseudo action: rabbitmq-bundle-clone_running_0
+ * Pseudo action: redis-bundle-master_post_notify_stopped_0
+ * Pseudo action: rabbitmq-bundle-clone_post_notify_running_0
+ * Resource action: redis notify on redis-bundle-1
+ * Resource action: redis notify on redis-bundle-2
+ * Pseudo action: redis-bundle-master_confirmed-post_notify_stopped_0
+ * Pseudo action: redis-bundle-master_pre_notify_start_0
+ * Pseudo action: redis-bundle_stopped_0
+ * Pseudo action: rabbitmq-bundle-clone_confirmed-post_notify_running_0
+ * Pseudo action: redis_notified_0
+ * Pseudo action: redis-bundle-master_confirmed-pre_notify_start_0
+ * Pseudo action: redis-bundle-master_start_0
+ * Pseudo action: rabbitmq-bundle_running_0
+ * Pseudo action: redis-bundle-master_running_0
+ * Pseudo action: redis-bundle-master_post_notify_running_0
+ * Pseudo action: redis-bundle-master_confirmed-post_notify_running_0
+ * Pseudo action: redis-bundle_running_0
+ * Pseudo action: redis-bundle-master_pre_notify_promote_0
+ * Pseudo action: redis-bundle_promote_0
+ * Resource action: redis notify on redis-bundle-1
+ * Resource action: redis notify on redis-bundle-2
+ * Pseudo action: redis-bundle-master_confirmed-pre_notify_promote_0
+ * Pseudo action: redis-bundle-master_promote_0
+ * Resource action: redis promote on redis-bundle-1
+ * Pseudo action: redis-bundle-master_promoted_0
+ * Pseudo action: redis-bundle-master_post_notify_promoted_0
+ * Resource action: redis notify on redis-bundle-1
+ * Resource action: redis notify on redis-bundle-2
+ * Pseudo action: redis-bundle-master_confirmed-post_notify_promoted_0
+ * Pseudo action: redis-bundle_promoted_0
+ * Resource action: redis monitor=20000 on redis-bundle-1
+Using the original execution date of: 2017-09-12 10:51:59Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ controller-1 controller-2 ]
+ * OFFLINE: [ controller-0 ]
+ * GuestOnline: [ galera-bundle-1 galera-bundle-2 rabbitmq-bundle-1 rabbitmq-bundle-2 redis-bundle-1 redis-bundle-2 ]
+
+ * Full List of Resources:
+ * Container bundle set: rabbitmq-bundle [192.168.24.1:8787/rhosp12/openstack-rabbitmq-docker:pcmklatest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): FAILED
+ * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started controller-1
+ * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started controller-2
+ * Container bundle set: galera-bundle [192.168.24.1:8787/rhosp12/openstack-mariadb-docker:pcmklatest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): FAILED Promoted
+ * galera-bundle-1 (ocf:heartbeat:galera): Promoted controller-1
+ * galera-bundle-2 (ocf:heartbeat:galera): Promoted controller-2
+ * Container bundle set: redis-bundle [192.168.24.1:8787/rhosp12/openstack-redis-docker:pcmklatest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): FAILED Promoted
+ * redis-bundle-1 (ocf:heartbeat:redis): Promoted controller-1
+ * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted controller-2
+ * ip-192.168.24.7 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-10.0.0.109 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.1.14 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.1.19 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.3.19 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.4.11 (ocf:heartbeat:IPaddr2): Started controller-1
+ * Container bundle set: haproxy-bundle [192.168.24.1:8787/rhosp12/openstack-haproxy-docker:pcmklatest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Stopped
+ * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Started controller-2
+ * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Started controller-1
+ * openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-2
+ * stonith-fence_ipmilan-525400efba5c (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-5254003e8e97 (stonith:fence_ipmilan): Started controller-1
+ * stonith-fence_ipmilan-5254000dcb3f (stonith:fence_ipmilan): Started controller-2
diff --git a/cts/scheduler/summary/bundle-order-partial-start-2.summary b/cts/scheduler/summary/bundle-order-partial-start-2.summary
new file mode 100644
index 0000000..1e2ca2c
--- /dev/null
+++ b/cts/scheduler/summary/bundle-order-partial-start-2.summary
@@ -0,0 +1,100 @@
+Current cluster status:
+ * Node List:
+ * Online: [ undercloud ]
+ * GuestOnline: [ galera-bundle-0 rabbitmq-bundle-0 redis-bundle-0 ]
+
+ * Full List of Resources:
+ * Container bundle: rabbitmq-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-rabbitmq:latest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Stopped undercloud
+ * Container bundle: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Stopped undercloud
+ * Container bundle: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Unpromoted undercloud
+ * ip-192.168.122.254 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.250 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.249 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.253 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.247 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.248 (ocf:heartbeat:IPaddr2): Started undercloud
+ * Container bundle: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Stopped
+ * Container bundle: openstack-cinder-volume [192.168.24.1:8787/tripleoupstream/centos-binary-cinder-volume:latest]:
+ * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Started undercloud
+
+Transition Summary:
+ * Start rabbitmq:0 ( rabbitmq-bundle-0 )
+ * Restart galera-bundle-docker-0 ( undercloud ) due to required redis-bundle promote
+ * Restart galera-bundle-0 ( undercloud ) due to required galera-bundle-docker-0 start
+ * Start galera:0 ( galera-bundle-0 )
+ * Promote redis:0 ( Unpromoted -> Promoted redis-bundle-0 )
+ * Start haproxy-bundle-docker-0 ( undercloud )
+
+Executing Cluster Transition:
+ * Resource action: rabbitmq:0 monitor on rabbitmq-bundle-0
+ * Pseudo action: rabbitmq-bundle-clone_pre_notify_start_0
+ * Resource action: galera-bundle-0 stop on undercloud
+ * Pseudo action: redis-bundle-master_pre_notify_promote_0
+ * Resource action: haproxy-bundle-docker-0 monitor on undercloud
+ * Pseudo action: haproxy-bundle_start_0
+ * Pseudo action: redis-bundle_promote_0
+ * Pseudo action: galera-bundle_stop_0
+ * Pseudo action: rabbitmq-bundle_start_0
+ * Pseudo action: rabbitmq-bundle-clone_confirmed-pre_notify_start_0
+ * Pseudo action: rabbitmq-bundle-clone_start_0
+ * Resource action: galera-bundle-docker-0 stop on undercloud
+ * Resource action: redis notify on redis-bundle-0
+ * Pseudo action: redis-bundle-master_confirmed-pre_notify_promote_0
+ * Pseudo action: redis-bundle-master_promote_0
+ * Resource action: haproxy-bundle-docker-0 start on undercloud
+ * Pseudo action: haproxy-bundle_running_0
+ * Pseudo action: galera-bundle_stopped_0
+ * Pseudo action: galera-bundle_start_0
+ * Resource action: rabbitmq:0 start on rabbitmq-bundle-0
+ * Pseudo action: rabbitmq-bundle-clone_running_0
+ * Resource action: galera-bundle-docker-0 start on undercloud
+ * Resource action: galera-bundle-docker-0 monitor=60000 on undercloud
+ * Resource action: galera-bundle-0 start on undercloud
+ * Resource action: galera-bundle-0 monitor=30000 on undercloud
+ * Resource action: redis promote on redis-bundle-0
+ * Pseudo action: redis-bundle-master_promoted_0
+ * Resource action: haproxy-bundle-docker-0 monitor=60000 on undercloud
+ * Pseudo action: rabbitmq-bundle-clone_post_notify_running_0
+ * Resource action: galera:0 monitor on galera-bundle-0
+ * Pseudo action: galera-bundle-master_start_0
+ * Pseudo action: redis-bundle-master_post_notify_promoted_0
+ * Resource action: rabbitmq:0 notify on rabbitmq-bundle-0
+ * Pseudo action: rabbitmq-bundle-clone_confirmed-post_notify_running_0
+ * Resource action: galera:0 start on galera-bundle-0
+ * Pseudo action: galera-bundle-master_running_0
+ * Resource action: redis notify on redis-bundle-0
+ * Pseudo action: redis-bundle-master_confirmed-post_notify_promoted_0
+ * Pseudo action: redis-bundle_promoted_0
+ * Pseudo action: galera-bundle_running_0
+ * Pseudo action: rabbitmq-bundle_running_0
+ * Resource action: rabbitmq:0 monitor=10000 on rabbitmq-bundle-0
+ * Resource action: galera:0 monitor=30000 on galera-bundle-0
+ * Resource action: galera:0 monitor=20000 on galera-bundle-0
+ * Resource action: redis monitor=20000 on redis-bundle-0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ undercloud ]
+ * GuestOnline: [ galera-bundle-0 rabbitmq-bundle-0 redis-bundle-0 ]
+
+ * Full List of Resources:
+ * Container bundle: rabbitmq-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-rabbitmq:latest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started undercloud
+ * Container bundle: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Unpromoted undercloud
+ * Container bundle: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Promoted undercloud
+ * ip-192.168.122.254 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.250 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.249 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.253 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.247 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.248 (ocf:heartbeat:IPaddr2): Started undercloud
+ * Container bundle: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started undercloud
+ * Container bundle: openstack-cinder-volume [192.168.24.1:8787/tripleoupstream/centos-binary-cinder-volume:latest]:
+ * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Started undercloud
diff --git a/cts/scheduler/summary/bundle-order-partial-start.summary b/cts/scheduler/summary/bundle-order-partial-start.summary
new file mode 100644
index 0000000..79eb7b5
--- /dev/null
+++ b/cts/scheduler/summary/bundle-order-partial-start.summary
@@ -0,0 +1,97 @@
+Current cluster status:
+ * Node List:
+ * Online: [ undercloud ]
+ * GuestOnline: [ rabbitmq-bundle-0 redis-bundle-0 ]
+
+ * Full List of Resources:
+ * Container bundle: rabbitmq-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-rabbitmq:latest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Stopped undercloud
+ * Container bundle: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Stopped
+ * Container bundle: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Unpromoted undercloud
+ * ip-192.168.122.254 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.250 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.249 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.253 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.247 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.248 (ocf:heartbeat:IPaddr2): Started undercloud
+ * Container bundle: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Stopped
+ * Container bundle: openstack-cinder-volume [192.168.24.1:8787/tripleoupstream/centos-binary-cinder-volume:latest]:
+ * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Started undercloud
+
+Transition Summary:
+ * Start rabbitmq:0 ( rabbitmq-bundle-0 )
+ * Start galera-bundle-docker-0 ( undercloud )
+ * Start galera-bundle-0 ( undercloud )
+ * Start galera:0 ( galera-bundle-0 )
+ * Promote redis:0 ( Unpromoted -> Promoted redis-bundle-0 )
+ * Start haproxy-bundle-docker-0 ( undercloud )
+
+Executing Cluster Transition:
+ * Resource action: rabbitmq:0 monitor on rabbitmq-bundle-0
+ * Pseudo action: rabbitmq-bundle-clone_pre_notify_start_0
+ * Resource action: galera-bundle-docker-0 monitor on undercloud
+ * Pseudo action: redis-bundle-master_pre_notify_promote_0
+ * Resource action: haproxy-bundle-docker-0 monitor on undercloud
+ * Pseudo action: haproxy-bundle_start_0
+ * Pseudo action: redis-bundle_promote_0
+ * Pseudo action: rabbitmq-bundle_start_0
+ * Pseudo action: rabbitmq-bundle-clone_confirmed-pre_notify_start_0
+ * Pseudo action: rabbitmq-bundle-clone_start_0
+ * Resource action: redis notify on redis-bundle-0
+ * Pseudo action: redis-bundle-master_confirmed-pre_notify_promote_0
+ * Pseudo action: redis-bundle-master_promote_0
+ * Resource action: haproxy-bundle-docker-0 start on undercloud
+ * Pseudo action: haproxy-bundle_running_0
+ * Pseudo action: galera-bundle_start_0
+ * Resource action: rabbitmq:0 start on rabbitmq-bundle-0
+ * Pseudo action: rabbitmq-bundle-clone_running_0
+ * Pseudo action: galera-bundle-master_start_0
+ * Resource action: galera-bundle-docker-0 start on undercloud
+ * Resource action: galera-bundle-0 monitor on undercloud
+ * Resource action: redis promote on redis-bundle-0
+ * Pseudo action: redis-bundle-master_promoted_0
+ * Resource action: haproxy-bundle-docker-0 monitor=60000 on undercloud
+ * Pseudo action: rabbitmq-bundle-clone_post_notify_running_0
+ * Resource action: galera-bundle-docker-0 monitor=60000 on undercloud
+ * Resource action: galera-bundle-0 start on undercloud
+ * Pseudo action: redis-bundle-master_post_notify_promoted_0
+ * Resource action: rabbitmq:0 notify on rabbitmq-bundle-0
+ * Pseudo action: rabbitmq-bundle-clone_confirmed-post_notify_running_0
+ * Resource action: galera:0 start on galera-bundle-0
+ * Pseudo action: galera-bundle-master_running_0
+ * Resource action: galera-bundle-0 monitor=30000 on undercloud
+ * Resource action: redis notify on redis-bundle-0
+ * Pseudo action: redis-bundle-master_confirmed-post_notify_promoted_0
+ * Pseudo action: redis-bundle_promoted_0
+ * Pseudo action: galera-bundle_running_0
+ * Pseudo action: rabbitmq-bundle_running_0
+ * Resource action: rabbitmq:0 monitor=10000 on rabbitmq-bundle-0
+ * Resource action: galera:0 monitor=30000 on galera-bundle-0
+ * Resource action: galera:0 monitor=20000 on galera-bundle-0
+ * Resource action: redis monitor=20000 on redis-bundle-0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ undercloud ]
+ * GuestOnline: [ galera-bundle-0 rabbitmq-bundle-0 redis-bundle-0 ]
+
+ * Full List of Resources:
+ * Container bundle: rabbitmq-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-rabbitmq:latest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started undercloud
+ * Container bundle: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Unpromoted undercloud
+ * Container bundle: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Promoted undercloud
+ * ip-192.168.122.254 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.250 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.249 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.253 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.247 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.248 (ocf:heartbeat:IPaddr2): Started undercloud
+ * Container bundle: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started undercloud
+ * Container bundle: openstack-cinder-volume [192.168.24.1:8787/tripleoupstream/centos-binary-cinder-volume:latest]:
+ * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Started undercloud
diff --git a/cts/scheduler/summary/bundle-order-partial-stop.summary b/cts/scheduler/summary/bundle-order-partial-stop.summary
new file mode 100644
index 0000000..5fc2efe
--- /dev/null
+++ b/cts/scheduler/summary/bundle-order-partial-stop.summary
@@ -0,0 +1,127 @@
+Current cluster status:
+ * Node List:
+ * Online: [ undercloud ]
+ * GuestOnline: [ galera-bundle-0 rabbitmq-bundle-0 redis-bundle-0 ]
+
+ * Full List of Resources:
+ * Container bundle: rabbitmq-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-rabbitmq:latest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started undercloud
+ * Container bundle: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Promoted undercloud
+ * Container bundle: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Promoted undercloud
+ * ip-192.168.122.254 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.250 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.249 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.253 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.247 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.248 (ocf:heartbeat:IPaddr2): Started undercloud
+ * Container bundle: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started undercloud
+ * Container bundle: openstack-cinder-volume [192.168.24.1:8787/tripleoupstream/centos-binary-cinder-volume:latest]:
+ * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Started undercloud
+
+Transition Summary:
+ * Stop rabbitmq-bundle-docker-0 ( undercloud ) due to node availability
+ * Stop rabbitmq-bundle-0 ( undercloud ) due to node availability
+ * Stop rabbitmq:0 ( rabbitmq-bundle-0 ) due to colocation with haproxy-bundle-docker-0
+ * Stop galera-bundle-docker-0 ( undercloud ) due to node availability
+ * Stop galera-bundle-0 ( undercloud ) due to node availability
+ * Stop galera:0 ( Promoted galera-bundle-0 ) due to unrunnable galera-bundle-0 start
+ * Stop redis-bundle-docker-0 ( undercloud ) due to node availability
+ * Stop redis-bundle-0 ( undercloud ) due to node availability
+ * Stop redis:0 ( Promoted redis-bundle-0 ) due to unrunnable redis-bundle-0 start
+ * Stop ip-192.168.122.254 ( undercloud ) due to node availability
+ * Stop ip-192.168.122.250 ( undercloud ) due to node availability
+ * Stop ip-192.168.122.249 ( undercloud ) due to node availability
+ * Stop ip-192.168.122.253 ( undercloud ) due to node availability
+ * Stop ip-192.168.122.247 ( undercloud ) due to node availability
+ * Stop ip-192.168.122.248 ( undercloud ) due to node availability
+ * Stop haproxy-bundle-docker-0 ( undercloud ) due to node availability
+ * Stop openstack-cinder-volume-docker-0 ( undercloud ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: rabbitmq-bundle-clone_pre_notify_stop_0
+ * Resource action: galera cancel=10000 on galera-bundle-0
+ * Resource action: redis cancel=20000 on redis-bundle-0
+ * Pseudo action: redis-bundle-master_pre_notify_demote_0
+ * Pseudo action: openstack-cinder-volume_stop_0
+ * Pseudo action: haproxy-bundle_stop_0
+ * Pseudo action: redis-bundle_demote_0
+ * Pseudo action: galera-bundle_demote_0
+ * Pseudo action: rabbitmq-bundle_stop_0
+ * Resource action: rabbitmq notify on rabbitmq-bundle-0
+ * Pseudo action: rabbitmq-bundle-clone_confirmed-pre_notify_stop_0
+ * Pseudo action: rabbitmq-bundle-clone_stop_0
+ * Pseudo action: galera-bundle-master_demote_0
+ * Resource action: redis notify on redis-bundle-0
+ * Pseudo action: redis-bundle-master_confirmed-pre_notify_demote_0
+ * Pseudo action: redis-bundle-master_demote_0
+ * Resource action: haproxy-bundle-docker-0 stop on undercloud
+ * Resource action: openstack-cinder-volume-docker-0 stop on undercloud
+ * Pseudo action: openstack-cinder-volume_stopped_0
+ * Pseudo action: haproxy-bundle_stopped_0
+ * Resource action: rabbitmq stop on rabbitmq-bundle-0
+ * Pseudo action: rabbitmq-bundle-clone_stopped_0
+ * Resource action: rabbitmq-bundle-0 stop on undercloud
+ * Resource action: galera demote on galera-bundle-0
+ * Pseudo action: galera-bundle-master_demoted_0
+ * Resource action: redis demote on redis-bundle-0
+ * Pseudo action: redis-bundle-master_demoted_0
+ * Resource action: ip-192.168.122.254 stop on undercloud
+ * Resource action: ip-192.168.122.250 stop on undercloud
+ * Resource action: ip-192.168.122.249 stop on undercloud
+ * Resource action: ip-192.168.122.253 stop on undercloud
+ * Resource action: ip-192.168.122.247 stop on undercloud
+ * Resource action: ip-192.168.122.248 stop on undercloud
+ * Pseudo action: galera-bundle_demoted_0
+ * Pseudo action: galera-bundle_stop_0
+ * Pseudo action: rabbitmq-bundle-clone_post_notify_stopped_0
+ * Resource action: rabbitmq-bundle-docker-0 stop on undercloud
+ * Pseudo action: galera-bundle-master_stop_0
+ * Pseudo action: redis-bundle-master_post_notify_demoted_0
+ * Pseudo action: rabbitmq-bundle-clone_confirmed-post_notify_stopped_0
+ * Resource action: galera stop on galera-bundle-0
+ * Pseudo action: galera-bundle-master_stopped_0
+ * Resource action: galera-bundle-0 stop on undercloud
+ * Resource action: redis notify on redis-bundle-0
+ * Pseudo action: redis-bundle-master_confirmed-post_notify_demoted_0
+ * Pseudo action: redis-bundle-master_pre_notify_stop_0
+ * Pseudo action: redis-bundle_demoted_0
+ * Pseudo action: rabbitmq-bundle_stopped_0
+ * Resource action: galera-bundle-docker-0 stop on undercloud
+ * Resource action: redis notify on redis-bundle-0
+ * Pseudo action: redis-bundle-master_confirmed-pre_notify_stop_0
+ * Pseudo action: galera-bundle_stopped_0
+ * Pseudo action: redis-bundle_stop_0
+ * Pseudo action: redis-bundle-master_stop_0
+ * Resource action: redis stop on redis-bundle-0
+ * Pseudo action: redis-bundle-master_stopped_0
+ * Resource action: redis-bundle-0 stop on undercloud
+ * Pseudo action: redis-bundle-master_post_notify_stopped_0
+ * Resource action: redis-bundle-docker-0 stop on undercloud
+ * Cluster action: do_shutdown on undercloud
+ * Pseudo action: redis-bundle-master_confirmed-post_notify_stopped_0
+ * Pseudo action: redis-bundle_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ undercloud ]
+
+ * Full List of Resources:
+ * Container bundle: rabbitmq-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-rabbitmq:latest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Stopped
+ * Container bundle: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Stopped
+ * Container bundle: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Stopped
+ * ip-192.168.122.254 (ocf:heartbeat:IPaddr2): Stopped
+ * ip-192.168.122.250 (ocf:heartbeat:IPaddr2): Stopped
+ * ip-192.168.122.249 (ocf:heartbeat:IPaddr2): Stopped
+ * ip-192.168.122.253 (ocf:heartbeat:IPaddr2): Stopped
+ * ip-192.168.122.247 (ocf:heartbeat:IPaddr2): Stopped
+ * ip-192.168.122.248 (ocf:heartbeat:IPaddr2): Stopped
+ * Container bundle: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Stopped
+ * Container bundle: openstack-cinder-volume [192.168.24.1:8787/tripleoupstream/centos-binary-cinder-volume:latest]:
+ * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Stopped
diff --git a/cts/scheduler/summary/bundle-order-partial.summary b/cts/scheduler/summary/bundle-order-partial.summary
new file mode 100644
index 0000000..fa8b1d9
--- /dev/null
+++ b/cts/scheduler/summary/bundle-order-partial.summary
@@ -0,0 +1,49 @@
+Current cluster status:
+ * Node List:
+ * Online: [ undercloud ]
+ * GuestOnline: [ galera-bundle-0 rabbitmq-bundle-0 redis-bundle-0 ]
+
+ * Full List of Resources:
+ * Container bundle: rabbitmq-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-rabbitmq:latest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started undercloud
+ * Container bundle: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Promoted undercloud
+ * Container bundle: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Promoted undercloud
+ * ip-192.168.122.254 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.250 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.249 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.253 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.247 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.248 (ocf:heartbeat:IPaddr2): Started undercloud
+ * Container bundle: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started undercloud
+ * Container bundle: openstack-cinder-volume [192.168.24.1:8787/tripleoupstream/centos-binary-cinder-volume:latest]:
+ * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Started undercloud
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ undercloud ]
+ * GuestOnline: [ galera-bundle-0 rabbitmq-bundle-0 redis-bundle-0 ]
+
+ * Full List of Resources:
+ * Container bundle: rabbitmq-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-rabbitmq:latest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started undercloud
+ * Container bundle: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Promoted undercloud
+ * Container bundle: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Promoted undercloud
+ * ip-192.168.122.254 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.250 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.249 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.253 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.247 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.248 (ocf:heartbeat:IPaddr2): Started undercloud
+ * Container bundle: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started undercloud
+ * Container bundle: openstack-cinder-volume [192.168.24.1:8787/tripleoupstream/centos-binary-cinder-volume:latest]:
+ * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Started undercloud
diff --git a/cts/scheduler/summary/bundle-order-startup-clone-2.summary b/cts/scheduler/summary/bundle-order-startup-clone-2.summary
new file mode 100644
index 0000000..2d7cd9b
--- /dev/null
+++ b/cts/scheduler/summary/bundle-order-startup-clone-2.summary
@@ -0,0 +1,213 @@
+Current cluster status:
+ * Node List:
+ * Online: [ metal-1 metal-2 metal-3 ]
+ * RemoteOFFLINE: [ rabbitmq-bundle-0 ]
+
+ * Full List of Resources:
+ * Clone Set: storage-clone [storage]:
+ * Stopped: [ metal-1 metal-2 metal-3 rabbitmq-bundle-0 ]
+ * Container bundle set: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Stopped
+ * galera-bundle-1 (ocf:heartbeat:galera): Stopped
+ * galera-bundle-2 (ocf:heartbeat:galera): Stopped
+ * Container bundle set: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Stopped
+ * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Stopped
+ * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Stopped
+ * Container bundle set: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Stopped
+ * redis-bundle-1 (ocf:heartbeat:redis): Stopped
+ * redis-bundle-2 (ocf:heartbeat:redis): Stopped
+
+Transition Summary:
+ * Start storage:0 ( metal-1 )
+ * Start storage:1 ( metal-2 )
+ * Start storage:2 ( metal-3 )
+ * Start galera-bundle-docker-0 ( metal-1 )
+ * Start galera-bundle-0 ( metal-1 )
+ * Start galera:0 ( galera-bundle-0 )
+ * Start galera-bundle-docker-1 ( metal-2 )
+ * Start galera-bundle-1 ( metal-2 )
+ * Start galera:1 ( galera-bundle-1 )
+ * Start galera-bundle-docker-2 ( metal-3 )
+ * Start galera-bundle-2 ( metal-3 )
+ * Start galera:2 ( galera-bundle-2 )
+ * Start haproxy-bundle-docker-0 ( metal-1 )
+ * Start haproxy-bundle-docker-1 ( metal-2 )
+ * Start haproxy-bundle-docker-2 ( metal-3 )
+ * Start redis-bundle-docker-0 ( metal-1 )
+ * Start redis-bundle-0 ( metal-1 )
+ * Promote redis:0 ( Stopped -> Promoted redis-bundle-0 )
+ * Start redis-bundle-docker-1 ( metal-2 )
+ * Start redis-bundle-1 ( metal-2 )
+ * Promote redis:1 ( Stopped -> Promoted redis-bundle-1 )
+ * Start redis-bundle-docker-2 ( metal-3 )
+ * Start redis-bundle-2 ( metal-3 )
+ * Promote redis:2 ( Stopped -> Promoted redis-bundle-2 )
+
+Executing Cluster Transition:
+ * Resource action: storage:0 monitor on metal-1
+ * Resource action: storage:1 monitor on metal-2
+ * Resource action: storage:2 monitor on metal-3
+ * Pseudo action: storage-clone_pre_notify_start_0
+ * Resource action: galera-bundle-docker-0 monitor on metal-3
+ * Resource action: galera-bundle-docker-0 monitor on metal-2
+ * Resource action: galera-bundle-docker-0 monitor on metal-1
+ * Resource action: galera-bundle-docker-1 monitor on metal-3
+ * Resource action: galera-bundle-docker-1 monitor on metal-2
+ * Resource action: galera-bundle-docker-1 monitor on metal-1
+ * Resource action: galera-bundle-docker-2 monitor on metal-3
+ * Resource action: galera-bundle-docker-2 monitor on metal-2
+ * Resource action: galera-bundle-docker-2 monitor on metal-1
+ * Resource action: haproxy-bundle-docker-0 monitor on metal-3
+ * Resource action: haproxy-bundle-docker-0 monitor on metal-2
+ * Resource action: haproxy-bundle-docker-0 monitor on metal-1
+ * Resource action: haproxy-bundle-docker-1 monitor on metal-3
+ * Resource action: haproxy-bundle-docker-1 monitor on metal-2
+ * Resource action: haproxy-bundle-docker-1 monitor on metal-1
+ * Resource action: haproxy-bundle-docker-2 monitor on metal-3
+ * Resource action: haproxy-bundle-docker-2 monitor on metal-2
+ * Resource action: haproxy-bundle-docker-2 monitor on metal-1
+ * Pseudo action: redis-bundle-master_pre_notify_start_0
+ * Resource action: redis-bundle-docker-0 monitor on metal-3
+ * Resource action: redis-bundle-docker-0 monitor on metal-2
+ * Resource action: redis-bundle-docker-0 monitor on metal-1
+ * Resource action: redis-bundle-docker-1 monitor on metal-3
+ * Resource action: redis-bundle-docker-1 monitor on metal-2
+ * Resource action: redis-bundle-docker-1 monitor on metal-1
+ * Resource action: redis-bundle-docker-2 monitor on metal-3
+ * Resource action: redis-bundle-docker-2 monitor on metal-2
+ * Resource action: redis-bundle-docker-2 monitor on metal-1
+ * Pseudo action: redis-bundle_start_0
+ * Pseudo action: haproxy-bundle_start_0
+ * Pseudo action: storage-clone_confirmed-pre_notify_start_0
+ * Resource action: haproxy-bundle-docker-0 start on metal-1
+ * Resource action: haproxy-bundle-docker-1 start on metal-2
+ * Resource action: haproxy-bundle-docker-2 start on metal-3
+ * Pseudo action: redis-bundle-master_confirmed-pre_notify_start_0
+ * Pseudo action: redis-bundle-master_start_0
+ * Resource action: redis-bundle-docker-0 start on metal-1
+ * Resource action: redis-bundle-0 monitor on metal-3
+ * Resource action: redis-bundle-0 monitor on metal-2
+ * Resource action: redis-bundle-0 monitor on metal-1
+ * Resource action: redis-bundle-docker-1 start on metal-2
+ * Resource action: redis-bundle-1 monitor on metal-3
+ * Resource action: redis-bundle-1 monitor on metal-2
+ * Resource action: redis-bundle-1 monitor on metal-1
+ * Resource action: redis-bundle-docker-2 start on metal-3
+ * Resource action: redis-bundle-2 monitor on metal-3
+ * Resource action: redis-bundle-2 monitor on metal-2
+ * Resource action: redis-bundle-2 monitor on metal-1
+ * Pseudo action: haproxy-bundle_running_0
+ * Resource action: haproxy-bundle-docker-0 monitor=60000 on metal-1
+ * Resource action: haproxy-bundle-docker-1 monitor=60000 on metal-2
+ * Resource action: haproxy-bundle-docker-2 monitor=60000 on metal-3
+ * Resource action: redis-bundle-docker-0 monitor=60000 on metal-1
+ * Resource action: redis-bundle-0 start on metal-1
+ * Resource action: redis-bundle-docker-1 monitor=60000 on metal-2
+ * Resource action: redis-bundle-1 start on metal-2
+ * Resource action: redis-bundle-docker-2 monitor=60000 on metal-3
+ * Resource action: redis-bundle-2 start on metal-3
+ * Resource action: redis:0 start on redis-bundle-0
+ * Resource action: redis:1 start on redis-bundle-1
+ * Resource action: redis:2 start on redis-bundle-2
+ * Pseudo action: redis-bundle-master_running_0
+ * Resource action: redis-bundle-0 monitor=30000 on metal-1
+ * Resource action: redis-bundle-1 monitor=30000 on metal-2
+ * Resource action: redis-bundle-2 monitor=30000 on metal-3
+ * Pseudo action: redis-bundle-master_post_notify_running_0
+ * Resource action: redis:0 notify on redis-bundle-0
+ * Resource action: redis:1 notify on redis-bundle-1
+ * Resource action: redis:2 notify on redis-bundle-2
+ * Pseudo action: redis-bundle-master_confirmed-post_notify_running_0
+ * Pseudo action: redis-bundle_running_0
+ * Pseudo action: redis-bundle-master_pre_notify_promote_0
+ * Pseudo action: redis-bundle_promote_0
+ * Pseudo action: storage-clone_start_0
+ * Resource action: redis:0 notify on redis-bundle-0
+ * Resource action: redis:1 notify on redis-bundle-1
+ * Resource action: redis:2 notify on redis-bundle-2
+ * Pseudo action: redis-bundle-master_confirmed-pre_notify_promote_0
+ * Pseudo action: redis-bundle-master_promote_0
+ * Resource action: storage:0 start on metal-1
+ * Resource action: storage:1 start on metal-2
+ * Resource action: storage:2 start on metal-3
+ * Pseudo action: storage-clone_running_0
+ * Resource action: redis:0 promote on redis-bundle-0
+ * Resource action: redis:1 promote on redis-bundle-1
+ * Resource action: redis:2 promote on redis-bundle-2
+ * Pseudo action: redis-bundle-master_promoted_0
+ * Pseudo action: storage-clone_post_notify_running_0
+ * Pseudo action: redis-bundle-master_post_notify_promoted_0
+ * Resource action: storage:0 notify on metal-1
+ * Resource action: storage:1 notify on metal-2
+ * Resource action: storage:2 notify on metal-3
+ * Pseudo action: storage-clone_confirmed-post_notify_running_0
+ * Resource action: redis:0 notify on redis-bundle-0
+ * Resource action: redis:1 notify on redis-bundle-1
+ * Resource action: redis:2 notify on redis-bundle-2
+ * Pseudo action: redis-bundle-master_confirmed-post_notify_promoted_0
+ * Pseudo action: redis-bundle_promoted_0
+ * Pseudo action: galera-bundle_start_0
+ * Resource action: storage:0 monitor=30000 on metal-1
+ * Resource action: storage:1 monitor=30000 on metal-2
+ * Resource action: storage:2 monitor=30000 on metal-3
+ * Pseudo action: galera-bundle-master_start_0
+ * Resource action: galera-bundle-docker-0 start on metal-1
+ * Resource action: galera-bundle-0 monitor on metal-3
+ * Resource action: galera-bundle-0 monitor on metal-2
+ * Resource action: galera-bundle-0 monitor on metal-1
+ * Resource action: galera-bundle-docker-1 start on metal-2
+ * Resource action: galera-bundle-1 monitor on metal-3
+ * Resource action: galera-bundle-1 monitor on metal-2
+ * Resource action: galera-bundle-1 monitor on metal-1
+ * Resource action: galera-bundle-docker-2 start on metal-3
+ * Resource action: galera-bundle-2 monitor on metal-3
+ * Resource action: galera-bundle-2 monitor on metal-2
+ * Resource action: galera-bundle-2 monitor on metal-1
+ * Resource action: redis:0 monitor=20000 on redis-bundle-0
+ * Resource action: redis:1 monitor=20000 on redis-bundle-1
+ * Resource action: redis:2 monitor=20000 on redis-bundle-2
+ * Resource action: galera-bundle-docker-0 monitor=60000 on metal-1
+ * Resource action: galera-bundle-0 start on metal-1
+ * Resource action: galera-bundle-docker-1 monitor=60000 on metal-2
+ * Resource action: galera-bundle-1 start on metal-2
+ * Resource action: galera-bundle-docker-2 monitor=60000 on metal-3
+ * Resource action: galera-bundle-2 start on metal-3
+ * Resource action: galera:0 start on galera-bundle-0
+ * Resource action: galera:1 start on galera-bundle-1
+ * Resource action: galera:2 start on galera-bundle-2
+ * Pseudo action: galera-bundle-master_running_0
+ * Resource action: galera-bundle-0 monitor=30000 on metal-1
+ * Resource action: galera-bundle-1 monitor=30000 on metal-2
+ * Resource action: galera-bundle-2 monitor=30000 on metal-3
+ * Pseudo action: galera-bundle_running_0
+ * Resource action: galera:0 monitor=30000 on galera-bundle-0
+ * Resource action: galera:0 monitor=20000 on galera-bundle-0
+ * Resource action: galera:1 monitor=30000 on galera-bundle-1
+ * Resource action: galera:1 monitor=20000 on galera-bundle-1
+ * Resource action: galera:2 monitor=30000 on galera-bundle-2
+ * Resource action: galera:2 monitor=20000 on galera-bundle-2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ metal-1 metal-2 metal-3 ]
+ * RemoteOFFLINE: [ rabbitmq-bundle-0 ]
+ * GuestOnline: [ galera-bundle-0 galera-bundle-1 galera-bundle-2 redis-bundle-0 redis-bundle-1 redis-bundle-2 ]
+
+ * Full List of Resources:
+ * Clone Set: storage-clone [storage]:
+ * Started: [ metal-1 metal-2 metal-3 ]
+ * Stopped: [ rabbitmq-bundle-0 ]
+ * Container bundle set: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Unpromoted metal-1
+ * galera-bundle-1 (ocf:heartbeat:galera): Unpromoted metal-2
+ * galera-bundle-2 (ocf:heartbeat:galera): Unpromoted metal-3
+ * Container bundle set: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started metal-1
+ * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Started metal-2
+ * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Started metal-3
+ * Container bundle set: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Promoted metal-1
+ * redis-bundle-1 (ocf:heartbeat:redis): Promoted metal-2
+ * redis-bundle-2 (ocf:heartbeat:redis): Promoted metal-3
diff --git a/cts/scheduler/summary/bundle-order-startup-clone.summary b/cts/scheduler/summary/bundle-order-startup-clone.summary
new file mode 100644
index 0000000..67ee801
--- /dev/null
+++ b/cts/scheduler/summary/bundle-order-startup-clone.summary
@@ -0,0 +1,79 @@
+Current cluster status:
+ * Node List:
+ * Online: [ metal-1 metal-2 metal-3 ]
+ * RemoteOFFLINE: [ rabbitmq-bundle-0 ]
+
+ * Full List of Resources:
+ * Clone Set: storage-clone [storage]:
+ * Stopped: [ metal-1 metal-2 metal-3 rabbitmq-bundle-0 ]
+ * Container bundle: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Stopped
+ * Container bundle: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Stopped
+ * Container bundle: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Stopped
+
+Transition Summary:
+ * Start storage:0 ( metal-1 ) due to unrunnable redis-bundle promote (blocked)
+ * Start storage:1 ( metal-2 ) due to unrunnable redis-bundle promote (blocked)
+ * Start storage:2 ( metal-3 ) due to unrunnable redis-bundle promote (blocked)
+ * Start galera-bundle-docker-0 ( metal-1 ) due to unrunnable storage-clone notified (blocked)
+ * Start galera-bundle-0 ( metal-1 ) due to unrunnable galera-bundle-docker-0 start (blocked)
+ * Start galera:0 ( galera-bundle-0 ) due to unrunnable galera-bundle-docker-0 start (blocked)
+ * Start haproxy-bundle-docker-0 ( metal-2 )
+ * Start redis-bundle-docker-0 ( metal-2 )
+ * Start redis-bundle-0 ( metal-2 )
+ * Start redis:0 ( redis-bundle-0 )
+
+Executing Cluster Transition:
+ * Resource action: storage:0 monitor on metal-1
+ * Resource action: storage:1 monitor on metal-2
+ * Resource action: storage:2 monitor on metal-3
+ * Resource action: galera-bundle-docker-0 monitor on metal-3
+ * Resource action: galera-bundle-docker-0 monitor on metal-2
+ * Resource action: galera-bundle-docker-0 monitor on metal-1
+ * Resource action: haproxy-bundle-docker-0 monitor on metal-3
+ * Resource action: haproxy-bundle-docker-0 monitor on metal-2
+ * Resource action: haproxy-bundle-docker-0 monitor on metal-1
+ * Pseudo action: redis-bundle-master_pre_notify_start_0
+ * Resource action: redis-bundle-docker-0 monitor on metal-3
+ * Resource action: redis-bundle-docker-0 monitor on metal-2
+ * Resource action: redis-bundle-docker-0 monitor on metal-1
+ * Pseudo action: redis-bundle_start_0
+ * Pseudo action: haproxy-bundle_start_0
+ * Resource action: haproxy-bundle-docker-0 start on metal-2
+ * Pseudo action: redis-bundle-master_confirmed-pre_notify_start_0
+ * Pseudo action: redis-bundle-master_start_0
+ * Resource action: redis-bundle-docker-0 start on metal-2
+ * Resource action: redis-bundle-0 monitor on metal-3
+ * Resource action: redis-bundle-0 monitor on metal-2
+ * Resource action: redis-bundle-0 monitor on metal-1
+ * Pseudo action: haproxy-bundle_running_0
+ * Resource action: haproxy-bundle-docker-0 monitor=60000 on metal-2
+ * Resource action: redis-bundle-docker-0 monitor=60000 on metal-2
+ * Resource action: redis-bundle-0 start on metal-2
+ * Resource action: redis:0 start on redis-bundle-0
+ * Pseudo action: redis-bundle-master_running_0
+ * Resource action: redis-bundle-0 monitor=30000 on metal-2
+ * Pseudo action: redis-bundle-master_post_notify_running_0
+ * Resource action: redis:0 notify on redis-bundle-0
+ * Pseudo action: redis-bundle-master_confirmed-post_notify_running_0
+ * Pseudo action: redis-bundle_running_0
+ * Resource action: redis:0 monitor=60000 on redis-bundle-0
+ * Resource action: redis:0 monitor=45000 on redis-bundle-0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ metal-1 metal-2 metal-3 ]
+ * RemoteOFFLINE: [ rabbitmq-bundle-0 ]
+ * GuestOnline: [ redis-bundle-0 ]
+
+ * Full List of Resources:
+ * Clone Set: storage-clone [storage]:
+ * Stopped: [ metal-1 metal-2 metal-3 rabbitmq-bundle-0 ]
+ * Container bundle: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Stopped
+ * Container bundle: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started metal-2
+ * Container bundle: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Unpromoted metal-2
diff --git a/cts/scheduler/summary/bundle-order-startup.summary b/cts/scheduler/summary/bundle-order-startup.summary
new file mode 100644
index 0000000..7204890
--- /dev/null
+++ b/cts/scheduler/summary/bundle-order-startup.summary
@@ -0,0 +1,141 @@
+Current cluster status:
+ * Node List:
+ * Online: [ undercloud ]
+
+ * Full List of Resources:
+ * Container bundle: rabbitmq-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-rabbitmq:latest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Stopped
+ * Container bundle: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Stopped
+ * Container bundle: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Stopped
+ * ip-192.168.122.254 (ocf:heartbeat:IPaddr2): Stopped
+ * ip-192.168.122.250 (ocf:heartbeat:IPaddr2): Stopped
+ * ip-192.168.122.249 (ocf:heartbeat:IPaddr2): Stopped
+ * ip-192.168.122.253 (ocf:heartbeat:IPaddr2): Stopped
+ * ip-192.168.122.247 (ocf:heartbeat:IPaddr2): Stopped
+ * ip-192.168.122.248 (ocf:heartbeat:IPaddr2): Stopped
+ * Container bundle: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Stopped
+ * Container bundle: openstack-cinder-volume [192.168.24.1:8787/tripleoupstream/centos-binary-cinder-volume:latest]:
+ * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Stopped
+
+Transition Summary:
+ * Start rabbitmq-bundle-docker-0 ( undercloud )
+ * Start rabbitmq-bundle-0 ( undercloud )
+ * Start rabbitmq:0 ( rabbitmq-bundle-0 )
+ * Start galera-bundle-docker-0 ( undercloud )
+ * Start galera-bundle-0 ( undercloud )
+ * Start galera:0 ( galera-bundle-0 )
+ * Start redis-bundle-docker-0 ( undercloud )
+ * Start redis-bundle-0 ( undercloud )
+ * Start redis:0 ( redis-bundle-0 )
+ * Start ip-192.168.122.254 ( undercloud )
+ * Start ip-192.168.122.250 ( undercloud )
+ * Start ip-192.168.122.249 ( undercloud )
+ * Start ip-192.168.122.253 ( undercloud )
+ * Start ip-192.168.122.247 ( undercloud )
+ * Start ip-192.168.122.248 ( undercloud )
+ * Start haproxy-bundle-docker-0 ( undercloud )
+ * Start openstack-cinder-volume-docker-0 ( undercloud )
+
+Executing Cluster Transition:
+ * Pseudo action: rabbitmq-bundle-clone_pre_notify_start_0
+ * Resource action: rabbitmq-bundle-docker-0 monitor on undercloud
+ * Resource action: galera-bundle-docker-0 monitor on undercloud
+ * Pseudo action: redis-bundle-master_pre_notify_start_0
+ * Resource action: redis-bundle-docker-0 monitor on undercloud
+ * Resource action: ip-192.168.122.254 monitor on undercloud
+ * Resource action: ip-192.168.122.250 monitor on undercloud
+ * Resource action: ip-192.168.122.249 monitor on undercloud
+ * Resource action: ip-192.168.122.253 monitor on undercloud
+ * Resource action: ip-192.168.122.247 monitor on undercloud
+ * Resource action: ip-192.168.122.248 monitor on undercloud
+ * Resource action: haproxy-bundle-docker-0 monitor on undercloud
+ * Resource action: openstack-cinder-volume-docker-0 monitor on undercloud
+ * Pseudo action: openstack-cinder-volume_start_0
+ * Pseudo action: rabbitmq-bundle_start_0
+ * Pseudo action: rabbitmq-bundle-clone_confirmed-pre_notify_start_0
+ * Pseudo action: rabbitmq-bundle-clone_start_0
+ * Resource action: rabbitmq-bundle-docker-0 start on undercloud
+ * Resource action: rabbitmq-bundle-0 monitor on undercloud
+ * Pseudo action: redis-bundle-master_confirmed-pre_notify_start_0
+ * Resource action: ip-192.168.122.254 start on undercloud
+ * Resource action: ip-192.168.122.250 start on undercloud
+ * Resource action: ip-192.168.122.249 start on undercloud
+ * Resource action: ip-192.168.122.253 start on undercloud
+ * Resource action: ip-192.168.122.247 start on undercloud
+ * Resource action: ip-192.168.122.248 start on undercloud
+ * Resource action: openstack-cinder-volume-docker-0 start on undercloud
+ * Pseudo action: openstack-cinder-volume_running_0
+ * Pseudo action: haproxy-bundle_start_0
+ * Resource action: rabbitmq-bundle-docker-0 monitor=60000 on undercloud
+ * Resource action: rabbitmq-bundle-0 start on undercloud
+ * Resource action: ip-192.168.122.254 monitor=10000 on undercloud
+ * Resource action: ip-192.168.122.250 monitor=10000 on undercloud
+ * Resource action: ip-192.168.122.249 monitor=10000 on undercloud
+ * Resource action: ip-192.168.122.253 monitor=10000 on undercloud
+ * Resource action: ip-192.168.122.247 monitor=10000 on undercloud
+ * Resource action: ip-192.168.122.248 monitor=10000 on undercloud
+ * Resource action: haproxy-bundle-docker-0 start on undercloud
+ * Resource action: openstack-cinder-volume-docker-0 monitor=60000 on undercloud
+ * Pseudo action: haproxy-bundle_running_0
+ * Pseudo action: redis-bundle_start_0
+ * Pseudo action: galera-bundle_start_0
+ * Resource action: rabbitmq:0 start on rabbitmq-bundle-0
+ * Pseudo action: rabbitmq-bundle-clone_running_0
+ * Resource action: rabbitmq-bundle-0 monitor=30000 on undercloud
+ * Pseudo action: galera-bundle-master_start_0
+ * Resource action: galera-bundle-docker-0 start on undercloud
+ * Resource action: galera-bundle-0 monitor on undercloud
+ * Pseudo action: redis-bundle-master_start_0
+ * Resource action: redis-bundle-docker-0 start on undercloud
+ * Resource action: redis-bundle-0 monitor on undercloud
+ * Resource action: haproxy-bundle-docker-0 monitor=60000 on undercloud
+ * Pseudo action: rabbitmq-bundle-clone_post_notify_running_0
+ * Resource action: galera-bundle-docker-0 monitor=60000 on undercloud
+ * Resource action: galera-bundle-0 start on undercloud
+ * Resource action: redis-bundle-docker-0 monitor=60000 on undercloud
+ * Resource action: redis-bundle-0 start on undercloud
+ * Resource action: rabbitmq:0 notify on rabbitmq-bundle-0
+ * Pseudo action: rabbitmq-bundle-clone_confirmed-post_notify_running_0
+ * Resource action: galera:0 start on galera-bundle-0
+ * Pseudo action: galera-bundle-master_running_0
+ * Resource action: galera-bundle-0 monitor=30000 on undercloud
+ * Resource action: redis:0 start on redis-bundle-0
+ * Pseudo action: redis-bundle-master_running_0
+ * Resource action: redis-bundle-0 monitor=30000 on undercloud
+ * Pseudo action: galera-bundle_running_0
+ * Pseudo action: rabbitmq-bundle_running_0
+ * Resource action: rabbitmq:0 monitor=10000 on rabbitmq-bundle-0
+ * Resource action: galera:0 monitor=30000 on galera-bundle-0
+ * Resource action: galera:0 monitor=20000 on galera-bundle-0
+ * Pseudo action: redis-bundle-master_post_notify_running_0
+ * Resource action: redis:0 notify on redis-bundle-0
+ * Pseudo action: redis-bundle-master_confirmed-post_notify_running_0
+ * Pseudo action: redis-bundle_running_0
+ * Resource action: redis:0 monitor=60000 on redis-bundle-0
+ * Resource action: redis:0 monitor=45000 on redis-bundle-0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ undercloud ]
+ * GuestOnline: [ galera-bundle-0 rabbitmq-bundle-0 redis-bundle-0 ]
+
+ * Full List of Resources:
+ * Container bundle: rabbitmq-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-rabbitmq:latest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started undercloud
+ * Container bundle: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Unpromoted undercloud
+ * Container bundle: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Unpromoted undercloud
+ * ip-192.168.122.254 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.250 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.249 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.253 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.247 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.248 (ocf:heartbeat:IPaddr2): Started undercloud
+ * Container bundle: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started undercloud
+ * Container bundle: openstack-cinder-volume [192.168.24.1:8787/tripleoupstream/centos-binary-cinder-volume:latest]:
+ * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Started undercloud
diff --git a/cts/scheduler/summary/bundle-order-stop-clone.summary b/cts/scheduler/summary/bundle-order-stop-clone.summary
new file mode 100644
index 0000000..46708d0
--- /dev/null
+++ b/cts/scheduler/summary/bundle-order-stop-clone.summary
@@ -0,0 +1,88 @@
+Current cluster status:
+ * Node List:
+ * Online: [ metal-1 metal-2 metal-3 ]
+ * RemoteOFFLINE: [ rabbitmq-bundle-0 ]
+ * GuestOnline: [ galera-bundle-0 galera-bundle-1 galera-bundle-2 redis-bundle-0 redis-bundle-1 redis-bundle-2 ]
+
+ * Full List of Resources:
+ * Clone Set: storage-clone [storage]:
+ * Started: [ metal-1 metal-2 metal-3 ]
+ * Stopped: [ rabbitmq-bundle-0 ]
+ * Container bundle set: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Unpromoted metal-1
+ * galera-bundle-1 (ocf:heartbeat:galera): Unpromoted metal-2
+ * galera-bundle-2 (ocf:heartbeat:galera): Unpromoted metal-3
+ * Container bundle set: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started metal-1
+ * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Started metal-2
+ * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Started metal-3
+ * Container bundle set: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Promoted metal-1
+ * redis-bundle-1 (ocf:heartbeat:redis): Promoted metal-2
+ * redis-bundle-2 (ocf:heartbeat:redis): Promoted metal-3
+
+Transition Summary:
+ * Stop storage:0 ( metal-1 ) due to node availability
+ * Stop galera-bundle-docker-0 ( metal-1 ) due to node availability
+ * Stop galera-bundle-0 ( metal-1 ) due to unrunnable galera-bundle-docker-0 start
+ * Stop galera:0 ( Unpromoted galera-bundle-0 ) due to unrunnable galera-bundle-docker-0 start
+
+Executing Cluster Transition:
+ * Pseudo action: storage-clone_pre_notify_stop_0
+ * Resource action: galera-bundle-0 monitor on metal-3
+ * Resource action: galera-bundle-0 monitor on metal-2
+ * Resource action: galera-bundle-1 monitor on metal-3
+ * Resource action: galera-bundle-1 monitor on metal-1
+ * Resource action: galera-bundle-2 monitor on metal-2
+ * Resource action: galera-bundle-2 monitor on metal-1
+ * Resource action: redis-bundle-0 monitor on metal-3
+ * Resource action: redis-bundle-0 monitor on metal-2
+ * Resource action: redis-bundle-1 monitor on metal-3
+ * Resource action: redis-bundle-1 monitor on metal-1
+ * Resource action: redis-bundle-2 monitor on metal-2
+ * Resource action: redis-bundle-2 monitor on metal-1
+ * Pseudo action: galera-bundle_stop_0
+ * Resource action: storage:0 notify on metal-1
+ * Resource action: storage:1 notify on metal-2
+ * Resource action: storage:2 notify on metal-3
+ * Pseudo action: storage-clone_confirmed-pre_notify_stop_0
+ * Pseudo action: galera-bundle-master_stop_0
+ * Resource action: galera:0 stop on galera-bundle-0
+ * Pseudo action: galera-bundle-master_stopped_0
+ * Resource action: galera-bundle-0 stop on metal-1
+ * Resource action: galera-bundle-docker-0 stop on metal-1
+ * Pseudo action: galera-bundle_stopped_0
+ * Pseudo action: galera-bundle_start_0
+ * Pseudo action: storage-clone_stop_0
+ * Pseudo action: galera-bundle-master_start_0
+ * Resource action: storage:0 stop on metal-1
+ * Pseudo action: storage-clone_stopped_0
+ * Pseudo action: galera-bundle-master_running_0
+ * Pseudo action: galera-bundle_running_0
+ * Pseudo action: storage-clone_post_notify_stopped_0
+ * Resource action: storage:1 notify on metal-2
+ * Resource action: storage:2 notify on metal-3
+ * Pseudo action: storage-clone_confirmed-post_notify_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ metal-1 metal-2 metal-3 ]
+ * RemoteOFFLINE: [ rabbitmq-bundle-0 ]
+ * GuestOnline: [ galera-bundle-1 galera-bundle-2 redis-bundle-0 redis-bundle-1 redis-bundle-2 ]
+
+ * Full List of Resources:
+ * Clone Set: storage-clone [storage]:
+ * Started: [ metal-2 metal-3 ]
+ * Stopped: [ metal-1 rabbitmq-bundle-0 ]
+ * Container bundle set: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Stopped
+ * galera-bundle-1 (ocf:heartbeat:galera): Unpromoted metal-2
+ * galera-bundle-2 (ocf:heartbeat:galera): Unpromoted metal-3
+ * Container bundle set: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started metal-1
+ * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Started metal-2
+ * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Started metal-3
+ * Container bundle set: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Promoted metal-1
+ * redis-bundle-1 (ocf:heartbeat:redis): Promoted metal-2
+ * redis-bundle-2 (ocf:heartbeat:redis): Promoted metal-3
diff --git a/cts/scheduler/summary/bundle-order-stop-on-remote.summary b/cts/scheduler/summary/bundle-order-stop-on-remote.summary
new file mode 100644
index 0000000..5e2e367
--- /dev/null
+++ b/cts/scheduler/summary/bundle-order-stop-on-remote.summary
@@ -0,0 +1,224 @@
+Current cluster status:
+ * Node List:
+ * RemoteNode database-0: UNCLEAN (offline)
+ * RemoteNode database-2: UNCLEAN (offline)
+ * Online: [ controller-0 controller-1 controller-2 ]
+ * RemoteOnline: [ database-1 messaging-0 messaging-1 messaging-2 ]
+ * GuestOnline: [ galera-bundle-1 rabbitmq-bundle-0 rabbitmq-bundle-1 rabbitmq-bundle-2 redis-bundle-0 redis-bundle-2 ]
+
+ * Full List of Resources:
+ * database-0 (ocf:pacemaker:remote): Stopped
+ * database-1 (ocf:pacemaker:remote): Started controller-2
+ * database-2 (ocf:pacemaker:remote): Stopped
+ * messaging-0 (ocf:pacemaker:remote): Started controller-2
+ * messaging-1 (ocf:pacemaker:remote): Started controller-2
+ * messaging-2 (ocf:pacemaker:remote): Started controller-2
+ * Container bundle set: rabbitmq-bundle [192.168.24.1:8787/rhosp12/openstack-rabbitmq-docker:pcmklatest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started messaging-0
+ * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started messaging-1
+ * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started messaging-2
+ * Container bundle set: galera-bundle [192.168.24.1:8787/rhosp12/openstack-mariadb-docker:pcmklatest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): FAILED Promoted database-0 (UNCLEAN)
+ * galera-bundle-1 (ocf:heartbeat:galera): Promoted database-1
+ * galera-bundle-2 (ocf:heartbeat:galera): FAILED Promoted database-2 (UNCLEAN)
+ * Container bundle set: redis-bundle [192.168.24.1:8787/rhosp12/openstack-redis-docker:pcmklatest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Unpromoted controller-0
+ * redis-bundle-1 (ocf:heartbeat:redis): Stopped
+ * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted controller-2
+ * ip-192.168.24.11 (ocf:heartbeat:IPaddr2): Stopped
+ * ip-10.0.0.104 (ocf:heartbeat:IPaddr2): Stopped
+ * ip-172.17.1.19 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.1.11 (ocf:heartbeat:IPaddr2): Stopped
+ * ip-172.17.3.13 (ocf:heartbeat:IPaddr2): Stopped
+ * ip-172.17.4.19 (ocf:heartbeat:IPaddr2): Started controller-2
+ * Container bundle set: haproxy-bundle [192.168.24.1:8787/rhosp12/openstack-haproxy-docker:pcmklatest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started controller-0
+ * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Stopped
+ * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Started controller-2
+ * openstack-cinder-volume (systemd:openstack-cinder-volume): Stopped
+ * stonith-fence_ipmilan-525400244e09 (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-525400cdec10 (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-525400c709f7 (stonith:fence_ipmilan): Stopped
+ * stonith-fence_ipmilan-525400a7f9e0 (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-525400a25787 (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-5254005ea387 (stonith:fence_ipmilan): Stopped
+ * stonith-fence_ipmilan-525400542c06 (stonith:fence_ipmilan): Stopped
+ * stonith-fence_ipmilan-525400aac413 (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-525400498d34 (stonith:fence_ipmilan): Stopped
+
+Transition Summary:
+ * Fence (reboot) galera-bundle-2 (resource: galera-bundle-docker-2) 'guest is unclean'
+ * Fence (reboot) galera-bundle-0 (resource: galera-bundle-docker-0) 'guest is unclean'
+ * Start database-0 ( controller-0 )
+ * Start database-2 ( controller-1 )
+ * Recover galera-bundle-docker-0 ( database-0 )
+ * Start galera-bundle-0 ( controller-0 )
+ * Recover galera:0 ( Promoted galera-bundle-0 )
+ * Recover galera-bundle-docker-2 ( database-2 )
+ * Start galera-bundle-2 ( controller-1 )
+ * Recover galera:2 ( Promoted galera-bundle-2 )
+ * Promote redis:0 ( Unpromoted -> Promoted redis-bundle-0 )
+ * Start redis-bundle-docker-1 ( controller-1 )
+ * Start redis-bundle-1 ( controller-1 )
+ * Start redis:1 ( redis-bundle-1 )
+ * Start ip-192.168.24.11 ( controller-0 )
+ * Start ip-10.0.0.104 ( controller-1 )
+ * Start ip-172.17.1.11 ( controller-0 )
+ * Start ip-172.17.3.13 ( controller-1 )
+ * Start haproxy-bundle-docker-1 ( controller-1 )
+ * Start openstack-cinder-volume ( controller-0 )
+ * Start stonith-fence_ipmilan-525400c709f7 ( controller-1 )
+ * Start stonith-fence_ipmilan-5254005ea387 ( controller-1 )
+ * Start stonith-fence_ipmilan-525400542c06 ( controller-0 )
+ * Start stonith-fence_ipmilan-525400498d34 ( controller-1 )
+
+Executing Cluster Transition:
+ * Resource action: database-0 start on controller-0
+ * Resource action: database-2 start on controller-1
+ * Pseudo action: redis-bundle-master_pre_notify_start_0
+ * Resource action: stonith-fence_ipmilan-525400c709f7 start on controller-1
+ * Resource action: stonith-fence_ipmilan-5254005ea387 start on controller-1
+ * Resource action: stonith-fence_ipmilan-525400542c06 start on controller-0
+ * Resource action: stonith-fence_ipmilan-525400498d34 start on controller-1
+ * Pseudo action: redis-bundle_start_0
+ * Pseudo action: galera-bundle_demote_0
+ * Resource action: database-0 monitor=20000 on controller-0
+ * Resource action: database-2 monitor=20000 on controller-1
+ * Pseudo action: galera-bundle-master_demote_0
+ * Resource action: redis notify on redis-bundle-0
+ * Resource action: redis notify on redis-bundle-2
+ * Pseudo action: redis-bundle-master_confirmed-pre_notify_start_0
+ * Pseudo action: redis-bundle-master_start_0
+ * Resource action: stonith-fence_ipmilan-525400c709f7 monitor=60000 on controller-1
+ * Resource action: stonith-fence_ipmilan-5254005ea387 monitor=60000 on controller-1
+ * Resource action: stonith-fence_ipmilan-525400542c06 monitor=60000 on controller-0
+ * Resource action: stonith-fence_ipmilan-525400498d34 monitor=60000 on controller-1
+ * Pseudo action: galera_demote_0
+ * Pseudo action: galera_demote_0
+ * Pseudo action: galera-bundle-master_demoted_0
+ * Pseudo action: galera-bundle_demoted_0
+ * Pseudo action: galera-bundle_stop_0
+ * Resource action: galera-bundle-docker-0 stop on database-0
+ * Resource action: galera-bundle-docker-2 stop on database-2
+ * Pseudo action: stonith-galera-bundle-2-reboot on galera-bundle-2
+ * Pseudo action: stonith-galera-bundle-0-reboot on galera-bundle-0
+ * Pseudo action: galera-bundle-master_stop_0
+ * Resource action: redis-bundle-docker-1 start on controller-1
+ * Resource action: redis-bundle-1 monitor on controller-1
+ * Resource action: ip-192.168.24.11 start on controller-0
+ * Resource action: ip-10.0.0.104 start on controller-1
+ * Resource action: ip-172.17.1.11 start on controller-0
+ * Resource action: ip-172.17.3.13 start on controller-1
+ * Resource action: openstack-cinder-volume start on controller-0
+ * Pseudo action: haproxy-bundle_start_0
+ * Pseudo action: galera_stop_0
+ * Resource action: redis-bundle-docker-1 monitor=60000 on controller-1
+ * Resource action: redis-bundle-1 start on controller-1
+ * Resource action: ip-192.168.24.11 monitor=10000 on controller-0
+ * Resource action: ip-10.0.0.104 monitor=10000 on controller-1
+ * Resource action: ip-172.17.1.11 monitor=10000 on controller-0
+ * Resource action: ip-172.17.3.13 monitor=10000 on controller-1
+ * Resource action: haproxy-bundle-docker-1 start on controller-1
+ * Resource action: openstack-cinder-volume monitor=60000 on controller-0
+ * Pseudo action: haproxy-bundle_running_0
+ * Pseudo action: galera_stop_0
+ * Pseudo action: galera-bundle-master_stopped_0
+ * Resource action: redis start on redis-bundle-1
+ * Pseudo action: redis-bundle-master_running_0
+ * Resource action: redis-bundle-1 monitor=30000 on controller-1
+ * Resource action: haproxy-bundle-docker-1 monitor=60000 on controller-1
+ * Pseudo action: galera-bundle_stopped_0
+ * Pseudo action: galera-bundle_start_0
+ * Pseudo action: galera-bundle-master_start_0
+ * Resource action: galera-bundle-docker-0 start on database-0
+ * Resource action: galera-bundle-0 monitor on controller-1
+ * Resource action: galera-bundle-docker-2 start on database-2
+ * Resource action: galera-bundle-2 monitor on controller-1
+ * Pseudo action: redis-bundle-master_post_notify_running_0
+ * Resource action: galera-bundle-docker-0 monitor=60000 on database-0
+ * Resource action: galera-bundle-0 start on controller-0
+ * Resource action: galera-bundle-docker-2 monitor=60000 on database-2
+ * Resource action: galera-bundle-2 start on controller-1
+ * Resource action: redis notify on redis-bundle-0
+ * Resource action: redis notify on redis-bundle-1
+ * Resource action: redis notify on redis-bundle-2
+ * Pseudo action: redis-bundle-master_confirmed-post_notify_running_0
+ * Pseudo action: redis-bundle_running_0
+ * Resource action: galera start on galera-bundle-0
+ * Resource action: galera start on galera-bundle-2
+ * Pseudo action: galera-bundle-master_running_0
+ * Resource action: galera-bundle-0 monitor=30000 on controller-0
+ * Resource action: galera-bundle-2 monitor=30000 on controller-1
+ * Pseudo action: redis-bundle-master_pre_notify_promote_0
+ * Pseudo action: redis-bundle_promote_0
+ * Pseudo action: galera-bundle_running_0
+ * Resource action: redis notify on redis-bundle-0
+ * Resource action: redis notify on redis-bundle-1
+ * Resource action: redis notify on redis-bundle-2
+ * Pseudo action: redis-bundle-master_confirmed-pre_notify_promote_0
+ * Pseudo action: redis-bundle-master_promote_0
+ * Pseudo action: galera-bundle_promote_0
+ * Pseudo action: galera-bundle-master_promote_0
+ * Resource action: redis promote on redis-bundle-0
+ * Pseudo action: redis-bundle-master_promoted_0
+ * Resource action: galera promote on galera-bundle-0
+ * Resource action: galera promote on galera-bundle-2
+ * Pseudo action: galera-bundle-master_promoted_0
+ * Pseudo action: redis-bundle-master_post_notify_promoted_0
+ * Pseudo action: galera-bundle_promoted_0
+ * Resource action: galera monitor=10000 on galera-bundle-0
+ * Resource action: galera monitor=10000 on galera-bundle-2
+ * Resource action: redis notify on redis-bundle-0
+ * Resource action: redis notify on redis-bundle-1
+ * Resource action: redis notify on redis-bundle-2
+ * Pseudo action: redis-bundle-master_confirmed-post_notify_promoted_0
+ * Pseudo action: redis-bundle_promoted_0
+ * Resource action: redis monitor=20000 on redis-bundle-0
+ * Resource action: redis monitor=60000 on redis-bundle-1
+ * Resource action: redis monitor=45000 on redis-bundle-1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ controller-0 controller-1 controller-2 ]
+ * RemoteOnline: [ database-0 database-1 database-2 messaging-0 messaging-1 messaging-2 ]
+ * GuestOnline: [ galera-bundle-0 galera-bundle-1 galera-bundle-2 rabbitmq-bundle-0 rabbitmq-bundle-1 rabbitmq-bundle-2 redis-bundle-0 redis-bundle-1 redis-bundle-2 ]
+
+ * Full List of Resources:
+ * database-0 (ocf:pacemaker:remote): Started controller-0
+ * database-1 (ocf:pacemaker:remote): Started controller-2
+ * database-2 (ocf:pacemaker:remote): Started controller-1
+ * messaging-0 (ocf:pacemaker:remote): Started controller-2
+ * messaging-1 (ocf:pacemaker:remote): Started controller-2
+ * messaging-2 (ocf:pacemaker:remote): Started controller-2
+ * Container bundle set: rabbitmq-bundle [192.168.24.1:8787/rhosp12/openstack-rabbitmq-docker:pcmklatest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started messaging-0
+ * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started messaging-1
+ * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started messaging-2
+ * Container bundle set: galera-bundle [192.168.24.1:8787/rhosp12/openstack-mariadb-docker:pcmklatest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Promoted database-0
+ * galera-bundle-1 (ocf:heartbeat:galera): Promoted database-1
+ * galera-bundle-2 (ocf:heartbeat:galera): Promoted database-2
+ * Container bundle set: redis-bundle [192.168.24.1:8787/rhosp12/openstack-redis-docker:pcmklatest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Promoted controller-0
+ * redis-bundle-1 (ocf:heartbeat:redis): Unpromoted controller-1
+ * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted controller-2
+ * ip-192.168.24.11 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-10.0.0.104 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.1.19 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.1.11 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.3.13 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.4.19 (ocf:heartbeat:IPaddr2): Started controller-2
+ * Container bundle set: haproxy-bundle [192.168.24.1:8787/rhosp12/openstack-haproxy-docker:pcmklatest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started controller-0
+ * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Started controller-1
+ * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Started controller-2
+ * openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0
+ * stonith-fence_ipmilan-525400244e09 (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-525400cdec10 (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-525400c709f7 (stonith:fence_ipmilan): Started controller-1
+ * stonith-fence_ipmilan-525400a7f9e0 (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-525400a25787 (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-5254005ea387 (stonith:fence_ipmilan): Started controller-1
+ * stonith-fence_ipmilan-525400542c06 (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-525400aac413 (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-525400498d34 (stonith:fence_ipmilan): Started controller-1
diff --git a/cts/scheduler/summary/bundle-order-stop.summary b/cts/scheduler/summary/bundle-order-stop.summary
new file mode 100644
index 0000000..5fc2efe
--- /dev/null
+++ b/cts/scheduler/summary/bundle-order-stop.summary
@@ -0,0 +1,127 @@
+Current cluster status:
+ * Node List:
+ * Online: [ undercloud ]
+ * GuestOnline: [ galera-bundle-0 rabbitmq-bundle-0 redis-bundle-0 ]
+
+ * Full List of Resources:
+ * Container bundle: rabbitmq-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-rabbitmq:latest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started undercloud
+ * Container bundle: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Promoted undercloud
+ * Container bundle: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Promoted undercloud
+ * ip-192.168.122.254 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.250 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.249 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.253 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.247 (ocf:heartbeat:IPaddr2): Started undercloud
+ * ip-192.168.122.248 (ocf:heartbeat:IPaddr2): Started undercloud
+ * Container bundle: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started undercloud
+ * Container bundle: openstack-cinder-volume [192.168.24.1:8787/tripleoupstream/centos-binary-cinder-volume:latest]:
+ * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Started undercloud
+
+Transition Summary:
+ * Stop rabbitmq-bundle-docker-0 ( undercloud ) due to node availability
+ * Stop rabbitmq-bundle-0 ( undercloud ) due to node availability
+ * Stop rabbitmq:0 ( rabbitmq-bundle-0 ) due to colocation with haproxy-bundle-docker-0
+ * Stop galera-bundle-docker-0 ( undercloud ) due to node availability
+ * Stop galera-bundle-0 ( undercloud ) due to node availability
+ * Stop galera:0 ( Promoted galera-bundle-0 ) due to unrunnable galera-bundle-0 start
+ * Stop redis-bundle-docker-0 ( undercloud ) due to node availability
+ * Stop redis-bundle-0 ( undercloud ) due to node availability
+ * Stop redis:0 ( Promoted redis-bundle-0 ) due to unrunnable redis-bundle-0 start
+ * Stop ip-192.168.122.254 ( undercloud ) due to node availability
+ * Stop ip-192.168.122.250 ( undercloud ) due to node availability
+ * Stop ip-192.168.122.249 ( undercloud ) due to node availability
+ * Stop ip-192.168.122.253 ( undercloud ) due to node availability
+ * Stop ip-192.168.122.247 ( undercloud ) due to node availability
+ * Stop ip-192.168.122.248 ( undercloud ) due to node availability
+ * Stop haproxy-bundle-docker-0 ( undercloud ) due to node availability
+ * Stop openstack-cinder-volume-docker-0 ( undercloud ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: rabbitmq-bundle-clone_pre_notify_stop_0
+ * Resource action: galera cancel=10000 on galera-bundle-0
+ * Resource action: redis cancel=20000 on redis-bundle-0
+ * Pseudo action: redis-bundle-master_pre_notify_demote_0
+ * Pseudo action: openstack-cinder-volume_stop_0
+ * Pseudo action: haproxy-bundle_stop_0
+ * Pseudo action: redis-bundle_demote_0
+ * Pseudo action: galera-bundle_demote_0
+ * Pseudo action: rabbitmq-bundle_stop_0
+ * Resource action: rabbitmq notify on rabbitmq-bundle-0
+ * Pseudo action: rabbitmq-bundle-clone_confirmed-pre_notify_stop_0
+ * Pseudo action: rabbitmq-bundle-clone_stop_0
+ * Pseudo action: galera-bundle-master_demote_0
+ * Resource action: redis notify on redis-bundle-0
+ * Pseudo action: redis-bundle-master_confirmed-pre_notify_demote_0
+ * Pseudo action: redis-bundle-master_demote_0
+ * Resource action: haproxy-bundle-docker-0 stop on undercloud
+ * Resource action: openstack-cinder-volume-docker-0 stop on undercloud
+ * Pseudo action: openstack-cinder-volume_stopped_0
+ * Pseudo action: haproxy-bundle_stopped_0
+ * Resource action: rabbitmq stop on rabbitmq-bundle-0
+ * Pseudo action: rabbitmq-bundle-clone_stopped_0
+ * Resource action: rabbitmq-bundle-0 stop on undercloud
+ * Resource action: galera demote on galera-bundle-0
+ * Pseudo action: galera-bundle-master_demoted_0
+ * Resource action: redis demote on redis-bundle-0
+ * Pseudo action: redis-bundle-master_demoted_0
+ * Resource action: ip-192.168.122.254 stop on undercloud
+ * Resource action: ip-192.168.122.250 stop on undercloud
+ * Resource action: ip-192.168.122.249 stop on undercloud
+ * Resource action: ip-192.168.122.253 stop on undercloud
+ * Resource action: ip-192.168.122.247 stop on undercloud
+ * Resource action: ip-192.168.122.248 stop on undercloud
+ * Pseudo action: galera-bundle_demoted_0
+ * Pseudo action: galera-bundle_stop_0
+ * Pseudo action: rabbitmq-bundle-clone_post_notify_stopped_0
+ * Resource action: rabbitmq-bundle-docker-0 stop on undercloud
+ * Pseudo action: galera-bundle-master_stop_0
+ * Pseudo action: redis-bundle-master_post_notify_demoted_0
+ * Pseudo action: rabbitmq-bundle-clone_confirmed-post_notify_stopped_0
+ * Resource action: galera stop on galera-bundle-0
+ * Pseudo action: galera-bundle-master_stopped_0
+ * Resource action: galera-bundle-0 stop on undercloud
+ * Resource action: redis notify on redis-bundle-0
+ * Pseudo action: redis-bundle-master_confirmed-post_notify_demoted_0
+ * Pseudo action: redis-bundle-master_pre_notify_stop_0
+ * Pseudo action: redis-bundle_demoted_0
+ * Pseudo action: rabbitmq-bundle_stopped_0
+ * Resource action: galera-bundle-docker-0 stop on undercloud
+ * Resource action: redis notify on redis-bundle-0
+ * Pseudo action: redis-bundle-master_confirmed-pre_notify_stop_0
+ * Pseudo action: galera-bundle_stopped_0
+ * Pseudo action: redis-bundle_stop_0
+ * Pseudo action: redis-bundle-master_stop_0
+ * Resource action: redis stop on redis-bundle-0
+ * Pseudo action: redis-bundle-master_stopped_0
+ * Resource action: redis-bundle-0 stop on undercloud
+ * Pseudo action: redis-bundle-master_post_notify_stopped_0
+ * Resource action: redis-bundle-docker-0 stop on undercloud
+ * Cluster action: do_shutdown on undercloud
+ * Pseudo action: redis-bundle-master_confirmed-post_notify_stopped_0
+ * Pseudo action: redis-bundle_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ undercloud ]
+
+ * Full List of Resources:
+ * Container bundle: rabbitmq-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-rabbitmq:latest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Stopped
+ * Container bundle: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Stopped
+ * Container bundle: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Stopped
+ * ip-192.168.122.254 (ocf:heartbeat:IPaddr2): Stopped
+ * ip-192.168.122.250 (ocf:heartbeat:IPaddr2): Stopped
+ * ip-192.168.122.249 (ocf:heartbeat:IPaddr2): Stopped
+ * ip-192.168.122.253 (ocf:heartbeat:IPaddr2): Stopped
+ * ip-192.168.122.247 (ocf:heartbeat:IPaddr2): Stopped
+ * ip-192.168.122.248 (ocf:heartbeat:IPaddr2): Stopped
+ * Container bundle: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Stopped
+ * Container bundle: openstack-cinder-volume [192.168.24.1:8787/tripleoupstream/centos-binary-cinder-volume:latest]:
+ * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Stopped
diff --git a/cts/scheduler/summary/bundle-probe-order-1.summary b/cts/scheduler/summary/bundle-probe-order-1.summary
new file mode 100644
index 0000000..c885e43
--- /dev/null
+++ b/cts/scheduler/summary/bundle-probe-order-1.summary
@@ -0,0 +1,34 @@
+Using the original execution date of: 2017-10-12 07:31:56Z
+Current cluster status:
+ * Node List:
+ * Online: [ centos1 centos2 centos3 ]
+
+ * Full List of Resources:
+ * Container bundle set: galera-bundle [docker.io/tripleoupstream/centos-binary-mariadb:latest] (unmanaged):
+ * galera-bundle-0 (ocf:heartbeat:galera): Stopped (unmanaged)
+ * galera-bundle-1 (ocf:heartbeat:galera): Stopped (unmanaged)
+ * galera-bundle-2 (ocf:heartbeat:galera): Stopped (unmanaged)
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: galera-bundle-docker-0 monitor on centos3
+ * Resource action: galera-bundle-docker-0 monitor on centos2
+ * Resource action: galera-bundle-docker-0 monitor on centos1
+ * Resource action: galera-bundle-docker-1 monitor on centos3
+ * Resource action: galera-bundle-docker-1 monitor on centos2
+ * Resource action: galera-bundle-docker-1 monitor on centos1
+ * Resource action: galera-bundle-docker-2 monitor on centos3
+ * Resource action: galera-bundle-docker-2 monitor on centos2
+ * Resource action: galera-bundle-docker-2 monitor on centos1
+Using the original execution date of: 2017-10-12 07:31:56Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ centos1 centos2 centos3 ]
+
+ * Full List of Resources:
+ * Container bundle set: galera-bundle [docker.io/tripleoupstream/centos-binary-mariadb:latest] (unmanaged):
+ * galera-bundle-0 (ocf:heartbeat:galera): Stopped (unmanaged)
+ * galera-bundle-1 (ocf:heartbeat:galera): Stopped (unmanaged)
+ * galera-bundle-2 (ocf:heartbeat:galera): Stopped (unmanaged)
diff --git a/cts/scheduler/summary/bundle-probe-order-2.summary b/cts/scheduler/summary/bundle-probe-order-2.summary
new file mode 100644
index 0000000..aecc2a4
--- /dev/null
+++ b/cts/scheduler/summary/bundle-probe-order-2.summary
@@ -0,0 +1,34 @@
+Using the original execution date of: 2017-10-12 07:31:57Z
+Current cluster status:
+ * Node List:
+ * GuestNode galera-bundle-0: maintenance
+ * Online: [ centos1 centos2 centos3 ]
+
+ * Full List of Resources:
+ * Container bundle set: galera-bundle [docker.io/tripleoupstream/centos-binary-mariadb:latest] (unmanaged):
+ * galera-bundle-0 (ocf:heartbeat:galera): Stopped centos2 (unmanaged)
+ * galera-bundle-1 (ocf:heartbeat:galera): Stopped (unmanaged)
+ * galera-bundle-2 (ocf:heartbeat:galera): Stopped (unmanaged)
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: galera:0 monitor on galera-bundle-0
+ * Resource action: galera-bundle-docker-0 monitor=60000 on centos2
+ * Resource action: galera-bundle-0 monitor=30000 on centos2
+ * Resource action: galera-bundle-docker-1 monitor on centos2
+ * Resource action: galera-bundle-docker-2 monitor on centos3
+ * Resource action: galera-bundle-docker-2 monitor on centos2
+ * Resource action: galera-bundle-docker-2 monitor on centos1
+Using the original execution date of: 2017-10-12 07:31:57Z
+
+Revised Cluster Status:
+ * Node List:
+ * GuestNode galera-bundle-0: maintenance
+ * Online: [ centos1 centos2 centos3 ]
+
+ * Full List of Resources:
+ * Container bundle set: galera-bundle [docker.io/tripleoupstream/centos-binary-mariadb:latest] (unmanaged):
+ * galera-bundle-0 (ocf:heartbeat:galera): Stopped centos2 (unmanaged)
+ * galera-bundle-1 (ocf:heartbeat:galera): Stopped (unmanaged)
+ * galera-bundle-2 (ocf:heartbeat:galera): Stopped (unmanaged)
diff --git a/cts/scheduler/summary/bundle-probe-order-3.summary b/cts/scheduler/summary/bundle-probe-order-3.summary
new file mode 100644
index 0000000..331bd87
--- /dev/null
+++ b/cts/scheduler/summary/bundle-probe-order-3.summary
@@ -0,0 +1,33 @@
+Using the original execution date of: 2017-10-12 07:31:57Z
+Current cluster status:
+ * Node List:
+ * Online: [ centos1 centos2 centos3 ]
+
+ * Full List of Resources:
+ * Container bundle set: galera-bundle [docker.io/tripleoupstream/centos-binary-mariadb:latest] (unmanaged):
+ * galera-bundle-0 (ocf:heartbeat:galera): Stopped centos2 (unmanaged)
+ * galera-bundle-1 (ocf:heartbeat:galera): Stopped (unmanaged)
+ * galera-bundle-2 (ocf:heartbeat:galera): Stopped (unmanaged)
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: galera-bundle-docker-0 monitor=60000 on centos2
+ * Resource action: galera-bundle-0 monitor on centos3
+ * Resource action: galera-bundle-0 monitor on centos2
+ * Resource action: galera-bundle-0 monitor on centos1
+ * Resource action: galera-bundle-docker-1 monitor on centos2
+ * Resource action: galera-bundle-docker-2 monitor on centos3
+ * Resource action: galera-bundle-docker-2 monitor on centos2
+ * Resource action: galera-bundle-docker-2 monitor on centos1
+Using the original execution date of: 2017-10-12 07:31:57Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ centos1 centos2 centos3 ]
+
+ * Full List of Resources:
+ * Container bundle set: galera-bundle [docker.io/tripleoupstream/centos-binary-mariadb:latest] (unmanaged):
+ * galera-bundle-0 (ocf:heartbeat:galera): Stopped centos2 (unmanaged)
+ * galera-bundle-1 (ocf:heartbeat:galera): Stopped (unmanaged)
+ * galera-bundle-2 (ocf:heartbeat:galera): Stopped (unmanaged)
diff --git a/cts/scheduler/summary/bundle-probe-remotes.summary b/cts/scheduler/summary/bundle-probe-remotes.summary
new file mode 100644
index 0000000..1dd8523
--- /dev/null
+++ b/cts/scheduler/summary/bundle-probe-remotes.summary
@@ -0,0 +1,168 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c09-h05-r630 c09-h06-r630 c09-h07-r630 ]
+ * RemoteOFFLINE: [ c09-h08-r630 c09-h09-r630 c09-h10-r630 ]
+
+ * Full List of Resources:
+ * c09-h08-r630 (ocf:pacemaker:remote): Stopped
+ * c09-h09-r630 (ocf:pacemaker:remote): Stopped
+ * c09-h10-r630 (ocf:pacemaker:remote): Stopped
+ * Container bundle set: scale1-bundle [beekhof:remote]:
+ * scale1-bundle-0 (ocf:pacemaker:Dummy): Stopped
+ * scale1-bundle-1 (ocf:pacemaker:Dummy): Stopped
+ * scale1-bundle-2 (ocf:pacemaker:Dummy): Stopped
+ * scale1-bundle-3 (ocf:pacemaker:Dummy): Stopped
+ * scale1-bundle-4 (ocf:pacemaker:Dummy): Stopped
+ * scale1-bundle-5 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start c09-h08-r630 ( c09-h05-r630 )
+ * Start c09-h09-r630 ( c09-h06-r630 )
+ * Start c09-h10-r630 ( c09-h07-r630 )
+ * Start scale1-bundle-docker-0 ( c09-h05-r630 )
+ * Start scale1-bundle-0 ( c09-h05-r630 )
+ * Start dummy1:0 ( scale1-bundle-0 )
+ * Start scale1-bundle-docker-1 ( c09-h06-r630 )
+ * Start scale1-bundle-1 ( c09-h06-r630 )
+ * Start dummy1:1 ( scale1-bundle-1 )
+ * Start scale1-bundle-docker-2 ( c09-h07-r630 )
+ * Start scale1-bundle-2 ( c09-h07-r630 )
+ * Start dummy1:2 ( scale1-bundle-2 )
+ * Start scale1-bundle-docker-3 ( c09-h08-r630 )
+ * Start scale1-bundle-3 ( c09-h05-r630 )
+ * Start dummy1:3 ( scale1-bundle-3 )
+ * Start scale1-bundle-docker-4 ( c09-h09-r630 )
+ * Start scale1-bundle-4 ( c09-h06-r630 )
+ * Start dummy1:4 ( scale1-bundle-4 )
+ * Start scale1-bundle-docker-5 ( c09-h10-r630 )
+ * Start scale1-bundle-5 ( c09-h07-r630 )
+ * Start dummy1:5 ( scale1-bundle-5 )
+
+Executing Cluster Transition:
+ * Resource action: c09-h08-r630 monitor on c09-h07-r630
+ * Resource action: c09-h08-r630 monitor on c09-h06-r630
+ * Resource action: c09-h08-r630 monitor on c09-h05-r630
+ * Resource action: c09-h09-r630 monitor on c09-h07-r630
+ * Resource action: c09-h09-r630 monitor on c09-h06-r630
+ * Resource action: c09-h09-r630 monitor on c09-h05-r630
+ * Resource action: c09-h10-r630 monitor on c09-h07-r630
+ * Resource action: c09-h10-r630 monitor on c09-h06-r630
+ * Resource action: c09-h10-r630 monitor on c09-h05-r630
+ * Resource action: scale1-bundle-docker-0 monitor on c09-h07-r630
+ * Resource action: scale1-bundle-docker-0 monitor on c09-h06-r630
+ * Resource action: scale1-bundle-docker-0 monitor on c09-h05-r630
+ * Resource action: scale1-bundle-docker-1 monitor on c09-h07-r630
+ * Resource action: scale1-bundle-docker-1 monitor on c09-h06-r630
+ * Resource action: scale1-bundle-docker-1 monitor on c09-h05-r630
+ * Resource action: scale1-bundle-docker-2 monitor on c09-h07-r630
+ * Resource action: scale1-bundle-docker-2 monitor on c09-h06-r630
+ * Resource action: scale1-bundle-docker-2 monitor on c09-h05-r630
+ * Resource action: scale1-bundle-docker-3 monitor on c09-h07-r630
+ * Resource action: scale1-bundle-docker-3 monitor on c09-h06-r630
+ * Resource action: scale1-bundle-docker-3 monitor on c09-h05-r630
+ * Resource action: scale1-bundle-docker-4 monitor on c09-h07-r630
+ * Resource action: scale1-bundle-docker-4 monitor on c09-h06-r630
+ * Resource action: scale1-bundle-docker-4 monitor on c09-h05-r630
+ * Resource action: scale1-bundle-docker-5 monitor on c09-h07-r630
+ * Resource action: scale1-bundle-docker-5 monitor on c09-h06-r630
+ * Resource action: scale1-bundle-docker-5 monitor on c09-h05-r630
+ * Pseudo action: scale1-bundle_start_0
+ * Resource action: c09-h08-r630 start on c09-h05-r630
+ * Resource action: c09-h09-r630 start on c09-h06-r630
+ * Resource action: c09-h10-r630 start on c09-h07-r630
+ * Resource action: scale1-bundle-docker-0 monitor on c09-h10-r630
+ * Resource action: scale1-bundle-docker-0 monitor on c09-h09-r630
+ * Resource action: scale1-bundle-docker-0 monitor on c09-h08-r630
+ * Resource action: scale1-bundle-docker-1 monitor on c09-h10-r630
+ * Resource action: scale1-bundle-docker-1 monitor on c09-h09-r630
+ * Resource action: scale1-bundle-docker-1 monitor on c09-h08-r630
+ * Resource action: scale1-bundle-docker-2 monitor on c09-h10-r630
+ * Resource action: scale1-bundle-docker-2 monitor on c09-h09-r630
+ * Resource action: scale1-bundle-docker-2 monitor on c09-h08-r630
+ * Resource action: scale1-bundle-docker-3 monitor on c09-h10-r630
+ * Resource action: scale1-bundle-docker-3 monitor on c09-h09-r630
+ * Resource action: scale1-bundle-docker-3 monitor on c09-h08-r630
+ * Resource action: scale1-bundle-docker-4 monitor on c09-h10-r630
+ * Resource action: scale1-bundle-docker-4 monitor on c09-h09-r630
+ * Resource action: scale1-bundle-docker-4 monitor on c09-h08-r630
+ * Resource action: scale1-bundle-docker-5 monitor on c09-h10-r630
+ * Resource action: scale1-bundle-docker-5 monitor on c09-h09-r630
+ * Resource action: scale1-bundle-docker-5 monitor on c09-h08-r630
+ * Resource action: c09-h08-r630 monitor=60000 on c09-h05-r630
+ * Resource action: c09-h09-r630 monitor=60000 on c09-h06-r630
+ * Resource action: c09-h10-r630 monitor=60000 on c09-h07-r630
+ * Pseudo action: scale1-bundle-clone_start_0
+ * Resource action: scale1-bundle-docker-0 start on c09-h05-r630
+ * Resource action: scale1-bundle-0 monitor on c09-h07-r630
+ * Resource action: scale1-bundle-0 monitor on c09-h06-r630
+ * Resource action: scale1-bundle-0 monitor on c09-h05-r630
+ * Resource action: scale1-bundle-docker-1 start on c09-h06-r630
+ * Resource action: scale1-bundle-1 monitor on c09-h07-r630
+ * Resource action: scale1-bundle-1 monitor on c09-h06-r630
+ * Resource action: scale1-bundle-1 monitor on c09-h05-r630
+ * Resource action: scale1-bundle-docker-2 start on c09-h07-r630
+ * Resource action: scale1-bundle-2 monitor on c09-h07-r630
+ * Resource action: scale1-bundle-2 monitor on c09-h06-r630
+ * Resource action: scale1-bundle-2 monitor on c09-h05-r630
+ * Resource action: scale1-bundle-docker-3 start on c09-h08-r630
+ * Resource action: scale1-bundle-3 monitor on c09-h07-r630
+ * Resource action: scale1-bundle-3 monitor on c09-h06-r630
+ * Resource action: scale1-bundle-3 monitor on c09-h05-r630
+ * Resource action: scale1-bundle-docker-4 start on c09-h09-r630
+ * Resource action: scale1-bundle-4 monitor on c09-h07-r630
+ * Resource action: scale1-bundle-4 monitor on c09-h06-r630
+ * Resource action: scale1-bundle-4 monitor on c09-h05-r630
+ * Resource action: scale1-bundle-docker-5 start on c09-h10-r630
+ * Resource action: scale1-bundle-5 monitor on c09-h07-r630
+ * Resource action: scale1-bundle-5 monitor on c09-h06-r630
+ * Resource action: scale1-bundle-5 monitor on c09-h05-r630
+ * Resource action: scale1-bundle-docker-0 monitor=60000 on c09-h05-r630
+ * Resource action: scale1-bundle-0 start on c09-h05-r630
+ * Resource action: scale1-bundle-docker-1 monitor=60000 on c09-h06-r630
+ * Resource action: scale1-bundle-1 start on c09-h06-r630
+ * Resource action: scale1-bundle-docker-2 monitor=60000 on c09-h07-r630
+ * Resource action: scale1-bundle-2 start on c09-h07-r630
+ * Resource action: scale1-bundle-docker-3 monitor=60000 on c09-h08-r630
+ * Resource action: scale1-bundle-3 start on c09-h05-r630
+ * Resource action: scale1-bundle-docker-4 monitor=60000 on c09-h09-r630
+ * Resource action: scale1-bundle-4 start on c09-h06-r630
+ * Resource action: scale1-bundle-docker-5 monitor=60000 on c09-h10-r630
+ * Resource action: scale1-bundle-5 start on c09-h07-r630
+ * Resource action: dummy1:0 start on scale1-bundle-0
+ * Resource action: dummy1:1 start on scale1-bundle-1
+ * Resource action: dummy1:2 start on scale1-bundle-2
+ * Resource action: dummy1:3 start on scale1-bundle-3
+ * Resource action: dummy1:4 start on scale1-bundle-4
+ * Resource action: dummy1:5 start on scale1-bundle-5
+ * Pseudo action: scale1-bundle-clone_running_0
+ * Resource action: scale1-bundle-0 monitor=30000 on c09-h05-r630
+ * Resource action: scale1-bundle-1 monitor=30000 on c09-h06-r630
+ * Resource action: scale1-bundle-2 monitor=30000 on c09-h07-r630
+ * Resource action: scale1-bundle-3 monitor=30000 on c09-h05-r630
+ * Resource action: scale1-bundle-4 monitor=30000 on c09-h06-r630
+ * Resource action: scale1-bundle-5 monitor=30000 on c09-h07-r630
+ * Pseudo action: scale1-bundle_running_0
+ * Resource action: dummy1:0 monitor=10000 on scale1-bundle-0
+ * Resource action: dummy1:1 monitor=10000 on scale1-bundle-1
+ * Resource action: dummy1:2 monitor=10000 on scale1-bundle-2
+ * Resource action: dummy1:3 monitor=10000 on scale1-bundle-3
+ * Resource action: dummy1:4 monitor=10000 on scale1-bundle-4
+ * Resource action: dummy1:5 monitor=10000 on scale1-bundle-5
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c09-h05-r630 c09-h06-r630 c09-h07-r630 ]
+ * RemoteOnline: [ c09-h08-r630 c09-h09-r630 c09-h10-r630 ]
+ * GuestOnline: [ scale1-bundle-0 scale1-bundle-1 scale1-bundle-2 scale1-bundle-3 scale1-bundle-4 scale1-bundle-5 ]
+
+ * Full List of Resources:
+ * c09-h08-r630 (ocf:pacemaker:remote): Started c09-h05-r630
+ * c09-h09-r630 (ocf:pacemaker:remote): Started c09-h06-r630
+ * c09-h10-r630 (ocf:pacemaker:remote): Started c09-h07-r630
+ * Container bundle set: scale1-bundle [beekhof:remote]:
+ * scale1-bundle-0 (ocf:pacemaker:Dummy): Started c09-h05-r630
+ * scale1-bundle-1 (ocf:pacemaker:Dummy): Started c09-h06-r630
+ * scale1-bundle-2 (ocf:pacemaker:Dummy): Started c09-h07-r630
+ * scale1-bundle-3 (ocf:pacemaker:Dummy): Started c09-h08-r630
+ * scale1-bundle-4 (ocf:pacemaker:Dummy): Started c09-h09-r630
+ * scale1-bundle-5 (ocf:pacemaker:Dummy): Started c09-h10-r630
diff --git a/cts/scheduler/summary/bundle-replicas-change.summary b/cts/scheduler/summary/bundle-replicas-change.summary
new file mode 100644
index 0000000..5cc92f3
--- /dev/null
+++ b/cts/scheduler/summary/bundle-replicas-change.summary
@@ -0,0 +1,77 @@
+Current cluster status:
+ * Node List:
+ * Online: [ rh74-test ]
+ * GuestOnline: [ httpd-bundle-0 ]
+
+ * Full List of Resources:
+ * Container bundle set: httpd-bundle [pcmktest:http] (unique):
+ * httpd-bundle-0 (192.168.20.188) (ocf:heartbeat:apache): Stopped rh74-test
+ * httpd-bundle-1 (192.168.20.189) (ocf:heartbeat:apache): Stopped
+ * httpd-bundle-2 (192.168.20.190) (ocf:heartbeat:apache): Stopped
+ * httpd (ocf:heartbeat:apache): ORPHANED Started httpd-bundle-0
+
+Transition Summary:
+ * Restart httpd-bundle-docker-0 ( rh74-test )
+ * Restart httpd-bundle-0 ( rh74-test ) due to required httpd-bundle-docker-0 start
+ * Start httpd:0 ( httpd-bundle-0 )
+ * Start httpd-bundle-ip-192.168.20.189 ( rh74-test )
+ * Start httpd-bundle-docker-1 ( rh74-test )
+ * Start httpd-bundle-1 ( rh74-test )
+ * Start httpd:1 ( httpd-bundle-1 )
+ * Start httpd-bundle-ip-192.168.20.190 ( rh74-test )
+ * Start httpd-bundle-docker-2 ( rh74-test )
+ * Start httpd-bundle-2 ( rh74-test )
+ * Start httpd:2 ( httpd-bundle-2 )
+ * Stop httpd ( httpd-bundle-0 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: httpd-bundle-ip-192.168.20.189 monitor on rh74-test
+ * Resource action: httpd-bundle-docker-1 monitor on rh74-test
+ * Resource action: httpd-bundle-ip-192.168.20.190 monitor on rh74-test
+ * Resource action: httpd-bundle-docker-2 monitor on rh74-test
+ * Resource action: httpd stop on httpd-bundle-0
+ * Pseudo action: httpd-bundle_stop_0
+ * Pseudo action: httpd-bundle_start_0
+ * Resource action: httpd-bundle-0 stop on rh74-test
+ * Resource action: httpd-bundle-ip-192.168.20.189 start on rh74-test
+ * Resource action: httpd-bundle-docker-1 start on rh74-test
+ * Resource action: httpd-bundle-1 monitor on rh74-test
+ * Resource action: httpd-bundle-ip-192.168.20.190 start on rh74-test
+ * Resource action: httpd-bundle-docker-2 start on rh74-test
+ * Resource action: httpd-bundle-2 monitor on rh74-test
+ * Resource action: httpd-bundle-docker-0 stop on rh74-test
+ * Resource action: httpd-bundle-docker-0 start on rh74-test
+ * Resource action: httpd-bundle-docker-0 monitor=60000 on rh74-test
+ * Resource action: httpd-bundle-0 start on rh74-test
+ * Resource action: httpd-bundle-0 monitor=30000 on rh74-test
+ * Resource action: httpd-bundle-ip-192.168.20.189 monitor=60000 on rh74-test
+ * Resource action: httpd-bundle-docker-1 monitor=60000 on rh74-test
+ * Resource action: httpd-bundle-1 start on rh74-test
+ * Resource action: httpd-bundle-ip-192.168.20.190 monitor=60000 on rh74-test
+ * Resource action: httpd-bundle-docker-2 monitor=60000 on rh74-test
+ * Resource action: httpd-bundle-2 start on rh74-test
+ * Resource action: httpd delete on httpd-bundle-0
+ * Pseudo action: httpd-bundle_stopped_0
+ * Resource action: httpd:0 monitor on httpd-bundle-0
+ * Pseudo action: httpd-bundle-clone_start_0
+ * Resource action: httpd-bundle-1 monitor=30000 on rh74-test
+ * Resource action: httpd-bundle-2 monitor=30000 on rh74-test
+ * Resource action: httpd:0 start on httpd-bundle-0
+ * Resource action: httpd:1 start on httpd-bundle-1
+ * Resource action: httpd:2 start on httpd-bundle-2
+ * Pseudo action: httpd-bundle-clone_running_0
+ * Pseudo action: httpd-bundle_running_0
+ * Resource action: httpd:0 monitor=10000 on httpd-bundle-0
+ * Resource action: httpd:1 monitor=10000 on httpd-bundle-1
+ * Resource action: httpd:2 monitor=10000 on httpd-bundle-2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rh74-test ]
+ * GuestOnline: [ httpd-bundle-0 httpd-bundle-1 httpd-bundle-2 ]
+
+ * Full List of Resources:
+ * Container bundle set: httpd-bundle [pcmktest:http] (unique):
+ * httpd-bundle-0 (192.168.20.188) (ocf:heartbeat:apache): Started rh74-test
+ * httpd-bundle-1 (192.168.20.189) (ocf:heartbeat:apache): Started rh74-test
+ * httpd-bundle-2 (192.168.20.190) (ocf:heartbeat:apache): Started rh74-test
diff --git a/cts/scheduler/summary/cancel-behind-moving-remote.summary b/cts/scheduler/summary/cancel-behind-moving-remote.summary
new file mode 100644
index 0000000..7726876
--- /dev/null
+++ b/cts/scheduler/summary/cancel-behind-moving-remote.summary
@@ -0,0 +1,211 @@
+Using the original execution date of: 2021-02-15 01:40:51Z
+Current cluster status:
+ * Node List:
+ * Online: [ controller-0 controller-1 controller-2 database-0 database-1 database-2 messaging-0 messaging-2 ]
+ * OFFLINE: [ messaging-1 ]
+ * RemoteOnline: [ compute-0 compute-1 ]
+ * GuestOnline: [ galera-bundle-0 galera-bundle-1 galera-bundle-2 ovn-dbs-bundle-1 ovn-dbs-bundle-2 rabbitmq-bundle-0 rabbitmq-bundle-2 redis-bundle-0 redis-bundle-1 redis-bundle-2 ]
+
+ * Full List of Resources:
+ * compute-0 (ocf:pacemaker:remote): Started controller-1
+ * compute-1 (ocf:pacemaker:remote): Started controller-2
+ * Container bundle set: galera-bundle [cluster.common.tag/rhosp16-openstack-mariadb:pcmklatest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Promoted database-0
+ * galera-bundle-1 (ocf:heartbeat:galera): Promoted database-1
+ * galera-bundle-2 (ocf:heartbeat:galera): Promoted database-2
+ * Container bundle set: rabbitmq-bundle [cluster.common.tag/rhosp16-openstack-rabbitmq:pcmklatest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started messaging-0
+ * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Stopped
+ * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started messaging-2
+ * Container bundle set: redis-bundle [cluster.common.tag/rhosp16-openstack-redis:pcmklatest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Promoted controller-2
+ * redis-bundle-1 (ocf:heartbeat:redis): Unpromoted controller-0
+ * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted controller-1
+ * ip-192.168.24.150 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-10.0.0.150 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.1.151 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.1.150 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.3.150 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.4.150 (ocf:heartbeat:IPaddr2): Started controller-2
+ * Container bundle set: haproxy-bundle [cluster.common.tag/rhosp16-openstack-haproxy:pcmklatest]:
+ * haproxy-bundle-podman-0 (ocf:heartbeat:podman): Started controller-2
+ * haproxy-bundle-podman-1 (ocf:heartbeat:podman): Started controller-0
+ * haproxy-bundle-podman-2 (ocf:heartbeat:podman): Started controller-1
+ * Container bundle set: ovn-dbs-bundle [cluster.common.tag/rhosp16-openstack-ovn-northd:pcmklatest]:
+ * ovn-dbs-bundle-0 (ocf:ovn:ovndb-servers): Stopped
+ * ovn-dbs-bundle-1 (ocf:ovn:ovndb-servers): Unpromoted controller-2
+ * ovn-dbs-bundle-2 (ocf:ovn:ovndb-servers): Unpromoted controller-1
+ * ip-172.17.1.87 (ocf:heartbeat:IPaddr2): Stopped
+ * stonith-fence_compute-fence-nova (stonith:fence_compute): Started database-1
+ * Clone Set: compute-unfence-trigger-clone [compute-unfence-trigger]:
+ * Started: [ compute-0 compute-1 ]
+ * Stopped: [ controller-0 controller-1 controller-2 database-0 database-1 database-2 messaging-0 messaging-1 messaging-2 ]
+ * nova-evacuate (ocf:openstack:NovaEvacuate): Started database-2
+ * stonith-fence_ipmilan-525400aa1373 (stonith:fence_ipmilan): Started messaging-0
+ * stonith-fence_ipmilan-525400dc23e0 (stonith:fence_ipmilan): Started messaging-2
+ * stonith-fence_ipmilan-52540040bb56 (stonith:fence_ipmilan): Started messaging-2
+ * stonith-fence_ipmilan-525400addd38 (stonith:fence_ipmilan): Started messaging-0
+ * stonith-fence_ipmilan-52540078fb07 (stonith:fence_ipmilan): Started database-0
+ * stonith-fence_ipmilan-525400ea59b0 (stonith:fence_ipmilan): Started database-1
+ * stonith-fence_ipmilan-525400066e50 (stonith:fence_ipmilan): Started database-2
+ * stonith-fence_ipmilan-525400e1534e (stonith:fence_ipmilan): Started database-1
+ * stonith-fence_ipmilan-52540060dbba (stonith:fence_ipmilan): Started database-2
+ * stonith-fence_ipmilan-525400e018b6 (stonith:fence_ipmilan): Started database-0
+ * stonith-fence_ipmilan-525400c87cdb (stonith:fence_ipmilan): Started messaging-0
+ * Container bundle: openstack-cinder-volume [cluster.common.tag/rhosp16-openstack-cinder-volume:pcmklatest]:
+ * openstack-cinder-volume-podman-0 (ocf:heartbeat:podman): Started controller-2
+
+Transition Summary:
+ * Start rabbitmq-bundle-1 ( controller-0 ) due to unrunnable rabbitmq-bundle-podman-1 start (blocked)
+ * Start rabbitmq:1 ( rabbitmq-bundle-1 ) due to unrunnable rabbitmq-bundle-podman-1 start (blocked)
+ * Start ovn-dbs-bundle-podman-0 ( controller-2 )
+ * Start ovn-dbs-bundle-0 ( controller-2 )
+ * Start ovndb_servers:0 ( ovn-dbs-bundle-0 )
+ * Move ovn-dbs-bundle-podman-1 ( controller-2 -> controller-0 )
+ * Move ovn-dbs-bundle-1 ( controller-2 -> controller-0 )
+ * Restart ovndb_servers:1 ( Unpromoted -> Promoted ovn-dbs-bundle-1 ) due to required ovn-dbs-bundle-podman-1 start
+ * Start ip-172.17.1.87 ( controller-0 )
+ * Move stonith-fence_ipmilan-52540040bb56 ( messaging-2 -> database-0 )
+ * Move stonith-fence_ipmilan-525400e1534e ( database-1 -> messaging-2 )
+
+Executing Cluster Transition:
+ * Pseudo action: rabbitmq-bundle-clone_pre_notify_start_0
+ * Resource action: ovndb_servers cancel=30000 on ovn-dbs-bundle-1
+ * Pseudo action: ovn-dbs-bundle-master_pre_notify_stop_0
+ * Cluster action: clear_failcount for ovn-dbs-bundle-0 on controller-0
+ * Cluster action: clear_failcount for ovn-dbs-bundle-1 on controller-2
+ * Cluster action: clear_failcount for stonith-fence_compute-fence-nova on messaging-0
+ * Cluster action: clear_failcount for nova-evacuate on messaging-0
+ * Cluster action: clear_failcount for stonith-fence_ipmilan-525400aa1373 on database-0
+ * Cluster action: clear_failcount for stonith-fence_ipmilan-525400dc23e0 on database-2
+ * Resource action: stonith-fence_ipmilan-52540040bb56 stop on messaging-2
+ * Cluster action: clear_failcount for stonith-fence_ipmilan-52540078fb07 on messaging-2
+ * Cluster action: clear_failcount for stonith-fence_ipmilan-525400ea59b0 on database-0
+ * Cluster action: clear_failcount for stonith-fence_ipmilan-525400066e50 on messaging-2
+ * Resource action: stonith-fence_ipmilan-525400e1534e stop on database-1
+ * Cluster action: clear_failcount for stonith-fence_ipmilan-525400e1534e on database-2
+ * Cluster action: clear_failcount for stonith-fence_ipmilan-52540060dbba on messaging-0
+ * Cluster action: clear_failcount for stonith-fence_ipmilan-525400e018b6 on database-0
+ * Cluster action: clear_failcount for stonith-fence_ipmilan-525400c87cdb on database-2
+ * Pseudo action: ovn-dbs-bundle_stop_0
+ * Pseudo action: rabbitmq-bundle_start_0
+ * Pseudo action: rabbitmq-bundle-clone_confirmed-pre_notify_start_0
+ * Pseudo action: rabbitmq-bundle-clone_start_0
+ * Resource action: ovndb_servers notify on ovn-dbs-bundle-1
+ * Resource action: ovndb_servers notify on ovn-dbs-bundle-2
+ * Pseudo action: ovn-dbs-bundle-master_confirmed-pre_notify_stop_0
+ * Pseudo action: ovn-dbs-bundle-master_stop_0
+ * Resource action: stonith-fence_ipmilan-52540040bb56 start on database-0
+ * Resource action: stonith-fence_ipmilan-525400e1534e start on messaging-2
+ * Pseudo action: rabbitmq-bundle-clone_running_0
+ * Resource action: ovndb_servers stop on ovn-dbs-bundle-1
+ * Pseudo action: ovn-dbs-bundle-master_stopped_0
+ * Resource action: ovn-dbs-bundle-1 stop on controller-2
+ * Resource action: stonith-fence_ipmilan-52540040bb56 monitor=60000 on database-0
+ * Resource action: stonith-fence_ipmilan-525400e1534e monitor=60000 on messaging-2
+ * Pseudo action: rabbitmq-bundle-clone_post_notify_running_0
+ * Pseudo action: ovn-dbs-bundle-master_post_notify_stopped_0
+ * Resource action: ovn-dbs-bundle-podman-1 stop on controller-2
+ * Pseudo action: rabbitmq-bundle-clone_confirmed-post_notify_running_0
+ * Resource action: ovndb_servers notify on ovn-dbs-bundle-2
+ * Pseudo action: ovn-dbs-bundle-master_confirmed-post_notify_stopped_0
+ * Pseudo action: ovn-dbs-bundle-master_pre_notify_start_0
+ * Pseudo action: ovn-dbs-bundle_stopped_0
+ * Pseudo action: ovn-dbs-bundle_start_0
+ * Pseudo action: rabbitmq-bundle_running_0
+ * Resource action: ovndb_servers notify on ovn-dbs-bundle-2
+ * Pseudo action: ovn-dbs-bundle-master_confirmed-pre_notify_start_0
+ * Pseudo action: ovn-dbs-bundle-master_start_0
+ * Resource action: ovn-dbs-bundle-podman-0 start on controller-2
+ * Resource action: ovn-dbs-bundle-0 start on controller-2
+ * Resource action: ovn-dbs-bundle-podman-1 start on controller-0
+ * Resource action: ovn-dbs-bundle-1 start on controller-0
+ * Resource action: ovndb_servers start on ovn-dbs-bundle-0
+ * Resource action: ovndb_servers start on ovn-dbs-bundle-1
+ * Pseudo action: ovn-dbs-bundle-master_running_0
+ * Resource action: ovn-dbs-bundle-podman-0 monitor=60000 on controller-2
+ * Resource action: ovn-dbs-bundle-0 monitor=30000 on controller-2
+ * Resource action: ovn-dbs-bundle-podman-1 monitor=60000 on controller-0
+ * Resource action: ovn-dbs-bundle-1 monitor=30000 on controller-0
+ * Pseudo action: ovn-dbs-bundle-master_post_notify_running_0
+ * Resource action: ovndb_servers notify on ovn-dbs-bundle-0
+ * Resource action: ovndb_servers notify on ovn-dbs-bundle-1
+ * Resource action: ovndb_servers notify on ovn-dbs-bundle-2
+ * Pseudo action: ovn-dbs-bundle-master_confirmed-post_notify_running_0
+ * Pseudo action: ovn-dbs-bundle_running_0
+ * Pseudo action: ovn-dbs-bundle-master_pre_notify_promote_0
+ * Pseudo action: ovn-dbs-bundle_promote_0
+ * Resource action: ovndb_servers notify on ovn-dbs-bundle-0
+ * Resource action: ovndb_servers notify on ovn-dbs-bundle-1
+ * Resource action: ovndb_servers notify on ovn-dbs-bundle-2
+ * Pseudo action: ovn-dbs-bundle-master_confirmed-pre_notify_promote_0
+ * Pseudo action: ovn-dbs-bundle-master_promote_0
+ * Resource action: ip-172.17.1.87 start on controller-0
+ * Resource action: ovndb_servers promote on ovn-dbs-bundle-1
+ * Pseudo action: ovn-dbs-bundle-master_promoted_0
+ * Resource action: ip-172.17.1.87 monitor=10000 on controller-0
+ * Pseudo action: ovn-dbs-bundle-master_post_notify_promoted_0
+ * Resource action: ovndb_servers notify on ovn-dbs-bundle-0
+ * Resource action: ovndb_servers notify on ovn-dbs-bundle-1
+ * Resource action: ovndb_servers notify on ovn-dbs-bundle-2
+ * Pseudo action: ovn-dbs-bundle-master_confirmed-post_notify_promoted_0
+ * Pseudo action: ovn-dbs-bundle_promoted_0
+ * Resource action: ovndb_servers monitor=30000 on ovn-dbs-bundle-0
+ * Resource action: ovndb_servers monitor=10000 on ovn-dbs-bundle-1
+Using the original execution date of: 2021-02-15 01:40:51Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ controller-0 controller-1 controller-2 database-0 database-1 database-2 messaging-0 messaging-2 ]
+ * OFFLINE: [ messaging-1 ]
+ * RemoteOnline: [ compute-0 compute-1 ]
+ * GuestOnline: [ galera-bundle-0 galera-bundle-1 galera-bundle-2 ovn-dbs-bundle-0 ovn-dbs-bundle-1 ovn-dbs-bundle-2 rabbitmq-bundle-0 rabbitmq-bundle-2 redis-bundle-0 redis-bundle-1 redis-bundle-2 ]
+
+ * Full List of Resources:
+ * compute-0 (ocf:pacemaker:remote): Started controller-1
+ * compute-1 (ocf:pacemaker:remote): Started controller-2
+ * Container bundle set: galera-bundle [cluster.common.tag/rhosp16-openstack-mariadb:pcmklatest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Promoted database-0
+ * galera-bundle-1 (ocf:heartbeat:galera): Promoted database-1
+ * galera-bundle-2 (ocf:heartbeat:galera): Promoted database-2
+ * Container bundle set: rabbitmq-bundle [cluster.common.tag/rhosp16-openstack-rabbitmq:pcmklatest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started messaging-0
+ * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Stopped
+ * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started messaging-2
+ * Container bundle set: redis-bundle [cluster.common.tag/rhosp16-openstack-redis:pcmklatest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Promoted controller-2
+ * redis-bundle-1 (ocf:heartbeat:redis): Unpromoted controller-0
+ * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted controller-1
+ * ip-192.168.24.150 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-10.0.0.150 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.1.151 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.1.150 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.3.150 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.4.150 (ocf:heartbeat:IPaddr2): Started controller-2
+ * Container bundle set: haproxy-bundle [cluster.common.tag/rhosp16-openstack-haproxy:pcmklatest]:
+ * haproxy-bundle-podman-0 (ocf:heartbeat:podman): Started controller-2
+ * haproxy-bundle-podman-1 (ocf:heartbeat:podman): Started controller-0
+ * haproxy-bundle-podman-2 (ocf:heartbeat:podman): Started controller-1
+ * Container bundle set: ovn-dbs-bundle [cluster.common.tag/rhosp16-openstack-ovn-northd:pcmklatest]:
+ * ovn-dbs-bundle-0 (ocf:ovn:ovndb-servers): Unpromoted controller-2
+ * ovn-dbs-bundle-1 (ocf:ovn:ovndb-servers): Promoted controller-0
+ * ovn-dbs-bundle-2 (ocf:ovn:ovndb-servers): Unpromoted controller-1
+ * ip-172.17.1.87 (ocf:heartbeat:IPaddr2): Started controller-0
+ * stonith-fence_compute-fence-nova (stonith:fence_compute): Started database-1
+ * Clone Set: compute-unfence-trigger-clone [compute-unfence-trigger]:
+ * Started: [ compute-0 compute-1 ]
+ * Stopped: [ controller-0 controller-1 controller-2 database-0 database-1 database-2 messaging-0 messaging-1 messaging-2 ]
+ * nova-evacuate (ocf:openstack:NovaEvacuate): Started database-2
+ * stonith-fence_ipmilan-525400aa1373 (stonith:fence_ipmilan): Started messaging-0
+ * stonith-fence_ipmilan-525400dc23e0 (stonith:fence_ipmilan): Started messaging-2
+ * stonith-fence_ipmilan-52540040bb56 (stonith:fence_ipmilan): Started database-0
+ * stonith-fence_ipmilan-525400addd38 (stonith:fence_ipmilan): Started messaging-0
+ * stonith-fence_ipmilan-52540078fb07 (stonith:fence_ipmilan): Started database-0
+ * stonith-fence_ipmilan-525400ea59b0 (stonith:fence_ipmilan): Started database-1
+ * stonith-fence_ipmilan-525400066e50 (stonith:fence_ipmilan): Started database-2
+ * stonith-fence_ipmilan-525400e1534e (stonith:fence_ipmilan): Started messaging-2
+ * stonith-fence_ipmilan-52540060dbba (stonith:fence_ipmilan): Started database-2
+ * stonith-fence_ipmilan-525400e018b6 (stonith:fence_ipmilan): Started database-0
+ * stonith-fence_ipmilan-525400c87cdb (stonith:fence_ipmilan): Started messaging-0
+ * Container bundle: openstack-cinder-volume [cluster.common.tag/rhosp16-openstack-cinder-volume:pcmklatest]:
+ * openstack-cinder-volume-podman-0 (ocf:heartbeat:podman): Started controller-2
diff --git a/cts/scheduler/summary/clbz5007-promotable-colocation.summary b/cts/scheduler/summary/clbz5007-promotable-colocation.summary
new file mode 100644
index 0000000..58348bc
--- /dev/null
+++ b/cts/scheduler/summary/clbz5007-promotable-colocation.summary
@@ -0,0 +1,31 @@
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder fc16-builder2 ]
+
+ * Full List of Resources:
+ * Clone Set: MS_DUMMY [DUMMY] (promotable):
+ * Promoted: [ fc16-builder ]
+ * Unpromoted: [ fc16-builder2 ]
+ * UNPROMOTED_IP (ocf:pacemaker:Dummy): Started fc16-builder
+ * PROMOTED_IP (ocf:pacemaker:Dummy): Started fc16-builder2
+
+Transition Summary:
+ * Move UNPROMOTED_IP ( fc16-builder -> fc16-builder2 )
+ * Move PROMOTED_IP ( fc16-builder2 -> fc16-builder )
+
+Executing Cluster Transition:
+ * Resource action: UNPROMOTED_IP stop on fc16-builder
+ * Resource action: PROMOTED_IP stop on fc16-builder2
+ * Resource action: UNPROMOTED_IP start on fc16-builder2
+ * Resource action: PROMOTED_IP start on fc16-builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder fc16-builder2 ]
+
+ * Full List of Resources:
+ * Clone Set: MS_DUMMY [DUMMY] (promotable):
+ * Promoted: [ fc16-builder ]
+ * Unpromoted: [ fc16-builder2 ]
+ * UNPROMOTED_IP (ocf:pacemaker:Dummy): Started fc16-builder2
+ * PROMOTED_IP (ocf:pacemaker:Dummy): Started fc16-builder
diff --git a/cts/scheduler/summary/clone-anon-dup.summary b/cts/scheduler/summary/clone-anon-dup.summary
new file mode 100644
index 0000000..1e807b7
--- /dev/null
+++ b/cts/scheduler/summary/clone-anon-dup.summary
@@ -0,0 +1,35 @@
+Current cluster status:
+ * Node List:
+ * Online: [ wc01 wc02 wc03 ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * Clone Set: clone_webservice [group_webservice]:
+ * Resource Group: group_webservice:2:
+ * fs_www (ocf:heartbeat:Filesystem): ORPHANED Stopped
+ * apache2 (ocf:heartbeat:apache): ORPHANED Started wc02
+ * Started: [ wc01 wc02 ]
+
+Transition Summary:
+ * Start stonith-1 ( wc01 )
+ * Stop apache2:2 ( wc02 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: stonith-1 monitor on wc03
+ * Resource action: stonith-1 monitor on wc02
+ * Resource action: stonith-1 monitor on wc01
+ * Pseudo action: clone_webservice_stop_0
+ * Resource action: stonith-1 start on wc01
+ * Pseudo action: group_webservice:2_stop_0
+ * Resource action: apache2:0 stop on wc02
+ * Pseudo action: group_webservice:2_stopped_0
+ * Pseudo action: clone_webservice_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ wc01 wc02 wc03 ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Started wc01
+ * Clone Set: clone_webservice [group_webservice]:
+ * Started: [ wc01 wc02 ]
diff --git a/cts/scheduler/summary/clone-anon-failcount.summary b/cts/scheduler/summary/clone-anon-failcount.summary
new file mode 100644
index 0000000..8d4f369
--- /dev/null
+++ b/cts/scheduler/summary/clone-anon-failcount.summary
@@ -0,0 +1,119 @@
+Current cluster status:
+ * Node List:
+ * Online: [ srv01 srv02 srv03 srv04 ]
+
+ * Full List of Resources:
+ * Resource Group: UMgroup01:
+ * UmVIPcheck (ocf:pacemaker:Dummy): Started srv01
+ * UmIPaddr (ocf:pacemaker:Dummy): Started srv01
+ * UmDummy01 (ocf:pacemaker:Dummy): Started srv01
+ * UmDummy02 (ocf:pacemaker:Dummy): Started srv01
+ * Resource Group: OVDBgroup02-1:
+ * prmExPostgreSQLDB1 (ocf:pacemaker:Dummy): Started srv01
+ * Resource Group: OVDBgroup02-2:
+ * prmExPostgreSQLDB2 (ocf:pacemaker:Dummy): Started srv02
+ * Resource Group: OVDBgroup02-3:
+ * prmExPostgreSQLDB3 (ocf:pacemaker:Dummy): Started srv03
+ * Resource Group: grpStonith1:
+ * prmStonithN1 (stonith:external/ssh): Started srv04
+ * Resource Group: grpStonith2:
+ * prmStonithN2 (stonith:external/ssh): Started srv01
+ * Resource Group: grpStonith3:
+ * prmStonithN3 (stonith:external/ssh): Started srv02
+ * Resource Group: grpStonith4:
+ * prmStonithN4 (stonith:external/ssh): Started srv03
+ * Clone Set: clnUMgroup01 [clnUmResource]:
+ * Resource Group: clnUmResource:0:
+ * clnUMdummy01 (ocf:pacemaker:Dummy): FAILED srv04
+ * clnUMdummy02 (ocf:pacemaker:Dummy): Started srv04
+ * Started: [ srv01 ]
+ * Stopped: [ srv02 srv03 ]
+ * Clone Set: clnPingd [clnPrmPingd]:
+ * Started: [ srv01 srv02 srv03 srv04 ]
+ * Clone Set: clnDiskd1 [clnPrmDiskd1]:
+ * Started: [ srv01 srv02 srv03 srv04 ]
+ * Clone Set: clnG3dummy1 [clnG3dummy01]:
+ * Started: [ srv01 srv02 srv03 srv04 ]
+ * Clone Set: clnG3dummy2 [clnG3dummy02]:
+ * Started: [ srv01 srv02 srv03 srv04 ]
+
+Transition Summary:
+ * Move UmVIPcheck ( srv01 -> srv04 )
+ * Move UmIPaddr ( srv01 -> srv04 )
+ * Move UmDummy01 ( srv01 -> srv04 )
+ * Move UmDummy02 ( srv01 -> srv04 )
+ * Recover clnUMdummy01:0 ( srv04 )
+ * Restart clnUMdummy02:0 ( srv04 ) due to required clnUMdummy01:0 start
+ * Stop clnUMdummy01:1 ( srv01 ) due to node availability
+ * Stop clnUMdummy02:1 ( srv01 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: UMgroup01_stop_0
+ * Resource action: UmDummy02 stop on srv01
+ * Resource action: UmDummy01 stop on srv01
+ * Resource action: UmIPaddr stop on srv01
+ * Resource action: UmVIPcheck stop on srv01
+ * Pseudo action: UMgroup01_stopped_0
+ * Pseudo action: clnUMgroup01_stop_0
+ * Pseudo action: clnUmResource:0_stop_0
+ * Resource action: clnUMdummy02:1 stop on srv04
+ * Pseudo action: clnUmResource:1_stop_0
+ * Resource action: clnUMdummy02:0 stop on srv01
+ * Resource action: clnUMdummy01:1 stop on srv04
+ * Resource action: clnUMdummy01:0 stop on srv01
+ * Pseudo action: clnUmResource:0_stopped_0
+ * Pseudo action: clnUmResource:1_stopped_0
+ * Pseudo action: clnUMgroup01_stopped_0
+ * Pseudo action: clnUMgroup01_start_0
+ * Pseudo action: clnUmResource:0_start_0
+ * Resource action: clnUMdummy01:1 start on srv04
+ * Resource action: clnUMdummy01:1 monitor=10000 on srv04
+ * Resource action: clnUMdummy02:1 start on srv04
+ * Resource action: clnUMdummy02:1 monitor=10000 on srv04
+ * Pseudo action: clnUmResource:0_running_0
+ * Pseudo action: clnUMgroup01_running_0
+ * Pseudo action: UMgroup01_start_0
+ * Resource action: UmVIPcheck start on srv04
+ * Resource action: UmIPaddr start on srv04
+ * Resource action: UmDummy01 start on srv04
+ * Resource action: UmDummy02 start on srv04
+ * Pseudo action: UMgroup01_running_0
+ * Resource action: UmIPaddr monitor=10000 on srv04
+ * Resource action: UmDummy01 monitor=10000 on srv04
+ * Resource action: UmDummy02 monitor=10000 on srv04
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ srv01 srv02 srv03 srv04 ]
+
+ * Full List of Resources:
+ * Resource Group: UMgroup01:
+ * UmVIPcheck (ocf:pacemaker:Dummy): Started srv04
+ * UmIPaddr (ocf:pacemaker:Dummy): Started srv04
+ * UmDummy01 (ocf:pacemaker:Dummy): Started srv04
+ * UmDummy02 (ocf:pacemaker:Dummy): Started srv04
+ * Resource Group: OVDBgroup02-1:
+ * prmExPostgreSQLDB1 (ocf:pacemaker:Dummy): Started srv01
+ * Resource Group: OVDBgroup02-2:
+ * prmExPostgreSQLDB2 (ocf:pacemaker:Dummy): Started srv02
+ * Resource Group: OVDBgroup02-3:
+ * prmExPostgreSQLDB3 (ocf:pacemaker:Dummy): Started srv03
+ * Resource Group: grpStonith1:
+ * prmStonithN1 (stonith:external/ssh): Started srv04
+ * Resource Group: grpStonith2:
+ * prmStonithN2 (stonith:external/ssh): Started srv01
+ * Resource Group: grpStonith3:
+ * prmStonithN3 (stonith:external/ssh): Started srv02
+ * Resource Group: grpStonith4:
+ * prmStonithN4 (stonith:external/ssh): Started srv03
+ * Clone Set: clnUMgroup01 [clnUmResource]:
+ * Started: [ srv04 ]
+ * Stopped: [ srv01 srv02 srv03 ]
+ * Clone Set: clnPingd [clnPrmPingd]:
+ * Started: [ srv01 srv02 srv03 srv04 ]
+ * Clone Set: clnDiskd1 [clnPrmDiskd1]:
+ * Started: [ srv01 srv02 srv03 srv04 ]
+ * Clone Set: clnG3dummy1 [clnG3dummy01]:
+ * Started: [ srv01 srv02 srv03 srv04 ]
+ * Clone Set: clnG3dummy2 [clnG3dummy02]:
+ * Started: [ srv01 srv02 srv03 srv04 ]
diff --git a/cts/scheduler/summary/clone-anon-probe-1.summary b/cts/scheduler/summary/clone-anon-probe-1.summary
new file mode 100644
index 0000000..51cf914
--- /dev/null
+++ b/cts/scheduler/summary/clone-anon-probe-1.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ mysql-01 mysql-02 ]
+
+ * Full List of Resources:
+ * Clone Set: ms-drbd0 [drbd0]:
+ * Stopped: [ mysql-01 mysql-02 ]
+
+Transition Summary:
+ * Start drbd0:0 ( mysql-01 )
+ * Start drbd0:1 ( mysql-02 )
+
+Executing Cluster Transition:
+ * Resource action: drbd0:0 monitor on mysql-01
+ * Resource action: drbd0:1 monitor on mysql-02
+ * Pseudo action: ms-drbd0_start_0
+ * Resource action: drbd0:0 start on mysql-01
+ * Resource action: drbd0:1 start on mysql-02
+ * Pseudo action: ms-drbd0_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ mysql-01 mysql-02 ]
+
+ * Full List of Resources:
+ * Clone Set: ms-drbd0 [drbd0]:
+ * Started: [ mysql-01 mysql-02 ]
diff --git a/cts/scheduler/summary/clone-anon-probe-2.summary b/cts/scheduler/summary/clone-anon-probe-2.summary
new file mode 100644
index 0000000..79a2fb8
--- /dev/null
+++ b/cts/scheduler/summary/clone-anon-probe-2.summary
@@ -0,0 +1,24 @@
+Current cluster status:
+ * Node List:
+ * Online: [ mysql-01 mysql-02 ]
+
+ * Full List of Resources:
+ * Clone Set: ms-drbd0 [drbd0]:
+ * Started: [ mysql-02 ]
+ * Stopped: [ mysql-01 ]
+
+Transition Summary:
+ * Start drbd0:1 ( mysql-01 )
+
+Executing Cluster Transition:
+ * Pseudo action: ms-drbd0_start_0
+ * Resource action: drbd0:1 start on mysql-01
+ * Pseudo action: ms-drbd0_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ mysql-01 mysql-02 ]
+
+ * Full List of Resources:
+ * Clone Set: ms-drbd0 [drbd0]:
+ * Started: [ mysql-01 mysql-02 ]
diff --git a/cts/scheduler/summary/clone-fail-block-colocation.summary b/cts/scheduler/summary/clone-fail-block-colocation.summary
new file mode 100644
index 0000000..eab4078
--- /dev/null
+++ b/cts/scheduler/summary/clone-fail-block-colocation.summary
@@ -0,0 +1,61 @@
+0 of 10 resource instances DISABLED and 1 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ DEM-1 DEM-2 ]
+
+ * Full List of Resources:
+ * Resource Group: svc:
+ * ipv6_dem_tas_dns (ocf:heartbeat:IPv6addr): Started DEM-1
+ * d_bird_subnet_state (lsb:bird_subnet_state): Started DEM-1
+ * ip_mgmt (ocf:heartbeat:IPaddr2): Started DEM-1
+ * ip_trf_tas (ocf:heartbeat:IPaddr2): Started DEM-1
+ * Clone Set: cl_bird [d_bird]:
+ * Started: [ DEM-1 DEM-2 ]
+ * Clone Set: cl_bird6 [d_bird6]:
+ * d_bird6 (lsb:bird6): FAILED DEM-1 (blocked)
+ * Started: [ DEM-2 ]
+ * Clone Set: cl_tomcat_nms [d_tomcat_nms]:
+ * Started: [ DEM-1 DEM-2 ]
+
+Transition Summary:
+ * Move ipv6_dem_tas_dns ( DEM-1 -> DEM-2 )
+ * Move d_bird_subnet_state ( DEM-1 -> DEM-2 )
+ * Move ip_mgmt ( DEM-1 -> DEM-2 )
+ * Move ip_trf_tas ( DEM-1 -> DEM-2 )
+
+Executing Cluster Transition:
+ * Pseudo action: svc_stop_0
+ * Resource action: ip_trf_tas stop on DEM-1
+ * Resource action: ip_mgmt stop on DEM-1
+ * Resource action: d_bird_subnet_state stop on DEM-1
+ * Resource action: ipv6_dem_tas_dns stop on DEM-1
+ * Pseudo action: svc_stopped_0
+ * Pseudo action: svc_start_0
+ * Resource action: ipv6_dem_tas_dns start on DEM-2
+ * Resource action: d_bird_subnet_state start on DEM-2
+ * Resource action: ip_mgmt start on DEM-2
+ * Resource action: ip_trf_tas start on DEM-2
+ * Pseudo action: svc_running_0
+ * Resource action: ipv6_dem_tas_dns monitor=10000 on DEM-2
+ * Resource action: d_bird_subnet_state monitor=10000 on DEM-2
+ * Resource action: ip_mgmt monitor=10000 on DEM-2
+ * Resource action: ip_trf_tas monitor=10000 on DEM-2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ DEM-1 DEM-2 ]
+
+ * Full List of Resources:
+ * Resource Group: svc:
+ * ipv6_dem_tas_dns (ocf:heartbeat:IPv6addr): Started DEM-2
+ * d_bird_subnet_state (lsb:bird_subnet_state): Started DEM-2
+ * ip_mgmt (ocf:heartbeat:IPaddr2): Started DEM-2
+ * ip_trf_tas (ocf:heartbeat:IPaddr2): Started DEM-2
+ * Clone Set: cl_bird [d_bird]:
+ * Started: [ DEM-1 DEM-2 ]
+ * Clone Set: cl_bird6 [d_bird6]:
+ * d_bird6 (lsb:bird6): FAILED DEM-1 (blocked)
+ * Started: [ DEM-2 ]
+ * Clone Set: cl_tomcat_nms [d_tomcat_nms]:
+ * Started: [ DEM-1 DEM-2 ]
diff --git a/cts/scheduler/summary/clone-interleave-1.summary b/cts/scheduler/summary/clone-interleave-1.summary
new file mode 100644
index 0000000..ddb153d
--- /dev/null
+++ b/cts/scheduler/summary/clone-interleave-1.summary
@@ -0,0 +1,53 @@
+Current cluster status:
+ * Node List:
+ * Online: [ pcmk-1 pcmk-2 pcmk-3 ]
+
+ * Full List of Resources:
+ * dummy (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: clone-1 [child-1]:
+ * Stopped: [ pcmk-1 pcmk-2 pcmk-3 ]
+ * Clone Set: clone-2 [child-2]:
+ * Stopped: [ pcmk-1 pcmk-2 pcmk-3 ]
+ * Clone Set: clone-3 [child-3]:
+ * Stopped: [ pcmk-1 pcmk-2 pcmk-3 ]
+
+Transition Summary:
+ * Start dummy ( pcmk-1 )
+ * Start child-1:0 ( pcmk-2 )
+ * Start child-1:1 ( pcmk-3 )
+ * Start child-1:2 ( pcmk-1 )
+ * Start child-2:0 ( pcmk-2 )
+ * Start child-2:1 ( pcmk-3 )
+ * Start child-3:1 ( pcmk-2 )
+ * Start child-3:2 ( pcmk-3 )
+
+Executing Cluster Transition:
+ * Pseudo action: clone-1_start_0
+ * Resource action: child-1:0 start on pcmk-2
+ * Resource action: child-1:1 start on pcmk-3
+ * Resource action: child-1:2 start on pcmk-1
+ * Pseudo action: clone-1_running_0
+ * Pseudo action: clone-2_start_0
+ * Resource action: child-2:0 start on pcmk-2
+ * Resource action: child-2:1 start on pcmk-3
+ * Pseudo action: clone-2_running_0
+ * Pseudo action: clone-3_start_0
+ * Resource action: child-3:1 start on pcmk-2
+ * Resource action: child-3:2 start on pcmk-3
+ * Pseudo action: clone-3_running_0
+ * Resource action: dummy start on pcmk-1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ pcmk-1 pcmk-2 pcmk-3 ]
+
+ * Full List of Resources:
+ * dummy (ocf:pacemaker:Dummy): Started pcmk-1
+ * Clone Set: clone-1 [child-1]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-3 ]
+ * Clone Set: clone-2 [child-2]:
+ * Started: [ pcmk-2 pcmk-3 ]
+ * Stopped: [ pcmk-1 ]
+ * Clone Set: clone-3 [child-3]:
+ * Started: [ pcmk-2 pcmk-3 ]
+ * Stopped: [ pcmk-1 ]
diff --git a/cts/scheduler/summary/clone-interleave-2.summary b/cts/scheduler/summary/clone-interleave-2.summary
new file mode 100644
index 0000000..5817b10
--- /dev/null
+++ b/cts/scheduler/summary/clone-interleave-2.summary
@@ -0,0 +1,44 @@
+Current cluster status:
+ * Node List:
+ * Online: [ pcmk-1 pcmk-2 pcmk-3 ]
+
+ * Full List of Resources:
+ * dummy (ocf:pacemaker:Dummy): Started pcmk-1
+ * Clone Set: clone-1 [child-1]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-3 ]
+ * Clone Set: clone-2 [child-2]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-3 ]
+ * Clone Set: clone-3 [child-3]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-3 ]
+
+Transition Summary:
+ * Restart dummy ( pcmk-1 ) due to required clone-3 running
+ * Stop child-2:0 ( pcmk-1 ) due to node availability
+ * Stop child-3:0 ( pcmk-1 )
+
+Executing Cluster Transition:
+ * Resource action: dummy stop on pcmk-1
+ * Pseudo action: clone-3_stop_0
+ * Resource action: child-3:2 stop on pcmk-1
+ * Pseudo action: clone-3_stopped_0
+ * Pseudo action: clone-3_start_0
+ * Pseudo action: clone-2_stop_0
+ * Pseudo action: clone-3_running_0
+ * Resource action: dummy start on pcmk-1
+ * Resource action: child-2:2 stop on pcmk-1
+ * Pseudo action: clone-2_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ pcmk-1 pcmk-2 pcmk-3 ]
+
+ * Full List of Resources:
+ * dummy (ocf:pacemaker:Dummy): Started pcmk-1
+ * Clone Set: clone-1 [child-1]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-3 ]
+ * Clone Set: clone-2 [child-2]:
+ * Started: [ pcmk-2 pcmk-3 ]
+ * Stopped: [ pcmk-1 ]
+ * Clone Set: clone-3 [child-3]:
+ * Started: [ pcmk-2 pcmk-3 ]
+ * Stopped: [ pcmk-1 ]
diff --git a/cts/scheduler/summary/clone-interleave-3.summary b/cts/scheduler/summary/clone-interleave-3.summary
new file mode 100644
index 0000000..4bac5ee
--- /dev/null
+++ b/cts/scheduler/summary/clone-interleave-3.summary
@@ -0,0 +1,47 @@
+Current cluster status:
+ * Node List:
+ * Online: [ pcmk-1 pcmk-2 pcmk-3 ]
+
+ * Full List of Resources:
+ * dummy (ocf:pacemaker:Dummy): Started pcmk-1
+ * Clone Set: clone-1 [child-1]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-3 ]
+ * Clone Set: clone-2 [child-2]:
+ * child-2 (ocf:pacemaker:Dummy): FAILED pcmk-1
+ * Started: [ pcmk-2 pcmk-3 ]
+ * Clone Set: clone-3 [child-3]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-3 ]
+
+Transition Summary:
+ * Restart dummy ( pcmk-1 ) due to required clone-3 running
+ * Recover child-2:0 ( pcmk-1 )
+ * Restart child-3:0 ( pcmk-1 ) due to required child-2:0 start
+
+Executing Cluster Transition:
+ * Resource action: dummy stop on pcmk-1
+ * Pseudo action: clone-3_stop_0
+ * Resource action: child-3:2 stop on pcmk-1
+ * Pseudo action: clone-3_stopped_0
+ * Pseudo action: clone-2_stop_0
+ * Resource action: child-2:2 stop on pcmk-1
+ * Pseudo action: clone-2_stopped_0
+ * Pseudo action: clone-2_start_0
+ * Resource action: child-2:2 start on pcmk-1
+ * Pseudo action: clone-2_running_0
+ * Pseudo action: clone-3_start_0
+ * Resource action: child-3:2 start on pcmk-1
+ * Pseudo action: clone-3_running_0
+ * Resource action: dummy start on pcmk-1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ pcmk-1 pcmk-2 pcmk-3 ]
+
+ * Full List of Resources:
+ * dummy (ocf:pacemaker:Dummy): Started pcmk-1
+ * Clone Set: clone-1 [child-1]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-3 ]
+ * Clone Set: clone-2 [child-2]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-3 ]
+ * Clone Set: clone-3 [child-3]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-3 ]
diff --git a/cts/scheduler/summary/clone-max-zero.summary b/cts/scheduler/summary/clone-max-zero.summary
new file mode 100644
index 0000000..b5f4ec7
--- /dev/null
+++ b/cts/scheduler/summary/clone-max-zero.summary
@@ -0,0 +1,51 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c001n11 c001n12 ]
+
+ * Full List of Resources:
+ * fencing (stonith:external/ssh): Started c001n11
+ * Clone Set: dlm-clone [dlm]:
+ * dlm (ocf:pacemaker:controld): ORPHANED Started c001n12
+ * dlm (ocf:pacemaker:controld): ORPHANED Started c001n11
+ * Clone Set: o2cb-clone [o2cb]:
+ * Started: [ c001n11 c001n12 ]
+ * Clone Set: clone-drbd0 [drbd0]:
+ * Started: [ c001n11 c001n12 ]
+ * Clone Set: c-ocfs2-1 [ocfs2-1]:
+ * Started: [ c001n11 c001n12 ]
+
+Transition Summary:
+ * Stop dlm:0 ( c001n12 ) due to node availability
+ * Stop dlm:1 ( c001n11 ) due to node availability
+ * Stop o2cb:0 ( c001n12 ) due to node availability
+ * Stop o2cb:1 ( c001n11 ) due to node availability
+ * Stop ocfs2-1:0 ( c001n12 ) due to node availability
+ * Stop ocfs2-1:1 ( c001n11 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: c-ocfs2-1_stop_0
+ * Resource action: ocfs2-1:1 stop on c001n12
+ * Resource action: ocfs2-1:0 stop on c001n11
+ * Pseudo action: c-ocfs2-1_stopped_0
+ * Pseudo action: o2cb-clone_stop_0
+ * Resource action: o2cb:1 stop on c001n12
+ * Resource action: o2cb:0 stop on c001n11
+ * Pseudo action: o2cb-clone_stopped_0
+ * Pseudo action: dlm-clone_stop_0
+ * Resource action: dlm:1 stop on c001n12
+ * Resource action: dlm:0 stop on c001n11
+ * Pseudo action: dlm-clone_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n11 c001n12 ]
+
+ * Full List of Resources:
+ * fencing (stonith:external/ssh): Started c001n11
+ * Clone Set: dlm-clone [dlm]:
+ * Clone Set: o2cb-clone [o2cb]:
+ * Stopped: [ c001n11 c001n12 ]
+ * Clone Set: clone-drbd0 [drbd0]:
+ * Started: [ c001n11 c001n12 ]
+ * Clone Set: c-ocfs2-1 [ocfs2-1]:
+ * Stopped: [ c001n11 c001n12 ]
diff --git a/cts/scheduler/summary/clone-no-shuffle.summary b/cts/scheduler/summary/clone-no-shuffle.summary
new file mode 100644
index 0000000..e9b61b6
--- /dev/null
+++ b/cts/scheduler/summary/clone-no-shuffle.summary
@@ -0,0 +1,61 @@
+Current cluster status:
+ * Node List:
+ * Online: [ dktest1sles10 dktest2sles10 ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * Clone Set: ms-drbd1 [drbd1] (promotable):
+ * Promoted: [ dktest2sles10 ]
+ * Stopped: [ dktest1sles10 ]
+ * testip (ocf:heartbeat:IPaddr2): Started dktest2sles10
+
+Transition Summary:
+ * Start stonith-1 ( dktest1sles10 )
+ * Stop drbd1:0 ( Promoted dktest2sles10 ) due to node availability
+ * Start drbd1:1 ( dktest1sles10 )
+ * Stop testip ( dktest2sles10 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: stonith-1 monitor on dktest2sles10
+ * Resource action: stonith-1 monitor on dktest1sles10
+ * Resource action: drbd1:1 monitor on dktest1sles10
+ * Pseudo action: ms-drbd1_pre_notify_demote_0
+ * Resource action: testip stop on dktest2sles10
+ * Resource action: testip monitor on dktest1sles10
+ * Resource action: stonith-1 start on dktest1sles10
+ * Resource action: drbd1:0 notify on dktest2sles10
+ * Pseudo action: ms-drbd1_confirmed-pre_notify_demote_0
+ * Pseudo action: ms-drbd1_demote_0
+ * Resource action: drbd1:0 demote on dktest2sles10
+ * Pseudo action: ms-drbd1_demoted_0
+ * Pseudo action: ms-drbd1_post_notify_demoted_0
+ * Resource action: drbd1:0 notify on dktest2sles10
+ * Pseudo action: ms-drbd1_confirmed-post_notify_demoted_0
+ * Pseudo action: ms-drbd1_pre_notify_stop_0
+ * Resource action: drbd1:0 notify on dktest2sles10
+ * Pseudo action: ms-drbd1_confirmed-pre_notify_stop_0
+ * Pseudo action: ms-drbd1_stop_0
+ * Resource action: drbd1:0 stop on dktest2sles10
+ * Pseudo action: ms-drbd1_stopped_0
+ * Pseudo action: ms-drbd1_post_notify_stopped_0
+ * Pseudo action: ms-drbd1_confirmed-post_notify_stopped_0
+ * Pseudo action: ms-drbd1_pre_notify_start_0
+ * Pseudo action: ms-drbd1_confirmed-pre_notify_start_0
+ * Pseudo action: ms-drbd1_start_0
+ * Resource action: drbd1:1 start on dktest1sles10
+ * Pseudo action: ms-drbd1_running_0
+ * Pseudo action: ms-drbd1_post_notify_running_0
+ * Resource action: drbd1:1 notify on dktest1sles10
+ * Pseudo action: ms-drbd1_confirmed-post_notify_running_0
+ * Resource action: drbd1:1 monitor=11000 on dktest1sles10
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ dktest1sles10 dktest2sles10 ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Started dktest1sles10
+ * Clone Set: ms-drbd1 [drbd1] (promotable):
+ * Unpromoted: [ dktest1sles10 ]
+ * Stopped: [ dktest2sles10 ]
+ * testip (ocf:heartbeat:IPaddr2): Stopped
diff --git a/cts/scheduler/summary/clone-order-16instances.summary b/cts/scheduler/summary/clone-order-16instances.summary
new file mode 100644
index 0000000..52cf880
--- /dev/null
+++ b/cts/scheduler/summary/clone-order-16instances.summary
@@ -0,0 +1,72 @@
+16 of 33 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ virt-009.cluster-qe.lab.eng.brq.redhat.com virt-010.cluster-qe.lab.eng.brq.redhat.com virt-012.cluster-qe.lab.eng.brq.redhat.com virt-013.cluster-qe.lab.eng.brq.redhat.com virt-014.cluster-qe.lab.eng.brq.redhat.com virt-015.cluster-qe.lab.eng.brq.redhat.com virt-016.cluster-qe.lab.eng.brq.redhat.com virt-020.cluster-qe.lab.eng.brq.redhat.com virt-027.cluster-qe.lab.eng.brq.redhat.com virt-028.cluster-qe.lab.eng.brq.redhat.com virt-029.cluster-qe.lab.eng.brq.redhat.com virt-030.cluster-qe.lab.eng.brq.redhat.com virt-031.cluster-qe.lab.eng.brq.redhat.com virt-032.cluster-qe.lab.eng.brq.redhat.com virt-033.cluster-qe.lab.eng.brq.redhat.com virt-034.cluster-qe.lab.eng.brq.redhat.com ]
+
+ * Full List of Resources:
+ * virt-fencing (stonith:fence_xvm): Started virt-010.cluster-qe.lab.eng.brq.redhat.com
+ * Clone Set: dlm-clone [dlm]:
+ * Started: [ virt-010.cluster-qe.lab.eng.brq.redhat.com virt-012.cluster-qe.lab.eng.brq.redhat.com ]
+ * Stopped: [ virt-009.cluster-qe.lab.eng.brq.redhat.com virt-013.cluster-qe.lab.eng.brq.redhat.com virt-014.cluster-qe.lab.eng.brq.redhat.com virt-015.cluster-qe.lab.eng.brq.redhat.com virt-016.cluster-qe.lab.eng.brq.redhat.com virt-020.cluster-qe.lab.eng.brq.redhat.com virt-027.cluster-qe.lab.eng.brq.redhat.com virt-028.cluster-qe.lab.eng.brq.redhat.com virt-029.cluster-qe.lab.eng.brq.redhat.com virt-030.cluster-qe.lab.eng.brq.redhat.com virt-031.cluster-qe.lab.eng.brq.redhat.com virt-032.cluster-qe.lab.eng.brq.redhat.com virt-033.cluster-qe.lab.eng.brq.redhat.com virt-034.cluster-qe.lab.eng.brq.redhat.com ]
+ * Clone Set: clvmd-clone [clvmd] (disabled):
+ * Stopped (disabled): [ virt-009.cluster-qe.lab.eng.brq.redhat.com virt-010.cluster-qe.lab.eng.brq.redhat.com virt-012.cluster-qe.lab.eng.brq.redhat.com virt-013.cluster-qe.lab.eng.brq.redhat.com virt-014.cluster-qe.lab.eng.brq.redhat.com virt-015.cluster-qe.lab.eng.brq.redhat.com virt-016.cluster-qe.lab.eng.brq.redhat.com virt-020.cluster-qe.lab.eng.brq.redhat.com virt-027.cluster-qe.lab.eng.brq.redhat.com virt-028.cluster-qe.lab.eng.brq.redhat.com virt-029.cluster-qe.lab.eng.brq.redhat.com virt-030.cluster-qe.lab.eng.brq.redhat.com virt-031.cluster-qe.lab.eng.brq.redhat.com virt-032.cluster-qe.lab.eng.brq.redhat.com virt-033.cluster-qe.lab.eng.brq.redhat.com virt-034.cluster-qe.lab.eng.brq.redhat.com ]
+
+Transition Summary:
+ * Start dlm:2 ( virt-009.cluster-qe.lab.eng.brq.redhat.com )
+ * Start dlm:3 ( virt-013.cluster-qe.lab.eng.brq.redhat.com )
+ * Start dlm:4 ( virt-014.cluster-qe.lab.eng.brq.redhat.com )
+ * Start dlm:5 ( virt-015.cluster-qe.lab.eng.brq.redhat.com )
+ * Start dlm:6 ( virt-016.cluster-qe.lab.eng.brq.redhat.com )
+ * Start dlm:7 ( virt-020.cluster-qe.lab.eng.brq.redhat.com )
+ * Start dlm:8 ( virt-027.cluster-qe.lab.eng.brq.redhat.com )
+ * Start dlm:9 ( virt-028.cluster-qe.lab.eng.brq.redhat.com )
+ * Start dlm:10 ( virt-029.cluster-qe.lab.eng.brq.redhat.com )
+ * Start dlm:11 ( virt-030.cluster-qe.lab.eng.brq.redhat.com )
+ * Start dlm:12 ( virt-031.cluster-qe.lab.eng.brq.redhat.com )
+ * Start dlm:13 ( virt-032.cluster-qe.lab.eng.brq.redhat.com )
+ * Start dlm:14 ( virt-033.cluster-qe.lab.eng.brq.redhat.com )
+ * Start dlm:15 ( virt-034.cluster-qe.lab.eng.brq.redhat.com )
+
+Executing Cluster Transition:
+ * Pseudo action: dlm-clone_start_0
+ * Resource action: dlm start on virt-009.cluster-qe.lab.eng.brq.redhat.com
+ * Resource action: dlm start on virt-013.cluster-qe.lab.eng.brq.redhat.com
+ * Resource action: dlm start on virt-014.cluster-qe.lab.eng.brq.redhat.com
+ * Resource action: dlm start on virt-015.cluster-qe.lab.eng.brq.redhat.com
+ * Resource action: dlm start on virt-016.cluster-qe.lab.eng.brq.redhat.com
+ * Resource action: dlm start on virt-020.cluster-qe.lab.eng.brq.redhat.com
+ * Resource action: dlm start on virt-027.cluster-qe.lab.eng.brq.redhat.com
+ * Resource action: dlm start on virt-028.cluster-qe.lab.eng.brq.redhat.com
+ * Resource action: dlm start on virt-029.cluster-qe.lab.eng.brq.redhat.com
+ * Resource action: dlm start on virt-030.cluster-qe.lab.eng.brq.redhat.com
+ * Resource action: dlm start on virt-031.cluster-qe.lab.eng.brq.redhat.com
+ * Resource action: dlm start on virt-032.cluster-qe.lab.eng.brq.redhat.com
+ * Resource action: dlm start on virt-033.cluster-qe.lab.eng.brq.redhat.com
+ * Resource action: dlm start on virt-034.cluster-qe.lab.eng.brq.redhat.com
+ * Pseudo action: dlm-clone_running_0
+ * Resource action: dlm monitor=30000 on virt-009.cluster-qe.lab.eng.brq.redhat.com
+ * Resource action: dlm monitor=30000 on virt-013.cluster-qe.lab.eng.brq.redhat.com
+ * Resource action: dlm monitor=30000 on virt-014.cluster-qe.lab.eng.brq.redhat.com
+ * Resource action: dlm monitor=30000 on virt-015.cluster-qe.lab.eng.brq.redhat.com
+ * Resource action: dlm monitor=30000 on virt-016.cluster-qe.lab.eng.brq.redhat.com
+ * Resource action: dlm monitor=30000 on virt-020.cluster-qe.lab.eng.brq.redhat.com
+ * Resource action: dlm monitor=30000 on virt-027.cluster-qe.lab.eng.brq.redhat.com
+ * Resource action: dlm monitor=30000 on virt-028.cluster-qe.lab.eng.brq.redhat.com
+ * Resource action: dlm monitor=30000 on virt-029.cluster-qe.lab.eng.brq.redhat.com
+ * Resource action: dlm monitor=30000 on virt-030.cluster-qe.lab.eng.brq.redhat.com
+ * Resource action: dlm monitor=30000 on virt-031.cluster-qe.lab.eng.brq.redhat.com
+ * Resource action: dlm monitor=30000 on virt-032.cluster-qe.lab.eng.brq.redhat.com
+ * Resource action: dlm monitor=30000 on virt-033.cluster-qe.lab.eng.brq.redhat.com
+ * Resource action: dlm monitor=30000 on virt-034.cluster-qe.lab.eng.brq.redhat.com
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ virt-009.cluster-qe.lab.eng.brq.redhat.com virt-010.cluster-qe.lab.eng.brq.redhat.com virt-012.cluster-qe.lab.eng.brq.redhat.com virt-013.cluster-qe.lab.eng.brq.redhat.com virt-014.cluster-qe.lab.eng.brq.redhat.com virt-015.cluster-qe.lab.eng.brq.redhat.com virt-016.cluster-qe.lab.eng.brq.redhat.com virt-020.cluster-qe.lab.eng.brq.redhat.com virt-027.cluster-qe.lab.eng.brq.redhat.com virt-028.cluster-qe.lab.eng.brq.redhat.com virt-029.cluster-qe.lab.eng.brq.redhat.com virt-030.cluster-qe.lab.eng.brq.redhat.com virt-031.cluster-qe.lab.eng.brq.redhat.com virt-032.cluster-qe.lab.eng.brq.redhat.com virt-033.cluster-qe.lab.eng.brq.redhat.com virt-034.cluster-qe.lab.eng.brq.redhat.com ]
+
+ * Full List of Resources:
+ * virt-fencing (stonith:fence_xvm): Started virt-010.cluster-qe.lab.eng.brq.redhat.com
+ * Clone Set: dlm-clone [dlm]:
+ * Started: [ virt-009.cluster-qe.lab.eng.brq.redhat.com virt-010.cluster-qe.lab.eng.brq.redhat.com virt-012.cluster-qe.lab.eng.brq.redhat.com virt-013.cluster-qe.lab.eng.brq.redhat.com virt-014.cluster-qe.lab.eng.brq.redhat.com virt-015.cluster-qe.lab.eng.brq.redhat.com virt-016.cluster-qe.lab.eng.brq.redhat.com virt-020.cluster-qe.lab.eng.brq.redhat.com virt-027.cluster-qe.lab.eng.brq.redhat.com virt-028.cluster-qe.lab.eng.brq.redhat.com virt-029.cluster-qe.lab.eng.brq.redhat.com virt-030.cluster-qe.lab.eng.brq.redhat.com virt-031.cluster-qe.lab.eng.brq.redhat.com virt-032.cluster-qe.lab.eng.brq.redhat.com virt-033.cluster-qe.lab.eng.brq.redhat.com virt-034.cluster-qe.lab.eng.brq.redhat.com ]
+ * Clone Set: clvmd-clone [clvmd] (disabled):
+ * Stopped (disabled): [ virt-009.cluster-qe.lab.eng.brq.redhat.com virt-010.cluster-qe.lab.eng.brq.redhat.com virt-012.cluster-qe.lab.eng.brq.redhat.com virt-013.cluster-qe.lab.eng.brq.redhat.com virt-014.cluster-qe.lab.eng.brq.redhat.com virt-015.cluster-qe.lab.eng.brq.redhat.com virt-016.cluster-qe.lab.eng.brq.redhat.com virt-020.cluster-qe.lab.eng.brq.redhat.com virt-027.cluster-qe.lab.eng.brq.redhat.com virt-028.cluster-qe.lab.eng.brq.redhat.com virt-029.cluster-qe.lab.eng.brq.redhat.com virt-030.cluster-qe.lab.eng.brq.redhat.com virt-031.cluster-qe.lab.eng.brq.redhat.com virt-032.cluster-qe.lab.eng.brq.redhat.com virt-033.cluster-qe.lab.eng.brq.redhat.com virt-034.cluster-qe.lab.eng.brq.redhat.com ]
diff --git a/cts/scheduler/summary/clone-order-primitive.summary b/cts/scheduler/summary/clone-order-primitive.summary
new file mode 100644
index 0000000..33f613e
--- /dev/null
+++ b/cts/scheduler/summary/clone-order-primitive.summary
@@ -0,0 +1,29 @@
+Current cluster status:
+ * Node List:
+ * Online: [ pcw2058.see.ed.ac.uk pcw2059.see.ed.ac.uk pcw2688.see.ed.ac.uk pcw2709.see.ed.ac.uk ]
+
+ * Full List of Resources:
+ * Clone Set: cups_clone [cups_lsb]:
+ * Stopped: [ pcw2058.see.ed.ac.uk pcw2059.see.ed.ac.uk pcw2688.see.ed.ac.uk pcw2709.see.ed.ac.uk ]
+ * smb_lsb (lsb:smb): Stopped
+
+Transition Summary:
+ * Start cups_lsb:0 ( pcw2058.see.ed.ac.uk )
+ * Start cups_lsb:1 ( pcw2059.see.ed.ac.uk )
+ * Start smb_lsb ( pcw2688.see.ed.ac.uk )
+
+Executing Cluster Transition:
+ * Resource action: smb_lsb start on pcw2688.see.ed.ac.uk
+ * Pseudo action: cups_clone_start_0
+ * Resource action: cups_lsb:0 start on pcw2058.see.ed.ac.uk
+ * Resource action: cups_lsb:1 start on pcw2059.see.ed.ac.uk
+ * Pseudo action: cups_clone_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ pcw2058.see.ed.ac.uk pcw2059.see.ed.ac.uk pcw2688.see.ed.ac.uk pcw2709.see.ed.ac.uk ]
+
+ * Full List of Resources:
+ * Clone Set: cups_clone [cups_lsb]:
+ * Started: [ pcw2058.see.ed.ac.uk pcw2059.see.ed.ac.uk ]
+ * smb_lsb (lsb:smb): Started pcw2688.see.ed.ac.uk
diff --git a/cts/scheduler/summary/clone-require-all-1.summary b/cts/scheduler/summary/clone-require-all-1.summary
new file mode 100644
index 0000000..7037eb8
--- /dev/null
+++ b/cts/scheduler/summary/clone-require-all-1.summary
@@ -0,0 +1,36 @@
+Current cluster status:
+ * Node List:
+ * Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto1
+ * Clone Set: A-clone [A]:
+ * Started: [ rhel7-auto1 rhel7-auto2 ]
+ * Stopped: [ rhel7-auto3 rhel7-auto4 ]
+ * Clone Set: B-clone [B]:
+ * Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+
+Transition Summary:
+ * Start B:0 ( rhel7-auto3 )
+ * Start B:1 ( rhel7-auto4 )
+
+Executing Cluster Transition:
+ * Pseudo action: B-clone_start_0
+ * Resource action: B start on rhel7-auto3
+ * Resource action: B start on rhel7-auto4
+ * Pseudo action: B-clone_running_0
+ * Resource action: B monitor=10000 on rhel7-auto3
+ * Resource action: B monitor=10000 on rhel7-auto4
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto1
+ * Clone Set: A-clone [A]:
+ * Started: [ rhel7-auto1 rhel7-auto2 ]
+ * Stopped: [ rhel7-auto3 rhel7-auto4 ]
+ * Clone Set: B-clone [B]:
+ * Started: [ rhel7-auto3 rhel7-auto4 ]
+ * Stopped: [ rhel7-auto1 rhel7-auto2 ]
diff --git a/cts/scheduler/summary/clone-require-all-2.summary b/cts/scheduler/summary/clone-require-all-2.summary
new file mode 100644
index 0000000..72d6f24
--- /dev/null
+++ b/cts/scheduler/summary/clone-require-all-2.summary
@@ -0,0 +1,42 @@
+Current cluster status:
+ * Node List:
+ * Node rhel7-auto1: standby (with active resources)
+ * Node rhel7-auto2: standby (with active resources)
+ * Online: [ rhel7-auto3 rhel7-auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto1
+ * Clone Set: A-clone [A]:
+ * Started: [ rhel7-auto1 rhel7-auto2 ]
+ * Stopped: [ rhel7-auto3 rhel7-auto4 ]
+ * Clone Set: B-clone [B]:
+ * Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+
+Transition Summary:
+ * Move shooter ( rhel7-auto1 -> rhel7-auto3 )
+ * Stop A:0 ( rhel7-auto1 ) due to node availability
+ * Stop A:1 ( rhel7-auto2 ) due to node availability
+ * Start B:0 ( rhel7-auto4 ) due to unrunnable clone-one-or-more:order-A-clone-B-clone-mandatory (blocked)
+ * Start B:1 ( rhel7-auto3 ) due to unrunnable clone-one-or-more:order-A-clone-B-clone-mandatory (blocked)
+
+Executing Cluster Transition:
+ * Resource action: shooter stop on rhel7-auto1
+ * Pseudo action: A-clone_stop_0
+ * Resource action: shooter start on rhel7-auto3
+ * Resource action: A stop on rhel7-auto1
+ * Resource action: A stop on rhel7-auto2
+ * Pseudo action: A-clone_stopped_0
+ * Resource action: shooter monitor=60000 on rhel7-auto3
+
+Revised Cluster Status:
+ * Node List:
+ * Node rhel7-auto1: standby
+ * Node rhel7-auto2: standby
+ * Online: [ rhel7-auto3 rhel7-auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto3
+ * Clone Set: A-clone [A]:
+ * Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+ * Clone Set: B-clone [B]:
+ * Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
diff --git a/cts/scheduler/summary/clone-require-all-3.summary b/cts/scheduler/summary/clone-require-all-3.summary
new file mode 100644
index 0000000..b828bff
--- /dev/null
+++ b/cts/scheduler/summary/clone-require-all-3.summary
@@ -0,0 +1,47 @@
+Current cluster status:
+ * Node List:
+ * Node rhel7-auto1: standby (with active resources)
+ * Node rhel7-auto2: standby (with active resources)
+ * Online: [ rhel7-auto3 rhel7-auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto1
+ * Clone Set: A-clone [A]:
+ * Started: [ rhel7-auto1 rhel7-auto2 ]
+ * Stopped: [ rhel7-auto3 rhel7-auto4 ]
+ * Clone Set: B-clone [B]:
+ * Started: [ rhel7-auto3 rhel7-auto4 ]
+ * Stopped: [ rhel7-auto1 rhel7-auto2 ]
+
+Transition Summary:
+ * Move shooter ( rhel7-auto1 -> rhel7-auto3 )
+ * Stop A:0 ( rhel7-auto1 ) due to node availability
+ * Stop A:1 ( rhel7-auto2 ) due to node availability
+ * Stop B:0 ( rhel7-auto3 ) due to unrunnable clone-one-or-more:order-A-clone-B-clone-mandatory
+ * Stop B:1 ( rhel7-auto4 ) due to unrunnable clone-one-or-more:order-A-clone-B-clone-mandatory
+
+Executing Cluster Transition:
+ * Resource action: shooter stop on rhel7-auto1
+ * Pseudo action: B-clone_stop_0
+ * Resource action: shooter start on rhel7-auto3
+ * Resource action: B stop on rhel7-auto3
+ * Resource action: B stop on rhel7-auto4
+ * Pseudo action: B-clone_stopped_0
+ * Resource action: shooter monitor=60000 on rhel7-auto3
+ * Pseudo action: A-clone_stop_0
+ * Resource action: A stop on rhel7-auto1
+ * Resource action: A stop on rhel7-auto2
+ * Pseudo action: A-clone_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Node rhel7-auto1: standby
+ * Node rhel7-auto2: standby
+ * Online: [ rhel7-auto3 rhel7-auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto3
+ * Clone Set: A-clone [A]:
+ * Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+ * Clone Set: B-clone [B]:
+ * Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
diff --git a/cts/scheduler/summary/clone-require-all-4.summary b/cts/scheduler/summary/clone-require-all-4.summary
new file mode 100644
index 0000000..ebd7b6b
--- /dev/null
+++ b/cts/scheduler/summary/clone-require-all-4.summary
@@ -0,0 +1,41 @@
+Current cluster status:
+ * Node List:
+ * Node rhel7-auto1: standby (with active resources)
+ * Online: [ rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto1
+ * Clone Set: A-clone [A]:
+ * Started: [ rhel7-auto1 rhel7-auto2 ]
+ * Stopped: [ rhel7-auto3 rhel7-auto4 ]
+ * Clone Set: B-clone [B]:
+ * Started: [ rhel7-auto3 rhel7-auto4 ]
+ * Stopped: [ rhel7-auto1 rhel7-auto2 ]
+
+Transition Summary:
+ * Move shooter ( rhel7-auto1 -> rhel7-auto2 )
+ * Stop A:0 ( rhel7-auto1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: shooter stop on rhel7-auto1
+ * Pseudo action: A-clone_stop_0
+ * Resource action: shooter start on rhel7-auto2
+ * Resource action: A stop on rhel7-auto1
+ * Pseudo action: A-clone_stopped_0
+ * Pseudo action: A-clone_start_0
+ * Resource action: shooter monitor=60000 on rhel7-auto2
+ * Pseudo action: A-clone_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Node rhel7-auto1: standby
+ * Online: [ rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto2
+ * Clone Set: A-clone [A]:
+ * Started: [ rhel7-auto2 ]
+ * Stopped: [ rhel7-auto1 rhel7-auto3 rhel7-auto4 ]
+ * Clone Set: B-clone [B]:
+ * Started: [ rhel7-auto3 rhel7-auto4 ]
+ * Stopped: [ rhel7-auto1 rhel7-auto2 ]
diff --git a/cts/scheduler/summary/clone-require-all-5.summary b/cts/scheduler/summary/clone-require-all-5.summary
new file mode 100644
index 0000000..b47049e
--- /dev/null
+++ b/cts/scheduler/summary/clone-require-all-5.summary
@@ -0,0 +1,45 @@
+Current cluster status:
+ * Node List:
+ * Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto1
+ * Clone Set: A-clone [A]:
+ * Started: [ rhel7-auto1 rhel7-auto2 ]
+ * Stopped: [ rhel7-auto3 rhel7-auto4 ]
+ * Clone Set: B-clone [B]:
+ * Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+
+Transition Summary:
+ * Start A:2 ( rhel7-auto3 )
+ * Start B:0 ( rhel7-auto4 )
+ * Start B:1 ( rhel7-auto3 )
+ * Start B:2 ( rhel7-auto1 )
+
+Executing Cluster Transition:
+ * Pseudo action: A-clone_start_0
+ * Resource action: A start on rhel7-auto3
+ * Pseudo action: A-clone_running_0
+ * Pseudo action: clone-one-or-more:order-A-clone-B-clone-mandatory
+ * Resource action: A monitor=10000 on rhel7-auto3
+ * Pseudo action: B-clone_start_0
+ * Resource action: B start on rhel7-auto4
+ * Resource action: B start on rhel7-auto3
+ * Resource action: B start on rhel7-auto1
+ * Pseudo action: B-clone_running_0
+ * Resource action: B monitor=10000 on rhel7-auto4
+ * Resource action: B monitor=10000 on rhel7-auto3
+ * Resource action: B monitor=10000 on rhel7-auto1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto1
+ * Clone Set: A-clone [A]:
+ * Started: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
+ * Stopped: [ rhel7-auto4 ]
+ * Clone Set: B-clone [B]:
+ * Started: [ rhel7-auto1 rhel7-auto3 rhel7-auto4 ]
+ * Stopped: [ rhel7-auto2 ]
diff --git a/cts/scheduler/summary/clone-require-all-6.summary b/cts/scheduler/summary/clone-require-all-6.summary
new file mode 100644
index 0000000..5bae20c
--- /dev/null
+++ b/cts/scheduler/summary/clone-require-all-6.summary
@@ -0,0 +1,37 @@
+Current cluster status:
+ * Node List:
+ * Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto1
+ * Clone Set: A-clone [A]:
+ * Started: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
+ * Stopped: [ rhel7-auto4 ]
+ * Clone Set: B-clone [B]:
+ * Started: [ rhel7-auto1 rhel7-auto3 rhel7-auto4 ]
+ * Stopped: [ rhel7-auto2 ]
+
+Transition Summary:
+ * Stop A:0 ( rhel7-auto1 ) due to node availability
+ * Stop A:2 ( rhel7-auto3 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: A-clone_stop_0
+ * Resource action: A stop on rhel7-auto1
+ * Resource action: A stop on rhel7-auto3
+ * Pseudo action: A-clone_stopped_0
+ * Pseudo action: A-clone_start_0
+ * Pseudo action: A-clone_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto1
+ * Clone Set: A-clone [A]:
+ * Started: [ rhel7-auto2 ]
+ * Stopped: [ rhel7-auto1 rhel7-auto3 rhel7-auto4 ]
+ * Clone Set: B-clone [B]:
+ * Started: [ rhel7-auto1 rhel7-auto3 rhel7-auto4 ]
+ * Stopped: [ rhel7-auto2 ]
diff --git a/cts/scheduler/summary/clone-require-all-7.summary b/cts/scheduler/summary/clone-require-all-7.summary
new file mode 100644
index 0000000..f0f2820
--- /dev/null
+++ b/cts/scheduler/summary/clone-require-all-7.summary
@@ -0,0 +1,48 @@
+Current cluster status:
+ * Node List:
+ * Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto1
+ * Clone Set: A-clone [A]:
+ * Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+ * Clone Set: B-clone [B]:
+ * Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+
+Transition Summary:
+ * Start A:0 ( rhel7-auto2 )
+ * Start A:1 ( rhel7-auto1 )
+ * Start B:0 ( rhel7-auto3 )
+ * Start B:1 ( rhel7-auto4 )
+
+Executing Cluster Transition:
+ * Resource action: A:0 monitor on rhel7-auto4
+ * Resource action: A:0 monitor on rhel7-auto3
+ * Resource action: A:0 monitor on rhel7-auto2
+ * Resource action: A:1 monitor on rhel7-auto1
+ * Pseudo action: A-clone_start_0
+ * Resource action: A:0 start on rhel7-auto2
+ * Resource action: A:1 start on rhel7-auto1
+ * Pseudo action: A-clone_running_0
+ * Pseudo action: clone-one-or-more:order-A-clone-B-clone-mandatory
+ * Resource action: A:0 monitor=10000 on rhel7-auto2
+ * Resource action: A:1 monitor=10000 on rhel7-auto1
+ * Pseudo action: B-clone_start_0
+ * Resource action: B start on rhel7-auto3
+ * Resource action: B start on rhel7-auto4
+ * Pseudo action: B-clone_running_0
+ * Resource action: B monitor=10000 on rhel7-auto3
+ * Resource action: B monitor=10000 on rhel7-auto4
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto1
+ * Clone Set: A-clone [A]:
+ * Started: [ rhel7-auto1 rhel7-auto2 ]
+ * Stopped: [ rhel7-auto3 rhel7-auto4 ]
+ * Clone Set: B-clone [B]:
+ * Started: [ rhel7-auto3 rhel7-auto4 ]
+ * Stopped: [ rhel7-auto1 rhel7-auto2 ]
diff --git a/cts/scheduler/summary/clone-require-all-no-interleave-1.summary b/cts/scheduler/summary/clone-require-all-no-interleave-1.summary
new file mode 100644
index 0000000..646bfa3
--- /dev/null
+++ b/cts/scheduler/summary/clone-require-all-no-interleave-1.summary
@@ -0,0 +1,56 @@
+Current cluster status:
+ * Node List:
+ * Node rhel7-auto4: standby
+ * Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto1
+ * Clone Set: A-clone [A]:
+ * Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+ * Clone Set: B-clone [B]:
+ * Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+ * Clone Set: C-clone [C]:
+ * Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+
+Transition Summary:
+ * Start A:0 ( rhel7-auto3 )
+ * Start B:0 ( rhel7-auto3 )
+ * Start C:0 ( rhel7-auto2 )
+ * Start C:1 ( rhel7-auto1 )
+ * Start C:2 ( rhel7-auto3 )
+
+Executing Cluster Transition:
+ * Pseudo action: A-clone_start_0
+ * Resource action: A start on rhel7-auto3
+ * Pseudo action: A-clone_running_0
+ * Pseudo action: B-clone_start_0
+ * Resource action: A monitor=10000 on rhel7-auto3
+ * Resource action: B start on rhel7-auto3
+ * Pseudo action: B-clone_running_0
+ * Pseudo action: clone-one-or-more:order-B-clone-C-clone-mandatory
+ * Resource action: B monitor=10000 on rhel7-auto3
+ * Pseudo action: C-clone_start_0
+ * Resource action: C start on rhel7-auto2
+ * Resource action: C start on rhel7-auto1
+ * Resource action: C start on rhel7-auto3
+ * Pseudo action: C-clone_running_0
+ * Resource action: C monitor=10000 on rhel7-auto2
+ * Resource action: C monitor=10000 on rhel7-auto1
+ * Resource action: C monitor=10000 on rhel7-auto3
+
+Revised Cluster Status:
+ * Node List:
+ * Node rhel7-auto4: standby
+ * Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto1
+ * Clone Set: A-clone [A]:
+ * Started: [ rhel7-auto3 ]
+ * Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto4 ]
+ * Clone Set: B-clone [B]:
+ * Started: [ rhel7-auto3 ]
+ * Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto4 ]
+ * Clone Set: C-clone [C]:
+ * Started: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
+ * Stopped: [ rhel7-auto4 ]
diff --git a/cts/scheduler/summary/clone-require-all-no-interleave-2.summary b/cts/scheduler/summary/clone-require-all-no-interleave-2.summary
new file mode 100644
index 0000000..e40230c
--- /dev/null
+++ b/cts/scheduler/summary/clone-require-all-no-interleave-2.summary
@@ -0,0 +1,56 @@
+Current cluster status:
+ * Node List:
+ * Node rhel7-auto3: standby
+ * Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto1
+ * Clone Set: A-clone [A]:
+ * Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+ * Clone Set: B-clone [B]:
+ * Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+ * Clone Set: C-clone [C]:
+ * Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 rhel7-auto4 ]
+
+Transition Summary:
+ * Start A:0 ( rhel7-auto4 )
+ * Start B:0 ( rhel7-auto4 )
+ * Start C:0 ( rhel7-auto2 )
+ * Start C:1 ( rhel7-auto1 )
+ * Start C:2 ( rhel7-auto4 )
+
+Executing Cluster Transition:
+ * Pseudo action: A-clone_start_0
+ * Resource action: A start on rhel7-auto4
+ * Pseudo action: A-clone_running_0
+ * Pseudo action: B-clone_start_0
+ * Resource action: A monitor=10000 on rhel7-auto4
+ * Resource action: B start on rhel7-auto4
+ * Pseudo action: B-clone_running_0
+ * Pseudo action: clone-one-or-more:order-B-clone-C-clone-mandatory
+ * Resource action: B monitor=10000 on rhel7-auto4
+ * Pseudo action: C-clone_start_0
+ * Resource action: C start on rhel7-auto2
+ * Resource action: C start on rhel7-auto1
+ * Resource action: C start on rhel7-auto4
+ * Pseudo action: C-clone_running_0
+ * Resource action: C monitor=10000 on rhel7-auto2
+ * Resource action: C monitor=10000 on rhel7-auto1
+ * Resource action: C monitor=10000 on rhel7-auto4
+
+Revised Cluster Status:
+ * Node List:
+ * Node rhel7-auto3: standby
+ * Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto1
+ * Clone Set: A-clone [A]:
+ * Started: [ rhel7-auto4 ]
+ * Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
+ * Clone Set: B-clone [B]:
+ * Started: [ rhel7-auto4 ]
+ * Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
+ * Clone Set: C-clone [C]:
+ * Started: [ rhel7-auto1 rhel7-auto2 rhel7-auto4 ]
+ * Stopped: [ rhel7-auto3 ]
diff --git a/cts/scheduler/summary/clone-require-all-no-interleave-3.summary b/cts/scheduler/summary/clone-require-all-no-interleave-3.summary
new file mode 100644
index 0000000..a22bf45
--- /dev/null
+++ b/cts/scheduler/summary/clone-require-all-no-interleave-3.summary
@@ -0,0 +1,62 @@
+Current cluster status:
+ * Node List:
+ * Node rhel7-auto4: standby (with active resources)
+ * Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto1
+ * Clone Set: A-clone [A]:
+ * Started: [ rhel7-auto4 ]
+ * Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
+ * Clone Set: B-clone [B]:
+ * Started: [ rhel7-auto4 ]
+ * Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
+ * Clone Set: C-clone [C]:
+ * Started: [ rhel7-auto1 rhel7-auto2 rhel7-auto4 ]
+ * Stopped: [ rhel7-auto3 ]
+
+Transition Summary:
+ * Move A:0 ( rhel7-auto4 -> rhel7-auto3 )
+ * Move B:0 ( rhel7-auto4 -> rhel7-auto3 )
+ * Move C:0 ( rhel7-auto4 -> rhel7-auto3 )
+
+Executing Cluster Transition:
+ * Pseudo action: C-clone_stop_0
+ * Resource action: C stop on rhel7-auto4
+ * Pseudo action: C-clone_stopped_0
+ * Pseudo action: B-clone_stop_0
+ * Resource action: B stop on rhel7-auto4
+ * Pseudo action: B-clone_stopped_0
+ * Pseudo action: A-clone_stop_0
+ * Resource action: A stop on rhel7-auto4
+ * Pseudo action: A-clone_stopped_0
+ * Pseudo action: A-clone_start_0
+ * Resource action: A start on rhel7-auto3
+ * Pseudo action: A-clone_running_0
+ * Pseudo action: B-clone_start_0
+ * Resource action: A monitor=10000 on rhel7-auto3
+ * Resource action: B start on rhel7-auto3
+ * Pseudo action: B-clone_running_0
+ * Pseudo action: clone-one-or-more:order-B-clone-C-clone-mandatory
+ * Resource action: B monitor=10000 on rhel7-auto3
+ * Pseudo action: C-clone_start_0
+ * Resource action: C start on rhel7-auto3
+ * Pseudo action: C-clone_running_0
+ * Resource action: C monitor=10000 on rhel7-auto3
+
+Revised Cluster Status:
+ * Node List:
+ * Node rhel7-auto4: standby
+ * Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto1
+ * Clone Set: A-clone [A]:
+ * Started: [ rhel7-auto3 ]
+ * Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto4 ]
+ * Clone Set: B-clone [B]:
+ * Started: [ rhel7-auto3 ]
+ * Stopped: [ rhel7-auto1 rhel7-auto2 rhel7-auto4 ]
+ * Clone Set: C-clone [C]:
+ * Started: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
+ * Stopped: [ rhel7-auto4 ]
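
The clone-require-all-no-interleave-* fixtures add C-clone, ordered after B-clone without interleaving: every C instance waits on B-clone as a whole (note the single "clone-one-or-more:order-B-clone-C-clone-mandatory" gate before C-clone_start_0), rather than each C instance pairing with a B instance on its own node. A sketch of the clone definition this implies; the interleave meta-attribute is real, but the IDs and the resource agent are assumptions, since the summaries do not show them:

    <clone id="C-clone">
      <meta_attributes id="C-clone-meta_attributes">
        <!-- interleave="false" (the default): ordering against this clone
             applies to the clone as a whole, not per-node instance pairs. -->
        <nvpair id="C-clone-interleave" name="interleave" value="false"/>
      </meta_attributes>
      <primitive id="C" class="ocf" provider="heartbeat" type="Dummy"/>
    </clone>
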
diff --git a/cts/scheduler/summary/clone-requires-quorum-recovery.summary b/cts/scheduler/summary/clone-requires-quorum-recovery.summary
new file mode 100644
index 0000000..364dabe
--- /dev/null
+++ b/cts/scheduler/summary/clone-requires-quorum-recovery.summary
@@ -0,0 +1,48 @@
+Using the original execution date of: 2018-05-24 15:29:56Z
+Current cluster status:
+ * Node List:
+ * Node rhel7-5: UNCLEAN (offline)
+ * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-1
+ * FencingFail (stonith:fence_dummy): Started rhel7-2
+ * dummy-solo (ocf:pacemaker:Dummy): Started rhel7-3
+ * Clone Set: dummy-crowd-clone [dummy-crowd]:
+ * dummy-crowd (ocf:pacemaker:Dummy): ORPHANED Started rhel7-5 (UNCLEAN)
+ * Started: [ rhel7-1 rhel7-4 ]
+ * Stopped: [ rhel7-2 rhel7-3 ]
+ * Clone Set: dummy-boss-clone [dummy-boss] (promotable):
+ * Promoted: [ rhel7-3 ]
+ * Unpromoted: [ rhel7-2 rhel7-4 ]
+
+Transition Summary:
+ * Fence (reboot) rhel7-5 'peer is no longer part of the cluster'
+ * Start dummy-crowd:2 ( rhel7-2 )
+ * Stop dummy-crowd:3 ( rhel7-5 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: dummy-crowd-clone_stop_0
+ * Fencing rhel7-5 (reboot)
+ * Pseudo action: dummy-crowd_stop_0
+ * Pseudo action: dummy-crowd-clone_stopped_0
+ * Pseudo action: dummy-crowd-clone_start_0
+ * Resource action: dummy-crowd start on rhel7-2
+ * Pseudo action: dummy-crowd-clone_running_0
+ * Resource action: dummy-crowd monitor=10000 on rhel7-2
+Using the original execution date of: 2018-05-24 15:29:56Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 ]
+ * OFFLINE: [ rhel7-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-1
+ * FencingFail (stonith:fence_dummy): Started rhel7-2
+ * dummy-solo (ocf:pacemaker:Dummy): Started rhel7-3
+ * Clone Set: dummy-crowd-clone [dummy-crowd]:
+ * Started: [ rhel7-1 rhel7-2 rhel7-4 ]
+ * Clone Set: dummy-boss-clone [dummy-boss] (promotable):
+ * Promoted: [ rhel7-3 ]
+ * Unpromoted: [ rhel7-2 rhel7-4 ]
diff --git a/cts/scheduler/summary/clone-requires-quorum.summary b/cts/scheduler/summary/clone-requires-quorum.summary
new file mode 100644
index 0000000..e45b031
--- /dev/null
+++ b/cts/scheduler/summary/clone-requires-quorum.summary
@@ -0,0 +1,42 @@
+Using the original execution date of: 2018-05-24 15:30:29Z
+Current cluster status:
+ * Node List:
+ * Node rhel7-5: UNCLEAN (offline)
+ * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-1
+ * FencingFail (stonith:fence_dummy): Started rhel7-2
+ * dummy-solo (ocf:pacemaker:Dummy): Started rhel7-3
+ * Clone Set: dummy-crowd-clone [dummy-crowd]:
+ * dummy-crowd (ocf:pacemaker:Dummy): ORPHANED Started rhel7-5 (UNCLEAN)
+ * Started: [ rhel7-1 rhel7-2 rhel7-4 ]
+ * Clone Set: dummy-boss-clone [dummy-boss] (promotable):
+ * Promoted: [ rhel7-3 ]
+ * Unpromoted: [ rhel7-2 rhel7-4 ]
+
+Transition Summary:
+ * Fence (reboot) rhel7-5 'peer is no longer part of the cluster'
+ * Stop dummy-crowd:3 ( rhel7-5 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: dummy-crowd-clone_stop_0
+ * Fencing rhel7-5 (reboot)
+ * Pseudo action: dummy-crowd_stop_0
+ * Pseudo action: dummy-crowd-clone_stopped_0
+Using the original execution date of: 2018-05-24 15:30:29Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 ]
+ * OFFLINE: [ rhel7-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-1
+ * FencingFail (stonith:fence_dummy): Started rhel7-2
+ * dummy-solo (ocf:pacemaker:Dummy): Started rhel7-3
+ * Clone Set: dummy-crowd-clone [dummy-crowd]:
+ * Started: [ rhel7-1 rhel7-2 rhel7-4 ]
+ * Clone Set: dummy-boss-clone [dummy-boss] (promotable):
+ * Promoted: [ rhel7-3 ]
+ * Unpromoted: [ rhel7-2 rhel7-4 ]
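
Both clone-requires-quorum fixtures fence the lost node rhel7-5 and clean up its orphaned dummy-crowd instance; the "-recovery" variant additionally starts a replacement instance on rhel7-2. The behavior under test is the "requires" operation attribute, which sets the precondition a resource needs before it may start (nothing, quorum, fencing, or unfencing). A sketch of how that is typically expressed; the resource class matches the summary, while the operation ID and the rest of the primitive are assumptions:

    <primitive id="dummy-crowd" class="ocf" provider="pacemaker" type="Dummy">
      <operations>
        <!-- requires="quorum": instances may only start while the
             cluster partition holds quorum (assumed configuration). -->
        <op id="dummy-crowd-start" name="start" interval="0" requires="quorum"/>
      </operations>
    </primitive>
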
diff --git a/cts/scheduler/summary/clone_min_interleave_start_one.summary b/cts/scheduler/summary/clone_min_interleave_start_one.summary
new file mode 100644
index 0000000..026d688
--- /dev/null
+++ b/cts/scheduler/summary/clone_min_interleave_start_one.summary
@@ -0,0 +1,41 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c7auto1 c7auto2 c7auto3 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto1
+ * Clone Set: FAKE1-clone [FAKE1]:
+ * Stopped: [ c7auto1 c7auto2 c7auto3 ]
+ * Clone Set: FAKE2-clone [FAKE2]:
+ * Stopped: [ c7auto1 c7auto2 c7auto3 ]
+ * Clone Set: FAKE3-clone [FAKE3]:
+ * Stopped: [ c7auto1 c7auto2 c7auto3 ]
+
+Transition Summary:
+ * Start FAKE1:0 ( c7auto1 )
+ * Start FAKE2:0 ( c7auto2 ) due to unrunnable clone-one-or-more:order-FAKE1-clone-FAKE2-clone-mandatory (blocked)
+ * Start FAKE2:1 ( c7auto3 ) due to unrunnable clone-one-or-more:order-FAKE1-clone-FAKE2-clone-mandatory (blocked)
+ * Start FAKE2:2 ( c7auto1 ) due to unrunnable clone-one-or-more:order-FAKE1-clone-FAKE2-clone-mandatory (blocked)
+ * Start FAKE3:0 ( c7auto2 ) due to unrunnable FAKE2:0 start (blocked)
+ * Start FAKE3:1 ( c7auto3 ) due to unrunnable FAKE2:1 start (blocked)
+ * Start FAKE3:2 ( c7auto1 ) due to unrunnable FAKE2:2 start (blocked)
+
+Executing Cluster Transition:
+ * Pseudo action: FAKE1-clone_start_0
+ * Resource action: FAKE1 start on c7auto1
+ * Pseudo action: FAKE1-clone_running_0
+ * Resource action: FAKE1 monitor=10000 on c7auto1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c7auto1 c7auto2 c7auto3 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto1
+ * Clone Set: FAKE1-clone [FAKE1]:
+ * Started: [ c7auto1 ]
+ * Stopped: [ c7auto2 c7auto3 ]
+ * Clone Set: FAKE2-clone [FAKE2]:
+ * Stopped: [ c7auto1 c7auto2 c7auto3 ]
+ * Clone Set: FAKE3-clone [FAKE3]:
+ * Stopped: [ c7auto1 c7auto2 c7auto3 ]
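
The clone_min_* family exercises the clone-min meta-attribute: an ordering that follows the clone becomes runnable only once at least clone-min instances are active. Here only one FAKE1 instance can start, so every FAKE2 and FAKE3 instance stays blocked; in clone_min_interleave_start_two, next, two instances start and the whole chain proceeds, which is consistent with clone-min=2. The later clone_min_start_* and clone_min_stop_* fixtures apply the same gate to the primitive FAKE. A sketch of the clone definition; clone-min is a real clone meta-attribute, while the value 2, the IDs, and the resource agent are inferred from the fixtures rather than shown in them:

    <clone id="FAKE1-clone">
      <meta_attributes id="FAKE1-clone-meta_attributes">
        <!-- clone-min="2" (inferred): dependents of FAKE1-clone become
             runnable only after two FAKE1 instances are active. -->
        <nvpair id="FAKE1-clone-clone-min" name="clone-min" value="2"/>
      </meta_attributes>
      <primitive id="FAKE1" class="ocf" provider="heartbeat" type="Dummy"/>
    </clone>
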
diff --git a/cts/scheduler/summary/clone_min_interleave_start_two.summary b/cts/scheduler/summary/clone_min_interleave_start_two.summary
new file mode 100644
index 0000000..74c5a45
--- /dev/null
+++ b/cts/scheduler/summary/clone_min_interleave_start_two.summary
@@ -0,0 +1,61 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c7auto1 c7auto2 c7auto3 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto1
+ * Clone Set: FAKE1-clone [FAKE1]:
+ * Stopped: [ c7auto1 c7auto2 c7auto3 ]
+ * Clone Set: FAKE2-clone [FAKE2]:
+ * Stopped: [ c7auto1 c7auto2 c7auto3 ]
+ * Clone Set: FAKE3-clone [FAKE3]:
+ * Stopped: [ c7auto1 c7auto2 c7auto3 ]
+
+Transition Summary:
+ * Start FAKE1:0 ( c7auto2 )
+ * Start FAKE1:1 ( c7auto1 )
+ * Start FAKE2:0 ( c7auto3 )
+ * Start FAKE2:1 ( c7auto2 )
+ * Start FAKE2:2 ( c7auto1 )
+ * Start FAKE3:0 ( c7auto3 )
+ * Start FAKE3:1 ( c7auto2 )
+ * Start FAKE3:2 ( c7auto1 )
+
+Executing Cluster Transition:
+ * Pseudo action: FAKE1-clone_start_0
+ * Resource action: FAKE1 start on c7auto2
+ * Resource action: FAKE1 start on c7auto1
+ * Pseudo action: FAKE1-clone_running_0
+ * Pseudo action: clone-one-or-more:order-FAKE1-clone-FAKE2-clone-mandatory
+ * Resource action: FAKE1 monitor=10000 on c7auto2
+ * Resource action: FAKE1 monitor=10000 on c7auto1
+ * Pseudo action: FAKE2-clone_start_0
+ * Resource action: FAKE2 start on c7auto3
+ * Resource action: FAKE2 start on c7auto2
+ * Resource action: FAKE2 start on c7auto1
+ * Pseudo action: FAKE2-clone_running_0
+ * Pseudo action: FAKE3-clone_start_0
+ * Resource action: FAKE2 monitor=10000 on c7auto3
+ * Resource action: FAKE2 monitor=10000 on c7auto2
+ * Resource action: FAKE2 monitor=10000 on c7auto1
+ * Resource action: FAKE3 start on c7auto3
+ * Resource action: FAKE3 start on c7auto2
+ * Resource action: FAKE3 start on c7auto1
+ * Pseudo action: FAKE3-clone_running_0
+ * Resource action: FAKE3 monitor=10000 on c7auto3
+ * Resource action: FAKE3 monitor=10000 on c7auto2
+ * Resource action: FAKE3 monitor=10000 on c7auto1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c7auto1 c7auto2 c7auto3 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto1
+ * Clone Set: FAKE1-clone [FAKE1]:
+ * Started: [ c7auto1 c7auto2 ]
+ * Stopped: [ c7auto3 ]
+ * Clone Set: FAKE2-clone [FAKE2]:
+ * Started: [ c7auto1 c7auto2 c7auto3 ]
+ * Clone Set: FAKE3-clone [FAKE3]:
+ * Started: [ c7auto1 c7auto2 c7auto3 ]
diff --git a/cts/scheduler/summary/clone_min_interleave_stop_one.summary b/cts/scheduler/summary/clone_min_interleave_stop_one.summary
new file mode 100644
index 0000000..ac1f40b
--- /dev/null
+++ b/cts/scheduler/summary/clone_min_interleave_stop_one.summary
@@ -0,0 +1,36 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c7auto1 c7auto2 c7auto3 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto1
+ * Clone Set: FAKE1-clone [FAKE1]:
+ * Started: [ c7auto1 c7auto2 c7auto3 ]
+ * Clone Set: FAKE2-clone [FAKE2]:
+ * Started: [ c7auto1 c7auto2 c7auto3 ]
+ * Clone Set: FAKE3-clone [FAKE3]:
+ * Started: [ c7auto1 c7auto2 c7auto3 ]
+
+Transition Summary:
+ * Stop FAKE1:0 ( c7auto3 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: FAKE1-clone_stop_0
+ * Resource action: FAKE1 stop on c7auto3
+ * Pseudo action: FAKE1-clone_stopped_0
+ * Pseudo action: FAKE1-clone_start_0
+ * Pseudo action: FAKE1-clone_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c7auto1 c7auto2 c7auto3 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto1
+ * Clone Set: FAKE1-clone [FAKE1]:
+ * Started: [ c7auto1 c7auto2 ]
+ * Stopped: [ c7auto3 ]
+ * Clone Set: FAKE2-clone [FAKE2]:
+ * Started: [ c7auto1 c7auto2 c7auto3 ]
+ * Clone Set: FAKE3-clone [FAKE3]:
+ * Started: [ c7auto1 c7auto2 c7auto3 ]
diff --git a/cts/scheduler/summary/clone_min_interleave_stop_two.summary b/cts/scheduler/summary/clone_min_interleave_stop_two.summary
new file mode 100644
index 0000000..d5d63fb
--- /dev/null
+++ b/cts/scheduler/summary/clone_min_interleave_stop_two.summary
@@ -0,0 +1,54 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c7auto1 c7auto2 c7auto3 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto1
+ * Clone Set: FAKE1-clone [FAKE1]:
+ * Started: [ c7auto1 c7auto2 c7auto3 ]
+ * Clone Set: FAKE2-clone [FAKE2]:
+ * Started: [ c7auto1 c7auto2 c7auto3 ]
+ * Clone Set: FAKE3-clone [FAKE3]:
+ * Started: [ c7auto1 c7auto2 c7auto3 ]
+
+Transition Summary:
+ * Stop FAKE1:0 ( c7auto3 ) due to node availability
+ * Stop FAKE1:2 ( c7auto2 ) due to node availability
+ * Stop FAKE2:0 ( c7auto3 ) due to unrunnable clone-one-or-more:order-FAKE1-clone-FAKE2-clone-mandatory
+ * Stop FAKE2:1 ( c7auto1 ) due to unrunnable clone-one-or-more:order-FAKE1-clone-FAKE2-clone-mandatory
+ * Stop FAKE2:2 ( c7auto2 ) due to unrunnable clone-one-or-more:order-FAKE1-clone-FAKE2-clone-mandatory
+ * Stop FAKE3:0 ( c7auto3 ) due to required FAKE2:0 start
+ * Stop FAKE3:1 ( c7auto1 ) due to required FAKE2:1 start
+ * Stop FAKE3:2 ( c7auto2 ) due to required FAKE2:2 start
+
+Executing Cluster Transition:
+ * Pseudo action: FAKE3-clone_stop_0
+ * Resource action: FAKE3 stop on c7auto3
+ * Resource action: FAKE3 stop on c7auto1
+ * Resource action: FAKE3 stop on c7auto2
+ * Pseudo action: FAKE3-clone_stopped_0
+ * Pseudo action: FAKE2-clone_stop_0
+ * Resource action: FAKE2 stop on c7auto3
+ * Resource action: FAKE2 stop on c7auto1
+ * Resource action: FAKE2 stop on c7auto2
+ * Pseudo action: FAKE2-clone_stopped_0
+ * Pseudo action: FAKE1-clone_stop_0
+ * Resource action: FAKE1 stop on c7auto3
+ * Resource action: FAKE1 stop on c7auto2
+ * Pseudo action: FAKE1-clone_stopped_0
+ * Pseudo action: FAKE1-clone_start_0
+ * Pseudo action: FAKE1-clone_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c7auto1 c7auto2 c7auto3 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto1
+ * Clone Set: FAKE1-clone [FAKE1]:
+ * Started: [ c7auto1 ]
+ * Stopped: [ c7auto2 c7auto3 ]
+ * Clone Set: FAKE2-clone [FAKE2]:
+ * Stopped: [ c7auto1 c7auto2 c7auto3 ]
+ * Clone Set: FAKE3-clone [FAKE3]:
+ * Stopped: [ c7auto1 c7auto2 c7auto3 ]
diff --git a/cts/scheduler/summary/clone_min_start_one.summary b/cts/scheduler/summary/clone_min_start_one.summary
new file mode 100644
index 0000000..395b131
--- /dev/null
+++ b/cts/scheduler/summary/clone_min_start_one.summary
@@ -0,0 +1,38 @@
+Current cluster status:
+ * Node List:
+ * Node c7auto1: standby (with active resources)
+ * Node c7auto2: standby
+ * Online: [ c7auto3 c7auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto1
+ * Clone Set: FAKECLONE-clone [FAKECLONE]:
+ * Stopped: [ c7auto1 c7auto2 c7auto3 c7auto4 ]
+ * FAKE (ocf:heartbeat:Dummy): Stopped
+
+Transition Summary:
+ * Move shooter ( c7auto1 -> c7auto3 )
+ * Start FAKECLONE:0 ( c7auto3 )
+ * Start FAKE ( c7auto4 ) due to unrunnable clone-one-or-more:order-FAKECLONE-clone-FAKE-mandatory (blocked)
+
+Executing Cluster Transition:
+ * Resource action: shooter stop on c7auto1
+ * Pseudo action: FAKECLONE-clone_start_0
+ * Resource action: shooter start on c7auto3
+ * Resource action: FAKECLONE start on c7auto3
+ * Pseudo action: FAKECLONE-clone_running_0
+ * Resource action: shooter monitor=60000 on c7auto3
+ * Resource action: FAKECLONE monitor=10000 on c7auto3
+
+Revised Cluster Status:
+ * Node List:
+ * Node c7auto1: standby
+ * Node c7auto2: standby
+ * Online: [ c7auto3 c7auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto3
+ * Clone Set: FAKECLONE-clone [FAKECLONE]:
+ * Started: [ c7auto3 ]
+ * Stopped: [ c7auto1 c7auto2 c7auto4 ]
+ * FAKE (ocf:heartbeat:Dummy): Stopped
diff --git a/cts/scheduler/summary/clone_min_start_two.summary b/cts/scheduler/summary/clone_min_start_two.summary
new file mode 100644
index 0000000..43eb34d
--- /dev/null
+++ b/cts/scheduler/summary/clone_min_start_two.summary
@@ -0,0 +1,38 @@
+Current cluster status:
+ * Node List:
+ * Node c7auto2: standby
+ * Online: [ c7auto1 c7auto3 c7auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto1
+ * Clone Set: FAKECLONE-clone [FAKECLONE]:
+ * Stopped: [ c7auto1 c7auto2 c7auto3 c7auto4 ]
+ * FAKE (ocf:heartbeat:Dummy): Stopped
+
+Transition Summary:
+ * Start FAKECLONE:0 ( c7auto3 )
+ * Start FAKECLONE:1 ( c7auto1 )
+ * Start FAKE ( c7auto4 )
+
+Executing Cluster Transition:
+ * Pseudo action: FAKECLONE-clone_start_0
+ * Resource action: FAKECLONE start on c7auto3
+ * Resource action: FAKECLONE start on c7auto1
+ * Pseudo action: FAKECLONE-clone_running_0
+ * Pseudo action: clone-one-or-more:order-FAKECLONE-clone-FAKE-mandatory
+ * Resource action: FAKECLONE monitor=10000 on c7auto3
+ * Resource action: FAKECLONE monitor=10000 on c7auto1
+ * Resource action: FAKE start on c7auto4
+ * Resource action: FAKE monitor=10000 on c7auto4
+
+Revised Cluster Status:
+ * Node List:
+ * Node c7auto2: standby
+ * Online: [ c7auto1 c7auto3 c7auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto1
+ * Clone Set: FAKECLONE-clone [FAKECLONE]:
+ * Started: [ c7auto1 c7auto3 ]
+ * Stopped: [ c7auto2 c7auto4 ]
+ * FAKE (ocf:heartbeat:Dummy): Started c7auto4
diff --git a/cts/scheduler/summary/clone_min_stop_all.summary b/cts/scheduler/summary/clone_min_stop_all.summary
new file mode 100644
index 0000000..9f52aa9
--- /dev/null
+++ b/cts/scheduler/summary/clone_min_stop_all.summary
@@ -0,0 +1,44 @@
+Current cluster status:
+ * Node List:
+ * Node c7auto1: standby (with active resources)
+ * Node c7auto2: standby (with active resources)
+ * Node c7auto3: standby (with active resources)
+ * Online: [ c7auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto1
+ * Clone Set: FAKECLONE-clone [FAKECLONE]:
+ * Started: [ c7auto1 c7auto2 c7auto3 ]
+ * Stopped: [ c7auto4 ]
+ * FAKE (ocf:heartbeat:Dummy): Started c7auto4
+
+Transition Summary:
+ * Move shooter ( c7auto1 -> c7auto4 )
+ * Stop FAKECLONE:0 ( c7auto1 ) due to node availability
+ * Stop FAKECLONE:1 ( c7auto2 ) due to node availability
+ * Stop FAKECLONE:2 ( c7auto3 ) due to node availability
+ * Stop FAKE ( c7auto4 ) due to unrunnable clone-one-or-more:order-FAKECLONE-clone-FAKE-mandatory
+
+Executing Cluster Transition:
+ * Resource action: shooter stop on c7auto1
+ * Resource action: FAKE stop on c7auto4
+ * Resource action: shooter start on c7auto4
+ * Pseudo action: FAKECLONE-clone_stop_0
+ * Resource action: shooter monitor=60000 on c7auto4
+ * Resource action: FAKECLONE stop on c7auto1
+ * Resource action: FAKECLONE stop on c7auto2
+ * Resource action: FAKECLONE stop on c7auto3
+ * Pseudo action: FAKECLONE-clone_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Node c7auto1: standby
+ * Node c7auto2: standby
+ * Node c7auto3: standby
+ * Online: [ c7auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto4
+ * Clone Set: FAKECLONE-clone [FAKECLONE]:
+ * Stopped: [ c7auto1 c7auto2 c7auto3 c7auto4 ]
+ * FAKE (ocf:heartbeat:Dummy): Stopped
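
clone_min_stop_all shows the gate working in reverse: with every FAKECLONE instance stopping, the clone can no longer satisfy its minimum, so FAKE is stopped first (its stop is ordered before FAKECLONE-clone_stop_0 in the transition above). The underlying ordering constraint, with its ID taken from the "clone-one-or-more:order-FAKECLONE-clone-FAKE-mandatory" pseudo action in the neighboring fixtures and the rest assumed:

    <rsc_order id="order-FAKECLONE-clone-FAKE-mandatory"
               first="FAKECLONE-clone" then="FAKE" kind="Mandatory"/>
    <!-- Mandatory ordering: FAKE must stop before the clone it depends on
         may stop, and may start only after the clone-min gate is met. -->
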
diff --git a/cts/scheduler/summary/clone_min_stop_one.summary b/cts/scheduler/summary/clone_min_stop_one.summary
new file mode 100644
index 0000000..ec2b9ae
--- /dev/null
+++ b/cts/scheduler/summary/clone_min_stop_one.summary
@@ -0,0 +1,33 @@
+Current cluster status:
+ * Node List:
+ * Node c7auto2: standby (with active resources)
+ * Online: [ c7auto1 c7auto3 c7auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto1
+ * Clone Set: FAKECLONE-clone [FAKECLONE]:
+ * Started: [ c7auto1 c7auto2 c7auto3 ]
+ * Stopped: [ c7auto4 ]
+ * FAKE (ocf:heartbeat:Dummy): Started c7auto4
+
+Transition Summary:
+ * Stop FAKECLONE:1 ( c7auto2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: FAKECLONE-clone_stop_0
+ * Resource action: FAKECLONE stop on c7auto2
+ * Pseudo action: FAKECLONE-clone_stopped_0
+ * Pseudo action: FAKECLONE-clone_start_0
+ * Pseudo action: FAKECLONE-clone_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Node c7auto2: standby
+ * Online: [ c7auto1 c7auto3 c7auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto1
+ * Clone Set: FAKECLONE-clone [FAKECLONE]:
+ * Started: [ c7auto1 c7auto3 ]
+ * Stopped: [ c7auto2 c7auto4 ]
+ * FAKE (ocf:heartbeat:Dummy): Started c7auto4
diff --git a/cts/scheduler/summary/clone_min_stop_two.summary b/cts/scheduler/summary/clone_min_stop_two.summary
new file mode 100644
index 0000000..bdf8025
--- /dev/null
+++ b/cts/scheduler/summary/clone_min_stop_two.summary
@@ -0,0 +1,43 @@
+Current cluster status:
+ * Node List:
+ * Node c7auto1: standby (with active resources)
+ * Node c7auto2: standby (with active resources)
+ * Online: [ c7auto3 c7auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto1
+ * Clone Set: FAKECLONE-clone [FAKECLONE]:
+ * Started: [ c7auto1 c7auto2 c7auto3 ]
+ * Stopped: [ c7auto4 ]
+ * FAKE (ocf:heartbeat:Dummy): Started c7auto4
+
+Transition Summary:
+ * Move shooter ( c7auto1 -> c7auto3 )
+ * Stop FAKECLONE:0 ( c7auto1 ) due to node availability
+ * Stop FAKECLONE:1 ( c7auto2 ) due to node availability
+ * Stop FAKE ( c7auto4 ) due to unrunnable clone-one-or-more:order-FAKECLONE-clone-FAKE-mandatory
+
+Executing Cluster Transition:
+ * Resource action: shooter stop on c7auto1
+ * Resource action: FAKE stop on c7auto4
+ * Resource action: shooter start on c7auto3
+ * Pseudo action: FAKECLONE-clone_stop_0
+ * Resource action: shooter monitor=60000 on c7auto3
+ * Resource action: FAKECLONE stop on c7auto1
+ * Resource action: FAKECLONE stop on c7auto2
+ * Pseudo action: FAKECLONE-clone_stopped_0
+ * Pseudo action: FAKECLONE-clone_start_0
+ * Pseudo action: FAKECLONE-clone_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Node c7auto1: standby
+ * Node c7auto2: standby
+ * Online: [ c7auto3 c7auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto3
+ * Clone Set: FAKECLONE-clone [FAKECLONE]:
+ * Started: [ c7auto3 ]
+ * Stopped: [ c7auto1 c7auto2 c7auto4 ]
+ * FAKE (ocf:heartbeat:Dummy): Stopped
diff --git a/cts/scheduler/summary/cloned-group-stop.summary b/cts/scheduler/summary/cloned-group-stop.summary
new file mode 100644
index 0000000..9ba97be
--- /dev/null
+++ b/cts/scheduler/summary/cloned-group-stop.summary
@@ -0,0 +1,91 @@
+2 of 20 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ rhos4-node3 rhos4-node4 ]
+
+ * Full List of Resources:
+ * virt-fencing (stonith:fence_xvm): Started rhos4-node3
+ * Resource Group: mysql-group:
+ * mysql-vip (ocf:heartbeat:IPaddr2): Started rhos4-node3
+ * mysql-fs (ocf:heartbeat:Filesystem): Started rhos4-node3
+ * mysql-db (ocf:heartbeat:mysql): Started rhos4-node3
+ * Clone Set: qpidd-clone [qpidd] (disabled):
+ * Started: [ rhos4-node3 rhos4-node4 ]
+ * Clone Set: keystone-clone [keystone]:
+ * Started: [ rhos4-node3 rhos4-node4 ]
+ * Clone Set: glance-clone [glance]:
+ * Started: [ rhos4-node3 rhos4-node4 ]
+ * Clone Set: cinder-clone [cinder]:
+ * Started: [ rhos4-node3 rhos4-node4 ]
+
+Transition Summary:
+ * Stop qpidd:0 ( rhos4-node4 ) due to node availability
+ * Stop qpidd:1 ( rhos4-node3 ) due to node availability
+ * Stop keystone:0 ( rhos4-node4 ) due to unrunnable qpidd-clone running
+ * Stop keystone:1 ( rhos4-node3 ) due to unrunnable qpidd-clone running
+ * Stop glance-fs:0 ( rhos4-node4 ) due to required keystone-clone running
+ * Stop glance-registry:0 ( rhos4-node4 ) due to required glance-fs:0 stop
+ * Stop glance-api:0 ( rhos4-node4 ) due to required glance-registry:0 start
+ * Stop glance-fs:1 ( rhos4-node3 ) due to required keystone-clone running
+ * Stop glance-registry:1 ( rhos4-node3 ) due to required glance-fs:1 stop
+ * Stop glance-api:1 ( rhos4-node3 ) due to required glance-registry:1 start
+ * Stop cinder-api:0 ( rhos4-node4 ) due to required glance-clone running
+ * Stop cinder-scheduler:0 ( rhos4-node4 ) due to required cinder-api:0 stop
+ * Stop cinder-volume:0 ( rhos4-node4 ) due to required cinder-scheduler:0 start
+ * Stop cinder-api:1 ( rhos4-node3 ) due to required glance-clone running
+ * Stop cinder-scheduler:1 ( rhos4-node3 ) due to required cinder-api:1 stop
+ * Stop cinder-volume:1 ( rhos4-node3 ) due to required cinder-scheduler:1 start
+
+Executing Cluster Transition:
+ * Pseudo action: cinder-clone_stop_0
+ * Pseudo action: cinder:0_stop_0
+ * Resource action: cinder-volume stop on rhos4-node4
+ * Pseudo action: cinder:1_stop_0
+ * Resource action: cinder-volume stop on rhos4-node3
+ * Resource action: cinder-scheduler stop on rhos4-node4
+ * Resource action: cinder-scheduler stop on rhos4-node3
+ * Resource action: cinder-api stop on rhos4-node4
+ * Resource action: cinder-api stop on rhos4-node3
+ * Pseudo action: cinder:0_stopped_0
+ * Pseudo action: cinder:1_stopped_0
+ * Pseudo action: cinder-clone_stopped_0
+ * Pseudo action: glance-clone_stop_0
+ * Pseudo action: glance:0_stop_0
+ * Resource action: glance-api stop on rhos4-node4
+ * Pseudo action: glance:1_stop_0
+ * Resource action: glance-api stop on rhos4-node3
+ * Resource action: glance-registry stop on rhos4-node4
+ * Resource action: glance-registry stop on rhos4-node3
+ * Resource action: glance-fs stop on rhos4-node4
+ * Resource action: glance-fs stop on rhos4-node3
+ * Pseudo action: glance:0_stopped_0
+ * Pseudo action: glance:1_stopped_0
+ * Pseudo action: glance-clone_stopped_0
+ * Pseudo action: keystone-clone_stop_0
+ * Resource action: keystone stop on rhos4-node4
+ * Resource action: keystone stop on rhos4-node3
+ * Pseudo action: keystone-clone_stopped_0
+ * Pseudo action: qpidd-clone_stop_0
+ * Resource action: qpidd stop on rhos4-node4
+ * Resource action: qpidd stop on rhos4-node3
+ * Pseudo action: qpidd-clone_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhos4-node3 rhos4-node4 ]
+
+ * Full List of Resources:
+ * virt-fencing (stonith:fence_xvm): Started rhos4-node3
+ * Resource Group: mysql-group:
+ * mysql-vip (ocf:heartbeat:IPaddr2): Started rhos4-node3
+ * mysql-fs (ocf:heartbeat:Filesystem): Started rhos4-node3
+ * mysql-db (ocf:heartbeat:mysql): Started rhos4-node3
+ * Clone Set: qpidd-clone [qpidd] (disabled):
+ * Stopped (disabled): [ rhos4-node3 rhos4-node4 ]
+ * Clone Set: keystone-clone [keystone]:
+ * Stopped: [ rhos4-node3 rhos4-node4 ]
+ * Clone Set: glance-clone [glance]:
+ * Stopped: [ rhos4-node3 rhos4-node4 ]
+ * Clone Set: cinder-clone [cinder]:
+ * Stopped: [ rhos4-node3 rhos4-node4 ]
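
cloned-group-stop tears down a chain of ordered clones (qpidd -> keystone -> glance -> cinder, per the "due to unrunnable/required ... running" reasons above) in reverse, and glance and cinder are cloned groups, so each instance stops its group members last-to-first before the clone reports stopped. A sketch of the ordering chain implied by the transition summary; the constraint IDs are assumptions:

    <rsc_order id="order-qpidd-keystone"  first="qpidd-clone"    then="keystone-clone"/>
    <rsc_order id="order-keystone-glance" first="keystone-clone" then="glance-clone"/>
    <rsc_order id="order-glance-cinder"   first="glance-clone"   then="cinder-clone"/>
    <!-- Stops run in the reverse of this chain: cinder first, qpidd last,
         matching the transition above (assumed constraint form). -->
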
diff --git a/cts/scheduler/summary/cloned-group.summary b/cts/scheduler/summary/cloned-group.summary
new file mode 100644
index 0000000..c584972
--- /dev/null
+++ b/cts/scheduler/summary/cloned-group.summary
@@ -0,0 +1,48 @@
+Current cluster status:
+ * Node List:
+ * Online: [ webcluster01 ]
+ * OFFLINE: [ webcluster02 ]
+
+ * Full List of Resources:
+ * Clone Set: apache2_clone [grrr]:
+ * Resource Group: grrr:2:
+ * apache2 (ocf:heartbeat:apache): ORPHANED Started webcluster01
+ * mysql-proxy (lsb:mysql-proxy): ORPHANED Started webcluster01
+ * Started: [ webcluster01 ]
+ * Stopped: [ webcluster02 ]
+
+Transition Summary:
+ * Restart apache2:0 ( webcluster01 ) due to resource definition change
+ * Restart mysql-proxy:0 ( webcluster01 ) due to required apache2:0 start
+ * Stop apache2:2 ( webcluster01 ) due to node availability
+ * Stop mysql-proxy:2 ( webcluster01 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: apache2_clone_stop_0
+ * Pseudo action: grrr:0_stop_0
+ * Resource action: mysql-proxy:1 stop on webcluster01
+ * Pseudo action: grrr:2_stop_0
+ * Resource action: mysql-proxy:0 stop on webcluster01
+ * Resource action: apache2:1 stop on webcluster01
+ * Resource action: apache2:0 stop on webcluster01
+ * Pseudo action: grrr:0_stopped_0
+ * Pseudo action: grrr:2_stopped_0
+ * Pseudo action: apache2_clone_stopped_0
+ * Pseudo action: apache2_clone_start_0
+ * Pseudo action: grrr:0_start_0
+ * Resource action: apache2:1 start on webcluster01
+ * Resource action: apache2:1 monitor=10000 on webcluster01
+ * Resource action: mysql-proxy:1 start on webcluster01
+ * Resource action: mysql-proxy:1 monitor=10000 on webcluster01
+ * Pseudo action: grrr:0_running_0
+ * Pseudo action: apache2_clone_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ webcluster01 ]
+ * OFFLINE: [ webcluster02 ]
+
+ * Full List of Resources:
+ * Clone Set: apache2_clone [grrr]:
+ * Started: [ webcluster01 ]
+ * Stopped: [ webcluster02 ]
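
In cloned-group, the instance grrr:2 is ORPHANED: still present in the cluster status but no longer in the configuration, so it is stopped (orphans are removed by default via stop-orphan-resources=true). A plausible cause, consistent with the two-node cluster here, is clone-max having been lowered; this is an inference, not shown in the summary. The member resource types are taken from the status above:

    <clone id="apache2_clone">
      <meta_attributes id="apache2_clone-meta_attributes">
        <!-- clone-max="2" (inferred): with only two instances allowed,
             the previously running grrr:2 becomes an orphan. -->
        <nvpair id="apache2_clone-clone-max" name="clone-max" value="2"/>
      </meta_attributes>
      <group id="grrr">
        <primitive id="apache2" class="ocf" provider="heartbeat" type="apache"/>
        <primitive id="mysql-proxy" class="lsb" type="mysql-proxy"/>
      </group>
    </clone>
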
diff --git a/cts/scheduler/summary/cloned_start_one.summary b/cts/scheduler/summary/cloned_start_one.summary
new file mode 100644
index 0000000..f3bed71
--- /dev/null
+++ b/cts/scheduler/summary/cloned_start_one.summary
@@ -0,0 +1,42 @@
+Current cluster status:
+ * Node List:
+ * Node c7auto2: standby
+ * Node c7auto3: standby (with active resources)
+ * Online: [ c7auto1 c7auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto1
+ * Clone Set: FAKECLONE-clone [FAKECLONE]:
+ * Stopped: [ c7auto1 c7auto2 c7auto3 c7auto4 ]
+ * Clone Set: FAKECLONE2-clone [FAKECLONE2]:
+ * Started: [ c7auto3 c7auto4 ]
+ * Stopped: [ c7auto1 c7auto2 ]
+
+Transition Summary:
+ * Start FAKECLONE:0 ( c7auto1 )
+ * Stop FAKECLONE2:0 ( c7auto3 ) due to node availability
+ * Stop FAKECLONE2:1 ( c7auto4 ) due to unrunnable clone-one-or-more:order-FAKECLONE-clone-FAKECLONE2-clone-mandatory
+
+Executing Cluster Transition:
+ * Pseudo action: FAKECLONE-clone_start_0
+ * Pseudo action: FAKECLONE2-clone_stop_0
+ * Resource action: FAKECLONE start on c7auto1
+ * Pseudo action: FAKECLONE-clone_running_0
+ * Resource action: FAKECLONE2 stop on c7auto3
+ * Resource action: FAKECLONE2 stop on c7auto4
+ * Pseudo action: FAKECLONE2-clone_stopped_0
+ * Resource action: FAKECLONE monitor=10000 on c7auto1
+
+Revised Cluster Status:
+ * Node List:
+ * Node c7auto2: standby
+ * Node c7auto3: standby
+ * Online: [ c7auto1 c7auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto1
+ * Clone Set: FAKECLONE-clone [FAKECLONE]:
+ * Started: [ c7auto1 ]
+ * Stopped: [ c7auto2 c7auto3 c7auto4 ]
+ * Clone Set: FAKECLONE2-clone [FAKECLONE2]:
+ * Stopped: [ c7auto1 c7auto2 c7auto3 c7auto4 ]
diff --git a/cts/scheduler/summary/cloned_start_two.summary b/cts/scheduler/summary/cloned_start_two.summary
new file mode 100644
index 0000000..d863fb2
--- /dev/null
+++ b/cts/scheduler/summary/cloned_start_two.summary
@@ -0,0 +1,43 @@
+Current cluster status:
+ * Node List:
+ * Node c7auto3: standby (with active resources)
+ * Online: [ c7auto1 c7auto2 c7auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto1
+ * Clone Set: FAKECLONE-clone [FAKECLONE]:
+ * Stopped: [ c7auto1 c7auto2 c7auto3 c7auto4 ]
+ * Clone Set: FAKECLONE2-clone [FAKECLONE2]:
+ * Started: [ c7auto3 c7auto4 ]
+ * Stopped: [ c7auto1 c7auto2 ]
+
+Transition Summary:
+ * Start FAKECLONE:0 ( c7auto2 )
+ * Start FAKECLONE:1 ( c7auto1 )
+ * Stop FAKECLONE2:0 ( c7auto3 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: FAKECLONE-clone_start_0
+ * Pseudo action: FAKECLONE2-clone_stop_0
+ * Resource action: FAKECLONE start on c7auto2
+ * Resource action: FAKECLONE start on c7auto1
+ * Pseudo action: FAKECLONE-clone_running_0
+ * Resource action: FAKECLONE2 stop on c7auto3
+ * Pseudo action: FAKECLONE2-clone_stopped_0
+ * Pseudo action: clone-one-or-more:order-FAKECLONE-clone-FAKECLONE2-clone-mandatory
+ * Resource action: FAKECLONE monitor=10000 on c7auto2
+ * Resource action: FAKECLONE monitor=10000 on c7auto1
+
+Revised Cluster Status:
+ * Node List:
+ * Node c7auto3: standby
+ * Online: [ c7auto1 c7auto2 c7auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto1
+ * Clone Set: FAKECLONE-clone [FAKECLONE]:
+ * Started: [ c7auto1 c7auto2 ]
+ * Stopped: [ c7auto3 c7auto4 ]
+ * Clone Set: FAKECLONE2-clone [FAKECLONE2]:
+ * Started: [ c7auto4 ]
+ * Stopped: [ c7auto1 c7auto2 c7auto3 ]
diff --git a/cts/scheduler/summary/cloned_stop_one.summary b/cts/scheduler/summary/cloned_stop_one.summary
new file mode 100644
index 0000000..539016f
--- /dev/null
+++ b/cts/scheduler/summary/cloned_stop_one.summary
@@ -0,0 +1,41 @@
+Current cluster status:
+ * Node List:
+ * Node c7auto3: standby (with active resources)
+ * Online: [ c7auto1 c7auto2 c7auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto1
+ * Clone Set: FAKECLONE-clone [FAKECLONE]:
+ * Started: [ c7auto1 c7auto2 c7auto3 ]
+ * Stopped: [ c7auto4 ]
+ * Clone Set: FAKECLONE2-clone [FAKECLONE2]:
+ * Started: [ c7auto3 c7auto4 ]
+ * Stopped: [ c7auto1 c7auto2 ]
+
+Transition Summary:
+ * Stop FAKECLONE:2 ( c7auto3 ) due to node availability
+ * Stop FAKECLONE2:0 ( c7auto3 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: FAKECLONE2-clone_stop_0
+ * Resource action: FAKECLONE2 stop on c7auto3
+ * Pseudo action: FAKECLONE2-clone_stopped_0
+ * Pseudo action: FAKECLONE-clone_stop_0
+ * Resource action: FAKECLONE stop on c7auto3
+ * Pseudo action: FAKECLONE-clone_stopped_0
+ * Pseudo action: FAKECLONE-clone_start_0
+ * Pseudo action: FAKECLONE-clone_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Node c7auto3: standby
+ * Online: [ c7auto1 c7auto2 c7auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto1
+ * Clone Set: FAKECLONE-clone [FAKECLONE]:
+ * Started: [ c7auto1 c7auto2 ]
+ * Stopped: [ c7auto3 c7auto4 ]
+ * Clone Set: FAKECLONE2-clone [FAKECLONE2]:
+ * Started: [ c7auto4 ]
+ * Stopped: [ c7auto1 c7auto2 c7auto3 ]
diff --git a/cts/scheduler/summary/cloned_stop_two.summary b/cts/scheduler/summary/cloned_stop_two.summary
new file mode 100644
index 0000000..53795f5
--- /dev/null
+++ b/cts/scheduler/summary/cloned_stop_two.summary
@@ -0,0 +1,46 @@
+Current cluster status:
+ * Node List:
+ * Node c7auto2: standby (with active resources)
+ * Node c7auto3: standby (with active resources)
+ * Online: [ c7auto1 c7auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto1
+ * Clone Set: FAKECLONE-clone [FAKECLONE]:
+ * Started: [ c7auto1 c7auto2 c7auto3 ]
+ * Stopped: [ c7auto4 ]
+ * Clone Set: FAKECLONE2-clone [FAKECLONE2]:
+ * Started: [ c7auto3 c7auto4 ]
+ * Stopped: [ c7auto1 c7auto2 ]
+
+Transition Summary:
+ * Stop FAKECLONE:1 ( c7auto2 ) due to node availability
+ * Stop FAKECLONE:2 ( c7auto3 ) due to node availability
+ * Stop FAKECLONE2:0 ( c7auto3 ) due to node availability
+ * Stop FAKECLONE2:1 ( c7auto4 ) due to unrunnable clone-one-or-more:order-FAKECLONE-clone-FAKECLONE2-clone-mandatory
+
+Executing Cluster Transition:
+ * Pseudo action: FAKECLONE2-clone_stop_0
+ * Resource action: FAKECLONE2 stop on c7auto3
+ * Resource action: FAKECLONE2 stop on c7auto4
+ * Pseudo action: FAKECLONE2-clone_stopped_0
+ * Pseudo action: FAKECLONE-clone_stop_0
+ * Resource action: FAKECLONE stop on c7auto2
+ * Resource action: FAKECLONE stop on c7auto3
+ * Pseudo action: FAKECLONE-clone_stopped_0
+ * Pseudo action: FAKECLONE-clone_start_0
+ * Pseudo action: FAKECLONE-clone_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Node c7auto2: standby
+ * Node c7auto3: standby
+ * Online: [ c7auto1 c7auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto1
+ * Clone Set: FAKECLONE-clone [FAKECLONE]:
+ * Started: [ c7auto1 ]
+ * Stopped: [ c7auto2 c7auto3 c7auto4 ]
+ * Clone Set: FAKECLONE2-clone [FAKECLONE2]:
+ * Stopped: [ c7auto1 c7auto2 c7auto3 c7auto4 ]
diff --git a/cts/scheduler/summary/cluster-specific-params.summary b/cts/scheduler/summary/cluster-specific-params.summary
new file mode 100644
index 0000000..8a1d5e4
--- /dev/null
+++ b/cts/scheduler/summary/cluster-specific-params.summary
@@ -0,0 +1,24 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc1 monitor=10000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
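
A reading aid that applies to all of these summaries: a "monitor" resource action with no interval (as on rsc1 above, before its start) is a one-time probe checking whether the resource is already active on that node, while "monitor=10000" is the recurring 10-second monitor configured on the resource. A sketch of the operation definition behind the recurring action; the primitive matches the summary, and the op ID is an assumption:

    <primitive id="rsc1" class="ocf" provider="pacemaker" type="Dummy">
      <operations>
        <!-- interval="10s" produces the "monitor=10000" (milliseconds)
             actions seen in the transitions (assumed op ID). -->
        <op id="rsc1-monitor-10s" name="monitor" interval="10s"/>
      </operations>
    </primitive>
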
diff --git a/cts/scheduler/summary/colo_promoted_w_native.summary b/cts/scheduler/summary/colo_promoted_w_native.summary
new file mode 100644
index 0000000..ad67078
--- /dev/null
+++ b/cts/scheduler/summary/colo_promoted_w_native.summary
@@ -0,0 +1,49 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Started node1
+ * Clone Set: MS_RSC [MS_RSC_NATIVE] (promotable):
+ * Promoted: [ node2 ]
+ * Unpromoted: [ node1 ]
+
+Transition Summary:
+ * Demote MS_RSC_NATIVE:0 ( Promoted -> Unpromoted node2 )
+ * Promote MS_RSC_NATIVE:1 ( Unpromoted -> Promoted node1 )
+
+Executing Cluster Transition:
+ * Resource action: MS_RSC_NATIVE:1 cancel=15000 on node1
+ * Pseudo action: MS_RSC_pre_notify_demote_0
+ * Resource action: MS_RSC_NATIVE:0 notify on node2
+ * Resource action: MS_RSC_NATIVE:1 notify on node1
+ * Pseudo action: MS_RSC_confirmed-pre_notify_demote_0
+ * Pseudo action: MS_RSC_demote_0
+ * Resource action: MS_RSC_NATIVE:0 demote on node2
+ * Pseudo action: MS_RSC_demoted_0
+ * Pseudo action: MS_RSC_post_notify_demoted_0
+ * Resource action: MS_RSC_NATIVE:0 notify on node2
+ * Resource action: MS_RSC_NATIVE:1 notify on node1
+ * Pseudo action: MS_RSC_confirmed-post_notify_demoted_0
+ * Pseudo action: MS_RSC_pre_notify_promote_0
+ * Resource action: MS_RSC_NATIVE:0 notify on node2
+ * Resource action: MS_RSC_NATIVE:1 notify on node1
+ * Pseudo action: MS_RSC_confirmed-pre_notify_promote_0
+ * Pseudo action: MS_RSC_promote_0
+ * Resource action: MS_RSC_NATIVE:1 promote on node1
+ * Pseudo action: MS_RSC_promoted_0
+ * Pseudo action: MS_RSC_post_notify_promoted_0
+ * Resource action: MS_RSC_NATIVE:0 notify on node2
+ * Resource action: MS_RSC_NATIVE:1 notify on node1
+ * Pseudo action: MS_RSC_confirmed-post_notify_promoted_0
+ * Resource action: MS_RSC_NATIVE:0 monitor=15000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Started node1
+ * Clone Set: MS_RSC [MS_RSC_NATIVE] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
diff --git a/cts/scheduler/summary/colo_unpromoted_w_native.summary b/cts/scheduler/summary/colo_unpromoted_w_native.summary
new file mode 100644
index 0000000..42df383
--- /dev/null
+++ b/cts/scheduler/summary/colo_unpromoted_w_native.summary
@@ -0,0 +1,53 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Started node1
+ * Clone Set: MS_RSC [MS_RSC_NATIVE] (promotable):
+ * Promoted: [ node2 ]
+ * Unpromoted: [ node1 ]
+
+Transition Summary:
+ * Move A ( node1 -> node2 )
+ * Demote MS_RSC_NATIVE:0 ( Promoted -> Unpromoted node2 )
+ * Promote MS_RSC_NATIVE:1 ( Unpromoted -> Promoted node1 )
+
+Executing Cluster Transition:
+ * Resource action: A stop on node1
+ * Resource action: MS_RSC_NATIVE:1 cancel=15000 on node1
+ * Pseudo action: MS_RSC_pre_notify_demote_0
+ * Resource action: A start on node2
+ * Resource action: MS_RSC_NATIVE:0 notify on node2
+ * Resource action: MS_RSC_NATIVE:1 notify on node1
+ * Pseudo action: MS_RSC_confirmed-pre_notify_demote_0
+ * Pseudo action: MS_RSC_demote_0
+ * Resource action: A monitor=10000 on node2
+ * Resource action: MS_RSC_NATIVE:0 demote on node2
+ * Pseudo action: MS_RSC_demoted_0
+ * Pseudo action: MS_RSC_post_notify_demoted_0
+ * Resource action: MS_RSC_NATIVE:0 notify on node2
+ * Resource action: MS_RSC_NATIVE:1 notify on node1
+ * Pseudo action: MS_RSC_confirmed-post_notify_demoted_0
+ * Pseudo action: MS_RSC_pre_notify_promote_0
+ * Resource action: MS_RSC_NATIVE:0 notify on node2
+ * Resource action: MS_RSC_NATIVE:1 notify on node1
+ * Pseudo action: MS_RSC_confirmed-pre_notify_promote_0
+ * Pseudo action: MS_RSC_promote_0
+ * Resource action: MS_RSC_NATIVE:1 promote on node1
+ * Pseudo action: MS_RSC_promoted_0
+ * Pseudo action: MS_RSC_post_notify_promoted_0
+ * Resource action: MS_RSC_NATIVE:0 notify on node2
+ * Resource action: MS_RSC_NATIVE:1 notify on node1
+ * Pseudo action: MS_RSC_confirmed-post_notify_promoted_0
+ * Resource action: MS_RSC_NATIVE:0 monitor=15000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Started node2
+ * Clone Set: MS_RSC [MS_RSC_NATIVE] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
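
The colo_promoted_w_native and colo_unpromoted_w_native pair test colocation against a specific clone role: in the first, the Promoted instance is pulled to A's node (demote on node2, promote on node1, A unmoved); in the second, A ends up with the Unpromoted instance instead. Role-based colocation is expressed with the rsc-role / with-rsc-role attributes; a sketch for the promoted case, with the direction of dependency and all IDs being assumptions:

    <rsc_colocation id="colocation-MS_RSC-A" score="INFINITY"
                    rsc="MS_RSC" rsc-role="Promoted" with-rsc="A"/>
    <!-- Assumed: the Promoted instance of MS_RSC must run where A runs;
         the unpromoted variant would instead tie A to the node holding
         the Unpromoted role. -->
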
diff --git a/cts/scheduler/summary/coloc-attr.summary b/cts/scheduler/summary/coloc-attr.summary
new file mode 100644
index 0000000..db3fd8e
--- /dev/null
+++ b/cts/scheduler/summary/coloc-attr.summary
@@ -0,0 +1,31 @@
+Current cluster status:
+ * Node List:
+ * Online: [ power720-1 power720-2 power720-3 power720-4 ]
+
+ * Full List of Resources:
+ * Resource Group: group_test1:
+ * resource_t11 (lsb:nfsserver): Stopped
+ * Resource Group: group_test2:
+ * resource_t21 (ocf:heartbeat:Dummy): Stopped
+
+Transition Summary:
+ * Start resource_t11 ( power720-3 )
+ * Start resource_t21 ( power720-4 )
+
+Executing Cluster Transition:
+ * Pseudo action: group_test1_start_0
+ * Resource action: resource_t11 start on power720-3
+ * Pseudo action: group_test1_running_0
+ * Pseudo action: group_test2_start_0
+ * Resource action: resource_t21 start on power720-4
+ * Pseudo action: group_test2_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ power720-1 power720-2 power720-3 power720-4 ]
+
+ * Full List of Resources:
+ * Resource Group: group_test1:
+ * resource_t11 (lsb:nfsserver): Started power720-3
+ * Resource Group: group_test2:
+ * resource_t21 (ocf:heartbeat:Dummy): Started power720-4
diff --git a/cts/scheduler/summary/coloc-clone-stays-active.summary b/cts/scheduler/summary/coloc-clone-stays-active.summary
new file mode 100644
index 0000000..cb212e1
--- /dev/null
+++ b/cts/scheduler/summary/coloc-clone-stays-active.summary
@@ -0,0 +1,209 @@
+9 of 87 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ s01-0 s01-1 ]
+
+ * Full List of Resources:
+ * stonith-s01-0 (stonith:external/ipmi): Started s01-1
+ * stonith-s01-1 (stonith:external/ipmi): Started s01-0
+ * Resource Group: iscsi-pool-0-target-all:
+ * iscsi-pool-0-target (ocf:vds-ok:iSCSITarget): Started s01-0
+ * iscsi-pool-0-lun-1 (ocf:vds-ok:iSCSILogicalUnit): Started s01-0
+ * Resource Group: iscsi-pool-0-vips:
+ * vip-235 (ocf:heartbeat:IPaddr2): Started s01-0
+ * vip-236 (ocf:heartbeat:IPaddr2): Started s01-0
+ * Resource Group: iscsi-pool-1-target-all:
+ * iscsi-pool-1-target (ocf:vds-ok:iSCSITarget): Started s01-1
+ * iscsi-pool-1-lun-1 (ocf:vds-ok:iSCSILogicalUnit): Started s01-1
+ * Resource Group: iscsi-pool-1-vips:
+ * vip-237 (ocf:heartbeat:IPaddr2): Started s01-1
+ * vip-238 (ocf:heartbeat:IPaddr2): Started s01-1
+ * Clone Set: ms-drbd-pool-0 [drbd-pool-0] (promotable):
+ * Promoted: [ s01-0 ]
+ * Unpromoted: [ s01-1 ]
+ * Clone Set: ms-drbd-pool-1 [drbd-pool-1] (promotable):
+ * Promoted: [ s01-1 ]
+ * Unpromoted: [ s01-0 ]
+ * Clone Set: ms-iscsi-pool-0-vips-fw [iscsi-pool-0-vips-fw] (promotable):
+ * Promoted: [ s01-0 ]
+ * Unpromoted: [ s01-1 ]
+ * Clone Set: ms-iscsi-pool-1-vips-fw [iscsi-pool-1-vips-fw] (promotable):
+ * Promoted: [ s01-1 ]
+ * Unpromoted: [ s01-0 ]
+ * Clone Set: cl-o2cb [o2cb] (disabled):
+ * Stopped (disabled): [ s01-0 s01-1 ]
+ * Clone Set: ms-drbd-s01-service [drbd-s01-service] (promotable):
+ * Promoted: [ s01-0 s01-1 ]
+ * Clone Set: cl-s01-service-fs [s01-service-fs]:
+ * Started: [ s01-0 s01-1 ]
+ * Clone Set: cl-ietd [ietd]:
+ * Started: [ s01-0 s01-1 ]
+ * Clone Set: cl-dhcpd [dhcpd] (disabled):
+ * Stopped (disabled): [ s01-0 s01-1 ]
+ * Resource Group: http-server:
+ * vip-233 (ocf:heartbeat:IPaddr2): Started s01-0
+ * nginx (lsb:nginx): Stopped (disabled)
+ * Clone Set: ms-drbd-s01-logs [drbd-s01-logs] (promotable):
+ * Promoted: [ s01-0 s01-1 ]
+ * Clone Set: cl-s01-logs-fs [s01-logs-fs]:
+ * Started: [ s01-0 s01-1 ]
+ * Resource Group: syslog-server:
+ * vip-234 (ocf:heartbeat:IPaddr2): Started s01-1
+ * syslog-ng (ocf:heartbeat:syslog-ng): Started s01-1
+ * Resource Group: tftp-server:
+ * vip-232 (ocf:heartbeat:IPaddr2): Stopped
+ * tftpd (ocf:heartbeat:Xinetd): Stopped
+ * Clone Set: cl-xinetd [xinetd]:
+ * Started: [ s01-0 s01-1 ]
+ * Clone Set: cl-ospf-routing [ospf-routing]:
+ * Started: [ s01-0 s01-1 ]
+ * Clone Set: connected-outer [ping-bmc-and-switch]:
+ * Started: [ s01-0 s01-1 ]
+ * Resource Group: iscsi-vds-dom0-stateless-0-target-all (disabled):
+ * iscsi-vds-dom0-stateless-0-target (ocf:vds-ok:iSCSITarget): Stopped (disabled)
+ * iscsi-vds-dom0-stateless-0-lun-1 (ocf:vds-ok:iSCSILogicalUnit): Stopped (disabled)
+ * Resource Group: iscsi-vds-dom0-stateless-0-vips:
+ * vip-227 (ocf:heartbeat:IPaddr2): Stopped
+ * vip-228 (ocf:heartbeat:IPaddr2): Stopped
+ * Clone Set: ms-drbd-vds-dom0-stateless-0 [drbd-vds-dom0-stateless-0] (promotable):
+ * Promoted: [ s01-0 ]
+ * Unpromoted: [ s01-1 ]
+ * Clone Set: ms-iscsi-vds-dom0-stateless-0-vips-fw [iscsi-vds-dom0-stateless-0-vips-fw] (promotable):
+ * Unpromoted: [ s01-0 s01-1 ]
+ * Clone Set: cl-dlm [dlm]:
+ * Started: [ s01-0 s01-1 ]
+ * Clone Set: ms-drbd-vds-tftpboot [drbd-vds-tftpboot] (promotable):
+ * Promoted: [ s01-0 s01-1 ]
+ * Clone Set: cl-vds-tftpboot-fs [vds-tftpboot-fs] (disabled):
+ * Stopped (disabled): [ s01-0 s01-1 ]
+ * Clone Set: cl-gfs2 [gfs2]:
+ * Started: [ s01-0 s01-1 ]
+ * Clone Set: ms-drbd-vds-http [drbd-vds-http] (promotable):
+ * Promoted: [ s01-0 s01-1 ]
+ * Clone Set: cl-vds-http-fs [vds-http-fs]:
+ * Started: [ s01-0 s01-1 ]
+ * Clone Set: cl-clvmd [clvmd]:
+ * Started: [ s01-0 s01-1 ]
+ * Clone Set: ms-drbd-s01-vm-data [drbd-s01-vm-data] (promotable):
+ * Promoted: [ s01-0 s01-1 ]
+ * Clone Set: cl-s01-vm-data-metadata-fs [s01-vm-data-metadata-fs]:
+ * Started: [ s01-0 s01-1 ]
+ * Clone Set: cl-vg-s01-vm-data [vg-s01-vm-data]:
+ * Started: [ s01-0 s01-1 ]
+ * mgmt-vm (ocf:vds-ok:VirtualDomain): Started s01-0
+ * Clone Set: cl-drbdlinks-s01-service [drbdlinks-s01-service]:
+ * Started: [ s01-0 s01-1 ]
+ * Clone Set: cl-libvirtd [libvirtd]:
+ * Started: [ s01-0 s01-1 ]
+ * Clone Set: cl-s01-vm-data-storage-pool [s01-vm-data-storage-pool]:
+ * Started: [ s01-0 s01-1 ]
+
+Transition Summary:
+ * Migrate mgmt-vm ( s01-0 -> s01-1 )
+
+Executing Cluster Transition:
+ * Resource action: mgmt-vm migrate_to on s01-0
+ * Resource action: mgmt-vm migrate_from on s01-1
+ * Resource action: mgmt-vm stop on s01-0
+ * Pseudo action: mgmt-vm_start_0
+ * Resource action: mgmt-vm monitor=10000 on s01-1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ s01-0 s01-1 ]
+
+ * Full List of Resources:
+ * stonith-s01-0 (stonith:external/ipmi): Started s01-1
+ * stonith-s01-1 (stonith:external/ipmi): Started s01-0
+ * Resource Group: iscsi-pool-0-target-all:
+ * iscsi-pool-0-target (ocf:vds-ok:iSCSITarget): Started s01-0
+ * iscsi-pool-0-lun-1 (ocf:vds-ok:iSCSILogicalUnit): Started s01-0
+ * Resource Group: iscsi-pool-0-vips:
+ * vip-235 (ocf:heartbeat:IPaddr2): Started s01-0
+ * vip-236 (ocf:heartbeat:IPaddr2): Started s01-0
+ * Resource Group: iscsi-pool-1-target-all:
+ * iscsi-pool-1-target (ocf:vds-ok:iSCSITarget): Started s01-1
+ * iscsi-pool-1-lun-1 (ocf:vds-ok:iSCSILogicalUnit): Started s01-1
+ * Resource Group: iscsi-pool-1-vips:
+ * vip-237 (ocf:heartbeat:IPaddr2): Started s01-1
+ * vip-238 (ocf:heartbeat:IPaddr2): Started s01-1
+ * Clone Set: ms-drbd-pool-0 [drbd-pool-0] (promotable):
+ * Promoted: [ s01-0 ]
+ * Unpromoted: [ s01-1 ]
+ * Clone Set: ms-drbd-pool-1 [drbd-pool-1] (promotable):
+ * Promoted: [ s01-1 ]
+ * Unpromoted: [ s01-0 ]
+ * Clone Set: ms-iscsi-pool-0-vips-fw [iscsi-pool-0-vips-fw] (promotable):
+ * Promoted: [ s01-0 ]
+ * Unpromoted: [ s01-1 ]
+ * Clone Set: ms-iscsi-pool-1-vips-fw [iscsi-pool-1-vips-fw] (promotable):
+ * Promoted: [ s01-1 ]
+ * Unpromoted: [ s01-0 ]
+ * Clone Set: cl-o2cb [o2cb] (disabled):
+ * Stopped (disabled): [ s01-0 s01-1 ]
+ * Clone Set: ms-drbd-s01-service [drbd-s01-service] (promotable):
+ * Promoted: [ s01-0 s01-1 ]
+ * Clone Set: cl-s01-service-fs [s01-service-fs]:
+ * Started: [ s01-0 s01-1 ]
+ * Clone Set: cl-ietd [ietd]:
+ * Started: [ s01-0 s01-1 ]
+ * Clone Set: cl-dhcpd [dhcpd] (disabled):
+ * Stopped (disabled): [ s01-0 s01-1 ]
+ * Resource Group: http-server:
+ * vip-233 (ocf:heartbeat:IPaddr2): Started s01-0
+ * nginx (lsb:nginx): Stopped (disabled)
+ * Clone Set: ms-drbd-s01-logs [drbd-s01-logs] (promotable):
+ * Promoted: [ s01-0 s01-1 ]
+ * Clone Set: cl-s01-logs-fs [s01-logs-fs]:
+ * Started: [ s01-0 s01-1 ]
+ * Resource Group: syslog-server:
+ * vip-234 (ocf:heartbeat:IPaddr2): Started s01-1
+ * syslog-ng (ocf:heartbeat:syslog-ng): Started s01-1
+ * Resource Group: tftp-server:
+ * vip-232 (ocf:heartbeat:IPaddr2): Stopped
+ * tftpd (ocf:heartbeat:Xinetd): Stopped
+ * Clone Set: cl-xinetd [xinetd]:
+ * Started: [ s01-0 s01-1 ]
+ * Clone Set: cl-ospf-routing [ospf-routing]:
+ * Started: [ s01-0 s01-1 ]
+ * Clone Set: connected-outer [ping-bmc-and-switch]:
+ * Started: [ s01-0 s01-1 ]
+ * Resource Group: iscsi-vds-dom0-stateless-0-target-all (disabled):
+ * iscsi-vds-dom0-stateless-0-target (ocf:vds-ok:iSCSITarget): Stopped (disabled)
+ * iscsi-vds-dom0-stateless-0-lun-1 (ocf:vds-ok:iSCSILogicalUnit): Stopped (disabled)
+ * Resource Group: iscsi-vds-dom0-stateless-0-vips:
+ * vip-227 (ocf:heartbeat:IPaddr2): Stopped
+ * vip-228 (ocf:heartbeat:IPaddr2): Stopped
+ * Clone Set: ms-drbd-vds-dom0-stateless-0 [drbd-vds-dom0-stateless-0] (promotable):
+ * Promoted: [ s01-0 ]
+ * Unpromoted: [ s01-1 ]
+ * Clone Set: ms-iscsi-vds-dom0-stateless-0-vips-fw [iscsi-vds-dom0-stateless-0-vips-fw] (promotable):
+ * Unpromoted: [ s01-0 s01-1 ]
+ * Clone Set: cl-dlm [dlm]:
+ * Started: [ s01-0 s01-1 ]
+ * Clone Set: ms-drbd-vds-tftpboot [drbd-vds-tftpboot] (promotable):
+ * Promoted: [ s01-0 s01-1 ]
+ * Clone Set: cl-vds-tftpboot-fs [vds-tftpboot-fs] (disabled):
+ * Stopped (disabled): [ s01-0 s01-1 ]
+ * Clone Set: cl-gfs2 [gfs2]:
+ * Started: [ s01-0 s01-1 ]
+ * Clone Set: ms-drbd-vds-http [drbd-vds-http] (promotable):
+ * Promoted: [ s01-0 s01-1 ]
+ * Clone Set: cl-vds-http-fs [vds-http-fs]:
+ * Started: [ s01-0 s01-1 ]
+ * Clone Set: cl-clvmd [clvmd]:
+ * Started: [ s01-0 s01-1 ]
+ * Clone Set: ms-drbd-s01-vm-data [drbd-s01-vm-data] (promotable):
+ * Promoted: [ s01-0 s01-1 ]
+ * Clone Set: cl-s01-vm-data-metadata-fs [s01-vm-data-metadata-fs]:
+ * Started: [ s01-0 s01-1 ]
+ * Clone Set: cl-vg-s01-vm-data [vg-s01-vm-data]:
+ * Started: [ s01-0 s01-1 ]
+ * mgmt-vm (ocf:vds-ok:VirtualDomain): Started s01-1
+ * Clone Set: cl-drbdlinks-s01-service [drbdlinks-s01-service]:
+ * Started: [ s01-0 s01-1 ]
+ * Clone Set: cl-libvirtd [libvirtd]:
+ * Started: [ s01-0 s01-1 ]
+ * Clone Set: cl-s01-vm-data-storage-pool [s01-vm-data-storage-pool]:
+ * Started: [ s01-0 s01-1 ]
diff --git a/cts/scheduler/summary/coloc-dependee-should-move.summary b/cts/scheduler/summary/coloc-dependee-should-move.summary
new file mode 100644
index 0000000..7df3f6e
--- /dev/null
+++ b/cts/scheduler/summary/coloc-dependee-should-move.summary
@@ -0,0 +1,61 @@
+Using the original execution date of: 2019-10-22 20:53:06Z
+Current cluster status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-2
+ * FencingFail (stonith:fence_dummy): Started rhel7-1
+ * Resource Group: dummy1:
+ * dummy1a (ocf:pacemaker:Dummy): Started rhel7-3
+ * dummy1b (ocf:heartbeat:Dummy): Started rhel7-3
+ * dummy1c (ocf:heartbeat:Dummy): Started rhel7-3
+ * dummy1d (ocf:heartbeat:Dummy): Started rhel7-3
+ * Resource Group: dummy2:
+ * dummy2a (ocf:pacemaker:Dummy): Started rhel7-4
+ * dummy2b (ocf:heartbeat:Dummy): Started rhel7-4
+ * dummy2c (ocf:heartbeat:Dummy): Started rhel7-4
+ * dummy2d (ocf:heartbeat:Dummy): Started rhel7-4
+
+Transition Summary:
+ * Move dummy2a ( rhel7-4 -> rhel7-3 )
+ * Move dummy2b ( rhel7-4 -> rhel7-3 )
+ * Move dummy2c ( rhel7-4 -> rhel7-3 )
+ * Move dummy2d ( rhel7-4 -> rhel7-3 )
+
+Executing Cluster Transition:
+ * Pseudo action: dummy2_stop_0
+ * Resource action: dummy2d stop on rhel7-4
+ * Resource action: dummy2c stop on rhel7-4
+ * Resource action: dummy2b stop on rhel7-4
+ * Resource action: dummy2a stop on rhel7-4
+ * Pseudo action: dummy2_stopped_0
+ * Pseudo action: dummy2_start_0
+ * Resource action: dummy2a start on rhel7-3
+ * Resource action: dummy2b start on rhel7-3
+ * Resource action: dummy2c start on rhel7-3
+ * Resource action: dummy2d start on rhel7-3
+ * Pseudo action: dummy2_running_0
+ * Resource action: dummy2a monitor=10000 on rhel7-3
+ * Resource action: dummy2b monitor=10000 on rhel7-3
+ * Resource action: dummy2c monitor=10000 on rhel7-3
+ * Resource action: dummy2d monitor=10000 on rhel7-3
+Using the original execution date of: 2019-10-22 20:53:06Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-2
+ * FencingFail (stonith:fence_dummy): Started rhel7-1
+ * Resource Group: dummy1:
+ * dummy1a (ocf:pacemaker:Dummy): Started rhel7-3
+ * dummy1b (ocf:heartbeat:Dummy): Started rhel7-3
+ * dummy1c (ocf:heartbeat:Dummy): Started rhel7-3
+ * dummy1d (ocf:heartbeat:Dummy): Started rhel7-3
+ * Resource Group: dummy2:
+ * dummy2a (ocf:pacemaker:Dummy): Started rhel7-3
+ * dummy2b (ocf:heartbeat:Dummy): Started rhel7-3
+ * dummy2c (ocf:heartbeat:Dummy): Started rhel7-3
+ * dummy2d (ocf:heartbeat:Dummy): Started rhel7-3
diff --git a/cts/scheduler/summary/coloc-dependee-should-stay.summary b/cts/scheduler/summary/coloc-dependee-should-stay.summary
new file mode 100644
index 0000000..38eb64d
--- /dev/null
+++ b/cts/scheduler/summary/coloc-dependee-should-stay.summary
@@ -0,0 +1,41 @@
+Using the original execution date of: 2019-10-22 20:53:06Z
+Current cluster status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-2
+ * FencingFail (stonith:fence_dummy): Started rhel7-1
+ * Resource Group: dummy1:
+ * dummy1a (ocf:pacemaker:Dummy): Started rhel7-3
+ * dummy1b (ocf:heartbeat:Dummy): Started rhel7-3
+ * dummy1c (ocf:heartbeat:Dummy): Started rhel7-3
+ * dummy1d (ocf:heartbeat:Dummy): Started rhel7-3
+ * Resource Group: dummy2:
+ * dummy2a (ocf:pacemaker:Dummy): Started rhel7-4
+ * dummy2b (ocf:heartbeat:Dummy): Started rhel7-4
+ * dummy2c (ocf:heartbeat:Dummy): Started rhel7-4
+ * dummy2d (ocf:heartbeat:Dummy): Started rhel7-4
+
+Transition Summary:
+
+Executing Cluster Transition:
+Using the original execution date of: 2019-10-22 20:53:06Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-2
+ * FencingFail (stonith:fence_dummy): Started rhel7-1
+ * Resource Group: dummy1:
+ * dummy1a (ocf:pacemaker:Dummy): Started rhel7-3
+ * dummy1b (ocf:heartbeat:Dummy): Started rhel7-3
+ * dummy1c (ocf:heartbeat:Dummy): Started rhel7-3
+ * dummy1d (ocf:heartbeat:Dummy): Started rhel7-3
+ * Resource Group: dummy2:
+ * dummy2a (ocf:pacemaker:Dummy): Started rhel7-4
+ * dummy2b (ocf:heartbeat:Dummy): Started rhel7-4
+ * dummy2c (ocf:heartbeat:Dummy): Started rhel7-4
+ * dummy2d (ocf:heartbeat:Dummy): Started rhel7-4
diff --git a/cts/scheduler/summary/coloc-group.summary b/cts/scheduler/summary/coloc-group.summary
new file mode 100644
index 0000000..94163e2
--- /dev/null
+++ b/cts/scheduler/summary/coloc-group.summary
@@ -0,0 +1,39 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * Resource Group: group1:
+ * rsc2 (ocf:heartbeat:apache): Stopped
+ * rsc3 (ocf:heartbeat:apache): Stopped
+ * rsc4 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+ * Start rsc2 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node3
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Pseudo action: group1_start_0
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc3 monitor on node3
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc4 monitor on node3
+ * Resource action: rsc4 monitor on node2
+ * Resource action: rsc4 monitor on node1
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node2
+ * Resource Group: group1:
+ * rsc2 (ocf:heartbeat:apache): Started node2
+ * rsc3 (ocf:heartbeat:apache): Stopped
+ * rsc4 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/coloc-intra-set.summary b/cts/scheduler/summary/coloc-intra-set.summary
new file mode 100644
index 0000000..fa95dab
--- /dev/null
+++ b/cts/scheduler/summary/coloc-intra-set.summary
@@ -0,0 +1,38 @@
+Current cluster status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * fencing-sbd (stonith:external/sbd): Started hex-13
+ * dummy0 (ocf:heartbeat:Dummy): Started hex-14
+ * dummy1 (ocf:heartbeat:Dummy): Started hex-13
+ * dummy2 (ocf:heartbeat:Dummy): Started hex-14
+ * dummy3 (ocf:heartbeat:Dummy): Started hex-13
+
+Transition Summary:
+ * Move dummy1 ( hex-13 -> hex-14 )
+ * Move dummy3 ( hex-13 -> hex-14 )
+
+Executing Cluster Transition:
+ * Resource action: dummy1 stop on hex-13
+ * Resource action: dummy3 stop on hex-13
+ * Resource action: d0:0 delete on hex-13
+ * Resource action: o2cb:0 delete on hex-13
+ * Resource action: dummy4 delete on hex-13
+ * Resource action: dlm:0 delete on hex-13
+ * Resource action: ocfs2-3:0 delete on hex-13
+ * Resource action: dummy1 start on hex-14
+ * Resource action: dummy3 start on hex-14
+ * Resource action: dummy1 monitor=15000 on hex-14
+ * Resource action: dummy3 monitor=15000 on hex-14
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * fencing-sbd (stonith:external/sbd): Started hex-13
+ * dummy0 (ocf:heartbeat:Dummy): Started hex-14
+ * dummy1 (ocf:heartbeat:Dummy): Started hex-14
+ * dummy2 (ocf:heartbeat:Dummy): Started hex-14
+ * dummy3 (ocf:heartbeat:Dummy): Started hex-14
diff --git a/cts/scheduler/summary/coloc-list.summary b/cts/scheduler/summary/coloc-list.summary
new file mode 100644
index 0000000..e3ac574
--- /dev/null
+++ b/cts/scheduler/summary/coloc-list.summary
@@ -0,0 +1,42 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * rsc2 (ocf:heartbeat:apache): Stopped
+ * rsc3 (ocf:heartbeat:apache): Stopped
+ * rsc4 (ocf:heartbeat:apache): Stopped
+ * rsc5 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+ * Start rsc2 ( node2 )
+ * Start rsc4 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node3
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc3 monitor on node3
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc4 monitor on node3
+ * Resource action: rsc4 monitor on node2
+ * Resource action: rsc4 monitor on node1
+ * Resource action: rsc5 monitor on node3
+ * Resource action: rsc5 monitor on node1
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 start on node2
+ * Resource action: rsc4 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node2
+ * rsc2 (ocf:heartbeat:apache): Started node2
+ * rsc3 (ocf:heartbeat:apache): Stopped
+ * rsc4 (ocf:heartbeat:apache): Started node2
+ * rsc5 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/coloc-loop.summary b/cts/scheduler/summary/coloc-loop.summary
new file mode 100644
index 0000000..9d11ab0
--- /dev/null
+++ b/cts/scheduler/summary/coloc-loop.summary
@@ -0,0 +1,36 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * rsc2 (ocf:heartbeat:apache): Stopped
+ * rsc3 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+ * Start rsc2 ( node1 )
+ * Start rsc3 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node3
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node3
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc3 monitor on node3
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 start on node1
+ * Resource action: rsc3 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * rsc3 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/coloc-many-one.summary b/cts/scheduler/summary/coloc-many-one.summary
new file mode 100644
index 0000000..d83f2c1
--- /dev/null
+++ b/cts/scheduler/summary/coloc-many-one.summary
@@ -0,0 +1,38 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * rsc2 (ocf:heartbeat:apache): Stopped
+ * rsc3 (ocf:heartbeat:apache): Stopped
+ * rsc4 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+ * Start rsc2 ( node2 )
+ * Start rsc4 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node3
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc3 monitor on node3
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc4 monitor on node3
+ * Resource action: rsc4 monitor on node2
+ * Resource action: rsc4 monitor on node1
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 start on node2
+ * Resource action: rsc4 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node2
+ * rsc2 (ocf:heartbeat:apache): Started node2
+ * rsc3 (ocf:heartbeat:apache): Stopped
+ * rsc4 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/coloc-negative-group.summary b/cts/scheduler/summary/coloc-negative-group.summary
new file mode 100644
index 0000000..ed8faa2
--- /dev/null
+++ b/cts/scheduler/summary/coloc-negative-group.summary
@@ -0,0 +1,26 @@
+Current cluster status:
+ * Node List:
+ * Online: [ lenny-a lenny-b ]
+
+ * Full List of Resources:
+ * Resource Group: grp_1:
+ * res_Dummy_1 (ocf:heartbeat:Dummy): Started lenny-b
+ * res_Dummy_2 (ocf:heartbeat:Dummy): Started lenny-b (unmanaged)
+ * res_Dummy_3 (ocf:heartbeat:Dummy): Started lenny-a
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: res_Dummy_1 cancel=10000 on lenny-b
+ * Resource action: res_Dummy_2 cancel=10000 on lenny-b
+ * Resource action: res_Dummy_3 cancel=10000 on lenny-a
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ lenny-a lenny-b ]
+
+ * Full List of Resources:
+ * Resource Group: grp_1:
+ * res_Dummy_1 (ocf:heartbeat:Dummy): Started lenny-b
+ * res_Dummy_2 (ocf:heartbeat:Dummy): Started lenny-b (unmanaged)
+ * res_Dummy_3 (ocf:heartbeat:Dummy): Started lenny-a
diff --git a/cts/scheduler/summary/coloc-unpromoted-anti.summary b/cts/scheduler/summary/coloc-unpromoted-anti.summary
new file mode 100644
index 0000000..a8518d3
--- /dev/null
+++ b/cts/scheduler/summary/coloc-unpromoted-anti.summary
@@ -0,0 +1,48 @@
+Current cluster status:
+ * Node List:
+ * Online: [ pollux sirius ]
+
+ * Full List of Resources:
+ * Clone Set: pingd-clone [pingd-1]:
+ * Started: [ pollux sirius ]
+ * Clone Set: drbd-msr [drbd-r0] (promotable):
+ * Promoted: [ pollux ]
+ * Unpromoted: [ sirius ]
+ * Resource Group: group-1:
+ * fs-1 (ocf:heartbeat:Filesystem): Stopped
+ * ip-198 (ocf:heartbeat:IPaddr2): Stopped
+ * apache (ocf:custom:apache2): Stopped
+ * pollux-fencing (stonith:external/ipmi-soft): Started sirius
+ * sirius-fencing (stonith:external/ipmi-soft): Started pollux
+
+Transition Summary:
+ * Start fs-1 ( pollux )
+ * Start ip-198 ( pollux )
+ * Start apache ( pollux )
+
+Executing Cluster Transition:
+ * Pseudo action: group-1_start_0
+ * Resource action: fs-1 start on pollux
+ * Resource action: ip-198 start on pollux
+ * Resource action: apache start on pollux
+ * Pseudo action: group-1_running_0
+ * Resource action: fs-1 monitor=20000 on pollux
+ * Resource action: ip-198 monitor=30000 on pollux
+ * Resource action: apache monitor=60000 on pollux
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ pollux sirius ]
+
+ * Full List of Resources:
+ * Clone Set: pingd-clone [pingd-1]:
+ * Started: [ pollux sirius ]
+ * Clone Set: drbd-msr [drbd-r0] (promotable):
+ * Promoted: [ pollux ]
+ * Unpromoted: [ sirius ]
+ * Resource Group: group-1:
+ * fs-1 (ocf:heartbeat:Filesystem): Started pollux
+ * ip-198 (ocf:heartbeat:IPaddr2): Started pollux
+ * apache (ocf:custom:apache2): Started pollux
+ * pollux-fencing (stonith:external/ipmi-soft): Started sirius
+ * sirius-fencing (stonith:external/ipmi-soft): Started pollux
diff --git a/cts/scheduler/summary/coloc_fp_logic.summary b/cts/scheduler/summary/coloc_fp_logic.summary
new file mode 100644
index 0000000..7826b88
--- /dev/null
+++ b/cts/scheduler/summary/coloc_fp_logic.summary
@@ -0,0 +1,23 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Started node1
+ * B (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Move A ( node1 -> node2 )
+
+Executing Cluster Transition:
+ * Resource action: A stop on node1
+ * Resource action: A start on node2
+ * Resource action: A monitor=10000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Started node2
+ * B (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/colocate-primitive-with-clone.summary b/cts/scheduler/summary/colocate-primitive-with-clone.summary
new file mode 100644
index 0000000..e884428
--- /dev/null
+++ b/cts/scheduler/summary/colocate-primitive-with-clone.summary
@@ -0,0 +1,127 @@
+Current cluster status:
+ * Node List:
+ * Online: [ srv01 srv02 srv03 srv04 ]
+
+ * Full List of Resources:
+ * Resource Group: UMgroup01:
+ * UmVIPcheck (ocf:heartbeat:Dummy): Stopped
+ * UmIPaddr (ocf:heartbeat:Dummy): Stopped
+ * UmDummy01 (ocf:heartbeat:Dummy): Stopped
+ * UmDummy02 (ocf:heartbeat:Dummy): Stopped
+ * Resource Group: OVDBgroup02-1:
+ * prmExPostgreSQLDB1 (ocf:heartbeat:Dummy): Started srv04
+ * prmFsPostgreSQLDB1-1 (ocf:heartbeat:Dummy): Started srv04
+ * prmFsPostgreSQLDB1-2 (ocf:heartbeat:Dummy): Started srv04
+ * prmFsPostgreSQLDB1-3 (ocf:heartbeat:Dummy): Started srv04
+ * prmIpPostgreSQLDB1 (ocf:heartbeat:Dummy): Started srv04
+ * prmApPostgreSQLDB1 (ocf:heartbeat:Dummy): Started srv04
+ * Resource Group: OVDBgroup02-2:
+ * prmExPostgreSQLDB2 (ocf:heartbeat:Dummy): Started srv02
+ * prmFsPostgreSQLDB2-1 (ocf:heartbeat:Dummy): Started srv02
+ * prmFsPostgreSQLDB2-2 (ocf:heartbeat:Dummy): Started srv02
+ * prmFsPostgreSQLDB2-3 (ocf:heartbeat:Dummy): Started srv02
+ * prmIpPostgreSQLDB2 (ocf:heartbeat:Dummy): Started srv02
+ * prmApPostgreSQLDB2 (ocf:heartbeat:Dummy): Started srv02
+ * Resource Group: OVDBgroup02-3:
+ * prmExPostgreSQLDB3 (ocf:heartbeat:Dummy): Started srv03
+ * prmFsPostgreSQLDB3-1 (ocf:heartbeat:Dummy): Started srv03
+ * prmFsPostgreSQLDB3-2 (ocf:heartbeat:Dummy): Started srv03
+ * prmFsPostgreSQLDB3-3 (ocf:heartbeat:Dummy): Started srv03
+ * prmIpPostgreSQLDB3 (ocf:heartbeat:Dummy): Started srv03
+ * prmApPostgreSQLDB3 (ocf:heartbeat:Dummy): Started srv03
+ * Resource Group: grpStonith1:
+ * prmStonithN1 (stonith:external/ssh): Started srv04
+ * Resource Group: grpStonith2:
+ * prmStonithN2 (stonith:external/ssh): Started srv03
+ * Resource Group: grpStonith3:
+ * prmStonithN3 (stonith:external/ssh): Started srv02
+ * Resource Group: grpStonith4:
+ * prmStonithN4 (stonith:external/ssh): Started srv03
+ * Clone Set: clnUMgroup01 [clnUmResource]:
+ * Started: [ srv04 ]
+ * Stopped: [ srv01 srv02 srv03 ]
+ * Clone Set: clnPingd [clnPrmPingd]:
+ * Started: [ srv02 srv03 srv04 ]
+ * Stopped: [ srv01 ]
+ * Clone Set: clnDiskd1 [clnPrmDiskd1]:
+ * Started: [ srv02 srv03 srv04 ]
+ * Stopped: [ srv01 ]
+ * Clone Set: clnG3dummy1 [clnG3dummy01]:
+ * Started: [ srv02 srv03 srv04 ]
+ * Stopped: [ srv01 ]
+ * Clone Set: clnG3dummy2 [clnG3dummy02]:
+ * Started: [ srv02 srv03 srv04 ]
+ * Stopped: [ srv01 ]
+
+Transition Summary:
+ * Start UmVIPcheck ( srv04 )
+ * Start UmIPaddr ( srv04 )
+ * Start UmDummy01 ( srv04 )
+ * Start UmDummy02 ( srv04 )
+
+Executing Cluster Transition:
+ * Pseudo action: UMgroup01_start_0
+ * Resource action: UmVIPcheck start on srv04
+ * Resource action: UmIPaddr start on srv04
+ * Resource action: UmDummy01 start on srv04
+ * Resource action: UmDummy02 start on srv04
+ * Cluster action: do_shutdown on srv01
+ * Pseudo action: UMgroup01_running_0
+ * Resource action: UmIPaddr monitor=10000 on srv04
+ * Resource action: UmDummy01 monitor=10000 on srv04
+ * Resource action: UmDummy02 monitor=10000 on srv04
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ srv01 srv02 srv03 srv04 ]
+
+ * Full List of Resources:
+ * Resource Group: UMgroup01:
+ * UmVIPcheck (ocf:heartbeat:Dummy): Started srv04
+ * UmIPaddr (ocf:heartbeat:Dummy): Started srv04
+ * UmDummy01 (ocf:heartbeat:Dummy): Started srv04
+ * UmDummy02 (ocf:heartbeat:Dummy): Started srv04
+ * Resource Group: OVDBgroup02-1:
+ * prmExPostgreSQLDB1 (ocf:heartbeat:Dummy): Started srv04
+ * prmFsPostgreSQLDB1-1 (ocf:heartbeat:Dummy): Started srv04
+ * prmFsPostgreSQLDB1-2 (ocf:heartbeat:Dummy): Started srv04
+ * prmFsPostgreSQLDB1-3 (ocf:heartbeat:Dummy): Started srv04
+ * prmIpPostgreSQLDB1 (ocf:heartbeat:Dummy): Started srv04
+ * prmApPostgreSQLDB1 (ocf:heartbeat:Dummy): Started srv04
+ * Resource Group: OVDBgroup02-2:
+ * prmExPostgreSQLDB2 (ocf:heartbeat:Dummy): Started srv02
+ * prmFsPostgreSQLDB2-1 (ocf:heartbeat:Dummy): Started srv02
+ * prmFsPostgreSQLDB2-2 (ocf:heartbeat:Dummy): Started srv02
+ * prmFsPostgreSQLDB2-3 (ocf:heartbeat:Dummy): Started srv02
+ * prmIpPostgreSQLDB2 (ocf:heartbeat:Dummy): Started srv02
+ * prmApPostgreSQLDB2 (ocf:heartbeat:Dummy): Started srv02
+ * Resource Group: OVDBgroup02-3:
+ * prmExPostgreSQLDB3 (ocf:heartbeat:Dummy): Started srv03
+ * prmFsPostgreSQLDB3-1 (ocf:heartbeat:Dummy): Started srv03
+ * prmFsPostgreSQLDB3-2 (ocf:heartbeat:Dummy): Started srv03
+ * prmFsPostgreSQLDB3-3 (ocf:heartbeat:Dummy): Started srv03
+ * prmIpPostgreSQLDB3 (ocf:heartbeat:Dummy): Started srv03
+ * prmApPostgreSQLDB3 (ocf:heartbeat:Dummy): Started srv03
+ * Resource Group: grpStonith1:
+ * prmStonithN1 (stonith:external/ssh): Started srv04
+ * Resource Group: grpStonith2:
+ * prmStonithN2 (stonith:external/ssh): Started srv03
+ * Resource Group: grpStonith3:
+ * prmStonithN3 (stonith:external/ssh): Started srv02
+ * Resource Group: grpStonith4:
+ * prmStonithN4 (stonith:external/ssh): Started srv03
+ * Clone Set: clnUMgroup01 [clnUmResource]:
+ * Started: [ srv04 ]
+ * Stopped: [ srv01 srv02 srv03 ]
+ * Clone Set: clnPingd [clnPrmPingd]:
+ * Started: [ srv02 srv03 srv04 ]
+ * Stopped: [ srv01 ]
+ * Clone Set: clnDiskd1 [clnPrmDiskd1]:
+ * Started: [ srv02 srv03 srv04 ]
+ * Stopped: [ srv01 ]
+ * Clone Set: clnG3dummy1 [clnG3dummy01]:
+ * Started: [ srv02 srv03 srv04 ]
+ * Stopped: [ srv01 ]
+ * Clone Set: clnG3dummy2 [clnG3dummy02]:
+ * Started: [ srv02 srv03 srv04 ]
+ * Stopped: [ srv01 ]
diff --git a/cts/scheduler/summary/colocate-unmanaged-group.summary b/cts/scheduler/summary/colocate-unmanaged-group.summary
new file mode 100644
index 0000000..f29452b
--- /dev/null
+++ b/cts/scheduler/summary/colocate-unmanaged-group.summary
@@ -0,0 +1,31 @@
+Using the original execution date of: 2020-02-26 05:50:16Z
+Current cluster status:
+ * Node List:
+ * Online: [ rh80-test01 rh80-test02 ]
+
+ * Full List of Resources:
+ * Clone Set: prmStateful-clone [prmStateful] (promotable):
+ * Stopped: [ rh80-test01 rh80-test02 ]
+ * Resource Group: grpTest:
+ * prmDummy1 (ocf:heartbeat:Dummy): Started rh80-test01 (unmanaged)
+ * prmDummy2 (ocf:heartbeat:Dummy): Stopped
+ * prmDummy3 (ocf:heartbeat:Dummy): Stopped
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: prmDummy1 monitor=10000 on rh80-test01
+ * Resource action: prmDummy3 monitor on rh80-test01
+Using the original execution date of: 2020-02-26 05:50:16Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rh80-test01 rh80-test02 ]
+
+ * Full List of Resources:
+ * Clone Set: prmStateful-clone [prmStateful] (promotable):
+ * Stopped: [ rh80-test01 rh80-test02 ]
+ * Resource Group: grpTest:
+ * prmDummy1 (ocf:heartbeat:Dummy): Started rh80-test01 (unmanaged)
+ * prmDummy2 (ocf:heartbeat:Dummy): Stopped
+ * prmDummy3 (ocf:heartbeat:Dummy): Stopped
diff --git a/cts/scheduler/summary/colocated-utilization-clone.summary b/cts/scheduler/summary/colocated-utilization-clone.summary
new file mode 100644
index 0000000..d303bdc
--- /dev/null
+++ b/cts/scheduler/summary/colocated-utilization-clone.summary
@@ -0,0 +1,73 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 node3 ]
+ * Clone Set: clone2 [group1]:
+ * Stopped: [ node1 node2 node3 ]
+ * Resource Group: group2:
+ * rsc4 (ocf:pacemaker:Dummy): Stopped
+ * rsc5 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1:0 ( node2 )
+ * Start rsc1:1 ( node3 )
+ * Start rsc2:0 ( node3 )
+ * Start rsc3:0 ( node3 )
+ * Start rsc2:1 ( node2 )
+ * Start rsc3:1 ( node2 )
+ * Start rsc4 ( node3 )
+ * Start rsc5 ( node3 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1:0 monitor on node2
+ * Resource action: rsc1:0 monitor on node1
+ * Resource action: rsc1:1 monitor on node3
+ * Pseudo action: clone1_start_0
+ * Resource action: rsc2:0 monitor on node3
+ * Resource action: rsc2:0 monitor on node1
+ * Resource action: rsc3:0 monitor on node3
+ * Resource action: rsc3:0 monitor on node1
+ * Resource action: rsc2:1 monitor on node2
+ * Resource action: rsc3:1 monitor on node2
+ * Resource action: rsc4 monitor on node3
+ * Resource action: rsc4 monitor on node2
+ * Resource action: rsc4 monitor on node1
+ * Resource action: rsc5 monitor on node3
+ * Resource action: rsc5 monitor on node2
+ * Resource action: rsc5 monitor on node1
+ * Pseudo action: load_stopped_node3
+ * Pseudo action: load_stopped_node2
+ * Pseudo action: load_stopped_node1
+ * Resource action: rsc1:0 start on node2
+ * Resource action: rsc1:1 start on node3
+ * Pseudo action: clone1_running_0
+ * Pseudo action: clone2_start_0
+ * Pseudo action: group1:0_start_0
+ * Resource action: rsc2:0 start on node3
+ * Resource action: rsc3:0 start on node3
+ * Pseudo action: group1:1_start_0
+ * Resource action: rsc2:1 start on node2
+ * Resource action: rsc3:1 start on node2
+ * Pseudo action: group1:0_running_0
+ * Pseudo action: group1:1_running_0
+ * Pseudo action: clone2_running_0
+ * Pseudo action: group2_start_0
+ * Resource action: rsc4 start on node3
+ * Resource action: rsc5 start on node3
+ * Pseudo action: group2_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Clone Set: clone1 [rsc1]:
+ * Started: [ node2 node3 ]
+ * Clone Set: clone2 [group1]:
+ * Started: [ node2 node3 ]
+ * Resource Group: group2:
+ * rsc4 (ocf:pacemaker:Dummy): Started node3
+ * rsc5 (ocf:pacemaker:Dummy): Started node3
diff --git a/cts/scheduler/summary/colocated-utilization-group.summary b/cts/scheduler/summary/colocated-utilization-group.summary
new file mode 100644
index 0000000..b76d913
--- /dev/null
+++ b/cts/scheduler/summary/colocated-utilization-group.summary
@@ -0,0 +1,55 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * Resource Group: group1:
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * Resource Group: group2:
+ * rsc4 (ocf:pacemaker:Dummy): Stopped
+ * rsc5 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+ * Start rsc2 ( node2 )
+ * Start rsc3 ( node2 )
+ * Start rsc4 ( node2 )
+ * Start rsc5 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Pseudo action: group1_start_0
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Pseudo action: group2_start_0
+ * Resource action: rsc4 monitor on node2
+ * Resource action: rsc4 monitor on node1
+ * Resource action: rsc5 monitor on node2
+ * Resource action: rsc5 monitor on node1
+ * Pseudo action: load_stopped_node2
+ * Pseudo action: load_stopped_node1
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 start on node2
+ * Resource action: rsc3 start on node2
+ * Resource action: rsc4 start on node2
+ * Resource action: rsc5 start on node2
+ * Pseudo action: group1_running_0
+ * Pseudo action: group2_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * Resource Group: group1:
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+ * rsc3 (ocf:pacemaker:Dummy): Started node2
+ * Resource Group: group2:
+ * rsc4 (ocf:pacemaker:Dummy): Started node2
+ * rsc5 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/colocated-utilization-primitive-1.summary b/cts/scheduler/summary/colocated-utilization-primitive-1.summary
new file mode 100644
index 0000000..dc4d9a6
--- /dev/null
+++ b/cts/scheduler/summary/colocated-utilization-primitive-1.summary
@@ -0,0 +1,35 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+ * Start rsc2 ( node2 )
+ * Start rsc3 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Pseudo action: load_stopped_node2
+ * Pseudo action: load_stopped_node1
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 start on node2
+ * Resource action: rsc3 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+ * rsc3 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/colocated-utilization-primitive-2.summary b/cts/scheduler/summary/colocated-utilization-primitive-2.summary
new file mode 100644
index 0000000..001d647
--- /dev/null
+++ b/cts/scheduler/summary/colocated-utilization-primitive-2.summary
@@ -0,0 +1,33 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc3 ( node2 )
+ * Start rsc1 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Pseudo action: load_stopped_node2
+ * Pseudo action: load_stopped_node1
+ * Resource action: rsc3 start on node2
+ * Resource action: rsc1 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc3 (ocf:pacemaker:Dummy): Started node2
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/colocation-influence.summary b/cts/scheduler/summary/colocation-influence.summary
new file mode 100644
index 0000000..e240003
--- /dev/null
+++ b/cts/scheduler/summary/colocation-influence.summary
@@ -0,0 +1,170 @@
+Current cluster status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+ * GuestOnline: [ bundle10-0 bundle10-1 bundle11-0 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-1
+ * rsc1a (ocf:pacemaker:Dummy): Started rhel7-2
+ * rsc1b (ocf:pacemaker:Dummy): Started rhel7-2
+ * rsc2a (ocf:pacemaker:Dummy): Started rhel7-4
+ * rsc2b (ocf:pacemaker:Dummy): Started rhel7-4
+ * rsc3a (ocf:pacemaker:Dummy): Stopped
+ * rsc3b (ocf:pacemaker:Dummy): Stopped
+ * rsc4a (ocf:pacemaker:Dummy): Started rhel7-3
+ * rsc4b (ocf:pacemaker:Dummy): Started rhel7-3
+ * rsc5a (ocf:pacemaker:Dummy): Started rhel7-1
+ * Resource Group: group5a:
+ * rsc5a1 (ocf:pacemaker:Dummy): Started rhel7-1
+ * rsc5a2 (ocf:pacemaker:Dummy): Started rhel7-1
+ * Resource Group: group6a:
+ * rsc6a1 (ocf:pacemaker:Dummy): Started rhel7-2
+ * rsc6a2 (ocf:pacemaker:Dummy): Started rhel7-2
+ * rsc6a (ocf:pacemaker:Dummy): Started rhel7-2
+ * Resource Group: group7a:
+ * rsc7a1 (ocf:pacemaker:Dummy): Started rhel7-3
+ * rsc7a2 (ocf:pacemaker:Dummy): Started rhel7-3
+ * Clone Set: rsc8a-clone [rsc8a]:
+ * Started: [ rhel7-1 rhel7-3 rhel7-4 ]
+ * Clone Set: rsc8b-clone [rsc8b]:
+ * Started: [ rhel7-1 rhel7-3 rhel7-4 ]
+ * rsc9a (ocf:pacemaker:Dummy): Started rhel7-4
+ * rsc9b (ocf:pacemaker:Dummy): Started rhel7-4
+ * rsc9c (ocf:pacemaker:Dummy): Started rhel7-4
+ * rsc10a (ocf:pacemaker:Dummy): Started rhel7-2
+ * rsc11a (ocf:pacemaker:Dummy): Started rhel7-1
+ * rsc12a (ocf:pacemaker:Dummy): Started rhel7-1
+ * rsc12b (ocf:pacemaker:Dummy): Started rhel7-1
+ * rsc12c (ocf:pacemaker:Dummy): Started rhel7-1
+ * Container bundle set: bundle10 [pcmktest:http]:
+ * bundle10-0 (192.168.122.131) (ocf:heartbeat:apache): Started rhel7-2
+ * bundle10-1 (192.168.122.132) (ocf:heartbeat:apache): Started rhel7-3
+ * Container bundle set: bundle11 [pcmktest:http]:
+ * bundle11-0 (192.168.122.134) (ocf:pacemaker:Dummy): Started rhel7-1
+ * bundle11-1 (192.168.122.135) (ocf:pacemaker:Dummy): Stopped
+ * rsc13a (ocf:pacemaker:Dummy): Started rhel7-3
+ * Clone Set: rsc13b-clone [rsc13b] (promotable):
+ * Promoted: [ rhel7-3 ]
+ * Unpromoted: [ rhel7-1 rhel7-2 rhel7-4 ]
+ * Stopped: [ rhel7-5 ]
+ * rsc14b (ocf:pacemaker:Dummy): Started rhel7-4
+ * Clone Set: rsc14a-clone [rsc14a] (promotable):
+ * Promoted: [ rhel7-4 ]
+ * Unpromoted: [ rhel7-1 rhel7-2 rhel7-3 ]
+ * Stopped: [ rhel7-5 ]
+
+Transition Summary:
+ * Move rsc1a ( rhel7-2 -> rhel7-3 )
+ * Move rsc1b ( rhel7-2 -> rhel7-3 )
+ * Stop rsc2a ( rhel7-4 ) due to node availability
+ * Start rsc3a ( rhel7-2 )
+ * Start rsc3b ( rhel7-2 )
+ * Stop rsc4a ( rhel7-3 ) due to node availability
+ * Stop rsc5a ( rhel7-1 ) due to node availability
+ * Stop rsc6a1 ( rhel7-2 ) due to node availability
+ * Stop rsc6a2 ( rhel7-2 ) due to node availability
+ * Stop rsc7a2 ( rhel7-3 ) due to node availability
+ * Stop rsc8a:1 ( rhel7-4 ) due to node availability
+ * Stop rsc9c ( rhel7-4 ) due to node availability
+ * Move rsc10a ( rhel7-2 -> rhel7-3 )
+ * Stop rsc12b ( rhel7-1 ) due to node availability
+ * Start bundle11-1 ( rhel7-5 ) due to unrunnable bundle11-docker-1 start (blocked)
+ * Start bundle11a:1 ( bundle11-1 ) due to unrunnable bundle11-docker-1 start (blocked)
+ * Stop rsc13a ( rhel7-3 ) due to node availability
+ * Stop rsc14a:1 ( Promoted rhel7-4 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: rsc1a stop on rhel7-2
+ * Resource action: rsc1b stop on rhel7-2
+ * Resource action: rsc2a stop on rhel7-4
+ * Resource action: rsc3a start on rhel7-2
+ * Resource action: rsc3b start on rhel7-2
+ * Resource action: rsc4a stop on rhel7-3
+ * Resource action: rsc5a stop on rhel7-1
+ * Pseudo action: group6a_stop_0
+ * Resource action: rsc6a2 stop on rhel7-2
+ * Pseudo action: group7a_stop_0
+ * Resource action: rsc7a2 stop on rhel7-3
+ * Pseudo action: rsc8a-clone_stop_0
+ * Resource action: rsc9c stop on rhel7-4
+ * Resource action: rsc10a stop on rhel7-2
+ * Resource action: rsc12b stop on rhel7-1
+ * Resource action: rsc13a stop on rhel7-3
+ * Pseudo action: rsc14a-clone_demote_0
+ * Pseudo action: bundle11_start_0
+ * Resource action: rsc1a start on rhel7-3
+ * Resource action: rsc1b start on rhel7-3
+ * Resource action: rsc3a monitor=10000 on rhel7-2
+ * Resource action: rsc3b monitor=10000 on rhel7-2
+ * Resource action: rsc6a1 stop on rhel7-2
+ * Pseudo action: group7a_stopped_0
+ * Resource action: rsc8a stop on rhel7-4
+ * Pseudo action: rsc8a-clone_stopped_0
+ * Resource action: rsc10a start on rhel7-3
+ * Pseudo action: bundle11-clone_start_0
+ * Resource action: rsc14a demote on rhel7-4
+ * Pseudo action: rsc14a-clone_demoted_0
+ * Pseudo action: rsc14a-clone_stop_0
+ * Resource action: rsc1a monitor=10000 on rhel7-3
+ * Resource action: rsc1b monitor=10000 on rhel7-3
+ * Pseudo action: group6a_stopped_0
+ * Resource action: rsc10a monitor=10000 on rhel7-3
+ * Pseudo action: bundle11-clone_running_0
+ * Resource action: rsc14a stop on rhel7-4
+ * Pseudo action: rsc14a-clone_stopped_0
+ * Pseudo action: bundle11_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+ * GuestOnline: [ bundle10-0 bundle10-1 bundle11-0 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-1
+ * rsc1a (ocf:pacemaker:Dummy): Started rhel7-3
+ * rsc1b (ocf:pacemaker:Dummy): Started rhel7-3
+ * rsc2a (ocf:pacemaker:Dummy): Stopped
+ * rsc2b (ocf:pacemaker:Dummy): Started rhel7-4
+ * rsc3a (ocf:pacemaker:Dummy): Started rhel7-2
+ * rsc3b (ocf:pacemaker:Dummy): Started rhel7-2
+ * rsc4a (ocf:pacemaker:Dummy): Stopped
+ * rsc4b (ocf:pacemaker:Dummy): Started rhel7-3
+ * rsc5a (ocf:pacemaker:Dummy): Stopped
+ * Resource Group: group5a:
+ * rsc5a1 (ocf:pacemaker:Dummy): Started rhel7-1
+ * rsc5a2 (ocf:pacemaker:Dummy): Started rhel7-1
+ * Resource Group: group6a:
+ * rsc6a1 (ocf:pacemaker:Dummy): Stopped
+ * rsc6a2 (ocf:pacemaker:Dummy): Stopped
+ * rsc6a (ocf:pacemaker:Dummy): Started rhel7-2
+ * Resource Group: group7a:
+ * rsc7a1 (ocf:pacemaker:Dummy): Started rhel7-3
+ * rsc7a2 (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: rsc8a-clone [rsc8a]:
+ * Started: [ rhel7-1 rhel7-3 ]
+ * Stopped: [ rhel7-2 rhel7-4 rhel7-5 ]
+ * Clone Set: rsc8b-clone [rsc8b]:
+ * Started: [ rhel7-1 rhel7-3 rhel7-4 ]
+ * rsc9a (ocf:pacemaker:Dummy): Started rhel7-4
+ * rsc9b (ocf:pacemaker:Dummy): Started rhel7-4
+ * rsc9c (ocf:pacemaker:Dummy): Stopped
+ * rsc10a (ocf:pacemaker:Dummy): Started rhel7-3
+ * rsc11a (ocf:pacemaker:Dummy): Started rhel7-1
+ * rsc12a (ocf:pacemaker:Dummy): Started rhel7-1
+ * rsc12b (ocf:pacemaker:Dummy): Stopped
+ * rsc12c (ocf:pacemaker:Dummy): Started rhel7-1
+ * Container bundle set: bundle10 [pcmktest:http]:
+ * bundle10-0 (192.168.122.131) (ocf:heartbeat:apache): Started rhel7-2
+ * bundle10-1 (192.168.122.132) (ocf:heartbeat:apache): Started rhel7-3
+ * Container bundle set: bundle11 [pcmktest:http]:
+ * bundle11-0 (192.168.122.134) (ocf:pacemaker:Dummy): Started rhel7-1
+ * bundle11-1 (192.168.122.135) (ocf:pacemaker:Dummy): Stopped
+ * rsc13a (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: rsc13b-clone [rsc13b] (promotable):
+ * Promoted: [ rhel7-3 ]
+ * Unpromoted: [ rhel7-1 rhel7-2 rhel7-4 ]
+ * Stopped: [ rhel7-5 ]
+ * rsc14b (ocf:pacemaker:Dummy): Started rhel7-4
+ * Clone Set: rsc14a-clone [rsc14a] (promotable):
+ * Unpromoted: [ rhel7-1 rhel7-2 rhel7-3 ]
+ * Stopped: [ rhel7-4 rhel7-5 ]
diff --git a/cts/scheduler/summary/colocation-priority-group.summary b/cts/scheduler/summary/colocation-priority-group.summary
new file mode 100644
index 0000000..3a7cf2a
--- /dev/null
+++ b/cts/scheduler/summary/colocation-priority-group.summary
@@ -0,0 +1,59 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Resource Group: group1:
+ * member1a (ocf:pacemaker:Dummy): Stopped
+ * member1b (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * Fencing (stonith:fence_xvm): Started node1
+ * rsc4 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start member1a ( node1 )
+ * Start member1b ( node1 )
+ * Start rsc3 ( node1 )
+ * Move Fencing ( node1 -> node2 )
+ * Start rsc4 ( node2 )
+
+Executing Cluster Transition:
+ * Pseudo action: group1_start_0
+ * Resource action: member1a monitor on node2
+ * Resource action: member1a monitor on node1
+ * Resource action: member1b monitor on node2
+ * Resource action: member1b monitor on node1
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: Fencing stop on node1
+ * Resource action: rsc4 monitor on node2
+ * Resource action: rsc4 monitor on node1
+ * Pseudo action: load_stopped_node2
+ * Pseudo action: load_stopped_node1
+ * Resource action: member1a start on node1
+ * Resource action: member1b start on node1
+ * Resource action: rsc3 start on node1
+ * Resource action: Fencing start on node2
+ * Resource action: rsc4 start on node2
+ * Pseudo action: group1_running_0
+ * Resource action: member1a monitor=10000 on node1
+ * Resource action: member1b monitor=10000 on node1
+ * Resource action: rsc3 monitor=10000 on node1
+ * Resource action: Fencing monitor=120000 on node2
+ * Resource action: rsc4 monitor=10000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Resource Group: group1:
+ * member1a (ocf:pacemaker:Dummy): Started node1
+ * member1b (ocf:pacemaker:Dummy): Started node1
+ * rsc3 (ocf:pacemaker:Dummy): Started node1
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * Fencing (stonith:fence_xvm): Started node2
+ * rsc4 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/colocation-vs-stickiness.summary b/cts/scheduler/summary/colocation-vs-stickiness.summary
new file mode 100644
index 0000000..8bfb8b0
--- /dev/null
+++ b/cts/scheduler/summary/colocation-vs-stickiness.summary
@@ -0,0 +1,45 @@
+Using the original execution date of: 2018-09-26 16:40:38Z
+Current cluster status:
+ * Node List:
+ * Node rhel7-1: standby
+ * Node rhel7-2: standby
+ * Node rhel7-3: standby
+ * Online: [ rhel7-4 rhel7-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-5
+ * FencingPass (stonith:fence_dummy): Started rhel7-5
+ * FencingFail (stonith:fence_dummy): Started rhel7-5
+ * Resource Group: group1:
+ * dummy1 (ocf:pacemaker:Dummy): Started rhel7-5
+ * dummy1b (ocf:pacemaker:Dummy): Started rhel7-5
+ * dummy1c (ocf:pacemaker:Dummy): Started rhel7-5
+ * Resource Group: group2:
+ * dummy2a (ocf:pacemaker:Dummy): Started rhel7-5
+ * dummy2b (ocf:pacemaker:Dummy): Started rhel7-5
+ * dummy2c (ocf:pacemaker:Dummy): Started rhel7-5
+
+Transition Summary:
+
+Executing Cluster Transition:
+Using the original execution date of: 2018-09-26 16:40:38Z
+
+Revised Cluster Status:
+ * Node List:
+ * Node rhel7-1: standby
+ * Node rhel7-2: standby
+ * Node rhel7-3: standby
+ * Online: [ rhel7-4 rhel7-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-5
+ * FencingPass (stonith:fence_dummy): Started rhel7-5
+ * FencingFail (stonith:fence_dummy): Started rhel7-5
+ * Resource Group: group1:
+ * dummy1 (ocf:pacemaker:Dummy): Started rhel7-5
+ * dummy1b (ocf:pacemaker:Dummy): Started rhel7-5
+ * dummy1c (ocf:pacemaker:Dummy): Started rhel7-5
+ * Resource Group: group2:
+ * dummy2a (ocf:pacemaker:Dummy): Started rhel7-5
+ * dummy2b (ocf:pacemaker:Dummy): Started rhel7-5
+ * dummy2c (ocf:pacemaker:Dummy): Started rhel7-5
diff --git a/cts/scheduler/summary/colocation_constraint_stops_promoted.summary b/cts/scheduler/summary/colocation_constraint_stops_promoted.summary
new file mode 100644
index 0000000..5d330eb
--- /dev/null
+++ b/cts/scheduler/summary/colocation_constraint_stops_promoted.summary
@@ -0,0 +1,38 @@
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder fc16-builder2 ]
+
+ * Full List of Resources:
+ * Clone Set: PROMOTABLE_RSC_A [NATIVE_RSC_A] (promotable):
+ * Promoted: [ fc16-builder ]
+
+Transition Summary:
+ * Stop NATIVE_RSC_A:0 ( Promoted fc16-builder ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: PROMOTABLE_RSC_A_pre_notify_demote_0
+ * Resource action: NATIVE_RSC_A:0 notify on fc16-builder
+ * Pseudo action: PROMOTABLE_RSC_A_confirmed-pre_notify_demote_0
+ * Pseudo action: PROMOTABLE_RSC_A_demote_0
+ * Resource action: NATIVE_RSC_A:0 demote on fc16-builder
+ * Pseudo action: PROMOTABLE_RSC_A_demoted_0
+ * Pseudo action: PROMOTABLE_RSC_A_post_notify_demoted_0
+ * Resource action: NATIVE_RSC_A:0 notify on fc16-builder
+ * Pseudo action: PROMOTABLE_RSC_A_confirmed-post_notify_demoted_0
+ * Pseudo action: PROMOTABLE_RSC_A_pre_notify_stop_0
+ * Resource action: NATIVE_RSC_A:0 notify on fc16-builder
+ * Pseudo action: PROMOTABLE_RSC_A_confirmed-pre_notify_stop_0
+ * Pseudo action: PROMOTABLE_RSC_A_stop_0
+ * Resource action: NATIVE_RSC_A:0 stop on fc16-builder
+ * Resource action: NATIVE_RSC_A:0 delete on fc16-builder2
+ * Pseudo action: PROMOTABLE_RSC_A_stopped_0
+ * Pseudo action: PROMOTABLE_RSC_A_post_notify_stopped_0
+ * Pseudo action: PROMOTABLE_RSC_A_confirmed-post_notify_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder fc16-builder2 ]
+
+ * Full List of Resources:
+ * Clone Set: PROMOTABLE_RSC_A [NATIVE_RSC_A] (promotable):
+ * Stopped: [ fc16-builder fc16-builder2 ]
diff --git a/cts/scheduler/summary/colocation_constraint_stops_unpromoted.summary b/cts/scheduler/summary/colocation_constraint_stops_unpromoted.summary
new file mode 100644
index 0000000..32047e9
--- /dev/null
+++ b/cts/scheduler/summary/colocation_constraint_stops_unpromoted.summary
@@ -0,0 +1,36 @@
+1 of 2 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * Clone Set: PROMOTABLE_RSC_A [NATIVE_RSC_A] (promotable):
+ * Unpromoted: [ fc16-builder ]
+ * NATIVE_RSC_B (ocf:pacemaker:Dummy): Started fc16-builder (disabled)
+
+Transition Summary:
+ * Stop NATIVE_RSC_A:0 ( Unpromoted fc16-builder ) due to node availability
+ * Stop NATIVE_RSC_B ( fc16-builder ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: PROMOTABLE_RSC_A_pre_notify_stop_0
+ * Resource action: NATIVE_RSC_B stop on fc16-builder
+ * Resource action: NATIVE_RSC_A:0 notify on fc16-builder
+ * Pseudo action: PROMOTABLE_RSC_A_confirmed-pre_notify_stop_0
+ * Pseudo action: PROMOTABLE_RSC_A_stop_0
+ * Resource action: NATIVE_RSC_A:0 stop on fc16-builder
+ * Pseudo action: PROMOTABLE_RSC_A_stopped_0
+ * Pseudo action: PROMOTABLE_RSC_A_post_notify_stopped_0
+ * Pseudo action: PROMOTABLE_RSC_A_confirmed-post_notify_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * Clone Set: PROMOTABLE_RSC_A [NATIVE_RSC_A] (promotable):
+ * Stopped: [ fc16-builder fc16-builder2 ]
+ * NATIVE_RSC_B (ocf:pacemaker:Dummy): Stopped (disabled)
diff --git a/cts/scheduler/summary/comments.summary b/cts/scheduler/summary/comments.summary
new file mode 100644
index 0000000..e9bcaf5
--- /dev/null
+++ b/cts/scheduler/summary/comments.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * rsc2 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+ * Start rsc2 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/complex_enforce_colo.summary b/cts/scheduler/summary/complex_enforce_colo.summary
new file mode 100644
index 0000000..195ad85
--- /dev/null
+++ b/cts/scheduler/summary/complex_enforce_colo.summary
@@ -0,0 +1,455 @@
+3 of 132 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+
+ * Full List of Resources:
+ * node1-fence (stonith:fence_xvm): Started rhos6-node1
+ * node2-fence (stonith:fence_xvm): Started rhos6-node2
+ * node3-fence (stonith:fence_xvm): Started rhos6-node3
+ * Clone Set: lb-haproxy-clone [lb-haproxy]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * vip-db (ocf:heartbeat:IPaddr2): Started rhos6-node1
+ * vip-rabbitmq (ocf:heartbeat:IPaddr2): Started rhos6-node2
+ * vip-qpid (ocf:heartbeat:IPaddr2): Started rhos6-node3
+ * vip-keystone (ocf:heartbeat:IPaddr2): Started rhos6-node1
+ * vip-glance (ocf:heartbeat:IPaddr2): Started rhos6-node2
+ * vip-cinder (ocf:heartbeat:IPaddr2): Started rhos6-node3
+ * vip-swift (ocf:heartbeat:IPaddr2): Started rhos6-node1
+ * vip-neutron (ocf:heartbeat:IPaddr2): Started rhos6-node2
+ * vip-nova (ocf:heartbeat:IPaddr2): Started rhos6-node3
+ * vip-horizon (ocf:heartbeat:IPaddr2): Started rhos6-node1
+ * vip-heat (ocf:heartbeat:IPaddr2): Started rhos6-node2
+ * vip-ceilometer (ocf:heartbeat:IPaddr2): Started rhos6-node3
+ * Clone Set: galera-master [galera] (promotable):
+ * Promoted: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: rabbitmq-server-clone [rabbitmq-server]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: memcached-clone [memcached]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: mongodb-clone [mongodb]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: keystone-clone [keystone]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: glance-fs-clone [glance-fs]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: glance-registry-clone [glance-registry]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: glance-api-clone [glance-api]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * cinder-api (systemd:openstack-cinder-api): Started rhos6-node1
+ * cinder-scheduler (systemd:openstack-cinder-scheduler): Started rhos6-node1
+ * cinder-volume (systemd:openstack-cinder-volume): Started rhos6-node1
+ * Clone Set: swift-fs-clone [swift-fs]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: swift-account-clone [swift-account]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: swift-container-clone [swift-container]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: swift-object-clone [swift-object]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: swift-proxy-clone [swift-proxy]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * swift-object-expirer (systemd:openstack-swift-object-expirer): Started rhos6-node2
+ * Clone Set: neutron-server-clone [neutron-server]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: neutron-scale-clone [neutron-scale] (unique):
+ * neutron-scale:0 (ocf:neutron:NeutronScale): Started rhos6-node3
+ * neutron-scale:1 (ocf:neutron:NeutronScale): Started rhos6-node2
+ * neutron-scale:2 (ocf:neutron:NeutronScale): Started rhos6-node1
+ * Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: neutron-l3-agent-clone [neutron-l3-agent]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: nova-consoleauth-clone [nova-consoleauth]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: nova-novncproxy-clone [nova-novncproxy]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: nova-api-clone [nova-api]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: nova-scheduler-clone [nova-scheduler]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: nova-conductor-clone [nova-conductor]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * ceilometer-central (systemd:openstack-ceilometer-central): Started rhos6-node3
+ * Clone Set: ceilometer-collector-clone [ceilometer-collector]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: ceilometer-api-clone [ceilometer-api]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: ceilometer-delay-clone [ceilometer-delay]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: ceilometer-alarm-evaluator-clone [ceilometer-alarm-evaluator]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: ceilometer-alarm-notifier-clone [ceilometer-alarm-notifier]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: ceilometer-notification-clone [ceilometer-notification]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: heat-api-clone [heat-api]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: heat-api-cfn-clone [heat-api-cfn]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: heat-api-cloudwatch-clone [heat-api-cloudwatch]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * heat-engine (systemd:openstack-heat-engine): Started rhos6-node2
+ * Clone Set: horizon-clone [horizon]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+
+Transition Summary:
+ * Stop keystone:0 ( rhos6-node1 ) due to node availability
+ * Stop keystone:1 ( rhos6-node2 ) due to node availability
+ * Stop keystone:2 ( rhos6-node3 ) due to node availability
+ * Stop glance-registry:0 ( rhos6-node1 )
+ * Stop glance-registry:1 ( rhos6-node2 )
+ * Stop glance-registry:2 ( rhos6-node3 )
+ * Stop glance-api:0 ( rhos6-node1 )
+ * Stop glance-api:1 ( rhos6-node2 )
+ * Stop glance-api:2 ( rhos6-node3 )
+ * Stop cinder-api ( rhos6-node1 ) due to unrunnable keystone-clone running
+ * Stop cinder-scheduler ( rhos6-node1 ) due to required cinder-api start
+ * Stop cinder-volume ( rhos6-node1 ) due to colocation with cinder-scheduler
+ * Stop swift-account:0 ( rhos6-node1 )
+ * Stop swift-account:1 ( rhos6-node2 )
+ * Stop swift-account:2 ( rhos6-node3 )
+ * Stop swift-container:0 ( rhos6-node1 )
+ * Stop swift-container:1 ( rhos6-node2 )
+ * Stop swift-container:2 ( rhos6-node3 )
+ * Stop swift-object:0 ( rhos6-node1 )
+ * Stop swift-object:1 ( rhos6-node2 )
+ * Stop swift-object:2 ( rhos6-node3 )
+ * Stop swift-proxy:0 ( rhos6-node1 )
+ * Stop swift-proxy:1 ( rhos6-node2 )
+ * Stop swift-proxy:2 ( rhos6-node3 )
+ * Stop swift-object-expirer ( rhos6-node2 ) due to required swift-proxy-clone running
+ * Stop neutron-server:0 ( rhos6-node1 )
+ * Stop neutron-server:1 ( rhos6-node2 )
+ * Stop neutron-server:2 ( rhos6-node3 )
+ * Stop neutron-scale:0 ( rhos6-node3 )
+ * Stop neutron-scale:1 ( rhos6-node2 )
+ * Stop neutron-scale:2 ( rhos6-node1 )
+ * Stop neutron-ovs-cleanup:0 ( rhos6-node1 )
+ * Stop neutron-ovs-cleanup:1 ( rhos6-node2 )
+ * Stop neutron-ovs-cleanup:2 ( rhos6-node3 )
+ * Stop neutron-netns-cleanup:0 ( rhos6-node1 )
+ * Stop neutron-netns-cleanup:1 ( rhos6-node2 )
+ * Stop neutron-netns-cleanup:2 ( rhos6-node3 )
+ * Stop neutron-openvswitch-agent:0 ( rhos6-node1 )
+ * Stop neutron-openvswitch-agent:1 ( rhos6-node2 )
+ * Stop neutron-openvswitch-agent:2 ( rhos6-node3 )
+ * Stop neutron-dhcp-agent:0 ( rhos6-node1 )
+ * Stop neutron-dhcp-agent:1 ( rhos6-node2 )
+ * Stop neutron-dhcp-agent:2 ( rhos6-node3 )
+ * Stop neutron-l3-agent:0 ( rhos6-node1 )
+ * Stop neutron-l3-agent:1 ( rhos6-node2 )
+ * Stop neutron-l3-agent:2 ( rhos6-node3 )
+ * Stop neutron-metadata-agent:0 ( rhos6-node1 )
+ * Stop neutron-metadata-agent:1 ( rhos6-node2 )
+ * Stop neutron-metadata-agent:2 ( rhos6-node3 )
+ * Stop nova-consoleauth:0 ( rhos6-node1 )
+ * Stop nova-consoleauth:1 ( rhos6-node2 )
+ * Stop nova-consoleauth:2 ( rhos6-node3 )
+ * Stop nova-novncproxy:0 ( rhos6-node1 )
+ * Stop nova-novncproxy:1 ( rhos6-node2 )
+ * Stop nova-novncproxy:2 ( rhos6-node3 )
+ * Stop nova-api:0 ( rhos6-node1 )
+ * Stop nova-api:1 ( rhos6-node2 )
+ * Stop nova-api:2 ( rhos6-node3 )
+ * Stop nova-scheduler:0 ( rhos6-node1 )
+ * Stop nova-scheduler:1 ( rhos6-node2 )
+ * Stop nova-scheduler:2 ( rhos6-node3 )
+ * Stop nova-conductor:0 ( rhos6-node1 )
+ * Stop nova-conductor:1 ( rhos6-node2 )
+ * Stop nova-conductor:2 ( rhos6-node3 )
+ * Stop ceilometer-central ( rhos6-node3 ) due to unrunnable keystone-clone running
+ * Stop ceilometer-collector:0 ( rhos6-node1 ) due to required ceilometer-central start
+ * Stop ceilometer-collector:1 ( rhos6-node2 ) due to required ceilometer-central start
+ * Stop ceilometer-collector:2 ( rhos6-node3 ) due to required ceilometer-central start
+ * Stop ceilometer-api:0 ( rhos6-node1 ) due to required ceilometer-collector:0 start
+ * Stop ceilometer-api:1 ( rhos6-node2 ) due to required ceilometer-collector:1 start
+ * Stop ceilometer-api:2 ( rhos6-node3 ) due to required ceilometer-collector:2 start
+ * Stop ceilometer-delay:0 ( rhos6-node1 ) due to required ceilometer-api:0 start
+ * Stop ceilometer-delay:1 ( rhos6-node2 ) due to required ceilometer-api:1 start
+ * Stop ceilometer-delay:2 ( rhos6-node3 ) due to required ceilometer-api:2 start
+ * Stop ceilometer-alarm-evaluator:0 ( rhos6-node1 ) due to required ceilometer-delay:0 start
+ * Stop ceilometer-alarm-evaluator:1 ( rhos6-node2 ) due to required ceilometer-delay:1 start
+ * Stop ceilometer-alarm-evaluator:2 ( rhos6-node3 ) due to required ceilometer-delay:2 start
+ * Stop ceilometer-alarm-notifier:0 ( rhos6-node1 ) due to required ceilometer-alarm-evaluator:0 start
+ * Stop ceilometer-alarm-notifier:1 ( rhos6-node2 ) due to required ceilometer-alarm-evaluator:1 start
+ * Stop ceilometer-alarm-notifier:2 ( rhos6-node3 ) due to required ceilometer-alarm-evaluator:2 start
+ * Stop ceilometer-notification:0 ( rhos6-node1 ) due to required ceilometer-alarm-notifier:0 start
+ * Stop ceilometer-notification:1 ( rhos6-node2 ) due to required ceilometer-alarm-notifier:1 start
+ * Stop ceilometer-notification:2 ( rhos6-node3 ) due to required ceilometer-alarm-notifier:2 start
+ * Stop heat-api:0 ( rhos6-node1 ) due to required ceilometer-notification:0 start
+ * Stop heat-api:1 ( rhos6-node2 ) due to required ceilometer-notification:1 start
+ * Stop heat-api:2 ( rhos6-node3 ) due to required ceilometer-notification:2 start
+ * Stop heat-api-cfn:0 ( rhos6-node1 ) due to required heat-api:0 start
+ * Stop heat-api-cfn:1 ( rhos6-node2 ) due to required heat-api:1 start
+ * Stop heat-api-cfn:2 ( rhos6-node3 ) due to required heat-api:2 start
+ * Stop heat-api-cloudwatch:0 ( rhos6-node1 ) due to required heat-api-cfn:0 start
+ * Stop heat-api-cloudwatch:1 ( rhos6-node2 ) due to required heat-api-cfn:1 start
+ * Stop heat-api-cloudwatch:2 ( rhos6-node3 ) due to required heat-api-cfn:2 start
+ * Stop heat-engine ( rhos6-node2 ) due to colocation with heat-api-cloudwatch-clone
+
+Executing Cluster Transition:
+ * Pseudo action: glance-api-clone_stop_0
+ * Resource action: cinder-volume stop on rhos6-node1
+ * Pseudo action: swift-object-clone_stop_0
+ * Resource action: swift-object-expirer stop on rhos6-node2
+ * Pseudo action: neutron-metadata-agent-clone_stop_0
+ * Pseudo action: nova-conductor-clone_stop_0
+ * Resource action: heat-engine stop on rhos6-node2
+ * Resource action: glance-api stop on rhos6-node1
+ * Resource action: glance-api stop on rhos6-node2
+ * Resource action: glance-api stop on rhos6-node3
+ * Pseudo action: glance-api-clone_stopped_0
+ * Resource action: cinder-scheduler stop on rhos6-node1
+ * Resource action: swift-object stop on rhos6-node1
+ * Resource action: swift-object stop on rhos6-node2
+ * Resource action: swift-object stop on rhos6-node3
+ * Pseudo action: swift-object-clone_stopped_0
+ * Pseudo action: swift-proxy-clone_stop_0
+ * Resource action: neutron-metadata-agent stop on rhos6-node1
+ * Resource action: neutron-metadata-agent stop on rhos6-node2
+ * Resource action: neutron-metadata-agent stop on rhos6-node3
+ * Pseudo action: neutron-metadata-agent-clone_stopped_0
+ * Resource action: nova-conductor stop on rhos6-node1
+ * Resource action: nova-conductor stop on rhos6-node2
+ * Resource action: nova-conductor stop on rhos6-node3
+ * Pseudo action: nova-conductor-clone_stopped_0
+ * Pseudo action: heat-api-cloudwatch-clone_stop_0
+ * Pseudo action: glance-registry-clone_stop_0
+ * Resource action: cinder-api stop on rhos6-node1
+ * Pseudo action: swift-container-clone_stop_0
+ * Resource action: swift-proxy stop on rhos6-node1
+ * Resource action: swift-proxy stop on rhos6-node2
+ * Resource action: swift-proxy stop on rhos6-node3
+ * Pseudo action: swift-proxy-clone_stopped_0
+ * Pseudo action: neutron-l3-agent-clone_stop_0
+ * Pseudo action: nova-scheduler-clone_stop_0
+ * Resource action: heat-api-cloudwatch stop on rhos6-node1
+ * Resource action: heat-api-cloudwatch stop on rhos6-node2
+ * Resource action: heat-api-cloudwatch stop on rhos6-node3
+ * Pseudo action: heat-api-cloudwatch-clone_stopped_0
+ * Resource action: glance-registry stop on rhos6-node1
+ * Resource action: glance-registry stop on rhos6-node2
+ * Resource action: glance-registry stop on rhos6-node3
+ * Pseudo action: glance-registry-clone_stopped_0
+ * Resource action: swift-container stop on rhos6-node1
+ * Resource action: swift-container stop on rhos6-node2
+ * Resource action: swift-container stop on rhos6-node3
+ * Pseudo action: swift-container-clone_stopped_0
+ * Resource action: neutron-l3-agent stop on rhos6-node1
+ * Resource action: neutron-l3-agent stop on rhos6-node2
+ * Resource action: neutron-l3-agent stop on rhos6-node3
+ * Pseudo action: neutron-l3-agent-clone_stopped_0
+ * Resource action: nova-scheduler stop on rhos6-node1
+ * Resource action: nova-scheduler stop on rhos6-node2
+ * Resource action: nova-scheduler stop on rhos6-node3
+ * Pseudo action: nova-scheduler-clone_stopped_0
+ * Pseudo action: heat-api-cfn-clone_stop_0
+ * Pseudo action: swift-account-clone_stop_0
+ * Pseudo action: neutron-dhcp-agent-clone_stop_0
+ * Pseudo action: nova-api-clone_stop_0
+ * Resource action: heat-api-cfn stop on rhos6-node1
+ * Resource action: heat-api-cfn stop on rhos6-node2
+ * Resource action: heat-api-cfn stop on rhos6-node3
+ * Pseudo action: heat-api-cfn-clone_stopped_0
+ * Resource action: swift-account stop on rhos6-node1
+ * Resource action: swift-account stop on rhos6-node2
+ * Resource action: swift-account stop on rhos6-node3
+ * Pseudo action: swift-account-clone_stopped_0
+ * Resource action: neutron-dhcp-agent stop on rhos6-node1
+ * Resource action: neutron-dhcp-agent stop on rhos6-node2
+ * Resource action: neutron-dhcp-agent stop on rhos6-node3
+ * Pseudo action: neutron-dhcp-agent-clone_stopped_0
+ * Resource action: nova-api stop on rhos6-node1
+ * Resource action: nova-api stop on rhos6-node2
+ * Resource action: nova-api stop on rhos6-node3
+ * Pseudo action: nova-api-clone_stopped_0
+ * Pseudo action: heat-api-clone_stop_0
+ * Pseudo action: neutron-openvswitch-agent-clone_stop_0
+ * Pseudo action: nova-novncproxy-clone_stop_0
+ * Resource action: heat-api stop on rhos6-node1
+ * Resource action: heat-api stop on rhos6-node2
+ * Resource action: heat-api stop on rhos6-node3
+ * Pseudo action: heat-api-clone_stopped_0
+ * Resource action: neutron-openvswitch-agent stop on rhos6-node1
+ * Resource action: neutron-openvswitch-agent stop on rhos6-node2
+ * Resource action: neutron-openvswitch-agent stop on rhos6-node3
+ * Pseudo action: neutron-openvswitch-agent-clone_stopped_0
+ * Resource action: nova-novncproxy stop on rhos6-node1
+ * Resource action: nova-novncproxy stop on rhos6-node2
+ * Resource action: nova-novncproxy stop on rhos6-node3
+ * Pseudo action: nova-novncproxy-clone_stopped_0
+ * Pseudo action: ceilometer-notification-clone_stop_0
+ * Pseudo action: neutron-netns-cleanup-clone_stop_0
+ * Pseudo action: nova-consoleauth-clone_stop_0
+ * Resource action: ceilometer-notification stop on rhos6-node1
+ * Resource action: ceilometer-notification stop on rhos6-node2
+ * Resource action: ceilometer-notification stop on rhos6-node3
+ * Pseudo action: ceilometer-notification-clone_stopped_0
+ * Resource action: neutron-netns-cleanup stop on rhos6-node1
+ * Resource action: neutron-netns-cleanup stop on rhos6-node2
+ * Resource action: neutron-netns-cleanup stop on rhos6-node3
+ * Pseudo action: neutron-netns-cleanup-clone_stopped_0
+ * Resource action: nova-consoleauth stop on rhos6-node1
+ * Resource action: nova-consoleauth stop on rhos6-node2
+ * Resource action: nova-consoleauth stop on rhos6-node3
+ * Pseudo action: nova-consoleauth-clone_stopped_0
+ * Pseudo action: ceilometer-alarm-notifier-clone_stop_0
+ * Pseudo action: neutron-ovs-cleanup-clone_stop_0
+ * Resource action: ceilometer-alarm-notifier stop on rhos6-node1
+ * Resource action: ceilometer-alarm-notifier stop on rhos6-node2
+ * Resource action: ceilometer-alarm-notifier stop on rhos6-node3
+ * Pseudo action: ceilometer-alarm-notifier-clone_stopped_0
+ * Resource action: neutron-ovs-cleanup stop on rhos6-node1
+ * Resource action: neutron-ovs-cleanup stop on rhos6-node2
+ * Resource action: neutron-ovs-cleanup stop on rhos6-node3
+ * Pseudo action: neutron-ovs-cleanup-clone_stopped_0
+ * Pseudo action: ceilometer-alarm-evaluator-clone_stop_0
+ * Pseudo action: neutron-scale-clone_stop_0
+ * Resource action: ceilometer-alarm-evaluator stop on rhos6-node1
+ * Resource action: ceilometer-alarm-evaluator stop on rhos6-node2
+ * Resource action: ceilometer-alarm-evaluator stop on rhos6-node3
+ * Pseudo action: ceilometer-alarm-evaluator-clone_stopped_0
+ * Resource action: neutron-scale:0 stop on rhos6-node3
+ * Resource action: neutron-scale:1 stop on rhos6-node2
+ * Resource action: neutron-scale:2 stop on rhos6-node1
+ * Pseudo action: neutron-scale-clone_stopped_0
+ * Pseudo action: ceilometer-delay-clone_stop_0
+ * Pseudo action: neutron-server-clone_stop_0
+ * Resource action: ceilometer-delay stop on rhos6-node1
+ * Resource action: ceilometer-delay stop on rhos6-node2
+ * Resource action: ceilometer-delay stop on rhos6-node3
+ * Pseudo action: ceilometer-delay-clone_stopped_0
+ * Resource action: neutron-server stop on rhos6-node1
+ * Resource action: neutron-server stop on rhos6-node2
+ * Resource action: neutron-server stop on rhos6-node3
+ * Pseudo action: neutron-server-clone_stopped_0
+ * Pseudo action: ceilometer-api-clone_stop_0
+ * Resource action: ceilometer-api stop on rhos6-node1
+ * Resource action: ceilometer-api stop on rhos6-node2
+ * Resource action: ceilometer-api stop on rhos6-node3
+ * Pseudo action: ceilometer-api-clone_stopped_0
+ * Pseudo action: ceilometer-collector-clone_stop_0
+ * Resource action: ceilometer-collector stop on rhos6-node1
+ * Resource action: ceilometer-collector stop on rhos6-node2
+ * Resource action: ceilometer-collector stop on rhos6-node3
+ * Pseudo action: ceilometer-collector-clone_stopped_0
+ * Resource action: ceilometer-central stop on rhos6-node3
+ * Pseudo action: keystone-clone_stop_0
+ * Resource action: keystone stop on rhos6-node1
+ * Resource action: keystone stop on rhos6-node2
+ * Resource action: keystone stop on rhos6-node3
+ * Pseudo action: keystone-clone_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+
+ * Full List of Resources:
+ * node1-fence (stonith:fence_xvm): Started rhos6-node1
+ * node2-fence (stonith:fence_xvm): Started rhos6-node2
+ * node3-fence (stonith:fence_xvm): Started rhos6-node3
+ * Clone Set: lb-haproxy-clone [lb-haproxy]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * vip-db (ocf:heartbeat:IPaddr2): Started rhos6-node1
+ * vip-rabbitmq (ocf:heartbeat:IPaddr2): Started rhos6-node2
+ * vip-qpid (ocf:heartbeat:IPaddr2): Started rhos6-node3
+ * vip-keystone (ocf:heartbeat:IPaddr2): Started rhos6-node1
+ * vip-glance (ocf:heartbeat:IPaddr2): Started rhos6-node2
+ * vip-cinder (ocf:heartbeat:IPaddr2): Started rhos6-node3
+ * vip-swift (ocf:heartbeat:IPaddr2): Started rhos6-node1
+ * vip-neutron (ocf:heartbeat:IPaddr2): Started rhos6-node2
+ * vip-nova (ocf:heartbeat:IPaddr2): Started rhos6-node3
+ * vip-horizon (ocf:heartbeat:IPaddr2): Started rhos6-node1
+ * vip-heat (ocf:heartbeat:IPaddr2): Started rhos6-node2
+ * vip-ceilometer (ocf:heartbeat:IPaddr2): Started rhos6-node3
+ * Clone Set: galera-master [galera] (promotable):
+ * Promoted: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: rabbitmq-server-clone [rabbitmq-server]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: memcached-clone [memcached]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: mongodb-clone [mongodb]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: keystone-clone [keystone]:
+ * Stopped (disabled): [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: glance-fs-clone [glance-fs]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: glance-registry-clone [glance-registry]:
+ * Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: glance-api-clone [glance-api]:
+ * Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * cinder-api (systemd:openstack-cinder-api): Stopped
+ * cinder-scheduler (systemd:openstack-cinder-scheduler): Stopped
+ * cinder-volume (systemd:openstack-cinder-volume): Stopped
+ * Clone Set: swift-fs-clone [swift-fs]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: swift-account-clone [swift-account]:
+ * Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: swift-container-clone [swift-container]:
+ * Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: swift-object-clone [swift-object]:
+ * Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: swift-proxy-clone [swift-proxy]:
+ * Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * swift-object-expirer (systemd:openstack-swift-object-expirer): Stopped
+ * Clone Set: neutron-server-clone [neutron-server]:
+ * Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: neutron-scale-clone [neutron-scale] (unique):
+ * neutron-scale:0 (ocf:neutron:NeutronScale): Stopped
+ * neutron-scale:1 (ocf:neutron:NeutronScale): Stopped
+ * neutron-scale:2 (ocf:neutron:NeutronScale): Stopped
+ * Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]:
+ * Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]:
+ * Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]:
+ * Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]:
+ * Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: neutron-l3-agent-clone [neutron-l3-agent]:
+ * Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]:
+ * Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: nova-consoleauth-clone [nova-consoleauth]:
+ * Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: nova-novncproxy-clone [nova-novncproxy]:
+ * Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: nova-api-clone [nova-api]:
+ * Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: nova-scheduler-clone [nova-scheduler]:
+ * Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: nova-conductor-clone [nova-conductor]:
+ * Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * ceilometer-central (systemd:openstack-ceilometer-central): Stopped
+ * Clone Set: ceilometer-collector-clone [ceilometer-collector]:
+ * Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: ceilometer-api-clone [ceilometer-api]:
+ * Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: ceilometer-delay-clone [ceilometer-delay]:
+ * Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: ceilometer-alarm-evaluator-clone [ceilometer-alarm-evaluator]:
+ * Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: ceilometer-alarm-notifier-clone [ceilometer-alarm-notifier]:
+ * Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: ceilometer-notification-clone [ceilometer-notification]:
+ * Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: heat-api-clone [heat-api]:
+ * Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: heat-api-cfn-clone [heat-api-cfn]:
+ * Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * Clone Set: heat-api-cloudwatch-clone [heat-api-cloudwatch]:
+ * Stopped: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
+ * heat-engine (systemd:openstack-heat-engine): Stopped
+ * Clone Set: horizon-clone [horizon]:
+ * Started: [ rhos6-node1 rhos6-node2 rhos6-node3 ]
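
The long "due to required ... start" and "due to colocation with ..." chains in the transition summary above are the scheduler propagating mandatory constraints: once keystone-clone cannot run, every resource ordered or colocated after it must stop as well, transitively. The toy sketch below illustrates that cascade only; it is not Pacemaker's actual algorithm, and the dependency pairs are a hand-picked subset of the constraints implied by this test.

    from collections import defaultdict, deque

    # resource -> resources that require it (must stop if it stops);
    # hypothetical subset of the constraint graph shown in the summary
    dependents_of = defaultdict(list)
    for needed, dependent in [
        ("keystone", "glance-registry"),
        ("glance-registry", "glance-api"),
        ("keystone", "cinder-api"),
        ("cinder-api", "cinder-scheduler"),
        ("cinder-scheduler", "cinder-volume"),
        ("keystone", "ceilometer-central"),
        ("ceilometer-central", "ceilometer-collector"),
    ]:
        dependents_of[needed].append(dependent)

    def stop_cascade(root: str) -> list[str]:
        """Breadth-first list of resources forced to stop when `root` stops."""
        seen, queue, order = {root}, deque([root]), []
        while queue:
            cur = queue.popleft()
            order.append(cur)
            for dep in dependents_of[cur]:
                if dep not in seen:
                    seen.add(dep)
                    queue.append(dep)
        return order

    print(stop_cascade("keystone"))
    # ['keystone', 'glance-registry', 'cinder-api', 'ceilometer-central', ...]
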
diff --git a/cts/scheduler/summary/concurrent-fencing.summary b/cts/scheduler/summary/concurrent-fencing.summary
new file mode 100644
index 0000000..18cbcfd
--- /dev/null
+++ b/cts/scheduler/summary/concurrent-fencing.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Node node1: UNCLEAN (offline)
+ * Node node2: UNCLEAN (offline)
+ * Node node3: UNCLEAN (offline)
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Stopped
+
+Transition Summary:
+ * Fence (reboot) node3 'peer is no longer part of the cluster'
+ * Fence (reboot) node2 'peer is no longer part of the cluster'
+ * Fence (reboot) node1 'peer is no longer part of the cluster'
+
+Executing Cluster Transition:
+ * Fencing node3 (reboot)
+ * Fencing node1 (reboot)
+ * Fencing node2 (reboot)
+
+Revised Cluster Status:
+ * Node List:
+ * OFFLINE: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Stopped
diff --git a/cts/scheduler/summary/container-1.summary b/cts/scheduler/summary/container-1.summary
new file mode 100644
index 0000000..366ebff
--- /dev/null
+++ b/cts/scheduler/summary/container-1.summary
@@ -0,0 +1,32 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * container1 (ocf:pacemaker:Dummy): Stopped
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start container1 ( node1 )
+ * Start rsc1 ( node1 )
+ * Start rsc2 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: container1 monitor on node2
+ * Resource action: container1 monitor on node1
+ * Resource action: container1 start on node1
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 start on node1
+ * Resource action: container1 monitor=20000 on node1
+ * Resource action: rsc1 monitor=10000 on node1
+ * Resource action: rsc2 monitor=5000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * container1 (ocf:pacemaker:Dummy): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
diff --git a/cts/scheduler/summary/container-2.summary b/cts/scheduler/summary/container-2.summary
new file mode 100644
index 0000000..29c69a4
--- /dev/null
+++ b/cts/scheduler/summary/container-2.summary
@@ -0,0 +1,33 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * container1 (ocf:pacemaker:Dummy): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): FAILED node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+
+Transition Summary:
+ * Restart container1 ( node1 )
+ * Recover rsc1 ( node1 )
+ * Restart rsc2 ( node1 ) due to required container1 start
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node1
+ * Resource action: rsc2 stop on node1
+ * Resource action: container1 stop on node1
+ * Resource action: container1 start on node1
+ * Resource action: container1 monitor=20000 on node1
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc1 monitor=10000 on node1
+ * Resource action: rsc2 start on node1
+ * Resource action: rsc2 monitor=5000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * container1 (ocf:pacemaker:Dummy): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
diff --git a/cts/scheduler/summary/container-3.summary b/cts/scheduler/summary/container-3.summary
new file mode 100644
index 0000000..f579f1f
--- /dev/null
+++ b/cts/scheduler/summary/container-3.summary
@@ -0,0 +1,32 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * container1 (ocf:pacemaker:Dummy): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): FAILED (failure ignored)
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+
+Transition Summary:
+ * Restart container1 ( node1 )
+ * Start rsc1 ( node1 )
+ * Restart rsc2 ( node1 ) due to required container1 start
+
+Executing Cluster Transition:
+ * Resource action: rsc2 stop on node1
+ * Resource action: container1 stop on node1
+ * Resource action: container1 start on node1
+ * Resource action: container1 monitor=20000 on node1
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 start on node1
+ * Resource action: rsc2 monitor=5000 on node1
+ * Resource action: rsc1 monitor=10000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * container1 (ocf:pacemaker:Dummy): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node1 (failure ignored)
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
diff --git a/cts/scheduler/summary/container-4.summary b/cts/scheduler/summary/container-4.summary
new file mode 100644
index 0000000..a60393e
--- /dev/null
+++ b/cts/scheduler/summary/container-4.summary
@@ -0,0 +1,33 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * container1 (ocf:pacemaker:Dummy): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): FAILED node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+
+Transition Summary:
+ * Move container1 ( node1 -> node2 )
+ * Recover rsc1 ( node1 -> node2 )
+ * Move rsc2 ( node1 -> node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node1
+ * Resource action: rsc2 stop on node1
+ * Resource action: container1 stop on node1
+ * Resource action: container1 start on node2
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 start on node2
+ * Resource action: container1 monitor=20000 on node2
+ * Resource action: rsc1 monitor=10000 on node2
+ * Resource action: rsc2 monitor=5000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * container1 (ocf:pacemaker:Dummy): Started node2
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/container-group-1.summary b/cts/scheduler/summary/container-group-1.summary
new file mode 100644
index 0000000..955f865
--- /dev/null
+++ b/cts/scheduler/summary/container-group-1.summary
@@ -0,0 +1,36 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Resource Group: container-group:
+ * container1 (ocf:pacemaker:Dummy): Stopped
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start container1 ( node1 )
+ * Start rsc1 ( node1 )
+ * Start rsc2 ( node1 )
+
+Executing Cluster Transition:
+ * Pseudo action: container-group_start_0
+ * Resource action: container1 monitor on node2
+ * Resource action: container1 monitor on node1
+ * Resource action: container1 start on node1
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 start on node1
+ * Pseudo action: container-group_running_0
+ * Resource action: container1 monitor=20000 on node1
+ * Resource action: rsc1 monitor=10000 on node1
+ * Resource action: rsc2 monitor=5000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Resource Group: container-group:
+ * container1 (ocf:pacemaker:Dummy): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
diff --git a/cts/scheduler/summary/container-group-2.summary b/cts/scheduler/summary/container-group-2.summary
new file mode 100644
index 0000000..a3af18c
--- /dev/null
+++ b/cts/scheduler/summary/container-group-2.summary
@@ -0,0 +1,39 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Resource Group: container-group:
+ * container1 (ocf:pacemaker:Dummy): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): FAILED node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+
+Transition Summary:
+ * Restart container1 ( node1 )
+ * Recover rsc1 ( node1 )
+ * Restart rsc2 ( node1 ) due to required rsc1 start
+
+Executing Cluster Transition:
+ * Pseudo action: container-group_stop_0
+ * Resource action: rsc2 stop on node1
+ * Resource action: rsc1 stop on node1
+ * Resource action: container1 stop on node1
+ * Pseudo action: container-group_stopped_0
+ * Pseudo action: container-group_start_0
+ * Resource action: container1 start on node1
+ * Resource action: container1 monitor=20000 on node1
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc1 monitor=10000 on node1
+ * Resource action: rsc2 start on node1
+ * Resource action: rsc2 monitor=5000 on node1
+ * Pseudo action: container-group_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Resource Group: container-group:
+ * container1 (ocf:pacemaker:Dummy): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
diff --git a/cts/scheduler/summary/container-group-3.summary b/cts/scheduler/summary/container-group-3.summary
new file mode 100644
index 0000000..0859a23
--- /dev/null
+++ b/cts/scheduler/summary/container-group-3.summary
@@ -0,0 +1,37 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Resource Group: container-group:
+ * container1 (ocf:pacemaker:Dummy): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): FAILED (failure ignored)
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Restart container1 ( node1 )
+ * Start rsc1 ( node1 )
+ * Start rsc2 ( node1 )
+
+Executing Cluster Transition:
+ * Pseudo action: container-group_stop_0
+ * Resource action: container1 stop on node1
+ * Pseudo action: container-group_stopped_0
+ * Pseudo action: container-group_start_0
+ * Resource action: container1 start on node1
+ * Resource action: container1 monitor=20000 on node1
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 start on node1
+ * Pseudo action: container-group_running_0
+ * Resource action: rsc1 monitor=10000 on node1
+ * Resource action: rsc2 monitor=5000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Resource Group: container-group:
+ * container1 (ocf:pacemaker:Dummy): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node1 (failure ignored)
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
diff --git a/cts/scheduler/summary/container-group-4.summary b/cts/scheduler/summary/container-group-4.summary
new file mode 100644
index 0000000..4ad9f2e
--- /dev/null
+++ b/cts/scheduler/summary/container-group-4.summary
@@ -0,0 +1,39 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Resource Group: container-group:
+ * container1 (ocf:pacemaker:Dummy): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): FAILED node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+
+Transition Summary:
+ * Move container1 ( node1 -> node2 )
+ * Recover rsc1 ( node1 -> node2 )
+ * Move rsc2 ( node1 -> node2 )
+
+Executing Cluster Transition:
+ * Pseudo action: container-group_stop_0
+ * Resource action: rsc2 stop on node1
+ * Resource action: rsc1 stop on node1
+ * Resource action: container1 stop on node1
+ * Pseudo action: container-group_stopped_0
+ * Pseudo action: container-group_start_0
+ * Resource action: container1 start on node2
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 start on node2
+ * Pseudo action: container-group_running_0
+ * Resource action: container1 monitor=20000 on node2
+ * Resource action: rsc1 monitor=10000 on node2
+ * Resource action: rsc2 monitor=5000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Resource Group: container-group:
+ * container1 (ocf:pacemaker:Dummy): Started node2
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/container-is-remote-node.summary b/cts/scheduler/summary/container-is-remote-node.summary
new file mode 100644
index 0000000..c022e89
--- /dev/null
+++ b/cts/scheduler/summary/container-is-remote-node.summary
@@ -0,0 +1,59 @@
+3 of 19 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ lama2 lama3 ]
+ * GuestOnline: [ RNVM1 ]
+
+ * Full List of Resources:
+ * restofencelama2 (stonith:fence_ipmilan): Started lama3
+ * restofencelama3 (stonith:fence_ipmilan): Started lama2
+ * Clone Set: dlm-clone [dlm]:
+ * Started: [ lama2 lama3 ]
+ * Stopped: [ RNVM1 ]
+ * Clone Set: clvmd-clone [clvmd]:
+ * Started: [ lama2 lama3 ]
+ * Stopped: [ RNVM1 ]
+ * Clone Set: gfs2-lv_1_1-clone [gfs2-lv_1_1]:
+ * Started: [ lama2 lama3 ]
+ * Stopped: [ RNVM1 ]
+ * Clone Set: gfs2-lv_1_2-clone [gfs2-lv_1_2] (disabled):
+ * Stopped (disabled): [ lama2 lama3 RNVM1 ]
+ * VM1 (ocf:heartbeat:VirtualDomain): Started lama2
+ * Resource Group: RES1:
+ * FSdata1 (ocf:heartbeat:Filesystem): Started RNVM1
+ * RES1-IP (ocf:heartbeat:IPaddr2): Started RNVM1
+ * res-rsyslog (ocf:heartbeat:rsyslog.test): Started RNVM1
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: dlm monitor on RNVM1
+ * Resource action: clvmd monitor on RNVM1
+ * Resource action: gfs2-lv_1_1 monitor on RNVM1
+ * Resource action: gfs2-lv_1_2 monitor on RNVM1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ lama2 lama3 ]
+ * GuestOnline: [ RNVM1 ]
+
+ * Full List of Resources:
+ * restofencelama2 (stonith:fence_ipmilan): Started lama3
+ * restofencelama3 (stonith:fence_ipmilan): Started lama2
+ * Clone Set: dlm-clone [dlm]:
+ * Started: [ lama2 lama3 ]
+ * Stopped: [ RNVM1 ]
+ * Clone Set: clvmd-clone [clvmd]:
+ * Started: [ lama2 lama3 ]
+ * Stopped: [ RNVM1 ]
+ * Clone Set: gfs2-lv_1_1-clone [gfs2-lv_1_1]:
+ * Started: [ lama2 lama3 ]
+ * Stopped: [ RNVM1 ]
+ * Clone Set: gfs2-lv_1_2-clone [gfs2-lv_1_2] (disabled):
+ * Stopped (disabled): [ lama2 lama3 RNVM1 ]
+ * VM1 (ocf:heartbeat:VirtualDomain): Started lama2
+ * Resource Group: RES1:
+ * FSdata1 (ocf:heartbeat:Filesystem): Started RNVM1
+ * RES1-IP (ocf:heartbeat:IPaddr2): Started RNVM1
+ * res-rsyslog (ocf:heartbeat:rsyslog.test): Started RNVM1
diff --git a/cts/scheduler/summary/date-1.summary b/cts/scheduler/summary/date-1.summary
new file mode 100644
index 0000000..794b3c6
--- /dev/null
+++ b/cts/scheduler/summary/date-1.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc1 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/date-2.summary b/cts/scheduler/summary/date-2.summary
new file mode 100644
index 0000000..3f99a18
--- /dev/null
+++ b/cts/scheduler/summary/date-2.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * OFFLINE: [ router1 router2 ]
+
+ * Full List of Resources:
+ * Resource Group: test:
+ * test_ip (ocf:heartbeat:IPaddr2_vlan): Stopped
+ * test_mailto (ocf:heartbeat:MailTo): Stopped
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * OFFLINE: [ router1 router2 ]
+
+ * Full List of Resources:
+ * Resource Group: test:
+ * test_ip (ocf:heartbeat:IPaddr2_vlan): Stopped
+ * test_mailto (ocf:heartbeat:MailTo): Stopped
diff --git a/cts/scheduler/summary/date-3.summary b/cts/scheduler/summary/date-3.summary
new file mode 100644
index 0000000..3f99a18
--- /dev/null
+++ b/cts/scheduler/summary/date-3.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * OFFLINE: [ router1 router2 ]
+
+ * Full List of Resources:
+ * Resource Group: test:
+ * test_ip (ocf:heartbeat:IPaddr2_vlan): Stopped
+ * test_mailto (ocf:heartbeat:MailTo): Stopped
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * OFFLINE: [ router1 router2 ]
+
+ * Full List of Resources:
+ * Resource Group: test:
+ * test_ip (ocf:heartbeat:IPaddr2_vlan): Stopped
+ * test_mailto (ocf:heartbeat:MailTo): Stopped
diff --git a/cts/scheduler/summary/dc-fence-ordering.summary b/cts/scheduler/summary/dc-fence-ordering.summary
new file mode 100644
index 0000000..0261cad
--- /dev/null
+++ b/cts/scheduler/summary/dc-fence-ordering.summary
@@ -0,0 +1,82 @@
+Using the original execution date of: 2018-11-28 18:37:16Z
+Current cluster status:
+ * Node List:
+ * Node rhel7-1: UNCLEAN (online)
+ * Online: [ rhel7-2 rhel7-4 rhel7-5 ]
+ * OFFLINE: [ rhel7-3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Stopped
+ * FencingPass (stonith:fence_dummy): Stopped
+ * FencingFail (stonith:fence_dummy): Stopped
+ * rsc_rhel7-1 (ocf:heartbeat:IPaddr2): Stopped
+ * rsc_rhel7-2 (ocf:heartbeat:IPaddr2): Stopped
+ * rsc_rhel7-3 (ocf:heartbeat:IPaddr2): Stopped
+ * rsc_rhel7-4 (ocf:heartbeat:IPaddr2): Stopped
+ * rsc_rhel7-5 (ocf:heartbeat:IPaddr2): Stopped
+ * migrator (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: Connectivity [ping-1]:
+ * Stopped: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+ * Clone Set: promotable-1 [stateful-1] (promotable):
+ * Promoted: [ rhel7-1 ]
+ * Unpromoted: [ rhel7-2 rhel7-4 rhel7-5 ]
+ * Stopped: [ rhel7-3 ]
+ * Resource Group: group-1:
+ * r192.168.122.207 (ocf:heartbeat:IPaddr2): Started rhel7-1
+ * petulant (service:pacemaker-cts-dummyd@10): FAILED rhel7-1
+ * r192.168.122.208 (ocf:heartbeat:IPaddr2): Stopped
+ * lsb-dummy (lsb:LSBDummy): Stopped
+
+Transition Summary:
+ * Fence (reboot) rhel7-1 'petulant failed there'
+ * Stop stateful-1:0 ( Unpromoted rhel7-5 ) due to node availability
+ * Stop stateful-1:1 ( Promoted rhel7-1 ) due to node availability
+ * Stop stateful-1:2 ( Unpromoted rhel7-2 ) due to node availability
+ * Stop stateful-1:3 ( Unpromoted rhel7-4 ) due to node availability
+ * Stop r192.168.122.207 ( rhel7-1 ) due to node availability
+ * Stop petulant ( rhel7-1 ) due to node availability
+
+Executing Cluster Transition:
+ * Fencing rhel7-1 (reboot)
+ * Pseudo action: group-1_stop_0
+ * Pseudo action: petulant_stop_0
+ * Pseudo action: r192.168.122.207_stop_0
+ * Pseudo action: group-1_stopped_0
+ * Pseudo action: promotable-1_demote_0
+ * Pseudo action: stateful-1_demote_0
+ * Pseudo action: promotable-1_demoted_0
+ * Pseudo action: promotable-1_stop_0
+ * Resource action: stateful-1 stop on rhel7-5
+ * Pseudo action: stateful-1_stop_0
+ * Resource action: stateful-1 stop on rhel7-2
+ * Resource action: stateful-1 stop on rhel7-4
+ * Pseudo action: promotable-1_stopped_0
+ * Cluster action: do_shutdown on rhel7-5
+ * Cluster action: do_shutdown on rhel7-4
+ * Cluster action: do_shutdown on rhel7-2
+Using the original execution date of: 2018-11-28 18:37:16Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-2 rhel7-4 rhel7-5 ]
+ * OFFLINE: [ rhel7-1 rhel7-3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Stopped
+ * FencingPass (stonith:fence_dummy): Stopped
+ * FencingFail (stonith:fence_dummy): Stopped
+ * rsc_rhel7-1 (ocf:heartbeat:IPaddr2): Stopped
+ * rsc_rhel7-2 (ocf:heartbeat:IPaddr2): Stopped
+ * rsc_rhel7-3 (ocf:heartbeat:IPaddr2): Stopped
+ * rsc_rhel7-4 (ocf:heartbeat:IPaddr2): Stopped
+ * rsc_rhel7-5 (ocf:heartbeat:IPaddr2): Stopped
+ * migrator (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: Connectivity [ping-1]:
+ * Stopped: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+ * Clone Set: promotable-1 [stateful-1] (promotable):
+ * Stopped: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+ * Resource Group: group-1:
+ * r192.168.122.207 (ocf:heartbeat:IPaddr2): Stopped
+ * petulant (service:pacemaker-cts-dummyd@10): Stopped
+ * r192.168.122.208 (ocf:heartbeat:IPaddr2): Stopped
+ * lsb-dummy (lsb:LSBDummy): Stopped
diff --git a/cts/scheduler/summary/enforce-colo1.summary b/cts/scheduler/summary/enforce-colo1.summary
new file mode 100644
index 0000000..5bc9aa0
--- /dev/null
+++ b/cts/scheduler/summary/enforce-colo1.summary
@@ -0,0 +1,39 @@
+3 of 6 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto2
+ * engine (ocf:heartbeat:Dummy): Started rhel7-auto3
+ * Clone Set: keystone-clone [keystone] (disabled):
+ * Started: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
+ * central (ocf:heartbeat:Dummy): Started rhel7-auto3
+
+Transition Summary:
+ * Stop engine ( rhel7-auto3 ) due to colocation with central
+ * Stop keystone:0 ( rhel7-auto2 ) due to node availability
+ * Stop keystone:1 ( rhel7-auto3 ) due to node availability
+ * Stop keystone:2 ( rhel7-auto1 ) due to node availability
+ * Stop central ( rhel7-auto3 ) due to unrunnable keystone-clone running
+
+Executing Cluster Transition:
+ * Resource action: engine stop on rhel7-auto3
+ * Resource action: central stop on rhel7-auto3
+ * Pseudo action: keystone-clone_stop_0
+ * Resource action: keystone stop on rhel7-auto2
+ * Resource action: keystone stop on rhel7-auto3
+ * Resource action: keystone stop on rhel7-auto1
+ * Pseudo action: keystone-clone_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto2
+ * engine (ocf:heartbeat:Dummy): Stopped
+ * Clone Set: keystone-clone [keystone] (disabled):
+ * Stopped (disabled): [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
+ * central (ocf:heartbeat:Dummy): Stopped
diff --git a/cts/scheduler/summary/expire-non-blocked-failure.summary b/cts/scheduler/summary/expire-non-blocked-failure.summary
new file mode 100644
index 0000000..0ca6c54
--- /dev/null
+++ b/cts/scheduler/summary/expire-non-blocked-failure.summary
@@ -0,0 +1,24 @@
+0 of 3 resource instances DISABLED and 1 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): FAILED node2 (blocked)
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Cluster action: clear_failcount for rsc2 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): FAILED node2 (blocked)
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
diff --git a/cts/scheduler/summary/expired-failed-probe-primitive.summary b/cts/scheduler/summary/expired-failed-probe-primitive.summary
new file mode 100644
index 0000000..ac0604e
--- /dev/null
+++ b/cts/scheduler/summary/expired-failed-probe-primitive.summary
@@ -0,0 +1,26 @@
+Current cluster status:
+ * Node List:
+ * Online: [ cluster01 cluster02 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started cluster01
+ * dummy-1 (ocf:pacemaker:Dummy): Stopped
+ * dummy-2 (ocf:pacemaker:Dummy): Started cluster02
+
+Transition Summary:
+ * Start dummy-1 ( cluster02 )
+
+Executing Cluster Transition:
+ * Resource action: dummy-1 monitor on cluster02
+ * Resource action: dummy-1 monitor on cluster01
+ * Resource action: dummy-2 monitor on cluster01
+ * Resource action: dummy-1 start on cluster02
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ cluster01 cluster02 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started cluster01
+ * dummy-1 (ocf:pacemaker:Dummy): Started cluster02
+ * dummy-2 (ocf:pacemaker:Dummy): Started cluster02
diff --git a/cts/scheduler/summary/expired-stop-1.summary b/cts/scheduler/summary/expired-stop-1.summary
new file mode 100644
index 0000000..9e94257
--- /dev/null
+++ b/cts/scheduler/summary/expired-stop-1.summary
@@ -0,0 +1,22 @@
+1 of 1 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 node4 node5 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2 (disabled)
+
+Transition Summary:
+ * Stop rsc1 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node2
+ * Cluster action: clear_failcount for rsc1 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 node4 node5 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped (disabled)
diff --git a/cts/scheduler/summary/failcount-block.summary b/cts/scheduler/summary/failcount-block.summary
new file mode 100644
index 0000000..646f76b
--- /dev/null
+++ b/cts/scheduler/summary/failcount-block.summary
@@ -0,0 +1,39 @@
+0 of 5 resource instances DISABLED and 1 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ pcmk-1 ]
+ * OFFLINE: [ pcmk-4 ]
+
+ * Full List of Resources:
+ * rsc_pcmk-1 (ocf:heartbeat:IPaddr2): Started pcmk-1
+ * rsc_pcmk-2 (ocf:heartbeat:IPaddr2): FAILED pcmk-1 (blocked)
+ * rsc_pcmk-3 (ocf:heartbeat:IPaddr2): Stopped
+ * rsc_pcmk-4 (ocf:heartbeat:IPaddr2): Stopped
+ * rsc_pcmk-5 (ocf:heartbeat:IPaddr2): Started pcmk-1
+
+Transition Summary:
+ * Start rsc_pcmk-3 ( pcmk-1 )
+ * Start rsc_pcmk-4 ( pcmk-1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc_pcmk-1 monitor=5000 on pcmk-1
+ * Cluster action: clear_failcount for rsc_pcmk-1 on pcmk-1
+ * Resource action: rsc_pcmk-3 start on pcmk-1
+ * Cluster action: clear_failcount for rsc_pcmk-3 on pcmk-1
+ * Resource action: rsc_pcmk-4 start on pcmk-1
+ * Cluster action: clear_failcount for rsc_pcmk-5 on pcmk-1
+ * Resource action: rsc_pcmk-3 monitor=5000 on pcmk-1
+ * Resource action: rsc_pcmk-4 monitor=5000 on pcmk-1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ pcmk-1 ]
+ * OFFLINE: [ pcmk-4 ]
+
+ * Full List of Resources:
+ * rsc_pcmk-1 (ocf:heartbeat:IPaddr2): Started pcmk-1
+ * rsc_pcmk-2 (ocf:heartbeat:IPaddr2): FAILED pcmk-1 (blocked)
+ * rsc_pcmk-3 (ocf:heartbeat:IPaddr2): Started pcmk-1
+ * rsc_pcmk-4 (ocf:heartbeat:IPaddr2): Started pcmk-1
+ * rsc_pcmk-5 (ocf:heartbeat:IPaddr2): Started pcmk-1
diff --git a/cts/scheduler/summary/failcount.summary b/cts/scheduler/summary/failcount.summary
new file mode 100644
index 0000000..02268c3
--- /dev/null
+++ b/cts/scheduler/summary/failcount.summary
@@ -0,0 +1,63 @@
+Current cluster status:
+ * Node List:
+ * Online: [ dresproddns01 dresproddns02 ]
+
+ * Full List of Resources:
+ * Clone Set: cl-openfire [gr-openfire]:
+ * Started: [ dresproddns01 dresproddns02 ]
+ * Clone Set: cl-named [gr-named]:
+ * Started: [ dresproddns01 dresproddns02 ]
+ * re-monitor (ocf:pacemaker:ClusterMon): Started dresproddns02
+ * Clone Set: cl-ping-sjim [re-ping-sjim]:
+ * Started: [ dresproddns01 dresproddns02 ]
+ * Clone Set: cl-wesbprod [gr-wesbprod]:
+ * Started: [ dresproddns01 dresproddns02 ]
+ * Clone Set: cl-svn [gr-svn]:
+ * Started: [ dresproddns01 dresproddns02 ]
+ * Clone Set: cl-ldirectord [re-ldirectord-gluster]:
+ * Started: [ dresproddns01 dresproddns02 ]
+ * re-drdns-ip (ocf:heartbeat:IPaddr): Started dresproddns01
+ * Clone Set: cl-cdeprod [gr-cdeprod]:
+ * Started: [ dresproddns01 dresproddns02 ]
+ * Clone Set: cl-sysinfo [re-sysinfo]:
+ * Started: [ dresproddns01 dresproddns02 ]
+ * Clone Set: cl-haproxy [gr-haproxy]:
+ * Started: [ dresproddns01 dresproddns02 ]
+ * Clone Set: cl-ping-sjho [re-ping-sjho]:
+ * Started: [ dresproddns01 dresproddns02 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Cluster action: clear_failcount for re-openfire-lsb on dresproddns01
+ * Cluster action: clear_failcount for re-openfire-lsb on dresproddns02
+ * Resource action: re-named-lsb:1 monitor=10000 on dresproddns01
+ * Resource action: re-named-lsb:0 monitor=10000 on dresproddns02
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ dresproddns01 dresproddns02 ]
+
+ * Full List of Resources:
+ * Clone Set: cl-openfire [gr-openfire]:
+ * Started: [ dresproddns01 dresproddns02 ]
+ * Clone Set: cl-named [gr-named]:
+ * Started: [ dresproddns01 dresproddns02 ]
+ * re-monitor (ocf:pacemaker:ClusterMon): Started dresproddns02
+ * Clone Set: cl-ping-sjim [re-ping-sjim]:
+ * Started: [ dresproddns01 dresproddns02 ]
+ * Clone Set: cl-wesbprod [gr-wesbprod]:
+ * Started: [ dresproddns01 dresproddns02 ]
+ * Clone Set: cl-svn [gr-svn]:
+ * Started: [ dresproddns01 dresproddns02 ]
+ * Clone Set: cl-ldirectord [re-ldirectord-gluster]:
+ * Started: [ dresproddns01 dresproddns02 ]
+ * re-drdns-ip (ocf:heartbeat:IPaddr): Started dresproddns01
+ * Clone Set: cl-cdeprod [gr-cdeprod]:
+ * Started: [ dresproddns01 dresproddns02 ]
+ * Clone Set: cl-sysinfo [re-sysinfo]:
+ * Started: [ dresproddns01 dresproddns02 ]
+ * Clone Set: cl-haproxy [gr-haproxy]:
+ * Started: [ dresproddns01 dresproddns02 ]
+ * Clone Set: cl-ping-sjho [re-ping-sjho]:
+ * Started: [ dresproddns01 dresproddns02 ]
diff --git a/cts/scheduler/summary/failed-demote-recovery-promoted.summary b/cts/scheduler/summary/failed-demote-recovery-promoted.summary
new file mode 100644
index 0000000..2d11c46
--- /dev/null
+++ b/cts/scheduler/summary/failed-demote-recovery-promoted.summary
@@ -0,0 +1,60 @@
+Using the original execution date of: 2017-11-30 12:37:50Z
+Current cluster status:
+ * Node List:
+ * Online: [ fastvm-rhel-7-4-95 fastvm-rhel-7-4-96 ]
+
+ * Full List of Resources:
+ * fence-fastvm-rhel-7-4-95 (stonith:fence_xvm): Started fastvm-rhel-7-4-96
+ * fence-fastvm-rhel-7-4-96 (stonith:fence_xvm): Started fastvm-rhel-7-4-95
+ * Clone Set: DB2_HADR-master [DB2_HADR] (promotable):
+ * DB2_HADR (ocf:heartbeat:db2): FAILED fastvm-rhel-7-4-96
+ * Unpromoted: [ fastvm-rhel-7-4-95 ]
+
+Transition Summary:
+ * Recover DB2_HADR:1 ( Unpromoted -> Promoted fastvm-rhel-7-4-96 )
+
+Executing Cluster Transition:
+ * Pseudo action: DB2_HADR-master_pre_notify_stop_0
+ * Resource action: DB2_HADR notify on fastvm-rhel-7-4-95
+ * Resource action: DB2_HADR notify on fastvm-rhel-7-4-96
+ * Pseudo action: DB2_HADR-master_confirmed-pre_notify_stop_0
+ * Pseudo action: DB2_HADR-master_stop_0
+ * Resource action: DB2_HADR stop on fastvm-rhel-7-4-96
+ * Pseudo action: DB2_HADR-master_stopped_0
+ * Pseudo action: DB2_HADR-master_post_notify_stopped_0
+ * Resource action: DB2_HADR notify on fastvm-rhel-7-4-95
+ * Pseudo action: DB2_HADR-master_confirmed-post_notify_stopped_0
+ * Pseudo action: DB2_HADR-master_pre_notify_start_0
+ * Resource action: DB2_HADR notify on fastvm-rhel-7-4-95
+ * Pseudo action: DB2_HADR-master_confirmed-pre_notify_start_0
+ * Pseudo action: DB2_HADR-master_start_0
+ * Resource action: DB2_HADR start on fastvm-rhel-7-4-96
+ * Pseudo action: DB2_HADR-master_running_0
+ * Pseudo action: DB2_HADR-master_post_notify_running_0
+ * Resource action: DB2_HADR notify on fastvm-rhel-7-4-95
+ * Resource action: DB2_HADR notify on fastvm-rhel-7-4-96
+ * Pseudo action: DB2_HADR-master_confirmed-post_notify_running_0
+ * Pseudo action: DB2_HADR-master_pre_notify_promote_0
+ * Resource action: DB2_HADR notify on fastvm-rhel-7-4-95
+ * Resource action: DB2_HADR notify on fastvm-rhel-7-4-96
+ * Pseudo action: DB2_HADR-master_confirmed-pre_notify_promote_0
+ * Pseudo action: DB2_HADR-master_promote_0
+ * Resource action: DB2_HADR promote on fastvm-rhel-7-4-96
+ * Pseudo action: DB2_HADR-master_promoted_0
+ * Pseudo action: DB2_HADR-master_post_notify_promoted_0
+ * Resource action: DB2_HADR notify on fastvm-rhel-7-4-95
+ * Resource action: DB2_HADR notify on fastvm-rhel-7-4-96
+ * Pseudo action: DB2_HADR-master_confirmed-post_notify_promoted_0
+ * Resource action: DB2_HADR monitor=22000 on fastvm-rhel-7-4-96
+Using the original execution date of: 2017-11-30 12:37:50Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fastvm-rhel-7-4-95 fastvm-rhel-7-4-96 ]
+
+ * Full List of Resources:
+ * fence-fastvm-rhel-7-4-95 (stonith:fence_xvm): Started fastvm-rhel-7-4-96
+ * fence-fastvm-rhel-7-4-96 (stonith:fence_xvm): Started fastvm-rhel-7-4-95
+ * Clone Set: DB2_HADR-master [DB2_HADR] (promotable):
+ * Promoted: [ fastvm-rhel-7-4-96 ]
+ * Unpromoted: [ fastvm-rhel-7-4-95 ]
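
The fixture above also documents the clone-wide notification sequencing for a promotable-clone recovery: the pre/post notify pseudo actions bracket each of the stop, start, and promote phases in that order. A small sketch of how one might assert that ordering from the summary text, assuming the in-tree path; it matches pseudo action names exactly, and is an illustration rather than part of the CTS harness.

    from pathlib import Path

    summary = Path("cts/scheduler/summary/failed-demote-recovery-promoted.summary")
    # Clone-wide pseudo actions, in transition order
    actions = [line.split("Pseudo action:", 1)[1].strip()
               for line in summary.read_text().splitlines()
               if "Pseudo action:" in line]

    def first(phase: str) -> int:
        """Index of the named clone-wide pseudo action for this clone."""
        return actions.index(f"DB2_HADR-master_{phase}_0")

    # Stop must complete before start, and promote must come last.
    assert first("stop") < first("start") < first("promote")
    print("phase ordering OK")
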
diff --git a/cts/scheduler/summary/failed-demote-recovery.summary b/cts/scheduler/summary/failed-demote-recovery.summary
new file mode 100644
index 0000000..8c91259
--- /dev/null
+++ b/cts/scheduler/summary/failed-demote-recovery.summary
@@ -0,0 +1,48 @@
+Using the original execution date of: 2017-11-30 12:37:50Z
+Current cluster status:
+ * Node List:
+ * Online: [ fastvm-rhel-7-4-95 fastvm-rhel-7-4-96 ]
+
+ * Full List of Resources:
+ * fence-fastvm-rhel-7-4-95 (stonith:fence_xvm): Started fastvm-rhel-7-4-96
+ * fence-fastvm-rhel-7-4-96 (stonith:fence_xvm): Started fastvm-rhel-7-4-95
+ * Clone Set: DB2_HADR-master [DB2_HADR] (promotable):
+ * DB2_HADR (ocf:heartbeat:db2): FAILED fastvm-rhel-7-4-96
+ * Unpromoted: [ fastvm-rhel-7-4-95 ]
+
+Transition Summary:
+ * Recover DB2_HADR:1 ( Unpromoted fastvm-rhel-7-4-96 )
+
+Executing Cluster Transition:
+ * Pseudo action: DB2_HADR-master_pre_notify_stop_0
+ * Resource action: DB2_HADR notify on fastvm-rhel-7-4-95
+ * Resource action: DB2_HADR notify on fastvm-rhel-7-4-96
+ * Pseudo action: DB2_HADR-master_confirmed-pre_notify_stop_0
+ * Pseudo action: DB2_HADR-master_stop_0
+ * Resource action: DB2_HADR stop on fastvm-rhel-7-4-96
+ * Pseudo action: DB2_HADR-master_stopped_0
+ * Pseudo action: DB2_HADR-master_post_notify_stopped_0
+ * Resource action: DB2_HADR notify on fastvm-rhel-7-4-95
+ * Pseudo action: DB2_HADR-master_confirmed-post_notify_stopped_0
+ * Pseudo action: DB2_HADR-master_pre_notify_start_0
+ * Resource action: DB2_HADR notify on fastvm-rhel-7-4-95
+ * Pseudo action: DB2_HADR-master_confirmed-pre_notify_start_0
+ * Pseudo action: DB2_HADR-master_start_0
+ * Resource action: DB2_HADR start on fastvm-rhel-7-4-96
+ * Pseudo action: DB2_HADR-master_running_0
+ * Pseudo action: DB2_HADR-master_post_notify_running_0
+ * Resource action: DB2_HADR notify on fastvm-rhel-7-4-95
+ * Resource action: DB2_HADR notify on fastvm-rhel-7-4-96
+ * Pseudo action: DB2_HADR-master_confirmed-post_notify_running_0
+ * Resource action: DB2_HADR monitor=5000 on fastvm-rhel-7-4-96
+Using the original execution date of: 2017-11-30 12:37:50Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fastvm-rhel-7-4-95 fastvm-rhel-7-4-96 ]
+
+ * Full List of Resources:
+ * fence-fastvm-rhel-7-4-95 (stonith:fence_xvm): Started fastvm-rhel-7-4-96
+ * fence-fastvm-rhel-7-4-96 (stonith:fence_xvm): Started fastvm-rhel-7-4-95
+ * Clone Set: DB2_HADR-master [DB2_HADR] (promotable):
+ * Unpromoted: [ fastvm-rhel-7-4-95 fastvm-rhel-7-4-96 ]
diff --git a/cts/scheduler/summary/failed-probe-clone.summary b/cts/scheduler/summary/failed-probe-clone.summary
new file mode 100644
index 0000000..febee14
--- /dev/null
+++ b/cts/scheduler/summary/failed-probe-clone.summary
@@ -0,0 +1,48 @@
+Current cluster status:
+ * Node List:
+ * Online: [ cluster01 cluster02 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started cluster01
+ * Clone Set: ping-1-clone [ping-1]:
+ * Stopped (not installed): [ cluster01 cluster02 ]
+ * Clone Set: ping-2-clone [ping-2]:
+ * Stopped: [ cluster02 ]
+ * Stopped (not installed): [ cluster01 ]
+ * Clone Set: ping-3-clone [ping-3]:
+ * ping-3 (ocf:pacemaker:ping): FAILED cluster01
+ * Stopped (not installed): [ cluster02 ]
+
+Transition Summary:
+ * Start ping-2:0 ( cluster02 )
+ * Stop ping-3:0 ( cluster01 ) due to node availability
+
+Executing Cluster Transition:
+ * Cluster action: clear_failcount for ping-1 on cluster02
+ * Cluster action: clear_failcount for ping-1 on cluster01
+ * Cluster action: clear_failcount for ping-2 on cluster02
+ * Cluster action: clear_failcount for ping-2 on cluster01
+ * Pseudo action: ping-2-clone_start_0
+ * Cluster action: clear_failcount for ping-3 on cluster01
+ * Cluster action: clear_failcount for ping-3 on cluster02
+ * Pseudo action: ping-3-clone_stop_0
+ * Resource action: ping-2 start on cluster02
+ * Pseudo action: ping-2-clone_running_0
+ * Resource action: ping-3 stop on cluster01
+ * Pseudo action: ping-3-clone_stopped_0
+ * Resource action: ping-2 monitor=10000 on cluster02
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ cluster01 cluster02 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started cluster01
+ * Clone Set: ping-1-clone [ping-1]:
+ * Stopped (not installed): [ cluster01 cluster02 ]
+ * Clone Set: ping-2-clone [ping-2]:
+ * Started: [ cluster02 ]
+ * Stopped (not installed): [ cluster01 ]
+ * Clone Set: ping-3-clone [ping-3]:
+ * Stopped: [ cluster01 ]
+ * Stopped (not installed): [ cluster02 ]
diff --git a/cts/scheduler/summary/failed-probe-primitive.summary b/cts/scheduler/summary/failed-probe-primitive.summary
new file mode 100644
index 0000000..ea8edae
--- /dev/null
+++ b/cts/scheduler/summary/failed-probe-primitive.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ cluster01 cluster02 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started cluster01
+ * dummy-1 (ocf:pacemaker:Dummy): Stopped (not installed)
+ * dummy-2 (ocf:pacemaker:Dummy): Stopped (not installed)
+ * dummy-3 (ocf:pacemaker:Dummy): FAILED cluster01
+
+Transition Summary:
+ * Start dummy-2 ( cluster02 )
+ * Stop dummy-3 ( cluster01 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: dummy-2 start on cluster02
+ * Resource action: dummy-3 stop on cluster01
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ cluster01 cluster02 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started cluster01
+ * dummy-1 (ocf:pacemaker:Dummy): Stopped (not installed)
+ * dummy-2 (ocf:pacemaker:Dummy): Started cluster02
+ * dummy-3 (ocf:pacemaker:Dummy): Stopped (not installed)
diff --git a/cts/scheduler/summary/failed-sticky-anticolocated-group.summary b/cts/scheduler/summary/failed-sticky-anticolocated-group.summary
new file mode 100644
index 0000000..3ecb056
--- /dev/null
+++ b/cts/scheduler/summary/failed-sticky-anticolocated-group.summary
@@ -0,0 +1,41 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * Resource Group: group1:
+ * member1a (ocf:pacemaker:Dummy): Started node2
+ * member1b (ocf:pacemaker:Dummy): Started node2
+ * Resource Group: group2:
+ * member2a (ocf:pacemaker:Dummy): Started node1
+ * member2b (ocf:pacemaker:Dummy): FAILED node1
+
+Transition Summary:
+ * Move member2a ( node1 -> node2 )
+ * Recover member2b ( node1 -> node2 )
+
+Executing Cluster Transition:
+ * Pseudo action: group2_stop_0
+ * Resource action: member2b stop on node1
+ * Resource action: member2a stop on node1
+ * Pseudo action: group2_stopped_0
+ * Pseudo action: group2_start_0
+ * Resource action: member2a start on node2
+ * Resource action: member2b start on node2
+ * Pseudo action: group2_running_0
+ * Resource action: member2a monitor=10000 on node2
+ * Resource action: member2b monitor=10000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * Resource Group: group1:
+ * member1a (ocf:pacemaker:Dummy): Started node2
+ * member1b (ocf:pacemaker:Dummy): Started node2
+ * Resource Group: group2:
+ * member2a (ocf:pacemaker:Dummy): Started node2
+ * member2b (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/failed-sticky-group.summary b/cts/scheduler/summary/failed-sticky-group.summary
new file mode 100644
index 0000000..2114be7
--- /dev/null
+++ b/cts/scheduler/summary/failed-sticky-group.summary
@@ -0,0 +1,90 @@
+Current cluster status:
+ * Node List:
+ * Online: [ act1 act2 act3 sby1 sby2 ]
+
+ * Full List of Resources:
+ * Resource Group: grpPostgreSQLDB1:
+ * prmExPostgreSQLDB1 (ocf:pacemaker:Dummy): Started act1
+ * prmFsPostgreSQLDB1-1 (ocf:pacemaker:Dummy): Started act1
+ * prmFsPostgreSQLDB1-2 (ocf:pacemaker:Dummy): Started act1
+ * prmFsPostgreSQLDB1-3 (ocf:pacemaker:Dummy): Started act1
+ * prmIpPostgreSQLDB1 (ocf:pacemaker:Dummy): Started act1
+ * prmApPostgreSQLDB1 (ocf:pacemaker:Dummy): FAILED act1
+ * Resource Group: grpPostgreSQLDB2:
+ * prmExPostgreSQLDB2 (ocf:pacemaker:Dummy): Started act2
+ * prmFsPostgreSQLDB2-1 (ocf:pacemaker:Dummy): Started act2
+ * prmFsPostgreSQLDB2-2 (ocf:pacemaker:Dummy): Started act2
+ * prmFsPostgreSQLDB2-3 (ocf:pacemaker:Dummy): Started act2
+ * prmIpPostgreSQLDB2 (ocf:pacemaker:Dummy): Started act2
+ * prmApPostgreSQLDB2 (ocf:pacemaker:Dummy): Started act2
+ * Resource Group: grpPostgreSQLDB3:
+ * prmExPostgreSQLDB3 (ocf:pacemaker:Dummy): Started act3
+ * prmFsPostgreSQLDB3-1 (ocf:pacemaker:Dummy): Started act3
+ * prmFsPostgreSQLDB3-2 (ocf:pacemaker:Dummy): Started act3
+ * prmFsPostgreSQLDB3-3 (ocf:pacemaker:Dummy): Started act3
+ * prmIpPostgreSQLDB3 (ocf:pacemaker:Dummy): Started act3
+ * prmApPostgreSQLDB3 (ocf:pacemaker:Dummy): Started act3
+
+Transition Summary:
+ * Move prmExPostgreSQLDB1 ( act1 -> sby1 )
+ * Move prmFsPostgreSQLDB1-1 ( act1 -> sby1 )
+ * Move prmFsPostgreSQLDB1-2 ( act1 -> sby1 )
+ * Move prmFsPostgreSQLDB1-3 ( act1 -> sby1 )
+ * Move prmIpPostgreSQLDB1 ( act1 -> sby1 )
+ * Recover prmApPostgreSQLDB1 ( act1 -> sby1 )
+
+Executing Cluster Transition:
+ * Pseudo action: grpPostgreSQLDB1_stop_0
+ * Resource action: prmApPostgreSQLDB1 stop on act1
+ * Pseudo action: load_stopped_sby2
+ * Pseudo action: load_stopped_sby1
+ * Pseudo action: load_stopped_act3
+ * Pseudo action: load_stopped_act2
+ * Resource action: prmIpPostgreSQLDB1 stop on act1
+ * Resource action: prmFsPostgreSQLDB1-3 stop on act1
+ * Resource action: prmFsPostgreSQLDB1-2 stop on act1
+ * Resource action: prmFsPostgreSQLDB1-1 stop on act1
+ * Resource action: prmExPostgreSQLDB1 stop on act1
+ * Pseudo action: load_stopped_act1
+ * Pseudo action: grpPostgreSQLDB1_stopped_0
+ * Pseudo action: grpPostgreSQLDB1_start_0
+ * Resource action: prmExPostgreSQLDB1 start on sby1
+ * Resource action: prmFsPostgreSQLDB1-1 start on sby1
+ * Resource action: prmFsPostgreSQLDB1-2 start on sby1
+ * Resource action: prmFsPostgreSQLDB1-3 start on sby1
+ * Resource action: prmIpPostgreSQLDB1 start on sby1
+ * Resource action: prmApPostgreSQLDB1 start on sby1
+ * Pseudo action: grpPostgreSQLDB1_running_0
+ * Resource action: prmExPostgreSQLDB1 monitor=5000 on sby1
+ * Resource action: prmFsPostgreSQLDB1-1 monitor=5000 on sby1
+ * Resource action: prmFsPostgreSQLDB1-2 monitor=5000 on sby1
+ * Resource action: prmFsPostgreSQLDB1-3 monitor=5000 on sby1
+ * Resource action: prmIpPostgreSQLDB1 monitor=5000 on sby1
+ * Resource action: prmApPostgreSQLDB1 monitor=5000 on sby1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ act1 act2 act3 sby1 sby2 ]
+
+ * Full List of Resources:
+ * Resource Group: grpPostgreSQLDB1:
+ * prmExPostgreSQLDB1 (ocf:pacemaker:Dummy): Started sby1
+ * prmFsPostgreSQLDB1-1 (ocf:pacemaker:Dummy): Started sby1
+ * prmFsPostgreSQLDB1-2 (ocf:pacemaker:Dummy): Started sby1
+ * prmFsPostgreSQLDB1-3 (ocf:pacemaker:Dummy): Started sby1
+ * prmIpPostgreSQLDB1 (ocf:pacemaker:Dummy): Started sby1
+ * prmApPostgreSQLDB1 (ocf:pacemaker:Dummy): Started sby1
+ * Resource Group: grpPostgreSQLDB2:
+ * prmExPostgreSQLDB2 (ocf:pacemaker:Dummy): Started act2
+ * prmFsPostgreSQLDB2-1 (ocf:pacemaker:Dummy): Started act2
+ * prmFsPostgreSQLDB2-2 (ocf:pacemaker:Dummy): Started act2
+ * prmFsPostgreSQLDB2-3 (ocf:pacemaker:Dummy): Started act2
+ * prmIpPostgreSQLDB2 (ocf:pacemaker:Dummy): Started act2
+ * prmApPostgreSQLDB2 (ocf:pacemaker:Dummy): Started act2
+ * Resource Group: grpPostgreSQLDB3:
+ * prmExPostgreSQLDB3 (ocf:pacemaker:Dummy): Started act3
+ * prmFsPostgreSQLDB3-1 (ocf:pacemaker:Dummy): Started act3
+ * prmFsPostgreSQLDB3-2 (ocf:pacemaker:Dummy): Started act3
+ * prmFsPostgreSQLDB3-3 (ocf:pacemaker:Dummy): Started act3
+ * prmIpPostgreSQLDB3 (ocf:pacemaker:Dummy): Started act3
+ * prmApPostgreSQLDB3 (ocf:pacemaker:Dummy): Started act3
diff --git a/cts/scheduler/summary/force-anon-clone-max.summary b/cts/scheduler/summary/force-anon-clone-max.summary
new file mode 100644
index 0000000..d2320e9
--- /dev/null
+++ b/cts/scheduler/summary/force-anon-clone-max.summary
@@ -0,0 +1,74 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_imaginary): Stopped
+ * Clone Set: clone1 [lsb1]:
+ * Stopped: [ node1 node2 node3 ]
+ * Clone Set: clone2 [lsb2]:
+ * Stopped: [ node1 node2 node3 ]
+ * Clone Set: clone3 [group1]:
+ * Stopped: [ node1 node2 node3 ]
+
+Transition Summary:
+ * Start Fencing ( node1 )
+ * Start lsb1:0 ( node2 )
+ * Start lsb1:1 ( node3 )
+ * Start lsb2:0 ( node1 )
+ * Start lsb2:1 ( node2 )
+ * Start lsb2:2 ( node3 )
+ * Start dummy1:0 ( node1 )
+ * Start dummy2:0 ( node1 )
+ * Start lsb3:0 ( node1 )
+ * Start dummy1:1 ( node2 )
+ * Start dummy2:1 ( node2 )
+ * Start lsb3:1 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: Fencing start on node1
+ * Pseudo action: clone1_start_0
+ * Pseudo action: clone2_start_0
+ * Pseudo action: clone3_start_0
+ * Resource action: lsb1:0 start on node2
+ * Resource action: lsb1:1 start on node3
+ * Pseudo action: clone1_running_0
+ * Resource action: lsb2:0 start on node1
+ * Resource action: lsb2:1 start on node2
+ * Resource action: lsb2:2 start on node3
+ * Pseudo action: clone2_running_0
+ * Pseudo action: group1:0_start_0
+ * Resource action: dummy1:0 start on node1
+ * Resource action: dummy2:0 start on node1
+ * Resource action: lsb3:0 start on node1
+ * Pseudo action: group1:1_start_0
+ * Resource action: dummy1:1 start on node2
+ * Resource action: dummy2:1 start on node2
+ * Resource action: lsb3:1 start on node2
+ * Resource action: lsb1:0 monitor=5000 on node2
+ * Resource action: lsb1:1 monitor=5000 on node3
+ * Resource action: lsb2:0 monitor=5000 on node1
+ * Resource action: lsb2:1 monitor=5000 on node2
+ * Resource action: lsb2:2 monitor=5000 on node3
+ * Pseudo action: group1:0_running_0
+ * Resource action: dummy1:0 monitor=5000 on node1
+ * Resource action: dummy2:0 monitor=5000 on node1
+ * Resource action: lsb3:0 monitor=5000 on node1
+ * Pseudo action: group1:1_running_0
+ * Resource action: dummy1:1 monitor=5000 on node2
+ * Resource action: dummy2:1 monitor=5000 on node2
+ * Resource action: lsb3:1 monitor=5000 on node2
+ * Pseudo action: clone3_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_imaginary): Started node1
+ * Clone Set: clone1 [lsb1]:
+ * Started: [ node2 node3 ]
+ * Clone Set: clone2 [lsb2]:
+ * Started: [ node1 node2 node3 ]
+ * Clone Set: clone3 [group1]:
+ * Started: [ node1 node2 ]
diff --git a/cts/scheduler/summary/group-anticolocation.summary b/cts/scheduler/summary/group-anticolocation.summary
new file mode 100644
index 0000000..3ecb056
--- /dev/null
+++ b/cts/scheduler/summary/group-anticolocation.summary
@@ -0,0 +1,41 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * Resource Group: group1:
+ * member1a (ocf:pacemaker:Dummy): Started node2
+ * member1b (ocf:pacemaker:Dummy): Started node2
+ * Resource Group: group2:
+ * member2a (ocf:pacemaker:Dummy): Started node1
+ * member2b (ocf:pacemaker:Dummy): FAILED node1
+
+Transition Summary:
+ * Move member2a ( node1 -> node2 )
+ * Recover member2b ( node1 -> node2 )
+
+Executing Cluster Transition:
+ * Pseudo action: group2_stop_0
+ * Resource action: member2b stop on node1
+ * Resource action: member2a stop on node1
+ * Pseudo action: group2_stopped_0
+ * Pseudo action: group2_start_0
+ * Resource action: member2a start on node2
+ * Resource action: member2b start on node2
+ * Pseudo action: group2_running_0
+ * Resource action: member2a monitor=10000 on node2
+ * Resource action: member2b monitor=10000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * Resource Group: group1:
+ * member1a (ocf:pacemaker:Dummy): Started node2
+ * member1b (ocf:pacemaker:Dummy): Started node2
+ * Resource Group: group2:
+ * member2a (ocf:pacemaker:Dummy): Started node2
+ * member2b (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/group-colocation-failure.summary b/cts/scheduler/summary/group-colocation-failure.summary
new file mode 100644
index 0000000..fed71c8
--- /dev/null
+++ b/cts/scheduler/summary/group-colocation-failure.summary
@@ -0,0 +1,47 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * Resource Group: group1:
+ * member1a (ocf:pacemaker:Dummy): Started node2
+ * member1b (ocf:pacemaker:Dummy): Started node2
+ * Resource Group: group2:
+ * member2a (ocf:pacemaker:Dummy): FAILED node2
+
+Transition Summary:
+ * Move member1a ( node2 -> node1 )
+ * Move member1b ( node2 -> node1 )
+ * Recover member2a ( node2 -> node1 )
+
+Executing Cluster Transition:
+ * Pseudo action: group1_stop_0
+ * Resource action: member1b stop on node2
+ * Pseudo action: group2_stop_0
+ * Resource action: member2a stop on node2
+ * Resource action: member1a stop on node2
+ * Pseudo action: group2_stopped_0
+ * Pseudo action: group2_start_0
+ * Resource action: member2a start on node1
+ * Pseudo action: group1_stopped_0
+ * Pseudo action: group1_start_0
+ * Resource action: member1a start on node1
+ * Resource action: member1b start on node1
+ * Pseudo action: group2_running_0
+ * Resource action: member2a monitor=10000 on node1
+ * Pseudo action: group1_running_0
+ * Resource action: member1a monitor=10000 on node1
+ * Resource action: member1b monitor=10000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * Resource Group: group1:
+ * member1a (ocf:pacemaker:Dummy): Started node1
+ * member1b (ocf:pacemaker:Dummy): Started node1
+ * Resource Group: group2:
+ * member2a (ocf:pacemaker:Dummy): Started node1
diff --git a/cts/scheduler/summary/group-dependents.summary b/cts/scheduler/summary/group-dependents.summary
new file mode 100644
index 0000000..3365255
--- /dev/null
+++ b/cts/scheduler/summary/group-dependents.summary
@@ -0,0 +1,196 @@
+Current cluster status:
+ * Node List:
+ * Online: [ asttest1 asttest2 ]
+
+ * Full List of Resources:
+ * Resource Group: voip:
+ * mysqld (lsb:mysql): Started asttest1
+ * dahdi (lsb:dahdi): Started asttest1
+ * fonulator (lsb:fonulator): Stopped
+ * asterisk (lsb:asterisk-11.0.1): Stopped
+ * iax2_mon (lsb:iax2_mon): Stopped
+ * httpd (lsb:apache2): Stopped
+ * tftp (lsb:tftp-srce): Stopped
+ * Resource Group: ip_voip_routes:
+ * ip_voip_route_test1 (ocf:heartbeat:Route): Started asttest1
+ * ip_voip_route_test2 (ocf:heartbeat:Route): Started asttest1
+ * Resource Group: ip_voip_addresses_p:
+ * ip_voip_vlan850 (ocf:heartbeat:IPaddr2): Started asttest1
+ * ip_voip_vlan998 (ocf:heartbeat:IPaddr2): Started asttest1
+ * ip_voip_vlan851 (ocf:heartbeat:IPaddr2): Started asttest1
+ * ip_voip_vlan852 (ocf:heartbeat:IPaddr2): Started asttest1
+ * ip_voip_vlan853 (ocf:heartbeat:IPaddr2): Started asttest1
+ * ip_voip_vlan854 (ocf:heartbeat:IPaddr2): Started asttest1
+ * ip_voip_vlan855 (ocf:heartbeat:IPaddr2): Started asttest1
+ * ip_voip_vlan856 (ocf:heartbeat:IPaddr2): Started asttest1
+ * Clone Set: cl_route [ip_voip_route_default]:
+ * Started: [ asttest1 asttest2 ]
+ * fs_drbd (ocf:heartbeat:Filesystem): Started asttest1
+ * Clone Set: ms_drbd [drbd] (promotable):
+ * Promoted: [ asttest1 ]
+ * Unpromoted: [ asttest2 ]
+
+Transition Summary:
+ * Migrate mysqld ( asttest1 -> asttest2 )
+ * Migrate dahdi ( asttest1 -> asttest2 )
+ * Start fonulator ( asttest2 )
+ * Start asterisk ( asttest2 )
+ * Start iax2_mon ( asttest2 )
+ * Start httpd ( asttest2 )
+ * Start tftp ( asttest2 )
+ * Migrate ip_voip_route_test1 ( asttest1 -> asttest2 )
+ * Migrate ip_voip_route_test2 ( asttest1 -> asttest2 )
+ * Migrate ip_voip_vlan850 ( asttest1 -> asttest2 )
+ * Migrate ip_voip_vlan998 ( asttest1 -> asttest2 )
+ * Migrate ip_voip_vlan851 ( asttest1 -> asttest2 )
+ * Migrate ip_voip_vlan852 ( asttest1 -> asttest2 )
+ * Migrate ip_voip_vlan853 ( asttest1 -> asttest2 )
+ * Migrate ip_voip_vlan854 ( asttest1 -> asttest2 )
+ * Migrate ip_voip_vlan855 ( asttest1 -> asttest2 )
+ * Migrate ip_voip_vlan856 ( asttest1 -> asttest2 )
+ * Move fs_drbd ( asttest1 -> asttest2 )
+ * Demote drbd:0 ( Promoted -> Unpromoted asttest1 )
+ * Promote drbd:1 ( Unpromoted -> Promoted asttest2 )
+
+Executing Cluster Transition:
+ * Pseudo action: voip_stop_0
+ * Resource action: mysqld migrate_to on asttest1
+ * Resource action: ip_voip_route_test1 migrate_to on asttest1
+ * Resource action: ip_voip_route_test2 migrate_to on asttest1
+ * Resource action: ip_voip_vlan850 migrate_to on asttest1
+ * Resource action: ip_voip_vlan998 migrate_to on asttest1
+ * Resource action: ip_voip_vlan851 migrate_to on asttest1
+ * Resource action: ip_voip_vlan852 migrate_to on asttest1
+ * Resource action: ip_voip_vlan853 migrate_to on asttest1
+ * Resource action: ip_voip_vlan854 migrate_to on asttest1
+ * Resource action: ip_voip_vlan855 migrate_to on asttest1
+ * Resource action: ip_voip_vlan856 migrate_to on asttest1
+ * Resource action: drbd:1 cancel=31000 on asttest2
+ * Pseudo action: ms_drbd_pre_notify_demote_0
+ * Resource action: mysqld migrate_from on asttest2
+ * Resource action: dahdi migrate_to on asttest1
+ * Resource action: ip_voip_route_test1 migrate_from on asttest2
+ * Resource action: ip_voip_route_test2 migrate_from on asttest2
+ * Resource action: ip_voip_vlan850 migrate_from on asttest2
+ * Resource action: ip_voip_vlan998 migrate_from on asttest2
+ * Resource action: ip_voip_vlan851 migrate_from on asttest2
+ * Resource action: ip_voip_vlan852 migrate_from on asttest2
+ * Resource action: ip_voip_vlan853 migrate_from on asttest2
+ * Resource action: ip_voip_vlan854 migrate_from on asttest2
+ * Resource action: ip_voip_vlan855 migrate_from on asttest2
+ * Resource action: ip_voip_vlan856 migrate_from on asttest2
+ * Resource action: drbd:0 notify on asttest1
+ * Resource action: drbd:1 notify on asttest2
+ * Pseudo action: ms_drbd_confirmed-pre_notify_demote_0
+ * Resource action: dahdi migrate_from on asttest2
+ * Resource action: dahdi stop on asttest1
+ * Resource action: mysqld stop on asttest1
+ * Pseudo action: voip_stopped_0
+ * Pseudo action: ip_voip_routes_stop_0
+ * Resource action: ip_voip_route_test1 stop on asttest1
+ * Resource action: ip_voip_route_test2 stop on asttest1
+ * Pseudo action: ip_voip_routes_stopped_0
+ * Pseudo action: ip_voip_addresses_p_stop_0
+ * Resource action: ip_voip_vlan850 stop on asttest1
+ * Resource action: ip_voip_vlan998 stop on asttest1
+ * Resource action: ip_voip_vlan851 stop on asttest1
+ * Resource action: ip_voip_vlan852 stop on asttest1
+ * Resource action: ip_voip_vlan853 stop on asttest1
+ * Resource action: ip_voip_vlan854 stop on asttest1
+ * Resource action: ip_voip_vlan855 stop on asttest1
+ * Resource action: ip_voip_vlan856 stop on asttest1
+ * Pseudo action: ip_voip_addresses_p_stopped_0
+ * Resource action: fs_drbd stop on asttest1
+ * Pseudo action: ms_drbd_demote_0
+ * Resource action: drbd:0 demote on asttest1
+ * Pseudo action: ms_drbd_demoted_0
+ * Pseudo action: ms_drbd_post_notify_demoted_0
+ * Resource action: drbd:0 notify on asttest1
+ * Resource action: drbd:1 notify on asttest2
+ * Pseudo action: ms_drbd_confirmed-post_notify_demoted_0
+ * Pseudo action: ms_drbd_pre_notify_promote_0
+ * Resource action: drbd:0 notify on asttest1
+ * Resource action: drbd:1 notify on asttest2
+ * Pseudo action: ms_drbd_confirmed-pre_notify_promote_0
+ * Pseudo action: ms_drbd_promote_0
+ * Resource action: drbd:1 promote on asttest2
+ * Pseudo action: ms_drbd_promoted_0
+ * Pseudo action: ms_drbd_post_notify_promoted_0
+ * Resource action: drbd:0 notify on asttest1
+ * Resource action: drbd:1 notify on asttest2
+ * Pseudo action: ms_drbd_confirmed-post_notify_promoted_0
+ * Resource action: fs_drbd start on asttest2
+ * Resource action: drbd:0 monitor=31000 on asttest1
+ * Pseudo action: ip_voip_addresses_p_start_0
+ * Pseudo action: ip_voip_vlan850_start_0
+ * Pseudo action: ip_voip_vlan998_start_0
+ * Pseudo action: ip_voip_vlan851_start_0
+ * Pseudo action: ip_voip_vlan852_start_0
+ * Pseudo action: ip_voip_vlan853_start_0
+ * Pseudo action: ip_voip_vlan854_start_0
+ * Pseudo action: ip_voip_vlan855_start_0
+ * Pseudo action: ip_voip_vlan856_start_0
+ * Resource action: fs_drbd monitor=1000 on asttest2
+ * Pseudo action: ip_voip_addresses_p_running_0
+ * Resource action: ip_voip_vlan850 monitor=1000 on asttest2
+ * Resource action: ip_voip_vlan998 monitor=1000 on asttest2
+ * Resource action: ip_voip_vlan851 monitor=1000 on asttest2
+ * Resource action: ip_voip_vlan852 monitor=1000 on asttest2
+ * Resource action: ip_voip_vlan853 monitor=1000 on asttest2
+ * Resource action: ip_voip_vlan854 monitor=1000 on asttest2
+ * Resource action: ip_voip_vlan855 monitor=1000 on asttest2
+ * Resource action: ip_voip_vlan856 monitor=1000 on asttest2
+ * Pseudo action: ip_voip_routes_start_0
+ * Pseudo action: ip_voip_route_test1_start_0
+ * Pseudo action: ip_voip_route_test2_start_0
+ * Pseudo action: ip_voip_routes_running_0
+ * Resource action: ip_voip_route_test1 monitor=1000 on asttest2
+ * Resource action: ip_voip_route_test2 monitor=1000 on asttest2
+ * Pseudo action: voip_start_0
+ * Pseudo action: mysqld_start_0
+ * Pseudo action: dahdi_start_0
+ * Resource action: fonulator start on asttest2
+ * Resource action: asterisk start on asttest2
+ * Resource action: iax2_mon start on asttest2
+ * Resource action: httpd start on asttest2
+ * Resource action: tftp start on asttest2
+ * Pseudo action: voip_running_0
+ * Resource action: mysqld monitor=1000 on asttest2
+ * Resource action: dahdi monitor=1000 on asttest2
+ * Resource action: fonulator monitor=1000 on asttest2
+ * Resource action: asterisk monitor=1000 on asttest2
+ * Resource action: iax2_mon monitor=60000 on asttest2
+ * Resource action: httpd monitor=1000 on asttest2
+ * Resource action: tftp monitor=60000 on asttest2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ asttest1 asttest2 ]
+
+ * Full List of Resources:
+ * Resource Group: voip:
+ * mysqld (lsb:mysql): Started asttest2
+ * dahdi (lsb:dahdi): Started asttest2
+ * fonulator (lsb:fonulator): Started asttest2
+ * asterisk (lsb:asterisk-11.0.1): Started asttest2
+ * iax2_mon (lsb:iax2_mon): Started asttest2
+ * httpd (lsb:apache2): Started asttest2
+ * tftp (lsb:tftp-srce): Started asttest2
+ * Resource Group: ip_voip_routes:
+ * ip_voip_route_test1 (ocf:heartbeat:Route): Started asttest2
+ * ip_voip_route_test2 (ocf:heartbeat:Route): Started asttest2
+ * Resource Group: ip_voip_addresses_p:
+ * ip_voip_vlan850 (ocf:heartbeat:IPaddr2): Started asttest2
+ * ip_voip_vlan998 (ocf:heartbeat:IPaddr2): Started asttest2
+ * ip_voip_vlan851 (ocf:heartbeat:IPaddr2): Started asttest2
+ * ip_voip_vlan852 (ocf:heartbeat:IPaddr2): Started asttest2
+ * ip_voip_vlan853 (ocf:heartbeat:IPaddr2): Started asttest2
+ * ip_voip_vlan854 (ocf:heartbeat:IPaddr2): Started asttest2
+ * ip_voip_vlan855 (ocf:heartbeat:IPaddr2): Started asttest2
+ * ip_voip_vlan856 (ocf:heartbeat:IPaddr2): Started asttest2
+ * Clone Set: cl_route [ip_voip_route_default]:
+ * Started: [ asttest1 asttest2 ]
+ * fs_drbd (ocf:heartbeat:Filesystem): Started asttest2
+ * Clone Set: ms_drbd [drbd] (promotable):
+ * Promoted: [ asttest2 ]
+ * Unpromoted: [ asttest1 ]
diff --git a/cts/scheduler/summary/group-fail.summary b/cts/scheduler/summary/group-fail.summary
new file mode 100644
index 0000000..ab29ea9
--- /dev/null
+++ b/cts/scheduler/summary/group-fail.summary
@@ -0,0 +1,39 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Resource Group: group1:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * rsc3 (ocf:heartbeat:apache): Stopped
+ * rsc4 (ocf:heartbeat:apache): Started node1
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+ * Restart rsc2 ( node1 ) due to required rsc1 start
+ * Start rsc3 ( node1 )
+ * Restart rsc4 ( node1 ) due to required rsc3 start
+
+Executing Cluster Transition:
+ * Pseudo action: group1_stop_0
+ * Resource action: rsc4 stop on node1
+ * Resource action: rsc2 stop on node1
+ * Pseudo action: group1_stopped_0
+ * Pseudo action: group1_start_0
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 start on node1
+ * Resource action: rsc3 start on node1
+ * Resource action: rsc4 start on node1
+ * Pseudo action: group1_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Resource Group: group1:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * rsc3 (ocf:heartbeat:apache): Started node1
+ * rsc4 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/group-stop-ordering.summary b/cts/scheduler/summary/group-stop-ordering.summary
new file mode 100644
index 0000000..35b4cd1
--- /dev/null
+++ b/cts/scheduler/summary/group-stop-ordering.summary
@@ -0,0 +1,29 @@
+0 of 5 resource instances DISABLED and 1 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ fastvm-rhel-7-5-73 fastvm-rhel-7-5-74 ]
+
+ * Full List of Resources:
+ * fence-fastvm-rhel-7-5-73 (stonith:fence_xvm): Started fastvm-rhel-7-5-74
+ * fence-fastvm-rhel-7-5-74 (stonith:fence_xvm): Started fastvm-rhel-7-5-73
+ * outside_resource (ocf:pacemaker:Dummy): FAILED fastvm-rhel-7-5-73 (blocked)
+ * Resource Group: grp:
+ * inside_resource_2 (ocf:pacemaker:Dummy): Started fastvm-rhel-7-5-74
+ * inside_resource_3 (ocf:pacemaker:Dummy): Started fastvm-rhel-7-5-74
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fastvm-rhel-7-5-73 fastvm-rhel-7-5-74 ]
+
+ * Full List of Resources:
+ * fence-fastvm-rhel-7-5-73 (stonith:fence_xvm): Started fastvm-rhel-7-5-74
+ * fence-fastvm-rhel-7-5-74 (stonith:fence_xvm): Started fastvm-rhel-7-5-73
+ * outside_resource (ocf:pacemaker:Dummy): FAILED fastvm-rhel-7-5-73 (blocked)
+ * Resource Group: grp:
+ * inside_resource_2 (ocf:pacemaker:Dummy): Started fastvm-rhel-7-5-74
+ * inside_resource_3 (ocf:pacemaker:Dummy): Started fastvm-rhel-7-5-74
diff --git a/cts/scheduler/summary/group-unmanaged-stopped.summary b/cts/scheduler/summary/group-unmanaged-stopped.summary
new file mode 100644
index 0000000..5164f92
--- /dev/null
+++ b/cts/scheduler/summary/group-unmanaged-stopped.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ pcmk-1 pcmk-2 ]
+
+ * Full List of Resources:
+ * Resource Group: group-1:
+ * r192.168.122.113 (ocf:heartbeat:IPaddr2): Started pcmk-1
+ * r192.168.122.114 (ocf:heartbeat:IPaddr2): Stopped (unmanaged)
+ * r192.168.122.115 (ocf:heartbeat:IPaddr2): Started pcmk-1
+
+Transition Summary:
+ * Stop r192.168.122.115 ( pcmk-1 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: group-1_stop_0
+ * Resource action: r192.168.122.115 stop on pcmk-1
+ * Pseudo action: group-1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ pcmk-1 pcmk-2 ]
+
+ * Full List of Resources:
+ * Resource Group: group-1:
+ * r192.168.122.113 (ocf:heartbeat:IPaddr2): Started pcmk-1
+ * r192.168.122.114 (ocf:heartbeat:IPaddr2): Stopped (unmanaged)
+ * r192.168.122.115 (ocf:heartbeat:IPaddr2): Stopped
diff --git a/cts/scheduler/summary/group-unmanaged.summary b/cts/scheduler/summary/group-unmanaged.summary
new file mode 100644
index 0000000..7eac146
--- /dev/null
+++ b/cts/scheduler/summary/group-unmanaged.summary
@@ -0,0 +1,23 @@
+Current cluster status:
+ * Node List:
+ * Online: [ pcmk-1 pcmk-2 ]
+
+ * Full List of Resources:
+ * Resource Group: group-1:
+ * r192.168.122.113 (ocf:heartbeat:IPaddr2): Started pcmk-1
+ * r192.168.122.114 (ocf:heartbeat:IPaddr2): Started pcmk-1 (unmanaged)
+ * r192.168.122.115 (ocf:heartbeat:IPaddr2): Started pcmk-1
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ pcmk-1 pcmk-2 ]
+
+ * Full List of Resources:
+ * Resource Group: group-1:
+ * r192.168.122.113 (ocf:heartbeat:IPaddr2): Started pcmk-1
+ * r192.168.122.114 (ocf:heartbeat:IPaddr2): Started pcmk-1 (unmanaged)
+ * r192.168.122.115 (ocf:heartbeat:IPaddr2): Started pcmk-1
diff --git a/cts/scheduler/summary/group1.summary b/cts/scheduler/summary/group1.summary
new file mode 100644
index 0000000..ef9bd92
--- /dev/null
+++ b/cts/scheduler/summary/group1.summary
@@ -0,0 +1,37 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Resource Group: rsc1:
+ * child_rsc1 (ocf:heartbeat:apache): Stopped
+ * child_rsc2 (ocf:heartbeat:apache): Stopped
+ * child_rsc3 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start child_rsc1 ( node1 )
+ * Start child_rsc2 ( node1 )
+ * Start child_rsc3 ( node1 )
+
+Executing Cluster Transition:
+ * Pseudo action: rsc1_start_0
+ * Resource action: child_rsc1 monitor on node2
+ * Resource action: child_rsc1 monitor on node1
+ * Resource action: child_rsc2 monitor on node2
+ * Resource action: child_rsc2 monitor on node1
+ * Resource action: child_rsc3 monitor on node2
+ * Resource action: child_rsc3 monitor on node1
+ * Resource action: child_rsc1 start on node1
+ * Resource action: child_rsc2 start on node1
+ * Resource action: child_rsc3 start on node1
+ * Pseudo action: rsc1_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Resource Group: rsc1:
+ * child_rsc1 (ocf:heartbeat:apache): Started node1
+ * child_rsc2 (ocf:heartbeat:apache): Started node1
+ * child_rsc3 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/group10.summary b/cts/scheduler/summary/group10.summary
new file mode 100644
index 0000000..35890d1
--- /dev/null
+++ b/cts/scheduler/summary/group10.summary
@@ -0,0 +1,68 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n08
+ * Resource Group: group-1:
+ * child_192.168.100.181 (ocf:heartbeat:IPaddr): FAILED c001n01
+ * child_192.168.100.182 (ocf:heartbeat:IPaddr): Started c001n01
+ * child_192.168.100.183 (ocf:heartbeat:IPaddr): Started c001n01
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started c001n01
+ * child_DoFencing:1 (stonith:ssh): Started c001n02
+ * child_DoFencing:2 (stonith:ssh): Started c001n03
+ * child_DoFencing:3 (stonith:ssh): Started c001n08
+
+Transition Summary:
+ * Recover child_192.168.100.181 ( c001n01 )
+ * Restart child_192.168.100.182 ( c001n01 ) due to required child_192.168.100.181 start
+ * Restart child_192.168.100.183 ( c001n01 ) due to required child_192.168.100.182 start
+
+Executing Cluster Transition:
+ * Pseudo action: group-1_stop_0
+ * Resource action: child_192.168.100.183 stop on c001n01
+ * Resource action: child_DoFencing:1 monitor on c001n08
+ * Resource action: child_DoFencing:1 monitor on c001n03
+ * Resource action: child_DoFencing:1 monitor on c001n01
+ * Resource action: child_DoFencing:2 monitor on c001n08
+ * Resource action: child_DoFencing:2 monitor on c001n02
+ * Resource action: child_DoFencing:2 monitor on c001n01
+ * Resource action: child_DoFencing:3 monitor on c001n03
+ * Resource action: child_DoFencing:3 monitor on c001n02
+ * Resource action: child_DoFencing:3 monitor on c001n01
+ * Resource action: child_192.168.100.182 stop on c001n01
+ * Resource action: child_192.168.100.181 stop on c001n01
+ * Pseudo action: group-1_stopped_0
+ * Pseudo action: group-1_start_0
+ * Resource action: child_192.168.100.181 start on c001n01
+ * Resource action: child_192.168.100.181 monitor=5000 on c001n01
+ * Resource action: child_192.168.100.182 start on c001n01
+ * Resource action: child_192.168.100.182 monitor=5000 on c001n01
+ * Resource action: child_192.168.100.183 start on c001n01
+ * Resource action: child_192.168.100.183 monitor=5000 on c001n01
+ * Pseudo action: group-1_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n08
+ * Resource Group: group-1:
+ * child_192.168.100.181 (ocf:heartbeat:IPaddr): Started c001n01
+ * child_192.168.100.182 (ocf:heartbeat:IPaddr): Started c001n01
+ * child_192.168.100.183 (ocf:heartbeat:IPaddr): Started c001n01
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started c001n01
+ * child_DoFencing:1 (stonith:ssh): Started c001n02
+ * child_DoFencing:2 (stonith:ssh): Started c001n03
+ * child_DoFencing:3 (stonith:ssh): Started c001n08
diff --git a/cts/scheduler/summary/group11.summary b/cts/scheduler/summary/group11.summary
new file mode 100644
index 0000000..4ba5c9d
--- /dev/null
+++ b/cts/scheduler/summary/group11.summary
@@ -0,0 +1,32 @@
+1 of 3 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * Resource Group: group1:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node1 (disabled)
+ * rsc3 (ocf:heartbeat:apache): Started node1
+
+Transition Summary:
+ * Stop rsc2 ( node1 ) due to node availability
+ * Stop rsc3 ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: group1_stop_0
+ * Resource action: rsc3 stop on node1
+ * Resource action: rsc2 stop on node1
+ * Pseudo action: group1_stopped_0
+ * Pseudo action: group1_start_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * Resource Group: group1:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Stopped (disabled)
+ * rsc3 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/group13.summary b/cts/scheduler/summary/group13.summary
new file mode 100644
index 0000000..7f8fad1
--- /dev/null
+++ b/cts/scheduler/summary/group13.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ jamesltc ]
+
+ * Full List of Resources:
+ * Resource Group: nfs:
+ * resource_nfs (lsb:nfs): Started jamesltc
+ * Resource Group: fs:
+ * resource_fs (ocf:heartbeat:Filesystem): Stopped
+
+Transition Summary:
+ * Stop resource_nfs ( jamesltc ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: nfs_stop_0
+ * Resource action: resource_nfs stop on jamesltc
+ * Pseudo action: nfs_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ jamesltc ]
+
+ * Full List of Resources:
+ * Resource Group: nfs:
+ * resource_nfs (lsb:nfs): Stopped
+ * Resource Group: fs:
+ * resource_fs (ocf:heartbeat:Filesystem): Stopped
diff --git a/cts/scheduler/summary/group14.summary b/cts/scheduler/summary/group14.summary
new file mode 100644
index 0000000..80ded38
--- /dev/null
+++ b/cts/scheduler/summary/group14.summary
@@ -0,0 +1,102 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c001n06 c001n07 ]
+ * OFFLINE: [ c001n02 c001n03 c001n04 c001n05 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Stopped
+ * Resource Group: group-1:
+ * r192.168.100.181 (ocf:heartbeat:IPaddr): Started c001n06
+ * r192.168.100.182 (ocf:heartbeat:IPaddr): Stopped
+ * r192.168.100.183 (ocf:heartbeat:IPaddr): Stopped
+ * lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Stopped
+ * migrator (ocf:heartbeat:Dummy): Stopped
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n04 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n05 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n06 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n07 (ocf:heartbeat:IPaddr): Stopped
+ * Clone Set: DoFencing [child_DoFencing]:
+ * Stopped: [ c001n02 c001n03 c001n04 c001n05 c001n06 c001n07 ]
+ * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique):
+ * ocf_msdummy:0 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:1 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:2 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:3 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:4 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:5 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:6 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:7 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:8 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:9 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:10 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:11 (ocf:heartbeat:Stateful): Stopped
+
+Transition Summary:
+ * Start DcIPaddr ( c001n06 ) due to no quorum (blocked)
+ * Stop r192.168.100.181 ( c001n06 ) due to no quorum
+ * Start r192.168.100.182 ( c001n07 ) due to no quorum (blocked)
+ * Start r192.168.100.183 ( c001n07 ) due to no quorum (blocked)
+ * Start lsb_dummy ( c001n06 ) due to no quorum (blocked)
+ * Start migrator ( c001n06 ) due to no quorum (blocked)
+ * Start rsc_c001n03 ( c001n06 ) due to no quorum (blocked)
+ * Start rsc_c001n02 ( c001n07 ) due to no quorum (blocked)
+ * Start rsc_c001n04 ( c001n06 ) due to no quorum (blocked)
+ * Start rsc_c001n05 ( c001n07 ) due to no quorum (blocked)
+ * Start rsc_c001n06 ( c001n06 ) due to no quorum (blocked)
+ * Start rsc_c001n07 ( c001n07 ) due to no quorum (blocked)
+ * Start child_DoFencing:0 ( c001n06 )
+ * Start child_DoFencing:1 ( c001n07 )
+ * Start ocf_msdummy:0 ( c001n06 ) due to no quorum (blocked)
+ * Start ocf_msdummy:1 ( c001n07 ) due to no quorum (blocked)
+ * Start ocf_msdummy:2 ( c001n06 ) due to no quorum (blocked)
+ * Start ocf_msdummy:3 ( c001n07 ) due to no quorum (blocked)
+
+Executing Cluster Transition:
+ * Pseudo action: group-1_stop_0
+ * Resource action: r192.168.100.181 stop on c001n06
+ * Pseudo action: DoFencing_start_0
+ * Pseudo action: group-1_stopped_0
+ * Pseudo action: group-1_start_0
+ * Resource action: child_DoFencing:0 start on c001n06
+ * Resource action: child_DoFencing:1 start on c001n07
+ * Pseudo action: DoFencing_running_0
+ * Resource action: child_DoFencing:0 monitor=20000 on c001n06
+ * Resource action: child_DoFencing:1 monitor=20000 on c001n07
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n06 c001n07 ]
+ * OFFLINE: [ c001n02 c001n03 c001n04 c001n05 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Stopped
+ * Resource Group: group-1:
+ * r192.168.100.181 (ocf:heartbeat:IPaddr): Stopped
+ * r192.168.100.182 (ocf:heartbeat:IPaddr): Stopped
+ * r192.168.100.183 (ocf:heartbeat:IPaddr): Stopped
+ * lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Stopped
+ * migrator (ocf:heartbeat:Dummy): Stopped
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n04 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n05 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n06 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n07 (ocf:heartbeat:IPaddr): Stopped
+ * Clone Set: DoFencing [child_DoFencing]:
+ * Started: [ c001n06 c001n07 ]
+ * Stopped: [ c001n02 c001n03 c001n04 c001n05 ]
+ * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique):
+ * ocf_msdummy:0 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:1 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:2 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:3 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:4 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:5 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:6 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:7 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:8 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:9 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:10 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:11 (ocf:heartbeat:Stateful): Stopped
diff --git a/cts/scheduler/summary/group15.summary b/cts/scheduler/summary/group15.summary
new file mode 100644
index 0000000..82a32ca
--- /dev/null
+++ b/cts/scheduler/summary/group15.summary
@@ -0,0 +1,51 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Resource Group: foo:
+ * rsc3 (ocf:heartbeat:apache): Stopped
+ * rsc4 (ocf:heartbeat:apache): Stopped
+ * rsc5 (ocf:heartbeat:apache): Stopped
+ * Resource Group: bar:
+ * rsc6 (ocf:heartbeat:apache): Stopped
+ * rsc7 (ocf:heartbeat:apache): Stopped
+ * rsc8 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc6 ( node1 )
+ * Start rsc7 ( node1 )
+ * Start rsc8 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc4 monitor on node2
+ * Resource action: rsc4 monitor on node1
+ * Resource action: rsc5 monitor on node2
+ * Resource action: rsc5 monitor on node1
+ * Pseudo action: bar_start_0
+ * Resource action: rsc6 monitor on node2
+ * Resource action: rsc6 monitor on node1
+ * Resource action: rsc7 monitor on node2
+ * Resource action: rsc7 monitor on node1
+ * Resource action: rsc8 monitor on node2
+ * Resource action: rsc8 monitor on node1
+ * Resource action: rsc6 start on node1
+ * Resource action: rsc7 start on node1
+ * Resource action: rsc8 start on node1
+ * Pseudo action: bar_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Resource Group: foo:
+ * rsc3 (ocf:heartbeat:apache): Stopped
+ * rsc4 (ocf:heartbeat:apache): Stopped
+ * rsc5 (ocf:heartbeat:apache): Stopped
+ * Resource Group: bar:
+ * rsc6 (ocf:heartbeat:apache): Started node1
+ * rsc7 (ocf:heartbeat:apache): Started node1
+ * rsc8 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/group2.summary b/cts/scheduler/summary/group2.summary
new file mode 100644
index 0000000..f71faf4
--- /dev/null
+++ b/cts/scheduler/summary/group2.summary
@@ -0,0 +1,49 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * Resource Group: rsc2:
+ * child_rsc1 (ocf:heartbeat:apache): Stopped
+ * child_rsc2 (ocf:heartbeat:apache): Stopped
+ * child_rsc3 (ocf:heartbeat:apache): Stopped
+ * rsc3 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+ * Start child_rsc1 ( node2 )
+ * Start child_rsc2 ( node2 )
+ * Start child_rsc3 ( node2 )
+ * Start rsc3 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: child_rsc1 monitor on node2
+ * Resource action: child_rsc1 monitor on node1
+ * Resource action: child_rsc2 monitor on node2
+ * Resource action: child_rsc2 monitor on node1
+ * Resource action: child_rsc3 monitor on node2
+ * Resource action: child_rsc3 monitor on node1
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc1 start on node1
+ * Pseudo action: rsc2_start_0
+ * Resource action: child_rsc1 start on node2
+ * Resource action: child_rsc2 start on node2
+ * Resource action: child_rsc3 start on node2
+ * Pseudo action: rsc2_running_0
+ * Resource action: rsc3 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * Resource Group: rsc2:
+ * child_rsc1 (ocf:heartbeat:apache): Started node2
+ * child_rsc2 (ocf:heartbeat:apache): Started node2
+ * child_rsc3 (ocf:heartbeat:apache): Started node2
+ * rsc3 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/group3.summary b/cts/scheduler/summary/group3.summary
new file mode 100644
index 0000000..e1bdce4
--- /dev/null
+++ b/cts/scheduler/summary/group3.summary
@@ -0,0 +1,59 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Resource Group: rsc1:
+ * child_rsc1 (ocf:heartbeat:apache): Stopped
+ * child_rsc2 (ocf:heartbeat:apache): Stopped
+ * child_rsc3 (ocf:heartbeat:apache): Stopped
+ * Resource Group: rsc2:
+ * child_rsc4 (ocf:heartbeat:apache): Stopped
+ * child_rsc5 (ocf:heartbeat:apache): Stopped
+ * child_rsc6 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start child_rsc1 ( node1 )
+ * Start child_rsc2 ( node1 )
+ * Start child_rsc3 ( node1 )
+ * Start child_rsc4 ( node2 )
+ * Start child_rsc5 ( node2 )
+ * Start child_rsc6 ( node2 )
+
+Executing Cluster Transition:
+ * Pseudo action: rsc1_start_0
+ * Resource action: child_rsc1 monitor on node2
+ * Resource action: child_rsc1 monitor on node1
+ * Resource action: child_rsc2 monitor on node2
+ * Resource action: child_rsc2 monitor on node1
+ * Resource action: child_rsc3 monitor on node2
+ * Resource action: child_rsc3 monitor on node1
+ * Resource action: child_rsc4 monitor on node2
+ * Resource action: child_rsc4 monitor on node1
+ * Resource action: child_rsc5 monitor on node2
+ * Resource action: child_rsc5 monitor on node1
+ * Resource action: child_rsc6 monitor on node2
+ * Resource action: child_rsc6 monitor on node1
+ * Resource action: child_rsc1 start on node1
+ * Resource action: child_rsc2 start on node1
+ * Resource action: child_rsc3 start on node1
+ * Pseudo action: rsc1_running_0
+ * Pseudo action: rsc2_start_0
+ * Resource action: child_rsc4 start on node2
+ * Resource action: child_rsc5 start on node2
+ * Resource action: child_rsc6 start on node2
+ * Pseudo action: rsc2_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Resource Group: rsc1:
+ * child_rsc1 (ocf:heartbeat:apache): Started node1
+ * child_rsc2 (ocf:heartbeat:apache): Started node1
+ * child_rsc3 (ocf:heartbeat:apache): Started node1
+ * Resource Group: rsc2:
+ * child_rsc4 (ocf:heartbeat:apache): Started node2
+ * child_rsc5 (ocf:heartbeat:apache): Started node2
+ * child_rsc6 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/group4.summary b/cts/scheduler/summary/group4.summary
new file mode 100644
index 0000000..386925f
--- /dev/null
+++ b/cts/scheduler/summary/group4.summary
@@ -0,0 +1,32 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node2
+ * Resource Group: rsc2:
+ * child_rsc1 (ocf:heartbeat:apache): Started node2
+ * child_rsc2 (ocf:heartbeat:apache): Started node2
+ * child_rsc3 (ocf:heartbeat:apache): Started node2
+ * rsc3 (ocf:heartbeat:apache): Started node2
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node1
+ * Resource action: child_rsc1 monitor on node1
+ * Resource action: child_rsc2 monitor on node1
+ * Resource action: child_rsc3 monitor on node1
+ * Resource action: rsc3 monitor on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node2
+ * Resource Group: rsc2:
+ * child_rsc1 (ocf:heartbeat:apache): Started node2
+ * child_rsc2 (ocf:heartbeat:apache): Started node2
+ * child_rsc3 (ocf:heartbeat:apache): Started node2
+ * rsc3 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/group5.summary b/cts/scheduler/summary/group5.summary
new file mode 100644
index 0000000..a95ec3f
--- /dev/null
+++ b/cts/scheduler/summary/group5.summary
@@ -0,0 +1,51 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * Resource Group: rsc2:
+ * child_rsc1 (ocf:heartbeat:apache): Started node1
+ * child_rsc2 (ocf:heartbeat:apache): Started node1
+ * child_rsc3 (ocf:heartbeat:apache): Started node1
+ * rsc3 (ocf:heartbeat:apache): Started node1
+
+Transition Summary:
+ * Move rsc1 ( node1 -> node2 )
+ * Move child_rsc1 ( node1 -> node2 )
+ * Move child_rsc2 ( node1 -> node2 )
+ * Move child_rsc3 ( node1 -> node2 )
+ * Move rsc3 ( node1 -> node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: child_rsc1 monitor on node2
+ * Resource action: child_rsc2 monitor on node2
+ * Resource action: child_rsc3 monitor on node2
+ * Resource action: rsc3 stop on node1
+ * Resource action: rsc3 monitor on node2
+ * Pseudo action: rsc2_stop_0
+ * Resource action: child_rsc3 stop on node1
+ * Resource action: child_rsc2 stop on node1
+ * Resource action: child_rsc1 stop on node1
+ * Pseudo action: rsc2_stopped_0
+ * Resource action: rsc1 stop on node1
+ * Resource action: rsc1 start on node2
+ * Pseudo action: rsc2_start_0
+ * Resource action: child_rsc1 start on node2
+ * Resource action: child_rsc2 start on node2
+ * Resource action: child_rsc3 start on node2
+ * Pseudo action: rsc2_running_0
+ * Resource action: rsc3 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node2
+ * Resource Group: rsc2:
+ * child_rsc1 (ocf:heartbeat:apache): Started node2
+ * child_rsc2 (ocf:heartbeat:apache): Started node2
+ * child_rsc3 (ocf:heartbeat:apache): Started node2
+ * rsc3 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/group6.summary b/cts/scheduler/summary/group6.summary
new file mode 100644
index 0000000..7ad4823
--- /dev/null
+++ b/cts/scheduler/summary/group6.summary
@@ -0,0 +1,63 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Resource Group: rsc1:
+ * child_rsc1 (ocf:heartbeat:apache): Started node1
+ * child_rsc2 (ocf:heartbeat:apache): Started node1
+ * child_rsc3 (ocf:heartbeat:apache): Started node1
+ * Resource Group: rsc2:
+ * child_rsc4 (ocf:heartbeat:apache): Started node1
+ * child_rsc5 (ocf:heartbeat:apache): Started node1
+ * child_rsc6 (ocf:heartbeat:apache): Started node1
+
+Transition Summary:
+ * Move child_rsc1 ( node1 -> node2 )
+ * Move child_rsc2 ( node1 -> node2 )
+ * Move child_rsc3 ( node1 -> node2 )
+ * Move child_rsc4 ( node1 -> node2 )
+ * Move child_rsc5 ( node1 -> node2 )
+ * Move child_rsc6 ( node1 -> node2 )
+
+Executing Cluster Transition:
+ * Resource action: child_rsc1 monitor on node2
+ * Resource action: child_rsc2 monitor on node2
+ * Resource action: child_rsc3 monitor on node2
+ * Pseudo action: rsc2_stop_0
+ * Resource action: child_rsc4 monitor on node2
+ * Resource action: child_rsc5 monitor on node2
+ * Resource action: child_rsc6 stop on node1
+ * Resource action: child_rsc6 monitor on node2
+ * Resource action: child_rsc5 stop on node1
+ * Resource action: child_rsc4 stop on node1
+ * Pseudo action: rsc2_stopped_0
+ * Pseudo action: rsc1_stop_0
+ * Resource action: child_rsc3 stop on node1
+ * Resource action: child_rsc2 stop on node1
+ * Resource action: child_rsc1 stop on node1
+ * Pseudo action: rsc1_stopped_0
+ * Pseudo action: rsc1_start_0
+ * Resource action: child_rsc1 start on node2
+ * Resource action: child_rsc2 start on node2
+ * Resource action: child_rsc3 start on node2
+ * Pseudo action: rsc1_running_0
+ * Pseudo action: rsc2_start_0
+ * Resource action: child_rsc4 start on node2
+ * Resource action: child_rsc5 start on node2
+ * Resource action: child_rsc6 start on node2
+ * Pseudo action: rsc2_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Resource Group: rsc1:
+ * child_rsc1 (ocf:heartbeat:apache): Started node2
+ * child_rsc2 (ocf:heartbeat:apache): Started node2
+ * child_rsc3 (ocf:heartbeat:apache): Started node2
+ * Resource Group: rsc2:
+ * child_rsc4 (ocf:heartbeat:apache): Started node2
+ * child_rsc5 (ocf:heartbeat:apache): Started node2
+ * child_rsc6 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/group7.summary b/cts/scheduler/summary/group7.summary
new file mode 100644
index 0000000..79ce76b
--- /dev/null
+++ b/cts/scheduler/summary/group7.summary
@@ -0,0 +1,72 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * Resource Group: rsc2:
+ * child_rsc1 (ocf:heartbeat:apache): Stopped
+ * child_rsc2 (ocf:heartbeat:apache): Stopped
+ * child_rsc3 (ocf:heartbeat:apache): Stopped
+ * Resource Group: rsc3:
+ * child_rsc4 (ocf:heartbeat:apache): Stopped
+ * child_rsc5 (ocf:heartbeat:apache): Stopped
+ * child_rsc6 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+ * Start child_rsc1 ( node2 )
+ * Start child_rsc2 ( node2 )
+ * Start child_rsc3 ( node2 )
+ * Start child_rsc4 ( node2 )
+ * Start child_rsc5 ( node2 )
+ * Start child_rsc6 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node3
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Pseudo action: rsc2_start_0
+ * Resource action: child_rsc1 monitor on node3
+ * Resource action: child_rsc1 monitor on node2
+ * Resource action: child_rsc1 monitor on node1
+ * Resource action: child_rsc2 monitor on node3
+ * Resource action: child_rsc2 monitor on node2
+ * Resource action: child_rsc2 monitor on node1
+ * Resource action: child_rsc3 monitor on node3
+ * Resource action: child_rsc3 monitor on node2
+ * Resource action: child_rsc3 monitor on node1
+ * Resource action: child_rsc4 monitor on node3
+ * Resource action: child_rsc4 monitor on node2
+ * Resource action: child_rsc4 monitor on node1
+ * Resource action: child_rsc5 monitor on node3
+ * Resource action: child_rsc5 monitor on node2
+ * Resource action: child_rsc5 monitor on node1
+ * Resource action: child_rsc6 monitor on node3
+ * Resource action: child_rsc6 monitor on node2
+ * Resource action: child_rsc6 monitor on node1
+ * Resource action: rsc1 start on node1
+ * Resource action: child_rsc1 start on node2
+ * Resource action: child_rsc2 start on node2
+ * Resource action: child_rsc3 start on node2
+ * Pseudo action: rsc2_running_0
+ * Pseudo action: rsc3_start_0
+ * Resource action: child_rsc4 start on node2
+ * Resource action: child_rsc5 start on node2
+ * Resource action: child_rsc6 start on node2
+ * Pseudo action: rsc3_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * Resource Group: rsc2:
+ * child_rsc1 (ocf:heartbeat:apache): Started node2
+ * child_rsc2 (ocf:heartbeat:apache): Started node2
+ * child_rsc3 (ocf:heartbeat:apache): Started node2
+ * Resource Group: rsc3:
+ * child_rsc4 (ocf:heartbeat:apache): Started node2
+ * child_rsc5 (ocf:heartbeat:apache): Started node2
+ * child_rsc6 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/group8.summary b/cts/scheduler/summary/group8.summary
new file mode 100644
index 0000000..37ef66c
--- /dev/null
+++ b/cts/scheduler/summary/group8.summary
@@ -0,0 +1,52 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 ]
+ * OFFLINE: [ node2 node3 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * Resource Group: rsc2:
+ * child_rsc1 (ocf:heartbeat:apache): Stopped
+ * child_rsc2 (ocf:heartbeat:apache): Stopped
+ * child_rsc3 (ocf:heartbeat:apache): Stopped
+ * Resource Group: rsc3:
+ * child_rsc4 (ocf:heartbeat:apache): Stopped
+ * child_rsc5 (ocf:heartbeat:apache): Stopped
+ * child_rsc6 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+ * Start child_rsc1 ( node1 )
+ * Start child_rsc2 ( node1 )
+ * Start child_rsc3 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node1
+ * Pseudo action: rsc2_start_0
+ * Resource action: child_rsc1 monitor on node1
+ * Resource action: child_rsc2 monitor on node1
+ * Resource action: child_rsc3 monitor on node1
+ * Resource action: child_rsc4 monitor on node1
+ * Resource action: child_rsc5 monitor on node1
+ * Resource action: child_rsc6 monitor on node1
+ * Resource action: rsc1 start on node1
+ * Resource action: child_rsc1 start on node1
+ * Resource action: child_rsc2 start on node1
+ * Resource action: child_rsc3 start on node1
+ * Pseudo action: rsc2_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 ]
+ * OFFLINE: [ node2 node3 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * Resource Group: rsc2:
+ * child_rsc1 (ocf:heartbeat:apache): Started node1
+ * child_rsc2 (ocf:heartbeat:apache): Started node1
+ * child_rsc3 (ocf:heartbeat:apache): Started node1
+ * Resource Group: rsc3:
+ * child_rsc4 (ocf:heartbeat:apache): Stopped
+ * child_rsc5 (ocf:heartbeat:apache): Stopped
+ * child_rsc6 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/group9.summary b/cts/scheduler/summary/group9.summary
new file mode 100644
index 0000000..57cd144
--- /dev/null
+++ b/cts/scheduler/summary/group9.summary
@@ -0,0 +1,66 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * Resource Group: foo:
+ * rsc3 (ocf:heartbeat:apache): Started node1
+ * rsc4 (ocf:heartbeat:apache): FAILED node1
+ * rsc5 (ocf:heartbeat:apache): Started node1
+ * Resource Group: bar:
+ * rsc6 (ocf:heartbeat:apache): Started node1
+ * rsc7 (ocf:heartbeat:apache): FAILED node1
+ * rsc8 (ocf:heartbeat:apache): Started node1
+
+Transition Summary:
+ * Recover rsc4 ( node1 )
+ * Restart rsc5 ( node1 ) due to required rsc4 start
+ * Move rsc6 ( node1 -> node2 )
+ * Recover rsc7 ( node1 -> node2 )
+ * Move rsc8 ( node1 -> node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc2 monitor on node2
+ * Pseudo action: foo_stop_0
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc4 monitor on node2
+ * Resource action: rsc5 stop on node1
+ * Resource action: rsc5 monitor on node2
+ * Pseudo action: bar_stop_0
+ * Resource action: rsc6 monitor on node2
+ * Resource action: rsc7 monitor on node2
+ * Resource action: rsc8 stop on node1
+ * Resource action: rsc8 monitor on node2
+ * Resource action: rsc4 stop on node1
+ * Resource action: rsc7 stop on node1
+ * Pseudo action: foo_stopped_0
+ * Pseudo action: foo_start_0
+ * Resource action: rsc4 start on node1
+ * Resource action: rsc5 start on node1
+ * Resource action: rsc6 stop on node1
+ * Pseudo action: foo_running_0
+ * Pseudo action: bar_stopped_0
+ * Pseudo action: bar_start_0
+ * Resource action: rsc6 start on node2
+ * Resource action: rsc7 start on node2
+ * Resource action: rsc8 start on node2
+ * Pseudo action: bar_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * Resource Group: foo:
+ * rsc3 (ocf:heartbeat:apache): Started node1
+ * rsc4 (ocf:heartbeat:apache): Started node1
+ * rsc5 (ocf:heartbeat:apache): Started node1
+ * Resource Group: bar:
+ * rsc6 (ocf:heartbeat:apache): Started node2
+ * rsc7 (ocf:heartbeat:apache): Started node2
+ * rsc8 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/guest-host-not-fenceable.summary b/cts/scheduler/summary/guest-host-not-fenceable.summary
new file mode 100644
index 0000000..9e3b5db
--- /dev/null
+++ b/cts/scheduler/summary/guest-host-not-fenceable.summary
@@ -0,0 +1,91 @@
+Using the original execution date of: 2019-08-26 04:52:42Z
+Current cluster status:
+ * Node List:
+ * Node node2: UNCLEAN (offline)
+ * Node node3: UNCLEAN (offline)
+ * Online: [ node1 ]
+ * GuestOnline: [ galera-bundle-0 rabbitmq-bundle-0 ]
+
+ * Full List of Resources:
+ * Container bundle set: rabbitmq-bundle [192.168.122.139:8787/rhosp13/openstack-rabbitmq:pcmklatest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started node1
+ * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): FAILED node2 (UNCLEAN)
+ * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): FAILED node3 (UNCLEAN)
+ * Container bundle set: galera-bundle [192.168.122.139:8787/rhosp13/openstack-mariadb:pcmklatest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): FAILED Promoted node1
+ * galera-bundle-1 (ocf:heartbeat:galera): FAILED Promoted node2 (UNCLEAN)
+ * galera-bundle-2 (ocf:heartbeat:galera): FAILED Promoted node3 (UNCLEAN)
+ * stonith-fence_ipmilan-node1 (stonith:fence_ipmilan): Started node2 (UNCLEAN)
+ * stonith-fence_ipmilan-node3 (stonith:fence_ipmilan): Started node2 (UNCLEAN)
+ * stonith-fence_ipmilan-node2 (stonith:fence_ipmilan): Started node3 (UNCLEAN)
+
+Transition Summary:
+ * Stop rabbitmq-bundle-docker-0 ( node1 ) due to no quorum
+ * Stop rabbitmq-bundle-0 ( node1 ) due to no quorum
+ * Stop rabbitmq:0 ( rabbitmq-bundle-0 ) due to no quorum
+ * Stop rabbitmq-bundle-docker-1 ( node2 ) due to node availability (blocked)
+ * Stop rabbitmq-bundle-1 ( node2 ) due to no quorum (blocked)
+ * Stop rabbitmq:1 ( rabbitmq-bundle-1 ) due to no quorum (blocked)
+ * Stop rabbitmq-bundle-docker-2 ( node3 ) due to node availability (blocked)
+ * Stop rabbitmq-bundle-2 ( node3 ) due to no quorum (blocked)
+ * Stop rabbitmq:2 ( rabbitmq-bundle-2 ) due to no quorum (blocked)
+ * Stop galera-bundle-docker-0 ( node1 ) due to no quorum
+ * Stop galera-bundle-0 ( node1 ) due to no quorum
+ * Stop galera:0 ( Promoted galera-bundle-0 ) due to no quorum
+ * Stop galera-bundle-docker-1 ( node2 ) due to node availability (blocked)
+ * Stop galera-bundle-1 ( node2 ) due to no quorum (blocked)
+ * Stop galera:1 ( Promoted galera-bundle-1 ) due to no quorum (blocked)
+ * Stop galera-bundle-docker-2 ( node3 ) due to node availability (blocked)
+ * Stop galera-bundle-2 ( node3 ) due to no quorum (blocked)
+ * Stop galera:2 ( Promoted galera-bundle-2 ) due to no quorum (blocked)
+ * Stop stonith-fence_ipmilan-node1 ( node2 ) due to node availability (blocked)
+ * Stop stonith-fence_ipmilan-node3 ( node2 ) due to no quorum (blocked)
+ * Stop stonith-fence_ipmilan-node2 ( node3 ) due to no quorum (blocked)
+
+Executing Cluster Transition:
+ * Pseudo action: rabbitmq-bundle-clone_pre_notify_stop_0
+ * Pseudo action: galera-bundle_demote_0
+ * Pseudo action: rabbitmq-bundle_stop_0
+ * Resource action: rabbitmq notify on rabbitmq-bundle-0
+ * Pseudo action: rabbitmq-bundle-clone_confirmed-pre_notify_stop_0
+ * Pseudo action: rabbitmq-bundle-clone_stop_0
+ * Pseudo action: galera-bundle-master_demote_0
+ * Resource action: rabbitmq stop on rabbitmq-bundle-0
+ * Pseudo action: rabbitmq-bundle-clone_stopped_0
+ * Resource action: rabbitmq-bundle-0 stop on node1
+ * Resource action: rabbitmq-bundle-0 cancel=60000 on node1
+ * Resource action: galera demote on galera-bundle-0
+ * Pseudo action: galera-bundle-master_demoted_0
+ * Pseudo action: galera-bundle_demoted_0
+ * Pseudo action: galera-bundle_stop_0
+ * Pseudo action: rabbitmq-bundle-clone_post_notify_stopped_0
+ * Resource action: rabbitmq-bundle-docker-0 stop on node1
+ * Pseudo action: galera-bundle-master_stop_0
+ * Pseudo action: rabbitmq-bundle-clone_confirmed-post_notify_stopped_0
+ * Resource action: galera stop on galera-bundle-0
+ * Pseudo action: galera-bundle-master_stopped_0
+ * Resource action: galera-bundle-0 stop on node1
+ * Resource action: galera-bundle-0 cancel=60000 on node1
+ * Pseudo action: rabbitmq-bundle_stopped_0
+ * Resource action: galera-bundle-docker-0 stop on node1
+ * Pseudo action: galera-bundle_stopped_0
+Using the original execution date of: 2019-08-26 04:52:42Z
+
+Revised Cluster Status:
+ * Node List:
+ * Node node2: UNCLEAN (offline)
+ * Node node3: UNCLEAN (offline)
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * Container bundle set: rabbitmq-bundle [192.168.122.139:8787/rhosp13/openstack-rabbitmq:pcmklatest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Stopped
+ * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): FAILED node2 (UNCLEAN)
+ * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): FAILED node3 (UNCLEAN)
+ * Container bundle set: galera-bundle [192.168.122.139:8787/rhosp13/openstack-mariadb:pcmklatest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Stopped
+ * galera-bundle-1 (ocf:heartbeat:galera): FAILED Promoted node2 (UNCLEAN)
+ * galera-bundle-2 (ocf:heartbeat:galera): FAILED Promoted node3 (UNCLEAN)
+ * stonith-fence_ipmilan-node1 (stonith:fence_ipmilan): Started node2 (UNCLEAN)
+ * stonith-fence_ipmilan-node3 (stonith:fence_ipmilan): Started node2 (UNCLEAN)
+ * stonith-fence_ipmilan-node2 (stonith:fence_ipmilan): Started node3 (UNCLEAN)
diff --git a/cts/scheduler/summary/guest-node-cleanup.summary b/cts/scheduler/summary/guest-node-cleanup.summary
new file mode 100644
index 0000000..f68fb4f
--- /dev/null
+++ b/cts/scheduler/summary/guest-node-cleanup.summary
@@ -0,0 +1,55 @@
+Using the original execution date of: 2018-10-15 16:02:04Z
+Current cluster status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+ * GuestOnline: [ lxc2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-2
+ * FencingPass (stonith:fence_dummy): Started rhel7-3
+ * container1 (ocf:heartbeat:VirtualDomain): FAILED
+ * container2 (ocf:heartbeat:VirtualDomain): Started rhel7-1
+ * Clone Set: lxc-ms-master [lxc-ms] (promotable):
+ * Unpromoted: [ lxc2 ]
+ * Stopped: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+
+Transition Summary:
+ * Fence (reboot) lxc1 (resource: container1) 'guest is unclean'
+ * Start container1 ( rhel7-1 )
+ * Recover lxc-ms:1 ( Promoted lxc1 )
+ * Restart lxc1 ( rhel7-1 ) due to required container1 start
+
+Executing Cluster Transition:
+ * Resource action: container1 monitor on rhel7-1
+ * Pseudo action: lxc-ms-master_demote_0
+ * Resource action: lxc1 stop on rhel7-1
+ * Pseudo action: stonith-lxc1-reboot on lxc1
+ * Resource action: container1 start on rhel7-1
+ * Pseudo action: lxc-ms_demote_0
+ * Pseudo action: lxc-ms-master_demoted_0
+ * Pseudo action: lxc-ms-master_stop_0
+ * Resource action: lxc1 start on rhel7-1
+ * Resource action: lxc1 monitor=30000 on rhel7-1
+ * Pseudo action: lxc-ms_stop_0
+ * Pseudo action: lxc-ms-master_stopped_0
+ * Pseudo action: lxc-ms-master_start_0
+ * Resource action: lxc-ms start on lxc1
+ * Pseudo action: lxc-ms-master_running_0
+ * Pseudo action: lxc-ms-master_promote_0
+ * Resource action: lxc-ms promote on lxc1
+ * Pseudo action: lxc-ms-master_promoted_0
+Using the original execution date of: 2018-10-15 16:02:04Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+ * GuestOnline: [ lxc1 lxc2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-2
+ * FencingPass (stonith:fence_dummy): Started rhel7-3
+ * container1 (ocf:heartbeat:VirtualDomain): Started rhel7-1
+ * container2 (ocf:heartbeat:VirtualDomain): Started rhel7-1
+ * Clone Set: lxc-ms-master [lxc-ms] (promotable):
+ * Promoted: [ lxc1 ]
+ * Unpromoted: [ lxc2 ]
diff --git a/cts/scheduler/summary/guest-node-host-dies.summary b/cts/scheduler/summary/guest-node-host-dies.summary
new file mode 100644
index 0000000..84074c1
--- /dev/null
+++ b/cts/scheduler/summary/guest-node-host-dies.summary
@@ -0,0 +1,82 @@
+Current cluster status:
+ * Node List:
+ * Node rhel7-1: UNCLEAN (offline)
+ * Online: [ rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-4
+ * rsc_rhel7-1 (ocf:heartbeat:IPaddr2): Started rhel7-1 (UNCLEAN)
+ * container1 (ocf:heartbeat:VirtualDomain): FAILED rhel7-1 (UNCLEAN)
+ * container2 (ocf:heartbeat:VirtualDomain): FAILED rhel7-1 (UNCLEAN)
+ * Clone Set: lxc-ms-master [lxc-ms] (promotable):
+ * Stopped: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+
+Transition Summary:
+ * Fence (reboot) lxc2 (resource: container2) 'guest is unclean'
+ * Fence (reboot) lxc1 (resource: container1) 'guest is unclean'
+ * Fence (reboot) rhel7-1 'rsc_rhel7-1 is thought to be active there'
+ * Restart Fencing ( rhel7-4 ) due to resource definition change
+ * Move rsc_rhel7-1 ( rhel7-1 -> rhel7-5 )
+ * Recover container1 ( rhel7-1 -> rhel7-2 )
+ * Recover container2 ( rhel7-1 -> rhel7-3 )
+ * Recover lxc-ms:0 ( Promoted lxc1 )
+ * Recover lxc-ms:1 ( Unpromoted lxc2 )
+ * Move lxc1 ( rhel7-1 -> rhel7-2 )
+ * Move lxc2 ( rhel7-1 -> rhel7-3 )
+
+Executing Cluster Transition:
+ * Resource action: Fencing stop on rhel7-4
+ * Pseudo action: lxc-ms-master_demote_0
+ * Pseudo action: lxc1_stop_0
+ * Resource action: lxc1 monitor on rhel7-5
+ * Resource action: lxc1 monitor on rhel7-4
+ * Resource action: lxc1 monitor on rhel7-3
+ * Pseudo action: lxc2_stop_0
+ * Resource action: lxc2 monitor on rhel7-5
+ * Resource action: lxc2 monitor on rhel7-4
+ * Resource action: lxc2 monitor on rhel7-2
+ * Fencing rhel7-1 (reboot)
+ * Pseudo action: rsc_rhel7-1_stop_0
+ * Pseudo action: container1_stop_0
+ * Pseudo action: container2_stop_0
+ * Pseudo action: stonith-lxc2-reboot on lxc2
+ * Pseudo action: stonith-lxc1-reboot on lxc1
+ * Resource action: Fencing start on rhel7-4
+ * Resource action: Fencing monitor=120000 on rhel7-4
+ * Resource action: rsc_rhel7-1 start on rhel7-5
+ * Resource action: container1 start on rhel7-2
+ * Resource action: container2 start on rhel7-3
+ * Pseudo action: lxc-ms_demote_0
+ * Pseudo action: lxc-ms-master_demoted_0
+ * Pseudo action: lxc-ms-master_stop_0
+ * Resource action: lxc1 start on rhel7-2
+ * Resource action: lxc2 start on rhel7-3
+ * Resource action: rsc_rhel7-1 monitor=5000 on rhel7-5
+ * Pseudo action: lxc-ms_stop_0
+ * Pseudo action: lxc-ms_stop_0
+ * Pseudo action: lxc-ms-master_stopped_0
+ * Pseudo action: lxc-ms-master_start_0
+ * Resource action: lxc1 monitor=30000 on rhel7-2
+ * Resource action: lxc2 monitor=30000 on rhel7-3
+ * Resource action: lxc-ms start on lxc1
+ * Resource action: lxc-ms start on lxc2
+ * Pseudo action: lxc-ms-master_running_0
+ * Resource action: lxc-ms monitor=10000 on lxc2
+ * Pseudo action: lxc-ms-master_promote_0
+ * Resource action: lxc-ms promote on lxc1
+ * Pseudo action: lxc-ms-master_promoted_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+ * OFFLINE: [ rhel7-1 ]
+ * GuestOnline: [ lxc1 lxc2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-4
+ * rsc_rhel7-1 (ocf:heartbeat:IPaddr2): Started rhel7-5
+ * container1 (ocf:heartbeat:VirtualDomain): Started rhel7-2
+ * container2 (ocf:heartbeat:VirtualDomain): Started rhel7-3
+ * Clone Set: lxc-ms-master [lxc-ms] (promotable):
+ * Promoted: [ lxc1 ]
+ * Unpromoted: [ lxc2 ]
diff --git a/cts/scheduler/summary/history-1.summary b/cts/scheduler/summary/history-1.summary
new file mode 100644
index 0000000..74d31ec
--- /dev/null
+++ b/cts/scheduler/summary/history-1.summary
@@ -0,0 +1,55 @@
+Current cluster status:
+ * Node List:
+ * Online: [ pcmk-1 pcmk-2 pcmk-3 ]
+ * OFFLINE: [ pcmk-4 ]
+
+ * Full List of Resources:
+ * Clone Set: Fencing [FencingChild]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-3 ]
+ * Stopped: [ pcmk-4 ]
+ * Resource Group: group-1:
+ * r192.168.101.181 (ocf:heartbeat:IPaddr): Stopped
+ * r192.168.101.182 (ocf:heartbeat:IPaddr): Stopped
+ * r192.168.101.183 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_pcmk-1 (ocf:heartbeat:IPaddr): Started pcmk-1
+ * rsc_pcmk-2 (ocf:heartbeat:IPaddr): Started pcmk-2
+ * rsc_pcmk-3 (ocf:heartbeat:IPaddr): Started pcmk-3
+ * rsc_pcmk-4 (ocf:heartbeat:IPaddr): Started pcmk-3
+ * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Stopped
+ * migrator (ocf:pacemaker:Dummy): Started pcmk-1
+ * Clone Set: Connectivity [ping-1]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-3 ]
+ * Stopped: [ pcmk-4 ]
+ * Clone Set: master-1 [stateful-1] (promotable):
+ * Unpromoted: [ pcmk-1 pcmk-2 pcmk-3 ]
+ * Stopped: [ pcmk-4 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ pcmk-1 pcmk-2 pcmk-3 ]
+ * OFFLINE: [ pcmk-4 ]
+
+ * Full List of Resources:
+ * Clone Set: Fencing [FencingChild]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-3 ]
+ * Stopped: [ pcmk-4 ]
+ * Resource Group: group-1:
+ * r192.168.101.181 (ocf:heartbeat:IPaddr): Stopped
+ * r192.168.101.182 (ocf:heartbeat:IPaddr): Stopped
+ * r192.168.101.183 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_pcmk-1 (ocf:heartbeat:IPaddr): Started pcmk-1
+ * rsc_pcmk-2 (ocf:heartbeat:IPaddr): Started pcmk-2
+ * rsc_pcmk-3 (ocf:heartbeat:IPaddr): Started pcmk-3
+ * rsc_pcmk-4 (ocf:heartbeat:IPaddr): Started pcmk-3
+ * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Stopped
+ * migrator (ocf:pacemaker:Dummy): Started pcmk-1
+ * Clone Set: Connectivity [ping-1]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-3 ]
+ * Stopped: [ pcmk-4 ]
+ * Clone Set: master-1 [stateful-1] (promotable):
+ * Unpromoted: [ pcmk-1 pcmk-2 pcmk-3 ]
+ * Stopped: [ pcmk-4 ]
diff --git a/cts/scheduler/summary/honor_stonith_rsc_order1.summary b/cts/scheduler/summary/honor_stonith_rsc_order1.summary
new file mode 100644
index 0000000..392cebc
--- /dev/null
+++ b/cts/scheduler/summary/honor_stonith_rsc_order1.summary
@@ -0,0 +1,38 @@
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * Clone Set: S_CLONE [S_A]:
+ * Stopped: [ fc16-builder ]
+ * Resource Group: S_GROUP:
+ * S_B (stonith:fence_xvm): Stopped
+ * A (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start S_A:0 ( fc16-builder )
+ * Start S_B ( fc16-builder )
+ * Start A ( fc16-builder )
+
+Executing Cluster Transition:
+ * Resource action: S_A:0 monitor on fc16-builder
+ * Pseudo action: S_GROUP_start_0
+ * Resource action: S_B monitor on fc16-builder
+ * Resource action: A monitor on fc16-builder
+ * Resource action: S_B start on fc16-builder
+ * Resource action: A start on fc16-builder
+ * Pseudo action: S_GROUP_running_0
+ * Pseudo action: S_CLONE_start_0
+ * Resource action: S_A:0 start on fc16-builder
+ * Pseudo action: S_CLONE_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * Clone Set: S_CLONE [S_A]:
+ * Started: [ fc16-builder ]
+ * Resource Group: S_GROUP:
+ * S_B (stonith:fence_xvm): Started fc16-builder
+ * A (ocf:pacemaker:Dummy): Started fc16-builder
diff --git a/cts/scheduler/summary/honor_stonith_rsc_order2.summary b/cts/scheduler/summary/honor_stonith_rsc_order2.summary
new file mode 100644
index 0000000..281178f
--- /dev/null
+++ b/cts/scheduler/summary/honor_stonith_rsc_order2.summary
@@ -0,0 +1,48 @@
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * Clone Set: S_CLONE [S_A]:
+ * Stopped: [ fc16-builder ]
+ * Resource Group: S_GROUP:
+ * S_B (stonith:fence_xvm): Stopped
+ * S_C (stonith:fence_xvm): Stopped
+ * S_D (stonith:fence_xvm): Stopped
+ * A (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start S_A:0 ( fc16-builder )
+ * Start S_B ( fc16-builder )
+ * Start S_C ( fc16-builder )
+ * Start S_D ( fc16-builder )
+ * Start A ( fc16-builder )
+
+Executing Cluster Transition:
+ * Resource action: S_A:0 monitor on fc16-builder
+ * Pseudo action: S_GROUP_start_0
+ * Resource action: S_B monitor on fc16-builder
+ * Resource action: S_C monitor on fc16-builder
+ * Resource action: S_D monitor on fc16-builder
+ * Resource action: A monitor on fc16-builder
+ * Resource action: S_B start on fc16-builder
+ * Resource action: S_C start on fc16-builder
+ * Resource action: S_D start on fc16-builder
+ * Resource action: A start on fc16-builder
+ * Pseudo action: S_GROUP_running_0
+ * Pseudo action: S_CLONE_start_0
+ * Resource action: S_A:0 start on fc16-builder
+ * Pseudo action: S_CLONE_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * Clone Set: S_CLONE [S_A]:
+ * Started: [ fc16-builder ]
+ * Resource Group: S_GROUP:
+ * S_B (stonith:fence_xvm): Started fc16-builder
+ * S_C (stonith:fence_xvm): Started fc16-builder
+ * S_D (stonith:fence_xvm): Started fc16-builder
+ * A (ocf:pacemaker:Dummy): Started fc16-builder
diff --git a/cts/scheduler/summary/honor_stonith_rsc_order3.summary b/cts/scheduler/summary/honor_stonith_rsc_order3.summary
new file mode 100644
index 0000000..3366a6b
--- /dev/null
+++ b/cts/scheduler/summary/honor_stonith_rsc_order3.summary
@@ -0,0 +1,46 @@
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * Clone Set: S_CLONE [S_A]:
+ * Stopped: [ fc16-builder ]
+ * Clone Set: S_CLONE2 [S_GROUP]:
+ * Stopped: [ fc16-builder ]
+ * A (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start S_A:0 ( fc16-builder )
+ * Start S_B:0 ( fc16-builder )
+ * Start S_C:0 ( fc16-builder )
+ * Start S_D:0 ( fc16-builder )
+ * Start A ( fc16-builder )
+
+Executing Cluster Transition:
+ * Resource action: S_A:0 monitor on fc16-builder
+ * Resource action: S_B:0 monitor on fc16-builder
+ * Resource action: S_C:0 monitor on fc16-builder
+ * Resource action: S_D:0 monitor on fc16-builder
+ * Pseudo action: S_CLONE2_start_0
+ * Resource action: A monitor on fc16-builder
+ * Pseudo action: S_GROUP:0_start_0
+ * Resource action: S_B:0 start on fc16-builder
+ * Resource action: S_C:0 start on fc16-builder
+ * Resource action: S_D:0 start on fc16-builder
+ * Resource action: A start on fc16-builder
+ * Pseudo action: S_GROUP:0_running_0
+ * Pseudo action: S_CLONE2_running_0
+ * Pseudo action: S_CLONE_start_0
+ * Resource action: S_A:0 start on fc16-builder
+ * Pseudo action: S_CLONE_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * Clone Set: S_CLONE [S_A]:
+ * Started: [ fc16-builder ]
+ * Clone Set: S_CLONE2 [S_GROUP]:
+ * Started: [ fc16-builder ]
+ * A (ocf:pacemaker:Dummy): Started fc16-builder
diff --git a/cts/scheduler/summary/honor_stonith_rsc_order4.summary b/cts/scheduler/summary/honor_stonith_rsc_order4.summary
new file mode 100644
index 0000000..d93ffdf
--- /dev/null
+++ b/cts/scheduler/summary/honor_stonith_rsc_order4.summary
@@ -0,0 +1,30 @@
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * S_A (stonith:fence_xvm): Stopped
+ * S_B (stonith:fence_xvm): Stopped
+ * A (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start S_A ( fc16-builder )
+ * Start S_B ( fc16-builder )
+ * Start A ( fc16-builder )
+
+Executing Cluster Transition:
+ * Resource action: S_A monitor on fc16-builder
+ * Resource action: S_B monitor on fc16-builder
+ * Resource action: A monitor on fc16-builder
+ * Resource action: S_B start on fc16-builder
+ * Resource action: A start on fc16-builder
+ * Resource action: S_A start on fc16-builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * S_A (stonith:fence_xvm): Started fc16-builder
+ * S_B (stonith:fence_xvm): Started fc16-builder
+ * A (ocf:pacemaker:Dummy): Started fc16-builder
diff --git a/cts/scheduler/summary/ignore_stonith_rsc_order1.summary b/cts/scheduler/summary/ignore_stonith_rsc_order1.summary
new file mode 100644
index 0000000..0331f12
--- /dev/null
+++ b/cts/scheduler/summary/ignore_stonith_rsc_order1.summary
@@ -0,0 +1,25 @@
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * S_A (stonith:fence_xvm): Stopped
+ * A (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start S_A ( fc16-builder )
+ * Start A ( fc16-builder )
+
+Executing Cluster Transition:
+ * Resource action: S_A monitor on fc16-builder
+ * Resource action: A monitor on fc16-builder
+ * Resource action: A start on fc16-builder
+ * Resource action: S_A start on fc16-builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * S_A (stonith:fence_xvm): Started fc16-builder
+ * A (ocf:pacemaker:Dummy): Started fc16-builder
diff --git a/cts/scheduler/summary/ignore_stonith_rsc_order2.summary b/cts/scheduler/summary/ignore_stonith_rsc_order2.summary
new file mode 100644
index 0000000..cd37f0b
--- /dev/null
+++ b/cts/scheduler/summary/ignore_stonith_rsc_order2.summary
@@ -0,0 +1,34 @@
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * S_A (stonith:fence_xvm): Stopped
+ * Resource Group: MIXED_GROUP:
+ * A (ocf:pacemaker:Dummy): Stopped
+ * S_B (stonith:fence_xvm): Stopped
+
+Transition Summary:
+ * Start S_A ( fc16-builder )
+ * Start A ( fc16-builder )
+ * Start S_B ( fc16-builder )
+
+Executing Cluster Transition:
+ * Resource action: S_A monitor on fc16-builder
+ * Pseudo action: MIXED_GROUP_start_0
+ * Resource action: A monitor on fc16-builder
+ * Resource action: S_B monitor on fc16-builder
+ * Resource action: A start on fc16-builder
+ * Resource action: S_B start on fc16-builder
+ * Pseudo action: MIXED_GROUP_running_0
+ * Resource action: S_A start on fc16-builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * S_A (stonith:fence_xvm): Started fc16-builder
+ * Resource Group: MIXED_GROUP:
+ * A (ocf:pacemaker:Dummy): Started fc16-builder
+ * S_B (stonith:fence_xvm): Started fc16-builder
diff --git a/cts/scheduler/summary/ignore_stonith_rsc_order3.summary b/cts/scheduler/summary/ignore_stonith_rsc_order3.summary
new file mode 100644
index 0000000..36b5bf5
--- /dev/null
+++ b/cts/scheduler/summary/ignore_stonith_rsc_order3.summary
@@ -0,0 +1,38 @@
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * Clone Set: S_CLONE [S_A]:
+ * Stopped: [ fc16-builder ]
+ * Resource Group: MIXED_GROUP:
+ * A (ocf:pacemaker:Dummy): Stopped
+ * S_B (stonith:fence_xvm): Stopped
+
+Transition Summary:
+ * Start S_A:0 ( fc16-builder )
+ * Start A ( fc16-builder )
+ * Start S_B ( fc16-builder )
+
+Executing Cluster Transition:
+ * Resource action: S_A:0 monitor on fc16-builder
+ * Pseudo action: MIXED_GROUP_start_0
+ * Resource action: A monitor on fc16-builder
+ * Resource action: S_B monitor on fc16-builder
+ * Resource action: A start on fc16-builder
+ * Resource action: S_B start on fc16-builder
+ * Pseudo action: MIXED_GROUP_running_0
+ * Pseudo action: S_CLONE_start_0
+ * Resource action: S_A:0 start on fc16-builder
+ * Pseudo action: S_CLONE_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * Clone Set: S_CLONE [S_A]:
+ * Started: [ fc16-builder ]
+ * Resource Group: MIXED_GROUP:
+ * A (ocf:pacemaker:Dummy): Started fc16-builder
+ * S_B (stonith:fence_xvm): Started fc16-builder
diff --git a/cts/scheduler/summary/ignore_stonith_rsc_order4.summary b/cts/scheduler/summary/ignore_stonith_rsc_order4.summary
new file mode 100644
index 0000000..e56f65c
--- /dev/null
+++ b/cts/scheduler/summary/ignore_stonith_rsc_order4.summary
@@ -0,0 +1,38 @@
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * Clone Set: S_CLONE [S_A]:
+ * Stopped: [ fc16-builder ]
+ * Clone Set: S_CLONE2 [MIXED_GROUP]:
+ * Stopped: [ fc16-builder ]
+
+Transition Summary:
+ * Start S_A:0 ( fc16-builder )
+ * Start A:0 ( fc16-builder )
+ * Start S_B:0 ( fc16-builder )
+
+Executing Cluster Transition:
+ * Resource action: S_A:0 monitor on fc16-builder
+ * Resource action: A:0 monitor on fc16-builder
+ * Resource action: S_B:0 monitor on fc16-builder
+ * Pseudo action: S_CLONE2_start_0
+ * Pseudo action: MIXED_GROUP:0_start_0
+ * Resource action: A:0 start on fc16-builder
+ * Resource action: S_B:0 start on fc16-builder
+ * Pseudo action: MIXED_GROUP:0_running_0
+ * Pseudo action: S_CLONE2_running_0
+ * Pseudo action: S_CLONE_start_0
+ * Resource action: S_A:0 start on fc16-builder
+ * Pseudo action: S_CLONE_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+
+ * Full List of Resources:
+ * Clone Set: S_CLONE [S_A]:
+ * Started: [ fc16-builder ]
+ * Clone Set: S_CLONE2 [MIXED_GROUP]:
+ * Started: [ fc16-builder ]
diff --git a/cts/scheduler/summary/inc0.summary b/cts/scheduler/summary/inc0.summary
new file mode 100644
index 0000000..947d5e5
--- /dev/null
+++ b/cts/scheduler/summary/inc0.summary
@@ -0,0 +1,47 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1] (unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:1 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:2 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:3 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:4 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start child_rsc1:0 ( node1 )
+ * Start child_rsc1:1 ( node2 )
+ * Start child_rsc1:2 ( node1 )
+ * Start child_rsc1:3 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: child_rsc1:0 monitor on node2
+ * Resource action: child_rsc1:0 monitor on node1
+ * Resource action: child_rsc1:1 monitor on node2
+ * Resource action: child_rsc1:1 monitor on node1
+ * Resource action: child_rsc1:2 monitor on node2
+ * Resource action: child_rsc1:2 monitor on node1
+ * Resource action: child_rsc1:3 monitor on node2
+ * Resource action: child_rsc1:3 monitor on node1
+ * Resource action: child_rsc1:4 monitor on node2
+ * Resource action: child_rsc1:4 monitor on node1
+ * Pseudo action: rsc1_start_0
+ * Resource action: child_rsc1:0 start on node1
+ * Resource action: child_rsc1:1 start on node2
+ * Resource action: child_rsc1:2 start on node1
+ * Resource action: child_rsc1:3 start on node2
+ * Pseudo action: rsc1_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1] (unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:1 (ocf:heartbeat:apache): Started node2
+ * child_rsc1:2 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:3 (ocf:heartbeat:apache): Started node2
+ * child_rsc1:4 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/inc1.summary b/cts/scheduler/summary/inc1.summary
new file mode 100644
index 0000000..5201a44
--- /dev/null
+++ b/cts/scheduler/summary/inc1.summary
@@ -0,0 +1,59 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc2 [child_rsc2] (unique):
+ * child_rsc2:0 (ocf:heartbeat:apache): Stopped
+ * child_rsc2:1 (ocf:heartbeat:apache): Stopped
+ * child_rsc2:2 (ocf:heartbeat:apache): Stopped
+ * child_rsc2:3 (ocf:heartbeat:apache): Stopped
+ * child_rsc2:4 (ocf:heartbeat:apache): Stopped
+ * rsc3 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+ * Start child_rsc2:0 ( node2 )
+ * Start child_rsc2:1 ( node1 )
+ * Start child_rsc2:2 ( node2 )
+ * Start child_rsc2:3 ( node1 )
+ * Start rsc3 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: child_rsc2:0 monitor on node2
+ * Resource action: child_rsc2:0 monitor on node1
+ * Resource action: child_rsc2:1 monitor on node2
+ * Resource action: child_rsc2:1 monitor on node1
+ * Resource action: child_rsc2:2 monitor on node2
+ * Resource action: child_rsc2:2 monitor on node1
+ * Resource action: child_rsc2:3 monitor on node2
+ * Resource action: child_rsc2:3 monitor on node1
+ * Resource action: child_rsc2:4 monitor on node2
+ * Resource action: child_rsc2:4 monitor on node1
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc1 start on node1
+ * Pseudo action: rsc2_start_0
+ * Resource action: child_rsc2:0 start on node2
+ * Resource action: child_rsc2:1 start on node1
+ * Resource action: child_rsc2:2 start on node2
+ * Resource action: child_rsc2:3 start on node1
+ * Pseudo action: rsc2_running_0
+ * Resource action: rsc3 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * Clone Set: rsc2 [child_rsc2] (unique):
+ * child_rsc2:0 (ocf:heartbeat:apache): Started node2
+ * child_rsc2:1 (ocf:heartbeat:apache): Started node1
+ * child_rsc2:2 (ocf:heartbeat:apache): Started node2
+ * child_rsc2:3 (ocf:heartbeat:apache): Started node1
+ * child_rsc2:4 (ocf:heartbeat:apache): Stopped
+ * rsc3 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/inc10.summary b/cts/scheduler/summary/inc10.summary
new file mode 100644
index 0000000..1037e6c
--- /dev/null
+++ b/cts/scheduler/summary/inc10.summary
@@ -0,0 +1,46 @@
+Current cluster status:
+ * Node List:
+ * Node xen-2: standby (with active resources)
+ * Online: [ xen-1 xen-3 xen-4 ]
+
+ * Full List of Resources:
+ * Clone Set: DoFencing [child_DoFencing]:
+ * Started: [ xen-1 xen-2 xen-3 xen-4 ]
+ * Clone Set: ocfs2-clone [ocfs2]:
+ * Started: [ xen-1 xen-2 xen-3 xen-4 ]
+
+Transition Summary:
+ * Stop child_DoFencing:1 ( xen-2 ) due to node availability
+ * Stop ocfs2:1 ( xen-2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: DoFencing_stop_0
+ * Pseudo action: ocfs2-clone_pre_notify_stop_0
+ * Resource action: child_DoFencing:2 stop on xen-2
+ * Pseudo action: DoFencing_stopped_0
+ * Resource action: ocfs2:1 notify on xen-3
+ * Resource action: ocfs2:1 notify on xen-2
+ * Resource action: ocfs2:3 notify on xen-1
+ * Resource action: ocfs2:0 notify on xen-4
+ * Pseudo action: ocfs2-clone_confirmed-pre_notify_stop_0
+ * Pseudo action: ocfs2-clone_stop_0
+ * Resource action: ocfs2:1 stop on xen-2
+ * Pseudo action: ocfs2-clone_stopped_0
+ * Pseudo action: ocfs2-clone_post_notify_stopped_0
+ * Resource action: ocfs2:1 notify on xen-3
+ * Resource action: ocfs2:3 notify on xen-1
+ * Resource action: ocfs2:0 notify on xen-4
+ * Pseudo action: ocfs2-clone_confirmed-post_notify_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Node xen-2: standby
+ * Online: [ xen-1 xen-3 xen-4 ]
+
+ * Full List of Resources:
+ * Clone Set: DoFencing [child_DoFencing]:
+ * Started: [ xen-1 xen-3 xen-4 ]
+ * Stopped: [ xen-2 ]
+ * Clone Set: ocfs2-clone [ocfs2]:
+ * Started: [ xen-1 xen-3 xen-4 ]
+ * Stopped: [ xen-2 ]
diff --git a/cts/scheduler/summary/inc11.summary b/cts/scheduler/summary/inc11.summary
new file mode 100644
index 0000000..1149123
--- /dev/null
+++ b/cts/scheduler/summary/inc11.summary
@@ -0,0 +1,43 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node0 node1 node2 ]
+
+ * Full List of Resources:
+ * simple-rsc (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc1 [child_rsc1] (promotable, unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:1 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start simple-rsc ( node2 )
+ * Start child_rsc1:0 ( node1 )
+ * Promote child_rsc1:1 ( Stopped -> Promoted node2 )
+
+Executing Cluster Transition:
+ * Resource action: simple-rsc monitor on node2
+ * Resource action: simple-rsc monitor on node1
+ * Resource action: simple-rsc monitor on node0
+ * Resource action: child_rsc1:0 monitor on node2
+ * Resource action: child_rsc1:0 monitor on node1
+ * Resource action: child_rsc1:0 monitor on node0
+ * Resource action: child_rsc1:1 monitor on node2
+ * Resource action: child_rsc1:1 monitor on node1
+ * Resource action: child_rsc1:1 monitor on node0
+ * Pseudo action: rsc1_start_0
+ * Resource action: simple-rsc start on node2
+ * Resource action: child_rsc1:0 start on node1
+ * Resource action: child_rsc1:1 start on node2
+ * Pseudo action: rsc1_running_0
+ * Pseudo action: rsc1_promote_0
+ * Resource action: child_rsc1:1 promote on node2
+ * Pseudo action: rsc1_promoted_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node0 node1 node2 ]
+
+ * Full List of Resources:
+ * simple-rsc (ocf:heartbeat:apache): Started node2
+ * Clone Set: rsc1 [child_rsc1] (promotable, unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Unpromoted node1
+ * child_rsc1:1 (ocf:heartbeat:apache): Promoted node2
diff --git a/cts/scheduler/summary/inc12.summary b/cts/scheduler/summary/inc12.summary
new file mode 100644
index 0000000..36ffffa
--- /dev/null
+++ b/cts/scheduler/summary/inc12.summary
@@ -0,0 +1,132 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c001n02 c001n03 c001n04 c001n05 c001n06 c001n07 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Stopped
+ * Resource Group: group-1:
+ * ocf_192.168.100.181 (ocf:heartbeat:IPaddr): Started c001n02
+ * heartbeat_192.168.100.182 (ocf:heartbeat:IPaddr): Started c001n02
+ * ocf_192.168.100.183 (ocf:heartbeat:IPaddr): Started c001n02
+ * lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n04
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n05
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n04 (ocf:heartbeat:IPaddr): Started c001n04
+ * rsc_c001n05 (ocf:heartbeat:IPaddr): Started c001n05
+ * rsc_c001n06 (ocf:heartbeat:IPaddr): Started c001n06
+ * rsc_c001n07 (ocf:heartbeat:IPaddr): Started c001n07
+ * Clone Set: DoFencing [child_DoFencing]:
+ * Started: [ c001n02 c001n04 c001n05 c001n06 c001n07 ]
+ * Stopped: [ c001n03 ]
+ * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique):
+ * ocf_msdummy:0 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:1 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:2 (ocf:heartbeat:Stateful): Unpromoted c001n04
+ * ocf_msdummy:3 (ocf:heartbeat:Stateful): Unpromoted c001n04
+ * ocf_msdummy:4 (ocf:heartbeat:Stateful): Unpromoted c001n05
+ * ocf_msdummy:5 (ocf:heartbeat:Stateful): Unpromoted c001n05
+ * ocf_msdummy:6 (ocf:heartbeat:Stateful): Unpromoted c001n06
+ * ocf_msdummy:7 (ocf:heartbeat:Stateful): Unpromoted c001n06
+ * ocf_msdummy:8 (ocf:heartbeat:Stateful): Unpromoted c001n07
+ * ocf_msdummy:9 (ocf:heartbeat:Stateful): Unpromoted c001n07
+ * ocf_msdummy:10 (ocf:heartbeat:Stateful): Unpromoted c001n02
+ * ocf_msdummy:11 (ocf:heartbeat:Stateful): Unpromoted c001n02
+
+Transition Summary:
+ * Stop ocf_192.168.100.181 ( c001n02 ) due to node availability
+ * Stop heartbeat_192.168.100.182 ( c001n02 ) due to node availability
+ * Stop ocf_192.168.100.183 ( c001n02 ) due to node availability
+ * Stop lsb_dummy ( c001n04 ) due to node availability
+ * Stop rsc_c001n03 ( c001n05 ) due to node availability
+ * Stop rsc_c001n02 ( c001n02 ) due to node availability
+ * Stop rsc_c001n04 ( c001n04 ) due to node availability
+ * Stop rsc_c001n05 ( c001n05 ) due to node availability
+ * Stop rsc_c001n06 ( c001n06 ) due to node availability
+ * Stop rsc_c001n07 ( c001n07 ) due to node availability
+ * Stop child_DoFencing:0 ( c001n02 ) due to node availability
+ * Stop child_DoFencing:1 ( c001n04 ) due to node availability
+ * Stop child_DoFencing:2 ( c001n05 ) due to node availability
+ * Stop child_DoFencing:3 ( c001n06 ) due to node availability
+ * Stop child_DoFencing:4 ( c001n07 ) due to node availability
+ * Stop ocf_msdummy:2 ( Unpromoted c001n04 ) due to node availability
+ * Stop ocf_msdummy:3 ( Unpromoted c001n04 ) due to node availability
+ * Stop ocf_msdummy:4 ( Unpromoted c001n05 ) due to node availability
+ * Stop ocf_msdummy:5 ( Unpromoted c001n05 ) due to node availability
+ * Stop ocf_msdummy:6 ( Unpromoted c001n06 ) due to node availability
+ * Stop ocf_msdummy:7 ( Unpromoted c001n06 ) due to node availability
+ * Stop ocf_msdummy:8 ( Unpromoted c001n07 ) due to node availability
+ * Stop ocf_msdummy:9 ( Unpromoted c001n07 ) due to node availability
+ * Stop ocf_msdummy:10 ( Unpromoted c001n02 ) due to node availability
+ * Stop ocf_msdummy:11 ( Unpromoted c001n02 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: group-1_stop_0
+ * Resource action: ocf_192.168.100.183 stop on c001n02
+ * Resource action: lsb_dummy stop on c001n04
+ * Resource action: rsc_c001n03 stop on c001n05
+ * Resource action: rsc_c001n02 stop on c001n02
+ * Resource action: rsc_c001n04 stop on c001n04
+ * Resource action: rsc_c001n05 stop on c001n05
+ * Resource action: rsc_c001n06 stop on c001n06
+ * Resource action: rsc_c001n07 stop on c001n07
+ * Pseudo action: DoFencing_stop_0
+ * Pseudo action: master_rsc_1_stop_0
+ * Resource action: heartbeat_192.168.100.182 stop on c001n02
+ * Resource action: child_DoFencing:1 stop on c001n02
+ * Resource action: child_DoFencing:2 stop on c001n04
+ * Resource action: child_DoFencing:3 stop on c001n05
+ * Resource action: child_DoFencing:4 stop on c001n06
+ * Resource action: child_DoFencing:5 stop on c001n07
+ * Pseudo action: DoFencing_stopped_0
+ * Resource action: ocf_msdummy:2 stop on c001n04
+ * Resource action: ocf_msdummy:3 stop on c001n04
+ * Resource action: ocf_msdummy:4 stop on c001n05
+ * Resource action: ocf_msdummy:5 stop on c001n05
+ * Resource action: ocf_msdummy:6 stop on c001n06
+ * Resource action: ocf_msdummy:7 stop on c001n06
+ * Resource action: ocf_msdummy:8 stop on c001n07
+ * Resource action: ocf_msdummy:9 stop on c001n07
+ * Resource action: ocf_msdummy:10 stop on c001n02
+ * Resource action: ocf_msdummy:11 stop on c001n02
+ * Pseudo action: master_rsc_1_stopped_0
+ * Cluster action: do_shutdown on c001n07
+ * Cluster action: do_shutdown on c001n06
+ * Cluster action: do_shutdown on c001n05
+ * Cluster action: do_shutdown on c001n04
+ * Resource action: ocf_192.168.100.181 stop on c001n02
+ * Cluster action: do_shutdown on c001n02
+ * Pseudo action: group-1_stopped_0
+ * Cluster action: do_shutdown on c001n03
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n02 c001n03 c001n04 c001n05 c001n06 c001n07 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Stopped
+ * Resource Group: group-1:
+ * ocf_192.168.100.181 (ocf:heartbeat:IPaddr): Stopped
+ * heartbeat_192.168.100.182 (ocf:heartbeat:IPaddr): Stopped
+ * ocf_192.168.100.183 (ocf:heartbeat:IPaddr): Stopped
+ * lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Stopped
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n04 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n05 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n06 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n07 (ocf:heartbeat:IPaddr): Stopped
+ * Clone Set: DoFencing [child_DoFencing]:
+ * Stopped: [ c001n02 c001n03 c001n04 c001n05 c001n06 c001n07 ]
+ * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique):
+ * ocf_msdummy:0 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:1 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:2 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:3 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:4 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:5 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:6 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:7 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:8 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:9 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:10 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:11 (ocf:heartbeat:Stateful): Stopped
diff --git a/cts/scheduler/summary/inc2.summary b/cts/scheduler/summary/inc2.summary
new file mode 100644
index 0000000..bf90e78
--- /dev/null
+++ b/cts/scheduler/summary/inc2.summary
@@ -0,0 +1,44 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1] (unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:1 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:2 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:3 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:4 (ocf:heartbeat:apache): Started node1
+
+Transition Summary:
+ * Move child_rsc1:2 ( node1 -> node2 )
+ * Move child_rsc1:3 ( node1 -> node2 )
+ * Stop child_rsc1:4 ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: child_rsc1:0 monitor on node2
+ * Resource action: child_rsc1:1 monitor on node2
+ * Resource action: child_rsc1:2 monitor on node2
+ * Resource action: child_rsc1:3 monitor on node2
+ * Resource action: child_rsc1:4 monitor on node2
+ * Pseudo action: rsc1_stop_0
+ * Resource action: child_rsc1:2 stop on node1
+ * Resource action: child_rsc1:3 stop on node1
+ * Resource action: child_rsc1:4 stop on node1
+ * Pseudo action: rsc1_stopped_0
+ * Pseudo action: rsc1_start_0
+ * Resource action: child_rsc1:2 start on node2
+ * Resource action: child_rsc1:3 start on node2
+ * Pseudo action: rsc1_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1] (unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:1 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:2 (ocf:heartbeat:apache): Started node2
+ * child_rsc1:3 (ocf:heartbeat:apache): Started node2
+ * child_rsc1:4 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/inc3.summary b/cts/scheduler/summary/inc3.summary
new file mode 100644
index 0000000..7256446
--- /dev/null
+++ b/cts/scheduler/summary/inc3.summary
@@ -0,0 +1,71 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1] (unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:1 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:2 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:3 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:4 (ocf:heartbeat:apache): Started node1
+ * Clone Set: rsc2 [child_rsc2] (unique):
+ * child_rsc2:0 (ocf:heartbeat:apache): Started node2
+ * child_rsc2:1 (ocf:heartbeat:apache): Started node2
+ * child_rsc2:2 (ocf:heartbeat:apache): Started node2
+ * child_rsc2:3 (ocf:heartbeat:apache): Started node2
+ * child_rsc2:4 (ocf:heartbeat:apache): Started node2
+
+Transition Summary:
+ * Move child_rsc1:2 ( node1 -> node2 )
+ * Move child_rsc1:3 ( node1 -> node2 )
+ * Stop child_rsc1:4 ( node1 ) due to node availability
+ * Move child_rsc2:3 ( node2 -> node1 )
+ * Move child_rsc2:4 ( node2 -> node1 )
+
+Executing Cluster Transition:
+ * Resource action: child_rsc1:0 monitor on node2
+ * Resource action: child_rsc1:1 monitor on node2
+ * Resource action: child_rsc1:2 monitor on node2
+ * Resource action: child_rsc1:3 monitor on node2
+ * Resource action: child_rsc1:4 monitor on node2
+ * Resource action: child_rsc2:0 monitor on node1
+ * Resource action: child_rsc2:1 monitor on node1
+ * Resource action: child_rsc2:2 monitor on node1
+ * Resource action: child_rsc2:3 monitor on node1
+ * Resource action: child_rsc2:4 monitor on node1
+ * Pseudo action: rsc2_stop_0
+ * Resource action: child_rsc2:3 stop on node2
+ * Resource action: child_rsc2:4 stop on node2
+ * Pseudo action: rsc2_stopped_0
+ * Pseudo action: rsc1_stop_0
+ * Resource action: child_rsc1:2 stop on node1
+ * Resource action: child_rsc1:3 stop on node1
+ * Resource action: child_rsc1:4 stop on node1
+ * Pseudo action: rsc1_stopped_0
+ * Pseudo action: rsc1_start_0
+ * Resource action: child_rsc1:2 start on node2
+ * Resource action: child_rsc1:3 start on node2
+ * Pseudo action: rsc1_running_0
+ * Pseudo action: rsc2_start_0
+ * Resource action: child_rsc2:3 start on node1
+ * Resource action: child_rsc2:4 start on node1
+ * Pseudo action: rsc2_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1] (unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:1 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:2 (ocf:heartbeat:apache): Started node2
+ * child_rsc1:3 (ocf:heartbeat:apache): Started node2
+ * child_rsc1:4 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc2 [child_rsc2] (unique):
+ * child_rsc2:0 (ocf:heartbeat:apache): Started node2
+ * child_rsc2:1 (ocf:heartbeat:apache): Started node2
+ * child_rsc2:2 (ocf:heartbeat:apache): Started node2
+ * child_rsc2:3 (ocf:heartbeat:apache): Started node1
+ * child_rsc2:4 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/inc4.summary b/cts/scheduler/summary/inc4.summary
new file mode 100644
index 0000000..e71cea6
--- /dev/null
+++ b/cts/scheduler/summary/inc4.summary
@@ -0,0 +1,71 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1] (unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:1 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:2 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:3 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:4 (ocf:heartbeat:apache): Started node1
+ * Clone Set: rsc2 [child_rsc2] (unique):
+ * child_rsc2:0 (ocf:heartbeat:apache): Started node2
+ * child_rsc2:1 (ocf:heartbeat:apache): Started node2
+ * child_rsc2:2 (ocf:heartbeat:apache): Started node2
+ * child_rsc2:3 (ocf:heartbeat:apache): Started node2
+ * child_rsc2:4 (ocf:heartbeat:apache): Started node2
+
+Transition Summary:
+ * Move child_rsc1:2 ( node1 -> node2 )
+ * Move child_rsc1:3 ( node1 -> node2 )
+ * Stop child_rsc1:4 ( node1 ) due to node availability
+ * Move child_rsc2:3 ( node2 -> node1 )
+ * Move child_rsc2:4 ( node2 -> node1 )
+
+Executing Cluster Transition:
+ * Resource action: child_rsc1:0 monitor on node2
+ * Resource action: child_rsc1:1 monitor on node2
+ * Resource action: child_rsc1:2 monitor on node2
+ * Resource action: child_rsc1:3 monitor on node2
+ * Resource action: child_rsc1:4 monitor on node2
+ * Resource action: child_rsc2:0 monitor on node1
+ * Resource action: child_rsc2:1 monitor on node1
+ * Resource action: child_rsc2:2 monitor on node1
+ * Resource action: child_rsc2:3 monitor on node1
+ * Resource action: child_rsc2:4 monitor on node1
+ * Pseudo action: rsc2_stop_0
+ * Resource action: child_rsc2:4 stop on node2
+ * Resource action: child_rsc2:3 stop on node2
+ * Pseudo action: rsc2_stopped_0
+ * Pseudo action: rsc1_stop_0
+ * Resource action: child_rsc1:4 stop on node1
+ * Resource action: child_rsc1:3 stop on node1
+ * Resource action: child_rsc1:2 stop on node1
+ * Pseudo action: rsc1_stopped_0
+ * Pseudo action: rsc1_start_0
+ * Resource action: child_rsc1:2 start on node2
+ * Resource action: child_rsc1:3 start on node2
+ * Pseudo action: rsc1_running_0
+ * Pseudo action: rsc2_start_0
+ * Resource action: child_rsc2:3 start on node1
+ * Resource action: child_rsc2:4 start on node1
+ * Pseudo action: rsc2_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1] (unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:1 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:2 (ocf:heartbeat:apache): Started node2
+ * child_rsc1:3 (ocf:heartbeat:apache): Started node2
+ * child_rsc1:4 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc2 [child_rsc2] (unique):
+ * child_rsc2:0 (ocf:heartbeat:apache): Started node2
+ * child_rsc2:1 (ocf:heartbeat:apache): Started node2
+ * child_rsc2:2 (ocf:heartbeat:apache): Started node2
+ * child_rsc2:3 (ocf:heartbeat:apache): Started node1
+ * child_rsc2:4 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/inc5.summary b/cts/scheduler/summary/inc5.summary
new file mode 100644
index 0000000..3b97115
--- /dev/null
+++ b/cts/scheduler/summary/inc5.summary
@@ -0,0 +1,139 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1] (unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:1 (ocf:heartbeat:apache): Started node2
+ * child_rsc1:2 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc2 [child_rsc2] (unique):
+ * child_rsc2:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc2:1 (ocf:heartbeat:apache): Started node1
+ * child_rsc2:2 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc3 [child_rsc3] (unique):
+ * child_rsc3:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc3:1 (ocf:heartbeat:apache): Started node2
+ * child_rsc3:2 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc4 [child_rsc4] (unique):
+ * child_rsc4:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc4:1 (ocf:heartbeat:apache): Started node1
+ * child_rsc4:2 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc5 [child_rsc5] (unique):
+ * child_rsc5:0 (ocf:heartbeat:apache): Started node2
+ * child_rsc5:1 (ocf:heartbeat:apache): Started node2
+ * child_rsc5:2 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc6 [child_rsc6] (unique):
+ * child_rsc6:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc6:1 (ocf:heartbeat:apache): Started node2
+ * child_rsc6:2 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc7 [child_rsc7] (unique):
+ * child_rsc7:0 (ocf:heartbeat:apache): Started node2
+ * child_rsc7:1 (ocf:heartbeat:apache): Started node2
+ * child_rsc7:2 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc8 [child_rsc8] (unique):
+ * child_rsc8:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc8:1 (ocf:heartbeat:apache): Started node2
+ * child_rsc8:2 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Move child_rsc2:1 ( node1 -> node2 )
+ * Move child_rsc4:1 ( node1 -> node2 )
+ * Move child_rsc5:1 ( node2 -> node1 )
+ * Move child_rsc7:1 ( node2 -> node1 )
+
+Executing Cluster Transition:
+ * Resource action: child_rsc1:0 monitor on node2
+ * Resource action: child_rsc1:1 monitor on node1
+ * Resource action: child_rsc1:2 monitor on node2
+ * Resource action: child_rsc1:2 monitor on node1
+ * Resource action: child_rsc2:0 monitor on node2
+ * Resource action: child_rsc2:1 monitor on node2
+ * Resource action: child_rsc2:2 monitor on node2
+ * Resource action: child_rsc2:2 monitor on node1
+ * Pseudo action: rsc2_stop_0
+ * Resource action: child_rsc3:0 monitor on node2
+ * Resource action: child_rsc3:1 monitor on node1
+ * Resource action: child_rsc3:2 monitor on node2
+ * Resource action: child_rsc3:2 monitor on node1
+ * Resource action: child_rsc4:0 monitor on node2
+ * Resource action: child_rsc4:1 monitor on node2
+ * Resource action: child_rsc4:2 monitor on node2
+ * Resource action: child_rsc4:2 monitor on node1
+ * Pseudo action: rsc4_stop_0
+ * Resource action: child_rsc5:0 monitor on node1
+ * Resource action: child_rsc5:1 monitor on node1
+ * Resource action: child_rsc5:2 monitor on node2
+ * Resource action: child_rsc5:2 monitor on node1
+ * Pseudo action: rsc5_stop_0
+ * Resource action: child_rsc6:0 monitor on node2
+ * Resource action: child_rsc6:1 monitor on node1
+ * Resource action: child_rsc6:2 monitor on node2
+ * Resource action: child_rsc6:2 monitor on node1
+ * Resource action: child_rsc7:0 monitor on node1
+ * Resource action: child_rsc7:1 monitor on node1
+ * Resource action: child_rsc7:2 monitor on node2
+ * Resource action: child_rsc7:2 monitor on node1
+ * Pseudo action: rsc7_stop_0
+ * Resource action: child_rsc8:0 monitor on node2
+ * Resource action: child_rsc8:1 monitor on node1
+ * Resource action: child_rsc8:2 monitor on node2
+ * Resource action: child_rsc8:2 monitor on node1
+ * Resource action: child_rsc2:1 stop on node1
+ * Pseudo action: rsc2_stopped_0
+ * Pseudo action: rsc2_start_0
+ * Resource action: child_rsc4:1 stop on node1
+ * Pseudo action: rsc4_stopped_0
+ * Pseudo action: rsc4_start_0
+ * Resource action: child_rsc5:1 stop on node2
+ * Pseudo action: rsc5_stopped_0
+ * Pseudo action: rsc5_start_0
+ * Resource action: child_rsc7:1 stop on node2
+ * Pseudo action: rsc7_stopped_0
+ * Pseudo action: rsc7_start_0
+ * Resource action: child_rsc2:1 start on node2
+ * Pseudo action: rsc2_running_0
+ * Resource action: child_rsc4:1 start on node2
+ * Pseudo action: rsc4_running_0
+ * Resource action: child_rsc5:1 start on node1
+ * Pseudo action: rsc5_running_0
+ * Resource action: child_rsc7:1 start on node1
+ * Pseudo action: rsc7_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1] (unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:1 (ocf:heartbeat:apache): Started node2
+ * child_rsc1:2 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc2 [child_rsc2] (unique):
+ * child_rsc2:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc2:1 (ocf:heartbeat:apache): Started node2
+ * child_rsc2:2 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc3 [child_rsc3] (unique):
+ * child_rsc3:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc3:1 (ocf:heartbeat:apache): Started node2
+ * child_rsc3:2 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc4 [child_rsc4] (unique):
+ * child_rsc4:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc4:1 (ocf:heartbeat:apache): Started node2
+ * child_rsc4:2 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc5 [child_rsc5] (unique):
+ * child_rsc5:0 (ocf:heartbeat:apache): Started node2
+ * child_rsc5:1 (ocf:heartbeat:apache): Started node1
+ * child_rsc5:2 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc6 [child_rsc6] (unique):
+ * child_rsc6:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc6:1 (ocf:heartbeat:apache): Started node2
+ * child_rsc6:2 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc7 [child_rsc7] (unique):
+ * child_rsc7:0 (ocf:heartbeat:apache): Started node2
+ * child_rsc7:1 (ocf:heartbeat:apache): Started node1
+ * child_rsc7:2 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc8 [child_rsc8] (unique):
+ * child_rsc8:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc8:1 (ocf:heartbeat:apache): Started node2
+ * child_rsc8:2 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/inc6.summary b/cts/scheduler/summary/inc6.summary
new file mode 100644
index 0000000..74daaa6
--- /dev/null
+++ b/cts/scheduler/summary/inc6.summary
@@ -0,0 +1,101 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1]:
+ * Started: [ node1 node2 ]
+ * Clone Set: rsc2 [child_rsc2] (unique):
+ * child_rsc2:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc2:1 (ocf:heartbeat:apache): Started node1
+ * child_rsc2:2 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc3 [child_rsc3]:
+ * Started: [ node1 node2 ]
+ * Clone Set: rsc4 [child_rsc4] (unique):
+ * child_rsc4:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc4:1 (ocf:heartbeat:apache): Started node1
+ * child_rsc4:2 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc5 [child_rsc5] (unique):
+ * child_rsc5:0 (ocf:heartbeat:apache): Started node2
+ * child_rsc5:1 (ocf:heartbeat:apache): Started node2
+ * child_rsc5:2 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc6 [child_rsc6]:
+ * Started: [ node1 node2 ]
+ * Clone Set: rsc7 [child_rsc7] (unique):
+ * child_rsc7:0 (ocf:heartbeat:apache): Started node2
+ * child_rsc7:1 (ocf:heartbeat:apache): Started node2
+ * child_rsc7:2 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc8 [child_rsc8]:
+ * Started: [ node1 node2 ]
+
+Transition Summary:
+ * Move child_rsc2:1 ( node1 -> node2 )
+ * Move child_rsc4:1 ( node1 -> node2 )
+ * Move child_rsc5:1 ( node2 -> node1 )
+ * Restart child_rsc6:0 ( node1 ) due to required rsc5 running
+ * Restart child_rsc6:1 ( node2 ) due to required rsc5 running
+ * Move child_rsc7:1 ( node2 -> node1 )
+
+Executing Cluster Transition:
+ * Pseudo action: rsc2_stop_0
+ * Pseudo action: rsc4_stop_0
+ * Pseudo action: rsc6_stop_0
+ * Pseudo action: rsc7_stop_0
+ * Resource action: child_rsc2:1 stop on node1
+ * Pseudo action: rsc2_stopped_0
+ * Pseudo action: rsc2_start_0
+ * Resource action: child_rsc4:1 stop on node1
+ * Pseudo action: rsc4_stopped_0
+ * Pseudo action: rsc4_start_0
+ * Resource action: child_rsc6:0 stop on node1
+ * Resource action: child_rsc6:1 stop on node2
+ * Pseudo action: rsc6_stopped_0
+ * Resource action: child_rsc7:1 stop on node2
+ * Pseudo action: rsc7_stopped_0
+ * Pseudo action: rsc7_start_0
+ * Resource action: child_rsc2:1 start on node2
+ * Pseudo action: rsc2_running_0
+ * Resource action: child_rsc4:1 start on node2
+ * Pseudo action: rsc4_running_0
+ * Pseudo action: rsc5_stop_0
+ * Resource action: child_rsc7:1 start on node1
+ * Pseudo action: rsc7_running_0
+ * Resource action: child_rsc5:1 stop on node2
+ * Pseudo action: rsc5_stopped_0
+ * Pseudo action: rsc5_start_0
+ * Resource action: child_rsc5:1 start on node1
+ * Pseudo action: rsc5_running_0
+ * Pseudo action: rsc6_start_0
+ * Resource action: child_rsc6:0 start on node1
+ * Resource action: child_rsc6:1 start on node2
+ * Pseudo action: rsc6_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1]:
+ * Started: [ node1 node2 ]
+ * Clone Set: rsc2 [child_rsc2] (unique):
+ * child_rsc2:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc2:1 (ocf:heartbeat:apache): Started [ node1 node2 ]
+ * child_rsc2:2 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc3 [child_rsc3]:
+ * Started: [ node1 node2 ]
+ * Clone Set: rsc4 [child_rsc4] (unique):
+ * child_rsc4:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc4:1 (ocf:heartbeat:apache): Started [ node1 node2 ]
+ * child_rsc4:2 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc5 [child_rsc5] (unique):
+ * child_rsc5:0 (ocf:heartbeat:apache): Started node2
+ * child_rsc5:1 (ocf:heartbeat:apache): Started node1
+ * child_rsc5:2 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc6 [child_rsc6]:
+ * Started: [ node1 node2 ]
+ * Clone Set: rsc7 [child_rsc7] (unique):
+ * child_rsc7:0 (ocf:heartbeat:apache): Started node2
+ * child_rsc7:1 (ocf:heartbeat:apache): Started node1
+ * child_rsc7:2 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc8 [child_rsc8]:
+ * Started: [ node1 node2 ]
diff --git a/cts/scheduler/summary/inc7.summary b/cts/scheduler/summary/inc7.summary
new file mode 100644
index 0000000..71cca23
--- /dev/null
+++ b/cts/scheduler/summary/inc7.summary
@@ -0,0 +1,100 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * rsc0 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc1 [child_rsc1] (unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:1 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:2 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:3 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:4 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc2 [child_rsc2] (unique):
+ * child_rsc2:0 (ocf:heartbeat:apache): Stopped
+ * child_rsc2:1 (ocf:heartbeat:apache): Stopped
+ * child_rsc2:2 (ocf:heartbeat:apache): Stopped
+ * child_rsc2:3 (ocf:heartbeat:apache): Stopped
+ * child_rsc2:4 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc0 ( node1 )
+ * Start child_rsc1:0 ( node1 )
+ * Start child_rsc1:1 ( node2 )
+ * Start child_rsc1:2 ( node3 )
+ * Start child_rsc1:3 ( node1 )
+ * Start child_rsc1:4 ( node2 )
+ * Start child_rsc2:0 ( node2 )
+ * Start child_rsc2:1 ( node3 )
+ * Start child_rsc2:2 ( node1 )
+ * Start child_rsc2:3 ( node2 )
+ * Start child_rsc2:4 ( node3 )
+
+Executing Cluster Transition:
+ * Resource action: rsc0 monitor on node3
+ * Resource action: rsc0 monitor on node2
+ * Resource action: rsc0 monitor on node1
+ * Resource action: child_rsc1:0 monitor on node3
+ * Resource action: child_rsc1:0 monitor on node2
+ * Resource action: child_rsc1:0 monitor on node1
+ * Resource action: child_rsc1:1 monitor on node3
+ * Resource action: child_rsc1:1 monitor on node2
+ * Resource action: child_rsc1:1 monitor on node1
+ * Resource action: child_rsc1:2 monitor on node3
+ * Resource action: child_rsc1:2 monitor on node2
+ * Resource action: child_rsc1:2 monitor on node1
+ * Resource action: child_rsc1:3 monitor on node3
+ * Resource action: child_rsc1:3 monitor on node2
+ * Resource action: child_rsc1:3 monitor on node1
+ * Resource action: child_rsc1:4 monitor on node3
+ * Resource action: child_rsc1:4 monitor on node2
+ * Resource action: child_rsc1:4 monitor on node1
+ * Pseudo action: rsc1_start_0
+ * Resource action: child_rsc2:0 monitor on node3
+ * Resource action: child_rsc2:0 monitor on node2
+ * Resource action: child_rsc2:0 monitor on node1
+ * Resource action: child_rsc2:1 monitor on node3
+ * Resource action: child_rsc2:1 monitor on node2
+ * Resource action: child_rsc2:1 monitor on node1
+ * Resource action: child_rsc2:2 monitor on node3
+ * Resource action: child_rsc2:2 monitor on node2
+ * Resource action: child_rsc2:2 monitor on node1
+ * Resource action: child_rsc2:3 monitor on node3
+ * Resource action: child_rsc2:3 monitor on node2
+ * Resource action: child_rsc2:3 monitor on node1
+ * Resource action: child_rsc2:4 monitor on node3
+ * Resource action: child_rsc2:4 monitor on node2
+ * Resource action: child_rsc2:4 monitor on node1
+ * Resource action: rsc0 start on node1
+ * Resource action: child_rsc1:0 start on node1
+ * Resource action: child_rsc1:1 start on node2
+ * Resource action: child_rsc1:2 start on node3
+ * Resource action: child_rsc1:3 start on node1
+ * Resource action: child_rsc1:4 start on node2
+ * Pseudo action: rsc1_running_0
+ * Pseudo action: rsc2_start_0
+ * Resource action: child_rsc2:0 start on node2
+ * Resource action: child_rsc2:1 start on node3
+ * Resource action: child_rsc2:2 start on node1
+ * Resource action: child_rsc2:3 start on node2
+ * Resource action: child_rsc2:4 start on node3
+ * Pseudo action: rsc2_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * rsc0 (ocf:heartbeat:apache): Started node1
+ * Clone Set: rsc1 [child_rsc1] (unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:1 (ocf:heartbeat:apache): Started node2
+ * child_rsc1:2 (ocf:heartbeat:apache): Started node3
+ * child_rsc1:3 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:4 (ocf:heartbeat:apache): Started node2
+ * Clone Set: rsc2 [child_rsc2] (unique):
+ * child_rsc2:0 (ocf:heartbeat:apache): Started node2
+ * child_rsc2:1 (ocf:heartbeat:apache): Started node3
+ * child_rsc2:2 (ocf:heartbeat:apache): Started node1
+ * child_rsc2:3 (ocf:heartbeat:apache): Started node2
+ * child_rsc2:4 (ocf:heartbeat:apache): Started node3
diff --git a/cts/scheduler/summary/inc8.summary b/cts/scheduler/summary/inc8.summary
new file mode 100644
index 0000000..9a88b44
--- /dev/null
+++ b/cts/scheduler/summary/inc8.summary
@@ -0,0 +1,71 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc0 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc1 [child_rsc1] (unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:1 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:2 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:3 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:4 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc2 [child_rsc2] (unique):
+ * child_rsc2:0 (ocf:heartbeat:apache): Stopped
+ * child_rsc2:1 (ocf:heartbeat:apache): Stopped
+ * child_rsc2:2 (ocf:heartbeat:apache): Stopped
+ * child_rsc2:3 (ocf:heartbeat:apache): Stopped
+ * child_rsc2:4 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc0 ( node1 )
+ * Start child_rsc2:0 ( node2 )
+ * Start child_rsc2:1 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc0 monitor on node2
+ * Resource action: rsc0 monitor on node1
+ * Resource action: child_rsc1:0 monitor on node2
+ * Resource action: child_rsc1:0 monitor on node1
+ * Resource action: child_rsc1:1 monitor on node2
+ * Resource action: child_rsc1:1 monitor on node1
+ * Resource action: child_rsc1:2 monitor on node2
+ * Resource action: child_rsc1:2 monitor on node1
+ * Resource action: child_rsc1:3 monitor on node2
+ * Resource action: child_rsc1:3 monitor on node1
+ * Resource action: child_rsc1:4 monitor on node2
+ * Resource action: child_rsc1:4 monitor on node1
+ * Resource action: child_rsc2:0 monitor on node2
+ * Resource action: child_rsc2:0 monitor on node1
+ * Resource action: child_rsc2:1 monitor on node2
+ * Resource action: child_rsc2:1 monitor on node1
+ * Resource action: child_rsc2:2 monitor on node2
+ * Resource action: child_rsc2:2 monitor on node1
+ * Resource action: child_rsc2:3 monitor on node2
+ * Resource action: child_rsc2:3 monitor on node1
+ * Resource action: child_rsc2:4 monitor on node2
+ * Resource action: child_rsc2:4 monitor on node1
+ * Pseudo action: rsc2_start_0
+ * Resource action: rsc0 start on node1
+ * Resource action: child_rsc2:0 start on node2
+ * Resource action: child_rsc2:1 start on node1
+ * Pseudo action: rsc2_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc0 (ocf:heartbeat:apache): Started node1
+ * Clone Set: rsc1 [child_rsc1] (unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:1 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:2 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:3 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:4 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc2 [child_rsc2] (unique):
+ * child_rsc2:0 (ocf:heartbeat:apache): Started node2
+ * child_rsc2:1 (ocf:heartbeat:apache): Started node1
+ * child_rsc2:2 (ocf:heartbeat:apache): Stopped
+ * child_rsc2:3 (ocf:heartbeat:apache): Stopped
+ * child_rsc2:4 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/inc9.summary b/cts/scheduler/summary/inc9.summary
new file mode 100644
index 0000000..0e91a2e
--- /dev/null
+++ b/cts/scheduler/summary/inc9.summary
@@ -0,0 +1,30 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1]:
+ * child_rsc1 (ocf:heartbeat:apache): ORPHANED Started node1
+ * child_rsc1 (ocf:heartbeat:apache): ORPHANED Started node1
+ * child_rsc1 (ocf:heartbeat:apache): ORPHANED Started node2
+ * Started: [ node1 node2 ]
+
+Transition Summary:
+ * Stop child_rsc1:5 ( node1 ) due to node availability
+ * Stop child_rsc1:6 ( node1 ) due to node availability
+ * Stop child_rsc1:7 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: rsc1_stop_0
+ * Resource action: child_rsc1:1 stop on node1
+ * Resource action: child_rsc1:2 stop on node1
+ * Resource action: child_rsc1:1 stop on node2
+ * Pseudo action: rsc1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1]:
+ * Started: [ node1 node2 ]
diff --git a/cts/scheduler/summary/interleave-0.summary b/cts/scheduler/summary/interleave-0.summary
new file mode 100644
index 0000000..fe16667
--- /dev/null
+++ b/cts/scheduler/summary/interleave-0.summary
@@ -0,0 +1,241 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c001n02 c001n03 c001n04 c001n05 c001n06 c001n07 c001n08 c001n09 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n09
+ * rsc_c001n09 (ocf:heartbeat:IPaddr): Stopped (unmanaged)
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02 (unmanaged)
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03 (unmanaged)
+ * rsc_c001n04 (ocf:heartbeat:IPaddr): Started c001n04 (unmanaged)
+ * rsc_c001n05 (ocf:heartbeat:IPaddr): Started c001n05 (unmanaged)
+ * rsc_c001n06 (ocf:heartbeat:IPaddr): Started c001n06 (unmanaged)
+ * rsc_c001n07 (ocf:heartbeat:IPaddr): Started c001n07 (unmanaged)
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08 (unmanaged)
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started c001n02
+ * child_DoFencing:1 (stonith:ssh): Started c001n03
+ * child_DoFencing:2 (stonith:ssh): Started c001n04
+ * child_DoFencing:3 (stonith:ssh): Started c001n05
+ * child_DoFencing:4 (stonith:ssh): Started c001n06
+ * child_DoFencing:5 (stonith:ssh): Started c001n07
+ * child_DoFencing:6 (stonith:ssh): Started c001n08
+ * child_DoFencing:7 (stonith:ssh): Started c001n09
+ * Clone Set: CloneSet [child_CloneSet] (unique):
+ * child_CloneSet:0 (stonith:ssh): Stopped
+ * child_CloneSet:1 (stonith:ssh): Stopped
+ * child_CloneSet:2 (stonith:ssh): Stopped
+ * child_CloneSet:3 (stonith:ssh): Stopped
+ * child_CloneSet:4 (stonith:ssh): Stopped
+ * child_CloneSet:5 (stonith:ssh): Stopped
+ * child_CloneSet:6 (stonith:ssh): Stopped
+ * child_CloneSet:7 (stonith:ssh): Stopped
+
+Transition Summary:
+ * Start child_CloneSet:0 ( c001n02 )
+ * Start child_CloneSet:1 ( c001n03 )
+ * Start child_CloneSet:2 ( c001n04 )
+ * Start child_CloneSet:3 ( c001n05 )
+ * Start child_CloneSet:4 ( c001n06 )
+ * Start child_CloneSet:5 ( c001n07 )
+ * Start child_CloneSet:6 ( c001n08 )
+ * Start child_CloneSet:7 ( c001n09 )
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr monitor on c001n08
+ * Resource action: DcIPaddr monitor on c001n07
+ * Resource action: DcIPaddr monitor on c001n06
+ * Resource action: DcIPaddr monitor on c001n05
+ * Resource action: DcIPaddr monitor on c001n04
+ * Resource action: DcIPaddr monitor on c001n03
+ * Resource action: DcIPaddr monitor on c001n02
+ * Resource action: rsc_c001n09 monitor on c001n09
+ * Resource action: rsc_c001n09 monitor on c001n08
+ * Resource action: rsc_c001n09 monitor on c001n07
+ * Resource action: rsc_c001n09 monitor on c001n05
+ * Resource action: rsc_c001n09 monitor on c001n04
+ * Resource action: rsc_c001n09 monitor on c001n03
+ * Resource action: rsc_c001n09 monitor on c001n02
+ * Resource action: rsc_c001n02 monitor on c001n09
+ * Resource action: rsc_c001n02 monitor on c001n08
+ * Resource action: rsc_c001n02 monitor on c001n07
+ * Resource action: rsc_c001n02 monitor on c001n05
+ * Resource action: rsc_c001n02 monitor on c001n04
+ * Resource action: rsc_c001n03 monitor on c001n09
+ * Resource action: rsc_c001n03 monitor on c001n08
+ * Resource action: rsc_c001n03 monitor on c001n07
+ * Resource action: rsc_c001n03 monitor on c001n05
+ * Resource action: rsc_c001n03 monitor on c001n04
+ * Resource action: rsc_c001n03 monitor on c001n02
+ * Resource action: rsc_c001n04 monitor on c001n09
+ * Resource action: rsc_c001n04 monitor on c001n08
+ * Resource action: rsc_c001n04 monitor on c001n07
+ * Resource action: rsc_c001n04 monitor on c001n05
+ * Resource action: rsc_c001n04 monitor on c001n03
+ * Resource action: rsc_c001n04 monitor on c001n02
+ * Resource action: rsc_c001n05 monitor on c001n09
+ * Resource action: rsc_c001n05 monitor on c001n08
+ * Resource action: rsc_c001n05 monitor on c001n07
+ * Resource action: rsc_c001n05 monitor on c001n06
+ * Resource action: rsc_c001n05 monitor on c001n04
+ * Resource action: rsc_c001n05 monitor on c001n03
+ * Resource action: rsc_c001n05 monitor on c001n02
+ * Resource action: rsc_c001n06 monitor on c001n09
+ * Resource action: rsc_c001n06 monitor on c001n08
+ * Resource action: rsc_c001n06 monitor on c001n07
+ * Resource action: rsc_c001n06 monitor on c001n05
+ * Resource action: rsc_c001n06 monitor on c001n04
+ * Resource action: rsc_c001n06 monitor on c001n03
+ * Resource action: rsc_c001n07 monitor on c001n09
+ * Resource action: rsc_c001n07 monitor on c001n08
+ * Resource action: rsc_c001n07 monitor on c001n06
+ * Resource action: rsc_c001n07 monitor on c001n05
+ * Resource action: rsc_c001n07 monitor on c001n04
+ * Resource action: rsc_c001n08 monitor on c001n09
+ * Resource action: rsc_c001n08 monitor on c001n07
+ * Resource action: rsc_c001n08 monitor on c001n05
+ * Resource action: child_DoFencing:0 monitor on c001n09
+ * Resource action: child_DoFencing:0 monitor on c001n08
+ * Resource action: child_DoFencing:0 monitor on c001n07
+ * Resource action: child_DoFencing:1 monitor on c001n08
+ * Resource action: child_DoFencing:1 monitor on c001n07
+ * Resource action: child_DoFencing:1 monitor on c001n02
+ * Resource action: child_DoFencing:2 monitor on c001n09
+ * Resource action: child_DoFencing:2 monitor on c001n08
+ * Resource action: child_DoFencing:2 monitor on c001n07
+ * Resource action: child_DoFencing:2 monitor on c001n03
+ * Resource action: child_DoFencing:3 monitor on c001n08
+ * Resource action: child_DoFencing:3 monitor on c001n04
+ * Resource action: child_DoFencing:3 monitor on c001n02
+ * Resource action: child_DoFencing:4 monitor on c001n09
+ * Resource action: child_DoFencing:4 monitor on c001n05
+ * Resource action: child_DoFencing:4 monitor on c001n03
+ * Resource action: child_DoFencing:5 monitor on c001n08
+ * Resource action: child_DoFencing:5 monitor on c001n05
+ * Resource action: child_DoFencing:5 monitor on c001n04
+ * Resource action: child_DoFencing:5 monitor on c001n02
+ * Resource action: child_DoFencing:6 monitor on c001n09
+ * Resource action: child_DoFencing:6 monitor on c001n07
+ * Resource action: child_DoFencing:6 monitor on c001n05
+ * Resource action: child_DoFencing:6 monitor on c001n04
+ * Resource action: child_DoFencing:7 monitor on c001n08
+ * Resource action: child_DoFencing:7 monitor on c001n07
+ * Resource action: child_DoFencing:7 monitor on c001n05
+ * Resource action: child_DoFencing:7 monitor on c001n04
+ * Resource action: child_DoFencing:7 monitor on c001n03
+ * Resource action: child_DoFencing:7 monitor on c001n02
+ * Resource action: child_CloneSet:0 monitor on c001n09
+ * Resource action: child_CloneSet:0 monitor on c001n08
+ * Resource action: child_CloneSet:0 monitor on c001n07
+ * Resource action: child_CloneSet:0 monitor on c001n06
+ * Resource action: child_CloneSet:0 monitor on c001n05
+ * Resource action: child_CloneSet:0 monitor on c001n04
+ * Resource action: child_CloneSet:0 monitor on c001n03
+ * Resource action: child_CloneSet:0 monitor on c001n02
+ * Resource action: child_CloneSet:1 monitor on c001n09
+ * Resource action: child_CloneSet:1 monitor on c001n08
+ * Resource action: child_CloneSet:1 monitor on c001n07
+ * Resource action: child_CloneSet:1 monitor on c001n06
+ * Resource action: child_CloneSet:1 monitor on c001n05
+ * Resource action: child_CloneSet:1 monitor on c001n04
+ * Resource action: child_CloneSet:1 monitor on c001n03
+ * Resource action: child_CloneSet:1 monitor on c001n02
+ * Resource action: child_CloneSet:2 monitor on c001n09
+ * Resource action: child_CloneSet:2 monitor on c001n08
+ * Resource action: child_CloneSet:2 monitor on c001n07
+ * Resource action: child_CloneSet:2 monitor on c001n06
+ * Resource action: child_CloneSet:2 monitor on c001n05
+ * Resource action: child_CloneSet:2 monitor on c001n04
+ * Resource action: child_CloneSet:2 monitor on c001n03
+ * Resource action: child_CloneSet:2 monitor on c001n02
+ * Resource action: child_CloneSet:3 monitor on c001n09
+ * Resource action: child_CloneSet:3 monitor on c001n08
+ * Resource action: child_CloneSet:3 monitor on c001n07
+ * Resource action: child_CloneSet:3 monitor on c001n06
+ * Resource action: child_CloneSet:3 monitor on c001n05
+ * Resource action: child_CloneSet:3 monitor on c001n04
+ * Resource action: child_CloneSet:3 monitor on c001n03
+ * Resource action: child_CloneSet:3 monitor on c001n02
+ * Resource action: child_CloneSet:4 monitor on c001n09
+ * Resource action: child_CloneSet:4 monitor on c001n08
+ * Resource action: child_CloneSet:4 monitor on c001n07
+ * Resource action: child_CloneSet:4 monitor on c001n06
+ * Resource action: child_CloneSet:4 monitor on c001n05
+ * Resource action: child_CloneSet:4 monitor on c001n04
+ * Resource action: child_CloneSet:4 monitor on c001n03
+ * Resource action: child_CloneSet:4 monitor on c001n02
+ * Resource action: child_CloneSet:5 monitor on c001n09
+ * Resource action: child_CloneSet:5 monitor on c001n08
+ * Resource action: child_CloneSet:5 monitor on c001n07
+ * Resource action: child_CloneSet:5 monitor on c001n06
+ * Resource action: child_CloneSet:5 monitor on c001n05
+ * Resource action: child_CloneSet:5 monitor on c001n04
+ * Resource action: child_CloneSet:5 monitor on c001n03
+ * Resource action: child_CloneSet:5 monitor on c001n02
+ * Resource action: child_CloneSet:6 monitor on c001n09
+ * Resource action: child_CloneSet:6 monitor on c001n08
+ * Resource action: child_CloneSet:6 monitor on c001n07
+ * Resource action: child_CloneSet:6 monitor on c001n06
+ * Resource action: child_CloneSet:6 monitor on c001n05
+ * Resource action: child_CloneSet:6 monitor on c001n04
+ * Resource action: child_CloneSet:6 monitor on c001n03
+ * Resource action: child_CloneSet:6 monitor on c001n02
+ * Resource action: child_CloneSet:7 monitor on c001n09
+ * Resource action: child_CloneSet:7 monitor on c001n08
+ * Resource action: child_CloneSet:7 monitor on c001n07
+ * Resource action: child_CloneSet:7 monitor on c001n06
+ * Resource action: child_CloneSet:7 monitor on c001n05
+ * Resource action: child_CloneSet:7 monitor on c001n04
+ * Resource action: child_CloneSet:7 monitor on c001n03
+ * Resource action: child_CloneSet:7 monitor on c001n02
+ * Pseudo action: CloneSet_start_0
+ * Resource action: child_CloneSet:0 start on c001n02
+ * Resource action: child_CloneSet:1 start on c001n03
+ * Resource action: child_CloneSet:2 start on c001n04
+ * Resource action: child_CloneSet:3 start on c001n05
+ * Resource action: child_CloneSet:4 start on c001n06
+ * Resource action: child_CloneSet:5 start on c001n07
+ * Resource action: child_CloneSet:6 start on c001n08
+ * Resource action: child_CloneSet:7 start on c001n09
+ * Pseudo action: CloneSet_running_0
+ * Resource action: child_CloneSet:0 monitor=5000 on c001n02
+ * Resource action: child_CloneSet:1 monitor=5000 on c001n03
+ * Resource action: child_CloneSet:2 monitor=5000 on c001n04
+ * Resource action: child_CloneSet:3 monitor=5000 on c001n05
+ * Resource action: child_CloneSet:4 monitor=5000 on c001n06
+ * Resource action: child_CloneSet:5 monitor=5000 on c001n07
+ * Resource action: child_CloneSet:6 monitor=5000 on c001n08
+ * Resource action: child_CloneSet:7 monitor=5000 on c001n09
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n02 c001n03 c001n04 c001n05 c001n06 c001n07 c001n08 c001n09 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n09
+ * rsc_c001n09 (ocf:heartbeat:IPaddr): Stopped (unmanaged)
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02 (unmanaged)
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03 (unmanaged)
+ * rsc_c001n04 (ocf:heartbeat:IPaddr): Started c001n04 (unmanaged)
+ * rsc_c001n05 (ocf:heartbeat:IPaddr): Started c001n05 (unmanaged)
+ * rsc_c001n06 (ocf:heartbeat:IPaddr): Started c001n06 (unmanaged)
+ * rsc_c001n07 (ocf:heartbeat:IPaddr): Started c001n07 (unmanaged)
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08 (unmanaged)
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started c001n02
+ * child_DoFencing:1 (stonith:ssh): Started c001n03
+ * child_DoFencing:2 (stonith:ssh): Started c001n04
+ * child_DoFencing:3 (stonith:ssh): Started c001n05
+ * child_DoFencing:4 (stonith:ssh): Started c001n06
+ * child_DoFencing:5 (stonith:ssh): Started c001n07
+ * child_DoFencing:6 (stonith:ssh): Started c001n08
+ * child_DoFencing:7 (stonith:ssh): Started c001n09
+ * Clone Set: CloneSet [child_CloneSet] (unique):
+ * child_CloneSet:0 (stonith:ssh): Started c001n02
+ * child_CloneSet:1 (stonith:ssh): Started c001n03
+ * child_CloneSet:2 (stonith:ssh): Started c001n04
+ * child_CloneSet:3 (stonith:ssh): Started c001n05
+ * child_CloneSet:4 (stonith:ssh): Started c001n06
+ * child_CloneSet:5 (stonith:ssh): Started c001n07
+ * child_CloneSet:6 (stonith:ssh): Started c001n08
+ * child_CloneSet:7 (stonith:ssh): Started c001n09
diff --git a/cts/scheduler/summary/interleave-1.summary b/cts/scheduler/summary/interleave-1.summary
new file mode 100644
index 0000000..fe16667
--- /dev/null
+++ b/cts/scheduler/summary/interleave-1.summary
@@ -0,0 +1,241 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c001n02 c001n03 c001n04 c001n05 c001n06 c001n07 c001n08 c001n09 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n09
+ * rsc_c001n09 (ocf:heartbeat:IPaddr): Stopped (unmanaged)
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02 (unmanaged)
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03 (unmanaged)
+ * rsc_c001n04 (ocf:heartbeat:IPaddr): Started c001n04 (unmanaged)
+ * rsc_c001n05 (ocf:heartbeat:IPaddr): Started c001n05 (unmanaged)
+ * rsc_c001n06 (ocf:heartbeat:IPaddr): Started c001n06 (unmanaged)
+ * rsc_c001n07 (ocf:heartbeat:IPaddr): Started c001n07 (unmanaged)
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08 (unmanaged)
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started c001n02
+ * child_DoFencing:1 (stonith:ssh): Started c001n03
+ * child_DoFencing:2 (stonith:ssh): Started c001n04
+ * child_DoFencing:3 (stonith:ssh): Started c001n05
+ * child_DoFencing:4 (stonith:ssh): Started c001n06
+ * child_DoFencing:5 (stonith:ssh): Started c001n07
+ * child_DoFencing:6 (stonith:ssh): Started c001n08
+ * child_DoFencing:7 (stonith:ssh): Started c001n09
+ * Clone Set: CloneSet [child_CloneSet] (unique):
+ * child_CloneSet:0 (stonith:ssh): Stopped
+ * child_CloneSet:1 (stonith:ssh): Stopped
+ * child_CloneSet:2 (stonith:ssh): Stopped
+ * child_CloneSet:3 (stonith:ssh): Stopped
+ * child_CloneSet:4 (stonith:ssh): Stopped
+ * child_CloneSet:5 (stonith:ssh): Stopped
+ * child_CloneSet:6 (stonith:ssh): Stopped
+ * child_CloneSet:7 (stonith:ssh): Stopped
+
+Transition Summary:
+ * Start child_CloneSet:0 ( c001n02 )
+ * Start child_CloneSet:1 ( c001n03 )
+ * Start child_CloneSet:2 ( c001n04 )
+ * Start child_CloneSet:3 ( c001n05 )
+ * Start child_CloneSet:4 ( c001n06 )
+ * Start child_CloneSet:5 ( c001n07 )
+ * Start child_CloneSet:6 ( c001n08 )
+ * Start child_CloneSet:7 ( c001n09 )
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr monitor on c001n08
+ * Resource action: DcIPaddr monitor on c001n07
+ * Resource action: DcIPaddr monitor on c001n06
+ * Resource action: DcIPaddr monitor on c001n05
+ * Resource action: DcIPaddr monitor on c001n04
+ * Resource action: DcIPaddr monitor on c001n03
+ * Resource action: DcIPaddr monitor on c001n02
+ * Resource action: rsc_c001n09 monitor on c001n09
+ * Resource action: rsc_c001n09 monitor on c001n08
+ * Resource action: rsc_c001n09 monitor on c001n07
+ * Resource action: rsc_c001n09 monitor on c001n05
+ * Resource action: rsc_c001n09 monitor on c001n04
+ * Resource action: rsc_c001n09 monitor on c001n03
+ * Resource action: rsc_c001n09 monitor on c001n02
+ * Resource action: rsc_c001n02 monitor on c001n09
+ * Resource action: rsc_c001n02 monitor on c001n08
+ * Resource action: rsc_c001n02 monitor on c001n07
+ * Resource action: rsc_c001n02 monitor on c001n05
+ * Resource action: rsc_c001n02 monitor on c001n04
+ * Resource action: rsc_c001n03 monitor on c001n09
+ * Resource action: rsc_c001n03 monitor on c001n08
+ * Resource action: rsc_c001n03 monitor on c001n07
+ * Resource action: rsc_c001n03 monitor on c001n05
+ * Resource action: rsc_c001n03 monitor on c001n04
+ * Resource action: rsc_c001n03 monitor on c001n02
+ * Resource action: rsc_c001n04 monitor on c001n09
+ * Resource action: rsc_c001n04 monitor on c001n08
+ * Resource action: rsc_c001n04 monitor on c001n07
+ * Resource action: rsc_c001n04 monitor on c001n05
+ * Resource action: rsc_c001n04 monitor on c001n03
+ * Resource action: rsc_c001n04 monitor on c001n02
+ * Resource action: rsc_c001n05 monitor on c001n09
+ * Resource action: rsc_c001n05 monitor on c001n08
+ * Resource action: rsc_c001n05 monitor on c001n07
+ * Resource action: rsc_c001n05 monitor on c001n06
+ * Resource action: rsc_c001n05 monitor on c001n04
+ * Resource action: rsc_c001n05 monitor on c001n03
+ * Resource action: rsc_c001n05 monitor on c001n02
+ * Resource action: rsc_c001n06 monitor on c001n09
+ * Resource action: rsc_c001n06 monitor on c001n08
+ * Resource action: rsc_c001n06 monitor on c001n07
+ * Resource action: rsc_c001n06 monitor on c001n05
+ * Resource action: rsc_c001n06 monitor on c001n04
+ * Resource action: rsc_c001n06 monitor on c001n03
+ * Resource action: rsc_c001n07 monitor on c001n09
+ * Resource action: rsc_c001n07 monitor on c001n08
+ * Resource action: rsc_c001n07 monitor on c001n06
+ * Resource action: rsc_c001n07 monitor on c001n05
+ * Resource action: rsc_c001n07 monitor on c001n04
+ * Resource action: rsc_c001n08 monitor on c001n09
+ * Resource action: rsc_c001n08 monitor on c001n07
+ * Resource action: rsc_c001n08 monitor on c001n05
+ * Resource action: child_DoFencing:0 monitor on c001n09
+ * Resource action: child_DoFencing:0 monitor on c001n08
+ * Resource action: child_DoFencing:0 monitor on c001n07
+ * Resource action: child_DoFencing:1 monitor on c001n08
+ * Resource action: child_DoFencing:1 monitor on c001n07
+ * Resource action: child_DoFencing:1 monitor on c001n02
+ * Resource action: child_DoFencing:2 monitor on c001n09
+ * Resource action: child_DoFencing:2 monitor on c001n08
+ * Resource action: child_DoFencing:2 monitor on c001n07
+ * Resource action: child_DoFencing:2 monitor on c001n03
+ * Resource action: child_DoFencing:3 monitor on c001n08
+ * Resource action: child_DoFencing:3 monitor on c001n04
+ * Resource action: child_DoFencing:3 monitor on c001n02
+ * Resource action: child_DoFencing:4 monitor on c001n09
+ * Resource action: child_DoFencing:4 monitor on c001n05
+ * Resource action: child_DoFencing:4 monitor on c001n03
+ * Resource action: child_DoFencing:5 monitor on c001n08
+ * Resource action: child_DoFencing:5 monitor on c001n05
+ * Resource action: child_DoFencing:5 monitor on c001n04
+ * Resource action: child_DoFencing:5 monitor on c001n02
+ * Resource action: child_DoFencing:6 monitor on c001n09
+ * Resource action: child_DoFencing:6 monitor on c001n07
+ * Resource action: child_DoFencing:6 monitor on c001n05
+ * Resource action: child_DoFencing:6 monitor on c001n04
+ * Resource action: child_DoFencing:7 monitor on c001n08
+ * Resource action: child_DoFencing:7 monitor on c001n07
+ * Resource action: child_DoFencing:7 monitor on c001n05
+ * Resource action: child_DoFencing:7 monitor on c001n04
+ * Resource action: child_DoFencing:7 monitor on c001n03
+ * Resource action: child_DoFencing:7 monitor on c001n02
+ * Resource action: child_CloneSet:0 monitor on c001n09
+ * Resource action: child_CloneSet:0 monitor on c001n08
+ * Resource action: child_CloneSet:0 monitor on c001n07
+ * Resource action: child_CloneSet:0 monitor on c001n06
+ * Resource action: child_CloneSet:0 monitor on c001n05
+ * Resource action: child_CloneSet:0 monitor on c001n04
+ * Resource action: child_CloneSet:0 monitor on c001n03
+ * Resource action: child_CloneSet:0 monitor on c001n02
+ * Resource action: child_CloneSet:1 monitor on c001n09
+ * Resource action: child_CloneSet:1 monitor on c001n08
+ * Resource action: child_CloneSet:1 monitor on c001n07
+ * Resource action: child_CloneSet:1 monitor on c001n06
+ * Resource action: child_CloneSet:1 monitor on c001n05
+ * Resource action: child_CloneSet:1 monitor on c001n04
+ * Resource action: child_CloneSet:1 monitor on c001n03
+ * Resource action: child_CloneSet:1 monitor on c001n02
+ * Resource action: child_CloneSet:2 monitor on c001n09
+ * Resource action: child_CloneSet:2 monitor on c001n08
+ * Resource action: child_CloneSet:2 monitor on c001n07
+ * Resource action: child_CloneSet:2 monitor on c001n06
+ * Resource action: child_CloneSet:2 monitor on c001n05
+ * Resource action: child_CloneSet:2 monitor on c001n04
+ * Resource action: child_CloneSet:2 monitor on c001n03
+ * Resource action: child_CloneSet:2 monitor on c001n02
+ * Resource action: child_CloneSet:3 monitor on c001n09
+ * Resource action: child_CloneSet:3 monitor on c001n08
+ * Resource action: child_CloneSet:3 monitor on c001n07
+ * Resource action: child_CloneSet:3 monitor on c001n06
+ * Resource action: child_CloneSet:3 monitor on c001n05
+ * Resource action: child_CloneSet:3 monitor on c001n04
+ * Resource action: child_CloneSet:3 monitor on c001n03
+ * Resource action: child_CloneSet:3 monitor on c001n02
+ * Resource action: child_CloneSet:4 monitor on c001n09
+ * Resource action: child_CloneSet:4 monitor on c001n08
+ * Resource action: child_CloneSet:4 monitor on c001n07
+ * Resource action: child_CloneSet:4 monitor on c001n06
+ * Resource action: child_CloneSet:4 monitor on c001n05
+ * Resource action: child_CloneSet:4 monitor on c001n04
+ * Resource action: child_CloneSet:4 monitor on c001n03
+ * Resource action: child_CloneSet:4 monitor on c001n02
+ * Resource action: child_CloneSet:5 monitor on c001n09
+ * Resource action: child_CloneSet:5 monitor on c001n08
+ * Resource action: child_CloneSet:5 monitor on c001n07
+ * Resource action: child_CloneSet:5 monitor on c001n06
+ * Resource action: child_CloneSet:5 monitor on c001n05
+ * Resource action: child_CloneSet:5 monitor on c001n04
+ * Resource action: child_CloneSet:5 monitor on c001n03
+ * Resource action: child_CloneSet:5 monitor on c001n02
+ * Resource action: child_CloneSet:6 monitor on c001n09
+ * Resource action: child_CloneSet:6 monitor on c001n08
+ * Resource action: child_CloneSet:6 monitor on c001n07
+ * Resource action: child_CloneSet:6 monitor on c001n06
+ * Resource action: child_CloneSet:6 monitor on c001n05
+ * Resource action: child_CloneSet:6 monitor on c001n04
+ * Resource action: child_CloneSet:6 monitor on c001n03
+ * Resource action: child_CloneSet:6 monitor on c001n02
+ * Resource action: child_CloneSet:7 monitor on c001n09
+ * Resource action: child_CloneSet:7 monitor on c001n08
+ * Resource action: child_CloneSet:7 monitor on c001n07
+ * Resource action: child_CloneSet:7 monitor on c001n06
+ * Resource action: child_CloneSet:7 monitor on c001n05
+ * Resource action: child_CloneSet:7 monitor on c001n04
+ * Resource action: child_CloneSet:7 monitor on c001n03
+ * Resource action: child_CloneSet:7 monitor on c001n02
+ * Pseudo action: CloneSet_start_0
+ * Resource action: child_CloneSet:0 start on c001n02
+ * Resource action: child_CloneSet:1 start on c001n03
+ * Resource action: child_CloneSet:2 start on c001n04
+ * Resource action: child_CloneSet:3 start on c001n05
+ * Resource action: child_CloneSet:4 start on c001n06
+ * Resource action: child_CloneSet:5 start on c001n07
+ * Resource action: child_CloneSet:6 start on c001n08
+ * Resource action: child_CloneSet:7 start on c001n09
+ * Pseudo action: CloneSet_running_0
+ * Resource action: child_CloneSet:0 monitor=5000 on c001n02
+ * Resource action: child_CloneSet:1 monitor=5000 on c001n03
+ * Resource action: child_CloneSet:2 monitor=5000 on c001n04
+ * Resource action: child_CloneSet:3 monitor=5000 on c001n05
+ * Resource action: child_CloneSet:4 monitor=5000 on c001n06
+ * Resource action: child_CloneSet:5 monitor=5000 on c001n07
+ * Resource action: child_CloneSet:6 monitor=5000 on c001n08
+ * Resource action: child_CloneSet:7 monitor=5000 on c001n09
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n02 c001n03 c001n04 c001n05 c001n06 c001n07 c001n08 c001n09 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n09
+ * rsc_c001n09 (ocf:heartbeat:IPaddr): Stopped (unmanaged)
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02 (unmanaged)
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03 (unmanaged)
+ * rsc_c001n04 (ocf:heartbeat:IPaddr): Started c001n04 (unmanaged)
+ * rsc_c001n05 (ocf:heartbeat:IPaddr): Started c001n05 (unmanaged)
+ * rsc_c001n06 (ocf:heartbeat:IPaddr): Started c001n06 (unmanaged)
+ * rsc_c001n07 (ocf:heartbeat:IPaddr): Started c001n07 (unmanaged)
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08 (unmanaged)
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started c001n02
+ * child_DoFencing:1 (stonith:ssh): Started c001n03
+ * child_DoFencing:2 (stonith:ssh): Started c001n04
+ * child_DoFencing:3 (stonith:ssh): Started c001n05
+ * child_DoFencing:4 (stonith:ssh): Started c001n06
+ * child_DoFencing:5 (stonith:ssh): Started c001n07
+ * child_DoFencing:6 (stonith:ssh): Started c001n08
+ * child_DoFencing:7 (stonith:ssh): Started c001n09
+ * Clone Set: CloneSet [child_CloneSet] (unique):
+ * child_CloneSet:0 (stonith:ssh): Started c001n02
+ * child_CloneSet:1 (stonith:ssh): Started c001n03
+ * child_CloneSet:2 (stonith:ssh): Started c001n04
+ * child_CloneSet:3 (stonith:ssh): Started c001n05
+ * child_CloneSet:4 (stonith:ssh): Started c001n06
+ * child_CloneSet:5 (stonith:ssh): Started c001n07
+ * child_CloneSet:6 (stonith:ssh): Started c001n08
+ * child_CloneSet:7 (stonith:ssh): Started c001n09
diff --git a/cts/scheduler/summary/interleave-2.summary b/cts/scheduler/summary/interleave-2.summary
new file mode 100644
index 0000000..fe16667
--- /dev/null
+++ b/cts/scheduler/summary/interleave-2.summary
@@ -0,0 +1,241 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c001n02 c001n03 c001n04 c001n05 c001n06 c001n07 c001n08 c001n09 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n09
+ * rsc_c001n09 (ocf:heartbeat:IPaddr): Stopped (unmanaged)
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02 (unmanaged)
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03 (unmanaged)
+ * rsc_c001n04 (ocf:heartbeat:IPaddr): Started c001n04 (unmanaged)
+ * rsc_c001n05 (ocf:heartbeat:IPaddr): Started c001n05 (unmanaged)
+ * rsc_c001n06 (ocf:heartbeat:IPaddr): Started c001n06 (unmanaged)
+ * rsc_c001n07 (ocf:heartbeat:IPaddr): Started c001n07 (unmanaged)
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08 (unmanaged)
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started c001n02
+ * child_DoFencing:1 (stonith:ssh): Started c001n03
+ * child_DoFencing:2 (stonith:ssh): Started c001n04
+ * child_DoFencing:3 (stonith:ssh): Started c001n05
+ * child_DoFencing:4 (stonith:ssh): Started c001n06
+ * child_DoFencing:5 (stonith:ssh): Started c001n07
+ * child_DoFencing:6 (stonith:ssh): Started c001n08
+ * child_DoFencing:7 (stonith:ssh): Started c001n09
+ * Clone Set: CloneSet [child_CloneSet] (unique):
+ * child_CloneSet:0 (stonith:ssh): Stopped
+ * child_CloneSet:1 (stonith:ssh): Stopped
+ * child_CloneSet:2 (stonith:ssh): Stopped
+ * child_CloneSet:3 (stonith:ssh): Stopped
+ * child_CloneSet:4 (stonith:ssh): Stopped
+ * child_CloneSet:5 (stonith:ssh): Stopped
+ * child_CloneSet:6 (stonith:ssh): Stopped
+ * child_CloneSet:7 (stonith:ssh): Stopped
+
+Transition Summary:
+ * Start child_CloneSet:0 ( c001n02 )
+ * Start child_CloneSet:1 ( c001n03 )
+ * Start child_CloneSet:2 ( c001n04 )
+ * Start child_CloneSet:3 ( c001n05 )
+ * Start child_CloneSet:4 ( c001n06 )
+ * Start child_CloneSet:5 ( c001n07 )
+ * Start child_CloneSet:6 ( c001n08 )
+ * Start child_CloneSet:7 ( c001n09 )
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr monitor on c001n08
+ * Resource action: DcIPaddr monitor on c001n07
+ * Resource action: DcIPaddr monitor on c001n06
+ * Resource action: DcIPaddr monitor on c001n05
+ * Resource action: DcIPaddr monitor on c001n04
+ * Resource action: DcIPaddr monitor on c001n03
+ * Resource action: DcIPaddr monitor on c001n02
+ * Resource action: rsc_c001n09 monitor on c001n09
+ * Resource action: rsc_c001n09 monitor on c001n08
+ * Resource action: rsc_c001n09 monitor on c001n07
+ * Resource action: rsc_c001n09 monitor on c001n05
+ * Resource action: rsc_c001n09 monitor on c001n04
+ * Resource action: rsc_c001n09 monitor on c001n03
+ * Resource action: rsc_c001n09 monitor on c001n02
+ * Resource action: rsc_c001n02 monitor on c001n09
+ * Resource action: rsc_c001n02 monitor on c001n08
+ * Resource action: rsc_c001n02 monitor on c001n07
+ * Resource action: rsc_c001n02 monitor on c001n05
+ * Resource action: rsc_c001n02 monitor on c001n04
+ * Resource action: rsc_c001n03 monitor on c001n09
+ * Resource action: rsc_c001n03 monitor on c001n08
+ * Resource action: rsc_c001n03 monitor on c001n07
+ * Resource action: rsc_c001n03 monitor on c001n05
+ * Resource action: rsc_c001n03 monitor on c001n04
+ * Resource action: rsc_c001n03 monitor on c001n02
+ * Resource action: rsc_c001n04 monitor on c001n09
+ * Resource action: rsc_c001n04 monitor on c001n08
+ * Resource action: rsc_c001n04 monitor on c001n07
+ * Resource action: rsc_c001n04 monitor on c001n05
+ * Resource action: rsc_c001n04 monitor on c001n03
+ * Resource action: rsc_c001n04 monitor on c001n02
+ * Resource action: rsc_c001n05 monitor on c001n09
+ * Resource action: rsc_c001n05 monitor on c001n08
+ * Resource action: rsc_c001n05 monitor on c001n07
+ * Resource action: rsc_c001n05 monitor on c001n06
+ * Resource action: rsc_c001n05 monitor on c001n04
+ * Resource action: rsc_c001n05 monitor on c001n03
+ * Resource action: rsc_c001n05 monitor on c001n02
+ * Resource action: rsc_c001n06 monitor on c001n09
+ * Resource action: rsc_c001n06 monitor on c001n08
+ * Resource action: rsc_c001n06 monitor on c001n07
+ * Resource action: rsc_c001n06 monitor on c001n05
+ * Resource action: rsc_c001n06 monitor on c001n04
+ * Resource action: rsc_c001n06 monitor on c001n03
+ * Resource action: rsc_c001n07 monitor on c001n09
+ * Resource action: rsc_c001n07 monitor on c001n08
+ * Resource action: rsc_c001n07 monitor on c001n06
+ * Resource action: rsc_c001n07 monitor on c001n05
+ * Resource action: rsc_c001n07 monitor on c001n04
+ * Resource action: rsc_c001n08 monitor on c001n09
+ * Resource action: rsc_c001n08 monitor on c001n07
+ * Resource action: rsc_c001n08 monitor on c001n05
+ * Resource action: child_DoFencing:0 monitor on c001n09
+ * Resource action: child_DoFencing:0 monitor on c001n08
+ * Resource action: child_DoFencing:0 monitor on c001n07
+ * Resource action: child_DoFencing:1 monitor on c001n08
+ * Resource action: child_DoFencing:1 monitor on c001n07
+ * Resource action: child_DoFencing:1 monitor on c001n02
+ * Resource action: child_DoFencing:2 monitor on c001n09
+ * Resource action: child_DoFencing:2 monitor on c001n08
+ * Resource action: child_DoFencing:2 monitor on c001n07
+ * Resource action: child_DoFencing:2 monitor on c001n03
+ * Resource action: child_DoFencing:3 monitor on c001n08
+ * Resource action: child_DoFencing:3 monitor on c001n04
+ * Resource action: child_DoFencing:3 monitor on c001n02
+ * Resource action: child_DoFencing:4 monitor on c001n09
+ * Resource action: child_DoFencing:4 monitor on c001n05
+ * Resource action: child_DoFencing:4 monitor on c001n03
+ * Resource action: child_DoFencing:5 monitor on c001n08
+ * Resource action: child_DoFencing:5 monitor on c001n05
+ * Resource action: child_DoFencing:5 monitor on c001n04
+ * Resource action: child_DoFencing:5 monitor on c001n02
+ * Resource action: child_DoFencing:6 monitor on c001n09
+ * Resource action: child_DoFencing:6 monitor on c001n07
+ * Resource action: child_DoFencing:6 monitor on c001n05
+ * Resource action: child_DoFencing:6 monitor on c001n04
+ * Resource action: child_DoFencing:7 monitor on c001n08
+ * Resource action: child_DoFencing:7 monitor on c001n07
+ * Resource action: child_DoFencing:7 monitor on c001n05
+ * Resource action: child_DoFencing:7 monitor on c001n04
+ * Resource action: child_DoFencing:7 monitor on c001n03
+ * Resource action: child_DoFencing:7 monitor on c001n02
+ * Resource action: child_CloneSet:0 monitor on c001n09
+ * Resource action: child_CloneSet:0 monitor on c001n08
+ * Resource action: child_CloneSet:0 monitor on c001n07
+ * Resource action: child_CloneSet:0 monitor on c001n06
+ * Resource action: child_CloneSet:0 monitor on c001n05
+ * Resource action: child_CloneSet:0 monitor on c001n04
+ * Resource action: child_CloneSet:0 monitor on c001n03
+ * Resource action: child_CloneSet:0 monitor on c001n02
+ * Resource action: child_CloneSet:1 monitor on c001n09
+ * Resource action: child_CloneSet:1 monitor on c001n08
+ * Resource action: child_CloneSet:1 monitor on c001n07
+ * Resource action: child_CloneSet:1 monitor on c001n06
+ * Resource action: child_CloneSet:1 monitor on c001n05
+ * Resource action: child_CloneSet:1 monitor on c001n04
+ * Resource action: child_CloneSet:1 monitor on c001n03
+ * Resource action: child_CloneSet:1 monitor on c001n02
+ * Resource action: child_CloneSet:2 monitor on c001n09
+ * Resource action: child_CloneSet:2 monitor on c001n08
+ * Resource action: child_CloneSet:2 monitor on c001n07
+ * Resource action: child_CloneSet:2 monitor on c001n06
+ * Resource action: child_CloneSet:2 monitor on c001n05
+ * Resource action: child_CloneSet:2 monitor on c001n04
+ * Resource action: child_CloneSet:2 monitor on c001n03
+ * Resource action: child_CloneSet:2 monitor on c001n02
+ * Resource action: child_CloneSet:3 monitor on c001n09
+ * Resource action: child_CloneSet:3 monitor on c001n08
+ * Resource action: child_CloneSet:3 monitor on c001n07
+ * Resource action: child_CloneSet:3 monitor on c001n06
+ * Resource action: child_CloneSet:3 monitor on c001n05
+ * Resource action: child_CloneSet:3 monitor on c001n04
+ * Resource action: child_CloneSet:3 monitor on c001n03
+ * Resource action: child_CloneSet:3 monitor on c001n02
+ * Resource action: child_CloneSet:4 monitor on c001n09
+ * Resource action: child_CloneSet:4 monitor on c001n08
+ * Resource action: child_CloneSet:4 monitor on c001n07
+ * Resource action: child_CloneSet:4 monitor on c001n06
+ * Resource action: child_CloneSet:4 monitor on c001n05
+ * Resource action: child_CloneSet:4 monitor on c001n04
+ * Resource action: child_CloneSet:4 monitor on c001n03
+ * Resource action: child_CloneSet:4 monitor on c001n02
+ * Resource action: child_CloneSet:5 monitor on c001n09
+ * Resource action: child_CloneSet:5 monitor on c001n08
+ * Resource action: child_CloneSet:5 monitor on c001n07
+ * Resource action: child_CloneSet:5 monitor on c001n06
+ * Resource action: child_CloneSet:5 monitor on c001n05
+ * Resource action: child_CloneSet:5 monitor on c001n04
+ * Resource action: child_CloneSet:5 monitor on c001n03
+ * Resource action: child_CloneSet:5 monitor on c001n02
+ * Resource action: child_CloneSet:6 monitor on c001n09
+ * Resource action: child_CloneSet:6 monitor on c001n08
+ * Resource action: child_CloneSet:6 monitor on c001n07
+ * Resource action: child_CloneSet:6 monitor on c001n06
+ * Resource action: child_CloneSet:6 monitor on c001n05
+ * Resource action: child_CloneSet:6 monitor on c001n04
+ * Resource action: child_CloneSet:6 monitor on c001n03
+ * Resource action: child_CloneSet:6 monitor on c001n02
+ * Resource action: child_CloneSet:7 monitor on c001n09
+ * Resource action: child_CloneSet:7 monitor on c001n08
+ * Resource action: child_CloneSet:7 monitor on c001n07
+ * Resource action: child_CloneSet:7 monitor on c001n06
+ * Resource action: child_CloneSet:7 monitor on c001n05
+ * Resource action: child_CloneSet:7 monitor on c001n04
+ * Resource action: child_CloneSet:7 monitor on c001n03
+ * Resource action: child_CloneSet:7 monitor on c001n02
+ * Pseudo action: CloneSet_start_0
+ * Resource action: child_CloneSet:0 start on c001n02
+ * Resource action: child_CloneSet:1 start on c001n03
+ * Resource action: child_CloneSet:2 start on c001n04
+ * Resource action: child_CloneSet:3 start on c001n05
+ * Resource action: child_CloneSet:4 start on c001n06
+ * Resource action: child_CloneSet:5 start on c001n07
+ * Resource action: child_CloneSet:6 start on c001n08
+ * Resource action: child_CloneSet:7 start on c001n09
+ * Pseudo action: CloneSet_running_0
+ * Resource action: child_CloneSet:0 monitor=5000 on c001n02
+ * Resource action: child_CloneSet:1 monitor=5000 on c001n03
+ * Resource action: child_CloneSet:2 monitor=5000 on c001n04
+ * Resource action: child_CloneSet:3 monitor=5000 on c001n05
+ * Resource action: child_CloneSet:4 monitor=5000 on c001n06
+ * Resource action: child_CloneSet:5 monitor=5000 on c001n07
+ * Resource action: child_CloneSet:6 monitor=5000 on c001n08
+ * Resource action: child_CloneSet:7 monitor=5000 on c001n09
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n02 c001n03 c001n04 c001n05 c001n06 c001n07 c001n08 c001n09 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n09
+ * rsc_c001n09 (ocf:heartbeat:IPaddr): Stopped (unmanaged)
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02 (unmanaged)
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03 (unmanaged)
+ * rsc_c001n04 (ocf:heartbeat:IPaddr): Started c001n04 (unmanaged)
+ * rsc_c001n05 (ocf:heartbeat:IPaddr): Started c001n05 (unmanaged)
+ * rsc_c001n06 (ocf:heartbeat:IPaddr): Started c001n06 (unmanaged)
+ * rsc_c001n07 (ocf:heartbeat:IPaddr): Started c001n07 (unmanaged)
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08 (unmanaged)
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started c001n02
+ * child_DoFencing:1 (stonith:ssh): Started c001n03
+ * child_DoFencing:2 (stonith:ssh): Started c001n04
+ * child_DoFencing:3 (stonith:ssh): Started c001n05
+ * child_DoFencing:4 (stonith:ssh): Started c001n06
+ * child_DoFencing:5 (stonith:ssh): Started c001n07
+ * child_DoFencing:6 (stonith:ssh): Started c001n08
+ * child_DoFencing:7 (stonith:ssh): Started c001n09
+ * Clone Set: CloneSet [child_CloneSet] (unique):
+ * child_CloneSet:0 (stonith:ssh): Started c001n02
+ * child_CloneSet:1 (stonith:ssh): Started c001n03
+ * child_CloneSet:2 (stonith:ssh): Started c001n04
+ * child_CloneSet:3 (stonith:ssh): Started c001n05
+ * child_CloneSet:4 (stonith:ssh): Started c001n06
+ * child_CloneSet:5 (stonith:ssh): Started c001n07
+ * child_CloneSet:6 (stonith:ssh): Started c001n08
+ * child_CloneSet:7 (stonith:ssh): Started c001n09
diff --git a/cts/scheduler/summary/interleave-3.summary b/cts/scheduler/summary/interleave-3.summary
new file mode 100644
index 0000000..fe16667
--- /dev/null
+++ b/cts/scheduler/summary/interleave-3.summary
@@ -0,0 +1,241 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c001n02 c001n03 c001n04 c001n05 c001n06 c001n07 c001n08 c001n09 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n09
+ * rsc_c001n09 (ocf:heartbeat:IPaddr): Stopped (unmanaged)
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02 (unmanaged)
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03 (unmanaged)
+ * rsc_c001n04 (ocf:heartbeat:IPaddr): Started c001n04 (unmanaged)
+ * rsc_c001n05 (ocf:heartbeat:IPaddr): Started c001n05 (unmanaged)
+ * rsc_c001n06 (ocf:heartbeat:IPaddr): Started c001n06 (unmanaged)
+ * rsc_c001n07 (ocf:heartbeat:IPaddr): Started c001n07 (unmanaged)
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08 (unmanaged)
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started c001n02
+ * child_DoFencing:1 (stonith:ssh): Started c001n03
+ * child_DoFencing:2 (stonith:ssh): Started c001n04
+ * child_DoFencing:3 (stonith:ssh): Started c001n05
+ * child_DoFencing:4 (stonith:ssh): Started c001n06
+ * child_DoFencing:5 (stonith:ssh): Started c001n07
+ * child_DoFencing:6 (stonith:ssh): Started c001n08
+ * child_DoFencing:7 (stonith:ssh): Started c001n09
+ * Clone Set: CloneSet [child_CloneSet] (unique):
+ * child_CloneSet:0 (stonith:ssh): Stopped
+ * child_CloneSet:1 (stonith:ssh): Stopped
+ * child_CloneSet:2 (stonith:ssh): Stopped
+ * child_CloneSet:3 (stonith:ssh): Stopped
+ * child_CloneSet:4 (stonith:ssh): Stopped
+ * child_CloneSet:5 (stonith:ssh): Stopped
+ * child_CloneSet:6 (stonith:ssh): Stopped
+ * child_CloneSet:7 (stonith:ssh): Stopped
+
+Transition Summary:
+ * Start child_CloneSet:0 ( c001n02 )
+ * Start child_CloneSet:1 ( c001n03 )
+ * Start child_CloneSet:2 ( c001n04 )
+ * Start child_CloneSet:3 ( c001n05 )
+ * Start child_CloneSet:4 ( c001n06 )
+ * Start child_CloneSet:5 ( c001n07 )
+ * Start child_CloneSet:6 ( c001n08 )
+ * Start child_CloneSet:7 ( c001n09 )
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr monitor on c001n08
+ * Resource action: DcIPaddr monitor on c001n07
+ * Resource action: DcIPaddr monitor on c001n06
+ * Resource action: DcIPaddr monitor on c001n05
+ * Resource action: DcIPaddr monitor on c001n04
+ * Resource action: DcIPaddr monitor on c001n03
+ * Resource action: DcIPaddr monitor on c001n02
+ * Resource action: rsc_c001n09 monitor on c001n09
+ * Resource action: rsc_c001n09 monitor on c001n08
+ * Resource action: rsc_c001n09 monitor on c001n07
+ * Resource action: rsc_c001n09 monitor on c001n05
+ * Resource action: rsc_c001n09 monitor on c001n04
+ * Resource action: rsc_c001n09 monitor on c001n03
+ * Resource action: rsc_c001n09 monitor on c001n02
+ * Resource action: rsc_c001n02 monitor on c001n09
+ * Resource action: rsc_c001n02 monitor on c001n08
+ * Resource action: rsc_c001n02 monitor on c001n07
+ * Resource action: rsc_c001n02 monitor on c001n05
+ * Resource action: rsc_c001n02 monitor on c001n04
+ * Resource action: rsc_c001n03 monitor on c001n09
+ * Resource action: rsc_c001n03 monitor on c001n08
+ * Resource action: rsc_c001n03 monitor on c001n07
+ * Resource action: rsc_c001n03 monitor on c001n05
+ * Resource action: rsc_c001n03 monitor on c001n04
+ * Resource action: rsc_c001n03 monitor on c001n02
+ * Resource action: rsc_c001n04 monitor on c001n09
+ * Resource action: rsc_c001n04 monitor on c001n08
+ * Resource action: rsc_c001n04 monitor on c001n07
+ * Resource action: rsc_c001n04 monitor on c001n05
+ * Resource action: rsc_c001n04 monitor on c001n03
+ * Resource action: rsc_c001n04 monitor on c001n02
+ * Resource action: rsc_c001n05 monitor on c001n09
+ * Resource action: rsc_c001n05 monitor on c001n08
+ * Resource action: rsc_c001n05 monitor on c001n07
+ * Resource action: rsc_c001n05 monitor on c001n06
+ * Resource action: rsc_c001n05 monitor on c001n04
+ * Resource action: rsc_c001n05 monitor on c001n03
+ * Resource action: rsc_c001n05 monitor on c001n02
+ * Resource action: rsc_c001n06 monitor on c001n09
+ * Resource action: rsc_c001n06 monitor on c001n08
+ * Resource action: rsc_c001n06 monitor on c001n07
+ * Resource action: rsc_c001n06 monitor on c001n05
+ * Resource action: rsc_c001n06 monitor on c001n04
+ * Resource action: rsc_c001n06 monitor on c001n03
+ * Resource action: rsc_c001n07 monitor on c001n09
+ * Resource action: rsc_c001n07 monitor on c001n08
+ * Resource action: rsc_c001n07 monitor on c001n06
+ * Resource action: rsc_c001n07 monitor on c001n05
+ * Resource action: rsc_c001n07 monitor on c001n04
+ * Resource action: rsc_c001n08 monitor on c001n09
+ * Resource action: rsc_c001n08 monitor on c001n07
+ * Resource action: rsc_c001n08 monitor on c001n05
+ * Resource action: child_DoFencing:0 monitor on c001n09
+ * Resource action: child_DoFencing:0 monitor on c001n08
+ * Resource action: child_DoFencing:0 monitor on c001n07
+ * Resource action: child_DoFencing:1 monitor on c001n08
+ * Resource action: child_DoFencing:1 monitor on c001n07
+ * Resource action: child_DoFencing:1 monitor on c001n02
+ * Resource action: child_DoFencing:2 monitor on c001n09
+ * Resource action: child_DoFencing:2 monitor on c001n08
+ * Resource action: child_DoFencing:2 monitor on c001n07
+ * Resource action: child_DoFencing:2 monitor on c001n03
+ * Resource action: child_DoFencing:3 monitor on c001n08
+ * Resource action: child_DoFencing:3 monitor on c001n04
+ * Resource action: child_DoFencing:3 monitor on c001n02
+ * Resource action: child_DoFencing:4 monitor on c001n09
+ * Resource action: child_DoFencing:4 monitor on c001n05
+ * Resource action: child_DoFencing:4 monitor on c001n03
+ * Resource action: child_DoFencing:5 monitor on c001n08
+ * Resource action: child_DoFencing:5 monitor on c001n05
+ * Resource action: child_DoFencing:5 monitor on c001n04
+ * Resource action: child_DoFencing:5 monitor on c001n02
+ * Resource action: child_DoFencing:6 monitor on c001n09
+ * Resource action: child_DoFencing:6 monitor on c001n07
+ * Resource action: child_DoFencing:6 monitor on c001n05
+ * Resource action: child_DoFencing:6 monitor on c001n04
+ * Resource action: child_DoFencing:7 monitor on c001n08
+ * Resource action: child_DoFencing:7 monitor on c001n07
+ * Resource action: child_DoFencing:7 monitor on c001n05
+ * Resource action: child_DoFencing:7 monitor on c001n04
+ * Resource action: child_DoFencing:7 monitor on c001n03
+ * Resource action: child_DoFencing:7 monitor on c001n02
+ * Resource action: child_CloneSet:0 monitor on c001n09
+ * Resource action: child_CloneSet:0 monitor on c001n08
+ * Resource action: child_CloneSet:0 monitor on c001n07
+ * Resource action: child_CloneSet:0 monitor on c001n06
+ * Resource action: child_CloneSet:0 monitor on c001n05
+ * Resource action: child_CloneSet:0 monitor on c001n04
+ * Resource action: child_CloneSet:0 monitor on c001n03
+ * Resource action: child_CloneSet:0 monitor on c001n02
+ * Resource action: child_CloneSet:1 monitor on c001n09
+ * Resource action: child_CloneSet:1 monitor on c001n08
+ * Resource action: child_CloneSet:1 monitor on c001n07
+ * Resource action: child_CloneSet:1 monitor on c001n06
+ * Resource action: child_CloneSet:1 monitor on c001n05
+ * Resource action: child_CloneSet:1 monitor on c001n04
+ * Resource action: child_CloneSet:1 monitor on c001n03
+ * Resource action: child_CloneSet:1 monitor on c001n02
+ * Resource action: child_CloneSet:2 monitor on c001n09
+ * Resource action: child_CloneSet:2 monitor on c001n08
+ * Resource action: child_CloneSet:2 monitor on c001n07
+ * Resource action: child_CloneSet:2 monitor on c001n06
+ * Resource action: child_CloneSet:2 monitor on c001n05
+ * Resource action: child_CloneSet:2 monitor on c001n04
+ * Resource action: child_CloneSet:2 monitor on c001n03
+ * Resource action: child_CloneSet:2 monitor on c001n02
+ * Resource action: child_CloneSet:3 monitor on c001n09
+ * Resource action: child_CloneSet:3 monitor on c001n08
+ * Resource action: child_CloneSet:3 monitor on c001n07
+ * Resource action: child_CloneSet:3 monitor on c001n06
+ * Resource action: child_CloneSet:3 monitor on c001n05
+ * Resource action: child_CloneSet:3 monitor on c001n04
+ * Resource action: child_CloneSet:3 monitor on c001n03
+ * Resource action: child_CloneSet:3 monitor on c001n02
+ * Resource action: child_CloneSet:4 monitor on c001n09
+ * Resource action: child_CloneSet:4 monitor on c001n08
+ * Resource action: child_CloneSet:4 monitor on c001n07
+ * Resource action: child_CloneSet:4 monitor on c001n06
+ * Resource action: child_CloneSet:4 monitor on c001n05
+ * Resource action: child_CloneSet:4 monitor on c001n04
+ * Resource action: child_CloneSet:4 monitor on c001n03
+ * Resource action: child_CloneSet:4 monitor on c001n02
+ * Resource action: child_CloneSet:5 monitor on c001n09
+ * Resource action: child_CloneSet:5 monitor on c001n08
+ * Resource action: child_CloneSet:5 monitor on c001n07
+ * Resource action: child_CloneSet:5 monitor on c001n06
+ * Resource action: child_CloneSet:5 monitor on c001n05
+ * Resource action: child_CloneSet:5 monitor on c001n04
+ * Resource action: child_CloneSet:5 monitor on c001n03
+ * Resource action: child_CloneSet:5 monitor on c001n02
+ * Resource action: child_CloneSet:6 monitor on c001n09
+ * Resource action: child_CloneSet:6 monitor on c001n08
+ * Resource action: child_CloneSet:6 monitor on c001n07
+ * Resource action: child_CloneSet:6 monitor on c001n06
+ * Resource action: child_CloneSet:6 monitor on c001n05
+ * Resource action: child_CloneSet:6 monitor on c001n04
+ * Resource action: child_CloneSet:6 monitor on c001n03
+ * Resource action: child_CloneSet:6 monitor on c001n02
+ * Resource action: child_CloneSet:7 monitor on c001n09
+ * Resource action: child_CloneSet:7 monitor on c001n08
+ * Resource action: child_CloneSet:7 monitor on c001n07
+ * Resource action: child_CloneSet:7 monitor on c001n06
+ * Resource action: child_CloneSet:7 monitor on c001n05
+ * Resource action: child_CloneSet:7 monitor on c001n04
+ * Resource action: child_CloneSet:7 monitor on c001n03
+ * Resource action: child_CloneSet:7 monitor on c001n02
+ * Pseudo action: CloneSet_start_0
+ * Resource action: child_CloneSet:0 start on c001n02
+ * Resource action: child_CloneSet:1 start on c001n03
+ * Resource action: child_CloneSet:2 start on c001n04
+ * Resource action: child_CloneSet:3 start on c001n05
+ * Resource action: child_CloneSet:4 start on c001n06
+ * Resource action: child_CloneSet:5 start on c001n07
+ * Resource action: child_CloneSet:6 start on c001n08
+ * Resource action: child_CloneSet:7 start on c001n09
+ * Pseudo action: CloneSet_running_0
+ * Resource action: child_CloneSet:0 monitor=5000 on c001n02
+ * Resource action: child_CloneSet:1 monitor=5000 on c001n03
+ * Resource action: child_CloneSet:2 monitor=5000 on c001n04
+ * Resource action: child_CloneSet:3 monitor=5000 on c001n05
+ * Resource action: child_CloneSet:4 monitor=5000 on c001n06
+ * Resource action: child_CloneSet:5 monitor=5000 on c001n07
+ * Resource action: child_CloneSet:6 monitor=5000 on c001n08
+ * Resource action: child_CloneSet:7 monitor=5000 on c001n09
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n02 c001n03 c001n04 c001n05 c001n06 c001n07 c001n08 c001n09 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n09
+ * rsc_c001n09 (ocf:heartbeat:IPaddr): Stopped (unmanaged)
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02 (unmanaged)
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03 (unmanaged)
+ * rsc_c001n04 (ocf:heartbeat:IPaddr): Started c001n04 (unmanaged)
+ * rsc_c001n05 (ocf:heartbeat:IPaddr): Started c001n05 (unmanaged)
+ * rsc_c001n06 (ocf:heartbeat:IPaddr): Started c001n06 (unmanaged)
+ * rsc_c001n07 (ocf:heartbeat:IPaddr): Started c001n07 (unmanaged)
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08 (unmanaged)
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started c001n02
+ * child_DoFencing:1 (stonith:ssh): Started c001n03
+ * child_DoFencing:2 (stonith:ssh): Started c001n04
+ * child_DoFencing:3 (stonith:ssh): Started c001n05
+ * child_DoFencing:4 (stonith:ssh): Started c001n06
+ * child_DoFencing:5 (stonith:ssh): Started c001n07
+ * child_DoFencing:6 (stonith:ssh): Started c001n08
+ * child_DoFencing:7 (stonith:ssh): Started c001n09
+ * Clone Set: CloneSet [child_CloneSet] (unique):
+ * child_CloneSet:0 (stonith:ssh): Started c001n02
+ * child_CloneSet:1 (stonith:ssh): Started c001n03
+ * child_CloneSet:2 (stonith:ssh): Started c001n04
+ * child_CloneSet:3 (stonith:ssh): Started c001n05
+ * child_CloneSet:4 (stonith:ssh): Started c001n06
+ * child_CloneSet:5 (stonith:ssh): Started c001n07
+ * child_CloneSet:6 (stonith:ssh): Started c001n08
+ * child_CloneSet:7 (stonith:ssh): Started c001n09
diff --git a/cts/scheduler/summary/interleave-pseudo-stop.summary b/cts/scheduler/summary/interleave-pseudo-stop.summary
new file mode 100644
index 0000000..619e40d
--- /dev/null
+++ b/cts/scheduler/summary/interleave-pseudo-stop.summary
@@ -0,0 +1,83 @@
+Current cluster status:
+ * Node List:
+ * Node node1: UNCLEAN (offline)
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * Clone Set: stonithcloneset [stonithclone]:
+ * stonithclone (stonith:external/ssh): Started node1 (UNCLEAN)
+ * Started: [ node2 ]
+ * Clone Set: evmscloneset [evmsclone]:
+ * evmsclone (ocf:heartbeat:EvmsSCC): Started node1 (UNCLEAN)
+ * Started: [ node2 ]
+ * Clone Set: imagestorecloneset [imagestoreclone] (disabled):
+ * imagestoreclone (ocf:heartbeat:Filesystem): Started node1 (UNCLEAN)
+ * Started: [ node2 ]
+ * Clone Set: configstorecloneset [configstoreclone]:
+ * configstoreclone (ocf:heartbeat:Filesystem): Started node1 (UNCLEAN)
+ * Started: [ node2 ]
+
+Transition Summary:
+ * Fence (reboot) node1 'peer is no longer part of the cluster'
+ * Stop stonithclone:1 ( node1 ) due to node availability
+ * Stop evmsclone:1 ( node1 ) due to node availability
+ * Stop imagestoreclone:1 ( node1 ) due to node availability
+ * Stop configstoreclone:1 ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: stonithcloneset_stop_0
+ * Pseudo action: evmscloneset_pre_notify_stop_0
+ * Pseudo action: imagestorecloneset_pre_notify_stop_0
+ * Pseudo action: configstorecloneset_pre_notify_stop_0
+ * Fencing node1 (reboot)
+ * Pseudo action: stonithclone:0_stop_0
+ * Pseudo action: stonithcloneset_stopped_0
+ * Resource action: evmsclone:1 notify on node2
+ * Pseudo action: evmsclone:0_post_notify_stop_0
+ * Pseudo action: evmscloneset_confirmed-pre_notify_stop_0
+ * Resource action: imagestoreclone:1 notify on node2
+ * Pseudo action: imagestoreclone:0_post_notify_stop_0
+ * Pseudo action: imagestorecloneset_confirmed-pre_notify_stop_0
+ * Pseudo action: imagestorecloneset_stop_0
+ * Resource action: configstoreclone:1 notify on node2
+ * Pseudo action: configstoreclone:0_post_notify_stop_0
+ * Pseudo action: configstorecloneset_confirmed-pre_notify_stop_0
+ * Pseudo action: configstorecloneset_stop_0
+ * Pseudo action: imagestoreclone:0_stop_0
+ * Pseudo action: imagestorecloneset_stopped_0
+ * Pseudo action: configstoreclone:0_stop_0
+ * Pseudo action: configstorecloneset_stopped_0
+ * Pseudo action: imagestorecloneset_post_notify_stopped_0
+ * Pseudo action: configstorecloneset_post_notify_stopped_0
+ * Resource action: imagestoreclone:1 notify on node2
+ * Pseudo action: imagestoreclone:0_notified_0
+ * Pseudo action: imagestorecloneset_confirmed-post_notify_stopped_0
+ * Resource action: configstoreclone:1 notify on node2
+ * Pseudo action: configstoreclone:0_notified_0
+ * Pseudo action: configstorecloneset_confirmed-post_notify_stopped_0
+ * Pseudo action: evmscloneset_stop_0
+ * Pseudo action: evmsclone:0_stop_0
+ * Pseudo action: evmscloneset_stopped_0
+ * Pseudo action: evmscloneset_post_notify_stopped_0
+ * Resource action: evmsclone:1 notify on node2
+ * Pseudo action: evmsclone:0_notified_0
+ * Pseudo action: evmscloneset_confirmed-post_notify_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node2 ]
+ * OFFLINE: [ node1 ]
+
+ * Full List of Resources:
+ * Clone Set: stonithcloneset [stonithclone]:
+ * Started: [ node2 ]
+ * Stopped: [ node1 ]
+ * Clone Set: evmscloneset [evmsclone]:
+ * Started: [ node2 ]
+ * Stopped: [ node1 ]
+ * Clone Set: imagestorecloneset [imagestoreclone] (disabled):
+ * Started: [ node2 ]
+ * Stopped (disabled): [ node1 ]
+ * Clone Set: configstorecloneset [configstoreclone]:
+ * Started: [ node2 ]
+ * Stopped: [ node1 ]
diff --git a/cts/scheduler/summary/interleave-restart.summary b/cts/scheduler/summary/interleave-restart.summary
new file mode 100644
index 0000000..8862aac
--- /dev/null
+++ b/cts/scheduler/summary/interleave-restart.summary
@@ -0,0 +1,97 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: stonithcloneset [stonithclone]:
+ * Started: [ node1 node2 ]
+ * Clone Set: evmscloneset [evmsclone]:
+ * evmsclone (ocf:heartbeat:EvmsSCC): FAILED node1
+ * Started: [ node2 ]
+ * Clone Set: imagestorecloneset [imagestoreclone] (disabled):
+ * Started: [ node1 node2 ]
+ * Clone Set: configstorecloneset [configstoreclone]:
+ * Started: [ node1 node2 ]
+
+Transition Summary:
+ * Recover evmsclone:1 ( node1 )
+ * Restart imagestoreclone:1 ( node1 ) due to required evmsclone:1 start
+ * Restart configstoreclone:1 ( node1 ) due to required evmsclone:1 start
+
+Executing Cluster Transition:
+ * Pseudo action: evmscloneset_pre_notify_stop_0
+ * Pseudo action: imagestorecloneset_pre_notify_stop_0
+ * Pseudo action: configstorecloneset_pre_notify_stop_0
+ * Resource action: evmsclone:1 notify on node2
+ * Resource action: evmsclone:0 notify on node1
+ * Pseudo action: evmscloneset_confirmed-pre_notify_stop_0
+ * Resource action: imagestoreclone:1 notify on node2
+ * Resource action: imagestoreclone:0 notify on node1
+ * Pseudo action: imagestorecloneset_confirmed-pre_notify_stop_0
+ * Pseudo action: imagestorecloneset_stop_0
+ * Resource action: configstoreclone:1 notify on node2
+ * Resource action: configstoreclone:0 notify on node1
+ * Pseudo action: configstorecloneset_confirmed-pre_notify_stop_0
+ * Pseudo action: configstorecloneset_stop_0
+ * Resource action: imagestoreclone:0 stop on node1
+ * Pseudo action: imagestorecloneset_stopped_0
+ * Resource action: configstoreclone:0 stop on node1
+ * Pseudo action: configstorecloneset_stopped_0
+ * Pseudo action: imagestorecloneset_post_notify_stopped_0
+ * Pseudo action: configstorecloneset_post_notify_stopped_0
+ * Resource action: imagestoreclone:1 notify on node2
+ * Pseudo action: imagestorecloneset_confirmed-post_notify_stopped_0
+ * Pseudo action: imagestorecloneset_pre_notify_start_0
+ * Resource action: configstoreclone:1 notify on node2
+ * Pseudo action: configstorecloneset_confirmed-post_notify_stopped_0
+ * Pseudo action: configstorecloneset_pre_notify_start_0
+ * Pseudo action: evmscloneset_stop_0
+ * Resource action: imagestoreclone:1 notify on node2
+ * Pseudo action: imagestorecloneset_confirmed-pre_notify_start_0
+ * Resource action: configstoreclone:1 notify on node2
+ * Pseudo action: configstorecloneset_confirmed-pre_notify_start_0
+ * Resource action: evmsclone:0 stop on node1
+ * Pseudo action: evmscloneset_stopped_0
+ * Pseudo action: evmscloneset_post_notify_stopped_0
+ * Resource action: evmsclone:1 notify on node2
+ * Pseudo action: evmscloneset_confirmed-post_notify_stopped_0
+ * Pseudo action: evmscloneset_pre_notify_start_0
+ * Resource action: evmsclone:1 notify on node2
+ * Pseudo action: evmscloneset_confirmed-pre_notify_start_0
+ * Pseudo action: evmscloneset_start_0
+ * Resource action: evmsclone:0 start on node1
+ * Pseudo action: evmscloneset_running_0
+ * Pseudo action: evmscloneset_post_notify_running_0
+ * Resource action: evmsclone:1 notify on node2
+ * Resource action: evmsclone:0 notify on node1
+ * Pseudo action: evmscloneset_confirmed-post_notify_running_0
+ * Pseudo action: imagestorecloneset_start_0
+ * Pseudo action: configstorecloneset_start_0
+ * Resource action: imagestoreclone:0 start on node1
+ * Pseudo action: imagestorecloneset_running_0
+ * Resource action: configstoreclone:0 start on node1
+ * Pseudo action: configstorecloneset_running_0
+ * Pseudo action: imagestorecloneset_post_notify_running_0
+ * Pseudo action: configstorecloneset_post_notify_running_0
+ * Resource action: imagestoreclone:1 notify on node2
+ * Resource action: imagestoreclone:0 notify on node1
+ * Pseudo action: imagestorecloneset_confirmed-post_notify_running_0
+ * Resource action: configstoreclone:1 notify on node2
+ * Resource action: configstoreclone:0 notify on node1
+ * Pseudo action: configstorecloneset_confirmed-post_notify_running_0
+ * Resource action: imagestoreclone:0 monitor=20000 on node1
+ * Resource action: configstoreclone:0 monitor=20000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: stonithcloneset [stonithclone]:
+ * Started: [ node1 node2 ]
+ * Clone Set: evmscloneset [evmsclone]:
+ * Started: [ node1 node2 ]
+ * Clone Set: imagestorecloneset [imagestoreclone] (disabled):
+ * Started: [ node1 node2 ]
+ * Clone Set: configstorecloneset [configstoreclone]:
+ * Started: [ node1 node2 ]
diff --git a/cts/scheduler/summary/interleave-stop.summary b/cts/scheduler/summary/interleave-stop.summary
new file mode 100644
index 0000000..560c540
--- /dev/null
+++ b/cts/scheduler/summary/interleave-stop.summary
@@ -0,0 +1,74 @@
+Current cluster status:
+ * Node List:
+ * Node node1: standby (with active resources)
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * Clone Set: stonithcloneset [stonithclone]:
+ * Started: [ node1 node2 ]
+ * Clone Set: evmscloneset [evmsclone]:
+ * Started: [ node1 node2 ]
+ * Clone Set: imagestorecloneset [imagestoreclone] (disabled):
+ * Started: [ node1 node2 ]
+ * Clone Set: configstorecloneset [configstoreclone]:
+ * Started: [ node1 node2 ]
+
+Transition Summary:
+ * Stop stonithclone:1 ( node1 ) due to node availability
+ * Stop evmsclone:1 ( node1 ) due to node availability
+ * Stop imagestoreclone:1 ( node1 ) due to node availability
+ * Stop configstoreclone:1 ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: stonithcloneset_stop_0
+ * Pseudo action: evmscloneset_pre_notify_stop_0
+ * Pseudo action: imagestorecloneset_pre_notify_stop_0
+ * Pseudo action: configstorecloneset_pre_notify_stop_0
+ * Resource action: stonithclone:0 stop on node1
+ * Pseudo action: stonithcloneset_stopped_0
+ * Resource action: evmsclone:1 notify on node2
+ * Resource action: evmsclone:0 notify on node1
+ * Pseudo action: evmscloneset_confirmed-pre_notify_stop_0
+ * Resource action: imagestoreclone:1 notify on node2
+ * Resource action: imagestoreclone:0 notify on node1
+ * Pseudo action: imagestorecloneset_confirmed-pre_notify_stop_0
+ * Pseudo action: imagestorecloneset_stop_0
+ * Resource action: configstoreclone:1 notify on node2
+ * Resource action: configstoreclone:0 notify on node1
+ * Pseudo action: configstorecloneset_confirmed-pre_notify_stop_0
+ * Pseudo action: configstorecloneset_stop_0
+ * Resource action: imagestoreclone:0 stop on node1
+ * Pseudo action: imagestorecloneset_stopped_0
+ * Resource action: configstoreclone:0 stop on node1
+ * Pseudo action: configstorecloneset_stopped_0
+ * Pseudo action: imagestorecloneset_post_notify_stopped_0
+ * Pseudo action: configstorecloneset_post_notify_stopped_0
+ * Resource action: imagestoreclone:1 notify on node2
+ * Pseudo action: imagestorecloneset_confirmed-post_notify_stopped_0
+ * Resource action: configstoreclone:1 notify on node2
+ * Pseudo action: configstorecloneset_confirmed-post_notify_stopped_0
+ * Pseudo action: evmscloneset_stop_0
+ * Resource action: evmsclone:0 stop on node1
+ * Pseudo action: evmscloneset_stopped_0
+ * Pseudo action: evmscloneset_post_notify_stopped_0
+ * Resource action: evmsclone:1 notify on node2
+ * Pseudo action: evmscloneset_confirmed-post_notify_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Node node1: standby
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * Clone Set: stonithcloneset [stonithclone]:
+ * Started: [ node2 ]
+ * Stopped: [ node1 ]
+ * Clone Set: evmscloneset [evmsclone]:
+ * Started: [ node2 ]
+ * Stopped: [ node1 ]
+ * Clone Set: imagestorecloneset [imagestoreclone] (disabled):
+ * Started: [ node2 ]
+ * Stopped (disabled): [ node1 ]
+ * Clone Set: configstorecloneset [configstoreclone]:
+ * Started: [ node2 ]
+ * Stopped: [ node1 ]
diff --git a/cts/scheduler/summary/intervals.summary b/cts/scheduler/summary/intervals.summary
new file mode 100644
index 0000000..f6dc2e4
--- /dev/null
+++ b/cts/scheduler/summary/intervals.summary
@@ -0,0 +1,52 @@
+Using the original execution date of: 2018-03-21 23:12:42Z
+0 of 7 resource instances DISABLED and 1 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-1
+ * rsc1 (ocf:pacemaker:Dummy): Started rhel7-2
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Started rhel7-4
+ * rsc4 (ocf:pacemaker:Dummy): FAILED rhel7-5 (blocked)
+ * rsc5 (ocf:pacemaker:Dummy): Started rhel7-1
+ * rsc6 (ocf:pacemaker:Dummy): Started rhel7-2
+
+Transition Summary:
+ * Start rsc2 ( rhel7-3 )
+ * Move rsc5 ( rhel7-1 -> rhel7-2 )
+ * Move rsc6 ( rhel7-2 -> rhel7-1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc2 monitor on rhel7-5
+ * Resource action: rsc2 monitor on rhel7-4
+ * Resource action: rsc2 monitor on rhel7-3
+ * Resource action: rsc2 monitor on rhel7-2
+ * Resource action: rsc2 monitor on rhel7-1
+ * Resource action: rsc5 stop on rhel7-1
+ * Resource action: rsc5 cancel=25000 on rhel7-2
+ * Resource action: rsc6 stop on rhel7-2
+ * Resource action: rsc2 start on rhel7-3
+ * Resource action: rsc5 monitor=25000 on rhel7-1
+ * Resource action: rsc5 start on rhel7-2
+ * Resource action: rsc6 start on rhel7-1
+ * Resource action: rsc2 monitor=90000 on rhel7-3
+ * Resource action: rsc2 monitor=40000 on rhel7-3
+ * Resource action: rsc5 monitor=20000 on rhel7-2
+ * Resource action: rsc6 monitor=28000 on rhel7-1
+Using the original execution date of: 2018-03-21 23:12:42Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-1
+ * rsc1 (ocf:pacemaker:Dummy): Started rhel7-2
+ * rsc2 (ocf:pacemaker:Dummy): Started rhel7-3
+ * rsc3 (ocf:pacemaker:Dummy): Started rhel7-4
+ * rsc4 (ocf:pacemaker:Dummy): FAILED rhel7-5 (blocked)
+ * rsc5 (ocf:pacemaker:Dummy): Started rhel7-2
+ * rsc6 (ocf:pacemaker:Dummy): Started rhel7-1
diff --git a/cts/scheduler/summary/leftover-pending-monitor.summary b/cts/scheduler/summary/leftover-pending-monitor.summary
new file mode 100644
index 0000000..04b03f2
--- /dev/null
+++ b/cts/scheduler/summary/leftover-pending-monitor.summary
@@ -0,0 +1,30 @@
+Using the original execution date of: 2022-12-02 17:04:52Z
+Current cluster status:
+ * Node List:
+ * Node node-2: pending
+ * Online: [ node-1 node-3 ]
+
+ * Full List of Resources:
+ * st-sbd (stonith:external/sbd): Started node-1
+ * Clone Set: promotable-1 [stateful-1] (promotable):
+ * Promoted: [ node-3 ]
+ * Stopped: [ node-1 node-2 ]
+
+Transition Summary:
+ * Start stateful-1:1 ( node-1 ) due to unrunnable stateful-1:0 monitor (blocked)
+
+Executing Cluster Transition:
+ * Pseudo action: promotable-1_start_0
+ * Pseudo action: promotable-1_running_0
+Using the original execution date of: 2022-12-02 17:04:52Z
+
+Revised Cluster Status:
+ * Node List:
+ * Node node-2: pending
+ * Online: [ node-1 node-3 ]
+
+ * Full List of Resources:
+ * st-sbd (stonith:external/sbd): Started node-1
+ * Clone Set: promotable-1 [stateful-1] (promotable):
+ * Promoted: [ node-3 ]
+ * Stopped: [ node-1 node-2 ]
diff --git a/cts/scheduler/summary/load-stopped-loop-2.summary b/cts/scheduler/summary/load-stopped-loop-2.summary
new file mode 100644
index 0000000..eb22c5a
--- /dev/null
+++ b/cts/scheduler/summary/load-stopped-loop-2.summary
@@ -0,0 +1,114 @@
+4 of 25 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ xfc0 xfc1 xfc2 xfc3 ]
+
+ * Full List of Resources:
+ * Clone Set: cl_glusterd [p_glusterd]:
+ * Started: [ xfc0 xfc1 xfc2 xfc3 ]
+ * Clone Set: cl_p_bl_glusterfs [p_bl_glusterfs]:
+ * Started: [ xfc0 xfc1 xfc2 xfc3 ]
+ * xu-test8 (ocf:heartbeat:Xen): Started xfc3
+ * xu-test1 (ocf:heartbeat:Xen): Started xfc3
+ * xu-test10 (ocf:heartbeat:Xen): Started xfc3
+ * xu-test11 (ocf:heartbeat:Xen): Started xfc3
+ * xu-test12 (ocf:heartbeat:Xen): Started xfc2
+ * xu-test13 (ocf:heartbeat:Xen): Stopped
+ * xu-test14 (ocf:heartbeat:Xen): Stopped (disabled)
+ * xu-test15 (ocf:heartbeat:Xen): Stopped (disabled)
+ * xu-test16 (ocf:heartbeat:Xen): Stopped (disabled)
+ * xu-test17 (ocf:heartbeat:Xen): Stopped (disabled)
+ * xu-test2 (ocf:heartbeat:Xen): Started xfc3
+ * xu-test3 (ocf:heartbeat:Xen): Started xfc1
+ * xu-test4 (ocf:heartbeat:Xen): Started xfc0
+ * xu-test5 (ocf:heartbeat:Xen): Started xfc2
+ * xu-test6 (ocf:heartbeat:Xen): Started xfc3
+ * xu-test7 (ocf:heartbeat:Xen): Started xfc1
+ * xu-test9 (ocf:heartbeat:Xen): Started xfc0
+
+Transition Summary:
+ * Migrate xu-test12 ( xfc2 -> xfc3 )
+ * Migrate xu-test2 ( xfc3 -> xfc1 )
+ * Migrate xu-test3 ( xfc1 -> xfc0 )
+ * Migrate xu-test4 ( xfc0 -> xfc2 )
+ * Migrate xu-test5 ( xfc2 -> xfc3 )
+ * Migrate xu-test6 ( xfc3 -> xfc1 )
+ * Migrate xu-test7 ( xfc1 -> xfc0 )
+ * Migrate xu-test9 ( xfc0 -> xfc2 )
+ * Start xu-test13 ( xfc3 )
+
+Executing Cluster Transition:
+ * Resource action: xu-test4 migrate_to on xfc0
+ * Resource action: xu-test5 migrate_to on xfc2
+ * Resource action: xu-test6 migrate_to on xfc3
+ * Resource action: xu-test7 migrate_to on xfc1
+ * Resource action: xu-test9 migrate_to on xfc0
+ * Resource action: xu-test4 migrate_from on xfc2
+ * Resource action: xu-test4 stop on xfc0
+ * Resource action: xu-test5 migrate_from on xfc3
+ * Resource action: xu-test5 stop on xfc2
+ * Resource action: xu-test6 migrate_from on xfc1
+ * Resource action: xu-test6 stop on xfc3
+ * Resource action: xu-test7 migrate_from on xfc0
+ * Resource action: xu-test7 stop on xfc1
+ * Resource action: xu-test9 migrate_from on xfc2
+ * Resource action: xu-test9 stop on xfc0
+ * Pseudo action: load_stopped_xfc0
+ * Resource action: xu-test3 migrate_to on xfc1
+ * Pseudo action: xu-test7_start_0
+ * Resource action: xu-test3 migrate_from on xfc0
+ * Resource action: xu-test3 stop on xfc1
+ * Resource action: xu-test7 monitor=10000 on xfc0
+ * Pseudo action: load_stopped_xfc1
+ * Resource action: xu-test2 migrate_to on xfc3
+ * Pseudo action: xu-test3_start_0
+ * Pseudo action: xu-test6_start_0
+ * Resource action: xu-test2 migrate_from on xfc1
+ * Resource action: xu-test2 stop on xfc3
+ * Resource action: xu-test3 monitor=10000 on xfc0
+ * Resource action: xu-test6 monitor=10000 on xfc1
+ * Pseudo action: load_stopped_xfc3
+ * Resource action: xu-test12 migrate_to on xfc2
+ * Pseudo action: xu-test2_start_0
+ * Pseudo action: xu-test5_start_0
+ * Resource action: xu-test13 start on xfc3
+ * Resource action: xu-test12 migrate_from on xfc3
+ * Resource action: xu-test12 stop on xfc2
+ * Resource action: xu-test2 monitor=10000 on xfc1
+ * Resource action: xu-test5 monitor=10000 on xfc3
+ * Resource action: xu-test13 monitor=10000 on xfc3
+ * Pseudo action: load_stopped_xfc2
+ * Pseudo action: xu-test12_start_0
+ * Pseudo action: xu-test4_start_0
+ * Pseudo action: xu-test9_start_0
+ * Resource action: xu-test12 monitor=10000 on xfc3
+ * Resource action: xu-test4 monitor=10000 on xfc2
+ * Resource action: xu-test9 monitor=10000 on xfc2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ xfc0 xfc1 xfc2 xfc3 ]
+
+ * Full List of Resources:
+ * Clone Set: cl_glusterd [p_glusterd]:
+ * Started: [ xfc0 xfc1 xfc2 xfc3 ]
+ * Clone Set: cl_p_bl_glusterfs [p_bl_glusterfs]:
+ * Started: [ xfc0 xfc1 xfc2 xfc3 ]
+ * xu-test8 (ocf:heartbeat:Xen): Started xfc3
+ * xu-test1 (ocf:heartbeat:Xen): Started xfc3
+ * xu-test10 (ocf:heartbeat:Xen): Started xfc3
+ * xu-test11 (ocf:heartbeat:Xen): Started xfc3
+ * xu-test12 (ocf:heartbeat:Xen): Started xfc3
+ * xu-test13 (ocf:heartbeat:Xen): Started xfc3
+ * xu-test14 (ocf:heartbeat:Xen): Stopped (disabled)
+ * xu-test15 (ocf:heartbeat:Xen): Stopped (disabled)
+ * xu-test16 (ocf:heartbeat:Xen): Stopped (disabled)
+ * xu-test17 (ocf:heartbeat:Xen): Stopped (disabled)
+ * xu-test2 (ocf:heartbeat:Xen): Started xfc1
+ * xu-test3 (ocf:heartbeat:Xen): Started xfc0
+ * xu-test4 (ocf:heartbeat:Xen): Started xfc2
+ * xu-test5 (ocf:heartbeat:Xen): Started xfc3
+ * xu-test6 (ocf:heartbeat:Xen): Started xfc1
+ * xu-test7 (ocf:heartbeat:Xen): Started xfc0
+ * xu-test9 (ocf:heartbeat:Xen): Started xfc2
diff --git a/cts/scheduler/summary/load-stopped-loop.summary b/cts/scheduler/summary/load-stopped-loop.summary
new file mode 100644
index 0000000..f3f2473
--- /dev/null
+++ b/cts/scheduler/summary/load-stopped-loop.summary
@@ -0,0 +1,337 @@
+32 of 308 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ mgmt01 v03-a v03-b ]
+
+ * Full List of Resources:
+ * stonith-v02-a (stonith:fence_ipmilan): Stopped (disabled)
+ * stonith-v02-b (stonith:fence_ipmilan): Stopped (disabled)
+ * stonith-v02-c (stonith:fence_ipmilan): Stopped (disabled)
+ * stonith-v02-d (stonith:fence_ipmilan): Stopped (disabled)
+ * stonith-mgmt01 (stonith:fence_xvm): Started v03-b
+ * stonith-mgmt02 (stonith:meatware): Started mgmt01
+ * stonith-v03-c (stonith:fence_ipmilan): Stopped (disabled)
+ * stonith-v03-a (stonith:fence_ipmilan): Started v03-b
+ * stonith-v03-b (stonith:fence_ipmilan): Started v03-a
+ * stonith-v03-d (stonith:fence_ipmilan): Stopped (disabled)
+ * Clone Set: cl-clvmd [clvmd]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-dlm [dlm]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-iscsid [iscsid]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-libvirtd [libvirtd]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-multipathd [multipathd]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-node-params [node-params]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan1-if [vlan1-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan101-if [vlan101-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan102-if [vlan102-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan103-if [vlan103-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan104-if [vlan104-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan3-if [vlan3-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan4-if [vlan4-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan5-if [vlan5-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan900-if [vlan900-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan909-if [vlan909-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-libvirt-images-fs [libvirt-images-fs]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-libvirt-install-fs [libvirt-install-fs]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-vds-ok-pool-0-iscsi [vds-ok-pool-0-iscsi]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-vds-ok-pool-0-vg [vds-ok-pool-0-vg]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-vds-ok-pool-1-iscsi [vds-ok-pool-1-iscsi]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-vds-ok-pool-1-vg [vds-ok-pool-1-vg]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-libvirt-images-pool [libvirt-images-pool]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vds-ok-pool-0-pool [vds-ok-pool-0-pool]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vds-ok-pool-1-pool [vds-ok-pool-1-pool]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * git.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd01-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * vd01-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * vd01-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * vd01-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped
+ * vd02-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd02-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd02-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd02-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd03-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd03-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd03-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd03-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd04-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd04-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd04-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd04-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * f13-x64-devel.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * eu2.ca-pages.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * zakaz.transferrus.ru-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * Clone Set: cl-vlan200-if [vlan200-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * lenny-x32-devel-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * dist.express-consult.org-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * eu1.ca-pages.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * gotin-bbb-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * maxb-c55-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * metae.ru-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * rodovoepomestie.ru-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * ubuntu9.10-gotin-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * c5-x64-devel.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * Clone Set: cl-mcast-test-net [mcast-test-net]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * dist.fly-uni.org-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * ktstudio.net-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * cloudsrv.credo-dialogue.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * c6-x64-devel.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * lustre01-right.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * lustre02-right.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * lustre03-left.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * lustre03-right.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * lustre04-left.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * lustre04-right.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * Clone Set: cl-mcast-anbriz-net [mcast-anbriz-net]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * gw.anbriz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * license.anbriz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * terminal.anbriz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * lustre01-left.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * lustre02-left.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * test-01.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * Clone Set: cl-libvirt-qpid [libvirt-qpid]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * gw.gleb.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * gw.gotin.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * terminal0.anbriz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * Clone Set: cl-mcast-gleb-net [mcast-gleb-net]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+
+Transition Summary:
+ * Reload vds-ok-pool-0-iscsi:0 ( mgmt01 )
+ * Reload vds-ok-pool-0-iscsi:1 ( v03-b )
+ * Reload vds-ok-pool-0-iscsi:2 ( v03-a )
+ * Reload vds-ok-pool-1-iscsi:0 ( mgmt01 )
+ * Reload vds-ok-pool-1-iscsi:1 ( v03-b )
+ * Reload vds-ok-pool-1-iscsi:2 ( v03-a )
+ * Restart stonith-v03-b ( v03-a ) due to resource definition change
+ * Restart stonith-v03-a ( v03-b ) due to resource definition change
+ * Migrate license.anbriz.vds-ok.com-vm ( v03-b -> v03-a )
+ * Migrate terminal0.anbriz.vds-ok.com-vm ( v03-a -> v03-b )
+ * Start vd01-d.cdev.ttc.prague.cz.vds-ok.com-vm ( v03-a )
+
+Executing Cluster Transition:
+ * Resource action: vds-ok-pool-0-iscsi:1 reload-agent on mgmt01
+ * Resource action: vds-ok-pool-0-iscsi:1 monitor=30000 on mgmt01
+ * Resource action: vds-ok-pool-0-iscsi:0 reload-agent on v03-b
+ * Resource action: vds-ok-pool-0-iscsi:0 monitor=30000 on v03-b
+ * Resource action: vds-ok-pool-0-iscsi:2 reload-agent on v03-a
+ * Resource action: vds-ok-pool-0-iscsi:2 monitor=30000 on v03-a
+ * Resource action: vds-ok-pool-1-iscsi:1 reload-agent on mgmt01
+ * Resource action: vds-ok-pool-1-iscsi:1 monitor=30000 on mgmt01
+ * Resource action: vds-ok-pool-1-iscsi:0 reload-agent on v03-b
+ * Resource action: vds-ok-pool-1-iscsi:0 monitor=30000 on v03-b
+ * Resource action: vds-ok-pool-1-iscsi:2 reload-agent on v03-a
+ * Resource action: vds-ok-pool-1-iscsi:2 monitor=30000 on v03-a
+ * Resource action: stonith-v03-b stop on v03-a
+ * Resource action: stonith-v03-b start on v03-a
+ * Resource action: stonith-v03-b monitor=60000 on v03-a
+ * Resource action: stonith-v03-a stop on v03-b
+ * Resource action: stonith-v03-a start on v03-b
+ * Resource action: stonith-v03-a monitor=60000 on v03-b
+ * Resource action: terminal0.anbriz.vds-ok.com-vm migrate_to on v03-a
+ * Pseudo action: load_stopped_mgmt01
+ * Resource action: terminal0.anbriz.vds-ok.com-vm migrate_from on v03-b
+ * Resource action: terminal0.anbriz.vds-ok.com-vm stop on v03-a
+ * Pseudo action: load_stopped_v03-a
+ * Resource action: license.anbriz.vds-ok.com-vm migrate_to on v03-b
+ * Resource action: vd01-d.cdev.ttc.prague.cz.vds-ok.com-vm start on v03-a
+ * Resource action: license.anbriz.vds-ok.com-vm migrate_from on v03-a
+ * Resource action: license.anbriz.vds-ok.com-vm stop on v03-b
+ * Resource action: vd01-d.cdev.ttc.prague.cz.vds-ok.com-vm monitor=10000 on v03-a
+ * Pseudo action: load_stopped_v03-b
+ * Pseudo action: license.anbriz.vds-ok.com-vm_start_0
+ * Pseudo action: terminal0.anbriz.vds-ok.com-vm_start_0
+ * Resource action: license.anbriz.vds-ok.com-vm monitor=10000 on v03-a
+ * Resource action: terminal0.anbriz.vds-ok.com-vm monitor=10000 on v03-b
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ mgmt01 v03-a v03-b ]
+
+ * Full List of Resources:
+ * stonith-v02-a (stonith:fence_ipmilan): Stopped (disabled)
+ * stonith-v02-b (stonith:fence_ipmilan): Stopped (disabled)
+ * stonith-v02-c (stonith:fence_ipmilan): Stopped (disabled)
+ * stonith-v02-d (stonith:fence_ipmilan): Stopped (disabled)
+ * stonith-mgmt01 (stonith:fence_xvm): Started v03-b
+ * stonith-mgmt02 (stonith:meatware): Started mgmt01
+ * stonith-v03-c (stonith:fence_ipmilan): Stopped (disabled)
+ * stonith-v03-a (stonith:fence_ipmilan): Started v03-b
+ * stonith-v03-b (stonith:fence_ipmilan): Started v03-a
+ * stonith-v03-d (stonith:fence_ipmilan): Stopped (disabled)
+ * Clone Set: cl-clvmd [clvmd]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-dlm [dlm]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-iscsid [iscsid]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-libvirtd [libvirtd]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-multipathd [multipathd]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-node-params [node-params]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan1-if [vlan1-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan101-if [vlan101-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan102-if [vlan102-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan103-if [vlan103-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan104-if [vlan104-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan3-if [vlan3-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan4-if [vlan4-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan5-if [vlan5-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan900-if [vlan900-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan909-if [vlan909-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-libvirt-images-fs [libvirt-images-fs]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-libvirt-install-fs [libvirt-install-fs]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-vds-ok-pool-0-iscsi [vds-ok-pool-0-iscsi]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-vds-ok-pool-0-vg [vds-ok-pool-0-vg]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-vds-ok-pool-1-iscsi [vds-ok-pool-1-iscsi]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-vds-ok-pool-1-vg [vds-ok-pool-1-vg]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-libvirt-images-pool [libvirt-images-pool]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vds-ok-pool-0-pool [vds-ok-pool-0-pool]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vds-ok-pool-1-pool [vds-ok-pool-1-pool]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * git.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd01-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * vd01-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * vd01-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * vd01-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * vd02-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd02-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd02-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd02-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd03-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd03-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd03-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd03-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd04-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd04-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd04-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd04-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * f13-x64-devel.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * eu2.ca-pages.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * zakaz.transferrus.ru-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * Clone Set: cl-vlan200-if [vlan200-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * lenny-x32-devel-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * dist.express-consult.org-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * eu1.ca-pages.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * gotin-bbb-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * maxb-c55-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * metae.ru-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * rodovoepomestie.ru-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * ubuntu9.10-gotin-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * c5-x64-devel.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * Clone Set: cl-mcast-test-net [mcast-test-net]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * dist.fly-uni.org-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * ktstudio.net-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * cloudsrv.credo-dialogue.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * c6-x64-devel.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * lustre01-right.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * lustre02-right.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * lustre03-left.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * lustre03-right.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * lustre04-left.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * lustre04-right.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * Clone Set: cl-mcast-anbriz-net [mcast-anbriz-net]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * gw.anbriz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * license.anbriz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * terminal.anbriz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * lustre01-left.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * lustre02-left.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * test-01.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * Clone Set: cl-libvirt-qpid [libvirt-qpid]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * gw.gleb.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * gw.gotin.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * terminal0.anbriz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * Clone Set: cl-mcast-gleb-net [mcast-gleb-net]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
diff --git a/cts/scheduler/summary/location-date-rules-1.summary b/cts/scheduler/summary/location-date-rules-1.summary
new file mode 100644
index 0000000..b1afba4
--- /dev/null
+++ b/cts/scheduler/summary/location-date-rules-1.summary
@@ -0,0 +1,36 @@
+Using the original execution date of: 2019-09-20 15:10:52Z
+Current cluster status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-1
+ * FencingPass (stonith:fence_dummy): Started rhel7-2
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( rhel7-3 )
+ * Start rsc2 ( rhel7-4 )
+ * Start rsc3 ( rhel7-4 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 start on rhel7-3
+ * Resource action: rsc2 start on rhel7-4
+ * Resource action: rsc3 start on rhel7-4
+ * Resource action: rsc1 monitor=10000 on rhel7-3
+ * Resource action: rsc2 monitor=10000 on rhel7-4
+ * Resource action: rsc3 monitor=10000 on rhel7-4
+Using the original execution date of: 2019-09-20 15:10:52Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-1
+ * FencingPass (stonith:fence_dummy): Started rhel7-2
+ * rsc1 (ocf:pacemaker:Dummy): Started rhel7-3
+ * rsc2 (ocf:pacemaker:Dummy): Started rhel7-4
+ * rsc3 (ocf:pacemaker:Dummy): Started rhel7-4
diff --git a/cts/scheduler/summary/location-date-rules-2.summary b/cts/scheduler/summary/location-date-rules-2.summary
new file mode 100644
index 0000000..3f27c03
--- /dev/null
+++ b/cts/scheduler/summary/location-date-rules-2.summary
@@ -0,0 +1,36 @@
+Using the original execution date of: 2019-09-20 15:10:52Z
+Current cluster status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-1
+ * FencingPass (stonith:fence_dummy): Started rhel7-2
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( rhel7-3 )
+ * Start rsc2 ( rhel7-3 )
+ * Start rsc3 ( rhel7-4 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 start on rhel7-3
+ * Resource action: rsc2 start on rhel7-3
+ * Resource action: rsc3 start on rhel7-4
+ * Resource action: rsc1 monitor=10000 on rhel7-3
+ * Resource action: rsc2 monitor=10000 on rhel7-3
+ * Resource action: rsc3 monitor=10000 on rhel7-4
+Using the original execution date of: 2019-09-20 15:10:52Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-1
+ * FencingPass (stonith:fence_dummy): Started rhel7-2
+ * rsc1 (ocf:pacemaker:Dummy): Started rhel7-3
+ * rsc2 (ocf:pacemaker:Dummy): Started rhel7-3
+ * rsc3 (ocf:pacemaker:Dummy): Started rhel7-4
diff --git a/cts/scheduler/summary/location-sets-templates.summary b/cts/scheduler/summary/location-sets-templates.summary
new file mode 100644
index 0000000..e604711
--- /dev/null
+++ b/cts/scheduler/summary/location-sets-templates.summary
@@ -0,0 +1,51 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * rsc4 (ocf:pacemaker:Dummy): Stopped
+ * rsc5 (ocf:pacemaker:Dummy): Stopped
+ * rsc6 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+ * Start rsc2 ( node2 )
+ * Start rsc3 ( node2 )
+ * Start rsc4 ( node2 )
+ * Start rsc5 ( node2 )
+ * Start rsc6 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc4 monitor on node2
+ * Resource action: rsc4 monitor on node1
+ * Resource action: rsc5 monitor on node2
+ * Resource action: rsc5 monitor on node1
+ * Resource action: rsc6 monitor on node2
+ * Resource action: rsc6 monitor on node1
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 start on node2
+ * Resource action: rsc3 start on node2
+ * Resource action: rsc4 start on node2
+ * Resource action: rsc5 start on node2
+ * Resource action: rsc6 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+ * rsc3 (ocf:pacemaker:Dummy): Started node2
+ * rsc4 (ocf:pacemaker:Dummy): Started node2
+ * rsc5 (ocf:pacemaker:Dummy): Started node2
+ * rsc6 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/managed-0.summary b/cts/scheduler/summary/managed-0.summary
new file mode 100644
index 0000000..39d715b
--- /dev/null
+++ b/cts/scheduler/summary/managed-0.summary
@@ -0,0 +1,132 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c001n02 c001n03 c001n04 c001n05 c001n06 c001n07 c001n08 c001n09 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n09
+ * rsc_c001n09 (ocf:heartbeat:IPaddr): Started c001n09
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n04 (ocf:heartbeat:IPaddr): Started c001n04
+ * rsc_c001n05 (ocf:heartbeat:IPaddr): Started c001n05
+ * rsc_c001n06 (ocf:heartbeat:IPaddr): Started c001n06
+ * rsc_c001n07 (ocf:heartbeat:IPaddr): Started c001n07
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started c001n02
+ * child_DoFencing:1 (stonith:ssh): Started c001n03
+ * child_DoFencing:2 (stonith:ssh): Started c001n04
+ * child_DoFencing:3 (stonith:ssh): Started c001n05
+ * child_DoFencing:4 (stonith:ssh): Started c001n06
+ * child_DoFencing:5 (stonith:ssh): Started c001n07
+ * child_DoFencing:6 (stonith:ssh): Started c001n08
+ * child_DoFencing:7 (stonith:ssh): Started c001n09
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr monitor on c001n08
+ * Resource action: DcIPaddr monitor on c001n07
+ * Resource action: DcIPaddr monitor on c001n06
+ * Resource action: DcIPaddr monitor on c001n05
+ * Resource action: DcIPaddr monitor on c001n04
+ * Resource action: DcIPaddr monitor on c001n03
+ * Resource action: DcIPaddr monitor on c001n02
+ * Resource action: rsc_c001n09 monitor on c001n08
+ * Resource action: rsc_c001n09 monitor on c001n07
+ * Resource action: rsc_c001n09 monitor on c001n05
+ * Resource action: rsc_c001n09 monitor on c001n04
+ * Resource action: rsc_c001n09 monitor on c001n03
+ * Resource action: rsc_c001n09 monitor on c001n02
+ * Resource action: rsc_c001n02 monitor on c001n09
+ * Resource action: rsc_c001n02 monitor on c001n08
+ * Resource action: rsc_c001n02 monitor on c001n07
+ * Resource action: rsc_c001n02 monitor on c001n05
+ * Resource action: rsc_c001n02 monitor on c001n04
+ * Resource action: rsc_c001n03 monitor on c001n09
+ * Resource action: rsc_c001n03 monitor on c001n08
+ * Resource action: rsc_c001n03 monitor on c001n07
+ * Resource action: rsc_c001n03 monitor on c001n05
+ * Resource action: rsc_c001n03 monitor on c001n04
+ * Resource action: rsc_c001n03 monitor on c001n02
+ * Resource action: rsc_c001n04 monitor on c001n09
+ * Resource action: rsc_c001n04 monitor on c001n08
+ * Resource action: rsc_c001n04 monitor on c001n07
+ * Resource action: rsc_c001n04 monitor on c001n05
+ * Resource action: rsc_c001n04 monitor on c001n03
+ * Resource action: rsc_c001n04 monitor on c001n02
+ * Resource action: rsc_c001n05 monitor on c001n09
+ * Resource action: rsc_c001n05 monitor on c001n08
+ * Resource action: rsc_c001n05 monitor on c001n07
+ * Resource action: rsc_c001n05 monitor on c001n06
+ * Resource action: rsc_c001n05 monitor on c001n04
+ * Resource action: rsc_c001n05 monitor on c001n03
+ * Resource action: rsc_c001n05 monitor on c001n02
+ * Resource action: rsc_c001n06 monitor on c001n09
+ * Resource action: rsc_c001n06 monitor on c001n08
+ * Resource action: rsc_c001n06 monitor on c001n07
+ * Resource action: rsc_c001n06 monitor on c001n05
+ * Resource action: rsc_c001n06 monitor on c001n04
+ * Resource action: rsc_c001n06 monitor on c001n03
+ * Resource action: rsc_c001n07 monitor on c001n09
+ * Resource action: rsc_c001n07 monitor on c001n08
+ * Resource action: rsc_c001n07 monitor on c001n06
+ * Resource action: rsc_c001n07 monitor on c001n05
+ * Resource action: rsc_c001n07 monitor on c001n04
+ * Resource action: rsc_c001n08 monitor on c001n09
+ * Resource action: rsc_c001n08 monitor on c001n07
+ * Resource action: rsc_c001n08 monitor on c001n05
+ * Resource action: child_DoFencing:0 monitor on c001n09
+ * Resource action: child_DoFencing:0 monitor on c001n08
+ * Resource action: child_DoFencing:0 monitor on c001n07
+ * Resource action: child_DoFencing:1 monitor on c001n08
+ * Resource action: child_DoFencing:1 monitor on c001n07
+ * Resource action: child_DoFencing:1 monitor on c001n02
+ * Resource action: child_DoFencing:2 monitor on c001n09
+ * Resource action: child_DoFencing:2 monitor on c001n08
+ * Resource action: child_DoFencing:2 monitor on c001n07
+ * Resource action: child_DoFencing:2 monitor on c001n03
+ * Resource action: child_DoFencing:3 monitor on c001n08
+ * Resource action: child_DoFencing:3 monitor on c001n04
+ * Resource action: child_DoFencing:3 monitor on c001n02
+ * Resource action: child_DoFencing:4 monitor on c001n09
+ * Resource action: child_DoFencing:4 monitor on c001n05
+ * Resource action: child_DoFencing:4 monitor on c001n03
+ * Resource action: child_DoFencing:5 monitor on c001n08
+ * Resource action: child_DoFencing:5 monitor on c001n05
+ * Resource action: child_DoFencing:5 monitor on c001n04
+ * Resource action: child_DoFencing:5 monitor on c001n02
+ * Resource action: child_DoFencing:6 monitor on c001n09
+ * Resource action: child_DoFencing:6 monitor on c001n07
+ * Resource action: child_DoFencing:6 monitor on c001n05
+ * Resource action: child_DoFencing:6 monitor on c001n04
+ * Resource action: child_DoFencing:7 monitor on c001n08
+ * Resource action: child_DoFencing:7 monitor on c001n07
+ * Resource action: child_DoFencing:7 monitor on c001n05
+ * Resource action: child_DoFencing:7 monitor on c001n04
+ * Resource action: child_DoFencing:7 monitor on c001n03
+ * Resource action: child_DoFencing:7 monitor on c001n02
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n02 c001n03 c001n04 c001n05 c001n06 c001n07 c001n08 c001n09 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n09
+ * rsc_c001n09 (ocf:heartbeat:IPaddr): Started c001n09
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n04 (ocf:heartbeat:IPaddr): Started c001n04
+ * rsc_c001n05 (ocf:heartbeat:IPaddr): Started c001n05
+ * rsc_c001n06 (ocf:heartbeat:IPaddr): Started c001n06
+ * rsc_c001n07 (ocf:heartbeat:IPaddr): Started c001n07
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started c001n02
+ * child_DoFencing:1 (stonith:ssh): Started c001n03
+ * child_DoFencing:2 (stonith:ssh): Started c001n04
+ * child_DoFencing:3 (stonith:ssh): Started c001n05
+ * child_DoFencing:4 (stonith:ssh): Started c001n06
+ * child_DoFencing:5 (stonith:ssh): Started c001n07
+ * child_DoFencing:6 (stonith:ssh): Started c001n08
+ * child_DoFencing:7 (stonith:ssh): Started c001n09
diff --git a/cts/scheduler/summary/managed-1.summary b/cts/scheduler/summary/managed-1.summary
new file mode 100644
index 0000000..9c25080
--- /dev/null
+++ b/cts/scheduler/summary/managed-1.summary
@@ -0,0 +1,132 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c001n02 c001n03 c001n04 c001n05 c001n06 c001n07 c001n08 c001n09 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n09
+ * rsc_c001n09 (ocf:heartbeat:IPaddr): Started c001n09
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n04 (ocf:heartbeat:IPaddr): Started c001n04
+ * rsc_c001n05 (ocf:heartbeat:IPaddr): Started c001n05
+ * rsc_c001n06 (ocf:heartbeat:IPaddr): Started c001n06
+ * rsc_c001n07 (ocf:heartbeat:IPaddr): Started c001n07
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * Clone Set: DoFencing [child_DoFencing] (unique, unmanaged):
+ * child_DoFencing:0 (stonith:ssh): Started c001n02 (unmanaged)
+ * child_DoFencing:1 (stonith:ssh): Started c001n03 (unmanaged)
+ * child_DoFencing:2 (stonith:ssh): Started c001n04 (unmanaged)
+ * child_DoFencing:3 (stonith:ssh): Started c001n05 (unmanaged)
+ * child_DoFencing:4 (stonith:ssh): Started c001n06 (unmanaged)
+ * child_DoFencing:5 (stonith:ssh): Started c001n07 (unmanaged)
+ * child_DoFencing:6 (stonith:ssh): Started c001n08 (unmanaged)
+ * child_DoFencing:7 (stonith:ssh): Started c001n09 (unmanaged)
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr monitor on c001n08
+ * Resource action: DcIPaddr monitor on c001n07
+ * Resource action: DcIPaddr monitor on c001n06
+ * Resource action: DcIPaddr monitor on c001n05
+ * Resource action: DcIPaddr monitor on c001n04
+ * Resource action: DcIPaddr monitor on c001n03
+ * Resource action: DcIPaddr monitor on c001n02
+ * Resource action: rsc_c001n09 monitor on c001n08
+ * Resource action: rsc_c001n09 monitor on c001n07
+ * Resource action: rsc_c001n09 monitor on c001n05
+ * Resource action: rsc_c001n09 monitor on c001n04
+ * Resource action: rsc_c001n09 monitor on c001n03
+ * Resource action: rsc_c001n09 monitor on c001n02
+ * Resource action: rsc_c001n02 monitor on c001n09
+ * Resource action: rsc_c001n02 monitor on c001n08
+ * Resource action: rsc_c001n02 monitor on c001n07
+ * Resource action: rsc_c001n02 monitor on c001n05
+ * Resource action: rsc_c001n02 monitor on c001n04
+ * Resource action: rsc_c001n03 monitor on c001n09
+ * Resource action: rsc_c001n03 monitor on c001n08
+ * Resource action: rsc_c001n03 monitor on c001n07
+ * Resource action: rsc_c001n03 monitor on c001n05
+ * Resource action: rsc_c001n03 monitor on c001n04
+ * Resource action: rsc_c001n03 monitor on c001n02
+ * Resource action: rsc_c001n04 monitor on c001n09
+ * Resource action: rsc_c001n04 monitor on c001n08
+ * Resource action: rsc_c001n04 monitor on c001n07
+ * Resource action: rsc_c001n04 monitor on c001n05
+ * Resource action: rsc_c001n04 monitor on c001n03
+ * Resource action: rsc_c001n04 monitor on c001n02
+ * Resource action: rsc_c001n05 monitor on c001n09
+ * Resource action: rsc_c001n05 monitor on c001n08
+ * Resource action: rsc_c001n05 monitor on c001n07
+ * Resource action: rsc_c001n05 monitor on c001n06
+ * Resource action: rsc_c001n05 monitor on c001n04
+ * Resource action: rsc_c001n05 monitor on c001n03
+ * Resource action: rsc_c001n05 monitor on c001n02
+ * Resource action: rsc_c001n06 monitor on c001n09
+ * Resource action: rsc_c001n06 monitor on c001n08
+ * Resource action: rsc_c001n06 monitor on c001n07
+ * Resource action: rsc_c001n06 monitor on c001n05
+ * Resource action: rsc_c001n06 monitor on c001n04
+ * Resource action: rsc_c001n06 monitor on c001n03
+ * Resource action: rsc_c001n07 monitor on c001n09
+ * Resource action: rsc_c001n07 monitor on c001n08
+ * Resource action: rsc_c001n07 monitor on c001n06
+ * Resource action: rsc_c001n07 monitor on c001n05
+ * Resource action: rsc_c001n07 monitor on c001n04
+ * Resource action: rsc_c001n08 monitor on c001n09
+ * Resource action: rsc_c001n08 monitor on c001n07
+ * Resource action: rsc_c001n08 monitor on c001n05
+ * Resource action: child_DoFencing:0 monitor on c001n09
+ * Resource action: child_DoFencing:0 monitor on c001n08
+ * Resource action: child_DoFencing:0 monitor on c001n07
+ * Resource action: child_DoFencing:1 monitor on c001n08
+ * Resource action: child_DoFencing:1 monitor on c001n07
+ * Resource action: child_DoFencing:1 monitor on c001n02
+ * Resource action: child_DoFencing:2 monitor on c001n09
+ * Resource action: child_DoFencing:2 monitor on c001n08
+ * Resource action: child_DoFencing:2 monitor on c001n07
+ * Resource action: child_DoFencing:2 monitor on c001n03
+ * Resource action: child_DoFencing:3 monitor on c001n08
+ * Resource action: child_DoFencing:3 monitor on c001n04
+ * Resource action: child_DoFencing:3 monitor on c001n02
+ * Resource action: child_DoFencing:4 monitor on c001n09
+ * Resource action: child_DoFencing:4 monitor on c001n05
+ * Resource action: child_DoFencing:4 monitor on c001n03
+ * Resource action: child_DoFencing:5 monitor on c001n08
+ * Resource action: child_DoFencing:5 monitor on c001n05
+ * Resource action: child_DoFencing:5 monitor on c001n04
+ * Resource action: child_DoFencing:5 monitor on c001n02
+ * Resource action: child_DoFencing:6 monitor on c001n09
+ * Resource action: child_DoFencing:6 monitor on c001n07
+ * Resource action: child_DoFencing:6 monitor on c001n05
+ * Resource action: child_DoFencing:6 monitor on c001n04
+ * Resource action: child_DoFencing:7 monitor on c001n08
+ * Resource action: child_DoFencing:7 monitor on c001n07
+ * Resource action: child_DoFencing:7 monitor on c001n05
+ * Resource action: child_DoFencing:7 monitor on c001n04
+ * Resource action: child_DoFencing:7 monitor on c001n03
+ * Resource action: child_DoFencing:7 monitor on c001n02
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n02 c001n03 c001n04 c001n05 c001n06 c001n07 c001n08 c001n09 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n09
+ * rsc_c001n09 (ocf:heartbeat:IPaddr): Started c001n09
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n04 (ocf:heartbeat:IPaddr): Started c001n04
+ * rsc_c001n05 (ocf:heartbeat:IPaddr): Started c001n05
+ * rsc_c001n06 (ocf:heartbeat:IPaddr): Started c001n06
+ * rsc_c001n07 (ocf:heartbeat:IPaddr): Started c001n07
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * Clone Set: DoFencing [child_DoFencing] (unique, unmanaged):
+ * child_DoFencing:0 (stonith:ssh): Started c001n02 (unmanaged)
+ * child_DoFencing:1 (stonith:ssh): Started c001n03 (unmanaged)
+ * child_DoFencing:2 (stonith:ssh): Started c001n04 (unmanaged)
+ * child_DoFencing:3 (stonith:ssh): Started c001n05 (unmanaged)
+ * child_DoFencing:4 (stonith:ssh): Started c001n06 (unmanaged)
+ * child_DoFencing:5 (stonith:ssh): Started c001n07 (unmanaged)
+ * child_DoFencing:6 (stonith:ssh): Started c001n08 (unmanaged)
+ * child_DoFencing:7 (stonith:ssh): Started c001n09 (unmanaged)
diff --git a/cts/scheduler/summary/managed-2.summary b/cts/scheduler/summary/managed-2.summary
new file mode 100644
index 0000000..dd0a187
--- /dev/null
+++ b/cts/scheduler/summary/managed-2.summary
@@ -0,0 +1,166 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c001n02 c001n03 c001n04 c001n05 c001n06 c001n07 c001n08 c001n09 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n09
+ * rsc_c001n09 (ocf:heartbeat:IPaddr): Started c001n09
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n04 (ocf:heartbeat:IPaddr): Started c001n04
+ * rsc_c001n05 (ocf:heartbeat:IPaddr): Started c001n05
+ * rsc_c001n06 (ocf:heartbeat:IPaddr): Started c001n06
+ * rsc_c001n07 (ocf:heartbeat:IPaddr): Started c001n07
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * Clone Set: DoFencing [child_DoFencing] (unique, unmanaged):
+ * child_DoFencing:0 (stonith:ssh): Stopped (unmanaged)
+ * child_DoFencing:1 (stonith:ssh): Stopped (unmanaged)
+ * child_DoFencing:2 (stonith:ssh): Stopped (unmanaged)
+ * child_DoFencing:3 (stonith:ssh): Stopped (unmanaged)
+ * child_DoFencing:4 (stonith:ssh): Stopped (unmanaged)
+ * child_DoFencing:5 (stonith:ssh): Stopped (unmanaged)
+ * child_DoFencing:6 (stonith:ssh): Stopped (unmanaged)
+ * child_DoFencing:7 (stonith:ssh): Stopped (unmanaged)
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr monitor on c001n08
+ * Resource action: DcIPaddr monitor on c001n07
+ * Resource action: DcIPaddr monitor on c001n06
+ * Resource action: DcIPaddr monitor on c001n05
+ * Resource action: DcIPaddr monitor on c001n04
+ * Resource action: DcIPaddr monitor on c001n03
+ * Resource action: DcIPaddr monitor on c001n02
+ * Resource action: rsc_c001n09 monitor on c001n08
+ * Resource action: rsc_c001n09 monitor on c001n07
+ * Resource action: rsc_c001n09 monitor on c001n05
+ * Resource action: rsc_c001n09 monitor on c001n04
+ * Resource action: rsc_c001n09 monitor on c001n03
+ * Resource action: rsc_c001n09 monitor on c001n02
+ * Resource action: rsc_c001n02 monitor on c001n09
+ * Resource action: rsc_c001n02 monitor on c001n08
+ * Resource action: rsc_c001n02 monitor on c001n07
+ * Resource action: rsc_c001n02 monitor on c001n05
+ * Resource action: rsc_c001n02 monitor on c001n04
+ * Resource action: rsc_c001n03 monitor on c001n09
+ * Resource action: rsc_c001n03 monitor on c001n08
+ * Resource action: rsc_c001n03 monitor on c001n07
+ * Resource action: rsc_c001n03 monitor on c001n05
+ * Resource action: rsc_c001n03 monitor on c001n04
+ * Resource action: rsc_c001n03 monitor on c001n02
+ * Resource action: rsc_c001n04 monitor on c001n09
+ * Resource action: rsc_c001n04 monitor on c001n08
+ * Resource action: rsc_c001n04 monitor on c001n07
+ * Resource action: rsc_c001n04 monitor on c001n05
+ * Resource action: rsc_c001n04 monitor on c001n03
+ * Resource action: rsc_c001n04 monitor on c001n02
+ * Resource action: rsc_c001n05 monitor on c001n09
+ * Resource action: rsc_c001n05 monitor on c001n08
+ * Resource action: rsc_c001n05 monitor on c001n07
+ * Resource action: rsc_c001n05 monitor on c001n06
+ * Resource action: rsc_c001n05 monitor on c001n04
+ * Resource action: rsc_c001n05 monitor on c001n03
+ * Resource action: rsc_c001n05 monitor on c001n02
+ * Resource action: rsc_c001n06 monitor on c001n09
+ * Resource action: rsc_c001n06 monitor on c001n08
+ * Resource action: rsc_c001n06 monitor on c001n07
+ * Resource action: rsc_c001n06 monitor on c001n05
+ * Resource action: rsc_c001n06 monitor on c001n04
+ * Resource action: rsc_c001n06 monitor on c001n03
+ * Resource action: rsc_c001n07 monitor on c001n09
+ * Resource action: rsc_c001n07 monitor on c001n08
+ * Resource action: rsc_c001n07 monitor on c001n06
+ * Resource action: rsc_c001n07 monitor on c001n05
+ * Resource action: rsc_c001n07 monitor on c001n04
+ * Resource action: rsc_c001n08 monitor on c001n09
+ * Resource action: rsc_c001n08 monitor on c001n07
+ * Resource action: rsc_c001n08 monitor on c001n05
+ * Resource action: child_DoFencing:0 monitor on c001n09
+ * Resource action: child_DoFencing:0 monitor on c001n08
+ * Resource action: child_DoFencing:0 monitor on c001n07
+ * Resource action: child_DoFencing:0 monitor on c001n06
+ * Resource action: child_DoFencing:0 monitor on c001n05
+ * Resource action: child_DoFencing:0 monitor on c001n04
+ * Resource action: child_DoFencing:0 monitor on c001n03
+ * Resource action: child_DoFencing:0 monitor on c001n02
+ * Resource action: child_DoFencing:1 monitor on c001n09
+ * Resource action: child_DoFencing:1 monitor on c001n08
+ * Resource action: child_DoFencing:1 monitor on c001n07
+ * Resource action: child_DoFencing:1 monitor on c001n06
+ * Resource action: child_DoFencing:1 monitor on c001n05
+ * Resource action: child_DoFencing:1 monitor on c001n04
+ * Resource action: child_DoFencing:1 monitor on c001n03
+ * Resource action: child_DoFencing:1 monitor on c001n02
+ * Resource action: child_DoFencing:2 monitor on c001n09
+ * Resource action: child_DoFencing:2 monitor on c001n08
+ * Resource action: child_DoFencing:2 monitor on c001n07
+ * Resource action: child_DoFencing:2 monitor on c001n06
+ * Resource action: child_DoFencing:2 monitor on c001n05
+ * Resource action: child_DoFencing:2 monitor on c001n04
+ * Resource action: child_DoFencing:2 monitor on c001n03
+ * Resource action: child_DoFencing:2 monitor on c001n02
+ * Resource action: child_DoFencing:3 monitor on c001n09
+ * Resource action: child_DoFencing:3 monitor on c001n08
+ * Resource action: child_DoFencing:3 monitor on c001n07
+ * Resource action: child_DoFencing:3 monitor on c001n06
+ * Resource action: child_DoFencing:3 monitor on c001n05
+ * Resource action: child_DoFencing:3 monitor on c001n04
+ * Resource action: child_DoFencing:3 monitor on c001n03
+ * Resource action: child_DoFencing:3 monitor on c001n02
+ * Resource action: child_DoFencing:4 monitor on c001n09
+ * Resource action: child_DoFencing:4 monitor on c001n08
+ * Resource action: child_DoFencing:4 monitor on c001n07
+ * Resource action: child_DoFencing:4 monitor on c001n06
+ * Resource action: child_DoFencing:4 monitor on c001n05
+ * Resource action: child_DoFencing:4 monitor on c001n04
+ * Resource action: child_DoFencing:4 monitor on c001n03
+ * Resource action: child_DoFencing:4 monitor on c001n02
+ * Resource action: child_DoFencing:5 monitor on c001n09
+ * Resource action: child_DoFencing:5 monitor on c001n08
+ * Resource action: child_DoFencing:5 monitor on c001n07
+ * Resource action: child_DoFencing:5 monitor on c001n06
+ * Resource action: child_DoFencing:5 monitor on c001n05
+ * Resource action: child_DoFencing:5 monitor on c001n04
+ * Resource action: child_DoFencing:5 monitor on c001n03
+ * Resource action: child_DoFencing:5 monitor on c001n02
+ * Resource action: child_DoFencing:6 monitor on c001n09
+ * Resource action: child_DoFencing:6 monitor on c001n08
+ * Resource action: child_DoFencing:6 monitor on c001n07
+ * Resource action: child_DoFencing:6 monitor on c001n06
+ * Resource action: child_DoFencing:6 monitor on c001n05
+ * Resource action: child_DoFencing:6 monitor on c001n04
+ * Resource action: child_DoFencing:6 monitor on c001n03
+ * Resource action: child_DoFencing:6 monitor on c001n02
+ * Resource action: child_DoFencing:7 monitor on c001n09
+ * Resource action: child_DoFencing:7 monitor on c001n08
+ * Resource action: child_DoFencing:7 monitor on c001n07
+ * Resource action: child_DoFencing:7 monitor on c001n06
+ * Resource action: child_DoFencing:7 monitor on c001n05
+ * Resource action: child_DoFencing:7 monitor on c001n04
+ * Resource action: child_DoFencing:7 monitor on c001n03
+ * Resource action: child_DoFencing:7 monitor on c001n02
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n02 c001n03 c001n04 c001n05 c001n06 c001n07 c001n08 c001n09 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n09
+ * rsc_c001n09 (ocf:heartbeat:IPaddr): Started c001n09
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n04 (ocf:heartbeat:IPaddr): Started c001n04
+ * rsc_c001n05 (ocf:heartbeat:IPaddr): Started c001n05
+ * rsc_c001n06 (ocf:heartbeat:IPaddr): Started c001n06
+ * rsc_c001n07 (ocf:heartbeat:IPaddr): Started c001n07
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * Clone Set: DoFencing [child_DoFencing] (unique, unmanaged):
+ * child_DoFencing:0 (stonith:ssh): Stopped (unmanaged)
+ * child_DoFencing:1 (stonith:ssh): Stopped (unmanaged)
+ * child_DoFencing:2 (stonith:ssh): Stopped (unmanaged)
+ * child_DoFencing:3 (stonith:ssh): Stopped (unmanaged)
+ * child_DoFencing:4 (stonith:ssh): Stopped (unmanaged)
+ * child_DoFencing:5 (stonith:ssh): Stopped (unmanaged)
+ * child_DoFencing:6 (stonith:ssh): Stopped (unmanaged)
+ * child_DoFencing:7 (stonith:ssh): Stopped (unmanaged)
diff --git a/cts/scheduler/summary/migrate-1.summary b/cts/scheduler/summary/migrate-1.summary
new file mode 100644
index 0000000..13a5c6b
--- /dev/null
+++ b/cts/scheduler/summary/migrate-1.summary
@@ -0,0 +1,25 @@
+Current cluster status:
+ * Node List:
+ * Node node1: standby (with active resources)
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * rsc3 (ocf:heartbeat:apache): Started node1
+
+Transition Summary:
+ * Migrate rsc3 ( node1 -> node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 migrate_to on node1
+ * Resource action: rsc3 migrate_from on node2
+ * Resource action: rsc3 stop on node1
+ * Pseudo action: rsc3_start_0
+
+Revised Cluster Status:
+ * Node List:
+ * Node node1: standby
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * rsc3 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/migrate-2.summary b/cts/scheduler/summary/migrate-2.summary
new file mode 100644
index 0000000..e7723b1
--- /dev/null
+++ b/cts/scheduler/summary/migrate-2.summary
@@ -0,0 +1,19 @@
+Current cluster status:
+ * Node List:
+ * Node node1: standby
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * rsc3 (ocf:heartbeat:apache): Started node2
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Node node1: standby
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * rsc3 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/migrate-3.summary b/cts/scheduler/summary/migrate-3.summary
new file mode 100644
index 0000000..5190069
--- /dev/null
+++ b/cts/scheduler/summary/migrate-3.summary
@@ -0,0 +1,23 @@
+Current cluster status:
+ * Node List:
+ * Node node1: standby (with active resources)
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * rsc3 (ocf:heartbeat:apache): FAILED node1
+
+Transition Summary:
+ * Recover rsc3 ( node1 -> node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 stop on node1
+ * Resource action: rsc3 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Node node1: standby
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * rsc3 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/migrate-4.summary b/cts/scheduler/summary/migrate-4.summary
new file mode 100644
index 0000000..366fc22
--- /dev/null
+++ b/cts/scheduler/summary/migrate-4.summary
@@ -0,0 +1,22 @@
+Current cluster status:
+ * Node List:
+ * Node node1: standby
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * rsc3 (ocf:heartbeat:apache): FAILED node2
+
+Transition Summary:
+ * Recover rsc3 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc3 stop on node2
+ * Resource action: rsc3 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Node node1: standby
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * rsc3 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/migrate-5.summary b/cts/scheduler/summary/migrate-5.summary
new file mode 100644
index 0000000..f669865
--- /dev/null
+++ b/cts/scheduler/summary/migrate-5.summary
@@ -0,0 +1,35 @@
+Current cluster status:
+ * Node List:
+ * Node dom0-02: standby (with active resources)
+ * Online: [ dom0-01 ]
+
+ * Full List of Resources:
+ * domU-test01 (ocf:heartbeat:Xen): Started dom0-02
+ * Clone Set: clone-dom0-iscsi1 [dom0-iscsi1]:
+ * Started: [ dom0-01 dom0-02 ]
+
+Transition Summary:
+ * Migrate domU-test01 ( dom0-02 -> dom0-01 )
+ * Stop dom0-iscsi1-cnx1:1 ( dom0-02 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: domU-test01 migrate_to on dom0-02
+ * Pseudo action: clone-dom0-iscsi1_stop_0
+ * Resource action: domU-test01 migrate_from on dom0-01
+ * Resource action: domU-test01 stop on dom0-02
+ * Pseudo action: dom0-iscsi1:1_stop_0
+ * Resource action: dom0-iscsi1-cnx1:0 stop on dom0-02
+ * Pseudo action: domU-test01_start_0
+ * Pseudo action: dom0-iscsi1:1_stopped_0
+ * Pseudo action: clone-dom0-iscsi1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Node dom0-02: standby
+ * Online: [ dom0-01 ]
+
+ * Full List of Resources:
+ * domU-test01 (ocf:heartbeat:Xen): Started dom0-01
+ * Clone Set: clone-dom0-iscsi1 [dom0-iscsi1]:
+ * Started: [ dom0-01 ]
+ * Stopped: [ dom0-02 ]
diff --git a/cts/scheduler/summary/migrate-begin.summary b/cts/scheduler/summary/migrate-begin.summary
new file mode 100644
index 0000000..3c67302
--- /dev/null
+++ b/cts/scheduler/summary/migrate-begin.summary
@@ -0,0 +1,28 @@
+Current cluster status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * test-vm (ocf:heartbeat:Xen): Started hex-14
+ * Clone Set: c-clusterfs [dlm]:
+ * Started: [ hex-13 hex-14 ]
+
+Transition Summary:
+ * Migrate test-vm ( hex-14 -> hex-13 )
+
+Executing Cluster Transition:
+ * Pseudo action: load_stopped_hex-13
+ * Resource action: test-vm migrate_to on hex-14
+ * Resource action: test-vm migrate_from on hex-13
+ * Resource action: test-vm stop on hex-14
+ * Pseudo action: load_stopped_hex-14
+ * Pseudo action: test-vm_start_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * test-vm (ocf:heartbeat:Xen): Started hex-13
+ * Clone Set: c-clusterfs [dlm]:
+ * Started: [ hex-13 hex-14 ]
diff --git a/cts/scheduler/summary/migrate-both-vms.summary b/cts/scheduler/summary/migrate-both-vms.summary
new file mode 100644
index 0000000..0edd108
--- /dev/null
+++ b/cts/scheduler/summary/migrate-both-vms.summary
@@ -0,0 +1,102 @@
+Current cluster status:
+ * Node List:
+ * Node cvmh03: standby (with active resources)
+ * Node cvmh04: standby (with active resources)
+ * Online: [ cvmh01 cvmh02 ]
+
+ * Full List of Resources:
+ * fence-cvmh01 (stonith:fence_ipmilan): Started cvmh02
+ * fence-cvmh02 (stonith:fence_ipmilan): Started cvmh01
+ * fence-cvmh03 (stonith:fence_ipmilan): Started cvmh01
+ * fence-cvmh04 (stonith:fence_ipmilan): Started cvmh02
+ * Clone Set: c-fs-libvirt-VM-xcm [fs-libvirt-VM-xcm]:
+ * Started: [ cvmh01 cvmh02 cvmh03 cvmh04 ]
+ * Clone Set: c-p-libvirtd [p-libvirtd]:
+ * Started: [ cvmh01 cvmh02 cvmh03 cvmh04 ]
+ * Clone Set: c-fs-bind-libvirt-VM-cvmh [fs-bind-libvirt-VM-cvmh]:
+ * Started: [ cvmh01 cvmh02 cvmh03 cvmh04 ]
+ * Clone Set: c-watch-ib0 [p-watch-ib0]:
+ * Started: [ cvmh01 cvmh02 cvmh03 cvmh04 ]
+ * Clone Set: c-fs-gpfs [p-fs-gpfs]:
+ * Started: [ cvmh01 cvmh02 cvmh03 cvmh04 ]
+ * vm-compute-test (ocf:ccni:xcatVirtualDomain): Started cvmh03
+ * vm-swbuildsl6 (ocf:ccni:xcatVirtualDomain): Started cvmh04
+
+Transition Summary:
+ * Stop fs-libvirt-VM-xcm:0 ( cvmh04 ) due to node availability
+ * Stop fs-libvirt-VM-xcm:2 ( cvmh03 ) due to node availability
+ * Stop p-watch-ib0:0 ( cvmh04 ) due to node availability
+ * Stop p-watch-ib0:2 ( cvmh03 ) due to node availability
+ * Stop p-fs-gpfs:0 ( cvmh04 ) due to node availability
+ * Stop p-fs-gpfs:2 ( cvmh03 ) due to node availability
+ * Stop p-libvirtd:0 ( cvmh04 ) due to node availability
+ * Stop p-libvirtd:2 ( cvmh03 ) due to node availability
+ * Stop fs-bind-libvirt-VM-cvmh:0 ( cvmh04 ) due to node availability
+ * Stop fs-bind-libvirt-VM-cvmh:2 ( cvmh03 ) due to node availability
+ * Migrate vm-compute-test ( cvmh03 -> cvmh01 )
+ * Migrate vm-swbuildsl6 ( cvmh04 -> cvmh02 )
+
+Executing Cluster Transition:
+ * Pseudo action: c-watch-ib0_stop_0
+ * Pseudo action: load_stopped_cvmh02
+ * Pseudo action: load_stopped_cvmh01
+ * Resource action: p-watch-ib0 stop on cvmh03
+ * Resource action: vm-compute-test migrate_to on cvmh03
+ * Resource action: p-watch-ib0 stop on cvmh04
+ * Pseudo action: c-watch-ib0_stopped_0
+ * Resource action: vm-compute-test migrate_from on cvmh01
+ * Resource action: vm-swbuildsl6 migrate_to on cvmh04
+ * Resource action: vm-swbuildsl6 migrate_from on cvmh02
+ * Resource action: vm-swbuildsl6 stop on cvmh04
+ * Pseudo action: load_stopped_cvmh04
+ * Resource action: vm-compute-test stop on cvmh03
+ * Pseudo action: load_stopped_cvmh03
+ * Pseudo action: c-p-libvirtd_stop_0
+ * Pseudo action: vm-compute-test_start_0
+ * Pseudo action: vm-swbuildsl6_start_0
+ * Resource action: p-libvirtd stop on cvmh03
+ * Resource action: vm-compute-test monitor=45000 on cvmh01
+ * Resource action: vm-swbuildsl6 monitor=45000 on cvmh02
+ * Resource action: p-libvirtd stop on cvmh04
+ * Pseudo action: c-p-libvirtd_stopped_0
+ * Pseudo action: c-fs-bind-libvirt-VM-cvmh_stop_0
+ * Pseudo action: c-fs-libvirt-VM-xcm_stop_0
+ * Resource action: fs-bind-libvirt-VM-cvmh stop on cvmh03
+ * Resource action: fs-libvirt-VM-xcm stop on cvmh03
+ * Resource action: fs-bind-libvirt-VM-cvmh stop on cvmh04
+ * Pseudo action: c-fs-bind-libvirt-VM-cvmh_stopped_0
+ * Resource action: fs-libvirt-VM-xcm stop on cvmh04
+ * Pseudo action: c-fs-libvirt-VM-xcm_stopped_0
+ * Pseudo action: c-fs-gpfs_stop_0
+ * Resource action: p-fs-gpfs stop on cvmh03
+ * Resource action: p-fs-gpfs stop on cvmh04
+ * Pseudo action: c-fs-gpfs_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Node cvmh03: standby
+ * Node cvmh04: standby
+ * Online: [ cvmh01 cvmh02 ]
+
+ * Full List of Resources:
+ * fence-cvmh01 (stonith:fence_ipmilan): Started cvmh02
+ * fence-cvmh02 (stonith:fence_ipmilan): Started cvmh01
+ * fence-cvmh03 (stonith:fence_ipmilan): Started cvmh01
+ * fence-cvmh04 (stonith:fence_ipmilan): Started cvmh02
+ * Clone Set: c-fs-libvirt-VM-xcm [fs-libvirt-VM-xcm]:
+ * Started: [ cvmh01 cvmh02 ]
+ * Stopped: [ cvmh03 cvmh04 ]
+ * Clone Set: c-p-libvirtd [p-libvirtd]:
+ * Started: [ cvmh01 cvmh02 ]
+ * Stopped: [ cvmh03 cvmh04 ]
+ * Clone Set: c-fs-bind-libvirt-VM-cvmh [fs-bind-libvirt-VM-cvmh]:
+ * Started: [ cvmh01 cvmh02 ]
+ * Stopped: [ cvmh03 cvmh04 ]
+ * Clone Set: c-watch-ib0 [p-watch-ib0]:
+ * Started: [ cvmh01 cvmh02 ]
+ * Stopped: [ cvmh03 cvmh04 ]
+ * Clone Set: c-fs-gpfs [p-fs-gpfs]:
+ * Started: [ cvmh01 cvmh02 ]
+ * Stopped: [ cvmh03 cvmh04 ]
+ * vm-compute-test (ocf:ccni:xcatVirtualDomain): Started cvmh01
+ * vm-swbuildsl6 (ocf:ccni:xcatVirtualDomain): Started cvmh02
diff --git a/cts/scheduler/summary/migrate-fail-2.summary b/cts/scheduler/summary/migrate-fail-2.summary
new file mode 100644
index 0000000..278b2c0
--- /dev/null
+++ b/cts/scheduler/summary/migrate-fail-2.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * test-vm (ocf:heartbeat:Xen): FAILED [ hex-13 hex-14 ]
+ * Clone Set: c-clusterfs [dlm]:
+ * Started: [ hex-13 hex-14 ]
+
+Transition Summary:
+ * Recover test-vm ( hex-13 )
+
+Executing Cluster Transition:
+ * Resource action: test-vm stop on hex-14
+ * Resource action: test-vm stop on hex-13
+ * Pseudo action: load_stopped_hex-14
+ * Pseudo action: load_stopped_hex-13
+ * Resource action: test-vm start on hex-13
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * test-vm (ocf:heartbeat:Xen): Started hex-13
+ * Clone Set: c-clusterfs [dlm]:
+ * Started: [ hex-13 hex-14 ]
diff --git a/cts/scheduler/summary/migrate-fail-3.summary b/cts/scheduler/summary/migrate-fail-3.summary
new file mode 100644
index 0000000..f028396
--- /dev/null
+++ b/cts/scheduler/summary/migrate-fail-3.summary
@@ -0,0 +1,26 @@
+Current cluster status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * test-vm (ocf:heartbeat:Xen): FAILED hex-13
+ * Clone Set: c-clusterfs [dlm]:
+ * Started: [ hex-13 hex-14 ]
+
+Transition Summary:
+ * Recover test-vm ( hex-13 )
+
+Executing Cluster Transition:
+ * Resource action: test-vm stop on hex-13
+ * Pseudo action: load_stopped_hex-14
+ * Pseudo action: load_stopped_hex-13
+ * Resource action: test-vm start on hex-13
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * test-vm (ocf:heartbeat:Xen): Started hex-13
+ * Clone Set: c-clusterfs [dlm]:
+ * Started: [ hex-13 hex-14 ]
diff --git a/cts/scheduler/summary/migrate-fail-4.summary b/cts/scheduler/summary/migrate-fail-4.summary
new file mode 100644
index 0000000..0d155f4
--- /dev/null
+++ b/cts/scheduler/summary/migrate-fail-4.summary
@@ -0,0 +1,26 @@
+Current cluster status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * test-vm (ocf:heartbeat:Xen): FAILED hex-14
+ * Clone Set: c-clusterfs [dlm]:
+ * Started: [ hex-13 hex-14 ]
+
+Transition Summary:
+ * Recover test-vm ( hex-14 -> hex-13 )
+
+Executing Cluster Transition:
+ * Resource action: test-vm stop on hex-14
+ * Pseudo action: load_stopped_hex-13
+ * Pseudo action: load_stopped_hex-14
+ * Resource action: test-vm start on hex-13
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * test-vm (ocf:heartbeat:Xen): Started hex-13
+ * Clone Set: c-clusterfs [dlm]:
+ * Started: [ hex-13 hex-14 ]
diff --git a/cts/scheduler/summary/migrate-fail-5.summary b/cts/scheduler/summary/migrate-fail-5.summary
new file mode 100644
index 0000000..4200e29
--- /dev/null
+++ b/cts/scheduler/summary/migrate-fail-5.summary
@@ -0,0 +1,25 @@
+Current cluster status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * test-vm (ocf:heartbeat:Xen): Stopped
+ * Clone Set: c-clusterfs [dlm]:
+ * Started: [ hex-13 hex-14 ]
+
+Transition Summary:
+ * Start test-vm ( hex-13 )
+
+Executing Cluster Transition:
+ * Pseudo action: load_stopped_hex-14
+ * Pseudo action: load_stopped_hex-13
+ * Resource action: test-vm start on hex-13
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * test-vm (ocf:heartbeat:Xen): Started hex-13
+ * Clone Set: c-clusterfs [dlm]:
+ * Started: [ hex-13 hex-14 ]
diff --git a/cts/scheduler/summary/migrate-fail-6.summary b/cts/scheduler/summary/migrate-fail-6.summary
new file mode 100644
index 0000000..da1ccb0
--- /dev/null
+++ b/cts/scheduler/summary/migrate-fail-6.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * test-vm (ocf:heartbeat:Xen): FAILED [ hex-13 hex-14 ]
+ * Clone Set: c-clusterfs [dlm]:
+ * Started: [ hex-13 hex-14 ]
+
+Transition Summary:
+ * Recover test-vm ( hex-13 )
+
+Executing Cluster Transition:
+ * Resource action: test-vm stop on hex-13
+ * Resource action: test-vm stop on hex-14
+ * Pseudo action: load_stopped_hex-14
+ * Pseudo action: load_stopped_hex-13
+ * Resource action: test-vm start on hex-13
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * test-vm (ocf:heartbeat:Xen): Started hex-13
+ * Clone Set: c-clusterfs [dlm]:
+ * Started: [ hex-13 hex-14 ]
diff --git a/cts/scheduler/summary/migrate-fail-7.summary b/cts/scheduler/summary/migrate-fail-7.summary
new file mode 100644
index 0000000..9a8222d
--- /dev/null
+++ b/cts/scheduler/summary/migrate-fail-7.summary
@@ -0,0 +1,25 @@
+Current cluster status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * test-vm (ocf:heartbeat:Xen): Stopped hex-13
+ * Clone Set: c-clusterfs [dlm]:
+ * Started: [ hex-13 hex-14 ]
+
+Transition Summary:
+ * Restart test-vm ( hex-13 )
+
+Executing Cluster Transition:
+ * Pseudo action: load_stopped_hex-14
+ * Pseudo action: load_stopped_hex-13
+ * Resource action: test-vm start on hex-13
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * test-vm (ocf:heartbeat:Xen): Started hex-13
+ * Clone Set: c-clusterfs [dlm]:
+ * Started: [ hex-13 hex-14 ]
diff --git a/cts/scheduler/summary/migrate-fail-8.summary b/cts/scheduler/summary/migrate-fail-8.summary
new file mode 100644
index 0000000..0d155f4
--- /dev/null
+++ b/cts/scheduler/summary/migrate-fail-8.summary
@@ -0,0 +1,26 @@
+Current cluster status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * test-vm (ocf:heartbeat:Xen): FAILED hex-14
+ * Clone Set: c-clusterfs [dlm]:
+ * Started: [ hex-13 hex-14 ]
+
+Transition Summary:
+ * Recover test-vm ( hex-14 -> hex-13 )
+
+Executing Cluster Transition:
+ * Resource action: test-vm stop on hex-14
+ * Pseudo action: load_stopped_hex-13
+ * Pseudo action: load_stopped_hex-14
+ * Resource action: test-vm start on hex-13
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * test-vm (ocf:heartbeat:Xen): Started hex-13
+ * Clone Set: c-clusterfs [dlm]:
+ * Started: [ hex-13 hex-14 ]
diff --git a/cts/scheduler/summary/migrate-fail-9.summary b/cts/scheduler/summary/migrate-fail-9.summary
new file mode 100644
index 0000000..4200e29
--- /dev/null
+++ b/cts/scheduler/summary/migrate-fail-9.summary
@@ -0,0 +1,25 @@
+Current cluster status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * test-vm (ocf:heartbeat:Xen): Stopped
+ * Clone Set: c-clusterfs [dlm]:
+ * Started: [ hex-13 hex-14 ]
+
+Transition Summary:
+ * Start test-vm ( hex-13 )
+
+Executing Cluster Transition:
+ * Pseudo action: load_stopped_hex-14
+ * Pseudo action: load_stopped_hex-13
+ * Resource action: test-vm start on hex-13
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * test-vm (ocf:heartbeat:Xen): Started hex-13
+ * Clone Set: c-clusterfs [dlm]:
+ * Started: [ hex-13 hex-14 ]
diff --git a/cts/scheduler/summary/migrate-fencing.summary b/cts/scheduler/summary/migrate-fencing.summary
new file mode 100644
index 0000000..ebc65bd
--- /dev/null
+++ b/cts/scheduler/summary/migrate-fencing.summary
@@ -0,0 +1,108 @@
+Current cluster status:
+ * Node List:
+ * Node pcmk-4: UNCLEAN (online)
+ * Online: [ pcmk-1 pcmk-2 pcmk-3 ]
+
+ * Full List of Resources:
+ * Clone Set: Fencing [FencingChild]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
+ * Resource Group: group-1:
+ * r192.168.101.181 (ocf:heartbeat:IPaddr): Started pcmk-4
+ * r192.168.101.182 (ocf:heartbeat:IPaddr): Started pcmk-4
+ * r192.168.101.183 (ocf:heartbeat:IPaddr): Started pcmk-4
+ * rsc_pcmk-1 (ocf:heartbeat:IPaddr): Started pcmk-1
+ * rsc_pcmk-2 (ocf:heartbeat:IPaddr): Started pcmk-2
+ * rsc_pcmk-3 (ocf:heartbeat:IPaddr): Started pcmk-3
+ * rsc_pcmk-4 (ocf:heartbeat:IPaddr): Started pcmk-4
+ * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started pcmk-4
+ * migrator (ocf:pacemaker:Dummy): Started pcmk-1
+ * Clone Set: Connectivity [ping-1]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
+ * Clone Set: master-1 [stateful-1] (promotable):
+ * Promoted: [ pcmk-4 ]
+ * Unpromoted: [ pcmk-1 pcmk-2 pcmk-3 ]
+
+Transition Summary:
+ * Fence (reboot) pcmk-4 'termination was requested'
+ * Stop FencingChild:0 ( pcmk-4 ) due to node availability
+ * Move r192.168.101.181 ( pcmk-4 -> pcmk-1 )
+ * Move r192.168.101.182 ( pcmk-4 -> pcmk-1 )
+ * Move r192.168.101.183 ( pcmk-4 -> pcmk-1 )
+ * Move rsc_pcmk-4 ( pcmk-4 -> pcmk-2 )
+ * Move lsb-dummy ( pcmk-4 -> pcmk-1 )
+ * Migrate migrator ( pcmk-1 -> pcmk-3 )
+ * Stop ping-1:0 ( pcmk-4 ) due to node availability
+ * Stop stateful-1:0 ( Promoted pcmk-4 ) due to node availability
+ * Promote stateful-1:1 ( Unpromoted -> Promoted pcmk-1 )
+
+Executing Cluster Transition:
+ * Pseudo action: Fencing_stop_0
+ * Resource action: stateful-1:3 monitor=15000 on pcmk-3
+ * Resource action: stateful-1:2 monitor=15000 on pcmk-2
+ * Fencing pcmk-4 (reboot)
+ * Pseudo action: FencingChild:0_stop_0
+ * Pseudo action: Fencing_stopped_0
+ * Pseudo action: rsc_pcmk-4_stop_0
+ * Pseudo action: lsb-dummy_stop_0
+ * Resource action: migrator migrate_to on pcmk-1
+ * Pseudo action: Connectivity_stop_0
+ * Pseudo action: group-1_stop_0
+ * Pseudo action: r192.168.101.183_stop_0
+ * Resource action: rsc_pcmk-4 start on pcmk-2
+ * Resource action: migrator migrate_from on pcmk-3
+ * Resource action: migrator stop on pcmk-1
+ * Pseudo action: ping-1:0_stop_0
+ * Pseudo action: Connectivity_stopped_0
+ * Pseudo action: r192.168.101.182_stop_0
+ * Resource action: rsc_pcmk-4 monitor=5000 on pcmk-2
+ * Pseudo action: migrator_start_0
+ * Pseudo action: r192.168.101.181_stop_0
+ * Resource action: migrator monitor=10000 on pcmk-3
+ * Pseudo action: group-1_stopped_0
+ * Pseudo action: master-1_demote_0
+ * Pseudo action: stateful-1:0_demote_0
+ * Pseudo action: master-1_demoted_0
+ * Pseudo action: master-1_stop_0
+ * Pseudo action: stateful-1:0_stop_0
+ * Pseudo action: master-1_stopped_0
+ * Pseudo action: master-1_promote_0
+ * Resource action: stateful-1:1 promote on pcmk-1
+ * Pseudo action: master-1_promoted_0
+ * Pseudo action: group-1_start_0
+ * Resource action: r192.168.101.181 start on pcmk-1
+ * Resource action: r192.168.101.182 start on pcmk-1
+ * Resource action: r192.168.101.183 start on pcmk-1
+ * Resource action: stateful-1:1 monitor=16000 on pcmk-1
+ * Pseudo action: group-1_running_0
+ * Resource action: r192.168.101.181 monitor=5000 on pcmk-1
+ * Resource action: r192.168.101.182 monitor=5000 on pcmk-1
+ * Resource action: r192.168.101.183 monitor=5000 on pcmk-1
+ * Resource action: lsb-dummy start on pcmk-1
+ * Resource action: lsb-dummy monitor=5000 on pcmk-1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ pcmk-1 pcmk-2 pcmk-3 ]
+ * OFFLINE: [ pcmk-4 ]
+
+ * Full List of Resources:
+ * Clone Set: Fencing [FencingChild]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-3 ]
+ * Stopped: [ pcmk-4 ]
+ * Resource Group: group-1:
+ * r192.168.101.181 (ocf:heartbeat:IPaddr): Started pcmk-1
+ * r192.168.101.182 (ocf:heartbeat:IPaddr): Started pcmk-1
+ * r192.168.101.183 (ocf:heartbeat:IPaddr): Started pcmk-1
+ * rsc_pcmk-1 (ocf:heartbeat:IPaddr): Started pcmk-1
+ * rsc_pcmk-2 (ocf:heartbeat:IPaddr): Started pcmk-2
+ * rsc_pcmk-3 (ocf:heartbeat:IPaddr): Started pcmk-3
+ * rsc_pcmk-4 (ocf:heartbeat:IPaddr): Started pcmk-2
+ * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started pcmk-1
+ * migrator (ocf:pacemaker:Dummy): Started pcmk-3
+ * Clone Set: Connectivity [ping-1]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-3 ]
+ * Stopped: [ pcmk-4 ]
+ * Clone Set: master-1 [stateful-1] (promotable):
+ * Promoted: [ pcmk-1 ]
+ * Unpromoted: [ pcmk-2 pcmk-3 ]
+ * Stopped: [ pcmk-4 ]
diff --git a/cts/scheduler/summary/migrate-partial-1.summary b/cts/scheduler/summary/migrate-partial-1.summary
new file mode 100644
index 0000000..f65b708
--- /dev/null
+++ b/cts/scheduler/summary/migrate-partial-1.summary
@@ -0,0 +1,24 @@
+Current cluster status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * test-vm (ocf:heartbeat:Xen): Started hex-13
+ * Clone Set: c-clusterfs [dlm]:
+ * Started: [ hex-13 hex-14 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: test-vm stop on hex-14
+ * Pseudo action: load_stopped_hex-14
+ * Pseudo action: load_stopped_hex-13
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * test-vm (ocf:heartbeat:Xen): Started hex-13
+ * Clone Set: c-clusterfs [dlm]:
+ * Started: [ hex-13 hex-14 ]
diff --git a/cts/scheduler/summary/migrate-partial-2.summary b/cts/scheduler/summary/migrate-partial-2.summary
new file mode 100644
index 0000000..3a42359
--- /dev/null
+++ b/cts/scheduler/summary/migrate-partial-2.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * test-vm (ocf:heartbeat:Xen): Started [ hex-13 hex-14 ]
+ * Clone Set: c-clusterfs [dlm]:
+ * Started: [ hex-13 hex-14 ]
+
+Transition Summary:
+ * Migrate test-vm ( hex-14 -> hex-13 )
+
+Executing Cluster Transition:
+ * Resource action: test-vm migrate_from on hex-13
+ * Resource action: test-vm stop on hex-14
+ * Pseudo action: load_stopped_hex-14
+ * Pseudo action: load_stopped_hex-13
+ * Pseudo action: test-vm_start_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * test-vm (ocf:heartbeat:Xen): Started hex-13
+ * Clone Set: c-clusterfs [dlm]:
+ * Started: [ hex-13 hex-14 ]
diff --git a/cts/scheduler/summary/migrate-partial-3.summary b/cts/scheduler/summary/migrate-partial-3.summary
new file mode 100644
index 0000000..a674cf7
--- /dev/null
+++ b/cts/scheduler/summary/migrate-partial-3.summary
@@ -0,0 +1,31 @@
+Current cluster status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+ * OFFLINE: [ hex-15 ]
+
+ * Full List of Resources:
+ * test-vm (ocf:heartbeat:Xen): FAILED hex-14
+ * Clone Set: c-clusterfs [dlm]:
+ * Started: [ hex-13 hex-14 ]
+ * Stopped: [ hex-15 ]
+
+Transition Summary:
+ * Recover test-vm ( hex-14 -> hex-13 )
+
+Executing Cluster Transition:
+ * Resource action: test-vm stop on hex-14
+ * Pseudo action: load_stopped_hex-15
+ * Pseudo action: load_stopped_hex-13
+ * Pseudo action: load_stopped_hex-14
+ * Resource action: test-vm start on hex-13
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+ * OFFLINE: [ hex-15 ]
+
+ * Full List of Resources:
+ * test-vm (ocf:heartbeat:Xen): Started hex-13
+ * Clone Set: c-clusterfs [dlm]:
+ * Started: [ hex-13 hex-14 ]
+ * Stopped: [ hex-15 ]
diff --git a/cts/scheduler/summary/migrate-partial-4.summary b/cts/scheduler/summary/migrate-partial-4.summary
new file mode 100644
index 0000000..abb31f1
--- /dev/null
+++ b/cts/scheduler/summary/migrate-partial-4.summary
@@ -0,0 +1,126 @@
+Current cluster status:
+ * Node List:
+ * Online: [ lustre01-left lustre02-left lustre03-left lustre04-left ]
+
+ * Full List of Resources:
+ * drbd-local (ocf:vds-ok:Ticketer): Started lustre01-left
+ * drbd-stacked (ocf:vds-ok:Ticketer): Stopped
+ * drbd-testfs-local (ocf:vds-ok:Ticketer): Stopped
+ * drbd-testfs-stacked (ocf:vds-ok:Ticketer): Stopped
+ * ip-testfs-mdt0000-left (ocf:heartbeat:IPaddr2): Stopped
+ * ip-testfs-ost0000-left (ocf:heartbeat:IPaddr2): Stopped
+ * ip-testfs-ost0001-left (ocf:heartbeat:IPaddr2): Stopped
+ * ip-testfs-ost0002-left (ocf:heartbeat:IPaddr2): Stopped
+ * ip-testfs-ost0003-left (ocf:heartbeat:IPaddr2): Stopped
+ * lustre (ocf:vds-ok:Ticketer): Started lustre03-left
+ * mgs (ocf:vds-ok:lustre-server): Stopped
+ * testfs (ocf:vds-ok:Ticketer): Started lustre02-left
+ * testfs-mdt0000 (ocf:vds-ok:lustre-server): Stopped
+ * testfs-ost0000 (ocf:vds-ok:lustre-server): Stopped
+ * testfs-ost0001 (ocf:vds-ok:lustre-server): Stopped
+ * testfs-ost0002 (ocf:vds-ok:lustre-server): Stopped
+ * testfs-ost0003 (ocf:vds-ok:lustre-server): Stopped
+ * Resource Group: booth:
+ * ip-booth (ocf:heartbeat:IPaddr2): Started lustre02-left
+ * boothd (ocf:pacemaker:booth-site): Started lustre02-left
+ * Clone Set: ms-drbd-mgs [drbd-mgs] (promotable):
+ * Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ]
+ * Clone Set: ms-drbd-testfs-mdt0000 [drbd-testfs-mdt0000] (promotable):
+ * Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ]
+ * Clone Set: ms-drbd-testfs-mdt0000-left [drbd-testfs-mdt0000-left] (promotable):
+ * Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ]
+ * Clone Set: ms-drbd-testfs-ost0000 [drbd-testfs-ost0000] (promotable):
+ * Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ]
+ * Clone Set: ms-drbd-testfs-ost0000-left [drbd-testfs-ost0000-left] (promotable):
+ * Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ]
+ * Clone Set: ms-drbd-testfs-ost0001 [drbd-testfs-ost0001] (promotable):
+ * Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ]
+ * Clone Set: ms-drbd-testfs-ost0001-left [drbd-testfs-ost0001-left] (promotable):
+ * Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ]
+ * Clone Set: ms-drbd-testfs-ost0002 [drbd-testfs-ost0002] (promotable):
+ * Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ]
+ * Clone Set: ms-drbd-testfs-ost0002-left [drbd-testfs-ost0002-left] (promotable):
+ * Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ]
+ * Clone Set: ms-drbd-testfs-ost0003 [drbd-testfs-ost0003] (promotable):
+ * Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ]
+ * Clone Set: ms-drbd-testfs-ost0003-left [drbd-testfs-ost0003-left] (promotable):
+ * Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ]
+
+Transition Summary:
+ * Start drbd-stacked ( lustre02-left )
+ * Start drbd-testfs-local ( lustre03-left )
+ * Migrate lustre ( lustre03-left -> lustre04-left )
+ * Move testfs ( lustre02-left -> lustre03-left )
+ * Start drbd-mgs:0 ( lustre01-left )
+ * Start drbd-mgs:1 ( lustre02-left )
+
+Executing Cluster Transition:
+ * Resource action: drbd-stacked start on lustre02-left
+ * Resource action: drbd-testfs-local start on lustre03-left
+ * Resource action: lustre migrate_to on lustre03-left
+ * Resource action: testfs stop on lustre02-left
+ * Resource action: testfs stop on lustre01-left
+ * Pseudo action: ms-drbd-mgs_pre_notify_start_0
+ * Resource action: lustre migrate_from on lustre04-left
+ * Resource action: lustre stop on lustre03-left
+ * Resource action: testfs start on lustre03-left
+ * Pseudo action: ms-drbd-mgs_confirmed-pre_notify_start_0
+ * Pseudo action: ms-drbd-mgs_start_0
+ * Pseudo action: lustre_start_0
+ * Resource action: drbd-mgs:0 start on lustre01-left
+ * Resource action: drbd-mgs:1 start on lustre02-left
+ * Pseudo action: ms-drbd-mgs_running_0
+ * Pseudo action: ms-drbd-mgs_post_notify_running_0
+ * Resource action: drbd-mgs:0 notify on lustre01-left
+ * Resource action: drbd-mgs:1 notify on lustre02-left
+ * Pseudo action: ms-drbd-mgs_confirmed-post_notify_running_0
+ * Resource action: drbd-mgs:0 monitor=30000 on lustre01-left
+ * Resource action: drbd-mgs:1 monitor=30000 on lustre02-left
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ lustre01-left lustre02-left lustre03-left lustre04-left ]
+
+ * Full List of Resources:
+ * drbd-local (ocf:vds-ok:Ticketer): Started lustre01-left
+ * drbd-stacked (ocf:vds-ok:Ticketer): Started lustre02-left
+ * drbd-testfs-local (ocf:vds-ok:Ticketer): Started lustre03-left
+ * drbd-testfs-stacked (ocf:vds-ok:Ticketer): Stopped
+ * ip-testfs-mdt0000-left (ocf:heartbeat:IPaddr2): Stopped
+ * ip-testfs-ost0000-left (ocf:heartbeat:IPaddr2): Stopped
+ * ip-testfs-ost0001-left (ocf:heartbeat:IPaddr2): Stopped
+ * ip-testfs-ost0002-left (ocf:heartbeat:IPaddr2): Stopped
+ * ip-testfs-ost0003-left (ocf:heartbeat:IPaddr2): Stopped
+ * lustre (ocf:vds-ok:Ticketer): Started lustre04-left
+ * mgs (ocf:vds-ok:lustre-server): Stopped
+ * testfs (ocf:vds-ok:Ticketer): Started lustre03-left
+ * testfs-mdt0000 (ocf:vds-ok:lustre-server): Stopped
+ * testfs-ost0000 (ocf:vds-ok:lustre-server): Stopped
+ * testfs-ost0001 (ocf:vds-ok:lustre-server): Stopped
+ * testfs-ost0002 (ocf:vds-ok:lustre-server): Stopped
+ * testfs-ost0003 (ocf:vds-ok:lustre-server): Stopped
+ * Resource Group: booth:
+ * ip-booth (ocf:heartbeat:IPaddr2): Started lustre02-left
+ * boothd (ocf:pacemaker:booth-site): Started lustre02-left
+ * Clone Set: ms-drbd-mgs [drbd-mgs] (promotable):
+ * Unpromoted: [ lustre01-left lustre02-left ]
+ * Clone Set: ms-drbd-testfs-mdt0000 [drbd-testfs-mdt0000] (promotable):
+ * Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ]
+ * Clone Set: ms-drbd-testfs-mdt0000-left [drbd-testfs-mdt0000-left] (promotable):
+ * Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ]
+ * Clone Set: ms-drbd-testfs-ost0000 [drbd-testfs-ost0000] (promotable):
+ * Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ]
+ * Clone Set: ms-drbd-testfs-ost0000-left [drbd-testfs-ost0000-left] (promotable):
+ * Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ]
+ * Clone Set: ms-drbd-testfs-ost0001 [drbd-testfs-ost0001] (promotable):
+ * Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ]
+ * Clone Set: ms-drbd-testfs-ost0001-left [drbd-testfs-ost0001-left] (promotable):
+ * Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ]
+ * Clone Set: ms-drbd-testfs-ost0002 [drbd-testfs-ost0002] (promotable):
+ * Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ]
+ * Clone Set: ms-drbd-testfs-ost0002-left [drbd-testfs-ost0002-left] (promotable):
+ * Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ]
+ * Clone Set: ms-drbd-testfs-ost0003 [drbd-testfs-ost0003] (promotable):
+ * Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ]
+ * Clone Set: ms-drbd-testfs-ost0003-left [drbd-testfs-ost0003-left] (promotable):
+ * Stopped: [ lustre01-left lustre02-left lustre03-left lustre04-left ]
diff --git a/cts/scheduler/summary/migrate-shutdown.summary b/cts/scheduler/summary/migrate-shutdown.summary
new file mode 100644
index 0000000..985b554
--- /dev/null
+++ b/cts/scheduler/summary/migrate-shutdown.summary
@@ -0,0 +1,92 @@
+Current cluster status:
+ * Node List:
+ * Online: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started pcmk-1
+ * Resource Group: group-1:
+ * r192.168.122.105 (ocf:heartbeat:IPaddr): Started pcmk-2
+ * r192.168.122.106 (ocf:heartbeat:IPaddr): Started pcmk-2
+ * r192.168.122.107 (ocf:heartbeat:IPaddr): Started pcmk-2
+ * rsc_pcmk-1 (ocf:heartbeat:IPaddr): Started pcmk-1
+ * rsc_pcmk-2 (ocf:heartbeat:IPaddr): Started pcmk-2
+ * rsc_pcmk-3 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_pcmk-4 (ocf:heartbeat:IPaddr): Started pcmk-4
+ * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started pcmk-2
+ * migrator (ocf:pacemaker:Dummy): Started pcmk-1
+ * Clone Set: Connectivity [ping-1]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-4 ]
+ * Stopped: [ pcmk-3 ]
+ * Clone Set: master-1 [stateful-1] (promotable):
+ * Promoted: [ pcmk-2 ]
+ * Unpromoted: [ pcmk-1 pcmk-4 ]
+ * Stopped: [ pcmk-3 ]
+
+Transition Summary:
+ * Stop Fencing ( pcmk-1 ) due to node availability
+ * Stop r192.168.122.105 ( pcmk-2 ) due to node availability
+ * Stop r192.168.122.106 ( pcmk-2 ) due to node availability
+ * Stop r192.168.122.107 ( pcmk-2 ) due to node availability
+ * Stop rsc_pcmk-1 ( pcmk-1 ) due to node availability
+ * Stop rsc_pcmk-2 ( pcmk-2 ) due to node availability
+ * Stop rsc_pcmk-4 ( pcmk-4 ) due to node availability
+ * Stop lsb-dummy ( pcmk-2 ) due to node availability
+ * Stop migrator ( pcmk-1 ) due to node availability
+ * Stop ping-1:0 ( pcmk-1 ) due to node availability
+ * Stop ping-1:1 ( pcmk-2 ) due to node availability
+ * Stop ping-1:2 ( pcmk-4 ) due to node availability
+ * Stop stateful-1:0 ( Unpromoted pcmk-1 ) due to node availability
+ * Stop stateful-1:1 ( Promoted pcmk-2 ) due to node availability
+ * Stop stateful-1:2 ( Unpromoted pcmk-4 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: Fencing stop on pcmk-1
+ * Resource action: rsc_pcmk-1 stop on pcmk-1
+ * Resource action: rsc_pcmk-2 stop on pcmk-2
+ * Resource action: rsc_pcmk-4 stop on pcmk-4
+ * Resource action: lsb-dummy stop on pcmk-2
+ * Resource action: migrator stop on pcmk-1
+ * Resource action: migrator stop on pcmk-3
+ * Pseudo action: Connectivity_stop_0
+ * Cluster action: do_shutdown on pcmk-3
+ * Pseudo action: group-1_stop_0
+ * Resource action: r192.168.122.107 stop on pcmk-2
+ * Resource action: ping-1:0 stop on pcmk-1
+ * Resource action: ping-1:1 stop on pcmk-2
+ * Resource action: ping-1:3 stop on pcmk-4
+ * Pseudo action: Connectivity_stopped_0
+ * Resource action: r192.168.122.106 stop on pcmk-2
+ * Resource action: r192.168.122.105 stop on pcmk-2
+ * Pseudo action: group-1_stopped_0
+ * Pseudo action: master-1_demote_0
+ * Resource action: stateful-1:0 demote on pcmk-2
+ * Pseudo action: master-1_demoted_0
+ * Pseudo action: master-1_stop_0
+ * Resource action: stateful-1:2 stop on pcmk-1
+ * Resource action: stateful-1:0 stop on pcmk-2
+ * Resource action: stateful-1:3 stop on pcmk-4
+ * Pseudo action: master-1_stopped_0
+ * Cluster action: do_shutdown on pcmk-4
+ * Cluster action: do_shutdown on pcmk-2
+ * Cluster action: do_shutdown on pcmk-1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Stopped
+ * Resource Group: group-1:
+ * r192.168.122.105 (ocf:heartbeat:IPaddr): Stopped
+ * r192.168.122.106 (ocf:heartbeat:IPaddr): Stopped
+ * r192.168.122.107 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_pcmk-1 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_pcmk-2 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_pcmk-3 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_pcmk-4 (ocf:heartbeat:IPaddr): Stopped
+ * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Stopped
+ * migrator (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: Connectivity [ping-1]:
+ * Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
+ * Clone Set: master-1 [stateful-1] (promotable):
+ * Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
diff --git a/cts/scheduler/summary/migrate-start-complex.summary b/cts/scheduler/summary/migrate-start-complex.summary
new file mode 100644
index 0000000..78a408b
--- /dev/null
+++ b/cts/scheduler/summary/migrate-start-complex.summary
@@ -0,0 +1,50 @@
+Current cluster status:
+ * Node List:
+ * Online: [ dom0-01 dom0-02 ]
+
+ * Full List of Resources:
+ * top (ocf:heartbeat:Dummy): Started dom0-02
+ * domU-test01 (ocf:heartbeat:Xen): Started dom0-02
+ * Clone Set: clone-dom0-iscsi1 [dom0-iscsi1]:
+ * Started: [ dom0-02 ]
+ * Stopped: [ dom0-01 ]
+ * Clone Set: clone-bottom [bottom]:
+ * Stopped: [ dom0-01 dom0-02 ]
+
+Transition Summary:
+ * Move top ( dom0-02 -> dom0-01 )
+ * Migrate domU-test01 ( dom0-02 -> dom0-01 )
+ * Start dom0-iscsi1-cnx1:1 ( dom0-01 )
+ * Start bottom:0 ( dom0-01 )
+ * Start bottom:1 ( dom0-02 )
+
+Executing Cluster Transition:
+ * Resource action: top stop on dom0-02
+ * Pseudo action: clone-dom0-iscsi1_start_0
+ * Resource action: bottom:0 monitor on dom0-01
+ * Resource action: bottom:1 monitor on dom0-02
+ * Pseudo action: clone-bottom_start_0
+ * Pseudo action: dom0-iscsi1:1_start_0
+ * Resource action: dom0-iscsi1-cnx1:1 start on dom0-01
+ * Resource action: bottom:0 start on dom0-01
+ * Resource action: bottom:1 start on dom0-02
+ * Pseudo action: clone-bottom_running_0
+ * Pseudo action: dom0-iscsi1:1_running_0
+ * Pseudo action: clone-dom0-iscsi1_running_0
+ * Resource action: domU-test01 migrate_to on dom0-02
+ * Resource action: domU-test01 migrate_from on dom0-01
+ * Resource action: domU-test01 stop on dom0-02
+ * Pseudo action: domU-test01_start_0
+ * Resource action: top start on dom0-01
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ dom0-01 dom0-02 ]
+
+ * Full List of Resources:
+ * top (ocf:heartbeat:Dummy): Started dom0-01
+ * domU-test01 (ocf:heartbeat:Xen): Started dom0-01
+ * Clone Set: clone-dom0-iscsi1 [dom0-iscsi1]:
+ * Started: [ dom0-01 dom0-02 ]
+ * Clone Set: clone-bottom [bottom]:
+ * Started: [ dom0-01 dom0-02 ]
diff --git a/cts/scheduler/summary/migrate-start.summary b/cts/scheduler/summary/migrate-start.summary
new file mode 100644
index 0000000..9aa1831
--- /dev/null
+++ b/cts/scheduler/summary/migrate-start.summary
@@ -0,0 +1,33 @@
+Current cluster status:
+ * Node List:
+ * Online: [ dom0-01 dom0-02 ]
+
+ * Full List of Resources:
+ * domU-test01 (ocf:heartbeat:Xen): Started dom0-02
+ * Clone Set: clone-dom0-iscsi1 [dom0-iscsi1]:
+ * Started: [ dom0-02 ]
+ * Stopped: [ dom0-01 ]
+
+Transition Summary:
+ * Migrate domU-test01 ( dom0-02 -> dom0-01 )
+ * Start dom0-iscsi1-cnx1:1 ( dom0-01 )
+
+Executing Cluster Transition:
+ * Pseudo action: clone-dom0-iscsi1_start_0
+ * Pseudo action: dom0-iscsi1:1_start_0
+ * Resource action: dom0-iscsi1-cnx1:1 start on dom0-01
+ * Pseudo action: dom0-iscsi1:1_running_0
+ * Pseudo action: clone-dom0-iscsi1_running_0
+ * Resource action: domU-test01 migrate_to on dom0-02
+ * Resource action: domU-test01 migrate_from on dom0-01
+ * Resource action: domU-test01 stop on dom0-02
+ * Pseudo action: domU-test01_start_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ dom0-01 dom0-02 ]
+
+ * Full List of Resources:
+ * domU-test01 (ocf:heartbeat:Xen): Started dom0-01
+ * Clone Set: clone-dom0-iscsi1 [dom0-iscsi1]:
+ * Started: [ dom0-01 dom0-02 ]
diff --git a/cts/scheduler/summary/migrate-stop-complex.summary b/cts/scheduler/summary/migrate-stop-complex.summary
new file mode 100644
index 0000000..7cc68b0
--- /dev/null
+++ b/cts/scheduler/summary/migrate-stop-complex.summary
@@ -0,0 +1,49 @@
+Current cluster status:
+ * Node List:
+ * Node dom0-02: standby (with active resources)
+ * Online: [ dom0-01 ]
+
+ * Full List of Resources:
+ * top (ocf:heartbeat:Dummy): Started dom0-02
+ * domU-test01 (ocf:heartbeat:Xen): Started dom0-02
+ * Clone Set: clone-dom0-iscsi1 [dom0-iscsi1]:
+ * Started: [ dom0-01 dom0-02 ]
+ * Clone Set: clone-bottom [bottom]:
+ * Started: [ dom0-01 dom0-02 ]
+
+Transition Summary:
+ * Move top ( dom0-02 -> dom0-01 )
+ * Migrate domU-test01 ( dom0-02 -> dom0-01 )
+ * Stop dom0-iscsi1-cnx1:1 ( dom0-02 ) due to node availability
+ * Stop bottom:1 ( dom0-02 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: top stop on dom0-02
+ * Resource action: domU-test01 migrate_to on dom0-02
+ * Pseudo action: clone-dom0-iscsi1_stop_0
+ * Pseudo action: clone-bottom_stop_0
+ * Resource action: domU-test01 migrate_from on dom0-01
+ * Resource action: domU-test01 stop on dom0-02
+ * Pseudo action: dom0-iscsi1:1_stop_0
+ * Resource action: dom0-iscsi1-cnx1:0 stop on dom0-02
+ * Resource action: bottom:0 stop on dom0-02
+ * Pseudo action: clone-bottom_stopped_0
+ * Pseudo action: domU-test01_start_0
+ * Pseudo action: dom0-iscsi1:1_stopped_0
+ * Pseudo action: clone-dom0-iscsi1_stopped_0
+ * Resource action: top start on dom0-01
+
+Revised Cluster Status:
+ * Node List:
+ * Node dom0-02: standby
+ * Online: [ dom0-01 ]
+
+ * Full List of Resources:
+ * top (ocf:heartbeat:Dummy): Started dom0-01
+ * domU-test01 (ocf:heartbeat:Xen): Started dom0-01
+ * Clone Set: clone-dom0-iscsi1 [dom0-iscsi1]:
+ * Started: [ dom0-01 ]
+ * Stopped: [ dom0-02 ]
+ * Clone Set: clone-bottom [bottom]:
+ * Started: [ dom0-01 ]
+ * Stopped: [ dom0-02 ]
diff --git a/cts/scheduler/summary/migrate-stop-start-complex.summary b/cts/scheduler/summary/migrate-stop-start-complex.summary
new file mode 100644
index 0000000..b317383
--- /dev/null
+++ b/cts/scheduler/summary/migrate-stop-start-complex.summary
@@ -0,0 +1,50 @@
+Current cluster status:
+ * Node List:
+ * Node dom0-02: standby (with active resources)
+ * Online: [ dom0-01 ]
+
+ * Full List of Resources:
+ * top (ocf:heartbeat:Dummy): Started dom0-01
+ * domU-test01 (ocf:heartbeat:Xen): Started dom0-02
+ * Clone Set: clone-dom0-iscsi1 [dom0-iscsi1]:
+ * Started: [ dom0-01 dom0-02 ]
+ * Clone Set: clone-bottom [bottom]:
+ * Started: [ dom0-02 ]
+ * Stopped: [ dom0-01 ]
+
+Transition Summary:
+ * Migrate domU-test01 ( dom0-02 -> dom0-01 )
+ * Stop dom0-iscsi1-cnx1:1 ( dom0-02 ) due to node availability
+ * Move bottom:0 ( dom0-02 -> dom0-01 )
+
+Executing Cluster Transition:
+ * Resource action: domU-test01 migrate_to on dom0-02
+ * Pseudo action: clone-dom0-iscsi1_stop_0
+ * Resource action: domU-test01 migrate_from on dom0-01
+ * Resource action: domU-test01 stop on dom0-02
+ * Pseudo action: dom0-iscsi1:1_stop_0
+ * Resource action: dom0-iscsi1-cnx1:0 stop on dom0-02
+ * Pseudo action: domU-test01_start_0
+ * Pseudo action: dom0-iscsi1:1_stopped_0
+ * Pseudo action: clone-dom0-iscsi1_stopped_0
+ * Pseudo action: clone-bottom_stop_0
+ * Resource action: bottom:0 stop on dom0-02
+ * Pseudo action: clone-bottom_stopped_0
+ * Pseudo action: clone-bottom_start_0
+ * Resource action: bottom:0 start on dom0-01
+ * Pseudo action: clone-bottom_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Node dom0-02: standby
+ * Online: [ dom0-01 ]
+
+ * Full List of Resources:
+ * top (ocf:heartbeat:Dummy): Started dom0-01
+ * domU-test01 (ocf:heartbeat:Xen): Started dom0-01
+ * Clone Set: clone-dom0-iscsi1 [dom0-iscsi1]:
+ * Started: [ dom0-01 ]
+ * Stopped: [ dom0-02 ]
+ * Clone Set: clone-bottom [bottom]:
+ * Started: [ dom0-01 ]
+ * Stopped: [ dom0-02 ]
diff --git a/cts/scheduler/summary/migrate-stop.summary b/cts/scheduler/summary/migrate-stop.summary
new file mode 100644
index 0000000..f669865
--- /dev/null
+++ b/cts/scheduler/summary/migrate-stop.summary
@@ -0,0 +1,35 @@
+Current cluster status:
+ * Node List:
+ * Node dom0-02: standby (with active resources)
+ * Online: [ dom0-01 ]
+
+ * Full List of Resources:
+ * domU-test01 (ocf:heartbeat:Xen): Started dom0-02
+ * Clone Set: clone-dom0-iscsi1 [dom0-iscsi1]:
+ * Started: [ dom0-01 dom0-02 ]
+
+Transition Summary:
+ * Migrate domU-test01 ( dom0-02 -> dom0-01 )
+ * Stop dom0-iscsi1-cnx1:1 ( dom0-02 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: domU-test01 migrate_to on dom0-02
+ * Pseudo action: clone-dom0-iscsi1_stop_0
+ * Resource action: domU-test01 migrate_from on dom0-01
+ * Resource action: domU-test01 stop on dom0-02
+ * Pseudo action: dom0-iscsi1:1_stop_0
+ * Resource action: dom0-iscsi1-cnx1:0 stop on dom0-02
+ * Pseudo action: domU-test01_start_0
+ * Pseudo action: dom0-iscsi1:1_stopped_0
+ * Pseudo action: clone-dom0-iscsi1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Node dom0-02: standby
+ * Online: [ dom0-01 ]
+
+ * Full List of Resources:
+ * domU-test01 (ocf:heartbeat:Xen): Started dom0-01
+ * Clone Set: clone-dom0-iscsi1 [dom0-iscsi1]:
+ * Started: [ dom0-01 ]
+ * Stopped: [ dom0-02 ]
diff --git a/cts/scheduler/summary/migrate-stop_start.summary b/cts/scheduler/summary/migrate-stop_start.summary
new file mode 100644
index 0000000..13cb1c9
--- /dev/null
+++ b/cts/scheduler/summary/migrate-stop_start.summary
@@ -0,0 +1,41 @@
+Current cluster status:
+ * Node List:
+ * Node dom0-02: standby (with active resources)
+ * Online: [ dom0-01 ]
+
+ * Full List of Resources:
+ * domU-test01 (ocf:heartbeat:Xen): Started dom0-02
+ * Clone Set: clone-dom0-iscsi1 [dom0-iscsi1]:
+ * Started: [ dom0-02 ]
+ * Stopped: [ dom0-01 ]
+
+Transition Summary:
+ * Migrate domU-test01 ( dom0-02 -> dom0-01 )
+ * Move dom0-iscsi1-cnx1:0 ( dom0-02 -> dom0-01 )
+
+Executing Cluster Transition:
+ * Pseudo action: clone-dom0-iscsi1_stop_0
+ * Pseudo action: dom0-iscsi1:0_stop_0
+ * Resource action: dom0-iscsi1-cnx1:0 stop on dom0-02
+ * Pseudo action: dom0-iscsi1:0_stopped_0
+ * Pseudo action: clone-dom0-iscsi1_stopped_0
+ * Pseudo action: clone-dom0-iscsi1_start_0
+ * Pseudo action: dom0-iscsi1:0_start_0
+ * Resource action: dom0-iscsi1-cnx1:0 start on dom0-01
+ * Pseudo action: dom0-iscsi1:0_running_0
+ * Pseudo action: clone-dom0-iscsi1_running_0
+ * Resource action: domU-test01 migrate_to on dom0-02
+ * Resource action: domU-test01 migrate_from on dom0-01
+ * Resource action: domU-test01 stop on dom0-02
+ * Pseudo action: domU-test01_start_0
+
+Revised Cluster Status:
+ * Node List:
+ * Node dom0-02: standby
+ * Online: [ dom0-01 ]
+
+ * Full List of Resources:
+ * domU-test01 (ocf:heartbeat:Xen): Started dom0-01
+ * Clone Set: clone-dom0-iscsi1 [dom0-iscsi1]:
+ * Started: [ dom0-01 ]
+ * Stopped: [ dom0-02 ]
diff --git a/cts/scheduler/summary/migrate-success.summary b/cts/scheduler/summary/migrate-success.summary
new file mode 100644
index 0000000..cf9a000
--- /dev/null
+++ b/cts/scheduler/summary/migrate-success.summary
@@ -0,0 +1,23 @@
+Current cluster status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * test-vm (ocf:heartbeat:Xen): Started hex-13
+ * Clone Set: c-clusterfs [dlm]:
+ * Started: [ hex-13 hex-14 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Pseudo action: load_stopped_hex-14
+ * Pseudo action: load_stopped_hex-13
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * test-vm (ocf:heartbeat:Xen): Started hex-13
+ * Clone Set: c-clusterfs [dlm]:
+ * Started: [ hex-13 hex-14 ]
diff --git a/cts/scheduler/summary/migration-behind-migrating-remote.summary b/cts/scheduler/summary/migration-behind-migrating-remote.summary
new file mode 100644
index 0000000..5529819
--- /dev/null
+++ b/cts/scheduler/summary/migration-behind-migrating-remote.summary
@@ -0,0 +1,39 @@
+Using the original execution date of: 2017-08-21 17:12:54Z
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+ * RemoteOnline: [ remote1 remote2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * rsc1 (ocf:pacemaker:Dummy): Started remote1
+ * remote1 (ocf:pacemaker:remote): Started node1
+ * remote2 (ocf:pacemaker:remote): Started node2
+
+Transition Summary:
+ * Migrate rsc1 ( remote1 -> remote2 )
+ * Migrate remote1 ( node1 -> node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 migrate_to on remote1
+ * Resource action: remote1 migrate_to on node1
+ * Resource action: rsc1 migrate_from on remote2
+ * Resource action: rsc1 stop on remote1
+ * Resource action: remote1 migrate_from on node2
+ * Resource action: remote1 stop on node1
+ * Pseudo action: rsc1_start_0
+ * Pseudo action: remote1_start_0
+ * Resource action: rsc1 monitor=10000 on remote2
+ * Resource action: remote1 monitor=60000 on node2
+Using the original execution date of: 2017-08-21 17:12:54Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+ * RemoteOnline: [ remote1 remote2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node2
+ * rsc1 (ocf:pacemaker:Dummy): Started remote2
+ * remote1 (ocf:pacemaker:remote): Started node2
+ * remote2 (ocf:pacemaker:remote): Started node2
diff --git a/cts/scheduler/summary/migration-intermediary-cleaned.summary b/cts/scheduler/summary/migration-intermediary-cleaned.summary
new file mode 100644
index 0000000..dd127a8
--- /dev/null
+++ b/cts/scheduler/summary/migration-intermediary-cleaned.summary
@@ -0,0 +1,89 @@
+Using the original execution date of: 2023-01-19 21:05:59Z
+Current cluster status:
+ * Node List:
+ * Online: [ rhel8-2 rhel8-3 rhel8-4 rhel8-5 ]
+ * OFFLINE: [ rhel8-1 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel8-3
+ * FencingPass (stonith:fence_dummy): Started rhel8-4
+ * FencingFail (stonith:fence_dummy): Started rhel8-5
+ * rsc_rhel8-1 (ocf:heartbeat:IPaddr2): Started rhel8-3
+ * rsc_rhel8-2 (ocf:heartbeat:IPaddr2): Started rhel8-4
+ * rsc_rhel8-3 (ocf:heartbeat:IPaddr2): Started rhel8-3
+ * rsc_rhel8-4 (ocf:heartbeat:IPaddr2): Started rhel8-4
+ * rsc_rhel8-5 (ocf:heartbeat:IPaddr2): Started rhel8-5
+ * migrator (ocf:pacemaker:Dummy): Started rhel8-5
+ * Clone Set: Connectivity [ping-1]:
+ * Started: [ rhel8-3 rhel8-4 rhel8-5 ]
+ * Stopped: [ rhel8-1 rhel8-2 ]
+ * Clone Set: promotable-1 [stateful-1] (promotable):
+ * Promoted: [ rhel8-3 ]
+ * Unpromoted: [ rhel8-4 rhel8-5 ]
+ * Stopped: [ rhel8-1 rhel8-2 ]
+ * Resource Group: group-1:
+ * r192.168.122.207 (ocf:heartbeat:IPaddr2): Started rhel8-3
+ * petulant (service:pacemaker-cts-dummyd@10): Started rhel8-3
+ * r192.168.122.208 (ocf:heartbeat:IPaddr2): Started rhel8-3
+ * lsb-dummy (lsb:LSBDummy): Started rhel8-3
+
+Transition Summary:
+ * Move rsc_rhel8-1 ( rhel8-3 -> rhel8-2 )
+ * Move rsc_rhel8-2 ( rhel8-4 -> rhel8-2 )
+ * Start ping-1:3 ( rhel8-2 )
+
+Executing Cluster Transition:
+ * Resource action: Fencing monitor on rhel8-2
+ * Resource action: FencingPass monitor on rhel8-2
+ * Resource action: FencingFail monitor on rhel8-2
+ * Resource action: rsc_rhel8-1 stop on rhel8-3
+ * Resource action: rsc_rhel8-1 monitor on rhel8-2
+ * Resource action: rsc_rhel8-2 stop on rhel8-4
+ * Resource action: rsc_rhel8-2 monitor on rhel8-2
+ * Resource action: rsc_rhel8-3 monitor on rhel8-2
+ * Resource action: rsc_rhel8-4 monitor on rhel8-2
+ * Resource action: rsc_rhel8-5 monitor on rhel8-2
+ * Resource action: migrator monitor on rhel8-2
+ * Resource action: ping-1 monitor on rhel8-2
+ * Pseudo action: Connectivity_start_0
+ * Resource action: stateful-1 monitor on rhel8-2
+ * Resource action: r192.168.122.207 monitor on rhel8-2
+ * Resource action: petulant monitor on rhel8-2
+ * Resource action: r192.168.122.208 monitor on rhel8-2
+ * Resource action: lsb-dummy monitor on rhel8-2
+ * Resource action: rsc_rhel8-1 start on rhel8-2
+ * Resource action: rsc_rhel8-2 start on rhel8-2
+ * Resource action: ping-1 start on rhel8-2
+ * Pseudo action: Connectivity_running_0
+ * Resource action: rsc_rhel8-1 monitor=5000 on rhel8-2
+ * Resource action: rsc_rhel8-2 monitor=5000 on rhel8-2
+ * Resource action: ping-1 monitor=60000 on rhel8-2
+Using the original execution date of: 2023-01-19 21:05:59Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel8-2 rhel8-3 rhel8-4 rhel8-5 ]
+ * OFFLINE: [ rhel8-1 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel8-3
+ * FencingPass (stonith:fence_dummy): Started rhel8-4
+ * FencingFail (stonith:fence_dummy): Started rhel8-5
+ * rsc_rhel8-1 (ocf:heartbeat:IPaddr2): Started rhel8-2
+ * rsc_rhel8-2 (ocf:heartbeat:IPaddr2): Started rhel8-2
+ * rsc_rhel8-3 (ocf:heartbeat:IPaddr2): Started rhel8-3
+ * rsc_rhel8-4 (ocf:heartbeat:IPaddr2): Started rhel8-4
+ * rsc_rhel8-5 (ocf:heartbeat:IPaddr2): Started rhel8-5
+ * migrator (ocf:pacemaker:Dummy): Started rhel8-5
+ * Clone Set: Connectivity [ping-1]:
+ * Started: [ rhel8-2 rhel8-3 rhel8-4 rhel8-5 ]
+ * Stopped: [ rhel8-1 ]
+ * Clone Set: promotable-1 [stateful-1] (promotable):
+ * Promoted: [ rhel8-3 ]
+ * Unpromoted: [ rhel8-4 rhel8-5 ]
+ * Stopped: [ rhel8-1 rhel8-2 ]
+ * Resource Group: group-1:
+ * r192.168.122.207 (ocf:heartbeat:IPaddr2): Started rhel8-3
+ * petulant (service:pacemaker-cts-dummyd@10): Started rhel8-3
+ * r192.168.122.208 (ocf:heartbeat:IPaddr2): Started rhel8-3
+ * lsb-dummy (lsb:LSBDummy): Started rhel8-3
diff --git a/cts/scheduler/summary/migration-ping-pong.summary b/cts/scheduler/summary/migration-ping-pong.summary
new file mode 100644
index 0000000..0891fbf
--- /dev/null
+++ b/cts/scheduler/summary/migration-ping-pong.summary
@@ -0,0 +1,27 @@
+Using the original execution date of: 2019-06-06 13:56:45Z
+Current cluster status:
+ * Node List:
+ * Node ha-idg-2: standby
+ * Online: [ ha-idg-1 ]
+
+ * Full List of Resources:
+ * fence_ilo_ha-idg-2 (stonith:fence_ilo2): Started ha-idg-1
+ * fence_ilo_ha-idg-1 (stonith:fence_ilo4): Stopped
+ * vm_idcc_devel (ocf:heartbeat:VirtualDomain): Started ha-idg-1
+ * vm_severin (ocf:heartbeat:VirtualDomain): Started ha-idg-1
+
+Transition Summary:
+
+Executing Cluster Transition:
+Using the original execution date of: 2019-06-06 13:56:45Z
+
+Revised Cluster Status:
+ * Node List:
+ * Node ha-idg-2: standby
+ * Online: [ ha-idg-1 ]
+
+ * Full List of Resources:
+ * fence_ilo_ha-idg-2 (stonith:fence_ilo2): Started ha-idg-1
+ * fence_ilo_ha-idg-1 (stonith:fence_ilo4): Stopped
+ * vm_idcc_devel (ocf:heartbeat:VirtualDomain): Started ha-idg-1
+ * vm_severin (ocf:heartbeat:VirtualDomain): Started ha-idg-1
diff --git a/cts/scheduler/summary/minimal.summary b/cts/scheduler/summary/minimal.summary
new file mode 100644
index 0000000..3886b9e
--- /dev/null
+++ b/cts/scheduler/summary/minimal.summary
@@ -0,0 +1,29 @@
+Current cluster status:
+ * Node List:
+ * Online: [ host1 host2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( host1 )
+ * Start rsc2 ( host1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on host2
+ * Resource action: rsc1 monitor on host1
+ * Resource action: rsc2 monitor on host2
+ * Resource action: rsc2 monitor on host1
+ * Pseudo action: load_stopped_host2
+ * Pseudo action: load_stopped_host1
+ * Resource action: rsc1 start on host1
+ * Resource action: rsc2 start on host1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ host1 host2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started host1
+ * rsc2 (ocf:pacemaker:Dummy): Started host1
diff --git a/cts/scheduler/summary/mon-rsc-1.summary b/cts/scheduler/summary/mon-rsc-1.summary
new file mode 100644
index 0000000..92229e3
--- /dev/null
+++ b/cts/scheduler/summary/mon-rsc-1.summary
@@ -0,0 +1,22 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc1 monitor=5000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/mon-rsc-2.summary b/cts/scheduler/summary/mon-rsc-2.summary
new file mode 100644
index 0000000..3d605e8
--- /dev/null
+++ b/cts/scheduler/summary/mon-rsc-2.summary
@@ -0,0 +1,24 @@
+Current cluster status:
+ * Node List:
+ * Node node2: standby (with active resources)
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node2
+
+Transition Summary:
+ * Move rsc1 ( node2 -> node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc1 monitor=5000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/mon-rsc-3.summary b/cts/scheduler/summary/mon-rsc-3.summary
new file mode 100644
index 0000000..b60e20f
--- /dev/null
+++ b/cts/scheduler/summary/mon-rsc-3.summary
@@ -0,0 +1,20 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Starting node1
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc1 monitor=5000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/mon-rsc-4.summary b/cts/scheduler/summary/mon-rsc-4.summary
new file mode 100644
index 0000000..8ee6628
--- /dev/null
+++ b/cts/scheduler/summary/mon-rsc-4.summary
@@ -0,0 +1,24 @@
+Current cluster status:
+ * Node List:
+ * Node node2: standby (with active resources)
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Starting node2
+
+Transition Summary:
+ * Move rsc1 ( node2 -> node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc1 monitor=5000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Node node2: standby (with active resources)
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started [ node1 node2 ]
diff --git a/cts/scheduler/summary/monitor-onfail-restart.summary b/cts/scheduler/summary/monitor-onfail-restart.summary
new file mode 100644
index 0000000..5f409fc
--- /dev/null
+++ b/cts/scheduler/summary/monitor-onfail-restart.summary
@@ -0,0 +1,23 @@
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): FAILED fc16-builder
+
+Transition Summary:
+ * Recover A ( fc16-builder )
+
+Executing Cluster Transition:
+ * Resource action: A stop on fc16-builder
+ * Resource action: A start on fc16-builder
+ * Resource action: A monitor=20000 on fc16-builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Started fc16-builder
diff --git a/cts/scheduler/summary/monitor-onfail-stop.summary b/cts/scheduler/summary/monitor-onfail-stop.summary
new file mode 100644
index 0000000..2633efd
--- /dev/null
+++ b/cts/scheduler/summary/monitor-onfail-stop.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): FAILED fc16-builder
+
+Transition Summary:
+ * Stop A ( fc16-builder ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: A stop on fc16-builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/monitor-recovery.summary b/cts/scheduler/summary/monitor-recovery.summary
new file mode 100644
index 0000000..1a7ff74
--- /dev/null
+++ b/cts/scheduler/summary/monitor-recovery.summary
@@ -0,0 +1,32 @@
+Current cluster status:
+ * Node List:
+ * Online: [ CSE-1 ]
+ * OFFLINE: [ CSE-2 ]
+
+ * Full List of Resources:
+ * Resource Group: svc-cse:
+ * ip_19 (ocf:heartbeat:IPaddr2): Stopped
+ * ip_11 (ocf:heartbeat:IPaddr2): Stopped
+ * Clone Set: cl_tomcat [d_tomcat]:
+ * Started: [ CSE-1 ]
+ * Stopped: [ CSE-2 ]
+
+Transition Summary:
+ * Stop d_tomcat:0 ( CSE-1 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: cl_tomcat_stop_0
+ * Resource action: d_tomcat stop on CSE-1
+ * Pseudo action: cl_tomcat_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ CSE-1 ]
+ * OFFLINE: [ CSE-2 ]
+
+ * Full List of Resources:
+ * Resource Group: svc-cse:
+ * ip_19 (ocf:heartbeat:IPaddr2): Stopped
+ * ip_11 (ocf:heartbeat:IPaddr2): Stopped
+ * Clone Set: cl_tomcat [d_tomcat]:
+ * Stopped: [ CSE-1 CSE-2 ]
diff --git a/cts/scheduler/summary/multi1.summary b/cts/scheduler/summary/multi1.summary
new file mode 100644
index 0000000..a4ea149
--- /dev/null
+++ b/cts/scheduler/summary/multi1.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started [ node1 node2 ]
+
+Transition Summary:
+ * Restart rsc1 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node2
+ * Resource action: rsc1 stop on node1
+ * Resource action: rsc1 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/multiple-active-block-group.summary b/cts/scheduler/summary/multiple-active-block-group.summary
new file mode 100644
index 0000000..923ce55
--- /dev/null
+++ b/cts/scheduler/summary/multiple-active-block-group.summary
@@ -0,0 +1,27 @@
+0 of 4 resource instances DISABLED and 3 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ node2 node3 ]
+
+ * Full List of Resources:
+ * st-sbd (stonith:external/sbd): Started node2
+ * Resource Group: dgroup:
+ * dummy (ocf:heartbeat:DummyTimeout): FAILED (blocked) [ node2 node3 ]
+ * dummy2 (ocf:heartbeat:Dummy): Started node2 (blocked)
+ * dummy3 (ocf:heartbeat:Dummy): Started node2 (blocked)
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node2 node3 ]
+
+ * Full List of Resources:
+ * st-sbd (stonith:external/sbd): Started node2
+ * Resource Group: dgroup:
+ * dummy (ocf:heartbeat:DummyTimeout): FAILED (blocked) [ node2 node3 ]
+ * dummy2 (ocf:heartbeat:Dummy): Started node2 (blocked)
+ * dummy3 (ocf:heartbeat:Dummy): Started node2 (blocked)
diff --git a/cts/scheduler/summary/multiple-monitor-one-failed.summary b/cts/scheduler/summary/multiple-monitor-one-failed.summary
new file mode 100644
index 0000000..f6c872c
--- /dev/null
+++ b/cts/scheduler/summary/multiple-monitor-one-failed.summary
@@ -0,0 +1,22 @@
+Current cluster status:
+ * Node List:
+ * Online: [ dhcp69 dhcp180 ]
+
+ * Full List of Resources:
+ * Dummy-test2 (ocf:test:Dummy): FAILED dhcp180
+
+Transition Summary:
+ * Recover Dummy-test2 ( dhcp180 )
+
+Executing Cluster Transition:
+ * Resource action: Dummy-test2 stop on dhcp180
+ * Resource action: Dummy-test2 start on dhcp180
+ * Resource action: Dummy-test2 monitor=30000 on dhcp180
+ * Resource action: Dummy-test2 monitor=10000 on dhcp180
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ dhcp69 dhcp180 ]
+
+ * Full List of Resources:
+ * Dummy-test2 (ocf:test:Dummy): Started dhcp180
diff --git a/cts/scheduler/summary/multiply-active-stonith.summary b/cts/scheduler/summary/multiply-active-stonith.summary
new file mode 100644
index 0000000..ec37de0
--- /dev/null
+++ b/cts/scheduler/summary/multiply-active-stonith.summary
@@ -0,0 +1,28 @@
+Using the original execution date of: 2018-05-09 09:54:39Z
+Current cluster status:
+ * Node List:
+ * Node node2: UNCLEAN (online)
+ * Online: [ node1 node3 ]
+
+ * Full List of Resources:
+ * fencer (stonith:fence_ipmilan): Started [ node2 node3 ]
+ * rsc1 (lsb:rsc1): FAILED node2
+
+Transition Summary:
+ * Fence (reboot) node2 'rsc1 failed there'
+ * Stop rsc1 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: fencer monitor=60000 on node3
+ * Fencing node2 (reboot)
+ * Pseudo action: rsc1_stop_0
+Using the original execution date of: 2018-05-09 09:54:39Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node3 ]
+ * OFFLINE: [ node2 ]
+
+ * Full List of Resources:
+ * fencer (stonith:fence_ipmilan): Started node3
+ * rsc1 (lsb:rsc1): Stopped (not installed)
diff --git a/cts/scheduler/summary/nested-remote-recovery.summary b/cts/scheduler/summary/nested-remote-recovery.summary
new file mode 100644
index 0000000..fd3ccd7
--- /dev/null
+++ b/cts/scheduler/summary/nested-remote-recovery.summary
@@ -0,0 +1,131 @@
+Using the original execution date of: 2018-09-11 21:23:25Z
+Current cluster status:
+ * Node List:
+ * Online: [ controller-0 controller-1 controller-2 ]
+ * RemoteOnline: [ database-0 database-1 database-2 messaging-0 messaging-1 messaging-2 ]
+ * GuestOnline: [ galera-bundle-1 galera-bundle-2 rabbitmq-bundle-0 rabbitmq-bundle-1 rabbitmq-bundle-2 redis-bundle-0 redis-bundle-1 redis-bundle-2 ]
+
+ * Full List of Resources:
+ * database-0 (ocf:pacemaker:remote): Started controller-0
+ * database-1 (ocf:pacemaker:remote): Started controller-1
+ * database-2 (ocf:pacemaker:remote): Started controller-2
+ * messaging-0 (ocf:pacemaker:remote): Started controller-2
+ * messaging-1 (ocf:pacemaker:remote): Started controller-1
+ * messaging-2 (ocf:pacemaker:remote): Started controller-1
+ * Container bundle set: galera-bundle [192.168.24.1:8787/rhosp13/openstack-mariadb:pcmklatest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): FAILED Promoted database-0
+ * galera-bundle-1 (ocf:heartbeat:galera): Promoted database-1
+ * galera-bundle-2 (ocf:heartbeat:galera): Promoted database-2
+ * Container bundle set: rabbitmq-bundle [192.168.24.1:8787/rhosp13/openstack-rabbitmq:pcmklatest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started messaging-0
+ * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started messaging-1
+ * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started messaging-2
+ * Container bundle set: redis-bundle [192.168.24.1:8787/rhosp13/openstack-redis:pcmklatest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Unpromoted controller-0
+ * redis-bundle-1 (ocf:heartbeat:redis): Promoted controller-1
+ * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted controller-2
+ * ip-192.168.24.12 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-10.0.0.109 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.1.18 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.1.12 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.3.18 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.4.14 (ocf:heartbeat:IPaddr2): Started controller-1
+ * Container bundle set: haproxy-bundle [192.168.24.1:8787/rhosp13/openstack-haproxy:pcmklatest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started controller-0
+ * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Started controller-1
+ * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Started controller-2
+ * Container bundle: openstack-cinder-volume [192.168.24.1:8787/rhosp13/openstack-cinder-volume:pcmklatest]:
+ * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Started controller-0
+ * stonith-fence_ipmilan-5254005f9a33 (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-52540098c9ff (stonith:fence_ipmilan): Started controller-1
+ * stonith-fence_ipmilan-5254000203a2 (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-5254003296a5 (stonith:fence_ipmilan): Started controller-1
+ * stonith-fence_ipmilan-52540066e27e (stonith:fence_ipmilan): Started controller-1
+ * stonith-fence_ipmilan-52540065418e (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-525400aab9d9 (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-525400a16c0d (stonith:fence_ipmilan): Started controller-1
+ * stonith-fence_ipmilan-5254002f6d57 (stonith:fence_ipmilan): Started controller-1
+
+Transition Summary:
+ * Fence (reboot) galera-bundle-0 (resource: galera-bundle-docker-0) 'guest is unclean'
+ * Recover galera-bundle-docker-0 ( database-0 )
+ * Recover galera-bundle-0 ( controller-0 )
+ * Recover galera:0 ( Promoted galera-bundle-0 )
+
+Executing Cluster Transition:
+ * Resource action: galera-bundle-0 stop on controller-0
+ * Pseudo action: galera-bundle_demote_0
+ * Pseudo action: galera-bundle-master_demote_0
+ * Pseudo action: galera_demote_0
+ * Pseudo action: galera-bundle-master_demoted_0
+ * Pseudo action: galera-bundle_demoted_0
+ * Pseudo action: galera-bundle_stop_0
+ * Resource action: galera-bundle-docker-0 stop on database-0
+ * Pseudo action: stonith-galera-bundle-0-reboot on galera-bundle-0
+ * Pseudo action: galera-bundle-master_stop_0
+ * Pseudo action: galera_stop_0
+ * Pseudo action: galera-bundle-master_stopped_0
+ * Pseudo action: galera-bundle_stopped_0
+ * Pseudo action: galera-bundle_start_0
+ * Pseudo action: galera-bundle-master_start_0
+ * Resource action: galera-bundle-docker-0 start on database-0
+ * Resource action: galera-bundle-docker-0 monitor=60000 on database-0
+ * Resource action: galera-bundle-0 start on controller-0
+ * Resource action: galera-bundle-0 monitor=30000 on controller-0
+ * Resource action: galera start on galera-bundle-0
+ * Pseudo action: galera-bundle-master_running_0
+ * Pseudo action: galera-bundle_running_0
+ * Pseudo action: galera-bundle_promote_0
+ * Pseudo action: galera-bundle-master_promote_0
+ * Resource action: galera promote on galera-bundle-0
+ * Pseudo action: galera-bundle-master_promoted_0
+ * Pseudo action: galera-bundle_promoted_0
+ * Resource action: galera monitor=10000 on galera-bundle-0
+Using the original execution date of: 2018-09-11 21:23:25Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ controller-0 controller-1 controller-2 ]
+ * RemoteOnline: [ database-0 database-1 database-2 messaging-0 messaging-1 messaging-2 ]
+ * GuestOnline: [ galera-bundle-0 galera-bundle-1 galera-bundle-2 rabbitmq-bundle-0 rabbitmq-bundle-1 rabbitmq-bundle-2 redis-bundle-0 redis-bundle-1 redis-bundle-2 ]
+
+ * Full List of Resources:
+ * database-0 (ocf:pacemaker:remote): Started controller-0
+ * database-1 (ocf:pacemaker:remote): Started controller-1
+ * database-2 (ocf:pacemaker:remote): Started controller-2
+ * messaging-0 (ocf:pacemaker:remote): Started controller-2
+ * messaging-1 (ocf:pacemaker:remote): Started controller-1
+ * messaging-2 (ocf:pacemaker:remote): Started controller-1
+ * Container bundle set: galera-bundle [192.168.24.1:8787/rhosp13/openstack-mariadb:pcmklatest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Promoted database-0
+ * galera-bundle-1 (ocf:heartbeat:galera): Promoted database-1
+ * galera-bundle-2 (ocf:heartbeat:galera): Promoted database-2
+ * Container bundle set: rabbitmq-bundle [192.168.24.1:8787/rhosp13/openstack-rabbitmq:pcmklatest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started messaging-0
+ * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started messaging-1
+ * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started messaging-2
+ * Container bundle set: redis-bundle [192.168.24.1:8787/rhosp13/openstack-redis:pcmklatest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Unpromoted controller-0
+ * redis-bundle-1 (ocf:heartbeat:redis): Promoted controller-1
+ * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted controller-2
+ * ip-192.168.24.12 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-10.0.0.109 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.1.18 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.1.12 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.3.18 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.4.14 (ocf:heartbeat:IPaddr2): Started controller-1
+ * Container bundle set: haproxy-bundle [192.168.24.1:8787/rhosp13/openstack-haproxy:pcmklatest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started controller-0
+ * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Started controller-1
+ * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Started controller-2
+ * Container bundle: openstack-cinder-volume [192.168.24.1:8787/rhosp13/openstack-cinder-volume:pcmklatest]:
+ * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Started controller-0
+ * stonith-fence_ipmilan-5254005f9a33 (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-52540098c9ff (stonith:fence_ipmilan): Started controller-1
+ * stonith-fence_ipmilan-5254000203a2 (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-5254003296a5 (stonith:fence_ipmilan): Started controller-1
+ * stonith-fence_ipmilan-52540066e27e (stonith:fence_ipmilan): Started controller-1
+ * stonith-fence_ipmilan-52540065418e (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-525400aab9d9 (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-525400a16c0d (stonith:fence_ipmilan): Started controller-1
+ * stonith-fence_ipmilan-5254002f6d57 (stonith:fence_ipmilan): Started controller-1
diff --git a/cts/scheduler/summary/no-promote-on-unrunnable-guest.summary b/cts/scheduler/summary/no-promote-on-unrunnable-guest.summary
new file mode 100644
index 0000000..c06f8f0
--- /dev/null
+++ b/cts/scheduler/summary/no-promote-on-unrunnable-guest.summary
@@ -0,0 +1,103 @@
+Using the original execution date of: 2020-05-14 10:49:31Z
+Current cluster status:
+ * Node List:
+ * Online: [ controller-0 controller-1 controller-2 ]
+ * GuestOnline: [ galera-bundle-0 galera-bundle-1 galera-bundle-2 ovn-dbs-bundle-0 ovn-dbs-bundle-1 ovn-dbs-bundle-2 rabbitmq-bundle-0 rabbitmq-bundle-1 rabbitmq-bundle-2 redis-bundle-0 redis-bundle-1 redis-bundle-2 ]
+
+ * Full List of Resources:
+ * Container bundle set: galera-bundle [cluster.common.tag/rhosp16-openstack-mariadb:pcmklatest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Promoted controller-0
+ * galera-bundle-1 (ocf:heartbeat:galera): Promoted controller-1
+ * galera-bundle-2 (ocf:heartbeat:galera): Promoted controller-2
+ * Container bundle set: rabbitmq-bundle [cluster.common.tag/rhosp16-openstack-rabbitmq:pcmklatest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started controller-0
+ * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started controller-1
+ * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started controller-2
+ * Container bundle set: redis-bundle [cluster.common.tag/rhosp16-openstack-redis:pcmklatest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Promoted controller-0
+ * redis-bundle-1 (ocf:heartbeat:redis): Unpromoted controller-1
+ * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted controller-2
+ * Container bundle set: ovn-dbs-bundle [cluster.common.tag/rhosp16-openstack-ovn-northd:pcmklatest]:
+ * ovn-dbs-bundle-0 (ocf:ovn:ovndb-servers): Unpromoted controller-0
+ * ovn-dbs-bundle-1 (ocf:ovn:ovndb-servers): Unpromoted controller-1
+ * ovn-dbs-bundle-2 (ocf:ovn:ovndb-servers): Unpromoted controller-2
+ * stonith-fence_ipmilan-5254005e097a (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-525400afe30e (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-525400985679 (stonith:fence_ipmilan): Started controller-1
+ * Container bundle: openstack-cinder-volume [cluster.common.tag/rhosp16-openstack-cinder-volume:pcmklatest]:
+ * openstack-cinder-volume-podman-0 (ocf:heartbeat:podman): Started controller-0
+
+Transition Summary:
+ * Stop ovn-dbs-bundle-podman-0 ( controller-0 ) due to node availability
+ * Stop ovn-dbs-bundle-0 ( controller-0 ) due to unrunnable ovn-dbs-bundle-podman-0 start
+ * Stop ovndb_servers:0 ( Unpromoted ovn-dbs-bundle-0 ) due to unrunnable ovn-dbs-bundle-podman-0 start
+ * Promote ovndb_servers:1 ( Unpromoted -> Promoted ovn-dbs-bundle-1 )
+
+Executing Cluster Transition:
+ * Resource action: ovndb_servers cancel=30000 on ovn-dbs-bundle-1
+ * Pseudo action: ovn-dbs-bundle-master_pre_notify_stop_0
+ * Pseudo action: ovn-dbs-bundle_stop_0
+ * Resource action: ovndb_servers notify on ovn-dbs-bundle-0
+ * Resource action: ovndb_servers notify on ovn-dbs-bundle-1
+ * Resource action: ovndb_servers notify on ovn-dbs-bundle-2
+ * Pseudo action: ovn-dbs-bundle-master_confirmed-pre_notify_stop_0
+ * Pseudo action: ovn-dbs-bundle-master_stop_0
+ * Resource action: ovndb_servers stop on ovn-dbs-bundle-0
+ * Pseudo action: ovn-dbs-bundle-master_stopped_0
+ * Resource action: ovn-dbs-bundle-0 stop on controller-0
+ * Pseudo action: ovn-dbs-bundle-master_post_notify_stopped_0
+ * Resource action: ovn-dbs-bundle-podman-0 stop on controller-0
+ * Resource action: ovndb_servers notify on ovn-dbs-bundle-1
+ * Resource action: ovndb_servers notify on ovn-dbs-bundle-2
+ * Pseudo action: ovn-dbs-bundle-master_confirmed-post_notify_stopped_0
+ * Pseudo action: ovn-dbs-bundle-master_pre_notify_start_0
+ * Pseudo action: ovn-dbs-bundle_stopped_0
+ * Pseudo action: ovn-dbs-bundle-master_confirmed-pre_notify_start_0
+ * Pseudo action: ovn-dbs-bundle-master_start_0
+ * Pseudo action: ovn-dbs-bundle-master_running_0
+ * Pseudo action: ovn-dbs-bundle-master_post_notify_running_0
+ * Pseudo action: ovn-dbs-bundle-master_confirmed-post_notify_running_0
+ * Pseudo action: ovn-dbs-bundle_running_0
+ * Pseudo action: ovn-dbs-bundle-master_pre_notify_promote_0
+ * Pseudo action: ovn-dbs-bundle_promote_0
+ * Resource action: ovndb_servers notify on ovn-dbs-bundle-1
+ * Resource action: ovndb_servers notify on ovn-dbs-bundle-2
+ * Pseudo action: ovn-dbs-bundle-master_confirmed-pre_notify_promote_0
+ * Pseudo action: ovn-dbs-bundle-master_promote_0
+ * Resource action: ovndb_servers promote on ovn-dbs-bundle-1
+ * Pseudo action: ovn-dbs-bundle-master_promoted_0
+ * Pseudo action: ovn-dbs-bundle-master_post_notify_promoted_0
+ * Resource action: ovndb_servers notify on ovn-dbs-bundle-1
+ * Resource action: ovndb_servers notify on ovn-dbs-bundle-2
+ * Pseudo action: ovn-dbs-bundle-master_confirmed-post_notify_promoted_0
+ * Pseudo action: ovn-dbs-bundle_promoted_0
+ * Resource action: ovndb_servers monitor=10000 on ovn-dbs-bundle-1
+Using the original execution date of: 2020-05-14 10:49:31Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ controller-0 controller-1 controller-2 ]
+ * GuestOnline: [ galera-bundle-0 galera-bundle-1 galera-bundle-2 ovn-dbs-bundle-1 ovn-dbs-bundle-2 rabbitmq-bundle-0 rabbitmq-bundle-1 rabbitmq-bundle-2 redis-bundle-0 redis-bundle-1 redis-bundle-2 ]
+
+ * Full List of Resources:
+ * Container bundle set: galera-bundle [cluster.common.tag/rhosp16-openstack-mariadb:pcmklatest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Promoted controller-0
+ * galera-bundle-1 (ocf:heartbeat:galera): Promoted controller-1
+ * galera-bundle-2 (ocf:heartbeat:galera): Promoted controller-2
+ * Container bundle set: rabbitmq-bundle [cluster.common.tag/rhosp16-openstack-rabbitmq:pcmklatest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started controller-0
+ * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started controller-1
+ * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started controller-2
+ * Container bundle set: redis-bundle [cluster.common.tag/rhosp16-openstack-redis:pcmklatest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Promoted controller-0
+ * redis-bundle-1 (ocf:heartbeat:redis): Unpromoted controller-1
+ * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted controller-2
+ * Container bundle set: ovn-dbs-bundle [cluster.common.tag/rhosp16-openstack-ovn-northd:pcmklatest]:
+ * ovn-dbs-bundle-0 (ocf:ovn:ovndb-servers): Stopped
+ * ovn-dbs-bundle-1 (ocf:ovn:ovndb-servers): Promoted controller-1
+ * ovn-dbs-bundle-2 (ocf:ovn:ovndb-servers): Unpromoted controller-2
+ * stonith-fence_ipmilan-5254005e097a (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-525400afe30e (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-525400985679 (stonith:fence_ipmilan): Started controller-1
+ * Container bundle: openstack-cinder-volume [cluster.common.tag/rhosp16-openstack-cinder-volume:pcmklatest]:
+ * openstack-cinder-volume-podman-0 (ocf:heartbeat:podman): Started controller-0
diff --git a/cts/scheduler/summary/no_quorum_demote.summary b/cts/scheduler/summary/no_quorum_demote.summary
new file mode 100644
index 0000000..7de1658
--- /dev/null
+++ b/cts/scheduler/summary/no_quorum_demote.summary
@@ -0,0 +1,40 @@
+Using the original execution date of: 2020-06-17 17:26:35Z
+Current cluster status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-2 ]
+ * OFFLINE: [ rhel7-3 rhel7-4 rhel7-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-1
+ * Clone Set: rsc1-clone [rsc1] (promotable):
+ * Promoted: [ rhel7-1 ]
+ * Unpromoted: [ rhel7-2 ]
+ * Stopped: [ rhel7-3 rhel7-4 rhel7-5 ]
+ * rsc2 (ocf:pacemaker:Dummy): Started rhel7-2
+
+Transition Summary:
+ * Stop Fencing ( rhel7-1 ) due to no quorum
+ * Demote rsc1:0 ( Promoted -> Unpromoted rhel7-1 )
+ * Stop rsc2 ( rhel7-2 ) due to no quorum
+
+Executing Cluster Transition:
+ * Resource action: Fencing stop on rhel7-1
+ * Resource action: rsc1 cancel=10000 on rhel7-1
+ * Pseudo action: rsc1-clone_demote_0
+ * Resource action: rsc2 stop on rhel7-2
+ * Resource action: rsc1 demote on rhel7-1
+ * Pseudo action: rsc1-clone_demoted_0
+ * Resource action: rsc1 monitor=11000 on rhel7-1
+Using the original execution date of: 2020-06-17 17:26:35Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-2 ]
+ * OFFLINE: [ rhel7-3 rhel7-4 rhel7-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Stopped
+ * Clone Set: rsc1-clone [rsc1] (promotable):
+ * Unpromoted: [ rhel7-1 rhel7-2 ]
+ * Stopped: [ rhel7-3 rhel7-4 rhel7-5 ]
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/node-maintenance-1.summary b/cts/scheduler/summary/node-maintenance-1.summary
new file mode 100644
index 0000000..eb75567
--- /dev/null
+++ b/cts/scheduler/summary/node-maintenance-1.summary
@@ -0,0 +1,26 @@
+Current cluster status:
+ * Node List:
+ * Node node2: maintenance
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node2 (maintenance)
+
+Transition Summary:
+ * Stop rsc1 ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node1
+ * Resource action: rsc2 cancel=10000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Node node2: maintenance
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Started node2 (maintenance)
diff --git a/cts/scheduler/summary/node-maintenance-2.summary b/cts/scheduler/summary/node-maintenance-2.summary
new file mode 100644
index 0000000..b21e5db
--- /dev/null
+++ b/cts/scheduler/summary/node-maintenance-2.summary
@@ -0,0 +1,25 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 monitor=10000 on node2
+ * Resource action: rsc1 monitor=10000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/not-installed-agent.summary b/cts/scheduler/summary/not-installed-agent.summary
new file mode 100644
index 0000000..3e4fb6b
--- /dev/null
+++ b/cts/scheduler/summary/not-installed-agent.summary
@@ -0,0 +1,29 @@
+Current cluster status:
+ * Node List:
+ * Online: [ sles11-1 sles11-2 ]
+
+ * Full List of Resources:
+ * st_sbd (stonith:external/sbd): Started sles11-1
+ * rsc1 (ocf:pacemaker:Dummy): FAILED sles11-1
+ * rsc2 (ocf:pacemaker:Dummy): FAILED sles11-1
+
+Transition Summary:
+ * Recover rsc1 ( sles11-1 -> sles11-2 )
+ * Recover rsc2 ( sles11-1 -> sles11-2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on sles11-1
+ * Resource action: rsc2 stop on sles11-1
+ * Resource action: rsc1 start on sles11-2
+ * Resource action: rsc2 start on sles11-2
+ * Resource action: rsc1 monitor=10000 on sles11-2
+ * Resource action: rsc2 monitor=10000 on sles11-2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ sles11-1 sles11-2 ]
+
+ * Full List of Resources:
+ * st_sbd (stonith:external/sbd): Started sles11-1
+ * rsc1 (ocf:pacemaker:Dummy): Started sles11-2
+ * rsc2 (ocf:pacemaker:Dummy): Started sles11-2
diff --git a/cts/scheduler/summary/not-installed-tools.summary b/cts/scheduler/summary/not-installed-tools.summary
new file mode 100644
index 0000000..7481216
--- /dev/null
+++ b/cts/scheduler/summary/not-installed-tools.summary
@@ -0,0 +1,25 @@
+Current cluster status:
+ * Node List:
+ * Online: [ sles11-1 sles11-2 ]
+
+ * Full List of Resources:
+ * st_sbd (stonith:external/sbd): Started sles11-1
+ * rsc1 (ocf:pacemaker:Dummy): FAILED sles11-1
+ * rsc2 (ocf:pacemaker:Dummy): Started sles11-1 (failure ignored)
+
+Transition Summary:
+ * Recover rsc1 ( sles11-1 -> sles11-2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on sles11-1
+ * Resource action: rsc1 start on sles11-2
+ * Resource action: rsc1 monitor=10000 on sles11-2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ sles11-1 sles11-2 ]
+
+ * Full List of Resources:
+ * st_sbd (stonith:external/sbd): Started sles11-1
+ * rsc1 (ocf:pacemaker:Dummy): Started sles11-2
+ * rsc2 (ocf:pacemaker:Dummy): Started sles11-1 (failure ignored)
diff --git a/cts/scheduler/summary/not-reschedule-unneeded-monitor.summary b/cts/scheduler/summary/not-reschedule-unneeded-monitor.summary
new file mode 100644
index 0000000..4496535
--- /dev/null
+++ b/cts/scheduler/summary/not-reschedule-unneeded-monitor.summary
@@ -0,0 +1,40 @@
+1 of 11 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ castor kimball ]
+
+ * Full List of Resources:
+ * sbd (stonith:external/sbd): Started kimball
+ * Clone Set: base-clone [dlm]:
+ * Started: [ castor kimball ]
+ * Clone Set: c-vm-fs [vm1]:
+ * Started: [ castor kimball ]
+ * xen-f (ocf:heartbeat:VirtualDomain): Stopped (disabled)
+ * sle12-kvm (ocf:heartbeat:VirtualDomain): FAILED [ kimball castor ]
+ * Clone Set: cl_sgdisk [sgdisk]:
+ * Started: [ castor kimball ]
+
+Transition Summary:
+ * Recover sle12-kvm ( kimball )
+
+Executing Cluster Transition:
+ * Resource action: sle12-kvm stop on kimball
+ * Resource action: sle12-kvm stop on castor
+ * Resource action: sle12-kvm start on kimball
+ * Resource action: sle12-kvm monitor=10000 on kimball
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ castor kimball ]
+
+ * Full List of Resources:
+ * sbd (stonith:external/sbd): Started kimball
+ * Clone Set: base-clone [dlm]:
+ * Started: [ castor kimball ]
+ * Clone Set: c-vm-fs [vm1]:
+ * Started: [ castor kimball ]
+ * xen-f (ocf:heartbeat:VirtualDomain): Stopped (disabled)
+ * sle12-kvm (ocf:heartbeat:VirtualDomain): Started kimball
+ * Clone Set: cl_sgdisk [sgdisk]:
+ * Started: [ castor kimball ]
diff --git a/cts/scheduler/summary/notifs-for-unrunnable.summary b/cts/scheduler/summary/notifs-for-unrunnable.summary
new file mode 100644
index 0000000..a9503b4
--- /dev/null
+++ b/cts/scheduler/summary/notifs-for-unrunnable.summary
@@ -0,0 +1,99 @@
+Using the original execution date of: 2018-02-13 23:40:47Z
+Current cluster status:
+ * Node List:
+ * Online: [ controller-1 controller-2 ]
+ * OFFLINE: [ controller-0 ]
+ * GuestOnline: [ galera-bundle-1 galera-bundle-2 rabbitmq-bundle-1 rabbitmq-bundle-2 redis-bundle-1 redis-bundle-2 ]
+
+ * Full List of Resources:
+ * Container bundle set: rabbitmq-bundle [192.168.24.1:8787/rhosp12/openstack-rabbitmq:pcmklatest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Stopped
+ * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started controller-1
+ * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started controller-2
+ * Container bundle set: galera-bundle [192.168.24.1:8787/rhosp12/openstack-mariadb:pcmklatest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Stopped
+ * galera-bundle-1 (ocf:heartbeat:galera): Promoted controller-1
+ * galera-bundle-2 (ocf:heartbeat:galera): Promoted controller-2
+ * Container bundle set: redis-bundle [192.168.24.1:8787/rhosp12/openstack-redis:pcmklatest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Stopped
+ * redis-bundle-1 (ocf:heartbeat:redis): Unpromoted controller-1
+ * redis-bundle-2 (ocf:heartbeat:redis): Promoted controller-2
+ * ip-192.168.24.6 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-10.0.0.109 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.1.15 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.1.11 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.3.11 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.4.11 (ocf:heartbeat:IPaddr2): Started controller-2
+ * Container bundle set: haproxy-bundle [192.168.24.1:8787/rhosp12/openstack-haproxy:pcmklatest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Stopped
+ * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Started controller-1
+ * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Started controller-2
+ * openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-2
+ * stonith-fence_ipmilan-525400fec0c8 (stonith:fence_ipmilan): Started controller-1
+ * stonith-fence_ipmilan-5254002ff217 (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-5254008f971a (stonith:fence_ipmilan): Started controller-1
+
+Transition Summary:
+ * Start rabbitmq-bundle-0 ( controller-1 ) due to unrunnable rabbitmq-bundle-docker-0 start (blocked)
+ * Start rabbitmq:0 ( rabbitmq-bundle-0 ) due to unrunnable rabbitmq-bundle-docker-0 start (blocked)
+ * Start galera-bundle-0 ( controller-2 ) due to unrunnable galera-bundle-docker-0 start (blocked)
+ * Start galera:0 ( galera-bundle-0 ) due to unrunnable galera-bundle-docker-0 start (blocked)
+ * Start redis-bundle-0 ( controller-1 ) due to unrunnable redis-bundle-docker-0 start (blocked)
+ * Start redis:0 ( redis-bundle-0 ) due to unrunnable redis-bundle-docker-0 start (blocked)
+
+Executing Cluster Transition:
+ * Pseudo action: rabbitmq-bundle-clone_pre_notify_start_0
+ * Pseudo action: redis-bundle-master_pre_notify_start_0
+ * Pseudo action: redis-bundle_start_0
+ * Pseudo action: galera-bundle_start_0
+ * Pseudo action: rabbitmq-bundle_start_0
+ * Pseudo action: rabbitmq-bundle-clone_confirmed-pre_notify_start_0
+ * Pseudo action: rabbitmq-bundle-clone_start_0
+ * Pseudo action: galera-bundle-master_start_0
+ * Pseudo action: redis-bundle-master_confirmed-pre_notify_start_0
+ * Pseudo action: redis-bundle-master_start_0
+ * Pseudo action: rabbitmq-bundle-clone_running_0
+ * Pseudo action: galera-bundle-master_running_0
+ * Pseudo action: redis-bundle-master_running_0
+ * Pseudo action: galera-bundle_running_0
+ * Pseudo action: rabbitmq-bundle-clone_post_notify_running_0
+ * Pseudo action: redis-bundle-master_post_notify_running_0
+ * Pseudo action: rabbitmq-bundle-clone_confirmed-post_notify_running_0
+ * Pseudo action: redis-bundle-master_confirmed-post_notify_running_0
+ * Pseudo action: redis-bundle_running_0
+ * Pseudo action: rabbitmq-bundle_running_0
+Using the original execution date of: 2018-02-13 23:40:47Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ controller-1 controller-2 ]
+ * OFFLINE: [ controller-0 ]
+ * GuestOnline: [ galera-bundle-1 galera-bundle-2 rabbitmq-bundle-1 rabbitmq-bundle-2 redis-bundle-1 redis-bundle-2 ]
+
+ * Full List of Resources:
+ * Container bundle set: rabbitmq-bundle [192.168.24.1:8787/rhosp12/openstack-rabbitmq:pcmklatest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Stopped
+ * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started controller-1
+ * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started controller-2
+ * Container bundle set: galera-bundle [192.168.24.1:8787/rhosp12/openstack-mariadb:pcmklatest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Stopped
+ * galera-bundle-1 (ocf:heartbeat:galera): Promoted controller-1
+ * galera-bundle-2 (ocf:heartbeat:galera): Promoted controller-2
+ * Container bundle set: redis-bundle [192.168.24.1:8787/rhosp12/openstack-redis:pcmklatest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Stopped
+ * redis-bundle-1 (ocf:heartbeat:redis): Unpromoted controller-1
+ * redis-bundle-2 (ocf:heartbeat:redis): Promoted controller-2
+ * ip-192.168.24.6 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-10.0.0.109 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.1.15 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.1.11 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.3.11 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.4.11 (ocf:heartbeat:IPaddr2): Started controller-2
+ * Container bundle set: haproxy-bundle [192.168.24.1:8787/rhosp12/openstack-haproxy:pcmklatest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Stopped
+ * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Started controller-1
+ * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Started controller-2
+ * openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-2
+ * stonith-fence_ipmilan-525400fec0c8 (stonith:fence_ipmilan): Started controller-1
+ * stonith-fence_ipmilan-5254002ff217 (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-5254008f971a (stonith:fence_ipmilan): Started controller-1
diff --git a/cts/scheduler/summary/notify-0.summary b/cts/scheduler/summary/notify-0.summary
new file mode 100644
index 0000000..f39ea94
--- /dev/null
+++ b/cts/scheduler/summary/notify-0.summary
@@ -0,0 +1,39 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 ]
+ * OFFLINE: [ node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1] (unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:1 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc2 [child_rsc2] (unique):
+ * child_rsc2:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc2:1 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start child_rsc1:1 ( node1 )
+ * Stop child_rsc2:0 ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: child_rsc1:1 monitor on node1
+ * Pseudo action: rsc1_start_0
+ * Resource action: child_rsc2:1 monitor on node1
+ * Pseudo action: rsc2_stop_0
+ * Resource action: child_rsc1:1 start on node1
+ * Pseudo action: rsc1_running_0
+ * Resource action: child_rsc2:0 stop on node1
+ * Pseudo action: rsc2_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 ]
+ * OFFLINE: [ node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1] (unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:1 (ocf:heartbeat:apache): Started node1
+ * Clone Set: rsc2 [child_rsc2] (unique):
+ * child_rsc2:0 (ocf:heartbeat:apache): Stopped
+ * child_rsc2:1 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/notify-1.summary b/cts/scheduler/summary/notify-1.summary
new file mode 100644
index 0000000..8ce4b25
--- /dev/null
+++ b/cts/scheduler/summary/notify-1.summary
@@ -0,0 +1,51 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 ]
+ * OFFLINE: [ node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1] (unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:1 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc2 [child_rsc2] (unique):
+ * child_rsc2:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc2:1 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start child_rsc1:1 ( node1 )
+ * Stop child_rsc2:0 ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: child_rsc1:1 monitor on node1
+ * Pseudo action: rsc1_pre_notify_start_0
+ * Resource action: child_rsc2:1 monitor on node1
+ * Pseudo action: rsc2_pre_notify_stop_0
+ * Resource action: child_rsc1:0 notify on node1
+ * Pseudo action: rsc1_confirmed-pre_notify_start_0
+ * Pseudo action: rsc1_start_0
+ * Resource action: child_rsc2:0 notify on node1
+ * Pseudo action: rsc2_confirmed-pre_notify_stop_0
+ * Pseudo action: rsc2_stop_0
+ * Resource action: child_rsc1:1 start on node1
+ * Pseudo action: rsc1_running_0
+ * Resource action: child_rsc2:0 stop on node1
+ * Pseudo action: rsc2_stopped_0
+ * Pseudo action: rsc1_post_notify_running_0
+ * Pseudo action: rsc2_post_notify_stopped_0
+ * Resource action: child_rsc1:0 notify on node1
+ * Resource action: child_rsc1:1 notify on node1
+ * Pseudo action: rsc1_confirmed-post_notify_running_0
+ * Pseudo action: rsc2_confirmed-post_notify_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 ]
+ * OFFLINE: [ node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1] (unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:1 (ocf:heartbeat:apache): Started node1
+ * Clone Set: rsc2 [child_rsc2] (unique):
+ * child_rsc2:0 (ocf:heartbeat:apache): Stopped
+ * child_rsc2:1 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/notify-2.summary b/cts/scheduler/summary/notify-2.summary
new file mode 100644
index 0000000..8ce4b25
--- /dev/null
+++ b/cts/scheduler/summary/notify-2.summary
@@ -0,0 +1,51 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 ]
+ * OFFLINE: [ node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1] (unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:1 (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc2 [child_rsc2] (unique):
+ * child_rsc2:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc2:1 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start child_rsc1:1 ( node1 )
+ * Stop child_rsc2:0 ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: child_rsc1:1 monitor on node1
+ * Pseudo action: rsc1_pre_notify_start_0
+ * Resource action: child_rsc2:1 monitor on node1
+ * Pseudo action: rsc2_pre_notify_stop_0
+ * Resource action: child_rsc1:0 notify on node1
+ * Pseudo action: rsc1_confirmed-pre_notify_start_0
+ * Pseudo action: rsc1_start_0
+ * Resource action: child_rsc2:0 notify on node1
+ * Pseudo action: rsc2_confirmed-pre_notify_stop_0
+ * Pseudo action: rsc2_stop_0
+ * Resource action: child_rsc1:1 start on node1
+ * Pseudo action: rsc1_running_0
+ * Resource action: child_rsc2:0 stop on node1
+ * Pseudo action: rsc2_stopped_0
+ * Pseudo action: rsc1_post_notify_running_0
+ * Pseudo action: rsc2_post_notify_stopped_0
+ * Resource action: child_rsc1:0 notify on node1
+ * Resource action: child_rsc1:1 notify on node1
+ * Pseudo action: rsc1_confirmed-post_notify_running_0
+ * Pseudo action: rsc2_confirmed-post_notify_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 ]
+ * OFFLINE: [ node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1] (unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:1 (ocf:heartbeat:apache): Started node1
+ * Clone Set: rsc2 [child_rsc2] (unique):
+ * child_rsc2:0 (ocf:heartbeat:apache): Stopped
+ * child_rsc2:1 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/notify-3.summary b/cts/scheduler/summary/notify-3.summary
new file mode 100644
index 0000000..5658692
--- /dev/null
+++ b/cts/scheduler/summary/notify-3.summary
@@ -0,0 +1,62 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1] (unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:1 (ocf:heartbeat:apache): Started node2
+ * Clone Set: rsc2 [child_rsc2] (unique):
+ * child_rsc2:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc2:1 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Move child_rsc1:1 ( node2 -> node1 )
+ * Stop child_rsc2:0 ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: child_rsc1:0 monitor on node2
+ * Resource action: child_rsc1:1 monitor on node1
+ * Pseudo action: rsc1_pre_notify_stop_0
+ * Resource action: child_rsc2:0 monitor on node2
+ * Resource action: child_rsc2:1 monitor on node2
+ * Resource action: child_rsc2:1 monitor on node1
+ * Pseudo action: rsc2_pre_notify_stop_0
+ * Resource action: child_rsc1:0 notify on node1
+ * Resource action: child_rsc1:1 notify on node2
+ * Pseudo action: rsc1_confirmed-pre_notify_stop_0
+ * Pseudo action: rsc1_stop_0
+ * Resource action: child_rsc2:0 notify on node1
+ * Pseudo action: rsc2_confirmed-pre_notify_stop_0
+ * Pseudo action: rsc2_stop_0
+ * Resource action: child_rsc1:1 stop on node2
+ * Pseudo action: rsc1_stopped_0
+ * Resource action: child_rsc2:0 stop on node1
+ * Pseudo action: rsc2_stopped_0
+ * Pseudo action: rsc1_post_notify_stopped_0
+ * Pseudo action: rsc2_post_notify_stopped_0
+ * Resource action: child_rsc1:0 notify on node1
+ * Pseudo action: rsc1_confirmed-post_notify_stopped_0
+ * Pseudo action: rsc1_pre_notify_start_0
+ * Pseudo action: rsc2_confirmed-post_notify_stopped_0
+ * Resource action: child_rsc1:0 notify on node1
+ * Pseudo action: rsc1_confirmed-pre_notify_start_0
+ * Pseudo action: rsc1_start_0
+ * Resource action: child_rsc1:1 start on node1
+ * Pseudo action: rsc1_running_0
+ * Pseudo action: rsc1_post_notify_running_0
+ * Resource action: child_rsc1:0 notify on node1
+ * Resource action: child_rsc1:1 notify on node1
+ * Pseudo action: rsc1_confirmed-post_notify_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1] (unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Started node1
+ * child_rsc1:1 (ocf:heartbeat:apache): Started node1
+ * Clone Set: rsc2 [child_rsc2] (unique):
+ * child_rsc2:0 (ocf:heartbeat:apache): Stopped
+ * child_rsc2:1 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/notify-behind-stopping-remote.summary b/cts/scheduler/summary/notify-behind-stopping-remote.summary
new file mode 100644
index 0000000..257e445
--- /dev/null
+++ b/cts/scheduler/summary/notify-behind-stopping-remote.summary
@@ -0,0 +1,64 @@
+Using the original execution date of: 2018-11-22 20:36:07Z
+Current cluster status:
+ * Node List:
+ * Online: [ ra1 ra2 ra3 ]
+ * GuestOnline: [ redis-bundle-0 redis-bundle-1 redis-bundle-2 ]
+
+ * Full List of Resources:
+ * Container bundle set: redis-bundle [docker.io/tripleoqueens/centos-binary-redis:current-tripleo-rdo]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Unpromoted ra1
+ * redis-bundle-1 (ocf:heartbeat:redis): Stopped ra2
+ * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted ra3
+
+Transition Summary:
+ * Promote redis:0 ( Unpromoted -> Promoted redis-bundle-0 )
+ * Stop redis-bundle-docker-1 ( ra2 ) due to node availability
+ * Stop redis-bundle-1 ( ra2 ) due to unrunnable redis-bundle-docker-1 start
+ * Start redis:1 ( redis-bundle-1 ) due to unrunnable redis-bundle-docker-1 start (blocked)
+
+Executing Cluster Transition:
+ * Resource action: redis cancel=45000 on redis-bundle-0
+ * Resource action: redis cancel=60000 on redis-bundle-0
+ * Pseudo action: redis-bundle-master_pre_notify_start_0
+ * Resource action: redis-bundle-0 monitor=30000 on ra1
+ * Resource action: redis-bundle-0 cancel=60000 on ra1
+ * Resource action: redis-bundle-1 stop on ra2
+ * Resource action: redis-bundle-1 cancel=60000 on ra2
+ * Resource action: redis-bundle-2 monitor=30000 on ra3
+ * Resource action: redis-bundle-2 cancel=60000 on ra3
+ * Pseudo action: redis-bundle_stop_0
+ * Pseudo action: redis-bundle-master_confirmed-pre_notify_start_0
+ * Resource action: redis-bundle-docker-1 stop on ra2
+ * Pseudo action: redis-bundle_stopped_0
+ * Pseudo action: redis-bundle_start_0
+ * Pseudo action: redis-bundle-master_start_0
+ * Pseudo action: redis-bundle-master_running_0
+ * Pseudo action: redis-bundle-master_post_notify_running_0
+ * Pseudo action: redis-bundle-master_confirmed-post_notify_running_0
+ * Pseudo action: redis-bundle_running_0
+ * Pseudo action: redis-bundle-master_pre_notify_promote_0
+ * Pseudo action: redis-bundle_promote_0
+ * Resource action: redis notify on redis-bundle-0
+ * Resource action: redis notify on redis-bundle-2
+ * Pseudo action: redis-bundle-master_confirmed-pre_notify_promote_0
+ * Pseudo action: redis-bundle-master_promote_0
+ * Resource action: redis promote on redis-bundle-0
+ * Pseudo action: redis-bundle-master_promoted_0
+ * Pseudo action: redis-bundle-master_post_notify_promoted_0
+ * Resource action: redis notify on redis-bundle-0
+ * Resource action: redis notify on redis-bundle-2
+ * Pseudo action: redis-bundle-master_confirmed-post_notify_promoted_0
+ * Pseudo action: redis-bundle_promoted_0
+ * Resource action: redis monitor=20000 on redis-bundle-0
+Using the original execution date of: 2018-11-22 20:36:07Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ ra1 ra2 ra3 ]
+ * GuestOnline: [ redis-bundle-0 redis-bundle-2 ]
+
+ * Full List of Resources:
+ * Container bundle set: redis-bundle [docker.io/tripleoqueens/centos-binary-redis:current-tripleo-rdo]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Promoted ra1
+ * redis-bundle-1 (ocf:heartbeat:redis): Stopped
+ * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted ra3
diff --git a/cts/scheduler/summary/novell-239079.summary b/cts/scheduler/summary/novell-239079.summary
new file mode 100644
index 0000000..0afbba5
--- /dev/null
+++ b/cts/scheduler/summary/novell-239079.summary
@@ -0,0 +1,33 @@
+Current cluster status:
+ * Node List:
+ * Online: [ xen-1 xen-2 ]
+
+ * Full List of Resources:
+ * fs_1 (ocf:heartbeat:Filesystem): Stopped
+ * Clone Set: ms-drbd0 [drbd0] (promotable):
+ * Stopped: [ xen-1 xen-2 ]
+
+Transition Summary:
+ * Start drbd0:0 ( xen-1 )
+ * Start drbd0:1 ( xen-2 )
+
+Executing Cluster Transition:
+ * Pseudo action: ms-drbd0_pre_notify_start_0
+ * Pseudo action: ms-drbd0_confirmed-pre_notify_start_0
+ * Pseudo action: ms-drbd0_start_0
+ * Resource action: drbd0:0 start on xen-1
+ * Resource action: drbd0:1 start on xen-2
+ * Pseudo action: ms-drbd0_running_0
+ * Pseudo action: ms-drbd0_post_notify_running_0
+ * Resource action: drbd0:0 notify on xen-1
+ * Resource action: drbd0:1 notify on xen-2
+ * Pseudo action: ms-drbd0_confirmed-post_notify_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ xen-1 xen-2 ]
+
+ * Full List of Resources:
+ * fs_1 (ocf:heartbeat:Filesystem): Stopped
+ * Clone Set: ms-drbd0 [drbd0] (promotable):
+ * Unpromoted: [ xen-1 xen-2 ]
diff --git a/cts/scheduler/summary/novell-239082.summary b/cts/scheduler/summary/novell-239082.summary
new file mode 100644
index 0000000..051c022
--- /dev/null
+++ b/cts/scheduler/summary/novell-239082.summary
@@ -0,0 +1,59 @@
+Current cluster status:
+ * Node List:
+ * Online: [ xen-1 xen-2 ]
+
+ * Full List of Resources:
+ * fs_1 (ocf:heartbeat:Filesystem): Started xen-1
+ * Clone Set: ms-drbd0 [drbd0] (promotable):
+ * Promoted: [ xen-1 ]
+ * Unpromoted: [ xen-2 ]
+
+Transition Summary:
+ * Move fs_1 ( xen-1 -> xen-2 )
+ * Promote drbd0:0 ( Unpromoted -> Promoted xen-2 )
+ * Stop drbd0:1 ( Promoted xen-1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: fs_1 stop on xen-1
+ * Pseudo action: ms-drbd0_pre_notify_demote_0
+ * Resource action: drbd0:0 notify on xen-2
+ * Resource action: drbd0:1 notify on xen-1
+ * Pseudo action: ms-drbd0_confirmed-pre_notify_demote_0
+ * Pseudo action: ms-drbd0_demote_0
+ * Resource action: drbd0:1 demote on xen-1
+ * Pseudo action: ms-drbd0_demoted_0
+ * Pseudo action: ms-drbd0_post_notify_demoted_0
+ * Resource action: drbd0:0 notify on xen-2
+ * Resource action: drbd0:1 notify on xen-1
+ * Pseudo action: ms-drbd0_confirmed-post_notify_demoted_0
+ * Pseudo action: ms-drbd0_pre_notify_stop_0
+ * Resource action: drbd0:0 notify on xen-2
+ * Resource action: drbd0:1 notify on xen-1
+ * Pseudo action: ms-drbd0_confirmed-pre_notify_stop_0
+ * Pseudo action: ms-drbd0_stop_0
+ * Resource action: drbd0:1 stop on xen-1
+ * Pseudo action: ms-drbd0_stopped_0
+ * Cluster action: do_shutdown on xen-1
+ * Pseudo action: ms-drbd0_post_notify_stopped_0
+ * Resource action: drbd0:0 notify on xen-2
+ * Pseudo action: ms-drbd0_confirmed-post_notify_stopped_0
+ * Pseudo action: ms-drbd0_pre_notify_promote_0
+ * Resource action: drbd0:0 notify on xen-2
+ * Pseudo action: ms-drbd0_confirmed-pre_notify_promote_0
+ * Pseudo action: ms-drbd0_promote_0
+ * Resource action: drbd0:0 promote on xen-2
+ * Pseudo action: ms-drbd0_promoted_0
+ * Pseudo action: ms-drbd0_post_notify_promoted_0
+ * Resource action: drbd0:0 notify on xen-2
+ * Pseudo action: ms-drbd0_confirmed-post_notify_promoted_0
+ * Resource action: fs_1 start on xen-2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ xen-1 xen-2 ]
+
+ * Full List of Resources:
+ * fs_1 (ocf:heartbeat:Filesystem): Started xen-2
+ * Clone Set: ms-drbd0 [drbd0] (promotable):
+ * Promoted: [ xen-2 ]
+ * Stopped: [ xen-1 ]
diff --git a/cts/scheduler/summary/novell-239087.summary b/cts/scheduler/summary/novell-239087.summary
new file mode 100644
index 0000000..0c158d3
--- /dev/null
+++ b/cts/scheduler/summary/novell-239087.summary
@@ -0,0 +1,23 @@
+Current cluster status:
+ * Node List:
+ * Online: [ xen-1 xen-2 ]
+
+ * Full List of Resources:
+ * fs_1 (ocf:heartbeat:Filesystem): Started xen-1
+ * Clone Set: ms-drbd0 [drbd0] (promotable):
+ * Promoted: [ xen-1 ]
+ * Unpromoted: [ xen-2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ xen-1 xen-2 ]
+
+ * Full List of Resources:
+ * fs_1 (ocf:heartbeat:Filesystem): Started xen-1
+ * Clone Set: ms-drbd0 [drbd0] (promotable):
+ * Promoted: [ xen-1 ]
+ * Unpromoted: [ xen-2 ]
diff --git a/cts/scheduler/summary/novell-251689.summary b/cts/scheduler/summary/novell-251689.summary
new file mode 100644
index 0000000..51a4bea
--- /dev/null
+++ b/cts/scheduler/summary/novell-251689.summary
@@ -0,0 +1,49 @@
+1 of 11 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: stonithcloneset [stonithclone]:
+ * Started: [ node1 node2 ]
+ * Clone Set: evmsdcloneset [evmsdclone]:
+ * Started: [ node1 node2 ]
+ * Clone Set: evmscloneset [evmsclone]:
+ * Started: [ node1 node2 ]
+ * Clone Set: imagestorecloneset [imagestoreclone]:
+ * Started: [ node1 node2 ]
+ * Clone Set: configstorecloneset [configstoreclone]:
+ * Started: [ node1 node2 ]
+ * sles10 (ocf:heartbeat:Xen): Started node2 (disabled)
+
+Transition Summary:
+ * Stop sles10 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: stonithclone:0 monitor=5000 on node2
+ * Resource action: stonithclone:1 monitor=5000 on node1
+ * Resource action: evmsdclone:0 monitor=5000 on node2
+ * Resource action: evmsdclone:1 monitor=5000 on node1
+ * Resource action: imagestoreclone:0 monitor=20000 on node2
+ * Resource action: imagestoreclone:1 monitor=20000 on node1
+ * Resource action: configstoreclone:0 monitor=20000 on node2
+ * Resource action: configstoreclone:1 monitor=20000 on node1
+ * Resource action: sles10 stop on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: stonithcloneset [stonithclone]:
+ * Started: [ node1 node2 ]
+ * Clone Set: evmsdcloneset [evmsdclone]:
+ * Started: [ node1 node2 ]
+ * Clone Set: evmscloneset [evmsclone]:
+ * Started: [ node1 node2 ]
+ * Clone Set: imagestorecloneset [imagestoreclone]:
+ * Started: [ node1 node2 ]
+ * Clone Set: configstorecloneset [configstoreclone]:
+ * Started: [ node1 node2 ]
+ * sles10 (ocf:heartbeat:Xen): Stopped (disabled)
diff --git a/cts/scheduler/summary/novell-252693-2.summary b/cts/scheduler/summary/novell-252693-2.summary
new file mode 100644
index 0000000..45ee46d
--- /dev/null
+++ b/cts/scheduler/summary/novell-252693-2.summary
@@ -0,0 +1,103 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: stonithcloneset [stonithclone]:
+ * Started: [ node2 ]
+ * Stopped: [ node1 ]
+ * Clone Set: evmsdcloneset [evmsdclone]:
+ * Started: [ node2 ]
+ * Stopped: [ node1 ]
+ * Clone Set: evmscloneset [evmsclone]:
+ * Started: [ node2 ]
+ * Stopped: [ node1 ]
+ * Clone Set: imagestorecloneset [imagestoreclone]:
+ * Started: [ node2 ]
+ * Stopped: [ node1 ]
+ * Clone Set: configstorecloneset [configstoreclone]:
+ * Started: [ node2 ]
+ * Stopped: [ node1 ]
+ * sles10 (ocf:heartbeat:Xen): Started node2
+
+Transition Summary:
+ * Start stonithclone:1 ( node1 )
+ * Start evmsdclone:1 ( node1 )
+ * Start evmsclone:1 ( node1 )
+ * Start imagestoreclone:1 ( node1 )
+ * Start configstoreclone:1 ( node1 )
+ * Migrate sles10 ( node2 -> node1 )
+
+Executing Cluster Transition:
+ * Resource action: stonithclone:0 monitor=5000 on node2
+ * Resource action: stonithclone:1 monitor on node1
+ * Pseudo action: stonithcloneset_start_0
+ * Resource action: evmsdclone:0 monitor=5000 on node2
+ * Resource action: evmsdclone:1 monitor on node1
+ * Pseudo action: evmsdcloneset_start_0
+ * Resource action: evmsclone:1 monitor on node1
+ * Pseudo action: evmscloneset_pre_notify_start_0
+ * Resource action: imagestoreclone:1 monitor on node1
+ * Pseudo action: imagestorecloneset_pre_notify_start_0
+ * Resource action: configstoreclone:1 monitor on node1
+ * Pseudo action: configstorecloneset_pre_notify_start_0
+ * Resource action: sles10 monitor on node1
+ * Resource action: stonithclone:1 start on node1
+ * Pseudo action: stonithcloneset_running_0
+ * Resource action: evmsdclone:1 start on node1
+ * Pseudo action: evmsdcloneset_running_0
+ * Resource action: evmsclone:0 notify on node2
+ * Pseudo action: evmscloneset_confirmed-pre_notify_start_0
+ * Pseudo action: evmscloneset_start_0
+ * Resource action: imagestoreclone:0 notify on node2
+ * Pseudo action: imagestorecloneset_confirmed-pre_notify_start_0
+ * Resource action: configstoreclone:0 notify on node2
+ * Pseudo action: configstorecloneset_confirmed-pre_notify_start_0
+ * Resource action: stonithclone:1 monitor=5000 on node1
+ * Resource action: evmsdclone:1 monitor=5000 on node1
+ * Resource action: evmsclone:1 start on node1
+ * Pseudo action: evmscloneset_running_0
+ * Pseudo action: evmscloneset_post_notify_running_0
+ * Resource action: evmsclone:0 notify on node2
+ * Resource action: evmsclone:1 notify on node1
+ * Pseudo action: evmscloneset_confirmed-post_notify_running_0
+ * Pseudo action: imagestorecloneset_start_0
+ * Pseudo action: configstorecloneset_start_0
+ * Resource action: imagestoreclone:1 start on node1
+ * Pseudo action: imagestorecloneset_running_0
+ * Resource action: configstoreclone:1 start on node1
+ * Pseudo action: configstorecloneset_running_0
+ * Pseudo action: imagestorecloneset_post_notify_running_0
+ * Pseudo action: configstorecloneset_post_notify_running_0
+ * Resource action: imagestoreclone:0 notify on node2
+ * Resource action: imagestoreclone:1 notify on node1
+ * Pseudo action: imagestorecloneset_confirmed-post_notify_running_0
+ * Resource action: configstoreclone:0 notify on node2
+ * Resource action: configstoreclone:1 notify on node1
+ * Pseudo action: configstorecloneset_confirmed-post_notify_running_0
+ * Resource action: sles10 migrate_to on node2
+ * Resource action: imagestoreclone:0 monitor=20000 on node2
+ * Resource action: imagestoreclone:1 monitor=20000 on node1
+ * Resource action: configstoreclone:0 monitor=20000 on node2
+ * Resource action: configstoreclone:1 monitor=20000 on node1
+ * Resource action: sles10 migrate_from on node1
+ * Resource action: sles10 stop on node2
+ * Pseudo action: sles10_start_0
+ * Resource action: sles10 monitor=10000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: stonithcloneset [stonithclone]:
+ * Started: [ node1 node2 ]
+ * Clone Set: evmsdcloneset [evmsdclone]:
+ * Started: [ node1 node2 ]
+ * Clone Set: evmscloneset [evmsclone]:
+ * Started: [ node1 node2 ]
+ * Clone Set: imagestorecloneset [imagestoreclone]:
+ * Started: [ node1 node2 ]
+ * Clone Set: configstorecloneset [configstoreclone]:
+ * Started: [ node1 node2 ]
+ * sles10 (ocf:heartbeat:Xen): Started node1
diff --git a/cts/scheduler/summary/novell-252693-3.summary b/cts/scheduler/summary/novell-252693-3.summary
new file mode 100644
index 0000000..246969d
--- /dev/null
+++ b/cts/scheduler/summary/novell-252693-3.summary
@@ -0,0 +1,112 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: stonithcloneset [stonithclone]:
+ * Started: [ node2 ]
+ * Stopped: [ node1 ]
+ * Clone Set: evmsdcloneset [evmsdclone]:
+ * Started: [ node2 ]
+ * Stopped: [ node1 ]
+ * Clone Set: evmscloneset [evmsclone]:
+ * Started: [ node2 ]
+ * Stopped: [ node1 ]
+ * Clone Set: imagestorecloneset [imagestoreclone]:
+ * imagestoreclone (ocf:heartbeat:Filesystem): FAILED node2
+ * Stopped: [ node1 ]
+ * Clone Set: configstorecloneset [configstoreclone]:
+ * Started: [ node2 ]
+ * Stopped: [ node1 ]
+ * sles10 (ocf:heartbeat:Xen): Started node2
+
+Transition Summary:
+ * Start stonithclone:1 ( node1 )
+ * Start evmsdclone:1 ( node1 )
+ * Start evmsclone:1 ( node1 )
+ * Recover imagestoreclone:0 ( node2 -> node1 )
+ * Start imagestoreclone:1 ( node2 )
+ * Start configstoreclone:1 ( node1 )
+ * Migrate sles10 ( node2 -> node1 )
+
+Executing Cluster Transition:
+ * Resource action: stonithclone:0 monitor=5000 on node2
+ * Resource action: stonithclone:1 monitor on node1
+ * Pseudo action: stonithcloneset_start_0
+ * Resource action: evmsdclone:0 monitor=5000 on node2
+ * Resource action: evmsdclone:1 monitor on node1
+ * Pseudo action: evmsdcloneset_start_0
+ * Resource action: evmsclone:1 monitor on node1
+ * Pseudo action: evmscloneset_pre_notify_start_0
+ * Resource action: imagestoreclone:0 monitor on node1
+ * Pseudo action: imagestorecloneset_pre_notify_stop_0
+ * Resource action: configstoreclone:1 monitor on node1
+ * Pseudo action: configstorecloneset_pre_notify_start_0
+ * Resource action: sles10 monitor on node1
+ * Resource action: stonithclone:1 start on node1
+ * Pseudo action: stonithcloneset_running_0
+ * Resource action: evmsdclone:1 start on node1
+ * Pseudo action: evmsdcloneset_running_0
+ * Resource action: evmsclone:0 notify on node2
+ * Pseudo action: evmscloneset_confirmed-pre_notify_start_0
+ * Pseudo action: evmscloneset_start_0
+ * Resource action: imagestoreclone:0 notify on node2
+ * Pseudo action: imagestorecloneset_confirmed-pre_notify_stop_0
+ * Pseudo action: imagestorecloneset_stop_0
+ * Resource action: configstoreclone:0 notify on node2
+ * Pseudo action: configstorecloneset_confirmed-pre_notify_start_0
+ * Resource action: stonithclone:1 monitor=5000 on node1
+ * Resource action: evmsdclone:1 monitor=5000 on node1
+ * Resource action: evmsclone:1 start on node1
+ * Pseudo action: evmscloneset_running_0
+ * Resource action: imagestoreclone:0 stop on node2
+ * Pseudo action: imagestorecloneset_stopped_0
+ * Pseudo action: evmscloneset_post_notify_running_0
+ * Pseudo action: imagestorecloneset_post_notify_stopped_0
+ * Resource action: evmsclone:0 notify on node2
+ * Resource action: evmsclone:1 notify on node1
+ * Pseudo action: evmscloneset_confirmed-post_notify_running_0
+ * Pseudo action: imagestorecloneset_confirmed-post_notify_stopped_0
+ * Pseudo action: imagestorecloneset_pre_notify_start_0
+ * Pseudo action: configstorecloneset_start_0
+ * Pseudo action: imagestorecloneset_confirmed-pre_notify_start_0
+ * Pseudo action: imagestorecloneset_start_0
+ * Resource action: configstoreclone:1 start on node1
+ * Pseudo action: configstorecloneset_running_0
+ * Resource action: imagestoreclone:0 start on node1
+ * Resource action: imagestoreclone:1 start on node2
+ * Pseudo action: imagestorecloneset_running_0
+ * Pseudo action: configstorecloneset_post_notify_running_0
+ * Pseudo action: imagestorecloneset_post_notify_running_0
+ * Resource action: configstoreclone:0 notify on node2
+ * Resource action: configstoreclone:1 notify on node1
+ * Pseudo action: configstorecloneset_confirmed-post_notify_running_0
+ * Resource action: imagestoreclone:0 notify on node1
+ * Resource action: imagestoreclone:1 notify on node2
+ * Pseudo action: imagestorecloneset_confirmed-post_notify_running_0
+ * Resource action: configstoreclone:0 monitor=20000 on node2
+ * Resource action: configstoreclone:1 monitor=20000 on node1
+ * Resource action: sles10 migrate_to on node2
+ * Resource action: imagestoreclone:0 monitor=20000 on node1
+ * Resource action: imagestoreclone:1 monitor=20000 on node2
+ * Resource action: sles10 migrate_from on node1
+ * Resource action: sles10 stop on node2
+ * Pseudo action: sles10_start_0
+ * Resource action: sles10 monitor=10000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: stonithcloneset [stonithclone]:
+ * Started: [ node1 node2 ]
+ * Clone Set: evmsdcloneset [evmsdclone]:
+ * Started: [ node1 node2 ]
+ * Clone Set: evmscloneset [evmsclone]:
+ * Started: [ node1 node2 ]
+ * Clone Set: imagestorecloneset [imagestoreclone]:
+ * Started: [ node1 node2 ]
+ * Clone Set: configstorecloneset [configstoreclone]:
+ * Started: [ node1 node2 ]
+ * sles10 (ocf:heartbeat:Xen): Started node1
diff --git a/cts/scheduler/summary/novell-252693.summary b/cts/scheduler/summary/novell-252693.summary
new file mode 100644
index 0000000..82fce77
--- /dev/null
+++ b/cts/scheduler/summary/novell-252693.summary
@@ -0,0 +1,94 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: stonithcloneset [stonithclone]:
+ * Started: [ node1 node2 ]
+ * Clone Set: evmsdcloneset [evmsdclone]:
+ * Started: [ node1 node2 ]
+ * Clone Set: evmscloneset [evmsclone]:
+ * Started: [ node1 node2 ]
+ * Clone Set: imagestorecloneset [imagestoreclone]:
+ * Started: [ node1 node2 ]
+ * Clone Set: configstorecloneset [configstoreclone]:
+ * Started: [ node1 node2 ]
+ * sles10 (ocf:heartbeat:Xen): Started node1
+
+Transition Summary:
+ * Stop stonithclone:1 ( node1 ) due to node availability
+ * Stop evmsdclone:1 ( node1 ) due to node availability
+ * Stop evmsclone:1 ( node1 ) due to node availability
+ * Stop imagestoreclone:1 ( node1 ) due to node availability
+ * Stop configstoreclone:1 ( node1 ) due to node availability
+ * Migrate sles10 ( node1 -> node2 )
+
+Executing Cluster Transition:
+ * Resource action: stonithclone:0 monitor=5000 on node2
+ * Pseudo action: stonithcloneset_stop_0
+ * Resource action: evmsdclone:0 monitor=5000 on node2
+ * Pseudo action: evmscloneset_pre_notify_stop_0
+ * Pseudo action: imagestorecloneset_pre_notify_stop_0
+ * Pseudo action: configstorecloneset_pre_notify_stop_0
+ * Resource action: sles10 migrate_to on node1
+ * Resource action: stonithclone:1 stop on node1
+ * Pseudo action: stonithcloneset_stopped_0
+ * Resource action: evmsclone:0 notify on node2
+ * Resource action: evmsclone:1 notify on node1
+ * Pseudo action: evmscloneset_confirmed-pre_notify_stop_0
+ * Resource action: imagestoreclone:0 notify on node2
+ * Resource action: imagestoreclone:0 notify on node1
+ * Pseudo action: imagestorecloneset_confirmed-pre_notify_stop_0
+ * Pseudo action: imagestorecloneset_stop_0
+ * Resource action: configstoreclone:0 notify on node2
+ * Resource action: configstoreclone:0 notify on node1
+ * Pseudo action: configstorecloneset_confirmed-pre_notify_stop_0
+ * Pseudo action: configstorecloneset_stop_0
+ * Resource action: sles10 migrate_from on node2
+ * Resource action: sles10 stop on node1
+ * Resource action: imagestoreclone:0 stop on node1
+ * Pseudo action: imagestorecloneset_stopped_0
+ * Resource action: configstoreclone:0 stop on node1
+ * Pseudo action: configstorecloneset_stopped_0
+ * Pseudo action: sles10_start_0
+ * Pseudo action: imagestorecloneset_post_notify_stopped_0
+ * Pseudo action: configstorecloneset_post_notify_stopped_0
+ * Resource action: sles10 monitor=10000 on node2
+ * Resource action: imagestoreclone:0 notify on node2
+ * Pseudo action: imagestorecloneset_confirmed-post_notify_stopped_0
+ * Resource action: configstoreclone:0 notify on node2
+ * Pseudo action: configstorecloneset_confirmed-post_notify_stopped_0
+ * Pseudo action: evmscloneset_stop_0
+ * Resource action: imagestoreclone:0 monitor=20000 on node2
+ * Resource action: configstoreclone:0 monitor=20000 on node2
+ * Resource action: evmsclone:1 stop on node1
+ * Pseudo action: evmscloneset_stopped_0
+ * Pseudo action: evmscloneset_post_notify_stopped_0
+ * Resource action: evmsclone:0 notify on node2
+ * Pseudo action: evmscloneset_confirmed-post_notify_stopped_0
+ * Pseudo action: evmsdcloneset_stop_0
+ * Resource action: evmsdclone:1 stop on node1
+ * Pseudo action: evmsdcloneset_stopped_0
+ * Cluster action: do_shutdown on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: stonithcloneset [stonithclone]:
+ * Started: [ node2 ]
+ * Stopped: [ node1 ]
+ * Clone Set: evmsdcloneset [evmsdclone]:
+ * Started: [ node2 ]
+ * Stopped: [ node1 ]
+ * Clone Set: evmscloneset [evmsclone]:
+ * Started: [ node2 ]
+ * Stopped: [ node1 ]
+ * Clone Set: imagestorecloneset [imagestoreclone]:
+ * Started: [ node2 ]
+ * Stopped: [ node1 ]
+ * Clone Set: configstorecloneset [configstoreclone]:
+ * Started: [ node2 ]
+ * Stopped: [ node1 ]
+ * sles10 (ocf:heartbeat:Xen): Started node2
diff --git a/cts/scheduler/summary/nvpair-date-rules-1.summary b/cts/scheduler/summary/nvpair-date-rules-1.summary
new file mode 100644
index 0000000..145ff4a
--- /dev/null
+++ b/cts/scheduler/summary/nvpair-date-rules-1.summary
@@ -0,0 +1,38 @@
+Using the original execution date of: 2019-09-23 17:00:00Z
+Current cluster status:
+ * Node List:
+ * Node rhel7-3: standby
+ * Online: [ rhel7-1 rhel7-2 rhel7-4 rhel7-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-1
+ * FencingPass (stonith:fence_dummy): Started rhel7-2
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( rhel7-5 )
+ * Start rsc2 ( rhel7-2 )
+ * Start rsc3 ( rhel7-4 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 start on rhel7-5
+ * Resource action: rsc2 start on rhel7-2
+ * Resource action: rsc3 start on rhel7-4
+ * Resource action: rsc1 monitor=10000 on rhel7-5
+ * Resource action: rsc2 monitor=10000 on rhel7-2
+ * Resource action: rsc3 monitor=10000 on rhel7-4
+Using the original execution date of: 2019-09-23 17:00:00Z
+
+Revised Cluster Status:
+ * Node List:
+ * Node rhel7-3: standby
+ * Online: [ rhel7-1 rhel7-2 rhel7-4 rhel7-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-1
+ * FencingPass (stonith:fence_dummy): Started rhel7-2
+ * rsc1 (ocf:pacemaker:Dummy): Started rhel7-5
+ * rsc2 (ocf:pacemaker:Dummy): Started rhel7-2
+ * rsc3 (ocf:pacemaker:Dummy): Started rhel7-4
diff --git a/cts/scheduler/summary/nvpair-id-ref.summary b/cts/scheduler/summary/nvpair-id-ref.summary
new file mode 100644
index 0000000..4f05861
--- /dev/null
+++ b/cts/scheduler/summary/nvpair-id-ref.summary
@@ -0,0 +1,31 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+ * Start rsc2 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 start on node1
+ * Resource action: rsc1 monitor=10000 on node2
+ * Resource action: rsc2 monitor=10000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
diff --git a/cts/scheduler/summary/obsolete-lrm-resource.summary b/cts/scheduler/summary/obsolete-lrm-resource.summary
new file mode 100644
index 0000000..3d8889e
--- /dev/null
+++ b/cts/scheduler/summary/obsolete-lrm-resource.summary
@@ -0,0 +1,25 @@
+Current cluster status:
+ * Node List:
+ * Online: [ yingying.site ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [rsc1_child]:
+ * Stopped: [ yingying.site ]
+
+Transition Summary:
+ * Start rsc1_child:0 ( yingying.site )
+
+Executing Cluster Transition:
+ * Resource action: rsc1_child:0 monitor on yingying.site
+ * Pseudo action: rsc1_start_0
+ * Resource action: rsc1 delete on yingying.site
+ * Resource action: rsc1_child:0 start on yingying.site
+ * Pseudo action: rsc1_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ yingying.site ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [rsc1_child]:
+ * Started: [ yingying.site ]
diff --git a/cts/scheduler/summary/ocf_degraded-remap-ocf_ok.summary b/cts/scheduler/summary/ocf_degraded-remap-ocf_ok.summary
new file mode 100644
index 0000000..7cfb040
--- /dev/null
+++ b/cts/scheduler/summary/ocf_degraded-remap-ocf_ok.summary
@@ -0,0 +1,21 @@
+Using the original execution date of: 2020-09-30 10:24:21Z
+Current cluster status:
+ * Node List:
+ * Online: [ rhel8-1 rhel8-2 ]
+
+ * Full List of Resources:
+ * xvmfence (stonith:fence_xvm): Started rhel8-1
+ * dummy (ocf:pacemaker:Dummy): Started rhel8-1
+
+Transition Summary:
+
+Executing Cluster Transition:
+Using the original execution date of: 2020-09-30 10:24:21Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel8-1 rhel8-2 ]
+
+ * Full List of Resources:
+ * xvmfence (stonith:fence_xvm): Started rhel8-1
+ * dummy (ocf:pacemaker:Dummy): Started rhel8-1
diff --git a/cts/scheduler/summary/ocf_degraded_promoted-remap-ocf_ok.summary b/cts/scheduler/summary/ocf_degraded_promoted-remap-ocf_ok.summary
new file mode 100644
index 0000000..f297042
--- /dev/null
+++ b/cts/scheduler/summary/ocf_degraded_promoted-remap-ocf_ok.summary
@@ -0,0 +1,25 @@
+Using the original execution date of: 2020-09-30 14:23:26Z
+Current cluster status:
+ * Node List:
+ * Online: [ rhel8-1 rhel8-2 ]
+
+ * Full List of Resources:
+ * xvmfence (stonith:fence_xvm): Started rhel8-1
+ * Clone Set: state-clone [state] (promotable):
+ * Promoted: [ rhel8-1 ]
+ * Unpromoted: [ rhel8-2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+Using the original execution date of: 2020-09-30 14:23:26Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel8-1 rhel8-2 ]
+
+ * Full List of Resources:
+ * xvmfence (stonith:fence_xvm): Started rhel8-1
+ * Clone Set: state-clone [state] (promotable):
+ * Promoted: [ rhel8-1 ]
+ * Unpromoted: [ rhel8-2 ]
diff --git a/cts/scheduler/summary/on-fail-ignore.summary b/cts/scheduler/summary/on-fail-ignore.summary
new file mode 100644
index 0000000..2558f60
--- /dev/null
+++ b/cts/scheduler/summary/on-fail-ignore.summary
@@ -0,0 +1,23 @@
+Using the original execution date of: 2017-10-26 14:23:50Z
+Current cluster status:
+ * Node List:
+ * Online: [ 407888-db1 407892-db2 ]
+
+ * Full List of Resources:
+ * fence_db1 (stonith:fence_ipmilan): Started 407892-db2
+ * fence_db2 (stonith:fence_ipmilan): Started 407888-db1
+ * nfs_snet_ip (ocf:heartbeat:IPaddr2): Started 407888-db1 (failure ignored)
+
+Transition Summary:
+
+Executing Cluster Transition:
+Using the original execution date of: 2017-10-26 14:23:50Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 407888-db1 407892-db2 ]
+
+ * Full List of Resources:
+ * fence_db1 (stonith:fence_ipmilan): Started 407892-db2
+ * fence_db2 (stonith:fence_ipmilan): Started 407888-db1
+ * nfs_snet_ip (ocf:heartbeat:IPaddr2): Started 407888-db1 (failure ignored)
diff --git a/cts/scheduler/summary/on_fail_demote1.summary b/cts/scheduler/summary/on_fail_demote1.summary
new file mode 100644
index 0000000..a386da0
--- /dev/null
+++ b/cts/scheduler/summary/on_fail_demote1.summary
@@ -0,0 +1,88 @@
+Using the original execution date of: 2020-06-16 19:23:21Z
+Current cluster status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-3 rhel7-4 rhel7-5 ]
+ * RemoteOnline: [ remote-rhel7-2 ]
+ * GuestOnline: [ lxc1 lxc2 stateful-bundle-0 stateful-bundle-1 stateful-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-4
+ * Clone Set: rsc1-clone [rsc1] (promotable):
+ * rsc1 (ocf:pacemaker:Stateful): FAILED Promoted rhel7-4
+ * Unpromoted: [ lxc1 lxc2 remote-rhel7-2 rhel7-1 rhel7-3 rhel7-5 ]
+ * Clone Set: rsc2-master [rsc2] (promotable):
+ * rsc2 (ocf:pacemaker:Stateful): FAILED Promoted remote-rhel7-2
+ * Unpromoted: [ lxc1 lxc2 rhel7-1 rhel7-3 rhel7-4 rhel7-5 ]
+ * remote-rhel7-2 (ocf:pacemaker:remote): Started rhel7-1
+ * container1 (ocf:heartbeat:VirtualDomain): Started rhel7-3
+ * container2 (ocf:heartbeat:VirtualDomain): Started rhel7-3
+ * Clone Set: lxc-ms-master [lxc-ms] (promotable):
+ * lxc-ms (ocf:pacemaker:Stateful): FAILED Promoted lxc2
+ * Unpromoted: [ lxc1 ]
+ * Stopped: [ remote-rhel7-2 rhel7-1 rhel7-3 rhel7-4 rhel7-5 ]
+ * Container bundle set: stateful-bundle [pcmktest:http]:
+ * stateful-bundle-0 (192.168.122.131) (ocf:pacemaker:Stateful): FAILED Promoted rhel7-5
+ * stateful-bundle-1 (192.168.122.132) (ocf:pacemaker:Stateful): Unpromoted rhel7-1
+ * stateful-bundle-2 (192.168.122.133) (ocf:pacemaker:Stateful): Unpromoted rhel7-4
+
+Transition Summary:
+ * Re-promote rsc1:0 ( Promoted rhel7-4 )
+ * Re-promote rsc2:4 ( Promoted remote-rhel7-2 )
+ * Re-promote lxc-ms:0 ( Promoted lxc2 )
+ * Re-promote bundled:0 ( Promoted stateful-bundle-0 )
+
+Executing Cluster Transition:
+ * Pseudo action: rsc1-clone_demote_0
+ * Pseudo action: rsc2-master_demote_0
+ * Pseudo action: lxc-ms-master_demote_0
+ * Pseudo action: stateful-bundle_demote_0
+ * Resource action: rsc1 demote on rhel7-4
+ * Pseudo action: rsc1-clone_demoted_0
+ * Pseudo action: rsc1-clone_promote_0
+ * Resource action: rsc2 demote on remote-rhel7-2
+ * Pseudo action: rsc2-master_demoted_0
+ * Pseudo action: rsc2-master_promote_0
+ * Resource action: lxc-ms demote on lxc2
+ * Pseudo action: lxc-ms-master_demoted_0
+ * Pseudo action: lxc-ms-master_promote_0
+ * Pseudo action: stateful-bundle-master_demote_0
+ * Resource action: rsc1 promote on rhel7-4
+ * Pseudo action: rsc1-clone_promoted_0
+ * Resource action: rsc2 promote on remote-rhel7-2
+ * Pseudo action: rsc2-master_promoted_0
+ * Resource action: lxc-ms promote on lxc2
+ * Pseudo action: lxc-ms-master_promoted_0
+ * Resource action: bundled demote on stateful-bundle-0
+ * Pseudo action: stateful-bundle-master_demoted_0
+ * Pseudo action: stateful-bundle_demoted_0
+ * Pseudo action: stateful-bundle_promote_0
+ * Pseudo action: stateful-bundle-master_promote_0
+ * Resource action: bundled promote on stateful-bundle-0
+ * Pseudo action: stateful-bundle-master_promoted_0
+ * Pseudo action: stateful-bundle_promoted_0
+Using the original execution date of: 2020-06-16 19:23:21Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-3 rhel7-4 rhel7-5 ]
+ * RemoteOnline: [ remote-rhel7-2 ]
+ * GuestOnline: [ lxc1 lxc2 stateful-bundle-0 stateful-bundle-1 stateful-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-4
+ * Clone Set: rsc1-clone [rsc1] (promotable):
+ * Promoted: [ rhel7-4 ]
+ * Unpromoted: [ lxc1 lxc2 remote-rhel7-2 rhel7-1 rhel7-3 rhel7-5 ]
+ * Clone Set: rsc2-master [rsc2] (promotable):
+ * Promoted: [ remote-rhel7-2 ]
+ * Unpromoted: [ lxc1 lxc2 rhel7-1 rhel7-3 rhel7-4 rhel7-5 ]
+ * remote-rhel7-2 (ocf:pacemaker:remote): Started rhel7-1
+ * container1 (ocf:heartbeat:VirtualDomain): Started rhel7-3
+ * container2 (ocf:heartbeat:VirtualDomain): Started rhel7-3
+ * Clone Set: lxc-ms-master [lxc-ms] (promotable):
+ * Promoted: [ lxc2 ]
+ * Unpromoted: [ lxc1 ]
+ * Container bundle set: stateful-bundle [pcmktest:http]:
+ * stateful-bundle-0 (192.168.122.131) (ocf:pacemaker:Stateful): Promoted rhel7-5
+ * stateful-bundle-1 (192.168.122.132) (ocf:pacemaker:Stateful): Unpromoted rhel7-1
+ * stateful-bundle-2 (192.168.122.133) (ocf:pacemaker:Stateful): Unpromoted rhel7-4
diff --git a/cts/scheduler/summary/on_fail_demote2.summary b/cts/scheduler/summary/on_fail_demote2.summary
new file mode 100644
index 0000000..0ec0ea3
--- /dev/null
+++ b/cts/scheduler/summary/on_fail_demote2.summary
@@ -0,0 +1,43 @@
+Using the original execution date of: 2020-06-16 19:23:21Z
+Current cluster status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-1
+ * Clone Set: rsc1-clone [rsc1] (promotable):
+ * rsc1 (ocf:pacemaker:Stateful): FAILED Promoted rhel7-4
+ * Unpromoted: [ rhel7-1 rhel7-2 rhel7-3 rhel7-5 ]
+ * Clone Set: rsc2-master [rsc2] (promotable):
+ * Promoted: [ rhel7-4 ]
+ * Unpromoted: [ rhel7-1 rhel7-2 rhel7-3 rhel7-5 ]
+
+Transition Summary:
+ * Demote rsc1:0 ( Promoted -> Unpromoted rhel7-4 )
+ * Promote rsc1:1 ( Unpromoted -> Promoted rhel7-3 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 cancel=10000 on rhel7-4
+ * Resource action: rsc1 cancel=11000 on rhel7-3
+ * Pseudo action: rsc1-clone_demote_0
+ * Resource action: rsc1 demote on rhel7-4
+ * Pseudo action: rsc1-clone_demoted_0
+ * Pseudo action: rsc1-clone_promote_0
+ * Resource action: rsc1 monitor=11000 on rhel7-4
+ * Resource action: rsc1 promote on rhel7-3
+ * Pseudo action: rsc1-clone_promoted_0
+ * Resource action: rsc1 monitor=10000 on rhel7-3
+Using the original execution date of: 2020-06-16 19:23:21Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-1
+ * Clone Set: rsc1-clone [rsc1] (promotable):
+ * Promoted: [ rhel7-3 ]
+ * Unpromoted: [ rhel7-1 rhel7-2 rhel7-4 rhel7-5 ]
+ * Clone Set: rsc2-master [rsc2] (promotable):
+ * Promoted: [ rhel7-4 ]
+ * Unpromoted: [ rhel7-1 rhel7-2 rhel7-3 rhel7-5 ]
diff --git a/cts/scheduler/summary/on_fail_demote3.summary b/cts/scheduler/summary/on_fail_demote3.summary
new file mode 100644
index 0000000..793804a
--- /dev/null
+++ b/cts/scheduler/summary/on_fail_demote3.summary
@@ -0,0 +1,36 @@
+Using the original execution date of: 2020-06-16 19:23:21Z
+Current cluster status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-1
+ * Clone Set: rsc1-clone [rsc1] (promotable):
+ * rsc1 (ocf:pacemaker:Stateful): FAILED Promoted rhel7-4
+ * Unpromoted: [ rhel7-1 rhel7-2 rhel7-3 rhel7-5 ]
+ * Clone Set: rsc2-master [rsc2] (promotable):
+ * Promoted: [ rhel7-4 ]
+ * Unpromoted: [ rhel7-1 rhel7-2 rhel7-3 rhel7-5 ]
+
+Transition Summary:
+ * Demote rsc1:0 ( Promoted -> Unpromoted rhel7-4 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 cancel=10000 on rhel7-4
+ * Pseudo action: rsc1-clone_demote_0
+ * Resource action: rsc1 demote on rhel7-4
+ * Pseudo action: rsc1-clone_demoted_0
+ * Resource action: rsc1 monitor=11000 on rhel7-4
+Using the original execution date of: 2020-06-16 19:23:21Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-1
+ * Clone Set: rsc1-clone [rsc1] (promotable):
+ * Unpromoted: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+ * Clone Set: rsc2-master [rsc2] (promotable):
+ * Promoted: [ rhel7-4 ]
+ * Unpromoted: [ rhel7-1 rhel7-2 rhel7-3 rhel7-5 ]
diff --git a/cts/scheduler/summary/on_fail_demote4.summary b/cts/scheduler/summary/on_fail_demote4.summary
new file mode 100644
index 0000000..3082651
--- /dev/null
+++ b/cts/scheduler/summary/on_fail_demote4.summary
@@ -0,0 +1,189 @@
+Using the original execution date of: 2020-06-16 19:23:21Z
+Current cluster status:
+ * Node List:
+ * RemoteNode remote-rhel7-2: UNCLEAN (offline)
+ * Node rhel7-4: UNCLEAN (offline)
+ * Online: [ rhel7-1 rhel7-3 rhel7-5 ]
+ * GuestOnline: [ lxc1 stateful-bundle-1 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-4 (UNCLEAN)
+ * Clone Set: rsc1-clone [rsc1] (promotable):
+ * rsc1 (ocf:pacemaker:Stateful): Promoted rhel7-4 (UNCLEAN)
+ * rsc1 (ocf:pacemaker:Stateful): Unpromoted remote-rhel7-2 (UNCLEAN)
+ * Unpromoted: [ lxc1 rhel7-1 rhel7-3 rhel7-5 ]
+ * Clone Set: rsc2-master [rsc2] (promotable):
+ * rsc2 (ocf:pacemaker:Stateful): Unpromoted rhel7-4 (UNCLEAN)
+ * rsc2 (ocf:pacemaker:Stateful): Promoted remote-rhel7-2 (UNCLEAN)
+ * Unpromoted: [ lxc1 rhel7-1 rhel7-3 rhel7-5 ]
+ * remote-rhel7-2 (ocf:pacemaker:remote): FAILED rhel7-1
+ * container1 (ocf:heartbeat:VirtualDomain): Started rhel7-3
+ * container2 (ocf:heartbeat:VirtualDomain): FAILED rhel7-3
+ * Clone Set: lxc-ms-master [lxc-ms] (promotable):
+ * Unpromoted: [ lxc1 ]
+ * Stopped: [ remote-rhel7-2 rhel7-1 rhel7-3 rhel7-4 rhel7-5 ]
+ * Container bundle set: stateful-bundle [pcmktest:http]:
+ * stateful-bundle-0 (192.168.122.131) (ocf:pacemaker:Stateful): FAILED Promoted rhel7-5
+ * stateful-bundle-1 (192.168.122.132) (ocf:pacemaker:Stateful): Unpromoted rhel7-1
+ * stateful-bundle-2 (192.168.122.133) (ocf:pacemaker:Stateful): FAILED rhel7-4 (UNCLEAN)
+
+Transition Summary:
+ * Fence (reboot) stateful-bundle-2 (resource: stateful-bundle-docker-2) 'guest is unclean'
+ * Fence (reboot) stateful-bundle-0 (resource: stateful-bundle-docker-0) 'guest is unclean'
+ * Fence (reboot) lxc2 (resource: container2) 'guest is unclean'
+ * Fence (reboot) remote-rhel7-2 'remote connection is unrecoverable'
+ * Fence (reboot) rhel7-4 'peer is no longer part of the cluster'
+ * Move Fencing ( rhel7-4 -> rhel7-5 )
+ * Stop rsc1:0 ( Promoted rhel7-4 ) due to node availability
+ * Promote rsc1:1 ( Unpromoted -> Promoted rhel7-3 )
+ * Stop rsc1:4 ( Unpromoted remote-rhel7-2 ) due to node availability
+ * Recover rsc1:5 ( Unpromoted lxc2 )
+ * Stop rsc2:0 ( Unpromoted rhel7-4 ) due to node availability
+ * Promote rsc2:1 ( Unpromoted -> Promoted rhel7-3 )
+ * Stop rsc2:4 ( Promoted remote-rhel7-2 ) due to node availability
+ * Recover rsc2:5 ( Unpromoted lxc2 )
+ * Recover remote-rhel7-2 ( rhel7-1 )
+ * Recover container2 ( rhel7-3 )
+ * Recover lxc-ms:0 ( Promoted lxc2 )
+ * Recover stateful-bundle-docker-0 ( rhel7-5 )
+ * Restart stateful-bundle-0 ( rhel7-5 ) due to required stateful-bundle-docker-0 start
+ * Recover bundled:0 ( Promoted stateful-bundle-0 )
+ * Move stateful-bundle-ip-192.168.122.133 ( rhel7-4 -> rhel7-3 )
+ * Recover stateful-bundle-docker-2 ( rhel7-4 -> rhel7-3 )
+ * Move stateful-bundle-2 ( rhel7-4 -> rhel7-3 )
+ * Recover bundled:2 ( Unpromoted stateful-bundle-2 )
+ * Restart lxc2 ( rhel7-3 ) due to required container2 start
+
+Executing Cluster Transition:
+ * Pseudo action: Fencing_stop_0
+ * Resource action: rsc1 cancel=11000 on rhel7-3
+ * Pseudo action: rsc1-clone_demote_0
+ * Resource action: rsc2 cancel=11000 on rhel7-3
+ * Pseudo action: rsc2-master_demote_0
+ * Pseudo action: lxc-ms-master_demote_0
+ * Resource action: stateful-bundle-0 stop on rhel7-5
+ * Pseudo action: stateful-bundle-2_stop_0
+ * Resource action: lxc2 stop on rhel7-3
+ * Pseudo action: stateful-bundle_demote_0
+ * Fencing remote-rhel7-2 (reboot)
+ * Fencing rhel7-4 (reboot)
+ * Pseudo action: rsc1_demote_0
+ * Pseudo action: rsc1-clone_demoted_0
+ * Pseudo action: rsc2_demote_0
+ * Pseudo action: rsc2-master_demoted_0
+ * Resource action: container2 stop on rhel7-3
+ * Pseudo action: stateful-bundle-master_demote_0
+ * Pseudo action: stonith-stateful-bundle-2-reboot on stateful-bundle-2
+ * Pseudo action: stonith-lxc2-reboot on lxc2
+ * Resource action: Fencing start on rhel7-5
+ * Pseudo action: rsc1-clone_stop_0
+ * Pseudo action: rsc2-master_stop_0
+ * Pseudo action: lxc-ms_demote_0
+ * Pseudo action: lxc-ms-master_demoted_0
+ * Pseudo action: lxc-ms-master_stop_0
+ * Pseudo action: bundled_demote_0
+ * Pseudo action: stateful-bundle-master_demoted_0
+ * Pseudo action: stateful-bundle_demoted_0
+ * Pseudo action: stateful-bundle_stop_0
+ * Resource action: Fencing monitor=120000 on rhel7-5
+ * Pseudo action: rsc1_stop_0
+ * Pseudo action: rsc1_stop_0
+ * Pseudo action: rsc1_stop_0
+ * Pseudo action: rsc1-clone_stopped_0
+ * Pseudo action: rsc1-clone_start_0
+ * Pseudo action: rsc2_stop_0
+ * Pseudo action: rsc2_stop_0
+ * Pseudo action: rsc2_stop_0
+ * Pseudo action: rsc2-master_stopped_0
+ * Pseudo action: rsc2-master_start_0
+ * Resource action: remote-rhel7-2 stop on rhel7-1
+ * Pseudo action: lxc-ms_stop_0
+ * Pseudo action: lxc-ms-master_stopped_0
+ * Pseudo action: lxc-ms-master_start_0
+ * Resource action: stateful-bundle-docker-0 stop on rhel7-5
+ * Pseudo action: stateful-bundle-docker-2_stop_0
+ * Pseudo action: stonith-stateful-bundle-0-reboot on stateful-bundle-0
+ * Resource action: remote-rhel7-2 start on rhel7-1
+ * Resource action: remote-rhel7-2 monitor=60000 on rhel7-1
+ * Resource action: container2 start on rhel7-3
+ * Resource action: container2 monitor=20000 on rhel7-3
+ * Pseudo action: stateful-bundle-master_stop_0
+ * Pseudo action: stateful-bundle-ip-192.168.122.133_stop_0
+ * Resource action: lxc2 start on rhel7-3
+ * Resource action: lxc2 monitor=30000 on rhel7-3
+ * Resource action: rsc1 start on lxc2
+ * Pseudo action: rsc1-clone_running_0
+ * Resource action: rsc2 start on lxc2
+ * Pseudo action: rsc2-master_running_0
+ * Resource action: lxc-ms start on lxc2
+ * Pseudo action: lxc-ms-master_running_0
+ * Pseudo action: bundled_stop_0
+ * Resource action: stateful-bundle-ip-192.168.122.133 start on rhel7-3
+ * Resource action: rsc1 monitor=11000 on lxc2
+ * Pseudo action: rsc1-clone_promote_0
+ * Resource action: rsc2 monitor=11000 on lxc2
+ * Pseudo action: rsc2-master_promote_0
+ * Pseudo action: lxc-ms-master_promote_0
+ * Pseudo action: bundled_stop_0
+ * Pseudo action: stateful-bundle-master_stopped_0
+ * Resource action: stateful-bundle-ip-192.168.122.133 monitor=60000 on rhel7-3
+ * Pseudo action: stateful-bundle_stopped_0
+ * Pseudo action: stateful-bundle_start_0
+ * Resource action: rsc1 promote on rhel7-3
+ * Pseudo action: rsc1-clone_promoted_0
+ * Resource action: rsc2 promote on rhel7-3
+ * Pseudo action: rsc2-master_promoted_0
+ * Resource action: lxc-ms promote on lxc2
+ * Pseudo action: lxc-ms-master_promoted_0
+ * Pseudo action: stateful-bundle-master_start_0
+ * Resource action: stateful-bundle-docker-0 start on rhel7-5
+ * Resource action: stateful-bundle-docker-0 monitor=60000 on rhel7-5
+ * Resource action: stateful-bundle-0 start on rhel7-5
+ * Resource action: stateful-bundle-0 monitor=30000 on rhel7-5
+ * Resource action: stateful-bundle-docker-2 start on rhel7-3
+ * Resource action: stateful-bundle-2 start on rhel7-3
+ * Resource action: rsc1 monitor=10000 on rhel7-3
+ * Resource action: rsc2 monitor=10000 on rhel7-3
+ * Resource action: lxc-ms monitor=10000 on lxc2
+ * Resource action: bundled start on stateful-bundle-0
+ * Resource action: bundled start on stateful-bundle-2
+ * Pseudo action: stateful-bundle-master_running_0
+ * Resource action: stateful-bundle-docker-2 monitor=60000 on rhel7-3
+ * Resource action: stateful-bundle-2 monitor=30000 on rhel7-3
+ * Pseudo action: stateful-bundle_running_0
+ * Resource action: bundled monitor=11000 on stateful-bundle-2
+ * Pseudo action: stateful-bundle_promote_0
+ * Pseudo action: stateful-bundle-master_promote_0
+ * Resource action: bundled promote on stateful-bundle-0
+ * Pseudo action: stateful-bundle-master_promoted_0
+ * Pseudo action: stateful-bundle_promoted_0
+ * Resource action: bundled monitor=10000 on stateful-bundle-0
+Using the original execution date of: 2020-06-16 19:23:21Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-3 rhel7-5 ]
+ * OFFLINE: [ rhel7-4 ]
+ * RemoteOnline: [ remote-rhel7-2 ]
+ * GuestOnline: [ lxc1 lxc2 stateful-bundle-0 stateful-bundle-1 stateful-bundle-2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-5
+ * Clone Set: rsc1-clone [rsc1] (promotable):
+ * Promoted: [ rhel7-3 ]
+ * Unpromoted: [ lxc1 lxc2 rhel7-1 rhel7-5 ]
+ * Stopped: [ remote-rhel7-2 rhel7-4 ]
+ * Clone Set: rsc2-master [rsc2] (promotable):
+ * Promoted: [ rhel7-3 ]
+ * Unpromoted: [ lxc1 lxc2 rhel7-1 rhel7-5 ]
+ * Stopped: [ remote-rhel7-2 rhel7-4 ]
+ * remote-rhel7-2 (ocf:pacemaker:remote): Started rhel7-1
+ * container1 (ocf:heartbeat:VirtualDomain): Started rhel7-3
+ * container2 (ocf:heartbeat:VirtualDomain): Started rhel7-3
+ * Clone Set: lxc-ms-master [lxc-ms] (promotable):
+ * Promoted: [ lxc2 ]
+ * Unpromoted: [ lxc1 ]
+ * Container bundle set: stateful-bundle [pcmktest:http]:
+ * stateful-bundle-0 (192.168.122.131) (ocf:pacemaker:Stateful): Promoted rhel7-5
+ * stateful-bundle-1 (192.168.122.132) (ocf:pacemaker:Stateful): Unpromoted rhel7-1
+ * stateful-bundle-2 (192.168.122.133) (ocf:pacemaker:Stateful): Unpromoted rhel7-3
diff --git a/cts/scheduler/summary/one-or-more-0.summary b/cts/scheduler/summary/one-or-more-0.summary
new file mode 100644
index 0000000..100e42c
--- /dev/null
+++ b/cts/scheduler/summary/one-or-more-0.summary
@@ -0,0 +1,38 @@
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Stopped
+ * B (ocf:pacemaker:Dummy): Stopped
+ * C (ocf:pacemaker:Dummy): Stopped
+ * D (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start A ( fc16-builder )
+ * Start B ( fc16-builder )
+ * Start C ( fc16-builder )
+ * Start D ( fc16-builder )
+
+Executing Cluster Transition:
+ * Resource action: A monitor on fc16-builder
+ * Resource action: B monitor on fc16-builder
+ * Resource action: C monitor on fc16-builder
+ * Resource action: D monitor on fc16-builder
+ * Resource action: A start on fc16-builder
+ * Resource action: B start on fc16-builder
+ * Resource action: C start on fc16-builder
+ * Pseudo action: one-or-more:require-all-set-1
+ * Resource action: D start on fc16-builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Started fc16-builder
+ * B (ocf:pacemaker:Dummy): Started fc16-builder
+ * C (ocf:pacemaker:Dummy): Started fc16-builder
+ * D (ocf:pacemaker:Dummy): Started fc16-builder
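
The "one-or-more:require-all-set-1" pseudo action above comes from Pacemaker's ordered resource sets with require-all="false": at least one member of the first set starting is enough to let the following set proceed, rather than all of them. A minimal sketch of the construct follows; it illustrates the mechanism rather than quoting this test's actual CIB, and the constraint and set IDs are hypothetical:

    <rsc_order id="order-one-or-more">
      <!-- Hypothetical IDs; illustrates the construct, not necessarily
           this test's exact constraint layout. -->
      <!-- require-all="false": any one member starting satisfies the set -->
      <resource_set id="set-first" require-all="false">
        <resource_ref id="B"/>
        <resource_ref id="C"/>
      </resource_set>
      <!-- D may start once at least one member of the first set is active -->
      <resource_set id="set-then">
        <resource_ref id="D"/>
      </resource_set>
    </rsc_order>

The one-or-more-1 and one-or-more-3 summaries below show the blocked side of the same logic: when every startable member of the first set is disabled or unrunnable, the pseudo action itself becomes unrunnable and D stays stopped.
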
diff --git a/cts/scheduler/summary/one-or-more-1.summary b/cts/scheduler/summary/one-or-more-1.summary
new file mode 100644
index 0000000..cdbdc69
--- /dev/null
+++ b/cts/scheduler/summary/one-or-more-1.summary
@@ -0,0 +1,34 @@
+1 of 4 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Stopped (disabled)
+ * B (ocf:pacemaker:Dummy): Stopped
+ * C (ocf:pacemaker:Dummy): Stopped
+ * D (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start B ( fc16-builder ) due to unrunnable A start (blocked)
+ * Start C ( fc16-builder ) due to unrunnable A start (blocked)
+ * Start D ( fc16-builder ) due to unrunnable one-or-more:require-all-set-1 (blocked)
+
+Executing Cluster Transition:
+ * Resource action: A monitor on fc16-builder
+ * Resource action: B monitor on fc16-builder
+ * Resource action: C monitor on fc16-builder
+ * Resource action: D monitor on fc16-builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Stopped (disabled)
+ * B (ocf:pacemaker:Dummy): Stopped
+ * C (ocf:pacemaker:Dummy): Stopped
+ * D (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/one-or-more-2.summary b/cts/scheduler/summary/one-or-more-2.summary
new file mode 100644
index 0000000..eb40a7f
--- /dev/null
+++ b/cts/scheduler/summary/one-or-more-2.summary
@@ -0,0 +1,38 @@
+1 of 4 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Stopped
+ * B (ocf:pacemaker:Dummy): Stopped (disabled)
+ * C (ocf:pacemaker:Dummy): Stopped
+ * D (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start A ( fc16-builder )
+ * Start C ( fc16-builder )
+ * Start D ( fc16-builder )
+
+Executing Cluster Transition:
+ * Resource action: A monitor on fc16-builder
+ * Resource action: B monitor on fc16-builder
+ * Resource action: C monitor on fc16-builder
+ * Resource action: D monitor on fc16-builder
+ * Resource action: A start on fc16-builder
+ * Resource action: C start on fc16-builder
+ * Pseudo action: one-or-more:require-all-set-1
+ * Resource action: D start on fc16-builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Started fc16-builder
+ * B (ocf:pacemaker:Dummy): Stopped (disabled)
+ * C (ocf:pacemaker:Dummy): Started fc16-builder
+ * D (ocf:pacemaker:Dummy): Started fc16-builder
diff --git a/cts/scheduler/summary/one-or-more-3.summary b/cts/scheduler/summary/one-or-more-3.summary
new file mode 100644
index 0000000..9235870
--- /dev/null
+++ b/cts/scheduler/summary/one-or-more-3.summary
@@ -0,0 +1,34 @@
+2 of 4 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Stopped
+ * B (ocf:pacemaker:Dummy): Stopped (disabled)
+ * C (ocf:pacemaker:Dummy): Stopped (disabled)
+ * D (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start A ( fc16-builder )
+ * Start D ( fc16-builder ) due to unrunnable one-or-more:require-all-set-1 (blocked)
+
+Executing Cluster Transition:
+ * Resource action: A monitor on fc16-builder
+ * Resource action: B monitor on fc16-builder
+ * Resource action: C monitor on fc16-builder
+ * Resource action: D monitor on fc16-builder
+ * Resource action: A start on fc16-builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Started fc16-builder
+ * B (ocf:pacemaker:Dummy): Stopped (disabled)
+ * C (ocf:pacemaker:Dummy): Stopped (disabled)
+ * D (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/one-or-more-4.summary b/cts/scheduler/summary/one-or-more-4.summary
new file mode 100644
index 0000000..828f6a5
--- /dev/null
+++ b/cts/scheduler/summary/one-or-more-4.summary
@@ -0,0 +1,38 @@
+1 of 4 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Stopped
+ * B (ocf:pacemaker:Dummy): Stopped
+ * C (ocf:pacemaker:Dummy): Stopped
+ * D (ocf:pacemaker:Dummy): Stopped (disabled)
+
+Transition Summary:
+ * Start A ( fc16-builder )
+ * Start B ( fc16-builder )
+ * Start C ( fc16-builder )
+
+Executing Cluster Transition:
+ * Resource action: A monitor on fc16-builder
+ * Resource action: B monitor on fc16-builder
+ * Resource action: C monitor on fc16-builder
+ * Resource action: D monitor on fc16-builder
+ * Resource action: A start on fc16-builder
+ * Resource action: B start on fc16-builder
+ * Resource action: C start on fc16-builder
+ * Pseudo action: one-or-more:require-all-set-1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Started fc16-builder
+ * B (ocf:pacemaker:Dummy): Started fc16-builder
+ * C (ocf:pacemaker:Dummy): Started fc16-builder
+ * D (ocf:pacemaker:Dummy): Stopped (disabled)
diff --git a/cts/scheduler/summary/one-or-more-5.summary b/cts/scheduler/summary/one-or-more-5.summary
new file mode 100644
index 0000000..607566b
--- /dev/null
+++ b/cts/scheduler/summary/one-or-more-5.summary
@@ -0,0 +1,47 @@
+2 of 6 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Stopped
+ * B (ocf:pacemaker:Dummy): Stopped
+ * C (ocf:pacemaker:Dummy): Stopped (disabled)
+ * D (ocf:pacemaker:Dummy): Stopped (disabled)
+ * E (ocf:pacemaker:Dummy): Stopped
+ * F (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start A ( fc16-builder )
+ * Start B ( fc16-builder )
+ * Start E ( fc16-builder )
+ * Start F ( fc16-builder )
+
+Executing Cluster Transition:
+ * Resource action: A monitor on fc16-builder
+ * Resource action: B monitor on fc16-builder
+ * Resource action: C monitor on fc16-builder
+ * Resource action: D monitor on fc16-builder
+ * Resource action: E monitor on fc16-builder
+ * Resource action: F monitor on fc16-builder
+ * Resource action: B start on fc16-builder
+ * Pseudo action: one-or-more:require-all-set-1
+ * Resource action: A start on fc16-builder
+ * Resource action: E start on fc16-builder
+ * Pseudo action: one-or-more:require-all-set-3
+ * Resource action: F start on fc16-builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Started fc16-builder
+ * B (ocf:pacemaker:Dummy): Started fc16-builder
+ * C (ocf:pacemaker:Dummy): Stopped (disabled)
+ * D (ocf:pacemaker:Dummy): Stopped (disabled)
+ * E (ocf:pacemaker:Dummy): Started fc16-builder
+ * F (ocf:pacemaker:Dummy): Started fc16-builder
diff --git a/cts/scheduler/summary/one-or-more-6.summary b/cts/scheduler/summary/one-or-more-6.summary
new file mode 100644
index 0000000..79dc891
--- /dev/null
+++ b/cts/scheduler/summary/one-or-more-6.summary
@@ -0,0 +1,27 @@
+1 of 3 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Started fc16-builder
+ * B (ocf:pacemaker:Dummy): Started fc16-builder (disabled)
+ * C (ocf:pacemaker:Dummy): Started fc16-builder
+
+Transition Summary:
+ * Stop B ( fc16-builder ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: B stop on fc16-builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Started fc16-builder
+ * B (ocf:pacemaker:Dummy): Stopped (disabled)
+ * C (ocf:pacemaker:Dummy): Started fc16-builder
diff --git a/cts/scheduler/summary/one-or-more-7.summary b/cts/scheduler/summary/one-or-more-7.summary
new file mode 100644
index 0000000..a25c618
--- /dev/null
+++ b/cts/scheduler/summary/one-or-more-7.summary
@@ -0,0 +1,27 @@
+1 of 3 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Started fc16-builder
+ * B (ocf:pacemaker:Dummy): Started fc16-builder
+ * C (ocf:pacemaker:Dummy): Started fc16-builder (disabled)
+
+Transition Summary:
+ * Stop C ( fc16-builder ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: C stop on fc16-builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Started fc16-builder
+ * B (ocf:pacemaker:Dummy): Started fc16-builder
+ * C (ocf:pacemaker:Dummy): Stopped (disabled)
diff --git a/cts/scheduler/summary/one-or-more-unrunnable-instances.summary b/cts/scheduler/summary/one-or-more-unrunnable-instances.summary
new file mode 100644
index 0000000..58c572d
--- /dev/null
+++ b/cts/scheduler/summary/one-or-more-unrunnable-instances.summary
@@ -0,0 +1,736 @@
+Current cluster status:
+ * Node List:
+ * Online: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * RemoteOnline: [ mrg-07 mrg-08 mrg-09 ]
+
+ * Full List of Resources:
+ * fence1 (stonith:fence_xvm): Started rdo7-node2
+ * fence2 (stonith:fence_xvm): Started rdo7-node1
+ * fence3 (stonith:fence_xvm): Started rdo7-node3
+ * Clone Set: lb-haproxy-clone [lb-haproxy]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * vip-db (ocf:heartbeat:IPaddr2): Started rdo7-node3
+ * vip-rabbitmq (ocf:heartbeat:IPaddr2): Started rdo7-node1
+ * vip-keystone (ocf:heartbeat:IPaddr2): Started rdo7-node2
+ * vip-glance (ocf:heartbeat:IPaddr2): Started rdo7-node3
+ * vip-cinder (ocf:heartbeat:IPaddr2): Started rdo7-node1
+ * vip-swift (ocf:heartbeat:IPaddr2): Started rdo7-node2
+ * vip-neutron (ocf:heartbeat:IPaddr2): Started rdo7-node2
+ * vip-nova (ocf:heartbeat:IPaddr2): Started rdo7-node1
+ * vip-horizon (ocf:heartbeat:IPaddr2): Started rdo7-node3
+ * vip-heat (ocf:heartbeat:IPaddr2): Started rdo7-node1
+ * vip-ceilometer (ocf:heartbeat:IPaddr2): Started rdo7-node2
+ * vip-qpid (ocf:heartbeat:IPaddr2): Started rdo7-node3
+ * vip-node (ocf:heartbeat:IPaddr2): Started rdo7-node1
+ * Clone Set: galera-master [galera] (promotable):
+ * Promoted: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: rabbitmq-server-clone [rabbitmq-server]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: memcached-clone [memcached]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: mongodb-clone [mongodb]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: keystone-clone [keystone]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: glance-fs-clone [glance-fs]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: glance-registry-clone [glance-registry]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: glance-api-clone [glance-api]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: cinder-api-clone [cinder-api]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: cinder-scheduler-clone [cinder-scheduler]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * cinder-volume (systemd:openstack-cinder-volume): Stopped
+ * Clone Set: swift-fs-clone [swift-fs]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: swift-account-clone [swift-account]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: swift-container-clone [swift-container]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: swift-object-clone [swift-object]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: swift-proxy-clone [swift-proxy]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * swift-object-expirer (systemd:openstack-swift-object-expirer): Stopped
+ * Clone Set: neutron-server-clone [neutron-server]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: neutron-scale-clone [neutron-scale] (unique):
+ * neutron-scale:0 (ocf:neutron:NeutronScale): Stopped
+ * neutron-scale:1 (ocf:neutron:NeutronScale): Stopped
+ * neutron-scale:2 (ocf:neutron:NeutronScale): Stopped
+ * Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: neutron-l3-agent-clone [neutron-l3-agent]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: nova-consoleauth-clone [nova-consoleauth]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: nova-novncproxy-clone [nova-novncproxy]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: nova-api-clone [nova-api]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: nova-scheduler-clone [nova-scheduler]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: nova-conductor-clone [nova-conductor]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: redis-master [redis] (promotable):
+ * Promoted: [ rdo7-node1 ]
+ * Unpromoted: [ rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * vip-redis (ocf:heartbeat:IPaddr2): Started rdo7-node1
+ * Clone Set: ceilometer-central-clone [ceilometer-central]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: ceilometer-collector-clone [ceilometer-collector]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: ceilometer-api-clone [ceilometer-api]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: ceilometer-delay-clone [ceilometer-delay]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: ceilometer-alarm-evaluator-clone [ceilometer-alarm-evaluator]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: ceilometer-alarm-notifier-clone [ceilometer-alarm-notifier]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: ceilometer-notification-clone [ceilometer-notification]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: heat-api-clone [heat-api]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: heat-api-cfn-clone [heat-api-cfn]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: heat-api-cloudwatch-clone [heat-api-cloudwatch]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: heat-engine-clone [heat-engine]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: horizon-clone [horizon]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: neutron-openvswitch-agent-compute-clone [neutron-openvswitch-agent-compute]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: libvirtd-compute-clone [libvirtd-compute]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: ceilometer-compute-clone [ceilometer-compute]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: nova-compute-clone [nova-compute]:
+ * Stopped: [ mrg-07 mrg-08 mrg-09 rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * fence-nova (stonith:fence_compute): Stopped
+ * fence-compute (stonith:fence_apc_snmp): Started rdo7-node3
+ * mrg-07 (ocf:pacemaker:remote): Started rdo7-node1
+ * mrg-08 (ocf:pacemaker:remote): Started rdo7-node2
+ * mrg-09 (ocf:pacemaker:remote): Started rdo7-node3
+
+Transition Summary:
+ * Start keystone:0 ( rdo7-node2 )
+ * Start keystone:1 ( rdo7-node3 )
+ * Start keystone:2 ( rdo7-node1 )
+ * Start glance-registry:0 ( rdo7-node2 )
+ * Start glance-registry:1 ( rdo7-node3 )
+ * Start glance-registry:2 ( rdo7-node1 )
+ * Start glance-api:0 ( rdo7-node2 )
+ * Start glance-api:1 ( rdo7-node3 )
+ * Start glance-api:2 ( rdo7-node1 )
+ * Start cinder-api:0 ( rdo7-node2 )
+ * Start cinder-api:1 ( rdo7-node3 )
+ * Start cinder-api:2 ( rdo7-node1 )
+ * Start cinder-scheduler:0 ( rdo7-node2 )
+ * Start cinder-scheduler:1 ( rdo7-node3 )
+ * Start cinder-scheduler:2 ( rdo7-node1 )
+ * Start cinder-volume ( rdo7-node2 )
+ * Start swift-account:0 ( rdo7-node3 )
+ * Start swift-account:1 ( rdo7-node1 )
+ * Start swift-account:2 ( rdo7-node2 )
+ * Start swift-container:0 ( rdo7-node3 )
+ * Start swift-container:1 ( rdo7-node1 )
+ * Start swift-container:2 ( rdo7-node2 )
+ * Start swift-object:0 ( rdo7-node3 )
+ * Start swift-object:1 ( rdo7-node1 )
+ * Start swift-object:2 ( rdo7-node2 )
+ * Start swift-proxy:0 ( rdo7-node3 )
+ * Start swift-proxy:1 ( rdo7-node1 )
+ * Start swift-proxy:2 ( rdo7-node2 )
+ * Start swift-object-expirer ( rdo7-node3 )
+ * Start neutron-server:0 ( rdo7-node1 )
+ * Start neutron-server:1 ( rdo7-node2 )
+ * Start neutron-server:2 ( rdo7-node3 )
+ * Start neutron-scale:0 ( rdo7-node1 )
+ * Start neutron-scale:1 ( rdo7-node2 )
+ * Start neutron-scale:2 ( rdo7-node3 )
+ * Start neutron-ovs-cleanup:0 ( rdo7-node1 )
+ * Start neutron-ovs-cleanup:1 ( rdo7-node2 )
+ * Start neutron-ovs-cleanup:2 ( rdo7-node3 )
+ * Start neutron-netns-cleanup:0 ( rdo7-node1 )
+ * Start neutron-netns-cleanup:1 ( rdo7-node2 )
+ * Start neutron-netns-cleanup:2 ( rdo7-node3 )
+ * Start neutron-openvswitch-agent:0 ( rdo7-node1 )
+ * Start neutron-openvswitch-agent:1 ( rdo7-node2 )
+ * Start neutron-openvswitch-agent:2 ( rdo7-node3 )
+ * Start neutron-dhcp-agent:0 ( rdo7-node1 )
+ * Start neutron-dhcp-agent:1 ( rdo7-node2 )
+ * Start neutron-dhcp-agent:2 ( rdo7-node3 )
+ * Start neutron-l3-agent:0 ( rdo7-node1 )
+ * Start neutron-l3-agent:1 ( rdo7-node2 )
+ * Start neutron-l3-agent:2 ( rdo7-node3 )
+ * Start neutron-metadata-agent:0 ( rdo7-node1 )
+ * Start neutron-metadata-agent:1 ( rdo7-node2 )
+ * Start neutron-metadata-agent:2 ( rdo7-node3 )
+ * Start nova-consoleauth:0 ( rdo7-node1 )
+ * Start nova-consoleauth:1 ( rdo7-node2 )
+ * Start nova-consoleauth:2 ( rdo7-node3 )
+ * Start nova-novncproxy:0 ( rdo7-node1 )
+ * Start nova-novncproxy:1 ( rdo7-node2 )
+ * Start nova-novncproxy:2 ( rdo7-node3 )
+ * Start nova-api:0 ( rdo7-node1 )
+ * Start nova-api:1 ( rdo7-node2 )
+ * Start nova-api:2 ( rdo7-node3 )
+ * Start nova-scheduler:0 ( rdo7-node1 )
+ * Start nova-scheduler:1 ( rdo7-node2 )
+ * Start nova-scheduler:2 ( rdo7-node3 )
+ * Start nova-conductor:0 ( rdo7-node1 )
+ * Start nova-conductor:1 ( rdo7-node2 )
+ * Start nova-conductor:2 ( rdo7-node3 )
+ * Start ceilometer-central:0 ( rdo7-node2 )
+ * Start ceilometer-central:1 ( rdo7-node3 )
+ * Start ceilometer-central:2 ( rdo7-node1 )
+ * Start ceilometer-collector:0 ( rdo7-node2 )
+ * Start ceilometer-collector:1 ( rdo7-node3 )
+ * Start ceilometer-collector:2 ( rdo7-node1 )
+ * Start ceilometer-api:0 ( rdo7-node2 )
+ * Start ceilometer-api:1 ( rdo7-node3 )
+ * Start ceilometer-api:2 ( rdo7-node1 )
+ * Start ceilometer-delay:0 ( rdo7-node2 )
+ * Start ceilometer-delay:1 ( rdo7-node3 )
+ * Start ceilometer-delay:2 ( rdo7-node1 )
+ * Start ceilometer-alarm-evaluator:0 ( rdo7-node2 )
+ * Start ceilometer-alarm-evaluator:1 ( rdo7-node3 )
+ * Start ceilometer-alarm-evaluator:2 ( rdo7-node1 )
+ * Start ceilometer-alarm-notifier:0 ( rdo7-node2 )
+ * Start ceilometer-alarm-notifier:1 ( rdo7-node3 )
+ * Start ceilometer-alarm-notifier:2 ( rdo7-node1 )
+ * Start ceilometer-notification:0 ( rdo7-node2 )
+ * Start ceilometer-notification:1 ( rdo7-node3 )
+ * Start ceilometer-notification:2 ( rdo7-node1 )
+ * Start heat-api:0 ( rdo7-node2 )
+ * Start heat-api:1 ( rdo7-node3 )
+ * Start heat-api:2 ( rdo7-node1 )
+ * Start heat-api-cfn:0 ( rdo7-node2 )
+ * Start heat-api-cfn:1 ( rdo7-node3 )
+ * Start heat-api-cfn:2 ( rdo7-node1 )
+ * Start heat-api-cloudwatch:0 ( rdo7-node2 )
+ * Start heat-api-cloudwatch:1 ( rdo7-node3 )
+ * Start heat-api-cloudwatch:2 ( rdo7-node1 )
+ * Start heat-engine:0 ( rdo7-node2 )
+ * Start heat-engine:1 ( rdo7-node3 )
+ * Start heat-engine:2 ( rdo7-node1 )
+ * Start neutron-openvswitch-agent-compute:0 ( mrg-07 )
+ * Start neutron-openvswitch-agent-compute:1 ( mrg-08 )
+ * Start neutron-openvswitch-agent-compute:2 ( mrg-09 )
+ * Start libvirtd-compute:0 ( mrg-07 )
+ * Start libvirtd-compute:1 ( mrg-08 )
+ * Start libvirtd-compute:2 ( mrg-09 )
+ * Start ceilometer-compute:0 ( mrg-07 )
+ * Start ceilometer-compute:1 ( mrg-08 )
+ * Start ceilometer-compute:2 ( mrg-09 )
+ * Start nova-compute:0 ( mrg-07 )
+ * Start nova-compute:1 ( mrg-08 )
+ * Start nova-compute:2 ( mrg-09 )
+ * Start fence-nova ( rdo7-node2 )
+
+Executing Cluster Transition:
+ * Resource action: galera monitor=10000 on rdo7-node2
+ * Pseudo action: keystone-clone_start_0
+ * Pseudo action: nova-compute-clone_pre_notify_start_0
+ * Resource action: keystone start on rdo7-node2
+ * Resource action: keystone start on rdo7-node3
+ * Resource action: keystone start on rdo7-node1
+ * Pseudo action: keystone-clone_running_0
+ * Pseudo action: glance-registry-clone_start_0
+ * Pseudo action: cinder-api-clone_start_0
+ * Pseudo action: swift-account-clone_start_0
+ * Pseudo action: neutron-server-clone_start_0
+ * Pseudo action: nova-consoleauth-clone_start_0
+ * Pseudo action: ceilometer-central-clone_start_0
+ * Pseudo action: nova-compute-clone_confirmed-pre_notify_start_0
+ * Resource action: keystone monitor=60000 on rdo7-node2
+ * Resource action: keystone monitor=60000 on rdo7-node3
+ * Resource action: keystone monitor=60000 on rdo7-node1
+ * Resource action: glance-registry start on rdo7-node2
+ * Resource action: glance-registry start on rdo7-node3
+ * Resource action: glance-registry start on rdo7-node1
+ * Pseudo action: glance-registry-clone_running_0
+ * Pseudo action: glance-api-clone_start_0
+ * Resource action: cinder-api start on rdo7-node2
+ * Resource action: cinder-api start on rdo7-node3
+ * Resource action: cinder-api start on rdo7-node1
+ * Pseudo action: cinder-api-clone_running_0
+ * Pseudo action: cinder-scheduler-clone_start_0
+ * Resource action: swift-account start on rdo7-node3
+ * Resource action: swift-account start on rdo7-node1
+ * Resource action: swift-account start on rdo7-node2
+ * Pseudo action: swift-account-clone_running_0
+ * Pseudo action: swift-container-clone_start_0
+ * Pseudo action: swift-proxy-clone_start_0
+ * Resource action: neutron-server start on rdo7-node1
+ * Resource action: neutron-server start on rdo7-node2
+ * Resource action: neutron-server start on rdo7-node3
+ * Pseudo action: neutron-server-clone_running_0
+ * Pseudo action: neutron-scale-clone_start_0
+ * Resource action: nova-consoleauth start on rdo7-node1
+ * Resource action: nova-consoleauth start on rdo7-node2
+ * Resource action: nova-consoleauth start on rdo7-node3
+ * Pseudo action: nova-consoleauth-clone_running_0
+ * Pseudo action: nova-novncproxy-clone_start_0
+ * Resource action: ceilometer-central start on rdo7-node2
+ * Resource action: ceilometer-central start on rdo7-node3
+ * Resource action: ceilometer-central start on rdo7-node1
+ * Pseudo action: ceilometer-central-clone_running_0
+ * Pseudo action: ceilometer-collector-clone_start_0
+ * Pseudo action: clone-one-or-more:order-neutron-server-clone-neutron-openvswitch-agent-compute-clone-mandatory
+ * Resource action: glance-registry monitor=60000 on rdo7-node2
+ * Resource action: glance-registry monitor=60000 on rdo7-node3
+ * Resource action: glance-registry monitor=60000 on rdo7-node1
+ * Resource action: glance-api start on rdo7-node2
+ * Resource action: glance-api start on rdo7-node3
+ * Resource action: glance-api start on rdo7-node1
+ * Pseudo action: glance-api-clone_running_0
+ * Resource action: cinder-api monitor=60000 on rdo7-node2
+ * Resource action: cinder-api monitor=60000 on rdo7-node3
+ * Resource action: cinder-api monitor=60000 on rdo7-node1
+ * Resource action: cinder-scheduler start on rdo7-node2
+ * Resource action: cinder-scheduler start on rdo7-node3
+ * Resource action: cinder-scheduler start on rdo7-node1
+ * Pseudo action: cinder-scheduler-clone_running_0
+ * Resource action: cinder-volume start on rdo7-node2
+ * Resource action: swift-account monitor=60000 on rdo7-node3
+ * Resource action: swift-account monitor=60000 on rdo7-node1
+ * Resource action: swift-account monitor=60000 on rdo7-node2
+ * Resource action: swift-container start on rdo7-node3
+ * Resource action: swift-container start on rdo7-node1
+ * Resource action: swift-container start on rdo7-node2
+ * Pseudo action: swift-container-clone_running_0
+ * Pseudo action: swift-object-clone_start_0
+ * Resource action: swift-proxy start on rdo7-node3
+ * Resource action: swift-proxy start on rdo7-node1
+ * Resource action: swift-proxy start on rdo7-node2
+ * Pseudo action: swift-proxy-clone_running_0
+ * Resource action: swift-object-expirer start on rdo7-node3
+ * Resource action: neutron-server monitor=60000 on rdo7-node1
+ * Resource action: neutron-server monitor=60000 on rdo7-node2
+ * Resource action: neutron-server monitor=60000 on rdo7-node3
+ * Resource action: neutron-scale:0 start on rdo7-node1
+ * Resource action: neutron-scale:1 start on rdo7-node2
+ * Resource action: neutron-scale:2 start on rdo7-node3
+ * Pseudo action: neutron-scale-clone_running_0
+ * Pseudo action: neutron-ovs-cleanup-clone_start_0
+ * Resource action: nova-consoleauth monitor=60000 on rdo7-node1
+ * Resource action: nova-consoleauth monitor=60000 on rdo7-node2
+ * Resource action: nova-consoleauth monitor=60000 on rdo7-node3
+ * Resource action: nova-novncproxy start on rdo7-node1
+ * Resource action: nova-novncproxy start on rdo7-node2
+ * Resource action: nova-novncproxy start on rdo7-node3
+ * Pseudo action: nova-novncproxy-clone_running_0
+ * Pseudo action: nova-api-clone_start_0
+ * Resource action: ceilometer-central monitor=60000 on rdo7-node2
+ * Resource action: ceilometer-central monitor=60000 on rdo7-node3
+ * Resource action: ceilometer-central monitor=60000 on rdo7-node1
+ * Resource action: ceilometer-collector start on rdo7-node2
+ * Resource action: ceilometer-collector start on rdo7-node3
+ * Resource action: ceilometer-collector start on rdo7-node1
+ * Pseudo action: ceilometer-collector-clone_running_0
+ * Pseudo action: ceilometer-api-clone_start_0
+ * Pseudo action: neutron-openvswitch-agent-compute-clone_start_0
+ * Resource action: glance-api monitor=60000 on rdo7-node2
+ * Resource action: glance-api monitor=60000 on rdo7-node3
+ * Resource action: glance-api monitor=60000 on rdo7-node1
+ * Resource action: cinder-scheduler monitor=60000 on rdo7-node2
+ * Resource action: cinder-scheduler monitor=60000 on rdo7-node3
+ * Resource action: cinder-scheduler monitor=60000 on rdo7-node1
+ * Resource action: cinder-volume monitor=60000 on rdo7-node2
+ * Resource action: swift-container monitor=60000 on rdo7-node3
+ * Resource action: swift-container monitor=60000 on rdo7-node1
+ * Resource action: swift-container monitor=60000 on rdo7-node2
+ * Resource action: swift-object start on rdo7-node3
+ * Resource action: swift-object start on rdo7-node1
+ * Resource action: swift-object start on rdo7-node2
+ * Pseudo action: swift-object-clone_running_0
+ * Resource action: swift-proxy monitor=60000 on rdo7-node3
+ * Resource action: swift-proxy monitor=60000 on rdo7-node1
+ * Resource action: swift-proxy monitor=60000 on rdo7-node2
+ * Resource action: swift-object-expirer monitor=60000 on rdo7-node3
+ * Resource action: neutron-scale:0 monitor=10000 on rdo7-node1
+ * Resource action: neutron-scale:1 monitor=10000 on rdo7-node2
+ * Resource action: neutron-scale:2 monitor=10000 on rdo7-node3
+ * Resource action: neutron-ovs-cleanup start on rdo7-node1
+ * Resource action: neutron-ovs-cleanup start on rdo7-node2
+ * Resource action: neutron-ovs-cleanup start on rdo7-node3
+ * Pseudo action: neutron-ovs-cleanup-clone_running_0
+ * Pseudo action: neutron-netns-cleanup-clone_start_0
+ * Resource action: nova-novncproxy monitor=60000 on rdo7-node1
+ * Resource action: nova-novncproxy monitor=60000 on rdo7-node2
+ * Resource action: nova-novncproxy monitor=60000 on rdo7-node3
+ * Resource action: nova-api start on rdo7-node1
+ * Resource action: nova-api start on rdo7-node2
+ * Resource action: nova-api start on rdo7-node3
+ * Pseudo action: nova-api-clone_running_0
+ * Pseudo action: nova-scheduler-clone_start_0
+ * Resource action: ceilometer-collector monitor=60000 on rdo7-node2
+ * Resource action: ceilometer-collector monitor=60000 on rdo7-node3
+ * Resource action: ceilometer-collector monitor=60000 on rdo7-node1
+ * Resource action: ceilometer-api start on rdo7-node2
+ * Resource action: ceilometer-api start on rdo7-node3
+ * Resource action: ceilometer-api start on rdo7-node1
+ * Pseudo action: ceilometer-api-clone_running_0
+ * Pseudo action: ceilometer-delay-clone_start_0
+ * Resource action: neutron-openvswitch-agent-compute start on mrg-07
+ * Resource action: neutron-openvswitch-agent-compute start on mrg-08
+ * Resource action: neutron-openvswitch-agent-compute start on mrg-09
+ * Pseudo action: neutron-openvswitch-agent-compute-clone_running_0
+ * Pseudo action: libvirtd-compute-clone_start_0
+ * Resource action: swift-object monitor=60000 on rdo7-node3
+ * Resource action: swift-object monitor=60000 on rdo7-node1
+ * Resource action: swift-object monitor=60000 on rdo7-node2
+ * Resource action: neutron-ovs-cleanup monitor=10000 on rdo7-node1
+ * Resource action: neutron-ovs-cleanup monitor=10000 on rdo7-node2
+ * Resource action: neutron-ovs-cleanup monitor=10000 on rdo7-node3
+ * Resource action: neutron-netns-cleanup start on rdo7-node1
+ * Resource action: neutron-netns-cleanup start on rdo7-node2
+ * Resource action: neutron-netns-cleanup start on rdo7-node3
+ * Pseudo action: neutron-netns-cleanup-clone_running_0
+ * Pseudo action: neutron-openvswitch-agent-clone_start_0
+ * Resource action: nova-api monitor=60000 on rdo7-node1
+ * Resource action: nova-api monitor=60000 on rdo7-node2
+ * Resource action: nova-api monitor=60000 on rdo7-node3
+ * Resource action: nova-scheduler start on rdo7-node1
+ * Resource action: nova-scheduler start on rdo7-node2
+ * Resource action: nova-scheduler start on rdo7-node3
+ * Pseudo action: nova-scheduler-clone_running_0
+ * Pseudo action: nova-conductor-clone_start_0
+ * Resource action: ceilometer-api monitor=60000 on rdo7-node2
+ * Resource action: ceilometer-api monitor=60000 on rdo7-node3
+ * Resource action: ceilometer-api monitor=60000 on rdo7-node1
+ * Resource action: ceilometer-delay start on rdo7-node2
+ * Resource action: ceilometer-delay start on rdo7-node3
+ * Resource action: ceilometer-delay start on rdo7-node1
+ * Pseudo action: ceilometer-delay-clone_running_0
+ * Pseudo action: ceilometer-alarm-evaluator-clone_start_0
+ * Resource action: neutron-openvswitch-agent-compute monitor=60000 on mrg-07
+ * Resource action: neutron-openvswitch-agent-compute monitor=60000 on mrg-08
+ * Resource action: neutron-openvswitch-agent-compute monitor=60000 on mrg-09
+ * Resource action: libvirtd-compute start on mrg-07
+ * Resource action: libvirtd-compute start on mrg-08
+ * Resource action: libvirtd-compute start on mrg-09
+ * Pseudo action: libvirtd-compute-clone_running_0
+ * Resource action: neutron-netns-cleanup monitor=10000 on rdo7-node1
+ * Resource action: neutron-netns-cleanup monitor=10000 on rdo7-node2
+ * Resource action: neutron-netns-cleanup monitor=10000 on rdo7-node3
+ * Resource action: neutron-openvswitch-agent start on rdo7-node1
+ * Resource action: neutron-openvswitch-agent start on rdo7-node2
+ * Resource action: neutron-openvswitch-agent start on rdo7-node3
+ * Pseudo action: neutron-openvswitch-agent-clone_running_0
+ * Pseudo action: neutron-dhcp-agent-clone_start_0
+ * Resource action: nova-scheduler monitor=60000 on rdo7-node1
+ * Resource action: nova-scheduler monitor=60000 on rdo7-node2
+ * Resource action: nova-scheduler monitor=60000 on rdo7-node3
+ * Resource action: nova-conductor start on rdo7-node1
+ * Resource action: nova-conductor start on rdo7-node2
+ * Resource action: nova-conductor start on rdo7-node3
+ * Pseudo action: nova-conductor-clone_running_0
+ * Resource action: ceilometer-delay monitor=10000 on rdo7-node2
+ * Resource action: ceilometer-delay monitor=10000 on rdo7-node3
+ * Resource action: ceilometer-delay monitor=10000 on rdo7-node1
+ * Resource action: ceilometer-alarm-evaluator start on rdo7-node2
+ * Resource action: ceilometer-alarm-evaluator start on rdo7-node3
+ * Resource action: ceilometer-alarm-evaluator start on rdo7-node1
+ * Pseudo action: ceilometer-alarm-evaluator-clone_running_0
+ * Pseudo action: ceilometer-alarm-notifier-clone_start_0
+ * Resource action: libvirtd-compute monitor=60000 on mrg-07
+ * Resource action: libvirtd-compute monitor=60000 on mrg-08
+ * Resource action: libvirtd-compute monitor=60000 on mrg-09
+ * Resource action: fence-nova start on rdo7-node2
+ * Pseudo action: clone-one-or-more:order-nova-conductor-clone-nova-compute-clone-mandatory
+ * Resource action: neutron-openvswitch-agent monitor=60000 on rdo7-node1
+ * Resource action: neutron-openvswitch-agent monitor=60000 on rdo7-node2
+ * Resource action: neutron-openvswitch-agent monitor=60000 on rdo7-node3
+ * Resource action: neutron-dhcp-agent start on rdo7-node1
+ * Resource action: neutron-dhcp-agent start on rdo7-node2
+ * Resource action: neutron-dhcp-agent start on rdo7-node3
+ * Pseudo action: neutron-dhcp-agent-clone_running_0
+ * Pseudo action: neutron-l3-agent-clone_start_0
+ * Resource action: nova-conductor monitor=60000 on rdo7-node1
+ * Resource action: nova-conductor monitor=60000 on rdo7-node2
+ * Resource action: nova-conductor monitor=60000 on rdo7-node3
+ * Resource action: ceilometer-alarm-evaluator monitor=60000 on rdo7-node2
+ * Resource action: ceilometer-alarm-evaluator monitor=60000 on rdo7-node3
+ * Resource action: ceilometer-alarm-evaluator monitor=60000 on rdo7-node1
+ * Resource action: ceilometer-alarm-notifier start on rdo7-node2
+ * Resource action: ceilometer-alarm-notifier start on rdo7-node3
+ * Resource action: ceilometer-alarm-notifier start on rdo7-node1
+ * Pseudo action: ceilometer-alarm-notifier-clone_running_0
+ * Pseudo action: ceilometer-notification-clone_start_0
+ * Resource action: fence-nova monitor=60000 on rdo7-node2
+ * Resource action: neutron-dhcp-agent monitor=60000 on rdo7-node1
+ * Resource action: neutron-dhcp-agent monitor=60000 on rdo7-node2
+ * Resource action: neutron-dhcp-agent monitor=60000 on rdo7-node3
+ * Resource action: neutron-l3-agent start on rdo7-node1
+ * Resource action: neutron-l3-agent start on rdo7-node2
+ * Resource action: neutron-l3-agent start on rdo7-node3
+ * Pseudo action: neutron-l3-agent-clone_running_0
+ * Pseudo action: neutron-metadata-agent-clone_start_0
+ * Resource action: ceilometer-alarm-notifier monitor=60000 on rdo7-node2
+ * Resource action: ceilometer-alarm-notifier monitor=60000 on rdo7-node3
+ * Resource action: ceilometer-alarm-notifier monitor=60000 on rdo7-node1
+ * Resource action: ceilometer-notification start on rdo7-node2
+ * Resource action: ceilometer-notification start on rdo7-node3
+ * Resource action: ceilometer-notification start on rdo7-node1
+ * Pseudo action: ceilometer-notification-clone_running_0
+ * Pseudo action: heat-api-clone_start_0
+ * Pseudo action: clone-one-or-more:order-ceilometer-notification-clone-ceilometer-compute-clone-mandatory
+ * Resource action: neutron-l3-agent monitor=60000 on rdo7-node1
+ * Resource action: neutron-l3-agent monitor=60000 on rdo7-node2
+ * Resource action: neutron-l3-agent monitor=60000 on rdo7-node3
+ * Resource action: neutron-metadata-agent start on rdo7-node1
+ * Resource action: neutron-metadata-agent start on rdo7-node2
+ * Resource action: neutron-metadata-agent start on rdo7-node3
+ * Pseudo action: neutron-metadata-agent-clone_running_0
+ * Resource action: ceilometer-notification monitor=60000 on rdo7-node2
+ * Resource action: ceilometer-notification monitor=60000 on rdo7-node3
+ * Resource action: ceilometer-notification monitor=60000 on rdo7-node1
+ * Resource action: heat-api start on rdo7-node2
+ * Resource action: heat-api start on rdo7-node3
+ * Resource action: heat-api start on rdo7-node1
+ * Pseudo action: heat-api-clone_running_0
+ * Pseudo action: heat-api-cfn-clone_start_0
+ * Pseudo action: ceilometer-compute-clone_start_0
+ * Resource action: neutron-metadata-agent monitor=60000 on rdo7-node1
+ * Resource action: neutron-metadata-agent monitor=60000 on rdo7-node2
+ * Resource action: neutron-metadata-agent monitor=60000 on rdo7-node3
+ * Resource action: heat-api monitor=60000 on rdo7-node2
+ * Resource action: heat-api monitor=60000 on rdo7-node3
+ * Resource action: heat-api monitor=60000 on rdo7-node1
+ * Resource action: heat-api-cfn start on rdo7-node2
+ * Resource action: heat-api-cfn start on rdo7-node3
+ * Resource action: heat-api-cfn start on rdo7-node1
+ * Pseudo action: heat-api-cfn-clone_running_0
+ * Pseudo action: heat-api-cloudwatch-clone_start_0
+ * Resource action: ceilometer-compute start on mrg-07
+ * Resource action: ceilometer-compute start on mrg-08
+ * Resource action: ceilometer-compute start on mrg-09
+ * Pseudo action: ceilometer-compute-clone_running_0
+ * Pseudo action: nova-compute-clone_start_0
+ * Resource action: heat-api-cfn monitor=60000 on rdo7-node2
+ * Resource action: heat-api-cfn monitor=60000 on rdo7-node3
+ * Resource action: heat-api-cfn monitor=60000 on rdo7-node1
+ * Resource action: heat-api-cloudwatch start on rdo7-node2
+ * Resource action: heat-api-cloudwatch start on rdo7-node3
+ * Resource action: heat-api-cloudwatch start on rdo7-node1
+ * Pseudo action: heat-api-cloudwatch-clone_running_0
+ * Pseudo action: heat-engine-clone_start_0
+ * Resource action: ceilometer-compute monitor=60000 on mrg-07
+ * Resource action: ceilometer-compute monitor=60000 on mrg-08
+ * Resource action: ceilometer-compute monitor=60000 on mrg-09
+ * Resource action: nova-compute start on mrg-07
+ * Resource action: nova-compute start on mrg-08
+ * Resource action: nova-compute start on mrg-09
+ * Pseudo action: nova-compute-clone_running_0
+ * Resource action: heat-api-cloudwatch monitor=60000 on rdo7-node2
+ * Resource action: heat-api-cloudwatch monitor=60000 on rdo7-node3
+ * Resource action: heat-api-cloudwatch monitor=60000 on rdo7-node1
+ * Resource action: heat-engine start on rdo7-node2
+ * Resource action: heat-engine start on rdo7-node3
+ * Resource action: heat-engine start on rdo7-node1
+ * Pseudo action: heat-engine-clone_running_0
+ * Pseudo action: nova-compute-clone_post_notify_running_0
+ * Resource action: heat-engine monitor=60000 on rdo7-node2
+ * Resource action: heat-engine monitor=60000 on rdo7-node3
+ * Resource action: heat-engine monitor=60000 on rdo7-node1
+ * Resource action: nova-compute notify on mrg-07
+ * Resource action: nova-compute notify on mrg-08
+ * Resource action: nova-compute notify on mrg-09
+ * Pseudo action: nova-compute-clone_confirmed-post_notify_running_0
+ * Resource action: nova-compute monitor=10000 on mrg-07
+ * Resource action: nova-compute monitor=10000 on mrg-08
+ * Resource action: nova-compute monitor=10000 on mrg-09
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * RemoteOnline: [ mrg-07 mrg-08 mrg-09 ]
+
+ * Full List of Resources:
+ * fence1 (stonith:fence_xvm): Started rdo7-node2
+ * fence2 (stonith:fence_xvm): Started rdo7-node1
+ * fence3 (stonith:fence_xvm): Started rdo7-node3
+ * Clone Set: lb-haproxy-clone [lb-haproxy]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * vip-db (ocf:heartbeat:IPaddr2): Started rdo7-node3
+ * vip-rabbitmq (ocf:heartbeat:IPaddr2): Started rdo7-node1
+ * vip-keystone (ocf:heartbeat:IPaddr2): Started rdo7-node2
+ * vip-glance (ocf:heartbeat:IPaddr2): Started rdo7-node3
+ * vip-cinder (ocf:heartbeat:IPaddr2): Started rdo7-node1
+ * vip-swift (ocf:heartbeat:IPaddr2): Started rdo7-node2
+ * vip-neutron (ocf:heartbeat:IPaddr2): Started rdo7-node2
+ * vip-nova (ocf:heartbeat:IPaddr2): Started rdo7-node1
+ * vip-horizon (ocf:heartbeat:IPaddr2): Started rdo7-node3
+ * vip-heat (ocf:heartbeat:IPaddr2): Started rdo7-node1
+ * vip-ceilometer (ocf:heartbeat:IPaddr2): Started rdo7-node2
+ * vip-qpid (ocf:heartbeat:IPaddr2): Started rdo7-node3
+ * vip-node (ocf:heartbeat:IPaddr2): Started rdo7-node1
+ * Clone Set: galera-master [galera] (promotable):
+ * Promoted: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: rabbitmq-server-clone [rabbitmq-server]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: memcached-clone [memcached]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: mongodb-clone [mongodb]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: keystone-clone [keystone]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: glance-fs-clone [glance-fs]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: glance-registry-clone [glance-registry]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: glance-api-clone [glance-api]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: cinder-api-clone [cinder-api]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: cinder-scheduler-clone [cinder-scheduler]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * cinder-volume (systemd:openstack-cinder-volume): Started rdo7-node2
+ * Clone Set: swift-fs-clone [swift-fs]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: swift-account-clone [swift-account]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: swift-container-clone [swift-container]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: swift-object-clone [swift-object]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: swift-proxy-clone [swift-proxy]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * swift-object-expirer (systemd:openstack-swift-object-expirer): Started rdo7-node3
+ * Clone Set: neutron-server-clone [neutron-server]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: neutron-scale-clone [neutron-scale] (unique):
+ * neutron-scale:0 (ocf:neutron:NeutronScale): Started rdo7-node1
+ * neutron-scale:1 (ocf:neutron:NeutronScale): Started rdo7-node2
+ * neutron-scale:2 (ocf:neutron:NeutronScale): Started rdo7-node3
+ * Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: neutron-l3-agent-clone [neutron-l3-agent]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: nova-consoleauth-clone [nova-consoleauth]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: nova-novncproxy-clone [nova-novncproxy]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: nova-api-clone [nova-api]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: nova-scheduler-clone [nova-scheduler]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: nova-conductor-clone [nova-conductor]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: redis-master [redis] (promotable):
+ * Promoted: [ rdo7-node1 ]
+ * Unpromoted: [ rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * vip-redis (ocf:heartbeat:IPaddr2): Started rdo7-node1
+ * Clone Set: ceilometer-central-clone [ceilometer-central]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: ceilometer-collector-clone [ceilometer-collector]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: ceilometer-api-clone [ceilometer-api]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: ceilometer-delay-clone [ceilometer-delay]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: ceilometer-alarm-evaluator-clone [ceilometer-alarm-evaluator]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: ceilometer-alarm-notifier-clone [ceilometer-alarm-notifier]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: ceilometer-notification-clone [ceilometer-notification]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: heat-api-clone [heat-api]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: heat-api-cfn-clone [heat-api-cfn]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: heat-api-cloudwatch-clone [heat-api-cloudwatch]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: heat-engine-clone [heat-engine]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: horizon-clone [horizon]:
+ * Started: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Stopped: [ mrg-07 mrg-08 mrg-09 ]
+ * Clone Set: neutron-openvswitch-agent-compute-clone [neutron-openvswitch-agent-compute]:
+ * Started: [ mrg-07 mrg-08 mrg-09 ]
+ * Stopped: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: libvirtd-compute-clone [libvirtd-compute]:
+ * Started: [ mrg-07 mrg-08 mrg-09 ]
+ * Stopped: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: ceilometer-compute-clone [ceilometer-compute]:
+ * Started: [ mrg-07 mrg-08 mrg-09 ]
+ * Stopped: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * Clone Set: nova-compute-clone [nova-compute]:
+ * Started: [ mrg-07 mrg-08 mrg-09 ]
+ * Stopped: [ rdo7-node1 rdo7-node2 rdo7-node3 ]
+ * fence-nova (stonith:fence_compute): Started rdo7-node2
+ * fence-compute (stonith:fence_apc_snmp): Started rdo7-node3
+ * mrg-07 (ocf:pacemaker:remote): Started rdo7-node1
+ * mrg-08 (ocf:pacemaker:remote): Started rdo7-node2
+ * mrg-09 (ocf:pacemaker:remote): Started rdo7-node3
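
The "clone-one-or-more:order-...-mandatory" pseudo actions in the transition above are the clone-scale variant of the same mechanism: a mandatory ordering between two clones with require-all="false", where one active instance of the first clone is enough to release the second clone on the compute nodes. A sketch of one such constraint, assuming this attribute layout rather than quoting the test CIB (the constraint ID is hypothetical):

    <rsc_order id="order-neutron-server-compute-agent" kind="Mandatory"
               require-all="false" first="neutron-server-clone"
               then="neutron-openvswitch-agent-compute-clone"/>
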
diff --git a/cts/scheduler/summary/op-defaults-2.summary b/cts/scheduler/summary/op-defaults-2.summary
new file mode 100644
index 0000000..c42da11
--- /dev/null
+++ b/cts/scheduler/summary/op-defaults-2.summary
@@ -0,0 +1,48 @@
+Current cluster status:
+ * Node List:
+ * Online: [ cluster01 cluster02 ]
+
+ * Full List of Resources:
+ * fencing (stonith:fence_xvm): Stopped
+ * ip-rsc (ocf:heartbeat:IPaddr2): Stopped
+ * rsc-passes (ocf:heartbeat:IPaddr2): Stopped
+ * dummy-rsc (ocf:pacemaker:Dummy): Stopped
+ * ping-rsc-ping (ocf:pacemaker:ping): Stopped
+
+Transition Summary:
+ * Start fencing ( cluster01 )
+ * Start ip-rsc ( cluster02 )
+ * Start rsc-passes ( cluster01 )
+ * Start dummy-rsc ( cluster02 )
+ * Start ping-rsc-ping ( cluster01 )
+
+Executing Cluster Transition:
+ * Resource action: fencing monitor on cluster02
+ * Resource action: fencing monitor on cluster01
+ * Resource action: ip-rsc monitor on cluster02
+ * Resource action: ip-rsc monitor on cluster01
+ * Resource action: rsc-passes monitor on cluster02
+ * Resource action: rsc-passes monitor on cluster01
+ * Resource action: dummy-rsc monitor on cluster02
+ * Resource action: dummy-rsc monitor on cluster01
+ * Resource action: ping-rsc-ping monitor on cluster02
+ * Resource action: ping-rsc-ping monitor on cluster01
+ * Resource action: fencing start on cluster01
+ * Resource action: ip-rsc start on cluster02
+ * Resource action: rsc-passes start on cluster01
+ * Resource action: dummy-rsc start on cluster02
+ * Resource action: ping-rsc-ping start on cluster01
+ * Resource action: ip-rsc monitor=20000 on cluster02
+ * Resource action: rsc-passes monitor=10000 on cluster01
+ * Resource action: dummy-rsc monitor=10000 on cluster02
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ cluster01 cluster02 ]
+
+ * Full List of Resources:
+ * fencing (stonith:fence_xvm): Started cluster01
+ * ip-rsc (ocf:heartbeat:IPaddr2): Started cluster02
+ * rsc-passes (ocf:heartbeat:IPaddr2): Started cluster01
+ * dummy-rsc (ocf:pacemaker:Dummy): Started cluster02
+ * ping-rsc-ping (ocf:pacemaker:ping): Started cluster01
diff --git a/cts/scheduler/summary/op-defaults-3.summary b/cts/scheduler/summary/op-defaults-3.summary
new file mode 100644
index 0000000..4e22be7
--- /dev/null
+++ b/cts/scheduler/summary/op-defaults-3.summary
@@ -0,0 +1,28 @@
+Current cluster status:
+ * Node List:
+ * Online: [ cluster01 cluster02 ]
+
+ * Full List of Resources:
+ * fencing (stonith:fence_xvm): Stopped
+ * dummy-rsc (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start fencing ( cluster01 )
+ * Start dummy-rsc ( cluster02 )
+
+Executing Cluster Transition:
+ * Resource action: fencing monitor on cluster02
+ * Resource action: fencing monitor on cluster01
+ * Resource action: dummy-rsc monitor on cluster02
+ * Resource action: dummy-rsc monitor on cluster01
+ * Resource action: fencing start on cluster01
+ * Resource action: dummy-rsc start on cluster02
+ * Resource action: dummy-rsc monitor=10000 on cluster02
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ cluster01 cluster02 ]
+
+ * Full List of Resources:
+ * fencing (stonith:fence_xvm): Started cluster01
+ * dummy-rsc (ocf:pacemaker:Dummy): Started cluster02
diff --git a/cts/scheduler/summary/op-defaults.summary b/cts/scheduler/summary/op-defaults.summary
new file mode 100644
index 0000000..7e4830e
--- /dev/null
+++ b/cts/scheduler/summary/op-defaults.summary
@@ -0,0 +1,48 @@
+Current cluster status:
+ * Node List:
+ * Online: [ cluster01 cluster02 ]
+
+ * Full List of Resources:
+ * fencing (stonith:fence_xvm): Stopped
+ * ip-rsc (ocf:heartbeat:IPaddr2): Stopped
+ * ip-rsc2 (ocf:heartbeat:IPaddr2): Stopped
+ * dummy-rsc (ocf:pacemaker:Dummy): Stopped
+ * ping-rsc-ping (ocf:pacemaker:ping): Stopped
+
+Transition Summary:
+ * Start fencing ( cluster01 )
+ * Start ip-rsc ( cluster02 )
+ * Start ip-rsc2 ( cluster01 )
+ * Start dummy-rsc ( cluster02 )
+ * Start ping-rsc-ping ( cluster01 )
+
+Executing Cluster Transition:
+ * Resource action: fencing monitor on cluster02
+ * Resource action: fencing monitor on cluster01
+ * Resource action: ip-rsc monitor on cluster02
+ * Resource action: ip-rsc monitor on cluster01
+ * Resource action: ip-rsc2 monitor on cluster02
+ * Resource action: ip-rsc2 monitor on cluster01
+ * Resource action: dummy-rsc monitor on cluster02
+ * Resource action: dummy-rsc monitor on cluster01
+ * Resource action: ping-rsc-ping monitor on cluster02
+ * Resource action: ping-rsc-ping monitor on cluster01
+ * Resource action: fencing start on cluster01
+ * Resource action: ip-rsc start on cluster02
+ * Resource action: ip-rsc2 start on cluster01
+ * Resource action: dummy-rsc start on cluster02
+ * Resource action: ping-rsc-ping start on cluster01
+ * Resource action: ip-rsc monitor=20000 on cluster02
+ * Resource action: ip-rsc2 monitor=10000 on cluster01
+ * Resource action: dummy-rsc monitor=60000 on cluster02
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ cluster01 cluster02 ]
+
+ * Full List of Resources:
+ * fencing (stonith:fence_xvm): Started cluster01
+ * ip-rsc (ocf:heartbeat:IPaddr2): Started cluster02
+ * ip-rsc2 (ocf:heartbeat:IPaddr2): Started cluster01
+ * dummy-rsc (ocf:pacemaker:Dummy): Started cluster02
+ * ping-rsc-ping (ocf:pacemaker:ping): Started cluster01
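
The op-defaults-* summaries above exercise the CIB's op_defaults section: default meta-attributes (such as timeout) applied to resource operations, optionally narrowed by rules using rsc_expression (matching a resource agent) or op_expression (matching an operation name and interval). A hedged sketch of the shape of such a section; the IDs and values are illustrative assumptions, not the test's actual CIB:

    <op_defaults>
      <!-- Illustrative IDs and values only -->
      <meta_attributes id="op-defaults-general">
        <nvpair id="op-defaults-timeout" name="timeout" value="90s"/>
      </meta_attributes>
      <meta_attributes id="op-defaults-ip" score="10">
        <rule id="op-defaults-ip-rule">
          <!-- rsc_expression scopes these defaults to IPaddr2 resources -->
          <rsc_expression id="op-defaults-ip-expr" class="ocf"
                          provider="heartbeat" type="IPaddr2"/>
        </rule>
        <nvpair id="op-defaults-ip-timeout" name="timeout" value="30s"/>
      </meta_attributes>
    </op_defaults>
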
diff --git a/cts/scheduler/summary/order-clone.summary b/cts/scheduler/summary/order-clone.summary
new file mode 100644
index 0000000..d60aa2e
--- /dev/null
+++ b/cts/scheduler/summary/order-clone.summary
@@ -0,0 +1,45 @@
+4 of 25 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ hex-0 hex-7 hex-8 hex-9 ]
+
+ * Full List of Resources:
+ * fencing-sbd (stonith:external/sbd): Stopped
+ * Clone Set: o2cb-clone [o2cb]:
+ * Stopped: [ hex-0 hex-7 hex-8 hex-9 ]
+ * Clone Set: vg1-clone [vg1]:
+ * Stopped: [ hex-0 hex-7 hex-8 hex-9 ]
+ * Clone Set: fs2-clone [ocfs2-2]:
+ * Stopped: [ hex-0 hex-7 hex-8 hex-9 ]
+ * Clone Set: fs1-clone [ocfs2-1]:
+ * Stopped: [ hex-0 hex-7 hex-8 hex-9 ]
+ * Clone Set: dlm-clone [dlm] (disabled):
+ * Stopped (disabled): [ hex-0 hex-7 hex-8 hex-9 ]
+ * Clone Set: clvm-clone [clvm]:
+ * Stopped: [ hex-0 hex-7 hex-8 hex-9 ]
+
+Transition Summary:
+ * Start fencing-sbd ( hex-0 )
+
+Executing Cluster Transition:
+ * Resource action: fencing-sbd start on hex-0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hex-0 hex-7 hex-8 hex-9 ]
+
+ * Full List of Resources:
+ * fencing-sbd (stonith:external/sbd): Started hex-0
+ * Clone Set: o2cb-clone [o2cb]:
+ * Stopped: [ hex-0 hex-7 hex-8 hex-9 ]
+ * Clone Set: vg1-clone [vg1]:
+ * Stopped: [ hex-0 hex-7 hex-8 hex-9 ]
+ * Clone Set: fs2-clone [ocfs2-2]:
+ * Stopped: [ hex-0 hex-7 hex-8 hex-9 ]
+ * Clone Set: fs1-clone [ocfs2-1]:
+ * Stopped: [ hex-0 hex-7 hex-8 hex-9 ]
+ * Clone Set: dlm-clone [dlm] (disabled):
+ * Stopped (disabled): [ hex-0 hex-7 hex-8 hex-9 ]
+ * Clone Set: clvm-clone [clvm]:
+ * Stopped: [ hex-0 hex-7 hex-8 hex-9 ]
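
Only fencing-sbd starts here because every clone in the o2cb/vg1/fs chain is ordered, directly or transitively, after the disabled dlm-clone. Disabling a clone and chaining another after it might look roughly like this in the CIB (ids are illustrative; the real test wires up more constraints):

    <clone id="dlm-clone">
      <meta_attributes id="dlm-clone-meta">
        <!-- target-role=Stopped is what the summary reports as "(disabled)" -->
        <nvpair id="dlm-clone-target-role" name="target-role" value="Stopped"/>
      </meta_attributes>
      <primitive id="dlm" class="ocf" provider="pacemaker" type="controld"/>
    </clone>
    <rsc_order id="order-dlm-o2cb" first="dlm-clone" then="o2cb-clone" kind="Mandatory"/>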
diff --git a/cts/scheduler/summary/order-expired-failure.summary b/cts/scheduler/summary/order-expired-failure.summary
new file mode 100644
index 0000000..7ec0617
--- /dev/null
+++ b/cts/scheduler/summary/order-expired-failure.summary
@@ -0,0 +1,112 @@
+Using the original execution date of: 2018-04-09 07:55:35Z
+Current cluster status:
+ * Node List:
+ * RemoteNode overcloud-novacompute-1: UNCLEAN (offline)
+ * Online: [ controller-0 controller-1 controller-2 ]
+ * RemoteOnline: [ overcloud-novacompute-0 ]
+ * GuestOnline: [ galera-bundle-0 galera-bundle-1 galera-bundle-2 rabbitmq-bundle-0 rabbitmq-bundle-1 rabbitmq-bundle-2 redis-bundle-0 redis-bundle-1 redis-bundle-2 ]
+
+ * Full List of Resources:
+ * overcloud-novacompute-0 (ocf:pacemaker:remote): Started controller-0
+ * overcloud-novacompute-1 (ocf:pacemaker:remote): FAILED controller-1
+ * Container bundle set: rabbitmq-bundle [192.168.24.1:8787/rhosp13/openstack-rabbitmq:pcmklatest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started controller-2
+ * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started controller-0
+ * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started controller-1
+ * Container bundle set: galera-bundle [192.168.24.1:8787/rhosp13/openstack-mariadb:pcmklatest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Promoted controller-2
+ * galera-bundle-1 (ocf:heartbeat:galera): Promoted controller-0
+ * galera-bundle-2 (ocf:heartbeat:galera): Promoted controller-1
+ * Container bundle set: redis-bundle [192.168.24.1:8787/rhosp13/openstack-redis:pcmklatest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Promoted controller-2
+ * redis-bundle-1 (ocf:heartbeat:redis): Unpromoted controller-0
+ * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted controller-1
+ * ip-192.168.24.11 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-10.0.0.110 (ocf:heartbeat:IPaddr2): Stopped
+ * ip-172.17.1.14 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.1.17 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.3.11 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.4.17 (ocf:heartbeat:IPaddr2): Started controller-1
+ * Container bundle set: haproxy-bundle [192.168.24.1:8787/rhosp13/openstack-haproxy:pcmklatest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started controller-2
+ * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Started controller-0
+ * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Started controller-1
+ * stonith-fence_compute-fence-nova (stonith:fence_compute): FAILED controller-2
+ * Clone Set: compute-unfence-trigger-clone [compute-unfence-trigger]:
+ * compute-unfence-trigger (ocf:pacemaker:Dummy): Started overcloud-novacompute-1 (UNCLEAN)
+ * Started: [ overcloud-novacompute-0 ]
+ * Stopped: [ controller-0 controller-1 controller-2 ]
+ * nova-evacuate (ocf:openstack:NovaEvacuate): Started controller-0
+ * stonith-fence_ipmilan-5254008be2cc (stonith:fence_ipmilan): Started controller-1
+ * stonith-fence_ipmilan-525400803f9e (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-525400fca120 (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-525400953d48 (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-525400b02b86 (stonith:fence_ipmilan): Started controller-1
+ * Container bundle: openstack-cinder-volume [192.168.24.1:8787/rhosp13/openstack-cinder-volume:pcmklatest]:
+ * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Started controller-0
+
+Transition Summary:
+ * Fence (reboot) overcloud-novacompute-1 'remote connection is unrecoverable'
+ * Stop overcloud-novacompute-1 ( controller-1 ) due to node availability
+ * Start ip-10.0.0.110 ( controller-1 )
+ * Recover stonith-fence_compute-fence-nova ( controller-2 )
+ * Stop compute-unfence-trigger:1 ( overcloud-novacompute-1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: overcloud-novacompute-1 stop on controller-1
+ * Resource action: stonith-fence_compute-fence-nova stop on controller-2
+ * Fencing overcloud-novacompute-1 (reboot)
+ * Cluster action: clear_failcount for overcloud-novacompute-1 on controller-1
+ * Resource action: ip-10.0.0.110 start on controller-1
+ * Resource action: stonith-fence_compute-fence-nova start on controller-2
+ * Resource action: stonith-fence_compute-fence-nova monitor=60000 on controller-2
+ * Pseudo action: compute-unfence-trigger-clone_stop_0
+ * Resource action: ip-10.0.0.110 monitor=10000 on controller-1
+ * Pseudo action: compute-unfence-trigger_stop_0
+ * Pseudo action: compute-unfence-trigger-clone_stopped_0
+Using the original execution date of: 2018-04-09 07:55:35Z
+
+Revised Cluster Status:
+ * Node List:
+ * RemoteNode overcloud-novacompute-1: UNCLEAN (offline)
+ * Online: [ controller-0 controller-1 controller-2 ]
+ * RemoteOnline: [ overcloud-novacompute-0 ]
+ * GuestOnline: [ galera-bundle-0 galera-bundle-1 galera-bundle-2 rabbitmq-bundle-0 rabbitmq-bundle-1 rabbitmq-bundle-2 redis-bundle-0 redis-bundle-1 redis-bundle-2 ]
+
+ * Full List of Resources:
+ * overcloud-novacompute-0 (ocf:pacemaker:remote): Started controller-0
+ * overcloud-novacompute-1 (ocf:pacemaker:remote): FAILED
+ * Container bundle set: rabbitmq-bundle [192.168.24.1:8787/rhosp13/openstack-rabbitmq:pcmklatest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started controller-2
+ * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started controller-0
+ * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started controller-1
+ * Container bundle set: galera-bundle [192.168.24.1:8787/rhosp13/openstack-mariadb:pcmklatest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Promoted controller-2
+ * galera-bundle-1 (ocf:heartbeat:galera): Promoted controller-0
+ * galera-bundle-2 (ocf:heartbeat:galera): Promoted controller-1
+ * Container bundle set: redis-bundle [192.168.24.1:8787/rhosp13/openstack-redis:pcmklatest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Promoted controller-2
+ * redis-bundle-1 (ocf:heartbeat:redis): Unpromoted controller-0
+ * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted controller-1
+ * ip-192.168.24.11 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-10.0.0.110 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.1.14 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.1.17 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.3.11 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.4.17 (ocf:heartbeat:IPaddr2): Started controller-1
+ * Container bundle set: haproxy-bundle [192.168.24.1:8787/rhosp13/openstack-haproxy:pcmklatest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started controller-2
+ * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Started controller-0
+ * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Started controller-1
+ * stonith-fence_compute-fence-nova (stonith:fence_compute): Started controller-2
+ * Clone Set: compute-unfence-trigger-clone [compute-unfence-trigger]:
+ * Started: [ overcloud-novacompute-0 ]
+ * Stopped: [ controller-0 controller-1 controller-2 overcloud-novacompute-1 ]
+ * nova-evacuate (ocf:openstack:NovaEvacuate): Started controller-0
+ * stonith-fence_ipmilan-5254008be2cc (stonith:fence_ipmilan): Started controller-1
+ * stonith-fence_ipmilan-525400803f9e (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-525400fca120 (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-525400953d48 (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-525400b02b86 (stonith:fence_ipmilan): Started controller-1
+ * Container bundle: openstack-cinder-volume [192.168.24.1:8787/rhosp13/openstack-cinder-volume:pcmklatest]:
+ * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Started controller-0
diff --git a/cts/scheduler/summary/order-first-probes.summary b/cts/scheduler/summary/order-first-probes.summary
new file mode 100644
index 0000000..8648aba
--- /dev/null
+++ b/cts/scheduler/summary/order-first-probes.summary
@@ -0,0 +1,37 @@
+Using the original execution date of: 2016-10-05 07:32:34Z
+Current cluster status:
+ * Node List:
+ * Node rh72-01: standby (with active resources)
+ * Online: [ rh72-02 ]
+
+ * Full List of Resources:
+ * Resource Group: grpDummy:
+ * prmDummy1 (ocf:pacemaker:Dummy1): Started rh72-01
+ * prmDummy2 (ocf:pacemaker:Dummy2): Stopped
+
+Transition Summary:
+ * Move prmDummy1 ( rh72-01 -> rh72-02 )
+ * Start prmDummy2 ( rh72-02 )
+
+Executing Cluster Transition:
+ * Pseudo action: grpDummy_stop_0
+ * Resource action: prmDummy2 monitor on rh72-01
+ * Resource action: prmDummy1 stop on rh72-01
+ * Pseudo action: grpDummy_stopped_0
+ * Pseudo action: grpDummy_start_0
+ * Resource action: prmDummy1 start on rh72-02
+ * Resource action: prmDummy2 start on rh72-02
+ * Pseudo action: grpDummy_running_0
+ * Resource action: prmDummy1 monitor=10000 on rh72-02
+ * Resource action: prmDummy2 monitor=10000 on rh72-02
+Using the original execution date of: 2016-10-05 07:32:34Z
+
+Revised Cluster Status:
+ * Node List:
+ * Node rh72-01: standby
+ * Online: [ rh72-02 ]
+
+ * Full List of Resources:
+ * Resource Group: grpDummy:
+ * prmDummy1 (ocf:pacemaker:Dummy1): Started rh72-02
+ * prmDummy2 (ocf:pacemaker:Dummy2): Started rh72-02
diff --git a/cts/scheduler/summary/order-mandatory.summary b/cts/scheduler/summary/order-mandatory.summary
new file mode 100644
index 0000000..f6856b0
--- /dev/null
+++ b/cts/scheduler/summary/order-mandatory.summary
@@ -0,0 +1,30 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * rsc3 (ocf:heartbeat:apache): Stopped
+ * rsc4 (ocf:heartbeat:apache): Started node1
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+ * Restart rsc2 ( node1 ) due to required rsc1 start
+ * Stop rsc4 ( node1 ) due to unrunnable rsc3 start
+
+Executing Cluster Transition:
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 stop on node1
+ * Resource action: rsc4 stop on node1
+ * Resource action: rsc2 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * rsc3 (ocf:heartbeat:apache): Stopped
+ * rsc4 (ocf:heartbeat:apache): Stopped
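
Both effects in this transition follow from mandatory ordering: starting rsc1 forces the already-running rsc2 to restart, and rsc4 must stop because its prerequisite rsc3 cannot start. A sketch of the two constraints involved (ids are illustrative; the test input may use the equivalent legacy score="INFINITY" form instead of kind):

    <constraints>
      <rsc_order id="order-rsc1-rsc2" first="rsc1" then="rsc2" kind="Mandatory"/>
      <rsc_order id="order-rsc3-rsc4" first="rsc3" then="rsc4" kind="Mandatory"/>
    </constraints>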
diff --git a/cts/scheduler/summary/order-optional-keyword.summary b/cts/scheduler/summary/order-optional-keyword.summary
new file mode 100644
index 0000000..d8a12bf
--- /dev/null
+++ b/cts/scheduler/summary/order-optional-keyword.summary
@@ -0,0 +1,25 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * rsc3 (ocf:heartbeat:apache): Stopped
+ * rsc4 (ocf:heartbeat:apache): Started node1
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * rsc3 (ocf:heartbeat:apache): Stopped
+ * rsc4 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/order-optional.summary b/cts/scheduler/summary/order-optional.summary
new file mode 100644
index 0000000..d8a12bf
--- /dev/null
+++ b/cts/scheduler/summary/order-optional.summary
@@ -0,0 +1,25 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * rsc3 (ocf:heartbeat:apache): Stopped
+ * rsc4 (ocf:heartbeat:apache): Started node1
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * rsc3 (ocf:heartbeat:apache): Stopped
+ * rsc4 (ocf:heartbeat:apache): Started node1
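
order-optional and order-optional-keyword share the same expected output (note the identical blob hash d8a12bf): with advisory ordering, starting rsc1 neither restarts rsc2 nor stops rsc4. The two inputs presumably spell the same constraint differently; an optional ordering can be written either way (ids illustrative):

    <!-- keyword form -->
    <rsc_order id="order-rsc1-rsc2" first="rsc1" then="rsc2" kind="Optional"/>
    <!-- legacy score form, equivalent -->
    <rsc_order id="order-rsc3-rsc4" first="rsc3" then="rsc4" score="0"/>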
diff --git a/cts/scheduler/summary/order-required.summary b/cts/scheduler/summary/order-required.summary
new file mode 100644
index 0000000..f6856b0
--- /dev/null
+++ b/cts/scheduler/summary/order-required.summary
@@ -0,0 +1,30 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * rsc3 (ocf:heartbeat:apache): Stopped
+ * rsc4 (ocf:heartbeat:apache): Started node1
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+ * Restart rsc2 ( node1 ) due to required rsc1 start
+ * Stop rsc4 ( node1 ) due to unrunnable rsc3 start
+
+Executing Cluster Transition:
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 stop on node1
+ * Resource action: rsc4 stop on node1
+ * Resource action: rsc2 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * rsc3 (ocf:heartbeat:apache): Stopped
+ * rsc4 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/order-serialize-set.summary b/cts/scheduler/summary/order-serialize-set.summary
new file mode 100644
index 0000000..b0b759b
--- /dev/null
+++ b/cts/scheduler/summary/order-serialize-set.summary
@@ -0,0 +1,73 @@
+Current cluster status:
+ * Node List:
+ * Node xen-a: standby (with active resources)
+ * Online: [ xen-b ]
+
+ * Full List of Resources:
+ * xen-a-fencing (stonith:external/ipmi): Started xen-b
+ * xen-b-fencing (stonith:external/ipmi): Started xen-a
+ * db (ocf:heartbeat:Xen): Started xen-a
+ * dbreplica (ocf:heartbeat:Xen): Started xen-b
+ * core-101 (ocf:heartbeat:Xen): Started xen-a
+ * core-200 (ocf:heartbeat:Xen): Started xen-a
+ * sysadmin (ocf:heartbeat:Xen): Started xen-b
+ * edge (ocf:heartbeat:Xen): Started xen-a
+ * base (ocf:heartbeat:Xen): Started xen-a
+ * Email_Alerting (ocf:heartbeat:MailTo): Started xen-b
+
+Transition Summary:
+ * Restart xen-a-fencing ( xen-b ) due to resource definition change
+ * Stop xen-b-fencing ( xen-a ) due to node availability
+ * Migrate db ( xen-a -> xen-b )
+ * Migrate core-101 ( xen-a -> xen-b )
+ * Migrate core-200 ( xen-a -> xen-b )
+ * Migrate edge ( xen-a -> xen-b )
+ * Migrate base ( xen-a -> xen-b )
+
+Executing Cluster Transition:
+ * Resource action: xen-a-fencing stop on xen-b
+ * Resource action: xen-a-fencing start on xen-b
+ * Resource action: xen-a-fencing monitor=60000 on xen-b
+ * Resource action: xen-b-fencing stop on xen-a
+ * Resource action: db migrate_to on xen-a
+ * Resource action: db migrate_from on xen-b
+ * Resource action: db stop on xen-a
+ * Resource action: core-101 migrate_to on xen-a
+ * Pseudo action: db_start_0
+ * Resource action: core-101 migrate_from on xen-b
+ * Resource action: core-101 stop on xen-a
+ * Resource action: core-200 migrate_to on xen-a
+ * Resource action: db monitor=10000 on xen-b
+ * Pseudo action: core-101_start_0
+ * Resource action: core-200 migrate_from on xen-b
+ * Resource action: core-200 stop on xen-a
+ * Resource action: edge migrate_to on xen-a
+ * Resource action: core-101 monitor=10000 on xen-b
+ * Pseudo action: core-200_start_0
+ * Resource action: edge migrate_from on xen-b
+ * Resource action: edge stop on xen-a
+ * Resource action: base migrate_to on xen-a
+ * Resource action: core-200 monitor=10000 on xen-b
+ * Pseudo action: edge_start_0
+ * Resource action: base migrate_from on xen-b
+ * Resource action: base stop on xen-a
+ * Resource action: edge monitor=10000 on xen-b
+ * Pseudo action: base_start_0
+ * Resource action: base monitor=10000 on xen-b
+
+Revised Cluster Status:
+ * Node List:
+ * Node xen-a: standby
+ * Online: [ xen-b ]
+
+ * Full List of Resources:
+ * xen-a-fencing (stonith:external/ipmi): Started xen-b
+ * xen-b-fencing (stonith:external/ipmi): Stopped
+ * db (ocf:heartbeat:Xen): Started xen-b
+ * dbreplica (ocf:heartbeat:Xen): Started xen-b
+ * core-101 (ocf:heartbeat:Xen): Started xen-b
+ * core-200 (ocf:heartbeat:Xen): Started xen-b
+ * sysadmin (ocf:heartbeat:Xen): Started xen-b
+ * edge (ocf:heartbeat:Xen): Started xen-b
+ * base (ocf:heartbeat:Xen): Started xen-b
+ * Email_Alerting (ocf:heartbeat:MailTo): Started xen-b
diff --git a/cts/scheduler/summary/order-serialize.summary b/cts/scheduler/summary/order-serialize.summary
new file mode 100644
index 0000000..c7ef3e0
--- /dev/null
+++ b/cts/scheduler/summary/order-serialize.summary
@@ -0,0 +1,73 @@
+Current cluster status:
+ * Node List:
+ * Node xen-a: standby (with active resources)
+ * Online: [ xen-b ]
+
+ * Full List of Resources:
+ * xen-a-fencing (stonith:external/ipmi): Started xen-b
+ * xen-b-fencing (stonith:external/ipmi): Started xen-a
+ * db (ocf:heartbeat:Xen): Started xen-a
+ * dbreplica (ocf:heartbeat:Xen): Started xen-b
+ * core-101 (ocf:heartbeat:Xen): Started xen-a
+ * core-200 (ocf:heartbeat:Xen): Started xen-a
+ * sysadmin (ocf:heartbeat:Xen): Started xen-b
+ * edge (ocf:heartbeat:Xen): Started xen-a
+ * base (ocf:heartbeat:Xen): Started xen-a
+ * Email_Alerting (ocf:heartbeat:MailTo): Started xen-b
+
+Transition Summary:
+ * Restart xen-a-fencing ( xen-b ) due to resource definition change
+ * Stop xen-b-fencing ( xen-a ) due to node availability
+ * Migrate db ( xen-a -> xen-b )
+ * Migrate core-101 ( xen-a -> xen-b )
+ * Migrate core-200 ( xen-a -> xen-b )
+ * Migrate edge ( xen-a -> xen-b )
+ * Migrate base ( xen-a -> xen-b )
+
+Executing Cluster Transition:
+ * Resource action: xen-a-fencing stop on xen-b
+ * Resource action: xen-a-fencing start on xen-b
+ * Resource action: xen-a-fencing monitor=60000 on xen-b
+ * Resource action: xen-b-fencing stop on xen-a
+ * Resource action: db migrate_to on xen-a
+ * Resource action: core-101 migrate_to on xen-a
+ * Resource action: edge migrate_to on xen-a
+ * Resource action: db migrate_from on xen-b
+ * Resource action: db stop on xen-a
+ * Resource action: core-101 migrate_from on xen-b
+ * Resource action: core-101 stop on xen-a
+ * Resource action: core-200 migrate_to on xen-a
+ * Resource action: edge migrate_from on xen-b
+ * Resource action: edge stop on xen-a
+ * Resource action: base migrate_to on xen-a
+ * Pseudo action: db_start_0
+ * Pseudo action: core-101_start_0
+ * Resource action: core-200 migrate_from on xen-b
+ * Resource action: core-200 stop on xen-a
+ * Pseudo action: edge_start_0
+ * Resource action: base migrate_from on xen-b
+ * Resource action: base stop on xen-a
+ * Resource action: db monitor=10000 on xen-b
+ * Resource action: core-101 monitor=10000 on xen-b
+ * Pseudo action: core-200_start_0
+ * Resource action: edge monitor=10000 on xen-b
+ * Pseudo action: base_start_0
+ * Resource action: core-200 monitor=10000 on xen-b
+ * Resource action: base monitor=10000 on xen-b
+
+Revised Cluster Status:
+ * Node List:
+ * Node xen-a: standby
+ * Online: [ xen-b ]
+
+ * Full List of Resources:
+ * xen-a-fencing (stonith:external/ipmi): Started xen-b
+ * xen-b-fencing (stonith:external/ipmi): Stopped
+ * db (ocf:heartbeat:Xen): Started xen-b
+ * dbreplica (ocf:heartbeat:Xen): Started xen-b
+ * core-101 (ocf:heartbeat:Xen): Started xen-b
+ * core-200 (ocf:heartbeat:Xen): Started xen-b
+ * sysadmin (ocf:heartbeat:Xen): Started xen-b
+ * edge (ocf:heartbeat:Xen): Started xen-b
+ * base (ocf:heartbeat:Xen): Started xen-b
+ * Email_Alerting (ocf:heartbeat:MailTo): Started xen-b
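
order-serialize and order-serialize-set reach the same end state but interleave the Xen migrations differently; serialized ordering guarantees the migrations never overlap without imposing a hard dependency between the guests. The set form of such a constraint might look like this (ids illustrative):

    <rsc_order id="serialize-xen" kind="Serialize">
      <resource_set id="serialize-xen-set" sequential="true">
        <resource_ref id="db"/>
        <resource_ref id="core-101"/>
        <resource_ref id="core-200"/>
        <resource_ref id="edge"/>
        <resource_ref id="base"/>
      </resource_set>
    </rsc_order>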
diff --git a/cts/scheduler/summary/order-sets.summary b/cts/scheduler/summary/order-sets.summary
new file mode 100644
index 0000000..201ef43
--- /dev/null
+++ b/cts/scheduler/summary/order-sets.summary
@@ -0,0 +1,41 @@
+Current cluster status:
+ * Node List:
+ * Node ubuntu_2: standby (with active resources)
+ * Online: [ ubuntu_1 ]
+
+ * Full List of Resources:
+ * world1 (ocf:bbnd:world1test): Started ubuntu_2
+ * world2 (ocf:bbnd:world2test): Started ubuntu_2
+ * world3 (ocf:bbnd:world3test): Started ubuntu_2
+ * world4 (ocf:bbnd:world4test): Started ubuntu_2
+
+Transition Summary:
+ * Move world1 ( ubuntu_2 -> ubuntu_1 )
+ * Move world2 ( ubuntu_2 -> ubuntu_1 )
+ * Move world3 ( ubuntu_2 -> ubuntu_1 )
+ * Move world4 ( ubuntu_2 -> ubuntu_1 )
+
+Executing Cluster Transition:
+ * Resource action: world4 stop on ubuntu_2
+ * Resource action: world3 stop on ubuntu_2
+ * Resource action: world2 stop on ubuntu_2
+ * Resource action: world1 stop on ubuntu_2
+ * Resource action: world1 start on ubuntu_1
+ * Resource action: world2 start on ubuntu_1
+ * Resource action: world3 start on ubuntu_1
+ * Resource action: world4 start on ubuntu_1
+ * Resource action: world1 monitor=10000 on ubuntu_1
+ * Resource action: world2 monitor=10000 on ubuntu_1
+ * Resource action: world3 monitor=10000 on ubuntu_1
+ * Resource action: world4 monitor=10000 on ubuntu_1
+
+Revised Cluster Status:
+ * Node List:
+ * Node ubuntu_2: standby
+ * Online: [ ubuntu_1 ]
+
+ * Full List of Resources:
+ * world1 (ocf:bbnd:world1test): Started ubuntu_1
+ * world2 (ocf:bbnd:world2test): Started ubuntu_1
+ * world3 (ocf:bbnd:world3test): Started ubuntu_1
+ * world4 (ocf:bbnd:world4test): Started ubuntu_1
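
The stops run world4 back to world1 and the starts run world1 forward to world4, which is the signature of a sequential ordered set: stop order is the reverse of start order. A sketch of the constraint (ids illustrative):

    <rsc_order id="order-worlds" kind="Mandatory">
      <resource_set id="world-set" sequential="true">
        <resource_ref id="world1"/>
        <resource_ref id="world2"/>
        <resource_ref id="world3"/>
        <resource_ref id="world4"/>
      </resource_set>
    </rsc_order>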
diff --git a/cts/scheduler/summary/order-wrong-kind.summary b/cts/scheduler/summary/order-wrong-kind.summary
new file mode 100644
index 0000000..0e00bdf
--- /dev/null
+++ b/cts/scheduler/summary/order-wrong-kind.summary
@@ -0,0 +1,29 @@
+Schema validation of configuration is disabled (enabling is encouraged and prevents common misconfigurations)
+Current cluster status:
+ * Node List:
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * rsc3 (ocf:heartbeat:apache): Stopped
+ * rsc4 (ocf:heartbeat:apache): Started node1
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+ * Restart rsc2 ( node1 ) due to required rsc1 start
+
+Executing Cluster Transition:
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 stop on node1
+ * Resource action: rsc2 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * rsc3 (ocf:heartbeat:apache): Stopped
+ * rsc4 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/order1.summary b/cts/scheduler/summary/order1.summary
new file mode 100644
index 0000000..59028d7
--- /dev/null
+++ b/cts/scheduler/summary/order1.summary
@@ -0,0 +1,33 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * rsc2 (ocf:heartbeat:apache): Stopped
+ * rsc3 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+ * Start rsc2 ( node2 )
+ * Start rsc3 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 start on node2
+ * Resource action: rsc3 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node2
+ * rsc3 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/order2.summary b/cts/scheduler/summary/order2.summary
new file mode 100644
index 0000000..285d067
--- /dev/null
+++ b/cts/scheduler/summary/order2.summary
@@ -0,0 +1,39 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * rsc2 (ocf:heartbeat:apache): Stopped
+ * rsc3 (ocf:heartbeat:apache): Stopped
+ * rsc4 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+ * Start rsc2 ( node2 )
+ * Start rsc3 ( node1 )
+ * Start rsc4 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc4 monitor on node2
+ * Resource action: rsc4 monitor on node1
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc4 start on node2
+ * Resource action: rsc2 start on node2
+ * Resource action: rsc3 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node2
+ * rsc3 (ocf:heartbeat:apache): Started node1
+ * rsc4 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/order3.summary b/cts/scheduler/summary/order3.summary
new file mode 100644
index 0000000..9bba0f5
--- /dev/null
+++ b/cts/scheduler/summary/order3.summary
@@ -0,0 +1,39 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * rsc3 (ocf:heartbeat:apache): Started node1
+ * rsc4 (ocf:heartbeat:apache): Started node1
+
+Transition Summary:
+ * Move rsc1 ( node1 -> node2 )
+ * Move rsc2 ( node1 -> node2 )
+ * Move rsc3 ( node1 -> node2 )
+ * Move rsc4 ( node1 -> node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc3 stop on node1
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc4 monitor on node2
+ * Resource action: rsc2 stop on node1
+ * Resource action: rsc4 stop on node1
+ * Resource action: rsc1 stop on node1
+ * Resource action: rsc4 start on node2
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 start on node2
+ * Resource action: rsc3 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node2
+ * rsc2 (ocf:heartbeat:apache): Started node2
+ * rsc3 (ocf:heartbeat:apache): Started node2
+ * rsc4 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/order4.summary b/cts/scheduler/summary/order4.summary
new file mode 100644
index 0000000..59028d7
--- /dev/null
+++ b/cts/scheduler/summary/order4.summary
@@ -0,0 +1,33 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * rsc2 (ocf:heartbeat:apache): Stopped
+ * rsc3 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+ * Start rsc2 ( node2 )
+ * Start rsc3 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 start on node2
+ * Resource action: rsc3 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node2
+ * rsc3 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/order5.summary b/cts/scheduler/summary/order5.summary
new file mode 100644
index 0000000..6a841e3
--- /dev/null
+++ b/cts/scheduler/summary/order5.summary
@@ -0,0 +1,51 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * rsc3 (ocf:heartbeat:apache): Started node1
+ * rsc4 (ocf:heartbeat:apache): Started node1
+ * rsc5 (ocf:heartbeat:apache): Started node2
+ * rsc6 (ocf:heartbeat:apache): Started node2
+ * rsc7 (ocf:heartbeat:apache): Started node2
+ * rsc8 (ocf:heartbeat:apache): Started node2
+
+Transition Summary:
+ * Move rsc2 ( node1 -> node2 )
+ * Move rsc4 ( node1 -> node2 )
+ * Move rsc6 ( node2 -> node1 )
+ * Move rsc8 ( node2 -> node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc2 stop on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc4 stop on node1
+ * Resource action: rsc4 monitor on node2
+ * Resource action: rsc5 monitor on node1
+ * Resource action: rsc6 stop on node2
+ * Resource action: rsc6 monitor on node1
+ * Resource action: rsc7 monitor on node1
+ * Resource action: rsc8 stop on node2
+ * Resource action: rsc8 monitor on node1
+ * Resource action: rsc2 start on node2
+ * Resource action: rsc4 start on node2
+ * Resource action: rsc6 start on node1
+ * Resource action: rsc8 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node2
+ * rsc3 (ocf:heartbeat:apache): Started node1
+ * rsc4 (ocf:heartbeat:apache): Started node2
+ * rsc5 (ocf:heartbeat:apache): Started node2
+ * rsc6 (ocf:heartbeat:apache): Started node1
+ * rsc7 (ocf:heartbeat:apache): Started node2
+ * rsc8 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/order6.summary b/cts/scheduler/summary/order6.summary
new file mode 100644
index 0000000..6a841e3
--- /dev/null
+++ b/cts/scheduler/summary/order6.summary
@@ -0,0 +1,51 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * rsc3 (ocf:heartbeat:apache): Started node1
+ * rsc4 (ocf:heartbeat:apache): Started node1
+ * rsc5 (ocf:heartbeat:apache): Started node2
+ * rsc6 (ocf:heartbeat:apache): Started node2
+ * rsc7 (ocf:heartbeat:apache): Started node2
+ * rsc8 (ocf:heartbeat:apache): Started node2
+
+Transition Summary:
+ * Move rsc2 ( node1 -> node2 )
+ * Move rsc4 ( node1 -> node2 )
+ * Move rsc6 ( node2 -> node1 )
+ * Move rsc8 ( node2 -> node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc2 stop on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc4 stop on node1
+ * Resource action: rsc4 monitor on node2
+ * Resource action: rsc5 monitor on node1
+ * Resource action: rsc6 stop on node2
+ * Resource action: rsc6 monitor on node1
+ * Resource action: rsc7 monitor on node1
+ * Resource action: rsc8 stop on node2
+ * Resource action: rsc8 monitor on node1
+ * Resource action: rsc2 start on node2
+ * Resource action: rsc4 start on node2
+ * Resource action: rsc6 start on node1
+ * Resource action: rsc8 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node2
+ * rsc3 (ocf:heartbeat:apache): Started node1
+ * rsc4 (ocf:heartbeat:apache): Started node2
+ * rsc5 (ocf:heartbeat:apache): Started node2
+ * rsc6 (ocf:heartbeat:apache): Started node1
+ * rsc7 (ocf:heartbeat:apache): Started node2
+ * rsc8 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/order7.summary b/cts/scheduler/summary/order7.summary
new file mode 100644
index 0000000..1cc7681
--- /dev/null
+++ b/cts/scheduler/summary/order7.summary
@@ -0,0 +1,40 @@
+0 of 6 resource instances DISABLED and 1 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Stopped
+ * rsc3 (ocf:heartbeat:apache): Stopped
+ * rscA (ocf:heartbeat:apache): FAILED node1 (blocked)
+ * rscB (ocf:heartbeat:apache): Stopped
+ * rscC (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc2 ( node1 )
+ * Start rsc3 ( node1 )
+ * Start rscB ( node1 )
+ * Start rscC ( node1 ) due to unrunnable rscA start (blocked)
+
+Executing Cluster Transition:
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rscB monitor on node1
+ * Resource action: rscC monitor on node1
+ * Resource action: rsc2 start on node1
+ * Resource action: rsc3 start on node1
+ * Resource action: rscB start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * rsc3 (ocf:heartbeat:apache): Started node1
+ * rscA (ocf:heartbeat:apache): FAILED node1 (blocked)
+ * rscB (ocf:heartbeat:apache): Started node1
+ * rscC (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/order_constraint_stops_promoted.summary b/cts/scheduler/summary/order_constraint_stops_promoted.summary
new file mode 100644
index 0000000..e888be5
--- /dev/null
+++ b/cts/scheduler/summary/order_constraint_stops_promoted.summary
@@ -0,0 +1,44 @@
+1 of 2 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder fc16-builder2 ]
+
+ * Full List of Resources:
+ * Clone Set: PROMOTABLE_RSC_A [NATIVE_RSC_A] (promotable):
+ * Promoted: [ fc16-builder ]
+ * NATIVE_RSC_B (ocf:pacemaker:Dummy): Started fc16-builder2 (disabled)
+
+Transition Summary:
+ * Stop NATIVE_RSC_A:0 ( Promoted fc16-builder ) due to required NATIVE_RSC_B start
+ * Stop NATIVE_RSC_B ( fc16-builder2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: PROMOTABLE_RSC_A_pre_notify_demote_0
+ * Resource action: NATIVE_RSC_A:0 notify on fc16-builder
+ * Pseudo action: PROMOTABLE_RSC_A_confirmed-pre_notify_demote_0
+ * Pseudo action: PROMOTABLE_RSC_A_demote_0
+ * Resource action: NATIVE_RSC_A:0 demote on fc16-builder
+ * Pseudo action: PROMOTABLE_RSC_A_demoted_0
+ * Pseudo action: PROMOTABLE_RSC_A_post_notify_demoted_0
+ * Resource action: NATIVE_RSC_A:0 notify on fc16-builder
+ * Pseudo action: PROMOTABLE_RSC_A_confirmed-post_notify_demoted_0
+ * Pseudo action: PROMOTABLE_RSC_A_pre_notify_stop_0
+ * Resource action: NATIVE_RSC_A:0 notify on fc16-builder
+ * Pseudo action: PROMOTABLE_RSC_A_confirmed-pre_notify_stop_0
+ * Pseudo action: PROMOTABLE_RSC_A_stop_0
+ * Resource action: NATIVE_RSC_A:0 stop on fc16-builder
+ * Resource action: NATIVE_RSC_A:0 delete on fc16-builder2
+ * Pseudo action: PROMOTABLE_RSC_A_stopped_0
+ * Pseudo action: PROMOTABLE_RSC_A_post_notify_stopped_0
+ * Pseudo action: PROMOTABLE_RSC_A_confirmed-post_notify_stopped_0
+ * Resource action: NATIVE_RSC_B stop on fc16-builder2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder fc16-builder2 ]
+
+ * Full List of Resources:
+ * Clone Set: PROMOTABLE_RSC_A [NATIVE_RSC_A] (promotable):
+ * Stopped: [ fc16-builder fc16-builder2 ]
+ * NATIVE_RSC_B (ocf:pacemaker:Dummy): Stopped (disabled)
diff --git a/cts/scheduler/summary/order_constraint_stops_unpromoted.summary b/cts/scheduler/summary/order_constraint_stops_unpromoted.summary
new file mode 100644
index 0000000..2898d2e
--- /dev/null
+++ b/cts/scheduler/summary/order_constraint_stops_unpromoted.summary
@@ -0,0 +1,36 @@
+1 of 2 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * Clone Set: PROMOTABLE_RSC_A [NATIVE_RSC_A] (promotable):
+ * Unpromoted: [ fc16-builder ]
+ * NATIVE_RSC_B (ocf:pacemaker:Dummy): Started fc16-builder (disabled)
+
+Transition Summary:
+ * Stop NATIVE_RSC_A:0 ( Unpromoted fc16-builder ) due to required NATIVE_RSC_B start
+ * Stop NATIVE_RSC_B ( fc16-builder ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: PROMOTABLE_RSC_A_pre_notify_stop_0
+ * Resource action: NATIVE_RSC_A:0 notify on fc16-builder
+ * Pseudo action: PROMOTABLE_RSC_A_confirmed-pre_notify_stop_0
+ * Pseudo action: PROMOTABLE_RSC_A_stop_0
+ * Resource action: NATIVE_RSC_A:0 stop on fc16-builder
+ * Pseudo action: PROMOTABLE_RSC_A_stopped_0
+ * Pseudo action: PROMOTABLE_RSC_A_post_notify_stopped_0
+ * Pseudo action: PROMOTABLE_RSC_A_confirmed-post_notify_stopped_0
+ * Resource action: NATIVE_RSC_B stop on fc16-builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * Clone Set: PROMOTABLE_RSC_A [NATIVE_RSC_A] (promotable):
+ * Stopped: [ fc16-builder fc16-builder2 ]
+ * NATIVE_RSC_B (ocf:pacemaker:Dummy): Stopped (disabled)
diff --git a/cts/scheduler/summary/ordered-set-basic-startup.summary b/cts/scheduler/summary/ordered-set-basic-startup.summary
new file mode 100644
index 0000000..2554358
--- /dev/null
+++ b/cts/scheduler/summary/ordered-set-basic-startup.summary
@@ -0,0 +1,42 @@
+2 of 6 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Stopped
+ * B (ocf:pacemaker:Dummy): Stopped
+ * C (ocf:pacemaker:Dummy): Stopped (disabled)
+ * D (ocf:pacemaker:Dummy): Stopped (disabled)
+ * E (ocf:pacemaker:Dummy): Stopped
+ * F (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start A ( fc16-builder ) due to unrunnable C start (blocked)
+ * Start B ( fc16-builder )
+ * Start E ( fc16-builder ) due to unrunnable A start (blocked)
+ * Start F ( fc16-builder ) due to unrunnable D start (blocked)
+
+Executing Cluster Transition:
+ * Resource action: A monitor on fc16-builder
+ * Resource action: B monitor on fc16-builder
+ * Resource action: C monitor on fc16-builder
+ * Resource action: D monitor on fc16-builder
+ * Resource action: E monitor on fc16-builder
+ * Resource action: F monitor on fc16-builder
+ * Resource action: B start on fc16-builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ fc16-builder ]
+ * OFFLINE: [ fc16-builder2 ]
+
+ * Full List of Resources:
+ * A (ocf:pacemaker:Dummy): Stopped
+ * B (ocf:pacemaker:Dummy): Started fc16-builder
+ * C (ocf:pacemaker:Dummy): Stopped (disabled)
+ * D (ocf:pacemaker:Dummy): Stopped (disabled)
+ * E (ocf:pacemaker:Dummy): Stopped
+ * F (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ordered-set-natural.summary b/cts/scheduler/summary/ordered-set-natural.summary
new file mode 100644
index 0000000..b944e0d
--- /dev/null
+++ b/cts/scheduler/summary/ordered-set-natural.summary
@@ -0,0 +1,55 @@
+3 of 15 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Resource Group: rgroup:
+ * dummy1-1 (ocf:heartbeat:Dummy): Stopped
+ * dummy1-2 (ocf:heartbeat:Dummy): Stopped
+ * dummy1-3 (ocf:heartbeat:Dummy): Stopped (disabled)
+ * dummy1-4 (ocf:heartbeat:Dummy): Stopped
+ * dummy1-5 (ocf:heartbeat:Dummy): Stopped
+ * dummy2-1 (ocf:heartbeat:Dummy): Stopped
+ * dummy2-2 (ocf:heartbeat:Dummy): Stopped
+ * dummy2-3 (ocf:heartbeat:Dummy): Stopped (disabled)
+ * dummy3-1 (ocf:heartbeat:Dummy): Stopped
+ * dummy3-2 (ocf:heartbeat:Dummy): Stopped
+ * dummy3-3 (ocf:heartbeat:Dummy): Stopped (disabled)
+ * dummy3-4 (ocf:heartbeat:Dummy): Stopped
+ * dummy3-5 (ocf:heartbeat:Dummy): Stopped
+ * dummy2-4 (ocf:heartbeat:Dummy): Stopped
+ * dummy2-5 (ocf:heartbeat:Dummy): Stopped
+
+Transition Summary:
+ * Start dummy1-1 ( node1 ) due to no quorum (blocked)
+ * Start dummy1-2 ( node1 ) due to no quorum (blocked)
+ * Start dummy2-1 ( node2 ) due to no quorum (blocked)
+ * Start dummy2-2 ( node2 ) due to no quorum (blocked)
+ * Start dummy3-4 ( node1 ) due to no quorum (blocked)
+ * Start dummy3-5 ( node1 ) due to no quorum (blocked)
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Resource Group: rgroup:
+ * dummy1-1 (ocf:heartbeat:Dummy): Stopped
+ * dummy1-2 (ocf:heartbeat:Dummy): Stopped
+ * dummy1-3 (ocf:heartbeat:Dummy): Stopped (disabled)
+ * dummy1-4 (ocf:heartbeat:Dummy): Stopped
+ * dummy1-5 (ocf:heartbeat:Dummy): Stopped
+ * dummy2-1 (ocf:heartbeat:Dummy): Stopped
+ * dummy2-2 (ocf:heartbeat:Dummy): Stopped
+ * dummy2-3 (ocf:heartbeat:Dummy): Stopped (disabled)
+ * dummy3-1 (ocf:heartbeat:Dummy): Stopped
+ * dummy3-2 (ocf:heartbeat:Dummy): Stopped
+ * dummy3-3 (ocf:heartbeat:Dummy): Stopped (disabled)
+ * dummy3-4 (ocf:heartbeat:Dummy): Stopped
+ * dummy3-5 (ocf:heartbeat:Dummy): Stopped
+ * dummy2-4 (ocf:heartbeat:Dummy): Stopped
+ * dummy2-5 (ocf:heartbeat:Dummy): Stopped
diff --git a/cts/scheduler/summary/origin.summary b/cts/scheduler/summary/origin.summary
new file mode 100644
index 0000000..32514e2
--- /dev/null
+++ b/cts/scheduler/summary/origin.summary
@@ -0,0 +1,18 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * resD (ocf:heartbeat:Dummy): Started node1
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: resD monitor=3600000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * resD (ocf:heartbeat:Dummy): Started node1
diff --git a/cts/scheduler/summary/orphan-0.summary b/cts/scheduler/summary/orphan-0.summary
new file mode 100644
index 0000000..ddab295
--- /dev/null
+++ b/cts/scheduler/summary/orphan-0.summary
@@ -0,0 +1,38 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): ORPHANED Started c001n08 (unmanaged)
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr monitor on c001n08
+ * Resource action: DcIPaddr monitor on c001n03
+ * Resource action: DcIPaddr monitor on c001n01
+ * Resource action: rsc_c001n02 monitor on c001n08
+ * Resource action: rsc_c001n02 monitor on c001n03
+ * Resource action: rsc_c001n02 monitor on c001n01
+ * Resource action: rsc_c001n03 monitor=6000 on c001n03
+ * Resource action: rsc_c001n03 monitor on c001n08
+ * Resource action: rsc_c001n03 monitor on c001n02
+ * Resource action: rsc_c001n03 monitor on c001n01
+ * Resource action: rsc_c001n01 monitor on c001n08
+ * Resource action: rsc_c001n01 monitor on c001n03
+ * Resource action: rsc_c001n01 monitor on c001n02
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): ORPHANED Started c001n08 (unmanaged)
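
orphan-0 leaves the orphaned rsc_c001n08 running unmanaged, while orphan-1 and orphan-2 stop and delete it; the difference hinges on the stop-orphan-resources cluster option. A sketch of the setting that produces the behavior above (property-set id illustrative):

    <crm_config>
      <cluster_property_set id="cib-bootstrap-options">
        <!-- false: leave orphans alone (unmanaged); true (default): stop and remove them -->
        <nvpair id="opt-stop-orphans" name="stop-orphan-resources" value="false"/>
      </cluster_property_set>
    </crm_config>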
diff --git a/cts/scheduler/summary/orphan-1.summary b/cts/scheduler/summary/orphan-1.summary
new file mode 100644
index 0000000..f0774f6
--- /dev/null
+++ b/cts/scheduler/summary/orphan-1.summary
@@ -0,0 +1,42 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): ORPHANED Started c001n08
+
+Transition Summary:
+ * Stop rsc_c001n08 ( c001n08 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr monitor on c001n08
+ * Resource action: DcIPaddr monitor on c001n03
+ * Resource action: DcIPaddr monitor on c001n01
+ * Resource action: rsc_c001n02 monitor on c001n08
+ * Resource action: rsc_c001n02 monitor on c001n03
+ * Resource action: rsc_c001n02 monitor on c001n01
+ * Resource action: rsc_c001n02 cancel=5000 on c001n02
+ * Resource action: rsc_c001n03 monitor=6000 on c001n03
+ * Resource action: rsc_c001n03 monitor on c001n08
+ * Resource action: rsc_c001n03 monitor on c001n02
+ * Resource action: rsc_c001n03 monitor on c001n01
+ * Resource action: rsc_c001n03 cancel=5000 on c001n03
+ * Resource action: rsc_c001n01 monitor on c001n08
+ * Resource action: rsc_c001n01 monitor on c001n03
+ * Resource action: rsc_c001n01 monitor on c001n02
+ * Resource action: rsc_c001n08 stop on c001n08
+ * Resource action: rsc_c001n08 delete on c001n08
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
diff --git a/cts/scheduler/summary/orphan-2.summary b/cts/scheduler/summary/orphan-2.summary
new file mode 100644
index 0000000..07f7c5c
--- /dev/null
+++ b/cts/scheduler/summary/orphan-2.summary
@@ -0,0 +1,44 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): ORPHANED Started c001n08
+
+Transition Summary:
+ * Stop rsc_c001n08 ( c001n08 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr monitor on c001n08
+ * Resource action: DcIPaddr monitor on c001n03
+ * Resource action: DcIPaddr monitor on c001n01
+ * Resource action: rsc_c001n02 monitor on c001n08
+ * Resource action: rsc_c001n02 monitor on c001n03
+ * Resource action: rsc_c001n02 monitor on c001n01
+ * Resource action: rsc_c001n02 cancel=5000 on c001n02
+ * Resource action: rsc_c001n03 monitor=6000 on c001n03
+ * Resource action: rsc_c001n03 monitor on c001n08
+ * Resource action: rsc_c001n03 monitor on c001n02
+ * Resource action: rsc_c001n03 monitor on c001n01
+ * Resource action: rsc_c001n03 cancel=5000 on c001n03
+ * Resource action: rsc_c001n01 monitor on c001n08
+ * Resource action: rsc_c001n01 monitor on c001n03
+ * Resource action: rsc_c001n01 monitor on c001n02
+ * Cluster action: clear_failcount for rsc_c001n08 on c001n08
+ * Cluster action: clear_failcount for rsc_c001n08 on c001n02
+ * Resource action: rsc_c001n08 stop on c001n08
+ * Resource action: rsc_c001n08 delete on c001n08
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
diff --git a/cts/scheduler/summary/params-0.summary b/cts/scheduler/summary/params-0.summary
new file mode 100644
index 0000000..ee291fc
--- /dev/null
+++ b/cts/scheduler/summary/params-0.summary
@@ -0,0 +1,40 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr monitor on c001n08
+ * Resource action: DcIPaddr monitor on c001n03
+ * Resource action: DcIPaddr monitor on c001n01
+ * Resource action: rsc_c001n08 monitor on c001n03
+ * Resource action: rsc_c001n08 monitor on c001n02
+ * Resource action: rsc_c001n08 monitor on c001n01
+ * Resource action: rsc_c001n02 monitor on c001n08
+ * Resource action: rsc_c001n02 monitor on c001n03
+ * Resource action: rsc_c001n02 monitor on c001n01
+ * Resource action: rsc_c001n03 monitor on c001n08
+ * Resource action: rsc_c001n03 monitor on c001n02
+ * Resource action: rsc_c001n03 monitor on c001n01
+ * Resource action: rsc_c001n01 monitor on c001n08
+ * Resource action: rsc_c001n01 monitor on c001n03
+ * Resource action: rsc_c001n01 monitor on c001n02
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
diff --git a/cts/scheduler/summary/params-1.summary b/cts/scheduler/summary/params-1.summary
new file mode 100644
index 0000000..7150d36
--- /dev/null
+++ b/cts/scheduler/summary/params-1.summary
@@ -0,0 +1,47 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
+
+Transition Summary:
+ * Restart DcIPaddr ( c001n02 ) due to resource definition change
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr stop on c001n02
+ * Resource action: DcIPaddr monitor on c001n08
+ * Resource action: DcIPaddr monitor on c001n03
+ * Resource action: DcIPaddr monitor on c001n01
+ * Resource action: DcIPaddr start on c001n02
+ * Resource action: DcIPaddr monitor=5000 on c001n02
+ * Resource action: rsc_c001n08 monitor on c001n03
+ * Resource action: rsc_c001n08 monitor on c001n02
+ * Resource action: rsc_c001n08 monitor on c001n01
+ * Resource action: rsc_c001n08 monitor=5000 on c001n08
+ * Resource action: rsc_c001n02 monitor=6000 on c001n02
+ * Resource action: rsc_c001n02 monitor on c001n08
+ * Resource action: rsc_c001n02 monitor on c001n03
+ * Resource action: rsc_c001n02 monitor on c001n01
+ * Resource action: rsc_c001n02 cancel=5000 on c001n02
+ * Resource action: rsc_c001n03 monitor on c001n08
+ * Resource action: rsc_c001n03 monitor on c001n02
+ * Resource action: rsc_c001n03 monitor on c001n01
+ * Resource action: rsc_c001n01 monitor on c001n08
+ * Resource action: rsc_c001n01 monitor on c001n03
+ * Resource action: rsc_c001n01 monitor on c001n02
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
diff --git a/cts/scheduler/summary/params-2.summary b/cts/scheduler/summary/params-2.summary
new file mode 100644
index 0000000..c43892d
--- /dev/null
+++ b/cts/scheduler/summary/params-2.summary
@@ -0,0 +1,37 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * rsc1 (lsb:apache): Started node1
+ * rsc2 (lsb:apache): Started node2
+ * rsc3 (lsb:apache): Stopped
+
+Transition Summary:
+ * Stop rsc1 ( node1 ) due to node availability
+ * Restart rsc2 ( node2 )
+ * Start rsc3 ( node3 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node1
+ * Resource action: rsc1 monitor on node3
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc2 monitor on node3
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc2 stop on node2
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc3 delete on node3
+ * Cluster action: do_shutdown on node1
+ * Resource action: rsc2 delete on node2
+ * Resource action: rsc3 start on node3
+ * Resource action: rsc2 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * rsc1 (lsb:apache): Stopped
+ * rsc2 (lsb:apache): Started node2
+ * rsc3 (lsb:apache): Started node3
diff --git a/cts/scheduler/summary/params-3.summary b/cts/scheduler/summary/params-3.summary
new file mode 100644
index 0000000..de38fbf
--- /dev/null
+++ b/cts/scheduler/summary/params-3.summary
@@ -0,0 +1,47 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Starting c001n02
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
+
+Transition Summary:
+ * Restart DcIPaddr ( c001n02 )
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr monitor on c001n08
+ * Resource action: DcIPaddr monitor on c001n03
+ * Resource action: DcIPaddr monitor on c001n01
+ * Resource action: DcIPaddr stop on c001n02
+ * Resource action: rsc_c001n08 monitor on c001n03
+ * Resource action: rsc_c001n08 monitor on c001n02
+ * Resource action: rsc_c001n08 monitor on c001n01
+ * Resource action: rsc_c001n08 monitor=5000 on c001n08
+ * Resource action: rsc_c001n02 monitor=6000 on c001n02
+ * Resource action: rsc_c001n02 monitor on c001n08
+ * Resource action: rsc_c001n02 monitor on c001n03
+ * Resource action: rsc_c001n02 monitor on c001n01
+ * Resource action: rsc_c001n02 cancel=5000 on c001n02
+ * Resource action: rsc_c001n03 monitor on c001n08
+ * Resource action: rsc_c001n03 monitor on c001n02
+ * Resource action: rsc_c001n03 monitor on c001n01
+ * Resource action: rsc_c001n01 monitor on c001n08
+ * Resource action: rsc_c001n01 monitor on c001n03
+ * Resource action: rsc_c001n01 monitor on c001n02
+ * Resource action: DcIPaddr start on c001n02
+ * Resource action: DcIPaddr monitor=5000 on c001n02
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
diff --git a/cts/scheduler/summary/params-4.summary b/cts/scheduler/summary/params-4.summary
new file mode 100644
index 0000000..d6a7147
--- /dev/null
+++ b/cts/scheduler/summary/params-4.summary
@@ -0,0 +1,46 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
+
+Transition Summary:
+ * Reload DcIPaddr ( c001n02 )
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr monitor on c001n08
+ * Resource action: DcIPaddr monitor on c001n03
+ * Resource action: DcIPaddr monitor on c001n01
+ * Resource action: DcIPaddr reload-agent on c001n02
+ * Resource action: DcIPaddr monitor=5000 on c001n02
+ * Resource action: rsc_c001n08 monitor on c001n03
+ * Resource action: rsc_c001n08 monitor on c001n02
+ * Resource action: rsc_c001n08 monitor on c001n01
+ * Resource action: rsc_c001n08 monitor=5000 on c001n08
+ * Resource action: rsc_c001n02 monitor=6000 on c001n02
+ * Resource action: rsc_c001n02 monitor on c001n08
+ * Resource action: rsc_c001n02 monitor on c001n03
+ * Resource action: rsc_c001n02 monitor on c001n01
+ * Resource action: rsc_c001n02 cancel=5000 on c001n02
+ * Resource action: rsc_c001n03 monitor on c001n08
+ * Resource action: rsc_c001n03 monitor on c001n02
+ * Resource action: rsc_c001n03 monitor on c001n01
+ * Resource action: rsc_c001n01 monitor on c001n08
+ * Resource action: rsc_c001n01 monitor on c001n03
+ * Resource action: rsc_c001n01 monitor on c001n02
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
diff --git a/cts/scheduler/summary/params-5.summary b/cts/scheduler/summary/params-5.summary
new file mode 100644
index 0000000..7150d36
--- /dev/null
+++ b/cts/scheduler/summary/params-5.summary
@@ -0,0 +1,47 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
+
+Transition Summary:
+ * Restart DcIPaddr ( c001n02 ) due to resource definition change
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr stop on c001n02
+ * Resource action: DcIPaddr monitor on c001n08
+ * Resource action: DcIPaddr monitor on c001n03
+ * Resource action: DcIPaddr monitor on c001n01
+ * Resource action: DcIPaddr start on c001n02
+ * Resource action: DcIPaddr monitor=5000 on c001n02
+ * Resource action: rsc_c001n08 monitor on c001n03
+ * Resource action: rsc_c001n08 monitor on c001n02
+ * Resource action: rsc_c001n08 monitor on c001n01
+ * Resource action: rsc_c001n08 monitor=5000 on c001n08
+ * Resource action: rsc_c001n02 monitor=6000 on c001n02
+ * Resource action: rsc_c001n02 monitor on c001n08
+ * Resource action: rsc_c001n02 monitor on c001n03
+ * Resource action: rsc_c001n02 monitor on c001n01
+ * Resource action: rsc_c001n02 cancel=5000 on c001n02
+ * Resource action: rsc_c001n03 monitor on c001n08
+ * Resource action: rsc_c001n03 monitor on c001n02
+ * Resource action: rsc_c001n03 monitor on c001n01
+ * Resource action: rsc_c001n01 monitor on c001n08
+ * Resource action: rsc_c001n01 monitor on c001n03
+ * Resource action: rsc_c001n01 monitor on c001n02
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
diff --git a/cts/scheduler/summary/params-6.summary b/cts/scheduler/summary/params-6.summary
new file mode 100644
index 0000000..4b5c480
--- /dev/null
+++ b/cts/scheduler/summary/params-6.summary
@@ -0,0 +1,379 @@
+90 of 337 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ mgmt01 v03-a v03-b ]
+
+ * Full List of Resources:
+ * stonith-v02-a (stonith:fence_ipmilan): Stopped (disabled)
+ * stonith-v02-b (stonith:fence_ipmilan): Stopped (disabled)
+ * stonith-v02-c (stonith:fence_ipmilan): Stopped (disabled)
+ * stonith-v02-d (stonith:fence_ipmilan): Stopped (disabled)
+ * stonith-mgmt01 (stonith:fence_xvm): Started v03-a
+ * stonith-mgmt02 (stonith:meatware): Started v03-a
+ * stonith-v03-c (stonith:fence_ipmilan): Stopped (disabled)
+ * stonith-v03-a (stonith:fence_ipmilan): Started v03-b
+ * stonith-v03-b (stonith:fence_ipmilan): Started mgmt01
+ * stonith-v03-d (stonith:fence_ipmilan): Stopped (disabled)
+ * Clone Set: cl-clvmd [clvmd]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-dlm [dlm]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-iscsid [iscsid]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-libvirtd [libvirtd]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-multipathd [multipathd]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-node-params [node-params]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan1-if [vlan1-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan101-if [vlan101-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan102-if [vlan102-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan103-if [vlan103-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan104-if [vlan104-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan3-if [vlan3-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan4-if [vlan4-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan5-if [vlan5-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan900-if [vlan900-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan909-if [vlan909-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-libvirt-images-fs [libvirt-images-fs]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-libvirt-install-fs [libvirt-install-fs]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-vds-ok-pool-0-iscsi [vds-ok-pool-0-iscsi]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-vds-ok-pool-0-vg [vds-ok-pool-0-vg]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-vds-ok-pool-1-iscsi [vds-ok-pool-1-iscsi]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-vds-ok-pool-1-vg [vds-ok-pool-1-vg]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-libvirt-images-pool [libvirt-images-pool]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vds-ok-pool-0-pool [vds-ok-pool-0-pool]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vds-ok-pool-1-pool [vds-ok-pool-1-pool]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * git.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * Clone Set: cl-libvirt-qpid [libvirt-qpid]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * vd01-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * vd01-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * vd01-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * vd01-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * vd02-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd02-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd02-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd02-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd03-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd03-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd03-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd03-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd04-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd04-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd04-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd04-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * f13-x64-devel.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * eu2.ca-pages.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * zakaz.transferrus.ru-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * Clone Set: cl-vlan200-if [vlan200-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * anbriz-gw-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * anbriz-work-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * lenny-x32-devel-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * vptest1.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest2.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest3.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest4.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest5.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest6.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest7.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest8.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest9.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest10.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest11.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest12.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest13.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest14.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest15.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest16.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest17.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest18.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest19.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest20.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest21.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest22.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest23.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest24.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest25.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest26.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest27.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest28.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest29.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest30.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest31.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest32.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest33.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest34.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest35.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest36.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest37.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest38.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest39.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest40.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest41.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest42.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest43.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest44.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest45.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest46.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest47.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest48.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest49.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest50.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest51.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest52.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest53.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest54.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest55.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest56.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest57.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest58.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest59.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest60.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * sl6-x64-devel.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * dist.express-consult.org-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * eu1.ca-pages.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * gotin-bbb-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * maxb-c55-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * metae.ru-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * rodovoepomestie.ru-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * ubuntu9.10-gotin-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * c5-x64-devel.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * Clone Set: cl-mcast-test-net [mcast-test-net]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * dist.fly-uni.org-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+
+Transition Summary:
+ * Reload c5-x64-devel.vds-ok.com-vm ( v03-a )
+
+Executing Cluster Transition:
+ * Resource action: vd01-b.cdev.ttc.prague.cz.vds-ok.com-vm monitor=10000 on v03-b
+ * Resource action: vd01-d.cdev.ttc.prague.cz.vds-ok.com-vm monitor=10000 on v03-b
+ * Resource action: c5-x64-devel.vds-ok.com-vm reload-agent on v03-a
+ * Resource action: c5-x64-devel.vds-ok.com-vm monitor=10000 on v03-a
+ * Pseudo action: load_stopped_v03-b
+ * Pseudo action: load_stopped_v03-a
+ * Pseudo action: load_stopped_mgmt01
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ mgmt01 v03-a v03-b ]
+
+ * Full List of Resources:
+ * stonith-v02-a (stonith:fence_ipmilan): Stopped (disabled)
+ * stonith-v02-b (stonith:fence_ipmilan): Stopped (disabled)
+ * stonith-v02-c (stonith:fence_ipmilan): Stopped (disabled)
+ * stonith-v02-d (stonith:fence_ipmilan): Stopped (disabled)
+ * stonith-mgmt01 (stonith:fence_xvm): Started v03-a
+ * stonith-mgmt02 (stonith:meatware): Started v03-a
+ * stonith-v03-c (stonith:fence_ipmilan): Stopped (disabled)
+ * stonith-v03-a (stonith:fence_ipmilan): Started v03-b
+ * stonith-v03-b (stonith:fence_ipmilan): Started mgmt01
+ * stonith-v03-d (stonith:fence_ipmilan): Stopped (disabled)
+ * Clone Set: cl-clvmd [clvmd]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-dlm [dlm]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-iscsid [iscsid]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-libvirtd [libvirtd]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-multipathd [multipathd]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-node-params [node-params]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan1-if [vlan1-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan101-if [vlan101-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan102-if [vlan102-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan103-if [vlan103-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan104-if [vlan104-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan3-if [vlan3-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan4-if [vlan4-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan5-if [vlan5-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan900-if [vlan900-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vlan909-if [vlan909-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-libvirt-images-fs [libvirt-images-fs]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-libvirt-install-fs [libvirt-install-fs]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-vds-ok-pool-0-iscsi [vds-ok-pool-0-iscsi]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-vds-ok-pool-0-vg [vds-ok-pool-0-vg]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-vds-ok-pool-1-iscsi [vds-ok-pool-1-iscsi]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-vds-ok-pool-1-vg [vds-ok-pool-1-vg]:
+ * Started: [ mgmt01 v03-a v03-b ]
+ * Clone Set: cl-libvirt-images-pool [libvirt-images-pool]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vds-ok-pool-0-pool [vds-ok-pool-0-pool]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * Clone Set: cl-vds-ok-pool-1-pool [vds-ok-pool-1-pool]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * git.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * Clone Set: cl-libvirt-qpid [libvirt-qpid]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * vd01-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * vd01-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * vd01-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * vd01-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * vd02-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd02-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd02-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd02-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd03-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd03-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd03-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd03-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd04-a.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd04-b.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd04-c.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vd04-d.cdev.ttc.prague.cz.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * f13-x64-devel.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * eu2.ca-pages.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * zakaz.transferrus.ru-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * Clone Set: cl-vlan200-if [vlan200-if]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * anbriz-gw-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * anbriz-work-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * lenny-x32-devel-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * vptest1.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest2.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest3.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest4.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest5.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest6.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest7.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest8.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest9.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest10.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest11.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest12.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest13.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest14.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest15.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest16.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest17.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest18.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest19.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest20.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest21.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest22.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest23.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest24.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest25.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest26.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest27.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest28.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest29.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest30.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest31.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest32.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest33.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest34.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest35.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest36.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest37.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest38.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest39.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest40.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest41.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest42.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest43.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest44.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest45.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest46.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest47.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest48.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest49.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest50.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest51.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest52.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest53.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest54.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest55.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest56.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest57.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest58.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest59.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * vptest60.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * sl6-x64-devel.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-b
+ * dist.express-consult.org-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * eu1.ca-pages.com-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * gotin-bbb-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * maxb-c55-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * metae.ru-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * rodovoepomestie.ru-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * ubuntu9.10-gotin-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
+ * c5-x64-devel.vds-ok.com-vm (ocf:vds-ok:VirtualDomain): Started v03-a
+ * Clone Set: cl-mcast-test-net [mcast-test-net]:
+ * Started: [ v03-a v03-b ]
+ * Stopped: [ mgmt01 ]
+ * dist.fly-uni.org-vm (ocf:vds-ok:VirtualDomain): Stopped (disabled)
diff --git a/cts/scheduler/summary/partial-live-migration-multiple-active.summary b/cts/scheduler/summary/partial-live-migration-multiple-active.summary
new file mode 100644
index 0000000..41819e0
--- /dev/null
+++ b/cts/scheduler/summary/partial-live-migration-multiple-active.summary
@@ -0,0 +1,25 @@
+Using the original execution date of: 2021-03-02 21:28:21Z
+Current cluster status:
+ * Node List:
+ * Node node2: standby (with active resources)
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * migrator (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Move migrator ( node2 -> node1 )
+
+Executing Cluster Transition:
+ * Resource action: migrator stop on node2
+ * Resource action: migrator start on node1
+ * Resource action: migrator monitor=10000 on node1
+Using the original execution date of: 2021-03-02 21:28:21Z
+
+Revised Cluster Status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * migrator (ocf:pacemaker:Dummy): Started node1
diff --git a/cts/scheduler/summary/partial-unmanaged-group.summary b/cts/scheduler/summary/partial-unmanaged-group.summary
new file mode 100644
index 0000000..9cb68bc
--- /dev/null
+++ b/cts/scheduler/summary/partial-unmanaged-group.summary
@@ -0,0 +1,41 @@
+Using the original execution date of: 2020-01-20 21:19:17Z
+Current cluster status:
+ * Node List:
+ * Online: [ rhel8-1 rhel8-2 rhel8-4 rhel8-5 ]
+ * OFFLINE: [ rhel8-3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel8-1
+ * rsc1 (ocf:pacemaker:Dummy): Started rhel8-4
+ * rsc2 (ocf:pacemaker:Dummy): Started rhel8-5
+ * Resource Group: grp1:
+ * grp1a (ocf:pacemaker:Dummy): Started rhel8-2
+ * interloper (ocf:pacemaker:Dummy): Stopped
+ * grp1b (ocf:pacemaker:Dummy): Started rhel8-2 (unmanaged)
+ * grp1c (ocf:pacemaker:Dummy): Started rhel8-2 (unmanaged)
+
+Transition Summary:
+ * Start interloper ( rhel8-2 ) due to unrunnable grp1b stop (blocked)
+
+Executing Cluster Transition:
+ * Pseudo action: grp1_start_0
+ * Resource action: interloper monitor on rhel8-5
+ * Resource action: interloper monitor on rhel8-4
+ * Resource action: interloper monitor on rhel8-2
+ * Resource action: interloper monitor on rhel8-1
+Using the original execution date of: 2020-01-20 21:19:17Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel8-1 rhel8-2 rhel8-4 rhel8-5 ]
+ * OFFLINE: [ rhel8-3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel8-1
+ * rsc1 (ocf:pacemaker:Dummy): Started rhel8-4
+ * rsc2 (ocf:pacemaker:Dummy): Started rhel8-5
+ * Resource Group: grp1:
+ * grp1a (ocf:pacemaker:Dummy): Started rhel8-2
+ * interloper (ocf:pacemaker:Dummy): Stopped
+ * grp1b (ocf:pacemaker:Dummy): Started rhel8-2 (unmanaged)
+ * grp1c (ocf:pacemaker:Dummy): Started rhel8-2 (unmanaged)
diff --git a/cts/scheduler/summary/per-node-attrs.summary b/cts/scheduler/summary/per-node-attrs.summary
new file mode 100644
index 0000000..718a845
--- /dev/null
+++ b/cts/scheduler/summary/per-node-attrs.summary
@@ -0,0 +1,22 @@
+Current cluster status:
+ * Node List:
+ * Online: [ pcmk-1 pcmk-2 pcmk-3 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:IPaddr2): Stopped
+
+Transition Summary:
+ * Start dummy ( pcmk-1 )
+
+Executing Cluster Transition:
+ * Resource action: dummy monitor on pcmk-3
+ * Resource action: dummy monitor on pcmk-2
+ * Resource action: dummy monitor on pcmk-1
+ * Resource action: dummy start on pcmk-1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ pcmk-1 pcmk-2 pcmk-3 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:IPaddr2): Started pcmk-1
diff --git a/cts/scheduler/summary/per-op-failcount.summary b/cts/scheduler/summary/per-op-failcount.summary
new file mode 100644
index 0000000..a86c294
--- /dev/null
+++ b/cts/scheduler/summary/per-op-failcount.summary
@@ -0,0 +1,34 @@
+Using the original execution date of: 2017-04-06 09:04:22Z
+Current cluster status:
+ * Node List:
+ * Node rh73-01-snmp: UNCLEAN (online)
+ * Online: [ rh73-02-snmp ]
+
+ * Full List of Resources:
+ * prmDummy (ocf:pacemaker:Dummy): FAILED rh73-01-snmp
+ * prmStonith1-1 (stonith:external/ssh): Started rh73-02-snmp
+ * prmStonith2-1 (stonith:external/ssh): Started rh73-01-snmp
+
+Transition Summary:
+ * Fence (reboot) rh73-01-snmp 'prmDummy failed there'
+ * Recover prmDummy ( rh73-01-snmp -> rh73-02-snmp )
+ * Move prmStonith2-1 ( rh73-01-snmp -> rh73-02-snmp )
+
+Executing Cluster Transition:
+ * Pseudo action: prmStonith2-1_stop_0
+ * Fencing rh73-01-snmp (reboot)
+ * Pseudo action: prmDummy_stop_0
+ * Resource action: prmStonith2-1 start on rh73-02-snmp
+ * Resource action: prmDummy start on rh73-02-snmp
+ * Resource action: prmDummy monitor=10000 on rh73-02-snmp
+Using the original execution date of: 2017-04-06 09:04:22Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rh73-02-snmp ]
+ * OFFLINE: [ rh73-01-snmp ]
+
+ * Full List of Resources:
+ * prmDummy (ocf:pacemaker:Dummy): Started rh73-02-snmp
+ * prmStonith1-1 (stonith:external/ssh): Started rh73-02-snmp
+ * prmStonith2-1 (stonith:external/ssh): Started rh73-02-snmp
diff --git a/cts/scheduler/summary/placement-capacity.summary b/cts/scheduler/summary/placement-capacity.summary
new file mode 100644
index 0000000..b17d7f0
--- /dev/null
+++ b/cts/scheduler/summary/placement-capacity.summary
@@ -0,0 +1,23 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Stop rsc1 ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node1
+ * Pseudo action: load_stopped_node2
+ * Pseudo action: load_stopped_node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/placement-location.summary b/cts/scheduler/summary/placement-location.summary
new file mode 100644
index 0000000..f38df74
--- /dev/null
+++ b/cts/scheduler/summary/placement-location.summary
@@ -0,0 +1,25 @@
+Current cluster status:
+ * Node List:
+ * Node node1: standby (with active resources)
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Stop rsc1 ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node1
+ * Pseudo action: load_stopped_node2
+ * Pseudo action: load_stopped_node1
+
+Revised Cluster Status:
+ * Node List:
+ * Node node1: standby
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/placement-priority.summary b/cts/scheduler/summary/placement-priority.summary
new file mode 100644
index 0000000..71843ca
--- /dev/null
+++ b/cts/scheduler/summary/placement-priority.summary
@@ -0,0 +1,25 @@
+Current cluster status:
+ * Node List:
+ * Node node1: standby (with active resources)
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+
+Transition Summary:
+ * Stop rsc1 ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node1
+ * Pseudo action: load_stopped_node1
+ * Pseudo action: load_stopped_node2
+
+Revised Cluster Status:
+ * Node List:
+ * Node node1: standby
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/placement-stickiness.summary b/cts/scheduler/summary/placement-stickiness.summary
new file mode 100644
index 0000000..f38df74
--- /dev/null
+++ b/cts/scheduler/summary/placement-stickiness.summary
@@ -0,0 +1,25 @@
+Current cluster status:
+ * Node List:
+ * Node node1: standby (with active resources)
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Stop rsc1 ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node1
+ * Pseudo action: load_stopped_node2
+ * Pseudo action: load_stopped_node1
+
+Revised Cluster Status:
+ * Node List:
+ * Node node1: standby
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/primitive-with-group-with-clone.summary b/cts/scheduler/summary/primitive-with-group-with-clone.summary
new file mode 100644
index 0000000..aa0b96f
--- /dev/null
+++ b/cts/scheduler/summary/primitive-with-group-with-clone.summary
@@ -0,0 +1,71 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 node4 node5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * Clone Set: rsc2-clone [rsc2]:
+ * Stopped: [ node1 node2 node3 node4 node5 ]
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * Resource Group: group1:
+ * group1rsc1 (ocf:pacemaker:Dummy): Stopped
+ * group1rsc2 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc2:0 ( node5 )
+ * Start rsc2:1 ( node2 )
+ * Start rsc2:2 ( node3 )
+ * Start rsc1 ( node5 )
+ * Start group1rsc1 ( node5 )
+ * Start group1rsc2 ( node5 )
+
+Executing Cluster Transition:
+ * Resource action: rsc2:0 monitor on node5
+ * Resource action: rsc2:0 monitor on node4
+ * Resource action: rsc2:0 monitor on node1
+ * Resource action: rsc2:1 monitor on node2
+ * Resource action: rsc2:2 monitor on node3
+ * Pseudo action: rsc2-clone_start_0
+ * Resource action: rsc1 monitor on node5
+ * Resource action: rsc1 monitor on node4
+ * Resource action: rsc1 monitor on node3
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Pseudo action: group1_start_0
+ * Resource action: group1rsc1 monitor on node5
+ * Resource action: group1rsc1 monitor on node4
+ * Resource action: group1rsc1 monitor on node3
+ * Resource action: group1rsc1 monitor on node2
+ * Resource action: group1rsc1 monitor on node1
+ * Resource action: group1rsc2 monitor on node5
+ * Resource action: group1rsc2 monitor on node4
+ * Resource action: group1rsc2 monitor on node3
+ * Resource action: group1rsc2 monitor on node2
+ * Resource action: group1rsc2 monitor on node1
+ * Resource action: rsc2:0 start on node5
+ * Resource action: rsc2:1 start on node2
+ * Resource action: rsc2:2 start on node3
+ * Pseudo action: rsc2-clone_running_0
+ * Resource action: rsc1 start on node5
+ * Resource action: group1rsc1 start on node5
+ * Resource action: group1rsc2 start on node5
+ * Resource action: rsc2:0 monitor=10000 on node5
+ * Resource action: rsc2:1 monitor=10000 on node2
+ * Resource action: rsc2:2 monitor=10000 on node3
+ * Resource action: rsc1 monitor=10000 on node5
+ * Pseudo action: group1_running_0
+ * Resource action: group1rsc1 monitor=10000 on node5
+ * Resource action: group1rsc2 monitor=10000 on node5
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 node4 node5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * Clone Set: rsc2-clone [rsc2]:
+ * Started: [ node2 node3 node5 ]
+ * rsc1 (ocf:pacemaker:Dummy): Started node5
+ * Resource Group: group1:
+ * group1rsc1 (ocf:pacemaker:Dummy): Started node5
+ * group1rsc2 (ocf:pacemaker:Dummy): Started node5
diff --git a/cts/scheduler/summary/primitive-with-group-with-promoted.summary b/cts/scheduler/summary/primitive-with-group-with-promoted.summary
new file mode 100644
index 0000000..b92ce1e
--- /dev/null
+++ b/cts/scheduler/summary/primitive-with-group-with-promoted.summary
@@ -0,0 +1,75 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 node4 node5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * Clone Set: rsc2-clone [rsc2] (promotable):
+ * Stopped: [ node1 node2 node3 node4 node5 ]
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * Resource Group: group1:
+ * group1rsc1 (ocf:pacemaker:Dummy): Stopped
+ * group1rsc2 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Promote rsc2:0 ( Stopped -> Promoted node5 )
+ * Start rsc2:1 ( node2 )
+ * Start rsc2:2 ( node3 )
+ * Start rsc1 ( node5 )
+ * Start group1rsc1 ( node5 )
+ * Start group1rsc2 ( node5 )
+
+Executing Cluster Transition:
+ * Resource action: rsc2:0 monitor on node5
+ * Resource action: rsc2:0 monitor on node4
+ * Resource action: rsc2:0 monitor on node1
+ * Resource action: rsc2:1 monitor on node2
+ * Resource action: rsc2:2 monitor on node3
+ * Pseudo action: rsc2-clone_start_0
+ * Resource action: rsc1 monitor on node5
+ * Resource action: rsc1 monitor on node4
+ * Resource action: rsc1 monitor on node3
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Pseudo action: group1_start_0
+ * Resource action: group1rsc1 monitor on node5
+ * Resource action: group1rsc1 monitor on node4
+ * Resource action: group1rsc1 monitor on node3
+ * Resource action: group1rsc1 monitor on node2
+ * Resource action: group1rsc1 monitor on node1
+ * Resource action: group1rsc2 monitor on node5
+ * Resource action: group1rsc2 monitor on node4
+ * Resource action: group1rsc2 monitor on node3
+ * Resource action: group1rsc2 monitor on node2
+ * Resource action: group1rsc2 monitor on node1
+ * Resource action: rsc2:0 start on node5
+ * Resource action: rsc2:1 start on node2
+ * Resource action: rsc2:2 start on node3
+ * Pseudo action: rsc2-clone_running_0
+ * Resource action: rsc1 start on node5
+ * Resource action: group1rsc1 start on node5
+ * Resource action: group1rsc2 start on node5
+ * Resource action: rsc2:1 monitor=11000 on node2
+ * Resource action: rsc2:2 monitor=11000 on node3
+ * Pseudo action: rsc2-clone_promote_0
+ * Resource action: rsc1 monitor=10000 on node5
+ * Pseudo action: group1_running_0
+ * Resource action: group1rsc1 monitor=10000 on node5
+ * Resource action: group1rsc2 monitor=10000 on node5
+ * Resource action: rsc2:0 promote on node5
+ * Pseudo action: rsc2-clone_promoted_0
+ * Resource action: rsc2:0 monitor=10000 on node5
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 node4 node5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * Clone Set: rsc2-clone [rsc2] (promotable):
+ * Promoted: [ node5 ]
+ * Unpromoted: [ node2 node3 ]
+ * rsc1 (ocf:pacemaker:Dummy): Started node5
+ * Resource Group: group1:
+ * group1rsc1 (ocf:pacemaker:Dummy): Started node5
+ * group1rsc2 (ocf:pacemaker:Dummy): Started node5
diff --git a/cts/scheduler/summary/primitive-with-unrunnable-group.summary b/cts/scheduler/summary/primitive-with-unrunnable-group.summary
new file mode 100644
index 0000000..5b6c382
--- /dev/null
+++ b/cts/scheduler/summary/primitive-with-unrunnable-group.summary
@@ -0,0 +1,37 @@
+1 of 5 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 node4 node5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * Resource Group: group1:
+ * group1a (ocf:pacemaker:Dummy): Stopped
+ * group1b (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped (disabled)
+
+Transition Summary:
+ * Start rsc1 ( node2 ) due to colocation with group1 (blocked)
+ * Start group1a ( node2 ) due to unrunnable rsc2 start (blocked)
+ * Start group1b ( node2 ) due to unrunnable rsc2 start (blocked)
+
+Executing Cluster Transition:
+ * Resource action: rsc2 monitor on node5
+ * Resource action: rsc2 monitor on node4
+ * Resource action: rsc2 monitor on node3
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 node4 node5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * Resource Group: group1:
+ * group1a (ocf:pacemaker:Dummy): Stopped
+ * group1b (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped (disabled)
diff --git a/cts/scheduler/summary/priority-fencing-delay.summary b/cts/scheduler/summary/priority-fencing-delay.summary
new file mode 100644
index 0000000..ce5aff2
--- /dev/null
+++ b/cts/scheduler/summary/priority-fencing-delay.summary
@@ -0,0 +1,104 @@
+Current cluster status:
+ * Node List:
+ * Node kiff-01: UNCLEAN (offline)
+ * Online: [ kiff-02 ]
+ * GuestOnline: [ lxc-01_kiff-02 lxc-02_kiff-02 ]
+
+ * Full List of Resources:
+ * vm-fs (ocf:heartbeat:Filesystem): FAILED lxc-01_kiff-01
+ * R-lxc-01_kiff-02 (ocf:heartbeat:VirtualDomain): Started kiff-02
+ * fence-kiff-01 (stonith:fence_ipmilan): Started kiff-02
+ * fence-kiff-02 (stonith:fence_ipmilan): Started kiff-01 (UNCLEAN)
+ * Clone Set: dlm-clone [dlm]:
+ * dlm (ocf:pacemaker:controld): Started kiff-01 (UNCLEAN)
+ * Started: [ kiff-02 ]
+ * Stopped: [ lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ]
+ * Clone Set: clvmd-clone [clvmd]:
+ * clvmd (ocf:heartbeat:clvm): Started kiff-01 (UNCLEAN)
+ * Started: [ kiff-02 ]
+ * Stopped: [ lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ]
+ * Clone Set: shared0-clone [shared0]:
+ * shared0 (ocf:heartbeat:Filesystem): Started kiff-01 (UNCLEAN)
+ * Started: [ kiff-02 ]
+ * Stopped: [ lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ]
+ * R-lxc-01_kiff-01 (ocf:heartbeat:VirtualDomain): FAILED kiff-01 (UNCLEAN)
+ * R-lxc-02_kiff-01 (ocf:heartbeat:VirtualDomain): Started kiff-01 (UNCLEAN)
+ * R-lxc-02_kiff-02 (ocf:heartbeat:VirtualDomain): Started kiff-02
+
+Transition Summary:
+ * Fence (reboot) lxc-02_kiff-01 (resource: R-lxc-02_kiff-01) 'guest is unclean'
+ * Fence (reboot) lxc-01_kiff-01 (resource: R-lxc-01_kiff-01) 'guest is unclean'
+ * Fence (reboot) kiff-01 'peer is no longer part of the cluster'
+ * Recover vm-fs ( lxc-01_kiff-01 )
+ * Move fence-kiff-02 ( kiff-01 -> kiff-02 )
+ * Stop dlm:0 ( kiff-01 ) due to node availability
+ * Stop clvmd:0 ( kiff-01 ) due to node availability
+ * Stop shared0:0 ( kiff-01 ) due to node availability
+ * Recover R-lxc-01_kiff-01 ( kiff-01 -> kiff-02 )
+ * Move R-lxc-02_kiff-01 ( kiff-01 -> kiff-02 )
+ * Move lxc-01_kiff-01 ( kiff-01 -> kiff-02 )
+ * Move lxc-02_kiff-01 ( kiff-01 -> kiff-02 )
+
+Executing Cluster Transition:
+ * Resource action: vm-fs monitor on lxc-02_kiff-02
+ * Resource action: vm-fs monitor on lxc-01_kiff-02
+ * Pseudo action: fence-kiff-02_stop_0
+ * Resource action: dlm monitor on lxc-02_kiff-02
+ * Resource action: dlm monitor on lxc-01_kiff-02
+ * Resource action: clvmd monitor on lxc-02_kiff-02
+ * Resource action: clvmd monitor on lxc-01_kiff-02
+ * Resource action: shared0 monitor on lxc-02_kiff-02
+ * Resource action: shared0 monitor on lxc-01_kiff-02
+ * Pseudo action: lxc-01_kiff-01_stop_0
+ * Pseudo action: lxc-02_kiff-01_stop_0
+ * Fencing kiff-01 (reboot)
+ * Pseudo action: R-lxc-01_kiff-01_stop_0
+ * Pseudo action: R-lxc-02_kiff-01_stop_0
+ * Pseudo action: stonith-lxc-02_kiff-01-reboot on lxc-02_kiff-01
+ * Pseudo action: stonith-lxc-01_kiff-01-reboot on lxc-01_kiff-01
+ * Pseudo action: vm-fs_stop_0
+ * Resource action: fence-kiff-02 start on kiff-02
+ * Pseudo action: shared0-clone_stop_0
+ * Resource action: R-lxc-01_kiff-01 start on kiff-02
+ * Resource action: R-lxc-02_kiff-01 start on kiff-02
+ * Resource action: lxc-01_kiff-01 start on kiff-02
+ * Resource action: lxc-02_kiff-01 start on kiff-02
+ * Resource action: vm-fs start on lxc-01_kiff-01
+ * Resource action: fence-kiff-02 monitor=60000 on kiff-02
+ * Pseudo action: shared0_stop_0
+ * Pseudo action: shared0-clone_stopped_0
+ * Resource action: R-lxc-01_kiff-01 monitor=10000 on kiff-02
+ * Resource action: R-lxc-02_kiff-01 monitor=10000 on kiff-02
+ * Resource action: lxc-01_kiff-01 monitor=30000 on kiff-02
+ * Resource action: lxc-02_kiff-01 monitor=30000 on kiff-02
+ * Resource action: vm-fs monitor=20000 on lxc-01_kiff-01
+ * Pseudo action: clvmd-clone_stop_0
+ * Pseudo action: clvmd_stop_0
+ * Pseudo action: clvmd-clone_stopped_0
+ * Pseudo action: dlm-clone_stop_0
+ * Pseudo action: dlm_stop_0
+ * Pseudo action: dlm-clone_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ kiff-02 ]
+ * OFFLINE: [ kiff-01 ]
+ * GuestOnline: [ lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ]
+
+ * Full List of Resources:
+ * vm-fs (ocf:heartbeat:Filesystem): Started lxc-01_kiff-01
+ * R-lxc-01_kiff-02 (ocf:heartbeat:VirtualDomain): Started kiff-02
+ * fence-kiff-01 (stonith:fence_ipmilan): Started kiff-02
+ * fence-kiff-02 (stonith:fence_ipmilan): Started kiff-02
+ * Clone Set: dlm-clone [dlm]:
+ * Started: [ kiff-02 ]
+ * Stopped: [ kiff-01 lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ]
+ * Clone Set: clvmd-clone [clvmd]:
+ * Started: [ kiff-02 ]
+ * Stopped: [ kiff-01 lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ]
+ * Clone Set: shared0-clone [shared0]:
+ * Started: [ kiff-02 ]
+ * Stopped: [ kiff-01 lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ]
+ * R-lxc-01_kiff-01 (ocf:heartbeat:VirtualDomain): Started kiff-02
+ * R-lxc-02_kiff-01 (ocf:heartbeat:VirtualDomain): Started kiff-02
+ * R-lxc-02_kiff-02 (ocf:heartbeat:VirtualDomain): Started kiff-02
diff --git a/cts/scheduler/summary/probe-0.summary b/cts/scheduler/summary/probe-0.summary
new file mode 100644
index 0000000..c717f0f
--- /dev/null
+++ b/cts/scheduler/summary/probe-0.summary
@@ -0,0 +1,41 @@
+Current cluster status:
+ * Node List:
+ * Online: [ x32c47 x32c48 ]
+
+ * Full List of Resources:
+ * Clone Set: stonithcloneset [stonithclone]:
+ * Started: [ x32c47 x32c48 ]
+ * Clone Set: imagestorecloneset [imagestoreclone]:
+ * Started: [ x32c47 x32c48 ]
+ * Clone Set: configstorecloneset [configstoreclone]:
+ * Stopped: [ x32c47 x32c48 ]
+
+Transition Summary:
+ * Start configstoreclone:0 ( x32c47 )
+ * Start configstoreclone:1 ( x32c48 )
+
+Executing Cluster Transition:
+ * Resource action: configstoreclone:0 monitor on x32c47
+ * Resource action: configstoreclone:1 monitor on x32c48
+ * Pseudo action: configstorecloneset_pre_notify_start_0
+ * Pseudo action: configstorecloneset_confirmed-pre_notify_start_0
+ * Pseudo action: configstorecloneset_start_0
+ * Resource action: configstoreclone:0 start on x32c47
+ * Resource action: configstoreclone:1 start on x32c48
+ * Pseudo action: configstorecloneset_running_0
+ * Pseudo action: configstorecloneset_post_notify_running_0
+ * Resource action: configstoreclone:0 notify on x32c47
+ * Resource action: configstoreclone:1 notify on x32c48
+ * Pseudo action: configstorecloneset_confirmed-post_notify_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ x32c47 x32c48 ]
+
+ * Full List of Resources:
+ * Clone Set: stonithcloneset [stonithclone]:
+ * Started: [ x32c47 x32c48 ]
+ * Clone Set: imagestorecloneset [imagestoreclone]:
+ * Started: [ x32c47 x32c48 ]
+ * Clone Set: configstorecloneset [configstoreclone]:
+ * Started: [ x32c47 x32c48 ]
diff --git a/cts/scheduler/summary/probe-1.summary b/cts/scheduler/summary/probe-1.summary
new file mode 100644
index 0000000..605ea0f
--- /dev/null
+++ b/cts/scheduler/summary/probe-1.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c001n05 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Stopped
+
+Transition Summary:
+ * Start DcIPaddr ( c001n05 )
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr monitor on c001n05
+ * Resource action: DcIPaddr start on c001n05
+ * Resource action: DcIPaddr monitor=5000 on c001n05
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n05 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n05
diff --git a/cts/scheduler/summary/probe-2.summary b/cts/scheduler/summary/probe-2.summary
new file mode 100644
index 0000000..f73d561
--- /dev/null
+++ b/cts/scheduler/summary/probe-2.summary
@@ -0,0 +1,163 @@
+Current cluster status:
+ * Node List:
+ * Node wc02: standby (with active resources)
+ * Online: [ wc01 ]
+
+ * Full List of Resources:
+ * Resource Group: group_www_data:
+ * fs_www_data (ocf:heartbeat:Filesystem): Started wc01
+ * nfs-kernel-server (lsb:nfs-kernel-server): Started wc01
+ * intip_nfs (ocf:heartbeat:IPaddr2): Started wc01
+ * Clone Set: ms_drbd_mysql [drbd_mysql] (promotable):
+ * Promoted: [ wc02 ]
+ * Unpromoted: [ wc01 ]
+ * Resource Group: group_mysql:
+ * fs_mysql (ocf:heartbeat:Filesystem): Started wc02
+ * intip_sql (ocf:heartbeat:IPaddr2): Started wc02
+ * mysql-server (ocf:heartbeat:mysql): Started wc02
+ * Clone Set: ms_drbd_www [drbd_www] (promotable):
+ * Promoted: [ wc01 ]
+ * Unpromoted: [ wc02 ]
+ * Clone Set: clone_nfs-common [group_nfs-common]:
+ * Started: [ wc01 wc02 ]
+ * Clone Set: clone_mysql-proxy [group_mysql-proxy]:
+ * Started: [ wc01 wc02 ]
+ * Clone Set: clone_webservice [group_webservice]:
+ * Started: [ wc01 wc02 ]
+ * Resource Group: group_ftpd:
+ * extip_ftp (ocf:heartbeat:IPaddr2): Started wc01
+ * pure-ftpd (ocf:heartbeat:Pure-FTPd): Started wc01
+ * Clone Set: DoFencing [stonith_rackpdu] (unique):
+ * stonith_rackpdu:0 (stonith:external/rackpdu): Started wc01
+ * stonith_rackpdu:1 (stonith:external/rackpdu): Started wc02
+
+Transition Summary:
+ * Promote drbd_mysql:0 ( Unpromoted -> Promoted wc01 )
+ * Stop drbd_mysql:1 ( Promoted wc02 ) due to node availability
+ * Move fs_mysql ( wc02 -> wc01 )
+ * Move intip_sql ( wc02 -> wc01 )
+ * Move mysql-server ( wc02 -> wc01 )
+ * Stop drbd_www:1 ( Unpromoted wc02 ) due to node availability
+ * Stop nfs-common:1 ( wc02 ) due to node availability
+ * Stop mysql-proxy:1 ( wc02 ) due to node availability
+ * Stop fs_www:1 ( wc02 ) due to node availability
+ * Stop apache2:1 ( wc02 ) due to node availability
+ * Restart stonith_rackpdu:0 ( wc01 )
+ * Stop stonith_rackpdu:1 ( wc02 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: drbd_mysql:0 cancel=10000 on wc01
+ * Pseudo action: ms_drbd_mysql_pre_notify_demote_0
+ * Pseudo action: group_mysql_stop_0
+ * Resource action: mysql-server stop on wc02
+ * Pseudo action: ms_drbd_www_pre_notify_stop_0
+ * Pseudo action: clone_mysql-proxy_stop_0
+ * Pseudo action: clone_webservice_stop_0
+ * Pseudo action: DoFencing_stop_0
+ * Resource action: drbd_mysql:0 notify on wc01
+ * Resource action: drbd_mysql:1 notify on wc02
+ * Pseudo action: ms_drbd_mysql_confirmed-pre_notify_demote_0
+ * Resource action: intip_sql stop on wc02
+ * Resource action: drbd_www:0 notify on wc01
+ * Resource action: drbd_www:1 notify on wc02
+ * Pseudo action: ms_drbd_www_confirmed-pre_notify_stop_0
+ * Pseudo action: ms_drbd_www_stop_0
+ * Pseudo action: group_mysql-proxy:1_stop_0
+ * Resource action: mysql-proxy:1 stop on wc02
+ * Pseudo action: group_webservice:1_stop_0
+ * Resource action: apache2:1 stop on wc02
+ * Resource action: stonith_rackpdu:0 stop on wc01
+ * Resource action: stonith_rackpdu:1 stop on wc02
+ * Pseudo action: DoFencing_stopped_0
+ * Pseudo action: DoFencing_start_0
+ * Resource action: fs_mysql stop on wc02
+ * Resource action: drbd_www:1 stop on wc02
+ * Pseudo action: ms_drbd_www_stopped_0
+ * Pseudo action: group_mysql-proxy:1_stopped_0
+ * Pseudo action: clone_mysql-proxy_stopped_0
+ * Resource action: fs_www:1 stop on wc02
+ * Resource action: stonith_rackpdu:0 start on wc01
+ * Pseudo action: DoFencing_running_0
+ * Pseudo action: group_mysql_stopped_0
+ * Pseudo action: ms_drbd_www_post_notify_stopped_0
+ * Pseudo action: group_webservice:1_stopped_0
+ * Pseudo action: clone_webservice_stopped_0
+ * Resource action: stonith_rackpdu:0 monitor=5000 on wc01
+ * Pseudo action: ms_drbd_mysql_demote_0
+ * Resource action: drbd_www:0 notify on wc01
+ * Pseudo action: ms_drbd_www_confirmed-post_notify_stopped_0
+ * Pseudo action: clone_nfs-common_stop_0
+ * Resource action: drbd_mysql:1 demote on wc02
+ * Pseudo action: ms_drbd_mysql_demoted_0
+ * Pseudo action: group_nfs-common:1_stop_0
+ * Resource action: nfs-common:1 stop on wc02
+ * Pseudo action: ms_drbd_mysql_post_notify_demoted_0
+ * Pseudo action: group_nfs-common:1_stopped_0
+ * Pseudo action: clone_nfs-common_stopped_0
+ * Resource action: drbd_mysql:0 notify on wc01
+ * Resource action: drbd_mysql:1 notify on wc02
+ * Pseudo action: ms_drbd_mysql_confirmed-post_notify_demoted_0
+ * Pseudo action: ms_drbd_mysql_pre_notify_stop_0
+ * Resource action: drbd_mysql:0 notify on wc01
+ * Resource action: drbd_mysql:1 notify on wc02
+ * Pseudo action: ms_drbd_mysql_confirmed-pre_notify_stop_0
+ * Pseudo action: ms_drbd_mysql_stop_0
+ * Resource action: drbd_mysql:1 stop on wc02
+ * Pseudo action: ms_drbd_mysql_stopped_0
+ * Pseudo action: ms_drbd_mysql_post_notify_stopped_0
+ * Resource action: drbd_mysql:0 notify on wc01
+ * Pseudo action: ms_drbd_mysql_confirmed-post_notify_stopped_0
+ * Pseudo action: ms_drbd_mysql_pre_notify_promote_0
+ * Resource action: drbd_mysql:0 notify on wc01
+ * Pseudo action: ms_drbd_mysql_confirmed-pre_notify_promote_0
+ * Pseudo action: ms_drbd_mysql_promote_0
+ * Resource action: drbd_mysql:0 promote on wc01
+ * Pseudo action: ms_drbd_mysql_promoted_0
+ * Pseudo action: ms_drbd_mysql_post_notify_promoted_0
+ * Resource action: drbd_mysql:0 notify on wc01
+ * Pseudo action: ms_drbd_mysql_confirmed-post_notify_promoted_0
+ * Pseudo action: group_mysql_start_0
+ * Resource action: fs_mysql start on wc01
+ * Resource action: intip_sql start on wc01
+ * Resource action: mysql-server start on wc01
+ * Resource action: drbd_mysql:0 monitor=5000 on wc01
+ * Pseudo action: group_mysql_running_0
+ * Resource action: fs_mysql monitor=30000 on wc01
+ * Resource action: intip_sql monitor=30000 on wc01
+ * Resource action: mysql-server monitor=30000 on wc01
+
+Revised Cluster Status:
+ * Node List:
+ * Node wc02: standby
+ * Online: [ wc01 ]
+
+ * Full List of Resources:
+ * Resource Group: group_www_data:
+ * fs_www_data (ocf:heartbeat:Filesystem): Started wc01
+ * nfs-kernel-server (lsb:nfs-kernel-server): Started wc01
+ * intip_nfs (ocf:heartbeat:IPaddr2): Started wc01
+ * Clone Set: ms_drbd_mysql [drbd_mysql] (promotable):
+ * Promoted: [ wc01 ]
+ * Stopped: [ wc02 ]
+ * Resource Group: group_mysql:
+ * fs_mysql (ocf:heartbeat:Filesystem): Started wc01
+ * intip_sql (ocf:heartbeat:IPaddr2): Started wc01
+ * mysql-server (ocf:heartbeat:mysql): Started wc01
+ * Clone Set: ms_drbd_www [drbd_www] (promotable):
+ * Promoted: [ wc01 ]
+ * Stopped: [ wc02 ]
+ * Clone Set: clone_nfs-common [group_nfs-common]:
+ * Started: [ wc01 ]
+ * Stopped: [ wc02 ]
+ * Clone Set: clone_mysql-proxy [group_mysql-proxy]:
+ * Started: [ wc01 ]
+ * Stopped: [ wc02 ]
+ * Clone Set: clone_webservice [group_webservice]:
+ * Started: [ wc01 ]
+ * Stopped: [ wc02 ]
+ * Resource Group: group_ftpd:
+ * extip_ftp (ocf:heartbeat:IPaddr2): Started wc01
+ * pure-ftpd (ocf:heartbeat:Pure-FTPd): Started wc01
+ * Clone Set: DoFencing [stonith_rackpdu] (unique):
+ * stonith_rackpdu:0 (stonith:external/rackpdu): Started wc01
+ * stonith_rackpdu:1 (stonith:external/rackpdu): Stopped
diff --git a/cts/scheduler/summary/probe-3.summary b/cts/scheduler/summary/probe-3.summary
new file mode 100644
index 0000000..929fb4d
--- /dev/null
+++ b/cts/scheduler/summary/probe-3.summary
@@ -0,0 +1,57 @@
+Current cluster status:
+ * Node List:
+ * Node pcmk-4: pending
+ * Online: [ pcmk-1 pcmk-2 pcmk-3 ]
+
+ * Full List of Resources:
+ * Resource Group: group-1:
+ * r192.168.101.181 (ocf:heartbeat:IPaddr): Started pcmk-1
+ * r192.168.101.182 (ocf:heartbeat:IPaddr): Started pcmk-1
+ * r192.168.101.183 (ocf:heartbeat:IPaddr): Started pcmk-1
+ * rsc_pcmk-1 (ocf:heartbeat:IPaddr): Started pcmk-1
+ * rsc_pcmk-2 (ocf:heartbeat:IPaddr): Started pcmk-2
+ * rsc_pcmk-3 (ocf:heartbeat:IPaddr): Started pcmk-3
+ * rsc_pcmk-4 (ocf:heartbeat:IPaddr): Started pcmk-2
+ * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started pcmk-1
+ * migrator (ocf:pacemaker:Dummy): Started pcmk-3
+ * Clone Set: Connectivity [ping-1]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-3 ]
+ * Stopped: [ pcmk-4 ]
+ * Clone Set: master-1 [stateful-1] (promotable):
+ * Promoted: [ pcmk-1 ]
+ * Unpromoted: [ pcmk-2 pcmk-3 ]
+ * Stopped: [ pcmk-4 ]
+ * Clone Set: Fencing [FencingChild]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-3 ]
+ * Stopped: [ pcmk-4 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Node pcmk-4: pending
+ * Online: [ pcmk-1 pcmk-2 pcmk-3 ]
+
+ * Full List of Resources:
+ * Resource Group: group-1:
+ * r192.168.101.181 (ocf:heartbeat:IPaddr): Started pcmk-1
+ * r192.168.101.182 (ocf:heartbeat:IPaddr): Started pcmk-1
+ * r192.168.101.183 (ocf:heartbeat:IPaddr): Started pcmk-1
+ * rsc_pcmk-1 (ocf:heartbeat:IPaddr): Started pcmk-1
+ * rsc_pcmk-2 (ocf:heartbeat:IPaddr): Started pcmk-2
+ * rsc_pcmk-3 (ocf:heartbeat:IPaddr): Started pcmk-3
+ * rsc_pcmk-4 (ocf:heartbeat:IPaddr): Started pcmk-2
+ * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started pcmk-1
+ * migrator (ocf:pacemaker:Dummy): Started pcmk-3
+ * Clone Set: Connectivity [ping-1]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-3 ]
+ * Stopped: [ pcmk-4 ]
+ * Clone Set: master-1 [stateful-1] (promotable):
+ * Promoted: [ pcmk-1 ]
+ * Unpromoted: [ pcmk-2 pcmk-3 ]
+ * Stopped: [ pcmk-4 ]
+ * Clone Set: Fencing [FencingChild]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-3 ]
+ * Stopped: [ pcmk-4 ]
diff --git a/cts/scheduler/summary/probe-4.summary b/cts/scheduler/summary/probe-4.summary
new file mode 100644
index 0000000..99005e9
--- /dev/null
+++ b/cts/scheduler/summary/probe-4.summary
@@ -0,0 +1,58 @@
+Current cluster status:
+ * Node List:
+ * Node pcmk-4: pending
+ * Online: [ pcmk-1 pcmk-2 pcmk-3 ]
+
+ * Full List of Resources:
+ * Resource Group: group-1:
+ * r192.168.101.181 (ocf:heartbeat:IPaddr): Started pcmk-1
+ * r192.168.101.182 (ocf:heartbeat:IPaddr): Started pcmk-1
+ * r192.168.101.183 (ocf:heartbeat:IPaddr): Started pcmk-1
+ * rsc_pcmk-1 (ocf:heartbeat:IPaddr): Started pcmk-1
+ * rsc_pcmk-2 (ocf:heartbeat:IPaddr): Started pcmk-2
+ * rsc_pcmk-3 (ocf:heartbeat:IPaddr): Started pcmk-3
+ * rsc_pcmk-4 (ocf:heartbeat:IPaddr): Started pcmk-2
+ * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started pcmk-1
+ * migrator (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: Connectivity [ping-1]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-3 ]
+ * Stopped: [ pcmk-4 ]
+ * Clone Set: master-1 [stateful-1] (promotable):
+ * Promoted: [ pcmk-1 ]
+ * Unpromoted: [ pcmk-2 pcmk-3 ]
+ * Stopped: [ pcmk-4 ]
+ * Clone Set: Fencing [FencingChild]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-3 ]
+ * Stopped: [ pcmk-4 ]
+
+Transition Summary:
+ * Start migrator ( pcmk-3 ) blocked
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Node pcmk-4: pending
+ * Online: [ pcmk-1 pcmk-2 pcmk-3 ]
+
+ * Full List of Resources:
+ * Resource Group: group-1:
+ * r192.168.101.181 (ocf:heartbeat:IPaddr): Started pcmk-1
+ * r192.168.101.182 (ocf:heartbeat:IPaddr): Started pcmk-1
+ * r192.168.101.183 (ocf:heartbeat:IPaddr): Started pcmk-1
+ * rsc_pcmk-1 (ocf:heartbeat:IPaddr): Started pcmk-1
+ * rsc_pcmk-2 (ocf:heartbeat:IPaddr): Started pcmk-2
+ * rsc_pcmk-3 (ocf:heartbeat:IPaddr): Started pcmk-3
+ * rsc_pcmk-4 (ocf:heartbeat:IPaddr): Started pcmk-2
+ * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started pcmk-1
+ * migrator (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: Connectivity [ping-1]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-3 ]
+ * Stopped: [ pcmk-4 ]
+ * Clone Set: master-1 [stateful-1] (promotable):
+ * Promoted: [ pcmk-1 ]
+ * Unpromoted: [ pcmk-2 pcmk-3 ]
+ * Stopped: [ pcmk-4 ]
+ * Clone Set: Fencing [FencingChild]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-3 ]
+ * Stopped: [ pcmk-4 ]
diff --git a/cts/scheduler/summary/probe-pending-node.summary b/cts/scheduler/summary/probe-pending-node.summary
new file mode 100644
index 0000000..92153e2
--- /dev/null
+++ b/cts/scheduler/summary/probe-pending-node.summary
@@ -0,0 +1,55 @@
+Using the original execution date of: 2021-06-11 13:55:24Z
+
+ *** Resource management is DISABLED ***
+ The cluster will not attempt to start, stop or recover services
+
+Current cluster status:
+ * Node List:
+ * Node gcdoubwap02: pending
+ * Online: [ gcdoubwap01 ]
+
+ * Full List of Resources:
+ * stonith_gcdoubwap01 (stonith:fence_gce): Stopped (maintenance)
+ * stonith_gcdoubwap02 (stonith:fence_gce): Stopped (maintenance)
+ * Clone Set: fs_UC5_SAPMNT-clone [fs_UC5_SAPMNT] (maintenance):
+ * Stopped: [ gcdoubwap01 gcdoubwap02 ]
+ * Clone Set: fs_UC5_SYS-clone [fs_UC5_SYS] (maintenance):
+ * Stopped: [ gcdoubwap01 gcdoubwap02 ]
+ * Resource Group: grp_UC5_ascs (maintenance):
+ * rsc_vip_int_ascs (ocf:heartbeat:IPaddr2): Stopped (maintenance)
+ * rsc_vip_gcp_ascs (ocf:heartbeat:gcp-vpc-move-vip): Started gcdoubwap01 (maintenance)
+ * fs_UC5_ascs (ocf:heartbeat:Filesystem): Stopped (maintenance)
+ * rsc_sap_UC5_ASCS11 (ocf:heartbeat:SAPInstance): Stopped (maintenance)
+ * Resource Group: grp_UC5_ers (maintenance):
+ * rsc_vip_init_ers (ocf:heartbeat:IPaddr2): Stopped (maintenance)
+ * rsc_vip_gcp_ers (ocf:heartbeat:gcp-vpc-move-vip): Stopped (maintenance)
+ * fs_UC5_ers (ocf:heartbeat:Filesystem): Stopped (maintenance)
+ * rsc_sap_UC5_ERS12 (ocf:heartbeat:SAPInstance): Stopped (maintenance)
+
+Transition Summary:
+
+Executing Cluster Transition:
+Using the original execution date of: 2021-06-11 13:55:24Z
+
+Revised Cluster Status:
+ * Node List:
+ * Node gcdoubwap02: pending
+ * Online: [ gcdoubwap01 ]
+
+ * Full List of Resources:
+ * stonith_gcdoubwap01 (stonith:fence_gce): Stopped (maintenance)
+ * stonith_gcdoubwap02 (stonith:fence_gce): Stopped (maintenance)
+ * Clone Set: fs_UC5_SAPMNT-clone [fs_UC5_SAPMNT] (maintenance):
+ * Stopped: [ gcdoubwap01 gcdoubwap02 ]
+ * Clone Set: fs_UC5_SYS-clone [fs_UC5_SYS] (maintenance):
+ * Stopped: [ gcdoubwap01 gcdoubwap02 ]
+ * Resource Group: grp_UC5_ascs (maintenance):
+ * rsc_vip_int_ascs (ocf:heartbeat:IPaddr2): Stopped (maintenance)
+ * rsc_vip_gcp_ascs (ocf:heartbeat:gcp-vpc-move-vip): Started gcdoubwap01 (maintenance)
+ * fs_UC5_ascs (ocf:heartbeat:Filesystem): Stopped (maintenance)
+ * rsc_sap_UC5_ASCS11 (ocf:heartbeat:SAPInstance): Stopped (maintenance)
+ * Resource Group: grp_UC5_ers (maintenance):
+ * rsc_vip_init_ers (ocf:heartbeat:IPaddr2): Stopped (maintenance)
+ * rsc_vip_gcp_ers (ocf:heartbeat:gcp-vpc-move-vip): Stopped (maintenance)
+ * fs_UC5_ers (ocf:heartbeat:Filesystem): Stopped (maintenance)
+ * rsc_sap_UC5_ERS12 (ocf:heartbeat:SAPInstance): Stopped (maintenance)
diff --git a/cts/scheduler/summary/probe-target-of-failed-migrate_to-1.summary b/cts/scheduler/summary/probe-target-of-failed-migrate_to-1.summary
new file mode 100644
index 0000000..1be0ea6
--- /dev/null
+++ b/cts/scheduler/summary/probe-target-of-failed-migrate_to-1.summary
@@ -0,0 +1,23 @@
+Using the original execution date of: 2022-05-09 10:28:56Z
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * st-sbd (stonith:external/sbd): Started node1
+ * dummy1 (ocf:pacemaker:Dummy): Started node1
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: st-sbd monitor on node2
+ * Resource action: dummy1 monitor on node2
+Using the original execution date of: 2022-05-09 10:28:56Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * st-sbd (stonith:external/sbd): Started node1
+ * dummy1 (ocf:pacemaker:Dummy): Started node1
diff --git a/cts/scheduler/summary/probe-target-of-failed-migrate_to-2.summary b/cts/scheduler/summary/probe-target-of-failed-migrate_to-2.summary
new file mode 100644
index 0000000..6346e38
--- /dev/null
+++ b/cts/scheduler/summary/probe-target-of-failed-migrate_to-2.summary
@@ -0,0 +1,19 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * st-sbd (stonith:external/sbd): Started node1
+ * dummy1 (ocf:pacemaker:Dummy): Started node1
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * st-sbd (stonith:external/sbd): Started node1
+ * dummy1 (ocf:pacemaker:Dummy): Started node1
diff --git a/cts/scheduler/summary/probe-timeout.summary b/cts/scheduler/summary/probe-timeout.summary
new file mode 100644
index 0000000..ca7dc5a
--- /dev/null
+++ b/cts/scheduler/summary/probe-timeout.summary
@@ -0,0 +1,31 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+ * Start rsc2 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 start on node2
+ * Resource action: rsc1 monitor=5000 on node1
+ * Resource action: rsc1 monitor=10000 on node1
+ * Resource action: rsc2 monitor=10000 on node2
+ * Resource action: rsc2 monitor=5000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/promoted-0.summary b/cts/scheduler/summary/promoted-0.summary
new file mode 100644
index 0000000..3e724ff
--- /dev/null
+++ b/cts/scheduler/summary/promoted-0.summary
@@ -0,0 +1,47 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1] (promotable, unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:1 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:2 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:3 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:4 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start child_rsc1:0 ( node1 )
+ * Start child_rsc1:1 ( node2 )
+ * Start child_rsc1:2 ( node1 )
+ * Start child_rsc1:3 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: child_rsc1:0 monitor on node2
+ * Resource action: child_rsc1:0 monitor on node1
+ * Resource action: child_rsc1:1 monitor on node2
+ * Resource action: child_rsc1:1 monitor on node1
+ * Resource action: child_rsc1:2 monitor on node2
+ * Resource action: child_rsc1:2 monitor on node1
+ * Resource action: child_rsc1:3 monitor on node2
+ * Resource action: child_rsc1:3 monitor on node1
+ * Resource action: child_rsc1:4 monitor on node2
+ * Resource action: child_rsc1:4 monitor on node1
+ * Pseudo action: rsc1_start_0
+ * Resource action: child_rsc1:0 start on node1
+ * Resource action: child_rsc1:1 start on node2
+ * Resource action: child_rsc1:2 start on node1
+ * Resource action: child_rsc1:3 start on node2
+ * Pseudo action: rsc1_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1] (promotable, unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Unpromoted node1
+ * child_rsc1:1 (ocf:heartbeat:apache): Unpromoted node2
+ * child_rsc1:2 (ocf:heartbeat:apache): Unpromoted node1
+ * child_rsc1:3 (ocf:heartbeat:apache): Unpromoted node2
+ * child_rsc1:4 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/promoted-1.summary b/cts/scheduler/summary/promoted-1.summary
new file mode 100644
index 0000000..839de37
--- /dev/null
+++ b/cts/scheduler/summary/promoted-1.summary
@@ -0,0 +1,50 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1] (promotable, unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:1 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:2 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:3 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:4 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start child_rsc1:0 ( node1 )
+ * Promote child_rsc1:1 ( Stopped -> Promoted node2 )
+ * Start child_rsc1:2 ( node1 )
+ * Start child_rsc1:3 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: child_rsc1:0 monitor on node2
+ * Resource action: child_rsc1:0 monitor on node1
+ * Resource action: child_rsc1:1 monitor on node2
+ * Resource action: child_rsc1:1 monitor on node1
+ * Resource action: child_rsc1:2 monitor on node2
+ * Resource action: child_rsc1:2 monitor on node1
+ * Resource action: child_rsc1:3 monitor on node2
+ * Resource action: child_rsc1:3 monitor on node1
+ * Resource action: child_rsc1:4 monitor on node2
+ * Resource action: child_rsc1:4 monitor on node1
+ * Pseudo action: rsc1_start_0
+ * Resource action: child_rsc1:0 start on node1
+ * Resource action: child_rsc1:1 start on node2
+ * Resource action: child_rsc1:2 start on node1
+ * Resource action: child_rsc1:3 start on node2
+ * Pseudo action: rsc1_running_0
+ * Pseudo action: rsc1_promote_0
+ * Resource action: child_rsc1:1 promote on node2
+ * Pseudo action: rsc1_promoted_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1] (promotable, unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Unpromoted node1
+ * child_rsc1:1 (ocf:heartbeat:apache): Promoted node2
+ * child_rsc1:2 (ocf:heartbeat:apache): Unpromoted node1
+ * child_rsc1:3 (ocf:heartbeat:apache): Unpromoted node2
+ * child_rsc1:4 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/promoted-10.summary b/cts/scheduler/summary/promoted-10.summary
new file mode 100644
index 0000000..7efbce9
--- /dev/null
+++ b/cts/scheduler/summary/promoted-10.summary
@@ -0,0 +1,75 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1] (promotable, unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:1 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:2 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:3 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:4 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Promote child_rsc1:0 ( Stopped -> Promoted node1 )
+ * Start child_rsc1:1 ( node2 )
+ * Start child_rsc1:2 ( node1 )
+ * Promote child_rsc1:3 ( Stopped -> Promoted node2 )
+
+Executing Cluster Transition:
+ * Resource action: child_rsc1:0 monitor on node2
+ * Resource action: child_rsc1:0 monitor on node1
+ * Resource action: child_rsc1:1 monitor on node2
+ * Resource action: child_rsc1:1 monitor on node1
+ * Resource action: child_rsc1:2 monitor on node2
+ * Resource action: child_rsc1:2 monitor on node1
+ * Resource action: child_rsc1:3 monitor on node2
+ * Resource action: child_rsc1:3 monitor on node1
+ * Resource action: child_rsc1:4 monitor on node2
+ * Resource action: child_rsc1:4 monitor on node1
+ * Pseudo action: rsc1_pre_notify_start_0
+ * Pseudo action: rsc1_confirmed-pre_notify_start_0
+ * Pseudo action: rsc1_start_0
+ * Resource action: child_rsc1:0 start on node1
+ * Resource action: child_rsc1:1 start on node2
+ * Resource action: child_rsc1:2 start on node1
+ * Resource action: child_rsc1:3 start on node2
+ * Pseudo action: rsc1_running_0
+ * Pseudo action: rsc1_post_notify_running_0
+ * Resource action: child_rsc1:0 notify on node1
+ * Resource action: child_rsc1:1 notify on node2
+ * Resource action: child_rsc1:2 notify on node1
+ * Resource action: child_rsc1:3 notify on node2
+ * Pseudo action: rsc1_confirmed-post_notify_running_0
+ * Pseudo action: rsc1_pre_notify_promote_0
+ * Resource action: child_rsc1:0 notify on node1
+ * Resource action: child_rsc1:1 notify on node2
+ * Resource action: child_rsc1:2 notify on node1
+ * Resource action: child_rsc1:3 notify on node2
+ * Pseudo action: rsc1_confirmed-pre_notify_promote_0
+ * Pseudo action: rsc1_promote_0
+ * Resource action: child_rsc1:0 promote on node1
+ * Resource action: child_rsc1:3 promote on node2
+ * Pseudo action: rsc1_promoted_0
+ * Pseudo action: rsc1_post_notify_promoted_0
+ * Resource action: child_rsc1:0 notify on node1
+ * Resource action: child_rsc1:1 notify on node2
+ * Resource action: child_rsc1:2 notify on node1
+ * Resource action: child_rsc1:3 notify on node2
+ * Pseudo action: rsc1_confirmed-post_notify_promoted_0
+ * Resource action: child_rsc1:0 monitor=11000 on node1
+ * Resource action: child_rsc1:1 monitor=1000 on node2
+ * Resource action: child_rsc1:2 monitor=1000 on node1
+ * Resource action: child_rsc1:3 monitor=11000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1] (promotable, unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Promoted node1
+ * child_rsc1:1 (ocf:heartbeat:apache): Unpromoted node2
+ * child_rsc1:2 (ocf:heartbeat:apache): Unpromoted node1
+ * child_rsc1:3 (ocf:heartbeat:apache): Promoted node2
+ * child_rsc1:4 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/promoted-11.summary b/cts/scheduler/summary/promoted-11.summary
new file mode 100644
index 0000000..6999bb1
--- /dev/null
+++ b/cts/scheduler/summary/promoted-11.summary
@@ -0,0 +1,40 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * simple-rsc (ocf:heartbeat:apache): Stopped
+ * Clone Set: rsc1 [child_rsc1] (promotable, unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:1 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start simple-rsc ( node2 )
+ * Start child_rsc1:0 ( node1 )
+ * Promote child_rsc1:1 ( Stopped -> Promoted node2 )
+
+Executing Cluster Transition:
+ * Resource action: simple-rsc monitor on node2
+ * Resource action: simple-rsc monitor on node1
+ * Resource action: child_rsc1:0 monitor on node2
+ * Resource action: child_rsc1:0 monitor on node1
+ * Resource action: child_rsc1:1 monitor on node2
+ * Resource action: child_rsc1:1 monitor on node1
+ * Pseudo action: rsc1_start_0
+ * Resource action: simple-rsc start on node2
+ * Resource action: child_rsc1:0 start on node1
+ * Resource action: child_rsc1:1 start on node2
+ * Pseudo action: rsc1_running_0
+ * Pseudo action: rsc1_promote_0
+ * Resource action: child_rsc1:1 promote on node2
+ * Pseudo action: rsc1_promoted_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * simple-rsc (ocf:heartbeat:apache): Started node2
+ * Clone Set: rsc1 [child_rsc1] (promotable, unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Unpromoted node1
+ * child_rsc1:1 (ocf:heartbeat:apache): Promoted node2
diff --git a/cts/scheduler/summary/promoted-12.summary b/cts/scheduler/summary/promoted-12.summary
new file mode 100644
index 0000000..9125a9a
--- /dev/null
+++ b/cts/scheduler/summary/promoted-12.summary
@@ -0,0 +1,33 @@
+Current cluster status:
+ * Node List:
+ * Online: [ sel3 sel4 ]
+
+ * Full List of Resources:
+ * Clone Set: ms-drbd0 [drbd0] (promotable):
+ * Promoted: [ sel3 ]
+ * Unpromoted: [ sel4 ]
+ * Clone Set: ms-sf [sf] (promotable, unique):
+ * sf:0 (ocf:heartbeat:Stateful): Unpromoted sel3
+ * sf:1 (ocf:heartbeat:Stateful): Unpromoted sel4
+ * fs0 (ocf:heartbeat:Filesystem): Started sel3
+
+Transition Summary:
+ * Promote sf:0 ( Unpromoted -> Promoted sel3 )
+
+Executing Cluster Transition:
+ * Pseudo action: ms-sf_promote_0
+ * Resource action: sf:0 promote on sel3
+ * Pseudo action: ms-sf_promoted_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ sel3 sel4 ]
+
+ * Full List of Resources:
+ * Clone Set: ms-drbd0 [drbd0] (promotable):
+ * Promoted: [ sel3 ]
+ * Unpromoted: [ sel4 ]
+ * Clone Set: ms-sf [sf] (promotable, unique):
+ * sf:0 (ocf:heartbeat:Stateful): Promoted sel3
+ * sf:1 (ocf:heartbeat:Stateful): Unpromoted sel4
+ * fs0 (ocf:heartbeat:Filesystem): Started sel3
diff --git a/cts/scheduler/summary/promoted-13.summary b/cts/scheduler/summary/promoted-13.summary
new file mode 100644
index 0000000..5f977c8
--- /dev/null
+++ b/cts/scheduler/summary/promoted-13.summary
@@ -0,0 +1,62 @@
+Current cluster status:
+ * Node List:
+ * Online: [ frigg odin ]
+
+ * Full List of Resources:
+ * Clone Set: ms_drbd [drbd0] (promotable):
+ * Promoted: [ frigg ]
+ * Unpromoted: [ odin ]
+ * Resource Group: group:
+ * IPaddr0 (ocf:heartbeat:IPaddr): Stopped
+ * MailTo (ocf:heartbeat:MailTo): Stopped
+
+Transition Summary:
+ * Promote drbd0:0 ( Unpromoted -> Promoted odin )
+ * Demote drbd0:1 ( Promoted -> Unpromoted frigg )
+ * Start IPaddr0 ( odin )
+ * Start MailTo ( odin )
+
+Executing Cluster Transition:
+ * Resource action: drbd0:1 cancel=12000 on odin
+ * Resource action: drbd0:0 cancel=10000 on frigg
+ * Pseudo action: ms_drbd_pre_notify_demote_0
+ * Resource action: drbd0:1 notify on odin
+ * Resource action: drbd0:0 notify on frigg
+ * Pseudo action: ms_drbd_confirmed-pre_notify_demote_0
+ * Pseudo action: ms_drbd_demote_0
+ * Resource action: drbd0:0 demote on frigg
+ * Pseudo action: ms_drbd_demoted_0
+ * Pseudo action: ms_drbd_post_notify_demoted_0
+ * Resource action: drbd0:1 notify on odin
+ * Resource action: drbd0:0 notify on frigg
+ * Pseudo action: ms_drbd_confirmed-post_notify_demoted_0
+ * Pseudo action: ms_drbd_pre_notify_promote_0
+ * Resource action: drbd0:1 notify on odin
+ * Resource action: drbd0:0 notify on frigg
+ * Pseudo action: ms_drbd_confirmed-pre_notify_promote_0
+ * Pseudo action: ms_drbd_promote_0
+ * Resource action: drbd0:1 promote on odin
+ * Pseudo action: ms_drbd_promoted_0
+ * Pseudo action: ms_drbd_post_notify_promoted_0
+ * Resource action: drbd0:1 notify on odin
+ * Resource action: drbd0:0 notify on frigg
+ * Pseudo action: ms_drbd_confirmed-post_notify_promoted_0
+ * Pseudo action: group_start_0
+ * Resource action: IPaddr0 start on odin
+ * Resource action: MailTo start on odin
+ * Resource action: drbd0:1 monitor=10000 on odin
+ * Resource action: drbd0:0 monitor=12000 on frigg
+ * Pseudo action: group_running_0
+ * Resource action: IPaddr0 monitor=5000 on odin
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ frigg odin ]
+
+ * Full List of Resources:
+ * Clone Set: ms_drbd [drbd0] (promotable):
+ * Promoted: [ odin ]
+ * Unpromoted: [ frigg ]
+ * Resource Group: group:
+ * IPaddr0 (ocf:heartbeat:IPaddr): Started odin
+ * MailTo (ocf:heartbeat:MailTo): Started odin
diff --git a/cts/scheduler/summary/promoted-2.summary b/cts/scheduler/summary/promoted-2.summary
new file mode 100644
index 0000000..58e3e2e
--- /dev/null
+++ b/cts/scheduler/summary/promoted-2.summary
@@ -0,0 +1,71 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1] (promotable, unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:1 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:2 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:3 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:4 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Promote child_rsc1:0 ( Stopped -> Promoted node1 )
+ * Start child_rsc1:1 ( node2 )
+ * Start child_rsc1:2 ( node1 )
+ * Promote child_rsc1:3 ( Stopped -> Promoted node2 )
+
+Executing Cluster Transition:
+ * Resource action: child_rsc1:0 monitor on node2
+ * Resource action: child_rsc1:0 monitor on node1
+ * Resource action: child_rsc1:1 monitor on node2
+ * Resource action: child_rsc1:1 monitor on node1
+ * Resource action: child_rsc1:2 monitor on node2
+ * Resource action: child_rsc1:2 monitor on node1
+ * Resource action: child_rsc1:3 monitor on node2
+ * Resource action: child_rsc1:3 monitor on node1
+ * Resource action: child_rsc1:4 monitor on node2
+ * Resource action: child_rsc1:4 monitor on node1
+ * Pseudo action: rsc1_pre_notify_start_0
+ * Pseudo action: rsc1_confirmed-pre_notify_start_0
+ * Pseudo action: rsc1_start_0
+ * Resource action: child_rsc1:0 start on node1
+ * Resource action: child_rsc1:1 start on node2
+ * Resource action: child_rsc1:2 start on node1
+ * Resource action: child_rsc1:3 start on node2
+ * Pseudo action: rsc1_running_0
+ * Pseudo action: rsc1_post_notify_running_0
+ * Resource action: child_rsc1:0 notify on node1
+ * Resource action: child_rsc1:1 notify on node2
+ * Resource action: child_rsc1:2 notify on node1
+ * Resource action: child_rsc1:3 notify on node2
+ * Pseudo action: rsc1_confirmed-post_notify_running_0
+ * Pseudo action: rsc1_pre_notify_promote_0
+ * Resource action: child_rsc1:0 notify on node1
+ * Resource action: child_rsc1:1 notify on node2
+ * Resource action: child_rsc1:2 notify on node1
+ * Resource action: child_rsc1:3 notify on node2
+ * Pseudo action: rsc1_confirmed-pre_notify_promote_0
+ * Pseudo action: rsc1_promote_0
+ * Resource action: child_rsc1:0 promote on node1
+ * Resource action: child_rsc1:3 promote on node2
+ * Pseudo action: rsc1_promoted_0
+ * Pseudo action: rsc1_post_notify_promoted_0
+ * Resource action: child_rsc1:0 notify on node1
+ * Resource action: child_rsc1:1 notify on node2
+ * Resource action: child_rsc1:2 notify on node1
+ * Resource action: child_rsc1:3 notify on node2
+ * Pseudo action: rsc1_confirmed-post_notify_promoted_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1] (promotable, unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Promoted node1
+ * child_rsc1:1 (ocf:heartbeat:apache): Unpromoted node2
+ * child_rsc1:2 (ocf:heartbeat:apache): Unpromoted node1
+ * child_rsc1:3 (ocf:heartbeat:apache): Promoted node2
+ * child_rsc1:4 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/promoted-3.summary b/cts/scheduler/summary/promoted-3.summary
new file mode 100644
index 0000000..839de37
--- /dev/null
+++ b/cts/scheduler/summary/promoted-3.summary
@@ -0,0 +1,50 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1] (promotable, unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:1 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:2 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:3 (ocf:heartbeat:apache): Stopped
+ * child_rsc1:4 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start child_rsc1:0 ( node1 )
+ * Promote child_rsc1:1 ( Stopped -> Promoted node2 )
+ * Start child_rsc1:2 ( node1 )
+ * Start child_rsc1:3 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: child_rsc1:0 monitor on node2
+ * Resource action: child_rsc1:0 monitor on node1
+ * Resource action: child_rsc1:1 monitor on node2
+ * Resource action: child_rsc1:1 monitor on node1
+ * Resource action: child_rsc1:2 monitor on node2
+ * Resource action: child_rsc1:2 monitor on node1
+ * Resource action: child_rsc1:3 monitor on node2
+ * Resource action: child_rsc1:3 monitor on node1
+ * Resource action: child_rsc1:4 monitor on node2
+ * Resource action: child_rsc1:4 monitor on node1
+ * Pseudo action: rsc1_start_0
+ * Resource action: child_rsc1:0 start on node1
+ * Resource action: child_rsc1:1 start on node2
+ * Resource action: child_rsc1:2 start on node1
+ * Resource action: child_rsc1:3 start on node2
+ * Pseudo action: rsc1_running_0
+ * Pseudo action: rsc1_promote_0
+ * Resource action: child_rsc1:1 promote on node2
+ * Pseudo action: rsc1_promoted_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: rsc1 [child_rsc1] (promotable, unique):
+ * child_rsc1:0 (ocf:heartbeat:apache): Unpromoted node1
+ * child_rsc1:1 (ocf:heartbeat:apache): Promoted node2
+ * child_rsc1:2 (ocf:heartbeat:apache): Unpromoted node1
+ * child_rsc1:3 (ocf:heartbeat:apache): Unpromoted node2
+ * child_rsc1:4 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/promoted-4.summary b/cts/scheduler/summary/promoted-4.summary
new file mode 100644
index 0000000..2bcb25e
--- /dev/null
+++ b/cts/scheduler/summary/promoted-4.summary
@@ -0,0 +1,94 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n08
+ * Resource Group: group-1:
+ * ocf_child (ocf:heartbeat:IPaddr): Started c001n03
+ * heartbeat_child (ocf:heartbeat:IPaddr): Started c001n03
+ * lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n01
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started c001n08
+ * child_DoFencing:1 (stonith:ssh): Started c001n03
+ * child_DoFencing:2 (stonith:ssh): Started c001n01
+ * child_DoFencing:3 (stonith:ssh): Started c001n02
+ * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique):
+ * ocf_msdummy:0 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n08
+ * ocf_msdummy:1 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n03
+ * ocf_msdummy:2 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n01
+ * ocf_msdummy:3 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n08
+ * ocf_msdummy:4 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n03
+ * ocf_msdummy:5 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n01
+ * ocf_msdummy:6 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02
+ * ocf_msdummy:7 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02
+
+Transition Summary:
+ * Promote ocf_msdummy:0 ( Unpromoted -> Promoted c001n08 )
+
+Executing Cluster Transition:
+ * Resource action: child_DoFencing:1 monitor on c001n08
+ * Resource action: child_DoFencing:1 monitor on c001n02
+ * Resource action: child_DoFencing:1 monitor on c001n01
+ * Resource action: child_DoFencing:2 monitor on c001n08
+ * Resource action: child_DoFencing:2 monitor on c001n03
+ * Resource action: child_DoFencing:2 monitor on c001n02
+ * Resource action: child_DoFencing:3 monitor on c001n08
+ * Resource action: child_DoFencing:3 monitor on c001n03
+ * Resource action: child_DoFencing:3 monitor on c001n01
+ * Resource action: ocf_msdummy:0 cancel=5000 on c001n08
+ * Resource action: ocf_msdummy:2 monitor on c001n08
+ * Resource action: ocf_msdummy:2 monitor on c001n03
+ * Resource action: ocf_msdummy:2 monitor on c001n02
+ * Resource action: ocf_msdummy:3 monitor on c001n03
+ * Resource action: ocf_msdummy:3 monitor on c001n02
+ * Resource action: ocf_msdummy:3 monitor on c001n01
+ * Resource action: ocf_msdummy:4 monitor on c001n08
+ * Resource action: ocf_msdummy:4 monitor on c001n02
+ * Resource action: ocf_msdummy:4 monitor on c001n01
+ * Resource action: ocf_msdummy:5 monitor on c001n08
+ * Resource action: ocf_msdummy:5 monitor on c001n03
+ * Resource action: ocf_msdummy:5 monitor on c001n02
+ * Resource action: ocf_msdummy:6 monitor on c001n08
+ * Resource action: ocf_msdummy:6 monitor on c001n03
+ * Resource action: ocf_msdummy:6 monitor on c001n01
+ * Resource action: ocf_msdummy:7 monitor on c001n08
+ * Resource action: ocf_msdummy:7 monitor on c001n03
+ * Resource action: ocf_msdummy:7 monitor on c001n01
+ * Pseudo action: master_rsc_1_promote_0
+ * Resource action: ocf_msdummy:0 promote on c001n08
+ * Pseudo action: master_rsc_1_promoted_0
+ * Resource action: ocf_msdummy:0 monitor=6000 on c001n08
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n08
+ * Resource Group: group-1:
+ * ocf_child (ocf:heartbeat:IPaddr): Started c001n03
+ * heartbeat_child (ocf:heartbeat:IPaddr): Started c001n03
+ * lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n01
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started c001n08
+ * child_DoFencing:1 (stonith:ssh): Started c001n03
+ * child_DoFencing:2 (stonith:ssh): Started c001n01
+ * child_DoFencing:3 (stonith:ssh): Started c001n02
+ * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique):
+ * ocf_msdummy:0 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Promoted c001n08
+ * ocf_msdummy:1 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n03
+ * ocf_msdummy:2 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n01
+ * ocf_msdummy:3 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n08
+ * ocf_msdummy:4 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n03
+ * ocf_msdummy:5 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n01
+ * ocf_msdummy:6 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02
+ * ocf_msdummy:7 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02
diff --git a/cts/scheduler/summary/promoted-5.summary b/cts/scheduler/summary/promoted-5.summary
new file mode 100644
index 0000000..8a2f1a2
--- /dev/null
+++ b/cts/scheduler/summary/promoted-5.summary
@@ -0,0 +1,88 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n08
+ * Resource Group: group-1:
+ * ocf_child (ocf:heartbeat:IPaddr): Started c001n03
+ * heartbeat_child (ocf:heartbeat:IPaddr): Started c001n03
+ * lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n01
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started c001n08
+ * child_DoFencing:1 (stonith:ssh): Started c001n03
+ * child_DoFencing:2 (stonith:ssh): Started c001n01
+ * child_DoFencing:3 (stonith:ssh): Started c001n02
+ * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique):
+ * ocf_msdummy:0 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Promoted c001n08
+ * ocf_msdummy:1 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n03
+ * ocf_msdummy:2 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n01
+ * ocf_msdummy:3 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n08
+ * ocf_msdummy:4 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n03
+ * ocf_msdummy:5 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n01
+ * ocf_msdummy:6 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02
+ * ocf_msdummy:7 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: child_DoFencing:1 monitor on c001n08
+ * Resource action: child_DoFencing:1 monitor on c001n02
+ * Resource action: child_DoFencing:1 monitor on c001n01
+ * Resource action: child_DoFencing:2 monitor on c001n08
+ * Resource action: child_DoFencing:2 monitor on c001n03
+ * Resource action: child_DoFencing:2 monitor on c001n02
+ * Resource action: child_DoFencing:3 monitor on c001n08
+ * Resource action: child_DoFencing:3 monitor on c001n03
+ * Resource action: child_DoFencing:3 monitor on c001n01
+ * Resource action: ocf_msdummy:2 monitor on c001n08
+ * Resource action: ocf_msdummy:2 monitor on c001n03
+ * Resource action: ocf_msdummy:2 monitor on c001n02
+ * Resource action: ocf_msdummy:3 monitor on c001n03
+ * Resource action: ocf_msdummy:3 monitor on c001n02
+ * Resource action: ocf_msdummy:3 monitor on c001n01
+ * Resource action: ocf_msdummy:4 monitor on c001n08
+ * Resource action: ocf_msdummy:4 monitor on c001n02
+ * Resource action: ocf_msdummy:4 monitor on c001n01
+ * Resource action: ocf_msdummy:5 monitor on c001n08
+ * Resource action: ocf_msdummy:5 monitor on c001n03
+ * Resource action: ocf_msdummy:5 monitor on c001n02
+ * Resource action: ocf_msdummy:6 monitor on c001n08
+ * Resource action: ocf_msdummy:6 monitor on c001n03
+ * Resource action: ocf_msdummy:6 monitor on c001n01
+ * Resource action: ocf_msdummy:7 monitor on c001n08
+ * Resource action: ocf_msdummy:7 monitor on c001n03
+ * Resource action: ocf_msdummy:7 monitor on c001n01
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n08
+ * Resource Group: group-1:
+ * ocf_child (ocf:heartbeat:IPaddr): Started c001n03
+ * heartbeat_child (ocf:heartbeat:IPaddr): Started c001n03
+ * lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n01
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started c001n08
+ * child_DoFencing:1 (stonith:ssh): Started c001n03
+ * child_DoFencing:2 (stonith:ssh): Started c001n01
+ * child_DoFencing:3 (stonith:ssh): Started c001n02
+ * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique):
+ * ocf_msdummy:0 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Promoted c001n08
+ * ocf_msdummy:1 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n03
+ * ocf_msdummy:2 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n01
+ * ocf_msdummy:3 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n08
+ * ocf_msdummy:4 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n03
+ * ocf_msdummy:5 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n01
+ * ocf_msdummy:6 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02
+ * ocf_msdummy:7 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02
diff --git a/cts/scheduler/summary/promoted-6.summary b/cts/scheduler/summary/promoted-6.summary
new file mode 100644
index 0000000..2d9c953
--- /dev/null
+++ b/cts/scheduler/summary/promoted-6.summary
@@ -0,0 +1,87 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n08
+ * Resource Group: group-1:
+ * ocf_192.168.100.181 (ocf:heartbeat:IPaddr): Started c001n02
+ * heartbeat_192.168.100.182 (ocf:heartbeat:IPaddr): Started c001n02
+ * ocf_192.168.100.183 (ocf:heartbeat:IPaddr): Started c001n02
+ * lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n03
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started c001n08
+ * child_DoFencing:1 (stonith:ssh): Started c001n02
+ * child_DoFencing:2 (stonith:ssh): Started c001n03
+ * child_DoFencing:3 (stonith:ssh): Started c001n01
+ * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique):
+ * ocf_msdummy:0 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Promoted c001n08
+ * ocf_msdummy:1 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02
+ * ocf_msdummy:2 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n03
+ * ocf_msdummy:3 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n08
+ * ocf_msdummy:4 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02
+ * ocf_msdummy:5 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n03
+ * ocf_msdummy:6 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n01
+ * ocf_msdummy:7 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n01
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: child_DoFencing:1 monitor on c001n08
+ * Resource action: child_DoFencing:1 monitor on c001n03
+ * Resource action: child_DoFencing:1 monitor on c001n01
+ * Resource action: child_DoFencing:2 monitor on c001n08
+ * Resource action: child_DoFencing:2 monitor on c001n01
+ * Resource action: child_DoFencing:3 monitor on c001n08
+ * Resource action: child_DoFencing:3 monitor on c001n03
+ * Resource action: child_DoFencing:3 monitor on c001n02
+ * Resource action: ocf_msdummy:2 monitor on c001n08
+ * Resource action: ocf_msdummy:2 monitor on c001n01
+ * Resource action: ocf_msdummy:3 monitor on c001n03
+ * Resource action: ocf_msdummy:3 monitor on c001n01
+ * Resource action: ocf_msdummy:4 monitor on c001n08
+ * Resource action: ocf_msdummy:4 monitor on c001n03
+ * Resource action: ocf_msdummy:4 monitor on c001n01
+ * Resource action: ocf_msdummy:5 monitor on c001n08
+ * Resource action: ocf_msdummy:5 monitor on c001n02
+ * Resource action: ocf_msdummy:5 monitor on c001n01
+ * Resource action: ocf_msdummy:6 monitor on c001n08
+ * Resource action: ocf_msdummy:6 monitor on c001n03
+ * Resource action: ocf_msdummy:6 monitor on c001n02
+ * Resource action: ocf_msdummy:7 monitor on c001n08
+ * Resource action: ocf_msdummy:7 monitor on c001n03
+ * Resource action: ocf_msdummy:7 monitor on c001n02
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n08
+ * Resource Group: group-1:
+ * ocf_192.168.100.181 (ocf:heartbeat:IPaddr): Started c001n02
+ * heartbeat_192.168.100.182 (ocf:heartbeat:IPaddr): Started c001n02
+ * ocf_192.168.100.183 (ocf:heartbeat:IPaddr): Started c001n02
+ * lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n03
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started c001n08
+ * child_DoFencing:1 (stonith:ssh): Started c001n02
+ * child_DoFencing:2 (stonith:ssh): Started c001n03
+ * child_DoFencing:3 (stonith:ssh): Started c001n01
+ * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique):
+ * ocf_msdummy:0 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Promoted c001n08
+ * ocf_msdummy:1 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02
+ * ocf_msdummy:2 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n03
+ * ocf_msdummy:3 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n08
+ * ocf_msdummy:4 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02
+ * ocf_msdummy:5 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n03
+ * ocf_msdummy:6 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n01
+ * ocf_msdummy:7 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n01
diff --git a/cts/scheduler/summary/promoted-7.summary b/cts/scheduler/summary/promoted-7.summary
new file mode 100644
index 0000000..a1ddea5
--- /dev/null
+++ b/cts/scheduler/summary/promoted-7.summary
@@ -0,0 +1,121 @@
+Current cluster status:
+ * Node List:
+ * Node c001n01: UNCLEAN (offline)
+ * Online: [ c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n01 (UNCLEAN)
+ * Resource Group: group-1:
+ * ocf_192.168.100.181 (ocf:heartbeat:IPaddr): Started c001n03
+ * heartbeat_192.168.100.182 (ocf:heartbeat:IPaddr): Started c001n03
+ * ocf_192.168.100.183 (ocf:heartbeat:IPaddr): Started c001n03
+ * lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n02
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01 (UNCLEAN)
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started c001n01 (UNCLEAN)
+ * child_DoFencing:1 (stonith:ssh): Started c001n03
+ * child_DoFencing:2 (stonith:ssh): Started c001n02
+ * child_DoFencing:3 (stonith:ssh): Started c001n08
+ * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique):
+ * ocf_msdummy:0 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Promoted c001n01 (UNCLEAN)
+ * ocf_msdummy:1 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n03
+ * ocf_msdummy:2 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02
+ * ocf_msdummy:3 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n08
+ * ocf_msdummy:4 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n01 (UNCLEAN)
+ * ocf_msdummy:5 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n03
+ * ocf_msdummy:6 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02
+ * ocf_msdummy:7 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n08
+
+Transition Summary:
+ * Fence (reboot) c001n01 'peer is no longer part of the cluster'
+ * Move DcIPaddr ( c001n01 -> c001n03 )
+ * Move ocf_192.168.100.181 ( c001n03 -> c001n02 )
+ * Move heartbeat_192.168.100.182 ( c001n03 -> c001n02 )
+ * Move ocf_192.168.100.183 ( c001n03 -> c001n02 )
+ * Move lsb_dummy ( c001n02 -> c001n08 )
+ * Move rsc_c001n01 ( c001n01 -> c001n03 )
+ * Stop child_DoFencing:0 ( c001n01 ) due to node availability
+ * Stop ocf_msdummy:0 ( Promoted c001n01 ) due to node availability
+ * Stop ocf_msdummy:4 ( Unpromoted c001n01 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: group-1_stop_0
+ * Resource action: ocf_192.168.100.183 stop on c001n03
+ * Resource action: lsb_dummy stop on c001n02
+ * Resource action: child_DoFencing:2 monitor on c001n08
+ * Resource action: child_DoFencing:2 monitor on c001n03
+ * Resource action: child_DoFencing:3 monitor on c001n03
+ * Resource action: child_DoFencing:3 monitor on c001n02
+ * Pseudo action: DoFencing_stop_0
+ * Resource action: ocf_msdummy:4 monitor on c001n08
+ * Resource action: ocf_msdummy:4 monitor on c001n03
+ * Resource action: ocf_msdummy:4 monitor on c001n02
+ * Resource action: ocf_msdummy:5 monitor on c001n08
+ * Resource action: ocf_msdummy:5 monitor on c001n02
+ * Resource action: ocf_msdummy:6 monitor on c001n08
+ * Resource action: ocf_msdummy:6 monitor on c001n03
+ * Resource action: ocf_msdummy:7 monitor on c001n03
+ * Resource action: ocf_msdummy:7 monitor on c001n02
+ * Pseudo action: master_rsc_1_demote_0
+ * Fencing c001n01 (reboot)
+ * Pseudo action: DcIPaddr_stop_0
+ * Resource action: heartbeat_192.168.100.182 stop on c001n03
+ * Resource action: lsb_dummy start on c001n08
+ * Pseudo action: rsc_c001n01_stop_0
+ * Pseudo action: child_DoFencing:0_stop_0
+ * Pseudo action: DoFencing_stopped_0
+ * Pseudo action: ocf_msdummy:0_demote_0
+ * Pseudo action: master_rsc_1_demoted_0
+ * Pseudo action: master_rsc_1_stop_0
+ * Resource action: DcIPaddr start on c001n03
+ * Resource action: ocf_192.168.100.181 stop on c001n03
+ * Resource action: lsb_dummy monitor=5000 on c001n08
+ * Resource action: rsc_c001n01 start on c001n03
+ * Pseudo action: ocf_msdummy:0_stop_0
+ * Pseudo action: ocf_msdummy:4_stop_0
+ * Pseudo action: master_rsc_1_stopped_0
+ * Resource action: DcIPaddr monitor=5000 on c001n03
+ * Pseudo action: group-1_stopped_0
+ * Pseudo action: group-1_start_0
+ * Resource action: ocf_192.168.100.181 start on c001n02
+ * Resource action: heartbeat_192.168.100.182 start on c001n02
+ * Resource action: ocf_192.168.100.183 start on c001n02
+ * Resource action: rsc_c001n01 monitor=5000 on c001n03
+ * Pseudo action: group-1_running_0
+ * Resource action: ocf_192.168.100.181 monitor=5000 on c001n02
+ * Resource action: heartbeat_192.168.100.182 monitor=5000 on c001n02
+ * Resource action: ocf_192.168.100.183 monitor=5000 on c001n02
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n02 c001n03 c001n08 ]
+ * OFFLINE: [ c001n01 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n03
+ * Resource Group: group-1:
+ * ocf_192.168.100.181 (ocf:heartbeat:IPaddr): Started c001n02
+ * heartbeat_192.168.100.182 (ocf:heartbeat:IPaddr): Started c001n02
+ * ocf_192.168.100.183 (ocf:heartbeat:IPaddr): Started c001n02
+ * lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n08
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Stopped
+ * child_DoFencing:1 (stonith:ssh): Started c001n03
+ * child_DoFencing:2 (stonith:ssh): Started c001n02
+ * child_DoFencing:3 (stonith:ssh): Started c001n08
+ * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique):
+ * ocf_msdummy:0 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:1 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n03
+ * ocf_msdummy:2 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02
+ * ocf_msdummy:3 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n08
+ * ocf_msdummy:4 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:5 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n03
+ * ocf_msdummy:6 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02
+ * ocf_msdummy:7 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n08
diff --git a/cts/scheduler/summary/promoted-8.summary b/cts/scheduler/summary/promoted-8.summary
new file mode 100644
index 0000000..ed646ed
--- /dev/null
+++ b/cts/scheduler/summary/promoted-8.summary
@@ -0,0 +1,124 @@
+Current cluster status:
+ * Node List:
+ * Node c001n01: UNCLEAN (offline)
+ * Online: [ c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n01 (UNCLEAN)
+ * Resource Group: group-1:
+ * ocf_192.168.100.181 (ocf:heartbeat:IPaddr): Started c001n03
+ * heartbeat_192.168.100.182 (ocf:heartbeat:IPaddr): Started c001n03
+ * ocf_192.168.100.183 (ocf:heartbeat:IPaddr): Started c001n03
+ * lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n02
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01 (UNCLEAN)
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started c001n01 (UNCLEAN)
+ * child_DoFencing:1 (stonith:ssh): Started c001n03
+ * child_DoFencing:2 (stonith:ssh): Started c001n02
+ * child_DoFencing:3 (stonith:ssh): Started c001n08
+ * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique):
+ * ocf_msdummy:0 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Promoted c001n01 (UNCLEAN)
+ * ocf_msdummy:1 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n03
+ * ocf_msdummy:2 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02
+ * ocf_msdummy:3 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n08
+ * ocf_msdummy:4 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:5 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:6 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02
+ * ocf_msdummy:7 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n08
+
+Transition Summary:
+ * Fence (reboot) c001n01 'peer is no longer part of the cluster'
+ * Move DcIPaddr ( c001n01 -> c001n03 )
+ * Move ocf_192.168.100.181 ( c001n03 -> c001n02 )
+ * Move heartbeat_192.168.100.182 ( c001n03 -> c001n02 )
+ * Move ocf_192.168.100.183 ( c001n03 -> c001n02 )
+ * Move lsb_dummy ( c001n02 -> c001n08 )
+ * Move rsc_c001n01 ( c001n01 -> c001n03 )
+ * Stop child_DoFencing:0 ( c001n01 ) due to node availability
+ * Move ocf_msdummy:0 ( Promoted c001n01 -> Unpromoted c001n03 )
+
+Executing Cluster Transition:
+ * Pseudo action: group-1_stop_0
+ * Resource action: ocf_192.168.100.183 stop on c001n03
+ * Resource action: lsb_dummy stop on c001n02
+ * Resource action: child_DoFencing:2 monitor on c001n08
+ * Resource action: child_DoFencing:2 monitor on c001n03
+ * Resource action: child_DoFencing:3 monitor on c001n03
+ * Resource action: child_DoFencing:3 monitor on c001n02
+ * Pseudo action: DoFencing_stop_0
+ * Resource action: ocf_msdummy:4 monitor on c001n08
+ * Resource action: ocf_msdummy:4 monitor on c001n03
+ * Resource action: ocf_msdummy:4 monitor on c001n02
+ * Resource action: ocf_msdummy:5 monitor on c001n08
+ * Resource action: ocf_msdummy:5 monitor on c001n03
+ * Resource action: ocf_msdummy:5 monitor on c001n02
+ * Resource action: ocf_msdummy:6 monitor on c001n08
+ * Resource action: ocf_msdummy:6 monitor on c001n03
+ * Resource action: ocf_msdummy:7 monitor on c001n03
+ * Resource action: ocf_msdummy:7 monitor on c001n02
+ * Pseudo action: master_rsc_1_demote_0
+ * Fencing c001n01 (reboot)
+ * Pseudo action: DcIPaddr_stop_0
+ * Resource action: heartbeat_192.168.100.182 stop on c001n03
+ * Resource action: lsb_dummy start on c001n08
+ * Pseudo action: rsc_c001n01_stop_0
+ * Pseudo action: child_DoFencing:0_stop_0
+ * Pseudo action: DoFencing_stopped_0
+ * Pseudo action: ocf_msdummy:0_demote_0
+ * Pseudo action: master_rsc_1_demoted_0
+ * Pseudo action: master_rsc_1_stop_0
+ * Resource action: DcIPaddr start on c001n03
+ * Resource action: ocf_192.168.100.181 stop on c001n03
+ * Resource action: lsb_dummy monitor=5000 on c001n08
+ * Resource action: rsc_c001n01 start on c001n03
+ * Pseudo action: ocf_msdummy:0_stop_0
+ * Pseudo action: master_rsc_1_stopped_0
+ * Pseudo action: master_rsc_1_start_0
+ * Resource action: DcIPaddr monitor=5000 on c001n03
+ * Pseudo action: group-1_stopped_0
+ * Pseudo action: group-1_start_0
+ * Resource action: ocf_192.168.100.181 start on c001n02
+ * Resource action: heartbeat_192.168.100.182 start on c001n02
+ * Resource action: ocf_192.168.100.183 start on c001n02
+ * Resource action: rsc_c001n01 monitor=5000 on c001n03
+ * Resource action: ocf_msdummy:0 start on c001n03
+ * Pseudo action: master_rsc_1_running_0
+ * Pseudo action: group-1_running_0
+ * Resource action: ocf_192.168.100.181 monitor=5000 on c001n02
+ * Resource action: heartbeat_192.168.100.182 monitor=5000 on c001n02
+ * Resource action: ocf_192.168.100.183 monitor=5000 on c001n02
+ * Resource action: ocf_msdummy:0 monitor=5000 on c001n03
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n02 c001n03 c001n08 ]
+ * OFFLINE: [ c001n01 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n03
+ * Resource Group: group-1:
+ * ocf_192.168.100.181 (ocf:heartbeat:IPaddr): Started c001n02
+ * heartbeat_192.168.100.182 (ocf:heartbeat:IPaddr): Started c001n02
+ * ocf_192.168.100.183 (ocf:heartbeat:IPaddr): Started c001n02
+ * lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n08
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Stopped
+ * child_DoFencing:1 (stonith:ssh): Started c001n03
+ * child_DoFencing:2 (stonith:ssh): Started c001n02
+ * child_DoFencing:3 (stonith:ssh): Started c001n08
+ * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique):
+ * ocf_msdummy:0 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n03
+ * ocf_msdummy:1 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n03
+ * ocf_msdummy:2 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02
+ * ocf_msdummy:3 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n08
+ * ocf_msdummy:4 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:5 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:6 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02
+ * ocf_msdummy:7 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n08
diff --git a/cts/scheduler/summary/promoted-9.summary b/cts/scheduler/summary/promoted-9.summary
new file mode 100644
index 0000000..69dab46
--- /dev/null
+++ b/cts/scheduler/summary/promoted-9.summary
@@ -0,0 +1,100 @@
+Current cluster status:
+ * Node List:
+ * Node sgi2: UNCLEAN (offline)
+ * Node test02: UNCLEAN (offline)
+ * Online: [ ibm1 va1 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Stopped
+ * Resource Group: group-1:
+ * ocf_127.0.0.11 (ocf:heartbeat:IPaddr): Stopped
+ * heartbeat_127.0.0.12 (ocf:heartbeat:IPaddr): Stopped
+ * ocf_127.0.0.13 (ocf:heartbeat:IPaddr): Stopped
+ * lsb_dummy (lsb:/usr/lib64/heartbeat/cts/LSBDummy): Stopped
+ * rsc_sgi2 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_ibm1 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_va1 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_test02 (ocf:heartbeat:IPaddr): Stopped
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started va1
+ * child_DoFencing:1 (stonith:ssh): Started ibm1
+ * child_DoFencing:2 (stonith:ssh): Stopped
+ * child_DoFencing:3 (stonith:ssh): Stopped
+ * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique):
+ * ocf_msdummy:0 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:1 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:2 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:3 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:4 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:5 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:6 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:7 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
+
+Transition Summary:
+ * Start DcIPaddr ( va1 ) due to no quorum (blocked)
+ * Start ocf_127.0.0.11 ( va1 ) due to no quorum (blocked)
+ * Start heartbeat_127.0.0.12 ( va1 ) due to no quorum (blocked)
+ * Start ocf_127.0.0.13 ( va1 ) due to no quorum (blocked)
+ * Start lsb_dummy ( va1 ) due to no quorum (blocked)
+ * Start rsc_sgi2 ( va1 ) due to no quorum (blocked)
+ * Start rsc_ibm1 ( va1 ) due to no quorum (blocked)
+ * Start rsc_va1 ( va1 ) due to no quorum (blocked)
+ * Start rsc_test02 ( va1 ) due to no quorum (blocked)
+ * Stop child_DoFencing:1 ( ibm1 ) due to node availability
+ * Promote ocf_msdummy:0 ( Stopped -> Promoted va1 ) blocked
+ * Start ocf_msdummy:1 ( va1 ) due to no quorum (blocked)
+
+Executing Cluster Transition:
+ * Resource action: child_DoFencing:1 monitor on va1
+ * Resource action: child_DoFencing:2 monitor on va1
+ * Resource action: child_DoFencing:2 monitor on ibm1
+ * Resource action: child_DoFencing:3 monitor on va1
+ * Resource action: child_DoFencing:3 monitor on ibm1
+ * Pseudo action: DoFencing_stop_0
+ * Resource action: ocf_msdummy:2 monitor on va1
+ * Resource action: ocf_msdummy:2 monitor on ibm1
+ * Resource action: ocf_msdummy:3 monitor on va1
+ * Resource action: ocf_msdummy:3 monitor on ibm1
+ * Resource action: ocf_msdummy:4 monitor on va1
+ * Resource action: ocf_msdummy:4 monitor on ibm1
+ * Resource action: ocf_msdummy:5 monitor on va1
+ * Resource action: ocf_msdummy:5 monitor on ibm1
+ * Resource action: ocf_msdummy:6 monitor on va1
+ * Resource action: ocf_msdummy:6 monitor on ibm1
+ * Resource action: ocf_msdummy:7 monitor on va1
+ * Resource action: ocf_msdummy:7 monitor on ibm1
+ * Resource action: child_DoFencing:1 stop on ibm1
+ * Pseudo action: DoFencing_stopped_0
+ * Cluster action: do_shutdown on ibm1
+
+Revised Cluster Status:
+ * Node List:
+ * Node sgi2: UNCLEAN (offline)
+ * Node test02: UNCLEAN (offline)
+ * Online: [ ibm1 va1 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Stopped
+ * Resource Group: group-1:
+ * ocf_127.0.0.11 (ocf:heartbeat:IPaddr): Stopped
+ * heartbeat_127.0.0.12 (ocf:heartbeat:IPaddr): Stopped
+ * ocf_127.0.0.13 (ocf:heartbeat:IPaddr): Stopped
+ * lsb_dummy (lsb:/usr/lib64/heartbeat/cts/LSBDummy): Stopped
+ * rsc_sgi2 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_ibm1 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_va1 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_test02 (ocf:heartbeat:IPaddr): Stopped
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started va1
+ * child_DoFencing:1 (stonith:ssh): Stopped
+ * child_DoFencing:2 (stonith:ssh): Stopped
+ * child_DoFencing:3 (stonith:ssh): Stopped
+ * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique):
+ * ocf_msdummy:0 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:1 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:2 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:3 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:4 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:5 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:6 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:7 (ocf:heartbeat:/usr/lib64/heartbeat/cts/OCFMSDummy): Stopped
diff --git a/cts/scheduler/summary/promoted-allow-start.summary b/cts/scheduler/summary/promoted-allow-start.summary
new file mode 100644
index 0000000..c9afdaa
--- /dev/null
+++ b/cts/scheduler/summary/promoted-allow-start.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ sles11-a sles11-b ]
+
+ * Full List of Resources:
+ * Clone Set: ms_res_Stateful_1 [res_Stateful_1] (promotable):
+ * Promoted: [ sles11-a ]
+ * Unpromoted: [ sles11-b ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ sles11-a sles11-b ]
+
+ * Full List of Resources:
+ * Clone Set: ms_res_Stateful_1 [res_Stateful_1] (promotable):
+ * Promoted: [ sles11-a ]
+ * Unpromoted: [ sles11-b ]
diff --git a/cts/scheduler/summary/promoted-asymmetrical-order.summary b/cts/scheduler/summary/promoted-asymmetrical-order.summary
new file mode 100644
index 0000000..591ff18
--- /dev/null
+++ b/cts/scheduler/summary/promoted-asymmetrical-order.summary
@@ -0,0 +1,37 @@
+2 of 4 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: ms1 [rsc1] (promotable, disabled):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
+ * Clone Set: ms2 [rsc2] (promotable):
+ * Promoted: [ node2 ]
+ * Unpromoted: [ node1 ]
+
+Transition Summary:
+ * Stop rsc1:0 ( Promoted node1 ) due to node availability
+ * Stop rsc1:1 ( Unpromoted node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: ms1_demote_0
+ * Resource action: rsc1:0 demote on node1
+ * Pseudo action: ms1_demoted_0
+ * Pseudo action: ms1_stop_0
+ * Resource action: rsc1:0 stop on node1
+ * Resource action: rsc1:1 stop on node2
+ * Pseudo action: ms1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: ms1 [rsc1] (promotable, disabled):
+ * Stopped (disabled): [ node1 node2 ]
+ * Clone Set: ms2 [rsc2] (promotable):
+ * Promoted: [ node2 ]
+ * Unpromoted: [ node1 ]
diff --git a/cts/scheduler/summary/promoted-colocation.summary b/cts/scheduler/summary/promoted-colocation.summary
new file mode 100644
index 0000000..b3e776b
--- /dev/null
+++ b/cts/scheduler/summary/promoted-colocation.summary
@@ -0,0 +1,34 @@
+Current cluster status:
+ * Node List:
+ * Online: [ box1 box2 ]
+
+ * Full List of Resources:
+ * Clone Set: ms-conntrackd [conntrackd-stateful] (promotable):
+ * Unpromoted: [ box1 box2 ]
+ * Resource Group: virtualips:
+ * externalip (ocf:heartbeat:IPaddr2): Started box2
+ * internalip (ocf:heartbeat:IPaddr2): Started box2
+ * sship (ocf:heartbeat:IPaddr2): Started box2
+
+Transition Summary:
+ * Promote conntrackd-stateful:1 ( Unpromoted -> Promoted box2 )
+
+Executing Cluster Transition:
+ * Resource action: conntrackd-stateful:0 monitor=29000 on box1
+ * Pseudo action: ms-conntrackd_promote_0
+ * Resource action: conntrackd-stateful:1 promote on box2
+ * Pseudo action: ms-conntrackd_promoted_0
+ * Resource action: conntrackd-stateful:1 monitor=30000 on box2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ box1 box2 ]
+
+ * Full List of Resources:
+ * Clone Set: ms-conntrackd [conntrackd-stateful] (promotable):
+ * Promoted: [ box2 ]
+ * Unpromoted: [ box1 ]
+ * Resource Group: virtualips:
+ * externalip (ocf:heartbeat:IPaddr2): Started box2
+ * internalip (ocf:heartbeat:IPaddr2): Started box2
+ * sship (ocf:heartbeat:IPaddr2): Started box2
diff --git a/cts/scheduler/summary/promoted-demote-2.summary b/cts/scheduler/summary/promoted-demote-2.summary
new file mode 100644
index 0000000..e371d3f
--- /dev/null
+++ b/cts/scheduler/summary/promoted-demote-2.summary
@@ -0,0 +1,75 @@
+Current cluster status:
+ * Node List:
+ * Online: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started pcmk-1
+ * Resource Group: group-1:
+ * r192.168.122.105 (ocf:heartbeat:IPaddr): Stopped
+ * r192.168.122.106 (ocf:heartbeat:IPaddr): Stopped
+ * r192.168.122.107 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_pcmk-1 (ocf:heartbeat:IPaddr): Started pcmk-1
+ * rsc_pcmk-2 (ocf:heartbeat:IPaddr): Started pcmk-2
+ * rsc_pcmk-3 (ocf:heartbeat:IPaddr): Started pcmk-3
+ * rsc_pcmk-4 (ocf:heartbeat:IPaddr): Started pcmk-4
+ * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Stopped
+ * migrator (ocf:pacemaker:Dummy): Started pcmk-4
+ * Clone Set: Connectivity [ping-1]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
+ * Clone Set: master-1 [stateful-1] (promotable):
+ * stateful-1 (ocf:pacemaker:Stateful): FAILED pcmk-1
+ * Unpromoted: [ pcmk-2 pcmk-3 pcmk-4 ]
+
+Transition Summary:
+ * Start r192.168.122.105 ( pcmk-2 )
+ * Start r192.168.122.106 ( pcmk-2 )
+ * Start r192.168.122.107 ( pcmk-2 )
+ * Start lsb-dummy ( pcmk-2 )
+ * Recover stateful-1:0 ( Unpromoted pcmk-1 )
+ * Promote stateful-1:1 ( Unpromoted -> Promoted pcmk-2 )
+
+Executing Cluster Transition:
+ * Resource action: stateful-1:0 cancel=15000 on pcmk-2
+ * Pseudo action: master-1_stop_0
+ * Resource action: stateful-1:1 stop on pcmk-1
+ * Pseudo action: master-1_stopped_0
+ * Pseudo action: master-1_start_0
+ * Resource action: stateful-1:1 start on pcmk-1
+ * Pseudo action: master-1_running_0
+ * Resource action: stateful-1:1 monitor=15000 on pcmk-1
+ * Pseudo action: master-1_promote_0
+ * Resource action: stateful-1:0 promote on pcmk-2
+ * Pseudo action: master-1_promoted_0
+ * Pseudo action: group-1_start_0
+ * Resource action: r192.168.122.105 start on pcmk-2
+ * Resource action: r192.168.122.106 start on pcmk-2
+ * Resource action: r192.168.122.107 start on pcmk-2
+ * Resource action: stateful-1:0 monitor=16000 on pcmk-2
+ * Pseudo action: group-1_running_0
+ * Resource action: r192.168.122.105 monitor=5000 on pcmk-2
+ * Resource action: r192.168.122.106 monitor=5000 on pcmk-2
+ * Resource action: r192.168.122.107 monitor=5000 on pcmk-2
+ * Resource action: lsb-dummy start on pcmk-2
+ * Resource action: lsb-dummy monitor=5000 on pcmk-2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started pcmk-1
+ * Resource Group: group-1:
+ * r192.168.122.105 (ocf:heartbeat:IPaddr): Started pcmk-2
+ * r192.168.122.106 (ocf:heartbeat:IPaddr): Started pcmk-2
+ * r192.168.122.107 (ocf:heartbeat:IPaddr): Started pcmk-2
+ * rsc_pcmk-1 (ocf:heartbeat:IPaddr): Started pcmk-1
+ * rsc_pcmk-2 (ocf:heartbeat:IPaddr): Started pcmk-2
+ * rsc_pcmk-3 (ocf:heartbeat:IPaddr): Started pcmk-3
+ * rsc_pcmk-4 (ocf:heartbeat:IPaddr): Started pcmk-4
+ * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started pcmk-2
+ * migrator (ocf:pacemaker:Dummy): Started pcmk-4
+ * Clone Set: Connectivity [ping-1]:
+ * Started: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
+ * Clone Set: master-1 [stateful-1] (promotable):
+ * Promoted: [ pcmk-2 ]
+ * Unpromoted: [ pcmk-1 pcmk-3 pcmk-4 ]
diff --git a/cts/scheduler/summary/promoted-demote-block.summary b/cts/scheduler/summary/promoted-demote-block.summary
new file mode 100644
index 0000000..e4fc100
--- /dev/null
+++ b/cts/scheduler/summary/promoted-demote-block.summary
@@ -0,0 +1,26 @@
+0 of 2 resource instances DISABLED and 1 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Node dl380g5c: standby (with active resources)
+ * Online: [ dl380g5d ]
+
+ * Full List of Resources:
+ * Clone Set: stateful [dummy] (promotable):
+ * dummy (ocf:pacemaker:Stateful): FAILED Promoted dl380g5c (blocked)
+ * Unpromoted: [ dl380g5d ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: dummy:1 monitor=20000 on dl380g5d
+
+Revised Cluster Status:
+ * Node List:
+ * Node dl380g5c: standby (with active resources)
+ * Online: [ dl380g5d ]
+
+ * Full List of Resources:
+ * Clone Set: stateful [dummy] (promotable):
+ * dummy (ocf:pacemaker:Stateful): FAILED Promoted dl380g5c (blocked)
+ * Unpromoted: [ dl380g5d ]
diff --git a/cts/scheduler/summary/promoted-demote.summary b/cts/scheduler/summary/promoted-demote.summary
new file mode 100644
index 0000000..3ba4985
--- /dev/null
+++ b/cts/scheduler/summary/promoted-demote.summary
@@ -0,0 +1,70 @@
+Current cluster status:
+ * Node List:
+ * Online: [ cxa1 cxb1 ]
+
+ * Full List of Resources:
+ * cyrus_address (ocf:heartbeat:IPaddr2): Started cxa1
+ * cyrus_master (ocf:heartbeat:cyrus-imap): Stopped
+ * cyrus_syslogd (ocf:heartbeat:syslogd): Stopped
+ * cyrus_filesys (ocf:heartbeat:Filesystem): Stopped
+ * cyrus_volgroup (ocf:heartbeat:VolGroup): Stopped
+ * Clone Set: cyrus_drbd [cyrus_drbd_node] (promotable):
+ * Promoted: [ cxa1 ]
+ * Unpromoted: [ cxb1 ]
+ * named_address (ocf:heartbeat:IPaddr2): Started cxa1
+ * named_filesys (ocf:heartbeat:Filesystem): Stopped
+ * named_volgroup (ocf:heartbeat:VolGroup): Stopped
+ * named_daemon (ocf:heartbeat:recursor): Stopped
+ * named_syslogd (ocf:heartbeat:syslogd): Stopped
+ * Clone Set: named_drbd [named_drbd_node] (promotable):
+ * Unpromoted: [ cxa1 cxb1 ]
+ * Clone Set: pingd_clone [pingd_node]:
+ * Started: [ cxa1 cxb1 ]
+ * Clone Set: fence_clone [fence_node]:
+ * Started: [ cxa1 cxb1 ]
+
+Transition Summary:
+ * Move named_address ( cxa1 -> cxb1 )
+ * Promote named_drbd_node:1 ( Unpromoted -> Promoted cxb1 )
+
+Executing Cluster Transition:
+ * Resource action: named_address stop on cxa1
+ * Pseudo action: named_drbd_pre_notify_promote_0
+ * Resource action: named_address start on cxb1
+ * Resource action: named_drbd_node:1 notify on cxa1
+ * Resource action: named_drbd_node:0 notify on cxb1
+ * Pseudo action: named_drbd_confirmed-pre_notify_promote_0
+ * Pseudo action: named_drbd_promote_0
+ * Resource action: named_drbd_node:0 promote on cxb1
+ * Pseudo action: named_drbd_promoted_0
+ * Pseudo action: named_drbd_post_notify_promoted_0
+ * Resource action: named_drbd_node:1 notify on cxa1
+ * Resource action: named_drbd_node:0 notify on cxb1
+ * Pseudo action: named_drbd_confirmed-post_notify_promoted_0
+ * Resource action: named_drbd_node:0 monitor=10000 on cxb1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ cxa1 cxb1 ]
+
+ * Full List of Resources:
+ * cyrus_address (ocf:heartbeat:IPaddr2): Started cxa1
+ * cyrus_master (ocf:heartbeat:cyrus-imap): Stopped
+ * cyrus_syslogd (ocf:heartbeat:syslogd): Stopped
+ * cyrus_filesys (ocf:heartbeat:Filesystem): Stopped
+ * cyrus_volgroup (ocf:heartbeat:VolGroup): Stopped
+ * Clone Set: cyrus_drbd [cyrus_drbd_node] (promotable):
+ * Promoted: [ cxa1 ]
+ * Unpromoted: [ cxb1 ]
+ * named_address (ocf:heartbeat:IPaddr2): Started cxb1
+ * named_filesys (ocf:heartbeat:Filesystem): Stopped
+ * named_volgroup (ocf:heartbeat:VolGroup): Stopped
+ * named_daemon (ocf:heartbeat:recursor): Stopped
+ * named_syslogd (ocf:heartbeat:syslogd): Stopped
+ * Clone Set: named_drbd [named_drbd_node] (promotable):
+ * Promoted: [ cxb1 ]
+ * Unpromoted: [ cxa1 ]
+ * Clone Set: pingd_clone [pingd_node]:
+ * Started: [ cxa1 cxb1 ]
+ * Clone Set: fence_clone [fence_node]:
+ * Started: [ cxa1 cxb1 ]
diff --git a/cts/scheduler/summary/promoted-depend.summary b/cts/scheduler/summary/promoted-depend.summary
new file mode 100644
index 0000000..3df262f
--- /dev/null
+++ b/cts/scheduler/summary/promoted-depend.summary
@@ -0,0 +1,62 @@
+3 of 10 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ vbox4 ]
+ * OFFLINE: [ vbox3 ]
+
+ * Full List of Resources:
+ * Clone Set: drbd [drbd0] (promotable):
+ * Stopped: [ vbox3 vbox4 ]
+ * Clone Set: cman_clone [cman]:
+ * Stopped: [ vbox3 vbox4 ]
+ * Clone Set: clvmd_clone [clvmd]:
+ * Stopped: [ vbox3 vbox4 ]
+ * vmnci36 (ocf:heartbeat:vm): Stopped
+ * vmnci37 (ocf:heartbeat:vm): Stopped (disabled)
+ * vmnci38 (ocf:heartbeat:vm): Stopped (disabled)
+ * vmnci55 (ocf:heartbeat:vm): Stopped (disabled)
+
+Transition Summary:
+ * Start drbd0:0 ( vbox4 )
+ * Start cman:0 ( vbox4 )
+
+Executing Cluster Transition:
+ * Resource action: drbd0:0 monitor on vbox4
+ * Pseudo action: drbd_pre_notify_start_0
+ * Resource action: cman:0 monitor on vbox4
+ * Pseudo action: cman_clone_start_0
+ * Resource action: clvmd:0 monitor on vbox4
+ * Resource action: vmnci36 monitor on vbox4
+ * Resource action: vmnci37 monitor on vbox4
+ * Resource action: vmnci38 monitor on vbox4
+ * Resource action: vmnci55 monitor on vbox4
+ * Pseudo action: drbd_confirmed-pre_notify_start_0
+ * Pseudo action: drbd_start_0
+ * Resource action: cman:0 start on vbox4
+ * Pseudo action: cman_clone_running_0
+ * Resource action: drbd0:0 start on vbox4
+ * Pseudo action: drbd_running_0
+ * Pseudo action: drbd_post_notify_running_0
+ * Resource action: drbd0:0 notify on vbox4
+ * Pseudo action: drbd_confirmed-post_notify_running_0
+ * Resource action: drbd0:0 monitor=60000 on vbox4
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ vbox4 ]
+ * OFFLINE: [ vbox3 ]
+
+ * Full List of Resources:
+ * Clone Set: drbd [drbd0] (promotable):
+ * Unpromoted: [ vbox4 ]
+ * Stopped: [ vbox3 ]
+ * Clone Set: cman_clone [cman]:
+ * Started: [ vbox4 ]
+ * Stopped: [ vbox3 ]
+ * Clone Set: clvmd_clone [clvmd]:
+ * Stopped: [ vbox3 vbox4 ]
+ * vmnci36 (ocf:heartbeat:vm): Stopped
+ * vmnci37 (ocf:heartbeat:vm): Stopped (disabled)
+ * vmnci38 (ocf:heartbeat:vm): Stopped (disabled)
+ * vmnci55 (ocf:heartbeat:vm): Stopped (disabled)
diff --git a/cts/scheduler/summary/promoted-dependent-ban.summary b/cts/scheduler/summary/promoted-dependent-ban.summary
new file mode 100644
index 0000000..2b24139
--- /dev/null
+++ b/cts/scheduler/summary/promoted-dependent-ban.summary
@@ -0,0 +1,38 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c6 c7 c8 ]
+
+ * Full List of Resources:
+ * Clone Set: ms_drbd-dtest1 [p_drbd-dtest1] (promotable):
+ * Unpromoted: [ c6 c7 ]
+ * p_dtest1 (ocf:heartbeat:Dummy): Stopped
+
+Transition Summary:
+ * Promote p_drbd-dtest1:0 ( Unpromoted -> Promoted c7 )
+ * Start p_dtest1 ( c7 )
+
+Executing Cluster Transition:
+ * Pseudo action: ms_drbd-dtest1_pre_notify_promote_0
+ * Resource action: p_drbd-dtest1 notify on c7
+ * Resource action: p_drbd-dtest1 notify on c6
+ * Pseudo action: ms_drbd-dtest1_confirmed-pre_notify_promote_0
+ * Pseudo action: ms_drbd-dtest1_promote_0
+ * Resource action: p_drbd-dtest1 promote on c7
+ * Pseudo action: ms_drbd-dtest1_promoted_0
+ * Pseudo action: ms_drbd-dtest1_post_notify_promoted_0
+ * Resource action: p_drbd-dtest1 notify on c7
+ * Resource action: p_drbd-dtest1 notify on c6
+ * Pseudo action: ms_drbd-dtest1_confirmed-post_notify_promoted_0
+ * Resource action: p_dtest1 start on c7
+ * Resource action: p_drbd-dtest1 monitor=10000 on c7
+ * Resource action: p_drbd-dtest1 monitor=20000 on c6
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c6 c7 c8 ]
+
+ * Full List of Resources:
+ * Clone Set: ms_drbd-dtest1 [p_drbd-dtest1] (promotable):
+ * Promoted: [ c7 ]
+ * Unpromoted: [ c6 ]
+ * p_dtest1 (ocf:heartbeat:Dummy): Started c7
diff --git a/cts/scheduler/summary/promoted-failed-demote-2.summary b/cts/scheduler/summary/promoted-failed-demote-2.summary
new file mode 100644
index 0000000..3f317fa
--- /dev/null
+++ b/cts/scheduler/summary/promoted-failed-demote-2.summary
@@ -0,0 +1,47 @@
+Current cluster status:
+ * Node List:
+ * Online: [ dl380g5a dl380g5b ]
+
+ * Full List of Resources:
+ * Clone Set: ms-sf [group] (promotable, unique):
+ * Resource Group: group:0:
+ * stateful-1:0 (ocf:heartbeat:Stateful): FAILED dl380g5b
+ * stateful-2:0 (ocf:heartbeat:Stateful): Stopped
+ * Resource Group: group:1:
+ * stateful-1:1 (ocf:heartbeat:Stateful): Unpromoted dl380g5a
+ * stateful-2:1 (ocf:heartbeat:Stateful): Unpromoted dl380g5a
+
+Transition Summary:
+ * Stop stateful-1:0 ( Unpromoted dl380g5b ) due to node availability
+ * Promote stateful-1:1 ( Unpromoted -> Promoted dl380g5a )
+ * Promote stateful-2:1 ( Unpromoted -> Promoted dl380g5a )
+
+Executing Cluster Transition:
+ * Resource action: stateful-1:1 cancel=20000 on dl380g5a
+ * Resource action: stateful-2:1 cancel=20000 on dl380g5a
+ * Pseudo action: ms-sf_stop_0
+ * Pseudo action: group:0_stop_0
+ * Resource action: stateful-1:0 stop on dl380g5b
+ * Pseudo action: group:0_stopped_0
+ * Pseudo action: ms-sf_stopped_0
+ * Pseudo action: ms-sf_promote_0
+ * Pseudo action: group:1_promote_0
+ * Resource action: stateful-1:1 promote on dl380g5a
+ * Resource action: stateful-2:1 promote on dl380g5a
+ * Pseudo action: group:1_promoted_0
+ * Resource action: stateful-1:1 monitor=10000 on dl380g5a
+ * Resource action: stateful-2:1 monitor=10000 on dl380g5a
+ * Pseudo action: ms-sf_promoted_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ dl380g5a dl380g5b ]
+
+ * Full List of Resources:
+ * Clone Set: ms-sf [group] (promotable, unique):
+ * Resource Group: group:0:
+ * stateful-1:0 (ocf:heartbeat:Stateful): Stopped
+ * stateful-2:0 (ocf:heartbeat:Stateful): Stopped
+ * Resource Group: group:1:
+ * stateful-1:1 (ocf:heartbeat:Stateful): Promoted dl380g5a
+ * stateful-2:1 (ocf:heartbeat:Stateful): Promoted dl380g5a
diff --git a/cts/scheduler/summary/promoted-failed-demote.summary b/cts/scheduler/summary/promoted-failed-demote.summary
new file mode 100644
index 0000000..70b3e1b
--- /dev/null
+++ b/cts/scheduler/summary/promoted-failed-demote.summary
@@ -0,0 +1,64 @@
+Current cluster status:
+ * Node List:
+ * Online: [ dl380g5a dl380g5b ]
+
+ * Full List of Resources:
+ * Clone Set: ms-sf [group] (promotable, unique):
+ * Resource Group: group:0:
+ * stateful-1:0 (ocf:heartbeat:Stateful): FAILED dl380g5b
+ * stateful-2:0 (ocf:heartbeat:Stateful): Stopped
+ * Resource Group: group:1:
+ * stateful-1:1 (ocf:heartbeat:Stateful): Unpromoted dl380g5a
+ * stateful-2:1 (ocf:heartbeat:Stateful): Unpromoted dl380g5a
+
+Transition Summary:
+ * Stop stateful-1:0 ( Unpromoted dl380g5b ) due to node availability
+ * Promote stateful-1:1 ( Unpromoted -> Promoted dl380g5a )
+ * Promote stateful-2:1 ( Unpromoted -> Promoted dl380g5a )
+
+Executing Cluster Transition:
+ * Resource action: stateful-1:1 cancel=20000 on dl380g5a
+ * Resource action: stateful-2:1 cancel=20000 on dl380g5a
+ * Pseudo action: ms-sf_pre_notify_stop_0
+ * Resource action: stateful-1:0 notify on dl380g5b
+ * Resource action: stateful-1:1 notify on dl380g5a
+ * Resource action: stateful-2:1 notify on dl380g5a
+ * Pseudo action: ms-sf_confirmed-pre_notify_stop_0
+ * Pseudo action: ms-sf_stop_0
+ * Pseudo action: group:0_stop_0
+ * Resource action: stateful-1:0 stop on dl380g5b
+ * Pseudo action: group:0_stopped_0
+ * Pseudo action: ms-sf_stopped_0
+ * Pseudo action: ms-sf_post_notify_stopped_0
+ * Resource action: stateful-1:1 notify on dl380g5a
+ * Resource action: stateful-2:1 notify on dl380g5a
+ * Pseudo action: ms-sf_confirmed-post_notify_stopped_0
+ * Pseudo action: ms-sf_pre_notify_promote_0
+ * Resource action: stateful-1:1 notify on dl380g5a
+ * Resource action: stateful-2:1 notify on dl380g5a
+ * Pseudo action: ms-sf_confirmed-pre_notify_promote_0
+ * Pseudo action: ms-sf_promote_0
+ * Pseudo action: group:1_promote_0
+ * Resource action: stateful-1:1 promote on dl380g5a
+ * Resource action: stateful-2:1 promote on dl380g5a
+ * Pseudo action: group:1_promoted_0
+ * Pseudo action: ms-sf_promoted_0
+ * Pseudo action: ms-sf_post_notify_promoted_0
+ * Resource action: stateful-1:1 notify on dl380g5a
+ * Resource action: stateful-2:1 notify on dl380g5a
+ * Pseudo action: ms-sf_confirmed-post_notify_promoted_0
+ * Resource action: stateful-1:1 monitor=10000 on dl380g5a
+ * Resource action: stateful-2:1 monitor=10000 on dl380g5a
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ dl380g5a dl380g5b ]
+
+ * Full List of Resources:
+ * Clone Set: ms-sf [group] (promotable, unique):
+ * Resource Group: group:0:
+ * stateful-1:0 (ocf:heartbeat:Stateful): Stopped
+ * stateful-2:0 (ocf:heartbeat:Stateful): Stopped
+ * Resource Group: group:1:
+ * stateful-1:1 (ocf:heartbeat:Stateful): Promoted dl380g5a
+ * stateful-2:1 (ocf:heartbeat:Stateful): Promoted dl380g5a
diff --git a/cts/scheduler/summary/promoted-group.summary b/cts/scheduler/summary/promoted-group.summary
new file mode 100644
index 0000000..44b380c
--- /dev/null
+++ b/cts/scheduler/summary/promoted-group.summary
@@ -0,0 +1,37 @@
+Current cluster status:
+ * Node List:
+ * Online: [ rh44-1 rh44-2 ]
+
+ * Full List of Resources:
+ * Resource Group: test:
+ * resource_1 (ocf:heartbeat:IPaddr): Started rh44-1
+ * Clone Set: ms-sf [grp_ms_sf] (promotable, unique):
+ * Resource Group: grp_ms_sf:0:
+ * promotable_Stateful:0 (ocf:heartbeat:Stateful): Unpromoted rh44-2
+ * Resource Group: grp_ms_sf:1:
+ * promotable_Stateful:1 (ocf:heartbeat:Stateful): Unpromoted rh44-1
+
+Transition Summary:
+ * Promote promotable_Stateful:1 ( Unpromoted -> Promoted rh44-1 )
+
+Executing Cluster Transition:
+ * Resource action: promotable_Stateful:1 cancel=5000 on rh44-1
+ * Pseudo action: ms-sf_promote_0
+ * Pseudo action: grp_ms_sf:1_promote_0
+ * Resource action: promotable_Stateful:1 promote on rh44-1
+ * Pseudo action: grp_ms_sf:1_promoted_0
+ * Resource action: promotable_Stateful:1 monitor=6000 on rh44-1
+ * Pseudo action: ms-sf_promoted_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rh44-1 rh44-2 ]
+
+ * Full List of Resources:
+ * Resource Group: test:
+ * resource_1 (ocf:heartbeat:IPaddr): Started rh44-1
+ * Clone Set: ms-sf [grp_ms_sf] (promotable, unique):
+ * Resource Group: grp_ms_sf:0:
+ * promotable_Stateful:0 (ocf:heartbeat:Stateful): Unpromoted rh44-2
+ * Resource Group: grp_ms_sf:1:
+ * promotable_Stateful:1 (ocf:heartbeat:Stateful): Promoted rh44-1
diff --git a/cts/scheduler/summary/promoted-move.summary b/cts/scheduler/summary/promoted-move.summary
new file mode 100644
index 0000000..4782edb
--- /dev/null
+++ b/cts/scheduler/summary/promoted-move.summary
@@ -0,0 +1,72 @@
+Current cluster status:
+ * Node List:
+ * Online: [ bl460g1n13 bl460g1n14 ]
+
+ * Full List of Resources:
+ * Resource Group: grpDRBD:
+ * dummy01 (ocf:pacemaker:Dummy): FAILED bl460g1n13
+ * dummy02 (ocf:pacemaker:Dummy): Started bl460g1n13
+ * dummy03 (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: msDRBD [prmDRBD] (promotable):
+ * Promoted: [ bl460g1n13 ]
+ * Unpromoted: [ bl460g1n14 ]
+
+Transition Summary:
+ * Recover dummy01 ( bl460g1n13 -> bl460g1n14 )
+ * Move dummy02 ( bl460g1n13 -> bl460g1n14 )
+ * Start dummy03 ( bl460g1n14 )
+ * Demote prmDRBD:0 ( Promoted -> Unpromoted bl460g1n13 )
+ * Promote prmDRBD:1 ( Unpromoted -> Promoted bl460g1n14 )
+
+Executing Cluster Transition:
+ * Pseudo action: grpDRBD_stop_0
+ * Resource action: dummy02 stop on bl460g1n13
+ * Resource action: prmDRBD:0 cancel=10000 on bl460g1n13
+ * Resource action: prmDRBD:1 cancel=20000 on bl460g1n14
+ * Pseudo action: msDRBD_pre_notify_demote_0
+ * Resource action: dummy01 stop on bl460g1n13
+ * Resource action: prmDRBD:0 notify on bl460g1n13
+ * Resource action: prmDRBD:1 notify on bl460g1n14
+ * Pseudo action: msDRBD_confirmed-pre_notify_demote_0
+ * Pseudo action: grpDRBD_stopped_0
+ * Pseudo action: msDRBD_demote_0
+ * Resource action: prmDRBD:0 demote on bl460g1n13
+ * Pseudo action: msDRBD_demoted_0
+ * Pseudo action: msDRBD_post_notify_demoted_0
+ * Resource action: prmDRBD:0 notify on bl460g1n13
+ * Resource action: prmDRBD:1 notify on bl460g1n14
+ * Pseudo action: msDRBD_confirmed-post_notify_demoted_0
+ * Pseudo action: msDRBD_pre_notify_promote_0
+ * Resource action: prmDRBD:0 notify on bl460g1n13
+ * Resource action: prmDRBD:1 notify on bl460g1n14
+ * Pseudo action: msDRBD_confirmed-pre_notify_promote_0
+ * Pseudo action: msDRBD_promote_0
+ * Resource action: prmDRBD:1 promote on bl460g1n14
+ * Pseudo action: msDRBD_promoted_0
+ * Pseudo action: msDRBD_post_notify_promoted_0
+ * Resource action: prmDRBD:0 notify on bl460g1n13
+ * Resource action: prmDRBD:1 notify on bl460g1n14
+ * Pseudo action: msDRBD_confirmed-post_notify_promoted_0
+ * Pseudo action: grpDRBD_start_0
+ * Resource action: dummy01 start on bl460g1n14
+ * Resource action: dummy02 start on bl460g1n14
+ * Resource action: dummy03 start on bl460g1n14
+ * Resource action: prmDRBD:0 monitor=20000 on bl460g1n13
+ * Resource action: prmDRBD:1 monitor=10000 on bl460g1n14
+ * Pseudo action: grpDRBD_running_0
+ * Resource action: dummy01 monitor=10000 on bl460g1n14
+ * Resource action: dummy02 monitor=10000 on bl460g1n14
+ * Resource action: dummy03 monitor=10000 on bl460g1n14
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ bl460g1n13 bl460g1n14 ]
+
+ * Full List of Resources:
+ * Resource Group: grpDRBD:
+ * dummy01 (ocf:pacemaker:Dummy): Started bl460g1n14
+ * dummy02 (ocf:pacemaker:Dummy): Started bl460g1n14
+ * dummy03 (ocf:pacemaker:Dummy): Started bl460g1n14
+ * Clone Set: msDRBD [prmDRBD] (promotable):
+ * Promoted: [ bl460g1n14 ]
+ * Unpromoted: [ bl460g1n13 ]
diff --git a/cts/scheduler/summary/promoted-notify.summary b/cts/scheduler/summary/promoted-notify.summary
new file mode 100644
index 0000000..f0fb040
--- /dev/null
+++ b/cts/scheduler/summary/promoted-notify.summary
@@ -0,0 +1,36 @@
+Current cluster status:
+ * Node List:
+ * Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto1
+ * Clone Set: fake-master [fake] (promotable):
+ * Unpromoted: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
+
+Transition Summary:
+ * Promote fake:0 ( Unpromoted -> Promoted rhel7-auto1 )
+
+Executing Cluster Transition:
+ * Pseudo action: fake-master_pre_notify_promote_0
+ * Resource action: fake notify on rhel7-auto1
+ * Resource action: fake notify on rhel7-auto3
+ * Resource action: fake notify on rhel7-auto2
+ * Pseudo action: fake-master_confirmed-pre_notify_promote_0
+ * Pseudo action: fake-master_promote_0
+ * Resource action: fake promote on rhel7-auto1
+ * Pseudo action: fake-master_promoted_0
+ * Pseudo action: fake-master_post_notify_promoted_0
+ * Resource action: fake notify on rhel7-auto1
+ * Resource action: fake notify on rhel7-auto3
+ * Resource action: fake notify on rhel7-auto2
+ * Pseudo action: fake-master_confirmed-post_notify_promoted_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto1
+ * Clone Set: fake-master [fake] (promotable):
+ * Promoted: [ rhel7-auto1 ]
+ * Unpromoted: [ rhel7-auto2 rhel7-auto3 ]
diff --git a/cts/scheduler/summary/promoted-ordering.summary b/cts/scheduler/summary/promoted-ordering.summary
new file mode 100644
index 0000000..3222e18
--- /dev/null
+++ b/cts/scheduler/summary/promoted-ordering.summary
@@ -0,0 +1,96 @@
+Current cluster status:
+ * Node List:
+ * Online: [ webcluster01 ]
+ * OFFLINE: [ webcluster02 ]
+
+ * Full List of Resources:
+ * mysql-server (ocf:heartbeat:mysql): Stopped
+ * extip_1 (ocf:heartbeat:IPaddr2): Stopped
+ * extip_2 (ocf:heartbeat:IPaddr2): Stopped
+ * Resource Group: group_main:
+ * intip_0_main (ocf:heartbeat:IPaddr2): Stopped
+ * intip_1_master (ocf:heartbeat:IPaddr2): Stopped
+ * intip_2_slave (ocf:heartbeat:IPaddr2): Stopped
+ * Clone Set: ms_drbd_www [drbd_www] (promotable):
+ * Stopped: [ webcluster01 webcluster02 ]
+ * Clone Set: clone_ocfs2_www [ocfs2_www] (unique):
+ * ocfs2_www:0 (ocf:heartbeat:Filesystem): Stopped
+ * ocfs2_www:1 (ocf:heartbeat:Filesystem): Stopped
+ * Clone Set: clone_webservice [group_webservice]:
+ * Stopped: [ webcluster01 webcluster02 ]
+ * Clone Set: ms_drbd_mysql [drbd_mysql] (promotable):
+ * Stopped: [ webcluster01 webcluster02 ]
+ * fs_mysql (ocf:heartbeat:Filesystem): Stopped
+
+Transition Summary:
+ * Start extip_1 ( webcluster01 )
+ * Start extip_2 ( webcluster01 )
+ * Start intip_1_master ( webcluster01 )
+ * Start intip_2_slave ( webcluster01 )
+ * Start drbd_www:0 ( webcluster01 )
+ * Start drbd_mysql:0 ( webcluster01 )
+
+Executing Cluster Transition:
+ * Resource action: mysql-server monitor on webcluster01
+ * Resource action: extip_1 monitor on webcluster01
+ * Resource action: extip_2 monitor on webcluster01
+ * Resource action: intip_0_main monitor on webcluster01
+ * Resource action: intip_1_master monitor on webcluster01
+ * Resource action: intip_2_slave monitor on webcluster01
+ * Resource action: drbd_www:0 monitor on webcluster01
+ * Pseudo action: ms_drbd_www_pre_notify_start_0
+ * Resource action: ocfs2_www:0 monitor on webcluster01
+ * Resource action: ocfs2_www:1 monitor on webcluster01
+ * Resource action: apache2:0 monitor on webcluster01
+ * Resource action: mysql-proxy:0 monitor on webcluster01
+ * Resource action: drbd_mysql:0 monitor on webcluster01
+ * Pseudo action: ms_drbd_mysql_pre_notify_start_0
+ * Resource action: fs_mysql monitor on webcluster01
+ * Resource action: extip_1 start on webcluster01
+ * Resource action: extip_2 start on webcluster01
+ * Resource action: intip_1_master start on webcluster01
+ * Resource action: intip_2_slave start on webcluster01
+ * Pseudo action: ms_drbd_www_confirmed-pre_notify_start_0
+ * Pseudo action: ms_drbd_www_start_0
+ * Pseudo action: ms_drbd_mysql_confirmed-pre_notify_start_0
+ * Pseudo action: ms_drbd_mysql_start_0
+ * Resource action: extip_1 monitor=30000 on webcluster01
+ * Resource action: extip_2 monitor=30000 on webcluster01
+ * Resource action: intip_1_master monitor=30000 on webcluster01
+ * Resource action: intip_2_slave monitor=30000 on webcluster01
+ * Resource action: drbd_www:0 start on webcluster01
+ * Pseudo action: ms_drbd_www_running_0
+ * Resource action: drbd_mysql:0 start on webcluster01
+ * Pseudo action: ms_drbd_mysql_running_0
+ * Pseudo action: ms_drbd_www_post_notify_running_0
+ * Pseudo action: ms_drbd_mysql_post_notify_running_0
+ * Resource action: drbd_www:0 notify on webcluster01
+ * Pseudo action: ms_drbd_www_confirmed-post_notify_running_0
+ * Resource action: drbd_mysql:0 notify on webcluster01
+ * Pseudo action: ms_drbd_mysql_confirmed-post_notify_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ webcluster01 ]
+ * OFFLINE: [ webcluster02 ]
+
+ * Full List of Resources:
+ * mysql-server (ocf:heartbeat:mysql): Stopped
+ * extip_1 (ocf:heartbeat:IPaddr2): Started webcluster01
+ * extip_2 (ocf:heartbeat:IPaddr2): Started webcluster01
+ * Resource Group: group_main:
+ * intip_0_main (ocf:heartbeat:IPaddr2): Stopped
+ * intip_1_master (ocf:heartbeat:IPaddr2): Started webcluster01
+ * intip_2_slave (ocf:heartbeat:IPaddr2): Started webcluster01
+ * Clone Set: ms_drbd_www [drbd_www] (promotable):
+ * Unpromoted: [ webcluster01 ]
+ * Stopped: [ webcluster02 ]
+ * Clone Set: clone_ocfs2_www [ocfs2_www] (unique):
+ * ocfs2_www:0 (ocf:heartbeat:Filesystem): Stopped
+ * ocfs2_www:1 (ocf:heartbeat:Filesystem): Stopped
+ * Clone Set: clone_webservice [group_webservice]:
+ * Stopped: [ webcluster01 webcluster02 ]
+ * Clone Set: ms_drbd_mysql [drbd_mysql] (promotable):
+ * Unpromoted: [ webcluster01 ]
+ * Stopped: [ webcluster02 ]
+ * fs_mysql (ocf:heartbeat:Filesystem): Stopped
diff --git a/cts/scheduler/summary/promoted-partially-demoted-group.summary b/cts/scheduler/summary/promoted-partially-demoted-group.summary
new file mode 100644
index 0000000..b85c805
--- /dev/null
+++ b/cts/scheduler/summary/promoted-partially-demoted-group.summary
@@ -0,0 +1,118 @@
+Current cluster status:
+ * Node List:
+ * Online: [ sd01-0 sd01-1 ]
+
+ * Full List of Resources:
+ * stonith-xvm-sd01-0 (stonith:fence_xvm): Started sd01-1
+ * stonith-xvm-sd01-1 (stonith:fence_xvm): Started sd01-0
+ * Resource Group: cdev-pool-0-iscsi-export:
+ * cdev-pool-0-iscsi-target (ocf:vds-ok:iSCSITarget): Started sd01-1
+ * cdev-pool-0-iscsi-lun-1 (ocf:vds-ok:iSCSILogicalUnit): Started sd01-1
+ * Clone Set: ms-cdev-pool-0-drbd [cdev-pool-0-drbd] (promotable):
+ * Promoted: [ sd01-1 ]
+ * Unpromoted: [ sd01-0 ]
+ * Clone Set: cl-ietd [ietd]:
+ * Started: [ sd01-0 sd01-1 ]
+ * Clone Set: cl-vlan1-net [vlan1-net]:
+ * Started: [ sd01-0 sd01-1 ]
+ * Resource Group: cdev-pool-0-iscsi-vips:
+ * vip-164 (ocf:heartbeat:IPaddr2): Started sd01-1
+ * vip-165 (ocf:heartbeat:IPaddr2): Started sd01-1
+ * Clone Set: ms-cdev-pool-0-iscsi-vips-fw [cdev-pool-0-iscsi-vips-fw] (promotable):
+ * Promoted: [ sd01-1 ]
+ * Unpromoted: [ sd01-0 ]
+
+Transition Summary:
+ * Move vip-164 ( sd01-1 -> sd01-0 )
+ * Move vip-165 ( sd01-1 -> sd01-0 )
+ * Move cdev-pool-0-iscsi-target ( sd01-1 -> sd01-0 )
+ * Move cdev-pool-0-iscsi-lun-1 ( sd01-1 -> sd01-0 )
+ * Demote vip-164-fw:0 ( Promoted -> Unpromoted sd01-1 )
+ * Promote vip-164-fw:1 ( Unpromoted -> Promoted sd01-0 )
+ * Promote vip-165-fw:1 ( Unpromoted -> Promoted sd01-0 )
+ * Demote cdev-pool-0-drbd:0 ( Promoted -> Unpromoted sd01-1 )
+ * Promote cdev-pool-0-drbd:1 ( Unpromoted -> Promoted sd01-0 )
+
+Executing Cluster Transition:
+ * Resource action: vip-165-fw monitor=10000 on sd01-1
+ * Pseudo action: ms-cdev-pool-0-iscsi-vips-fw_demote_0
+ * Pseudo action: ms-cdev-pool-0-drbd_pre_notify_demote_0
+ * Pseudo action: cdev-pool-0-iscsi-vips-fw:0_demote_0
+ * Resource action: vip-164-fw demote on sd01-1
+ * Resource action: cdev-pool-0-drbd notify on sd01-1
+ * Resource action: cdev-pool-0-drbd notify on sd01-0
+ * Pseudo action: ms-cdev-pool-0-drbd_confirmed-pre_notify_demote_0
+ * Pseudo action: cdev-pool-0-iscsi-vips-fw:0_demoted_0
+ * Resource action: vip-164-fw monitor=10000 on sd01-1
+ * Pseudo action: ms-cdev-pool-0-iscsi-vips-fw_demoted_0
+ * Pseudo action: cdev-pool-0-iscsi-vips_stop_0
+ * Resource action: vip-165 stop on sd01-1
+ * Resource action: vip-164 stop on sd01-1
+ * Pseudo action: cdev-pool-0-iscsi-vips_stopped_0
+ * Pseudo action: cdev-pool-0-iscsi-export_stop_0
+ * Resource action: cdev-pool-0-iscsi-lun-1 stop on sd01-1
+ * Resource action: cdev-pool-0-iscsi-target stop on sd01-1
+ * Pseudo action: cdev-pool-0-iscsi-export_stopped_0
+ * Pseudo action: ms-cdev-pool-0-drbd_demote_0
+ * Resource action: cdev-pool-0-drbd demote on sd01-1
+ * Pseudo action: ms-cdev-pool-0-drbd_demoted_0
+ * Pseudo action: ms-cdev-pool-0-drbd_post_notify_demoted_0
+ * Resource action: cdev-pool-0-drbd notify on sd01-1
+ * Resource action: cdev-pool-0-drbd notify on sd01-0
+ * Pseudo action: ms-cdev-pool-0-drbd_confirmed-post_notify_demoted_0
+ * Pseudo action: ms-cdev-pool-0-drbd_pre_notify_promote_0
+ * Resource action: cdev-pool-0-drbd notify on sd01-1
+ * Resource action: cdev-pool-0-drbd notify on sd01-0
+ * Pseudo action: ms-cdev-pool-0-drbd_confirmed-pre_notify_promote_0
+ * Pseudo action: ms-cdev-pool-0-drbd_promote_0
+ * Resource action: cdev-pool-0-drbd promote on sd01-0
+ * Pseudo action: ms-cdev-pool-0-drbd_promoted_0
+ * Pseudo action: ms-cdev-pool-0-drbd_post_notify_promoted_0
+ * Resource action: cdev-pool-0-drbd notify on sd01-1
+ * Resource action: cdev-pool-0-drbd notify on sd01-0
+ * Pseudo action: ms-cdev-pool-0-drbd_confirmed-post_notify_promoted_0
+ * Pseudo action: cdev-pool-0-iscsi-export_start_0
+ * Resource action: cdev-pool-0-iscsi-target start on sd01-0
+ * Resource action: cdev-pool-0-iscsi-lun-1 start on sd01-0
+ * Resource action: cdev-pool-0-drbd monitor=20000 on sd01-1
+ * Resource action: cdev-pool-0-drbd monitor=10000 on sd01-0
+ * Pseudo action: cdev-pool-0-iscsi-export_running_0
+ * Resource action: cdev-pool-0-iscsi-target monitor=10000 on sd01-0
+ * Resource action: cdev-pool-0-iscsi-lun-1 monitor=10000 on sd01-0
+ * Pseudo action: cdev-pool-0-iscsi-vips_start_0
+ * Resource action: vip-164 start on sd01-0
+ * Resource action: vip-165 start on sd01-0
+ * Pseudo action: cdev-pool-0-iscsi-vips_running_0
+ * Resource action: vip-164 monitor=30000 on sd01-0
+ * Resource action: vip-165 monitor=30000 on sd01-0
+ * Pseudo action: ms-cdev-pool-0-iscsi-vips-fw_promote_0
+ * Pseudo action: cdev-pool-0-iscsi-vips-fw:0_promote_0
+ * Pseudo action: cdev-pool-0-iscsi-vips-fw:1_promote_0
+ * Resource action: vip-164-fw promote on sd01-0
+ * Resource action: vip-165-fw promote on sd01-0
+ * Pseudo action: cdev-pool-0-iscsi-vips-fw:1_promoted_0
+ * Pseudo action: ms-cdev-pool-0-iscsi-vips-fw_promoted_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ sd01-0 sd01-1 ]
+
+ * Full List of Resources:
+ * stonith-xvm-sd01-0 (stonith:fence_xvm): Started sd01-1
+ * stonith-xvm-sd01-1 (stonith:fence_xvm): Started sd01-0
+ * Resource Group: cdev-pool-0-iscsi-export:
+ * cdev-pool-0-iscsi-target (ocf:vds-ok:iSCSITarget): Started sd01-0
+ * cdev-pool-0-iscsi-lun-1 (ocf:vds-ok:iSCSILogicalUnit): Started sd01-0
+ * Clone Set: ms-cdev-pool-0-drbd [cdev-pool-0-drbd] (promotable):
+ * Promoted: [ sd01-0 ]
+ * Unpromoted: [ sd01-1 ]
+ * Clone Set: cl-ietd [ietd]:
+ * Started: [ sd01-0 sd01-1 ]
+ * Clone Set: cl-vlan1-net [vlan1-net]:
+ * Started: [ sd01-0 sd01-1 ]
+ * Resource Group: cdev-pool-0-iscsi-vips:
+ * vip-164 (ocf:heartbeat:IPaddr2): Started sd01-0
+ * vip-165 (ocf:heartbeat:IPaddr2): Started sd01-0
+ * Clone Set: ms-cdev-pool-0-iscsi-vips-fw [cdev-pool-0-iscsi-vips-fw] (promotable):
+ * Promoted: [ sd01-0 ]
+ * Unpromoted: [ sd01-1 ]
diff --git a/cts/scheduler/summary/promoted-probed-score.summary b/cts/scheduler/summary/promoted-probed-score.summary
new file mode 100644
index 0000000..3c9326c
--- /dev/null
+++ b/cts/scheduler/summary/promoted-probed-score.summary
@@ -0,0 +1,329 @@
+1 of 60 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ]
+
+ * Full List of Resources:
+ * Clone Set: AdminClone [AdminDrbd] (promotable):
+ * Stopped: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ]
+ * CronAmbientTemperature (ocf:heartbeat:symlink): Stopped
+ * StonithHypatia (stonith:fence_nut): Stopped
+ * StonithOrestes (stonith:fence_nut): Stopped
+ * Resource Group: DhcpGroup:
+ * SymlinkDhcpdConf (ocf:heartbeat:symlink): Stopped
+ * SymlinkSysconfigDhcpd (ocf:heartbeat:symlink): Stopped
+ * SymlinkDhcpdLeases (ocf:heartbeat:symlink): Stopped
+ * Dhcpd (lsb:dhcpd): Stopped (disabled)
+ * DhcpIP (ocf:heartbeat:IPaddr2): Stopped
+ * Clone Set: CupsClone [CupsGroup]:
+ * Stopped: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ]
+ * Clone Set: IPClone [IPGroup] (unique):
+ * Resource Group: IPGroup:0:
+ * ClusterIP:0 (ocf:heartbeat:IPaddr2): Stopped
+ * ClusterIPLocal:0 (ocf:heartbeat:IPaddr2): Stopped
+ * ClusterIPSandbox:0 (ocf:heartbeat:IPaddr2): Stopped
+ * Resource Group: IPGroup:1:
+ * ClusterIP:1 (ocf:heartbeat:IPaddr2): Stopped
+ * ClusterIPLocal:1 (ocf:heartbeat:IPaddr2): Stopped
+ * ClusterIPSandbox:1 (ocf:heartbeat:IPaddr2): Stopped
+ * Clone Set: LibvirtdClone [LibvirtdGroup]:
+ * Stopped: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ]
+ * Clone Set: TftpClone [TftpGroup]:
+ * Stopped: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ]
+ * Clone Set: ExportsClone [ExportsGroup]:
+ * Stopped: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ]
+ * Clone Set: FilesystemClone [FilesystemGroup]:
+ * Stopped: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ]
+ * KVM-guest (ocf:heartbeat:VirtualDomain): Stopped
+ * Proxy (ocf:heartbeat:VirtualDomain): Stopped
+
+Transition Summary:
+ * Promote AdminDrbd:0 ( Stopped -> Promoted hypatia-corosync.nevis.columbia.edu )
+ * Promote AdminDrbd:1 ( Stopped -> Promoted orestes-corosync.nevis.columbia.edu )
+ * Start CronAmbientTemperature ( hypatia-corosync.nevis.columbia.edu )
+ * Start StonithHypatia ( orestes-corosync.nevis.columbia.edu )
+ * Start StonithOrestes ( hypatia-corosync.nevis.columbia.edu )
+ * Start SymlinkDhcpdConf ( orestes-corosync.nevis.columbia.edu )
+ * Start SymlinkSysconfigDhcpd ( orestes-corosync.nevis.columbia.edu )
+ * Start SymlinkDhcpdLeases ( orestes-corosync.nevis.columbia.edu )
+ * Start SymlinkUsrShareCups:0 ( hypatia-corosync.nevis.columbia.edu )
+ * Start SymlinkCupsdConf:0 ( hypatia-corosync.nevis.columbia.edu )
+ * Start Cups:0 ( hypatia-corosync.nevis.columbia.edu )
+ * Start SymlinkUsrShareCups:1 ( orestes-corosync.nevis.columbia.edu )
+ * Start SymlinkCupsdConf:1 ( orestes-corosync.nevis.columbia.edu )
+ * Start Cups:1 ( orestes-corosync.nevis.columbia.edu )
+ * Start ClusterIP:0 ( hypatia-corosync.nevis.columbia.edu )
+ * Start ClusterIPLocal:0 ( hypatia-corosync.nevis.columbia.edu )
+ * Start ClusterIPSandbox:0 ( hypatia-corosync.nevis.columbia.edu )
+ * Start ClusterIP:1 ( orestes-corosync.nevis.columbia.edu )
+ * Start ClusterIPLocal:1 ( orestes-corosync.nevis.columbia.edu )
+ * Start ClusterIPSandbox:1 ( orestes-corosync.nevis.columbia.edu )
+ * Start SymlinkEtcLibvirt:0 ( hypatia-corosync.nevis.columbia.edu )
+ * Start Libvirtd:0 ( hypatia-corosync.nevis.columbia.edu )
+ * Start SymlinkEtcLibvirt:1 ( orestes-corosync.nevis.columbia.edu )
+ * Start Libvirtd:1 ( orestes-corosync.nevis.columbia.edu )
+ * Start SymlinkTftp:0 ( hypatia-corosync.nevis.columbia.edu )
+ * Start Xinetd:0 ( hypatia-corosync.nevis.columbia.edu )
+ * Start SymlinkTftp:1 ( orestes-corosync.nevis.columbia.edu )
+ * Start Xinetd:1 ( orestes-corosync.nevis.columbia.edu )
+ * Start ExportMail:0 ( hypatia-corosync.nevis.columbia.edu )
+ * Start ExportMailInbox:0 ( hypatia-corosync.nevis.columbia.edu )
+ * Start ExportMailFolders:0 ( hypatia-corosync.nevis.columbia.edu )
+ * Start ExportMailForward:0 ( hypatia-corosync.nevis.columbia.edu )
+ * Start ExportMailProcmailrc:0 ( hypatia-corosync.nevis.columbia.edu )
+ * Start ExportUsrNevis:0 ( hypatia-corosync.nevis.columbia.edu )
+ * Start ExportUsrNevisOffsite:0 ( hypatia-corosync.nevis.columbia.edu )
+ * Start ExportWWW:0 ( hypatia-corosync.nevis.columbia.edu )
+ * Start ExportMail:1 ( orestes-corosync.nevis.columbia.edu )
+ * Start ExportMailInbox:1 ( orestes-corosync.nevis.columbia.edu )
+ * Start ExportMailFolders:1 ( orestes-corosync.nevis.columbia.edu )
+ * Start ExportMailForward:1 ( orestes-corosync.nevis.columbia.edu )
+ * Start ExportMailProcmailrc:1 ( orestes-corosync.nevis.columbia.edu )
+ * Start ExportUsrNevis:1 ( orestes-corosync.nevis.columbia.edu )
+ * Start ExportUsrNevisOffsite:1 ( orestes-corosync.nevis.columbia.edu )
+ * Start ExportWWW:1 ( orestes-corosync.nevis.columbia.edu )
+ * Start AdminLvm:0 ( hypatia-corosync.nevis.columbia.edu )
+ * Start FSUsrNevis:0 ( hypatia-corosync.nevis.columbia.edu )
+ * Start FSVarNevis:0 ( hypatia-corosync.nevis.columbia.edu )
+ * Start FSVirtualMachines:0 ( hypatia-corosync.nevis.columbia.edu )
+ * Start FSMail:0 ( hypatia-corosync.nevis.columbia.edu )
+ * Start FSWork:0 ( hypatia-corosync.nevis.columbia.edu )
+ * Start AdminLvm:1 ( orestes-corosync.nevis.columbia.edu )
+ * Start FSUsrNevis:1 ( orestes-corosync.nevis.columbia.edu )
+ * Start FSVarNevis:1 ( orestes-corosync.nevis.columbia.edu )
+ * Start FSVirtualMachines:1 ( orestes-corosync.nevis.columbia.edu )
+ * Start FSMail:1 ( orestes-corosync.nevis.columbia.edu )
+ * Start FSWork:1 ( orestes-corosync.nevis.columbia.edu )
+ * Start KVM-guest ( hypatia-corosync.nevis.columbia.edu )
+ * Start Proxy ( orestes-corosync.nevis.columbia.edu )
+
+Executing Cluster Transition:
+ * Pseudo action: AdminClone_pre_notify_start_0
+ * Resource action: StonithHypatia start on orestes-corosync.nevis.columbia.edu
+ * Resource action: StonithOrestes start on hypatia-corosync.nevis.columbia.edu
+ * Resource action: SymlinkEtcLibvirt:0 monitor on hypatia-corosync.nevis.columbia.edu
+ * Resource action: Libvirtd:0 monitor on orestes-corosync.nevis.columbia.edu
+ * Resource action: Libvirtd:0 monitor on hypatia-corosync.nevis.columbia.edu
+ * Resource action: SymlinkTftp:0 monitor on hypatia-corosync.nevis.columbia.edu
+ * Resource action: Xinetd:0 monitor on hypatia-corosync.nevis.columbia.edu
+ * Resource action: SymlinkTftp:1 monitor on orestes-corosync.nevis.columbia.edu
+ * Resource action: Xinetd:1 monitor on orestes-corosync.nevis.columbia.edu
+ * Resource action: ExportMail:0 monitor on hypatia-corosync.nevis.columbia.edu
+ * Resource action: ExportMailInbox:0 monitor on hypatia-corosync.nevis.columbia.edu
+ * Resource action: ExportMailFolders:0 monitor on hypatia-corosync.nevis.columbia.edu
+ * Resource action: ExportMailForward:0 monitor on hypatia-corosync.nevis.columbia.edu
+ * Resource action: ExportMailProcmailrc:0 monitor on hypatia-corosync.nevis.columbia.edu
+ * Resource action: ExportUsrNevis:0 monitor on hypatia-corosync.nevis.columbia.edu
+ * Resource action: ExportUsrNevisOffsite:0 monitor on hypatia-corosync.nevis.columbia.edu
+ * Resource action: ExportWWW:0 monitor on hypatia-corosync.nevis.columbia.edu
+ * Resource action: ExportMail:1 monitor on orestes-corosync.nevis.columbia.edu
+ * Resource action: ExportMailInbox:1 monitor on orestes-corosync.nevis.columbia.edu
+ * Resource action: ExportMailFolders:1 monitor on orestes-corosync.nevis.columbia.edu
+ * Resource action: ExportMailForward:1 monitor on orestes-corosync.nevis.columbia.edu
+ * Resource action: ExportMailProcmailrc:1 monitor on orestes-corosync.nevis.columbia.edu
+ * Resource action: ExportUsrNevis:1 monitor on orestes-corosync.nevis.columbia.edu
+ * Resource action: ExportUsrNevisOffsite:1 monitor on orestes-corosync.nevis.columbia.edu
+ * Resource action: ExportWWW:1 monitor on orestes-corosync.nevis.columbia.edu
+ * Resource action: AdminLvm:0 monitor on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSUsrNevis:0 monitor on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSVarNevis:0 monitor on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSVirtualMachines:0 monitor on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSMail:0 monitor on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSWork:0 monitor on hypatia-corosync.nevis.columbia.edu
+ * Resource action: AdminLvm:1 monitor on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSUsrNevis:1 monitor on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSVarNevis:1 monitor on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSVirtualMachines:1 monitor on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSMail:1 monitor on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSWork:1 monitor on orestes-corosync.nevis.columbia.edu
+ * Resource action: KVM-guest monitor on orestes-corosync.nevis.columbia.edu
+ * Resource action: KVM-guest monitor on hypatia-corosync.nevis.columbia.edu
+ * Resource action: Proxy monitor on orestes-corosync.nevis.columbia.edu
+ * Resource action: Proxy monitor on hypatia-corosync.nevis.columbia.edu
+ * Pseudo action: AdminClone_confirmed-pre_notify_start_0
+ * Pseudo action: AdminClone_start_0
+ * Resource action: AdminDrbd:0 start on hypatia-corosync.nevis.columbia.edu
+ * Resource action: AdminDrbd:1 start on orestes-corosync.nevis.columbia.edu
+ * Pseudo action: AdminClone_running_0
+ * Pseudo action: AdminClone_post_notify_running_0
+ * Resource action: AdminDrbd:0 notify on hypatia-corosync.nevis.columbia.edu
+ * Resource action: AdminDrbd:1 notify on orestes-corosync.nevis.columbia.edu
+ * Pseudo action: AdminClone_confirmed-post_notify_running_0
+ * Pseudo action: AdminClone_pre_notify_promote_0
+ * Resource action: AdminDrbd:0 notify on hypatia-corosync.nevis.columbia.edu
+ * Resource action: AdminDrbd:1 notify on orestes-corosync.nevis.columbia.edu
+ * Pseudo action: AdminClone_confirmed-pre_notify_promote_0
+ * Pseudo action: AdminClone_promote_0
+ * Resource action: AdminDrbd:0 promote on hypatia-corosync.nevis.columbia.edu
+ * Resource action: AdminDrbd:1 promote on orestes-corosync.nevis.columbia.edu
+ * Pseudo action: AdminClone_promoted_0
+ * Pseudo action: AdminClone_post_notify_promoted_0
+ * Resource action: AdminDrbd:0 notify on hypatia-corosync.nevis.columbia.edu
+ * Resource action: AdminDrbd:1 notify on orestes-corosync.nevis.columbia.edu
+ * Pseudo action: AdminClone_confirmed-post_notify_promoted_0
+ * Pseudo action: FilesystemClone_start_0
+ * Resource action: AdminDrbd:0 monitor=59000 on hypatia-corosync.nevis.columbia.edu
+ * Resource action: AdminDrbd:1 monitor=59000 on orestes-corosync.nevis.columbia.edu
+ * Pseudo action: FilesystemGroup:0_start_0
+ * Resource action: AdminLvm:0 start on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSUsrNevis:0 start on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSVarNevis:0 start on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSVirtualMachines:0 start on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSMail:0 start on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSWork:0 start on hypatia-corosync.nevis.columbia.edu
+ * Pseudo action: FilesystemGroup:1_start_0
+ * Resource action: AdminLvm:1 start on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSUsrNevis:1 start on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSVarNevis:1 start on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSVirtualMachines:1 start on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSMail:1 start on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSWork:1 start on orestes-corosync.nevis.columbia.edu
+ * Pseudo action: FilesystemGroup:0_running_0
+ * Resource action: AdminLvm:0 monitor=30000 on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSUsrNevis:0 monitor=20000 on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSVarNevis:0 monitor=20000 on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSVirtualMachines:0 monitor=20000 on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSMail:0 monitor=20000 on hypatia-corosync.nevis.columbia.edu
+ * Resource action: FSWork:0 monitor=20000 on hypatia-corosync.nevis.columbia.edu
+ * Pseudo action: FilesystemGroup:1_running_0
+ * Resource action: AdminLvm:1 monitor=30000 on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSUsrNevis:1 monitor=20000 on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSVarNevis:1 monitor=20000 on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSVirtualMachines:1 monitor=20000 on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSMail:1 monitor=20000 on orestes-corosync.nevis.columbia.edu
+ * Resource action: FSWork:1 monitor=20000 on orestes-corosync.nevis.columbia.edu
+ * Pseudo action: FilesystemClone_running_0
+ * Resource action: CronAmbientTemperature start on hypatia-corosync.nevis.columbia.edu
+ * Pseudo action: DhcpGroup_start_0
+ * Resource action: SymlinkDhcpdConf start on orestes-corosync.nevis.columbia.edu
+ * Resource action: SymlinkSysconfigDhcpd start on orestes-corosync.nevis.columbia.edu
+ * Resource action: SymlinkDhcpdLeases start on orestes-corosync.nevis.columbia.edu
+ * Pseudo action: CupsClone_start_0
+ * Pseudo action: IPClone_start_0
+ * Pseudo action: LibvirtdClone_start_0
+ * Pseudo action: TftpClone_start_0
+ * Pseudo action: ExportsClone_start_0
+ * Resource action: CronAmbientTemperature monitor=60000 on hypatia-corosync.nevis.columbia.edu
+ * Resource action: SymlinkDhcpdConf monitor=60000 on orestes-corosync.nevis.columbia.edu
+ * Resource action: SymlinkSysconfigDhcpd monitor=60000 on orestes-corosync.nevis.columbia.edu
+ * Resource action: SymlinkDhcpdLeases monitor=60000 on orestes-corosync.nevis.columbia.edu
+ * Pseudo action: CupsGroup:0_start_0
+ * Resource action: SymlinkUsrShareCups:0 start on hypatia-corosync.nevis.columbia.edu
+ * Resource action: SymlinkCupsdConf:0 start on hypatia-corosync.nevis.columbia.edu
+ * Resource action: Cups:0 start on hypatia-corosync.nevis.columbia.edu
+ * Pseudo action: CupsGroup:1_start_0
+ * Resource action: SymlinkUsrShareCups:1 start on orestes-corosync.nevis.columbia.edu
+ * Resource action: SymlinkCupsdConf:1 start on orestes-corosync.nevis.columbia.edu
+ * Resource action: Cups:1 start on orestes-corosync.nevis.columbia.edu
+ * Pseudo action: IPGroup:0_start_0
+ * Resource action: ClusterIP:0 start on hypatia-corosync.nevis.columbia.edu
+ * Resource action: ClusterIPLocal:0 start on hypatia-corosync.nevis.columbia.edu
+ * Resource action: ClusterIPSandbox:0 start on hypatia-corosync.nevis.columbia.edu
+ * Pseudo action: IPGroup:1_start_0
+ * Resource action: ClusterIP:1 start on orestes-corosync.nevis.columbia.edu
+ * Resource action: ClusterIPLocal:1 start on orestes-corosync.nevis.columbia.edu
+ * Resource action: ClusterIPSandbox:1 start on orestes-corosync.nevis.columbia.edu
+ * Pseudo action: LibvirtdGroup:0_start_0
+ * Resource action: SymlinkEtcLibvirt:0 start on hypatia-corosync.nevis.columbia.edu
+ * Resource action: Libvirtd:0 start on hypatia-corosync.nevis.columbia.edu
+ * Pseudo action: LibvirtdGroup:1_start_0
+ * Resource action: SymlinkEtcLibvirt:1 start on orestes-corosync.nevis.columbia.edu
+ * Resource action: Libvirtd:1 start on orestes-corosync.nevis.columbia.edu
+ * Pseudo action: TftpGroup:0_start_0
+ * Resource action: SymlinkTftp:0 start on hypatia-corosync.nevis.columbia.edu
+ * Resource action: Xinetd:0 start on hypatia-corosync.nevis.columbia.edu
+ * Pseudo action: TftpGroup:1_start_0
+ * Resource action: SymlinkTftp:1 start on orestes-corosync.nevis.columbia.edu
+ * Resource action: Xinetd:1 start on orestes-corosync.nevis.columbia.edu
+ * Pseudo action: ExportsGroup:0_start_0
+ * Resource action: ExportMail:0 start on hypatia-corosync.nevis.columbia.edu
+ * Resource action: ExportMailInbox:0 start on hypatia-corosync.nevis.columbia.edu
+ * Resource action: ExportMailFolders:0 start on hypatia-corosync.nevis.columbia.edu
+ * Resource action: ExportMailForward:0 start on hypatia-corosync.nevis.columbia.edu
+ * Resource action: ExportMailProcmailrc:0 start on hypatia-corosync.nevis.columbia.edu
+ * Resource action: ExportUsrNevis:0 start on hypatia-corosync.nevis.columbia.edu
+ * Resource action: ExportUsrNevisOffsite:0 start on hypatia-corosync.nevis.columbia.edu
+ * Resource action: ExportWWW:0 start on hypatia-corosync.nevis.columbia.edu
+ * Pseudo action: ExportsGroup:1_start_0
+ * Resource action: ExportMail:1 start on orestes-corosync.nevis.columbia.edu
+ * Resource action: ExportMailInbox:1 start on orestes-corosync.nevis.columbia.edu
+ * Resource action: ExportMailFolders:1 start on orestes-corosync.nevis.columbia.edu
+ * Resource action: ExportMailForward:1 start on orestes-corosync.nevis.columbia.edu
+ * Resource action: ExportMailProcmailrc:1 start on orestes-corosync.nevis.columbia.edu
+ * Resource action: ExportUsrNevis:1 start on orestes-corosync.nevis.columbia.edu
+ * Resource action: ExportUsrNevisOffsite:1 start on orestes-corosync.nevis.columbia.edu
+ * Resource action: ExportWWW:1 start on orestes-corosync.nevis.columbia.edu
+ * Pseudo action: CupsGroup:0_running_0
+ * Resource action: SymlinkUsrShareCups:0 monitor=60000 on hypatia-corosync.nevis.columbia.edu
+ * Resource action: SymlinkCupsdConf:0 monitor=60000 on hypatia-corosync.nevis.columbia.edu
+ * Resource action: Cups:0 monitor=30000 on hypatia-corosync.nevis.columbia.edu
+ * Pseudo action: CupsGroup:1_running_0
+ * Resource action: SymlinkUsrShareCups:1 monitor=60000 on orestes-corosync.nevis.columbia.edu
+ * Resource action: SymlinkCupsdConf:1 monitor=60000 on orestes-corosync.nevis.columbia.edu
+ * Resource action: Cups:1 monitor=30000 on orestes-corosync.nevis.columbia.edu
+ * Pseudo action: CupsClone_running_0
+ * Pseudo action: IPGroup:0_running_0
+ * Resource action: ClusterIP:0 monitor=30000 on hypatia-corosync.nevis.columbia.edu
+ * Resource action: ClusterIPLocal:0 monitor=31000 on hypatia-corosync.nevis.columbia.edu
+ * Resource action: ClusterIPSandbox:0 monitor=32000 on hypatia-corosync.nevis.columbia.edu
+ * Pseudo action: IPGroup:1_running_0
+ * Resource action: ClusterIP:1 monitor=30000 on orestes-corosync.nevis.columbia.edu
+ * Resource action: ClusterIPLocal:1 monitor=31000 on orestes-corosync.nevis.columbia.edu
+ * Resource action: ClusterIPSandbox:1 monitor=32000 on orestes-corosync.nevis.columbia.edu
+ * Pseudo action: IPClone_running_0
+ * Pseudo action: LibvirtdGroup:0_running_0
+ * Resource action: SymlinkEtcLibvirt:0 monitor=60000 on hypatia-corosync.nevis.columbia.edu
+ * Resource action: Libvirtd:0 monitor=30000 on hypatia-corosync.nevis.columbia.edu
+ * Pseudo action: LibvirtdGroup:1_running_0
+ * Resource action: SymlinkEtcLibvirt:1 monitor=60000 on orestes-corosync.nevis.columbia.edu
+ * Resource action: Libvirtd:1 monitor=30000 on orestes-corosync.nevis.columbia.edu
+ * Pseudo action: LibvirtdClone_running_0
+ * Pseudo action: TftpGroup:0_running_0
+ * Resource action: SymlinkTftp:0 monitor=60000 on hypatia-corosync.nevis.columbia.edu
+ * Pseudo action: TftpGroup:1_running_0
+ * Resource action: SymlinkTftp:1 monitor=60000 on orestes-corosync.nevis.columbia.edu
+ * Pseudo action: TftpClone_running_0
+ * Pseudo action: ExportsGroup:0_running_0
+ * Pseudo action: ExportsGroup:1_running_0
+ * Pseudo action: ExportsClone_running_0
+ * Resource action: KVM-guest start on hypatia-corosync.nevis.columbia.edu
+ * Resource action: Proxy start on orestes-corosync.nevis.columbia.edu
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ]
+
+ * Full List of Resources:
+ * Clone Set: AdminClone [AdminDrbd] (promotable):
+ * Promoted: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ]
+ * CronAmbientTemperature (ocf:heartbeat:symlink): Started hypatia-corosync.nevis.columbia.edu
+ * StonithHypatia (stonith:fence_nut): Started orestes-corosync.nevis.columbia.edu
+ * StonithOrestes (stonith:fence_nut): Started hypatia-corosync.nevis.columbia.edu
+ * Resource Group: DhcpGroup:
+ * SymlinkDhcpdConf (ocf:heartbeat:symlink): Started orestes-corosync.nevis.columbia.edu
+ * SymlinkSysconfigDhcpd (ocf:heartbeat:symlink): Started orestes-corosync.nevis.columbia.edu
+ * SymlinkDhcpdLeases (ocf:heartbeat:symlink): Started orestes-corosync.nevis.columbia.edu
+ * Dhcpd (lsb:dhcpd): Stopped (disabled)
+ * DhcpIP (ocf:heartbeat:IPaddr2): Stopped
+ * Clone Set: CupsClone [CupsGroup]:
+ * Started: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ]
+ * Clone Set: IPClone [IPGroup] (unique):
+ * Resource Group: IPGroup:0:
+ * ClusterIP:0 (ocf:heartbeat:IPaddr2): Started hypatia-corosync.nevis.columbia.edu
+ * ClusterIPLocal:0 (ocf:heartbeat:IPaddr2): Started hypatia-corosync.nevis.columbia.edu
+ * ClusterIPSandbox:0 (ocf:heartbeat:IPaddr2): Started hypatia-corosync.nevis.columbia.edu
+ * Resource Group: IPGroup:1:
+ * ClusterIP:1 (ocf:heartbeat:IPaddr2): Started orestes-corosync.nevis.columbia.edu
+ * ClusterIPLocal:1 (ocf:heartbeat:IPaddr2): Started orestes-corosync.nevis.columbia.edu
+ * ClusterIPSandbox:1 (ocf:heartbeat:IPaddr2): Started orestes-corosync.nevis.columbia.edu
+ * Clone Set: LibvirtdClone [LibvirtdGroup]:
+ * Started: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ]
+ * Clone Set: TftpClone [TftpGroup]:
+ * Started: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ]
+ * Clone Set: ExportsClone [ExportsGroup]:
+ * Started: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ]
+ * Clone Set: FilesystemClone [FilesystemGroup]:
+ * Started: [ hypatia-corosync.nevis.columbia.edu orestes-corosync.nevis.columbia.edu ]
+ * KVM-guest (ocf:heartbeat:VirtualDomain): Started hypatia-corosync.nevis.columbia.edu
+ * Proxy (ocf:heartbeat:VirtualDomain): Started orestes-corosync.nevis.columbia.edu
diff --git a/cts/scheduler/summary/promoted-promotion-constraint.summary b/cts/scheduler/summary/promoted-promotion-constraint.summary
new file mode 100644
index 0000000..22bc250
--- /dev/null
+++ b/cts/scheduler/summary/promoted-promotion-constraint.summary
@@ -0,0 +1,36 @@
+2 of 5 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * fencing-sbd (stonith:external/sbd): Started hex-13
+ * Resource Group: g0 (disabled):
+ * d0 (ocf:pacemaker:Dummy): Stopped (disabled)
+ * d1 (ocf:pacemaker:Dummy): Stopped (disabled)
+ * Clone Set: ms0 [s0] (promotable):
+ * Promoted: [ hex-14 ]
+ * Unpromoted: [ hex-13 ]
+
+Transition Summary:
+ * Demote s0:0 ( Promoted -> Unpromoted hex-14 )
+
+Executing Cluster Transition:
+ * Resource action: s0:1 cancel=20000 on hex-14
+ * Pseudo action: ms0_demote_0
+ * Resource action: s0:1 demote on hex-14
+ * Pseudo action: ms0_demoted_0
+ * Resource action: s0:1 monitor=21000 on hex-14
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * fencing-sbd (stonith:external/sbd): Started hex-13
+ * Resource Group: g0 (disabled):
+ * d0 (ocf:pacemaker:Dummy): Stopped (disabled)
+ * d1 (ocf:pacemaker:Dummy): Stopped (disabled)
+ * Clone Set: ms0 [s0] (promotable):
+ * Unpromoted: [ hex-13 hex-14 ]
diff --git a/cts/scheduler/summary/promoted-pseudo.summary b/cts/scheduler/summary/promoted-pseudo.summary
new file mode 100644
index 0000000..92302e7
--- /dev/null
+++ b/cts/scheduler/summary/promoted-pseudo.summary
@@ -0,0 +1,60 @@
+Current cluster status:
+ * Node List:
+ * Node raki.linbit: standby
+ * Online: [ sambuca.linbit ]
+
+ * Full List of Resources:
+ * ip_float_right (ocf:heartbeat:IPaddr2): Stopped
+ * Clone Set: ms_drbd_float [drbd_float] (promotable):
+ * Unpromoted: [ sambuca.linbit ]
+ * Resource Group: nfsexport:
+ * ip_nfs (ocf:heartbeat:IPaddr2): Stopped
+ * fs_float (ocf:heartbeat:Filesystem): Stopped
+
+Transition Summary:
+ * Start ip_float_right ( sambuca.linbit )
+ * Restart drbd_float:0 ( Unpromoted -> Promoted sambuca.linbit ) due to required ip_float_right start
+ * Start ip_nfs ( sambuca.linbit )
+
+Executing Cluster Transition:
+ * Resource action: ip_float_right start on sambuca.linbit
+ * Pseudo action: ms_drbd_float_pre_notify_stop_0
+ * Resource action: drbd_float:0 notify on sambuca.linbit
+ * Pseudo action: ms_drbd_float_confirmed-pre_notify_stop_0
+ * Pseudo action: ms_drbd_float_stop_0
+ * Resource action: drbd_float:0 stop on sambuca.linbit
+ * Pseudo action: ms_drbd_float_stopped_0
+ * Pseudo action: ms_drbd_float_post_notify_stopped_0
+ * Pseudo action: ms_drbd_float_confirmed-post_notify_stopped_0
+ * Pseudo action: ms_drbd_float_pre_notify_start_0
+ * Pseudo action: ms_drbd_float_confirmed-pre_notify_start_0
+ * Pseudo action: ms_drbd_float_start_0
+ * Resource action: drbd_float:0 start on sambuca.linbit
+ * Pseudo action: ms_drbd_float_running_0
+ * Pseudo action: ms_drbd_float_post_notify_running_0
+ * Resource action: drbd_float:0 notify on sambuca.linbit
+ * Pseudo action: ms_drbd_float_confirmed-post_notify_running_0
+ * Pseudo action: ms_drbd_float_pre_notify_promote_0
+ * Resource action: drbd_float:0 notify on sambuca.linbit
+ * Pseudo action: ms_drbd_float_confirmed-pre_notify_promote_0
+ * Pseudo action: ms_drbd_float_promote_0
+ * Resource action: drbd_float:0 promote on sambuca.linbit
+ * Pseudo action: ms_drbd_float_promoted_0
+ * Pseudo action: ms_drbd_float_post_notify_promoted_0
+ * Resource action: drbd_float:0 notify on sambuca.linbit
+ * Pseudo action: ms_drbd_float_confirmed-post_notify_promoted_0
+ * Pseudo action: nfsexport_start_0
+ * Resource action: ip_nfs start on sambuca.linbit
+
+Revised Cluster Status:
+ * Node List:
+ * Node raki.linbit: standby
+ * Online: [ sambuca.linbit ]
+
+ * Full List of Resources:
+ * ip_float_right (ocf:heartbeat:IPaddr2): Started sambuca.linbit
+ * Clone Set: ms_drbd_float [drbd_float] (promotable):
+ * Promoted: [ sambuca.linbit ]
+ * Resource Group: nfsexport:
+ * ip_nfs (ocf:heartbeat:IPaddr2): Started sambuca.linbit
+ * fs_float (ocf:heartbeat:Filesystem): Stopped
diff --git a/cts/scheduler/summary/promoted-reattach.summary b/cts/scheduler/summary/promoted-reattach.summary
new file mode 100644
index 0000000..8f07251
--- /dev/null
+++ b/cts/scheduler/summary/promoted-reattach.summary
@@ -0,0 +1,34 @@
+Current cluster status:
+ * Node List:
+ * Online: [ dktest1 dktest2 ]
+
+ * Full List of Resources:
+ * Clone Set: ms-drbd1 [drbd1] (promotable, unmanaged):
+ * drbd1 (ocf:heartbeat:drbd): Promoted dktest1 (unmanaged)
+ * drbd1 (ocf:heartbeat:drbd): Unpromoted dktest2 (unmanaged)
+ * Resource Group: apache (unmanaged):
+ * apache-vip (ocf:heartbeat:IPaddr2): Started dktest1 (unmanaged)
+ * mount (ocf:heartbeat:Filesystem): Started dktest1 (unmanaged)
+ * webserver (ocf:heartbeat:apache): Started dktest1 (unmanaged)
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: drbd1:0 monitor=10000 on dktest1
+ * Resource action: drbd1:0 monitor=11000 on dktest2
+ * Resource action: apache-vip monitor=60000 on dktest1
+ * Resource action: mount monitor=10000 on dktest1
+ * Resource action: webserver monitor=30000 on dktest1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ dktest1 dktest2 ]
+
+ * Full List of Resources:
+ * Clone Set: ms-drbd1 [drbd1] (promotable, unmanaged):
+ * drbd1 (ocf:heartbeat:drbd): Promoted dktest1 (unmanaged)
+ * drbd1 (ocf:heartbeat:drbd): Unpromoted dktest2 (unmanaged)
+ * Resource Group: apache (unmanaged):
+ * apache-vip (ocf:heartbeat:IPaddr2): Started dktest1 (unmanaged)
+ * mount (ocf:heartbeat:Filesystem): Started dktest1 (unmanaged)
+ * webserver (ocf:heartbeat:apache): Started dktest1 (unmanaged)
diff --git a/cts/scheduler/summary/promoted-role.summary b/cts/scheduler/summary/promoted-role.summary
new file mode 100644
index 0000000..588f523
--- /dev/null
+++ b/cts/scheduler/summary/promoted-role.summary
@@ -0,0 +1,24 @@
+Current cluster status:
+ * Node List:
+ * Online: [ sles11-a sles11-b ]
+
+ * Full List of Resources:
+ * Clone Set: ms_res_Stateful_1 [res_Stateful_1] (promotable):
+ * Promoted: [ sles11-a sles11-b ]
+
+Transition Summary:
+ * Demote res_Stateful_1:1 ( Promoted -> Unpromoted sles11-a )
+
+Executing Cluster Transition:
+ * Pseudo action: ms_res_Stateful_1_demote_0
+ * Resource action: res_Stateful_1:0 demote on sles11-a
+ * Pseudo action: ms_res_Stateful_1_demoted_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ sles11-a sles11-b ]
+
+ * Full List of Resources:
+ * Clone Set: ms_res_Stateful_1 [res_Stateful_1] (promotable):
+ * Promoted: [ sles11-b ]
+ * Unpromoted: [ sles11-a ]
diff --git a/cts/scheduler/summary/promoted-score-startup.summary b/cts/scheduler/summary/promoted-score-startup.summary
new file mode 100644
index 0000000..f9d3640
--- /dev/null
+++ b/cts/scheduler/summary/promoted-score-startup.summary
@@ -0,0 +1,54 @@
+Current cluster status:
+ * Node List:
+ * Online: [ srv1 srv2 ]
+
+ * Full List of Resources:
+ * Clone Set: pgsql-ha [pgsqld] (promotable):
+ * Stopped: [ srv1 srv2 ]
+ * pgsql-master-ip (ocf:heartbeat:IPaddr2): Stopped
+
+Transition Summary:
+ * Promote pgsqld:0 ( Stopped -> Promoted srv1 )
+ * Start pgsqld:1 ( srv2 )
+ * Start pgsql-master-ip ( srv1 )
+
+Executing Cluster Transition:
+ * Resource action: pgsqld:0 monitor on srv1
+ * Resource action: pgsqld:1 monitor on srv2
+ * Pseudo action: pgsql-ha_pre_notify_start_0
+ * Resource action: pgsql-master-ip monitor on srv2
+ * Resource action: pgsql-master-ip monitor on srv1
+ * Pseudo action: pgsql-ha_confirmed-pre_notify_start_0
+ * Pseudo action: pgsql-ha_start_0
+ * Resource action: pgsqld:0 start on srv1
+ * Resource action: pgsqld:1 start on srv2
+ * Pseudo action: pgsql-ha_running_0
+ * Pseudo action: pgsql-ha_post_notify_running_0
+ * Resource action: pgsqld:0 notify on srv1
+ * Resource action: pgsqld:1 notify on srv2
+ * Pseudo action: pgsql-ha_confirmed-post_notify_running_0
+ * Pseudo action: pgsql-ha_pre_notify_promote_0
+ * Resource action: pgsqld:0 notify on srv1
+ * Resource action: pgsqld:1 notify on srv2
+ * Pseudo action: pgsql-ha_confirmed-pre_notify_promote_0
+ * Pseudo action: pgsql-ha_promote_0
+ * Resource action: pgsqld:0 promote on srv1
+ * Pseudo action: pgsql-ha_promoted_0
+ * Pseudo action: pgsql-ha_post_notify_promoted_0
+ * Resource action: pgsqld:0 notify on srv1
+ * Resource action: pgsqld:1 notify on srv2
+ * Pseudo action: pgsql-ha_confirmed-post_notify_promoted_0
+ * Resource action: pgsql-master-ip start on srv1
+ * Resource action: pgsqld:0 monitor=15000 on srv1
+ * Resource action: pgsqld:1 monitor=16000 on srv2
+ * Resource action: pgsql-master-ip monitor=10000 on srv1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ srv1 srv2 ]
+
+ * Full List of Resources:
+ * Clone Set: pgsql-ha [pgsqld] (promotable):
+ * Promoted: [ srv1 ]
+ * Unpromoted: [ srv2 ]
+ * pgsql-master-ip (ocf:heartbeat:IPaddr2): Started srv1
diff --git a/cts/scheduler/summary/promoted-stop.summary b/cts/scheduler/summary/promoted-stop.summary
new file mode 100644
index 0000000..efc7492
--- /dev/null
+++ b/cts/scheduler/summary/promoted-stop.summary
@@ -0,0 +1,24 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Clone Set: m [dummy] (promotable):
+ * Unpromoted: [ node1 node2 node3 ]
+
+Transition Summary:
+ * Stop dummy:2 ( Unpromoted node3 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: m_stop_0
+ * Resource action: dummy:2 stop on node3
+ * Pseudo action: m_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Clone Set: m [dummy] (promotable):
+ * Unpromoted: [ node1 node2 ]
+ * Stopped: [ node3 ]
diff --git a/cts/scheduler/summary/promoted-unmanaged-monitor.summary b/cts/scheduler/summary/promoted-unmanaged-monitor.summary
new file mode 100644
index 0000000..3c5b39a
--- /dev/null
+++ b/cts/scheduler/summary/promoted-unmanaged-monitor.summary
@@ -0,0 +1,69 @@
+Current cluster status:
+ * Node List:
+ * Online: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
+
+ * Full List of Resources:
+ * Clone Set: Fencing [FencingChild] (unmanaged):
+ * Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
+ * Resource Group: group-1 (unmanaged):
+ * r192.168.122.112 (ocf:heartbeat:IPaddr): Started pcmk-3 (unmanaged)
+ * r192.168.122.113 (ocf:heartbeat:IPaddr): Started pcmk-3 (unmanaged)
+ * r192.168.122.114 (ocf:heartbeat:IPaddr): Started pcmk-3 (unmanaged)
+ * rsc_pcmk-1 (ocf:heartbeat:IPaddr): Started pcmk-1 (unmanaged)
+ * rsc_pcmk-2 (ocf:heartbeat:IPaddr): Started pcmk-2 (unmanaged)
+ * rsc_pcmk-3 (ocf:heartbeat:IPaddr): Started pcmk-3 (unmanaged)
+ * rsc_pcmk-4 (ocf:heartbeat:IPaddr): Started pcmk-4 (unmanaged)
+ * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started pcmk-3 (unmanaged)
+ * migrator (ocf:pacemaker:Dummy): Started pcmk-4 (unmanaged)
+ * Clone Set: Connectivity [ping-1] (unmanaged):
+ * ping-1 (ocf:pacemaker:ping): Started pcmk-2 (unmanaged)
+ * ping-1 (ocf:pacemaker:ping): Started pcmk-3 (unmanaged)
+ * ping-1 (ocf:pacemaker:ping): Started pcmk-4 (unmanaged)
+ * ping-1 (ocf:pacemaker:ping): Started pcmk-1 (unmanaged)
+ * Clone Set: master-1 [stateful-1] (promotable, unmanaged):
+ * stateful-1 (ocf:pacemaker:Stateful): Unpromoted pcmk-2 (unmanaged)
+ * stateful-1 (ocf:pacemaker:Stateful): Promoted pcmk-3 (unmanaged)
+ * stateful-1 (ocf:pacemaker:Stateful): Unpromoted pcmk-4 (unmanaged)
+ * Stopped: [ pcmk-1 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: lsb-dummy monitor=5000 on pcmk-3
+ * Resource action: migrator monitor=10000 on pcmk-4
+ * Resource action: ping-1:0 monitor=60000 on pcmk-2
+ * Resource action: ping-1:0 monitor=60000 on pcmk-3
+ * Resource action: ping-1:0 monitor=60000 on pcmk-4
+ * Resource action: ping-1:0 monitor=60000 on pcmk-1
+ * Resource action: stateful-1:0 monitor=15000 on pcmk-2
+ * Resource action: stateful-1:0 monitor on pcmk-1
+ * Resource action: stateful-1:0 monitor=16000 on pcmk-3
+ * Resource action: stateful-1:0 monitor=15000 on pcmk-4
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
+
+ * Full List of Resources:
+ * Clone Set: Fencing [FencingChild] (unmanaged):
+ * Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
+ * Resource Group: group-1 (unmanaged):
+ * r192.168.122.112 (ocf:heartbeat:IPaddr): Started pcmk-3 (unmanaged)
+ * r192.168.122.113 (ocf:heartbeat:IPaddr): Started pcmk-3 (unmanaged)
+ * r192.168.122.114 (ocf:heartbeat:IPaddr): Started pcmk-3 (unmanaged)
+ * rsc_pcmk-1 (ocf:heartbeat:IPaddr): Started pcmk-1 (unmanaged)
+ * rsc_pcmk-2 (ocf:heartbeat:IPaddr): Started pcmk-2 (unmanaged)
+ * rsc_pcmk-3 (ocf:heartbeat:IPaddr): Started pcmk-3 (unmanaged)
+ * rsc_pcmk-4 (ocf:heartbeat:IPaddr): Started pcmk-4 (unmanaged)
+ * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started pcmk-3 (unmanaged)
+ * migrator (ocf:pacemaker:Dummy): Started pcmk-4 (unmanaged)
+ * Clone Set: Connectivity [ping-1] (unmanaged):
+ * ping-1 (ocf:pacemaker:ping): Started pcmk-2 (unmanaged)
+ * ping-1 (ocf:pacemaker:ping): Started pcmk-3 (unmanaged)
+ * ping-1 (ocf:pacemaker:ping): Started pcmk-4 (unmanaged)
+ * ping-1 (ocf:pacemaker:ping): Started pcmk-1 (unmanaged)
+ * Clone Set: master-1 [stateful-1] (promotable, unmanaged):
+ * stateful-1 (ocf:pacemaker:Stateful): Unpromoted pcmk-2 (unmanaged)
+ * stateful-1 (ocf:pacemaker:Stateful): Promoted pcmk-3 (unmanaged)
+ * stateful-1 (ocf:pacemaker:Stateful): Unpromoted pcmk-4 (unmanaged)
+ * Stopped: [ pcmk-1 ]
diff --git a/cts/scheduler/summary/promoted-with-blocked.summary b/cts/scheduler/summary/promoted-with-blocked.summary
new file mode 100644
index 0000000..82177a9
--- /dev/null
+++ b/cts/scheduler/summary/promoted-with-blocked.summary
@@ -0,0 +1,59 @@
+1 of 8 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 node4 node5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: rsc2-clone [rsc2] (promotable):
+ * Stopped: [ node1 node2 node3 node4 node5 ]
+ * rsc3 (ocf:pacemaker:Dummy): Stopped (disabled)
+
+Transition Summary:
+ * Start rsc1 ( node2 ) due to unrunnable rsc3 start (blocked)
+ * Start rsc2:0 ( node3 )
+ * Start rsc2:1 ( node4 )
+ * Start rsc2:2 ( node5 )
+ * Start rsc2:3 ( node1 )
+ * Promote rsc2:4 ( Stopped -> Promoted node2 ) due to colocation with rsc1 (blocked)
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node5
+ * Resource action: rsc1 monitor on node4
+ * Resource action: rsc1 monitor on node3
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2:0 monitor on node3
+ * Resource action: rsc2:1 monitor on node4
+ * Resource action: rsc2:2 monitor on node5
+ * Resource action: rsc2:3 monitor on node1
+ * Resource action: rsc2:4 monitor on node2
+ * Pseudo action: rsc2-clone_start_0
+ * Resource action: rsc3 monitor on node5
+ * Resource action: rsc3 monitor on node4
+ * Resource action: rsc3 monitor on node3
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc2:0 start on node3
+ * Resource action: rsc2:1 start on node4
+ * Resource action: rsc2:2 start on node5
+ * Resource action: rsc2:3 start on node1
+ * Resource action: rsc2:4 start on node2
+ * Pseudo action: rsc2-clone_running_0
+ * Resource action: rsc2:0 monitor=10000 on node3
+ * Resource action: rsc2:1 monitor=10000 on node4
+ * Resource action: rsc2:2 monitor=10000 on node5
+ * Resource action: rsc2:3 monitor=10000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 node4 node5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: rsc2-clone [rsc2] (promotable):
+ * Unpromoted: [ node1 node2 node3 node4 node5 ]
+ * rsc3 (ocf:pacemaker:Dummy): Stopped (disabled)
diff --git a/cts/scheduler/summary/promoted_monitor_restart.summary b/cts/scheduler/summary/promoted_monitor_restart.summary
new file mode 100644
index 0000000..be181bd
--- /dev/null
+++ b/cts/scheduler/summary/promoted_monitor_restart.summary
@@ -0,0 +1,24 @@
+Current cluster status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * Clone Set: MS_RSC [MS_RSC_NATIVE] (promotable):
+ * Promoted: [ node1 ]
+ * Stopped: [ node2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: MS_RSC_NATIVE:0 monitor=5000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * Clone Set: MS_RSC [MS_RSC_NATIVE] (promotable):
+ * Promoted: [ node1 ]
+ * Stopped: [ node2 ]
diff --git a/cts/scheduler/summary/quorum-1.summary b/cts/scheduler/summary/quorum-1.summary
new file mode 100644
index 0000000..d0a05bd
--- /dev/null
+++ b/cts/scheduler/summary/quorum-1.summary
@@ -0,0 +1,30 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * rsc3 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Move rsc2 ( node1 -> node2 )
+ * Start rsc3 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc2 stop on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc2 start on node2
+ * Resource action: rsc3 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node2
+ * rsc3 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/quorum-2.summary b/cts/scheduler/summary/quorum-2.summary
new file mode 100644
index 0000000..136a84e
--- /dev/null
+++ b/cts/scheduler/summary/quorum-2.summary
@@ -0,0 +1,29 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * rsc3 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Move rsc2 ( node1 -> node2 )
+ * Start rsc3 ( node1 ) due to quorum freeze (blocked)
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc2 stop on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc2 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node2
+ * rsc3 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/quorum-3.summary b/cts/scheduler/summary/quorum-3.summary
new file mode 100644
index 0000000..e51f9c4
--- /dev/null
+++ b/cts/scheduler/summary/quorum-3.summary
@@ -0,0 +1,30 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * rsc3 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Stop rsc1 ( node1 ) due to no quorum
+ * Stop rsc2 ( node1 ) due to no quorum
+ * Start rsc3 ( node1 ) due to no quorum (blocked)
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node1
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc2 stop on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * rsc2 (ocf:heartbeat:apache): Stopped
+ * rsc3 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/quorum-4.summary b/cts/scheduler/summary/quorum-4.summary
new file mode 100644
index 0000000..3d0c88e
--- /dev/null
+++ b/cts/scheduler/summary/quorum-4.summary
@@ -0,0 +1,25 @@
+Current cluster status:
+ * Node List:
+ * Node hadev1: UNCLEAN (offline)
+ * Node hadev3: UNCLEAN (offline)
+ * Online: [ hadev2 ]
+
+ * Full List of Resources:
+ * child_DoFencing (stonith:ssh): Stopped
+
+Transition Summary:
+ * Start child_DoFencing ( hadev2 )
+
+Executing Cluster Transition:
+ * Resource action: child_DoFencing monitor on hadev2
+ * Resource action: child_DoFencing start on hadev2
+ * Resource action: child_DoFencing monitor=5000 on hadev2
+
+Revised Cluster Status:
+ * Node List:
+ * Node hadev1: UNCLEAN (offline)
+ * Node hadev3: UNCLEAN (offline)
+ * Online: [ hadev2 ]
+
+ * Full List of Resources:
+ * child_DoFencing (stonith:ssh): Started hadev2
diff --git a/cts/scheduler/summary/quorum-5.summary b/cts/scheduler/summary/quorum-5.summary
new file mode 100644
index 0000000..1e7abf3
--- /dev/null
+++ b/cts/scheduler/summary/quorum-5.summary
@@ -0,0 +1,35 @@
+Current cluster status:
+ * Node List:
+ * Node hadev1: UNCLEAN (offline)
+ * Node hadev3: UNCLEAN (offline)
+ * Online: [ hadev2 ]
+
+ * Full List of Resources:
+ * Resource Group: group1:
+ * child_DoFencing_1 (stonith:ssh): Stopped
+ * child_DoFencing_2 (stonith:ssh): Stopped
+
+Transition Summary:
+ * Start child_DoFencing_1 ( hadev2 )
+ * Start child_DoFencing_2 ( hadev2 )
+
+Executing Cluster Transition:
+ * Pseudo action: group1_start_0
+ * Resource action: child_DoFencing_1 monitor on hadev2
+ * Resource action: child_DoFencing_2 monitor on hadev2
+ * Resource action: child_DoFencing_1 start on hadev2
+ * Resource action: child_DoFencing_2 start on hadev2
+ * Pseudo action: group1_running_0
+ * Resource action: child_DoFencing_1 monitor=5000 on hadev2
+ * Resource action: child_DoFencing_2 monitor=5000 on hadev2
+
+Revised Cluster Status:
+ * Node List:
+ * Node hadev1: UNCLEAN (offline)
+ * Node hadev3: UNCLEAN (offline)
+ * Online: [ hadev2 ]
+
+ * Full List of Resources:
+ * Resource Group: group1:
+ * child_DoFencing_1 (stonith:ssh): Started hadev2
+ * child_DoFencing_2 (stonith:ssh): Started hadev2
diff --git a/cts/scheduler/summary/quorum-6.summary b/cts/scheduler/summary/quorum-6.summary
new file mode 100644
index 0000000..321410d
--- /dev/null
+++ b/cts/scheduler/summary/quorum-6.summary
@@ -0,0 +1,50 @@
+Current cluster status:
+ * Node List:
+ * Node hadev1: UNCLEAN (offline)
+ * Node hadev3: UNCLEAN (offline)
+ * Online: [ hadev2 ]
+
+ * Full List of Resources:
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Stopped
+ * child_DoFencing:1 (stonith:ssh): Stopped
+ * child_DoFencing:2 (stonith:ssh): Stopped
+ * child_DoFencing:3 (stonith:ssh): Stopped
+ * child_DoFencing:4 (stonith:ssh): Stopped
+ * child_DoFencing:5 (stonith:ssh): Stopped
+ * child_DoFencing:6 (stonith:ssh): Stopped
+ * child_DoFencing:7 (stonith:ssh): Stopped
+
+Transition Summary:
+ * Start child_DoFencing:0 ( hadev2 )
+
+Executing Cluster Transition:
+ * Resource action: child_DoFencing:0 monitor on hadev2
+ * Resource action: child_DoFencing:1 monitor on hadev2
+ * Resource action: child_DoFencing:2 monitor on hadev2
+ * Resource action: child_DoFencing:3 monitor on hadev2
+ * Resource action: child_DoFencing:4 monitor on hadev2
+ * Resource action: child_DoFencing:5 monitor on hadev2
+ * Resource action: child_DoFencing:6 monitor on hadev2
+ * Resource action: child_DoFencing:7 monitor on hadev2
+ * Pseudo action: DoFencing_start_0
+ * Resource action: child_DoFencing:0 start on hadev2
+ * Pseudo action: DoFencing_running_0
+ * Resource action: child_DoFencing:0 monitor=5000 on hadev2
+
+Revised Cluster Status:
+ * Node List:
+ * Node hadev1: UNCLEAN (offline)
+ * Node hadev3: UNCLEAN (offline)
+ * Online: [ hadev2 ]
+
+ * Full List of Resources:
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started hadev2
+ * child_DoFencing:1 (stonith:ssh): Stopped
+ * child_DoFencing:2 (stonith:ssh): Stopped
+ * child_DoFencing:3 (stonith:ssh): Stopped
+ * child_DoFencing:4 (stonith:ssh): Stopped
+ * child_DoFencing:5 (stonith:ssh): Stopped
+ * child_DoFencing:6 (stonith:ssh): Stopped
+ * child_DoFencing:7 (stonith:ssh): Stopped
diff --git a/cts/scheduler/summary/rebalance-unique-clones.summary b/cts/scheduler/summary/rebalance-unique-clones.summary
new file mode 100644
index 0000000..2dea83b
--- /dev/null
+++ b/cts/scheduler/summary/rebalance-unique-clones.summary
@@ -0,0 +1,29 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: C [P] (unique):
+ * P:0 (ocf:heartbeat:IPaddr2): Started node1
+ * P:1 (ocf:heartbeat:IPaddr2): Started node1
+
+Transition Summary:
+ * Move P:1 ( node1 -> node2 )
+
+Executing Cluster Transition:
+ * Pseudo action: C_stop_0
+ * Resource action: P:1 stop on node1
+ * Pseudo action: C_stopped_0
+ * Pseudo action: C_start_0
+ * Resource action: P:1 start on node2
+ * Pseudo action: C_running_0
+ * Resource action: P:1 monitor=10000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: C [P] (unique):
+ * P:0 (ocf:heartbeat:IPaddr2): Started node1
+ * P:1 (ocf:heartbeat:IPaddr2): Started node2
diff --git a/cts/scheduler/summary/rec-node-1.summary b/cts/scheduler/summary/rec-node-1.summary
new file mode 100644
index 0000000..35d9dd3
--- /dev/null
+++ b/cts/scheduler/summary/rec-node-1.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node2 ]
+ * OFFLINE: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * rsc2 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+ * Start rsc2 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node2 ]
+ * OFFLINE: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node2
+ * rsc2 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/rec-node-10.summary b/cts/scheduler/summary/rec-node-10.summary
new file mode 100644
index 0000000..a77b2a1
--- /dev/null
+++ b/cts/scheduler/summary/rec-node-10.summary
@@ -0,0 +1,29 @@
+Current cluster status:
+ * Node List:
+ * Node node1: UNCLEAN (offline)
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * rsc1 (ocf:heartbeat:apache): Started node1 (UNCLEAN)
+ * rsc2 (ocf:heartbeat:apache): Started node1 (UNCLEAN)
+
+Transition Summary:
+ * Start stonith-1 ( node2 ) due to no quorum (blocked)
+ * Stop rsc1 ( node1 ) due to no quorum (blocked)
+ * Stop rsc2 ( node1 ) due to no quorum (blocked)
+
+Executing Cluster Transition:
+ * Resource action: stonith-1 monitor on node2
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc2 monitor on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Node node1: UNCLEAN (offline)
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * rsc1 (ocf:heartbeat:apache): Started node1 (UNCLEAN)
+ * rsc2 (ocf:heartbeat:apache): Started node1 (UNCLEAN)
diff --git a/cts/scheduler/summary/rec-node-11.summary b/cts/scheduler/summary/rec-node-11.summary
new file mode 100644
index 0000000..453dc00
--- /dev/null
+++ b/cts/scheduler/summary/rec-node-11.summary
@@ -0,0 +1,47 @@
+Current cluster status:
+ * Node List:
+ * Node node1: UNCLEAN (online)
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * Resource Group: group1:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * rsc3 (ocf:heartbeat:apache): Started node2
+
+Transition Summary:
+ * Fence (reboot) node1 'peer process is no longer available'
+ * Start stonith-1 ( node2 )
+ * Move rsc1 ( node1 -> node2 )
+ * Move rsc2 ( node1 -> node2 )
+ * Restart rsc3 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: stonith-1 monitor on node2
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc2 monitor on node2
+ * Fencing node1 (reboot)
+ * Resource action: stonith-1 start on node2
+ * Pseudo action: group1_stop_0
+ * Pseudo action: rsc2_stop_0
+ * Pseudo action: rsc1_stop_0
+ * Pseudo action: group1_stopped_0
+ * Resource action: rsc3 stop on node2
+ * Resource action: rsc3 start on node2
+ * Pseudo action: group1_start_0
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 start on node2
+ * Pseudo action: group1_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node2 ]
+ * OFFLINE: [ node1 ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Started node2
+ * Resource Group: group1:
+ * rsc1 (ocf:heartbeat:apache): Started node2
+ * rsc2 (ocf:heartbeat:apache): Started node2
+ * rsc3 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/rec-node-12.summary b/cts/scheduler/summary/rec-node-12.summary
new file mode 100644
index 0000000..8edeec2
--- /dev/null
+++ b/cts/scheduler/summary/rec-node-12.summary
@@ -0,0 +1,92 @@
+Current cluster status:
+ * Node List:
+ * Node c001n02: UNCLEAN (offline)
+ * Online: [ c001n01 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Stopped
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Stopped
+ * child_DoFencing:1 (stonith:ssh): Stopped
+ * child_DoFencing:2 (stonith:ssh): Stopped
+ * child_DoFencing:3 (stonith:ssh): Stopped
+
+Transition Summary:
+ * Fence (reboot) c001n02 'node is unclean'
+ * Start DcIPaddr ( c001n08 )
+ * Start rsc_c001n08 ( c001n08 )
+ * Start rsc_c001n02 ( c001n01 )
+ * Start rsc_c001n03 ( c001n03 )
+ * Start rsc_c001n01 ( c001n01 )
+ * Start child_DoFencing:0 ( c001n03 )
+ * Start child_DoFencing:1 ( c001n01 )
+ * Start child_DoFencing:2 ( c001n08 )
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr monitor on c001n08
+ * Resource action: DcIPaddr monitor on c001n03
+ * Resource action: DcIPaddr monitor on c001n01
+ * Resource action: rsc_c001n08 monitor on c001n08
+ * Resource action: rsc_c001n08 monitor on c001n03
+ * Resource action: rsc_c001n08 monitor on c001n01
+ * Resource action: rsc_c001n02 monitor on c001n08
+ * Resource action: rsc_c001n02 monitor on c001n03
+ * Resource action: rsc_c001n02 monitor on c001n01
+ * Resource action: rsc_c001n03 monitor on c001n08
+ * Resource action: rsc_c001n03 monitor on c001n03
+ * Resource action: rsc_c001n03 monitor on c001n01
+ * Resource action: rsc_c001n01 monitor on c001n08
+ * Resource action: rsc_c001n01 monitor on c001n03
+ * Resource action: rsc_c001n01 monitor on c001n01
+ * Resource action: child_DoFencing:0 monitor on c001n08
+ * Resource action: child_DoFencing:0 monitor on c001n03
+ * Resource action: child_DoFencing:0 monitor on c001n01
+ * Resource action: child_DoFencing:1 monitor on c001n08
+ * Resource action: child_DoFencing:1 monitor on c001n03
+ * Resource action: child_DoFencing:1 monitor on c001n01
+ * Resource action: child_DoFencing:2 monitor on c001n08
+ * Resource action: child_DoFencing:2 monitor on c001n03
+ * Resource action: child_DoFencing:2 monitor on c001n01
+ * Resource action: child_DoFencing:3 monitor on c001n08
+ * Resource action: child_DoFencing:3 monitor on c001n03
+ * Resource action: child_DoFencing:3 monitor on c001n01
+ * Pseudo action: DoFencing_start_0
+ * Fencing c001n02 (reboot)
+ * Resource action: DcIPaddr start on c001n08
+ * Resource action: rsc_c001n08 start on c001n08
+ * Resource action: rsc_c001n02 start on c001n01
+ * Resource action: rsc_c001n03 start on c001n03
+ * Resource action: rsc_c001n01 start on c001n01
+ * Resource action: child_DoFencing:0 start on c001n03
+ * Resource action: child_DoFencing:1 start on c001n01
+ * Resource action: child_DoFencing:2 start on c001n08
+ * Pseudo action: DoFencing_running_0
+ * Resource action: DcIPaddr monitor=5000 on c001n08
+ * Resource action: rsc_c001n08 monitor=5000 on c001n08
+ * Resource action: rsc_c001n02 monitor=5000 on c001n01
+ * Resource action: rsc_c001n03 monitor=5000 on c001n03
+ * Resource action: rsc_c001n01 monitor=5000 on c001n01
+ * Resource action: child_DoFencing:0 monitor=5000 on c001n03
+ * Resource action: child_DoFencing:1 monitor=5000 on c001n01
+ * Resource action: child_DoFencing:2 monitor=5000 on c001n08
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n01 c001n03 c001n08 ]
+ * OFFLINE: [ c001n02 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n01
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started c001n03
+ * child_DoFencing:1 (stonith:ssh): Started c001n01
+ * child_DoFencing:2 (stonith:ssh): Started c001n08
+ * child_DoFencing:3 (stonith:ssh): Stopped
diff --git a/cts/scheduler/summary/rec-node-13.summary b/cts/scheduler/summary/rec-node-13.summary
new file mode 100644
index 0000000..72c8e42
--- /dev/null
+++ b/cts/scheduler/summary/rec-node-13.summary
@@ -0,0 +1,80 @@
+Current cluster status:
+ * Node List:
+ * Node c001n04: UNCLEAN (online)
+ * Online: [ c001n02 c001n06 c001n07 ]
+ * OFFLINE: [ c001n03 c001n05 ]
+
+ * Full List of Resources:
+ * Clone Set: DoFencing [child_DoFencing]:
+ * Started: [ c001n02 c001n06 c001n07 ]
+ * Stopped: [ c001n03 c001n04 c001n05 ]
+ * DcIPaddr (ocf:heartbeat:IPaddr): Stopped
+ * Resource Group: group-1:
+ * ocf_192.168.100.181 (ocf:heartbeat:IPaddr): Started c001n02
+ * heartbeat_192.168.100.182 (ocf:heartbeat:IPaddr): Started c001n02
+ * ocf_192.168.100.183 (ocf:heartbeat:IPaddr): Started c001n02
+ * lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n06
+ * rsc_c001n05 (ocf:heartbeat:IPaddr): Started c001n07
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n06
+ * rsc_c001n04 (ocf:heartbeat:IPaddr): Started c001n07
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n07 (ocf:heartbeat:IPaddr): Started c001n07
+ * rsc_c001n06 (ocf:heartbeat:IPaddr): Started c001n06
+ * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique):
+ * ocf_msdummy:0 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Promoted c001n02
+ * ocf_msdummy:1 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:2 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:3 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:4 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02
+ * ocf_msdummy:5 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:6 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): FAILED c001n04
+ * ocf_msdummy:7 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:8 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n06
+ * ocf_msdummy:9 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n07
+ * ocf_msdummy:10 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n06
+ * ocf_msdummy:11 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n07
+
+Transition Summary:
+ * Fence (reboot) c001n04 'ocf_msdummy:6 failed there'
+ * Stop ocf_msdummy:6 ( Unpromoted c001n04 ) due to node availability
+
+Executing Cluster Transition:
+ * Fencing c001n04 (reboot)
+ * Pseudo action: master_rsc_1_stop_0
+ * Pseudo action: ocf_msdummy:6_stop_0
+ * Pseudo action: master_rsc_1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n02 c001n06 c001n07 ]
+ * OFFLINE: [ c001n03 c001n04 c001n05 ]
+
+ * Full List of Resources:
+ * Clone Set: DoFencing [child_DoFencing]:
+ * Started: [ c001n02 c001n06 c001n07 ]
+ * Stopped: [ c001n03 c001n04 c001n05 ]
+ * DcIPaddr (ocf:heartbeat:IPaddr): Stopped
+ * Resource Group: group-1:
+ * ocf_192.168.100.181 (ocf:heartbeat:IPaddr): Started c001n02
+ * heartbeat_192.168.100.182 (ocf:heartbeat:IPaddr): Started c001n02
+ * ocf_192.168.100.183 (ocf:heartbeat:IPaddr): Started c001n02
+ * lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n06
+ * rsc_c001n05 (ocf:heartbeat:IPaddr): Started c001n07
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n06
+ * rsc_c001n04 (ocf:heartbeat:IPaddr): Started c001n07
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n07 (ocf:heartbeat:IPaddr): Started c001n07
+ * rsc_c001n06 (ocf:heartbeat:IPaddr): Started c001n06
+ * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique):
+ * ocf_msdummy:0 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Promoted c001n02
+ * ocf_msdummy:1 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:2 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:3 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:4 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02
+ * ocf_msdummy:5 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:6 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:7 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:8 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n06
+ * ocf_msdummy:9 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n07
+ * ocf_msdummy:10 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n06
+ * ocf_msdummy:11 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n07
diff --git a/cts/scheduler/summary/rec-node-14.summary b/cts/scheduler/summary/rec-node-14.summary
new file mode 100644
index 0000000..5c55391
--- /dev/null
+++ b/cts/scheduler/summary/rec-node-14.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Node node1: UNCLEAN (offline)
+ * Node node2: UNCLEAN (offline)
+ * Node node3: UNCLEAN (offline)
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Stopped
+
+Transition Summary:
+ * Fence (reboot) node3 'peer is no longer part of the cluster'
+ * Fence (reboot) node2 'peer is no longer part of the cluster'
+ * Fence (reboot) node1 'peer is no longer part of the cluster'
+
+Executing Cluster Transition:
+ * Fencing node1 (reboot)
+ * Fencing node3 (reboot)
+ * Fencing node2 (reboot)
+
+Revised Cluster Status:
+ * Node List:
+ * OFFLINE: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Stopped
diff --git a/cts/scheduler/summary/rec-node-15.summary b/cts/scheduler/summary/rec-node-15.summary
new file mode 100644
index 0000000..39a9964
--- /dev/null
+++ b/cts/scheduler/summary/rec-node-15.summary
@@ -0,0 +1,88 @@
+Current cluster status:
+ * Node List:
+ * Node sapcl02: standby (with active resources)
+ * Node sapcl03: UNCLEAN (offline)
+ * Online: [ sapcl01 ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * Resource Group: app01:
+ * IPaddr_192_168_1_101 (ocf:heartbeat:IPaddr): Started sapcl01
+ * LVM_2 (ocf:heartbeat:LVM): Started sapcl01
+ * Filesystem_3 (ocf:heartbeat:Filesystem): Started sapcl01
+ * Resource Group: app02:
+ * IPaddr_192_168_1_102 (ocf:heartbeat:IPaddr): Started sapcl02
+ * LVM_12 (ocf:heartbeat:LVM): Started sapcl02
+ * Filesystem_13 (ocf:heartbeat:Filesystem): Started sapcl02
+ * Resource Group: oracle:
+ * IPaddr_192_168_1_104 (ocf:heartbeat:IPaddr): Stopped
+ * LVM_22 (ocf:heartbeat:LVM): Stopped
+ * Filesystem_23 (ocf:heartbeat:Filesystem): Stopped
+ * oracle_24 (ocf:heartbeat:oracle): Stopped
+ * oralsnr_25 (ocf:heartbeat:oralsnr): Stopped
+
+Transition Summary:
+ * Fence (reboot) sapcl03 'peer is no longer part of the cluster'
+ * Start stonith-1 ( sapcl01 )
+ * Move IPaddr_192_168_1_102 ( sapcl02 -> sapcl01 )
+ * Move LVM_12 ( sapcl02 -> sapcl01 )
+ * Move Filesystem_13 ( sapcl02 -> sapcl01 )
+ * Start IPaddr_192_168_1_104 ( sapcl01 )
+ * Start LVM_22 ( sapcl01 )
+ * Start Filesystem_23 ( sapcl01 )
+ * Start oracle_24 ( sapcl01 )
+ * Start oralsnr_25 ( sapcl01 )
+
+Executing Cluster Transition:
+ * Resource action: stonith-1 monitor on sapcl02
+ * Resource action: stonith-1 monitor on sapcl01
+ * Pseudo action: app02_stop_0
+ * Resource action: Filesystem_13 stop on sapcl02
+ * Pseudo action: oracle_start_0
+ * Fencing sapcl03 (reboot)
+ * Resource action: stonith-1 start on sapcl01
+ * Resource action: LVM_12 stop on sapcl02
+ * Resource action: IPaddr_192_168_1_104 start on sapcl01
+ * Resource action: LVM_22 start on sapcl01
+ * Resource action: Filesystem_23 start on sapcl01
+ * Resource action: oracle_24 start on sapcl01
+ * Resource action: oralsnr_25 start on sapcl01
+ * Resource action: IPaddr_192_168_1_102 stop on sapcl02
+ * Pseudo action: oracle_running_0
+ * Resource action: IPaddr_192_168_1_104 monitor=5000 on sapcl01
+ * Resource action: LVM_22 monitor=120000 on sapcl01
+ * Resource action: Filesystem_23 monitor=120000 on sapcl01
+ * Resource action: oracle_24 monitor=120000 on sapcl01
+ * Resource action: oralsnr_25 monitor=120000 on sapcl01
+ * Pseudo action: app02_stopped_0
+ * Pseudo action: app02_start_0
+ * Resource action: IPaddr_192_168_1_102 start on sapcl01
+ * Resource action: LVM_12 start on sapcl01
+ * Resource action: Filesystem_13 start on sapcl01
+ * Pseudo action: app02_running_0
+ * Resource action: IPaddr_192_168_1_102 monitor=5000 on sapcl01
+ * Resource action: LVM_12 monitor=120000 on sapcl01
+ * Resource action: Filesystem_13 monitor=120000 on sapcl01
+
+Revised Cluster Status:
+ * Node List:
+ * Node sapcl02: standby
+ * Online: [ sapcl01 ]
+ * OFFLINE: [ sapcl03 ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Started sapcl01
+ * Resource Group: app01:
+ * IPaddr_192_168_1_101 (ocf:heartbeat:IPaddr): Started sapcl01
+ * LVM_2 (ocf:heartbeat:LVM): Started sapcl01
+ * Filesystem_3 (ocf:heartbeat:Filesystem): Started sapcl01
+ * Resource Group: app02:
+ * IPaddr_192_168_1_102 (ocf:heartbeat:IPaddr): Started sapcl01
+ * LVM_12 (ocf:heartbeat:LVM): Started sapcl01
+ * Filesystem_13 (ocf:heartbeat:Filesystem): Started sapcl01
+ * Resource Group: oracle:
+ * IPaddr_192_168_1_104 (ocf:heartbeat:IPaddr): Started sapcl01
+ * LVM_22 (ocf:heartbeat:LVM): Started sapcl01
+ * Filesystem_23 (ocf:heartbeat:Filesystem): Started sapcl01
+ * oracle_24 (ocf:heartbeat:oracle): Started sapcl01
+ * oralsnr_25 (ocf:heartbeat:oralsnr): Started sapcl01
diff --git a/cts/scheduler/summary/rec-node-2.summary b/cts/scheduler/summary/rec-node-2.summary
new file mode 100644
index 0000000..11e818a
--- /dev/null
+++ b/cts/scheduler/summary/rec-node-2.summary
@@ -0,0 +1,62 @@
+Current cluster status:
+ * Node List:
+ * Node node1: UNCLEAN (offline)
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * rsc2 (ocf:heartbeat:apache): Stopped
+ * Resource Group: group1:
+ * rsc3 (ocf:heartbeat:apache): Stopped
+ * rsc4 (ocf:heartbeat:apache): Stopped
+ * Resource Group: group2:
+ * rsc5 (ocf:heartbeat:apache): Stopped
+ * rsc6 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Fence (reboot) node1 'node is unclean'
+ * Start stonith-1 ( node2 )
+ * Start rsc1 ( node2 )
+ * Start rsc2 ( node2 )
+ * Start rsc3 ( node2 )
+ * Start rsc4 ( node2 )
+ * Start rsc5 ( node2 )
+ * Start rsc6 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: stonith-1 monitor on node2
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc2 monitor on node2
+ * Pseudo action: group1_start_0
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc4 monitor on node2
+ * Pseudo action: group2_start_0
+ * Resource action: rsc5 monitor on node2
+ * Resource action: rsc6 monitor on node2
+ * Fencing node1 (reboot)
+ * Resource action: stonith-1 start on node2
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 start on node2
+ * Resource action: rsc3 start on node2
+ * Resource action: rsc4 start on node2
+ * Resource action: rsc5 start on node2
+ * Resource action: rsc6 start on node2
+ * Pseudo action: group1_running_0
+ * Pseudo action: group2_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node2 ]
+ * OFFLINE: [ node1 ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Started node2
+ * rsc1 (ocf:heartbeat:apache): Started node2
+ * rsc2 (ocf:heartbeat:apache): Started node2
+ * Resource Group: group1:
+ * rsc3 (ocf:heartbeat:apache): Started node2
+ * rsc4 (ocf:heartbeat:apache): Started node2
+ * Resource Group: group2:
+ * rsc5 (ocf:heartbeat:apache): Started node2
+ * rsc6 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/rec-node-3.summary b/cts/scheduler/summary/rec-node-3.summary
new file mode 100644
index 0000000..35d9dd3
--- /dev/null
+++ b/cts/scheduler/summary/rec-node-3.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node2 ]
+ * OFFLINE: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * rsc2 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+ * Start rsc2 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node2 ]
+ * OFFLINE: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node2
+ * rsc2 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/rec-node-4.summary b/cts/scheduler/summary/rec-node-4.summary
new file mode 100644
index 0000000..f56c118
--- /dev/null
+++ b/cts/scheduler/summary/rec-node-4.summary
@@ -0,0 +1,36 @@
+Current cluster status:
+ * Node List:
+ * Node node1: UNCLEAN (offline)
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * rsc1 (ocf:heartbeat:apache): Started node1 (UNCLEAN)
+ * rsc2 (ocf:heartbeat:apache): Started node1 (UNCLEAN)
+
+Transition Summary:
+ * Fence (reboot) node1 'peer is no longer part of the cluster'
+ * Start stonith-1 ( node2 )
+ * Move rsc1 ( node1 -> node2 )
+ * Move rsc2 ( node1 -> node2 )
+
+Executing Cluster Transition:
+ * Resource action: stonith-1 monitor on node2
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc2 monitor on node2
+ * Fencing node1 (reboot)
+ * Resource action: stonith-1 start on node2
+ * Pseudo action: rsc1_stop_0
+ * Pseudo action: rsc2_stop_0
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node2 ]
+ * OFFLINE: [ node1 ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Started node2
+ * rsc1 (ocf:heartbeat:apache): Started node2
+ * rsc2 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/rec-node-5.summary b/cts/scheduler/summary/rec-node-5.summary
new file mode 100644
index 0000000..a4128ca
--- /dev/null
+++ b/cts/scheduler/summary/rec-node-5.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Node node1: UNCLEAN (offline)
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * rsc2 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+ * Start rsc2 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Node node1: UNCLEAN (offline)
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node2
+ * rsc2 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/rec-node-6.summary b/cts/scheduler/summary/rec-node-6.summary
new file mode 100644
index 0000000..a7ee902
--- /dev/null
+++ b/cts/scheduler/summary/rec-node-6.summary
@@ -0,0 +1,36 @@
+Current cluster status:
+ * Node List:
+ * Node node1: UNCLEAN (online)
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node1
+
+Transition Summary:
+ * Fence (reboot) node1 'peer process is no longer available'
+ * Start stonith-1 ( node2 )
+ * Move rsc1 ( node1 -> node2 )
+ * Move rsc2 ( node1 -> node2 )
+
+Executing Cluster Transition:
+ * Resource action: stonith-1 monitor on node2
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc2 monitor on node2
+ * Fencing node1 (reboot)
+ * Resource action: stonith-1 start on node2
+ * Pseudo action: rsc1_stop_0
+ * Pseudo action: rsc2_stop_0
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node2 ]
+ * OFFLINE: [ node1 ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Started node2
+ * rsc1 (ocf:heartbeat:apache): Started node2
+ * rsc2 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/rec-node-7.summary b/cts/scheduler/summary/rec-node-7.summary
new file mode 100644
index 0000000..f56c118
--- /dev/null
+++ b/cts/scheduler/summary/rec-node-7.summary
@@ -0,0 +1,36 @@
+Current cluster status:
+ * Node List:
+ * Node node1: UNCLEAN (offline)
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * rsc1 (ocf:heartbeat:apache): Started node1 (UNCLEAN)
+ * rsc2 (ocf:heartbeat:apache): Started node1 (UNCLEAN)
+
+Transition Summary:
+ * Fence (reboot) node1 'peer is no longer part of the cluster'
+ * Start stonith-1 ( node2 )
+ * Move rsc1 ( node1 -> node2 )
+ * Move rsc2 ( node1 -> node2 )
+
+Executing Cluster Transition:
+ * Resource action: stonith-1 monitor on node2
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc2 monitor on node2
+ * Fencing node1 (reboot)
+ * Resource action: stonith-1 start on node2
+ * Pseudo action: rsc1_stop_0
+ * Pseudo action: rsc2_stop_0
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node2 ]
+ * OFFLINE: [ node1 ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Started node2
+ * rsc1 (ocf:heartbeat:apache): Started node2
+ * rsc2 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/rec-node-8.summary b/cts/scheduler/summary/rec-node-8.summary
new file mode 100644
index 0000000..226e333
--- /dev/null
+++ b/cts/scheduler/summary/rec-node-8.summary
@@ -0,0 +1,33 @@
+Current cluster status:
+ * Node List:
+ * Node node1: UNCLEAN (offline)
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * rsc1 (ocf:heartbeat:apache): Started node1 (UNCLEAN)
+ * rsc2 (ocf:heartbeat:apache): Started node1 (UNCLEAN)
+ * rsc3 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start stonith-1 ( node2 ) due to quorum freeze (blocked)
+ * Stop rsc1 ( node1 ) blocked
+ * Stop rsc2 ( node1 ) blocked
+ * Start rsc3 ( node2 ) due to quorum freeze (blocked)
+
+Executing Cluster Transition:
+ * Resource action: stonith-1 monitor on node2
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc3 monitor on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Node node1: UNCLEAN (offline)
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * rsc1 (ocf:heartbeat:apache): Started node1 (UNCLEAN)
+ * rsc2 (ocf:heartbeat:apache): Started node1 (UNCLEAN)
+ * rsc3 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/rec-node-9.summary b/cts/scheduler/summary/rec-node-9.summary
new file mode 100644
index 0000000..edb9d8d
--- /dev/null
+++ b/cts/scheduler/summary/rec-node-9.summary
@@ -0,0 +1,25 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node2 ]
+ * OFFLINE: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * rsc2 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 ) due to no quorum (blocked)
+ * Start rsc2 ( node2 ) due to no quorum (blocked)
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc2 monitor on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node2 ]
+ * OFFLINE: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * rsc2 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/rec-rsc-0.summary b/cts/scheduler/summary/rec-rsc-0.summary
new file mode 100644
index 0000000..9861e82
--- /dev/null
+++ b/cts/scheduler/summary/rec-rsc-0.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): FAILED [ node1 node2 ]
+
+Transition Summary:
+ * Stop rsc1 ( node1 ) due to node availability
+ * Stop rsc1 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node2
+ * Resource action: rsc1 stop on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/rec-rsc-1.summary b/cts/scheduler/summary/rec-rsc-1.summary
new file mode 100644
index 0000000..95f311f
--- /dev/null
+++ b/cts/scheduler/summary/rec-rsc-1.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): FAILED node1
+
+Transition Summary:
+ * Recover rsc1 ( node1 -> node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 stop on node1
+ * Resource action: rsc1 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/rec-rsc-2.summary b/cts/scheduler/summary/rec-rsc-2.summary
new file mode 100644
index 0000000..27a2eb0
--- /dev/null
+++ b/cts/scheduler/summary/rec-rsc-2.summary
@@ -0,0 +1,22 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): FAILED node1
+
+Transition Summary:
+ * Recover rsc1 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 stop on node1
+ * Resource action: rsc1 cancel=1 on node1
+ * Resource action: rsc1 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/rec-rsc-3.summary b/cts/scheduler/summary/rec-rsc-3.summary
new file mode 100644
index 0000000..12ee7b0
--- /dev/null
+++ b/cts/scheduler/summary/rec-rsc-3.summary
@@ -0,0 +1,20 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped (failure ignored)
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc1 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1 (failure ignored)
diff --git a/cts/scheduler/summary/rec-rsc-4.summary b/cts/scheduler/summary/rec-rsc-4.summary
new file mode 100644
index 0000000..2f5dbdb
--- /dev/null
+++ b/cts/scheduler/summary/rec-rsc-4.summary
@@ -0,0 +1,20 @@
+0 of 1 resource instances DISABLED and 1 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): FAILED node2 (blocked)
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): FAILED node2 (blocked)
diff --git a/cts/scheduler/summary/rec-rsc-5.summary b/cts/scheduler/summary/rec-rsc-5.summary
new file mode 100644
index 0000000..b045e03
--- /dev/null
+++ b/cts/scheduler/summary/rec-rsc-5.summary
@@ -0,0 +1,36 @@
+Current cluster status:
+ * Node List:
+ * Node node2: UNCLEAN (online)
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * rsc1 (ocf:heartbeat:apache): FAILED node2
+ * rsc2 (ocf:heartbeat:apache): Started node2
+
+Transition Summary:
+ * Fence (reboot) node2 'rsc1 failed there'
+ * Start stonith-1 ( node1 )
+ * Recover rsc1 ( node2 -> node1 )
+ * Move rsc2 ( node2 -> node1 )
+
+Executing Cluster Transition:
+ * Resource action: stonith-1 monitor on node1
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node1
+ * Fencing node2 (reboot)
+ * Resource action: stonith-1 start on node1
+ * Pseudo action: rsc1_stop_0
+ * Pseudo action: rsc2_stop_0
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 ]
+ * OFFLINE: [ node2 ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Started node1
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/rec-rsc-6.summary b/cts/scheduler/summary/rec-rsc-6.summary
new file mode 100644
index 0000000..a4ea149
--- /dev/null
+++ b/cts/scheduler/summary/rec-rsc-6.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started [ node1 node2 ]
+
+Transition Summary:
+ * Restart rsc1 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node2
+ * Resource action: rsc1 stop on node1
+ * Resource action: rsc1 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/rec-rsc-7.summary b/cts/scheduler/summary/rec-rsc-7.summary
new file mode 100644
index 0000000..bb5cd98
--- /dev/null
+++ b/cts/scheduler/summary/rec-rsc-7.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started [ node1 node2 ]
+
+Transition Summary:
+ * Stop rsc1 ( node1 ) due to node availability
+ * Stop rsc1 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node2
+ * Resource action: rsc1 stop on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/rec-rsc-8.summary b/cts/scheduler/summary/rec-rsc-8.summary
new file mode 100644
index 0000000..5ea2de6
--- /dev/null
+++ b/cts/scheduler/summary/rec-rsc-8.summary
@@ -0,0 +1,19 @@
+0 of 1 resource instances DISABLED and 1 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started (blocked) [ node1 node2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started (blocked) [ node1 node2 ]
diff --git a/cts/scheduler/summary/rec-rsc-9.summary b/cts/scheduler/summary/rec-rsc-9.summary
new file mode 100644
index 0000000..f3fae63
--- /dev/null
+++ b/cts/scheduler/summary/rec-rsc-9.summary
@@ -0,0 +1,42 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * Resource Group: foo:
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * Resource Group: bar:
+ * rsc3 (ocf:heartbeat:apache): FAILED node1
+
+Transition Summary:
+ * Restart rsc1 ( node1 ) due to required bar running
+ * Restart rsc2 ( node1 ) due to required bar running
+ * Recover rsc3 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node1
+ * Pseudo action: foo_stop_0
+ * Resource action: rsc2 stop on node1
+ * Pseudo action: foo_stopped_0
+ * Pseudo action: bar_stop_0
+ * Resource action: rsc3 stop on node1
+ * Pseudo action: bar_stopped_0
+ * Pseudo action: bar_start_0
+ * Resource action: rsc3 start on node1
+ * Pseudo action: bar_running_0
+ * Resource action: rsc1 start on node1
+ * Pseudo action: foo_start_0
+ * Resource action: rsc2 start on node1
+ * Pseudo action: foo_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * Resource Group: foo:
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * Resource Group: bar:
+ * rsc3 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/reload-becomes-restart.summary b/cts/scheduler/summary/reload-becomes-restart.summary
new file mode 100644
index 0000000..a6bd43a
--- /dev/null
+++ b/cts/scheduler/summary/reload-becomes-restart.summary
@@ -0,0 +1,55 @@
+Using the original execution date of: 2016-12-12 20:28:26Z
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Stopped
+ * Clone Set: cl-rsc1 [rsc1]:
+ * Stopped: [ node1 node2 ]
+ * Clone Set: cl-rsc2 [rsc2]:
+ * Started: [ node1 ]
+ * Stopped: [ node2 ]
+
+Transition Summary:
+ * Start Fencing ( node1 )
+ * Start rsc1:0 ( node2 )
+ * Start rsc1:1 ( node1 )
+ * Restart rsc2:0 ( node1 ) due to required rsc1:1 start
+ * Start rsc2:1 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: Fencing monitor on node2
+ * Resource action: Fencing monitor on node1
+ * Resource action: rsc1:0 monitor on node2
+ * Resource action: rsc1:1 monitor on node1
+ * Pseudo action: cl-rsc1_start_0
+ * Resource action: rsc2 monitor on node2
+ * Pseudo action: cl-rsc2_stop_0
+ * Resource action: Fencing start on node1
+ * Resource action: rsc1:0 start on node2
+ * Resource action: rsc1:1 start on node1
+ * Pseudo action: cl-rsc1_running_0
+ * Resource action: rsc2 stop on node1
+ * Pseudo action: cl-rsc2_stopped_0
+ * Pseudo action: cl-rsc2_start_0
+ * Resource action: Fencing monitor=120000 on node1
+ * Resource action: rsc1:0 monitor=120000 on node2
+ * Resource action: rsc1:1 monitor=120000 on node1
+ * Resource action: rsc2 start on node1
+ * Resource action: rsc2 monitor=200000 on node1
+ * Resource action: rsc2 start on node2
+ * Pseudo action: cl-rsc2_running_0
+ * Resource action: rsc2 monitor=200000 on node2
+Using the original execution date of: 2016-12-12 20:28:26Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * Clone Set: cl-rsc1 [rsc1]:
+ * Started: [ node1 node2 ]
+ * Clone Set: cl-rsc2 [rsc2]:
+ * Started: [ node1 node2 ]
diff --git a/cts/scheduler/summary/remote-connection-shutdown.summary b/cts/scheduler/summary/remote-connection-shutdown.summary
new file mode 100644
index 0000000..b8ea5be
--- /dev/null
+++ b/cts/scheduler/summary/remote-connection-shutdown.summary
@@ -0,0 +1,162 @@
+Using the original execution date of: 2020-11-17 07:03:16Z
+Current cluster status:
+ * Node List:
+ * Online: [ controller-0 controller-1 controller-2 database-0 database-1 database-2 messaging-0 messaging-1 messaging-2 ]
+ * RemoteOnline: [ compute-0 compute-1 ]
+ * GuestOnline: [ galera-bundle-0 galera-bundle-1 galera-bundle-2 ovn-dbs-bundle-0 ovn-dbs-bundle-1 ovn-dbs-bundle-2 rabbitmq-bundle-0 rabbitmq-bundle-1 rabbitmq-bundle-2 redis-bundle-0 redis-bundle-1 redis-bundle-2 ]
+
+ * Full List of Resources:
+ * compute-0 (ocf:pacemaker:remote): Started controller-0
+ * compute-1 (ocf:pacemaker:remote): Started controller-1
+ * Container bundle set: galera-bundle [cluster.common.tag/mariadb:pcmklatest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Promoted database-0
+ * galera-bundle-1 (ocf:heartbeat:galera): Promoted database-1
+ * galera-bundle-2 (ocf:heartbeat:galera): Promoted database-2
+ * Container bundle set: rabbitmq-bundle [cluster.common.tag/rabbitmq:pcmklatest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started messaging-0
+ * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started messaging-1
+ * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started messaging-2
+ * Container bundle set: redis-bundle [cluster.common.tag/redis:pcmklatest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Promoted controller-2
+ * redis-bundle-1 (ocf:heartbeat:redis): Unpromoted controller-0
+ * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted controller-1
+ * ip-192.168.24.150 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-10.0.0.150 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.1.151 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.1.150 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.3.150 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.4.150 (ocf:heartbeat:IPaddr2): Started controller-1
+ * Container bundle set: haproxy-bundle [cluster.common.tag/haproxy:pcmklatest]:
+ * haproxy-bundle-podman-0 (ocf:heartbeat:podman): Started controller-2
+ * haproxy-bundle-podman-1 (ocf:heartbeat:podman): Started controller-0
+ * haproxy-bundle-podman-2 (ocf:heartbeat:podman): Started controller-1
+ * Container bundle set: ovn-dbs-bundle [cluster.common.tag/ovn-northd:pcmklatest]:
+ * ovn-dbs-bundle-0 (ocf:ovn:ovndb-servers): Promoted controller-2
+ * ovn-dbs-bundle-1 (ocf:ovn:ovndb-servers): Unpromoted controller-0
+ * ovn-dbs-bundle-2 (ocf:ovn:ovndb-servers): Unpromoted controller-1
+ * ip-172.17.1.57 (ocf:heartbeat:IPaddr2): Started controller-2
+ * stonith-fence_compute-fence-nova (stonith:fence_compute): Stopped
+ * Clone Set: compute-unfence-trigger-clone [compute-unfence-trigger]:
+ * Started: [ compute-0 compute-1 ]
+ * Stopped: [ controller-0 controller-1 controller-2 database-0 database-1 database-2 messaging-0 messaging-1 messaging-2 ]
+ * nova-evacuate (ocf:openstack:NovaEvacuate): Started database-0
+ * stonith-fence_ipmilan-52540033df9c (stonith:fence_ipmilan): Started database-1
+ * stonith-fence_ipmilan-5254001f5f3c (stonith:fence_ipmilan): Started database-2
+ * stonith-fence_ipmilan-5254003f88b4 (stonith:fence_ipmilan): Started messaging-0
+ * stonith-fence_ipmilan-5254007b7920 (stonith:fence_ipmilan): Started messaging-1
+ * stonith-fence_ipmilan-525400642894 (stonith:fence_ipmilan): Started messaging-2
+ * stonith-fence_ipmilan-525400d5382b (stonith:fence_ipmilan): Started database-2
+ * stonith-fence_ipmilan-525400bb150b (stonith:fence_ipmilan): Started messaging-0
+ * stonith-fence_ipmilan-525400ffc780 (stonith:fence_ipmilan): Started messaging-2
+ * stonith-fence_ipmilan-5254009cb549 (stonith:fence_ipmilan): Started database-0
+ * stonith-fence_ipmilan-525400e10267 (stonith:fence_ipmilan): Started messaging-1
+ * stonith-fence_ipmilan-525400dc0f81 (stonith:fence_ipmilan): Started database-1
+ * Container bundle: openstack-cinder-volume [cluster.common.tag/cinder-volume:pcmklatest]:
+ * openstack-cinder-volume-podman-0 (ocf:heartbeat:podman): Started controller-0
+
+Transition Summary:
+ * Stop compute-0 ( controller-0 ) due to node availability
+ * Start stonith-fence_compute-fence-nova ( database-0 )
+ * Stop compute-unfence-trigger:0 ( compute-0 ) due to node availability
+ * Move nova-evacuate ( database-0 -> database-1 )
+ * Move stonith-fence_ipmilan-52540033df9c ( database-1 -> database-2 )
+ * Move stonith-fence_ipmilan-5254001f5f3c ( database-2 -> messaging-0 )
+ * Move stonith-fence_ipmilan-5254003f88b4 ( messaging-0 -> messaging-1 )
+ * Move stonith-fence_ipmilan-5254007b7920 ( messaging-1 -> messaging-2 )
+ * Move stonith-fence_ipmilan-525400ffc780 ( messaging-2 -> database-0 )
+ * Move stonith-fence_ipmilan-5254009cb549 ( database-0 -> database-1 )
+
+Executing Cluster Transition:
+ * Resource action: stonith-fence_compute-fence-nova start on database-0
+ * Cluster action: clear_failcount for stonith-fence_compute-fence-nova on messaging-2
+ * Cluster action: clear_failcount for stonith-fence_compute-fence-nova on messaging-0
+ * Cluster action: clear_failcount for stonith-fence_compute-fence-nova on messaging-1
+ * Cluster action: clear_failcount for stonith-fence_compute-fence-nova on controller-2
+ * Cluster action: clear_failcount for stonith-fence_compute-fence-nova on controller-1
+ * Cluster action: clear_failcount for stonith-fence_compute-fence-nova on controller-0
+ * Cluster action: clear_failcount for stonith-fence_compute-fence-nova on database-2
+ * Cluster action: clear_failcount for stonith-fence_compute-fence-nova on database-1
+ * Cluster action: clear_failcount for stonith-fence_compute-fence-nova on database-0
+ * Pseudo action: compute-unfence-trigger-clone_stop_0
+ * Resource action: nova-evacuate stop on database-0
+ * Resource action: stonith-fence_ipmilan-52540033df9c stop on database-1
+ * Resource action: stonith-fence_ipmilan-5254001f5f3c stop on database-2
+ * Resource action: stonith-fence_ipmilan-5254003f88b4 stop on messaging-0
+ * Resource action: stonith-fence_ipmilan-5254007b7920 stop on messaging-1
+ * Resource action: stonith-fence_ipmilan-525400ffc780 stop on messaging-2
+ * Resource action: stonith-fence_ipmilan-5254009cb549 stop on database-0
+ * Resource action: stonith-fence_compute-fence-nova monitor=60000 on database-0
+ * Resource action: compute-unfence-trigger stop on compute-0
+ * Pseudo action: compute-unfence-trigger-clone_stopped_0
+ * Resource action: nova-evacuate start on database-1
+ * Resource action: stonith-fence_ipmilan-52540033df9c start on database-2
+ * Resource action: stonith-fence_ipmilan-5254001f5f3c start on messaging-0
+ * Resource action: stonith-fence_ipmilan-5254003f88b4 start on messaging-1
+ * Resource action: stonith-fence_ipmilan-5254007b7920 start on messaging-2
+ * Resource action: stonith-fence_ipmilan-525400ffc780 start on database-0
+ * Resource action: stonith-fence_ipmilan-5254009cb549 start on database-1
+ * Resource action: compute-0 stop on controller-0
+ * Resource action: nova-evacuate monitor=10000 on database-1
+ * Resource action: stonith-fence_ipmilan-52540033df9c monitor=60000 on database-2
+ * Resource action: stonith-fence_ipmilan-5254001f5f3c monitor=60000 on messaging-0
+ * Resource action: stonith-fence_ipmilan-5254003f88b4 monitor=60000 on messaging-1
+ * Resource action: stonith-fence_ipmilan-5254007b7920 monitor=60000 on messaging-2
+ * Resource action: stonith-fence_ipmilan-525400ffc780 monitor=60000 on database-0
+ * Resource action: stonith-fence_ipmilan-5254009cb549 monitor=60000 on database-1
+Using the original execution date of: 2020-11-17 07:03:16Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ controller-0 controller-1 controller-2 database-0 database-1 database-2 messaging-0 messaging-1 messaging-2 ]
+ * RemoteOnline: [ compute-1 ]
+ * RemoteOFFLINE: [ compute-0 ]
+ * GuestOnline: [ galera-bundle-0 galera-bundle-1 galera-bundle-2 ovn-dbs-bundle-0 ovn-dbs-bundle-1 ovn-dbs-bundle-2 rabbitmq-bundle-0 rabbitmq-bundle-1 rabbitmq-bundle-2 redis-bundle-0 redis-bundle-1 redis-bundle-2 ]
+
+ * Full List of Resources:
+ * compute-0 (ocf:pacemaker:remote): Stopped
+ * compute-1 (ocf:pacemaker:remote): Started controller-1
+ * Container bundle set: galera-bundle [cluster.common.tag/mariadb:pcmklatest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Promoted database-0
+ * galera-bundle-1 (ocf:heartbeat:galera): Promoted database-1
+ * galera-bundle-2 (ocf:heartbeat:galera): Promoted database-2
+ * Container bundle set: rabbitmq-bundle [cluster.common.tag/rabbitmq:pcmklatest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started messaging-0
+ * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started messaging-1
+ * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started messaging-2
+ * Container bundle set: redis-bundle [cluster.common.tag/redis:pcmklatest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Promoted controller-2
+ * redis-bundle-1 (ocf:heartbeat:redis): Unpromoted controller-0
+ * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted controller-1
+ * ip-192.168.24.150 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-10.0.0.150 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.1.151 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.1.150 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.3.150 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.4.150 (ocf:heartbeat:IPaddr2): Started controller-1
+ * Container bundle set: haproxy-bundle [cluster.common.tag/haproxy:pcmklatest]:
+ * haproxy-bundle-podman-0 (ocf:heartbeat:podman): Started controller-2
+ * haproxy-bundle-podman-1 (ocf:heartbeat:podman): Started controller-0
+ * haproxy-bundle-podman-2 (ocf:heartbeat:podman): Started controller-1
+ * Container bundle set: ovn-dbs-bundle [cluster.common.tag/ovn-northd:pcmklatest]:
+ * ovn-dbs-bundle-0 (ocf:ovn:ovndb-servers): Promoted controller-2
+ * ovn-dbs-bundle-1 (ocf:ovn:ovndb-servers): Unpromoted controller-0
+ * ovn-dbs-bundle-2 (ocf:ovn:ovndb-servers): Unpromoted controller-1
+ * ip-172.17.1.57 (ocf:heartbeat:IPaddr2): Started controller-2
+ * stonith-fence_compute-fence-nova (stonith:fence_compute): Started database-0
+ * Clone Set: compute-unfence-trigger-clone [compute-unfence-trigger]:
+ * Started: [ compute-1 ]
+ * Stopped: [ compute-0 controller-0 controller-1 controller-2 database-0 database-1 database-2 messaging-0 messaging-1 messaging-2 ]
+ * nova-evacuate (ocf:openstack:NovaEvacuate): Started database-1
+ * stonith-fence_ipmilan-52540033df9c (stonith:fence_ipmilan): Started database-2
+ * stonith-fence_ipmilan-5254001f5f3c (stonith:fence_ipmilan): Started messaging-0
+ * stonith-fence_ipmilan-5254003f88b4 (stonith:fence_ipmilan): Started messaging-1
+ * stonith-fence_ipmilan-5254007b7920 (stonith:fence_ipmilan): Started messaging-2
+ * stonith-fence_ipmilan-525400642894 (stonith:fence_ipmilan): Started messaging-2
+ * stonith-fence_ipmilan-525400d5382b (stonith:fence_ipmilan): Started database-2
+ * stonith-fence_ipmilan-525400bb150b (stonith:fence_ipmilan): Started messaging-0
+ * stonith-fence_ipmilan-525400ffc780 (stonith:fence_ipmilan): Started database-0
+ * stonith-fence_ipmilan-5254009cb549 (stonith:fence_ipmilan): Started database-1
+ * stonith-fence_ipmilan-525400e10267 (stonith:fence_ipmilan): Started messaging-1
+ * stonith-fence_ipmilan-525400dc0f81 (stonith:fence_ipmilan): Started database-1
+ * Container bundle: openstack-cinder-volume [cluster.common.tag/cinder-volume:pcmklatest]:
+ * openstack-cinder-volume-podman-0 (ocf:heartbeat:podman): Started controller-0
diff --git a/cts/scheduler/summary/remote-connection-unrecoverable.summary b/cts/scheduler/summary/remote-connection-unrecoverable.summary
new file mode 100644
index 0000000..ad8f353
--- /dev/null
+++ b/cts/scheduler/summary/remote-connection-unrecoverable.summary
@@ -0,0 +1,54 @@
+Current cluster status:
+ * Node List:
+ * Node node1: UNCLEAN (offline)
+ * Online: [ node2 ]
+ * RemoteOnline: [ remote1 ]
+
+ * Full List of Resources:
+ * remote1 (ocf:pacemaker:remote): Started node1 (UNCLEAN)
+ * killer (stonith:fence_xvm): Started node2
+ * rsc1 (ocf:pacemaker:Dummy): Started remote1
+ * Clone Set: rsc2-master [rsc2] (promotable):
+ * rsc2 (ocf:pacemaker:Stateful): Promoted node1 (UNCLEAN)
+ * Promoted: [ node2 ]
+ * Stopped: [ remote1 ]
+
+Transition Summary:
+ * Fence (reboot) remote1 'resources are active but connection is unrecoverable'
+ * Fence (reboot) node1 'peer is no longer part of the cluster'
+ * Stop remote1 ( node1 ) due to node availability
+ * Restart killer ( node2 ) due to resource definition change
+ * Move rsc1 ( remote1 -> node2 )
+ * Stop rsc2:0 ( Promoted node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: remote1_stop_0
+ * Resource action: killer stop on node2
+ * Resource action: rsc1 monitor on node2
+ * Fencing node1 (reboot)
+ * Fencing remote1 (reboot)
+ * Resource action: killer start on node2
+ * Resource action: killer monitor=60000 on node2
+ * Pseudo action: rsc1_stop_0
+ * Pseudo action: rsc2-master_demote_0
+ * Resource action: rsc1 start on node2
+ * Pseudo action: rsc2_demote_0
+ * Pseudo action: rsc2-master_demoted_0
+ * Pseudo action: rsc2-master_stop_0
+ * Resource action: rsc1 monitor=10000 on node2
+ * Pseudo action: rsc2_stop_0
+ * Pseudo action: rsc2-master_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node2 ]
+ * OFFLINE: [ node1 ]
+ * RemoteOFFLINE: [ remote1 ]
+
+ * Full List of Resources:
+ * remote1 (ocf:pacemaker:remote): Stopped
+ * killer (stonith:fence_xvm): Started node2
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * Clone Set: rsc2-master [rsc2] (promotable):
+ * Promoted: [ node2 ]
+ * Stopped: [ node1 remote1 ]
diff --git a/cts/scheduler/summary/remote-disable.summary b/cts/scheduler/summary/remote-disable.summary
new file mode 100644
index 0000000..a90cb40
--- /dev/null
+++ b/cts/scheduler/summary/remote-disable.summary
@@ -0,0 +1,35 @@
+1 of 6 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ 18builder 18node1 18node2 ]
+ * RemoteOnline: [ remote1 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started 18node1
+ * remote1 (ocf:pacemaker:remote): Started 18builder (disabled)
+ * FAKE1 (ocf:heartbeat:Dummy): Started 18node2
+ * FAKE2 (ocf:heartbeat:Dummy): Started remote1
+ * FAKE3 (ocf:heartbeat:Dummy): Started 18builder
+ * FAKE4 (ocf:heartbeat:Dummy): Started 18node1
+
+Transition Summary:
+ * Stop remote1 ( 18builder ) due to node availability
+ * Stop FAKE2 ( remote1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: FAKE2 stop on remote1
+ * Resource action: remote1 stop on 18builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18builder 18node1 18node2 ]
+ * RemoteOFFLINE: [ remote1 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started 18node1
+ * remote1 (ocf:pacemaker:remote): Stopped (disabled)
+ * FAKE1 (ocf:heartbeat:Dummy): Started 18node2
+ * FAKE2 (ocf:heartbeat:Dummy): Stopped
+ * FAKE3 (ocf:heartbeat:Dummy): Started 18builder
+ * FAKE4 (ocf:heartbeat:Dummy): Started 18node1
diff --git a/cts/scheduler/summary/remote-fence-before-reconnect.summary b/cts/scheduler/summary/remote-fence-before-reconnect.summary
new file mode 100644
index 0000000..ab361ef
--- /dev/null
+++ b/cts/scheduler/summary/remote-fence-before-reconnect.summary
@@ -0,0 +1,39 @@
+Current cluster status:
+ * Node List:
+ * RemoteNode c7auto4: UNCLEAN (offline)
+ * Online: [ c7auto1 c7auto2 c7auto3 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto2
+ * c7auto4 (ocf:pacemaker:remote): FAILED c7auto1
+ * fake1 (ocf:heartbeat:Dummy): Started c7auto3
+ * fake2 (ocf:heartbeat:Dummy): Started c7auto4 (UNCLEAN)
+ * fake3 (ocf:heartbeat:Dummy): Started c7auto1
+ * fake4 (ocf:heartbeat:Dummy): Started c7auto2
+ * fake5 (ocf:heartbeat:Dummy): Started c7auto3
+
+Transition Summary:
+ * Fence (reboot) c7auto4 'remote connection is unrecoverable'
+ * Stop c7auto4 ( c7auto1 ) due to node availability
+ * Move fake2 ( c7auto4 -> c7auto1 )
+
+Executing Cluster Transition:
+ * Resource action: c7auto4 stop on c7auto1
+ * Fencing c7auto4 (reboot)
+ * Pseudo action: fake2_stop_0
+ * Resource action: fake2 start on c7auto1
+ * Resource action: fake2 monitor=10000 on c7auto1
+
+Revised Cluster Status:
+ * Node List:
+ * RemoteNode c7auto4: UNCLEAN (offline)
+ * Online: [ c7auto1 c7auto2 c7auto3 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto2
+ * c7auto4 (ocf:pacemaker:remote): FAILED
+ * fake1 (ocf:heartbeat:Dummy): Started c7auto3
+ * fake2 (ocf:heartbeat:Dummy): Started c7auto1
+ * fake3 (ocf:heartbeat:Dummy): Started c7auto1
+ * fake4 (ocf:heartbeat:Dummy): Started c7auto2
+ * fake5 (ocf:heartbeat:Dummy): Started c7auto3
diff --git a/cts/scheduler/summary/remote-fence-unclean-3.summary b/cts/scheduler/summary/remote-fence-unclean-3.summary
new file mode 100644
index 0000000..af916ed
--- /dev/null
+++ b/cts/scheduler/summary/remote-fence-unclean-3.summary
@@ -0,0 +1,103 @@
+Current cluster status:
+ * Node List:
+ * Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * RemoteOFFLINE: [ overcloud-novacompute-0 ]
+ * GuestOnline: [ galera-bundle-0 galera-bundle-1 galera-bundle-2 rabbitmq-bundle-0 rabbitmq-bundle-1 rabbitmq-bundle-2 redis-bundle-0 redis-bundle-1 redis-bundle-2 ]
+
+ * Full List of Resources:
+ * fence1 (stonith:fence_xvm): Stopped
+ * overcloud-novacompute-0 (ocf:pacemaker:remote): FAILED overcloud-controller-0
+ * Container bundle set: rabbitmq-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-rabbitmq:latest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started overcloud-controller-0
+ * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started overcloud-controller-1
+ * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started overcloud-controller-2
+ * Container bundle set: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Promoted overcloud-controller-0
+ * galera-bundle-1 (ocf:heartbeat:galera): Promoted overcloud-controller-1
+ * galera-bundle-2 (ocf:heartbeat:galera): Promoted overcloud-controller-2
+ * Container bundle set: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Promoted overcloud-controller-0
+ * redis-bundle-1 (ocf:heartbeat:redis): Unpromoted overcloud-controller-1
+ * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted overcloud-controller-2
+ * ip-192.168.24.9 (ocf:heartbeat:IPaddr2): Started overcloud-controller-0
+ * ip-10.0.0.7 (ocf:heartbeat:IPaddr2): Started overcloud-controller-1
+ * ip-172.16.2.4 (ocf:heartbeat:IPaddr2): Started overcloud-controller-2
+ * ip-172.16.2.8 (ocf:heartbeat:IPaddr2): Started overcloud-controller-0
+ * ip-172.16.1.9 (ocf:heartbeat:IPaddr2): Started overcloud-controller-1
+ * ip-172.16.3.9 (ocf:heartbeat:IPaddr2): Started overcloud-controller-2
+ * Container bundle set: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started overcloud-controller-0
+ * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Started overcloud-controller-1
+ * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Started overcloud-controller-2
+ * Container bundle: openstack-cinder-volume [192.168.24.1:8787/tripleoupstream/centos-binary-cinder-volume:latest]:
+ * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Started overcloud-controller-0
+ * Container bundle: openstack-cinder-backup [192.168.24.1:8787/tripleoupstream/centos-binary-cinder-backup:latest]:
+ * openstack-cinder-backup-docker-0 (ocf:heartbeat:docker): Started overcloud-controller-1
+
+Transition Summary:
+ * Fence (reboot) overcloud-novacompute-0 'the connection is unrecoverable'
+ * Start fence1 ( overcloud-controller-0 )
+ * Stop overcloud-novacompute-0 ( overcloud-controller-0 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: fence1 monitor on overcloud-controller-2
+ * Resource action: fence1 monitor on overcloud-controller-1
+ * Resource action: fence1 monitor on overcloud-controller-0
+ * Resource action: overcloud-novacompute-0 stop on overcloud-controller-0
+ * Resource action: rabbitmq-bundle-0 monitor on overcloud-controller-2
+ * Resource action: rabbitmq-bundle-0 monitor on overcloud-controller-1
+ * Resource action: rabbitmq-bundle-1 monitor on overcloud-controller-2
+ * Resource action: rabbitmq-bundle-1 monitor on overcloud-controller-0
+ * Resource action: rabbitmq-bundle-2 monitor on overcloud-controller-1
+ * Resource action: rabbitmq-bundle-2 monitor on overcloud-controller-0
+ * Resource action: galera-bundle-0 monitor on overcloud-controller-2
+ * Resource action: galera-bundle-0 monitor on overcloud-controller-1
+ * Resource action: galera-bundle-1 monitor on overcloud-controller-2
+ * Resource action: galera-bundle-1 monitor on overcloud-controller-0
+ * Resource action: galera-bundle-2 monitor on overcloud-controller-1
+ * Resource action: galera-bundle-2 monitor on overcloud-controller-0
+ * Resource action: redis-bundle-0 monitor on overcloud-controller-2
+ * Resource action: redis-bundle-0 monitor on overcloud-controller-1
+ * Resource action: redis-bundle-1 monitor on overcloud-controller-2
+ * Resource action: redis-bundle-1 monitor on overcloud-controller-0
+ * Resource action: redis-bundle-2 monitor on overcloud-controller-1
+ * Resource action: redis-bundle-2 monitor on overcloud-controller-0
+ * Fencing overcloud-novacompute-0 (reboot)
+ * Resource action: fence1 start on overcloud-controller-0
+ * Resource action: fence1 monitor=60000 on overcloud-controller-0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * RemoteOFFLINE: [ overcloud-novacompute-0 ]
+ * GuestOnline: [ galera-bundle-0 galera-bundle-1 galera-bundle-2 rabbitmq-bundle-0 rabbitmq-bundle-1 rabbitmq-bundle-2 redis-bundle-0 redis-bundle-1 redis-bundle-2 ]
+
+ * Full List of Resources:
+ * fence1 (stonith:fence_xvm): Started overcloud-controller-0
+ * overcloud-novacompute-0 (ocf:pacemaker:remote): Stopped
+ * Container bundle set: rabbitmq-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-rabbitmq:latest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started overcloud-controller-0
+ * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started overcloud-controller-1
+ * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started overcloud-controller-2
+ * Container bundle set: galera-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Promoted overcloud-controller-0
+ * galera-bundle-1 (ocf:heartbeat:galera): Promoted overcloud-controller-1
+ * galera-bundle-2 (ocf:heartbeat:galera): Promoted overcloud-controller-2
+ * Container bundle set: redis-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Promoted overcloud-controller-0
+ * redis-bundle-1 (ocf:heartbeat:redis): Unpromoted overcloud-controller-1
+ * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted overcloud-controller-2
+ * ip-192.168.24.9 (ocf:heartbeat:IPaddr2): Started overcloud-controller-0
+ * ip-10.0.0.7 (ocf:heartbeat:IPaddr2): Started overcloud-controller-1
+ * ip-172.16.2.4 (ocf:heartbeat:IPaddr2): Started overcloud-controller-2
+ * ip-172.16.2.8 (ocf:heartbeat:IPaddr2): Started overcloud-controller-0
+ * ip-172.16.1.9 (ocf:heartbeat:IPaddr2): Started overcloud-controller-1
+ * ip-172.16.3.9 (ocf:heartbeat:IPaddr2): Started overcloud-controller-2
+ * Container bundle set: haproxy-bundle [192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started overcloud-controller-0
+ * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Started overcloud-controller-1
+ * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Started overcloud-controller-2
+ * Container bundle: openstack-cinder-volume [192.168.24.1:8787/tripleoupstream/centos-binary-cinder-volume:latest]:
+ * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Started overcloud-controller-0
+ * Container bundle: openstack-cinder-backup [192.168.24.1:8787/tripleoupstream/centos-binary-cinder-backup:latest]:
+ * openstack-cinder-backup-docker-0 (ocf:heartbeat:docker): Started overcloud-controller-1
diff --git a/cts/scheduler/summary/remote-fence-unclean.summary b/cts/scheduler/summary/remote-fence-unclean.summary
new file mode 100644
index 0000000..a467dc3
--- /dev/null
+++ b/cts/scheduler/summary/remote-fence-unclean.summary
@@ -0,0 +1,47 @@
+Current cluster status:
+ * Node List:
+ * RemoteNode remote1: UNCLEAN (offline)
+ * Online: [ 18builder 18node1 18node2 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started 18builder
+ * remote1 (ocf:pacemaker:remote): FAILED 18node1
+ * FAKE1 (ocf:heartbeat:Dummy): Started 18node2
+ * FAKE2 (ocf:heartbeat:Dummy): Started remote1 (UNCLEAN)
+ * FAKE3 (ocf:heartbeat:Dummy): Started 18builder
+ * FAKE4 (ocf:heartbeat:Dummy): Started 18node1
+
+Transition Summary:
+ * Fence (reboot) remote1 'remote connection is unrecoverable'
+ * Recover remote1 ( 18node1 )
+ * Move FAKE2 ( remote1 -> 18builder )
+ * Move FAKE3 ( 18builder -> 18node1 )
+ * Move FAKE4 ( 18node1 -> 18node2 )
+
+Executing Cluster Transition:
+ * Resource action: FAKE3 stop on 18builder
+ * Resource action: FAKE4 stop on 18node1
+ * Fencing remote1 (reboot)
+ * Pseudo action: FAKE2_stop_0
+ * Resource action: FAKE3 start on 18node1
+ * Resource action: FAKE4 start on 18node2
+ * Resource action: remote1 stop on 18node1
+ * Resource action: FAKE2 start on 18builder
+ * Resource action: FAKE3 monitor=60000 on 18node1
+ * Resource action: FAKE4 monitor=60000 on 18node2
+ * Resource action: remote1 start on 18node1
+ * Resource action: remote1 monitor=60000 on 18node1
+ * Resource action: FAKE2 monitor=60000 on 18builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18builder 18node1 18node2 ]
+ * RemoteOnline: [ remote1 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started 18builder
+ * remote1 (ocf:pacemaker:remote): Started 18node1
+ * FAKE1 (ocf:heartbeat:Dummy): Started 18node2
+ * FAKE2 (ocf:heartbeat:Dummy): Started 18builder
+ * FAKE3 (ocf:heartbeat:Dummy): Started 18node1
+ * FAKE4 (ocf:heartbeat:Dummy): Started 18node2
diff --git a/cts/scheduler/summary/remote-fence-unclean2.summary b/cts/scheduler/summary/remote-fence-unclean2.summary
new file mode 100644
index 0000000..a4251c6
--- /dev/null
+++ b/cts/scheduler/summary/remote-fence-unclean2.summary
@@ -0,0 +1,31 @@
+Current cluster status:
+ * Node List:
+ * Node rhel7-alt1: standby
+ * Node rhel7-alt2: standby
+ * RemoteNode rhel7-alt4: UNCLEAN (offline)
+ * OFFLINE: [ rhel7-alt3 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Stopped
+ * rhel7-alt4 (ocf:pacemaker:remote): Stopped
+ * fake (ocf:heartbeat:Dummy): Started rhel7-alt4 (UNCLEAN)
+
+Transition Summary:
+ * Fence (reboot) rhel7-alt4 'fake is active there (fencing will be revoked if remote connection can be re-established elsewhere)'
+ * Stop fake ( rhel7-alt4 ) due to node availability
+
+Executing Cluster Transition:
+ * Fencing rhel7-alt4 (reboot)
+ * Pseudo action: fake_stop_0
+
+Revised Cluster Status:
+ * Node List:
+ * Node rhel7-alt1: standby
+ * Node rhel7-alt2: standby
+ * OFFLINE: [ rhel7-alt3 ]
+ * RemoteOFFLINE: [ rhel7-alt4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Stopped
+ * rhel7-alt4 (ocf:pacemaker:remote): Stopped
+ * fake (ocf:heartbeat:Dummy): Stopped
diff --git a/cts/scheduler/summary/remote-move.summary b/cts/scheduler/summary/remote-move.summary
new file mode 100644
index 0000000..5fc5f09
--- /dev/null
+++ b/cts/scheduler/summary/remote-move.summary
@@ -0,0 +1,39 @@
+Current cluster status:
+ * Node List:
+ * Online: [ 18builder 18node1 18node2 ]
+ * RemoteOnline: [ remote1 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started 18node1
+ * remote1 (ocf:pacemaker:remote): Started 18builder
+ * FAKE1 (ocf:heartbeat:Dummy): Started 18node2
+ * FAKE2 (ocf:heartbeat:Dummy): Started remote1
+ * FAKE3 (ocf:heartbeat:Dummy): Started 18builder
+ * FAKE4 (ocf:heartbeat:Dummy): Started 18node1
+
+Transition Summary:
+ * Move shooter ( 18node1 -> 18builder )
+ * Migrate remote1 ( 18builder -> 18node1 )
+
+Executing Cluster Transition:
+ * Resource action: shooter stop on 18node1
+ * Resource action: remote1 migrate_to on 18builder
+ * Resource action: shooter start on 18builder
+ * Resource action: remote1 migrate_from on 18node1
+ * Resource action: remote1 stop on 18builder
+ * Resource action: shooter monitor=60000 on 18builder
+ * Pseudo action: remote1_start_0
+ * Resource action: remote1 monitor=60000 on 18node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18builder 18node1 18node2 ]
+ * RemoteOnline: [ remote1 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started 18builder
+ * remote1 (ocf:pacemaker:remote): Started 18node1
+ * FAKE1 (ocf:heartbeat:Dummy): Started 18node2
+ * FAKE2 (ocf:heartbeat:Dummy): Started remote1
+ * FAKE3 (ocf:heartbeat:Dummy): Started 18builder
+ * FAKE4 (ocf:heartbeat:Dummy): Started 18node1
diff --git a/cts/scheduler/summary/remote-orphaned.summary b/cts/scheduler/summary/remote-orphaned.summary
new file mode 100644
index 0000000..4b5ed6f
--- /dev/null
+++ b/cts/scheduler/summary/remote-orphaned.summary
@@ -0,0 +1,69 @@
+Current cluster status:
+ * Node List:
+ * Online: [ 18node1 18node3 ]
+ * OFFLINE: [ 18node2 ]
+ * RemoteOnline: [ remote1 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started 18node3
+ * FencingPass (stonith:fence_dummy): Started 18node1
+ * FencingFail (stonith:fence_dummy): Started 18node3
+ * rsc_18node1 (ocf:heartbeat:IPaddr2): Started 18node1
+ * rsc_18node2 (ocf:heartbeat:IPaddr2): Started remote1
+ * rsc_18node3 (ocf:heartbeat:IPaddr2): Started 18node3
+ * migrator (ocf:pacemaker:Dummy): Started 18node1
+ * Clone Set: Connectivity [ping-1]:
+ * Started: [ 18node1 18node3 remote1 ]
+ * Clone Set: master-1 [stateful-1] (promotable):
+ * Promoted: [ 18node1 ]
+ * Unpromoted: [ 18node3 ]
+ * Stopped: [ 18node2 ]
+ * Resource Group: group-1:
+ * r192.168.122.87 (ocf:heartbeat:IPaddr2): Started 18node1
+ * r192.168.122.88 (ocf:heartbeat:IPaddr2): Started 18node1
+ * r192.168.122.89 (ocf:heartbeat:IPaddr2): Started 18node1
+ * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started 18node1
+ * remote1 (ocf:pacemaker:remote): ORPHANED Started 18node1
+
+Transition Summary:
+ * Move rsc_18node2 ( remote1 -> 18node1 )
+ * Stop ping-1:2 ( remote1 ) due to node availability
+ * Stop remote1 ( 18node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: rsc_18node2 stop on remote1
+ * Pseudo action: Connectivity_stop_0
+ * Resource action: rsc_18node2 start on 18node1
+ * Resource action: ping-1 stop on remote1
+ * Pseudo action: Connectivity_stopped_0
+ * Resource action: remote1 stop on 18node1
+ * Resource action: remote1 delete on 18node3
+ * Resource action: remote1 delete on 18node1
+ * Resource action: rsc_18node2 monitor=5000 on 18node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18node1 18node3 ]
+ * OFFLINE: [ 18node2 ]
+ * RemoteOFFLINE: [ remote1 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started 18node3
+ * FencingPass (stonith:fence_dummy): Started 18node1
+ * FencingFail (stonith:fence_dummy): Started 18node3
+ * rsc_18node1 (ocf:heartbeat:IPaddr2): Started 18node1
+ * rsc_18node2 (ocf:heartbeat:IPaddr2): Started 18node1
+ * rsc_18node3 (ocf:heartbeat:IPaddr2): Started 18node3
+ * migrator (ocf:pacemaker:Dummy): Started 18node1
+ * Clone Set: Connectivity [ping-1]:
+ * Started: [ 18node1 18node3 ]
+ * Stopped: [ 18node2 ]
+ * Clone Set: master-1 [stateful-1] (promotable):
+ * Promoted: [ 18node1 ]
+ * Unpromoted: [ 18node3 ]
+ * Stopped: [ 18node2 ]
+ * Resource Group: group-1:
+ * r192.168.122.87 (ocf:heartbeat:IPaddr2): Started 18node1
+ * r192.168.122.88 (ocf:heartbeat:IPaddr2): Started 18node1
+ * r192.168.122.89 (ocf:heartbeat:IPaddr2): Started 18node1
+ * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started 18node1
diff --git a/cts/scheduler/summary/remote-orphaned2.summary b/cts/scheduler/summary/remote-orphaned2.summary
new file mode 100644
index 0000000..9b00914
--- /dev/null
+++ b/cts/scheduler/summary/remote-orphaned2.summary
@@ -0,0 +1,29 @@
+Current cluster status:
+ * Node List:
+ * RemoteNode mrg-02: UNCLEAN (offline)
+ * RemoteNode mrg-03: UNCLEAN (offline)
+ * RemoteNode mrg-04: UNCLEAN (offline)
+ * Online: [ host-026 host-027 host-028 ]
+
+ * Full List of Resources:
+ * neutron-openvswitch-agent-compute (ocf:heartbeat:Dummy): ORPHANED Started [ mrg-03 mrg-02 mrg-04 ]
+ * libvirtd-compute (systemd:libvirtd): ORPHANED Started [ mrg-03 mrg-02 mrg-04 ]
+ * ceilometer-compute (systemd:openstack-ceilometer-compute): ORPHANED Started [ mrg-03 mrg-02 mrg-04 ]
+ * nova-compute (systemd:openstack-nova-compute): ORPHANED Started [ mrg-03 mrg-02 mrg-04 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * RemoteNode mrg-02: UNCLEAN (offline)
+ * RemoteNode mrg-03: UNCLEAN (offline)
+ * RemoteNode mrg-04: UNCLEAN (offline)
+ * Online: [ host-026 host-027 host-028 ]
+
+ * Full List of Resources:
+ * neutron-openvswitch-agent-compute (ocf:heartbeat:Dummy): ORPHANED Started [ mrg-03 mrg-02 mrg-04 ]
+ * libvirtd-compute (systemd:libvirtd): ORPHANED Started [ mrg-03 mrg-02 mrg-04 ]
+ * ceilometer-compute (systemd:openstack-ceilometer-compute): ORPHANED Started [ mrg-03 mrg-02 mrg-04 ]
+ * nova-compute (systemd:openstack-nova-compute): ORPHANED Started [ mrg-03 mrg-02 mrg-04 ]
diff --git a/cts/scheduler/summary/remote-partial-migrate.summary b/cts/scheduler/summary/remote-partial-migrate.summary
new file mode 100644
index 0000000..2cdf227
--- /dev/null
+++ b/cts/scheduler/summary/remote-partial-migrate.summary
@@ -0,0 +1,190 @@
+Current cluster status:
+ * Node List:
+ * Online: [ pcmk1 pcmk2 pcmk3 ]
+ * RemoteOnline: [ pcmk_remote1 pcmk_remote2 pcmk_remote3 pcmk_remote4 ]
+ * RemoteOFFLINE: [ pcmk_remote5 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_docker_cts): Started pcmk2
+ * pcmk_remote1 (ocf:pacemaker:remote): Started pcmk1
+ * pcmk_remote2 (ocf:pacemaker:remote): Started pcmk3
+ * pcmk_remote3 (ocf:pacemaker:remote): Started [ pcmk2 pcmk1 ]
+ * pcmk_remote4 (ocf:pacemaker:remote): Started pcmk3
+ * pcmk_remote5 (ocf:pacemaker:remote): Stopped
+ * FAKE1 (ocf:heartbeat:Dummy): Started pcmk_remote2
+ * FAKE2 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE3 (ocf:heartbeat:Dummy): Started pcmk_remote4
+ * FAKE4 (ocf:heartbeat:Dummy): Stopped
+ * FAKE5 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE6 (ocf:heartbeat:Dummy): Started pcmk_remote2
+ * FAKE7 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE8 (ocf:heartbeat:Dummy): Started pcmk_remote4
+ * FAKE9 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE10 (ocf:heartbeat:Dummy): Stopped
+ * FAKE11 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE12 (ocf:heartbeat:Dummy): Started pcmk_remote2
+ * FAKE13 (ocf:heartbeat:Dummy): Stopped
+ * FAKE14 (ocf:heartbeat:Dummy): Started pcmk_remote4
+ * FAKE15 (ocf:heartbeat:Dummy): Stopped
+ * FAKE16 (ocf:heartbeat:Dummy): Started pcmk1
+ * FAKE17 (ocf:heartbeat:Dummy): Started pcmk3
+ * FAKE18 (ocf:heartbeat:Dummy): Started pcmk2
+ * FAKE19 (ocf:heartbeat:Dummy): Started pcmk_remote2
+ * FAKE20 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE21 (ocf:heartbeat:Dummy): Started pcmk_remote4
+ * FAKE22 (ocf:heartbeat:Dummy): Stopped
+ * FAKE23 (ocf:heartbeat:Dummy): Started pcmk1
+ * FAKE24 (ocf:heartbeat:Dummy): Started pcmk3
+ * FAKE25 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE26 (ocf:heartbeat:Dummy): Stopped
+ * FAKE27 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE28 (ocf:heartbeat:Dummy): Started pcmk_remote4
+ * FAKE29 (ocf:heartbeat:Dummy): Stopped
+ * FAKE30 (ocf:heartbeat:Dummy): Started pcmk1
+ * FAKE31 (ocf:heartbeat:Dummy): Started pcmk3
+ * FAKE32 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE33 (ocf:heartbeat:Dummy): Started pcmk_remote2
+ * FAKE34 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE35 (ocf:heartbeat:Dummy): Started pcmk_remote4
+ * FAKE36 (ocf:heartbeat:Dummy): Stopped
+ * FAKE37 (ocf:heartbeat:Dummy): Started pcmk1
+ * FAKE38 (ocf:heartbeat:Dummy): Started pcmk3
+ * FAKE39 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE40 (ocf:heartbeat:Dummy): Started pcmk_remote2
+ * FAKE41 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE42 (ocf:heartbeat:Dummy): Started pcmk_remote4
+ * FAKE43 (ocf:heartbeat:Dummy): Stopped
+ * FAKE44 (ocf:heartbeat:Dummy): Started pcmk1
+ * FAKE45 (ocf:heartbeat:Dummy): Started pcmk3
+ * FAKE46 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE47 (ocf:heartbeat:Dummy): Started pcmk_remote2
+ * FAKE48 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE49 (ocf:heartbeat:Dummy): Started pcmk_remote4
+ * FAKE50 (ocf:heartbeat:Dummy): Stopped
+
+Transition Summary:
+ * Migrate pcmk_remote3 ( pcmk1 -> pcmk2 )
+ * Start FAKE4 ( pcmk_remote3 )
+ * Move FAKE9 ( pcmk_remote3 -> pcmk1 )
+ * Start FAKE10 ( pcmk1 )
+ * Start FAKE13 ( pcmk2 )
+ * Start FAKE15 ( pcmk3 )
+ * Move FAKE16 ( pcmk1 -> pcmk_remote3 )
+ * Start FAKE22 ( pcmk1 )
+ * Move FAKE23 ( pcmk1 -> pcmk_remote1 )
+ * Start FAKE26 ( pcmk1 )
+ * Start FAKE29 ( pcmk2 )
+ * Move FAKE30 ( pcmk1 -> pcmk_remote2 )
+ * Start FAKE36 ( pcmk1 )
+ * Move FAKE37 ( pcmk1 -> pcmk2 )
+ * Start FAKE43 ( pcmk1 )
+ * Move FAKE44 ( pcmk1 -> pcmk2 )
+ * Start FAKE50 ( pcmk1 )
+
+Executing Cluster Transition:
+ * Resource action: pcmk_remote3 migrate_from on pcmk2
+ * Resource action: pcmk_remote3 stop on pcmk1
+ * Resource action: FAKE10 start on pcmk1
+ * Resource action: FAKE13 start on pcmk2
+ * Resource action: FAKE15 start on pcmk3
+ * Resource action: FAKE22 start on pcmk1
+ * Resource action: FAKE23 stop on pcmk1
+ * Resource action: FAKE26 start on pcmk1
+ * Resource action: FAKE29 start on pcmk2
+ * Resource action: FAKE30 stop on pcmk1
+ * Resource action: FAKE36 start on pcmk1
+ * Resource action: FAKE37 stop on pcmk1
+ * Resource action: FAKE43 start on pcmk1
+ * Resource action: FAKE44 stop on pcmk1
+ * Resource action: FAKE50 start on pcmk1
+ * Pseudo action: pcmk_remote3_start_0
+ * Resource action: FAKE4 start on pcmk_remote3
+ * Resource action: FAKE9 stop on pcmk_remote3
+ * Resource action: FAKE10 monitor=10000 on pcmk1
+ * Resource action: FAKE13 monitor=10000 on pcmk2
+ * Resource action: FAKE15 monitor=10000 on pcmk3
+ * Resource action: FAKE16 stop on pcmk1
+ * Resource action: FAKE22 monitor=10000 on pcmk1
+ * Resource action: FAKE23 start on pcmk_remote1
+ * Resource action: FAKE26 monitor=10000 on pcmk1
+ * Resource action: FAKE29 monitor=10000 on pcmk2
+ * Resource action: FAKE30 start on pcmk_remote2
+ * Resource action: FAKE36 monitor=10000 on pcmk1
+ * Resource action: FAKE37 start on pcmk2
+ * Resource action: FAKE43 monitor=10000 on pcmk1
+ * Resource action: FAKE44 start on pcmk2
+ * Resource action: FAKE50 monitor=10000 on pcmk1
+ * Resource action: pcmk_remote3 monitor=60000 on pcmk2
+ * Resource action: FAKE4 monitor=10000 on pcmk_remote3
+ * Resource action: FAKE9 start on pcmk1
+ * Resource action: FAKE16 start on pcmk_remote3
+ * Resource action: FAKE23 monitor=10000 on pcmk_remote1
+ * Resource action: FAKE30 monitor=10000 on pcmk_remote2
+ * Resource action: FAKE37 monitor=10000 on pcmk2
+ * Resource action: FAKE44 monitor=10000 on pcmk2
+ * Resource action: FAKE9 monitor=10000 on pcmk1
+ * Resource action: FAKE16 monitor=10000 on pcmk_remote3
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ pcmk1 pcmk2 pcmk3 ]
+ * RemoteOnline: [ pcmk_remote1 pcmk_remote2 pcmk_remote3 pcmk_remote4 ]
+ * RemoteOFFLINE: [ pcmk_remote5 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_docker_cts): Started pcmk2
+ * pcmk_remote1 (ocf:pacemaker:remote): Started pcmk1
+ * pcmk_remote2 (ocf:pacemaker:remote): Started pcmk3
+ * pcmk_remote3 (ocf:pacemaker:remote): Started pcmk2
+ * pcmk_remote4 (ocf:pacemaker:remote): Started pcmk3
+ * pcmk_remote5 (ocf:pacemaker:remote): Stopped
+ * FAKE1 (ocf:heartbeat:Dummy): Started pcmk_remote2
+ * FAKE2 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE3 (ocf:heartbeat:Dummy): Started pcmk_remote4
+ * FAKE4 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE5 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE6 (ocf:heartbeat:Dummy): Started pcmk_remote2
+ * FAKE7 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE8 (ocf:heartbeat:Dummy): Started pcmk_remote4
+ * FAKE9 (ocf:heartbeat:Dummy): Started pcmk1
+ * FAKE10 (ocf:heartbeat:Dummy): Started pcmk1
+ * FAKE11 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE12 (ocf:heartbeat:Dummy): Started pcmk_remote2
+ * FAKE13 (ocf:heartbeat:Dummy): Started pcmk2
+ * FAKE14 (ocf:heartbeat:Dummy): Started pcmk_remote4
+ * FAKE15 (ocf:heartbeat:Dummy): Started pcmk3
+ * FAKE16 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE17 (ocf:heartbeat:Dummy): Started pcmk3
+ * FAKE18 (ocf:heartbeat:Dummy): Started pcmk2
+ * FAKE19 (ocf:heartbeat:Dummy): Started pcmk_remote2
+ * FAKE20 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE21 (ocf:heartbeat:Dummy): Started pcmk_remote4
+ * FAKE22 (ocf:heartbeat:Dummy): Started pcmk1
+ * FAKE23 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE24 (ocf:heartbeat:Dummy): Started pcmk3
+ * FAKE25 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE26 (ocf:heartbeat:Dummy): Started pcmk1
+ * FAKE27 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE28 (ocf:heartbeat:Dummy): Started pcmk_remote4
+ * FAKE29 (ocf:heartbeat:Dummy): Started pcmk2
+ * FAKE30 (ocf:heartbeat:Dummy): Started pcmk_remote2
+ * FAKE31 (ocf:heartbeat:Dummy): Started pcmk3
+ * FAKE32 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE33 (ocf:heartbeat:Dummy): Started pcmk_remote2
+ * FAKE34 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE35 (ocf:heartbeat:Dummy): Started pcmk_remote4
+ * FAKE36 (ocf:heartbeat:Dummy): Started pcmk1
+ * FAKE37 (ocf:heartbeat:Dummy): Started pcmk2
+ * FAKE38 (ocf:heartbeat:Dummy): Started pcmk3
+ * FAKE39 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE40 (ocf:heartbeat:Dummy): Started pcmk_remote2
+ * FAKE41 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE42 (ocf:heartbeat:Dummy): Started pcmk_remote4
+ * FAKE43 (ocf:heartbeat:Dummy): Started pcmk1
+ * FAKE44 (ocf:heartbeat:Dummy): Started pcmk2
+ * FAKE45 (ocf:heartbeat:Dummy): Started pcmk3
+ * FAKE46 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE47 (ocf:heartbeat:Dummy): Started pcmk_remote2
+ * FAKE48 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE49 (ocf:heartbeat:Dummy): Started pcmk_remote4
+ * FAKE50 (ocf:heartbeat:Dummy): Started pcmk1
diff --git a/cts/scheduler/summary/remote-partial-migrate2.summary b/cts/scheduler/summary/remote-partial-migrate2.summary
new file mode 100644
index 0000000..f7157c5
--- /dev/null
+++ b/cts/scheduler/summary/remote-partial-migrate2.summary
@@ -0,0 +1,208 @@
+Current cluster status:
+ * Node List:
+ * Node pcmk4: UNCLEAN (offline)
+ * Online: [ pcmk1 pcmk2 pcmk3 ]
+ * RemoteOnline: [ pcmk_remote1 pcmk_remote2 pcmk_remote3 pcmk_remote5 ]
+ * RemoteOFFLINE: [ pcmk_remote4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_docker_cts): Started pcmk3
+ * pcmk_remote1 (ocf:pacemaker:remote): Started pcmk1
+ * pcmk_remote2 (ocf:pacemaker:remote): Started [ pcmk1 pcmk3 ]
+ * pcmk_remote3 (ocf:pacemaker:remote): Started pcmk3
+ * pcmk_remote4 (ocf:pacemaker:remote): Stopped
+ * pcmk_remote5 (ocf:pacemaker:remote): Started pcmk1
+ * FAKE1 (ocf:heartbeat:Dummy): Started pcmk_remote2
+ * FAKE2 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE3 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE4 (ocf:heartbeat:Dummy): Started pcmk_remote5
+ * FAKE5 (ocf:heartbeat:Dummy): Started pcmk1
+ * FAKE6 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE7 (ocf:heartbeat:Dummy): Started pcmk_remote2
+ * FAKE8 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE9 (ocf:heartbeat:Dummy): Started pcmk2
+ * FAKE10 (ocf:heartbeat:Dummy): Started pcmk_remote5
+ * FAKE11 (ocf:heartbeat:Dummy): Started pcmk1
+ * FAKE12 (ocf:heartbeat:Dummy): Started pcmk1
+ * FAKE13 (ocf:heartbeat:Dummy): Started pcmk3
+ * FAKE14 (ocf:heartbeat:Dummy): Started pcmk2
+ * FAKE15 (ocf:heartbeat:Dummy): Started pcmk_remote2
+ * FAKE16 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE17 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE18 (ocf:heartbeat:Dummy): Started pcmk_remote5
+ * FAKE19 (ocf:heartbeat:Dummy): Started pcmk3
+ * FAKE20 (ocf:heartbeat:Dummy): Started pcmk2
+ * FAKE21 (ocf:heartbeat:Dummy): Started pcmk1
+ * FAKE22 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE23 (ocf:heartbeat:Dummy): Started pcmk_remote2
+ * FAKE24 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE25 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE26 (ocf:heartbeat:Dummy): Started pcmk_remote5
+ * FAKE27 (ocf:heartbeat:Dummy): Started pcmk3
+ * FAKE28 (ocf:heartbeat:Dummy): Started pcmk3
+ * FAKE29 (ocf:heartbeat:Dummy): Started pcmk2
+ * FAKE30 (ocf:heartbeat:Dummy): Started pcmk1
+ * FAKE31 (ocf:heartbeat:Dummy): Started pcmk_remote2
+ * FAKE32 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE33 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE34 (ocf:heartbeat:Dummy): Started pcmk_remote5
+ * FAKE35 (ocf:heartbeat:Dummy): Started pcmk1
+ * FAKE36 (ocf:heartbeat:Dummy): Started pcmk3
+ * FAKE37 (ocf:heartbeat:Dummy): Started pcmk2
+ * FAKE38 (ocf:heartbeat:Dummy): Started pcmk2
+ * FAKE39 (ocf:heartbeat:Dummy): Started pcmk1
+ * FAKE40 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE41 (ocf:heartbeat:Dummy): Started pcmk_remote2
+ * FAKE42 (ocf:heartbeat:Dummy): Started pcmk_remote5
+ * FAKE43 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE44 (ocf:heartbeat:Dummy): Started pcmk2
+ * FAKE45 (ocf:heartbeat:Dummy): Started pcmk3
+ * FAKE46 (ocf:heartbeat:Dummy): Started pcmk1
+ * FAKE47 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE48 (ocf:heartbeat:Dummy): Started pcmk1
+ * FAKE49 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE50 (ocf:heartbeat:Dummy): Started pcmk_remote5
+
+Transition Summary:
+ * Fence (reboot) pcmk4 'peer is no longer part of the cluster'
+ * Migrate pcmk_remote2 ( pcmk3 -> pcmk1 )
+ * Start pcmk_remote4 ( pcmk2 )
+ * Migrate pcmk_remote5 ( pcmk1 -> pcmk2 )
+ * Move FAKE5 ( pcmk1 -> pcmk_remote4 )
+ * Move FAKE9 ( pcmk2 -> pcmk_remote4 )
+ * Move FAKE12 ( pcmk1 -> pcmk2 )
+ * Move FAKE14 ( pcmk2 -> pcmk_remote1 )
+ * Move FAKE17 ( pcmk_remote1 -> pcmk_remote4 )
+ * Move FAKE25 ( pcmk_remote1 -> pcmk_remote4 )
+ * Move FAKE28 ( pcmk3 -> pcmk1 )
+ * Move FAKE30 ( pcmk1 -> pcmk_remote1 )
+ * Move FAKE33 ( pcmk_remote1 -> pcmk_remote4 )
+ * Move FAKE38 ( pcmk2 -> pcmk_remote1 )
+ * Move FAKE39 ( pcmk1 -> pcmk_remote2 )
+ * Move FAKE41 ( pcmk_remote2 -> pcmk_remote4 )
+ * Move FAKE47 ( pcmk_remote1 -> pcmk_remote2 )
+ * Move FAKE48 ( pcmk1 -> pcmk_remote3 )
+ * Move FAKE49 ( pcmk_remote3 -> pcmk_remote4 )
+
+Executing Cluster Transition:
+ * Resource action: pcmk_remote2 migrate_from on pcmk1
+ * Resource action: pcmk_remote2 stop on pcmk3
+ * Resource action: pcmk_remote4 start on pcmk2
+ * Resource action: pcmk_remote5 migrate_to on pcmk1
+ * Resource action: FAKE5 stop on pcmk1
+ * Resource action: FAKE9 stop on pcmk2
+ * Resource action: FAKE12 stop on pcmk1
+ * Resource action: FAKE14 stop on pcmk2
+ * Resource action: FAKE17 stop on pcmk_remote1
+ * Resource action: FAKE25 stop on pcmk_remote1
+ * Resource action: FAKE28 stop on pcmk3
+ * Resource action: FAKE30 stop on pcmk1
+ * Resource action: FAKE33 stop on pcmk_remote1
+ * Resource action: FAKE38 stop on pcmk2
+ * Resource action: FAKE48 stop on pcmk1
+ * Resource action: FAKE49 stop on pcmk_remote3
+ * Fencing pcmk4 (reboot)
+ * Pseudo action: pcmk_remote2_start_0
+ * Resource action: pcmk_remote4 monitor=60000 on pcmk2
+ * Resource action: pcmk_remote5 migrate_from on pcmk2
+ * Resource action: pcmk_remote5 stop on pcmk1
+ * Resource action: FAKE5 start on pcmk_remote4
+ * Resource action: FAKE9 start on pcmk_remote4
+ * Resource action: FAKE12 start on pcmk2
+ * Resource action: FAKE14 start on pcmk_remote1
+ * Resource action: FAKE17 start on pcmk_remote4
+ * Resource action: FAKE25 start on pcmk_remote4
+ * Resource action: FAKE28 start on pcmk1
+ * Resource action: FAKE30 start on pcmk_remote1
+ * Resource action: FAKE33 start on pcmk_remote4
+ * Resource action: FAKE38 start on pcmk_remote1
+ * Resource action: FAKE39 stop on pcmk1
+ * Resource action: FAKE41 stop on pcmk_remote2
+ * Resource action: FAKE47 stop on pcmk_remote1
+ * Resource action: FAKE48 start on pcmk_remote3
+ * Resource action: FAKE49 start on pcmk_remote4
+ * Resource action: pcmk_remote2 monitor=60000 on pcmk1
+ * Pseudo action: pcmk_remote5_start_0
+ * Resource action: FAKE5 monitor=10000 on pcmk_remote4
+ * Resource action: FAKE9 monitor=10000 on pcmk_remote4
+ * Resource action: FAKE12 monitor=10000 on pcmk2
+ * Resource action: FAKE14 monitor=10000 on pcmk_remote1
+ * Resource action: FAKE17 monitor=10000 on pcmk_remote4
+ * Resource action: FAKE25 monitor=10000 on pcmk_remote4
+ * Resource action: FAKE28 monitor=10000 on pcmk1
+ * Resource action: FAKE30 monitor=10000 on pcmk_remote1
+ * Resource action: FAKE33 monitor=10000 on pcmk_remote4
+ * Resource action: FAKE38 monitor=10000 on pcmk_remote1
+ * Resource action: FAKE39 start on pcmk_remote2
+ * Resource action: FAKE41 start on pcmk_remote4
+ * Resource action: FAKE47 start on pcmk_remote2
+ * Resource action: FAKE48 monitor=10000 on pcmk_remote3
+ * Resource action: FAKE49 monitor=10000 on pcmk_remote4
+ * Resource action: pcmk_remote5 monitor=60000 on pcmk2
+ * Resource action: FAKE39 monitor=10000 on pcmk_remote2
+ * Resource action: FAKE41 monitor=10000 on pcmk_remote4
+ * Resource action: FAKE47 monitor=10000 on pcmk_remote2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ pcmk1 pcmk2 pcmk3 ]
+ * OFFLINE: [ pcmk4 ]
+ * RemoteOnline: [ pcmk_remote1 pcmk_remote2 pcmk_remote3 pcmk_remote4 pcmk_remote5 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_docker_cts): Started pcmk3
+ * pcmk_remote1 (ocf:pacemaker:remote): Started pcmk1
+ * pcmk_remote2 (ocf:pacemaker:remote): Started pcmk1
+ * pcmk_remote3 (ocf:pacemaker:remote): Started pcmk3
+ * pcmk_remote4 (ocf:pacemaker:remote): Started pcmk2
+ * pcmk_remote5 (ocf:pacemaker:remote): Started pcmk2
+ * FAKE1 (ocf:heartbeat:Dummy): Started pcmk_remote2
+ * FAKE2 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE3 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE4 (ocf:heartbeat:Dummy): Started pcmk_remote5
+ * FAKE5 (ocf:heartbeat:Dummy): Started pcmk_remote4
+ * FAKE6 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE7 (ocf:heartbeat:Dummy): Started pcmk_remote2
+ * FAKE8 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE9 (ocf:heartbeat:Dummy): Started pcmk_remote4
+ * FAKE10 (ocf:heartbeat:Dummy): Started pcmk_remote5
+ * FAKE11 (ocf:heartbeat:Dummy): Started pcmk1
+ * FAKE12 (ocf:heartbeat:Dummy): Started pcmk2
+ * FAKE13 (ocf:heartbeat:Dummy): Started pcmk3
+ * FAKE14 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE15 (ocf:heartbeat:Dummy): Started pcmk_remote2
+ * FAKE16 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE17 (ocf:heartbeat:Dummy): Started pcmk_remote4
+ * FAKE18 (ocf:heartbeat:Dummy): Started pcmk_remote5
+ * FAKE19 (ocf:heartbeat:Dummy): Started pcmk3
+ * FAKE20 (ocf:heartbeat:Dummy): Started pcmk2
+ * FAKE21 (ocf:heartbeat:Dummy): Started pcmk1
+ * FAKE22 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE23 (ocf:heartbeat:Dummy): Started pcmk_remote2
+ * FAKE24 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE25 (ocf:heartbeat:Dummy): Started pcmk_remote4
+ * FAKE26 (ocf:heartbeat:Dummy): Started pcmk_remote5
+ * FAKE27 (ocf:heartbeat:Dummy): Started pcmk3
+ * FAKE28 (ocf:heartbeat:Dummy): Started pcmk1
+ * FAKE29 (ocf:heartbeat:Dummy): Started pcmk2
+ * FAKE30 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE31 (ocf:heartbeat:Dummy): Started pcmk_remote2
+ * FAKE32 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE33 (ocf:heartbeat:Dummy): Started pcmk_remote4
+ * FAKE34 (ocf:heartbeat:Dummy): Started pcmk_remote5
+ * FAKE35 (ocf:heartbeat:Dummy): Started pcmk1
+ * FAKE36 (ocf:heartbeat:Dummy): Started pcmk3
+ * FAKE37 (ocf:heartbeat:Dummy): Started pcmk2
+ * FAKE38 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE39 (ocf:heartbeat:Dummy): Started pcmk_remote2
+ * FAKE40 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE41 (ocf:heartbeat:Dummy): Started pcmk_remote4
+ * FAKE42 (ocf:heartbeat:Dummy): Started pcmk_remote5
+ * FAKE43 (ocf:heartbeat:Dummy): Started pcmk_remote1
+ * FAKE44 (ocf:heartbeat:Dummy): Started pcmk2
+ * FAKE45 (ocf:heartbeat:Dummy): Started pcmk3
+ * FAKE46 (ocf:heartbeat:Dummy): Started pcmk1
+ * FAKE47 (ocf:heartbeat:Dummy): Started pcmk_remote2
+ * FAKE48 (ocf:heartbeat:Dummy): Started pcmk_remote3
+ * FAKE49 (ocf:heartbeat:Dummy): Started pcmk_remote4
+ * FAKE50 (ocf:heartbeat:Dummy): Started pcmk_remote5
diff --git a/cts/scheduler/summary/remote-probe-disable.summary b/cts/scheduler/summary/remote-probe-disable.summary
new file mode 100644
index 0000000..34c0d84
--- /dev/null
+++ b/cts/scheduler/summary/remote-probe-disable.summary
@@ -0,0 +1,37 @@
+1 of 6 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ 18builder 18node1 18node2 ]
+ * RemoteOnline: [ remote1 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started 18node1
+ * remote1 (ocf:pacemaker:remote): Started 18builder (disabled)
+ * FAKE1 (ocf:heartbeat:Dummy): Started 18node2
+ * FAKE2 (ocf:heartbeat:Dummy): Stopped
+ * FAKE3 (ocf:heartbeat:Dummy): Started 18builder
+ * FAKE4 (ocf:heartbeat:Dummy): Started 18node1
+
+Transition Summary:
+ * Stop remote1 ( 18builder ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: FAKE1 monitor on remote1
+ * Resource action: FAKE2 monitor on remote1
+ * Resource action: FAKE3 monitor on remote1
+ * Resource action: FAKE4 monitor on remote1
+ * Resource action: remote1 stop on 18builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18builder 18node1 18node2 ]
+ * RemoteOFFLINE: [ remote1 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started 18node1
+ * remote1 (ocf:pacemaker:remote): Stopped (disabled)
+ * FAKE1 (ocf:heartbeat:Dummy): Started 18node2
+ * FAKE2 (ocf:heartbeat:Dummy): Stopped
+ * FAKE3 (ocf:heartbeat:Dummy): Started 18builder
+ * FAKE4 (ocf:heartbeat:Dummy): Started 18node1
diff --git a/cts/scheduler/summary/remote-reconnect-delay.summary b/cts/scheduler/summary/remote-reconnect-delay.summary
new file mode 100644
index 0000000..f195919
--- /dev/null
+++ b/cts/scheduler/summary/remote-reconnect-delay.summary
@@ -0,0 +1,67 @@
+Using the original execution date of: 2017-08-21 17:12:54Z
+Current cluster status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-2 rhel7-4 rhel7-5 ]
+ * RemoteOFFLINE: [ remote-rhel7-3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-2
+ * FencingFail (stonith:fence_dummy): Started rhel7-4
+ * rsc_rhel7-1 (ocf:heartbeat:IPaddr2): Started rhel7-1
+ * rsc_rhel7-2 (ocf:heartbeat:IPaddr2): Started rhel7-2
+ * rsc_rhel7-3 (ocf:heartbeat:IPaddr2): Started rhel7-5
+ * rsc_rhel7-4 (ocf:heartbeat:IPaddr2): Started rhel7-4
+ * rsc_rhel7-5 (ocf:heartbeat:IPaddr2): Started rhel7-5
+ * migrator (ocf:pacemaker:Dummy): Started rhel7-5
+ * Clone Set: Connectivity [ping-1]:
+ * Started: [ rhel7-1 rhel7-2 rhel7-4 rhel7-5 ]
+ * Stopped: [ remote-rhel7-3 ]
+ * Clone Set: master-1 [stateful-1] (promotable):
+ * Promoted: [ rhel7-2 ]
+ * Unpromoted: [ rhel7-1 rhel7-4 rhel7-5 ]
+ * Stopped: [ remote-rhel7-3 ]
+ * Resource Group: group-1:
+ * r192.168.122.207 (ocf:heartbeat:IPaddr2): Started rhel7-2
+ * petulant (service:DummySD): Started rhel7-2
+ * r192.168.122.208 (ocf:heartbeat:IPaddr2): Started rhel7-2
+ * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started rhel7-2
+ * remote-rhel7-3 (ocf:pacemaker:remote): FAILED
+ * remote-rsc (ocf:heartbeat:Dummy): Started rhel7-1
+
+Transition Summary:
+ * Restart Fencing ( rhel7-2 ) due to resource definition change
+
+Executing Cluster Transition:
+ * Resource action: Fencing stop on rhel7-2
+ * Resource action: Fencing start on rhel7-2
+ * Resource action: Fencing monitor=120000 on rhel7-2
+Using the original execution date of: 2017-08-21 17:12:54Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-2 rhel7-4 rhel7-5 ]
+ * RemoteOFFLINE: [ remote-rhel7-3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-2
+ * FencingFail (stonith:fence_dummy): Started rhel7-4
+ * rsc_rhel7-1 (ocf:heartbeat:IPaddr2): Started rhel7-1
+ * rsc_rhel7-2 (ocf:heartbeat:IPaddr2): Started rhel7-2
+ * rsc_rhel7-3 (ocf:heartbeat:IPaddr2): Started rhel7-5
+ * rsc_rhel7-4 (ocf:heartbeat:IPaddr2): Started rhel7-4
+ * rsc_rhel7-5 (ocf:heartbeat:IPaddr2): Started rhel7-5
+ * migrator (ocf:pacemaker:Dummy): Started rhel7-5
+ * Clone Set: Connectivity [ping-1]:
+ * Started: [ rhel7-1 rhel7-2 rhel7-4 rhel7-5 ]
+ * Stopped: [ remote-rhel7-3 ]
+ * Clone Set: master-1 [stateful-1] (promotable):
+ * Promoted: [ rhel7-2 ]
+ * Unpromoted: [ rhel7-1 rhel7-4 rhel7-5 ]
+ * Stopped: [ remote-rhel7-3 ]
+ * Resource Group: group-1:
+ * r192.168.122.207 (ocf:heartbeat:IPaddr2): Started rhel7-2
+ * petulant (service:DummySD): Started rhel7-2
+ * r192.168.122.208 (ocf:heartbeat:IPaddr2): Started rhel7-2
+ * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started rhel7-2
+ * remote-rhel7-3 (ocf:pacemaker:remote): FAILED
+ * remote-rsc (ocf:heartbeat:Dummy): Started rhel7-1
diff --git a/cts/scheduler/summary/remote-recover-all.summary b/cts/scheduler/summary/remote-recover-all.summary
new file mode 100644
index 0000000..257301a
--- /dev/null
+++ b/cts/scheduler/summary/remote-recover-all.summary
@@ -0,0 +1,152 @@
+Using the original execution date of: 2017-05-03 13:33:24Z
+Current cluster status:
+ * Node List:
+ * Node controller-1: UNCLEAN (offline)
+ * Online: [ controller-0 controller-2 ]
+ * RemoteOnline: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
+
+ * Full List of Resources:
+ * messaging-0 (ocf:pacemaker:remote): Started controller-0
+ * messaging-1 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN)
+ * messaging-2 (ocf:pacemaker:remote): Started controller-0
+ * galera-0 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN)
+ * galera-1 (ocf:pacemaker:remote): Started controller-0
+ * galera-2 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN)
+ * Clone Set: rabbitmq-clone [rabbitmq]:
+ * Started: [ messaging-0 messaging-1 messaging-2 ]
+ * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 ]
+ * Clone Set: galera-master [galera] (promotable):
+ * Promoted: [ galera-0 galera-1 galera-2 ]
+ * Stopped: [ controller-0 controller-1 controller-2 messaging-0 messaging-1 messaging-2 ]
+ * Clone Set: redis-master [redis] (promotable):
+ * redis (ocf:heartbeat:redis): Unpromoted controller-1 (UNCLEAN)
+ * Promoted: [ controller-0 ]
+ * Unpromoted: [ controller-2 ]
+ * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
+ * ip-192.168.24.6 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-10.0.0.102 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.1.14 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN)
+ * ip-172.17.1.17 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN)
+ * ip-172.17.3.15 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.4.11 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN)
+ * Clone Set: haproxy-clone [haproxy]:
+ * haproxy (systemd:haproxy): Started controller-1 (UNCLEAN)
+ * Started: [ controller-0 controller-2 ]
+ * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
+ * openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0
+ * stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-1 (UNCLEAN)
+
+Transition Summary:
+ * Fence (reboot) messaging-1 'resources are active but connection is unrecoverable'
+ * Fence (reboot) galera-2 'resources are active but connection is unrecoverable'
+ * Fence (reboot) controller-1 'peer is no longer part of the cluster'
+ * Stop messaging-1 ( controller-1 ) due to node availability
+ * Move galera-0 ( controller-1 -> controller-2 )
+ * Stop galera-2 ( controller-1 ) due to node availability
+ * Stop rabbitmq:2 ( messaging-1 ) due to node availability
+ * Stop galera:1 ( Promoted galera-2 ) due to node availability
+ * Stop redis:0 ( Unpromoted controller-1 ) due to node availability
+ * Move ip-172.17.1.14 ( controller-1 -> controller-2 )
+ * Move ip-172.17.1.17 ( controller-1 -> controller-2 )
+ * Move ip-172.17.4.11 ( controller-1 -> controller-2 )
+ * Stop haproxy:0 ( controller-1 ) due to node availability
+ * Move stonith-fence_ipmilan-5254005bdbb5 ( controller-1 -> controller-2 )
+
+Executing Cluster Transition:
+ * Pseudo action: messaging-1_stop_0
+ * Pseudo action: galera-0_stop_0
+ * Pseudo action: galera-2_stop_0
+ * Pseudo action: rabbitmq-clone_pre_notify_stop_0
+ * Pseudo action: galera-master_demote_0
+ * Pseudo action: redis-master_pre_notify_stop_0
+ * Pseudo action: stonith-fence_ipmilan-5254005bdbb5_stop_0
+ * Fencing controller-1 (reboot)
+ * Resource action: rabbitmq notify on messaging-2
+ * Resource action: rabbitmq notify on messaging-0
+ * Pseudo action: rabbitmq-clone_confirmed-pre_notify_stop_0
+ * Pseudo action: redis_post_notify_stop_0
+ * Resource action: redis notify on controller-0
+ * Resource action: redis notify on controller-2
+ * Pseudo action: redis-master_confirmed-pre_notify_stop_0
+ * Pseudo action: redis-master_stop_0
+ * Pseudo action: haproxy-clone_stop_0
+ * Fencing galera-2 (reboot)
+ * Pseudo action: galera_demote_0
+ * Pseudo action: galera-master_demoted_0
+ * Pseudo action: galera-master_stop_0
+ * Pseudo action: redis_stop_0
+ * Pseudo action: redis-master_stopped_0
+ * Pseudo action: haproxy_stop_0
+ * Pseudo action: haproxy-clone_stopped_0
+ * Fencing messaging-1 (reboot)
+ * Resource action: galera-0 start on controller-2
+ * Pseudo action: rabbitmq_post_notify_stop_0
+ * Pseudo action: rabbitmq-clone_stop_0
+ * Pseudo action: galera_stop_0
+ * Resource action: galera monitor=10000 on galera-0
+ * Pseudo action: galera-master_stopped_0
+ * Pseudo action: redis-master_post_notify_stopped_0
+ * Pseudo action: ip-172.17.1.14_stop_0
+ * Pseudo action: ip-172.17.1.17_stop_0
+ * Pseudo action: ip-172.17.4.11_stop_0
+ * Resource action: stonith-fence_ipmilan-5254005bdbb5 start on controller-2
+ * Resource action: galera-0 monitor=20000 on controller-2
+ * Pseudo action: rabbitmq_stop_0
+ * Pseudo action: rabbitmq-clone_stopped_0
+ * Resource action: redis notify on controller-0
+ * Resource action: redis notify on controller-2
+ * Pseudo action: redis-master_confirmed-post_notify_stopped_0
+ * Resource action: ip-172.17.1.14 start on controller-2
+ * Resource action: ip-172.17.1.17 start on controller-2
+ * Resource action: ip-172.17.4.11 start on controller-2
+ * Resource action: stonith-fence_ipmilan-5254005bdbb5 monitor=60000 on controller-2
+ * Pseudo action: rabbitmq-clone_post_notify_stopped_0
+ * Pseudo action: redis_notified_0
+ * Resource action: ip-172.17.1.14 monitor=10000 on controller-2
+ * Resource action: ip-172.17.1.17 monitor=10000 on controller-2
+ * Resource action: ip-172.17.4.11 monitor=10000 on controller-2
+ * Resource action: rabbitmq notify on messaging-2
+ * Resource action: rabbitmq notify on messaging-0
+ * Pseudo action: rabbitmq_notified_0
+ * Pseudo action: rabbitmq-clone_confirmed-post_notify_stopped_0
+Using the original execution date of: 2017-05-03 13:33:24Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ controller-0 controller-2 ]
+ * OFFLINE: [ controller-1 ]
+ * RemoteOnline: [ galera-0 galera-1 messaging-0 messaging-2 ]
+ * RemoteOFFLINE: [ galera-2 messaging-1 ]
+
+ * Full List of Resources:
+ * messaging-0 (ocf:pacemaker:remote): Started controller-0
+ * messaging-1 (ocf:pacemaker:remote): Stopped
+ * messaging-2 (ocf:pacemaker:remote): Started controller-0
+ * galera-0 (ocf:pacemaker:remote): Started controller-2
+ * galera-1 (ocf:pacemaker:remote): Started controller-0
+ * galera-2 (ocf:pacemaker:remote): Stopped
+ * Clone Set: rabbitmq-clone [rabbitmq]:
+ * Started: [ messaging-0 messaging-2 ]
+ * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 messaging-1 ]
+ * Clone Set: galera-master [galera] (promotable):
+ * Promoted: [ galera-0 galera-1 ]
+ * Stopped: [ controller-0 controller-1 controller-2 galera-2 messaging-0 messaging-1 messaging-2 ]
+ * Clone Set: redis-master [redis] (promotable):
+ * Promoted: [ controller-0 ]
+ * Unpromoted: [ controller-2 ]
+ * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
+ * ip-192.168.24.6 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-10.0.0.102 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.1.14 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.1.17 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.3.15 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.4.11 (ocf:heartbeat:IPaddr2): Started controller-2
+ * Clone Set: haproxy-clone [haproxy]:
+ * Started: [ controller-0 controller-2 ]
+ * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
+ * openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0
+ * stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-2
diff --git a/cts/scheduler/summary/remote-recover-connection.summary b/cts/scheduler/summary/remote-recover-connection.summary
new file mode 100644
index 0000000..fd6900d
--- /dev/null
+++ b/cts/scheduler/summary/remote-recover-connection.summary
@@ -0,0 +1,132 @@
+Using the original execution date of: 2017-05-03 13:33:24Z
+Current cluster status:
+ * Node List:
+ * Node controller-1: UNCLEAN (offline)
+ * Online: [ controller-0 controller-2 ]
+ * RemoteOnline: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
+
+ * Full List of Resources:
+ * messaging-0 (ocf:pacemaker:remote): Started controller-0
+ * messaging-1 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN)
+ * messaging-2 (ocf:pacemaker:remote): Started controller-0
+ * galera-0 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN)
+ * galera-1 (ocf:pacemaker:remote): Started controller-0
+ * galera-2 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN)
+ * Clone Set: rabbitmq-clone [rabbitmq]:
+ * Started: [ messaging-0 messaging-1 messaging-2 ]
+ * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 ]
+ * Clone Set: galera-master [galera] (promotable):
+ * Promoted: [ galera-0 galera-1 galera-2 ]
+ * Stopped: [ controller-0 controller-1 controller-2 messaging-0 messaging-1 messaging-2 ]
+ * Clone Set: redis-master [redis] (promotable):
+ * redis (ocf:heartbeat:redis): Unpromoted controller-1 (UNCLEAN)
+ * Promoted: [ controller-0 ]
+ * Unpromoted: [ controller-2 ]
+ * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
+ * ip-192.168.24.6 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-10.0.0.102 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.1.14 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN)
+ * ip-172.17.1.17 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN)
+ * ip-172.17.3.15 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.4.11 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN)
+ * Clone Set: haproxy-clone [haproxy]:
+ * haproxy (systemd:haproxy): Started controller-1 (UNCLEAN)
+ * Started: [ controller-0 controller-2 ]
+ * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
+ * openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0
+ * stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-1 (UNCLEAN)
+
+Transition Summary:
+ * Fence (reboot) controller-1 'peer is no longer part of the cluster'
+ * Move messaging-1 ( controller-1 -> controller-2 )
+ * Move galera-0 ( controller-1 -> controller-2 )
+ * Move galera-2 ( controller-1 -> controller-2 )
+ * Stop redis:0 ( Unpromoted controller-1 ) due to node availability
+ * Move ip-172.17.1.14 ( controller-1 -> controller-2 )
+ * Move ip-172.17.1.17 ( controller-1 -> controller-2 )
+ * Move ip-172.17.4.11 ( controller-1 -> controller-2 )
+ * Stop haproxy:0 ( controller-1 ) due to node availability
+ * Move stonith-fence_ipmilan-5254005bdbb5 ( controller-1 -> controller-2 )
+
+Executing Cluster Transition:
+ * Pseudo action: messaging-1_stop_0
+ * Pseudo action: galera-0_stop_0
+ * Pseudo action: galera-2_stop_0
+ * Pseudo action: redis-master_pre_notify_stop_0
+ * Pseudo action: stonith-fence_ipmilan-5254005bdbb5_stop_0
+ * Fencing controller-1 (reboot)
+ * Resource action: messaging-1 start on controller-2
+ * Resource action: galera-0 start on controller-2
+ * Resource action: galera-2 start on controller-2
+ * Resource action: rabbitmq monitor=10000 on messaging-1
+ * Resource action: galera monitor=10000 on galera-2
+ * Resource action: galera monitor=10000 on galera-0
+ * Pseudo action: redis_post_notify_stop_0
+ * Resource action: redis notify on controller-0
+ * Resource action: redis notify on controller-2
+ * Pseudo action: redis-master_confirmed-pre_notify_stop_0
+ * Pseudo action: redis-master_stop_0
+ * Pseudo action: haproxy-clone_stop_0
+ * Resource action: stonith-fence_ipmilan-5254005bdbb5 start on controller-2
+ * Resource action: messaging-1 monitor=20000 on controller-2
+ * Resource action: galera-0 monitor=20000 on controller-2
+ * Resource action: galera-2 monitor=20000 on controller-2
+ * Pseudo action: redis_stop_0
+ * Pseudo action: redis-master_stopped_0
+ * Pseudo action: haproxy_stop_0
+ * Pseudo action: haproxy-clone_stopped_0
+ * Resource action: stonith-fence_ipmilan-5254005bdbb5 monitor=60000 on controller-2
+ * Pseudo action: redis-master_post_notify_stopped_0
+ * Pseudo action: ip-172.17.1.14_stop_0
+ * Pseudo action: ip-172.17.1.17_stop_0
+ * Pseudo action: ip-172.17.4.11_stop_0
+ * Resource action: redis notify on controller-0
+ * Resource action: redis notify on controller-2
+ * Pseudo action: redis-master_confirmed-post_notify_stopped_0
+ * Resource action: ip-172.17.1.14 start on controller-2
+ * Resource action: ip-172.17.1.17 start on controller-2
+ * Resource action: ip-172.17.4.11 start on controller-2
+ * Pseudo action: redis_notified_0
+ * Resource action: ip-172.17.1.14 monitor=10000 on controller-2
+ * Resource action: ip-172.17.1.17 monitor=10000 on controller-2
+ * Resource action: ip-172.17.4.11 monitor=10000 on controller-2
+Using the original execution date of: 2017-05-03 13:33:24Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ controller-0 controller-2 ]
+ * OFFLINE: [ controller-1 ]
+ * RemoteOnline: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
+
+ * Full List of Resources:
+ * messaging-0 (ocf:pacemaker:remote): Started controller-0
+ * messaging-1 (ocf:pacemaker:remote): Started controller-2
+ * messaging-2 (ocf:pacemaker:remote): Started controller-0
+ * galera-0 (ocf:pacemaker:remote): Started controller-2
+ * galera-1 (ocf:pacemaker:remote): Started controller-0
+ * galera-2 (ocf:pacemaker:remote): Started controller-2
+ * Clone Set: rabbitmq-clone [rabbitmq]:
+ * Started: [ messaging-0 messaging-1 messaging-2 ]
+ * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 ]
+ * Clone Set: galera-master [galera] (promotable):
+ * Promoted: [ galera-0 galera-1 galera-2 ]
+ * Stopped: [ controller-0 controller-1 controller-2 messaging-0 messaging-1 messaging-2 ]
+ * Clone Set: redis-master [redis] (promotable):
+ * Promoted: [ controller-0 ]
+ * Unpromoted: [ controller-2 ]
+ * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
+ * ip-192.168.24.6 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-10.0.0.102 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.1.14 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.1.17 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.3.15 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.4.11 (ocf:heartbeat:IPaddr2): Started controller-2
+ * Clone Set: haproxy-clone [haproxy]:
+ * Started: [ controller-0 controller-2 ]
+ * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
+ * openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0
+ * stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-2
diff --git a/cts/scheduler/summary/remote-recover-fail.summary b/cts/scheduler/summary/remote-recover-fail.summary
new file mode 100644
index 0000000..d239914
--- /dev/null
+++ b/cts/scheduler/summary/remote-recover-fail.summary
@@ -0,0 +1,54 @@
+Current cluster status:
+ * Node List:
+ * RemoteNode rhel7-auto4: UNCLEAN (offline)
+ * Online: [ rhel7-auto2 rhel7-auto3 ]
+ * OFFLINE: [ rhel7-auto1 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto3
+ * rhel7-auto4 (ocf:pacemaker:remote): FAILED rhel7-auto2
+ * FAKE1 (ocf:heartbeat:Dummy): Stopped
+ * FAKE2 (ocf:heartbeat:Dummy): Started rhel7-auto4 (UNCLEAN)
+ * FAKE3 (ocf:heartbeat:Dummy): Started rhel7-auto2
+ * FAKE4 (ocf:heartbeat:Dummy): Started rhel7-auto3
+ * FAKE5 (ocf:heartbeat:Dummy): Started rhel7-auto3
+ * FAKE6 (ocf:heartbeat:Dummy): Started rhel7-auto4 (UNCLEAN)
+
+Transition Summary:
+ * Fence (reboot) rhel7-auto4 'FAKE2 is thought to be active there'
+ * Recover rhel7-auto4 ( rhel7-auto2 )
+ * Start FAKE1 ( rhel7-auto2 )
+ * Move FAKE2 ( rhel7-auto4 -> rhel7-auto3 )
+ * Move FAKE6 ( rhel7-auto4 -> rhel7-auto2 )
+
+Executing Cluster Transition:
+ * Resource action: FAKE3 monitor=10000 on rhel7-auto2
+ * Resource action: FAKE4 monitor=10000 on rhel7-auto3
+ * Fencing rhel7-auto4 (reboot)
+ * Resource action: FAKE1 start on rhel7-auto2
+ * Pseudo action: FAKE2_stop_0
+ * Pseudo action: FAKE6_stop_0
+ * Resource action: rhel7-auto4 stop on rhel7-auto2
+ * Resource action: FAKE1 monitor=10000 on rhel7-auto2
+ * Resource action: FAKE2 start on rhel7-auto3
+ * Resource action: FAKE6 start on rhel7-auto2
+ * Resource action: rhel7-auto4 start on rhel7-auto2
+ * Resource action: FAKE2 monitor=10000 on rhel7-auto3
+ * Resource action: FAKE6 monitor=10000 on rhel7-auto2
+ * Resource action: rhel7-auto4 monitor=60000 on rhel7-auto2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-auto2 rhel7-auto3 ]
+ * OFFLINE: [ rhel7-auto1 ]
+ * RemoteOnline: [ rhel7-auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto3
+ * rhel7-auto4 (ocf:pacemaker:remote): Started rhel7-auto2
+ * FAKE1 (ocf:heartbeat:Dummy): Started rhel7-auto2
+ * FAKE2 (ocf:heartbeat:Dummy): Started rhel7-auto3
+ * FAKE3 (ocf:heartbeat:Dummy): Started rhel7-auto2
+ * FAKE4 (ocf:heartbeat:Dummy): Started rhel7-auto3
+ * FAKE5 (ocf:heartbeat:Dummy): Started rhel7-auto3
+ * FAKE6 (ocf:heartbeat:Dummy): Started rhel7-auto2
diff --git a/cts/scheduler/summary/remote-recover-no-resources.summary b/cts/scheduler/summary/remote-recover-no-resources.summary
new file mode 100644
index 0000000..d5978be
--- /dev/null
+++ b/cts/scheduler/summary/remote-recover-no-resources.summary
@@ -0,0 +1,143 @@
+Using the original execution date of: 2017-05-03 13:33:24Z
+Current cluster status:
+ * Node List:
+ * Node controller-1: UNCLEAN (offline)
+ * Online: [ controller-0 controller-2 ]
+ * RemoteOnline: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
+
+ * Full List of Resources:
+ * messaging-0 (ocf:pacemaker:remote): Started controller-0
+ * messaging-1 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN)
+ * messaging-2 (ocf:pacemaker:remote): Started controller-0
+ * galera-0 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN)
+ * galera-1 (ocf:pacemaker:remote): Started controller-0
+ * galera-2 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN)
+ * Clone Set: rabbitmq-clone [rabbitmq]:
+ * Started: [ messaging-0 messaging-1 messaging-2 ]
+ * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 ]
+ * Clone Set: galera-master [galera] (promotable):
+ * Promoted: [ galera-0 galera-1 ]
+ * Stopped: [ controller-0 controller-1 controller-2 galera-2 messaging-0 messaging-1 messaging-2 ]
+ * Clone Set: redis-master [redis] (promotable):
+ * redis (ocf:heartbeat:redis): Unpromoted controller-1 (UNCLEAN)
+ * Promoted: [ controller-0 ]
+ * Unpromoted: [ controller-2 ]
+ * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
+ * ip-192.168.24.6 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-10.0.0.102 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.1.14 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN)
+ * ip-172.17.1.17 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN)
+ * ip-172.17.3.15 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.4.11 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN)
+ * Clone Set: haproxy-clone [haproxy]:
+ * haproxy (systemd:haproxy): Started controller-1 (UNCLEAN)
+ * Started: [ controller-0 controller-2 ]
+ * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
+ * openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0
+ * stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-1 (UNCLEAN)
+
+Transition Summary:
+ * Fence (reboot) messaging-1 'resources are active but connection is unrecoverable'
+ * Fence (reboot) controller-1 'peer is no longer part of the cluster'
+ * Stop messaging-1 ( controller-1 ) due to node availability
+ * Move galera-0 ( controller-1 -> controller-2 )
+ * Stop galera-2 ( controller-1 ) due to node availability
+ * Stop rabbitmq:2 ( messaging-1 ) due to node availability
+ * Stop redis:0 ( Unpromoted controller-1 ) due to node availability
+ * Move ip-172.17.1.14 ( controller-1 -> controller-2 )
+ * Move ip-172.17.1.17 ( controller-1 -> controller-2 )
+ * Move ip-172.17.4.11 ( controller-1 -> controller-2 )
+ * Stop haproxy:0 ( controller-1 ) due to node availability
+ * Move stonith-fence_ipmilan-5254005bdbb5 ( controller-1 -> controller-2 )
+
+Executing Cluster Transition:
+ * Pseudo action: messaging-1_stop_0
+ * Pseudo action: galera-0_stop_0
+ * Pseudo action: galera-2_stop_0
+ * Pseudo action: rabbitmq-clone_pre_notify_stop_0
+ * Pseudo action: redis-master_pre_notify_stop_0
+ * Pseudo action: stonith-fence_ipmilan-5254005bdbb5_stop_0
+ * Fencing controller-1 (reboot)
+ * Resource action: rabbitmq notify on messaging-2
+ * Resource action: rabbitmq notify on messaging-0
+ * Pseudo action: rabbitmq-clone_confirmed-pre_notify_stop_0
+ * Pseudo action: redis_post_notify_stop_0
+ * Resource action: redis notify on controller-0
+ * Resource action: redis notify on controller-2
+ * Pseudo action: redis-master_confirmed-pre_notify_stop_0
+ * Pseudo action: redis-master_stop_0
+ * Pseudo action: haproxy-clone_stop_0
+ * Fencing messaging-1 (reboot)
+ * Resource action: galera-0 start on controller-2
+ * Pseudo action: rabbitmq_post_notify_stop_0
+ * Pseudo action: rabbitmq-clone_stop_0
+ * Resource action: galera monitor=10000 on galera-0
+ * Pseudo action: redis_stop_0
+ * Pseudo action: redis-master_stopped_0
+ * Pseudo action: haproxy_stop_0
+ * Pseudo action: haproxy-clone_stopped_0
+ * Resource action: stonith-fence_ipmilan-5254005bdbb5 start on controller-2
+ * Resource action: galera-0 monitor=20000 on controller-2
+ * Pseudo action: rabbitmq_stop_0
+ * Pseudo action: rabbitmq-clone_stopped_0
+ * Pseudo action: redis-master_post_notify_stopped_0
+ * Pseudo action: ip-172.17.1.14_stop_0
+ * Pseudo action: ip-172.17.1.17_stop_0
+ * Pseudo action: ip-172.17.4.11_stop_0
+ * Resource action: stonith-fence_ipmilan-5254005bdbb5 monitor=60000 on controller-2
+ * Pseudo action: rabbitmq-clone_post_notify_stopped_0
+ * Resource action: redis notify on controller-0
+ * Resource action: redis notify on controller-2
+ * Pseudo action: redis-master_confirmed-post_notify_stopped_0
+ * Resource action: ip-172.17.1.14 start on controller-2
+ * Resource action: ip-172.17.1.17 start on controller-2
+ * Resource action: ip-172.17.4.11 start on controller-2
+ * Resource action: rabbitmq notify on messaging-2
+ * Resource action: rabbitmq notify on messaging-0
+ * Pseudo action: rabbitmq_notified_0
+ * Pseudo action: rabbitmq-clone_confirmed-post_notify_stopped_0
+ * Pseudo action: redis_notified_0
+ * Resource action: ip-172.17.1.14 monitor=10000 on controller-2
+ * Resource action: ip-172.17.1.17 monitor=10000 on controller-2
+ * Resource action: ip-172.17.4.11 monitor=10000 on controller-2
+Using the original execution date of: 2017-05-03 13:33:24Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ controller-0 controller-2 ]
+ * OFFLINE: [ controller-1 ]
+ * RemoteOnline: [ galera-0 galera-1 messaging-0 messaging-2 ]
+ * RemoteOFFLINE: [ galera-2 messaging-1 ]
+
+ * Full List of Resources:
+ * messaging-0 (ocf:pacemaker:remote): Started controller-0
+ * messaging-1 (ocf:pacemaker:remote): Stopped
+ * messaging-2 (ocf:pacemaker:remote): Started controller-0
+ * galera-0 (ocf:pacemaker:remote): Started controller-2
+ * galera-1 (ocf:pacemaker:remote): Started controller-0
+ * galera-2 (ocf:pacemaker:remote): Stopped
+ * Clone Set: rabbitmq-clone [rabbitmq]:
+ * Started: [ messaging-0 messaging-2 ]
+ * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 messaging-1 ]
+ * Clone Set: galera-master [galera] (promotable):
+ * Promoted: [ galera-0 galera-1 ]
+ * Stopped: [ controller-0 controller-1 controller-2 galera-2 messaging-0 messaging-1 messaging-2 ]
+ * Clone Set: redis-master [redis] (promotable):
+ * Promoted: [ controller-0 ]
+ * Unpromoted: [ controller-2 ]
+ * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
+ * ip-192.168.24.6 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-10.0.0.102 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.1.14 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.1.17 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.3.15 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.4.11 (ocf:heartbeat:IPaddr2): Started controller-2
+ * Clone Set: haproxy-clone [haproxy]:
+ * Started: [ controller-0 controller-2 ]
+ * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
+ * openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0
+ * stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-2
diff --git a/cts/scheduler/summary/remote-recover-unknown.summary b/cts/scheduler/summary/remote-recover-unknown.summary
new file mode 100644
index 0000000..c689158
--- /dev/null
+++ b/cts/scheduler/summary/remote-recover-unknown.summary
@@ -0,0 +1,145 @@
+Using the original execution date of: 2017-05-03 13:33:24Z
+Current cluster status:
+ * Node List:
+ * Node controller-1: UNCLEAN (offline)
+ * Online: [ controller-0 controller-2 ]
+ * RemoteOnline: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
+
+ * Full List of Resources:
+ * messaging-0 (ocf:pacemaker:remote): Started controller-0
+ * messaging-1 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN)
+ * messaging-2 (ocf:pacemaker:remote): Started controller-0
+ * galera-0 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN)
+ * galera-1 (ocf:pacemaker:remote): Started controller-0
+ * galera-2 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN)
+ * Clone Set: rabbitmq-clone [rabbitmq]:
+ * Started: [ messaging-0 messaging-1 messaging-2 ]
+ * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 ]
+ * Clone Set: galera-master [galera] (promotable):
+ * Promoted: [ galera-0 galera-1 ]
+ * Stopped: [ controller-0 controller-1 controller-2 galera-2 messaging-0 messaging-1 messaging-2 ]
+ * Clone Set: redis-master [redis] (promotable):
+ * redis (ocf:heartbeat:redis): Unpromoted controller-1 (UNCLEAN)
+ * Promoted: [ controller-0 ]
+ * Unpromoted: [ controller-2 ]
+ * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
+ * ip-192.168.24.6 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-10.0.0.102 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.1.14 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN)
+ * ip-172.17.1.17 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN)
+ * ip-172.17.3.15 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.4.11 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN)
+ * Clone Set: haproxy-clone [haproxy]:
+ * haproxy (systemd:haproxy): Started controller-1 (UNCLEAN)
+ * Started: [ controller-0 controller-2 ]
+ * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
+ * openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0
+ * stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-1 (UNCLEAN)
+
+Transition Summary:
+ * Fence (reboot) galera-2 'resources are in unknown state and connection is unrecoverable'
+ * Fence (reboot) messaging-1 'resources are active but connection is unrecoverable'
+ * Fence (reboot) controller-1 'peer is no longer part of the cluster'
+ * Stop messaging-1 ( controller-1 ) due to node availability
+ * Move galera-0 ( controller-1 -> controller-2 )
+ * Stop galera-2 ( controller-1 ) due to node availability
+ * Stop rabbitmq:2 ( messaging-1 ) due to node availability
+ * Stop redis:0 ( Unpromoted controller-1 ) due to node availability
+ * Move ip-172.17.1.14 ( controller-1 -> controller-2 )
+ * Move ip-172.17.1.17 ( controller-1 -> controller-2 )
+ * Move ip-172.17.4.11 ( controller-1 -> controller-2 )
+ * Stop haproxy:0 ( controller-1 ) due to node availability
+ * Move stonith-fence_ipmilan-5254005bdbb5 ( controller-1 -> controller-2 )
+
+Executing Cluster Transition:
+ * Pseudo action: messaging-1_stop_0
+ * Pseudo action: galera-0_stop_0
+ * Pseudo action: galera-2_stop_0
+ * Pseudo action: rabbitmq-clone_pre_notify_stop_0
+ * Pseudo action: redis-master_pre_notify_stop_0
+ * Pseudo action: stonith-fence_ipmilan-5254005bdbb5_stop_0
+ * Fencing controller-1 (reboot)
+ * Resource action: rabbitmq notify on messaging-2
+ * Resource action: rabbitmq notify on messaging-0
+ * Pseudo action: rabbitmq-clone_confirmed-pre_notify_stop_0
+ * Pseudo action: redis_post_notify_stop_0
+ * Resource action: redis notify on controller-0
+ * Resource action: redis notify on controller-2
+ * Pseudo action: redis-master_confirmed-pre_notify_stop_0
+ * Pseudo action: redis-master_stop_0
+ * Pseudo action: haproxy-clone_stop_0
+ * Fencing galera-2 (reboot)
+ * Fencing messaging-1 (reboot)
+ * Resource action: galera-0 start on controller-2
+ * Pseudo action: rabbitmq_post_notify_stop_0
+ * Pseudo action: rabbitmq-clone_stop_0
+ * Resource action: galera monitor=10000 on galera-0
+ * Pseudo action: redis_stop_0
+ * Pseudo action: redis-master_stopped_0
+ * Pseudo action: haproxy_stop_0
+ * Pseudo action: haproxy-clone_stopped_0
+ * Resource action: stonith-fence_ipmilan-5254005bdbb5 start on controller-2
+ * Resource action: galera-0 monitor=20000 on controller-2
+ * Pseudo action: rabbitmq_stop_0
+ * Pseudo action: rabbitmq-clone_stopped_0
+ * Pseudo action: redis-master_post_notify_stopped_0
+ * Pseudo action: ip-172.17.1.14_stop_0
+ * Pseudo action: ip-172.17.1.17_stop_0
+ * Pseudo action: ip-172.17.4.11_stop_0
+ * Resource action: stonith-fence_ipmilan-5254005bdbb5 monitor=60000 on controller-2
+ * Pseudo action: rabbitmq-clone_post_notify_stopped_0
+ * Resource action: redis notify on controller-0
+ * Resource action: redis notify on controller-2
+ * Pseudo action: redis-master_confirmed-post_notify_stopped_0
+ * Resource action: ip-172.17.1.14 start on controller-2
+ * Resource action: ip-172.17.1.17 start on controller-2
+ * Resource action: ip-172.17.4.11 start on controller-2
+ * Resource action: rabbitmq notify on messaging-2
+ * Resource action: rabbitmq notify on messaging-0
+ * Pseudo action: rabbitmq_notified_0
+ * Pseudo action: rabbitmq-clone_confirmed-post_notify_stopped_0
+ * Pseudo action: redis_notified_0
+ * Resource action: ip-172.17.1.14 monitor=10000 on controller-2
+ * Resource action: ip-172.17.1.17 monitor=10000 on controller-2
+ * Resource action: ip-172.17.4.11 monitor=10000 on controller-2
+Using the original execution date of: 2017-05-03 13:33:24Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ controller-0 controller-2 ]
+ * OFFLINE: [ controller-1 ]
+ * RemoteOnline: [ galera-0 galera-1 messaging-0 messaging-2 ]
+ * RemoteOFFLINE: [ galera-2 messaging-1 ]
+
+ * Full List of Resources:
+ * messaging-0 (ocf:pacemaker:remote): Started controller-0
+ * messaging-1 (ocf:pacemaker:remote): Stopped
+ * messaging-2 (ocf:pacemaker:remote): Started controller-0
+ * galera-0 (ocf:pacemaker:remote): Started controller-2
+ * galera-1 (ocf:pacemaker:remote): Started controller-0
+ * galera-2 (ocf:pacemaker:remote): Stopped
+ * Clone Set: rabbitmq-clone [rabbitmq]:
+ * Started: [ messaging-0 messaging-2 ]
+ * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 messaging-1 ]
+ * Clone Set: galera-master [galera] (promotable):
+ * Promoted: [ galera-0 galera-1 ]
+ * Stopped: [ controller-0 controller-1 controller-2 galera-2 messaging-0 messaging-1 messaging-2 ]
+ * Clone Set: redis-master [redis] (promotable):
+ * Promoted: [ controller-0 ]
+ * Unpromoted: [ controller-2 ]
+ * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
+ * ip-192.168.24.6 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-10.0.0.102 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.1.14 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.1.17 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.3.15 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.4.11 (ocf:heartbeat:IPaddr2): Started controller-2
+ * Clone Set: haproxy-clone [haproxy]:
+ * Started: [ controller-0 controller-2 ]
+ * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
+ * openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0
+ * stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-2
diff --git a/cts/scheduler/summary/remote-recover.summary b/cts/scheduler/summary/remote-recover.summary
new file mode 100644
index 0000000..3d956c2
--- /dev/null
+++ b/cts/scheduler/summary/remote-recover.summary
@@ -0,0 +1,34 @@
+Current cluster status:
+ * Node List:
+ * Node rhel7-alt2: standby
+ * RemoteNode rhel7-alt4: UNCLEAN (offline)
+ * Online: [ rhel7-alt1 ]
+ * OFFLINE: [ rhel7-alt3 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Stopped
+ * rhel7-alt4 (ocf:pacemaker:remote): Stopped
+ * fake (ocf:heartbeat:Dummy): Started rhel7-alt4 (UNCLEAN)
+
+Transition Summary:
+ * Start shooter ( rhel7-alt1 )
+ * Start rhel7-alt4 ( rhel7-alt1 )
+
+Executing Cluster Transition:
+ * Resource action: shooter start on rhel7-alt1
+ * Resource action: rhel7-alt4 start on rhel7-alt1
+ * Resource action: fake monitor=10000 on rhel7-alt4
+ * Resource action: shooter monitor=60000 on rhel7-alt1
+ * Resource action: rhel7-alt4 monitor=60000 on rhel7-alt1
+
+Revised Cluster Status:
+ * Node List:
+ * Node rhel7-alt2: standby
+ * Online: [ rhel7-alt1 ]
+ * OFFLINE: [ rhel7-alt3 ]
+ * RemoteOnline: [ rhel7-alt4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-alt1
+ * rhel7-alt4 (ocf:pacemaker:remote): Started rhel7-alt1
+ * fake (ocf:heartbeat:Dummy): Started rhel7-alt4
diff --git a/cts/scheduler/summary/remote-recovery.summary b/cts/scheduler/summary/remote-recovery.summary
new file mode 100644
index 0000000..fd6900d
--- /dev/null
+++ b/cts/scheduler/summary/remote-recovery.summary
@@ -0,0 +1,132 @@
+Using the original execution date of: 2017-05-03 13:33:24Z
+Current cluster status:
+ * Node List:
+ * Node controller-1: UNCLEAN (offline)
+ * Online: [ controller-0 controller-2 ]
+ * RemoteOnline: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
+
+ * Full List of Resources:
+ * messaging-0 (ocf:pacemaker:remote): Started controller-0
+ * messaging-1 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN)
+ * messaging-2 (ocf:pacemaker:remote): Started controller-0
+ * galera-0 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN)
+ * galera-1 (ocf:pacemaker:remote): Started controller-0
+ * galera-2 (ocf:pacemaker:remote): Started controller-1 (UNCLEAN)
+ * Clone Set: rabbitmq-clone [rabbitmq]:
+ * Started: [ messaging-0 messaging-1 messaging-2 ]
+ * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 ]
+ * Clone Set: galera-master [galera] (promotable):
+ * Promoted: [ galera-0 galera-1 galera-2 ]
+ * Stopped: [ controller-0 controller-1 controller-2 messaging-0 messaging-1 messaging-2 ]
+ * Clone Set: redis-master [redis] (promotable):
+ * redis (ocf:heartbeat:redis): Unpromoted controller-1 (UNCLEAN)
+ * Promoted: [ controller-0 ]
+ * Unpromoted: [ controller-2 ]
+ * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
+ * ip-192.168.24.6 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-10.0.0.102 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.1.14 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN)
+ * ip-172.17.1.17 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN)
+ * ip-172.17.3.15 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.4.11 (ocf:heartbeat:IPaddr2): Started controller-1 (UNCLEAN)
+ * Clone Set: haproxy-clone [haproxy]:
+ * haproxy (systemd:haproxy): Started controller-1 (UNCLEAN)
+ * Started: [ controller-0 controller-2 ]
+ * Stopped: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
+ * openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0
+ * stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-1 (UNCLEAN)
+
+Transition Summary:
+ * Fence (reboot) controller-1 'peer is no longer part of the cluster'
+ * Move messaging-1 ( controller-1 -> controller-2 )
+ * Move galera-0 ( controller-1 -> controller-2 )
+ * Move galera-2 ( controller-1 -> controller-2 )
+ * Stop redis:0 ( Unpromoted controller-1 ) due to node availability
+ * Move ip-172.17.1.14 ( controller-1 -> controller-2 )
+ * Move ip-172.17.1.17 ( controller-1 -> controller-2 )
+ * Move ip-172.17.4.11 ( controller-1 -> controller-2 )
+ * Stop haproxy:0 ( controller-1 ) due to node availability
+ * Move stonith-fence_ipmilan-5254005bdbb5 ( controller-1 -> controller-2 )
+
+Executing Cluster Transition:
+ * Pseudo action: messaging-1_stop_0
+ * Pseudo action: galera-0_stop_0
+ * Pseudo action: galera-2_stop_0
+ * Pseudo action: redis-master_pre_notify_stop_0
+ * Pseudo action: stonith-fence_ipmilan-5254005bdbb5_stop_0
+ * Fencing controller-1 (reboot)
+ * Resource action: messaging-1 start on controller-2
+ * Resource action: galera-0 start on controller-2
+ * Resource action: galera-2 start on controller-2
+ * Resource action: rabbitmq monitor=10000 on messaging-1
+ * Resource action: galera monitor=10000 on galera-2
+ * Resource action: galera monitor=10000 on galera-0
+ * Pseudo action: redis_post_notify_stop_0
+ * Resource action: redis notify on controller-0
+ * Resource action: redis notify on controller-2
+ * Pseudo action: redis-master_confirmed-pre_notify_stop_0
+ * Pseudo action: redis-master_stop_0
+ * Pseudo action: haproxy-clone_stop_0
+ * Resource action: stonith-fence_ipmilan-5254005bdbb5 start on controller-2
+ * Resource action: messaging-1 monitor=20000 on controller-2
+ * Resource action: galera-0 monitor=20000 on controller-2
+ * Resource action: galera-2 monitor=20000 on controller-2
+ * Pseudo action: redis_stop_0
+ * Pseudo action: redis-master_stopped_0
+ * Pseudo action: haproxy_stop_0
+ * Pseudo action: haproxy-clone_stopped_0
+ * Resource action: stonith-fence_ipmilan-5254005bdbb5 monitor=60000 on controller-2
+ * Pseudo action: redis-master_post_notify_stopped_0
+ * Pseudo action: ip-172.17.1.14_stop_0
+ * Pseudo action: ip-172.17.1.17_stop_0
+ * Pseudo action: ip-172.17.4.11_stop_0
+ * Resource action: redis notify on controller-0
+ * Resource action: redis notify on controller-2
+ * Pseudo action: redis-master_confirmed-post_notify_stopped_0
+ * Resource action: ip-172.17.1.14 start on controller-2
+ * Resource action: ip-172.17.1.17 start on controller-2
+ * Resource action: ip-172.17.4.11 start on controller-2
+ * Pseudo action: redis_notified_0
+ * Resource action: ip-172.17.1.14 monitor=10000 on controller-2
+ * Resource action: ip-172.17.1.17 monitor=10000 on controller-2
+ * Resource action: ip-172.17.4.11 monitor=10000 on controller-2
+Using the original execution date of: 2017-05-03 13:33:24Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ controller-0 controller-2 ]
+ * OFFLINE: [ controller-1 ]
+ * RemoteOnline: [ galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
+
+ * Full List of Resources:
+ * messaging-0 (ocf:pacemaker:remote): Started controller-0
+ * messaging-1 (ocf:pacemaker:remote): Started controller-2
+ * messaging-2 (ocf:pacemaker:remote): Started controller-0
+ * galera-0 (ocf:pacemaker:remote): Started controller-2
+ * galera-1 (ocf:pacemaker:remote): Started controller-0
+ * galera-2 (ocf:pacemaker:remote): Started controller-2
+ * Clone Set: rabbitmq-clone [rabbitmq]:
+ * Started: [ messaging-0 messaging-1 messaging-2 ]
+ * Stopped: [ controller-0 controller-1 controller-2 galera-0 galera-1 galera-2 ]
+ * Clone Set: galera-master [galera] (promotable):
+ * Promoted: [ galera-0 galera-1 galera-2 ]
+ * Stopped: [ controller-0 controller-1 controller-2 messaging-0 messaging-1 messaging-2 ]
+ * Clone Set: redis-master [redis] (promotable):
+ * Promoted: [ controller-0 ]
+ * Unpromoted: [ controller-2 ]
+ * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
+ * ip-192.168.24.6 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-10.0.0.102 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.1.14 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.1.17 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.3.15 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.4.11 (ocf:heartbeat:IPaddr2): Started controller-2
+ * Clone Set: haproxy-clone [haproxy]:
+ * Started: [ controller-0 controller-2 ]
+ * Stopped: [ controller-1 galera-0 galera-1 galera-2 messaging-0 messaging-1 messaging-2 ]
+ * openstack-cinder-volume (systemd:openstack-cinder-volume): Started controller-0
+ * stonith-fence_ipmilan-525400bbf613 (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-525400b4f6bd (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-5254005bdbb5 (stonith:fence_ipmilan): Started controller-2
diff --git a/cts/scheduler/summary/remote-stale-node-entry.summary b/cts/scheduler/summary/remote-stale-node-entry.summary
new file mode 100644
index 0000000..77cffc9
--- /dev/null
+++ b/cts/scheduler/summary/remote-stale-node-entry.summary
@@ -0,0 +1,112 @@
+Current cluster status:
+ * Node List:
+ * Online: [ rhel7-node1 rhel7-node2 rhel7-node3 ]
+ * RemoteOFFLINE: [ remote1 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Stopped
+ * FencingPass (stonith:fence_dummy): Stopped
+ * rsc_rhel7-node1 (ocf:heartbeat:IPaddr2): Stopped
+ * rsc_rhel7-node2 (ocf:heartbeat:IPaddr2): Stopped
+ * rsc_rhel7-node3 (ocf:heartbeat:IPaddr2): Stopped
+ * migrator (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: Connectivity [ping-1]:
+ * Stopped: [ remote1 rhel7-node1 rhel7-node2 rhel7-node3 ]
+ * Clone Set: master-1 [stateful-1] (promotable):
+ * Stopped: [ remote1 rhel7-node1 rhel7-node2 rhel7-node3 ]
+ * Resource Group: group-1:
+ * r192.168.122.204 (ocf:heartbeat:IPaddr2): Stopped
+ * r192.168.122.205 (ocf:heartbeat:IPaddr2): Stopped
+ * r192.168.122.206 (ocf:heartbeat:IPaddr2): Stopped
+ * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Stopped
+
+Transition Summary:
+ * Start Fencing ( rhel7-node1 )
+ * Start FencingPass ( rhel7-node2 )
+ * Start rsc_rhel7-node1 ( rhel7-node1 )
+ * Start rsc_rhel7-node2 ( rhel7-node2 )
+ * Start rsc_rhel7-node3 ( rhel7-node3 )
+ * Start migrator ( rhel7-node3 )
+ * Start ping-1:0 ( rhel7-node1 )
+ * Start ping-1:1 ( rhel7-node2 )
+ * Start ping-1:2 ( rhel7-node3 )
+
+Executing Cluster Transition:
+ * Resource action: Fencing monitor on rhel7-node3
+ * Resource action: Fencing monitor on rhel7-node2
+ * Resource action: Fencing monitor on rhel7-node1
+ * Resource action: FencingPass monitor on rhel7-node3
+ * Resource action: FencingPass monitor on rhel7-node2
+ * Resource action: FencingPass monitor on rhel7-node1
+ * Resource action: rsc_rhel7-node1 monitor on rhel7-node3
+ * Resource action: rsc_rhel7-node1 monitor on rhel7-node2
+ * Resource action: rsc_rhel7-node1 monitor on rhel7-node1
+ * Resource action: rsc_rhel7-node2 monitor on rhel7-node3
+ * Resource action: rsc_rhel7-node2 monitor on rhel7-node2
+ * Resource action: rsc_rhel7-node2 monitor on rhel7-node1
+ * Resource action: rsc_rhel7-node3 monitor on rhel7-node3
+ * Resource action: rsc_rhel7-node3 monitor on rhel7-node2
+ * Resource action: rsc_rhel7-node3 monitor on rhel7-node1
+ * Resource action: migrator monitor on rhel7-node3
+ * Resource action: migrator monitor on rhel7-node2
+ * Resource action: migrator monitor on rhel7-node1
+ * Resource action: ping-1:0 monitor on rhel7-node1
+ * Resource action: ping-1:1 monitor on rhel7-node2
+ * Resource action: ping-1:2 monitor on rhel7-node3
+ * Pseudo action: Connectivity_start_0
+ * Resource action: stateful-1:0 monitor on rhel7-node3
+ * Resource action: stateful-1:0 monitor on rhel7-node2
+ * Resource action: stateful-1:0 monitor on rhel7-node1
+ * Resource action: r192.168.122.204 monitor on rhel7-node3
+ * Resource action: r192.168.122.204 monitor on rhel7-node2
+ * Resource action: r192.168.122.204 monitor on rhel7-node1
+ * Resource action: r192.168.122.205 monitor on rhel7-node3
+ * Resource action: r192.168.122.205 monitor on rhel7-node2
+ * Resource action: r192.168.122.205 monitor on rhel7-node1
+ * Resource action: r192.168.122.206 monitor on rhel7-node3
+ * Resource action: r192.168.122.206 monitor on rhel7-node2
+ * Resource action: r192.168.122.206 monitor on rhel7-node1
+ * Resource action: lsb-dummy monitor on rhel7-node3
+ * Resource action: lsb-dummy monitor on rhel7-node2
+ * Resource action: lsb-dummy monitor on rhel7-node1
+ * Resource action: Fencing start on rhel7-node1
+ * Resource action: FencingPass start on rhel7-node2
+ * Resource action: rsc_rhel7-node1 start on rhel7-node1
+ * Resource action: rsc_rhel7-node2 start on rhel7-node2
+ * Resource action: rsc_rhel7-node3 start on rhel7-node3
+ * Resource action: migrator start on rhel7-node3
+ * Resource action: ping-1:0 start on rhel7-node1
+ * Resource action: ping-1:1 start on rhel7-node2
+ * Resource action: ping-1:2 start on rhel7-node3
+ * Pseudo action: Connectivity_running_0
+ * Resource action: Fencing monitor=120000 on rhel7-node1
+ * Resource action: rsc_rhel7-node1 monitor=5000 on rhel7-node1
+ * Resource action: rsc_rhel7-node2 monitor=5000 on rhel7-node2
+ * Resource action: rsc_rhel7-node3 monitor=5000 on rhel7-node3
+ * Resource action: migrator monitor=10000 on rhel7-node3
+ * Resource action: ping-1:0 monitor=60000 on rhel7-node1
+ * Resource action: ping-1:1 monitor=60000 on rhel7-node2
+ * Resource action: ping-1:2 monitor=60000 on rhel7-node3
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-node1 rhel7-node2 rhel7-node3 ]
+ * RemoteOFFLINE: [ remote1 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-node1
+ * FencingPass (stonith:fence_dummy): Started rhel7-node2
+ * rsc_rhel7-node1 (ocf:heartbeat:IPaddr2): Started rhel7-node1
+ * rsc_rhel7-node2 (ocf:heartbeat:IPaddr2): Started rhel7-node2
+ * rsc_rhel7-node3 (ocf:heartbeat:IPaddr2): Started rhel7-node3
+ * migrator (ocf:pacemaker:Dummy): Started rhel7-node3
+ * Clone Set: Connectivity [ping-1]:
+ * Started: [ rhel7-node1 rhel7-node2 rhel7-node3 ]
+ * Stopped: [ remote1 ]
+ * Clone Set: master-1 [stateful-1] (promotable):
+ * Stopped: [ remote1 rhel7-node1 rhel7-node2 rhel7-node3 ]
+ * Resource Group: group-1:
+ * r192.168.122.204 (ocf:heartbeat:IPaddr2): Stopped
+ * r192.168.122.205 (ocf:heartbeat:IPaddr2): Stopped
+ * r192.168.122.206 (ocf:heartbeat:IPaddr2): Stopped
+ * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Stopped
diff --git a/cts/scheduler/summary/remote-start-fail.summary b/cts/scheduler/summary/remote-start-fail.summary
new file mode 100644
index 0000000..cf83c04
--- /dev/null
+++ b/cts/scheduler/summary/remote-start-fail.summary
@@ -0,0 +1,25 @@
+Current cluster status:
+ * Node List:
+ * Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
+ * RemoteOFFLINE: [ rhel7-auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto1
+ * rhel7-auto4 (ocf:pacemaker:remote): FAILED rhel7-auto2
+
+Transition Summary:
+ * Recover rhel7-auto4 ( rhel7-auto2 -> rhel7-auto3 )
+
+Executing Cluster Transition:
+ * Resource action: rhel7-auto4 stop on rhel7-auto2
+ * Resource action: rhel7-auto4 start on rhel7-auto3
+ * Resource action: rhel7-auto4 monitor=60000 on rhel7-auto3
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
+ * RemoteOnline: [ rhel7-auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto1
+ * rhel7-auto4 (ocf:pacemaker:remote): Started rhel7-auto3
diff --git a/cts/scheduler/summary/remote-startup-probes.summary b/cts/scheduler/summary/remote-startup-probes.summary
new file mode 100644
index 0000000..b49f5db
--- /dev/null
+++ b/cts/scheduler/summary/remote-startup-probes.summary
@@ -0,0 +1,44 @@
+Current cluster status:
+ * Node List:
+ * Online: [ 18builder 18node1 18node2 ]
+ * RemoteOFFLINE: [ remote1 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started 18node1
+ * remote1 (ocf:pacemaker:remote): Stopped
+ * FAKE1 (ocf:heartbeat:Dummy): Started 18builder
+ * FAKE2 (ocf:heartbeat:Dummy): Started 18node2
+ * FAKE3 (ocf:heartbeat:Dummy): Started 18builder
+ * FAKE4 (ocf:heartbeat:Dummy): Started 18node1
+
+Transition Summary:
+ * Start remote1 ( 18builder )
+ * Move FAKE1 ( 18builder -> 18node2 )
+ * Move FAKE2 ( 18node2 -> remote1 )
+
+Executing Cluster Transition:
+ * Resource action: remote1 start on 18builder
+ * Resource action: FAKE1 stop on 18builder
+ * Resource action: FAKE1 monitor on remote1
+ * Resource action: FAKE2 stop on 18node2
+ * Resource action: FAKE2 monitor on remote1
+ * Resource action: FAKE3 monitor on remote1
+ * Resource action: FAKE4 monitor on remote1
+ * Resource action: remote1 monitor=60000 on 18builder
+ * Resource action: FAKE1 start on 18node2
+ * Resource action: FAKE2 start on remote1
+ * Resource action: FAKE1 monitor=60000 on 18node2
+ * Resource action: FAKE2 monitor=60000 on remote1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18builder 18node1 18node2 ]
+ * RemoteOnline: [ remote1 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started 18node1
+ * remote1 (ocf:pacemaker:remote): Started 18builder
+ * FAKE1 (ocf:heartbeat:Dummy): Started 18node2
+ * FAKE2 (ocf:heartbeat:Dummy): Started remote1
+ * FAKE3 (ocf:heartbeat:Dummy): Started 18builder
+ * FAKE4 (ocf:heartbeat:Dummy): Started 18node1
diff --git a/cts/scheduler/summary/remote-startup.summary b/cts/scheduler/summary/remote-startup.summary
new file mode 100644
index 0000000..00bb311
--- /dev/null
+++ b/cts/scheduler/summary/remote-startup.summary
@@ -0,0 +1,39 @@
+Current cluster status:
+ * Node List:
+ * RemoteNode remote1: UNCLEAN (offline)
+ * Online: [ 18builder 18node1 18node2 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started 18builder
+ * fake (ocf:pacemaker:Dummy): Stopped
+ * remote1 (ocf:pacemaker:remote): Stopped
+
+Transition Summary:
+ * Move shooter ( 18builder -> 18node1 )
+ * Start fake ( 18node2 )
+ * Start remote1 ( 18builder )
+
+Executing Cluster Transition:
+ * Resource action: shooter stop on 18builder
+ * Resource action: fake monitor on 18node2
+ * Resource action: fake monitor on 18node1
+ * Resource action: fake monitor on 18builder
+ * Resource action: remote1 monitor on 18node2
+ * Resource action: remote1 monitor on 18node1
+ * Resource action: remote1 monitor on 18builder
+ * Resource action: shooter start on 18node1
+ * Resource action: remote1 start on 18builder
+ * Resource action: shooter monitor=60000 on 18node1
+ * Resource action: fake monitor on remote1
+ * Resource action: remote1 monitor=60000 on 18builder
+ * Resource action: fake start on 18node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18builder 18node1 18node2 ]
+ * RemoteOnline: [ remote1 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started 18node1
+ * fake (ocf:pacemaker:Dummy): Started 18node2
+ * remote1 (ocf:pacemaker:remote): Started 18builder
diff --git a/cts/scheduler/summary/remote-unclean2.summary b/cts/scheduler/summary/remote-unclean2.summary
new file mode 100644
index 0000000..3ad98b9
--- /dev/null
+++ b/cts/scheduler/summary/remote-unclean2.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * RemoteNode rhel7-auto4: UNCLEAN (offline)
+ * Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto2
+ * rhel7-auto4 (ocf:pacemaker:remote): FAILED rhel7-auto1
+
+Transition Summary:
+ * Fence (reboot) rhel7-auto4 'remote connection is unrecoverable'
+ * Recover rhel7-auto4 ( rhel7-auto1 )
+
+Executing Cluster Transition:
+ * Resource action: rhel7-auto4 stop on rhel7-auto1
+ * Fencing rhel7-auto4 (reboot)
+ * Resource action: rhel7-auto4 start on rhel7-auto1
+ * Resource action: rhel7-auto4 monitor=60000 on rhel7-auto1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-auto1 rhel7-auto2 rhel7-auto3 ]
+ * RemoteOnline: [ rhel7-auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started rhel7-auto2
+ * rhel7-auto4 (ocf:pacemaker:remote): Started rhel7-auto1
diff --git a/cts/scheduler/summary/reprobe-target_rc.summary b/cts/scheduler/summary/reprobe-target_rc.summary
new file mode 100644
index 0000000..7902ce8
--- /dev/null
+++ b/cts/scheduler/summary/reprobe-target_rc.summary
@@ -0,0 +1,55 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node-0 node-1 ]
+
+ * Full List of Resources:
+ * d00 (ocf:pacemaker:Dummy): Started node-0
+ * d01 (ocf:pacemaker:Dummy): Started node-1
+ * d02 (ocf:pacemaker:Dummy): Started node-0
+ * d03 (ocf:pacemaker:Dummy): Started node-1
+ * d04 (ocf:pacemaker:Dummy): Started node-0
+ * d05 (ocf:pacemaker:Dummy): Started node-1
+ * d06 (ocf:pacemaker:Dummy): Started node-0
+ * d07 (ocf:pacemaker:Dummy): Started node-1
+ * d08 (ocf:pacemaker:Dummy): Started node-0
+ * d09 (ocf:pacemaker:Dummy): Started node-1
+ * d10 (ocf:pacemaker:Dummy): Started node-0
+ * d11 (ocf:pacemaker:Dummy): Started node-1
+ * d12 (ocf:pacemaker:Dummy): Started node-0
+ * d13 (ocf:pacemaker:Dummy): Started node-1
+ * d14 (ocf:pacemaker:Dummy): Started node-0
+ * d15 (ocf:pacemaker:Dummy): Started node-1
+ * d16 (ocf:pacemaker:Dummy): Started node-0
+ * d17 (ocf:pacemaker:Dummy): Started node-1
+ * d18 (ocf:pacemaker:Dummy): Started node-0
+ * d19 (ocf:pacemaker:Dummy): Started node-1
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node-0 node-1 ]
+
+ * Full List of Resources:
+ * d00 (ocf:pacemaker:Dummy): Started node-0
+ * d01 (ocf:pacemaker:Dummy): Started node-1
+ * d02 (ocf:pacemaker:Dummy): Started node-0
+ * d03 (ocf:pacemaker:Dummy): Started node-1
+ * d04 (ocf:pacemaker:Dummy): Started node-0
+ * d05 (ocf:pacemaker:Dummy): Started node-1
+ * d06 (ocf:pacemaker:Dummy): Started node-0
+ * d07 (ocf:pacemaker:Dummy): Started node-1
+ * d08 (ocf:pacemaker:Dummy): Started node-0
+ * d09 (ocf:pacemaker:Dummy): Started node-1
+ * d10 (ocf:pacemaker:Dummy): Started node-0
+ * d11 (ocf:pacemaker:Dummy): Started node-1
+ * d12 (ocf:pacemaker:Dummy): Started node-0
+ * d13 (ocf:pacemaker:Dummy): Started node-1
+ * d14 (ocf:pacemaker:Dummy): Started node-0
+ * d15 (ocf:pacemaker:Dummy): Started node-1
+ * d16 (ocf:pacemaker:Dummy): Started node-0
+ * d17 (ocf:pacemaker:Dummy): Started node-1
+ * d18 (ocf:pacemaker:Dummy): Started node-0
+ * d19 (ocf:pacemaker:Dummy): Started node-1
diff --git a/cts/scheduler/summary/resource-discovery.summary b/cts/scheduler/summary/resource-discovery.summary
new file mode 100644
index 0000000..2d6ab7c
--- /dev/null
+++ b/cts/scheduler/summary/resource-discovery.summary
@@ -0,0 +1,128 @@
+Current cluster status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 18node4 ]
+ * RemoteOFFLINE: [ remote1 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Stopped
+ * remote1 (ocf:pacemaker:remote): Stopped
+ * FAKE1 (ocf:heartbeat:Dummy): Stopped
+ * FAKE2 (ocf:heartbeat:Dummy): Stopped
+ * FAKE3 (ocf:heartbeat:Dummy): Stopped
+ * FAKE4 (ocf:heartbeat:Dummy): Stopped
+ * FAKE5 (ocf:heartbeat:Dummy): Stopped
+ * Clone Set: FAKECLONE1-clone [FAKECLONE1]:
+ * Stopped: [ 18node1 18node2 18node3 18node4 remote1 ]
+ * Clone Set: FAKECLONE2-clone [FAKECLONE2]:
+ * Stopped: [ 18node1 18node2 18node3 18node4 remote1 ]
+ * Resource Group: FAKEGROUP:
+ * FAKE6 (ocf:heartbeat:Dummy): Stopped
+ * FAKE7 (ocf:heartbeat:Dummy): Stopped
+
+Transition Summary:
+ * Start shooter ( 18node2 )
+ * Start remote1 ( 18node1 )
+ * Start FAKE1 ( 18node4 )
+ * Start FAKE2 ( 18node2 )
+ * Start FAKE3 ( 18node3 )
+ * Start FAKE4 ( 18node4 )
+ * Start FAKE5 ( remote1 )
+ * Start FAKECLONE1:0 ( 18node1 )
+ * Start FAKECLONE1:1 ( remote1 )
+ * Start FAKECLONE2:0 ( 18node3 )
+ * Start FAKECLONE2:1 ( 18node1 )
+ * Start FAKECLONE2:2 ( 18node2 )
+ * Start FAKECLONE2:3 ( 18node4 )
+ * Start FAKECLONE2:4 ( remote1 )
+ * Start FAKE6 ( 18node1 )
+ * Start FAKE7 ( 18node1 )
+
+Executing Cluster Transition:
+ * Resource action: shooter monitor on 18node4
+ * Resource action: shooter monitor on 18node3
+ * Resource action: shooter monitor on 18node2
+ * Resource action: shooter monitor on 18node1
+ * Resource action: remote1 monitor on 18node4
+ * Resource action: remote1 monitor on 18node3
+ * Resource action: remote1 monitor on 18node2
+ * Resource action: remote1 monitor on 18node1
+ * Resource action: FAKE1 monitor on 18node4
+ * Resource action: FAKE2 monitor on 18node2
+ * Resource action: FAKE2 monitor on 18node1
+ * Resource action: FAKE3 monitor on 18node3
+ * Resource action: FAKE4 monitor on 18node4
+ * Resource action: FAKE5 monitor on 18node4
+ * Resource action: FAKE5 monitor on 18node3
+ * Resource action: FAKE5 monitor on 18node2
+ * Resource action: FAKE5 monitor on 18node1
+ * Resource action: FAKECLONE1:0 monitor on 18node1
+ * Resource action: FAKECLONE2:0 monitor on 18node3
+ * Resource action: FAKECLONE2:1 monitor on 18node1
+ * Resource action: FAKECLONE2:3 monitor on 18node4
+ * Pseudo action: FAKEGROUP_start_0
+ * Resource action: FAKE6 monitor on 18node2
+ * Resource action: FAKE6 monitor on 18node1
+ * Resource action: FAKE7 monitor on 18node2
+ * Resource action: FAKE7 monitor on 18node1
+ * Resource action: shooter start on 18node2
+ * Resource action: remote1 start on 18node1
+ * Resource action: FAKE1 start on 18node4
+ * Resource action: FAKE2 start on 18node2
+ * Resource action: FAKE3 start on 18node3
+ * Resource action: FAKE4 start on 18node4
+ * Resource action: FAKE5 monitor on remote1
+ * Resource action: FAKECLONE1:1 monitor on remote1
+ * Pseudo action: FAKECLONE1-clone_start_0
+ * Resource action: FAKECLONE2:4 monitor on remote1
+ * Pseudo action: FAKECLONE2-clone_start_0
+ * Resource action: FAKE6 start on 18node1
+ * Resource action: FAKE7 start on 18node1
+ * Resource action: shooter monitor=60000 on 18node2
+ * Resource action: remote1 monitor=60000 on 18node1
+ * Resource action: FAKE1 monitor=60000 on 18node4
+ * Resource action: FAKE2 monitor=60000 on 18node2
+ * Resource action: FAKE3 monitor=60000 on 18node3
+ * Resource action: FAKE4 monitor=60000 on 18node4
+ * Resource action: FAKE5 start on remote1
+ * Resource action: FAKECLONE1:0 start on 18node1
+ * Resource action: FAKECLONE1:1 start on remote1
+ * Pseudo action: FAKECLONE1-clone_running_0
+ * Resource action: FAKECLONE2:0 start on 18node3
+ * Resource action: FAKECLONE2:1 start on 18node1
+ * Resource action: FAKECLONE2:2 start on 18node2
+ * Resource action: FAKECLONE2:3 start on 18node4
+ * Resource action: FAKECLONE2:4 start on remote1
+ * Pseudo action: FAKECLONE2-clone_running_0
+ * Pseudo action: FAKEGROUP_running_0
+ * Resource action: FAKE6 monitor=10000 on 18node1
+ * Resource action: FAKE7 monitor=10000 on 18node1
+ * Resource action: FAKE5 monitor=60000 on remote1
+ * Resource action: FAKECLONE1:0 monitor=60000 on 18node1
+ * Resource action: FAKECLONE1:1 monitor=60000 on remote1
+ * Resource action: FAKECLONE2:0 monitor=60000 on 18node3
+ * Resource action: FAKECLONE2:1 monitor=60000 on 18node1
+ * Resource action: FAKECLONE2:2 monitor=60000 on 18node2
+ * Resource action: FAKECLONE2:3 monitor=60000 on 18node4
+ * Resource action: FAKECLONE2:4 monitor=60000 on remote1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 18node4 ]
+ * RemoteOnline: [ remote1 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started 18node2
+ * remote1 (ocf:pacemaker:remote): Started 18node1
+ * FAKE1 (ocf:heartbeat:Dummy): Started 18node4
+ * FAKE2 (ocf:heartbeat:Dummy): Started 18node2
+ * FAKE3 (ocf:heartbeat:Dummy): Started 18node3
+ * FAKE4 (ocf:heartbeat:Dummy): Started 18node4
+ * FAKE5 (ocf:heartbeat:Dummy): Started remote1
+ * Clone Set: FAKECLONE1-clone [FAKECLONE1]:
+ * Started: [ 18node1 remote1 ]
+ * Stopped: [ 18node2 18node3 18node4 ]
+ * Clone Set: FAKECLONE2-clone [FAKECLONE2]:
+ * Started: [ 18node1 18node2 18node3 18node4 remote1 ]
+ * Resource Group: FAKEGROUP:
+ * FAKE6 (ocf:heartbeat:Dummy): Started 18node1
+ * FAKE7 (ocf:heartbeat:Dummy): Started 18node1
diff --git a/cts/scheduler/summary/restart-with-extra-op-params.summary b/cts/scheduler/summary/restart-with-extra-op-params.summary
new file mode 100644
index 0000000..d80e0d9
--- /dev/null
+++ b/cts/scheduler/summary/restart-with-extra-op-params.summary
@@ -0,0 +1,25 @@
+Using the original execution date of: 2021-03-31 14:58:18Z
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_dummy): Started node1
+ * dummy1 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Restart dummy1 ( node2 ) due to resource definition change
+
+Executing Cluster Transition:
+ * Resource action: dummy1 stop on node2
+ * Resource action: dummy1 start on node2
+ * Resource action: dummy1 monitor=10000 on node2
+Using the original execution date of: 2021-03-31 14:58:18Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_dummy): Started node1
+ * dummy1 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/route-remote-notify.summary b/cts/scheduler/summary/route-remote-notify.summary
new file mode 100644
index 0000000..fb55346
--- /dev/null
+++ b/cts/scheduler/summary/route-remote-notify.summary
@@ -0,0 +1,98 @@
+Using the original execution date of: 2018-10-31 11:51:32Z
+Current cluster status:
+ * Node List:
+ * Online: [ controller-0 controller-1 controller-2 ]
+ * GuestOnline: [ rabbitmq-bundle-0 rabbitmq-bundle-1 rabbitmq-bundle-2 ]
+
+ * Full List of Resources:
+ * Container bundle set: rabbitmq-bundle [192.168.24.1:8787/rhosp13/openstack-rabbitmq:pcmklatest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started controller-0
+ * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started controller-1
+ * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started controller-2
+ * ip-192.168.24.12 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-10.0.0.101 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.1.20 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.1.11 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.3.16 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.4.15 (ocf:heartbeat:IPaddr2): Started controller-2
+ * Container bundle set: haproxy-bundle [192.168.24.1:8787/rhosp13/openstack-haproxy:pcmklatest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started controller-0
+ * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Started controller-1
+ * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Started controller-2
+ * Container bundle: openstack-cinder-volume [192.168.24.1:8787/rhosp13/openstack-cinder-volume:pcmklatest]:
+ * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Started controller-0
+
+Transition Summary:
+ * Stop rabbitmq-bundle-docker-0 ( controller-0 ) due to node availability
+ * Stop rabbitmq-bundle-0 ( controller-0 ) due to unrunnable rabbitmq-bundle-docker-0 start
+ * Stop rabbitmq:0 ( rabbitmq-bundle-0 ) due to unrunnable rabbitmq-bundle-docker-0 start
+ * Move ip-192.168.24.12 ( controller-0 -> controller-2 )
+ * Move ip-172.17.1.11 ( controller-0 -> controller-1 )
+ * Stop haproxy-bundle-docker-0 ( controller-0 ) due to node availability
+ * Move openstack-cinder-volume-docker-0 ( controller-0 -> controller-2 )
+
+Executing Cluster Transition:
+ * Pseudo action: rabbitmq-bundle-clone_pre_notify_stop_0
+ * Pseudo action: openstack-cinder-volume_stop_0
+ * Pseudo action: openstack-cinder-volume_start_0
+ * Pseudo action: haproxy-bundle_stop_0
+ * Pseudo action: rabbitmq-bundle_stop_0
+ * Resource action: rabbitmq notify on rabbitmq-bundle-0
+ * Resource action: rabbitmq notify on rabbitmq-bundle-1
+ * Resource action: rabbitmq notify on rabbitmq-bundle-2
+ * Pseudo action: rabbitmq-bundle-clone_confirmed-pre_notify_stop_0
+ * Pseudo action: rabbitmq-bundle-clone_stop_0
+ * Resource action: haproxy-bundle-docker-0 stop on controller-0
+ * Resource action: openstack-cinder-volume-docker-0 stop on controller-0
+ * Pseudo action: openstack-cinder-volume_stopped_0
+ * Pseudo action: haproxy-bundle_stopped_0
+ * Resource action: rabbitmq stop on rabbitmq-bundle-0
+ * Pseudo action: rabbitmq-bundle-clone_stopped_0
+ * Resource action: rabbitmq-bundle-0 stop on controller-0
+ * Resource action: ip-192.168.24.12 stop on controller-0
+ * Resource action: ip-172.17.1.11 stop on controller-0
+ * Resource action: openstack-cinder-volume-docker-0 start on controller-2
+ * Pseudo action: openstack-cinder-volume_running_0
+ * Pseudo action: rabbitmq-bundle-clone_post_notify_stopped_0
+ * Resource action: rabbitmq-bundle-docker-0 stop on controller-0
+ * Resource action: ip-192.168.24.12 start on controller-2
+ * Resource action: ip-172.17.1.11 start on controller-1
+ * Resource action: openstack-cinder-volume-docker-0 monitor=60000 on controller-2
+ * Cluster action: do_shutdown on controller-0
+ * Resource action: rabbitmq notify on rabbitmq-bundle-1
+ * Resource action: rabbitmq notify on rabbitmq-bundle-2
+ * Pseudo action: rabbitmq-bundle-clone_confirmed-post_notify_stopped_0
+ * Pseudo action: rabbitmq-bundle-clone_pre_notify_start_0
+ * Resource action: ip-192.168.24.12 monitor=10000 on controller-2
+ * Resource action: ip-172.17.1.11 monitor=10000 on controller-1
+ * Pseudo action: rabbitmq-bundle_stopped_0
+ * Pseudo action: rabbitmq-bundle-clone_confirmed-pre_notify_start_0
+ * Pseudo action: rabbitmq-bundle-clone_start_0
+ * Pseudo action: rabbitmq-bundle-clone_running_0
+ * Pseudo action: rabbitmq-bundle-clone_post_notify_running_0
+ * Pseudo action: rabbitmq-bundle-clone_confirmed-post_notify_running_0
+ * Pseudo action: rabbitmq-bundle_running_0
+Using the original execution date of: 2018-10-31 11:51:32Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ controller-0 controller-1 controller-2 ]
+ * GuestOnline: [ rabbitmq-bundle-1 rabbitmq-bundle-2 ]
+
+ * Full List of Resources:
+ * Container bundle set: rabbitmq-bundle [192.168.24.1:8787/rhosp13/openstack-rabbitmq:pcmklatest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Stopped
+ * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started controller-1
+ * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started controller-2
+ * ip-192.168.24.12 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-10.0.0.101 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.1.20 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.1.11 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.3.16 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.4.15 (ocf:heartbeat:IPaddr2): Started controller-2
+ * Container bundle set: haproxy-bundle [192.168.24.1:8787/rhosp13/openstack-haproxy:pcmklatest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Stopped
+ * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Started controller-1
+ * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Started controller-2
+ * Container bundle: openstack-cinder-volume [192.168.24.1:8787/rhosp13/openstack-cinder-volume:pcmklatest]:
+ * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Started controller-2
diff --git a/cts/scheduler/summary/rsc-defaults-2.summary b/cts/scheduler/summary/rsc-defaults-2.summary
new file mode 100644
index 0000000..b363fe8
--- /dev/null
+++ b/cts/scheduler/summary/rsc-defaults-2.summary
@@ -0,0 +1,29 @@
+Current cluster status:
+ * Node List:
+ * Online: [ cluster01 cluster02 ]
+
+ * Full List of Resources:
+ * fencing (stonith:fence_xvm): Stopped
+ * dummy-rsc (ocf:pacemaker:Dummy): Stopped (unmanaged)
+ * ping-rsc-ping (ocf:pacemaker:ping): Stopped (unmanaged)
+
+Transition Summary:
+ * Start fencing ( cluster01 )
+
+Executing Cluster Transition:
+ * Resource action: fencing monitor on cluster02
+ * Resource action: fencing monitor on cluster01
+ * Resource action: dummy-rsc monitor on cluster02
+ * Resource action: dummy-rsc monitor on cluster01
+ * Resource action: ping-rsc-ping monitor on cluster02
+ * Resource action: ping-rsc-ping monitor on cluster01
+ * Resource action: fencing start on cluster01
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ cluster01 cluster02 ]
+
+ * Full List of Resources:
+ * fencing (stonith:fence_xvm): Started cluster01
+ * dummy-rsc (ocf:pacemaker:Dummy): Stopped (unmanaged)
+ * ping-rsc-ping (ocf:pacemaker:ping): Stopped (unmanaged)
diff --git a/cts/scheduler/summary/rsc-defaults.summary b/cts/scheduler/summary/rsc-defaults.summary
new file mode 100644
index 0000000..c3657e7
--- /dev/null
+++ b/cts/scheduler/summary/rsc-defaults.summary
@@ -0,0 +1,41 @@
+2 of 5 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ cluster01 cluster02 ]
+
+ * Full List of Resources:
+ * fencing (stonith:fence_xvm): Stopped
+ * ip-rsc (ocf:heartbeat:IPaddr2): Stopped (disabled)
+ * ip-rsc2 (ocf:heartbeat:IPaddr2): Stopped (disabled)
+ * dummy-rsc (ocf:pacemaker:Dummy): Stopped (unmanaged)
+ * ping-rsc-ping (ocf:pacemaker:ping): Stopped
+
+Transition Summary:
+ * Start fencing ( cluster01 )
+ * Start ping-rsc-ping ( cluster02 )
+
+Executing Cluster Transition:
+ * Resource action: fencing monitor on cluster02
+ * Resource action: fencing monitor on cluster01
+ * Resource action: ip-rsc monitor on cluster02
+ * Resource action: ip-rsc monitor on cluster01
+ * Resource action: ip-rsc2 monitor on cluster02
+ * Resource action: ip-rsc2 monitor on cluster01
+ * Resource action: dummy-rsc monitor on cluster02
+ * Resource action: dummy-rsc monitor on cluster01
+ * Resource action: ping-rsc-ping monitor on cluster02
+ * Resource action: ping-rsc-ping monitor on cluster01
+ * Resource action: fencing start on cluster01
+ * Resource action: ping-rsc-ping start on cluster02
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ cluster01 cluster02 ]
+
+ * Full List of Resources:
+ * fencing (stonith:fence_xvm): Started cluster01
+ * ip-rsc (ocf:heartbeat:IPaddr2): Stopped (disabled)
+ * ip-rsc2 (ocf:heartbeat:IPaddr2): Stopped (disabled)
+ * dummy-rsc (ocf:pacemaker:Dummy): Stopped (unmanaged)
+ * ping-rsc-ping (ocf:pacemaker:ping): Started cluster02
diff --git a/cts/scheduler/summary/rsc-discovery-per-node.summary b/cts/scheduler/summary/rsc-discovery-per-node.summary
new file mode 100644
index 0000000..3c34ced
--- /dev/null
+++ b/cts/scheduler/summary/rsc-discovery-per-node.summary
@@ -0,0 +1,130 @@
+Current cluster status:
+ * Node List:
+ * Online: [ 18builder 18node1 18node2 18node3 18node4 ]
+ * RemoteOFFLINE: [ remote1 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started 18node1
+ * remote1 (ocf:pacemaker:remote): Stopped
+ * FAKE1 (ocf:heartbeat:Dummy): Stopped
+ * FAKE2 (ocf:heartbeat:Dummy): Started 18node2
+ * FAKE3 (ocf:heartbeat:Dummy): Started 18builder
+ * FAKE4 (ocf:heartbeat:Dummy): Started 18node1
+ * FAKE5 (ocf:heartbeat:Dummy): Stopped
+ * Clone Set: FAKECLONE1-clone [FAKECLONE1]:
+ * Stopped: [ 18builder 18node1 18node2 18node3 18node4 remote1 ]
+ * Clone Set: FAKECLONE2-clone [FAKECLONE2]:
+ * Stopped: [ 18builder 18node1 18node2 18node3 18node4 remote1 ]
+
+Transition Summary:
+ * Start remote1 ( 18builder )
+ * Start FAKE1 ( 18node2 )
+ * Move FAKE2 ( 18node2 -> 18node3 )
+ * Move FAKE3 ( 18builder -> 18node4 )
+ * Move FAKE4 ( 18node1 -> remote1 )
+ * Start FAKE5 ( 18builder )
+ * Start FAKECLONE1:0 ( 18node1 )
+ * Start FAKECLONE1:1 ( 18node2 )
+ * Start FAKECLONE1:2 ( 18node3 )
+ * Start FAKECLONE1:3 ( 18node4 )
+ * Start FAKECLONE1:4 ( remote1 )
+ * Start FAKECLONE1:5 ( 18builder )
+ * Start FAKECLONE2:0 ( 18node1 )
+ * Start FAKECLONE2:1 ( 18node2 )
+ * Start FAKECLONE2:2 ( 18node3 )
+ * Start FAKECLONE2:3 ( 18node4 )
+ * Start FAKECLONE2:4 ( remote1 )
+ * Start FAKECLONE2:5 ( 18builder )
+
+Executing Cluster Transition:
+ * Resource action: shooter monitor on 18node4
+ * Resource action: shooter monitor on 18node3
+ * Resource action: remote1 monitor on 18node4
+ * Resource action: remote1 monitor on 18node3
+ * Resource action: FAKE1 monitor on 18node4
+ * Resource action: FAKE1 monitor on 18node3
+ * Resource action: FAKE1 monitor on 18node2
+ * Resource action: FAKE1 monitor on 18node1
+ * Resource action: FAKE1 monitor on 18builder
+ * Resource action: FAKE2 stop on 18node2
+ * Resource action: FAKE2 monitor on 18node4
+ * Resource action: FAKE2 monitor on 18node3
+ * Resource action: FAKE3 stop on 18builder
+ * Resource action: FAKE3 monitor on 18node4
+ * Resource action: FAKE3 monitor on 18node3
+ * Resource action: FAKE4 monitor on 18node4
+ * Resource action: FAKE4 monitor on 18node3
+ * Resource action: FAKE5 monitor on 18node4
+ * Resource action: FAKE5 monitor on 18node3
+ * Resource action: FAKE5 monitor on 18node2
+ * Resource action: FAKE5 monitor on 18node1
+ * Resource action: FAKE5 monitor on 18builder
+ * Resource action: FAKECLONE1:0 monitor on 18node1
+ * Resource action: FAKECLONE1:1 monitor on 18node2
+ * Resource action: FAKECLONE1:2 monitor on 18node3
+ * Resource action: FAKECLONE1:3 monitor on 18node4
+ * Resource action: FAKECLONE1:5 monitor on 18builder
+ * Pseudo action: FAKECLONE1-clone_start_0
+ * Resource action: FAKECLONE2:0 monitor on 18node1
+ * Resource action: FAKECLONE2:1 monitor on 18node2
+ * Resource action: FAKECLONE2:2 monitor on 18node3
+ * Resource action: FAKECLONE2:3 monitor on 18node4
+ * Resource action: FAKECLONE2:5 monitor on 18builder
+ * Pseudo action: FAKECLONE2-clone_start_0
+ * Resource action: remote1 start on 18builder
+ * Resource action: FAKE1 start on 18node2
+ * Resource action: FAKE2 start on 18node3
+ * Resource action: FAKE3 start on 18node4
+ * Resource action: FAKE4 stop on 18node1
+ * Resource action: FAKE5 start on 18builder
+ * Resource action: FAKECLONE1:0 start on 18node1
+ * Resource action: FAKECLONE1:1 start on 18node2
+ * Resource action: FAKECLONE1:2 start on 18node3
+ * Resource action: FAKECLONE1:3 start on 18node4
+ * Resource action: FAKECLONE1:4 start on remote1
+ * Resource action: FAKECLONE1:5 start on 18builder
+ * Pseudo action: FAKECLONE1-clone_running_0
+ * Resource action: FAKECLONE2:0 start on 18node1
+ * Resource action: FAKECLONE2:1 start on 18node2
+ * Resource action: FAKECLONE2:2 start on 18node3
+ * Resource action: FAKECLONE2:3 start on 18node4
+ * Resource action: FAKECLONE2:4 start on remote1
+ * Resource action: FAKECLONE2:5 start on 18builder
+ * Pseudo action: FAKECLONE2-clone_running_0
+ * Resource action: remote1 monitor=60000 on 18builder
+ * Resource action: FAKE1 monitor=60000 on 18node2
+ * Resource action: FAKE2 monitor=60000 on 18node3
+ * Resource action: FAKE3 monitor=60000 on 18node4
+ * Resource action: FAKE4 start on remote1
+ * Resource action: FAKE5 monitor=60000 on 18builder
+ * Resource action: FAKECLONE1:0 monitor=60000 on 18node1
+ * Resource action: FAKECLONE1:1 monitor=60000 on 18node2
+ * Resource action: FAKECLONE1:2 monitor=60000 on 18node3
+ * Resource action: FAKECLONE1:3 monitor=60000 on 18node4
+ * Resource action: FAKECLONE1:4 monitor=60000 on remote1
+ * Resource action: FAKECLONE1:5 monitor=60000 on 18builder
+ * Resource action: FAKECLONE2:0 monitor=60000 on 18node1
+ * Resource action: FAKECLONE2:1 monitor=60000 on 18node2
+ * Resource action: FAKECLONE2:2 monitor=60000 on 18node3
+ * Resource action: FAKECLONE2:3 monitor=60000 on 18node4
+ * Resource action: FAKECLONE2:4 monitor=60000 on remote1
+ * Resource action: FAKECLONE2:5 monitor=60000 on 18builder
+ * Resource action: FAKE4 monitor=60000 on remote1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18builder 18node1 18node2 18node3 18node4 ]
+ * RemoteOnline: [ remote1 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started 18node1
+ * remote1 (ocf:pacemaker:remote): Started 18builder
+ * FAKE1 (ocf:heartbeat:Dummy): Started 18node2
+ * FAKE2 (ocf:heartbeat:Dummy): Started 18node3
+ * FAKE3 (ocf:heartbeat:Dummy): Started 18node4
+ * FAKE4 (ocf:heartbeat:Dummy): Started remote1
+ * FAKE5 (ocf:heartbeat:Dummy): Started 18builder
+ * Clone Set: FAKECLONE1-clone [FAKECLONE1]:
+ * Started: [ 18builder 18node1 18node2 18node3 18node4 remote1 ]
+ * Clone Set: FAKECLONE2-clone [FAKECLONE2]:
+ * Started: [ 18builder 18node1 18node2 18node3 18node4 remote1 ]
diff --git a/cts/scheduler/summary/rsc-maintenance.summary b/cts/scheduler/summary/rsc-maintenance.summary
new file mode 100644
index 0000000..0525d8c
--- /dev/null
+++ b/cts/scheduler/summary/rsc-maintenance.summary
@@ -0,0 +1,31 @@
+2 of 4 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Resource Group: group1 (disabled, maintenance):
+ * rsc1 (ocf:pacemaker:Dummy): Started node1 (disabled, maintenance)
+ * rsc2 (ocf:pacemaker:Dummy): Started node1 (disabled, maintenance)
+ * Resource Group: group2:
+ * rsc3 (ocf:pacemaker:Dummy): Started node2
+ * rsc4 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: rsc1 cancel=10000 on node1
+ * Resource action: rsc2 cancel=10000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Resource Group: group1 (disabled, maintenance):
+ * rsc1 (ocf:pacemaker:Dummy): Started node1 (disabled, maintenance)
+ * rsc2 (ocf:pacemaker:Dummy): Started node1 (disabled, maintenance)
+ * Resource Group: group2:
+ * rsc3 (ocf:pacemaker:Dummy): Started node2
+ * rsc4 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/rsc-sets-clone-1.summary b/cts/scheduler/summary/rsc-sets-clone-1.summary
new file mode 100644
index 0000000..9f57a8f
--- /dev/null
+++ b/cts/scheduler/summary/rsc-sets-clone-1.summary
@@ -0,0 +1,86 @@
+5 of 24 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ sys2 sys3 ]
+
+ * Full List of Resources:
+ * vm1 (ocf:heartbeat:Xen): Started sys2
+ * vm2 (ocf:heartbeat:Xen): Stopped (disabled)
+ * vm3 (ocf:heartbeat:Xen): Stopped (disabled)
+ * vm4 (ocf:heartbeat:Xen): Stopped (disabled)
+ * stonithsys2 (stonith:external/ipmi): Stopped
+ * stonithsys3 (stonith:external/ipmi): Started sys2
+ * Clone Set: baseclone [basegrp]:
+ * Started: [ sys2 ]
+ * Stopped: [ sys3 ]
+ * Clone Set: fs1 [nfs1] (disabled):
+ * Stopped (disabled): [ sys2 sys3 ]
+
+Transition Summary:
+ * Restart stonithsys3 ( sys2 ) due to resource definition change
+ * Start controld:1 ( sys3 )
+ * Start clvmd:1 ( sys3 )
+ * Start o2cb:1 ( sys3 )
+ * Start iscsi1:1 ( sys3 )
+ * Start iscsi2:1 ( sys3 )
+ * Start vg1:1 ( sys3 )
+ * Start vg2:1 ( sys3 )
+ * Start fs2:1 ( sys3 )
+ * Start stonithsys2 ( sys3 )
+
+Executing Cluster Transition:
+ * Resource action: vm1 monitor on sys3
+ * Resource action: vm2 monitor on sys3
+ * Resource action: vm3 monitor on sys3
+ * Resource action: vm4 monitor on sys3
+ * Resource action: stonithsys3 stop on sys2
+ * Resource action: stonithsys3 monitor on sys3
+ * Resource action: stonithsys3 start on sys2
+ * Resource action: stonithsys3 monitor=15000 on sys2
+ * Resource action: controld:1 monitor on sys3
+ * Resource action: clvmd:1 monitor on sys3
+ * Resource action: o2cb:1 monitor on sys3
+ * Resource action: iscsi1:1 monitor on sys3
+ * Resource action: iscsi2:1 monitor on sys3
+ * Resource action: vg1:1 monitor on sys3
+ * Resource action: vg2:1 monitor on sys3
+ * Resource action: fs2:1 monitor on sys3
+ * Pseudo action: baseclone_start_0
+ * Resource action: nfs1:0 monitor on sys3
+ * Resource action: stonithsys2 monitor on sys3
+ * Pseudo action: load_stopped_sys3
+ * Pseudo action: load_stopped_sys2
+ * Pseudo action: basegrp:1_start_0
+ * Resource action: controld:1 start on sys3
+ * Resource action: clvmd:1 start on sys3
+ * Resource action: o2cb:1 start on sys3
+ * Resource action: iscsi1:1 start on sys3
+ * Resource action: iscsi2:1 start on sys3
+ * Resource action: vg1:1 start on sys3
+ * Resource action: vg2:1 start on sys3
+ * Resource action: fs2:1 start on sys3
+ * Resource action: stonithsys2 start on sys3
+ * Pseudo action: basegrp:1_running_0
+ * Resource action: controld:1 monitor=10000 on sys3
+ * Resource action: iscsi1:1 monitor=120000 on sys3
+ * Resource action: iscsi2:1 monitor=120000 on sys3
+ * Resource action: fs2:1 monitor=20000 on sys3
+ * Pseudo action: baseclone_running_0
+ * Resource action: stonithsys2 monitor=15000 on sys3
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ sys2 sys3 ]
+
+ * Full List of Resources:
+ * vm1 (ocf:heartbeat:Xen): Started sys2
+ * vm2 (ocf:heartbeat:Xen): Stopped (disabled)
+ * vm3 (ocf:heartbeat:Xen): Stopped (disabled)
+ * vm4 (ocf:heartbeat:Xen): Stopped (disabled)
+ * stonithsys2 (stonith:external/ipmi): Started sys3
+ * stonithsys3 (stonith:external/ipmi): Started sys2
+ * Clone Set: baseclone [basegrp]:
+ * Started: [ sys2 sys3 ]
+ * Clone Set: fs1 [nfs1] (disabled):
+ * Stopped (disabled): [ sys2 sys3 ]
diff --git a/cts/scheduler/summary/rsc-sets-clone.summary b/cts/scheduler/summary/rsc-sets-clone.summary
new file mode 100644
index 0000000..ac3ad53
--- /dev/null
+++ b/cts/scheduler/summary/rsc-sets-clone.summary
@@ -0,0 +1,38 @@
+Current cluster status:
+ * Node List:
+ * Node node1: standby (with active resources)
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+ * rsc3 (ocf:pacemaker:Dummy): Started node1
+ * Clone Set: clone-rsc [rsc]:
+ * Started: [ node1 node2 ]
+
+Transition Summary:
+ * Move rsc1 ( node1 -> node2 )
+ * Move rsc3 ( node1 -> node2 )
+ * Stop rsc:0 ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node1
+ * Resource action: rsc3 stop on node1
+ * Pseudo action: clone-rsc_stop_0
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc3 start on node2
+ * Resource action: rsc:0 stop on node1
+ * Pseudo action: clone-rsc_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Node node1: standby
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+ * rsc3 (ocf:pacemaker:Dummy): Started node2
+ * Clone Set: clone-rsc [rsc]:
+ * Started: [ node2 ]
+ * Stopped: [ node1 ]
diff --git a/cts/scheduler/summary/rsc-sets-promoted.summary b/cts/scheduler/summary/rsc-sets-promoted.summary
new file mode 100644
index 0000000..af78ecb
--- /dev/null
+++ b/cts/scheduler/summary/rsc-sets-promoted.summary
@@ -0,0 +1,49 @@
+Current cluster status:
+ * Node List:
+ * Node node1: standby (with active resources)
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * Clone Set: ms-rsc [rsc] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+ * rsc3 (ocf:pacemaker:Dummy): Started node1
+
+Transition Summary:
+ * Stop rsc:0 ( Promoted node1 ) due to node availability
+ * Promote rsc:1 ( Unpromoted -> Promoted node2 )
+ * Move rsc1 ( node1 -> node2 )
+ * Move rsc2 ( node1 -> node2 )
+ * Move rsc3 ( node1 -> node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node1
+ * Resource action: rsc2 stop on node1
+ * Resource action: rsc3 stop on node1
+ * Pseudo action: ms-rsc_demote_0
+ * Resource action: rsc:0 demote on node1
+ * Pseudo action: ms-rsc_demoted_0
+ * Pseudo action: ms-rsc_stop_0
+ * Resource action: rsc:0 stop on node1
+ * Pseudo action: ms-rsc_stopped_0
+ * Pseudo action: ms-rsc_promote_0
+ * Resource action: rsc:1 promote on node2
+ * Pseudo action: ms-rsc_promoted_0
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 start on node2
+ * Resource action: rsc3 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Node node1: standby
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * Clone Set: ms-rsc [rsc] (promotable):
+ * Promoted: [ node2 ]
+ * Stopped: [ node1 ]
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+ * rsc3 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/rsc-sets-seq-false.summary b/cts/scheduler/summary/rsc-sets-seq-false.summary
new file mode 100644
index 0000000..e864c17
--- /dev/null
+++ b/cts/scheduler/summary/rsc-sets-seq-false.summary
@@ -0,0 +1,47 @@
+Current cluster status:
+ * Node List:
+ * Node node1: standby (with active resources)
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+ * rsc3 (ocf:pacemaker:Dummy): Started node1
+ * rsc4 (ocf:pacemaker:Dummy): Started node1
+ * rsc5 (ocf:pacemaker:Dummy): Started node1
+ * rsc6 (ocf:pacemaker:Dummy): Started node1
+
+Transition Summary:
+ * Move rsc1 ( node1 -> node2 )
+ * Move rsc2 ( node1 -> node2 )
+ * Move rsc3 ( node1 -> node2 )
+ * Move rsc4 ( node1 -> node2 )
+ * Move rsc5 ( node1 -> node2 )
+ * Move rsc6 ( node1 -> node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc4 stop on node1
+ * Resource action: rsc5 stop on node1
+ * Resource action: rsc6 stop on node1
+ * Resource action: rsc3 stop on node1
+ * Resource action: rsc2 stop on node1
+ * Resource action: rsc1 stop on node1
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 start on node2
+ * Resource action: rsc3 start on node2
+ * Resource action: rsc4 start on node2
+ * Resource action: rsc5 start on node2
+ * Resource action: rsc6 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Node node1: standby
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+ * rsc3 (ocf:pacemaker:Dummy): Started node2
+ * rsc4 (ocf:pacemaker:Dummy): Started node2
+ * rsc5 (ocf:pacemaker:Dummy): Started node2
+ * rsc6 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/rsc-sets-seq-true.summary b/cts/scheduler/summary/rsc-sets-seq-true.summary
new file mode 100644
index 0000000..fec65c4
--- /dev/null
+++ b/cts/scheduler/summary/rsc-sets-seq-true.summary
@@ -0,0 +1,47 @@
+Current cluster status:
+ * Node List:
+ * Node node1: standby (with active resources)
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+ * rsc3 (ocf:pacemaker:Dummy): Started node1
+ * rsc4 (ocf:pacemaker:Dummy): Started node1
+ * rsc5 (ocf:pacemaker:Dummy): Started node1
+ * rsc6 (ocf:pacemaker:Dummy): Started node1
+
+Transition Summary:
+ * Move rsc1 ( node1 -> node2 )
+ * Move rsc2 ( node1 -> node2 )
+ * Move rsc3 ( node1 -> node2 )
+ * Move rsc4 ( node1 -> node2 )
+ * Move rsc5 ( node1 -> node2 )
+ * Move rsc6 ( node1 -> node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc6 stop on node1
+ * Resource action: rsc5 stop on node1
+ * Resource action: rsc4 stop on node1
+ * Resource action: rsc3 stop on node1
+ * Resource action: rsc2 stop on node1
+ * Resource action: rsc1 stop on node1
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 start on node2
+ * Resource action: rsc3 start on node2
+ * Resource action: rsc4 start on node2
+ * Resource action: rsc5 start on node2
+ * Resource action: rsc6 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Node node1: standby
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+ * rsc3 (ocf:pacemaker:Dummy): Started node2
+ * rsc4 (ocf:pacemaker:Dummy): Started node2
+ * rsc5 (ocf:pacemaker:Dummy): Started node2
+ * rsc6 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/rsc_dep1.summary b/cts/scheduler/summary/rsc_dep1.summary
new file mode 100644
index 0000000..c7d9ebf
--- /dev/null
+++ b/cts/scheduler/summary/rsc_dep1.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc2 (ocf:heartbeat:apache): Stopped
+ * rsc1 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc2 ( node1 )
+ * Start rsc1 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 start on node1
+ * Resource action: rsc1 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * rsc1 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/rsc_dep10.summary b/cts/scheduler/summary/rsc_dep10.summary
new file mode 100644
index 0000000..da800c8
--- /dev/null
+++ b/cts/scheduler/summary/rsc_dep10.summary
@@ -0,0 +1,25 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * rsc2 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc2 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc2 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * rsc2 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/rsc_dep2.summary b/cts/scheduler/summary/rsc_dep2.summary
new file mode 100644
index 0000000..d66735a
--- /dev/null
+++ b/cts/scheduler/summary/rsc_dep2.summary
@@ -0,0 +1,33 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc2 (ocf:heartbeat:apache): Stopped
+ * rsc4 (ocf:heartbeat:apache): Started node2
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc3 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc2 ( node1 )
+ * Start rsc3 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc4 monitor on node1
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc2 start on node1
+ * Resource action: rsc3 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * rsc4 (ocf:heartbeat:apache): Started node2
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc3 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/rsc_dep3.summary b/cts/scheduler/summary/rsc_dep3.summary
new file mode 100644
index 0000000..e48f5cf
--- /dev/null
+++ b/cts/scheduler/summary/rsc_dep3.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc2 (ocf:heartbeat:apache): Stopped
+ * rsc1 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc2 ( node1 )
+ * Start rsc1 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 start on node1
+ * Resource action: rsc1 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * rsc1 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/rsc_dep4.summary b/cts/scheduler/summary/rsc_dep4.summary
new file mode 100644
index 0000000..b4f280d
--- /dev/null
+++ b/cts/scheduler/summary/rsc_dep4.summary
@@ -0,0 +1,36 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc2 (ocf:heartbeat:apache): Stopped
+ * rsc4 (ocf:heartbeat:apache): Started node1
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc3 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc2 ( node1 )
+ * Move rsc4 ( node1 -> node2 )
+ * Start rsc3 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc4 stop on node1
+ * Resource action: rsc4 monitor on node2
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc2 start on node1
+ * Resource action: rsc4 start on node2
+ * Resource action: rsc3 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * rsc4 (ocf:heartbeat:apache): Started node2
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc3 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/rsc_dep5.summary b/cts/scheduler/summary/rsc_dep5.summary
new file mode 100644
index 0000000..cab6653
--- /dev/null
+++ b/cts/scheduler/summary/rsc_dep5.summary
@@ -0,0 +1,31 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc3 (ocf:heartbeat:apache): Stopped
+ * rsc2 (ocf:heartbeat:apache): Stopped
+ * rsc1 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc3 ( node1 )
+ * Start rsc2 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc3 start on node1
+ * Resource action: rsc2 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc3 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node2
+ * rsc1 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/rsc_dep7.summary b/cts/scheduler/summary/rsc_dep7.summary
new file mode 100644
index 0000000..8d4b6ec
--- /dev/null
+++ b/cts/scheduler/summary/rsc_dep7.summary
@@ -0,0 +1,33 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc3 (ocf:heartbeat:apache): Stopped
+ * rsc2 (ocf:heartbeat:apache): Stopped
+ * rsc1 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc3 ( node1 )
+ * Start rsc2 ( node1 )
+ * Start rsc1 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc3 start on node1
+ * Resource action: rsc2 start on node1
+ * Resource action: rsc1 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc3 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * rsc1 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/rsc_dep8.summary b/cts/scheduler/summary/rsc_dep8.summary
new file mode 100644
index 0000000..d66735a
--- /dev/null
+++ b/cts/scheduler/summary/rsc_dep8.summary
@@ -0,0 +1,33 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc2 (ocf:heartbeat:apache): Stopped
+ * rsc4 (ocf:heartbeat:apache): Started node2
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc3 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc2 ( node1 )
+ * Start rsc3 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc4 monitor on node1
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc2 start on node1
+ * Resource action: rsc3 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * rsc4 (ocf:heartbeat:apache): Started node2
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc3 (ocf:heartbeat:apache): Started node2
diff --git a/cts/scheduler/summary/rule-dbl-as-auto-number-match.summary b/cts/scheduler/summary/rule-dbl-as-auto-number-match.summary
new file mode 100644
index 0000000..32c5645
--- /dev/null
+++ b/cts/scheduler/summary/rule-dbl-as-auto-number-match.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Started node1
+
+Transition Summary:
+ * Stop dummy ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: dummy stop on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Stopped
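
Note on the rule-dbl-* family: these fixtures test how a floating-point value in a location-rule expression is compared when the expression type is auto-detected or given explicitly as number. In each "match" case the rule fires and bans dummy from node1; with node2 in standby the only legal action is a stop, while the "no-match" twin produces an empty transition. A sketch of such a rule (attribute name, operator, and value are hypothetical):

    <rsc_location id="ban-dummy-by-attr" rsc="dummy">
      <rule id="ban-dummy-rule" score="-INFINITY">
        <!-- type="number" forces floating-point comparison -->
        <expression id="ban-dummy-expr" attribute="custom_attr"
                    operation="gt" value="1.5" type="number"/>
      </rule>
    </rsc_location>
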
diff --git a/cts/scheduler/summary/rule-dbl-as-auto-number-no-match.summary b/cts/scheduler/summary/rule-dbl-as-auto-number-no-match.summary
new file mode 100644
index 0000000..2bec6eb
--- /dev/null
+++ b/cts/scheduler/summary/rule-dbl-as-auto-number-no-match.summary
@@ -0,0 +1,19 @@
+Current cluster status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Started node1
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Started node1
diff --git a/cts/scheduler/summary/rule-dbl-as-integer-match.summary b/cts/scheduler/summary/rule-dbl-as-integer-match.summary
new file mode 100644
index 0000000..32c5645
--- /dev/null
+++ b/cts/scheduler/summary/rule-dbl-as-integer-match.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Started node1
+
+Transition Summary:
+ * Stop dummy ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: dummy stop on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Stopped
diff --git a/cts/scheduler/summary/rule-dbl-as-integer-no-match.summary b/cts/scheduler/summary/rule-dbl-as-integer-no-match.summary
new file mode 100644
index 0000000..2bec6eb
--- /dev/null
+++ b/cts/scheduler/summary/rule-dbl-as-integer-no-match.summary
@@ -0,0 +1,19 @@
+Current cluster status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Started node1
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Started node1
diff --git a/cts/scheduler/summary/rule-dbl-as-number-match.summary b/cts/scheduler/summary/rule-dbl-as-number-match.summary
new file mode 100644
index 0000000..32c5645
--- /dev/null
+++ b/cts/scheduler/summary/rule-dbl-as-number-match.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Started node1
+
+Transition Summary:
+ * Stop dummy ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: dummy stop on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Stopped
diff --git a/cts/scheduler/summary/rule-dbl-as-number-no-match.summary b/cts/scheduler/summary/rule-dbl-as-number-no-match.summary
new file mode 100644
index 0000000..2bec6eb
--- /dev/null
+++ b/cts/scheduler/summary/rule-dbl-as-number-no-match.summary
@@ -0,0 +1,19 @@
+Current cluster status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Started node1
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Started node1
diff --git a/cts/scheduler/summary/rule-dbl-parse-fail-default-str-match.summary b/cts/scheduler/summary/rule-dbl-parse-fail-default-str-match.summary
new file mode 100644
index 0000000..32c5645
--- /dev/null
+++ b/cts/scheduler/summary/rule-dbl-parse-fail-default-str-match.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Started node1
+
+Transition Summary:
+ * Stop dummy ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: dummy stop on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Stopped
diff --git a/cts/scheduler/summary/rule-dbl-parse-fail-default-str-no-match.summary b/cts/scheduler/summary/rule-dbl-parse-fail-default-str-no-match.summary
new file mode 100644
index 0000000..2bec6eb
--- /dev/null
+++ b/cts/scheduler/summary/rule-dbl-parse-fail-default-str-no-match.summary
@@ -0,0 +1,19 @@
+Current cluster status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Started node1
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Started node1
diff --git a/cts/scheduler/summary/rule-int-as-auto-integer-match.summary b/cts/scheduler/summary/rule-int-as-auto-integer-match.summary
new file mode 100644
index 0000000..32c5645
--- /dev/null
+++ b/cts/scheduler/summary/rule-int-as-auto-integer-match.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Started node1
+
+Transition Summary:
+ * Stop dummy ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: dummy stop on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Stopped
diff --git a/cts/scheduler/summary/rule-int-as-auto-integer-no-match.summary b/cts/scheduler/summary/rule-int-as-auto-integer-no-match.summary
new file mode 100644
index 0000000..2bec6eb
--- /dev/null
+++ b/cts/scheduler/summary/rule-int-as-auto-integer-no-match.summary
@@ -0,0 +1,19 @@
+Current cluster status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Started node1
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Started node1
diff --git a/cts/scheduler/summary/rule-int-as-integer-match.summary b/cts/scheduler/summary/rule-int-as-integer-match.summary
new file mode 100644
index 0000000..32c5645
--- /dev/null
+++ b/cts/scheduler/summary/rule-int-as-integer-match.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Started node1
+
+Transition Summary:
+ * Stop dummy ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: dummy stop on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Stopped
diff --git a/cts/scheduler/summary/rule-int-as-integer-no-match.summary b/cts/scheduler/summary/rule-int-as-integer-no-match.summary
new file mode 100644
index 0000000..2bec6eb
--- /dev/null
+++ b/cts/scheduler/summary/rule-int-as-integer-no-match.summary
@@ -0,0 +1,19 @@
+Current cluster status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Started node1
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Started node1
diff --git a/cts/scheduler/summary/rule-int-as-number-match.summary b/cts/scheduler/summary/rule-int-as-number-match.summary
new file mode 100644
index 0000000..32c5645
--- /dev/null
+++ b/cts/scheduler/summary/rule-int-as-number-match.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Started node1
+
+Transition Summary:
+ * Stop dummy ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: dummy stop on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Stopped
diff --git a/cts/scheduler/summary/rule-int-as-number-no-match.summary b/cts/scheduler/summary/rule-int-as-number-no-match.summary
new file mode 100644
index 0000000..2bec6eb
--- /dev/null
+++ b/cts/scheduler/summary/rule-int-as-number-no-match.summary
@@ -0,0 +1,19 @@
+Current cluster status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Started node1
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Started node1
diff --git a/cts/scheduler/summary/rule-int-parse-fail-default-str-match.summary b/cts/scheduler/summary/rule-int-parse-fail-default-str-match.summary
new file mode 100644
index 0000000..32c5645
--- /dev/null
+++ b/cts/scheduler/summary/rule-int-parse-fail-default-str-match.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Started node1
+
+Transition Summary:
+ * Stop dummy ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: dummy stop on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Stopped
diff --git a/cts/scheduler/summary/rule-int-parse-fail-default-str-no-match.summary b/cts/scheduler/summary/rule-int-parse-fail-default-str-no-match.summary
new file mode 100644
index 0000000..2bec6eb
--- /dev/null
+++ b/cts/scheduler/summary/rule-int-parse-fail-default-str-no-match.summary
@@ -0,0 +1,19 @@
+Current cluster status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Started node1
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Node node2: standby
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * dummy (ocf:heartbeat:Dummy): Started node1
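
Note: the rule-int-* fixtures above repeat the identical match/no-match pair for integer-typed comparisons, and the *-parse-fail-default-str pairs (per their names) cover values that fail numeric parsing, falling back to string comparison. Only the rule in the input XML differs between these tests; the expected summaries themselves are regenerated from the inputs in cts/scheduler/xml/ by the cts-scheduler tool rather than edited by hand. The integer variant of the earlier sketch would differ only in the expression type (IDs hypothetical):

    <expression id="ban-dummy-expr" attribute="custom_attr"
                operation="gt" value="1" type="integer"/>
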
diff --git a/cts/scheduler/summary/shutdown-lock-expiration.summary b/cts/scheduler/summary/shutdown-lock-expiration.summary
new file mode 100644
index 0000000..aa6f2e8
--- /dev/null
+++ b/cts/scheduler/summary/shutdown-lock-expiration.summary
@@ -0,0 +1,33 @@
+Using the original execution date of: 2020-01-06 22:11:40Z
+Current cluster status:
+ * Node List:
+ * Online: [ node3 node4 node5 ]
+ * OFFLINE: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node3
+ * rsc1 (ocf:pacemaker:Dummy): Stopped node1 (LOCKED)
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Restart Fencing ( node3 ) due to resource definition change
+ * Start rsc2 ( node4 )
+
+Executing Cluster Transition:
+ * Resource action: Fencing stop on node3
+ * Resource action: Fencing start on node3
+ * Resource action: Fencing monitor=120000 on node3
+ * Resource action: rsc2 start on node4
+ * Cluster action: lrm_delete for rsc2 on node2
+ * Resource action: rsc2 monitor=10000 on node4
+Using the original execution date of: 2020-01-06 22:11:40Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node3 node4 node5 ]
+ * OFFLINE: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node3
+ * rsc1 (ocf:pacemaker:Dummy): Stopped node1 (LOCKED)
+ * rsc2 (ocf:pacemaker:Dummy): Started node4
diff --git a/cts/scheduler/summary/shutdown-lock.summary b/cts/scheduler/summary/shutdown-lock.summary
new file mode 100644
index 0000000..e36a005
--- /dev/null
+++ b/cts/scheduler/summary/shutdown-lock.summary
@@ -0,0 +1,32 @@
+Using the original execution date of: 2020-01-06 21:59:11Z
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node3 node4 node5 ]
+ * OFFLINE: [ node2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+ * rsc2 (ocf:pacemaker:Dummy): Stopped node2 (LOCKED)
+
+Transition Summary:
+ * Move Fencing ( node1 -> node3 )
+ * Stop rsc1 ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: Fencing stop on node1
+ * Resource action: rsc1 stop on node1
+ * Cluster action: do_shutdown on node1
+ * Resource action: Fencing start on node3
+ * Resource action: Fencing monitor=120000 on node3
+Using the original execution date of: 2020-01-06 21:59:11Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node3 node4 node5 ]
+ * OFFLINE: [ node2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node3
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped node2 (LOCKED)
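
Note on the shutdown-lock pair: with the shutdown-lock cluster option, resources active on a cleanly shut-down node are reserved for it, shown as "Stopped <node> (LOCKED)", and may not be recovered elsewhere. shutdown-lock-expiration additionally shows the lock timing out: once shutdown-lock-limit passes, the cluster clears the stale resource history (the lrm_delete action on node2) and starts rsc2 on node4. The options involved (IDs and limit value hypothetical):

    <cluster_property_set id="cib-bootstrap-options">
      <!-- keep a cleanly stopped node's resources reserved for it... -->
      <nvpair id="opt-shutdown-lock" name="shutdown-lock" value="true"/>
      <!-- ...but only for this long -->
      <nvpair id="opt-shutdown-lock-limit" name="shutdown-lock-limit"
              value="2h"/>
    </cluster_property_set>
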
diff --git a/cts/scheduler/summary/shutdown-maintenance-node.summary b/cts/scheduler/summary/shutdown-maintenance-node.summary
new file mode 100644
index 0000000..b8bca96
--- /dev/null
+++ b/cts/scheduler/summary/shutdown-maintenance-node.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Node sle12sp2-2: OFFLINE (maintenance)
+ * Online: [ sle12sp2-1 ]
+
+ * Full List of Resources:
+ * st-sbd (stonith:external/sbd): Started sle12sp2-1
+ * dummy1 (ocf:pacemaker:Dummy): Started sle12sp2-2 (maintenance)
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Node sle12sp2-2: OFFLINE (maintenance)
+ * Online: [ sle12sp2-1 ]
+
+ * Full List of Resources:
+ * st-sbd (stonith:external/sbd): Started sle12sp2-1
+ * dummy1 (ocf:pacemaker:Dummy): Started sle12sp2-2 (maintenance)
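
Note on shutdown-maintenance-node: a node in maintenance mode is left entirely alone. Even though sle12sp2-2 is reported OFFLINE, its resource is still shown as Started (maintenance), the transition is empty, and nothing is stopped, started, or monitored. Maintenance mode is a per-node attribute (node ID hypothetical):

    <node id="2" uname="sle12sp2-2">
      <instance_attributes id="nodes-2">
        <!-- the cluster manages nothing on this node while true -->
        <nvpair id="nodes-2-maintenance" name="maintenance" value="true"/>
      </instance_attributes>
    </node>
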
diff --git a/cts/scheduler/summary/simple1.summary b/cts/scheduler/summary/simple1.summary
new file mode 100644
index 0000000..14afbe2
--- /dev/null
+++ b/cts/scheduler/summary/simple1.summary
@@ -0,0 +1,17 @@
+Current cluster status:
+ * Node List:
+ * OFFLINE: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * OFFLINE: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/simple11.summary b/cts/scheduler/summary/simple11.summary
new file mode 100644
index 0000000..fc329b8
--- /dev/null
+++ b/cts/scheduler/summary/simple11.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * rsc2 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+ * Start rsc2 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * rsc2 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/simple12.summary b/cts/scheduler/summary/simple12.summary
new file mode 100644
index 0000000..4e654b6
--- /dev/null
+++ b/cts/scheduler/summary/simple12.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+ * rsc2 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+ * Start rsc2 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node2
+ * rsc2 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/simple2.summary b/cts/scheduler/summary/simple2.summary
new file mode 100644
index 0000000..7d133a8
--- /dev/null
+++ b/cts/scheduler/summary/simple2.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc1 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/simple3.summary b/cts/scheduler/summary/simple3.summary
new file mode 100644
index 0000000..9ca3dd4
--- /dev/null
+++ b/cts/scheduler/summary/simple3.summary
@@ -0,0 +1,19 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/simple4.summary b/cts/scheduler/summary/simple4.summary
new file mode 100644
index 0000000..456e7dc
--- /dev/null
+++ b/cts/scheduler/summary/simple4.summary
@@ -0,0 +1,19 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): FAILED node1
+
+Transition Summary:
+ * Stop rsc1 ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/simple6.summary b/cts/scheduler/summary/simple6.summary
new file mode 100644
index 0000000..5f6c9ce
--- /dev/null
+++ b/cts/scheduler/summary/simple6.summary
@@ -0,0 +1,24 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc2 (ocf:heartbeat:apache): Stopped
+ * rsc1 (ocf:heartbeat:apache): Started node1
+
+Transition Summary:
+ * Start rsc2 ( node1 )
+ * Stop rsc1 ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc1 stop on node1
+ * Resource action: rsc2 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc2 (ocf:heartbeat:apache): Started node1
+ * rsc1 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/simple7.summary b/cts/scheduler/summary/simple7.summary
new file mode 100644
index 0000000..fa102ed
--- /dev/null
+++ b/cts/scheduler/summary/simple7.summary
@@ -0,0 +1,20 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+
+Transition Summary:
+ * Stop rsc1 ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node1
+ * Cluster action: do_shutdown on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:heartbeat:apache): Stopped
diff --git a/cts/scheduler/summary/simple8.summary b/cts/scheduler/summary/simple8.summary
new file mode 100644
index 0000000..24bf53b
--- /dev/null
+++ b/cts/scheduler/summary/simple8.summary
@@ -0,0 +1,31 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc3 (ocf:heartbeat:apache): Started node1
+ * rsc4 (ocf:heartbeat:apache): Started node1
+ * Resource Group: foo:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * Resource Group: bar:
+ * rsc2 (ocf:heartbeat:apache): Started node1
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc4 monitor on node2
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc2 monitor on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc3 (ocf:heartbeat:apache): Started node1
+ * rsc4 (ocf:heartbeat:apache): Started node1
+ * Resource Group: foo:
+ * rsc1 (ocf:heartbeat:apache): Started node1
+ * Resource Group: bar:
+ * rsc2 (ocf:heartbeat:apache): Started node1
diff --git a/cts/scheduler/summary/site-specific-params.summary b/cts/scheduler/summary/site-specific-params.summary
new file mode 100644
index 0000000..08a1dbb
--- /dev/null
+++ b/cts/scheduler/summary/site-specific-params.summary
@@ -0,0 +1,25 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node3
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc1 monitor=10000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
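
Note on site-specific-params: the test covers rule-based instance attributes, the mechanism for giving one resource different parameter values depending on where it runs. The summary itself shows only a routine start with probes; the interesting part is the input CIB pattern, roughly (attribute names and values hypothetical):

    <primitive id="rsc1" class="ocf" provider="pacemaker" type="Dummy">
      <instance_attributes id="rsc1-params-site-a">
        <rule id="rsc1-site-a-rule" score="0">
          <expression id="rsc1-site-a-expr" attribute="site"
                      operation="eq" value="a"/>
        </rule>
        <!-- applied only on nodes whose "site" attribute equals "a" -->
        <nvpair id="rsc1-site-a-param" name="fake" value="value-for-site-a"/>
      </instance_attributes>
    </primitive>
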
diff --git a/cts/scheduler/summary/standby.summary b/cts/scheduler/summary/standby.summary
new file mode 100644
index 0000000..b13326e
--- /dev/null
+++ b/cts/scheduler/summary/standby.summary
@@ -0,0 +1,87 @@
+Current cluster status:
+ * Node List:
+ * Node sapcl02: standby (with active resources)
+ * Node sapcl03: standby (with active resources)
+ * Online: [ sapcl01 ]
+
+ * Full List of Resources:
+ * Resource Group: app01:
+ * IPaddr_192_168_1_101 (ocf:heartbeat:IPaddr): Started sapcl01
+ * LVM_2 (ocf:heartbeat:LVM): Started sapcl01
+ * Filesystem_3 (ocf:heartbeat:Filesystem): Started sapcl01
+ * Resource Group: app02:
+ * IPaddr_192_168_1_102 (ocf:heartbeat:IPaddr): Started sapcl02
+ * LVM_12 (ocf:heartbeat:LVM): Started sapcl02
+ * Filesystem_13 (ocf:heartbeat:Filesystem): Started sapcl02
+ * Resource Group: oracle:
+ * IPaddr_192_168_1_104 (ocf:heartbeat:IPaddr): Started sapcl03
+ * LVM_22 (ocf:heartbeat:LVM): Started sapcl03
+ * Filesystem_23 (ocf:heartbeat:Filesystem): Started sapcl03
+ * oracle_24 (ocf:heartbeat:oracle): Started sapcl03
+ * oralsnr_25 (ocf:heartbeat:oralsnr): Started sapcl03
+
+Transition Summary:
+ * Move IPaddr_192_168_1_102 ( sapcl02 -> sapcl01 )
+ * Move LVM_12 ( sapcl02 -> sapcl01 )
+ * Move Filesystem_13 ( sapcl02 -> sapcl01 )
+ * Move IPaddr_192_168_1_104 ( sapcl03 -> sapcl01 )
+ * Move LVM_22 ( sapcl03 -> sapcl01 )
+ * Move Filesystem_23 ( sapcl03 -> sapcl01 )
+ * Move oracle_24 ( sapcl03 -> sapcl01 )
+ * Move oralsnr_25 ( sapcl03 -> sapcl01 )
+
+Executing Cluster Transition:
+ * Pseudo action: app02_stop_0
+ * Resource action: Filesystem_13 stop on sapcl02
+ * Pseudo action: oracle_stop_0
+ * Resource action: oralsnr_25 stop on sapcl03
+ * Resource action: LVM_12 stop on sapcl02
+ * Resource action: oracle_24 stop on sapcl03
+ * Resource action: IPaddr_192_168_1_102 stop on sapcl02
+ * Resource action: Filesystem_23 stop on sapcl03
+ * Pseudo action: app02_stopped_0
+ * Pseudo action: app02_start_0
+ * Resource action: IPaddr_192_168_1_102 start on sapcl01
+ * Resource action: LVM_12 start on sapcl01
+ * Resource action: Filesystem_13 start on sapcl01
+ * Resource action: LVM_22 stop on sapcl03
+ * Pseudo action: app02_running_0
+ * Resource action: IPaddr_192_168_1_102 monitor=5000 on sapcl01
+ * Resource action: LVM_12 monitor=120000 on sapcl01
+ * Resource action: Filesystem_13 monitor=120000 on sapcl01
+ * Resource action: IPaddr_192_168_1_104 stop on sapcl03
+ * Pseudo action: oracle_stopped_0
+ * Pseudo action: oracle_start_0
+ * Resource action: IPaddr_192_168_1_104 start on sapcl01
+ * Resource action: LVM_22 start on sapcl01
+ * Resource action: Filesystem_23 start on sapcl01
+ * Resource action: oracle_24 start on sapcl01
+ * Resource action: oralsnr_25 start on sapcl01
+ * Pseudo action: oracle_running_0
+ * Resource action: IPaddr_192_168_1_104 monitor=5000 on sapcl01
+ * Resource action: LVM_22 monitor=120000 on sapcl01
+ * Resource action: Filesystem_23 monitor=120000 on sapcl01
+ * Resource action: oracle_24 monitor=120000 on sapcl01
+ * Resource action: oralsnr_25 monitor=120000 on sapcl01
+
+Revised Cluster Status:
+ * Node List:
+ * Node sapcl02: standby
+ * Node sapcl03: standby
+ * Online: [ sapcl01 ]
+
+ * Full List of Resources:
+ * Resource Group: app01:
+ * IPaddr_192_168_1_101 (ocf:heartbeat:IPaddr): Started sapcl01
+ * LVM_2 (ocf:heartbeat:LVM): Started sapcl01
+ * Filesystem_3 (ocf:heartbeat:Filesystem): Started sapcl01
+ * Resource Group: app02:
+ * IPaddr_192_168_1_102 (ocf:heartbeat:IPaddr): Started sapcl01
+ * LVM_12 (ocf:heartbeat:LVM): Started sapcl01
+ * Filesystem_13 (ocf:heartbeat:Filesystem): Started sapcl01
+ * Resource Group: oracle:
+ * IPaddr_192_168_1_104 (ocf:heartbeat:IPaddr): Started sapcl01
+ * LVM_22 (ocf:heartbeat:LVM): Started sapcl01
+ * Filesystem_23 (ocf:heartbeat:Filesystem): Started sapcl01
+ * oracle_24 (ocf:heartbeat:oracle): Started sapcl01
+ * oralsnr_25 (ocf:heartbeat:oralsnr): Started sapcl01
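
Note on standby: putting sapcl02 and sapcl03 into standby evacuates both groups to sapcl01. The transition also illustrates ordered-group semantics: each group stops back-to-front and restarts front-to-back, bracketed by its *_stop_0/*_start_0 pseudo actions, and the node list drops the "(with active resources)" qualifier once the moves finish. Standby is a node attribute (IDs hypothetical):

    <node id="2" uname="sapcl02">
      <instance_attributes id="nodes-sapcl02">
        <!-- node stays in the cluster but may not run resources -->
        <nvpair id="nodes-sapcl02-standby" name="standby" value="on"/>
      </instance_attributes>
    </node>
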
diff --git a/cts/scheduler/summary/start-then-stop-with-unfence.summary b/cts/scheduler/summary/start-then-stop-with-unfence.summary
new file mode 100644
index 0000000..8d83fcc
--- /dev/null
+++ b/cts/scheduler/summary/start-then-stop-with-unfence.summary
@@ -0,0 +1,44 @@
+Current cluster status:
+ * Node List:
+ * Online: [ rhel7-node1.example.com rhel7-node2.example.com ]
+
+ * Full List of Resources:
+ * mpath-node2 (stonith:fence_mpath): Started rhel7-node2.example.com
+ * mpath-node1 (stonith:fence_mpath): Stopped
+ * ip1 (ocf:heartbeat:IPaddr2): Started rhel7-node2.example.com
+ * ip2 (ocf:heartbeat:IPaddr2): Started rhel7-node2.example.com
+ * Clone Set: jrummy-clone [jrummy]:
+ * Started: [ rhel7-node2.example.com ]
+ * Stopped: [ rhel7-node1.example.com ]
+
+Transition Summary:
+ * Fence (on) rhel7-node1.example.com 'required by mpath-node2 monitor'
+ * Start mpath-node1 ( rhel7-node1.example.com )
+ * Move ip1 ( rhel7-node2.example.com -> rhel7-node1.example.com )
+ * Start jrummy:1 ( rhel7-node1.example.com )
+
+Executing Cluster Transition:
+ * Pseudo action: jrummy-clone_start_0
+ * Fencing rhel7-node1.example.com (on)
+ * Resource action: mpath-node2 monitor on rhel7-node1.example.com
+ * Resource action: mpath-node1 monitor on rhel7-node1.example.com
+ * Resource action: jrummy start on rhel7-node1.example.com
+ * Pseudo action: jrummy-clone_running_0
+ * Resource action: mpath-node1 start on rhel7-node1.example.com
+ * Resource action: ip1 stop on rhel7-node2.example.com
+ * Resource action: jrummy monitor=10000 on rhel7-node1.example.com
+ * Resource action: mpath-node1 monitor=60000 on rhel7-node1.example.com
+ * Resource action: ip1 start on rhel7-node1.example.com
+ * Resource action: ip1 monitor=10000 on rhel7-node1.example.com
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-node1.example.com rhel7-node2.example.com ]
+
+ * Full List of Resources:
+ * mpath-node2 (stonith:fence_mpath): Started rhel7-node2.example.com
+ * mpath-node1 (stonith:fence_mpath): Started rhel7-node1.example.com
+ * ip1 (ocf:heartbeat:IPaddr2): Started rhel7-node1.example.com
+ * ip2 (ocf:heartbeat:IPaddr2): Started rhel7-node2.example.com
+ * Clone Set: jrummy-clone [jrummy]:
+ * Started: [ rhel7-node1.example.com rhel7-node2.example.com ]
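
Note on start-then-stop-with-unfence: this covers unfencing. fence_mpath cuts nodes off from shared storage, so a node must first be "fenced on" (re-granted access) before anything requiring unfencing may run there; hence "Fence (on) rhel7-node1.example.com" precedes the probes and starts on that node. Devices advertise the capability with the provides meta-attribute (IDs hypothetical):

    <primitive id="mpath-node1" class="stonith" type="fence_mpath">
      <meta_attributes id="mpath-node1-meta">
        <!-- this device grants access on unfence, not just revokes it -->
        <nvpair id="mpath-node1-provides" name="provides" value="unfencing"/>
      </meta_attributes>
    </primitive>
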
diff --git a/cts/scheduler/summary/stonith-0.summary b/cts/scheduler/summary/stonith-0.summary
new file mode 100644
index 0000000..f9745bd
--- /dev/null
+++ b/cts/scheduler/summary/stonith-0.summary
@@ -0,0 +1,111 @@
+Current cluster status:
+ * Node List:
+ * Node c001n03: UNCLEAN (online)
+ * Node c001n05: UNCLEAN (online)
+ * Online: [ c001n02 c001n04 c001n06 c001n07 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Stopped
+ * Resource Group: group-1:
+ * ocf_192.168.100.181 (ocf:heartbeat:IPaddr): Started [ c001n03 c001n05 ]
+ * heartbeat_192.168.100.182 (ocf:heartbeat:IPaddr): Started c001n03
+ * ocf_192.168.100.183 (ocf:heartbeat:IPaddr): FAILED [ c001n03 c001n05 ]
+ * lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n04
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n06
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n04 (ocf:heartbeat:IPaddr): Started c001n04
+ * rsc_c001n05 (ocf:heartbeat:IPaddr): Started c001n05
+ * rsc_c001n06 (ocf:heartbeat:IPaddr): Started c001n06
+ * rsc_c001n07 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * Clone Set: DoFencing [child_DoFencing]:
+ * Started: [ c001n02 c001n04 c001n06 c001n07 c001n08 ]
+ * Stopped: [ c001n03 c001n05 ]
+ * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique):
+ * ocf_msdummy:0 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Promoted c001n02
+ * ocf_msdummy:1 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02
+ * ocf_msdummy:2 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n07
+ * ocf_msdummy:3 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n07
+ * ocf_msdummy:4 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n08
+ * ocf_msdummy:5 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n08
+ * ocf_msdummy:6 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:7 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:8 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:9 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:10 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n04
+ * ocf_msdummy:11 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n04
+ * ocf_msdummy:12 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n06
+ * ocf_msdummy:13 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n06
+
+Transition Summary:
+ * Fence (reboot) c001n05 'ocf_192.168.100.183 failed there'
+ * Fence (reboot) c001n03 'ocf_192.168.100.183 failed there'
+ * Move ocf_192.168.100.181 ( c001n03 -> c001n02 )
+ * Move heartbeat_192.168.100.182 ( c001n03 -> c001n02 )
+ * Recover ocf_192.168.100.183 ( c001n03 -> c001n02 )
+ * Move rsc_c001n05 ( c001n05 -> c001n07 )
+ * Move rsc_c001n07 ( c001n03 -> c001n07 )
+
+Executing Cluster Transition:
+ * Resource action: child_DoFencing:4 monitor=20000 on c001n08
+ * Fencing c001n05 (reboot)
+ * Fencing c001n03 (reboot)
+ * Pseudo action: group-1_stop_0
+ * Pseudo action: ocf_192.168.100.183_stop_0
+ * Pseudo action: ocf_192.168.100.183_stop_0
+ * Pseudo action: rsc_c001n05_stop_0
+ * Pseudo action: rsc_c001n07_stop_0
+ * Pseudo action: heartbeat_192.168.100.182_stop_0
+ * Resource action: rsc_c001n05 start on c001n07
+ * Resource action: rsc_c001n07 start on c001n07
+ * Pseudo action: ocf_192.168.100.181_stop_0
+ * Pseudo action: ocf_192.168.100.181_stop_0
+ * Resource action: rsc_c001n05 monitor=5000 on c001n07
+ * Resource action: rsc_c001n07 monitor=5000 on c001n07
+ * Pseudo action: group-1_stopped_0
+ * Pseudo action: group-1_start_0
+ * Resource action: ocf_192.168.100.181 start on c001n02
+ * Resource action: heartbeat_192.168.100.182 start on c001n02
+ * Resource action: ocf_192.168.100.183 start on c001n02
+ * Pseudo action: group-1_running_0
+ * Resource action: ocf_192.168.100.181 monitor=5000 on c001n02
+ * Resource action: heartbeat_192.168.100.182 monitor=5000 on c001n02
+ * Resource action: ocf_192.168.100.183 monitor=5000 on c001n02
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n02 c001n04 c001n06 c001n07 c001n08 ]
+ * OFFLINE: [ c001n03 c001n05 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Stopped
+ * Resource Group: group-1:
+ * ocf_192.168.100.181 (ocf:heartbeat:IPaddr): Started c001n02
+ * heartbeat_192.168.100.182 (ocf:heartbeat:IPaddr): Started c001n02
+ * ocf_192.168.100.183 (ocf:heartbeat:IPaddr): Started c001n02
+ * lsb_dummy (lsb:/usr/lib/heartbeat/cts/LSBDummy): Started c001n04
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n06
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n04 (ocf:heartbeat:IPaddr): Started c001n04
+ * rsc_c001n05 (ocf:heartbeat:IPaddr): Started c001n07
+ * rsc_c001n06 (ocf:heartbeat:IPaddr): Started c001n06
+ * rsc_c001n07 (ocf:heartbeat:IPaddr): Started c001n07
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * Clone Set: DoFencing [child_DoFencing]:
+ * Started: [ c001n02 c001n04 c001n06 c001n07 c001n08 ]
+ * Stopped: [ c001n03 c001n05 ]
+ * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique):
+ * ocf_msdummy:0 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Promoted c001n02
+ * ocf_msdummy:1 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n02
+ * ocf_msdummy:2 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n07
+ * ocf_msdummy:3 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n07
+ * ocf_msdummy:4 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n08
+ * ocf_msdummy:5 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n08
+ * ocf_msdummy:6 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:7 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:8 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:9 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Stopped
+ * ocf_msdummy:10 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n04
+ * ocf_msdummy:11 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n04
+ * ocf_msdummy:12 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n06
+ * ocf_msdummy:13 (ocf:heartbeat:/usr/lib/heartbeat/cts/OCFMSDummy): Unpromoted c001n06
diff --git a/cts/scheduler/summary/stonith-1.summary b/cts/scheduler/summary/stonith-1.summary
new file mode 100644
index 0000000..dfb4be4
--- /dev/null
+++ b/cts/scheduler/summary/stonith-1.summary
@@ -0,0 +1,113 @@
+Current cluster status:
+ * Node List:
+ * Node sles-3: UNCLEAN (offline)
+ * Online: [ sles-1 sles-2 sles-4 ]
+
+ * Full List of Resources:
+ * Resource Group: group-1:
+ * r192.168.100.181 (ocf:heartbeat:IPaddr): Started sles-1
+ * r192.168.100.182 (ocf:heartbeat:IPaddr): Started sles-1
+ * r192.168.100.183 (ocf:heartbeat:IPaddr): Stopped
+ * lsb_dummy (lsb:/usr/lib64/heartbeat/cts/LSBDummy): Started sles-2
+ * migrator (ocf:heartbeat:Dummy): Started sles-3 (UNCLEAN)
+ * rsc_sles-1 (ocf:heartbeat:IPaddr): Started sles-1
+ * rsc_sles-2 (ocf:heartbeat:IPaddr): Started sles-2
+ * rsc_sles-3 (ocf:heartbeat:IPaddr): Started sles-3 (UNCLEAN)
+ * rsc_sles-4 (ocf:heartbeat:IPaddr): Started sles-4
+ * Clone Set: DoFencing [child_DoFencing]:
+ * child_DoFencing (stonith:external/vmware): Started sles-3 (UNCLEAN)
+ * Started: [ sles-1 sles-2 ]
+ * Stopped: [ sles-4 ]
+ * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique):
+ * ocf_msdummy:0 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:1 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:2 (ocf:heartbeat:Stateful): Unpromoted sles-3 (UNCLEAN)
+ * ocf_msdummy:3 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:4 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:5 (ocf:heartbeat:Stateful): Unpromoted sles-3 (UNCLEAN)
+ * ocf_msdummy:6 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:7 (ocf:heartbeat:Stateful): Stopped
+
+Transition Summary:
+ * Fence (reboot) sles-3 'peer is no longer part of the cluster'
+ * Start r192.168.100.183 ( sles-1 )
+ * Move migrator ( sles-3 -> sles-4 )
+ * Move rsc_sles-3 ( sles-3 -> sles-4 )
+ * Move child_DoFencing:2 ( sles-3 -> sles-4 )
+ * Start ocf_msdummy:0 ( sles-4 )
+ * Start ocf_msdummy:1 ( sles-1 )
+ * Move ocf_msdummy:2 ( sles-3 -> sles-2 Unpromoted )
+ * Start ocf_msdummy:3 ( sles-4 )
+ * Start ocf_msdummy:4 ( sles-1 )
+ * Move ocf_msdummy:5 ( sles-3 -> sles-2 Unpromoted )
+
+Executing Cluster Transition:
+ * Pseudo action: group-1_start_0
+ * Resource action: r192.168.100.182 monitor=5000 on sles-1
+ * Resource action: lsb_dummy monitor=5000 on sles-2
+ * Resource action: rsc_sles-2 monitor=5000 on sles-2
+ * Resource action: rsc_sles-4 monitor=5000 on sles-4
+ * Pseudo action: DoFencing_stop_0
+ * Fencing sles-3 (reboot)
+ * Resource action: r192.168.100.183 start on sles-1
+ * Pseudo action: migrator_stop_0
+ * Pseudo action: rsc_sles-3_stop_0
+ * Pseudo action: child_DoFencing:2_stop_0
+ * Pseudo action: DoFencing_stopped_0
+ * Pseudo action: DoFencing_start_0
+ * Pseudo action: master_rsc_1_stop_0
+ * Pseudo action: group-1_running_0
+ * Resource action: r192.168.100.183 monitor=5000 on sles-1
+ * Resource action: migrator start on sles-4
+ * Resource action: rsc_sles-3 start on sles-4
+ * Resource action: child_DoFencing:2 start on sles-4
+ * Pseudo action: DoFencing_running_0
+ * Pseudo action: ocf_msdummy:2_stop_0
+ * Pseudo action: ocf_msdummy:5_stop_0
+ * Pseudo action: master_rsc_1_stopped_0
+ * Pseudo action: master_rsc_1_start_0
+ * Resource action: migrator monitor=10000 on sles-4
+ * Resource action: rsc_sles-3 monitor=5000 on sles-4
+ * Resource action: child_DoFencing:2 monitor=60000 on sles-4
+ * Resource action: ocf_msdummy:0 start on sles-4
+ * Resource action: ocf_msdummy:1 start on sles-1
+ * Resource action: ocf_msdummy:2 start on sles-2
+ * Resource action: ocf_msdummy:3 start on sles-4
+ * Resource action: ocf_msdummy:4 start on sles-1
+ * Resource action: ocf_msdummy:5 start on sles-2
+ * Pseudo action: master_rsc_1_running_0
+ * Resource action: ocf_msdummy:0 monitor=5000 on sles-4
+ * Resource action: ocf_msdummy:1 monitor=5000 on sles-1
+ * Resource action: ocf_msdummy:2 monitor=5000 on sles-2
+ * Resource action: ocf_msdummy:3 monitor=5000 on sles-4
+ * Resource action: ocf_msdummy:4 monitor=5000 on sles-1
+ * Resource action: ocf_msdummy:5 monitor=5000 on sles-2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ sles-1 sles-2 sles-4 ]
+ * OFFLINE: [ sles-3 ]
+
+ * Full List of Resources:
+ * Resource Group: group-1:
+ * r192.168.100.181 (ocf:heartbeat:IPaddr): Started sles-1
+ * r192.168.100.182 (ocf:heartbeat:IPaddr): Started sles-1
+ * r192.168.100.183 (ocf:heartbeat:IPaddr): Started sles-1
+ * lsb_dummy (lsb:/usr/lib64/heartbeat/cts/LSBDummy): Started sles-2
+ * migrator (ocf:heartbeat:Dummy): Started sles-4
+ * rsc_sles-1 (ocf:heartbeat:IPaddr): Started sles-1
+ * rsc_sles-2 (ocf:heartbeat:IPaddr): Started sles-2
+ * rsc_sles-3 (ocf:heartbeat:IPaddr): Started sles-4
+ * rsc_sles-4 (ocf:heartbeat:IPaddr): Started sles-4
+ * Clone Set: DoFencing [child_DoFencing]:
+ * Started: [ sles-1 sles-2 sles-4 ]
+ * Stopped: [ sles-3 ]
+ * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique):
+ * ocf_msdummy:0 (ocf:heartbeat:Stateful): Unpromoted sles-4
+ * ocf_msdummy:1 (ocf:heartbeat:Stateful): Unpromoted sles-1
+ * ocf_msdummy:2 (ocf:heartbeat:Stateful): Unpromoted sles-2
+ * ocf_msdummy:3 (ocf:heartbeat:Stateful): Unpromoted sles-4
+ * ocf_msdummy:4 (ocf:heartbeat:Stateful): Unpromoted sles-1
+ * ocf_msdummy:5 (ocf:heartbeat:Stateful): Unpromoted sles-2
+ * ocf_msdummy:6 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:7 (ocf:heartbeat:Stateful): Stopped
diff --git a/cts/scheduler/summary/stonith-2.summary b/cts/scheduler/summary/stonith-2.summary
new file mode 100644
index 0000000..c6f6571
--- /dev/null
+++ b/cts/scheduler/summary/stonith-2.summary
@@ -0,0 +1,78 @@
+Current cluster status:
+ * Node List:
+ * Node sles-5: UNCLEAN (offline)
+ * Online: [ sles-1 sles-2 sles-3 sles-4 sles-6 ]
+
+ * Full List of Resources:
+ * Resource Group: group-1:
+ * r192.168.100.181 (ocf:heartbeat:IPaddr): Started sles-1
+ * r192.168.100.182 (ocf:heartbeat:IPaddr): Started sles-1
+ * r192.168.100.183 (ocf:heartbeat:IPaddr): Started sles-1
+ * lsb_dummy (lsb:/usr/share/heartbeat/cts/LSBDummy): Started sles-2
+ * migrator (ocf:heartbeat:Dummy): Started sles-3
+ * rsc_sles-1 (ocf:heartbeat:IPaddr): Started sles-1
+ * rsc_sles-2 (ocf:heartbeat:IPaddr): Started sles-2
+ * rsc_sles-3 (ocf:heartbeat:IPaddr): Started sles-3
+ * rsc_sles-4 (ocf:heartbeat:IPaddr): Started sles-4
+ * rsc_sles-5 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_sles-6 (ocf:heartbeat:IPaddr): Started sles-6
+ * Clone Set: DoFencing [child_DoFencing]:
+ * Started: [ sles-1 sles-2 sles-3 sles-4 sles-6 ]
+ * Stopped: [ sles-5 ]
+ * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique):
+ * ocf_msdummy:0 (ocf:heartbeat:Stateful): Unpromoted sles-3
+ * ocf_msdummy:1 (ocf:heartbeat:Stateful): Unpromoted sles-4
+ * ocf_msdummy:2 (ocf:heartbeat:Stateful): Unpromoted sles-4
+ * ocf_msdummy:3 (ocf:heartbeat:Stateful): Unpromoted sles-1
+ * ocf_msdummy:4 (ocf:heartbeat:Stateful): Unpromoted sles-2
+ * ocf_msdummy:5 (ocf:heartbeat:Stateful): Unpromoted sles-1
+ * ocf_msdummy:6 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:7 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:8 (ocf:heartbeat:Stateful): Unpromoted sles-6
+ * ocf_msdummy:9 (ocf:heartbeat:Stateful): Unpromoted sles-6
+ * ocf_msdummy:10 (ocf:heartbeat:Stateful): Unpromoted sles-2
+ * ocf_msdummy:11 (ocf:heartbeat:Stateful): Unpromoted sles-3
+
+Transition Summary:
+ * Fence (reboot) sles-5 'peer is no longer part of the cluster'
+ * Start rsc_sles-5 ( sles-6 )
+
+Executing Cluster Transition:
+ * Fencing sles-5 (reboot)
+ * Resource action: rsc_sles-5 start on sles-6
+ * Resource action: rsc_sles-5 monitor=5000 on sles-6
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ sles-1 sles-2 sles-3 sles-4 sles-6 ]
+ * OFFLINE: [ sles-5 ]
+
+ * Full List of Resources:
+ * Resource Group: group-1:
+ * r192.168.100.181 (ocf:heartbeat:IPaddr): Started sles-1
+ * r192.168.100.182 (ocf:heartbeat:IPaddr): Started sles-1
+ * r192.168.100.183 (ocf:heartbeat:IPaddr): Started sles-1
+ * lsb_dummy (lsb:/usr/share/heartbeat/cts/LSBDummy): Started sles-2
+ * migrator (ocf:heartbeat:Dummy): Started sles-3
+ * rsc_sles-1 (ocf:heartbeat:IPaddr): Started sles-1
+ * rsc_sles-2 (ocf:heartbeat:IPaddr): Started sles-2
+ * rsc_sles-3 (ocf:heartbeat:IPaddr): Started sles-3
+ * rsc_sles-4 (ocf:heartbeat:IPaddr): Started sles-4
+ * rsc_sles-5 (ocf:heartbeat:IPaddr): Started sles-6
+ * rsc_sles-6 (ocf:heartbeat:IPaddr): Started sles-6
+ * Clone Set: DoFencing [child_DoFencing]:
+ * Started: [ sles-1 sles-2 sles-3 sles-4 sles-6 ]
+ * Stopped: [ sles-5 ]
+ * Clone Set: master_rsc_1 [ocf_msdummy] (promotable, unique):
+ * ocf_msdummy:0 (ocf:heartbeat:Stateful): Unpromoted sles-3
+ * ocf_msdummy:1 (ocf:heartbeat:Stateful): Unpromoted sles-4
+ * ocf_msdummy:2 (ocf:heartbeat:Stateful): Unpromoted sles-4
+ * ocf_msdummy:3 (ocf:heartbeat:Stateful): Unpromoted sles-1
+ * ocf_msdummy:4 (ocf:heartbeat:Stateful): Unpromoted sles-2
+ * ocf_msdummy:5 (ocf:heartbeat:Stateful): Unpromoted sles-1
+ * ocf_msdummy:6 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:7 (ocf:heartbeat:Stateful): Stopped
+ * ocf_msdummy:8 (ocf:heartbeat:Stateful): Unpromoted sles-6
+ * ocf_msdummy:9 (ocf:heartbeat:Stateful): Unpromoted sles-6
+ * ocf_msdummy:10 (ocf:heartbeat:Stateful): Unpromoted sles-2
+ * ocf_msdummy:11 (ocf:heartbeat:Stateful): Unpromoted sles-3
diff --git a/cts/scheduler/summary/stonith-3.summary b/cts/scheduler/summary/stonith-3.summary
new file mode 100644
index 0000000..d1adf9b
--- /dev/null
+++ b/cts/scheduler/summary/stonith-3.summary
@@ -0,0 +1,37 @@
+Current cluster status:
+ * Node List:
+ * Node rh5node1: UNCLEAN (offline)
+ * Online: [ rh5node2 ]
+
+ * Full List of Resources:
+ * prmIpPostgreSQLDB (ocf:heartbeat:IPaddr): Stopped
+ * Clone Set: clnStonith [grpStonith]:
+ * Stopped: [ rh5node1 rh5node2 ]
+
+Transition Summary:
+ * Fence (reboot) rh5node1 'node is unclean'
+ * Start prmIpPostgreSQLDB ( rh5node2 )
+ * Start prmStonith:0 ( rh5node2 )
+
+Executing Cluster Transition:
+ * Resource action: prmIpPostgreSQLDB monitor on rh5node2
+ * Resource action: prmStonith:0 monitor on rh5node2
+ * Pseudo action: clnStonith_start_0
+ * Fencing rh5node1 (reboot)
+ * Resource action: prmIpPostgreSQLDB start on rh5node2
+ * Pseudo action: grpStonith:0_start_0
+ * Resource action: prmStonith:0 start on rh5node2
+ * Resource action: prmIpPostgreSQLDB monitor=30000 on rh5node2
+ * Pseudo action: grpStonith:0_running_0
+ * Pseudo action: clnStonith_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rh5node2 ]
+ * OFFLINE: [ rh5node1 ]
+
+ * Full List of Resources:
+ * prmIpPostgreSQLDB (ocf:heartbeat:IPaddr): Started rh5node2
+ * Clone Set: clnStonith [grpStonith]:
+ * Started: [ rh5node2 ]
+ * Stopped: [ rh5node1 ]
diff --git a/cts/scheduler/summary/stonith-4.summary b/cts/scheduler/summary/stonith-4.summary
new file mode 100644
index 0000000..6aa0f4d
--- /dev/null
+++ b/cts/scheduler/summary/stonith-4.summary
@@ -0,0 +1,40 @@
+Current cluster status:
+ * Node List:
+ * Node pcmk-2: pending
+ * Node pcmk-3: pending
+ * Node pcmk-5: UNCLEAN (offline)
+ * Node pcmk-7: UNCLEAN (online)
+ * Node pcmk-8: UNCLEAN (offline)
+ * Node pcmk-9: pending
+ * Node pcmk-10: UNCLEAN (online)
+ * Node pcmk-11: pending
+ * Online: [ pcmk-1 ]
+ * OFFLINE: [ pcmk-4 pcmk-6 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Stopped
+
+Transition Summary:
+ * Fence (reboot) pcmk-10 'peer process is no longer available'
+ * Fence (reboot) pcmk-8 'peer has not been seen by the cluster'
+ * Fence (reboot) pcmk-7 'peer failed Pacemaker membership criteria'
+ * Fence (reboot) pcmk-5 'peer has not been seen by the cluster'
+ * Start Fencing ( pcmk-1 ) blocked
+
+Executing Cluster Transition:
+ * Fencing pcmk-5 (reboot)
+ * Fencing pcmk-7 (reboot)
+ * Fencing pcmk-8 (reboot)
+ * Fencing pcmk-10 (reboot)
+
+Revised Cluster Status:
+ * Node List:
+ * Node pcmk-2: pending
+ * Node pcmk-3: pending
+ * Node pcmk-9: pending
+ * Node pcmk-11: pending
+ * Online: [ pcmk-1 ]
+ * OFFLINE: [ pcmk-4 pcmk-5 pcmk-6 pcmk-7 pcmk-8 pcmk-10 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Stopped
diff --git a/cts/scheduler/summary/stop-all-resources.summary b/cts/scheduler/summary/stop-all-resources.summary
new file mode 100644
index 0000000..da36fb0
--- /dev/null
+++ b/cts/scheduler/summary/stop-all-resources.summary
@@ -0,0 +1,83 @@
+4 of 27 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ cluster01 cluster02 ]
+
+ * Full List of Resources:
+ * Clone Set: ping-clone [ping]:
+ * Stopped: [ cluster01 cluster02 ]
+ * Fencing (stonith:fence_xvm): Stopped
+ * dummy (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: inactive-clone [inactive-dhcpd] (disabled):
+ * Stopped (disabled): [ cluster01 cluster02 ]
+ * Resource Group: inactive-group (disabled):
+ * inactive-dummy-1 (ocf:pacemaker:Dummy): Stopped (disabled)
+ * inactive-dummy-2 (ocf:pacemaker:Dummy): Stopped (disabled)
+ * Container bundle set: httpd-bundle [pcmk:http]:
+ * httpd-bundle-0 (192.168.122.131) (ocf:heartbeat:apache): Stopped
+ * httpd-bundle-1 (192.168.122.132) (ocf:heartbeat:apache): Stopped
+ * httpd-bundle-2 (192.168.122.133) (ocf:heartbeat:apache): Stopped
+ * Resource Group: exim-group:
+ * Public-IP (ocf:heartbeat:IPaddr): Stopped
+ * Email (lsb:exim): Stopped
+ * Clone Set: mysql-clone-group [mysql-group]:
+ * Stopped: [ cluster01 cluster02 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: ping:0 monitor on cluster02
+ * Resource action: ping:0 monitor on cluster01
+ * Resource action: Fencing monitor on cluster02
+ * Resource action: Fencing monitor on cluster01
+ * Resource action: dummy monitor on cluster02
+ * Resource action: dummy monitor on cluster01
+ * Resource action: inactive-dhcpd:0 monitor on cluster02
+ * Resource action: inactive-dhcpd:0 monitor on cluster01
+ * Resource action: inactive-dummy-1 monitor on cluster02
+ * Resource action: inactive-dummy-1 monitor on cluster01
+ * Resource action: inactive-dummy-2 monitor on cluster02
+ * Resource action: inactive-dummy-2 monitor on cluster01
+ * Resource action: httpd-bundle-ip-192.168.122.131 monitor on cluster02
+ * Resource action: httpd-bundle-ip-192.168.122.131 monitor on cluster01
+ * Resource action: httpd-bundle-docker-0 monitor on cluster02
+ * Resource action: httpd-bundle-docker-0 monitor on cluster01
+ * Resource action: httpd-bundle-ip-192.168.122.132 monitor on cluster02
+ * Resource action: httpd-bundle-ip-192.168.122.132 monitor on cluster01
+ * Resource action: httpd-bundle-docker-1 monitor on cluster02
+ * Resource action: httpd-bundle-docker-1 monitor on cluster01
+ * Resource action: httpd-bundle-ip-192.168.122.133 monitor on cluster02
+ * Resource action: httpd-bundle-ip-192.168.122.133 monitor on cluster01
+ * Resource action: httpd-bundle-docker-2 monitor on cluster02
+ * Resource action: httpd-bundle-docker-2 monitor on cluster01
+ * Resource action: Public-IP monitor on cluster02
+ * Resource action: Public-IP monitor on cluster01
+ * Resource action: Email monitor on cluster02
+ * Resource action: Email monitor on cluster01
+ * Resource action: mysql-proxy:0 monitor on cluster02
+ * Resource action: mysql-proxy:0 monitor on cluster01
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ cluster01 cluster02 ]
+
+ * Full List of Resources:
+ * Clone Set: ping-clone [ping]:
+ * Stopped: [ cluster01 cluster02 ]
+ * Fencing (stonith:fence_xvm): Stopped
+ * dummy (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: inactive-clone [inactive-dhcpd] (disabled):
+ * Stopped (disabled): [ cluster01 cluster02 ]
+ * Resource Group: inactive-group (disabled):
+ * inactive-dummy-1 (ocf:pacemaker:Dummy): Stopped (disabled)
+ * inactive-dummy-2 (ocf:pacemaker:Dummy): Stopped (disabled)
+ * Container bundle set: httpd-bundle [pcmk:http]:
+ * httpd-bundle-0 (192.168.122.131) (ocf:heartbeat:apache): Stopped
+ * httpd-bundle-1 (192.168.122.132) (ocf:heartbeat:apache): Stopped
+ * httpd-bundle-2 (192.168.122.133) (ocf:heartbeat:apache): Stopped
+ * Resource Group: exim-group:
+ * Public-IP (ocf:heartbeat:IPaddr): Stopped
+ * Email (lsb:exim): Stopped
+ * Clone Set: mysql-clone-group [mysql-group]:
+ * Stopped: [ cluster01 cluster02 ]
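
Note on stop-all-resources: with the stop-all-resources cluster property set, every resource stays Stopped and the transition consists purely of the initial probes; the DISABLED counter on the first line comes from the separately disabled clone and group, not from the property. The property itself (ID hypothetical):

    <!-- emergency brake: no resource may be started anywhere -->
    <nvpair id="opt-stop-all-resources" name="stop-all-resources"
            value="true"/>
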
diff --git a/cts/scheduler/summary/stop-failure-no-fencing.summary b/cts/scheduler/summary/stop-failure-no-fencing.summary
new file mode 100644
index 0000000..bb164fd
--- /dev/null
+++ b/cts/scheduler/summary/stop-failure-no-fencing.summary
@@ -0,0 +1,27 @@
+0 of 9 resource instances DISABLED and 1 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Node pcmk-3: UNCLEAN (offline)
+ * Node pcmk-4: UNCLEAN (offline)
+ * Online: [ pcmk-1 pcmk-2 ]
+
+ * Full List of Resources:
+ * Clone Set: dlm-clone [dlm]:
+ * Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
+ * ClusterIP (ocf:heartbeat:IPaddr2): Stopped
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Node pcmk-3: UNCLEAN (offline)
+ * Node pcmk-4: UNCLEAN (offline)
+ * Online: [ pcmk-1 pcmk-2 ]
+
+ * Full List of Resources:
+ * Clone Set: dlm-clone [dlm]:
+ * Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
+ * ClusterIP (ocf:heartbeat:IPaddr2): Stopped
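
Note on stop-failure-no-fencing: a failed stop normally escalates to fencing the node, since the cluster cannot otherwise know the resource is really down. With fencing unavailable (per the test name, presumably stonith-enabled=false or no usable device), the scheduler can only mark the affected instance BLOCKED and leave the UNCLEAN nodes as they are, which is why the transition is empty. The property in question, shown disabled (ID hypothetical):

    <nvpair id="opt-stonith-enabled" name="stonith-enabled" value="false"/>
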
diff --git a/cts/scheduler/summary/stop-failure-no-quorum.summary b/cts/scheduler/summary/stop-failure-no-quorum.summary
new file mode 100644
index 0000000..e76827d
--- /dev/null
+++ b/cts/scheduler/summary/stop-failure-no-quorum.summary
@@ -0,0 +1,45 @@
+0 of 10 resource instances DISABLED and 1 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Node pcmk-2: UNCLEAN (online)
+ * Node pcmk-3: UNCLEAN (offline)
+ * Node pcmk-4: UNCLEAN (offline)
+ * Online: [ pcmk-1 ]
+
+ * Full List of Resources:
+ * Clone Set: dlm-clone [dlm]:
+ * Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
+ * Clone Set: clvm-clone [clvm]:
+ * clvm (lsb:clvmd): FAILED pcmk-2
+ * clvm (lsb:clvmd): FAILED pcmk-3 (UNCLEAN, blocked)
+ * Stopped: [ pcmk-1 pcmk-3 pcmk-4 ]
+ * ClusterIP (ocf:heartbeat:IPaddr2): Stopped
+ * Fencing (stonith:fence_xvm): Stopped
+
+Transition Summary:
+ * Fence (reboot) pcmk-2 'clvm:0 failed there'
+ * Start dlm:0 ( pcmk-1 ) due to no quorum (blocked)
+ * Stop clvm:0 ( pcmk-2 ) due to node availability
+ * Start clvm:2 ( pcmk-1 ) due to no quorum (blocked)
+ * Start ClusterIP ( pcmk-1 ) due to no quorum (blocked)
+ * Start Fencing ( pcmk-1 ) due to no quorum (blocked)
+
+Executing Cluster Transition:
+ * Fencing pcmk-2 (reboot)
+ * Pseudo action: clvm-clone_stop_0
+ * Pseudo action: clvm_stop_0
+ * Pseudo action: clvm-clone_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Node pcmk-3: UNCLEAN (offline)
+ * Node pcmk-4: UNCLEAN (offline)
+ * Online: [ pcmk-1 ]
+ * OFFLINE: [ pcmk-2 ]
+
+ * Full List of Resources:
+ * Clone Set: dlm-clone [dlm]:
+ * Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
+ * ClusterIP (ocf:heartbeat:IPaddr2): Stopped
+ * Fencing (stonith:fence_xvm): Stopped
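
Note on stop-failure-no-quorum: without quorum the cluster still stops resources and fences unclean nodes, but every start is withheld, each labelled "due to no quorum (blocked)". That split is the point of the default quorum policy (ID hypothetical):

    <!-- stop: keep stopping and fencing, start nothing, until quorum returns -->
    <nvpair id="opt-no-quorum-policy" name="no-quorum-policy" value="stop"/>
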
diff --git a/cts/scheduler/summary/stop-failure-with-fencing.summary b/cts/scheduler/summary/stop-failure-with-fencing.summary
new file mode 100644
index 0000000..437708e
--- /dev/null
+++ b/cts/scheduler/summary/stop-failure-with-fencing.summary
@@ -0,0 +1,45 @@
+Current cluster status:
+ * Node List:
+ * Node pcmk-2: UNCLEAN (online)
+ * Node pcmk-3: UNCLEAN (offline)
+ * Node pcmk-4: UNCLEAN (offline)
+ * Online: [ pcmk-1 ]
+
+ * Full List of Resources:
+ * Clone Set: dlm-clone [dlm]:
+ * Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
+ * Clone Set: clvm-clone [clvm]:
+ * clvm (lsb:clvmd): FAILED pcmk-2
+ * Stopped: [ pcmk-1 pcmk-3 pcmk-4 ]
+ * ClusterIP (ocf:heartbeat:IPaddr2): Stopped
+ * Fencing (stonith:fence_xvm): Stopped
+
+Transition Summary:
+ * Fence (reboot) pcmk-2 'clvm:0 failed there'
+ * Start dlm:0 ( pcmk-1 ) due to no quorum (blocked)
+ * Stop clvm:0 ( pcmk-2 ) due to node availability
+ * Start clvm:1 ( pcmk-1 ) due to no quorum (blocked)
+ * Start ClusterIP ( pcmk-1 ) due to no quorum (blocked)
+ * Start Fencing ( pcmk-1 ) due to no quorum (blocked)
+
+Executing Cluster Transition:
+ * Resource action: Fencing monitor on pcmk-1
+ * Fencing pcmk-2 (reboot)
+ * Pseudo action: clvm-clone_stop_0
+ * Pseudo action: clvm_stop_0
+ * Pseudo action: clvm-clone_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Node pcmk-3: UNCLEAN (offline)
+ * Node pcmk-4: UNCLEAN (offline)
+ * Online: [ pcmk-1 ]
+ * OFFLINE: [ pcmk-2 ]
+
+ * Full List of Resources:
+ * Clone Set: dlm-clone [dlm]:
+ * Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
+ * Clone Set: clvm-clone [clvm]:
+ * Stopped: [ pcmk-1 pcmk-2 pcmk-3 pcmk-4 ]
+ * ClusterIP (ocf:heartbeat:IPaddr2): Stopped
+ * Fencing (stonith:fence_xvm): Stopped
diff --git a/cts/scheduler/summary/stop-unexpected-2.summary b/cts/scheduler/summary/stop-unexpected-2.summary
new file mode 100644
index 0000000..d6b0c15
--- /dev/null
+++ b/cts/scheduler/summary/stop-unexpected-2.summary
@@ -0,0 +1,29 @@
+Using the original execution date of: 2022-04-22 14:15:37Z
+Current cluster status:
+ * Node List:
+ * Online: [ rhel8-1 rhel8-2 rhel8-3 rhel8-4 rhel8-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel8-1
+ * FencingPass (stonith:fence_dummy): Started rhel8-2
+ * FencingFail (stonith:fence_dummy): Started rhel8-3
+ * test (ocf:pacemaker:Dummy): Started [ rhel8-4 rhel8-3 ]
+
+Transition Summary:
+ * Restart test ( rhel8-4 )
+
+Executing Cluster Transition:
+ * Resource action: test stop on rhel8-3
+ * Pseudo action: test_start_0
+ * Resource action: test monitor=10000 on rhel8-4
+Using the original execution date of: 2022-04-22 14:15:37Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel8-1 rhel8-2 rhel8-3 rhel8-4 rhel8-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel8-1
+ * FencingPass (stonith:fence_dummy): Started rhel8-2
+ * FencingFail (stonith:fence_dummy): Started rhel8-3
+ * test (ocf:pacemaker:Dummy): Started rhel8-4
diff --git a/cts/scheduler/summary/stop-unexpected.summary b/cts/scheduler/summary/stop-unexpected.summary
new file mode 100644
index 0000000..7c7fc68
--- /dev/null
+++ b/cts/scheduler/summary/stop-unexpected.summary
@@ -0,0 +1,41 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node2 node3 ]
+
+ * Full List of Resources:
+ * st-sbd (stonith:external/sbd): Started node2
+ * Resource Group: dgroup:
+ * dummy (ocf:heartbeat:DummyTimeout): FAILED [ node2 node3 ]
+ * dummy2 (ocf:heartbeat:Dummy): Started node2
+ * dummy3 (ocf:heartbeat:Dummy): Started node2
+
+Transition Summary:
+ * Recover dummy ( node2 ) due to being multiply active
+ * Restart dummy2 ( node2 ) due to required dummy start
+ * Restart dummy3 ( node2 ) due to required dummy2 start
+
+Executing Cluster Transition:
+ * Pseudo action: dgroup_stop_0
+ * Resource action: dummy3 stop on node2
+ * Resource action: dummy2 stop on node2
+ * Resource action: dummy stop on node3
+ * Pseudo action: dgroup_stopped_0
+ * Pseudo action: dgroup_start_0
+ * Pseudo action: dummy_start_0
+ * Resource action: dummy monitor=10000 on node2
+ * Resource action: dummy2 start on node2
+ * Resource action: dummy2 monitor=10000 on node2
+ * Resource action: dummy3 start on node2
+ * Resource action: dummy3 monitor=10000 on node2
+ * Pseudo action: dgroup_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node2 node3 ]
+
+ * Full List of Resources:
+ * st-sbd (stonith:external/sbd): Started node2
+ * Resource Group: dgroup:
+ * dummy (ocf:heartbeat:DummyTimeout): Started node2
+ * dummy2 (ocf:heartbeat:Dummy): Started node2
+ * dummy3 (ocf:heartbeat:Dummy): Started node2
diff --git a/cts/scheduler/summary/stopped-monitor-00.summary b/cts/scheduler/summary/stopped-monitor-00.summary
new file mode 100644
index 0000000..c28cad7
--- /dev/null
+++ b/cts/scheduler/summary/stopped-monitor-00.summary
@@ -0,0 +1,23 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc1 monitor=20000 on node2
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc1 monitor=10000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
diff --git a/cts/scheduler/summary/stopped-monitor-01.summary b/cts/scheduler/summary/stopped-monitor-01.summary
new file mode 100644
index 0000000..0bd0488
--- /dev/null
+++ b/cts/scheduler/summary/stopped-monitor-01.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): FAILED node1
+
+Transition Summary:
+ * Recover rsc1 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node1
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc1 monitor=10000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
diff --git a/cts/scheduler/summary/stopped-monitor-02.summary b/cts/scheduler/summary/stopped-monitor-02.summary
new file mode 100644
index 0000000..93d9286
--- /dev/null
+++ b/cts/scheduler/summary/stopped-monitor-02.summary
@@ -0,0 +1,23 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): FAILED [ node1 node2 ]
+
+Transition Summary:
+ * Recover rsc1 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node1
+ * Resource action: rsc1 stop on node2
+ * Resource action: rsc1 monitor=20000 on node2
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc1 monitor=10000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
diff --git a/cts/scheduler/summary/stopped-monitor-03.summary b/cts/scheduler/summary/stopped-monitor-03.summary
new file mode 100644
index 0000000..d16e523
--- /dev/null
+++ b/cts/scheduler/summary/stopped-monitor-03.summary
@@ -0,0 +1,22 @@
+1 of 1 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1 (disabled)
+
+Transition Summary:
+ * Stop rsc1 ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node1
+ * Resource action: rsc1 monitor=20000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped (disabled)
diff --git a/cts/scheduler/summary/stopped-monitor-04.summary b/cts/scheduler/summary/stopped-monitor-04.summary
new file mode 100644
index 0000000..11f4d49
--- /dev/null
+++ b/cts/scheduler/summary/stopped-monitor-04.summary
@@ -0,0 +1,19 @@
+1 of 1 resource instances DISABLED and 1 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): FAILED node1 (disabled, blocked)
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): FAILED node1 (disabled, blocked)
diff --git a/cts/scheduler/summary/stopped-monitor-05.summary b/cts/scheduler/summary/stopped-monitor-05.summary
new file mode 100644
index 0000000..1ed0d69
--- /dev/null
+++ b/cts/scheduler/summary/stopped-monitor-05.summary
@@ -0,0 +1,19 @@
+0 of 1 resource instances DISABLED and 1 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): FAILED node1 (blocked)
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): FAILED node1 (blocked)
diff --git a/cts/scheduler/summary/stopped-monitor-06.summary b/cts/scheduler/summary/stopped-monitor-06.summary
new file mode 100644
index 0000000..744994f
--- /dev/null
+++ b/cts/scheduler/summary/stopped-monitor-06.summary
@@ -0,0 +1,19 @@
+1 of 1 resource instances DISABLED and 1 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): FAILED (disabled, blocked) [ node1 node2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): FAILED (disabled, blocked) [ node1 node2 ]
diff --git a/cts/scheduler/summary/stopped-monitor-07.summary b/cts/scheduler/summary/stopped-monitor-07.summary
new file mode 100644
index 0000000..596e6c5
--- /dev/null
+++ b/cts/scheduler/summary/stopped-monitor-07.summary
@@ -0,0 +1,19 @@
+0 of 1 resource instances DISABLED and 1 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): FAILED (blocked) [ node1 node2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): FAILED (blocked) [ node1 node2 ]
diff --git a/cts/scheduler/summary/stopped-monitor-08.summary b/cts/scheduler/summary/stopped-monitor-08.summary
new file mode 100644
index 0000000..d23f033
--- /dev/null
+++ b/cts/scheduler/summary/stopped-monitor-08.summary
@@ -0,0 +1,25 @@
+Current cluster status:
+ * Node List:
+ * Node node1: standby (with active resources)
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+
+Transition Summary:
+ * Move rsc1 ( node1 -> node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node1
+ * Resource action: rsc1 cancel=20000 on node2
+ * Resource action: rsc1 monitor=20000 on node1
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc1 monitor=10000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Node node1: standby
+ * Online: [ node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/stopped-monitor-09.summary b/cts/scheduler/summary/stopped-monitor-09.summary
new file mode 100644
index 0000000..9a11f5a
--- /dev/null
+++ b/cts/scheduler/summary/stopped-monitor-09.summary
@@ -0,0 +1,17 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1 (unmanaged)
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1 (unmanaged)
diff --git a/cts/scheduler/summary/stopped-monitor-10.summary b/cts/scheduler/summary/stopped-monitor-10.summary
new file mode 100644
index 0000000..5ca9343
--- /dev/null
+++ b/cts/scheduler/summary/stopped-monitor-10.summary
@@ -0,0 +1,17 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): FAILED (unmanaged) [ node1 node2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): FAILED (unmanaged) [ node1 node2 ]
diff --git a/cts/scheduler/summary/stopped-monitor-11.summary b/cts/scheduler/summary/stopped-monitor-11.summary
new file mode 100644
index 0000000..74feb98
--- /dev/null
+++ b/cts/scheduler/summary/stopped-monitor-11.summary
@@ -0,0 +1,19 @@
+1 of 1 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1 (disabled, unmanaged)
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1 (disabled, unmanaged)
diff --git a/cts/scheduler/summary/stopped-monitor-12.summary b/cts/scheduler/summary/stopped-monitor-12.summary
new file mode 100644
index 0000000..9d14834
--- /dev/null
+++ b/cts/scheduler/summary/stopped-monitor-12.summary
@@ -0,0 +1,19 @@
+1 of 1 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): FAILED (disabled, unmanaged) [ node1 node2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): FAILED (disabled, unmanaged) [ node1 node2 ]
diff --git a/cts/scheduler/summary/stopped-monitor-20.summary b/cts/scheduler/summary/stopped-monitor-20.summary
new file mode 100644
index 0000000..b0d44ee
--- /dev/null
+++ b/cts/scheduler/summary/stopped-monitor-20.summary
@@ -0,0 +1,23 @@
+1 of 1 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped (disabled)
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc1 monitor=20000 on node2
+ * Resource action: rsc1 monitor=20000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped (disabled)
diff --git a/cts/scheduler/summary/stopped-monitor-21.summary b/cts/scheduler/summary/stopped-monitor-21.summary
new file mode 100644
index 0000000..e3e64c0
--- /dev/null
+++ b/cts/scheduler/summary/stopped-monitor-21.summary
@@ -0,0 +1,22 @@
+1 of 1 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): FAILED node1 (disabled)
+
+Transition Summary:
+ * Stop rsc1 ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node1
+ * Resource action: rsc1 monitor=20000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped (disabled)
diff --git a/cts/scheduler/summary/stopped-monitor-22.summary b/cts/scheduler/summary/stopped-monitor-22.summary
new file mode 100644
index 0000000..8b04d7f
--- /dev/null
+++ b/cts/scheduler/summary/stopped-monitor-22.summary
@@ -0,0 +1,25 @@
+1 of 1 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): FAILED (disabled) [ node1 node2 ]
+
+Transition Summary:
+ * Stop rsc1 ( node1 ) due to node availability
+ * Stop rsc1 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node2
+ * Resource action: rsc1 monitor=20000 on node2
+ * Resource action: rsc1 stop on node1
+ * Resource action: rsc1 monitor=20000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped (disabled)
diff --git a/cts/scheduler/summary/stopped-monitor-23.summary b/cts/scheduler/summary/stopped-monitor-23.summary
new file mode 100644
index 0000000..3135b99
--- /dev/null
+++ b/cts/scheduler/summary/stopped-monitor-23.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 cancel=20000 on node1
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc1 monitor=10000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
diff --git a/cts/scheduler/summary/stopped-monitor-24.summary b/cts/scheduler/summary/stopped-monitor-24.summary
new file mode 100644
index 0000000..abbedf8
--- /dev/null
+++ b/cts/scheduler/summary/stopped-monitor-24.summary
@@ -0,0 +1,19 @@
+1 of 1 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped (disabled, unmanaged)
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped (disabled, unmanaged)
diff --git a/cts/scheduler/summary/stopped-monitor-25.summary b/cts/scheduler/summary/stopped-monitor-25.summary
new file mode 100644
index 0000000..44e7340
--- /dev/null
+++ b/cts/scheduler/summary/stopped-monitor-25.summary
@@ -0,0 +1,21 @@
+1 of 1 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): FAILED (disabled, unmanaged) [ node1 node2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor=10000 on node1
+ * Resource action: rsc1 cancel=20000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): FAILED (disabled, unmanaged) [ node1 node2 ]
diff --git a/cts/scheduler/summary/stopped-monitor-26.summary b/cts/scheduler/summary/stopped-monitor-26.summary
new file mode 100644
index 0000000..c88413d
--- /dev/null
+++ b/cts/scheduler/summary/stopped-monitor-26.summary
@@ -0,0 +1,17 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped (unmanaged)
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped (unmanaged)
diff --git a/cts/scheduler/summary/stopped-monitor-27.summary b/cts/scheduler/summary/stopped-monitor-27.summary
new file mode 100644
index 0000000..f38b439
--- /dev/null
+++ b/cts/scheduler/summary/stopped-monitor-27.summary
@@ -0,0 +1,19 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): FAILED (unmanaged) [ node1 node2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor=10000 on node1
+ * Resource action: rsc1 cancel=20000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): FAILED (unmanaged) [ node1 node2 ]
diff --git a/cts/scheduler/summary/stopped-monitor-30.summary b/cts/scheduler/summary/stopped-monitor-30.summary
new file mode 100644
index 0000000..97f47ad
--- /dev/null
+++ b/cts/scheduler/summary/stopped-monitor-30.summary
@@ -0,0 +1,19 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node3
+ * Resource action: rsc1 monitor=20000 on node3
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
diff --git a/cts/scheduler/summary/stopped-monitor-31.summary b/cts/scheduler/summary/stopped-monitor-31.summary
new file mode 100644
index 0000000..f3876d4
--- /dev/null
+++ b/cts/scheduler/summary/stopped-monitor-31.summary
@@ -0,0 +1,21 @@
+1 of 1 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped (disabled)
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node3
+ * Resource action: rsc1 monitor=20000 on node3
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped (disabled)
diff --git a/cts/scheduler/summary/suicide-needed-inquorate.summary b/cts/scheduler/summary/suicide-needed-inquorate.summary
new file mode 100644
index 0000000..d98152d
--- /dev/null
+++ b/cts/scheduler/summary/suicide-needed-inquorate.summary
@@ -0,0 +1,27 @@
+Using the original execution date of: 2017-08-21 17:12:54Z
+Current cluster status:
+ * Node List:
+ * Node node1: UNCLEAN (online)
+ * Node node2: UNCLEAN (online)
+ * Node node3: UNCLEAN (online)
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Stopped
+
+Transition Summary:
+ * Fence (reboot) node3 'cluster does not have quorum'
+ * Fence (reboot) node2 'cluster does not have quorum'
+ * Fence (reboot) node1 'cluster does not have quorum'
+
+Executing Cluster Transition:
+ * Fencing node1 (reboot)
+ * Fencing node3 (reboot)
+ * Fencing node2 (reboot)
+Using the original execution date of: 2017-08-21 17:12:54Z
+
+Revised Cluster Status:
+ * Node List:
+ * OFFLINE: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Stopped
diff --git a/cts/scheduler/summary/suicide-not-needed-initial-quorum.summary b/cts/scheduler/summary/suicide-not-needed-initial-quorum.summary
new file mode 100644
index 0000000..9865ed3
--- /dev/null
+++ b/cts/scheduler/summary/suicide-not-needed-initial-quorum.summary
@@ -0,0 +1,25 @@
+Using the original execution date of: 2017-08-21 17:12:54Z
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Stopped
+
+Transition Summary:
+ * Start Fencing ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: Fencing monitor on node3
+ * Resource action: Fencing monitor on node2
+ * Resource action: Fencing monitor on node1
+ * Resource action: Fencing start on node1
+ * Resource action: Fencing monitor=120000 on node1
+Using the original execution date of: 2017-08-21 17:12:54Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
diff --git a/cts/scheduler/summary/suicide-not-needed-never-quorate.summary b/cts/scheduler/summary/suicide-not-needed-never-quorate.summary
new file mode 100644
index 0000000..5c1f248
--- /dev/null
+++ b/cts/scheduler/summary/suicide-not-needed-never-quorate.summary
@@ -0,0 +1,23 @@
+Using the original execution date of: 2017-08-21 17:12:54Z
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Stopped
+
+Transition Summary:
+ * Start Fencing ( node1 ) due to no quorum (blocked)
+
+Executing Cluster Transition:
+ * Resource action: Fencing monitor on node3
+ * Resource action: Fencing monitor on node2
+ * Resource action: Fencing monitor on node1
+Using the original execution date of: 2017-08-21 17:12:54Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Stopped
diff --git a/cts/scheduler/summary/suicide-not-needed-quorate.summary b/cts/scheduler/summary/suicide-not-needed-quorate.summary
new file mode 100644
index 0000000..9865ed3
--- /dev/null
+++ b/cts/scheduler/summary/suicide-not-needed-quorate.summary
@@ -0,0 +1,25 @@
+Using the original execution date of: 2017-08-21 17:12:54Z
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Stopped
+
+Transition Summary:
+ * Start Fencing ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: Fencing monitor on node3
+ * Resource action: Fencing monitor on node2
+ * Resource action: Fencing monitor on node1
+ * Resource action: Fencing start on node1
+ * Resource action: Fencing monitor=120000 on node1
+Using the original execution date of: 2017-08-21 17:12:54Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 node3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started node1
diff --git a/cts/scheduler/summary/systemhealth1.summary b/cts/scheduler/summary/systemhealth1.summary
new file mode 100644
index 0000000..f47d395
--- /dev/null
+++ b/cts/scheduler/summary/systemhealth1.summary
@@ -0,0 +1,26 @@
+Current cluster status:
+ * Node List:
+ * Node hs21c: UNCLEAN (offline)
+ * Node hs21d: UNCLEAN (offline)
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * apache_1 (ocf:heartbeat:apache): Stopped
+ * nfs_1 (ocf:heartbeat:Filesystem): Stopped
+
+Transition Summary:
+ * Fence (reboot) hs21d 'node is unclean'
+ * Fence (reboot) hs21c 'node is unclean'
+
+Executing Cluster Transition:
+ * Fencing hs21d (reboot)
+ * Fencing hs21c (reboot)
+
+Revised Cluster Status:
+ * Node List:
+ * OFFLINE: [ hs21c hs21d ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * apache_1 (ocf:heartbeat:apache): Stopped
+ * nfs_1 (ocf:heartbeat:Filesystem): Stopped
diff --git a/cts/scheduler/summary/systemhealth2.summary b/cts/scheduler/summary/systemhealth2.summary
new file mode 100644
index 0000000..ec1d7be
--- /dev/null
+++ b/cts/scheduler/summary/systemhealth2.summary
@@ -0,0 +1,36 @@
+Current cluster status:
+ * Node List:
+ * Node hs21d: UNCLEAN (offline)
+ * Online: [ hs21c ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * apache_1 (ocf:heartbeat:apache): Stopped
+ * nfs_1 (ocf:heartbeat:Filesystem): Stopped
+
+Transition Summary:
+ * Fence (reboot) hs21d 'node is unclean'
+ * Start stonith-1 ( hs21c )
+ * Start apache_1 ( hs21c )
+ * Start nfs_1 ( hs21c )
+
+Executing Cluster Transition:
+ * Resource action: stonith-1 monitor on hs21c
+ * Resource action: apache_1 monitor on hs21c
+ * Resource action: nfs_1 monitor on hs21c
+ * Fencing hs21d (reboot)
+ * Resource action: stonith-1 start on hs21c
+ * Resource action: apache_1 start on hs21c
+ * Resource action: nfs_1 start on hs21c
+ * Resource action: apache_1 monitor=10000 on hs21c
+ * Resource action: nfs_1 monitor=20000 on hs21c
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hs21c ]
+ * OFFLINE: [ hs21d ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Started hs21c
+ * apache_1 (ocf:heartbeat:apache): Started hs21c
+ * nfs_1 (ocf:heartbeat:Filesystem): Started hs21c
diff --git a/cts/scheduler/summary/systemhealth3.summary b/cts/scheduler/summary/systemhealth3.summary
new file mode 100644
index 0000000..ec1d7be
--- /dev/null
+++ b/cts/scheduler/summary/systemhealth3.summary
@@ -0,0 +1,36 @@
+Current cluster status:
+ * Node List:
+ * Node hs21d: UNCLEAN (offline)
+ * Online: [ hs21c ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * apache_1 (ocf:heartbeat:apache): Stopped
+ * nfs_1 (ocf:heartbeat:Filesystem): Stopped
+
+Transition Summary:
+ * Fence (reboot) hs21d 'node is unclean'
+ * Start stonith-1 ( hs21c )
+ * Start apache_1 ( hs21c )
+ * Start nfs_1 ( hs21c )
+
+Executing Cluster Transition:
+ * Resource action: stonith-1 monitor on hs21c
+ * Resource action: apache_1 monitor on hs21c
+ * Resource action: nfs_1 monitor on hs21c
+ * Fencing hs21d (reboot)
+ * Resource action: stonith-1 start on hs21c
+ * Resource action: apache_1 start on hs21c
+ * Resource action: nfs_1 start on hs21c
+ * Resource action: apache_1 monitor=10000 on hs21c
+ * Resource action: nfs_1 monitor=20000 on hs21c
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hs21c ]
+ * OFFLINE: [ hs21d ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Started hs21c
+ * apache_1 (ocf:heartbeat:apache): Started hs21c
+ * nfs_1 (ocf:heartbeat:Filesystem): Started hs21c
diff --git a/cts/scheduler/summary/systemhealthm1.summary b/cts/scheduler/summary/systemhealthm1.summary
new file mode 100644
index 0000000..f47d395
--- /dev/null
+++ b/cts/scheduler/summary/systemhealthm1.summary
@@ -0,0 +1,26 @@
+Current cluster status:
+ * Node List:
+ * Node hs21c: UNCLEAN (offline)
+ * Node hs21d: UNCLEAN (offline)
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * apache_1 (ocf:heartbeat:apache): Stopped
+ * nfs_1 (ocf:heartbeat:Filesystem): Stopped
+
+Transition Summary:
+ * Fence (reboot) hs21d 'node is unclean'
+ * Fence (reboot) hs21c 'node is unclean'
+
+Executing Cluster Transition:
+ * Fencing hs21d (reboot)
+ * Fencing hs21c (reboot)
+
+Revised Cluster Status:
+ * Node List:
+ * OFFLINE: [ hs21c hs21d ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * apache_1 (ocf:heartbeat:apache): Stopped
+ * nfs_1 (ocf:heartbeat:Filesystem): Stopped
diff --git a/cts/scheduler/summary/systemhealthm2.summary b/cts/scheduler/summary/systemhealthm2.summary
new file mode 100644
index 0000000..41071ff
--- /dev/null
+++ b/cts/scheduler/summary/systemhealthm2.summary
@@ -0,0 +1,36 @@
+Current cluster status:
+ * Node List:
+ * Node hs21c: online (health is YELLOW)
+ * Node hs21d: UNCLEAN (offline)
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * apache_1 (ocf:heartbeat:apache): Stopped
+ * nfs_1 (ocf:heartbeat:Filesystem): Stopped
+
+Transition Summary:
+ * Fence (reboot) hs21d 'node is unclean'
+ * Start stonith-1 ( hs21c )
+ * Start apache_1 ( hs21c )
+ * Start nfs_1 ( hs21c )
+
+Executing Cluster Transition:
+ * Resource action: stonith-1 monitor on hs21c
+ * Resource action: apache_1 monitor on hs21c
+ * Resource action: nfs_1 monitor on hs21c
+ * Fencing hs21d (reboot)
+ * Resource action: stonith-1 start on hs21c
+ * Resource action: apache_1 start on hs21c
+ * Resource action: nfs_1 start on hs21c
+ * Resource action: apache_1 monitor=10000 on hs21c
+ * Resource action: nfs_1 monitor=20000 on hs21c
+
+Revised Cluster Status:
+ * Node List:
+ * Node hs21c: online (health is YELLOW)
+ * OFFLINE: [ hs21d ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Started hs21c
+ * apache_1 (ocf:heartbeat:apache): Started hs21c
+ * nfs_1 (ocf:heartbeat:Filesystem): Started hs21c
diff --git a/cts/scheduler/summary/systemhealthm3.summary b/cts/scheduler/summary/systemhealthm3.summary
new file mode 100644
index 0000000..e8c2174
--- /dev/null
+++ b/cts/scheduler/summary/systemhealthm3.summary
@@ -0,0 +1,28 @@
+Current cluster status:
+ * Node List:
+ * Node hs21c: online (health is RED)
+ * Node hs21d: UNCLEAN (offline)
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * apache_1 (ocf:heartbeat:apache): Stopped
+ * nfs_1 (ocf:heartbeat:Filesystem): Stopped
+
+Transition Summary:
+ * Fence (reboot) hs21d 'node is unclean'
+
+Executing Cluster Transition:
+ * Resource action: stonith-1 monitor on hs21c
+ * Resource action: apache_1 monitor on hs21c
+ * Resource action: nfs_1 monitor on hs21c
+ * Fencing hs21d (reboot)
+
+Revised Cluster Status:
+ * Node List:
+ * Node hs21c: online (health is RED)
+ * OFFLINE: [ hs21d ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * apache_1 (ocf:heartbeat:apache): Stopped
+ * nfs_1 (ocf:heartbeat:Filesystem): Stopped
diff --git a/cts/scheduler/summary/systemhealthn1.summary b/cts/scheduler/summary/systemhealthn1.summary
new file mode 100644
index 0000000..f47d395
--- /dev/null
+++ b/cts/scheduler/summary/systemhealthn1.summary
@@ -0,0 +1,26 @@
+Current cluster status:
+ * Node List:
+ * Node hs21c: UNCLEAN (offline)
+ * Node hs21d: UNCLEAN (offline)
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * apache_1 (ocf:heartbeat:apache): Stopped
+ * nfs_1 (ocf:heartbeat:Filesystem): Stopped
+
+Transition Summary:
+ * Fence (reboot) hs21d 'node is unclean'
+ * Fence (reboot) hs21c 'node is unclean'
+
+Executing Cluster Transition:
+ * Fencing hs21d (reboot)
+ * Fencing hs21c (reboot)
+
+Revised Cluster Status:
+ * Node List:
+ * OFFLINE: [ hs21c hs21d ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * apache_1 (ocf:heartbeat:apache): Stopped
+ * nfs_1 (ocf:heartbeat:Filesystem): Stopped
diff --git a/cts/scheduler/summary/systemhealthn2.summary b/cts/scheduler/summary/systemhealthn2.summary
new file mode 100644
index 0000000..ec1d7be
--- /dev/null
+++ b/cts/scheduler/summary/systemhealthn2.summary
@@ -0,0 +1,36 @@
+Current cluster status:
+ * Node List:
+ * Node hs21d: UNCLEAN (offline)
+ * Online: [ hs21c ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * apache_1 (ocf:heartbeat:apache): Stopped
+ * nfs_1 (ocf:heartbeat:Filesystem): Stopped
+
+Transition Summary:
+ * Fence (reboot) hs21d 'node is unclean'
+ * Start stonith-1 ( hs21c )
+ * Start apache_1 ( hs21c )
+ * Start nfs_1 ( hs21c )
+
+Executing Cluster Transition:
+ * Resource action: stonith-1 monitor on hs21c
+ * Resource action: apache_1 monitor on hs21c
+ * Resource action: nfs_1 monitor on hs21c
+ * Fencing hs21d (reboot)
+ * Resource action: stonith-1 start on hs21c
+ * Resource action: apache_1 start on hs21c
+ * Resource action: nfs_1 start on hs21c
+ * Resource action: apache_1 monitor=10000 on hs21c
+ * Resource action: nfs_1 monitor=20000 on hs21c
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hs21c ]
+ * OFFLINE: [ hs21d ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Started hs21c
+ * apache_1 (ocf:heartbeat:apache): Started hs21c
+ * nfs_1 (ocf:heartbeat:Filesystem): Started hs21c
diff --git a/cts/scheduler/summary/systemhealthn3.summary b/cts/scheduler/summary/systemhealthn3.summary
new file mode 100644
index 0000000..ec1d7be
--- /dev/null
+++ b/cts/scheduler/summary/systemhealthn3.summary
@@ -0,0 +1,36 @@
+Current cluster status:
+ * Node List:
+ * Node hs21d: UNCLEAN (offline)
+ * Online: [ hs21c ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * apache_1 (ocf:heartbeat:apache): Stopped
+ * nfs_1 (ocf:heartbeat:Filesystem): Stopped
+
+Transition Summary:
+ * Fence (reboot) hs21d 'node is unclean'
+ * Start stonith-1 ( hs21c )
+ * Start apache_1 ( hs21c )
+ * Start nfs_1 ( hs21c )
+
+Executing Cluster Transition:
+ * Resource action: stonith-1 monitor on hs21c
+ * Resource action: apache_1 monitor on hs21c
+ * Resource action: nfs_1 monitor on hs21c
+ * Fencing hs21d (reboot)
+ * Resource action: stonith-1 start on hs21c
+ * Resource action: apache_1 start on hs21c
+ * Resource action: nfs_1 start on hs21c
+ * Resource action: apache_1 monitor=10000 on hs21c
+ * Resource action: nfs_1 monitor=20000 on hs21c
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hs21c ]
+ * OFFLINE: [ hs21d ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Started hs21c
+ * apache_1 (ocf:heartbeat:apache): Started hs21c
+ * nfs_1 (ocf:heartbeat:Filesystem): Started hs21c
diff --git a/cts/scheduler/summary/systemhealtho1.summary b/cts/scheduler/summary/systemhealtho1.summary
new file mode 100644
index 0000000..f47d395
--- /dev/null
+++ b/cts/scheduler/summary/systemhealtho1.summary
@@ -0,0 +1,26 @@
+Current cluster status:
+ * Node List:
+ * Node hs21c: UNCLEAN (offline)
+ * Node hs21d: UNCLEAN (offline)
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * apache_1 (ocf:heartbeat:apache): Stopped
+ * nfs_1 (ocf:heartbeat:Filesystem): Stopped
+
+Transition Summary:
+ * Fence (reboot) hs21d 'node is unclean'
+ * Fence (reboot) hs21c 'node is unclean'
+
+Executing Cluster Transition:
+ * Fencing hs21d (reboot)
+ * Fencing hs21c (reboot)
+
+Revised Cluster Status:
+ * Node List:
+ * OFFLINE: [ hs21c hs21d ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * apache_1 (ocf:heartbeat:apache): Stopped
+ * nfs_1 (ocf:heartbeat:Filesystem): Stopped
diff --git a/cts/scheduler/summary/systemhealtho2.summary b/cts/scheduler/summary/systemhealtho2.summary
new file mode 100644
index 0000000..fb951fd
--- /dev/null
+++ b/cts/scheduler/summary/systemhealtho2.summary
@@ -0,0 +1,28 @@
+Current cluster status:
+ * Node List:
+ * Node hs21c: online (health is YELLOW)
+ * Node hs21d: UNCLEAN (offline)
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * apache_1 (ocf:heartbeat:apache): Stopped
+ * nfs_1 (ocf:heartbeat:Filesystem): Stopped
+
+Transition Summary:
+ * Fence (reboot) hs21d 'node is unclean'
+
+Executing Cluster Transition:
+ * Resource action: stonith-1 monitor on hs21c
+ * Resource action: apache_1 monitor on hs21c
+ * Resource action: nfs_1 monitor on hs21c
+ * Fencing hs21d (reboot)
+
+Revised Cluster Status:
+ * Node List:
+ * Node hs21c: online (health is YELLOW)
+ * OFFLINE: [ hs21d ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * apache_1 (ocf:heartbeat:apache): Stopped
+ * nfs_1 (ocf:heartbeat:Filesystem): Stopped
diff --git a/cts/scheduler/summary/systemhealtho3.summary b/cts/scheduler/summary/systemhealtho3.summary
new file mode 100644
index 0000000..e8c2174
--- /dev/null
+++ b/cts/scheduler/summary/systemhealtho3.summary
@@ -0,0 +1,28 @@
+Current cluster status:
+ * Node List:
+ * Node hs21c: online (health is RED)
+ * Node hs21d: UNCLEAN (offline)
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * apache_1 (ocf:heartbeat:apache): Stopped
+ * nfs_1 (ocf:heartbeat:Filesystem): Stopped
+
+Transition Summary:
+ * Fence (reboot) hs21d 'node is unclean'
+
+Executing Cluster Transition:
+ * Resource action: stonith-1 monitor on hs21c
+ * Resource action: apache_1 monitor on hs21c
+ * Resource action: nfs_1 monitor on hs21c
+ * Fencing hs21d (reboot)
+
+Revised Cluster Status:
+ * Node List:
+ * Node hs21c: online (health is RED)
+ * OFFLINE: [ hs21d ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * apache_1 (ocf:heartbeat:apache): Stopped
+ * nfs_1 (ocf:heartbeat:Filesystem): Stopped
diff --git a/cts/scheduler/summary/systemhealthp1.summary b/cts/scheduler/summary/systemhealthp1.summary
new file mode 100644
index 0000000..f47d395
--- /dev/null
+++ b/cts/scheduler/summary/systemhealthp1.summary
@@ -0,0 +1,26 @@
+Current cluster status:
+ * Node List:
+ * Node hs21c: UNCLEAN (offline)
+ * Node hs21d: UNCLEAN (offline)
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * apache_1 (ocf:heartbeat:apache): Stopped
+ * nfs_1 (ocf:heartbeat:Filesystem): Stopped
+
+Transition Summary:
+ * Fence (reboot) hs21d 'node is unclean'
+ * Fence (reboot) hs21c 'node is unclean'
+
+Executing Cluster Transition:
+ * Fencing hs21d (reboot)
+ * Fencing hs21c (reboot)
+
+Revised Cluster Status:
+ * Node List:
+ * OFFLINE: [ hs21c hs21d ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * apache_1 (ocf:heartbeat:apache): Stopped
+ * nfs_1 (ocf:heartbeat:Filesystem): Stopped
diff --git a/cts/scheduler/summary/systemhealthp2.summary b/cts/scheduler/summary/systemhealthp2.summary
new file mode 100644
index 0000000..9dba001
--- /dev/null
+++ b/cts/scheduler/summary/systemhealthp2.summary
@@ -0,0 +1,34 @@
+Current cluster status:
+ * Node List:
+ * Node hs21c: online (health is YELLOW)
+ * Node hs21d: UNCLEAN (offline)
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * apache_1 (ocf:heartbeat:apache): Stopped
+ * nfs_1 (ocf:heartbeat:Filesystem): Stopped
+
+Transition Summary:
+ * Fence (reboot) hs21d 'node is unclean'
+ * Start apache_1 ( hs21c )
+ * Start nfs_1 ( hs21c )
+
+Executing Cluster Transition:
+ * Resource action: stonith-1 monitor on hs21c
+ * Resource action: apache_1 monitor on hs21c
+ * Resource action: nfs_1 monitor on hs21c
+ * Fencing hs21d (reboot)
+ * Resource action: apache_1 start on hs21c
+ * Resource action: nfs_1 start on hs21c
+ * Resource action: apache_1 monitor=10000 on hs21c
+ * Resource action: nfs_1 monitor=20000 on hs21c
+
+Revised Cluster Status:
+ * Node List:
+ * Node hs21c: online (health is YELLOW)
+ * OFFLINE: [ hs21d ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * apache_1 (ocf:heartbeat:apache): Started hs21c
+ * nfs_1 (ocf:heartbeat:Filesystem): Started hs21c
diff --git a/cts/scheduler/summary/systemhealthp3.summary b/cts/scheduler/summary/systemhealthp3.summary
new file mode 100644
index 0000000..e8c2174
--- /dev/null
+++ b/cts/scheduler/summary/systemhealthp3.summary
@@ -0,0 +1,28 @@
+Current cluster status:
+ * Node List:
+ * Node hs21c: online (health is RED)
+ * Node hs21d: UNCLEAN (offline)
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * apache_1 (ocf:heartbeat:apache): Stopped
+ * nfs_1 (ocf:heartbeat:Filesystem): Stopped
+
+Transition Summary:
+ * Fence (reboot) hs21d 'node is unclean'
+
+Executing Cluster Transition:
+ * Resource action: stonith-1 monitor on hs21c
+ * Resource action: apache_1 monitor on hs21c
+ * Resource action: nfs_1 monitor on hs21c
+ * Fencing hs21d (reboot)
+
+Revised Cluster Status:
+ * Node List:
+ * Node hs21c: online (health is RED)
+ * OFFLINE: [ hs21d ]
+
+ * Full List of Resources:
+ * stonith-1 (stonith:dummy): Stopped
+ * apache_1 (ocf:heartbeat:apache): Stopped
+ * nfs_1 (ocf:heartbeat:Filesystem): Stopped
diff --git a/cts/scheduler/summary/tags-coloc-order-1.summary b/cts/scheduler/summary/tags-coloc-order-1.summary
new file mode 100644
index 0000000..9d421dd
--- /dev/null
+++ b/cts/scheduler/summary/tags-coloc-order-1.summary
@@ -0,0 +1,39 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * rsc4 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+ * Start rsc2 ( node1 )
+ * Start rsc3 ( node1 )
+ * Start rsc4 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc4 monitor on node2
+ * Resource action: rsc4 monitor on node1
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 start on node1
+ * Resource action: rsc3 start on node1
+ * Resource action: rsc4 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+ * rsc3 (ocf:pacemaker:Dummy): Started node1
+ * rsc4 (ocf:pacemaker:Dummy): Started node1
diff --git a/cts/scheduler/summary/tags-coloc-order-2.summary b/cts/scheduler/summary/tags-coloc-order-2.summary
new file mode 100644
index 0000000..11e4730
--- /dev/null
+++ b/cts/scheduler/summary/tags-coloc-order-2.summary
@@ -0,0 +1,87 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * rsc4 (ocf:pacemaker:Dummy): Stopped
+ * rsc5 (ocf:pacemaker:Dummy): Stopped
+ * rsc6 (ocf:pacemaker:Dummy): Stopped
+ * rsc7 (ocf:pacemaker:Dummy): Stopped
+ * rsc8 (ocf:pacemaker:Dummy): Stopped
+ * rsc9 (ocf:pacemaker:Dummy): Stopped
+ * rsc10 (ocf:pacemaker:Dummy): Stopped
+ * rsc11 (ocf:pacemaker:Dummy): Stopped
+ * rsc12 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+ * Start rsc2 ( node1 )
+ * Start rsc3 ( node1 )
+ * Start rsc4 ( node1 )
+ * Start rsc5 ( node1 )
+ * Start rsc6 ( node1 )
+ * Start rsc7 ( node1 )
+ * Start rsc8 ( node1 )
+ * Start rsc9 ( node1 )
+ * Start rsc10 ( node1 )
+ * Start rsc11 ( node1 )
+ * Start rsc12 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc4 monitor on node2
+ * Resource action: rsc4 monitor on node1
+ * Resource action: rsc5 monitor on node2
+ * Resource action: rsc5 monitor on node1
+ * Resource action: rsc6 monitor on node2
+ * Resource action: rsc6 monitor on node1
+ * Resource action: rsc7 monitor on node2
+ * Resource action: rsc7 monitor on node1
+ * Resource action: rsc8 monitor on node2
+ * Resource action: rsc8 monitor on node1
+ * Resource action: rsc9 monitor on node2
+ * Resource action: rsc9 monitor on node1
+ * Resource action: rsc10 monitor on node2
+ * Resource action: rsc10 monitor on node1
+ * Resource action: rsc11 monitor on node2
+ * Resource action: rsc11 monitor on node1
+ * Resource action: rsc12 monitor on node2
+ * Resource action: rsc12 monitor on node1
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 start on node1
+ * Resource action: rsc3 start on node1
+ * Resource action: rsc4 start on node1
+ * Resource action: rsc5 start on node1
+ * Resource action: rsc6 start on node1
+ * Resource action: rsc7 start on node1
+ * Resource action: rsc8 start on node1
+ * Resource action: rsc9 start on node1
+ * Resource action: rsc10 start on node1
+ * Resource action: rsc11 start on node1
+ * Resource action: rsc12 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+ * rsc3 (ocf:pacemaker:Dummy): Started node1
+ * rsc4 (ocf:pacemaker:Dummy): Started node1
+ * rsc5 (ocf:pacemaker:Dummy): Started node1
+ * rsc6 (ocf:pacemaker:Dummy): Started node1
+ * rsc7 (ocf:pacemaker:Dummy): Started node1
+ * rsc8 (ocf:pacemaker:Dummy): Started node1
+ * rsc9 (ocf:pacemaker:Dummy): Started node1
+ * rsc10 (ocf:pacemaker:Dummy): Started node1
+ * rsc11 (ocf:pacemaker:Dummy): Started node1
+ * rsc12 (ocf:pacemaker:Dummy): Started node1
diff --git a/cts/scheduler/summary/tags-location.summary b/cts/scheduler/summary/tags-location.summary
new file mode 100644
index 0000000..e604711
--- /dev/null
+++ b/cts/scheduler/summary/tags-location.summary
@@ -0,0 +1,51 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * rsc4 (ocf:pacemaker:Dummy): Stopped
+ * rsc5 (ocf:pacemaker:Dummy): Stopped
+ * rsc6 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+ * Start rsc2 ( node2 )
+ * Start rsc3 ( node2 )
+ * Start rsc4 ( node2 )
+ * Start rsc5 ( node2 )
+ * Start rsc6 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc4 monitor on node2
+ * Resource action: rsc4 monitor on node1
+ * Resource action: rsc5 monitor on node2
+ * Resource action: rsc5 monitor on node1
+ * Resource action: rsc6 monitor on node2
+ * Resource action: rsc6 monitor on node1
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 start on node2
+ * Resource action: rsc3 start on node2
+ * Resource action: rsc4 start on node2
+ * Resource action: rsc5 start on node2
+ * Resource action: rsc6 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+ * rsc3 (ocf:pacemaker:Dummy): Started node2
+ * rsc4 (ocf:pacemaker:Dummy): Started node2
+ * rsc5 (ocf:pacemaker:Dummy): Started node2
+ * rsc6 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/tags-ticket.summary b/cts/scheduler/summary/tags-ticket.summary
new file mode 100644
index 0000000..572d2b4
--- /dev/null
+++ b/cts/scheduler/summary/tags-ticket.summary
@@ -0,0 +1,39 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * rsc4 (ocf:pacemaker:Dummy): Stopped
+ * rsc5 (ocf:pacemaker:Dummy): Stopped
+ * rsc6 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc4 monitor on node2
+ * Resource action: rsc4 monitor on node1
+ * Resource action: rsc5 monitor on node2
+ * Resource action: rsc5 monitor on node1
+ * Resource action: rsc6 monitor on node2
+ * Resource action: rsc6 monitor on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * rsc4 (ocf:pacemaker:Dummy): Stopped
+ * rsc5 (ocf:pacemaker:Dummy): Stopped
+ * rsc6 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/target-0.summary b/cts/scheduler/summary/target-0.summary
new file mode 100644
index 0000000..ee291fc
--- /dev/null
+++ b/cts/scheduler/summary/target-0.summary
@@ -0,0 +1,40 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr monitor on c001n08
+ * Resource action: DcIPaddr monitor on c001n03
+ * Resource action: DcIPaddr monitor on c001n01
+ * Resource action: rsc_c001n08 monitor on c001n03
+ * Resource action: rsc_c001n08 monitor on c001n02
+ * Resource action: rsc_c001n08 monitor on c001n01
+ * Resource action: rsc_c001n02 monitor on c001n08
+ * Resource action: rsc_c001n02 monitor on c001n03
+ * Resource action: rsc_c001n02 monitor on c001n01
+ * Resource action: rsc_c001n03 monitor on c001n08
+ * Resource action: rsc_c001n03 monitor on c001n02
+ * Resource action: rsc_c001n03 monitor on c001n01
+ * Resource action: rsc_c001n01 monitor on c001n08
+ * Resource action: rsc_c001n01 monitor on c001n03
+ * Resource action: rsc_c001n01 monitor on c001n02
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
diff --git a/cts/scheduler/summary/target-1.summary b/cts/scheduler/summary/target-1.summary
new file mode 100644
index 0000000..edc1daf
--- /dev/null
+++ b/cts/scheduler/summary/target-1.summary
@@ -0,0 +1,43 @@
+1 of 5 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08 (disabled)
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * Clone Set: promoteme [rsc_c001n03] (promotable):
+ * Unpromoted: [ c001n03 ]
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
+
+Transition Summary:
+ * Stop rsc_c001n08 ( c001n08 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr monitor on c001n08
+ * Resource action: DcIPaddr monitor on c001n03
+ * Resource action: DcIPaddr monitor on c001n01
+ * Resource action: rsc_c001n08 stop on c001n08
+ * Resource action: rsc_c001n08 monitor on c001n03
+ * Resource action: rsc_c001n08 monitor on c001n02
+ * Resource action: rsc_c001n08 monitor on c001n01
+ * Resource action: rsc_c001n02 monitor on c001n08
+ * Resource action: rsc_c001n02 monitor on c001n03
+ * Resource action: rsc_c001n02 monitor on c001n01
+ * Resource action: rsc_c001n01 monitor on c001n08
+ * Resource action: rsc_c001n01 monitor on c001n03
+ * Resource action: rsc_c001n01 monitor on c001n02
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Stopped (disabled)
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * Clone Set: promoteme [rsc_c001n03] (promotable):
+ * Unpromoted: [ c001n03 ]
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
diff --git a/cts/scheduler/summary/target-2.summary b/cts/scheduler/summary/target-2.summary
new file mode 100644
index 0000000..a6194ae
--- /dev/null
+++ b/cts/scheduler/summary/target-2.summary
@@ -0,0 +1,44 @@
+1 of 5 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Started c001n08 (disabled)
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
+
+Transition Summary:
+ * Stop rsc_c001n08 ( c001n08 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr monitor on c001n08
+ * Resource action: DcIPaddr monitor on c001n03
+ * Resource action: DcIPaddr monitor on c001n01
+ * Resource action: rsc_c001n08 stop on c001n08
+ * Resource action: rsc_c001n08 monitor on c001n03
+ * Resource action: rsc_c001n08 monitor on c001n02
+ * Resource action: rsc_c001n08 monitor on c001n01
+ * Resource action: rsc_c001n02 monitor on c001n08
+ * Resource action: rsc_c001n02 monitor on c001n03
+ * Resource action: rsc_c001n02 monitor on c001n01
+ * Resource action: rsc_c001n03 monitor on c001n08
+ * Resource action: rsc_c001n03 monitor on c001n02
+ * Resource action: rsc_c001n03 monitor on c001n01
+ * Resource action: rsc_c001n01 monitor on c001n08
+ * Resource action: rsc_c001n01 monitor on c001n03
+ * Resource action: rsc_c001n01 monitor on c001n02
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c001n01 c001n02 c001n03 c001n08 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Stopped (disabled)
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Started c001n02
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Started c001n03
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Started c001n01
diff --git a/cts/scheduler/summary/template-1.summary b/cts/scheduler/summary/template-1.summary
new file mode 100644
index 0000000..eb4493e
--- /dev/null
+++ b/cts/scheduler/summary/template-1.summary
@@ -0,0 +1,30 @@
+1 of 2 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped (disabled)
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc2 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Pseudo action: load_stopped_node2
+ * Pseudo action: load_stopped_node1
+ * Resource action: rsc2 start on node1
+ * Resource action: rsc2 monitor=10000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped (disabled)
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
diff --git a/cts/scheduler/summary/template-2.summary b/cts/scheduler/summary/template-2.summary
new file mode 100644
index 0000000..e7d2c11
--- /dev/null
+++ b/cts/scheduler/summary/template-2.summary
@@ -0,0 +1,28 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc2 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Pseudo action: load_stopped_node2
+ * Pseudo action: load_stopped_node1
+ * Resource action: rsc2 start on node1
+ * Resource action: rsc2 monitor=20000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
diff --git a/cts/scheduler/summary/template-3.summary b/cts/scheduler/summary/template-3.summary
new file mode 100644
index 0000000..4054f1e
--- /dev/null
+++ b/cts/scheduler/summary/template-3.summary
@@ -0,0 +1,33 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+ * Start rsc2 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Pseudo action: load_stopped_node2
+ * Pseudo action: load_stopped_node1
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 monitor=30000 on node1
+ * Resource action: rsc2 start on node2
+ * Resource action: rsc1 monitor=20000 on node1
+ * Resource action: rsc1 monitor=10000 on node1
+ * Resource action: rsc2 monitor=5000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/template-clone-group.summary b/cts/scheduler/summary/template-clone-group.summary
new file mode 100644
index 0000000..efc904d
--- /dev/null
+++ b/cts/scheduler/summary/template-clone-group.summary
@@ -0,0 +1,37 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: clone1 [group1]:
+ * Stopped: [ node1 node2 ]
+
+Transition Summary:
+ * Start rsc1:0 ( node1 )
+ * Start rsc2:0 ( node1 )
+ * Start rsc1:1 ( node2 )
+ * Start rsc2:1 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1:0 monitor on node1
+ * Resource action: rsc2:0 monitor on node1
+ * Resource action: rsc1:1 monitor on node2
+ * Resource action: rsc2:1 monitor on node2
+ * Pseudo action: clone1_start_0
+ * Pseudo action: group1:0_start_0
+ * Resource action: rsc1:0 start on node1
+ * Resource action: rsc2:0 start on node1
+ * Pseudo action: group1:1_start_0
+ * Resource action: rsc1:1 start on node2
+ * Resource action: rsc2:1 start on node2
+ * Pseudo action: group1:0_running_0
+ * Pseudo action: group1:1_running_0
+ * Pseudo action: clone1_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: clone1 [group1]:
+ * Started: [ node1 node2 ]
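
Cloning a group of templated primitives yields the per-instance wrappers seen above (group1:0, group1:1), one group copy per clone instance. A sketch of the resource stanza (hypothetical ids):

    <clone id="clone1">
      <group id="group1">
        <!-- each clone instance gets its own copy of both members -->
        <primitive id="rsc1" template="rsc_template"/>
        <primitive id="rsc2" template="rsc_template"/>
      </group>
    </clone>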
diff --git a/cts/scheduler/summary/template-clone-primitive.summary b/cts/scheduler/summary/template-clone-primitive.summary
new file mode 100644
index 0000000..59fdfbe
--- /dev/null
+++ b/cts/scheduler/summary/template-clone-primitive.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
+
+Transition Summary:
+ * Start rsc1:0 ( node1 )
+ * Start rsc1:1 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1:0 monitor on node1
+ * Resource action: rsc1:1 monitor on node2
+ * Pseudo action: clone1_start_0
+ * Resource action: rsc1:0 start on node1
+ * Resource action: rsc1:1 start on node2
+ * Pseudo action: clone1_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * Clone Set: clone1 [rsc1]:
+ * Started: [ node1 node2 ]
diff --git a/cts/scheduler/summary/template-coloc-1.summary b/cts/scheduler/summary/template-coloc-1.summary
new file mode 100644
index 0000000..9d421dd
--- /dev/null
+++ b/cts/scheduler/summary/template-coloc-1.summary
@@ -0,0 +1,39 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * rsc4 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+ * Start rsc2 ( node1 )
+ * Start rsc3 ( node1 )
+ * Start rsc4 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc4 monitor on node2
+ * Resource action: rsc4 monitor on node1
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 start on node1
+ * Resource action: rsc3 start on node1
+ * Resource action: rsc4 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+ * rsc3 (ocf:pacemaker:Dummy): Started node1
+ * rsc4 (ocf:pacemaker:Dummy): Started node1
diff --git a/cts/scheduler/summary/template-coloc-2.summary b/cts/scheduler/summary/template-coloc-2.summary
new file mode 100644
index 0000000..9d421dd
--- /dev/null
+++ b/cts/scheduler/summary/template-coloc-2.summary
@@ -0,0 +1,39 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * rsc4 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+ * Start rsc2 ( node1 )
+ * Start rsc3 ( node1 )
+ * Start rsc4 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc4 monitor on node2
+ * Resource action: rsc4 monitor on node1
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 start on node1
+ * Resource action: rsc3 start on node1
+ * Resource action: rsc4 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+ * rsc3 (ocf:pacemaker:Dummy): Started node1
+ * rsc4 (ocf:pacemaker:Dummy): Started node1
diff --git a/cts/scheduler/summary/template-coloc-3.summary b/cts/scheduler/summary/template-coloc-3.summary
new file mode 100644
index 0000000..a7ff63e
--- /dev/null
+++ b/cts/scheduler/summary/template-coloc-3.summary
@@ -0,0 +1,51 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * rsc4 (ocf:pacemaker:Dummy): Stopped
+ * rsc5 (ocf:pacemaker:Dummy): Stopped
+ * rsc6 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+ * Start rsc2 ( node2 )
+ * Start rsc3 ( node1 )
+ * Start rsc4 ( node2 )
+ * Start rsc5 ( node1 )
+ * Start rsc6 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc4 monitor on node2
+ * Resource action: rsc4 monitor on node1
+ * Resource action: rsc5 monitor on node2
+ * Resource action: rsc5 monitor on node1
+ * Resource action: rsc6 monitor on node2
+ * Resource action: rsc6 monitor on node1
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 start on node2
+ * Resource action: rsc3 start on node1
+ * Resource action: rsc4 start on node2
+ * Resource action: rsc5 start on node1
+ * Resource action: rsc6 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+ * rsc3 (ocf:pacemaker:Dummy): Started node1
+ * rsc4 (ocf:pacemaker:Dummy): Started node2
+ * rsc5 (ocf:pacemaker:Dummy): Started node1
+ * rsc6 (ocf:pacemaker:Dummy): Started node2
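
Templates can also be referenced in constraints; inside a resource set, a template reference expands to all primitives derived from it. The strict node1/node2 split above is the kind of placement a colocation chain produces, e.g. (a sketch, not the verbatim constraint section of the test):

    <rsc_colocation id="coloc-rsc3-with-rsc1" rsc="rsc3" with-rsc="rsc1" score="INFINITY"/>
    <rsc_colocation id="coloc-rsc5-with-rsc1" rsc="rsc5" with-rsc="rsc1" score="INFINITY"/>
    <!-- anti-colocation pushes the even-numbered chain onto the other node -->
    <rsc_colocation id="coloc-rsc2-away-from-rsc1" rsc="rsc2" with-rsc="rsc1" score="-INFINITY"/>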
diff --git a/cts/scheduler/summary/template-order-1.summary b/cts/scheduler/summary/template-order-1.summary
new file mode 100644
index 0000000..1b3059c
--- /dev/null
+++ b/cts/scheduler/summary/template-order-1.summary
@@ -0,0 +1,39 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * rsc4 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+ * Start rsc2 ( node2 )
+ * Start rsc3 ( node1 )
+ * Start rsc4 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc4 monitor on node2
+ * Resource action: rsc4 monitor on node1
+ * Resource action: rsc4 start on node2
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 start on node2
+ * Resource action: rsc3 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+ * rsc3 (ocf:pacemaker:Dummy): Started node1
+ * rsc4 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/template-order-2.summary b/cts/scheduler/summary/template-order-2.summary
new file mode 100644
index 0000000..9283ce8
--- /dev/null
+++ b/cts/scheduler/summary/template-order-2.summary
@@ -0,0 +1,39 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * rsc4 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+ * Start rsc2 ( node2 )
+ * Start rsc3 ( node1 )
+ * Start rsc4 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc4 monitor on node2
+ * Resource action: rsc4 monitor on node1
+ * Resource action: rsc2 start on node2
+ * Resource action: rsc3 start on node1
+ * Resource action: rsc4 start on node2
+ * Resource action: rsc1 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+ * rsc3 (ocf:pacemaker:Dummy): Started node1
+ * rsc4 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/template-order-3.summary b/cts/scheduler/summary/template-order-3.summary
new file mode 100644
index 0000000..664b1a6
--- /dev/null
+++ b/cts/scheduler/summary/template-order-3.summary
@@ -0,0 +1,51 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * rsc4 (ocf:pacemaker:Dummy): Stopped
+ * rsc5 (ocf:pacemaker:Dummy): Stopped
+ * rsc6 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+ * Start rsc2 ( node2 )
+ * Start rsc3 ( node1 )
+ * Start rsc4 ( node2 )
+ * Start rsc5 ( node1 )
+ * Start rsc6 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc4 monitor on node2
+ * Resource action: rsc4 monitor on node1
+ * Resource action: rsc5 monitor on node2
+ * Resource action: rsc5 monitor on node1
+ * Resource action: rsc6 monitor on node2
+ * Resource action: rsc6 monitor on node1
+ * Resource action: rsc4 start on node2
+ * Resource action: rsc5 start on node1
+ * Resource action: rsc6 start on node2
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 start on node2
+ * Resource action: rsc3 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+ * rsc3 (ocf:pacemaker:Dummy): Started node1
+ * rsc4 (ocf:pacemaker:Dummy): Started node2
+ * rsc5 (ocf:pacemaker:Dummy): Started node1
+ * rsc6 (ocf:pacemaker:Dummy): Started node2
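
The order variant shows the same split driven by ordering: rsc4, rsc5, and rsc6 must start before rsc1, rsc2, and rsc3, which is why their start actions come first in the transition above. One mandatory ordering of that shape (sketch, hypothetical ids):

    <rsc_order id="order-rsc4-then-rsc1" first="rsc4" then="rsc1" kind="Mandatory"/>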
diff --git a/cts/scheduler/summary/template-rsc-sets-1.summary b/cts/scheduler/summary/template-rsc-sets-1.summary
new file mode 100644
index 0000000..8e005c4
--- /dev/null
+++ b/cts/scheduler/summary/template-rsc-sets-1.summary
@@ -0,0 +1,45 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * rsc4 (ocf:pacemaker:Dummy): Stopped
+ * rsc5 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+ * Start rsc2 ( node1 )
+ * Start rsc3 ( node1 )
+ * Start rsc4 ( node1 )
+ * Start rsc5 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc4 monitor on node2
+ * Resource action: rsc4 monitor on node1
+ * Resource action: rsc5 monitor on node2
+ * Resource action: rsc5 monitor on node1
+ * Resource action: rsc4 start on node1
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 start on node1
+ * Resource action: rsc3 start on node1
+ * Resource action: rsc5 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+ * rsc3 (ocf:pacemaker:Dummy): Started node1
+ * rsc4 (ocf:pacemaker:Dummy): Started node1
+ * rsc5 (ocf:pacemaker:Dummy): Started node1
diff --git a/cts/scheduler/summary/template-rsc-sets-2.summary b/cts/scheduler/summary/template-rsc-sets-2.summary
new file mode 100644
index 0000000..8e005c4
--- /dev/null
+++ b/cts/scheduler/summary/template-rsc-sets-2.summary
@@ -0,0 +1,45 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * rsc4 (ocf:pacemaker:Dummy): Stopped
+ * rsc5 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+ * Start rsc2 ( node1 )
+ * Start rsc3 ( node1 )
+ * Start rsc4 ( node1 )
+ * Start rsc5 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc4 monitor on node2
+ * Resource action: rsc4 monitor on node1
+ * Resource action: rsc5 monitor on node2
+ * Resource action: rsc5 monitor on node1
+ * Resource action: rsc4 start on node1
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 start on node1
+ * Resource action: rsc3 start on node1
+ * Resource action: rsc5 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+ * rsc3 (ocf:pacemaker:Dummy): Started node1
+ * rsc4 (ocf:pacemaker:Dummy): Started node1
+ * rsc5 (ocf:pacemaker:Dummy): Started node1
diff --git a/cts/scheduler/summary/template-rsc-sets-3.summary b/cts/scheduler/summary/template-rsc-sets-3.summary
new file mode 100644
index 0000000..8e005c4
--- /dev/null
+++ b/cts/scheduler/summary/template-rsc-sets-3.summary
@@ -0,0 +1,45 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * rsc4 (ocf:pacemaker:Dummy): Stopped
+ * rsc5 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node1 )
+ * Start rsc2 ( node1 )
+ * Start rsc3 ( node1 )
+ * Start rsc4 ( node1 )
+ * Start rsc5 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc4 monitor on node2
+ * Resource action: rsc4 monitor on node1
+ * Resource action: rsc5 monitor on node2
+ * Resource action: rsc5 monitor on node1
+ * Resource action: rsc4 start on node1
+ * Resource action: rsc1 start on node1
+ * Resource action: rsc2 start on node1
+ * Resource action: rsc3 start on node1
+ * Resource action: rsc5 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+ * rsc3 (ocf:pacemaker:Dummy): Started node1
+ * rsc4 (ocf:pacemaker:Dummy): Started node1
+ * rsc5 (ocf:pacemaker:Dummy): Started node1
diff --git a/cts/scheduler/summary/template-rsc-sets-4.summary b/cts/scheduler/summary/template-rsc-sets-4.summary
new file mode 100644
index 0000000..e74b971
--- /dev/null
+++ b/cts/scheduler/summary/template-rsc-sets-4.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/template-ticket.summary b/cts/scheduler/summary/template-ticket.summary
new file mode 100644
index 0000000..e74b971
--- /dev/null
+++ b/cts/scheduler/summary/template-ticket.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-clone-1.summary b/cts/scheduler/summary/ticket-clone-1.summary
new file mode 100644
index 0000000..f682d73
--- /dev/null
+++ b/cts/scheduler/summary/ticket-clone-1.summary
@@ -0,0 +1,23 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: rsc1:0 monitor on node2
+ * Resource action: rsc1:0 monitor on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-clone-10.summary b/cts/scheduler/summary/ticket-clone-10.summary
new file mode 100644
index 0000000..f682d73
--- /dev/null
+++ b/cts/scheduler/summary/ticket-clone-10.summary
@@ -0,0 +1,23 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: rsc1:0 monitor on node2
+ * Resource action: rsc1:0 monitor on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-clone-11.summary b/cts/scheduler/summary/ticket-clone-11.summary
new file mode 100644
index 0000000..abba11f
--- /dev/null
+++ b/cts/scheduler/summary/ticket-clone-11.summary
@@ -0,0 +1,29 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
+
+Transition Summary:
+ * Start rsc1:0 ( node2 )
+ * Start rsc1:1 ( node1 )
+
+Executing Cluster Transition:
+ * Pseudo action: clone1_start_0
+ * Resource action: rsc1:0 start on node2
+ * Resource action: rsc1:1 start on node1
+ * Pseudo action: clone1_running_0
+ * Resource action: rsc1:0 monitor=5000 on node2
+ * Resource action: rsc1:1 monitor=5000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Started: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-clone-12.summary b/cts/scheduler/summary/ticket-clone-12.summary
new file mode 100644
index 0000000..d71f36e
--- /dev/null
+++ b/cts/scheduler/summary/ticket-clone-12.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Started: [ node1 node2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Started: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-clone-13.summary b/cts/scheduler/summary/ticket-clone-13.summary
new file mode 100644
index 0000000..d3be28c
--- /dev/null
+++ b/cts/scheduler/summary/ticket-clone-13.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-clone-14.summary b/cts/scheduler/summary/ticket-clone-14.summary
new file mode 100644
index 0000000..11dbd5c
--- /dev/null
+++ b/cts/scheduler/summary/ticket-clone-14.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Started: [ node1 node2 ]
+
+Transition Summary:
+ * Stop rsc1:0 ( node1 ) due to node availability
+ * Stop rsc1:1 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: clone1_stop_0
+ * Resource action: rsc1:1 stop on node1
+ * Resource action: rsc1:0 stop on node2
+ * Pseudo action: clone1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
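
The ticket-* family drives rsc_ticket constraints: dependents may run only while the named ticket is granted, and revoking it triggers the constraint's loss-policy. Plain stops like the ones above match loss-policy="stop", which is also the default. A sketch with hypothetical ids:

    <!-- clone1 runs only while ticketA is granted; on loss, stop it -->
    <rsc_ticket id="ticketA-clone1" ticket="ticketA" rsc="clone1" loss-policy="stop"/>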
diff --git a/cts/scheduler/summary/ticket-clone-15.summary b/cts/scheduler/summary/ticket-clone-15.summary
new file mode 100644
index 0000000..11dbd5c
--- /dev/null
+++ b/cts/scheduler/summary/ticket-clone-15.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Started: [ node1 node2 ]
+
+Transition Summary:
+ * Stop rsc1:0 ( node1 ) due to node availability
+ * Stop rsc1:1 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: clone1_stop_0
+ * Resource action: rsc1:1 stop on node1
+ * Resource action: rsc1:0 stop on node2
+ * Pseudo action: clone1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-clone-16.summary b/cts/scheduler/summary/ticket-clone-16.summary
new file mode 100644
index 0000000..d3be28c
--- /dev/null
+++ b/cts/scheduler/summary/ticket-clone-16.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-clone-17.summary b/cts/scheduler/summary/ticket-clone-17.summary
new file mode 100644
index 0000000..11dbd5c
--- /dev/null
+++ b/cts/scheduler/summary/ticket-clone-17.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Started: [ node1 node2 ]
+
+Transition Summary:
+ * Stop rsc1:0 ( node1 ) due to node availability
+ * Stop rsc1:1 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: clone1_stop_0
+ * Resource action: rsc1:1 stop on node1
+ * Resource action: rsc1:0 stop on node2
+ * Pseudo action: clone1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-clone-18.summary b/cts/scheduler/summary/ticket-clone-18.summary
new file mode 100644
index 0000000..11dbd5c
--- /dev/null
+++ b/cts/scheduler/summary/ticket-clone-18.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Started: [ node1 node2 ]
+
+Transition Summary:
+ * Stop rsc1:0 ( node1 ) due to node availability
+ * Stop rsc1:1 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: clone1_stop_0
+ * Resource action: rsc1:1 stop on node1
+ * Resource action: rsc1:0 stop on node2
+ * Pseudo action: clone1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-clone-19.summary b/cts/scheduler/summary/ticket-clone-19.summary
new file mode 100644
index 0000000..d3be28c
--- /dev/null
+++ b/cts/scheduler/summary/ticket-clone-19.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-clone-2.summary b/cts/scheduler/summary/ticket-clone-2.summary
new file mode 100644
index 0000000..abba11f
--- /dev/null
+++ b/cts/scheduler/summary/ticket-clone-2.summary
@@ -0,0 +1,29 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
+
+Transition Summary:
+ * Start rsc1:0 ( node2 )
+ * Start rsc1:1 ( node1 )
+
+Executing Cluster Transition:
+ * Pseudo action: clone1_start_0
+ * Resource action: rsc1:0 start on node2
+ * Resource action: rsc1:1 start on node1
+ * Pseudo action: clone1_running_0
+ * Resource action: rsc1:0 monitor=5000 on node2
+ * Resource action: rsc1:1 monitor=5000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Started: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-clone-20.summary b/cts/scheduler/summary/ticket-clone-20.summary
new file mode 100644
index 0000000..11dbd5c
--- /dev/null
+++ b/cts/scheduler/summary/ticket-clone-20.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Started: [ node1 node2 ]
+
+Transition Summary:
+ * Stop rsc1:0 ( node1 ) due to node availability
+ * Stop rsc1:1 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: clone1_stop_0
+ * Resource action: rsc1:1 stop on node1
+ * Resource action: rsc1:0 stop on node2
+ * Pseudo action: clone1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-clone-21.summary b/cts/scheduler/summary/ticket-clone-21.summary
new file mode 100644
index 0000000..1dfd9b4
--- /dev/null
+++ b/cts/scheduler/summary/ticket-clone-21.summary
@@ -0,0 +1,33 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Started: [ node1 node2 ]
+
+Transition Summary:
+ * Fence (reboot) node2 'deadman ticket was lost'
+ * Fence (reboot) node1 'deadman ticket was lost'
+ * Stop rsc_stonith ( node1 ) due to node availability
+ * Stop rsc1:0 ( node1 ) due to node availability
+ * Stop rsc1:1 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: rsc_stonith_stop_0
+ * Fencing node1 (reboot)
+ * Fencing node2 (reboot)
+ * Pseudo action: clone1_stop_0
+ * Pseudo action: rsc1:1_stop_0
+ * Pseudo action: rsc1:0_stop_0
+ * Pseudo action: clone1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * OFFLINE: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Stopped
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
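
With loss-policy="fence" the lost ticket becomes a deadman condition: every node still running a dependent is rebooted, which is exactly the Fence (reboot) ... 'deadman ticket was lost' entries above. The only change from the stop variant is the policy value (sketch):

    <rsc_ticket id="ticketA-clone1" ticket="ticketA" rsc="clone1" loss-policy="fence"/>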
diff --git a/cts/scheduler/summary/ticket-clone-22.summary b/cts/scheduler/summary/ticket-clone-22.summary
new file mode 100644
index 0000000..d3be28c
--- /dev/null
+++ b/cts/scheduler/summary/ticket-clone-22.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-clone-23.summary b/cts/scheduler/summary/ticket-clone-23.summary
new file mode 100644
index 0000000..11dbd5c
--- /dev/null
+++ b/cts/scheduler/summary/ticket-clone-23.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Started: [ node1 node2 ]
+
+Transition Summary:
+ * Stop rsc1:0 ( node1 ) due to node availability
+ * Stop rsc1:1 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: clone1_stop_0
+ * Resource action: rsc1:1 stop on node1
+ * Resource action: rsc1:0 stop on node2
+ * Pseudo action: clone1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-clone-24.summary b/cts/scheduler/summary/ticket-clone-24.summary
new file mode 100644
index 0000000..d71f36e
--- /dev/null
+++ b/cts/scheduler/summary/ticket-clone-24.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Started: [ node1 node2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Started: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-clone-3.summary b/cts/scheduler/summary/ticket-clone-3.summary
new file mode 100644
index 0000000..11dbd5c
--- /dev/null
+++ b/cts/scheduler/summary/ticket-clone-3.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Started: [ node1 node2 ]
+
+Transition Summary:
+ * Stop rsc1:0 ( node1 ) due to node availability
+ * Stop rsc1:1 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: clone1_stop_0
+ * Resource action: rsc1:1 stop on node1
+ * Resource action: rsc1:0 stop on node2
+ * Pseudo action: clone1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-clone-4.summary b/cts/scheduler/summary/ticket-clone-4.summary
new file mode 100644
index 0000000..f682d73
--- /dev/null
+++ b/cts/scheduler/summary/ticket-clone-4.summary
@@ -0,0 +1,23 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: rsc1:0 monitor on node2
+ * Resource action: rsc1:0 monitor on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-clone-5.summary b/cts/scheduler/summary/ticket-clone-5.summary
new file mode 100644
index 0000000..abba11f
--- /dev/null
+++ b/cts/scheduler/summary/ticket-clone-5.summary
@@ -0,0 +1,29 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
+
+Transition Summary:
+ * Start rsc1:0 ( node2 )
+ * Start rsc1:1 ( node1 )
+
+Executing Cluster Transition:
+ * Pseudo action: clone1_start_0
+ * Resource action: rsc1:0 start on node2
+ * Resource action: rsc1:1 start on node1
+ * Pseudo action: clone1_running_0
+ * Resource action: rsc1:0 monitor=5000 on node2
+ * Resource action: rsc1:1 monitor=5000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Started: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-clone-6.summary b/cts/scheduler/summary/ticket-clone-6.summary
new file mode 100644
index 0000000..11dbd5c
--- /dev/null
+++ b/cts/scheduler/summary/ticket-clone-6.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Started: [ node1 node2 ]
+
+Transition Summary:
+ * Stop rsc1:0 ( node1 ) due to node availability
+ * Stop rsc1:1 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: clone1_stop_0
+ * Resource action: rsc1:1 stop on node1
+ * Resource action: rsc1:0 stop on node2
+ * Pseudo action: clone1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-clone-7.summary b/cts/scheduler/summary/ticket-clone-7.summary
new file mode 100644
index 0000000..f682d73
--- /dev/null
+++ b/cts/scheduler/summary/ticket-clone-7.summary
@@ -0,0 +1,23 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: rsc1:0 monitor on node2
+ * Resource action: rsc1:0 monitor on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-clone-8.summary b/cts/scheduler/summary/ticket-clone-8.summary
new file mode 100644
index 0000000..abba11f
--- /dev/null
+++ b/cts/scheduler/summary/ticket-clone-8.summary
@@ -0,0 +1,29 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
+
+Transition Summary:
+ * Start rsc1:0 ( node2 )
+ * Start rsc1:1 ( node1 )
+
+Executing Cluster Transition:
+ * Pseudo action: clone1_start_0
+ * Resource action: rsc1:0 start on node2
+ * Resource action: rsc1:1 start on node1
+ * Pseudo action: clone1_running_0
+ * Resource action: rsc1:0 monitor=5000 on node2
+ * Resource action: rsc1:1 monitor=5000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Started: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-clone-9.summary b/cts/scheduler/summary/ticket-clone-9.summary
new file mode 100644
index 0000000..1dfd9b4
--- /dev/null
+++ b/cts/scheduler/summary/ticket-clone-9.summary
@@ -0,0 +1,33 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: clone1 [rsc1]:
+ * Started: [ node1 node2 ]
+
+Transition Summary:
+ * Fence (reboot) node2 'deadman ticket was lost'
+ * Fence (reboot) node1 'deadman ticket was lost'
+ * Stop rsc_stonith ( node1 ) due to node availability
+ * Stop rsc1:0 ( node1 ) due to node availability
+ * Stop rsc1:1 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: rsc_stonith_stop_0
+ * Fencing node1 (reboot)
+ * Fencing node2 (reboot)
+ * Pseudo action: clone1_stop_0
+ * Pseudo action: rsc1:1_stop_0
+ * Pseudo action: rsc1:0_stop_0
+ * Pseudo action: clone1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * OFFLINE: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Stopped
+ * Clone Set: clone1 [rsc1]:
+ * Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-group-1.summary b/cts/scheduler/summary/ticket-group-1.summary
new file mode 100644
index 0000000..4db96ef
--- /dev/null
+++ b/cts/scheduler/summary/ticket-group-1.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-group-10.summary b/cts/scheduler/summary/ticket-group-10.summary
new file mode 100644
index 0000000..4db96ef
--- /dev/null
+++ b/cts/scheduler/summary/ticket-group-10.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-group-11.summary b/cts/scheduler/summary/ticket-group-11.summary
new file mode 100644
index 0000000..2351695
--- /dev/null
+++ b/cts/scheduler/summary/ticket-group-11.summary
@@ -0,0 +1,31 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+ * Start rsc2 ( node2 )
+
+Executing Cluster Transition:
+ * Pseudo action: group1_start_0
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 start on node2
+ * Pseudo action: group1_running_0
+ * Resource action: rsc1 monitor=5000 on node2
+ * Resource action: rsc2 monitor=5000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/ticket-group-12.summary b/cts/scheduler/summary/ticket-group-12.summary
new file mode 100644
index 0000000..322b79f
--- /dev/null
+++ b/cts/scheduler/summary/ticket-group-12.summary
@@ -0,0 +1,23 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/ticket-group-13.summary b/cts/scheduler/summary/ticket-group-13.summary
new file mode 100644
index 0000000..378dda4
--- /dev/null
+++ b/cts/scheduler/summary/ticket-group-13.summary
@@ -0,0 +1,23 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-group-14.summary b/cts/scheduler/summary/ticket-group-14.summary
new file mode 100644
index 0000000..72f7464
--- /dev/null
+++ b/cts/scheduler/summary/ticket-group-14.summary
@@ -0,0 +1,29 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Stop rsc1 ( node2 ) due to node availability
+ * Stop rsc2 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: group1_stop_0
+ * Resource action: rsc2 stop on node2
+ * Resource action: rsc1 stop on node2
+ * Pseudo action: group1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-group-15.summary b/cts/scheduler/summary/ticket-group-15.summary
new file mode 100644
index 0000000..72f7464
--- /dev/null
+++ b/cts/scheduler/summary/ticket-group-15.summary
@@ -0,0 +1,29 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Stop rsc1 ( node2 ) due to node availability
+ * Stop rsc2 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: group1_stop_0
+ * Resource action: rsc2 stop on node2
+ * Resource action: rsc1 stop on node2
+ * Pseudo action: group1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-group-16.summary b/cts/scheduler/summary/ticket-group-16.summary
new file mode 100644
index 0000000..378dda4
--- /dev/null
+++ b/cts/scheduler/summary/ticket-group-16.summary
@@ -0,0 +1,23 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-group-17.summary b/cts/scheduler/summary/ticket-group-17.summary
new file mode 100644
index 0000000..72f7464
--- /dev/null
+++ b/cts/scheduler/summary/ticket-group-17.summary
@@ -0,0 +1,29 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Stop rsc1 ( node2 ) due to node availability
+ * Stop rsc2 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: group1_stop_0
+ * Resource action: rsc2 stop on node2
+ * Resource action: rsc1 stop on node2
+ * Pseudo action: group1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-group-18.summary b/cts/scheduler/summary/ticket-group-18.summary
new file mode 100644
index 0000000..72f7464
--- /dev/null
+++ b/cts/scheduler/summary/ticket-group-18.summary
@@ -0,0 +1,29 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Stop rsc1 ( node2 ) due to node availability
+ * Stop rsc2 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: group1_stop_0
+ * Resource action: rsc2 stop on node2
+ * Resource action: rsc1 stop on node2
+ * Pseudo action: group1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-group-19.summary b/cts/scheduler/summary/ticket-group-19.summary
new file mode 100644
index 0000000..378dda4
--- /dev/null
+++ b/cts/scheduler/summary/ticket-group-19.summary
@@ -0,0 +1,23 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-group-2.summary b/cts/scheduler/summary/ticket-group-2.summary
new file mode 100644
index 0000000..2351695
--- /dev/null
+++ b/cts/scheduler/summary/ticket-group-2.summary
@@ -0,0 +1,31 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+ * Start rsc2 ( node2 )
+
+Executing Cluster Transition:
+ * Pseudo action: group1_start_0
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 start on node2
+ * Pseudo action: group1_running_0
+ * Resource action: rsc1 monitor=5000 on node2
+ * Resource action: rsc2 monitor=5000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/ticket-group-20.summary b/cts/scheduler/summary/ticket-group-20.summary
new file mode 100644
index 0000000..72f7464
--- /dev/null
+++ b/cts/scheduler/summary/ticket-group-20.summary
@@ -0,0 +1,29 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Stop rsc1 ( node2 ) due to node availability
+ * Stop rsc2 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: group1_stop_0
+ * Resource action: rsc2 stop on node2
+ * Resource action: rsc1 stop on node2
+ * Pseudo action: group1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-group-21.summary b/cts/scheduler/summary/ticket-group-21.summary
new file mode 100644
index 0000000..19880d9
--- /dev/null
+++ b/cts/scheduler/summary/ticket-group-21.summary
@@ -0,0 +1,32 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Fence (reboot) node2 'deadman ticket was lost'
+ * Stop rsc1 ( node2 ) due to node availability
+ * Stop rsc2 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Fencing node2 (reboot)
+ * Pseudo action: group1_stop_0
+ * Pseudo action: rsc2_stop_0
+ * Pseudo action: rsc1_stop_0
+ * Pseudo action: group1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 ]
+ * OFFLINE: [ node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-group-22.summary b/cts/scheduler/summary/ticket-group-22.summary
new file mode 100644
index 0000000..378dda4
--- /dev/null
+++ b/cts/scheduler/summary/ticket-group-22.summary
@@ -0,0 +1,23 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-group-23.summary b/cts/scheduler/summary/ticket-group-23.summary
new file mode 100644
index 0000000..72f7464
--- /dev/null
+++ b/cts/scheduler/summary/ticket-group-23.summary
@@ -0,0 +1,29 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Stop rsc1 ( node2 ) due to node availability
+ * Stop rsc2 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: group1_stop_0
+ * Resource action: rsc2 stop on node2
+ * Resource action: rsc1 stop on node2
+ * Pseudo action: group1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-group-24.summary b/cts/scheduler/summary/ticket-group-24.summary
new file mode 100644
index 0000000..322b79f
--- /dev/null
+++ b/cts/scheduler/summary/ticket-group-24.summary
@@ -0,0 +1,23 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/ticket-group-3.summary b/cts/scheduler/summary/ticket-group-3.summary
new file mode 100644
index 0000000..72f7464
--- /dev/null
+++ b/cts/scheduler/summary/ticket-group-3.summary
@@ -0,0 +1,29 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Stop rsc1 ( node2 ) due to node availability
+ * Stop rsc2 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: group1_stop_0
+ * Resource action: rsc2 stop on node2
+ * Resource action: rsc1 stop on node2
+ * Pseudo action: group1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-group-4.summary b/cts/scheduler/summary/ticket-group-4.summary
new file mode 100644
index 0000000..4db96ef
--- /dev/null
+++ b/cts/scheduler/summary/ticket-group-4.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-group-5.summary b/cts/scheduler/summary/ticket-group-5.summary
new file mode 100644
index 0000000..2351695
--- /dev/null
+++ b/cts/scheduler/summary/ticket-group-5.summary
@@ -0,0 +1,31 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+ * Start rsc2 ( node2 )
+
+Executing Cluster Transition:
+ * Pseudo action: group1_start_0
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 start on node2
+ * Pseudo action: group1_running_0
+ * Resource action: rsc1 monitor=5000 on node2
+ * Resource action: rsc2 monitor=5000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/ticket-group-6.summary b/cts/scheduler/summary/ticket-group-6.summary
new file mode 100644
index 0000000..72f7464
--- /dev/null
+++ b/cts/scheduler/summary/ticket-group-6.summary
@@ -0,0 +1,29 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Stop rsc1 ( node2 ) due to node availability
+ * Stop rsc2 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: group1_stop_0
+ * Resource action: rsc2 stop on node2
+ * Resource action: rsc1 stop on node2
+ * Pseudo action: group1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-group-7.summary b/cts/scheduler/summary/ticket-group-7.summary
new file mode 100644
index 0000000..4db96ef
--- /dev/null
+++ b/cts/scheduler/summary/ticket-group-7.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-group-8.summary b/cts/scheduler/summary/ticket-group-8.summary
new file mode 100644
index 0000000..2351695
--- /dev/null
+++ b/cts/scheduler/summary/ticket-group-8.summary
@@ -0,0 +1,31 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+ * Start rsc2 ( node2 )
+
+Executing Cluster Transition:
+ * Pseudo action: group1_start_0
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc2 start on node2
+ * Pseudo action: group1_running_0
+ * Resource action: rsc1 monitor=5000 on node2
+ * Resource action: rsc2 monitor=5000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/ticket-group-9.summary b/cts/scheduler/summary/ticket-group-9.summary
new file mode 100644
index 0000000..19880d9
--- /dev/null
+++ b/cts/scheduler/summary/ticket-group-9.summary
@@ -0,0 +1,32 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Fence (reboot) node2 'deadman ticket was lost'
+ * Stop rsc1 ( node2 ) due to node availability
+ * Stop rsc2 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Fencing node2 (reboot)
+ * Pseudo action: group1_stop_0
+ * Pseudo action: rsc2_stop_0
+ * Pseudo action: rsc1_stop_0
+ * Pseudo action: group1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 ]
+ * OFFLINE: [ node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
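
Taken together, the ticket-group-* scenarios vary only the ticket's state (and the constraint's loss-policy) between otherwise identical CIBs: probes while the ticket is ungranted, a start when it is granted, a stop or fence when it is lost. The ticket state itself lives in the status section of the CIB; a hedged sketch of what a granted ticket looks like there, assuming current CIB schema attribute names and an illustrative ticket name:

  <status>
    <tickets>
      <ticket_state id="ticketA" granted="true" last-granted="1371443200"/>
    </tickets>
  </status>

Revoking the ticket flips granted to false, and the next scheduler run computes transitions like those in these summaries.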
diff --git a/cts/scheduler/summary/ticket-primitive-1.summary b/cts/scheduler/summary/ticket-primitive-1.summary
new file mode 100644
index 0000000..80e49e9
--- /dev/null
+++ b/cts/scheduler/summary/ticket-primitive-1.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-primitive-10.summary b/cts/scheduler/summary/ticket-primitive-10.summary
new file mode 100644
index 0000000..80e49e9
--- /dev/null
+++ b/cts/scheduler/summary/ticket-primitive-10.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-primitive-11.summary b/cts/scheduler/summary/ticket-primitive-11.summary
new file mode 100644
index 0000000..cb38b43
--- /dev/null
+++ b/cts/scheduler/summary/ticket-primitive-11.summary
@@ -0,0 +1,22 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc1 monitor=10000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/ticket-primitive-12.summary b/cts/scheduler/summary/ticket-primitive-12.summary
new file mode 100644
index 0000000..fc3d40d
--- /dev/null
+++ b/cts/scheduler/summary/ticket-primitive-12.summary
@@ -0,0 +1,19 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/ticket-primitive-13.summary b/cts/scheduler/summary/ticket-primitive-13.summary
new file mode 100644
index 0000000..3ba6f11
--- /dev/null
+++ b/cts/scheduler/summary/ticket-primitive-13.summary
@@ -0,0 +1,19 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-primitive-14.summary b/cts/scheduler/summary/ticket-primitive-14.summary
new file mode 100644
index 0000000..f28cec3
--- /dev/null
+++ b/cts/scheduler/summary/ticket-primitive-14.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Stop rsc1 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-primitive-15.summary b/cts/scheduler/summary/ticket-primitive-15.summary
new file mode 100644
index 0000000..f28cec3
--- /dev/null
+++ b/cts/scheduler/summary/ticket-primitive-15.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Stop rsc1 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-primitive-16.summary b/cts/scheduler/summary/ticket-primitive-16.summary
new file mode 100644
index 0000000..3ba6f11
--- /dev/null
+++ b/cts/scheduler/summary/ticket-primitive-16.summary
@@ -0,0 +1,19 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-primitive-17.summary b/cts/scheduler/summary/ticket-primitive-17.summary
new file mode 100644
index 0000000..f28cec3
--- /dev/null
+++ b/cts/scheduler/summary/ticket-primitive-17.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Stop rsc1 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-primitive-18.summary b/cts/scheduler/summary/ticket-primitive-18.summary
new file mode 100644
index 0000000..f28cec3
--- /dev/null
+++ b/cts/scheduler/summary/ticket-primitive-18.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Stop rsc1 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-primitive-19.summary b/cts/scheduler/summary/ticket-primitive-19.summary
new file mode 100644
index 0000000..3ba6f11
--- /dev/null
+++ b/cts/scheduler/summary/ticket-primitive-19.summary
@@ -0,0 +1,19 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-primitive-2.summary b/cts/scheduler/summary/ticket-primitive-2.summary
new file mode 100644
index 0000000..cb38b43
--- /dev/null
+++ b/cts/scheduler/summary/ticket-primitive-2.summary
@@ -0,0 +1,22 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc1 monitor=10000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/ticket-primitive-20.summary b/cts/scheduler/summary/ticket-primitive-20.summary
new file mode 100644
index 0000000..f28cec3
--- /dev/null
+++ b/cts/scheduler/summary/ticket-primitive-20.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Stop rsc1 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-primitive-21.summary b/cts/scheduler/summary/ticket-primitive-21.summary
new file mode 100644
index 0000000..ba8d8cb
--- /dev/null
+++ b/cts/scheduler/summary/ticket-primitive-21.summary
@@ -0,0 +1,24 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Fence (reboot) node2 'deadman ticket was lost'
+ * Stop rsc1 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Fencing node2 (reboot)
+ * Pseudo action: rsc1_stop_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 ]
+ * OFFLINE: [ node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-primitive-22.summary b/cts/scheduler/summary/ticket-primitive-22.summary
new file mode 100644
index 0000000..3ba6f11
--- /dev/null
+++ b/cts/scheduler/summary/ticket-primitive-22.summary
@@ -0,0 +1,19 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-primitive-23.summary b/cts/scheduler/summary/ticket-primitive-23.summary
new file mode 100644
index 0000000..f28cec3
--- /dev/null
+++ b/cts/scheduler/summary/ticket-primitive-23.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Stop rsc1 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-primitive-24.summary b/cts/scheduler/summary/ticket-primitive-24.summary
new file mode 100644
index 0000000..fc3d40d
--- /dev/null
+++ b/cts/scheduler/summary/ticket-primitive-24.summary
@@ -0,0 +1,19 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/ticket-primitive-3.summary b/cts/scheduler/summary/ticket-primitive-3.summary
new file mode 100644
index 0000000..f28cec3
--- /dev/null
+++ b/cts/scheduler/summary/ticket-primitive-3.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Stop rsc1 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-primitive-4.summary b/cts/scheduler/summary/ticket-primitive-4.summary
new file mode 100644
index 0000000..80e49e9
--- /dev/null
+++ b/cts/scheduler/summary/ticket-primitive-4.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-primitive-5.summary b/cts/scheduler/summary/ticket-primitive-5.summary
new file mode 100644
index 0000000..cb38b43
--- /dev/null
+++ b/cts/scheduler/summary/ticket-primitive-5.summary
@@ -0,0 +1,22 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc1 monitor=10000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/ticket-primitive-6.summary b/cts/scheduler/summary/ticket-primitive-6.summary
new file mode 100644
index 0000000..f28cec3
--- /dev/null
+++ b/cts/scheduler/summary/ticket-primitive-6.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Stop rsc1 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-primitive-7.summary b/cts/scheduler/summary/ticket-primitive-7.summary
new file mode 100644
index 0000000..80e49e9
--- /dev/null
+++ b/cts/scheduler/summary/ticket-primitive-7.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/ticket-primitive-8.summary b/cts/scheduler/summary/ticket-primitive-8.summary
new file mode 100644
index 0000000..cb38b43
--- /dev/null
+++ b/cts/scheduler/summary/ticket-primitive-8.summary
@@ -0,0 +1,22 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 start on node2
+ * Resource action: rsc1 monitor=10000 on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/ticket-primitive-9.summary b/cts/scheduler/summary/ticket-primitive-9.summary
new file mode 100644
index 0000000..ba8d8cb
--- /dev/null
+++ b/cts/scheduler/summary/ticket-primitive-9.summary
@@ -0,0 +1,24 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Fence (reboot) node2 'deadman ticket was lost'
+ * Stop rsc1 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Fencing node2 (reboot)
+ * Pseudo action: rsc1_stop_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 ]
+ * OFFLINE: [ node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
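
The ticket-primitive-* family runs the same grant/revoke matrix against a single primitive: probes only while the ticket is ungranted (1, 4, 7, 10), a start plus recurring monitor once it is granted (2, 5, 8, 11), a stop on revocation (3, 6, 14, 15, ...), and deadman fencing in 9 and 21. The constraint behind these is the simplest rsc_ticket form; a sketch with an illustrative ID and ticket name:

  <rsc_ticket id="rsc1-req-ticketA" rsc="rsc1" ticket="ticketA" loss-policy="stop"/>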
diff --git a/cts/scheduler/summary/ticket-promoted-1.summary b/cts/scheduler/summary/ticket-promoted-1.summary
new file mode 100644
index 0000000..6bc1364
--- /dev/null
+++ b/cts/scheduler/summary/ticket-promoted-1.summary
@@ -0,0 +1,23 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Stopped: [ node1 node2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Resource action: rsc1:0 monitor on node2
+ * Resource action: rsc1:0 monitor on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-10.summary b/cts/scheduler/summary/ticket-promoted-10.summary
new file mode 100644
index 0000000..eab3d91
--- /dev/null
+++ b/cts/scheduler/summary/ticket-promoted-10.summary
@@ -0,0 +1,29 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Stopped: [ node1 node2 ]
+
+Transition Summary:
+ * Start rsc1:0 ( node2 )
+ * Start rsc1:1 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1:0 monitor on node2
+ * Resource action: rsc1:1 monitor on node1
+ * Pseudo action: ms1_start_0
+ * Resource action: rsc1:0 start on node2
+ * Resource action: rsc1:1 start on node1
+ * Pseudo action: ms1_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-11.summary b/cts/scheduler/summary/ticket-promoted-11.summary
new file mode 100644
index 0000000..3816039
--- /dev/null
+++ b/cts/scheduler/summary/ticket-promoted-11.summary
@@ -0,0 +1,26 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Unpromoted: [ node1 node2 ]
+
+Transition Summary:
+ * Promote rsc1:0 ( Unpromoted -> Promoted node1 )
+
+Executing Cluster Transition:
+ * Pseudo action: ms1_promote_0
+ * Resource action: rsc1:1 promote on node1
+ * Pseudo action: ms1_promoted_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-12.summary b/cts/scheduler/summary/ticket-promoted-12.summary
new file mode 100644
index 0000000..b51c277
--- /dev/null
+++ b/cts/scheduler/summary/ticket-promoted-12.summary
@@ -0,0 +1,23 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-13.summary b/cts/scheduler/summary/ticket-promoted-13.summary
new file mode 100644
index 0000000..6b5d14a
--- /dev/null
+++ b/cts/scheduler/summary/ticket-promoted-13.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Stopped: [ node1 node2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-14.summary b/cts/scheduler/summary/ticket-promoted-14.summary
new file mode 100644
index 0000000..ee8912b
--- /dev/null
+++ b/cts/scheduler/summary/ticket-promoted-14.summary
@@ -0,0 +1,31 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
+
+Transition Summary:
+ * Stop rsc1:0 ( Promoted node1 ) due to node availability
+ * Stop rsc1:1 ( Unpromoted node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: ms1_demote_0
+ * Resource action: rsc1:1 demote on node1
+ * Pseudo action: ms1_demoted_0
+ * Pseudo action: ms1_stop_0
+ * Resource action: rsc1:1 stop on node1
+ * Resource action: rsc1:0 stop on node2
+ * Pseudo action: ms1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-15.summary b/cts/scheduler/summary/ticket-promoted-15.summary
new file mode 100644
index 0000000..ee8912b
--- /dev/null
+++ b/cts/scheduler/summary/ticket-promoted-15.summary
@@ -0,0 +1,31 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
+
+Transition Summary:
+ * Stop rsc1:0 ( Promoted node1 ) due to node availability
+ * Stop rsc1:1 ( Unpromoted node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: ms1_demote_0
+ * Resource action: rsc1:1 demote on node1
+ * Pseudo action: ms1_demoted_0
+ * Pseudo action: ms1_stop_0
+ * Resource action: rsc1:1 stop on node1
+ * Resource action: rsc1:0 stop on node2
+ * Pseudo action: ms1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-16.summary b/cts/scheduler/summary/ticket-promoted-16.summary
new file mode 100644
index 0000000..851e54e
--- /dev/null
+++ b/cts/scheduler/summary/ticket-promoted-16.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Unpromoted: [ node1 node2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-17.summary b/cts/scheduler/summary/ticket-promoted-17.summary
new file mode 100644
index 0000000..ee25f92
--- /dev/null
+++ b/cts/scheduler/summary/ticket-promoted-17.summary
@@ -0,0 +1,26 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
+
+Transition Summary:
+ * Demote rsc1:0 ( Promoted -> Unpromoted node1 )
+
+Executing Cluster Transition:
+ * Pseudo action: ms1_demote_0
+ * Resource action: rsc1:1 demote on node1
+ * Pseudo action: ms1_demoted_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-18.summary b/cts/scheduler/summary/ticket-promoted-18.summary
new file mode 100644
index 0000000..ee25f92
--- /dev/null
+++ b/cts/scheduler/summary/ticket-promoted-18.summary
@@ -0,0 +1,26 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
+
+Transition Summary:
+ * Demote rsc1:0 ( Promoted -> Unpromoted node1 )
+
+Executing Cluster Transition:
+ * Pseudo action: ms1_demote_0
+ * Resource action: rsc1:1 demote on node1
+ * Pseudo action: ms1_demoted_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-19.summary b/cts/scheduler/summary/ticket-promoted-19.summary
new file mode 100644
index 0000000..851e54e
--- /dev/null
+++ b/cts/scheduler/summary/ticket-promoted-19.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Unpromoted: [ node1 node2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-2.summary b/cts/scheduler/summary/ticket-promoted-2.summary
new file mode 100644
index 0000000..dc67f96
--- /dev/null
+++ b/cts/scheduler/summary/ticket-promoted-2.summary
@@ -0,0 +1,31 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Stopped: [ node1 node2 ]
+
+Transition Summary:
+ * Start rsc1:0 ( node2 )
+ * Promote rsc1:1 ( Stopped -> Promoted node1 )
+
+Executing Cluster Transition:
+ * Pseudo action: ms1_start_0
+ * Resource action: rsc1:0 start on node2
+ * Resource action: rsc1:1 start on node1
+ * Pseudo action: ms1_running_0
+ * Pseudo action: ms1_promote_0
+ * Resource action: rsc1:1 promote on node1
+ * Pseudo action: ms1_promoted_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-20.summary b/cts/scheduler/summary/ticket-promoted-20.summary
new file mode 100644
index 0000000..ee25f92
--- /dev/null
+++ b/cts/scheduler/summary/ticket-promoted-20.summary
@@ -0,0 +1,26 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
+
+Transition Summary:
+ * Demote rsc1:0 ( Promoted -> Unpromoted node1 )
+
+Executing Cluster Transition:
+ * Pseudo action: ms1_demote_0
+ * Resource action: rsc1:1 demote on node1
+ * Pseudo action: ms1_demoted_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-21.summary b/cts/scheduler/summary/ticket-promoted-21.summary
new file mode 100644
index 0000000..f116a2e
--- /dev/null
+++ b/cts/scheduler/summary/ticket-promoted-21.summary
@@ -0,0 +1,36 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
+
+Transition Summary:
+ * Fence (reboot) node1 'deadman ticket was lost'
+ * Move rsc_stonith ( node1 -> node2 )
+ * Stop rsc1:0 ( Promoted node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: rsc_stonith_stop_0
+ * Pseudo action: ms1_demote_0
+ * Fencing node1 (reboot)
+ * Resource action: rsc_stonith start on node2
+ * Pseudo action: rsc1:1_demote_0
+ * Pseudo action: ms1_demoted_0
+ * Pseudo action: ms1_stop_0
+ * Pseudo action: rsc1:1_stop_0
+ * Pseudo action: ms1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node2 ]
+ * OFFLINE: [ node1 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node2
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Unpromoted: [ node2 ]
+ * Stopped: [ node1 ]
diff --git a/cts/scheduler/summary/ticket-promoted-22.summary b/cts/scheduler/summary/ticket-promoted-22.summary
new file mode 100644
index 0000000..851e54e
--- /dev/null
+++ b/cts/scheduler/summary/ticket-promoted-22.summary
@@ -0,0 +1,21 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Unpromoted: [ node1 node2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-23.summary b/cts/scheduler/summary/ticket-promoted-23.summary
new file mode 100644
index 0000000..ee25f92
--- /dev/null
+++ b/cts/scheduler/summary/ticket-promoted-23.summary
@@ -0,0 +1,26 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
+
+Transition Summary:
+ * Demote rsc1:0 ( Promoted -> Unpromoted node1 )
+
+Executing Cluster Transition:
+ * Pseudo action: ms1_demote_0
+ * Resource action: rsc1:1 demote on node1
+ * Pseudo action: ms1_demoted_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-24.summary b/cts/scheduler/summary/ticket-promoted-24.summary
new file mode 100644
index 0000000..b51c277
--- /dev/null
+++ b/cts/scheduler/summary/ticket-promoted-24.summary
@@ -0,0 +1,23 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-3.summary b/cts/scheduler/summary/ticket-promoted-3.summary
new file mode 100644
index 0000000..ee8912b
--- /dev/null
+++ b/cts/scheduler/summary/ticket-promoted-3.summary
@@ -0,0 +1,31 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
+
+Transition Summary:
+ * Stop rsc1:0 ( Promoted node1 ) due to node availability
+ * Stop rsc1:1 ( Unpromoted node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: ms1_demote_0
+ * Resource action: rsc1:1 demote on node1
+ * Pseudo action: ms1_demoted_0
+ * Pseudo action: ms1_stop_0
+ * Resource action: rsc1:1 stop on node1
+ * Resource action: rsc1:0 stop on node2
+ * Pseudo action: ms1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Stopped: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-4.summary b/cts/scheduler/summary/ticket-promoted-4.summary
new file mode 100644
index 0000000..eab3d91
--- /dev/null
+++ b/cts/scheduler/summary/ticket-promoted-4.summary
@@ -0,0 +1,29 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Stopped: [ node1 node2 ]
+
+Transition Summary:
+ * Start rsc1:0 ( node2 )
+ * Start rsc1:1 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1:0 monitor on node2
+ * Resource action: rsc1:1 monitor on node1
+ * Pseudo action: ms1_start_0
+ * Resource action: rsc1:0 start on node2
+ * Resource action: rsc1:1 start on node1
+ * Pseudo action: ms1_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-5.summary b/cts/scheduler/summary/ticket-promoted-5.summary
new file mode 100644
index 0000000..3816039
--- /dev/null
+++ b/cts/scheduler/summary/ticket-promoted-5.summary
@@ -0,0 +1,26 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Unpromoted: [ node1 node2 ]
+
+Transition Summary:
+ * Promote rsc1:0 ( Unpromoted -> Promoted node1 )
+
+Executing Cluster Transition:
+ * Pseudo action: ms1_promote_0
+ * Resource action: rsc1:1 promote on node1
+ * Pseudo action: ms1_promoted_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-6.summary b/cts/scheduler/summary/ticket-promoted-6.summary
new file mode 100644
index 0000000..ee25f92
--- /dev/null
+++ b/cts/scheduler/summary/ticket-promoted-6.summary
@@ -0,0 +1,26 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
+
+Transition Summary:
+ * Demote rsc1:0 ( Promoted -> Unpromoted node1 )
+
+Executing Cluster Transition:
+ * Pseudo action: ms1_demote_0
+ * Resource action: rsc1:1 demote on node1
+ * Pseudo action: ms1_demoted_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-7.summary b/cts/scheduler/summary/ticket-promoted-7.summary
new file mode 100644
index 0000000..eab3d91
--- /dev/null
+++ b/cts/scheduler/summary/ticket-promoted-7.summary
@@ -0,0 +1,29 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Stopped: [ node1 node2 ]
+
+Transition Summary:
+ * Start rsc1:0 ( node2 )
+ * Start rsc1:1 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1:0 monitor on node2
+ * Resource action: rsc1:1 monitor on node1
+ * Pseudo action: ms1_start_0
+ * Resource action: rsc1:0 start on node2
+ * Resource action: rsc1:1 start on node1
+ * Pseudo action: ms1_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-8.summary b/cts/scheduler/summary/ticket-promoted-8.summary
new file mode 100644
index 0000000..3816039
--- /dev/null
+++ b/cts/scheduler/summary/ticket-promoted-8.summary
@@ -0,0 +1,26 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Unpromoted: [ node1 node2 ]
+
+Transition Summary:
+ * Promote rsc1:0 ( Unpromoted -> Promoted node1 )
+
+Executing Cluster Transition:
+ * Pseudo action: ms1_promote_0
+ * Resource action: rsc1:1 promote on node1
+ * Pseudo action: ms1_promoted_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
diff --git a/cts/scheduler/summary/ticket-promoted-9.summary b/cts/scheduler/summary/ticket-promoted-9.summary
new file mode 100644
index 0000000..f116a2e
--- /dev/null
+++ b/cts/scheduler/summary/ticket-promoted-9.summary
@@ -0,0 +1,36 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
+
+Transition Summary:
+ * Fence (reboot) node1 'deadman ticket was lost'
+ * Move rsc_stonith ( node1 -> node2 )
+ * Stop rsc1:0 ( Promoted node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: rsc_stonith_stop_0
+ * Pseudo action: ms1_demote_0
+ * Fencing node1 (reboot)
+ * Resource action: rsc_stonith start on node2
+ * Pseudo action: rsc1:1_demote_0
+ * Pseudo action: ms1_demoted_0
+ * Pseudo action: ms1_stop_0
+ * Pseudo action: rsc1:1_stop_0
+ * Pseudo action: ms1_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node2 ]
+ * OFFLINE: [ node1 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node2
+ * Clone Set: ms1 [rsc1] (promotable):
+ * Unpromoted: [ node2 ]
+ * Stopped: [ node1 ]
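
For the promotable clone ms1, the interesting constraints are role-qualified: tying the ticket to the Promoted role is what lets a lost ticket merely demote the instance under loss-policy="demote" (ticket-promoted-6, -17, -18, -20, -23) rather than stop it, while the stop and fence policies produce ticket-promoted-3/-14/-15 and -9/-21 respectively. A sketch of the role-qualified form, again with an illustrative ticket name:

  <rsc_ticket id="ms1-req-ticketA" rsc="ms1" rsc-role="Promoted" ticket="ticketA" loss-policy="demote"/>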
diff --git a/cts/scheduler/summary/ticket-rsc-sets-1.summary b/cts/scheduler/summary/ticket-rsc-sets-1.summary
new file mode 100644
index 0000000..d119ce5
--- /dev/null
+++ b/cts/scheduler/summary/ticket-rsc-sets-1.summary
@@ -0,0 +1,49 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * Resource Group: group2:
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: clone4 [rsc4]:
+ * Stopped: [ node1 node2 ]
+ * Clone Set: ms5 [rsc5] (promotable):
+ * Stopped: [ node1 node2 ]
+
+Transition Summary:
+ * Start rsc5:0 ( node2 )
+ * Start rsc5:1 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc4:0 monitor on node2
+ * Resource action: rsc4:0 monitor on node1
+ * Resource action: rsc5:0 monitor on node2
+ * Resource action: rsc5:1 monitor on node1
+ * Pseudo action: ms5_start_0
+ * Resource action: rsc5:0 start on node2
+ * Resource action: rsc5:1 start on node1
+ * Pseudo action: ms5_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * Resource Group: group2:
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: clone4 [rsc4]:
+ * Stopped: [ node1 node2 ]
+ * Clone Set: ms5 [rsc5] (promotable):
+ * Unpromoted: [ node1 node2 ]
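
ticket-rsc-sets-1 already hints at the shape of these tests: with the ticket ungranted, rsc1, group2, and clone4 stay down, but ms5 may still start in the Unpromoted role, which is consistent with a single rsc_ticket constraint that covers the plain resources in one resource set and only the Promoted role of ms5 in another. A sketch of that resource-set form, assuming illustrative IDs:

  <rsc_ticket id="ticketA-resources" ticket="ticketA" loss-policy="stop">
    <resource_set id="ticketA-set-plain">
      <resource_ref id="rsc1"/>
      <resource_ref id="group2"/>
      <resource_ref id="clone4"/>
    </resource_set>
    <resource_set id="ticketA-set-promoted" role="Promoted">
      <resource_ref id="ms5"/>
    </resource_set>
  </rsc_ticket>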
diff --git a/cts/scheduler/summary/ticket-rsc-sets-10.summary b/cts/scheduler/summary/ticket-rsc-sets-10.summary
new file mode 100644
index 0000000..3bc9d64
--- /dev/null
+++ b/cts/scheduler/summary/ticket-rsc-sets-10.summary
@@ -0,0 +1,52 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * Resource Group: group2:
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+ * rsc3 (ocf:pacemaker:Dummy): Started node1
+ * Clone Set: clone4 [rsc4]:
+ * Started: [ node1 node2 ]
+ * Clone Set: ms5 [rsc5] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
+
+Transition Summary:
+ * Stop rsc1 ( node2 ) due to node availability
+ * Stop rsc2 ( node1 ) due to node availability
+ * Stop rsc3 ( node1 ) due to node availability
+ * Stop rsc4:0 ( node1 ) due to node availability
+ * Stop rsc4:1 ( node2 ) due to node availability
+ * Demote rsc5:0 ( Promoted -> Unpromoted node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node2
+ * Pseudo action: group2_stop_0
+ * Resource action: rsc3 stop on node1
+ * Pseudo action: clone4_stop_0
+ * Pseudo action: ms5_demote_0
+ * Resource action: rsc2 stop on node1
+ * Resource action: rsc4:1 stop on node1
+ * Resource action: rsc4:0 stop on node2
+ * Pseudo action: clone4_stopped_0
+ * Resource action: rsc5:1 demote on node1
+ * Pseudo action: ms5_demoted_0
+ * Pseudo action: group2_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * Resource Group: group2:
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: clone4 [rsc4]:
+ * Stopped: [ node1 node2 ]
+ * Clone Set: ms5 [rsc5] (promotable):
+ * Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-11.summary b/cts/scheduler/summary/ticket-rsc-sets-11.summary
new file mode 100644
index 0000000..03153aa
--- /dev/null
+++ b/cts/scheduler/summary/ticket-rsc-sets-11.summary
@@ -0,0 +1,33 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * Resource Group: group2:
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: clone4 [rsc4]:
+ * Stopped: [ node1 node2 ]
+ * Clone Set: ms5 [rsc5] (promotable):
+ * Unpromoted: [ node1 node2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * Resource Group: group2:
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: clone4 [rsc4]:
+ * Stopped: [ node1 node2 ]
+ * Clone Set: ms5 [rsc5] (promotable):
+ * Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-12.summary b/cts/scheduler/summary/ticket-rsc-sets-12.summary
new file mode 100644
index 0000000..68e0827
--- /dev/null
+++ b/cts/scheduler/summary/ticket-rsc-sets-12.summary
@@ -0,0 +1,41 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * Resource Group: group2:
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+ * rsc3 (ocf:pacemaker:Dummy): Started node1
+ * Clone Set: clone4 [rsc4]:
+ * Stopped: [ node1 node2 ]
+ * Clone Set: ms5 [rsc5] (promotable):
+ * Unpromoted: [ node1 node2 ]
+
+Transition Summary:
+ * Stop rsc1 ( node2 ) due to node availability
+ * Stop rsc2 ( node1 ) due to node availability
+ * Stop rsc3 ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node2
+ * Pseudo action: group2_stop_0
+ * Resource action: rsc3 stop on node1
+ * Resource action: rsc2 stop on node1
+ * Pseudo action: group2_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * Resource Group: group2:
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: clone4 [rsc4]:
+ * Stopped: [ node1 node2 ]
+ * Clone Set: ms5 [rsc5] (promotable):
+ * Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-13.summary b/cts/scheduler/summary/ticket-rsc-sets-13.summary
new file mode 100644
index 0000000..3bc9d64
--- /dev/null
+++ b/cts/scheduler/summary/ticket-rsc-sets-13.summary
@@ -0,0 +1,52 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * Resource Group: group2:
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+ * rsc3 (ocf:pacemaker:Dummy): Started node1
+ * Clone Set: clone4 [rsc4]:
+ * Started: [ node1 node2 ]
+ * Clone Set: ms5 [rsc5] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
+
+Transition Summary:
+ * Stop rsc1 ( node2 ) due to node availability
+ * Stop rsc2 ( node1 ) due to node availability
+ * Stop rsc3 ( node1 ) due to node availability
+ * Stop rsc4:0 ( node1 ) due to node availability
+ * Stop rsc4:1 ( node2 ) due to node availability
+ * Demote rsc5:0 ( Promoted -> Unpromoted node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node2
+ * Pseudo action: group2_stop_0
+ * Resource action: rsc3 stop on node1
+ * Pseudo action: clone4_stop_0
+ * Pseudo action: ms5_demote_0
+ * Resource action: rsc2 stop on node1
+ * Resource action: rsc4:1 stop on node1
+ * Resource action: rsc4:0 stop on node2
+ * Pseudo action: clone4_stopped_0
+ * Resource action: rsc5:1 demote on node1
+ * Pseudo action: ms5_demoted_0
+ * Pseudo action: group2_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * Resource Group: group2:
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: clone4 [rsc4]:
+ * Stopped: [ node1 node2 ]
+ * Clone Set: ms5 [rsc5] (promotable):
+ * Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-14.summary b/cts/scheduler/summary/ticket-rsc-sets-14.summary
new file mode 100644
index 0000000..3bc9d64
--- /dev/null
+++ b/cts/scheduler/summary/ticket-rsc-sets-14.summary
@@ -0,0 +1,52 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * Resource Group: group2:
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+ * rsc3 (ocf:pacemaker:Dummy): Started node1
+ * Clone Set: clone4 [rsc4]:
+ * Started: [ node1 node2 ]
+ * Clone Set: ms5 [rsc5] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
+
+Transition Summary:
+ * Stop rsc1 ( node2 ) due to node availability
+ * Stop rsc2 ( node1 ) due to node availability
+ * Stop rsc3 ( node1 ) due to node availability
+ * Stop rsc4:0 ( node1 ) due to node availability
+ * Stop rsc4:1 ( node2 ) due to node availability
+ * Demote rsc5:0 ( Promoted -> Unpromoted node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node2
+ * Pseudo action: group2_stop_0
+ * Resource action: rsc3 stop on node1
+ * Pseudo action: clone4_stop_0
+ * Pseudo action: ms5_demote_0
+ * Resource action: rsc2 stop on node1
+ * Resource action: rsc4:1 stop on node1
+ * Resource action: rsc4:0 stop on node2
+ * Pseudo action: clone4_stopped_0
+ * Resource action: rsc5:1 demote on node1
+ * Pseudo action: ms5_demoted_0
+ * Pseudo action: group2_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * Resource Group: group2:
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: clone4 [rsc4]:
+ * Stopped: [ node1 node2 ]
+ * Clone Set: ms5 [rsc5] (promotable):
+ * Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-2.summary b/cts/scheduler/summary/ticket-rsc-sets-2.summary
new file mode 100644
index 0000000..fccf3ca
--- /dev/null
+++ b/cts/scheduler/summary/ticket-rsc-sets-2.summary
@@ -0,0 +1,57 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * Resource Group: group2:
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: clone4 [rsc4]:
+ * Stopped: [ node1 node2 ]
+ * Clone Set: ms5 [rsc5] (promotable):
+ * Unpromoted: [ node1 node2 ]
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+ * Start rsc2 ( node1 )
+ * Start rsc3 ( node1 )
+ * Start rsc4:0 ( node2 )
+ * Start rsc4:1 ( node1 )
+ * Promote rsc5:0 ( Unpromoted -> Promoted node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 start on node2
+ * Pseudo action: group2_start_0
+ * Resource action: rsc2 start on node1
+ * Resource action: rsc3 start on node1
+ * Pseudo action: clone4_start_0
+ * Pseudo action: ms5_promote_0
+ * Resource action: rsc1 monitor=10000 on node2
+ * Pseudo action: group2_running_0
+ * Resource action: rsc2 monitor=5000 on node1
+ * Resource action: rsc3 monitor=5000 on node1
+ * Resource action: rsc4:0 start on node2
+ * Resource action: rsc4:1 start on node1
+ * Pseudo action: clone4_running_0
+ * Resource action: rsc5:1 promote on node1
+ * Pseudo action: ms5_promoted_0
+ * Resource action: rsc4:0 monitor=5000 on node2
+ * Resource action: rsc4:1 monitor=5000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * Resource Group: group2:
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+ * rsc3 (ocf:pacemaker:Dummy): Started node1
+ * Clone Set: clone4 [rsc4]:
+ * Started: [ node1 node2 ]
+ * Clone Set: ms5 [rsc5] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-3.summary b/cts/scheduler/summary/ticket-rsc-sets-3.summary
new file mode 100644
index 0000000..3bc9d64
--- /dev/null
+++ b/cts/scheduler/summary/ticket-rsc-sets-3.summary
@@ -0,0 +1,52 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * Resource Group: group2:
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+ * rsc3 (ocf:pacemaker:Dummy): Started node1
+ * Clone Set: clone4 [rsc4]:
+ * Started: [ node1 node2 ]
+ * Clone Set: ms5 [rsc5] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
+
+Transition Summary:
+ * Stop rsc1 ( node2 ) due to node availability
+ * Stop rsc2 ( node1 ) due to node availability
+ * Stop rsc3 ( node1 ) due to node availability
+ * Stop rsc4:0 ( node1 ) due to node availability
+ * Stop rsc4:1 ( node2 ) due to node availability
+ * Demote rsc5:0 ( Promoted -> Unpromoted node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node2
+ * Pseudo action: group2_stop_0
+ * Resource action: rsc3 stop on node1
+ * Pseudo action: clone4_stop_0
+ * Pseudo action: ms5_demote_0
+ * Resource action: rsc2 stop on node1
+ * Resource action: rsc4:1 stop on node1
+ * Resource action: rsc4:0 stop on node2
+ * Pseudo action: clone4_stopped_0
+ * Resource action: rsc5:1 demote on node1
+ * Pseudo action: ms5_demoted_0
+ * Pseudo action: group2_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * Resource Group: group2:
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: clone4 [rsc4]:
+ * Stopped: [ node1 node2 ]
+ * Clone Set: ms5 [rsc5] (promotable):
+ * Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-4.summary b/cts/scheduler/summary/ticket-rsc-sets-4.summary
new file mode 100644
index 0000000..d119ce5
--- /dev/null
+++ b/cts/scheduler/summary/ticket-rsc-sets-4.summary
@@ -0,0 +1,49 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * Resource Group: group2:
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: clone4 [rsc4]:
+ * Stopped: [ node1 node2 ]
+ * Clone Set: ms5 [rsc5] (promotable):
+ * Stopped: [ node1 node2 ]
+
+Transition Summary:
+ * Start rsc5:0 ( node2 )
+ * Start rsc5:1 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Resource action: rsc3 monitor on node2
+ * Resource action: rsc3 monitor on node1
+ * Resource action: rsc4:0 monitor on node2
+ * Resource action: rsc4:0 monitor on node1
+ * Resource action: rsc5:0 monitor on node2
+ * Resource action: rsc5:1 monitor on node1
+ * Pseudo action: ms5_start_0
+ * Resource action: rsc5:0 start on node2
+ * Resource action: rsc5:1 start on node1
+ * Pseudo action: ms5_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * Resource Group: group2:
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: clone4 [rsc4]:
+ * Stopped: [ node1 node2 ]
+ * Clone Set: ms5 [rsc5] (promotable):
+ * Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-5.summary b/cts/scheduler/summary/ticket-rsc-sets-5.summary
new file mode 100644
index 0000000..217243a
--- /dev/null
+++ b/cts/scheduler/summary/ticket-rsc-sets-5.summary
@@ -0,0 +1,44 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * Resource Group: group2:
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: clone4 [rsc4]:
+ * Stopped: [ node1 node2 ]
+ * Clone Set: ms5 [rsc5] (promotable):
+ * Unpromoted: [ node1 node2 ]
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+ * Start rsc2 ( node1 )
+ * Start rsc3 ( node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 start on node2
+ * Pseudo action: group2_start_0
+ * Resource action: rsc2 start on node1
+ * Resource action: rsc3 start on node1
+ * Resource action: rsc1 monitor=10000 on node2
+ * Pseudo action: group2_running_0
+ * Resource action: rsc2 monitor=5000 on node1
+ * Resource action: rsc3 monitor=5000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * Resource Group: group2:
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+ * rsc3 (ocf:pacemaker:Dummy): Started node1
+ * Clone Set: clone4 [rsc4]:
+ * Stopped: [ node1 node2 ]
+ * Clone Set: ms5 [rsc5] (promotable):
+ * Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-6.summary b/cts/scheduler/summary/ticket-rsc-sets-6.summary
new file mode 100644
index 0000000..7336f70
--- /dev/null
+++ b/cts/scheduler/summary/ticket-rsc-sets-6.summary
@@ -0,0 +1,46 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * Resource Group: group2:
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+ * rsc3 (ocf:pacemaker:Dummy): Started node1
+ * Clone Set: clone4 [rsc4]:
+ * Stopped: [ node1 node2 ]
+ * Clone Set: ms5 [rsc5] (promotable):
+ * Unpromoted: [ node1 node2 ]
+
+Transition Summary:
+ * Start rsc4:0 ( node2 )
+ * Start rsc4:1 ( node1 )
+ * Promote rsc5:0 ( Unpromoted -> Promoted node1 )
+
+Executing Cluster Transition:
+ * Pseudo action: clone4_start_0
+ * Pseudo action: ms5_promote_0
+ * Resource action: rsc4:0 start on node2
+ * Resource action: rsc4:1 start on node1
+ * Pseudo action: clone4_running_0
+ * Resource action: rsc5:1 promote on node1
+ * Pseudo action: ms5_promoted_0
+ * Resource action: rsc4:0 monitor=5000 on node2
+ * Resource action: rsc4:1 monitor=5000 on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * Resource Group: group2:
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+ * rsc3 (ocf:pacemaker:Dummy): Started node1
+ * Clone Set: clone4 [rsc4]:
+ * Started: [ node1 node2 ]
+ * Clone Set: ms5 [rsc5] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-7.summary b/cts/scheduler/summary/ticket-rsc-sets-7.summary
new file mode 100644
index 0000000..3bc9d64
--- /dev/null
+++ b/cts/scheduler/summary/ticket-rsc-sets-7.summary
@@ -0,0 +1,52 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * Resource Group: group2:
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+ * rsc3 (ocf:pacemaker:Dummy): Started node1
+ * Clone Set: clone4 [rsc4]:
+ * Started: [ node1 node2 ]
+ * Clone Set: ms5 [rsc5] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
+
+Transition Summary:
+ * Stop rsc1 ( node2 ) due to node availability
+ * Stop rsc2 ( node1 ) due to node availability
+ * Stop rsc3 ( node1 ) due to node availability
+ * Stop rsc4:0 ( node1 ) due to node availability
+ * Stop rsc4:1 ( node2 ) due to node availability
+ * Demote rsc5:0 ( Promoted -> Unpromoted node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node2
+ * Pseudo action: group2_stop_0
+ * Resource action: rsc3 stop on node1
+ * Pseudo action: clone4_stop_0
+ * Pseudo action: ms5_demote_0
+ * Resource action: rsc2 stop on node1
+ * Resource action: rsc4:1 stop on node1
+ * Resource action: rsc4:0 stop on node2
+ * Pseudo action: clone4_stopped_0
+ * Resource action: rsc5:1 demote on node1
+ * Pseudo action: ms5_demoted_0
+ * Pseudo action: group2_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * Resource Group: group2:
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: clone4 [rsc4]:
+ * Stopped: [ node1 node2 ]
+ * Clone Set: ms5 [rsc5] (promotable):
+ * Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-8.summary b/cts/scheduler/summary/ticket-rsc-sets-8.summary
new file mode 100644
index 0000000..03153aa
--- /dev/null
+++ b/cts/scheduler/summary/ticket-rsc-sets-8.summary
@@ -0,0 +1,33 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * Resource Group: group2:
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: clone4 [rsc4]:
+ * Stopped: [ node1 node2 ]
+ * Clone Set: ms5 [rsc5] (promotable):
+ * Unpromoted: [ node1 node2 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * Resource Group: group2:
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: clone4 [rsc4]:
+ * Stopped: [ node1 node2 ]
+ * Clone Set: ms5 [rsc5] (promotable):
+ * Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/ticket-rsc-sets-9.summary b/cts/scheduler/summary/ticket-rsc-sets-9.summary
new file mode 100644
index 0000000..3bc9d64
--- /dev/null
+++ b/cts/scheduler/summary/ticket-rsc-sets-9.summary
@@ -0,0 +1,52 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * Resource Group: group2:
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+ * rsc3 (ocf:pacemaker:Dummy): Started node1
+ * Clone Set: clone4 [rsc4]:
+ * Started: [ node1 node2 ]
+ * Clone Set: ms5 [rsc5] (promotable):
+ * Promoted: [ node1 ]
+ * Unpromoted: [ node2 ]
+
+Transition Summary:
+ * Stop rsc1 ( node2 ) due to node availability
+ * Stop rsc2 ( node1 ) due to node availability
+ * Stop rsc3 ( node1 ) due to node availability
+ * Stop rsc4:0 ( node1 ) due to node availability
+ * Stop rsc4:1 ( node2 ) due to node availability
+ * Demote rsc5:0 ( Promoted -> Unpromoted node1 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node2
+ * Pseudo action: group2_stop_0
+ * Resource action: rsc3 stop on node1
+ * Pseudo action: clone4_stop_0
+ * Pseudo action: ms5_demote_0
+ * Resource action: rsc2 stop on node1
+ * Resource action: rsc4:1 stop on node1
+ * Resource action: rsc4:0 stop on node2
+ * Pseudo action: clone4_stopped_0
+ * Resource action: rsc5:1 demote on node1
+ * Pseudo action: ms5_demoted_0
+ * Pseudo action: group2_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc_stonith (stonith:null): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * Resource Group: group2:
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Stopped
+ * Clone Set: clone4 [rsc4]:
+ * Stopped: [ node1 node2 ]
+ * Clone Set: ms5 [rsc5] (promotable):
+ * Unpromoted: [ node1 node2 ]
diff --git a/cts/scheduler/summary/unfence-definition.summary b/cts/scheduler/summary/unfence-definition.summary
new file mode 100644
index 0000000..bb22680
--- /dev/null
+++ b/cts/scheduler/summary/unfence-definition.summary
@@ -0,0 +1,65 @@
+Current cluster status:
+ * Node List:
+ * Node virt-4: UNCLEAN (offline)
+ * Online: [ virt-1 virt-2 virt-3 ]
+
+ * Full List of Resources:
+ * fencing (stonith:fence_scsi): Started virt-1
+ * Clone Set: dlm-clone [dlm]:
+ * Started: [ virt-1 virt-2 ]
+ * Stopped: [ virt-3 virt-4 ]
+ * Clone Set: clvmd-clone [clvmd]:
+ * Started: [ virt-1 ]
+ * Stopped: [ virt-2 virt-3 virt-4 ]
+
+Transition Summary:
+ * Fence (reboot) virt-4 'node is unclean'
+ * Fence (on) virt-3 'required by fencing monitor'
+ * Fence (on) virt-1 'Device definition changed'
+ * Restart fencing ( virt-1 )
+ * Restart dlm:0 ( virt-1 ) due to required stonith
+ * Start dlm:2 ( virt-3 )
+ * Restart clvmd:0 ( virt-1 ) due to required stonith
+ * Start clvmd:1 ( virt-2 )
+ * Start clvmd:2 ( virt-3 )
+
+Executing Cluster Transition:
+ * Resource action: fencing stop on virt-1
+ * Resource action: clvmd monitor on virt-2
+ * Pseudo action: clvmd-clone_stop_0
+ * Fencing virt-4 (reboot)
+ * Fencing virt-3 (on)
+ * Resource action: fencing monitor on virt-3
+ * Resource action: fencing delete on virt-1
+ * Resource action: dlm monitor on virt-3
+ * Resource action: clvmd stop on virt-1
+ * Resource action: clvmd monitor on virt-3
+ * Pseudo action: clvmd-clone_stopped_0
+ * Pseudo action: dlm-clone_stop_0
+ * Resource action: dlm stop on virt-1
+ * Pseudo action: dlm-clone_stopped_0
+ * Pseudo action: dlm-clone_start_0
+ * Fencing virt-1 (on)
+ * Resource action: fencing start on virt-1
+ * Resource action: dlm start on virt-1
+ * Resource action: dlm start on virt-3
+ * Pseudo action: dlm-clone_running_0
+ * Pseudo action: clvmd-clone_start_0
+ * Resource action: clvmd start on virt-1
+ * Resource action: clvmd start on virt-2
+ * Resource action: clvmd start on virt-3
+ * Pseudo action: clvmd-clone_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ virt-1 virt-2 virt-3 ]
+ * OFFLINE: [ virt-4 ]
+
+ * Full List of Resources:
+ * fencing (stonith:fence_scsi): Started virt-1
+ * Clone Set: dlm-clone [dlm]:
+ * Started: [ virt-1 virt-2 virt-3 ]
+ * Stopped: [ virt-4 ]
+ * Clone Set: clvmd-clone [clvmd]:
+ * Started: [ virt-1 virt-2 virt-3 ]
+ * Stopped: [ virt-4 ]
diff --git a/cts/scheduler/summary/unfence-device.summary b/cts/scheduler/summary/unfence-device.summary
new file mode 100644
index 0000000..6ee7a59
--- /dev/null
+++ b/cts/scheduler/summary/unfence-device.summary
@@ -0,0 +1,31 @@
+Using the original execution date of: 2017-11-30 10:44:29Z
+Current cluster status:
+ * Node List:
+ * Online: [ virt-008 virt-009 virt-013 ]
+
+ * Full List of Resources:
+ * fence_scsi (stonith:fence_scsi): Stopped
+
+Transition Summary:
+ * Fence (on) virt-013 'required by fence_scsi monitor'
+ * Fence (on) virt-009 'required by fence_scsi monitor'
+ * Fence (on) virt-008 'required by fence_scsi monitor'
+ * Start fence_scsi ( virt-008 )
+
+Executing Cluster Transition:
+ * Fencing virt-013 (on)
+ * Fencing virt-009 (on)
+ * Fencing virt-008 (on)
+ * Resource action: fence_scsi monitor on virt-013
+ * Resource action: fence_scsi monitor on virt-009
+ * Resource action: fence_scsi monitor on virt-008
+ * Resource action: fence_scsi start on virt-008
+ * Resource action: fence_scsi monitor=60000 on virt-008
+Using the original execution date of: 2017-11-30 10:44:29Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ virt-008 virt-009 virt-013 ]
+
+ * Full List of Resources:
+ * fence_scsi (stonith:fence_scsi): Started virt-008
diff --git a/cts/scheduler/summary/unfence-parameters.summary b/cts/scheduler/summary/unfence-parameters.summary
new file mode 100644
index 0000000..b872a41
--- /dev/null
+++ b/cts/scheduler/summary/unfence-parameters.summary
@@ -0,0 +1,64 @@
+Current cluster status:
+ * Node List:
+ * Node virt-4: UNCLEAN (offline)
+ * Online: [ virt-1 virt-2 virt-3 ]
+
+ * Full List of Resources:
+ * fencing (stonith:fence_scsi): Started virt-1
+ * Clone Set: dlm-clone [dlm]:
+ * Started: [ virt-1 virt-2 ]
+ * Stopped: [ virt-3 virt-4 ]
+ * Clone Set: clvmd-clone [clvmd]:
+ * Started: [ virt-1 ]
+ * Stopped: [ virt-2 virt-3 virt-4 ]
+
+Transition Summary:
+ * Fence (reboot) virt-4 'node is unclean'
+ * Fence (on) virt-3 'required by fencing monitor'
+ * Fence (on) virt-1 'Device parameters changed'
+ * Restart fencing ( virt-1 ) due to resource definition change
+ * Restart dlm:0 ( virt-1 ) due to required stonith
+ * Start dlm:2 ( virt-3 )
+ * Restart clvmd:0 ( virt-1 ) due to required stonith
+ * Start clvmd:1 ( virt-2 )
+ * Start clvmd:2 ( virt-3 )
+
+Executing Cluster Transition:
+ * Resource action: fencing stop on virt-1
+ * Resource action: clvmd monitor on virt-2
+ * Pseudo action: clvmd-clone_stop_0
+ * Fencing virt-4 (reboot)
+ * Fencing virt-3 (on)
+ * Resource action: fencing monitor on virt-3
+ * Resource action: dlm monitor on virt-3
+ * Resource action: clvmd stop on virt-1
+ * Resource action: clvmd monitor on virt-3
+ * Pseudo action: clvmd-clone_stopped_0
+ * Pseudo action: dlm-clone_stop_0
+ * Resource action: dlm stop on virt-1
+ * Pseudo action: dlm-clone_stopped_0
+ * Pseudo action: dlm-clone_start_0
+ * Fencing virt-1 (on)
+ * Resource action: fencing start on virt-1
+ * Resource action: dlm start on virt-1
+ * Resource action: dlm start on virt-3
+ * Pseudo action: dlm-clone_running_0
+ * Pseudo action: clvmd-clone_start_0
+ * Resource action: clvmd start on virt-1
+ * Resource action: clvmd start on virt-2
+ * Resource action: clvmd start on virt-3
+ * Pseudo action: clvmd-clone_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ virt-1 virt-2 virt-3 ]
+ * OFFLINE: [ virt-4 ]
+
+ * Full List of Resources:
+ * fencing (stonith:fence_scsi): Started virt-1
+ * Clone Set: dlm-clone [dlm]:
+ * Started: [ virt-1 virt-2 virt-3 ]
+ * Stopped: [ virt-4 ]
+ * Clone Set: clvmd-clone [clvmd]:
+ * Started: [ virt-1 virt-2 virt-3 ]
+ * Stopped: [ virt-4 ]
diff --git a/cts/scheduler/summary/unfence-startup.summary b/cts/scheduler/summary/unfence-startup.summary
new file mode 100644
index 0000000..94617c2
--- /dev/null
+++ b/cts/scheduler/summary/unfence-startup.summary
@@ -0,0 +1,49 @@
+Current cluster status:
+ * Node List:
+ * Node virt-4: UNCLEAN (offline)
+ * Online: [ virt-1 virt-2 virt-3 ]
+
+ * Full List of Resources:
+ * fencing (stonith:fence_scsi): Started virt-1
+ * Clone Set: dlm-clone [dlm]:
+ * Started: [ virt-1 virt-2 ]
+ * Stopped: [ virt-3 virt-4 ]
+ * Clone Set: clvmd-clone [clvmd]:
+ * Started: [ virt-1 ]
+ * Stopped: [ virt-2 virt-3 virt-4 ]
+
+Transition Summary:
+ * Fence (reboot) virt-4 'node is unclean'
+ * Fence (on) virt-3 'required by fencing monitor'
+ * Start dlm:2 ( virt-3 )
+ * Start clvmd:1 ( virt-2 )
+ * Start clvmd:2 ( virt-3 )
+
+Executing Cluster Transition:
+ * Resource action: clvmd monitor on virt-2
+ * Fencing virt-4 (reboot)
+ * Fencing virt-3 (on)
+ * Resource action: fencing monitor on virt-3
+ * Resource action: dlm monitor on virt-3
+ * Pseudo action: dlm-clone_start_0
+ * Resource action: clvmd monitor on virt-3
+ * Resource action: dlm start on virt-3
+ * Pseudo action: dlm-clone_running_0
+ * Pseudo action: clvmd-clone_start_0
+ * Resource action: clvmd start on virt-2
+ * Resource action: clvmd start on virt-3
+ * Pseudo action: clvmd-clone_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ virt-1 virt-2 virt-3 ]
+ * OFFLINE: [ virt-4 ]
+
+ * Full List of Resources:
+ * fencing (stonith:fence_scsi): Started virt-1
+ * Clone Set: dlm-clone [dlm]:
+ * Started: [ virt-1 virt-2 virt-3 ]
+ * Stopped: [ virt-4 ]
+ * Clone Set: clvmd-clone [clvmd]:
+ * Started: [ virt-1 virt-2 virt-3 ]
+ * Stopped: [ virt-4 ]
diff --git a/cts/scheduler/summary/unmanaged-block-restart.summary b/cts/scheduler/summary/unmanaged-block-restart.summary
new file mode 100644
index 0000000..c771449
--- /dev/null
+++ b/cts/scheduler/summary/unmanaged-block-restart.summary
@@ -0,0 +1,32 @@
+0 of 4 resource instances DISABLED and 1 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ yingying.site ]
+
+ * Full List of Resources:
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Started yingying.site
+ * rsc3 (ocf:pacemaker:Dummy): Started yingying.site
+ * rsc4 (ocf:pacemaker:Dummy): FAILED yingying.site (blocked)
+
+Transition Summary:
+ * Start rsc1 ( yingying.site ) due to unrunnable rsc2 stop (blocked)
+ * Stop rsc2 ( yingying.site ) due to unrunnable rsc3 stop (blocked)
+ * Stop rsc3 ( yingying.site ) due to required rsc2 stop (blocked)
+
+Executing Cluster Transition:
+ * Pseudo action: group1_stop_0
+ * Pseudo action: group1_start_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ yingying.site ]
+
+ * Full List of Resources:
+ * Resource Group: group1:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Started yingying.site
+ * rsc3 (ocf:pacemaker:Dummy): Started yingying.site
+ * rsc4 (ocf:pacemaker:Dummy): FAILED yingying.site (blocked)
diff --git a/cts/scheduler/summary/unmanaged-promoted.summary b/cts/scheduler/summary/unmanaged-promoted.summary
new file mode 100644
index 0000000..a617e07
--- /dev/null
+++ b/cts/scheduler/summary/unmanaged-promoted.summary
@@ -0,0 +1,75 @@
+Current cluster status:
+ * Node List:
+ * Online: [ pcmk-1 pcmk-2 ]
+ * OFFLINE: [ pcmk-3 pcmk-4 ]
+
+ * Full List of Resources:
+ * Clone Set: Fencing [FencingChild] (unmanaged):
+ * FencingChild (stonith:fence_xvm): Started pcmk-3 (unmanaged)
+ * FencingChild (stonith:fence_xvm): Started pcmk-4 (unmanaged)
+ * FencingChild (stonith:fence_xvm): Started pcmk-2 (unmanaged)
+ * FencingChild (stonith:fence_xvm): Started pcmk-1 (unmanaged)
+ * Stopped: [ pcmk-3 pcmk-4 ]
+ * Resource Group: group-1 (unmanaged):
+ * r192.168.122.126 (ocf:heartbeat:IPaddr): Started pcmk-2 (unmanaged)
+ * r192.168.122.127 (ocf:heartbeat:IPaddr): Started pcmk-2 (unmanaged)
+ * r192.168.122.128 (ocf:heartbeat:IPaddr): Started pcmk-2 (unmanaged)
+ * rsc_pcmk-1 (ocf:heartbeat:IPaddr): Started pcmk-1 (unmanaged)
+ * rsc_pcmk-2 (ocf:heartbeat:IPaddr): Started pcmk-2 (unmanaged)
+ * rsc_pcmk-3 (ocf:heartbeat:IPaddr): Started pcmk-3 (unmanaged)
+ * rsc_pcmk-4 (ocf:heartbeat:IPaddr): Started pcmk-4 (unmanaged)
+ * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started pcmk-2 (unmanaged)
+ * migrator (ocf:pacemaker:Dummy): Started pcmk-4 (unmanaged)
+ * Clone Set: Connectivity [ping-1] (unmanaged):
+ * ping-1 (ocf:pacemaker:ping): Started pcmk-3 (unmanaged)
+ * ping-1 (ocf:pacemaker:ping): Started pcmk-4 (unmanaged)
+ * ping-1 (ocf:pacemaker:ping): Started pcmk-2 (unmanaged)
+ * ping-1 (ocf:pacemaker:ping): Started pcmk-1 (unmanaged)
+ * Stopped: [ pcmk-3 pcmk-4 ]
+ * Clone Set: master-1 [stateful-1] (promotable, unmanaged):
+ * stateful-1 (ocf:pacemaker:Stateful): Unpromoted pcmk-3 (unmanaged)
+ * stateful-1 (ocf:pacemaker:Stateful): Unpromoted pcmk-4 (unmanaged)
+ * stateful-1 (ocf:pacemaker:Stateful): Promoted pcmk-2 (unmanaged)
+ * stateful-1 (ocf:pacemaker:Stateful): Unpromoted pcmk-1 (unmanaged)
+ * Stopped: [ pcmk-3 pcmk-4 ]
+
+Transition Summary:
+
+Executing Cluster Transition:
+ * Cluster action: do_shutdown on pcmk-2
+ * Cluster action: do_shutdown on pcmk-1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ pcmk-1 pcmk-2 ]
+ * OFFLINE: [ pcmk-3 pcmk-4 ]
+
+ * Full List of Resources:
+ * Clone Set: Fencing [FencingChild] (unmanaged):
+ * FencingChild (stonith:fence_xvm): Started pcmk-3 (unmanaged)
+ * FencingChild (stonith:fence_xvm): Started pcmk-4 (unmanaged)
+ * FencingChild (stonith:fence_xvm): Started pcmk-2 (unmanaged)
+ * FencingChild (stonith:fence_xvm): Started pcmk-1 (unmanaged)
+ * Stopped: [ pcmk-3 pcmk-4 ]
+ * Resource Group: group-1 (unmanaged):
+ * r192.168.122.126 (ocf:heartbeat:IPaddr): Started pcmk-2 (unmanaged)
+ * r192.168.122.127 (ocf:heartbeat:IPaddr): Started pcmk-2 (unmanaged)
+ * r192.168.122.128 (ocf:heartbeat:IPaddr): Started pcmk-2 (unmanaged)
+ * rsc_pcmk-1 (ocf:heartbeat:IPaddr): Started pcmk-1 (unmanaged)
+ * rsc_pcmk-2 (ocf:heartbeat:IPaddr): Started pcmk-2 (unmanaged)
+ * rsc_pcmk-3 (ocf:heartbeat:IPaddr): Started pcmk-3 (unmanaged)
+ * rsc_pcmk-4 (ocf:heartbeat:IPaddr): Started pcmk-4 (unmanaged)
+ * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started pcmk-2 (unmanaged)
+ * migrator (ocf:pacemaker:Dummy): Started pcmk-4 (unmanaged)
+ * Clone Set: Connectivity [ping-1] (unmanaged):
+ * ping-1 (ocf:pacemaker:ping): Started pcmk-3 (unmanaged)
+ * ping-1 (ocf:pacemaker:ping): Started pcmk-4 (unmanaged)
+ * ping-1 (ocf:pacemaker:ping): Started pcmk-2 (unmanaged)
+ * ping-1 (ocf:pacemaker:ping): Started pcmk-1 (unmanaged)
+ * Stopped: [ pcmk-3 pcmk-4 ]
+ * Clone Set: master-1 [stateful-1] (promotable, unmanaged):
+ * stateful-1 (ocf:pacemaker:Stateful): Unpromoted pcmk-3 (unmanaged)
+ * stateful-1 (ocf:pacemaker:Stateful): Unpromoted pcmk-4 (unmanaged)
+ * stateful-1 (ocf:pacemaker:Stateful): Promoted pcmk-2 (unmanaged)
+ * stateful-1 (ocf:pacemaker:Stateful): Unpromoted pcmk-1 (unmanaged)
+ * Stopped: [ pcmk-3 pcmk-4 ]
diff --git a/cts/scheduler/summary/unmanaged-stop-1.summary b/cts/scheduler/summary/unmanaged-stop-1.summary
new file mode 100644
index 0000000..ce91d7a
--- /dev/null
+++ b/cts/scheduler/summary/unmanaged-stop-1.summary
@@ -0,0 +1,22 @@
+1 of 2 resource instances DISABLED and 1 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ yingying.site ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started yingying.site (disabled)
+ * rsc2 (ocf:pacemaker:Dummy): FAILED yingying.site (blocked)
+
+Transition Summary:
+ * Stop rsc1 ( yingying.site ) due to node availability (blocked)
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ yingying.site ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started yingying.site (disabled)
+ * rsc2 (ocf:pacemaker:Dummy): FAILED yingying.site (blocked)
diff --git a/cts/scheduler/summary/unmanaged-stop-2.summary b/cts/scheduler/summary/unmanaged-stop-2.summary
new file mode 100644
index 0000000..ce91d7a
--- /dev/null
+++ b/cts/scheduler/summary/unmanaged-stop-2.summary
@@ -0,0 +1,22 @@
+1 of 2 resource instances DISABLED and 1 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ yingying.site ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started yingying.site (disabled)
+ * rsc2 (ocf:pacemaker:Dummy): FAILED yingying.site (blocked)
+
+Transition Summary:
+ * Stop rsc1 ( yingying.site ) due to node availability (blocked)
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ yingying.site ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started yingying.site (disabled)
+ * rsc2 (ocf:pacemaker:Dummy): FAILED yingying.site (blocked)
diff --git a/cts/scheduler/summary/unmanaged-stop-3.summary b/cts/scheduler/summary/unmanaged-stop-3.summary
new file mode 100644
index 0000000..373130a
--- /dev/null
+++ b/cts/scheduler/summary/unmanaged-stop-3.summary
@@ -0,0 +1,25 @@
+2 of 2 resource instances DISABLED and 1 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ yingying.site ]
+
+ * Full List of Resources:
+ * Resource Group: group1 (disabled):
+ * rsc1 (ocf:pacemaker:Dummy): Started yingying.site (disabled)
+ * rsc2 (ocf:pacemaker:Dummy): FAILED yingying.site (disabled, blocked)
+
+Transition Summary:
+ * Stop rsc1 ( yingying.site ) due to node availability (blocked)
+
+Executing Cluster Transition:
+ * Pseudo action: group1_stop_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ yingying.site ]
+
+ * Full List of Resources:
+ * Resource Group: group1 (disabled):
+ * rsc1 (ocf:pacemaker:Dummy): Started yingying.site (disabled)
+ * rsc2 (ocf:pacemaker:Dummy): FAILED yingying.site (disabled, blocked)
diff --git a/cts/scheduler/summary/unmanaged-stop-4.summary b/cts/scheduler/summary/unmanaged-stop-4.summary
new file mode 100644
index 0000000..edf940c
--- /dev/null
+++ b/cts/scheduler/summary/unmanaged-stop-4.summary
@@ -0,0 +1,27 @@
+3 of 3 resource instances DISABLED and 1 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ yingying.site ]
+
+ * Full List of Resources:
+ * Resource Group: group1 (disabled):
+ * rsc1 (ocf:pacemaker:Dummy): Started yingying.site (disabled)
+ * rsc2 (ocf:pacemaker:Dummy): FAILED yingying.site (disabled, blocked)
+ * rsc3 (ocf:heartbeat:Dummy): Stopped (disabled)
+
+Transition Summary:
+ * Stop rsc1 ( yingying.site ) due to node availability (blocked)
+
+Executing Cluster Transition:
+ * Pseudo action: group1_stop_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ yingying.site ]
+
+ * Full List of Resources:
+ * Resource Group: group1 (disabled):
+ * rsc1 (ocf:pacemaker:Dummy): Started yingying.site (disabled)
+ * rsc2 (ocf:pacemaker:Dummy): FAILED yingying.site (disabled, blocked)
+ * rsc3 (ocf:heartbeat:Dummy): Stopped (disabled)
diff --git a/cts/scheduler/summary/unrunnable-1.summary b/cts/scheduler/summary/unrunnable-1.summary
new file mode 100644
index 0000000..75fda23
--- /dev/null
+++ b/cts/scheduler/summary/unrunnable-1.summary
@@ -0,0 +1,67 @@
+Current cluster status:
+ * Node List:
+ * Node c001n02: UNCLEAN (offline)
+ * Online: [ c001n03 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Stopped
+ * Resource Group: group-1:
+ * child_192.168.100.181 (ocf:heartbeat:IPaddr): Stopped
+ * child_192.168.100.182 (ocf:heartbeat:IPaddr): Stopped
+ * child_192.168.100.183 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Stopped
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started c001n03
+ * child_DoFencing:1 (stonith:ssh): Started c001n02 (UNCLEAN)
+ * child_DoFencing:2 (stonith:ssh): Stopped
+ * child_DoFencing:3 (stonith:ssh): Stopped
+
+Transition Summary:
+ * Start DcIPaddr ( c001n03 ) due to no quorum (blocked)
+ * Start child_192.168.100.181 ( c001n03 ) due to no quorum (blocked)
+ * Start child_192.168.100.182 ( c001n03 ) due to no quorum (blocked)
+ * Start child_192.168.100.183 ( c001n03 ) due to no quorum (blocked)
+ * Start rsc_c001n08 ( c001n03 ) due to no quorum (blocked)
+ * Start rsc_c001n02 ( c001n03 ) due to no quorum (blocked)
+ * Start rsc_c001n03 ( c001n03 ) due to no quorum (blocked)
+ * Start rsc_c001n01 ( c001n03 ) due to no quorum (blocked)
+ * Stop child_DoFencing:1 ( c001n02 ) due to node availability (blocked)
+
+Executing Cluster Transition:
+ * Resource action: DcIPaddr monitor on c001n03
+ * Resource action: child_192.168.100.181 monitor on c001n03
+ * Resource action: child_192.168.100.182 monitor on c001n03
+ * Resource action: child_192.168.100.183 monitor on c001n03
+ * Resource action: rsc_c001n08 monitor on c001n03
+ * Resource action: rsc_c001n02 monitor on c001n03
+ * Resource action: rsc_c001n03 monitor on c001n03
+ * Resource action: rsc_c001n01 monitor on c001n03
+ * Resource action: child_DoFencing:1 monitor on c001n03
+ * Resource action: child_DoFencing:2 monitor on c001n03
+ * Resource action: child_DoFencing:3 monitor on c001n03
+ * Pseudo action: DoFencing_stop_0
+ * Pseudo action: DoFencing_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Node c001n02: UNCLEAN (offline)
+ * Online: [ c001n03 ]
+
+ * Full List of Resources:
+ * DcIPaddr (ocf:heartbeat:IPaddr): Stopped
+ * Resource Group: group-1:
+ * child_192.168.100.181 (ocf:heartbeat:IPaddr): Stopped
+ * child_192.168.100.182 (ocf:heartbeat:IPaddr): Stopped
+ * child_192.168.100.183 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n08 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n02 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n03 (ocf:heartbeat:IPaddr): Stopped
+ * rsc_c001n01 (ocf:heartbeat:IPaddr): Stopped
+ * Clone Set: DoFencing [child_DoFencing] (unique):
+ * child_DoFencing:0 (stonith:ssh): Started c001n03
+ * child_DoFencing:1 (stonith:ssh): Started c001n02 (UNCLEAN)
+ * child_DoFencing:2 (stonith:ssh): Stopped
+ * child_DoFencing:3 (stonith:ssh): Stopped
diff --git a/cts/scheduler/summary/unrunnable-2.summary b/cts/scheduler/summary/unrunnable-2.summary
new file mode 100644
index 0000000..26c6351
--- /dev/null
+++ b/cts/scheduler/summary/unrunnable-2.summary
@@ -0,0 +1,178 @@
+6 of 117 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+
+ * Full List of Resources:
+ * ip-192.0.2.12 (ocf:heartbeat:IPaddr2): Started overcloud-controller-0
+ * Clone Set: haproxy-clone [haproxy]:
+ * Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: galera-master [galera] (promotable):
+ * Promoted: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: memcached-clone [memcached]:
+ * Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: rabbitmq-clone [rabbitmq]:
+ * Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-core-clone [openstack-core] (disabled):
+ * Stopped (disabled): [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: redis-master [redis] (promotable):
+ * Promoted: [ overcloud-controller-1 ]
+ * Unpromoted: [ overcloud-controller-0 overcloud-controller-2 ]
+ * ip-192.0.2.11 (ocf:heartbeat:IPaddr2): Started overcloud-controller-1
+ * Clone Set: mongod-clone [mongod]:
+ * Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: neutron-l3-agent-clone [neutron-l3-agent]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]:
+ * Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]:
+ * Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * openstack-cinder-volume (systemd:openstack-cinder-volume): Stopped
+ * Clone Set: openstack-heat-engine-clone [openstack-heat-engine]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener]:
+ * Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-heat-api-clone [openstack-heat-api]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-glance-api-clone [openstack-glance-api]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-nova-api-clone [openstack-nova-api]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-sahara-api-clone [openstack-sahara-api]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine]:
+ * Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-glance-registry-clone [openstack-glance-registry]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]:
+ * Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-cinder-api-clone [openstack-cinder-api]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: delay-clone [delay]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: neutron-server-clone [neutron-server]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: httpd-clone [httpd] (disabled):
+ * Stopped (disabled): [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+
+Transition Summary:
+ * Start openstack-cinder-volume ( overcloud-controller-2 ) due to unrunnable openstack-cinder-scheduler-clone running (blocked)
+
+Executing Cluster Transition:
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+
+ * Full List of Resources:
+ * ip-192.0.2.12 (ocf:heartbeat:IPaddr2): Started overcloud-controller-0
+ * Clone Set: haproxy-clone [haproxy]:
+ * Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: galera-master [galera] (promotable):
+ * Promoted: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: memcached-clone [memcached]:
+ * Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: rabbitmq-clone [rabbitmq]:
+ * Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-core-clone [openstack-core] (disabled):
+ * Stopped (disabled): [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: redis-master [redis] (promotable):
+ * Promoted: [ overcloud-controller-1 ]
+ * Unpromoted: [ overcloud-controller-0 overcloud-controller-2 ]
+ * ip-192.0.2.11 (ocf:heartbeat:IPaddr2): Started overcloud-controller-1
+ * Clone Set: mongod-clone [mongod]:
+ * Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: neutron-l3-agent-clone [neutron-l3-agent]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]:
+ * Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]:
+ * Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * openstack-cinder-volume (systemd:openstack-cinder-volume): Stopped
+ * Clone Set: openstack-heat-engine-clone [openstack-heat-engine]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener]:
+ * Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-heat-api-clone [openstack-heat-api]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-glance-api-clone [openstack-glance-api]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-nova-api-clone [openstack-nova-api]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-sahara-api-clone [openstack-sahara-api]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine]:
+ * Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-glance-registry-clone [openstack-glance-registry]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]:
+ * Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-cinder-api-clone [openstack-cinder-api]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: delay-clone [delay]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: neutron-server-clone [neutron-server]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: httpd-clone [httpd] (disabled):
+ * Stopped (disabled): [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
+ * Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]:
+ * Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
diff --git a/cts/scheduler/summary/use-after-free-merge.summary b/cts/scheduler/summary/use-after-free-merge.summary
new file mode 100644
index 0000000..af3e2a2
--- /dev/null
+++ b/cts/scheduler/summary/use-after-free-merge.summary
@@ -0,0 +1,45 @@
+2 of 5 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * fencing-sbd (stonith:external/sbd): Stopped
+ * Resource Group: g0 (disabled):
+ * d0 (ocf:heartbeat:Dummy): Stopped (disabled)
+ * d1 (ocf:heartbeat:Dummy): Stopped (disabled)
+ * Clone Set: ms0 [s0] (promotable):
+ * Stopped: [ hex-13 hex-14 ]
+
+Transition Summary:
+ * Start fencing-sbd ( hex-14 )
+ * Start s0:0 ( hex-13 )
+ * Start s0:1 ( hex-14 )
+
+Executing Cluster Transition:
+ * Resource action: fencing-sbd monitor on hex-14
+ * Resource action: fencing-sbd monitor on hex-13
+ * Resource action: d0 monitor on hex-14
+ * Resource action: d0 monitor on hex-13
+ * Resource action: d1 monitor on hex-14
+ * Resource action: d1 monitor on hex-13
+ * Resource action: s0:0 monitor on hex-13
+ * Resource action: s0:1 monitor on hex-14
+ * Pseudo action: ms0_start_0
+ * Resource action: fencing-sbd start on hex-14
+ * Resource action: s0:0 start on hex-13
+ * Resource action: s0:1 start on hex-14
+ * Pseudo action: ms0_running_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ hex-13 hex-14 ]
+
+ * Full List of Resources:
+ * fencing-sbd (stonith:external/sbd): Started hex-14
+ * Resource Group: g0 (disabled):
+ * d0 (ocf:heartbeat:Dummy): Stopped (disabled)
+ * d1 (ocf:heartbeat:Dummy): Stopped (disabled)
+ * Clone Set: ms0 [s0] (promotable):
+ * Unpromoted: [ hex-13 hex-14 ]
diff --git a/cts/scheduler/summary/utilization-check-allowed-nodes.summary b/cts/scheduler/summary/utilization-check-allowed-nodes.summary
new file mode 100644
index 0000000..608a377
--- /dev/null
+++ b/cts/scheduler/summary/utilization-check-allowed-nodes.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc1 ( node2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc1 monitor on node2
+ * Resource action: rsc1 monitor on node1
+ * Resource action: rsc2 monitor on node2
+ * Resource action: rsc2 monitor on node1
+ * Pseudo action: load_stopped_node2
+ * Pseudo action: load_stopped_node1
+ * Resource action: rsc1 start on node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/utilization-complex.summary b/cts/scheduler/summary/utilization-complex.summary
new file mode 100644
index 0000000..946dd12
--- /dev/null
+++ b/cts/scheduler/summary/utilization-complex.summary
@@ -0,0 +1,148 @@
+Using the original execution date of: 2022-01-05 22:04:47Z
+Current cluster status:
+ * Node List:
+ * Online: [ rhel8-1 rhel8-2 rhel8-3 rhel8-4 rhel8-5 ]
+ * GuestOnline: [ httpd-bundle-0 ]
+
+ * Full List of Resources:
+ * dummy3 (ocf:pacemaker:Dummy): Started rhel8-1
+ * dummy5 (ocf:pacemaker:Dummy): Started rhel8-2
+ * Container bundle set: httpd-bundle [localhost/pcmktest:http]:
+ * httpd-bundle-0 (192.168.122.131) (ocf:heartbeat:apache): Started rhel8-2
+ * httpd-bundle-1 (192.168.122.132) (ocf:heartbeat:apache): Stopped
+ * httpd-bundle-2 (192.168.122.133) (ocf:heartbeat:apache): Stopped
+ * dummy4 (ocf:pacemaker:Dummy): Started rhel8-5
+ * dummy1 (ocf:pacemaker:Dummy): Started rhel8-1
+ * dummy2 (ocf:pacemaker:Dummy): Started rhel8-1
+ * Fencing (stonith:fence_xvm): Started rhel8-3
+ * FencingPass (stonith:fence_dummy): Started rhel8-4
+ * FencingFail (stonith:fence_dummy): Started rhel8-5
+ * Resource Group: g1:
+ * g1m1 (ocf:pacemaker:Dummy): Started rhel8-5
+ * g1m2 (ocf:pacemaker:Dummy): Started rhel8-5
+ * g1m3 (ocf:pacemaker:Dummy): Started rhel8-5
+ * Clone Set: clone1-clone [clone1]:
+ * Started: [ rhel8-1 rhel8-2 rhel8-3 rhel8-4 rhel8-5 ]
+ * Stopped: [ httpd-bundle-0 httpd-bundle-1 httpd-bundle-2 ]
+ * Clone Set: clone2-clone [clone2]:
+ * Started: [ rhel8-1 rhel8-2 rhel8-3 rhel8-4 rhel8-5 ]
+ * Stopped: [ httpd-bundle-0 httpd-bundle-1 httpd-bundle-2 ]
+
+Transition Summary:
+ * Stop dummy3 ( rhel8-1 ) due to node availability
+ * Move dummy5 ( rhel8-2 -> rhel8-5 )
+ * Move httpd-bundle-ip-192.168.122.131 ( rhel8-2 -> rhel8-5 )
+ * Move httpd-bundle-podman-0 ( rhel8-2 -> rhel8-5 )
+ * Move httpd-bundle-0 ( rhel8-2 -> rhel8-5 )
+ * Restart httpd:0 ( httpd-bundle-0 ) due to required httpd-bundle-podman-0 start
+ * Start httpd-bundle-1 ( rhel8-1 ) due to unrunnable httpd-bundle-podman-1 start (blocked)
+ * Start httpd:1 ( httpd-bundle-1 ) due to unrunnable httpd-bundle-podman-1 start (blocked)
+ * Start httpd-bundle-2 ( rhel8-2 ) due to unrunnable httpd-bundle-podman-2 start (blocked)
+ * Start httpd:2 ( httpd-bundle-2 ) due to unrunnable httpd-bundle-podman-2 start (blocked)
+ * Move dummy4 ( rhel8-5 -> rhel8-4 )
+ * Move dummy1 ( rhel8-1 -> rhel8-3 )
+ * Move dummy2 ( rhel8-1 -> rhel8-3 )
+ * Move Fencing ( rhel8-3 -> rhel8-1 )
+ * Move FencingFail ( rhel8-5 -> rhel8-2 )
+ * Move g1m1 ( rhel8-5 -> rhel8-4 )
+ * Move g1m2 ( rhel8-5 -> rhel8-4 )
+ * Move g1m3 ( rhel8-5 -> rhel8-4 )
+ * Stop clone1:3 ( rhel8-5 ) due to node availability
+ * Stop clone2:3 ( rhel8-5 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: dummy3 stop on rhel8-1
+ * Resource action: dummy5 stop on rhel8-2
+ * Resource action: dummy4 stop on rhel8-5
+ * Resource action: dummy1 stop on rhel8-1
+ * Resource action: dummy2 stop on rhel8-1
+ * Resource action: Fencing stop on rhel8-3
+ * Resource action: FencingFail stop on rhel8-5
+ * Pseudo action: g1_stop_0
+ * Resource action: g1m3 stop on rhel8-5
+ * Pseudo action: clone1-clone_stop_0
+ * Pseudo action: clone2-clone_stop_0
+ * Pseudo action: httpd-bundle_stop_0
+ * Pseudo action: httpd-bundle_start_0
+ * Pseudo action: load_stopped_rhel8-4
+ * Pseudo action: load_stopped_rhel8-3
+ * Pseudo action: load_stopped_httpd-bundle-2
+ * Pseudo action: load_stopped_httpd-bundle-1
+ * Pseudo action: load_stopped_httpd-bundle-0
+ * Pseudo action: load_stopped_rhel8-1
+ * Pseudo action: httpd-bundle-clone_stop_0
+ * Resource action: dummy4 start on rhel8-4
+ * Resource action: dummy1 start on rhel8-3
+ * Resource action: dummy2 start on rhel8-3
+ * Resource action: Fencing start on rhel8-1
+ * Resource action: FencingFail start on rhel8-2
+ * Resource action: g1m2 stop on rhel8-5
+ * Resource action: clone1 stop on rhel8-5
+ * Pseudo action: clone1-clone_stopped_0
+ * Resource action: clone2 stop on rhel8-5
+ * Pseudo action: clone2-clone_stopped_0
+ * Resource action: httpd stop on httpd-bundle-0
+ * Pseudo action: httpd-bundle-clone_stopped_0
+ * Pseudo action: httpd-bundle-clone_start_0
+ * Resource action: httpd-bundle-0 stop on rhel8-2
+ * Resource action: dummy4 monitor=10000 on rhel8-4
+ * Resource action: dummy1 monitor=10000 on rhel8-3
+ * Resource action: dummy2 monitor=10000 on rhel8-3
+ * Resource action: Fencing monitor=120000 on rhel8-1
+ * Resource action: g1m1 stop on rhel8-5
+ * Pseudo action: load_stopped_rhel8-5
+ * Resource action: dummy5 start on rhel8-5
+ * Resource action: httpd-bundle-podman-0 stop on rhel8-2
+ * Pseudo action: g1_stopped_0
+ * Pseudo action: g1_start_0
+ * Resource action: g1m1 start on rhel8-4
+ * Resource action: g1m2 start on rhel8-4
+ * Resource action: g1m3 start on rhel8-4
+ * Pseudo action: httpd-bundle_stopped_0
+ * Pseudo action: load_stopped_rhel8-2
+ * Resource action: dummy5 monitor=10000 on rhel8-5
+ * Resource action: httpd-bundle-ip-192.168.122.131 stop on rhel8-2
+ * Pseudo action: g1_running_0
+ * Resource action: g1m1 monitor=10000 on rhel8-4
+ * Resource action: g1m2 monitor=10000 on rhel8-4
+ * Resource action: g1m3 monitor=10000 on rhel8-4
+ * Resource action: httpd-bundle-ip-192.168.122.131 start on rhel8-5
+ * Resource action: httpd-bundle-podman-0 start on rhel8-5
+ * Resource action: httpd-bundle-0 start on rhel8-5
+ * Resource action: httpd start on httpd-bundle-0
+ * Resource action: httpd monitor=15000 on httpd-bundle-0
+ * Pseudo action: httpd-bundle-clone_running_0
+ * Resource action: httpd-bundle-ip-192.168.122.131 monitor=60000 on rhel8-5
+ * Resource action: httpd-bundle-podman-0 monitor=60000 on rhel8-5
+ * Resource action: httpd-bundle-0 monitor=30000 on rhel8-5
+ * Pseudo action: httpd-bundle_running_0
+Using the original execution date of: 2022-01-05 22:04:47Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel8-1 rhel8-2 rhel8-3 rhel8-4 rhel8-5 ]
+ * GuestOnline: [ httpd-bundle-0 ]
+
+ * Full List of Resources:
+ * dummy3 (ocf:pacemaker:Dummy): Stopped
+ * dummy5 (ocf:pacemaker:Dummy): Started rhel8-5
+ * Container bundle set: httpd-bundle [localhost/pcmktest:http]:
+ * httpd-bundle-0 (192.168.122.131) (ocf:heartbeat:apache): Started rhel8-5
+ * httpd-bundle-1 (192.168.122.132) (ocf:heartbeat:apache): Stopped
+ * httpd-bundle-2 (192.168.122.133) (ocf:heartbeat:apache): Stopped
+ * dummy4 (ocf:pacemaker:Dummy): Started rhel8-4
+ * dummy1 (ocf:pacemaker:Dummy): Started rhel8-3
+ * dummy2 (ocf:pacemaker:Dummy): Started rhel8-3
+ * Fencing (stonith:fence_xvm): Started rhel8-1
+ * FencingPass (stonith:fence_dummy): Started rhel8-4
+ * FencingFail (stonith:fence_dummy): Started rhel8-2
+ * Resource Group: g1:
+ * g1m1 (ocf:pacemaker:Dummy): Started rhel8-4
+ * g1m2 (ocf:pacemaker:Dummy): Started rhel8-4
+ * g1m3 (ocf:pacemaker:Dummy): Started rhel8-4
+ * Clone Set: clone1-clone [clone1]:
+ * Started: [ rhel8-1 rhel8-2 rhel8-3 rhel8-4 ]
+ * Stopped: [ httpd-bundle-0 httpd-bundle-1 httpd-bundle-2 rhel8-5 ]
+ * Clone Set: clone2-clone [clone2]:
+ * Started: [ rhel8-1 rhel8-2 rhel8-3 rhel8-4 ]
+ * Stopped: [ httpd-bundle-0 httpd-bundle-1 httpd-bundle-2 rhel8-5 ]
diff --git a/cts/scheduler/summary/utilization-order1.summary b/cts/scheduler/summary/utilization-order1.summary
new file mode 100644
index 0000000..f76ce61
--- /dev/null
+++ b/cts/scheduler/summary/utilization-order1.summary
@@ -0,0 +1,25 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+
+Transition Summary:
+ * Start rsc2 ( node1 )
+ * Stop rsc1 ( node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: rsc1 stop on node1
+ * Pseudo action: load_stopped_node2
+ * Pseudo action: load_stopped_node1
+ * Resource action: rsc2 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/utilization-order2.summary b/cts/scheduler/summary/utilization-order2.summary
new file mode 100644
index 0000000..123b935
--- /dev/null
+++ b/cts/scheduler/summary/utilization-order2.summary
@@ -0,0 +1,39 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc4 (ocf:pacemaker:Dummy): Stopped
+ * rsc3 (ocf:pacemaker:Dummy): Started node1
+ * Clone Set: clone-rsc2 [rsc2]:
+ * Started: [ node1 node2 ]
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
+
+Transition Summary:
+ * Start rsc4 ( node1 )
+ * Move rsc3 ( node1 -> node2 )
+ * Stop rsc2:0 ( node1 ) due to node availability
+ * Stop rsc1 ( node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: rsc3 stop on node1
+ * Pseudo action: clone-rsc2_stop_0
+ * Resource action: rsc1 stop on node2
+ * Pseudo action: load_stopped_node2
+ * Resource action: rsc3 start on node2
+ * Resource action: rsc2:1 stop on node1
+ * Pseudo action: clone-rsc2_stopped_0
+ * Pseudo action: load_stopped_node1
+ * Resource action: rsc4 start on node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc4 (ocf:pacemaker:Dummy): Started node1
+ * rsc3 (ocf:pacemaker:Dummy): Started node2
+ * Clone Set: clone-rsc2 [rsc2]:
+ * Started: [ node2 ]
+ * Stopped: [ node1 ]
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/utilization-order3.summary b/cts/scheduler/summary/utilization-order3.summary
new file mode 100644
index 0000000..b192e2a
--- /dev/null
+++ b/cts/scheduler/summary/utilization-order3.summary
@@ -0,0 +1,28 @@
+Current cluster status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc1 (ocf:pacemaker:Dummy): Started node1
+
+Transition Summary:
+ * Start rsc2 ( node1 )
+ * Migrate rsc1 ( node1 -> node2 )
+
+Executing Cluster Transition:
+ * Pseudo action: load_stopped_node2
+ * Resource action: rsc1 migrate_to on node1
+ * Resource action: rsc1 migrate_from on node2
+ * Resource action: rsc1 stop on node1
+ * Pseudo action: load_stopped_node1
+ * Resource action: rsc2 start on node1
+ * Pseudo action: rsc1_start_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ node1 node2 ]
+
+ * Full List of Resources:
+ * rsc2 (ocf:pacemaker:Dummy): Started node1
+ * rsc1 (ocf:pacemaker:Dummy): Started node2
diff --git a/cts/scheduler/summary/utilization-order4.summary b/cts/scheduler/summary/utilization-order4.summary
new file mode 100644
index 0000000..a3f8aa0
--- /dev/null
+++ b/cts/scheduler/summary/utilization-order4.summary
@@ -0,0 +1,63 @@
+2 of 13 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Node deglxen002: standby (with active resources)
+ * Online: [ deglxen001 ]
+
+ * Full List of Resources:
+ * degllx62-vm (ocf:heartbeat:Xen): Started deglxen002
+ * degllx63-vm (ocf:heartbeat:Xen): Stopped (disabled)
+ * degllx61-vm (ocf:heartbeat:Xen): Started deglxen001
+ * degllx64-vm (ocf:heartbeat:Xen): Stopped (disabled)
+ * stonith_sbd (stonith:external/sbd): Started deglxen001
+ * Clone Set: clone-nfs [grp-nfs]:
+ * Started: [ deglxen001 deglxen002 ]
+ * Clone Set: clone-ping [prim-ping]:
+ * Started: [ deglxen001 deglxen002 ]
+
+Transition Summary:
+ * Migrate degllx62-vm ( deglxen002 -> deglxen001 )
+ * Stop degllx61-vm ( deglxen001 ) due to node availability
+ * Stop nfs-xen_config:1 ( deglxen002 ) due to node availability
+ * Stop nfs-xen_swapfiles:1 ( deglxen002 ) due to node availability
+ * Stop nfs-xen_images:1 ( deglxen002 ) due to node availability
+ * Stop prim-ping:1 ( deglxen002 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: degllx61-vm stop on deglxen001
+ * Pseudo action: load_stopped_deglxen001
+ * Resource action: degllx62-vm migrate_to on deglxen002
+ * Resource action: degllx62-vm migrate_from on deglxen001
+ * Resource action: degllx62-vm stop on deglxen002
+ * Pseudo action: clone-nfs_stop_0
+ * Pseudo action: load_stopped_deglxen002
+ * Pseudo action: degllx62-vm_start_0
+ * Pseudo action: grp-nfs:1_stop_0
+ * Resource action: nfs-xen_images:1 stop on deglxen002
+ * Resource action: degllx62-vm monitor=30000 on deglxen001
+ * Resource action: nfs-xen_swapfiles:1 stop on deglxen002
+ * Resource action: nfs-xen_config:1 stop on deglxen002
+ * Pseudo action: grp-nfs:1_stopped_0
+ * Pseudo action: clone-nfs_stopped_0
+ * Pseudo action: clone-ping_stop_0
+ * Resource action: prim-ping:0 stop on deglxen002
+ * Pseudo action: clone-ping_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Node deglxen002: standby
+ * Online: [ deglxen001 ]
+
+ * Full List of Resources:
+ * degllx62-vm (ocf:heartbeat:Xen): Started deglxen001
+ * degllx63-vm (ocf:heartbeat:Xen): Stopped (disabled)
+ * degllx61-vm (ocf:heartbeat:Xen): Stopped
+ * degllx64-vm (ocf:heartbeat:Xen): Stopped (disabled)
+ * stonith_sbd (stonith:external/sbd): Started deglxen001
+ * Clone Set: clone-nfs [grp-nfs]:
+ * Started: [ deglxen001 ]
+ * Stopped: [ deglxen002 ]
+ * Clone Set: clone-ping [prim-ping]:
+ * Started: [ deglxen001 ]
+ * Stopped: [ deglxen002 ]
diff --git a/cts/scheduler/summary/utilization-shuffle.summary b/cts/scheduler/summary/utilization-shuffle.summary
new file mode 100644
index 0000000..c350e94
--- /dev/null
+++ b/cts/scheduler/summary/utilization-shuffle.summary
@@ -0,0 +1,94 @@
+Current cluster status:
+ * Node List:
+ * Online: [ act1 act2 act3 sby1 sby2 ]
+
+ * Full List of Resources:
+ * Resource Group: grpPostgreSQLDB1:
+ * prmExPostgreSQLDB1 (ocf:pacemaker:Dummy): Stopped
+ * prmFsPostgreSQLDB1-1 (ocf:pacemaker:Dummy): Stopped
+ * prmFsPostgreSQLDB1-2 (ocf:pacemaker:Dummy): Stopped
+ * prmFsPostgreSQLDB1-3 (ocf:pacemaker:Dummy): Stopped
+ * prmIpPostgreSQLDB1 (ocf:pacemaker:Dummy): Stopped
+ * prmApPostgreSQLDB1 (ocf:pacemaker:Dummy): Stopped
+ * Resource Group: grpPostgreSQLDB2:
+ * prmExPostgreSQLDB2 (ocf:pacemaker:Dummy): Started act2
+ * prmFsPostgreSQLDB2-1 (ocf:pacemaker:Dummy): Started act2
+ * prmFsPostgreSQLDB2-2 (ocf:pacemaker:Dummy): Started act2
+ * prmFsPostgreSQLDB2-3 (ocf:pacemaker:Dummy): Started act2
+ * prmIpPostgreSQLDB2 (ocf:pacemaker:Dummy): Started act2
+ * prmApPostgreSQLDB2 (ocf:pacemaker:Dummy): Started act2
+ * Resource Group: grpPostgreSQLDB3:
+ * prmExPostgreSQLDB3 (ocf:pacemaker:Dummy): Started act1
+ * prmFsPostgreSQLDB3-1 (ocf:pacemaker:Dummy): Started act1
+ * prmFsPostgreSQLDB3-2 (ocf:pacemaker:Dummy): Started act1
+ * prmFsPostgreSQLDB3-3 (ocf:pacemaker:Dummy): Started act1
+ * prmIpPostgreSQLDB3 (ocf:pacemaker:Dummy): Started act1
+ * prmApPostgreSQLDB3 (ocf:pacemaker:Dummy): Started act1
+ * Clone Set: clnPingd [prmPingd]:
+ * Started: [ act1 act2 act3 sby1 sby2 ]
+ * Clone Set: clnDiskd1 [prmDiskd1]:
+ * Started: [ act1 act2 act3 sby1 sby2 ]
+ * Clone Set: clnDiskd2 [prmDiskd2]:
+ * Started: [ act1 act2 act3 sby1 sby2 ]
+
+Transition Summary:
+ * Start prmExPostgreSQLDB1 ( act3 )
+ * Start prmFsPostgreSQLDB1-1 ( act3 )
+ * Start prmFsPostgreSQLDB1-2 ( act3 )
+ * Start prmFsPostgreSQLDB1-3 ( act3 )
+ * Start prmIpPostgreSQLDB1 ( act3 )
+ * Start prmApPostgreSQLDB1 ( act3 )
+
+Executing Cluster Transition:
+ * Pseudo action: grpPostgreSQLDB1_start_0
+ * Pseudo action: load_stopped_sby2
+ * Pseudo action: load_stopped_sby1
+ * Pseudo action: load_stopped_act3
+ * Pseudo action: load_stopped_act2
+ * Pseudo action: load_stopped_act1
+ * Resource action: prmExPostgreSQLDB1 start on act3
+ * Resource action: prmFsPostgreSQLDB1-1 start on act3
+ * Resource action: prmFsPostgreSQLDB1-2 start on act3
+ * Resource action: prmFsPostgreSQLDB1-3 start on act3
+ * Resource action: prmIpPostgreSQLDB1 start on act3
+ * Resource action: prmApPostgreSQLDB1 start on act3
+ * Pseudo action: grpPostgreSQLDB1_running_0
+ * Resource action: prmExPostgreSQLDB1 monitor=5000 on act3
+ * Resource action: prmFsPostgreSQLDB1-1 monitor=5000 on act3
+ * Resource action: prmFsPostgreSQLDB1-2 monitor=5000 on act3
+ * Resource action: prmFsPostgreSQLDB1-3 monitor=5000 on act3
+ * Resource action: prmIpPostgreSQLDB1 monitor=5000 on act3
+ * Resource action: prmApPostgreSQLDB1 monitor=5000 on act3
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ act1 act2 act3 sby1 sby2 ]
+
+ * Full List of Resources:
+ * Resource Group: grpPostgreSQLDB1:
+ * prmExPostgreSQLDB1 (ocf:pacemaker:Dummy): Started act3
+ * prmFsPostgreSQLDB1-1 (ocf:pacemaker:Dummy): Started act3
+ * prmFsPostgreSQLDB1-2 (ocf:pacemaker:Dummy): Started act3
+ * prmFsPostgreSQLDB1-3 (ocf:pacemaker:Dummy): Started act3
+ * prmIpPostgreSQLDB1 (ocf:pacemaker:Dummy): Started act3
+ * prmApPostgreSQLDB1 (ocf:pacemaker:Dummy): Started act3
+ * Resource Group: grpPostgreSQLDB2:
+ * prmExPostgreSQLDB2 (ocf:pacemaker:Dummy): Started act2
+ * prmFsPostgreSQLDB2-1 (ocf:pacemaker:Dummy): Started act2
+ * prmFsPostgreSQLDB2-2 (ocf:pacemaker:Dummy): Started act2
+ * prmFsPostgreSQLDB2-3 (ocf:pacemaker:Dummy): Started act2
+ * prmIpPostgreSQLDB2 (ocf:pacemaker:Dummy): Started act2
+ * prmApPostgreSQLDB2 (ocf:pacemaker:Dummy): Started act2
+ * Resource Group: grpPostgreSQLDB3:
+ * prmExPostgreSQLDB3 (ocf:pacemaker:Dummy): Started act1
+ * prmFsPostgreSQLDB3-1 (ocf:pacemaker:Dummy): Started act1
+ * prmFsPostgreSQLDB3-2 (ocf:pacemaker:Dummy): Started act1
+ * prmFsPostgreSQLDB3-3 (ocf:pacemaker:Dummy): Started act1
+ * prmIpPostgreSQLDB3 (ocf:pacemaker:Dummy): Started act1
+ * prmApPostgreSQLDB3 (ocf:pacemaker:Dummy): Started act1
+ * Clone Set: clnPingd [prmPingd]:
+ * Started: [ act1 act2 act3 sby1 sby2 ]
+ * Clone Set: clnDiskd1 [prmDiskd1]:
+ * Started: [ act1 act2 act3 sby1 sby2 ]
+ * Clone Set: clnDiskd2 [prmDiskd2]:
+ * Started: [ act1 act2 act3 sby1 sby2 ]
diff --git a/cts/scheduler/summary/utilization.summary b/cts/scheduler/summary/utilization.summary
new file mode 100644
index 0000000..8a72fde
--- /dev/null
+++ b/cts/scheduler/summary/utilization.summary
@@ -0,0 +1,27 @@
+Current cluster status:
+ * Node List:
+ * Online: [ host1 host2 ]
+
+ * Full List of Resources:
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start rsc2 ( host2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc2 monitor on host2
+ * Resource action: rsc2 monitor on host1
+ * Resource action: rsc1 monitor on host2
+ * Resource action: rsc1 monitor on host1
+ * Pseudo action: load_stopped_host2
+ * Pseudo action: load_stopped_host1
+ * Resource action: rsc2 start on host2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ host1 host2 ]
+
+ * Full List of Resources:
+ * rsc2 (ocf:pacemaker:Dummy): Started host2
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
diff --git a/cts/scheduler/summary/value-source.summary b/cts/scheduler/summary/value-source.summary
new file mode 100644
index 0000000..7f033ca
--- /dev/null
+++ b/cts/scheduler/summary/value-source.summary
@@ -0,0 +1,62 @@
+Using the original execution date of: 2020-11-12 21:28:08Z
+Current cluster status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Stopped
+ * rsc1 (ocf:pacemaker:Dummy): Stopped
+ * rsc2 (ocf:pacemaker:Dummy): Stopped
+ * invert-match (ocf:pacemaker:Dummy): Stopped
+ * single-rsc (ocf:pacemaker:Dummy): Stopped
+ * set-rsc1 (ocf:pacemaker:Dummy): Stopped
+ * set-rsc2 (ocf:pacemaker:Dummy): Stopped
+ * meta-rsc (ocf:pacemaker:Dummy): Stopped
+ * insane-rsc (ocf:pacemaker:Dummy): Stopped
+
+Transition Summary:
+ * Start Fencing ( rhel7-1 )
+ * Start rsc1 ( rhel7-4 )
+ * Start rsc2 ( rhel7-5 )
+ * Start invert-match ( rhel7-1 )
+ * Start single-rsc ( rhel7-2 )
+ * Start set-rsc1 ( rhel7-3 )
+ * Start set-rsc2 ( rhel7-4 )
+ * Start meta-rsc ( rhel7-5 )
+ * Start insane-rsc ( rhel7-4 )
+
+Executing Cluster Transition:
+ * Resource action: Fencing start on rhel7-1
+ * Resource action: rsc1 start on rhel7-4
+ * Resource action: rsc2 start on rhel7-5
+ * Resource action: invert-match start on rhel7-1
+ * Resource action: single-rsc start on rhel7-2
+ * Resource action: set-rsc1 start on rhel7-3
+ * Resource action: set-rsc2 start on rhel7-4
+ * Resource action: meta-rsc start on rhel7-5
+ * Resource action: insane-rsc start on rhel7-4
+ * Resource action: Fencing monitor=120000 on rhel7-1
+ * Resource action: rsc1 monitor=10000 on rhel7-4
+ * Resource action: rsc2 monitor=10000 on rhel7-5
+ * Resource action: invert-match monitor=10000 on rhel7-1
+ * Resource action: single-rsc monitor=10000 on rhel7-2
+ * Resource action: set-rsc1 monitor=10000 on rhel7-3
+ * Resource action: set-rsc2 monitor=10000 on rhel7-4
+ * Resource action: meta-rsc monitor=10000 on rhel7-5
+ * Resource action: insane-rsc monitor=10000 on rhel7-4
+Using the original execution date of: 2020-11-12 21:28:08Z
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-1
+ * rsc1 (ocf:pacemaker:Dummy): Started rhel7-4
+ * rsc2 (ocf:pacemaker:Dummy): Started rhel7-5
+ * invert-match (ocf:pacemaker:Dummy): Started rhel7-1
+ * single-rsc (ocf:pacemaker:Dummy): Started rhel7-2
+ * set-rsc1 (ocf:pacemaker:Dummy): Started rhel7-3
+ * set-rsc2 (ocf:pacemaker:Dummy): Started rhel7-4
+ * meta-rsc (ocf:pacemaker:Dummy): Started rhel7-5
+ * insane-rsc (ocf:pacemaker:Dummy): Started rhel7-4
diff --git a/cts/scheduler/summary/whitebox-asymmetric.summary b/cts/scheduler/summary/whitebox-asymmetric.summary
new file mode 100644
index 0000000..5391139
--- /dev/null
+++ b/cts/scheduler/summary/whitebox-asymmetric.summary
@@ -0,0 +1,42 @@
+1 of 7 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ 18builder ]
+
+ * Full List of Resources:
+ * fence_false (stonith:fence_false): Stopped
+ * container2 (ocf:pacemaker:Dummy): Started 18builder
+ * webserver (ocf:pacemaker:Dummy): Stopped
+ * nfs_mount (ocf:pacemaker:Dummy): Stopped
+ * Resource Group: mygroup:
+ * vg_tags (ocf:heartbeat:LVM): Stopped (disabled)
+ * vg_tags_dup (ocf:heartbeat:LVM): Stopped
+
+Transition Summary:
+ * Start nfs_mount ( 18node2 )
+ * Start 18node2 ( 18builder )
+
+Executing Cluster Transition:
+ * Resource action: 18node2 start on 18builder
+ * Resource action: webserver monitor on 18node2
+ * Resource action: nfs_mount monitor on 18node2
+ * Resource action: vg_tags monitor on 18node2
+ * Resource action: vg_tags_dup monitor on 18node2
+ * Resource action: 18node2 monitor=30000 on 18builder
+ * Resource action: nfs_mount start on 18node2
+ * Resource action: nfs_mount monitor=10000 on 18node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18builder ]
+ * GuestOnline: [ 18node2 ]
+
+ * Full List of Resources:
+ * fence_false (stonith:fence_false): Stopped
+ * container2 (ocf:pacemaker:Dummy): Started 18builder
+ * webserver (ocf:pacemaker:Dummy): Stopped
+ * nfs_mount (ocf:pacemaker:Dummy): Started 18node2
+ * Resource Group: mygroup:
+ * vg_tags (ocf:heartbeat:LVM): Stopped (disabled)
+ * vg_tags_dup (ocf:heartbeat:LVM): Stopped
diff --git a/cts/scheduler/summary/whitebox-fail1.summary b/cts/scheduler/summary/whitebox-fail1.summary
new file mode 100644
index 0000000..974f124
--- /dev/null
+++ b/cts/scheduler/summary/whitebox-fail1.summary
@@ -0,0 +1,59 @@
+Current cluster status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+ * GuestOnline: [ lxc2 ]
+
+ * Full List of Resources:
+ * container1 (ocf:heartbeat:VirtualDomain): FAILED 18node2
+ * container2 (ocf:heartbeat:VirtualDomain): Started 18node2
+ * shoot1 (stonith:fence_xvm): Started 18node3
+ * Clone Set: M-clone [M]:
+ * Started: [ 18node1 18node2 18node3 lxc2 ]
+ * A (ocf:pacemaker:Dummy): Started 18node1
+ * B (ocf:pacemaker:Dummy): FAILED lxc1
+ * C (ocf:pacemaker:Dummy): Started lxc2
+ * D (ocf:pacemaker:Dummy): Started 18node1
+
+Transition Summary:
+ * Fence (reboot) lxc1 (resource: container1) 'guest is unclean'
+ * Recover container1 ( 18node2 )
+ * Recover M:4 ( lxc1 )
+ * Recover B ( lxc1 )
+ * Restart lxc1 ( 18node2 ) due to required container1 start
+
+Executing Cluster Transition:
+ * Resource action: A monitor on lxc2
+ * Resource action: B monitor on lxc2
+ * Resource action: D monitor on lxc2
+ * Resource action: lxc1 stop on 18node2
+ * Resource action: container1 stop on 18node2
+ * Pseudo action: stonith-lxc1-reboot on lxc1
+ * Resource action: container1 start on 18node2
+ * Pseudo action: M-clone_stop_0
+ * Pseudo action: B_stop_0
+ * Resource action: lxc1 start on 18node2
+ * Resource action: lxc1 monitor=30000 on 18node2
+ * Pseudo action: M_stop_0
+ * Pseudo action: M-clone_stopped_0
+ * Pseudo action: M-clone_start_0
+ * Resource action: B start on lxc1
+ * Resource action: M start on lxc1
+ * Pseudo action: M-clone_running_0
+ * Resource action: B monitor=10000 on lxc1
+ * Resource action: M monitor=10000 on lxc1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+ * GuestOnline: [ lxc1 lxc2 ]
+
+ * Full List of Resources:
+ * container1 (ocf:heartbeat:VirtualDomain): Started 18node2
+ * container2 (ocf:heartbeat:VirtualDomain): Started 18node2
+ * shoot1 (stonith:fence_xvm): Started 18node3
+ * Clone Set: M-clone [M]:
+ * Started: [ 18node1 18node2 18node3 lxc1 lxc2 ]
+ * A (ocf:pacemaker:Dummy): Started 18node1
+ * B (ocf:pacemaker:Dummy): Started lxc1
+ * C (ocf:pacemaker:Dummy): Started lxc2
+ * D (ocf:pacemaker:Dummy): Started 18node1
diff --git a/cts/scheduler/summary/whitebox-fail2.summary b/cts/scheduler/summary/whitebox-fail2.summary
new file mode 100644
index 0000000..73b44f5
--- /dev/null
+++ b/cts/scheduler/summary/whitebox-fail2.summary
@@ -0,0 +1,59 @@
+Current cluster status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+ * GuestOnline: [ lxc2 ]
+
+ * Full List of Resources:
+ * container1 (ocf:heartbeat:VirtualDomain): FAILED 18node2
+ * container2 (ocf:heartbeat:VirtualDomain): Started 18node2
+ * shoot1 (stonith:fence_xvm): Started 18node3
+ * Clone Set: M-clone [M]:
+ * Started: [ 18node1 18node2 18node3 lxc2 ]
+ * A (ocf:pacemaker:Dummy): Started 18node1
+ * B (ocf:pacemaker:Dummy): FAILED lxc1
+ * C (ocf:pacemaker:Dummy): Started lxc2
+ * D (ocf:pacemaker:Dummy): Started 18node1
+
+Transition Summary:
+ * Fence (reboot) lxc1 (resource: container1) 'guest is unclean'
+ * Recover container1 ( 18node2 )
+ * Recover M:4 ( lxc1 )
+ * Recover B ( lxc1 )
+ * Recover lxc1 ( 18node2 )
+
+Executing Cluster Transition:
+ * Resource action: A monitor on lxc2
+ * Resource action: B monitor on lxc2
+ * Resource action: D monitor on lxc2
+ * Resource action: lxc1 stop on 18node2
+ * Resource action: container1 stop on 18node2
+ * Pseudo action: stonith-lxc1-reboot on lxc1
+ * Resource action: container1 start on 18node2
+ * Pseudo action: M-clone_stop_0
+ * Pseudo action: B_stop_0
+ * Resource action: lxc1 start on 18node2
+ * Resource action: lxc1 monitor=30000 on 18node2
+ * Pseudo action: M_stop_0
+ * Pseudo action: M-clone_stopped_0
+ * Pseudo action: M-clone_start_0
+ * Resource action: B start on lxc1
+ * Resource action: M start on lxc1
+ * Pseudo action: M-clone_running_0
+ * Resource action: B monitor=10000 on lxc1
+ * Resource action: M monitor=10000 on lxc1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+ * GuestOnline: [ lxc1 lxc2 ]
+
+ * Full List of Resources:
+ * container1 (ocf:heartbeat:VirtualDomain): Started 18node2
+ * container2 (ocf:heartbeat:VirtualDomain): Started 18node2
+ * shoot1 (stonith:fence_xvm): Started 18node3
+ * Clone Set: M-clone [M]:
+ * Started: [ 18node1 18node2 18node3 lxc1 lxc2 ]
+ * A (ocf:pacemaker:Dummy): Started 18node1
+ * B (ocf:pacemaker:Dummy): Started lxc1
+ * C (ocf:pacemaker:Dummy): Started lxc2
+ * D (ocf:pacemaker:Dummy): Started 18node1
diff --git a/cts/scheduler/summary/whitebox-fail3.summary b/cts/scheduler/summary/whitebox-fail3.summary
new file mode 100644
index 0000000..b7de4a7
--- /dev/null
+++ b/cts/scheduler/summary/whitebox-fail3.summary
@@ -0,0 +1,55 @@
+Current cluster status:
+ * Node List:
+ * Online: [ dvossel-laptop2 ]
+
+ * Full List of Resources:
+ * vm (ocf:heartbeat:VirtualDomain): Stopped
+ * vm2 (ocf:heartbeat:VirtualDomain): Stopped
+ * FAKE (ocf:pacemaker:Dummy): Started dvossel-laptop2
+ * Clone Set: W-master [W] (promotable):
+ * Promoted: [ dvossel-laptop2 ]
+ * Stopped: [ 18builder 18node1 ]
+ * Clone Set: X-master [X] (promotable):
+ * Promoted: [ dvossel-laptop2 ]
+ * Stopped: [ 18builder 18node1 ]
+
+Transition Summary:
+ * Start vm ( dvossel-laptop2 )
+ * Move FAKE ( dvossel-laptop2 -> 18builder )
+ * Start W:1 ( 18builder )
+ * Start X:1 ( 18builder )
+ * Start 18builder ( dvossel-laptop2 )
+
+Executing Cluster Transition:
+ * Resource action: vm start on dvossel-laptop2
+ * Pseudo action: W-master_start_0
+ * Pseudo action: X-master_start_0
+ * Resource action: 18builder monitor on dvossel-laptop2
+ * Resource action: 18builder start on dvossel-laptop2
+ * Resource action: FAKE stop on dvossel-laptop2
+ * Resource action: W start on 18builder
+ * Pseudo action: W-master_running_0
+ * Resource action: X start on 18builder
+ * Pseudo action: X-master_running_0
+ * Resource action: 18builder monitor=30000 on dvossel-laptop2
+ * Resource action: FAKE start on 18builder
+ * Resource action: W monitor=10000 on 18builder
+ * Resource action: X monitor=10000 on 18builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ dvossel-laptop2 ]
+ * GuestOnline: [ 18builder ]
+
+ * Full List of Resources:
+ * vm (ocf:heartbeat:VirtualDomain): Started dvossel-laptop2
+ * vm2 (ocf:heartbeat:VirtualDomain): Stopped
+ * FAKE (ocf:pacemaker:Dummy): Started 18builder
+ * Clone Set: W-master [W] (promotable):
+ * Promoted: [ dvossel-laptop2 ]
+ * Unpromoted: [ 18builder ]
+ * Stopped: [ 18node1 ]
+ * Clone Set: X-master [X] (promotable):
+ * Promoted: [ dvossel-laptop2 ]
+ * Unpromoted: [ 18builder ]
+ * Stopped: [ 18node1 ]
diff --git a/cts/scheduler/summary/whitebox-imply-stop-on-fence.summary b/cts/scheduler/summary/whitebox-imply-stop-on-fence.summary
new file mode 100644
index 0000000..78506c5
--- /dev/null
+++ b/cts/scheduler/summary/whitebox-imply-stop-on-fence.summary
@@ -0,0 +1,104 @@
+Current cluster status:
+ * Node List:
+ * Node kiff-01: UNCLEAN (offline)
+ * Online: [ kiff-02 ]
+ * GuestOnline: [ lxc-01_kiff-02 lxc-02_kiff-02 ]
+
+ * Full List of Resources:
+ * fence-kiff-01 (stonith:fence_ipmilan): Started kiff-02
+ * fence-kiff-02 (stonith:fence_ipmilan): Started kiff-01 (UNCLEAN)
+ * Clone Set: dlm-clone [dlm]:
+ * dlm (ocf:pacemaker:controld): Started kiff-01 (UNCLEAN)
+ * Started: [ kiff-02 ]
+ * Stopped: [ lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ]
+ * Clone Set: clvmd-clone [clvmd]:
+ * clvmd (ocf:heartbeat:clvm): Started kiff-01 (UNCLEAN)
+ * Started: [ kiff-02 ]
+ * Stopped: [ lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ]
+ * Clone Set: shared0-clone [shared0]:
+ * shared0 (ocf:heartbeat:Filesystem): Started kiff-01 (UNCLEAN)
+ * Started: [ kiff-02 ]
+ * Stopped: [ lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ]
+ * R-lxc-01_kiff-01 (ocf:heartbeat:VirtualDomain): FAILED kiff-01 (UNCLEAN)
+ * R-lxc-02_kiff-01 (ocf:heartbeat:VirtualDomain): Started kiff-01 (UNCLEAN)
+ * R-lxc-01_kiff-02 (ocf:heartbeat:VirtualDomain): Started kiff-02
+ * R-lxc-02_kiff-02 (ocf:heartbeat:VirtualDomain): Started kiff-02
+ * vm-fs (ocf:heartbeat:Filesystem): FAILED lxc-01_kiff-01
+
+Transition Summary:
+ * Fence (reboot) lxc-02_kiff-01 (resource: R-lxc-02_kiff-01) 'guest is unclean'
+ * Fence (reboot) lxc-01_kiff-01 (resource: R-lxc-01_kiff-01) 'guest is unclean'
+ * Fence (reboot) kiff-01 'peer is no longer part of the cluster'
+ * Move fence-kiff-02 ( kiff-01 -> kiff-02 )
+ * Stop dlm:0 ( kiff-01 ) due to node availability
+ * Stop clvmd:0 ( kiff-01 ) due to node availability
+ * Stop shared0:0 ( kiff-01 ) due to node availability
+ * Recover R-lxc-01_kiff-01 ( kiff-01 -> kiff-02 )
+ * Move R-lxc-02_kiff-01 ( kiff-01 -> kiff-02 )
+ * Recover vm-fs ( lxc-01_kiff-01 )
+ * Move lxc-01_kiff-01 ( kiff-01 -> kiff-02 )
+ * Move lxc-02_kiff-01 ( kiff-01 -> kiff-02 )
+
+Executing Cluster Transition:
+ * Pseudo action: fence-kiff-02_stop_0
+ * Resource action: dlm monitor on lxc-02_kiff-02
+ * Resource action: dlm monitor on lxc-01_kiff-02
+ * Resource action: clvmd monitor on lxc-02_kiff-02
+ * Resource action: clvmd monitor on lxc-01_kiff-02
+ * Resource action: shared0 monitor on lxc-02_kiff-02
+ * Resource action: shared0 monitor on lxc-01_kiff-02
+ * Resource action: vm-fs monitor on lxc-02_kiff-02
+ * Resource action: vm-fs monitor on lxc-01_kiff-02
+ * Pseudo action: lxc-01_kiff-01_stop_0
+ * Pseudo action: lxc-02_kiff-01_stop_0
+ * Fencing kiff-01 (reboot)
+ * Pseudo action: R-lxc-01_kiff-01_stop_0
+ * Pseudo action: R-lxc-02_kiff-01_stop_0
+ * Pseudo action: stonith-lxc-02_kiff-01-reboot on lxc-02_kiff-01
+ * Pseudo action: stonith-lxc-01_kiff-01-reboot on lxc-01_kiff-01
+ * Resource action: fence-kiff-02 start on kiff-02
+ * Pseudo action: shared0-clone_stop_0
+ * Resource action: R-lxc-01_kiff-01 start on kiff-02
+ * Resource action: R-lxc-02_kiff-01 start on kiff-02
+ * Pseudo action: vm-fs_stop_0
+ * Resource action: lxc-01_kiff-01 start on kiff-02
+ * Resource action: lxc-02_kiff-01 start on kiff-02
+ * Resource action: fence-kiff-02 monitor=60000 on kiff-02
+ * Pseudo action: shared0_stop_0
+ * Pseudo action: shared0-clone_stopped_0
+ * Resource action: R-lxc-01_kiff-01 monitor=10000 on kiff-02
+ * Resource action: R-lxc-02_kiff-01 monitor=10000 on kiff-02
+ * Resource action: vm-fs start on lxc-01_kiff-01
+ * Resource action: lxc-01_kiff-01 monitor=30000 on kiff-02
+ * Resource action: lxc-02_kiff-01 monitor=30000 on kiff-02
+ * Pseudo action: clvmd-clone_stop_0
+ * Resource action: vm-fs monitor=20000 on lxc-01_kiff-01
+ * Pseudo action: clvmd_stop_0
+ * Pseudo action: clvmd-clone_stopped_0
+ * Pseudo action: dlm-clone_stop_0
+ * Pseudo action: dlm_stop_0
+ * Pseudo action: dlm-clone_stopped_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ kiff-02 ]
+ * OFFLINE: [ kiff-01 ]
+ * GuestOnline: [ lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ]
+
+ * Full List of Resources:
+ * fence-kiff-01 (stonith:fence_ipmilan): Started kiff-02
+ * fence-kiff-02 (stonith:fence_ipmilan): Started kiff-02
+ * Clone Set: dlm-clone [dlm]:
+ * Started: [ kiff-02 ]
+ * Stopped: [ kiff-01 lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ]
+ * Clone Set: clvmd-clone [clvmd]:
+ * Started: [ kiff-02 ]
+ * Stopped: [ kiff-01 lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ]
+ * Clone Set: shared0-clone [shared0]:
+ * Started: [ kiff-02 ]
+ * Stopped: [ kiff-01 lxc-01_kiff-01 lxc-01_kiff-02 lxc-02_kiff-01 lxc-02_kiff-02 ]
+ * R-lxc-01_kiff-01 (ocf:heartbeat:VirtualDomain): Started kiff-02
+ * R-lxc-02_kiff-01 (ocf:heartbeat:VirtualDomain): Started kiff-02
+ * R-lxc-01_kiff-02 (ocf:heartbeat:VirtualDomain): Started kiff-02
+ * R-lxc-02_kiff-02 (ocf:heartbeat:VirtualDomain): Started kiff-02
+ * vm-fs (ocf:heartbeat:Filesystem): Started lxc-01_kiff-01
diff --git a/cts/scheduler/summary/whitebox-migrate1.summary b/cts/scheduler/summary/whitebox-migrate1.summary
new file mode 100644
index 0000000..f864548
--- /dev/null
+++ b/cts/scheduler/summary/whitebox-migrate1.summary
@@ -0,0 +1,56 @@
+Current cluster status:
+ * Node List:
+ * Online: [ rhel7-node2 rhel7-node3 ]
+ * GuestOnline: [ rhel7-node1 ]
+
+ * Full List of Resources:
+ * shooter1 (stonith:fence_xvm): Started rhel7-node3
+ * FAKE1 (ocf:heartbeat:Dummy): Started rhel7-node1
+ * FAKE2 (ocf:heartbeat:Dummy): Started rhel7-node1
+ * FAKE3 (ocf:heartbeat:Dummy): Started rhel7-node3
+ * FAKE4 (ocf:heartbeat:Dummy): Started rhel7-node3
+ * FAKE5 (ocf:heartbeat:Dummy): Started rhel7-node2
+ * FAKE6 (ocf:heartbeat:Dummy): Started rhel7-node1
+ * FAKE7 (ocf:heartbeat:Dummy): Started rhel7-node3
+ * remote-rsc (ocf:heartbeat:Dummy): Started rhel7-node2
+
+Transition Summary:
+ * Move shooter1 ( rhel7-node3 -> rhel7-node2 )
+ * Move FAKE3 ( rhel7-node3 -> rhel7-node2 )
+ * Migrate remote-rsc ( rhel7-node2 -> rhel7-node3 )
+ * Migrate rhel7-node1 ( rhel7-node2 -> rhel7-node3 )
+
+Executing Cluster Transition:
+ * Resource action: shooter1 stop on rhel7-node3
+ * Resource action: FAKE3 stop on rhel7-node3
+ * Resource action: rhel7-node1 monitor on rhel7-node3
+ * Resource action: shooter1 start on rhel7-node2
+ * Resource action: FAKE3 start on rhel7-node2
+ * Resource action: remote-rsc migrate_to on rhel7-node2
+ * Resource action: shooter1 monitor=60000 on rhel7-node2
+ * Resource action: FAKE3 monitor=10000 on rhel7-node2
+ * Resource action: remote-rsc migrate_from on rhel7-node3
+ * Resource action: rhel7-node1 migrate_to on rhel7-node2
+ * Resource action: rhel7-node1 migrate_from on rhel7-node3
+ * Resource action: rhel7-node1 stop on rhel7-node2
+ * Resource action: remote-rsc stop on rhel7-node2
+ * Pseudo action: remote-rsc_start_0
+ * Pseudo action: rhel7-node1_start_0
+ * Resource action: remote-rsc monitor=10000 on rhel7-node3
+ * Resource action: rhel7-node1 monitor=30000 on rhel7-node3
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-node2 rhel7-node3 ]
+ * GuestOnline: [ rhel7-node1 ]
+
+ * Full List of Resources:
+ * shooter1 (stonith:fence_xvm): Started rhel7-node2
+ * FAKE1 (ocf:heartbeat:Dummy): Started rhel7-node1
+ * FAKE2 (ocf:heartbeat:Dummy): Started rhel7-node1
+ * FAKE3 (ocf:heartbeat:Dummy): Started rhel7-node2
+ * FAKE4 (ocf:heartbeat:Dummy): Started rhel7-node3
+ * FAKE5 (ocf:heartbeat:Dummy): Started rhel7-node2
+ * FAKE6 (ocf:heartbeat:Dummy): Started rhel7-node1
+ * FAKE7 (ocf:heartbeat:Dummy): Started rhel7-node3
+ * remote-rsc (ocf:heartbeat:Dummy): Started rhel7-node3
diff --git a/cts/scheduler/summary/whitebox-move.summary b/cts/scheduler/summary/whitebox-move.summary
new file mode 100644
index 0000000..88846e2
--- /dev/null
+++ b/cts/scheduler/summary/whitebox-move.summary
@@ -0,0 +1,49 @@
+Current cluster status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+ * GuestOnline: [ lxc1 lxc2 ]
+
+ * Full List of Resources:
+ * container1 (ocf:heartbeat:VirtualDomain): Started 18node1
+ * container2 (ocf:heartbeat:VirtualDomain): Started 18node2
+ * shoot1 (stonith:fence_xvm): Started 18node3
+ * Clone Set: M-clone [M]:
+ * Started: [ 18node1 18node2 18node3 lxc1 lxc2 ]
+ * A (ocf:pacemaker:Dummy): Started lxc1
+
+Transition Summary:
+ * Move container1 ( 18node1 -> 18node2 )
+ * Restart M:3 ( lxc1 ) due to required container1 start
+ * Restart A ( lxc1 ) due to required container1 start
+ * Move lxc1 ( 18node1 -> 18node2 )
+
+Executing Cluster Transition:
+ * Pseudo action: M-clone_stop_0
+ * Resource action: A stop on lxc1
+ * Resource action: A monitor on lxc2
+ * Resource action: M stop on lxc1
+ * Pseudo action: M-clone_stopped_0
+ * Pseudo action: M-clone_start_0
+ * Resource action: lxc1 stop on 18node1
+ * Resource action: container1 stop on 18node1
+ * Resource action: container1 start on 18node2
+ * Resource action: lxc1 start on 18node2
+ * Resource action: M start on lxc1
+ * Resource action: M monitor=10000 on lxc1
+ * Pseudo action: M-clone_running_0
+ * Resource action: A start on lxc1
+ * Resource action: A monitor=10000 on lxc1
+ * Resource action: lxc1 monitor=30000 on 18node2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+ * GuestOnline: [ lxc1 lxc2 ]
+
+ * Full List of Resources:
+ * container1 (ocf:heartbeat:VirtualDomain): Started 18node2
+ * container2 (ocf:heartbeat:VirtualDomain): Started 18node2
+ * shoot1 (stonith:fence_xvm): Started 18node3
+ * Clone Set: M-clone [M]:
+ * Started: [ 18node1 18node2 18node3 lxc1 lxc2 ]
+ * A (ocf:pacemaker:Dummy): Started lxc1
diff --git a/cts/scheduler/summary/whitebox-ms-ordering-move.summary b/cts/scheduler/summary/whitebox-ms-ordering-move.summary
new file mode 100644
index 0000000..0007698
--- /dev/null
+++ b/cts/scheduler/summary/whitebox-ms-ordering-move.summary
@@ -0,0 +1,107 @@
+Current cluster status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+ * GuestOnline: [ lxc1 lxc2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-3
+ * FencingPass (stonith:fence_dummy): Started rhel7-4
+ * FencingFail (stonith:fence_dummy): Started rhel7-5
+ * rsc_rhel7-1 (ocf:heartbeat:IPaddr2): Started rhel7-1
+ * rsc_rhel7-2 (ocf:heartbeat:IPaddr2): Started rhel7-2
+ * rsc_rhel7-3 (ocf:heartbeat:IPaddr2): Started rhel7-3
+ * rsc_rhel7-4 (ocf:heartbeat:IPaddr2): Started rhel7-4
+ * rsc_rhel7-5 (ocf:heartbeat:IPaddr2): Started rhel7-5
+ * migrator (ocf:pacemaker:Dummy): Started rhel7-4
+ * Clone Set: Connectivity [ping-1]:
+ * Started: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+ * Stopped: [ lxc1 lxc2 ]
+ * Clone Set: master-1 [stateful-1] (promotable):
+ * Promoted: [ rhel7-3 ]
+ * Unpromoted: [ rhel7-1 rhel7-2 rhel7-4 rhel7-5 ]
+ * Resource Group: group-1:
+ * r192.168.122.207 (ocf:heartbeat:IPaddr2): Started rhel7-3
+ * petulant (service:DummySD): Started rhel7-3
+ * r192.168.122.208 (ocf:heartbeat:IPaddr2): Started rhel7-3
+ * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started rhel7-3
+ * container1 (ocf:heartbeat:VirtualDomain): Started rhel7-1
+ * container2 (ocf:heartbeat:VirtualDomain): Started rhel7-1
+ * Clone Set: lxc-ms-master [lxc-ms] (promotable):
+ * Promoted: [ lxc1 ]
+ * Unpromoted: [ lxc2 ]
+
+Transition Summary:
+ * Move container1 ( rhel7-1 -> rhel7-2 )
+ * Restart lxc-ms:0 ( Promoted lxc1 ) due to required container1 start
+ * Move lxc1 ( rhel7-1 -> rhel7-2 )
+
+Executing Cluster Transition:
+ * Resource action: rsc_rhel7-1 monitor on lxc2
+ * Resource action: rsc_rhel7-2 monitor on lxc2
+ * Resource action: rsc_rhel7-3 monitor on lxc2
+ * Resource action: rsc_rhel7-4 monitor on lxc2
+ * Resource action: rsc_rhel7-5 monitor on lxc2
+ * Resource action: migrator monitor on lxc2
+ * Resource action: ping-1 monitor on lxc2
+ * Resource action: stateful-1 monitor on lxc2
+ * Resource action: r192.168.122.207 monitor on lxc2
+ * Resource action: petulant monitor on lxc2
+ * Resource action: r192.168.122.208 monitor on lxc2
+ * Resource action: lsb-dummy monitor on lxc2
+ * Pseudo action: lxc-ms-master_demote_0
+ * Resource action: lxc1 monitor on rhel7-5
+ * Resource action: lxc1 monitor on rhel7-4
+ * Resource action: lxc1 monitor on rhel7-3
+ * Resource action: lxc1 monitor on rhel7-2
+ * Resource action: lxc2 monitor on rhel7-5
+ * Resource action: lxc2 monitor on rhel7-4
+ * Resource action: lxc2 monitor on rhel7-3
+ * Resource action: lxc2 monitor on rhel7-2
+ * Resource action: lxc-ms demote on lxc1
+ * Pseudo action: lxc-ms-master_demoted_0
+ * Pseudo action: lxc-ms-master_stop_0
+ * Resource action: lxc-ms stop on lxc1
+ * Pseudo action: lxc-ms-master_stopped_0
+ * Pseudo action: lxc-ms-master_start_0
+ * Resource action: lxc1 stop on rhel7-1
+ * Resource action: container1 stop on rhel7-1
+ * Resource action: container1 start on rhel7-2
+ * Resource action: lxc1 start on rhel7-2
+ * Resource action: lxc-ms start on lxc1
+ * Pseudo action: lxc-ms-master_running_0
+ * Resource action: lxc1 monitor=30000 on rhel7-2
+ * Pseudo action: lxc-ms-master_promote_0
+ * Resource action: lxc-ms promote on lxc1
+ * Pseudo action: lxc-ms-master_promoted_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+ * GuestOnline: [ lxc1 lxc2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started rhel7-3
+ * FencingPass (stonith:fence_dummy): Started rhel7-4
+ * FencingFail (stonith:fence_dummy): Started rhel7-5
+ * rsc_rhel7-1 (ocf:heartbeat:IPaddr2): Started rhel7-1
+ * rsc_rhel7-2 (ocf:heartbeat:IPaddr2): Started rhel7-2
+ * rsc_rhel7-3 (ocf:heartbeat:IPaddr2): Started rhel7-3
+ * rsc_rhel7-4 (ocf:heartbeat:IPaddr2): Started rhel7-4
+ * rsc_rhel7-5 (ocf:heartbeat:IPaddr2): Started rhel7-5
+ * migrator (ocf:pacemaker:Dummy): Started rhel7-4
+ * Clone Set: Connectivity [ping-1]:
+ * Started: [ rhel7-1 rhel7-2 rhel7-3 rhel7-4 rhel7-5 ]
+ * Stopped: [ lxc1 lxc2 ]
+ * Clone Set: master-1 [stateful-1] (promotable):
+ * Promoted: [ rhel7-3 ]
+ * Unpromoted: [ rhel7-1 rhel7-2 rhel7-4 rhel7-5 ]
+ * Resource Group: group-1:
+ * r192.168.122.207 (ocf:heartbeat:IPaddr2): Started rhel7-3
+ * petulant (service:DummySD): Started rhel7-3
+ * r192.168.122.208 (ocf:heartbeat:IPaddr2): Started rhel7-3
+ * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started rhel7-3
+ * container1 (ocf:heartbeat:VirtualDomain): Started rhel7-2
+ * container2 (ocf:heartbeat:VirtualDomain): Started rhel7-1
+ * Clone Set: lxc-ms-master [lxc-ms] (promotable):
+ * Promoted: [ lxc1 ]
+ * Unpromoted: [ lxc2 ]
diff --git a/cts/scheduler/summary/whitebox-ms-ordering.summary b/cts/scheduler/summary/whitebox-ms-ordering.summary
new file mode 100644
index 0000000..06ac356
--- /dev/null
+++ b/cts/scheduler/summary/whitebox-ms-ordering.summary
@@ -0,0 +1,73 @@
+Current cluster status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started 18node2
+ * container1 (ocf:heartbeat:VirtualDomain): FAILED
+ * container2 (ocf:heartbeat:VirtualDomain): FAILED
+ * Clone Set: lxc-ms-master [lxc-ms] (promotable):
+ * Stopped: [ 18node1 18node2 18node3 ]
+
+Transition Summary:
+ * Fence (reboot) lxc2 (resource: container2) 'guest is unclean'
+ * Fence (reboot) lxc1 (resource: container1) 'guest is unclean'
+ * Start container1 ( 18node1 )
+ * Start container2 ( 18node1 )
+ * Recover lxc-ms:0 ( Promoted lxc1 )
+ * Recover lxc-ms:1 ( Unpromoted lxc2 )
+ * Start lxc1 ( 18node1 )
+ * Start lxc2 ( 18node1 )
+
+Executing Cluster Transition:
+ * Resource action: container1 monitor on 18node3
+ * Resource action: container1 monitor on 18node2
+ * Resource action: container1 monitor on 18node1
+ * Resource action: container2 monitor on 18node3
+ * Resource action: container2 monitor on 18node2
+ * Resource action: container2 monitor on 18node1
+ * Resource action: lxc-ms monitor on 18node3
+ * Resource action: lxc-ms monitor on 18node2
+ * Resource action: lxc-ms monitor on 18node1
+ * Pseudo action: lxc-ms-master_demote_0
+ * Resource action: lxc1 monitor on 18node3
+ * Resource action: lxc1 monitor on 18node2
+ * Resource action: lxc1 monitor on 18node1
+ * Resource action: lxc2 monitor on 18node3
+ * Resource action: lxc2 monitor on 18node2
+ * Resource action: lxc2 monitor on 18node1
+ * Pseudo action: stonith-lxc2-reboot on lxc2
+ * Pseudo action: stonith-lxc1-reboot on lxc1
+ * Resource action: container1 start on 18node1
+ * Resource action: container2 start on 18node1
+ * Pseudo action: lxc-ms_demote_0
+ * Pseudo action: lxc-ms-master_demoted_0
+ * Pseudo action: lxc-ms-master_stop_0
+ * Resource action: lxc1 start on 18node1
+ * Resource action: lxc2 start on 18node1
+ * Pseudo action: lxc-ms_stop_0
+ * Pseudo action: lxc-ms_stop_0
+ * Pseudo action: lxc-ms-master_stopped_0
+ * Pseudo action: lxc-ms-master_start_0
+ * Resource action: lxc1 monitor=30000 on 18node1
+ * Resource action: lxc2 monitor=30000 on 18node1
+ * Resource action: lxc-ms start on lxc1
+ * Resource action: lxc-ms start on lxc2
+ * Pseudo action: lxc-ms-master_running_0
+ * Resource action: lxc-ms monitor=10000 on lxc2
+ * Pseudo action: lxc-ms-master_promote_0
+ * Resource action: lxc-ms promote on lxc1
+ * Pseudo action: lxc-ms-master_promoted_0
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+ * GuestOnline: [ lxc1 lxc2 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_xvm): Started 18node2
+ * container1 (ocf:heartbeat:VirtualDomain): Started 18node1
+ * container2 (ocf:heartbeat:VirtualDomain): Started 18node1
+ * Clone Set: lxc-ms-master [lxc-ms] (promotable):
+ * Promoted: [ lxc1 ]
+ * Unpromoted: [ lxc2 ]
diff --git a/cts/scheduler/summary/whitebox-nested-group.summary b/cts/scheduler/summary/whitebox-nested-group.summary
new file mode 100644
index 0000000..d97c079
--- /dev/null
+++ b/cts/scheduler/summary/whitebox-nested-group.summary
@@ -0,0 +1,102 @@
+Current cluster status:
+ * Node List:
+ * Online: [ c7auto1 c7auto2 c7auto3 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto2
+ * fake1 (ocf:heartbeat:Dummy): Stopped
+ * fake2 (ocf:heartbeat:Dummy): Stopped
+ * fake3 (ocf:heartbeat:Dummy): Stopped
+ * fake4 (ocf:heartbeat:Dummy): Stopped
+ * fake5 (ocf:heartbeat:Dummy): Stopped
+ * Clone Set: fake_clone [fake]:
+ * Stopped: [ c7auto1 c7auto2 c7auto3 c7auto4 ]
+ * Resource Group: fake_group:
+ * fake_fs (ocf:heartbeat:Dummy): Stopped
+ * container (ocf:heartbeat:Dummy): Stopped
+
+Transition Summary:
+ * Start fake1 ( c7auto3 )
+ * Start fake2 ( c7auto4 )
+ * Start fake3 ( c7auto2 )
+ * Start fake4 ( c7auto3 )
+ * Start fake5 ( c7auto4 )
+ * Start fake:0 ( c7auto2 )
+ * Start fake:1 ( c7auto3 )
+ * Start fake:2 ( c7auto4 )
+ * Start fake:3 ( c7auto1 )
+ * Start fake_fs ( c7auto1 )
+ * Start container ( c7auto1 )
+ * Start c7auto4 ( c7auto1 )
+
+Executing Cluster Transition:
+ * Resource action: fake1 monitor on c7auto3
+ * Resource action: fake1 monitor on c7auto2
+ * Resource action: fake1 monitor on c7auto1
+ * Resource action: fake2 monitor on c7auto3
+ * Resource action: fake2 monitor on c7auto2
+ * Resource action: fake2 monitor on c7auto1
+ * Resource action: fake3 monitor on c7auto3
+ * Resource action: fake3 monitor on c7auto2
+ * Resource action: fake3 monitor on c7auto1
+ * Resource action: fake4 monitor on c7auto3
+ * Resource action: fake4 monitor on c7auto2
+ * Resource action: fake4 monitor on c7auto1
+ * Resource action: fake5 monitor on c7auto3
+ * Resource action: fake5 monitor on c7auto2
+ * Resource action: fake5 monitor on c7auto1
+ * Resource action: fake:0 monitor on c7auto2
+ * Resource action: fake:1 monitor on c7auto3
+ * Resource action: fake:3 monitor on c7auto1
+ * Pseudo action: fake_clone_start_0
+ * Pseudo action: fake_group_start_0
+ * Resource action: fake_fs monitor on c7auto3
+ * Resource action: fake_fs monitor on c7auto2
+ * Resource action: fake_fs monitor on c7auto1
+ * Resource action: c7auto4 monitor on c7auto3
+ * Resource action: c7auto4 monitor on c7auto2
+ * Resource action: c7auto4 monitor on c7auto1
+ * Resource action: fake1 start on c7auto3
+ * Resource action: fake3 start on c7auto2
+ * Resource action: fake4 start on c7auto3
+ * Resource action: fake:0 start on c7auto2
+ * Resource action: fake:1 start on c7auto3
+ * Resource action: fake:3 start on c7auto1
+ * Resource action: fake_fs start on c7auto1
+ * Resource action: container start on c7auto1
+ * Resource action: c7auto4 start on c7auto1
+ * Resource action: fake1 monitor=10000 on c7auto3
+ * Resource action: fake2 start on c7auto4
+ * Resource action: fake3 monitor=10000 on c7auto2
+ * Resource action: fake4 monitor=10000 on c7auto3
+ * Resource action: fake5 start on c7auto4
+ * Resource action: fake:0 monitor=10000 on c7auto2
+ * Resource action: fake:1 monitor=10000 on c7auto3
+ * Resource action: fake:2 start on c7auto4
+ * Resource action: fake:3 monitor=10000 on c7auto1
+ * Pseudo action: fake_clone_running_0
+ * Pseudo action: fake_group_running_0
+ * Resource action: fake_fs monitor=10000 on c7auto1
+ * Resource action: container monitor=10000 on c7auto1
+ * Resource action: c7auto4 monitor=30000 on c7auto1
+ * Resource action: fake2 monitor=10000 on c7auto4
+ * Resource action: fake5 monitor=10000 on c7auto4
+ * Resource action: fake:2 monitor=10000 on c7auto4
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ c7auto1 c7auto2 c7auto3 ]
+ * GuestOnline: [ c7auto4 ]
+
+ * Full List of Resources:
+ * shooter (stonith:fence_phd_kvm): Started c7auto2
+ * fake1 (ocf:heartbeat:Dummy): Started c7auto3
+ * fake2 (ocf:heartbeat:Dummy): Started c7auto4
+ * fake3 (ocf:heartbeat:Dummy): Started c7auto2
+ * fake4 (ocf:heartbeat:Dummy): Started c7auto3
+ * fake5 (ocf:heartbeat:Dummy): Started c7auto4
+ * Clone Set: fake_clone [fake]:
+ * Started: [ c7auto1 c7auto2 c7auto3 c7auto4 ]
+ * Resource Group: fake_group:
+ * fake_fs (ocf:heartbeat:Dummy): Started c7auto1
+ * container (ocf:heartbeat:Dummy): Started c7auto1
diff --git a/cts/scheduler/summary/whitebox-orphan-ms.summary b/cts/scheduler/summary/whitebox-orphan-ms.summary
new file mode 100644
index 0000000..e7df2d8
--- /dev/null
+++ b/cts/scheduler/summary/whitebox-orphan-ms.summary
@@ -0,0 +1,87 @@
+Current cluster status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+ * GuestOnline: [ lxc1 lxc2 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started 18node2
+ * FencingPass (stonith:fence_dummy): Started 18node3
+ * FencingFail (stonith:fence_dummy): Started 18node3
+ * rsc_18node1 (ocf:heartbeat:IPaddr2): Started 18node1
+ * rsc_18node2 (ocf:heartbeat:IPaddr2): Started 18node2
+ * rsc_18node3 (ocf:heartbeat:IPaddr2): Started 18node3
+ * migrator (ocf:pacemaker:Dummy): Started 18node1
+ * Clone Set: Connectivity [ping-1]:
+ * Started: [ 18node1 18node2 18node3 ]
+ * Clone Set: master-1 [stateful-1] (promotable):
+ * Promoted: [ 18node1 ]
+ * Unpromoted: [ 18node2 18node3 ]
+ * Resource Group: group-1:
+ * r192.168.122.87 (ocf:heartbeat:IPaddr2): Started 18node1
+ * r192.168.122.88 (ocf:heartbeat:IPaddr2): Started 18node1
+ * r192.168.122.89 (ocf:heartbeat:IPaddr2): Started 18node1
+ * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started 18node1
+ * container2 (ocf:heartbeat:VirtualDomain): ORPHANED Started 18node1
+ * lxc1 (ocf:pacemaker:remote): ORPHANED Started 18node1
+ * lxc-ms (ocf:pacemaker:Stateful): ORPHANED Promoted [ lxc1 lxc2 ]
+ * lxc2 (ocf:pacemaker:remote): ORPHANED Started 18node1
+ * container1 (ocf:heartbeat:VirtualDomain): ORPHANED Started 18node1
+
+Transition Summary:
+ * Move FencingFail ( 18node3 -> 18node1 )
+ * Stop container2 ( 18node1 ) due to node availability
+ * Stop lxc1 ( 18node1 ) due to node availability
+ * Stop lxc-ms ( Promoted lxc1 ) due to node availability
+ * Stop lxc-ms ( Promoted lxc2 ) due to node availability
+ * Stop lxc2 ( 18node1 ) due to node availability
+ * Stop container1 ( 18node1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: FencingFail stop on 18node3
+ * Resource action: lxc-ms demote on lxc2
+ * Resource action: lxc-ms demote on lxc1
+ * Resource action: FencingFail start on 18node1
+ * Resource action: lxc-ms stop on lxc2
+ * Resource action: lxc-ms stop on lxc1
+ * Resource action: lxc-ms delete on 18node3
+ * Resource action: lxc-ms delete on 18node2
+ * Resource action: lxc-ms delete on 18node1
+ * Resource action: lxc2 stop on 18node1
+ * Resource action: lxc2 delete on 18node3
+ * Resource action: lxc2 delete on 18node2
+ * Resource action: lxc2 delete on 18node1
+ * Resource action: container2 stop on 18node1
+ * Resource action: container2 delete on 18node3
+ * Resource action: container2 delete on 18node2
+ * Resource action: container2 delete on 18node1
+ * Resource action: lxc1 stop on 18node1
+ * Resource action: lxc1 delete on 18node3
+ * Resource action: lxc1 delete on 18node2
+ * Resource action: lxc1 delete on 18node1
+ * Resource action: container1 stop on 18node1
+ * Resource action: container1 delete on 18node3
+ * Resource action: container1 delete on 18node2
+ * Resource action: container1 delete on 18node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+
+ * Full List of Resources:
+ * Fencing (stonith:fence_xvm): Started 18node2
+ * FencingPass (stonith:fence_dummy): Started 18node3
+ * FencingFail (stonith:fence_dummy): Started 18node1
+ * rsc_18node1 (ocf:heartbeat:IPaddr2): Started 18node1
+ * rsc_18node2 (ocf:heartbeat:IPaddr2): Started 18node2
+ * rsc_18node3 (ocf:heartbeat:IPaddr2): Started 18node3
+ * migrator (ocf:pacemaker:Dummy): Started 18node1
+ * Clone Set: Connectivity [ping-1]:
+ * Started: [ 18node1 18node2 18node3 ]
+ * Clone Set: master-1 [stateful-1] (promotable):
+ * Promoted: [ 18node1 ]
+ * Unpromoted: [ 18node2 18node3 ]
+ * Resource Group: group-1:
+ * r192.168.122.87 (ocf:heartbeat:IPaddr2): Started 18node1
+ * r192.168.122.88 (ocf:heartbeat:IPaddr2): Started 18node1
+ * r192.168.122.89 (ocf:heartbeat:IPaddr2): Started 18node1
+ * lsb-dummy (lsb:/usr/share/pacemaker/tests/cts/LSBDummy): Started 18node1
diff --git a/cts/scheduler/summary/whitebox-orphaned.summary b/cts/scheduler/summary/whitebox-orphaned.summary
new file mode 100644
index 0000000..8d5efb4
--- /dev/null
+++ b/cts/scheduler/summary/whitebox-orphaned.summary
@@ -0,0 +1,59 @@
+Current cluster status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+ * GuestOnline: [ lxc1 lxc2 ]
+
+ * Full List of Resources:
+ * container2 (ocf:heartbeat:VirtualDomain): Started 18node2
+ * shoot1 (stonith:fence_xvm): Started 18node3
+ * Clone Set: M-clone [M]:
+ * M (ocf:pacemaker:Dummy): ORPHANED Started lxc1
+ * Started: [ 18node1 18node2 18node3 lxc2 ]
+ * A (ocf:pacemaker:Dummy): Started 18node1
+ * B (ocf:pacemaker:Dummy): Started lxc1
+ * C (ocf:pacemaker:Dummy): Started lxc2
+ * D (ocf:pacemaker:Dummy): Started 18node1
+ * container1 (ocf:heartbeat:VirtualDomain): ORPHANED Started 18node2
+ * lxc1 (ocf:pacemaker:remote): ORPHANED Started 18node2
+
+Transition Summary:
+ * Stop M:4 ( lxc1 ) due to node availability
+ * Move B ( lxc1 -> lxc2 )
+ * Stop container1 ( 18node2 ) due to node availability
+ * Stop lxc1 ( 18node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: M-clone_stop_0
+ * Resource action: A monitor on lxc2
+ * Resource action: B stop on lxc1
+ * Resource action: B monitor on lxc2
+ * Resource action: D monitor on lxc2
+ * Cluster action: clear_failcount for container1 on 18node2
+ * Cluster action: clear_failcount for lxc1 on 18node2
+ * Resource action: M stop on lxc1
+ * Pseudo action: M-clone_stopped_0
+ * Resource action: B start on lxc2
+ * Resource action: lxc1 stop on 18node2
+ * Resource action: lxc1 delete on 18node3
+ * Resource action: lxc1 delete on 18node2
+ * Resource action: lxc1 delete on 18node1
+ * Resource action: B monitor=10000 on lxc2
+ * Resource action: container1 stop on 18node2
+ * Resource action: container1 delete on 18node3
+ * Resource action: container1 delete on 18node2
+ * Resource action: container1 delete on 18node1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+ * GuestOnline: [ lxc2 ]
+
+ * Full List of Resources:
+ * container2 (ocf:heartbeat:VirtualDomain): Started 18node2
+ * shoot1 (stonith:fence_xvm): Started 18node3
+ * Clone Set: M-clone [M]:
+ * Started: [ 18node1 18node2 18node3 lxc2 ]
+ * A (ocf:pacemaker:Dummy): Started 18node1
+ * B (ocf:pacemaker:Dummy): Started lxc2
+ * C (ocf:pacemaker:Dummy): Started lxc2
+ * D (ocf:pacemaker:Dummy): Started 18node1
diff --git a/cts/scheduler/summary/whitebox-start.summary b/cts/scheduler/summary/whitebox-start.summary
new file mode 100644
index 0000000..e17cde1
--- /dev/null
+++ b/cts/scheduler/summary/whitebox-start.summary
@@ -0,0 +1,56 @@
+Current cluster status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+ * GuestOnline: [ lxc2 ]
+
+ * Full List of Resources:
+ * container1 (ocf:heartbeat:VirtualDomain): Stopped
+ * container2 (ocf:heartbeat:VirtualDomain): Started 18node2
+ * shoot1 (stonith:fence_xvm): Started 18node3
+ * Clone Set: M-clone [M]:
+ * Started: [ 18node1 18node2 18node3 lxc2 ]
+ * Stopped: [ lxc1 ]
+ * A (ocf:pacemaker:Dummy): Started 18node1
+ * B (ocf:pacemaker:Dummy): Started lxc2
+ * C (ocf:pacemaker:Dummy): Started lxc2
+ * D (ocf:pacemaker:Dummy): Started 18node1
+
+Transition Summary:
+ * Start container1 ( 18node1 )
+ * Start M:4 ( lxc1 )
+ * Move A ( 18node1 -> lxc1 )
+ * Move B ( lxc2 -> 18node3 )
+ * Start lxc1 ( 18node1 )
+
+Executing Cluster Transition:
+ * Resource action: container1 start on 18node1
+ * Pseudo action: M-clone_start_0
+ * Resource action: A monitor on lxc2
+ * Resource action: B stop on lxc2
+ * Resource action: D monitor on lxc2
+ * Resource action: lxc1 start on 18node1
+ * Resource action: M start on lxc1
+ * Pseudo action: M-clone_running_0
+ * Resource action: A stop on 18node1
+ * Resource action: B start on 18node3
+ * Resource action: lxc1 monitor=30000 on 18node1
+ * Resource action: M monitor=10000 on lxc1
+ * Resource action: A start on lxc1
+ * Resource action: B monitor=10000 on 18node3
+ * Resource action: A monitor=10000 on lxc1
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+ * GuestOnline: [ lxc1 lxc2 ]
+
+ * Full List of Resources:
+ * container1 (ocf:heartbeat:VirtualDomain): Started 18node1
+ * container2 (ocf:heartbeat:VirtualDomain): Started 18node2
+ * shoot1 (stonith:fence_xvm): Started 18node3
+ * Clone Set: M-clone [M]:
+ * Started: [ 18node1 18node2 18node3 lxc1 lxc2 ]
+ * A (ocf:pacemaker:Dummy): Started lxc1
+ * B (ocf:pacemaker:Dummy): Started 18node3
+ * C (ocf:pacemaker:Dummy): Started lxc2
+ * D (ocf:pacemaker:Dummy): Started 18node1
diff --git a/cts/scheduler/summary/whitebox-stop.summary b/cts/scheduler/summary/whitebox-stop.summary
new file mode 100644
index 0000000..a7a5e0f
--- /dev/null
+++ b/cts/scheduler/summary/whitebox-stop.summary
@@ -0,0 +1,53 @@
+1 of 14 resource instances DISABLED and 0 BLOCKED from further action due to failure
+
+Current cluster status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+ * GuestOnline: [ lxc1 lxc2 ]
+
+ * Full List of Resources:
+ * container1 (ocf:heartbeat:VirtualDomain): Started 18node2 (disabled)
+ * container2 (ocf:heartbeat:VirtualDomain): Started 18node2
+ * shoot1 (stonith:fence_xvm): Started 18node3
+ * Clone Set: M-clone [M]:
+ * Started: [ 18node1 18node2 18node3 lxc1 lxc2 ]
+ * A (ocf:pacemaker:Dummy): Started 18node1
+ * B (ocf:pacemaker:Dummy): Started lxc1
+ * C (ocf:pacemaker:Dummy): Started lxc2
+ * D (ocf:pacemaker:Dummy): Started 18node1
+
+Transition Summary:
+ * Stop container1 ( 18node2 ) due to node availability
+ * Stop M:4 ( lxc1 ) due to node availability
+ * Move B ( lxc1 -> lxc2 )
+ * Stop lxc1 ( 18node2 ) due to node availability
+
+Executing Cluster Transition:
+ * Pseudo action: M-clone_stop_0
+ * Resource action: A monitor on lxc2
+ * Resource action: B stop on lxc1
+ * Resource action: B monitor on lxc2
+ * Resource action: D monitor on lxc2
+ * Resource action: M stop on lxc1
+ * Pseudo action: M-clone_stopped_0
+ * Resource action: B start on lxc2
+ * Resource action: lxc1 stop on 18node2
+ * Resource action: container1 stop on 18node2
+ * Resource action: B monitor=10000 on lxc2
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18node1 18node2 18node3 ]
+ * GuestOnline: [ lxc2 ]
+
+ * Full List of Resources:
+ * container1 (ocf:heartbeat:VirtualDomain): Stopped (disabled)
+ * container2 (ocf:heartbeat:VirtualDomain): Started 18node2
+ * shoot1 (stonith:fence_xvm): Started 18node3
+ * Clone Set: M-clone [M]:
+ * Started: [ 18node1 18node2 18node3 lxc2 ]
+ * Stopped: [ lxc1 ]
+ * A (ocf:pacemaker:Dummy): Started 18node1
+ * B (ocf:pacemaker:Dummy): Started lxc2
+ * C (ocf:pacemaker:Dummy): Started lxc2
+ * D (ocf:pacemaker:Dummy): Started 18node1
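
The "(disabled)" annotation and the "1 of 14 resource instances DISABLED" headline in whitebox-stop.summary reflect a resource whose target-role meta attribute is set to Stopped, which is why the scheduler stops container1 and, with it, the guest node lxc1. A minimal sketch of how that attribute sits in a CIB, with illustrative identifiers rather than this test's actual input XML:

    import xml.etree.ElementTree as ET

    # Illustrative only: the ids are made up, but "target-role" is the
    # standard Pacemaker meta attribute behind the "(disabled)" marker.
    prim = ET.Element("primitive", {"class": "ocf"}, id="container1",
                      provider="heartbeat", type="VirtualDomain")
    meta = ET.SubElement(prim, "meta_attributes", id="container1-meta_attributes")
    ET.SubElement(meta, "nvpair", id="container1-target-role",
                  name="target-role", value="Stopped")
    print(ET.tostring(prim, encoding="unicode"))
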
diff --git a/cts/scheduler/summary/whitebox-unexpectedly-running.summary b/cts/scheduler/summary/whitebox-unexpectedly-running.summary
new file mode 100644
index 0000000..5973497
--- /dev/null
+++ b/cts/scheduler/summary/whitebox-unexpectedly-running.summary
@@ -0,0 +1,35 @@
+Current cluster status:
+ * Node List:
+ * Online: [ 18builder ]
+
+ * Full List of Resources:
+ * FAKE (ocf:pacemaker:Dummy): Started 18builder
+ * FAKE-crashed (ocf:pacemaker:Dummy): FAILED 18builder
+
+Transition Summary:
+ * Fence (reboot) remote2 (resource: FAKE-crashed) 'guest is unclean'
+ * Recover FAKE-crashed ( 18builder )
+ * Start remote1 ( 18builder )
+ * Start remote2 ( 18builder )
+
+Executing Cluster Transition:
+ * Resource action: FAKE monitor=60000 on 18builder
+ * Resource action: FAKE-crashed stop on 18builder
+ * Resource action: remote1 monitor on 18builder
+ * Resource action: remote2 monitor on 18builder
+ * Pseudo action: stonith-remote2-reboot on remote2
+ * Resource action: FAKE-crashed start on 18builder
+ * Resource action: remote1 start on 18builder
+ * Resource action: remote2 start on 18builder
+ * Resource action: FAKE-crashed monitor=60000 on 18builder
+ * Resource action: remote1 monitor=30000 on 18builder
+ * Resource action: remote2 monitor=30000 on 18builder
+
+Revised Cluster Status:
+ * Node List:
+ * Online: [ 18builder ]
+ * GuestOnline: [ remote1 remote2 ]
+
+ * Full List of Resources:
+ * FAKE (ocf:pacemaker:Dummy): Started 18builder
+ * FAKE-crashed (ocf:pacemaker:Dummy): Started 18builder
diff --git a/cts/scheduler/summary/year-2038.summary b/cts/scheduler/summary/year-2038.summary
new file mode 100644
index 0000000..edaed22
--- /dev/null
+++ b/cts/scheduler/summary/year-2038.summary
@@ -0,0 +1,112 @@
+Using the original execution date of: 2038-02-17 06:13:20Z
+Current cluster status:
+ * Node List:
+ * RemoteNode overcloud-novacompute-1: UNCLEAN (offline)
+ * Online: [ controller-0 controller-1 controller-2 ]
+ * RemoteOnline: [ overcloud-novacompute-0 ]
+ * GuestOnline: [ galera-bundle-0 galera-bundle-1 galera-bundle-2 rabbitmq-bundle-0 rabbitmq-bundle-1 rabbitmq-bundle-2 redis-bundle-0 redis-bundle-1 redis-bundle-2 ]
+
+ * Full List of Resources:
+ * overcloud-novacompute-0 (ocf:pacemaker:remote): Started controller-0
+ * overcloud-novacompute-1 (ocf:pacemaker:remote): FAILED controller-1
+ * Container bundle set: rabbitmq-bundle [192.168.24.1:8787/rhosp13/openstack-rabbitmq:pcmklatest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started controller-2
+ * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started controller-0
+ * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started controller-1
+ * Container bundle set: galera-bundle [192.168.24.1:8787/rhosp13/openstack-mariadb:pcmklatest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Promoted controller-2
+ * galera-bundle-1 (ocf:heartbeat:galera): Promoted controller-0
+ * galera-bundle-2 (ocf:heartbeat:galera): Promoted controller-1
+ * Container bundle set: redis-bundle [192.168.24.1:8787/rhosp13/openstack-redis:pcmklatest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Promoted controller-2
+ * redis-bundle-1 (ocf:heartbeat:redis): Unpromoted controller-0
+ * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted controller-1
+ * ip-192.168.24.11 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-10.0.0.110 (ocf:heartbeat:IPaddr2): Stopped
+ * ip-172.17.1.14 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.1.17 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.3.11 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.4.17 (ocf:heartbeat:IPaddr2): Started controller-1
+ * Container bundle set: haproxy-bundle [192.168.24.1:8787/rhosp13/openstack-haproxy:pcmklatest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started controller-2
+ * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Started controller-0
+ * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Started controller-1
+ * stonith-fence_compute-fence-nova (stonith:fence_compute): FAILED controller-2
+ * Clone Set: compute-unfence-trigger-clone [compute-unfence-trigger]:
+ * compute-unfence-trigger (ocf:pacemaker:Dummy): Started overcloud-novacompute-1 (UNCLEAN)
+ * Started: [ overcloud-novacompute-0 ]
+ * Stopped: [ controller-0 controller-1 controller-2 ]
+ * nova-evacuate (ocf:openstack:NovaEvacuate): Started controller-0
+ * stonith-fence_ipmilan-5254008be2cc (stonith:fence_ipmilan): Started controller-1
+ * stonith-fence_ipmilan-525400803f9e (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-525400fca120 (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-525400953d48 (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-525400b02b86 (stonith:fence_ipmilan): Started controller-1
+ * Container bundle: openstack-cinder-volume [192.168.24.1:8787/rhosp13/openstack-cinder-volume:pcmklatest]:
+ * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Started controller-0
+
+Transition Summary:
+ * Fence (reboot) overcloud-novacompute-1 'remote connection is unrecoverable'
+ * Stop overcloud-novacompute-1 ( controller-1 ) due to node availability
+ * Start ip-10.0.0.110 ( controller-1 )
+ * Recover stonith-fence_compute-fence-nova ( controller-2 )
+ * Stop compute-unfence-trigger:1 ( overcloud-novacompute-1 ) due to node availability
+
+Executing Cluster Transition:
+ * Resource action: overcloud-novacompute-1 stop on controller-1
+ * Resource action: stonith-fence_compute-fence-nova stop on controller-2
+ * Fencing overcloud-novacompute-1 (reboot)
+ * Cluster action: clear_failcount for overcloud-novacompute-1 on controller-1
+ * Resource action: ip-10.0.0.110 start on controller-1
+ * Resource action: stonith-fence_compute-fence-nova start on controller-2
+ * Resource action: stonith-fence_compute-fence-nova monitor=60000 on controller-2
+ * Pseudo action: compute-unfence-trigger-clone_stop_0
+ * Resource action: ip-10.0.0.110 monitor=10000 on controller-1
+ * Pseudo action: compute-unfence-trigger_stop_0
+ * Pseudo action: compute-unfence-trigger-clone_stopped_0
+Using the original execution date of: 2038-02-17 06:13:20Z
+
+Revised Cluster Status:
+ * Node List:
+ * RemoteNode overcloud-novacompute-1: UNCLEAN (offline)
+ * Online: [ controller-0 controller-1 controller-2 ]
+ * RemoteOnline: [ overcloud-novacompute-0 ]
+ * GuestOnline: [ galera-bundle-0 galera-bundle-1 galera-bundle-2 rabbitmq-bundle-0 rabbitmq-bundle-1 rabbitmq-bundle-2 redis-bundle-0 redis-bundle-1 redis-bundle-2 ]
+
+ * Full List of Resources:
+ * overcloud-novacompute-0 (ocf:pacemaker:remote): Started controller-0
+ * overcloud-novacompute-1 (ocf:pacemaker:remote): FAILED
+ * Container bundle set: rabbitmq-bundle [192.168.24.1:8787/rhosp13/openstack-rabbitmq:pcmklatest]:
+ * rabbitmq-bundle-0 (ocf:heartbeat:rabbitmq-cluster): Started controller-2
+ * rabbitmq-bundle-1 (ocf:heartbeat:rabbitmq-cluster): Started controller-0
+ * rabbitmq-bundle-2 (ocf:heartbeat:rabbitmq-cluster): Started controller-1
+ * Container bundle set: galera-bundle [192.168.24.1:8787/rhosp13/openstack-mariadb:pcmklatest]:
+ * galera-bundle-0 (ocf:heartbeat:galera): Promoted controller-2
+ * galera-bundle-1 (ocf:heartbeat:galera): Promoted controller-0
+ * galera-bundle-2 (ocf:heartbeat:galera): Promoted controller-1
+ * Container bundle set: redis-bundle [192.168.24.1:8787/rhosp13/openstack-redis:pcmklatest]:
+ * redis-bundle-0 (ocf:heartbeat:redis): Promoted controller-2
+ * redis-bundle-1 (ocf:heartbeat:redis): Unpromoted controller-0
+ * redis-bundle-2 (ocf:heartbeat:redis): Unpromoted controller-1
+ * ip-192.168.24.11 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-10.0.0.110 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.1.14 (ocf:heartbeat:IPaddr2): Started controller-1
+ * ip-172.17.1.17 (ocf:heartbeat:IPaddr2): Started controller-2
+ * ip-172.17.3.11 (ocf:heartbeat:IPaddr2): Started controller-0
+ * ip-172.17.4.17 (ocf:heartbeat:IPaddr2): Started controller-1
+ * Container bundle set: haproxy-bundle [192.168.24.1:8787/rhosp13/openstack-haproxy:pcmklatest]:
+ * haproxy-bundle-docker-0 (ocf:heartbeat:docker): Started controller-2
+ * haproxy-bundle-docker-1 (ocf:heartbeat:docker): Started controller-0
+ * haproxy-bundle-docker-2 (ocf:heartbeat:docker): Started controller-1
+ * stonith-fence_compute-fence-nova (stonith:fence_compute): Started controller-2
+ * Clone Set: compute-unfence-trigger-clone [compute-unfence-trigger]:
+ * Started: [ overcloud-novacompute-0 ]
+ * Stopped: [ controller-0 controller-1 controller-2 overcloud-novacompute-1 ]
+ * nova-evacuate (ocf:openstack:NovaEvacuate): Started controller-0
+ * stonith-fence_ipmilan-5254008be2cc (stonith:fence_ipmilan): Started controller-1
+ * stonith-fence_ipmilan-525400803f9e (stonith:fence_ipmilan): Started controller-0
+ * stonith-fence_ipmilan-525400fca120 (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-525400953d48 (stonith:fence_ipmilan): Started controller-2
+ * stonith-fence_ipmilan-525400b02b86 (stonith:fence_ipmilan): Started controller-1
+ * Container bundle: openstack-cinder-volume [192.168.24.1:8787/rhosp13/openstack-cinder-volume:pcmklatest]:
+ * openstack-cinder-volume-docker-0 (ocf:heartbeat:docker): Started controller-0
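
The pinned execution date is the point of the year-2038 test: 2038-02-17 lies past the moment a signed 32-bit time_t overflows, so the scheduler must reach this transition with 64-bit time handling. A quick check of the arithmetic in Python:

    from datetime import datetime, timezone

    # A signed 32-bit time_t runs out 2**31 - 1 seconds after the epoch.
    overflow = datetime.fromtimestamp(2**31 - 1, tz=timezone.utc)
    print(overflow)  # 2038-01-19 03:14:07+00:00

    # The execution date pinned by this test lies beyond that point, so
    # the scheduler must get there with 64-bit time values.
    test_date = datetime(2038, 2, 17, 6, 13, 20, tzinfo=timezone.utc)
    assert test_date > overflow
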