Current cluster status:
  * Node List:
    * Node sapcl02: standby (with active resources)
    * Node sapcl03: UNCLEAN (offline)
    * Online: [ sapcl01 ]

  * Full List of Resources:
    * stonith-1	(stonith:dummy):	 Stopped
    * Resource Group: app01:
      * IPaddr_192_168_1_101	(ocf:heartbeat:IPaddr):	 Started sapcl01
      * LVM_2	(ocf:heartbeat:LVM):	 Started sapcl01
      * Filesystem_3	(ocf:heartbeat:Filesystem):	 Started sapcl01
    * Resource Group: app02:
      * IPaddr_192_168_1_102	(ocf:heartbeat:IPaddr):	 Started sapcl02
      * LVM_12	(ocf:heartbeat:LVM):	 Started sapcl02
      * Filesystem_13	(ocf:heartbeat:Filesystem):	 Started sapcl02
    * Resource Group: oracle:
      * IPaddr_192_168_1_104	(ocf:heartbeat:IPaddr):	 Stopped
      * LVM_22	(ocf:heartbeat:LVM):	 Stopped
      * Filesystem_23	(ocf:heartbeat:Filesystem):	 Stopped
      * oracle_24	(ocf:heartbeat:oracle):	 Stopped
      * oralsnr_25	(ocf:heartbeat:oralsnr):	 Stopped

Transition Summary:
  * Fence (reboot) sapcl03 'peer is no longer part of the cluster'
  * Start      stonith-1               ( sapcl01 )
  * Move       IPaddr_192_168_1_102    ( sapcl02 -> sapcl01 )
  * Move       LVM_12                  ( sapcl02 -> sapcl01 )
  * Move       Filesystem_13           ( sapcl02 -> sapcl01 )
  * Start      IPaddr_192_168_1_104    ( sapcl01 )
  * Start      LVM_22                  ( sapcl01 )
  * Start      Filesystem_23           ( sapcl01 )
  * Start      oracle_24               ( sapcl01 )
  * Start      oralsnr_25              ( sapcl01 )

Executing Cluster Transition:
  * Resource action: stonith-1       monitor on sapcl02
  * Resource action: stonith-1       monitor on sapcl01
  * Pseudo action:   app02_stop_0
  * Resource action: Filesystem_13   stop on sapcl02
  * Pseudo action:   oracle_start_0
  * Fencing sapcl03 (reboot)
  * Resource action: stonith-1       start on sapcl01
  * Resource action: LVM_12          stop on sapcl02
  * Resource action: IPaddr_192_168_1_104 start on sapcl01
  * Resource action: LVM_22          start on sapcl01
  * Resource action: Filesystem_23   start on sapcl01
  * Resource action: oracle_24       start on sapcl01
  * Resource action: oralsnr_25      start on sapcl01
  * Resource action: IPaddr_192_168_1_102 stop on sapcl02
  * Pseudo action:   oracle_running_0
  * Resource action: IPaddr_192_168_1_104 monitor=5000 on sapcl01
  * Resource action: LVM_22          monitor=120000 on sapcl01
  * Resource action: Filesystem_23   monitor=120000 on sapcl01
  * Resource action: oracle_24       monitor=120000 on sapcl01
  * Resource action: oralsnr_25      monitor=120000 on sapcl01
  * Pseudo action:   app02_stopped_0
  * Pseudo action:   app02_start_0
  * Resource action: IPaddr_192_168_1_102 start on sapcl01
  * Resource action: LVM_12          start on sapcl01
  * Resource action: Filesystem_13   start on sapcl01
  * Pseudo action:   app02_running_0
  * Resource action: IPaddr_192_168_1_102 monitor=5000 on sapcl01
  * Resource action: LVM_12          monitor=120000 on sapcl01
  * Resource action: Filesystem_13   monitor=120000 on sapcl01

Revised Cluster Status:
  * Node List:
    * Node sapcl02: standby
    * Online: [ sapcl01 ]
    * OFFLINE: [ sapcl03 ]

  * Full List of Resources:
    * stonith-1	(stonith:dummy):	 Started sapcl01
    * Resource Group: app01:
      * IPaddr_192_168_1_101	(ocf:heartbeat:IPaddr):	 Started sapcl01
      * LVM_2	(ocf:heartbeat:LVM):	 Started sapcl01
      * Filesystem_3	(ocf:heartbeat:Filesystem):	 Started sapcl01
    * Resource Group: app02:
      * IPaddr_192_168_1_102	(ocf:heartbeat:IPaddr):	 Started sapcl01
      * LVM_12	(ocf:heartbeat:LVM):	 Started sapcl01
      * Filesystem_13	(ocf:heartbeat:Filesystem):	 Started sapcl01
    * Resource Group: oracle:
      * IPaddr_192_168_1_104	(ocf:heartbeat:IPaddr):	 Started sapcl01
      * LVM_22	(ocf:heartbeat:LVM):	 Started sapcl01
      * Filesystem_23	(ocf:heartbeat:Filesystem):	 Started sapcl01
      * oracle_24	(ocf:heartbeat:oracle):	 Started sapcl01
      * oralsnr_25	(ocf:heartbeat:oralsnr):	 Started sapcl01