author     Daniel Baumann <daniel.baumann@progress-linux.org>  2024-04-17 06:53:20 +0000
committer  Daniel Baumann <daniel.baumann@progress-linux.org>  2024-04-17 06:53:20 +0000
commit     e5a812082ae033afb1eed82c0f2df3d0f6bdc93f (patch)
tree       a6716c9275b4b413f6c9194798b34b91affb3cc7 /doc/sphinx
parent     Initial commit. (diff)
download   pacemaker-e5a812082ae033afb1eed82c0f2df3d0f6bdc93f.tar.xz
           pacemaker-e5a812082ae033afb1eed82c0f2df3d0f6bdc93f.zip
Adding upstream version 2.1.6. (upstream/2.1.6)
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'doc/sphinx')
-rw-r--r-- doc/sphinx/Clusters_from_Scratch/active-active.rst | 343
-rw-r--r-- doc/sphinx/Clusters_from_Scratch/active-passive.rst | 324
-rw-r--r-- doc/sphinx/Clusters_from_Scratch/ap-configuration.rst | 345
-rw-r--r-- doc/sphinx/Clusters_from_Scratch/ap-corosync-conf.rst | 43
-rw-r--r-- doc/sphinx/Clusters_from_Scratch/ap-reading.rst | 10
-rw-r--r-- doc/sphinx/Clusters_from_Scratch/apache.rst | 448
-rw-r--r-- doc/sphinx/Clusters_from_Scratch/cluster-setup.rst | 313
-rw-r--r-- doc/sphinx/Clusters_from_Scratch/fencing.rst | 231
-rw-r--r-- doc/sphinx/Clusters_from_Scratch/images/ConfigureVolumeGroup.png | bin 0 -> 100021 bytes
-rw-r--r-- doc/sphinx/Clusters_from_Scratch/images/ConsolePrompt.png | bin 0 -> 19561 bytes
-rw-r--r-- doc/sphinx/Clusters_from_Scratch/images/InstallationDestination.png | bin 0 -> 72468 bytes
-rw-r--r-- doc/sphinx/Clusters_from_Scratch/images/InstallationSummary.png | bin 0 -> 97618 bytes
-rw-r--r-- doc/sphinx/Clusters_from_Scratch/images/ManualPartitioning.png | bin 0 -> 94959 bytes
-rw-r--r-- doc/sphinx/Clusters_from_Scratch/images/NetworkAndHostName.png | bin 0 -> 86829 bytes
-rw-r--r-- doc/sphinx/Clusters_from_Scratch/images/RootPassword.png | bin 0 -> 44890 bytes
-rw-r--r-- doc/sphinx/Clusters_from_Scratch/images/SoftwareSelection.png | bin 0 -> 125644 bytes
-rw-r--r-- doc/sphinx/Clusters_from_Scratch/images/SummaryOfChanges.png | bin 0 -> 122021 bytes
-rw-r--r-- doc/sphinx/Clusters_from_Scratch/images/TimeAndDate.png | bin 0 -> 237593 bytes
-rw-r--r-- doc/sphinx/Clusters_from_Scratch/images/WelcomeToAlmaLinux.png | bin 0 -> 122389 bytes
-rw-r--r-- doc/sphinx/Clusters_from_Scratch/images/WelcomeToCentos.png | bin 0 -> 169687 bytes
-rw-r--r-- doc/sphinx/Clusters_from_Scratch/index.rst | 49
-rw-r--r-- doc/sphinx/Clusters_from_Scratch/installation.rst | 466
-rw-r--r-- doc/sphinx/Clusters_from_Scratch/intro.rst | 29
-rw-r--r-- doc/sphinx/Clusters_from_Scratch/shared-storage.rst | 645
-rw-r--r-- doc/sphinx/Clusters_from_Scratch/verification.rst | 222
-rw-r--r-- doc/sphinx/Makefile.am | 198
-rw-r--r-- doc/sphinx/Pacemaker_Administration/agents.rst | 443
-rw-r--r-- doc/sphinx/Pacemaker_Administration/alerts.rst | 311
-rw-r--r-- doc/sphinx/Pacemaker_Administration/cluster.rst | 21
-rw-r--r-- doc/sphinx/Pacemaker_Administration/configuring.rst | 278
-rw-r--r-- doc/sphinx/Pacemaker_Administration/index.rst | 36
-rw-r--r-- doc/sphinx/Pacemaker_Administration/installing.rst | 9
-rw-r--r-- doc/sphinx/Pacemaker_Administration/intro.rst | 21
-rw-r--r-- doc/sphinx/Pacemaker_Administration/pcs-crmsh.rst | 441
-rw-r--r-- doc/sphinx/Pacemaker_Administration/tools.rst | 562
-rw-r--r-- doc/sphinx/Pacemaker_Administration/troubleshooting.rst | 123
-rw-r--r-- doc/sphinx/Pacemaker_Administration/upgrading.rst | 534
-rw-r--r-- doc/sphinx/Pacemaker_Development/c.rst | 955
-rw-r--r-- doc/sphinx/Pacemaker_Development/components.rst | 489
-rw-r--r-- doc/sphinx/Pacemaker_Development/evolution.rst | 90
-rw-r--r-- doc/sphinx/Pacemaker_Development/faq.rst | 171
-rw-r--r-- doc/sphinx/Pacemaker_Development/general.rst | 40
-rw-r--r-- doc/sphinx/Pacemaker_Development/helpers.rst | 521
-rw-r--r-- doc/sphinx/Pacemaker_Development/index.rst | 33
-rw-r--r-- doc/sphinx/Pacemaker_Development/python.rst | 81
-rw-r--r-- doc/sphinx/Pacemaker_Explained/acls.rst | 460
-rw-r--r-- doc/sphinx/Pacemaker_Explained/advanced-options.rst | 586
-rw-r--r-- doc/sphinx/Pacemaker_Explained/advanced-resources.rst | 1629
-rw-r--r-- doc/sphinx/Pacemaker_Explained/alerts.rst | 257
-rw-r--r-- doc/sphinx/Pacemaker_Explained/ap-samples.rst | 148
-rw-r--r-- doc/sphinx/Pacemaker_Explained/constraints.rst | 1106
-rw-r--r-- doc/sphinx/Pacemaker_Explained/fencing.rst | 1298
-rw-r--r-- doc/sphinx/Pacemaker_Explained/images/resource-set.png | bin 0 -> 27238 bytes
-rw-r--r-- doc/sphinx/Pacemaker_Explained/images/three-sets.png | bin 0 -> 69969 bytes
-rw-r--r-- doc/sphinx/Pacemaker_Explained/images/two-sets.png | bin 0 -> 47601 bytes
-rw-r--r-- doc/sphinx/Pacemaker_Explained/index.rst | 41
-rw-r--r-- doc/sphinx/Pacemaker_Explained/intro.rst | 22
-rw-r--r-- doc/sphinx/Pacemaker_Explained/multi-site-clusters.rst | 341
-rw-r--r-- doc/sphinx/Pacemaker_Explained/nodes.rst | 441
-rw-r--r-- doc/sphinx/Pacemaker_Explained/options.rst | 622
-rw-r--r-- doc/sphinx/Pacemaker_Explained/resources.rst | 1074
-rw-r--r-- doc/sphinx/Pacemaker_Explained/reusing-configuration.rst | 415
-rw-r--r-- doc/sphinx/Pacemaker_Explained/rules.rst | 1021
-rw-r--r-- doc/sphinx/Pacemaker_Explained/status.rst | 372
-rw-r--r-- doc/sphinx/Pacemaker_Explained/utilization.rst | 264
-rw-r--r-- doc/sphinx/Pacemaker_Python_API/_templates/custom-class-template.rst | 32
-rw-r--r-- doc/sphinx/Pacemaker_Python_API/_templates/custom-module-template.rst | 65
-rw-r--r-- doc/sphinx/Pacemaker_Python_API/api.rst | 10
-rw-r--r-- doc/sphinx/Pacemaker_Python_API/index.rst | 11
-rw-r--r-- doc/sphinx/Pacemaker_Remote/alternatives.rst | 95
-rw-r--r-- doc/sphinx/Pacemaker_Remote/baremetal-tutorial.rst | 288
-rw-r--r-- doc/sphinx/Pacemaker_Remote/images/pcmk-ha-cluster-stack.png | bin 0 -> 34152 bytes
-rw-r--r-- doc/sphinx/Pacemaker_Remote/images/pcmk-ha-remote-stack.png | bin 0 -> 58380 bytes
-rw-r--r-- doc/sphinx/Pacemaker_Remote/index.rst | 44
-rw-r--r-- doc/sphinx/Pacemaker_Remote/intro.rst | 187
-rw-r--r-- doc/sphinx/Pacemaker_Remote/kvm-tutorial.rst | 584
-rw-r--r-- doc/sphinx/Pacemaker_Remote/options.rst | 174
-rw-r--r-- doc/sphinx/_static/pacemaker.css | 142
-rw-r--r-- doc/sphinx/conf.py.in | 319
-rw-r--r-- doc/sphinx/shared/images/Policy-Engine-big.dot | 83
-rw-r--r-- doc/sphinx/shared/images/Policy-Engine-big.svg | 418
-rw-r--r-- doc/sphinx/shared/images/Policy-Engine-small.dot | 31
-rw-r--r-- doc/sphinx/shared/images/Policy-Engine-small.svg | 133
-rw-r--r-- doc/sphinx/shared/images/pcmk-active-active.svg | 1398
-rw-r--r-- doc/sphinx/shared/images/pcmk-active-passive.svg | 1027
-rw-r--r-- doc/sphinx/shared/images/pcmk-colocated-sets.svg | 436
-rw-r--r-- doc/sphinx/shared/images/pcmk-internals.svg | 1649
-rw-r--r-- doc/sphinx/shared/images/pcmk-overview.svg | 855
-rw-r--r-- doc/sphinx/shared/images/pcmk-shared-failover.svg | 1306
-rw-r--r-- doc/sphinx/shared/images/pcmk-stack.svg | 925
-rw-r--r-- doc/sphinx/shared/pacemaker-intro.rst | 196
91 files changed, 29330 insertions, 0 deletions
diff --git a/doc/sphinx/Clusters_from_Scratch/active-active.rst b/doc/sphinx/Clusters_from_Scratch/active-active.rst
new file mode 100644
index 0000000..0d27174
--- /dev/null
+++ b/doc/sphinx/Clusters_from_Scratch/active-active.rst
@@ -0,0 +1,343 @@
+.. index::
+ single: storage; active/active
+
+Convert Storage to Active/Active
+--------------------------------
+
+The primary requirement for an active/active cluster is that the data
+required for your services is available, simultaneously, on both
+machines. Pacemaker makes no requirement on how this is achieved; you
+could use a Storage Area Network (SAN) if you had one available, but
+since DRBD supports multiple Primaries, we can continue to use it here.
+
+.. index::
+ single: GFS2
+ single: DLM
+ single: filesystem; GFS2
+
+Install Cluster Filesystem Software
+###################################
+
+The only hitch is that we need to use a cluster-aware filesystem. The
+one we used earlier with DRBD, xfs, is not one of those. Both OCFS2
+and GFS2 are supported; here, we will use GFS2.
+
+On both nodes, install Distributed Lock Manager (DLM) and the GFS2
+command-line utilities required by cluster filesystems:
+
+.. code-block:: console
+
+ # dnf config-manager --set-enabled resilientstorage
+ # dnf install -y dlm gfs2-utils
+
+Configure the Cluster for the DLM
+#################################
+
+The DLM control daemon needs to run on both nodes, so we'll start by creating a
+resource for it (using the ``ocf:pacemaker:controld`` resource agent), and
+clone it:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs cluster cib dlm_cfg
+ [root@pcmk-1 ~]# pcs -f dlm_cfg resource create dlm \
+ ocf:pacemaker:controld op monitor interval=60s
+ [root@pcmk-1 ~]# pcs -f dlm_cfg resource clone dlm clone-max=2 clone-node-max=1
+ [root@pcmk-1 ~]# pcs resource status
+ * ClusterIP (ocf:heartbeat:IPaddr2): Started pcmk-1
+ * WebSite (ocf:heartbeat:apache): Started pcmk-1
+ * Clone Set: WebData-clone [WebData] (promotable):
+ * Promoted: [ pcmk-1 ]
+ * Unpromoted: [ pcmk-2 ]
+ * WebFS (ocf:heartbeat:Filesystem): Started pcmk-1
+
+Activate our new configuration, and see how the cluster responds:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs cluster cib-push dlm_cfg --config
+ CIB updated
+ [root@pcmk-1 ~]# pcs resource status
+ * ClusterIP (ocf:heartbeat:IPaddr2): Started pcmk-1
+ * WebSite (ocf:heartbeat:apache): Started pcmk-1
+ * Clone Set: WebData-clone [WebData] (promotable):
+ * Promoted: [ pcmk-1 ]
+ * Unpromoted: [ pcmk-2 ]
+ * WebFS (ocf:heartbeat:Filesystem): Started pcmk-1
+ * Clone Set: dlm-clone [dlm]:
+ * Started: [ pcmk-1 pcmk-2 ]
+ [root@pcmk-1 ~]# pcs resource config
+ Resource: ClusterIP (class=ocf provider=heartbeat type=IPaddr2)
+ Attributes: cidr_netmask=24 ip=192.168.122.120
+ Operations: monitor interval=30s (ClusterIP-monitor-interval-30s)
+ start interval=0s timeout=20s (ClusterIP-start-interval-0s)
+ stop interval=0s timeout=20s (ClusterIP-stop-interval-0s)
+ Resource: WebSite (class=ocf provider=heartbeat type=apache)
+ Attributes: configfile=/etc/httpd/conf/httpd.conf statusurl=http://localhost/server-status
+ Operations: monitor interval=1min (WebSite-monitor-interval-1min)
+ start interval=0s timeout=40s (WebSite-start-interval-0s)
+ stop interval=0s timeout=60s (WebSite-stop-interval-0s)
+ Clone: WebData-clone
+ Meta Attrs: clone-max=2 clone-node-max=1 notify=true promotable=true promoted-max=1 promoted-node-max=1
+ Resource: WebData (class=ocf provider=linbit type=drbd)
+ Attributes: drbd_resource=wwwdata
+ Operations: demote interval=0s timeout=90 (WebData-demote-interval-0s)
+ monitor interval=29s role=Promoted (WebData-monitor-interval-29s)
+ monitor interval=31s role=Unpromoted (WebData-monitor-interval-31s)
+ notify interval=0s timeout=90 (WebData-notify-interval-0s)
+ promote interval=0s timeout=90 (WebData-promote-interval-0s)
+ reload interval=0s timeout=30 (WebData-reload-interval-0s)
+ start interval=0s timeout=240 (WebData-start-interval-0s)
+ stop interval=0s timeout=100 (WebData-stop-interval-0s)
+ Resource: WebFS (class=ocf provider=heartbeat type=Filesystem)
+ Attributes: device=/dev/drbd1 directory=/var/www/html fstype=xfs
+ Operations: monitor interval=20s timeout=40s (WebFS-monitor-interval-20s)
+ start interval=0s timeout=60s (WebFS-start-interval-0s)
+ stop interval=0s timeout=60s (WebFS-stop-interval-0s)
+ Clone: dlm-clone
+ Meta Attrs: interleave=true ordered=true
+ Resource: dlm (class=ocf provider=pacemaker type=controld)
+ Operations: monitor interval=60s (dlm-monitor-interval-60s)
+ start interval=0s timeout=90s (dlm-start-interval-0s)
+ stop interval=0s timeout=100s (dlm-stop-interval-0s)
+
+Create and Populate GFS2 Filesystem
+###################################
+
+Before we do anything to the existing partition, we need to make sure it
+is unmounted. We do this by telling the cluster to stop the ``WebFS`` resource.
+This will ensure that other resources (in our case, ``WebSite``) using
+``WebFS`` are not only stopped, but stopped in the correct order.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs resource disable WebFS
+ [root@pcmk-1 ~]# pcs resource
+ * ClusterIP (ocf:heartbeat:IPaddr2): Started pcmk-1
+ * WebSite (ocf:heartbeat:apache): Stopped
+ * Clone Set: WebData-clone [WebData] (promotable):
+ * Promoted: [ pcmk-1 ]
+ * Unpromoted: [ pcmk-2 ]
+ * WebFS (ocf:heartbeat:Filesystem): Stopped (disabled)
+ * Clone Set: dlm-clone [dlm]:
+ * Started: [ pcmk-1 pcmk-2 ]
+
+You can see that both ``WebSite`` and ``WebFS`` have been stopped, and that
+``pcmk-1`` is currently running the promoted instance for the DRBD device.
+
+Now we can create a new GFS2 filesystem on the DRBD device.
+
+.. WARNING::
+
+ This will erase all previous content stored on the DRBD device. Ensure
+ you have a copy of any important data.
+
+.. IMPORTANT::
+
+ Run the next command on whichever node has the DRBD Primary role.
+ Otherwise, you will receive the message:
+
+ .. code-block:: console
+
+ /dev/drbd1: Read-only file system
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# mkfs.gfs2 -p lock_dlm -j 2 -t mycluster:web /dev/drbd1
+ It appears to contain an existing filesystem (xfs)
+ This will destroy any data on /dev/drbd1
+ Are you sure you want to proceed? [y/n] y
+ Discarding device contents (may take a while on large devices): Done
+ Adding journals: Done
+ Building resource groups: Done
+ Creating quota file: Done
+ Writing superblock and syncing: Done
+ Device: /dev/drbd1
+ Block size: 4096
+ Device size: 0.50 GB (131059 blocks)
+ Filesystem size: 0.50 GB (131055 blocks)
+ Journals: 2
+ Journal size: 8MB
+ Resource groups: 4
+ Locking protocol: "lock_dlm"
+ Lock table: "mycluster:web"
+ UUID: 19712677-7206-4660-a079-5d17341dd720
+
+The ``mkfs.gfs2`` command required a number of additional parameters:
+
+* ``-p lock_dlm`` specifies that we want to use DLM-based locking.
+
+* ``-j 2`` indicates that the filesystem should reserve enough
+ space for two journals (one for each node that will access the filesystem).
+
+* ``-t mycluster:web`` specifies the lock table name. The format for this
+ field is ``<CLUSTERNAME>:<FSNAME>``. For ``CLUSTERNAME``, we need to use the
+ same value we specified originally with ``pcs cluster setup --name`` (which is
+ also the value of ``cluster_name`` in ``/etc/corosync/corosync.conf``). If
+ you are unsure what your cluster name is, you can look in
+ ``/etc/corosync/corosync.conf`` or execute the command
+ ``pcs cluster corosync | grep cluster_name``.
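+
+For example, on the cluster built in this guide (which was set up with the
+name ``mycluster``), checking the cluster name looks like this:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# grep cluster_name /etc/corosync/corosync.conf
+ cluster_name: mycluster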
+
+Now we can (re-)populate the new filesystem with data
+(web pages). We'll create yet another variation on our home page.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# mount /dev/drbd1 /mnt
+ [root@pcmk-1 ~]# cat <<-END >/mnt/index.html
+ <html>
+ <body>My Test Site - GFS2</body>
+ </html>
+ END
+ [root@pcmk-1 ~]# chcon -R --reference=/var/www/html /mnt
+ [root@pcmk-1 ~]# umount /dev/drbd1
+ [root@pcmk-1 ~]# drbdadm verify wwwdata
+
+Reconfigure the Cluster for GFS2
+################################
+
+With the ``WebFS`` resource stopped, let's update the configuration.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs resource config WebFS
+ Resource: WebFS (class=ocf provider=heartbeat type=Filesystem)
+ Attributes: device=/dev/drbd1 directory=/var/www/html fstype=xfs
+ Meta Attrs: target-role=Stopped
+ Operations: monitor interval=20s timeout=40s (WebFS-monitor-interval-20s)
+ start interval=0s timeout=60s (WebFS-start-interval-0s)
+ stop interval=0s timeout=60s (WebFS-stop-interval-0s)
+
+The fstype option needs to be updated to ``gfs2`` instead of ``xfs``.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs resource update WebFS fstype=gfs2
+ [root@pcmk-1 ~]# pcs resource config WebFS
+ Resource: WebFS (class=ocf provider=heartbeat type=Filesystem)
+ Attributes: device=/dev/drbd1 directory=/var/www/html fstype=gfs2
+ Meta Attrs: target-role=Stopped
+ Operations: monitor interval=20s timeout=40s (WebFS-monitor-interval-20s)
+ start interval=0s timeout=60s (WebFS-start-interval-0s)
+ stop interval=0s timeout=60s (WebFS-stop-interval-0s)
+
+GFS2 requires that DLM be running, so we also need to set up new colocation
+and ordering constraints for it:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs constraint colocation add WebFS with dlm-clone
+ [root@pcmk-1 ~]# pcs constraint order dlm-clone then WebFS
+ Adding dlm-clone WebFS (kind: Mandatory) (Options: first-action=start then-action=start)
+ [root@pcmk-1 ~]# pcs constraint
+ Location Constraints:
+ Resource: WebSite
+ Enabled on:
+ Node: pcmk-2 (score:50)
+ Ordering Constraints:
+ start ClusterIP then start WebSite (kind:Mandatory)
+ promote WebData-clone then start WebFS (kind:Mandatory)
+ start WebFS then start WebSite (kind:Mandatory)
+ start dlm-clone then start WebFS (kind:Mandatory)
+ Colocation Constraints:
+ WebSite with ClusterIP (score:INFINITY)
+ WebFS with WebData-clone (score:INFINITY) (rsc-role:Started) (with-rsc-role:Promoted)
+ WebSite with WebFS (score:INFINITY)
+ WebFS with dlm-clone (score:INFINITY)
+ Ticket Constraints:
+
+We also need to update the ``no-quorum-policy`` property to ``freeze``. By
+default, ``no-quorum-policy`` is set to ``stop``, meaning that once quorum is
+lost, all resources on the remaining partition will immediately be stopped.
+Typically this default is the safest option, but unlike most resources, GFS2
+requires quorum to function. When quorum is lost, neither the applications
+using the GFS2 mounts nor the GFS2 mounts themselves can be stopped correctly.
+Any attempt to stop these resources without quorum will fail, which will
+ultimately result in the entire cluster being fenced every time quorum is
+lost.
+
+To address this situation, set ``no-quorum-policy`` to ``freeze`` when GFS2 is
+in use. This means that when quorum is lost, the remaining partition will do
+nothing until quorum is regained.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs property set no-quorum-policy=freeze
+
+
+.. index::
+ pair: filesystem; clone
+
+Clone the Filesystem Resource
+#############################
+
+Now that we have a cluster filesystem ready to go, we can configure the cluster
+so both nodes mount the filesystem.
+
+Clone the ``Filesystem`` resource in a new configuration.
+Notice how ``pcs`` automatically updates the relevant constraints again.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs cluster cib active_cfg
+ [root@pcmk-1 ~]# pcs -f active_cfg resource clone WebFS
+ [root@pcmk-1 ~]# pcs -f active_cfg constraint
+ Location Constraints:
+ Resource: WebSite
+ Enabled on:
+ Node: pcmk-2 (score:50)
+ Ordering Constraints:
+ start ClusterIP then start WebSite (kind:Mandatory)
+ promote WebData-clone then start WebFS-clone (kind:Mandatory)
+ start WebFS-clone then start WebSite (kind:Mandatory)
+ start dlm-clone then start WebFS-clone (kind:Mandatory)
+ Colocation Constraints:
+ WebSite with ClusterIP (score:INFINITY)
+ WebFS-clone with WebData-clone (score:INFINITY) (rsc-role:Started) (with-rsc-role:Promoted)
+ WebSite with WebFS-clone (score:INFINITY)
+ WebFS-clone with dlm-clone (score:INFINITY)
+ Ticket Constraints:
+
+Tell the cluster that it is now allowed to promote both instances to be DRBD
+Primary.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs -f active_cfg resource update WebData-clone promoted-max=2
+
+Finally, load our configuration to the cluster, and re-enable the ``WebFS``
+resource (which we disabled earlier).
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs cluster cib-push active_cfg --config
+ CIB updated
+ [root@pcmk-1 ~]# pcs resource enable WebFS
+
+After all the processes are started, the status should look similar to this.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs resource
+ * ClusterIP (ocf:heartbeat:IPaddr2): Started pcmk-1
+ * WebSite (ocf:heartbeat:apache): Started pcmk-1
+ * Clone Set: WebData-clone [WebData] (promotable):
+ * Promoted: [ pcmk-1 pcmk-2 ]
+ * Clone Set: dlm-clone [dlm]:
+ * Started: [ pcmk-1 pcmk-2 ]
+ * Clone Set: WebFS-clone [WebFS]:
+ * Started: [ pcmk-1 pcmk-2 ]
+
+Test Failover
+#############
+
+Testing failover is left as an exercise for the reader.
+
+With this configuration, the data is now active/active. The website
+administrator could change HTML files on either node, and the live website will
+show the changes even if it is running on the opposite node.
+
+If the web server is configured to listen on all IP addresses, it is possible
+to remove the constraints between the ``WebSite`` and ``ClusterIP`` resources,
+and clone the ``WebSite`` resource. The web server would always be ready to
+serve web pages, and only the IP address would need to be moved in a failover.
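+
+As a sketch only (assuming the resource names used in this guide, and that
+your ``pcs`` version supports these ``remove`` subcommands), the change could
+look like:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs constraint colocation remove WebSite ClusterIP
+ [root@pcmk-1 ~]# pcs constraint order remove ClusterIP
+ [root@pcmk-1 ~]# pcs resource clone WebSite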
diff --git a/doc/sphinx/Clusters_from_Scratch/active-passive.rst b/doc/sphinx/Clusters_from_Scratch/active-passive.rst
new file mode 100644
index 0000000..1699c43
--- /dev/null
+++ b/doc/sphinx/Clusters_from_Scratch/active-passive.rst
@@ -0,0 +1,324 @@
+Create an Active/Passive Cluster
+--------------------------------
+
+.. index::
+ pair: resource; IP address
+
+Add a Resource
+##############
+
+Our first resource will be a floating IP address that the cluster can bring up
+on either node. Regardless of where any cluster service(s) are running, end
+users need to be able to communicate with them at a consistent address. Here,
+we will use ``192.168.122.120`` as the floating IP address, give it the
+imaginative name ``ClusterIP``, and tell the cluster to check whether it is
+still running every 30 seconds.
+
+.. WARNING::
+
+ The chosen address must not already be in use on the network, on a cluster
+ node or elsewhere. Do not reuse an IP address one of the nodes already has
+ configured.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs resource create ClusterIP ocf:heartbeat:IPaddr2 \
+ ip=192.168.122.120 cidr_netmask=24 op monitor interval=30s
+
+Another important piece of information here is ``ocf:heartbeat:IPaddr2``.
+This tells Pacemaker three things about the resource you want to add:
+
+* The first field (``ocf`` in this case) is the standard to which the resource
+ agent conforms and where to find it.
+
+* The second field (``heartbeat`` in this case) is known as the provider.
+ Currently, this field is supported only for OCF resources. It tells
+ Pacemaker which OCF namespace the resource script is in.
+
+* The third field (``IPaddr2`` in this case) is the name of the resource agent,
+ the executable file responsible for starting, stopping, monitoring, and
+ possibly promoting and demoting the resource.
+
+To obtain a list of the available resource standards (the ``ocf`` part of
+``ocf:heartbeat:IPaddr2``), run:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs resource standards
+ lsb
+ ocf
+ service
+ systemd
+
+To obtain a list of the available OCF resource providers (the ``heartbeat``
+part of ``ocf:heartbeat:IPaddr2``), run:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs resource providers
+ heartbeat
+ openstack
+ pacemaker
+
+Finally, if you want to see all the resource agents available for
+a specific OCF provider (the ``IPaddr2`` part of ``ocf:heartbeat:IPaddr2``), run:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs resource agents ocf:heartbeat
+ apache
+ conntrackd
+ corosync-qnetd
+ .
+ . (skipping lots of resources to save space)
+ .
+ VirtualDomain
+ Xinetd
+
+If you want to list all resource agents available on the system, run ``pcs
+resource list``. We'll skip that here.
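+
+To see the parameters that a particular agent accepts, along with their
+descriptions, you can ask ``pcs`` to describe it:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs resource describe ocf:heartbeat:IPaddr2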
+
+Now, verify that the IP resource has been added, and display the cluster's
+status to see that it is now active. (A stonith device should be configured
+by now; it's okay if yours doesn't match the one shown below.)
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs status
+ Cluster name: mycluster
+ Cluster Summary:
+ * Stack: corosync
+ * Current DC: pcmk-1 (version 2.1.2-4.el9-ada5c3b36e2) - partition with quorum
+ * Last updated: Wed Jul 27 00:37:28 2022
+ * Last change: Wed Jul 27 00:37:14 2022 by root via cibadmin on pcmk-1
+ * 2 nodes configured
+ * 2 resource instances configured
+
+ Node List:
+ * Online: [ pcmk-1 pcmk-2 ]
+
+ Full List of Resources:
+ * fence_dev (stonith:some_fence_agent): Started pcmk-1
+ * ClusterIP (ocf:heartbeat:IPaddr2): Started pcmk-2
+
+ Daemon Status:
+ corosync: active/disabled
+ pacemaker: active/disabled
+ pcsd: active/enabled
+
+On the node where the ``ClusterIP`` resource is running, verify that the
+address has been added.
+
+.. code-block:: console
+
+ [root@pcmk-2 ~]# ip -o addr show
+ 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
+ 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
+ 2: enp1s0 inet 192.168.122.102/24 brd 192.168.122.255 scope global noprefixroute enp1s0\ valid_lft forever preferred_lft forever
+ 2: enp1s0 inet 192.168.122.120/24 brd 192.168.122.255 scope global secondary enp1s0\ valid_lft forever preferred_lft forever
+ 2: enp1s0 inet6 fe80::5054:ff:fe95:209/64 scope link noprefixroute \ valid_lft forever preferred_lft forever
+
+Perform a Failover
+##################
+
+Since our ultimate goal is high availability, we should test failover of
+our new resource before moving on.
+
+First, from the ``pcs status`` output in the previous step, find the node on
+which the IP address is running. You can see that the status of the
+``ClusterIP`` resource is ``Started`` on a particular node (in this example,
+``pcmk-2``). Shut down ``pacemaker`` and ``corosync`` on that machine to
+trigger a failover.
+
+.. code-block:: console
+
+ [root@pcmk-2 ~]# pcs cluster stop pcmk-2
+ pcmk-2: Stopping Cluster (pacemaker)...
+ pcmk-2: Stopping Cluster (corosync)...
+
+.. NOTE::
+
+ A cluster command such as ``pcs cluster stop <NODENAME>`` can be run from
+ any node in the cluster, not just the node where the cluster services will
+ be stopped. Running ``pcs cluster stop`` without a ``<NODENAME>`` stops the
+ cluster services on the local host. The same is true for ``pcs cluster
+ start`` and many other such commands.
+
+Verify that ``pacemaker`` and ``corosync`` are no longer running:
+
+.. code-block:: console
+
+ [root@pcmk-2 ~]# pcs status
+ Error: error running crm_mon, is pacemaker running?
+ Could not connect to pacemakerd: Connection refused
+ crm_mon: Connection to cluster failed: Connection refused
+
+Go to the other node, and check the cluster status.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs status
+ Cluster name: mycluster
+ Cluster Summary:
+ * Stack: corosync
+ * Current DC: pcmk-1 (version 2.1.2-4.el9-ada5c3b36e2) - partition with quorum
+ * Last updated: Wed Jul 27 00:43:51 2022
+ * Last change: Wed Jul 27 00:43:14 2022 by root via cibadmin on pcmk-1
+ * 2 nodes configured
+ * 2 resource instances configured
+
+ Node List:
+ * Online: [ pcmk-1 ]
+ * OFFLINE: [ pcmk-2 ]
+
+ Full List of Resources:
+ * fence_dev (stonith:some_fence_agent): Started pcmk-1
+ * ClusterIP (ocf:heartbeat:IPaddr2): Started pcmk-1
+
+ Daemon Status:
+ corosync: active/disabled
+ pacemaker: active/disabled
+ pcsd: active/enabled
+
+Notice that ``pcmk-2`` is ``OFFLINE`` for cluster purposes (its ``pcsd`` is still
+active, allowing it to receive ``pcs`` commands, but it is not participating in
+the cluster).
+
+Also notice that ``ClusterIP`` is now running on ``pcmk-1`` -- failover happened
+automatically, and no errors are reported.
+
+.. topic:: Quorum
+
+ If a cluster splits into two (or more) groups of nodes that can no longer
+ communicate with each other (a.k.a. *partitions*), *quorum* is used to
+ prevent resources from starting on more nodes than desired, which would
+ risk data corruption.
+
+ A cluster has quorum when more than half of all known nodes are online in
+ the same partition, or for the mathematically inclined, whenever the following
+ inequality is true:
+
+ .. code-block:: console
+
+ total_nodes < 2 * active_nodes
+
+ For example, if a 5-node cluster split into 3- and 2-node partitions,
+ the 3-node partition would have quorum and could continue serving resources.
+ If a 6-node cluster split into two 3-node partitions, neither partition
+ would have quorum; Pacemaker's default behavior in such cases is to
+ stop all resources, in order to prevent data corruption.
+
+ Two-node clusters are a special case. By the above definition,
+ a two-node cluster would only have quorum when both nodes are
+ running. This would make the creation of a two-node cluster pointless.
+ However, Corosync has the ability to require only one node for quorum in a
+ two-node cluster.
+
+ The ``pcs cluster setup`` command will automatically configure
+ ``two_node: 1`` in ``corosync.conf``, so a two-node cluster will "just work".
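+
+ A minimal sketch of the resulting quorum settings in
+ ``/etc/corosync/corosync.conf`` (your generated file may include
+ additional options):
+
+ .. code-block:: none
+
+    quorum {
+        provider: corosync_votequorum
+        two_node: 1
+    }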
+
+ .. NOTE::
+
+ You might wonder, "What if the nodes in a two-node cluster can't
+ communicate with each other? Wouldn't this ``two_node: 1`` setting
+ create a split-brain scenario, in which each node has quorum separately
+ and they both try to manage the same cluster resources?"
+
+ As long as fencing is configured, there is no danger of this. If the
+ nodes lose contact with each other, each node will try to fence the
+ other node. Resource management is disabled until fencing succeeds;
+ neither node is allowed to start, stop, promote, or demote resources.
+
+ After fencing succeeds, the surviving node can safely recover any
+ resources that were running on the fenced node.
+
+ If the fenced node boots up and rejoins the cluster, it does not have
+ quorum until it can communicate with the surviving node at least once.
+ This prevents "fence loops," in which a node gets fenced, reboots,
+ rejoins the cluster, and fences the other node. This protective
+ behavior is controlled by the ``wait_for_all: 1`` option, which is
+ enabled automatically when ``two_node: 1`` is configured.
+
+ If you are using a different cluster shell, you may have to configure
+ ``corosync.conf`` appropriately yourself.
+
+Now, simulate node recovery by restarting the cluster stack on ``pcmk-2``, and
+check the cluster's status. (It may take a little while before the cluster
+gets going on the node, but it eventually will look like the below.)
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs status
+ Cluster name: mycluster
+ Cluster Summary:
+ * Stack: corosync
+ * Current DC: pcmk-1 (version 2.1.2-4.el9-ada5c3b36e2) - partition with quorum
+ * Last updated: Wed Jul 27 00:45:17 2022
+ * Last change: Wed Jul 27 00:45:01 2022 by root via cibadmin on pcmk-1
+ * 2 nodes configured
+ * 2 resource instances configured
+
+ Node List:
+ * Online: [ pcmk-1 pcmk-2 ]
+
+ Full List of Resources:
+ * fence_dev (stonith:some_fence_agent): Started pcmk-1
+ * ClusterIP (ocf:heartbeat:IPaddr2): Started pcmk-1
+
+ Daemon Status:
+ corosync: active/disabled
+ pacemaker: active/disabled
+ pcsd: active/enabled
+
+.. index:: stickiness
+
+Prevent Resources from Moving after Recovery
+############################################
+
+In most circumstances, it is highly desirable to prevent healthy
+resources from being moved around the cluster. Moving resources almost
+always requires a period of downtime. For complex services such as
+databases, this period can be quite long.
+
+To address this, Pacemaker has the concept of resource *stickiness*,
+which controls how strongly a service prefers to stay running where it
+is. You may like to think of it as the "cost" of any downtime. By
+default, [#]_ Pacemaker assumes there is zero cost associated with moving
+resources and will do so to achieve "optimal" [#]_ resource placement.
+We can specify a different stickiness for every resource, but it is
+often sufficient to change the default.
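+
+For illustration only, a single resource's stickiness could be overridden via
+its meta-attribute (this guide changes just the default):
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs resource meta ClusterIP resource-stickiness=200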
+
+In |CFS_DISTRO| |CFS_DISTRO_VER|, the cluster setup process automatically
+configures a default resource stickiness score of 1. This is sufficient to
+prevent healthy resources from moving around the cluster when there are no
+user-configured constraints that influence where Pacemaker prefers to run those
+resources.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs resource defaults
+ Meta Attrs: build-resource-defaults
+ resource-stickiness=1
+
+For this example, we will increase the default resource stickiness to 100.
+Later in this guide, we will configure a location constraint with a score lower
+than the default resource stickiness.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs resource defaults update resource-stickiness=100
+ Warning: Defaults do not apply to resources which override them with their own defined values
+ [root@pcmk-1 ~]# pcs resource defaults
+ Meta Attrs: build-resource-defaults
+ resource-stickiness=100
+
+
+.. [#] Zero resource stickiness is Pacemaker's default if you remove the
+ default value that was created at cluster setup time, or if you're using
+ an older version of Pacemaker that doesn't create this value at setup
+ time.
+
+.. [#] Pacemaker's default definition of "optimal" may not always agree with
+ yours. The order in which Pacemaker processes lists of resources and
+ nodes creates implicit preferences in situations where the administrator
+ has not explicitly specified them.
diff --git a/doc/sphinx/Clusters_from_Scratch/ap-configuration.rst b/doc/sphinx/Clusters_from_Scratch/ap-configuration.rst
new file mode 100644
index 0000000..b71e9af
--- /dev/null
+++ b/doc/sphinx/Clusters_from_Scratch/ap-configuration.rst
@@ -0,0 +1,345 @@
+Configuration Recap
+-------------------
+
+Final Cluster Configuration
+###########################
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs resource
+ * ClusterIP (ocf:heartbeat:IPaddr2): Started pcmk-1
+ * WebSite (ocf:heartbeat:apache): Started pcmk-1
+ * Clone Set: WebData-clone [WebData] (promotable):
+ * Promoted: [ pcmk-1 pcmk-2 ]
+ * Clone Set: dlm-clone [dlm]:
+ * Started: [ pcmk-1 pcmk-2 ]
+ * Clone Set: WebFS-clone [WebFS]:
+ * Started: [ pcmk-1 pcmk-2 ]
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs resource op defaults
+ Meta Attrs: op_defaults-meta_attributes
+ timeout=240s
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs stonith
+ * fence_dev (stonith:some_fence_agent): Started pcmk-1
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs constraint
+ Location Constraints:
+ Resource: WebSite
+ Enabled on:
+ Node: pcmk-2 (score:50)
+ Ordering Constraints:
+ start ClusterIP then start WebSite (kind:Mandatory)
+ promote WebData-clone then start WebFS-clone (kind:Mandatory)
+ start WebFS-clone then start WebSite (kind:Mandatory)
+ start dlm-clone then start WebFS-clone (kind:Mandatory)
+ Colocation Constraints:
+ WebSite with ClusterIP (score:INFINITY)
+ WebFS-clone with WebData-clone (score:INFINITY) (rsc-role:Started) (with-rsc-role:Promoted)
+ WebSite with WebFS-clone (score:INFINITY)
+ WebFS-clone with dlm-clone (score:INFINITY)
+ Ticket Constraints:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs status
+ Cluster name: mycluster
+ Cluster Summary:
+ * Stack: corosync
+ * Current DC: pcmk-1 (version 2.1.2-4.el9-ada5c3b36e2) - partition with quorum
+ * Last updated: Wed Jul 27 08:57:57 2022
+ * Last change: Wed Jul 27 08:55:00 2022 by root via cibadmin on pcmk-1
+ * 2 nodes configured
+ * 9 resource instances configured
+
+ Node List:
+ * Online: [ pcmk-1 pcmk-2 ]
+
+ Full List of Resources:
+ * fence_dev (stonith:some_fence_agent): Started pcmk-1
+ * ClusterIP (ocf:heartbeat:IPaddr2): Started pcmk-1
+ * WebSite (ocf:heartbeat:apache): Started pcmk-1
+ * Clone Set: WebData-clone [WebData] (promotable):
+ * Promoted: [ pcmk-1 pcmk-2 ]
+ * Clone Set: dlm-clone [dlm]:
+ * Started: [ pcmk-1 pcmk-2 ]
+ * Clone Set: WebFS-clone [WebFS]:
+ * Started: [ pcmk-1 pcmk-2 ]
+
+ Daemon Status:
+ corosync: active/disabled
+ pacemaker: active/disabled
+ pcsd: active/enabled
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs config
+ Cluster Name: mycluster
+ Corosync Nodes:
+ pcmk-1 pcmk-2
+ Pacemaker Nodes:
+ pcmk-1 pcmk-2
+
+ Resources:
+ Resource: ClusterIP (class=ocf provider=heartbeat type=IPaddr2)
+ Attributes: cidr_netmask=24 ip=192.168.122.120
+ Operations: monitor interval=30s (ClusterIP-monitor-interval-30s)
+ start interval=0s timeout=20s (ClusterIP-start-interval-0s)
+ stop interval=0s timeout=20s (ClusterIP-stop-interval-0s)
+ Resource: WebSite (class=ocf provider=heartbeat type=apache)
+ Attributes: configfile=/etc/httpd/conf/httpd.conf statusurl=http://localhost/server-status
+ Operations: monitor interval=1min (WebSite-monitor-interval-1min)
+ start interval=0s timeout=40s (WebSite-start-interval-0s)
+ stop interval=0s timeout=60s (WebSite-stop-interval-0s)
+ Clone: WebData-clone
+ Meta Attrs: clone-max=2 clone-node-max=1 notify=true promotable=true promoted-max=2 promoted-node-max=1
+ Resource: WebData (class=ocf provider=linbit type=drbd)
+ Attributes: drbd_resource=wwwdata
+ Operations: demote interval=0s timeout=90 (WebData-demote-interval-0s)
+ monitor interval=29s role=Promoted (WebData-monitor-interval-29s)
+ monitor interval=31s role=Unpromoted (WebData-monitor-interval-31s)
+ notify interval=0s timeout=90 (WebData-notify-interval-0s)
+ promote interval=0s timeout=90 (WebData-promote-interval-0s)
+ reload interval=0s timeout=30 (WebData-reload-interval-0s)
+ start interval=0s timeout=240 (WebData-start-interval-0s)
+ stop interval=0s timeout=100 (WebData-stop-interval-0s)
+ Clone: dlm-clone
+ Meta Attrs: interleave=true ordered=true
+ Resource: dlm (class=ocf provider=pacemaker type=controld)
+ Operations: monitor interval=60s (dlm-monitor-interval-60s)
+ start interval=0s timeout=90s (dlm-start-interval-0s)
+ stop interval=0s timeout=100s (dlm-stop-interval-0s)
+ Clone: WebFS-clone
+ Resource: WebFS (class=ocf provider=heartbeat type=Filesystem)
+ Attributes: device=/dev/drbd1 directory=/var/www/html fstype=gfs2
+ Operations: monitor interval=20s timeout=40s (WebFS-monitor-interval-20s)
+ start interval=0s timeout=60s (WebFS-start-interval-0s)
+ stop interval=0s timeout=60s (WebFS-stop-interval-0s)
+
+ Stonith Devices:
+ Resource: fence_dev (class=stonith type=some_fence_agent)
+ Attributes: pcmk_delay_base=pcmk-1:5s;pcmk-2:0s pcmk_host_map=pcmk-1:almalinux9-1;pcmk-2:almalinux9-2
+ Operations: monitor interval=60s (fence_dev-monitor-interval-60s)
+ Fencing Levels:
+
+ Location Constraints:
+ Resource: WebSite
+ Enabled on:
+ Node: pcmk-2 (score:50) (id:location-WebSite-pcmk-2-50)
+ Ordering Constraints:
+ start ClusterIP then start WebSite (kind:Mandatory) (id:order-ClusterIP-WebSite-mandatory)
+ promote WebData-clone then start WebFS-clone (kind:Mandatory) (id:order-WebData-clone-WebFS-mandatory)
+ start WebFS-clone then start WebSite (kind:Mandatory) (id:order-WebFS-WebSite-mandatory)
+ start dlm-clone then start WebFS-clone (kind:Mandatory) (id:order-dlm-clone-WebFS-mandatory)
+ Colocation Constraints:
+ WebSite with ClusterIP (score:INFINITY) (id:colocation-WebSite-ClusterIP-INFINITY)
+ WebFS-clone with WebData-clone (score:INFINITY) (rsc-role:Started) (with-rsc-role:Promoted) (id:colocation-WebFS-WebData-clone-INFINITY)
+ WebSite with WebFS-clone (score:INFINITY) (id:colocation-WebSite-WebFS-INFINITY)
+ WebFS-clone with dlm-clone (score:INFINITY) (id:colocation-WebFS-dlm-clone-INFINITY)
+ Ticket Constraints:
+
+ Alerts:
+ No alerts defined
+
+ Resources Defaults:
+ Meta Attrs: build-resource-defaults
+ resource-stickiness=100
+ Operations Defaults:
+ Meta Attrs: op_defaults-meta_attributes
+ timeout=240s
+
+ Cluster Properties:
+ cluster-infrastructure: corosync
+ cluster-name: mycluster
+ dc-version: 2.1.2-4.el9-ada5c3b36e2
+ have-watchdog: false
+ last-lrm-refresh: 1658896047
+ no-quorum-policy: freeze
+ stonith-enabled: true
+
+ Tags:
+ No tags defined
+
+ Quorum:
+ Options:
+
+Node List
+#########
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs status nodes
+ Pacemaker Nodes:
+ Online: pcmk-1 pcmk-2
+ Standby:
+ Standby with resource(s) running:
+ Maintenance:
+ Offline:
+ Pacemaker Remote Nodes:
+ Online:
+ Standby:
+ Standby with resource(s) running:
+ Maintenance:
+ Offline:
+
+Cluster Options
+###############
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs property
+ Cluster Properties:
+ cluster-infrastructure: corosync
+ cluster-name: mycluster
+ dc-version: 2.1.2-4.el9-ada5c3b36e2
+ have-watchdog: false
+ no-quorum-policy: freeze
+ stonith-enabled: true
+
+The output shows cluster-wide configuration options, as well as some
+baseline-level state information. The output includes:
+
+* ``cluster-infrastructure`` - the cluster communications layer in use
+* ``cluster-name`` - the cluster name chosen by the administrator when the
+ cluster was created
+* ``dc-version`` - the version (including upstream source-code hash) of
+ ``pacemaker`` used on the Designated Controller, which is the node elected to
+ determine what actions are needed when events occur
+* ``have-watchdog`` - whether watchdog integration is enabled; set
+ automatically when SBD is enabled
+* ``stonith-enabled`` - whether nodes may be fenced as part of recovery
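+
+An individual property can also be queried with the lower-level
+``crm_attribute`` tool. For example, to check whether fencing is enabled:
+
+.. code-block:: console
+
+   [root@pcmk-1 ~]# crm_attribute --query --name stonith-enabled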
+
+.. NOTE::
+
+ This command is equivalent to ``pcs property config``.
+
+Resources
+#########
+
+Default Options
+_______________
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs resource defaults
+ Meta Attrs: build-resource-defaults
+ resource-stickiness=100
+
+This shows cluster-wide resource defaults that apply to every resource that
+does not explicitly set the option itself. Above:
+
+* ``resource-stickiness`` - Specify how strongly a resource prefers to remain
+ on its current node. Alternatively, you can view this as the level of
+ aversion to moving healthy resources to other machines.
+
+Fencing
+_______
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs stonith status
+ * fence_dev (stonith:some_fence_agent): Started pcmk-1
+ [root@pcmk-1 ~]# pcs stonith config
+ Resource: fence_dev (class=stonith type=some_fence_agent)
+ Attributes: pcmk_delay_base=pcmk-1:5s;pcmk-2:0s pcmk_host_map=pcmk-1:almalinux9-1;pcmk-2:almalinux9-2
+ Operations: monitor interval=60s (fence_dev-monitor-interval-60s)
+
+Service Address
+_______________
+
+Users of the services provided by the cluster require an unchanging
+address with which to access them.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs resource config ClusterIP
+ Resource: ClusterIP (class=ocf provider=heartbeat type=IPaddr2)
+ Attributes: cidr_netmask=24 ip=192.168.122.120
+ Operations: monitor interval=30s (ClusterIP-monitor-interval-30s)
+ start interval=0s timeout=20s (ClusterIP-start-interval-0s)
+ stop interval=0s timeout=20s (ClusterIP-stop-interval-0s)
+
+DRBD - Shared Storage
+_____________________
+
+Here, we define the DRBD service and specify which DRBD resource (from
+``/etc/drbd.d/*.res``) it should manage. We make it a promotable clone
+resource and, in order to have an active/active setup, allow both instances to
+be promoted at the same time. We also set the notify option so that the cluster
+will tell the ``drbd`` agent when its peer changes state.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs resource config WebData-clone
+ Clone: WebData-clone
+ Meta Attrs: clone-max=2 clone-node-max=1 notify=true promotable=true promoted-max=2 promoted-node-max=1
+ Resource: WebData (class=ocf provider=linbit type=drbd)
+ Attributes: drbd_resource=wwwdata
+ Operations: demote interval=0s timeout=90 (WebData-demote-interval-0s)
+ monitor interval=29s role=Promoted (WebData-monitor-interval-29s)
+ monitor interval=31s role=Unpromoted (WebData-monitor-interval-31s)
+ notify interval=0s timeout=90 (WebData-notify-interval-0s)
+ promote interval=0s timeout=90 (WebData-promote-interval-0s)
+ reload interval=0s timeout=30 (WebData-reload-interval-0s)
+ start interval=0s timeout=240 (WebData-start-interval-0s)
+ stop interval=0s timeout=100 (WebData-stop-interval-0s)
+ [root@pcmk-1 ~]# pcs constraint ref WebData-clone
+ Resource: WebData-clone
+ colocation-WebFS-WebData-clone-INFINITY
+ order-WebData-clone-WebFS-mandatory
+
+Cluster Filesystem
+__________________
+
+The cluster filesystem ensures that files are read and written correctly.
+We need to specify the block device (provided by DRBD), where we want it
+mounted and that we are using GFS2. Again, it is a clone because it is
+intended to be active on both nodes. The additional constraints ensure
+that it can only be started on nodes with active DLM and DRBD instances.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs resource config WebFS-clone
+ Clone: WebFS-clone
+ Resource: WebFS (class=ocf provider=heartbeat type=Filesystem)
+ Attributes: device=/dev/drbd1 directory=/var/www/html fstype=gfs2
+ Operations: monitor interval=20s timeout=40s (WebFS-monitor-interval-20s)
+ start interval=0s timeout=60s (WebFS-start-interval-0s)
+ stop interval=0s timeout=60s (WebFS-stop-interval-0s)
+ [root@pcmk-1 ~]# pcs constraint ref WebFS-clone
+ Resource: WebFS-clone
+ colocation-WebFS-WebData-clone-INFINITY
+ colocation-WebSite-WebFS-INFINITY
+ colocation-WebFS-dlm-clone-INFINITY
+ order-WebData-clone-WebFS-mandatory
+ order-WebFS-WebSite-mandatory
+ order-dlm-clone-WebFS-mandatory
+
+Apache
+______
+
+Lastly, we have the actual service, Apache. We need only tell the cluster
+where to find its main configuration file and restrict it to running on
+a node that has the required filesystem mounted and the IP address active.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs resource config WebSite
+ Resource: WebSite (class=ocf provider=heartbeat type=apache)
+ Attributes: configfile=/etc/httpd/conf/httpd.conf statusurl=http://localhost/server-status
+ Operations: monitor interval=1min (WebSite-monitor-interval-1min)
+ start interval=0s timeout=40s (WebSite-start-interval-0s)
+ stop interval=0s timeout=60s (WebSite-stop-interval-0s)
+ [root@pcmk-1 ~]# pcs constraint ref WebSite
+ Resource: WebSite
+ colocation-WebSite-ClusterIP-INFINITY
+ colocation-WebSite-WebFS-INFINITY
+ location-WebSite-pcmk-2-50
+ order-ClusterIP-WebSite-mandatory
+ order-WebFS-WebSite-mandatory
diff --git a/doc/sphinx/Clusters_from_Scratch/ap-corosync-conf.rst b/doc/sphinx/Clusters_from_Scratch/ap-corosync-conf.rst
new file mode 100644
index 0000000..3bd1b8d
--- /dev/null
+++ b/doc/sphinx/Clusters_from_Scratch/ap-corosync-conf.rst
@@ -0,0 +1,43 @@
+.. _sample-corosync-configuration:
+
+Sample Corosync Configuration
+---------------------------------
+
+.. topic:: Sample ``corosync.conf`` for two-node cluster created by ``pcs``.
+
+ .. code-block:: none
+
+ totem {
+ version: 2
+ cluster_name: mycluster
+ transport: knet
+ crypto_cipher: aes256
+ crypto_hash: sha256
+ cluster_uuid: e592f61f916943978bdf7c046a195980
+ }
+
+ nodelist {
+ node {
+ ring0_addr: pcmk-1
+ name: pcmk-1
+ nodeid: 1
+ }
+
+ node {
+ ring0_addr: pcmk-2
+ name: pcmk-2
+ nodeid: 2
+ }
+ }
+
+ quorum {
+ provider: corosync_votequorum
+ two_node: 1
+ }
+
+ logging {
+ to_logfile: yes
+ logfile: /var/log/cluster/corosync.log
+ to_syslog: yes
+ timestamp: on
+ }
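+
+On a running cluster, you can check Corosync's link status (and confirm it is
+using this configuration) with ``corosync-cfgtool``:
+
+.. code-block:: console
+
+   [root@pcmk-1 ~]# corosync-cfgtool -s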
diff --git a/doc/sphinx/Clusters_from_Scratch/ap-reading.rst b/doc/sphinx/Clusters_from_Scratch/ap-reading.rst
new file mode 100644
index 0000000..546b4f3
--- /dev/null
+++ b/doc/sphinx/Clusters_from_Scratch/ap-reading.rst
@@ -0,0 +1,10 @@
+Further Reading
+---------------
+
+- Project Website https://www.clusterlabs.org/
+
+- SuSE has a comprehensive guide to cluster commands (though using the ``crmsh`` command-line
+ shell rather than ``pcs``) at:
+ https://www.suse.com/documentation/sle_ha/book_sleha/data/book_sleha.html
+
+- Corosync http://www.corosync.org/
diff --git a/doc/sphinx/Clusters_from_Scratch/apache.rst b/doc/sphinx/Clusters_from_Scratch/apache.rst
new file mode 100644
index 0000000..e4eddff
--- /dev/null
+++ b/doc/sphinx/Clusters_from_Scratch/apache.rst
@@ -0,0 +1,448 @@
+.. index::
+ single: Apache HTTP Server
+
+Add Apache HTTP Server as a Cluster Service
+-------------------------------------------
+
+Now that we have a basic but functional active/passive two-node cluster,
+we're ready to add some real services. We're going to start with
+Apache HTTP Server because it is a feature of many clusters and is relatively
+simple to configure.
+
+Install Apache
+##############
+
+Before continuing, we need to make sure Apache is installed on both
+hosts. We will also allow the cluster to use the ``wget`` tool (this is the
+default, but ``curl`` is also supported) to check the status of the Apache
+server. We'll install ``httpd`` (Apache) and ``wget`` now.
+
+.. code-block:: console
+
+ # dnf install -y httpd wget
+ # firewall-cmd --permanent --add-service=http
+ # firewall-cmd --reload
+
+.. IMPORTANT::
+
+ Do **not** enable the ``httpd`` service. Services that are intended to
+ be managed via the cluster software should never be managed by the OS.
+ It is often useful, however, to manually start the service, verify that
+ it works, then stop it again, before adding it to the cluster. This
+ allows you to resolve any non-cluster-related problems before continuing.
+ Since this is a simple example, we'll skip that step here.
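+
+If you do want to perform that manual check, a minimal sketch (run on one
+node) would be:
+
+.. code-block:: console
+
+   # systemctl start httpd
+   # systemctl status httpd
+   # systemctl stop httpd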
+
+Create Website Documents
+########################
+
+We need to create a page for Apache to serve. On |CFS_DISTRO| |CFS_DISTRO_VER|, the
+default Apache document root is ``/var/www/html``, so we'll create an index
+file there. For the moment, we will simplify things by serving a static site
+and manually synchronizing the data between the two nodes, so run this command
+on both nodes:
+
+.. code-block:: console
+
+ # cat <<-END >/var/www/html/index.html
+ <html>
+ <body>My Test Site - $(hostname)</body>
+ </html>
+ END
+
+
+.. index::
+ single: Apache HTTP Server; status URL
+
+Enable the Apache Status URL
+############################
+
+Pacemaker uses the ``apache`` resource agent to monitor the health of your
+Apache instance via the ``server-status`` URL, and to recover the instance if
+it fails. On both nodes, configure this URL as follows:
+
+.. code-block:: console
+
+ # cat <<-END >/etc/httpd/conf.d/status.conf
+ <Location /server-status>
+ SetHandler server-status
+ Require local
+ </Location>
+ END
+
+.. NOTE::
+
+   If you are using a different operating system, ``server-status`` may
+   already be enabled or may be configurable in a different location. If you
+   are using a version of Apache HTTP Server older than 2.4, the syntax will
+   be different.
+
+
+.. index::
+ pair: Apache HTTP Server; resource
+
+Configure the Cluster
+#####################
+
+At this point, Apache is ready to go, and all that needs to be done is to
+add it to the cluster. Let's call the resource ``WebSite``. We need to use
+an OCF resource agent called ``apache`` in the ``heartbeat`` namespace [#]_.
+The script's only required parameter is the path to the main Apache
+configuration file, and we'll tell the cluster to check once a
+minute that Apache is still running.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs resource create WebSite ocf:heartbeat:apache \
+ configfile=/etc/httpd/conf/httpd.conf \
+ statusurl="http://localhost/server-status" \
+ op monitor interval=1min
+
+By default, the operation timeout for all resources' start, stop, monitor, and
+other operations is 20 seconds. In many cases, this timeout period is less than
+a particular resource's advised timeout period. For the purposes of this
+tutorial, we will adjust the global operation timeout default to 240 seconds.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs resource op defaults
+ No defaults set
+ [root@pcmk-1 ~]# pcs resource op defaults update timeout=240s
+ Warning: Defaults do not apply to resources which override them with their own defined values
+ [root@pcmk-1 ~]# pcs resource op defaults
+ Meta Attrs: op_defaults-meta_attributes
+ timeout: 240s
+
+.. NOTE::
+
+ In a production cluster, it is usually better to adjust each resource's
+ start, stop, and monitor timeouts to values that are appropriate for
+ the behavior observed in your environment, rather than adjusting
+ the global default.
+
+.. NOTE::
+
+ If you use a tool like ``pcs`` to create a resource, its operations may be
+ automatically configured with explicit timeout values that override the
+ Pacemaker built-in default value of 20 seconds. If the resource agent's
+ metadata contains suggested values for the operation timeouts in a
+ particular format, ``pcs`` reads those values and adds them to the
+ configuration at resource creation time.
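+
+You can inspect an agent's metadata, including any suggested operation
+timeouts, with ``pcs resource describe``:
+
+.. code-block:: console
+
+   [root@pcmk-1 ~]# pcs resource describe ocf:heartbeat:apache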
+
+After a short delay, we should see the cluster start Apache.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs status
+ Cluster name: mycluster
+ Cluster Summary:
+ * Stack: corosync
+ * Current DC: pcmk-1 (version 2.1.2-4.el9-ada5c3b36e2) - partition with quorum
+ * Last updated: Wed Jul 27 00:47:44 2022
+ * Last change: Wed Jul 27 00:47:23 2022 by root via cibadmin on pcmk-1
+ * 2 nodes configured
+ * 3 resource instances configured
+
+ Node List:
+ * Online: [ pcmk-1 pcmk-2 ]
+
+ Full List of Resources:
+ * fence_dev (stonith:some_fence_agent): Started pcmk-1
+ * ClusterIP (ocf:heartbeat:IPaddr2): Started pcmk-1
+ * WebSite (ocf:heartbeat:apache): Started pcmk-2
+
+ Daemon Status:
+ corosync: active/disabled
+ pacemaker: active/disabled
+ pcsd: active/enabled
+
+Wait a moment, the ``WebSite`` resource isn't running on the same host as our
+IP address!
+
+.. NOTE::
+
+ If, in the ``pcs status`` output, you see the ``WebSite`` resource has
+ failed to start, then you've likely not enabled the status URL correctly.
+ You can check whether this is the problem by running:
+
+ .. code-block:: console
+
+ wget -O - http://localhost/server-status
+
+ If you see ``Not Found`` or ``Forbidden`` in the output, then this is likely the
+ problem. Ensure that the ``<Location /server-status>`` block is correct.
+
+.. index::
+ single: constraint; colocation
+ single: colocation constraint
+
+Ensure Resources Run on the Same Host
+#####################################
+
+To reduce the load on any one machine, Pacemaker will generally try to
+spread the configured resources across the cluster nodes. However, we
+can tell the cluster that two resources are related and need to run on
+the same host (or else one of them should not run at all, if they cannot run on
+the same node). Here, we instruct the cluster that ``WebSite`` can only run on
+the host where ``ClusterIP`` is active.
+
+To achieve this, we use a *colocation constraint* that indicates it is
+mandatory for ``WebSite`` to run on the same node as ``ClusterIP``. The
+"mandatory" part of the colocation constraint is indicated by using a
+score of ``INFINITY``. The ``INFINITY`` score also means that if ``ClusterIP``
+is not active anywhere, ``WebSite`` will not be permitted to run.
+
+.. NOTE::
+
+ ``INFINITY`` is the default score for a colocation constraint. If you don't
+ specify a score, ``INFINITY`` will be used automatically.
+
+.. IMPORTANT::
+
+   Colocation constraints are "directional", in that they imply certain
+   things about the order in which the two resources will have a location
+   chosen. In this case, we're saying that ``WebSite`` needs to be placed on
+   the same machine as ``ClusterIP``, which implies that the cluster must know
+   the location of ``ClusterIP`` before choosing a location for ``WebSite``.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs constraint colocation add WebSite with ClusterIP INFINITY
+ [root@pcmk-1 ~]# pcs constraint
+ Location Constraints:
+ Ordering Constraints:
+ Colocation Constraints:
+ WebSite with ClusterIP (score:INFINITY)
+ Ticket Constraints:
+ [root@pcmk-1 ~]# pcs status
+ Cluster name: mycluster
+ Cluster Summary:
+ * Stack: corosync
+ * Current DC: pcmk-1 (version 2.1.2-4.el9-ada5c3b36e2) - partition with quorum
+ * Last updated: Wed Jul 27 00:49:33 2022
+ * Last change: Wed Jul 27 00:49:16 2022 by root via cibadmin on pcmk-1
+ * 2 nodes configured
+ * 3 resource instances configured
+
+ Node List:
+ * Online: [ pcmk-1 pcmk-2 ]
+
+ Full List of Resources:
+ * fence_dev (stonith:some_fence_agent): Started pcmk-1
+ * ClusterIP (ocf:heartbeat:IPaddr2): Started pcmk-1
+ * WebSite (ocf:heartbeat:apache): Started pcmk-1
+
+ Daemon Status:
+ corosync: active/disabled
+ pacemaker: active/disabled
+ pcsd: active/enabled
+
+
+.. index::
+ single: constraint; ordering
+ single: ordering constraint
+
+Ensure Resources Start and Stop in Order
+########################################
+
+Like many services, Apache can be configured to bind to specific
+IP addresses on a host or to the wildcard IP address. If Apache
+binds to the wildcard, it doesn't matter whether an IP address
+is added before or after Apache starts; Apache will respond on
+that IP just the same. However, if Apache binds only to certain IP
+address(es), the order matters: If the address is added after Apache
+starts, Apache won't respond on that address.
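+
+As an illustration (using this guide's cluster IP), an ``httpd.conf`` that
+binds to one specific address rather than the wildcard would contain a line
+such as:
+
+.. code-block:: none
+
+   Listen 192.168.122.120:80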
+
+To be sure our ``WebSite`` responds regardless of Apache's address
+configuration, we need to make sure ``ClusterIP`` not only runs on the same
+node, but also starts before ``WebSite``. A colocation constraint ensures
+only that the resources run together; it doesn't affect the order in which
+the resources are started or stopped.
+
+We do this by adding an ordering constraint. By default, all order constraints
+are mandatory. This means, for example, that if ``ClusterIP`` needs to stop,
+then ``WebSite`` must stop first (or already be stopped); and if ``WebSite``
+needs to start, then ``ClusterIP`` must start first (or already be started).
+This
+also implies that the recovery of ``ClusterIP`` will trigger the recovery of
+``WebSite``, causing it to be restarted.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs constraint order ClusterIP then WebSite
+ Adding ClusterIP WebSite (kind: Mandatory) (Options: first-action=start then-action=start)
+ [root@pcmk-1 ~]# pcs constraint
+ Location Constraints:
+ Ordering Constraints:
+ start ClusterIP then start WebSite (kind:Mandatory)
+ Colocation Constraints:
+ WebSite with ClusterIP (score:INFINITY)
+ Ticket Constraints:
+
+.. NOTE::
+
+   The default action in an order constraint is ``start``. If you don't
+   specify an action, as in the example above, ``pcs`` automatically uses the
+   ``start`` action.
+
+.. NOTE::
+
+ We could have placed the ``ClusterIP`` and ``WebSite`` resources into a
+ **resource group** instead of configuring constraints. A resource group is
+ a compact and intuitive way to organize a set of resources into a chain of
+ colocation and ordering constraints. We will omit that in this guide; see
+ the `Pacemaker Explained <https://www.clusterlabs.org/pacemaker/doc/>`_
+ document for more details.
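+
+For reference only (this guide keeps the individual constraints), the
+group-based alternative would be a single command, roughly as follows, where
+``WebGroup`` is an arbitrary name:
+
+.. code-block:: console
+
+   [root@pcmk-1 ~]# pcs resource group add WebGroup ClusterIP WebSite
+
+A group implies colocation and ordering among its members, in the listed
+order.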
+
+
+.. index::
+ single: constraint; location
+ single: location constraint
+
+Prefer One Node Over Another
+############################
+
+Pacemaker does not rely on any sort of hardware symmetry between nodes,
+so it may well be that one machine is more powerful than the other.
+
+In such cases, you may want to host the resources on the more powerful node
+when it is available, to have the best performance -- or you may want to host
+the resources on the **less** powerful node when it's available, so you don't
+have to worry about whether you can handle the load after a failover.
+
+To do this, we create a location constraint.
+
+In the location constraint below, we are saying the ``WebSite`` resource
+prefers the node ``pcmk-2`` with a score of ``50``. Here, the score indicates
+how strongly we'd like the resource to run at this location.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs constraint location WebSite prefers pcmk-2=50
+ [root@pcmk-1 ~]# pcs constraint
+ Location Constraints:
+ Resource: WebSite
+ Enabled on:
+ Node: pcmk-2 (score:50)
+ Ordering Constraints:
+ start ClusterIP then start WebSite (kind:Mandatory)
+ Colocation Constraints:
+ WebSite with ClusterIP (score:INFINITY)
+ Ticket Constraints:
+ [root@pcmk-1 ~]# pcs status
+ Cluster name: mycluster
+ Cluster Summary:
+ * Stack: corosync
+ * Current DC: pcmk-1 (version 2.1.2-4.el9-ada5c3b36e2) - partition with quorum
+ * Last updated: Wed Jul 27 00:51:13 2022
+ * Last change: Wed Jul 27 00:51:07 2022 by root via cibadmin on pcmk-1
+ * 2 nodes configured
+ * 3 resource instances configured
+
+ Node List:
+ * Online: [ pcmk-1 pcmk-2 ]
+
+ Full List of Resources:
+ * fence_dev (stonith:some_fence_agent): Started pcmk-1
+ * ClusterIP (ocf:heartbeat:IPaddr2): Started pcmk-1
+ * WebSite (ocf:heartbeat:apache): Started pcmk-1
+
+ Daemon Status:
+ corosync: active/disabled
+ pacemaker: active/disabled
+ pcsd: active/enabled
+
+Wait a minute, the resources are still on ``pcmk-1``!
+
+Even though ``WebSite`` now prefers to run on ``pcmk-2``, that preference is
+(intentionally) less than the resource stickiness (how strongly we prefer
+to avoid unnecessary downtime).
+
+To see the current placement scores, you can use a tool called
+``crm_simulate``.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# crm_simulate -sL
+ [ pcmk-1 pcmk-2 ]
+
+ fence_dev (stonith:some_fence_agent): Started pcmk-1
+ ClusterIP (ocf:heartbeat:IPaddr2): Started pcmk-1
+ WebSite (ocf:heartbeat:apache): Started pcmk-1
+
+ pcmk__native_allocate: fence_dev allocation score on pcmk-1: 100
+ pcmk__native_allocate: fence_dev allocation score on pcmk-2: 0
+ pcmk__native_allocate: ClusterIP allocation score on pcmk-1: 200
+ pcmk__native_allocate: ClusterIP allocation score on pcmk-2: 50
+ pcmk__native_allocate: WebSite allocation score on pcmk-1: 100
+ pcmk__native_allocate: WebSite allocation score on pcmk-2: -INFINITY
+
+.. index::
+ single: resource; moving manually
+
+Move Resources Manually
+#######################
+
+There are always times when an administrator needs to override the
+cluster and force resources to move to a specific location. In this example,
+we will force the ``WebSite`` resource to move to ``pcmk-2``.
+
+We will use the ``pcs resource move`` command to create a temporary constraint
+with a score of ``INFINITY``. While we could update our existing constraint,
+using ``move`` allows ``pcs`` to get rid of the temporary constraint
+automatically after the resource has moved to its destination. Note in the
+output below that the ``pcs constraint`` output after the ``move`` command is
+the same as before.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs resource move WebSite pcmk-2
+ Location constraint to move resource 'WebSite' has been created
+ Waiting for the cluster to apply configuration changes...
+ Location constraint created to move resource 'WebSite' has been removed
+ Waiting for the cluster to apply configuration changes...
+ resource 'WebSite' is running on node 'pcmk-2'
+ [root@pcmk-1 ~]# pcs constraint
+ Location Constraints:
+ Resource: WebSite
+ Enabled on:
+ Node: pcmk-2 (score:50)
+ Ordering Constraints:
+ start ClusterIP then start WebSite (kind:Mandatory)
+ Colocation Constraints:
+ WebSite with ClusterIP (score:INFINITY)
+ Ticket Constraints:
+ [root@pcmk-1 ~]# pcs status
+ Cluster name: mycluster
+ Cluster Summary:
+ * Stack: corosync
+ * Current DC: pcmk-1 (version 2.1.2-4.el9-ada5c3b36e2) - partition with quorum
+ * Last updated: Wed Jul 27 00:54:23 2022
+ * Last change: Wed Jul 27 00:53:48 2022 by root via cibadmin on pcmk-1
+ * 2 nodes configured
+ * 3 resource instances configured
+
+ Node List:
+ * Online: [ pcmk-1 pcmk-2 ]
+
+ Full List of Resources:
+ * fence_dev (stonith:some_fence_agent): Started pcmk-1
+ * ClusterIP (ocf:heartbeat:IPaddr2): Started pcmk-2
+ * WebSite (ocf:heartbeat:apache): Started pcmk-2
+
+ Daemon Status:
+ corosync: active/disabled
+ pacemaker: active/disabled
+ pcsd: active/enabled
+
+To remove the constraint with the score of ``50``, we would first get the
+constraint's ID using ``pcs constraint --full``, then remove it with
+``pcs constraint remove`` and the ID. We won't show those steps here,
+but feel free to try it on your own, with the help of the ``pcs`` man page
+if necessary.
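+
+If you'd like a hint, the commands would look roughly like this, using the
+constraint ID that appears in this guide's configuration:
+
+.. code-block:: console
+
+   [root@pcmk-1 ~]# pcs constraint --full
+   [root@pcmk-1 ~]# pcs constraint remove location-WebSite-pcmk-2-50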
+
+.. [#] Compare the key used here, ``ocf:heartbeat:apache`` with the one we
+ used earlier for the IP address, ``ocf:heartbeat:IPaddr2``.
diff --git a/doc/sphinx/Clusters_from_Scratch/cluster-setup.rst b/doc/sphinx/Clusters_from_Scratch/cluster-setup.rst
new file mode 100644
index 0000000..0a7a7a5
--- /dev/null
+++ b/doc/sphinx/Clusters_from_Scratch/cluster-setup.rst
@@ -0,0 +1,313 @@
+Set up a Cluster
+----------------
+
+Simplify Administration With a Cluster Shell
+############################################
+
+In the dark past, configuring Pacemaker required the administrator to
+read and write XML. In true UNIX style, there were also a number of
+different commands that specialized in different aspects of querying
+and updating the cluster.
+
+In addition, the various components of the cluster stack (Corosync, Pacemaker,
+etc.) had to be configured separately, with different configuration tools and
+formats.
+
+All of that has been greatly simplified with the creation of higher-level
+tools, whether command-line shells or GUIs, that hide all the mess underneath.
+
+Command-line cluster shells take all the individual aspects required for
+managing and configuring a cluster, and pack them into one simple-to-use
+command-line tool.
+
+They even allow you to queue up several changes and commit them all at once.
+
+Two popular command-line shells are ``pcs`` and ``crmsh``. Clusters from Scratch is
+based on ``pcs`` because it comes with |CFS_DISTRO|, but both have similar
+functionality. Choosing a shell or GUI is a matter of personal preference and
+what comes with (and perhaps is supported by) your choice of operating system.
+
+
+Install the Cluster Software
+############################
+
+Fire up a shell on both nodes and run the following to activate the High
+Availability repo.
+
+.. code-block:: console
+
+ # dnf config-manager --set-enabled highavailability
+
+.. IMPORTANT::
+
+ This document will show commands that need to be executed on both nodes
+ with a simple ``#`` prompt. Be sure to run them on each node individually.
+
+Now, we'll install ``pacemaker``, ``pcs``, and some other command-line tools
+that will make our lives easier:
+
+.. code-block:: console
+
+ # dnf install -y pacemaker pcs psmisc policycoreutils-python3
+
+.. NOTE::
+
+ This document uses ``pcs`` for cluster management. Other alternatives,
+ such as ``crmsh``, are available, but their syntax
+ will differ from the examples used here.
+
+Configure the Cluster Software
+##############################
+
+.. index::
+ single: firewall
+
+Allow cluster services through firewall
+_______________________________________
+
+On each node, allow cluster-related services through the local firewall:
+
+.. code-block:: console
+
+ # firewall-cmd --permanent --add-service=high-availability
+ success
+ # firewall-cmd --reload
+ success
+
+.. NOTE::
+
+ If you are using ``iptables`` directly, or some other firewall solution
+ besides ``firewalld``, simply open the following ports, which can be used
+ by various clustering components: TCP ports 2224, 3121, and 21064, and UDP
+ port 5405.
+
+ If you run into any problems during testing, you might want to disable
+ the firewall and SELinux entirely until you have everything working.
+ This may create significant security issues and should not be performed on
+ machines that will be exposed to the outside world, but may be appropriate
+ during development and testing on a protected host.
+
+ To disable security measures:
+
+ .. code-block:: console
+
+ [root@pcmk-1 ~]# setenforce 0
+ [root@pcmk-1 ~]# sed -i.bak "s/SELINUX=enforcing/SELINUX=permissive/g" /etc/selinux/config
+ [root@pcmk-1 ~]# systemctl mask firewalld.service
+ [root@pcmk-1 ~]# systemctl stop firewalld.service
+ [root@pcmk-1 ~]# iptables --flush
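+
+   For the ``iptables`` case, the rules for the ports listed above can be
+   generated and reviewed before being applied (rule ordering and
+   persistence are distribution-specific, so treat this as a sketch):
+
+   .. code-block:: bash
+
+      # Print the iptables commands for the cluster ports instead of
+      # running them; pipe the output to 'sh' as root once reviewed.
+      gen_cluster_rules() {
+          for p in 2224 3121 21064; do
+              echo "iptables -A INPUT -p tcp --dport $p -j ACCEPT"
+          done
+          echo "iptables -A INPUT -p udp --dport 5405 -j ACCEPT"
+      }
+      gen_cluster_rules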
+
+Enable ``pcs`` Daemon
+_____________________
+
+Before the cluster can be configured, the ``pcs`` daemon must be started and
+enabled to start at boot time on each node. This daemon works with the ``pcs``
+command-line interface to manage synchronizing the Corosync configuration
+across all nodes in the cluster, among other functions.
+
+Start and enable the daemon by issuing the following commands on each node:
+
+.. code-block:: console
+
+ # systemctl start pcsd.service
+ # systemctl enable pcsd.service
+ Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.
+
+The installed packages will create an ``hacluster`` user with a disabled password.
+While this is fine for running ``pcs`` commands locally,
+the account needs a login password in order to perform such tasks as syncing
+the Corosync configuration, or starting and stopping the cluster on other nodes.
+
+This tutorial will make use of such commands,
+so now we will set a password for the ``hacluster`` user, using the same password
+on both nodes:
+
+.. code-block:: console
+
+ # passwd hacluster
+ Changing password for user hacluster.
+ New password:
+ Retype new password:
+ passwd: all authentication tokens updated successfully.
+
+.. NOTE::
+
+ Alternatively, to script this process or set the password on a
+ different machine from the one you're logged into, you can use
+ the ``--stdin`` option for ``passwd``:
+
+ .. code-block:: console
+
+ [root@pcmk-1 ~]# ssh pcmk-2 -- 'echo mysupersecretpassword | passwd --stdin hacluster'
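+
+   Extending that idea, the command for each node can be generated in a
+   loop and reviewed before running (the node list and password are
+   placeholders for your environment):
+
+   .. code-block:: bash
+
+      # Print the ssh/passwd command for each node; pipe to 'sh' to run.
+      NODES="pcmk-1 pcmk-2"
+      for n in $NODES; do
+          echo "ssh $n -- 'echo mysupersecretpassword | passwd --stdin hacluster'"
+      done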
+
+Configure Corosync
+__________________
+
+On either node, use ``pcs host auth`` to authenticate as the ``hacluster`` user:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs host auth pcmk-1 pcmk-2
+ Username: hacluster
+ Password:
+ pcmk-2: Authorized
+ pcmk-1: Authorized
+
+Next, use ``pcs cluster setup`` on the same node to generate and synchronize the
+Corosync configuration:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs cluster setup mycluster pcmk-1 pcmk-2
+ No addresses specified for host 'pcmk-1', using 'pcmk-1'
+ No addresses specified for host 'pcmk-2', using 'pcmk-2'
+ Destroying cluster on hosts: 'pcmk-1', 'pcmk-2'...
+ pcmk-2: Successfully destroyed cluster
+ pcmk-1: Successfully destroyed cluster
+ Requesting remove 'pcsd settings' from 'pcmk-1', 'pcmk-2'
+ pcmk-1: successful removal of the file 'pcsd settings'
+ pcmk-2: successful removal of the file 'pcsd settings'
+ Sending 'corosync authkey', 'pacemaker authkey' to 'pcmk-1', 'pcmk-2'
+ pcmk-1: successful distribution of the file 'corosync authkey'
+ pcmk-1: successful distribution of the file 'pacemaker authkey'
+ pcmk-2: successful distribution of the file 'corosync authkey'
+ pcmk-2: successful distribution of the file 'pacemaker authkey'
+ Sending 'corosync.conf' to 'pcmk-1', 'pcmk-2'
+ pcmk-1: successful distribution of the file 'corosync.conf'
+ pcmk-2: successful distribution of the file 'corosync.conf'
+ Cluster has been successfully set up.
+
+.. NOTE::
+
+ If you'd like, you can specify an ``addr`` option for each node in the
+ ``pcs cluster setup`` command. This will create an explicit name-to-address
+ mapping for each node in ``/etc/corosync/corosync.conf``, eliminating the
+ need for hostname resolution via DNS, ``/etc/hosts``, and the like.
+
+ .. code-block:: console
+
+ [root@pcmk-1 ~]# pcs cluster setup mycluster \
+ pcmk-1 addr=192.168.122.101 pcmk-2 addr=192.168.122.102
+
+
+If you received an authorization error for either of those commands, make
+sure you configured the ``hacluster`` user account on each node
+with the same password.
+
+The final ``corosync.conf`` configuration on each node should look
+something like the sample in :ref:`sample-corosync-configuration`.
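+
+For orientation, here is a trimmed sketch of the kind of file ``pcs cluster
+setup`` generates for this two-node example. Exact options vary by version,
+so consult the generated file rather than copying this:
+
+.. code-block:: none
+
+   totem {
+       version: 2
+       cluster_name: mycluster
+       transport: knet
+   }
+
+   nodelist {
+       node {
+           ring0_addr: pcmk-1
+           name: pcmk-1
+           nodeid: 1
+       }
+
+       node {
+           ring0_addr: pcmk-2
+           name: pcmk-2
+           nodeid: 2
+       }
+   }
+
+   quorum {
+       provider: corosync_votequorum
+       two_node: 1
+   }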
+
+Explore pcs
+###########
+
+Start by taking some time to familiarize yourself with what ``pcs`` can do.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs
+
+ Usage: pcs [-f file] [-h] [commands]...
+ Control and configure pacemaker and corosync.
+
+ Options:
+ -h, --help Display usage and exit.
+ -f file Perform actions on file instead of active CIB.
+ Commands supporting the option use the initial state of
+ the specified file as their input and then overwrite the
+ file with the state reflecting the requested
+ operation(s).
+ A few commands only use the specified file in read-only
+ mode since their effect is not a CIB modification.
+ --debug Print all network traffic and external commands run.
+ --version Print pcs version information. List pcs capabilities if
+ --full is specified.
+ --request-timeout Timeout for each outgoing request to another node in
+ seconds. Default is 60s.
+ --force Override checks and errors, the exact behavior depends on
+ the command. WARNING: Using the --force option is
+ strongly discouraged unless you know what you are doing.
+
+ Commands:
+ cluster Configure cluster options and nodes.
+ resource Manage cluster resources.
+ stonith Manage fence devices.
+ constraint Manage resource constraints.
+ property Manage pacemaker properties.
+ acl Manage pacemaker access control lists.
+ qdevice Manage quorum device provider on the local host.
+ quorum Manage cluster quorum settings.
+ booth Manage booth (cluster ticket manager).
+ status View cluster status.
+ config View and manage cluster configuration.
+ pcsd Manage pcs daemon.
+ host Manage hosts known to pcs/pcsd.
+ node Manage cluster nodes.
+ alert Manage pacemaker alerts.
+ client Manage pcsd client configuration.
+ dr Manage disaster recovery configuration.
+ tag Manage pacemaker tags.
+
+
+As you can see, the different aspects of cluster management are separated
+into categories. To discover the functionality available in each of these
+categories, one can issue the command ``pcs <CATEGORY> help``. Below is an
+example of all the options available under the status category.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs status help
+
+ Usage: pcs status [commands]...
+ View current cluster and resource status
+ Commands:
+ [status] [--full] [--hide-inactive]
+ View all information about the cluster and resources (--full provides
+ more details, --hide-inactive hides inactive resources).
+
+ resources [<resource id | tag id>] [node=<node>] [--hide-inactive]
+ Show status of all currently configured resources. If --hide-inactive
+ is specified, only show active resources. If a resource or tag id is
+ specified, only show status of the specified resource or resources in
+ the specified tag. If node is specified, only show status of resources
+ configured for the specified node.
+
+ cluster
+ View current cluster status.
+
+ corosync
+ View current membership information as seen by corosync.
+
+ quorum
+ View current quorum status.
+
+ qdevice <device model> [--full] [<cluster name>]
+ Show runtime status of specified model of quorum device provider. Using
+ --full will give more detailed output. If <cluster name> is specified,
+ only information about the specified cluster will be displayed.
+
+ booth
+ Print current status of booth on the local node.
+
+ nodes [corosync | both | config]
+ View current status of nodes from pacemaker. If 'corosync' is
+ specified, view current status of nodes from corosync instead. If
+ 'both' is specified, view current status of nodes from both corosync &
+ pacemaker. If 'config' is specified, print nodes from corosync &
+ pacemaker configuration.
+
+ pcsd [<node>]...
+ Show current status of pcsd on nodes specified, or on all nodes
+ configured in the local cluster if no nodes are specified.
+
+ xml
+ View xml version of status (output from crm_mon -r -1 -X).
+
+Additionally, if you are interested in the version and supported cluster stack(s)
+available with your Pacemaker installation, run:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pacemakerd --features
+ Pacemaker 2.1.2-4.el9 (Build: ada5c3b36e2)
+ Supporting v3.13.0: agent-manpages cibsecrets corosync-ge-2 default-concurrent-fencing default-resource-stickiness default-sbd-sync generated-manpages monotonic nagios ncurses remote systemd
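+
+If a script needs to act on that feature list, a simple membership check
+works. Here the feature string is pasted example output rather than a live
+query:
+
+.. code-block:: bash
+
+   # Check a saved 'pacemakerd --features' feature list for a flag.
+   features="agent-manpages cibsecrets corosync-ge-2 remote systemd"
+   has_feature() {
+       case " $features " in
+           *" $1 "*) return 0 ;;
+           *) return 1 ;;
+       esac
+   }
+   has_feature systemd && echo "systemd supported"   # prints "systemd supported"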
diff --git a/doc/sphinx/Clusters_from_Scratch/fencing.rst b/doc/sphinx/Clusters_from_Scratch/fencing.rst
new file mode 100644
index 0000000..65537bf
--- /dev/null
+++ b/doc/sphinx/Clusters_from_Scratch/fencing.rst
@@ -0,0 +1,231 @@
+.. index:: fencing
+
+Configure Fencing
+-----------------
+
+What is Fencing?
+################
+
+Fencing protects your data from being corrupted, and your application from
+becoming unavailable, due to unintended concurrent access by rogue nodes.
+
+Just because a node is unresponsive doesn't mean it has stopped
+accessing your data. The only way to be 100% sure that your data is
+safe is to use fencing to ensure that the node is truly
+offline before allowing the data to be accessed from another node.
+
+Fencing also has a role to play in the event that a clustered service
+cannot be stopped. In this case, the cluster uses fencing to force the
+whole node offline, thereby making it safe to start the service
+elsewhere.
+
+Fencing is also known as STONITH, an acronym for "Shoot The Other Node In The
+Head", since the most popular form of fencing is cutting a host's power.
+
+In order to guarantee the safety of your data [#]_, fencing is enabled by default.
+
+.. NOTE::
+
+ It is possible to tell the cluster not to use fencing, by setting the
+ ``stonith-enabled`` cluster property to false:
+
+ .. code-block:: console
+
+ [root@pcmk-1 ~]# pcs property set stonith-enabled=false
+ [root@pcmk-1 ~]# pcs cluster verify --full
+
+ However, this is completely inappropriate for a production cluster. It tells
+ the cluster to simply pretend that failed nodes are safely powered off. Some
+ vendors will refuse to support clusters that have fencing disabled. Even
+ disabling it for a test cluster means you won't be able to test real failure
+ scenarios.
+
+
+.. index::
+ single: fencing; device
+
+Choose a Fence Device
+#####################
+
+The two broad categories of fence device are power fencing, which cuts off
+power to the target, and fabric fencing, which cuts off the target's access to
+some critical resource, such as a shared disk or access to the local network.
+
+Power fencing devices include:
+
+* Intelligent power switches
+* IPMI
+* Hardware watchdog device (alone, or in combination with shared storage used
+ as a "poison pill" mechanism)
+
+Fabric fencing devices include:
+
+* Shared storage that can be cut off for a target host by another host (for
+ example, an external storage device that supports SCSI-3 persistent
+ reservations)
+* Intelligent network switches
+
+Using IPMI as a power fencing device may seem like a good choice. However,
+if the IPMI shares power and/or network access with the host (such as most
+onboard IPMI controllers), a power or network failure will cause both the
+host and its fencing device to fail. The cluster will be unable to recover,
+and must stop all resources to avoid a possible split-brain situation.
+
+Likewise, any device that relies on the machine being active (such as
+SSH-based "devices" sometimes used during testing) is inappropriate,
+because fencing will be required when the node is completely unresponsive.
+(Fence agents like ``fence_ilo_ssh``, which connects via SSH to an HP iLO but
+not to the cluster node, are fine.)
+
+Configure the Cluster for Fencing
+#################################
+
+#. Install the fence agent(s). To see what packages are available, run
+ ``dnf search fence-``. Be sure to install the package(s) on all cluster nodes.
+
+#. Configure the fence device itself to be able to fence your nodes and accept
+ fencing requests. This includes any necessary configuration on the device and
+ on the nodes, and any firewall or SELinux changes needed. Test the
+ communication between the device and your nodes.
+
+#. Find the name of the correct fence agent: ``pcs stonith list``
+
+#. Find the parameters associated with the device:
+ ``pcs stonith describe <AGENT_NAME>``
+
+#. Create a local copy of the CIB: ``pcs cluster cib stonith_cfg``
+
+#. Create the fencing resource: ``pcs -f stonith_cfg stonith create <STONITH_ID> <STONITH_DEVICE_TYPE> [STONITH_DEVICE_OPTIONS]``
+
+ Any flags that do not take arguments, such as ``--ssl``, should be passed as ``ssl=1``.
+
+#. Ensure fencing is enabled in the cluster:
+ ``pcs -f stonith_cfg property set stonith-enabled=true``
+
+#. If the device does not know how to fence nodes based on their cluster node
+ name, you may also need to set the special ``pcmk_host_map`` parameter. See
+ ``man pacemaker-fenced`` for details.
+
+#. If the device does not support the ``list`` command, you may also need to
+ set the special ``pcmk_host_list`` and/or ``pcmk_host_check`` parameters.
+ See ``man pacemaker-fenced`` for details.
+
+#. If the device does not expect the target to be specified with the ``port``
+ parameter, you may also need to set the special ``pcmk_host_argument``
+ parameter. See ``man pacemaker-fenced`` for details.
+
+#. Commit the new configuration: ``pcs cluster cib-push stonith_cfg``
+
+#. Once the fence device resource is running, test it (you might want to stop
+ the cluster on that machine first):
+ ``pcs stonith fence <NODENAME>``
+
+Example
+#######
+
+For this example, assume we have a chassis containing our two nodes
+and a separately powered IPMI device active on ``10.0.0.1``. Following the steps
+above would go something like this:
+
+Step 1: Install the ``fence-agents-ipmilan`` package on both nodes.
+
+Step 2: Configure the IP address, authentication credentials, etc. in the IPMI device itself.
+
+Step 3: Choose the ``fence_ipmilan`` STONITH agent.
+
+Step 4: Obtain the agent's possible parameters:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs stonith describe fence_ipmilan
+ fence_ipmilan - Fence agent for IPMI
+
+ fence_ipmilan is an I/O Fencing agent which can be used with machines controlled by IPMI. This agent calls support software ipmitool (http://ipmitool.sf.net/). WARNING! This fence agent might report success before the node is powered off. You should use -m/method onoff if your fence device works correctly with that option.
+
+ Stonith options:
+ auth: IPMI Lan Auth type.
+ cipher: Ciphersuite to use (same as ipmitool -C parameter)
+ hexadecimal_kg: Hexadecimal-encoded Kg key for IPMIv2 authentication
+ ip: IP address or hostname of fencing device
+ ipport: TCP/UDP port to use for connection with device
+ lanplus: Use Lanplus to improve security of connection
+ method: Method to fence
+ password: Login password or passphrase
+ password_script: Script to run to retrieve password
+ plug: IP address or hostname of fencing device (together with --port-as-ip)
+ privlvl: Privilege level on IPMI device
+ target: Bridge IPMI requests to the remote target address
+ username: Login name
+ quiet: Disable logging to stderr. Does not affect --verbose or --debug-file or logging to syslog.
+ verbose: Verbose mode. Multiple -v flags can be stacked on the command line (e.g., -vvv) to increase verbosity.
+ verbose_level: Level of debugging detail in output. Defaults to the number of --verbose flags specified on the command line, or to 1 if verbose=1 in a stonith device configuration (i.e., on stdin).
+ debug_file: Write debug information to given file
+ delay: Wait X seconds before fencing is started
+ disable_timeout: Disable timeout (true/false) (default: true when run from Pacemaker 2.0+)
+ ipmitool_path: Path to ipmitool binary
+ login_timeout: Wait X seconds for cmd prompt after login
+ port_as_ip: Make "port/plug" to be an alias to IP address
+ power_timeout: Test X seconds for status change after ON/OFF
+ power_wait: Wait X seconds after issuing ON/OFF
+ shell_timeout: Wait X seconds for cmd prompt after issuing command
+ stonith_status_sleep: Sleep X seconds between status calls during a STONITH action
+ ipmitool_timeout: Timeout (sec) for IPMI operation
+ retry_on: Count of attempts to retry power on
+ use_sudo: Use sudo (without password) when calling 3rd party software
+ sudo_path: Path to sudo binary
+ pcmk_host_map: A mapping of host names to port numbers for devices that do not support host names. E.g. node1:1;node2:2,3 would tell the cluster to use port 1 for node1 and ports 2 and 3 for node2
+ pcmk_host_list: A list of machines controlled by this device (Optional unless pcmk_host_check=static-list).
+ pcmk_host_check: How to determine which machines are controlled by the device. Allowed values: dynamic-list (query the device via the 'list' command), static-list (check the pcmk_host_list attribute), status
+ (query the device via the 'status' command), none (assume every device can fence every machine)
+ pcmk_delay_max: Enable a delay of no more than the time specified before executing fencing actions. Pacemaker derives the overall delay by taking the value of pcmk_delay_base and adding a random delay value
+ such that the sum is kept below this maximum. This prevents double fencing when using slow devices such as sbd. Use this to enable a random delay for fencing actions. The overall delay is
+ derived from this random delay value adding a static delay so that the sum is kept below the maximum delay.
+ pcmk_delay_base: Enable a base delay for fencing actions and specify base delay value. This enables a static delay for fencing actions, which can help avoid "death matches" where two nodes try to fence each
+ other at the same time. If pcmk_delay_max is also used, a random delay will be added such that the total delay is kept below that value. This can be set to a single time value to apply to any
+ node targeted by this device (useful if a separate device is configured for each target), or to a node map (for example, "node1:1s;node2:5") to set a different value per target.
+ pcmk_action_limit: The maximum number of actions that can be performed in parallel on this device. Cluster property concurrent-fencing=true needs to be configured first. Then use this to specify the maximum number
+ of actions that can be performed in parallel on this device. -1 is unlimited.
+
+ Default operations:
+ monitor: interval=60s
+
+
+Step 5: ``pcs cluster cib stonith_cfg``
+
+Step 6: Here are example parameters for creating our fence device resource:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs -f stonith_cfg stonith create ipmi-fencing fence_ipmilan \
+ pcmk_host_list="pcmk-1 pcmk-2" ip=10.0.0.1 username=testuser \
+ password=acd123 op monitor interval=60s
+ [root@pcmk-1 ~]# pcs -f stonith_cfg stonith
+ * ipmi-fencing (stonith:fence_ipmilan): Stopped
+
+Steps 7-10: Enable fencing in the cluster:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs -f stonith_cfg property set stonith-enabled=true
+ [root@pcmk-1 ~]# pcs -f stonith_cfg property
+ Cluster Properties:
+ cluster-infrastructure: corosync
+ cluster-name: mycluster
+ dc-version: 2.0.5-4.el8-ba59be7122
+ have-watchdog: false
+ stonith-enabled: true
+
+Step 11: ``pcs cluster cib-push stonith_cfg --config``
+
+Step 12: Test:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs cluster stop pcmk-2
+ [root@pcmk-1 ~]# pcs stonith fence pcmk-2
+
+After a successful test, login to any rebooted nodes, and start the cluster
+(with ``pcs cluster start``).
+
+.. [#] If the data is corrupt, there is little point in continuing to
+ make it available.
diff --git a/doc/sphinx/Clusters_from_Scratch/images/ConfigureVolumeGroup.png b/doc/sphinx/Clusters_from_Scratch/images/ConfigureVolumeGroup.png
new file mode 100644
index 0000000..00ef1ba
--- /dev/null
+++ b/doc/sphinx/Clusters_from_Scratch/images/ConfigureVolumeGroup.png
Binary files differ
diff --git a/doc/sphinx/Clusters_from_Scratch/images/ConsolePrompt.png b/doc/sphinx/Clusters_from_Scratch/images/ConsolePrompt.png
new file mode 100644
index 0000000..336ae56
--- /dev/null
+++ b/doc/sphinx/Clusters_from_Scratch/images/ConsolePrompt.png
Binary files differ
diff --git a/doc/sphinx/Clusters_from_Scratch/images/InstallationDestination.png b/doc/sphinx/Clusters_from_Scratch/images/InstallationDestination.png
new file mode 100644
index 0000000..d847c81
--- /dev/null
+++ b/doc/sphinx/Clusters_from_Scratch/images/InstallationDestination.png
Binary files differ
diff --git a/doc/sphinx/Clusters_from_Scratch/images/InstallationSummary.png b/doc/sphinx/Clusters_from_Scratch/images/InstallationSummary.png
new file mode 100644
index 0000000..eefe9f0
--- /dev/null
+++ b/doc/sphinx/Clusters_from_Scratch/images/InstallationSummary.png
Binary files differ
diff --git a/doc/sphinx/Clusters_from_Scratch/images/ManualPartitioning.png b/doc/sphinx/Clusters_from_Scratch/images/ManualPartitioning.png
new file mode 100644
index 0000000..9047c65
--- /dev/null
+++ b/doc/sphinx/Clusters_from_Scratch/images/ManualPartitioning.png
Binary files differ
diff --git a/doc/sphinx/Clusters_from_Scratch/images/NetworkAndHostName.png b/doc/sphinx/Clusters_from_Scratch/images/NetworkAndHostName.png
new file mode 100644
index 0000000..156a1f0
--- /dev/null
+++ b/doc/sphinx/Clusters_from_Scratch/images/NetworkAndHostName.png
Binary files differ
diff --git a/doc/sphinx/Clusters_from_Scratch/images/RootPassword.png b/doc/sphinx/Clusters_from_Scratch/images/RootPassword.png
new file mode 100644
index 0000000..fc579ea
--- /dev/null
+++ b/doc/sphinx/Clusters_from_Scratch/images/RootPassword.png
Binary files differ
diff --git a/doc/sphinx/Clusters_from_Scratch/images/SoftwareSelection.png b/doc/sphinx/Clusters_from_Scratch/images/SoftwareSelection.png
new file mode 100644
index 0000000..d400915
--- /dev/null
+++ b/doc/sphinx/Clusters_from_Scratch/images/SoftwareSelection.png
Binary files differ
diff --git a/doc/sphinx/Clusters_from_Scratch/images/SummaryOfChanges.png b/doc/sphinx/Clusters_from_Scratch/images/SummaryOfChanges.png
new file mode 100644
index 0000000..746be66
--- /dev/null
+++ b/doc/sphinx/Clusters_from_Scratch/images/SummaryOfChanges.png
Binary files differ
diff --git a/doc/sphinx/Clusters_from_Scratch/images/TimeAndDate.png b/doc/sphinx/Clusters_from_Scratch/images/TimeAndDate.png
new file mode 100644
index 0000000..a3ea351
--- /dev/null
+++ b/doc/sphinx/Clusters_from_Scratch/images/TimeAndDate.png
Binary files differ
diff --git a/doc/sphinx/Clusters_from_Scratch/images/WelcomeToAlmaLinux.png b/doc/sphinx/Clusters_from_Scratch/images/WelcomeToAlmaLinux.png
new file mode 100644
index 0000000..dc573ad
--- /dev/null
+++ b/doc/sphinx/Clusters_from_Scratch/images/WelcomeToAlmaLinux.png
Binary files differ
diff --git a/doc/sphinx/Clusters_from_Scratch/images/WelcomeToCentos.png b/doc/sphinx/Clusters_from_Scratch/images/WelcomeToCentos.png
new file mode 100644
index 0000000..ae9879c
--- /dev/null
+++ b/doc/sphinx/Clusters_from_Scratch/images/WelcomeToCentos.png
Binary files differ
diff --git a/doc/sphinx/Clusters_from_Scratch/index.rst b/doc/sphinx/Clusters_from_Scratch/index.rst
new file mode 100644
index 0000000..74fe250
--- /dev/null
+++ b/doc/sphinx/Clusters_from_Scratch/index.rst
@@ -0,0 +1,49 @@
+Clusters from Scratch
+=====================
+
+*Step-by-Step Instructions for Building Your First High-Availability Cluster*
+
+
+Abstract
+--------
+This document provides a step-by-step guide to building a simple high-availability
+cluster using Pacemaker.
+
+The example cluster will use:
+
+* |CFS_DISTRO| |CFS_DISTRO_VER| as the host operating system
+* Corosync to provide messaging and membership services
+* Pacemaker 2 as the cluster resource manager
+* DRBD as a cost-effective alternative to shared storage
+* GFS2 as the cluster filesystem (in active/active mode)
+
+Given the graphical nature of the install process, a number of screenshots are
+included. However, the guide is primarily composed of commands, the reasons for
+executing them, and their expected outputs.
+
+
+Table of Contents
+-----------------
+
+.. toctree::
+ :maxdepth: 3
+ :numbered:
+
+ intro
+ installation
+ cluster-setup
+ verification
+ fencing
+ active-passive
+ apache
+ shared-storage
+ active-active
+ ap-configuration
+ ap-corosync-conf
+ ap-reading
+
+Index
+-----
+
+* :ref:`genindex`
+* :ref:`search`
diff --git a/doc/sphinx/Clusters_from_Scratch/installation.rst b/doc/sphinx/Clusters_from_Scratch/installation.rst
new file mode 100644
index 0000000..e7f9e2d
--- /dev/null
+++ b/doc/sphinx/Clusters_from_Scratch/installation.rst
@@ -0,0 +1,466 @@
+Installation
+------------
+
+Install |CFS_DISTRO| |CFS_DISTRO_VER|
+################################################################################################
+
+Boot the Install Image
+______________________
+
+Download the latest |CFS_DISTRO| |CFS_DISTRO_VER| DVD ISO by navigating to
+the |CFS_DISTRO| `mirrors list <https://mirrors.almalinux.org/isos.html>`_,
+selecting the latest 9.x version for your machine's architecture, selecting a
+download mirror that's close to you, and finally selecting the latest .iso file
+that has “dvd” in its name. Use the image to boot a virtual machine, or burn it
+to a DVD or USB drive and boot a physical server from that.
+
+After starting the installation, select your language and keyboard layout at
+the welcome screen.
+
+.. figure:: images/WelcomeToAlmaLinux.png
+ :align: center
+ :alt: Installation Welcome Screen
+
+ |CFS_DISTRO| |CFS_DISTRO_VER| Installation Welcome Screen
+
+Installation Options
+____________________
+
+At this point, you get a chance to tweak the default installation options.
+
+.. figure:: images/InstallationSummary.png
+ :align: center
+ :alt: Installation Summary Screen
+
+ |CFS_DISTRO| |CFS_DISTRO_VER| Installation Summary Screen
+
+Click on the **SOFTWARE SELECTION** section (try saying that 10 times quickly). The
+default environment, **Server with GUI**, does have add-ons with much of the software
+we need, but we will change the environment to a **Minimal Install** here, so that we
+can see exactly what software is required later, and press **Done**.
+
+.. figure:: images/SoftwareSelection.png
+ :align: center
+ :alt: Software Selection Screen
+
+ |CFS_DISTRO| |CFS_DISTRO_VER| Software Selection Screen
+
+Configure Network
+_________________
+
+In the **NETWORK & HOST NAME** section:
+
+- Edit **Host Name:** as desired. For this example, we will enter
+ ``pcmk-1.localdomain`` and then press **Apply**.
+- Select your network device, press **Configure...**, select the **IPv4
+ Settings** tab, and select **Manual** from the **Method** dropdown menu. Then
+ assign the machine a fixed IP address with an appropriate netmask, gateway,
+ and DNS server. For this example, we'll use ``192.168.122.101`` for the
+ address, ``24`` for the netmask, and ``192.168.122.1`` for the gateway and
+ DNS server.
+- Press **Save**.
+- Flip the switch to turn your network device on (if it is not on already), and
+ press **Done**.
+
+.. figure:: images/NetworkAndHostName.png
+ :align: center
+ :alt: Editing network settings
+
+ |CFS_DISTRO| |CFS_DISTRO_VER| Network Interface Screen
+
+.. IMPORTANT::
+
+ Do not accept the default network settings.
+ Cluster machines should never obtain an IP address via DHCP, because
+ DHCP's periodic address renewal will interfere with Corosync.
+
+Configure Disk
+______________
+
+By default, the installer's automatic partitioning will use LVM (which allows
+us to dynamically change the amount of space allocated to a given partition).
+However, it allocates all free space to the ``/`` (a.k.a. **root**) partition,
+which cannot be reduced in size later (dynamic increases are fine).
+
+In order to follow the DRBD and GFS2 portions of this guide, we need to reserve
+space on each machine for a replicated volume.
+
+Enter the **INSTALLATION DESTINATION** section and select the disk where you
+want to install the OS. Then under **Storage Configuration**, select **Custom**
+and press **Done**.
+
+.. figure:: images/InstallationDestination.png
+ :align: center
+ :alt: Installation Destination Screen
+
+ |CFS_DISTRO| |CFS_DISTRO_VER| Installation Destination Screen
+
+On the **MANUAL PARTITIONING** screen that comes next, click the option to create
+mountpoints automatically. Select the ``/`` mountpoint and reduce the **Desired
+Capacity** down to 4 GiB or so. (The installer will not allow you to proceed if
+the ``/`` filesystem is too small to install all required packages.)
+
+.. figure:: images/ManualPartitioning.png
+ :align: center
+ :alt: Manual Partitioning Screen
+
+ |CFS_DISTRO| |CFS_DISTRO_VER| Manual Partitioning Screen
+
+Then select **Modify…** next to the volume group name. In the **CONFIGURE
+VOLUME GROUP** dialog box that appears, change the **Size policy** to **As
+large as possible**, to make the reclaimed space available inside the LVM
+volume group. We’ll add the additional volume later.
+
+.. figure:: images/ConfigureVolumeGroup.png
+ :align: center
+ :alt: Configure Volume Group Dialog
+
+ |CFS_DISTRO| |CFS_DISTRO_VER| Configure Volume Group Dialog
+
+Press **Done**. Finally, in the **SUMMARY OF CHANGES** dialog box, press
+**Accept Changes**.
+
+.. figure:: images/SummaryOfChanges.png
+ :align: center
+ :alt: Summary of Changes Dialog
+
+ |CFS_DISTRO| |CFS_DISTRO_VER| Summary of Changes Dialog
+
+Configure Time Synchronization
+______________________________
+
+It is highly recommended to enable NTP on your cluster nodes. Doing so
+ensures all nodes agree on the current time and makes reading log files
+significantly easier.
+
+|CFS_DISTRO| will enable NTP automatically. If you want to change any time-related
+settings (such as time zone or NTP server), you can do this in the
+**TIME & DATE** section. In this example, we configure the time zone as UTC
+(Coordinated Universal Time).
+
+.. figure:: images/TimeAndDate.png
+ :align: center
+ :alt: Time & Date Screen
+
+ |CFS_DISTRO| |CFS_DISTRO_VER| Time & Date Screen
+
+
+Root Password
+______________________________
+
+In order to continue to the next step, a **Root Password** must be set. Be sure
+to check the box marked **Allow root SSH login with password**.
+
+.. figure:: images/RootPassword.png
+ :align: center
+ :alt: Root Password Screen
+
+ |CFS_DISTRO| |CFS_DISTRO_VER| Root Password Screen
+
+Press **Done**. (Depending on the password you chose, you may need to do so
+twice.)
+
+Finish Install
+______________
+
+Select **Begin Installation**. Once it completes, **Reboot System**
+as instructed. After the node reboots, you'll see a login prompt on
+the console. Login using ``root`` and the password you created earlier.
+
+.. figure:: images/ConsolePrompt.png
+ :align: center
+ :alt: Console Prompt
+
+ |CFS_DISTRO| |CFS_DISTRO_VER| Console Prompt
+
+.. NOTE::
+
+ From here on, we're going to be working exclusively from the terminal.
+
+Configure the OS
+################
+
+Verify Networking
+_________________
+
+Ensure that the machine has the static IP address you configured earlier.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# ip addr
+ 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
+ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+ inet 127.0.0.1/8 scope host lo
+ valid_lft forever preferred_lft forever
+ inet6 ::1/128 scope host
+ valid_lft forever preferred_lft forever
+ 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
+ link/ether 52:54:00:32:cf:a9 brd ff:ff:ff:ff:ff:ff
+ inet 192.168.122.101/24 brd 192.168.122.255 scope global noprefixroute enp1s0
+ valid_lft forever preferred_lft forever
+ inet6 fe80::c3e1:3ba:959:fa96/64 scope link noprefixroute
+ valid_lft forever preferred_lft forever
+
+.. NOTE::
+
+ If you ever need to change the node's IP address from the command line,
+ follow these instructions, replacing ``${conn}`` with the name of your
+ network connection. You can find the list of all network connection names
+ by running ``nmcli con show``; you can get details for each connection by
+ running ``nmcli con show ${conn}``.
+
+ .. code-block:: console
+
+ [root@pcmk-1 ~]# nmcli con mod ${conn} ipv4.addresses "${new_address}"
+ [root@pcmk-1 ~]# nmcli con up ${conn}
+
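+If you need to script this check, the IPv4 address can be pulled out of the
+``ip addr`` output with a little ``awk``. Below is a minimal sketch; it parses
+a captured sample line rather than querying a live interface, so the address
+shown is simply the one used in this guide.

```shell
# Hedged sketch: extract the IPv4 address from a saved `ip addr` line.
# The sample line stands in for real `ip addr show enp1s0` output.
sample='    inet 192.168.122.101/24 brd 192.168.122.255 scope global noprefixroute enp1s0'
echo "$sample" | awk '/inet /{ split($2, a, "/"); print a[1] }'
```

+The ``split`` call strips the ``/24`` prefix length, leaving only the address.
+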
+Next, ensure that the routes are as expected:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# ip route
+ default via 192.168.122.1 dev enp1s0 proto static metric 100
+ 192.168.122.0/24 dev enp1s0 proto kernel scope link src 192.168.122.101 metric 100
+
+If there is no line beginning with ``default via``, then use ``nmcli`` to add a
+gateway:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# nmcli con mod ${conn} ipv4.gateway "${new_gateway_addr}"
+ [root@pcmk-1 ~]# nmcli con up ${conn}
+
+Now, check for connectivity to the outside world. Start small by
+testing whether we can reach the gateway we configured.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# ping -c 1 192.168.122.1
+ PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data.
+ 64 bytes from 192.168.122.1: icmp_seq=1 ttl=64 time=0.492 ms
+
+ --- 192.168.122.1 ping statistics ---
+ 1 packets transmitted, 1 received, 0% packet loss, time 0ms
+ rtt min/avg/max/mdev = 0.492/0.492/0.492/0.000 ms
+
+Now try something external; choose a location you know should be available.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# ping -c 1 www.clusterlabs.org
+ PING mx1.clusterlabs.org (95.217.104.78) 56(84) bytes of data.
+ 64 bytes from mx1.clusterlabs.org (95.217.104.78): icmp_seq=1 ttl=54 time=134 ms
+
+ --- mx1.clusterlabs.org ping statistics ---
+ 1 packets transmitted, 1 received, 0% packet loss, time 0ms
+ rtt min/avg/max/mdev = 133.987/133.987/133.987/0.000 ms
+
+Login Remotely
+______________
+
+The console isn't a very friendly place to work from, so we will now
+switch to accessing the machine remotely via SSH where we can
+use copy and paste, etc.
+
+From another host, check whether we can see the new host at all:
+
+.. code-block:: console
+
+ [gchin@gchin ~]$ ping -c 1 192.168.122.101
+ PING 192.168.122.101 (192.168.122.101) 56(84) bytes of data.
+ 64 bytes from 192.168.122.101: icmp_seq=1 ttl=64 time=0.344 ms
+
+ --- 192.168.122.101 ping statistics ---
+ 1 packets transmitted, 1 received, 0% packet loss, time 0ms
+ rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms
+
+Next, login as ``root`` via SSH.
+
+.. code-block:: console
+
+ [gchin@gchin ~]$ ssh root@192.168.122.101
+ The authenticity of host '192.168.122.101 (192.168.122.101)' can't be established.
+ ECDSA key fingerprint is SHA256:NBvcRrPDLIt39Rf0Tz4/f2Rd/FA5wUiDOd9bZ9QWWjo.
+ Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
+ Warning: Permanently added '192.168.122.101' (ECDSA) to the list of known hosts.
+ root@192.168.122.101's password:
+ Last login: Tue Jan 10 20:46:30 2021
+ [root@pcmk-1 ~]#
+
+Apply Updates
+_____________
+
+Apply any package updates released since your installation image was created:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# dnf update -y
+
+
+.. index::
+ single: node; short name
+
+Use Short Node Names
+____________________
+
+During installation, we filled in the machine's fully qualified domain
+name (FQDN), which can be rather long when it appears in cluster logs and
+status output. See for yourself how the machine identifies itself:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# uname -n
+ pcmk-1.localdomain
+
+We can use the ``hostnamectl`` tool to strip off the domain name:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# hostnamectl set-hostname $(uname -n | sed s/\\..*//)
+
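+The ``sed`` expression simply deletes everything from the first dot onward.
+You can see its effect on a plain string (a stand-alone sketch that does not
+change the host name):

```shell
# Show what the sed expression does: drop the first "." and everything
# after it, leaving only the short host name.
echo pcmk-1.localdomain | sed 's/\..*//'
```

+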
+Now, check that the machine is using the correct name:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# uname -n
+ pcmk-1
+
+You may want to reboot to ensure all updates take effect.
+
+Repeat for Second Node
+######################
+
+Repeat the installation steps so far, so that you have two
+nodes ready to have the cluster software installed.
+
+For the purposes of this document, the additional node is called
+``pcmk-2`` with address ``192.168.122.102``.
+
+Configure Communication Between Nodes
+#####################################
+
+Configure Host Name Resolution
+______________________________
+
+Confirm that you can communicate between the two new nodes:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# ping -c 3 192.168.122.102
+ PING 192.168.122.102 (192.168.122.102) 56(84) bytes of data.
+ 64 bytes from 192.168.122.102: icmp_seq=1 ttl=64 time=1.22 ms
+ 64 bytes from 192.168.122.102: icmp_seq=2 ttl=64 time=0.795 ms
+ 64 bytes from 192.168.122.102: icmp_seq=3 ttl=64 time=0.751 ms
+
+ --- 192.168.122.102 ping statistics ---
+ 3 packets transmitted, 3 received, 0% packet loss, time 2054ms
+ rtt min/avg/max/mdev = 0.751/0.923/1.224/0.214 ms
+
+Now we need to make sure we can communicate with the machines by their
+name. Add entries for the machines to ``/etc/hosts`` on both nodes. You can
+add entries for the machines to your DNS server if you have one, but this can
+create a single-point-of-failure (SPOF) if the DNS server goes down [#]_. If
+you add entries to ``/etc/hosts``, they should look something like the
+following:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# grep pcmk /etc/hosts
+ 192.168.122.101 pcmk-1.localdomain pcmk-1
+ 192.168.122.102 pcmk-2.localdomain pcmk-2
+
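+If you are preparing several nodes, the entries can be generated in a loop.
+Here is a hedged sketch that writes to a temporary file rather than to
+``/etc/hosts`` directly, using the addresses assumed throughout this guide:

```shell
# Sketch: build /etc/hosts entries for the cluster nodes in a temp file.
# Review the result before appending it to /etc/hosts on each node.
hosts_tmp=$(mktemp)
for entry in '192.168.122.101 pcmk-1.localdomain pcmk-1' \
             '192.168.122.102 pcmk-2.localdomain pcmk-2'; do
    echo "$entry" >> "$hosts_tmp"
done
grep pcmk "$hosts_tmp"
```

+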
+We can now verify the setup by again using ``ping``:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# ping -c 3 pcmk-2
+ PING pcmk-2.localdomain (192.168.122.102) 56(84) bytes of data.
+ 64 bytes from pcmk-2.localdomain (192.168.122.102): icmp_seq=1 ttl=64 time=0.295 ms
+ 64 bytes from pcmk-2.localdomain (192.168.122.102): icmp_seq=2 ttl=64 time=0.616 ms
+ 64 bytes from pcmk-2.localdomain (192.168.122.102): icmp_seq=3 ttl=64 time=0.809 ms
+
+ --- pcmk-2.localdomain ping statistics ---
+ 3 packets transmitted, 3 received, 0% packet loss, time 2043ms
+ rtt min/avg/max/mdev = 0.295/0.573/0.809/0.212 ms
+
+.. index:: SSH
+
+Configure SSH
+_____________
+
+SSH is a convenient and secure way to copy files and perform commands
+remotely. For the purposes of this guide, we will create a key without a
+password (using the ``-N`` option) so that we can perform remote actions
+without being prompted.
+
+
+.. WARNING::
+
+ Unprotected SSH keys (those without a password) are not recommended for
+ servers exposed to the outside world. We use them here only to simplify
+ the demo.
+
+Create a new key and allow anyone with that key to log in:
+
+
+.. index::
+ single: SSH; key
+
+.. topic:: Creating and Activating a New SSH Key
+
+ .. code-block:: console
+
+ [root@pcmk-1 ~]# ssh-keygen -f ~/.ssh/id_rsa -N ""
+ Generating public/private rsa key pair.
+ Your identification has been saved in /root/.ssh/id_rsa
+ Your public key has been saved in /root/.ssh/id_rsa.pub
+ The key fingerprint is:
+ SHA256:h5AFPmXsGU4woOxRLYHW9lnU2wIQVOxpSRrsXbo/AX8 root@pcmk-1
+ The key's randomart image is:
+ +---[RSA 3072]----+
+ | o+*BX*. |
+ | .oo+.+*O o |
+ | .+. +=% O o |
+ | . . =o%.o . |
+ | . .S+.. |
+ | ..o E |
+ | . o |
+ | o |
+ | . |
+ +----[SHA256]-----+
+
+ [root@pcmk-1 ~]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
+
+Install the key on the other node:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# ssh-copy-id pcmk-2
+ /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
+ The authenticity of host 'pcmk-2 (192.168.122.102)' can't be established.
+ ED25519 key fingerprint is SHA256:QkJnJ3fmszY7kAuuZ7wxUC5CC+eQThSCF13XYWnZJPo.
+ This host key is known by the following other names/addresses:
+ ~/.ssh/known_hosts:1: 192.168.122.102
+ Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
+ /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
+ /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
+ root@pcmk-2's password:
+
+ Number of key(s) added: 1
+
+ Now try logging into the machine, with: "ssh 'pcmk-2'"
+ and check to make sure that only the key(s) you wanted were added.
+
+Test that you can now run commands remotely, without being prompted:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# ssh pcmk-2 -- uname -n
+ pcmk-2
+
+Finally, repeat this same process on the other node. For convenience, you can
+also generate an SSH key on your administrative machine and use ``ssh-copy-id``
+to copy it to both cluster nodes.
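+
+With keys installed everywhere, a small loop saves typing when the same
+command must run on every node. The sketch below only *prints* the commands;
+replace ``echo`` with an actual ``ssh`` invocation once passwordless login
+works:

```shell
# Print the command that would run on each node; swap `echo` for
# `ssh "$node" --` to execute for real.
for node in pcmk-1 pcmk-2; do
    echo "ssh $node -- uname -n"
done
```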
+
+.. [#] You can also avoid this SPOF by specifying an ``addr`` option for each
+ node when creating the cluster. We will discuss this in a later section.
diff --git a/doc/sphinx/Clusters_from_Scratch/intro.rst b/doc/sphinx/Clusters_from_Scratch/intro.rst
new file mode 100644
index 0000000..7f600e3
--- /dev/null
+++ b/doc/sphinx/Clusters_from_Scratch/intro.rst
@@ -0,0 +1,29 @@
+Introduction
+------------
+
+The Scope of This Document
+##########################
+
+Computer clusters can be used to provide highly available services or
+resources. The redundancy of multiple machines is used to guard
+against failures of many types.
+
+This document will walk through the installation and setup of simple
+clusters using the |CFS_DISTRO| distribution, version |CFS_DISTRO_VER|.
+
+The clusters described here will use Pacemaker and Corosync to provide
+resource management and messaging. Required packages and modifications
+to their configuration files are described along with the use of the
+``pcs`` command line tool for generating the XML used for cluster
+control.
+
+Pacemaker is a central component and provides the resource management
+required in these systems. This management includes detecting and
+recovering from the failure of various nodes, resources, and services
+under its control.
+
+When more in-depth information is required, and for real-world usage,
+please refer to the `Pacemaker Explained <https://www.clusterlabs.org/pacemaker/doc/>`_
+manual.
+
+.. include:: ../shared/pacemaker-intro.rst
diff --git a/doc/sphinx/Clusters_from_Scratch/shared-storage.rst b/doc/sphinx/Clusters_from_Scratch/shared-storage.rst
new file mode 100644
index 0000000..dea3e58
--- /dev/null
+++ b/doc/sphinx/Clusters_from_Scratch/shared-storage.rst
@@ -0,0 +1,645 @@
+.. index::
+ pair: storage; DRBD
+
+Replicate Storage Using DRBD
+----------------------------
+
+Even if you're serving up static websites, having to manually synchronize
+the contents of that website to all the machines in the cluster is not
+ideal. For dynamic websites, such as a wiki, it's not even an option. Not
+everyone can afford network-attached storage, but somehow the data needs
+to be kept in sync.
+
+Enter DRBD, which can be thought of as network-based RAID-1 [#]_.
+
+Install the DRBD Packages
+#########################
+
+DRBD itself is included in the upstream kernel [#]_, but we do need some
+utilities to use it effectively.
+
+|CFS_DISTRO| does not ship these utilities, so we need to enable a third-party
+repository to get them. Supported packages for many OSes are available from
+DRBD's maker `LINBIT <http://www.linbit.com/>`_, but here we'll use the free
+`ELRepo <http://elrepo.org/>`_ repository.
+
+On both nodes, import the ELRepo package signing key, and enable the
+repository:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
+ [root@pcmk-1 ~]# dnf install -y https://www.elrepo.org/elrepo-release-9.el9.elrepo.noarch.rpm
+
+Now, we can install the DRBD kernel module and utilities:
+
+.. code-block:: console
+
+ # dnf install -y kmod-drbd9x drbd9x-utils
+
+DRBD will not be able to run under the default SELinux security policies.
+If you are familiar with SELinux, you can modify the policies in a more
+fine-grained manner, but here we will simply exempt DRBD processes from SELinux
+control:
+
+.. code-block:: console
+
+ # dnf install -y policycoreutils-python-utils
+ # semanage permissive -a drbd_t
+
+We will configure DRBD to use port 7789, so allow that port from each host to
+the other:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" \
+ source address="192.168.122.102" port port="7789" protocol="tcp" accept'
+ success
+ [root@pcmk-1 ~]# firewall-cmd --reload
+ success
+
+.. code-block:: console
+
+ [root@pcmk-2 ~]# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" \
+ source address="192.168.122.101" port port="7789" protocol="tcp" accept'
+ success
+ [root@pcmk-2 ~]# firewall-cmd --reload
+ success
+
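+The two commands above differ only in the peer address, so with more peers you
+could generate the rule string with a small helper. The ``peer_rule`` name and
+function below are our own sketch, not part of the guide:

```shell
# Hypothetical helper: print the firewall rich rule for a given peer address.
# Use as: firewall-cmd --permanent --add-rich-rule="$(peer_rule 192.168.122.101)"
peer_rule() {
    printf 'rule family="ipv4" source address="%s" port port="7789" protocol="tcp" accept' "$1"
}
peer_rule 192.168.122.102
```

+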
+.. NOTE::
+
+ In this example, we have only two nodes, and all network traffic is on the same LAN.
+ In production, it is recommended to use a dedicated, isolated network for cluster-related traffic,
+ so the firewall configuration would likely be different; one approach would be to
+ add the dedicated network interfaces to the trusted zone.
+
+.. NOTE::
+
+ If the ``firewall-cmd --add-rich-rule`` command fails with ``Error:
+ INVALID_RULE: unknown element`` ensure that there is no space at the
+ beginning of the second line of the command.
+
+Allocate a Disk Volume for DRBD
+###############################
+
+DRBD will need its own block device on each node. This can be
+a physical disk partition or logical volume, of whatever size
+you need for your data. For this document, we will use a 512MiB logical volume,
+which is more than sufficient for a single HTML file and (later) GFS2 metadata.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# vgs
+ VG #PV #LV #SN Attr VSize VFree
+ almalinux_pcmk-1 1 2 0 wz--n- <19.00g <13.00g
+
+ [root@pcmk-1 ~]# lvcreate --name drbd-demo --size 512M almalinux_pcmk-1
+ Logical volume "drbd-demo" created.
+ [root@pcmk-1 ~]# lvs
+ LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
+ drbd-demo almalinux_pcmk-1 -wi-a----- 512.00m
+ root almalinux_pcmk-1 -wi-ao---- 4.00g
+ swap almalinux_pcmk-1 -wi-ao---- 2.00g
+
+Repeat for the second node, making sure to use the same size:
+
+.. code-block:: console
+
+    [root@pcmk-1 ~]# ssh pcmk-2 -- lvcreate --name drbd-demo --size 512M almalinux_pcmk-2
+ Logical volume "drbd-demo" created.
+
+Configure DRBD
+##############
+
+There is no series of commands for building a DRBD configuration, so simply
+run this on both nodes to use this sample configuration:
+
+.. code-block:: console
+
+ # cat <<END >/etc/drbd.d/wwwdata.res
+ resource "wwwdata" {
+ device minor 1;
+ meta-disk internal;
+
+ net {
+ protocol C;
+ allow-two-primaries yes;
+ fencing resource-and-stonith;
+ verify-alg sha1;
+ }
+ handlers {
+ fence-peer "/usr/lib/drbd/crm-fence-peer.9.sh";
+ unfence-peer "/usr/lib/drbd/crm-unfence-peer.9.sh";
+ }
+ on "pcmk-1" {
+ disk "/dev/almalinux_pcmk-1/drbd-demo";
+ node-id 0;
+ }
+ on "pcmk-2" {
+ disk "/dev/almalinux_pcmk-2/drbd-demo";
+ node-id 1;
+ }
+ connection {
+ host "pcmk-1" address 192.168.122.101:7789;
+ host "pcmk-2" address 192.168.122.102:7789;
+ }
+ }
+ END
+
+
+.. IMPORTANT::
+
+ Edit the file to use the hostnames, IP addresses, and logical volume paths
+ of your nodes if they differ from the ones used in this guide.
+
+.. NOTE::
+
+ Detailed information on the directives used in this configuration (and
+ other alternatives) is available in the
+ `DRBD User's Guide
+ <https://linbit.com/drbd-user-guide/drbd-guide-9_0-en/#ch-configure>`_. The
+ guide contains a wealth of information on such topics as core DRBD
+ concepts, replication settings, network connection options, quorum, split-
+ brain handling, administrative tasks, troubleshooting, and responding to
+ disk or node failures, among others.
+
+   The ``allow-two-primaries yes;`` option would not normally be used in
+ an active/passive cluster. We are adding it here for the convenience
+ of changing to an active/active cluster later.
+
+Initialize DRBD
+###############
+
+With the configuration in place, we can now get DRBD running.
+
+These commands create the local metadata for the DRBD resource,
+ensure the DRBD kernel module is loaded, and bring up the DRBD resource.
+Run them on one node:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# drbdadm create-md wwwdata
+ initializing activity log
+ initializing bitmap (16 KB) to all zero
+ Writing meta data...
+ New drbd meta data block successfully created.
+ success
+
+ [root@pcmk-1 ~]# modprobe drbd
+ [root@pcmk-1 ~]# drbdadm up wwwdata
+
+ --== Thank you for participating in the global usage survey ==--
+ The server's response is:
+
+ you are the 25212th user to install this version
+
+We can confirm DRBD's status on this node:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# drbdadm status
+ wwwdata role:Secondary
+ disk:Inconsistent
+ pcmk-2 connection:Connecting
+
+Because we have not yet initialized the data, this node's data is marked as
+``Inconsistent``. Because we have not yet initialized the second node, the
+``pcmk-2`` connection is ``Connecting`` (waiting for connection).
+
+Now, repeat the initialization commands (``drbdadm create-md``, ``modprobe``,
+and ``drbdadm up``) on the second node. After giving it time to connect, when
+we check the status of the first node, it shows:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# drbdadm status
+ wwwdata role:Secondary
+ disk:Inconsistent
+ pcmk-2 role:Secondary
+ peer-disk:Inconsistent
+
+You can see that ``pcmk-2 connection:Connecting`` no longer appears in the
+output, meaning the two DRBD nodes are communicating properly, and both
+nodes are in the ``Secondary`` role with ``Inconsistent`` data.
+
+To make the data consistent, we need to tell DRBD which node should be
+considered to have the correct data. In this case, since we are creating
+a new resource, both have garbage, so we'll just pick ``pcmk-1``
+and run this command on it:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# drbdadm primary --force wwwdata
+
+.. NOTE::
+
+ If you are using a different version of DRBD, the required syntax may be different.
+ See the documentation for your version for how to perform these commands.
+
+If we check the status immediately, we'll see something like this:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# drbdadm status
+ wwwdata role:Primary
+ disk:UpToDate
+ pcmk-2 role:Secondary
+ peer-disk:Inconsistent
+
+It will be quickly followed by this:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# drbdadm status
+ wwwdata role:Primary
+ disk:UpToDate
+ pcmk-2 role:Secondary
+ replication:SyncSource peer-disk:Inconsistent
+
+We can see that the first node has the ``Primary`` role, its partner node has
+the ``Secondary`` role, the first node's data is now considered ``UpToDate``,
+and the partner node's data is still ``Inconsistent``.
+
+After a while, the sync should finish, and you'll see something like:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# drbdadm status
+ wwwdata role:Primary
+ disk:UpToDate
+      pcmk-2 role:Secondary
+ peer-disk:UpToDate
+ [root@pcmk-2 ~]# drbdadm status
+ wwwdata role:Secondary
+ disk:UpToDate
+ pcmk-1 role:Primary
+ peer-disk:UpToDate
+
+Both sets of data are now ``UpToDate``, and we can proceed to creating
+and populating a filesystem for our ``WebSite`` resource's documents.
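+
+Rather than re-running ``drbdadm status`` by eye, a script can watch for the
+``Inconsistent`` keyword to know when the sync has finished. Below is a hedged
+sketch that parses sample text standing in for live ``drbdadm status`` output:

```shell
# Sketch: decide whether a DRBD sync has finished by checking for the
# "Inconsistent" keyword. The sample text replaces `drbdadm status` output.
status='wwwdata role:Primary
  disk:UpToDate
  pcmk-2 role:Secondary
    peer-disk:UpToDate'
if echo "$status" | grep -q Inconsistent; then
    echo "sync still in progress"
else
    echo "all devices UpToDate"
fi
```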
+
+Populate the DRBD Disk
+######################
+
+On the node with the primary role (``pcmk-1`` in this example),
+create a filesystem on the DRBD device:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# mkfs.xfs /dev/drbd1
+ meta-data=/dev/drbd1 isize=512 agcount=4, agsize=32765 blks
+ = sectsz=512 attr=2, projid32bit=1
+ = crc=1 finobt=1, sparse=1, rmapbt=0
+ = reflink=1
+ data = bsize=4096 blocks=131059, imaxpct=25
+ = sunit=0 swidth=0 blks
+ naming =version 2 bsize=4096 ascii-ci=0, ftype=1
+ log =internal log bsize=4096 blocks=1368, version=2
+ = sectsz=512 sunit=0 blks, lazy-count=1
+ realtime =none extsz=4096 blocks=0, rtextents=0
+ Discarding blocks...Done.
+
+.. NOTE::
+
+ In this example, we create an xfs filesystem with no special options.
+ In a production environment, you should choose a filesystem type and
+ options that are suitable for your application.
+
+Mount the newly created filesystem, populate it with our web document,
+give it the same SELinux policy as the web document root,
+then unmount it (the cluster will handle mounting and unmounting it later):
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# mount /dev/drbd1 /mnt
+ [root@pcmk-1 ~]# cat <<-END >/mnt/index.html
+ <html>
+ <body>My Test Site - DRBD</body>
+ </html>
+ END
+ [root@pcmk-1 ~]# chcon -R --reference=/var/www/html /mnt
+ [root@pcmk-1 ~]# umount /dev/drbd1
+
+Configure the Cluster for the DRBD device
+#########################################
+
+One handy feature ``pcs`` has is the ability to queue up several changes
+into a file and commit those changes all at once. To do this, start by
+populating the file with the current raw XML config from the CIB.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs cluster cib drbd_cfg
+
+Using ``pcs``'s ``-f`` option, make changes to the configuration saved
+in the ``drbd_cfg`` file. These changes will not be seen by the cluster until
+the ``drbd_cfg`` file is pushed into the live cluster's CIB later.
+
+Here, we create a cluster resource for the DRBD device, and an additional *clone*
+resource to allow the resource to run on both nodes at the same time.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs -f drbd_cfg resource create WebData ocf:linbit:drbd \
+ drbd_resource=wwwdata op monitor interval=29s role=Promoted \
+ monitor interval=31s role=Unpromoted
+ [root@pcmk-1 ~]# pcs -f drbd_cfg resource promotable WebData \
+ promoted-max=1 promoted-node-max=1 clone-max=2 clone-node-max=1 \
+ notify=true
+ [root@pcmk-1 ~]# pcs resource status
+ * ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-1
+ * WebSite (ocf::heartbeat:apache): Started pcmk-1
+ [root@pcmk-1 ~]# pcs resource config
+ Resource: ClusterIP (class=ocf provider=heartbeat type=IPaddr2)
+ Attributes: cidr_netmask=24 ip=192.168.122.120
+ Operations: monitor interval=30s (ClusterIP-monitor-interval-30s)
+ start interval=0s timeout=20s (ClusterIP-start-interval-0s)
+ stop interval=0s timeout=20s (ClusterIP-stop-interval-0s)
+ Resource: WebSite (class=ocf provider=heartbeat type=apache)
+ Attributes: configfile=/etc/httpd/conf/httpd.conf statusurl=http://localhost/server-status
+ Operations: monitor interval=1min (WebSite-monitor-interval-1min)
+ start interval=0s timeout=40s (WebSite-start-interval-0s)
+ stop interval=0s timeout=60s (WebSite-stop-interval-0s)
+
+After you are satisfied with all the changes, you can commit
+them all at once by pushing the ``drbd_cfg`` file into the live CIB.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs cluster cib-push drbd_cfg --config
+ CIB updated
+
+.. NOTE::
+
+ All the updates above can be done in one shot as follows:
+
+ .. code-block:: console
+
+ [root@pcmk-1 ~]# pcs resource create WebData ocf:linbit:drbd \
+ drbd_resource=wwwdata op monitor interval=29s role=Promoted \
+ monitor interval=31s role=Unpromoted \
+ promotable promoted-max=1 promoted-node-max=1 clone-max=2 \
+ clone-node-max=1 notify=true
+
+Let's see what the cluster did with the new configuration:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs resource status
+ * ClusterIP (ocf:heartbeat:IPaddr2): Started pcmk-2
+ * WebSite (ocf:heartbeat:apache): Started pcmk-2
+ * Clone Set: WebData-clone [WebData] (promotable):
+ * Promoted: [ pcmk-1 ]
+ * Unpromoted: [ pcmk-2 ]
+ [root@pcmk-1 ~]# pcs resource config
+ Resource: ClusterIP (class=ocf provider=heartbeat type=IPaddr2)
+ Attributes: cidr_netmask=24 ip=192.168.122.120
+ Operations: monitor interval=30s (ClusterIP-monitor-interval-30s)
+ start interval=0s timeout=20s (ClusterIP-start-interval-0s)
+ stop interval=0s timeout=20s (ClusterIP-stop-interval-0s)
+ Resource: WebSite (class=ocf provider=heartbeat type=apache)
+ Attributes: configfile=/etc/httpd/conf/httpd.conf statusurl=http://localhost/server-status
+ Operations: monitor interval=1min (WebSite-monitor-interval-1min)
+ start interval=0s timeout=40s (WebSite-start-interval-0s)
+ stop interval=0s timeout=60s (WebSite-stop-interval-0s)
+ Clone: WebData-clone
+ Meta Attrs: clone-max=2 clone-node-max=1 notify=true promotable=true promoted-max=1 promoted-node-max=1
+ Resource: WebData (class=ocf provider=linbit type=drbd)
+ Attributes: drbd_resource=wwwdata
+ Operations: demote interval=0s timeout=90 (WebData-demote-interval-0s)
+ monitor interval=29s role=Promoted (WebData-monitor-interval-29s)
+ monitor interval=31s role=Unpromoted (WebData-monitor-interval-31s)
+ notify interval=0s timeout=90 (WebData-notify-interval-0s)
+ promote interval=0s timeout=90 (WebData-promote-interval-0s)
+ reload interval=0s timeout=30 (WebData-reload-interval-0s)
+ start interval=0s timeout=240 (WebData-start-interval-0s)
+ stop interval=0s timeout=100 (WebData-stop-interval-0s)
+
+We can see that ``WebData-clone`` (our DRBD device) is running as ``Promoted``
+(DRBD's primary role) on ``pcmk-1`` and ``Unpromoted`` (DRBD's secondary role)
+on ``pcmk-2``.
+
+.. IMPORTANT::
+
+ The resource agent should load the DRBD module when needed if it's not already
+ loaded. If that does not happen, configure your operating system to load the
+ module at boot time. For |CFS_DISTRO| |CFS_DISTRO_VER|, you would run this on both
+ nodes:
+
+ .. code-block:: console
+
+ # echo drbd >/etc/modules-load.d/drbd.conf
+
+Configure the Cluster for the Filesystem
+########################################
+
+Now that we have a working DRBD device, we need to mount its filesystem.
+
+In addition to defining the filesystem, we also need to
+tell the cluster where it can be located (only on the DRBD Primary)
+and when it is allowed to start (after the Primary was promoted).
+
+We are going to take a shortcut when creating the resource this time.
+Instead of explicitly saying we want the ``ocf:heartbeat:Filesystem`` script,
+we are only going to ask for ``Filesystem``. We can do this because we know
+there is only one resource script named ``Filesystem`` available to
+Pacemaker, and that ``pcs`` is smart enough to fill in the
+``ocf:heartbeat:`` portion for us correctly in the configuration. If there were
+multiple ``Filesystem`` scripts from different OCF providers, we would need to
+specify the exact one we wanted.
+
+Once again, we will queue our changes to a file and then push the
+new configuration to the cluster as the final step.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs cluster cib fs_cfg
+ [root@pcmk-1 ~]# pcs -f fs_cfg resource create WebFS Filesystem \
+ device="/dev/drbd1" directory="/var/www/html" fstype="xfs"
+ Assumed agent name 'ocf:heartbeat:Filesystem' (deduced from 'Filesystem')
+ [root@pcmk-1 ~]# pcs -f fs_cfg constraint colocation add \
+ WebFS with Promoted WebData-clone
+ [root@pcmk-1 ~]# pcs -f fs_cfg constraint order \
+ promote WebData-clone then start WebFS
+ Adding WebData-clone WebFS (kind: Mandatory) (Options: first-action=promote then-action=start)
+
+We also need to tell the cluster that Apache needs to run on the same
+machine as the filesystem and that it must be active before Apache can
+start.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs -f fs_cfg constraint colocation add WebSite with WebFS
+ [root@pcmk-1 ~]# pcs -f fs_cfg constraint order WebFS then WebSite
+ Adding WebFS WebSite (kind: Mandatory) (Options: first-action=start then-action=start)
+
+Review the updated configuration.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs -f fs_cfg constraint
+ Location Constraints:
+ Resource: WebSite
+ Enabled on:
+ Node: pcmk-1 (score:50)
+ Ordering Constraints:
+ start ClusterIP then start WebSite (kind:Mandatory)
+ promote WebData-clone then start WebFS (kind:Mandatory)
+ start WebFS then start WebSite (kind:Mandatory)
+ Colocation Constraints:
+ WebSite with ClusterIP (score:INFINITY)
+ WebFS with WebData-clone (score:INFINITY) (rsc-role:Started) (with-rsc-role:Promoted)
+ WebSite with WebFS (score:INFINITY)
+ Ticket Constraints:
+
+After reviewing the new configuration, upload it and watch the
+cluster put it into effect.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs cluster cib-push fs_cfg --config
+ CIB updated
+ [root@pcmk-1 ~]# pcs resource status
+ * ClusterIP (ocf:heartbeat:IPaddr2): Started pcmk-2
+ * WebSite (ocf:heartbeat:apache): Started pcmk-2
+ * Clone Set: WebData-clone [WebData] (promotable):
+ * Promoted: [ pcmk-2 ]
+ * Unpromoted: [ pcmk-1 ]
+ * WebFS (ocf:heartbeat:Filesystem): Started pcmk-2
+ [root@pcmk-1 ~]# pcs resource config
+ Resource: ClusterIP (class=ocf provider=heartbeat type=IPaddr2)
+ Attributes: cidr_netmask=24 ip=192.168.122.120
+ Operations: monitor interval=30s (ClusterIP-monitor-interval-30s)
+ start interval=0s timeout=20s (ClusterIP-start-interval-0s)
+ stop interval=0s timeout=20s (ClusterIP-stop-interval-0s)
+ Resource: WebSite (class=ocf provider=heartbeat type=apache)
+ Attributes: configfile=/etc/httpd/conf/httpd.conf statusurl=http://localhost/server-status
+ Operations: monitor interval=1min (WebSite-monitor-interval-1min)
+ start interval=0s timeout=40s (WebSite-start-interval-0s)
+ stop interval=0s timeout=60s (WebSite-stop-interval-0s)
+ Clone: WebData-clone
+ Meta Attrs: clone-max=2 clone-node-max=1 notify=true promotable=true promoted-max=1 promoted-node-max=1
+ Resource: WebData (class=ocf provider=linbit type=drbd)
+ Attributes: drbd_resource=wwwdata
+ Operations: demote interval=0s timeout=90 (WebData-demote-interval-0s)
+ monitor interval=29s role=Promoted (WebData-monitor-interval-29s)
+ monitor interval=31s role=Unpromoted (WebData-monitor-interval-31s)
+ notify interval=0s timeout=90 (WebData-notify-interval-0s)
+ promote interval=0s timeout=90 (WebData-promote-interval-0s)
+ reload interval=0s timeout=30 (WebData-reload-interval-0s)
+ start interval=0s timeout=240 (WebData-start-interval-0s)
+ stop interval=0s timeout=100 (WebData-stop-interval-0s)
+ Resource: WebFS (class=ocf provider=heartbeat type=Filesystem)
+ Attributes: device=/dev/drbd1 directory=/var/www/html fstype=xfs
+ Operations: monitor interval=20s timeout=40s (WebFS-monitor-interval-20s)
+ start interval=0s timeout=60s (WebFS-start-interval-0s)
+ stop interval=0s timeout=60s (WebFS-stop-interval-0s)
+
+Test Cluster Failover
+#####################
+
+Previously, we used ``pcs cluster stop pcmk-2`` to stop all cluster
+services on ``pcmk-2``, failing over the cluster resources, but there is another
+way to safely simulate node failure.
+
+We can put the node into *standby mode*. Nodes in this state continue to
+run ``corosync`` and ``pacemaker`` but are not allowed to run resources. Any
+resources found active there will be moved elsewhere. This feature can be
+particularly useful when performing system administration tasks such as
+updating packages used by cluster resources.
+
+Put the active node into standby mode, and observe the cluster move all
+the resources to the other node. The node's status will change to indicate that
+it can no longer host resources, and eventually all the resources will move.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs node standby pcmk-2
+ [root@pcmk-1 ~]# pcs status
+ Cluster name: mycluster
+ Cluster Summary:
+ * Stack: corosync
+ * Current DC: pcmk-1 (version 2.1.2-4.el9-ada5c3b36e2) - partition with quorum
+ * Last updated: Wed Jul 27 05:28:01 2022
+ * Last change: Wed Jul 27 05:27:57 2022 by root via cibadmin on pcmk-1
+ * 2 nodes configured
+ * 6 resource instances configured
+
+ Node List:
+ * Node pcmk-2: standby
+ * Online: [ pcmk-1 ]
+
+ Full List of Resources:
+ * fence_dev (stonith:some_fence_agent): Started pcmk-1
+ * ClusterIP (ocf:heartbeat:IPaddr2): Started pcmk-1
+ * WebSite (ocf:heartbeat:apache): Started pcmk-1
+ * Clone Set: WebData-clone [WebData] (promotable):
+ * Promoted: [ pcmk-1 ]
+ * Stopped: [ pcmk-2 ]
+ * WebFS (ocf:heartbeat:Filesystem): Started pcmk-1
+
+ Daemon Status:
+ corosync: active/disabled
+ pacemaker: active/disabled
+ pcsd: active/enabled
+
+Once we've done everything we needed to do on ``pcmk-2`` (in this case
+nothing; we just wanted to see the resources move), we can unstandby the
+node, making it eligible to host resources again.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs node unstandby pcmk-2
+ [root@pcmk-1 ~]# pcs status
+ Cluster name: mycluster
+ Cluster Summary:
+ * Stack: corosync
+ * Current DC: pcmk-1 (version 2.1.2-4.el9-ada5c3b36e2) - partition with quorum
+ * Last updated: Wed Jul 27 05:28:50 2022
+ * Last change: Wed Jul 27 05:28:47 2022 by root via cibadmin on pcmk-1
+ * 2 nodes configured
+ * 6 resource instances configured
+
+ Node List:
+ * Online: [ pcmk-1 pcmk-2 ]
+
+ Full List of Resources:
+ * fence_dev (stonith:some_fence_agent): Started pcmk-1
+ * ClusterIP (ocf:heartbeat:IPaddr2): Started pcmk-1
+ * WebSite (ocf:heartbeat:apache): Started pcmk-1
+ * Clone Set: WebData-clone [WebData] (promotable):
+ * Promoted: [ pcmk-1 ]
+ * Unpromoted: [ pcmk-2 ]
+ * WebFS (ocf:heartbeat:Filesystem): Started pcmk-1
+
+ Daemon Status:
+ corosync: active/disabled
+ pacemaker: active/disabled
+ pcsd: active/enabled
+
+Notice that ``pcmk-2`` is back to the ``Online`` state, and that the cluster
+resources stay where they are due to our resource stickiness settings
+configured earlier.
+
+.. [#] See http://www.drbd.org for details.
+
+.. [#] Since version 2.6.33
diff --git a/doc/sphinx/Clusters_from_Scratch/verification.rst b/doc/sphinx/Clusters_from_Scratch/verification.rst
new file mode 100644
index 0000000..08fab31
--- /dev/null
+++ b/doc/sphinx/Clusters_from_Scratch/verification.rst
@@ -0,0 +1,222 @@
+Start and Verify Cluster
+------------------------
+
+Start the Cluster
+#################
+
+Now that Corosync is configured, it is time to start the cluster.
+The command below will start the ``corosync`` and ``pacemaker`` services on
+both nodes in the cluster.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs cluster start --all
+ pcmk-1: Starting Cluster...
+ pcmk-2: Starting Cluster...
+
+.. NOTE::
+
+ An alternative to using the ``pcs cluster start --all`` command
+ is to issue either of the below command sequences on each node in the
+ cluster separately:
+
+ .. code-block:: console
+
+ # pcs cluster start
+ Starting Cluster...
+
+ or
+
+ .. code-block:: console
+
+ # systemctl start corosync.service
+ # systemctl start pacemaker.service
+
+.. IMPORTANT::
+
+ In this example, we are not enabling the ``corosync`` and ``pacemaker``
+ services to start at boot. If a cluster node fails or is rebooted, you will
+ need to run ``pcs cluster start [<NODENAME> | --all]`` to start the cluster
+ on it. While you can enable the services to start at boot (for example,
+ using ``pcs cluster enable [<NODENAME> | --all]``), requiring a manual
+ start of cluster services gives you the opportunity to do a post-mortem
+ investigation of a node failure before returning it to the cluster.
+
+Verify Corosync Installation
+############################
+
+First, use ``corosync-cfgtool`` to check whether cluster communication is happy:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# corosync-cfgtool -s
+ Local node ID 1, transport knet
+ LINK ID 0 udp
+ addr = 192.168.122.101
+ status:
+ nodeid: 1: localhost
+ nodeid: 2: connected
+
+We can see here that everything appears normal with our fixed IP address (not a
+``127.0.0.x`` loopback address) listed as the ``addr``, and ``localhost`` and
+``connected`` for the statuses of nodeid 1 and nodeid 2, respectively.
+
+If you see something different, you might want to start by checking
+the node's network, firewall, and SELinux configurations.
+
+Next, check the membership and quorum APIs:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# corosync-cmapctl | grep members
+ runtime.members.1.config_version (u64) = 0
+ runtime.members.1.ip (str) = r(0) ip(192.168.122.101)
+ runtime.members.1.join_count (u32) = 1
+ runtime.members.1.status (str) = joined
+ runtime.members.2.config_version (u64) = 0
+ runtime.members.2.ip (str) = r(0) ip(192.168.122.102)
+ runtime.members.2.join_count (u32) = 1
+ runtime.members.2.status (str) = joined
+
+ [root@pcmk-1 ~]# pcs status corosync
+
+ Membership information
+ ----------------------
+ Nodeid Votes Name
+ 1 1 pcmk-1 (local)
+ 2 1 pcmk-2
+
+You should see both nodes have joined the cluster.
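+If you prefer to script this check, the key fact is simply that every expected
+member reports a status of ``joined``. A minimal sketch (the
+``corosync-cmapctl`` output is inlined as sample text here for illustration;
+on a live node you would pipe the real command's output instead):

```shell
# Count members reporting "joined" in corosync-cmapctl-style output.
# The sample text below is inlined for illustration; on a live node,
# replace it with: corosync-cmapctl | grep members
sample='runtime.members.1.status (str) = joined
runtime.members.2.status (str) = joined'

joined=$(printf '%s\n' "$sample" | grep -c 'status (str) = joined')

if [ "$joined" -eq 2 ]; then
    echo "all 2 expected members joined"
else
    echo "only $joined of 2 members joined"
fi
```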
+
+Verify Pacemaker Installation
+#############################
+
+Now that we have confirmed that Corosync is functional, we can check
+the rest of the stack. Pacemaker has already been started, so verify
+the necessary processes are running:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# ps axf
+ PID TTY STAT TIME COMMAND
+ 2 ? S 0:00 [kthreadd]
+ ...lots of processes...
+ 17121 ? SLsl 0:01 /usr/sbin/corosync -f
+ 17133 ? Ss 0:00 /usr/sbin/pacemakerd
+ 17134 ? Ss 0:00 \_ /usr/libexec/pacemaker/pacemaker-based
+ 17135 ? Ss 0:00 \_ /usr/libexec/pacemaker/pacemaker-fenced
+ 17136 ? Ss 0:00 \_ /usr/libexec/pacemaker/pacemaker-execd
+ 17137 ? Ss 0:00 \_ /usr/libexec/pacemaker/pacemaker-attrd
+ 17138 ? Ss 0:00 \_ /usr/libexec/pacemaker/pacemaker-schedulerd
+ 17139 ? Ss 0:00 \_ /usr/libexec/pacemaker/pacemaker-controld
+
+If that looks OK, check the ``pcs status`` output:
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs status
+ Cluster name: mycluster
+
+ WARNINGS:
+ No stonith devices and stonith-enabled is not false
+
+ Cluster Summary:
+ * Stack: corosync
+ * Current DC: pcmk-2 (version 2.1.2-4.el9-ada5c3b36e2) - partition with quorum
+ * Last updated: Wed Jul 27 00:09:55 2022
+ * Last change: Wed Jul 27 00:07:08 2022 by hacluster via crmd on pcmk-2
+ * 2 nodes configured
+ * 0 resource instances configured
+
+ Node List:
+ * Online: [ pcmk-1 pcmk-2 ]
+
+ Full List of Resources:
+ * No resources
+
+ Daemon Status:
+ corosync: active/disabled
+ pacemaker: active/disabled
+ pcsd: active/enabled
+
+Finally, ensure there are no start-up errors from ``corosync`` or ``pacemaker``
+(aside from messages relating to not having STONITH configured, which are OK at
+this point):
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# journalctl -b | grep -i error
+
+.. NOTE::
+
+ Other operating systems may report startup errors in other locations
+ (for example, ``/var/log/messages``).
+
+Repeat these checks on the other node. The results should be the same.
+
+Explore the Existing Configuration
+##################################
+
+For those who are not afraid of XML, you can see the raw cluster
+configuration and status by using the ``pcs cluster cib`` command.
+
+.. topic:: The last XML you'll see in this document
+
+ .. code-block:: console
+
+ [root@pcmk-1 ~]# pcs cluster cib
+
+ .. code-block:: xml
+
+ <cib crm_feature_set="3.13.0" validate-with="pacemaker-3.8" epoch="5" num_updates="4" admin_epoch="0" cib-last-written="Wed Jul 27 00:07:08 2022" update-origin="pcmk-2" update-client="crmd" update-user="hacluster" have-quorum="1" dc-uuid="2">
+ <configuration>
+ <crm_config>
+ <cluster_property_set id="cib-bootstrap-options">
+ <nvpair id="cib-bootstrap-options-have-watchdog" name="have-watchdog" value="false"/>
+ <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="2.1.2-4.el9-ada5c3b36e2"/>
+ <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
+ <nvpair id="cib-bootstrap-options-cluster-name" name="cluster-name" value="mycluster"/>
+ </cluster_property_set>
+ </crm_config>
+ <nodes>
+ <node id="1" uname="pcmk-1"/>
+ <node id="2" uname="pcmk-2"/>
+ </nodes>
+ <resources/>
+ <constraints/>
+ <rsc_defaults>
+ <meta_attributes id="build-resource-defaults">
+ <nvpair id="build-resource-stickiness" name="resource-stickiness" value="1"/>
+ </meta_attributes>
+ </rsc_defaults>
+ </configuration>
+ <status>
+ <node_state id="2" uname="pcmk-2" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
+ <lrm id="2">
+ <lrm_resources/>
+ </lrm>
+ </node_state>
+ <node_state id="1" uname="pcmk-1" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
+ <lrm id="1">
+ <lrm_resources/>
+ </lrm>
+ </node_state>
+ </status>
+ </cib>
+
+Before we make any changes, it's a good idea to check the validity of
+the configuration.
+
+.. code-block:: console
+
+ [root@pcmk-1 ~]# pcs cluster verify --full
+ Error: invalid cib:
+ (unpack_resources) error: Resource start-up disabled since no STONITH resources have been defined
+ (unpack_resources) error: Either configure some or disable STONITH with the stonith-enabled option
+ (unpack_resources) error: NOTE: Clusters with shared data need STONITH to ensure data integrity
+ crm_verify: Errors found during check: config not valid
+
+ Error: Errors have occurred, therefore pcs is unable to continue
+
+As you can see, the tool has found some errors. The cluster will not start any
+resources until we configure STONITH.
diff --git a/doc/sphinx/Makefile.am b/doc/sphinx/Makefile.am
new file mode 100644
index 0000000..c4ade5c
--- /dev/null
+++ b/doc/sphinx/Makefile.am
@@ -0,0 +1,198 @@
+#
+# Copyright 2003-2023 the Pacemaker project contributors
+#
+# The version control history for this file may have further details.
+#
+# This source code is licensed under the GNU General Public License version 2
+# or later (GPLv2+) WITHOUT ANY WARRANTY.
+#
+include $(top_srcdir)/mk/common.mk
+
+# Define release-related variables
+include $(top_srcdir)/mk/release.mk
+
+# Things you might want to override on the command line
+
+# Books to generate
+BOOKS ?= Clusters_from_Scratch \
+ Pacemaker_Administration \
+ Pacemaker_Development \
+ Pacemaker_Explained \
+ Pacemaker_Python_API \
+ Pacemaker_Remote
+
+# Output formats to generate. Possible values:
+# html (multiple HTML files)
+# dirhtml (HTML files named index.html in multiple directories)
+# singlehtml (a single large HTML file)
+# text
+# pdf
+# epub
+# latex
+# linkcheck (not actually a format; check validity of external links)
+#
+# The results will end up in <book>/_build/<format>
+BOOK_FORMATS ?= singlehtml
+
+# Set to "a4paper" or "letterpaper" if building latex format
+PAPER ?= letterpaper
+
+# Additional options for sphinx-build
+SPHINXFLAGS ?=
+
+# toplevel rsync destination for www targets (without trailing slash)
+RSYNC_DEST ?= root@www.clusterlabs.org:/var/www/html
+
+# End of useful overrides
+
+
+# Example scheduler transition graphs
+# @TODO The original CIB XML for these is long lost. Ideally, we would recreate
+# something similar and keep those here instead of the DOTs (or use a couple of
+# scheduler regression test inputs instead), then regenerate the SVG
+# equivalents using crm_simulate and dot when making a release.
+DOTS = $(wildcard shared/images/*.dot)
+
+# Vector sources for generated PNGs (including SVG equivalents of DOTS, created
+# manually using dot)
+SVGS = $(wildcard shared/images/pcmk-*.svg) $(DOTS:%.dot=%.svg)
+
+# PNG images generated from SVGS
+#
+# These will not be accessible in a VPATH build, which will generate warnings
+# when building the documentation, but the make will still succeed. It is
+# nontrivial to get them working for VPATH builds and not worth the effort.
+PNGS_GENERATED = $(SVGS:%.svg=%.png)
+
+# Original PNG image sources
+PNGS_Clusters_from_Scratch = $(wildcard Clusters_from_Scratch/images/*.png)
+PNGS_Pacemaker_Explained = $(wildcard Pacemaker_Explained/images/*.png)
+PNGS_Pacemaker_Remote = $(wildcard Pacemaker_Remote/images/*.png)
+
+STATIC_FILES = $(wildcard _static/*.css)
+
+EXTRA_DIST = $(wildcard */*.rst) $(DOTS) $(SVGS) \
+ $(PNGS_Clusters_from_Scratch) \
+ $(PNGS_Pacemaker_Explained) \
+ $(PNGS_Pacemaker_Remote) \
+ $(wildcard Pacemaker_Python_API/_templates/*rst) \
+ $(STATIC_FILES) \
+ conf.py.in
+
+# recursive, preserve symlinks/permissions/times, verbose, compress,
+# don't cross filesystems, sparse, show progress
+RSYNC_OPTS = -rlptvzxS --progress
+
+BOOK_RSYNC_DEST = $(RSYNC_DEST)/$(PACKAGE)/doc/$(PACKAGE_SERIES)
+
+BOOK = none
+
+DEPS_intro = shared/pacemaker-intro.rst $(PNGS_GENERATED)
+
+DEPS_Clusters_from_Scratch = $(DEPS_intro) $(PNGS_Clusters_from_Scratch)
+DEPS_Pacemaker_Administration = $(DEPS_intro)
+DEPS_Pacemaker_Development =
+DEPS_Pacemaker_Explained = $(DEPS_intro) $(PNGS_Pacemaker_Explained)
+DEPS_Pacemaker_Python_API = ../../python
+DEPS_Pacemaker_Remote = $(PNGS_Pacemaker_Remote)
+
+if BUILD_SPHINX_DOCS
+
+INKSCAPE_CMD = $(INKSCAPE) --export-dpi=90 -C
+
+# Pattern rule to generate PNGs from SVGs
+# (--export-png works with Inkscape <1.0, --export-filename with >=1.0;
+# create the destination directory in case this is a VPATH build)
+%.png: %.svg
+ $(AM_V_at)-$(MKDIR_P) "$(shell dirname "$@")"
+ $(AM_V_GEN) { \
+ $(INKSCAPE_CMD) --export-png="$@" "$<" 2>/dev/null \
+ || $(INKSCAPE_CMD) --export-filename="$@" "$<"; \
+ } $(PCMK_quiet)
+
+# Create a book's Sphinx configuration.
+# Create the book directory in case this is a VPATH build.
+$(BOOKS:%=%/conf.py): conf.py.in
+ $(AM_V_at)-$(MKDIR_P) "$(@:%/conf.py=%)"
+ $(AM_V_GEN)sed \
+ -e 's/%VERSION%/$(VERSION)/g' \
+ -e 's/%BOOK_ID%/$(@:%/conf.py=%)/g' \
+ -e 's/%BOOK_TITLE%/$(subst _, ,$(@:%/conf.py=%))/g' \
+ -e 's#%SRC_DIR%#$(abs_srcdir)#g' \
+ -e 's#%ABS_TOP_SRCDIR%#$(abs_top_srcdir)#g' \
+ $(<) > "$@"
+
+$(BOOK)/_build: $(STATIC_FILES) $(BOOK)/conf.py $(DEPS_$(BOOK)) $(wildcard $(srcdir)/$(BOOK)/*.rst)
+ @echo 'Building "$(subst _, ,$(BOOK))" because of $?' $(PCMK_quiet)
+ $(AM_V_at)rm -rf "$@"
+ $(AM_V_BOOK)for format in $(BOOK_FORMATS); do \
+ echo -e "\n * Building $$format" $(PCMK_quiet); \
+ doctrees="doctrees"; \
+ real_format="$$format"; \
+ case "$$format" in \
+ pdf) real_format="latex" ;; \
+ gettext) doctrees="gettext-doctrees" ;; \
+ esac; \
+ $(SPHINX) -b "$$real_format" -d "$@/$$doctrees" \
+ -c "$(builddir)/$(BOOK)" \
+ -D latex_elements.papersize=$(PAPER) \
+ $(SPHINXFLAGS) \
+ "$(srcdir)/$(BOOK)" "$@/$$format" \
+ $(PCMK_quiet); \
+ if [ "$$format" = "pdf" ]; then \
+ $(MAKE) $(AM_MAKEFLAGS) -C "$@/$$format" \
+ all-pdf; \
+ fi; \
+ done
+endif
+
+build-$(PACKAGE_SERIES).txt: all
+ $(AM_V_GEN)echo "Generated on `date --utc` from version $(TAG)" > "$@"
+
+.PHONY: books-upload
+books-upload: all build-$(PACKAGE_SERIES).txt
+if BUILD_SPHINX_DOCS
+ @echo "Uploading $(PACKAGE_SERIES) documentation set"
+ @for book in $(BOOKS); do \
+ echo " * $$book"; \
+ rsync $(RSYNC_OPTS) $(BOOK_FORMATS:%=$$book/_build/%) \
+ "$(BOOK_RSYNC_DEST)/$$book/"; \
+ done
+ @rsync $(RSYNC_OPTS) "$(builddir)/build-$(PACKAGE_SERIES).txt" \
+ "$(RSYNC_DEST)/$(PACKAGE)/doc"
+
+all-local:
+ @for book in $(BOOKS); do \
+ $(MAKE) $(AM_MAKEFLAGS) BOOK=$$book \
+ PAPER="$(PAPER)" SPHINXFLAGS="$(SPHINXFLAGS)" \
+ BOOK_FORMATS="$(BOOK_FORMATS)" $$book/_build; \
+ done
+
+install-data-local: all-local
+ $(AM_V_at)for book in $(BOOKS); do \
+ for format in $(BOOK_FORMATS); do \
+ formatdir="$$book/_build/$$format"; \
+ for f in `find "$$formatdir" -print`; do \
+ dname="`echo $$f | sed s:_build/::`"; \
+ dloc="$(DESTDIR)/$(docdir)/$$dname"; \
+ if [ -d "$$f" ]; then \
+ $(INSTALL) -d -m 755 "$$dloc"; \
+ else \
+ $(INSTALL_DATA) "$$f" "$$dloc"; \
+ fi \
+ done; \
+ done; \
+ done
+
+uninstall-local:
+ $(AM_V_at)for book in $(BOOKS); do \
+ rm -rf "$(DESTDIR)/$(docdir)/$$book"; \
+ done
+endif
+
+clean-local:
+ $(AM_V_at)-rm -rf \
+ $(BOOKS:%="$(builddir)/%/_build") \
+ $(BOOKS:%="$(builddir)/%/conf.py") \
+ $(BOOKS:%="$(builddir)/%/generated") \
+ $(PNGS_GENERATED)
diff --git a/doc/sphinx/Pacemaker_Administration/agents.rst b/doc/sphinx/Pacemaker_Administration/agents.rst
new file mode 100644
index 0000000..e5b17e2
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Administration/agents.rst
@@ -0,0 +1,443 @@
+.. index::
+ single: resource agent
+
+Resource Agents
+---------------
+
+
+Action Completion
+#################
+
+If one resource depends on another resource via constraints, the cluster will
+interpret an expected result as sufficient to continue with dependent actions.
+This can cause timing problems if the agent's start returns once the service
+has merely been launched but is not yet fully ready to perform its function,
+or if its stop returns before the service has fully released all of its claims
+on system resources. At a minimum, the start or stop should not return before
+a status command would return the expected (started or stopped) result.
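+One way to honor this rule is for the agent's start action to poll the
+service's own status and return only once the probe succeeds. A minimal
+sketch, where ``launch_service`` and ``service_is_ready`` are hypothetical
+stand-ins for a real agent's logic:

```shell
# Sketch: a start action that blocks until a status probe confirms the
# service is fully active. launch_service and service_is_ready are
# hypothetical stand-ins; a real agent would start and probe the daemon.
READY_AFTER=3   # pretend the service needs 3 probes to become ready
probes=0

launch_service() { :; }   # would fork/exec the real daemon here

service_is_ready() {
    probes=$((probes + 1))
    [ "$probes" -ge "$READY_AFTER" ]
}

start() {
    launch_service
    # Do not report success until a status check would say "started"
    while ! service_is_ready; do
        sleep 0   # a real agent would sleep ~1s between probes
    done
    return 0      # OCF_SUCCESS
}

start
echo "start returned $? after $probes probes"
```

+A real agent would also bound the polling with its own timeout; Pacemaker
+will in any case time the operation out if it runs too long.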
+
+
+.. index::
+ single: OCF resource agent
+ single: resource agent; OCF
+
+OCF Resource Agents
+###################
+
+.. index::
+ single: OCF resource agent; location
+
+Location of Custom Scripts
+__________________________
+
+OCF Resource Agents are found in ``/usr/lib/ocf/resource.d/$PROVIDER``.
+
+When creating your own agents, you are encouraged to create a new directory
+under ``/usr/lib/ocf/resource.d/`` so that they are not confused with (or
+overwritten by) the agents shipped by existing providers.
+
+So, for example, if you choose the provider name ``big-corp`` and want a new
+resource named ``big-app``, you would create a resource agent called
+``/usr/lib/ocf/resource.d/big-corp/big-app`` and define a resource:
+
+.. code-block:: xml
+
+ <primitive id="custom-app" class="ocf" provider="big-corp" type="big-app"/>
+
+
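+The agent itself is simply an executable that dispatches on its first
+argument. Below is a minimal, runnable sketch of what ``big-app`` might look
+like; the service is faked with a state file so only the dispatch structure
+is shown, and a real agent would manage an actual daemon and emit full OCF
+meta-data:

```shell
# Minimal OCF-style agent skeleton for the hypothetical big-corp/big-app
# example. The "service" is faked with a state file; a real agent would
# start, stop, and probe an actual daemon.
OCF_SUCCESS=0
OCF_ERR_UNIMPLEMENTED=3
OCF_NOT_RUNNING=7

STATE="${TMPDIR:-/tmp}/big-app.state"

big_app_start()   { touch "$STATE"; }
big_app_stop()    { rm -f "$STATE"; }
big_app_monitor() {
    # Return 0 if running, 7 if cleanly stopped (per the OCF standard)
    if [ -f "$STATE" ]; then return $OCF_SUCCESS; fi
    return $OCF_NOT_RUNNING
}

big_app() {
    case "$1" in
        start)     big_app_start ;;
        stop)      big_app_stop ;;
        monitor)   big_app_monitor ;;
        meta-data) echo '<resource-agent name="big-app"/>' ;;
        *)         return $OCF_ERR_UNIMPLEMENTED ;;
    esac
}

big_app start
big_app monitor; running_rc=$?
big_app stop
stopped_rc=0
big_app monitor || stopped_rc=$?
echo "monitor while running: $running_rc; after stop: $stopped_rc"
```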
+.. index::
+ single: OCF resource agent; action
+
+Actions
+_______
+
+All OCF resource agents are required to implement the following actions.
+
+.. table:: **Required Actions for OCF Agents**
+
+ +--------------+-------------+------------------------------------------------+
+ | Action | Description | Instructions |
+ +==============+=============+================================================+
+ | start | Start the | .. index:: |
+ | | resource | single: OCF resource agent; start |
+ | | | single: start action |
+ | | | |
+ | | | Return 0 on success and an appropriate |
+ | | | error code otherwise. Must not report |
+ | | | success until the resource is fully |
+ | | | active. |
+ +--------------+-------------+------------------------------------------------+
+ | stop | Stop the | .. index:: |
+ | | resource | single: OCF resource agent; stop |
+ | | | single: stop action |
+ | | | |
+ | | | Return 0 on success and an appropriate |
+ | | | error code otherwise. Must not report |
+ | | | success until the resource is fully |
+ | | | stopped. |
+ +--------------+-------------+------------------------------------------------+
+ | monitor | Check the | .. index:: |
+ | | resource's | single: OCF resource agent; monitor |
+ | | state | single: monitor action |
+ | | | |
+ | | | Exit 0 if the resource is running, 7 |
+ | | | if it is stopped, and any other OCF |
+ | | | exit code if it is failed. NOTE: The |
+ | | | monitor script should test the state |
+ | | | of the resource on the local machine |
+ | | | only. |
+ +--------------+-------------+------------------------------------------------+
+ | meta-data | Describe | .. index:: |
+ | | the | single: OCF resource agent; meta-data |
+ | | resource | single: meta-data action |
+ | | | |
+ | | | Provide information about this |
+ | | | resource in the XML format defined by |
+ | | | the OCF standard. Exit with 0. NOTE: |
+ | | | This is *not* required to be performed |
+ | | | as root. |
+ +--------------+-------------+------------------------------------------------+
+
+OCF resource agents may optionally implement additional actions. Some are used
+only with advanced resource types such as clones.
+
+.. table:: **Optional Actions for OCF Resource Agents**
+
+ +--------------+-------------+------------------------------------------------+
+ | Action | Description | Instructions |
+ +==============+=============+================================================+
+ | validate-all | This should | .. index:: |
+ | | validate | single: OCF resource agent; validate-all |
+ | | the | single: validate-all action |
+ | | instance | |
+ | | parameters | Return 0 if parameters are valid, 2 if |
+ | | provided. | not valid, and 6 if resource is not |
+ | | | configured. |
+ +--------------+-------------+------------------------------------------------+
+ | promote | Bring the | .. index:: |
+ | | local | single: OCF resource agent; promote |
+ | | instance of | single: promote action |
+ | | a promotable| |
+ | | clone | Return 0 on success |
+ | | resource to | |
+ | | the promoted| |
+ | | role. | |
+ +--------------+-------------+------------------------------------------------+
+ | demote | Bring the | .. index:: |
+ | | local | single: OCF resource agent; demote |
+ | | instance of | single: demote action |
+ | | a promotable| |
+ | | clone | Return 0 on success |
+ | | resource to | |
+ | | the | |
+ | | unpromoted | |
+ | | role. | |
+ +--------------+-------------+------------------------------------------------+
+ | notify | Used by the | .. index:: |
+ | | cluster to | single: OCF resource agent; notify |
+ | | send | single: notify action |
+ | | the agent | |
+ | | pre- and | Must not fail. Must exit with 0 |
+ | | post- | |
+ | | notification| |
+ | | events | |
+ | | telling the | |
+ | | resource | |
+ | | what has | |
+ | | happened and| |
+ | | will happen.| |
+ +--------------+-------------+------------------------------------------------+
+ | reload | Reload the | .. index:: |
+ | | service's | single: OCF resource agent; reload |
+ | | own | single: reload action |
+ | | config. | |
+ | | | Not used by Pacemaker |
+ +--------------+-------------+------------------------------------------------+
+ | reload-agent | Make | .. index:: |
+ | | effective | single: OCF resource agent; reload-agent |
+ | | any changes | single: reload-agent action |
+ | | in instance | |
+ | | parameters | This is used when the agent can handle a |
+ | | marked as | change in some of its parameters more |
+ | | reloadable | efficiently than stopping and starting the |
+ | | in the | resource. |
+ | | agent's | |
+ | | meta-data. | |
+ +--------------+-------------+------------------------------------------------+
+ | recover | Restart the | .. index:: |
+ | | service. | single: OCF resource agent; recover |
+ | | | single: recover action |
+ | | | |
+ | | | Not used by Pacemaker |
+ +--------------+-------------+------------------------------------------------+
+
+.. important::
+
+ If you create a new OCF resource agent, use `ocf-tester` to verify that the
+ agent complies with the OCF standard properly.
+
+
+.. index::
+ single: OCF resource agent; return code
+
+How are OCF Return Codes Interpreted?
+_____________________________________
+
+The first thing the cluster does is to check the return code against
+the expected result. If the result does not match the expected value,
+then the operation is considered to have failed, and recovery action is
+initiated.
+
+There are three types of failure recovery:
+
+.. table:: **Types of recovery performed by the cluster**
+
+ +-------+--------------------------------------------+--------------------------------------+
+ | Type | Description | Action Taken by the Cluster |
+ +=======+============================================+======================================+
+ | soft | .. index:: | Restart the resource or move it to a |
+ | | single: OCF resource agent; soft error | new location |
+ | | | |
+ | | A transient error occurred | |
+ +-------+--------------------------------------------+--------------------------------------+
+ | hard | .. index:: | Move the resource elsewhere and |
+ | | single: OCF resource agent; hard error | prevent it from being retried on the |
+ | | | current node |
+ | | A non-transient error that | |
+ | | may be specific to the | |
+ | | current node | |
+ +-------+--------------------------------------------+--------------------------------------+
+ | fatal | .. index:: | Stop the resource and prevent it |
+ | | single: OCF resource agent; fatal error | from being started on any cluster |
+ | | | node |
+ | | A non-transient error that | |
+ | | will be common to all | |
+ | | cluster nodes (e.g. a bad | |
+ | | configuration was specified) | |
+ +-------+--------------------------------------------+--------------------------------------+
+
+.. _ocf_return_codes:
+
+OCF Return Codes
+________________
+
+The following table outlines the different OCF return codes and the type of
+recovery the cluster will initiate when a failure code is received.
+Counterintuitively, even actions that return 0 (``OCF_SUCCESS``) can be
+considered to have failed, if 0 was not the expected return value.
+
+.. table:: **OCF Exit Codes and their Recovery Types**
+
+ +-------+-----------------------+---------------------------------------------------+----------+
+ | Exit | OCF Alias | Description | Recovery |
+ | Code | | | |
+ +=======+=======================+===================================================+==========+
+ | 0 | OCF_SUCCESS | .. index:: | soft |
+ | | | single: OCF_SUCCESS | |
+ | | | single: OCF return code; OCF_SUCCESS | |
+ | | | pair: OCF return code; 0 | |
+ | | | | |
+ | | | Success. The command completed successfully. | |
+ | | | This is the expected result for all start, | |
+ | | | stop, promote and demote commands. | |
+ +-------+-----------------------+---------------------------------------------------+----------+
+ | 1 | OCF_ERR_GENERIC | .. index:: | soft |
+ | | | single: OCF_ERR_GENERIC | |
+ | | | single: OCF return code; OCF_ERR_GENERIC | |
+ | | | pair: OCF return code; 1 | |
+ | | | | |
+ | | | Generic "there was a problem" error code. | |
+ +-------+-----------------------+---------------------------------------------------+----------+
+ | 2 | OCF_ERR_ARGS | .. index:: | hard |
+ | | | single: OCF_ERR_ARGS | |
+ | | | single: OCF return code; OCF_ERR_ARGS | |
+ | | | pair: OCF return code; 2 | |
+ | | | | |
+ | | | The resource's parameter values are not valid on | |
+ | | | this machine (for example, a value refers to a | |
+ | | | file not found on the local host). | |
+ +-------+-----------------------+---------------------------------------------------+----------+
+ | 3 | OCF_ERR_UNIMPLEMENTED | .. index:: | hard |
+ | | | single: OCF_ERR_UNIMPLEMENTED | |
+ | | | single: OCF return code; OCF_ERR_UNIMPLEMENTED | |
+ | | | pair: OCF return code; 3 | |
+ | | | | |
+ | | | The requested action is not implemented. | |
+ +-------+-----------------------+---------------------------------------------------+----------+
+ | 4 | OCF_ERR_PERM | .. index:: | hard |
+ | | | single: OCF_ERR_PERM | |
+ | | | single: OCF return code; OCF_ERR_PERM | |
+ | | | pair: OCF return code; 4 | |
+ | | | | |
+ | | | The resource agent does not have | |
+ | | | sufficient privileges to complete the task. | |
+ +-------+-----------------------+---------------------------------------------------+----------+
+ | 5 | OCF_ERR_INSTALLED | .. index:: | hard |
+ | | | single: OCF_ERR_INSTALLED | |
+ | | | single: OCF return code; OCF_ERR_INSTALLED | |
+ | | | pair: OCF return code; 5 | |
+ | | | | |
+ | | | The tools required by the resource are | |
+ | | | not installed on this machine. | |
+ +-------+-----------------------+---------------------------------------------------+----------+
+ | 6 | OCF_ERR_CONFIGURED | .. index:: | fatal |
+ | | | single: OCF_ERR_CONFIGURED | |
+ | | | single: OCF return code; OCF_ERR_CONFIGURED | |
+ | | | pair: OCF return code; 6 | |
+ | | | | |
+ | | | The resource's parameter values are inherently | |
+ | | | invalid (for example, a required parameter was | |
+ | | | not given). | |
+ +-------+-----------------------+---------------------------------------------------+----------+
+ | 7 | OCF_NOT_RUNNING | .. index:: | N/A |
+ | | | single: OCF_NOT_RUNNING | |
+ | | | single: OCF return code; OCF_NOT_RUNNING | |
+ | | | pair: OCF return code; 7 | |
+ | | | | |
+ | | | The resource is safely stopped. This should only | |
+ | | | be returned by monitor actions, not stop actions. | |
+ +-------+-----------------------+---------------------------------------------------+----------+
+ | 8 | OCF_RUNNING_PROMOTED | .. index:: | soft |
+ | | | single: OCF_RUNNING_PROMOTED | |
+ | | | single: OCF return code; OCF_RUNNING_PROMOTED | |
+ | | | pair: OCF return code; 8 | |
+ | | | | |
+ | | | The resource is running in the promoted role. | |
+ +-------+-----------------------+---------------------------------------------------+----------+
+ | 9 | OCF_FAILED_PROMOTED | .. index:: | soft |
+ | | | single: OCF_FAILED_PROMOTED | |
+ | | | single: OCF return code; OCF_FAILED_PROMOTED | |
+ | | | pair: OCF return code; 9 | |
+ | | | | |
+ | | | The resource is (or might be) in the promoted | |
+ | | | role but has failed. The resource will be | |
+ | | | demoted, stopped and then started (and possibly | |
+ | | | promoted) again. | |
+ +-------+-----------------------+---------------------------------------------------+----------+
+ | 190 | OCF_DEGRADED | .. index:: | none |
+ | | | single: OCF_DEGRADED | |
+ | | | single: OCF return code; OCF_DEGRADED | |
+ | | | pair: OCF return code; 190 | |
+ | | | | |
+ | | | The resource is properly active, but in such a | |
+ | | | condition that future failures are more likely. | |
+ +-------+-----------------------+---------------------------------------------------+----------+
+ | 191 | OCF_DEGRADED_PROMOTED | .. index:: | none |
+ | | | single: OCF_DEGRADED_PROMOTED | |
+ | | | single: OCF return code; OCF_DEGRADED_PROMOTED | |
+ | | | pair: OCF return code; 191 | |
+ | | | | |
+ | | | The resource is properly active in the promoted | |
+ | | | role, but in such a condition that future | |
+ | | | failures are more likely. | |
+ +-------+-----------------------+---------------------------------------------------+----------+
+ | other | *none* | Custom error code. | soft |
+ +-------+-----------------------+---------------------------------------------------+----------+
+
+Exceptions to the recovery handling described above:
+
+* Probes (non-recurring monitor actions) that find a resource active
+ (or in the promoted role) will not result in recovery action unless it is
+ also found active elsewhere.
+* The recovery action taken when a resource is found active more than
+ once is determined by the resource's ``multiple-active`` property.
+* Recurring actions that return ``OCF_ERR_UNIMPLEMENTED``
+ do not cause any type of recovery.
+* Actions that return one of the "degraded" codes will be treated the same as
+ if they had returned success, but status output will indicate that the
+ resource is degraded.
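
As a sketch of how an agent puts these codes to use, the following hypothetical
shell fragment shows ``monitor`` logic for a daemon tracked by a PID file. The
constants are normally provided by the resource-agents ``ocf-shellfuncs``
library; their values are repeated here only for reference, and the PID-file
scheme is just an example.

```shell
# Return-code constants as defined by OCF (normally sourced from the
# resource-agents "ocf-shellfuncs" library; repeated here for illustration).
OCF_SUCCESS=0
OCF_ERR_GENERIC=1
OCF_NOT_RUNNING=7
OCF_RUNNING_PROMOTED=8
OCF_FAILED_PROMOTED=9
OCF_DEGRADED=190
OCF_DEGRADED_PROMOTED=191

# Hypothetical monitor logic for a daemon tracked by a PID file,
# mapping its state to the codes in the table above.
monitor_by_pidfile() {
    pidfile="$1"
    if [ ! -f "$pidfile" ]; then
        return $OCF_NOT_RUNNING    # no PID file: safely stopped
    fi
    if kill -0 "$(cat "$pidfile")" 2>/dev/null; then
        return $OCF_SUCCESS        # process alive: cleanly running
    fi
    return $OCF_ERR_GENERIC        # PID file present but process gone
}
```

A probe that calls this and gets ``OCF_NOT_RUNNING`` reports a safely stopped
resource, which (per the exceptions above) triggers no recovery unless the
resource is also found active elsewhere.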
+
+
+.. index::
+ single: resource agent; LSB
+ single: LSB resource agent
+ single: init script
+
+LSB Resource Agents (Init Scripts)
+##################################
+
+LSB Compliance
+______________
+
+The relevant part of the
+`LSB specifications <http://refspecs.linuxfoundation.org/lsb.shtml>`_
+includes a description of all the return codes listed here.
+
+Assuming ``some_service`` is configured correctly and currently
+inactive, the following sequence will help you determine if it is
+LSB-compatible:
+
+#. Start (stopped):
+
+ .. code-block:: none
+
+ # /etc/init.d/some_service start ; echo "result: $?"
+
+ * Did the service start?
+ * Did the echo command print ``result: 0`` (in addition to the init script's
+ usual output)?
+
+#. Status (running):
+
+ .. code-block:: none
+
+ # /etc/init.d/some_service status ; echo "result: $?"
+
+ * Did the script accept the command?
+ * Did the script indicate the service was running?
+ * Did the echo command print ``result: 0`` (in addition to the init script's
+ usual output)?
+
+#. Start (running):
+
+ .. code-block:: none
+
+ # /etc/init.d/some_service start ; echo "result: $?"
+
+ * Is the service still running?
+ * Did the echo command print ``result: 0`` (in addition to the init
+ script's usual output)?
+
+#. Stop (running):
+
+ .. code-block:: none
+
+ # /etc/init.d/some_service stop ; echo "result: $?"
+
+ * Was the service stopped?
+ * Did the echo command print ``result: 0`` (in addition to the init
+ script's usual output)?
+
+#. Status (stopped):
+
+ .. code-block:: none
+
+ # /etc/init.d/some_service status ; echo "result: $?"
+
+ * Did the script accept the command?
+ * Did the script indicate the service was not running?
+ * Did the echo command print ``result: 3`` (in addition to the init
+ script's usual output)?
+
+#. Stop (stopped):
+
+ .. code-block:: none
+
+ # /etc/init.d/some_service stop ; echo "result: $?"
+
+ * Is the service still stopped?
+ * Did the echo command print ``result: 0`` (in addition to the init
+ script's usual output)?
+
+#. Status (failed):
+
+ This step is not readily testable and relies on manual inspection of the script.
+
+ The script can use one of the error codes (other than 3) listed in the
+ LSB spec to indicate that it is active but failed. This tells the
+ cluster that before moving the resource to another node, it needs to
+ stop it on the existing one first.
+
+If the answer to any of the above questions is no, then the script is not
+LSB-compliant. Your options are then to either fix the script or write an OCF
+agent based on the existing script.
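
The sequence above can also be automated. The following rough helper (not part
of Pacemaker) runs each step against an init script passed as an argument and
reports any unexpected exit status; it is wrapped in a function only so it can
be reused:

```shell
# Run the LSB compliance sequence from this section against an init
# script. Usage: check_lsb_script /etc/init.d/some_service
check_lsb_script() {
    script="$1"
    failures=0

    step() {
        # step <action> <expected-exit-code> <description>
        "$script" "$1" >/dev/null 2>&1
        rc=$?
        if [ "$rc" -eq "$2" ]; then
            echo "ok:   $3"
        else
            echo "FAIL: $3 (expected exit $2, got $rc)"
            failures=$((failures + 1))
        fi
    }

    step start  0 "start while stopped"
    step status 0 "status while running"
    step start  0 "start while already running"
    step stop   0 "stop while running"
    step status 3 "status while stopped"
    step stop   0 "stop while already stopped"

    return "$failures"
}
```

A return status of 0 means every step matched; anything else means the script
is not LSB-compliant for at least one step (the "status (failed)" case still
requires manual inspection).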
diff --git a/doc/sphinx/Pacemaker_Administration/alerts.rst b/doc/sphinx/Pacemaker_Administration/alerts.rst
new file mode 100644
index 0000000..c0f54c6
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Administration/alerts.rst
@@ -0,0 +1,311 @@
+.. index::
+ single: alert; agents
+
+Alert Agents
+------------
+
+.. index::
+ single: alert; sample agents
+
+Using the Sample Alert Agents
+#############################
+
+Pacemaker provides several sample alert agents, installed in
+``/usr/share/pacemaker/alerts`` by default.
+
+While these sample scripts may be copied and used as-is, they are provided
+mainly as templates to be edited to suit your purposes. See their source code
+for the full set of instance attributes they support.
+
+.. topic:: Sending cluster events as SNMP v2c traps
+
+ .. code-block:: xml
+
+ <configuration>
+ <alerts>
+ <alert id="snmp_alert" path="/path/to/alert_snmp.sh">
+ <instance_attributes id="config_for_alert_snmp">
+ <nvpair id="trap_node_states" name="trap_node_states"
+ value="all"/>
+ </instance_attributes>
+ <meta_attributes id="config_for_timestamp">
+ <nvpair id="ts_fmt" name="timestamp-format"
+ value="%Y-%m-%d,%H:%M:%S.%01N"/>
+ </meta_attributes>
+ <recipient id="snmp_destination" value="192.168.1.2"/>
+ </alert>
+ </alerts>
+ </configuration>
+
+.. note:: **SNMP alert agent attributes**
+
+ The ``timestamp-format`` meta-attribute should always be set to
+ ``%Y-%m-%d,%H:%M:%S.%01N`` when using the SNMP agent, to match the SNMP
+ standard.
+
+ The SNMP agent provides a number of instance attributes in addition to the
+ one used in the example above. The most useful are ``trap_version``, which
+ defaults to ``2c``, and ``trap_community``, which defaults to ``public``.
+ See the source code for more details.
+
+.. topic:: Sending cluster events as SNMP v3 traps
+
+ .. code-block:: xml
+
+ <configuration>
+ <alerts>
+ <alert id="snmp_alert" path="/path/to/alert_snmp.sh">
+ <instance_attributes id="config_for_alert_snmp">
+ <nvpair id="trap_node_states" name="trap_node_states"
+ value="all"/>
+ <nvpair id="trap_version" name="trap_version" value="3"/>
+ <nvpair id="trap_community" name="trap_community" value=""/>
+ <nvpair id="trap_options" name="trap_options"
+ value="-l authNoPriv -a MD5 -u testuser -A secret1"/>
+ </instance_attributes>
+ <meta_attributes id="config_for_timestamp">
+ <nvpair id="ts_fmt" name="timestamp-format"
+ value="%Y-%m-%d,%H:%M:%S.%01N"/>
+ </meta_attributes>
+ <recipient id="snmp_destination" value="192.168.1.2"/>
+ </alert>
+ </alerts>
+ </configuration>
+
+.. note:: **SNMP v3 trap configuration**
+
+ To use SNMP v3, ``trap_version`` must be set to ``3``. ``trap_community``
+ will be ignored.
+
+ The example above uses the ``trap_options`` instance attribute to override
+ the security level, authentication protocol, authentication user, and
+ authentication password from snmp.conf. These will be passed to the snmptrap
+ command. Passing the password on the command line is considered insecure;
+ specify authentication and privacy options suitable for your environment.
+
+.. topic:: Sending cluster events as e-mails
+
+ .. code-block:: xml
+
+ <configuration>
+ <alerts>
+ <alert id="smtp_alert" path="/path/to/alert_smtp.sh">
+ <instance_attributes id="config_for_alert_smtp">
+ <nvpair id="email_sender" name="email_sender"
+ value="donotreply@example.com"/>
+ </instance_attributes>
+ <recipient id="smtp_destination" value="admin@example.com"/>
+ </alert>
+ </alerts>
+ </configuration>
+
+
+.. index::
+ single: alert; agent development
+
+Writing an Alert Agent
+######################
+
+.. index::
+ single: alert; environment variables
+ single: environment variable; alert agents
+
+.. table:: **Environment variables passed to alert agents**
+ :class: longtable
+ :widths: 1 3
+
+ +---------------------------+----------------------------------------------------------------+
+ | Environment Variable | Description |
+ +===========================+================================================================+
+ | CRM_alert_kind | .. index:: |
+ | | single:environment variable; CRM_alert_kind |
+ | | single:CRM_alert_kind |
+ | | |
+ | | The type of alert (``node``, ``fencing``, ``resource``, or |
+ | | ``attribute``) |
+ +---------------------------+----------------------------------------------------------------+
+ | CRM_alert_node | .. index:: |
+ | | single:environment variable; CRM_alert_node |
+ | | single:CRM_alert_node |
+ | | |
+ | | Name of affected node |
+ +---------------------------+----------------------------------------------------------------+
+ | CRM_alert_node_sequence | .. index:: |
+ | | single:environment variable; CRM_alert_sequence |
+ | | single:CRM_alert_sequence |
+ | | |
+ | | A sequence number increased whenever an alert is being issued |
+ | | on the local node, which can be used to reference the order in |
+ | | which alerts have been issued by Pacemaker. An alert for an |
+ | | event that happened later in time reliably has a higher |
+ | | sequence number than alerts for earlier events. |
+ | | |
+ | | Be aware that this number has no cluster-wide meaning. |
+ +---------------------------+----------------------------------------------------------------+
+ | CRM_alert_recipient | .. index:: |
+ | | single:environment variable; CRM_alert_recipient |
+ | | single:CRM_alert_recipient |
+ | | |
+ | | The configured recipient |
+ +---------------------------+----------------------------------------------------------------+
+ | CRM_alert_timestamp | .. index:: |
+ | | single:environment variable; CRM_alert_timestamp |
+ | | single:CRM_alert_timestamp |
+ | | |
+ | | A timestamp created prior to executing the agent, in the |
+ | | format specified by the ``timestamp-format`` meta-attribute. |
+ | | This allows the agent to have a reliable, high-precision time |
+ | | of when the event occurred, regardless of when the agent |
+ | | itself was invoked (which could potentially be delayed due to |
+ | | system load, etc.). |
+ +---------------------------+----------------------------------------------------------------+
+ | CRM_alert_timestamp_epoch | .. index:: |
+ | | single:environment variable; CRM_alert_timestamp_epoch |
+ | | single:CRM_alert_timestamp_epoch |
+ | | |
+ | | The same time as ``CRM_alert_timestamp``, expressed as the |
+ | | integer number of seconds since January 1, 1970. This (along |
+ | | with ``CRM_alert_timestamp_usec``) can be useful for alert |
+ | | agents that need to format time in a specific way rather than |
+ | | let the user configure it. |
+ +---------------------------+----------------------------------------------------------------+
+ | CRM_alert_timestamp_usec | .. index:: |
+ | | single:environment variable; CRM_alert_timestamp_usec |
+ | | single:CRM_alert_timestamp_usec |
+ | | |
+ | | The same time as ``CRM_alert_timestamp``, expressed as the |
+ | | integer number of microseconds since |
+ | | ``CRM_alert_timestamp_epoch``. |
+ +---------------------------+----------------------------------------------------------------+
+ | CRM_alert_version | .. index:: |
+ | | single:environment variable; CRM_alert_version |
+ | | single:CRM_alert_version |
+ | | |
+ | | The version of Pacemaker sending the alert |
+ +---------------------------+----------------------------------------------------------------+
+ | CRM_alert_desc | .. index:: |
+ | | single:environment variable; CRM_alert_desc |
+ | | single:CRM_alert_desc |
+ | | |
+ | | Detail about event. For ``node`` alerts, this is the node's |
+ | | current state (``member`` or ``lost``). For ``fencing`` |
+ | | alerts, this is a summary of the requested fencing operation, |
+ | | including origin, target, and fencing operation error code, if |
+ | | any. For ``resource`` alerts, this is a readable string |
+ | | equivalent of ``CRM_alert_status``. |
+ +---------------------------+----------------------------------------------------------------+
+ | CRM_alert_nodeid | .. index:: |
+ | | single:environment variable; CRM_alert_nodeid |
+ | | single:CRM_alert_nodeid |
+ | | |
+ | | ID of node whose status changed (provided with ``node`` alerts |
+ | | only) |
+ +---------------------------+----------------------------------------------------------------+
+ | CRM_alert_rc | .. index:: |
+ | | single:environment variable; CRM_alert_rc |
+ | | single:CRM_alert_rc |
+ | | |
+ | | The numerical return code of the fencing or resource operation |
+ | | (provided with ``fencing`` and ``resource`` alerts only) |
+ +---------------------------+----------------------------------------------------------------+
+ | CRM_alert_task | .. index:: |
+ | | single:environment variable; CRM_alert_task |
+ | | single:CRM_alert_task |
+ | | |
+ | | The requested fencing or resource operation (provided with |
+ | | ``fencing`` and ``resource`` alerts only) |
+ +---------------------------+----------------------------------------------------------------+
+ | CRM_alert_exec_time | .. index:: |
+ | | single:environment variable; CRM_alert_exec_time |
+ | | single:CRM_alert_exec_time |
+ | | |
+ | | The (wall-clock) time, in milliseconds, that it took to |
+ | | execute the action. If the action timed out, |
+ | | ``CRM_alert_status`` will be 2, ``CRM_alert_desc`` will be |
+ | | "Timed Out", and this value will be the action timeout. May |
+ | | not be supported on all platforms. (``resource`` alerts only) |
+ | | *(since 2.0.1)* |
+ +---------------------------+----------------------------------------------------------------+
+ | CRM_alert_interval | .. index:: |
+ | | single:environment variable; CRM_alert_interval |
+ | | single:CRM_alert_interval |
+ | | |
+ | | The interval of the resource operation (``resource`` alerts |
+ | | only) |
+ +---------------------------+----------------------------------------------------------------+
+ | CRM_alert_rsc | .. index:: |
+ | | single:environment variable; CRM_alert_rsc |
+ | | single:CRM_alert_rsc |
+ | | |
+ | | The name of the affected resource (``resource`` alerts only) |
+ +---------------------------+----------------------------------------------------------------+
+ | CRM_alert_status | .. index:: |
+ | | single:environment variable; CRM_alert_status |
+ | | single:CRM_alert_status |
+ | | |
+ | | A numerical code used by Pacemaker to represent the operation |
+ | | result (``resource`` alerts only) |
+ +---------------------------+----------------------------------------------------------------+
+ | CRM_alert_target_rc | .. index:: |
+ | | single:environment variable; CRM_alert_target_rc |
+ | | single:CRM_alert_target_rc |
+ | | |
+ | | The expected numerical return code of the operation |
+ | | (``resource`` alerts only) |
+ +---------------------------+----------------------------------------------------------------+
+ | CRM_alert_attribute_name | .. index:: |
+ | | single:environment variable; CRM_alert_attribute_name |
+ | | single:CRM_alert_attribute_name |
+ | | |
+ | | The name of the node attribute that changed (``attribute`` |
+ | | alerts only) |
+ +---------------------------+----------------------------------------------------------------+
+ | CRM_alert_attribute_value | .. index:: |
+ | | single:environment variable; CRM_alert_attribute_value |
+ | | single:CRM_alert_attribute_value |
+ | | |
+ | | The new value of the node attribute that changed |
+ | | (``attribute`` alerts only) |
+ +---------------------------+----------------------------------------------------------------+
+
+Special concerns when writing alert agents:
+
+* Alert agents may be called with no recipient (if none is configured),
+ so the agent must be able to handle this situation, even if it
+ only exits in that case. (Users may modify the configuration in
+ stages, and add a recipient later.)
+
+* If more than one recipient is configured for an alert, the alert agent will
+ be called once per recipient. If an agent is not able to run concurrently, it
+ should be configured with only a single recipient. The agent is free,
+ however, to interpret the recipient as a list.
+
+* When a cluster event occurs, all alerts are fired off at the same time as
+ separate processes. Depending on how many alerts and recipients are
+ configured, and on what is done within the alert agents,
+ a significant load burst may occur. The agent could be written to take
+  this into consideration, for example by queueing resource-intensive actions
+  for later, serialized execution instead of executing them directly.
+
+* Alert agents are run as the ``hacluster`` user, which has a minimal set
+ of permissions. If an agent requires additional privileges, it is
+ recommended to configure ``sudo`` to allow the agent to run the necessary
+ commands as another user with the appropriate privileges.
+
+* As always, take care to validate and sanitize user-configured parameters,
+  such as ``CRM_alert_timestamp`` (whose content is specified by the
+  user-configured ``timestamp-format``), ``CRM_alert_recipient``, and all
+  instance attributes. Mostly this is needed simply to protect against
+  configuration errors, but if some user can modify the CIB without having
+  ``hacluster``-level access to the cluster nodes, this is also a potential
+  security concern, as it opens the possibility of code injection.
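
Putting these points together, a minimal illustrative agent (not one of the
shipped samples) might log one line per alert. The log path and the
``alert_logfile`` instance attribute are arbitrary names chosen here; the logic
is wrapped in a function only for readability, whereas a real agent would run
this body at the top level of the script.

```shell
# Sketch of a minimal alert agent. Pacemaker invokes the agent with the
# CRM_alert_* variables already in the environment; instance attributes
# (such as the hypothetical "alert_logfile" below) are passed the same way.
handle_alert() {
    logfile="${alert_logfile:-/var/log/pacemaker_alerts.log}"

    # A recipient may not be configured; tolerate its absence.
    recipient="${CRM_alert_recipient:-none}"

    case "${CRM_alert_kind}" in
        node)
            desc="node ${CRM_alert_node} is now ${CRM_alert_desc}" ;;
        fencing)
            desc="fencing: ${CRM_alert_desc}" ;;
        resource)
            desc="resource ${CRM_alert_rsc}: ${CRM_alert_task} rc=${CRM_alert_rc}" ;;
        attribute)
            desc="attribute ${CRM_alert_attribute_name}=${CRM_alert_attribute_value}" ;;
        *)
            desc="unknown alert kind '${CRM_alert_kind}'" ;;
    esac

    echo "${CRM_alert_timestamp} [${recipient}] ${desc}" >> "$logfile"
}
```

Note that the agent does something sensible for every alert kind, handles a
missing recipient, and does nothing slow or privileged.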
+
+.. note:: **ocf:pacemaker:ClusterMon compatibility**
+
+ The alerts interface is designed to be backward compatible with the external
+ scripts interface used by the ``ocf:pacemaker:ClusterMon`` resource, which
+ is now deprecated. To preserve this compatibility, the environment variables
+ passed to alert agents are available prepended with ``CRM_notify_``
+ as well as ``CRM_alert_``. One break in compatibility is that ``ClusterMon``
+ ran external scripts as the ``root`` user, while alert agents are run as the
+ ``hacluster`` user.
diff --git a/doc/sphinx/Pacemaker_Administration/cluster.rst b/doc/sphinx/Pacemaker_Administration/cluster.rst
new file mode 100644
index 0000000..3713733
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Administration/cluster.rst
@@ -0,0 +1,21 @@
+.. index::
+ single: cluster layer
+
+The Cluster Layer
+-----------------
+
+Pacemaker utilizes an underlying cluster layer for two purposes:
+
+* obtaining quorum
+* messaging between nodes
+
+.. index::
+ single: cluster layer; Corosync
+ single: Corosync
+
+Currently, only Corosync 2 and later are supported for this layer.
+
+This document assumes you have configured the cluster nodes in Corosync
+already. High-level cluster management tools are available that can configure
+Corosync for you. If you want the lower-level details, see the
+`Corosync documentation <https://corosync.github.io/corosync/>`_.
diff --git a/doc/sphinx/Pacemaker_Administration/configuring.rst b/doc/sphinx/Pacemaker_Administration/configuring.rst
new file mode 100644
index 0000000..415dd81
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Administration/configuring.rst
@@ -0,0 +1,278 @@
+.. index::
+ single: configuration
+ single: CIB
+
+Configuring Pacemaker
+---------------------
+
+Pacemaker's configuration, the CIB, is stored in XML format. Cluster
+administrators have multiple options for modifying the configuration either via
+the XML, or at a more abstract (and easier for humans to understand) level.
+
+Pacemaker reacts to configuration changes as soon as they are saved.
+Pacemaker's command-line tools and most higher-level tools provide the ability
+to batch changes together and commit them at once, rather than making a series
+of small changes; this avoids unnecessary actions that Pacemaker might
+otherwise take as it responds to each change individually.
+
+Pacemaker tracks revisions to the configuration and will reject any update
+older than the current revision. Thus, it is a good idea to serialize all
+changes to the configuration. Avoid attempting simultaneous changes, whether on
+the same node or different nodes, and whether manually or using some automated
+configuration tool.
+
+.. note::
+
+ It is not necessary to update the configuration on all cluster nodes.
+ Pacemaker immediately synchronizes changes to all active members of the
+ cluster. To reduce bandwidth, the cluster only broadcasts the incremental
+ updates that result from your changes and uses checksums to ensure that each
+ copy is consistent.
+
+
+Configuration Using Higher-level Tools
+######################################
+
+Most users will benefit from using higher-level tools provided by projects
+separate from Pacemaker. Some of the most commonly used include the crm shell,
+hawk, and pcs. [#]_
+
+See those projects' documentation for details on how to configure Pacemaker
+using them.
+
+
+Configuration Using Pacemaker's Command-Line Tools
+##################################################
+
+Pacemaker provides lower-level, command-line tools to manage the cluster. Most
+configuration tasks can be performed with these tools, without needing any XML
+knowledge.
+
+To enable STONITH for example, one could run:
+
+.. code-block:: none
+
+ # crm_attribute --name stonith-enabled --update 1
+
+Or, to check whether **node1** is allowed to run resources, there is:
+
+.. code-block:: none
+
+ # crm_standby --query --node node1
+
+Or, to change the failure threshold of **my-test-rsc**, one can use:
+
+.. code-block:: none
+
+ # crm_resource -r my-test-rsc --set-parameter migration-threshold --parameter-value 3 --meta
+
+Examples of using these tools for specific cases will be given throughout this
+document where appropriate. See the man pages for further details.
+
+See :ref:`cibadmin` for how to edit the CIB using XML.
+
+See :ref:`crm_shadow` for a way to make a series of changes, then commit them
+all at once to the live cluster.
+
+
+.. index::
+ single: configuration; CIB properties
+ single: CIB; properties
+ single: CIB property
+
+Working with CIB Properties
+___________________________
+
+Although CIB properties can be written to by the user, in most cases the
+cluster will overwrite any user-specified values with the "correct" ones.
+
+To change the ones that can be specified by the user, for example
+``admin_epoch``, one should use:
+
+.. code-block:: none
+
+ # cibadmin --modify --xml-text '<cib admin_epoch="42"/>'
+
+A complete set of CIB properties will look something like this:
+
+.. topic:: XML attributes set for a cib element
+
+ .. code-block:: xml
+
+ <cib crm_feature_set="3.0.7" validate-with="pacemaker-1.2"
+ admin_epoch="42" epoch="116" num_updates="1"
+ cib-last-written="Mon Jan 12 15:46:39 2015" update-origin="rhel7-1"
+ update-client="crm_attribute" have-quorum="1" dc-uuid="1">
+
+
+.. index::
+ single: configuration; cluster options
+
+Querying and Setting Cluster Options
+____________________________________
+
+Cluster options can be queried and modified using the ``crm_attribute`` tool.
+To get the current value of ``cluster-delay``, you can run:
+
+.. code-block:: none
+
+ # crm_attribute --query --name cluster-delay
+
+which is more simply written as
+
+.. code-block:: none
+
+ # crm_attribute -G -n cluster-delay
+
+If a value is found, you'll see a result like this:
+
+.. code-block:: none
+
+ # crm_attribute -G -n cluster-delay
+ scope=crm_config name=cluster-delay value=60s
+
+If no value is found, the tool will display an error:
+
+.. code-block:: none
+
+ # crm_attribute -G -n clusta-deway
+ scope=crm_config name=clusta-deway value=(null)
+ Error performing operation: No such device or address
+
+To use a different value (for example, 30 seconds), simply run:
+
+.. code-block:: none
+
+ # crm_attribute --name cluster-delay --update 30s
+
+To go back to the cluster's default value, you can delete the value, for example:
+
+.. code-block:: none
+
+ # crm_attribute --name cluster-delay --delete
+ Deleted crm_config option: id=cib-bootstrap-options-cluster-delay name=cluster-delay
+
+
+When Options are Listed More Than Once
+______________________________________
+
+If you ever see something like the following, it means that the option you're
+modifying is present more than once.
+
+.. topic:: Deleting an option that is listed twice
+
+ .. code-block:: none
+
+ # crm_attribute --name batch-limit --delete
+
+ Please choose from one of the matches below and supply the 'id' with --id
+ Multiple attributes match name=batch-limit in crm_config:
+ Value: 50 (set=cib-bootstrap-options, id=cib-bootstrap-options-batch-limit)
+ Value: 100 (set=custom, id=custom-batch-limit)
+
+In such cases, follow the on-screen instructions to perform the requested
+action. To determine which value is currently being used by the cluster, refer
+to the "Rules" chapter of *Pacemaker Explained*.
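
Following the tool's own hint, the duplicate can be removed by supplying the
reported ``id`` (the id shown is the one from the hypothetical output above):

```
# crm_attribute --name batch-limit --delete --id custom-batch-limit
```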
+
+
+.. index::
+ single: configuration; remote
+
+.. _remote_connection:
+
+Connecting from a Remote Machine
+################################
+
+Provided Pacemaker is installed on a machine, it is possible to connect to the
+cluster even if the machine itself is not a cluster node. To do this, one
+simply sets up a number of environment variables and runs the same commands as
+when working on a cluster node.
+
+.. table:: **Environment Variables Used to Connect to Remote Instances of the CIB**
+
+ +----------------------+-----------+------------------------------------------------+
+ | Environment Variable | Default | Description |
+ +======================+===========+================================================+
+ | CIB_user | $USER | .. index:: |
+ | | | single: CIB_user |
+ | | | single: environment variable; CIB_user |
+ | | | |
+ | | | The user to connect as. Needs to be |
+ | | | part of the ``haclient`` group on |
+ | | | the target host. |
+ +----------------------+-----------+------------------------------------------------+
+ | CIB_passwd | | .. index:: |
+ | | | single: CIB_passwd |
+ | | | single: environment variable; CIB_passwd |
+ | | | |
+ | | | The user's password. Read from the |
+ | | | command line if unset. |
+ +----------------------+-----------+------------------------------------------------+
+ | CIB_server | localhost | .. index:: |
+ | | | single: CIB_server |
+ | | | single: environment variable; CIB_server |
+ | | | |
+ | | | The host to contact |
+ +----------------------+-----------+------------------------------------------------+
+ | CIB_port | | .. index:: |
+ | | | single: CIB_port |
+ | | | single: environment variable; CIB_port |
+ | | | |
+ | | | The port on which to contact the server; |
+ | | | required. |
+ +----------------------+-----------+------------------------------------------------+
+ | CIB_encrypted | TRUE | .. index:: |
+ | | | single: CIB_encrypted |
+ | | | single: environment variable; CIB_encrypted |
+ | | | |
+ | | | Whether to encrypt network traffic |
+ +----------------------+-----------+------------------------------------------------+
+
+So, if **c001n01** is an active cluster node and is listening on port 1234
+for connections, and **someuser** is a member of the **haclient** group,
+then the following would prompt for **someuser**'s password and return
+the cluster's current configuration:
+
+.. code-block:: none
+
+ # export CIB_port=1234; export CIB_server=c001n01; export CIB_user=someuser;
+ # cibadmin -Q
+
+For security reasons, the cluster does not listen for remote connections by
+default. If you wish to allow remote access, you need to set the
+``remote-tls-port`` (encrypted) or ``remote-clear-port`` (unencrypted) CIB
+properties (i.e., those kept in the ``cib`` tag, like ``num_updates`` and
+``epoch``).
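
Following the same pattern as the ``admin_epoch`` example earlier, one way to
set such a property might be (the port number is only an example):

```
# cibadmin --modify --xml-text '<cib remote-tls-port="9898"/>'
```

Any firewall on the node must also permit connections to the chosen port.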
+
+.. table:: **Extra top-level CIB properties for remote access**
+
+ +----------------------+-----------+------------------------------------------------------+
+ | CIB Property | Default | Description |
+ +======================+===========+======================================================+
+ | remote-tls-port | | .. index:: |
+ | | | single: remote-tls-port |
+ | | | single: CIB property; remote-tls-port |
+ | | | |
+ | | | Listen for encrypted remote connections |
+ | | | on this port. |
+ +----------------------+-----------+------------------------------------------------------+
+ | remote-clear-port | | .. index:: |
+ | | | single: remote-clear-port |
+ | | | single: CIB property; remote-clear-port |
+ | | | |
+ | | | Listen for plaintext remote connections |
+ | | | on this port. |
+ +----------------------+-----------+------------------------------------------------------+
+
+.. important::
+
+ The Pacemaker version on the administration host must be the same or greater
+ than the version(s) on the cluster nodes. Otherwise, it may not have the
+ schema files necessary to validate the CIB.
+
+
+.. rubric:: Footnotes
+
+.. [#] For a list, see "Configuration Tools" at
+ https://clusterlabs.org/components.html
diff --git a/doc/sphinx/Pacemaker_Administration/index.rst b/doc/sphinx/Pacemaker_Administration/index.rst
new file mode 100644
index 0000000..327ad31
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Administration/index.rst
@@ -0,0 +1,36 @@
+Pacemaker Administration
+========================
+
+*Managing Pacemaker Clusters*
+
+
+Abstract
+--------
+This document has instructions and tips for system administrators who
+manage high-availability clusters using Pacemaker.
+
+
+Table of Contents
+-----------------
+
+.. toctree::
+ :maxdepth: 3
+ :numbered:
+
+ intro
+ installing
+ cluster
+ configuring
+ tools
+ troubleshooting
+ upgrading
+ alerts
+ agents
+ pcs-crmsh
+
+
+Index
+-----
+
+* :ref:`genindex`
+* :ref:`search`
diff --git a/doc/sphinx/Pacemaker_Administration/installing.rst b/doc/sphinx/Pacemaker_Administration/installing.rst
new file mode 100644
index 0000000..44a3f5f
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Administration/installing.rst
@@ -0,0 +1,9 @@
+Installing Cluster Software
+---------------------------
+
+.. index:: installation
+
+Most major Linux distributions have Pacemaker packages in their standard
+package repositories, or the software can be built from source code.
+See the `Install wiki page <https://wiki.clusterlabs.org/wiki/Install>`_
+for details.
diff --git a/doc/sphinx/Pacemaker_Administration/intro.rst b/doc/sphinx/Pacemaker_Administration/intro.rst
new file mode 100644
index 0000000..067e293
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Administration/intro.rst
@@ -0,0 +1,21 @@
+Introduction
+------------
+
+The Scope of this Document
+##########################
+
+The purpose of this document is to help system administrators learn how to
+manage a Pacemaker cluster.
+
+System administrators may be interested in other parts of the
+`Pacemaker documentation set <https://www.clusterlabs.org/pacemaker/doc/>`_
+such as *Clusters from Scratch*, a step-by-step guide to setting up an example
+cluster, and *Pacemaker Explained*, an exhaustive reference for cluster
+configuration.
+
+Multiple higher-level tools (both command-line and GUI) are available to
+simplify cluster management. However, this document focuses on the lower-level
+command-line tools that come with Pacemaker itself. The concepts are applicable
+to the higher-level tools, though the syntax would differ.
+
+.. include:: ../shared/pacemaker-intro.rst
diff --git a/doc/sphinx/Pacemaker_Administration/pcs-crmsh.rst b/doc/sphinx/Pacemaker_Administration/pcs-crmsh.rst
new file mode 100644
index 0000000..61ab4e6
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Administration/pcs-crmsh.rst
@@ -0,0 +1,441 @@
+Quick Comparison of pcs and crm shell
+-------------------------------------
+
+``pcs`` and ``crm shell`` are two popular higher-level command-line interfaces
+to Pacemaker. Each has its own syntax; this chapter gives a quick comparison
+of how to accomplish the same tasks using either one. Some examples also show
+the equivalent command using low-level Pacemaker command-line tools.
+
+These examples show the simplest syntax; see the respective man pages for all
+possible options.
+
+Show Cluster Configuration and Status
+#####################################
+
+.. topic:: Show Configuration (Raw XML)
+
+ .. code-block:: none
+
+ crmsh # crm configure show xml
+ pcs # pcs cluster cib
+ pacemaker # cibadmin -Q
+
+.. topic:: Show Configuration (Human-friendly)
+
+ .. code-block:: none
+
+ crmsh # crm configure show
+ pcs # pcs config
+
+.. topic:: Show Cluster Status
+
+ .. code-block:: none
+
+ crmsh # crm status
+ pcs # pcs status
+ pacemaker # crm_mon -1
+
+Manage Nodes
+############
+
+.. topic:: Put node "pcmk-1" in standby mode
+
+ .. code-block:: none
+
+ crmsh # crm node standby pcmk-1
+ pcs-0.9 # pcs cluster standby pcmk-1
+ pcs-0.10 # pcs node standby pcmk-1
+ pacemaker # crm_standby -N pcmk-1 -v on
+
+.. topic:: Remove node "pcmk-1" from standby mode
+
+ .. code-block:: none
+
+ crmsh # crm node online pcmk-1
+ pcs-0.9 # pcs cluster unstandby pcmk-1
+ pcs-0.10 # pcs node unstandby pcmk-1
+ pacemaker # crm_standby -N pcmk-1 -v off
+
+Manage Cluster Properties
+#########################
+
+.. topic:: Set the "stonith-enabled" cluster property to "false"
+
+ .. code-block:: none
+
+ crmsh # crm configure property stonith-enabled=false
+ pcs # pcs property set stonith-enabled=false
+ pacemaker # crm_attribute -n stonith-enabled -v false
+
+Show Resource Agent Information
+###############################
+
+.. topic:: List Resource Agent (RA) Classes
+
+ .. code-block:: none
+
+ crmsh # crm ra classes
+ pcs # pcs resource standards
+      pacemaker # crm_resource --list-standards
+
+.. topic:: List Available Resource Agents (RAs) by Standard
+
+ .. code-block:: none
+
+ crmsh # crm ra list ocf
+ pcs # pcs resource agents ocf
+ pacemaker # crm_resource --list-agents ocf
+
+.. topic:: List Available Resource Agents (RAs) by OCF Provider
+
+ .. code-block:: none
+
+ crmsh # crm ra list ocf pacemaker
+ pcs # pcs resource agents ocf:pacemaker
+ pacemaker # crm_resource --list-agents ocf:pacemaker
+
+.. topic:: List Available Resource Agent Parameters
+
+ .. code-block:: none
+
+ crmsh # crm ra info IPaddr2
+ pcs # pcs resource describe IPaddr2
+ pacemaker # crm_resource --show-metadata ocf:heartbeat:IPaddr2
+
+You can also use the full ``class:provider:type`` format with crmsh and pcs if
+multiple RAs with the same name are available.
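+
+For example, if more than one installed agent were named ``IPaddr2``, the
+agent could be specified unambiguously:
+
+.. topic:: List Parameters Using the Full Agent Specification
+
+   .. code-block:: none
+
+      crmsh # crm ra info ocf:heartbeat:IPaddr2
+      pcs # pcs resource describe ocf:heartbeat:IPaddr2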
+
+.. topic:: Show Available Fence Agent Parameters
+
+ .. code-block:: none
+
+ crmsh # crm ra info stonith:fence_ipmilan
+ pcs # pcs stonith describe fence_ipmilan
+
+Manage Resources
+################
+
+.. topic:: Create a Resource
+
+ .. code-block:: none
+
+ crmsh # crm configure primitive ClusterIP ocf:heartbeat:IPaddr2 \
+ params ip=192.168.122.120 cidr_netmask=24 \
+ op monitor interval=30s
+ pcs # pcs resource create ClusterIP IPaddr2 ip=192.168.122.120 cidr_netmask=24
+
+pcs determines the standard and provider (``ocf:heartbeat``) automatically,
+since ``IPaddr2`` is unique among installed agents, and creates operations
+(including monitor) based on the agent's meta-data.
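+
+The standard, provider, and operations may also be given to pcs explicitly;
+a sketch of the equivalent explicit form:
+
+.. topic:: Create a Resource, Specifying the Agent and Monitor Operation Explicitly
+
+   .. code-block:: none
+
+      pcs # pcs resource create ClusterIP ocf:heartbeat:IPaddr2 \
+            ip=192.168.122.120 cidr_netmask=24 \
+            op monitor interval=30s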
+
+.. topic:: Show Configuration of All Resources
+
+ .. code-block:: none
+
+ crmsh # crm configure show
+ pcs-0.9 # pcs resource show --full
+ pcs-0.10 # pcs resource config
+
+.. topic:: Show Configuration of One Resource
+
+ .. code-block:: none
+
+ crmsh # crm configure show ClusterIP
+ pcs-0.9 # pcs resource show ClusterIP
+ pcs-0.10 # pcs resource config ClusterIP
+
+.. topic:: Show Configuration of Fencing Resources
+
+ .. code-block:: none
+
+ crmsh # crm resource status
+ pcs-0.9 # pcs stonith show --full
+ pcs-0.10 # pcs stonith config
+
+.. topic:: Start a Resource
+
+ .. code-block:: none
+
+ crmsh # crm resource start ClusterIP
+ pcs # pcs resource enable ClusterIP
+ pacemaker # crm_resource -r ClusterIP --set-parameter target-role --meta -v Started
+
+.. topic:: Stop a Resource
+
+ .. code-block:: none
+
+ crmsh # crm resource stop ClusterIP
+ pcs # pcs resource disable ClusterIP
+ pacemaker # crm_resource -r ClusterIP --set-parameter target-role --meta -v Stopped
+
+.. topic:: Remove a Resource
+
+ .. code-block:: none
+
+ crmsh # crm configure delete ClusterIP
+ pcs # pcs resource delete ClusterIP
+
+.. topic:: Modify a Resource's Instance Parameters
+
+ .. code-block:: none
+
+ crmsh # crm resource param ClusterIP set clusterip_hash=sourceip
+ pcs # pcs resource update ClusterIP clusterip_hash=sourceip
+ pacemaker # crm_resource -r ClusterIP --set-parameter clusterip_hash -v sourceip
+
+crmsh also has an ``edit`` command, which edits the simplified CIB syntax
+(the same commands as on the command line) via a configurable text editor.
+
+.. topic:: Modify a Resource's Instance Parameters Interactively
+
+ .. code-block:: none
+
+ crmsh # crm configure edit ClusterIP
+
+Using the interactive shell mode of crmsh, multiple changes can be
+edited and verified before committing to the live configuration:
+
+.. topic:: Make Multiple Configuration Changes Interactively
+
+ .. code-block:: none
+
+ crmsh # crm configure
+ crmsh # edit
+ crmsh # verify
+ crmsh # commit
+
+.. topic:: Delete a Resource's Instance Parameters
+
+ .. code-block:: none
+
+ crmsh # crm resource param ClusterIP delete nic
+ pcs # pcs resource update ClusterIP nic=
+ pacemaker # crm_resource -r ClusterIP --delete-parameter nic
+
+.. topic:: List Current Resource Defaults
+
+ .. code-block:: none
+
+ crmsh # crm configure show type:rsc_defaults
+ pcs # pcs resource defaults
+ pacemaker # cibadmin -Q --scope rsc_defaults
+
+.. topic:: Set Resource Defaults
+
+ .. code-block:: none
+
+ crmsh # crm configure rsc_defaults resource-stickiness=100
+ pcs # pcs resource defaults resource-stickiness=100
+
+.. topic:: List Current Operation Defaults
+
+ .. code-block:: none
+
+ crmsh # crm configure show type:op_defaults
+ pcs # pcs resource op defaults
+ pacemaker # cibadmin -Q --scope op_defaults
+
+.. topic:: Set Operation Defaults
+
+ .. code-block:: none
+
+ crmsh # crm configure op_defaults timeout=240s
+ pcs # pcs resource op defaults timeout=240s
+
+.. topic:: Enable Resource Agent Tracing for a Resource
+
+ .. code-block:: none
+
+ crmsh # crm resource trace Website
+
+.. topic:: Clear Fail Counts for a Resource
+
+ .. code-block:: none
+
+ crmsh # crm resource cleanup Website
+ pcs # pcs resource cleanup Website
+ pacemaker # crm_resource --cleanup -r Website
+
+.. topic:: Create a Clone Resource
+
+ .. code-block:: none
+
+ crmsh # crm configure clone WebIP ClusterIP meta globally-unique=true clone-max=2 clone-node-max=2
+ pcs # pcs resource clone ClusterIP globally-unique=true clone-max=2 clone-node-max=2
+
+.. topic:: Create a Promotable Clone Resource
+
+ .. code-block:: none
+
+ crmsh # crm configure ms WebDataClone WebData \
+ meta master-max=1 master-node-max=1 \
+ clone-max=2 clone-node-max=1 notify=true
+ pcs-0.9 # pcs resource master WebDataClone WebData \
+ master-max=1 master-node-max=1 \
+ clone-max=2 clone-node-max=1 notify=true
+ pcs-0.10 # pcs resource promotable WebData WebDataClone \
+ promoted-max=1 promoted-node-max=1 \
+ clone-max=2 clone-node-max=1 notify=true
+
+pcs will generate the clone name automatically if it is omitted from the
+command line.
+
+
+Manage Constraints
+##################
+
+.. topic:: Create a Colocation Constraint
+
+ .. code-block:: none
+
+ crmsh # crm configure colocation website-with-ip INFINITY: WebSite ClusterIP
+ pcs # pcs constraint colocation add ClusterIP with WebSite INFINITY
+
+.. topic:: Create a Colocation Constraint Based on Role
+
+ .. code-block:: none
+
+ crmsh # crm configure colocation another-ip-with-website inf: AnotherIP WebSite:Master
+ pcs # pcs constraint colocation add Started AnotherIP with Promoted WebSite INFINITY
+
+.. topic:: Create an Ordering Constraint
+
+ .. code-block:: none
+
+ crmsh # crm configure order apache-after-ip mandatory: ClusterIP WebSite
+ pcs # pcs constraint order ClusterIP then WebSite
+
+.. topic:: Create an Ordering Constraint Based on Role
+
+ .. code-block:: none
+
+ crmsh # crm configure order ip-after-website Mandatory: WebSite:Master AnotherIP
+ pcs # pcs constraint order promote WebSite then start AnotherIP
+
+.. topic:: Create a Location Constraint
+
+ .. code-block:: none
+
+ crmsh # crm configure location prefer-pcmk-1 WebSite 50: pcmk-1
+ pcs # pcs constraint location WebSite prefers pcmk-1=50
+
+.. topic:: Create a Location Constraint Based on Role
+
+ .. code-block:: none
+
+ crmsh # crm configure location prefer-pcmk-1 WebSite rule role=Master 50: \#uname eq pcmk-1
+ pcs # pcs constraint location WebSite rule role=Promoted 50 \#uname eq pcmk-1
+
+.. topic:: Move a Resource to a Specific Node (by Creating a Location Constraint)
+
+ .. code-block:: none
+
+ crmsh # crm resource move WebSite pcmk-1
+ pcs # pcs resource move WebSite pcmk-1
+ pacemaker # crm_resource -r WebSite --move -N pcmk-1
+
+.. topic:: Move a Resource Away from Its Current Node (by Creating a Location Constraint)
+
+ .. code-block:: none
+
+ crmsh # crm resource ban Website pcmk-2
+ pcs # pcs resource ban Website pcmk-2
+ pacemaker # crm_resource -r WebSite --move
+
+.. topic:: Remove any Constraints Created by Moving a Resource
+
+ .. code-block:: none
+
+ crmsh # crm resource unmove WebSite
+ pcs # pcs resource clear WebSite
+ pacemaker # crm_resource -r WebSite --clear
+
+Advanced Configuration
+######################
+
+Manipulate Configuration Elements by Type
+_________________________________________
+
+.. topic:: List Constraints with IDs
+
+ .. code-block:: none
+
+ pcs # pcs constraint list --full
+
+.. topic:: Remove Constraint by ID
+
+ .. code-block:: none
+
+ pcs # pcs constraint remove cli-ban-Website-on-pcmk-1
+ crmsh # crm configure remove cli-ban-Website-on-pcmk-1
+
+crmsh's ``show`` and ``edit`` commands can be used to manage resources and
+constraints by type:
+
+.. topic:: Show Configuration Elements
+
+ .. code-block:: none
+
+ crmsh # crm configure show type:primitive
+ crmsh # crm configure edit type:colocation
+
+Batch Changes
+_____________
+
+.. topic:: Make Multiple Changes and Apply Together
+
+ .. code-block:: none
+
+ crmsh # crm
+ crmsh # cib new drbd_cfg
+ crmsh # configure primitive WebData ocf:linbit:drbd params drbd_resource=wwwdata \
+ op monitor interval=60s
+ crmsh # configure ms WebDataClone WebData meta master-max=1 master-node-max=1 \
+ clone-max=2 clone-node-max=1 notify=true
+ crmsh # cib commit drbd_cfg
+ crmsh # quit
+
+ pcs # pcs cluster cib drbd_cfg
+ pcs # pcs -f drbd_cfg resource create WebData ocf:linbit:drbd drbd_resource=wwwdata \
+ op monitor interval=60s
+ pcs-0.9 # pcs -f drbd_cfg resource master WebDataClone WebData \
+ master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
+ pcs-0.10 # pcs -f drbd_cfg resource promotable WebData WebDataClone \
+ promoted-max=1 promoted-node-max=1 clone-max=2 clone-node-max=1 notify=true
+ pcs # pcs cluster cib-push drbd_cfg
+
+Template Creation
+_________________
+
+.. topic:: Create Resource Template Based on Existing Primitives of Same Type
+
+ .. code-block:: none
+
+ crmsh # crm configure assist template ClusterIP AdminIP
+
+Log Analysis
+____________
+
+.. topic:: Show Information About Recent Cluster Events
+
+ .. code-block:: none
+
+ crmsh # crm history
+ crmsh # peinputs
+ crmsh # transition pe-input-10
+ crmsh # transition log pe-input-10
+
+Configuration Scripts
+_____________________
+
+.. topic:: Script Multiple-step Cluster Configurations
+
+ .. code-block:: none
+
+ crmsh # crm script show apache
+ crmsh # crm script run apache \
+ id=WebSite \
+ install=true \
+ virtual-ip:ip=192.168.0.15 \
+ database:id=WebData \
+ database:install=true
diff --git a/doc/sphinx/Pacemaker_Administration/tools.rst b/doc/sphinx/Pacemaker_Administration/tools.rst
new file mode 100644
index 0000000..5a6044d
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Administration/tools.rst
@@ -0,0 +1,562 @@
+.. index:: command-line tool
+
+Using Pacemaker Command-Line Tools
+----------------------------------
+
+.. index::
+ single: command-line tool; output format
+
+.. _cmdline_output:
+
+Controlling Command Line Output
+###############################
+
+Some of the Pacemaker command-line utilities have been converted to a new
+output system. Among these tools are ``crm_mon`` and ``stonith_admin``. This
+is an ongoing project, and more tools will be converted over time. This system
+lets you control the formatting of output with ``--output-as=`` and the
+destination of output with ``--output-to=``.
+
+The available formats vary by tool, but at least plain text and XML are
+supported by all tools that use the new system. The default format is plain
+text. The default destination is stdout but can be redirected to any file.
+Some formats support command line options for changing the style of the output.
+For instance:
+
+.. code-block:: none
+
+ # crm_mon --help-output
+ Usage:
+ crm_mon [OPTION?]
+
+ Provides a summary of cluster's current state.
+
+ Outputs varying levels of detail in a number of different formats.
+
+ Output Options:
+ --output-as=FORMAT Specify output format as one of: console (default), html, text, xml
+ --output-to=DEST Specify file name for output (or "-" for stdout)
+ --html-cgi Add text needed to use output in a CGI program
+ --html-stylesheet=URI Link to an external CSS stylesheet
+ --html-title=TITLE Page title
+ --text-fancy Use more highly formatted output
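+
+For example, a one-shot status snapshot could be written to a file in XML
+format (the destination path is arbitrary):
+
+.. code-block:: none
+
+   # crm_mon --one-shot --output-as=xml --output-to=/tmp/cluster-status.xml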
+
+.. index::
+ single: crm_mon
+ single: command-line tool; crm_mon
+
+.. _crm_mon:
+
+Monitor a Cluster with crm_mon
+##############################
+
+The ``crm_mon`` utility displays the current state of an active cluster. It can
+show the cluster status organized by node or by resource, and can be used in
+either single-shot or dynamically updating mode. It can also display operations
+performed and information about failures.
+
+Using this tool, you can examine the state of the cluster for irregularities,
+and see how it responds when you cause or simulate failures.
+
+See the manual page or the output of ``crm_mon --help`` for a full description
+of its many options.
+
+.. topic:: Sample output from crm_mon -1
+
+ .. code-block:: none
+
+ Cluster Summary:
+ * Stack: corosync
+ * Current DC: node2 (version 2.0.0-1) - partition with quorum
+ * Last updated: Mon Jan 29 12:18:42 2018
+ * Last change: Mon Jan 29 12:18:40 2018 by root via crm_attribute on node3
+ * 5 nodes configured
+ * 2 resources configured
+
+ Node List:
+ * Online: [ node1 node2 node3 node4 node5 ]
+
+ * Active resources:
+ * Fencing (stonith:fence_xvm): Started node1
+ * IP (ocf:heartbeat:IPaddr2): Started node2
+
+.. topic:: Sample output from crm_mon -n -1
+
+ .. code-block:: none
+
+ Cluster Summary:
+ * Stack: corosync
+ * Current DC: node2 (version 2.0.0-1) - partition with quorum
+ * Last updated: Mon Jan 29 12:21:48 2018
+ * Last change: Mon Jan 29 12:18:40 2018 by root via crm_attribute on node3
+ * 5 nodes configured
+ * 2 resources configured
+
+ * Node List:
+ * Node node1: online
+ * Fencing (stonith:fence_xvm): Started
+ * Node node2: online
+ * IP (ocf:heartbeat:IPaddr2): Started
+ * Node node3: online
+ * Node node4: online
+ * Node node5: online
+
+As mentioned in an earlier chapter, the DC is the node where decisions are
+made. The cluster elects a node to be DC as needed. The only significance of
+the choice of DC to an administrator is that its logs will have the most
+information about why decisions were made.
+
+.. index::
+ pair: crm_mon; CSS
+
+.. _crm_mon_css:
+
+Styling crm_mon HTML output
+___________________________
+
+Various parts of ``crm_mon``'s HTML output have a CSS class associated with
+them. Not everything does, but some of the most interesting portions do. In
+the following example, the status of each node has an ``online`` class and the
+details of each resource have an ``rsc-ok`` class.
+
+.. code-block:: html
+
+ <h2>Node List</h2>
+ <ul>
+ <li>
+ <span>Node: cluster01</span><span class="online"> online</span>
+ </li>
+ <li><ul><li><span class="rsc-ok">ping (ocf::pacemaker:ping): Started</span></li></ul></li>
+ <li>
+ <span>Node: cluster02</span><span class="online"> online</span>
+ </li>
+ <li><ul><li><span class="rsc-ok">ping (ocf::pacemaker:ping): Started</span></li></ul></li>
+ </ul>
+
+By default, a stylesheet for styling these classes is included in the head of
+the HTML output. The portions of this stylesheet relevant to the above
+example are:
+
+.. code-block:: css
+
+ <style>
+ .online { color: green }
+ .rsc-ok { color: green }
+ </style>
+
+If you want to override some or all of the styling, simply create your own
+stylesheet, place it on a web server, and pass ``--html-stylesheet=<URL>``
+to ``crm_mon``. The link is added after the default stylesheet, so your
+changes take precedence. You don't need to duplicate the entire default.
+Only include what you want to change.
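+
+For example, a minimal custom stylesheet that overrides only the resource
+style might look like this (the styling chosen is arbitrary):
+
+.. code-block:: css
+
+   .rsc-ok { color: blue; font-weight: bold }
+
+Publish the file at a URL reachable by whoever views the output, and pass
+that URL via ``--html-stylesheet``.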
+
+.. index::
+ single: cibadmin
+ single: command-line tool; cibadmin
+
+.. _cibadmin:
+
+Edit the CIB XML with cibadmin
+##############################
+
+The most flexible tool for modifying the configuration is Pacemaker's
+``cibadmin`` command. With ``cibadmin``, you can query, add, remove, update
+or replace any part of the configuration. All changes take effect immediately,
+so there is no need to perform a reload-like operation.
+
+The simplest way of using ``cibadmin`` is to use it to save the current
+configuration to a temporary file, edit that file with your favorite
+text or XML editor, and then upload the revised configuration.
+
+.. topic:: Safely using an editor to modify the cluster configuration
+
+ .. code-block:: none
+
+ # cibadmin --query > tmp.xml
+ # vi tmp.xml
+ # cibadmin --replace --xml-file tmp.xml
+
+Some of the better XML editors can make use of a RELAX NG schema to
+help make sure any changes you make are valid. The schema describing
+the configuration can be found in ``pacemaker.rng``, which may be
+deployed in a location such as ``/usr/share/pacemaker`` depending on your
+operating system distribution and how you installed the software.
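+
+Whether or not your editor validates against the schema, ``crm_verify`` can
+check the edited file for problems before it is uploaded:
+
+.. topic:: Validating an edited configuration before uploading it
+
+   .. code-block:: none
+
+      # cibadmin --query > tmp.xml
+      # vi tmp.xml
+      # crm_verify --xml-file tmp.xml
+      # cibadmin --replace --xml-file tmp.xml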
+
+If you want to modify just one section of the configuration, you can
+query and replace just that section to avoid modifying any others.
+
+.. topic:: Safely using an editor to modify only the resources section
+
+ .. code-block:: none
+
+ # cibadmin --query --scope resources > tmp.xml
+ # vi tmp.xml
+ # cibadmin --replace --scope resources --xml-file tmp.xml
+
+To quickly delete a part of the configuration, identify the object you wish to
+delete by XML tag and id. For example, you might search the CIB for all
+STONITH-related configuration:
+
+.. topic:: Searching for STONITH-related configuration items
+
+ .. code-block:: none
+
+ # cibadmin --query | grep stonith
+ <nvpair id="cib-bootstrap-options-stonith-action" name="stonith-action" value="reboot"/>
+ <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="1"/>
+ <primitive id="child_DoFencing" class="stonith" type="external/vmware">
+ <lrm_resource id="child_DoFencing:0" type="external/vmware" class="stonith">
+ <lrm_resource id="child_DoFencing:0" type="external/vmware" class="stonith">
+ <lrm_resource id="child_DoFencing:1" type="external/vmware" class="stonith">
+ <lrm_resource id="child_DoFencing:0" type="external/vmware" class="stonith">
+ <lrm_resource id="child_DoFencing:2" type="external/vmware" class="stonith">
+ <lrm_resource id="child_DoFencing:0" type="external/vmware" class="stonith">
+ <lrm_resource id="child_DoFencing:3" type="external/vmware" class="stonith">
+
+If you wanted to delete the ``primitive`` tag with id ``child_DoFencing``,
+you would run:
+
+.. code-block:: none
+
+ # cibadmin --delete --xml-text '<primitive id="child_DoFencing"/>'
+
+See the cibadmin man page for more options.
+
+.. warning::
+
+ Never edit the live ``cib.xml`` file directly. Pacemaker will detect such
+ changes and refuse to use the configuration.
+
+
+.. index::
+ single: crm_shadow
+ single: command-line tool; crm_shadow
+
+.. _crm_shadow:
+
+Batch Configuration Changes with crm_shadow
+###########################################
+
+Often, it is desirable to preview the effects of a series of configuration
+changes before updating the live configuration all at once. For this purpose,
+``crm_shadow`` creates a "shadow" copy of the configuration and arranges for
+all the command-line tools to use it.
+
+To begin, simply invoke ``crm_shadow --create`` with a name of your choice,
+and follow the on-screen instructions. Shadow copies are identified by name,
+making it possible to have more than one.
+
+.. warning::
+
+ Read this section and the on-screen instructions carefully; failure to do so
+ could result in destroying the cluster's active configuration!
+
+.. topic:: Creating and displaying the active sandbox
+
+ .. code-block:: none
+
+ # crm_shadow --create test
+ Setting up shadow instance
+ Type Ctrl-D to exit the crm_shadow shell
+ shadow[test]:
+ shadow[test] # crm_shadow --which
+ test
+
+From this point on, all cluster commands will automatically use the shadow copy
+instead of talking to the cluster's active configuration. Once you have
+finished experimenting, you can either make the changes active via the
+``--commit`` option, or discard them using the ``--delete`` option. Again, be
+sure to follow the on-screen instructions carefully!
+
+For a full list of ``crm_shadow`` options and commands, invoke it with the
+``--help`` option.
+
+.. topic:: Use sandbox to make multiple changes all at once, discard them, and verify real configuration is untouched
+
+ .. code-block:: none
+
+ shadow[test] # crm_failcount -r rsc_c001n01 -G
+ scope=status name=fail-count-rsc_c001n01 value=0
+ shadow[test] # crm_standby --node c001n02 -v on
+ shadow[test] # crm_standby --node c001n02 -G
+ scope=nodes name=standby value=on
+
+ shadow[test] # cibadmin --erase --force
+ shadow[test] # cibadmin --query
+ <cib crm_feature_set="3.0.14" validate-with="pacemaker-3.0" epoch="112" num_updates="2" admin_epoch="0" cib-last-written="Mon Jan 8 23:26:47 2018" update-origin="rhel7-1" update-client="crm_node" update-user="root" have-quorum="1" dc-uuid="1">
+ <configuration>
+ <crm_config/>
+ <nodes/>
+ <resources/>
+ <constraints/>
+ </configuration>
+ <status/>
+ </cib>
+ shadow[test] # crm_shadow --delete test --force
+ Now type Ctrl-D to exit the crm_shadow shell
+ shadow[test] # exit
+ # crm_shadow --which
+ No active shadow configuration defined
+ # cibadmin -Q
+ <cib crm_feature_set="3.0.14" validate-with="pacemaker-3.0" epoch="110" num_updates="2" admin_epoch="0" cib-last-written="Mon Jan 8 23:26:47 2018" update-origin="rhel7-1" update-client="crm_node" update-user="root" have-quorum="1">
+ <configuration>
+ <crm_config>
+ <cluster_property_set id="cib-bootstrap-options">
+ <nvpair id="cib-bootstrap-1" name="stonith-enabled" value="1"/>
+ <nvpair id="cib-bootstrap-2" name="pe-input-series-max" value="30000"/>
+
+See the next section, :ref:`crm_simulate`, for how to test your changes before
+committing them to the live cluster.
+
+
+.. index::
+ single: crm_simulate
+ single: command-line tool; crm_simulate
+
+.. _crm_simulate:
+
+Simulate Cluster Activity with crm_simulate
+###########################################
+
+The command-line tool ``crm_simulate`` shows the results of the same logic
+the cluster itself uses to respond to a particular cluster configuration and
+status.
+
+As always, the man page is the primary documentation, and should be consulted
+for further details. This section aims for a better conceptual explanation and
+practical examples.
+
+Replaying cluster decision-making logic
+_______________________________________
+
+At any given time, one node in a Pacemaker cluster will be elected DC, and that
+node will run Pacemaker's scheduler to make decisions.
+
+Each time decisions need to be made (a "transition"), the DC will have log
+messages like "Calculated transition ... saving inputs in ..." with a file
+name. You can grab the named file and replay the cluster logic to see why
+particular decisions were made. The file contains the live cluster
+configuration at that moment, so you can also look at it directly to see the
+value of node attributes, etc., at that time.
+
+The simplest usage is (replacing $FILENAME with the actual file name):
+
+.. topic:: Simulate cluster response to a given CIB
+
+ .. code-block:: none
+
+ # crm_simulate --simulate --xml-file $FILENAME
+
+That will show the cluster state when the process started, the actions that
+need to be taken ("Transition Summary"), and the resulting cluster state if the
+actions succeed. Most actions will have a brief description of why they were
+required.
+
+The transition inputs may be compressed. ``crm_simulate`` can handle these
+compressed files directly, though if you want to edit the file, you'll need to
+uncompress it first.
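+
+For example (the directory shown is typical, but distribution-dependent):
+
+.. code-block:: none
+
+   # bzcat /var/lib/pacemaker/pengine/pe-input-10.bz2 > pe-input-10.xml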
+
+You can do the same simulation for the live cluster configuration at the
+current moment. This is useful mainly when using ``crm_shadow`` to create a
+sandbox version of the CIB; the ``--live-check`` option will use the shadow CIB
+if one is in effect.
+
+.. topic:: Simulate cluster response to current live CIB or shadow CIB
+
+ .. code-block:: none
+
+ # crm_simulate --simulate --live-check
+
+
+Why decisions were made
+_______________________
+
+Getting further insight into the "why" becomes user-unfriendly very quickly.
+If you add the ``--show-scores`` option, you will also see all the scores that
+went into the decision-making. The node with the highest cumulative score for a
+resource will run it. You can look for ``-INFINITY`` scores in particular to
+see where complete bans came into effect.
+
+You can also add ``-VVVV`` to get more detailed messages about what's happening
+under the hood. You can add up to two more V's even, but that's usually useful
+only if you're a masochist or tracing through the source code.
+
+
+Visualizing the action sequence
+_______________________________
+
+Another handy feature is the ability to generate a visual graph of the actions
+needed, using the ``--save-dotfile`` option. This relies on the separate
+Graphviz [#]_ project.
+
+.. topic:: Generate a visual graph of cluster actions from a saved CIB
+
+ .. code-block:: none
+
+ # crm_simulate --simulate --xml-file $FILENAME --save-dotfile $FILENAME.dot
+ # dot $FILENAME.dot -Tsvg > $FILENAME.svg
+
+``$FILENAME.dot`` will contain a Graphviz representation of the cluster's
+response to your changes, including all actions with their ordering
+dependencies.
+
+``$FILENAME.svg`` will be the same information in a standard graphical format
+that you can view in your browser or other app of choice. You could, of course,
+use other ``dot`` options to generate other formats.
+
+How to interpret the graphical output:
+
+ * Bubbles indicate actions, and arrows indicate ordering dependencies
+ * Resource actions have text of the form
+ ``<RESOURCE>_<ACTION>_<INTERVAL_IN_MS> <NODE>`` indicating that the
+ specified action will be executed for the specified resource on the
+ specified node, once if interval is 0 or at specified recurring interval
+ otherwise
+ * Actions with black text will be sent to the executor (that is, the
+ appropriate agent will be invoked)
+ * Actions with orange text are "pseudo" actions that the cluster uses
+ internally for ordering but require no real activity
+ * Actions with a solid green border are part of the transition (that is, the
+ cluster will attempt to execute them in the given order -- though a
+ transition can be interrupted by action failure or new events)
+ * Dashed arrows indicate dependencies that are not present in the transition
+ graph
+ * Actions with a dashed border will not be executed. If the dashed border is
+ blue, the cluster does not feel the action needs to be executed. If the
+ dashed border is red, the cluster would like to execute the action but
+ cannot. Any actions depending on an action with a dashed border will not be
+ able to execute.
+ * Loops should not happen, and should be reported as a bug if found.
+
+.. topic:: Small Cluster Transition
+
+ .. image:: ../shared/images/Policy-Engine-small.png
+ :alt: An example transition graph as represented by Graphviz
+ :align: center
+
+In the above example, it appears that a new node, ``pcmk-2``, has come online
+and that the cluster is checking to make sure ``rsc1``, ``rsc2`` and ``rsc3``
+are not already running there (indicated by the ``rscN_monitor_0`` entries).
+Once it did that, and assuming the resources were not active there, it would
+have liked to stop ``rsc1`` and ``rsc2`` on ``pcmk-1`` and move them to
+``pcmk-2``. However, there appears to be some problem, and the cluster cannot
+or is not permitted to perform the stop actions, which implies it also cannot
+perform the start actions. For some reason, the cluster does not want to start
+``rsc3`` anywhere.
+
+.. topic:: Complex Cluster Transition
+
+ .. image:: ../shared/images/Policy-Engine-big.png
+ :alt: Complex transition graph that you're not expected to be able to read
+ :align: center
+
+
+What-if scenarios
+_________________
+
+You can make changes to the saved or shadow CIB and simulate it again, to see
+how Pacemaker would react differently. You can edit the XML by hand, use
+command-line tools such as ``cibadmin`` with either a shadow CIB or the
+``CIB_file`` environment variable set to the filename, or use higher-level tool
+support (see the man pages of the specific tool you're using for how to perform
+actions on a saved CIB file rather than the live CIB).
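+
+For example, one could tweak a resource parameter in a saved input file and
+re-run the simulation (``ClusterIP`` and the parameter shown are only
+illustrative; replace $FILENAME with the actual file name):
+
+.. code-block:: none
+
+   # CIB_file=$FILENAME crm_resource -r ClusterIP --set-parameter clusterip_hash -v sourceip
+   # crm_simulate --simulate --xml-file $FILENAME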
+
+You can also inject node failures and/or action failures into the simulation;
+see the ``crm_simulate`` man page for more details.
+
+This capability is useful when using a shadow CIB to edit the configuration.
+Before committing the changes to the live cluster with ``crm_shadow --commit``,
+you can use ``crm_simulate`` to see how the cluster will react to the changes.
+
+.. _crm_attribute:
+
+.. index::
+ single: attrd_updater
+ single: command-line tool; attrd_updater
+ single: crm_attribute
+ single: command-line tool; crm_attribute
+
+Manage Node Attributes, Cluster Options and Defaults with crm_attribute and attrd_updater
+#########################################################################################
+
+``crm_attribute`` and ``attrd_updater`` are confusingly similar tools with subtle
+differences.
+
+``attrd_updater`` can query and update node attributes. ``crm_attribute`` can query
+and update not only node attributes, but also cluster options, resource
+defaults, and operation defaults.
+
+To understand the differences, it helps to understand the various types of node
+attribute.
+
+.. table:: **Types of Node Attributes**
+
+ +-----------+----------+-------------------+------------------+----------------+----------------+
+ | Type | Recorded | Recorded in | Survive full | Manageable by | Manageable by |
+ | | in CIB? | attribute manager | cluster restart? | crm_attribute? | attrd_updater? |
+ | | | memory? | | | |
+ +===========+==========+===================+==================+================+================+
+ | permanent | yes | no | yes | yes | no |
+ +-----------+----------+-------------------+------------------+----------------+----------------+
+ | transient | yes | yes | no | yes | yes |
+ +-----------+----------+-------------------+------------------+----------------+----------------+
+ | private | no | yes | no | no | yes |
+ +-----------+----------+-------------------+------------------+----------------+----------------+
+
+As you can see from the table above, ``crm_attribute`` can manage permanent and
+transient node attributes, while ``attrd_updater`` can manage transient and
+private node attributes.
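+
+For example, with ``crm_attribute``, the ``--lifetime`` option selects between
+the two (the node and attribute names here are only illustrative):
+
+.. code-block:: none
+
+   # crm_attribute --node pcmk-1 --name my-attr --update 1 --lifetime forever
+   # crm_attribute --node pcmk-1 --name my-attr --update 1 --lifetime reboot
+
+The first form sets a permanent attribute; the second, a transient one.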
+
+The difference between the two tools lies mainly in *how* they update node
+attributes: ``attrd_updater`` always contacts the Pacemaker attribute manager
+directly, while ``crm_attribute`` will contact the attribute manager only for
+transient node attributes, and will instead modify the CIB directly for
+permanent node attributes (and for transient node attributes when unable to
+contact the attribute manager).
+
+By contacting the attribute manager directly, ``attrd_updater`` can change
+an attribute's "dampening" (whether changes are immediately flushed to the CIB
+or after a specified amount of time, to minimize disk writes for frequent
+changes), set private node attributes (which are never written to the CIB), and
+set attributes for nodes that don't yet exist.
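+
+For example, a sketch of dampening and private attributes (the attribute
+names are illustrative):
+
+.. code-block:: none
+
+   # attrd_updater --name ping-count --update 10 --delay 30
+   # attrd_updater --name secret-token --update 1 --private
+
+The first command sets an attribute whose changes are flushed to the CIB only
+every 30 seconds; the second sets a private attribute that is never written to
+the CIB at all.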
+
+By modifying the CIB directly, ``crm_attribute`` can set permanent node
+attributes (which are only in the CIB and not managed by the attribute
+manager), and can be used with saved CIB files and shadow CIBs.
+
+Regardless of how a transient node attribute is set, it is synchronized
+between the CIB and the attribute manager on all nodes.
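+
+For example, the following commands (with hypothetical node and attribute
+names) set a permanent node attribute with ``crm_attribute``, and a private
+node attribute with 30-second dampening with ``attrd_updater``:
+
+.. code-block:: none
+
+   # crm_attribute --node node1 --name rack --update 2 --lifetime forever
+   # attrd_updater --node node1 --name pingcount --update 5 --private --delay 30s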
+
+
+.. index::
+ single: crm_failcount
+ single: command-line tool; crm_failcount
+ single: crm_node
+ single: command-line tool; crm_node
+ single: crm_report
+ single: command-line tool; crm_report
+ single: crm_standby
+ single: command-line tool; crm_standby
+ single: crm_verify
+ single: command-line tool; crm_verify
+ single: stonith_admin
+ single: command-line tool; stonith_admin
+
+Other Commonly Used Tools
+#########################
+
+Other command-line tools include:
+
+* ``crm_failcount``: query or delete resource fail counts
+* ``crm_node``: manage cluster nodes
+* ``crm_report``: generate a detailed cluster report for bug submissions
+* ``crm_resource``: manage cluster resources
+* ``crm_standby``: manage standby status of nodes
+* ``crm_verify``: validate a CIB
+* ``stonith_admin``: manage fencing devices
+
+See the manual pages for details.
+
+.. rubric:: Footnotes
+
+.. [#] Graph visualization software. See http://www.graphviz.org/ for details.
diff --git a/doc/sphinx/Pacemaker_Administration/troubleshooting.rst b/doc/sphinx/Pacemaker_Administration/troubleshooting.rst
new file mode 100644
index 0000000..22c9dc8
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Administration/troubleshooting.rst
@@ -0,0 +1,123 @@
+.. index:: troubleshooting
+
+Troubleshooting Cluster Problems
+--------------------------------
+
+.. index:: logging, pacemaker.log
+
+Logging
+#######
+
+Pacemaker by default logs messages of ``notice`` severity and higher to the
+system log, and messages of ``info`` severity and higher to the detail log,
+which by default is ``/var/log/pacemaker/pacemaker.log``.
+
+Logging options can be controlled via environment variables at Pacemaker
+start-up. Where these are set varies by operating system (often
+``/etc/sysconfig/pacemaker`` or ``/etc/default/pacemaker``). See the comments
+in that file for details.
+
+Because cluster problems are often highly complex, involving multiple machines,
+cluster daemons, and managed services, Pacemaker logs rather verbosely to
+provide as much context as possible. It is an ongoing priority to make these
+logs more user-friendly, but by necessity there is a lot of obscure, low-level
+information that can make them difficult to follow.
+
+The default log rotation configuration shipped with Pacemaker (typically
+installed in ``/etc/logrotate.d/pacemaker``) rotates the log when it reaches
+100MB in size, or weekly, whichever comes first.
+
+If you configure debug or (Heaven forbid) trace-level logging, the logs can
+grow enormous quite quickly. Because rotated logs are by default named with the
+year, month, and day only, this can cause name collisions if your logs exceed
+100MB in a single day. You can add ``dateformat -%Y%m%d-%H`` to the rotation
+configuration to avoid this.
+
+Reading the Logs
+################
+
+When troubleshooting, first check the system log or journal for errors or
+warnings from Pacemaker components (conveniently, they will all have
+"pacemaker" in their logged process name). For example:
+
+.. code-block:: none
+
+ # grep 'pacemaker.*\(error\|warning\)' /var/log/messages
+ Mar 29 14:04:19 node1 pacemaker-controld[86636]: error: Result of monitor operation for rn2 on node1: Timed Out after 45s (Remote executor did not respond)
+
+If that doesn't give sufficient information, next look at the ``notice`` level
+messages from ``pacemaker-controld``. These will show changes in the state of
+cluster nodes. On the DC, this will also show resource actions attempted. For
+example:
+
+.. code-block:: none
+
+ # grep 'pacemaker-controld.*notice:' /var/log/messages
+ ... output skipped for brevity ...
+ Mar 29 14:05:36 node1 pacemaker-controld[86636]: notice: Node rn2 state is now lost
+ ... more output skipped for brevity ...
+ Mar 29 14:12:17 node1 pacemaker-controld[86636]: notice: Initiating stop operation rsc1_stop_0 on node4
+ ... more output skipped for brevity ...
+
+Of course, you can use other tools besides ``grep`` to search the logs.
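+
+On systems using systemd, the journal can be searched in much the same way;
+for example (the unit name may vary by distribution):
+
+.. code-block:: none
+
+   # journalctl -u pacemaker | grep -i -e error -e warning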
+
+
+.. index:: transition
+
+Transitions
+###########
+
+A key concept in understanding how a Pacemaker cluster functions is a
+*transition*. A transition is a set of actions that need to be taken to bring
+the cluster from its current state to the desired state (as expressed by the
+configuration).
+
+Whenever a relevant event happens (a node joining or leaving the cluster,
+a resource failing, etc.), the controller will ask the scheduler to recalculate
+the status of the cluster, which generates a new transition. The controller
+then performs the actions in the transition in the proper order.
+
+Each transition can be identified in the DC's logs by a line like:
+
+.. code-block:: none
+
+ notice: Calculated transition 19, saving inputs in /var/lib/pacemaker/pengine/pe-input-1463.bz2
+
+The file listed as the "inputs" is a snapshot of the cluster configuration and
+state at that moment (the CIB). This file can help determine why particular
+actions were scheduled. The ``crm_simulate`` command, described in
+:ref:`crm_simulate`, can be used to replay the file.
+
+The log messages immediately before the "saving inputs" message will include
+any actions that the scheduler thinks need to be done.
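+
+For example, a scheduler input file can be replayed without affecting the live
+cluster (the file name here is taken from the log message above):
+
+.. code-block:: none
+
+   # crm_simulate --xml-file /var/lib/pacemaker/pengine/pe-input-1463.bz2 --simulate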
+
+
+Node Failures
+#############
+
+When a node fails, and looking at errors and warnings doesn't give an obvious
+explanation, try to answer questions like the following based on log messages:
+
+* When and what was the last successful message on the node itself, or about
+ that node in the other nodes' logs?
+* Did pacemaker-controld on the other nodes notice the node leave?
+* Did pacemaker-controld on the DC invoke the scheduler and schedule a new
+ transition?
+* Did the transition include fencing the failed node?
+* Was fencing attempted?
+* Did fencing succeed?
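+
+A rough starting point for answering these questions is to search the detail
+log on the DC for messages mentioning the failed node or fencing; for example
+(hypothetical node name):
+
+.. code-block:: none
+
+   # grep -e 'node3' -e 'fenc' /var/log/pacemaker/pacemaker.log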
+
+Resource Failures
+#################
+
+When a resource fails, and looking at errors and warnings doesn't give an
+obvious explanation, try to answer questions like the following based on log
+messages:
+
+* Did pacemaker-controld record the result of the failed resource action?
+* What was the failed action's execution status and exit status?
+* What code in the resource agent could result in those status codes?
+* Did pacemaker-controld on the DC invoke the scheduler and schedule a new
+ transition?
+* Did the new transition include recovery of the resource?
+* Were the recovery actions initiated, and what were their results?
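+
+The cluster's view of the failure can also be checked with command-line tools;
+for example (hypothetical resource name):
+
+.. code-block:: none
+
+   # crm_mon --one-shot --inactive
+   # crm_failcount --resource rsc1 --query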
diff --git a/doc/sphinx/Pacemaker_Administration/upgrading.rst b/doc/sphinx/Pacemaker_Administration/upgrading.rst
new file mode 100644
index 0000000..1ca2a4e
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Administration/upgrading.rst
@@ -0,0 +1,534 @@
+.. index:: upgrade
+
+Upgrading a Pacemaker Cluster
+-----------------------------
+
+.. index:: version
+
+Pacemaker Versioning
+####################
+
+Pacemaker has an overall release version, plus separate version numbers for
+certain internal components.
+
+.. index::
+ single: version; release
+
+* **Pacemaker release version:** This version consists of three numbers
+ (*x.y.z*).
+
+ The major version number (the *x* in *x.y.z*) increases when at least some
+ rolling upgrades are not possible from the previous major version. For example,
+ a rolling upgrade from 1.0.8 to 1.1.15 should always be supported, but a
+ rolling upgrade from 1.0.8 to 2.0.0 may not be possible.
+
+  The minor version (the *y* in *x.y.z*) increases when there are significant
+  changes in cluster default behavior, tool behavior, and/or the API (for
+  software that uses Pacemaker libraries). The main benefit is to alert you to
+  pay closer attention to the release notes, to see whether you might be
+  affected.
+
+ The release counter (the *z* in *x.y.z*) is increased with all public releases
+ of Pacemaker, which typically include both bug fixes and new features.
+
+.. index::
+ single: feature set
+ single: version; feature set
+
+* **CRM feature set:** This version number applies to the communication between
+ full cluster nodes, and is used to avoid problems in mixed-version clusters.
+
+  The major version number increases when nodes with different versions would
+  not work together (rolling upgrades are not allowed). The minor version
+  number increases when mixed-version clusters are allowed only during rolling
+  upgrades. The minor-minor version number is ignored, but allows resource
+  agents to detect cluster support for various features. [#]_
+
+ Pacemaker ensures that the longest-running node is the cluster's DC. This
+ ensures new features are not enabled until all nodes are upgraded to support
+ them.
+
+.. index::
+ single: version; Pacemaker Remote protocol
+
+* **Pacemaker Remote protocol version:** This version applies to communication
+ between a Pacemaker Remote node and the cluster. It increases when an older
+ cluster node would have problems hosting the connection to a newer
+ Pacemaker Remote node. To avoid these problems, Pacemaker Remote nodes will
+ accept connections only from cluster nodes with the same or newer
+ Pacemaker Remote protocol version.
+
+ Unlike with CRM feature set differences between full cluster nodes,
+ mixed Pacemaker Remote protocol versions between Pacemaker Remote nodes and
+ full cluster nodes are fine, as long as the Pacemaker Remote nodes have the
+ older version. This can be useful, for example, to host a legacy application
+ in an older operating system version used as a Pacemaker Remote node.
+
+.. index::
+ single: version; XML schema
+
+* **XML schema version:** Pacemaker’s configuration syntax — what's allowed in
+  the Cluster Information Base (CIB) — has its own version. This allows
+ the configuration syntax to evolve over time while still allowing clusters
+ with older configurations to work without change.
+
+
+.. index::
+ single: upgrade; methods
+
+Upgrading Cluster Software
+##########################
+
+There are three approaches to upgrading a cluster, each with advantages and
+disadvantages.
+
+.. table:: **Upgrade Methods**
+
+ +---------------------------------------------------+----------+----------+--------+---------+----------+----------+
+ | Method | Available| Can be | Service| Service | Exercises| Allows |
+ | | between | used with| outage | recovery| failover | change of|
+ | | all | Pacemaker| during | during | logic | messaging|
+ | | versions | Remote | upgrade| upgrade | | layer |
+ | | | nodes | | | | [#]_ |
+ +===================================================+==========+==========+========+=========+==========+==========+
+ | Complete cluster shutdown | yes | yes | always | N/A | no | yes |
+ +---------------------------------------------------+----------+----------+--------+---------+----------+----------+
+ | Rolling (node by node) | no | yes | always | yes | yes | no |
+ | | | | [#]_ | | | |
+ +---------------------------------------------------+----------+----------+--------+---------+----------+----------+
+ | Detach and reattach | yes | no | only | no | no | yes |
+ | | | | due to | | | |
+ | | | | failure| | | |
+ +---------------------------------------------------+----------+----------+--------+---------+----------+----------+
+
+
+.. index::
+ single: upgrade; shutdown
+
+Complete Cluster Shutdown
+_________________________
+
+In this scenario, one shuts down all cluster nodes and resources,
+then upgrades all the nodes before restarting the cluster.
+
+#. On each node:
+
+   a. Shut down the cluster software (pacemaker and the messaging layer).
+ #. Upgrade the Pacemaker software. This may also include upgrading the
+ messaging layer and/or the underlying operating system.
+ #. Check the configuration with the ``crm_verify`` tool.
+
+#. On each node:
+
+ a. Start the cluster software.
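+
+On systems using systemd, the shutdown and startup steps will typically look
+something like this (unit names may vary by distribution):
+
+.. code-block:: none
+
+   # systemctl stop pacemaker corosync    # before the upgrade
+   # systemctl start corosync pacemaker   # after the upgrade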
+
+Currently, only Corosync version 2 and greater is supported as the cluster
+layer, but if another stack is supported in the future, the new stack does not
+need to be the same one that was used before the upgrade.
+
+One variation of this approach is to build a new cluster on new hosts.
+This allows the new version to be tested beforehand, and minimizes downtime by
+having the new nodes ready to be placed in production as soon as the old nodes
+are shut down.
+
+
+.. index::
+ single: upgrade; rolling upgrade
+
+Rolling (node by node)
+______________________
+
+In this scenario, each node is removed from the cluster, upgraded, and then
+brought back online, until all nodes are running the newest version.
+
+Special considerations when planning a rolling upgrade:
+
+* If you plan to upgrade other cluster software -- such as the messaging layer --
+ at the same time, consult that software's documentation for its compatibility
+ with a rolling upgrade.
+
+* If the major version number is changing in the Pacemaker version you are
+ upgrading to, a rolling upgrade may not be possible. Read the new version's
+  release notes (as well as the information here) for any limitations that may
+  exist.
+
+* If the CRM feature set is changing in the Pacemaker version you are upgrading
+ to, you should run a mixed-version cluster only during a small rolling
+ upgrade window. If one of the older nodes drops out of the cluster for any
+ reason, it will not be able to rejoin until it is upgraded.
+
+* If the Pacemaker Remote protocol version is changing, all cluster nodes
+ should be upgraded before upgrading any Pacemaker Remote nodes.
+
+See the ClusterLabs wiki's
+`release calendar <https://wiki.clusterlabs.org/wiki/ReleaseCalendar>`_
+to figure out whether the CRM feature set and/or Pacemaker Remote protocol
+version changed between the Pacemaker release versions in your rolling
+upgrade.
+
+To perform a rolling upgrade, on each node in turn:
+
+#. Put the node into standby mode, and wait for any active resources
+ to be moved cleanly to another node. (This step is optional, but
+ allows you to deal with any resource issues before the upgrade.)
+#. Shut down the cluster software (pacemaker and the messaging layer) on the node.
+#. Upgrade the Pacemaker software. This may also include upgrading the
+ messaging layer and/or the underlying operating system.
+#. If this is the first node to be upgraded, check the configuration
+ with the ``crm_verify`` tool.
+#. Start the cluster software (the messaging layer, then pacemaker).
+   The messaging layer must be the same one (currently only Corosync version 2
+   and greater is supported) that the rest of the cluster is using.
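+
+The standby and shutdown steps might look like this with ``crm_standby`` and
+systemd (hypothetical node name; unit names may vary by distribution):
+
+.. code-block:: none
+
+   # crm_standby --node node2 --update on
+   # systemctl stop pacemaker corosync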
+
+.. note::
+
+ Even if a rolling upgrade from the current version of the cluster to the
+ newest version is not directly possible, it may be possible to perform a
+ rolling upgrade in multiple steps, by upgrading to an intermediate version
+ first.
+
+.. table:: **Version Compatibility Table**
+
+ +-------------------------+---------------------------+
+ | Version being Installed | Oldest Compatible Version |
+ +=========================+===========================+
+ | Pacemaker 2.y.z | Pacemaker 1.1.11 [#]_ |
+ +-------------------------+---------------------------+
+ | Pacemaker 1.y.z | Pacemaker 1.0.0 |
+ +-------------------------+---------------------------+
+ | Pacemaker 0.7.z | Pacemaker 0.6.z |
+ +-------------------------+---------------------------+
+
+.. index::
+ single: upgrade; detach and reattach
+
+Detach and Reattach
+___________________
+
+The reattach method is a variant of a complete cluster shutdown, where the
+resources are left active and get re-detected when the cluster is restarted.
+
+This method may not be used if the cluster contains any Pacemaker Remote nodes.
+
+#. Tell the cluster to stop managing services. This is required to allow the
+ services to remain active after the cluster shuts down.
+
+ .. code-block:: none
+
+ # crm_attribute --name maintenance-mode --update true
+
+#. On each node, shut down the cluster software (pacemaker and the messaging
+ layer), and upgrade the Pacemaker software. This may also include upgrading
+ the messaging layer. While the underlying operating system may be upgraded
+ at the same time, that will be more likely to cause outages in the detached
+ services (certainly, if a reboot is required).
+#. Check the configuration with the ``crm_verify`` tool.
+#. On each node, start the cluster software.
+   Currently, only Corosync version 2 and greater is supported as the cluster
+   layer, but if another stack is supported in the future, the new stack does
+   not need to be the same one that was used before the upgrade.
+#. Verify that the cluster re-detected all resources correctly.
+#. Allow the cluster to resume managing resources again:
+
+ .. code-block:: none
+
+ # crm_attribute --name maintenance-mode --delete
+
+.. note::
+
+ While the goal of the detach-and-reattach method is to avoid disturbing
+ running services, resources may still move after the upgrade if any
+ resource's location is governed by a rule based on transient node
+ attributes. Transient node attributes are erased when the node leaves the
+ cluster. A common example is using the ``ocf:pacemaker:ping`` resource to
+ set a node attribute used to locate other resources.
+
+.. index::
+ pair: upgrade; CIB
+
+Upgrading the Configuration
+###########################
+
+The CIB schema version can change from one Pacemaker version to another.
+
+After cluster software is upgraded, the cluster will continue to use the older
+schema version that it was previously using. This can be useful, for example,
+when administrators have written tools that modify the configuration, and are
+based on the older syntax. [#]_
+
+However, when using an older syntax, new features may be unavailable, and there
+is a performance impact, since the cluster must do a non-persistent
+configuration upgrade before each transition. So while using the old syntax is
+possible, it is not advisable to continue using it indefinitely.
+
+Even if you wish to continue using the old syntax, it is a good idea to
+follow the upgrade procedure outlined below, except for the last step, to ensure
+that the new software has no problems with your existing configuration (since it
+will perform much the same task internally).
+
+If you are brave, it is sufficient simply to run ``cibadmin --upgrade``.
+
+A more cautious approach would proceed like this:
+
+#. Create a shadow copy of the configuration. The later commands will
+ automatically operate on this copy, rather than the live configuration.
+
+ .. code-block:: none
+
+ # crm_shadow --create shadow
+
+.. index::
+ single: configuration; verify
+
+#. Verify the configuration is valid with the new software (which may be
+ stricter about syntax mistakes, or may have dropped support for deprecated
+ features):
+
+ .. code-block:: none
+
+ # crm_verify --live-check
+
+#. Fix any errors or warnings.
+#. Perform the upgrade:
+
+ .. code-block:: none
+
+ # cibadmin --upgrade
+
+#. If this step fails, there are three main possibilities:
+
+ a. The configuration was not valid to start with (did you do steps 2 and
+ 3?).
+ #. The transformation failed; `report a bug <https://bugs.clusterlabs.org/>`_.
+ #. The transformation was successful but produced an invalid result.
+
+ If the result of the transformation is invalid, you may see a number of
+ errors from the validation library. If these are not helpful, visit the
+ `Validation FAQ wiki page <https://wiki.clusterlabs.org/wiki/Validation_FAQ>`_
+ and/or try the manual upgrade procedure described below.
+
+#. Check the changes:
+
+ .. code-block:: none
+
+ # crm_shadow --diff
+
+ If at this point there is anything about the upgrade that you wish to
+ fine-tune (for example, to change some of the automatic IDs), now is the
+ time to do so:
+
+ .. code-block:: none
+
+ # crm_shadow --edit
+
+ This will open the configuration in your favorite editor (whichever is
+ specified by the standard ``$EDITOR`` environment variable).
+
+#. Preview how the cluster will react:
+
+ .. code-block:: none
+
+ # crm_simulate --live-check --save-dotfile shadow.dot -S
+ # dot -Tsvg shadow.dot -o shadow.svg
+
+   You can then view ``shadow.svg`` with any compatible image viewer or web
+ browser. Verify that either no resource actions will occur or that you are
+ happy with any that are scheduled. If the output contains actions you do
+ not expect (possibly due to changes to the score calculations), you may need
+ to make further manual changes. See :ref:`crm_simulate` for further details
+ on how to interpret the output of ``crm_simulate`` and ``dot``.
+
+#. Upload the changes:
+
+ .. code-block:: none
+
+ # crm_shadow --commit shadow --force
+
+ In the unlikely event this step fails, please report a bug.
+
+.. note::
+
+ It is also possible to perform the configuration upgrade steps manually:
+
+ #. Locate the ``upgrade*.xsl`` conversion scripts provided with the source
+ code. These will often be installed in a location such as
+ ``/usr/share/pacemaker``, or may be obtained from the
+ `source repository <https://github.com/ClusterLabs/pacemaker/tree/main/xml>`_.
+
+ #. Run the conversion scripts that apply to your older version, for example:
+
+ .. code-block:: none
+
+ # xsltproc /path/to/upgrade06.xsl config06.xml > config10.xml
+
+   #. Locate the ``pacemaker.rng`` schema file (in the same location as the
+      XSL files).
+ #. Check the XML validity:
+
+ .. code-block:: none
+
+ # xmllint --relaxng /path/to/pacemaker.rng config10.xml
+
+ The advantage of this method is that it can be performed without the cluster
+ running, and any validation errors are often more informative.
+
+
+What Changed in 2.1
+###################
+
+The Pacemaker 2.1 release is fully backward-compatible in both the CIB XML and
+the C API. Highlights:
+
+* Pacemaker now supports the **OCF Resource Agent API version 1.1**.
+ Most notably, the ``Master`` and ``Slave`` role names have been renamed to
+ ``Promoted`` and ``Unpromoted``.
+
+* Pacemaker now supports colocations where the dependent resource does not
+ affect the primary resource's placement (via a new ``influence`` colocation
+ constraint option and ``critical`` resource meta-attribute). This is intended
+ for cases where a less-important resource must be colocated with an essential
+ resource, but it is preferred to leave the less-important resource stopped if
+ it fails, rather than move both resources.
+
+* If Pacemaker is built with libqb 2.0 or later, the detail log will use
+ **millisecond-resolution timestamps**.
+
+* In addition to ``crm_mon`` and ``stonith_admin``, the ``crmadmin``,
+  ``crm_resource``, ``crm_simulate``, and ``crm_verify`` commands now support
+  the ``--output-as`` and ``--output-to`` options, including **XML output**
+  (which scripts and higher-level tools are strongly recommended to use
+  instead of trying to parse the text output, which may change from release to
+  release).
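+
+For example, a validity check can produce machine-readable output like this
+(the exact XML structure may vary by release):
+
+.. code-block:: none
+
+   # crm_verify --live-check --output-as=xml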
+
+For a detailed list of changes, see the release notes and the
+`Pacemaker 2.1 Changes <https://wiki.clusterlabs.org/wiki/Pacemaker_2.1_Changes>`_
+page on the ClusterLabs wiki.
+
+
+What Changed in 2.0
+###################
+
+The main goal of the 2.0 release was to remove support for deprecated syntax,
+along with some small changes in default configuration behavior and tool
+behavior. Highlights:
+
+* Only Corosync version 2 and greater is now supported as the underlying
+ cluster layer. Support for Heartbeat and Corosync 1 (including CMAN) is
+ removed.
+
+* The Pacemaker detail log file is now stored in
+ ``/var/log/pacemaker/pacemaker.log`` by default.
+
+* The ``record-pending`` cluster property now defaults to true, which
+  allows status tools such as ``crm_mon`` to show operations that are in
+  progress.
+
+* Support for a number of deprecated build options, environment variables,
+ and configuration settings has been removed.
+
+* The ``master`` tag has been deprecated in favor of using the ``clone`` tag
+ with the new ``promotable`` meta-attribute set to ``true``. "Master/slave"
+ clone resources are now referred to as "promotable" clone resources.
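+
+As an illustration, a promotable clone of a hypothetical resource would now be
+configured along these lines:
+
+.. code-block:: xml
+
+   <clone id="stateful-clone">
+     <meta_attributes id="stateful-clone-meta">
+       <nvpair id="stateful-clone-promotable" name="promotable" value="true"/>
+     </meta_attributes>
+     <primitive id="stateful" class="ocf" provider="pacemaker" type="Stateful"/>
+   </clone>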
+
+* The public API for Pacemaker libraries that software applications can use
+ has changed significantly.
+
+For a detailed list of changes, see the release notes and the
+`Pacemaker 2.0 Changes <https://wiki.clusterlabs.org/wiki/Pacemaker_2.0_Changes>`_
+page on the ClusterLabs wiki.
+
+
+What Changed in 1.0
+###################
+
+New
+___
+
+* Failure timeouts.
+* New section for resource and operation defaults.
+* Tool for making offline configuration changes.
+* ``Rules``, ``instance_attributes``, ``meta_attributes`` and sets of
+ operations can be defined once and referenced in multiple places.
+* The CIB now accepts XPath-based create/modify/delete operations. See
+ ``cibadmin --help``.
+* Multi-dimensional colocation and ordering constraints.
+* The ability to connect to the CIB from non-cluster machines.
+* Allow recurring actions to be triggered at known times.
+
+
+Changed
+_______
+
+* Syntax
+
+ * All resource and cluster options now use dashes (-) instead of underscores
+ (_)
+ * ``master_slave`` was renamed to ``master``
+ * The ``attributes`` container tag was removed
+ * The operation field ``pre-req`` has been renamed ``requires``
+  * All operations must have an ``interval``, and ``start``/``stop`` must have
+    it set to zero
+
+* The ``stonith-enabled`` option now defaults to true.
+* The cluster will refuse to start resources if ``stonith-enabled`` is true (or
+  unset) and no STONITH resources have been defined.
+* The attributes of colocation and ordering constraints were renamed for
+ clarity.
+* ``resource-failure-stickiness`` has been replaced by ``migration-threshold``.
+* The parameters for command-line tools have been made consistent.
+* Switched to RelaxNG schema validation and the libxml2 parser.
+
+  * ``id`` fields are now XML IDs, which have the following limitations:
+
+    * IDs cannot contain colons (:)
+    * IDs cannot begin with a number
+    * IDs must be globally unique (not just unique for that tag)
+
+ * Some fields (such as those in constraints that refer to resources) are
+ IDREFs.
+
+ This means that they must reference existing resources or objects in
+ order for the configuration to be valid. Removing an object which is
+ referenced elsewhere will therefore fail.
+
+  * The CIB representation, from which an MD5 digest is calculated to verify
+ CIBs on the nodes, has changed.
+
+ This means that every CIB update will require a full refresh on any
+ upgraded nodes until the cluster is fully upgraded to 1.0. This will result
+ in significant performance degradation and it is therefore highly
+ inadvisable to run a mixed 1.0/0.6 cluster for any longer than absolutely
+ necessary.
+
+* Ping node information no longer needs to be added to ``ha.cf``. Simply
+ include the lists of hosts in your ping resource(s).
+
+
+Removed
+_______
+
+
+* Syntax
+
+ * It is no longer possible to set resource meta options as top-level
+ attributes. Use meta-attributes instead.
+ * Resource and operation defaults are no longer read from ``crm_config``.
+
+.. rubric:: Footnotes
+
+.. [#] Before CRM feature set 3.1.0 (Pacemaker 2.0.0), the minor-minor version
+ number was treated the same as the minor version.
+
+.. [#] Currently, Corosync version 2 and greater is the only supported cluster
+ stack, but other stacks have been supported by past versions, and may be
+ supported by future versions.
+
+.. [#] Any active resources will be moved off the node being upgraded, so there
+ will be at least a brief outage unless all resources can be migrated
+ "live".
+
+.. [#] Rolling upgrades from Pacemaker 1.1.z to 2.y.z are possible only if the
+ cluster uses corosync version 2 or greater as its messaging layer, and
+ the Cluster Information Base (CIB) uses schema 1.0 or higher in its
+ ``validate-with`` property.
+
+.. [#] As of Pacemaker 2.0.0, only schema versions pacemaker-1.0 and higher
+ are supported (excluding pacemaker-1.1, which was a special case).
diff --git a/doc/sphinx/Pacemaker_Development/c.rst b/doc/sphinx/Pacemaker_Development/c.rst
new file mode 100644
index 0000000..66ce3b2
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Development/c.rst
@@ -0,0 +1,955 @@
+.. index::
+ single: C
+ pair: C; guidelines
+
+C Coding Guidelines
+-------------------
+
+Pacemaker is a large project accepting contributions from developers with a
+wide range of skill levels and organizational affiliations, and maintained by
+multiple people over long periods of time. Following consistent guidelines
+makes reading, writing, and reviewing code easier, and helps avoid common
+mistakes.
+
+Some existing Pacemaker code does not follow these guidelines, for historical
+reasons and API backward compatibility, but new code should.
+
+
+Code Organization
+#################
+
+Pacemaker's C code is organized as follows:
+
++-----------------+-----------------------------------------------------------+
+| Directory | Contents |
++=================+===========================================================+
+| daemons | the Pacemaker daemons (pacemakerd, pacemaker-based, etc.) |
++-----------------+-----------------------------------------------------------+
+| include | header files for library APIs |
++-----------------+-----------------------------------------------------------+
+| lib | libraries |
++-----------------+-----------------------------------------------------------+
+| tools | command-line tools |
++-----------------+-----------------------------------------------------------+
+
+Source file names should be unique across the entire project, to allow for
+individual tracing via ``PCMK_trace_files``.
+
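+For example, trace logging could be enabled for a single source file (a
+hypothetical choice here) by setting the following in Pacemaker's environment
+configuration (often ``/etc/sysconfig/pacemaker`` or
+``/etc/default/pacemaker``):
+
+.. code-block:: none
+
+   PCMK_trace_files=cib_file.c
+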
+
+.. index::
+ single: C; library
+ single: C library
+
+Pacemaker Libraries
+###################
+
++---------------+---------+---------------+---------------------------+-------------------------------------+
+| Library | Symbol | Source | API Headers | Description |
+| | prefix | location | | |
++===============+=========+===============+===========================+=====================================+
+| libcib | cib | lib/cib | | include/crm/cib.h | .. index:: |
+| | | | | include/crm/cib/* | single: C library; libcib |
+| | | | | single: libcib |
+| | | | | |
+| | | | | API for pacemaker-based IPC and |
+| | | | | the CIB |
++---------------+---------+---------------+---------------------------+-------------------------------------+
+| libcrmcluster | pcmk | lib/cluster | | include/crm/cluster.h | .. index:: |
+| | | | | include/crm/cluster/* | single: C library; libcrmcluster |
+| | | | | single: libcrmcluster |
+| | | | | |
+| | | | | Abstract interface to underlying |
+| | | | | cluster layer |
++---------------+---------+---------------+---------------------------+-------------------------------------+
+| libcrmcommon | pcmk | lib/common | | include/crm/common/* | .. index:: |
+| | | | | some of include/crm/* | single: C library; libcrmcommon |
+| | | | | single: libcrmcommon |
+| | | | | |
+| | | | | Everything else |
++---------------+---------+---------------+---------------------------+-------------------------------------+
+| libcrmservice | svc | lib/services | | include/crm/services.h | .. index:: |
+| | | | | single: C library; libcrmservice |
+| | | | | single: libcrmservice |
+| | | | | |
+| | | | | Abstract interface to supported |
+| | | | | resource types (OCF, LSB, etc.) |
++---------------+---------+---------------+---------------------------+-------------------------------------+
+| liblrmd | lrmd | lib/lrmd | | include/crm/lrmd*.h | .. index:: |
+| | | | | single: C library; liblrmd |
+| | | | | single: liblrmd |
+| | | | | |
+| | | | | API for pacemaker-execd IPC |
++---------------+---------+---------------+---------------------------+-------------------------------------+
+| libpacemaker | pcmk | lib/pacemaker | | include/pacemaker*.h | .. index:: |
+| | | | | include/pcmki/* | single: C library; libpacemaker |
+| | | | | single: libpacemaker |
+| | | | | |
+| | | | | High-level APIs equivalent to |
+| | | | | command-line tool capabilities |
+| | | | | (and high-level internal APIs) |
++---------------+---------+---------------+---------------------------+-------------------------------------+
+| libpe_rules | pe | lib/pengine | | include/crm/pengine/* | .. index:: |
+| | | | | single: C library; libpe_rules |
+| | | | | single: libpe_rules |
+| | | | | |
+| | | | | Scheduler functionality related |
+| | | | | to evaluating rules |
++---------------+---------+---------------+---------------------------+-------------------------------------+
+| libpe_status | pe | lib/pengine | | include/crm/pengine/* | .. index:: |
+| | | | | single: C library; libpe_status |
+| | | | | single: libpe_status |
+| | | | | |
+| | | | | Low-level scheduler functionality |
++---------------+---------+---------------+---------------------------+-------------------------------------+
+| libstonithd | stonith | lib/fencing | | include/crm/stonith-ng.h| .. index:: |
+| | | | | include/crm/fencing/* | single: C library; libstonithd |
+| | | | | single: libstonithd |
+| | | | | |
+| | | | | API for pacemaker-fenced IPC |
++---------------+---------+---------------+---------------------------+-------------------------------------+
+
+
+Public versus Internal APIs
+___________________________
+
+Pacemaker libraries have both internal and public APIs. Internal APIs are those
+used only within Pacemaker; public APIs are those offered (via header files and
+documentation) for external code to use.
+
+Generic functionality needed by Pacemaker itself, such as string processing or
+XML processing, should remain internal, while functions providing useful
+high-level access to Pacemaker capabilities should be public. When in doubt,
+keep APIs internal, because it's easier to expose a previously internal API
+than hide a previously public API.
+
+Internal APIs can be changed as needed.
+
+The public API/ABI should maintain a degree of stability so that external
+applications using it do not need to be rewritten or rebuilt frequently. Many
+OSes/distributions avoid breaking API/ABI compatibility within a major release,
+so if Pacemaker breaks compatibility, that significantly delays when OSes
+can package the new version. Therefore, changes to public APIs should be
+backward-compatible (as detailed throughout this chapter), unless we are doing
+a (rare) release where we specifically intend to break compatibility.
+
+External applications known to use Pacemaker's public C API include
+`sbd <https://github.com/ClusterLabs/sbd>`_ and dlm_controld.
+
+
+.. index::
+ pair: C; naming
+
+API Symbol Naming
+_________________
+
+Exposed API symbols (non-``static`` function names, ``struct`` and ``typedef``
+names in header files, etc.) must begin with the prefix appropriate to the
+library (shown in the table at the beginning of this section). This reduces the
+chance of naming collisions when external software links against the library.
+
+The prefix is usually lowercase but may be all-caps for some defined constants
+and macros.
+
+Public API symbols should follow the library prefix with a single underbar
+(for example, ``pcmk_something``), and internal API symbols with a double
+underbar (for example, ``pcmk__other_thing``).
+
+File-local symbols (such as static functions) and non-library code do not
+require a prefix, though a unique prefix indicating an executable (controld,
+crm_mon, etc.) can be helpful when symbols are shared between multiple
+source files for the executable.
+
+
+API Header File Naming
+______________________
+
+* Internal API headers should be named ending in ``_internal.h``, in the same
+  location as public headers, with the exception of libpacemaker, which for
+  historical reasons keeps internal headers in ``include/pcmki/pcmki_*.h``.
+
+* If a library needs to share symbols just within the library, header files for
+ these should be named ending in ``_private.h`` and located in the library
+ source directory (not ``include``). Such functions should be declared as
+ ``G_GNUC_INTERNAL``, to aid compiler efficiency (glib defines this
+ symbol appropriately for the compiler).
+
+Header files that are not library API are kept in the same directory as the
+source code they're included from.
+
+The easiest way to tell what kind of API a symbol belongs to is to see where
+it's declared: if it's in a public header, it's public API; if it's in an
+internal header, it's internal API; if it's in a library-private header, it's
+library-private API; otherwise, it's not an API.
+
+
+.. index::
+ pair: C; API documentation
+ single: Doxygen
+
+API Documentation
+_________________
+
+Pacemaker uses `Doxygen <https://www.doxygen.nl/manual/docblocks.html>`_
+to automatically generate its
+`online API documentation <https://clusterlabs.org/pacemaker/doxygen/>`_,
+so all public API (header files, functions, structs, enums, etc.) should be
+documented with Doxygen comment blocks. Other code may be documented in the
+same way if desired, with an ``\internal`` tag in the Doxygen comment.
+
+Simple example of an internal function with a Doxygen comment block:
+
+.. code-block:: c
+
+ /*!
+ * \internal
+ * \brief Return string length plus 1
+ *
+ * Return the number of characters in a given string, plus one.
+ *
+ * \param[in] s A string (must not be NULL)
+ *
+ * \return The length of \p s plus 1.
+ */
+ static int
+ f(const char *s)
+ {
+ return strlen(s) + 1;
+ }
+
+Function arguments are marked as ``[in]`` for input only, ``[out]`` for output
+only, or ``[in,out]`` for both input and output.
+
+``[in,out]`` should be used for struct pointer arguments if the function can
+change any data accessed via the pointer. For example, if the struct contains
+a ``GHashTable *`` member, the argument should be marked as ``[in,out]`` if the
+function inserts data into the table, even if the struct members themselves are
+not changed. However, an argument is not ``[in,out]`` if something reachable
+via the argument is modified via a separate argument. For example, both
+``pe_resource_t`` and ``pe_node_t`` contain pointers to their
+``pe_working_set_t`` and thus indirectly to each other, but if the function
+modifies the resource via the resource argument, the node argument does not
+have to be ``[in,out]``.
+
+
+Public API Deprecation
+______________________
+
+Public APIs may not be removed in most Pacemaker releases, but they may be
+deprecated.
+
+When a public API is deprecated, it is moved to a header whose name ends in
+``compat.h``. The original header includes the compatibility header only if the
+``PCMK_ALLOW_DEPRECATED`` symbol is undefined or defined to 1. This allows
+external code to continue using the deprecated APIs, but internal code is
+prevented from using them because the ``crm_internal.h`` header defines the
+symbol to 0.
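+
+As a sketch, the guard in the original public header behaves like this (a
+macro stands in for the actual compatibility-header ``#include``, so that the
+example is self-contained):
+
+.. code-block:: c
+
+    /* Hypothetical sketch of the guard an original public header uses to
+     * include its compatibility header. Because crm_internal.h defines
+     * PCMK_ALLOW_DEPRECATED to 0, internal code never sees the deprecated
+     * declarations.
+     */
+    #if !defined(PCMK_ALLOW_DEPRECATED) || (PCMK_ALLOW_DEPRECATED == 1)
+    #define MY_DEPRECATED_API_VISIBLE 1
+    #else
+    #define MY_DEPRECATED_API_VISIBLE 0
+    #endif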
+
+
+.. index::
+ pair: C; boilerplate
+ pair: license; C
+ pair: copyright; C
+
+C Boilerplate
+#############
+
+Every C file should start with a short copyright and license notice:
+
+.. code-block:: c
+
+ /*
+ * Copyright <YYYY[-YYYY]> the Pacemaker project contributors
+ *
+ * The version control history for this file may have further details.
+ *
+ * This source code is licensed under <LICENSE> WITHOUT ANY WARRANTY.
+ */
+
+*<LICENSE>* should follow the policy set forth in the
+`COPYING <https://github.com/ClusterLabs/pacemaker/blob/main/COPYING>`_ file,
+generally one of "GNU General Public License version 2 or later (GPLv2+)"
+or "GNU Lesser General Public License version 2.1 or later (LGPLv2.1+)".
+
+Header files should additionally protect against multiple inclusion by defining
+a unique symbol of the form ``PCMK__<capitalized_header_name>__H``. For
+example:
+
+.. code-block:: c
+
+ #ifndef PCMK__MY_HEADER__H
+ # define PCMK__MY_HEADER__H
+
+ // header code here
+
+ #endif // PCMK__MY_HEADER__H
+
+Public API header files should additionally declare "C" compatibility for
+inclusion by C++, and give a Doxygen file description. For example:
+
+.. code-block:: c
+
+ #ifdef __cplusplus
+ extern "C" {
+ #endif
+
+ /*!
+ * \file
+ * \brief My brief description here
+ * \ingroup core
+ */
+
+ // header code here
+
+ #ifdef __cplusplus
+ }
+ #endif
+
+
+.. index::
+ pair: C; whitespace
+
+Line Formatting
+###############
+
+* Indentation must be 4 spaces, no tabs.
+
+* Do not leave trailing whitespace.
+
+* Lines should be no longer than 80 characters unless limiting line length
+ hurts readability.
+
+
+.. index::
+ pair: C; comment
+
+Comments
+########
+
+.. code-block:: c
+
+ /* Single-line comments may look like this */
+
+ // ... or this
+
+ /* Multi-line comments should start immediately after the comment opening.
+ * Subsequent lines should start with an aligned asterisk. The comment
+ * closing should be aligned and on a line by itself.
+ */
+
+
+.. index::
+ pair: C; operator
+
+Operators
+#########
+
+.. code-block:: c
+
+ // Operators have spaces on both sides
+ x = a;
+
+ /* (1) Do not rely on operator precedence; use parentheses when mixing
+ * operators with different priority, for readability.
+ * (2) No space is used after an opening parenthesis or before a closing
+ * parenthesis.
+ */
+ x = a + b - (c * d);
+
+
+.. index::
+ single: C; if
+ single: C; else
+ single: C; while
+ single: C; for
+ single: C; switch
+
+Control Statements (if, else, while, for, switch)
+#################################################
+
+.. code-block:: c
+
+ /*
+ * (1) The control keyword is followed by a space, a left parenthesis
+ * without a space, the condition, a right parenthesis, a space, and the
+ * opening bracket on the same line.
+ * (2) Always use braces around control statement blocks, even if they only
+ * contain one line. This makes code review diffs smaller if a line gets
+ * added in the future, and avoids the chance of bad indenting making a
+ * line incorrectly appear to be part of the block.
+ * (3) The closing bracket is on a line by itself.
+ */
+ if (v < 0) {
+ return 0;
+ }
+
+ /* "else" and "else if" are on the same line with the previous ending brace
+ * and next opening brace, separated by a space. Blank lines may be used
+ * between blocks to help readability.
+ */
+ if (v > 0) {
+ return 0;
+
+ } else if (a == 0) {
+ return 1;
+
+ } else {
+ return 2;
+ }
+
+ /* Do not use assignments in conditions. This ensures that the developer's
+ * intent is always clear, makes code reviews easier, and reduces the chance
+ * of using assignment where comparison is intended.
+ */
+ // Do this ...
+ a = f();
+ if (a) {
+ return 0;
+ }
+ // ... NOT this
+ if (a = f()) {
+ return 0;
+ }
+
+ /* It helps readability to use the "!" operator only in boolean
+ * comparisons, and explicitly compare numeric values against 0,
+ * pointers against NULL, etc. This helps remind the reader of the
+ * type being compared.
+ */
+ int i = 0;
+ char *s = NULL;
+ bool cond = false;
+
+ if (!cond) {
+ return 0;
+ }
+ if (i == 0) {
+ return 0;
+ }
+ if (s == NULL) {
+ return 0;
+ }
+
+ /* In a "switch" statement, indent "case" one level, and indent the body of
+ * each "case" another level.
+ */
+ switch (expression) {
+ case 0:
+ command1;
+ break;
+ case 1:
+ command2;
+ break;
+ default:
+ command3;
+ break;
+ }
+
+
+.. index::
+ pair: C; macro
+
+Macros
+######
+
+Macros are a powerful but easily misused feature of the C preprocessor, and
+Pacemaker uses a lot of obscure macro features. If you need to brush up, the
+`GCC documentation for macros
+<https://gcc.gnu.org/onlinedocs/cpp/Macros.html#Macros>`_ is excellent.
+
+Some common issues:
+
+* Beware of side effects in macro arguments that may be evaluated more than
+ once
+* Always parenthesize macro arguments used in the macro body to avoid
+ precedence issues if the argument is an expression
+* Multi-statement macro bodies should be enclosed in do...while(0) to make them
+ behave more like a single statement and avoid control flow issues
+
+Often, a static inline function defined in a header is preferable to a macro,
+to avoid the numerous issues that plague macros and gain the benefit of
+argument and return value type checking.
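+
+As a sketch of the guidelines above, a multi-statement macro might look like
+this (``SWAP_INTS`` is a hypothetical example, not a Pacemaker API):
+
+.. code-block:: c
+
+    /* Arguments are parenthesized in the body, and the multi-statement body
+     * is wrapped in do...while(0) so the macro behaves like a single
+     * statement (for example, after an "if" with no braces). Note that the
+     * arguments are still evaluated more than once, so side effects in them
+     * remain unsafe.
+     */
+    #define SWAP_INTS(a, b) do {        \
+            int tmp_ = (a);             \
+            (a) = (b);                  \
+            (b) = tmp_;                 \
+        } while (0)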
+
+
+.. index::
+ pair: C; memory
+
+Memory Management
+#################
+
+* Always use ``calloc()`` rather than ``malloc()``. It has no additional cost on
+ modern operating systems, and reduces the severity and security risks of
+ uninitialized memory usage bugs.
+
+* Ensure that all dynamically allocated memory is freed when no longer needed,
+ and not used after it is freed. This can be challenging in the more
+ event-driven, callback-oriented sections of code.
+
+* Free dynamically allocated memory using the free function corresponding to
+ how it was allocated. For example, use ``free()`` with ``calloc()``, and
+ ``g_free()`` with most glib functions that allocate objects.
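+
+A minimal sketch of matching allocation and free calls (glib allocation is
+mentioned only in the comment, to keep the example standalone):
+
+.. code-block:: c
+
+    #include <stdlib.h>
+    #include <string.h>
+
+    // Allocate zero-initialized memory with calloc() rather than malloc();
+    // the caller must release it with the matching free() (not g_free())
+    static char *
+    dup_string(const char *s)
+    {
+        char *copy = calloc(strlen(s) + 1, sizeof(char));
+
+        if (copy != NULL) {
+            memcpy(copy, s, strlen(s));
+        }
+        return copy;
+    }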
+
+
+.. index::
+ single: C; struct
+
+Structures
+##########
+
+Changes to structures defined in public API headers (adding or removing
+members, or changing member types) are generally not possible without breaking
+API compatibility. However, there are exceptions:
+
+* Public API structures can be designed such that they can be allocated only
+ via API functions, not declared directly or allocated with standard memory
+ functions using ``sizeof``.
+
+  * This can be enforced simply by documenting the limitation, in which case
+ new ``struct`` members can be added to the end of the structure without
+ breaking compatibility.
+
+ * Alternatively, the structure definition can be kept in an internal header,
+ with only a pointer type definition kept in a public header, in which case
+ the structure definition can be changed however needed.
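+
+A sketch of the second approach, using a hypothetical type (in the real tree,
+the two halves would live in a public and an internal header, respectively):
+
+.. code-block:: c
+
+    #include <stdlib.h>
+
+    // The public header would expose only this typedef and the function
+    // declarations; callers never see the members or use sizeof on the type
+    typedef struct example_thing example_thing_t;
+
+    // Internal definition: members can be added or changed freely
+    struct example_thing {
+        int value;
+    };
+
+    static example_thing_t *
+    example_thing_new(int value)
+    {
+        example_thing_t *thing = calloc(1, sizeof(example_thing_t));
+
+        if (thing != NULL) {
+            thing->value = value;
+        }
+        return thing;
+    }
+
+    static int
+    example_thing_value(const example_thing_t *thing)
+    {
+        return (thing == NULL)? 0 : thing->value;
+    }
+
+    static void
+    example_thing_free(example_thing_t *thing)
+    {
+        free(thing);
+    }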
+
+
+.. index::
+ single: C; variable
+
+Variables
+#########
+
+.. index::
+ single: C; pointer
+
+Pointers
+________
+
+.. code-block:: c
+
+ /* (1) The asterisk goes by the variable name, not the type;
+ * (2) Avoid leaving pointers uninitialized, to lessen the impact of
+ * use-before-assignment bugs
+ */
+ char *my_string = NULL;
+
+ // Use space before asterisk and after closing parenthesis in a cast
+ char *foo = (char *) bar;
+
+.. index::
+ single: C; global variable
+
+Globals
+_______
+
+Global variables should be avoided in libraries when possible. State
+information should instead be passed as function arguments (often as a
+structure). This is not for thread safety -- Pacemaker's use of forking
+ensures it will never be threaded -- but it does minimize overhead,
+improve readability, and avoid obscure side effects.
+
+Variable Naming
+_______________
+
+Time intervals are sometimes represented in Pacemaker code as user-defined
+text specifications (for example, "10s"), other times as an integer number of
+seconds or milliseconds, and still other times as a string representation
+of an integer number. Variables for these should be named with an indication
+of which is being used (for example, use ``interval_spec``, ``interval_ms``,
+or ``interval_ms_s`` instead of ``interval``).
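+
+For example (hypothetical variables; the suffix tells the reader which
+representation is stored):
+
+.. code-block:: c
+
+    static const char *interval_spec = "10s";    // user-specified text form
+    static long interval_ms = 10000L;            // parsed milliseconds
+    static const char *interval_ms_s = "10000";  // string form of milliseconds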
+
+.. index::
+ pair: C; booleans
+ pair: C; bool
+ pair: C; gboolean
+
+Booleans
+________
+
+Booleans in C can be represented by an integer type, ``bool``, or ``gboolean``.
+
+Integers are sometimes useful for storing booleans when they must be converted
+to and from a string, such as an XML attribute value (for which
+``crm_element_value_int()`` can be used). Integer booleans use 0 for false and
+nonzero (usually 1) for true.
+
+``gboolean`` should be used with glib APIs that specify it, and should always
+be used with glib's ``TRUE`` and ``FALSE`` constants.
+
+Otherwise, ``bool`` should be preferred. ``bool`` should be used with the
+``true`` and ``false`` constants from the ``stdbool.h`` header.
+
+Do not use equality operators when testing booleans. For example:
+
+.. code-block:: c
+
+ // Do this
+ if (bool1) {
+ fn();
+ }
+ if (!bool2) {
+ fn2();
+ }
+
+ // Not this
+ if (bool1 == true) {
+ fn();
+ }
+ if (bool2 == false) {
+ fn2();
+ }
+
+ // Otherwise there's no logical end ...
+ if ((bool1 == false) == true) {
+ fn();
+ }
+
+
+.. index::
+ pair: C; strings
+
+String Handling
+###############
+
+Define Constants for Magic Strings
+__________________________________
+
+A "magic" string is one used for control purposes rather than human reading,
+and which must be exactly the same every time it is used. Examples would be
+configuration option names, XML attribute names, or environment variable names.
+
+These should always be defined constants, rather than using the string literal
+everywhere. If someone mistypes a defined constant, the code won't compile, but
+if they mistype a literal, it could go unnoticed until a user runs into a
+problem.
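+
+For example (``EXAMPLE_XML_ATTR_ID`` is a hypothetical constant, not a real
+Pacemaker definition):
+
+.. code-block:: c
+
+    #include <string.h>
+
+    // A typo in EXAMPLE_XML_ATTR_ID fails to compile, while a typo in a
+    // bare "id" literal would silently produce the wrong attribute name
+    #define EXAMPLE_XML_ATTR_ID "id"
+
+    static int
+    is_id_attribute(const char *name)
+    {
+        return (name != NULL) && (strcmp(name, EXAMPLE_XML_ATTR_ID) == 0);
+    }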
+
+
+String-Related Library Functions
+________________________________
+
+Pacemaker's libcrmcommon has a large number of functions to assist in string
+handling. The most commonly used ones are:
+
+* ``pcmk__str_eq()`` tests string equality (similar to ``strcmp()``), but can
+ handle NULL, and takes options for case-insensitive, whether NULL should be
+ considered a match, etc.
+* ``crm_strdup_printf()`` takes ``printf()``-style arguments and creates a
+ string from them (dynamically allocated, so it must be freed with
+ ``free()``). It asserts on memory failure, so the return value is always
+ non-NULL.
+
+String handling functions should almost always be internal API, since Pacemaker
+isn't intended to be used as a general-purpose library. Most are declared in
+``include/crm/common/strings_internal.h``. ``util.h`` has some older ones that
+are public API (for now, but will eventually be made internal).
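+
+The NULL handling described above can be illustrated with a simplified
+stand-in for ``pcmk__str_eq()`` (the real function additionally takes a flags
+argument controlling case sensitivity, whether two NULLs match, and so on):
+
+.. code-block:: c
+
+    #include <stdbool.h>
+    #include <string.h>
+
+    // Unlike strcmp(), passing NULL is safe; in this simplified sketch,
+    // NULL never matches anything
+    static bool
+    str_eq_sketch(const char *s1, const char *s2)
+    {
+        return (s1 != NULL) && (s2 != NULL) && (strcmp(s1, s2) == 0);
+    }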
+
+char*, gchar*, and GString
+__________________________
+
+When using dynamically allocated strings, be careful to always use the
+appropriate free function.
+
+* ``char*`` strings allocated with something like ``calloc()`` must be freed
+ with ``free()``. Most Pacemaker library functions that allocate strings use
+ this implementation.
+* glib functions often use ``gchar*`` instead, which must be freed with
+ ``g_free()``.
+* Occasionally, it's convenient to use glib's flexible ``GString*`` type, which
+ must be freed with ``g_string_free()``.
+
+.. index::
+ pair: C; regular expression
+
+Regular Expressions
+___________________
+
+- Use ``REG_NOSUB`` with ``regcomp()`` whenever possible, for efficiency.
+- Be sure to use ``regfree()`` appropriately.
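+
+A minimal example of both points:
+
+.. code-block:: c
+
+    #include <regex.h>
+
+    // Returns 1 if value matches pattern, otherwise 0. REG_NOSUB lets
+    // regcomp() skip tracking subexpression positions, since only
+    // match/no-match is needed; regfree() releases the compiled pattern.
+    static int
+    matches_pattern(const char *pattern, const char *value)
+    {
+        regex_t regex;
+        int rc;
+
+        if (regcomp(&regex, pattern, REG_EXTENDED|REG_NOSUB) != 0) {
+            return 0;
+        }
+        rc = regexec(&regex, value, 0, NULL, 0);
+        regfree(&regex);
+        return (rc == 0);
+    }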
+
+
+.. index::
+ single: C; enum
+
+Enumerations
+############
+
+* Enumerations should not have a ``typedef``, and do not require any naming
+ convention beyond what applies to all exposed symbols.
+
+* New values should usually be added to the end of public API enumerations,
+  because the compiler will assign the values 0, 1, etc., in the order given,
+  and inserting a value in the middle would change the numerical values of all
+  later values, breaking code compiled with the old values. However, if enum
+  numerical values are explicitly specified rather than left to the compiler,
+  new values can be added anywhere.
+
+* When defining constant integer values, enum should be preferred over
+ ``#define`` or ``const`` when possible. This allows type checking without
+ consuming memory.
+
+Flag groups
+___________
+
+Pacemaker often uses flag groups (also called bit fields or bitmasks) for a
+collection of boolean options (flags/bits).
+
+This is more efficient for storage and manipulation than individual booleans,
+but its main advantage is when used in public APIs, because using another bit
+in a bitmask is backward compatible, whereas adding a new function argument (or
+sometimes even a structure member) is not.
+
+.. code-block:: c
+
+ #include <stdint.h>
+
+ /* (1) Define an enumeration to name the individual flags, for readability.
+ * An enumeration is preferred to a series of "#define" constants
+ * because it is typed, and logically groups the related names.
+ * (2) Define the values using left-shifting, which is more readable and
+ * less error-prone than hexadecimal literals (0x0001, 0x0002, 0x0004,
+ * etc.).
+ * (3) Using a comma after the last entry makes diffs smaller for reviewing
+ * if a new value needs to be added or removed later.
+ */
+ enum pcmk__some_bitmask_type {
+ pcmk__some_value = (1 << 0),
+ pcmk__other_value = (1 << 1),
+ pcmk__another_value = (1 << 2),
+ };
+
+ /* The flag group itself should be an unsigned type from stdint.h (not
+ * the enum type, since it will be a mask of the enum values and not just
+ * one of them). uint32_t is the most common, since we rarely need more than
+ * 32 flags, but a smaller or larger type could be appropriate in some
+ * cases.
+ */
+ uint32_t flags = pcmk__some_value | pcmk__other_value;
+
+ /* If the values will be used only with uint64_t, define them accordingly,
+ * to make compilers happier.
+ */
+ enum pcmk__something_else {
+ pcmk__whatever = (UINT64_C(1) << 0),
+ };
+
+We have convenience functions for checking flags (see ``pcmk_any_flags_set()``,
+``pcmk_all_flags_set()``, and ``pcmk_is_set()``) as well as setting and
+clearing them (see ``pcmk__set_flags_as()`` and ``pcmk__clear_flags_as()``,
+usually used via wrapper macros defined for specific flag groups). These
+convenience functions should be preferred to direct bitwise arithmetic, for
+readability and logging consistency.
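+
+In terms of bitwise arithmetic, the checks behave roughly like this (a sketch
+with a hypothetical flag group; the real convenience functions also support
+consistent logging of flag changes):
+
+.. code-block:: c
+
+    #include <stdbool.h>
+    #include <stdint.h>
+
+    enum example_flags {
+        example_flag_first  = (1 << 0),
+        example_flag_second = (1 << 1),
+    };
+
+    // Roughly what pcmk_all_flags_set() computes
+    static bool
+    all_flags_set(uint32_t group, uint32_t flags)
+    {
+        return (group & flags) == flags;
+    }
+
+    // Roughly what pcmk_any_flags_set() computes
+    static bool
+    any_flags_set(uint32_t group, uint32_t flags)
+    {
+        return (group & flags) != 0;
+    }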
+
+
+.. index::
+ pair: C; function
+
+Functions
+#########
+
+Function names should be unique across the entire project, to allow for
+individual tracing via ``PCMK_trace_functions``, and make it easier to search
+code and follow detail logs.
+
+
+Function Definitions
+____________________
+
+.. code-block:: c
+
+ /*
+ * (1) The return type goes on its own line
+ * (2) The opening brace goes by itself on a line
+ * (3) Use "const" with pointer arguments whenever appropriate, to allow the
+ * function to be used by more callers.
+ */
+ int
+ my_func1(const char *s)
+ {
+ return 0;
+ }
+
+ /* Functions with no arguments must explicitly list them as void,
+ * for compatibility with strict compilers
+ */
+ int
+ my_func2(void)
+ {
+ return 0;
+ }
+
+ /*
+ * (1) For functions with enough arguments that they must break to the next
+ * line, align arguments with the first argument.
+ * (2) When a function argument is a function itself, use the pointer form.
+ * (3) Declare functions and file-global variables as ``static`` whenever
+ * appropriate. This gains a slight efficiency in shared libraries, and
+ * helps the reader know that it is not used outside the one file.
+ */
+ static int
+ my_func3(int bar, const char *a, const char *b, const char *c,
+ void (*callback)())
+ {
+ return 0;
+ }
+
+
+Return Values
+_____________
+
+Functions that need to indicate success or failure should follow one of the
+following guidelines. More details, including functions for using them in user
+messages and converting from one to another, can be found in
+``include/crm/common/results.h``.
+
+* A **standard Pacemaker return code** is one of the ``pcmk_rc_*`` enum values
+ or a system errno code, as an ``int``.
+
+* ``crm_exit_t`` (the ``CRM_EX_*`` enum values) is a system-independent code
+ suitable for the exit status of a process, or for interchange between nodes.
+
+* Other special-purpose status codes exist, such as ``enum ocf_exitcode`` for
+ the possible exit statuses of OCF resource agents (along with some
+ Pacemaker-specific extensions). It is usually obvious when the context calls
+ for such.
+
+* Some older Pacemaker APIs use the now-deprecated "legacy" return values of
+ ``pcmk_ok`` or the positive or negative value of one of the ``pcmk_err_*``
+ constants or system errno codes.
+
+* Functions registered with external libraries (as callbacks for example)
+ should use the appropriate signature defined by those libraries, rather than
+ follow Pacemaker guidelines.
+
+Of course, functions may have return values that aren't success/failure
+indicators, such as a pointer, integer count, or bool.
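+
+A sketch of a function returning a standard Pacemaker return code, using a
+system errno value for failure (``pcmk_rc_ok`` is 0; the function name and
+logic here are hypothetical):
+
+.. code-block:: c
+
+    #include <errno.h>
+
+    // Returns a standard Pacemaker return code: 0 (pcmk_rc_ok) on success,
+    // or ENOENT if the wanted value is not found
+    static int
+    find_index(const int *values, int n_values, int wanted, int *result)
+    {
+        for (int i = 0; i < n_values; i++) {
+            if (values[i] == wanted) {
+                *result = i;
+                return 0;   // pcmk_rc_ok
+            }
+        }
+        return ENOENT;
+    }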
+
+
+Public API Functions
+____________________
+
+Unless we are doing a (rare) release where we break public API compatibility,
+new public API functions can be added, but existing function signatures (return
+type, name, and argument types) should not be changed. To work around this, an
+existing function can become a wrapper for a new function.
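+
+For example (hypothetical functions):
+
+.. code-block:: c
+
+    // New function with an additional parameter
+    static int
+    add_scaled(int a, int b, int scale)
+    {
+        return (a + b) * scale;
+    }
+
+    // The existing public function keeps its signature by wrapping the new one
+    static int
+    add(int a, int b)
+    {
+        return add_scaled(a, b, 1);
+    }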
+
+
+.. index::
+ pair: C; logging
+ pair: C; output
+
+Logging and Output
+##################
+
+Logging Vs. Output
+__________________
+
+Log messages and output messages are logically similar but distinct.
+Oversimplifying a bit, daemons log, and tools output.
+
+Log messages are intended to help with troubleshooting and debugging.
+They may have a high level of technical detail, and are usually filtered by
+severity -- for example, the system log by default gets messages of notice
+level and higher.
+
+Output is intended to let the user know what a tool is doing, and is generally
+terser and less technical, and may even be parsed by scripts. Output might have
+"verbose" and "quiet" modes, but it is not filtered by severity.
+
+Common Guidelines for All Messages
+__________________________________
+
+* When format strings are used for derived data types whose implementation may
+ vary across platforms (``pid_t``, ``time_t``, etc.), the safest approach is
+ to use ``%lld`` in the format string, and cast the value to ``long long``.
+
+* Do not rely on ``%s`` handling ``NULL`` values properly. While the standard
+  library functions might, not all functions using printf-style formatting do,
+  and it's safest to get in the habit of always ensuring format values are
+  non-NULL. If a value can be NULL, the ``pcmk__s()`` function is a convenient
+  way to say "this string if not NULL, otherwise this default".
+
+* The convenience macros ``pcmk__plural_s()`` and ``pcmk__plural_alt()`` are
+ handy when logging a word that may be singular or plural.
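+
+The first two points can be sketched as follows (``str_or_default()`` is a
+simplified stand-in for ``pcmk__s()``, and ``describe_process()`` is
+hypothetical):
+
+.. code-block:: c
+
+    #include <stdio.h>
+    #include <sys/types.h>
+
+    // Substitute a default when a value formatted with %s might be NULL
+    static const char *
+    str_or_default(const char *s, const char *default_str)
+    {
+        return (s != NULL)? s : default_str;
+    }
+
+    // pid_t width varies across platforms, so cast to long long for %lld
+    static int
+    describe_process(char *buf, size_t len, pid_t pid, const char *name)
+    {
+        return snprintf(buf, len, "Process %s has ID %lld",
+                        str_or_default(name, "(unnamed)"), (long long) pid);
+    }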
+
+Logging
+_______
+
+Pacemaker uses libqb for logging, but wraps it with a higher level of
+functionality (see ``include/crm/common/logging*h``).
+
+A few macros, such as ``crm_err()`` and ``crm_warn()``, do most of the heavy
+lifting.
+
+By default, Pacemaker sends logs at notice level and higher to the system log,
+and logs at info level and higher to the detail log (typically
+``/var/log/pacemaker/pacemaker.log``). The intent is that most users will only
+ever need the system log, but for deeper troubleshooting and developer
+debugging, the detail log may be helpful, at the cost of being more technical
+and difficult to follow.
+
+The same message can have more detail in the detail log than in the system log,
+using libqb's "extended logging" feature:
+
+.. code-block:: c
+
+ /* The following will log a simple message in the system log, like:
+
+ warning: Action failed: Node not found
+
+ with extra detail in the detail log, like:
+
+ warning: Action failed: Node not found | rc=-1005 id=hgjjg-51006
+ */
+ crm_warn("Action failed: %s " CRM_XS " rc=%d id=%s",
+ pcmk_rc_str(rc), rc, id);
+
+
+Output
+______
+
+Pacemaker has a somewhat complicated system for tool output. The main benefit
+is that the user can select the output format with the ``--output-as`` option
+(usually "text" for human-friendly output or "xml" for reliably script-parsable
+output, though ``crm_mon`` additionally supports "console" and "html").
+
+A custom message can be defined with a unique string identifier, plus
+implementation functions for each supported format. The caller invokes the
+message using the identifier. The user selects the output format via
+``--output-as``, and the output code automatically calls the appropriate
+implementation function.
+
+The interface (most importantly ``pcmk__output_t``) is declared in
+``include/crm/common/output*h``. See the API comments and existing tools for
+examples.
+
+
+.. index::
+ single: Makefile.am
+
+Makefiles
+#########
+
+Pacemaker uses
+`automake <https://www.gnu.org/software/automake/manual/automake.html>`_
+for building, so the Makefile.am in each directory should be edited rather than
+Makefile.in or Makefile, which are automatically generated.
+
+* Public API headers are installed (by adding them to a ``HEADERS`` variable in
+ ``Makefile.am``), but internal API headers are not (by adding them to
+ ``noinst_HEADERS``).
+
+
+.. index::
+ pair: C; vim settings
+
+vim Settings
+############
+
+Developers who use ``vim`` to edit source code can add the following settings
+to their ``~/.vimrc`` file to follow Pacemaker C coding guidelines:
+
+.. code-block:: none
+
+ " follow Pacemaker coding guidelines when editing C source code files
+ filetype plugin indent on
+ au FileType c setlocal expandtab tabstop=4 softtabstop=4 shiftwidth=4 textwidth=80
+ autocmd BufNewFile,BufRead *.h set filetype=c
+ let c_space_errors = 1
diff --git a/doc/sphinx/Pacemaker_Development/components.rst b/doc/sphinx/Pacemaker_Development/components.rst
new file mode 100644
index 0000000..e14df26
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Development/components.rst
@@ -0,0 +1,489 @@
+Coding Particular Pacemaker Components
+--------------------------------------
+
+The Pacemaker code can be intricate and difficult to follow. This chapter has
+some high-level descriptions of how individual components work.
+
+
+.. index::
+ single: controller
+ single: pacemaker-controld
+
+Controller
+##########
+
+``pacemaker-controld`` is the Pacemaker daemon that utilizes the other daemons
+to orchestrate actions that need to be taken in the cluster. It receives CIB
+change notifications from the CIB manager, passes the new CIB to the scheduler
+to determine whether anything needs to be done, uses the executor and fencer to
+execute any actions required, and sets failure counts (among other things) via
+the attribute manager.
+
+As might be expected, it has the most code of any of the daemons.
+
+.. index::
+ single: join
+
+Join sequence
+_____________
+
+Most daemons track their cluster peers using Corosync's membership and CPG
+only. The controller additionally requires peers to `join`, which ensures they
+are ready to be assigned tasks. Joining proceeds through a series of phases
+referred to as the `join sequence` or `join process`.
+
+A node's current join phase is tracked by the ``join`` member of ``crm_node_t``
+(used in the peer cache). It is an ``enum crm_join_phase`` that (ideally)
+progresses from the DC's point of view as follows:
+
+* The node initially starts at ``crm_join_none``
+
+* The DC sends the node a `join offer` (``CRM_OP_JOIN_OFFER``), and the node
+ proceeds to ``crm_join_welcomed``. This can happen in three ways:
+
+ * The joining node will send a `join announce` (``CRM_OP_JOIN_ANNOUNCE``) at
+ its controller startup, and the DC will reply to that with a join offer.
+ * When the DC's peer status callback notices that the node has joined the
+ messaging layer, it registers ``I_NODE_JOIN`` (which leads to
+ ``A_DC_JOIN_OFFER_ONE`` -> ``do_dc_join_offer_one()`` ->
+ ``join_make_offer()``).
+ * After certain events (notably a new DC being elected), the DC will send all
+ nodes join offers (via ``A_DC_JOIN_OFFER_ALL`` -> ``do_dc_join_offer_all()``).
+
+ These can overlap. The DC can send a join offer and the node can send a join
+ announce at nearly the same time, so the node responds to the original join
+ offer while the DC responds to the join announce with a new join offer. The
+ situation resolves itself after looping a bit.
+
+* The node responds to join offers with a `join request`
+ (``CRM_OP_JOIN_REQUEST``, via ``do_cl_join_offer_respond()`` and
+ ``join_query_callback()``). When the DC receives the request, the
+ node proceeds to ``crm_join_integrated`` (via ``do_dc_join_filter_offer()``).
+
+* As each node is integrated, the current best CIB is sync'ed to each
+ integrated node via ``do_dc_join_finalize()``. As each integrated node's CIB
+ sync succeeds, the DC acks the node's join request (``CRM_OP_JOIN_ACKNAK``)
+ and the node proceeds to ``crm_join_finalized`` (via
+ ``finalize_sync_callback()`` + ``finalize_join_for()``).
+
+* Each node confirms the finalization ack (``CRM_OP_JOIN_CONFIRM`` via
+ ``do_cl_join_finalize_respond()``), including its current resource operation
+ history (via ``controld_query_executor_state()``). Once the DC receives this
+ confirmation, the node proceeds to ``crm_join_confirmed`` via
+ ``do_dc_join_ack()``.
+
+Once all nodes are confirmed, the DC calls ``do_dc_join_final()``, which checks
+for quorum and responds appropriately.
+
+When peers are lost, their join phase is reset to none (in various places).
+
+``crm_update_peer_join()`` updates a node's join phase.
+
+The DC increments the global ``current_join_id`` for each joining round, and
+rejects any (older) replies that don't match.
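
The join-round bookkeeping above can be modeled with a short sketch. This is a
simplified stand-in, not the controller's actual code: the real phase type is
``enum crm_join_phase``, and phase updates go through ``crm_update_peer_join()``.

```c
#include <assert.h>

/* Simplified model of the join phases described above; the names follow
 * the prose, not Pacemaker's actual headers. */
enum join_phase {
    JOIN_NONE = 0,      /* not yet part of a join round */
    JOIN_WELCOMED,      /* received a join offer from the DC */
    JOIN_INTEGRATED,    /* DC accepted our join request */
    JOIN_FINALIZED,     /* CIB synced; join request acked */
    JOIN_CONFIRMED      /* DC processed our join confirmation */
};

struct peer {
    enum join_phase phase;
    int join_id;        /* join round this peer last replied to */
};

/* Mirror of the DC-side rules: a reply only counts if it matches the
 * current join round, and the phase may only advance one step at a time. */
int accept_reply(struct peer *p, int current_join_id,
                 int reply_join_id, enum join_phase next)
{
    if (reply_join_id != current_join_id) {
        return 0;       /* stale reply from an older round */
    }
    if (next != p->phase + 1) {
        return 0;       /* out-of-order progression */
    }
    p->phase = next;
    p->join_id = reply_join_id;
    return 1;
}
```

Both stale replies (an older join ID) and out-of-order phase transitions are
rejected, mirroring how the DC discards replies that don't match
``current_join_id``.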
+
+
+.. index::
+ single: fencer
+ single: pacemaker-fenced
+
+Fencer
+######
+
+``pacemaker-fenced`` is the Pacemaker daemon that handles fencing requests. In
+the broadest terms, fencing works like this:
+
+#. The initiator (an external program such as ``stonith_admin``, or the cluster
+ itself via the controller) asks the local fencer, "Hey, could you please
+ fence this node?"
+#. The local fencer asks all the fencers in the cluster (including itself),
+ "Hey, what fencing devices do you have access to that can fence this node?"
+#. Each fencer in the cluster replies with a list of available devices that
+ it knows about.
+#. Once the original fencer gets all the replies, it asks the most
+ appropriate fencer peer to actually carry out the fencing. It may send
+ out more than one such request if the target node must be fenced with
+ multiple devices.
+#. The chosen fencer(s) call the appropriate fencing resource agent(s) to
+ do the fencing, then reply to the original fencer with the result.
+#. The original fencer broadcasts the result to all fencers.
+#. Each fencer sends the result to each of its local clients (including, at
+ some point, the initiator).
+
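The query-and-select steps (2 through 4) can be sketched as a toy model. The
types below are illustrative, not the fencer's actual data structures; the
selection rule mirrors the fencer's preference for peers with "verified" access
to a device.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_DEVICES 4

/* One peer's reply to the device query: which devices it can use to
 * fence the target, and whether it has "verified" (monitored) access. */
struct peer_reply {
    const char *peer;
    const char *devices[MAX_DEVICES];   /* NULL-terminated list */
    int verified;
};

/* Return the index of the chosen peer, or -1 if nobody can fence.
 * A peer with verified access wins; otherwise the first capable peer. */
int choose_peer(const struct peer_reply *replies, int n_replies)
{
    int fallback = -1;

    for (int i = 0; i < n_replies; i++) {
        if (replies[i].devices[0] == NULL) {
            continue;                   /* no capable devices reported */
        }
        if (replies[i].verified) {
            return i;
        }
        if (fallback < 0) {
            fallback = i;
        }
    }
    return fallback;
}
```

The real fencer additionally handles fencing topologies, where several devices
(and thus several requests) may be required to fence one target.
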
+A more detailed description follows.
+
+.. index::
+ single: libstonithd
+
+Initiating a fencing request
+____________________________
+
+A fencing request can be initiated by the cluster or externally, using the
+libstonithd API.
+
+* The cluster always initiates fencing via
+ ``daemons/controld/controld_fencing.c:te_fence_node()`` (which calls the
+ ``fence()`` API method). This occurs when a transition graph synapse contains
+ a ``CRM_OP_FENCE`` XML operation.
+* The main external clients are ``stonith_admin`` and ``cts-fence-helper``.
+ The ``DLM`` project also uses Pacemaker for fencing.
+
+Highlights of the fencing API:
+
+* ``stonith_api_new()`` creates and returns a new ``stonith_t`` object, whose
+ ``cmds`` member has methods for connect, disconnect, fence, etc.
+* The ``fence()`` method creates and sends a ``STONITH_OP_FENCE`` XML request
+  with the desired action and target node. Callers do not have to choose, or
+  even have any knowledge about, particular fencing devices.
+
+Fencing queries
+_______________
+
+The function calls for a fencing request go something like this:
+
+The local fencer receives the client's request via an IPC or messaging
+layer callback, which calls
+
+* ``stonith_command()``, which (for requests) calls
+
+ * ``handle_request()``, which (for ``STONITH_OP_FENCE`` from a client) calls
+
+ * ``initiate_remote_stonith_op()``, which creates a ``STONITH_OP_QUERY`` XML
+ request with the target, desired action, timeout, etc. then broadcasts
+ the operation to the cluster group (i.e. all fencer instances) and
+ starts a timer. The query is broadcast because (1) location constraints
+ might prevent the local node from accessing the stonith device directly,
+ and (2) even if the local node does have direct access, another node
+ might be preferred to carry out the fencing.
+
+Each fencer receives the original fencer's ``STONITH_OP_QUERY`` broadcast
+request via IPC or messaging layer callback, which calls:
+
+* ``stonith_command()``, which (for requests) calls
+
+ * ``handle_request()``, which (for ``STONITH_OP_QUERY`` from a peer) calls
+
+ * ``stonith_query()``, which calls
+
+ * ``get_capable_devices()`` with ``stonith_query_capable_device_cb()`` to add
+ device information to an XML reply and send it. (A message is
+ considered a reply if it contains ``T_STONITH_REPLY``, which is only
+ set by fencer peers, not clients.)
+
+The original fencer receives all peers' ``STONITH_OP_QUERY`` replies via IPC
+or messaging layer callback, which calls:
+
+* ``stonith_command()``, which (for replies) calls
+
+ * ``handle_reply()`` which (for ``STONITH_OP_QUERY``) calls
+
+ * ``process_remote_stonith_query()``, which allocates a new query result
+ structure, parses device information into it, and adds it to the
+ operation object. It increments the number of replies received for this
+ operation, and compares it against the expected number of replies (i.e.
+ the number of active peers), and if this is the last expected reply,
+ calls
+
+ * ``request_peer_fencing()``, which calculates the timeout and sends
+ ``STONITH_OP_FENCE`` request(s) to carry out the fencing. If the target
+ node has a fencing "topology" (which allows specifications such as
+ "this node can be fenced either with device A, or devices B and C in
+ combination"), it will choose the device(s), and send out as many
+      requests as needed. When it chooses a device, it also chooses the peer
+      that will run it; a peer is preferred if it has "verified" access to the
+      desired device, meaning that it has the device "running" on it and thus
+      has a monitor operation ensuring reachability.
+
+Fencing operations
+__________________
+
+Each ``STONITH_OP_FENCE`` request goes something like this:
+
+The chosen peer fencer receives the ``STONITH_OP_FENCE`` request via IPC or
+messaging layer callback, which calls:
+
+* ``stonith_command()``, which (for requests) calls
+
+ * ``handle_request()``, which (for ``STONITH_OP_FENCE`` from a peer) calls
+
+ * ``stonith_fence()``, which calls
+
+ * ``schedule_stonith_command()`` (using supplied device if
+ ``F_STONITH_DEVICE`` was set, otherwise the highest-priority capable
+ device obtained via ``get_capable_devices()`` with
+ ``stonith_fence_get_devices_cb()``), which adds the operation to the
+ device's pending operations list and triggers processing.
+
+The chosen peer fencer's mainloop is triggered and calls
+
+* ``stonith_device_dispatch()``, which calls
+
+ * ``stonith_device_execute()``, which pops off the next item from the device's
+ pending operations list. If acting as the (internally implemented) watchdog
+ agent, it panics the node, otherwise it calls
+
+ * ``stonith_action_create()`` and ``stonith_action_execute_async()`` to
+ call the fencing agent.
+
+The chosen peer fencer's mainloop is triggered again once the fencing agent
+returns, and calls
+
+* ``stonith_action_async_done()`` which adds the results to an action object
+ then calls its
+
+ * done callback (``st_child_done()``), which calls ``schedule_stonith_command()``
+ for a new device if there are further required actions to execute or if the
+ original action failed, then builds and sends an XML reply to the original
+ fencer (via ``send_async_reply()``), then checks whether any
+ pending actions are the same as the one just executed and merges them if so.
+
+Fencing replies
+_______________
+
+The original fencer receives the ``STONITH_OP_FENCE`` reply via IPC or
+messaging layer callback, which calls:
+
+* ``stonith_command()``, which (for replies) calls
+
+ * ``handle_reply()``, which calls
+
+ * ``fenced_process_fencing_reply()``, which calls either
+ ``request_peer_fencing()`` (to retry a failed operation, or try the next
+ device in a topology if appropriate, which issues a new
+ ``STONITH_OP_FENCE`` request, proceeding as before) or
+ ``finalize_op()`` (if the operation is definitively failed or
+ successful).
+
+ * ``finalize_op()`` broadcasts the result to all peers.
+
+Finally, all peers receive the broadcast result and call
+
+* ``finalize_op()``, which sends the result to all local clients.
+
+
+.. index::
+ single: fence history
+
+Fencing History
+_______________
+
+The fencer keeps a running history of all fencing operations. The bulk of the
+relevant code is in ``fenced_history.c`` and ensures the history is synchronized
+across all nodes even if a node leaves and rejoins the cluster.
+
+In libstonithd, this information is represented by ``stonith_history_t`` and is
+queryable by the ``stonith_api_operations_t:history()`` method. ``crm_mon`` and
+``stonith_admin`` use this API to display the history.
+
+
+.. index::
+ single: scheduler
+ single: pacemaker-schedulerd
+ single: libpe_status
+ single: libpe_rules
+ single: libpacemaker
+
+Scheduler
+#########
+
+``pacemaker-schedulerd`` is the Pacemaker daemon that runs the Pacemaker
+scheduler for the controller, but "the scheduler" in general refers to related
+library code in ``libpe_status`` and ``libpe_rules`` (``lib/pengine/*.c``), and
+some of ``libpacemaker`` (``lib/pacemaker/pcmk_sched_*.c``).
+
+The purpose of the scheduler is to take a CIB as input and generate a
+transition graph (list of actions that need to be taken) as output.
+
+The controller invokes the scheduler by contacting the scheduler daemon via
+local IPC. Tools such as ``crm_simulate``, ``crm_mon``, and ``crm_resource``
+can also invoke the scheduler, but do so by calling the library functions
+directly. This allows them to run using a ``CIB_file`` without the cluster
+needing to be active.
+
+The main entry point for the scheduler code is
+``lib/pacemaker/pcmk_sched_allocate.c:pcmk__schedule_actions()``. It sets
+defaults and calls a series of functions for the scheduling. Some key steps:
+
+* ``unpack_cib()`` parses most of the CIB XML into data structures, and
+ determines the current cluster status.
+* ``apply_node_criteria()`` applies factors that make resources prefer certain
+ nodes, such as shutdown locks, location constraints, and stickiness.
+* ``pcmk__create_internal_constraints()`` creates internal constraints, such as
+ the implicit ordering for group members, or start actions being implicitly
+ ordered before promote actions.
+* ``pcmk__handle_rsc_config_changes()`` processes resource history entries in
+ the CIB status section. This is used to decide whether certain
+ actions need to be done, such as deleting orphan resources, forcing a restart
+ when a resource definition changes, etc.
+* ``allocate_resources()`` assigns resources to nodes.
+* ``schedule_resource_actions()`` schedules resource-specific actions (which
+ might or might not end up in the final graph).
+* ``pcmk__apply_orderings()`` processes ordering constraints in order to modify
+ action attributes such as optional or required.
+* ``pcmk__create_graph()`` creates the transition graph.
+
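The sequence above amounts to a fixed pipeline of stages run over the working
set. The sketch below is an illustrative stand-in (trivial stage bodies and
simplified types, not Pacemaker's), showing only the shape of the control flow
in ``pcmk__schedule_actions()``:

```c
#include <assert.h>

/* Simplified stand-in for the scheduler's working set. */
struct working_set {
    int stages_run;     /* how many stages have completed */
    int have_graph;     /* set by the final stage */
};

typedef void (*stage_fn)(struct working_set *);

static void unpack_cib_stage(struct working_set *ws)     { ws->stages_run++; }
static void node_criteria_stage(struct working_set *ws)  { ws->stages_run++; }
static void internal_constraints(struct working_set *ws) { ws->stages_run++; }
static void history_stage(struct working_set *ws)        { ws->stages_run++; }
static void allocate_stage(struct working_set *ws)       { ws->stages_run++; }
static void actions_stage(struct working_set *ws)        { ws->stages_run++; }
static void orderings_stage(struct working_set *ws)      { ws->stages_run++; }
static void graph_stage(struct working_set *ws)
{
    ws->stages_run++;
    ws->have_graph = 1;
}

void schedule(struct working_set *ws)
{
    /* Order matters: e.g. orderings must be applied before the graph
     * is created, since they decide which actions are required. */
    stage_fn stages[] = {
        unpack_cib_stage, node_criteria_stage, internal_constraints,
        history_stage, allocate_stage, actions_stage, orderings_stage,
        graph_stage,
    };

    for (unsigned int i = 0; i < sizeof(stages) / sizeof(stages[0]); i++) {
        stages[i](ws);
    }
}
```
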
+Challenges
+__________
+
+Working with the scheduler is difficult. Challenges include:
+
+* It is far too much code to keep more than a small portion in your head at one
+ time.
+* Small changes can have large (and unexpected) effects. This is why we have a
+ large number of regression tests (``cts/cts-scheduler``), which should be run
+ after making code changes.
+* It produces an insane amount of log messages at debug and trace levels.
+ You can put resource ID(s) in the ``PCMK_trace_tags`` environment variable to
+ enable trace-level messages only when related to specific resources.
+* Different parts of the main ``pe_working_set_t`` structure are finalized at
+ different points in the scheduling process, so you have to keep in mind
+ whether information you're using at one point of the code can possibly change
+ later. For example, data unpacked from the CIB can safely be used anytime
+  after ``unpack_cib()``, but actions may become optional or required anytime
+ before ``pcmk__create_graph()``. There's no easy way to deal with this.
+* Many names of struct members, functions, etc., are suboptimal, but are part
+ of the public API and cannot be changed until an API backward compatibility
+ break.
+
+
+.. index::
+ single: pe_working_set_t
+
+Cluster Working Set
+___________________
+
+The main data object for the scheduler is ``pe_working_set_t``, which contains
+all information needed about nodes, resources, constraints, etc., both as the
+raw CIB XML and parsed into more usable data structures, plus the resulting
+transition graph XML. The variable name is usually ``data_set``.
+
+.. index::
+ single: pe_resource_t
+
+Resources
+_________
+
+``pe_resource_t`` is the data object representing cluster resources. A resource
+has a variant: primitive (a.k.a. native), group, clone, or bundle.
+
+The resource object has members for two sets of methods,
+``resource_object_functions_t`` from the ``libpe_status`` public API, and
+``resource_alloc_functions_t`` whose implementation is internal to
+``libpacemaker``. The actual functions vary by variant.
+
+The object functions have basic capabilities such as unpacking the resource
+XML, and determining the current or planned location of the resource.
+
+The allocation functions have more obscure capabilities needed for scheduling,
+such as processing location and ordering constraints. For example,
+``pcmk__create_internal_constraints()`` simply calls the
+``internal_constraints()`` method for each top-level resource in the cluster.
+
+.. index::
+ single: pe_node_t
+
+Nodes
+_____
+
+Allocation of resources to nodes is done by choosing the node with the highest
+score for a given resource. The scheduler does a bunch of processing to
+generate the scores, then the actual allocation is straightforward.
+
+Node lists are frequently used. For example, ``pe_working_set_t`` has a
+``nodes`` member which is a list of all nodes in the cluster, and
+``pe_resource_t`` has a ``running_on`` member which is a list of all nodes on
+which the resource is (or might be) active. These are lists of ``pe_node_t``
+objects.
+
+The ``pe_node_t`` object contains a ``struct pe_node_shared_s *details`` member
+with all node information that is independent of resource allocation (the node
+name, etc.).
+
+The working set's ``nodes`` member contains the original of this information.
+All other node lists contain copies of ``pe_node_t`` where only the ``details``
+member points to the originals in the working set's ``nodes`` list. In this
+way, the other members of ``pe_node_t`` (such as ``weight``, which is the node
+score) may vary by node list, while the common details are shared.
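
The pattern can be sketched as follows, with simplified stand-ins for
``pe_node_t`` and ``struct pe_node_shared_s`` (the real structs have many more
members):

```c
#include <assert.h>
#include <string.h>

/* Stand-in for struct pe_node_shared_s: allocation-independent data. */
struct node_shared {
    const char *uname;
};

/* Stand-in for pe_node_t: per-list copies carry their own score
 * (weight) while sharing one details struct. */
struct node {
    int weight;                     /* node score; may vary by list */
    struct node_shared *details;    /* shared with the working set copy */
};

/* Make a per-list copy: new weight, same shared details. */
struct node copy_node(const struct node *orig, int weight)
{
    struct node copy = { weight, orig->details };
    return copy;
}

/* Assignment then picks the allowed node with the highest score. */
const struct node *best_node(const struct node *nodes, int n)
{
    const struct node *best = NULL;

    for (int i = 0; i < n; i++) {
        if ((best == NULL) || (nodes[i].weight > best->weight)) {
            best = &nodes[i];
        }
    }
    return best;
}
```

Because only ``details`` is shared, two lists can score the same node
differently without duplicating the node's identity.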
+
+.. index::
+ single: pe_action_t
+ single: pe_action_flags
+
+Actions
+_______
+
+``pe_action_t`` is the data object representing actions that might need to be
+taken. These could be resource actions, cluster-wide actions such as fencing a
+node, or "pseudo-actions" which are abstractions used as convenient points for
+ordering other actions against.
+
+It has a ``flags`` member which is a bitmask of ``enum pe_action_flags``. The
+most important of these are ``pe_action_runnable`` (if not set, the action is
+"blocked" and cannot be added to the transition graph) and
+``pe_action_optional`` (actions with this set will not be added to the
+transition graph; actions often start out as optional, and may become required
+later).
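
A minimal sketch of how those two flags decide graph membership (names and
values here are illustrative, not Pacemaker's actual ``enum pe_action_flags``):

```c
#include <assert.h>

enum action_flags {
    action_runnable = (1 << 0),     /* not "blocked" */
    action_optional = (1 << 1)      /* not needed in this transition */
};

/* An action is added to the transition graph only if it is required
 * (not optional); a required but non-runnable action is "blocked". */
int in_graph(unsigned int flags)
{
    return (flags & action_runnable) && !(flags & action_optional);
}
```
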
+
+
+.. index::
+   single: pcmk__colocation_t
+
+Colocations
+___________
+
+``pcmk__colocation_t`` is the data object representing colocations.
+
+Colocation constraints come into play in these parts of the scheduler code:
+
+* When sorting resources for assignment, so resources with highest node score
+ are assigned first (see ``cmp_resources()``)
+* When updating node scores for resource assignment or promotion priority
+* When assigning resources, so any resources to be colocated with can be
+ assigned first, and so colocations affect where the resource is assigned
+* When choosing roles for promotable clone instances, so colocations involving
+ a specific role can affect which instances are promoted
+
+The resource allocation functions have several methods related to colocations:
+
+* ``apply_coloc_score()``: This applies a colocation's score to either the
+ dependent's allowed node scores (if called while resources are being
+ assigned) or the dependent's priority (if called while choosing promotable
+ instance roles). It can behave differently depending on whether it is being
+ called as the primary's method or as the dependent's method.
+* ``add_colocated_node_scores()``: This updates a table of nodes for a given
+ colocation attribute and score. It goes through colocations involving a given
+ resource, and updates the scores of the nodes in the table with the best
+ scores of nodes that match up according to the colocation criteria.
+* ``colocated_resources()``: This generates a list of all resources involved
+ in mandatory colocations (directly or indirectly via colocation chains) with
+ a given resource.
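
The score arithmetic underlying these methods can be sketched as follows. This
is a simplified model, not the real ``apply_coloc_score()``; Pacemaker
represents ``INFINITY`` as 1000000, and when ``INFINITY`` and ``-INFINITY``
meet, ``-INFINITY`` wins:

```c
#include <assert.h>

#define SCORE_INFINITY 1000000

/* Saturating score addition: -INFINITY beats INFINITY, and finite sums
 * are clamped to the +/-INFINITY range. */
int score_add(int a, int b)
{
    if ((a <= -SCORE_INFINITY) || (b <= -SCORE_INFINITY)) {
        return -SCORE_INFINITY;
    }
    if ((a >= SCORE_INFINITY) || (b >= SCORE_INFINITY)) {
        return SCORE_INFINITY;
    }

    long sum = (long) a + b;

    if (sum > SCORE_INFINITY) {
        return SCORE_INFINITY;
    }
    if (sum < -SCORE_INFINITY) {
        return -SCORE_INFINITY;
    }
    return (int) sum;
}

/* Apply one colocation: bump the dependent's score on the node where
 * the primary is (or would be) placed. */
void apply_colocation(int *node_scores, int n_nodes,
                      int primary_node, int colocation_score)
{
    if ((primary_node >= 0) && (primary_node < n_nodes)) {
        node_scores[primary_node] =
            score_add(node_scores[primary_node], colocation_score);
    }
}
```
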
+
+
+.. index::
+ single: pe__ordering_t
+ single: pe_ordering
+
+Orderings
+_________
+
+Ordering constraints are simple in concept, but they are one of the most
+important, powerful, and difficult to follow aspects of the scheduler code.
+
+``pe__ordering_t`` is the data object representing an ordering, better thought
+of as a relationship between two actions, since the relation can be more
+complex than just "this one runs after that one".
+
+For an ordering "A then B", the code generally refers to A as "first" or
+"before", and B as "then" or "after".
+
+Much of the power comes from ``enum pe_ordering``, which are flags that
+determine how an ordering behaves. There are many obscure flags with big
+effects. A few examples:
+
+* ``pe_order_none`` means the ordering is disabled and will be ignored. It's 0,
+ meaning no flags set, so it must be compared with equality rather than
+ ``pcmk_is_set()``.
+* ``pe_order_optional`` means the ordering does not make either action
+ required, so it only applies if they both become required for other reasons.
+* ``pe_order_implies_first`` means that if action B becomes required for any
+ reason, then action A will become required as well.
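
The ``pe_order_none`` caveat can be demonstrated concretely. Here ``is_set()``
stands in for ``pcmk_is_set()``, and the flag values are illustrative:

```c
#include <assert.h>

/* Test whether all bits in "bits" are set in "flags", as pcmk_is_set()
 * does. With bits == 0 this is vacuously true, which is why a disabled
 * ordering must be detected with an equality test instead. */
#define is_set(flags, bits) (((flags) & (bits)) == (bits))

enum order_flags {
    order_none          = 0,        /* ordering disabled */
    order_optional      = (1 << 0),
    order_implies_first = (1 << 1)
};

int ordering_disabled(unsigned int flags)
{
    return flags == order_none;     /* equality, not is_set() */
}
```
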
diff --git a/doc/sphinx/Pacemaker_Development/evolution.rst b/doc/sphinx/Pacemaker_Development/evolution.rst
new file mode 100644
index 0000000..31349c3
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Development/evolution.rst
@@ -0,0 +1,90 @@
+Evolution of the project
+------------------------
+
+This section will not generally be of interest, but may occasionally
+shed light on why the current code is structured the way it is when
+investigating some thorny issue.
+
+Origin in Heartbeat project
+###########################
+
+Pacemaker can be considered a spin-off from Heartbeat, the original
+comprehensive high-availability suite started by Alan Robertson. Some portions
+of code are still shared, at least at the conceptual level if not verbatim,
+though the effective percentage continues to decline.
+
+Before Pacemaker 2.0, Pacemaker supported Heartbeat as a cluster layer
+alternative to Corosync. That support was dropped for the 2.0.0 release (see
+`commit 55ab749bf
+<https://github.com/ClusterLabs/pacemaker/commit/55ab749bf0f0143bd1cd050c1bbe302aecb3898e>`_).
+
+An archive of a 2016 checkout of the Heartbeat code base is shared as a
+`read-only repository <https://gitlab.com/poki/archived-heartbeat>`_. Notable
+commits include:
+
+* `creation of Heartbeat's "new cluster resource manager," which evolved into
+ Pacemaker
+ <https://gitlab.com/poki/archived-heartbeat/commit/bb48551be418291c46980511aa31c7c2df3a85e4>`_
+
+* `deletion of the new CRM from Heartbeat after Pacemaker had been split off
+ <https://gitlab.com/poki/archived-heartbeat/commit/74573ac6182785820d765ec76c5d70086381931a>`_
+
+Pacemaker's split from Heartbeat evolved stepwise (as opposed to a one-off
+cut); the last step of full dependency is depicted in fig. 10 of
+`The Corosync Cluster Engine
+<https://www.kernel.org/doc/ols/2008/ols2008v1-pages-85-100.pdf#page=14>`_.
+That paper also provides a good reference for the wider historical context of
+the components that intersected (tangentially, or in some cases more deeply)
+around that time.
+
+
+Influence of Heartbeat on Pacemaker
+___________________________________
+
+On a closer look, we can identify these things in common:
+
+* extensive use of data types and functions of
+ `GLib <https://wiki.gnome.org/Projects/GLib>`_
+
+* Cluster Testing System (CTS), inherited from initial implementation
+ by Alan Robertson
+
+* ...
+
+
+Notable Restructuring Steps in the Codebase
+###########################################
+
+File renames may not appear notable, until one runs into complicated
+``git blame`` and ``git log`` scenarios, so some of the more sweeping renames
+are listed here as well.
+
+* watchdog/'sbd' functionality spin-off:
+
+ * `start separating, eb7cce2a1
+ <https://github.com/ClusterLabs/pacemaker/commit/eb7cce2a172a026336f4ba6c441dedce42f41092>`_
+ * `finish separating, 5884db780
+ <https://github.com/ClusterLabs/pacemaker/commit/5884db78080941cdc4e77499bc76677676729484>`_
+
+* daemons' rename for 2.0 (in chronological order)
+
+ * `start of moving daemon sources from their top-level directories under new
+ /daemons hierarchy, 318a2e003
+ <https://github.com/ClusterLabs/pacemaker/commit/318a2e003d2369caf10a450fe7a7616eb7ffb264>`_
+ * `attrd -> pacemaker-attrd, 01563cf26
+ <https://github.com/ClusterLabs/pacemaker/commit/01563cf2637040e9d725b777f0c42efa8ab075c7>`_
+ * `lrmd -> pacemaker-execd, 36a00e237
+ <https://github.com/ClusterLabs/pacemaker/commit/36a00e2376fd50d52c2ccc49483e235a974b161c>`_
+ * `pacemaker_remoted -> pacemaker-remoted, e4f4a0d64
+ <https://github.com/ClusterLabs/pacemaker/commit/e4f4a0d64c8b6bbc4961810f2a41383f52eaa116>`_
+ * `crmd -> pacemaker-controld, db5536e40
+ <https://github.com/ClusterLabs/pacemaker/commit/db5536e40c77cdfdf1011b837f18e4ad9df45442>`_
+ * `pengine -> pacemaker-schedulerd, e2fdc2bac
+ <https://github.com/ClusterLabs/pacemaker/commit/e2fdc2baccc3ae07652aac622a83f317597608cd>`_
+ * `stonithd -> pacemaker-fenced, 038c465e2
+ <https://github.com/ClusterLabs/pacemaker/commit/038c465e2380c5349fb30ea96c8a7eb6184452e0>`_
+ * `cib daemon -> pacemaker-based, 50584c234
+ <https://github.com/ClusterLabs/pacemaker/commit/50584c234e48cd8b99d355ca9349b0dfb9503987>`_
+
+.. TBD:
+ - standalone tengine -> part of crmd/pacemaker-controld
diff --git a/doc/sphinx/Pacemaker_Development/faq.rst b/doc/sphinx/Pacemaker_Development/faq.rst
new file mode 100644
index 0000000..e738b7d
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Development/faq.rst
@@ -0,0 +1,171 @@
+Frequently Asked Questions
+--------------------------
+
+:Q: Who is this document intended for?
+
+:A: Anyone who wishes to read and/or edit the Pacemaker source code.
+ Casual contributors should feel free to read just this FAQ, and
+ consult other chapters as needed.
+
+----
+
+.. index::
+ single: download
+ single: source code
+ single: git
+ single: git; GitHub
+
+:Q: Where is the source code for Pacemaker?
+:A: The `source code for Pacemaker <https://github.com/ClusterLabs/pacemaker>`_ is
+ kept on `GitHub <https://github.com/>`_, as are all software projects under the
+ `ClusterLabs <https://github.com/ClusterLabs>`_ umbrella. Pacemaker uses
+ `Git <https://git-scm.com/>`_ for source code management. If you are a Git newbie,
+ the `gittutorial(7) man page <http://schacon.github.io/git/gittutorial.html>`_
+ is an excellent starting point. If you're familiar with using Git from the
+ command line, you can create a local copy of the Pacemaker source code with:
+ **git clone https://github.com/ClusterLabs/pacemaker.git**
+
+----
+
+.. index::
+ single: git; branch
+
+:Q: What are the different Git branches and repositories used for?
+:A: * The `main branch <https://github.com/ClusterLabs/pacemaker/tree/main>`_
+ is the primary branch used for development.
+ * The `2.1 branch <https://github.com/ClusterLabs/pacemaker/tree/2.1>`_ is
+ the current release branch. Normally, it does not receive any changes, but
+ during the release cycle for a new release, it will contain release
+ candidates. During the release cycle, certain bug fixes will go to the
+ 2.1 branch first (and be pulled into main later).
+ * The `2.0 branch <https://github.com/ClusterLabs/pacemaker/tree/2.0>`_,
+ `1.1 branch <https://github.com/ClusterLabs/pacemaker/tree/1.1>`_,
+ and separate
+ `1.0 repository <https://github.com/ClusterLabs/pacemaker-1.0>`_
+ are frozen snapshots of earlier release series, no longer being developed.
+ * Messages will be posted to the
+ `developers@ClusterLabs.org <https://lists.ClusterLabs.org/mailman/listinfo/developers>`_
+ mailing list during the release cycle, with instructions about which
+ branches to use when submitting requests.
+
+----
+
+:Q: How do I build from the source code?
+:A: See `INSTALL.md <https://github.com/ClusterLabs/pacemaker/blob/main/INSTALL.md>`_
+ in the main checkout directory.
+
+----
+
+:Q: What coding style should I follow?
+:A: You'll be mostly fine if you simply follow the example of existing code.
+ When unsure, see the relevant chapter of this document for language-specific
+ recommendations. Pacemaker has grown and evolved organically over many years,
+ so you will see much code that doesn't conform to the current guidelines. We
+ discourage making changes solely to bring code into conformance, as any change
+ requires developer time for review and opens the possibility of adding bugs.
+ However, new code should follow the guidelines, and it is fine to bring lines
+ of older code into conformance when modifying that code for other reasons.
+
+----
+
+.. index::
+ single: git; commit message
+
+:Q: How should I format my Git commit messages?
+:A: An example is "Feature: scheduler: wobble the frizzle better".
+
+ * The first part is the type of change, used to automatically generate the
+ change log for the next release. Commit messages with the following will
+ be included in the change log:
+
+ * **Feature** for new features
+ * **Fix** for bug fixes (**Bug** or **High** also work)
+ * **API** for changes to the public API
+
+    Everything else will *not* automatically be in the change log, and so
+    doesn't really matter, but commonly used types include:
+
+ * **Log** for changes to log messages or handling
+ * **Doc** for changes to documentation or comments
+ * **Test** for changes in CTS and regression tests
+ * **Low**, **Med**, or **Mid** for bug fixes not significant enough for a
+ change log entry
+ * **Refactor** for refactoring-only code changes
+ * **Build** for build process changes
+
+ * The next part is the name of the component(s) being changed, for example,
+ **controller** or **libcrmcommon** (it's more free-form, so don't sweat
+ getting it exact).
+
+  * The rest briefly describes the change. The Git project recommends keeping
+    the entire summary line under 50 characters, but more is fine if needed
+    for clarity.
+
+ * Except for the most simple and obvious of changes, the summary should be
+ followed by a blank line and a longer explanation of *why* the change was
+ made.
+
+ * If the commit is associated with a task in the `ClusterLabs project
+ manager <https://projects.clusterlabs.org/>`_, you can say
+ "Fixes T\ *n*" in the commit message to automatically close task
+ T\ *n* when the pull request is merged.
+
+----
+
+:Q: How can I test my changes?
+:A: The source repository has some unit tests for simple functions, though this
+ is a recent effort without much coverage yet. Pacemaker's Cluster Test
+ Suite (CTS) has regression tests for most major components; these will
+ automatically be run for any pull requests submitted through GitHub, and
+ are sufficient for most changes. Additionally, CTS has a lab component that
+ can be used to set up a test cluster and run a wide variety of complex
+ tests, for testing major changes. See cts/README.md in the source
+ repository for details.
+
+----
+
+.. index:: license
+
+:Q: What is Pacemaker's license?
+:A: Except where noted otherwise in the file itself, the source code for all
+ Pacemaker programs is licensed under version 2 or later of the GNU General
+ Public License (`GPLv2+ <https://www.gnu.org/licenses/gpl-2.0.html>`_), its
+ headers, libraries, and native language translations under version 2.1 or
+ later of the less restrictive GNU Lesser General Public License
+ (`LGPLv2.1+ <https://www.gnu.org/licenses/lgpl-2.1.html>`_),
+ its documentation under version 4.0 or later of the
+ Creative Commons Attribution-ShareAlike International Public License
+ (`CC-BY-SA-4.0 <https://creativecommons.org/licenses/by-sa/4.0/legalcode>`_),
+ and its init scripts under the
+ `Revised BSD <https://opensource.org/licenses/BSD-3-Clause>`_ license. If you find
+ any deviations from this policy, or wish to inquire about alternate licensing
+ arrangements, please e-mail the
+ `developers@ClusterLabs.org <https://lists.ClusterLabs.org/mailman/listinfo/developers>`_
+ mailing list. Licensing issues are also discussed on the
+ `ClusterLabs wiki <https://wiki.ClusterLabs.org/wiki/License>`_.
+
+----
+
+:Q: How can I contribute my changes to the project?
+:A: Contributions of bug fixes or new features are very much appreciated!
+ Patches can be submitted as
+ `pull requests <https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-requests>`_
+ via GitHub (the preferred method, due to its excellent
+ `features <https://github.com/features/>`_), or e-mailed to the
+ `developers@ClusterLabs.org <https://lists.ClusterLabs.org/mailman/listinfo/developers>`_
+ mailing list as an attachment in a format Git can import. Authors may only
+ submit changes that they have the right to submit under the open source
+ license indicated in the affected files.
+
+----
+
+.. index:: mailing list
+
+:Q: What if I still have questions?
+:A: Ask on the
+ `developers@ClusterLabs.org <https://lists.ClusterLabs.org/mailman/listinfo/developers>`_
+ mailing list for development-related questions, or on the
+ `users@ClusterLabs.org <https://lists.ClusterLabs.org/mailman/listinfo/users>`_
+ mailing list for general questions about using Pacemaker.
+ Developers often also hang out on the
+    `ClusterLabs IRC channel <https://wiki.clusterlabs.org/wiki/ClusterLabs_IRC_channel>`_.
diff --git a/doc/sphinx/Pacemaker_Development/general.rst b/doc/sphinx/Pacemaker_Development/general.rst
new file mode 100644
index 0000000..9d9dcec
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Development/general.rst
@@ -0,0 +1,40 @@
+.. index::
+ single: guidelines; all languages
+
+General Guidelines for All Languages
+------------------------------------
+
+.. index:: copyright
+
+Copyright
+#########
+
+When copyright notices are added to a file, they should look like this:
+
+.. note:: **Copyright Notice Format**
+
+ | Copyright *YYYY[-YYYY]* the Pacemaker project contributors
+ |
+ | The version control history for this file may have further details.
+
+The first *YYYY* is the year the file was *originally* published. The original
+date is important for two reasons: when two entities claim copyright ownership
+of the same work, the earlier claim generally prevails; and copyright
+expiration is generally calculated from the original publication date. [1]_
+
+If the file is modified in later years, add *-YYYY* with the most recent year
+of modification. Even though Pacemaker is an ongoing project, copyright notices
+are about the years of *publication* of specific content.
+
+Copyright notices are intended to indicate, but do not affect, copyright
+*ownership*, which is determined by applicable laws and regulations. Authors
+may put more specific copyright notices in their commit messages if desired.
+
+.. rubric:: Footnotes
+
+.. [1] See the U.S. Copyright Office's `"Compendium of U.S. Copyright Office
+ Practices" <https://www.copyright.gov/comp3/>`_, particularly "Chapter
+ 2200: Notice of Copyright", sections 2205.1(A) and 2205.1(F), or
+ `"Updating Copyright Notices"
+ <https://techwhirl.com/updating-copyright-notices/>`_ for a more
+ readable summary.
diff --git a/doc/sphinx/Pacemaker_Development/helpers.rst b/doc/sphinx/Pacemaker_Development/helpers.rst
new file mode 100644
index 0000000..3fcb48d
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Development/helpers.rst
@@ -0,0 +1,521 @@
+C Development Helpers
+---------------------
+
+.. index::
+ single: unit testing
+
+Refactoring
+###########
+
+Pacemaker uses an optional tool called `coccinelle <https://coccinelle.gitlabpages.inria.fr/website/>`_
+to do automatic refactoring. coccinelle is a very complicated tool that can be
+difficult to understand, and the existing documentation makes it pretty tough
+to get started. Much of the documentation is either aimed at kernel developers
+or takes the form of grammars.
+
+However, it can apply very complex transformations across an entire source tree.
+This is useful for tasks like code refactoring, changing APIs (number or type of
+arguments, etc.), catching functions that should not be called, and changing
+existing patterns.
+
+coccinelle is driven by input scripts called `semantic patches <https://coccinelle.gitlabpages.inria.fr/website/docs/index.html>`_
+written in its own language. These scripts bear a passing resemblance to source
+code patches and tell coccinelle how to match and modify a piece of source
+code. They are stored in ``devel/coccinelle`` and each script either contains
+a single source transformation or several related transformations. In general,
+we try to keep these as simple as possible.
+
+In Pacemaker development, we use a couple of targets in ``devel/Makefile.am`` to
+control coccinelle. The ``cocci`` target tries to apply each script to every
+Pacemaker source file, printing out any changes it would make to the console.
+The ``cocci-inplace`` target does the same but also makes those changes to the
+source files. A variety of warnings might also be printed. If you aren't working
+on a new script, these can usually be ignored.
+
+If you are working on a new coccinelle script, it can be useful (and faster) to
+skip everything else and only run the new script. The ``COCCI_FILES`` variable
+can be used for this:
+
+.. code-block:: none
+
+ $ make -C devel COCCI_FILES=coccinelle/new-file.cocci cocci
+
+This variable is also used for preventing some coccinelle scripts in the Pacemaker
+source tree from running. Some scripts are disabled because they are not currently
+fully working or because they are there as templates. When adding a new script,
+remember to add it to this variable if it should always be run.
+
+One complication when writing coccinelle scripts is that certain Pacemaker source
+files may not use private functions (those whose name starts with ``pcmk__``).
+Handling this requires work in both the Makefile and in the coccinelle scripts.
+
+The Makefile deals with this by maintaining two lists of source files: those that
+may use private functions and those that may not. For those that may, a special
+argument (``-D internal``) is added to the coccinelle command line. This creates
+a virtual dependency named ``internal``.
+
+In the coccinelle scripts, those transformations that modify source code to use
+a private function also have a dependency on ``internal``. If that dependency
+was given on the command line, the transformation will be run. Otherwise, it will
+be skipped.
+
+This means that not all instances of an older style of code will be changed after
+running a given transformation. Some developer intervention is still necessary
+to know whether a source code block should have been changed or not.
+
+Probably the easiest way to learn how to use coccinelle is by following other
+people's scripts. In addition to the ones in the Pacemaker source directory,
+there are several others on the `coccinelle website <https://coccinelle.gitlabpages.inria.fr/website/rules/>`_.
+
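+As a simple illustration, a semantic patch that replaces every call to a
+hypothetical ``old_fn()`` with ``new_fn()`` (both names are invented for this
+example) might look like this:
+
+.. code-block:: none
+
+   @@
+   expression e;
+   @@
+   - old_fn(e)
+   + new_fn(e)
+
+The ``expression e;`` declaration tells coccinelle to match any argument
+expression and carry it over unchanged into the rewritten call.
+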
+Sanitizers
+##########
+
+gcc supports a variety of run-time checks called sanitizers. These can be used to
+catch programming errors with memory, race conditions, various undefined behavior
+conditions, and more. Because these are run-time checks, they should only be used
+during development and not in compiled packages or production code.
+
+Certain sanitizers cannot be combined with others because their run-time checks
+interfere with each other. Instead of trying to figure out which combinations
+work, it is simplest to just enable one at a time.
+
+Each supported sanitizer requires an installed library. In addition to just
+enabling the sanitizer, its use can be configured with environment variables.
+For example:
+
+.. code-block:: none
+
+ $ ASAN_OPTIONS=verbosity=1:replace_str=true crm_mon -1R
+
+Pacemaker supports the following subset of gcc's sanitizers:
+
++--------------------+-------------------------+----------+----------------------+
+| Sanitizer | Configure Option | Library | Environment Variable |
++====================+=========================+==========+======================+
+| Address | --with-sanitizers=asan | libasan | ASAN_OPTIONS |
++--------------------+-------------------------+----------+----------------------+
+| Threads | --with-sanitizers=tsan | libtsan | TSAN_OPTIONS |
++--------------------+-------------------------+----------+----------------------+
+| Undefined behavior | --with-sanitizers=ubsan | libubsan | UBSAN_OPTIONS |
++--------------------+-------------------------+----------+----------------------+
+
+The undefined behavior sanitizer further supports suboptions that need to be
+given as CFLAGS when configuring pacemaker:
+
+.. code-block:: none
+
+ $ CFLAGS=-fsanitize=integer-divide-by-zero ./configure --with-sanitizers=ubsan
+
+For more information, see the `gcc documentation <https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.html>`_
+which also provides links to more information on each sanitizer.
+
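+As an illustration of the kind of error these catch, consider the following
+contrived helper (the function is invented for this example). It allocates
+memory that the caller must free; a caller that forgets to do so runs cleanly
+in a normal build, but in a build configured with ``--with-sanitizers=asan``,
+ASan's leak checker prints a report at exit:
+
+.. code-block:: c
+
+   #include <stdlib.h>
+   #include <string.h>
+
+   /* Return a newly allocated copy of name, or NULL on allocation failure.
+    * The caller owns the result; if it is never freed, the address
+    * sanitizer reports the leak when the program exits. */
+   char *
+   dup_node_name(const char *name)
+   {
+       char *copy = malloc(strlen(name) + 1);
+
+       if (copy != NULL) {
+           strcpy(copy, name);
+       }
+       return copy;
+   }
+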
+Unit Testing
+############
+
+Where possible, changes to the C side of Pacemaker should be accompanied by unit
+tests. Much of Pacemaker cannot effectively be unit tested (and there are other
+testing systems used for those parts), but the ``lib`` subdirectory is pretty easy
+to write tests for.
+
+Pacemaker uses the `cmocka unit testing framework <https://cmocka.org/>`_ which looks
+a lot like other unit testing frameworks for C and should be fairly familiar. In
+addition to regular unit tests, cmocka also gives us the ability to use
+`mock functions <https://en.wikipedia.org/wiki/Mock_object>`_ for unit testing
+functions that would otherwise be difficult to test.
+
+Organization
+____________
+
+Pay close attention to the organization and naming of test cases to ensure the
+unit tests continue to work as they should.
+
+Tests are spread throughout the source tree, alongside the source code they test.
+For instance, all the tests for the source code in ``lib/common/`` are in the
+``lib/common/tests`` directory. If there is no ``tests`` subdirectory, there are no
+tests for that library yet.
+
+Under that directory, there is a ``Makefile.am`` and additional subdirectories. Each
+subdirectory contains the tests for a single library source file. For instance,
+all the tests for ``lib/common/strings.c`` are in the ``lib/common/tests/strings``
+directory. Note that the test subdirectory does not have a ``.c`` suffix. If there
+is no test subdirectory, there are no tests for that file yet.
+
+Finally, under that directory, there is a ``Makefile.am`` and then various source
+files. Each of these source files tests the single function that it is named
+after. For instance, ``lib/common/tests/strings/pcmk__btoa_test.c`` tests the
+``pcmk__btoa()`` function in ``lib/common/strings.c``. If there is no test
+source file, there are no tests for that function yet.
+
+The ``_test`` suffix on the test source file is important. All tests have this
+suffix, which means all the compiled test cases will also end with this suffix.
+That lets us ignore all the compiled tests with a single line in ``.gitignore``:
+
+.. code-block:: none
+
+ /lib/*/tests/*/*_test
+
+Adding a test
+_____________
+
+Testing a new function in an already testable source file
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Follow these steps if you want to test a function in a source file where there
+are already other tested functions. For the purposes of this example, we will
+add a test for the ``pcmk__scan_port()`` function in ``lib/common/strings.c``. As
+you can see, there are already tests for other functions in this same file in
+the ``lib/common/tests/strings`` directory.
+
+* cd into ``lib/common/tests/strings``
+* Add the new file to the ``check_PROGRAMS`` variable in ``Makefile.am``,
+ making it something like this:
+
+ .. code-block:: none
+
+ check_PROGRAMS = \
+ pcmk__add_word_test \
+ pcmk__btoa_test \
+ pcmk__scan_port_test
+
+* Create a new ``pcmk__scan_port_test.c`` file, copying the copyright and include
+ boilerplate from another file in the same directory.
+* Continue with the steps in `Writing the test`_.
+
+Testing a function in a source file without tests
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Follow these steps if you want to test a function in a source file where there
+are not already other tested functions, but there are tests for other files in
+the same library. For the purposes of this example, we will add a test for the
+``pcmk_acl_required()`` function in ``lib/common/acls.c``. At the time of this
+documentation being written, no tests existed for that source file, so there
+is no ``lib/common/tests/acls`` directory.
+
+* Add to ``AC_CONFIG_FILES`` in the top-level ``configure.ac`` file so the build
+  process knows to use the directory we're about to create. That variable would
+ now look something like:
+
+ .. code-block:: none
+
+ dnl Other files we output
+ AC_CONFIG_FILES(Makefile \
+ ...
+ lib/common/tests/Makefile \
+ lib/common/tests/acls/Makefile \
+ lib/common/tests/agents/Makefile \
+ ...
+ )
+
+* cd into ``lib/common/tests``
+* Add to the ``SUBDIRS`` variable in ``Makefile.am``, making it something like:
+
+ .. code-block:: none
+
+ SUBDIRS = agents acls cmdline flags operations strings utils xpath results
+
+* Create a new ``acls`` directory, copying the ``Makefile.am`` from some other
+ directory. At this time, each ``Makefile.am`` is largely boilerplate with
+ very little that needs to change from directory to directory.
+* cd into ``acls``
+* Get rid of any existing values for ``check_PROGRAMS`` and set it to
+ ``pcmk_acl_required_test`` like so:
+
+ .. code-block:: none
+
+ check_PROGRAMS = pcmk_acl_required_test
+
+* Double check that ``$(top_srcdir)/mk/tap.mk`` and ``$(top_srcdir)/mk/unittest.mk``
+ are included in the ``Makefile.am``. These files contain all the flags necessary
+ for most unit tests. If necessary, individual settings can be overridden like so:
+
+ .. code-block:: none
+
+ AM_CPPFLAGS += -I$(top_srcdir)
+ LDADD += $(top_builddir)/lib/pengine/libpe_status_test.la
+
+* Follow the steps in `Testing a new function in an already testable source file`_
+ to create the new ``pcmk_acl_required_test.c`` file.
+
+Testing a function in a library without tests
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Adding a test case for a function in a library that doesn't have any test cases
+to begin with is only slightly more complicated. In general, the steps are the
+same as for the previous section, except with an additional layer of directory
+creation.
+
+For the purposes of this example, we will add a test case for the
+``lrmd_send_resource_alert()`` function in ``lib/lrmd/lrmd_alerts.c``. Note that this
+may not be a very good function or even library to write actual unit tests for.
+
+* Add to ``AC_CONFIG_FILES`` in the top-level ``configure.ac`` file so the build
+  process knows to use the directory we're about to create. That variable would
+ now look something like:
+
+ .. code-block:: none
+
+ dnl Other files we output
+ AC_CONFIG_FILES(Makefile \
+ ...
+ lib/lrmd/Makefile \
+ lib/lrmd/tests/Makefile \
+ lib/services/Makefile \
+ ...
+ )
+
+* cd into ``lib/lrmd``
+* Create a ``SUBDIRS`` variable in ``Makefile.am`` if it doesn't already exist.
+ Most libraries should not have this variable already.
+
+ .. code-block:: none
+
+ SUBDIRS = tests
+
+* Create a new ``tests`` directory and add a ``Makefile.am`` with the following
+ contents:
+
+ .. code-block:: none
+
+ SUBDIRS = lrmd_alerts
+
+* Follow the steps in `Testing a function in a source file without tests`_ to create
+ the rest of the new directory structure.
+
+* Follow the steps in `Testing a new function in an already testable source file`_
+ to create the new ``lrmd_send_resource_alert_test.c`` file.
+
+Adding to an existing test case
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If all you need to do is add additional test cases to an existing file, none of
+the above work is necessary. All you need to do is find the test source file
+with the name matching your function and add to it and then follow the
+instructions in `Writing the test`_.
+
+Writing the test
+________________
+
+A test case file contains a fair amount of boilerplate. For this reason, it's
+usually easiest to just copy an existing file and adapt it to your needs. However,
+here's the basic structure:
+
+.. code-block:: c
+
+ /*
+ * Copyright 2021 the Pacemaker project contributors
+ *
+ * The version control history for this file may have further details.
+ *
+ * This source code is licensed under the GNU Lesser General Public License
+ * version 2.1 or later (LGPLv2.1+) WITHOUT ANY WARRANTY.
+ */
+
+ #include <crm_internal.h>
+
+ #include <crm/common/unittest_internal.h>
+
+ /* Put your test-specific includes here */
+
+ /* Put your test functions here */
+
+ PCMK__UNIT_TEST(NULL, NULL,
+ /* Register your test functions here */)
+
+Each test-specific function should test one aspect of the library function,
+though it can include many assertions if there are many ways of testing that
+one aspect. For instance, there might be multiple ways of testing regular
+expression matching:
+
+.. code-block:: c
+
+ static void
+ regex(void **state) {
+ const char *s1 = "abcd";
+ const char *s2 = "ABCD";
+
+ assert_true(pcmk__strcmp(NULL, "a..d", pcmk__str_regex) < 0);
+ assert_true(pcmk__strcmp(s1, NULL, pcmk__str_regex) > 0);
+ assert_int_equal(pcmk__strcmp(s1, "a..d", pcmk__str_regex), 0);
+ }
+
+Each test-specific function must also be registered or it will not be called.
+This is done with ``cmocka_unit_test()`` in the ``PCMK__UNIT_TEST`` macro:
+
+.. code-block:: c
+
+ PCMK__UNIT_TEST(NULL, NULL,
+ cmocka_unit_test(regex))
+
+Most unit tests do not require a setup and teardown function to be executed
+around the entire group of tests. On occasion, this may be necessary. Simply
+pass those functions in as the first two parameters to ``PCMK__UNIT_TEST``
+instead of using NULL.
+
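+For instance, a hypothetical setup/teardown pair (the names here are invented
+for illustration) that allocates and releases state shared by the whole group
+of tests might be registered like this:
+
+.. code-block:: c
+
+   static int
+   group_setup(void **state) {
+       /* Allocate anything shared by every test; return 0 on success */
+       return 0;
+   }
+
+   static int
+   group_teardown(void **state) {
+       /* Release whatever group_setup() allocated */
+       return 0;
+   }
+
+   PCMK__UNIT_TEST(group_setup, group_teardown,
+                   cmocka_unit_test(regex))
+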
+Assertions
+__________
+
+In addition to the `assertions provided by cmocka <https://api.cmocka.org/group__cmocka__asserts.html>`_,
+``unittest_internal.h`` also provides ``pcmk__assert_asserts``. This macro takes an
+expression and verifies that the expression aborts due to a failed call to
+``CRM_ASSERT`` or some other similar function. It can be used like so:
+
+.. code-block:: c
+
+ static void
+ null_input_variables(void **state)
+ {
+ long long start, end;
+
+ pcmk__assert_asserts(pcmk__parse_ll_range("1234", NULL, &end));
+ pcmk__assert_asserts(pcmk__parse_ll_range("1234", &start, NULL));
+ }
+
+Here, ``pcmk__parse_ll_range`` expects non-NULL for its second and third
+arguments. If one of those arguments is NULL, ``CRM_ASSERT`` will fail and
+the program will abort. ``pcmk__assert_asserts`` checks that the code would
+abort and the test passes. If the code does not abort, the test fails.
+
+
+Running
+_______
+
+If you had to create any new files or directories, you will first need to run
+``./configure`` from the top level of the source directory. This will regenerate
+the Makefiles throughout the tree. If you skip this step, your changes will be
+skipped and you'll be left wondering why the output doesn't match what you
+expected.
+
+To run the tests, simply run ``make check`` after previously building the source
+with ``make``. The test cases in each directory will be built and then run.
+This should not take long. If all the tests succeed, you will be back at the
+prompt. Scrolling back through the history, you should see lines like the
+following:
+
+.. code-block:: none
+
+ PASS: pcmk__strcmp_test 1 - same_pointer
+ PASS: pcmk__strcmp_test 2 - one_is_null
+ PASS: pcmk__strcmp_test 3 - case_matters
+ PASS: pcmk__strcmp_test 4 - case_insensitive
+ PASS: pcmk__strcmp_test 5 - regex
+ ============================================================================
+ Testsuite summary for pacemaker 2.1.0
+ ============================================================================
+ # TOTAL: 33
+ # PASS: 33
+ # SKIP: 0
+ # XFAIL: 0
+ # FAIL: 0
+ # XPASS: 0
+ # ERROR: 0
+ ============================================================================
+ make[7]: Leaving directory '/home/clumens/src/pacemaker/lib/common/tests/strings'
+
+The testing process will quit on the first failed test, and you will see lines
+like these:
+
+.. code-block:: none
+
+ PASS: pcmk__scan_double_test 3 - trailing_chars
+ FAIL: pcmk__scan_double_test 4 - typical_case
+ PASS: pcmk__scan_double_test 5 - double_overflow
+ PASS: pcmk__scan_double_test 6 - double_underflow
+ ERROR: pcmk__scan_double_test - exited with status 1
+ PASS: pcmk__starts_with_test 1 - bad_input
+ ============================================================================
+ Testsuite summary for pacemaker 2.1.0
+ ============================================================================
+ # TOTAL: 56
+ # PASS: 54
+ # SKIP: 0
+ # XFAIL: 0
+ # FAIL: 1
+ # XPASS: 0
+ # ERROR: 1
+ ============================================================================
+ See lib/common/tests/strings/test-suite.log
+ Please report to users@clusterlabs.org
+ ============================================================================
+ make[7]: *** [Makefile:1218: test-suite.log] Error 1
+ make[7]: Leaving directory '/home/clumens/src/pacemaker/lib/common/tests/strings'
+
+The failure is in ``lib/common/tests/strings/test-suite.log``:
+
+.. code-block:: none
+
+ ERROR: pcmk__scan_double_test
+ =============================
+
+ 1..6
+ ok 1 - empty_input_string
+ PASS: pcmk__scan_double_test 1 - empty_input_string
+ ok 2 - bad_input_string
+ PASS: pcmk__scan_double_test 2 - bad_input_string
+ ok 3 - trailing_chars
+ PASS: pcmk__scan_double_test 3 - trailing_chars
+ not ok 4 - typical_case
+ FAIL: pcmk__scan_double_test 4 - typical_case
+ # 0.000000 != 3.000000
+ # pcmk__scan_double_test.c:80: error: Failure!
+ ok 5 - double_overflow
+ PASS: pcmk__scan_double_test 5 - double_overflow
+ ok 6 - double_underflow
+ PASS: pcmk__scan_double_test 6 - double_underflow
+ # not ok - tests
+ ERROR: pcmk__scan_double_test - exited with status 1
+
+At this point, you need to determine whether your test case is incorrect or
+whether the code being tested is incorrect. Fix whichever is wrong and continue.
+
+
+Code Coverage
+#############
+
+Figuring out what needs unit tests written is the purpose of a code coverage tool.
+The Pacemaker build process uses ``lcov`` and special make targets to generate
+an HTML coverage report that can be inspected with any web browser.
+
+To start, you'll need to install the ``lcov`` package which is included in most
+distributions. Next, reconfigure and rebuild the source tree:
+
+.. code-block:: none
+
+ $ ./configure --with-coverage
+ $ make
+
+Then simply run ``make coverage``. This will do the same thing as ``make check``,
+but will generate a bunch of intermediate files as part of the compiler's output.
+Essentially, the coverage tools run all the unit tests and make a note of
+whether a given line of code is executed as part of some test program. This
+will include not just things run as part of the tests but anything in the
+setup and teardown functions as well.
+
+Afterwards, the HTML report will be in ``coverage/index.html``. You can drill down
+into individual source files to see exactly which lines are covered and which are
+not, which makes it easy to target new unit tests. Note that sometimes, it is
+impossible to achieve 100% coverage for a source file. For instance, how do you
+test a function with a return type of void that simply returns on some condition?
+
+Note that Pacemaker's overall code coverage numbers are very low at the moment.
+One reason for this is the large amount of code in the ``daemons`` directory that
+will be very difficult to write unit tests for. For now, it is best to focus
+efforts on increasing the coverage on individual libraries.
+
+Additionally, there is a ``coverage-cts`` target that does the same thing, but
+runs ``cts/cts-cli`` instead of ``make check``. The idea behind this target is
+to see what parts of our command line tools are covered by our regression
+tests. It is probably best to clean and rebuild the source tree when switching
+between these various targets.
+
+
+Debugging
+#########
+
+gdb
+___
+
+If you use ``gdb`` for debugging, some helper functions are defined in
+``devel/gdbhelpers``, which can be given to ``gdb`` using the ``-x`` option.
+
+From within the debugger, you can then invoke the ``pcmk`` command that
+will describe the helper functions available.
diff --git a/doc/sphinx/Pacemaker_Development/index.rst b/doc/sphinx/Pacemaker_Development/index.rst
new file mode 100644
index 0000000..cbe1499
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Development/index.rst
@@ -0,0 +1,33 @@
+Pacemaker Development
+=====================
+
+*Working with the Pacemaker Code Base*
+
+
+Abstract
+--------
+This document has guidelines and tips for developers interested in editing
+Pacemaker source code and submitting changes for inclusion in the project.
+Start with the FAQ; the rest is optional detail.
+
+
+Table of Contents
+-----------------
+
+.. toctree::
+ :maxdepth: 3
+ :numbered:
+
+ faq
+ general
+ python
+ c
+ components
+ helpers
+ evolution
+
+Index
+-----
+
+* :ref:`genindex`
+* :ref:`search`
diff --git a/doc/sphinx/Pacemaker_Development/python.rst b/doc/sphinx/Pacemaker_Development/python.rst
new file mode 100644
index 0000000..54e6c55
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Development/python.rst
@@ -0,0 +1,81 @@
+.. index::
+ single: Python
+ pair: Python; guidelines
+
+Python Coding Guidelines
+------------------------
+
+.. index::
+ pair: Python; boilerplate
+ pair: license; Python
+ pair: copyright; Python
+
+.. _s-python-boilerplate:
+
+Python Boilerplate
+##################
+
+If a Python file is meant to be executed (as opposed to imported), it should
+have a ``.in`` extension, and its first line should be:
+
+.. code-block:: python
+
+ #!@PYTHON@
+
+which will be replaced with the appropriate python executable when Pacemaker is
+built. To make that happen, add an entry to ``CONFIG_FILES_EXEC()`` in
+``configure.ac``, and add the file name without ``.in`` to ``.gitignore`` (see
+existing examples).
+
+After the above line (if any), every Python file should start like this:
+
+.. code-block:: python
+
+ """ <BRIEF-DESCRIPTION>
+ """
+
+ __copyright__ = "Copyright <YYYY[-YYYY]> the Pacemaker project contributors"
+ __license__ = "<LICENSE> WITHOUT ANY WARRANTY"
+
+*<BRIEF-DESCRIPTION>* is obviously a brief description of the file's
+purpose. The string may contain any other information typically used in
+a Python file `docstring <https://www.python.org/dev/peps/pep-0257/>`_.
+
+``<LICENSE>`` should follow the policy set forth in the
+`COPYING <https://github.com/ClusterLabs/pacemaker/blob/main/COPYING>`_ file,
+generally one of "GNU General Public License version 2 or later (GPLv2+)"
+or "GNU Lesser General Public License version 2.1 or later (LGPLv2.1+)".
+
+
+.. index::
+ single: Python; 3
+ single: Python; version
+
+Python Version Compatibility
+############################
+
+Pacemaker targets compatibility with Python 3.4 and later.
+
+Do not use features not available in all targeted Python versions. An
+example is the ``subprocess.run()`` function.
+
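+For instance, rather than ``subprocess.run()`` (added in Python 3.5), a small
+wrapper can stick to ``subprocess.check_output()``, which is available in all
+targeted versions (the ``run_command`` name here is just for illustration):
+
+.. code-block:: python
+
+   import subprocess
+
+   def run_command(args):
+       """ Run a command and return its decoded standard output. """
+       return subprocess.check_output(args).decode("utf-8")
+
+For example, ``run_command(["echo", "hello"])`` returns ``"hello\n"``.
+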
+
+.. index::
+ pair: Python; whitespace
+
+Formatting Python Code
+######################
+
+* Indentation must be 4 spaces, no tabs.
+* Do not leave trailing whitespace.
+* Lines should be no longer than 80 characters unless limiting line length
+ significantly impacts readability. For Python, this limitation is
+ flexible since breaking a line often impacts readability, but
+ definitely keep it under 120 characters.
+* Where not conflicting with this style guide, it is recommended (but not
+ required) to follow `PEP 8 <https://www.python.org/dev/peps/pep-0008/>`_.
+* It is recommended (but not required) to format Python code such that
+ ``pylint
+ --disable=line-too-long,too-many-lines,too-many-instance-attributes,too-many-arguments,too-many-statements``
+ produces minimal complaints (even better if you don't need to disable all
+ those checks).
diff --git a/doc/sphinx/Pacemaker_Explained/acls.rst b/doc/sphinx/Pacemaker_Explained/acls.rst
new file mode 100644
index 0000000..67d5d15
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Explained/acls.rst
@@ -0,0 +1,460 @@
+.. index::
+ single: Access Control List (ACL)
+
+.. _acl:
+
+Access Control Lists (ACLs)
+---------------------------
+
+By default, the ``root`` user or any user in the ``haclient`` group can modify
+Pacemaker's CIB without restriction. Pacemaker offers *access control lists
+(ACLs)* to provide more fine-grained authorization.
+
+.. important::
+
+ Being able to modify the CIB's resource section allows a user to run any
+ executable file as root, by configuring it as an LSB resource with a full
+ path.
+
+ACL Prerequisites
+#################
+
+In order to use ACLs:
+
+* The ``enable-acl`` :ref:`cluster option <cluster_options>` must be set to
+ true.
+
+* Desired users must have user accounts in the ``haclient`` group on all
+  nodes in the cluster.
+
+* If your CIB was created before Pacemaker 1.1.12, it might need to be updated
+ to the current schema (using ``cibadmin --upgrade`` or a higher-level tool
+ equivalent) in order to use the syntax documented here.
+
+* Prior to the 2.1.0 release, the Pacemaker software had to have been built
+ with ACL support. If you are using an older release, your installation
+ supports ACLs only if the output of the command ``pacemakerd --features``
+ contains ``acls``. In newer versions, ACLs are always enabled.
+
+
+.. index::
+ single: Access Control List (ACL); acls
+ pair: acls; XML element
+
+ACL Configuration
+#################
+
+ACLs are specified within an ``acls`` element of the CIB. The ``acls`` element
+may contain any number of ``acl_role``, ``acl_target``, and ``acl_group``
+elements.
+
+
+.. index::
+ single: Access Control List (ACL); acl_role
+ pair: acl_role; XML element
+
+ACL Roles
+#########
+
+An ACL *role* is a collection of permissions allowing or denying access to
+particular portions of the CIB. A role is configured with an ``acl_role``
+element in the CIB ``acls`` section.
+
+.. table:: **Properties of an acl_role element**
+ :widths: 1 3
+
+ +------------------+-----------------------------------------------------------+
+ | Attribute | Description |
+ +==================+===========================================================+
+ | id | .. index:: |
+ | | single: acl_role; id (attribute) |
+ | | single: id; acl_role attribute |
+ | | single: attribute; id (acl_role) |
+ | | |
+ | | A unique name for the role *(required)* |
+ +------------------+-----------------------------------------------------------+
+ | description | .. index:: |
+ | | single: acl_role; description (attribute) |
+ | | single: description; acl_role attribute |
+ | | single: attribute; description (acl_role) |
+ | | |
+ | | Arbitrary text (not used by Pacemaker) |
+ +------------------+-----------------------------------------------------------+
+
+An ``acl_role`` element may contain any number of ``acl_permission`` elements.
+
+.. index::
+ single: Access Control List (ACL); acl_permission
+ pair: acl_permission; XML element
+
+.. table:: **Properties of an acl_permission element**
+ :widths: 1 3
+
+ +------------------+-----------------------------------------------------------+
+ | Attribute | Description |
+ +==================+===========================================================+
+ | id | .. index:: |
+ | | single: acl_permission; id (attribute) |
+ | | single: id; acl_permission attribute |
+ | | single: attribute; id (acl_permission) |
+ | | |
+ | | A unique name for the permission *(required)* |
+ +------------------+-----------------------------------------------------------+
+ | description | .. index:: |
+ | | single: acl_permission; description (attribute) |
+ | | single: description; acl_permission attribute |
+ | | single: attribute; description (acl_permission) |
+ | | |
+ | | Arbitrary text (not used by Pacemaker) |
+ +------------------+-----------------------------------------------------------+
+ | kind | .. index:: |
+ | | single: acl_permission; kind (attribute) |
+ | | single: kind; acl_permission attribute |
+ | | single: attribute; kind (acl_permission) |
+ | | |
+ | | The access being granted. Allowed values are ``read``, |
+ | | ``write``, and ``deny``. A value of ``write`` grants both |
+ | | read and write access. |
+ +------------------+-----------------------------------------------------------+
+ | object-type | .. index:: |
+ | | single: acl_permission; object-type (attribute) |
+ | | single: object-type; acl_permission attribute |
+ | | single: attribute; object-type (acl_permission) |
+ | | |
+ | | The name of an XML element in the CIB to which the |
+ | | permission applies. (Exactly one of ``object-type``, |
+ | | ``xpath``, and ``reference`` must be specified for a |
+ | | permission.) |
+ +------------------+-----------------------------------------------------------+
+ | attribute | .. index:: |
+ | | single: acl_permission; attribute (attribute) |
+ | | single: attribute; acl_permission attribute |
+ | | single: attribute; attribute (acl_permission) |
+ | | |
+ | | If specified, the permission applies only to |
+ | | ``object-type`` elements that have this attribute set (to |
+ | | any value). If not specified, the permission applies to |
+ | | all ``object-type`` elements. May only be used with |
+ | | ``object-type``. |
+ +------------------+-----------------------------------------------------------+
+ | reference | .. index:: |
+ | | single: acl_permission; reference (attribute) |
+ | | single: reference; acl_permission attribute |
+ | | single: attribute; reference (acl_permission) |
+ | | |
+ | | The ID of an XML element in the CIB to which the |
+ | | permission applies. (Exactly one of ``object-type``, |
+ | | ``xpath``, and ``reference`` must be specified for a |
+ | | permission.) |
+ +------------------+-----------------------------------------------------------+
+ | xpath | .. index:: |
+ | | single: acl_permission; xpath (attribute) |
+ | | single: xpath; acl_permission attribute |
+ | | single: attribute; xpath (acl_permission) |
+ | | |
+ | | An `XPath <https://www.w3.org/TR/xpath-10/>`_ |
+ | | specification selecting an XML element in the CIB to |
+ | | which the permission applies. Attributes may be specified |
+ | | in the XPath to select particular elements, but the |
+ | | permissions apply to the entire element. (Exactly one of |
+ | | ``object-type``, ``xpath``, and ``reference`` must be |
+ | | specified for a permission.) |
+ +------------------+-----------------------------------------------------------+
+
+.. important::
+
+ * Permissions are applied to the selected XML element's entire XML subtree
+ (all elements enclosed within it).
+
+ * Write permission grants the ability to create, modify, or remove the
+ element and its subtree, and also the ability to create any "scaffolding"
+ elements (enclosing elements that do not have attributes other than an
+ ID).
+
+ * Permissions for more specific matches (more deeply nested elements) take
+ precedence over more general ones.
+
+ * If multiple permissions are configured for the same match (for example, in
+ different roles applied to the same user), any ``deny`` permission takes
+ precedence, then ``write``, then lastly ``read``.
+
+
+ACL Targets and Groups
+######################
+
+ACL targets correspond to user accounts on the system.
+
+.. index::
+ single: Access Control List (ACL); acl_target
+ pair: acl_target; XML element
+
+.. table:: **Properties of an acl_target element**
+ :widths: 1 3
+
+ +------------------+-----------------------------------------------------------+
+ | Attribute | Description |
+ +==================+===========================================================+
+ | id | .. index:: |
+ | | single: acl_target; id (attribute) |
+ | | single: id; acl_target attribute |
+ | | single: attribute; id (acl_target) |
+ | | |
+ | | A unique identifier for the target (if ``name`` is not |
+ | | specified, this must be the name of the user account) |
+ | | *(required)* |
+ +------------------+-----------------------------------------------------------+
+ | name | .. index:: |
+ | | single: acl_target; name (attribute) |
+ | | single: name; acl_target attribute |
+ | | single: attribute; name (acl_target) |
+ | | |
+ | | If specified, the user account name (this allows you to |
+ | | specify a user name that is already used as the ``id`` |
+ | | for some other configuration element) *(since 2.1.5)* |
+ +------------------+-----------------------------------------------------------+
+
+ACL groups correspond to groups on the system. Any roles configured for these
+groups apply to all users in the group *(since 2.1.5)*.
+
+.. index::
+ single: Access Control List (ACL); acl_group
+ pair: acl_group; XML element
+
+.. table:: **Properties of an acl_group element**
+ :widths: 1 3
+
+ +------------------+-----------------------------------------------------------+
+ | Attribute | Description |
+ +==================+===========================================================+
+ | id | .. index:: |
+ | | single: acl_group; id (attribute) |
+ | | single: id; acl_group attribute |
+ | | single: attribute; id (acl_group) |
+ | | |
+ | | A unique identifier for the group (if ``name`` is not |
+ | | specified, this must be the group name) *(required)* |
+ +------------------+-----------------------------------------------------------+
+ | name | .. index:: |
+ | | single: acl_group; name (attribute) |
+ | | single: name; acl_group attribute |
+ | | single: attribute; name (acl_group) |
+ | | |
+ | | If specified, the group name (this allows you to specify |
+ | | a group name that is already used as the ``id`` for some |
+ | | other configuration element) |
+ +------------------+-----------------------------------------------------------+
+
+Each ``acl_target`` and ``acl_group`` element may contain any number of ``role``
+elements.
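+
+For example, a group with a role might be sketched as follows (the group name
+``admins`` is illustrative, and the ``role`` element's ``id`` must match an
+existing ``acl_role``):
+
+.. code-block:: xml
+
+   <acl_group id="admins">
+     <role id="operator"/>
+   </acl_group>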
+
+.. note::
+
+ If the system users and groups are defined by some network service (such as
+ LDAP), the cluster itself will be unaffected by outages in the service, but
+ affected users and groups will not be able to make changes to the CIB.
+
+
+.. index::
+ single: Access Control List (ACL); role
+ pair: role; XML element
+
+.. table:: **Properties of a role element**
+ :widths: 1 3
+
+ +------------------+-----------------------------------------------------------+
+ | Attribute | Description |
+ +==================+===========================================================+
+ | id | .. index:: |
+ | | single: role; id (attribute) |
+ | | single: id; role attribute |
+ | | single: attribute; id (role) |
+ | | |
+ | | The ``id`` of an ``acl_role`` element that specifies |
+ | | permissions granted to the enclosing target or group. |
+ +------------------+-----------------------------------------------------------+
+
+.. important::
+
+ The ``root`` and ``hacluster`` user accounts always have full access to the
+ CIB, regardless of ACLs. For all other user accounts, when ``enable-acl`` is
+ true, permission to all parts of the CIB is denied by default (permissions
+ must be explicitly granted).
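+
+For example, ACLs could be activated with a ``crm_config`` entry along these
+lines (the property-set and ``nvpair`` ``id`` values shown are illustrative):
+
+.. code-block:: xml
+
+   <crm_config>
+     <cluster_property_set id="cib-bootstrap-options">
+       <nvpair id="opts-enable-acl" name="enable-acl" value="true"/>
+     </cluster_property_set>
+   </crm_config>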
+
+ACL Examples
+############
+
+.. code-block:: xml
+
+ <acls>
+
+ <acl_role id="read_all">
+ <acl_permission id="read_all-cib" kind="read" xpath="/cib" />
+ </acl_role>
+
+ <acl_role id="operator">
+
+ <acl_permission id="operator-maintenance-mode" kind="write"
+ xpath="//crm_config//nvpair[@name='maintenance-mode']" />
+
+ <acl_permission id="operator-maintenance-attr" kind="write"
+ xpath="//nvpair[@name='maintenance']" />
+
+ <acl_permission id="operator-target-role" kind="write"
+ xpath="//resources//meta_attributes/nvpair[@name='target-role']" />
+
+ <acl_permission id="operator-is-managed" kind="write"
+ xpath="//resources//nvpair[@name='is-managed']" />
+
+ <acl_permission id="operator-rsc_location" kind="write"
+ object-type="rsc_location" />
+
+ </acl_role>
+
+ <acl_role id="administrator">
+ <acl_permission id="administrator-cib" kind="write" xpath="/cib" />
+ </acl_role>
+
+ <acl_role id="minimal">
+
+ <acl_permission id="minimal-standby" kind="read"
+ description="allow reading standby node attribute (permanent or transient)"
+ xpath="//instance_attributes/nvpair[@name='standby']"/>
+
+ <acl_permission id="minimal-maintenance" kind="read"
+ description="allow reading maintenance node attribute (permanent or transient)"
+ xpath="//nvpair[@name='maintenance']"/>
+
+ <acl_permission id="minimal-target-role" kind="read"
+ description="allow reading resource target roles"
+ xpath="//resources//meta_attributes/nvpair[@name='target-role']"/>
+
+ <acl_permission id="minimal-is-managed" kind="read"
+ description="allow reading resource managed status"
+ xpath="//resources//meta_attributes/nvpair[@name='is-managed']"/>
+
+ <acl_permission id="minimal-deny-instance-attributes" kind="deny"
+ xpath="//instance_attributes"/>
+
+ <acl_permission id="minimal-deny-meta-attributes" kind="deny"
+ xpath="//meta_attributes"/>
+
+ <acl_permission id="minimal-deny-operations" kind="deny"
+ xpath="//operations"/>
+
+ <acl_permission id="minimal-deny-utilization" kind="deny"
+ xpath="//utilization"/>
+
+ <acl_permission id="minimal-nodes" kind="read"
+ description="allow reading node names/IDs (attributes are denied separately)"
+ xpath="/cib/configuration/nodes"/>
+
+ <acl_permission id="minimal-resources" kind="read"
+ description="allow reading resource names/agents (parameters are denied separately)"
+ xpath="/cib/configuration/resources"/>
+
+ <acl_permission id="minimal-deny-constraints" kind="deny"
+ xpath="/cib/configuration/constraints"/>
+
+ <acl_permission id="minimal-deny-topology" kind="deny"
+ xpath="/cib/configuration/fencing-topology"/>
+
+ <acl_permission id="minimal-deny-op_defaults" kind="deny"
+ xpath="/cib/configuration/op_defaults"/>
+
+ <acl_permission id="minimal-deny-rsc_defaults" kind="deny"
+ xpath="/cib/configuration/rsc_defaults"/>
+
+ <acl_permission id="minimal-deny-alerts" kind="deny"
+ xpath="/cib/configuration/alerts"/>
+
+ <acl_permission id="minimal-deny-acls" kind="deny"
+ xpath="/cib/configuration/acls"/>
+
+ <acl_permission id="minimal-cib" kind="read"
+ description="allow reading cib element and crm_config/status sections"
+ xpath="/cib"/>
+
+ </acl_role>
+
+ <acl_target id="alice">
+ <role id="minimal"/>
+ </acl_target>
+
+ <acl_target id="bob">
+ <role id="read_all"/>
+ </acl_target>
+
+ <acl_target id="carol">
+ <role id="read_all"/>
+ <role id="operator"/>
+ </acl_target>
+
+ <acl_target id="dave">
+ <role id="administrator"/>
+ </acl_target>
+
+ </acls>
+
+In the above example, the user ``alice`` has the minimal permissions necessary
+to run basic Pacemaker CLI tools, including using ``crm_mon`` to view the
+cluster status, without being able to modify anything. The user ``bob`` can
+view the entire configuration and status of the cluster, but not make any
+changes. The user ``carol`` can read everything, and change selected cluster
+properties as well as resource roles and location constraints. Finally,
+``dave`` has full read and write access to the entire CIB.
+
+Looking at the ``minimal`` role in more depth, it is designed to allow read
+access to the ``cib`` tag itself, while denying access to particular portions
+of its subtree (which, for the ``cib`` element, means the entire CIB).
+
+This is because the DC node is indicated in the ``cib`` tag, so ``crm_mon``
+will not be able to report the DC otherwise. However, this does change the
+security model to allow by default, since any portions of the CIB not
+explicitly denied will be readable. The ``cib`` read access could be removed
+and replaced with read access to just the ``crm_config`` and ``status``
+sections, for a safer approach at the cost of not seeing the DC in status
+output.
+
+For a simpler configuration, the ``minimal`` role allows read access to the
+entire ``crm_config`` section, which contains cluster properties. It would be
+possible to allow read access to specific properties instead (such as
+``stonith-enabled``, ``dc-uuid``, ``have-quorum``, and ``cluster-name``) to
+restrict access further while still allowing status output, but cluster
+properties are unlikely to be considered sensitive.
+
+
+ACL Limitations
+###############
+
+Actions performed via IPC rather than the CIB
+_____________________________________________
+
+ACLs apply *only* to the CIB.
+
+That means ACLs apply to command-line tools that operate by reading or writing
+the CIB, such as ``crm_attribute`` when managing permanent node attributes,
+``crm_mon``, and ``cibadmin``.
+
+However, command-line tools that communicate directly with Pacemaker daemons
+via IPC are not affected by ACLs. For example, users in the ``haclient`` group
+may still do the following, regardless of ACLs:
+
+* Query transient node attribute values using ``crm_attribute`` and
+ ``attrd_updater``.
+
+* Query basic node information using ``crm_node``.
+
+* Erase resource operation history using ``crm_resource``.
+
+* Query fencing configuration information, and execute fencing against nodes,
+ using ``stonith_admin``.
+
+ACLs and Pacemaker Remote
+_________________________
+
+ACLs apply to commands run on Pacemaker Remote nodes using the Pacemaker Remote
+node's name as the ACL user name.
+
+The idea is that Pacemaker Remote nodes (especially virtual machines and
+containers) are likely to be purpose-built and have different user accounts
+from full cluster nodes.
diff --git a/doc/sphinx/Pacemaker_Explained/advanced-options.rst b/doc/sphinx/Pacemaker_Explained/advanced-options.rst
new file mode 100644
index 0000000..20ab79e
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Explained/advanced-options.rst
@@ -0,0 +1,586 @@
+Advanced Configuration
+----------------------
+
+.. index::
+ single: start-delay; operation attribute
+ single: interval-origin; operation attribute
+ single: interval; interval-origin
+ single: operation; interval-origin
+ single: operation; start-delay
+
+Specifying When Recurring Actions are Performed
+###############################################
+
+By default, recurring actions are scheduled relative to when the resource
+started. In some cases, you might prefer that a recurring action start relative
+to a specific date and time. For example, you might schedule an in-depth
+monitor to run once every 24 hours, and want it to run outside business hours.
+
+To do this, set the operation's ``interval-origin``. The cluster uses this point
+to calculate the correct ``start-delay`` such that the operation will occur
+at ``interval-origin`` plus a multiple of the operation interval.
+
+For example, if the recurring operation's interval is 24h, its
+``interval-origin`` is set to 02:00, and it is currently 14:32, then the
+cluster would initiate the operation after 11 hours and 28 minutes.
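+
+Expressed as an operation definition, the 24-hour example might look like the
+following (the ``id`` and the specific origin date are illustrative; any
+ISO 8601 date/time with a 02:00 time component gives the same schedule):
+
+.. code-block:: xml
+
+   <op id="in-depth-monitor" name="monitor" interval="24h"
+       interval-origin="2021-01-01T02:00:00"/>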
+
+The value specified for ``interval`` and ``interval-origin`` can be any
+date/time conforming to the
+`ISO8601 standard <https://en.wikipedia.org/wiki/ISO_8601>`_. By way of
+example, to specify an operation that would run on the first Monday of
+2021 and every Monday after that, you would add:
+
+.. topic:: Example recurring action that runs relative to base date/time
+
+ .. code-block:: xml
+
+ <op id="intensive-monitor" name="monitor" interval="P7D" interval-origin="2021-W01-1"/>
+
+.. index::
+ single: resource; failure recovery
+ single: operation; failure recovery
+
+.. _failure-handling:
+
+Handling Resource Failure
+#########################
+
+By default, Pacemaker will attempt to recover failed resources by restarting
+them. However, failure recovery is highly configurable.
+
+.. index::
+ single: resource; failure count
+ single: operation; failure count
+
+Failure Counts
+______________
+
+Pacemaker tracks resource failures for each combination of node, resource, and
+operation (start, stop, monitor, etc.).
+
+You can query the fail count for a particular node, resource, and/or operation
+using the ``crm_failcount`` command. For example, to see how many times the
+10-second monitor for ``myrsc`` has failed on ``node1``, run:
+
+.. code-block:: none
+
+ # crm_failcount --query -r myrsc -N node1 -n monitor -I 10s
+
+If you omit the node, ``crm_failcount`` will use the local node. If you omit
+the operation and interval, ``crm_failcount`` will display the sum of the fail
+counts for all operations on the resource.
+
+You can use ``crm_resource --cleanup`` or ``crm_failcount --delete`` to clear
+fail counts. For example, to clear the above monitor failures, run:
+
+.. code-block:: none
+
+ # crm_resource --cleanup -r myrsc -N node1 -n monitor -I 10s
+
+If you omit the resource, ``crm_resource --cleanup`` will clear failures for
+all resources. If you omit the node, it will clear failures on all nodes. If
+you omit the operation and interval, it will clear the failures for all
+operations on the resource.
+
+.. note::
+
+   Even when cleaning up only a single operation, all failed operations will
+   disappear from the status display. This allows the cluster to trigger a
+   re-check of the resource's current status.
+
+Higher-level tools may provide other commands for querying and clearing
+fail counts.
+
+The ``crm_mon`` tool shows the current cluster status, including any failed
+operations. To see the current fail counts for any failed resources, call
+``crm_mon`` with the ``--failcounts`` option. This shows the fail counts per
+resource (that is, the sum of any operation fail counts for the resource).
+
+.. index::
+ single: migration-threshold; resource meta-attribute
+ single: resource; migration-threshold
+
+Failure Response
+________________
+
+Normally, if a running resource fails, Pacemaker will try to stop it and start
+it again. Pacemaker will choose the best location to start it each time, which
+may be the same node that it failed on.
+
+However, if a resource fails repeatedly, there may be an underlying problem on
+that node, and you might want to try a different node in that case. Pacemaker
+allows you to set your preference via the ``migration-threshold`` resource
+meta-attribute. [#]_
+
+If you define ``migration-threshold`` to *N* for a resource, it will be banned
+from the original node after *N* failures there.
+
+.. note::
+
+ The ``migration-threshold`` is per *resource*, even though fail counts are
+ tracked per *operation*. The operation fail counts are added together
+ to compare against the ``migration-threshold``.
+
+By default, fail counts remain until manually cleared by an administrator
+using ``crm_resource --cleanup`` or ``crm_failcount --delete`` (hopefully after
+first fixing the failure's cause). It is possible to have fail counts expire
+automatically by setting the ``failure-timeout`` resource meta-attribute.
+
+.. important::
+
+ A successful operation does not clear past failures. If a recurring monitor
+ operation fails once, succeeds many times, then fails again days later, its
+ fail count is 2. Fail counts are cleared only by manual intervention or
+ failure timeout.
+
+For example, setting ``migration-threshold`` to 2 and ``failure-timeout`` to
+``60s`` would cause the resource to move to a new node after 2 failures, and
+allow it to move back (depending on stickiness and constraint scores) after one
+minute.
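+
+As resource meta-attributes, that combination might be written as follows (the
+resource definition and ``id`` values are illustrative):
+
+.. code-block:: xml
+
+   <primitive id="my-rsc" class="ocf" provider="pacemaker" type="Dummy">
+     <meta_attributes id="my-rsc-meta">
+       <nvpair id="my-rsc-migration-threshold"
+               name="migration-threshold" value="2"/>
+       <nvpair id="my-rsc-failure-timeout"
+               name="failure-timeout" value="60s"/>
+     </meta_attributes>
+   </primitive>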
+
+.. note::
+
+ ``failure-timeout`` is measured since the most recent failure. That is, older
+ failures do not individually time out and lower the fail count. Instead, all
+ failures are timed out simultaneously (and the fail count is reset to 0) if
+ there is no new failure for the timeout period.
+
+There are two exceptions to the migration threshold: when a resource either
+fails to start or fails to stop.
+
+If the cluster property ``start-failure-is-fatal`` is set to ``true`` (which is
+the default), start failures cause the fail count to be set to ``INFINITY`` and
+thus always cause the resource to move immediately.
+
+Stop failures are slightly different and crucial. If a resource fails to stop
+and fencing is enabled, then the cluster will fence the node in order to be
+able to start the resource elsewhere. If fencing is disabled, then the cluster
+has no way to continue and will not try to start the resource elsewhere, but
+will try to stop it again after any failure timeout or clearing.
+
+.. index::
+ single: resource; move
+
+Moving Resources
+################
+
+Moving Resources Manually
+_________________________
+
+There are primarily two occasions when you would want to move a resource from
+its current location: when the whole node is under maintenance, and when a
+single resource needs to be moved.
+
+.. index::
+ single: standby mode
+ single: node; standby mode
+
+Standby Mode
+~~~~~~~~~~~~
+
+Since everything eventually comes down to a score, you could create constraints
+for every resource to prevent them from running on one node. While Pacemaker
+configuration can seem convoluted at times, not even we would require this of
+administrators.
+
+Instead, you can set a special node attribute which tells the cluster "don't
+let anything run here". There is even a helpful tool to help query and set it,
+called ``crm_standby``. To check the standby status of the current machine,
+run:
+
+.. code-block:: none
+
+ # crm_standby -G
+
+A value of ``on`` indicates that the node is *not* able to host any resources,
+while a value of ``off`` says that it *can*.
+
+You can also check the status of other nodes in the cluster by specifying the
+``--node`` option:
+
+.. code-block:: none
+
+ # crm_standby -G --node sles-2
+
+To change the current node's standby status, use ``-v`` instead of ``-G``:
+
+.. code-block:: none
+
+ # crm_standby -v on
+
+Again, you can change another host's value by supplying a hostname with
+``--node``.
+
+A cluster node in standby mode will not run resources, but it still contributes
+to quorum, and may fence or be fenced by other nodes.
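+
+Behind the scenes, ``crm_standby`` manages a ``standby`` node attribute. A
+permanent standby setting might appear in the CIB along these lines (the node
+and ``nvpair`` ``id`` values are illustrative):
+
+.. code-block:: xml
+
+   <node id="2" uname="sles-2">
+     <instance_attributes id="nodes-2">
+       <nvpair id="nodes-2-standby" name="standby" value="on"/>
+     </instance_attributes>
+   </node>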
+
+Moving One Resource
+~~~~~~~~~~~~~~~~~~~
+
+When only one resource is required to move, we could do this by creating
+location constraints. However, once again we provide a user-friendly shortcut
+as part of the ``crm_resource`` command, which creates and modifies the extra
+constraints for you. If ``Email`` were running on ``sles-1`` and you wanted it
+moved to a specific location, the command would look something like:
+
+.. code-block:: none
+
+ # crm_resource -M -r Email -H sles-2
+
+Behind the scenes, the tool will create the following location constraint:
+
+.. code-block:: xml
+
+ <rsc_location id="cli-prefer-Email" rsc="Email" node="sles-2" score="INFINITY"/>
+
+It is important to note that subsequent invocations of ``crm_resource -M`` are
+not cumulative. So, if you ran these commands:
+
+.. code-block:: none
+
+ # crm_resource -M -r Email -H sles-2
+ # crm_resource -M -r Email -H sles-3
+
+then it is as if you had never performed the first command.
+
+To allow the resource to move back again, use:
+
+.. code-block:: none
+
+ # crm_resource -U -r Email
+
+Note the use of the word *allow*. The resource *can* move back to its original
+location, but depending on ``resource-stickiness``, location constraints, and
+so forth, it might stay where it is.
+
+To be absolutely certain that it moves back to ``sles-1``, move it there before
+issuing the call to ``crm_resource -U``:
+
+.. code-block:: none
+
+ # crm_resource -M -r Email -H sles-1
+ # crm_resource -U -r Email
+
+Alternatively, if you only care that the resource should be moved from its
+current location, try:
+
+.. code-block:: none
+
+ # crm_resource -B -r Email
+
+which will instead create a negative constraint, like:
+
+.. code-block:: xml
+
+ <rsc_location id="cli-ban-Email-on-sles-1" rsc="Email" node="sles-1" score="-INFINITY"/>
+
+This will achieve the desired effect, but will also have long-term
+consequences. As the tool will warn you, the creation of a ``-INFINITY``
+constraint will prevent the resource from running on that node until
+``crm_resource -U`` is used. This includes the situation where every other
+cluster node is no longer available!
+
+In some cases, such as when ``resource-stickiness`` is set to ``INFINITY``, it
+is possible that you will end up with the problem described in
+:ref:`node-score-equal`. The tool can detect some of these cases and deals with
+them by creating both positive and negative constraints. For example:
+
+.. code-block:: xml
+
+ <rsc_location id="cli-ban-Email-on-sles-1" rsc="Email" node="sles-1" score="-INFINITY"/>
+ <rsc_location id="cli-prefer-Email" rsc="Email" node="sles-2" score="INFINITY"/>
+
+which has the same long-term consequences as discussed earlier.
+
+Moving Resources Due to Connectivity Changes
+____________________________________________
+
+You can configure the cluster to move resources when external connectivity is
+lost in two steps.
+
+.. index::
+ single: ocf:pacemaker:ping resource
+ single: ping resource
+
+Tell Pacemaker to Monitor Connectivity
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+First, add an ``ocf:pacemaker:ping`` resource to the cluster. The ``ping``
+resource uses the system utility of the same name to test whether a list of
+machines (specified by DNS hostname or IP address) are reachable, and uses the
+results to maintain a node attribute.
+
+The node attribute is called ``pingd`` by default, but is customizable in order
+to allow multiple ping groups to be defined.
+
+Normally, the ping resource should run on all cluster nodes, which means that
+you'll need to create a clone. A template for this can be found below, along
+with a description of the most interesting parameters.
+
+.. table:: **Commonly Used ocf:pacemaker:ping Resource Parameters**
+ :widths: 1 4
+
+ +--------------------+--------------------------------------------------------------+
+ | Resource Parameter | Description |
+ +====================+==============================================================+
+ | dampen | .. index:: |
+ | | single: ocf:pacemaker:ping resource; dampen parameter |
+ | | single: dampen; ocf:pacemaker:ping resource parameter |
+ | | |
+ | | The time to wait (dampening) for further changes to occur. |
+ | | Use this to prevent a resource from bouncing around the |
+ | | cluster when cluster nodes notice the loss of connectivity |
+ | | at slightly different times. |
+ +--------------------+--------------------------------------------------------------+
+ | multiplier | .. index:: |
+ | | single: ocf:pacemaker:ping resource; multiplier parameter |
+ | | single: multiplier; ocf:pacemaker:ping resource parameter |
+ | | |
+ | | The number of connected ping nodes gets multiplied by this |
+ | | value to get a score. Useful when there are multiple ping |
+ | | nodes configured. |
+ +--------------------+--------------------------------------------------------------+
+ | host_list | .. index:: |
+ | | single: ocf:pacemaker:ping resource; host_list parameter |
+ | | single: host_list; ocf:pacemaker:ping resource parameter |
+ | | |
+ | | The machines to contact in order to determine the current |
+ | | connectivity status. Allowed values include resolvable DNS |
+ | | connectivity host names, IPv4 addresses, and IPv6 addresses. |
+ +--------------------+--------------------------------------------------------------+
+
+.. topic:: Example ping resource that checks node connectivity once every minute
+
+ .. code-block:: xml
+
+ <clone id="Connected">
+ <primitive id="ping" class="ocf" provider="pacemaker" type="ping">
+ <instance_attributes id="ping-attrs">
+ <nvpair id="ping-dampen" name="dampen" value="5s"/>
+ <nvpair id="ping-multiplier" name="multiplier" value="1000"/>
+ <nvpair id="ping-hosts" name="host_list" value="my.gateway.com www.bigcorp.com"/>
+ </instance_attributes>
+ <operations>
+ <op id="ping-monitor-60s" interval="60s" name="monitor"/>
+ </operations>
+ </primitive>
+ </clone>
+
+.. important::
+
+   You're only half done. The next section deals with telling Pacemaker how to
+   interpret the connectivity status that ``ocf:pacemaker:ping`` is recording.
+
+Tell Pacemaker How to Interpret the Connectivity Data
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. important::
+
+ Before attempting the following, make sure you understand
+ :ref:`rules`.
+
+There are a number of ways to use the connectivity data.
+
+The most common setup is for people to have a single ping target (for example,
+the service network's default gateway), to prevent the cluster from running a
+resource on any unconnected node.
+
+.. topic:: Don't run a resource on unconnected nodes
+
+ .. code-block:: xml
+
+ <rsc_location id="WebServer-no-connectivity" rsc="Webserver">
+ <rule id="ping-exclude-rule" score="-INFINITY" >
+ <expression id="ping-exclude" attribute="pingd" operation="not_defined"/>
+ </rule>
+ </rsc_location>
+
+A more complex setup is to have a number of ping targets configured. You can
+require the cluster to only run resources on nodes that can connect to all (or
+a minimum subset) of them.
+
+.. topic:: Run only on nodes connected to three or more ping targets
+
+ .. code-block:: xml
+
+ <primitive id="ping" provider="pacemaker" class="ocf" type="ping">
+ ... <!-- omitting some configuration to highlight important parts -->
+ <nvpair id="ping-multiplier" name="multiplier" value="1000"/>
+ ...
+ </primitive>
+ ...
+ <rsc_location id="WebServer-connectivity" rsc="Webserver">
+ <rule id="ping-prefer-rule" score="-INFINITY" >
+ <expression id="ping-prefer" attribute="pingd" operation="lt" value="3000"/>
+ </rule>
+ </rsc_location>
+
+Alternatively, you can tell the cluster only to *prefer* nodes with the best
+connectivity, by using ``score-attribute`` in the rule. Just be sure to set
+``multiplier`` to a value higher than that of ``resource-stickiness`` (and
+don't set either of them to ``INFINITY``).
+
+.. topic:: Prefer node with most connected ping nodes
+
+ .. code-block:: xml
+
+ <rsc_location id="WebServer-connectivity" rsc="Webserver">
+ <rule id="ping-prefer-rule" score-attribute="pingd" >
+ <expression id="ping-prefer" attribute="pingd" operation="defined"/>
+ </rule>
+ </rsc_location>
+
+It is perhaps easier to think of this in terms of the simple constraints that
+the cluster translates it into. For example, if ``sles-1`` is connected to all
+five ping nodes but ``sles-2`` is only connected to two, then it would be as if
+you instead had the following constraints in your configuration:
+
+.. topic:: How the cluster translates the above location constraint
+
+ .. code-block:: xml
+
+ <rsc_location id="ping-1" rsc="Webserver" node="sles-1" score="5000"/>
+ <rsc_location id="ping-2" rsc="Webserver" node="sles-2" score="2000"/>
+
+The advantage is that you don't have to manually update any constraints
+whenever your network connectivity changes.
+
+You can also combine the concepts above into something even more complex. The
+example below shows how you can prefer the node with the most connected ping
+nodes provided they have connectivity to at least three (again assuming that
+``multiplier`` is set to 1000).
+
+.. topic:: More complex example of choosing location based on connectivity
+
+ .. code-block:: xml
+
+ <rsc_location id="WebServer-connectivity" rsc="Webserver">
+ <rule id="ping-exclude-rule" score="-INFINITY" >
+ <expression id="ping-exclude" attribute="pingd" operation="lt" value="3000"/>
+ </rule>
+ <rule id="ping-prefer-rule" score-attribute="pingd" >
+ <expression id="ping-prefer" attribute="pingd" operation="defined"/>
+ </rule>
+ </rsc_location>
+
+
+.. _live-migration:
+
+Migrating Resources
+___________________
+
+Normally, when the cluster needs to move a resource, it fully restarts the
+resource (that is, it stops the resource on the current node and starts it on
+the new node).
+
+However, some types of resources, such as many virtual machines, are able to
+move to another location without loss of state (often referred to as live
+migration or hot migration). In Pacemaker, this is called resource migration.
+Pacemaker can be configured to migrate a resource when moving it, rather than
+restarting it.
+
+Not all resources are able to migrate; see the
+:ref:`migration checklist <migration_checklist>` below. Even those that can
+will not do so in all situations. Conceptually, there are two requirements from
+which the other prerequisites follow:
+
+* The resource must be active and healthy at the old location; and
+* everything required for the resource to run must be available on both the old
+ and new locations.
+
+The cluster is able to accommodate both *push* and *pull* migration models by
+requiring the resource agent to support two special actions: ``migrate_to``
+(performed on the current location) and ``migrate_from`` (performed on the
+destination).
+
+In push migration, the process on the current location transfers the resource
+to the new location, where it is later activated. In this scenario, most of the
+work would be done in the ``migrate_to`` action and, if anything, the
+activation would occur during ``migrate_from``.
+
+Conversely for pull, the ``migrate_to`` action is practically empty and
+``migrate_from`` does most of the work, extracting the relevant resource state
+from the old location and activating it.
+
+There is no wrong or right way for a resource agent to implement migration, as
+long as it works.
+
+.. _migration_checklist:
+
+.. topic:: Migration Checklist
+
+ * The resource may not be a clone.
+ * The resource agent standard must be OCF.
+ * The resource must not be in a failed or degraded state.
+ * The resource agent must support ``migrate_to`` and ``migrate_from``
+ actions, and advertise them in its meta-data.
+ * The resource must have the ``allow-migrate`` meta-attribute set to
+ ``true`` (which is not the default).
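+
+A migratable resource definition might therefore include the meta-attribute
+like this (an illustrative sketch; the agent's required instance attributes
+are omitted for brevity):
+
+.. code-block:: xml
+
+   <primitive id="my-vm" class="ocf" provider="heartbeat" type="VirtualDomain">
+     <meta_attributes id="my-vm-meta">
+       <nvpair id="my-vm-allow-migrate" name="allow-migrate" value="true"/>
+     </meta_attributes>
+   </primitive>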
+
+If an otherwise migratable resource depends on another resource via an ordering
+constraint, there are special situations in which it will be restarted rather
+than migrated.
+
+For example, if the resource depends on a clone, and at the time the resource
+needs to be moved, the clone has instances that are stopping and instances that
+are starting, then the resource will be restarted. The scheduler is not yet
+able to model this situation correctly and so takes the safer (if less optimal)
+path.
+
+Also, if a migratable resource depends on a non-migratable resource, and both
+need to be moved, the migratable resource will be restarted.
+
+
+.. index::
+ single: reload
+ single: reload-agent
+
+Reloading an Agent After a Definition Change
+############################################
+
+The cluster automatically detects changes to the configuration of active
+resources. The cluster's normal response is to stop the service (using the old
+definition) and start it again (with the new definition). This works, but some
+resource agents are smarter and can be told to use a new set of options without
+restarting.
+
+To take advantage of this capability, the resource agent must:
+
+* Implement the ``reload-agent`` action. What it should do depends completely
+ on your application!
+
+ .. note::
+
+ Resource agents may also implement a ``reload`` action to make the managed
+ service reload its own *native* configuration. This is different from
+ ``reload-agent``, which makes effective changes in the resource's
+ *Pacemaker* configuration (specifically, the values of the agent's
+ reloadable parameters).
+
+* Advertise the ``reload-agent`` operation in the ``actions`` section of its
+ meta-data.
+
+* Set the ``reloadable`` attribute to 1 in the ``parameters`` section of
+ its meta-data for any parameters eligible to be reloaded after a change.
+
+Once these requirements are satisfied, the cluster will automatically know to
+reload the resource (instead of restarting) when a reloadable parameter
+changes.
+
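+For illustration, the relevant pieces of such an agent's meta-data might look
+like the following (the ``loglevel`` parameter is hypothetical):
+
+.. code-block:: xml
+
+   <parameters>
+     <parameter name="loglevel" reloadable="1">
+       <content type="string" default="info"/>
+     </parameter>
+   </parameters>
+   <actions>
+     <action name="reload-agent" timeout="20s"/>
+   </actions>
+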
+.. note::
+
+   Metadata will not be re-read unless the resource needs to be started. If
+   you edit the agent of an already active resource to mark a parameter as
+   reloadable, the resource may restart the first time that parameter's value
+   changes.
+
+.. note::
+
+ If both a reloadable and non-reloadable parameter are changed
+ simultaneously, the resource will be restarted.
+
+.. rubric:: Footnotes
+
+.. [#] The naming of this option was perhaps unfortunate as it is easily
+ confused with live migration, the process of moving a resource from one
+ node to another without stopping it. Xen virtual guests are the most
+ common example of resources that can be migrated in this manner.
diff --git a/doc/sphinx/Pacemaker_Explained/advanced-resources.rst b/doc/sphinx/Pacemaker_Explained/advanced-resources.rst
new file mode 100644
index 0000000..a61b76d
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Explained/advanced-resources.rst
@@ -0,0 +1,1629 @@
+Advanced Resource Types
+-----------------------
+
+.. index::
+ single: group resource
+ single: resource; group
+
+.. _group-resources:
+
+Groups - A Syntactic Shortcut
+#############################
+
+One of the most common elements of a cluster is a set of resources
+that need to be located together, start sequentially, and stop in the
+reverse order. To simplify this configuration, we support the concept
+of groups.
+
+.. topic:: A group of two primitive resources
+
+ .. code-block:: xml
+
+ <group id="shortcut">
+ <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
+ <instance_attributes id="params-public-ip">
+ <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/>
+ </instance_attributes>
+ </primitive>
+ <primitive id="Email" class="lsb" type="exim"/>
+ </group>
+
+Although the example above contains only two resources, there is no
+limit to the number of resources a group can contain. The example is
+also sufficient to explain the fundamental properties of a group:
+
+* Resources are started in the order they appear in the group (**Public-IP**
+  first, then **Email**)
+* Resources are stopped in the reverse of that order (**Email** first, then
+  **Public-IP**)
+
+If a resource in the group can't run anywhere, then nothing after it in the
+group is allowed to run either:
+
+* If **Public-IP** can't run anywhere, neither can **Email**;
+* but if **Email** can't run anywhere, this does not affect **Public-IP**
+ in any way
+
+The group above is logically equivalent to writing:
+
+.. topic:: How the cluster sees a group resource
+
+ .. code-block:: xml
+
+ <configuration>
+ <resources>
+ <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
+ <instance_attributes id="params-public-ip">
+ <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/>
+ </instance_attributes>
+ </primitive>
+ <primitive id="Email" class="lsb" type="exim"/>
+ </resources>
+ <constraints>
+ <rsc_colocation id="xxx" rsc="Email" with-rsc="Public-IP" score="INFINITY"/>
+ <rsc_order id="yyy" first="Public-IP" then="Email"/>
+ </constraints>
+ </configuration>
+
+As the group grows, the configuration effort saved in this way becomes
+increasingly significant.
+
+Another (typical) example of a group is a DRBD volume, the filesystem
+mount, an IP address, and an application that uses them.
+
+.. index::
+ pair: XML element; group
+
+Group Properties
+________________
+
+.. table:: **Properties of a Group Resource**
+ :widths: 1 4
+
+ +-------------+------------------------------------------------------------------+
+ | Field | Description |
+ +=============+==================================================================+
+ | id | .. index:: |
+ | | single: group; property, id |
+ | | single: property; id (group) |
+ | | single: id; group property |
+ | | |
+ | | A unique name for the group |
+ +-------------+------------------------------------------------------------------+
+ | description | .. index:: |
+ | | single: group; attribute, description |
+ | | single: attribute; description (group) |
+ | | single: description; group attribute |
+ | | |
+ | | An optional description of the group, for the user's own |
+ | | purposes. |
+ | | E.g. ``resources needed for website`` |
+ +-------------+------------------------------------------------------------------+
+
+Group Options
+_____________
+
+Groups inherit the ``priority``, ``target-role``, and ``is-managed`` properties
+from primitive resources. See :ref:`resource_options` for information about
+those properties.
+
+Group Instance Attributes
+_________________________
+
+Groups have no instance attributes. However, any that are set for the group
+object will be inherited by the group's children.
+
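+For example, in the following sketch the hypothetical ``example_param``
+attribute behaves as if it had been set on both **Public-IP** and **Email**
+individually:
+
+.. code-block:: xml
+
+   <group id="shortcut">
+     <instance_attributes id="shortcut-attrs">
+       <nvpair id="shortcut-example" name="example_param" value="example_value"/>
+     </instance_attributes>
+     <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat"/>
+     <primitive id="Email" class="lsb" type="exim"/>
+   </group>
+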
+Group Contents
+______________
+
+Groups may only contain a collection of cluster resources (see
+:ref:`primitive-resource`). To refer to a child of a group resource, just use
+the child's ``id`` instead of the group's.
+
+Group Constraints
+_________________
+
+Although it is possible to reference a group's children in
+constraints, it is usually preferable to reference the group itself.
+
+.. topic:: Some constraints involving groups
+
+ .. code-block:: xml
+
+ <constraints>
+ <rsc_location id="group-prefers-node1" rsc="shortcut" node="node1" score="500"/>
+ <rsc_colocation id="webserver-with-group" rsc="Webserver" with-rsc="shortcut"/>
+        <rsc_order id="start-group-then-webserver" first="shortcut" then="Webserver"/>
+ </constraints>
+
+.. index::
+ pair: resource-stickiness; group
+
+Group Stickiness
+________________
+
+Stickiness, the measure of how much a resource wants to stay where it
+is, is additive in groups. Every active resource of the group will
+contribute its stickiness value to the group's total. So if the
+default ``resource-stickiness`` is 100, and a group has seven members,
+five of which are active, then the group as a whole will prefer its
+current location with a score of 500.
+
+.. index::
+ single: clone
+ single: resource; clone
+
+.. _s-resource-clone:
+
+Clones - Resources That Can Have Multiple Active Instances
+##########################################################
+
+*Clone* resources are resources that can have more than one copy active at the
+same time. This allows you, for example, to run a copy of a daemon on every
+node. You can clone any primitive or group resource [#]_.
+
+Anonymous versus Unique Clones
+______________________________
+
+A clone resource is configured to be either *anonymous* or *globally unique*.
+
+Anonymous clones are the simplest. These behave completely identically
+everywhere they are running. Because of this, there can be only one instance of
+an anonymous clone active per node.
+
+The instances of globally unique clones are distinct entities. All instances
+are launched identically, but one instance of the clone is not identical to any
+other instance, whether running on the same node or a different node. As an
+example, a cloned IP address can use special kernel functionality such that
+each instance handles a subset of requests for the same IP address.
+
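+A sketch of such a configuration, using the ``ocf:heartbeat:IPaddr2`` agent
+(the address and ids here are illustrative):
+
+.. topic:: A globally unique clone of a cluster IP address
+
+   .. code-block:: xml
+
+      <clone id="ClusterIP-clone">
+        <meta_attributes id="ClusterIP-clone-meta">
+          <nvpair id="ClusterIP-unique" name="globally-unique" value="true"/>
+          <nvpair id="ClusterIP-clone-max" name="clone-max" value="2"/>
+          <nvpair id="ClusterIP-clone-node-max" name="clone-node-max" value="2"/>
+        </meta_attributes>
+        <primitive id="ClusterIP" class="ocf" provider="heartbeat" type="IPaddr2">
+          <instance_attributes id="ClusterIP-attrs">
+            <nvpair id="ClusterIP-ip" name="ip" value="192.0.2.10"/>
+            <nvpair id="ClusterIP-hash" name="clusterip_hash" value="sourceip"/>
+          </instance_attributes>
+        </primitive>
+      </clone>
+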
+.. index::
+ single: promotable clone
+ single: resource; promotable
+
+.. _s-resource-promotable:
+
+Promotable clones
+_________________
+
+If a clone is *promotable*, its instances can perform a special role that
+Pacemaker will manage via the ``promote`` and ``demote`` actions of the resource
+agent.
+
+Services that support such a special role have various terms for the special
+role and the default role: primary and secondary, master and replica,
+controller and worker, etc. Pacemaker uses the terms *promoted* and
+*unpromoted* to be agnostic to what the service calls them or what they do.
+
+All that Pacemaker cares about is that an instance comes up in the unpromoted role
+when started, and the resource agent supports the ``promote`` and ``demote`` actions
+to manage entering and exiting the promoted role.
+
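+For example, a promotable clone can be created by setting the ``promotable``
+meta-attribute on an ordinary clone (the ``ocf:pacemaker:Stateful`` agent used
+here is a demonstration agent shipped with Pacemaker):
+
+.. topic:: A promotable clone
+
+   .. code-block:: xml
+
+      <clone id="statefulset">
+        <meta_attributes id="statefulset-meta">
+          <nvpair id="statefulset-promotable" name="promotable" value="true"/>
+          <nvpair id="statefulset-promoted-max" name="promoted-max" value="1"/>
+        </meta_attributes>
+        <primitive id="stateful" class="ocf" provider="pacemaker" type="Stateful"/>
+      </clone>
+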
+.. index::
+ pair: XML element; clone
+
+Clone Properties
+________________
+
+.. table:: **Properties of a Clone Resource**
+ :widths: 1 4
+
+ +-------------+------------------------------------------------------------------+
+ | Field | Description |
+ +=============+==================================================================+
+ | id | .. index:: |
+ | | single: clone; property, id |
+ | | single: property; id (clone) |
+ | | single: id; clone property |
+ | | |
+ | | A unique name for the clone |
+ +-------------+------------------------------------------------------------------+
+ | description | .. index:: |
+ | | single: clone; attribute, description |
+ | | single: attribute; description (clone) |
+ | | single: description; clone attribute |
+ | | |
+ | | An optional description of the clone, for the user's own |
+ | | purposes. |
+ | | E.g. ``IP address for website`` |
+ +-------------+------------------------------------------------------------------+
+
+.. index::
+ pair: options; clone
+
+Clone Options
+_____________
+
+:ref:`Options <resource_options>` inherited from primitive resources:
+``priority``, ``target-role``, and ``is-managed``
+
+.. table:: **Clone-specific configuration options**
+ :class: longtable
+ :widths: 1 1 3
+
+ +-------------------+-----------------+-------------------------------------------------------+
+ | Field | Default | Description |
+ +===================+=================+=======================================================+
+ | globally-unique | false | .. index:: |
+ | | | single: clone; option, globally-unique |
+ | | | single: option; globally-unique (clone) |
+ | | | single: globally-unique; clone option |
+ | | | |
+ | | | If **true**, each clone instance performs a |
+ | | | distinct function |
+ +-------------------+-----------------+-------------------------------------------------------+
+ | clone-max | 0 | .. index:: |
+ | | | single: clone; option, clone-max |
+ | | | single: option; clone-max (clone) |
+ | | | single: clone-max; clone option |
+ | | | |
+ | | | The maximum number of clone instances that can |
+ | | | be started across the entire cluster. If 0, the |
+ | | | number of nodes in the cluster will be used. |
+ +-------------------+-----------------+-------------------------------------------------------+
+ | clone-node-max | 1 | .. index:: |
+ | | | single: clone; option, clone-node-max |
+ | | | single: option; clone-node-max (clone) |
+ | | | single: clone-node-max; clone option |
+ | | | |
+ | | | If ``globally-unique`` is **true**, the maximum |
+ | | | number of clone instances that can be started |
+ | | | on a single node |
+ +-------------------+-----------------+-------------------------------------------------------+
+ | clone-min | 0 | .. index:: |
+ | | | single: clone; option, clone-min |
+ | | | single: option; clone-min (clone) |
+ | | | single: clone-min; clone option |
+ | | | |
+ | | | Require at least this number of clone instances |
+ | | | to be runnable before allowing resources |
+ | | | depending on the clone to be runnable. A value |
+ | | | of 0 means require all clone instances to be |
+ | | | runnable. |
+ +-------------------+-----------------+-------------------------------------------------------+
+ | notify | false | .. index:: |
+ | | | single: clone; option, notify |
+ | | | single: option; notify (clone) |
+ | | | single: notify; clone option |
+ | | | |
+ | | | Call the resource agent's **notify** action for |
+ | | | all active instances, before and after starting |
+ | | | or stopping any clone instance. The resource |
+ | | | agent must support this action. |
+ | | | Allowed values: **false**, **true** |
+ +-------------------+-----------------+-------------------------------------------------------+
+ | ordered | false | .. index:: |
+ | | | single: clone; option, ordered |
+ | | | single: option; ordered (clone) |
+ | | | single: ordered; clone option |
+ | | | |
+ | | | If **true**, clone instances must be started |
+ | | | sequentially instead of in parallel. |
+ | | | Allowed values: **false**, **true** |
+ +-------------------+-----------------+-------------------------------------------------------+
+ | interleave | false | .. index:: |
+ | | | single: clone; option, interleave |
+ | | | single: option; interleave (clone) |
+ | | | single: interleave; clone option |
+ | | | |
+ | | | When this clone is ordered relative to another |
+ | | | clone, if this option is **false** (the default), |
+ | | | the ordering is relative to *all* instances of |
+ | | | the other clone, whereas if this option is |
+ | | | **true**, the ordering is relative only to |
+ | | | instances on the same node. |
+ | | | Allowed values: **false**, **true** |
+ +-------------------+-----------------+-------------------------------------------------------+
+ | promotable | false | .. index:: |
+ | | | single: clone; option, promotable |
+ | | | single: option; promotable (clone) |
+ | | | single: promotable; clone option |
+ | | | |
+ | | | If **true**, clone instances can perform a |
+ | | | special role that Pacemaker will manage via the |
+ | | | resource agent's **promote** and **demote** |
+ | | | actions. The resource agent must support these |
+ | | | actions. |
+ | | | Allowed values: **false**, **true** |
+ +-------------------+-----------------+-------------------------------------------------------+
+ | promoted-max | 1 | .. index:: |
+ | | | single: clone; option, promoted-max |
+ | | | single: option; promoted-max (clone) |
+ | | | single: promoted-max; clone option |
+ | | | |
+ | | | If ``promotable`` is **true**, the number of |
+ | | | instances that can be promoted at one time |
+ | | | across the entire cluster |
+ +-------------------+-----------------+-------------------------------------------------------+
+ | promoted-node-max | 1 | .. index:: |
+ | | | single: clone; option, promoted-node-max |
+ | | | single: option; promoted-node-max (clone) |
+ | | | single: promoted-node-max; clone option |
+ | | | |
+ | | | If ``promotable`` is **true** and ``globally-unique`` |
+   |                   |                 | is **false**, the number of clone instances that can  |
+   |                   |                 | be promoted at one time on a single node              |
+ +-------------------+-----------------+-------------------------------------------------------+
+
+.. note:: **Deprecated Terminology**
+
+ In older documentation and online examples, you may see promotable clones
+ referred to as *multi-state*, *stateful*, or *master/slave*; these mean the
+ same thing as *promotable*. Certain syntax is supported for backward
+ compatibility, but is deprecated and will be removed in a future version:
+
+ * Using a ``master`` tag, instead of a ``clone`` tag with the ``promotable``
+ meta-attribute set to ``true``
+ * Using the ``master-max`` meta-attribute instead of ``promoted-max``
+ * Using the ``master-node-max`` meta-attribute instead of
+ ``promoted-node-max``
+ * Using ``Master`` as a role name instead of ``Promoted``
+ * Using ``Slave`` as a role name instead of ``Unpromoted``
+
+
+Clone Contents
+______________
+
+Clones must contain exactly one primitive or group resource.
+
+.. topic:: A clone that runs a web server on all nodes
+
+ .. code-block:: xml
+
+ <clone id="apache-clone">
+ <primitive id="apache" class="lsb" type="apache">
+ <operations>
+ <op id="apache-monitor" name="monitor" interval="30"/>
+ </operations>
+ </primitive>
+ </clone>
+
+.. warning::
+
+ You should never reference the name of a clone's child (the primitive or group
+ resource being cloned). If you think you need to do this, you probably need to
+ re-evaluate your design.
+
+Clone Instance Attributes
+_________________________
+
+Clones have no instance attributes; however, any that are set here will be
+inherited by the clone's child.
+
+.. index::
+ single: clone; constraint
+
+Clone Constraints
+_________________
+
+In most cases, a clone will have a single instance on each active cluster
+node. If this is not the case, you can indicate which nodes the
+cluster should preferentially assign copies to with resource location
+constraints. These constraints are written no differently from those
+for primitive resources except that the clone's **id** is used.
+
+.. topic:: Some constraints involving clones
+
+ .. code-block:: xml
+
+ <constraints>
+ <rsc_location id="clone-prefers-node1" rsc="apache-clone" node="node1" score="500"/>
+        <rsc_colocation id="stats-with-clone" rsc="apache-stats" with-rsc="apache-clone"/>
+ <rsc_order id="start-clone-then-stats" first="apache-clone" then="apache-stats"/>
+ </constraints>
+
+Ordering constraints behave slightly differently for clones. In the
+example above, ``apache-stats`` will wait until all copies of ``apache-clone``
+that need to be started have done so before being started itself.
+Only if *no* copies can be started will ``apache-stats`` be prevented
+from being active. Additionally, the clone will wait for
+``apache-stats`` to be stopped before stopping itself.
+
+Colocation of a primitive or group resource with a clone means that
+the resource can run on any node with an active instance of the clone.
+The cluster will choose an instance based on where the clone is running and
+the resource's own location preferences.
+
+Colocation between clones is also possible. If one clone **A** is colocated
+with another clone **B**, the set of allowed locations for **A** is limited to
+nodes on which **B** is (or will be) active. Placement is then performed
+normally.
+
+.. index::
+ single: promotable clone; constraint
+
+.. _promotable-clone-constraints:
+
+Promotable Clone Constraints
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+For promotable clone resources, the ``first-action`` and/or ``then-action`` fields
+for ordering constraints may be set to ``promote`` or ``demote`` to constrain the
+promoted role, and colocation constraints may contain ``rsc-role`` and/or
+``with-rsc-role`` fields.
+
+.. topic:: Constraints involving promotable clone resources
+
+ .. code-block:: xml
+
+ <constraints>
+ <rsc_location id="db-prefers-node1" rsc="database" node="node1" score="500"/>
+ <rsc_colocation id="backup-with-db-unpromoted" rsc="backup"
+ with-rsc="database" with-rsc-role="Unpromoted"/>
+ <rsc_colocation id="myapp-with-db-promoted" rsc="myApp"
+ with-rsc="database" with-rsc-role="Promoted"/>
+ <rsc_order id="start-db-before-backup" first="database" then="backup"/>
+ <rsc_order id="promote-db-then-app" first="database" first-action="promote"
+ then="myApp" then-action="start"/>
+ </constraints>
+
+In the example above, **myApp** will wait until one of the database
+copies has been started and promoted before being started
+itself on the same node. Only if no copies can be promoted will **myApp** be
+prevented from being active. Additionally, the cluster will wait for
+**myApp** to be stopped before demoting the database.
+
+Colocation of a primitive or group resource with a promotable clone
+resource means that it can run on any node with an active instance of
+the promotable clone resource that has the specified role (``Promoted`` or
+``Unpromoted``). In the example above, the cluster will choose a location
+based on where database is running in the promoted role, and if there are
+multiple promoted instances it will also factor in **myApp**'s own location
+preferences when deciding which location to choose.
+
+Colocation with regular clones and other promotable clone resources is also
+possible. In such cases, the set of allowed locations for the **rsc**
+clone is (after role filtering) limited to nodes on which the
+``with-rsc`` promotable clone resource is (or will be) in the specified role.
+Placement is then performed as normal.
+
+Using Promotable Clone Resources in Colocation Sets
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+When a promotable clone is used in a :ref:`resource set <s-resource-sets>`
+inside a colocation constraint, the resource set may take a ``role`` attribute.
+
+In the following example, an instance of **B** may be promoted only on a node
+where **A** is in the promoted role. Additionally, resources **C** and **D**
+must be located on a node where both **A** and **B** are promoted.
+
+.. topic:: Colocate C and D with A's and B's promoted instances
+
+ .. code-block:: xml
+
+ <constraints>
+ <rsc_colocation id="coloc-1" score="INFINITY" >
+ <resource_set id="colocated-set-example-1" sequential="true" role="Promoted">
+ <resource_ref id="A"/>
+ <resource_ref id="B"/>
+ </resource_set>
+ <resource_set id="colocated-set-example-2" sequential="true">
+ <resource_ref id="C"/>
+ <resource_ref id="D"/>
+ </resource_set>
+ </rsc_colocation>
+ </constraints>
+
+Using Promotable Clone Resources in Ordered Sets
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+When a promotable clone is used in a :ref:`resource set <s-resource-sets>`
+inside an ordering constraint, the resource set may take an ``action``
+attribute.
+
+.. topic:: Start C and D after first promoting A and B
+
+ .. code-block:: xml
+
+ <constraints>
+ <rsc_order id="order-1" score="INFINITY" >
+ <resource_set id="ordered-set-1" sequential="true" action="promote">
+ <resource_ref id="A"/>
+ <resource_ref id="B"/>
+ </resource_set>
+ <resource_set id="ordered-set-2" sequential="true" action="start">
+ <resource_ref id="C"/>
+ <resource_ref id="D"/>
+ </resource_set>
+ </rsc_order>
+ </constraints>
+
+In the above example, **B** cannot be promoted until **A** has been promoted.
+Additionally, resources **C** and **D** must wait until **A** and **B** have
+been promoted before they can start.
+
+.. index::
+ pair: resource-stickiness; clone
+
+.. _s-clone-stickiness:
+
+Clone Stickiness
+________________
+
+To achieve a stable allocation pattern, clones are slightly sticky by
+default. If no value for ``resource-stickiness`` is provided, the clone
+will use a value of 1. Being a small value, it causes minimal
+disturbance to the score calculations of other resources but is enough
+to prevent Pacemaker from needlessly moving copies around the cluster.
+
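+To override the default, set ``resource-stickiness`` explicitly in the clone's
+meta-attributes, for example (ids here are illustrative):
+
+.. code-block:: xml
+
+   <clone id="apache-clone">
+     <meta_attributes id="apache-clone-meta">
+       <nvpair id="apache-clone-stickiness" name="resource-stickiness" value="0"/>
+     </meta_attributes>
+     <primitive id="apache" class="lsb" type="apache"/>
+   </clone>
+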
+.. note::
+
+ For globally unique clones, this may result in multiple instances of the
+ clone staying on a single node, even after another eligible node becomes
+ active (for example, after being put into standby mode then made active again).
+ If you do not want this behavior, specify a ``resource-stickiness`` of 0
+ for the clone temporarily and let the cluster adjust, then set it back
+ to 1 if you want the default behavior to apply again.
+
+.. important::
+
+ If ``resource-stickiness`` is set in the ``rsc_defaults`` section, it will
+ apply to clone instances as well. This means an explicit ``resource-stickiness``
+ of 0 in ``rsc_defaults`` works differently from the implicit default used when
+ ``resource-stickiness`` is not specified.
+
+Clone Resource Agent Requirements
+_________________________________
+
+Any resource can be used as an anonymous clone, as it requires no
+additional support from the resource agent. Whether it makes sense to
+do so depends on your resource and its resource agent.
+
+Resource Agent Requirements for Globally Unique Clones
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Globally unique clones require additional support in the resource agent. In
+particular, it must only respond with ``${OCF_SUCCESS}`` if the node has that
+exact instance active. All other probes for instances of the clone should
+result in ``${OCF_NOT_RUNNING}`` (or one of the other OCF error codes if
+the instance has failed).
+
+Individual instances of a clone are identified by appending a colon and a
+numerical offset, e.g. **apache:2**.
+
+Resource agents can find out how many copies there are by examining
+the ``OCF_RESKEY_CRM_meta_clone_max`` environment variable and which
+instance it is by examining ``OCF_RESKEY_CRM_meta_clone``.
+
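+For example, with three instances configured, the agent for instance
+**apache:2** would see:
+
+.. code-block:: none
+
+   OCF_RESKEY_CRM_meta_clone=2
+   OCF_RESKEY_CRM_meta_clone_max=3
+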
+The resource agent must not make any assumptions (based on
+``OCF_RESKEY_CRM_meta_clone``) about which numerical instances are active. In
+particular, the list of active copies will not always be an unbroken
+sequence, nor always start at 0.
+
+Resource Agent Requirements for Promotable Clones
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Promotable clone resources require two extra actions, ``demote`` and ``promote``,
+which are responsible for changing the state of the resource. Like **start** and
+**stop**, they should return ``${OCF_SUCCESS}`` if they completed successfully or
+a relevant error code if they did not.
+
+The states can mean whatever you wish, but when the resource is
+started, it must come up in the unpromoted role. From there, the
+cluster will decide which instances to promote.
+
+In addition to the clone requirements for monitor actions, agents must
+also *accurately* report which state they are in. The cluster relies
+on the agent to report its status (including role) accurately and does
+not indicate to the agent what role it currently believes it to be in.
+
+.. table:: **Role implications of OCF return codes**
+ :widths: 1 3
+
+ +----------------------+--------------------------------------------------+
+ | Monitor Return Code | Description |
+ +======================+==================================================+
+ | OCF_NOT_RUNNING | .. index:: |
+ | | single: OCF_NOT_RUNNING |
+ | | single: OCF return code; OCF_NOT_RUNNING |
+ | | |
+ | | Stopped |
+ +----------------------+--------------------------------------------------+
+ | OCF_SUCCESS | .. index:: |
+ | | single: OCF_SUCCESS |
+ | | single: OCF return code; OCF_SUCCESS |
+ | | |
+ | | Running (Unpromoted) |
+ +----------------------+--------------------------------------------------+
+ | OCF_RUNNING_PROMOTED | .. index:: |
+ | | single: OCF_RUNNING_PROMOTED |
+ | | single: OCF return code; OCF_RUNNING_PROMOTED |
+ | | |
+ | | Running (Promoted) |
+ +----------------------+--------------------------------------------------+
+ | OCF_FAILED_PROMOTED | .. index:: |
+ | | single: OCF_FAILED_PROMOTED |
+ | | single: OCF return code; OCF_FAILED_PROMOTED |
+ | | |
+ | | Failed (Promoted) |
+ +----------------------+--------------------------------------------------+
+ | Other | .. index:: |
+ | | single: return code |
+ | | |
+ | | Failed (Unpromoted) |
+ +----------------------+--------------------------------------------------+
+
+Clone Notifications
+~~~~~~~~~~~~~~~~~~~
+
+If the clone has the ``notify`` meta-attribute set to **true**, and the resource
+agent supports the ``notify`` action, Pacemaker will call the action when
+appropriate, passing a number of extra variables which, when combined with
+additional context, can be used to calculate the current state of the cluster
+and what is about to happen to it.
+
+.. index::
+ single: clone; environment variables
+ single: notify; environment variables
+
+.. table:: **Environment variables supplied with Clone notify actions**
+ :widths: 1 1
+
+ +----------------------------------------------+-------------------------------------------------------------------------------+
+ | Variable | Description |
+ +==============================================+===============================================================================+
+ | OCF_RESKEY_CRM_meta_notify_type | .. index:: |
+ | | single: environment variable; OCF_RESKEY_CRM_meta_notify_type |
+ | | single: OCF_RESKEY_CRM_meta_notify_type |
+ | | |
+ | | Allowed values: **pre**, **post** |
+ +----------------------------------------------+-------------------------------------------------------------------------------+
+ | OCF_RESKEY_CRM_meta_notify_operation | .. index:: |
+ | | single: environment variable; OCF_RESKEY_CRM_meta_notify_operation |
+ | | single: OCF_RESKEY_CRM_meta_notify_operation |
+ | | |
+ | | Allowed values: **start**, **stop** |
+ +----------------------------------------------+-------------------------------------------------------------------------------+
+ | OCF_RESKEY_CRM_meta_notify_start_resource | .. index:: |
+ | | single: environment variable; OCF_RESKEY_CRM_meta_notify_start_resource |
+ | | single: OCF_RESKEY_CRM_meta_notify_start_resource |
+ | | |
+ | | Resources to be started |
+ +----------------------------------------------+-------------------------------------------------------------------------------+
+ | OCF_RESKEY_CRM_meta_notify_stop_resource | .. index:: |
+ | | single: environment variable; OCF_RESKEY_CRM_meta_notify_stop_resource |
+ | | single: OCF_RESKEY_CRM_meta_notify_stop_resource |
+ | | |
+ | | Resources to be stopped |
+ +----------------------------------------------+-------------------------------------------------------------------------------+
+ | OCF_RESKEY_CRM_meta_notify_active_resource | .. index:: |
+ | | single: environment variable; OCF_RESKEY_CRM_meta_notify_active_resource |
+ | | single: OCF_RESKEY_CRM_meta_notify_active_resource |
+ | | |
+ | | Resources that are running |
+ +----------------------------------------------+-------------------------------------------------------------------------------+
+ | OCF_RESKEY_CRM_meta_notify_inactive_resource | .. index:: |
+ | | single: environment variable; OCF_RESKEY_CRM_meta_notify_inactive_resource |
+ | | single: OCF_RESKEY_CRM_meta_notify_inactive_resource |
+ | | |
+ | | Resources that are not running |
+ +----------------------------------------------+-------------------------------------------------------------------------------+
+ | OCF_RESKEY_CRM_meta_notify_start_uname | .. index:: |
+ | | single: environment variable; OCF_RESKEY_CRM_meta_notify_start_uname |
+ | | single: OCF_RESKEY_CRM_meta_notify_start_uname |
+ | | |
+ | | Nodes on which resources will be started |
+ +----------------------------------------------+-------------------------------------------------------------------------------+
+ | OCF_RESKEY_CRM_meta_notify_stop_uname | .. index:: |
+ | | single: environment variable; OCF_RESKEY_CRM_meta_notify_stop_uname |
+ | | single: OCF_RESKEY_CRM_meta_notify_stop_uname |
+ | | |
+ | | Nodes on which resources will be stopped |
+ +----------------------------------------------+-------------------------------------------------------------------------------+
+ | OCF_RESKEY_CRM_meta_notify_active_uname | .. index:: |
+ | | single: environment variable; OCF_RESKEY_CRM_meta_notify_active_uname |
+ | | single: OCF_RESKEY_CRM_meta_notify_active_uname |
+ | | |
+ | | Nodes on which resources are running |
+ +----------------------------------------------+-------------------------------------------------------------------------------+
+
+The variables come in pairs, such as
+``OCF_RESKEY_CRM_meta_notify_start_resource`` and
+``OCF_RESKEY_CRM_meta_notify_start_uname``, and should be treated as an
+array of whitespace-separated elements.
+
+``OCF_RESKEY_CRM_meta_notify_inactive_resource`` is an exception, as the
+matching **uname** variable does not exist since inactive resources
+are not running on any node.
+
+Thus, in order to indicate that **clone:0** will be started on **sles-1**,
+**clone:2** will be started on **sles-3**, and **clone:3** will be started
+on **sles-2**, the cluster would set:
+
+.. topic:: Notification variables
+
+ .. code-block:: none
+
+ OCF_RESKEY_CRM_meta_notify_start_resource="clone:0 clone:2 clone:3"
+ OCF_RESKEY_CRM_meta_notify_start_uname="sles-1 sles-3 sles-2"
+
+.. note::
+
+ Pacemaker will log but otherwise ignore failures of notify actions.
+
+Interpretation of Notification Variables
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+**Pre-notification (stop):**
+
+* Active resources: ``$OCF_RESKEY_CRM_meta_notify_active_resource``
+* Inactive resources: ``$OCF_RESKEY_CRM_meta_notify_inactive_resource``
+* Resources to be started: ``$OCF_RESKEY_CRM_meta_notify_start_resource``
+* Resources to be stopped: ``$OCF_RESKEY_CRM_meta_notify_stop_resource``
+
+**Post-notification (stop) / Pre-notification (start):**
+
+* Active resources:
+
+ * ``$OCF_RESKEY_CRM_meta_notify_active_resource``
+ * minus ``$OCF_RESKEY_CRM_meta_notify_stop_resource``
+
+* Inactive resources:
+
+ * ``$OCF_RESKEY_CRM_meta_notify_inactive_resource``
+ * plus ``$OCF_RESKEY_CRM_meta_notify_stop_resource``
+
+* Resources that were started: ``$OCF_RESKEY_CRM_meta_notify_start_resource``
+* Resources that were stopped: ``$OCF_RESKEY_CRM_meta_notify_stop_resource``
+
+**Post-notification (start):**
+
+* Active resources:
+
+ * ``$OCF_RESKEY_CRM_meta_notify_active_resource``
+ * minus ``$OCF_RESKEY_CRM_meta_notify_stop_resource``
+ * plus ``$OCF_RESKEY_CRM_meta_notify_start_resource``
+
+* Inactive resources:
+
+ * ``$OCF_RESKEY_CRM_meta_notify_inactive_resource``
+ * plus ``$OCF_RESKEY_CRM_meta_notify_stop_resource``
+ * minus ``$OCF_RESKEY_CRM_meta_notify_start_resource``
+
+* Resources that were started: ``$OCF_RESKEY_CRM_meta_notify_start_resource``
+* Resources that were stopped: ``$OCF_RESKEY_CRM_meta_notify_stop_resource``
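The "minus" and "plus" operations above are plain set arithmetic on
whitespace-separated lists. As a hedged illustration (the resource names are
invented for the example), this POSIX ``sh`` fragment computes the post-stop
active set as the active list minus the stop list:

```shell
#!/bin/sh
# Sketch: during a post-stop notification, the resources still active
# are the "active" list minus the "stop" list. The values here are
# illustrative; a real agent would inherit them from Pacemaker.
OCF_RESKEY_CRM_meta_notify_active_resource="clone:0 clone:1 clone:2"
OCF_RESKEY_CRM_meta_notify_stop_resource="clone:1"

still_active=""
for rsc in $OCF_RESKEY_CRM_meta_notify_active_resource; do
    in_stop=0
    for stopped in $OCF_RESKEY_CRM_meta_notify_stop_resource; do
        [ "$rsc" = "$stopped" ] && in_stop=1
    done
    # Keep only instances that were not just stopped
    [ "$in_stop" -eq 1 ] || still_active="$still_active $rsc"
done
echo "Still active:$still_active"
```

The same pattern (with a concatenation for "plus") covers the other
combinations listed above.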
+
+Extra Notifications for Promotable Clones
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. index::
+ single: clone; environment variables
+ single: promotable; environment variables
+
+.. table:: **Extra environment variables supplied for promotable clones**
+ :widths: 1 1
+
+ +------------------------------------------------+---------------------------------------------------------------------------------+
+ | Variable | Description |
+ +================================================+=================================================================================+
+ | OCF_RESKEY_CRM_meta_notify_promoted_resource | .. index:: |
+ | | single: environment variable; OCF_RESKEY_CRM_meta_notify_promoted_resource |
+ | | single: OCF_RESKEY_CRM_meta_notify_promoted_resource |
+ | | |
+ | | Resources that are running in the promoted role |
+ +------------------------------------------------+---------------------------------------------------------------------------------+
+ | OCF_RESKEY_CRM_meta_notify_unpromoted_resource | .. index:: |
+ | | single: environment variable; OCF_RESKEY_CRM_meta_notify_unpromoted_resource |
+ | | single: OCF_RESKEY_CRM_meta_notify_unpromoted_resource |
+ | | |
+ | | Resources that are running in the unpromoted role |
+ +------------------------------------------------+---------------------------------------------------------------------------------+
+ | OCF_RESKEY_CRM_meta_notify_promote_resource | .. index:: |
+ | | single: environment variable; OCF_RESKEY_CRM_meta_notify_promote_resource |
+ | | single: OCF_RESKEY_CRM_meta_notify_promote_resource |
+ | | |
+ | | Resources to be promoted |
+ +------------------------------------------------+---------------------------------------------------------------------------------+
+ | OCF_RESKEY_CRM_meta_notify_demote_resource | .. index:: |
+ | | single: environment variable; OCF_RESKEY_CRM_meta_notify_demote_resource |
+ | | single: OCF_RESKEY_CRM_meta_notify_demote_resource |
+ | | |
+ | | Resources to be demoted |
+ +------------------------------------------------+---------------------------------------------------------------------------------+
+ | OCF_RESKEY_CRM_meta_notify_promote_uname | .. index:: |
+ | | single: environment variable; OCF_RESKEY_CRM_meta_notify_promote_uname |
+ | | single: OCF_RESKEY_CRM_meta_notify_promote_uname |
+ | | |
+ | | Nodes on which resources will be promoted |
+ +------------------------------------------------+---------------------------------------------------------------------------------+
+ | OCF_RESKEY_CRM_meta_notify_demote_uname | .. index:: |
+ | | single: environment variable; OCF_RESKEY_CRM_meta_notify_demote_uname |
+ | | single: OCF_RESKEY_CRM_meta_notify_demote_uname |
+ | | |
+ | | Nodes on which resources will be demoted |
+ +------------------------------------------------+---------------------------------------------------------------------------------+
+ | OCF_RESKEY_CRM_meta_notify_promoted_uname | .. index:: |
+ | | single: environment variable; OCF_RESKEY_CRM_meta_notify_promoted_uname |
+ | | single: OCF_RESKEY_CRM_meta_notify_promoted_uname |
+ | | |
+ | | Nodes on which resources are running in the promoted role |
+ +------------------------------------------------+---------------------------------------------------------------------------------+
+ | OCF_RESKEY_CRM_meta_notify_unpromoted_uname | .. index:: |
+ | | single: environment variable; OCF_RESKEY_CRM_meta_notify_unpromoted_uname |
+ | | single: OCF_RESKEY_CRM_meta_notify_unpromoted_uname |
+ | | |
+ | | Nodes on which resources are running in the unpromoted role |
+ +------------------------------------------------+---------------------------------------------------------------------------------+
+
+Interpretation of Promotable Notification Variables
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+**Pre-notification (demote):**
+
+* Active resources: ``$OCF_RESKEY_CRM_meta_notify_active_resource``
+* Promoted resources: ``$OCF_RESKEY_CRM_meta_notify_promoted_resource``
+* Unpromoted resources: ``$OCF_RESKEY_CRM_meta_notify_unpromoted_resource``
+* Inactive resources: ``$OCF_RESKEY_CRM_meta_notify_inactive_resource``
+* Resources to be started: ``$OCF_RESKEY_CRM_meta_notify_start_resource``
+* Resources to be promoted: ``$OCF_RESKEY_CRM_meta_notify_promote_resource``
+* Resources to be demoted: ``$OCF_RESKEY_CRM_meta_notify_demote_resource``
+* Resources to be stopped: ``$OCF_RESKEY_CRM_meta_notify_stop_resource``
+
+**Post-notification (demote) / Pre-notification (stop):**
+
+* Active resources: ``$OCF_RESKEY_CRM_meta_notify_active_resource``
+* Promoted resources:
+
+ * ``$OCF_RESKEY_CRM_meta_notify_promoted_resource``
+ * minus ``$OCF_RESKEY_CRM_meta_notify_demote_resource``
+
+* Unpromoted resources: ``$OCF_RESKEY_CRM_meta_notify_unpromoted_resource``
+* Inactive resources: ``$OCF_RESKEY_CRM_meta_notify_inactive_resource``
+* Resources to be started: ``$OCF_RESKEY_CRM_meta_notify_start_resource``
+* Resources to be promoted: ``$OCF_RESKEY_CRM_meta_notify_promote_resource``
+* Resources to be demoted: ``$OCF_RESKEY_CRM_meta_notify_demote_resource``
+* Resources to be stopped: ``$OCF_RESKEY_CRM_meta_notify_stop_resource``
+* Resources that were demoted: ``$OCF_RESKEY_CRM_meta_notify_demote_resource``
+
+**Post-notification (stop) / Pre-notification (start):**
+
+* Active resources:
+
+ * ``$OCF_RESKEY_CRM_meta_notify_active_resource``
+ * minus ``$OCF_RESKEY_CRM_meta_notify_stop_resource``
+
+* Promoted resources:
+
+ * ``$OCF_RESKEY_CRM_meta_notify_promoted_resource``
+ * minus ``$OCF_RESKEY_CRM_meta_notify_demote_resource``
+
+* Unpromoted resources:
+
+ * ``$OCF_RESKEY_CRM_meta_notify_unpromoted_resource``
+ * minus ``$OCF_RESKEY_CRM_meta_notify_stop_resource``
+
+* Inactive resources:
+
+ * ``$OCF_RESKEY_CRM_meta_notify_inactive_resource``
+ * plus ``$OCF_RESKEY_CRM_meta_notify_stop_resource``
+
+* Resources to be started: ``$OCF_RESKEY_CRM_meta_notify_start_resource``
+* Resources to be promoted: ``$OCF_RESKEY_CRM_meta_notify_promote_resource``
+* Resources to be demoted: ``$OCF_RESKEY_CRM_meta_notify_demote_resource``
+* Resources to be stopped: ``$OCF_RESKEY_CRM_meta_notify_stop_resource``
+* Resources that were demoted: ``$OCF_RESKEY_CRM_meta_notify_demote_resource``
+* Resources that were stopped: ``$OCF_RESKEY_CRM_meta_notify_stop_resource``
+
+**Post-notification (start) / Pre-notification (promote):**
+
+* Active resources:
+
+ * ``$OCF_RESKEY_CRM_meta_notify_active_resource``
+ * minus ``$OCF_RESKEY_CRM_meta_notify_stop_resource``
+ * plus ``$OCF_RESKEY_CRM_meta_notify_start_resource``
+
+* Promoted resources:
+
+ * ``$OCF_RESKEY_CRM_meta_notify_promoted_resource``
+ * minus ``$OCF_RESKEY_CRM_meta_notify_demote_resource``
+
+* Unpromoted resources:
+
+ * ``$OCF_RESKEY_CRM_meta_notify_unpromoted_resource``
+ * minus ``$OCF_RESKEY_CRM_meta_notify_stop_resource``
+ * plus ``$OCF_RESKEY_CRM_meta_notify_start_resource``
+
+* Inactive resources:
+
+ * ``$OCF_RESKEY_CRM_meta_notify_inactive_resource``
+ * plus ``$OCF_RESKEY_CRM_meta_notify_stop_resource``
+ * minus ``$OCF_RESKEY_CRM_meta_notify_start_resource``
+
+* Resources to be started: ``$OCF_RESKEY_CRM_meta_notify_start_resource``
+* Resources to be promoted: ``$OCF_RESKEY_CRM_meta_notify_promote_resource``
+* Resources to be demoted: ``$OCF_RESKEY_CRM_meta_notify_demote_resource``
+* Resources to be stopped: ``$OCF_RESKEY_CRM_meta_notify_stop_resource``
+* Resources that were started: ``$OCF_RESKEY_CRM_meta_notify_start_resource``
+* Resources that were demoted: ``$OCF_RESKEY_CRM_meta_notify_demote_resource``
+* Resources that were stopped: ``$OCF_RESKEY_CRM_meta_notify_stop_resource``
+
+**Post-notification (promote):**
+
+* Active resources:
+
+ * ``$OCF_RESKEY_CRM_meta_notify_active_resource``
+ * minus ``$OCF_RESKEY_CRM_meta_notify_stop_resource``
+ * plus ``$OCF_RESKEY_CRM_meta_notify_start_resource``
+
+* Promoted resources:
+
+ * ``$OCF_RESKEY_CRM_meta_notify_promoted_resource``
+ * minus ``$OCF_RESKEY_CRM_meta_notify_demote_resource``
+ * plus ``$OCF_RESKEY_CRM_meta_notify_promote_resource``
+
+* Unpromoted resources:
+
+ * ``$OCF_RESKEY_CRM_meta_notify_unpromoted_resource``
+ * minus ``$OCF_RESKEY_CRM_meta_notify_stop_resource``
+ * plus ``$OCF_RESKEY_CRM_meta_notify_start_resource``
+ * minus ``$OCF_RESKEY_CRM_meta_notify_promote_resource``
+
+* Inactive resources:
+
+ * ``$OCF_RESKEY_CRM_meta_notify_inactive_resource``
+ * plus ``$OCF_RESKEY_CRM_meta_notify_stop_resource``
+ * minus ``$OCF_RESKEY_CRM_meta_notify_start_resource``
+
+* Resources to be started: ``$OCF_RESKEY_CRM_meta_notify_start_resource``
+* Resources to be promoted: ``$OCF_RESKEY_CRM_meta_notify_promote_resource``
+* Resources to be demoted: ``$OCF_RESKEY_CRM_meta_notify_demote_resource``
+* Resources to be stopped: ``$OCF_RESKEY_CRM_meta_notify_stop_resource``
+* Resources that were started: ``$OCF_RESKEY_CRM_meta_notify_start_resource``
+* Resources that were promoted: ``$OCF_RESKEY_CRM_meta_notify_promote_resource``
+* Resources that were demoted: ``$OCF_RESKEY_CRM_meta_notify_demote_resource``
+* Resources that were stopped: ``$OCF_RESKEY_CRM_meta_notify_stop_resource``
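The promotable combinations follow the same list arithmetic. As an invented
example in POSIX ``sh``, the promoted set after a promote completes is the
previously promoted list, minus what was demoted, plus what was promoted:

```shell
#!/bin/sh
# Sketch: promoted set after post-promote notification =
# (promoted minus demoted) plus newly promoted.
# The values are illustrative; Pacemaker supplies them in practice.
OCF_RESKEY_CRM_meta_notify_promoted_resource="clone:0"
OCF_RESKEY_CRM_meta_notify_demote_resource="clone:0"
OCF_RESKEY_CRM_meta_notify_promote_resource="clone:1"

promoted_now=""
for rsc in $OCF_RESKEY_CRM_meta_notify_promoted_resource; do
    keep=1
    for d in $OCF_RESKEY_CRM_meta_notify_demote_resource; do
        [ "$rsc" = "$d" ] && keep=0
    done
    [ "$keep" -eq 0 ] || promoted_now="$promoted_now $rsc"
done
for rsc in $OCF_RESKEY_CRM_meta_notify_promote_resource; do
    promoted_now="$promoted_now $rsc"
done
echo "Promoted now:$promoted_now"
```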
+
+Monitoring Promotable Clone Resources
+_____________________________________
+
+The usual monitor actions are insufficient to monitor a promotable clone
+resource, because Pacemaker needs to verify not only that the resource is
+active, but also that its actual role matches its intended one.
+
+Define two monitoring actions: the usual one will cover the unpromoted role,
+and an additional one with ``role="Promoted"`` will cover the promoted role.
+
+.. topic:: Monitoring both states of a promotable clone resource
+
+ .. code-block:: xml
+
+ <clone id="myPromotableRsc">
+ <meta_attributes id="myPromotableRsc-meta">
+ <nvpair name="promotable" value="true"/>
+ </meta_attributes>
+ <primitive id="myRsc" class="ocf" type="myApp" provider="myCorp">
+ <operations>
+         <op id="myRsc-unpromoted-check" name="monitor" interval="60"/>
+         <op id="myRsc-promoted-check" name="monitor" interval="61" role="Promoted"/>
+ </operations>
+ </primitive>
+ </clone>
+
+.. important::
+
+ It is crucial that *every* monitor operation has a different interval!
+ Pacemaker currently differentiates between operations
+ only by resource and interval; so if (for example) a promotable clone resource
+ had the same monitor interval for both roles, Pacemaker would ignore the
+ role when checking the status -- which would cause unexpected return
+ codes, and therefore unnecessary complications.
+
+.. _s-promotion-scores:
+
+Determining Which Instance is Promoted
+______________________________________
+
+Pacemaker can choose a promotable clone instance to be promoted in one of two
+ways:
+
+* Promotion scores: These are node attributes set via the ``crm_attribute``
+ command using the ``--promotion`` option, which generally would be called by
+ the resource agent's start action if it supports promotable clones. This tool
+ automatically detects both the resource and host, and should be used to set a
+ preference for being promoted. Based on this, ``promoted-max``, and
+ ``promoted-node-max``, the instance(s) with the highest preference will be
+ promoted.
+
+* Constraints: Location constraints can indicate which nodes are most preferred
+ to be promoted.
+
+.. topic:: Explicitly preferring node1 to be promoted
+
+ .. code-block:: xml
+
+ <rsc_location id="promoted-location" rsc="myPromotableRsc">
+ <rule id="promoted-rule" score="100" role="Promoted">
+ <expression id="promoted-exp" attribute="#uname" operation="eq" value="node1"/>
+ </rule>
+ </rsc_location>
+
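The promotion-score approach can be sketched as a fragment of an agent's start
action. This is an illustration only: the score of 10 is arbitrary, and the
snippet is guarded so it is a no-op outside an actual cluster node.

```shell
#!/bin/sh
# Hypothetical fragment of a promotable agent's start action: record
# this node's preference for being promoted. crm_attribute detects the
# resource and node automatically when invoked from a resource agent.
if command -v crm_attribute >/dev/null 2>&1; then
    crm_attribute --promotion -v 10
fi
```

On demote or stop, an agent would typically clear the preference again
(for example, with ``crm_attribute``'s delete option) so that stale scores
do not influence the next promotion decision.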
+.. index::
+ single: bundle
+ single: resource; bundle
+ pair: container; Docker
+ pair: container; podman
+ pair: container; rkt
+
+.. _s-resource-bundle:
+
+Bundles - Containerized Resources
+#################################
+
+Pacemaker supports a special syntax for launching a service inside a
+`container <https://en.wikipedia.org/wiki/Operating-system-level_virtualization>`_
+with any infrastructure it requires: the *bundle*.
+
+Pacemaker bundles support `Docker <https://www.docker.com/>`_,
+`podman <https://podman.io/>`_ *(since 2.0.1)*, and
+`rkt <https://coreos.com/rkt/>`_ container technologies. [#]_
+
+.. topic:: A bundle for a containerized web server
+
+ .. code-block:: xml
+
+ <bundle id="httpd-bundle">
+ <podman image="pcmk:http" replicas="3"/>
+ <network ip-range-start="192.168.122.131"
+ host-netmask="24"
+ host-interface="eth0">
+ <port-mapping id="httpd-port" port="80"/>
+ </network>
+ <storage>
+ <storage-mapping id="httpd-syslog"
+ source-dir="/dev/log"
+ target-dir="/dev/log"
+ options="rw"/>
+ <storage-mapping id="httpd-root"
+ source-dir="/srv/html"
+ target-dir="/var/www/html"
+ options="rw,Z"/>
+ <storage-mapping id="httpd-logs"
+ source-dir-root="/var/log/pacemaker/bundles"
+ target-dir="/etc/httpd/logs"
+ options="rw,Z"/>
+ </storage>
+ <primitive class="ocf" id="httpd" provider="heartbeat" type="apache"/>
+ </bundle>
+
+Bundle Prerequisites
+____________________
+
+Before configuring a bundle in Pacemaker, the user must install the appropriate
+container launch technology (Docker, podman, or rkt), and supply a fully
+configured container image, on every node allowed to run the bundle.
+
+Pacemaker will create an implicit resource of type **ocf:heartbeat:docker**,
+**ocf:heartbeat:podman**, or **ocf:heartbeat:rkt** to manage a bundle's
+container. The user must ensure that the appropriate resource agent is
+installed on every node allowed to run the bundle.
+
+.. index::
+ pair: XML element; bundle
+
+Bundle Properties
+_________________
+
+.. table:: **XML Attributes of a bundle Element**
+ :widths: 1 4
+
+ +-------------+------------------------------------------------------------------+
+ | Field | Description |
+ +=============+==================================================================+
+ | id | .. index:: |
+ | | single: bundle; attribute, id |
+ | | single: attribute; id (bundle) |
+ | | single: id; bundle attribute |
+ | | |
+ | | A unique name for the bundle (required) |
+ +-------------+------------------------------------------------------------------+
+ | description | .. index:: |
+ | | single: bundle; attribute, description |
+ | | single: attribute; description (bundle) |
+ | | single: description; bundle attribute |
+ | | |
+   |             | An optional description of the bundle for the user's own         |
+ | | purposes. |
+ | | E.g. ``manages the container that runs the service`` |
+ +-------------+------------------------------------------------------------------+
+
+
+A bundle must contain exactly one ``docker``, ``podman``, or ``rkt`` element.
+
+.. index::
+ pair: XML element; docker
+ pair: XML element; podman
+ pair: XML element; rkt
+
+Bundle Container Properties
+___________________________
+
+.. table:: **XML attributes of a docker, podman, or rkt Element**
+ :class: longtable
+ :widths: 2 3 4
+
+ +-------------------+------------------------------------+---------------------------------------------------+
+ | Attribute | Default | Description |
+ +===================+====================================+===================================================+
+ | image | | .. index:: |
+ | | | single: docker; attribute, image |
+ | | | single: attribute; image (docker) |
+ | | | single: image; docker attribute |
+ | | | single: podman; attribute, image |
+ | | | single: attribute; image (podman) |
+ | | | single: image; podman attribute |
+ | | | single: rkt; attribute, image |
+ | | | single: attribute; image (rkt) |
+ | | | single: image; rkt attribute |
+ | | | |
+ | | | Container image tag (required) |
+ +-------------------+------------------------------------+---------------------------------------------------+
+ | replicas | Value of ``promoted-max`` | .. index:: |
+ | | if that is positive, else 1 | single: docker; attribute, replicas |
+ | | | single: attribute; replicas (docker) |
+ | | | single: replicas; docker attribute |
+ | | | single: podman; attribute, replicas |
+ | | | single: attribute; replicas (podman) |
+ | | | single: replicas; podman attribute |
+ | | | single: rkt; attribute, replicas |
+ | | | single: attribute; replicas (rkt) |
+ | | | single: replicas; rkt attribute |
+ | | | |
+ | | | A positive integer specifying the number of |
+ | | | container instances to launch |
+ +-------------------+------------------------------------+---------------------------------------------------+
+ | replicas-per-host | 1 | .. index:: |
+ | | | single: docker; attribute, replicas-per-host |
+ | | | single: attribute; replicas-per-host (docker) |
+ | | | single: replicas-per-host; docker attribute |
+ | | | single: podman; attribute, replicas-per-host |
+ | | | single: attribute; replicas-per-host (podman) |
+ | | | single: replicas-per-host; podman attribute |
+ | | | single: rkt; attribute, replicas-per-host |
+ | | | single: attribute; replicas-per-host (rkt) |
+ | | | single: replicas-per-host; rkt attribute |
+ | | | |
+ | | | A positive integer specifying the number of |
+ | | | container instances allowed to run on a |
+ | | | single node |
+ +-------------------+------------------------------------+---------------------------------------------------+
+ | promoted-max | 0 | .. index:: |
+ | | | single: docker; attribute, promoted-max |
+ | | | single: attribute; promoted-max (docker) |
+ | | | single: promoted-max; docker attribute |
+ | | | single: podman; attribute, promoted-max |
+ | | | single: attribute; promoted-max (podman) |
+ | | | single: promoted-max; podman attribute |
+ | | | single: rkt; attribute, promoted-max |
+ | | | single: attribute; promoted-max (rkt) |
+ | | | single: promoted-max; rkt attribute |
+ | | | |
+ | | | A non-negative integer that, if positive, |
+ | | | indicates that the containerized service |
+ | | | should be treated as a promotable service, |
+ | | | with this many replicas allowed to run the |
+ | | | service in the promoted role |
+ +-------------------+------------------------------------+---------------------------------------------------+
+ | network | | .. index:: |
+ | | | single: docker; attribute, network |
+ | | | single: attribute; network (docker) |
+ | | | single: network; docker attribute |
+ | | | single: podman; attribute, network |
+ | | | single: attribute; network (podman) |
+ | | | single: network; podman attribute |
+ | | | single: rkt; attribute, network |
+ | | | single: attribute; network (rkt) |
+ | | | single: network; rkt attribute |
+ | | | |
+ | | | If specified, this will be passed to the |
+ | | | ``docker run``, ``podman run``, or |
+ | | | ``rkt run`` command as the network setting |
+ | | | for the container. |
+ +-------------------+------------------------------------+---------------------------------------------------+
+ | run-command | ``/usr/sbin/pacemaker-remoted`` if | .. index:: |
+ | | bundle contains a **primitive**, | single: docker; attribute, run-command |
+ | | otherwise none | single: attribute; run-command (docker) |
+ | | | single: run-command; docker attribute |
+ | | | single: podman; attribute, run-command |
+ | | | single: attribute; run-command (podman) |
+ | | | single: run-command; podman attribute |
+ | | | single: rkt; attribute, run-command |
+ | | | single: attribute; run-command (rkt) |
+ | | | single: run-command; rkt attribute |
+ | | | |
+ | | | This command will be run inside the container |
+ | | | when launching it ("PID 1"). If the bundle |
+ | | | contains a **primitive**, this command *must* |
+ | | | start ``pacemaker-remoted`` (but could, for |
+ | | | example, be a script that does other stuff, too). |
+ +-------------------+------------------------------------+---------------------------------------------------+
+ | options | | .. index:: |
+ | | | single: docker; attribute, options |
+ | | | single: attribute; options (docker) |
+ | | | single: options; docker attribute |
+ | | | single: podman; attribute, options |
+ | | | single: attribute; options (podman) |
+ | | | single: options; podman attribute |
+ | | | single: rkt; attribute, options |
+ | | | single: attribute; options (rkt) |
+ | | | single: options; rkt attribute |
+ | | | |
+ | | | Extra command-line options to pass to the |
+ | | | ``docker run``, ``podman run``, or ``rkt run`` |
+ | | | command |
+ +-------------------+------------------------------------+---------------------------------------------------+
+
+.. note::
+
+ Considerations when using cluster configurations or container images from
+ Pacemaker 1.1:
+
+ * If the container image has a pre-2.0.0 version of Pacemaker, set ``run-command``
+     to ``/usr/sbin/pacemaker_remoted`` (note the underscore instead of dash).
+
+ * ``masters`` is accepted as an alias for ``promoted-max``, but is deprecated since
+ 2.0.0, and support for it will be removed in a future version.
+
+Bundle Network Properties
+_________________________
+
+A bundle may optionally contain one ``<network>`` element.
+
+.. index::
+ pair: XML element; network
+ single: bundle; network
+
+.. table:: **XML attributes of a network Element**
+ :widths: 2 1 5
+
+ +----------------+---------+------------------------------------------------------------+
+ | Attribute | Default | Description |
+ +================+=========+============================================================+
+ | add-host | TRUE | .. index:: |
+ | | | single: network; attribute, add-host |
+ | | | single: attribute; add-host (network) |
+ | | | single: add-host; network attribute |
+ | | | |
+ | | | If TRUE, and ``ip-range-start`` is used, Pacemaker will |
+ | | | automatically ensure that ``/etc/hosts`` inside the |
+ | | | containers has entries for each |
+ | | | :ref:`replica name <s-resource-bundle-note-replica-names>` |
+ | | | and its assigned IP. |
+ +----------------+---------+------------------------------------------------------------+
+ | ip-range-start | | .. index:: |
+ | | | single: network; attribute, ip-range-start |
+ | | | single: attribute; ip-range-start (network) |
+ | | | single: ip-range-start; network attribute |
+ | | | |
+ | | | If specified, Pacemaker will create an implicit |
+ | | | ``ocf:heartbeat:IPaddr2`` resource for each container |
+ | | | instance, starting with this IP address, using up to |
+ | | | ``replicas`` sequential addresses. These addresses can be |
+ | | | used from the host's network to reach the service inside |
+   |                |         | the container, though they are not visible within the      |
+ | | | container itself. Only IPv4 addresses are currently |
+ | | | supported. |
+ +----------------+---------+------------------------------------------------------------+
+ | host-netmask | 32 | .. index:: |
+ | | | single: network; attribute; host-netmask |
+ | | | single: attribute; host-netmask (network) |
+ | | | single: host-netmask; network attribute |
+ | | | |
+ | | | If ``ip-range-start`` is specified, the IP addresses |
+ | | | are created with this CIDR netmask (as a number of bits). |
+ +----------------+---------+------------------------------------------------------------+
+ | host-interface | | .. index:: |
+ | | | single: network; attribute; host-interface |
+ | | | single: attribute; host-interface (network) |
+ | | | single: host-interface; network attribute |
+ | | | |
+ | | | If ``ip-range-start`` is specified, the IP addresses are |
+ | | | created on this host interface (by default, it will be |
+ | | | determined from the IP address). |
+ +----------------+---------+------------------------------------------------------------+
+ | control-port | 3121 | .. index:: |
+ | | | single: network; attribute; control-port |
+ | | | single: attribute; control-port (network) |
+ | | | single: control-port; network attribute |
+ | | | |
+ | | | If the bundle contains a ``primitive``, the cluster will |
+ | | | use this integer TCP port for communication with |
+ | | | Pacemaker Remote inside the container. Changing this is |
+ | | | useful when the container is unable to listen on the |
+ | | | default port, for example, when the container uses the |
+ | | | host's network rather than ``ip-range-start`` (in which |
+ | | | case ``replicas-per-host`` must be 1), or when the bundle |
+ | | | may run on a Pacemaker Remote node that is already |
+ | | | listening on the default port. Any ``PCMK_remote_port`` |
+ | | | environment variable set on the host or in the container |
+ | | | is ignored for bundle connections. |
+ +----------------+---------+------------------------------------------------------------+
+
+.. _s-resource-bundle-note-replica-names:
+
+.. note::
+
+ Replicas are named by the bundle id plus a dash and an integer counter starting
+ with zero. For example, if a bundle named **httpd-bundle** has **replicas=2**, its
+ containers will be named **httpd-bundle-0** and **httpd-bundle-1**.
+
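The naming scheme is simple enough to reproduce in a shell one-liner, which can
be handy when scripting against replica names (the bundle ID and replica count
below are the illustrative values from the note above):

```shell
#!/bin/sh
# Sketch: derive the implicit replica names for a bundle:
# bundle ID, a dash, and a zero-based counter.
bundle_id="httpd-bundle"
replicas=2

i=0
names=""
while [ "$i" -lt "$replicas" ]; do
    names="$names ${bundle_id}-${i}"
    i=$((i + 1))
done
echo "Replica names:$names"
```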
+.. index::
+ pair: XML element; port-mapping
+
+Additionally, a ``network`` element may optionally contain one or more
+``port-mapping`` elements.
+
+.. table:: **Attributes of a port-mapping Element**
+ :widths: 2 1 5
+
+ +---------------+-------------------+------------------------------------------------------+
+ | Attribute | Default | Description |
+ +===============+===================+======================================================+
+ | id | | .. index:: |
+ | | | single: port-mapping; attribute, id |
+ | | | single: attribute; id (port-mapping) |
+ | | | single: id; port-mapping attribute |
+ | | | |
+ | | | A unique name for the port mapping (required) |
+ +---------------+-------------------+------------------------------------------------------+
+ | port | | .. index:: |
+ | | | single: port-mapping; attribute, port |
+ | | | single: attribute; port (port-mapping) |
+ | | | single: port; port-mapping attribute |
+ | | | |
+ | | | If this is specified, connections to this TCP port |
+ | | | number on the host network (on the container's |
+ | | | assigned IP address, if ``ip-range-start`` is |
+ | | | specified) will be forwarded to the container |
+ | | | network. Exactly one of ``port`` or ``range`` |
+ | | | must be specified in a ``port-mapping``. |
+ +---------------+-------------------+------------------------------------------------------+
+ | internal-port | value of ``port`` | .. index:: |
+ | | | single: port-mapping; attribute, internal-port |
+ | | | single: attribute; internal-port (port-mapping) |
+ | | | single: internal-port; port-mapping attribute |
+ | | | |
+ | | | If ``port`` and this are specified, connections |
+ | | | to ``port`` on the host's network will be |
+ | | | forwarded to this port on the container network. |
+ +---------------+-------------------+------------------------------------------------------+
+ | range | | .. index:: |
+ | | | single: port-mapping; attribute, range |
+ | | | single: attribute; range (port-mapping) |
+ | | | single: range; port-mapping attribute |
+ | | | |
+ | | | If this is specified, connections to these TCP |
+ | | | port numbers (expressed as *first_port*-*last_port*) |
+ | | | on the host network (on the container's assigned IP |
+ | | | address, if ``ip-range-start`` is specified) will |
+ | | | be forwarded to the same ports in the container |
+ | | | network. Exactly one of ``port`` or ``range`` |
+ | | | must be specified in a ``port-mapping``. |
+ +---------------+-------------------+------------------------------------------------------+
+
+.. note::
+
+ If the bundle contains a ``primitive``, Pacemaker will automatically map the
+ ``control-port``, so it is not necessary to specify that port in a
+ ``port-mapping``.
+
+.. index::
+ pair: XML element; storage
+ pair: XML element; storage-mapping
+ single: bundle; storage
+
+.. _s-bundle-storage:
+
+Bundle Storage Properties
+_________________________
+
+A bundle may optionally contain one ``storage`` element. A ``storage`` element
+has no properties of its own, but may contain one or more ``storage-mapping``
+elements.
+
+.. table:: **Attributes of a storage-mapping Element**
+ :widths: 2 1 5
+
+ +-----------------+---------+-------------------------------------------------------------+
+ | Attribute | Default | Description |
+ +=================+=========+=============================================================+
+ | id | | .. index:: |
+ | | | single: storage-mapping; attribute, id |
+ | | | single: attribute; id (storage-mapping) |
+ | | | single: id; storage-mapping attribute |
+ | | | |
+ | | | A unique name for the storage mapping (required) |
+ +-----------------+---------+-------------------------------------------------------------+
+ | source-dir | | .. index:: |
+ | | | single: storage-mapping; attribute, source-dir |
+ | | | single: attribute; source-dir (storage-mapping) |
+ | | | single: source-dir; storage-mapping attribute |
+ | | | |
+ | | | The absolute path on the host's filesystem that will be |
+ | | | mapped into the container. Exactly one of ``source-dir`` |
+ | | | and ``source-dir-root`` must be specified in a |
+ | | | ``storage-mapping``. |
+ +-----------------+---------+-------------------------------------------------------------+
+ | source-dir-root | | .. index:: |
+ | | | single: storage-mapping; attribute, source-dir-root |
+ | | | single: attribute; source-dir-root (storage-mapping) |
+ | | | single: source-dir-root; storage-mapping attribute |
+ | | | |
+ | | | The start of a path on the host's filesystem that will |
+ | | | be mapped into the container, using a different |
+ | | | subdirectory on the host for each container instance. |
+ | | | The subdirectory will be named the same as the |
+ | | | :ref:`replica name <s-resource-bundle-note-replica-names>`. |
+ | | | Exactly one of ``source-dir`` and ``source-dir-root`` |
+ | | | must be specified in a ``storage-mapping``. |
+ +-----------------+---------+-------------------------------------------------------------+
+ | target-dir | | .. index:: |
+ | | | single: storage-mapping; attribute, target-dir |
+ | | | single: attribute; target-dir (storage-mapping) |
+ | | | single: target-dir; storage-mapping attribute |
+ | | | |
+ | | | The path name within the container where the host |
+ | | | storage will be mapped (required) |
+ +-----------------+---------+-------------------------------------------------------------+
+ | options | | .. index:: |
+ | | | single: storage-mapping; attribute, options |
+ | | | single: attribute; options (storage-mapping) |
+ | | | single: options; storage-mapping attribute |
+ | | | |
+ | | | A comma-separated list of file system mount |
+ | | | options to use when mapping the storage |
+ +-----------------+---------+-------------------------------------------------------------+
+
+.. note::
+
+ Pacemaker does not define the behavior if the source directory does not already
+ exist on the host. However, it is expected that the container technology and/or
+ its resource agent will create the source directory in that case.
+
+.. note::
+
+ If the bundle contains a ``primitive``,
+ Pacemaker will automatically map the equivalent of
+ ``source-dir=/etc/pacemaker/authkey target-dir=/etc/pacemaker/authkey``
+ and ``source-dir-root=/var/log/pacemaker/bundles target-dir=/var/log`` into the
+ container, so it is not necessary to specify those paths in a
+ ``storage-mapping``.
+
+.. important::
+
+ The ``PCMK_authkey_location`` environment variable must not be set to anything
+ other than the default of ``/etc/pacemaker/authkey`` on any node in the cluster.
+
+.. important::
+
+ If SELinux is used in enforcing mode on the host, you must ensure the container
+ is allowed to use any storage you mount into it. For Docker and podman bundles,
+ adding "Z" to the mount options will create a container-specific label for the
+ mount that allows the container access.
+
+.. index::
+ single: bundle; primitive
+
+Bundle Primitive
+________________
+
+A bundle may optionally contain one :ref:`primitive <primitive-resource>`
+resource. The primitive may have operations, instance attributes, and
+meta-attributes defined, as usual.
+
+If a bundle contains a primitive resource, the container image must include
+the Pacemaker Remote daemon, and at least one of ``ip-range-start`` or
+``control-port`` must be configured in the bundle. Pacemaker will create an
+implicit **ocf:pacemaker:remote** resource for the connection, launch
+Pacemaker Remote within the container, and monitor and manage the primitive
+resource via Pacemaker Remote.
+
+If the bundle has more than one container instance (replica), the primitive
+resource will function as an implicit :ref:`clone <s-resource-clone>` -- a
+:ref:`promotable clone <s-resource-promotable>` if the bundle has ``promoted-max``
+greater than zero.
+
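+As a sketch (the image name and IP range are invented for illustration), a
+minimal bundle containing a primitive might look like:
+
+.. topic:: A bundle containing a primitive
+
+   .. code-block:: xml
+
+      <bundle id="httpd-bundle">
+         <podman image="pcmk:httpd" replicas="3"/>
+         <network ip-range-start="192.168.122.131"/>
+         <primitive id="httpd" class="ocf" provider="heartbeat" type="apache"/>
+      </bundle>
+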
+.. note::
+
+ If you want to pass environment variables to a bundle's Pacemaker Remote
+ connection or primitive, you have two options:
+
+ * Environment variables whose value is the same regardless of the underlying host
+ may be set using the container element's ``options`` attribute.
+ * If you want variables to have host-specific values, you can use the
+ :ref:`storage-mapping <s-bundle-storage>` element to map a file on the host as
+ ``/etc/pacemaker/pcmk-init.env`` in the container *(since 2.0.3)*.
+ Pacemaker Remote will parse this file as a shell-like format, with
+ variables set as NAME=VALUE, ignoring blank lines and comments starting
+ with "#".
+
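+For example, a mapped ``pcmk-init.env`` file (the variable names here are
+purely illustrative) would look like:
+
+.. topic:: Example ``pcmk-init.env`` contents
+
+   .. code-block:: none
+
+      # Blank lines and comments like this one are ignored
+      DB_HOST=192.0.2.10
+      DB_PORT=5432
+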
+.. important::
+
+ When a bundle has a ``primitive``, Pacemaker on all cluster nodes must be able to
+ contact Pacemaker Remote inside the bundle's containers.
+
+ * The containers must have an accessible network (for example, ``network`` should
+ not be set to "none" with a ``primitive``).
+ * The default, using a distinct network space inside the container, works in
+ combination with ``ip-range-start``. Any firewall must allow access from all
+ cluster nodes to the ``control-port`` on the container IPs.
+ * If the container shares the host's network space (for example, by setting
+ ``network`` to "host"), a unique ``control-port`` should be specified for each
+ bundle. Any firewall must allow access from all cluster nodes to the
+ ``control-port`` on all cluster and remote node IPs.
+
+.. index::
+ single: bundle; node attributes
+
+.. _s-bundle-attributes:
+
+Bundle Node Attributes
+______________________
+
+If the bundle has a ``primitive``, the primitive's resource agent may want to set
+node attributes such as :ref:`promotion scores <s-promotion-scores>`. However, with
+containers, it is not apparent which node should get the attribute.
+
+If the container uses shared storage that is the same no matter which node the
+container is hosted on, then it is appropriate to use the promotion score on the
+bundle node itself.
+
+On the other hand, if the container uses storage exported from the underlying host,
+then it may be more appropriate to use the promotion score on the underlying host.
+
+Since this depends on the particular situation, the
+``container-attribute-target`` resource meta-attribute allows the user to specify
+which approach to use. If it is set to ``host``, then user-defined node attributes
+will be checked on the underlying host. If it is anything else, the local node
+(in this case the bundle node) is used as usual.
+
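+As an illustrative fragment (the resource and IDs are invented), the
+meta-attribute would be set on the bundle's primitive:
+
+.. topic:: Checking node attributes on the underlying host
+
+   .. code-block:: xml
+
+      <primitive id="db" class="ocf" provider="heartbeat" type="galera">
+         <meta_attributes id="db-meta_attributes">
+            <nvpair id="db-meta_attributes-target"
+               name="container-attribute-target" value="host"/>
+         </meta_attributes>
+      </primitive>
+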
+This only applies to user-defined attributes; the cluster will always check the
+local node for cluster-defined attributes such as ``#uname``.
+
+If ``container-attribute-target`` is ``host``, the cluster will pass additional
+environment variables to the primitive's resource agent that allow it to set
+node attributes appropriately: ``CRM_meta_container_attribute_target`` (identical
+to the meta-attribute value) and ``CRM_meta_physical_host`` (the name of the
+underlying host).
+
+.. note::
+
+ When called by a resource agent, the ``attrd_updater`` and ``crm_attribute``
+ commands will automatically check those environment variables and set
+ attributes appropriately.
+
+.. index::
+ single: bundle; meta-attributes
+
+Bundle Meta-Attributes
+______________________
+
+Any meta-attribute set on a bundle will be inherited by the bundle's
+primitive and any resources implicitly created by Pacemaker for the bundle.
+
+This includes options such as ``priority``, ``target-role``, and ``is-managed``. See
+:ref:`resource_options` for more information.
+
+Bundles support clone meta-attributes including ``notify``, ``ordered``, and
+``interleave``.
+
+Limitations of Bundles
+______________________
+
+Restarting Pacemaker while a bundle is unmanaged or the cluster is in
+maintenance mode may cause the bundle to fail.
+
+Bundles may not be explicitly cloned or included in groups. This includes the
+bundle's primitive and any resources implicitly created by Pacemaker for the
+bundle. (If ``replicas`` is greater than 1, the bundle will behave like a clone
+implicitly.)
+
+Bundles do not have instance attributes, utilization attributes, or operations,
+though a bundle's primitive may have them.
+
+A bundle with a primitive can run on a Pacemaker Remote node only if the bundle
+uses a distinct ``control-port``.
+
+.. [#] Of course, the service must support running multiple instances.
+
+.. [#] Docker is a trademark of Docker, Inc. No endorsement by or association with
+ Docker, Inc. is implied.
diff --git a/doc/sphinx/Pacemaker_Explained/alerts.rst b/doc/sphinx/Pacemaker_Explained/alerts.rst
new file mode 100644
index 0000000..1d02187
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Explained/alerts.rst
@@ -0,0 +1,257 @@
+.. index::
+ single: alert
+ single: resource; alert
+ single: node; alert
+ single: fencing; alert
+ pair: XML element; alert
+ pair: XML element; alerts
+
+Alerts
+------
+
+*Alerts* may be configured to take some external action when a cluster event
+occurs (node failure, resource starting or stopping, etc.).
+
+
+.. index::
+ pair: alert; agent
+
+Alert Agents
+############
+
+As with resource agents, the cluster calls an external program (an
+*alert agent*) to handle alerts. The cluster passes information about the event
+to the agent via environment variables. Agents can do anything desired with
+this information (send an e-mail, log to a file, update a monitoring system,
+etc.).
+
+.. topic:: Simple alert configuration
+
+ .. code-block:: xml
+
+ <configuration>
+ <alerts>
+ <alert id="my-alert" path="/path/to/my-script.sh" />
+ </alerts>
+ </configuration>
+
+In the example above, the cluster will call ``my-script.sh`` for each event.
+
+Multiple alert agents may be configured; the cluster will call all of them for
+each event.
+
+Alert agents will be called only on cluster nodes. They will be called for
+events involving Pacemaker Remote nodes, but they will never be called *on*
+those nodes.
+
+For more information about sample alert agents provided by Pacemaker and about
+developing custom alert agents, see the *Pacemaker Administration* document.
+
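+As a minimal sketch (the ``CRM_alert_*`` names shown are examples of the
+environment variables Pacemaker passes; see *Pacemaker Administration* for
+the full list), an alert agent could simply append events to a log file:
+
+.. topic:: A minimal logging alert agent
+
+   .. code-block:: sh
+
+      #!/bin/sh
+      # Append a one-line summary of each cluster event to a log file.
+      # CRM_alert_recipient holds the configured recipient value (here
+      # treated as a file name); CRM_alert_kind, CRM_alert_node, and
+      # CRM_alert_desc describe the event itself.
+      logfile="${CRM_alert_recipient:-/var/log/my-alerts.log}"
+      printf '%s kind=%s node=%s desc=%s\n' "$(date)" \
+          "$CRM_alert_kind" "$CRM_alert_node" "$CRM_alert_desc" >> "$logfile"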
+
+.. index::
+ single: alert; recipient
+ pair: XML element; recipient
+
+Alert Recipients
+################
+
+Usually, alerts are directed towards a recipient. Thus, each alert may be
+additionally configured with one or more recipients. The cluster will call the
+agent separately for each recipient.
+
+.. topic:: Alert configuration with recipient
+
+ .. code-block:: xml
+
+ <configuration>
+ <alerts>
+ <alert id="my-alert" path="/path/to/my-script.sh">
+ <recipient id="my-alert-recipient" value="some-address"/>
+ </alert>
+ </alerts>
+ </configuration>
+
+In the above example, the cluster will call ``my-script.sh`` for each event,
+passing the recipient ``some-address`` as an environment variable.
+
+The recipient may be anything the alert agent can recognize -- an IP address,
+an e-mail address, a file name, or whatever else the particular agent supports.
+
+
+.. index::
+ single: alert; meta-attributes
+ single: meta-attribute; alert meta-attributes
+
+Alert Meta-Attributes
+#####################
+
+As with resources, meta-attributes can be configured for alerts to change
+whether and how Pacemaker calls them.
+
+.. table:: **Meta-Attributes of an Alert**
+ :class: longtable
+ :widths: 1 1 3
+
+ +------------------+---------------+-----------------------------------------------------+
+ | Meta-Attribute | Default | Description |
+ +==================+===============+=====================================================+
+ | enabled | true | .. index:: |
+ | | | single: alert; meta-attribute, enabled |
+ | | | single: meta-attribute; enabled (alert) |
+ | | | single: enabled; alert meta-attribute |
+ | | | |
+ | | | If false for an alert, the alert will not be used. |
+ | | | If true for an alert and false for a particular |
+ | | | recipient of that alert, that recipient will not be |
+ | | | used. *(since 2.1.6)* |
+ +------------------+---------------+-----------------------------------------------------+
+ | timestamp-format | %H:%M:%S.%06N | .. index:: |
+ | | | single: alert; meta-attribute, timestamp-format |
+ | | | single: meta-attribute; timestamp-format (alert) |
+ | | | single: timestamp-format; alert meta-attribute |
+ | | | |
+ | | | Format the cluster will use when sending the |
+ | | | event's timestamp to the agent. This is a string as |
+ | | | used with the ``date(1)`` command. |
+ +------------------+---------------+-----------------------------------------------------+
+ | timeout | 30s | .. index:: |
+ | | | single: alert; meta-attribute, timeout |
+ | | | single: meta-attribute; timeout (alert) |
+ | | | single: timeout; alert meta-attribute |
+ | | | |
+ | | | If the alert agent does not complete within this |
+ | | | amount of time, it will be terminated. |
+ +------------------+---------------+-----------------------------------------------------+
+
+Meta-attributes can be configured per alert and/or per recipient.
+
+.. topic:: Alert configuration with meta-attributes
+
+ .. code-block:: xml
+
+ <configuration>
+ <alerts>
+ <alert id="my-alert" path="/path/to/my-script.sh">
+ <meta_attributes id="my-alert-attributes">
+ <nvpair id="my-alert-attributes-timeout" name="timeout"
+ value="15s"/>
+ </meta_attributes>
+ <recipient id="my-alert-recipient1" value="someuser@example.com">
+ <meta_attributes id="my-alert-recipient1-attributes">
+ <nvpair id="my-alert-recipient1-timestamp-format"
+ name="timestamp-format" value="%D %H:%M"/>
+ </meta_attributes>
+ </recipient>
+ <recipient id="my-alert-recipient2" value="otheruser@example.com">
+ <meta_attributes id="my-alert-recipient2-attributes">
+ <nvpair id="my-alert-recipient2-timestamp-format"
+ name="timestamp-format" value="%c"/>
+ </meta_attributes>
+ </recipient>
+ </alert>
+ </alerts>
+ </configuration>
+
+In the above example, ``my-script.sh`` will be called twice for each
+event, with each call using a 15-second timeout. One call will be passed the
+recipient ``someuser@example.com`` and a timestamp in the format ``%D %H:%M``,
+while the other call will be passed the recipient ``otheruser@example.com`` and
+a timestamp in the format ``%c``.
+
+
+.. index::
+ single: alert; instance attributes
+ single: instance attribute; alert instance attributes
+
+Alert Instance Attributes
+#########################
+
+As with resource agents, agent-specific configuration values may be configured
+as instance attributes. These will be passed to the agent as additional
+environment variables. The number, names, and allowed values of these instance
+attributes are completely up to the particular agent.
+
+.. topic:: Alert configuration with instance attributes
+
+ .. code-block:: xml
+
+ <configuration>
+ <alerts>
+ <alert id="my-alert" path="/path/to/my-script.sh">
+ <meta_attributes id="my-alert-attributes">
+ <nvpair id="my-alert-attributes-timeout" name="timeout"
+ value="15s"/>
+ </meta_attributes>
+ <instance_attributes id="my-alert-options">
+ <nvpair id="my-alert-options-debug" name="debug"
+ value="false"/>
+ </instance_attributes>
+ <recipient id="my-alert-recipient1"
+ value="someuser@example.com"/>
+ </alert>
+ </alerts>
+ </configuration>
+
+
+.. index::
+ single: alert; filters
+ pair: XML element; select
+ pair: XML element; select_nodes
+ pair: XML element; select_fencing
+ pair: XML element; select_resources
+ pair: XML element; select_attributes
+ pair: XML element; attribute
+
+Alert Filters
+#############
+
+By default, an alert agent will be called for node events, fencing events, and
+resource events. An agent may choose to ignore certain types of events, but
+there is still the overhead of calling it for those events. To eliminate that
+overhead, you may select which types of events the agent should receive.
+
+.. topic:: Alert configuration to receive only node events and fencing events
+
+ .. code-block:: xml
+
+ <configuration>
+ <alerts>
+ <alert id="my-alert" path="/path/to/my-script.sh">
+ <select>
+ <select_nodes />
+ <select_fencing />
+ </select>
+ <recipient id="my-alert-recipient1"
+ value="someuser@example.com"/>
+ </alert>
+ </alerts>
+ </configuration>
+
+The possible options within ``<select>`` are ``<select_nodes>``,
+``<select_fencing>``, ``<select_resources>``, and ``<select_attributes>``.
+
+With ``<select_attributes>`` (the only event type not enabled by default), the
+agent will receive alerts when a node attribute changes. If you wish the agent
+to be called only when certain attributes change, you can configure that as well.
+
+.. topic:: Alert configuration to be called when certain node attributes change
+
+ .. code-block:: xml
+
+ <configuration>
+ <alerts>
+ <alert id="my-alert" path="/path/to/my-script.sh">
+ <select>
+ <select_attributes>
+ <attribute id="alert-standby" name="standby" />
+ <attribute id="alert-shutdown" name="shutdown" />
+ </select_attributes>
+ </select>
+ <recipient id="my-alert-recipient1" value="someuser@example.com"/>
+ </alert>
+ </alerts>
+ </configuration>
+
+Node attribute alerts are currently considered experimental. Alerts may be
+limited to attributes set via ``attrd_updater``, and agents may be called
+multiple times with the same attribute value.
diff --git a/doc/sphinx/Pacemaker_Explained/ap-samples.rst b/doc/sphinx/Pacemaker_Explained/ap-samples.rst
new file mode 100644
index 0000000..641affc
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Explained/ap-samples.rst
@@ -0,0 +1,148 @@
+Sample Configurations
+---------------------
+
+Empty
+#####
+
+.. topic:: An Empty Configuration
+
+ .. code-block:: xml
+
+ <cib crm_feature_set="3.0.7" validate-with="pacemaker-1.2" admin_epoch="1" epoch="0" num_updates="0">
+ <configuration>
+ <crm_config/>
+ <nodes/>
+ <resources/>
+ <constraints/>
+ </configuration>
+ <status/>
+ </cib>
+
+Simple
+######
+
+.. topic:: A simple configuration with two nodes, some cluster options and a resource
+
+ .. code-block:: xml
+
+ <cib crm_feature_set="3.0.7" validate-with="pacemaker-1.2" admin_epoch="1" epoch="0" num_updates="0">
+ <configuration>
+ <crm_config>
+ <cluster_property_set id="cib-bootstrap-options">
+ <nvpair id="option-1" name="symmetric-cluster" value="true"/>
+ <nvpair id="option-2" name="no-quorum-policy" value="stop"/>
+ <nvpair id="option-3" name="stonith-enabled" value="0"/>
+ </cluster_property_set>
+ </crm_config>
+ <nodes>
+ <node id="xxx" uname="c001n01" type="normal"/>
+ <node id="yyy" uname="c001n02" type="normal"/>
+ </nodes>
+ <resources>
+ <primitive id="myAddr" class="ocf" provider="heartbeat" type="IPaddr">
+ <operations>
+ <op id="myAddr-monitor" name="monitor" interval="300s"/>
+ </operations>
+ <instance_attributes id="myAddr-params">
+ <nvpair id="myAddr-ip" name="ip" value="192.0.2.10"/>
+ </instance_attributes>
+ </primitive>
+ </resources>
+ <constraints>
+ <rsc_location id="myAddr-prefer" rsc="myAddr" node="c001n01" score="INFINITY"/>
+ </constraints>
+ <rsc_defaults>
+ <meta_attributes id="rsc_defaults-options">
+ <nvpair id="rsc-default-1" name="resource-stickiness" value="100"/>
+ <nvpair id="rsc-default-2" name="migration-threshold" value="10"/>
+ </meta_attributes>
+ </rsc_defaults>
+ <op_defaults>
+ <meta_attributes id="op_defaults-options">
+ <nvpair id="op-default-1" name="timeout" value="30s"/>
+ </meta_attributes>
+ </op_defaults>
+ </configuration>
+ <status/>
+ </cib>
+
+In the above example, we have one resource (an IP address) that we check
+every five minutes and will run on host ``c001n01`` until either the
+resource fails 10 times or the host shuts down.
+
+Advanced Configuration
+######################
+
+.. topic:: An advanced configuration with groups, clones and STONITH
+
+ .. code-block:: xml
+
+ <cib crm_feature_set="3.0.7" validate-with="pacemaker-1.2" admin_epoch="1" epoch="0" num_updates="0">
+ <configuration>
+ <crm_config>
+ <cluster_property_set id="cib-bootstrap-options">
+ <nvpair id="option-1" name="symmetric-cluster" value="true"/>
+ <nvpair id="option-2" name="no-quorum-policy" value="stop"/>
+ <nvpair id="option-3" name="stonith-enabled" value="true"/>
+ </cluster_property_set>
+ </crm_config>
+ <nodes>
+ <node id="xxx" uname="c001n01" type="normal"/>
+ <node id="yyy" uname="c001n02" type="normal"/>
+ <node id="zzz" uname="c001n03" type="normal"/>
+ </nodes>
+ <resources>
+ <primitive id="myAddr" class="ocf" provider="heartbeat" type="IPaddr">
+ <operations>
+ <op id="myAddr-monitor" name="monitor" interval="300s"/>
+ </operations>
+ <instance_attributes id="myAddr-attrs">
+ <nvpair id="myAddr-attr-1" name="ip" value="192.0.2.10"/>
+ </instance_attributes>
+ </primitive>
+ <group id="myGroup">
+ <primitive id="database" class="lsb" type="oracle">
+ <operations>
+ <op id="database-monitor" name="monitor" interval="300s"/>
+ </operations>
+ </primitive>
+ <primitive id="webserver" class="lsb" type="apache">
+ <operations>
+ <op id="webserver-monitor" name="monitor" interval="300s"/>
+ </operations>
+ </primitive>
+ </group>
+ <clone id="STONITH">
+ <meta_attributes id="stonith-options">
+ <nvpair id="stonith-option-1" name="globally-unique" value="false"/>
+ </meta_attributes>
+ <primitive id="stonithclone" class="stonith" type="external/ssh">
+ <operations>
+ <op id="stonith-op-mon" name="monitor" interval="5s"/>
+ </operations>
+ <instance_attributes id="stonith-attrs">
+ <nvpair id="stonith-attr-1" name="hostlist" value="c001n01,c001n02"/>
+ </instance_attributes>
+ </primitive>
+ </clone>
+ </resources>
+ <constraints>
+ <rsc_location id="myAddr-prefer" rsc="myAddr" node="c001n01"
+ score="INFINITY"/>
+ <rsc_colocation id="group-with-ip" rsc="myGroup" with-rsc="myAddr"
+ score="INFINITY"/>
+ </constraints>
+ <op_defaults>
+ <meta_attributes id="op_defaults-options">
+ <nvpair id="op-default-1" name="timeout" value="30s"/>
+ </meta_attributes>
+ </op_defaults>
+ <rsc_defaults>
+ <meta_attributes id="rsc_defaults-options">
+ <nvpair id="rsc-default-1" name="resource-stickiness" value="100"/>
+ <nvpair id="rsc-default-2" name="migration-threshold" value="10"/>
+ </meta_attributes>
+ </rsc_defaults>
+ </configuration>
+ <status/>
+ </cib>
diff --git a/doc/sphinx/Pacemaker_Explained/constraints.rst b/doc/sphinx/Pacemaker_Explained/constraints.rst
new file mode 100644
index 0000000..ab34c9f
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Explained/constraints.rst
@@ -0,0 +1,1106 @@
+.. index::
+ single: constraint
+ single: resource; constraint
+
+.. _constraints:
+
+Resource Constraints
+--------------------
+
+.. index::
+ single: resource; score
+ single: node; score
+
+Scores
+######
+
+Scores of all kinds are integral to how the cluster works.
+Practically everything from moving a resource to deciding which
+resource to stop in a degraded cluster is achieved by manipulating
+scores in some way.
+
+Scores are calculated per resource and node. Any node with a
+negative score for a resource can't run that resource. The cluster
+places a resource on the node with the highest score for it.
+
+Infinity Math
+_____________
+
+Pacemaker implements **INFINITY** (or equivalently, **+INFINITY**) internally as a
+score of 1,000,000. Addition and subtraction with it follow these three basic
+rules:
+
+* Any value + **INFINITY** = **INFINITY**
+
+* Any value - **INFINITY** = **-INFINITY**
+
+* **INFINITY** - **INFINITY** = **-INFINITY**
+
+.. note::
+
+ What if you want to use a score higher than 1,000,000? Typically this possibility
+ arises when someone wants to base the score on some external metric that might
+ go above 1,000,000.
+
+ The short answer is you can't.
+
+   The long answer is that it is sometimes possible to work around this
+   limitation creatively. You may be able to set the score to some computed
+   value based on
+ the external metric rather than use the metric directly. For nodes, you can
+ store the metric as a node attribute, and query the attribute when computing
+ the score (possibly as part of a custom resource agent).
+
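+   For example (the attribute name is hypothetical), an agent or
+   administrator could store a value derived from the external metric as a
+   node attribute:
+
+   .. code-block:: none
+
+      # crm_attribute --type nodes --node node1 --name my-capacity --update 42
+
+   A :ref:`rule <rules>` using ``score-attribute`` could then base the score
+   on the stored value.
+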
+.. _location-constraint:
+
+.. index::
+ single: location constraint
+ single: constraint; location
+
+Deciding Which Nodes a Resource Can Run On
+##########################################
+
+*Location constraints* tell the cluster which nodes a resource can run on.
+
+There are two alternative strategies. One way is to say that, by default,
+resources can run anywhere, and then the location constraints specify nodes
+that are not allowed (an *opt-out* cluster). The other way is to start with
+nothing able to run anywhere, and use location constraints to selectively
+enable allowed nodes (an *opt-in* cluster).
+
+Whether you should choose opt-in or opt-out depends on your
+personal preference and the make-up of your cluster. If most of your
+resources can run on most of the nodes, then an opt-out arrangement is
+likely to result in a simpler configuration. On the other hand, if
+most resources can only run on a small subset of nodes, an opt-in
+configuration might be simpler.
+
+.. index::
+ pair: XML element; rsc_location
+ single: constraint; rsc_location
+
+Location Properties
+___________________
+
+.. table:: **Attributes of a rsc_location Element**
+ :class: longtable
+ :widths: 1 1 4
+
+ +--------------------+---------+----------------------------------------------------------------------------------------------+
+ | Attribute | Default | Description |
+ +====================+=========+==============================================================================================+
+ | id | | .. index:: |
+ | | | single: rsc_location; attribute, id |
+ | | | single: attribute; id (rsc_location) |
+ | | | single: id; rsc_location attribute |
+ | | | |
+ | | | A unique name for the constraint (required) |
+ +--------------------+---------+----------------------------------------------------------------------------------------------+
+ | rsc | | .. index:: |
+ | | | single: rsc_location; attribute, rsc |
+ | | | single: attribute; rsc (rsc_location) |
+ | | | single: rsc; rsc_location attribute |
+ | | | |
+ | | | The name of the resource to which this constraint |
+ | | | applies. A location constraint must either have a |
+ | | | ``rsc``, have a ``rsc-pattern``, or contain at |
+ | | | least one resource set. |
+ +--------------------+---------+----------------------------------------------------------------------------------------------+
+ | rsc-pattern | | .. index:: |
+ | | | single: rsc_location; attribute, rsc-pattern |
+ | | | single: attribute; rsc-pattern (rsc_location) |
+ | | | single: rsc-pattern; rsc_location attribute |
+ | | | |
+ | | | A pattern matching the names of resources to which |
+ | | | this constraint applies. The syntax is the same as |
+ | | | `POSIX <http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap09.html#tag_09_04>`_ |
+ | | | extended regular expressions, with the addition of an |
+ | | | initial ``!`` indicating that resources *not* matching |
+ | | | the pattern are selected. If the regular expression |
+ | | | contains submatches, and the constraint is governed by |
+ | | | a :ref:`rule <rules>`, the submatches can be |
+ | | | referenced as ``%1`` through ``%9`` in the rule's |
+ | | | ``score-attribute`` or a rule expression's ``attribute`` |
+ | | | (see :ref:`s-rsc-pattern-rules`). A location constraint |
+ | | | must either have a ``rsc``, have a ``rsc-pattern``, or |
+ | | | contain at least one resource set. |
+ +--------------------+---------+----------------------------------------------------------------------------------------------+
+ | node | | .. index:: |
+ | | | single: rsc_location; attribute, node |
+ | | | single: attribute; node (rsc_location) |
+ | | | single: node; rsc_location attribute |
+ | | | |
+ | | | The name of the node to which this constraint applies. |
+ | | | A location constraint must either have a ``node`` and |
+ | | | ``score``, or contain at least one rule. |
+ +--------------------+---------+----------------------------------------------------------------------------------------------+
+ | score | | .. index:: |
+ | | | single: rsc_location; attribute, score |
+ | | | single: attribute; score (rsc_location) |
+ | | | single: score; rsc_location attribute |
+ | | | |
+ | | | Positive values indicate a preference for running the |
+ | | | affected resource(s) on ``node`` -- the higher the value, |
+ | | | the stronger the preference. Negative values indicate |
+ | | | the resource(s) should avoid this node (a value of |
+ | | | **-INFINITY** changes "should" to "must"). A location |
+ | | | constraint must either have a ``node`` and ``score``, |
+ | | | or contain at least one rule. |
+ +--------------------+---------+----------------------------------------------------------------------------------------------+
+ | resource-discovery | always | .. index:: |
+ | | | single: rsc_location; attribute, resource-discovery |
+ | | | single: attribute; resource-discovery (rsc_location) |
+ | | | single: resource-discovery; rsc_location attribute |
+ | | | |
+ | | | Whether Pacemaker should perform resource discovery |
+ | | | (that is, check whether the resource is already running) |
+ | | | for this resource on this node. This should normally be |
+ | | | left as the default, so that rogue instances of a |
+ | | | service can be stopped when they are running where they |
+ | | | are not supposed to be. However, there are two |
+ | | | situations where disabling resource discovery is a good |
+ | | | idea: when a service is not installed on a node, |
+ | | | discovery might return an error (properly written OCF |
+ | | | agents will not, so this is usually only seen with other |
+ | | | agent types); and when Pacemaker Remote is used to scale |
+ | | | a cluster to hundreds of nodes, limiting resource |
+ | | | discovery to allowed nodes can significantly boost |
+ | | | performance. |
+ | | | |
+ | | | * ``always:`` Always perform resource discovery for |
+ | | | the specified resource on this node. |
+ | | | |
+ | | | * ``never:`` Never perform resource discovery for the |
+ | | | specified resource on this node. This option should |
+ | | | generally be used with a -INFINITY score, although |
+ | | | that is not strictly required. |
+ | | | |
+ | | | * ``exclusive:`` Perform resource discovery for the |
+ | | | specified resource only on this node (and other nodes |
+ | | | similarly marked as ``exclusive``). Multiple location |
+ | | | constraints using ``exclusive`` discovery for the |
+ | | | same resource across different nodes creates a subset |
+ | | | of nodes resource-discovery is exclusive to. If a |
+ | | | resource is marked for ``exclusive`` discovery on one |
+ | | | or more nodes, that resource is only allowed to be |
+ | | | placed within that subset of nodes. |
+ +--------------------+---------+----------------------------------------------------------------------------------------------+
+
+.. warning::
+
+ Setting ``resource-discovery`` to ``never`` or ``exclusive`` removes Pacemaker's
+ ability to detect and stop unwanted instances of a service running
+ where it's not supposed to be. It is up to the system administrator (you!)
+ to make sure that the service can *never* be active on nodes without
+ ``resource-discovery`` (such as by leaving the relevant software uninstalled).
+
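+As an illustrative fragment (resource and node names are invented),
+exclusive discovery might be configured like this:
+
+.. topic:: Limiting resource discovery to a subset of nodes
+
+   .. code-block:: xml
+
+      <constraints>
+        <rsc_location id="loc-db-node1" rsc="Database" node="node1"
+                      score="200" resource-discovery="exclusive"/>
+        <rsc_location id="loc-db-node2" rsc="Database" node="node2"
+                      score="100" resource-discovery="exclusive"/>
+      </constraints>
+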
+.. index::
+ single: Asymmetrical Clusters
+ single: Opt-In Clusters
+
+Asymmetrical "Opt-In" Clusters
+______________________________
+
+To create an opt-in cluster, start by preventing resources from running anywhere
+by default:
+
+.. code-block:: none
+
+ # crm_attribute --name symmetric-cluster --update false
+
+Then start enabling nodes. The following fragment says that the web
+server prefers **sles-1**, the database prefers **sles-2** and both can
+fail over to **sles-3** if their most preferred node fails.
+
+.. topic:: Opt-in location constraints for two resources
+
+ .. code-block:: xml
+
+ <constraints>
+ <rsc_location id="loc-1" rsc="Webserver" node="sles-1" score="200"/>
+ <rsc_location id="loc-2" rsc="Webserver" node="sles-3" score="0"/>
+ <rsc_location id="loc-3" rsc="Database" node="sles-2" score="200"/>
+ <rsc_location id="loc-4" rsc="Database" node="sles-3" score="0"/>
+ </constraints>
+
+.. index::
+ single: Symmetrical Clusters
+ single: Opt-Out Clusters
+
+Symmetrical "Opt-Out" Clusters
+______________________________
+
+To create an opt-out cluster, start by allowing resources to run
+anywhere by default:
+
+.. code-block:: none
+
+ # crm_attribute --name symmetric-cluster --update true
+
+Then start disabling nodes. The following fragment is the equivalent
+of the above opt-in configuration.
+
+.. topic:: Opt-out location constraints for two resources
+
+ .. code-block:: xml
+
+ <constraints>
+ <rsc_location id="loc-1" rsc="Webserver" node="sles-1" score="200"/>
+ <rsc_location id="loc-2-do-not-run" rsc="Webserver" node="sles-2" score="-INFINITY"/>
+ <rsc_location id="loc-3-do-not-run" rsc="Database" node="sles-1" score="-INFINITY"/>
+ <rsc_location id="loc-4" rsc="Database" node="sles-2" score="200"/>
+ </constraints>
+
+.. _node-score-equal:
+
+What if Two Nodes Have the Same Score
+_____________________________________
+
+If two nodes have the same score, then the cluster will choose one.
+This choice may seem random and may not be what was intended; however,
+the cluster was not given enough information to know any better.
+
+.. topic:: Constraints where a resource prefers two nodes equally
+
+ .. code-block:: xml
+
+ <constraints>
+ <rsc_location id="loc-1" rsc="Webserver" node="sles-1" score="INFINITY"/>
+ <rsc_location id="loc-2" rsc="Webserver" node="sles-2" score="INFINITY"/>
+ <rsc_location id="loc-3" rsc="Database" node="sles-1" score="500"/>
+ <rsc_location id="loc-4" rsc="Database" node="sles-2" score="300"/>
+ <rsc_location id="loc-5" rsc="Database" node="sles-2" score="200"/>
+ </constraints>
+
+In the example above, assuming no other constraints and an inactive
+cluster, **Webserver** would probably be placed on **sles-1** and **Database** on
+**sles-2**. The cluster would likely place **Webserver** based on the node's
+uname and **Database** based on the desire to spread the resource load
+evenly across the cluster. However, other factors can also be involved
+in more complex configurations.
+
+.. _s-rsc-pattern:
+
+Specifying locations using pattern matching
+___________________________________________
+
+A location constraint can affect all resources whose IDs match a given pattern.
+The following example bans resources named **ip-httpd**, **ip-asterisk**,
+**ip-gateway**, etc., from **node1**.
+
+.. topic:: Location constraint banning all resources matching a pattern from one node
+
+ .. code-block:: xml
+
+ <constraints>
+ <rsc_location id="ban-ips-from-node1" rsc-pattern="ip-.*" node="node1" score="-INFINITY"/>
+ </constraints>
+
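+A pattern can express a preference as well as a ban. The following sketch (a
+hypothetical fragment, assuming the same **ip-** resource naming as above)
+gives all matching resources a mild preference for **node2**:
+
+.. topic:: Location constraint preferring all resources matching a pattern on one node
+
+   .. code-block:: xml
+
+      <constraints>
+         <rsc_location id="prefer-ips-on-node2" rsc-pattern="ip-.*" node="node2" score="500"/>
+      </constraints>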
+
+.. index::
+ single: constraint; ordering
+ single: resource; start order
+
+
+.. _s-resource-ordering:
+
+Specifying the Order in which Resources Should Start/Stop
+#########################################################
+
+*Ordering constraints* tell the cluster the order in which certain
+resource actions should occur.
+
+.. important::
+
+ Ordering constraints affect *only* the ordering of resource actions;
+ they do *not* require that the resources be placed on the
+ same node. If you want resources to be started on the same node
+ *and* in a specific order, you need both an ordering constraint *and*
+ a colocation constraint (see :ref:`s-resource-colocation`), or
+ alternatively, a group (see :ref:`group-resources`).
+
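+For example, a hypothetical pair of constraints (with illustrative resource
+names **A** and **B**) that both starts **A** before **B** and keeps **B** on
+the same node as **A** might look like this:
+
+.. topic:: Combining an ordering constraint with a colocation constraint
+
+   .. code-block:: xml
+
+      <constraints>
+         <rsc_order id="order-A-then-B" first="A" then="B" kind="Mandatory"/>
+         <rsc_colocation id="B-with-A" rsc="B" with-rsc="A" score="INFINITY"/>
+      </constraints>
+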
+.. index::
+ pair: XML element; rsc_order
+ pair: constraint; rsc_order
+
+Ordering Properties
+___________________
+
+.. table:: **Attributes of a rsc_order Element**
+ :class: longtable
+ :widths: 1 2 4
+
+ +--------------+----------------------------+-------------------------------------------------------------------+
+ | Field | Default | Description |
+ +==============+============================+===================================================================+
+ | id | | .. index:: |
+ | | | single: rsc_order; attribute, id |
+ | | | single: attribute; id (rsc_order) |
+ | | | single: id; rsc_order attribute |
+ | | | |
+ | | | A unique name for the constraint |
+ +--------------+----------------------------+-------------------------------------------------------------------+
+ | first | | .. index:: |
+ | | | single: rsc_order; attribute, first |
+ | | | single: attribute; first (rsc_order) |
+ | | | single: first; rsc_order attribute |
+ | | | |
+ | | | Name of the resource that the ``then`` resource |
+ | | | depends on |
+ +--------------+----------------------------+-------------------------------------------------------------------+
+ | then | | .. index:: |
+ | | | single: rsc_order; attribute, then |
+ | | | single: attribute; then (rsc_order) |
+ | | | single: then; rsc_order attribute |
+ | | | |
+ | | | Name of the dependent resource |
+ +--------------+----------------------------+-------------------------------------------------------------------+
+ | first-action | start | .. index:: |
+ | | | single: rsc_order; attribute, first-action |
+ | | | single: attribute; first-action (rsc_order) |
+ | | | single: first-action; rsc_order attribute |
+ | | | |
+ | | | The action that the ``first`` resource must complete |
+ | | | before ``then-action`` can be initiated for the ``then`` |
+ | | | resource. Allowed values: ``start``, ``stop``, |
+ | | | ``promote``, ``demote``. |
+ +--------------+----------------------------+-------------------------------------------------------------------+
+ | then-action | value of ``first-action`` | .. index:: |
+ | | | single: rsc_order; attribute, then-action |
+ | | | single: attribute; then-action (rsc_order) |
+   |              |                            |       single: then-action; rsc_order attribute                    |
+ | | | |
+ | | | The action that the ``then`` resource can execute only |
+ | | | after the ``first-action`` on the ``first`` resource has |
+ | | | completed. Allowed values: ``start``, ``stop``, |
+ | | | ``promote``, ``demote``. |
+ +--------------+----------------------------+-------------------------------------------------------------------+
+ | kind | Mandatory | .. index:: |
+ | | | single: rsc_order; attribute, kind |
+ | | | single: attribute; kind (rsc_order) |
+ | | | single: kind; rsc_order attribute |
+ | | | |
+ | | | How to enforce the constraint. Allowed values: |
+ | | | |
+ | | | * ``Mandatory:`` ``then-action`` will never be initiated |
+ | | | for the ``then`` resource unless and until ``first-action`` |
+ | | | successfully completes for the ``first`` resource. |
+ | | | |
+ | | | * ``Optional:`` The constraint applies only if both specified |
+ | | | resource actions are scheduled in the same transition |
+ | | | (that is, in response to the same cluster state). This |
+ | | | means that ``then-action`` is allowed on the ``then`` |
+ | | | resource regardless of the state of the ``first`` resource, |
+ | | | but if both actions happen to be scheduled at the same time, |
+ | | | they will be ordered. |
+ | | | |
+ | | | * ``Serialize:`` Ensure that the specified actions are never |
+ | | | performed concurrently for the specified resources. |
+ | | | ``First-action`` and ``then-action`` can be executed in either |
+ | | | order, but one must complete before the other can be initiated. |
+ | | | An example use case is when resource start-up puts a high load |
+ | | | on the host. |
+ +--------------+----------------------------+-------------------------------------------------------------------+
+ | symmetrical | TRUE for ``Mandatory`` and | .. index:: |
+ | | ``Optional`` kinds. FALSE | single: rsc_order; attribute, symmetrical |
+   |              | ``Optional`` kinds. FALSE  |       single: attribute; symmetrical (rsc_order)                  |
+ | | | single: symmetrical; rsc_order attribute |
+ | | | |
+ | | | If true, the reverse of the constraint applies for the |
+ | | | opposite action (for example, if B starts after A starts, |
+ | | | then B stops before A stops). ``Serialize`` orders cannot |
+ | | | be symmetrical. |
+ +--------------+----------------------------+-------------------------------------------------------------------+
+
+``Promote`` and ``demote`` apply to :ref:`promotable <s-resource-promotable>`
+clone resources.
+
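+For instance, a sketch (with hypothetical resource names) of an order that
+starts an application only after a promotable clone instance has been promoted:
+
+.. topic:: Ordering an application's start after a clone's promotion
+
+   .. code-block:: xml
+
+      <rsc_order id="promote-db-then-app" first="db-clone" first-action="promote" then="app" then-action="start" kind="Mandatory"/>
+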
+Optional and mandatory ordering
+_______________________________
+
+Here is an example of ordering constraints where **Database** *must* start before
+**Webserver**, and **IP** *should* start before **Webserver** if they both need to be
+started:
+
+.. topic:: Optional and mandatory ordering constraints
+
+ .. code-block:: xml
+
+ <constraints>
+ <rsc_order id="order-1" first="IP" then="Webserver" kind="Optional"/>
+ <rsc_order id="order-2" first="Database" then="Webserver" kind="Mandatory" />
+ </constraints>
+
+Because the above example lets ``symmetrical`` default to TRUE, **Webserver**
+must be stopped before **Database** can be stopped, and **Webserver** should be
+stopped before **IP** if they both need to be stopped.
+
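+If the reverse ordering is not wanted, ``symmetrical`` can be disabled
+explicitly. A sketch with hypothetical resource names:
+
+.. topic:: Ordering constraint with no implied reverse ordering
+
+   .. code-block:: xml
+
+      <rsc_order id="order-asym" first="A" then="B" symmetrical="false"/>
+
+With this constraint, **B** starts only after **A** starts, but **A** and
+**B** may be stopped in any order.
+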
+.. index::
+ single: colocation
+ single: constraint; colocation
+ single: resource; location relative to other resources
+
+.. _s-resource-colocation:
+
+Placing Resources Relative to other Resources
+#############################################
+
+*Colocation constraints* tell the cluster that the location of one resource
+depends on the location of another one.
+
+Colocation has an important side-effect: it affects the order in which
+resources are assigned to a node. Think about it: You can't place A relative to
+B unless you know where B is [#]_.
+
+So when you are creating colocation constraints, it is important to
+consider whether you should colocate A with B, or B with A.
+
+.. important::
+
+ Colocation constraints affect *only* the placement of resources; they do *not*
+ require that the resources be started in a particular order. If you want
+ resources to be started on the same node *and* in a specific order, you need
+ both an ordering constraint (see :ref:`s-resource-ordering`) *and* a colocation
+ constraint, or alternatively, a group (see :ref:`group-resources`).
+
+.. index::
+ pair: XML element; rsc_colocation
+ single: constraint; rsc_colocation
+
+Colocation Properties
+_____________________
+
+.. table:: **Attributes of a rsc_colocation Constraint**
+ :class: longtable
+ :widths: 2 2 5
+
+ +----------------+----------------+--------------------------------------------------------+
+ | Field | Default | Description |
+ +================+================+========================================================+
+ | id | | .. index:: |
+ | | | single: rsc_colocation; attribute, id |
+ | | | single: attribute; id (rsc_colocation) |
+ | | | single: id; rsc_colocation attribute |
+ | | | |
+ | | | A unique name for the constraint (required). |
+ +----------------+----------------+--------------------------------------------------------+
+ | rsc | | .. index:: |
+ | | | single: rsc_colocation; attribute, rsc |
+ | | | single: attribute; rsc (rsc_colocation) |
+ | | | single: rsc; rsc_colocation attribute |
+ | | | |
+ | | | The name of a resource that should be located |
+ | | | relative to ``with-rsc``. A colocation constraint must |
+ | | | either contain at least one |
+ | | | :ref:`resource set <s-resource-sets>`, or specify both |
+ | | | ``rsc`` and ``with-rsc``. |
+ +----------------+----------------+--------------------------------------------------------+
+ | with-rsc | | .. index:: |
+ | | | single: rsc_colocation; attribute, with-rsc |
+ | | | single: attribute; with-rsc (rsc_colocation) |
+ | | | single: with-rsc; rsc_colocation attribute |
+ | | | |
+ | | | The name of the resource used as the colocation |
+ | | | target. The cluster will decide where to put this |
+ | | | resource first and then decide where to put ``rsc``. |
+ | | | A colocation constraint must either contain at least |
+ | | | one :ref:`resource set <s-resource-sets>`, or specify |
+ | | | both ``rsc`` and ``with-rsc``. |
+ +----------------+----------------+--------------------------------------------------------+
+ | node-attribute | #uname | .. index:: |
+ | | | single: rsc_colocation; attribute, node-attribute |
+ | | | single: attribute; node-attribute (rsc_colocation) |
+ | | | single: node-attribute; rsc_colocation attribute |
+ | | | |
+ | | | If ``rsc`` and ``with-rsc`` are specified, this node |
+ | | | attribute must be the same on the node running ``rsc`` |
+ | | | and the node running ``with-rsc`` for the constraint |
+ | | | to be satisfied. (For details, see |
+ | | | :ref:`s-coloc-attribute`.) |
+ +----------------+----------------+--------------------------------------------------------+
+ | score | 0 | .. index:: |
+ | | | single: rsc_colocation; attribute, score |
+ | | | single: attribute; score (rsc_colocation) |
+ | | | single: score; rsc_colocation attribute |
+ | | | |
+ | | | Positive values indicate the resources should run on |
+ | | | the same node. Negative values indicate the resources |
+ | | | should run on different nodes. Values of |
+ | | | +/- ``INFINITY`` change "should" to "must". |
+ +----------------+----------------+--------------------------------------------------------+
+ | rsc-role | Started | .. index:: |
+ | | | single: clone; ordering constraint, rsc-role |
+ | | | single: ordering constraint; rsc-role (clone) |
+ | | | single: rsc-role; clone ordering constraint |
+ | | | |
+ | | | If ``rsc`` and ``with-rsc`` are specified, and ``rsc`` |
+ | | | is a :ref:`promotable clone <s-resource-promotable>`, |
+ | | | the constraint applies only to ``rsc`` instances in |
+ | | | this role. Allowed values: ``Started``, ``Promoted``, |
+ | | | ``Unpromoted``. For details, see |
+ | | | :ref:`promotable-clone-constraints`. |
+ +----------------+----------------+--------------------------------------------------------+
+ | with-rsc-role | Started | .. index:: |
+ | | | single: clone; ordering constraint, with-rsc-role |
+ | | | single: ordering constraint; with-rsc-role (clone) |
+ | | | single: with-rsc-role; clone ordering constraint |
+ | | | |
+ | | | If ``rsc`` and ``with-rsc`` are specified, and |
+ | | | ``with-rsc`` is a |
+ | | | :ref:`promotable clone <s-resource-promotable>`, the |
+ | | | constraint applies only to ``with-rsc`` instances in |
+ | | | this role. Allowed values: ``Started``, ``Promoted``, |
+ | | | ``Unpromoted``. For details, see |
+ | | | :ref:`promotable-clone-constraints`. |
+ +----------------+----------------+--------------------------------------------------------+
+ | influence | value of | .. index:: |
+ | | ``critical`` | single: rsc_colocation; attribute, influence |
+ | | meta-attribute | single: attribute; influence (rsc_colocation) |
+ | | for ``rsc`` | single: influence; rsc_colocation attribute |
+ | | | |
+ | | | Whether to consider the location preferences of |
+ | | | ``rsc`` when ``with-rsc`` is already active. Allowed |
+ | | | values: ``true``, ``false``. For details, see |
+ | | | :ref:`s-coloc-influence`. *(since 2.1.0)* |
+ +----------------+----------------+--------------------------------------------------------+
+
+Mandatory Placement
+___________________
+
+Mandatory placement occurs when the constraint's score is
+**+INFINITY** or **-INFINITY**. In such cases, if the constraint can't be
+satisfied, then the **rsc** resource is not permitted to run. For
+``score=INFINITY``, this includes cases where the ``with-rsc`` resource is
+not active.
+
+If you need resource **A** to always run on the same machine as
+resource **B**, you would add the following constraint:
+
+.. topic:: Mandatory colocation constraint for two resources
+
+ .. code-block:: xml
+
+ <rsc_colocation id="colocate" rsc="A" with-rsc="B" score="INFINITY"/>
+
+Remember, because **INFINITY** was used, if **B** can't run on any
+of the cluster nodes (for whatever reason) then **A** will not
+be allowed to run. Whether **A** is running or not has no effect on **B**.
+
+Alternatively, you may want the opposite -- that **A** *cannot*
+run on the same machine as **B**. In this case, use ``score="-INFINITY"``.
+
+.. topic:: Mandatory anti-colocation constraint for two resources
+
+ .. code-block:: xml
+
+ <rsc_colocation id="anti-colocate" rsc="A" with-rsc="B" score="-INFINITY"/>
+
+Again, by specifying **-INFINITY**, the constraint is binding. So if the
+only place left to run is where **B** already is, then **A** may not run anywhere.
+
+As with **INFINITY**, **B** can run even if **A** is stopped. However, in this
+case **A** also can run if **B** is stopped, because it still meets the
+constraint of **A** and **B** not running on the same node.
+
+Advisory Placement
+__________________
+
+If mandatory placement is about "must" and "must not", then advisory
+placement is the "I'd prefer if" alternative.
+
+For colocation constraints with scores greater than **-INFINITY** and less than
+**INFINITY**, the cluster will try to accommodate your wishes, but may ignore
+them if other factors outweigh the colocation score. Those factors might
+include other constraints, resource stickiness, failure thresholds, whether
+other resources would be prevented from being active, etc.
+
+.. topic:: Advisory colocation constraint for two resources
+
+ .. code-block:: xml
+
+ <rsc_colocation id="colocate-maybe" rsc="A" with-rsc="B" score="500"/>
+
+.. _s-coloc-attribute:
+
+Colocation by Node Attribute
+____________________________
+
+The ``node-attribute`` property of a colocation constraint allows you to express
+the requirement, "these resources must be on similar nodes".
+
+As an example, imagine that you have two Storage Area Networks (SANs) that are
+not controlled by the cluster, and each node is connected to one or the other.
+You may have two resources **r1** and **r2** such that **r2** needs to use the same
+SAN as **r1**, but doesn't necessarily have to be on the same exact node.
+In such a case, you could define a :ref:`node attribute <node_attributes>` named
+**san**, with the value **san1** or **san2** on each node as appropriate. Then, you
+could colocate **r2** with **r1** using ``node-attribute`` set to **san**.
+
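+Such a constraint might be written as the following sketch (using the
+hypothetical **r1**, **r2**, and **san** names from above):
+
+.. topic:: Colocation by the value of the **san** node attribute
+
+   .. code-block:: xml
+
+      <rsc_colocation id="colocate-by-san" rsc="r2" with-rsc="r1" score="INFINITY" node-attribute="san"/>
+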
+.. _s-coloc-influence:
+
+Colocation Influence
+____________________
+
+By default, if A is colocated with B, the cluster will take into account A's
+preferences when deciding where to place B, to maximize the chance that both
+resources can run.
+
+For a detailed look at exactly how this occurs, see
+`Colocation Explained <http://clusterlabs.org/doc/Colocation_Explained.pdf>`_.
+
+However, if ``influence`` is set to ``false`` in the colocation constraint,
+this will happen only if B is inactive and needs to be started. If B is
+already active, A's preferences will have no effect on placing B.
+
+As an example of when this would be desirable, consider a nonessential
+reporting tool colocated with a resource-intensive service
+that takes a long time to start. If the reporting tool fails enough times to
+reach its migration threshold, by default the cluster will want to move both
+resources to another node if possible. Setting ``influence`` to ``false`` on
+the colocation constraint would mean that the reporting tool would be stopped
+in this situation instead, to avoid forcing the service to move.
+
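+A sketch of such a constraint, with hypothetical resource names:
+
+.. topic:: Colocation constraint that does not influence the target's placement
+
+   .. code-block:: xml
+
+      <rsc_colocation id="reports-with-service" rsc="reports" with-rsc="big-service" score="INFINITY" influence="false"/>
+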
+The ``critical`` resource meta-attribute is a convenient way to specify the
+default for all colocation constraints and groups involving a particular
+resource.
+
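+As a sketch (with a hypothetical resource definition), the same default could
+be set for all of a resource's colocations via the ``critical`` meta-attribute:
+
+.. topic:: Marking a resource as noncritical
+
+   .. code-block:: xml
+
+      <primitive id="reports" class="ocf" provider="heartbeat" type="apache">
+         <meta_attributes id="reports-meta">
+            <nvpair id="reports-meta-critical" name="critical" value="false"/>
+         </meta_attributes>
+      </primitive>
+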
+.. note::
+
+ If a noncritical resource is a member of a group, all later members of the
+ group will be treated as noncritical, even if they are marked as (or left to
+ default to) critical.
+
+
+.. _s-resource-sets:
+
+Resource Sets
+#############
+
+.. index::
+ single: constraint; resource set
+ single: resource; resource set
+
+*Resource sets* allow multiple resources to be affected by a single constraint.
+
+.. topic:: A set of 3 resources
+
+ .. code-block:: xml
+
+ <resource_set id="resource-set-example">
+ <resource_ref id="A"/>
+ <resource_ref id="B"/>
+ <resource_ref id="C"/>
+ </resource_set>
+
+Resource sets are valid inside ``rsc_location``, ``rsc_order``
+(see :ref:`s-resource-sets-ordering`), ``rsc_colocation``
+(see :ref:`s-resource-sets-colocation`), and ``rsc_ticket``
+(see :ref:`ticket-constraints`) constraints.
+
+A resource set has a number of properties that can be set, though not all
+have an effect in all contexts.
+
+.. index::
+ pair: XML element; resource_set
+
+.. table:: **Attributes of a resource_set Element**
+ :class: longtable
+ :widths: 2 2 5
+
+ +-------------+------------------+--------------------------------------------------------+
+ | Field | Default | Description |
+ +=============+==================+========================================================+
+ | id | | .. index:: |
+ | | | single: resource_set; attribute, id |
+ | | | single: attribute; id (resource_set) |
+ | | | single: id; resource_set attribute |
+ | | | |
+ | | | A unique name for the set (required) |
+ +-------------+------------------+--------------------------------------------------------+
+ | sequential | true | .. index:: |
+ | | | single: resource_set; attribute, sequential |
+ | | | single: attribute; sequential (resource_set) |
+ | | | single: sequential; resource_set attribute |
+ | | | |
+ | | | Whether the members of the set must be acted on in |
+ | | | order. Meaningful within ``rsc_order`` and |
+ | | | ``rsc_colocation``. |
+ +-------------+------------------+--------------------------------------------------------+
+ | require-all | true | .. index:: |
+ | | | single: resource_set; attribute, require-all |
+ | | | single: attribute; require-all (resource_set) |
+ | | | single: require-all; resource_set attribute |
+ | | | |
+ | | | Whether all members of the set must be active before |
+ | | | continuing. With the current implementation, the |
+ | | | cluster may continue even if only one member of the |
+ | | | set is started, but if more than one member of the set |
+ | | | is starting at the same time, the cluster will still |
+ | | | wait until all of those have started before continuing |
+ | | | (this may change in future versions). Meaningful |
+ | | | within ``rsc_order``. |
+ +-------------+------------------+--------------------------------------------------------+
+ | role | | .. index:: |
+ | | | single: resource_set; attribute, role |
+ | | | single: attribute; role (resource_set) |
+ | | | single: role; resource_set attribute |
+ | | | |
+ | | | The constraint applies only to resource set members |
+ | | | that are :ref:`s-resource-promotable` in this |
+ | | | role. Meaningful within ``rsc_location``, |
+ | | | ``rsc_colocation`` and ``rsc_ticket``. |
+ | | | Allowed values: ``Started``, ``Promoted``, |
+ | | | ``Unpromoted``. For details, see |
+ | | | :ref:`promotable-clone-constraints`. |
+ +-------------+------------------+--------------------------------------------------------+
+ | action | value of | .. index:: |
+ | | ``first-action`` | single: resource_set; attribute, action |
+ | | in the enclosing | single: attribute; action (resource_set) |
+ | | ordering | single: action; resource_set attribute |
+ | | constraint | |
+ | | | The action that applies to *all members* of the set. |
+ | | | Meaningful within ``rsc_order``. Allowed values: |
+ | | | ``start``, ``stop``, ``promote``, ``demote``. |
+ +-------------+------------------+--------------------------------------------------------+
+ | score | | .. index:: |
+ | | | single: resource_set; attribute, score |
+ | | | single: attribute; score (resource_set) |
+ | | | single: score; resource_set attribute |
+ | | | |
+ | | | *Advanced use only.* Use a specific score for this |
+ | | | set within the constraint. |
+ +-------------+------------------+--------------------------------------------------------+
+
+.. _s-resource-sets-ordering:
+
+Ordering Sets of Resources
+##########################
+
+A common situation is for an administrator to create a chain of ordered
+resources, such as:
+
+.. topic:: A chain of ordered resources
+
+ .. code-block:: xml
+
+ <constraints>
+ <rsc_order id="order-1" first="A" then="B" />
+ <rsc_order id="order-2" first="B" then="C" />
+ <rsc_order id="order-3" first="C" then="D" />
+ </constraints>
+
+.. topic:: Visual representation of the four resources' start order for the above constraints
+
+ .. image:: images/resource-set.png
+ :alt: Ordered set
+
+Ordered Set
+___________
+
+To simplify this situation, :ref:`s-resource-sets` can be used within ordering
+constraints:
+
+.. topic:: A chain of ordered resources expressed as a set
+
+ .. code-block:: xml
+
+ <constraints>
+ <rsc_order id="order-1">
+ <resource_set id="ordered-set-example" sequential="true">
+ <resource_ref id="A"/>
+ <resource_ref id="B"/>
+ <resource_ref id="C"/>
+ <resource_ref id="D"/>
+ </resource_set>
+ </rsc_order>
+ </constraints>
+
+While the set-based format is not less verbose, it is significantly easier to
+get right and maintain.
+
+.. important::
+
+ If you use a higher-level tool, pay attention to how it exposes this
+ functionality. Depending on the tool, creating a set **A B** may be equivalent to
+ **A then B**, or **B then A**.
+
+Ordering Multiple Sets
+______________________
+
+The syntax can be expanded to allow sets of resources to be ordered relative to
+each other, where the members of each individual set may be ordered or
+unordered (controlled by the ``sequential`` property). In the example below, **A**
+and **B** can both start in parallel, as can **C** and **D**, however **C** and
+**D** can only start once *both* **A** *and* **B** are active.
+
+.. topic:: Ordered sets of unordered resources
+
+ .. code-block:: xml
+
+ <constraints>
+ <rsc_order id="order-1">
+ <resource_set id="ordered-set-1" sequential="false">
+ <resource_ref id="A"/>
+ <resource_ref id="B"/>
+ </resource_set>
+ <resource_set id="ordered-set-2" sequential="false">
+ <resource_ref id="C"/>
+ <resource_ref id="D"/>
+ </resource_set>
+ </rsc_order>
+ </constraints>
+
+.. topic:: Visual representation of the start order for two ordered sets of
+ unordered resources
+
+ .. image:: images/two-sets.png
+ :alt: Two ordered sets
+
+Of course either set -- or both sets -- of resources can also be internally
+ordered (by setting ``sequential="true"``) and there is no limit to the number
+of sets that can be specified.
+
+.. topic:: Advanced use of set ordering - Three ordered sets, two of which are
+ internally unordered
+
+ .. code-block:: xml
+
+ <constraints>
+ <rsc_order id="order-1">
+ <resource_set id="ordered-set-1" sequential="false">
+ <resource_ref id="A"/>
+ <resource_ref id="B"/>
+ </resource_set>
+ <resource_set id="ordered-set-2" sequential="true">
+ <resource_ref id="C"/>
+ <resource_ref id="D"/>
+ </resource_set>
+ <resource_set id="ordered-set-3" sequential="false">
+ <resource_ref id="E"/>
+ <resource_ref id="F"/>
+ </resource_set>
+ </rsc_order>
+ </constraints>
+
+.. topic:: Visual representation of the start order for the three sets defined above
+
+ .. image:: images/three-sets.png
+ :alt: Three ordered sets
+
+.. important::
+
+ An ordered set with ``sequential=false`` makes sense only if there is another
+ set in the constraint. Otherwise, the constraint has no effect.
+
+Resource Set OR Logic
+_____________________
+
+The unordered set logic discussed so far has all been "AND" logic. To
+illustrate this, consider the three-set figure in the previous section. Those
+sets can be expressed as **(A and B) then (C) then (D) then (E and F)**.
+
+Suppose, for example, we want to change the first set, **(A and B)**, to use
+"OR" logic, so that the sets read **(A or B) then (C) then (D) then (E and F)**.
+This can be achieved with the ``require-all`` option. This option defaults to
+TRUE, which is why "AND" logic is used by default. Setting
+``require-all=false`` means only one resource in the set needs to be started
+before continuing on to the next set.
+
+.. topic:: Resource Set "OR" logic: Three ordered sets, where the first set is
+ internally unordered with "OR" logic
+
+ .. code-block:: xml
+
+ <constraints>
+ <rsc_order id="order-1">
+ <resource_set id="ordered-set-1" sequential="false" require-all="false">
+ <resource_ref id="A"/>
+ <resource_ref id="B"/>
+ </resource_set>
+ <resource_set id="ordered-set-2" sequential="true">
+ <resource_ref id="C"/>
+ <resource_ref id="D"/>
+ </resource_set>
+ <resource_set id="ordered-set-3" sequential="false">
+ <resource_ref id="E"/>
+ <resource_ref id="F"/>
+ </resource_set>
+ </rsc_order>
+ </constraints>
+
+.. important::
+
+ An ordered set with ``require-all=false`` makes sense only in conjunction with
+ ``sequential=false``. Think of it like this: ``sequential=false`` modifies the set
+ to be an unordered set using "AND" logic by default, and adding
+ ``require-all=false`` flips the unordered set's "AND" logic to "OR" logic.
+
+.. _s-resource-sets-colocation:
+
+Colocating Sets of Resources
+############################
+
+Another common situation is for an administrator to create a set of
+colocated resources.
+
+The simplest way to do this is to define a resource group (see
+:ref:`group-resources`), but that cannot always accurately express the desired
+relationships. For example, maybe the resources do not need to be ordered.
+
+Another way would be to define each relationship as an individual constraint,
+but that causes a difficult-to-follow constraint explosion as the number of
+resources and combinations grow.
+
+.. topic:: Colocation chain as individual constraints, where A is placed first,
+ then B, then C, then D
+
+ .. code-block:: xml
+
+ <constraints>
+ <rsc_colocation id="coloc-1" rsc="D" with-rsc="C" score="INFINITY"/>
+ <rsc_colocation id="coloc-2" rsc="C" with-rsc="B" score="INFINITY"/>
+ <rsc_colocation id="coloc-3" rsc="B" with-rsc="A" score="INFINITY"/>
+ </constraints>
+
+To express complicated relationships with a simplified syntax [#]_,
+:ref:`resource sets <s-resource-sets>` can be used within colocation constraints.
+
+.. topic:: Equivalent colocation chain expressed using **resource_set**
+
+ .. code-block:: xml
+
+ <constraints>
+ <rsc_colocation id="coloc-1" score="INFINITY" >
+ <resource_set id="colocated-set-example" sequential="true">
+ <resource_ref id="A"/>
+ <resource_ref id="B"/>
+ <resource_ref id="C"/>
+ <resource_ref id="D"/>
+ </resource_set>
+ </rsc_colocation>
+ </constraints>
+
+.. note::
+
+ Within a ``resource_set``, the resources are listed in the order they are
+ *placed*, which is the reverse of the order in which they are *colocated*.
+ In the above example, resource **A** is placed before resource **B**, which is
+ the same as saying resource **B** is colocated with resource **A**.
+
+As with individual constraints, a resource that can't be active prevents any
+resource that must be colocated with it from being active. In both of the two
+previous examples, if **B** is unable to run, then both **C** and by inference **D**
+must remain stopped.
+
+.. important::
+
+ If you use a higher-level tool, pay attention to how it exposes this
+ functionality. Depending on the tool, creating a set **A B** may be equivalent to
+ **A with B**, or **B with A**.
+
+Resource sets can also be used to tell the cluster that entire *sets* of
+resources must be colocated relative to each other, while the individual
+members within any one set may or may not be colocated relative to each other
+(determined by the set's ``sequential`` property).
+
+In the following example, resources **B**, **C**, and **D** will each be colocated
+with **A** (which will be placed first). **A** must be able to run in order for any
+of the resources to run, but any of **B**, **C**, or **D** may be stopped without
+affecting any of the others.
+
+.. topic:: Using colocated sets to specify a shared dependency
+
+ .. code-block:: xml
+
+ <constraints>
+ <rsc_colocation id="coloc-1" score="INFINITY" >
+ <resource_set id="colocated-set-2" sequential="false">
+ <resource_ref id="B"/>
+ <resource_ref id="C"/>
+ <resource_ref id="D"/>
+ </resource_set>
+ <resource_set id="colocated-set-1" sequential="true">
+ <resource_ref id="A"/>
+ </resource_set>
+ </rsc_colocation>
+ </constraints>
+
+.. note::
+
+ Pay close attention to the order in which resources and sets are listed.
+ While the members of any one sequential set are placed first to last (i.e., the
+ colocation dependency is last with first), multiple sets are placed last to
+   first (i.e., the colocation dependency is first with last).
+
+.. important::
+
+ A colocated set with ``sequential="false"`` makes sense only if there is
+ another set in the constraint. Otherwise, the constraint has no effect.
+
+There is no inherent limit to the number and size of the sets used.
+The only thing that matters is that in order for any member of one set
+in the constraint to be active, all members of sets listed after it must also
+be active (and naturally on the same node); and if a set has ``sequential="true"``,
+then in order for one member of that set to be active, all members listed
+before it must also be active.
+
+If desired, you can restrict the dependency to instances of promotable clone
+resources that are in a specific role, using the set's ``role`` property.
+
+.. topic:: Colocation in which the members of the middle set have no
+ interdependencies, and the last set listed applies only to promoted
+ instances
+
+ .. code-block:: xml
+
+ <constraints>
+ <rsc_colocation id="coloc-1" score="INFINITY" >
+ <resource_set id="colocated-set-1" sequential="true">
+ <resource_ref id="F"/>
+ <resource_ref id="G"/>
+ </resource_set>
+ <resource_set id="colocated-set-2" sequential="false">
+ <resource_ref id="C"/>
+ <resource_ref id="D"/>
+ <resource_ref id="E"/>
+ </resource_set>
+ <resource_set id="colocated-set-3" sequential="true" role="Promoted">
+ <resource_ref id="A"/>
+ <resource_ref id="B"/>
+ </resource_set>
+ </rsc_colocation>
+ </constraints>
+
+.. topic:: Visual representation of the above example (resources are placed from
+ left to right)
+
+ .. image:: ../shared/images/pcmk-colocated-sets.png
+ :alt: Colocation chain
+
+.. note::
+
+ Unlike ordered sets, colocated sets do not use the ``require-all`` option.
+
+
+External Resource Dependencies
+##############################
+
+Sometimes, a resource will depend on services that are not managed by the
+cluster. An example might be a resource that requires a file system that is
+not managed by the cluster but mounted by systemd at boot time.
+
+To accommodate this, the pacemaker systemd service depends on a normally empty
+target called ``resource-agents-deps.target``. The system administrator may
+create a unit drop-in for that target specifying the dependencies, to ensure
+that the services are started before Pacemaker starts and stopped after
+Pacemaker stops.
+
+Typically, this is accomplished by placing a unit file in the
+``/etc/systemd/system/resource-agents-deps.target.d`` directory, with directives
+such as ``Requires`` and ``After`` specifying the dependencies as needed.
+
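+For example, if cluster resources need a file system on a (hypothetical)
+systemd mount unit ``srv-data.mount``, a drop-in file such as
+``/etc/systemd/system/resource-agents-deps.target.d/mounts.conf`` could
+declare the dependency:
+
+.. code-block:: none
+
+   [Unit]
+   Requires=srv-data.mount
+   After=srv-data.mount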
+
+.. [#] While the human brain is sophisticated enough to read the constraint
+ in any order and choose the correct one depending on the situation,
+ the cluster is not quite so smart. Yet.
+
+.. [#] which is not the same as saying easy to follow
diff --git a/doc/sphinx/Pacemaker_Explained/fencing.rst b/doc/sphinx/Pacemaker_Explained/fencing.rst
new file mode 100644
index 0000000..109b4da
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Explained/fencing.rst
@@ -0,0 +1,1298 @@
+.. index::
+ single: fencing
+ single: STONITH
+
+.. _fencing:
+
+Fencing
+-------
+
+What Is Fencing?
+################
+
+*Fencing* is the ability to make a node unable to run resources, even when that
+node is unresponsive to cluster commands.
+
+Fencing is also known as *STONITH*, an acronym for "Shoot The Other Node In The
+Head", since the most common fencing method is cutting power to the node.
+Another method is "fabric fencing", cutting the node's access to some
+capability required to run resources (such as network access or a shared disk).
+
+.. index::
+ single: fencing; why necessary
+
+Why Is Fencing Necessary?
+#########################
+
+Fencing protects your data from being corrupted by malfunctioning nodes or
+unintentional concurrent access to shared resources.
+
+Fencing protects against the "split brain" failure scenario, where cluster
+nodes have lost the ability to reliably communicate with each other but are
+still able to run resources. If the cluster just assumed that uncommunicative
+nodes were down, then multiple instances of a resource could be started on
+different nodes.
+
+The effect of split brain depends on the resource type. For example, an IP
+address brought up on two hosts on a network will cause packets to randomly be
+sent to one or the other host, rendering the IP useless. For a database or
+clustered file system, the effect could be much more severe, causing data
+corruption or divergence.
+
+Fencing is also used when a resource cannot otherwise be stopped. If a
+resource fails to stop on a node, it cannot be started on a different node
+without risking the same type of conflict as split-brain. Fencing the
+original node ensures the resource can be safely started elsewhere.
+
+Users may also configure the ``on-fail`` property of :ref:`operation` or the
+``loss-policy`` property of
+:ref:`ticket constraints <ticket-constraints>` to ``fence``, in which
+case the cluster will fence the resource's node if the operation fails or the
+ticket is lost.
+
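+As a sketch (the resource and operation names are hypothetical), an operation
+configured to fence its node on failure could look like:
+
+.. code-block:: xml
+
+   <op id="db-stop" name="stop" interval="0" timeout="60s" on-fail="fence"/>
+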
+.. index::
+ single: fencing; device
+
+Fence Devices
+#############
+
+A *fence device* or *fencing device* is a special type of resource that
+provides the means to fence a node.
+
+Examples of fencing devices include intelligent power switches and IPMI devices
+that accept SNMP commands to cut power to a node, and iSCSI controllers that
+allow SCSI reservations to be used to cut a node's access to a shared disk.
+
+Since fencing devices will be used to recover from loss of networking
+connectivity to other nodes, it is essential that they do not rely on the same
+network as the cluster itself; otherwise, that network becomes a single point
+of failure.
+
+Since loss of a node due to power outage is indistinguishable from loss of
+network connectivity to that node, it is also essential that at least one fence
+device for a node does not share power with that node. For example, an on-board
+IPMI controller that shares power with its host should not be used as the sole
+fencing device for that host.
+
+Since fencing is used to isolate malfunctioning nodes, no fence device should
+rely on its target functioning properly. This includes, for example, devices
+that ssh into a node and issue a shutdown command (such devices might be
+suitable for testing, but never for production).
+
+.. index::
+ single: fencing; agent
+
+Fence Agents
+############
+
+A *fence agent* or *fencing agent* is a ``stonith``-class resource agent.
+
+The fence agent standard provides commands (such as ``off`` and ``reboot``)
+that the cluster can use to fence nodes. As with other resource agent classes,
+this allows a layer of abstraction so that Pacemaker doesn't need any knowledge
+about specific fencing technologies -- that knowledge is isolated in the agent.
+
+Pacemaker supports two fence agent standards, both inherited from
+no-longer-active projects:
+
+* Red Hat Cluster Suite (RHCS) style: These are typically installed in
+ ``/usr/sbin`` with names starting with ``fence_``.
+
+* Linux-HA style: These typically have names starting with ``external/``.
+ Pacemaker can support these agents using the **fence_legacy** RHCS-style
+ agent as a wrapper, *if* support was enabled when Pacemaker was built, which
+ requires the ``cluster-glue`` library.
+
+When a Fence Device Can Be Used
+###############################
+
+Fencing devices do not actually "run" like most services. Typically, they just
+provide an interface for sending commands to an external device.
+
+Additionally, fencing may be initiated by Pacemaker, by other cluster-aware
+software such as DRBD or DLM, or manually by an administrator, at any point in
+the cluster life cycle, including before any resources have been started.
+
+To accommodate this, Pacemaker does not require the fence device resource to be
+"started" in order to be used. Whether a fence device is started or not
+determines whether a node runs any recurring monitor for the device, and gives
+the node a slight preference for being chosen to execute fencing using that
+device.
+
+By default, any node can execute any fencing device. If a fence device is
+disabled by setting its ``target-role`` to ``Stopped``, then no node can use
+that device. If a location constraint with a negative score prevents a specific
+node from "running" a fence device, then that node will never be chosen to
+execute fencing using the device. A node may fence itself, but the cluster will
+choose that only if no other nodes can do the fencing.
+
+A common configuration scenario is to have one fence device per target node.
+In such a case, users often configure anti-location constraints so that
+the target node does not monitor its own device.
+
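+As a sketch (the resource and node names are hypothetical), such an
+anti-location constraint could look like:
+
+.. code-block:: xml
+
+   <rsc_location id="loc-fence-pcmk-1-avoids-pcmk-1" rsc="fence-pcmk-1"
+                 node="pcmk-1" score="-INFINITY"/>
+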
+Limitations of Fencing Resources
+################################
+
+Fencing resources have certain limitations that other resource classes don't:
+
+* They may have only one set of meta-attributes and one set of instance
+ attributes.
+* If :ref:`rules` are used to determine fencing resource options, these
+ might be evaluated only when first read, meaning that later changes to the
+  rules will have no effect. Therefore, to avoid confusion, it is better not
+  to use rules at all with fencing resources.
+
+These limitations could be revisited if there is sufficient user demand.
+
+.. index::
+ single: fencing; special instance attributes
+
+.. _fencing-attributes:
+
+Special Meta-Attributes for Fencing Resources
+#############################################
+
+The table below lists special resource meta-attributes that may be set for any
+fencing resource.
+
+.. table:: **Additional Properties of Fencing Resources**
+ :widths: 2 1 2 4
+
+
+ +----------------------+---------+--------------------+----------------------------------------+
+ | Field | Type | Default | Description |
+ +======================+=========+====================+========================================+
+ | provides | string | | .. index:: |
+ | | | | single: provides |
+ | | | | |
+ | | | | Any special capability provided by the |
+ | | | | fence device. Currently, only one such |
+ | | | | capability is meaningful: |
+ | | | | :ref:`unfencing <unfencing>`. |
+ +----------------------+---------+--------------------+----------------------------------------+
+
+Special Instance Attributes for Fencing Resources
+#################################################
+
+The table below lists special instance attributes that may be set for any
+fencing resource (*not* meta-attributes, even though they are interpreted by
+Pacemaker rather than the fence agent). These are also listed in the man page
+for ``pacemaker-fenced``.
+
+.. Not_Yet_Implemented:
+
+ +----------------------+---------+--------------------+----------------------------------------+
+ | priority | integer | 0 | .. index:: |
+ | | | | single: priority |
+ | | | | |
+ | | | | The priority of the fence device. |
+ | | | | Devices are tried in order of highest |
+ | | | | priority to lowest. |
+ +----------------------+---------+--------------------+----------------------------------------+
+
+.. table:: **Additional Properties of Fencing Resources**
+ :class: longtable
+ :widths: 2 1 2 4
+
+ +----------------------+---------+--------------------+----------------------------------------+
+ | Field | Type | Default | Description |
+ +======================+=========+====================+========================================+
+ | stonith-timeout | time | | .. index:: |
+ | | | | single: stonith-timeout |
+ | | | | |
+ | | | | This is not used by Pacemaker (see the |
+ | | | | ``pcmk_reboot_timeout``, |
+ | | | | ``pcmk_off_timeout``, etc. properties |
+ | | | | instead), but it may be used by |
+ | | | | Linux-HA fence agents. |
+ +----------------------+---------+--------------------+----------------------------------------+
+ | pcmk_host_map | string | | .. index:: |
+ | | | | single: pcmk_host_map |
+ | | | | |
+ | | | | A mapping of node names to ports |
+ | | | | for devices that do not understand |
+ | | | | the node names. |
+ | | | | |
+ | | | | Example: ``node1:1;node2:2,3`` tells |
+ | | | | the cluster to use port 1 for |
+ | | | | ``node1`` and ports 2 and 3 for |
+ | | | | ``node2``. If ``pcmk_host_check`` is |
+ | | | | explicitly set to ``static-list``, |
+ | | | | either this or ``pcmk_host_list`` must |
+ | | | | be set. The port portion of the map |
+ | | | | may contain special characters such as |
+ | | | | spaces if preceded by a backslash |
+ | | | | *(since 2.1.2)*. |
+ +----------------------+---------+--------------------+----------------------------------------+
+ | pcmk_host_list | string | | .. index:: |
+ | | | | single: pcmk_host_list |
+ | | | | |
+ | | | | A list of machines controlled by this |
+ | | | | device. If ``pcmk_host_check`` is |
+ | | | | explicitly set to ``static-list``, |
+ | | | | either this or ``pcmk_host_map`` must |
+ | | | | be set. |
+ +----------------------+---------+--------------------+----------------------------------------+
+ | pcmk_host_check | string | Value appropriate | .. index:: |
+ | | | to other | single: pcmk_host_check |
+ | | | parameters (see | |
+ | | | "Default Check | The method Pacemaker should use to |
+ | | | Type" below) | determine which nodes can be targeted |
+ | | | | by this device. Allowed values: |
+ | | | | |
+ | | | | * ``static-list:`` targets are listed |
+ | | | | in the ``pcmk_host_list`` or |
+ | | | | ``pcmk_host_map`` attribute |
+ | | | | * ``dynamic-list:`` query the device |
+ | | | | via the agent's ``list`` action |
+ | | | | * ``status:`` query the device via the |
+ | | | | agent's ``status`` action |
+ | | | | * ``none:`` assume the device can |
+ | | | | fence any node |
+ +----------------------+---------+--------------------+----------------------------------------+
+ | pcmk_delay_max | time | 0s | .. index:: |
+ | | | | single: pcmk_delay_max |
+ | | | | |
+ | | | | Enable a delay of no more than the |
+ | | | | time specified before executing |
+ | | | | fencing actions. Pacemaker derives the |
+ | | | | overall delay by taking the value of |
+ | | | | pcmk_delay_base and adding a random |
+ | | | | delay value such that the sum is kept |
+ | | | | below this maximum. This is sometimes |
+ | | | | used in two-node clusters to ensure |
+ | | | | that the nodes don't fence each other |
+ | | | | at the same time. |
+ +----------------------+---------+--------------------+----------------------------------------+
+ | pcmk_delay_base | time | 0s | .. index:: |
+ | | | | single: pcmk_delay_base |
+ | | | | |
+ | | | | Enable a static delay before executing |
+ | | | | fencing actions. This can be used, for |
+ | | | | example, in two-node clusters to |
+ | | | | ensure that the nodes don't fence each |
+ | | | | other, by having separate fencing |
+ | | | | resources with different values. The |
+ | | | | node that is fenced with the shorter |
+ | | | | delay will lose a fencing race. The |
+   |                      |         |                    | overall delay introduced by Pacemaker  |
+ | | | | is derived from this value plus a |
+ | | | | random delay such that the sum is kept |
+ | | | | below the maximum delay. A single |
+ | | | | device can have different delays per |
+ | | | | node using a host map *(since 2.1.2)*, |
+   |                      |         |                    | for example ``node1:0s;node2:5s``.     |
+ +----------------------+---------+--------------------+----------------------------------------+
+ | pcmk_action_limit | integer | 1 | .. index:: |
+ | | | | single: pcmk_action_limit |
+ | | | | |
+ | | | | The maximum number of actions that can |
+ | | | | be performed in parallel on this |
+ | | | | device. A value of -1 means unlimited. |
+ | | | | Node fencing actions initiated by the |
+ | | | | cluster (as opposed to an administrator|
+ | | | | running the ``stonith_admin`` tool or |
+ | | | | the fencer running recurring device |
+ | | | | monitors and ``status`` and ``list`` |
+ | | | | commands) are additionally subject to |
+ | | | | the ``concurrent-fencing`` cluster |
+ | | | | property. |
+ +----------------------+---------+--------------------+----------------------------------------+
+ | pcmk_host_argument | string | ``port`` otherwise | .. index:: |
+ | | | ``plug`` if | single: pcmk_host_argument |
+ | | | supported | |
+ | | | according to the | *Advanced use only.* Which parameter |
+ | | | metadata of the | should be supplied to the fence agent |
+ | | | fence agent | to identify the node to be fenced. |
+ | | | | Some devices support neither the |
+ | | | | standard ``plug`` nor the deprecated |
+ | | | | ``port`` parameter, or may provide |
+ | | | | additional ones. Use this to specify |
+ | | | | an alternate, device-specific |
+ | | | | parameter. A value of ``none`` tells |
+ | | | | the cluster not to supply any |
+ | | | | additional parameters. |
+ +----------------------+---------+--------------------+----------------------------------------+
+ | pcmk_reboot_action | string | reboot | .. index:: |
+ | | | | single: pcmk_reboot_action |
+ | | | | |
+ | | | | *Advanced use only.* The command to |
+ | | | | send to the resource agent in order to |
+ | | | | reboot a node. Some devices do not |
+ | | | | support the standard commands or may |
+ | | | | provide additional ones. Use this to |
+ | | | | specify an alternate, device-specific |
+ | | | | command. |
+ +----------------------+---------+--------------------+----------------------------------------+
+ | pcmk_reboot_timeout | time | 60s | .. index:: |
+ | | | | single: pcmk_reboot_timeout |
+ | | | | |
+ | | | | *Advanced use only.* Specify an |
+ | | | | alternate timeout to use for |
+ | | | | ``reboot`` actions instead of the |
+ | | | | value of ``stonith-timeout``. Some |
+ | | | | devices need much more or less time to |
+ | | | | complete than normal. Use this to |
+ | | | | specify an alternate, device-specific |
+ | | | | timeout. |
+ +----------------------+---------+--------------------+----------------------------------------+
+ | pcmk_reboot_retries | integer | 2 | .. index:: |
+ | | | | single: pcmk_reboot_retries |
+ | | | | |
+ | | | | *Advanced use only.* The maximum |
+ | | | | number of times to retry the |
+ | | | | ``reboot`` command within the timeout |
+ | | | | period. Some devices do not support |
+ | | | | multiple connections, and operations |
+ | | | | may fail if the device is busy with |
+ | | | | another task, so Pacemaker will |
+ | | | | automatically retry the operation, if |
+ | | | | there is time remaining. Use this |
+ | | | | option to alter the number of times |
+ | | | | Pacemaker retries before giving up. |
+ +----------------------+---------+--------------------+----------------------------------------+
+ | pcmk_off_action | string | off | .. index:: |
+ | | | | single: pcmk_off_action |
+ | | | | |
+ | | | | *Advanced use only.* The command to |
+ | | | | send to the resource agent in order to |
+ | | | | shut down a node. Some devices do not |
+ | | | | support the standard commands or may |
+ | | | | provide additional ones. Use this to |
+ | | | | specify an alternate, device-specific |
+ | | | | command. |
+ +----------------------+---------+--------------------+----------------------------------------+
+ | pcmk_off_timeout | time | 60s | .. index:: |
+ | | | | single: pcmk_off_timeout |
+ | | | | |
+ | | | | *Advanced use only.* Specify an |
+ | | | | alternate timeout to use for |
+ | | | | ``off`` actions instead of the |
+ | | | | value of ``stonith-timeout``. Some |
+ | | | | devices need much more or less time to |
+ | | | | complete than normal. Use this to |
+ | | | | specify an alternate, device-specific |
+ | | | | timeout. |
+ +----------------------+---------+--------------------+----------------------------------------+
+ | pcmk_off_retries | integer | 2 | .. index:: |
+ | | | | single: pcmk_off_retries |
+ | | | | |
+ | | | | *Advanced use only.* The maximum |
+ | | | | number of times to retry the |
+ | | | | ``off`` command within the timeout |
+ | | | | period. Some devices do not support |
+ | | | | multiple connections, and operations |
+ | | | | may fail if the device is busy with |
+ | | | | another task, so Pacemaker will |
+ | | | | automatically retry the operation, if |
+ | | | | there is time remaining. Use this |
+ | | | | option to alter the number of times |
+ | | | | Pacemaker retries before giving up. |
+ +----------------------+---------+--------------------+----------------------------------------+
+ | pcmk_list_action | string | list | .. index:: |
+ | | | | single: pcmk_list_action |
+ | | | | |
+ | | | | *Advanced use only.* The command to |
+ | | | | send to the resource agent in order to |
+ | | | | list nodes. Some devices do not |
+ | | | | support the standard commands or may |
+ | | | | provide additional ones. Use this to |
+ | | | | specify an alternate, device-specific |
+ | | | | command. |
+ +----------------------+---------+--------------------+----------------------------------------+
+ | pcmk_list_timeout | time | 60s | .. index:: |
+ | | | | single: pcmk_list_timeout |
+ | | | | |
+ | | | | *Advanced use only.* Specify an |
+ | | | | alternate timeout to use for |
+ | | | | ``list`` actions instead of the |
+ | | | | value of ``stonith-timeout``. Some |
+ | | | | devices need much more or less time to |
+ | | | | complete than normal. Use this to |
+ | | | | specify an alternate, device-specific |
+ | | | | timeout. |
+ +----------------------+---------+--------------------+----------------------------------------+
+ | pcmk_list_retries | integer | 2 | .. index:: |
+ | | | | single: pcmk_list_retries |
+ | | | | |
+ | | | | *Advanced use only.* The maximum |
+ | | | | number of times to retry the |
+ | | | | ``list`` command within the timeout |
+ | | | | period. Some devices do not support |
+ | | | | multiple connections, and operations |
+ | | | | may fail if the device is busy with |
+ | | | | another task, so Pacemaker will |
+ | | | | automatically retry the operation, if |
+ | | | | there is time remaining. Use this |
+ | | | | option to alter the number of times |
+ | | | | Pacemaker retries before giving up. |
+ +----------------------+---------+--------------------+----------------------------------------+
+ | pcmk_monitor_action | string | monitor | .. index:: |
+ | | | | single: pcmk_monitor_action |
+ | | | | |
+ | | | | *Advanced use only.* The command to |
+ | | | | send to the resource agent in order to |
+ | | | | report extended status. Some devices do|
+ | | | | not support the standard commands or |
+ | | | | may provide additional ones. Use this |
+ | | | | to specify an alternate, |
+ | | | | device-specific command. |
+ +----------------------+---------+--------------------+----------------------------------------+
+ | pcmk_monitor_timeout | time | 60s | .. index:: |
+ | | | | single: pcmk_monitor_timeout |
+ | | | | |
+ | | | | *Advanced use only.* Specify an |
+ | | | | alternate timeout to use for |
+ | | | | ``monitor`` actions instead of the |
+ | | | | value of ``stonith-timeout``. Some |
+ | | | | devices need much more or less time to |
+ | | | | complete than normal. Use this to |
+ | | | | specify an alternate, device-specific |
+ | | | | timeout. |
+ +----------------------+---------+--------------------+----------------------------------------+
+ | pcmk_monitor_retries | integer | 2 | .. index:: |
+ | | | | single: pcmk_monitor_retries |
+ | | | | |
+ | | | | *Advanced use only.* The maximum |
+ | | | | number of times to retry the |
+ | | | | ``monitor`` command within the timeout |
+ | | | | period. Some devices do not support |
+ | | | | multiple connections, and operations |
+ | | | | may fail if the device is busy with |
+ | | | | another task, so Pacemaker will |
+ | | | | automatically retry the operation, if |
+ | | | | there is time remaining. Use this |
+ | | | | option to alter the number of times |
+ | | | | Pacemaker retries before giving up. |
+ +----------------------+---------+--------------------+----------------------------------------+
+ | pcmk_status_action | string | status | .. index:: |
+ | | | | single: pcmk_status_action |
+ | | | | |
+ | | | | *Advanced use only.* The command to |
+ | | | | send to the resource agent in order to |
+ | | | | report status. Some devices do |
+ | | | | not support the standard commands or |
+ | | | | may provide additional ones. Use this |
+ | | | | to specify an alternate, |
+ | | | | device-specific command. |
+ +----------------------+---------+--------------------+----------------------------------------+
+ | pcmk_status_timeout | time | 60s | .. index:: |
+ | | | | single: pcmk_status_timeout |
+ | | | | |
+ | | | | *Advanced use only.* Specify an |
+ | | | | alternate timeout to use for |
+ | | | | ``status`` actions instead of the |
+ | | | | value of ``stonith-timeout``. Some |
+ | | | | devices need much more or less time to |
+ | | | | complete than normal. Use this to |
+ | | | | specify an alternate, device-specific |
+ | | | | timeout. |
+ +----------------------+---------+--------------------+----------------------------------------+
+ | pcmk_status_retries | integer | 2 | .. index:: |
+ | | | | single: pcmk_status_retries |
+ | | | | |
+ | | | | *Advanced use only.* The maximum |
+ | | | | number of times to retry the |
+ | | | | ``status`` command within the timeout |
+ | | | | period. Some devices do not support |
+ | | | | multiple connections, and operations |
+ | | | | may fail if the device is busy with |
+ | | | | another task, so Pacemaker will |
+ | | | | automatically retry the operation, if |
+ | | | | there is time remaining. Use this |
+ | | | | option to alter the number of times |
+ | | | | Pacemaker retries before giving up. |
+ +----------------------+---------+--------------------+----------------------------------------+
+
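+As a sketch (the agent type and all names are hypothetical), a fence device
+targeting ``pcmk-1`` with a 5-second static delay, to avoid a fence race in a
+two-node cluster, could be configured as:
+
+.. code-block:: xml
+
+   <primitive id="fence-pcmk-1" class="stonith" type="fence_ipmilan">
+     <instance_attributes id="fence-pcmk-1-params">
+       <nvpair id="fence-pcmk-1-hosts" name="pcmk_host_list" value="pcmk-1"/>
+       <nvpair id="fence-pcmk-1-delay" name="pcmk_delay_base" value="5s"/>
+     </instance_attributes>
+   </primitive>
+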
+Default Check Type
+##################
+
+If the user does not explicitly configure ``pcmk_host_check`` for a fence
+device, a default value appropriate to other configured parameters will be
+used:
+
+* If either ``pcmk_host_list`` or ``pcmk_host_map`` is configured,
+ ``static-list`` will be used;
+* otherwise, if the fence device supports the ``list`` action, and the first
+ attempt at using ``list`` succeeds, ``dynamic-list`` will be used;
+* otherwise, if the fence device supports the ``status`` action, ``status``
+ will be used;
+* otherwise, ``none`` will be used.
+
+.. index::
+ single: unfencing
+ single: fencing; unfencing
+
+.. _unfencing:
+
+Unfencing
+#########
+
+With fabric fencing (such as cutting network or shared disk access rather than
+power), it is expected that the cluster will fence the node, and then a system
+administrator must manually investigate what went wrong, correct any issues
+found, then reboot (or restart the cluster services on) the node.
+
+Once the node reboots and rejoins the cluster, some fabric fencing devices
+require an explicit command to restore the node's access. This capability is
+called *unfencing* and is typically implemented as the fence agent's ``on``
+command.
+
+If any cluster resource has ``requires`` set to ``unfencing``, then that
+resource will not be probed or started on a node until that node has been
+unfenced.
+
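+As a sketch (the resource names are hypothetical, and the agent's required
+parameters are omitted), a resource can be marked as requiring unfencing with
+the ``requires`` meta-attribute:
+
+.. code-block:: xml
+
+   <primitive id="shared-fs" class="ocf" provider="heartbeat" type="Filesystem">
+     <meta_attributes id="shared-fs-meta">
+       <nvpair id="shared-fs-requires" name="requires" value="unfencing"/>
+     </meta_attributes>
+   </primitive>
+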
+Fencing and Quorum
+##################
+
+In general, a cluster partition may execute fencing only if the partition has
+quorum, and the ``stonith-enabled`` cluster property is set to true. However,
+there are exceptions:
+
+* The requirements apply only to fencing initiated by Pacemaker. If an
+ administrator initiates fencing using the ``stonith_admin`` command, or an
+ external application such as DLM initiates fencing using Pacemaker's C API,
+ the requirements do not apply.
+
+* A cluster partition without quorum is allowed to fence any active member of
+ that partition. As a corollary, this allows a ``no-quorum-policy`` of
+ ``suicide`` to work.
+
+* If the ``no-quorum-policy`` cluster property is set to ``ignore``, then
+ quorum is not required to execute fencing of any node.
+
+Fencing Timeouts
+################
+
+Fencing timeouts are complicated, since a single fencing operation can involve
+many steps, each of which may have a separate timeout.
+
+Fencing may be initiated in one of several ways:
+
+* An administrator may initiate fencing using the ``stonith_admin`` tool,
+ which has a ``--timeout`` option (defaulting to 2 minutes) that will be used
+ as the fence operation timeout.
+
+* An external application such as DLM may initiate fencing using the Pacemaker
+ C API. The application will specify the fence operation timeout in this case,
+ which might or might not be configurable by the user.
+
+* The cluster may initiate fencing itself. In this case, the
+ ``stonith-timeout`` cluster property (defaulting to 1 minute) will be used as
+ the fence operation timeout.
+
+However fencing is initiated, the initiator contacts Pacemaker's fencer
+(``pacemaker-fenced``) to request fencing. This connection and request has its
+own timeout, separate from the fencing operation timeout, but usually happens
+very quickly.
+
+The fencer will contact all fencers in the cluster to ask what devices they
+have available to fence the target node. The fence operation timeout will be
+used as the timeout for each of these queries.
+
+Once a fencing device has been selected, the fencer will check whether any
+action-specific timeout has been configured for the device, to use instead of
+the fence operation timeout. For example, if ``stonith-timeout`` is 60 seconds,
+but the fencing device has ``pcmk_reboot_timeout`` configured as 90 seconds,
+then a timeout of 90 seconds will be used for reboot actions using that device.
+
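+The device-specific timeout from that example could be configured as follows
+(a sketch; the agent type and names are hypothetical):
+
+.. code-block:: xml
+
+   <primitive id="fence-slow" class="stonith" type="fence_ipmilan">
+     <instance_attributes id="fence-slow-params">
+       <nvpair id="fence-slow-reboot-timeout" name="pcmk_reboot_timeout" value="90s"/>
+     </instance_attributes>
+   </primitive>
+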
+A device may have retries configured, in which case the timeout applies across
+all attempts. For example, if a device has ``pcmk_reboot_retries`` configured
+as 2, and the first reboot attempt fails, the second attempt will only have
+whatever time is remaining in the action timeout after subtracting how much
+time the first attempt used. This means that if the first attempt fails due to
+using the entire timeout, no further attempts will be made. There is currently
+no way to configure a per-attempt timeout.
+
+If more than one device is required to fence a target, whether due to failure
+of the first device or a fencing topology with multiple devices configured for
+the target, each device will have its own separate action timeout.
+
+For all of the above timeouts, the fencer will generally multiply the
+configured value by 1.2 to get an actual value to use, to account for time
+needed by the fencer's own processing.
+
+Separate from the fencer's timeouts, some fence agents have internal timeouts
+for individual steps of their fencing process. These agents often have
+parameters to configure these timeouts, such as ``login-timeout``,
+``shell-timeout``, or ``power-timeout``. Many such agents also have a
+``disable-timeout`` parameter to ignore their internal timeouts and just let
+Pacemaker handle the timeout. This causes a difference in retry behavior.
+If ``disable-timeout`` is not set, and the agent hits one of its internal
+timeouts, it will report that as a failure to Pacemaker, which can then retry.
+If ``disable-timeout`` is set, and Pacemaker hits a timeout for the agent, then
+there will be no time remaining, and no retry will be done.
+
+Fence Devices Dependent on Other Resources
+##########################################
+
+In some cases, a fence device may require some other cluster resource (such as
+an IP address) to be active in order to function properly.
+
+This is obviously undesirable in general: fencing may be required when the
+depended-on resource is not active, or fencing may be required because the node
+running the depended-on resource is no longer responding.
+
+However, this may be acceptable under certain conditions:
+
+* The dependent fence device should not be able to target any node that is
+ allowed to run the depended-on resource.
+
+* The depended-on resource should not be disabled during production operation.
+
+* The ``concurrent-fencing`` cluster property should be set to ``true``.
+ Otherwise, if both the node running the depended-on resource and some node
+ targeted by the dependent fence device need to be fenced, the fencing of the
+ node running the depended-on resource might be ordered first, making the
+ second fencing impossible and blocking further recovery. With concurrent
+ fencing, the dependent fence device might fail at first due to the
+ depended-on resource being unavailable, but it will be retried and eventually
+ succeed once the resource is brought back up.
+
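+For example, the ``concurrent-fencing`` cluster property could be enabled
+with:
+
+.. code-block:: none
+
+   # crm_attribute --type crm_config --name concurrent-fencing --update true
+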
+Even under those conditions, there is one unlikely problem scenario. The DC
+always schedules fencing of itself after any other fencing needed, to avoid
+unnecessary repeated DC elections. If the dependent fence device targets the
+DC, and both the DC and a different node running the depended-on resource need
+to be fenced, the DC fencing will always fail and block further recovery. Note,
+however, that losing a DC node entirely causes some other node to become DC and
+schedule the fencing, so this is only a risk when a stop or other operation
+with ``on-fail`` set to ``fencing`` fails on the DC.
+
+.. index::
+ single: fencing; configuration
+
+Configuring Fencing
+###################
+
+Higher-level tools can provide simpler interfaces to this process, but using
+Pacemaker command-line tools, a fence device can be configured as follows.
+
+#. Find the correct driver:
+
+ .. code-block:: none
+
+ # stonith_admin --list-installed
+
+ .. note::
+
+ You may have to install packages to make fence agents available on your
+ host. Searching your available packages for ``fence-`` is usually
+ helpful. Ensure the packages providing the fence agents you require are
+ installed on every cluster node.
+
+#. Find the required parameters associated with the device
+ (replacing ``$AGENT_NAME`` with the name obtained from the previous step):
+
+ .. code-block:: none
+
+ # stonith_admin --metadata --agent $AGENT_NAME
+
+#. Create a file called ``stonith.xml`` containing a primitive resource
+ with a class of ``stonith``, a type equal to the agent name obtained earlier,
+ and a parameter for each of the values returned in the previous step.
+
+#. If the device does not know how to fence nodes based on their uname,
+ you may also need to set the special ``pcmk_host_map`` parameter. See
+ :ref:`fencing-attributes` for details.
+
+#. If the device does not support the ``list`` command, you may also need
+ to set the special ``pcmk_host_list`` and/or ``pcmk_host_check``
+ parameters. See :ref:`fencing-attributes` for details.
+
+#. If the device does not expect the target to be specified with the
+ ``port`` parameter, you may also need to set the special
+ ``pcmk_host_argument`` parameter. See :ref:`fencing-attributes` for details.
+
+#. Upload it into the CIB using cibadmin:
+
+ .. code-block:: none
+
+ # cibadmin --create --scope resources --xml-file stonith.xml
+
+#. Set ``stonith-enabled`` to true:
+
+ .. code-block:: none
+
+ # crm_attribute --type crm_config --name stonith-enabled --update true
+
+#. Once the stonith resource is running, you can test it by executing the
+ following, replacing ``$NODE_NAME`` with the name of the node to fence
+ (although you might want to stop the cluster on that machine first):
+
+ .. code-block:: none
+
+ # stonith_admin --reboot $NODE_NAME
+
+
+Example Fencing Configuration
+_____________________________
+
+For this example, we assume we have a cluster node, ``pcmk-1``, whose IPMI
+controller is reachable at the IP address 192.0.2.1. The IPMI controller uses
+the username ``testuser`` and the password ``abc123``.
+
+#. Looking at what's installed, we may see a variety of available agents:
+
+ .. code-block:: none
+
+ # stonith_admin --list-installed
+
+ .. code-block:: none
+
+ (... some output omitted ...)
+ fence_idrac
+ fence_ilo3
+ fence_ilo4
+ fence_ilo5
+ fence_imm
+ fence_ipmilan
+ (... some output omitted ...)
+
+ Perhaps after reading some man pages and doing some Internet searches,
+ we might decide ``fence_ipmilan`` is our best choice.
+
+#. Next, we would check what parameters ``fence_ipmilan`` provides:
+
+ .. code-block:: none
+
+ # stonith_admin --metadata -a fence_ipmilan
+
+ .. code-block:: xml
+
+ <resource-agent name="fence_ipmilan" shortdesc="Fence agent for IPMI">
+ <symlink name="fence_ilo3" shortdesc="Fence agent for HP iLO3"/>
+ <symlink name="fence_ilo4" shortdesc="Fence agent for HP iLO4"/>
+ <symlink name="fence_ilo5" shortdesc="Fence agent for HP iLO5"/>
+ <symlink name="fence_imm" shortdesc="Fence agent for IBM Integrated Management Module"/>
+ <symlink name="fence_idrac" shortdesc="Fence agent for Dell iDRAC"/>
+ <longdesc>fence_ipmilan is an I/O Fencing agent which can be used with machines controlled by IPMI. This agent calls support software ipmitool (http://ipmitool.sf.net/). WARNING! This fence agent might report success before the node is powered off. You should use -m/method onoff if your fence device works correctly with that option.</longdesc>
+ <vendor-url/>
+ <parameters>
+ <parameter name="action" unique="0" required="0">
+ <getopt mixed="-o, --action=[action]"/>
+ <content type="string" default="reboot"/>
+ <shortdesc lang="en">Fencing action</shortdesc>
+ </parameter>
+ <parameter name="auth" unique="0" required="0">
+ <getopt mixed="-A, --auth=[auth]"/>
+ <content type="select">
+ <option value="md5"/>
+ <option value="password"/>
+ <option value="none"/>
+ </content>
+ <shortdesc lang="en">IPMI Lan Auth type.</shortdesc>
+ </parameter>
+ <parameter name="cipher" unique="0" required="0">
+ <getopt mixed="-C, --cipher=[cipher]"/>
+ <content type="string"/>
+ <shortdesc lang="en">Ciphersuite to use (same as ipmitool -C parameter)</shortdesc>
+ </parameter>
+ <parameter name="hexadecimal_kg" unique="0" required="0">
+ <getopt mixed="--hexadecimal-kg=[key]"/>
+ <content type="string"/>
+ <shortdesc lang="en">Hexadecimal-encoded Kg key for IPMIv2 authentication</shortdesc>
+ </parameter>
+ <parameter name="ip" unique="0" required="0" obsoletes="ipaddr">
+ <getopt mixed="-a, --ip=[ip]"/>
+ <content type="string"/>
+ <shortdesc lang="en">IP address or hostname of fencing device</shortdesc>
+ </parameter>
+ <parameter name="ipaddr" unique="0" required="0" deprecated="1">
+ <getopt mixed="-a, --ip=[ip]"/>
+ <content type="string"/>
+ <shortdesc lang="en">IP address or hostname of fencing device</shortdesc>
+ </parameter>
+ <parameter name="ipport" unique="0" required="0">
+ <getopt mixed="-u, --ipport=[port]"/>
+ <content type="integer" default="623"/>
+ <shortdesc lang="en">TCP/UDP port to use for connection with device</shortdesc>
+ </parameter>
+ <parameter name="lanplus" unique="0" required="0">
+ <getopt mixed="-P, --lanplus"/>
+ <content type="boolean" default="0"/>
+ <shortdesc lang="en">Use Lanplus to improve security of connection</shortdesc>
+ </parameter>
+ <parameter name="login" unique="0" required="0" deprecated="1">
+ <getopt mixed="-l, --username=[name]"/>
+ <content type="string"/>
+ <shortdesc lang="en">Login name</shortdesc>
+ </parameter>
+ <parameter name="method" unique="0" required="0">
+ <getopt mixed="-m, --method=[method]"/>
+ <content type="select" default="onoff">
+ <option value="onoff"/>
+ <option value="cycle"/>
+ </content>
+ <shortdesc lang="en">Method to fence</shortdesc>
+ </parameter>
+ <parameter name="passwd" unique="0" required="0" deprecated="1">
+ <getopt mixed="-p, --password=[password]"/>
+ <content type="string"/>
+ <shortdesc lang="en">Login password or passphrase</shortdesc>
+ </parameter>
+ <parameter name="passwd_script" unique="0" required="0" deprecated="1">
+ <getopt mixed="-S, --password-script=[script]"/>
+ <content type="string"/>
+ <shortdesc lang="en">Script to run to retrieve password</shortdesc>
+ </parameter>
+ <parameter name="password" unique="0" required="0" obsoletes="passwd">
+ <getopt mixed="-p, --password=[password]"/>
+ <content type="string"/>
+ <shortdesc lang="en">Login password or passphrase</shortdesc>
+ </parameter>
+ <parameter name="password_script" unique="0" required="0" obsoletes="passwd_script">
+ <getopt mixed="-S, --password-script=[script]"/>
+ <content type="string"/>
+ <shortdesc lang="en">Script to run to retrieve password</shortdesc>
+ </parameter>
+ <parameter name="plug" unique="0" required="0" obsoletes="port">
+ <getopt mixed="-n, --plug=[ip]"/>
+ <content type="string"/>
+ <shortdesc lang="en">IP address or hostname of fencing device (together with --port-as-ip)</shortdesc>
+ </parameter>
+ <parameter name="port" unique="0" required="0" deprecated="1">
+ <getopt mixed="-n, --plug=[ip]"/>
+ <content type="string"/>
+ <shortdesc lang="en">IP address or hostname of fencing device (together with --port-as-ip)</shortdesc>
+ </parameter>
+ <parameter name="privlvl" unique="0" required="0">
+ <getopt mixed="-L, --privlvl=[level]"/>
+ <content type="select" default="administrator">
+ <option value="callback"/>
+ <option value="user"/>
+ <option value="operator"/>
+ <option value="administrator"/>
+ </content>
+ <shortdesc lang="en">Privilege level on IPMI device</shortdesc>
+ </parameter>
+ <parameter name="target" unique="0" required="0">
+ <getopt mixed="--target=[targetaddress]"/>
+ <content type="string"/>
+ <shortdesc lang="en">Bridge IPMI requests to the remote target address</shortdesc>
+ </parameter>
+ <parameter name="username" unique="0" required="0" obsoletes="login">
+ <getopt mixed="-l, --username=[name]"/>
+ <content type="string"/>
+ <shortdesc lang="en">Login name</shortdesc>
+ </parameter>
+ <parameter name="quiet" unique="0" required="0">
+ <getopt mixed="-q, --quiet"/>
+ <content type="boolean"/>
+ <shortdesc lang="en">Disable logging to stderr. Does not affect --verbose or --debug-file or logging to syslog.</shortdesc>
+ </parameter>
+ <parameter name="verbose" unique="0" required="0">
+ <getopt mixed="-v, --verbose"/>
+ <content type="boolean"/>
+ <shortdesc lang="en">Verbose mode</shortdesc>
+ </parameter>
+ <parameter name="debug" unique="0" required="0" deprecated="1">
+ <getopt mixed="-D, --debug-file=[debugfile]"/>
+ <content type="string"/>
+ <shortdesc lang="en">Write debug information to given file</shortdesc>
+ </parameter>
+ <parameter name="debug_file" unique="0" required="0" obsoletes="debug">
+ <getopt mixed="-D, --debug-file=[debugfile]"/>
+ <content type="string"/>
+ <shortdesc lang="en">Write debug information to given file</shortdesc>
+ </parameter>
+ <parameter name="version" unique="0" required="0">
+ <getopt mixed="-V, --version"/>
+ <content type="boolean"/>
+ <shortdesc lang="en">Display version information and exit</shortdesc>
+ </parameter>
+ <parameter name="help" unique="0" required="0">
+ <getopt mixed="-h, --help"/>
+ <content type="boolean"/>
+ <shortdesc lang="en">Display help and exit</shortdesc>
+ </parameter>
+ <parameter name="delay" unique="0" required="0">
+ <getopt mixed="--delay=[seconds]"/>
+ <content type="second" default="0"/>
+ <shortdesc lang="en">Wait X seconds before fencing is started</shortdesc>
+ </parameter>
+ <parameter name="ipmitool_path" unique="0" required="0">
+ <getopt mixed="--ipmitool-path=[path]"/>
+ <content type="string" default="/usr/bin/ipmitool"/>
+ <shortdesc lang="en">Path to ipmitool binary</shortdesc>
+ </parameter>
+ <parameter name="login_timeout" unique="0" required="0">
+ <getopt mixed="--login-timeout=[seconds]"/>
+ <content type="second" default="5"/>
+ <shortdesc lang="en">Wait X seconds for cmd prompt after login</shortdesc>
+ </parameter>
+ <parameter name="port_as_ip" unique="0" required="0">
+ <getopt mixed="--port-as-ip"/>
+ <content type="boolean"/>
+ <shortdesc lang="en">Make "port/plug" to be an alias to IP address</shortdesc>
+ </parameter>
+ <parameter name="power_timeout" unique="0" required="0">
+ <getopt mixed="--power-timeout=[seconds]"/>
+ <content type="second" default="20"/>
+ <shortdesc lang="en">Test X seconds for status change after ON/OFF</shortdesc>
+ </parameter>
+ <parameter name="power_wait" unique="0" required="0">
+ <getopt mixed="--power-wait=[seconds]"/>
+ <content type="second" default="2"/>
+ <shortdesc lang="en">Wait X seconds after issuing ON/OFF</shortdesc>
+ </parameter>
+ <parameter name="shell_timeout" unique="0" required="0">
+ <getopt mixed="--shell-timeout=[seconds]"/>
+ <content type="second" default="3"/>
+ <shortdesc lang="en">Wait X seconds for cmd prompt after issuing command</shortdesc>
+ </parameter>
+ <parameter name="retry_on" unique="0" required="0">
+ <getopt mixed="--retry-on=[attempts]"/>
+ <content type="integer" default="1"/>
+ <shortdesc lang="en">Count of attempts to retry power on</shortdesc>
+ </parameter>
+ <parameter name="sudo" unique="0" required="0" deprecated="1">
+ <getopt mixed="--use-sudo"/>
+ <content type="boolean"/>
+ <shortdesc lang="en">Use sudo (without password) when calling 3rd party software</shortdesc>
+ </parameter>
+ <parameter name="use_sudo" unique="0" required="0" obsoletes="sudo">
+ <getopt mixed="--use-sudo"/>
+ <content type="boolean"/>
+ <shortdesc lang="en">Use sudo (without password) when calling 3rd party software</shortdesc>
+ </parameter>
+ <parameter name="sudo_path" unique="0" required="0">
+ <getopt mixed="--sudo-path=[path]"/>
+ <content type="string" default="/usr/bin/sudo"/>
+ <shortdesc lang="en">Path to sudo binary</shortdesc>
+ </parameter>
+ </parameters>
+ <actions>
+ <action name="on" automatic="0"/>
+ <action name="off"/>
+ <action name="reboot"/>
+ <action name="status"/>
+ <action name="monitor"/>
+ <action name="metadata"/>
+ <action name="manpage"/>
+ <action name="validate-all"/>
+ <action name="diag"/>
+ <action name="stop" timeout="20s"/>
+ <action name="start" timeout="20s"/>
+ </actions>
+ </resource-agent>
+
+ Once we've decided what parameter values we think we need, it is a good idea
+ to run the fence agent's status action manually, to verify that our values
+ work correctly:
+
+ .. code-block:: none
+
+ # fence_ipmilan --lanplus -a 192.0.2.1 -l testuser -p abc123 -o status
+
+ Chassis Power is on
+
+#. Based on that, we might create a fencing resource configuration like this in
+ ``stonith.xml`` (or any file name, just use the same name with ``cibadmin``
+ later):
+
+ .. code-block:: xml
+
+ <primitive id="Fencing-pcmk-1" class="stonith" type="fence_ipmilan" >
+ <instance_attributes id="Fencing-params" >
+ <nvpair id="Fencing-lanplus" name="lanplus" value="1" />
+ <nvpair id="Fencing-ip" name="ip" value="192.0.2.1" />
+ <nvpair id="Fencing-password" name="password" value="abc123" />
+ <nvpair id="Fencing-username" name="username" value="testuser" />
+ </instance_attributes>
+ <operations >
+ <op id="Fencing-monitor-10m" interval="10m" name="monitor" timeout="300s" />
+ </operations>
+ </primitive>
+
+ .. note::
+
+ Even though the man page shows that the ``action`` parameter is
+ supported, we do not provide that in the resource configuration.
+ Pacemaker will supply an appropriate action whenever the fence device
+ must be used.
+
+#. In this case, we don't need to configure ``pcmk_host_map`` because
+ ``fence_ipmilan`` ignores the target node name and instead uses its
+ ``ip`` parameter to know how to contact the IPMI controller.
+
+#. We do need to let Pacemaker know which cluster node can be fenced by this
+ device, since ``fence_ipmilan`` doesn't support the ``list`` action. Add
+ a line like this to the agent's instance attributes:
+
+ .. code-block:: xml
+
+ <nvpair id="Fencing-pcmk_host_list" name="pcmk_host_list" value="pcmk-1" />
+
+#. We don't need to configure ``pcmk_host_argument`` since ``ip`` is all the
+ fence agent needs (it ignores the target name).
+
+#. Make the configuration active:
+
+ .. code-block:: none
+
+ # cibadmin --create --scope resources --xml-file stonith.xml
+
+#. Set ``stonith-enabled`` to true (this only has to be done once):
+
+ .. code-block:: none
+
+ # crm_attribute --type crm_config --name stonith-enabled --update true
+
+#. Since our cluster is still in testing, we can reboot ``pcmk-1`` without
+ bothering anyone, so we'll test our fencing configuration by running this
+ from one of the other cluster nodes:
+
+ .. code-block:: none
+
+ # stonith_admin --reboot pcmk-1
+
+ Then we will verify that the node did, in fact, reboot.
+
+We can repeat that process to create a separate fencing resource for each node.
+
+With some other fence device types, a single fencing resource can be used for
+all nodes. In fact, we could do that with ``fence_ipmilan``, using the
+``port_as_ip`` parameter along with ``pcmk_host_map``. Either approach is
+fine.
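+
+For illustration only, a single-device variant using ``port_as_ip`` might look
+like the following sketch. The second node name and IPMI address
+(``pcmk-2``/192.0.2.2) are assumptions, not taken from the earlier example:
+
+.. code-block:: xml
+
+   <primitive id="Fencing" class="stonith" type="fence_ipmilan" >
+     <instance_attributes id="Fencing-params" >
+       <nvpair id="Fencing-lanplus" name="lanplus" value="1" />
+       <nvpair id="Fencing-username" name="username" value="testuser" />
+       <nvpair id="Fencing-password" name="password" value="abc123" />
+       <!-- With port_as_ip, the "port" of each target is its IPMI address -->
+       <nvpair id="Fencing-port-as-ip" name="port_as_ip" value="1" />
+       <nvpair id="Fencing-host-map" name="pcmk_host_map"
+               value="pcmk-1:192.0.2.1;pcmk-2:192.0.2.2" />
+     </instance_attributes>
+     <operations >
+       <op id="Fencing-monitor-10m" interval="10m" name="monitor" timeout="300s" />
+     </operations>
+   </primitive>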
+
+.. index::
+ single: fencing; topology
+ single: fencing-topology
+ single: fencing-level
+
+Fencing Topologies
+##################
+
+Pacemaker supports fencing nodes with multiple devices through a feature called
+*fencing topologies*. Fencing topologies may be used to provide alternative
+devices in case one fails, to require multiple devices to all succeed before
+the node is considered fenced, or a combination of the two.
+
+Create the individual devices as you normally would, then define one or more
+``fencing-level`` entries in the ``fencing-topology`` section of the
+configuration.
+
+* Each fencing level is attempted in order of ascending ``index``. Allowed
+ values are 1 through 9.
+* If a device fails, processing terminates for the current level. No further
+ devices in that level are exercised, and the next level is attempted instead.
+* If the operation succeeds for all the listed devices in a level, the level is
+ deemed to have passed.
+* The operation is finished when a level has passed (success), or all levels
+ have been attempted (failed).
+* If the operation failed, the next step is determined by the scheduler and/or
+ the controller.
+
+Some possible uses of topologies include:
+
+* Try on-board IPMI, then an intelligent power switch if that fails
+* Try fabric fencing of both disk and network, then fall back to power fencing
+ if either fails
+* Wait up to a certain time for a kernel dump to complete, then cut power to
+ the node
+
+.. table:: **Attributes of a fencing-level Element**
+ :class: longtable
+ :widths: 1 4
+
+ +------------------+-----------------------------------------------------------------------------------------+
+ | Attribute | Description |
+ +==================+=========================================================================================+
+ | id | .. index:: |
+ | | pair: fencing-level; id |
+ | | |
+ | | A unique name for this element (required) |
+ +------------------+-----------------------------------------------------------------------------------------+
+ | target | .. index:: |
+ | | pair: fencing-level; target |
+ | | |
+ | | The name of a single node to which this level applies |
+ +------------------+-----------------------------------------------------------------------------------------+
+ | target-pattern | .. index:: |
+ | | pair: fencing-level; target-pattern |
+ | | |
+ | | An extended regular expression (as defined in `POSIX |
+ | | <https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap09.html#tag_09_04>`_) |
+ | | matching the names of nodes to which this level applies |
+ +------------------+-----------------------------------------------------------------------------------------+
+ | target-attribute | .. index:: |
+ | | pair: fencing-level; target-attribute |
+ | | |
+ | | The name of a node attribute that is set (to ``target-value``) for nodes to which this |
+ | | level applies |
+ +------------------+-----------------------------------------------------------------------------------------+
+ | target-value | .. index:: |
+ | | pair: fencing-level; target-value |
+ | | |
+ | | The node attribute value (of ``target-attribute``) that is set for nodes to which this |
+ | | level applies |
+ +------------------+-----------------------------------------------------------------------------------------+
+ | index | .. index:: |
+ | | pair: fencing-level; index |
+ | | |
+ | | The order in which to attempt the levels. Levels are attempted in ascending order |
+ | | *until one succeeds*. Valid values are 1 through 9. |
+ +------------------+-----------------------------------------------------------------------------------------+
+ | devices | .. index:: |
+ | | pair: fencing-level; devices |
+ | | |
+ | | A comma-separated list of devices that must all be tried for this level |
+ +------------------+-----------------------------------------------------------------------------------------+
+
+.. note:: **Fencing topology with different devices for different nodes**
+
+ .. code-block:: xml
+
+ <cib crm_feature_set="3.6.0" validate-with="pacemaker-3.5" admin_epoch="1" epoch="0" num_updates="0">
+ <configuration>
+ ...
+ <fencing-topology>
+ <!-- For pcmk-1, try poison-pill and fail back to power -->
+ <fencing-level id="f-p1.1" target="pcmk-1" index="1" devices="poison-pill"/>
+ <fencing-level id="f-p1.2" target="pcmk-1" index="2" devices="power"/>
+
+ <!-- For pcmk-2, try disk and network, and fail back to power -->
+ <fencing-level id="f-p2.1" target="pcmk-2" index="1" devices="disk,network"/>
+ <fencing-level id="f-p2.2" target="pcmk-2" index="2" devices="power"/>
+ </fencing-topology>
+ ...
+ </configuration>
+ <status/>
+ </cib>
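+
+.. note:: **Fencing topology matching multiple nodes with a pattern**
+
+   If all nodes share the same device layout, ``target-pattern`` (or
+   ``target-attribute`` with ``target-value``) can replace per-node levels.
+   A hypothetical sketch, assuming nodes named ``pcmk-1`` and ``pcmk-2`` and
+   fence devices named ``ipmi`` and ``power``:
+
+   .. code-block:: xml
+
+      <fencing-topology>
+        <!-- POSIX ERE matching pcmk-1, pcmk-2, ... -->
+        <fencing-level id="f-all.1" target-pattern="pcmk-[0-9]+" index="1" devices="ipmi"/>
+        <fencing-level id="f-all.2" target-pattern="pcmk-[0-9]+" index="2" devices="power"/>
+      </fencing-topology>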
+
+Example Dual-Layer, Dual-Device Fencing Topologies
+__________________________________________________
+
+The following example illustrates an advanced use of ``fencing-topology`` in a
+cluster with the following properties:
+
+* 2 nodes (prod-mysql1 and prod-mysql2)
+* the nodes have IPMI controllers reachable at 192.0.2.1 and 192.0.2.2
+* the nodes each have two independent Power Supply Units (PSUs) connected to
+ two independent Power Distribution Units (PDUs) reachable at 198.51.100.1
+ (port 10 and port 11) and 203.0.113.1 (port 10 and port 11)
+* fencing via the IPMI controller uses the ``fence_ipmilan`` agent (1 fence device
+ per controller, with each device targeting a separate node)
+* fencing via the PDUs uses the ``fence_apc_snmp`` agent (1 fence device per
+ PDU, with both devices targeting both nodes)
+* a random delay is used to lessen the chance of a "death match"
+* fencing topology is set to try IPMI fencing first then dual PDU fencing if
+ that fails
+
+In a node failure scenario, Pacemaker will first select ``fence_ipmilan`` to
+try to kill the faulty node. Using the fencing topology, if that method fails,
+it will then move on to selecting ``fence_apc_snmp`` twice (once for the first
+PDU, then again for the second PDU).
+
+The fence action is considered successful only if both PDUs report the required
+status. If either of them fails, fencing loops back to the first fencing method,
+``fence_ipmilan``, and so on, until the node is fenced or the fencing action is
+cancelled.
+
+.. note:: **First fencing method: single IPMI device per target**
+
+ Each cluster node has its own dedicated IPMI controller that can be contacted
+ for fencing using the following primitives:
+
+ .. code-block:: xml
+
+ <primitive class="stonith" id="fence_prod-mysql1_ipmi" type="fence_ipmilan">
+ <instance_attributes id="fence_prod-mysql1_ipmi-instance_attributes">
+ <nvpair id="fence_prod-mysql1_ipmi-instance_attributes-ipaddr" name="ipaddr" value="192.0.2.1"/>
+ <nvpair id="fence_prod-mysql1_ipmi-instance_attributes-login" name="login" value="fencing"/>
+ <nvpair id="fence_prod-mysql1_ipmi-instance_attributes-passwd" name="passwd" value="finishme"/>
+ <nvpair id="fence_prod-mysql1_ipmi-instance_attributes-lanplus" name="lanplus" value="true"/>
+ <nvpair id="fence_prod-mysql1_ipmi-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="prod-mysql1"/>
+ <nvpair id="fence_prod-mysql1_ipmi-instance_attributes-pcmk_delay_max" name="pcmk_delay_max" value="8s"/>
+ </instance_attributes>
+ </primitive>
+ <primitive class="stonith" id="fence_prod-mysql2_ipmi" type="fence_ipmilan">
+ <instance_attributes id="fence_prod-mysql2_ipmi-instance_attributes">
+ <nvpair id="fence_prod-mysql2_ipmi-instance_attributes-ipaddr" name="ipaddr" value="192.0.2.2"/>
+ <nvpair id="fence_prod-mysql2_ipmi-instance_attributes-login" name="login" value="fencing"/>
+ <nvpair id="fence_prod-mysql2_ipmi-instance_attributes-passwd" name="passwd" value="finishme"/>
+ <nvpair id="fence_prod-mysql2_ipmi-instance_attributes-lanplus" name="lanplus" value="true"/>
+ <nvpair id="fence_prod-mysql2_ipmi-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="prod-mysql2"/>
+ <nvpair id="fence_prod-mysql2_ipmi-instance_attributes-pcmk_delay_max" name="pcmk_delay_max" value="8s"/>
+ </instance_attributes>
+ </primitive>
+
+.. note:: **Second fencing method: dual PDU devices**
+
+ Each cluster node also has 2 distinct power supplies controlled by 2
+ distinct PDUs:
+
+ * Node 1: PDU 1 port 10 and PDU 2 port 10
+ * Node 2: PDU 1 port 11 and PDU 2 port 11
+
+ The matching fencing agents are configured as follows:
+
+ .. code-block:: xml
+
+ <primitive class="stonith" id="fence_apc1" type="fence_apc_snmp">
+ <instance_attributes id="fence_apc1-instance_attributes">
+ <nvpair id="fence_apc1-instance_attributes-ipaddr" name="ipaddr" value="198.51.100.1"/>
+ <nvpair id="fence_apc1-instance_attributes-login" name="login" value="fencing"/>
+ <nvpair id="fence_apc1-instance_attributes-passwd" name="passwd" value="fencing"/>
+ <nvpair id="fence_apc1-instance_attributes-pcmk_host_map"
+ name="pcmk_host_map" value="prod-mysql1:10;prod-mysql2:11"/>
+ <nvpair id="fence_apc1-instance_attributes-pcmk_delay_max" name="pcmk_delay_max" value="8s"/>
+ </instance_attributes>
+ </primitive>
+ <primitive class="stonith" id="fence_apc2" type="fence_apc_snmp">
+ <instance_attributes id="fence_apc2-instance_attributes">
+ <nvpair id="fence_apc2-instance_attributes-ipaddr" name="ipaddr" value="203.0.113.1"/>
+ <nvpair id="fence_apc2-instance_attributes-login" name="login" value="fencing"/>
+ <nvpair id="fence_apc2-instance_attributes-passwd" name="passwd" value="fencing"/>
+ <nvpair id="fence_apc2-instance_attributes-pcmk_host_map"
+ name="pcmk_host_map" value="prod-mysql1:10;prod-mysql2:11"/>
+ <nvpair id="fence_apc2-instance_attributes-pcmk_delay_max" name="pcmk_delay_max" value="8s"/>
+ </instance_attributes>
+ </primitive>
+
+.. note:: **Fencing topology**
+
+ Now that all the fencing resources are defined, it's time to create the
+ right topology. We want to fence using IPMI first and, if that does not
+ work, fence both PDUs to make sure the node is killed.
+
+ .. code-block:: xml
+
+ <fencing-topology>
+ <fencing-level id="level-1-1" target="prod-mysql1" index="1" devices="fence_prod-mysql1_ipmi" />
+ <fencing-level id="level-1-2" target="prod-mysql1" index="2" devices="fence_apc1,fence_apc2" />
+ <fencing-level id="level-2-1" target="prod-mysql2" index="1" devices="fence_prod-mysql2_ipmi" />
+ <fencing-level id="level-2-2" target="prod-mysql2" index="2" devices="fence_apc1,fence_apc2" />
+ </fencing-topology>
+
+ In ``fencing-topology``, the lowest ``index`` value for a target determines
+ its first fencing method.
+
+Remapping Reboots
+#################
+
+When the cluster needs to reboot a node, whether because ``stonith-action`` is
+``reboot`` or because a reboot was requested externally (such as by
+``stonith_admin --reboot``), it will remap that to other commands in two cases:
+
+* If the chosen fencing device does not support the ``reboot`` command, the
+ cluster will ask it to perform ``off`` instead.
+
+* If a fencing topology level with multiple devices must be executed, the
+ cluster will ask all the devices to perform ``off``, then ask the devices to
+ perform ``on``.
+
+To understand the second case, consider the example of a node with redundant
+power supplies connected to intelligent power switches. Rebooting one switch
+and then the other would have no effect on the node. Turning both switches off,
+and then on, actually reboots the node.
+
+In such a case, the fencing operation will be treated as successful as long as
+the ``off`` commands succeed, because then it is safe for the cluster to
+recover any resources that were on the node. Timeouts and errors in the ``on``
+phase will be logged but ignored.
+
+When a reboot operation is remapped, any action-specific timeout for the
+remapped action will be used (for example, ``pcmk_off_timeout`` will be used
+when executing the ``off`` command, not ``pcmk_reboot_timeout``).
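+
+For example, to give the ``off`` phase of a remapped reboot a longer timeout
+on a particular fence device, an ``nvpair`` such as the following could be
+added to that device's instance attributes (the 60-second value is purely
+illustrative):
+
+.. code-block:: xml
+
+   <nvpair id="Fencing-off-timeout" name="pcmk_off_timeout" value="60s" />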
diff --git a/doc/sphinx/Pacemaker_Explained/images/resource-set.png b/doc/sphinx/Pacemaker_Explained/images/resource-set.png
new file mode 100644
index 0000000..fbed8b8
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Explained/images/resource-set.png
Binary files differ
diff --git a/doc/sphinx/Pacemaker_Explained/images/three-sets.png b/doc/sphinx/Pacemaker_Explained/images/three-sets.png
new file mode 100644
index 0000000..feda36e
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Explained/images/three-sets.png
Binary files differ
diff --git a/doc/sphinx/Pacemaker_Explained/images/two-sets.png b/doc/sphinx/Pacemaker_Explained/images/two-sets.png
new file mode 100644
index 0000000..b84b5f4
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Explained/images/two-sets.png
Binary files differ
diff --git a/doc/sphinx/Pacemaker_Explained/index.rst b/doc/sphinx/Pacemaker_Explained/index.rst
new file mode 100644
index 0000000..de2ddd9
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Explained/index.rst
@@ -0,0 +1,41 @@
+Pacemaker Explained
+===================
+
+*Configuring Pacemaker Clusters*
+
+
+Abstract
+--------
+This document definitively explains Pacemaker's features and capabilities,
+particularly the XML syntax used in Pacemaker's Cluster Information Base (CIB).
+
+
+Table of Contents
+-----------------
+
+.. toctree::
+ :maxdepth: 3
+ :numbered:
+
+ intro
+ options
+ nodes
+ resources
+ constraints
+ fencing
+ alerts
+ rules
+ advanced-options
+ advanced-resources
+ reusing-configuration
+ utilization
+ acls
+ status
+ multi-site-clusters
+ ap-samples
+
+Index
+-----
+
+* :ref:`genindex`
+* :ref:`search`
diff --git a/doc/sphinx/Pacemaker_Explained/intro.rst b/doc/sphinx/Pacemaker_Explained/intro.rst
new file mode 100644
index 0000000..a1240c3
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Explained/intro.rst
@@ -0,0 +1,22 @@
+Introduction
+------------
+
+The Scope of this Document
+##########################
+
+This document is intended to be an exhaustive reference for configuring
+Pacemaker. To achieve this, it focuses on the XML syntax used to configure the
+CIB.
+
+For those that are allergic to XML, multiple higher-level front-ends
+(both command-line and GUI) are available. These tools will not be covered
+in this document, though the concepts explained here should make the
+functionality of these tools more easily understood.
+
+Users may be interested in other parts of the
+`Pacemaker documentation set <https://www.clusterlabs.org/pacemaker/doc/>`_,
+such as *Clusters from Scratch*, a step-by-step guide to setting up an
+example cluster, and *Pacemaker Administration*, a guide to maintaining a
+cluster.
+
+.. include:: ../shared/pacemaker-intro.rst
diff --git a/doc/sphinx/Pacemaker_Explained/multi-site-clusters.rst b/doc/sphinx/Pacemaker_Explained/multi-site-clusters.rst
new file mode 100644
index 0000000..59d3f93
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Explained/multi-site-clusters.rst
@@ -0,0 +1,341 @@
+Multi-Site Clusters and Tickets
+-------------------------------
+
+Apart from local clusters, Pacemaker also supports multi-site clusters.
+That means you can have multiple, geographically dispersed sites, each with a
+local cluster. Failover between these clusters can be coordinated
+manually by the administrator, or automatically by a higher-level entity called
+a *Cluster Ticket Registry (CTR)*.
+
+Challenges for Multi-Site Clusters
+##################################
+
+Typically, multi-site environments are too far apart to support
+synchronous communication and data replication between the sites.
+That leads to significant challenges:
+
+- How do we make sure that a cluster site is up and running?
+
+- How do we make sure that resources are only started once?
+
+- How do we make sure that quorum can be reached between the different
+ sites and a split-brain scenario avoided?
+
+- How do we manage failover between sites?
+
+- How do we deal with high latency when resources need to be stopped?
+
+The following sections explain how to meet these challenges.
+
+Conceptual Overview
+###################
+
+Multi-site clusters can be considered as “overlay” clusters where
+each cluster site corresponds to a cluster node in a traditional cluster.
+The overlay cluster can be managed by a CTR in order to
+guarantee that any cluster resource will be active
+on no more than one cluster site. This is achieved by using
+*tickets* that are treated as failover domain between cluster
+sites, in case a site should be down.
+
+The following sections explain the individual components and mechanisms
+that were introduced for multi-site clusters in more detail.
+
+Ticket
+______
+
+Tickets are, essentially, cluster-wide attributes. A ticket grants the
+right to run certain resources on a specific cluster site. Resources can
+be bound to a certain ticket by ``rsc_ticket`` constraints. Only if the
+ticket is available at a site can the respective resources be started there.
+Vice versa, if the ticket is revoked, the resources depending on that
+ticket must be stopped.
+
+A ticket is thus similar to a *site quorum*, i.e., the permission to
+manage/own resources associated with that site. (One can also think of the
+current ``have-quorum`` flag as a special, cluster-wide ticket that is
+granted in case of node majority.)
+
+Tickets can be granted and revoked either manually by administrators
+(which could be the default for classic enterprise clusters), or via
+the automated CTR mechanism described below.
+
+A ticket can only be owned by one site at a time. Initially, none
+of the sites has a ticket. Each ticket must be granted once by the cluster
+administrator.
+
+The presence or absence of tickets for a site is stored in the CIB as part of
+the cluster status. With regard to a certain ticket, there are only two states
+for a site: ``true`` (the site has the ticket) or ``false`` (the site does
+not have the ticket). The absence of a certain ticket (during the initial
+state of the multi-site cluster) is the same as the value ``false``.
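+
+The two-state model can be sketched as follows (an illustrative model only,
+not part of Pacemaker's code; the names used are hypothetical):
+
```python
# Illustrative model of per-site ticket state: a site either has a ticket
# (True) or it does not (False), and an absent entry equals False.
# This is a sketch of the concept, not Pacemaker's implementation.

site_tickets = {}  # ticket name -> bool, as recorded in one site's CIB status

def has_ticket(name):
    """Absence of a ticket is the same as the value False."""
    return site_tickets.get(name, False)

site_tickets["ticketA"] = True    # ticketA has been granted to this site
print(has_ticket("ticketA"))      # True
print(has_ticket("ticketB"))      # False: never granted here
```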
+
+Dead Man Dependency
+___________________
+
+A site can only activate resources safely if it can be sure that the
+other site has deactivated them. However after a ticket is revoked, it can
+take a long time until all resources depending on that ticket are stopped
+"cleanly", especially in case of cascaded resources. To cut that process
+short, the concept of a *Dead Man Dependency* was introduced.
+
+When a dead man dependency is in force and a ticket is revoked from a site, the
+nodes that are hosting dependent resources are fenced. This considerably speeds
+up the recovery process of the cluster and makes sure that resources can be
+migrated more quickly.
+
+This can be configured by specifying a ``loss-policy="fence"`` in
+``rsc_ticket`` constraints.
+
+Cluster Ticket Registry
+_______________________
+
+A CTR is a coordinated group of network daemons that automatically handles
+granting, revoking, and timing out tickets (instead of the administrator
+revoking the ticket somewhere, waiting for everything to stop, and then
+granting it on the desired site).
+
+Pacemaker does not implement its own CTR, but interoperates with external
+software designed for that purpose (similar to how resource and fencing agents
+are not directly part of Pacemaker).
+
+Participating clusters run the CTR daemons, which connect to each other, exchange
+information about their connectivity, and vote on which site gets which
+tickets.
+
+A ticket is granted to a site only once the CTR is sure that the ticket
+has been relinquished by the previous owner, implemented via a timer in most
+scenarios. If a site loses connection to its peers, its tickets time out and
+recovery occurs. After the connection timeout plus the recovery timeout has
+passed, the other sites are allowed to re-acquire the ticket and start the
+resources again.
+
+This can also be thought of as a "quorum server", except that it is not
+a single quorum ticket, but several.
+
+Configuration Replication
+_________________________
+
+As usual, the CIB is synchronized within each cluster, but it is *not* synchronized
+across cluster sites of a multi-site cluster. Resources that should be highly
+available across the multi-site cluster must therefore be configured separately at
+every site.
+
+.. _ticket-constraints:
+
+Configuring Ticket Dependencies
+###############################
+
+The **rsc_ticket** constraint lets you specify the resources depending on a certain
+ticket. Together with the constraint, you can set a **loss-policy** that defines
+what should happen to the respective resources if the ticket is revoked.
+
+The attribute **loss-policy** can have the following values:
+
+* ``fence:`` Fence the nodes that are running the relevant resources.
+
+* ``stop:`` Stop the relevant resources.
+
+* ``freeze:`` Do nothing to the relevant resources.
+
+* ``demote:`` Demote relevant resources that are running in the promoted role.
+
+.. topic:: Constraint that fences node if ``ticketA`` is revoked
+
+ .. code-block:: xml
+
+ <rsc_ticket id="rsc1-req-ticketA" rsc="rsc1" ticket="ticketA" loss-policy="fence"/>
+
+The example above creates a constraint with the ID ``rsc1-req-ticketA``. It
+defines that the resource ``rsc1`` depends on ``ticketA`` and that the node running
+the resource should be fenced if ``ticketA`` is revoked.
+
+If resource ``rsc1`` were a promotable resource, you might want to configure
+that only being in the promoted role depends on ``ticketA``. With the following
+configuration, ``rsc1`` will be demoted if ``ticketA`` is revoked:
+
+.. topic:: Constraint that demotes ``rsc1`` if ``ticketA`` is revoked
+
+ .. code-block:: xml
+
+ <rsc_ticket id="rsc1-req-ticketA" rsc="rsc1" rsc-role="Promoted" ticket="ticketA" loss-policy="demote"/>
+
+You can create multiple **rsc_ticket** constraints to let multiple resources
+depend on the same ticket. However, **rsc_ticket** also supports resource sets
+(see :ref:`s-resource-sets`), so one can easily list all the resources in one
+**rsc_ticket** constraint instead.
+
+.. topic:: Ticket constraint for multiple resources
+
+ .. code-block:: xml
+
+ <rsc_ticket id="resources-dep-ticketA" ticket="ticketA" loss-policy="fence">
+ <resource_set id="resources-dep-ticketA-0" role="Started">
+ <resource_ref id="rsc1"/>
+ <resource_ref id="group1"/>
+ <resource_ref id="clone1"/>
+ </resource_set>
+ <resource_set id="resources-dep-ticketA-1" role="Promoted">
+ <resource_ref id="ms1"/>
+ </resource_set>
+ </rsc_ticket>
+
+In the example above, there are two resource sets, so we can list resources
+with different roles in a single ``rsc_ticket`` constraint. There's no dependency
+between the two resource sets, and there's no dependency among the
+resources within a resource set. Each of the resources just depends on
+``ticketA``.
+
+Referencing resource templates in ``rsc_ticket`` constraints, and even
+referencing them within resource sets, is also supported.
+
+If you want other resources to depend on further tickets, create as many
+constraints as necessary with ``rsc_ticket``.
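+
+For example, to make a hypothetical resource ``rsc2`` depend on a second
+ticket (the ID, resource name, and ticket name here are illustrative):
+
+.. topic:: Constraint binding a second resource to a different ticket
+
+   .. code-block:: xml
+
+      <rsc_ticket id="rsc2-req-ticketB" rsc="rsc2" ticket="ticketB" loss-policy="stop"/>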
+
+Managing Multi-Site Clusters
+############################
+
+Granting and Revoking Tickets Manually
+______________________________________
+
+You can grant tickets to sites or revoke them from sites manually.
+If you want to re-distribute a ticket, you should wait for
+the dependent resources to stop cleanly at the previous site before you
+grant the ticket to the new site.
+
+Use the **crm_ticket** command line tool to grant and revoke tickets.
+
+To grant a ticket to this site:
+
+ .. code-block:: none
+
+ # crm_ticket --ticket ticketA --grant
+
+To revoke a ticket from this site:
+
+ .. code-block:: none
+
+ # crm_ticket --ticket ticketA --revoke
+
+.. important::
+
+ If you are managing tickets manually, use the **crm_ticket** command with
+ great care, because it cannot check whether the same ticket is already
+ granted elsewhere.
+
+Granting and Revoking Tickets via a Cluster Ticket Registry
+___________________________________________________________
+
+We will use `Booth <https://github.com/ClusterLabs/booth>`_ here as an example of
+software that can be used with Pacemaker as a Cluster Ticket Registry. Booth
+implements the `Raft <http://en.wikipedia.org/wiki/Raft_%28computer_science%29>`_
+algorithm to guarantee the distributed consensus among different
+cluster sites, and manages the ticket distribution (and thus the failover
+process between sites).
+
+Each of the participating clusters and *arbitrators* runs the Booth daemon
+**boothd**.
+
+An *arbitrator* is the multi-site equivalent of a quorum-only node in a local
+cluster. If you have a setup with an even number of sites,
+you need an additional instance to reach consensus about decisions such
+as failover of resources across sites. In this case, add one or more
+arbitrators running at additional sites. Arbitrators are single machines
+that run a booth instance in a special mode. An arbitrator is especially
+important for a two-site scenario, otherwise there is no way for one site
+to distinguish between a network failure between it and the other site, and
+a failure of the other site.
+
+The most common multi-site scenario is probably a multi-site cluster with two
+sites and a single arbitrator on a third site. However, technically, there are
+no limitations with regard to the number of sites and the number of
+arbitrators involved.
+
+**Boothd** at each site connects to its peers running at the other sites and
+exchanges connectivity details. Once a ticket is granted to a site, the
+booth mechanism will manage the ticket automatically: If the site which
+holds the ticket is out of service, the booth daemons will vote which
+of the other sites will get the ticket. To protect against brief
+connection failures, sites that lose the vote (either explicitly or
+implicitly by being disconnected from the voting body) need to
+relinquish the ticket after a timeout. This ensures that a
+ticket is only re-distributed after it has been relinquished by the
+previous site. The resources that depend on that ticket will fail over
+to the new site holding the ticket. The nodes that have run the
+resources before will be treated according to the **loss-policy** you set
+within the **rsc_ticket** constraint.
+
+Before Booth can manage a certain ticket within the multi-site cluster,
+you must initially grant it to a site manually via the **booth** command-line
+tool. After you have initially granted a ticket to a site, **boothd**
+will take over and manage the ticket automatically.
+
+.. important::
+
+ The **booth** command-line tool can be used to grant, list, or
+ revoke tickets and can be run on any machine where **boothd** is running.
+ If you are managing tickets via Booth, use only **booth** for manual
+ intervention, not **crm_ticket**. That ensures the same ticket
+ will only be owned by one cluster site at a time.
+
+Booth Requirements
+~~~~~~~~~~~~~~~~~~
+
+* All clusters that will be part of the multi-site cluster must be based on
+ Pacemaker.
+
+* Booth must be installed on all cluster nodes and on all arbitrators that will
+ be part of the multi-site cluster.
+
+* Nodes belonging to the same cluster site should be synchronized via NTP. However,
+ time synchronization is not required between the individual cluster sites.
+
+General Management of Tickets
+_____________________________
+
+Display information about tickets:
+
+ .. code-block:: none
+
+ # crm_ticket --info
+
+Or you can monitor them with:
+
+ .. code-block:: none
+
+ # crm_mon --tickets
+
+Display the ``rsc_ticket`` constraints that apply to a ticket:
+
+ .. code-block:: none
+
+ # crm_ticket --ticket ticketA --constraints
+
+When you want to perform maintenance or a manual switch-over of a ticket,
+revoking the ticket would trigger the loss policies. If
+``loss-policy="fence"``, the dependent resources could not be gracefully
+stopped/demoted, and even other unrelated resources could be affected.
+
+The proper way is to put the ticket in *standby* mode first:
+
+ .. code-block:: none
+
+ # crm_ticket --ticket ticketA --standby
+
+Then the dependent resources will be stopped or demoted gracefully without
+triggering the loss policies.
+
+If you have finished the maintenance and want to activate the ticket again,
+you can run:
+
+ .. code-block:: none
+
+ # crm_ticket --ticket ticketA --activate
+
+For more information
+####################
+
+* `SUSE's Geo Clustering quick start <https://www.suse.com/documentation/sle-ha-geo-12/art_ha_geo_quick/data/art_ha_geo_quick.html>`_
+
+* `Booth <https://github.com/ClusterLabs/booth>`_
diff --git a/doc/sphinx/Pacemaker_Explained/nodes.rst b/doc/sphinx/Pacemaker_Explained/nodes.rst
new file mode 100644
index 0000000..6fcadb3
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Explained/nodes.rst
@@ -0,0 +1,441 @@
+Cluster Nodes
+-------------
+
+Defining a Cluster Node
+_______________________
+
+Each cluster node will have an entry in the ``nodes`` section containing at
+least an ID and a name. A cluster node's ID is defined by the cluster layer
+(Corosync).
+
+.. topic:: **Example Corosync cluster node entry**
+
+ .. code-block:: xml
+
+ <node id="101" uname="pcmk-1"/>
+
+In normal circumstances, the admin should let the cluster populate this
+information automatically from the cluster layer.
+
+
+.. _node_name:
+
+Where Pacemaker Gets the Node Name
+##################################
+
+The name that Pacemaker uses for a node in the configuration does not have to
+be the same as its local hostname. Pacemaker uses the following for a Corosync
+node's name, in order of most preferred first:
+
+* The value of ``name`` in the ``nodelist`` section of ``corosync.conf``
+* The value of ``ring0_addr`` in the ``nodelist`` section of ``corosync.conf``
+* The local hostname (value of ``uname -n``)
+
+If the cluster is running, the ``crm_node -n`` command will display the local
+node's name as used by the cluster.
+
+If a Corosync ``nodelist`` is used, ``crm_node --name-for-id`` with a Corosync
+node ID will display the name used by the node with the given Corosync
+``nodeid``, for example:
+
+.. code-block:: none
+
+ crm_node --name-for-id 2
+
+
+.. index::
+ single: node; attribute
+ single: node attribute
+
+.. _node_attributes:
+
+Node Attributes
+_______________
+
+Pacemaker allows node-specific values to be specified using *node attributes*.
+A node attribute has a name, and may have a distinct value for each node.
+
+Node attributes come in two types, *permanent* and *transient*. Permanent node
+attributes are kept within the ``node`` entry, and keep their values even if
+the cluster restarts on a node. Transient node attributes are kept in the CIB's
+``status`` section, and go away when the cluster stops on the node.
+
+While certain node attributes have specific meanings to the cluster, they are
+mainly intended to allow administrators and resource agents to track any
+information desired.
+
+For example, an administrator might choose to define node attributes for how
+much RAM and disk space each node has, which OS each uses, or which server room
+rack each node is in.
+
+Users can configure :ref:`rules` that use node attributes to affect where
+resources are placed.
+
+Setting and querying node attributes
+####################################
+
+Node attributes can be set and queried using the ``crm_attribute`` and
+``attrd_updater`` commands, so that the user does not have to deal with XML
+configuration directly.
+
+Here is an example command to set a permanent node attribute, and the XML
+configuration that would be generated:
+
+.. topic:: **Result of using crm_attribute to specify which kernel pcmk-1 is running**
+
+ .. code-block:: none
+
+ # crm_attribute --type nodes --node pcmk-1 --name kernel --update $(uname -r)
+
+ .. code-block:: xml
+
+ <node id="1" uname="pcmk-1">
+ <instance_attributes id="nodes-1-attributes">
+ <nvpair id="nodes-1-kernel" name="kernel" value="3.10.0-862.14.4.el7.x86_64"/>
+ </instance_attributes>
+ </node>
+
+To read back the value that was just set:
+
+.. code-block:: none
+
+ # crm_attribute --type nodes --node pcmk-1 --name kernel --query
+ scope=nodes name=kernel value=3.10.0-862.14.4.el7.x86_64
+
+The ``--type nodes`` indicates that this is a permanent node attribute;
+``--type status`` would indicate a transient node attribute.
+
+Special node attributes
+#######################
+
+Certain node attributes have special meaning to the cluster.
+
+Node attribute names beginning with ``#`` are considered reserved for these
+special attributes. Some special attributes do not start with ``#``, for
+historical reasons.
+
+Certain special attributes are set automatically by the cluster, should never
+be modified directly, and can be used only within :ref:`rules`; these are
+listed under
+:ref:`built-in node attributes <node-attribute-expressions-special>`.
+
+For true/false values, the cluster considers a value of "1", "y", "yes", "on",
+or "true" (case-insensitively) to be true; "0", "n", "no", "off", "false", or
+unset to be false; and anything else to be an error.
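+
+As a sketch (a simplified illustration, not Pacemaker's actual code), these
+rules amount to:
+
```python
# Sketch of the documented truth-value rules for node attributes.
# This mirrors the described behavior; it is not Pacemaker's implementation.

TRUE_VALUES = {"1", "y", "yes", "on", "true"}
FALSE_VALUES = {"0", "n", "no", "off", "false"}

def node_attr_bool(value):
    """Return True/False per the documented rules; raise on anything else."""
    if value is None:          # unset counts as false
        return False
    v = value.lower()          # comparison is case-insensitive
    if v in TRUE_VALUES:
        return True
    if v in FALSE_VALUES:
        return False
    raise ValueError(f"invalid boolean value: {value!r}")

print(node_attr_bool("Yes"))   # True
print(node_attr_bool(None))    # False
```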
+
+.. table:: **Node attributes with special significance**
+ :class: longtable
+ :widths: 1 2
+
+ +----------------------------+-----------------------------------------------------+
+ | Name | Description |
+ +============================+=====================================================+
+ | fail-count-* | .. index:: |
+ | | pair: node attribute; fail-count |
+ | | |
+ | | Attributes whose names start with |
+ | | ``fail-count-`` are managed by the cluster |
+ | | to track how many times particular resource |
+ | | operations have failed on this node. These |
+ | | should be queried and cleared via the |
+ | | ``crm_failcount`` or |
+ | | ``crm_resource --cleanup`` commands rather |
+ | | than directly. |
+ +----------------------------+-----------------------------------------------------+
+ | last-failure-* | .. index:: |
+ | | pair: node attribute; last-failure |
+ | | |
+ | | Attributes whose names start with |
+ | | ``last-failure-`` are managed by the cluster |
+ | | to track when particular resource operations |
+ | | have most recently failed on this node. |
+ | | These should be cleared via the |
+ | | ``crm_failcount`` or |
+ | | ``crm_resource --cleanup`` commands rather |
+ | | than directly. |
+ +----------------------------+-----------------------------------------------------+
+ | maintenance | .. index:: |
+ | | pair: node attribute; maintenance |
+ | | |
+ | | Similar to the ``maintenance-mode`` |
+ | | :ref:`cluster option <cluster_options>`, but |
+ | | for a single node. If true, resources will |
+ | | not be started or stopped on the node, |
+ | | resources and individual clone instances |
+ | | running on the node will become unmanaged, |
+ | | and any recurring operations for those will |
+ | | be cancelled. |
+ | | |
+ | | **Warning:** Restarting pacemaker on a node that is |
+ | | in single-node maintenance mode will likely |
+ | | lead to undesirable effects. If |
+ | | ``maintenance`` is set as a transient |
+ | | attribute, it will be erased when |
+ | | Pacemaker is stopped, which will |
+ | | immediately take the node out of |
+ | | maintenance mode and likely get it |
+ | | fenced. Even if permanent, if Pacemaker |
+ | | is restarted, any resources active on the |
+ | | node will have their local history erased |
+ | | when the node rejoins, so the cluster |
+ | | will no longer consider them running on |
+ | | the node and thus will consider them |
+ | | managed again, leading them to be started |
+ | | elsewhere. This behavior might be |
+ | | improved in a future release. |
+ +----------------------------+-----------------------------------------------------+
+ | probe_complete | .. index:: |
+ | | pair: node attribute; probe_complete |
+ | | |
+ | | This is managed by the cluster to detect |
+ | | when nodes need to be reprobed, and should |
+ | | never be used directly. |
+ +----------------------------+-----------------------------------------------------+
+ | resource-discovery-enabled | .. index:: |
+ | | pair: node attribute; resource-discovery-enabled |
+ | | |
+ | | If the node is a remote node, fencing is enabled, |
+ | | and this attribute is explicitly set to false |
+ | | (unset means true in this case), resource discovery |
+ | | (probes) will not be done on this node. This is |
+ | | highly discouraged; the ``resource-discovery`` |
+ | | location constraint property is preferred for this |
+ | | purpose. |
+ +----------------------------+-----------------------------------------------------+
+ | shutdown | .. index:: |
+ | | pair: node attribute; shutdown |
+ | | |
+ | | This is managed by the cluster to orchestrate the |
+ | | shutdown of a node, and should never be used |
+ | | directly. |
+ +----------------------------+-----------------------------------------------------+
+ | site-name | .. index:: |
+ | | pair: node attribute; site-name |
+ | | |
+ | | If set, this will be used as the value of the |
+ | | ``#site-name`` node attribute used in rules. (If |
+ | | not set, the value of the ``cluster-name`` cluster |
+ | | option will be used as ``#site-name`` instead.) |
+ +----------------------------+-----------------------------------------------------+
+ | standby | .. index:: |
+ | | pair: node attribute; standby |
+ | | |
+ | | If true, the node is in standby mode. This is |
+ | | typically set and queried via the ``crm_standby`` |
+ | | command rather than directly. |
+ +----------------------------+-----------------------------------------------------+
+ | terminate | .. index:: |
+ | | pair: node attribute; terminate |
+ | | |
+ | | If the value is true or begins with any nonzero |
+ | | number, the node will be fenced. This is typically |
+ | | set by tools rather than directly. |
+ +----------------------------+-----------------------------------------------------+
+ | #digests-* | .. index:: |
+ | | pair: node attribute; #digests |
+ | | |
+ | | Attributes whose names start with ``#digests-`` are |
+ | | managed by the cluster to detect when |
+ | | :ref:`unfencing` needs to be redone, and should |
+ | | never be used directly. |
+ +----------------------------+-----------------------------------------------------+
+ | #node-unfenced | .. index:: |
+ | | pair: node attribute; #node-unfenced |
+ | | |
+ | | When the node was last unfenced (as seconds since |
+ | | the epoch). This is managed by the cluster and |
+ | | should never be used directly. |
+ +----------------------------+-----------------------------------------------------+
+
+.. index::
+ single: node; health
+
+.. _node-health:
+
+Tracking Node Health
+____________________
+
+A node may be functioning adequately as far as cluster membership is concerned,
+and yet be "unhealthy" in some respect that makes it an undesirable location
+for resources. For example, a disk drive may be reporting SMART errors, or the
+CPU may be highly loaded.
+
+Pacemaker offers a way to automatically move resources off unhealthy nodes.
+
+.. index::
+ single: node attribute; health
+
+Node Health Attributes
+######################
+
+Pacemaker will treat any node attribute whose name starts with ``#health`` as
+an indicator of node health. Node health attributes may have one of the
+following values:
+
+.. table:: **Allowed Values for Node Health Attributes**
+ :widths: 1 4
+
+ +------------+--------------------------------------------------------------+
+ | Value | Intended significance |
+ +============+==============================================================+
+ | ``red`` | .. index:: |
+ | | single: red; node health attribute value |
+ | | single: node attribute; health (red) |
+ | | |
+ | | This indicator is unhealthy |
+ +------------+--------------------------------------------------------------+
+ | ``yellow`` | .. index:: |
+ | | single: yellow; node health attribute value |
+ | | single: node attribute; health (yellow) |
+ | | |
+ | | This indicator is becoming unhealthy |
+ +------------+--------------------------------------------------------------+
+ | ``green`` | .. index:: |
+ | | single: green; node health attribute value |
+ | | single: node attribute; health (green) |
+ | | |
+ | | This indicator is healthy |
+ +------------+--------------------------------------------------------------+
+ | *integer* | .. index:: |
+ | | single: score; node health attribute value |
+ | | single: node attribute; health (score) |
+ | | |
+ | | A numeric score to apply to all resources on this node (0 or |
+ | | positive is healthy, negative is unhealthy) |
+ +------------+--------------------------------------------------------------+
+
+
+.. index::
+ pair: cluster option; node-health-strategy
+
+Node Health Strategy
+####################
+
+Pacemaker assigns a node health score to each node, as the sum of the values of
+all its node health attributes. This score will be used as a location
+constraint applied to this node for all resources.
+
+The ``node-health-strategy`` cluster option controls how Pacemaker responds to
+changes in node health attributes, and how it translates ``red``, ``yellow``,
+and ``green`` to scores.
+
+Allowed values are:
+
+.. table:: **Node Health Strategies**
+ :widths: 1 4
+
+ +----------------+----------------------------------------------------------+
+ | Value | Effect |
+ +================+==========================================================+
+ | none | .. index:: |
+ | | single: node-health-strategy; none |
+ | | single: none; node-health-strategy value |
+ | | |
+ | | Do not track node health attributes at all. |
+ +----------------+----------------------------------------------------------+
+ | migrate-on-red | .. index:: |
+ | | single: node-health-strategy; migrate-on-red |
+ | | single: migrate-on-red; node-health-strategy value |
+ | | |
+ | | Assign the value of ``-INFINITY`` to ``red``, and 0 to |
+ | | ``yellow`` and ``green``. This will cause all resources |
+ | | to move off the node if any attribute is ``red``. |
+ +----------------+----------------------------------------------------------+
+ | only-green | .. index:: |
+ | | single: node-health-strategy; only-green |
+ | | single: only-green; node-health-strategy value |
+ | | |
+ | | Assign the value of ``-INFINITY`` to ``red`` and |
+ | | ``yellow``, and 0 to ``green``. This will cause all |
+ | | resources to move off the node if any attribute is |
+ | | ``red`` or ``yellow``. |
+ +----------------+----------------------------------------------------------+
+ | progressive | .. index:: |
+ | | single: node-health-strategy; progressive |
+ | | single: progressive; node-health-strategy value |
+ | | |
+ | | Assign the value of the ``node-health-red`` cluster |
+ | | option to ``red``, the value of ``node-health-yellow`` |
+ | | to ``yellow``, and the value of ``node-health-green`` to |
+ | | ``green``. Each node is additionally assigned a score of |
+ | | ``node-health-base`` (this allows resources to start |
+ | | even if some attributes are ``yellow``). This strategy |
+ | | gives the administrator finer control over how important |
+ | | each value is. |
+ +----------------+----------------------------------------------------------+
+ | custom | .. index:: |
+ | | single: node-health-strategy; custom |
+ | | single: custom; node-health-strategy value |
+ | | |
+ | | Track node health attributes using the same values as |
+ | | ``progressive`` for ``red``, ``yellow``, and ``green``, |
+ | | but do not take them into account. The administrator is |
+ | | expected to implement a policy by defining :ref:`rules` |
+ | | referencing node health attributes. |
+ +----------------+----------------------------------------------------------+
+
+
+Exempting a Resource from Health Restrictions
+#############################################
+
+If you want a resource to be able to run on a node even if its health score
+would otherwise prevent it, set the resource's ``allow-unhealthy-nodes``
+meta-attribute to ``true`` *(available since 2.1.3)*.
+
+This is particularly useful for node health agents, to allow them to detect
+when the node becomes healthy again. If you configure a health agent without
+this setting, then the health agent will be banned from an unhealthy node,
+and you will have to investigate and clear the health attribute manually once
+it is healthy to allow resources on the node again.
+
+If you want the meta-attribute to apply to a clone, it must be set on the clone
+itself, not on the resource being cloned.
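+
+For example, a cloned health agent (the IDs here are illustrative) could be
+exempted as follows:
+
+.. topic:: Exempting a cloned health agent from health restrictions
+
+   .. code-block:: xml
+
+      <clone id="health-agent-clone">
+        <meta_attributes id="health-agent-clone-meta">
+          <nvpair id="health-agent-clone-meta-allow-unhealthy-nodes" name="allow-unhealthy-nodes" value="true"/>
+        </meta_attributes>
+        <primitive class="ocf" id="health-agent" provider="pacemaker" type="HealthIOWait"/>
+      </clone>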
+
+
+Configuring Node Health Agents
+##############################
+
+Since Pacemaker calculates node health based on node attributes, any method
+that sets node attributes may be used to measure node health. The most common
+are resource agents and custom daemons.
+
+Pacemaker provides examples that can be used directly or as a basis for custom
+code. The ``ocf:pacemaker:HealthCPU``, ``ocf:pacemaker:HealthIOWait``, and
+``ocf:pacemaker:HealthSMART`` resource agents set node health attributes based
+on CPU and disk status.
+
+To take advantage of this feature, add the resource to your cluster (generally
+as a cloned resource with a recurring monitor action, to continually check the
+health of all nodes). For example:
+
+.. topic:: Example HealthIOWait resource configuration
+
+ .. code-block:: xml
+
+ <clone id="resHealthIOWait-clone">
+ <primitive class="ocf" id="HealthIOWait" provider="pacemaker" type="HealthIOWait">
+ <instance_attributes id="resHealthIOWait-instance_attributes">
+ <nvpair id="resHealthIOWait-instance_attributes-red_limit" name="red_limit" value="30"/>
+ <nvpair id="resHealthIOWait-instance_attributes-yellow_limit" name="yellow_limit" value="10"/>
+ </instance_attributes>
+ <operations>
+ <op id="resHealthIOWait-monitor-interval-5" interval="5" name="monitor" timeout="5"/>
+ <op id="resHealthIOWait-start-interval-0s" interval="0s" name="start" timeout="10s"/>
+ <op id="resHealthIOWait-stop-interval-0s" interval="0s" name="stop" timeout="10s"/>
+ </operations>
+ </primitive>
+ </clone>
+
+The resource agents use ``attrd_updater`` to set proper status for each node
+running this resource, as a node attribute whose name starts with ``#health``
+(for ``HealthIOWait``, the node attribute is named ``#health-iowait``).
+
+When a node is no longer faulty, you can force the cluster to make it available
+to take resources without waiting for the next monitor, by setting the node
+health attribute to green. For example:
+
+.. topic:: **Force node1 to be marked as healthy**
+
+ .. code-block:: none
+
+ # attrd_updater --name "#health-iowait" --update "green" --node "node1"
diff --git a/doc/sphinx/Pacemaker_Explained/options.rst b/doc/sphinx/Pacemaker_Explained/options.rst
new file mode 100644
index 0000000..ee0511c
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Explained/options.rst
@@ -0,0 +1,622 @@
+Cluster-Wide Configuration
+--------------------------
+
+.. index::
+ pair: XML element; cib
+ pair: XML element; configuration
+
+Configuration Layout
+####################
+
+The cluster is defined by the Cluster Information Base (CIB), which uses XML
+notation. The simplest CIB, an empty one, looks like this:
+
+.. topic:: An empty configuration
+
+ .. code-block:: xml
+
+ <cib crm_feature_set="3.6.0" validate-with="pacemaker-3.5" epoch="1" num_updates="0" admin_epoch="0">
+ <configuration>
+ <crm_config/>
+ <nodes/>
+ <resources/>
+ <constraints/>
+ </configuration>
+ <status/>
+ </cib>
+
+The empty configuration above contains the major sections that make up a CIB:
+
+* ``cib``: The entire CIB is enclosed with a ``cib`` element. Certain
+ fundamental settings are defined as attributes of this element.
+
+ * ``configuration``: This section -- the primary focus of this document --
+ contains traditional configuration information such as what resources the
+ cluster serves and the relationships among them.
+
+ * ``crm_config``: cluster-wide configuration options
+
+ * ``nodes``: the machines that host the cluster
+
+ * ``resources``: the services run by the cluster
+
+ * ``constraints``: indications of how resources should be placed
+
+ * ``status``: This section contains the history of each resource on each
+ node. Based on this data, the cluster can construct the complete current
+ state of the cluster. The authoritative source for this section is the
+ local executor (pacemaker-execd process) on each cluster node, and the
+ cluster will occasionally repopulate the entire section. For this reason,
+ it is never written to disk, and administrators are advised against
+ modifying it in any way.
+
+In this document, configuration settings will be described as properties or
+options based on how they are defined in the CIB:
+
+* Properties are XML attributes of an XML element.
+
+* Options are name-value pairs expressed as ``nvpair`` child elements of an XML
+ element.
+
+Normally, you will use command-line tools that abstract the XML, so the
+distinction will be unimportant; both properties and options are cluster
+settings you can tweak.
+
+CIB Properties
+##############
+
+Certain settings are defined by CIB properties (that is, attributes of the
+``cib`` tag) rather than with the rest of the cluster configuration in the
+``configuration`` section.
+
+The reason is simply a matter of parsing. These options are used by the
+configuration database which is, by design, mostly ignorant of the content it
+holds. So the decision was made to place them in an easy-to-find location.
+
+.. table:: **CIB Properties**
+ :class: longtable
+ :widths: 1 3
+
+ +------------------+-----------------------------------------------------------+
+ | Attribute | Description |
+ +==================+===========================================================+
+ | admin_epoch | .. index:: |
+ | | pair: admin_epoch; cib |
+ | | |
+ | | When a node joins the cluster, the cluster performs a |
+ | | check to see which node has the best configuration. It |
+ | | asks the node with the highest (``admin_epoch``, |
+ | | ``epoch``, ``num_updates``) tuple to replace the |
+ | | configuration on all the nodes -- which makes setting |
+ | | them, and setting them correctly, very important. |
+ | | ``admin_epoch`` is never modified by the cluster; you can |
+ | | use this to make the configurations on any inactive nodes |
+ | | obsolete. |
+ | | |
+ | | **Warning:** Never set this value to zero. In such cases, |
+ | | the cluster cannot tell the difference between your |
+ | | configuration and the "empty" one used when nothing is |
+ | | found on disk. |
+ +------------------+-----------------------------------------------------------+
+ | epoch | .. index:: |
+ | | pair: epoch; cib |
+ | | |
+ | | The cluster increments this every time the configuration |
+ | | is updated (usually by the administrator). |
+ +------------------+-----------------------------------------------------------+
+ | num_updates | .. index:: |
+ | | pair: num_updates; cib |
+ | | |
+ | | The cluster increments this every time the configuration |
+ | | or status is updated (usually by the cluster) and resets |
+ | | it to 0 when epoch changes. |
+ +------------------+-----------------------------------------------------------+
+ | validate-with | .. index:: |
+ | | pair: validate-with; cib |
+ | | |
+ | | Determines the type of XML validation that will be done |
+ | | on the configuration. If set to ``none``, the cluster |
+ | | will not verify that updates conform to the DTD (nor |
+ | | reject ones that don't). |
+ +------------------+-----------------------------------------------------------+
+ | cib-last-written | .. index:: |
+ | | pair: cib-last-written; cib |
+ | | |
+ | | Indicates when the configuration was last written to |
+ | | disk. Maintained by the cluster; for informational |
+ | | purposes only. |
+ +------------------+-----------------------------------------------------------+
+ | have-quorum | .. index:: |
+ | | pair: have-quorum; cib |
+ | | |
+ | | Indicates if the cluster has quorum. If false, this may |
+ | | mean that the cluster cannot start resources or fence |
+ | | other nodes (see ``no-quorum-policy`` below). Maintained |
+ | | by the cluster. |
+ +------------------+-----------------------------------------------------------+
+ | dc-uuid | .. index:: |
+ | | pair: dc-uuid; cib |
+ | | |
+ | | Indicates which cluster node is the current leader. Used |
+ | | by the cluster when placing resources and determining the |
+ | | order of some events. Maintained by the cluster. |
+ +------------------+-----------------------------------------------------------+
+
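+As a sketch of how the (``admin_epoch``, ``epoch``, ``num_updates``) tuple
+works, consider the two abbreviated CIB headers below. The tuple is compared
+element by element, with ``admin_epoch`` compared first, so the first
+configuration wins despite having seen far fewer updates:
+
+.. code-block:: xml
+
+   <!-- (2, 1, 0) wins the comparison... -->
+   <cib admin_epoch="2" epoch="1" num_updates="0"/>
+
+   <!-- ...over (1, 90, 45), despite the many more updates. -->
+   <cib admin_epoch="1" epoch="90" num_updates="45"/>
+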
+.. _cluster_options:
+
+Cluster Options
+###############
+
+Cluster options, as you might expect, control how the cluster behaves when
+confronted with various situations.
+
+They are grouped into sets within the ``crm_config`` section. In advanced
+configurations, there may be more than one set. (This will be described later
+in the chapter on :ref:`rules` where we will show how to have the cluster use
+different sets of options during working hours than during weekends.) For now,
+we will describe the simple case where each option is present at most once.
+
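+As a preview, a configuration with a second, rule-gated option set might look
+roughly like the following sketch (the ``id`` values, times, and chosen option
+are illustrative; see the chapter on :ref:`rules` for the actual syntax and
+semantics):
+
+.. code-block:: xml
+
+   <crm_config>
+     <!-- Used during weekday working hours, thanks to the rule
+          and the higher score -->
+     <cluster_property_set id="working-hours-options" score="2">
+       <rule id="working-hours-rule" score="0">
+         <date_expression id="working-hours-date" operation="date_spec">
+           <date_spec id="working-hours-spec" hours="9-16" weekdays="1-5"/>
+         </date_expression>
+       </rule>
+       <nvpair id="working-hours-batch-limit" name="batch-limit" value="10"/>
+     </cluster_property_set>
+     <!-- Used the rest of the time -->
+     <cluster_property_set id="default-options" score="1">
+       <nvpair id="default-batch-limit" name="batch-limit" value="0"/>
+     </cluster_property_set>
+   </crm_config>
+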
+You can obtain an up-to-date list of cluster options, including their default
+values, by running the ``man pacemaker-schedulerd`` and
+``man pacemaker-controld`` commands.
+
+.. table:: **Cluster Options**
+ :class: longtable
+ :widths: 2 1 4
+
+ +---------------------------+---------+----------------------------------------------------+
+ | Option | Default | Description |
+ +===========================+=========+====================================================+
+ | cluster-name | | .. index:: |
+ | | | pair: cluster option; cluster-name |
+ | | | |
+ | | | An (optional) name for the cluster as a whole. |
+ | | | This is mostly for users' convenience for use |
+ | | | as desired in administration, but this can be |
+ | | | used in the Pacemaker configuration in |
+ | | | :ref:`rules` (as the ``#cluster-name`` |
+ | | | :ref:`node attribute |
+   |                           |         | <node-attribute-expressions-special>`). It may     |
+ | | | also be used by higher-level tools when |
+ | | | displaying cluster information, and by |
+ | | | certain resource agents (for example, the |
+ | | | ``ocf:heartbeat:GFS2`` agent stores the |
+ | | | cluster name in filesystem meta-data). |
+ +---------------------------+---------+----------------------------------------------------+
+ | dc-version | | .. index:: |
+ | | | pair: cluster option; dc-version |
+ | | | |
+ | | | Version of Pacemaker on the cluster's DC. |
+ | | | Determined automatically by the cluster. Often |
+ | | | includes the hash which identifies the exact |
+ | | | Git changeset it was built from. Used for |
+ | | | diagnostic purposes. |
+ +---------------------------+---------+----------------------------------------------------+
+ | cluster-infrastructure | | .. index:: |
+ | | | pair: cluster option; cluster-infrastructure |
+ | | | |
+ | | | The messaging stack on which Pacemaker is |
+ | | | currently running. Determined automatically by |
+ | | | the cluster. Used for informational and |
+ | | | diagnostic purposes. |
+ +---------------------------+---------+----------------------------------------------------+
+ | no-quorum-policy | stop | .. index:: |
+ | | | pair: cluster option; no-quorum-policy |
+ | | | |
+ | | | What to do when the cluster does not have |
+ | | | quorum. Allowed values: |
+ | | | |
+ | | | * ``ignore:`` continue all resource management |
+ | | | * ``freeze:`` continue resource management, but |
+ | | | don't recover resources from nodes not in the |
+ | | | affected partition |
+ | | | * ``stop:`` stop all resources in the affected |
+ | | | cluster partition |
+ | | | * ``demote:`` demote promotable resources and |
+ | | | stop all other resources in the affected |
+ | | | cluster partition *(since 2.0.5)* |
+ | | | * ``suicide:`` fence all nodes in the affected |
+ | | | cluster partition |
+ +---------------------------+---------+----------------------------------------------------+
+ | batch-limit | 0 | .. index:: |
+ | | | pair: cluster option; batch-limit |
+ | | | |
+ | | | The maximum number of actions that the cluster |
+ | | | may execute in parallel across all nodes. The |
+ | | | "correct" value will depend on the speed and |
+ | | | load of your network and cluster nodes. If zero, |
+ | | | the cluster will impose a dynamically calculated |
+ | | | limit only when any node has high load. If -1, the |
+ | | | cluster will not impose any limit. |
+ +---------------------------+---------+----------------------------------------------------+
+ | migration-limit | -1 | .. index:: |
+ | | | pair: cluster option; migration-limit |
+ | | | |
+ | | | The number of |
+ | | | :ref:`live migration <live-migration>` actions |
+ | | | that the cluster is allowed to execute in |
+ | | | parallel on a node. A value of -1 means |
+ | | | unlimited. |
+ +---------------------------+---------+----------------------------------------------------+
+ | symmetric-cluster | true | .. index:: |
+ | | | pair: cluster option; symmetric-cluster |
+ | | | |
+ | | | Whether resources can run on any node by default |
+ | | | (if false, a resource is allowed to run on a |
+ | | | node only if a |
+ | | | :ref:`location constraint <location-constraint>` |
+ | | | enables it) |
+ +---------------------------+---------+----------------------------------------------------+
+ | stop-all-resources | false | .. index:: |
+ | | | pair: cluster option; stop-all-resources |
+ | | | |
+ | | | Whether all resources should be disallowed from |
+ | | | running (can be useful during maintenance) |
+ +---------------------------+---------+----------------------------------------------------+
+ | stop-orphan-resources | true | .. index:: |
+ | | | pair: cluster option; stop-orphan-resources |
+ | | | |
+ | | | Whether resources that have been deleted from |
+ | | | the configuration should be stopped. This value |
+ | | | takes precedence over ``is-managed`` (that is, |
+ | | | even unmanaged resources will be stopped when |
+   |                           |         | orphaned if this value is ``true``)                |
+ +---------------------------+---------+----------------------------------------------------+
+ | stop-orphan-actions | true | .. index:: |
+ | | | pair: cluster option; stop-orphan-actions |
+ | | | |
+ | | | Whether recurring :ref:`operations <operation>` |
+ | | | that have been deleted from the configuration |
+ | | | should be cancelled |
+ +---------------------------+---------+----------------------------------------------------+
+ | start-failure-is-fatal | true | .. index:: |
+ | | | pair: cluster option; start-failure-is-fatal |
+ | | | |
+ | | | Whether a failure to start a resource on a |
+ | | | particular node prevents further start attempts |
+   |                           |         | on that node. If ``false``, the cluster will       |
+ | | | decide whether the node is still eligible based |
+ | | | on the resource's current failure count and |
+ | | | :ref:`migration-threshold <failure-handling>`. |
+ +---------------------------+---------+----------------------------------------------------+
+ | enable-startup-probes | true | .. index:: |
+ | | | pair: cluster option; enable-startup-probes |
+ | | | |
+ | | | Whether the cluster should check the |
+ | | | pre-existing state of resources when the cluster |
+ | | | starts |
+ +---------------------------+---------+----------------------------------------------------+
+ | maintenance-mode | false | .. index:: |
+ | | | pair: cluster option; maintenance-mode |
+ | | | |
+ | | | Whether the cluster should refrain from |
+ | | | monitoring, starting and stopping resources |
+ +---------------------------+---------+----------------------------------------------------+
+ | stonith-enabled | true | .. index:: |
+ | | | pair: cluster option; stonith-enabled |
+ | | | |
+ | | | Whether the cluster is allowed to fence nodes |
+ | | | (for example, failed nodes and nodes with |
+   |                           |         | resources that can't be stopped).                  |
+ | | | |
+ | | | If true, at least one fence device must be |
+ | | | configured before resources are allowed to run. |
+ | | | |
+ | | | If false, unresponsive nodes are immediately |
+ | | | assumed to be running no resources, and resource |
+ | | | recovery on online nodes starts without any |
+ | | | further protection (which can mean *data loss* |
+ | | | if the unresponsive node still accesses shared |
+ | | | storage, for example). See also the |
+ | | | :ref:`requires <requires>` resource |
+ | | | meta-attribute. |
+ +---------------------------+---------+----------------------------------------------------+
+ | stonith-action | reboot | .. index:: |
+ | | | pair: cluster option; stonith-action |
+ | | | |
+ | | | Action the cluster should send to the fence agent |
+ | | | when a node must be fenced. Allowed values are |
+ | | | ``reboot``, ``off``, and (for legacy agents only) |
+ | | | ``poweroff``. |
+ +---------------------------+---------+----------------------------------------------------+
+ | stonith-timeout | 60s | .. index:: |
+ | | | pair: cluster option; stonith-timeout |
+ | | | |
+ | | | How long to wait for ``on``, ``off``, and |
+ | | | ``reboot`` fence actions to complete by default. |
+ +---------------------------+---------+----------------------------------------------------+
+ | stonith-max-attempts | 10 | .. index:: |
+ | | | pair: cluster option; stonith-max-attempts |
+ | | | |
+ | | | How many times fencing can fail for a target |
+ | | | before the cluster will no longer immediately |
+ | | | re-attempt it. |
+ +---------------------------+---------+----------------------------------------------------+
+ | stonith-watchdog-timeout | 0 | .. index:: |
+ | | | pair: cluster option; stonith-watchdog-timeout |
+ | | | |
+ | | | If nonzero, and the cluster detects |
+ | | | ``have-watchdog`` as ``true``, then watchdog-based |
+ | | | self-fencing will be performed via SBD when |
+ | | | fencing is required, without requiring a fencing |
+ | | | resource explicitly configured. |
+ | | | |
+ | | | If this is set to a positive value, unseen nodes |
+ | | | are assumed to self-fence within this much time. |
+ | | | |
+ | | | **Warning:** It must be ensured that this value is |
+ | | | larger than the ``SBD_WATCHDOG_TIMEOUT`` |
+ | | | environment variable on all nodes. Pacemaker |
+ | | | verifies the settings individually on all nodes |
+ | | | and prevents startup or shuts down if configured |
+ | | | wrongly on the fly. It is strongly recommended |
+ | | | that ``SBD_WATCHDOG_TIMEOUT`` be set to the same |
+ | | | value on all nodes. |
+ | | | |
+ | | | If this is set to a negative value, and |
+ | | | ``SBD_WATCHDOG_TIMEOUT`` is set, twice that value |
+ | | | will be used. |
+ | | | |
+ | | | **Warning:** In this case, it is essential (and |
+ | | | currently not verified by pacemaker) that |
+ | | | ``SBD_WATCHDOG_TIMEOUT`` is set to the same |
+ | | | value on all nodes. |
+ +---------------------------+---------+----------------------------------------------------+
+ | concurrent-fencing | false | .. index:: |
+ | | | pair: cluster option; concurrent-fencing |
+ | | | |
+ | | | Whether the cluster is allowed to initiate |
+ | | | multiple fence actions concurrently. Fence actions |
+ | | | initiated externally, such as via the |
+ | | | ``stonith_admin`` tool or an application such as |
+ | | | DLM, or by the fencer itself such as recurring |
+ | | | device monitors and ``status`` and ``list`` |
+ | | | commands, are not limited by this option. |
+ +---------------------------+---------+----------------------------------------------------+
+ | fence-reaction | stop | .. index:: |
+ | | | pair: cluster option; fence-reaction |
+ | | | |
+ | | | How should a cluster node react if notified of its |
+ | | | own fencing? A cluster node may receive |
+ | | | notification of its own fencing if fencing is |
+ | | | misconfigured, or if fabric fencing is in use that |
+ | | | doesn't cut cluster communication. Allowed values |
+ | | | are ``stop`` to attempt to immediately stop |
+ | | | pacemaker and stay stopped, or ``panic`` to |
+ | | | attempt to immediately reboot the local node, |
+ | | | falling back to stop on failure. The default is |
+ | | | likely to be changed to ``panic`` in a future |
+ | | | release. *(since 2.0.3)* |
+ +---------------------------+---------+----------------------------------------------------+
+ | priority-fencing-delay | 0 | .. index:: |
+ | | | pair: cluster option; priority-fencing-delay |
+ | | | |
+ | | | Apply this delay to any fencing targeting the lost |
+ | | | nodes with the highest total resource priority in |
+ | | | case we don't have the majority of the nodes in |
+ | | | our cluster partition, so that the more |
+ | | | significant nodes potentially win any fencing |
+ | | | match (especially meaningful in a split-brain of a |
+ | | | 2-node cluster). A promoted resource instance |
+ | | | takes the resource's priority plus 1 if the |
+ | | | resource's priority is not 0. Any static or random |
+ | | | delays introduced by ``pcmk_delay_base`` and |
+ | | | ``pcmk_delay_max`` configured for the |
+ | | | corresponding fencing resources will be added to |
+ | | | this delay. This delay should be significantly |
+ | | | greater than (safely twice) the maximum delay from |
+ | | | those parameters. *(since 2.0.4)* |
+ +---------------------------+---------+----------------------------------------------------+
+ | cluster-delay | 60s | .. index:: |
+ | | | pair: cluster option; cluster-delay |
+ | | | |
+ | | | Estimated maximum round-trip delay over the |
+ | | | network (excluding action execution). If the DC |
+ | | | requires an action to be executed on another node, |
+ | | | it will consider the action failed if it does not |
+ | | | get a response from the other node in this time |
+ | | | (after considering the action's own timeout). The |
+ | | | "correct" value will depend on the speed and load |
+ | | | of your network and cluster nodes. |
+ +---------------------------+---------+----------------------------------------------------+
+ | dc-deadtime | 20s | .. index:: |
+ | | | pair: cluster option; dc-deadtime |
+ | | | |
+ | | | How long to wait for a response from other nodes |
+ | | | during startup. The "correct" value will depend on |
+ | | | the speed/load of your network and the type of |
+ | | | switches used. |
+ +---------------------------+---------+----------------------------------------------------+
+ | cluster-ipc-limit | 500 | .. index:: |
+ | | | pair: cluster option; cluster-ipc-limit |
+ | | | |
+ | | | The maximum IPC message backlog before one cluster |
+ | | | daemon will disconnect another. This is of use in |
+ | | | large clusters, for which a good value is the |
+ | | | number of resources in the cluster multiplied by |
+ | | | the number of nodes. The default of 500 is also |
+ | | | the minimum. Raise this if you see |
+ | | | "Evicting client" messages for cluster daemon PIDs |
+ | | | in the logs. |
+ +---------------------------+---------+----------------------------------------------------+
+ | pe-error-series-max | -1 | .. index:: |
+ | | | pair: cluster option; pe-error-series-max |
+ | | | |
+ | | | The number of scheduler inputs resulting in errors |
+ | | | to save. Used when reporting problems. A value of |
+ | | | -1 means unlimited (report all), and 0 means none. |
+ +---------------------------+---------+----------------------------------------------------+
+ | pe-warn-series-max | 5000 | .. index:: |
+ | | | pair: cluster option; pe-warn-series-max |
+ | | | |
+ | | | The number of scheduler inputs resulting in |
+ | | | warnings to save. Used when reporting problems. A |
+ | | | value of -1 means unlimited (report all), and 0 |
+ | | | means none. |
+ +---------------------------+---------+----------------------------------------------------+
+ | pe-input-series-max | 4000 | .. index:: |
+ | | | pair: cluster option; pe-input-series-max |
+ | | | |
+ | | | The number of "normal" scheduler inputs to save. |
+ | | | Used when reporting problems. A value of -1 means |
+ | | | unlimited (report all), and 0 means none. |
+ +---------------------------+---------+----------------------------------------------------+
+ | enable-acl | false | .. index:: |
+ | | | pair: cluster option; enable-acl |
+ | | | |
+ | | | Whether :ref:`acl` should be used to authorize |
+ | | | modifications to the CIB |
+ +---------------------------+---------+----------------------------------------------------+
+ | placement-strategy | default | .. index:: |
+ | | | pair: cluster option; placement-strategy |
+ | | | |
+ | | | How the cluster should allocate resources to nodes |
+ | | | (see :ref:`utilization`). Allowed values are |
+ | | | ``default``, ``utilization``, ``balanced``, and |
+ | | | ``minimal``. |
+ +---------------------------+---------+----------------------------------------------------+
+ | node-health-strategy | none | .. index:: |
+ | | | pair: cluster option; node-health-strategy |
+ | | | |
+ | | | How the cluster should react to node health |
+ | | | attributes (see :ref:`node-health`). Allowed values|
+ | | | are ``none``, ``migrate-on-red``, ``only-green``, |
+ | | | ``progressive``, and ``custom``. |
+ +---------------------------+---------+----------------------------------------------------+
+ | node-health-base | 0 | .. index:: |
+ | | | pair: cluster option; node-health-base |
+ | | | |
+ | | | The base health score assigned to a node. Only |
+ | | | used when ``node-health-strategy`` is |
+ | | | ``progressive``. |
+ +---------------------------+---------+----------------------------------------------------+
+ | node-health-green | 0 | .. index:: |
+ | | | pair: cluster option; node-health-green |
+ | | | |
+ | | | The score to use for a node health attribute whose |
+ | | | value is ``green``. Only used when |
+ | | | ``node-health-strategy`` is ``progressive`` or |
+ | | | ``custom``. |
+ +---------------------------+---------+----------------------------------------------------+
+ | node-health-yellow | 0 | .. index:: |
+ | | | pair: cluster option; node-health-yellow |
+ | | | |
+ | | | The score to use for a node health attribute whose |
+ | | | value is ``yellow``. Only used when |
+ | | | ``node-health-strategy`` is ``progressive`` or |
+ | | | ``custom``. |
+ +---------------------------+---------+----------------------------------------------------+
+ | node-health-red | 0 | .. index:: |
+ | | | pair: cluster option; node-health-red |
+ | | | |
+ | | | The score to use for a node health attribute whose |
+ | | | value is ``red``. Only used when |
+ | | | ``node-health-strategy`` is ``progressive`` or |
+ | | | ``custom``. |
+ +---------------------------+---------+----------------------------------------------------+
+ | cluster-recheck-interval | 15min | .. index:: |
+ | | | pair: cluster option; cluster-recheck-interval |
+ | | | |
+ | | | Pacemaker is primarily event-driven, and looks |
+ | | | ahead to know when to recheck the cluster for |
+ | | | failure timeouts and most time-based rules |
+ | | | *(since 2.0.3)*. However, it will also recheck the |
+ | | | cluster after this amount of inactivity. This has |
+ | | | two goals: rules with ``date_spec`` are only |
+ | | | guaranteed to be checked this often, and it also |
+ | | | serves as a fail-safe for some kinds of scheduler |
+ | | | bugs. A value of 0 disables this polling; positive |
+ | | | values are a time interval. |
+ +---------------------------+---------+----------------------------------------------------+
+ | shutdown-lock | false | .. index:: |
+ | | | pair: cluster option; shutdown-lock |
+ | | | |
+ | | | The default of false allows active resources to be |
+ | | | recovered elsewhere when their node is cleanly |
+ | | | shut down, which is what the vast majority of |
+ | | | users will want. However, some users prefer to |
+ | | | make resources highly available only for failures, |
+ | | | with no recovery for clean shutdowns. If this |
+ | | | option is true, resources active on a node when it |
+ | | | is cleanly shut down are kept "locked" to that |
+ | | | node (not allowed to run elsewhere) until they |
+ | | | start again on that node after it rejoins (or for |
+ | | | at most ``shutdown-lock-limit``, if set). Stonith |
+ | | | resources and Pacemaker Remote connections are |
+ | | | never locked. Clone and bundle instances and the |
+ | | | promoted role of promotable clones are currently |
+ | | | never locked, though support could be added in a |
+ | | | future release. Locks may be manually cleared |
+ | | | using the ``--refresh`` option of ``crm_resource`` |
+ | | | (both the resource and node must be specified; |
+ | | | this works with remote nodes if their connection |
+ | | | resource's ``target-role`` is set to ``Stopped``, |
+ | | | but not if Pacemaker Remote is stopped on the |
+ | | | remote node without disabling the connection |
+ | | | resource). *(since 2.0.4)* |
+ +---------------------------+---------+----------------------------------------------------+
+ | shutdown-lock-limit | 0 | .. index:: |
+ | | | pair: cluster option; shutdown-lock-limit |
+ | | | |
+ | | | If ``shutdown-lock`` is true, and this is set to a |
+ | | | nonzero time duration, locked resources will be |
+ | | | allowed to start after this much time has passed |
+ | | | since the node shutdown was initiated, even if the |
+ | | | node has not rejoined. (This works with remote |
+ | | | nodes only if their connection resource's |
+ | | | ``target-role`` is set to ``Stopped``.) |
+ | | | *(since 2.0.4)* |
+ +---------------------------+---------+----------------------------------------------------+
+ | remove-after-stop | false | .. index:: |
+ | | | pair: cluster option; remove-after-stop |
+ | | | |
+ | | | *Deprecated* Should the cluster remove |
+ | | | resources from Pacemaker's executor after they are |
+ | | | stopped? Values other than the default are, at |
+ | | | best, poorly tested and potentially dangerous. |
+ | | | This option is deprecated and will be removed in a |
+ | | | future release. |
+ +---------------------------+---------+----------------------------------------------------+
+ | startup-fencing | true | .. index:: |
+ | | | pair: cluster option; startup-fencing |
+ | | | |
+ | | | *Advanced Use Only:* Should the cluster fence |
+ | | | unseen nodes at start-up? Setting this to false is |
+ | | | unsafe, because the unseen nodes could be active |
+ | | | and running resources but unreachable. |
+ +---------------------------+---------+----------------------------------------------------+
+ | election-timeout | 2min | .. index:: |
+ | | | pair: cluster option; election-timeout |
+ | | | |
+ | | | *Advanced Use Only:* If you need to adjust this |
+ | | | value, it probably indicates the presence of a bug.|
+ +---------------------------+---------+----------------------------------------------------+
+ | shutdown-escalation | 20min | .. index:: |
+ | | | pair: cluster option; shutdown-escalation |
+ | | | |
+ | | | *Advanced Use Only:* If you need to adjust this |
+ | | | value, it probably indicates the presence of a bug.|
+ +---------------------------+---------+----------------------------------------------------+
+ | join-integration-timeout | 3min | .. index:: |
+ | | | pair: cluster option; join-integration-timeout |
+ | | | |
+ | | | *Advanced Use Only:* If you need to adjust this |
+ | | | value, it probably indicates the presence of a bug.|
+ +---------------------------+---------+----------------------------------------------------+
+ | join-finalization-timeout | 30min | .. index:: |
+ | | | pair: cluster option; join-finalization-timeout |
+ | | | |
+ | | | *Advanced Use Only:* If you need to adjust this |
+ | | | value, it probably indicates the presence of a bug.|
+ +---------------------------+---------+----------------------------------------------------+
+ | transition-delay | 0s | .. index:: |
+ | | | pair: cluster option; transition-delay |
+ | | | |
+ | | | *Advanced Use Only:* Delay cluster recovery for |
+ | | | the configured interval to allow for additional or |
+ | | | related events to occur. This can be useful if |
+ | | | your configuration is sensitive to the order in |
+ | | | which ping updates arrive. Enabling this option |
+ | | | will slow down cluster recovery under all |
+ | | | conditions. |
+ +---------------------------+---------+----------------------------------------------------+
diff --git a/doc/sphinx/Pacemaker_Explained/resources.rst b/doc/sphinx/Pacemaker_Explained/resources.rst
new file mode 100644
index 0000000..3b7520f
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Explained/resources.rst
@@ -0,0 +1,1074 @@
+.. _resource:
+
+Cluster Resources
+-----------------
+
+.. _s-resource-primitive:
+
+What is a Cluster Resource?
+###########################
+
+.. index::
+ single: resource
+
+A *resource* is a service managed by Pacemaker. The simplest type of resource,
+a *primitive*, is described in this chapter. More complex forms, such as groups
+and clones, are described in later chapters.
+
+Every primitive has a *resource agent* that provides Pacemaker a standardized
+interface for managing the service. This allows Pacemaker to be agnostic about
+the services it manages. Pacemaker doesn't need to understand how the service
+works because it relies on the resource agent to do the right thing when asked.
+
+Every resource has a *class* specifying the standard that its resource agent
+follows, and a *type* identifying the specific service being managed.
+
+
+.. _s-resource-supported:
+
+.. index::
+ single: resource; class
+
+Resource Classes
+################
+
+Pacemaker supports several classes, or standards, of resource agents:
+
+* OCF
+* LSB
+* Systemd
+* Service
+* Fencing
+* Nagios *(deprecated since 2.1.6)*
+* Upstart *(deprecated since 2.1.0)*
+
+
+.. index::
+ single: resource; OCF
+ single: OCF; resources
+ single: Open Cluster Framework; resources
+
+Open Cluster Framework
+______________________
+
+The Open Cluster Framework (OCF) Resource Agent API is a ClusterLabs
+standard for managing services. It is the preferred class, since it is
+specifically designed for use in a Pacemaker cluster.
+
+OCF agents are scripts that support a variety of actions including ``start``,
+``stop``, and ``monitor``. They may accept parameters, making them more
+flexible than other classes. The number and purpose of parameters is left to
+the agent, which advertises them via the ``meta-data`` action.
+
+Unlike other classes, OCF agents have a *provider* as well as a class and type.
+
+For more information, see the "Resource Agents" chapter of *Pacemaker
+Administration* and the `OCF standard
+<https://github.com/ClusterLabs/OCF-spec/tree/main/ra>`_.
+
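+For example, a primitive using the ``ocf:heartbeat:IPaddr2`` agent to manage a
+floating IP address might be defined like this (the ``id`` values and the
+address are illustrative):
+
+.. code-block:: xml
+
+   <primitive id="Public-IP" class="ocf" provider="heartbeat" type="IPaddr2">
+     <instance_attributes id="Public-IP-params">
+       <nvpair id="Public-IP-ip" name="ip" value="192.0.2.2"/>
+     </instance_attributes>
+   </primitive>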
+
+.. _s-resource-supported-systemd:
+
+.. index::
+ single: Resource; Systemd
+ single: Systemd; resources
+
+Systemd
+_______
+
+Most Linux distributions use `Systemd
+<http://www.freedesktop.org/wiki/Software/systemd>`_ for system initialization
+and service management. *Unit files* specify how to manage services and are
+usually provided by the distribution.
+
+Pacemaker can manage systemd services. Simply create a resource with
+``systemd`` as the resource class and the unit file name as the resource type.
+Do *not* run ``systemctl enable`` on the unit.
+
+.. important::
+
+ Make sure that any systemd services to be controlled by the cluster are
+ *not* enabled to start at boot.
+
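+For example, assuming the node provides an ``httpd.service`` unit, a
+corresponding resource could be declared with a sketch like this:
+
+.. code-block:: xml
+
+   <primitive id="web-server" class="systemd" type="httpd"/>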
+
+.. index::
+ single: resource; LSB
+ single: LSB; resources
+ single: Linux Standard Base; resources
+
+Linux Standard Base
+___________________
+
+*LSB* resource agents, also known as `SysV-style init scripts
+<https://en.wikipedia.org/wiki/Init#SysV-style_init_scripts>`_, are scripts
+that provide start, stop, and status actions for a service.
+
+They are provided by some operating system distributions. If a full path is not
+given, they are assumed to be located in a directory specified when your
+Pacemaker software was built (usually ``/etc/init.d``).
+
+In order to be used with Pacemaker, they must conform to the `LSB specification
+<http://refspecs.linux-foundation.org/LSB_5.0.0/LSB-Core-generic/LSB-Core-generic/iniscrptact.html>`_
+as it relates to init scripts.
+
+.. warning::
+
+ Some LSB scripts do not fully comply with the standard. For details on how
+ to check whether your script is LSB-compatible, see the "Resource Agents"
+ chapter of `Pacemaker Administration`. Common problems include:
+
+ * Not implementing the ``status`` action
+ * Not observing the correct exit status codes
+ * Starting a started resource returns an error
+ * Stopping a stopped resource returns an error
+
+.. important::
+
+ Make sure the host is *not* configured to start any LSB services at boot
+ that will be controlled by the cluster.
+
+
+.. index::
+ single: Resource; System Services
+ single: System Service; resources
+
+System Services
+_______________
+
+Since there are various types of system services (``systemd``,
+``upstart``, and ``lsb``), Pacemaker supports a special ``service`` alias which
+intelligently figures out which one applies to a given cluster node.
+
+This is particularly useful when the cluster contains a mix of
+``systemd``, ``upstart``, and ``lsb``.
+
+In order, Pacemaker will try to find the named service as:
+
+* an LSB init script
+* a Systemd unit file
+* an Upstart job
+
+
+.. index::
+ single: Resource; STONITH
+ single: STONITH; resources
+
+STONITH
+_______
+
+The ``stonith`` class is used for managing fencing devices, discussed later in
+:ref:`fencing`.
+
+
+.. index::
+ single: Resource; Nagios Plugins
+ single: Nagios Plugins; resources
+
+Nagios Plugins
+______________
+
+Nagios Plugins are a way to monitor services. Pacemaker can use these as
+resources, to react to a change in the service's status.
+
+To use plugins as resources, Pacemaker must have been built with support, and
+OCF-style meta-data for the plugins must be installed on nodes that can run
+them. Meta-data for several common plugins is provided by the
+`nagios-agents-metadata <https://github.com/ClusterLabs/nagios-agents-metadata>`_
+project.
+
+The supported parameters for such a resource are the same as the long options
+of the plugin.
+
+Start and monitor actions for plugin resources are implemented as invoking the
+plugin. A plugin result of "OK" (0) is treated as success, a result of "WARN"
+(1) is treated as a successful but degraded service, and any other result is
+considered a failure.
+
+Restarting a plugin does not change the status of the service it monitors, so
+a plugin resource will not recover by being restarted. Using plugin resources
+alone therefore does not make sense with ``on-fail`` set (or left to its
+default) to ``restart``. Another value could make sense, for example, if you
+want to fence or standby nodes that cannot reach some external service.
+
+A more common use case for plugin resources is to configure them with a
+``container`` meta-attribute set to the name of another resource that actually
+makes the service available, such as a virtual machine or container.
+
+With ``container`` set, the plugin resource will automatically be colocated
+with the containing resource and ordered after it, and the containing resource
+will be considered failed if the plugin resource fails. This allows monitoring
+of a service inside a virtual machine or container, with recovery of the
+virtual machine or container if the service fails.
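+
+As a sketch (the resource names and choice of plugin are illustrative), a
+plugin resource monitoring SSH availability inside a virtual machine resource
+named ``vm1`` might look like:
+
+.. topic:: A hypothetical Nagios plugin resource with a ``container``
+           meta-attribute
+
+   .. code-block:: xml
+
+      <primitive id="vm1-sshd" class="nagios" type="check_ssh">
+         <instance_attributes id="vm1-sshd-params">
+            <nvpair id="vm1-sshd-hostname" name="hostname" value="vm1"/>
+         </instance_attributes>
+         <meta_attributes id="vm1-sshd-meta">
+            <nvpair id="vm1-sshd-container" name="container" value="vm1"/>
+         </meta_attributes>
+      </primitive>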
+
+.. warning::
+
+ Nagios support is deprecated in Pacemaker. Support will be dropped entirely
+ at the next major release of Pacemaker.
+
+ For monitoring a service inside a virtual machine or container, the
+ recommended alternative is to configure the virtual machine as a guest node
+ or the container as a :ref:`bundle <s-resource-bundle>`. For other use
+ cases, or when the virtual machine or container image cannot be modified,
+ the recommended alternative is to write a custom OCF agent for the service
+ (which may even call the Nagios plugin as part of its status action).
+
+
+.. index::
+ single: Resource; Upstart
+ single: Upstart; resources
+
+Upstart
+_______
+
+Some Linux distributions previously used `Upstart
+<https://upstart.ubuntu.com/>`_ for system initialization and service
+management. Pacemaker is able to manage services using Upstart if the local
+system supports them and support was enabled when your Pacemaker software was
+built.
+
+The *jobs* that specify how services are managed are usually provided by the
+operating system distribution.
+
+.. important::
+
+ Make sure the host is *not* configured to start any Upstart services at boot
+ that will be controlled by the cluster.
+
+.. warning::
+
+ Upstart support is deprecated in Pacemaker. Upstart is no longer actively
+ maintained, and test platforms for it are no longer readily usable. Support
+ will be dropped entirely at the next major release of Pacemaker.
+
+
+.. _primitive-resource:
+
+Resource Properties
+###################
+
+These values tell the cluster which resource agent to use for the resource,
+where to find that resource agent, and which standard it conforms to.
+
+.. table:: **Properties of a Primitive Resource**
+ :widths: 1 4
+
+ +-------------+------------------------------------------------------------------+
+ | Field | Description |
+ +=============+==================================================================+
+ | id | .. index:: |
+ | | single: id; resource |
+ | | single: resource; property, id |
+ | | |
+ | | Your name for the resource |
+ +-------------+------------------------------------------------------------------+
+ | class | .. index:: |
+ | | single: class; resource |
+ | | single: resource; property, class |
+ | | |
+ | | The standard the resource agent conforms to. Allowed values: |
+ | | ``lsb``, ``ocf``, ``service``, ``stonith``, ``systemd``, |
+ | | ``nagios`` *(deprecated since 2.1.6)*, and ``upstart`` |
+ | | *(deprecated since 2.1.0)* |
+ +-------------+------------------------------------------------------------------+
+ | description | .. index:: |
+ | | single: description; resource |
+ | | single: resource; property, description |
+ | | |
+ | | A description of the Resource Agent, intended for local use. |
+ | | E.g. ``IP address for website`` |
+ +-------------+------------------------------------------------------------------+
+ | type | .. index:: |
+ | | single: type; resource |
+ | | single: resource; property, type |
+ | | |
+ | | The name of the Resource Agent you wish to use. E.g. |
+ | | ``IPaddr`` or ``Filesystem`` |
+ +-------------+------------------------------------------------------------------+
+ | provider | .. index:: |
+ | | single: provider; resource |
+ | | single: resource; property, provider |
+ | | |
+ | | The OCF spec allows multiple vendors to supply the same resource |
+ | | agent. To use the OCF resource agents supplied by the Heartbeat |
+ | | project, you would specify ``heartbeat`` here. |
+ +-------------+------------------------------------------------------------------+
+
+The XML definition of a resource can be queried with the **crm_resource** tool.
+For example:
+
+.. code-block:: none
+
+ # crm_resource --resource Email --query-xml
+
+might produce:
+
+.. topic:: A system resource definition
+
+ .. code-block:: xml
+
+ <primitive id="Email" class="service" type="exim"/>
+
+.. note::
+
+ One of the main drawbacks of system service (LSB, systemd, or Upstart)
+ resources is that they do not allow any parameters!
+
+.. topic:: An OCF resource definition
+
+ .. code-block:: xml
+
+ <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
+ <instance_attributes id="Public-IP-params">
+ <nvpair id="Public-IP-ip" name="ip" value="192.0.2.2"/>
+ </instance_attributes>
+ </primitive>
+
+.. _resource_options:
+
+Resource Options
+################
+
+Resources have two types of options: *meta-attributes* and *instance attributes*.
+Meta-attributes apply to any type of resource, while instance attributes
+are specific to each resource agent.
+
+Resource Meta-Attributes
+________________________
+
+Meta-attributes are used by the cluster to decide how a resource should
+behave and can be easily set using the ``--meta`` option of the
+**crm_resource** command.
+
+.. table:: **Meta-attributes of a Primitive Resource**
+ :class: longtable
+ :widths: 2 2 3
+
+ +----------------------------+----------------------------------+------------------------------------------------------+
+ | Field | Default | Description |
+ +============================+==================================+======================================================+
+ | priority | 0 | .. index:: |
+ | | | single: priority; resource option |
+ | | | single: resource; option, priority |
+ | | | |
+ | | | If not all resources can be active, the cluster |
+ | | | will stop lower priority resources in order to |
+ | | | keep higher priority ones active. |
+ +----------------------------+----------------------------------+------------------------------------------------------+
+ | critical | true | .. index:: |
+ | | | single: critical; resource option |
+ | | | single: resource; option, critical |
+ | | | |
+ | | | Use this value as the default for ``influence`` in |
+ | | | all :ref:`colocation constraints |
+ | | | <s-resource-colocation>` involving this resource, |
+ | | | as well as the implicit colocation constraints |
+ | | | created if this resource is in a :ref:`group |
+ | | | <group-resources>`. For details, see |
+ | | | :ref:`s-coloc-influence`. *(since 2.1.0)* |
+ +----------------------------+----------------------------------+------------------------------------------------------+
+ | target-role | Started | .. index:: |
+ | | | single: target-role; resource option |
+ | | | single: resource; option, target-role |
+ | | | |
+ | | | What state should the cluster attempt to keep this |
+ | | | resource in? Allowed values: |
+ | | | |
+ | | | * ``Stopped:`` Force the resource to be stopped |
+ | | | * ``Started:`` Allow the resource to be started |
+ | | | (and in the case of :ref:`promotable clone |
+ | | | resources <s-resource-promotable>`, promoted |
+ | | | if appropriate) |
+ | | | * ``Unpromoted:`` Allow the resource to be started, |
+ | | | but only in the unpromoted role if the resource is |
+ | | | :ref:`promotable <s-resource-promotable>` |
+ | | | * ``Promoted:`` Equivalent to ``Started`` |
+ +----------------------------+----------------------------------+------------------------------------------------------+
+ | is-managed | TRUE | .. index:: |
+ | | | single: is-managed; resource option |
+ | | | single: resource; option, is-managed |
+ | | | |
+ | | | Is the cluster allowed to start and stop |
+ | | | the resource? Allowed values: ``true``, ``false`` |
+ +----------------------------+----------------------------------+------------------------------------------------------+
+ | maintenance | FALSE | .. index:: |
+ | | | single: maintenance; resource option |
+ | | | single: resource; option, maintenance |
+ | | | |
+ | | | Similar to the ``maintenance-mode`` |
+ | | | :ref:`cluster option <cluster_options>`, but for |
+ | | | a single resource. If true, the resource will not |
+ | | | be started, stopped, or monitored on any node. This |
+ | | | differs from ``is-managed`` in that monitors will |
+ | | | not be run. Allowed values: ``true``, ``false`` |
+ +----------------------------+----------------------------------+------------------------------------------------------+
+ | resource-stickiness | 1 for individual clone | .. _resource-stickiness: |
+ | | instances, 0 for all | |
+ | | other resources | .. index:: |
+ | | | single: resource-stickiness; resource option |
+ | | | single: resource; option, resource-stickiness |
+ | | | |
+ | | | A score that will be added to the current node when |
+ | | | a resource is already active. This allows running |
+ | | | resources to stay where they are, even if they |
+ | | | would be placed elsewhere if they were being |
+ | | | started from a stopped state. |
+ +----------------------------+----------------------------------+------------------------------------------------------+
+ | requires | ``quorum`` for resources | .. _requires: |
+ | | with a ``class`` of ``stonith``, | |
+ | | otherwise ``unfencing`` if | .. index:: |
+ | | unfencing is active in the | single: requires; resource option |
+ | | cluster, otherwise ``fencing`` | single: resource; option, requires |
+ | | if ``stonith-enabled`` is true, | |
+ | | otherwise ``quorum`` | Conditions under which the resource can be |
+ | | | started. Allowed values: |
+ | | | |
+ | | | * ``nothing:`` can always be started |
+ | | | * ``quorum:`` The cluster can only start this |
+ | | | resource if a majority of the configured nodes |
+ | | | are active |
+ | | | * ``fencing:`` The cluster can only start this |
+ | | | resource if a majority of the configured nodes |
+ | | | are active *and* any failed or unknown nodes |
+ | | | have been :ref:`fenced <fencing>` |
+ | | | * ``unfencing:`` The cluster can only start this |
+ | | | resource if a majority of the configured nodes |
+ | | | are active *and* any failed or unknown nodes have |
+ | | | been fenced *and* only on nodes that have been |
+ | | | :ref:`unfenced <unfencing>` |
+ +----------------------------+----------------------------------+------------------------------------------------------+
+ | migration-threshold | INFINITY | .. index:: |
+ | | | single: migration-threshold; resource option |
+ | | | single: resource; option, migration-threshold |
+ | | | |
+ | | | How many failures may occur for this resource on |
+ | | | a node, before this node is marked ineligible to |
+ | | | host this resource. A value of 0 indicates that this |
+ | | | feature is disabled (the node will never be marked |
+ | | | ineligible); by contrast, the cluster treats |
+ | | | INFINITY (the default) as a very large but finite |
+ | | | number. This option has an effect only if the |
+ | | | failed operation specifies ``on-fail`` as |
+ | | | ``restart`` (the default), and additionally for |
+ | | | failed ``start`` operations, if the cluster |
+ | | | property ``start-failure-is-fatal`` is ``false``. |
+ +----------------------------+----------------------------------+------------------------------------------------------+
+ | failure-timeout | 0 | .. index:: |
+ | | | single: failure-timeout; resource option |
+ | | | single: resource; option, failure-timeout |
+ | | | |
+ | | | How many seconds to wait before acting as if the |
+ | | | failure had not occurred, potentially allowing the |
+ | | | resource to move back to the node on which it failed. |
+ | | | A value of 0 indicates that this feature is |
+ | | | disabled. |
+ +----------------------------+----------------------------------+------------------------------------------------------+
+ | multiple-active | stop_start | .. index:: |
+ | | | single: multiple-active; resource option |
+ | | | single: resource; option, multiple-active |
+ | | | |
+ | | | What should the cluster do if it ever finds the |
+ | | | resource active on more than one node? Allowed |
+ | | | values: |
+ | | | |
+ | | | * ``block``: mark the resource as unmanaged |
+ | | | * ``stop_only``: stop all active instances and |
+ | | | leave them that way |
+ | | | * ``stop_start``: stop all active instances and |
+ | | | start the resource in one location only |
+ | | | * ``stop_unexpected``: stop all active instances |
+ | | | except where the resource should be active (this |
+ | | | should be used only when extra instances are not |
+ | | | expected to disrupt existing instances, and the |
+ | | | resource agent's monitor of an existing instance |
+ | | | is capable of detecting any problems that could be |
+ | | | caused; note that any resources ordered after this |
+ | | | will still need to be restarted) *(since 2.1.3)* |
+ +----------------------------+----------------------------------+------------------------------------------------------+
+ | allow-migrate | TRUE for ocf:pacemaker:remote | Whether the cluster should try to "live migrate" |
+ | | resources, FALSE otherwise | this resource when it needs to be moved (see |
+ | | | :ref:`live-migration`) |
+ +----------------------------+----------------------------------+------------------------------------------------------+
+ | allow-unhealthy-nodes | FALSE | Whether the resource should be able to run on a node |
+ | | | even if the node's health score would otherwise |
+ | | | prevent it (see :ref:`node-health`) *(since 2.1.3)* |
+ +----------------------------+----------------------------------+------------------------------------------------------+
+ | container-attribute-target | | Specific to bundle resources; see |
+ | | | :ref:`s-bundle-attributes` |
+ +----------------------------+----------------------------------+------------------------------------------------------+
+ | remote-node | | The name of the Pacemaker Remote guest node this |
+ | | | resource is associated with, if any. If |
+ | | | specified, this both enables the resource as a |
+ | | | guest node and defines the unique name used to |
+ | | | identify the guest node. The guest must be |
+ | | | configured to run the Pacemaker Remote daemon |
+ | | | when it is started. **WARNING:** This value |
+ | | | cannot overlap with any resource or node IDs. |
+ +----------------------------+----------------------------------+------------------------------------------------------+
+ | remote-port | 3121 | If ``remote-node`` is specified, the port on the |
+ | | | guest used for its Pacemaker Remote connection. |
+ | | | The Pacemaker Remote daemon on the guest must |
+ | | | be configured to listen on this port. |
+ +----------------------------+----------------------------------+------------------------------------------------------+
+ | remote-addr | value of ``remote-node`` | If ``remote-node`` is specified, the IP |
+ | | | address or hostname used to connect to the |
+ | | | guest via Pacemaker Remote. The Pacemaker Remote |
+ | | | daemon on the guest must be configured to accept |
+ | | | connections on this address. |
+ +----------------------------+----------------------------------+------------------------------------------------------+
+ | remote-connect-timeout | 60s | If ``remote-node`` is specified, how long before |
+ | | | a pending guest connection will time out. |
+ +----------------------------+----------------------------------+------------------------------------------------------+
+
+As an example of setting resource options, if you performed the following
+commands on an LSB Email resource:
+
+.. code-block:: none
+
+ # crm_resource --meta --resource Email --set-parameter priority --parameter-value 100
+ # crm_resource -m -r Email -p multiple-active -v block
+
+the resulting resource definition might be:
+
+.. topic:: An LSB resource with cluster options
+
+ .. code-block:: xml
+
+ <primitive id="Email" class="lsb" type="exim">
+ <meta_attributes id="Email-meta_attributes">
+ <nvpair id="Email-meta_attributes-priority" name="priority" value="100"/>
+ <nvpair id="Email-meta_attributes-multiple-active" name="multiple-active" value="block"/>
+ </meta_attributes>
+ </primitive>
+
+In addition to the cluster-defined meta-attributes described above, you may
+also configure arbitrary meta-attributes of your own choosing. Most commonly,
+this would be done for use in :ref:`rules <rules>`. For example, an IT department
+might define a custom meta-attribute to indicate which company department each
+resource is intended for. To reduce the chance of name collisions with
+cluster-defined meta-attributes added in the future, it is recommended to use
+a unique, organization-specific prefix for such attributes.
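+
+As an illustration (the attribute name and value are hypothetical), such a
+custom meta-attribute might be configured like this:
+
+.. topic:: A resource with a custom organization-specific meta-attribute
+
+   .. code-block:: xml
+
+      <primitive id="Email" class="lsb" type="exim">
+         <meta_attributes id="Email-meta_attributes">
+            <nvpair id="Email-meta_attributes-department"
+                    name="example-com-department" value="accounting"/>
+         </meta_attributes>
+      </primitive>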
+
+.. _s-resource-defaults:
+
+Setting Global Defaults for Resource Meta-Attributes
+____________________________________________________
+
+To set a default value for a resource option, add it to the
+``rsc_defaults`` section with ``crm_attribute``. For example,
+
+.. code-block:: none
+
+ # crm_attribute --type rsc_defaults --name is-managed --update false
+
+would prevent the cluster from starting or stopping any of the
+resources in the configuration (unless of course the individual
+resources were specifically enabled by having their ``is-managed`` set to
+``true``).
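+
+The resulting ``rsc_defaults`` section of the CIB might look like this (the
+``id`` values are generated by the tool and may differ):
+
+.. topic:: A possible ``rsc_defaults`` section
+
+   .. code-block:: xml
+
+      <rsc_defaults>
+         <meta_attributes id="rsc_defaults-meta_attributes">
+            <nvpair id="rsc_defaults-meta_attributes-is-managed"
+                    name="is-managed" value="false"/>
+         </meta_attributes>
+      </rsc_defaults>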
+
+Resource Instance Attributes
+____________________________
+
+The resource agents of some resource classes (``lsb``, ``systemd``, and
+``upstart`` *not* among them) can be given parameters, which determine how
+they behave and which instance of a service they control.
+
+If your resource agent supports parameters, you can add them with the
+``crm_resource`` command. For example,
+
+.. code-block:: none
+
+ # crm_resource --resource Public-IP --set-parameter ip --parameter-value 192.0.2.2
+
+would create an entry in the resource like this:
+
+.. topic:: An example OCF resource with instance attributes
+
+ .. code-block:: xml
+
+ <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
+ <instance_attributes id="params-public-ip">
+ <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/>
+ </instance_attributes>
+ </primitive>
+
+For an OCF resource, the result would be an environment variable
+called ``OCF_RESKEY_ip`` with a value of ``192.0.2.2``.
+
+The list of instance attributes supported by an OCF resource agent can be
+found by calling the resource agent with the ``meta-data`` command.
+The output contains an XML description of all the supported
+attributes, their purpose and default values.
+
+.. topic:: Displaying the metadata for the Dummy resource agent template
+
+ .. code-block:: none
+
+ # export OCF_ROOT=/usr/lib/ocf
+ # $OCF_ROOT/resource.d/pacemaker/Dummy meta-data
+
+ .. code-block:: xml
+
+ <?xml version="1.0"?>
+ <!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
+ <resource-agent name="Dummy" version="2.0">
+ <version>1.1</version>
+
+ <longdesc lang="en">
+ This is a dummy OCF resource agent. It does absolutely nothing except keep track
+ of whether it is running or not, and can be configured so that actions fail or
+ take a long time. Its purpose is primarily for testing, and to serve as a
+ template for resource agent writers.
+ </longdesc>
+ <shortdesc lang="en">Example stateless resource agent</shortdesc>
+
+ <parameters>
+ <parameter name="state" unique-group="state">
+ <longdesc lang="en">
+ Location to store the resource state in.
+ </longdesc>
+ <shortdesc lang="en">State file</shortdesc>
+ <content type="string" default="/var/run/Dummy-RESOURCE_ID.state" />
+ </parameter>
+
+ <parameter name="passwd" reloadable="1">
+ <longdesc lang="en">
+ Fake password field
+ </longdesc>
+ <shortdesc lang="en">Password</shortdesc>
+ <content type="string" default="" />
+ </parameter>
+
+ <parameter name="fake" reloadable="1">
+ <longdesc lang="en">
+ Fake attribute that can be changed to cause a reload
+ </longdesc>
+ <shortdesc lang="en">Fake attribute that can be changed to cause a reload</shortdesc>
+ <content type="string" default="dummy" />
+ </parameter>
+
+ <parameter name="op_sleep" reloadable="1">
+ <longdesc lang="en">
+ Number of seconds to sleep during operations. This can be used to test how
+ the cluster reacts to operation timeouts.
+ </longdesc>
+ <shortdesc lang="en">Operation sleep duration in seconds.</shortdesc>
+ <content type="string" default="0" />
+ </parameter>
+
+ <parameter name="fail_start_on" reloadable="1">
+ <longdesc lang="en">
+ Start, migrate_from, and reload-agent actions will return failure if running on
+ the host specified here, but the resource will run successfully anyway (future
+ monitor calls will find it running). This can be used to test on-fail=ignore.
+ </longdesc>
+ <shortdesc lang="en">Report bogus start failure on specified host</shortdesc>
+ <content type="string" default="" />
+ </parameter>
+ <parameter name="envfile" reloadable="1">
+ <longdesc lang="en">
+ If this is set, the environment will be dumped to this file for every call.
+ </longdesc>
+ <shortdesc lang="en">Environment dump file</shortdesc>
+ <content type="string" default="" />
+ </parameter>
+
+ </parameters>
+
+ <actions>
+ <action name="start" timeout="20s" />
+ <action name="stop" timeout="20s" />
+ <action name="monitor" timeout="20s" interval="10s" depth="0"/>
+ <action name="reload" timeout="20s" />
+ <action name="reload-agent" timeout="20s" />
+ <action name="migrate_to" timeout="20s" />
+ <action name="migrate_from" timeout="20s" />
+ <action name="validate-all" timeout="20s" />
+ <action name="meta-data" timeout="5s" />
+ </actions>
+ </resource-agent>
+
+.. index::
+ single: resource; action
+ single: resource; operation
+
+.. _operation:
+
+Resource Operations
+###################
+
+*Operations* are actions the cluster can perform on a resource by calling the
+resource agent. Resource agents must support certain common operations such as
+start, stop, and monitor, and may implement any others.
+
+Operations may be explicitly configured for two purposes: to override defaults
+for options (such as timeout) that the cluster will use whenever it initiates
+the operation, and to run an operation on a recurring basis (for example, to
+monitor the resource for failure).
+
+.. topic:: An OCF resource with a non-default start timeout
+
+ .. code-block:: xml
+
+ <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
+ <operations>
+ <op id="Public-IP-start" name="start" timeout="60s"/>
+ </operations>
+ <instance_attributes id="params-public-ip">
+ <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/>
+ </instance_attributes>
+ </primitive>
+
+Pacemaker identifies operations by a combination of name and interval, so this
+combination must be unique for each resource. That is, you should not configure
+two operations for the same resource with the same name and interval.
+
+.. _operation_properties:
+
+Operation Properties
+____________________
+
+Operation properties may be specified directly in the ``op`` element as
+XML attributes, or in a separate ``meta_attributes`` block as ``nvpair`` elements.
+XML attributes take precedence over ``nvpair`` elements if both are specified.
+
+.. table:: **Properties of an Operation**
+ :class: longtable
+ :widths: 1 2 3
+
+ +----------------+-----------------------------------+-----------------------------------------------------+
+ | Field | Default | Description |
+ +================+===================================+=====================================================+
+ | id | | .. index:: |
+ | | | single: id; action property |
+ | | | single: action; property, id |
+ | | | |
+ | | | A unique name for the operation. |
+ +----------------+-----------------------------------+-----------------------------------------------------+
+ | name | | .. index:: |
+ | | | single: name; action property |
+ | | | single: action; property, name |
+ | | | |
+ | | | The action to perform. This can be any action |
+ | | | supported by the agent; common values include |
+ | | | ``monitor``, ``start``, and ``stop``. |
+ +----------------+-----------------------------------+-----------------------------------------------------+
+ | interval | 0 | .. index:: |
+ | | | single: interval; action property |
+ | | | single: action; property, interval |
+ | | | |
+ | | | How frequently (in seconds) to perform the |
+ | | | operation. A value of 0 means "when needed". |
+ | | | A positive value defines a *recurring action*, |
+ | | | which is typically used with |
+ | | | :ref:`monitor <s-resource-monitoring>`. |
+ +----------------+-----------------------------------+-----------------------------------------------------+
+ | timeout | | .. index:: |
+ | | | single: timeout; action property |
+ | | | single: action; property, timeout |
+ | | | |
+ | | | How long to wait before declaring the action |
+ | | | has failed |
+ +----------------+-----------------------------------+-----------------------------------------------------+
+ | on-fail | Varies by action: | .. index:: |
+ | | | single: on-fail; action property |
+ | | * ``stop``: ``fence`` if | single: action; property, on-fail |
+ | | ``stonith-enabled`` is true | |
+ | | or ``block`` otherwise | The action to take if this action ever fails. |
+ | | * ``demote``: ``on-fail`` of the | Allowed values: |
+ | | ``monitor`` action with | |
+ | | ``role`` set to ``Promoted``, | * ``ignore:`` Pretend the resource did not fail. |
+ | | if present, enabled, and | * ``block:`` Don't perform any further operations |
+ | | configured to a value other | on the resource. |
+ | | than ``demote``, or ``restart`` | * ``stop:`` Stop the resource and do not start |
+ | | otherwise | it elsewhere. |
+ | | * all other actions: ``restart`` | * ``demote:`` Demote the resource, without a |
+ | | | full restart. This is valid only for ``promote`` |
+ | | | actions, and for ``monitor`` actions with both |
+ | | | a nonzero ``interval`` and ``role`` set to |
+ | | | ``Promoted``; for any other action, a |
+ | | | configuration error will be logged, and the |
+ | | | default behavior will be used. *(since 2.0.5)* |
+ | | | * ``restart:`` Stop the resource and start it |
+ | | | again (possibly on a different node). |
+ | | | * ``fence:`` STONITH the node on which the |
+ | | | resource failed. |
+ | | | * ``standby:`` Move *all* resources away from the |
+ | | | node on which the resource failed. |
+ +----------------+-----------------------------------+-----------------------------------------------------+
+ | enabled | TRUE | .. index:: |
+ | | | single: enabled; action property |
+ | | | single: action; property, enabled |
+ | | | |
+ | | | If ``false``, ignore this operation definition. |
+ | | | This is typically used to pause a particular |
+ | | | recurring ``monitor`` operation; for instance, it |
+ | | | can complement the respective resource being |
+ | | | unmanaged (``is-managed=false``), as this alone |
+ | | | will :ref:`not block any configured monitoring |
+ | | | <s-monitoring-unmanaged>`. Disabling the operation |
+ | | | does not suppress all actions of the given type. |
+ | | | Allowed values: ``true``, ``false``. |
+ +----------------+-----------------------------------+-----------------------------------------------------+
+ | record-pending | TRUE | .. index:: |
+ | | | single: record-pending; action property |
+ | | | single: action; property, record-pending |
+ | | | |
+ | | | If ``true``, the intention to perform the operation |
+ | | | is recorded so that GUIs and CLI tools can indicate |
+ | | | that an operation is in progress. This is best set |
+ | | | as an *operation default* |
+ | | | (see :ref:`s-operation-defaults`). Allowed values: |
+ | | | ``true``, ``false``. |
+ +----------------+-----------------------------------+-----------------------------------------------------+
+ | role | | .. index:: |
+ | | | single: role; action property |
+ | | | single: action; property, role |
+ | | | |
+ | | | Run the operation only on node(s) that the cluster |
+ | | | thinks should be in the specified role. This only |
+ | | | makes sense for recurring ``monitor`` operations. |
+ | | | Allowed (case-sensitive) values: ``Stopped``, |
+ | | | ``Started``, and in the case of :ref:`promotable |
+ | | | clone resources <s-resource-promotable>`, |
+ | | | ``Unpromoted`` and ``Promoted``. |
+ +----------------+-----------------------------------+-----------------------------------------------------+
+
+.. note::
+
+ When ``on-fail`` is set to ``demote``, recovery from failure by a successful
+ demote causes the cluster to recalculate whether and where a new instance
+ should be promoted. The node with the failure is eligible, so if promotion
+ scores have not changed, it will be promoted again.
+
+ There is no direct equivalent of ``migration-threshold`` for the promoted
+ role, but the same effect can be achieved with a location constraint using a
+ :ref:`rule <rules>` with a node attribute expression for the resource's fail
+ count.
+
+ For example, to immediately ban the promoted role from a node with any
+ failed promote or promoted instance monitor:
+
+ .. code-block:: xml
+
+ <rsc_location id="loc1" rsc="my_primitive">
+ <rule id="rule1" score="-INFINITY" role="Promoted" boolean-op="or">
+ <expression id="expr1" attribute="fail-count-my_primitive#promote_0"
+ operation="gte" value="1"/>
+ <expression id="expr2" attribute="fail-count-my_primitive#monitor_10000"
+ operation="gte" value="1"/>
+ </rule>
+ </rsc_location>
+
+ This example assumes that there is a promotable clone of the ``my_primitive``
+ resource (note that the primitive name, not the clone name, is used in the
+ rule), and that there is a recurring 10-second-interval monitor configured for
+ the promoted role (fail count attributes specify the interval in
+ milliseconds).
+
+.. _s-resource-monitoring:
+
+Monitoring Resources for Failure
+________________________________
+
+When Pacemaker first starts a resource, it runs one-time ``monitor`` operations
+(referred to as *probes*) to ensure the resource is running where it's
+supposed to be, and not running where it's not supposed to be. (This behavior
+can be affected by the ``resource-discovery`` location constraint property.)
+
+Other than those initial probes, Pacemaker will *not* (by default) check that
+the resource continues to stay healthy [#]_. You must configure ``monitor``
+operations explicitly to perform these checks.
+
+.. topic:: An OCF resource with a recurring health check
+
+ .. code-block:: xml
+
+ <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
+ <operations>
+ <op id="Public-IP-start" name="start" timeout="60s"/>
+ <op id="Public-IP-monitor" name="monitor" interval="60s"/>
+ </operations>
+ <instance_attributes id="params-public-ip">
+ <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/>
+ </instance_attributes>
+ </primitive>
+
+By default, a ``monitor`` operation will ensure that the resource is running
+where it is supposed to be. The ``role`` operation property can be used for
+further checking.
+
+For example, if a resource has one ``monitor`` operation with
+``interval=10 role=Started`` and a second ``monitor`` operation with
+``interval=11 role=Stopped``, the cluster will run the first monitor on any nodes
+it thinks *should* be running the resource, and the second monitor on any nodes
+that it thinks *should not* be running the resource (for the truly paranoid,
+who want to know when an administrator manually starts a service by mistake).
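+
+A sketch of such a pair of monitors (the resource and ids here are
+illustrative):
+
+.. code-block:: xml
+
+   <primitive id="my-rsc" class="ocf" type="Dummy" provider="pacemaker">
+     <operations>
+       <op id="my-rsc-monitor-started" name="monitor" interval="10s" role="Started"/>
+       <op id="my-rsc-monitor-stopped" name="monitor" interval="11s" role="Stopped"/>
+     </operations>
+   </primitive>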
+
+.. note::
+
+ Currently, monitors with ``role=Stopped`` are not implemented for
+ :ref:`clone <s-resource-clone>` resources.
+
+.. _s-monitoring-unmanaged:
+
+Monitoring Resources When Administration is Disabled
+____________________________________________________
+
+Recurring ``monitor`` operations behave differently under various administrative
+settings:
+
+* When a resource is unmanaged (by setting ``is-managed=false``): No monitors
+ will be stopped.
+
+ If the unmanaged resource is stopped on a node where the cluster thinks it
+ should be running, the cluster will detect and report that it is not, but it
+ will not consider the monitor failed, and will not try to start the resource
+ until it is managed again.
+
+  Starting the unmanaged resource on a different node is strongly discouraged.
+  It will, at a minimum, cause the cluster to consider the resource failed,
+  and may require the resource's ``target-role`` to be set to ``Stopped`` and
+  then ``Started`` in order to recover it.
+
+* When a resource is put into maintenance mode (by setting
+ ``maintenance=true``): The resource will be marked as unmanaged. (This
+ overrides ``is-managed=true``.)
+
+ Additionally, all monitor operations will be stopped, except those specifying
+ ``role`` as ``Stopped`` (which will be newly initiated if appropriate). As
+ with unmanaged resources in general, starting a resource on a node other than
+ where the cluster expects it to be will cause problems.
+
+* When a node is put into standby: All resources will be moved away from the
+ node, and all ``monitor`` operations will be stopped on the node, except those
+ specifying ``role`` as ``Stopped`` (which will be newly initiated if
+ appropriate).
+
+* When a node is put into maintenance mode: All resources that are active on the
+ node will be marked as in maintenance mode. See above for more details.
+
+* When the cluster is put into maintenance mode: All resources in the cluster
+ will be marked as in maintenance mode. See above for more details.
+
+A resource is in maintenance mode if the cluster, the node where the resource
+is active, or the resource itself is configured to be in maintenance mode. If a
+resource is in maintenance mode, then it is also unmanaged. However, if a
+resource is unmanaged, it is not necessarily in maintenance mode.
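+
+For example, maintenance mode can be set at the resource level or
+cluster-wide from the command line (the resource name here is illustrative):
+
+.. code-block:: none
+
+   # crm_resource --resource my-rsc --meta --set-parameter maintenance --parameter-value true
+   # crm_attribute --name maintenance-mode --update true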
+
+.. _s-operation-defaults:
+
+Setting Global Defaults for Operations
+______________________________________
+
+You can change the global default values for operation properties
+in a given cluster. These are defined in an ``op_defaults`` section
+of the CIB's ``configuration`` section, and can be set with
+``crm_attribute``. For example,
+
+.. code-block:: none
+
+ # crm_attribute --type op_defaults --name timeout --update 20s
+
+would default each operation's ``timeout`` to 20 seconds. If an
+operation's definition also includes a value for ``timeout``, then that
+value would be used for that operation instead.
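+
+The command above results in a CIB section similar to the following (the
+``id`` values are generated and may differ):
+
+.. code-block:: xml
+
+   <op_defaults>
+     <meta_attributes id="op_defaults-meta_attributes">
+       <nvpair id="op_defaults-meta_attributes-timeout" name="timeout" value="20s"/>
+     </meta_attributes>
+   </op_defaults>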
+
+When Implicit Operations Take a Long Time
+_________________________________________
+
+The cluster will always perform a number of implicit operations: ``start``,
+``stop``, and a non-recurring ``monitor`` operation used at startup to check
+whether the resource is already active. If the default timeout proves too
+short for one of these, you can create an explicit entry for it and specify a
+longer timeout.
+
+.. topic:: An OCF resource with custom timeouts for its implicit actions
+
+ .. code-block:: xml
+
+ <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
+ <operations>
+ <op id="public-ip-startup" name="monitor" interval="0" timeout="90s"/>
+ <op id="public-ip-start" name="start" interval="0" timeout="180s"/>
+ <op id="public-ip-stop" name="stop" interval="0" timeout="15min"/>
+ </operations>
+ <instance_attributes id="params-public-ip">
+ <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/>
+ </instance_attributes>
+ </primitive>
+
+Multiple Monitor Operations
+___________________________
+
+Provided no two operations (for a single resource) have the same name
+and interval, you can have as many ``monitor`` operations as you like.
+In this way, you can do a superficial health check every minute and
+progressively more intense ones at longer intervals.
+
+To tell the resource agent what kind of check to perform, you need to
+provide each monitor with a different value for a common parameter.
+The OCF standard creates a special parameter called ``OCF_CHECK_LEVEL``
+for this purpose and dictates that it is "made available to the
+resource agent without the normal ``OCF_RESKEY`` prefix".
+
+Whatever name you choose, you can specify it by adding an
+``instance_attributes`` block to the ``op`` tag. It is up to each
+resource agent to look for the parameter and decide how to use it.
+
+.. topic:: An OCF resource with two recurring health checks, performing
+ different levels of checks specified via ``OCF_CHECK_LEVEL``.
+
+ .. code-block:: xml
+
+ <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
+ <operations>
+ <op id="public-ip-health-60" name="monitor" interval="60">
+ <instance_attributes id="params-public-ip-depth-60">
+ <nvpair id="public-ip-depth-60" name="OCF_CHECK_LEVEL" value="10"/>
+ </instance_attributes>
+ </op>
+ <op id="public-ip-health-300" name="monitor" interval="300">
+ <instance_attributes id="params-public-ip-depth-300">
+ <nvpair id="public-ip-depth-300" name="OCF_CHECK_LEVEL" value="20"/>
+ </instance_attributes>
+ </op>
+ </operations>
+ <instance_attributes id="params-public-ip">
+       <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/>
+ </instance_attributes>
+ </primitive>
+
+Disabling a Monitor Operation
+_____________________________
+
+The easiest way to stop a recurring monitor is simply to delete it.
+However, there may be times when you only want to disable it
+temporarily. In such cases, add ``enabled=false`` to the
+operation's definition.
+
+.. topic:: Example of an OCF resource with a disabled health check
+
+ .. code-block:: xml
+
+ <primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
+ <operations>
+ <op id="public-ip-check" name="monitor" interval="60s" enabled="false"/>
+ </operations>
+ <instance_attributes id="params-public-ip">
+ <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/>
+ </instance_attributes>
+ </primitive>
+
+This can be achieved from the command line by executing:
+
+.. code-block:: none
+
+ # cibadmin --modify --xml-text '<op id="public-ip-check" enabled="false"/>'
+
+Once you've done whatever you needed to do, you can then re-enable it with
+
+.. code-block:: none
+
+ # cibadmin --modify --xml-text '<op id="public-ip-check" enabled="true"/>'
+
+.. [#] Currently, anyway. Automatic monitoring operations may be added in a future
+ version of Pacemaker.
diff --git a/doc/sphinx/Pacemaker_Explained/reusing-configuration.rst b/doc/sphinx/Pacemaker_Explained/reusing-configuration.rst
new file mode 100644
index 0000000..0f34f84
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Explained/reusing-configuration.rst
@@ -0,0 +1,415 @@
+Reusing Parts of the Configuration
+----------------------------------
+
+Pacemaker provides multiple ways to simplify the configuration XML by reusing
+parts of it in multiple places.
+
+Besides simplifying the XML, this also allows you to manipulate multiple
+configuration elements with a single reference.
+
+Reusing Resource Definitions
+############################
+
+If you want to create lots of resources with similar configurations, defining a
+*resource template* simplifies the task. Once defined, it can be referenced in
+primitives or in certain types of constraints.
+
+Configuring Resources with Templates
+____________________________________
+
+The primitives referencing the template will inherit all meta-attributes,
+instance attributes, utilization attributes, and operations defined in the
+template. You can also define specific attributes and operations for any of
+the primitives. If any of these are defined in both the template and the
+primitive, the values defined in the primitive take precedence over those
+defined in the template.
+
+Hence, resource templates help to reduce the amount of configuration work.
+If any changes are needed, they can be done to the template definition and
+will take effect globally in all resource definitions referencing that
+template.
+
+Resource templates have a syntax similar to that of primitives.
+
+.. topic:: Resource template for a migratable Xen virtual machine
+
+ .. code-block:: xml
+
+ <template id="vm-template" class="ocf" provider="heartbeat" type="Xen">
+ <meta_attributes id="vm-template-meta_attributes">
+ <nvpair id="vm-template-meta_attributes-allow-migrate" name="allow-migrate" value="true"/>
+ </meta_attributes>
+ <utilization id="vm-template-utilization">
+ <nvpair id="vm-template-utilization-memory" name="memory" value="512"/>
+ </utilization>
+ <operations>
+ <op id="vm-template-monitor-15s" interval="15s" name="monitor" timeout="60s"/>
+ <op id="vm-template-start-0" interval="0" name="start" timeout="60s"/>
+ </operations>
+ </template>
+
+Once you define a resource template, you can use it in primitives by specifying the
+``template`` property.
+
+.. topic:: Xen primitive resource using a resource template
+
+ .. code-block:: xml
+
+ <primitive id="vm1" template="vm-template">
+ <instance_attributes id="vm1-instance_attributes">
+ <nvpair id="vm1-instance_attributes-name" name="name" value="vm1"/>
+ <nvpair id="vm1-instance_attributes-xmfile" name="xmfile" value="/etc/xen/shared-vm/vm1"/>
+ </instance_attributes>
+ </primitive>
+
+In the example above, the new primitive ``vm1`` will inherit everything from
+``vm-template``. The combined effect of the two examples above is equivalent
+to the following definition:
+
+.. topic:: Equivalent Xen primitive resource not using a resource template
+
+ .. code-block:: xml
+
+ <primitive id="vm1" class="ocf" provider="heartbeat" type="Xen">
+ <meta_attributes id="vm-template-meta_attributes">
+ <nvpair id="vm-template-meta_attributes-allow-migrate" name="allow-migrate" value="true"/>
+ </meta_attributes>
+ <utilization id="vm-template-utilization">
+ <nvpair id="vm-template-utilization-memory" name="memory" value="512"/>
+ </utilization>
+ <operations>
+ <op id="vm-template-monitor-15s" interval="15s" name="monitor" timeout="60s"/>
+ <op id="vm-template-start-0" interval="0" name="start" timeout="60s"/>
+ </operations>
+ <instance_attributes id="vm1-instance_attributes">
+ <nvpair id="vm1-instance_attributes-name" name="name" value="vm1"/>
+ <nvpair id="vm1-instance_attributes-xmfile" name="xmfile" value="/etc/xen/shared-vm/vm1"/>
+ </instance_attributes>
+ </primitive>
+
+If you want to override some attributes or operations, add them to the
+particular primitive's definition.
+
+.. topic:: Xen resource overriding template values
+
+ .. code-block:: xml
+
+ <primitive id="vm2" template="vm-template">
+ <meta_attributes id="vm2-meta_attributes">
+ <nvpair id="vm2-meta_attributes-allow-migrate" name="allow-migrate" value="false"/>
+ </meta_attributes>
+ <utilization id="vm2-utilization">
+ <nvpair id="vm2-utilization-memory" name="memory" value="1024"/>
+ </utilization>
+ <instance_attributes id="vm2-instance_attributes">
+ <nvpair id="vm2-instance_attributes-name" name="name" value="vm2"/>
+ <nvpair id="vm2-instance_attributes-xmfile" name="xmfile" value="/etc/xen/shared-vm/vm2"/>
+ </instance_attributes>
+ <operations>
+ <op id="vm2-monitor-30s" interval="30s" name="monitor" timeout="120s"/>
+ <op id="vm2-stop-0" interval="0" name="stop" timeout="60s"/>
+ </operations>
+ </primitive>
+
+In the example above, the new primitive ``vm2`` overrides several of the
+template's values: its ``monitor`` operation has a longer ``timeout`` and
+``interval``, and the primitive gains an additional ``stop`` operation.
+
+To see the resulting definition of a resource, run:
+
+.. code-block:: none
+
+ # crm_resource --query-xml --resource vm2
+
+To see the raw definition of a resource in the CIB, run:
+
+.. code-block:: none
+
+ # crm_resource --query-xml-raw --resource vm2
+
+Using Templates in Constraints
+______________________________
+
+A resource template can be referenced in the following types of constraints:
+
+- ``order`` constraints (see :ref:`s-resource-ordering`)
+- ``colocation`` constraints (see :ref:`s-resource-colocation`)
+- ``rsc_ticket`` constraints (for multi-site clusters as described in :ref:`ticket-constraints`)
+
+Resource templates referenced in constraints stand for all primitives that are
+derived from that template. This means the constraint applies to all primitive
+resources referencing the resource template. Referencing resource templates in
+constraints is an alternative to resource sets and can simplify the cluster
+configuration considerably.
+
+For example, given the example templates earlier in this chapter:
+
+.. code-block:: xml
+
+ <rsc_colocation id="vm-template-colo-base-rsc" rsc="vm-template" rsc-role="Started" with-rsc="base-rsc" score="INFINITY"/>
+
+would colocate all VMs with ``base-rsc`` and is the equivalent of the following constraint configuration:
+
+.. code-block:: xml
+
+ <rsc_colocation id="vm-colo-base-rsc" score="INFINITY">
+ <resource_set id="vm-colo-base-rsc-0" sequential="false" role="Started">
+ <resource_ref id="vm1"/>
+ <resource_ref id="vm2"/>
+ </resource_set>
+ <resource_set id="vm-colo-base-rsc-1">
+ <resource_ref id="base-rsc"/>
+ </resource_set>
+ </rsc_colocation>
+
+.. note::
+
+ In a colocation constraint, only one template may be referenced from either
+ ``rsc`` or ``with-rsc``; the other reference must be a regular resource.
+
+Using Templates in Resource Sets
+________________________________
+
+Resource templates can also be referenced in resource sets.
+
+For example, given the example templates earlier in this section, then:
+
+.. code-block:: xml
+
+ <rsc_order id="order1" score="INFINITY">
+ <resource_set id="order1-0">
+ <resource_ref id="base-rsc"/>
+ <resource_ref id="vm-template"/>
+ <resource_ref id="top-rsc"/>
+ </resource_set>
+ </rsc_order>
+
+is the equivalent of the following constraint using a sequential resource set:
+
+.. code-block:: xml
+
+ <rsc_order id="order1" score="INFINITY">
+ <resource_set id="order1-0">
+ <resource_ref id="base-rsc"/>
+ <resource_ref id="vm1"/>
+ <resource_ref id="vm2"/>
+ <resource_ref id="top-rsc"/>
+ </resource_set>
+ </rsc_order>
+
+Or, if the resources referencing the template can run in parallel, then:
+
+.. code-block:: xml
+
+ <rsc_order id="order2" score="INFINITY">
+ <resource_set id="order2-0">
+ <resource_ref id="base-rsc"/>
+ </resource_set>
+ <resource_set id="order2-1" sequential="false">
+ <resource_ref id="vm-template"/>
+ </resource_set>
+ <resource_set id="order2-2">
+ <resource_ref id="top-rsc"/>
+ </resource_set>
+ </rsc_order>
+
+is the equivalent of the following constraint configuration:
+
+.. code-block:: xml
+
+ <rsc_order id="order2" score="INFINITY">
+ <resource_set id="order2-0">
+ <resource_ref id="base-rsc"/>
+ </resource_set>
+ <resource_set id="order2-1" sequential="false">
+ <resource_ref id="vm1"/>
+ <resource_ref id="vm2"/>
+ </resource_set>
+ <resource_set id="order2-2">
+ <resource_ref id="top-rsc"/>
+ </resource_set>
+ </rsc_order>
+
+.. _s-reusing-config-elements:
+
+Reusing Rules, Options and Sets of Operations
+#############################################
+
+Sometimes a number of constraints need to use the same set of rules,
+and resources need to set the same options and parameters. To
+simplify this situation, you can refer to an existing object using an
+``id-ref`` instead of an ``id``.
+
+For example, if you have the following constraint for one resource:
+
+.. code-block:: xml
+
+   <rsc_location id="WebServer-connectivity" rsc="Webserver">
+      <rule id="ping-prefer-rule" score-attribute="pingd">
+         <expression id="ping-prefer" attribute="pingd" operation="defined"/>
+      </rule>
+   </rsc_location>
+
+then, instead of duplicating the rule for all your other resources, you can
+simply refer to it:
+
+.. topic:: **Referencing rules from other constraints**
+
+ .. code-block:: xml
+
+ <rsc_location id="WebDB-connectivity" rsc="WebDB">
+ <rule id-ref="ping-prefer-rule"/>
+ </rsc_location>
+
+.. important::
+
+ The cluster will insist that the ``rule`` exists somewhere. Attempting
+ to add a reference to a non-existing rule will cause a validation
+ failure, as will attempting to remove a ``rule`` that is referenced
+ elsewhere.
+
+The same principle applies for ``meta_attributes`` and
+``instance_attributes`` as illustrated in the example below:
+
+.. topic:: Referencing attributes, options, and operations from other resources
+
+ .. code-block:: xml
+
+ <primitive id="mySpecialRsc" class="ocf" type="Special" provider="me">
+ <instance_attributes id="mySpecialRsc-attrs" score="1" >
+ <nvpair id="default-interface" name="interface" value="eth0"/>
+ <nvpair id="default-port" name="port" value="9999"/>
+ </instance_attributes>
+ <meta_attributes id="mySpecialRsc-options">
+ <nvpair id="failure-timeout" name="failure-timeout" value="5m"/>
+ <nvpair id="migration-threshold" name="migration-threshold" value="1"/>
+ <nvpair id="stickiness" name="resource-stickiness" value="0"/>
+ </meta_attributes>
+       <operations id="health-checks">
+         <op id="health-check-60s" name="monitor" interval="60s"/>
+         <op id="health-check-30min" name="monitor" interval="30min"/>
+       </operations>
+ </primitive>
+      <primitive id="myOtherRsc" class="ocf" type="Other" provider="me">
+ <instance_attributes id-ref="mySpecialRsc-attrs"/>
+ <meta_attributes id-ref="mySpecialRsc-options"/>
+ <operations id-ref="health-checks"/>
+ </primitive>
+
+``id-ref`` can similarly be used with ``resource_set`` (in any constraint type),
+``nvpair``, and ``operations``.
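+
+For instance, a single ``nvpair`` can be shared between attribute sets
+(a sketch with illustrative ids):
+
+.. code-block:: xml
+
+   <meta_attributes id="shared-options">
+     <nvpair id="shared-stickiness" name="resource-stickiness" value="100"/>
+   </meta_attributes>
+
+   <!-- in another resource's definition -->
+   <meta_attributes id="other-rsc-options">
+     <nvpair id-ref="shared-stickiness"/>
+   </meta_attributes>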
+
+Tagging Configuration Elements
+##############################
+
+Pacemaker allows you to *tag* any configuration element that has an XML ID.
+
+The main purpose of tagging is to support higher-level user interface tools;
+Pacemaker itself only uses tags within constraints. Therefore, what you can
+do with tags mostly depends on the tools you use.
+
+Configuring Tags
+________________
+
+A tag is simply a named list of XML IDs.
+
+.. topic:: Tag referencing three resources
+
+ .. code-block:: xml
+
+ <tags>
+ <tag id="all-vms">
+ <obj_ref id="vm1"/>
+ <obj_ref id="vm2"/>
+ <obj_ref id="vm3"/>
+ </tag>
+ </tags>
+
+What you can do with this new tag depends on what your higher-level tools
+support. For example, a tool might allow you to enable or disable all of
+the tagged resources at once, or show the status of just the tagged
+resources.
+
+A single configuration element can be listed in any number of tags.
+
+Using Tags in Constraints and Resource Sets
+___________________________________________
+
+Pacemaker itself only uses tags in constraints. If you supply a tag name
+instead of a resource name in any constraint, the constraint will apply to
+all resources listed in that tag.
+
+.. topic:: Constraint using a tag
+
+ .. code-block:: xml
+
+ <rsc_order id="order1" first="storage" then="all-vms" kind="Mandatory" />
+
+In the example above, assuming the ``all-vms`` tag is defined as in the previous
+example, the constraint will behave the same as:
+
+.. topic:: Equivalent constraints without tags
+
+ .. code-block:: xml
+
+ <rsc_order id="order1-1" first="storage" then="vm1" kind="Mandatory" />
+ <rsc_order id="order1-2" first="storage" then="vm2" kind="Mandatory" />
+ <rsc_order id="order1-3" first="storage" then="vm3" kind="Mandatory" />
+
+A tag may be used directly in the constraint, or indirectly by being
+listed in a :ref:`resource set <s-resource-sets>` used in the constraint.
+When used in a resource set, an expanded tag will honor the set's
+``sequential`` property.
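+
+For example, the following uses the ``all-vms`` tag within an unordered
+resource set (the ``storage`` resource is from the earlier ordering example):
+
+.. code-block:: xml
+
+   <rsc_order id="order-storage-vms" kind="Mandatory">
+     <resource_set id="order-storage-vms-0">
+       <resource_ref id="storage"/>
+     </resource_set>
+     <resource_set id="order-storage-vms-1" sequential="false">
+       <resource_ref id="all-vms"/>
+     </resource_set>
+   </rsc_order>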
+
+Filtering With Tags
+___________________
+
+The ``crm_mon`` tool can display a great deal of information about the state
+of the cluster. On large or complicated clusters, this output can make it
+difficult to find the one thing you are interested in. The ``--resource=``
+and ``--node=`` command-line options can be used to filter results. In their
+most basic usage, these options take a single resource or node name. However,
+they can also be supplied with a tag name to display several objects at once.
+
+For instance, given the following CIB section:
+
+.. code-block:: xml
+
+ <resources>
+ <primitive class="stonith" id="Fencing" type="fence_xvm"/>
+ <primitive class="ocf" id="dummy" provider="pacemaker" type="Dummy"/>
+ <group id="inactive-group">
+ <primitive class="ocf" id="inactive-dummy-1" provider="pacemaker" type="Dummy"/>
+ <primitive class="ocf" id="inactive-dummy-2" provider="pacemaker" type="Dummy"/>
+ </group>
+ <clone id="inactive-clone">
+ <primitive id="inactive-dhcpd" class="lsb" type="dhcpd"/>
+ </clone>
+ </resources>
+ <tags>
+ <tag id="inactive-rscs">
+ <obj_ref id="inactive-group"/>
+ <obj_ref id="inactive-clone"/>
+ </tag>
+ </tags>
+
+The following would be output for ``crm_mon --resource=inactive-rscs -r``:
+
+.. code-block:: none
+
+ Cluster Summary:
+ * Stack: corosync
+ * Current DC: cluster02 (version 2.0.4-1.e97f9675f.git.el7-e97f9675f) - partition with quorum
+ * Last updated: Tue Oct 20 16:09:01 2020
+ * Last change: Tue May 5 12:04:36 2020 by hacluster via crmd on cluster01
+ * 5 nodes configured
+ * 27 resource instances configured (4 DISABLED)
+
+ Node List:
+ * Online: [ cluster01 cluster02 ]
+
+ Full List of Resources:
+ * Clone Set: inactive-clone [inactive-dhcpd] (disabled):
+ * Stopped (disabled): [ cluster01 cluster02 ]
+ * Resource Group: inactive-group (disabled):
+ * inactive-dummy-1 (ocf::pacemaker:Dummy): Stopped (disabled)
+ * inactive-dummy-2 (ocf::pacemaker:Dummy): Stopped (disabled)
diff --git a/doc/sphinx/Pacemaker_Explained/rules.rst b/doc/sphinx/Pacemaker_Explained/rules.rst
new file mode 100644
index 0000000..e9d85e0
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Explained/rules.rst
@@ -0,0 +1,1021 @@
+.. index::
+ single: rule
+
+.. _rules:
+
+Rules
+-----
+
+Rules can be used to make your configuration more dynamic, allowing values to
+change depending on the time or the value of a node attribute. Examples of
+things rules are useful for:
+
+* Set a higher value for :ref:`resource-stickiness <resource-stickiness>`
+ during working hours, to minimize downtime, and a lower value on weekends, to
+ allow resources to move to their most preferred locations when people aren't
+ around to notice.
+
+* Automatically place the cluster into maintenance mode during a scheduled
+ maintenance window.
+
+* Assign certain nodes and resources to a particular department via custom
+ node attributes and meta-attributes, and add a single location constraint
+ that restricts the department's resources to run only on those nodes.
+
+Each constraint type or property set that supports rules may contain one or more
+``rule`` elements specifying conditions under which the constraint or properties
+take effect. Examples later in this chapter will make this clearer.
+
+.. index::
+ pair: XML element; rule
+
+Rule Properties
+###############
+
+.. table:: **Attributes of a rule Element**
+ :widths: 1 1 3
+
+ +-----------------+-------------+-------------------------------------------+
+ | Attribute | Default | Description |
+ +=================+=============+===========================================+
+ | id | | .. index:: |
+ | | | pair: rule; id |
+ | | | |
+ | | | A unique name for this element (required) |
+ +-----------------+-------------+-------------------------------------------+
+ | role | ``Started`` | .. index:: |
+ | | | pair: rule; role |
+ | | | |
+ | | | The rule is in effect only when the |
+ | | | resource is in the specified role. |
+ | | | Allowed values are ``Started``, |
+ | | | ``Unpromoted``, and ``Promoted``. A rule |
+ | | | with a ``role`` of ``Promoted`` cannot |
+ | | | determine the initial location of a clone |
+ | | | instance and will only affect which of |
+ | | | the active instances will be promoted. |
+ +-----------------+-------------+-------------------------------------------+
+ | score | | .. index:: |
+ | | | pair: rule; score |
+ | | | |
+ | | | If this rule is used in a location |
+ | | | constraint and evaluates to true, apply |
+ | | | this score to the constraint. Only one of |
+ | | | ``score`` and ``score-attribute`` may be |
+ | | | used. |
+ +-----------------+-------------+-------------------------------------------+
+ | score-attribute | | .. index:: |
+ | | | pair: rule; score-attribute |
+ | | | |
+ | | | If this rule is used in a location |
+ | | | constraint and evaluates to true, use the |
+ | | | value of this node attribute as the score |
+ | | | to apply to the constraint. Only one of |
+ | | | ``score`` and ``score-attribute`` may be |
+ | | | used. |
+ +-----------------+-------------+-------------------------------------------+
+ | boolean-op | ``and`` | .. index:: |
+ | | | pair: rule; boolean-op |
+ | | | |
+ | | | If this rule contains more than one |
+ | | | condition, a value of ``and`` specifies |
+ | | | that the rule evaluates to true only if |
+ | | | all conditions are true, and a value of |
+ | | | ``or`` specifies that the rule evaluates |
+ | | | to true if any condition is true. |
+ +-----------------+-------------+-------------------------------------------+
+
+A ``rule`` element must contain one or more conditions. A condition may be an
+``expression`` element, a ``date_expression`` element, or another ``rule`` element.
+
+
+.. index::
+ single: rule; node attribute expression
+ single: node attribute; rule expression
+ pair: XML element; expression
+
+.. _node_attribute_expressions:
+
+Node Attribute Expressions
+##########################
+
+Expressions are rule conditions based on the values of node attributes.
+
+.. table:: **Attributes of an expression Element**
+ :class: longtable
+ :widths: 1 2 3
+
+ +--------------+---------------------------------+-------------------------------------------+
+ | Attribute | Default | Description |
+ +==============+=================================+===========================================+
+ | id | | .. index:: |
+ | | | pair: expression; id |
+ | | | |
+ | | | A unique name for this element (required) |
+ +--------------+---------------------------------+-------------------------------------------+
+ | attribute | | .. index:: |
+ | | | pair: expression; attribute |
+ | | | |
+ | | | The node attribute to test (required) |
+ +--------------+---------------------------------+-------------------------------------------+
+ | type | The default type for | .. index:: |
+ | | ``lt``, ``gt``, ``lte``, and | pair: expression; type |
+ | | ``gte`` operations is ``number``| |
+ | | if either value contains a | How the node attributes should be |
+ | | decimal point character, or | compared. Allowed values are ``string``, |
+ | | ``integer`` otherwise. The | ``integer`` *(since 2.0.5)*, ``number``, |
+ | | default type for all other | and ``version``. ``integer`` truncates |
+ | | operations is ``string``. If a | floating-point values if necessary before |
+ | | numeric parse fails for either | performing a 64-bit integer comparison. |
+ | | value, then the values are | ``number`` performs a double-precision |
+ | | compared as type ``string``. | floating-point comparison |
+ | | | *(32-bit integer before 2.0.5)*. |
+ +--------------+---------------------------------+-------------------------------------------+
+ | operation | | .. index:: |
+ | | | pair: expression; operation |
+ | | | |
+ | | | The comparison to perform (required). |
+ | | | Allowed values: |
+ | | | |
+ | | | * ``lt:`` True if the node attribute value|
+ | | | is less than the comparison value |
+ | | | * ``gt:`` True if the node attribute value|
+ | | | is greater than the comparison value |
+ | | | * ``lte:`` True if the node attribute |
+ | | | value is less than or equal to the |
+ | | | comparison value |
+ | | | * ``gte:`` True if the node attribute |
+ | | | value is greater than or equal to the |
+ | | | comparison value |
+ | | | * ``eq:`` True if the node attribute value|
+ | | | is equal to the comparison value |
+ | | | * ``ne:`` True if the node attribute value|
+ | | | is not equal to the comparison value |
+ | | | * ``defined:`` True if the node has the |
+ | | | named attribute |
+ | | | * ``not_defined:`` True if the node does |
+ | | | not have the named attribute |
+ +--------------+---------------------------------+-------------------------------------------+
+ | value | | .. index:: |
+ | | | pair: expression; value |
+ | | | |
+ | | | User-supplied value for comparison |
+ | | | (required for operations other than |
+ | | | ``defined`` and ``not_defined``) |
+ +--------------+---------------------------------+-------------------------------------------+
+ | value-source | ``literal`` | .. index:: |
+ | | | pair: expression; value-source |
+ | | | |
+ | | | How the ``value`` is derived. Allowed |
+ | | | values: |
+ | | | |
+ | | | * ``literal``: ``value`` is a literal |
+ | | | string to compare against |
+ | | | * ``param``: ``value`` is the name of a |
+ | | | resource parameter to compare against |
+ | | | (only valid in location constraints) |
+ | | | * ``meta``: ``value`` is the name of a |
+ | | | resource meta-attribute to compare |
+ | | | against (only valid in location |
+ | | | constraints) |
+ +--------------+---------------------------------+-------------------------------------------+
+
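+For example, the following location constraint prefers nodes whose custom
+``datacenter`` node attribute equals ``dc1`` (the attribute and resource
+names are illustrative):
+
+.. code-block:: xml
+
+   <rsc_location id="loc-dc1" rsc="my-rsc">
+     <rule id="loc-dc1-rule" score="INFINITY">
+       <expression id="loc-dc1-expr" attribute="datacenter" operation="eq" value="dc1"/>
+     </rule>
+   </rsc_location>
+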
+.. _node-attribute-expressions-special:
+
+In addition to custom node attributes defined by the administrator, the cluster
+defines special, built-in node attributes for each node that can also be used
+in rule expressions.
+
+.. table:: **Built-in Node Attributes**
+ :widths: 1 4
+
+ +---------------+-----------------------------------------------------------+
+ | Name | Value |
+ +===============+===========================================================+
+ | #uname | :ref:`Node name <node_name>` |
+ +---------------+-----------------------------------------------------------+
+ | #id | Node ID |
+ +---------------+-----------------------------------------------------------+
+ | #kind | Node type. Possible values are ``cluster``, ``remote``, |
+ | | and ``container``. Kind is ``remote`` for Pacemaker Remote|
+ | | nodes created with the ``ocf:pacemaker:remote`` resource, |
+ | | and ``container`` for Pacemaker Remote guest nodes and |
+ | | bundle nodes |
+ +---------------+-----------------------------------------------------------+
+ | #is_dc | ``true`` if this node is the cluster's Designated |
+ | | Controller (DC), ``false`` otherwise |
+ +---------------+-----------------------------------------------------------+
+ | #cluster-name | The value of the ``cluster-name`` cluster property, if set|
+ +---------------+-----------------------------------------------------------+
+ | #site-name | The value of the ``site-name`` node attribute, if set, |
+ | | otherwise identical to ``#cluster-name`` |
+ +---------------+-----------------------------------------------------------+
+ | #role | The role the relevant promotable clone resource has on |
+ | | this node. Valid only within a rule for a location |
+ | | constraint for a promotable clone resource. |
+ +---------------+-----------------------------------------------------------+
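+
+For example, the following rule (a sketch, usable inside a location
+constraint) matches any node that is not a full cluster node; with a score
+of ``-INFINITY``, it would keep a resource off of all Pacemaker Remote
+nodes:
+
+.. topic:: Rule matching only non-cluster nodes
+
+   .. code-block:: xml
+
+      <rule id="only-cluster-nodes-rule" score="-INFINITY">
+         <expression id="only-cluster-nodes-expr" attribute="#kind"
+            operation="ne" value="cluster"/>
+      </rule>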
+
+.. Add_to_above_table_if_released:
+
+ +---------------+-----------------------------------------------------------+
+ | #ra-version | The installed version of the resource agent on the node, |
+ | | as defined by the ``version`` attribute of the |
+ | | ``resource-agent`` tag in the agent's metadata. Valid only|
+ | | within rules controlling resource options. This can be |
+ | | useful during rolling upgrades of a backward-incompatible |
+ | | resource agent. *(since x.x.x)* |
+
+
+.. index::
+ single: rule; date/time expression
+ pair: XML element; date_expression
+
+Date/Time Expressions
+#####################
+
+Date/time expressions are rule conditions based (as the name suggests) on the
+current date and time.
+
+A ``date_expression`` element may optionally contain a ``date_spec`` or
+``duration`` element depending on the context.
+
+.. table:: **Attributes of a date_expression Element**
+ :widths: 1 4
+
+ +---------------+-----------------------------------------------------------+
+ | Attribute | Description |
+ +===============+===========================================================+
+ | id | .. index:: |
+ | | pair: id; date_expression |
+ | | |
+ | | A unique name for this element (required) |
+ +---------------+-----------------------------------------------------------+
+ | start | .. index:: |
+ | | pair: start; date_expression |
+ | | |
+ | | A date/time conforming to the |
+ | | `ISO8601 <https://en.wikipedia.org/wiki/ISO_8601>`_ |
+ | | specification. May be used when ``operation`` is |
+ | | ``in_range`` (in which case at least one of ``start`` or |
+ | | ``end`` must be specified) or ``gt`` (in which case |
+ | | ``start`` is required). |
+ +---------------+-----------------------------------------------------------+
+ | end | .. index:: |
+ | | pair: end; date_expression |
+ | | |
+ | | A date/time conforming to the |
+ | | `ISO8601 <https://en.wikipedia.org/wiki/ISO_8601>`_ |
+ | | specification. May be used when ``operation`` is |
+ | | ``in_range`` (in which case at least one of ``start`` or |
+ | | ``end`` must be specified) or ``lt`` (in which case |
+ | | ``end`` is required). |
+ +---------------+-----------------------------------------------------------+
+ | operation | .. index:: |
+ | | pair: operation; date_expression |
+ | | |
+ | | Compares the current date/time with the start and/or end |
+ | | date, depending on the context. Allowed values: |
+ | | |
+ | | * ``gt:`` True if the current date/time is after ``start``|
+ | | * ``lt:`` True if the current date/time is before ``end`` |
+ | | * ``in_range:`` True if the current date/time is after |
+ | | ``start`` (if specified) and before either ``end`` (if |
+ | | specified) or ``start`` plus the value of the |
+ | | ``duration`` element (if one is contained in the |
+ | | ``date_expression``). If both ``end`` and ``duration`` |
+ | | are specified, ``duration`` is ignored. |
+ | | * ``date_spec:`` True if the current date/time matches |
+ | | the specification given in the contained ``date_spec`` |
+ | | element (described below) |
+ +---------------+-----------------------------------------------------------+
+
+
+.. note:: There is no ``eq``, ``neq``, ``gte``, or ``lte`` operation, since
+ they would be valid only for a single second.
+
+
+.. index::
+ single: date specification
+ pair: XML element; date_spec
+
+Date Specifications
+___________________
+
+A ``date_spec`` element is used to create a cron-like expression relating
+to time. Each field can contain a single number or range. Any field not
+supplied is ignored.
+
+.. table:: **Attributes of a date_spec Element**
+ :widths: 1 3
+
+ +---------------+-----------------------------------------------------------+
+ | Attribute | Description |
+ +===============+===========================================================+
+ | id | .. index:: |
+ | | pair: id; date_spec |
+ | | |
+ | | A unique name for this element (required) |
+ +---------------+-----------------------------------------------------------+
+ | seconds | .. index:: |
+ | | pair: seconds; date_spec |
+ | | |
+ | | Allowed values: 0-59 |
+ +---------------+-----------------------------------------------------------+
+ | minutes | .. index:: |
+ | | pair: minutes; date_spec |
+ | | |
+ | | Allowed values: 0-59 |
+ +---------------+-----------------------------------------------------------+
+ | hours | .. index:: |
+ | | pair: hours; date_spec |
+ | | |
+ | | Allowed values: 0-23 (where 0 is midnight and 23 is |
+ | | 11 p.m.) |
+ +---------------+-----------------------------------------------------------+
+ | monthdays | .. index:: |
+ | | pair: monthdays; date_spec |
+ | | |
+ | | Allowed values: 1-31 (depending on month and year) |
+ +---------------+-----------------------------------------------------------+
+ | weekdays | .. index:: |
+ | | pair: weekdays; date_spec |
+ | | |
+ | | Allowed values: 1-7 (where 1 is Monday and 7 is Sunday) |
+ +---------------+-----------------------------------------------------------+
+ | yeardays | .. index:: |
+ | | pair: yeardays; date_spec |
+ | | |
+ | | Allowed values: 1-366 (depending on the year) |
+ +---------------+-----------------------------------------------------------+
+ | months | .. index:: |
+ | | pair: months; date_spec |
+ | | |
+ | | Allowed values: 1-12 |
+ +---------------+-----------------------------------------------------------+
+ | weeks | .. index:: |
+ | | pair: weeks; date_spec |
+ | | |
+ | | Allowed values: 1-53 (depending on weekyear) |
+ +---------------+-----------------------------------------------------------+
+ | years | .. index:: |
+ | | pair: years; date_spec |
+ | | |
+ | | Year according to the Gregorian calendar |
+ +---------------+-----------------------------------------------------------+
+ | weekyears | .. index:: |
+ | | pair: weekyears; date_spec |
+ | | |
+ | | Year in which the week started; for example, 1 January |
+ | | 2005 can be specified in ISO 8601 as "2005-001 Ordinal", |
+ | | "2005-01-01 Gregorian" or "2004-W53-6 Weekly" and thus |
+ | | would match ``years="2005"`` or ``weekyears="2004"`` |
+ +---------------+-----------------------------------------------------------+
+ | moon | .. index:: |
+ | | pair: moon; date_spec |
+ | | |
+ | | Allowed values are 0-7 (where 0 is the new moon and 4 is |
+ | | full moon). *(deprecated since 2.1.6)* |
+ +---------------+-----------------------------------------------------------+
+
+For example, ``monthdays="1"`` matches the first day of every month, and
+``hours="09-17"`` matches the hours between 9 a.m. and 5 p.m. (inclusive).
+
+At this time, multiple ranges (e.g. ``weekdays="1,2"`` or ``weekdays="1-2,5-6"``)
+are not supported.
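+
+A similar effect can be achieved by combining multiple ``date_expression``
+elements with ``boolean-op="or"``, one per range. For example, the
+following sketch matches Monday through Tuesday or Friday through Saturday:
+
+.. topic:: Emulating ``weekdays="1-2,5-6"`` with two expressions
+
+   .. code-block:: xml
+
+      <rule id="split-weekdays-rule" score="INFINITY" boolean-op="or">
+         <date_expression id="split-weekdays-expr1" operation="date_spec">
+            <date_spec id="split-weekdays-spec1" weekdays="1-2"/>
+         </date_expression>
+         <date_expression id="split-weekdays-expr2" operation="date_spec">
+            <date_spec id="split-weekdays-spec2" weekdays="5-6"/>
+         </date_expression>
+      </rule>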
+
+.. note:: Pacemaker can calculate when evaluation of a ``date_expression`` with
+ an ``operation`` of ``gt``, ``lt``, or ``in_range`` will next change,
+ and schedule a cluster re-check for that time. However, it does not
+ do this for ``date_spec``. Instead, it evaluates the ``date_spec``
+ whenever a cluster re-check naturally happens via a cluster event or
+ the ``cluster-recheck-interval`` cluster option.
+
+ For example, if you have a ``date_spec`` enabling a resource from 9
+ a.m. to 5 p.m., and ``cluster-recheck-interval`` has been set to 5
+ minutes, then sometime between 9 a.m. and 9:05 a.m. the cluster would
+ notice that it needs to start the resource, and sometime between 5
+ p.m. and 5:05 p.m. it would realize that it needs to stop the
+ resource. The timing of the actual start and stop actions will
+ further depend on factors such as any other actions the cluster may
+ need to perform first, and the load of the machine.
+
+
+.. index::
+ single: duration
+ pair: XML element; duration
+
+Durations
+_________
+
+A ``duration`` is used to calculate a value for ``end`` when one is not
+supplied to ``in_range`` operations. It contains one or more attributes each
+containing a single number. Any attribute not supplied is ignored.
+
+.. table:: **Attributes of a duration Element**
+ :widths: 1 3
+
+ +---------------+-----------------------------------------------------------+
+ | Attribute | Description |
+ +===============+===========================================================+
+ | id | .. index:: |
+ | | pair: id; duration |
+ | | |
+ | | A unique name for this element (required) |
+ +---------------+-----------------------------------------------------------+
+ | seconds | .. index:: |
+ | | pair: seconds; duration |
+ | | |
+ | | This many seconds will be added to the total duration |
+ +---------------+-----------------------------------------------------------+
+ | minutes | .. index:: |
+ | | pair: minutes; duration |
+ | | |
+ | | This many minutes will be added to the total duration |
+ +---------------+-----------------------------------------------------------+
+ | hours | .. index:: |
+ | | pair: hours; duration |
+ | | |
+ | | This many hours will be added to the total duration |
+ +---------------+-----------------------------------------------------------+
+ | days | .. index:: |
+ | | pair: days; duration |
+ | | |
+ | | This many days will be added to the total duration |
+ +---------------+-----------------------------------------------------------+
+ | weeks | .. index:: |
+ | | pair: weeks; duration |
+ | | |
+ | | This many weeks will be added to the total duration |
+ +---------------+-----------------------------------------------------------+
+ | months | .. index:: |
+ | | pair: months; duration |
+ | | |
+ | | This many months will be added to the total duration |
+ +---------------+-----------------------------------------------------------+
+ | years | .. index:: |
+ | | pair: years; duration |
+ | | |
+ | | This many years will be added to the total duration |
+ +---------------+-----------------------------------------------------------+
+
+
+Example Time-Based Expressions
+______________________________
+
+A small sample of how time-based expressions can be used:
+
+.. topic:: True if now is any time in the year 2005
+
+ .. code-block:: xml
+
+ <rule id="rule1" score="INFINITY">
+ <date_expression id="date_expr1" start="2005-001" operation="in_range">
+ <duration id="duration1" years="1"/>
+ </date_expression>
+ </rule>
+
+ or equivalently:
+
+ .. code-block:: xml
+
+ <rule id="rule2" score="INFINITY">
+ <date_expression id="date_expr2" operation="date_spec">
+ <date_spec id="date_spec2" years="2005"/>
+ </date_expression>
+ </rule>
+
+.. topic:: 9 a.m. to 5 p.m. Monday through Friday
+
+ .. code-block:: xml
+
+ <rule id="rule3" score="INFINITY">
+ <date_expression id="date_expr3" operation="date_spec">
+ <date_spec id="date_spec3" hours="9-16" weekdays="1-5"/>
+ </date_expression>
+ </rule>
+
+ Note that the ``16`` matches all the way through ``16:59:59``, because the
+ numeric value of the hour still matches.
+
+.. topic:: 9 a.m. to 6 p.m. Monday through Friday or anytime Saturday
+
+ .. code-block:: xml
+
+ <rule id="rule4" score="INFINITY" boolean-op="or">
+ <date_expression id="date_expr4-1" operation="date_spec">
+ <date_spec id="date_spec4-1" hours="9-16" weekdays="1-5"/>
+ </date_expression>
+ <date_expression id="date_expr4-2" operation="date_spec">
+ <date_spec id="date_spec4-2" weekdays="6"/>
+ </date_expression>
+ </rule>
+
+.. topic:: 9 a.m. to 5 p.m. or 9 p.m. to 12 a.m. Monday through Friday
+
+ .. code-block:: xml
+
+ <rule id="rule5" score="INFINITY" boolean-op="and">
+ <rule id="rule5-nested1" score="INFINITY" boolean-op="or">
+ <date_expression id="date_expr5-1" operation="date_spec">
+ <date_spec id="date_spec5-1" hours="9-16"/>
+ </date_expression>
+ <date_expression id="date_expr5-2" operation="date_spec">
+ <date_spec id="date_spec5-2" hours="21-23"/>
+ </date_expression>
+ </rule>
+ <date_expression id="date_expr5-3" operation="date_spec">
+ <date_spec id="date_spec5-3" weekdays="1-5"/>
+ </date_expression>
+ </rule>
+
+.. topic:: Mondays in March 2005
+
+ .. code-block:: xml
+
+ <rule id="rule6" score="INFINITY" boolean-op="and">
+ <date_expression id="date_expr6-1" operation="date_spec">
+ <date_spec id="date_spec6" weekdays="1"/>
+ </date_expression>
+ <date_expression id="date_expr6-2" operation="in_range"
+ start="2005-03-01" end="2005-04-01"/>
+ </rule>
+
+ .. note:: Because no time is specified with the above dates, 00:00:00 is
+ implied. This means that the range includes all of 2005-03-01 but
+ none of 2005-04-01. You may wish to write ``end`` as
+ ``"2005-03-31T23:59:59"`` to avoid confusion.
+
+
+.. index::
+ single: rule; resource expression
+ single: resource; rule expression
+ pair: XML element; rsc_expression
+
+Resource Expressions
+####################
+
+An ``rsc_expression`` *(since 2.0.5)* is a rule condition based on a resource
+agent's properties. This rule is only valid within an ``rsc_defaults`` or
+``op_defaults`` context. None of the matching attributes of ``class``,
+``provider``, and ``type`` are required. If one is omitted, all values of that
+attribute will match. For instance, omitting ``type`` means every type will
+match.
+
+.. table:: **Attributes of a rsc_expression Element**
+ :widths: 1 3
+
+ +---------------+-----------------------------------------------------------+
+ | Attribute | Description |
+ +===============+===========================================================+
+ | id | .. index:: |
+ | | pair: id; rsc_expression |
+ | | |
+ | | A unique name for this element (required) |
+ +---------------+-----------------------------------------------------------+
+ | class | .. index:: |
+ | | pair: class; rsc_expression |
+ | | |
+ | | The standard name to be matched against resource agents |
+ +---------------+-----------------------------------------------------------+
+ | provider | .. index:: |
+ | | pair: provider; rsc_expression |
+ | | |
+ | | If given, the vendor to be matched against resource |
+ | | agents (only relevant when ``class`` is ``ocf``) |
+ +---------------+-----------------------------------------------------------+
+ | type | .. index:: |
+ | | pair: type; rsc_expression |
+ | | |
+ | | The name of the resource agent to be matched |
+ +---------------+-----------------------------------------------------------+
+
+Example Resource-Based Expressions
+__________________________________
+
+A small sample of how resource-based expressions can be used:
+
+.. topic:: True for all ``ocf:heartbeat:IPaddr2`` resources
+
+ .. code-block:: xml
+
+ <rule id="rule1" score="INFINITY">
+ <rsc_expression id="rule_expr1" class="ocf" provider="heartbeat" type="IPaddr2"/>
+ </rule>
+
+.. topic:: Provider doesn't apply to non-OCF resources
+
+ .. code-block:: xml
+
+ <rule id="rule2" score="INFINITY">
+ <rsc_expression id="rule_expr2" class="stonith" type="fence_xvm"/>
+ </rule>
+
+
+.. index::
+ single: rule; operation expression
+ single: operation; rule expression
+ pair: XML element; op_expression
+
+Operation Expressions
+#####################
+
+An ``op_expression`` *(since 2.0.5)* is a rule condition based on an action of
+some resource agent. This rule is only valid within an ``op_defaults`` context.
+
+.. table:: **Attributes of an op_expression Element**
+ :widths: 1 3
+
+ +---------------+-----------------------------------------------------------+
+ | Attribute | Description |
+ +===============+===========================================================+
+ | id | .. index:: |
+ | | pair: id; op_expression |
+ | | |
+ | | A unique name for this element (required) |
+ +---------------+-----------------------------------------------------------+
+ | name | .. index:: |
+ | | pair: name; op_expression |
+ | | |
+ | | The action name to match against. This can be any action |
+ | | supported by the resource agent; common values include |
+ | | ``monitor``, ``start``, and ``stop`` (required). |
+ +---------------+-----------------------------------------------------------+
+ | interval | .. index:: |
+ | | pair: interval; op_expression |
+ | | |
+ | | The interval of the action to match against. If not given,|
+ | | only the name attribute will be used to match. |
+ +---------------+-----------------------------------------------------------+
+
+Example Operation-Based Expressions
+___________________________________
+
+A small sample of how operation-based expressions can be used:
+
+.. topic:: True for all monitor actions
+
+ .. code-block:: xml
+
+ <rule id="rule1" score="INFINITY">
+ <op_expression id="rule_expr1" name="monitor"/>
+ </rule>
+
+.. topic:: True for all monitor actions with a 10 second interval
+
+ .. code-block:: xml
+
+ <rule id="rule2" score="INFINITY">
+ <op_expression id="rule_expr2" name="monitor" interval="10s"/>
+ </rule>
+
+
+.. index::
+ pair: location constraint; rule
+
+Using Rules to Determine Resource Location
+##########################################
+
+A location constraint may contain one or more top-level rules. The cluster
+will act as if there is a separate location constraint for each rule that
+evaluates to true.
+
+Consider the following simple location constraint:
+
+.. topic:: Prevent resource ``webserver`` from running on node ``node3``
+
+ .. code-block:: xml
+
+ <rsc_location id="ban-apache-on-node3" rsc="webserver"
+ score="-INFINITY" node="node3"/>
+
+The same constraint can be more verbosely written using a rule:
+
+.. topic:: Prevent resource ``webserver`` from running on node ``node3`` using a rule
+
+ .. code-block:: xml
+
+ <rsc_location id="ban-apache-on-node3" rsc="webserver">
+ <rule id="ban-apache-rule" score="-INFINITY">
+ <expression id="ban-apache-expr" attribute="#uname"
+ operation="eq" value="node3"/>
+ </rule>
+ </rsc_location>
+
+The advantage of using the expanded form is that one could add more expressions
+(for example, limiting the constraint to certain days of the week), or activate
+the constraint by some node attribute other than node name.
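+
+For example, the following sketch bans ``webserver`` from ``node3`` only on
+weekends, by requiring both expressions in the rule to be true:
+
+.. topic:: Ban ``webserver`` from ``node3`` on weekends only
+
+   .. code-block:: xml
+
+      <rsc_location id="ban-apache-on-node3-weekends" rsc="webserver">
+         <rule id="ban-apache-weekend-rule" score="-INFINITY" boolean-op="and">
+            <expression id="ban-apache-weekend-expr" attribute="#uname"
+               operation="eq" value="node3"/>
+            <date_expression id="ban-apache-weekend-date" operation="date_spec">
+               <date_spec id="ban-apache-weekend-spec" weekdays="6-7"/>
+            </date_expression>
+         </rule>
+      </rsc_location>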
+
+Location Rules Based on Other Node Properties
+_____________________________________________
+
+The expanded form allows us to match on node properties other than its name.
+If we rated each machine's CPU power such that the cluster had the following
+nodes section:
+
+.. topic:: Sample node section with node attributes
+
+ .. code-block:: xml
+
+ <nodes>
+ <node id="uuid1" uname="c001n01" type="normal">
+ <instance_attributes id="uuid1-custom_attrs">
+ <nvpair id="uuid1-cpu_mips" name="cpu_mips" value="1234"/>
+ </instance_attributes>
+ </node>
+ <node id="uuid2" uname="c001n02" type="normal">
+ <instance_attributes id="uuid2-custom_attrs">
+ <nvpair id="uuid2-cpu_mips" name="cpu_mips" value="5678"/>
+ </instance_attributes>
+ </node>
+ </nodes>
+
+then we could prevent resources from running on underpowered machines with this
+rule:
+
+.. topic:: Rule using a node attribute (to be used inside a location constraint)
+
+ .. code-block:: xml
+
+ <rule id="need-more-power-rule" score="-INFINITY">
+ <expression id="need-more-power-expr" attribute="cpu_mips"
+ operation="lt" value="3000"/>
+ </rule>
+
+Using ``score-attribute`` Instead of ``score``
+______________________________________________
+
+When using ``score-attribute`` instead of ``score``, each node matched by the
+rule has its score adjusted differently, according to its value for the named
+node attribute. Thus, in the previous example, if a rule inside a location
+constraint for a resource used ``score-attribute="cpu_mips"``, ``c001n01``
+would have its preference to run the resource increased by ``1234`` whereas
+``c001n02`` would have its preference increased by ``5678``.
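+
+Such a rule might look like the following sketch (to be used inside a
+location constraint), which adjusts each node's preference by its
+``cpu_mips`` value wherever that attribute is defined:
+
+.. topic:: Rule using ``score-attribute``
+
+   .. code-block:: xml
+
+      <rule id="prefer-fast-cpu-rule" score-attribute="cpu_mips">
+         <expression id="prefer-fast-cpu-expr" attribute="cpu_mips"
+            operation="defined"/>
+      </rule>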
+
+
+.. _s-rsc-pattern-rules:
+
+Specifying location scores using pattern submatches
+___________________________________________________
+
+Location constraints may use ``rsc-pattern`` to apply the constraint to all
+resources whose IDs match the given pattern (see :ref:`s-rsc-pattern`). The
+pattern may contain up to 9 submatches in parentheses, whose values may be used
+as ``%1`` through ``%9`` in a rule's ``score-attribute`` or a rule expression's
+``attribute``.
+
+As an example, the following configuration (only relevant parts are shown)
+gives the resources **server-httpd** and **ip-httpd** a preference of 100 on
+**node1** and 50 on **node2**, and **ip-gateway** a preference of -100 on
+**node1** and 200 on **node2**.
+
+.. topic:: Location constraint using submatches
+
+ .. code-block:: xml
+
+ <nodes>
+ <node id="1" uname="node1">
+ <instance_attributes id="node1-attrs">
+ <nvpair id="node1-prefer-httpd" name="prefer-httpd" value="100"/>
+ <nvpair id="node1-prefer-gateway" name="prefer-gateway" value="-100"/>
+ </instance_attributes>
+ </node>
+ <node id="2" uname="node2">
+ <instance_attributes id="node2-attrs">
+ <nvpair id="node2-prefer-httpd" name="prefer-httpd" value="50"/>
+ <nvpair id="node2-prefer-gateway" name="prefer-gateway" value="200"/>
+ </instance_attributes>
+ </node>
+ </nodes>
+ <resources>
+ <primitive id="server-httpd" class="ocf" provider="heartbeat" type="apache"/>
+ <primitive id="ip-httpd" class="ocf" provider="heartbeat" type="IPaddr2"/>
+ <primitive id="ip-gateway" class="ocf" provider="heartbeat" type="IPaddr2"/>
+ </resources>
+ <constraints>
+ <!-- The following constraint says that for any resource whose name
+ starts with "server-" or "ip-", that resource's preference for a
+ node is the value of the node attribute named "prefer-" followed
+ by the part of the resource name after "server-" or "ip-",
+ wherever such a node attribute is defined.
+ -->
+ <rsc_location id="location1" rsc-pattern="(server|ip)-(.*)">
+ <rule id="location1-rule1" score-attribute="prefer-%2">
+ <expression id="location1-rule1-expression1" attribute="prefer-%2" operation="defined"/>
+ </rule>
+ </rsc_location>
+ </constraints>
+
+
+.. index::
+ pair: cluster option; rule
+ pair: instance attribute; rule
+ pair: meta-attribute; rule
+ pair: resource defaults; rule
+ pair: operation defaults; rule
+ pair: node attribute; rule
+
+Using Rules to Define Options
+#############################
+
+Rules may be used to control a variety of options:
+
+* :ref:`Cluster options <cluster_options>` (``cluster_property_set`` elements)
+* :ref:`Node attributes <node_attributes>` (``instance_attributes`` or
+ ``utilization`` elements inside a ``node`` element)
+* :ref:`Resource options <resource_options>` (``utilization``,
+ ``meta_attributes``, or ``instance_attributes`` elements inside a resource
+  definition element or ``op``, ``rsc_defaults``, ``op_defaults``, or
+ ``template`` element)
+* :ref:`Operation properties <operation_properties>` (``meta_attributes``
+ elements inside an ``op`` or ``op_defaults`` element)
+
+.. note::
+
+ Attribute-based expressions for meta-attributes can only be used within
+ ``operations`` and ``op_defaults``. They will not work with resource
+ configuration or ``rsc_defaults``. Additionally, attribute-based
+ expressions cannot be used with cluster options.
+
+Using Rules to Control Resource Options
+_______________________________________
+
+Often some cluster nodes will be different from their peers. Sometimes,
+these differences -- e.g. the location of a binary or the names of network
+interfaces -- require resources to be configured differently depending
+on the machine they're hosted on.
+
+By defining multiple ``instance_attributes`` objects for the resource and
+adding a rule to each, we can easily handle these special cases.
+
+In the example below, ``mySpecialRsc`` will use eth1 and port 9999 when run
+on ``node1``, eth2 and port 8888 on ``node2``, and default to eth0 and port
+9999 for all other nodes.
+
+.. topic:: Defining different resource options based on the node name
+
+ .. code-block:: xml
+
+ <primitive id="mySpecialRsc" class="ocf" type="Special" provider="me">
+ <instance_attributes id="special-node1" score="3">
+ <rule id="node1-special-case" score="INFINITY" >
+ <expression id="node1-special-case-expr" attribute="#uname"
+ operation="eq" value="node1"/>
+ </rule>
+ <nvpair id="node1-interface" name="interface" value="eth1"/>
+ </instance_attributes>
+ <instance_attributes id="special-node2" score="2" >
+ <rule id="node2-special-case" score="INFINITY">
+ <expression id="node2-special-case-expr" attribute="#uname"
+ operation="eq" value="node2"/>
+ </rule>
+ <nvpair id="node2-interface" name="interface" value="eth2"/>
+ <nvpair id="node2-port" name="port" value="8888"/>
+ </instance_attributes>
+ <instance_attributes id="defaults" score="1" >
+ <nvpair id="default-interface" name="interface" value="eth0"/>
+ <nvpair id="default-port" name="port" value="9999"/>
+ </instance_attributes>
+ </primitive>
+
+The order in which ``instance_attributes`` objects are evaluated is determined
+by their score (highest to lowest). If not supplied, the score defaults to
+zero. Objects with an equal score are processed in their listed order. If the
+``instance_attributes`` object has no rule, or a ``rule`` that evaluates to
+``true``, then for any parameter the resource does not yet have a value for,
+the resource will use the parameter values defined by the ``instance_attributes``.
+
+For example, given the configuration above, if the resource is placed on
+``node1``:
+
+* ``special-node1`` has the highest score (3) and so is evaluated first; its
+ rule evaluates to ``true``, so ``interface`` is set to ``eth1``.
+* ``special-node2`` is evaluated next with score 2, but its rule evaluates to
+ ``false``, so it is ignored.
+* ``defaults`` is evaluated last with score 1, and has no rule, so its values
+ are examined; ``interface`` is already defined, so the value here is not
+ used, but ``port`` is not yet defined, so ``port`` is set to ``9999``.
+
+Using Rules to Control Resource Defaults
+________________________________________
+
+Rules can be used for resource and operation defaults. The following example
+illustrates how to set a different ``resource-stickiness`` value during and
+outside work hours. This allows resources to automatically move back to their
+most preferred hosts, but at a time that (in theory) does not interfere with
+business activities.
+
+.. topic:: Change ``resource-stickiness`` during working hours
+
+ .. code-block:: xml
+
+ <rsc_defaults>
+ <meta_attributes id="core-hours" score="2">
+ <rule id="core-hour-rule" score="0">
+ <date_expression id="nine-to-five-Mon-to-Fri" operation="date_spec">
+ <date_spec id="nine-to-five-Mon-to-Fri-spec" hours="9-16" weekdays="1-5"/>
+ </date_expression>
+ </rule>
+ <nvpair id="core-stickiness" name="resource-stickiness" value="INFINITY"/>
+ </meta_attributes>
+ <meta_attributes id="after-hours" score="1" >
+ <nvpair id="after-stickiness" name="resource-stickiness" value="0"/>
+ </meta_attributes>
+ </rsc_defaults>
+
+Rules may be used similarly in ``instance_attributes`` or ``utilization``
+blocks.
+
+Any single block may directly contain only a single rule, but that rule may
+itself contain any number of rules.
+
+``rsc_expression`` and ``op_expression`` blocks may additionally be used to
+set defaults on either a single resource or across an entire class of resources
+with a single rule. ``rsc_expression`` may be used to select resource agents
+within both ``rsc_defaults`` and ``op_defaults``, while ``op_expression`` may
+only be used within ``op_defaults``. If multiple rules succeed for a given
+resource agent, the last one specified will be the one that takes effect. As
+with any other rule, boolean operations may be used to make more complicated
+expressions.
+
+.. topic:: Default all IPaddr2 resources to stopped
+
+ .. code-block:: xml
+
+ <rsc_defaults>
+ <meta_attributes id="op-target-role">
+ <rule id="op-target-role-rule" score="INFINITY">
+ <rsc_expression id="op-target-role-expr" class="ocf" provider="heartbeat"
+ type="IPaddr2"/>
+ </rule>
+ <nvpair id="op-target-role-nvpair" name="target-role" value="Stopped"/>
+ </meta_attributes>
+ </rsc_defaults>
+
+.. topic:: Default all monitor action timeouts to 7 seconds
+
+ .. code-block:: xml
+
+ <op_defaults>
+ <meta_attributes id="op-monitor-defaults">
+ <rule id="op-monitor-default-rule" score="INFINITY">
+ <op_expression id="op-monitor-default-expr" name="monitor"/>
+ </rule>
+ <nvpair id="op-monitor-timeout" name="timeout" value="7s"/>
+ </meta_attributes>
+ </op_defaults>
+
+.. topic:: Default the timeout on all 10-second-interval monitor actions on ``IPaddr2`` resources to 8 seconds
+
+ .. code-block:: xml
+
+ <op_defaults>
+ <meta_attributes id="op-monitor-and">
+ <rule id="op-monitor-and-rule" score="INFINITY">
+ <rsc_expression id="op-monitor-and-rsc-expr" class="ocf" provider="heartbeat"
+ type="IPaddr2"/>
+ <op_expression id="op-monitor-and-op-expr" name="monitor" interval="10s"/>
+ </rule>
+ <nvpair id="op-monitor-and-timeout" name="timeout" value="8s"/>
+ </meta_attributes>
+ </op_defaults>
+
+
+.. index::
+ pair: rule; cluster option
+
+Using Rules to Control Cluster Options
+______________________________________
+
+Controlling cluster options is achieved in much the same manner as specifying
+different resource options on different nodes.
+
+The following example illustrates how to set ``maintenance-mode`` during a
+scheduled maintenance window. This will keep the cluster running but not
+monitor, start, or stop resources during this time.
+
+.. topic:: Schedule a maintenance window for 9 to 11 p.m. CDT Sept. 20, 2019
+
+ .. code-block:: xml
+
+ <crm_config>
+ <cluster_property_set id="cib-bootstrap-options">
+ <nvpair id="bootstrap-stonith-enabled" name="stonith-enabled" value="1"/>
+ </cluster_property_set>
+ <cluster_property_set id="normal-set" score="10">
+ <nvpair id="normal-maintenance-mode" name="maintenance-mode" value="false"/>
+ </cluster_property_set>
+ <cluster_property_set id="maintenance-window-set" score="1000">
+ <nvpair id="maintenance-nvpair1" name="maintenance-mode" value="true"/>
+ <rule id="maintenance-rule1" score="INFINITY">
+ <date_expression id="maintenance-date1" operation="in_range"
+ start="2019-09-20 21:00:00 -05:00" end="2019-09-20 23:00:00 -05:00"/>
+ </rule>
+ </cluster_property_set>
+ </crm_config>
+
+.. important:: The ``cluster_property_set`` with an ``id`` set to
+ "cib-bootstrap-options" will *always* have the highest priority,
+ regardless of any scores. Therefore, rules in another
+ ``cluster_property_set`` can never take effect for any
+ properties listed in the bootstrap set.
diff --git a/doc/sphinx/Pacemaker_Explained/status.rst b/doc/sphinx/Pacemaker_Explained/status.rst
new file mode 100644
index 0000000..2d7dd7e
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Explained/status.rst
@@ -0,0 +1,372 @@
+.. index::
+ single: status
+   pair: XML element; status
+
+Status -- Here be dragons
+-------------------------
+
+Most users never need to understand the contents of the status section
+and can be happy with the output from ``crm_mon``.
+
+However, for those with a curious inclination, this section attempts to
+provide an overview of its contents.
+
+.. index::
+ single: node; status
+
+Node Status
+###########
+
+In addition to the cluster's configuration, the CIB holds an
+up-to-date representation of each cluster node in the ``status`` section.
+
+.. topic:: A bare-bones status entry for a healthy node **cl-virt-1**
+
+ .. code-block:: xml
+
+ <node_state id="1" uname="cl-virt-1" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
+ <transient_attributes id="1"/>
+ <lrm id="1"/>
+ </node_state>
+
+It is highly recommended that users *not* modify any part of a node's
+state *directly*. The cluster will periodically regenerate the entire
+section from authoritative sources, so any changes should be made
+with the tools appropriate to those sources.
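+
+For example, rather than editing ``lrm`` entries by hand, a resource's
+operation history on a node can be cleared with ``crm_resource`` (the
+resource and node names below are illustrative):
+
+.. code-block:: none
+
+   # crm_resource --refresh --resource pingd --node cl-virt-1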
+
+.. table:: **Authoritative Sources for State Information**
+ :widths: 1 1
+
+ +----------------------+----------------------+
+ | CIB Object | Authoritative Source |
+ +======================+======================+
+ | node_state | pacemaker-controld |
+ +----------------------+----------------------+
+ | transient_attributes | pacemaker-attrd |
+ +----------------------+----------------------+
+ | lrm | pacemaker-execd |
+ +----------------------+----------------------+
+
+The fields used in ``node_state`` objects are named as they are largely
+for historical reasons, rooted in Pacemaker's origins as the resource
+manager for the older Heartbeat project. They have remained unchanged
+to preserve compatibility with older versions.
+
+.. table:: **Node Status Fields**
+ :widths: 1 3
+
+ +------------------+----------------------------------------------------------+
+ | Field | Description |
+ +==================+==========================================================+
+ | id | .. index: |
+ | | single: id; node status |
+ | | single: node; status, id |
+ | | |
+ | | Unique identifier for the node. Corosync-based clusters |
+ | | use a numeric counter. |
+ +------------------+----------------------------------------------------------+
+ | uname | .. index:: |
+ | | single: uname; node status |
+ | | single: node; status, uname |
+ | | |
+ | | The node's name as known by the cluster |
+ +------------------+----------------------------------------------------------+
+ | in_ccm | .. index:: |
+ | | single: in_ccm; node status |
+ | | single: node; status, in_ccm |
+ | | |
+   |                  | Is the node a member at the cluster communication layer? |
+ | | Allowed values: ``true``, ``false``. |
+ +------------------+----------------------------------------------------------+
+ | crmd | .. index:: |
+ | | single: crmd; node status |
+ | | single: node; status, crmd |
+ | | |
+ | | Is the node a member at the pacemaker layer? Allowed |
+ | | values: ``online``, ``offline``. |
+ +------------------+----------------------------------------------------------+
+ | crm-debug-origin | .. index:: |
+ | | single: crm-debug-origin; node status |
+ | | single: node; status, crm-debug-origin |
+ | | |
+ | | The name of the source function that made the most |
+ | | recent change (for debugging purposes). |
+ +------------------+----------------------------------------------------------+
+ | join | .. index:: |
+ | | single: join; node status |
+ | | single: node; status, join |
+ | | |
+ | | Does the node participate in hosting resources? |
+   |                  | Allowed values: ``down``, ``pending``, ``member``,       |
+   |                  | ``banned``.                                              |
+ +------------------+----------------------------------------------------------+
+ | expected | .. index:: |
+ | | single: expected; node status |
+ | | single: node; status, expected |
+ | | |
+ | | Expected value for ``join``. |
+ +------------------+----------------------------------------------------------+
+
+The cluster uses these fields to determine whether, at the node level, the
+node is healthy or is in a failed state and needs to be fenced.
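+
+For example, the following hypothetical sketch shows a node that is still a
+member at the cluster communication layer but whose Pacemaker daemons have
+unexpectedly gone away -- a combination the cluster would resolve by fencing
+the node:
+
+.. code-block:: xml
+
+   <node_state id="2" uname="cl-virt-2" in_ccm="true" crmd="offline"
+               join="down" expected="member"/>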
+
+Transient Node Attributes
+#########################
+
+Like regular :ref:`node_attributes`, the name/value
+pairs listed in the ``transient_attributes`` section help to describe the
+node. However they are forgotten by the cluster when the node goes offline.
+This can be useful, for instance, when you want a node to be in standby mode
+(not able to run resources) just until the next reboot.
+
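+For example, such a temporary standby can be set with ``crm_attribute``
+using a ``reboot`` lifetime (the node name below is illustrative):
+
+.. code-block:: none
+
+   # crm_attribute --node cl-virt-1 --name standby --update on --lifetime reboot
+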
+In addition to any values the administrator sets, the cluster will
+also store information about failed resources here.
+
+.. topic:: A set of transient node attributes for node **cl-virt-1**
+
+ .. code-block:: xml
+
+ <transient_attributes id="cl-virt-1">
+ <instance_attributes id="status-cl-virt-1">
+ <nvpair id="status-cl-virt-1-pingd" name="pingd" value="3"/>
+ <nvpair id="status-cl-virt-1-probe_complete" name="probe_complete" value="true"/>
+ <nvpair id="status-cl-virt-1-fail-count-pingd:0.monitor_30000" name="fail-count-pingd:0#monitor_30000" value="1"/>
+ <nvpair id="status-cl-virt-1-last-failure-pingd:0" name="last-failure-pingd:0" value="1239009742"/>
+ </instance_attributes>
+ </transient_attributes>
+
+In the above example, we can see that a monitor on the ``pingd:0`` resource has
+failed once, at 09:22:22 UTC 6 April 2009 [#]_.
+
+We also see that the node is connected to three **pingd** peers and that
+all known resources have been checked for on this machine (``probe_complete``).
+
+.. index::
+ single: Operation History
+
+Operation History
+#################
+
+A node's resource history is held in the ``lrm_resources`` tag (a child
+of the ``lrm`` tag). The information stored here is sufficient for the
+cluster to stop the resource safely if it is removed from the
+``configuration`` section. Specifically, the resource's ``id``, ``class``,
+``type`` and ``provider`` are stored.
+
+.. topic:: A record of the ``apcstonith`` resource
+
+ .. code-block:: xml
+
+ <lrm_resource id="apcstonith" type="fence_apc_snmp" class="stonith"/>
+
+Additionally, we store the last job for every combination of
+``resource``, ``action`` and ``interval``. The concatenation of the values in
+this tuple is used to create the id of the ``lrm_rsc_op`` object; for example,
+a recurring 30-second monitor of ``pingd:0`` is recorded as
+``pingd:0_monitor_30000``.
+
+.. table:: **Contents of an lrm_rsc_op job**
+ :class: longtable
+ :widths: 1 3
+
+ +------------------+----------------------------------------------------------+
+ | Field | Description |
+ +==================+==========================================================+
+ | id | .. index:: |
+ | | single: id; action status |
+ | | single: action; status, id |
+ | | |
+   |                  | Identifier for the job, constructed from the             |
+   |                  | resource's ``id``, ``operation`` and ``interval``.       |
+ +------------------+----------------------------------------------------------+
+ | call-id | .. index:: |
+ | | single: call-id; action status |
+ | | single: action; status, call-id |
+ | | |
+ | | The job's ticket number. Used as a sort key to determine |
+ | | the order in which the jobs were executed. |
+ +------------------+----------------------------------------------------------+
+ | operation | .. index:: |
+ | | single: operation; action status |
+ | | single: action; status, operation |
+ | | |
+ | | The action the resource agent was invoked with. |
+ +------------------+----------------------------------------------------------+
+ | interval | .. index:: |
+ | | single: interval; action status |
+ | | single: action; status, interval |
+ | | |
+ | | The frequency, in milliseconds, at which the operation |
+ | | will be repeated. A one-off job is indicated by 0. |
+ +------------------+----------------------------------------------------------+
+ | op-status | .. index:: |
+ | | single: op-status; action status |
+ | | single: action; status, op-status |
+ | | |
+   |                  | The job's status. Generally this will be either 0 (done) |
+   |                  | or -1 (pending). ``rc-code`` is generally consulted      |
+   |                  | instead.                                                 |
+ +------------------+----------------------------------------------------------+
+ | rc-code | .. index:: |
+ | | single: rc-code; action status |
+ | | single: action; status, rc-code |
+ | | |
+ | | The job's result. Refer to the *Resource Agents* chapter |
+ | | of *Pacemaker Administration* for details on what the |
+ | | values here mean and how they are interpreted. |
+ +------------------+----------------------------------------------------------+
+ | last-rc-change | .. index:: |
+ | | single: last-rc-change; action status |
+ | | single: action; status, last-rc-change |
+ | | |
+ | | Machine-local date/time, in seconds since epoch, at |
+ | | which the job first returned the current value of |
+ | | ``rc-code``. For diagnostic purposes. |
+ +------------------+----------------------------------------------------------+
+ | exec-time | .. index:: |
+ | | single: exec-time; action status |
+ | | single: action; status, exec-time |
+ | | |
+ | | Time, in milliseconds, that the job was running for. |
+ | | For diagnostic purposes. |
+ +------------------+----------------------------------------------------------+
+ | queue-time | .. index:: |
+ | | single: queue-time; action status |
+ | | single: action; status, queue-time |
+ | | |
+ | | Time, in seconds, that the job was queued for in the |
+ | | local executor. For diagnostic purposes. |
+ +------------------+----------------------------------------------------------+
+ | crm_feature_set | .. index:: |
+ | | single: crm_feature_set; action status |
+ | | single: action; status, crm_feature_set |
+ | | |
+ | | The version which this job description conforms to. Used |
+ | | when processing ``op-digest``. |
+ +------------------+----------------------------------------------------------+
+ | transition-key | .. index:: |
+ | | single: transition-key; action status |
+ | | single: action; status, transition-key |
+ | | |
+ | | A concatenation of the job's graph action number, the |
+ | | graph number, the expected result and the UUID of the |
+ | | controller instance that scheduled it. This is used to |
+ | | construct ``transition-magic`` (below). |
+ +------------------+----------------------------------------------------------+
+ | transition-magic | .. index:: |
+ | | single: transition-magic; action status |
+ | | single: action; status, transition-magic |
+ | | |
+ | | A concatenation of the job's ``op-status``, ``rc-code`` |
+ | | and ``transition-key``. Guaranteed to be unique for the |
+ | | life of the cluster (which ensures it is part of CIB |
+ | | update notifications) and contains all the information |
+ | | needed for the controller to correctly analyze and |
+ | | process the completed job. Most importantly, the |
+ | | decomposed elements tell the controller if the job |
+ | | entry was expected and whether it failed. |
+ +------------------+----------------------------------------------------------+
+ | op-digest | .. index:: |
+ | | single: op-digest; action status |
+ | | single: action; status, op-digest |
+ | | |
+ | | An MD5 sum representing the parameters passed to the |
+ | | job. Used to detect changes to the configuration, to |
+ | | restart resources if necessary. |
+ +------------------+----------------------------------------------------------+
+ | crm-debug-origin | .. index:: |
+ | | single: crm-debug-origin; action status |
+ | | single: action; status, crm-debug-origin |
+ | | |
+ | | The origin of the current values. For diagnostic |
+ | | purposes. |
+ +------------------+----------------------------------------------------------+
+
+Simple Operation History Example
+________________________________
+
+.. topic:: A monitor operation (determines current state of the ``apcstonith`` resource)
+
+ .. code-block:: xml
+
+ <lrm_resource id="apcstonith" type="fence_apc_snmp" class="stonith">
+ <lrm_rsc_op id="apcstonith_monitor_0" operation="monitor" call-id="2"
+ rc-code="7" op-status="0" interval="0"
+ crm-debug-origin="do_update_resource" crm_feature_set="3.0.1"
+ op-digest="2e3da9274d3550dc6526fb24bfcbcba0"
+ transition-key="22:2:7:2668bbeb-06d5-40f9-936d-24cb7f87006a"
+ transition-magic="0:7;22:2:7:2668bbeb-06d5-40f9-936d-24cb7f87006a"
+ last-rc-change="1239008085" exec-time="10" queue-time="0"/>
+ </lrm_resource>
+
+In the above example, the job is a non-recurring monitor operation
+often referred to as a "probe" for the ``apcstonith`` resource.
+
+The cluster schedules probes for every configured resource on a node when
+the node first starts, in order to determine the resource's current state
+before it takes any further action.
+
+From the ``transition-key``, we can see that this was the 22nd action of
+the 2nd graph produced by this instance of the controller
+(2668bbeb-06d5-40f9-936d-24cb7f87006a).
+
+The third field of the ``transition-key`` contains a 7, which indicates
+that the job expects to find the resource inactive. By looking at the ``rc-code``
+property, we see that this was the case.
+
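+The example's ``transition-key`` and ``transition-magic`` decompose as:
+
+.. code-block:: none
+
+   transition-key   = 22:2:7:2668bbeb-06d5-40f9-936d-24cb7f87006a
+                      action number : graph number : expected rc-code : controller UUID
+
+   transition-magic = 0:7;22:2:7:2668bbeb-06d5-40f9-936d-24cb7f87006a
+                      op-status : rc-code ; transition-key
+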
+As that is the only job recorded for this node, we can conclude that
+the cluster started the resource elsewhere.
+
+Complex Operation History Example
+_________________________________
+
+.. topic:: Resource history of a ``pingd`` clone with multiple jobs
+
+ .. code-block:: xml
+
+ <lrm_resource id="pingd:0" type="pingd" class="ocf" provider="pacemaker">
+ <lrm_rsc_op id="pingd:0_monitor_30000" operation="monitor" call-id="34"
+ rc-code="0" op-status="0" interval="30000"
+ crm-debug-origin="do_update_resource" crm_feature_set="3.0.1"
+ transition-key="10:11:0:2668bbeb-06d5-40f9-936d-24cb7f87006a"
+ last-rc-change="1239009741" exec-time="10" queue-time="0"/>
+ <lrm_rsc_op id="pingd:0_stop_0" operation="stop"
+ crm-debug-origin="do_update_resource" crm_feature_set="3.0.1" call-id="32"
+ rc-code="0" op-status="0" interval="0"
+ transition-key="11:11:0:2668bbeb-06d5-40f9-936d-24cb7f87006a"
+ last-rc-change="1239009741" exec-time="10" queue-time="0"/>
+ <lrm_rsc_op id="pingd:0_start_0" operation="start" call-id="33"
+ rc-code="0" op-status="0" interval="0"
+ crm-debug-origin="do_update_resource" crm_feature_set="3.0.1"
+ transition-key="31:11:0:2668bbeb-06d5-40f9-936d-24cb7f87006a"
+ last-rc-change="1239009741" exec-time="10" queue-time="0" />
+ <lrm_rsc_op id="pingd:0_monitor_0" operation="monitor" call-id="3"
+ rc-code="0" op-status="0" interval="0"
+ crm-debug-origin="do_update_resource" crm_feature_set="3.0.1"
+ transition-key="23:2:7:2668bbeb-06d5-40f9-936d-24cb7f87006a"
+ last-rc-change="1239008085" exec-time="20" queue-time="0"/>
+ </lrm_resource>
+
+When more than one job record exists, it is important to first sort
+them by ``call-id`` before interpreting them.
+
+Once sorted, the above example can be summarized as:
+
+#. A non-recurring monitor operation returning 7 (not running), with a ``call-id`` of 3
+#. A stop operation returning 0 (success), with a ``call-id`` of 32
+#. A start operation returning 0 (success), with a ``call-id`` of 33
+#. A recurring monitor returning 0 (success), with a ``call-id`` of 34
+
+The cluster processes each job record to build up a picture of the
+resource's state. After the first and second entries, it is
+considered stopped, and after the third it is considered active.
+
+Based on the last operation, we can tell that the resource is
+currently active.
+
+Additionally, from the presence of a ``stop`` operation with a lower
+``call-id`` than that of the ``start`` operation, we can conclude that the
+resource has been restarted. Specifically this occurred as part of
+actions 11 and 31 of transition 11 from the controller instance with the key
+``2668bbeb...``. This information can be helpful for locating the
+relevant section of the logs when looking for the source of a failure.
+
+.. [#] You can use the standard ``date`` command to print a human-readable version
+ of any seconds-since-epoch value, for example ``date -d @1239009742``.
diff --git a/doc/sphinx/Pacemaker_Explained/utilization.rst b/doc/sphinx/Pacemaker_Explained/utilization.rst
new file mode 100644
index 0000000..93c67cd
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Explained/utilization.rst
@@ -0,0 +1,264 @@
+.. _utilization:
+
+Utilization and Placement Strategy
+----------------------------------
+
+Pacemaker decides where to place a resource according to the resource
+allocation scores on every node. The resource will be allocated to the
+node where the resource has the highest score.
+
+If the resource allocation scores on all the nodes are equal, then under the
+default placement strategy, Pacemaker will choose the node with the fewest
+allocated resources, to balance the load. If the number of resources on each
+node is equal, the first eligible node listed in the CIB will be chosen to run
+the resource.
+
+Often, in real-world situations, different resources use significantly
+different proportions of a node's capacities (memory, I/O, etc.), so the load
+cannot be balanced ideally according to the number of resources alone.
+Moreover, if resources are placed such that their combined requirements exceed
+the provided capacity, they may fail to start entirely or may run with
+degraded performance.
+
+To take these factors into account, Pacemaker allows you to configure:
+
+#. The capacity a certain node provides.
+
+#. The capacity a certain resource requires.
+
+#. An overall strategy for placement of resources.
+
+Utilization attributes
+######################
+
+To configure the capacity that a node provides or a resource requires,
+you can use *utilization attributes* in ``node`` and ``resource`` objects.
+You can name utilization attributes according to your preferences and define as
+many name/value pairs as your configuration needs. However, the attributes'
+values must be integers.
+
+.. topic:: Specifying CPU and RAM capacities of two nodes
+
+ .. code-block:: xml
+
+ <node id="node1" type="normal" uname="node1">
+ <utilization id="node1-utilization">
+ <nvpair id="node1-utilization-cpu" name="cpu" value="2"/>
+ <nvpair id="node1-utilization-memory" name="memory" value="2048"/>
+ </utilization>
+ </node>
+ <node id="node2" type="normal" uname="node2">
+ <utilization id="node2-utilization">
+ <nvpair id="node2-utilization-cpu" name="cpu" value="4"/>
+ <nvpair id="node2-utilization-memory" name="memory" value="4096"/>
+ </utilization>
+ </node>
+
+.. topic:: Specifying CPU and RAM consumed by several resources
+
+ .. code-block:: xml
+
+ <primitive id="rsc-small" class="ocf" provider="pacemaker" type="Dummy">
+ <utilization id="rsc-small-utilization">
+ <nvpair id="rsc-small-utilization-cpu" name="cpu" value="1"/>
+ <nvpair id="rsc-small-utilization-memory" name="memory" value="1024"/>
+ </utilization>
+ </primitive>
+ <primitive id="rsc-medium" class="ocf" provider="pacemaker" type="Dummy">
+ <utilization id="rsc-medium-utilization">
+ <nvpair id="rsc-medium-utilization-cpu" name="cpu" value="2"/>
+ <nvpair id="rsc-medium-utilization-memory" name="memory" value="2048"/>
+ </utilization>
+ </primitive>
+ <primitive id="rsc-large" class="ocf" provider="pacemaker" type="Dummy">
+ <utilization id="rsc-large-utilization">
+ <nvpair id="rsc-large-utilization-cpu" name="cpu" value="3"/>
+ <nvpair id="rsc-large-utilization-memory" name="memory" value="3072"/>
+ </utilization>
+ </primitive>
+
+A node is considered eligible for a resource if it has sufficient free
+capacity to satisfy the resource's requirements. The nature of the required
+or provided capacities is completely irrelevant to Pacemaker -- it just makes
+sure that all capacity requirements of a resource are satisfied before placing
+the resource on a node.
+
+Utilization attributes used on a node object can also be *transient* *(since 2.1.6)*.
+These attributes are added to a ``transient_attributes`` section for the node
+and are forgotten by the cluster when the node goes offline. The ``attrd_updater``
+tool can be used to set these attributes.
+
+.. topic:: Transient utilization attribute for node cluster-1
+
+ .. code-block:: xml
+
+ <transient_attributes id="cluster-1">
+ <utilization id="status-cluster-1">
+ <nvpair id="status-cluster-1-cpu" name="cpu" value="1"/>
+ </utilization>
+ </transient_attributes>
+
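+For example, the transient attribute above could be set with
+``attrd_updater`` using its ``--utilization`` option (hedged; check the
+man page for the version you have installed):
+
+.. code-block:: none
+
+   # attrd_updater --node cluster-1 --name cpu --update 1 --utilization
+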
+.. note::
+
+ Utilization is supported for bundles *(since 2.1.3)*, but only for bundles
+ with an inner primitive. Any resource utilization values should be specified
+ for the inner primitive, but any priority meta-attribute should be specified
+ for the outer bundle.
+
+
+Placement Strategy
+##################
+
+After you have configured the capacities your nodes provide and the
+capacities your resources require, you need to set the ``placement-strategy``
+in the global cluster options, otherwise the capacity configurations have
+*no effect*.
+
+Four values are available for the ``placement-strategy``:
+
+* **default**
+
+ Utilization values are not taken into account at all.
+ Resources are allocated according to allocation scores. If scores are equal,
+ resources are evenly distributed across nodes.
+
+* **utilization**
+
+ Utilization values are taken into account *only* when deciding whether a node
+ is considered eligible (i.e. whether it has sufficient free capacity to satisfy
+ the resource's requirements). Load-balancing is still done based on the
+ number of resources allocated to a node.
+
+* **balanced**
+
+ Utilization values are taken into account when deciding whether a node
+ is eligible to serve a resource *and* when load-balancing, so an attempt is
+ made to spread the resources in a way that optimizes resource performance.
+
+* **minimal**
+
+ Utilization values are taken into account *only* when deciding whether a node
+ is eligible to serve a resource. For load-balancing, an attempt is made to
+ concentrate the resources on as few nodes as possible, thereby enabling
+ possible power savings on the remaining nodes.
+
+Set ``placement-strategy`` with ``crm_attribute``:
+
+ .. code-block:: none
+
+ # crm_attribute --name placement-strategy --update balanced
+
+Now Pacemaker will ensure the load from your resources will be distributed
+evenly throughout the cluster, without the need for convoluted sets of
+colocation constraints.
+
+Allocation Details
+##################
+
+Which node is preferred to get consumed first when allocating resources?
+________________________________________________________________________
+
+* The node with the highest node weight gets consumed first. Node weight
+ is a score maintained by the cluster to represent node health.
+
+* If multiple nodes have the same node weight:
+
+ * If ``placement-strategy`` is ``default`` or ``utilization``,
+ the node that has the least number of allocated resources gets consumed first.
+
+ * If their numbers of allocated resources are equal,
+ the first eligible node listed in the CIB gets consumed first.
+
+ * If ``placement-strategy`` is ``balanced``,
+ the node that has the most free capacity gets consumed first.
+
+ * If the free capacities of the nodes are equal,
+ the node that has the least number of allocated resources gets consumed first.
+
+ * If their numbers of allocated resources are equal,
+ the first eligible node listed in the CIB gets consumed first.
+
+ * If ``placement-strategy`` is ``minimal``,
+ the first eligible node listed in the CIB gets consumed first.
+
+Which node has more free capacity?
+__________________________________
+
+If only one type of utilization attribute has been defined, free capacity
+is a simple numeric comparison.
+
+If multiple types of utilization attributes have been defined, then the
+node that is numerically highest in the most attribute types has the
+most free capacity. For example:
+
+* If ``nodeA`` has more free ``cpus``, and ``nodeB`` has more free ``memory``,
+ then their free capacities are equal.
+
+* If ``nodeA`` has more free ``cpus``, while ``nodeB`` has more free ``memory``
+ and ``storage``, then ``nodeB`` has more free capacity.
+
+Which resource is preferred to be assigned first?
+_________________________________________________
+
+* The resource that has the highest ``priority`` (see :ref:`resource_options`) gets
+ allocated first.
+
+* If their priorities are equal, check whether they are already running. The
+ resource that has the highest score on the node where it's running gets allocated
+ first, to prevent resource shuffling.
+
+* If the scores above are equal or the resources are not running, the
+  resource that has the highest score on the preferred node gets allocated
+  first.
+
+* If the scores above are equal, the first runnable resource listed in the CIB
+ gets allocated first.
+
+Limitations and Workarounds
+###########################
+
+The type of problem Pacemaker is dealing with here is known as the
+`knapsack problem <http://en.wikipedia.org/wiki/Knapsack_problem>`_ and falls into
+the `NP-complete <http://en.wikipedia.org/wiki/NP-complete>`_ category of computer
+science problems -- a fancy way of saying "it takes a really long time
+to solve".
+
+Clearly, in an HA cluster it's not acceptable to spend minutes, let alone
+hours or days, finding an optimal solution while services remain unavailable.
+
+So instead of trying to solve the problem completely, Pacemaker uses a
+*best effort* algorithm to determine which node should host a particular
+service. This means it arrives at a solution much faster than traditional
+linear programming algorithms, but at the price of sometimes leaving some
+services stopped.
+
+In the contrived example at the start of this chapter:
+
+* ``rsc-small`` would be allocated to ``node1``
+
+* ``rsc-medium`` would be allocated to ``node2``
+
+* ``rsc-large`` would remain inactive
+
+This is not ideal.
+
+There are various approaches to dealing with the limitations of
+Pacemaker's placement strategy:
+
+* **Ensure you have sufficient physical capacity.**
+
+ It might sound obvious, but if the physical capacity of your nodes is (close to)
+ maxed out by the cluster under normal conditions, then failover isn't going to
+ go well. Even without the utilization feature, you'll start hitting timeouts and
+ getting secondary failures.
+
+* **Build some buffer into the capabilities advertised by the nodes.**
+
+  Advertise slightly more resources than the nodes physically have, on the
+  (usually valid) assumption that a resource will not use 100% of its
+  configured amount of CPU, memory and so forth *all* the time. This
+  practice is sometimes called *overcommit*.
+
+* **Specify resource priorities.**
+
+ If the cluster is going to sacrifice services, it should be the ones you care
+ about (comparatively) the least. Ensure that resource priorities are properly set
+ so that your most important resources are scheduled first.
diff --git a/doc/sphinx/Pacemaker_Python_API/_templates/custom-class-template.rst b/doc/sphinx/Pacemaker_Python_API/_templates/custom-class-template.rst
new file mode 100644
index 0000000..8d9b5b9
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Python_API/_templates/custom-class-template.rst
@@ -0,0 +1,32 @@
+{{ fullname | escape | underline}}
+
+.. currentmodule:: {{ module }}
+
+.. autoclass:: {{ objname }}
+ :members:
+ :show-inheritance:
+ :inherited-members:
+
+ {% block methods %}
+ .. automethod:: __init__
+
+ {% if methods %}
+ .. rubric:: {{ 'Methods' }}
+
+ .. autosummary::
+ {% for item in methods %}
+ ~{{ name }}.{{ item }}
+ {%- endfor %}
+ {% endif %}
+ {% endblock %}
+
+ {% block attributes %}
+ {% if attributes %}
+ .. rubric:: {{ 'Attributes' }}
+
+ .. autosummary::
+ {% for item in attributes %}
+ ~{{ name }}.{{ item }}
+ {%- endfor %}
+ {% endif %}
+ {% endblock %}
diff --git a/doc/sphinx/Pacemaker_Python_API/_templates/custom-module-template.rst b/doc/sphinx/Pacemaker_Python_API/_templates/custom-module-template.rst
new file mode 100644
index 0000000..ffb4f5c
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Python_API/_templates/custom-module-template.rst
@@ -0,0 +1,65 @@
+{{ fullname | escape | underline}}
+
+.. automodule:: {{ fullname }}
+
+ {% block attributes %}
+ {% if attributes %}
+ .. rubric:: {{ 'Module Attributes' }}
+
+ .. autosummary::
+ :toctree:
+ {% for item in attributes %}
+ {{ item }}
+ {%- endfor %}
+ {% endif %}
+ {% endblock %}
+
+ {% block functions %}
+ {% if functions %}
+ .. rubric:: {{ 'Functions' }}
+
+ .. autosummary::
+ :toctree:
+ {% for item in functions %}
+ {{ item }}
+ {%- endfor %}
+ {% endif %}
+ {% endblock %}
+
+ {% block classes %}
+ {% if classes %}
+ .. rubric:: {{ 'Classes' }}
+
+ .. autosummary::
+ :toctree:
+ :template: custom-class-template.rst
+ {% for item in classes %}
+ {{ item }}
+ {%- endfor %}
+ {% endif %}
+ {% endblock %}
+
+ {% block exceptions %}
+ {% if exceptions %}
+ .. rubric:: {{ 'Exceptions' }}
+
+ .. autosummary::
+ :toctree:
+ {% for item in exceptions %}
+ {{ item }}
+ {%- endfor %}
+ {% endif %}
+ {% endblock %}
+
+{% block modules %}
+{% if modules %}
+.. rubric:: Modules
+
+.. autosummary::
+ :toctree:
+ :template: custom-module-template.rst
+{% for item in modules %}
+ {{ item }}
+{%- endfor %}
+{% endif %}
+{% endblock %}
diff --git a/doc/sphinx/Pacemaker_Python_API/api.rst b/doc/sphinx/Pacemaker_Python_API/api.rst
new file mode 100644
index 0000000..01b74d3
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Python_API/api.rst
@@ -0,0 +1,10 @@
+API
+===
+
+.. autosummary::
+ :toctree: generated
+ :template: custom-module-template.rst
+
+ pacemaker
+ pacemaker.buildoptions
+ pacemaker.exitstatus
diff --git a/doc/sphinx/Pacemaker_Python_API/index.rst b/doc/sphinx/Pacemaker_Python_API/index.rst
new file mode 100644
index 0000000..5c7f191
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Python_API/index.rst
@@ -0,0 +1,11 @@
+Contents
+--------
+
+The APIs are documented here in submodules, but each submodule class is
+included at the top level, so code should import directly from the
+``pacemaker`` module. For example, use ``from pacemaker import BuildOptions``,
+not ``from pacemaker.buildoptions import BuildOptions``.
+
+.. toctree::
+
+ api
diff --git a/doc/sphinx/Pacemaker_Remote/alternatives.rst b/doc/sphinx/Pacemaker_Remote/alternatives.rst
new file mode 100644
index 0000000..83ed67c
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Remote/alternatives.rst
@@ -0,0 +1,95 @@
+Alternative Configurations
+--------------------------
+
+These alternative configurations may be appropriate in limited cases, such as a
+test cluster, but are not the best method in most situations. They are
+presented here for completeness and as an example of Pacemaker's flexibility
+to suit your needs.
+
+.. index::
+ single: virtual machine; as cluster node
+
+Virtual Machines as Cluster Nodes
+#################################
+
+The preferred use of virtual machines in a Pacemaker cluster is as a
+cluster resource, whether opaque or as a guest node. However, it is
+possible to run the full cluster stack on a virtual node instead.
+
+This is commonly used to set up test environments; a single physical host
+(that does not participate in the cluster) runs two or more virtual machines,
+all running the full cluster stack. This can be used to simulate a
+larger cluster for testing purposes.
+
+In a production environment, fencing becomes more complicated, especially
+if the underlying hosts run any services besides the clustered VMs.
+If the VMs are not guaranteed a minimum amount of host resources,
+CPU and I/O contention can cause timing issues for cluster components.
+
+Another situation where this approach is sometimes used is when
+the cluster owner leases the VMs from a provider and does not have
+direct access to the underlying host. The main concerns in this case
+are proper fencing (usually via a custom resource agent that communicates
+with the provider's APIs) and maintaining a static IP address between reboots,
+as well as resource contention issues.
+
+.. index::
+ single: virtual machine; as remote node
+
+Virtual Machines as Remote Nodes
+################################
+
+Virtual machines may be configured following the process for remote nodes
+rather than guest nodes (i.e., using an **ocf:pacemaker:remote** resource
+rather than letting the cluster manage the VM directly).
+
+This is mainly useful in testing, to use a single physical host to simulate a
+larger cluster involving remote nodes. Pacemaker's Cluster Test Suite (CTS)
+uses this approach to test remote node functionality.
+
+.. index::
+ single: container; as guest node
+ single: container; LXC
+ single: container; Docker
+ single: container; bundle
+ single: LXC
+ single: Docker
+ single: bundle
+
+Containers as Guest Nodes
+#########################
+
+`Containers <https://en.wikipedia.org/wiki/Operating-system-level_virtualization>`_,
+and in particular Linux containers (LXC) and Docker, have become a popular
+method of isolating services in a resource-efficient manner.
+
+The preferred means of integrating containers into Pacemaker is as a
+cluster resource, whether opaque or using Pacemaker's ``bundle`` resource type.
+
+However, it is possible to run ``pacemaker_remote`` inside a container,
+following the process for guest nodes. This is not recommended, but can be
+useful in testing scenarios, for example to simulate a large number of guest
+nodes.
+
+The configuration process is very similar to that described for guest nodes
+using virtual machines. Key differences:
+
+* The underlying host must install the libvirt driver for the desired container
+ technology -- for example, the ``libvirt-daemon-lxc`` package to get the
+ `libvirt-lxc <http://libvirt.org/drvlxc.html>`_ driver for LXC containers.
+
+* Libvirt XML definitions must be generated for the containers. The
+ ``pacemaker-cts`` package includes a script for this purpose,
+ ``/usr/share/pacemaker/tests/cts/lxc_autogen.sh``. Run it with the
+ ``--help`` option for details on how to use it. It is intended for testing
+ purposes only, and hardcodes various parameters that would need to be set
+ appropriately in real usage. Of course, you can create XML definitions
+ manually, following the appropriate libvirt driver documentation.
+
+* To share the authentication key, either share the host's ``/etc/pacemaker``
+ directory with the container, or copy the key into the container's
+ filesystem.
+
+* The **VirtualDomain** resource for a container will need
+ **force_stop="true"** and an appropriate hypervisor option,
+ for example **hypervisor="lxc:///"** for LXC containers.
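+
+As a rough illustration, creating a ``VirtualDomain`` resource for a
+hypothetical LXC container configured as a guest node might look something
+like the following (the resource name, node name, and configuration path here
+are made up):
+
+.. code-block:: none
+
+   # pcs resource create container1 VirtualDomain hypervisor="lxc:///" \
+       config="/etc/libvirt/lxc/container1.xml" force_stop="true" \
+       meta remote-node="container1-node"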
diff --git a/doc/sphinx/Pacemaker_Remote/baremetal-tutorial.rst b/doc/sphinx/Pacemaker_Remote/baremetal-tutorial.rst
new file mode 100644
index 0000000..a3c0fbe
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Remote/baremetal-tutorial.rst
@@ -0,0 +1,288 @@
+.. index::
+ single: remote node; walk-through
+
+Remote Node Walk-through
+------------------------
+
+**What this tutorial is:** An in-depth walk-through of how to get Pacemaker to
+integrate a remote node into the cluster as a node capable of running cluster
+resources.
+
+**What this tutorial is not:** A realistic deployment scenario. The steps shown
+here are meant to get users familiar with the concept of remote nodes as
+quickly as possible.
+
+Configure Cluster Nodes
+#######################
+
+This walk-through assumes you already have a Pacemaker cluster configured. As
+an example, we will use a cluster with two cluster nodes named pcmk-1 and
+pcmk-2. You can substitute your own node names, for however many nodes you
+have. If you are not familiar with setting up basic Pacemaker clusters, follow
+the walk-through in the Clusters from Scratch document before attempting this
+one.
+
+Configure Remote Node
+#####################
+
+.. index::
+ single: remote node; firewall
+
+Configure Firewall on Remote Node
+_________________________________
+
+Allow cluster-related services through the local firewall:
+
+.. code-block:: none
+
+ # firewall-cmd --permanent --add-service=high-availability
+ success
+ # firewall-cmd --reload
+ success
+
+.. NOTE::
+
+ If you are using some other firewall solution besides firewalld,
+ simply open the following ports, which can be used by various
+ clustering components: TCP ports 2224, 3121, and 21064.
+
+ If you run into any problems during testing, you might want to disable
+ the firewall and SELinux entirely until you have everything working.
+ This may create significant security issues and should not be performed on
+ machines that will be exposed to the outside world, but may be appropriate
+ during development and testing on a protected host.
+
+ To disable security measures:
+
+ .. code-block:: none
+
+ # setenforce 0
+ # sed -i.bak "s/SELINUX=enforcing/SELINUX=permissive/g" \
+ /etc/selinux/config
+ # systemctl mask firewalld.service
+ # systemctl stop firewalld.service
+
+Configure ``/etc/hosts``
+________________________
+
+You will need to add the remote node's hostname (we're using **remote1** in
+this tutorial) to the cluster nodes' ``/etc/hosts`` files if you haven't already.
+This is required unless you have DNS set up in such a way that remote1's
+address can be discovered.
+
+For each remote node, execute the following on each cluster node and on the
+remote nodes, replacing the IP address with the actual IP address of the remote
+node.
+
+.. code-block:: none
+
+ # cat << END >> /etc/hosts
+ 192.168.122.10 remote1
+ END
+
+Also add entries for each cluster node to the ``/etc/hosts`` file on each
+remote node. For example:
+
+.. code-block:: none
+
+ # cat << END >> /etc/hosts
+ 192.168.122.101 pcmk-1
+ 192.168.122.102 pcmk-2
+ END
+
+Configure pacemaker_remote on Remote Node
+_________________________________________
+
+Install the pacemaker_remote daemon on the remote node.
+
+.. code-block:: none
+
+ [root@remote1 ~]# dnf config-manager --set-enabled highavailability
+ [root@remote1 ~]# dnf install -y pacemaker-remote resource-agents pcs
+
+Prepare ``pcsd``
+________________
+
+Now we need to prepare ``pcsd`` on the remote node so that we can use ``pcs``
+commands to communicate with it.
+
+Start and enable the ``pcsd`` daemon on the remote node.
+
+.. code-block:: none
+
+ [root@remote1 ~]# systemctl start pcsd
+ [root@remote1 ~]# systemctl enable pcsd
+ Created symlink /etc/systemd/system/multi-user.target.wants/pcsd.service → /usr/lib/systemd/system/pcsd.service.
+
+Next, set a password for the ``hacluster`` user on the remote node.
+
+.. code-block:: none
+
+   [root@remote1 ~]# echo MyPassword | passwd --stdin hacluster
+   Changing password for user hacluster.
+   passwd: all authentication tokens updated successfully.
+
+Now authenticate the existing cluster nodes to ``pcsd`` on the remote node. The
+below command only needs to be run from one cluster node.
+
+.. code-block:: none
+
+ [root@pcmk-1 ~]# pcs host auth remote1 -u hacluster
+ Password:
+ remote1: Authorized
+
+Integrate Remote Node into Cluster
+__________________________________
+
+Integrating a remote node into the cluster is achieved through the
+creation of a remote node connection resource. The remote node connection
+resource both establishes the connection to the remote node and defines that
+the remote node exists. Note that this resource is actually internal to
+Pacemaker's controller. The metadata for this resource can be found in
+the ``/usr/lib/ocf/resource.d/pacemaker/remote`` file. The metadata in this file
+describes what options are available, but there is no actual
+**ocf:pacemaker:remote** resource agent script that performs any work.
+
+Define the remote node connection resource to our remote node,
+**remote1**, using the following command on any cluster node. This
+command creates the ocf:pacemaker:remote resource; creates the authkey if it
+does not exist already and distributes it to the remote node; and starts and
+enables ``pacemaker-remoted`` on the remote node.
+
+.. code-block:: none
+
+ [root@pcmk-1 ~]# pcs cluster node add-remote remote1
+ No addresses specified for host 'remote1', using 'remote1'
+ Sending 'pacemaker authkey' to 'remote1'
+ remote1: successful distribution of the file 'pacemaker authkey'
+ Requesting 'pacemaker_remote enable', 'pacemaker_remote start' on 'remote1'
+ remote1: successful run of 'pacemaker_remote enable'
+ remote1: successful run of 'pacemaker_remote start'
+
+That's it. After a moment, you should see the remote node come online. The
+final ``pcs status`` output should look something like this, and you can see
+that it created the ocf:pacemaker:remote resource:
+
+.. code-block:: none
+
+ [root@pcmk-1 ~]# pcs status
+ Cluster name: mycluster
+ Cluster Summary:
+ * Stack: corosync
+ * Current DC: pcmk-1 (version 2.1.2-4.el9-ada5c3b36e2) - partition with quorum
+ * Last updated: Wed Aug 10 05:17:28 2022
+ * Last change: Wed Aug 10 05:17:26 2022 by root via cibadmin on pcmk-1
+ * 3 nodes configured
+ * 2 resource instances configured
+
+ Node List:
+ * Online: [ pcmk-1 pcmk-2 ]
+ * RemoteOnline: [ remote1 ]
+
+ Full List of Resources:
+ * xvm (stonith:fence_xvm): Started pcmk-1
+ * remote1 (ocf:pacemaker:remote): Started pcmk-1
+
+ Daemon Status:
+ corosync: active/disabled
+ pacemaker: active/disabled
+ pcsd: active/enabled
+
+How pcs Configures the Remote
+#############################
+
+Let's take a closer look at what the ``pcs cluster node add-remote`` command is
+doing. There is no need to run any of the commands in this section.
+
+First, ``pcs`` copies the Pacemaker authkey file to the node that will become
+the remote node. If an authkey is not already present on the cluster nodes,
+this command creates one and distributes it to the existing nodes and to the
+remote node.
+
+If you want to do this manually, you can run a command like the following to
+generate an authkey in ``/etc/pacemaker/authkey``, and then distribute the key
+to the rest of the nodes and to the new remote node.
+
+.. code-block:: none
+
+ [root@pcmk-1 ~]# dd if=/dev/urandom of=/etc/pacemaker/authkey bs=4096 count=1
+
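+The key could then be copied to the other nodes, for example (assuming SSH
+access as root and that ``/etc/pacemaker`` already exists on each target):
+
+.. code-block:: none
+
+   [root@pcmk-1 ~]# scp -p /etc/pacemaker/authkey remote1:/etc/pacemaker/authkey
+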
+Then ``pcs`` starts and enables the ``pacemaker_remote`` service on the remote
+node. If you want to do this manually, run the following commands.
+
+.. code-block:: none
+
+   [root@remote1 ~]# systemctl start pacemaker_remote
+   [root@remote1 ~]# systemctl enable pacemaker_remote
+
+Starting Resources on Remote Node
+#################################
+
+Once the remote node is integrated into the cluster, starting and managing
+resources on a remote node is exactly the same as on cluster nodes. Refer to
+the `Clusters from Scratch <http://clusterlabs.org/doc/>`_ document for
+examples of resource creation.
+
+.. WARNING::
+
+ Never involve a remote node connection resource in a resource group,
+ colocation constraint, or order constraint.
+
+
+.. index::
+ single: remote node; fencing
+
+Fencing Remote Nodes
+####################
+
+Remote nodes are fenced the same way as cluster nodes. No special
+considerations are required. Configure fencing resources for use with
+remote nodes the same as you would with cluster nodes.
+
+Note, however, that remote nodes can never *initiate* a fencing action. Only
+cluster nodes are capable of actually executing a fencing operation against
+another node.
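+
+For example, if the cluster nodes already use ``fence_xvm``, a hypothetical
+fencing resource that also covers the remote node could be created with:
+
+.. code-block:: none
+
+   [root@pcmk-1 ~]# pcs stonith create xvm-remote1 fence_xvm \
+       pcmk_host_list=remote1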
+
+Accessing Cluster Tools from a Remote Node
+##########################################
+
+Besides allowing the cluster to manage resources on a remote node,
+``pacemaker_remote`` has one other trick: it allows nearly all of the
+Pacemaker command-line tools (``crm_resource``, ``crm_mon``,
+``crm_attribute``, etc.) to work natively on remote nodes.
+
+Try it: Run ``crm_mon`` on the remote node after Pacemaker has integrated it
+into the cluster. These tools just work. This means resource agents used by
+promotable resources (which need access to tools like ``crm_attribute``) work
+seamlessly on remote nodes.
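+
+For example, a one-shot status check can be run directly on the remote node:
+
+.. code-block:: none
+
+   [root@remote1 ~]# crm_mon -1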
+
+Higher-level command shells such as ``pcs`` may have partial support
+on remote nodes, but it is recommended to run them from a cluster node.
+
+Troubleshooting a Remote Connection
+###################################
+
+If connectivity issues occur, it's worth verifying that the cluster nodes can
+communicate with the remote node on TCP port 3121. We can use the ``nc`` command
+to test the connection.
+
+On the cluster nodes, install the package that provides the ``nc`` command. The
+package name may vary by distribution; on |REMOTE_DISTRO| |REMOTE_DISTRO_VER|
+it's ``nmap-ncat``.
+
+Now connect using ``nc`` from each of the cluster nodes to the remote node and
+run a ``/bin/true`` command that does nothing except return success. No output
+indicates that the cluster node is able to communicate with the remote node on
+TCP port 3121. An error indicates that the connection failed. This could be due
+to a network issue or because ``pacemaker-remoted`` is not currently running on
+the remote node.
+
+Example of success:
+
+.. code-block:: none
+
+ [root@pcmk-1 ~]# nc remote1 3121 --sh-exec /bin/true
+ [root@pcmk-1 ~]#
+
+Examples of failure:
+
+.. code-block:: none
+
+ [root@pcmk-1 ~]# nc remote1 3121 --sh-exec /bin/true
+ Ncat: Connection refused.
+ [root@pcmk-1 ~]# nc remote1 3121 --sh-exec /bin/true
+ Ncat: No route to host.
+
diff --git a/doc/sphinx/Pacemaker_Remote/images/pcmk-ha-cluster-stack.png b/doc/sphinx/Pacemaker_Remote/images/pcmk-ha-cluster-stack.png
new file mode 100644
index 0000000..163ba45
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Remote/images/pcmk-ha-cluster-stack.png
Binary files differ
diff --git a/doc/sphinx/Pacemaker_Remote/images/pcmk-ha-remote-stack.png b/doc/sphinx/Pacemaker_Remote/images/pcmk-ha-remote-stack.png
new file mode 100644
index 0000000..11985a7
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Remote/images/pcmk-ha-remote-stack.png
Binary files differ
diff --git a/doc/sphinx/Pacemaker_Remote/index.rst b/doc/sphinx/Pacemaker_Remote/index.rst
new file mode 100644
index 0000000..de8e898
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Remote/index.rst
@@ -0,0 +1,44 @@
+Pacemaker Remote
+================
+
+*Scaling High Availability Clusters*
+
+
+Abstract
+--------
+This document exists as both a reference and deployment guide for the Pacemaker
+Remote service.
+
+The example commands in this document will use:
+
+* |REMOTE_DISTRO| |REMOTE_DISTRO_VER| as the host operating system
+* Pacemaker Remote to perform resource management within guest nodes and remote nodes
+* KVM for virtualization
+* libvirt to manage guest nodes
+* Corosync to provide messaging and membership services on cluster nodes
+* Pacemaker 2 to perform resource management on cluster nodes
+* pcs as the cluster configuration toolset
+
+The concepts are the same for other distributions, virtualization platforms,
+toolsets, and messaging layers, and should be easily adaptable.
+
+
+Table of Contents
+-----------------
+
+.. toctree::
+ :maxdepth: 3
+ :numbered:
+
+ intro
+ options
+ kvm-tutorial
+ baremetal-tutorial
+ alternatives
+
+Index
+-----
+
+* :ref:`genindex`
+* :ref:`search`
diff --git a/doc/sphinx/Pacemaker_Remote/intro.rst b/doc/sphinx/Pacemaker_Remote/intro.rst
new file mode 100644
index 0000000..c0edac9
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Remote/intro.rst
@@ -0,0 +1,187 @@
+Scaling a Pacemaker Cluster
+---------------------------
+
+Overview
+########
+
+In a basic Pacemaker high-availability cluster [#]_, each node runs the full
+cluster stack of Corosync and all Pacemaker components. This allows great
+flexibility but limits scalability to around 32 nodes.
+
+To allow for scalability to dozens or even hundreds of nodes, Pacemaker
+allows nodes not running the full cluster stack to integrate into the cluster
+and have the cluster manage their resources as if they were cluster nodes.
+
+Terms
+#####
+
+.. index::
+ single: cluster node
+ single: node; cluster node
+
+**cluster node**
+ A node running the full high-availability stack of corosync and all
+ Pacemaker components. Cluster nodes may run cluster resources, run
+ all Pacemaker command-line tools (``crm_mon``, ``crm_resource`` and so on),
+ execute fencing actions, count toward cluster quorum, and serve as the
+ cluster's Designated Controller (DC).
+
+.. index:: pacemaker-remoted
+
+**pacemaker-remoted**
+ A small service daemon that allows a host to be used as a Pacemaker node
+ without running the full cluster stack. Nodes running ``pacemaker-remoted``
+ may run cluster resources and most command-line tools, but cannot perform
+ other functions of full cluster nodes such as fencing execution, quorum
+ voting, or DC eligibility. The ``pacemaker-remoted`` daemon is an enhanced
+ version of Pacemaker's local executor daemon (pacemaker-execd).
+
+.. index::
+ single: remote node
+ single: node; remote node
+
+**pacemaker_remote**
+   The name of the systemd service that manages ``pacemaker-remoted``.
+
+**Pacemaker Remote**
+ A way to refer to the general technology implementing nodes running
+ ``pacemaker-remoted``, including the cluster-side implementation
+ and the communication protocol between them.
+
+**remote node**
+ A physical host running ``pacemaker-remoted``. Remote nodes have a special
+ resource that manages communication with the cluster. This is sometimes
+ referred to as the *bare metal* case.
+
+.. index::
+ single: guest node
+ single: node; guest node
+
+**guest node**
+ A virtual host running ``pacemaker-remoted``. Guest nodes differ from remote
+ nodes mainly in that the guest node is itself a resource that the cluster
+ manages.
+
+.. NOTE::
+
+ *Remote* in this document refers to the node not being a part of the underlying
+ corosync cluster. It has nothing to do with physical proximity. Remote nodes
+ and guest nodes are subject to the same latency requirements as cluster nodes,
+ which means they are typically in the same data center.
+
+.. NOTE::
+
+ It is important to distinguish the various roles a virtual machine can serve
+ in Pacemaker clusters:
+
+ * A virtual machine can run the full cluster stack, in which case it is a
+ cluster node and is not itself managed by the cluster.
+ * A virtual machine can be managed by the cluster as a resource, without the
+ cluster having any awareness of the services running inside the virtual
+ machine. The virtual machine is *opaque* to the cluster.
+ * A virtual machine can be a cluster resource, and run ``pacemaker-remoted``
+ to make it a guest node, allowing the cluster to manage services
+ inside it. The virtual machine is *transparent* to the cluster.
+
+.. index::
+ single: virtual machine; as guest node
+
+Guest Nodes
+###########
+
+**"I want a Pacemaker cluster to manage virtual machine resources, but I also
+want Pacemaker to be able to manage the resources that live within those
+virtual machines."**
+
+Without ``pacemaker-remoted``, the possibilities for implementing the above use
+case have significant limitations:
+
+* The cluster stack could be run on the physical hosts only, which loses the
+ ability to monitor resources within the guests.
+* A separate cluster could be run on the virtual guests, which quickly hits
+  scalability issues.
+* The cluster stack could be run on the guests using the same cluster as the
+ physical hosts, which also hits scalability issues and complicates fencing.
+
+With ``pacemaker-remoted``:
+
+* The physical hosts are cluster nodes (running the full cluster stack).
+* The virtual machines are guest nodes (running ``pacemaker-remoted``).
+ Nearly zero configuration is required on the virtual machine.
+* The cluster stack on the cluster nodes launches the virtual machines and
+ immediately connects to ``pacemaker-remoted`` on them, allowing the
+ virtual machines to integrate into the cluster.
+
+The key difference here between the guest nodes and the cluster nodes is that
+the guest nodes do not run the cluster stack. This means they will never become
+the DC, initiate fencing actions or participate in quorum voting.
+
+On the other hand, this also means that they are not bound to the scalability
+limits associated with the cluster stack (no 32-node corosync member limits to
+deal with). That isn't to say that guest nodes can scale indefinitely, but it
+is known that guest nodes scale horizontally much further than cluster nodes.
+
+Other than the quorum limitation, these guest nodes behave just like cluster
+nodes with respect to resource management. The cluster is fully capable of
+managing and monitoring resources on each guest node. You can build constraints
+against guest nodes, put them in standby, or do whatever else you'd expect to
+be able to do with cluster nodes. They even show up in ``crm_mon`` output as
+nodes.
+
+To solidify the concept, below is an example that is very similar to an actual
+deployment that we tested in a developer environment to verify guest node
+scalability:
+
+* 16 cluster nodes running the full Corosync + Pacemaker stack
+* 64 Pacemaker-managed virtual machine resources running ``pacemaker-remoted``
+ configured as guest nodes
+* 64 Pacemaker-managed webserver and database resources configured to run on
+ the 64 guest nodes
+
+With this deployment, you would have 64 webservers and databases running on 64
+virtual machines on 16 hardware nodes, all of which are managed and monitored by
+the same Pacemaker deployment. Pacemaker Remote is known to scale at least
+this far, and possibly much further, depending on the specific scenario.
+
+Remote Nodes
+############
+
+**"I want my traditional high-availability cluster to scale beyond the limits
+imposed by the corosync messaging layer."**
+
+Ultimately, the primary advantage of remote nodes over cluster nodes is
+scalability. There are likely some other use cases related to geographically
+distributed HA clusters that remote nodes may serve a purpose in, but those use
+cases are not well understood at this point.
+
+Like guest nodes, remote nodes will never become the DC, initiate
+fencing actions or participate in quorum voting.
+
+That is not to say, however, that fencing of a remote node works any
+differently than that of a cluster node. The Pacemaker scheduler
+understands how to fence remote nodes. As long as a fencing device exists, the
+cluster is capable of ensuring remote nodes are fenced in the exact same way as
+cluster nodes.
+
+Expanding the Cluster Stack
+###########################
+
+With ``pacemaker-remoted``, the traditional view of the high-availability stack
+can be expanded to include a new layer:
+
+Traditional HA Stack
+____________________
+
+.. image:: images/pcmk-ha-cluster-stack.png
+ :alt: Traditional Pacemaker+Corosync Stack
+ :align: center
+
+HA Stack With Guest Nodes
+_________________________
+
+.. image:: images/pcmk-ha-remote-stack.png
+ :alt: Pacemaker+Corosync Stack with pacemaker-remoted
+ :align: center
+
+.. [#] See the `Pacemaker documentation <https://www.clusterlabs.org/doc/>`_,
+   especially *Clusters From Scratch* and *Pacemaker Explained*.
diff --git a/doc/sphinx/Pacemaker_Remote/kvm-tutorial.rst b/doc/sphinx/Pacemaker_Remote/kvm-tutorial.rst
new file mode 100644
index 0000000..253149e
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Remote/kvm-tutorial.rst
@@ -0,0 +1,584 @@
+.. index::
+ single: guest node; walk-through
+
+Guest Node Walk-through
+-----------------------
+
+**What this tutorial is:** An in-depth walk-through of how to get Pacemaker to
+manage a KVM guest instance and integrate that guest into the cluster as a
+guest node.
+
+**What this tutorial is not:** A realistic deployment scenario. The steps shown
+here are meant to get users familiar with the concept of guest nodes as quickly
+as possible.
+
+Configure Cluster Nodes
+#######################
+
+This walk-through assumes you already have a Pacemaker cluster configured. As
+an example, we will use a cluster with two cluster nodes named pcmk-1 and
+pcmk-2. You can substitute your own node names, for however many nodes you
+have. If you are not familiar with setting up basic Pacemaker clusters, follow
+the walk-through in the Clusters from Scratch document before attempting this
+one.
+
+Install Virtualization Software
+_______________________________
+
+On each node within your cluster, install virt-install, libvirt, and qemu-kvm.
+Start and enable ``virtnetworkd``.
+
+.. code-block:: none
+
+   # dnf install -y virt-install libvirt qemu-kvm
+   # systemctl start virtnetworkd
+   # systemctl enable virtnetworkd
+
+Reboot the host.
+
+.. NOTE::
+
+ While KVM is used in this example, any virtualization platform with a Pacemaker
+ resource agent can be used to create a guest node. The resource agent needs
+ only to support usual commands (start, stop, etc.); Pacemaker implements the
+ **remote-node** meta-attribute, independent of the agent.
+
+Configure the KVM guest
+#######################
+
+Create Guest
+____________
+
+Create a KVM guest to use as a guest node. Be sure to configure the guest with
+a hostname and a static IP address (as an example, we will use **guest1** and
+192.168.122.10). Here's one way to create a guest:
+
+* Download an .iso file from the |REMOTE_DISTRO| |REMOTE_DISTRO_VER| `mirrors
+ list <https://mirrors.almalinux.org/isos.html>`_ into a directory on your
+ cluster node.
+
+* Run the following command, using your own path for the **location** flag:
+
+ .. code-block:: none
+
+ [root@pcmk-1 ~]# virt-install \
+ --name vm-guest1 \
+ --memory 1536 \
+ --disk path=/var/lib/libvirt/images/vm-guest1.qcow2,size=4 \
+ --vcpus 2 \
+ --os-variant almalinux9 \
+ --network bridge=virbr0 \
+ --graphics none \
+ --console pty,target_type=serial \
+ --location /tmp/AlmaLinux-9-latest-x86_64-dvd.iso \
+ --extra-args 'console=ttyS0,115200n8'
+
+ .. NOTE::
+
+ See the Clusters from Scratch document for more details about installing
+ |REMOTE_DISTRO| |REMOTE_DISTRO_VER|. The above command will perform a
+ text-based installation by default, but feel free to do a graphical
+ installation, which exposes more options.
+
+.. index::
+ single: guest node; firewall
+
+Configure Firewall on Guest
+___________________________
+
+On each guest, allow cluster-related services through the local firewall. If
+you're using ``firewalld``, run the following commands.
+
+.. code-block:: none
+
+ [root@guest1 ~]# firewall-cmd --permanent --add-service=high-availability
+ success
+ [root@guest1 ~]# firewall-cmd --reload
+ success
+
+.. NOTE::
+
+ If you are using some other firewall solution besides firewalld,
+ simply open the following ports, which can be used by various
+ clustering components: TCP ports 2224, 3121, and 21064.
+
+ If you run into any problems during testing, you might want to disable
+ the firewall and SELinux entirely until you have everything working.
+ This may create significant security issues and should not be performed on
+ machines that will be exposed to the outside world, but may be appropriate
+ during development and testing on a protected host.
+
+ To disable security measures:
+
+ .. code-block:: none
+
+ [root@guest1 ~]# setenforce 0
+ [root@guest1 ~]# sed -i.bak "s/SELINUX=enforcing/SELINUX=permissive/g" \
+ /etc/selinux/config
+ [root@guest1 ~]# systemctl mask firewalld.service
+ [root@guest1 ~]# systemctl stop firewalld.service
+
+Configure ``/etc/hosts``
+________________________
+
+You will need to add the guest node's hostname (we're using **guest1** in
+this tutorial) to the cluster nodes' ``/etc/hosts`` files if you haven't already.
+This is required unless you have DNS set up in such a way that guest1's
+address can be discovered.
+
+For each guest, execute the following on each cluster node and on the guests,
+replacing the IP address with the actual IP address of the guest node.
+
+.. code-block:: none
+
+ # cat << END >> /etc/hosts
+ 192.168.122.10 guest1
+ END
+
+Also add entries for each cluster node to the ``/etc/hosts`` file on each guest.
+For example:
+
+.. code-block:: none
+
+ # cat << END >> /etc/hosts
+ 192.168.122.101 pcmk-1
+ 192.168.122.102 pcmk-2
+ END
+
+Verify Connectivity
+___________________
+
+At this point, you should be able to ping and ssh into guests from hosts, and
+vice versa.
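+
+For example, the following quick checks from a cluster node should succeed:
+
+.. code-block:: none
+
+   [root@pcmk-1 ~]# ping -c 1 guest1
+   [root@pcmk-1 ~]# ssh root@guest1 hostname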
+
+Depending on your installation method, you may have to perform an additional
+step to make SSH work. The simplest approach is to open the
+``/etc/ssh/sshd_config`` file and set ``PermitRootLogin yes``. Then to make the
+change take effect, run the following command.
+
+.. code-block:: none
+
+ [root@guest1 ~]# systemctl restart sshd
+
+Configure pacemaker_remote on Guest Node
+________________________________________
+
+Install the pacemaker_remote daemon on the guest node. We'll also install the
+``pacemaker`` package. It isn't required for a guest node to run, but it
+provides the ``crm_attribute`` tool, which many resource agents use.
+
+.. code-block:: none
+
+ [root@guest1 ~]# dnf config-manager --set-enabled highavailability
+ [root@guest1 ~]# dnf install -y pacemaker-remote resource-agents pcs \
+ pacemaker
+
+Integrate Guest into Cluster
+############################
+
+Now for the fun part: integrating the virtual machine you've just created into
+the cluster. It is incredibly simple.
+
+Start the Cluster
+_________________
+
+On the host, start Pacemaker if it's not already running.
+
+.. code-block:: none
+
+ # pcs cluster start
+
+Create a ``VirtualDomain`` Resource for the Guest VM
+____________________________________________________
+
+For this simple walk-through, we have created the VM and made its disk
+available only on node ``pcmk-1``, so that's the only node where the VM is
+capable of running. In a more realistic scenario, you'll probably want to have
+multiple nodes that are capable of running the VM.
+
+Next we'll assign an attribute to node 1 that denotes its eligibility to host
+``vm-guest1``. If other nodes are capable of hosting your guest VM, then add the
+attribute to each of those nodes as well.
+
+.. code-block:: none
+
+ [root@pcmk-1 ~]# pcs node attribute pcmk-1 can-host-vm-guest1=1
+
+Then we'll create a ``VirtualDomain`` resource so that Pacemaker can manage
+``vm-guest1``. Be sure to replace the XML file path below with your own if it
+differs. We'll also create a rule to prevent Pacemaker from trying to start the
+resource or probe its status on any node that isn't capable of running the VM.
+We'll save the CIB to a file, make both of these edits, and push them
+simultaneously.
+
+.. code-block:: none
+
+ [root@pcmk-1 ~]# pcs cluster cib vm_cfg
+ [root@pcmk-1 ~]# pcs -f vm_cfg resource create vm-guest1 VirtualDomain \
+ hypervisor="qemu:///system" config="/etc/libvirt/qemu/vm-guest1.xml"
+ Assumed agent name 'ocf:heartbeat:VirtualDomain' (deduced from 'VirtualDomain')
+ [root@pcmk-1 ~]# pcs -f vm_cfg constraint location vm-guest1 rule \
+ resource-discovery=never score=-INFINITY can-host-vm-guest1 ne 1
+ [root@pcmk-1 ~]# pcs cluster cib-push --config vm_cfg --wait
+
+.. NOTE::
+
+ If all nodes in your cluster are capable of hosting the VM that you've
+ created, then you can skip the ``pcs node attribute`` and ``pcs constraint
+ location`` commands.
+
+.. NOTE::
+
+ The ID of the resource managing the virtual machine (``vm-guest1`` in the
+ above example) **must** be different from the virtual machine's node name
+ (``guest1`` in the above example). Pacemaker will create an implicit
+ internal resource for the Pacemaker Remote connection to the guest. This
+ implicit resource will be named with the value of the ``VirtualDomain``
+ resource's ``remote-node`` meta attribute, which will be set by ``pcs`` to
+ the guest node's node name. Therefore, that value cannot be used as the name
+ of any other resource.
+
+Now we can confirm that the ``VirtualDomain`` resource is running on ``pcmk-1``.
+
+.. code-block:: none
+
+ [root@pcmk-1 ~]# pcs resource status
+ * vm-guest1 (ocf:heartbeat:VirtualDomain): Started pcmk-1
+
+Prepare ``pcsd``
+________________
+
+Now we need to prepare ``pcsd`` on the guest so that we can use ``pcs`` commands
+to communicate with it.
+
+Start and enable the ``pcsd`` daemon on the guest.
+
+.. code-block:: none
+
+ [root@guest1 ~]# systemctl start pcsd
+ [root@guest1 ~]# systemctl enable pcsd
+ Created symlink /etc/systemd/system/multi-user.target.wants/pcsd.service → /usr/lib/systemd/system/pcsd.service.
+
+Next, set a password for the ``hacluster`` user on the guest.
+
+.. code-block:: none
+
+ [root@guest1 ~]# echo MyPassword | passwd --stdin hacluster
+ Changing password for user hacluster.
+ passwd: all authentication tokens updated successfully.
+
+Now authenticate the existing cluster nodes to ``pcsd`` on the guest. The below
+command only needs to be run from one cluster node.
+
+.. code-block:: none
+
+ [root@pcmk-1 ~]# pcs host auth guest1 -u hacluster
+ Password:
+ guest1: Authorized
+
+Integrate Guest Node into Cluster
+_________________________________
+
+We're finally ready to integrate the VM into the cluster as a guest node. Run
+the following command, which will create a guest node from the ``VirtualDomain``
+resource and take care of all the remaining steps. Note that the format is ``pcs
+cluster node add-guest <guest_name> <vm_resource_name>``.
+
+.. code-block:: none
+
+ [root@pcmk-1 ~]# pcs cluster node add-guest guest1 vm-guest1
+ No addresses specified for host 'guest1', using 'guest1'
+ Sending 'pacemaker authkey' to 'guest1'
+ guest1: successful distribution of the file 'pacemaker authkey'
+ Requesting 'pacemaker_remote enable', 'pacemaker_remote start' on 'guest1'
+ guest1: successful run of 'pacemaker_remote enable'
+ guest1: successful run of 'pacemaker_remote start'
+
+You should soon see ``guest1`` appear in the ``pcs status`` output as a node.
+The output should look something like this:
+
+.. code-block:: none
+
+ [root@pcmk-1 ~]# pcs status
+ Cluster name: mycluster
+ Cluster Summary:
+ * Stack: corosync
+ * Current DC: pcmk-1 (version 2.1.2-4.el9-ada5c3b36e2) - partition with quorum
+ * Last updated: Wed Aug 10 00:08:58 2022
+ * Last change: Wed Aug 10 00:02:37 2022 by root via cibadmin on pcmk-1
+ * 3 nodes configured
+ * 3 resource instances configured
+
+ Node List:
+ * Online: [ pcmk-1 pcmk-2 ]
+ * GuestOnline: [ guest1@pcmk-1 ]
+
+ Full List of Resources:
+ * xvm (stonith:fence_xvm): Started pcmk-1
+ * vm-guest1 (ocf:heartbeat:VirtualDomain): Started pcmk-1
+
+ Daemon Status:
+ corosync: active/disabled
+ pacemaker: active/disabled
+ pcsd: active/enabled
+
+The resulting configuration should look something like the following:
+
+.. code-block:: none
+
+ [root@pcmk-1 ~]# pcs resource config
+ Resource: vm-guest1 (class=ocf provider=heartbeat type=VirtualDomain)
+ Attributes: config=/etc/libvirt/qemu/vm-guest1.xml hypervisor=qemu:///system
+ Meta Attrs: remote-addr=guest1 remote-node=guest1
+ Operations: migrate_from interval=0s timeout=60s (vm-guest1-migrate_from-interval-0s)
+ migrate_to interval=0s timeout=120s (vm-guest1-migrate_to-interval-0s)
+ monitor interval=10s timeout=30s (vm-guest1-monitor-interval-10s)
+ start interval=0s timeout=90s (vm-guest1-start-interval-0s)
+ stop interval=0s timeout=90s (vm-guest1-stop-interval-0s)
+
+How pcs Configures the Guest
+____________________________
+
+Let's take a closer look at what the ``pcs cluster node add-guest`` command is
+doing. There is no need to run any of the commands in this section.
+
+First, ``pcs`` copies the Pacemaker authkey file to the VM that will become the
+guest. If an authkey is not already present on the cluster nodes, this command
+creates one and distributes it to the existing nodes and to the guest.
+
+If you want to do this manually, you can run a command like the following to
+generate an authkey in ``/etc/pacemaker/authkey``, and then distribute the key
+to the rest of the nodes and to the new guest.
+
+.. code-block:: none
+
+ [root@pcmk-1 ~]# dd if=/dev/urandom of=/etc/pacemaker/authkey bs=4096 count=1
+
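+If you generate the key this way, you must also copy it to the same path on
+all other cluster nodes and on the guest. A minimal sketch, assuming
+passwordless SSH access and the host names from this example:
+
+.. code-block:: none
+
+    [root@pcmk-1 ~]# for host in pcmk-2 guest1; do \
+        ssh ${host} mkdir -p /etc/pacemaker; \
+        scp -p /etc/pacemaker/authkey ${host}:/etc/pacemaker/authkey; \
+    done
+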
+Then ``pcs`` starts and enables the ``pacemaker_remote`` service on the guest.
+If you want to do this manually, run the following commands.
+
+.. code-block:: none
+
+ [root@guest1 ~]# systemctl start pacemaker_remote
+ [root@guest1 ~]# systemctl enable pacemaker_remote
+
+Finally, ``pcs`` creates a guest node from the ``VirtualDomain`` resource by
+adding ``remote-addr`` and ``remote-node`` meta attributes to the resource. If
+you want to do this manually, you can run the following command if you're using
+``pcs``. Alternatively, run an equivalent command if you're using another
+cluster shell, or edit the CIB manually.
+
+.. code-block:: none
+
+ [root@pcmk-1 ~]# pcs resource update vm-guest1 meta remote-addr='guest1' \
+ remote-node='guest1' --force
+
+Starting Resources on KVM Guest
+###############################
+
+The following example demonstrates that resources can be run on the guest node
+in the exact same way as on the cluster nodes.
+
+Create a few ``Dummy`` resources. A ``Dummy`` resource is a real resource that
+actually executes operations on its assigned node. However, these operations are
+trivial (creating, deleting, or checking the existence of an empty or small
+file), so ``Dummy`` resources are ideal for testing purposes. ``Dummy``
+resources use the ``ocf:heartbeat:Dummy`` or ``ocf:pacemaker:Dummy`` resource
+agent.
+
+.. code-block:: none
+
+ # for i in {1..5}; do pcs resource create FAKE${i} ocf:heartbeat:Dummy; done
+
+Now run ``pcs resource status``. You should see something like the following,
+where some of the resources are started on the cluster nodes, and some are
+started on the guest node.
+
+.. code-block:: none
+
+ [root@pcmk-1 ~]# pcs resource status
+ * vm-guest1 (ocf:heartbeat:VirtualDomain): Started pcmk-1
+ * FAKE1 (ocf:heartbeat:Dummy): Started guest1
+ * FAKE2 (ocf:heartbeat:Dummy): Started pcmk-2
+ * FAKE3 (ocf:heartbeat:Dummy): Started guest1
+ * FAKE4 (ocf:heartbeat:Dummy): Started pcmk-2
+ * FAKE5 (ocf:heartbeat:Dummy): Started guest1
+
+The guest node, ``guest1``, behaves just like any other node in the cluster with
+respect to resources. For example, choose a resource that is running on one of
+your cluster nodes. We'll choose ``FAKE2`` from the output above. It's currently
+running on ``pcmk-2``. We can force ``FAKE2`` to run on ``guest1`` in the exact
+same way as we could force it to run on any particular cluster node. We do this
+by creating a location constraint:
+
+.. code-block:: none
+
+ # pcs constraint location FAKE2 prefers guest1
+
+Now the ``pcs resource status`` output shows that ``FAKE2`` is on ``guest1``.
+
+.. code-block:: none
+
+ [root@pcmk-1 ~]# pcs resource status
+ * vm-guest1 (ocf:heartbeat:VirtualDomain): Started pcmk-1
+ * FAKE1 (ocf:heartbeat:Dummy): Started guest1
+ * FAKE2 (ocf:heartbeat:Dummy): Started guest1
+ * FAKE3 (ocf:heartbeat:Dummy): Started guest1
+ * FAKE4 (ocf:heartbeat:Dummy): Started pcmk-2
+ * FAKE5 (ocf:heartbeat:Dummy): Started guest1
+
+Testing Recovery and Fencing
+############################
+
+Pacemaker's scheduler is smart enough to know that fencing a guest node
+associated with a virtual machine means shutting off or rebooting the virtual
+machine. No special configuration is necessary to make this happen. If you
+are interested in testing this functionality, try stopping the guest's
+pacemaker_remote daemon. This is equivalent to abruptly terminating a
+cluster node's corosync membership without properly shutting it down.
+
+SSH into the guest and run this command.
+
+.. code-block:: none
+
+ [root@guest1 ~]# kill -9 $(pidof pacemaker-remoted)
+
+Within a few seconds, your ``pcs status`` output will show a monitor failure,
+and the **guest1** node will not be shown while it is being recovered.
+
+.. code-block:: none
+
+ [root@pcmk-1 ~]# pcs status
+ Cluster name: mycluster
+ Cluster Summary:
+ * Stack: corosync
+ * Current DC: pcmk-1 (version 2.1.2-4.el9-ada5c3b36e2) - partition with quorum
+ * Last updated: Wed Aug 10 01:39:40 2022
+ * Last change: Wed Aug 10 01:34:55 2022 by root via cibadmin on pcmk-1
+ * 3 nodes configured
+ * 8 resource instances configured
+
+ Node List:
+ * Online: [ pcmk-1 pcmk-2 ]
+
+ Full List of Resources:
+ * xvm (stonith:fence_xvm): Started pcmk-1
+ * vm-guest1 (ocf:heartbeat:VirtualDomain): FAILED pcmk-1
+ * FAKE1 (ocf:heartbeat:Dummy): FAILED guest1
+ * FAKE2 (ocf:heartbeat:Dummy): FAILED guest1
+ * FAKE3 (ocf:heartbeat:Dummy): FAILED guest1
+ * FAKE4 (ocf:heartbeat:Dummy): Started pcmk-2
+ * FAKE5 (ocf:heartbeat:Dummy): FAILED guest1
+
+ Failed Resource Actions:
+ * guest1 30s-interval monitor on pcmk-1 could not be executed (Error) because 'Lost connection to remote executor' at Wed Aug 10 01:39:38 2022
+
+ Daemon Status:
+ corosync: active/disabled
+ pacemaker: active/disabled
+ pcsd: active/enabled
+
+.. NOTE::
+
+ A guest node involves two resources: an explicitly configured resource that
+ you create, which manages the virtual machine (the ``VirtualDomain``
+ resource in our example); and an implicit resource that Pacemaker creates,
+ which manages the ``pacemaker-remoted`` connection to the guest. The
+ implicit resource's name is the value of the explicit resource's
+   ``remote-node`` meta attribute. When we killed ``pacemaker-remoted``, the
+   **implicit** resource was the one that failed. That's why the failed action
+   starts with ``guest1`` rather than ``vm-guest1``.
+
+Once recovery of the guest is complete, you'll see it automatically get
+re-integrated into the cluster. The final ``pcs status`` output should look
+something like this.
+
+.. code-block:: none
+
+ [root@pcmk-1 ~]# pcs status
+ Cluster name: mycluster
+ Cluster Summary:
+ * Stack: corosync
+ * Current DC: pcmk-1 (version 2.1.2-4.el9-ada5c3b36e2) - partition with quorum
+ * Last updated: Wed Aug 10 01:40:05 2022
+ * Last change: Wed Aug 10 01:34:55 2022 by root via cibadmin on pcmk-1
+ * 3 nodes configured
+ * 8 resource instances configured
+
+ Node List:
+ * Online: [ pcmk-1 pcmk-2 ]
+ * GuestOnline: [ guest1@pcmk-1 ]
+
+ Full List of Resources:
+ * xvm (stonith:fence_xvm): Started pcmk-1
+ * vm-guest1 (ocf:heartbeat:VirtualDomain): Started pcmk-1
+ * FAKE1 (ocf:heartbeat:Dummy): Started guest1
+ * FAKE2 (ocf:heartbeat:Dummy): Started guest1
+ * FAKE3 (ocf:heartbeat:Dummy): Started pcmk-2
+ * FAKE4 (ocf:heartbeat:Dummy): Started pcmk-2
+ * FAKE5 (ocf:heartbeat:Dummy): Started guest1
+
+ Failed Resource Actions:
+ * guest1 30s-interval monitor on pcmk-1 could not be executed (Error) because 'Lost connection to remote executor' at Wed Aug 10 01:39:38 2022
+
+ Daemon Status:
+ corosync: active/disabled
+ pacemaker: active/disabled
+ pcsd: active/enabled
+
+Normally, once you've investigated and addressed a failed action, you can clear the
+failure. However, Pacemaker does not yet support cleanup for the implicitly
+created connection resource while the explicit resource is active. If you want
+to clear the failed action from the status output, stop the guest resource before
+clearing it. For example:
+
+.. code-block:: none
+
+ # pcs resource disable vm-guest1 --wait
+ # pcs resource cleanup guest1
+ # pcs resource enable vm-guest1
+
+Accessing Cluster Tools from Guest Node
+#######################################
+
+Besides allowing the cluster to manage resources on a guest node,
+pacemaker_remote has one other trick: the pacemaker_remote daemon allows
+nearly all of the Pacemaker command-line tools (``crm_resource``, ``crm_mon``,
+``crm_attribute``, etc.) to work natively on guest nodes.
+
+Try it: run ``crm_mon`` on the guest after Pacemaker has integrated the guest
+node into the cluster. These tools just work. This means resource agents, such
+as those for promotable resources (which need access to tools like
+``crm_attribute``), work seamlessly on guest nodes.
+
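+For example, an agent (or an administrator) on the guest can set and query a
+transient node attribute directly, just as on a full cluster node (the
+attribute name here is illustrative):
+
+.. code-block:: none
+
+    [root@guest1 ~]# crm_attribute --node guest1 --lifetime reboot \
+        --name my-attr --update 1
+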
+Higher-level command shells such as ``pcs`` may have partial support
+on guest nodes, but it is recommended to run them from a cluster node.
+
+Troubleshooting a Remote Connection
+###################################
+
+If connectivity issues occur, it's worth verifying that the cluster nodes can
+communicate with the guest node on TCP port 3121. We can use the ``nc`` command
+to test the connection.
+
+On the cluster nodes, install the package that provides the ``nc`` command. The
+package name may vary by distribution; on |REMOTE_DISTRO| |REMOTE_DISTRO_VER|
+it's ``nmap-ncat``.
+
+Now connect using ``nc`` from each of the cluster nodes to the guest and run a
+``/bin/true`` command that does nothing except return success. No output
+indicates that the cluster node is able to communicate with the guest on TCP
+port 3121. An error indicates that the connection failed. This could be due to
+a network issue or because ``pacemaker-remoted`` is not currently running on
+the guest node.
+
+Example of success:
+
+.. code-block:: none
+
+ [root@pcmk-1 ~]# nc guest1 3121 --sh-exec /bin/true
+ [root@pcmk-1 ~]#
+
+Examples of failure:
+
+.. code-block:: none
+
+ [root@pcmk-1 ~]# nc guest1 3121 --sh-exec /bin/true
+ Ncat: Connection refused.
+ [root@pcmk-1 ~]# nc guest1 3121 --sh-exec /bin/true
+ Ncat: No route to host.
diff --git a/doc/sphinx/Pacemaker_Remote/options.rst b/doc/sphinx/Pacemaker_Remote/options.rst
new file mode 100644
index 0000000..4821829
--- /dev/null
+++ b/doc/sphinx/Pacemaker_Remote/options.rst
@@ -0,0 +1,174 @@
+.. index::
+ single: configuration
+
+Configuration Explained
+-----------------------
+
+The walk-through examples use some of these options, but don't explain exactly
+what they mean or do. This section is meant to be the go-to resource for all
+the options available for configuring Pacemaker Remote.
+
+.. index::
+ pair: configuration; guest node
+ single: guest node; meta-attribute
+
+Resource Meta-Attributes for Guest Nodes
+########################################
+
+When configuring a virtual machine as a guest node, the virtual machine is
+created using one of the usual resource agents for that purpose (for example,
+**ocf:heartbeat:VirtualDomain** or **ocf:heartbeat:Xen**), with additional
+meta-attributes.
+
+No restrictions are enforced on what agents may be used to create a guest node,
+but obviously the agent must create a distinct environment capable of running
+the pacemaker_remote daemon and cluster resources. An additional requirement is
+that fencing the host running the guest node resource must be sufficient for
+ensuring the guest node is stopped. This means, for example, that not all
+hypervisors supported by **VirtualDomain** may be used to create guest nodes;
+if the guest can survive the hypervisor being fenced, it may not be used as a
+guest node.
+
+Below are the meta-attributes available to enable a resource as a guest node
+and define its connection parameters.
+
+.. table:: **Meta-attributes for configuring VM resources as guest nodes**
+
+ +------------------------+-----------------+-----------------------------------------------------------+
+ | Option | Default | Description |
+ +========================+=================+===========================================================+
+ | remote-node | none | The node name of the guest node this resource defines. |
+ | | | This both enables the resource as a guest node and |
+ | | | defines the unique name used to identify the guest node. |
+ | | | If no other parameters are set, this value will also be |
+ | | | assumed as the hostname to use when connecting to |
+ | | | pacemaker_remote on the VM. This value **must not** |
+ | | | overlap with any resource or node IDs. |
+ +------------------------+-----------------+-----------------------------------------------------------+
+ | remote-port | 3121 | The port on the virtual machine that the cluster will |
+ | | | use to connect to pacemaker_remote. |
+ +------------------------+-----------------+-----------------------------------------------------------+
+   | remote-addr            | value of        | The IP address or hostname to use when connecting to      |
+   |                        | ``remote-node`` | pacemaker_remote on the VM.                               |
+ +------------------------+-----------------+-----------------------------------------------------------+
+ | remote-connect-timeout | 60s | How long before a pending guest connection will time out. |
+ +------------------------+-----------------+-----------------------------------------------------------+
+
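+For example, if the cluster should reach the guest's ``pacemaker_remote`` at an
+address or port different from what the node name implies, the meta-attributes
+could be set with a command like the following (the resource name and values
+are illustrative):
+
+.. code-block:: none
+
+    # pcs resource update vm-guest1 meta remote-node='guest1' \
+        remote-addr='192.168.122.10' remote-port='3121' --force
+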
+.. index::
+ pair: configuration; remote node
+
+Connection Resources for Remote Nodes
+#####################################
+
+A remote node is defined by a connection resource. That connection resource
+has instance attributes that define where the remote node is located on the
+network and how to communicate with it.
+
+Descriptions of these instance attributes can be retrieved using the following
+``pcs`` command:
+
+.. code-block:: none
+
+ [root@pcmk-1 ~]# pcs resource describe remote
+ Assumed agent name 'ocf:pacemaker:remote' (deduced from 'remote')
+ ocf:pacemaker:remote - Pacemaker Remote connection
+
+ Resource options:
+ server (unique group: address): Server location to connect to (IP address
+ or resolvable host name)
+ port (unique group: address): TCP port at which to contact Pacemaker
+ Remote executor
+ reconnect_interval: If this is a positive time interval, the cluster will
+ attempt to reconnect to a remote node after an active
+ connection has been lost at this interval. Otherwise,
+ the cluster will attempt to reconnect immediately
+ (after any fencing needed).
+
+When defining a remote node's connection resource, it is common and recommended
+to name the connection resource the same as the remote node's hostname. By
+default, if no ``server`` option is provided, the cluster will attempt to contact
+the remote node using the resource name as the hostname.
+
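+For instance, a remote node named ``remote1`` could be added with a command
+like the following (the node name and address are illustrative):
+
+.. code-block:: none
+
+    # pcs cluster node add-remote remote1 192.168.122.20
+
+This creates an ``ocf:pacemaker:remote`` connection resource named ``remote1``
+with its ``server`` option set to the given address.
+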
+Environment Variables for Daemon Start-up
+#########################################
+
+Authentication and encryption of the connection between cluster nodes
+and nodes running pacemaker_remote is achieved using
+`TLS-PSK <https://en.wikipedia.org/wiki/TLS-PSK>`_ encryption/authentication
+over TCP (port 3121 by default). This means that both the cluster node and
+remote node must share the same private key. By default, this
+key is placed at ``/etc/pacemaker/authkey`` on each node.
+
+You can change the default port and/or key location for Pacemaker and
+``pacemaker-remoted`` via environment variables. How these variables are set
+varies by OS, but usually they are set in the ``/etc/sysconfig/pacemaker`` or
+``/etc/default/pacemaker`` file.
+
+.. code-block:: none
+
+ #==#==# Pacemaker Remote
+ # Use the contents of this file as the authorization key to use with Pacemaker
+ # Remote connections. This file must be readable by Pacemaker daemons (that is,
+ # it must allow read permissions to either the hacluster user or the haclient
+ # group), and its contents must be identical on all nodes. The default is
+ # "/etc/pacemaker/authkey".
+ # PCMK_authkey_location=/etc/pacemaker/authkey
+
+ # If the Pacemaker Remote service is run on the local node, it will listen
+ # for connections on this address. The value may be a resolvable hostname or an
+ # IPv4 or IPv6 numeric address. When resolving names or using the default
+ # wildcard address (i.e. listen on all available addresses), IPv6 will be
+ # preferred if available. When listening on an IPv6 address, IPv4 clients will
+ # be supported (via IPv4-mapped IPv6 addresses).
+ # PCMK_remote_address="192.0.2.1"
+
+ # Use this TCP port number when connecting to a Pacemaker Remote node. This
+ # value must be the same on all nodes. The default is "3121".
+ # PCMK_remote_port=3121
+
+ # Use these GnuTLS cipher priorities for TLS connections. See:
+ #
+ # https://gnutls.org/manual/html_node/Priority-Strings.html
+ #
+ # Pacemaker will append ":+ANON-DH" for remote CIB access (when enabled) and
+ # ":+DHE-PSK:+PSK" for Pacemaker Remote connections, as they are required for
+ # the respective functionality.
+ # PCMK_tls_priorities="NORMAL"
+
+ # Set bounds on the bit length of the prime number generated for Diffie-Hellman
+ # parameters needed by TLS connections. The default is not to set any bounds.
+ #
+ # If these values are specified, the server (Pacemaker Remote daemon, or CIB
+ # manager configured to accept remote clients) will use these values to provide
+ # a floor and/or ceiling for the value recommended by the GnuTLS library. The
+ # library will only accept a limited number of specific values, which vary by
+ # library version, so setting these is recommended only when required for
+ # compatibility with specific client versions.
+ #
+ # If PCMK_dh_min_bits is specified, the client (connecting cluster node or
+ # remote CIB command) will require that the server use a prime of at least this
+ # size. This is only recommended when the value must be lowered in order for
+ # the client's GnuTLS library to accept a connection to an older server.
+ # The client side does not use PCMK_dh_max_bits.
+ #
+ # PCMK_dh_min_bits=1024
+ # PCMK_dh_max_bits=2048
+
+Removing Remote Nodes and Guest Nodes
+#####################################
+
+If the resource creating a guest node, or the **ocf:pacemaker:remote** resource
+creating a connection to a remote node, is removed from the configuration, the
+affected node will continue to show up in output as an offline node.
+
+If you want to get rid of that output, run (replacing ``$NODE_NAME``
+appropriately):
+
+.. code-block:: none
+
+ # crm_node --force --remove $NODE_NAME
+
+.. WARNING::
+
+ Be absolutely sure that there are no references to the node's resource in the
+ configuration before running the above command.
diff --git a/doc/sphinx/_static/pacemaker.css b/doc/sphinx/_static/pacemaker.css
new file mode 100644
index 0000000..59c575d
--- /dev/null
+++ b/doc/sphinx/_static/pacemaker.css
@@ -0,0 +1,142 @@
+@import url("pyramid.css");
+
+/* Defaults for entire page */
+body {
+ color: #3f3f3f;
+ font-family: "Open Sans", sans-serif;
+ font-size: 11pt;
+ font-weight: 400;
+ line-height: 1.65;
+}
+
+/* Strip at top of page */
+div.related {
+ line-height: 1.65;
+ color: #3f3f3f;
+ font-size: 10pt;
+ background-color: #fff;
+
+ border-bottom: solid 4px #00b2e2;
+ font-family: "Roboto Slab", serif;
+}
+div.related a {
+ color: #1d2429;
+}
+div.related ul {
+ padding-left: 1em;
+}
+
+/* Sidebar */
+div.bodywrapper {
+ /* The final value must match the sidebarwidth theme option */
+ margin: 0 0 0 230px;
+}
+div.sphinxsidebar {
+ color: #1d2429;
+ background-color: #f5f6f7;
+ font-family: "Roboto Slab", serif;
+ font-size: 0.9em;
+ line-height: 1.65;
+ letter-spacing: 0.075em;
+ /*text-transform: uppercase;*/
+}
+div.sphinxsidebar h3,
+div.sphinxsidebar h4 {
+ font-family: "Roboto Slab", serif;
+ color: #1d2429;
+ font-size: 1.4em;
+ font-weight: 700;
+ border-bottom: 3px solid #00b2e2;
+
+ line-height: 1.5;
+ letter-spacing: 0.075em;
+ background-color: #f5f6f7;
+}
+div.sphinxsidebar p,
+div.sphinxsidebar ul,
+div.sphinxsidebar h3 a,
+div.sphinxsidebar a {
+ color: #1d2429;
+}
+
+/* Main text */
+div.body {
+ color: #3f3f3f;
+ font-size: 11pt;
+ border: none;
+}
+div.body h1,
+div.body h2,
+div.body h3,
+div.body h4,
+div.body h5,
+div.body h6 {
+ font-family: "Roboto Slab", serif;
+ font-weight: 700;
+ color: #1d2429;
+
+ font-variant: none;
+ line-height: 1.5;
+}
+div.body h1 {
+ border-top: none;
+ font-size: 3.5em;
+ background-color: #fff;
+ border-bottom: solid 3px #00b2e2;
+}
+div.body h2 {
+ font-size: 1.75em;
+ background-color: #fff;
+ border-bottom: solid 2px #00b2e2;
+}
+div.body h3 {
+ font-size: 1.5em;
+ background-color: #fff;
+ border-bottom: solid 1px #00b2e2;
+}
+div.body p,
+div.body dd,
+div.body li {
+ line-height: 1.65;
+}
+pre {
+ font-family: "Lucida Console", Monaco, monospace;
+ font-size: 0.9em;
+ line-height: 1.5;
+ background-color: #f5f6f7;
+ color: #1d2429;
+ border: none;
+ border-radius: 11px;
+ box-shadow: 0px 2px 5px #aaaaaa inset;
+}
+
+code {
+ font-family: "Lucida Console", Monaco, monospace;
+ font-size: 0.9em;
+ background-color: #f5f6f7;
+ color: #1d2429;
+ border: none;
+ border-radius: 0.25em;
+ padding: 0.125em 0.25em;
+}
+
+.viewcode-back,
+.download {
+ font-family: "Open Sans", sans-serif;
+}
+
+/* The pyramid theme uses an exclamation point for "note", and nothing (not
+ * even a background shade) for "important", and it supplies a light bulb icon
+ * but doesn't use it. Rearrange that a bit: use the light bulb for note, and
+ * the exclamation point (with the usual shading) for important.
+ */
+div.note {
+ background: #e1ecfe url(dialog-topic.png) no-repeat 10px 8px;
+}
+div.important {
+ border: 2px solid #7a9eec;
+ border-right-style: none;
+ border-left-style: none;
+ padding: 10px 20px 10px 60px;
+ background: #e1ecfe url(dialog-note.png) no-repeat 10px 8px;
+}
diff --git a/doc/sphinx/conf.py.in b/doc/sphinx/conf.py.in
new file mode 100644
index 0000000..7d843d8
--- /dev/null
+++ b/doc/sphinx/conf.py.in
@@ -0,0 +1,319 @@
+""" Sphinx configuration for Pacemaker documentation
+"""
+
+__copyright__ = "Copyright 2020-2023 the Pacemaker project contributors"
+__license__ = "GNU General Public License version 2 or later (GPLv2+) WITHOUT ANY WARRANTY"
+
+# This file is execfile()d with the current directory set to its containing dir.
+#
+# Note that not all possible configuration values are present in this
+# autogenerated file.
+#
+# All configuration values have a default; values that are commented out
+# serve to show the default.
+
+import datetime
+import os
+import sys
+
+# Variables that can be used later in this file
+authors = "the Pacemaker project contributors"
+year = datetime.datetime.now().year
+doc_license = "Creative Commons Attribution-ShareAlike International Public License"
+doc_license += " version 4.0 or later (CC-BY-SA v4.0+)"
+
+# rST markup to insert at beginning of every document; mainly used for
+#
+# .. |<abbr>| replace:: <Full text>
+#
+# where occurrences of |<abbr>| in the rST will be substituted with <Full text>
+rst_prolog="""
+.. |CFS_DISTRO| replace:: AlmaLinux
+.. |CFS_DISTRO_VER| replace:: 9
+.. |REMOTE_DISTRO| replace:: AlmaLinux
+.. |REMOTE_DISTRO_VER| replace:: 9
+"""
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+sys.path.insert(0, os.path.abspath('%ABS_TOP_SRCDIR%/python'))
+
+# -- General configuration -----------------------------------------------------
+
+# If your documentation needs a minimal Sphinx version, state it here.
+needs_sphinx = '1.0'
+
+# Add any Sphinx extension module names here, as strings. They can be extensions
+# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
+extensions = ['sphinx.ext.autodoc',
+ 'sphinx.ext.autosummary']
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# The suffix of source filenames.
+source_suffix = '.rst'
+
+# The encoding of source files.
+#source_encoding = 'utf-8-sig'
+
+# The master toctree document.
+master_doc = 'index'
+
+# General information about the project.
+project = '%BOOK_ID%'
+copyright = "2009-%s %s. Released under the terms of the %s" % (year, authors, doc_license)
+
+# The version info for the project you're documenting, acts as replacement for
+# |version| and |release|, also used in various other places throughout the
+# built documents.
+#
+# The full version, including alpha/beta/rc tags.
+release = '%VERSION%'
+# The short X.Y version.
+version = release.rsplit('.', 1)[0]
+
+# The language for content autogenerated by Sphinx. Refer to documentation
+# for a list of supported languages.
+#language = None
+
+# There are two options for replacing |today|: either, you set today to some
+# non-false value, then it is used:
+#today = ''
+# Else, today_fmt is used as the format for a strftime call.
+#today_fmt = '%B %d, %Y'
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+exclude_patterns = ['_build']
+
+# The reST default role (used for this markup: `text`) to use for all documents.
+#default_role = None
+
+# If true, '()' will be appended to :func: etc. cross-reference text.
+#add_function_parentheses = True
+
+# If true, the current module name will be prepended to all description
+# unit titles (such as .. function::).
+#add_module_names = True
+
+# If true, sectionauthor and moduleauthor directives will be shown in the
+# output. They are ignored by default.
+#show_authors = False
+
+# The name of the Pygments (syntax highlighting) style to use.
+pygments_style = 'vs'
+
+# A list of ignored prefixes for module index sorting.
+#modindex_common_prefix = []
+
+
+# -- Options for HTML output ---------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. See the documentation for
+# a list of builtin themes.
+html_theme = 'pyramid'
+
+# Theme options are theme-specific and customize the look and feel of a theme
+# further. For a list of options available for each theme, see the
+# documentation.
+#html_theme_options = {}
+
+# Add any paths that contain custom themes here, relative to this directory.
+#html_theme_path = []
+
+html_style = 'pacemaker.css'
+
+# The name for this set of Sphinx documents. If None, it defaults to
+# "<project> v<release> documentation".
+html_title = "%BOOK_TITLE%"
+
+# A shorter title for the navigation bar. Default is the same as html_title.
+#html_short_title = None
+
+# The name of an image file (relative to this directory) to place at the top
+# of the sidebar.
+#html_logo = None
+
+# The name of an image file (within the static path) to use as favicon of the
+# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
+# pixels large.
+#html_favicon = None
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = [ '%SRC_DIR%/_static' ]
+
+# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
+# using the given strftime format.
+#html_last_updated_fmt = '%b %d, %Y'
+
+# If true, SmartyPants will be used to convert quotes and dashes to
+# typographically correct entities.
+#html_use_smartypants = True
+
+# Custom sidebar templates, maps document names to template names.
+#html_sidebars = {}
+
+# Additional templates that should be rendered to pages, maps page names to
+# template names.
+#html_additional_pages = {}
+
+# If false, no module index is generated.
+#html_domain_indices = True
+
+# If false, no index is generated.
+#html_use_index = True
+
+# If true, the index is split into individual pages for each letter.
+#html_split_index = False
+
+# If true, links to the reST sources are added to the pages.
+#html_show_sourcelink = True
+
+# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
+#html_show_sphinx = True
+
+# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
+#html_show_copyright = True
+
+# If true, an OpenSearch description file will be output, and all pages will
+# contain a <link> tag referring to it. The value of this option must be the
+# base URL from which the finished HTML is served.
+#html_use_opensearch = ''
+
+# This is the file name suffix for HTML files (e.g. ".xhtml").
+#html_file_suffix = None
+
+# Output file base name for HTML help builder.
+htmlhelp_basename = 'Pacemakerdoc'
+
+
+# -- Options for LaTeX output --------------------------------------------------
+
+latex_engine = "xelatex"
+
+latex_elements = {
+# The paper size ('letterpaper' or 'a4paper').
+#'papersize': 'letterpaper',
+
+# The font size ('10pt', '11pt' or '12pt').
+#'pointsize': '10pt',
+
+# Additional stuff for the LaTeX preamble.
+#'preamble': '',
+}
+
+# Grouping the document tree into LaTeX files. List of tuples
+# (source start file, target name, title, author, documentclass [howto/manual]).
+latex_documents = [
+ ('index', '%BOOK_ID%.tex', '%BOOK_TITLE%', authors, 'manual'),
+]
+
+# The name of an image file (relative to this directory) to place at the top of
+# the title page.
+#latex_logo = None
+
+# For "manual" documents, if this is true, then toplevel headings are parts,
+# not chapters.
+#latex_use_parts = False
+
+# If true, show page references after internal links.
+#latex_show_pagerefs = False
+
+# If true, show URL addresses after external links.
+#latex_show_urls = False
+
+# Documents to append as an appendix to all manuals.
+#latex_appendices = []
+
+# If false, no module index is generated.
+#latex_domain_indices = True
+
+
+# -- Options for manual page output --------------------------------------------
+
+# One entry per manual page. List of tuples
+# (source start file, name, description, authors, manual section).
+man_pages = [
+ ('index', '%BOOK_ID%', 'Part of the Pacemaker documentation set', [authors], 8)
+]
+
+# If true, show URL addresses after external links.
+#man_show_urls = False
+
+
+# -- Options for Texinfo output ------------------------------------------------
+
+# Grouping the document tree into Texinfo files. List of tuples
+# (source start file, target name, title, author,
+# dir menu entry, description, category)
+texinfo_documents = [
+ ('index', '%BOOK_ID%', '%BOOK_TITLE%', authors, '%BOOK_TITLE%',
+ 'Pacemaker is an advanced, scalable high-availability cluster resource manager.',
+ 'Miscellaneous'),
+]
+
+# Documents to append as an appendix to all manuals.
+#texinfo_appendices = []
+
+# If false, no module index is generated.
+#texinfo_domain_indices = True
+
+# How to display URL addresses: 'footnote', 'no', or 'inline'.
+#texinfo_show_urls = 'footnote'
+
+
+# -- Options for Epub output ---------------------------------------------------
+
+# Bibliographic Dublin Core info.
+epub_title = '%BOOK_TITLE%'
+epub_author = authors
+epub_publisher = 'ClusterLabs.org'
+epub_copyright = copyright
+
+# The language of the text. It defaults to the language option
+# or en if the language is not set.
+#epub_language = ''
+
+# The scheme of the identifier. Typical schemes are ISBN or URL.
+epub_scheme = 'URL'
+
+# The unique identifier of the text. This can be an ISBN number
+# or the project homepage.
+epub_identifier = 'https://www.clusterlabs.org/pacemaker/doc/2.1/%BOOK_ID%/epub/%BOOK_ID%.epub'
+
+# A unique identification for the text.
+epub_uid = 'ClusterLabs.org-Pacemaker-%BOOK_ID%'
+
+# A tuple containing the cover image and cover page html template filenames.
+#epub_cover = ()
+
+# HTML files that should be inserted before the pages created by sphinx.
+# The format is a list of tuples containing the path and title.
+#epub_pre_files = []
+
+# HTML files that should be inserted after the pages created by sphinx.
+# The format is a list of tuples containing the path and title.
+#epub_post_files = []
+
+# A list of files that should not be packed into the epub file.
+epub_exclude_files = [
+ '_static/doctools.js',
+ '_static/jquery.js',
+ '_static/searchtools.js',
+ '_static/underscore.js',
+ '_static/basic.css',
+ '_static/websupport.js',
+ 'search.html',
+]
+
+# The depth of the table of contents in toc.ncx.
+#epub_tocdepth = 3
+
+# Allow duplicate toc entries.
+#epub_tocdup = True
+
+autosummary_generate = True
diff --git a/doc/sphinx/shared/images/Policy-Engine-big.dot b/doc/sphinx/shared/images/Policy-Engine-big.dot
new file mode 100644
index 0000000..40ced22
--- /dev/null
+++ b/doc/sphinx/shared/images/Policy-Engine-big.dot
@@ -0,0 +1,83 @@
+digraph "g" {
+"Cancel drbd0:0_monitor_10000 frigg" -> "drbd0:0_demote_0 frigg" [ style = bold]
+"Cancel drbd0:0_monitor_10000 frigg" [ style=bold color="green" fontcolor="black" ]
+"Cancel drbd0:1_monitor_12000 odin" -> "drbd0:1_promote_0 odin" [ style = bold]
+"Cancel drbd0:1_monitor_12000 odin" [ style=bold color="green" fontcolor="black" ]
+"IPaddr0_monitor_5000 odin" [ style=bold color="green" fontcolor="black" ]
+"IPaddr0_start_0 odin" -> "IPaddr0_monitor_5000 odin" [ style = bold]
+"IPaddr0_start_0 odin" -> "MailTo_start_0 odin" [ style = bold]
+"IPaddr0_start_0 odin" -> "group_running_0" [ style = bold]
+"IPaddr0_start_0 odin" [ style=bold color="green" fontcolor="black" ]
+"MailTo_start_0 odin" -> "group_running_0" [ style = bold]
+"MailTo_start_0 odin" [ style=bold color="green" fontcolor="black" ]
+"drbd0:0_demote_0 frigg" -> "drbd0:0_monitor_12000 frigg" [ style = bold]
+"drbd0:0_demote_0 frigg" -> "ms_drbd_demoted_0" [ style = bold]
+"drbd0:0_demote_0 frigg" [ style=bold color="green" fontcolor="black" ]
+"drbd0:0_monitor_12000 frigg" [ style=bold color="green" fontcolor="black" ]
+"drbd0:0_post_notify_demote_0 frigg" -> "ms_drbd_confirmed-post_notify_demoted_0" [ style = bold]
+"drbd0:0_post_notify_demote_0 frigg" [ style=bold color="green" fontcolor="black" ]
+"drbd0:0_post_notify_promote_0 frigg" -> "ms_drbd_confirmed-post_notify_promoted_0" [ style = bold]
+"drbd0:0_post_notify_promote_0 frigg" [ style=bold color="green" fontcolor="black" ]
+"drbd0:0_pre_notify_demote_0 frigg" -> "ms_drbd_confirmed-pre_notify_demote_0" [ style = bold]
+"drbd0:0_pre_notify_demote_0 frigg" [ style=bold color="green" fontcolor="black" ]
+"drbd0:0_pre_notify_promote_0 frigg" -> "ms_drbd_confirmed-pre_notify_promote_0" [ style = bold]
+"drbd0:0_pre_notify_promote_0 frigg" [ style=bold color="green" fontcolor="black" ]
+"drbd0:1_monitor_10000 odin" [ style=bold color="green" fontcolor="black" ]
+"drbd0:1_post_notify_demote_0 odin" -> "ms_drbd_confirmed-post_notify_demoted_0" [ style = bold]
+"drbd0:1_post_notify_demote_0 odin" [ style=bold color="green" fontcolor="black" ]
+"drbd0:1_post_notify_promote_0 odin" -> "ms_drbd_confirmed-post_notify_promoted_0" [ style = bold]
+"drbd0:1_post_notify_promote_0 odin" [ style=bold color="green" fontcolor="black" ]
+"drbd0:1_pre_notify_demote_0 odin" -> "ms_drbd_confirmed-pre_notify_demote_0" [ style = bold]
+"drbd0:1_pre_notify_demote_0 odin" [ style=bold color="green" fontcolor="black" ]
+"drbd0:1_pre_notify_promote_0 odin" -> "ms_drbd_confirmed-pre_notify_promote_0" [ style = bold]
+"drbd0:1_pre_notify_promote_0 odin" [ style=bold color="green" fontcolor="black" ]
+"drbd0:1_promote_0 odin" -> "drbd0:1_monitor_10000 odin" [ style = bold]
+"drbd0:1_promote_0 odin" -> "ms_drbd_promoted_0" [ style = bold]
+"drbd0:1_promote_0 odin" [ style=bold color="green" fontcolor="black" ]
+"group_running_0" [ style=bold color="green" fontcolor="orange" ]
+"group_start_0" -> "IPaddr0_start_0 odin" [ style = bold]
+"group_start_0" -> "MailTo_start_0 odin" [ style = bold]
+"group_start_0" -> "group_running_0" [ style = bold]
+"group_start_0" [ style=bold color="green" fontcolor="orange" ]
+"ms_drbd_confirmed-post_notify_demoted_0" -> "drbd0:0_monitor_12000 frigg" [ style = bold]
+"ms_drbd_confirmed-post_notify_demoted_0" -> "drbd0:1_monitor_10000 odin" [ style = bold]
+"ms_drbd_confirmed-post_notify_demoted_0" -> "ms_drbd_pre_notify_promote_0" [ style = bold]
+"ms_drbd_confirmed-post_notify_demoted_0" [ style=bold color="green" fontcolor="orange" ]
+"ms_drbd_confirmed-post_notify_promoted_0" -> "drbd0:0_monitor_12000 frigg" [ style = bold]
+"ms_drbd_confirmed-post_notify_promoted_0" -> "drbd0:1_monitor_10000 odin" [ style = bold]
+"ms_drbd_confirmed-post_notify_promoted_0" -> "group_start_0" [ style = bold]
+"ms_drbd_confirmed-post_notify_promoted_0" [ style=bold color="green" fontcolor="orange" ]
+"ms_drbd_confirmed-pre_notify_demote_0" -> "ms_drbd_demote_0" [ style = bold]
+"ms_drbd_confirmed-pre_notify_demote_0" -> "ms_drbd_post_notify_demoted_0" [ style = bold]
+"ms_drbd_confirmed-pre_notify_demote_0" [ style=bold color="green" fontcolor="orange" ]
+"ms_drbd_confirmed-pre_notify_promote_0" -> "ms_drbd_post_notify_promoted_0" [ style = bold]
+"ms_drbd_confirmed-pre_notify_promote_0" -> "ms_drbd_promote_0" [ style = bold]
+"ms_drbd_confirmed-pre_notify_promote_0" [ style=bold color="green" fontcolor="orange" ]
+"ms_drbd_demote_0" -> "drbd0:0_demote_0 frigg" [ style = bold]
+"ms_drbd_demote_0" -> "ms_drbd_demoted_0" [ style = bold]
+"ms_drbd_demote_0" [ style=bold color="green" fontcolor="orange" ]
+"ms_drbd_demoted_0" -> "ms_drbd_post_notify_demoted_0" [ style = bold]
+"ms_drbd_demoted_0" -> "ms_drbd_promote_0" [ style = bold]
+"ms_drbd_demoted_0" [ style=bold color="green" fontcolor="orange" ]
+"ms_drbd_post_notify_demoted_0" -> "drbd0:0_post_notify_demote_0 frigg" [ style = bold]
+"ms_drbd_post_notify_demoted_0" -> "drbd0:1_post_notify_demote_0 odin" [ style = bold]
+"ms_drbd_post_notify_demoted_0" -> "ms_drbd_confirmed-post_notify_demoted_0" [ style = bold]
+"ms_drbd_post_notify_demoted_0" [ style=bold color="green" fontcolor="orange" ]
+"ms_drbd_post_notify_promoted_0" -> "drbd0:0_post_notify_promote_0 frigg" [ style = bold]
+"ms_drbd_post_notify_promoted_0" -> "drbd0:1_post_notify_promote_0 odin" [ style = bold]
+"ms_drbd_post_notify_promoted_0" -> "ms_drbd_confirmed-post_notify_promoted_0" [ style = bold]
+"ms_drbd_post_notify_promoted_0" [ style=bold color="green" fontcolor="orange" ]
+"ms_drbd_pre_notify_demote_0" -> "drbd0:0_pre_notify_demote_0 frigg" [ style = bold]
+"ms_drbd_pre_notify_demote_0" -> "drbd0:1_pre_notify_demote_0 odin" [ style = bold]
+"ms_drbd_pre_notify_demote_0" -> "ms_drbd_confirmed-pre_notify_demote_0" [ style = bold]
+"ms_drbd_pre_notify_demote_0" [ style=bold color="green" fontcolor="orange" ]
+"ms_drbd_pre_notify_promote_0" -> "drbd0:0_pre_notify_promote_0 frigg" [ style = bold]
+"ms_drbd_pre_notify_promote_0" -> "drbd0:1_pre_notify_promote_0 odin" [ style = bold]
+"ms_drbd_pre_notify_promote_0" -> "ms_drbd_confirmed-pre_notify_promote_0" [ style = bold]
+"ms_drbd_pre_notify_promote_0" [ style=bold color="green" fontcolor="orange" ]
+"ms_drbd_promote_0" -> "drbd0:1_promote_0 odin" [ style = bold]
+"ms_drbd_promote_0" [ style=bold color="green" fontcolor="orange" ]
+"ms_drbd_promoted_0" -> "group_start_0" [ style = bold]
+"ms_drbd_promoted_0" -> "ms_drbd_post_notify_promoted_0" [ style = bold]
+"ms_drbd_promoted_0" [ style=bold color="green" fontcolor="orange" ]
+}
diff --git a/doc/sphinx/shared/images/Policy-Engine-big.svg b/doc/sphinx/shared/images/Policy-Engine-big.svg
new file mode 100644
index 0000000..7964fcf
--- /dev/null
+++ b/doc/sphinx/shared/images/Policy-Engine-big.svg
@@ -0,0 +1,418 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN"
+ "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<!-- Generated by graphviz version 2.25.20091012.0445 (20091012.0445)
+ -->
+<!-- Title: g Pages: 1 -->
+<svg width="1164pt" height="1556pt"
+ viewBox="0.00 0.00 1164.00 1556.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
+<g id="graph1" class="graph" transform="scale(1 1) rotate(0) translate(4 1552)">
+<title>g</title>
+<polygon fill="white" stroke="white" points="-4,5 -4,-1552 1161,-1552 1161,5 -4,5"/>
+<!-- Cancel drbd0:0_monitor_10000 frigg -->
+<g id="node1" class="node"><title>Cancel drbd0:0_monitor_10000 frigg</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="403" cy="-1314" rx="147.181" ry="18"/>
+<text text-anchor="middle" x="403" y="-1308.4" font-family="Times,serif" font-size="14.00">Cancel drbd0:0_monitor_10000 frigg</text>
+</g>
+<!-- drbd0:0_demote_0 frigg -->
+<g id="node3" class="node"><title>drbd0:0_demote_0 frigg</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="462" cy="-1242" rx="99.1732" ry="18"/>
+<text text-anchor="middle" x="462" y="-1236.4" font-family="Times,serif" font-size="14.00">drbd0:0_demote_0 frigg</text>
+</g>
+<!-- Cancel drbd0:0_monitor_10000 frigg&#45;&gt;drbd0:0_demote_0 frigg -->
+<g id="edge2" class="edge"><title>Cancel drbd0:0_monitor_10000 frigg&#45;&gt;drbd0:0_demote_0 frigg</title>
+<path fill="none" stroke="black" stroke-width="2" d="M417.888,-1295.83C424.859,-1287.33 433.285,-1277.04 440.898,-1267.75"/>
+<polygon fill="black" stroke="black" points="443.69,-1269.87 447.321,-1259.91 438.276,-1265.43 443.69,-1269.87"/>
+</g>
+<!-- drbd0:0_monitor_12000 frigg -->
+<g id="node16" class="node"><title>drbd0:0_monitor_12000 frigg</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="305" cy="-234" rx="118.755" ry="18"/>
+<text text-anchor="middle" x="305" y="-228.4" font-family="Times,serif" font-size="14.00">drbd0:0_monitor_12000 frigg</text>
+</g>
+<!-- drbd0:0_demote_0 frigg&#45;&gt;drbd0:0_monitor_12000 frigg -->
+<g id="edge14" class="edge"><title>drbd0:0_demote_0 frigg&#45;&gt;drbd0:0_monitor_12000 frigg</title>
+<path fill="none" stroke="black" stroke-width="2" d="M406.642,-1227.02C348.992,-1207.67 267,-1167.86 267,-1098 267,-1098 267,-1098 267,-378 267,-336.368 282.033,-290.14 293.188,-261.567"/>
+<polygon fill="black" stroke="black" points="296.458,-262.817 296.941,-252.233 289.963,-260.206 296.458,-262.817"/>
+</g>
+<!-- ms_drbd_demoted_0 -->
+<g id="node18" class="node"><title>ms_drbd_demoted_0</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="649" cy="-1170" rx="87.1713" ry="18"/>
+<text text-anchor="middle" x="649" y="-1164.4" font-family="Times,serif" font-size="14.00" fill="orange">ms_drbd_demoted_0</text>
+</g>
+<!-- drbd0:0_demote_0 frigg&#45;&gt;ms_drbd_demoted_0 -->
+<g id="edge16" class="edge"><title>drbd0:0_demote_0 frigg&#45;&gt;ms_drbd_demoted_0</title>
+<path fill="none" stroke="black" stroke-width="2" d="M504.433,-1225.66C532.219,-1214.96 568.685,-1200.92 598.04,-1189.62"/>
+<polygon fill="black" stroke="black" points="599.625,-1192.76 607.7,-1185.9 597.11,-1186.23 599.625,-1192.76"/>
+</g>
+<!-- Cancel drbd0:1_monitor_12000 odin -->
+<g id="node4" class="node"><title>Cancel drbd0:1_monitor_12000 odin</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="905" cy="-666" rx="144.153" ry="18"/>
+<text text-anchor="middle" x="905" y="-660.4" font-family="Times,serif" font-size="14.00">Cancel drbd0:1_monitor_12000 odin</text>
+</g>
+<!-- drbd0:1_promote_0 odin -->
+<g id="node6" class="node"><title>drbd0:1_promote_0 odin</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="905" cy="-594" rx="99.9368" ry="18"/>
+<text text-anchor="middle" x="905" y="-588.4" font-family="Times,serif" font-size="14.00">drbd0:1_promote_0 odin</text>
+</g>
+<!-- Cancel drbd0:1_monitor_12000 odin&#45;&gt;drbd0:1_promote_0 odin -->
+<g id="edge4" class="edge"><title>Cancel drbd0:1_monitor_12000 odin&#45;&gt;drbd0:1_promote_0 odin</title>
+<path fill="none" stroke="black" stroke-width="2" d="M905,-647.831C905,-640.131 905,-630.974 905,-622.417"/>
+<polygon fill="black" stroke="black" points="908.5,-622.413 905,-612.413 901.5,-622.413 908.5,-622.413"/>
+</g>
+<!-- drbd0:1_monitor_10000 odin -->
+<g id="node31" class="node"><title>drbd0:1_monitor_10000 odin</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="1039" cy="-234" rx="116.992" ry="18"/>
+<text text-anchor="middle" x="1039" y="-228.4" font-family="Times,serif" font-size="14.00">drbd0:1_monitor_10000 odin</text>
+</g>
+<!-- drbd0:1_promote_0 odin&#45;&gt;drbd0:1_monitor_10000 odin -->
+<g id="edge34" class="edge"><title>drbd0:1_promote_0 odin&#45;&gt;drbd0:1_monitor_10000 odin</title>
+<path fill="none" stroke="black" stroke-width="2" d="M948.966,-577.672C967.665,-568.896 988.462,-556.447 1003,-540 1031.53,-507.72 1039,-493.081 1039,-450 1039,-450 1039,-450 1039,-378 1039,-337.876 1039,-291.463 1039,-262.418"/>
+<polygon fill="black" stroke="black" points="1042.5,-262.185 1039,-252.185 1035.5,-262.185 1042.5,-262.185"/>
+</g>
+<!-- ms_drbd_promoted_0 -->
+<g id="node42" class="node"><title>ms_drbd_promoted_0</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="905" cy="-522" rx="89.1969" ry="18"/>
+<text text-anchor="middle" x="905" y="-516.4" font-family="Times,serif" font-size="14.00" fill="orange">ms_drbd_promoted_0</text>
+</g>
+<!-- drbd0:1_promote_0 odin&#45;&gt;ms_drbd_promoted_0 -->
+<g id="edge36" class="edge"><title>drbd0:1_promote_0 odin&#45;&gt;ms_drbd_promoted_0</title>
+<path fill="none" stroke="black" stroke-width="2" d="M905,-575.831C905,-568.131 905,-558.974 905,-550.417"/>
+<polygon fill="black" stroke="black" points="908.5,-550.413 905,-540.413 901.5,-550.413 908.5,-550.413"/>
+</g>
+<!-- IPaddr0_monitor_5000 odin -->
+<g id="node7" class="node"><title>IPaddr0_monitor_5000 odin</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="785" cy="-90" rx="113.834" ry="18"/>
+<text text-anchor="middle" x="785" y="-84.4" font-family="Times,serif" font-size="14.00">IPaddr0_monitor_5000 odin</text>
+</g>
+<!-- IPaddr0_start_0 odin -->
+<g id="node8" class="node"><title>IPaddr0_start_0 odin</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="785" cy="-162" rx="85.908" ry="18"/>
+<text text-anchor="middle" x="785" y="-156.4" font-family="Times,serif" font-size="14.00">IPaddr0_start_0 odin</text>
+</g>
+<!-- IPaddr0_start_0 odin&#45;&gt;IPaddr0_monitor_5000 odin -->
+<g id="edge6" class="edge"><title>IPaddr0_start_0 odin&#45;&gt;IPaddr0_monitor_5000 odin</title>
+<path fill="none" stroke="black" stroke-width="2" d="M785,-143.831C785,-136.131 785,-126.974 785,-118.417"/>
+<polygon fill="black" stroke="black" points="788.5,-118.413 785,-108.413 781.5,-118.413 788.5,-118.413"/>
+</g>
+<!-- MailTo_start_0 odin -->
+<g id="node11" class="node"><title>MailTo_start_0 odin</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="568" cy="-90" rx="84.7776" ry="18"/>
+<text text-anchor="middle" x="568" y="-84.4" font-family="Times,serif" font-size="14.00">MailTo_start_0 odin</text>
+</g>
+<!-- IPaddr0_start_0 odin&#45;&gt;MailTo_start_0 odin -->
+<g id="edge8" class="edge"><title>IPaddr0_start_0 odin&#45;&gt;MailTo_start_0 odin</title>
+<path fill="none" stroke="black" stroke-width="2" d="M738.98,-146.731C705.319,-135.562 659.487,-120.355 623.748,-108.497"/>
+<polygon fill="black" stroke="black" points="624.547,-105.075 613.954,-105.247 622.343,-111.718 624.547,-105.075"/>
+</g>
+<!-- group_running_0 -->
+<g id="node13" class="node"><title>group_running_0</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="927" cy="-18" rx="72.0111" ry="18"/>
+<text text-anchor="middle" x="927" y="-12.4" font-family="Times,serif" font-size="14.00" fill="orange">group_running_0</text>
+</g>
+<!-- IPaddr0_start_0 odin&#45;&gt;group_running_0 -->
+<g id="edge10" class="edge"><title>IPaddr0_start_0 odin&#45;&gt;group_running_0</title>
+<path fill="none" stroke="black" stroke-width="2" d="M843.597,-148.766C866.832,-140.675 891.818,-127.906 908,-108 922.024,-90.7488 926.414,-65.6439 927.501,-46.3293"/>
+<polygon fill="black" stroke="black" points="931.001,-46.3826 927.805,-36.2813 924.004,-46.1707 931.001,-46.3826"/>
+</g>
+<!-- MailTo_start_0 odin&#45;&gt;group_running_0 -->
+<g id="edge12" class="edge"><title>MailTo_start_0 odin&#45;&gt;group_running_0</title>
+<path fill="none" stroke="black" stroke-width="2" d="M630.123,-77.5408C694.547,-64.6201 794.893,-44.4951 860.788,-31.2792"/>
+<polygon fill="black" stroke="black" points="861.538,-34.6986 870.655,-29.3005 860.161,-27.8353 861.538,-34.6986"/>
+</g>
+<!-- ms_drbd_post_notify_demoted_0 -->
+<g id="node57" class="node"><title>ms_drbd_post_notify_demoted_0</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="648" cy="-1098" rx="132.02" ry="18"/>
+<text text-anchor="middle" x="648" y="-1092.4" font-family="Times,serif" font-size="14.00" fill="orange">ms_drbd_post_notify_demoted_0</text>
+</g>
+<!-- ms_drbd_demoted_0&#45;&gt;ms_drbd_post_notify_demoted_0 -->
+<g id="edge68" class="edge"><title>ms_drbd_demoted_0&#45;&gt;ms_drbd_post_notify_demoted_0</title>
+<path fill="none" stroke="black" stroke-width="2" d="M648.748,-1151.83C648.641,-1144.13 648.514,-1134.97 648.395,-1126.42"/>
+<polygon fill="black" stroke="black" points="651.894,-1126.36 648.256,-1116.41 644.895,-1126.46 651.894,-1126.36"/>
+</g>
+<!-- ms_drbd_promote_0 -->
+<g id="node61" class="node"><title>ms_drbd_promote_0</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="658" cy="-666" rx="84.7776" ry="18"/>
+<text text-anchor="middle" x="658" y="-660.4" font-family="Times,serif" font-size="14.00" fill="orange">ms_drbd_promote_0</text>
+</g>
+<!-- ms_drbd_demoted_0&#45;&gt;ms_drbd_promote_0 -->
+<g id="edge70" class="edge"><title>ms_drbd_demoted_0&#45;&gt;ms_drbd_promote_0</title>
+<path fill="none" stroke="black" stroke-width="2" d="M727.261,-1162.11C859.993,-1146.53 1115,-1106.18 1115,-1026 1115,-1026 1115,-1026 1115,-810 1115,-770.01 857.154,-708.849 728.82,-680.885"/>
+<polygon fill="black" stroke="black" points="729.201,-677.387 718.686,-678.688 727.717,-684.228 729.201,-677.387"/>
+</g>
+<!-- drbd0:0_post_notify_demote_0 frigg -->
+<g id="node19" class="node"><title>drbd0:0_post_notify_demote_0 frigg</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="820" cy="-1026" rx="144.022" ry="18"/>
+<text text-anchor="middle" x="820" y="-1020.4" font-family="Times,serif" font-size="14.00">drbd0:0_post_notify_demote_0 frigg</text>
+</g>
+<!-- ms_drbd_confirmed&#45;post_notify_demoted_0 -->
+<g id="node21" class="node"><title>ms_drbd_confirmed&#45;post_notify_demoted_0</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="648" cy="-954" rx="173.079" ry="18"/>
+<text text-anchor="middle" x="648" y="-948.4" font-family="Times,serif" font-size="14.00" fill="orange">ms_drbd_confirmed&#45;post_notify_demoted_0</text>
+</g>
+<!-- drbd0:0_post_notify_demote_0 frigg&#45;&gt;ms_drbd_confirmed&#45;post_notify_demoted_0 -->
+<g id="edge18" class="edge"><title>drbd0:0_post_notify_demote_0 frigg&#45;&gt;ms_drbd_confirmed&#45;post_notify_demoted_0</title>
+<path fill="none" stroke="black" stroke-width="2" d="M778.364,-1008.57C754.561,-998.607 724.514,-986.029 699.267,-975.461"/>
+<polygon fill="black" stroke="black" points="700.425,-972.151 689.849,-971.518 697.722,-978.608 700.425,-972.151"/>
+</g>
+<!-- ms_drbd_confirmed&#45;post_notify_demoted_0&#45;&gt;drbd0:0_monitor_12000 frigg -->
+<g id="edge44" class="edge"><title>ms_drbd_confirmed&#45;post_notify_demoted_0&#45;&gt;drbd0:0_monitor_12000 frigg</title>
+<path fill="none" stroke="black" stroke-width="2" d="M571.024,-937.852C469.896,-914.38 305,-867.318 305,-810 305,-810 305,-810 305,-378 305,-337.876 305,-291.463 305,-262.418"/>
+<polygon fill="black" stroke="black" points="308.5,-262.185 305,-252.185 301.5,-262.185 308.5,-262.185"/>
+</g>
+<!-- ms_drbd_confirmed&#45;post_notify_demoted_0&#45;&gt;drbd0:1_monitor_10000 odin -->
+<g id="edge46" class="edge"><title>ms_drbd_confirmed&#45;post_notify_demoted_0&#45;&gt;drbd0:1_monitor_10000 odin</title>
+<path fill="none" stroke="black" stroke-width="2" d="M754.32,-939.792C881.367,-919.767 1077,-877.908 1077,-810 1077,-810 1077,-810 1077,-378 1077,-336.368 1061.97,-290.14 1050.81,-261.567"/>
+<polygon fill="black" stroke="black" points="1054.04,-260.206 1047.06,-252.233 1047.54,-262.817 1054.04,-260.206"/>
+</g>
+<!-- ms_drbd_pre_notify_promote_0 -->
+<g id="node50" class="node"><title>ms_drbd_pre_notify_promote_0</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="648" cy="-882" rx="126.967" ry="18"/>
+<text text-anchor="middle" x="648" y="-876.4" font-family="Times,serif" font-size="14.00" fill="orange">ms_drbd_pre_notify_promote_0</text>
+</g>
+<!-- ms_drbd_confirmed&#45;post_notify_demoted_0&#45;&gt;ms_drbd_pre_notify_promote_0 -->
+<g id="edge48" class="edge"><title>ms_drbd_confirmed&#45;post_notify_demoted_0&#45;&gt;ms_drbd_pre_notify_promote_0</title>
+<path fill="none" stroke="black" stroke-width="2" d="M648,-935.831C648,-928.131 648,-918.974 648,-910.417"/>
+<polygon fill="black" stroke="black" points="651.5,-910.413 648,-900.413 644.5,-910.413 651.5,-910.413"/>
+</g>
+<!-- drbd0:0_post_notify_promote_0 frigg -->
+<g id="node22" class="node"><title>drbd0:0_post_notify_promote_0 frigg</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="826" cy="-378" rx="147.181" ry="18"/>
+<text text-anchor="middle" x="826" y="-372.4" font-family="Times,serif" font-size="14.00">drbd0:0_post_notify_promote_0 frigg</text>
+</g>
+<!-- ms_drbd_confirmed&#45;post_notify_promoted_0 -->
+<g id="node24" class="node"><title>ms_drbd_confirmed&#45;post_notify_promoted_0</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="724" cy="-306" rx="176.238" ry="18"/>
+<text text-anchor="middle" x="724" y="-300.4" font-family="Times,serif" font-size="14.00" fill="orange">ms_drbd_confirmed&#45;post_notify_promoted_0</text>
+</g>
+<!-- drbd0:0_post_notify_promote_0 frigg&#45;&gt;ms_drbd_confirmed&#45;post_notify_promoted_0 -->
+<g id="edge20" class="edge"><title>drbd0:0_post_notify_promote_0 frigg&#45;&gt;ms_drbd_confirmed&#45;post_notify_promoted_0</title>
+<path fill="none" stroke="black" stroke-width="2" d="M800.787,-360.202C787.791,-351.029 771.764,-339.716 757.72,-329.803"/>
+<polygon fill="black" stroke="black" points="759.465,-326.75 749.277,-323.843 755.428,-332.469 759.465,-326.75"/>
+</g>
+<!-- ms_drbd_confirmed&#45;post_notify_promoted_0&#45;&gt;drbd0:0_monitor_12000 frigg -->
+<g id="edge50" class="edge"><title>ms_drbd_confirmed&#45;post_notify_promoted_0&#45;&gt;drbd0:0_monitor_12000 frigg</title>
+<path fill="none" stroke="black" stroke-width="2" d="M633.857,-290.51C562.906,-278.318 464.546,-261.416 393.914,-249.279"/>
+<polygon fill="black" stroke="black" points="394.149,-245.768 383.701,-247.524 392.964,-252.667 394.149,-245.768"/>
+</g>
+<!-- ms_drbd_confirmed&#45;post_notify_promoted_0&#45;&gt;drbd0:1_monitor_10000 odin -->
+<g id="edge52" class="edge"><title>ms_drbd_confirmed&#45;post_notify_promoted_0&#45;&gt;drbd0:1_monitor_10000 odin</title>
+<path fill="none" stroke="black" stroke-width="2" d="M796.267,-289.482C846.331,-278.039 912.837,-262.837 963.574,-251.24"/>
+<polygon fill="black" stroke="black" points="964.533,-254.611 973.502,-248.971 962.973,-247.787 964.533,-254.611"/>
+</g>
+<!-- group_start_0 -->
+<g id="node43" class="node"><title>group_start_0</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="785" cy="-234" rx="58.2438" ry="18"/>
+<text text-anchor="middle" x="785" y="-228.4" font-family="Times,serif" font-size="14.00" fill="orange">group_start_0</text>
+</g>
+<!-- ms_drbd_confirmed&#45;post_notify_promoted_0&#45;&gt;group_start_0 -->
+<g id="edge54" class="edge"><title>ms_drbd_confirmed&#45;post_notify_promoted_0&#45;&gt;group_start_0</title>
+<path fill="none" stroke="black" stroke-width="2" d="M739.393,-287.831C746.709,-279.196 755.576,-268.73 763.54,-259.33"/>
+<polygon fill="black" stroke="black" points="766.45,-261.31 770.243,-251.418 761.109,-256.785 766.45,-261.31"/>
+</g>
+<!-- drbd0:0_pre_notify_demote_0 frigg -->
+<g id="node25" class="node"><title>drbd0:0_pre_notify_demote_0 frigg</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="141" cy="-1458" rx="140.864" ry="18"/>
+<text text-anchor="middle" x="141" y="-1452.4" font-family="Times,serif" font-size="14.00">drbd0:0_pre_notify_demote_0 frigg</text>
+</g>
+<!-- ms_drbd_confirmed&#45;pre_notify_demote_0 -->
+<g id="node27" class="node"><title>ms_drbd_confirmed&#45;pre_notify_demote_0</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="439" cy="-1386" rx="164.867" ry="18"/>
+<text text-anchor="middle" x="439" y="-1380.4" font-family="Times,serif" font-size="14.00" fill="orange">ms_drbd_confirmed&#45;pre_notify_demote_0</text>
+</g>
+<!-- drbd0:0_pre_notify_demote_0 frigg&#45;&gt;ms_drbd_confirmed&#45;pre_notify_demote_0 -->
+<g id="edge22" class="edge"><title>drbd0:0_pre_notify_demote_0 frigg&#45;&gt;ms_drbd_confirmed&#45;pre_notify_demote_0</title>
+<path fill="none" stroke="black" stroke-width="2" d="M207.136,-1442.02C252.6,-1431.04 313.156,-1416.41 360.981,-1404.85"/>
+<polygon fill="black" stroke="black" points="362.085,-1408.18 370.983,-1402.43 360.441,-1401.38 362.085,-1408.18"/>
+</g>
+<!-- ms_drbd_demote_0 -->
+<g id="node55" class="node"><title>ms_drbd_demote_0</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="650" cy="-1314" rx="82.1179" ry="18"/>
+<text text-anchor="middle" x="650" y="-1308.4" font-family="Times,serif" font-size="14.00" fill="orange">ms_drbd_demote_0</text>
+</g>
+<!-- ms_drbd_confirmed&#45;pre_notify_demote_0&#45;&gt;ms_drbd_demote_0 -->
+<g id="edge56" class="edge"><title>ms_drbd_confirmed&#45;pre_notify_demote_0&#45;&gt;ms_drbd_demote_0</title>
+<path fill="none" stroke="black" stroke-width="2" d="M489.272,-1368.85C521.307,-1357.91 562.803,-1343.75 595.608,-1332.56"/>
+<polygon fill="black" stroke="black" points="597.155,-1335.73 605.489,-1329.19 594.895,-1329.11 597.155,-1335.73"/>
+</g>
+<!-- ms_drbd_confirmed&#45;pre_notify_demote_0&#45;&gt;ms_drbd_post_notify_demoted_0 -->
+<g id="edge58" class="edge"><title>ms_drbd_confirmed&#45;pre_notify_demote_0&#45;&gt;ms_drbd_post_notify_demoted_0</title>
+<path fill="none" stroke="black" stroke-width="2" d="M328.275,-1372.62C273.989,-1360.62 224.595,-1338.08 247,-1296 306.089,-1185.02 450.881,-1135.21 549.305,-1113.6"/>
+<polygon fill="black" stroke="black" points="550.228,-1116.98 559.277,-1111.47 548.766,-1110.14 550.228,-1116.98"/>
+</g>
+<!-- drbd0:0_pre_notify_promote_0 frigg -->
+<g id="node28" class="node"><title>drbd0:0_pre_notify_promote_0 frigg</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="820" cy="-810" rx="144.022" ry="18"/>
+<text text-anchor="middle" x="820" y="-804.4" font-family="Times,serif" font-size="14.00">drbd0:0_pre_notify_promote_0 frigg</text>
+</g>
+<!-- ms_drbd_confirmed&#45;pre_notify_promote_0 -->
+<g id="node30" class="node"><title>ms_drbd_confirmed&#45;pre_notify_promote_0</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="648" cy="-738" rx="168.026" ry="18"/>
+<text text-anchor="middle" x="648" y="-732.4" font-family="Times,serif" font-size="14.00" fill="orange">ms_drbd_confirmed&#45;pre_notify_promote_0</text>
+</g>
+<!-- drbd0:0_pre_notify_promote_0 frigg&#45;&gt;ms_drbd_confirmed&#45;pre_notify_promote_0 -->
+<g id="edge24" class="edge"><title>drbd0:0_pre_notify_promote_0 frigg&#45;&gt;ms_drbd_confirmed&#45;pre_notify_promote_0</title>
+<path fill="none" stroke="black" stroke-width="2" d="M778.364,-792.571C754.561,-782.607 724.514,-770.029 699.267,-759.461"/>
+<polygon fill="black" stroke="black" points="700.425,-756.151 689.849,-755.518 697.722,-762.608 700.425,-756.151"/>
+</g>
+<!-- ms_drbd_post_notify_promoted_0 -->
+<g id="node59" class="node"><title>ms_drbd_post_notify_promoted_0</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="651" cy="-450" rx="134.047" ry="18"/>
+<text text-anchor="middle" x="651" y="-444.4" font-family="Times,serif" font-size="14.00" fill="orange">ms_drbd_post_notify_promoted_0</text>
+</g>
+<!-- ms_drbd_confirmed&#45;pre_notify_promote_0&#45;&gt;ms_drbd_post_notify_promoted_0 -->
+<g id="edge60" class="edge"><title>ms_drbd_confirmed&#45;pre_notify_promote_0&#45;&gt;ms_drbd_post_notify_promoted_0</title>
+<path fill="none" stroke="black" stroke-width="2" d="M603.906,-720.477C588.422,-711.933 572.723,-699.978 564,-684 556.333,-669.957 560.712,-663.658 564,-648 577.623,-583.13 613.831,-513.569 634.995,-476.637"/>
+<polygon fill="black" stroke="black" points="638.031,-478.378 640.029,-467.973 631.979,-474.861 638.031,-478.378"/>
+</g>
+<!-- ms_drbd_confirmed&#45;pre_notify_promote_0&#45;&gt;ms_drbd_promote_0 -->
+<g id="edge62" class="edge"><title>ms_drbd_confirmed&#45;pre_notify_promote_0&#45;&gt;ms_drbd_promote_0</title>
+<path fill="none" stroke="black" stroke-width="2" d="M650.523,-719.831C651.593,-712.131 652.865,-702.974 654.053,-694.417"/>
+<polygon fill="black" stroke="black" points="657.534,-694.8 655.443,-684.413 650.6,-693.837 657.534,-694.8"/>
+</g>
+<!-- drbd0:1_post_notify_demote_0 odin -->
+<g id="node32" class="node"><title>drbd0:1_post_notify_demote_0 odin</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="478" cy="-1026" rx="142.127" ry="18"/>
+<text text-anchor="middle" x="478" y="-1020.4" font-family="Times,serif" font-size="14.00">drbd0:1_post_notify_demote_0 odin</text>
+</g>
+<!-- drbd0:1_post_notify_demote_0 odin&#45;&gt;ms_drbd_confirmed&#45;post_notify_demoted_0 -->
+<g id="edge26" class="edge"><title>drbd0:1_post_notify_demote_0 odin&#45;&gt;ms_drbd_confirmed&#45;post_notify_demoted_0</title>
+<path fill="none" stroke="black" stroke-width="2" d="M518.719,-1008.75C542.237,-998.794 572.028,-986.176 597.084,-975.564"/>
+<polygon fill="black" stroke="black" points="598.589,-978.728 606.432,-971.605 595.859,-972.282 598.589,-978.728"/>
+</g>
+<!-- drbd0:1_post_notify_promote_0 odin -->
+<g id="node34" class="node"><title>drbd0:1_post_notify_promote_0 odin</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="478" cy="-378" rx="144.786" ry="18"/>
+<text text-anchor="middle" x="478" y="-372.4" font-family="Times,serif" font-size="14.00">drbd0:1_post_notify_promote_0 odin</text>
+</g>
+<!-- drbd0:1_post_notify_promote_0 odin&#45;&gt;ms_drbd_confirmed&#45;post_notify_promoted_0 -->
+<g id="edge28" class="edge"><title>drbd0:1_post_notify_promote_0 odin&#45;&gt;ms_drbd_confirmed&#45;post_notify_promoted_0</title>
+<path fill="none" stroke="black" stroke-width="2" d="M534.746,-361.391C570.827,-350.831 617.748,-337.098 655.844,-325.948"/>
+<polygon fill="black" stroke="black" points="657.217,-329.193 665.831,-323.025 655.251,-322.475 657.217,-329.193"/>
+</g>
+<!-- drbd0:1_pre_notify_demote_0 odin -->
+<g id="node36" class="node"><title>drbd0:1_pre_notify_demote_0 odin</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="439" cy="-1458" rx="138.969" ry="18"/>
+<text text-anchor="middle" x="439" y="-1452.4" font-family="Times,serif" font-size="14.00">drbd0:1_pre_notify_demote_0 odin</text>
+</g>
+<!-- drbd0:1_pre_notify_demote_0 odin&#45;&gt;ms_drbd_confirmed&#45;pre_notify_demote_0 -->
+<g id="edge30" class="edge"><title>drbd0:1_pre_notify_demote_0 odin&#45;&gt;ms_drbd_confirmed&#45;pre_notify_demote_0</title>
+<path fill="none" stroke="black" stroke-width="2" d="M439,-1439.83C439,-1432.13 439,-1422.97 439,-1414.42"/>
+<polygon fill="black" stroke="black" points="442.5,-1414.41 439,-1404.41 435.5,-1414.41 442.5,-1414.41"/>
+</g>
+<!-- drbd0:1_pre_notify_promote_0 odin -->
+<g id="node38" class="node"><title>drbd0:1_pre_notify_promote_0 odin</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="478" cy="-810" rx="142.127" ry="18"/>
+<text text-anchor="middle" x="478" y="-804.4" font-family="Times,serif" font-size="14.00">drbd0:1_pre_notify_promote_0 odin</text>
+</g>
+<!-- drbd0:1_pre_notify_promote_0 odin&#45;&gt;ms_drbd_confirmed&#45;pre_notify_promote_0 -->
+<g id="edge32" class="edge"><title>drbd0:1_pre_notify_promote_0 odin&#45;&gt;ms_drbd_confirmed&#45;pre_notify_promote_0</title>
+<path fill="none" stroke="black" stroke-width="2" d="M518.719,-792.754C542.324,-782.757 572.249,-770.083 597.362,-759.447"/>
+<polygon fill="black" stroke="black" points="598.885,-762.603 606.728,-755.48 596.155,-756.157 598.885,-762.603"/>
+</g>
+<!-- ms_drbd_promoted_0&#45;&gt;group_start_0 -->
+<g id="edge98" class="edge"><title>ms_drbd_promoted_0&#45;&gt;group_start_0</title>
+<path fill="none" stroke="black" stroke-width="2" d="M924.136,-504.272C953.575,-474.741 1004.35,-413.435 982,-360 957.253,-300.834 888.035,-266.709 838.653,-249.251"/>
+<polygon fill="black" stroke="black" points="839.654,-245.894 829.06,-245.979 837.394,-252.52 839.654,-245.894"/>
+</g>
+<!-- ms_drbd_promoted_0&#45;&gt;ms_drbd_post_notify_promoted_0 -->
+<g id="edge100" class="edge"><title>ms_drbd_promoted_0&#45;&gt;ms_drbd_post_notify_promoted_0</title>
+<path fill="none" stroke="black" stroke-width="2" d="M853.293,-507.343C814.268,-496.281 760.399,-481.011 718.061,-469.009"/>
+<polygon fill="black" stroke="black" points="718.965,-465.628 708.39,-466.268 717.056,-472.363 718.965,-465.628"/>
+</g>
+<!-- group_start_0&#45;&gt;IPaddr0_start_0 odin -->
+<g id="edge38" class="edge"><title>group_start_0&#45;&gt;IPaddr0_start_0 odin</title>
+<path fill="none" stroke="black" stroke-width="2" d="M785,-215.831C785,-208.131 785,-198.974 785,-190.417"/>
+<polygon fill="black" stroke="black" points="788.5,-190.413 785,-180.413 781.5,-190.413 788.5,-190.413"/>
+</g>
+<!-- group_start_0&#45;&gt;MailTo_start_0 odin -->
+<g id="edge40" class="edge"><title>group_start_0&#45;&gt;MailTo_start_0 odin</title>
+<path fill="none" stroke="black" stroke-width="2" d="M755.361,-218.334C736.346,-207.997 711.317,-193.851 690,-180 657.622,-158.961 622.289,-132.48 598.02,-113.702"/>
+<polygon fill="black" stroke="black" points="599.963,-110.78 589.92,-107.404 595.666,-116.305 599.963,-110.78"/>
+</g>
+<!-- group_start_0&#45;&gt;group_running_0 -->
+<g id="edge42" class="edge"><title>group_start_0&#45;&gt;group_running_0</title>
+<path fill="none" stroke="black" stroke-width="2" d="M819.055,-219.285C838.323,-209.88 862.073,-196.38 880,-180 908.761,-153.72 918.357,-145.179 930,-108 936.288,-87.9209 935.135,-64.0791 932.628,-45.9805"/>
+<polygon fill="black" stroke="black" points="936.076,-45.3813 931.04,-36.0598 929.164,-46.4872 936.076,-45.3813"/>
+</g>
+<!-- ms_drbd_pre_notify_promote_0&#45;&gt;drbd0:0_pre_notify_promote_0 frigg -->
+<g id="edge90" class="edge"><title>ms_drbd_pre_notify_promote_0&#45;&gt;drbd0:0_pre_notify_promote_0 frigg</title>
+<path fill="none" stroke="black" stroke-width="2" d="M688.762,-864.937C712.81,-854.87 743.455,-842.042 769.083,-831.314"/>
+<polygon fill="black" stroke="black" points="770.763,-834.405 778.636,-827.315 768.06,-827.948 770.763,-834.405"/>
+</g>
+<!-- ms_drbd_pre_notify_promote_0&#45;&gt;ms_drbd_confirmed&#45;pre_notify_promote_0 -->
+<g id="edge94" class="edge"><title>ms_drbd_pre_notify_promote_0&#45;&gt;ms_drbd_confirmed&#45;pre_notify_promote_0</title>
+<path fill="none" stroke="black" stroke-width="2" d="M648,-863.762C648,-839.201 648,-795.247 648,-766.354"/>
+<polygon fill="black" stroke="black" points="651.5,-766.09 648,-756.09 644.5,-766.09 651.5,-766.09"/>
+</g>
+<!-- ms_drbd_pre_notify_promote_0&#45;&gt;drbd0:1_pre_notify_promote_0 odin -->
+<g id="edge92" class="edge"><title>ms_drbd_pre_notify_promote_0&#45;&gt;drbd0:1_pre_notify_promote_0 odin</title>
+<path fill="none" stroke="black" stroke-width="2" d="M607.281,-864.754C583.589,-854.72 553.53,-841.989 528.36,-831.329"/>
+<polygon fill="black" stroke="black" points="529.549,-828.032 518.976,-827.355 526.819,-834.477 529.549,-828.032"/>
+</g>
+<!-- ms_drbd_demote_0&#45;&gt;drbd0:0_demote_0 frigg -->
+<g id="edge64" class="edge"><title>ms_drbd_demote_0&#45;&gt;drbd0:0_demote_0 frigg</title>
+<path fill="none" stroke="black" stroke-width="2" d="M609.207,-1298.38C581.337,-1287.7 544.163,-1273.47 514.145,-1261.97"/>
+<polygon fill="black" stroke="black" points="515.261,-1258.65 504.671,-1258.34 512.758,-1265.19 515.261,-1258.65"/>
+</g>
+<!-- ms_drbd_demote_0&#45;&gt;ms_drbd_demoted_0 -->
+<g id="edge66" class="edge"><title>ms_drbd_demote_0&#45;&gt;ms_drbd_demoted_0</title>
+<path fill="none" stroke="black" stroke-width="2" d="M649.873,-1295.76C649.703,-1271.2 649.398,-1227.25 649.197,-1198.35"/>
+<polygon fill="black" stroke="black" points="652.695,-1198.07 649.126,-1188.09 645.695,-1198.11 652.695,-1198.07"/>
+</g>
+<!-- ms_drbd_post_notify_demoted_0&#45;&gt;drbd0:0_post_notify_demote_0 frigg -->
+<g id="edge72" class="edge"><title>ms_drbd_post_notify_demoted_0&#45;&gt;drbd0:0_post_notify_demote_0 frigg</title>
+<path fill="none" stroke="black" stroke-width="2" d="M689.198,-1080.75C713.168,-1070.72 743.581,-1057.99 769.047,-1047.33"/>
+<polygon fill="black" stroke="black" points="770.669,-1050.44 778.542,-1043.35 767.966,-1043.99 770.669,-1050.44"/>
+</g>
+<!-- ms_drbd_post_notify_demoted_0&#45;&gt;ms_drbd_confirmed&#45;post_notify_demoted_0 -->
+<g id="edge76" class="edge"><title>ms_drbd_post_notify_demoted_0&#45;&gt;ms_drbd_confirmed&#45;post_notify_demoted_0</title>
+<path fill="none" stroke="black" stroke-width="2" d="M648,-1079.76C648,-1055.2 648,-1011.25 648,-982.354"/>
+<polygon fill="black" stroke="black" points="651.5,-982.09 648,-972.09 644.5,-982.09 651.5,-982.09"/>
+</g>
+<!-- ms_drbd_post_notify_demoted_0&#45;&gt;drbd0:1_post_notify_demote_0 odin -->
+<g id="edge74" class="edge"><title>ms_drbd_post_notify_demoted_0&#45;&gt;drbd0:1_post_notify_demote_0 odin</title>
+<path fill="none" stroke="black" stroke-width="2" d="M607.281,-1080.75C583.589,-1070.72 553.53,-1057.99 528.36,-1047.33"/>
+<polygon fill="black" stroke="black" points="529.549,-1044.03 518.976,-1043.35 526.819,-1050.48 529.549,-1044.03"/>
+</g>
+<!-- ms_drbd_post_notify_promoted_0&#45;&gt;drbd0:0_post_notify_promote_0 frigg -->
+<g id="edge78" class="edge"><title>ms_drbd_post_notify_promoted_0&#45;&gt;drbd0:0_post_notify_promote_0 frigg</title>
+<path fill="none" stroke="black" stroke-width="2" d="M692.917,-432.754C717.413,-422.676 748.521,-409.877 774.501,-399.188"/>
+<polygon fill="black" stroke="black" points="775.902,-402.396 783.819,-395.355 773.239,-395.923 775.902,-402.396"/>
+</g>
+<!-- ms_drbd_post_notify_promoted_0&#45;&gt;ms_drbd_confirmed&#45;post_notify_promoted_0 -->
+<g id="edge82" class="edge"><title>ms_drbd_post_notify_promoted_0&#45;&gt;ms_drbd_confirmed&#45;post_notify_promoted_0</title>
+<path fill="none" stroke="black" stroke-width="2" d="M651.817,-431.707C653.338,-412.743 657.671,-382.744 670,-360 675.815,-349.272 684.429,-339.278 693.077,-330.877"/>
+<polygon fill="black" stroke="black" points="695.637,-333.277 700.617,-323.926 690.892,-328.131 695.637,-333.277"/>
+</g>
+<!-- ms_drbd_post_notify_promoted_0&#45;&gt;drbd0:1_post_notify_promote_0 odin -->
+<g id="edge80" class="edge"><title>ms_drbd_post_notify_promoted_0&#45;&gt;drbd0:1_post_notify_promote_0 odin</title>
+<path fill="none" stroke="black" stroke-width="2" d="M609.562,-432.754C585.453,-422.72 554.863,-409.989 529.249,-399.329"/>
+<polygon fill="black" stroke="black" points="530.276,-395.966 519.699,-395.355 527.587,-402.428 530.276,-395.966"/>
+</g>
+<!-- ms_drbd_promote_0&#45;&gt;drbd0:1_promote_0 odin -->
+<g id="edge96" class="edge"><title>ms_drbd_promote_0&#45;&gt;drbd0:1_promote_0 odin</title>
+<path fill="none" stroke="black" stroke-width="2" d="M707.984,-651.43C746.743,-640.131 800.692,-624.406 842.338,-612.266"/>
+<polygon fill="black" stroke="black" points="843.481,-615.578 852.102,-609.42 841.522,-608.858 843.481,-615.578"/>
+</g>
+<!-- ms_drbd_pre_notify_demote_0 -->
+<g id="node72" class="node"><title>ms_drbd_pre_notify_demote_0</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="439" cy="-1530" rx="123.809" ry="18"/>
+<text text-anchor="middle" x="439" y="-1524.4" font-family="Times,serif" font-size="14.00" fill="orange">ms_drbd_pre_notify_demote_0</text>
+</g>
+<!-- ms_drbd_pre_notify_demote_0&#45;&gt;drbd0:0_pre_notify_demote_0 frigg -->
+<g id="edge84" class="edge"><title>ms_drbd_pre_notify_demote_0&#45;&gt;drbd0:0_pre_notify_demote_0 frigg</title>
+<path fill="none" stroke="black" stroke-width="2" d="M375.072,-1514.55C328.672,-1503.34 265.721,-1488.13 216.788,-1476.31"/>
+<polygon fill="black" stroke="black" points="217.427,-1472.86 206.885,-1473.92 215.783,-1479.67 217.427,-1472.86"/>
+</g>
+<!-- ms_drbd_pre_notify_demote_0&#45;&gt;ms_drbd_confirmed&#45;pre_notify_demote_0 -->
+<g id="edge88" class="edge"><title>ms_drbd_pre_notify_demote_0&#45;&gt;ms_drbd_confirmed&#45;pre_notify_demote_0</title>
+<path fill="none" stroke="black" stroke-width="2" d="M503.053,-1514.45C538.216,-1504.48 576.73,-1490.68 587,-1476 596.172,-1462.89 596.172,-1453.11 587,-1440 577.977,-1427.1 547.153,-1414.89 515.954,-1405.35"/>
+<polygon fill="black" stroke="black" points="516.865,-1401.97 506.283,-1402.48 514.872,-1408.68 516.865,-1401.97"/>
+</g>
+<!-- ms_drbd_pre_notify_demote_0&#45;&gt;drbd0:1_pre_notify_demote_0 odin -->
+<g id="edge86" class="edge"><title>ms_drbd_pre_notify_demote_0&#45;&gt;drbd0:1_pre_notify_demote_0 odin</title>
+<path fill="none" stroke="black" stroke-width="2" d="M439,-1511.83C439,-1504.13 439,-1494.97 439,-1486.42"/>
+<polygon fill="black" stroke="black" points="442.5,-1486.41 439,-1476.41 435.5,-1486.41 442.5,-1486.41"/>
+</g>
+</g>
+</svg>
diff --git a/doc/sphinx/shared/images/Policy-Engine-small.dot b/doc/sphinx/shared/images/Policy-Engine-small.dot
new file mode 100644
index 0000000..3fef81e
--- /dev/null
+++ b/doc/sphinx/shared/images/Policy-Engine-small.dot
@@ -0,0 +1,31 @@
+ digraph "g" {
+"rsc1_monitor_0 pcmk-2" -> "probe_complete pcmk-2" [ style = bold]
+"rsc1_monitor_0 pcmk-2" [ style=bold color="green" fontcolor="black" ]
+"rsc1_stop_0 pcmk-1" [ style=dashed color="red" fontcolor="black" ]
+"rsc1_start_0 pcmk-2" [ style=dashed color="red" fontcolor="black" ]
+"rsc1_stop_0 pcmk-1" -> "rsc1_start_0 pcmk-2" [ style = dashed ]
+"rsc1_stop_0 pcmk-1" -> "all_stopped" [ style = dashed ]
+"probe_complete" -> "rsc1_start_0 pcmk-2" [ style = dashed ]
+
+"rsc2_monitor_0 pcmk-2" -> "probe_complete pcmk-2" [ style = bold]
+"rsc2_monitor_0 pcmk-2" [ style=bold color="green" fontcolor="black" ]
+"rsc2_stop_0 pcmk-1" [ style=dashed color="red" fontcolor="black" ]
+"rsc2_start_0 pcmk-2" [ style=dashed color="red" fontcolor="black" ]
+"rsc2_stop_0 pcmk-1" -> "rsc2_start_0 pcmk-2" [ style = dashed ]
+"rsc2_stop_0 pcmk-1" -> "all_stopped" [ style = dashed ]
+"probe_complete" -> "rsc2_start_0 pcmk-2" [ style = dashed ]
+
+"rsc3_monitor_0 pcmk-2" -> "probe_complete pcmk-2" [ style = bold]
+"rsc3_monitor_0 pcmk-2" [ style=bold color="green" fontcolor="black" ]
+"rsc3_stop_0 pcmk-1" [ style=dashed color="blue" fontcolor="orange" ]
+"rsc3_start_0 pcmk-2" [ style=dashed color="blue" fontcolor="black" ]
+"rsc3_stop_0 pcmk-1" -> "all_stopped" [ style = dashed ]
+"probe_complete" -> "rsc3_start_0 pcmk-2" [ style = dashed ]
+
+"probe_complete pcmk-2" -> "probe_complete" [ style = bold]
+"probe_complete pcmk-2" [ style=bold color="green" fontcolor="black" ]
+"probe_complete" [ style=bold color="green" fontcolor="orange" ]
+
+"all_stopped" [ style=dashed color="red" fontcolor="orange" ]
+
+}
diff --git a/doc/sphinx/shared/images/Policy-Engine-small.svg b/doc/sphinx/shared/images/Policy-Engine-small.svg
new file mode 100644
index 0000000..a020d56
--- /dev/null
+++ b/doc/sphinx/shared/images/Policy-Engine-small.svg
@@ -0,0 +1,133 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN"
+ "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<!-- Generated by graphviz version 2.25.20091012.0445 (20091012.0445)
+ -->
+<!-- Title: g Pages: 1 -->
+<svg width="929pt" height="260pt"
+ viewBox="0.00 0.00 929.00 260.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
+<g id="graph1" class="graph" transform="scale(1 1) rotate(0) translate(4 256)">
+<title>g</title>
+<polygon fill="white" stroke="white" points="-4,5 -4,-256 926,-256 926,5 -4,5"/>
+<!-- rsc1_monitor_0 pcmk&#45;2 -->
+<g id="node1" class="node"><title>rsc1_monitor_0 pcmk&#45;2</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="405" cy="-234" rx="96.1457" ry="18"/>
+<text text-anchor="middle" x="405" y="-228.4" font-family="Times,serif" font-size="14.00">rsc1_monitor_0 pcmk&#45;2</text>
+</g>
+<!-- probe_complete pcmk&#45;2 -->
+<g id="node3" class="node"><title>probe_complete pcmk&#45;2</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="615" cy="-162" rx="98.0413" ry="18"/>
+<text text-anchor="middle" x="615" y="-156.4" font-family="Times,serif" font-size="14.00">probe_complete pcmk&#45;2</text>
+</g>
+<!-- rsc1_monitor_0 pcmk&#45;2&#45;&gt;probe_complete pcmk&#45;2 -->
+<g id="edge2" class="edge"><title>rsc1_monitor_0 pcmk&#45;2&#45;&gt;probe_complete pcmk&#45;2</title>
+<path fill="none" stroke="black" stroke-width="2" d="M451.085,-218.199C482.843,-207.311 525.232,-192.777 558.948,-181.218"/>
+<polygon fill="black" stroke="black" points="560.335,-184.442 568.66,-177.888 558.065,-177.821 560.335,-184.442"/>
+</g>
+<!-- probe_complete -->
+<g id="node9" class="node"><title>probe_complete</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="615" cy="-90" rx="68.8527" ry="18"/>
+<text text-anchor="middle" x="615" y="-84.4" font-family="Times,serif" font-size="14.00" fill="orange">probe_complete</text>
+</g>
+<!-- probe_complete pcmk&#45;2&#45;&gt;probe_complete -->
+<g id="edge24" class="edge"><title>probe_complete pcmk&#45;2&#45;&gt;probe_complete</title>
+<path fill="none" stroke="black" stroke-width="2" d="M615,-143.831C615,-136.131 615,-126.974 615,-118.417"/>
+<polygon fill="black" stroke="black" points="618.5,-118.413 615,-108.413 611.5,-118.413 618.5,-118.413"/>
+</g>
+<!-- rsc1_stop_0 pcmk&#45;1 -->
+<g id="node4" class="node"><title>rsc1_stop_0 pcmk&#45;1</title>
+<ellipse fill="none" stroke="red" stroke-dasharray="5,2" cx="264" cy="-90" rx="82.2481" ry="18"/>
+<text text-anchor="middle" x="264" y="-84.4" font-family="Times,serif" font-size="14.00">rsc1_stop_0 pcmk&#45;1</text>
+</g>
+<!-- rsc1_start_0 pcmk&#45;2 -->
+<g id="node5" class="node"><title>rsc1_start_0 pcmk&#45;2</title>
+<ellipse fill="none" stroke="red" stroke-dasharray="5,2" cx="340" cy="-18" rx="82.2481" ry="18"/>
+<text text-anchor="middle" x="340" y="-12.4" font-family="Times,serif" font-size="14.00">rsc1_start_0 pcmk&#45;2</text>
+</g>
+<!-- rsc1_stop_0 pcmk&#45;1&#45;&gt;rsc1_start_0 pcmk&#45;2 -->
+<g id="edge4" class="edge"><title>rsc1_stop_0 pcmk&#45;1&#45;&gt;rsc1_start_0 pcmk&#45;2</title>
+<path fill="none" stroke="black" stroke-dasharray="5,2" d="M282.787,-72.2022C292.174,-63.3088 303.684,-52.4042 313.913,-42.7135"/>
+<polygon fill="black" stroke="black" points="316.577,-45.0113 321.43,-35.593 311.763,-39.9297 316.577,-45.0113"/>
+</g>
+<!-- all_stopped -->
+<g id="node8" class="node"><title>all_stopped</title>
+<ellipse fill="none" stroke="red" stroke-dasharray="5,2" cx="188" cy="-18" rx="51.7974" ry="18"/>
+<text text-anchor="middle" x="188" y="-12.4" font-family="Times,serif" font-size="14.00" fill="orange">all_stopped</text>
+</g>
+<!-- rsc1_stop_0 pcmk&#45;1&#45;&gt;all_stopped -->
+<g id="edge6" class="edge"><title>rsc1_stop_0 pcmk&#45;1&#45;&gt;all_stopped</title>
+<path fill="none" stroke="black" stroke-dasharray="5,2" d="M245.213,-72.2022C235.592,-63.0876 223.742,-51.8605 213.326,-41.9926"/>
+<polygon fill="black" stroke="black" points="215.714,-39.4339 206.047,-35.0972 210.9,-44.5155 215.714,-39.4339"/>
+</g>
+<!-- probe_complete&#45;&gt;rsc1_start_0 pcmk&#45;2 -->
+<g id="edge8" class="edge"><title>probe_complete&#45;&gt;rsc1_start_0 pcmk&#45;2</title>
+<path fill="none" stroke="black" stroke-dasharray="5,2" d="M566.152,-77.2108C520.668,-65.3021 452.621,-47.4861 403.053,-34.5083"/>
+<polygon fill="black" stroke="black" points="403.645,-31.0455 393.084,-31.8985 401.872,-37.8172 403.645,-31.0455"/>
+</g>
+<!-- rsc2_start_0 pcmk&#45;2 -->
+<g id="node14" class="node"><title>rsc2_start_0 pcmk&#45;2</title>
+<ellipse fill="none" stroke="red" stroke-dasharray="5,2" cx="522" cy="-18" rx="82.2481" ry="18"/>
+<text text-anchor="middle" x="522" y="-12.4" font-family="Times,serif" font-size="14.00">rsc2_start_0 pcmk&#45;2</text>
+</g>
+<!-- probe_complete&#45;&gt;rsc2_start_0 pcmk&#45;2 -->
+<g id="edge16" class="edge"><title>probe_complete&#45;&gt;rsc2_start_0 pcmk&#45;2</title>
+<path fill="none" stroke="black" stroke-dasharray="5,2" d="M592.96,-72.937C580.861,-63.57 565.674,-51.8119 552.457,-41.5796"/>
+<polygon fill="black" stroke="black" points="554.577,-38.7949 544.528,-35.4407 550.292,-44.33 554.577,-38.7949"/>
+</g>
+<!-- rsc3_start_0 pcmk&#45;2 -->
+<g id="node21" class="node"><title>rsc3_start_0 pcmk&#45;2</title>
+<ellipse fill="none" stroke="blue" stroke-dasharray="5,2" cx="704" cy="-18" rx="82.2481" ry="18"/>
+<text text-anchor="middle" x="704" y="-12.4" font-family="Times,serif" font-size="14.00">rsc3_start_0 pcmk&#45;2</text>
+</g>
+<!-- probe_complete&#45;&gt;rsc3_start_0 pcmk&#45;2 -->
+<g id="edge22" class="edge"><title>probe_complete&#45;&gt;rsc3_start_0 pcmk&#45;2</title>
+<path fill="none" stroke="black" stroke-dasharray="5,2" d="M636.544,-72.5708C647.939,-63.353 662.098,-51.8983 674.498,-41.8669"/>
+<polygon fill="black" stroke="black" points="676.772,-44.5288 682.346,-35.5182 672.37,-39.0867 676.772,-44.5288"/>
+</g>
+<!-- rsc2_monitor_0 pcmk&#45;2 -->
+<g id="node11" class="node"><title>rsc2_monitor_0 pcmk&#45;2</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="615" cy="-234" rx="96.1457" ry="18"/>
+<text text-anchor="middle" x="615" y="-228.4" font-family="Times,serif" font-size="14.00">rsc2_monitor_0 pcmk&#45;2</text>
+</g>
+<!-- rsc2_monitor_0 pcmk&#45;2&#45;&gt;probe_complete pcmk&#45;2 -->
+<g id="edge10" class="edge"><title>rsc2_monitor_0 pcmk&#45;2&#45;&gt;probe_complete pcmk&#45;2</title>
+<path fill="none" stroke="black" stroke-width="2" d="M615,-215.831C615,-208.131 615,-198.974 615,-190.417"/>
+<polygon fill="black" stroke="black" points="618.5,-190.413 615,-180.413 611.5,-190.413 618.5,-190.413"/>
+</g>
+<!-- rsc2_stop_0 pcmk&#45;1 -->
+<g id="node13" class="node"><title>rsc2_stop_0 pcmk&#45;1</title>
+<ellipse fill="none" stroke="red" stroke-dasharray="5,2" cx="446" cy="-90" rx="82.2481" ry="18"/>
+<text text-anchor="middle" x="446" y="-84.4" font-family="Times,serif" font-size="14.00">rsc2_stop_0 pcmk&#45;1</text>
+</g>
+<!-- rsc2_stop_0 pcmk&#45;1&#45;&gt;all_stopped -->
+<g id="edge14" class="edge"><title>rsc2_stop_0 pcmk&#45;1&#45;&gt;all_stopped</title>
+<path fill="none" stroke="black" stroke-dasharray="5,2" d="M394.076,-76.062C354.309,-65.3159 298.117,-49.977 249,-36 245.225,-34.9257 241.322,-33.8027 237.4,-32.6653"/>
+<polygon fill="black" stroke="black" points="238.251,-29.2679 227.671,-29.8275 236.291,-35.9879 238.251,-29.2679"/>
+</g>
+<!-- rsc2_stop_0 pcmk&#45;1&#45;&gt;rsc2_start_0 pcmk&#45;2 -->
+<g id="edge12" class="edge"><title>rsc2_stop_0 pcmk&#45;1&#45;&gt;rsc2_start_0 pcmk&#45;2</title>
+<path fill="none" stroke="black" stroke-dasharray="5,2" d="M464.787,-72.2022C474.174,-63.3088 485.684,-52.4042 495.913,-42.7135"/>
+<polygon fill="black" stroke="black" points="498.577,-45.0113 503.43,-35.593 493.763,-39.9297 498.577,-45.0113"/>
+</g>
+<!-- rsc3_monitor_0 pcmk&#45;2 -->
+<g id="node18" class="node"><title>rsc3_monitor_0 pcmk&#45;2</title>
+<ellipse fill="none" stroke="green" stroke-width="2" cx="825" cy="-234" rx="96.1457" ry="18"/>
+<text text-anchor="middle" x="825" y="-228.4" font-family="Times,serif" font-size="14.00">rsc3_monitor_0 pcmk&#45;2</text>
+</g>
+<!-- rsc3_monitor_0 pcmk&#45;2&#45;&gt;probe_complete pcmk&#45;2 -->
+<g id="edge18" class="edge"><title>rsc3_monitor_0 pcmk&#45;2&#45;&gt;probe_complete pcmk&#45;2</title>
+<path fill="none" stroke="black" stroke-width="2" d="M778.915,-218.199C747.157,-207.311 704.768,-192.777 671.052,-181.218"/>
+<polygon fill="black" stroke="black" points="671.935,-177.821 661.34,-177.888 669.665,-184.442 671.935,-177.821"/>
+</g>
+<!-- rsc3_stop_0 pcmk&#45;1 -->
+<g id="node20" class="node"><title>rsc3_stop_0 pcmk&#45;1</title>
+<ellipse fill="none" stroke="blue" stroke-dasharray="5,2" cx="82" cy="-90" rx="82.2481" ry="18"/>
+<text text-anchor="middle" x="82" y="-84.4" font-family="Times,serif" font-size="14.00" fill="orange">rsc3_stop_0 pcmk&#45;1</text>
+</g>
+<!-- rsc3_stop_0 pcmk&#45;1&#45;&gt;all_stopped -->
+<g id="edge20" class="edge"><title>rsc3_stop_0 pcmk&#45;1&#45;&gt;all_stopped</title>
+<path fill="none" stroke="black" stroke-dasharray="5,2" d="M107.39,-72.7542C121.963,-62.8554 140.4,-50.3319 155.963,-39.761"/>
+<polygon fill="black" stroke="black" points="157.965,-42.6324 164.27,-34.1183 154.032,-36.8419 157.965,-42.6324"/>
+</g>
+</g>
+</svg>
diff --git a/doc/sphinx/shared/images/pcmk-active-active.svg b/doc/sphinx/shared/images/pcmk-active-active.svg
new file mode 100644
index 0000000..c377cce
--- /dev/null
+++ b/doc/sphinx/shared/images/pcmk-active-active.svg
@@ -0,0 +1,1398 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- Created with Inkscape (http://www.inkscape.org/) -->
+
+<svg
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:xlink="http://www.w3.org/1999/xlink"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="800"
+ height="600"
+ id="svg2"
+ version="1.1"
+ inkscape:version="0.47 r22583"
+ sodipodi:docname="pcmk-active-active.svg"
+ inkscape:export-filename="/Users/beekhof/Dropbox/Public/pcmk-active-active-small.png"
+ inkscape:export-xdpi="45"
+ inkscape:export-ydpi="45">
+ <defs
+ id="defs4">
+ <linearGradient
+ id="linearGradient4826">
+ <stop
+ style="stop-color:#000000;stop-opacity:1;"
+ offset="0"
+ id="stop4828" />
+ <stop
+ style="stop-color:#000000;stop-opacity:0.62601626;"
+ offset="1"
+ id="stop4830" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient4270">
+ <stop
+ style="stop-color:#808080;stop-opacity:0.75;"
+ offset="0"
+ id="stop4272" />
+ <stop
+ style="stop-color:#bfbfbf;stop-opacity:0.5;"
+ offset="1"
+ id="stop4274" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient4411">
+ <stop
+ style="stop-color:#f3f3f3;stop-opacity:0;"
+ offset="0"
+ id="stop4413" />
+ <stop
+ style="stop-color:#e6e6e6;stop-opacity:0.21138212;"
+ offset="1"
+ id="stop4415" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient4370">
+ <stop
+ style="stop-color:#ffffff;stop-opacity:1;"
+ offset="0"
+ id="stop4372" />
+ <stop
+ style="stop-color:#f7f7f7;stop-opacity:0.69918698;"
+ offset="1"
+ id="stop4374" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient3988">
+ <stop
+ id="stop3990"
+ offset="0"
+ style="stop-color:#d3e219;stop-opacity:1;" />
+ <stop
+ id="stop3992"
+ offset="1"
+ style="stop-color:#e8a411;stop-opacity:1;" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient3838">
+ <stop
+ style="stop-color:#6badf2;stop-opacity:1;"
+ offset="0"
+ id="stop3840" />
+ <stop
+ style="stop-color:#2e447f;stop-opacity:1;"
+ offset="1"
+ id="stop3842" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient3658">
+ <stop
+ style="stop-color:#19e229;stop-opacity:1;"
+ offset="0"
+ id="stop3660" />
+ <stop
+ style="stop-color:#589b56;stop-opacity:1;"
+ offset="1"
+ id="stop3662" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient3650">
+ <stop
+ style="stop-color:#f36d6d;stop-opacity:1;"
+ offset="0"
+ id="stop3652" />
+ <stop
+ style="stop-color:#b81313;stop-opacity:1;"
+ offset="1"
+ id="stop3654" />
+ </linearGradient>
+ <inkscape:perspective
+ sodipodi:type="inkscape:persp3d"
+ inkscape:vp_x="0 : 526.18109 : 1"
+ inkscape:vp_y="0 : 1000 : 0"
+ inkscape:vp_z="744.09448 : 526.18109 : 1"
+ inkscape:persp3d-origin="372.04724 : 350.78739 : 1"
+ id="perspective10" />
+ <filter
+ id="filter3712"
+ inkscape:label="Ridged border"
+ inkscape:menu="Bevels"
+ inkscape:menu-tooltip="Ridged border with inner bevel"
+ color-interpolation-filters="sRGB">
+ <feMorphology
+ id="feMorphology3714"
+ radius="4.3"
+ in="SourceAlpha"
+ result="result91" />
+ <feComposite
+ id="feComposite3716"
+ in2="result91"
+ operator="out"
+ in="SourceGraphic" />
+ <feGaussianBlur
+ id="feGaussianBlur3718"
+ result="result0"
+ stdDeviation="1.2" />
+ <feDiffuseLighting
+ id="feDiffuseLighting3720"
+ diffuseConstant="1"
+ result="result92">
+ <feDistantLight
+ id="feDistantLight3722"
+ elevation="66"
+ azimuth="225" />
+ </feDiffuseLighting>
+ <feBlend
+ id="feBlend3724"
+ in2="SourceGraphic"
+ mode="multiply"
+ result="result93" />
+ <feComposite
+ id="feComposite3726"
+ in2="SourceAlpha"
+ operator="in" />
+ </filter>
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3750"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3838"
+ id="radialGradient3844"
+ cx="532.67328"
+ cy="425.74258"
+ fx="532.67328"
+ fy="425.74258"
+ r="259.90594"
+ gradientTransform="matrix(0.99225464,0,0,0.13538946,-22.765338,801.65181)"
+ gradientUnits="userSpaceOnUse" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3650"
+ id="radialGradient3854"
+ cx="531.18811"
+ cy="483.1683"
+ fx="531.18811"
+ fy="483.1683"
+ r="258.42081"
+ gradientTransform="matrix(1,0,0,0.07856171,-23.920792,882.72047)"
+ gradientUnits="userSpaceOnUse" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3862"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3866"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3870"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3884"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3886"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3888"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3890"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3904"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3906"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3908"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3910"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3924"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3926"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3928"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3930"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3944"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3946"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3948"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3950"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3964"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3966"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3968"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3970"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3988"
+ id="radialGradient3974"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3988"
+ id="radialGradient3996"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3988"
+ id="radialGradient4000"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3988"
+ id="radialGradient4004"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3988"
+ id="radialGradient4024"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.07856171,-22.930693,971.82938)"
+ cx="531.18811"
+ cy="483.1683"
+ fx="531.18811"
+ fy="483.1683"
+ r="258.42081" />
+ <filter
+ id="filter4038"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4040"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4042"
+ result="bluralpha"
+ type="matrix"
+ values="-1 0 0 0 1 0 -1 0 0 1 0 0 -1 0 1 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4044"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4046">
+ <feMergeNode
+ id="feMergeNode4048"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4050"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <filter
+ id="filter4066"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4068"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4070"
+ result="bluralpha"
+ type="matrix"
+ values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4072"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4074">
+ <feMergeNode
+ id="feMergeNode4076"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4078"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient4370"
+ id="radialGradient4376"
+ cx="-0.5"
+ cy="-100.5"
+ fx="-0.5"
+ fy="-100.5"
+ r="400.5"
+ gradientTransform="matrix(0.06674414,1.4857892,-1.4966201,0.06723071,-150.87695,6.9995757)"
+ gradientUnits="userSpaceOnUse" />
+ <filter
+ id="filter4381"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4383"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4385"
+ result="bluralpha"
+ type="matrix"
+ values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4387"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4389">
+ <feMergeNode
+ id="feMergeNode4391"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4393"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <filter
+ id="filter4397"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4399"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4401"
+ result="bluralpha"
+ type="matrix"
+ values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4403"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4405">
+ <feMergeNode
+ id="feMergeNode4407"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4409"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient4411"
+ id="radialGradient4417"
+ cx="35.009148"
+ cy="295.5629"
+ fx="35.009148"
+ fy="295.5629"
+ r="178.9604"
+ gradientTransform="matrix(-0.01440824,3.0997761,-3.960971,-0.01841003,1186.567,-92.683155)"
+ gradientUnits="userSpaceOnUse" />
+ <inkscape:perspective
+ id="perspective6612"
+ inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
+ inkscape:vp_z="1 : 0.5 : 1"
+ inkscape:vp_y="0 : 1000 : 0"
+ inkscape:vp_x="0 : 0.5 : 1"
+ sodipodi:type="inkscape:persp3d" />
+ <inkscape:perspective
+ id="perspective4155"
+ inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
+ inkscape:vp_z="1 : 0.5 : 1"
+ inkscape:vp_y="0 : 1000 : 0"
+ inkscape:vp_x="0 : 0.5 : 1"
+ sodipodi:type="inkscape:persp3d" />
+ <inkscape:perspective
+ id="perspective4188"
+ inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
+ inkscape:vp_z="1 : 0.5 : 1"
+ inkscape:vp_y="0 : 1000 : 0"
+ inkscape:vp_x="0 : 0.5 : 1"
+ sodipodi:type="inkscape:persp3d" />
+ <inkscape:perspective
+ id="perspective4221"
+ inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
+ inkscape:vp_z="1 : 0.5 : 1"
+ inkscape:vp_y="0 : 1000 : 0"
+ inkscape:vp_x="0 : 0.5 : 1"
+ sodipodi:type="inkscape:persp3d" />
+ <inkscape:perspective
+ id="perspective4262"
+ inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
+ inkscape:vp_z="1 : 0.5 : 1"
+ inkscape:vp_y="0 : 1000 : 0"
+ inkscape:vp_x="0 : 0.5 : 1"
+ sodipodi:type="inkscape:persp3d" />
+ <linearGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient4270"
+ id="linearGradient4276"
+ x1="600"
+ y1="453.65854"
+ x2="63.414658"
+ y2="51.219509"
+ gradientUnits="userSpaceOnUse" />
+ <linearGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient4826"
+ id="linearGradient4832"
+ x1="864.63416"
+ y1="601.87805"
+ x2="864.63416"
+ y2="4.8780489"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.02157895,0,0,1,779.8421,450.48413)" />
+ <linearGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient4826"
+ id="linearGradient4836"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.02157895,0,0,1.3268374,-1069.5201,-1.5943392)"
+ x1="864.63416"
+ y1="601.87805"
+ x2="864.63416"
+ y2="4.8780489" />
+ </defs>
+ <sodipodi:namedview
+ id="base"
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1.0"
+ inkscape:pageopacity="0.0"
+ inkscape:pageshadow="2"
+ inkscape:zoom="0.82"
+ inkscape:cx="407.64519"
+ inkscape:cy="171.92683"
+ inkscape:document-units="px"
+ inkscape:current-layer="layer1"
+ showgrid="false"
+ inkscape:window-width="1128"
+ inkscape:window-height="934"
+ inkscape:window-x="472"
+ inkscape:window-y="39"
+ inkscape:window-maximized="0"
+ showguides="true"
+ inkscape:guide-bbox="true" />
+ <metadata
+ id="metadata7">
+ <rdf:RDF>
+ <cc:Work
+ rdf:about="">
+ <dc:format>image/svg+xml</dc:format>
+ <dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+ <dc:title />
+ </cc:Work>
+ </rdf:RDF>
+ </metadata>
+ <g
+ inkscape:label="Layer 1"
+ inkscape:groupmode="layer"
+ id="layer1"
+ transform="translate(0,-452.36218)">
+ <rect
+ style="fill:url(#linearGradient4276);fill-opacity:1;fill-rule:nonzero;stroke:#646464;stroke-width:0;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+ id="rect4268"
+ width="797"
+ height="597"
+ x="-1.2195122"
+ y="0"
+ transform="translate(0,452.36218)"
+ ry="1.0732931" />
+ <rect
+ style="fill:url(#radialGradient3844);fill-opacity:1;stroke:#000000;stroke-width:1.05799341;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+ id="rect3836"
+ width="514.79352"
+ height="69.24894"
+ x="248.38544"
+ y="824.6684"
+ ry="0.43881688" />
+ <rect
+ style="fill:url(#radialGradient3854);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+ id="rect3846"
+ width="515.84161"
+ height="39.603962"
+ x="249.34653"
+ y="900.87708"
+ ry="0.38899186" />
+ <rect
+ transform="matrix(0.43829706,0,0,0.49424167,204.16871,580.07095)"
+ ry="12.871287"
+ y="79.207924"
+ x="96.039604"
+ height="72.277229"
+ width="285.14853"
+ id="rect3872"
+ style="fill:url(#radialGradient3884);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)" />
+ <rect
+ style="fill:url(#radialGradient3886);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)"
+ id="rect3878"
+ width="285.14853"
+ height="72.277229"
+ x="96.039604"
+ y="79.207924"
+ ry="12.871287"
+ transform="matrix(0.43829706,0,0,0.49424167,334.86178,579.08085)" />
+ <rect
+ transform="matrix(0.43829706,0,0,0.49424167,465.55485,578.09075)"
+ ry="12.871287"
+ y="79.207924"
+ x="96.039604"
+ height="72.277229"
+ width="285.14853"
+ id="rect3880"
+ style="fill:url(#radialGradient3888);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)" />
+ <rect
+ style="fill:url(#radialGradient3890);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)"
+ id="rect3882"
+ width="285.14853"
+ height="72.277229"
+ x="96.039604"
+ y="79.207924"
+ ry="12.871287"
+ transform="matrix(0.43829706,0,0,0.49424167,596.24792,578.09075)" />
+ <rect
+ style="fill:url(#radialGradient3904);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)"
+ id="rect3892"
+ width="285.14853"
+ height="72.277229"
+ x="96.039604"
+ y="79.207924"
+ ry="12.871287"
+ transform="matrix(0.43829706,0,0,0.49424167,204.16871,621.65511)" />
+ <rect
+ transform="matrix(0.43829706,0,0,0.49424167,334.86178,620.66501)"
+ ry="12.871287"
+ y="79.207924"
+ x="96.039604"
+ height="72.277229"
+ width="285.14853"
+ id="rect3898"
+ style="fill:url(#radialGradient3906);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)" />
+ <rect
+ style="fill:url(#radialGradient3908);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)"
+ id="rect3900"
+ width="285.14853"
+ height="72.277229"
+ x="96.039604"
+ y="79.207924"
+ ry="12.871287"
+ transform="matrix(0.43829706,0,0,0.49424167,465.55485,619.67491)" />
+ <rect
+ transform="matrix(0.43829706,0,0,0.49424167,596.24792,619.67491)"
+ ry="12.871287"
+ y="79.207924"
+ x="96.039604"
+ height="72.277229"
+ width="285.14853"
+ id="rect3902"
+ style="fill:url(#radialGradient3910);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)" />
+ <rect
+ transform="matrix(0.43829706,0,0,0.49424167,205.15881,663.23927)"
+ ry="12.871287"
+ y="79.207924"
+ x="96.039604"
+ height="72.277229"
+ width="285.14853"
+ id="rect3912"
+ style="fill:url(#radialGradient3924);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)" />
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="274.099"
+ y="725.62958"
+ id="text3856"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan3858"
+ x="274.099"
+ y="725.62958"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">D'base</tspan></text>
+ <rect
+ style="fill:url(#radialGradient3926);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)"
+ id="rect3918"
+ width="285.14853"
+ height="72.277229"
+ x="96.039604"
+ y="79.207924"
+ ry="12.871287"
+ transform="matrix(0.43829706,0,0,0.49424167,335.85188,662.24917)" />
+ <rect
+ transform="matrix(0.43829706,0,0,0.49424167,466.54495,661.25907)"
+ ry="12.871287"
+ y="79.207924"
+ x="96.039604"
+ height="72.277229"
+ width="285.14853"
+ id="rect3920"
+ style="fill:url(#radialGradient3928);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)" />
+ <rect
+ style="fill:url(#radialGradient3930);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)"
+ id="rect3922"
+ width="285.14853"
+ height="72.277229"
+ x="96.039604"
+ y="79.207924"
+ ry="12.871287"
+ transform="matrix(0.43829706,0,0,0.49424167,597.23802,661.25907)" />
+ <rect
+ style="fill:url(#radialGradient3944);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)"
+ id="rect3932"
+ width="285.14853"
+ height="72.277229"
+ x="96.039604"
+ y="79.207924"
+ ry="12.871287"
+ transform="matrix(0.43829706,0,0,0.49424167,205.15881,703.83333)" />
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="260.23764"
+ y="767.21375"
+ id="text3934"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan3936"
+ x="260.23764"
+ y="767.21375"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Web Site</tspan></text>
+ <rect
+ transform="matrix(0.43829706,0,0,0.49424167,335.85188,702.84323)"
+ ry="12.871287"
+ y="79.207924"
+ x="96.039604"
+ height="72.277229"
+ width="285.14853"
+ id="rect3938"
+ style="fill:url(#radialGradient3946);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)" />
+ <text
+ sodipodi:linespacing="100%"
+ id="text3914"
+ y="765.23358"
+ x="392.91092"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="765.23358"
+ x="392.91092"
+ id="tspan3916"
+ sodipodi:role="line">Web Site</tspan></text>
+ <rect
+ style="fill:url(#radialGradient3948);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)"
+ id="rect3940"
+ width="285.14853"
+ height="72.277229"
+ x="96.039604"
+ y="79.207924"
+ ry="12.871287"
+ transform="matrix(0.43829706,0,0,0.49424167,466.54495,701.85313)" />
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="523.60394"
+ y="764.24347"
+ id="text3894"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan3896"
+ x="523.60394"
+ y="764.24347"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Web Site</tspan></text>
+ <rect
+ transform="matrix(0.43829706,0,0,0.49424167,597.23802,701.85313)"
+ ry="12.871287"
+ y="79.207924"
+ x="96.039604"
+ height="72.277229"
+ width="285.14853"
+ id="rect3942"
+ style="fill:url(#radialGradient3950);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)" />
+ <text
+ sodipodi:linespacing="100%"
+ id="text3874"
+ y="764.24341"
+ x="653.30695"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="764.24341"
+ x="653.30695"
+ id="tspan3876"
+ sodipodi:role="line">Web Site</tspan></text>
+ <rect
+ transform="matrix(0.43829706,0,0,0.49424167,205.15881,745.41749)"
+ ry="12.871287"
+ y="79.207924"
+ x="96.039604"
+ height="72.277229"
+ width="285.14853"
+ id="rect3952"
+ style="fill:url(#radialGradient3964);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)" />
+ <text
+ sodipodi:linespacing="100%"
+ id="text3954"
+ y="809.78802"
+ x="281.02972"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="809.78802"
+ x="281.02972"
+ id="tspan3956"
+ sodipodi:role="line">GFS2</tspan></text>
+ <rect
+ style="fill:url(#radialGradient3966);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)"
+ id="rect3958"
+ width="285.14853"
+ height="72.277229"
+ x="96.039604"
+ y="79.207924"
+ ry="12.871287"
+ transform="matrix(0.43829706,0,0,0.49424167,335.85188,744.42739)" />
+ <rect
+ transform="matrix(0.43829706,0,0,0.49424167,466.54495,743.43729)"
+ ry="12.871287"
+ y="79.207924"
+ x="96.039604"
+ height="72.277229"
+ width="285.14853"
+ id="rect3960"
+ style="fill:url(#radialGradient3968);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)" />
+ <rect
+ style="fill:url(#radialGradient3970);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)"
+ id="rect3962"
+ width="285.14853"
+ height="72.277229"
+ x="96.039604"
+ y="79.207924"
+ ry="12.871287"
+ transform="matrix(0.43829706,0,0,0.49424167,597.23802,743.43729)" />
+ <rect
+ transform="matrix(0.43829706,0,0,0.49424167,208.12911,908.78383)"
+ ry="12.871287"
+ y="79.207924"
+ x="96.039604"
+ height="72.277229"
+ width="285.14853"
+ id="rect3972"
+ style="fill:url(#radialGradient3974);fill-opacity:1;stroke:#000000;stroke-width:2.14855289;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)" />
+ <rect
+ style="fill:url(#radialGradient3996);fill-opacity:1;stroke:#000000;stroke-width:2.14855289;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)"
+ id="rect3994"
+ width="285.14853"
+ height="72.277229"
+ x="96.039604"
+ y="79.207924"
+ ry="12.871287"
+ transform="matrix(0.43829706,0,0,0.49424167,338.82218,907.79373)" />
+ <rect
+ transform="matrix(0.43829706,0,0,0.49424167,470.50535,907.79373)"
+ ry="12.871287"
+ y="79.207924"
+ x="96.039604"
+ height="72.277229"
+ width="285.14853"
+ id="rect3998"
+ style="fill:url(#radialGradient4000);fill-opacity:1;stroke:#000000;stroke-width:2.14855289;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)" />
+ <rect
+ style="fill:url(#radialGradient4004);fill-opacity:1;stroke:#000000;stroke-width:2.14855289;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)"
+ id="rect4002"
+ width="285.14853"
+ height="72.277229"
+ x="96.039604"
+ y="79.207924"
+ ry="12.871287"
+ transform="matrix(0.43829706,0,0,0.49424167,600.20832,907.79373)" />
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="284"
+ y="971.17413"
+ id="text4006"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan4008"
+ x="284"
+ y="971.17413"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Host</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text4010"
+ y="970.18402"
+ x="415.68317"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="970.18402"
+ x="415.68317"
+ id="tspan4012"
+ sodipodi:role="line">Host</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="548.35645"
+ y="970.18402"
+ id="text4014"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan4016"
+ x="548.35645"
+ y="970.18402"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Host</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text4018"
+ y="970.18402"
+ x="679.0495"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="970.18402"
+ x="679.0495"
+ id="tspan4020"
+ sodipodi:role="line">Host</tspan></text>
+ <rect
+ ry="0.38899186"
+ y="989.98596"
+ x="250.33664"
+ height="39.603962"
+ width="515.84161"
+ id="rect4022"
+ style="fill:url(#radialGradient4024);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="419.64355"
+ y="1016.7187"
+ id="text4026"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan4028"
+ x="419.64355"
+ y="1016.7187"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Shared Storage</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text4030"
+ y="926.61969"
+ x="437.46533"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;filter:url(#filter4066);font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:20px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="926.61969"
+ x="437.46533"
+ id="tspan4032"
+ sodipodi:role="line">CoroSync</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#ffffff;fill-opacity:1;stroke:none;filter:url(#filter4038);font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="505.78217"
+ y="870.18402"
+ id="text4034"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan4036"
+ x="505.78217"
+ y="870.18402"
+ style="font-size:32px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:center;line-height:100%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Pacemaker</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;filter:url(#filter4066);font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="142.57423"
+ y="524.63947"
+ id="text4080"
+ sodipodi:linespacing="100%"
+ transform="matrix(1.0227984,0,0,1,-0.46159388,0)"><tspan
+ sodipodi:role="line"
+ id="tspan4082"
+ x="142.57423"
+ y="524.63947"
+ style="font-size:48px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Active / Active</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text4084"
+ y="989.98602"
+ x="43.564346"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:20px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="989.98602"
+ x="43.564346"
+ id="tspan4086"
+ sodipodi:role="line">Hardware</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="120.79207"
+ y="886.02557"
+ id="text4088"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan4090"
+ x="120.79207"
+ y="886.02557"
+ style="font-size:20px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:center;line-height:100%;writing-mode:lr-tb;text-anchor:middle;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Cluster</tspan><tspan
+ sodipodi:role="line"
+ x="120.79207"
+ y="906.02557"
+ style="font-size:20px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:center;line-height:100%;writing-mode:lr-tb;text-anchor:middle;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ id="tspan4092">Software</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text4094"
+ y="726.61957"
+ x="119.80197"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ id="tspan4098"
+ style="font-size:20px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:center;line-height:100%;writing-mode:lr-tb;text-anchor:middle;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="726.61957"
+ x="119.80197"
+ sodipodi:role="line">Services</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="414.69308"
+ y="807.8078"
+ id="text5801"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan5803"
+ x="414.69308"
+ y="807.8078"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">GFS2</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text5805"
+ y="806.81769"
+ x="543.40594"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="806.81769"
+ x="543.40594"
+ id="tspan5807"
+ sodipodi:role="line">GFS2</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="675.08911"
+ y="805.82758"
+ id="text5809"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan5811"
+ x="675.08911"
+ y="805.82758"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">GFS2</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text5813"
+ y="644.44147"
+ x="288.95047"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="644.44147"
+ x="288.95047"
+ id="tspan5815"
+ sodipodi:role="line">URL</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="286.97028"
+ y="685.03552"
+ id="text5817"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan5819"
+ x="286.97028"
+ y="685.03552"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Mail</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text5821"
+ y="724.63947"
+ x="404.79205"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="724.63947"
+ x="404.79205"
+ id="tspan5823"
+ sodipodi:role="line">D'base</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="419.64352"
+ y="643.45135"
+ id="text5825"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan5827"
+ x="419.64352"
+ y="643.45135"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">URL</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text5829"
+ y="684.04541"
+ x="417.66333"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="684.04541"
+ x="417.66333"
+ id="tspan5831"
+ sodipodi:role="line">Mail</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="535.48511"
+ y="723.64935"
+ id="text5833"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan5835"
+ x="535.48511"
+ y="723.64935"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">D'base</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text5837"
+ y="642.46124"
+ x="550.33661"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="642.46124"
+ x="550.33661"
+ id="tspan5839"
+ sodipodi:role="line">URL</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="548.35638"
+ y="683.0553"
+ id="text5841"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan5843"
+ x="548.35638"
+ y="683.0553"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Mail</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text5845"
+ y="723.64935"
+ x="663.20789"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="723.64935"
+ x="663.20789"
+ id="tspan5847"
+ sodipodi:role="line">D'base</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="678.05939"
+ y="642.46124"
+ id="text5849"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan5851"
+ x="678.05939"
+ y="642.46124"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">URL</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text5853"
+ y="683.0553"
+ x="676.07916"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="683.0553"
+ x="676.07916"
+ id="tspan5855"
+ sodipodi:role="line">Mail</tspan></text>
+ <rect
+ style="fill:url(#linearGradient4832);fill-opacity:1;fill-rule:nonzero;stroke:#646464;stroke-width:0;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+ id="rect4824"
+ width="3"
+ height="597"
+ x="797"
+ y="455.36218" />
+ <rect
+ y="4.8780489"
+ x="-1052.3622"
+ height="792.12195"
+ width="3"
+ id="rect4834"
+ style="fill:url(#linearGradient4836);fill-opacity:1;fill-rule:nonzero;stroke:#646464;stroke-width:0;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+ transform="matrix(0,-1,1,0,0,0)" />
+ </g>
+</svg>
diff --git a/doc/sphinx/shared/images/pcmk-active-passive.svg b/doc/sphinx/shared/images/pcmk-active-passive.svg
new file mode 100644
index 0000000..3c61078
--- /dev/null
+++ b/doc/sphinx/shared/images/pcmk-active-passive.svg
@@ -0,0 +1,1027 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- Created with Inkscape (http://www.inkscape.org/) -->
+
+<svg
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:xlink="http://www.w3.org/1999/xlink"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="800"
+ height="600"
+ id="svg2"
+ version="1.1"
+ inkscape:version="0.47 r22583"
+ sodipodi:docname="pcmk-active-passive.svg"
+ inkscape:export-filename="/Users/beekhof/Dropbox/Public/pcmk-active-passive.png"
+ inkscape:export-xdpi="90"
+ inkscape:export-ydpi="90">
+ <defs
+ id="defs4">
+ <marker
+ inkscape:stockid="Arrow1Mend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow1Mend"
+ style="overflow:visible;">
+ <path
+ id="path4652"
+ d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1.0pt;marker-start:none;"
+ transform="scale(0.4) rotate(180) translate(10,0)" />
+ </marker>
+ <linearGradient
+ id="linearGradient4616">
+ <stop
+ style="stop-color:#808080;stop-opacity:0.75;"
+ offset="0"
+ id="stop4618" />
+ <stop
+ style="stop-color:#bfbfbf;stop-opacity:0.5;"
+ offset="1"
+ id="stop4620" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient4606">
+ <stop
+ style="stop-color:#000000;stop-opacity:0.58536583;"
+ offset="0"
+ id="stop4608" />
+ <stop
+ style="stop-color:#000000;stop-opacity:0.08130081;"
+ offset="1"
+ id="stop4610" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient4411">
+ <stop
+ style="stop-color:#f3f3f3;stop-opacity:0;"
+ offset="0"
+ id="stop4413" />
+ <stop
+ style="stop-color:#e6e6e6;stop-opacity:0.21138212;"
+ offset="1"
+ id="stop4415" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient4370">
+ <stop
+ style="stop-color:#ffffff;stop-opacity:1;"
+ offset="0"
+ id="stop4372" />
+ <stop
+ style="stop-color:#f7f7f7;stop-opacity:0.69918698;"
+ offset="1"
+ id="stop4374" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient3988">
+ <stop
+ id="stop3990"
+ offset="0"
+ style="stop-color:#d3e219;stop-opacity:1;" />
+ <stop
+ id="stop3992"
+ offset="1"
+ style="stop-color:#e8a411;stop-opacity:1;" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient3838">
+ <stop
+ style="stop-color:#6badf2;stop-opacity:1;"
+ offset="0"
+ id="stop3840" />
+ <stop
+ style="stop-color:#2e447f;stop-opacity:1;"
+ offset="1"
+ id="stop3842" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient3658">
+ <stop
+ style="stop-color:#19e229;stop-opacity:1;"
+ offset="0"
+ id="stop3660" />
+ <stop
+ style="stop-color:#589b56;stop-opacity:1;"
+ offset="1"
+ id="stop3662" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient3650">
+ <stop
+ style="stop-color:#f36d6d;stop-opacity:1;"
+ offset="0"
+ id="stop3652" />
+ <stop
+ style="stop-color:#b81313;stop-opacity:1;"
+ offset="1"
+ id="stop3654" />
+ </linearGradient>
+ <inkscape:perspective
+ sodipodi:type="inkscape:persp3d"
+ inkscape:vp_x="0 : 526.18109 : 1"
+ inkscape:vp_y="0 : 1000 : 0"
+ inkscape:vp_z="744.09448 : 526.18109 : 1"
+ inkscape:persp3d-origin="372.04724 : 350.78739 : 1"
+ id="perspective10" />
+ <filter
+ id="filter3712"
+ inkscape:label="Ridged border"
+ inkscape:menu="Bevels"
+ inkscape:menu-tooltip="Ridged border with inner bevel"
+ color-interpolation-filters="sRGB">
+ <feMorphology
+ id="feMorphology3714"
+ radius="4.3"
+ in="SourceAlpha"
+ result="result91" />
+ <feComposite
+ id="feComposite3716"
+ in2="result91"
+ operator="out"
+ in="SourceGraphic" />
+ <feGaussianBlur
+ id="feGaussianBlur3718"
+ result="result0"
+ stdDeviation="1.2" />
+ <feDiffuseLighting
+ id="feDiffuseLighting3720"
+ diffuseConstant="1"
+ result="result92">
+ <feDistantLight
+ id="feDistantLight3722"
+ elevation="66"
+ azimuth="225" />
+ </feDiffuseLighting>
+ <feBlend
+ id="feBlend3724"
+ in2="SourceGraphic"
+ mode="multiply"
+ result="result93" />
+ <feComposite
+ id="feComposite3726"
+ in2="SourceAlpha"
+ operator="in" />
+ </filter>
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3838"
+ id="radialGradient3844"
+ cx="532.67328"
+ cy="425.74258"
+ fx="532.67328"
+ fy="425.74258"
+ r="259.90594"
+ gradientTransform="matrix(0.99225464,0,0,0.13538946,-22.765338,801.65181)"
+ gradientUnits="userSpaceOnUse" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3650"
+ id="radialGradient3854"
+ cx="531.18811"
+ cy="483.1683"
+ fx="531.18811"
+ fy="483.1683"
+ r="258.42081"
+ gradientTransform="matrix(1,0,0,0.07856171,-23.920792,882.72047)"
+ gradientUnits="userSpaceOnUse" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3884"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3886"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3904"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3926"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3930"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3944"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3964"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3968"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3988"
+ id="radialGradient3974"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3988"
+ id="radialGradient3996"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3988"
+ id="radialGradient4000"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3988"
+ id="radialGradient4004"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <filter
+ id="filter4038"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4040"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4042"
+ result="bluralpha"
+ type="matrix"
+ values="-1 0 0 0 1 0 -1 0 0 1 0 0 -1 0 1 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4044"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4046">
+ <feMergeNode
+ id="feMergeNode4048"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4050"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <filter
+ id="filter4066"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4068"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4070"
+ result="bluralpha"
+ type="matrix"
+ values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4072"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4074">
+ <feMergeNode
+ id="feMergeNode4076"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4078"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient4370"
+ id="radialGradient4376"
+ cx="-0.5"
+ cy="-100.5"
+ fx="-0.5"
+ fy="-100.5"
+ r="400.5"
+ gradientTransform="matrix(0.06674414,1.4857892,-1.4966201,0.06723071,-150.87695,6.9995757)"
+ gradientUnits="userSpaceOnUse" />
+ <filter
+ id="filter4381"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4383"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4385"
+ result="bluralpha"
+ type="matrix"
+ values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4387"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4389">
+ <feMergeNode
+ id="feMergeNode4391"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4393"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <filter
+ id="filter4397"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4399"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4401"
+ result="bluralpha"
+ type="matrix"
+ values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4403"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4405">
+ <feMergeNode
+ id="feMergeNode4407"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4409"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <inkscape:perspective
+ id="perspective4466"
+ inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
+ inkscape:vp_z="1 : 0.5 : 1"
+ inkscape:vp_y="0 : 1000 : 0"
+ inkscape:vp_x="0 : 0.5 : 1"
+ sodipodi:type="inkscape:persp3d" />
+ <filter
+ id="filter4508"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4510"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4512"
+ result="bluralpha"
+ type="matrix"
+ values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4514"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4516">
+ <feMergeNode
+ id="feMergeNode4518"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4520"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <filter
+ id="filter4592"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4594"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4596"
+ result="bluralpha"
+ type="matrix"
+ values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4598"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4600">
+ <feMergeNode
+ id="feMergeNode4602"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4604"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <linearGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient4606"
+ id="linearGradient4622"
+ x1="906.94769"
+ y1="-7.3383088"
+ x2="906.94769"
+ y2="-172.97601"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.23092554,0,0,0.7849298,593.37513,596.7001)" />
+ <linearGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient4606"
+ id="linearGradient4626"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.23092554,0,0,1.0521382,-1255.8822,187.84807)"
+ x1="906.94769"
+ y1="-7.3383088"
+ x2="906.94769"
+ y2="-172.97601" />
+ <linearGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient4616"
+ id="linearGradient4636"
+ x1="102.24117"
+ y1="386.07532"
+ x2="-256.56793"
+ y2="98.293198"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1.4992949,0,0,1.4260558,436.2333,350.79316)" />
+ <linearGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="linearGradient5088"
+ x1="514.39581"
+ y1="714.75159"
+ x2="679.29962"
+ y2="715.97925"
+ gradientUnits="userSpaceOnUse" />
+ </defs>
+ <sodipodi:namedview
+ id="base"
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1.0"
+ inkscape:pageopacity="0.0"
+ inkscape:pageshadow="2"
+ inkscape:zoom="0.81454783"
+ inkscape:cx="377.54841"
+ inkscape:cy="264.44713"
+ inkscape:document-units="px"
+ inkscape:current-layer="layer1"
+ showgrid="false"
+ inkscape:window-width="1220"
+ inkscape:window-height="905"
+ inkscape:window-x="454"
+ inkscape:window-y="108"
+ inkscape:window-maximized="0"
+ showguides="true"
+ inkscape:guide-bbox="true" />
+ <metadata
+ id="metadata7">
+ <rdf:RDF>
+ <cc:Work
+ rdf:about="">
+ <dc:format>image/svg+xml</dc:format>
+ <dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+ <dc:title />
+ </cc:Work>
+ </rdf:RDF>
+ </metadata>
+ <g
+ inkscape:label="Layer 1"
+ inkscape:groupmode="layer"
+ id="layer1"
+ transform="translate(0,-452.36218)">
+ <rect
+ style="fill:url(#linearGradient4636);fill-opacity:1.0;fill-rule:nonzero;stroke:#000000;stroke-width:0;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+ id="rect4628"
+ width="797"
+ height="597"
+ x="-1.2012364e-05"
+ y="455.36218"
+ ry="1.0732931" />
+ <rect
+ style="fill:url(#radialGradient3844);fill-opacity:1;stroke:#000000;stroke-width:1.05799341;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+ id="rect3836"
+ width="514.79352"
+ height="69.24894"
+ x="248.38544"
+ y="824.6684"
+ ry="0.43881688" />
+ <rect
+ style="fill:url(#radialGradient3854);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+ id="rect3846"
+ width="515.84161"
+ height="39.603962"
+ x="249.34653"
+ y="900.87708"
+ ry="0.38899186" />
+ <rect
+ transform="matrix(0.43829706,0,0,0.49424167,204.16871,580.07095)"
+ ry="12.871287"
+ y="79.207924"
+ x="96.039604"
+ height="72.277229"
+ width="285.14853"
+ id="rect3872"
+ style="fill:url(#radialGradient3884);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)" />
+ <text
+ sodipodi:linespacing="100%"
+ id="text3874"
+ y="643.45135"
+ x="284.95572"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="643.45135"
+ x="284.95572"
+ id="tspan3876"
+ sodipodi:role="line">URL</tspan></text>
+ <rect
+ style="fill:url(#radialGradient3886);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)"
+ id="rect3878"
+ width="285.14853"
+ height="72.277229"
+ x="96.039604"
+ y="79.207924"
+ ry="12.871287"
+ transform="matrix(0.43829706,0,0,0.49424167,334.86178,579.08085)" />
+ <rect
+ style="fill:url(#radialGradient3904);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)"
+ id="rect3892"
+ width="285.14853"
+ height="72.277229"
+ x="96.039604"
+ y="79.207924"
+ ry="12.871287"
+ transform="matrix(0.43829706,0,0,0.49424167,204.16871,621.65511)" />
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="259.24753"
+ y="685.03552"
+ id="text3894"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan3896"
+ x="259.24753"
+ y="685.03552"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Web Site</tspan></text>
+ <rect
+ style="fill:url(#radialGradient3926);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)"
+ id="rect3918"
+ width="285.14853"
+ height="72.277229"
+ x="96.039604"
+ y="79.207924"
+ ry="12.871287"
+ transform="matrix(0.43829706,0,0,0.49424167,335.85188,662.24917)" />
+ <rect
+ style="fill:url(#radialGradient3930);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)"
+ id="rect3922"
+ width="285.14853"
+ height="72.277229"
+ x="96.039604"
+ y="79.207924"
+ ry="12.871287"
+ transform="matrix(0.43829706,0,0,0.49424167,597.23802,661.25907)" />
+ <rect
+ style="fill:url(#radialGradient3944);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)"
+ id="rect3932"
+ width="285.14853"
+ height="72.277229"
+ x="96.039604"
+ y="79.207924"
+ ry="12.871287"
+ transform="matrix(0.43829706,0,0,0.49424167,205.15881,703.83333)" />
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="279.67935"
+ y="767.21375"
+ id="text3934"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan3936"
+ x="279.67935"
+ y="767.21375"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Files</tspan></text>
+ <rect
+ transform="matrix(0.43829706,0,0,0.49424167,205.15881,745.41749)"
+ ry="12.871287"
+ y="79.207924"
+ x="96.039604"
+ height="72.277229"
+ width="285.14853"
+ id="rect3952"
+ style="fill:url(#radialGradient3964);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)" />
+ <text
+ sodipodi:linespacing="100%"
+ id="text3954"
+ y="808.79791"
+ x="259.65472"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="808.79791"
+ x="259.65472"
+ id="tspan3956"
+ sodipodi:role="line"> DRBD</tspan></text>
+ <rect
+ transform="matrix(0.43829706,0,0,0.49424167,466.54495,743.43729)"
+ ry="12.871287"
+ y="79.207924"
+ x="96.039604"
+ height="72.277229"
+ width="285.14853"
+ id="rect3960"
+ style="fill:url(#radialGradient3968);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)" />
+ <rect
+ transform="matrix(0.43829706,0,0,0.49424167,208.12911,908.78383)"
+ ry="12.871287"
+ y="79.207924"
+ x="96.039604"
+ height="72.277229"
+ width="285.14853"
+ id="rect3972"
+ style="fill:url(#radialGradient3974);fill-opacity:1;stroke:#000000;stroke-width:2.14855289;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)" />
+ <rect
+ style="fill:url(#radialGradient3996);fill-opacity:1;stroke:#000000;stroke-width:2.14855289;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)"
+ id="rect3994"
+ width="285.14853"
+ height="72.277229"
+ x="96.039604"
+ y="79.207924"
+ ry="12.871287"
+ transform="matrix(0.43829706,0,0,0.49424167,338.82218,907.79373)" />
+ <rect
+ transform="matrix(0.43829706,0,0,0.49424167,470.50535,907.79373)"
+ ry="12.871287"
+ y="79.207924"
+ x="96.039604"
+ height="72.277229"
+ width="285.14853"
+ id="rect3998"
+ style="fill:url(#radialGradient4000);fill-opacity:1;stroke:#000000;stroke-width:2.14855289;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)" />
+ <rect
+ style="fill:url(#radialGradient4004);fill-opacity:1;stroke:#000000;stroke-width:2.14855289;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)"
+ id="rect4002"
+ width="285.14853"
+ height="72.277229"
+ x="96.039604"
+ y="79.207924"
+ ry="12.871287"
+ transform="matrix(0.43829706,0,0,0.49424167,600.20832,907.79373)" />
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="284"
+ y="971.17413"
+ id="text4006"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan4008"
+ x="284"
+ y="971.17413"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Host</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text4010"
+ y="970.18402"
+ x="415.68317"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="970.18402"
+ x="415.68317"
+ id="tspan4012"
+ sodipodi:role="line">Host</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="548.35645"
+ y="970.18402"
+ id="text4014"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan4016"
+ x="548.35645"
+ y="970.18402"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Host</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text4018"
+ y="970.18402"
+ x="679.0495"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="970.18402"
+ x="679.0495"
+ id="tspan4020"
+ sodipodi:role="line">Host</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text4030"
+ y="926.61969"
+ x="437.46533"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;filter:url(#filter4066);font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:20px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="926.61969"
+ x="437.46533"
+ id="tspan4032"
+ sodipodi:role="line">CoroSync</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#ffffff;fill-opacity:1;stroke:none;filter:url(#filter4038);font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="505.78217"
+ y="870.18402"
+ id="text4034"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan4036"
+ x="505.78217"
+ y="870.18402"
+ style="font-size:32px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:center;line-height:100%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Pacemaker</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;filter:url(#filter4066);font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="142.57423"
+ y="524.63947"
+ id="text4080"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan4082"
+ x="142.57423"
+ y="524.63947"
+ style="font-size:48px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Active / Passive</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text4084"
+ y="970.9707"
+ x="43.591743"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:20px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="970.9707"
+ x="43.591743"
+ id="tspan4086"
+ sodipodi:role="line">Hardware</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="117.54487"
+ y="886.02557"
+ id="text4088"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan4090"
+ x="117.54487"
+ y="886.02557"
+ style="font-size:20px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:center;line-height:100%;writing-mode:lr-tb;text-anchor:middle;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Cluster</tspan><tspan
+ sodipodi:role="line"
+ x="117.54487"
+ y="906.02557"
+ style="font-size:20px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:center;line-height:100%;writing-mode:lr-tb;text-anchor:middle;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ id="tspan4092">Software</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text4094"
+ y="727.43585"
+ x="117.48581"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ id="tspan4098"
+ style="font-size:20px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:center;line-height:100%;writing-mode:lr-tb;text-anchor:middle;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="727.43585"
+ x="117.48581"
+ sodipodi:role="line">Services</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="416.41132"
+ y="642.58704"
+ id="text4472"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan4474"
+ x="416.41132"
+ y="642.58704"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">URL</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="657.25055"
+ y="725.92633"
+ id="text4480"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan4482"
+ x="657.25055"
+ y="725.92633"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">D/base</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text4484"
+ y="805.94604"
+ x="521.78308"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="805.94604"
+ x="521.78308"
+ id="tspan4486"
+ sodipodi:role="line"> DRBD</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text4488"
+ y="725.56299"
+ x="397.08615"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="725.56299"
+ x="397.08615"
+ id="tspan4490"
+ sodipodi:role="line">D/base</tspan></text>
+ <rect
+ style="fill:url(#linearGradient4622);fill-opacity:1.0;fill-rule:nonzero;stroke:#000000;stroke-width:0;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+ id="rect4614"
+ width="3"
+ height="591.4361"
+ x="797"
+ y="460.92606"
+ ry="0.59076202" />
+ <rect
+ ry="0.79187125"
+ y="5.8533502"
+ x="-1052.2572"
+ height="792.77484"
+ width="3"
+ id="rect4624"
+ style="fill:url(#linearGradient4626);fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+ transform="matrix(0,-1,1,0,0,0)" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ d="m 372.23245,801.95315 136.40638,-1.03339"
+ id="path5090"
+ inkscape:connector-type="polyline"
+ inkscape:connection-start="#rect3952"
+ inkscape:connection-end="#rect3960" />
+ <text
+ sodipodi:linespacing="100%"
+ id="text5646"
+ y="710.74072"
+ x="539.94647"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:12px;font-style:italic;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="710.74072"
+ x="539.94647"
+ id="tspan5648"
+ sodipodi:role="line">Synch</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="413.49594"
+ y="794.2226"
+ id="text5650"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan5652"
+ x="413.49594"
+ y="794.2226"
+ style="font-size:12px;font-style:italic;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Synch</tspan></text>
+ <path
+ style="fill:none;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;marker-start:none;marker-end:url(#Arrow1Mend)"
+ d="M 502.92552,719.02153 639.3319,718.50484"
+ id="path5840"
+ inkscape:connector-type="polyline"
+ inkscape:connection-start="#rect3918"
+ inkscape:connection-end="#rect3922" />
+ </g>
+</svg>
diff --git a/doc/sphinx/shared/images/pcmk-colocated-sets.svg b/doc/sphinx/shared/images/pcmk-colocated-sets.svg
new file mode 100644
index 0000000..9e53fc4
--- /dev/null
+++ b/doc/sphinx/shared/images/pcmk-colocated-sets.svg
@@ -0,0 +1,436 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- Created with Inkscape (http://www.inkscape.org/) -->
+
+<svg
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="1024"
+ height="460"
+ viewBox="0 0 270.93333 121.70833"
+ version="1.1"
+ id="svg8"
+ inkscape:version="0.92.2 (5c3e80d, 2017-08-06)"
+ sodipodi:docname="pcmk-colocated-sets.svg">
+ <defs
+ id="defs2">
+ <marker
+ inkscape:stockid="Arrow1Send"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow1Send"
+ style="overflow:visible;"
+ inkscape:isstock="true">
+ <path
+ id="path4652"
+ d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
+ transform="scale(0.2) rotate(180) translate(6,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow1Send"
+ orient="auto"
+ refY="0"
+ refX="0"
+ id="Arrow1Send-2"
+ style="overflow:visible"
+ inkscape:isstock="true">
+ <path
+ inkscape:connector-curvature="0"
+ id="path4652-3"
+ d="M 0,0 5,-5 -12.5,0 5,5 Z"
+ style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;stroke-opacity:1"
+ transform="matrix(-0.2,0,0,-0.2,-1.2,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow1Send"
+ orient="auto"
+ refY="0"
+ refX="0"
+ id="Arrow1Send-8"
+ style="overflow:visible"
+ inkscape:isstock="true">
+ <path
+ inkscape:connector-curvature="0"
+ id="path4652-1"
+ d="M 0,0 5,-5 -12.5,0 5,5 Z"
+ style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;stroke-opacity:1"
+ transform="matrix(-0.2,0,0,-0.2,-1.2,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow1Send"
+ orient="auto"
+ refY="0"
+ refX="0"
+ id="Arrow1Send-8-8"
+ style="overflow:visible"
+ inkscape:isstock="true">
+ <path
+ inkscape:connector-curvature="0"
+ id="path4652-1-4"
+ d="M 0,0 5,-5 -12.5,0 5,5 Z"
+ style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;stroke-opacity:1"
+ transform="matrix(-0.2,0,0,-0.2,-1.2,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow1Send"
+ orient="auto"
+ refY="0"
+ refX="0"
+ id="Arrow1Send-8-0"
+ style="overflow:visible"
+ inkscape:isstock="true">
+ <path
+ inkscape:connector-curvature="0"
+ id="path4652-1-2"
+ d="M 0,0 5,-5 -12.5,0 5,5 Z"
+ style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;stroke-opacity:1"
+ transform="matrix(-0.2,0,0,-0.2,-1.2,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow1Send"
+ orient="auto"
+ refY="0"
+ refX="0"
+ id="Arrow1Send-8-0-5"
+ style="overflow:visible"
+ inkscape:isstock="true">
+ <path
+ inkscape:connector-curvature="0"
+ id="path4652-1-2-5"
+ d="M 0,0 5,-5 -12.5,0 5,5 Z"
+ style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;stroke-opacity:1"
+ transform="matrix(-0.2,0,0,-0.2,-1.2,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow1Send"
+ orient="auto"
+ refY="0"
+ refX="0"
+ id="Arrow1Send-8-0-4"
+ style="overflow:visible"
+ inkscape:isstock="true">
+ <path
+ inkscape:connector-curvature="0"
+ id="path4652-1-2-7"
+ d="M 0,0 5,-5 -12.5,0 5,5 Z"
+ style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;stroke-opacity:1"
+ transform="matrix(-0.2,0,0,-0.2,-1.2,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow1Send"
+ orient="auto"
+ refY="0"
+ refX="0"
+ id="Arrow1Send-8-0-4-9"
+ style="overflow:visible"
+ inkscape:isstock="true">
+ <path
+ inkscape:connector-curvature="0"
+ id="path4652-1-2-7-9"
+ d="M 0,0 5,-5 -12.5,0 5,5 Z"
+ style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;stroke-opacity:1"
+ transform="matrix(-0.2,0,0,-0.2,-1.2,0)" />
+ </marker>
+ </defs>
+ <sodipodi:namedview
+ id="base"
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1.0"
+ inkscape:pageopacity="0.0"
+ inkscape:pageshadow="2"
+ inkscape:zoom="1"
+ inkscape:cx="384.04119"
+ inkscape:cy="148.45137"
+ inkscape:document-units="px"
+ inkscape:current-layer="layer1"
+ showgrid="false"
+ units="px"
+ scale-x="1"
+ inkscape:window-width="1920"
+ inkscape:window-height="981"
+ inkscape:window-x="0"
+ inkscape:window-y="28"
+ inkscape:window-maximized="1" />
+ <metadata
+ id="metadata5">
+ <rdf:RDF>
+ <cc:Work
+ rdf:about="">
+ <dc:format>image/svg+xml</dc:format>
+ <dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+ <dc:title />
+ </cc:Work>
+ </rdf:RDF>
+ </metadata>
+ <g
+ inkscape:label="Layer 1"
+ inkscape:groupmode="layer"
+ id="layer1"
+ transform="translate(0,-175.29165)">
+ <g
+ id="g3786">
+ <circle
+ style="fill:#3771c8;stroke:#3985d6;stroke-width:0.31634963;stroke-opacity:1"
+ r="14.393909"
+ cy="236.14581"
+ cx="22.489584"
+ id="path10" />
+ <text
+ id="text5153"
+ y="242.09901"
+ x="22.754168"
+ style="font-style:normal;font-weight:normal;font-size:16.93333244px;line-height:1.25;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.26458332;stroke-opacity:0"
+ xml:space="preserve"><tspan
+ style="fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.26458332;stroke-opacity:0"
+ y="242.09901"
+ x="22.754168"
+ id="tspan5151"
+ sodipodi:role="line">A</tspan></text>
+ <text
+ id="text5153-1"
+ y="241.65677"
+ x="22.225002"
+ style="font-style:normal;font-weight:normal;font-size:16.93333244px;line-height:1.25;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:0.94117647;stroke:none;stroke-width:0.26458332;stroke-opacity:0"
+ xml:space="preserve"><tspan
+ style="fill:#ffffff;fill-opacity:0.94117647;stroke:none;stroke-width:0.26458332;stroke-opacity:0"
+ y="241.65677"
+ x="22.225002"
+ id="tspan5151-0"
+ sodipodi:role="line">A</tspan></text>
+ </g>
+ <g
+ id="g3793">
+ <circle
+ style="fill:#3771c8;stroke:#3985d6;stroke-width:0.31634963;stroke-opacity:1"
+ r="14.393909"
+ cy="236.14581"
+ cx="70.114586"
+ id="path10-2" />
+ <text
+ id="text5153-5"
+ y="242.09901"
+ x="70.246887"
+ style="font-style:normal;font-weight:normal;font-size:16.93333244px;line-height:1.25;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.26458332;stroke-opacity:0"
+ xml:space="preserve"><tspan
+ style="fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.26458332;stroke-opacity:0"
+ y="242.09901"
+ x="70.246887"
+ id="tspan5151-8"
+ sodipodi:role="line">B</tspan></text>
+ <text
+ id="text5153-1-0"
+ y="241.65677"
+ x="69.71772"
+ style="font-style:normal;font-weight:normal;font-size:16.93333244px;line-height:1.25;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:0.94117647;stroke:none;stroke-width:0.26458332;stroke-opacity:0"
+ xml:space="preserve"><tspan
+ style="fill:#ffffff;fill-opacity:0.94117647;stroke:none;stroke-width:0.26458332;stroke-opacity:0"
+ y="241.65677"
+ x="69.71772"
+ id="tspan5151-0-3"
+ sodipodi:role="line">B</tspan></text>
+ </g>
+ <g
+ id="g3800">
+ <circle
+ style="fill:#3771c8;stroke:#3985d6;stroke-width:0.31634963;stroke-opacity:1"
+ r="14.393909"
+ cy="197.78123"
+ cx="135.46677"
+ id="path10-1" />
+ <text
+ id="text5153-2"
+ y="203.73444"
+ x="135.59908"
+ style="font-style:normal;font-weight:normal;font-size:16.93333244px;line-height:1.25;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.26458332;stroke-opacity:0"
+ xml:space="preserve"><tspan
+ style="fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.26458332;stroke-opacity:0"
+ y="203.73444"
+ x="135.59908"
+ id="tspan5151-01"
+ sodipodi:role="line">C</tspan></text>
+ <text
+ id="text5153-1-1"
+ y="203.29219"
+ x="135.06992"
+ style="font-style:normal;font-weight:normal;font-size:16.93333244px;line-height:1.25;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:0.94117647;stroke:none;stroke-width:0.26458332;stroke-opacity:0"
+ xml:space="preserve"><tspan
+ style="fill:#ffffff;fill-opacity:0.94117647;stroke:none;stroke-width:0.26458332;stroke-opacity:0"
+ y="203.29219"
+ x="135.06992"
+ id="tspan5151-0-5"
+ sodipodi:role="line">C</tspan></text>
+ </g>
+ <g
+ id="g3807">
+ <circle
+ style="fill:#3771c8;stroke:#3985d6;stroke-width:0.31634963;stroke-opacity:1"
+ r="14.393909"
+ cy="236.14581"
+ cx="135.46677"
+ id="path10-13" />
+ <text
+ id="text5153-0"
+ y="242.09901"
+ x="135.59908"
+ style="font-style:normal;font-weight:normal;font-size:16.93333244px;line-height:1.25;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.26458332;stroke-opacity:0"
+ xml:space="preserve"><tspan
+ style="fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.26458332;stroke-opacity:0"
+ y="242.09901"
+ x="135.59908"
+ id="tspan5151-85"
+ sodipodi:role="line">D</tspan></text>
+ <text
+ id="text5153-1-3"
+ y="241.65677"
+ x="135.06992"
+ style="font-style:normal;font-weight:normal;font-size:16.93333244px;line-height:1.25;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:0.94117647;stroke:none;stroke-width:0.26458332;stroke-opacity:0"
+ xml:space="preserve"><tspan
+ style="fill:#ffffff;fill-opacity:0.94117647;stroke:none;stroke-width:0.26458332;stroke-opacity:0"
+ y="241.65677"
+ x="135.06992"
+ id="tspan5151-0-34"
+ sodipodi:role="line">D</tspan></text>
+ </g>
+ <g
+ id="g3814">
+ <circle
+ style="fill:#3771c8;stroke:#3985d6;stroke-width:0.31634963;stroke-opacity:1"
+ r="14.393909"
+ cy="274.51041"
+ cx="135.46677"
+ id="path10-6" />
+ <text
+ id="text5153-25"
+ y="280.46359"
+ x="135.59908"
+ style="font-style:normal;font-weight:normal;font-size:16.93333244px;line-height:1.25;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.26458332;stroke-opacity:0"
+ xml:space="preserve"><tspan
+ style="fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.26458332;stroke-opacity:0"
+ y="280.46359"
+ x="135.59908"
+ id="tspan5151-5"
+ sodipodi:role="line">E</tspan></text>
+ <text
+ id="text5153-1-38"
+ y="280.02136"
+ x="135.06992"
+ style="font-style:normal;font-weight:normal;font-size:16.93333244px;line-height:1.25;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:0.94117647;stroke:none;stroke-width:0.26458332;stroke-opacity:0"
+ xml:space="preserve"><tspan
+ style="fill:#ffffff;fill-opacity:0.94117647;stroke:none;stroke-width:0.26458332;stroke-opacity:0"
+ y="280.02136"
+ x="135.06992"
+ id="tspan5151-0-57"
+ sodipodi:role="line">E</tspan></text>
+ </g>
+ <g
+ id="g3821">
+ <circle
+ style="fill:#3771c8;stroke:#3985d6;stroke-width:0.31634963;stroke-opacity:1"
+ r="14.393909"
+ cy="236.14581"
+ cx="200.81874"
+ id="path10-25" />
+ <text
+ id="text5153-3"
+ y="242.09901"
+ x="200.95105"
+ style="font-style:normal;font-weight:normal;font-size:16.93333244px;line-height:1.25;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.26458332;stroke-opacity:0"
+ xml:space="preserve"><tspan
+ style="fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.26458332;stroke-opacity:0"
+ y="242.09901"
+ x="200.95105"
+ id="tspan5151-57"
+ sodipodi:role="line">F</tspan></text>
+ <text
+ id="text5153-1-6"
+ y="241.65677"
+ x="200.42189"
+ style="font-style:normal;font-weight:normal;font-size:16.93333244px;line-height:1.25;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:0.94117647;stroke:none;stroke-width:0.26458332;stroke-opacity:0"
+ xml:space="preserve"><tspan
+ style="fill:#ffffff;fill-opacity:0.94117647;stroke:none;stroke-width:0.26458332;stroke-opacity:0"
+ y="241.65677"
+ x="200.42189"
+ id="tspan5151-0-0"
+ sodipodi:role="line">F</tspan></text>
+ </g>
+ <g
+ id="g3828">
+ <circle
+ style="fill:#3771c8;stroke:#3985d6;stroke-width:0.31634963;stroke-opacity:1"
+ r="14.393909"
+ cy="236.14581"
+ cx="248.44374"
+ id="path10-3" />
+ <text
+ id="text5153-4"
+ y="242.09901"
+ x="248.57605"
+ style="font-style:normal;font-weight:normal;font-size:16.93333244px;line-height:1.25;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.26458332;stroke-opacity:0"
+ xml:space="preserve"><tspan
+ style="fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.26458332;stroke-opacity:0"
+ y="242.09901"
+ x="248.57605"
+ id="tspan5151-4"
+ sodipodi:role="line">G</tspan></text>
+ <text
+ id="text5153-1-4"
+ y="241.65677"
+ x="248.04689"
+ style="font-style:normal;font-weight:normal;font-size:16.93333244px;line-height:1.25;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:0.94117647;stroke:none;stroke-width:0.26458332;stroke-opacity:0"
+ xml:space="preserve"><tspan
+ style="fill:#ffffff;fill-opacity:0.94117647;stroke:none;stroke-width:0.26458332;stroke-opacity:0"
+ y="241.65677"
+ x="248.04689"
+ id="tspan5151-0-8"
+ sodipodi:role="line">G</tspan></text>
+ </g>
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1.05833328;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#Arrow1Send)"
+ d="M 39.687499,236.14581 H 52.122916"
+ id="path4635"
+ inkscape:connector-curvature="0" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1.05833328;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#Arrow1Send-2)"
+ d="m 218.01666,236.14581 h 12.43549"
+ id="path4635-6"
+ inkscape:connector-curvature="0" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1.05833328;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#Arrow1Send-8)"
+ d="M 87.312492,236.54271 H 116.37697"
+ id="path4635-3"
+ inkscape:connector-curvature="0" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1.05833328;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#Arrow1Send-8-8)"
+ d="m 152.66458,236.54271 h 29.0645"
+ id="path4635-3-7"
+ inkscape:connector-curvature="0" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1.05833328;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#Arrow1Send-8-0)"
+ d="M 86.277014,227.304 117.68384,206.9858"
+ id="path4635-3-1"
+ inkscape:connector-curvature="0" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1.05833328;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#Arrow1Send-8-0-5)"
+ d="m 152.15827,268.82093 31.40683,-20.3182"
+ id="path4635-3-1-7"
+ inkscape:connector-curvature="0" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1.05833328;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#Arrow1Send-8-0-4)"
+ d="M 86.277014,247.3918 117.68383,267.71"
+ id="path4635-3-1-6"
+ inkscape:connector-curvature="0" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1.05833328;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#Arrow1Send-8-0-4-9)"
+ d="m 152.15827,205.0582 31.40682,20.3182"
+ id="path4635-3-1-6-3"
+ inkscape:connector-curvature="0" />
+ </g>
+</svg>
diff --git a/doc/sphinx/shared/images/pcmk-internals.svg b/doc/sphinx/shared/images/pcmk-internals.svg
new file mode 100644
index 0000000..dcdac66
--- /dev/null
+++ b/doc/sphinx/shared/images/pcmk-internals.svg
@@ -0,0 +1,1649 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- Created with Inkscape (http://www.inkscape.org/) -->
+
+<svg
+ xmlns:osb="http://www.openswatchbook.org/uri/2009/osb"
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:xlink="http://www.w3.org/1999/xlink"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="800"
+ height="600"
+ id="svg2"
+ version="1.1"
+ inkscape:version="0.92.2 (5c3e80d, 2017-08-06)"
+ sodipodi:docname="pcmk-internals.svg"
+ inkscape:export-filename="/Users/beekhof/Dropbox/Public/pcmk-active-passive.png"
+ inkscape:export-xdpi="90"
+ inkscape:export-ydpi="90">
+ <defs
+ id="defs4">
+ <marker
+ inkscape:stockid="Arrow2Lstart"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow2Lstart"
+ style="overflow:visible"
+ inkscape:isstock="true">
+ <path
+ id="path11149"
+ style="fill-rule:evenodd;stroke-width:0.625;stroke-linejoin:round;stroke:#000000;stroke-opacity:1;fill:#000000;fill-opacity:1"
+ d="M 8.7185878,4.0337352 L -2.2072895,0.016013256 L 8.7185884,-4.0017078 C 6.9730900,-1.6296469 6.9831476,1.6157441 8.7185878,4.0337352 z "
+ transform="scale(1.1) translate(1,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow1Lstart"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="marker11394"
+ style="overflow:visible"
+ inkscape:isstock="true">
+ <path
+ id="path11131"
+ d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
+ transform="scale(0.8) translate(12.5,0)" />
+ </marker>
+ <linearGradient
+ id="linearGradient7187"
+ osb:paint="solid">
+ <stop
+ style="stop-color:#359e46;stop-opacity:1;"
+ offset="0"
+ id="stop7185" />
+ </linearGradient>
+ <marker
+ inkscape:stockid="Arrow1Lend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow1Lend"
+ style="overflow:visible;">
+ <path
+ id="path4104"
+ d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1.0pt;marker-start:none;"
+ transform="scale(0.8) rotate(180) translate(12.5,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow1Lstart"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow1Lstart"
+ style="overflow:visible">
+ <path
+ id="path8864"
+ d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1.0pt;marker-start:none"
+ transform="scale(0.8) translate(12.5,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="TriangleInL"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="TriangleInL"
+ style="overflow:visible">
+ <path
+ id="path8998"
+ d="M 5.77,0.0 L -2.88,5.0 L -2.88,-5.0 L 5.77,0.0 z "
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1.0pt;marker-start:none"
+ transform="scale(-0.8)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow2Lend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow2Lend"
+ style="overflow:visible;">
+ <path
+ id="path8885"
+ style="font-size:12.0;fill-rule:evenodd;stroke-width:0.62500000;stroke-linejoin:round;"
+ d="M 8.7185878,4.0337352 L -2.2072895,0.016013256 L 8.7185884,-4.0017078 C 6.9730900,-1.6296469 6.9831476,1.6157441 8.7185878,4.0337352 z "
+ transform="scale(1.1) rotate(180) translate(1,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow1Mend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow1Mend"
+ style="overflow:visible;">
+ <path
+ id="path4652"
+ d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1.0pt;marker-start:none;"
+ transform="scale(0.4) rotate(180) translate(10,0)" />
+ </marker>
+ <linearGradient
+ id="linearGradient4616">
+ <stop
+ style="stop-color:#808080;stop-opacity:0.75;"
+ offset="0"
+ id="stop4618" />
+ <stop
+ style="stop-color:#bfbfbf;stop-opacity:0.5;"
+ offset="1"
+ id="stop4620" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient4606">
+ <stop
+ style="stop-color:#000000;stop-opacity:0.58536583;"
+ offset="0"
+ id="stop4608" />
+ <stop
+ style="stop-color:#000000;stop-opacity:0.08130081;"
+ offset="1"
+ id="stop4610" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient4411">
+ <stop
+ style="stop-color:#f3f3f3;stop-opacity:0;"
+ offset="0"
+ id="stop4413" />
+ <stop
+ style="stop-color:#e6e6e6;stop-opacity:0.21138212;"
+ offset="1"
+ id="stop4415" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient4370">
+ <stop
+ style="stop-color:#ffffff;stop-opacity:1;"
+ offset="0"
+ id="stop4372" />
+ <stop
+ style="stop-color:#f7f7f7;stop-opacity:0.69918698;"
+ offset="1"
+ id="stop4374" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient3988">
+ <stop
+ id="stop3990"
+ offset="0"
+ style="stop-color:#d3e219;stop-opacity:1;" />
+ <stop
+ id="stop3992"
+ offset="1"
+ style="stop-color:#e8a411;stop-opacity:1;" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient3838">
+ <stop
+ style="stop-color:#6badf2;stop-opacity:1;"
+ offset="0"
+ id="stop3840" />
+ <stop
+ style="stop-color:#2e447f;stop-opacity:1;"
+ offset="1"
+ id="stop3842" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient3658">
+ <stop
+ style="stop-color:#19e229;stop-opacity:1;"
+ offset="0"
+ id="stop3660" />
+ <stop
+ style="stop-color:#589b56;stop-opacity:1;"
+ offset="1"
+ id="stop3662" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient3650">
+ <stop
+ style="stop-color:#f36d6d;stop-opacity:1;"
+ offset="0"
+ id="stop3652" />
+ <stop
+ style="stop-color:#b81313;stop-opacity:1;"
+ offset="1"
+ id="stop3654" />
+ </linearGradient>
+ <inkscape:perspective
+ sodipodi:type="inkscape:persp3d"
+ inkscape:vp_x="0 : 526.18109 : 1"
+ inkscape:vp_y="0 : 1000 : 0"
+ inkscape:vp_z="744.09448 : 526.18109 : 1"
+ inkscape:persp3d-origin="372.04724 : 350.78739 : 1"
+ id="perspective10" />
+ <filter
+ id="filter3712"
+ inkscape:label="Ridged border"
+ inkscape:menu="Bevels"
+ inkscape:menu-tooltip="Ridged border with inner bevel"
+ color-interpolation-filters="sRGB">
+ <feMorphology
+ id="feMorphology3714"
+ radius="4.3"
+ in="SourceAlpha"
+ result="result91" />
+ <feComposite
+ id="feComposite3716"
+ in2="result91"
+ operator="out"
+ in="SourceGraphic" />
+ <feGaussianBlur
+ id="feGaussianBlur3718"
+ result="result0"
+ stdDeviation="1.2" />
+ <feDiffuseLighting
+ id="feDiffuseLighting3720"
+ diffuseConstant="1"
+ result="result92">
+ <feDistantLight
+ id="feDistantLight3722"
+ elevation="66"
+ azimuth="225" />
+ </feDiffuseLighting>
+ <feBlend
+ id="feBlend3724"
+ in2="SourceGraphic"
+ mode="multiply"
+ result="result93" />
+ <feComposite
+ id="feComposite3726"
+ in2="SourceAlpha"
+ operator="in" />
+ </filter>
+ <filter
+ id="filter4038"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4040"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4042"
+ result="bluralpha"
+ type="matrix"
+ values="-1 0 0 0 1 0 -1 0 0 1 0 0 -1 0 1 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4044"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4046">
+ <feMergeNode
+ id="feMergeNode4048"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4050"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <filter
+ id="filter4066"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4068"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4070"
+ result="bluralpha"
+ type="matrix"
+ values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4072"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4074">
+ <feMergeNode
+ id="feMergeNode4076"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4078"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient4370"
+ id="radialGradient4376"
+ cx="-0.5"
+ cy="-100.5"
+ fx="-0.5"
+ fy="-100.5"
+ r="400.5"
+ gradientTransform="matrix(0.06674414,1.4857892,-1.4966201,0.06723071,-150.87695,6.9995757)"
+ gradientUnits="userSpaceOnUse" />
+ <filter
+ id="filter4381"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4383"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4385"
+ result="bluralpha"
+ type="matrix"
+ values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4387"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4389">
+ <feMergeNode
+ id="feMergeNode4391"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4393"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <filter
+ id="filter4397"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4399"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4401"
+ result="bluralpha"
+ type="matrix"
+ values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4403"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4405">
+ <feMergeNode
+ id="feMergeNode4407"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4409"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <inkscape:perspective
+ id="perspective4466"
+ inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
+ inkscape:vp_z="1 : 0.5 : 1"
+ inkscape:vp_y="0 : 1000 : 0"
+ inkscape:vp_x="0 : 0.5 : 1"
+ sodipodi:type="inkscape:persp3d" />
+ <filter
+ id="filter4508"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4510"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4512"
+ result="bluralpha"
+ type="matrix"
+ values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4514"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4516">
+ <feMergeNode
+ id="feMergeNode4518"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4520"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <filter
+ id="filter4592"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4594"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4596"
+ result="bluralpha"
+ type="matrix"
+ values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4598"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4600">
+ <feMergeNode
+ id="feMergeNode4602"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4604"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <linearGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient4606"
+ id="linearGradient4622"
+ x1="906.94769"
+ y1="-7.3383088"
+ x2="906.94769"
+ y2="-172.97601"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.23092554,0,0,0.7849298,593.37513,596.7001)" />
+ <linearGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient4606"
+ id="linearGradient4626"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.23092554,0,0,1.0521382,-1255.8822,187.84807)"
+ x1="906.94769"
+ y1="-7.3383088"
+ x2="906.94769"
+ y2="-172.97601" />
+ <linearGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient4616"
+ id="linearGradient4636"
+ x1="241.90201"
+ y1="489.76343"
+ x2="-256.56793"
+ y2="98.293198"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1.4992949,0,0,1.4260558,436.2333,350.79316)" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3838"
+ id="radialGradient14138"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.39363851,0,0,0.10719538,63.763839,638.96085)"
+ cx="532.67328"
+ cy="425.74258"
+ fx="532.67328"
+ fy="425.74258"
+ r="259.90594" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3838"
+ id="radialGradient14146"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.22177052,0,0,0.10748334,595.9313,638.8382)"
+ cx="532.67328"
+ cy="425.74258"
+ fx="532.67328"
+ fy="425.74258"
+ r="259.90594" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3838"
+ id="radialGradient14160"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.3477996,0,0,0.10726461,332.45951,554.48737)"
+ cx="532.67328"
+ cy="425.74258"
+ fx="532.67328"
+ fy="425.74258"
+ r="259.90594" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3838"
+ id="radialGradient14162"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.40700893,0,0,0.10717596,301.41644,638.96907)"
+ cx="532.67328"
+ cy="425.74258"
+ fx="532.67328"
+ fy="425.74258"
+ r="259.90594" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient14166"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.39363851,0,0,0.10719538,62.771456,554.53214)"
+ cx="532.67328"
+ cy="425.74258"
+ fx="532.67328"
+ fy="425.74258"
+ r="259.90594" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3838"
+ id="radialGradient14170"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1.1603188,0,0,0.06221331,-145.65353,731.17367)"
+ cx="532.67328"
+ cy="425.74258"
+ fx="532.67328"
+ fy="425.74258"
+ r="259.90594" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3650"
+ id="radialGradient14182"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.39363851,0,0,0.10719538,411.09783,823.66593)"
+ cx="532.67328"
+ cy="425.74258"
+ fx="532.67328"
+ fy="425.74258"
+ r="259.90594" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient14208"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.39363851,0,0,0.10719538,65.748605,867.33078)"
+ cx="532.67328"
+ cy="425.74258"
+ fx="532.67328"
+ fy="425.74258"
+ r="259.90594" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient14210"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.27332189,0,0,0.1073878,97.585617,785.87348)"
+ cx="532.67328"
+ cy="425.74258"
+ fx="532.67328"
+ fy="425.74258"
+ r="259.90594" />
+ <inkscape:perspective
+ id="perspective16650"
+ inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
+ inkscape:vp_z="1 : 0.5 : 1"
+ inkscape:vp_y="0 : 1000 : 0"
+ inkscape:vp_x="0 : 0.5 : 1"
+ sodipodi:type="inkscape:persp3d" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658-9"
+ id="radialGradient16578-0"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.27332189,0,0,0.1073878,97.585617,785.87348)"
+ cx="532.67328"
+ cy="425.74258"
+ fx="532.67328"
+ fy="425.74258"
+ r="259.90594" />
+ <linearGradient
+ id="linearGradient3658-9">
+ <stop
+ style="stop-color:#19e229;stop-opacity:1;"
+ offset="0"
+ id="stop3660-5" />
+ <stop
+ style="stop-color:#589b56;stop-opacity:1;"
+ offset="1"
+ id="stop3662-3" />
+ </linearGradient>
+ <inkscape:perspective
+ id="perspective16688"
+ inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
+ inkscape:vp_z="1 : 0.5 : 1"
+ inkscape:vp_y="0 : 1000 : 0"
+ inkscape:vp_x="0 : 0.5 : 1"
+ sodipodi:type="inkscape:persp3d" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658-9"
+ id="radialGradient16729"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.26777109,0,0,0.0707139,230.52321,958.2476)"
+ cx="532.67328"
+ cy="425.74258"
+ fx="532.67328"
+ fy="425.74258"
+ r="259.90594" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658-9"
+ id="radialGradient16737"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.26777109,0,0,0.0707139,382.35778,956.26283)"
+ cx="532.67328"
+ cy="425.74258"
+ fx="532.67328"
+ fy="425.74258"
+ r="259.90594" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3197"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1.1603188,0,0,0.08450912,-216.64591,473.69509)"
+ cx="513.85736"
+ cy="666.09711"
+ fx="513.85736"
+ fy="666.09711"
+ r="259.90594" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3838"
+ id="radialGradient3202"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.39363851,0,0,0.10719538,-55.228544,644.53214)"
+ cx="532.67328"
+ cy="425.74258"
+ fx="532.67328"
+ fy="425.74258"
+ r="259.90594" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3838"
+ id="radialGradient3223"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.48734647,0,0,0.16732279,139.57586,658.5285)"
+ cx="532.67328"
+ cy="425.74258"
+ fx="532.67328"
+ fy="425.74258"
+ r="259.90594" />
+ <filter
+ id="filter4038-3"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-0.25"
+ y="-0.25">
+ <feGaussianBlur
+ id="feGaussianBlur4040-9"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4042-4"
+ result="bluralpha"
+ type="matrix"
+ values="-1 0 0 0 1 0 -1 0 0 1 0 0 -1 0 1 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4044-1"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4046-2">
+ <feMergeNode
+ id="feMergeNode4048-2"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4050-7"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3838"
+ id="radialGradient3197-4"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1.1603188,0,0,0.06221331,-217.37012,862.39766)"
+ cx="520.69952"
+ cy="936.18402"
+ fx="520.69952"
+ fy="936.18402"
+ r="259.90594" />
+ <filter
+ id="filter4038-3-6"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-0.25"
+ y="-0.25">
+ <feGaussianBlur
+ id="feGaussianBlur4040-9-0"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4042-4-0"
+ result="bluralpha"
+ type="matrix"
+ values="-1 0 0 0 1 0 -1 0 0 1 0 0 -1 0 1 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4044-1-6"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4046-2-6">
+ <feMergeNode
+ id="feMergeNode4048-2-3"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4050-7-6"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <filter
+ id="filter4038-3-5"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-0.25"
+ y="-0.25">
+ <feGaussianBlur
+ id="feGaussianBlur4040-9-9"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4042-4-9"
+ result="bluralpha"
+ type="matrix"
+ values="-1 0 0 0 1 0 -1 0 0 1 0 0 -1 0 1 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4044-1-9"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4046-2-0">
+ <feMergeNode
+ id="feMergeNode4048-2-4"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4050-7-2"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3838"
+ id="radialGradient3223-0"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.48734647,0,0,0.17508368,-86.58567,556.47444)"
+ cx="532.67328"
+ cy="425.74258"
+ fx="532.67328"
+ fy="425.74258"
+ r="259.90594" />
+ <filter
+ id="filter4038-3-5-0"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-0.25"
+ y="-0.25">
+ <feGaussianBlur
+ id="feGaussianBlur4040-9-9-9"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4042-4-9-9"
+ result="bluralpha"
+ type="matrix"
+ values="-1 0 0 0 1 0 -1 0 0 1 0 0 -1 0 1 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4044-1-9-7"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4046-2-0-6">
+ <feMergeNode
+ id="feMergeNode4048-2-4-7"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4050-7-2-3"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3838"
+ id="radialGradient3223-2"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.48734647,0,0,0.16732279,-86.58567,755.7938)"
+ cx="532.67328"
+ cy="425.74258"
+ fx="532.67328"
+ fy="425.74258"
+ r="259.90594" />
+ <filter
+ id="filter4038-3-5-3"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-0.25"
+ y="-0.25">
+ <feGaussianBlur
+ id="feGaussianBlur4040-9-9-6"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4042-4-9-6"
+ result="bluralpha"
+ type="matrix"
+ values="-1 0 0 0 1 0 -1 0 0 1 0 0 -1 0 1 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4044-1-9-8"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4046-2-0-7">
+ <feMergeNode
+ id="feMergeNode4048-2-4-3"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4050-7-2-0"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3838"
+ id="radialGradient3223-05"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.48734647,0,0,0.16732279,375.35124,755.61099)"
+ cx="532.67328"
+ cy="425.74258"
+ fx="532.67328"
+ fy="425.74258"
+ r="259.90594" />
+ <filter
+ id="filter4038-3-5-1"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-0.25"
+ y="-0.25">
+ <feGaussianBlur
+ id="feGaussianBlur4040-9-9-2"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4042-4-9-3"
+ result="bluralpha"
+ type="matrix"
+ values="-1 0 0 0 1 0 -1 0 0 1 0 0 -1 0 1 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4044-1-9-1"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4046-2-0-4">
+ <feMergeNode
+ id="feMergeNode4048-2-4-0"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4050-7-2-9"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3838"
+ id="radialGradient3223-6"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.53516641,0,0,0.16732279,328.83417,558.80142)"
+ cx="532.67328"
+ cy="425.74258"
+ fx="532.67328"
+ fy="425.74258"
+ r="259.90594" />
+ <filter
+ id="filter4038-3-5-9"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-0.25"
+ y="-0.25">
+ <feGaussianBlur
+ id="feGaussianBlur4040-9-9-4"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4042-4-9-5"
+ result="bluralpha"
+ type="matrix"
+ values="-1 0 0 0 1 0 -1 0 0 1 0 0 -1 0 1 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4044-1-9-9"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4046-2-0-3">
+ <feMergeNode
+ id="feMergeNode4048-2-4-9"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4050-7-2-94"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <marker
+ inkscape:stockid="Arrow2Lstart"
+ orient="auto"
+ refY="0"
+ refX="0"
+ id="Arrow2Lstart-8"
+ style="overflow:visible"
+ inkscape:isstock="true">
+ <path
+ inkscape:connector-curvature="0"
+ id="path11149-3"
+ style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
+ d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+ transform="matrix(1.1,0,0,1.1,1.1,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow2Lend"
+ orient="auto"
+ refY="0"
+ refX="0"
+ id="Arrow2Lend-3"
+ style="overflow:visible">
+ <path
+ inkscape:connector-curvature="0"
+ id="path8885-3"
+ style="font-size:12px;fill-rule:evenodd;stroke-width:0.625;stroke-linejoin:round"
+ d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+ transform="matrix(-1.1,0,0,-1.1,-1.1,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow2Lstart"
+ orient="auto"
+ refY="0"
+ refX="0"
+ id="Arrow2Lstart-8-0"
+ style="overflow:visible"
+ inkscape:isstock="true">
+ <path
+ inkscape:connector-curvature="0"
+ id="path11149-3-5"
+ style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
+ d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+ transform="matrix(1.1,0,0,1.1,1.1,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow2Lend"
+ orient="auto"
+ refY="0"
+ refX="0"
+ id="Arrow2Lend-3-2"
+ style="overflow:visible">
+ <path
+ inkscape:connector-curvature="0"
+ id="path8885-3-7"
+ style="font-size:12px;fill-rule:evenodd;stroke-width:0.625;stroke-linejoin:round"
+ d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+ transform="matrix(-1.1,0,0,-1.1,-1.1,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow2Lstart"
+ orient="auto"
+ refY="0"
+ refX="0"
+ id="Arrow2Lstart-8-7"
+ style="overflow:visible"
+ inkscape:isstock="true">
+ <path
+ inkscape:connector-curvature="0"
+ id="path11149-3-6"
+ style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
+ d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+ transform="matrix(1.1,0,0,1.1,1.1,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow2Lend"
+ orient="auto"
+ refY="0"
+ refX="0"
+ id="Arrow2Lend-3-3"
+ style="overflow:visible">
+ <path
+ inkscape:connector-curvature="0"
+ id="path8885-3-5"
+ style="font-size:12px;fill-rule:evenodd;stroke-width:0.625;stroke-linejoin:round"
+ d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+ transform="matrix(-1.1,0,0,-1.1,-1.1,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow2Lstart"
+ orient="auto"
+ refY="0"
+ refX="0"
+ id="Arrow2Lstart-8-3"
+ style="overflow:visible"
+ inkscape:isstock="true">
+ <path
+ inkscape:connector-curvature="0"
+ id="path11149-3-9"
+ style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
+ d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+ transform="matrix(1.1,0,0,1.1,1.1,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow2Lend"
+ orient="auto"
+ refY="0"
+ refX="0"
+ id="Arrow2Lend-3-5"
+ style="overflow:visible">
+ <path
+ inkscape:connector-curvature="0"
+ id="path8885-3-4"
+ style="font-size:12px;fill-rule:evenodd;stroke-width:0.625;stroke-linejoin:round"
+ d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+ transform="matrix(-1.1,0,0,-1.1,-1.1,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow2Lstart"
+ orient="auto"
+ refY="0"
+ refX="0"
+ id="Arrow2Lstart-8-1"
+ style="overflow:visible"
+ inkscape:isstock="true">
+ <path
+ inkscape:connector-curvature="0"
+ id="path11149-3-3"
+ style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
+ d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+ transform="matrix(1.1,0,0,1.1,1.1,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow2Lend"
+ orient="auto"
+ refY="0"
+ refX="0"
+ id="Arrow2Lend-3-27"
+ style="overflow:visible">
+ <path
+ inkscape:connector-curvature="0"
+ id="path8885-3-57"
+ style="font-size:12px;fill-rule:evenodd;stroke-width:0.625;stroke-linejoin:round"
+ d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+ transform="matrix(-1.1,0,0,-1.1,-1.1,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow2Lstart"
+ orient="auto"
+ refY="0"
+ refX="0"
+ id="Arrow2Lstart-8-1-6"
+ style="overflow:visible"
+ inkscape:isstock="true">
+ <path
+ inkscape:connector-curvature="0"
+ id="path11149-3-3-0"
+ style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
+ d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+ transform="matrix(1.1,0,0,1.1,1.1,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow2Lend"
+ orient="auto"
+ refY="0"
+ refX="0"
+ id="Arrow2Lend-3-27-9"
+ style="overflow:visible">
+ <path
+ inkscape:connector-curvature="0"
+ id="path8885-3-57-5"
+ style="font-size:12px;fill-rule:evenodd;stroke-width:0.625;stroke-linejoin:round"
+ d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+ transform="matrix(-1.1,0,0,-1.1,-1.1,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow2Lstart"
+ orient="auto"
+ refY="0"
+ refX="0"
+ id="Arrow2Lstart-8-1-3"
+ style="overflow:visible"
+ inkscape:isstock="true">
+ <path
+ inkscape:connector-curvature="0"
+ id="path11149-3-3-8"
+ style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
+ d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+ transform="matrix(1.1,0,0,1.1,1.1,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow2Lend"
+ orient="auto"
+ refY="0"
+ refX="0"
+ id="Arrow2Lend-3-27-8"
+ style="overflow:visible">
+ <path
+ inkscape:connector-curvature="0"
+ id="path8885-3-57-3"
+ style="font-size:12px;fill-rule:evenodd;stroke-width:0.625;stroke-linejoin:round"
+ d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+ transform="matrix(-1.1,0,0,-1.1,-1.1,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow2Lstart"
+ orient="auto"
+ refY="0"
+ refX="0"
+ id="Arrow2Lstart-8-1-3-0"
+ style="overflow:visible"
+ inkscape:isstock="true">
+ <path
+ inkscape:connector-curvature="0"
+ id="path11149-3-3-8-3"
+ style="fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.625;stroke-linejoin:round;stroke-opacity:1"
+ d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+ transform="matrix(1.1,0,0,1.1,1.1,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow2Lend"
+ orient="auto"
+ refY="0"
+ refX="0"
+ id="Arrow2Lend-3-27-8-1"
+ style="overflow:visible">
+ <path
+ inkscape:connector-curvature="0"
+ id="path8885-3-57-3-7"
+ style="font-size:12px;fill-rule:evenodd;stroke-width:0.625;stroke-linejoin:round"
+ d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
+ transform="matrix(-1.1,0,0,-1.1,-1.1,0)" />
+ </marker>
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3197-7"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.86183198,0,0,0.69923648,-58.817136,70.578128)"
+ cx="521.91772"
+ cy="922.97913"
+ fx="521.91772"
+ fy="922.97913"
+ r="259.90594" />
+ </defs>
+ <sodipodi:namedview
+ id="base"
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1.0"
+ inkscape:pageopacity="0.0"
+ inkscape:pageshadow="2"
+ inkscape:zoom="1.0076756"
+ inkscape:cx="395.4549"
+ inkscape:cy="219.04823"
+ inkscape:document-units="px"
+ inkscape:current-layer="layer1"
+ showgrid="false"
+ inkscape:window-width="1920"
+ inkscape:window-height="981"
+ inkscape:window-x="1920"
+ inkscape:window-y="28"
+ inkscape:window-maximized="1"
+ showguides="true"
+ inkscape:guide-bbox="true" />
+ <metadata
+ id="metadata7">
+ <rdf:RDF>
+ <cc:Work
+ rdf:about="">
+ <dc:format>image/svg+xml</dc:format>
+ <dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+ <dc:title />
+ </cc:Work>
+ </rdf:RDF>
+ </metadata>
+ <g
+ inkscape:label="Layer 1"
+ inkscape:groupmode="layer"
+ id="layer1"
+ transform="translate(0,-452.36218)">
+ <rect
+ style="fill:url(#radialGradient3197-7);fill-opacity:1;stroke:none;stroke-width:2.88930249"
+ id="rect14168-2"
+ width="518.62708"
+ height="435.72058"
+ x="138.36432"
+ y="526.80225"
+ ry="2.2663267"
+ rx="0.14977072" />
+ <rect
+ style="fill:url(#linearGradient4622);fill-opacity:1.0;fill-rule:nonzero;stroke:#000000;stroke-width:0;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+ id="rect4614"
+ width="3"
+ height="591.4361"
+ x="797"
+ y="460.92606"
+ ry="0.59076202" />
+ <rect
+ ry="0.79187125"
+ y="5.8533502"
+ x="-1052.2572"
+ height="792.77484"
+ width="3"
+ id="rect4624"
+ style="fill:url(#linearGradient4626);fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+ transform="matrix(0,-1,1,0,0,0)" />
+ <text
+ id="text7860"
+ y="940.46515"
+ x="234.48592"
+ style="font-style:normal;font-weight:normal;line-height:0%;font-family:'Bitstream Vera Sans';fill:#000000;fill-opacity:1;stroke:none"
+ xml:space="preserve"><tspan
+ y="940.46515"
+ x="234.48592"
+ id="tspan7862"
+ sodipodi:role="line"
+ style="font-size:40px;line-height:1.25"> </tspan></text>
+ <path
+ inkscape:connector-curvature="0"
+ style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ d="m 408.69275,817.02983 v 0"
+ id="path7961"
+ inkscape:connector-type="polyline" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1.35752666px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-start:url(#Arrow2Lstart);marker-end:url(#Arrow2Lend)"
+ d="M 398.60276,781.07444 V 892.63377"
+ id="path4092"
+ inkscape:connector-curvature="0" />
+ <rect
+ style="fill:url(#radialGradient3197-4);fill-opacity:1;stroke:none"
+ id="rect14168-6"
+ width="698.24835"
+ height="38.767456"
+ x="48.103188"
+ y="902.98938"
+ ry="0.20164233"
+ rx="0.20164233" />
+ <text
+ transform="matrix(0.81122555,0,0,0.75105676,80.616329,344.65227)"
+ id="text14172-8"
+ y="774.01617"
+ x="390.88086"
+ style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;line-height:0%;font-family:'BlairMdITC TT';-inkscape-font-specification:'BlairMdITC TT Medium';text-align:start;writing-mode:lr-tb;text-anchor:start;fill:#ffffff;fill-opacity:1;stroke:none;filter:url(#filter4038-3)"
+ xml:space="preserve"><tspan
+ style="font-style:italic;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:17.77115822px;line-height:100%;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans Italic';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff"
+ y="774.01617"
+ x="390.88086"
+ id="tspan14174-4"
+ sodipodi:role="line"><tspan
+ style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:27.33074951px;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans Bold'"
+ id="tspan822">pacemaker-based </tspan><tspan
+ style="font-size:20.49806213px"
+ id="tspan5534">(reads and writes cluster configuration and status)</tspan></tspan></text>
+ <text
+ transform="matrix(0.81122555,0,0,0.75105676,75.318146,-19.866724)"
+ id="text14172-8-2"
+ y="774.01617"
+ x="390.88086"
+ style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;line-height:0%;font-family:'BlairMdITC TT';-inkscape-font-specification:'BlairMdITC TT Medium';text-align:start;writing-mode:lr-tb;text-anchor:start;fill:#ffffff;fill-opacity:1;stroke:none;filter:url(#filter4038-3-6)"
+ xml:space="preserve"><tspan
+ style="font-style:italic;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:17.77115822px;line-height:100%;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans Italic';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff"
+ y="774.01617"
+ x="390.88086"
+ id="tspan14174-4-9"
+ sodipodi:role="line"><tspan
+ style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:27.33074951px;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans Bold'"
+ id="tspan822-7">pacemakerd </tspan><tspan
+ style="font-size:20.49806213px;fill:#ffffff"
+ id="tspan5540">(launches and monitors all other daemons)</tspan></tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:125%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ x="373.13596"
+ y="776.47974"
+ id="text909"><tspan
+ sodipodi:role="line"
+ id="tspan907"
+ x="373.13596"
+ y="811.87036"></tspan></text>
+ <flowRoot
+ xml:space="preserve"
+ id="flowRoot5484"
+ style="fill:black;stroke:none;stroke-opacity:1;stroke-width:1px;stroke-linejoin:miter;stroke-linecap:butt;fill-opacity:1;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:40px;line-height:125%;letter-spacing:0px;word-spacing:0px"><flowRegion
+ id="flowRegion5486"><rect
+ id="rect5488"
+ width="582.52875"
+ height="77.405861"
+ x="93.283989"
+ y="23.425554" /></flowRegion><flowPara
+ id="flowPara5490"></flowPara></flowRoot> <flowRoot
+ xml:space="preserve"
+ id="flowRoot5492"
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:125%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ transform="translate(46,446.36218)"><flowRegion
+ id="flowRegion5494"><rect
+ id="rect5496"
+ width="623.21643"
+ height="85.344925"
+ x="87.329689"
+ y="21.440788" /></flowRegion><flowPara
+ id="flowPara5498"
+ style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:53.33333206px;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans Bold'">Pacemaker internals</flowPara></flowRoot> <rect
+ style="fill:url(#radialGradient3223-0);fill-opacity:1;stroke:none;stroke-width:0.84318089;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+ id="rect3836-3"
+ width="252.8412"
+ height="89.551704"
+ x="46.59024"
+ y="586.23926"
+ ry="0.56747156" />
+ <text
+ transform="matrix(0.81122555,0,0,0.75105676,-148.85215,39.770657)"
+ id="text14172-8-4-1"
+ y="774.01617"
+ x="390.88086"
+ style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;line-height:137.99999952%;font-family:'BlairMdITC TT';-inkscape-font-specification:'BlairMdITC TT Medium';text-align:start;writing-mode:lr-tb;text-anchor:start;fill:#ffffff;fill-opacity:1;stroke:none;filter:url(#filter4038-3-5-0)"
+ xml:space="preserve"><tspan
+ style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:27.33074951px;line-height:137.99999952%;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff"
+ y="774.01617"
+ x="390.88086"
+ id="tspan14174-4-8-2"
+ sodipodi:role="line">pacemaker-execd</tspan><tspan
+ style="font-style:italic;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:20.49806213px;line-height:137.99999952%;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans Italic';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff"
+ y="805.18372"
+ x="390.88086"
+ sodipodi:role="line"
+ id="tspan5693">(executes resource agents)</tspan></text>
+ <rect
+ style="fill:url(#radialGradient3223-2);fill-opacity:1;stroke:none;stroke-width:0.82428133;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+ id="rect3836-1"
+ width="252.8412"
+ height="85.582176"
+ x="46.59024"
+ y="784.23926"
+ ry="0.54231739" />
+ <text
+ transform="matrix(0.81122555,0,0,0.75105676,-145.875,238.76304)"
+ id="text14172-8-4-2"
+ y="774.01617"
+ x="390.88086"
+ style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;line-height:137.99999952%;font-family:'BlairMdITC TT';-inkscape-font-specification:'BlairMdITC TT Medium';text-align:start;writing-mode:lr-tb;text-anchor:start;fill:#ffffff;fill-opacity:1;stroke:none;filter:url(#filter4038-3-5-3)"
+ xml:space="preserve"><tspan
+ style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:27.33074951px;line-height:137.99999952%;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff"
+ y="774.01617"
+ x="390.88086"
+ id="tspan14174-4-8-9"
+ sodipodi:role="line">pacemaker-fenced</tspan><tspan
+ style="font-style:italic;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:20.49806213px;line-height:137.99999952%;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans Italic';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff"
+ y="805.18372"
+ x="390.88086"
+ sodipodi:role="line"
+ id="tspan956-6">(executes fencing agents)</tspan></text>
+ <rect
+ style="fill:url(#radialGradient3223-05);fill-opacity:1;stroke:none;stroke-width:0.82428133;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+ id="rect3836-15"
+ width="252.8412"
+ height="85.582176"
+ x="508.52716"
+ y="784.05646"
+ ry="0.54231739" />
+ <text
+ transform="matrix(0.81122555,0,0,0.75105676,316.06191,238.58023)"
+ id="text14172-8-4-22"
+ y="774.01617"
+ x="390.88086"
+ style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;line-height:137.99999952%;font-family:'BlairMdITC TT';-inkscape-font-specification:'BlairMdITC TT Medium';text-align:start;writing-mode:lr-tb;text-anchor:start;fill:#ffffff;fill-opacity:1;stroke:none;filter:url(#filter4038-3-5-1)"
+ xml:space="preserve"><tspan
+ style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:27.33074951px;line-height:137.99999952%;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff"
+ y="774.01617"
+ x="390.88086"
+ id="tspan14174-4-8-0"
+ sodipodi:role="line">pacemaker-attrd</tspan><tspan
+ style="font-style:italic;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:20.49806213px;line-height:137.99999952%;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans Italic';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff"
+ y="805.18372"
+ x="390.88086"
+ sodipodi:role="line"
+ id="tspan956-8">(manages node attributes)</tspan></text>
+ <rect
+ style="fill:url(#radialGradient3223-6);fill-opacity:1;stroke:none;stroke-width:0.86377567;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+ id="rect3836-38"
+ width="277.65076"
+ height="85.582176"
+ x="475.07773"
+ y="587.24689"
+ ry="0.54231739" />
+ <text
+ transform="matrix(0.81122555,0,0,0.75105676,294.52109,41.77066)"
+ id="text14172-8-4-0"
+ y="774.01617"
+ x="390.88086"
+ style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;line-height:137.99999952%;font-family:'BlairMdITC TT';-inkscape-font-specification:'BlairMdITC TT Medium';text-align:start;writing-mode:lr-tb;text-anchor:start;fill:#ffffff;fill-opacity:1;stroke:none;filter:url(#filter4038-3-5-9)"
+ xml:space="preserve"><tspan
+ style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:27.33074951px;line-height:137.99999952%;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff"
+ y="774.01617"
+ x="390.88086"
+ id="tspan14174-4-8-5"
+ sodipodi:role="line">pacemaker-schedulerd</tspan><tspan
+ style="font-style:italic;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:20.49806213px;line-height:137.99999952%;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans Italic';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff"
+ y="805.18372"
+ x="390.88086"
+ sodipodi:role="line"
+ id="tspan956-5">(determines all actions needed)</tspan></text>
+ <g
+ id="g5737"
+ transform="translate(-12,2)">
+ <image
+ width="67.571739"
+ height="80.908264"
+ preserveAspectRatio="none"
+ style="image-rendering:optimizeSpeed"
+ xlink:href="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAJgAAAC2CAYAAAAlZERnAAAABHNCSVQICAgIfAhkiAAAER1JREFU
+eJzt3Xl4FGWeB/DfW9XVd5LuDhAIJAYQEAgYIUTE88FjlfHeEQ8YD1CfWZdRkdX1nF11HGbGe8Yd
+Z0aZcXRVdkUHhhVRUQZwFASC4YqEcAUIIVcnfXfX8e4fTEMHuqqrQ7/dlfTv8zz+UV31vvVivk+/
+1W9X/ZpQSgEhVrhcDwD1bxgwxBQGDDGFAUNMYcAQUxgwxBQGDDGFAUNMYcAQUxgwxBQGDDGFAUNM
+YcAQUxgwxBQGDDGFAUNMYcAQUxgwxJQp1wPoDxQK/Pc+cVpjQKzuiCrDghItKhRIu8fMN48rEtaN
+cJq25HqMuYIBOw1+USlecjD02OqjkTt8ojJA7biBFr7pylLb69cMtb1q5kg4m2PMNYIPffTOqpbI
+nD/t9b8YlKhLb5tiC3do3ujCu89xmz9lOTYjwYCliQJwf9wTeGn54dADvWnPEZDvGuFccM1Q+6uZ
+HpsR4UV+mt7eF1jY23ABHLteW7Qn8MpnR8L3ZHJcRoUBS8NXbZGb/3Iw9Egm+vpDY+C13X5xSib6
+MjIMmE4xhdre2ht4PlP9SZSa32gM/IYCkEz1aUQYMJ0+aQ7f1x5VyjLZZ4NfPHdTR/TqTPZpNBgw
+nda0Rmax6Hdta/Q2Fv0aBQZMh/aoXLYvIFWx6HuzN3qVTEFg0bcRYMB0aArKlayulUISLWqLyOUs
++jYCDJgOnTFlCOP+S1n2n0sYMB1iCrWx7D+qUDvL/nMJA6aD28y1sOzfY+aaWfafSxgwHQZYuCZW
+fRMAWmzhD7HqP9cwYDqMdAq1hQLXzqLvUQXCt04T8bLo2wgwYDpwBORqj/n/WPRdM8CyjEW/RpHx
+uyk2bKydsX3n9xdQhSYNr9vtarn6qsv+YLFYQhk9MWOHQtLYBzZ3bpVp5u6hc5hI1++mFJ9ZIHAd
+mepTTSQSdXy8ctU9Xm/X4GT7CUeUCePHrqupPueTTJ43owH76ptvb/jPnz3/Uarjzj+vZunTTz58
+Q8ZOnCWvNfjfXNUSnpup/u4Y4XzkhmH2jH2/qeWnz/5q6dfrN16X6rhnnnrk+mlTp2TsXTWjd7Ru
+215/oZ7jNtXWXZHJ8/aGTEHY3hW7eE9AmhyQFHeRwLWNKRS+GVMorCcASrI2c0c65zf4xXObglLl
+6Z6/ym3+/Nqh9pdPtx+9Nm/Zerme47Zu33mRYQM2bWr1X5cu/+Qnsixr9ivLUk6/Gvlba2T2O/sC
+CzuiyrCT9w2zm+rnjnTOT3bXqY0n/sfHFV3/eJ137eksjlY4TFsfHls0kycg9baPdOn5f87zvDTt
+3Cl/zeR5M34NtnffgYlb6rZPFyXREn9NkRX+j2+//1x822TixZXLFpszemKdFu0JvLz8cOhBrWMI
+gDJnpHPBNUPtryTb3xlTSn+xo/ujBr94brrnn+wxr1hwVtFtdhPpTrft6bjyultikiQfD9mc2299
+guM5Ob4tmITopKoJXwyvKN+WyfNm5ZZpSZKFK6+7JRbfzlXAPj4c/skbe/y/1nMsAaBPVBZdW+2x
+JP30KCrU8smR8H0fNIWe8ItKcar+Bln5/bdVOJ66eJD1XQKQ9fvUTw7YymWLzSYTL7I+b948VeQX
+leJ3DwSe1Xs8BSBvNAZePWeK5VOewCl/CIEj0WuH2l++bLBt0bft0es2dESva/CL53bFlBKZgmDm
+SMRt5prHFwnraootyyZ7zCsEjkQz+68yvrwJ2Fdt0ZtDEi1Kp83RiDyizhu7dJLHvFLtGDtPfJeU
+WN+5pMT6DsCxYIYlWpjtKdCo8mahtd4nnt+bdjt9oq5PxnEEgGK4TsibgHXFlJJstkPH5E3AHL18
+V+nP3xNmQ94EbHgv60NUOE11mR5LPsmbgF0w0Pq/HAE59ZEn2Hnim6KyTIH0yZuAldr4hssG2xal
+0+amcsdzDhPpYjWmfJA3AQMAuHuk88ExhcJ6PceeN8Dy4fVl9hdYj6m/y6uAmTkSfmai69LLBtsW
+qX2hbSIk9sMy+8KHxxbdrHYM0i9vFlrjLBwJzRtdcPe1Q22vrG2L3rrHL072S0qxS+COjikUvrlo
+kPW9Eiu/L9fj7C/yLmBx5Q7T9tkO0xN6j48q1H4kLI8CABhi43dbONKnbpjMlbwNmF5+USl+e1/w
+F2taI7Pij6+ZORK+pMT6zu3DnY/iOpk2DJiGblEZ9Nh33q+a//HOFRdTqO2zI+F7d3TFLl5Y5b6A
+1QMh/UFeXeSn6/Xd/t+dHK5Eh8PymN83+n+bzTH1NRgwFW0R+YwN7dHrUx33dVv0h8nujEXHYMBU
+NPilGj0FTygAafCLNdkYU1+EAVMhUar7jltRoVaWY+nL8uEi/3IASLuuak2xpfDpCfoqlI8uFO4H
+gLt0dv17AFiS7nj6qnwIWCkAXJZuIxtP4Gy37jexdB7+yJsa+QA4RSLGMGCIqXyYIpsA4LQeJqUA
+XHdMGRiSjz004uBJV6GZa+/ll+GNpzOWviYfArb6H//1GgEAl5kD3T9KhI4zVMD+1hqZ/d/7Aj/P
+dD36fMARkIdY+ca5Iwse1HrMLtsMcw0WkanjtV3+RRiu3lEo8IfD8piXvve9GzPQupxhAuYXlQHp
+LG6i5AKS4tkflM7O9TjiDBMwlDmyYpwfdsCAIaYwYIgpDBhiCgOGmMKAIaYMtdBqVGaOwI1ldphS
+bAGeAGztisHi/UEIydqFCjkCcHWpHc4faAG7iUB9twjvHQhCVyz1N0zTS6xw6WAbuM0c7PaL8N7+
+IByNpFX5wBAwYCkQAHiqsggmuE4s0VU4TFDlMsO/bfFCTFEP2f2jC+GSkhNrnmV2E1QXW+CBzZ3g
+F9VDNqvCATeVO45vl9p4qCm2wEO1nXAk3LdChlNkClMHWHqEK67cYYIZpeo/wjaqQOgRrjiPmYNb
+EsJzsmILB/9cdup+G0/gzhFOnaM2DgxYCqML1NcsxxSq7xtVoD45jC5U3zfCKQCn8iTAGI2xGBUG
+LAWvxvWST2Oa02rnF9WnVa3rM5/U90plYMBS+KYjCuEkF/MyBfj0SFi1XV1XDDqiyQOxolm96sDe
+gAj7g8l/n2HFYfXzGRUGLIW2iAwv1nf3eLcKyxRea/DB3oD6D3WEJAq/qu+G1oRPfqJC4a29AdjU
+GVNtJ1OAF+q74UBCyCgALDsU0gy0UeGnSB02dcbgxxs7YEKRGUwcwLYuUXN6jNvlE2Hepk6Y4BLA
+buKgvjsG7SrvaokOhWRYUOuFSpcAhQIHjX4RmvvYp8c4DJhOIYnCho70f0chplDYrPGOpUaiFL7z
+pt/OaHCKRExhwBBTGDDEFAYMMYUBQ0xhwBBTGDDElGHWwRwC5719uPPRXI+jPxhk5ffnegxxhgmY
+nSe+G8vsv8z1OFBm4RSJmMKAIaYwYIgpDBhiyjAX+e1Ruez+zZ3bcj2O/uA/Kl1X6v3ZQtYMEzBK
+gQtJxyoIotOjUOP8XXGKRExhwBBTGDDEFAYMMYUBQ0xhwBBTGDDElGHWS4yMJwDXD7NDtccCPAdQ
+543BkqYQRDUq6wAcq8zzT0NsMG2gBZwmDnZ0x+B/DoQgoKMEwAUDrTC9xAoey7HnIhcfCOp6ptJo
+MGApEAB4stIF5yT88troAgEmeSzw71u8IFH1kN03ugAuH3yiAs8IpwmmFltgfm0nBCT1djPLHXBb
+xYkKOxUOE0wdYIWHajt7PCneF+AUmUJNsaVHuOJGOk1wlUb5ppFOU49wxQ208jDzDPXyTW4zBzcn
+2e80EbgLyzf1P1olmsYVqe8brdFurMa+kU4BeJXyTWdptDMqDFgK3Ro1KLSqFGrVrtCaHrXb9b1r
+MAxYCuvboxBJUr5JoQCft0RU29V5Y6o1wlY2q1fJ2RuQ4GAoedUerXZGhQFL4WhEhld2+Xq860QV
+Cq/v9sNuv6jaLiBReL6+u0eNMJkCvLc/qFlERaIUnq/3waHQiYt5CgArmsOwog8GDD9F6rC+PQo7
+ujugymUGjgBs64pBp45K0Tu7RZi3qQOq3Gaw8QR2dIu6KkU3BSWYX9sJVW4zFAoEdvslaFIpSmd0
+GDCd/KIC69rUp0Q1YZnCN+3pl30SFQobe1EuymhwikRMYcAQUxgwxBQGDDGFAUNMYcAQUxgwxJRh
+1sEcAuedVeF4Mtfj6A+wfFMSdp74bip3PJfrcaDMwikSMYUBQ0xhwBBTGDDElGEu8tuicvn8zZ3f
+5Xoc/cFTla4ZWL7pJJQCF5CoO9fj6A+wfBPKGxgwxBQGDDGFAUNMYcAQUxgwxBQGDDFlmPUSI+MI
+wIxSG1R7LGAiAHVdIiw7FIJYivJNAADTS6wwbaAVHDyBep8ISw4GIaRROiCuptgC00us4DZz0OiX
+YMnBoOqT4kaGAUuBAMCj44qgpthy/LVKlxmqPWZ4vM4LSaoKHHfPmQXwg4QKPGOLBDhvgAUW1HZC
+SKPhDWV2uGP4iUo6YwoFuHCQBR6q7exzNcJwikyhutjSI1xxYwoFuHKIevmmCocJZiQp7zTExsNN
+GuWbXGYOZlWcur9Q4OBOLN/U/2iVTBrvOrVu2PF2RQKoVGGCSo2yT2c6TWAiyVuOL1I/n1FhwFII
+aJRTCmmUUwqI6lNgUOMazK+xT6udUWHAUtjQEU16MU8BYJVW+aaumGqtr89btMs3NYeTF0jRamdU
+GLAUmsMy/KbBD+GEi3JRofBmox++96mXb/KLCrz4va9HATuFAiw5GIK/t6kXNREVCi/Ud59SheeL
+lggsPxw6jX9JbuCnSB3WtUZge1cMJnvMwBMC33ljusow1Xlj8K8bO2CyxwIOE4HtXSI0qRSXS7Q3
+IMEDmzthsscMLoGDBr+kWYvMyDBgOnljiuaUqCYgUVjTmn67iEw13+n6CpwiEVMYMMQUBgwxhQFD
+TGUlYDzP9fjIJcsKTylVW+hG/UhWAkYIUSxm8/FVQkopF43G7InHOAWukyfQNz+LG4yJg1jiNqWU
+KAo9/rcmhFCOI1n51jxrU6Tb7WpJ3G4+0jIycdvOE9/0EtufszWe/mqghW8qd5i2J77W1e0bpCgK
+H9922O3dHMdl5Ve1srYOdkb5sPqWo63D49ubttRdMWL4GVsTj/nxqIJ/megSvmwKyeNlSnGNLk1F
+Atd60SDrexaO9Fjy3924d1Lidmnp4MZsjSlrf8QJ48eu27CxdkZ8+4sv182eeeO1LyQewxOQLhxk
+fT9bY8oXa9Z9PTNxe9xZo7L21HfWpsjzz6v5CyHk+Bd6e/btP3vV6rWzs3X+fHXo8JHRq9f+/ZbE
+1y48f+qH2Tp/1gJWNqx016SqCasSX/v1b9/8r4bde6qzNYZ84w8E3c/98pX3YzHRGn9t2NAhDRMr
+x63J1hiyug42985Zj/E8f/zb3lAoXDj/kZ+uXbp85TxJkvre3XQGtnX7zovmzX9sw+49Pa+/7p17
++8OJMwlrhGr8JDALiz9Y+uibb7278OTXPR73kalTJn08vKJ8m8fjPpLVQfUHlBJ/IOhuaWkdvn5j
+7Q/2H2iqPPmQiy8474OnHntoZrLmrGQ9YAAAf3pn8bPvLv4QC/5mUdXEytU/f/rxGWazkP6tHach
+JwEDAFj52ZdzXn/zzy8Fg6GinAwgTxBC6DUzrnj93jk/esRqtQSzfv5cBQwAoL2jc+gHHy1f8Onn
+q+8KBIOunA2kHyKE0KqJ41f/6Nabnpk4IXsX9aeMI5cBi4vGYradO3dN21G/a1qnt2twd7dvYK7H
+1NdwHCfb7Taf2+06Wl42tL5qYuVqz0nfnuSCIQKG+i+8XQcxhQFDTGHAEFMYMMQUBgwxhQFDTGHA
+EFMYMMQUBgwxhQFDTGHAEFMYMMQUBgwxhQFDTGHAEFMYMMQUBgwxhQFDTP0/wELreaTb5aEAAAAA
+SUVORK5CYII=
+"
+ id="image5705"
+ x="25.691656"
+ y="958.81555" />
+ <flowRoot
+ transform="translate(5.68895,960.62676)"
+ style="font-style:normal;font-weight:normal;font-size:40px;line-height:125%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ id="flowRoot5492-1"
+ xml:space="preserve"><flowRegion
+ id="flowRegion5494-2"><rect
+ y="21.440788"
+ x="87.329689"
+ height="85.344925"
+ width="623.21643"
+ id="rect5496-7" /></flowRegion><flowPara
+ style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:48px;font-family:'Roboto Slab';-inkscape-font-specification:'Roboto Slab Bold'"
+ id="flowPara5498-8">ClusterLabs</flowPara></flowRoot> </g>
+ <path
+ style="fill:none;stroke:#000000;stroke-width:0.66936386px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-start:url(#Arrow2Lstart-8);marker-end:url(#Arrow2Lend-3)"
+ d="m 171.96944,872.21881 v 27.12275"
+ id="path4092-2"
+ inkscape:connector-curvature="0" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:0.66936386px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-start:url(#Arrow2Lstart-8-0);marker-end:url(#Arrow2Lend-3-2)"
+ d="m 633.90946,873.02031 v 27.12275"
+ id="path4092-2-6"
+ inkscape:connector-curvature="0" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1.23034394px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-start:url(#Arrow2Lstart-8-7);marker-end:url(#Arrow2Lend-3-3)"
+ d="m 171.69403,685.40072 v 91.63519"
+ id="path4092-2-0"
+ inkscape:connector-curvature="0" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1.21563613px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-start:url(#Arrow2Lstart-8-1);marker-end:url(#Arrow2Lend-3-27)"
+ d="m 341.76078,779.04299 -36.37531,33.2863"
+ id="path4092-2-02"
+ inkscape:connector-curvature="0" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1.21563613px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-start:url(#Arrow2Lstart-8-1-6);marker-end:url(#Arrow2Lend-3-27-9)"
+ d="m 463.50761,777.84747 36.37531,33.2863"
+ id="path4092-2-02-3"
+ inkscape:connector-curvature="0" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1.21563613px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-start:url(#Arrow2Lstart-8-1-3);marker-end:url(#Arrow2Lend-3-27-8)"
+ d="m 305.59901,647.80176 36.37531,33.2863"
+ id="path4092-2-02-5"
+ inkscape:connector-curvature="0" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1.21563613px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-start:url(#Arrow2Lstart-8-1-3-0);marker-end:url(#Arrow2Lend-3-27-8-1)"
+ d="m 469.51079,645.79961 -36.37531,33.2863"
+ id="path4092-2-02-5-9"
+ inkscape:connector-curvature="0" />
+ <rect
+ style="fill:url(#radialGradient3223);fill-opacity:1;stroke:none;stroke-width:0.82428133;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+ id="rect3836"
+ width="252.8412"
+ height="85.582176"
+ x="272.75177"
+ y="686.97394"
+ ry="0.54231739" />
+ <text
+ transform="matrix(0.81122555,0,0,0.75105676,80.28653,143.49774)"
+ id="text14172-8-4"
+ y="774.01617"
+ x="390.88086"
+ style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;line-height:137.99999952%;font-family:'BlairMdITC TT';-inkscape-font-specification:'BlairMdITC TT Medium';text-align:start;writing-mode:lr-tb;text-anchor:start;fill:#ffffff;fill-opacity:1;stroke:none;filter:url(#filter4038-3-5)"
+ xml:space="preserve"><tspan
+ style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:27.33074951px;line-height:137.99999952%;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff"
+ y="774.01617"
+ x="390.88086"
+ id="tspan14174-4-8"
+ sodipodi:role="line">pacemaker-controld</tspan><tspan
+ style="font-style:italic;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:20.49806213px;line-height:137.99999952%;font-family:'Liberation Sans';-inkscape-font-specification:'Liberation Sans Italic';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff"
+ y="805.18372"
+ x="390.88086"
+ sodipodi:role="line"
+ id="tspan956">(coordinates all actions)</tspan></text>
+ </g>
+</svg>
diff --git a/doc/sphinx/shared/images/pcmk-overview.svg b/doc/sphinx/shared/images/pcmk-overview.svg
new file mode 100644
index 0000000..9fb022d
--- /dev/null
+++ b/doc/sphinx/shared/images/pcmk-overview.svg
@@ -0,0 +1,855 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- Created with Inkscape (http://www.inkscape.org/) -->
+
+<svg
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:xlink="http://www.w3.org/1999/xlink"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="800"
+ height="600"
+ id="svg2"
+ version="1.1"
+ inkscape:version="0.47 r22583"
+ sodipodi:docname="pcmk-overview.svg"
+ inkscape:export-filename="/Users/beekhof/Dropbox/Public/pcmk-active-passive.png"
+ inkscape:export-xdpi="90"
+ inkscape:export-ydpi="90">
+ <defs
+ id="defs4">
+ <marker
+ inkscape:stockid="TriangleInL"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="TriangleInL"
+ style="overflow:visible">
+ <path
+ id="path8998"
+ d="M 5.77,0.0 L -2.88,5.0 L -2.88,-5.0 L 5.77,0.0 z "
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1.0pt;marker-start:none"
+ transform="scale(-0.8)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow2Lend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow2Lend"
+ style="overflow:visible;">
+ <path
+ id="path8885"
+ style="font-size:12.0;fill-rule:evenodd;stroke-width:0.62500000;stroke-linejoin:round;"
+ d="M 8.7185878,4.0337352 L -2.2072895,0.016013256 L 8.7185884,-4.0017078 C 6.9730900,-1.6296469 6.9831476,1.6157441 8.7185878,4.0337352 z "
+ transform="scale(1.1) rotate(180) translate(1,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow1Mend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow1Mend"
+ style="overflow:visible;">
+ <path
+ id="path4652"
+ d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1.0pt;marker-start:none;"
+ transform="scale(0.4) rotate(180) translate(10,0)" />
+ </marker>
+ <linearGradient
+ id="linearGradient4616">
+ <stop
+ style="stop-color:#808080;stop-opacity:0.75;"
+ offset="0"
+ id="stop4618" />
+ <stop
+ style="stop-color:#bfbfbf;stop-opacity:0.5;"
+ offset="1"
+ id="stop4620" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient4606">
+ <stop
+ style="stop-color:#000000;stop-opacity:0.58536583;"
+ offset="0"
+ id="stop4608" />
+ <stop
+ style="stop-color:#000000;stop-opacity:0.08130081;"
+ offset="1"
+ id="stop4610" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient4411">
+ <stop
+ style="stop-color:#f3f3f3;stop-opacity:0;"
+ offset="0"
+ id="stop4413" />
+ <stop
+ style="stop-color:#e6e6e6;stop-opacity:0.21138212;"
+ offset="1"
+ id="stop4415" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient4370">
+ <stop
+ style="stop-color:#ffffff;stop-opacity:1;"
+ offset="0"
+ id="stop4372" />
+ <stop
+ style="stop-color:#f7f7f7;stop-opacity:0.69918698;"
+ offset="1"
+ id="stop4374" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient3988">
+ <stop
+ id="stop3990"
+ offset="0"
+ style="stop-color:#d3e219;stop-opacity:1;" />
+ <stop
+ id="stop3992"
+ offset="1"
+ style="stop-color:#e8a411;stop-opacity:1;" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient3838">
+ <stop
+ style="stop-color:#6badf2;stop-opacity:1;"
+ offset="0"
+ id="stop3840" />
+ <stop
+ style="stop-color:#2e447f;stop-opacity:1;"
+ offset="1"
+ id="stop3842" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient3658">
+ <stop
+ style="stop-color:#19e229;stop-opacity:1;"
+ offset="0"
+ id="stop3660" />
+ <stop
+ style="stop-color:#589b56;stop-opacity:1;"
+ offset="1"
+ id="stop3662" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient3650">
+ <stop
+ style="stop-color:#f36d6d;stop-opacity:1;"
+ offset="0"
+ id="stop3652" />
+ <stop
+ style="stop-color:#b81313;stop-opacity:1;"
+ offset="1"
+ id="stop3654" />
+ </linearGradient>
+ <inkscape:perspective
+ sodipodi:type="inkscape:persp3d"
+ inkscape:vp_x="0 : 526.18109 : 1"
+ inkscape:vp_y="0 : 1000 : 0"
+ inkscape:vp_z="744.09448 : 526.18109 : 1"
+ inkscape:persp3d-origin="372.04724 : 350.78739 : 1"
+ id="perspective10" />
+ <filter
+ id="filter3712"
+ inkscape:label="Ridged border"
+ inkscape:menu="Bevels"
+ inkscape:menu-tooltip="Ridged border with inner bevel"
+ color-interpolation-filters="sRGB">
+ <feMorphology
+ id="feMorphology3714"
+ radius="4.3"
+ in="SourceAlpha"
+ result="result91" />
+ <feComposite
+ id="feComposite3716"
+ in2="result91"
+ operator="out"
+ in="SourceGraphic" />
+ <feGaussianBlur
+ id="feGaussianBlur3718"
+ result="result0"
+ stdDeviation="1.2" />
+ <feDiffuseLighting
+ id="feDiffuseLighting3720"
+ diffuseConstant="1"
+ result="result92">
+ <feDistantLight
+ id="feDistantLight3722"
+ elevation="66"
+ azimuth="225" />
+ </feDiffuseLighting>
+ <feBlend
+ id="feBlend3724"
+ in2="SourceGraphic"
+ mode="multiply"
+ result="result93" />
+ <feComposite
+ id="feComposite3726"
+ in2="SourceAlpha"
+ operator="in" />
+ </filter>
+ <filter
+ id="filter4038"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4040"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4042"
+ result="bluralpha"
+ type="matrix"
+ values="-1 0 0 0 1 0 -1 0 0 1 0 0 -1 0 1 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4044"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4046">
+ <feMergeNode
+ id="feMergeNode4048"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4050"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <filter
+ id="filter4066"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4068"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4070"
+ result="bluralpha"
+ type="matrix"
+ values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4072"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4074">
+ <feMergeNode
+ id="feMergeNode4076"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4078"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient4370"
+ id="radialGradient4376"
+ cx="-0.5"
+ cy="-100.5"
+ fx="-0.5"
+ fy="-100.5"
+ r="400.5"
+ gradientTransform="matrix(0.06674414,1.4857892,-1.4966201,0.06723071,-150.87695,6.9995757)"
+ gradientUnits="userSpaceOnUse" />
+ <filter
+ id="filter4381"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4383"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4385"
+ result="bluralpha"
+ type="matrix"
+ values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4387"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4389">
+ <feMergeNode
+ id="feMergeNode4391"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4393"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <filter
+ id="filter4397"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4399"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4401"
+ result="bluralpha"
+ type="matrix"
+ values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4403"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4405">
+ <feMergeNode
+ id="feMergeNode4407"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4409"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <inkscape:perspective
+ id="perspective4466"
+ inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
+ inkscape:vp_z="1 : 0.5 : 1"
+ inkscape:vp_y="0 : 1000 : 0"
+ inkscape:vp_x="0 : 0.5 : 1"
+ sodipodi:type="inkscape:persp3d" />
+ <filter
+ id="filter4508"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4510"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4512"
+ result="bluralpha"
+ type="matrix"
+ values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4514"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4516">
+ <feMergeNode
+ id="feMergeNode4518"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4520"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <filter
+ id="filter4592"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4594"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4596"
+ result="bluralpha"
+ type="matrix"
+ values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4598"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4600">
+ <feMergeNode
+ id="feMergeNode4602"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4604"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <linearGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient4606"
+ id="linearGradient4622"
+ x1="906.94769"
+ y1="-7.3383088"
+ x2="906.94769"
+ y2="-172.97601"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.23092554,0,0,0.7849298,593.37513,596.7001)" />
+ <linearGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient4606"
+ id="linearGradient4626"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.23092554,0,0,1.0521382,-1255.8822,187.84807)"
+ x1="906.94769"
+ y1="-7.3383088"
+ x2="906.94769"
+ y2="-172.97601" />
+ <linearGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient4616"
+ id="linearGradient4636"
+ x1="102.24117"
+ y1="386.07532"
+ x2="-256.56793"
+ y2="98.293198"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1.4992949,0,0,1.4260558,436.2333,350.79316)" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3838"
+ id="radialGradient7925"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.45102834,0,0,0.13605992,149.51615,706.83538)"
+ cx="532.67328"
+ cy="425.74258"
+ fx="532.67328"
+ fy="425.74258"
+ r="259.90594" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3650"
+ id="radialGradient7940"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.54381919,0,0,0.07907775,268.75446,722.63862)"
+ cx="531.18811"
+ cy="483.1683"
+ fx="531.18811"
+ fy="483.1683"
+ r="258.42081" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3988"
+ id="radialGradient8010"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3988"
+ id="radialGradient8012"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3988"
+ id="radialGradient8014"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3988"
+ id="radialGradient8044"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient8071"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient8073"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3838"
+ id="radialGradient13206"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.825213,0,0,0.13557033,-49.370064,591.04382)"
+ cx="532.67328"
+ cy="425.74258"
+ fx="532.67328"
+ fy="425.74258"
+ r="259.90594" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient13212"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient13218"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient13220"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient13238"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient13240"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3650"
+ id="radialGradient13247"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.54381919,0,0,0.07907775,268.75446,722.63862)"
+ cx="531.18811"
+ cy="483.1683"
+ fx="531.18811"
+ fy="483.1683"
+ r="258.42081" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3838"
+ id="radialGradient13254"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.825213,0,0,0.13557033,-49.370064,591.04382)"
+ cx="532.67328"
+ cy="425.74258"
+ fx="532.67328"
+ fy="425.74258"
+ r="259.90594" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3650"
+ id="radialGradient14019"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.54381919,0,0,0.07907775,-46.100777,848.04824)"
+ cx="531.18811"
+ cy="483.1683"
+ fx="531.18811"
+ fy="483.1683"
+ r="258.42081" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient14031"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ </defs>
+ <sodipodi:namedview
+ id="base"
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1.0"
+ inkscape:pageopacity="0.0"
+ inkscape:pageshadow="2"
+ inkscape:zoom="1.0076756"
+ inkscape:cx="371.81433"
+ inkscape:cy="318.13221"
+ inkscape:document-units="px"
+ inkscape:current-layer="layer1"
+ showgrid="false"
+ inkscape:window-width="1335"
+ inkscape:window-height="910"
+ inkscape:window-x="211"
+ inkscape:window-y="75"
+ inkscape:window-maximized="0"
+ showguides="true"
+ inkscape:guide-bbox="true" />
+ <metadata
+ id="metadata7">
+ <rdf:RDF>
+ <cc:Work
+ rdf:about="">
+ <dc:format>image/svg+xml</dc:format>
+ <dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+ <dc:title />
+ </cc:Work>
+ </rdf:RDF>
+ </metadata>
+ <g
+ inkscape:label="Layer 1"
+ inkscape:groupmode="layer"
+ id="layer1"
+ transform="translate(0,-452.36218)">
+ <rect
+ style="fill:url(#linearGradient4636);fill-opacity:1.0;fill-rule:nonzero;stroke:#000000;stroke-width:0;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+ id="rect4628"
+ width="797"
+ height="597"
+ x="-1.2012364e-05"
+ y="455.36218"
+ ry="1.0732931" />
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;filter:url(#filter4066);font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="142.57423"
+ y="524.63947"
+ id="text4080"
+ sodipodi:linespacing="100%"
+ transform="matrix(0.99136527,0,0,1,-11.873289,0)"><tspan
+ sodipodi:role="line"
+ id="tspan4082"
+ x="142.57423"
+ y="524.63947"
+ style="font-size:48px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Pacemaker 10,000ft</tspan></text>
+ <rect
+ style="fill:url(#linearGradient4622);fill-opacity:1.0;fill-rule:nonzero;stroke:#000000;stroke-width:0;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+ id="rect4614"
+ width="3"
+ height="591.4361"
+ x="797"
+ y="460.92606"
+ ry="0.59076202" />
+ <rect
+ ry="0.79187125"
+ y="5.8533502"
+ x="-1052.2572"
+ height="792.77484"
+ width="3"
+ id="rect4624"
+ style="fill:url(#linearGradient4626);fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+ transform="matrix(0,-1,1,0,0,0)" />
+ <text
+ id="text7860"
+ y="950.46515"
+ x="234.48592"
+ style="font-size:40px;font-style:normal;font-weight:normal;fill:#000000;fill-opacity:1;stroke:none;font-family:Bitstream Vera Sans"
+ xml:space="preserve"><tspan
+ y="950.46515"
+ x="234.48592"
+ id="tspan7862"
+ sodipodi:role="line" /></text>
+ <g
+ id="g18201">
+ <rect
+ ry="0.39154699"
+ y="866.3241"
+ x="102.50718"
+ height="39.864105"
+ width="280.52457"
+ id="rect3846"
+ style="fill:url(#radialGradient14019);fill-opacity:1;stroke:#000000;stroke-width:0.73985893;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+ <g
+ transform="translate(1.3392711,30.378478)"
+ id="g13249">
+ <rect
+ style="fill:url(#radialGradient13254);fill-opacity:1;stroke:#000000;stroke-width:0.96548235;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+ id="rect3836"
+ width="428.13034"
+ height="69.341446"
+ x="176.1337"
+ y="614.09119"
+ ry="0.43940309" />
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#ffffff;fill-opacity:1;stroke:none;filter:url(#filter4038);font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="390.88086"
+ y="774.01617"
+ id="text4034"
+ sodipodi:linespacing="100%"
+ transform="matrix(0.81060355,0,0,1,73.865826,-112.54432)"><tspan
+ sodipodi:role="line"
+ id="tspan4036"
+ x="390.88086"
+ y="774.01617"
+ style="font-size:32px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:center;line-height:100%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Cluster Resource Manager</tspan></text>
+ </g>
+ <path
+ inkscape:connection-end="#g7850"
+ inkscape:connection-start="#g7850"
+ inkscape:connector-type="polyline"
+ id="path7961"
+ d="m 528.69275,773.02983 0,0"
+ style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" />
+ <g
+ transform="matrix(1.3664543,0,0,1,-143.97962,136.93991)"
+ id="g7850">
+ <rect
+ transform="matrix(0.67537021,0,0,0.4928394,331.12309,579.2426)"
+ ry="12.871287"
+ y="79.207924"
+ x="96.039604"
+ height="72.277229"
+ width="285.14853"
+ id="rect7852"
+ style="fill:url(#radialGradient14031);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)" />
+ </g>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="421.57001"
+ y="778.29926"
+ id="text7854"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan7856"
+ x="421.57001"
+ y="778.29926"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Local Resource Manager</tspan></text>
+ <g
+ transform="translate(156.84476,-72.766232)"
+ id="g13226">
+ <g
+ transform="matrix(1.3664543,0,0,1,-298.23263,284.08564)"
+ id="g13214">
+ <rect
+ transform="matrix(0.67537021,0,0,0.4928394,331.12309,579.2426)"
+ ry="12.871287"
+ y="79.207924"
+ x="96.039604"
+ height="72.277229"
+ width="285.14853"
+ id="rect13216"
+ style="fill:url(#radialGradient13240);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)" />
+ </g>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="288.91498"
+ y="924.71747"
+ id="text4472"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan4474"
+ x="288.91498"
+ y="924.71747"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Resource Agents</tspan></text>
+ </g>
+ <text
+ sodipodi:linespacing="100%"
+ id="text14025"
+ y="890.43848"
+ x="127.82468"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="890.43848"
+ x="127.82468"
+ id="tspan14027"
+ sodipodi:role="line">Messaging &amp; Membership</tspan></text>
+ <path
+ inkscape:connector-type="polyline"
+ transform="translate(0,452.36218)"
+ id="path17103"
+ d="m 526.9553,334.04139 0.99238,41.68008"
+ style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow2Lend)" />
+ <path
+ inkscape:connector-type="polyline"
+ transform="translate(0,452.36218)"
+ id="path17105"
+ d="m 522.98577,259.61268 0.99238,39.69531"
+ style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow2Lend)" />
+ <path
+ inkscape:connector-type="polyline"
+ transform="translate(0,452.36218)"
+ id="path17107"
+ d="m 234.20236,260.60506 0.99238,150.8422"
+ style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow2Lend)" />
+ </g>
+ </g>
+</svg>
diff --git a/doc/sphinx/shared/images/pcmk-shared-failover.svg b/doc/sphinx/shared/images/pcmk-shared-failover.svg
new file mode 100644
index 0000000..ff65326
--- /dev/null
+++ b/doc/sphinx/shared/images/pcmk-shared-failover.svg
@@ -0,0 +1,1306 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- Created with Inkscape (http://www.inkscape.org/) -->
+
+<svg
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:xlink="http://www.w3.org/1999/xlink"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="800"
+ height="600"
+ id="svg2"
+ version="1.1"
+ inkscape:version="0.47 r22583"
+ sodipodi:docname="pcmk-shared-failover.svg"
+ inkscape:export-filename="/Users/beekhof/Dropbox/Public/pcmk-shared-failover.png"
+ inkscape:export-xdpi="90"
+ inkscape:export-ydpi="90">
+ <defs
+ id="defs4">
+ <linearGradient
+ id="linearGradient4812">
+ <stop
+ style="stop-color:#000000;stop-opacity:1;"
+ offset="0"
+ id="stop4814" />
+ <stop
+ style="stop-color:#363636;stop-opacity:0.55284554;"
+ offset="1"
+ id="stop4816" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient4802">
+ <stop
+ style="stop-color:#808080;stop-opacity:0.75;"
+ offset="0"
+ id="stop4804" />
+ <stop
+ style="stop-color:#bfbfbf;stop-opacity:0.5;"
+ offset="1"
+ id="stop4806" />
+ </linearGradient>
+ <marker
+ inkscape:stockid="Arrow2Lend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow2Lend"
+ style="overflow:visible;">
+ <path
+ id="path4041"
+ style="font-size:12.0;fill-rule:evenodd;stroke-width:0.62500000;stroke-linejoin:round;"
+ d="M 8.7185878,4.0337352 L -2.2072895,0.016013256 L 8.7185884,-4.0017078 C 6.9730900,-1.6296469 6.9831476,1.6157441 8.7185878,4.0337352 z "
+ transform="scale(1.1) rotate(180) translate(1,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow2Mend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow2Mend"
+ style="overflow:visible;">
+ <path
+ id="path4047"
+ style="font-size:12.0;fill-rule:evenodd;stroke-width:0.62500000;stroke-linejoin:round;"
+ d="M 8.7185878,4.0337352 L -2.2072895,0.016013256 L 8.7185884,-4.0017078 C 6.9730900,-1.6296469 6.9831476,1.6157441 8.7185878,4.0337352 z "
+ transform="scale(0.6) rotate(180) translate(0,0)" />
+ </marker>
+ <linearGradient
+ id="linearGradient4411">
+ <stop
+ style="stop-color:#f3f3f3;stop-opacity:0;"
+ offset="0"
+ id="stop4413" />
+ <stop
+ style="stop-color:#e6e6e6;stop-opacity:0.21138212;"
+ offset="1"
+ id="stop4415" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient4370">
+ <stop
+ style="stop-color:#ffffff;stop-opacity:1;"
+ offset="0"
+ id="stop4372" />
+ <stop
+ style="stop-color:#f7f7f7;stop-opacity:0.69918698;"
+ offset="1"
+ id="stop4374" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient3988">
+ <stop
+ id="stop3990"
+ offset="0"
+ style="stop-color:#d3e219;stop-opacity:1;" />
+ <stop
+ id="stop3992"
+ offset="1"
+ style="stop-color:#e8a411;stop-opacity:1;" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient3838">
+ <stop
+ style="stop-color:#6badf2;stop-opacity:1;"
+ offset="0"
+ id="stop3840" />
+ <stop
+ style="stop-color:#2e447f;stop-opacity:1;"
+ offset="1"
+ id="stop3842" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient3658">
+ <stop
+ style="stop-color:#19e229;stop-opacity:1;"
+ offset="0"
+ id="stop3660" />
+ <stop
+ style="stop-color:#589b56;stop-opacity:1;"
+ offset="1"
+ id="stop3662" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient3650">
+ <stop
+ style="stop-color:#f36d6d;stop-opacity:1;"
+ offset="0"
+ id="stop3652" />
+ <stop
+ style="stop-color:#b81313;stop-opacity:1;"
+ offset="1"
+ id="stop3654" />
+ </linearGradient>
+ <inkscape:perspective
+ sodipodi:type="inkscape:persp3d"
+ inkscape:vp_x="0 : 526.18109 : 1"
+ inkscape:vp_y="0 : 1000 : 0"
+ inkscape:vp_z="744.09448 : 526.18109 : 1"
+ inkscape:persp3d-origin="372.04724 : 350.78739 : 1"
+ id="perspective10" />
+ <filter
+ id="filter3712"
+ inkscape:label="Ridged border"
+ inkscape:menu="Bevels"
+ inkscape:menu-tooltip="Ridged border with inner bevel"
+ color-interpolation-filters="sRGB">
+ <feMorphology
+ id="feMorphology3714"
+ radius="4.3"
+ in="SourceAlpha"
+ result="result91" />
+ <feComposite
+ id="feComposite3716"
+ in2="result91"
+ operator="out"
+ in="SourceGraphic" />
+ <feGaussianBlur
+ id="feGaussianBlur3718"
+ result="result0"
+ stdDeviation="1.2" />
+ <feDiffuseLighting
+ id="feDiffuseLighting3720"
+ diffuseConstant="1"
+ result="result92">
+ <feDistantLight
+ id="feDistantLight3722"
+ elevation="66"
+ azimuth="225" />
+ </feDiffuseLighting>
+ <feBlend
+ id="feBlend3724"
+ in2="SourceGraphic"
+ mode="multiply"
+ result="result93" />
+ <feComposite
+ id="feComposite3726"
+ in2="SourceAlpha"
+ operator="in" />
+ </filter>
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3750"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3838"
+ id="radialGradient3844"
+ cx="532.67328"
+ cy="425.74258"
+ fx="532.67328"
+ fy="425.74258"
+ r="259.90594"
+ gradientTransform="matrix(0.99606758,0,0,0.13538552,-23.806274,801.65349)"
+ gradientUnits="userSpaceOnUse" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3650"
+ id="radialGradient3854"
+ cx="531.18811"
+ cy="483.1683"
+ fx="531.18811"
+ fy="483.1683"
+ r="258.42081"
+ gradientTransform="matrix(1,0,0,0.07856171,-23.920792,882.72047)"
+ gradientUnits="userSpaceOnUse" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3862"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3866"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3870"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3884"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3886"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3888"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3890"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3904"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3906"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3908"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3910"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3924"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3926"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3928"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3930"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3944"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3946"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3948"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3950"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3964"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3966"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3968"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3970"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3988"
+ id="radialGradient3974"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3988"
+ id="radialGradient3996"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3988"
+ id="radialGradient4000"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3988"
+ id="radialGradient4004"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3988"
+ id="radialGradient4024"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.07856171,-6.9306931,971.82938)"
+ cx="531.18811"
+ cy="483.1683"
+ fx="531.18811"
+ fy="483.1683"
+ r="258.42081" />
+ <filter
+ id="filter4038"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4040"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4042"
+ result="bluralpha"
+ type="matrix"
+ values="-1 0 0 0 1 0 -1 0 0 1 0 0 -1 0 1 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4044"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4046">
+ <feMergeNode
+ id="feMergeNode4048"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4050"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <filter
+ id="filter4066"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4068"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4070"
+ result="bluralpha"
+ type="matrix"
+ values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4072"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4074">
+ <feMergeNode
+ id="feMergeNode4076"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4078"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient4370"
+ id="radialGradient4376"
+ cx="-0.5"
+ cy="-100.5"
+ fx="-0.5"
+ fy="-100.5"
+ r="400.5"
+ gradientTransform="matrix(0.06674414,1.4857892,-1.4966201,0.06723071,-150.87695,6.9995757)"
+ gradientUnits="userSpaceOnUse" />
+ <filter
+ id="filter4381"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4383"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4385"
+ result="bluralpha"
+ type="matrix"
+ values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4387"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4389">
+ <feMergeNode
+ id="feMergeNode4391"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4393"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <filter
+ id="filter4397"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4399"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4401"
+ result="bluralpha"
+ type="matrix"
+ values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4403"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4405">
+ <feMergeNode
+ id="feMergeNode4407"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4409"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient4411"
+ id="radialGradient4417"
+ cx="35.009148"
+ cy="295.5629"
+ fx="35.009148"
+ fy="295.5629"
+ r="178.9604"
+ gradientTransform="matrix(-0.01440824,3.0997761,-3.960971,-0.01841003,1186.567,-92.683155)"
+ gradientUnits="userSpaceOnUse" />
+ <inkscape:perspective
+ id="perspective6612"
+ inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
+ inkscape:vp_z="1 : 0.5 : 1"
+ inkscape:vp_y="0 : 1000 : 0"
+ inkscape:vp_x="0 : 0.5 : 1"
+ sodipodi:type="inkscape:persp3d" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3203"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <inkscape:perspective
+ id="perspective5419"
+ inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
+ inkscape:vp_z="1 : 0.5 : 1"
+ inkscape:vp_y="0 : 1000 : 0"
+ inkscape:vp_x="0 : 0.5 : 1"
+ sodipodi:type="inkscape:persp3d" />
+ <linearGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient4802"
+ id="linearGradient4808"
+ x1="596.03955"
+ y1="902.85724"
+ x2="100"
+ y2="526.61963"
+ gradientUnits="userSpaceOnUse" />
+ <linearGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient4812"
+ id="linearGradient4818"
+ x1="-104.39109"
+ y1="-1.980198"
+ x2="-104.39109"
+ y2="588.01978"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.23076923,0,0,1.0100688,822.59023,-1050.3621)" />
+ <linearGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient4812"
+ id="linearGradient4822"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.23076923,0,0,1.35073,-1027.3314,-798.74606)"
+ x1="-104.39109"
+ y1="-1.980198"
+ x2="-104.39109"
+ y2="588.01978" />
+ </defs>
+ <sodipodi:namedview
+ id="base"
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1.0"
+ inkscape:pageopacity="0.0"
+ inkscape:pageshadow="2"
+ inkscape:zoom="1.01"
+ inkscape:cx="325.50589"
+ inkscape:cy="240.20345"
+ inkscape:document-units="px"
+ inkscape:current-layer="layer1"
+ showgrid="false"
+ inkscape:window-width="1674"
+ inkscape:window-height="978"
+ inkscape:window-x="0"
+ inkscape:window-y="46"
+ inkscape:window-maximized="0"
+ showguides="true"
+ inkscape:guide-bbox="true" />
+ <metadata
+ id="metadata7">
+ <rdf:RDF>
+ <cc:Work
+ rdf:about="">
+ <dc:format>image/svg+xml</dc:format>
+ <dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+ <dc:title />
+ </cc:Work>
+ </rdf:RDF>
+ </metadata>
+ <g
+ inkscape:label="Layer 1"
+ inkscape:groupmode="layer"
+ id="layer1"
+ transform="translate(0,-452.36218)">
+ <rect
+ style="fill:url(#linearGradient4808);fill-opacity:1;fill-rule:nonzero;stroke:#646464;stroke-width:0;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+ id="rect4800"
+ width="797"
+ height="597"
+ x="-6.8456814e-08"
+ y="452.36218"
+ ry="1.0732931" />
+ <rect
+ style="fill:url(#radialGradient3750);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)"
+ id="rect3748"
+ width="285.14853"
+ height="72.277229"
+ x="96.039604"
+ y="79.207924"
+ ry="12.871287"
+ transform="matrix(0.43829706,0,0,0.49424167,204.16871,538.48679)" />
+ <rect
+ style="fill:url(#radialGradient3844);fill-opacity:1;stroke:#000000;stroke-width:1.06000876;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+ id="rect3836"
+ width="516.77173"
+ height="69.246925"
+ x="248.38644"
+ y="824.66943"
+ ry="0.43880409" />
+ <rect
+ style="fill:url(#radialGradient3854);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+ id="rect3846"
+ width="515.84161"
+ height="39.603962"
+ x="249.34653"
+ y="900.87708"
+ ry="0.38899186" />
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="284.99011"
+ y="602.8573"
+ id="text3856"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan3858"
+ x="284.99011"
+ y="602.8573"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">URL</tspan></text>
+ <rect
+ transform="matrix(0.43829706,0,0,0.49424167,334.86178,537.49669)"
+ ry="12.871287"
+ y="79.207924"
+ x="96.039604"
+ height="72.277229"
+ width="285.14853"
+ id="rect3860"
+ style="fill:url(#radialGradient3862);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)" />
+ <rect
+ style="fill:url(#radialGradient3866);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)"
+ id="rect3864"
+ width="285.14853"
+ height="72.277229"
+ x="96.039604"
+ y="79.207924"
+ ry="12.871287"
+ transform="matrix(0.43829706,0,0,0.49424167,465.55485,536.50659)" />
+ <rect
+ transform="matrix(0.43829706,0,0,0.49424167,204.16871,580.07095)"
+ ry="12.871287"
+ y="79.207924"
+ x="96.039604"
+ height="72.277229"
+ width="285.14853"
+ id="rect3872"
+ style="fill:url(#radialGradient3884);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)" />
+ <text
+ sodipodi:linespacing="100%"
+ id="text3874"
+ y="642.46124"
+ x="282.01981"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="642.46124"
+ x="282.01981"
+ id="tspan3876"
+ sodipodi:role="line">Mail</tspan></text>
+ <rect
+ style="fill:url(#radialGradient3886);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)"
+ id="rect3878"
+ width="285.14853"
+ height="72.277229"
+ x="96.039604"
+ y="79.207924"
+ ry="12.871287"
+ transform="matrix(0.43829706,0,0,0.49424167,334.86178,579.08085)" />
+ <rect
+ style="fill:url(#radialGradient3908);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)"
+ id="rect3900"
+ width="285.14853"
+ height="72.277229"
+ x="96.039604"
+ y="79.207924"
+ ry="12.871287"
+ transform="matrix(0.43829706,0,0,0.49424167,461.55485,619.67491)" />
+ <rect
+ transform="matrix(0.43829706,0,0,0.49424167,598.24792,619.67491)"
+ ry="12.871287"
+ y="79.207924"
+ x="96.039604"
+ height="72.277229"
+ width="285.14853"
+ id="rect3902"
+ style="fill:url(#radialGradient3910);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)" />
+ <rect
+ transform="matrix(0.43829706,0,0,0.49424167,205.15881,663.23927)"
+ ry="12.871287"
+ y="79.207924"
+ x="96.039604"
+ height="72.277229"
+ width="285.14853"
+ id="rect3912"
+ style="fill:url(#radialGradient3924);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)" />
+ <text
+ sodipodi:linespacing="100%"
+ id="text3914"
+ y="726.61969"
+ x="280.03961"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="726.61969"
+ x="280.03961"
+ id="tspan3916"
+ sodipodi:role="line">Files</tspan></text>
+ <rect
+ style="fill:url(#radialGradient3926);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)"
+ id="rect3918"
+ width="285.14853"
+ height="72.277229"
+ x="96.039604"
+ y="79.207924"
+ ry="12.871287"
+ transform="matrix(0.43829706,0,0,0.49424167,335.85188,662.24917)" />
+ <rect
+ transform="matrix(0.43829706,0,0,0.49424167,335.85188,702.84323)"
+ ry="12.871287"
+ y="79.207924"
+ x="96.039604"
+ height="72.277229"
+ width="285.14853"
+ id="rect3938"
+ style="fill:url(#radialGradient3946);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)" />
+ <rect
+ transform="matrix(0.43829706,0,0,0.49424167,597.23802,701.85313)"
+ ry="12.871287"
+ y="79.207924"
+ x="96.039604"
+ height="72.277229"
+ width="285.14853"
+ id="rect3942"
+ style="fill:url(#radialGradient3950);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)" />
+ <rect
+ transform="matrix(0.43829706,0,0,0.49424167,205.15881,745.41749)"
+ ry="12.871287"
+ y="79.207924"
+ x="96.039604"
+ height="72.277229"
+ width="285.14853"
+ id="rect3952"
+ style="fill:url(#radialGradient3964);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)" />
+ <text
+ sodipodi:linespacing="100%"
+ id="text3954"
+ y="807.80786"
+ x="263.20795"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="807.80786"
+ x="263.20795"
+ id="tspan3956"
+ sodipodi:role="line"> DRBD</tspan></text>
+ <rect
+ style="fill:url(#radialGradient3970);fill-opacity:1;stroke:#000000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)"
+ id="rect3962"
+ width="285.14853"
+ height="72.277229"
+ x="96.039604"
+ y="79.207924"
+ ry="12.871287"
+ transform="matrix(0.43829706,0,0,0.49424167,597.25783,743.43729)" />
+ <rect
+ transform="matrix(0.43829706,0,0,0.49424167,208.12911,908.78383)"
+ ry="12.871287"
+ y="79.207924"
+ x="96.039604"
+ height="72.277229"
+ width="285.14853"
+ id="rect3972"
+ style="fill:url(#radialGradient3974);fill-opacity:1;stroke:#000000;stroke-width:2.14855289;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)" />
+ <rect
+ style="fill:url(#radialGradient3996);fill-opacity:1;stroke:#000000;stroke-width:2.14855289;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)"
+ id="rect3994"
+ width="285.14853"
+ height="72.277229"
+ x="96.039604"
+ y="79.207924"
+ ry="12.871287"
+ transform="matrix(0.43829706,0,0,0.49424167,338.82218,907.79373)" />
+ <rect
+ transform="matrix(0.43829706,0,0,0.49424167,470.50535,907.79373)"
+ ry="12.871287"
+ y="79.207924"
+ x="96.039604"
+ height="72.277229"
+ width="285.14853"
+ id="rect3998"
+ style="fill:url(#radialGradient4000);fill-opacity:1;stroke:#000000;stroke-width:2.14855289;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)" />
+ <rect
+ style="fill:url(#radialGradient4004);fill-opacity:1;stroke:#000000;stroke-width:2.14855289;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;filter:url(#filter3712)"
+ id="rect4002"
+ width="285.14853"
+ height="72.277229"
+ x="96.039604"
+ y="79.207924"
+ ry="12.871287"
+ transform="matrix(0.43829706,0,0,0.49424167,600.20832,907.79373)" />
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="284"
+ y="971.17413"
+ id="text4006"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan4008"
+ x="284"
+ y="971.17413"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Host</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text4010"
+ y="970.18402"
+ x="415.68317"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="970.18402"
+ x="415.68317"
+ id="tspan4012"
+ sodipodi:role="line">Host</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="548.35645"
+ y="970.18402"
+ id="text4014"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan4016"
+ x="548.35645"
+ y="970.18402"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Host</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text4018"
+ y="970.18402"
+ x="679.0495"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="970.18402"
+ x="679.0495"
+ id="tspan4020"
+ sodipodi:role="line">Host</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text4030"
+ y="926.61969"
+ x="437.46533"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;filter:url(#filter4066);font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:20px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="926.61969"
+ x="437.46533"
+ id="tspan4032"
+ sodipodi:role="line">CoroSync</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#ffffff;fill-opacity:1;stroke:none;filter:url(#filter4038);font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="505.78217"
+ y="870.18402"
+ id="text4034"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan4036"
+ x="505.78217"
+ y="870.18402"
+ style="font-size:32px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:center;line-height:100%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Pacemaker</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;filter:url(#filter4066);font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="142.57423"
+ y="524.63947"
+ id="text4080"
+ sodipodi:linespacing="100%"
+ transform="matrix(0.95354502,0,0,1,-8.8433856,0)"><tspan
+ sodipodi:role="line"
+ id="tspan4082"
+ x="142.57423"
+ y="524.63947"
+ style="font-size:48px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Shared Failover</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text4084"
+ y="970.18402"
+ x="44.554447"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:20px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="970.18402"
+ x="44.554447"
+ id="tspan4086"
+ sodipodi:role="line">Hardware</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="120.79207"
+ y="886.02557"
+ id="text4088"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan4090"
+ x="120.79207"
+ y="886.02557"
+ style="font-size:20px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:center;line-height:100%;writing-mode:lr-tb;text-anchor:middle;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Cluster</tspan><tspan
+ sodipodi:role="line"
+ x="120.79207"
+ y="906.02557"
+ style="font-size:20px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:center;line-height:100%;writing-mode:lr-tb;text-anchor:middle;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ id="tspan4092">Software</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text4094"
+ y="705.82751"
+ x="120.79207"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ id="tspan4098"
+ style="font-size:20px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:center;line-height:100%;writing-mode:lr-tb;text-anchor:middle;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="705.82751"
+ x="120.79207"
+ sodipodi:role="line">Services</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="391.92081"
+ y="765.23352"
+ id="text3934"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan3936"
+ x="391.92081"
+ y="765.23352"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"> DRBD</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="526.57422"
+ y="681.07513"
+ id="text3894"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan3896"
+ x="526.57422"
+ y="681.07513"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">D'base</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text3205"
+ y="764.24341"
+ x="657.26733"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="764.24341"
+ x="657.26733"
+ id="tspan3207"
+ sodipodi:role="line"> DRBD</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="656.27722"
+ y="807.80774"
+ id="text3209"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan3211"
+ x="656.27722"
+ y="807.80774"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"> DRBD</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text3213"
+ y="682.06525"
+ x="670.13861"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="682.06525"
+ x="670.13861"
+ id="tspan3215"
+ sodipodi:role="line">D'base</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text3217"
+ y="600.87708"
+ x="417.66339"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="600.87708"
+ x="417.66339"
+ id="tspan3219"
+ sodipodi:role="line">URL</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="549.34656"
+ y="599.88696"
+ id="text3221"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan3223"
+ x="549.34656"
+ y="599.88696"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">URL</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="389.94058"
+ y="643.45135"
+ id="text3225"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan3227"
+ x="389.94058"
+ y="643.45135"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Web Site</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text3233"
+ y="726.61963"
+ x="410.73267"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="726.61963"
+ x="410.73267"
+ id="tspan3235"
+ sodipodi:role="line">Files</tspan></text>
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow2Lend)"
+ d="M 502.92552,759.61559 639.3319,759.0989"
+ id="path3239"
+ inkscape:connector-type="polyline"
+ inkscape:connection-start="#rect3938"
+ inkscape:connection-end="#rect3942" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow2Lend)"
+ d="m 628.62849,676.68398 11.71331,0"
+ id="path3243"
+ inkscape:connector-type="polyline"
+ inkscape:connection-start="#rect3900"
+ inkscape:connection-end="#rect3902" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-start:none;marker-end:url(#Arrow2Lend);display:inline"
+ d="m 372.23245,802.11097 267.11926,-1.34902"
+ id="path4653"
+ inkscape:connector-type="polyline"
+ inkscape:connection-end="#rect3962"
+ inkscape:connection-start="#rect3952" />
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:American Typewriter;-inkscape-font-specification:American Typewriter"
+ x="545.38617"
+ y="749.39185"
+ id="text5425"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan5427"
+ x="545.38617"
+ y="749.39185"
+ style="font-size:10px;font-style:italic;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium">Synch</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text5429"
+ y="791.96613"
+ x="543.40594"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:American Typewriter;-inkscape-font-specification:American Typewriter"
+ xml:space="preserve"><tspan
+ style="font-size:10px;font-style:italic;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ y="791.96613"
+ x="543.40594"
+ id="tspan5431"
+ sodipodi:role="line">Synch</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:American Typewriter;-inkscape-font-specification:American Typewriter"
+ x="613.703"
+ y="650.38196"
+ id="text5433"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan5435"
+ x="613.703"
+ y="650.38196"
+ style="font-size:10px;font-style:italic;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium">Synch</tspan></text>
+ <rect
+ style="fill:url(#linearGradient4818);fill-opacity:1;fill-rule:nonzero;stroke:#646464;stroke-width:0;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+ id="rect4810"
+ width="3"
+ height="595.94061"
+ x="797"
+ y="-1052.3622"
+ ry="6.5654473"
+ transform="scale(1,-1)" />
+ <rect
+ transform="matrix(0,-1,-1,0,0,0)"
+ ry="8.7797451"
+ y="-801.42078"
+ x="-1052.9216"
+ height="796.93066"
+ width="3"
+ id="rect4820"
+ style="fill:url(#linearGradient4822);fill-opacity:1;fill-rule:nonzero;stroke:#646464;stroke-width:0;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none" />
+ </g>
+</svg>
diff --git a/doc/sphinx/shared/images/pcmk-stack.svg b/doc/sphinx/shared/images/pcmk-stack.svg
new file mode 100644
index 0000000..fcbe137
--- /dev/null
+++ b/doc/sphinx/shared/images/pcmk-stack.svg
@@ -0,0 +1,925 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- Created with Inkscape (http://www.inkscape.org/) -->
+
+<svg
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:xlink="http://www.w3.org/1999/xlink"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="800"
+ height="600"
+ id="svg2"
+ version="1.1"
+ inkscape:version="0.48.2 r9819"
+ sodipodi:docname="pcmk-stack.svg"
+ inkscape:export-filename="/Users/beekhof/Dropbox/Public/pcmk-active-passive.png"
+ inkscape:export-xdpi="90"
+ inkscape:export-ydpi="90">
+ <defs
+ id="defs4">
+ <linearGradient
+ id="linearGradient3951">
+ <stop
+ style="stop-color:#000000;stop-opacity:1;"
+ offset="0"
+ id="stop3953" />
+ <stop
+ style="stop-color:#000000;stop-opacity:0;"
+ offset="1"
+ id="stop3955" />
+ </linearGradient>
+ <marker
+ inkscape:stockid="TriangleInL"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="TriangleInL"
+ style="overflow:visible">
+ <path
+ id="path8998"
+ d="M 5.77,0.0 L -2.88,5.0 L -2.88,-5.0 L 5.77,0.0 z "
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1.0pt;marker-start:none"
+ transform="scale(-0.8)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow2Lend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow2Lend"
+ style="overflow:visible;">
+ <path
+ id="path8885"
+ style="font-size:12.0;fill-rule:evenodd;stroke-width:0.62500000;stroke-linejoin:round;"
+ d="M 8.7185878,4.0337352 L -2.2072895,0.016013256 L 8.7185884,-4.0017078 C 6.9730900,-1.6296469 6.9831476,1.6157441 8.7185878,4.0337352 z "
+ transform="scale(1.1) rotate(180) translate(1,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow1Mend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow1Mend"
+ style="overflow:visible;">
+ <path
+ id="path4652"
+ d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1.0pt;marker-start:none;"
+ transform="scale(0.4) rotate(180) translate(10,0)" />
+ </marker>
+ <linearGradient
+ id="linearGradient4616">
+ <stop
+ style="stop-color:#808080;stop-opacity:0.75;"
+ offset="0"
+ id="stop4618" />
+ <stop
+ style="stop-color:#bfbfbf;stop-opacity:0.5;"
+ offset="1"
+ id="stop4620" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient4606">
+ <stop
+ style="stop-color:#000000;stop-opacity:0.58536583;"
+ offset="0"
+ id="stop4608" />
+ <stop
+ style="stop-color:#000000;stop-opacity:0.08130081;"
+ offset="1"
+ id="stop4610" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient4411">
+ <stop
+ style="stop-color:#f3f3f3;stop-opacity:0;"
+ offset="0"
+ id="stop4413" />
+ <stop
+ style="stop-color:#e6e6e6;stop-opacity:0.21138212;"
+ offset="1"
+ id="stop4415" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient4370">
+ <stop
+ style="stop-color:#ffffff;stop-opacity:1;"
+ offset="0"
+ id="stop4372" />
+ <stop
+ style="stop-color:#f7f7f7;stop-opacity:0.69918698;"
+ offset="1"
+ id="stop4374" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient3988">
+ <stop
+ id="stop3990"
+ offset="0"
+ style="stop-color:#d3e219;stop-opacity:1;" />
+ <stop
+ id="stop3992"
+ offset="1"
+ style="stop-color:#e8a411;stop-opacity:1;" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient3838">
+ <stop
+ style="stop-color:#6badf2;stop-opacity:1;"
+ offset="0"
+ id="stop3840" />
+ <stop
+ style="stop-color:#2e447f;stop-opacity:1;"
+ offset="1"
+ id="stop3842" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient3658">
+ <stop
+ style="stop-color:#19e229;stop-opacity:1;"
+ offset="0"
+ id="stop3660" />
+ <stop
+ style="stop-color:#589b56;stop-opacity:1;"
+ offset="1"
+ id="stop3662" />
+ </linearGradient>
+ <linearGradient
+ id="linearGradient3650">
+ <stop
+ style="stop-color:#f36d6d;stop-opacity:1;"
+ offset="0"
+ id="stop3652" />
+ <stop
+ style="stop-color:#b81313;stop-opacity:1;"
+ offset="1"
+ id="stop3654" />
+ </linearGradient>
+ <inkscape:perspective
+ sodipodi:type="inkscape:persp3d"
+ inkscape:vp_x="0 : 526.18109 : 1"
+ inkscape:vp_y="0 : 1000 : 0"
+ inkscape:vp_z="744.09448 : 526.18109 : 1"
+ inkscape:persp3d-origin="372.04724 : 350.78739 : 1"
+ id="perspective10" />
+ <filter
+ id="filter3712"
+ inkscape:label="Ridged border"
+ inkscape:menu="Bevels"
+ inkscape:menu-tooltip="Ridged border with inner bevel"
+ color-interpolation-filters="sRGB">
+ <feMorphology
+ id="feMorphology3714"
+ radius="4.3"
+ in="SourceAlpha"
+ result="result91" />
+ <feComposite
+ id="feComposite3716"
+ in2="result91"
+ operator="out"
+ in="SourceGraphic" />
+ <feGaussianBlur
+ id="feGaussianBlur3718"
+ result="result0"
+ stdDeviation="1.2" />
+ <feDiffuseLighting
+ id="feDiffuseLighting3720"
+ diffuseConstant="1"
+ result="result92">
+ <feDistantLight
+ id="feDistantLight3722"
+ elevation="66"
+ azimuth="225" />
+ </feDiffuseLighting>
+ <feBlend
+ id="feBlend3724"
+ in2="SourceGraphic"
+ mode="multiply"
+ result="result93" />
+ <feComposite
+ id="feComposite3726"
+ in2="SourceAlpha"
+ operator="in" />
+ </filter>
+ <filter
+ id="filter4038"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4040"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4042"
+ result="bluralpha"
+ type="matrix"
+ values="-1 0 0 0 1 0 -1 0 0 1 0 0 -1 0 1 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4044"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4046">
+ <feMergeNode
+ id="feMergeNode4048"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4050"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <filter
+ id="filter4066"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4068"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4070"
+ result="bluralpha"
+ type="matrix"
+ values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4072"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4074">
+ <feMergeNode
+ id="feMergeNode4076"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4078"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient4370"
+ id="radialGradient4376"
+ cx="-0.5"
+ cy="-100.5"
+ fx="-0.5"
+ fy="-100.5"
+ r="400.5"
+ gradientTransform="matrix(0.06674414,1.4857892,-1.4966201,0.06723071,-150.87695,6.9995757)"
+ gradientUnits="userSpaceOnUse" />
+ <filter
+ id="filter4381"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4383"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4385"
+ result="bluralpha"
+ type="matrix"
+ values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4387"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4389">
+ <feMergeNode
+ id="feMergeNode4391"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4393"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <filter
+ id="filter4397"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4399"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4401"
+ result="bluralpha"
+ type="matrix"
+ values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4403"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4405">
+ <feMergeNode
+ id="feMergeNode4407"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4409"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <inkscape:perspective
+ id="perspective4466"
+ inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
+ inkscape:vp_z="1 : 0.5 : 1"
+ inkscape:vp_y="0 : 1000 : 0"
+ inkscape:vp_x="0 : 0.5 : 1"
+ sodipodi:type="inkscape:persp3d" />
+ <filter
+ id="filter4508"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4510"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4512"
+ result="bluralpha"
+ type="matrix"
+ values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4514"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4516">
+ <feMergeNode
+ id="feMergeNode4518"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4520"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <filter
+ id="filter4592"
+ inkscape:label="Drop shadow"
+ width="1.5"
+ height="1.5"
+ x="-.25"
+ y="-.25">
+ <feGaussianBlur
+ id="feGaussianBlur4594"
+ in="SourceAlpha"
+ stdDeviation="2.000000"
+ result="blur" />
+ <feColorMatrix
+ id="feColorMatrix4596"
+ result="bluralpha"
+ type="matrix"
+ values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0.500000 0 " />
+ <feOffset
+ id="feOffset4598"
+ in="bluralpha"
+ dx="4.000000"
+ dy="4.000000"
+ result="offsetBlur" />
+ <feMerge
+ id="feMerge4600">
+ <feMergeNode
+ id="feMergeNode4602"
+ in="offsetBlur" />
+ <feMergeNode
+ id="feMergeNode4604"
+ in="SourceGraphic" />
+ </feMerge>
+ </filter>
+ <linearGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient4606"
+ id="linearGradient4622"
+ x1="906.94769"
+ y1="-7.3383088"
+ x2="906.94769"
+ y2="-172.97601"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.23092554,0,0,0.7849298,593.37513,596.7001)" />
+ <linearGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient4606"
+ id="linearGradient4626"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.23092554,0,0,1.0521382,-1255.8822,187.84807)"
+ x1="906.94769"
+ y1="-7.3383088"
+ x2="906.94769"
+ y2="-172.97601" />
+ <linearGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient4616"
+ id="linearGradient4636"
+ x1="234.1949"
+ y1="476.34106"
+ x2="-256.56793"
+ y2="98.293198"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1.4992949,0,0,1.4260558,436.2333,350.79316)" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3838"
+ id="radialGradient7925"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.45102834,0,0,0.13605992,152.97182,670.55076)"
+ cx="532.67328"
+ cy="425.74258"
+ fx="532.67328"
+ fy="425.74258"
+ r="259.90594" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3650"
+ id="radialGradient7940"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(0.56884205,0.00258067,-4.6551919e-4,0.1026116,226.03482,800.04606)"
+ cx="580.51013"
+ cy="1693.66"
+ fx="580.51013"
+ fy="1693.66"
+ r="258.42081" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient8071"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient8073"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25347222,0,86.109396)"
+ cx="238.61388"
+ cy="115.34654"
+ fx="238.61388"
+ fy="115.34654"
+ r="142.57426" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3658"
+ id="radialGradient3977"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.18518518,-0.86391925,778.95965)"
+ cx="219.43556"
+ cy="1051.8439"
+ fx="219.43556"
+ fy="1051.8439"
+ r="139.95496" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3988"
+ id="radialGradient4583"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.10579346,-70.29707,153.69227)"
+ cx="492.00214"
+ cy="217.28368"
+ fx="492.00214"
+ fy="217.28368"
+ r="171.48801" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3988"
+ id="radialGradient4619"
+ cx="403.68521"
+ cy="171.064"
+ fx="403.68521"
+ fy="171.064"
+ r="45.35577"
+ gradientTransform="matrix(1.4190478,0,0,0.39047619,-328.79138,107.72325)"
+ gradientUnits="userSpaceOnUse" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3988"
+ id="radialGradient4629"
+ cx="545.99707"
+ cy="171.92792"
+ fx="545.99707"
+ fy="171.92792"
+ r="59.610443"
+ gradientTransform="matrix(1,0,0,0.29710143,-165.8251,574.07398)"
+ gradientUnits="userSpaceOnUse" />
+ <radialGradient
+ inkscape:collect="always"
+ xlink:href="#linearGradient3988"
+ id="radialGradient4665"
+ gradientUnits="userSpaceOnUse"
+ gradientTransform="matrix(1,0,0,0.25714285,-152.04982,611.87228)"
+ cx="248.80881"
+ cy="53.330952"
+ fx="248.80881"
+ fy="53.330952"
+ r="60.474361" />
+ </defs>
+ <sodipodi:namedview
+ id="base"
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1.0"
+ inkscape:pageopacity="0.0"
+ inkscape:pageshadow="2"
+ inkscape:zoom="1.1575153"
+ inkscape:cx="390.83286"
+ inkscape:cy="191.97825"
+ inkscape:document-units="px"
+ inkscape:current-layer="layer1"
+ showgrid="false"
+ inkscape:window-width="2503"
+ inkscape:window-height="1396"
+ inkscape:window-x="57"
+ inkscape:window-y="0"
+ inkscape:window-maximized="1"
+ showguides="true"
+ inkscape:guide-bbox="true" />
+ <metadata
+ id="metadata7">
+ <rdf:RDF>
+ <cc:Work
+ rdf:about="">
+ <dc:format>image/svg+xml</dc:format>
+ <dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+ <dc:title />
+ </cc:Work>
+ </rdf:RDF>
+ </metadata>
+ <g
+ inkscape:label="Layer 1"
+ inkscape:groupmode="layer"
+ id="layer1"
+ transform="translate(0,-452.36218)">
+ <rect
+ style="fill:url(#linearGradient4636);fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+ id="rect4628"
+ width="797"
+ height="597"
+ x="-1.2012364e-05"
+ y="455.36218"
+ ry="1.0732931" />
+ <g
+ id="g4578"
+ transform="translate(70.29707,40.604215)">
+ <rect
+ ry="0.39220902"
+ rx="0.39220905"
+ transform="translate(0,452.36218)"
+ y="199.14137"
+ x="339.52036"
+ height="36.284622"
+ width="301.50784"
+ id="rect4181"
+ style="fill:url(#radialGradient4583);fill-opacity:1;stroke:none" />
+ <text
+ xml:space="preserve"
+ style="font-size:14px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ x="352.67245"
+ y="672.02716"
+ id="text4018"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan4020"
+ x="352.67245"
+ y="672.02716"
+ style="font-size:14px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Distributed Lock Manager</tspan></text>
+ </g>
+ <g
+ id="g7965"
+ transform="translate(-3.4556779,36.284618)"
+ style="stroke:none">
+ <rect
+ ry="0.44098994"
+ y="729.966"
+ x="272.76746"
+ height="69.591866"
+ width="233.99887"
+ id="rect3836"
+ style="fill:url(#radialGradient7925);fill-opacity:1;stroke:none;stroke-width:0.71506577999999998;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+ rx="0.44098994" />
+ <text
+ transform="matrix(0.81060355,0,0,1,72.137987,0)"
+ sodipodi:linespacing="100%"
+ id="text4034"
+ y="774.01617"
+ x="390.88086"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#ffffff;fill-opacity:1;stroke:none;filter:url(#filter4038);font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:32px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:center;line-height:100%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold;stroke:none"
+ y="774.01617"
+ x="390.88086"
+ id="tspan4036"
+ sodipodi:role="line">Pacemaker</tspan></text>
+ </g>
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:center;line-height:100%;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;filter:url(#filter4066);font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ x="142.57423"
+ y="524.63947"
+ id="text4080"
+ sodipodi:linespacing="100%"
+ transform="matrix(1.093423,0,0,1.7166657,239.25476,-359.72728)"><tspan
+ sodipodi:role="line"
+ id="tspan4082"
+ x="142.57423"
+ y="524.63947"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:center;line-height:100%;writing-mode:lr-tb;text-anchor:middle;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">Pacemaker Stack</tspan></text>
+ <text
+ sodipodi:linespacing="100%"
+ id="text4084"
+ y="663.41534"
+ x="72.965027"
+ style="font-size:14px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ xml:space="preserve"><tspan
+ style="font-size:14px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="663.41534"
+ x="72.965027"
+ id="tspan4086"
+ sodipodi:role="line">Build Dependency</tspan></text>
+ <rect
+ style="fill:url(#linearGradient4622);fill-opacity:1.0;fill-rule:nonzero;stroke:#000000;stroke-width:0;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+ id="rect4614"
+ width="3"
+ height="591.4361"
+ x="797"
+ y="460.92606"
+ ry="0.59076202" />
+ <rect
+ ry="0.79187125"
+ y="5.8533502"
+ x="-1052.2572"
+ height="792.77484"
+ width="3"
+ id="rect4624"
+ style="fill:url(#linearGradient4626);fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:0;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
+ transform="matrix(0,-1,1,0,0,0)" />
+ <text
+ id="text7860"
+ y="950.46515"
+ x="234.48592"
+ style="font-size:40px;font-style:normal;font-weight:normal;fill:#000000;fill-opacity:1;stroke:none;font-family:Bitstream Vera Sans"
+ xml:space="preserve"><tspan
+ y="950.46515"
+ x="234.48592"
+ id="tspan7862"
+ sodipodi:role="line" /></text>
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ d="m 261.55618,482.8712 0,0"
+ id="path7959"
+ transform="translate(0,452.36218)"
+ inkscape:connector-type="polyline"
+ inkscape:connector-curvature="0" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ d="m 348.13765,525.93077 0,0"
+ id="path7961"
+ transform="translate(0,452.36218)"
+ inkscape:connector-type="polyline"
+ inkscape:connector-curvature="0" />
+ <g
+ id="g7837"
+ transform="matrix(1.1685,0,0,1,-313.66808,299.14346)" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow2Lend);display:inline"
+ d="m 88.983703,678.28506 141.682777,0"
+ id="path12899"
+ inkscape:connector-type="polyline"
+ inkscape:connector-curvature="0" />
+ <g
+ id="g4171"
+ transform="translate(0.86391925,-1.7278389)">
+ <rect
+ inkscape:transform-center-x="8.6391948"
+ rx="0.39220902"
+ ry="0.39220902"
+ y="945.23627"
+ x="79.480583"
+ height="51.835167"
+ width="279.90991"
+ id="rect3949"
+ style="fill:url(#radialGradient3977);fill-opacity:1;stroke:none" />
+ <text
+ transform="scale(0.97625322,1.0243244)"
+ sodipodi:linespacing="100%"
+ id="text4472"
+ y="953.59192"
+ x="102.07578"
+ style="font-size:20px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ xml:space="preserve"><tspan
+ style="font-size:20px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="953.59192"
+ x="102.07578"
+ id="tspan4474"
+ sodipodi:role="line">Resource Agents</tspan></text>
+ </g>
+ <rect
+ style="fill:none;stroke:none"
+ id="rect3979"
+ width="19.006227"
+ height="52.699089"
+ x="168.46429"
+ y="499.78534"
+ transform="translate(0,452.36218)"
+ rx="0.39220902"
+ ry="0.39220902" />
+ <g
+ id="g4195"
+ transform="translate(5.0723069e-7,-2.5917588)">
+ <rect
+ rx="0.39220905"
+ ry="0.39220902"
+ y="945.00061"
+ x="421.64832"
+ height="52.026382"
+ width="279.72806"
+ id="rect3846"
+ style="fill:url(#radialGradient7940);fill-opacity:1;stroke:none" />
+ <text
+ sodipodi:linespacing="125%"
+ id="text4191"
+ y="979.79291"
+ x="474.29178"
+ style="font-size:24px;font-style:normal;font-variant:normal;font-weight:500;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ y="979.79291"
+ x="474.29178"
+ id="tspan4193"
+ sodipodi:role="line">Corosync</tspan></text>
+ </g>
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow2Lend)"
+ d="M 351.95691,383.48031 245.84335,491.14625"
+ id="path4202"
+ inkscape:connector-type="polyline"
+ inkscape:connector-curvature="0"
+ inkscape:connection-start="#g7965"
+ inkscape:connection-start-point="d4"
+ inkscape:connection-end="#g4171"
+ inkscape:connection-end-point="d4"
+ transform="translate(0,452.36218)" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow2Lend)"
+ d="M 422.68644,383.48031 534.27357,490.04667"
+ id="path4390"
+ inkscape:connector-type="polyline"
+ inkscape:connector-curvature="0"
+ inkscape:connection-start="#g7965"
+ inkscape:connection-start-point="d4"
+ inkscape:connection-end="#g4195"
+ inkscape:connection-end-point="d4"
+ transform="translate(0,452.36218)" />
+ <g
+ id="g4667"
+ transform="translate(152.04983,-2.5917584)">
+ <rect
+ y="605.44366"
+ x="188.33444"
+ height="35.4207"
+ width="120.94872"
+ id="rect4631"
+ style="fill:url(#radialGradient4665);fill-opacity:1;stroke:none" />
+ <text
+ sodipodi:linespacing="100%"
+ id="text4006"
+ y="628.56183"
+ x="211.067"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="628.56183"
+ x="211.067"
+ id="tspan4008"
+ sodipodi:role="line">cLVM2</tspan></text>
+ </g>
+ <g
+ id="g4643"
+ transform="translate(158.09726,-3.4556779)">
+ <rect
+ transform="translate(0,452.36218)"
+ y="153.35364"
+ x="334.33682"
+ height="35.4207"
+ width="133.90752"
+ id="rect4611"
+ style="fill:url(#radialGradient4619);fill-opacity:1;stroke:none" />
+ <text
+ xml:space="preserve"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ x="370.3956"
+ y="629.29962"
+ id="text4010"
+ sodipodi:linespacing="100%"><tspan
+ sodipodi:role="line"
+ id="tspan4012"
+ x="370.3956"
+ y="629.29962"
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold">GFS2</tspan></text>
+ </g>
+ <g
+ id="g4653"
+ transform="translate(165.8251,-0.86391947)">
+ <rect
+ y="604.57971"
+ x="494.38666"
+ height="35.420696"
+ width="119.22089"
+ id="rect4621"
+ style="fill:url(#radialGradient4629);fill-opacity:1;stroke:none" />
+ <text
+ sodipodi:linespacing="100%"
+ id="text4014"
+ y="629.29962"
+ x="516.02765"
+ style="font-size:36px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Medium"
+ xml:space="preserve"><tspan
+ style="font-size:16px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;font-family:BlairMdITC TT;-inkscape-font-specification:BlairMdITC TT Bold"
+ y="629.29962"
+ x="516.02765"
+ id="tspan4016"
+ sodipodi:role="line">OCFS2</tspan></text>
+ </g>
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow2Lend)"
+ d="m 432.39656,185.91043 95.86764,53.83516"
+ id="path4672"
+ inkscape:connector-type="polyline"
+ inkscape:connector-curvature="0"
+ inkscape:connection-start="#g4667"
+ inkscape:connection-start-point="d4"
+ inkscape:connection-end="#g4578"
+ inkscape:connection-end-point="d4"
+ transform="translate(0,452.36218)" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow2Lend)"
+ d="m 559.62001,185.31866 0.7135,54.42693"
+ id="path4674"
+ inkscape:connector-type="polyline"
+ inkscape:connector-curvature="0"
+ inkscape:connection-start="#g4643"
+ inkscape:connection-start-point="d4"
+ inkscape:connection-end="#g4578"
+ inkscape:connection-end-point="d4"
+ transform="translate(0,452.36218)" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow2Lend)"
+ d="m 688.06963,186.77431 -94.97126,52.97128"
+ id="path4676"
+ inkscape:connector-type="polyline"
+ inkscape:connector-curvature="0"
+ inkscape:connection-start="#g4653"
+ inkscape:connection-start-point="d4"
+ inkscape:connection-end="#g4578"
+ inkscape:connection-end-point="d4"
+ transform="translate(0,452.36218)" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow2Lend)"
+ d="m 560.63747,276.03021 0.78006,214.01646"
+ id="path4678"
+ inkscape:connector-type="polyline"
+ inkscape:connector-curvature="0"
+ inkscape:connection-start="#g4578"
+ inkscape:connection-start-point="d4"
+ inkscape:connection-end="#g4195"
+ inkscape:connection-end-point="d4"
+ transform="translate(0,452.36218)" />
+ </g>
+</svg>
diff --git a/doc/sphinx/shared/pacemaker-intro.rst b/doc/sphinx/shared/pacemaker-intro.rst
new file mode 100644
index 0000000..c8318ff
--- /dev/null
+++ b/doc/sphinx/shared/pacemaker-intro.rst
@@ -0,0 +1,196 @@
+What Is Pacemaker?
+####################
+
+Pacemaker is a high-availability *cluster resource manager* -- software that
+runs on a set of hosts (a *cluster* of *nodes*) in order to preserve integrity
+and minimize downtime of desired services (*resources*). [#]_ It is maintained
+by the `ClusterLabs <https://www.ClusterLabs.org/>`_ community.
+
+Pacemaker's key features include:
+
+* Detection of and recovery from node- and service-level failures
+* Ability to ensure data integrity by fencing faulty nodes
+* Support for one or more nodes per cluster
+* Support for multiple resource interface standards (anything that can be
+ scripted can be clustered)
+* Support (but no requirement) for shared storage
+* Support for practically any redundancy configuration (active/passive, N+1,
+ etc.)
+* Automatically replicated configuration that can be updated from any node
+* Ability to specify cluster-wide relationships between services,
+ such as ordering, colocation, and anti-colocation
+* Support for advanced service types, such as *clones* (services that need to
+ be active on multiple nodes), *promotable clones* (clones that can run in
+ one of two roles), and containerized services
+* Unified, scriptable cluster management tools
+
+.. note:: **Fencing**
+
+ *Fencing*, also known as *STONITH* (an acronym for Shoot The Other Node In
+ The Head), is the ability to ensure that it is not possible for a node to be
+ running a service. This is accomplished via *fence devices* such as
+ intelligent power switches that cut power to the target, or intelligent
+ network switches that cut the target's access to the local network.
+
+ Pacemaker represents fence devices as a special class of resource.
+
+ A cluster cannot safely recover from certain failure conditions, such as an
+ unresponsive node, without fencing.
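
As a concrete illustration of fence devices being configured as resources, an IPMI-based fence device might be registered with a command along these lines. This is a hypothetical sketch: the device name, addresses, and credentials are invented, and the available parameters depend on which fence agent is installed, so consult the agent's own documentation:

```shell
# Hypothetical example: register an IPMI-based fence device for node1 as a
# special class of cluster resource (parameters shown are illustrative).
pcs stonith create fence-node1 fence_ipmilan \
    ip=10.0.0.1 username=admin password=secret \
    pcmk_host_list=node1
```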
+
+Cluster Architecture
+____________________
+
+At a high level, a cluster can be viewed as having these parts (which together
+are often referred to as the *cluster stack*):
+
+ * **Resources:** These are the reason for the cluster's existence -- the
+   services that need to be kept highly available.
+
+ * **Resource agents:** These are scripts or operating system components that
+ start, stop, and monitor resources, given a set of resource parameters.
+ These provide a uniform interface between Pacemaker and the managed
+ services.
+
+ * **Fence agents:** These are scripts that execute node fencing actions,
+ given a target and fence device parameters.
+
+ * **Cluster membership layer:** This component provides reliable messaging,
+ membership, and quorum information about the cluster. Currently, Pacemaker
+ supports `Corosync <http://www.corosync.org/>`_ as this layer.
+
+ * **Cluster resource manager:** Pacemaker provides the brain that processes
+ and reacts to events that occur in the cluster. These events may include
+ nodes joining or leaving the cluster; resource events caused by failures,
+ maintenance, or scheduled activities; and other administrative actions.
+ To achieve the desired availability, Pacemaker may start and stop resources
+ and fence nodes.
+
+ * **Cluster tools:** These provide an interface for users to interact with the
+ cluster. Various command-line and graphical (GUI) interfaces are available.
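
The uniform interface that resource agents provide can be sketched as a small shell script. This is only an illustrative sketch of the OCF-style action and exit-code convention: the "dummy" service, state file path, and function names are hypothetical, and real agents also implement ``meta-data`` and other actions and dispatch on the action name passed as ``$1``.

```shell
#!/bin/sh
# Illustrative sketch of an OCF-style resource agent for a hypothetical
# "dummy" service. Pacemaker invokes an agent with an action name and
# inspects the exit code: 0 means success, 7 means "not running".
OCF_SUCCESS=0
OCF_NOT_RUNNING=7
STATE_FILE="${TMPDIR:-/tmp}/dummy-agent.state"  # stands in for a real service

dummy_start()   { touch "$STATE_FILE"; return $OCF_SUCCESS; }
dummy_stop()    { rm -f "$STATE_FILE"; return $OCF_SUCCESS; }
dummy_monitor() {
    # Report whether the "service" is running, via the OCF exit-code convention
    if [ -f "$STATE_FILE" ]; then
        return $OCF_SUCCESS
    else
        return $OCF_NOT_RUNNING
    fi
}

# Walk through the lifecycle Pacemaker would drive (a real agent instead
# dispatches on "$1": start|stop|monitor|meta-data|...):
rm -f "$STATE_FILE"
dummy_monitor; echo "monitor before start: rc=$?"   # rc=7
dummy_start
dummy_monitor; echo "monitor after start:  rc=$?"   # rc=0
dummy_stop
dummy_monitor; echo "monitor after stop:   rc=$?"   # rc=7
```

Because the interface is just "action in, exit code out", anything that can be scripted this way can be managed by the cluster.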
+
+Most managed services are not, themselves, cluster-aware. However, many popular
+open-source cluster filesystems make use of a common *Distributed Lock
+Manager* (DLM), which makes direct use of Corosync for its messaging and
+membership capabilities and Pacemaker for the ability to fence nodes.
+
+.. image:: ../shared/images/pcmk-stack.png
+ :alt: Example cluster stack
+ :align: center
+
+Pacemaker Architecture
+______________________
+
+Pacemaker itself is composed of multiple daemons that work together:
+
+* ``pacemakerd``
+* ``pacemaker-attrd``
+* ``pacemaker-based``
+* ``pacemaker-controld``
+* ``pacemaker-execd``
+* ``pacemaker-fenced``
+* ``pacemaker-schedulerd``
+
+.. image:: ../shared/images/pcmk-internals.png
+ :alt: Pacemaker software components
+ :align: center
+
+Pacemaker's main process (``pacemakerd``) spawns all the other daemons, and
+respawns them if they unexpectedly exit.
+
+The *Cluster Information Base* (CIB) is an
+`XML <https://en.wikipedia.org/wiki/XML>`_ representation of the cluster's
+configuration and the state of all nodes and resources. The *CIB manager*
+(``pacemaker-based``) keeps the CIB synchronized across the cluster, and
+handles requests to modify it.
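+
+As a rough sketch, the CIB is structured like this (the element names are
+real; attributes and nearly all content are omitted here for brevity):
+
+.. code-block:: xml
+
+   <cib>
+     <configuration>
+       <crm_config/>   <!-- cluster-wide property settings -->
+       <nodes/>        <!-- the nodes known to the cluster -->
+       <resources/>    <!-- resource definitions -->
+       <constraints/>  <!-- location, ordering, and colocation rules -->
+     </configuration>
+     <status/>         <!-- runtime state, maintained by the cluster itself -->
+   </cib>
+
+Administrators edit only the ``configuration`` section; the ``status``
+section is maintained by the cluster itself.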
+
+The *attribute manager* (``pacemaker-attrd``) maintains a database of
+attributes for all nodes, keeps it synchronized across the cluster, and handles
+requests to modify them. These attributes are usually recorded in the CIB.
+
+Given a snapshot of the CIB as input, the *scheduler*
+(``pacemaker-schedulerd``) determines what actions are necessary to achieve the
+desired state of the cluster.
+
+The *local executor* (``pacemaker-execd``) handles requests to execute
+resource agents on the local cluster node, and returns the result.
+
+The *fencer* (``pacemaker-fenced``) handles requests to fence nodes. Given a
+target node, the fencer decides which cluster node(s) should execute which
+fencing device(s), calls the necessary fencing agents (either directly or via
+requests to fencer peers on other nodes), and returns the result.
+
+The *controller* (``pacemaker-controld``) is Pacemaker's coordinator,
+maintaining a consistent view of the cluster membership and orchestrating all
+the other components.
+
+Pacemaker centralizes cluster decision-making by electing one of the controller
+instances as the *Designated Controller* (*DC*). Should the elected DC process
+(or the node it is on) fail, a new one is quickly established. The DC responds
+to cluster events by taking a current snapshot of the CIB, feeding it to the
+scheduler, then asking the executors (either directly on the local node, or via
+requests to controller peers on other nodes) and the fencer to execute any
+necessary actions.
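+
+Conceptually, the scheduler's part of this cycle is a pure function from a
+state snapshot to a list of actions. The toy sketch below only illustrates
+that idea -- none of these names exist in Pacemaker's code, and real
+scheduling also weighs constraints, scores, and failure history:
+
+.. code-block:: python
+
+   def schedule(desired, current):
+       """Return (action, resource, node) tuples to move current -> desired.
+
+       Both arguments map resource names to the node each one (should) run on.
+       """
+       actions = []
+       for resource, node in desired.items():
+           if current.get(resource) != node:
+               if resource in current:       # running, but in the wrong place
+                   actions.append(("stop", resource, current[resource]))
+               actions.append(("start", resource, node))
+       for resource, node in current.items():
+           if resource not in desired:       # no longer configured
+               actions.append(("stop", resource, node))
+       return actions
+
+The real scheduler works from the CIB XML rather than simple mappings, and
+produces a transition graph of ordered actions rather than a flat list.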
+
+.. note:: **Old daemon names**
+
+ The Pacemaker daemons were renamed in version 2.0. You may still find
+ references to the old names, especially in documentation targeted to
+ version 1.1.
+
+ .. table::
+
+ +-----------------------+------------------------+
+ | Old name | New name |
+ +=======================+========================+
+ | ``attrd`` | ``pacemaker-attrd`` |
+ +-----------------------+------------------------+
+ | ``cib`` | ``pacemaker-based`` |
+ +-----------------------+------------------------+
+ | ``crmd`` | ``pacemaker-controld`` |
+ +-----------------------+------------------------+
+ | ``lrmd`` | ``pacemaker-execd`` |
+ +-----------------------+------------------------+
+ | ``stonithd`` | ``pacemaker-fenced`` |
+ +-----------------------+------------------------+
+ | ``pacemaker_remoted`` | ``pacemaker-remoted`` |
+ +-----------------------+------------------------+
+
+Node Redundancy Designs
+_______________________
+
+Pacemaker supports practically any `node redundancy configuration
+<https://en.wikipedia.org/wiki/High-availability_cluster#Node_configurations>`_,
+including *Active/Active*, *Active/Passive*, *N+1*, *N+M*, *N-to-1*, and
+*N-to-N*.
+
+Active/passive clusters with two (or more) nodes using Pacemaker and
+`DRBD <https://en.wikipedia.org/wiki/Distributed_Replicated_Block_Device>`_ are
+a cost-effective high-availability solution for many situations. One of the
+nodes provides the desired services, and if it fails, the other node takes
+over.
+
+.. image:: ../shared/images/pcmk-active-passive.png
+ :alt: Active/Passive Redundancy
+ :align: center
+
+Pacemaker also supports multiple nodes in a shared-failover design, reducing
+hardware costs by allowing several active/passive clusters to be combined and
+share a common backup node.
+
+.. image:: ../shared/images/pcmk-shared-failover.png
+ :alt: Shared Failover
+ :align: center
+
+When shared storage is available, every node can potentially be used for
+failover. Pacemaker can even run multiple copies of services to spread out the
+workload. This is sometimes called *N-to-N* redundancy.
+
+.. image:: ../shared/images/pcmk-active-active.png
+ :alt: N to N Redundancy
+ :align: center
+
+.. rubric:: Footnotes
+
+.. [#] *Cluster* is sometimes used in other contexts to refer to hosts grouped
+ together for other purposes, such as high-performance computing (HPC),
+ but Pacemaker is not intended for those purposes.