Cluster-Wide Configuration
--------------------------
.. index::
pair: XML element; cib
pair: XML element; configuration
Configuration Layout
####################
The cluster is defined by the Cluster Information Base (CIB), which uses XML
notation. The simplest CIB, an empty one, looks like this:
.. topic:: An empty configuration
.. code-block:: xml
<cib crm_feature_set="3.6.0" validate-with="pacemaker-3.5" epoch="1" num_updates="0" admin_epoch="0">
<configuration>
<crm_config/>
<nodes/>
<resources/>
<constraints/>
</configuration>
<status/>
</cib>
The empty configuration above contains the major sections that make up a CIB:
* ``cib``: The entire CIB is enclosed with a ``cib`` element. Certain
fundamental settings are defined as attributes of this element.
* ``configuration``: This section -- the primary focus of this document --
contains traditional configuration information such as what resources the
cluster serves and the relationships among them.
* ``crm_config``: cluster-wide configuration options
* ``nodes``: the machines that host the cluster
* ``resources``: the services run by the cluster
* ``constraints``: indications of how resources should be placed
* ``status``: This section contains the history of each resource on each
node. Based on this data, the cluster can construct the complete current
state of the cluster. The authoritative source for this section is the
local executor (pacemaker-execd process) on each cluster node, and the
cluster will occasionally repopulate the entire section. For this reason,
it is never written to disk, and administrators are advised against
modifying it in any way.
In this document, configuration settings will be described as properties or
options based on how they are defined in the CIB:
* Properties are XML attributes of an XML element.
* Options are name-value pairs expressed as ``nvpair`` child elements of an XML
element.
Normally, you will use command-line tools that abstract the XML, so the
distinction will be unimportant; both properties and options are cluster
settings you can tweak.
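As a rough sketch of this distinction (the ``id`` values below are arbitrary),
``validate-with`` is a *property* of the ``cib`` element, while
``no-quorum-policy`` is an *option* expressed as an ``nvpair`` inside a
``cluster_property_set``:

.. topic:: Properties versus options (illustrative sketch)

   .. code-block:: xml

      <cib validate-with="pacemaker-3.9" epoch="4" num_updates="0" admin_epoch="0">
        <configuration>
          <crm_config>
            <cluster_property_set id="cib-bootstrap-options">
              <!-- an option: a name/value pair held by an enclosing element -->
              <nvpair id="cib-bootstrap-options-no-quorum-policy"
                      name="no-quorum-policy" value="stop"/>
            </cluster_property_set>
          </crm_config>
          <nodes/>
          <resources/>
          <constraints/>
        </configuration>
        <status/>
      </cib>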
Configuration Value Types
#########################
Throughout this document, configuration values will be designated as having one
of the following types:
.. list-table:: **Configuration Value Types**
:class: longtable
:widths: 1 3
:header-rows: 1
* - Type
- Description
* - .. _boolean:
.. index::
pair: type; boolean
boolean
- Case-insensitive text value where ``1``, ``yes``, ``y``, ``on``,
and ``true`` evaluate as true and ``0``, ``no``, ``n``, ``off``,
``false``, and unset evaluate as false
* - .. _date_time:
.. index::
pair: type; date/time
date/time
- Textual timestamp like ``Sat Dec 21 11:47:45 2013``
* - .. _duration:
.. index::
pair: type; duration
duration
- A time duration, specified either like a :ref:`timeout <timeout>` or an
`ISO 8601 duration <https://en.wikipedia.org/wiki/ISO_8601#Durations>`_.
A duration may be up to approximately 49 days but is intended for much
smaller time periods.
* - .. _enumeration:
.. index::
pair: type; enumeration
enumeration
- Text that must be one of a set of defined values (which will be listed
in the description)
* - .. _integer:
.. index::
pair: type; integer
integer
- 32-bit signed integer value (-2,147,483,648 to 2,147,483,647)
* - .. _nonnegative_integer:
.. index::
pair: type; nonnegative integer
nonnegative integer
- 32-bit nonnegative integer value (0 to 2,147,483,647)
* - .. _port:
.. index::
pair: type; port
port
- Integer TCP port number (0 to 65535)
* - .. _score:
.. index::
pair: type; score
score
- A Pacemaker score can be an integer between -1,000,000 and 1,000,000, or
a string alias: ``INFINITY`` or ``+INFINITY`` is equivalent to
1,000,000, ``-INFINITY`` is equivalent to -1,000,000, and ``red``,
``yellow``, and ``green`` are equivalent to integers as described in
:ref:`node-health`.
* - .. _text:
.. index::
pair: type; text
text
- A text string
* - .. _timeout:
.. index::
pair: type; timeout
timeout
- A time duration, specified as a bare number (in which case it is
considered to be in seconds) or a number with a unit (``ms`` or ``msec``
for milliseconds, ``us`` or ``usec`` for microseconds, ``s`` or ``sec``
for seconds, ``m`` or ``min`` for minutes, ``h`` or ``hr`` for hours)
optionally with whitespace before and/or after the number.
* - .. _version:
.. index::
pair: type; version
version
- Version number (any combination of alphanumeric characters, dots, and
dashes, starting with a number).
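For example, ``120``, ``120s``, ``2min``, and ``2 min`` are all equivalent
ways of expressing two minutes as a :ref:`timeout <timeout>` or
:ref:`duration <duration>`. In the CIB, such a value appears as the ``value``
of an ``nvpair``; a minimal sketch (the ``id`` is arbitrary):

.. code-block:: xml

   <nvpair id="example-cluster-delay" name="cluster-delay" value="2min"/>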
Scores
______
Scores are integral to how Pacemaker works. Practically everything from moving
a resource to deciding which resource to stop in a degraded cluster is achieved
by manipulating scores in some way.
Scores are calculated per resource and node. Any node with a negative score for
a resource can't run that resource. The cluster places a resource on the node
with the highest score for it.
Score addition and subtraction follow these rules:
* Any value (including ``INFINITY``) - ``INFINITY`` = ``-INFINITY``
* ``INFINITY`` + any value other than ``-INFINITY`` = ``INFINITY``
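For example, applying these rules:

* ``3000`` + ``5000`` = ``8000``
* ``INFINITY`` + ``5000`` = ``INFINITY``
* ``5000`` - ``INFINITY`` = ``-INFINITY``
* ``INFINITY`` - ``INFINITY`` = ``-INFINITY``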
.. note::
What if you want to use a score higher than 1,000,000? Typically this possibility
arises when someone wants to base the score on some external metric that might
go above 1,000,000.
The short answer is you can't.
The long answer is that it is sometimes possible to work around this limitation
creatively. You may be able to set the score to some computed value based on
the external metric rather than use the metric directly. For nodes, you can
store the metric as a node attribute, and query the attribute when computing
the score (possibly as part of a custom resource agent).
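As an illustrative sketch (the attribute name and value below are
hypothetical), such a metric could be stored as a permanent node attribute in
the ``nodes`` section, where rules and agents can consult it:

.. code-block:: xml

   <nodes>
     <node id="1" uname="node1">
       <instance_attributes id="node1-attrs">
         <!-- hypothetical metric, rescaled to fit within the score range -->
         <nvpair id="node1-attrs-capacity-metric" name="capacity-metric" value="750"/>
       </instance_attributes>
     </node>
   </nodes>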
CIB Properties
##############
Certain settings are defined by CIB properties (that is, attributes of the
``cib`` tag) rather than with the rest of the cluster configuration in the
``configuration`` section.
The reason is simply a matter of parsing. These options are used by the
configuration database which is, by design, mostly ignorant of the content it
holds. So the decision was made to place them in an easy-to-find location.
.. list-table:: **CIB Properties**
:class: longtable
:widths: 2 2 2 5
:header-rows: 1
* - Name
- Type
- Default
- Description
* - .. _admin_epoch:
.. index::
pair: admin_epoch; cib
admin_epoch
- :ref:`nonnegative integer <nonnegative_integer>`
- 0
- When a node joins the cluster, the cluster asks the node with the
highest (``admin_epoch``, ``epoch``, ``num_updates``) tuple to replace
the configuration on all the nodes -- which makes setting them correctly
very important. ``admin_epoch`` is never modified by the cluster; you
can use this to make the configurations on any inactive nodes obsolete.
* - .. _epoch:
.. index::
pair: epoch; cib
epoch
- :ref:`nonnegative integer <nonnegative_integer>`
- 0
- The cluster increments this every time the CIB's configuration section
is updated.
* - .. _num_updates:
.. index::
pair: num_updates; cib
num_updates
- :ref:`nonnegative integer <nonnegative_integer>`
- 0
- The cluster increments this every time the CIB's configuration or status
sections are updated, and resets it to 0 when epoch changes.
* - .. _validate_with:
.. index::
pair: validate-with; cib
validate-with
- :ref:`enumeration <enumeration>`
-
- Determines the type of XML validation that will be done on the
configuration. Allowed values are ``none`` (in which case the cluster
will not require that updates conform to expected syntax) and the base
names of schema files installed on the local machine (for example,
"pacemaker-3.9")
* - .. _remote_tls_port:
.. index::
pair: remote-tls-port; cib
remote-tls-port
- :ref:`port <port>`
-
- If set, the CIB manager will listen for anonymously encrypted remote
connections on this port, to allow CIB administration from hosts not in
the cluster. No key is used, so this should be used only on a protected
network where man-in-the-middle attacks can be avoided.
* - .. _remote_clear_port:
.. index::
pair: remote-clear-port; cib
remote-clear-port
- :ref:`port <port>`
-
- If set to a TCP port number, the CIB manager will listen for remote
connections on this port, to allow for CIB administration from hosts not
in the cluster. No encryption is used, so this should be used only on a
protected network.
* - .. _cib_last_written:
.. index::
pair: cib-last-written; cib
cib-last-written
- :ref:`date/time <date_time>`
-
- Indicates when the configuration was last written to disk. Maintained by
the cluster; for informational purposes only.
* - .. _have_quorum:
.. index::
pair: have-quorum; cib
have-quorum
- :ref:`boolean <boolean>`
-
- Indicates whether the cluster has quorum. If false, the cluster's
response is determined by ``no-quorum-policy`` (see below). Maintained
by the cluster.
* - .. _dc_uuid:
.. index::
pair: dc-uuid; cib
dc-uuid
- :ref:`text <text>`
-
- Node ID of the cluster's current designated controller (DC). Used and
maintained by the cluster.
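As an example of how the (``admin_epoch``, ``epoch``, ``num_updates``) tuple
described above is compared, the fields are considered in order with
``admin_epoch`` the most significant: a CIB with a tuple of (1, 2, 0) is
treated as newer than one with (0, 500, 30), despite its lower ``epoch`` and
``num_updates``.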
.. _cluster_options:
Cluster Options
###############
Cluster options, as you might expect, control how the cluster behaves when
confronted with various situations.
They are grouped into sets within the ``crm_config`` section. In advanced
configurations, there may be more than one set. (This will be described later
in the chapter on :ref:`rules` where we will show how to have the cluster use
different sets of options during working hours than during weekends.) For now,
we will describe the simple case where each option is present at most once.
You can obtain an up-to-date list of cluster options, including their default
values, by running the ``man pacemaker-schedulerd`` and
``man pacemaker-controld`` commands.
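As a minimal sketch of how such a set appears in the CIB (the ``id`` values
are arbitrary, and the options shown are described in the table below):

.. code-block:: xml

   <crm_config>
     <cluster_property_set id="cib-bootstrap-options">
       <nvpair id="option-stonith-enabled" name="stonith-enabled" value="true"/>
       <nvpair id="option-symmetric-cluster" name="symmetric-cluster" value="true"/>
       <nvpair id="option-cluster-recheck-interval" name="cluster-recheck-interval" value="15min"/>
     </cluster_property_set>
   </crm_config>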
.. list-table:: **Cluster Options**
:class: longtable
:widths: 2 2 2 5
:header-rows: 1
* - Name
- Type
- Default
- Description
* - .. _cluster_name:
.. index::
pair: cluster option; cluster-name
cluster-name
- :ref:`text <text>`
-
- An (optional) name for the cluster as a whole. This is mostly for users'
convenience, to be used as desired in administration, but it can be used in the
Pacemaker configuration in :ref:`rules` (as the ``#cluster-name``
:ref:`node attribute <node-attribute-expressions-special>`). It may also
be used by higher-level tools when displaying cluster information, and
by certain resource agents (for example, the ``ocf:heartbeat:GFS2``
agent stores the cluster name in filesystem meta-data).
* - .. _dc_version:
.. index::
pair: cluster option; dc-version
dc-version
- :ref:`version <version>`
- *detected*
- Version of Pacemaker on the cluster's designated controller (DC).
Maintained by the cluster, and intended for diagnostic purposes.
* - .. _cluster_infrastructure:
.. index::
pair: cluster option; cluster-infrastructure
cluster-infrastructure
- :ref:`text <text>`
- *detected*
- The messaging layer with which Pacemaker is currently running.
Maintained by the cluster, and intended for informational and diagnostic
purposes.
* - .. _no_quorum_policy:
.. index::
pair: cluster option; no-quorum-policy
no-quorum-policy
- :ref:`enumeration <enumeration>`
- stop
- What to do when the cluster does not have quorum. Allowed values:
* ``ignore:`` continue all resource management
* ``freeze:`` continue resource management, but don't recover resources
from nodes not in the affected partition
* ``stop:`` stop all resources in the affected cluster partition
* ``demote:`` demote promotable resources and stop all other resources
in the affected cluster partition *(since 2.0.5)*
* ``suicide:`` fence all nodes in the affected cluster partition
* - .. _batch_limit:
.. index::
pair: cluster option; batch-limit
batch-limit
- :ref:`integer <integer>`
- 0
- The maximum number of actions that the cluster may execute in parallel
across all nodes. The ideal value will depend on the speed and load
of your network and cluster nodes. If zero, the cluster will impose a
dynamically calculated limit only when any node has high load. If -1,
the cluster will not impose any limit.
* - .. _migration_limit:
.. index::
pair: cluster option; migration-limit
migration-limit
- :ref:`integer <integer>`
- -1
- The number of :ref:`live migration <live-migration>` actions that the
cluster is allowed to execute in parallel on a node. A value of -1 means
unlimited.
* - .. _symmetric_cluster:
.. index::
pair: cluster option; symmetric-cluster
symmetric-cluster
- :ref:`boolean <boolean>`
- true
- If true, resources can run on any node by default. If false, a resource
is allowed to run on a node only if a
:ref:`location constraint <location-constraint>` enables it.
* - .. _stop_all_resources:
.. index::
pair: cluster option; stop-all-resources
stop-all-resources
- :ref:`boolean <boolean>`
- false
- Whether all resources should be disallowed from running (can be useful
during maintenance or troubleshooting)
* - .. _stop_orphan_resources:
.. index::
pair: cluster option; stop-orphan-resources
stop-orphan-resources
- :ref:`boolean <boolean>`
- true
- Whether resources that have been deleted from the configuration should
be stopped. This value takes precedence over
:ref:`is-managed <is_managed>` (that is, even unmanaged resources will
be stopped when orphaned if this value is ``true``).
* - .. _stop_orphan_actions:
.. index::
pair: cluster option; stop-orphan-actions
stop-orphan-actions
- :ref:`boolean <boolean>`
- true
- Whether recurring :ref:`operations <operation>` that have been deleted
from the configuration should be cancelled
* - .. _start_failure_is_fatal:
.. index::
pair: cluster option; start-failure-is-fatal
start-failure-is-fatal
- :ref:`boolean <boolean>`
- true
- Whether a failure to start a resource on a particular node prevents
further start attempts on that node. If ``false``, the cluster will
decide whether the node is still eligible based on the resource's
current failure count and ``migration-threshold``.
* - .. _enable_startup_probes:
.. index::
pair: cluster option; enable-startup-probes
enable-startup-probes
- :ref:`boolean <boolean>`
- true
- Whether the cluster should check the pre-existing state of resources
when the cluster starts
* - .. _maintenance_mode:
.. index::
pair: cluster option; maintenance-mode
maintenance-mode
- :ref:`boolean <boolean>`
- false
- If true, the cluster will not start or stop any resource in the cluster,
and any recurring operations (except those specifying ``role`` as
``Stopped``) will be paused. If true, this overrides the
:ref:`maintenance <node_maintenance>` node attribute,
:ref:`is-managed <is_managed>` and :ref:`maintenance <rsc_maintenance>`
resource meta-attributes, and :ref:`enabled <op_enabled>` operation
meta-attribute.
* - .. _stonith_enabled:
.. index::
pair: cluster option; stonith-enabled
stonith-enabled
- :ref:`boolean <boolean>`
- true
- Whether the cluster is allowed to fence nodes (for example, failed nodes
and nodes with resources that can't be stopped).
If true, at least one fence device must be configured before resources
are allowed to run.
If false, unresponsive nodes are immediately assumed to be running no
resources, and resource recovery on online nodes starts without any
further protection (which can mean *data loss* if the unresponsive node
still accesses shared storage, for example). See also the
:ref:`requires <requires>` resource meta-attribute.
* - .. _stonith_action:
.. index::
pair: cluster option; stonith-action
stonith-action
- :ref:`enumeration <enumeration>`
- reboot
- Action the cluster should send to the fence agent when a node must be
fenced. Allowed values are ``reboot``, ``off``, and (for legacy agents
only) ``poweroff``.
* - .. _stonith_timeout:
.. index::
pair: cluster option; stonith-timeout
stonith-timeout
- :ref:`duration <duration>`
- 60s
- How long to wait for ``on``, ``off``, and ``reboot`` fence actions to
complete by default.
* - .. _stonith_max_attempts:
.. index::
pair: cluster option; stonith-max-attempts
stonith-max-attempts
- :ref:`score <score>`
- 10
- How many times fencing can fail for a target before the cluster will no
longer immediately re-attempt it. Any value below 1 will be ignored, and
the default will be used instead.
* - .. _stonith_watchdog_timeout:
.. index::
pair: cluster option; stonith-watchdog-timeout
stonith-watchdog-timeout
- :ref:`timeout <timeout>`
- 0
- If nonzero, and the cluster detects ``have-watchdog`` as ``true``, then
watchdog-based self-fencing will be performed via SBD when fencing is
required, without requiring a fencing resource explicitly configured.
If this is set to a positive value, unseen nodes are assumed to
self-fence within this much time.
**Warning:** It must be ensured that this value is larger than the
``SBD_WATCHDOG_TIMEOUT`` environment variable on all nodes. Pacemaker
verifies the setting individually on each node and will prevent startup,
or shut down a running node, if it is configured wrongly (including if the
value is changed on the fly). It is strongly recommended that
``SBD_WATCHDOG_TIMEOUT`` be set to the same value on all nodes.
If this is set to a negative value, and ``SBD_WATCHDOG_TIMEOUT`` is set,
twice that value will be used.
**Warning:** In this case, it is essential (and currently not verified
by pacemaker) that ``SBD_WATCHDOG_TIMEOUT`` is set to the same value on
all nodes.
* - .. _concurrent-fencing:
.. index::
pair: cluster option; concurrent-fencing
concurrent-fencing
- :ref:`boolean <boolean>`
- false
- Whether the cluster is allowed to initiate multiple fence actions
concurrently. Fence actions initiated externally, such as via the
``stonith_admin`` tool or an application such as DLM, or by the fencer
itself such as recurring device monitors and ``status`` and ``list``
commands, are not limited by this option.
* - .. _fence_reaction:
.. index::
pair: cluster option; fence-reaction
fence-reaction
- :ref:`enumeration <enumeration>`
- stop
- How should a cluster node react if notified of its own fencing? A
cluster node may receive notification of its own fencing if fencing is
misconfigured, or if fabric fencing is in use that doesn't cut cluster
communication. Allowed values are ``stop`` to attempt to immediately
stop Pacemaker and stay stopped, or ``panic`` to attempt to immediately
reboot the local node, falling back to stop on failure. The default is
likely to be changed to ``panic`` in a future release. *(since 2.0.3)*
* - .. _priority_fencing_delay:
.. index::
pair: cluster option; priority-fencing-delay
priority-fencing-delay
- :ref:`duration <duration>`
- 0
- Apply this delay to any fencing targeting the lost nodes with the
highest total resource priority, in case our cluster partition does not
contain a majority of the nodes, so that the more significant nodes
potentially win any fencing match (this is especially meaningful in a
split-brain of a two-node cluster). A promoted resource instance takes the
resource's priority plus 1 if the resource's priority is not 0. Any
static or random delays introduced by ``pcmk_delay_base`` and
``pcmk_delay_max`` configured for the corresponding fencing resources
will be added to this delay. This delay should be significantly greater
than (safely twice) the maximum delay from those parameters. *(since
2.0.4)*
* - .. _node_pending_timeout:
.. index::
pair: cluster option; node-pending-timeout
node-pending-timeout
- :ref:`duration <duration>`
- 0
- Fence nodes that do not join the controller process group within this
much time after joining the cluster, to allow the cluster to continue
managing resources. A value of 0 means never fence pending nodes. For
example, a value of ``2h`` fences pending nodes after two hours.
*(since 2.1.7)*
* - .. _cluster_delay:
.. index::
pair: cluster option; cluster-delay
cluster-delay
- :ref:`duration <duration>`
- 60s
- If the DC requires an action to be executed on another node, it will
consider the action failed if it does not get a response from the other
node within this time (beyond the action's own timeout). The ideal value
will depend on the speed and load of your network and cluster nodes.
* - .. _dc_deadtime:
.. index::
pair: cluster option; dc-deadtime
dc-deadtime
- :ref:`duration <duration>`
- 20s
- How long to wait for a response from other nodes when electing a DC. The
ideal value will depend on the speed and load of your network and
cluster nodes.
* - .. _cluster_ipc_limit:
.. index::
pair: cluster option; cluster-ipc-limit
cluster-ipc-limit
- :ref:`nonnegative integer <nonnegative_integer>`
- 500
- The maximum IPC message backlog before one cluster daemon will
disconnect another. This is of use in large clusters, for which a good
value is the number of resources in the cluster multiplied by the number
of nodes. The default of 500 is also the minimum. Raise this if you see
"Evicting client" log messages for cluster daemon process IDs.
* - .. _pe_error_series_max:
.. index::
pair: cluster option; pe-error-series-max
pe-error-series-max
- :ref:`integer <integer>`
- -1
- The number of scheduler inputs resulting in errors to save. These inputs
can be helpful during troubleshooting and when reporting issues. A
negative value means save all inputs, and 0 means save none.
* - .. _pe_warn_series_max:
.. index::
pair: cluster option; pe-warn-series-max
pe-warn-series-max
- :ref:`integer <integer>`
- 5000
- The number of scheduler inputs resulting in warnings to save. These
inputs can be helpful during troubleshooting and when reporting issues.
A negative value means save all inputs, and 0 means save none.
* - .. _pe_input_series_max:
.. index::
pair: cluster option; pe-input-series-max
pe-input-series-max
- :ref:`integer <integer>`
- 4000
- The number of "normal" scheduler inputs to save. These inputs can be
helpful during troubleshooting and when reporting issues. A negative
value means save all inputs, and 0 means save none.
* - .. _enable_acl:
.. index::
pair: cluster option; enable-acl
enable-acl
- :ref:`boolean <boolean>`
- false
- Whether :ref:`access control lists <acl>` should be used to authorize
CIB modifications
* - .. _placement_strategy:
.. index::
pair: cluster option; placement-strategy
placement-strategy
- :ref:`enumeration <enumeration>`
- default
- How the cluster should assign resources to nodes (see
:ref:`utilization`). Allowed values are ``default``, ``utilization``,
``balanced``, and ``minimal``.
* - .. _node_health_strategy:
.. index::
pair: cluster option; node-health-strategy
node-health-strategy
- :ref:`enumeration <enumeration>`
- none
- How the cluster should react to :ref:`node health <node-health>`
attributes. Allowed values are ``none``, ``migrate-on-red``,
``only-green``, ``progressive``, and ``custom``.
* - .. _node_health_base:
.. index::
pair: cluster option; node-health-base
node-health-base
- :ref:`score <score>`
- 0
- The base health score assigned to a node. Only used when
``node-health-strategy`` is ``progressive``.
* - .. _node_health_green:
.. index::
pair: cluster option; node-health-green
node-health-green
- :ref:`score <score>`
- 0
- The score to use for a node health attribute whose value is ``green``.
Only used when ``node-health-strategy`` is ``progressive`` or
``custom``.
* - .. _node_health_yellow:
.. index::
pair: cluster option; node-health-yellow
node-health-yellow
- :ref:`score <score>`
- 0
- The score to use for a node health attribute whose value is ``yellow``.
Only used when ``node-health-strategy`` is ``progressive`` or
``custom``.
* - .. _node_health_red:
.. index::
pair: cluster option; node-health-red
node-health-red
- :ref:`score <score>`
- 0
- The score to use for a node health attribute whose value is ``red``.
Only used when ``node-health-strategy`` is ``progressive`` or
``custom``.
* - .. _cluster_recheck_interval:
.. index::
pair: cluster option; cluster-recheck-interval
cluster-recheck-interval
- :ref:`duration <duration>`
- 15min
- Pacemaker is primarily event-driven, and looks ahead to know when to
recheck the cluster for failure timeouts and most time-based rules
*(since 2.0.3)*. However, it will also recheck the cluster after this
amount of inactivity. This has two goals: rules with ``date_spec`` are
only guaranteed to be checked this often, and it also serves as a
fail-safe for some kinds of scheduler bugs. A value of 0 disables this
polling.
* - .. _shutdown_lock:
.. index::
pair: cluster option; shutdown-lock
shutdown-lock
- :ref:`boolean <boolean>`
- false
- The default of false allows active resources to be recovered elsewhere
when their node is cleanly shut down, which is what the vast majority of
users will want. However, some users prefer to make resources highly
available only for failures, with no recovery for clean shutdowns. If
this option is true, resources active on a node when it is cleanly shut
down are kept "locked" to that node (not allowed to run elsewhere) until
they start again on that node after it rejoins (or for at most
``shutdown-lock-limit``, if set). Stonith resources and Pacemaker Remote
connections are never locked. Clone and bundle instances and the
promoted role of promotable clones are currently never locked, though
support could be added in a future release. Locks may be manually
cleared using the ``--refresh`` option of ``crm_resource`` (both the
resource and node must be specified; this works with remote nodes if
their connection resource's ``target-role`` is set to ``Stopped``, but
not if Pacemaker Remote is stopped on the remote node without disabling
the connection resource). *(since 2.0.4)*
* - .. _shutdown_lock_limit:
.. index::
pair: cluster option; shutdown-lock-limit
shutdown-lock-limit
- :ref:`duration <duration>`
- 0
- If ``shutdown-lock`` is true, and this is set to a nonzero time
duration, locked resources will be allowed to start after this much time
has passed since the node shutdown was initiated, even if the node has
not rejoined. (This works with remote nodes only if their connection
resource's ``target-role`` is set to ``Stopped``.) *(since 2.0.4)*
* - .. _remove_after_stop:
.. index::
pair: cluster option; remove-after-stop
remove-after-stop
- :ref:`boolean <boolean>`
- false
- *Deprecated* Whether the cluster should remove resources from
Pacemaker's executor after they are stopped. Values other than the
default are, at best, poorly tested and potentially dangerous. This
option is deprecated and will be removed in a future release.
* - .. _startup_fencing:
.. index::
pair: cluster option; startup-fencing
startup-fencing
- :ref:`boolean <boolean>`
- true
- *Advanced Use Only:* Whether the cluster should fence unseen nodes at
start-up. Setting this to false is unsafe, because the unseen nodes
could be active and running resources but unreachable. ``dc-deadtime``
acts as a grace period before this fencing, since a DC must be elected
to schedule fencing.
* - .. _election_timeout:
.. index::
pair: cluster option; election-timeout
election-timeout
- :ref:`duration <duration>`
- 2min
- *Advanced Use Only:* If a winner is not declared within this much time
of starting an election, the node that initiated the election will
declare itself the winner.
* - .. _shutdown_escalation:
.. index::
pair: cluster option; shutdown-escalation
shutdown-escalation
- :ref:`duration <duration>`
- 20min
- *Advanced Use Only:* The controller will exit immediately if a shutdown
does not complete within this much time.
* - .. _join_integration_timeout:
.. index::
pair: cluster option; join-integration-timeout
join-integration-timeout
- :ref:`duration <duration>`
- 3min
- *Advanced Use Only:* If you need to adjust this value, it probably
indicates the presence of a bug.
* - .. _join_finalization_timeout:
.. index::
pair: cluster option; join-finalization-timeout
join-finalization-timeout
- :ref:`duration <duration>`
- 30min
- *Advanced Use Only:* If you need to adjust this value, it probably
indicates the presence of a bug.
* - .. _transition_delay:
.. index::
pair: cluster option; transition-delay
transition-delay
- :ref:`duration <duration>`
- 0s
- *Advanced Use Only:* Delay cluster recovery for the configured interval
to allow for additional or related events to occur. This can be useful
if your configuration is sensitive to the order in which ping updates
arrive. Enabling this option will slow down cluster recovery under all
conditions.