Diffstat (limited to 'doc/cephadm/services')
 doc/cephadm/services/index.rst      | 24 +++++++++++++++++++++++-
 doc/cephadm/services/monitoring.rst | 31 +++++++++++++++++++++++++++++++
 doc/cephadm/services/nfs.rst        |  2 +-
 doc/cephadm/services/osd.rst        |  2 +-
 doc/cephadm/services/rgw.rst        |  5 +++++
 5 files changed, 61 insertions(+), 3 deletions(-)
diff --git a/doc/cephadm/services/index.rst b/doc/cephadm/services/index.rst
index 82f83bfac..c1da5d15f 100644
--- a/doc/cephadm/services/index.rst
+++ b/doc/cephadm/services/index.rst
@@ -357,7 +357,9 @@ Or in YAML:
Placement by pattern matching
-----------------------------
-Daemons can be placed on hosts as well:
+Daemons can be placed on hosts using a host pattern as well.
+By default, the host pattern is matched using ``fnmatch``, which supports
+UNIX shell-style wildcards (see https://docs.python.org/3/library/fnmatch.html):
.. prompt:: bash #
@@ -385,6 +387,26 @@ Or in YAML:
placement:
host_pattern: "*"
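+For example, assuming hosts named ``host1`` through ``host3``, a shell-style
+wildcard range selects exactly those hosts (a sketch; substitute your own
+host names and service type):
+
+.. prompt:: bash #
+
+   ceph orch apply node-exporter --placement='host[1-3]'
+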
+The host pattern also supports regular expressions. To use a regex,
+either prefix the pattern with ``regex:`` when using the
+command line, or set the ``pattern_type`` field to ``regex``
+when using YAML.
+
+On the command line:
+
+.. prompt:: bash #
+
+ ceph orch apply prometheus --placement='regex:FOO[0-9]|BAR[0-9]'
+
+In YAML:
+
+.. code-block:: yaml
+
+ service_type: prometheus
+ placement:
+ host_pattern:
+ pattern: 'FOO[0-9]|BAR[0-9]'
+ pattern_type: regex
Changing the number of daemons
------------------------------
diff --git a/doc/cephadm/services/monitoring.rst b/doc/cephadm/services/monitoring.rst
index a17a5ba03..d95504796 100644
--- a/doc/cephadm/services/monitoring.rst
+++ b/doc/cephadm/services/monitoring.rst
@@ -83,6 +83,37 @@ steps below:
ceph orch apply grafana
+Enabling security for the monitoring stack
+------------------------------------------
+
+By default, in a cephadm-managed cluster, the monitoring components are set up and configured without enabling security measures.
+While this suffices for certain deployments, others with strict security needs may find it necessary to protect the
+monitoring stack against unauthorized access. In such cases, cephadm relies on a specific configuration parameter,
+``mgr/cephadm/secure_monitoring_stack``, which toggles the security settings for all monitoring components. To activate security
+measures, set this option to ``true`` with a command of the following form:
+
+ .. prompt:: bash #
+
+ ceph config set mgr mgr/cephadm/secure_monitoring_stack true
+
+This change will trigger a sequence of reconfigurations across all monitoring daemons, typically requiring a
+few minutes until all components are fully operational. The updated secure configuration includes the following modifications:
+
+#. Prometheus: basic authentication is required to access the web portal and TLS is enabled for secure communication.
+#. Alertmanager: basic authentication is required to access the web portal and TLS is enabled for secure communication.
+#. Node Exporter: TLS is enabled for secure communication.
+#. Grafana: TLS is enabled and authentication is required to access the datasource information.
+
+In this secure setup, users will need to set up authentication
+(username/password) for both Prometheus and Alertmanager. By default the
+username and password are set to ``admin``/``admin``. The user can change these
+values with the commands ``ceph orch prometheus set-credentials`` and
+``ceph orch alertmanager set-credentials`` respectively. These commands accept
+the username/password either as parameters or via a JSON file, the latter of
+which enhances security. Additionally, cephadm provides the commands
+``ceph orch prometheus get-credentials`` and
+``ceph orch alertmanager get-credentials`` to retrieve the current credentials.
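+
+For example, to replace the default Prometheus credentials and then verify the
+change (``myuser`` and ``mypassword`` are placeholder values):
+
+.. prompt:: bash #
+
+   ceph orch prometheus set-credentials myuser mypassword
+   ceph orch prometheus get-credentials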
+
.. _cephadm-monitoring-centralized-logs:
Centralized Logging in Ceph
diff --git a/doc/cephadm/services/nfs.rst b/doc/cephadm/services/nfs.rst
index 2f12c5916..ab616ddcb 100644
--- a/doc/cephadm/services/nfs.rst
+++ b/doc/cephadm/services/nfs.rst
@@ -15,7 +15,7 @@ Deploying NFS ganesha
=====================
Cephadm deploys an NFS Ganesha daemon (or a set of daemons). The configuration for
-NFS is stored in the ``nfs-ganesha`` pool and exports are managed via the
+NFS is stored in the ``.nfs`` pool and exports are managed via the
``ceph nfs export ...`` commands and via the dashboard.
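+
+For example, to list the exports managed for an NFS cluster (assuming a
+cluster named ``mynfs``; ``ceph nfs export ls`` takes the cluster name):
+
+.. prompt:: bash #
+
+   ceph nfs export ls mynfs
+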
To deploy an NFS Ganesha gateway, run the following command:
diff --git a/doc/cephadm/services/osd.rst b/doc/cephadm/services/osd.rst
index f62b0f831..aa906e239 100644
--- a/doc/cephadm/services/osd.rst
+++ b/doc/cephadm/services/osd.rst
@@ -232,7 +232,7 @@ Remove an OSD
Removing an OSD from a cluster involves two steps:
-#. evacuating all placement groups (PGs) from the cluster
+#. evacuating all placement groups (PGs) from the OSD
#. removing the PG-free OSD from the cluster
The following command performs these two steps:
diff --git a/doc/cephadm/services/rgw.rst b/doc/cephadm/services/rgw.rst
index 20ec39a88..ed0b14936 100644
--- a/doc/cephadm/services/rgw.rst
+++ b/doc/cephadm/services/rgw.rst
@@ -246,6 +246,7 @@ It is a yaml format file with the following properties:
virtual_interface_networks: [ ... ] # optional: list of CIDR networks
use_keepalived_multicast: <bool> # optional: Default is False.
vrrp_interface_network: <string>/<string> # optional: ex: 192.168.20.0/24
+ health_check_interval: <string> # optional: Default is 2s.
ssl_cert: | # optional: SSL certificate and key
-----BEGIN CERTIFICATE-----
...
@@ -273,6 +274,7 @@ It is a yaml format file with the following properties:
monitor_port: <integer> # ex: 1967, used by haproxy for load balancer status
virtual_interface_networks: [ ... ] # optional: list of CIDR networks
first_virtual_router_id: <integer> # optional: default 50
+ health_check_interval: <string> # optional: Default is 2s.
ssl_cert: | # optional: SSL certificate and key
-----BEGIN CERTIFICATE-----
...
@@ -321,6 +323,9 @@ where the properties of this service specification are:
keepalived will have different virtual_router_id. In the case of using ``virtual_ips_list``,
each IP will create its own virtual router. So the first one will have ``first_virtual_router_id``,
second one will have ``first_virtual_router_id`` + 1, etc. Valid values go from 1 to 255.
+* ``health_check_interval``
+  The interval between the health checks that haproxy performs on the
+  backend servers. The default is 2 seconds.
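+
+For example, a minimal ingress spec sketch that sets a 5-second health check
+interval (the service names, host, and addresses are placeholders):
+
+.. code-block:: yaml
+
+    service_type: ingress
+    service_id: rgw.myrgw
+    placement:
+      hosts:
+        - host1
+    spec:
+      backend_service: rgw.myrgw
+      virtual_ip: 192.168.20.1/24
+      frontend_port: 8080
+      monitor_port: 1967
+      health_check_interval: 5s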
.. _ingress-virtual-ip: