author    Daniel Baumann <daniel.baumann@progress-linux.org> 2024-04-27 18:24:20 +0000
committer Daniel Baumann <daniel.baumann@progress-linux.org> 2024-04-27 18:24:20 +0000
commit    483eb2f56657e8e7f419ab1a4fab8dce9ade8609 (patch)
tree      e5d88d25d870d5dedacb6bbdbe2a966086a0a5cf /doc/rados/deployment
parent    Initial commit. (diff)
Adding upstream version 14.2.21.
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'doc/rados/deployment')
-rw-r--r--  doc/rados/deployment/ceph-deploy-admin.rst          38
-rw-r--r--  doc/rados/deployment/ceph-deploy-install.rst        46
-rw-r--r--  doc/rados/deployment/ceph-deploy-keys.rst           32
-rw-r--r--  doc/rados/deployment/ceph-deploy-mds.rst            42
-rw-r--r--  doc/rados/deployment/ceph-deploy-mon.rst            56
-rw-r--r--  doc/rados/deployment/ceph-deploy-new.rst            46
-rw-r--r--  doc/rados/deployment/ceph-deploy-osd.rst            87
-rw-r--r--  doc/rados/deployment/ceph-deploy-purge.rst          25
-rw-r--r--  doc/rados/deployment/index.rst                       58
-rw-r--r--  doc/rados/deployment/preflight-checklist.rst        109
10 files changed, 539 insertions, 0 deletions
diff --git a/doc/rados/deployment/ceph-deploy-admin.rst b/doc/rados/deployment/ceph-deploy-admin.rst
new file mode 100644
index 00000000..a91f69cf
--- /dev/null
+++ b/doc/rados/deployment/ceph-deploy-admin.rst
@@ -0,0 +1,38 @@
+=============
+ Admin Tasks
+=============
+
+Once you have set up a cluster with ``ceph-deploy``, you may
+provide the client admin key and the Ceph configuration file
+to another host so that a user on the host may use the ``ceph``
+command line as an administrative user.
+
+
+Create an Admin Host
+====================
+
+To enable a host to execute ceph commands with administrator
+privileges, use the ``admin`` command. ::
+
+ ceph-deploy admin {host-name [host-name]...}
+
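+For example, assuming two hypothetical hosts named ``node1`` and ``node2``::
+
+ ceph-deploy admin node1 node2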
+
+Deploy Config File
+==================
+
+To send an updated copy of the Ceph configuration file to hosts
+in your cluster, use the ``config push`` command. ::
+
+ ceph-deploy config push {host-name [host-name]...}
+
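+For example, to push the configuration to three hypothetical hosts::
+
+ ceph-deploy config push node1 node2 node3
+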
+.. tip:: With a host-naming convention that uses a base name plus an increment,
+ it is easy to deploy configuration files via simple scripts
+ (e.g., ``ceph-deploy config push hostname{1,2,3,4,5}``).
+
+Retrieve Config File
+====================
+
+To retrieve a copy of the Ceph configuration file from a host
+in your cluster, use the ``config pull`` command. ::
+
+ ceph-deploy config pull {host-name [host-name]...}
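+
+For example, to retrieve the configuration from a hypothetical host named ``node1``::
+
+ ceph-deploy config pull node1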
diff --git a/doc/rados/deployment/ceph-deploy-install.rst b/doc/rados/deployment/ceph-deploy-install.rst
new file mode 100644
index 00000000..9a4bbc4e
--- /dev/null
+++ b/doc/rados/deployment/ceph-deploy-install.rst
@@ -0,0 +1,46 @@
+====================
+ Package Management
+====================
+
+Install
+=======
+
+To install Ceph packages on your cluster hosts, open a command line on your
+client machine and type the following::
+
+ ceph-deploy install {hostname [hostname] ...}
+
+Without additional arguments, ``ceph-deploy`` will install the most recent
+major release of Ceph to the cluster host(s). To specify a particular package,
+you may select from the following:
+
+- ``--release <code-name>``
+- ``--testing``
+- ``--dev <branch-or-tag>``
+
+For example::
+
+ ceph-deploy install --release cuttlefish hostname1
+ ceph-deploy install --testing hostname2
+ ceph-deploy install --dev wip-some-branch hostname{1,2,3,4,5}
+
+For additional usage, execute::
+
+ ceph-deploy install -h
+
+
+Uninstall
+=========
+
+To uninstall Ceph packages from your cluster hosts, open a terminal on
+your admin host and type the following::
+
+ ceph-deploy uninstall {hostname [hostname] ...}
+
+On a Debian or Ubuntu system, you may also::
+
+ ceph-deploy purge {hostname [hostname] ...}
+
+The tool will uninstall ``ceph`` packages from the specified hosts. Purge
+additionally removes configuration files.
+
diff --git a/doc/rados/deployment/ceph-deploy-keys.rst b/doc/rados/deployment/ceph-deploy-keys.rst
new file mode 100644
index 00000000..3e106c9c
--- /dev/null
+++ b/doc/rados/deployment/ceph-deploy-keys.rst
@@ -0,0 +1,32 @@
+=================
+ Keys Management
+=================
+
+
+Gather Keys
+===========
+
+Before you can provision a host to run OSDs or metadata servers, you must gather
+monitor keys and the OSD and MDS bootstrap keyrings. To gather keys, enter the
+following::
+
+ ceph-deploy gatherkeys {monitor-host}
+
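+For example, assuming a monitor running on a hypothetical host named ``mon1``::
+
+ ceph-deploy gatherkeys mon1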
+
+.. note:: To retrieve the keys, specify a host that is running a
+ Ceph monitor.
+
+.. note:: If you have specified multiple monitors in the setup of the cluster,
+ make sure that all monitors are up and running. If the monitors haven't
+ formed quorum, ``ceph-create-keys`` will not finish and the keys will not
+ be generated.
+
+Forget Keys
+===========
+
+When you are no longer using ``ceph-deploy`` (or if you are recreating a
+cluster), you should delete the keys in the local directory of your admin host.
+To delete keys, enter the following::
+
+ ceph-deploy forgetkeys
+
diff --git a/doc/rados/deployment/ceph-deploy-mds.rst b/doc/rados/deployment/ceph-deploy-mds.rst
new file mode 100644
index 00000000..aee5242a
--- /dev/null
+++ b/doc/rados/deployment/ceph-deploy-mds.rst
@@ -0,0 +1,42 @@
+============================
+ Add/Remove Metadata Server
+============================
+
+With ``ceph-deploy``, adding and removing metadata servers is a simple task. You
+just add or remove one or more metadata servers on the command line with one
+command.
+
+See `MDS Config Reference`_ for details on configuring metadata servers.
+
+
+Add a Metadata Server
+=====================
+
+Once you deploy monitors and OSDs you may deploy the metadata server(s). ::
+
+ ceph-deploy mds create {host-name}[:{daemon-name}] [{host-name}[:{daemon-name}] ...]
+
+You may optionally specify a name for the daemon instance if you would like to
+run multiple daemons on a single server.
+
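+For example, assuming a hypothetical host named ``node1``, the first command
+below deploys a metadata server with the default name, and the second gives the
+daemon an explicit (hypothetical) instance name::
+
+ ceph-deploy mds create node1
+ ceph-deploy mds create node1:mds-a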
+
+Remove a Metadata Server
+========================
+
+Coming soon...
+
+.. If you have a metadata server in your cluster that you'd like to remove, you may use
+.. the ``destroy`` option. ::
+
+.. ceph-deploy mds destroy {host-name}[:{daemon-name}] [{host-name}[:{daemon-name}] ...]
+
+.. You may specify a daemon instance a name (optional) if you would like to destroy
+.. a particular daemon that runs on a single server with multiple MDS daemons.
+
+.. .. note:: Ensure that if you remove a metadata server, the remaining metadata
+ servers will be able to service requests from CephFS clients. If that is not
+ possible, consider adding a metadata server before destroying the metadata
+ server you would like to take offline.
+
+
+.. _MDS Config Reference: ../../../cephfs/mds-config-ref
diff --git a/doc/rados/deployment/ceph-deploy-mon.rst b/doc/rados/deployment/ceph-deploy-mon.rst
new file mode 100644
index 00000000..bda34fee
--- /dev/null
+++ b/doc/rados/deployment/ceph-deploy-mon.rst
@@ -0,0 +1,56 @@
+=====================
+ Add/Remove Monitors
+=====================
+
+With ``ceph-deploy``, adding and removing monitors is a simple task. You just
+add or remove one or more monitors on the command line with one command. Before
+``ceph-deploy``, the process of `adding and removing monitors`_ involved
+numerous manual steps. Using ``ceph-deploy`` imposes a restriction: **you may
+only install one monitor per host.**
+
+.. note:: We do not recommend commingling monitors and OSDs on
+ the same host.
+
+For high availability, you should run a production Ceph cluster with **AT
+LEAST** three monitors. Ceph uses the Paxos algorithm, which requires a
+consensus among the majority of monitors in a quorum. With Paxos, a two-monitor
+cluster cannot tolerate the failure of either monitor, because the surviving
+monitor alone cannot form a majority. A majority of monitors is counted as
+follows: 1 of 1, 2 of 3, 3 of 4, 3 of 5, 4 of 6, and so on.
+
+See `Monitor Config Reference`_ for details on configuring monitors.
+
+
+Add a Monitor
+=============
+
+Once you create a cluster and install Ceph packages to the monitor host(s), you
+may deploy the monitor(s) to the monitor host(s). When using ``ceph-deploy``,
+the tool enforces a single monitor per host. ::
+
+ ceph-deploy mon create {host-name [host-name]...}
+
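+For example, to deploy monitors to three hypothetical hosts::
+
+ ceph-deploy mon create mon1 mon2 mon3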
+
+.. note:: Ensure that you add monitors such that they may arrive at a consensus
+ among a majority of monitors, otherwise other steps (like ``ceph-deploy gatherkeys``)
+ will fail.
+
+.. note:: When adding a monitor on a host that was not among the hosts
+ initially defined with the ``ceph-deploy new`` command, a ``public network``
+ statement needs to be added to the ``ceph.conf`` file.
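+
+A minimal sketch of such a ``public network`` entry in ``ceph.conf``, assuming
+a hypothetical ``10.0.0.0/24`` public subnet::
+
+ [global]
+ # 10.0.0.0/24 is hypothetical; use your cluster's public subnet.
+ public network = 10.0.0.0/24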
+
+Remove a Monitor
+================
+
+If you have a monitor in your cluster that you'd like to remove, you may use
+the ``destroy`` option. ::
+
+ ceph-deploy mon destroy {host-name [host-name]...}
+
+
+.. note:: Ensure that if you remove a monitor, the remaining monitors will be
+ able to establish a consensus. If that is not possible, consider adding a
+ monitor before removing the monitor you would like to take offline.
+
+
+.. _adding and removing monitors: ../../operations/add-or-rm-mons
+.. _Monitor Config Reference: ../../configuration/mon-config-ref
diff --git a/doc/rados/deployment/ceph-deploy-new.rst b/doc/rados/deployment/ceph-deploy-new.rst
new file mode 100644
index 00000000..1ddaf570
--- /dev/null
+++ b/doc/rados/deployment/ceph-deploy-new.rst
@@ -0,0 +1,46 @@
+==================
+ Create a Cluster
+==================
+
+The first step in using Ceph with ``ceph-deploy`` is to create a new Ceph
+cluster. A new Ceph cluster has:
+
+- A Ceph configuration file, and
+- A monitor keyring.
+
+The Ceph configuration file consists of at least the following (a minimal
+sketch appears below):
+
+- Its own filesystem ID (``fsid``),
+- The initial monitor hostname(s), and
+- The initial monitor IP address(es).
+
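+A minimal sketch of such a file, as ``ceph-deploy new`` would generate it; the
+``fsid``, host name, and IP address below are hypothetical::
+
+ [global]
+ # Hypothetical values -- ceph-deploy new fills these in for your cluster.
+ fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
+ mon_initial_members = mon1
+ mon_host = 192.168.0.1
+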
+For additional details, see the `Monitor Configuration Reference`_.
+
+The ``ceph-deploy`` tool also creates a monitor keyring and populates it with a
+``[mon.]`` key. For additional details, see the `Cephx Guide`_.
+
+
+Usage
+-----
+
+To create a cluster with ``ceph-deploy``, use the ``new`` command and specify
+the host(s) that will be initial members of the monitor quorum. ::
+
+ ceph-deploy new {host [host], ...}
+
+For example::
+
+ ceph-deploy new mon1.foo.com
+ ceph-deploy new mon{1,2,3}
+
+The ``ceph-deploy`` utility will use DNS to resolve hostnames to IP
+addresses. The monitors will be named using the first component of
+the name (e.g., ``mon1`` above). It will add the specified host names
+to the Ceph configuration file. For additional details, execute::
+
+ ceph-deploy new -h
+
+
+
+.. _Monitor Configuration Reference: ../../configuration/mon-config-ref
+.. _Cephx Guide: ../../../dev/mon-bootstrap#secret-keys
diff --git a/doc/rados/deployment/ceph-deploy-osd.rst b/doc/rados/deployment/ceph-deploy-osd.rst
new file mode 100644
index 00000000..3994adc8
--- /dev/null
+++ b/doc/rados/deployment/ceph-deploy-osd.rst
@@ -0,0 +1,87 @@
+=================
+ Add/Remove OSDs
+=================
+
+Adding Ceph OSD Daemons to your cluster and removing them may involve a few
+more steps than adding and removing other Ceph daemons. Ceph OSD Daemons write
+data to the disk and to journals, so you need to provide a disk for the OSD
+and a path to the journal partition (this is the most common configuration,
+but you may configure your system to your own needs).
+
+In Ceph v0.60 and later releases, Ceph supports ``dm-crypt`` on-disk encryption.
+You may specify the ``--dmcrypt`` argument when preparing an OSD to tell
+``ceph-deploy`` that you want to use encryption. You may also specify the
+``--dmcrypt-key-dir`` argument to specify the location of ``dm-crypt``
+encryption keys.
+
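+For example, assuming a hypothetical data device ``/dev/sdb`` on a host named
+``osd-server1``, an encrypted OSD might be created with::
+
+ # /dev/sdb and osd-server1 are hypothetical; substitute your own device and host.
+ ceph-deploy osd create --data /dev/sdb --dmcrypt osd-server1
+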
+You should test various drive configurations to gauge their throughput before
+building out a large cluster. See `Data Storage`_ for additional details.
+
+
+List Disks
+==========
+
+To list the disks on a node, execute the following command::
+
+ ceph-deploy disk list {node-name [node-name]...}
+
+
+Zap Disks
+=========
+
+To zap a disk (delete its partition table) in preparation for use with Ceph,
+execute the following::
+
+ ceph-deploy disk zap {osd-server-name}:{disk-name}
+ ceph-deploy disk zap osdserver1:sdb
+
+.. important:: This will delete all data.
+
+
+Create OSDs
+===========
+
+Once you create a cluster, install Ceph packages, and gather keys, you
+may create the OSDs and deploy them to the OSD node(s). If you need to
+identify a disk or zap it prior to preparing it for use as an OSD,
+see `List Disks`_ and `Zap Disks`_. ::
+
+ ceph-deploy osd create --data {data-disk} {node-name}
+
+For example::
+
+ ceph-deploy osd create --data /dev/ssd osd-server1
+
+For bluestore (the default), the example assumes a disk dedicated to one Ceph
+OSD Daemon. Filestore is also supported, in which case a ``--journal`` flag in
+addition to ``--filestore`` needs to be used to define the journal device on
+the remote host.
+
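+For example, assuming hypothetical devices ``/dev/sdb`` (data) and ``/dev/sdc1``
+(journal) on a host named ``osd-server1``, a filestore OSD might be created
+with::
+
+ # Device names and host are hypothetical; substitute your own.
+ ceph-deploy osd create --filestore --data /dev/sdb --journal /dev/sdc1 osd-server1
+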
+.. note:: When running multiple Ceph OSD daemons on a single node, and
+ sharing a partitioned journal with each OSD daemon, you should consider
+ the entire node the minimum failure domain for CRUSH purposes, because
+ if the SSD drive fails, all of the Ceph OSD daemons that journal to it
+ will fail too.
+
+
+List OSDs
+=========
+
+To list the OSDs deployed on one or more nodes, execute the following command::
+
+ ceph-deploy osd list {node-name}
+
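+For example, assuming a hypothetical OSD host named ``osd-server1``::
+
+ ceph-deploy osd list osd-server1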
+
+Destroy OSDs
+============
+
+.. note:: Coming soon. See `Remove OSDs`_ for manual procedures.
+
+.. To destroy an OSD, execute the following command::
+
+.. ceph-deploy osd destroy {node-name}:{path-to-disk}[:{path/to/journal}]
+
+.. Destroying an OSD will take it ``down`` and ``out`` of the cluster.
+
+.. _Data Storage: ../../../start/hardware-recommendations#data-storage
+.. _Remove OSDs: ../../operations/add-or-rm-osds#removing-osds-manual
diff --git a/doc/rados/deployment/ceph-deploy-purge.rst b/doc/rados/deployment/ceph-deploy-purge.rst
new file mode 100644
index 00000000..685c3c4a
--- /dev/null
+++ b/doc/rados/deployment/ceph-deploy-purge.rst
@@ -0,0 +1,25 @@
+==============
+ Purge a Host
+==============
+
+When you remove Ceph daemons and uninstall Ceph, there may still be extraneous
+data from the cluster on your server. The ``purge`` and ``purgedata`` commands
+provide a convenient means of cleaning up a host.
+
+
+Purge Data
+==========
+
+To remove all data from ``/var/lib/ceph`` (but leave Ceph packages intact),
+execute the ``purgedata`` command. ::
+
+ ceph-deploy purgedata {hostname} [{hostname} ...]
+
+
+Purge
+=====
+
+To remove all data from ``/var/lib/ceph`` and uninstall Ceph packages, execute
+the ``purge`` command. ::
+
+ ceph-deploy purge {hostname} [{hostname} ...]
\ No newline at end of file
diff --git a/doc/rados/deployment/index.rst b/doc/rados/deployment/index.rst
new file mode 100644
index 00000000..0853e4a3
--- /dev/null
+++ b/doc/rados/deployment/index.rst
@@ -0,0 +1,58 @@
+=================
+ Ceph Deployment
+=================
+
+The ``ceph-deploy`` tool is a way to deploy Ceph relying only upon SSH access to
+the servers, ``sudo``, and some Python. It runs on your workstation, and does
+not require servers, databases, or any other tools. If you set up and
+tear down Ceph clusters a lot, and want minimal extra bureaucracy,
+``ceph-deploy`` is an ideal tool. The ``ceph-deploy`` tool is not a generic
+deployment system. It was designed exclusively for Ceph users who want to get
+Ceph up and running quickly with sensible initial configuration settings without
+the overhead of installing Chef, Puppet or Juju. Users who want fine-grained
+control over security settings, partitions or directory locations should use a
+tool such as Juju, Puppet, `Chef`_ or Crowbar.
+
+
+With ``ceph-deploy``, you can develop scripts to install Ceph packages on remote
+hosts, create a cluster, add monitors, gather (or forget) keys, add OSDs and
+metadata servers, configure admin hosts, and tear down the clusters.
+
+.. raw:: html
+
+ <table cellpadding="10"><tbody valign="top"><tr><td>
+
+.. toctree::
+
+ Preflight Checklist <preflight-checklist>
+ Install Ceph <ceph-deploy-install>
+
+.. raw:: html
+
+ </td><td>
+
+.. toctree::
+
+ Create a Cluster <ceph-deploy-new>
+ Add/Remove Monitor(s) <ceph-deploy-mon>
+ Key Management <ceph-deploy-keys>
+ Add/Remove OSD(s) <ceph-deploy-osd>
+ Add/Remove MDS(s) <ceph-deploy-mds>
+
+
+.. raw:: html
+
+ </td><td>
+
+.. toctree::
+
+ Purge Hosts <ceph-deploy-purge>
+ Admin Tasks <ceph-deploy-admin>
+
+
+.. raw:: html
+
+ </td></tr></tbody></table>
+
+
+.. _Chef: http://tracker.ceph.com/projects/ceph/wiki/Deploying_Ceph_with_Chef
diff --git a/doc/rados/deployment/preflight-checklist.rst b/doc/rados/deployment/preflight-checklist.rst
new file mode 100644
index 00000000..d45de989
--- /dev/null
+++ b/doc/rados/deployment/preflight-checklist.rst
@@ -0,0 +1,109 @@
+=====================
+ Preflight Checklist
+=====================
+
+.. versionadded:: 0.60
+
+This **Preflight Checklist** will help you prepare an admin node for use with
+``ceph-deploy``, and server nodes for use with passwordless ``ssh`` and
+``sudo``.
+
+Before you can deploy Ceph using ``ceph-deploy``, you need to ensure that you
+have a few things set up first on your admin node and on nodes running Ceph
+daemons.
+
+
+Install an Operating System
+===========================
+
+Install a recent release of Debian or Ubuntu (e.g., 16.04 LTS) on
+your nodes. For additional details on operating systems, or to use an
+operating system other than Debian or Ubuntu, see `OS Recommendations`_.
+
+
+Install an SSH Server
+=====================
+
+The ``ceph-deploy`` utility requires ``ssh``, so your server node(s) require an
+SSH server. ::
+
+ sudo apt-get install openssh-server
+
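+On RPM-based distributions (e.g., CentOS or RHEL), the equivalent would
+typically be::
+
+ sudo yum install openssh-server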
+
+Create a User
+=============
+
+Create a user on nodes running Ceph daemons.
+
+.. tip:: We recommend a username that brute force attackers won't
+ guess easily (e.g., something other than ``root``, ``ceph``, etc).
+
+::
+
+ ssh user@ceph-server
+ sudo useradd -d /home/ceph -m ceph
+ sudo passwd ceph
+
+
+``ceph-deploy`` installs packages onto your nodes. This means that
+the user you create requires passwordless ``sudo`` privileges.
+
+.. note:: We **DO NOT** recommend enabling the ``root`` password
+ for security reasons.
+
+To provide full privileges to the user, add the following to
+``/etc/sudoers.d/ceph``. ::
+
+ echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
+ sudo chmod 0440 /etc/sudoers.d/ceph
+
+
+Configure SSH
+=============
+
+Configure your admin machine with password-less SSH access to each node
+running Ceph daemons (leave the passphrase empty). ::
+
+ ssh-keygen
+ Generating public/private key pair.
+ Enter file in which to save the key (/ceph-client/.ssh/id_rsa):
+ Enter passphrase (empty for no passphrase):
+ Enter same passphrase again:
+ Your identification has been saved in /ceph-client/.ssh/id_rsa.
+ Your public key has been saved in /ceph-client/.ssh/id_rsa.pub.
+
+Copy the key to each node running Ceph daemons::
+
+ ssh-copy-id ceph@ceph-server
+
+Modify the ``~/.ssh/config`` file on your admin node so that it defaults
+to logging in as the user you created when no username is specified. ::
+
+ Host ceph-server
+ Hostname ceph-server.fqdn-or-ip-address.com
+ User ceph
+
+
+Install ceph-deploy
+===================
+
+To install ``ceph-deploy``, execute the following::
+
+ wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
+ echo deb http://ceph.com/debian-dumpling/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
+ sudo apt-get update
+ sudo apt-get install ceph-deploy
+
+
+Ensure Connectivity
+===================
+
+Ensure that your admin node has connectivity to the network and to your server
+nodes (e.g., configure ``iptables``, ``ufw``, or any other tools that may block
+connections or traffic forwarding so that they allow the traffic you need).
+
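+For example, on a node protected by ``iptables``, a minimal sketch (assuming
+the default Ceph ports and that you have not otherwise restricted traffic)
+might be::
+
+ # Assumes the default Ceph ports: 6789 for monitors, 6800-7300 for OSDs.
+ sudo iptables -A INPUT -p tcp --dport 6789 -j ACCEPT
+ sudo iptables -A INPUT -p tcp --dport 6800:7300 -j ACCEPT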
+
+Once you have completed this pre-flight checklist, you are ready to begin using
+``ceph-deploy``.
+
+.. _OS Recommendations: ../../../start/os-recommendations