From 483eb2f56657e8e7f419ab1a4fab8dce9ade8609 Mon Sep 17 00:00:00 2001 From: Daniel Baumann Date: Sat, 27 Apr 2024 20:24:20 +0200 Subject: Adding upstream version 14.2.21. Signed-off-by: Daniel Baumann --- doc/install/build-ceph.rst | 107 ++++ doc/install/clone-source.rst | 101 +++++ doc/install/get-packages.rst | 315 +++++++++++++++ doc/install/get-tarballs.rst | 14 + doc/install/index.rst | 71 ++++ doc/install/install-ceph-deploy.rst | 23 ++ doc/install/install-ceph-gateway.rst | 615 ++++++++++++++++++++++++++++++ doc/install/install-storage-cluster.rst | 91 +++++ doc/install/install-vm-cloud.rst | 130 +++++++ doc/install/manual-deployment.rst | 535 ++++++++++++++++++++++++++ doc/install/manual-freebsd-deployment.rst | 581 ++++++++++++++++++++++++++++ doc/install/mirrors.rst | 66 ++++ doc/install/upgrading-ceph.rst | 235 ++++++++++++ 13 files changed, 2884 insertions(+) create mode 100644 doc/install/build-ceph.rst create mode 100644 doc/install/clone-source.rst create mode 100644 doc/install/get-packages.rst create mode 100644 doc/install/get-tarballs.rst create mode 100644 doc/install/index.rst create mode 100644 doc/install/install-ceph-deploy.rst create mode 100644 doc/install/install-ceph-gateway.rst create mode 100644 doc/install/install-storage-cluster.rst create mode 100644 doc/install/install-vm-cloud.rst create mode 100644 doc/install/manual-deployment.rst create mode 100644 doc/install/manual-freebsd-deployment.rst create mode 100644 doc/install/mirrors.rst create mode 100644 doc/install/upgrading-ceph.rst diff --git a/doc/install/build-ceph.rst b/doc/install/build-ceph.rst new file mode 100644 index 00000000..147986e4 --- /dev/null +++ b/doc/install/build-ceph.rst @@ -0,0 +1,107 @@ +============ + Build Ceph +============ + +You can get Ceph software by retrieving Ceph source code and building it yourself. 
+To build Ceph, you need to set up a development environment, compile Ceph,
+and then either install it in user space or build packages and install the packages.
+
+Build Prerequisites
+===================
+
+
+.. tip:: Check this section to see if there are specific prerequisites for your
+   Linux/Unix distribution.
+
+Before you can build Ceph source code, you need to install several libraries
+and tools::
+
+    ./install-deps.sh
+
+.. note:: Some distributions that support Google's memory profiler tool may use
+   a different package name (e.g., ``libgoogle-perftools4``).
+
+Build Ceph
+==========
+
+Ceph is built using cmake. To build Ceph, navigate to your cloned Ceph
+repository and execute the following::
+
+    cd ceph
+    ./do_cmake.sh
+    cd build
+    make
+
+.. note:: By default, ``do_cmake.sh`` will build a debug version of Ceph that may
+   perform up to 5 times slower with certain workloads. Pass
+   ``-DCMAKE_BUILD_TYPE=RelWithDebInfo`` to ``do_cmake.sh`` if you would like to
+   build a release version of the Ceph executables instead.
+
+.. topic:: Hyperthreading
+
+   You can use ``make -j`` to execute multiple jobs, depending upon your system. For
+   example, ``make -j4`` may build faster on a dual-core processor.
+
+See `Installing a Build`_ to install a build in user space.
+
+Build Ceph Packages
+===================
+
+To build packages, you must clone the `Ceph`_ repository. You can create
+installation packages from the latest code using ``dpkg-buildpackage`` for
+Debian/Ubuntu or ``rpmbuild`` for the RPM Package Manager.
+
+.. tip:: When building on a multi-core CPU, use the ``-j`` option with a value
+   of twice the number of cores. For example, use ``-j4`` on a dual-core
+   processor to accelerate the build. 
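The ``-j`` guidance above can be wrapped in a tiny helper. The following is only a sketch, not part of the Ceph tree: it assumes GNU coreutils' ``nproc`` is available, and it only prints the ``make`` invocation it would use rather than running a build.

```shell
# Sketch: derive a `make -j` job count from the CPU count, following the tip
# above (cores * 2). `nproc` is an assumption; fall back to 2 jobs without it.
JOBS=$(nproc 2>/dev/null || echo 2)
JOBS=$((JOBS * 2))
# Cap the count so low-memory machines are not overwhelmed (arbitrary cap).
[ "$JOBS" -gt 16 ] && JOBS=16
echo "make -j${JOBS}"
```

Run the printed command from the ``build`` directory created by ``do_cmake.sh``.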
+
+
+Advanced Package Tool (APT)
+---------------------------
+
+To create ``.deb`` packages for Debian/Ubuntu, ensure that you have cloned the
+`Ceph`_ repository, installed the `Build Prerequisites`_ and installed
+``debhelper``::
+
+    sudo apt-get install debhelper
+
+Once you have installed ``debhelper``, you can build the packages::
+
+    sudo dpkg-buildpackage
+
+For multi-processor CPUs, use the ``-j`` option to accelerate the build.
+
+
+RPM Package Manager
+-------------------
+
+To create ``.rpm`` packages, ensure that you have cloned the `Ceph`_ repository,
+installed the `Build Prerequisites`_ and installed ``rpm-build`` and
+``rpmdevtools``::
+
+    yum install rpm-build rpmdevtools
+
+Once you have installed the tools, set up an RPM compilation environment::
+
+    rpmdev-setuptree
+
+Fetch the source tarball for the RPM compilation environment::
+
+    wget -P ~/rpmbuild/SOURCES/ https://download.ceph.com/tarballs/ceph-<version>.tar.bz2
+
+Or from the EU mirror::
+
+    wget -P ~/rpmbuild/SOURCES/ http://eu.ceph.com/tarballs/ceph-<version>.tar.bz2
+
+Extract the specfile::
+
+    tar --strip-components=1 -C ~/rpmbuild/SPECS/ --no-anchored -xvjf ~/rpmbuild/SOURCES/ceph-<version>.tar.bz2 "ceph.spec"
+
+Build the RPM packages::
+
+    rpmbuild -ba ~/rpmbuild/SPECS/ceph.spec
+
+For multi-processor CPUs, use the ``-j`` option to accelerate the build.
+
+.. _Ceph: ../clone-source
+.. _Installing a Build: ../install-storage-cluster#installing-a-build
diff --git a/doc/install/clone-source.rst b/doc/install/clone-source.rst
new file mode 100644
index 00000000..da62ee93
--- /dev/null
+++ b/doc/install/clone-source.rst
@@ -0,0 +1,101 @@
+=========================================
+ Cloning the Ceph Source Code Repository
+=========================================
+
+You may clone a Ceph branch of the Ceph source code by going to `github Ceph
+Repository`_, selecting a branch (``master`` by default), and clicking the
+**Download ZIP** button.
+
+.. 
_github Ceph Repository: https://github.com/ceph/ceph + + +To clone the entire git repository, install and configure ``git``. + + +Install Git +=========== + +To install ``git`` on Debian/Ubuntu, execute:: + + sudo apt-get install git + + +To install ``git`` on CentOS/RHEL, execute:: + + sudo yum install git + + +You must also have a ``github`` account. If you do not have a +``github`` account, go to `github.com`_ and register. +Follow the directions for setting up git at +`Set Up Git`_. + +.. _github.com: https://github.com +.. _Set Up Git: https://help.github.com/linux-set-up-git + + +Add SSH Keys (Optional) +======================= + +If you intend to commit code to Ceph or to clone using SSH +(``git@github.com:ceph/ceph.git``), you must generate SSH keys for github. + +.. tip:: If you only intend to clone the repository, you may + use ``git clone --recursive https://github.com/ceph/ceph.git`` + without generating SSH keys. + +To generate SSH keys for ``github``, execute:: + + ssh-keygen + +Get the key to add to your ``github`` account (the following example +assumes you used the default file path):: + + cat .ssh/id_rsa.pub + +Copy the public key. + +Go to your ``github`` account, click on "Account Settings" (i.e., the +'tools' icon); then, click "SSH Keys" on the left side navbar. + +Click "Add SSH key" in the "SSH Keys" list, enter a name for the key, paste the +key you generated, and press the "Add key" button. + + +Clone the Source +================ + +To clone the Ceph source code repository, execute:: + + git clone --recursive https://github.com/ceph/ceph.git + +Once ``git clone`` executes, you should have a full copy of the Ceph +repository. + +.. tip:: Make sure you maintain the latest copies of the submodules + included in the repository. Running ``git status`` will tell you if + the submodules are out of date. 
+ +:: + + cd ceph + git status + +If your submodules are out of date, run:: + + git submodule update --force --init --recursive + +Choose a Branch +=============== + +Once you clone the source code and submodules, your Ceph repository +will be on the ``master`` branch by default, which is the unstable +development branch. You may choose other branches too. + +- ``master``: The unstable development branch. +- ``stable``: The bugfix branch. +- ``next``: The release candidate branch. + +:: + + git checkout master diff --git a/doc/install/get-packages.rst b/doc/install/get-packages.rst new file mode 100644 index 00000000..f37a706d --- /dev/null +++ b/doc/install/get-packages.rst @@ -0,0 +1,315 @@ +============== + Get Packages +============== + +To install Ceph and other enabling software, you need to retrieve packages from +the Ceph repository. Follow this guide to get packages; then, proceed to the +`Install Ceph Object Storage`_. + + +Getting Packages +================ + +There are two ways to get packages: + +- **Add Repositories:** Adding repositories is the easiest way to get packages, + because package management tools will retrieve the packages and all enabling + software for you in most cases. However, to use this approach, each + :term:`Ceph Node` in your cluster must have internet access. + +- **Download Packages Manually:** Downloading packages manually is a convenient + way to install Ceph if your environment does not allow a :term:`Ceph Node` to + access the internet. + + +Requirements +============ + +All Ceph deployments require Ceph packages (except for development). You should +also add keys and recommended packages. + +- **Keys: (Recommended)** Whether you add repositories or download packages + manually, you should download keys to verify the packages. If you do not get + the keys, you may encounter security warnings. See `Add Keys`_ for details. 
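The checkout step above can be tried safely before touching a real clone. The sketch below is not from the Ceph docs: it demonstrates ``git checkout`` against a throwaway repository (the ``next`` branch is created locally just so there is something to check out); in your real Ceph clone you would check out one of the branches listed above instead.

```shell
# Sketch: demonstrate `git checkout <branch>` in a disposable repository.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git branch next                  # stand-in for an existing branch
git checkout -q next             # same form as `git checkout master`
git rev-parse --abbrev-ref HEAD  # prints the branch that is now checked out
```

After switching branches in the real repository, run ``git submodule update --init --recursive`` so the submodules match the branch you checked out.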
+ +- **Ceph: (Required)** All Ceph deployments require Ceph release packages, + except for deployments that use development packages (development, QA, and + bleeding edge deployments only). See `Add Ceph`_ for details. + +- **Ceph Development: (Optional)** If you are developing for Ceph, testing Ceph + development builds, or if you want features from the bleeding edge of Ceph + development, you may get Ceph development packages. See + `Add Ceph Development`_ for details. + + +If you intend to download packages manually, see Section `Download Packages`_. + + +Add Keys +======== + +Add a key to your system's list of trusted keys to avoid a security warning. For +major releases (e.g., ``hammer``, ``jewel``, ``luminous``) and development releases +(``release-name-rc1``, ``release-name-rc2``), use the ``release.asc`` key. + + +APT +--- + +To install the ``release.asc`` key, execute the following:: + + wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add - + + +RPM +--- + +To install the ``release.asc`` key, execute the following:: + + sudo rpm --import 'https://download.ceph.com/keys/release.asc' + +Add Ceph +======== + +Release repositories use the ``release.asc`` key to verify packages. +To install Ceph packages with the Advanced Package Tool (APT) or +Yellowdog Updater, Modified (YUM), you must add Ceph repositories. + +You may find releases for Debian/Ubuntu (installed with APT) at:: + + https://download.ceph.com/debian-{release-name} + +You may find releases for CentOS/RHEL and others (installed with YUM) at:: + + https://download.ceph.com/rpm-{release-name} + +The major releases of Ceph are summarized at: :ref:`ceph-releases` + +Every second major release is considered Long Term Stable (LTS). Critical +bugfixes are backported to LTS releases until their retirement. Since retired +releases are no longer maintained, we recommend that users upgrade their +clusters regularly - preferably to the latest LTS release. + +.. 
tip:: For non-US users: There might be a mirror close to you where
+   to download Ceph from. For more information see: `Ceph Mirrors`_.
+
+Debian Packages
+---------------
+
+Add a Ceph package repository to your system's list of APT sources. For newer
+versions of Debian/Ubuntu, call ``lsb_release -sc`` on the command line to
+get the short codename, and replace ``{codename}`` in the following command. ::
+
+    sudo apt-add-repository 'deb https://download.ceph.com/debian-luminous/ {codename} main'
+
+For older Linux distributions, you may execute the following command::
+
+    echo deb https://download.ceph.com/debian-luminous/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
+
+For earlier Ceph releases, replace ``{release-name}`` with the
+name of the Ceph release. You may call ``lsb_release -sc`` on the command line
+to get the short codename, and replace ``{codename}`` in the following command.
+
+::
+
+    sudo apt-add-repository 'deb https://download.ceph.com/debian-{release-name}/ {codename} main'
+
+For older Linux distributions, replace ``{release-name}`` with the name of the
+release::
+
+    echo deb https://download.ceph.com/debian-{release-name}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
+
+For development release packages, add our package repository to your system's
+list of APT sources. See `the testing Debian repository`_ for a complete list
+of Debian and Ubuntu releases supported. ::
+
+    echo deb https://download.ceph.com/debian-testing/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
+
+.. tip:: For non-US users: There might be a mirror close to you where
+   to download Ceph from. For more information see: `Ceph Mirrors`_.
+
+
+RPM Packages
+------------
+
+For major releases, you may add a Ceph entry to the ``/etc/yum.repos.d``
+directory. Create a ``ceph.repo`` file. 
In the example below, replace
+``{ceph-release}`` with a major release of Ceph (e.g., ``hammer``, ``jewel``, ``luminous``,
+etc.) and ``{distro}`` with your Linux distribution (e.g., ``el7``, etc.). You
+may view the https://download.ceph.com/rpm-{ceph-release}/ directory to see which
+distributions Ceph supports. Ceph packages must take priority over
+packages from other repositories (e.g., EPEL), so you must ensure that you set
+``priority=2``. ::
+
+    [ceph]
+    name=Ceph packages for $basearch
+    baseurl=https://download.ceph.com/rpm-{ceph-release}/{distro}/$basearch
+    enabled=1
+    priority=2
+    gpgcheck=1
+    gpgkey=https://download.ceph.com/keys/release.asc
+
+    [ceph-noarch]
+    name=Ceph noarch packages
+    baseurl=https://download.ceph.com/rpm-{ceph-release}/{distro}/noarch
+    enabled=1
+    priority=2
+    gpgcheck=1
+    gpgkey=https://download.ceph.com/keys/release.asc
+
+    [ceph-source]
+    name=Ceph source packages
+    baseurl=https://download.ceph.com/rpm-{ceph-release}/{distro}/SRPMS
+    enabled=0
+    priority=2
+    gpgcheck=1
+    gpgkey=https://download.ceph.com/keys/release.asc
+
+
+For specific packages, you may retrieve them by downloading the release package
+by name. Our development process generates a new release of Ceph every 3-4
+weeks. These packages are faster-moving than the major releases. Development
+packages have new features integrated quickly, while still undergoing several
+weeks of QA prior to release.
+
+The repository package installs the repository details on your local system for
+use with ``yum``. Replace ``{distro}`` with your Linux distribution, and
+``{release}`` with the specific release of Ceph::
+
+    su -c 'rpm -Uvh https://download.ceph.com/rpms/{distro}/x86_64/ceph-{release}.el7.noarch.rpm'
+
+You can download the RPMs directly from::
+
+    https://download.ceph.com/rpm-testing
+
+.. tip:: For non-US users: There might be a mirror close to you where
+   to download Ceph from. For more information see: `Ceph Mirrors`_. 
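If you script your provisioning, the repo file above can be generated rather than hand-edited. The following sketch is not part of the Ceph docs; it writes only the ``[ceph]`` stanza to a local file, with ``luminous``/``el7`` as illustrative values for the release and distribution.

```shell
# Sketch: generate the [ceph] stanza shown above for a chosen release/distro.
# "luminous" and "el7" are example values; substitute the ones for your system.
CEPH_RELEASE=luminous
DISTRO=el7
cat > ceph.repo <<EOF
[ceph]
name=Ceph packages for \$basearch
baseurl=https://download.ceph.com/rpm-${CEPH_RELEASE}/${DISTRO}/\$basearch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
EOF
grep baseurl ceph.repo
```

Copy the result to ``/etc/yum.repos.d/ceph.repo`` (as root), and repeat for the ``[ceph-noarch]`` and ``[ceph-source]`` stanzas if you need them.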
+
+
+Add Ceph Development
+====================
+
+If you are developing Ceph and need to deploy and test specific Ceph branches,
+ensure that you remove repository entries for major releases first.
+
+
+DEB Packages
+------------
+
+We automatically build Ubuntu packages for current development branches in the
+Ceph source code repository. These packages are intended for developers and QA
+only.
+
+Add the package repository to your system's list of APT sources, but
+replace ``{BRANCH}`` with the branch you'd like to use (e.g.,
+wip-hack, master). See `the shaman page`_ for a complete
+list of distributions we build. ::
+
+    curl -L https://shaman.ceph.com/api/repos/ceph/{BRANCH}/latest/ubuntu/$(lsb_release -sc)/repo/ | sudo tee /etc/apt/sources.list.d/shaman.list
+
+.. note:: If the repository is not ready, an HTTP 504 will be returned.
+
+Using ``latest`` in the URL resolves to the most recent
+commit that has been built. Alternatively, a specific sha1 can be specified.
+For Ubuntu Xenial and the master branch of Ceph, it would look like::
+
+    curl -L https://shaman.ceph.com/api/repos/ceph/master/53e772a45fdf2d211c0c383106a66e1feedec8fd/ubuntu/xenial/repo/ | sudo tee /etc/apt/sources.list.d/shaman.list
+
+
+.. warning:: Development repositories are no longer available after two weeks.
+
+RPM Packages
+------------
+
+For current development branches, you may add a Ceph entry to the
+``/etc/yum.repos.d`` directory. You can use `the shaman page`_ to retrieve the
+full details of a repo file. The file can be retrieved via an HTTP request, for
+example::
+
+    curl -L https://shaman.ceph.com/api/repos/ceph/{BRANCH}/latest/centos/7/repo/ | sudo tee /etc/yum.repos.d/shaman.repo
+
+Using ``latest`` in the URL resolves to the most recent
+commit that has been built. Alternatively, a specific sha1 can be specified. 
+For CentOS 7 and the master branch of Ceph, it would look like::
+
+    curl -L https://shaman.ceph.com/api/repos/ceph/master/53e772a45fdf2d211c0c383106a66e1feedec8fd/centos/7/repo/ | sudo tee /etc/yum.repos.d/shaman.repo
+
+
+.. warning:: Development repositories are no longer available after two weeks.
+
+.. note:: If the repository is not ready, an HTTP 504 will be returned.
+
+Download Packages
+=================
+
+If you are attempting to install behind a firewall in an environment without internet
+access, you must retrieve the packages (mirrored with all the necessary dependencies)
+before attempting an install.
+
+Debian Packages
+---------------
+
+Ceph requires additional third party libraries:
+
+- libaio1
+- libsnappy1
+- libcurl3
+- curl
+- libgoogle-perftools4
+- google-perftools
+- libleveldb1
+
+
+The repository package installs the repository details on your local system for
+use with ``apt``. Replace ``{release}`` with the latest Ceph release. Replace
+``{version}`` with the latest Ceph version number. Replace ``{distro}`` with
+your Linux distribution codename. Replace ``{arch}`` with the CPU architecture.
+
+::
+
+    wget -q https://download.ceph.com/debian-{release}/pool/main/c/ceph/ceph_{version}{distro}_{arch}.deb
+
+
+RPM Packages
+------------
+
+Ceph requires additional third party libraries.
+To add the EPEL repository, execute the following::
+
+    sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
+
+Ceph requires the following packages:
+
+- snappy
+- leveldb
+- gdisk
+- python-argparse
+- gperftools-libs
+
+
+Packages are currently built for the RHEL/CentOS 7 (``el7``) platform. The
+repository package installs the repository details on your local system for use
+with ``yum``. Replace ``{distro}`` with your distribution. 
::
+
+    su -c 'rpm -Uvh https://download.ceph.com/rpm-luminous/{distro}/noarch/ceph-{version}.{distro}.noarch.rpm'
+
+For example, for CentOS 7 (``el7``)::
+
+    su -c 'rpm -Uvh https://download.ceph.com/rpm-luminous/el7/noarch/ceph-release-1-0.el7.noarch.rpm'
+
+You can download the RPMs directly from::
+
+    https://download.ceph.com/rpm-luminous
+
+
+For earlier Ceph releases, replace ``{release-name}`` with the
+name of the Ceph release. You may call ``lsb_release -sc`` on the command
+line to get the short codename. ::
+
+    su -c 'rpm -Uvh https://download.ceph.com/rpm-{release-name}/{distro}/noarch/ceph-{version}.{distro}.noarch.rpm'
+
+
+
+.. _Install Ceph Object Storage: ../install-storage-cluster
+.. _the testing Debian repository: https://download.ceph.com/debian-testing/dists
+.. _the shaman page: https://shaman.ceph.com
+.. _Ceph Mirrors: ../mirrors
diff --git a/doc/install/get-tarballs.rst b/doc/install/get-tarballs.rst
new file mode 100644
index 00000000..175d0399
--- /dev/null
+++ b/doc/install/get-tarballs.rst
@@ -0,0 +1,14 @@
+====================================
+ Downloading a Ceph Release Tarball
+====================================
+
+As Ceph development progresses, the Ceph team releases new versions of the
+source code. You may download source code tarballs for Ceph releases here:
+
+`Ceph Release Tarballs`_
+
+.. tip:: For international users: There might be a mirror close to you where to download Ceph from. For more information see: `Ceph Mirrors`_.
+
+
+.. _Ceph Release Tarballs: https://download.ceph.com/tarballs/
+.. _Ceph Mirrors: ../mirrors
diff --git a/doc/install/index.rst b/doc/install/index.rst
new file mode 100644
index 00000000..d9dde72c
--- /dev/null
+++ b/doc/install/index.rst
@@ -0,0 +1,71 @@
+=======================
+ Installation (Manual)
+=======================
+
+
+Get Software
+============
+
+There are several methods for getting Ceph software. 
The easiest and most common
+method is to `get packages`_ by adding repositories for use with package
+management tools such as the Advanced Package Tool (APT) or Yellowdog Updater,
+Modified (YUM). You may also retrieve pre-compiled packages from the Ceph
+repository. Finally, you can retrieve tarballs or clone the Ceph source code
+repository and build Ceph yourself.
+
+
+.. toctree::
+   :maxdepth: 1
+
+   Get Packages <get-packages>
+   Get Tarballs <get-tarballs>
+   Clone Source <clone-source>
+   Build Ceph <build-ceph>
+   Ceph Mirrors <mirrors>
+
+
+Install Software
+================
+
+Once you have the Ceph software (or added repositories), installing the software
+is easy. Install the packages on each :term:`Ceph Node` in your cluster. You may
+use ``ceph-deploy`` to install Ceph for your storage cluster, or use package
+management tools. You should install Yum Priorities for RHEL/CentOS and other
+distributions that use Yum if you intend to install the Ceph Object Gateway or
+QEMU.
+
+.. toctree::
+   :maxdepth: 1
+
+   Install ceph-deploy <install-ceph-deploy>
+   Install Ceph Storage Cluster <install-storage-cluster>
+   Install Ceph Object Gateway <install-ceph-gateway>
+   Install Virtualization for Block <install-vm-cloud>
+
+
+Deploy a Cluster Manually
+=========================
+
+Once you have Ceph installed on your nodes, you can deploy a cluster manually.
+The manual procedure is intended primarily as a reference for those developing
+deployment scripts with Chef, Juju, Puppet, etc.
+
+.. toctree::
+
+   Manual Deployment <manual-deployment>
+   Manual Deployment on FreeBSD <manual-freebsd-deployment>
+
+Upgrade Software
+================
+
+As new versions of Ceph become available, you may upgrade your cluster to take
+advantage of new functionality. Read the upgrade documentation before you
+upgrade your cluster. Sometimes upgrading Ceph requires you to follow an upgrade
+sequence.
+
+.. toctree::
+   :maxdepth: 2
+
+   Upgrading Ceph <upgrading-ceph>
+
+.. 
_get packages: ../install/get-packages diff --git a/doc/install/install-ceph-deploy.rst b/doc/install/install-ceph-deploy.rst new file mode 100644 index 00000000..d6516ad7 --- /dev/null +++ b/doc/install/install-ceph-deploy.rst @@ -0,0 +1,23 @@ +===================== + Install Ceph Deploy +===================== + +The ``ceph-deploy`` tool enables you to set up and tear down Ceph clusters +for development, testing and proof-of-concept projects. + + +APT +--- + +To install ``ceph-deploy`` with ``apt``, execute the following:: + + sudo apt-get update && sudo apt-get install ceph-deploy + + +RPM +--- + +To install ``ceph-deploy`` with ``yum``, execute the following:: + + sudo yum install ceph-deploy + diff --git a/doc/install/install-ceph-gateway.rst b/doc/install/install-ceph-gateway.rst new file mode 100644 index 00000000..17e62af9 --- /dev/null +++ b/doc/install/install-ceph-gateway.rst @@ -0,0 +1,615 @@ +=========================== +Install Ceph Object Gateway +=========================== + +As of `firefly` (v0.80), Ceph Object Gateway is running on Civetweb (embedded +into the ``ceph-radosgw`` daemon) instead of Apache and FastCGI. Using Civetweb +simplifies the Ceph Object Gateway installation and configuration. + +.. note:: To run the Ceph Object Gateway service, you should have a running + Ceph storage cluster, and the gateway host should have access to the + public network. + +.. note:: In version 0.80, the Ceph Object Gateway does not support SSL. You + may setup a reverse proxy server with SSL to dispatch HTTPS requests + as HTTP requests to CivetWeb. + +Execute the Pre-Installation Procedure +-------------------------------------- + +See Preflight_ and execute the pre-installation procedures on your Ceph Object +Gateway node. Specifically, you should disable ``requiretty`` on your Ceph +Deploy user, set SELinux to ``Permissive`` and set up a Ceph Deploy user with +password-less ``sudo``. 
For Ceph Object Gateways, you will need to open the
+port that Civetweb will use in production.
+
+.. note:: Civetweb runs on port ``7480`` by default.
+
+Install Ceph Object Gateway
+---------------------------
+
+From the working directory of your administration server, install the Ceph
+Object Gateway package on the Ceph Object Gateway node. For example::
+
+    ceph-deploy install --rgw <gateway-node1> [<gateway-node2> ...]
+
+The ``ceph-common`` package is a dependency, so ``ceph-deploy`` will install
+this too. The ``ceph`` CLI tools are intended for administrators. To make your
+Ceph Object Gateway node an administrator node, execute the following from the
+working directory of your administration server::
+
+    ceph-deploy admin <node-name>
+
+Create a Gateway Instance
+-------------------------
+
+From the working directory of your administration server, create an instance of
+the Ceph Object Gateway on the Ceph Object Gateway node. For example::
+
+    ceph-deploy rgw create <gateway-node1>
+
+Once the gateway is running, you should be able to access it on port ``7480``
+with an unauthenticated request like this::
+
+    http://client-node:7480
+
+If the gateway instance is working properly, you should receive a response like
+this::
+
+    <?xml version="1.0" encoding="UTF-8"?>
+    <ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
+      <Owner>
+        <ID>anonymous</ID>
+        <DisplayName></DisplayName>
+      </Owner>
+      <Buckets>
+      </Buckets>
+    </ListAllMyBucketsResult>
+
+If at any point you run into trouble and you want to start over, execute the
+following to purge the configuration::
+
+    ceph-deploy purge <gateway-node1> [<gateway-node2>]
+    ceph-deploy purgedata <gateway-node1> [<gateway-node2>]
+
+If you execute ``purge``, you must re-install Ceph.
+
+Change the Default Port
+-----------------------
+
+Civetweb runs on port ``7480`` by default. To change the default port (e.g., to
+port ``80``), modify your Ceph configuration file in the working directory of
+your administration server. Add a section entitled
+``[client.rgw.<client-node>]``, replacing ``<client-node>`` with the short
+node name of your Ceph Object Gateway node (i.e., ``hostname -s``).
+
+.. note:: As of version 11.0.1, the Ceph Object Gateway **does** support SSL.
+   See `Using SSL with Civetweb`_ for information on how to set that up. 
+
+For example, if your node name is ``gateway-node1``, add a section like this
+after the ``[global]`` section::
+
+    [client.rgw.gateway-node1]
+    rgw_frontends = "civetweb port=80"
+
+.. note:: Ensure that you leave no whitespace between ``port=<port-number>`` in
+   the ``rgw_frontends`` key/value pair. The ``[client.rgw.gateway-node1]``
+   heading identifies this portion of the Ceph configuration file as
+   configuring a Ceph Storage Cluster client where the client type is a Ceph
+   Object Gateway (i.e., ``rgw``), and the name of the instance is
+   ``gateway-node1``.
+
+Push the updated configuration file to your Ceph Object Gateway node
+(and other Ceph nodes)::
+
+    ceph-deploy --overwrite-conf config push <gateway-node> [<other-nodes>]
+
+To make the new port setting take effect, restart the Ceph Object
+Gateway::
+
+    sudo systemctl restart ceph-radosgw.service
+
+Finally, check to ensure that the port you selected is open on the node's
+firewall (e.g., port ``80``). If it is not open, add the port and reload the
+firewall configuration. If you use the ``firewalld`` daemon, execute::
+
+    sudo firewall-cmd --list-all
+    sudo firewall-cmd --zone=public --add-port 80/tcp --permanent
+    sudo firewall-cmd --reload
+
+If you use ``iptables``, execute::
+
+    sudo iptables --list
+    sudo iptables -I INPUT 1 -i <iface> -p tcp -s <ip-address>/<netmask> --dport 80 -j ACCEPT
+
+Replace ``<iface>`` and ``<ip-address>/<netmask>`` with the relevant values for
+your Ceph Object Gateway node.
+
+Once you have finished configuring ``iptables``, ensure that you make the
+change persistent so that it will be in effect when your Ceph Object Gateway
+node reboots. Execute::
+
+    sudo apt-get install iptables-persistent
+
+A terminal UI will open up. Select ``yes`` for the prompts to save current
+``IPv4`` iptables rules to ``/etc/iptables/rules.v4`` and current ``IPv6``
+iptables rules to ``/etc/iptables/rules.v6``.
+
+The ``IPv4`` iptables rule that you set in the earlier step will be loaded in
+``/etc/iptables/rules.v4`` and will be persistent across reboots. 
+
+If you add a new ``IPv4`` iptables rule after installing
+``iptables-persistent``, you will have to add it to the rule file. In that
+case, execute the following as the ``root`` user::
+
+    iptables-save > /etc/iptables/rules.v4
+
+Using SSL with Civetweb
+-----------------------
+.. _Using SSL with Civetweb:
+
+Before using SSL with civetweb, you will need a certificate that will match
+the host name that will be used to access the Ceph Object Gateway.
+You may wish to obtain one that has `subject alternate name` fields for
+more flexibility. If you intend to use S3-style subdomains
+(`Add Wildcard to DNS`_), you will need a `wildcard` certificate.
+
+Civetweb requires that the server key, server certificate, and any other
+CA or intermediate certificates be supplied in one file. Each of these
+items must be in `pem` form. Because the combined file contains the
+secret key, it should be protected from unauthorized access.
+
+To configure SSL operation, append ``s`` to the port number. For example::
+
+    [client.rgw.gateway-node1]
+    rgw_frontends = civetweb port=443s ssl_certificate=/etc/ceph/private/keyandcert.pem
+
+.. versionadded:: Luminous
+
+Furthermore, civetweb can be made to bind to multiple ports, by separating them
+with ``+`` in the configuration. This allows for use cases where both SSL and
+non-SSL connections are hosted by a single rgw instance. For example::
+
+    [client.rgw.gateway-node1]
+    rgw_frontends = civetweb port=80+443s ssl_certificate=/etc/ceph/private/keyandcert.pem
+
+Additional Civetweb Configuration Options
+-----------------------------------------
+Some additional configuration options can be adjusted for the embedded Civetweb
+web server in the **Ceph Object Gateway** section of the ``ceph.conf`` file.
+A list of supported options, including an example, can be found in the `HTTP Frontends`_. 
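Building the combined certificate file described in `Using SSL with Civetweb`_ is usually a single concatenation. The sketch below is not from the Ceph docs: the file names are illustrative, and empty stand-in files are created so the commands can be run anywhere; with a real deployment, substitute the key and certificates issued by your CA.

```shell
# Sketch: assemble the one-file PEM bundle civetweb expects
# (server key + server certificate + CA/intermediate chain).
# server.key / server.crt / ca-chain.pem are illustrative names; stand-in
# files are created here so the example is self-contained.
printf '%s\n' '-----BEGIN PRIVATE KEY-----' > server.key
printf '%s\n' '-----BEGIN CERTIFICATE-----' > server.crt
printf '%s\n' '-----BEGIN CERTIFICATE-----' > ca-chain.pem
cat server.key server.crt ca-chain.pem > keyandcert.pem
chmod 600 keyandcert.pem    # the bundle contains the secret key
```

Move the result to the path named in ``rgw_frontends`` (e.g. ``/etc/ceph/private/keyandcert.pem``) and keep it readable only by the gateway.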
+
+Migrating from Apache to Civetweb
+---------------------------------
+
+If you are running the Ceph Object Gateway on Apache and FastCGI with Ceph
+Storage v0.80 or above, you are already running Civetweb--it starts with the
+``ceph-radosgw`` daemon and it's running on port 7480 by default so that it
+doesn't conflict with your Apache and FastCGI installation and other commonly
+used web service ports. Migrating to use Civetweb basically involves removing
+your Apache installation. Then, you must remove Apache and FastCGI settings
+from your Ceph configuration file and reset ``rgw_frontends`` to Civetweb.
+
+Referring back to the description for installing a Ceph Object Gateway with
+``ceph-deploy``, notice that the configuration file only has one setting,
+``rgw_frontends`` (and that's assuming you elected to change the default port).
+The ``ceph-deploy`` utility generates the data directory and the keyring for
+you--placing the keyring in ``/var/lib/ceph/radosgw/{rgw-instance}``. The daemon
+looks in default locations, whereas you may have specified different settings
+in your Ceph configuration file. Since you already have keys and a data
+directory, you will want to maintain those paths in your Ceph configuration
+file if you used something other than default paths. 
+
+A typical Ceph Object Gateway configuration file for an Apache-based deployment
+looks something like the following:
+
+On Red Hat Enterprise Linux::
+
+    [client.radosgw.gateway-node1]
+    host = {hostname}
+    keyring = /etc/ceph/ceph.client.radosgw.keyring
+    rgw socket path = ""
+    log file = /var/log/radosgw/client.radosgw.gateway-node1.log
+    rgw frontends = fastcgi socket_port=9000 socket_host=0.0.0.0
+    rgw print continue = false
+
+On Ubuntu::
+
+    [client.radosgw.gateway-node1]
+    host = {hostname}
+    keyring = /etc/ceph/ceph.client.radosgw.keyring
+    rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
+    log file = /var/log/radosgw/client.radosgw.gateway-node1.log
+
+To modify it for use with Civetweb, simply remove the Apache-specific settings
+such as ``rgw_socket_path`` and ``rgw_print_continue``. Then, change the
+``rgw_frontends`` setting to reflect Civetweb rather than the Apache FastCGI
+front end and specify the port number you intend to use. For example::
+
+    [client.radosgw.gateway-node1]
+    host = {hostname}
+    keyring = /etc/ceph/ceph.client.radosgw.keyring
+    log file = /var/log/radosgw/client.radosgw.gateway-node1.log
+    rgw_frontends = civetweb port=80
+
+Finally, restart the Ceph Object Gateway. On Red Hat Enterprise Linux execute::
+
+    sudo systemctl restart ceph-radosgw.service
+
+On Ubuntu execute::
+
+    sudo service radosgw restart id=rgw.<short-hostname>
+
+If you used a port number that is not open, you will also need to open that
+port on your firewall.
+
+Configure Bucket Sharding
+-------------------------
+
+A Ceph Object Gateway stores bucket index data in the ``index_pool``, which
+defaults to ``.rgw.buckets.index``. Sometimes users like to put many objects
+(hundreds of thousands to millions of objects) in a single bucket. 
If you do +not use the gateway administration interface to set quotas for the maximum +number of objects per bucket, the bucket index can suffer significant +performance degradation when users place large numbers of objects into a +bucket. + +In Ceph 0.94, you may shard bucket indices to help prevent performance +bottlenecks when you allow a high number of objects per bucket. The +``rgw_override_bucket_index_max_shards`` setting allows you to set a maximum +number of shards per bucket. The default value is ``0``, which means bucket +index sharding is off by default. + +To turn bucket index sharding on, set ``rgw_override_bucket_index_max_shards`` +to a value greater than ``0``. + +For simple configurations, you may add ``rgw_override_bucket_index_max_shards`` +to your Ceph configuration file. Add it under ``[global]`` to create a +system-wide value. You can also set it for each instance in your Ceph +configuration file. + +Once you have changed your bucket sharding configuration in your Ceph +configuration file, restart your gateway. On Red Hat Enterprise Linux execute:: + + sudo systemctl restart ceph-radosgw.service + +On Ubuntu execute:: + + sudo service radosgw restart id=rgw. + +For federated configurations, each zone may have a different ``index_pool`` +setting for failover. To make the value consistent for a zonegroup's zones, you +may set ``rgw_override_bucket_index_max_shards`` in a gateway's zonegroup +configuration. For example:: + + radosgw-admin zonegroup get > zonegroup.json + +Open the ``zonegroup.json`` file and edit the ``bucket_index_max_shards`` setting +for each named zone. Save the ``zonegroup.json`` file and reset the zonegroup. +For example:: + + radosgw-admin zonegroup set < zonegroup.json + +Once you have updated your zonegroup, update and commit the period. +For example:: + + radosgw-admin period update --commit + +.. 
note:: Mapping the index pool (for each zone, if applicable) to a CRUSH + rule of SSD-based OSDs may also help with bucket index performance. + +Add Wildcard to DNS +------------------- +.. _Add Wildcard to DNS: + +To use Ceph with S3-style subdomains (e.g., bucket-name.domain-name.com), you +need to add a wildcard to the DNS record of the DNS server you use with the +``ceph-radosgw`` daemon. + +The address of the DNS must also be specified in the Ceph configuration file +with the ``rgw dns name = {hostname}`` setting. + +For ``dnsmasq``, add the following address setting with a dot (.) prepended to +the host name:: + + address=/.{hostname-or-fqdn}/{host-ip-address} + +For example:: + + address=/.gateway-node1/192.168.122.75 + + +For ``bind``, add a wildcard to the DNS record. For example:: + + $TTL 604800 + @ IN SOA gateway-node1. root.gateway-node1. ( + 2 ; Serial + 604800 ; Refresh + 86400 ; Retry + 2419200 ; Expire + 604800 ) ; Negative Cache TTL + ; + @ IN NS gateway-node1. + @ IN A 192.168.122.113 + * IN CNAME @ + +Restart your DNS server and ping your server with a subdomain to ensure that +your DNS configuration works as expected:: + + ping mybucket.{hostname} + +For example:: + + ping mybucket.gateway-node1 + +Add Debugging (if needed) +------------------------- + +Once you finish the setup procedure, if you encounter issues with your +configuration, you can add debugging to the ``[global]`` section of your Ceph +configuration file and restart the gateway(s) to help troubleshoot any +configuration issues. For example:: + + [global] + #append the following in the global section. + debug ms = 1 + debug rgw = 20 + +Using the Gateway +----------------- + +To use the REST interfaces, first create an initial Ceph Object Gateway user +for the S3 interface. Then, create a subuser for the Swift interface. You then +need to verify if the created users are able to access the gateway. 
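The user-creation commands in the following sections print the new user's
details as JSON, and the ``access_key`` and ``secret_key`` values must be
copied out for the verification steps. They can also be extracted
programmatically. A minimal Python sketch, using the illustrative key values
from this guide (note that ``json.loads`` resolves an escaped forward slash
``\/`` to a plain ``/``, so no manual editing of the key is needed):

```python
import json

# Abbreviated sample of `radosgw-admin user create` output; the key
# values are the illustrative ones used throughout this guide.
output = '''
{
    "user_id": "testuser",
    "keys": [{
        "user": "testuser",
        "access_key": "I0PJDPCIYZ665MW88W9R",
        "secret_key": "244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF\\/IA"
    }]
}
'''

user = json.loads(output)
access_key = user["keys"][0]["access_key"]
secret_key = user["keys"][0]["secret_key"]

# The JSON escape "\/" decodes to "/"; the slash itself is a valid
# character in the key and must be kept.
print(access_key)
print(secret_key)
```

This mirrors the important note further on: when a key contains ``\/``, only
the backslash is a JSON escape character, while the forward slash is part of
the key.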
+
+Create a RADOSGW User for S3 Access
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A ``radosgw`` user needs to be created and granted access. The command ``man
+radosgw-admin`` will provide information on additional command options.
+
+To create the user, execute the following on the ``gateway host``::
+
+    sudo radosgw-admin user create --uid="testuser" --display-name="First User"
+
+The output of the command will be something like the following::
+
+    {
+        "user_id": "testuser",
+        "display_name": "First User",
+        "email": "",
+        "suspended": 0,
+        "max_buckets": 1000,
+        "subusers": [],
+        "keys": [{
+            "user": "testuser",
+            "access_key": "I0PJDPCIYZ665MW88W9R",
+            "secret_key": "dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA"
+        }],
+        "swift_keys": [],
+        "caps": [],
+        "op_mask": "read, write, delete",
+        "default_placement": "",
+        "placement_tags": [],
+        "bucket_quota": {
+            "enabled": false,
+            "max_size_kb": -1,
+            "max_objects": -1
+        },
+        "user_quota": {
+            "enabled": false,
+            "max_size_kb": -1,
+            "max_objects": -1
+        },
+        "temp_url_keys": []
+    }
+
+.. note:: The values of ``keys->access_key`` and ``keys->secret_key`` are
+   needed for access validation.
+
+.. important:: Check the key output. Sometimes ``radosgw-admin`` generates a
+   JSON escape character ``\`` in ``access_key`` or ``secret_key`` and some
+   clients do not know how to handle JSON escape characters. Remedies include
+   removing the JSON escape character ``\``, encapsulating the string in
+   quotes, regenerating the key and ensuring that it does not have a JSON
+   escape character, or specifying the key and secret manually. Also, if
+   ``radosgw-admin`` generates a JSON escape character ``\`` and a forward
+   slash ``/`` together in a key, like ``\/``, only remove the JSON escape
+   character ``\``. Do not remove the forward slash ``/`` as it is a valid
+   character in the key.
+
+Create a Swift User
+^^^^^^^^^^^^^^^^^^^
+
+A Swift subuser needs to be created if this kind of access is needed. 
Creating +a Swift user is a two step process. The first step is to create the user. The +second is to create the secret key. + +Execute the following steps on the ``gateway host``: + +Create the Swift user:: + + sudo radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full + +The output will be something like the following:: + + { + "user_id": "testuser", + "display_name": "First User", + "email": "", + "suspended": 0, + "max_buckets": 1000, + "subusers": [{ + "id": "testuser:swift", + "permissions": "full-control" + }], + "keys": [{ + "user": "testuser:swift", + "access_key": "3Y1LNW4Q6X0Y53A52DET", + "secret_key": "" + }, { + "user": "testuser", + "access_key": "I0PJDPCIYZ665MW88W9R", + "secret_key": "dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA" + }], + "swift_keys": [], + "caps": [], + "op_mask": "read, write, delete", + "default_placement": "", + "placement_tags": [], + "bucket_quota": { + "enabled": false, + "max_size_kb": -1, + "max_objects": -1 + }, + "user_quota": { + "enabled": false, + "max_size_kb": -1, + "max_objects": -1 + }, + "temp_url_keys": [] + } + +Create the secret key:: + + sudo radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret + +The output will be something like the following:: + + { + "user_id": "testuser", + "display_name": "First User", + "email": "", + "suspended": 0, + "max_buckets": 1000, + "subusers": [{ + "id": "testuser:swift", + "permissions": "full-control" + }], + "keys": [{ + "user": "testuser:swift", + "access_key": "3Y1LNW4Q6X0Y53A52DET", + "secret_key": "" + }, { + "user": "testuser", + "access_key": "I0PJDPCIYZ665MW88W9R", + "secret_key": "dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA" + }], + "swift_keys": [{ + "user": "testuser:swift", + "secret_key": "244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF\/IA" + }], + "caps": [], + "op_mask": "read, write, delete", + "default_placement": "", + "placement_tags": [], + "bucket_quota": { + "enabled": false, + "max_size_kb": -1, + "max_objects": 
-1 + }, + "user_quota": { + "enabled": false, + "max_size_kb": -1, + "max_objects": -1 + }, + "temp_url_keys": [] + } + +Access Verification +^^^^^^^^^^^^^^^^^^^ + +Test S3 Access +"""""""""""""" + +You need to write and run a Python test script for verifying S3 access. The S3 +access test script will connect to the ``radosgw``, create a new bucket and +list all buckets. The values for ``aws_access_key_id`` and +``aws_secret_access_key`` are taken from the values of ``access_key`` and +``secret_key`` returned by the ``radosgw-admin`` command. + +Execute the following steps: + +#. You will need to install the ``python-boto`` package:: + + sudo yum install python-boto + +#. Create the Python script:: + + vi s3test.py + +#. Add the following contents to the file:: + + import boto.s3.connection + + access_key = 'I0PJDPCIYZ665MW88W9R' + secret_key = 'dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA' + conn = boto.connect_s3( + aws_access_key_id=access_key, + aws_secret_access_key=secret_key, + host='{hostname}', port={port}, + is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat(), + ) + + bucket = conn.create_bucket('my-new-bucket') + for bucket in conn.get_all_buckets(): + print "{name} {created}".format( + name=bucket.name, + created=bucket.creation_date, + ) + + + Replace ``{hostname}`` with the hostname of the host where you have + configured the gateway service i.e., the ``gateway host``. Replace ``{port}`` + with the port number you are using with Civetweb. + +#. Run the script:: + + python s3test.py + + The output will be something like the following:: + + my-new-bucket 2015-02-16T17:09:10.000Z + +Test swift access +""""""""""""""""" + +Swift access can be verified via the ``swift`` command line client. The command +``man swift`` will provide more information on available command line options. + +To install ``swift`` client, execute the following commands. 
On Red Hat
+Enterprise Linux::
+
+    sudo yum install python-setuptools
+    sudo easy_install pip
+    sudo pip install --upgrade setuptools
+    sudo pip install --upgrade python-swiftclient
+
+On Debian-based distributions::
+
+    sudo apt-get install python-setuptools
+    sudo easy_install pip
+    sudo pip install --upgrade setuptools
+    sudo pip install --upgrade python-swiftclient
+
+To test swift access, execute the following::
+
+    swift -V 1 -A http://{IP ADDRESS}:{port}/auth -U testuser:swift -K '{swift_secret_key}' list
+
+Replace ``{IP ADDRESS}`` with the public IP address of the gateway server and
+``{swift_secret_key}`` with its value from the output of the ``radosgw-admin
+key create`` command executed for the ``swift`` user. Replace ``{port}`` with
+the port number you are using with Civetweb (e.g., ``7480`` is the default).
+If you don't replace the port, it will default to port ``80``.
+
+For example::
+
+    swift -V 1 -A http://10.19.143.116:7480/auth -U testuser:swift -K '244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF/IA' list
+
+The output should be::
+
+    my-new-bucket
+
+.. _Preflight: ../../start/quick-start-preflight
+.. _HTTP Frontends: ../../radosgw/frontends
diff --git a/doc/install/install-storage-cluster.rst b/doc/install/install-storage-cluster.rst
new file mode 100644
index 00000000..5da2c685
--- /dev/null
+++ b/doc/install/install-storage-cluster.rst
@@ -0,0 +1,91 @@
+==============================
+ Install Ceph Storage Cluster
+==============================
+
+This guide describes installing Ceph packages manually. This procedure
+is only for users who are not installing with a deployment tool such as
+``ceph-deploy``, ``chef``, ``juju``, etc.
+
+.. tip:: You can also use ``ceph-deploy`` to install Ceph packages, which may
+   be more convenient since you can install ``ceph`` on multiple hosts with
+   a single command. 
+
+
+Installing with APT
+===================
+
+Once you have added either release or development packages to APT, you should
+update APT's database and install Ceph::
+
+    sudo apt-get update && sudo apt-get install ceph ceph-mds
+
+
+Installing with RPM
+===================
+
+To install Ceph with RPMs, execute the following steps:
+
+
+#. Install ``yum-plugin-priorities``. ::
+
+    sudo yum install yum-plugin-priorities
+
+#. Ensure ``/etc/yum/pluginconf.d/priorities.conf`` exists.
+
+#. Ensure ``priorities.conf`` enables the plugin. ::
+
+    [main]
+    enabled = 1
+
+#. Ensure your YUM ``ceph.repo`` entry includes ``priority=2``. See
+   `Get Packages`_ for details::
+
+    [ceph]
+    name=Ceph packages for $basearch
+    baseurl=https://download.ceph.com/rpm-{ceph-release}/{distro}/$basearch
+    enabled=1
+    priority=2
+    gpgcheck=1
+    gpgkey=https://download.ceph.com/keys/release.asc
+
+    [ceph-noarch]
+    name=Ceph noarch packages
+    baseurl=https://download.ceph.com/rpm-{ceph-release}/{distro}/noarch
+    enabled=1
+    priority=2
+    gpgcheck=1
+    gpgkey=https://download.ceph.com/keys/release.asc
+
+    [ceph-source]
+    name=Ceph source packages
+    baseurl=https://download.ceph.com/rpm-{ceph-release}/{distro}/SRPMS
+    enabled=0
+    priority=2
+    gpgcheck=1
+    gpgkey=https://download.ceph.com/keys/release.asc
+
+
+#. Install prerequisite packages::
+
+    sudo yum install snappy leveldb gdisk python-argparse gperftools-libs
+
+
+Once you have added either release or development packages, or added a
+``ceph.repo`` file to ``/etc/yum.repos.d``, you can install Ceph packages. ::
+
+    sudo yum install ceph
+
+
+Installing a Build
+==================
+
+If you build Ceph from source code, you may install Ceph in user space
+by executing the following::
+
+    sudo make install
+
+If you install Ceph locally, ``make`` will place the executables in
+``/usr/local/bin``. You may add the Ceph configuration file to the
+``/usr/local/bin`` directory to run Ceph from a single directory.
+
+.. 
_Get Packages: ../get-packages diff --git a/doc/install/install-vm-cloud.rst b/doc/install/install-vm-cloud.rst new file mode 100644 index 00000000..39bc01c8 --- /dev/null +++ b/doc/install/install-vm-cloud.rst @@ -0,0 +1,130 @@ +========================================= + Install Virtualization for Block Device +========================================= + +If you intend to use Ceph Block Devices and the Ceph Storage Cluster as a +backend for Virtual Machines (VMs) or :term:`Cloud Platforms` the QEMU/KVM and +``libvirt`` packages are important for enabling VMs and cloud platforms. +Examples of VMs include: QEMU/KVM, XEN, VMWare, LXC, VirtualBox, etc. Examples +of Cloud Platforms include OpenStack, CloudStack, OpenNebula, etc. + + +.. ditaa:: + + +---------------------------------------------------+ + | libvirt | + +------------------------+--------------------------+ + | + | configures + v + +---------------------------------------------------+ + | QEMU | + +---------------------------------------------------+ + | librbd | + +------------------------+-+------------------------+ + | OSDs | | Monitors | + +------------------------+ +------------------------+ + + +Install QEMU +============ + +QEMU KVM can interact with Ceph Block Devices via ``librbd``, which is an +important feature for using Ceph with cloud platforms. Once you install QEMU, +see `QEMU and Block Devices`_ for usage. + + +Debian Packages +--------------- + +QEMU packages are incorporated into Ubuntu 12.04 Precise Pangolin and later +versions. To install QEMU, execute the following:: + + sudo apt-get install qemu + + +RPM Packages +------------ + +To install QEMU, execute the following: + + +#. Update your repositories. :: + + sudo yum update + +#. Install QEMU for Ceph. :: + + sudo yum install qemu-kvm qemu-kvm-tools qemu-img + +#. 
Install additional QEMU packages (optional):: + + sudo yum install qemu-guest-agent qemu-guest-agent-win32 + + +Building QEMU +------------- + +To build QEMU from source, use the following procedure:: + + cd {your-development-directory} + git clone git://git.qemu.org/qemu.git + cd qemu + ./configure --enable-rbd + make; make install + + + +Install libvirt +=============== + +To use ``libvirt`` with Ceph, you must have a running Ceph Storage Cluster, and +you must have installed and configured QEMU. See `Using libvirt with Ceph Block +Device`_ for usage. + + +Debian Packages +--------------- + +``libvirt`` packages are incorporated into Ubuntu 12.04 Precise Pangolin and +later versions of Ubuntu. To install ``libvirt`` on these distributions, +execute the following:: + + sudo apt-get update && sudo apt-get install libvirt-bin + + +RPM Packages +------------ + +To use ``libvirt`` with a Ceph Storage Cluster, you must have a running Ceph +Storage Cluster and you must also install a version of QEMU with ``rbd`` format +support. See `Install QEMU`_ for details. + + +``libvirt`` packages are incorporated into the recent CentOS/RHEL distributions. +To install ``libvirt``, execute the following:: + + sudo yum install libvirt + + +Building ``libvirt`` +-------------------- + +To build ``libvirt`` from source, clone the ``libvirt`` repository and use +`AutoGen`_ to generate the build. Then, execute ``make`` and ``make install`` to +complete the installation. For example:: + + git clone git://libvirt.org/libvirt.git + cd libvirt + ./autogen.sh + make + sudo make install + +See `libvirt Installation`_ for details. + + + +.. _libvirt Installation: http://www.libvirt.org/compiling.html +.. _AutoGen: http://www.gnu.org/software/autogen/ +.. _QEMU and Block Devices: ../../rbd/qemu-rbd +.. 
_Using libvirt with Ceph Block Device: ../../rbd/libvirt diff --git a/doc/install/manual-deployment.rst b/doc/install/manual-deployment.rst new file mode 100644 index 00000000..031e2f99 --- /dev/null +++ b/doc/install/manual-deployment.rst @@ -0,0 +1,535 @@ +=================== + Manual Deployment +=================== + +All Ceph clusters require at least one monitor, and at least as many OSDs as +copies of an object stored on the cluster. Bootstrapping the initial monitor(s) +is the first step in deploying a Ceph Storage Cluster. Monitor deployment also +sets important criteria for the entire cluster, such as the number of replicas +for pools, the number of placement groups per OSD, the heartbeat intervals, +whether authentication is required, etc. Most of these values are set by +default, so it's useful to know about them when setting up your cluster for +production. + +Following the same configuration as `Installation (Quick)`_, we will set up a +cluster with ``node1`` as the monitor node, and ``node2`` and ``node3`` for +OSD nodes. + + + +.. ditaa:: + + /------------------\ /----------------\ + | Admin Node | | node1 | + | +-------->+ | + | | | cCCC | + \---------+--------/ \----------------/ + | + | /----------------\ + | | node2 | + +----------------->+ | + | | cCCC | + | \----------------/ + | + | /----------------\ + | | node3 | + +----------------->| | + | cCCC | + \----------------/ + + +Monitor Bootstrapping +===================== + +Bootstrapping a monitor (a Ceph Storage Cluster, in theory) requires +a number of things: + +- **Unique Identifier:** The ``fsid`` is a unique identifier for the cluster, + and stands for File System ID from the days when the Ceph Storage Cluster was + principally for the Ceph Filesystem. Ceph now supports native interfaces, + block devices, and object storage gateway interfaces too, so ``fsid`` is a + bit of a misnomer. + +- **Cluster Name:** Ceph clusters have a cluster name, which is a simple string + without spaces. 
The default cluster name is ``ceph``, but you may specify
+  a different cluster name. Overriding the default cluster name is
+  especially useful when you are working with multiple clusters and you need to
+  clearly understand which cluster you are working with.
+
+  For example, when you run multiple clusters in a :ref:`multisite configuration `,
+  the cluster name (e.g., ``us-west``, ``us-east``) identifies the cluster for
+  the current CLI session. **Note:** To identify the cluster name on the
+  command line interface, specify the Ceph configuration file with the
+  cluster name (e.g., ``ceph.conf``, ``us-west.conf``, ``us-east.conf``, etc.).
+  Also see CLI usage (``ceph --cluster {cluster-name}``).
+
+- **Monitor Name:** Each monitor instance within a cluster has a unique name.
+  In common practice, the Ceph Monitor name is the host name (we recommend one
+  Ceph Monitor per host, and no commingling of Ceph OSD Daemons with
+  Ceph Monitors). You may retrieve the short hostname with ``hostname -s``.
+
+- **Monitor Map:** Bootstrapping the initial monitor(s) requires you to
+  generate a monitor map. The monitor map requires the ``fsid``, the cluster
+  name (or uses the default), and at least one host name and its IP address.
+
+- **Monitor Keyring**: Monitors communicate with each other via a
+  secret key. You must generate a keyring with a monitor secret and provide
+  it when bootstrapping the initial monitor(s).
+
+- **Administrator Keyring**: To use the ``ceph`` CLI tools, you must have
+  a ``client.admin`` user. So you must generate the admin user and keyring,
+  and you must also add the ``client.admin`` user to the monitor keyring.
+
+The foregoing requirements do not imply the creation of a Ceph Configuration
+file. However, as a best practice, we recommend creating a Ceph configuration
+file and populating it with the ``fsid``, the ``mon initial members`` and the
+``mon host`` settings.
+
+You can get and set all of the monitor settings at runtime as well. 
However, +a Ceph Configuration file may contain only those settings that override the +default values. When you add settings to a Ceph configuration file, these +settings override the default settings. Maintaining those settings in a +Ceph configuration file makes it easier to maintain your cluster. + +The procedure is as follows: + + +#. Log in to the initial monitor node(s):: + + ssh {hostname} + + For example:: + + ssh node1 + + +#. Ensure you have a directory for the Ceph configuration file. By default, + Ceph uses ``/etc/ceph``. When you install ``ceph``, the installer will + create the ``/etc/ceph`` directory automatically. :: + + ls /etc/ceph + + **Note:** Deployment tools may remove this directory when purging a + cluster (e.g., ``ceph-deploy purgedata {node-name}``, ``ceph-deploy purge + {node-name}``). + +#. Create a Ceph configuration file. By default, Ceph uses + ``ceph.conf``, where ``ceph`` reflects the cluster name. :: + + sudo vim /etc/ceph/ceph.conf + + +#. Generate a unique ID (i.e., ``fsid``) for your cluster. :: + + uuidgen + + +#. Add the unique ID to your Ceph configuration file. :: + + fsid = {UUID} + + For example:: + + fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993 + + +#. Add the initial monitor(s) to your Ceph configuration file. :: + + mon initial members = {hostname}[,{hostname}] + + For example:: + + mon initial members = node1 + + +#. Add the IP address(es) of the initial monitor(s) to your Ceph configuration + file and save the file. :: + + mon host = {ip-address}[,{ip-address}] + + For example:: + + mon host = 192.168.0.1 + + **Note:** You may use IPv6 addresses instead of IPv4 addresses, but + you must set ``ms bind ipv6`` to ``true``. See `Network Configuration + Reference`_ for details about network configuration. + +#. Create a keyring for your cluster and generate a monitor secret key. :: + + ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *' + + +#. 
Generate an administrator keyring, generate a ``client.admin`` user and add + the user to the keyring. :: + + sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' + +#. Generate a bootstrap-osd keyring, generate a ``client.bootstrap-osd`` user and add + the user to the keyring. :: + + sudo ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' + +#. Add the generated keys to the ``ceph.mon.keyring``. :: + + sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring + sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring + +#. Generate a monitor map using the hostname(s), host IP address(es) and the FSID. + Save it as ``/tmp/monmap``:: + + monmaptool --create --add {hostname} {ip-address} --fsid {uuid} /tmp/monmap + + For example:: + + monmaptool --create --add node1 192.168.0.1 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap + + +#. Create a default data directory (or directories) on the monitor host(s). :: + + sudo mkdir /var/lib/ceph/mon/{cluster-name}-{hostname} + + For example:: + + sudo -u ceph mkdir /var/lib/ceph/mon/ceph-node1 + + See `Monitor Config Reference - Data`_ for details. + +#. Populate the monitor daemon(s) with the monitor map and keyring. :: + + sudo -u ceph ceph-mon [--cluster {cluster-name}] --mkfs -i {hostname} --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring + + For example:: + + sudo -u ceph ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring + + +#. Consider settings for a Ceph configuration file. 
Common settings include
+   the following::
+
+     [global]
+     fsid = {cluster-id}
+     mon initial members = {hostname}[, {hostname}]
+     mon host = {ip-address}[, {ip-address}]
+     public network = {network}[, {network}]
+     cluster network = {network}[, {network}]
+     auth cluster required = cephx
+     auth service required = cephx
+     auth client required = cephx
+     osd journal size = {n}
+     osd pool default size = {n}  # Write an object n times.
+     osd pool default min size = {n} # Allow writing n copies in a degraded state.
+     osd pool default pg num = {n}
+     osd pool default pgp num = {n}
+     osd crush chooseleaf type = {n}
+
+   In the foregoing example, the ``[global]`` section of the configuration might
+   look like this::
+
+     [global]
+     fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
+     mon initial members = node1
+     mon host = 192.168.0.1
+     public network = 192.168.0.0/24
+     auth cluster required = cephx
+     auth service required = cephx
+     auth client required = cephx
+     osd journal size = 1024
+     osd pool default size = 3
+     osd pool default min size = 2
+     osd pool default pg num = 333
+     osd pool default pgp num = 333
+     osd crush chooseleaf type = 1
+
+
+#. Start the monitor(s).
+
+   For most distributions, services are started via systemd now::
+
+     sudo systemctl start ceph-mon@node1
+
+   For older Debian/CentOS/RHEL, use sysvinit::
+
+     sudo /etc/init.d/ceph start mon.node1
+
+
+#. Verify that the monitor is running. ::
+
+     ceph -s
+
+   You should see output confirming that the monitor you started is up and
+   running. At this stage the cluster reports ``HEALTH_OK``, because no pools
+   or placement groups exist yet; once you create pools, you will see health
+   errors about inactive placement groups until enough OSDs
+   are running. 
It should look something like this:: + + cluster: + id: a7f64266-0894-4f1e-a635-d0aeaca0e993 + health: HEALTH_OK + + services: + mon: 1 daemons, quorum node1 + mgr: node1(active) + osd: 0 osds: 0 up, 0 in + + data: + pools: 0 pools, 0 pgs + objects: 0 objects, 0 bytes + usage: 0 kB used, 0 kB / 0 kB avail + pgs: + + + **Note:** Once you add OSDs and start them, the placement group health errors + should disappear. See `Adding OSDs`_ for details. + +Manager daemon configuration +============================ + +On each node where you run a ceph-mon daemon, you should also set up a ceph-mgr daemon. + +See :ref:`mgr-administrator-guide` + +Adding OSDs +=========== + +Once you have your initial monitor(s) running, you should add OSDs. Your cluster +cannot reach an ``active + clean`` state until you have enough OSDs to handle the +number of copies of an object (e.g., ``osd pool default size = 2`` requires at +least two OSDs). After bootstrapping your monitor, your cluster has a default +CRUSH map; however, the CRUSH map doesn't have any Ceph OSD Daemons mapped to +a Ceph Node. + + +Short Form +---------- + +Ceph provides the ``ceph-volume`` utility, which can prepare a logical volume, disk, or partition +for use with Ceph. The ``ceph-volume`` utility creates the OSD ID by +incrementing the index. Additionally, ``ceph-volume`` will add the new OSD to the +CRUSH map under the host for you. Execute ``ceph-volume -h`` for CLI details. +The ``ceph-volume`` utility automates the steps of the `Long Form`_ below. To +create the first two OSDs with the short form procedure, execute the following +on ``node2`` and ``node3``: + +bluestore +^^^^^^^^^ +#. Create the OSD. :: + + ssh {node-name} + sudo ceph-volume lvm create --data {data-path} + + For example:: + + ssh node1 + sudo ceph-volume lvm create --data /dev/hdd1 + +Alternatively, the creation process can be split in two phases (prepare, and +activate): + +#. Prepare the OSD. 
::
+
+     ssh {node-name}
+     sudo ceph-volume lvm prepare --data {data-path}
+
+   For example::
+
+     ssh node1
+     sudo ceph-volume lvm prepare --data /dev/hdd1
+
+   Once prepared, the ``ID`` and ``FSID`` of the prepared OSD are required for
+   activation. These can be obtained by listing OSDs in the current server::
+
+     sudo ceph-volume lvm list
+
+#. Activate the OSD::
+
+     sudo ceph-volume lvm activate {ID} {FSID}
+
+   For example::
+
+     sudo ceph-volume lvm activate 0 a7f64266-0894-4f1e-a635-d0aeaca0e993
+
+
+filestore
+^^^^^^^^^
+#. Create the OSD. ::
+
+     ssh {node-name}
+     sudo ceph-volume lvm create --filestore --data {data-path} --journal {journal-path}
+
+   For example::
+
+     ssh node1
+     sudo ceph-volume lvm create --filestore --data /dev/hdd1 --journal /dev/hdd2
+
+Alternatively, the creation process can be split in two phases (prepare, and
+activate):
+
+#. Prepare the OSD. ::
+
+     ssh {node-name}
+     sudo ceph-volume lvm prepare --filestore --data {data-path} --journal {journal-path}
+
+   For example::
+
+     ssh node1
+     sudo ceph-volume lvm prepare --filestore --data /dev/hdd1 --journal /dev/hdd2
+
+   Once prepared, the ``ID`` and ``FSID`` of the prepared OSD are required for
+   activation. These can be obtained by listing OSDs in the current server::
+
+     sudo ceph-volume lvm list
+
+#. Activate the OSD::
+
+     sudo ceph-volume lvm activate --filestore {ID} {FSID}
+
+   For example::
+
+     sudo ceph-volume lvm activate --filestore 0 a7f64266-0894-4f1e-a635-d0aeaca0e993
+
+
+Long Form
+---------
+
+Without the benefit of any helper utilities, create an OSD and add it to the
+cluster and CRUSH map with the following procedure. To create the first two
+OSDs with the long form procedure, execute the following steps for each OSD.
+
+.. note:: This procedure does not describe deployment on top of dm-crypt
+          making use of the dm-crypt 'lockbox'.
+
+#. Connect to the OSD host and become root. ::
+
+     ssh {node-name}
+     sudo bash
+
+#. Generate a UUID for the OSD. 
:: + + UUID=$(uuidgen) + +#. Generate a cephx key for the OSD. :: + + OSD_SECRET=$(ceph-authtool --gen-print-key) + +#. Create the OSD. Note that an OSD ID can be provided as an + additional argument to ``ceph osd new`` if you need to reuse a + previously-destroyed OSD id. We assume that the + ``client.bootstrap-osd`` key is present on the machine. You may + alternatively execute this command as ``client.admin`` on a + different host where that key is present.:: + + ID=$(echo "{\"cephx_secret\": \"$OSD_SECRET\"}" | \ + ceph osd new $UUID -i - \ + -n client.bootstrap-osd -k /var/lib/ceph/bootstrap-osd/ceph.keyring) + + It is also possible to include a ``crush_device_class`` property in the JSON + to set an initial class other than the default (``ssd`` or ``hdd`` based on + the auto-detected device type). + +#. Create the default directory on your new OSD. :: + + mkdir /var/lib/ceph/osd/ceph-$ID + +#. If the OSD is for a drive other than the OS drive, prepare it + for use with Ceph, and mount it to the directory you just created. :: + + mkfs.xfs /dev/{DEV} + mount /dev/{DEV} /var/lib/ceph/osd/ceph-$ID + +#. Write the secret to the OSD keyring file. :: + + ceph-authtool --create-keyring /var/lib/ceph/osd/ceph-$ID/keyring \ + --name osd.$ID --add-key $OSD_SECRET + +#. Initialize the OSD data directory. :: + + ceph-osd -i $ID --mkfs --osd-uuid $UUID + +#. Fix ownership. :: + + chown -R ceph:ceph /var/lib/ceph/osd/ceph-$ID + +#. After you add an OSD to Ceph, the OSD is in your configuration. However, + it is not yet running. You must start + your new OSD before it can begin receiving data. + + For modern systemd distributions:: + + systemctl enable ceph-osd@$ID + systemctl start ceph-osd@$ID + + For example:: + + systemctl enable ceph-osd@12 + systemctl start ceph-osd@12 + + +Adding MDS +========== + +In the below instructions, ``{id}`` is an arbitrary name, such as the hostname of the machine. + +#. 
Create the mds data directory. ::
+
+     mkdir -p /var/lib/ceph/mds/{cluster-name}-{id}
+
+#. Create a keyring. ::
+
+     ceph-authtool --create-keyring /var/lib/ceph/mds/{cluster-name}-{id}/keyring --gen-key -n mds.{id}
+
+#. Import the keyring and set caps. ::
+
+     ceph auth add mds.{id} osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/{cluster}-{id}/keyring
+
+#. Add to ceph.conf. ::
+
+     [mds.{id}]
+     host = {id}
+
+#. Start the daemon the manual way. ::
+
+     ceph-mds --cluster {cluster-name} -i {id} -m {mon-hostname}:{mon-port} [-f]
+
+#. Start the daemon the right way (using the ceph.conf entry). ::
+
+     service ceph start
+
+#. If starting the daemon fails with this error::
+
+     mds.-1.0 ERROR: failed to authenticate: (22) Invalid argument
+
+   then make sure you do not have a ``keyring`` setting in the ``[global]``
+   section of ceph.conf; move it to the ``[client]`` section, or add a keyring
+   setting specific to this MDS daemon. Also verify that the key in the MDS
+   data directory matches the output of ``ceph auth get mds.{id}``.
+
+#. Now you are ready to `create a Ceph filesystem`_.
+
+
+Summary
+=======
+
+Once you have your monitor and two OSDs up and running, you can watch the
+placement groups peer by executing the following::
+
+    ceph -w
+
+To view the tree, execute the following::
+
+    ceph osd tree
+
+You should see output that looks something like this::
+
+    # id    weight  type name       up/down reweight
+    -1      2       root default
+    -2      2               host node1
+    0       1                       osd.0   up      1
+    -3      1               host node2
+    1       1                       osd.1   up      1
+
+To add (or remove) additional monitors, see `Add/Remove Monitors`_.
+To add (or remove) additional Ceph OSD Daemons, see `Add/Remove OSDs`_.
+
+
+.. _Installation (Quick): ../../start
+.. _Add/Remove Monitors: ../../rados/operations/add-or-rm-mons
+.. _Add/Remove OSDs: ../../rados/operations/add-or-rm-osds
+.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
+.. _Monitor Config Reference - Data: ../../rados/configuration/mon-config-ref#data
+.. 
_create a Ceph filesystem: ../../cephfs/createfs
diff --git a/doc/install/manual-freebsd-deployment.rst b/doc/install/manual-freebsd-deployment.rst
new file mode 100644
index 00000000..764beab2
--- /dev/null
+++ b/doc/install/manual-freebsd-deployment.rst
@@ -0,0 +1,581 @@
+==============================
+ Manual Deployment on FreeBSD
+==============================
+
+This is largely a copy of the regular Manual Deployment guide, with FreeBSD
+specifics. The difference lies in two areas: the underlying disk format, and
+the way the tools are used.
+
+All Ceph clusters require at least one monitor, and at least as many OSDs as
+copies of an object stored on the cluster. Bootstrapping the initial monitor(s)
+is the first step in deploying a Ceph Storage Cluster. Monitor deployment also
+sets important criteria for the entire cluster, such as the number of replicas
+for pools, the number of placement groups per OSD, the heartbeat intervals,
+whether authentication is required, etc. Most of these values are set by
+default, so it's useful to know about them when setting up your cluster for
+production.
+
+Following the same configuration as `Installation (Quick)`_, we will set up a
+cluster with ``node1`` as the monitor node, and ``node2`` and ``node3`` for
+OSD nodes.
+
+
+
+.. 
ditaa::
+
+           /------------------\         /----------------\
+           |    Admin Node    |         |     node1      |
+           |                  +-------->+                |
+           |                  |         | cCCC           |
+           \---------+--------/         \----------------/
+                     |
+                     |                  /----------------\
+                     |                  |     node2      |
+                     +----------------->+                |
+                     |                  | cCCC           |
+                     |                  \----------------/
+                     |
+                     |                  /----------------\
+                     |                  |     node3      |
+                     +----------------->|                |
+                                        | cCCC           |
+                                        \----------------/
+
+
+
+Disklayout on FreeBSD
+=====================
+
+The current implementation works on ZFS pools:
+
+* All Ceph data is created in /var/lib/ceph
+* Log files go into /var/log/ceph
+* PID files go into /var/run
+* One ZFS pool is allocated per OSD, like::
+
+    gpart create -s GPT ada1
+    gpart add -t freebsd-zfs -l osd.1 ada1
+    zpool create -m /var/lib/ceph/osd/osd.1 osd.1 gpt/osd.1
+
+* Cache and log (ZIL) devices can be attached.
+  Please note that these are different from the Ceph journals. Cache and log
+  are totally transparent to Ceph; they help the filesystem keep the system
+  consistent and improve performance.
+  Assuming that ada2 is an SSD::
+
+    gpart create -s GPT ada2
+    gpart add -t freebsd-zfs -l osd.1-log -s 1G ada2
+    zpool add osd.1 log gpt/osd.1-log
+    gpart add -t freebsd-zfs -l osd.1-cache -s 10G ada2
+    zpool add osd.1 cache gpt/osd.1-cache
+
+* Note: *UFS2 does not allow large xattrs*
+
+
+Configuration
+-------------
+
+As per the FreeBSD defaults, extra software goes into ``/usr/local/``, which
+means that the default location for ``ceph.conf`` is
+``/usr/local/etc/ceph/ceph.conf``. The smartest thing to do is to create a
+softlink from ``/etc/ceph`` to ``/usr/local/etc/ceph``::
+
+  ln -s /usr/local/etc/ceph /etc/ceph
+
+A sample file is provided in ``/usr/local/share/doc/ceph/sample.ceph.conf``.
+Note that ``/usr/local/etc/ceph/ceph.conf`` will be found by most tools;
+linking it to ``/etc/ceph/ceph.conf`` also helps with scripts found in extra
+tools, scripts, and/or discussion lists. 
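The symlink above can be rehearsed without touching a live system. The sketch below stages the same layout under a scratch prefix; ``PREFIX`` is hypothetical (so the sketch runs without root), and the ``fsid`` is the sample value used elsewhere in this guide:

```shell
# Sketch: mimic the FreeBSD layout under a scratch prefix, then link
# etc/ceph -> usr/local/etc/ceph as recommended above.
# PREFIX is a hypothetical scratch root, not a real FreeBSD path.
PREFIX="$(mktemp -d)"
mkdir -p "$PREFIX/usr/local/etc/ceph" "$PREFIX/etc"

# A minimal ceph.conf in the FreeBSD default location.
printf '[global]\nfsid = a7f64266-0894-4f1e-a635-d0aeaca0e993\n' \
    > "$PREFIX/usr/local/etc/ceph/ceph.conf"

# The softlink most tools and scripts expect.
ln -s "$PREFIX/usr/local/etc/ceph" "$PREFIX/etc/ceph"

# Both paths now resolve to the same file.
cat "$PREFIX/etc/ceph/ceph.conf"
```

On a real FreeBSD host you would drop the prefix and run the single ``ln -s`` shown above as root.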
+
+Monitor Bootstrapping
+=====================
+
+Bootstrapping a monitor (a Ceph Storage Cluster, in theory) requires
+a number of things:
+
+- **Unique Identifier:** The ``fsid`` is a unique identifier for the cluster,
+  and stands for File System ID from the days when the Ceph Storage Cluster was
+  principally for the Ceph Filesystem. Ceph now supports native interfaces,
+  block devices, and object storage gateway interfaces too, so ``fsid`` is a
+  bit of a misnomer.
+
+- **Cluster Name:** Ceph clusters have a cluster name, which is a simple string
+  without spaces. The default cluster name is ``ceph``, but you may specify
+  a different cluster name. Overriding the default cluster name is
+  especially useful when you are working with multiple clusters and you need to
+  clearly understand which cluster you are working with.
+
+  For example, when you run multiple clusters in a :ref:`multisite configuration <multisite>`,
+  the cluster name (e.g., ``us-west``, ``us-east``) identifies the cluster for
+  the current CLI session. **Note:** To identify the cluster name on the
+  command line interface, specify a Ceph configuration file named after the
+  cluster name (e.g., ``ceph.conf``, ``us-west.conf``, ``us-east.conf``, etc.).
+  Also see CLI usage (``ceph --cluster {cluster-name}``).
+
+- **Monitor Name:** Each monitor instance within a cluster has a unique name.
+  In common practice, the Ceph Monitor name is the host name (we recommend one
+  Ceph Monitor per host, and no commingling of Ceph OSD Daemons with
+  Ceph Monitors). You may retrieve the short hostname with ``hostname -s``.
+
+- **Monitor Map:** Bootstrapping the initial monitor(s) requires you to
+  generate a monitor map. The monitor map requires the ``fsid``, the cluster
+  name (or uses the default), and at least one host name and its IP address.
+
+- **Monitor Keyring**: Monitors communicate with each other via a
+  secret key. 
You must generate a keyring with a monitor secret and provide + it when bootstrapping the initial monitor(s). + +- **Administrator Keyring**: To use the ``ceph`` CLI tools, you must have + a ``client.admin`` user. So you must generate the admin user and keyring, + and you must also add the ``client.admin`` user to the monitor keyring. + +The foregoing requirements do not imply the creation of a Ceph Configuration +file. However, as a best practice, we recommend creating a Ceph configuration +file and populating it with the ``fsid``, the ``mon initial members`` and the +``mon host`` settings. + +You can get and set all of the monitor settings at runtime as well. However, +a Ceph Configuration file may contain only those settings that override the +default values. When you add settings to a Ceph configuration file, these +settings override the default settings. Maintaining those settings in a +Ceph configuration file makes it easier to maintain your cluster. + +The procedure is as follows: + + +#. Log in to the initial monitor node(s):: + + ssh {hostname} + + For example:: + + ssh node1 + + +#. Ensure you have a directory for the Ceph configuration file. By default, + Ceph uses ``/etc/ceph``. When you install ``ceph``, the installer will + create the ``/etc/ceph`` directory automatically. :: + + ls /etc/ceph + + **Note:** Deployment tools may remove this directory when purging a + cluster (e.g., ``ceph-deploy purgedata {node-name}``, ``ceph-deploy purge + {node-name}``). + +#. Create a Ceph configuration file. By default, Ceph uses + ``ceph.conf``, where ``ceph`` reflects the cluster name. :: + + sudo vim /etc/ceph/ceph.conf + + +#. Generate a unique ID (i.e., ``fsid``) for your cluster. :: + + uuidgen + + +#. Add the unique ID to your Ceph configuration file. :: + + fsid = {UUID} + + For example:: + + fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993 + + +#. Add the initial monitor(s) to your Ceph configuration file. 
:: + + mon initial members = {hostname}[,{hostname}] + + For example:: + + mon initial members = node1 + + +#. Add the IP address(es) of the initial monitor(s) to your Ceph configuration + file and save the file. :: + + mon host = {ip-address}[,{ip-address}] + + For example:: + + mon host = 192.168.0.1 + + **Note:** You may use IPv6 addresses instead of IPv4 addresses, but + you must set ``ms bind ipv6`` to ``true``. See `Network Configuration + Reference`_ for details about network configuration. + +#. Create a keyring for your cluster and generate a monitor secret key. :: + + ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *' + + +#. Generate an administrator keyring, generate a ``client.admin`` user and add + the user to the keyring. :: + + sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' + + +#. Add the ``client.admin`` key to the ``ceph.mon.keyring``. :: + + ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring + + +#. Generate a monitor map using the hostname(s), host IP address(es) and the FSID. + Save it as ``/tmp/monmap``:: + + monmaptool --create --add {hostname} {ip-address} --fsid {uuid} /tmp/monmap + + For example:: + + monmaptool --create --add node1 192.168.0.1 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap + + +#. Create a default data directory (or directories) on the monitor host(s). :: + + sudo mkdir /var/lib/ceph/mon/{cluster-name}-{hostname} + + For example:: + + sudo mkdir /var/lib/ceph/mon/ceph-node1 + + See `Monitor Config Reference - Data`_ for details. + +#. Populate the monitor daemon(s) with the monitor map and keyring. 
::
+
+     sudo -u ceph ceph-mon [--cluster {cluster-name}] --mkfs -i {hostname} --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
+
+   For example::
+
+     sudo -u ceph ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
+
+
+#. Consider settings for a Ceph configuration file. Common settings include
+   the following::
+
+     [global]
+     fsid = {cluster-id}
+     mon initial members = {hostname}[, {hostname}]
+     mon host = {ip-address}[, {ip-address}]
+     public network = {network}[, {network}]
+     cluster network = {network}[, {network}]
+     auth cluster required = cephx
+     auth service required = cephx
+     auth client required = cephx
+     osd journal size = {n}
+     osd pool default size = {n}  # Write an object n times.
+     osd pool default min size = {n} # Allow writing n copies in a degraded state.
+     osd pool default pg num = {n}
+     osd pool default pgp num = {n}
+     osd crush chooseleaf type = {n}
+
+   In the foregoing example, the ``[global]`` section of the configuration might
+   look like this::
+
+     [global]
+     fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
+     mon initial members = node1
+     mon host = 192.168.0.1
+     public network = 192.168.0.0/24
+     auth cluster required = cephx
+     auth service required = cephx
+     auth client required = cephx
+     osd journal size = 1024
+     osd pool default size = 3
+     osd pool default min size = 2
+     osd pool default pg num = 333
+     osd pool default pgp num = 333
+     osd crush chooseleaf type = 1
+
+#. Touch the ``done`` file.
+
+   Mark that the monitor is created and ready to be started::
+
+     sudo touch /var/lib/ceph/mon/ceph-node1/done
+
+#. And for FreeBSD an entry for every monitor needs to be added to the config
+   file. (This requirement will be removed in future releases.)
+
+   The entry should look like::
+
+     [mon]
+     [mon.node1]
+     host = node1 # this name can be resolved
+
+
+#. Start the monitor(s). 
+
+   For Ubuntu, use Upstart::
+
+     sudo start ceph-mon id=node1 [cluster={cluster-name}]
+
+   In this case, to allow the start of the daemon at each reboot you
+   must create two empty files like this::
+
+     sudo touch /var/lib/ceph/mon/{cluster-name}-{hostname}/upstart
+
+   For example::
+
+     sudo touch /var/lib/ceph/mon/ceph-node1/upstart
+
+   For Debian/CentOS/RHEL, use sysvinit::
+
+     sudo /etc/init.d/ceph start mon.node1
+
+   For FreeBSD we use the rc.d init scripts (called bsdrc in Ceph)::
+
+     sudo service ceph start mon.node1
+
+   For this to work, ``/etc/rc.conf`` also needs an entry to enable Ceph::
+
+     echo 'ceph_enable="YES"' >> /etc/rc.conf
+
+
+#. Verify that Ceph created the default pools. ::
+
+     ceph osd lspools
+
+   You should see output like this::
+
+     0 data
+     1 metadata
+     2 rbd
+
+#. Verify that the monitor is running. ::
+
+     ceph -s
+
+   You should see output that the monitor you started is up and running, and
+   you should see a health error indicating that placement groups are stuck
+   inactive. It should look something like this::
+
+     cluster a7f64266-0894-4f1e-a635-d0aeaca0e993
+       health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
+       monmap e1: 1 mons at {node1=192.168.0.1:6789/0}, election epoch 1, quorum 0 node1
+       osdmap e1: 0 osds: 0 up, 0 in
+       pgmap v2: 192 pgs, 3 pools, 0 bytes data, 0 objects
+         0 kB used, 0 kB / 0 kB avail
+         192 creating
+
+   **Note:** Once you add OSDs and start them, the placement group health errors
+   should disappear. See the next section for details.
+
+.. _freebsd_adding_osds:
+
+Adding OSDs
+===========
+
+Once you have your initial monitor(s) running, you should add OSDs. Your cluster
+cannot reach an ``active + clean`` state until you have enough OSDs to handle the
+number of copies of an object (e.g., ``osd pool default size = 2`` requires at
+least two OSDs). 
After bootstrapping your monitor, your cluster has a default
+CRUSH map; however, the CRUSH map doesn't have any Ceph OSD Daemons mapped to
+a Ceph Node.
+
+
+Long Form
+---------
+
+Without the benefit of any helper utilities, create an OSD and add it to the
+cluster and CRUSH map with the following procedure. To create the first two
+OSDs with the long form procedure, execute the following on ``node2`` and
+``node3``:
+
+#. Connect to the OSD host. ::
+
+     ssh {node-name}
+
+#. Generate a UUID for the OSD. ::
+
+     uuidgen
+
+
+#. Create the OSD. If no UUID is given, it will be set automatically when the
+   OSD starts up. The following command will output the OSD number, which you
+   will need for subsequent steps. ::
+
+     ceph osd create [{uuid} [{id}]]
+
+
+#. Create the default directory on your new OSD. ::
+
+     ssh {new-osd-host}
+     sudo mkdir /var/lib/ceph/osd/{cluster-name}-{osd-number}
+
+   The ZFS instructions for doing this on FreeBSD are in the Disklayout
+   section above.
+
+
+#. If the OSD is for a drive other than the OS drive, prepare it
+   for use with Ceph, and mount it to the directory you just created.
+
+
+#. Initialize the OSD data directory. ::
+
+     ssh {new-osd-host}
+     sudo ceph-osd -i {osd-num} --mkfs --mkkey --osd-uuid [{uuid}]
+
+   The directory must be empty before you can run ``ceph-osd`` with the
+   ``--mkkey`` option. In addition, the ``ceph-osd`` tool requires a custom
+   cluster name to be specified with the ``--cluster`` option.
+
+
+#. Register the OSD authentication key. The value of ``ceph`` for
+   ``ceph-{osd-num}`` in the path is the ``$cluster-$id``. If your
+   cluster name differs from ``ceph``, use your cluster name instead. ::
+
+     sudo ceph auth add osd.{osd-num} osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/{cluster-name}-{osd-num}/keyring
+
+
+#. Add your Ceph Node to the CRUSH map. ::
+
+     ceph [--cluster {cluster-name}] osd crush add-bucket {hostname} host
+
+   For example::
+
+     ceph osd crush add-bucket node1 host
+
+
+#. 
Place the Ceph Node under the root ``default``. :: + + ceph osd crush move node1 root=default + + +#. Add the OSD to the CRUSH map so that it can begin receiving data. You may + also decompile the CRUSH map, add the OSD to the device list, add the host as a + bucket (if it's not already in the CRUSH map), add the device as an item in the + host, assign it a weight, recompile it and set it. :: + + ceph [--cluster {cluster-name}] osd crush add {id-or-name} {weight} [{bucket-type}={bucket-name} ...] + + For example:: + + ceph osd crush add osd.0 1.0 host=node1 + + +#. After you add an OSD to Ceph, the OSD is in your configuration. However, + it is not yet running. The OSD is ``down`` and ``in``. You must start + your new OSD before it can begin receiving data. + + For Ubuntu, use Upstart:: + + sudo start ceph-osd id={osd-num} [cluster={cluster-name}] + + For example:: + + sudo start ceph-osd id=0 + sudo start ceph-osd id=1 + + For Debian/CentOS/RHEL, use sysvinit:: + + sudo /etc/init.d/ceph start osd.{osd-num} [--cluster {cluster-name}] + + For example:: + + sudo /etc/init.d/ceph start osd.0 + sudo /etc/init.d/ceph start osd.1 + + In this case, to allow the start of the daemon at each reboot you + must create an empty file like this:: + + sudo touch /var/lib/ceph/osd/{cluster-name}-{osd-num}/sysvinit + + For example:: + + sudo touch /var/lib/ceph/osd/ceph-0/sysvinit + sudo touch /var/lib/ceph/osd/ceph-1/sysvinit + + Once you start your OSD, it is ``up`` and ``in``. + + For FreeBSD using rc.d init. 
+
+   After adding the OSD to ``ceph.conf``::
+
+     sudo service ceph start osd.{osd-num}
+
+   For example::
+
+     sudo service ceph start osd.0
+     sudo service ceph start osd.1
+
+   In this case, to allow the start of the daemon at each reboot you
+   must create an empty file like this::
+
+     sudo touch /var/lib/ceph/osd/{cluster-name}-{osd-num}/bsdrc
+
+   For example::
+
+     sudo touch /var/lib/ceph/osd/ceph-0/bsdrc
+     sudo touch /var/lib/ceph/osd/ceph-1/bsdrc
+
+   Once you start your OSD, it is ``up`` and ``in``.
+
+
+
+Adding MDS
+==========
+
+In the below instructions, ``{id}`` is an arbitrary name, such as the hostname of the machine.
+
+#. Create the mds data directory. ::
+
+     mkdir -p /var/lib/ceph/mds/{cluster-name}-{id}
+
+#. Create a keyring. ::
+
+     ceph-authtool --create-keyring /var/lib/ceph/mds/{cluster-name}-{id}/keyring --gen-key -n mds.{id}
+
+#. Import the keyring and set caps. ::
+
+     ceph auth add mds.{id} osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/{cluster}-{id}/keyring
+
+#. Add to ceph.conf. ::
+
+     [mds.{id}]
+     host = {id}
+
+#. Start the daemon the manual way. ::
+
+     ceph-mds --cluster {cluster-name} -i {id} -m {mon-hostname}:{mon-port} [-f]
+
+#. Start the daemon the right way (using the ceph.conf entry). ::
+
+     service ceph start
+
+#. If starting the daemon fails with this error::
+
+     mds.-1.0 ERROR: failed to authenticate: (22) Invalid argument
+
+   then make sure you do not have a ``keyring`` setting in the ``[global]``
+   section of ceph.conf; move it to the ``[client]`` section, or add a keyring
+   setting specific to this MDS daemon. Also verify that the key in the MDS
+   data directory matches the output of ``ceph auth get mds.{id}``.
+
+#. Now you are ready to `create a Ceph filesystem`_. 
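The per-daemon ``ceph.conf`` stanza used in the steps above can be generated and sanity-checked from the shell. A minimal sketch, using ``node1`` as an example ``{id}`` and a temporary file standing in for the real ``ceph.conf``:

```shell
# Sketch: append an [mds.{id}] section to a scratch ceph.conf and
# verify it landed. "node1" stands in for your MDS id; CONF is a
# temporary file, not the real configuration.
CONF="$(mktemp)"
id=node1
printf '\n[mds.%s]\n    host = %s\n' "$id" "$id" >> "$CONF"

# Show the section we just appended.
grep -A 1 "^\[mds\.$id\]" "$CONF"
```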
+
+
+Summary
+=======
+
+Once you have your monitor and two OSDs up and running, you can watch the
+placement groups peer by executing the following::
+
+    ceph -w
+
+To view the tree, execute the following::
+
+    ceph osd tree
+
+You should see output that looks something like this::
+
+    # id    weight  type name       up/down reweight
+    -1      2       root default
+    -2      2               host node1
+    0       1                       osd.0   up      1
+    -3      1               host node2
+    1       1                       osd.1   up      1
+
+To add (or remove) additional monitors, see `Add/Remove Monitors`_.
+To add (or remove) additional Ceph OSD Daemons, see `Add/Remove OSDs`_.
+
+
+.. _Installation (Quick): ../../start
+.. _Add/Remove Monitors: ../../rados/operations/add-or-rm-mons
+.. _Add/Remove OSDs: ../../rados/operations/add-or-rm-osds
+.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
+.. _Monitor Config Reference - Data: ../../rados/configuration/mon-config-ref#data
+.. _create a Ceph filesystem: ../../cephfs/createfs
diff --git a/doc/install/mirrors.rst b/doc/install/mirrors.rst
new file mode 100644
index 00000000..99a0b4b0
--- /dev/null
+++ b/doc/install/mirrors.rst
@@ -0,0 +1,66 @@
+=============
+ Ceph Mirrors
+=============
+
+For improved user experience, multiple mirrors for Ceph are available around
+the world.
+
+These mirrors are kindly sponsored by various companies who want to support the
+Ceph project. 
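Using a mirror only requires swapping the hostname in any ``download.ceph.com`` URL; the path stays identical. A small sketch with ``sed`` (``eu.ceph.com`` is one of the mirrors listed below):

```shell
# Sketch: rewrite a download.ceph.com URL to point at the EU mirror.
# Only the hostname changes; the path is preserved.
url='http://download.ceph.com/debian-hammer/'
mirrored="$(printf '%s\n' "$url" | sed 's|download\.ceph\.com|eu.ceph.com|')"
echo "$mirrored"   # -> http://eu.ceph.com/debian-hammer/
```

The same substitution works in APT source lines or Yum repo files that reference download.ceph.com.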
+
+
+Locations
+=========
+
+These mirrors are available in the following locations:
+
+- **EU: Netherlands**: http://eu.ceph.com/
+- **AU: Australia**: http://au.ceph.com/
+- **SE: Sweden**: http://se.ceph.com/
+- **DE: Germany**: http://de.ceph.com/
+- **HK: Hong Kong**: http://hk.ceph.com/
+- **FR: France**: http://fr.ceph.com/
+- **UK: UK**: http://uk.ceph.com
+- **US-East: US East Coast**: http://us-east.ceph.com/
+- **US-West: US West Coast**: http://us-west.ceph.com/
+- **CN: China**: http://mirrors.ustc.edu.cn/ceph/
+
+You can replace all download.ceph.com URLs with any of the mirrors, for example:
+
+- http://download.ceph.com/tarballs/
+- http://download.ceph.com/debian-hammer/
+- http://download.ceph.com/rpm-hammer/
+
+Change this to:
+
+- http://eu.ceph.com/tarballs/
+- http://eu.ceph.com/debian-hammer/
+- http://eu.ceph.com/rpm-hammer/
+
+
+Mirroring
+=========
+
+You can easily mirror Ceph yourself using a Bash script and rsync. An
+easy-to-use script can be found at `Github`_.
+
+When mirroring Ceph, please keep the following guidelines in mind:
+
+- Choose a mirror close to you
+- Do not sync at an interval shorter than 3 hours
+- Avoid syncing at minute 0 of the hour; use something between 0 and 59
+
+
+Becoming a mirror
+=================
+
+If you want to provide a public mirror for other users of Ceph, you can opt to
+become an official mirror.
+
+To make sure all mirrors meet the same standards, some requirements have been
+set for all mirrors. These can be found on `Github`_.
+
+If you want to apply for an official mirror, please contact the ceph-users mailing list.
+
+
+.. _Github: https://github.com/ceph/ceph/tree/master/mirroring
diff --git a/doc/install/upgrading-ceph.rst b/doc/install/upgrading-ceph.rst
new file mode 100644
index 00000000..bf22b38e
--- /dev/null
+++ b/doc/install/upgrading-ceph.rst
@@ -0,0 +1,235 @@
+================
+ Upgrading Ceph
+================
+
+Each release of Ceph may have additional steps. 
Refer to the `release notes
+document of your release`_ to identify release-specific procedures for your
+cluster before using the upgrade procedures.
+
+
+Summary
+=======
+
+You can upgrade daemons in your Ceph cluster while the cluster is online and in
+service! Certain types of daemons depend upon others. For example, Ceph Metadata
+Servers and Ceph Object Gateways depend upon Ceph Monitors and Ceph OSD Daemons.
+We recommend upgrading in this order:
+
+#. `Ceph Deploy`_
+#. Ceph Monitors
+#. Ceph OSD Daemons
+#. Ceph Metadata Servers
+#. Ceph Object Gateways
+
+As a general rule, we recommend upgrading all the daemons of a specific type
+(e.g., all ``ceph-mon`` daemons, all ``ceph-osd`` daemons, etc.) to ensure that
+they are all on the same release. We also recommend that you upgrade all the
+daemons in your cluster before you try to exercise new functionality in a
+release.
+
+The `Upgrade Procedures`_ are relatively simple, but do look at the `release
+notes document of your release`_ before upgrading. The basic process involves
+three steps:
+
+#. Use ``ceph-deploy`` on your admin node to upgrade the packages for
+   multiple hosts (using the ``ceph-deploy install`` command), or log in to each
+   host and upgrade the Ceph package `using your distro's package manager`_.
+   For example, when `Upgrading Monitors`_, the ``ceph-deploy`` syntax might
+   look like this::
+
+     ceph-deploy install --release {release-name} ceph-node1[ ceph-node2]
+     ceph-deploy install --release firefly mon1 mon2 mon3
+
+   **Note:** The ``ceph-deploy install`` command will upgrade the packages
+   in the specified node(s) from the old release to the release you specify.
+   There is no ``ceph-deploy upgrade`` command.
+
+#. Log in to each Ceph node and restart each Ceph daemon.
+   See `Operating a Cluster`_ for details.
+
+#. Ensure your cluster is healthy. See `Monitoring a Cluster`_ for details.
+
+.. important:: Once you upgrade a daemon, you cannot downgrade it. 
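After upgrading, it helps to confirm that every host reports the same release; reducing ``ceph --version``-style output to the bare version number makes that scriptable. A sketch (the sample line is illustrative output, not captured from a real cluster):

```shell
# Sketch: extract the numeric release from a `ceph --version`-style line
# so an upgrade script can compare hosts. The sample line is illustrative.
ver_line='ceph version 0.94.10 (b1e0532418e4631af01acbc0cedd426f1905f4af)'
ver="$(printf '%s\n' "$ver_line" | awk '{print $3}')"
echo "$ver"   # -> 0.94.10
```

Running the same extraction over ``ssh {host} ceph --version`` for each node, and diffing the results, is one way to spot a host that missed the upgrade.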
+ + +Ceph Deploy +=========== + +Before upgrading Ceph daemons, upgrade the ``ceph-deploy`` tool. :: + + sudo pip install -U ceph-deploy + +Or:: + + sudo apt-get install ceph-deploy + +Or:: + + sudo yum install ceph-deploy python-pushy + + +Upgrade Procedures +================== + +The following sections describe the upgrade process. + +.. important:: Each release of Ceph may have some additional steps. Refer to + the `release notes document of your release`_ for details **BEFORE** you + begin upgrading daemons. + + +Upgrading Monitors +------------------ + +To upgrade monitors, perform the following steps: + +#. Upgrade the Ceph package for each daemon instance. + + You may use ``ceph-deploy`` to address all monitor nodes at once. + For example:: + + ceph-deploy install --release {release-name} ceph-node1[ ceph-node2] + ceph-deploy install --release hammer mon1 mon2 mon3 + + You may also use the package manager for your Linux distribution on + each individual node. To upgrade packages manually on each Debian/Ubuntu + host, perform the following steps:: + + ssh {mon-host} + sudo apt-get update && sudo apt-get install ceph + + On CentOS/Red Hat hosts, perform the following steps:: + + ssh {mon-host} + sudo yum update && sudo yum install ceph + + +#. Restart each monitor. For Ubuntu distributions, use:: + + sudo restart ceph-mon id={hostname} + + For CentOS/Red Hat/Debian distributions, use:: + + sudo /etc/init.d/ceph restart {mon-id} + + For CentOS/Red Hat distributions deployed with ``ceph-deploy``, + the monitor ID is usually ``mon.{hostname}``. + +#. Ensure each monitor has rejoined the quorum:: + + ceph mon stat + +Ensure that you have completed the upgrade cycle for all of your Ceph Monitors. + + +Upgrading an OSD +---------------- + +To upgrade a Ceph OSD Daemon, perform the following steps: + +#. Upgrade the Ceph OSD Daemon package. + + You may use ``ceph-deploy`` to address all Ceph OSD Daemon nodes at + once. 
For example:: + + ceph-deploy install --release {release-name} ceph-node1[ ceph-node2] + ceph-deploy install --release hammer osd1 osd2 osd3 + + You may also use the package manager on each node to upgrade packages + `using your distro's package manager`_. For Debian/Ubuntu hosts, perform the + following steps on each host:: + + ssh {osd-host} + sudo apt-get update && sudo apt-get install ceph + + For CentOS/Red Hat hosts, perform the following steps:: + + ssh {osd-host} + sudo yum update && sudo yum install ceph + + +#. Restart the OSD, where ``N`` is the OSD number. For Ubuntu, use:: + + sudo restart ceph-osd id=N + + For multiple OSDs on a host, you may restart all of them with Upstart. :: + + sudo restart ceph-osd-all + + For CentOS/Red Hat/Debian distributions, use:: + + sudo /etc/init.d/ceph restart N + + +#. Ensure each upgraded Ceph OSD Daemon has rejoined the cluster:: + + ceph osd stat + +Ensure that you have completed the upgrade cycle for all of your +Ceph OSD Daemons. + + +Upgrading a Metadata Server +--------------------------- + +To upgrade a Ceph Metadata Server, perform the following steps: + +#. Upgrade the Ceph Metadata Server package. You may use ``ceph-deploy`` to + address all Ceph Metadata Server nodes at once, or use the package manager + on each node. For example:: + + ceph-deploy install --release {release-name} ceph-node1 + ceph-deploy install --release hammer mds1 + + To upgrade packages manually, perform the following steps on each + Debian/Ubuntu host:: + + ssh {mon-host} + sudo apt-get update && sudo apt-get install ceph-mds + + Or the following steps on CentOS/Red Hat hosts:: + + ssh {mon-host} + sudo yum update && sudo yum install ceph-mds + + +#. Restart the metadata server. 
For Ubuntu, use::
+
+     sudo restart ceph-mds id={hostname}
+
+   For CentOS/Red Hat/Debian distributions, use::
+
+     sudo /etc/init.d/ceph restart mds.{hostname}
+
+   For clusters deployed with ``ceph-deploy``, the name is usually either
+   the name you specified on creation or the hostname.
+
+#. Ensure the metadata server is up and running::
+
+     ceph mds stat
+
+
+Upgrading a Client
+------------------
+
+Once you have upgraded the packages and restarted daemons on your Ceph
+cluster, we recommend upgrading ``ceph-common`` and client libraries
+(``librbd1`` and ``librados2``) on your client nodes too.
+
+#. Upgrade the package::
+
+     ssh {client-host}
+     sudo apt-get update && sudo apt-get install ceph-common librados2 librbd1 python-rados python-rbd
+
+#. Ensure that you have the latest version::
+
+     ceph --version
+
+If you do not have the latest version, you may need to uninstall, autoremove
+dependencies, and reinstall.
+
+
+.. _using your distro's package manager: ../install-storage-cluster/
+.. _Operating a Cluster: ../../rados/operations/operating
+.. _Monitoring a Cluster: ../../rados/operations/monitoring
+.. _release notes document of your release: ../../releases
-- 
cgit v1.2.3