author    Daniel Baumann <daniel.baumann@progress-linux.org> 2024-04-07 18:45:59 +0000
committer Daniel Baumann <daniel.baumann@progress-linux.org> 2024-04-07 18:45:59 +0000
commit    19fcec84d8d7d21e796c7624e521b60d28ee21ed (patch)
tree      42d26aa27d1e3f7c0b8bd3fd14e7d7082f5008dc /doc/install
parent    Initial commit. (diff)
Adding upstream version 16.2.11+ds.
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'doc/install')
-rw-r--r--  doc/install/build-ceph.rst                 115
-rw-r--r--  doc/install/clone-source.rst               195
-rw-r--r--  doc/install/containers.rst                 113
-rw-r--r--  doc/install/get-packages.rst               402
-rw-r--r--  doc/install/get-tarballs.rst                14
-rw-r--r--  doc/install/index.rst                       73
-rw-r--r--  doc/install/index_manual.rst                71
-rw-r--r--  doc/install/install-storage-cluster.rst     87
-rw-r--r--  doc/install/install-vm-cloud.rst           132
-rw-r--r--  doc/install/manual-deployment.rst          529
-rw-r--r--  doc/install/manual-freebsd-deployment.rst  575
-rw-r--r--  doc/install/mirrors.rst                     67
-rw-r--r--  doc/install/windows-basic-config.rst        48
-rw-r--r--  doc/install/windows-install.rst             88
-rw-r--r--  doc/install/windows-troubleshooting.rst     96
15 files changed, 2605 insertions, 0 deletions
diff --git a/doc/install/build-ceph.rst b/doc/install/build-ceph.rst
new file mode 100644
index 000000000..5c93ca52d
--- /dev/null
+++ b/doc/install/build-ceph.rst
@@ -0,0 +1,115 @@
+============
+ Build Ceph
+============
+
+You can get Ceph software by retrieving the Ceph source code and building it
+yourself. To build Ceph, you need to set up a development environment, compile
+Ceph, and then either install it in user space or build packages and install
+those packages.
+
+Build Prerequisites
+===================
+
+
+.. tip:: Check this section to see if there are specific prerequisites for your
+ Linux/Unix distribution.
+
+A debug build of Ceph may require around 40 gigabytes of disk space. If you
+want to build Ceph in a virtual machine (VM), please make sure that the total
+disk space on the VM is at least 60 gigabytes.
+
+Please also be aware that some distributions of Linux, like CentOS, use Logical
+Volume Manager (LVM) for the default installation. LVM may reserve a large
+portion of the disk space of a typically sized virtual disk for the operating
+system.
+
+Before you can build Ceph source code, you need to install several libraries
+and tools::
+
+ ./install-deps.sh
+
+.. note:: Some distributions that support Google's memory profiler tool may use
+ a different package name (e.g., ``libgoogle-perftools4``).
+
+Build Ceph
+==========
+
+Ceph is built using cmake. To build Ceph, navigate to your cloned Ceph
+repository and execute the following::
+
+ cd ceph
+ ./do_cmake.sh
+ cd build
+ make
+
+.. note:: By default, ``do_cmake.sh`` will build a debug version of Ceph that
+ may perform up to 5 times slower with certain workloads. Pass
+ ``-DCMAKE_BUILD_TYPE=RelWithDebInfo`` to ``do_cmake.sh`` if you would like to
+ build a release version of the Ceph executables instead.
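+
+ For example (a sketch, assuming ``do_cmake.sh`` forwards extra arguments to
+ ``cmake`` unchanged)::
+
+    ./do_cmake.sh -DCMAKE_BUILD_TYPE=RelWithDebInfo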
+
+.. topic:: Hyperthreading
+
+ You can use ``make -j`` to run multiple build jobs in parallel, depending on
+ your system. For example, ``make -j4`` on a dual-core processor may build
+ faster.
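+
+ To run one build job per available core (a sketch, assuming the ``nproc``
+ utility from GNU coreutils is available)::
+
+    make -j$(nproc)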
+
+See `Installing a Build`_ to install a build in user space.
+
+Build Ceph Packages
+===================
+
+To build packages, you must clone the `Ceph`_ repository. You can create
+installation packages from the latest code using ``dpkg-buildpackage`` for
+Debian/Ubuntu or ``rpmbuild`` for the RPM Package Manager.
+
+.. tip:: When building on a multi-core CPU, pass ``-j`` followed by twice the
+ number of cores. For example, use ``-j4`` on a dual-core processor to
+ accelerate the build.
+
+
+Advanced Package Tool (APT)
+---------------------------
+
+To create ``.deb`` packages for Debian/Ubuntu, ensure that you have cloned the
+`Ceph`_ repository, installed the `Build Prerequisites`_ and installed
+``debhelper``::
+
+ sudo apt-get install debhelper
+
+Once you have installed debhelper, you can build the packages::
+
+ sudo dpkg-buildpackage
+
+For multi-processor CPUs use the ``-j`` option to accelerate the build.
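+
+For example (a sketch; adjust the job count to your machine)::
+
+   sudo dpkg-buildpackage -j4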
+
+
+RPM Package Manager
+-------------------
+
+To create ``.rpm`` packages, ensure that you have cloned the `Ceph`_ repository,
+installed the `Build Prerequisites`_ and installed ``rpm-build`` and
+``rpmdevtools``::
+
+ yum install rpm-build rpmdevtools
+
+Once you have installed the tools, set up an RPM compilation environment::
+
+ rpmdev-setuptree
+
+Fetch the source tarball for the RPM compilation environment::
+
+ wget -P ~/rpmbuild/SOURCES/ https://download.ceph.com/tarballs/ceph-<version>.tar.bz2
+
+Or from the EU mirror::
+
+ wget -P ~/rpmbuild/SOURCES/ http://eu.ceph.com/tarballs/ceph-<version>.tar.bz2
+
+Extract the specfile::
+
+ tar --strip-components=1 -C ~/rpmbuild/SPECS/ --no-anchored -xvjf ~/rpmbuild/SOURCES/ceph-<version>.tar.bz2 "ceph.spec"
+
+Build the RPM packages::
+
+ rpmbuild -ba ~/rpmbuild/SPECS/ceph.spec
+
+On multi-processor machines, the build can be parallelized through the
+standard ``_smp_mflags`` RPM macro (``rpmbuild`` itself does not accept a
+``-j`` option).
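+
+A minimal sketch, assuming the Ceph spec file honors ``_smp_mflags`` during
+its build stage::
+
+   rpmbuild -ba --define '_smp_mflags -j4' ~/rpmbuild/SPECS/ceph.spec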
+
+.. _Ceph: ../clone-source
+.. _Installing a Build: ../install-storage-cluster#installing-a-build
diff --git a/doc/install/clone-source.rst b/doc/install/clone-source.rst
new file mode 100644
index 000000000..2d09ef9eb
--- /dev/null
+++ b/doc/install/clone-source.rst
@@ -0,0 +1,195 @@
+=========================================
+ Cloning the Ceph Source Code Repository
+=========================================
+
+To download a snapshot of a Ceph branch (without the git history), go to the
+`github Ceph Repository`_, select a branch (``main`` by default), and click
+the **Download ZIP** button.
+
+.. _github Ceph Repository: https://github.com/ceph/ceph
+
+To clone the entire git repository, :ref:`install <install-git>` and configure
+``git``.
+
+.. _install-git:
+
+Install Git
+===========
+
+To install ``git`` on Debian/Ubuntu, run the following command:
+
+.. prompt:: bash $
+
+ sudo apt-get install git
+
+
+To install ``git`` on CentOS/RHEL, run the following command:
+
+.. prompt:: bash $
+
+ sudo yum install git
+
+
+You must have a ``github`` account. If you do not have a ``github``
+account, go to `github.com`_ and register. Follow the directions for setting
+up git at `Set Up Git`_.
+
+.. _github.com: https://github.com
+.. _Set Up Git: https://help.github.com/linux-set-up-git
+
+
+Add SSH Keys (Optional)
+=======================
+
+To commit code to Ceph or to clone the repository by using SSH
+(``git@github.com:ceph/ceph.git``), you must generate SSH keys for github.
+
+.. tip:: If you want only to clone the repository, you can
+ use ``git clone --recursive https://github.com/ceph/ceph.git``
+ without generating SSH keys.
+
+To generate SSH keys for ``github``, run the following command:
+
+.. prompt:: bash $
+
+ ssh-keygen
+
+To print the SSH key that you just generated and that you will add to your
+``github`` account, use the ``cat`` command (the following example assumes
+that you used the default file path):
+
+.. prompt:: bash $
+
+ cat .ssh/id_rsa.pub
+
+Copy the public key.
+
+Go to your ``github`` account, click "Account Settings" (represented by the
+'tools' icon), and click "SSH Keys" on the left side navbar.
+
+Click "Add SSH key" in the "SSH Keys" list, enter a name for the key, paste the
+key you generated, and press the "Add key" button.
+
+
+Clone the Source
+================
+
+To clone the Ceph source code repository, run the following command:
+
+.. prompt:: bash $
+
+ git clone --recursive https://github.com/ceph/ceph.git
+
+After ``git clone`` has run, you should have a full copy of the Ceph
+repository.
+
+.. tip:: Make sure you maintain the latest copies of the submodules included in
+ the repository. Running ``git status`` will tell you whether the submodules
+ are out of date. See :ref:`update-submodules` for more information.
+
+
+.. prompt:: bash $
+
+ cd ceph
+ git status
+
+.. _update-submodules:
+
+Updating Submodules
+-------------------
+
+#. Determine whether your submodules are out of date:
+
+ .. prompt:: bash $
+
+ git status
+
+ A. If your submodules are up to date
+ If your submodules are up to date, the following console output will
+ appear:
+
+ ::
+
+ On branch main
+ Your branch is up to date with 'origin/main'.
+
+ nothing to commit, working tree clean
+
+ If you see this console output, then your submodules are up to date.
+ You do not need this procedure.
+
+
+ B. If your submodules are not up to date
+ If your submodules are not up to date, you will see a message that
+ includes a list of "untracked files". The example below shows such a
+ list, generated in a situation in which the submodules were no longer
+ current. Your list of files will differ from this one, but if any
+ untracked files are listed, continue to the next step of this
+ procedure.
+
+ ::
+
+ On branch main
+ Your branch is up to date with 'origin/main'.
+
+ Untracked files:
+ (use "git add <file>..." to include in what will be committed)
+ src/pybind/cephfs/build/
+ src/pybind/cephfs/cephfs.c
+ src/pybind/cephfs/cephfs.egg-info/
+ src/pybind/rados/build/
+ src/pybind/rados/rados.c
+ src/pybind/rados/rados.egg-info/
+ src/pybind/rbd/build/
+ src/pybind/rbd/rbd.c
+ src/pybind/rbd/rbd.egg-info/
+ src/pybind/rgw/build/
+ src/pybind/rgw/rgw.c
+ src/pybind/rgw/rgw.egg-info/
+
+ nothing added to commit but untracked files present (use "git add" to track)
+
+#. If your submodules are out of date, run the following commands:
+
+ .. prompt:: bash $
+
+ git submodule update --force --init --recursive
+ git clean -fdx
+ git submodule foreach git clean -fdx
+
+ If you still have problems with a submodule directory, use ``rm -rf
+ [directory name]`` to remove the directory. Then run ``git submodule update
+ --init --recursive`` again.
+
+#. Run ``git status`` again:
+
+ .. prompt:: bash $
+
+ git status
+
+ Your submodules are up to date if you see the following message:
+
+ ::
+
+ On branch main
+ Your branch is up to date with 'origin/main'.
+
+ nothing to commit, working tree clean
+
+Choose a Branch
+===============
+
+Once you clone the source code and submodules, your Ceph repository
+will be on the ``main`` branch by default, which is the unstable
+development branch. You may choose other branches too.
+
+- ``main``: The unstable development branch.
+- ``stable-release-name``: The name of a stable release branch (see `Active Releases`_), e.g. ``pacific``
+- ``next``: The release candidate branch.
+
+::
+
+ git checkout main
+
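+To work from a stable release instead, check out the corresponding branch (a
+sketch; the lowercase branch name ``pacific`` is assumed here)::
+
+   git checkout pacific
+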
+.. _Active Releases: https://docs.ceph.com/en/latest/releases/#active-releases
diff --git a/doc/install/containers.rst b/doc/install/containers.rst
new file mode 100644
index 000000000..49c976199
--- /dev/null
+++ b/doc/install/containers.rst
@@ -0,0 +1,113 @@
+.. _containers:
+
+Ceph Container Images
+=====================
+
+.. important::
+
+ Using the ``:latest`` tag is discouraged. If you use the ``:latest``
+ tag, there is no guarantee that the same image will be on each of
+ your hosts. Under these conditions, upgrades might not work
+ properly. Remember that ``:latest`` is a relative tag, and a moving
+ target.
+
+ Instead of the ``:latest`` tag, use explicit tags or image IDs. For
+ example:
+
+ ``podman pull ceph/ceph:v15.2.0``
+
+Official Releases
+-----------------
+
+Ceph container images are available from both Quay and Docker Hub:
+
+ https://quay.io/repository/ceph/ceph
+ https://hub.docker.com/r/ceph
+
+ceph/ceph
+^^^^^^^^^
+
+- General purpose Ceph container with all necessary daemons and
+ dependencies installed.
+
++----------------------+--------------------------------------------------------------+
+| Tag | Meaning |
++----------------------+--------------------------------------------------------------+
+| vRELNUM | Latest release in this series (e.g., *v14* = Nautilus) |
++----------------------+--------------------------------------------------------------+
+| vRELNUM.2 | Latest *stable* release in this stable series (e.g., *v14.2*)|
++----------------------+--------------------------------------------------------------+
+| vRELNUM.Y.Z | A specific release (e.g., *v14.2.4*) |
++----------------------+--------------------------------------------------------------+
+| vRELNUM.Y.Z-YYYYMMDD | A specific build (e.g., *v14.2.4-20191203*) |
++----------------------+--------------------------------------------------------------+
+
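+For example, to pull the specific build shown in the table above (a sketch;
+substitute a tag that exists for your target release)::
+
+   podman pull ceph/ceph:v14.2.4-20191203
+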
+Legacy container images
+-----------------------
+
+Legacy container images are available from Docker Hub at::
+
+ https://hub.docker.com/r/ceph
+
+ceph/daemon-base
+^^^^^^^^^^^^^^^^
+
+- General purpose Ceph container with all necessary daemons and
+ dependencies installed.
+- Basically the same as *ceph/ceph*, but with different tags.
+- Note that all of the *-devel* tags (and the *latest-master* tag) are based on
+ unreleased and generally untested packages from https://shaman.ceph.com.
+
+:note: This image will soon become an alias to *ceph/ceph*.
+
++------------------------+---------------------------------------------------------+
+| Tag | Meaning |
++------------------------+---------------------------------------------------------+
+| latest-master | Build of master branch as of the last ceph-container.git update |
++------------------------+---------------------------------------------------------+
+| latest-master-devel | Daily build of the master branch |
++------------------------+---------------------------------------------------------+
+| latest-RELEASE-devel | Daily build of the *RELEASE* (e.g., nautilus) branch |
++------------------------+---------------------------------------------------------+
+
+
+ceph/daemon
+^^^^^^^^^^^
+
+- *ceph/daemon-base* plus a collection of BASH scripts that are used
+ by ceph-nano and ceph-ansible to manage a Ceph cluster.
+
++------------------------+---------------------------------------------------------+
+| Tag | Meaning |
++------------------------+---------------------------------------------------------+
+| latest-master | Build of master branch as of the last ceph-container.git update |
++------------------------+---------------------------------------------------------+
+| latest-master-devel | Daily build of the master branch |
++------------------------+---------------------------------------------------------+
+| latest-RELEASE-devel | Daily build of the *RELEASE* (e.g., nautilus) branch |
++------------------------+---------------------------------------------------------+
+
+
+Development builds
+------------------
+
+We automatically build container images for development ``wip-*``
+branches in the ceph-ci.git repositories and push them to Quay at::
+
+ https://quay.io/organization/ceph-ci
+
+ceph-ci/ceph
+^^^^^^^^^^^^
+
+- This is analogous to the ceph/ceph image above
+- TODO: remove the ``wip-*`` limitation and also build ceph.git branches.
+
++------------------------------------+------------------------------------------------------+
+| Tag | Meaning |
++------------------------------------+------------------------------------------------------+
+| BRANCH | Latest build of a given GIT branch (e.g., *wip-foo*) |
++------------------------------------+------------------------------------------------------+
+| BRANCH-SHORTSHA1-BASEOS-ARCH-devel | A specific build of a branch |
++------------------------------------+------------------------------------------------------+
+| SHA1 | A specific build |
++------------------------------------+------------------------------------------------------+
diff --git a/doc/install/get-packages.rst b/doc/install/get-packages.rst
new file mode 100644
index 000000000..261815187
--- /dev/null
+++ b/doc/install/get-packages.rst
@@ -0,0 +1,402 @@
+.. _packages:
+
+==============
+ Get Packages
+==============
+
+To install Ceph and other enabling software, you need to retrieve packages from
+the Ceph repository.
+
+There are three ways to get packages:
+
+- **Cephadm:** Cephadm can configure your Ceph repositories for you
+ based on a release name or a specific Ceph version. Each
+ :term:`Ceph Node` in your cluster must have internet access.
+
+- **Configure Repositories Manually:** You can manually configure your
+ package management tool to retrieve Ceph packages and all enabling
+ software. Each :term:`Ceph Node` in your cluster must have internet
+ access.
+
+- **Download Packages Manually:** Downloading packages manually is a convenient
+ way to install Ceph if your environment does not allow a :term:`Ceph Node` to
+ access the internet.
+
+Install packages with cephadm
+=============================
+
+#. Download the cephadm script:
+
+.. prompt:: bash $
+ :substitutions:
+
+ curl --silent --remote-name --location https://github.com/ceph/ceph/raw/|stable-release|/src/cephadm/cephadm
+ chmod +x cephadm
+
+#. Configure the Ceph repository based on the release name::
+
+ ./cephadm add-repo --release nautilus
+
+ For Octopus (15.2.0) and later releases, you can also specify a specific
+ version::
+
+ ./cephadm add-repo --version 15.2.1
+
+ For development packages, you can specify a specific branch name::
+
+ ./cephadm add-repo --dev my-branch
+
+#. Install the appropriate packages. You can install them using your
+ package management tool (e.g., APT, Yum) directly, or you can use
+ the cephadm wrapper. For example::
+
+ ./cephadm install ceph-common
+
+
+Configure Repositories Manually
+===============================
+
+All Ceph deployments require Ceph release packages, except deployments that
+use development packages. You should also add keys and recommended packages.
+
+- **Keys: (Recommended)** Whether you add repositories or download packages
+ manually, you should download keys to verify the packages. If you do not get
+ the keys, you may encounter security warnings.
+
+- **Ceph: (Required)** All Ceph deployments require Ceph release packages,
+ except for deployments that use development packages (development, QA, and
+ bleeding edge deployments only).
+
+- **Ceph Development: (Optional)** If you are developing for Ceph, testing Ceph
+ development builds, or if you want features from the bleeding edge of Ceph
+ development, you may get Ceph development packages.
+
+
+
+Add Keys
+--------
+
+Add a key to your system's list of trusted keys to avoid a security warning. For
+major releases (e.g., ``luminous``, ``mimic``, ``nautilus``) and development releases
+(``release-name-rc1``, ``release-name-rc2``), use the ``release.asc`` key.
+
+
+APT
+~~~
+
+To install the ``release.asc`` key, execute the following::
+
+ wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
+
+
+RPM
+~~~
+
+To install the ``release.asc`` key, execute the following::
+
+ sudo rpm --import 'https://download.ceph.com/keys/release.asc'
+
+Ceph Release Packages
+---------------------
+
+Release repositories use the ``release.asc`` key to verify packages.
+To install Ceph packages with the Advanced Package Tool (APT) or
+Yellowdog Updater, Modified (YUM), you must add Ceph repositories.
+
+You may find releases for Debian/Ubuntu (installed with APT) at::
+
+ https://download.ceph.com/debian-{release-name}
+
+You may find releases for CentOS/RHEL and others (installed with YUM) at::
+
+ https://download.ceph.com/rpm-{release-name}
+
+For Octopus and later releases, you can also configure a repository for a
+specific version ``x.y.z``. For Debian/Ubuntu packages::
+
+ https://download.ceph.com/debian-{version}
+
+For RPMs::
+
+ https://download.ceph.com/rpm-{version}
+
+The major releases of Ceph are summarized at: `Releases`_
+
+.. tip:: For non-US users: there might be a mirror close to you from which
+ to download Ceph. For more information, see `Ceph Mirrors`_.
+
+Debian Packages
+~~~~~~~~~~~~~~~
+
+Add a Ceph package repository to your system's list of APT sources. For newer
+versions of Debian/Ubuntu, call ``lsb_release -sc`` on the command line to
+get the short codename, and replace ``{codename}`` in the following command.
+
+.. prompt:: bash $
+ :substitutions:
+
+ sudo apt-add-repository 'deb https://download.ceph.com/debian-|stable-release|/ {codename} main'
+
+For older Linux distributions, you may execute the following command instead:
+
+.. prompt:: bash $
+ :substitutions:
+
+ echo deb https://download.ceph.com/debian-|stable-release|/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
+
+For earlier Ceph releases, replace ``{release-name}`` with the name of the
+Ceph release. You may call ``lsb_release -sc`` on the command line
+to get the short codename, and replace ``{codename}`` in the following command.
+
+.. prompt:: bash $
+
+ sudo apt-add-repository 'deb https://download.ceph.com/debian-{release-name}/ {codename} main'
+
+For older Linux distributions, replace ``{release-name}`` with the name of the
+release
+
+.. prompt:: bash $
+
+ echo deb https://download.ceph.com/debian-{release-name}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
+
+For development release packages, add our package repository to your system's
+list of APT sources. See `the testing Debian repository`_ for a complete list
+of Debian and Ubuntu releases supported.
+
+.. prompt:: bash $
+
+ echo deb https://download.ceph.com/debian-testing/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
+
+.. tip:: For non-US users: there might be a mirror close to you from which
+ to download Ceph. For more information, see `Ceph Mirrors`_.
+
+
+RPM Packages
+~~~~~~~~~~~~
+
+RHEL
+^^^^
+
+For major releases, you may add a Ceph entry to the ``/etc/yum.repos.d``
+directory. Create a ``ceph.repo`` file. In the example below, replace
+``{ceph-release}`` with a major release of Ceph (e.g., ``|stable-release|``)
+and ``{distro}`` with your Linux distribution (e.g., ``el8``). You may browse
+the https://download.ceph.com/rpm-{ceph-release}/ directory to see which
+distributions Ceph supports. The Ceph repository must take priority over
+standard repositories (e.g., EPEL), so you must ensure that you set
+``priority=2``.
+
+.. code-block:: ini
+
+ [ceph]
+ name=Ceph packages for $basearch
+ baseurl=https://download.ceph.com/rpm-{ceph-release}/{distro}/$basearch
+ enabled=1
+ priority=2
+ gpgcheck=1
+ gpgkey=https://download.ceph.com/keys/release.asc
+
+ [ceph-noarch]
+ name=Ceph noarch packages
+ baseurl=https://download.ceph.com/rpm-{ceph-release}/{distro}/noarch
+ enabled=1
+ priority=2
+ gpgcheck=1
+ gpgkey=https://download.ceph.com/keys/release.asc
+
+ [ceph-source]
+ name=Ceph source packages
+ baseurl=https://download.ceph.com/rpm-{ceph-release}/{distro}/SRPMS
+ enabled=0
+ priority=2
+ gpgcheck=1
+ gpgkey=https://download.ceph.com/keys/release.asc
+
+
+You may retrieve specific packages by downloading the release package by name.
+Our development process generates a new release of Ceph every 3-4 weeks. These
+packages are faster-moving than the major releases. Development packages have
+new features integrated quickly, while still undergoing several weeks of QA
+prior to release.
+
+The repository package installs the repository details on your local system for
+use with ``yum``. Replace ``{distro}`` with your Linux distribution, and
+``{release}`` with the specific release of Ceph
+
+.. prompt:: bash $
+
+ su -c 'rpm -Uvh https://download.ceph.com/rpms/{distro}/x86_64/ceph-{release}.el7.noarch.rpm'
+
+You can download the RPMs directly from
+
+.. code-block:: none
+
+ https://download.ceph.com/rpm-testing
+
+.. tip:: For non-US users: there might be a mirror close to you from which
+ to download Ceph. For more information, see `Ceph Mirrors`_.
+
+openSUSE Leap 15.1
+^^^^^^^^^^^^^^^^^^
+
+You need to add the Ceph package repository to your list of zypper sources. This can be done with the following command
+
+.. code-block:: bash
+
+ zypper ar https://download.opensuse.org/repositories/filesystems:/ceph/openSUSE_Leap_15.1/filesystems:ceph.repo
+
+openSUSE Tumbleweed
+^^^^^^^^^^^^^^^^^^^
+
+The newest major release of Ceph is already available through the normal Tumbleweed repositories.
+There's no need to add another package repository manually.
+
+
+Ceph Development Packages
+-------------------------
+
+If you are developing Ceph and need to deploy and test specific Ceph branches,
+ensure that you remove repository entries for major releases first.
+
+
+DEB Packages
+~~~~~~~~~~~~
+
+We automatically build Ubuntu packages for current development branches in the
+Ceph source code repository. These packages are intended for developers and QA
+only.
+
+Add the package repository to your system's list of APT sources, but
+replace ``{BRANCH}`` with the branch you'd like to use (e.g.,
+wip-hack, master). See `the shaman page`_ for a complete
+list of distributions we build.
+
+.. prompt:: bash $
+
+ curl -L https://shaman.ceph.com/api/repos/ceph/{BRANCH}/latest/ubuntu/$(lsb_release -sc)/repo/ | sudo tee /etc/apt/sources.list.d/shaman.list
+
+.. note:: If the repository is not ready an HTTP 504 will be returned
+
+Using ``latest`` in the URL resolves to the most recently built commit.
+Alternatively, a specific SHA1 can be specified. For Ubuntu Xenial and the
+master branch of Ceph, the command would look like this:
+
+.. prompt:: bash $
+
+ curl -L https://shaman.ceph.com/api/repos/ceph/master/53e772a45fdf2d211c0c383106a66e1feedec8fd/ubuntu/xenial/repo/ | sudo tee /etc/apt/sources.list.d/shaman.list
+
+
+.. warning:: Development repositories are no longer available after two weeks.
+
+RPM Packages
+~~~~~~~~~~~~
+
+For current development branches, you may add a Ceph entry to the
+``/etc/yum.repos.d`` directory. `The shaman page`_ can be used to retrieve the
+full details of a repo file, which can be fetched via an HTTP request. For example:
+
+.. prompt:: bash $
+
+ curl -L https://shaman.ceph.com/api/repos/ceph/{BRANCH}/latest/centos/7/repo/ | sudo tee /etc/yum.repos.d/shaman.repo
+
+Using ``latest`` in the URL resolves to the most recently built commit.
+Alternatively, a specific SHA1 can be specified. For CentOS 7 and the master
+branch of Ceph, the command would look like this:
+
+.. prompt:: bash $
+
+ curl -L https://shaman.ceph.com/api/repos/ceph/master/53e772a45fdf2d211c0c383106a66e1feedec8fd/centos/7/repo/ | sudo tee /etc/yum.repos.d/shaman.repo
+
+
+.. warning:: Development repositories are no longer available after two weeks.
+
+.. note:: If the repository is not ready an HTTP 504 will be returned
+
+Download Packages Manually
+--------------------------
+
+If you are installing behind a firewall in an environment without internet
+access, you must retrieve the packages (mirrored with all the necessary
+dependencies) before installing.
+
+Debian Packages
+~~~~~~~~~~~~~~~
+
+Ceph requires additional third party libraries.
+
+- libaio1
+- libsnappy1
+- libcurl3
+- curl
+- libgoogle-perftools4
+- google-perftools
+- libleveldb1
+
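+A sketch of installing these dependencies with APT (exact package names may
+differ between Debian/Ubuntu releases)::
+
+   sudo apt-get install libaio1 libsnappy1 libcurl3 curl \
+       libgoogle-perftools4 google-perftools libleveldb1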
+
+The repository package installs the repository details on your local system for
+use with ``apt``. Replace ``{release}`` with the latest Ceph release. Replace
+``{version}`` with the latest Ceph version number. Replace ``{distro}`` with
+your Linux distribution codename. Replace ``{arch}`` with the CPU architecture.
+
+.. prompt:: bash $
+
+ wget -q https://download.ceph.com/debian-{release}/pool/main/c/ceph/ceph_{version}{distro}_{arch}.deb
+
+
+RPM Packages
+~~~~~~~~~~~~
+
+Ceph requires additional third party libraries.
+To add the EPEL repository, execute the following
+
+.. prompt:: bash $
+
+ sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
+
+Ceph requires the following packages:
+
+- snappy
+- leveldb
+- gdisk
+- python-argparse
+- gperftools-libs
+
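+A sketch of installing these prerequisites with ``yum``::
+
+   sudo yum install snappy leveldb gdisk python-argparse gperftools-libs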
+
+Packages are currently built for the RHEL/CentOS7 (``el7``) platforms. The
+repository package installs the repository details on your local system for use
+with ``yum``. Replace ``{distro}`` with your distribution.
+
+.. prompt:: bash $
+ :substitutions:
+
+ su -c 'rpm -Uvh https://download.ceph.com/rpm-|stable-release|/{distro}/noarch/ceph-{version}.{distro}.noarch.rpm'
+
+For example, for CentOS 8 (``el8``)
+
+.. prompt:: bash $
+ :substitutions:
+
+ su -c 'rpm -Uvh https://download.ceph.com/rpm-|stable-release|/el8/noarch/ceph-release-1-0.el8.noarch.rpm'
+
+You can download the RPMs directly from
+
+.. code-block:: none
+ :substitutions:
+
+ https://download.ceph.com/rpm-|stable-release|
+
+
+For earlier Ceph releases, replace ``{release-name}`` with the name of the
+Ceph release. You may call ``lsb_release -sc`` on the command
+line to get the short codename.
+
+.. prompt:: bash $
+
+ su -c 'rpm -Uvh https://download.ceph.com/rpm-{release-name}/{distro}/noarch/ceph-{version}.{distro}.noarch.rpm'
+
+
+
+.. _Releases: https://docs.ceph.com/en/latest/releases/
+.. _the testing Debian repository: https://download.ceph.com/debian-testing/dists
+.. _the shaman page: https://shaman.ceph.com
+.. _Ceph Mirrors: ../mirrors
diff --git a/doc/install/get-tarballs.rst b/doc/install/get-tarballs.rst
new file mode 100644
index 000000000..175d0399b
--- /dev/null
+++ b/doc/install/get-tarballs.rst
@@ -0,0 +1,14 @@
+====================================
+ Downloading a Ceph Release Tarball
+====================================
+
+As Ceph development progresses, the Ceph team releases new versions of the
+source code. You may download source code tarballs for Ceph releases here:
+
+`Ceph Release Tarballs`_
+
+.. tip:: For international users: there might be a mirror close to you from which to download Ceph. For more information, see `Ceph Mirrors`_.
+
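+For example, to fetch a specific release tarball with ``wget`` (a sketch;
+replace ``<version>`` with the release you want)::
+
+   wget https://download.ceph.com/tarballs/ceph-<version>.tar.bz2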
+
+.. _Ceph Release Tarballs: https://download.ceph.com/tarballs/
+.. _Ceph Mirrors: ../mirrors
diff --git a/doc/install/index.rst b/doc/install/index.rst
new file mode 100644
index 000000000..841febbe3
--- /dev/null
+++ b/doc/install/index.rst
@@ -0,0 +1,73 @@
+.. _install-overview:
+
+===============
+Installing Ceph
+===============
+
+There are several different ways to install Ceph. Choose the
+method that best suits your needs.
+
+Recommended methods
+~~~~~~~~~~~~~~~~~~~
+
+:ref:`Cephadm <cephadm>` installs and manages a Ceph cluster using containers and
+systemd, with tight integration with the CLI and dashboard GUI.
+
+* cephadm only supports Octopus and newer releases.
+* cephadm is fully integrated with the new orchestration API and
+ fully supports the new CLI and dashboard features to manage
+ cluster deployment.
+* cephadm requires container support (podman or docker) and
+ Python 3.
+
+`Rook <https://rook.io/>`_ deploys and manages Ceph clusters running
+in Kubernetes, while also enabling management of storage resources and
+provisioning via Kubernetes APIs. We recommend Rook as the way to run Ceph in
+Kubernetes or to connect an existing Ceph storage cluster to Kubernetes.
+
+* Rook only supports Nautilus and newer releases of Ceph.
+* Rook is the preferred method for running Ceph on Kubernetes, or for
+ connecting a Kubernetes cluster to an existing (external) Ceph
+ cluster.
+* Rook supports the new orchestrator API. New management features
+ in the CLI and dashboard are fully supported.
+
+Other methods
+~~~~~~~~~~~~~
+
+`ceph-ansible <https://docs.ceph.com/ceph-ansible/>`_ deploys and manages
+Ceph clusters using Ansible.
+
+* ceph-ansible is widely deployed.
+* ceph-ansible is not integrated with the new orchestrator APIs,
+ introduced in Nautilus and Octopus, which means that newer
+ management features and dashboard integration are not available.
+
+
+`ceph-deploy <https://docs.ceph.com/projects/ceph-deploy/en/latest/>`_ is a tool for quickly deploying clusters.
+
+ .. IMPORTANT::
+
+ ceph-deploy is no longer actively maintained. It is not tested on versions of Ceph newer than Nautilus. It does not support RHEL8, CentOS 8, or newer operating systems.
+
+`DeepSea <https://github.com/SUSE/DeepSea>`_ installs Ceph using Salt.
+
+`jaas.ai/ceph-mon <https://jaas.ai/ceph-mon>`_ installs Ceph using Juju.
+
+`github.com/openstack/puppet-ceph <https://github.com/openstack/puppet-ceph>`_ installs Ceph via Puppet.
+
+Ceph can also be :ref:`installed manually <install-manual>`.
+
+
+.. toctree::
+ :hidden:
+
+ index_manual
+
+Windows
+~~~~~~~
+
+For Windows installations, please consult this document:
+`Windows installation guide`_.
+
+.. _Windows installation guide: ./windows-install
diff --git a/doc/install/index_manual.rst b/doc/install/index_manual.rst
new file mode 100644
index 000000000..60dccfde2
--- /dev/null
+++ b/doc/install/index_manual.rst
@@ -0,0 +1,71 @@
+.. _install-manual:
+
+=======================
+ Installation (Manual)
+=======================
+
+
+Get Software
+============
+
+There are several methods for getting Ceph software. The easiest and most common
+method is to `get packages`_ by adding repositories for use with package
+management tools such as the Advanced Package Tool (APT) or Yellowdog Updater,
+Modified (YUM). You may also retrieve pre-compiled packages from the Ceph
+repository. Finally, you can retrieve tarballs or clone the Ceph source code
+repository and build Ceph yourself.
+
+
+.. toctree::
+ :maxdepth: 1
+
+ Get Packages <get-packages>
+ Get Tarballs <get-tarballs>
+ Clone Source <clone-source>
+ Build Ceph <build-ceph>
+ Ceph Mirrors <mirrors>
+ Ceph Containers <containers>
+
+
+Install Software
+================
+
+Once you have the Ceph software (or added repositories), installing the
+software is easy. Install the packages on each :term:`Ceph Node` in your
+cluster. You may use ``cephadm`` to install Ceph for your storage cluster, or
+use package management tools. You should install Yum Priorities for
+RHEL/CentOS and other distributions that use Yum if you intend to install the
+Ceph Object Gateway or QEMU.
+
+.. toctree::
+ :maxdepth: 1
+
+ Install cephadm <../cephadm/install>
+ Install Ceph Storage Cluster <install-storage-cluster>
+ Install Virtualization for Block <install-vm-cloud>
+
+
+Deploy a Cluster Manually
+=========================
+
+Once you have Ceph installed on your nodes, you can deploy a cluster manually.
+The manual procedure is intended primarily as an example for those developing
+deployment scripts with Chef, Juju, Puppet, etc.
+
+.. toctree::
+
+ Manual Deployment <manual-deployment>
+ Manual Deployment on FreeBSD <manual-freebsd-deployment>
+
+Upgrade Software
+================
+
+As new versions of Ceph become available, you may upgrade your cluster to take
+advantage of new functionality. Read the upgrade documentation before you
+upgrade your cluster. Sometimes upgrading Ceph requires you to follow an upgrade
+sequence.
+
+.. toctree::
+ :maxdepth: 2
+
+.. _get packages: ../get-packages
diff --git a/doc/install/install-storage-cluster.rst b/doc/install/install-storage-cluster.rst
new file mode 100644
index 000000000..dcb9f5040
--- /dev/null
+++ b/doc/install/install-storage-cluster.rst
@@ -0,0 +1,87 @@
+==============================
+ Install Ceph Storage Cluster
+==============================
+
+This guide describes installing Ceph packages manually. This procedure
+is only for users who are not installing with a deployment tool such as
+``cephadm``, ``chef``, ``juju``, etc.
+
+
+Installing with APT
+===================
+
+Once you have added either release or development packages to APT, you should
+update APT's database and install Ceph::
+
+ sudo apt-get update && sudo apt-get install ceph ceph-mds
+
+
+Installing with RPM
+===================
+
+To install Ceph with RPMs, execute the following steps:
+
+
+#. Install ``yum-plugin-priorities``. ::
+
+ sudo yum install yum-plugin-priorities
+
+#. Ensure ``/etc/yum/pluginconf.d/priorities.conf`` exists.
+
+#. Ensure ``priorities.conf`` enables the plugin. ::
+
+ [main]
+ enabled = 1
+
+#. Ensure your YUM ``ceph.repo`` entry includes ``priority=2``. See
+ `Get Packages`_ for details::
+
+ [ceph]
+ name=Ceph packages for $basearch
+ baseurl=https://download.ceph.com/rpm-{ceph-release}/{distro}/$basearch
+ enabled=1
+ priority=2
+ gpgcheck=1
+ gpgkey=https://download.ceph.com/keys/release.asc
+
+ [ceph-noarch]
+ name=Ceph noarch packages
+ baseurl=https://download.ceph.com/rpm-{ceph-release}/{distro}/noarch
+ enabled=1
+ priority=2
+ gpgcheck=1
+ gpgkey=https://download.ceph.com/keys/release.asc
+
+ [ceph-source]
+ name=Ceph source packages
+ baseurl=https://download.ceph.com/rpm-{ceph-release}/{distro}/SRPMS
+ enabled=0
+ priority=2
+ gpgcheck=1
+ gpgkey=https://download.ceph.com/keys/release.asc
+
+
+#. Install prerequisite packages::
+
+ sudo yum install snappy leveldb gdisk python-argparse gperftools-libs
+
+
+Once you have added either release or development packages, or added a
+``ceph.repo`` file to ``/etc/yum.repos.d``, you can install Ceph packages. ::
+
+ sudo yum install ceph
+
+
+Installing a Build
+==================
+
+If you build Ceph from source code, you may install Ceph in user space
+by executing the following::
+
+ sudo make install
+
+If you install Ceph locally, ``make`` will place the executables in
+``/usr/local/bin``. You may add the Ceph configuration file to the
+``/usr/local/bin`` directory to run Ceph from a single directory.
+
+.. _Get Packages: ../get-packages
diff --git a/doc/install/install-vm-cloud.rst b/doc/install/install-vm-cloud.rst
new file mode 100644
index 000000000..876422865
--- /dev/null
+++ b/doc/install/install-vm-cloud.rst
@@ -0,0 +1,132 @@
+=========================================
+ Install Virtualization for Block Device
+=========================================
+
+If you intend to use Ceph Block Devices and the Ceph Storage Cluster as a
+backend for Virtual Machines (VMs) or :term:`Cloud Platforms`, the QEMU/KVM and
+``libvirt`` packages are important for enabling VMs and cloud platforms.
+Examples of virtualization platforms include QEMU/KVM, Xen, VMware, LXC, and
+VirtualBox. Examples of Cloud Platforms include OpenStack, CloudStack, and
+OpenNebula.
+
+
+.. ditaa::
+
+ +---------------------------------------------------+
+ | libvirt |
+ +------------------------+--------------------------+
+ |
+ | configures
+ v
+ +---------------------------------------------------+
+ | QEMU |
+ +---------------------------------------------------+
+ | librbd |
+ +---------------------------------------------------+
+ | librados |
+ +------------------------+-+------------------------+
+ | OSDs | | Monitors |
+ +------------------------+ +------------------------+
+
+
+Install QEMU
+============
+
+QEMU KVM can interact with Ceph Block Devices via ``librbd``, which is an
+important feature for using Ceph with cloud platforms. Once you install QEMU,
+see `QEMU and Block Devices`_ for usage.
+
+
+Debian Packages
+---------------
+
+QEMU packages are incorporated into Ubuntu 12.04 Precise Pangolin and later
+versions. To install QEMU, execute the following::
+
+ sudo apt-get install qemu
+
+
+RPM Packages
+------------
+
+To install QEMU, execute the following:
+
+
+#. Update your repositories. ::
+
+ sudo yum update
+
+#. Install QEMU for Ceph. ::
+
+ sudo yum install qemu-kvm qemu-kvm-tools qemu-img
+
+#. Install additional QEMU packages (optional)::
+
+ sudo yum install qemu-guest-agent qemu-guest-agent-win32
+
+
+Building QEMU
+-------------
+
+To build QEMU from source, use the following procedure::
+
+ cd {your-development-directory}
+ git clone git://git.qemu.org/qemu.git
+ cd qemu
+ ./configure --enable-rbd
+ make; make install
+
+
+
+Install libvirt
+===============
+
+To use ``libvirt`` with Ceph, you must have a running Ceph Storage Cluster, and
+you must have installed and configured QEMU. See `Using libvirt with Ceph Block
+Device`_ for usage.
+
+
+Debian Packages
+---------------
+
+``libvirt`` packages are incorporated into Ubuntu 12.04 Precise Pangolin and
+later versions of Ubuntu. To install ``libvirt`` on these distributions,
+execute the following::
+
+ sudo apt-get update && sudo apt-get install libvirt-bin
+
+
+RPM Packages
+------------
+
+To use ``libvirt`` with a Ceph Storage Cluster, you must have a running Ceph
+Storage Cluster and you must also install a version of QEMU with ``rbd`` format
+support. See `Install QEMU`_ for details.
+
+
+``libvirt`` packages are incorporated into the recent CentOS/RHEL distributions.
+To install ``libvirt``, execute the following::
+
+ sudo yum install libvirt
+
+
+Building ``libvirt``
+--------------------
+
+To build ``libvirt`` from source, clone the ``libvirt`` repository and use
+`AutoGen`_ to generate the build. Then, execute ``make`` and ``make install`` to
+complete the installation. For example::
+
+ git clone git://libvirt.org/libvirt.git
+ cd libvirt
+ ./autogen.sh
+ make
+ sudo make install
+
+See `libvirt Installation`_ for details.
+
+
+
+.. _libvirt Installation: http://www.libvirt.org/compiling.html
+.. _AutoGen: http://www.gnu.org/software/autogen/
+.. _QEMU and Block Devices: ../../rbd/qemu-rbd
+.. _Using libvirt with Ceph Block Device: ../../rbd/libvirt
diff --git a/doc/install/manual-deployment.rst b/doc/install/manual-deployment.rst
new file mode 100644
index 000000000..9ad652634
--- /dev/null
+++ b/doc/install/manual-deployment.rst
@@ -0,0 +1,529 @@
+===================
+ Manual Deployment
+===================
+
+All Ceph clusters require at least one monitor, and at least as many OSDs as
+copies of an object stored on the cluster. Bootstrapping the initial monitor(s)
+is the first step in deploying a Ceph Storage Cluster. Monitor deployment also
+sets important criteria for the entire cluster, such as the number of replicas
+for pools, the number of placement groups per OSD, the heartbeat intervals,
+whether authentication is required, etc. Most of these values are set by
+default, so it's useful to know about them when setting up your cluster for
+production.
+
+We will set up a cluster with ``node1`` as the monitor node, and ``node2`` and
+``node3`` for OSD nodes.
+
+
+
+.. ditaa::
+
+ /------------------\ /----------------\
+ | Admin Node | | node1 |
+ | +-------->+ |
+ | | | cCCC |
+ \---------+--------/ \----------------/
+ |
+ | /----------------\
+ | | node2 |
+ +----------------->+ |
+ | | cCCC |
+ | \----------------/
+ |
+ | /----------------\
+ | | node3 |
+ +----------------->| |
+ | cCCC |
+ \----------------/
+
+
+Monitor Bootstrapping
+=====================
+
+Bootstrapping a monitor (which is, in theory, a Ceph Storage Cluster in
+itself) requires a number of things:
+
+- **Unique Identifier:** The ``fsid`` is a unique identifier for the cluster,
+ and stands for File System ID from the days when the Ceph Storage Cluster was
+ principally for the Ceph File System. Ceph now supports native interfaces,
+ block devices, and object storage gateway interfaces too, so ``fsid`` is a
+ bit of a misnomer.
+
+- **Cluster Name:** Ceph clusters have a cluster name, which is a simple string
+ without spaces. The default cluster name is ``ceph``, but you may specify
+ a different cluster name. Overriding the default cluster name is
+ especially useful when you are working with multiple clusters and you need to
+ clearly understand which cluster you are working with.
+
+ For example, when you run multiple clusters in a :ref:`multisite configuration <multisite>`,
+ the cluster name (e.g., ``us-west``, ``us-east``) identifies the cluster for
+ the current CLI session. **Note:** To identify the cluster name on the
+ command line interface, specify the Ceph configuration file with the
+ cluster name (e.g., ``ceph.conf``, ``us-west.conf``, ``us-east.conf``, etc.).
+ Also see CLI usage (``ceph --cluster {cluster-name}``).
+
+- **Monitor Name:** Each monitor instance within a cluster has a unique name.
+ In common practice, the Ceph Monitor name is the host name (we recommend one
+ Ceph Monitor per host, and no commingling of Ceph OSD Daemons with
+ Ceph Monitors). You may retrieve the short hostname with ``hostname -s``.
+
+- **Monitor Map:** Bootstrapping the initial monitor(s) requires you to
+ generate a monitor map. The monitor map requires the ``fsid``, the cluster
+ name (or uses the default), and at least one host name and its IP address.
+
+- **Monitor Keyring**: Monitors communicate with each other via a
+ secret key. You must generate a keyring with a monitor secret and provide
+ it when bootstrapping the initial monitor(s).
+
+- **Administrator Keyring**: To use the ``ceph`` CLI tools, you must have
+ a ``client.admin`` user. So you must generate the admin user and keyring,
+ and you must also add the ``client.admin`` user to the monitor keyring.
+
+The foregoing requirements do not imply the creation of a Ceph Configuration
+file. However, as a best practice, we recommend creating a Ceph configuration
+file and populating it with the ``fsid``, the ``mon initial members`` and the
+``mon host`` settings.
+
+You can get and set all of the monitor settings at runtime as well. However,
+a Ceph Configuration file may contain only those settings that override the
+default values. When you add settings to a Ceph configuration file, these
+settings override the default settings. Maintaining those settings in a
+Ceph configuration file makes it easier to maintain your cluster.
+
+The procedure is as follows:
+
+
+#. Log in to the initial monitor node(s)::
+
+ ssh {hostname}
+
+ For example::
+
+ ssh node1
+
+
+#. Ensure you have a directory for the Ceph configuration file. By default,
+ Ceph uses ``/etc/ceph``. When you install ``ceph``, the installer will
+ create the ``/etc/ceph`` directory automatically. ::
+
+ ls /etc/ceph
+
+
+#. Create a Ceph configuration file. By default, Ceph uses
+ ``ceph.conf``, where ``ceph`` reflects the cluster name. ::
+
+ sudo vim /etc/ceph/ceph.conf
+
+
+#. Generate a unique ID (i.e., ``fsid``) for your cluster. ::
+
+ uuidgen
+
+
+#. Add the unique ID to your Ceph configuration file. ::
+
+ fsid = {UUID}
+
+ For example::
+
+ fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
+
+
+#. Add the initial monitor(s) to your Ceph configuration file. ::
+
+ mon initial members = {hostname}[,{hostname}]
+
+ For example::
+
+ mon initial members = node1
+
+
+#. Add the IP address(es) of the initial monitor(s) to your Ceph configuration
+ file and save the file. ::
+
+ mon host = {ip-address}[,{ip-address}]
+
+ For example::
+
+ mon host = 192.168.0.1
+
+ **Note:** You may use IPv6 addresses instead of IPv4 addresses, but
+ you must set ``ms bind ipv6`` to ``true``. See `Network Configuration
+ Reference`_ for details about network configuration.
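+
+ For example, a minimal sketch of the relevant line in ``ceph.conf``::
+
+    ms bind ipv6 = true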
+
+#. Create a keyring for your cluster and generate a monitor secret key. ::
+
+ sudo ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
+
+
+#. Generate an administrator keyring, generate a ``client.admin`` user and add
+ the user to the keyring. ::
+
+ sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
+
+#. Generate a bootstrap-osd keyring, generate a ``client.bootstrap-osd`` user and add
+ the user to the keyring. ::
+
+ sudo ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
+
+#. Add the generated keys to the ``ceph.mon.keyring``. ::
+
+ sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
+ sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
+
+#. Change the owner for ``ceph.mon.keyring``. ::
+
+ sudo chown ceph:ceph /tmp/ceph.mon.keyring
+
+#. Generate a monitor map using the hostname(s), host IP address(es) and the FSID.
+ Save it as ``/tmp/monmap``::
+
+ monmaptool --create --add {hostname} {ip-address} --fsid {uuid} /tmp/monmap
+
+ For example::
+
+ monmaptool --create --add node1 192.168.0.1 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap
+
+
+#. Create a default data directory (or directories) on the monitor host(s). ::
+
+ sudo mkdir /var/lib/ceph/mon/{cluster-name}-{hostname}
+
+ For example::
+
+ sudo -u ceph mkdir /var/lib/ceph/mon/ceph-node1
+
+ See `Monitor Config Reference - Data`_ for details.
+
+#. Populate the monitor daemon(s) with the monitor map and keyring. ::
+
+ sudo -u ceph ceph-mon [--cluster {cluster-name}] --mkfs -i {hostname} --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
+
+ For example::
+
+ sudo -u ceph ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
+
+
+#. Consider settings for a Ceph configuration file. Common settings include
+ the following::
+
+ [global]
+ fsid = {cluster-id}
+ mon initial members = {hostname}[, {hostname}]
+ mon host = {ip-address}[, {ip-address}]
+ public network = {network}[, {network}]
+ cluster network = {network}[, {network}]
+ auth cluster required = cephx
+ auth service required = cephx
+ auth client required = cephx
+ osd journal size = {n}
+ osd pool default size = {n} # Write an object n times.
+ osd pool default min size = {n} # Allow writing n copies in a degraded state.
+ osd pool default pg num = {n}
+ osd pool default pgp num = {n}
+ osd crush chooseleaf type = {n}
+
+ In the foregoing example, the ``[global]`` section of the configuration might
+ look like this::
+
+ [global]
+ fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
+ mon initial members = node1
+ mon host = 192.168.0.1
+ public network = 192.168.0.0/24
+ auth cluster required = cephx
+ auth service required = cephx
+ auth client required = cephx
+ osd journal size = 1024
+ osd pool default size = 3
+ osd pool default min size = 2
+ osd pool default pg num = 333
+ osd pool default pgp num = 333
+ osd crush chooseleaf type = 1
+
+
+#. Start the monitor(s).
+
+ Start the service with systemd::
+
+ sudo systemctl start ceph-mon@node1
+
+#. Verify that the monitor is running. ::
+
+ sudo ceph -s
+
+ You should see output indicating that the monitor you started is up and
+ running. Until you add OSDs, you may also see a health warning indicating
+ that placement groups are stuck inactive. The output should look something
+ like this::
+
+ cluster:
+ id: a7f64266-0894-4f1e-a635-d0aeaca0e993
+ health: HEALTH_OK
+
+ services:
+ mon: 1 daemons, quorum node1
+ mgr: node1(active)
+ osd: 0 osds: 0 up, 0 in
+
+ data:
+ pools: 0 pools, 0 pgs
+ objects: 0 objects, 0 bytes
+ usage: 0 kB used, 0 kB / 0 kB avail
+ pgs:
+
+
+ **Note:** Once you add OSDs and start them, the placement group health errors
+ should disappear. See `Adding OSDs`_ for details.
+
+Manager daemon configuration
+============================
+
+On each node where you run a ceph-mon daemon, you should also set up a ceph-mgr daemon.
+
+See :ref:`mgr-administrator-guide`
+
+Adding OSDs
+===========
+
+Once you have your initial monitor(s) running, you should add OSDs. Your cluster
+cannot reach an ``active + clean`` state until you have enough OSDs to handle the
+number of copies of an object (e.g., ``osd pool default size = 2`` requires at
+least two OSDs). After bootstrapping your monitor, your cluster has a default
+CRUSH map; however, the CRUSH map doesn't have any Ceph OSD Daemons mapped to
+a Ceph Node.
+
+
+Short Form
+----------
+
+Ceph provides the ``ceph-volume`` utility, which can prepare a logical volume, disk, or partition
+for use with Ceph. The ``ceph-volume`` utility creates the OSD ID by
+incrementing the index. Additionally, ``ceph-volume`` will add the new OSD to the
+CRUSH map under the host for you. Execute ``ceph-volume -h`` for CLI details.
+The ``ceph-volume`` utility automates the steps of the `Long Form`_ below. To
+create the first two OSDs with the short form procedure, execute the following
+on ``node2`` and ``node3``:
+
+bluestore
+^^^^^^^^^
+#. Create the OSD. ::
+
+ ssh {node-name}
+ sudo ceph-volume lvm create --data {data-path}
+
+ For example::
+
+ ssh node1
+ sudo ceph-volume lvm create --data /dev/hdd1
+
+Alternatively, the creation process can be split in two phases (prepare, and
+activate):
+
+#. Prepare the OSD. ::
+
+ ssh {node-name}
+ sudo ceph-volume lvm prepare --data {data-path}
+
+ For example::
+
+ ssh node1
+ sudo ceph-volume lvm prepare --data /dev/hdd1
+
+ Once prepared, the ``ID`` and ``FSID`` of the prepared OSD are required for
+ activation. These can be obtained by listing OSDs in the current server::
+
+ sudo ceph-volume lvm list
+
+#. Activate the OSD::
+
+ sudo ceph-volume lvm activate {ID} {FSID}
+
+ For example::
+
+ sudo ceph-volume lvm activate 0 a7f64266-0894-4f1e-a635-d0aeaca0e993
+
+
+filestore
+^^^^^^^^^
+#. Create the OSD. ::
+
+ ssh {node-name}
+ sudo ceph-volume lvm create --filestore --data {data-path} --journal {journal-path}
+
+ For example::
+
+ ssh node1
+ sudo ceph-volume lvm create --filestore --data /dev/hdd1 --journal /dev/hdd2
+
+Alternatively, the creation process can be split in two phases (prepare, and
+activate):
+
+#. Prepare the OSD. ::
+
+ ssh {node-name}
+ sudo ceph-volume lvm prepare --filestore --data {data-path} --journal {journal-path}
+
+ For example::
+
+ ssh node1
+ sudo ceph-volume lvm prepare --filestore --data /dev/hdd1 --journal /dev/hdd2
+
+ Once prepared, the ``ID`` and ``FSID`` of the prepared OSD are required for
+ activation. These can be obtained by listing OSDs in the current server::
+
+ sudo ceph-volume lvm list
+
+#. Activate the OSD::
+
+ sudo ceph-volume lvm activate --filestore {ID} {FSID}
+
+ For example::
+
+ sudo ceph-volume lvm activate --filestore 0 a7f64266-0894-4f1e-a635-d0aeaca0e993
+
+
+Long Form
+---------
+
+Without the benefit of any helper utilities, create an OSD and add it to the
+cluster and CRUSH map with the following procedure. To create the first two
+OSDs with the long form procedure, execute the following steps for each OSD.
+
+.. note:: This procedure does not describe deployment on top of dm-crypt
+ making use of the dm-crypt 'lockbox'.
+
+#. Connect to the OSD host and become root. ::
+
+ ssh {node-name}
+ sudo bash
+
+#. Generate a UUID for the OSD. ::
+
+ UUID=$(uuidgen)
+
+#. Generate a cephx key for the OSD. ::
+
+ OSD_SECRET=$(ceph-authtool --gen-print-key)
+
+#. Create the OSD. Note that an OSD ID can be provided as an
+ additional argument to ``ceph osd new`` if you need to reuse a
+ previously destroyed OSD ID. We assume that the
+ ``client.bootstrap-osd`` key is present on the machine. You may
+ alternatively execute this command as ``client.admin`` on a
+ different host where that key is present::
+
+ ID=$(echo "{\"cephx_secret\": \"$OSD_SECRET\"}" | \
+ ceph osd new $UUID -i - \
+ -n client.bootstrap-osd -k /var/lib/ceph/bootstrap-osd/ceph.keyring)
+
+ It is also possible to include a ``crush_device_class`` property in the JSON
+ to set an initial class other than the default (``ssd`` or ``hdd`` based on
+ the auto-detected device type).
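+
+ For example, a sketch of the same command with an explicit device class (the
+ class name ``nvme`` is used here only as an illustration)::
+
+    ID=$(echo "{\"cephx_secret\": \"$OSD_SECRET\", \"crush_device_class\": \"nvme\"}" | \
+       ceph osd new $UUID -i - \
+       -n client.bootstrap-osd -k /var/lib/ceph/bootstrap-osd/ceph.keyring)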
+
+#. Create the default directory on your new OSD. ::
+
+ mkdir /var/lib/ceph/osd/ceph-$ID
+
+#. If the OSD is for a drive other than the OS drive, prepare it
+ for use with Ceph, and mount it to the directory you just created. ::
+
+ mkfs.xfs /dev/{DEV}
+ mount /dev/{DEV} /var/lib/ceph/osd/ceph-$ID
+
+#. Write the secret to the OSD keyring file. ::
+
+ ceph-authtool --create-keyring /var/lib/ceph/osd/ceph-$ID/keyring \
+ --name osd.$ID --add-key $OSD_SECRET
+
+#. Initialize the OSD data directory. ::
+
+ ceph-osd -i $ID --mkfs --osd-uuid $UUID
+
+#. Fix ownership. ::
+
+ chown -R ceph:ceph /var/lib/ceph/osd/ceph-$ID
+
+#. After you add an OSD to Ceph, the OSD is in your configuration. However,
+ it is not yet running. You must start
+ your new OSD before it can begin receiving data.
+
+ For modern systemd distributions::
+
+ systemctl enable ceph-osd@$ID
+ systemctl start ceph-osd@$ID
+
+ For example::
+
+ systemctl enable ceph-osd@12
+ systemctl start ceph-osd@12
+
+
+Adding MDS
+==========
+
+In the below instructions, ``{id}`` is an arbitrary name, such as the hostname of the machine.
+
+#. Create the mds data directory.::
+
+ mkdir -p /var/lib/ceph/mds/{cluster-name}-{id}
+
+#. Create a keyring.::
+
+ ceph-authtool --create-keyring /var/lib/ceph/mds/{cluster-name}-{id}/keyring --gen-key -n mds.{id}
+
+#. Import the keyring and set caps.::
+
+ ceph auth add mds.{id} osd "allow rwx" mds "allow *" mon "allow profile mds" -i /var/lib/ceph/mds/{cluster}-{id}/keyring
+
+#. Add to ceph.conf.::
+
+ [mds.{id}]
+ host = {id}
+
+#. Start the daemon the manual way.::
+
+ ceph-mds --cluster {cluster-name} -i {id} -m {mon-hostname}:{mon-port} [-f]
+
+#. Start the daemon the right way (using ceph.conf entry).::
+
+ service ceph start
+
+#. If starting the daemon fails with this error::
+
+ mds.-1.0 ERROR: failed to authenticate: (22) Invalid argument
+
+ Then make sure that you do not have a keyring set in the ``[global]`` section of ceph.conf; move it to the ``[client]`` section, or add a keyring setting specific to this MDS daemon. Also verify that the key in the MDS data directory matches the output of ``ceph auth get mds.{id}``.
+
+#. Now you are ready to `create a Ceph file system`_.
+
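+For example, to perform the key comparison mentioned in the troubleshooting
+step above, you could run the following and check that both commands report
+the same key::
+
+    cat /var/lib/ceph/mds/{cluster-name}-{id}/keyring
+    ceph auth get mds.{id}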
+
+Summary
+=======
+
+Once you have your monitor and two OSDs up and running, you can watch the
+placement groups peer by executing the following::
+
+ ceph -w
+
+To view the tree, execute the following::
+
+ ceph osd tree
+
+You should see output that looks something like this::
+
+ # id weight type name up/down reweight
+ -1 2 root default
+ -2 2 host node1
+ 0 1 osd.0 up 1
+ -3 1 host node2
+ 1 1 osd.1 up 1
+
+To add (or remove) additional monitors, see `Add/Remove Monitors`_.
+To add (or remove) additional Ceph OSD Daemons, see `Add/Remove OSDs`_.
+
+
+.. _Add/Remove Monitors: ../../rados/operations/add-or-rm-mons
+.. _Add/Remove OSDs: ../../rados/operations/add-or-rm-osds
+.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
+.. _Monitor Config Reference - Data: ../../rados/configuration/mon-config-ref#data
+.. _create a Ceph file system: ../../cephfs/createfs
diff --git a/doc/install/manual-freebsd-deployment.rst b/doc/install/manual-freebsd-deployment.rst
new file mode 100644
index 000000000..f597574f4
--- /dev/null
+++ b/doc/install/manual-freebsd-deployment.rst
@@ -0,0 +1,575 @@
+==============================
+ Manual Deployment on FreeBSD
+==============================
+
+This is largely a copy of the regular Manual Deployment guide, with FreeBSD
+specifics. The difference lies in two areas: the underlying disk format, and
+the way the tools are used.
+
+All Ceph clusters require at least one monitor, and at least as many OSDs as
+copies of an object stored on the cluster. Bootstrapping the initial monitor(s)
+is the first step in deploying a Ceph Storage Cluster. Monitor deployment also
+sets important criteria for the entire cluster, such as the number of replicas
+for pools, the number of placement groups per OSD, the heartbeat intervals,
+whether authentication is required, etc. Most of these values are set by
+default, so it's useful to know about them when setting up your cluster for
+production.
+
+We will set up a cluster with ``node1`` as the monitor node, and ``node2`` and
+``node3`` for OSD nodes.
+
+
+
+.. ditaa::
+
+ /------------------\ /----------------\
+ | Admin Node | | node1 |
+ | +-------->+ |
+ | | | cCCC |
+ \---------+--------/ \----------------/
+ |
+ | /----------------\
+ | | node2 |
+ +----------------->+ |
+ | | cCCC |
+ | \----------------/
+ |
+ | /----------------\
+ | | node3 |
+ +----------------->| |
+ | cCCC |
+ \----------------/
+
+
+
+Disk layout on FreeBSD
+======================
+
+The current implementation works on ZFS pools:
+
+* All Ceph data is created in /var/lib/ceph
+* Log files go into /var/log/ceph
+* PID files go into /var/log/run
+* One ZFS pool is allocated per OSD, like::
+
+ gpart create -s GPT ada1
+ gpart add -t freebsd-zfs -l osd.1 ada1
+ zpool create -m /var/lib/ceph/osd/osd.1 osd.1 gpt/osd.1
+
+* A cache and a log (ZIL) device can be attached.
+  Please note that these are different from the Ceph journals. Cache and log are
+  totally transparent to Ceph; they help the file system keep the system
+  consistent and improve performance.
+  Assuming that ada2 is an SSD::
+
+    gpart create -s GPT ada2
+    gpart add -t freebsd-zfs -l osd.1-log -s 1G ada2
+    zpool add osd.1 log gpt/osd.1-log
+    gpart add -t freebsd-zfs -l osd.1-cache -s 10G ada2
+    zpool add osd.1 cache gpt/osd.1-cache
+
+* Note: *UFS2 does not allow large xattrs*
+
+
+Configuration
+-------------
+
+As per the FreeBSD defaults, add-on software goes into ``/usr/local/``. This
+means that the default location of the Ceph configuration file is
+``/usr/local/etc/ceph/ceph.conf``. The smartest thing to do is to create a
+symlink from ``/etc/ceph`` to ``/usr/local/etc/ceph``::
+
+ ln -s /usr/local/etc/ceph /etc/ceph
+
+A sample file is provided in ``/usr/local/share/doc/ceph/sample.ceph.conf``.
+Note that ``/usr/local/etc/ceph/ceph.conf`` will be found by most tools;
+linking it to ``/etc/ceph/ceph.conf`` also helps with scripts found in extra
+tools, scripts, and/or mailing lists.
+
+Monitor Bootstrapping
+=====================
+
+Bootstrapping a monitor (a Ceph Storage Cluster, in theory) requires
+a number of things:
+
+- **Unique Identifier:** The ``fsid`` is a unique identifier for the cluster,
+ and stands for File System ID from the days when the Ceph Storage Cluster was
+ principally for the Ceph File System. Ceph now supports native interfaces,
+ block devices, and object storage gateway interfaces too, so ``fsid`` is a
+ bit of a misnomer.
+
+- **Cluster Name:** Ceph clusters have a cluster name, which is a simple string
+ without spaces. The default cluster name is ``ceph``, but you may specify
+ a different cluster name. Overriding the default cluster name is
+ especially useful when you are working with multiple clusters and you need to
+  clearly understand which cluster you are working with.
+
+ For example, when you run multiple clusters in a :ref:`multisite configuration <multisite>`,
+ the cluster name (e.g., ``us-west``, ``us-east``) identifies the cluster for
+ the current CLI session. **Note:** To identify the cluster name on the
+  command line interface, specify a Ceph configuration file with the
+ cluster name (e.g., ``ceph.conf``, ``us-west.conf``, ``us-east.conf``, etc.).
+ Also see CLI usage (``ceph --cluster {cluster-name}``).
+
+- **Monitor Name:** Each monitor instance within a cluster has a unique name.
+ In common practice, the Ceph Monitor name is the host name (we recommend one
+ Ceph Monitor per host, and no commingling of Ceph OSD Daemons with
+ Ceph Monitors). You may retrieve the short hostname with ``hostname -s``.
+
+- **Monitor Map:** Bootstrapping the initial monitor(s) requires you to
+ generate a monitor map. The monitor map requires the ``fsid``, the cluster
+ name (or uses the default), and at least one host name and its IP address.
+
+- **Monitor Keyring**: Monitors communicate with each other via a
+ secret key. You must generate a keyring with a monitor secret and provide
+ it when bootstrapping the initial monitor(s).
+
+- **Administrator Keyring**: To use the ``ceph`` CLI tools, you must have
+ a ``client.admin`` user. So you must generate the admin user and keyring,
+ and you must also add the ``client.admin`` user to the monitor keyring.
+
+The foregoing requirements do not imply the creation of a Ceph Configuration
+file. However, as a best practice, we recommend creating a Ceph configuration
+file and populating it with the ``fsid``, the ``mon initial members`` and the
+``mon host`` settings.
+
+You can get and set all of the monitor settings at runtime as well. However,
+a Ceph Configuration file may contain only those settings that override the
+default values. When you add settings to a Ceph configuration file, these
+settings override the default settings. Maintaining those settings in a
+Ceph configuration file makes it easier to maintain your cluster.
+
+The procedure is as follows:
+
+
+#. Log in to the initial monitor node(s)::
+
+ ssh {hostname}
+
+ For example::
+
+ ssh node1
+
+
+#. Ensure you have a directory for the Ceph configuration file. By default,
+ Ceph uses ``/etc/ceph``. When you install ``ceph``, the installer will
+ create the ``/etc/ceph`` directory automatically. ::
+
+ ls /etc/ceph
+
+#. Create a Ceph configuration file. By default, Ceph uses
+ ``ceph.conf``, where ``ceph`` reflects the cluster name. ::
+
+ sudo vim /etc/ceph/ceph.conf
+
+
+#. Generate a unique ID (i.e., ``fsid``) for your cluster. ::
+
+ uuidgen
+
+
+#. Add the unique ID to your Ceph configuration file. ::
+
+ fsid = {UUID}
+
+ For example::
+
+ fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
+
+
+#. Add the initial monitor(s) to your Ceph configuration file. ::
+
+ mon initial members = {hostname}[,{hostname}]
+
+ For example::
+
+ mon initial members = node1
+
+
+#. Add the IP address(es) of the initial monitor(s) to your Ceph configuration
+ file and save the file. ::
+
+ mon host = {ip-address}[,{ip-address}]
+
+ For example::
+
+ mon host = 192.168.0.1
+
+ **Note:** You may use IPv6 addresses instead of IPv4 addresses, but
+ you must set ``ms bind ipv6`` to ``true``. See `Network Configuration
+ Reference`_ for details about network configuration.
+
+#. Create a keyring for your cluster and generate a monitor secret key. ::
+
+ ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
+
+
+#. Generate an administrator keyring, generate a ``client.admin`` user and add
+ the user to the keyring. ::
+
+ sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
+
+
+#. Add the ``client.admin`` key to the ``ceph.mon.keyring``. ::
+
+ ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
+
+
+#. Generate a monitor map using the hostname(s), host IP address(es) and the FSID.
+ Save it as ``/tmp/monmap``::
+
+ monmaptool --create --add {hostname} {ip-address} --fsid {uuid} /tmp/monmap
+
+ For example::
+
+ monmaptool --create --add node1 192.168.0.1 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap
+
+
+#. Create a default data directory (or directories) on the monitor host(s). ::
+
+ sudo mkdir /var/lib/ceph/mon/{cluster-name}-{hostname}
+
+ For example::
+
+ sudo mkdir /var/lib/ceph/mon/ceph-node1
+
+ See `Monitor Config Reference - Data`_ for details.
+
+#. Populate the monitor daemon(s) with the monitor map and keyring. ::
+
+ sudo -u ceph ceph-mon [--cluster {cluster-name}] --mkfs -i {hostname} --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
+
+ For example::
+
+ sudo -u ceph ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
+
+
+#. Consider settings for a Ceph configuration file. Common settings include
+ the following::
+
+ [global]
+ fsid = {cluster-id}
+ mon initial members = {hostname}[, {hostname}]
+ mon host = {ip-address}[, {ip-address}]
+ public network = {network}[, {network}]
+ cluster network = {network}[, {network}]
+ auth cluster required = cephx
+ auth service required = cephx
+ auth client required = cephx
+ osd journal size = {n}
+ osd pool default size = {n} # Write an object n times.
+ osd pool default min size = {n} # Allow writing n copy in a degraded state.
+ osd pool default pg num = {n}
+ osd pool default pgp num = {n}
+ osd crush chooseleaf type = {n}
+
+ In the foregoing example, the ``[global]`` section of the configuration might
+ look like this::
+
+ [global]
+ fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
+ mon initial members = node1
+ mon host = 192.168.0.1
+ public network = 192.168.0.0/24
+ auth cluster required = cephx
+ auth service required = cephx
+ auth client required = cephx
+ osd journal size = 1024
+ osd pool default size = 3
+ osd pool default min size = 2
+ osd pool default pg num = 333
+ osd pool default pgp num = 333
+ osd crush chooseleaf type = 1
+
+#. Touch the ``done`` file.
+
+ Mark that the monitor is created and ready to be started::
+
+ sudo touch /var/lib/ceph/mon/ceph-node1/done
+
+#. For FreeBSD, an entry for every monitor needs to be added to the config
+   file. (This requirement will be removed in future releases.)
+
+ The entry should look like::
+
+ [mon]
+ [mon.node1]
+      host = node1 # this name must be resolvable
+
+
+#. Start the monitor(s).
+
+ For Ubuntu, use Upstart::
+
+ sudo start ceph-mon id=node1 [cluster={cluster-name}]
+
+ In this case, to allow the start of the daemon at each reboot you
+ must create two empty files like this::
+
+ sudo touch /var/lib/ceph/mon/{cluster-name}-{hostname}/upstart
+
+ For example::
+
+ sudo touch /var/lib/ceph/mon/ceph-node1/upstart
+
+ For Debian/CentOS/RHEL, use sysvinit::
+
+ sudo /etc/init.d/ceph start mon.node1
+
+ For FreeBSD we use the rc.d init scripts (called bsdrc in Ceph)::
+
+      sudo service ceph start mon.node1
+
+   For this to work, /etc/rc.conf also needs an entry to enable ceph::
+
+      echo 'ceph_enable="YES"' >> /etc/rc.conf
+
+
+#. Verify that Ceph created the default pools. ::
+
+ ceph osd lspools
+
+ You should see output like this::
+
+ 0 data
+ 1 metadata
+ 2 rbd
+
+#. Verify that the monitor is running. ::
+
+ ceph -s
+
+ You should see output that the monitor you started is up and running, and
+ you should see a health error indicating that placement groups are stuck
+ inactive. It should look something like this::
+
+ cluster a7f64266-0894-4f1e-a635-d0aeaca0e993
+ health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
+ monmap e1: 1 mons at {node1=192.168.0.1:6789/0}, election epoch 1, quorum 0 node1
+ osdmap e1: 0 osds: 0 up, 0 in
+ pgmap v2: 192 pgs, 3 pools, 0 bytes data, 0 objects
+ 0 kB used, 0 kB / 0 kB avail
+ 192 creating
+
+ **Note:** Once you add OSDs and start them, the placement group health errors
+ should disappear. See the next section for details.
+
+.. _freebsd_adding_osds:
+
+Adding OSDs
+===========
+
+Once you have your initial monitor(s) running, you should add OSDs. Your cluster
+cannot reach an ``active + clean`` state until you have enough OSDs to handle the
+number of copies of an object (e.g., ``osd pool default size = 2`` requires at
+least two OSDs). After bootstrapping your monitor, your cluster has a default
+CRUSH map; however, the CRUSH map doesn't have any Ceph OSD Daemons mapped to
+a Ceph Node.
+
+
+Long Form
+---------
+
+Without the benefit of any helper utilities, create an OSD and add it to the
+cluster and CRUSH map with the following procedure. To create the first two
+OSDs with the long form procedure, execute the following on ``node2`` and
+``node3``:
+
+#. Connect to the OSD host. ::
+
+ ssh {node-name}
+
+#. Generate a UUID for the OSD. ::
+
+ uuidgen
+
+
+#. Create the OSD. If no UUID is given, it will be set automatically when the
+   OSD starts up. The following command will output the OSD number, which you
+   will need for subsequent steps (see the example after this procedure). ::
+
+ ceph osd create [{uuid} [{id}]]
+
+
+#. Create the default directory on your new OSD. ::
+
+ ssh {new-osd-host}
+ sudo mkdir /var/lib/ceph/osd/{cluster-name}-{osd-number}
+
+   Refer to the ZFS instructions in the *Disk layout on FreeBSD* section above
+   to create and mount this directory on FreeBSD.
+
+
+#. If the OSD is for a drive other than the OS drive, prepare it
+ for use with Ceph, and mount it to the directory you just created.
+
+
+#. Initialize the OSD data directory. ::
+
+ ssh {new-osd-host}
+ sudo ceph-osd -i {osd-num} --mkfs --mkkey --osd-uuid [{uuid}]
+
+   The directory must be empty before you can run ``ceph-osd`` with the
+   ``--mkkey`` option. In addition, ``ceph-osd`` requires the ``--cluster``
+   option when a custom cluster name is used.
+
+
+#. Register the OSD authentication key. The value of ``ceph`` for
+ ``ceph-{osd-num}`` in the path is the ``$cluster-$id``. If your
+ cluster name differs from ``ceph``, use your cluster name instead.::
+
+ sudo ceph auth add osd.{osd-num} osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/{cluster-name}-{osd-num}/keyring
+
+
+#. Add your Ceph Node to the CRUSH map. ::
+
+ ceph [--cluster {cluster-name}] osd crush add-bucket {hostname} host
+
+ For example::
+
+ ceph osd crush add-bucket node1 host
+
+
+#. Place the Ceph Node under the root ``default``. ::
+
+ ceph osd crush move node1 root=default
+
+
+#. Add the OSD to the CRUSH map so that it can begin receiving data. You may
+ also decompile the CRUSH map, add the OSD to the device list, add the host as a
+ bucket (if it's not already in the CRUSH map), add the device as an item in the
+ host, assign it a weight, recompile it and set it. ::
+
+ ceph [--cluster {cluster-name}] osd crush add {id-or-name} {weight} [{bucket-type}={bucket-name} ...]
+
+ For example::
+
+ ceph osd crush add osd.0 1.0 host=node1
+
+
+#. After you add an OSD to Ceph, the OSD is in your configuration. However,
+ it is not yet running. The OSD is ``down`` and ``in``. You must start
+ your new OSD before it can begin receiving data.
+
+ For Ubuntu, use Upstart::
+
+ sudo start ceph-osd id={osd-num} [cluster={cluster-name}]
+
+ For example::
+
+ sudo start ceph-osd id=0
+ sudo start ceph-osd id=1
+
+ For Debian/CentOS/RHEL, use sysvinit::
+
+ sudo /etc/init.d/ceph start osd.{osd-num} [--cluster {cluster-name}]
+
+ For example::
+
+ sudo /etc/init.d/ceph start osd.0
+ sudo /etc/init.d/ceph start osd.1
+
+ In this case, to allow the start of the daemon at each reboot you
+ must create an empty file like this::
+
+ sudo touch /var/lib/ceph/osd/{cluster-name}-{osd-num}/sysvinit
+
+ For example::
+
+ sudo touch /var/lib/ceph/osd/ceph-0/sysvinit
+ sudo touch /var/lib/ceph/osd/ceph-1/sysvinit
+
+ Once you start your OSD, it is ``up`` and ``in``.
+
+   For FreeBSD, use the rc.d init scripts.
+
+ After adding the OSD to ``ceph.conf``::
+
+ sudo service ceph start osd.{osd-num}
+
+ For example::
+
+ sudo service ceph start osd.0
+ sudo service ceph start osd.1
+
+ In this case, to allow the start of the daemon at each reboot you
+ must create an empty file like this::
+
+ sudo touch /var/lib/ceph/osd/{cluster-name}-{osd-num}/bsdrc
+
+ For example::
+
+ sudo touch /var/lib/ceph/osd/ceph-0/bsdrc
+ sudo touch /var/lib/ceph/osd/ceph-1/bsdrc
+
+ Once you start your OSD, it is ``up`` and ``in``.
+
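+As a convenience, the OSD number printed by ``ceph osd create`` (see the
+*Create the OSD* step above) can be captured in a shell variable and reused in
+the later steps. A minimal sketch could look like this::
+
+    UUID=$(uuidgen)
+    OSD_ID=$(ceph osd create $UUID)
+    echo "created osd.$OSD_ID with uuid $UUID"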
+
+
+Adding MDS
+==========
+
+In the instructions below, ``{id}`` is an arbitrary name, such as the hostname of the machine.
+
+#. Create the mds data directory.::
+
+ mkdir -p /var/lib/ceph/mds/{cluster-name}-{id}
+
+#. Create a keyring.::
+
+ ceph-authtool --create-keyring /var/lib/ceph/mds/{cluster-name}-{id}/keyring --gen-key -n mds.{id}
+
+#. Import the keyring and set caps.::
+
+ ceph auth add mds.{id} osd "allow rwx" mds "allow *" mon "allow profile mds" -i /var/lib/ceph/mds/{cluster}-{id}/keyring
+
+#. Add to ceph.conf.::
+
+ [mds.{id}]
+ host = {id}
+
+#. Start the daemon the manual way.::
+
+ ceph-mds --cluster {cluster-name} -i {id} -m {mon-hostname}:{mon-port} [-f]
+
+#. Start the daemon the right way (using ceph.conf entry).::
+
+ service ceph start
+
+#. If starting the daemon fails with this error::
+
+ mds.-1.0 ERROR: failed to authenticate: (22) Invalid argument
+
+   Then make sure you do not have a keyring set in the ``[global]`` section of
+   ceph.conf; move it to the ``[client]`` section, or add a keyring setting
+   specific to this MDS daemon. Also verify that the key in the MDS data
+   directory matches the output of ``ceph auth get mds.{id}``.
+
+#. Now you are ready to `create a Ceph file system`_.
+
+
+Summary
+=======
+
+Once you have your monitor and two OSDs up and running, you can watch the
+placement groups peer by executing the following::
+
+ ceph -w
+
+To view the tree, execute the following::
+
+ ceph osd tree
+
+You should see output that looks something like this::
+
+ # id weight type name up/down reweight
+ -1 2 root default
+ -2 2 host node1
+ 0 1 osd.0 up 1
+ -3 1 host node2
+ 1 1 osd.1 up 1
+
+To add (or remove) additional monitors, see `Add/Remove Monitors`_.
+To add (or remove) additional Ceph OSD Daemons, see `Add/Remove OSDs`_.
+
+
+.. _Add/Remove Monitors: ../../rados/operations/add-or-rm-mons
+.. _Add/Remove OSDs: ../../rados/operations/add-or-rm-osds
+.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
+.. _Monitor Config Reference - Data: ../../rados/configuration/mon-config-ref#data
+.. _create a Ceph file system: ../../cephfs/createfs
diff --git a/doc/install/mirrors.rst b/doc/install/mirrors.rst
new file mode 100644
index 000000000..35df93f62
--- /dev/null
+++ b/doc/install/mirrors.rst
@@ -0,0 +1,67 @@
+=============
+ Ceph Mirrors
+=============
+
+For an improved user experience, multiple mirrors for Ceph are available around
+the world.
+
+These mirrors are kindly sponsored by various companies who want to support the
+Ceph project.
+
+
+Locations
+=========
+
+These mirrors are available in the following locations:
+
+- **EU: Netherlands**: http://eu.ceph.com/
+- **AU: Australia**: http://au.ceph.com/
+- **SE: Sweden**: http://se.ceph.com/
+- **DE: Germany**: http://de.ceph.com/
+- **HK: Hong Kong**: http://hk.ceph.com/
+- **FR: France**: http://fr.ceph.com/
+- **UK: UK**: http://uk.ceph.com
+- **US-East: US East Coast**: http://us-east.ceph.com/
+- **US-Mid-West: Chicago**: http://mirrors.gigenet.com/ceph/
+- **US-West: US West Coast**: http://us-west.ceph.com/
+- **CN: China**: http://mirrors.ustc.edu.cn/ceph/
+
+You can replace all download.ceph.com URLs with any of the mirrors, for example:
+
+- http://download.ceph.com/tarballs/
+- http://download.ceph.com/debian-hammer/
+- http://download.ceph.com/rpm-hammer/
+
+Change this to:
+
+- http://eu.ceph.com/tarballs/
+- http://eu.ceph.com/debian-hammer/
+- http://eu.ceph.com/rpm-hammer/
+
+
+Mirroring
+=========
+
+You can easily mirror Ceph yourself using a Bash script and rsync. An
+easy-to-use script can be found at `Github`_.
+
+When mirroring Ceph, please keep the following guidelines in mind:
+
+- Choose a mirror close to you
+- Do not sync at an interval shorter than 3 hours
+- Avoid syncing at minute 0 of the hour; pick another minute between 1 and 59
+  (see the sample cron entry below)
+
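+For example, a cron entry that follows these guidelines could look like the one
+below; the source mirror, rsync module name, and local path are illustrative
+assumptions only::
+
+    # sync every 4 hours, at minute 17 rather than at the top of the hour
+    17 */4 * * * rsync -avrt --delete eu.ceph.com::ceph /srv/mirrors/ceph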
+
+Becoming a mirror
+=================
+
+If you want to provide a public mirror for other users of Ceph, you can opt to
+become an official mirror.
+
+To make sure all mirrors meet the same standards, some requirements have been
+set for all mirrors. These can be found on `Github`_.
+
+If you want to apply for an official mirror, please contact the ceph-users mailing list.
+
+
+.. _Github: https://github.com/ceph/ceph/tree/master/mirroring
diff --git a/doc/install/windows-basic-config.rst b/doc/install/windows-basic-config.rst
new file mode 100644
index 000000000..0fe8a1b1b
--- /dev/null
+++ b/doc/install/windows-basic-config.rst
@@ -0,0 +1,48 @@
+:orphan:
+
+===========================
+Windows basic configuration
+===========================
+
+This page describes the minimum Ceph configuration required for using the
+client components on Windows.
+
+ceph.conf
+=========
+
+The default location for the ``ceph.conf`` file on Windows is
+``%ProgramData%\ceph\ceph.conf``, which usually expands to
+``C:\ProgramData\ceph\ceph.conf``.
+
+A sample configuration is shown below. Please fill in the monitor addresses
+accordingly.
+
+.. code:: ini
+
+ [global]
+ log to stderr = true
+ ; Uncomment the following in order to use the Windows Event Log
+ ; log to syslog = true
+
+ run dir = C:/ProgramData/ceph/out
+ crash dir = C:/ProgramData/ceph/out
+
+ ; Use the following to change the cephfs client log level
+ ; debug client = 2
+ [client]
+ keyring = C:/ProgramData/ceph/keyring
+ ; log file = C:/ProgramData/ceph/out/$name.$pid.log
+ admin socket = C:/ProgramData/ceph/out/$name.$pid.asok
+
+ ; client_permissions = true
+ ; client_mount_uid = 1000
+ ; client_mount_gid = 1000
+ [global]
+ mon host = <ceph_monitor_addresses>
+
+Don't forget to also copy your keyring file to the specified location and make
+sure that the configured directories exist (e.g. ``C:\ProgramData\ceph\out``).
+
+Please use slashes ``/`` instead of backslashes ``\`` as path separators
+within ``ceph.conf``.
+
diff --git a/doc/install/windows-install.rst b/doc/install/windows-install.rst
new file mode 100644
index 000000000..fb5b5b6f5
--- /dev/null
+++ b/doc/install/windows-install.rst
@@ -0,0 +1,88 @@
+:orphan:
+
+==========================
+Installing Ceph on Windows
+==========================
+
+The Ceph client tools and libraries can be used natively on Windows. This
+avoids the need for additional layers such as iSCSI gateways or SMB shares,
+drastically improving performance.
+
+Prerequisites
+=============
+
+Supported platforms
+-------------------
+
+Windows Server 2019 and Windows Server 2016 are supported. Earlier Windows
+Server releases, as well as Windows client versions such as Windows 10, might
+work but haven't been tested.
+
+Windows Server 2016 does not provide Unix sockets, so some commands might be
+unavailable on that platform.
+
+Secure boot
+-----------
+
+The ``WNBD`` driver hasn't been signed by Microsoft, which means that Secure Boot
+must be disabled.
+
+Dokany
+------
+
+In order to mount Ceph filesystems, ``ceph-dokan`` requires Dokany to be
+installed. You may fetch the installer as well as the source code from the
+Dokany Github repository: https://github.com/dokan-dev/dokany/releases
+
+The minimum supported Dokany version is 1.3.1. At the time of writing,
+Dokany 2.0 is in beta and is unsupported.
+
+Unlike ``WNBD``, Dokany isn't included in the Ceph MSI installer.
+
+MSI installer
+=============
+
+Using the MSI installer is the recommended way of installing Ceph on Windows.
+It can be downloaded from here: https://cloudbase.it/ceph-for-windows/
+
+As mentioned earlier, the Ceph installer does not include Dokany, which has
+to be installed separately.
+
+A server reboot is required after uninstalling the driver, otherwise subsequent
+install attempts may fail.
+
+The following project allows building the MSI installer:
+https://github.com/cloudbase/ceph-windows-installer. It can either use prebuilt
+Ceph and WNBD binaries or compile them from scratch.
+
+Manual installation
+===================
+
+The following document describes the build process and manual installation:
+https://github.com/ceph/ceph/blob/master/README.windows.rst
+
+Configuration
+=============
+
+Please check the `Windows configuration sample`_ to get started.
+
+You'll also need a keyring file. The `General CephFS Prerequisites`_ page provides a
+simple example, showing how a new CephX user can be created and how its secret
+key can be retrieved.
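+
+As a rough sketch only (run on an existing admin node; the user name and
+capabilities below are illustrative, so check the linked pages for the
+recommended ones), creating such a user and printing its key could look like
+this::
+
+    ceph auth get-or-create client.windows mon 'allow r' osd 'allow rw' mds 'allow rw'
+    ceph auth get client.windows
+
+The resulting key can then be placed in the keyring file configured in
+``ceph.conf`` (e.g. ``C:/ProgramData/ceph/keyring``).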
+
+For more details on CephX user management, see the `Client Authentication`_
+page and :ref:`User Management <user-management>`.
+
+Further reading
+===============
+
+* `RBD Windows documentation`_
+* `CephFS Windows documentation`_
+* `Windows troubleshooting`_
+
+.. _CephFS Windows documentation: ../../cephfs/ceph-dokan
+.. _Windows configuration sample: ../windows-basic-config
+.. _RBD Windows documentation: ../../rbd/rbd-windows/
+.. _Windows troubleshooting: ../windows-troubleshooting
+.. _General CephFS Prerequisites: ../../cephfs/mount-prerequisites
+.. _Client Authentication: ../../cephfs/client-auth
diff --git a/doc/install/windows-troubleshooting.rst b/doc/install/windows-troubleshooting.rst
new file mode 100644
index 000000000..355fd8803
--- /dev/null
+++ b/doc/install/windows-troubleshooting.rst
@@ -0,0 +1,96 @@
+:orphan:
+
+===============================
+Troubleshooting Ceph on Windows
+===============================
+
+MSI installer
+~~~~~~~~~~~~~
+
+The MSI source code can be consulted here:
+https://github.com/cloudbase/ceph-windows-installer
+
+The following command can be used to generate MSI logs::
+
+ msiexec.exe /i $msi_full_path /l*v! $log_file
+
+WNBD driver installation failures will be logged here: ``C:\Windows\inf\setupapi.dev.log``.
+A server reboot is required after uninstalling the driver, otherwise subsequent
+install attempts may fail.
+
+Wnbd
+~~~~
+
+For ``WNBD`` troubleshooting, please check this page: https://github.com/cloudbase/wnbd#troubleshooting
+
+Privileges
+~~~~~~~~~~
+
+Most ``rbd-wnbd`` and ``rbd device`` commands require privileged rights. Make
+sure to use an elevated PowerShell or CMD command prompt.
+
+Crash dumps
+~~~~~~~~~~~
+
+Userspace crash dumps can be placed at a configurable location and enabled for all
+applications or just predefined ones, as outlined here:
+https://docs.microsoft.com/en-us/windows/win32/wer/collecting-user-mode-dumps.
+
+Whenever a Windows application crashes, an event with Event ID 1000 is submitted
+to the ``Application`` Windows Event Log. The entry also includes the process id,
+the faulting module name and path, as well as the exception code.
+
+Please note that in order to analyze crash dumps, the debug symbols are required.
+We're currently building Ceph using ``MinGW``, so by default ``DWARF`` symbols will
+be embedded in the binaries. ``windbg`` does not support such symbols but ``gdb``
+can be used.
+
+``gdb`` can debug running Windows processes but it cannot open Windows minidumps.
+The following ``gdb`` fork may be used until this functionality is merged upstream:
+https://github.com/ssbssa/gdb/releases. As an alternative, ``DWARF`` symbols
+can be converted using ``cv2pdb``, but be aware that this tool has limited C++
+support.
+
+ceph tool
+~~~~~~~~~
+
+The ``ceph`` Python tool can't be used on Windows natively yet. With minor
+changes it may run, but the main issue is that Python doesn't currently allow
+using ``AF_UNIX`` on Windows: https://bugs.python.org/issue33408
+
+As an alternative, the ``ceph`` tool can be used through Windows Subsystem
+for Linux (WSL). For example, a running Windows RBD daemon may be contacted
+using:
+
+.. code:: bash
+
+ ceph daemon /mnt/c/ProgramData/ceph/out/ceph-client.admin.61436.1209215304.asok help
+
+IO counters
+~~~~~~~~~~~
+
+Along with the standard RBD perf counters, the ``libwnbd`` IO counters may be
+retrieved using:
+
+.. code:: PowerShell
+
+ rbd-wnbd stats $imageName
+
+At the same time, WNBD driver counters can be fetched using:
+
+.. code:: PowerShell
+
+ wnbd-client stats $mappingId
+
+Note that the ``wnbd-client`` mapping identifier will be the full RBD image spec
+(the ``device`` column of the ``rbd device list`` output).
+
+Missing libraries
+~~~~~~~~~~~~~~~~~
+
+The Ceph tools can silently exit with a -1073741515 return code if one of the
+required DLLs is missing or unsupported.
+
+The `Dependency Walker`_ tool can be used to determine the missing library.
+
+.. _Dependency Walker: https://www.dependencywalker.com/