author    Daniel Baumann <daniel.baumann@progress-linux.org>  2024-04-21 11:54:28 +0000
committer Daniel Baumann <daniel.baumann@progress-linux.org>  2024-04-21 11:54:28 +0000
commit    e6918187568dbd01842d8d1d2c808ce16a894239
tree      64f88b554b444a49f656b6c656111a145cbbaa28 /doc/start
parent    Initial commit.
Adding upstream version 18.2.2.
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'doc/start')
-rw-r--r--  doc/start/ceph.conf                        3
-rw-r--r--  doc/start/documenting-ceph.rst          1085
-rw-r--r--  doc/start/get-involved.rst                99
-rw-r--r--  doc/start/hardware-recommendations.rst   623
-rw-r--r--  doc/start/intro.rst                       99
-rw-r--r--  doc/start/os-recommendations.rst          82
-rw-r--r--  doc/start/quick-rbd.rst                   69
7 files changed, 2060 insertions, 0 deletions
diff --git a/doc/start/ceph.conf b/doc/start/ceph.conf
new file mode 100644
index 000000000..f3d558eb0
--- /dev/null
+++ b/doc/start/ceph.conf
@@ -0,0 +1,3 @@
+[global]
+ # list your monitors here
+ mon host = {mon-host-1}, {mon-host-2}
diff --git a/doc/start/documenting-ceph.rst b/doc/start/documenting-ceph.rst
new file mode 100644
index 000000000..02d4dccc4
--- /dev/null
+++ b/doc/start/documenting-ceph.rst
@@ -0,0 +1,1085 @@
+.. _documenting_ceph:
+
+==================
+ Documenting Ceph
+==================
+
+You can help the Ceph project by contributing to the documentation. Even
+small contributions help.
+
+The easiest way to suggest a correction to the documentation is to send an
+email to `ceph-users@ceph.io`. Include the string "ATTN: DOCS" or
+"Attention: Docs" or "Attention: Documentation" in the subject line. In
+the body of the email, include the text to be corrected (so that I can find
+it in the repo) and include your correction.
+
+Another way to suggest a documentation correction is to make a pull request.
+The instructions for making a pull request against the Ceph documentation are
+in the section :ref:`making_contributions`.
+
+If this is your first time making an improvement to the documentation or
+if you have noticed a small mistake (such as a spelling error or a typo),
+it will be easier to send an email than to make a pull request. You will
+be credited for the improvement unless you instruct Ceph Upstream
+Documentation not to credit you.
+
+Location of the Documentation in the Repository
+===============================================
+
+The Ceph documentation source is in the ``ceph/doc`` directory of the Ceph
+repository. Python Sphinx renders the source into HTML and manpages.
+
+Viewing Old Ceph Documentation
+==============================
+The https://docs.ceph.com link displays the latest release branch by default
+(for example, if "Quincy" is the most recent release, then by default
+https://docs.ceph.com displays the documentation for Quincy), but you can view
+the documentation for older versions of Ceph (for example, ``pacific``) by
+replacing the release name in the URL (for example, ``quincy`` in
+`https://docs.ceph.com/en/quincy/ <https://docs.ceph.com/en/quincy/>`_) with the
+branch name you prefer (for example, ``pacific``, to create a URL that reads
+`https://docs.ceph.com/en/pacific/ <https://docs.ceph.com/en/pacific/>`_).
+
+.. _making_contributions:
+
+Making Contributions
+====================
+
+Making a documentation contribution involves the same basic procedure as making
+a code contribution, with one exception: you must build documentation source
+instead of compiling program source. The procedure includes the following
+steps:
+
+#. `Get the Source`_
+#. `Select a Branch`_
+#. `Make a Change`_
+#. `Build the Source`_
+#. `Commit the Change`_
+#. `Push the Change`_
+#. `Make a Pull Request`_
+#. `Notify Us`_
+
+Get the Source
+--------------
+
+The source of the Ceph documentation is a collection of ReStructured Text files
+that are in the Ceph repository in the ``ceph/doc`` directory. For details
+on GitHub and Ceph, see :ref:`Get Involved`.
+
+Use the `Fork and Pull`_ approach to make documentation contributions. To do
+this, you must:
+
+#. Install git locally. In Debian or Ubuntu, run the following command:
+
+ .. prompt:: bash $
+
+ sudo apt-get install git
+
+ In Fedora, run the following command:
+
+ .. prompt:: bash $
+
+ sudo yum install git
+
+ In CentOS/RHEL, run the following command:
+
+ .. prompt:: bash $
+
+ sudo yum install git
+
+#. Make sure that your ``.gitconfig`` file has been configured to include your
+ name and email address:
+
+ .. code-block:: ini
+
+ [user]
+ email = {your-email-address}
+ name = {your-name}
+
+ For example:
+
+ .. prompt:: bash $
+
+ git config --global user.name "John Doe"
+ git config --global user.email johndoe@example.com
+
+
+#. Create a `github`_ account (if you don't have one).
+
+#. Fork the Ceph project. See https://github.com/ceph/ceph.
+
+#. Clone your fork of the Ceph project to your local host. This creates what is
+ known as a "local working copy".
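+
+   For example, if your GitHub account were ``{your-github-username}`` (a
+   placeholder used here for illustration), the clone command might look like
+   this:
+
+   .. prompt:: bash $
+
+      git clone https://github.com/{your-github-username}/ceph.git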
+
+The Ceph documentation is organized by component:
+
+- **Ceph Storage Cluster:** The Ceph Storage Cluster documentation is
+ in the ``doc/rados`` directory.
+
+- **Ceph Block Device:** The Ceph Block Device documentation is in
+ the ``doc/rbd`` directory.
+
+- **Ceph Object Storage:** The Ceph Object Storage documentation is in
+ the ``doc/radosgw`` directory.
+
+- **Ceph File System:** The Ceph File System documentation is in the
+ ``doc/cephfs`` directory.
+
+- **Installation (Quick):** Quick start documentation is in the
+ ``doc/start`` directory.
+
+- **Installation (Manual):** Documentation concerning the manual installation of
+ Ceph is in the ``doc/install`` directory.
+
+- **Manpage:** Manpage source is in the ``doc/man`` directory.
+
+- **Developer:** Developer documentation is in the ``doc/dev``
+ directory.
+
+- **Images:** Images including JPEG and PNG files are stored in the
+ ``doc/images`` directory.
+
+
+Select a Branch
+---------------
+
+When you make small changes to the documentation, such as fixing typographical
+errors or clarifying explanations, use the ``main`` branch (default). You
+should also use the ``main`` branch when making contributions to features that
+are in the current release. ``main`` is the most commonly used branch. :
+
+.. prompt:: bash $
+
+ git checkout main
+
+When you make changes to documentation that affect an upcoming release, use
+the ``next`` branch. ``next`` is the second most commonly used branch. :
+
+.. prompt:: bash $
+
+ git checkout next
+
+Create a separate branch when you are making substantial contributions (such
+as documentation for features that are not yet in the current release), when
+your contribution is related to an issue with a tracker ID, or when you want
+to see your documentation rendered on the Ceph.com website before it is merged
+into the ``main`` branch. To distinguish branches that include only
+documentation updates, we prepend them with ``wip-doc`` by convention,
+following the form ``wip-doc-{your-branch-name}``. If the branch relates to an
+issue filed in http://tracker.ceph.com/issues, the branch name incorporates
+the issue number.
+For example, if a documentation branch is a fix for issue #4000, the branch name
+should be ``wip-doc-4000`` by convention and the relevant tracker URL will be
+http://tracker.ceph.com/issues/4000.
+
+.. note:: Please do not mingle documentation contributions and source code
+ contributions in a single commit. When you keep documentation
+ commits separate from source code commits, it simplifies the review
+ process. We highly recommend that any pull request that adds a feature or
+ a configuration option should also include a documentation commit that
+ describes the changes.
+
+Before you create your branch name, ensure that it doesn't already exist in the
+local or remote repository. :
+
+.. prompt:: bash $
+
+ git branch -a | grep wip-doc-{your-branch-name}
+
+If it doesn't exist, create your branch:
+
+.. prompt:: bash $
+
+ git checkout -b wip-doc-{your-branch-name}
+
+
+Make a Change
+-------------
+
+Modifying a document involves opening a reStructuredText file, changing
+its contents, and saving the changes. See `Documentation Style Guide`_ for
+details on syntax requirements.
+
+Adding a document involves creating a new reStructuredText file within the
+``doc`` directory tree with a ``*.rst``
+extension. You must also include a reference to the document: a hyperlink
+or a table of contents entry. The ``index.rst`` file of a top-level directory
+usually contains a TOC, where you can add the new file name. All documents must
+have a title. See `Headings`_ for details.
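+
+For example, a hypothetical new file named ``example.rst`` could be referenced
+from a TOC in the ``index.rst`` file of its directory like this (the document
+name is listed without its ``.rst`` extension)::
+
+   .. toctree::
+      :maxdepth: 1
+
+      example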
+
+Your new document doesn't get tracked by ``git`` automatically. When you want
+to add the document to the repository, you must use ``git add
+{path-to-filename}``. For example, from the top level directory of the
+repository, adding an ``example.rst`` file to the ``rados`` subdirectory would
+look like this:
+
+.. prompt:: bash $
+
+ git add doc/rados/example.rst
+
+Deleting a document involves removing it from the repository with ``git rm
+{path-to-filename}``. For example:
+
+.. prompt:: bash $
+
+ git rm doc/rados/example.rst
+
+You must also remove any reference to a deleted document from other documents.
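+
+One way to find any remaining references to the deleted document is to search
+the ``doc`` tree with ``git grep`` (using the hypothetical ``example`` file
+from above):
+
+.. prompt:: bash $
+
+   git grep -n "example" doc/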
+
+
+Build the Source
+----------------
+
+To build the documentation, navigate to the ``ceph`` repository directory:
+
+
+.. prompt:: bash $
+
+ cd ceph
+
+.. note::
+ The directory that contains ``build-doc`` and ``serve-doc`` must be included
+ in the ``PATH`` environment variable in order for these commands to work.
+
+
+To build the documentation on Debian/Ubuntu, Fedora, or CentOS/RHEL, execute:
+
+.. prompt:: bash $
+
+ admin/build-doc
+
+To scan for the reachability of external links, execute:
+
+.. prompt:: bash $
+
+ admin/build-doc linkcheck
+
+Executing ``admin/build-doc`` will create a ``build-doc`` directory under
+``ceph``. You may need to create a directory under ``ceph/build-doc`` for
+output of Javadoc files:
+
+.. prompt:: bash $
+
+ mkdir -p output/html/api/libcephfs-java/javadoc
+
+The build script ``build-doc`` will produce an output of errors and warnings.
+You MUST fix errors in documents you modified before committing a change, and
+you SHOULD fix warnings that are related to syntax you modified.
+
+.. important:: You must validate ALL HYPERLINKS. If a hyperlink is broken,
+ it automatically breaks the build!
+
+Once you build the documentation set, you may start an HTTP server at
+``http://localhost:8080/`` to view it:
+
+.. prompt:: bash $
+
+ admin/serve-doc
+
+You can also navigate to ``build-doc/output`` to inspect the built documents.
+There should be an ``html`` directory and a ``man`` directory containing
+documentation in HTML and manpage formats respectively.
+
+Build the Source (First Time)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Ceph uses Python Sphinx, which is generally distribution agnostic. The first
+time you build the Ceph documentation, it will generate a Doxygen XML tree,
+which is somewhat time-consuming.
+
+Python Sphinx does have some dependencies that vary across distributions. The
+first time you build the documentation, the script will notify you if you do not
+have the dependencies installed. To run Sphinx and build documentation successfully,
+the following packages are required:
+
+.. raw:: html
+
+ <style type="text/css">div.body h3{margin:5px 0px 0px 0px;}</style>
+ <table cellpadding="10"><colgroup><col width="30%"><col width="30%"><col width="30%"></colgroup><tbody valign="top"><tr><td><h3>Debian/Ubuntu</h3>
+
+- gcc
+- python3-dev
+- python3-pip
+- python3-sphinx
+- python3-venv
+- libxml2-dev
+- libxslt1-dev
+- doxygen
+- graphviz
+- ant
+- ditaa
+
+.. raw:: html
+
+ </td><td><h3>Fedora</h3>
+
+- gcc
+- python-devel
+- python-pip
+- python-docutils
+- python-jinja2
+- python-pygments
+- python-sphinx
+- libxml2-devel
+- libxslt-devel
+- doxygen
+- graphviz
+- ant
+- ditaa
+
+.. raw:: html
+
+ </td><td><h3>CentOS/RHEL</h3>
+
+- gcc
+- python-devel
+- python-pip
+- python-docutils
+- python-jinja2
+- python-pygments
+- python-sphinx
+- libxml2-devel
+- libxslt-devel
+- doxygen
+- graphviz
+- ant
+
+.. raw:: html
+
+ </td></tr></tbody></table>
+
+
+Install each dependency that is not installed on your host. For Debian/Ubuntu
+distributions, execute the following:
+
+.. prompt:: bash $
+
+   sudo apt-get install gcc python3-dev python3-pip libxml2-dev libxslt1-dev doxygen graphviz ant ditaa
+ sudo apt-get install python3-sphinx python3-venv
+
+For Fedora distributions, execute the following:
+
+.. prompt:: bash $
+
+ sudo yum install gcc python-devel python-pip libxml2-devel libxslt-devel doxygen graphviz ant
+ sudo pip install html2text
+ sudo yum install python-jinja2 python-pygments python-docutils python-sphinx
+ sudo yum install jericho-html ditaa
+
+For CentOS/RHEL distributions, it is recommended to have ``epel`` (Extra
+Packages for Enterprise Linux) repository as it provides some extra packages
+which are not available in the default repository. To install ``epel``, execute
+the following:
+
+.. prompt:: bash $
+
+ sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
+
+For CentOS/RHEL distributions, execute the following:
+
+.. prompt:: bash $
+
+ sudo yum install gcc python-devel python-pip libxml2-devel libxslt-devel doxygen graphviz ant
+ sudo pip install html2text
+
+For CentOS/RHEL distributions, the remaining python packages are not available
+in the default and ``epel`` repositories. So, use http://rpmfind.net/ to find
+the packages. Then, download them from a mirror and install them. For example:
+
+.. prompt:: bash $
+
+ wget http://rpmfind.net/linux/centos/7/os/x86_64/Packages/python-jinja2-2.7.2-2.el7.noarch.rpm
+ sudo yum install python-jinja2-2.7.2-2.el7.noarch.rpm
+ wget http://rpmfind.net/linux/centos/7/os/x86_64/Packages/python-pygments-1.4-9.el7.noarch.rpm
+ sudo yum install python-pygments-1.4-9.el7.noarch.rpm
+ wget http://rpmfind.net/linux/centos/7/os/x86_64/Packages/python-docutils-0.11-0.2.20130715svn7687.el7.noarch.rpm
+ sudo yum install python-docutils-0.11-0.2.20130715svn7687.el7.noarch.rpm
+ wget http://rpmfind.net/linux/centos/7/os/x86_64/Packages/python-sphinx-1.1.3-11.el7.noarch.rpm
+ sudo yum install python-sphinx-1.1.3-11.el7.noarch.rpm
+
+Ceph documentation makes extensive use of `ditaa`_, which is not presently built
+for CentOS/RHEL7. You must install ``ditaa`` if you are making changes to
+``ditaa`` diagrams so that you can verify that they render properly before you
+commit new or modified ``ditaa`` diagrams. You may retrieve compatible required
+packages for CentOS/RHEL distributions and install them manually. To run
+``ditaa`` on CentOS/RHEL7, the following dependencies are required:
+
+- jericho-html
+- jai-imageio-core
+- batik
+
+Use http://rpmfind.net/ to find compatible ``ditaa`` and the dependencies.
+Then, download them from a mirror and install them. For example:
+
+.. prompt:: bash $
+
+ wget http://rpmfind.net/linux/fedora/linux/releases/22/Everything/x86_64/os/Packages/j/jericho-html-3.3-4.fc22.noarch.rpm
+ sudo yum install jericho-html-3.3-4.fc22.noarch.rpm
+ wget http://rpmfind.net/linux/centos/7/os/x86_64/Packages/jai-imageio-core-1.2-0.14.20100217cvs.el7.noarch.rpm
+ sudo yum install jai-imageio-core-1.2-0.14.20100217cvs.el7.noarch.rpm
+ wget http://rpmfind.net/linux/centos/7/os/x86_64/Packages/batik-1.8-0.12.svn1230816.el7.noarch.rpm
+ sudo yum install batik-1.8-0.12.svn1230816.el7.noarch.rpm
+ wget http://rpmfind.net/linux/fedora/linux/releases/22/Everything/x86_64/os/Packages/d/ditaa-0.9-13.r74.fc21.noarch.rpm
+ sudo yum install ditaa-0.9-13.r74.fc21.noarch.rpm
+
+Once you have installed all these packages, build the documentation by following
+the steps given in `Build the Source`_.
+
+
+Commit the Change
+-----------------
+
+Ceph documentation commits are simple, but follow a strict convention:
+
+- A commit SHOULD modify only one file (this simplifies rollback). You MAY
+  commit multiple files if their changes are related. Unrelated changes
+  SHOULD NOT be put into the same commit.
+- A commit MUST have a comment.
+- A commit comment MUST be prepended with ``doc:``. (strict)
+- The comment summary MUST be one line only. (strict)
+- Additional comments MAY follow a blank line after the summary,
+ but should be terse.
+- A commit MAY include ``Fixes: https://tracker.ceph.com/issues/{bug number}``.
+- Commits MUST include ``Signed-off-by: Firstname Lastname <email>``. (strict)
+
+.. tip:: Follow the foregoing convention particularly where it says
+ ``(strict)`` or you will be asked to modify your commit to comply with
+ this convention.
+
+The following is a common commit comment (preferred)::
+
+ doc: Fixes a spelling error and a broken hyperlink.
+
+ Signed-off-by: John Doe <john.doe@gmail.com>
+
+
+The following comment includes a reference to a bug. ::
+
+ doc: Fixes a spelling error and a broken hyperlink.
+
+ Fixes: https://tracker.ceph.com/issues/1234
+
+ Signed-off-by: John Doe <john.doe@gmail.com>
+
+
+The following comment includes a terse sentence following the comment summary.
+There is a carriage return between the summary line and the description::
+
+ doc: Added mon setting to monitor config reference
+
+ Describes 'mon setting', which is a new setting added
+ to config_opts.h.
+
+ Signed-off-by: John Doe <john.doe@gmail.com>
+
+
+To commit changes, execute the following:
+
+.. prompt:: bash $
+
+ git commit -a
+
+
+An easy way to manage your documentation commits is to use visual tools for
+``git``. For example, ``gitk`` provides a graphical interface for viewing the
+repository history, and ``git-gui`` provides a graphical interface for viewing
+your uncommitted changes, staging them for commit, committing the changes and
+pushing them to your forked Ceph repository.
+
+
+For Debian/Ubuntu, execute:
+
+.. prompt:: bash $
+
+ sudo apt-get install gitk git-gui
+
+For Fedora/CentOS/RHEL, execute:
+
+.. prompt:: bash $
+
+ sudo yum install gitk git-gui
+
+Then, execute:
+
+.. prompt:: bash $
+
+ cd {git-ceph-repo-path}
+ gitk
+
+Finally, select **File->Start git gui** to activate the graphical user interface.
+
+
+Push the Change
+---------------
+
+Once you have one or more commits, you must push them from the local copy of the
+repository to ``github``. A graphical tool like ``git-gui`` provides a user
+interface for pushing to the repository. If you created a branch previously:
+
+.. prompt:: bash $
+
+ git push origin wip-doc-{your-branch-name}
+
+Otherwise:
+
+.. prompt:: bash $
+
+ git push
+
+
+Make a Pull Request
+-------------------
+
+As noted earlier, you can make documentation contributions using the `Fork and
+Pull`_ approach.
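+
+Pull requests are usually opened from the GitHub web interface. If you have
+the optional GitHub CLI (``gh``) installed, a pull request can also be opened
+from the command line. Here is a sketch (the title shown is only an example):
+
+.. prompt:: bash $
+
+   gh pr create --base main --title "doc/rados: fix a typo"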
+
+
+Squash Extraneous Commits
+-------------------------
+Each pull request ought to be associated with only a single commit. If you have
+made more than one commit to the feature branch that you are working in, you
+will need to "squash" the multiple commits. "Squashing" is the colloquial term
+for a particular kind of "interactive rebase". Squashing can be done in a great
+number of ways, but the example here will deal with a situation in which there
+are three commits and the changes in all three of the commits are kept. The three
+commits will be squashed into a single commit.
+
+#. Make the commits that you will later squash.
+
+ #. Make the first commit.
+
+ ::
+
+ doc/glossary: improve "CephX" entry
+
+ Improve the glossary entry for "CephX".
+
+ Signed-off-by: Zac Dover <zac.dover@proton.me>
+
+ # Please enter the commit message for your changes. Lines starting
+ # with '#' will be ignored, and an empty message aborts the commit.
+ #
+ # On branch wip-doc-2023-03-28-glossary-cephx
+ # Changes to be committed:
+ # modified: glossary.rst
+ #
+
+ #. Make the second commit.
+
+ ::
+
+ doc/glossary: add link to architecture doc
+
+ Add a link to a section in the architecture document, which link
+ will be used in the process of improving the "CephX" glossary entry.
+
+ Signed-off-by: Zac Dover <zac.dover@proton.me>
+
+ # Please enter the commit message for your changes. Lines starting
+ # with '#' will be ignored, and an empty message aborts the commit.
+ #
+ # On branch wip-doc-2023-03-28-glossary-cephx
+ # Your branch is up to date with 'origin/wip-doc-2023-03-28-glossary-cephx'.
+ #
+ # Changes to be committed:
+ # modified: architecture.rst
+
+ #. Make the third commit.
+
+ ::
+
+ doc/glossary: link to Arch doc in "CephX" glossary
+
+ Link to the Architecture document from the "CephX" entry in the
+ Glossary.
+
+ Signed-off-by: Zac Dover <zac.dover@proton.me>
+
+ # Please enter the commit message for your changes. Lines starting
+ # with '#' will be ignored, and an empty message aborts the commit.
+ #
+ # On branch wip-doc-2023-03-28-glossary-cephx
+ # Your branch is up to date with 'origin/wip-doc-2023-03-28-glossary-cephx'.
+ #
+ # Changes to be committed:
+ # modified: glossary.rst
+
+#. There are now three commits in the feature branch. We will now begin the
+ process of squashing them into a single commit.
+
+ #. Run the command ``git rebase -i main``, which rebases the current branch
+ (the feature branch) against the ``main`` branch:
+
+ .. prompt:: bash
+
+ git rebase -i main
+
+ #. A list of the commits that have been made to the feature branch now
+ appear, and looks like this:
+
+ ::
+
+ pick d395e500883 doc/glossary: improve "CephX" entry
+ pick b34986e2922 doc/glossary: add link to architecture doc
+ pick 74d0719735c doc/glossary: link to Arch doc in "CephX" glossary
+
+ # Rebase 0793495b9d1..74d0719735c onto 0793495b9d1 (3 commands)
+ #
+ # Commands:
+ # p, pick <commit> = use commit
+ # r, reword <commit> = use commit, but edit the commit message
+ # e, edit <commit> = use commit, but stop for amending
+ # s, squash <commit> = use commit, but meld into previous commit
+ # f, fixup [-C | -c] <commit> = like "squash" but keep only the previous
+ # commit's log message, unless -C is used, in which case
+ # keep only this commit's message; -c is same as -C but
+ # opens the editor
+ # x, exec <command> = run command (the rest of the line) using shell
+ # b, break = stop here (continue rebase later with 'git rebase --continue')
+ # d, drop <commit> = remove commit
+ # l, label <label> = label current HEAD with a name
+ # t, reset <label> = reset HEAD to a label
+ # m, merge [-C <commit> | -c <commit>] <label> [# <oneline>]
+ # create a merge commit using the original merge commit's
+ # message (or the oneline, if no original merge commit was
+ # specified); use -c <commit> to reword the commit message
+ # u, update-ref <ref> = track a placeholder for the <ref> to be updated
+ # to this position in the new commits. The <ref> is
+ # updated at the end of the rebase
+ #
+ # These lines can be re-ordered; they are executed from top to bottom.
+ #
+ # If you remove a line here THAT COMMIT WILL BE LOST.
+
+ Find the part of the screen that says "pick". This is the part that you will
+ alter. There are three commits that are currently labeled "pick". We will
+ choose one of them to remain labeled "pick", and we will label the other two
+ commits "squash".
+
+#. Label two of the three commits ``squash``:
+
+ ::
+
+ pick d395e500883 doc/glossary: improve "CephX" entry
+ squash b34986e2922 doc/glossary: add link to architecture doc
+ squash 74d0719735c doc/glossary: link to Arch doc in "CephX" glossary
+
+ # Rebase 0793495b9d1..74d0719735c onto 0793495b9d1 (3 commands)
+ #
+ # Commands:
+ # p, pick <commit> = use commit
+ # r, reword <commit> = use commit, but edit the commit message
+ # e, edit <commit> = use commit, but stop for amending
+ # s, squash <commit> = use commit, but meld into previous commit
+ # f, fixup [-C | -c] <commit> = like "squash" but keep only the previous
+ # commit's log message, unless -C is used, in which case
+ # keep only this commit's message; -c is same as -C but
+ # opens the editor
+ # x, exec <command> = run command (the rest of the line) using shell
+ # b, break = stop here (continue rebase later with 'git rebase --continue')
+ # d, drop <commit> = remove commit
+ # l, label <label> = label current HEAD with a name
+ # t, reset <label> = reset HEAD to a label
+ # m, merge [-C <commit> | -c <commit>] <label> [# <oneline>]
+ # create a merge commit using the original merge commit's
+ # message (or the oneline, if no original merge commit was
+ # specified); use -c <commit> to reword the commit message
+ # u, update-ref <ref> = track a placeholder for the <ref> to be updated
+ # to this position in the new commits. The <ref> is
+ # updated at the end of the rebase
+ #
+ # These lines can be re-ordered; they are executed from top to bottom.
+ #
+ # If you remove a line here THAT COMMIT WILL BE LOST.
+
+#. Now we create a commit message that applies to all the commits that have
+ been squashed together:
+
+ #. When you save and close the list of commits that you have designated for
+ squashing, a list of all three commit messages appears, and it looks
+ like this:
+
+ ::
+
+ # This is a combination of 3 commits.
+ # This is the 1st commit message:
+
+ doc/glossary: improve "CephX" entry
+
+ Improve the glossary entry for "CephX".
+
+ Signed-off-by: Zac Dover <zac.dover@proton.me>
+
+ # This is the commit message #2:
+
+ doc/glossary: add link to architecture doc
+
+ Add a link to a section in the architecture document, which link
+ will be used in the process of improving the "CephX" glossary entry.
+
+ Signed-off-by: Zac Dover <zac.dover@proton.me>
+
+ # This is the commit message #3:
+
+ doc/glossary: link to Arch doc in "CephX" glossary
+
+ Link to the Architecture document from the "CephX" entry in the
+ Glossary.
+
+ Signed-off-by: Zac Dover <zac.dover@proton.me>
+
+ # Please enter the commit message for your changes. Lines starting
+ # with '#' will be ignored, and an empty message aborts the commit.
+ #
+ # Date: Tue Mar 28 18:42:11 2023 +1000
+ #
+ # interactive rebase in progress; onto 0793495b9d1
+ # Last commands done (3 commands done):
+ # squash b34986e2922 doc/glossary: add link to architecture doc
+ # squash 74d0719735c doc/glossary: link to Arch doc in "CephX" glossary
+ # No commands remaining.
+ # You are currently rebasing branch 'wip-doc-2023-03-28-glossary-cephx' on '0793495b9d1'.
+ #
+ # Changes to be committed:
+ # modified: doc/architecture.rst
+ # modified: doc/glossary.rst
+
+ #. The commit messages have been revised into the simpler form presented here:
+
+ ::
+
+ doc/glossary: improve "CephX" entry
+
+ Improve the glossary entry for "CephX".
+
+ Signed-off-by: Zac Dover <zac.dover@proton.me>
+
+ # Please enter the commit message for your changes. Lines starting
+ # with '#' will be ignored, and an empty message aborts the commit.
+ #
+ # Date: Tue Mar 28 18:42:11 2023 +1000
+ #
+ # interactive rebase in progress; onto 0793495b9d1
+ # Last commands done (3 commands done):
+ # squash b34986e2922 doc/glossary: add link to architecture doc
+ # squash 74d0719735c doc/glossary: link to Arch doc in "CephX" glossary
+ # No commands remaining.
+ # You are currently rebasing branch 'wip-doc-2023-03-28-glossary-cephx' on '0793495b9d1'.
+ #
+ # Changes to be committed:
+ # modified: doc/architecture.rst
+ # modified: doc/glossary.rst
+
+#. Force push the squashed commit from your local working copy to the remote
+ upstream branch. The force push is necessary because the newly squashed commit
+ does not have an ancestor in the remote. If that confuses you, just run this
+ command and don't think too much about it:
+
+ .. prompt:: bash $
+
+ git push -f
+
+ ::
+
+ Enumerating objects: 9, done.
+ Counting objects: 100% (9/9), done.
+ Delta compression using up to 8 threads
+ Compressing objects: 100% (5/5), done.
+ Writing objects: 100% (5/5), 722 bytes | 722.00 KiB/s, done.
+ Total 5 (delta 4), reused 0 (delta 0), pack-reused 0
+ remote: Resolving deltas: 100% (4/4), completed with 4 local objects.
+ To github.com:zdover23/ceph.git
+ + b34986e2922...02e3a5cb763 wip-doc-2023-03-28-glossary-cephx -> wip-doc-2023-03-28-glossary-cephx (forced update)
+
+
+
+
+
+Notify Us
+---------
+
+If some time has passed and the pull request that you raised has not been
+reviewed, contact the component lead and ask what's taking so long. See
+:ref:`ctl` for a list of component leads.
+
+Documentation Style Guide
+=========================
+
+One objective of the Ceph documentation project is to ensure the readability of
+the documentation in both native reStructuredText format and its rendered
+formats such as HTML. Navigate to your Ceph repository and view a document in
+its native format. You may notice that it is generally as legible in a terminal
+as it is in its rendered HTML format. You may also notice that diagrams in
+``ditaa`` format render reasonably well in text mode. :
+
+.. prompt:: bash $
+
+ less doc/architecture.rst
+
+Review the following style guides to maintain this consistency.
+
+
+Headings
+--------
+
+#. **Document Titles:** Document titles use the ``=`` character overline and
+ underline with a leading and trailing space on the title text line.
+ See `Document Title`_ for details.
+
+#. **Section Titles:** Section titles use the ``=`` character underline with no
+ leading or trailing spaces for text. Two carriage returns should precede a
+ section title (unless an inline reference precedes it). See `Sections`_ for
+ details.
+
+#. **Subsection Titles:** Subsection titles use the ``-`` character underline
+   with no leading or trailing spaces for text. Two carriage returns should
+   precede a subsection title (unless an inline reference precedes it). A
+   short sketch illustrating these conventions follows this list.
+
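+Here is that sketch (the titles shown are examples only)::
+
+   ==================
+    Example Document
+   ==================
+
+   Example Section
+   ===============
+
+   Example Subsection
+   ------------------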
+
+Text Body
+---------
+
+As a general rule, we prefer text to wrap at column 80 so that it is legible in
+a command line interface without leading or trailing white space. Where
+possible, we prefer to maintain this convention with text, lists, literal text
+(exceptions allowed), tables, and ``ditaa`` graphics.
+
+#. **Paragraphs**: Paragraphs have a leading and a trailing carriage return,
+ and should be 80 characters wide or less so that the documentation can be
+ read in native format in a command line terminal.
+
+#. **Literal Text:** To create an example of literal text (e.g., command line
+ usage), terminate the preceding paragraph with ``::`` or enter a carriage
+ return to create an empty line after the preceding paragraph; then, enter
+ ``::`` on a separate line followed by another empty line. Then, begin the
+ literal text with tab indentation (preferred) or space indentation of 3
+ characters.
+
+#. **Indented Text:** Indented text such as bullet points
+ (e.g., ``- some text``) may span multiple lines. The text of subsequent
+ lines should begin at the same character position as the text of the
+ indented text (less numbers, bullets, etc.).
+
+   Indented text may include literal text examples. Whereas text indentation
+ should be done with spaces, literal text examples should be indented with
+ tabs. This convention enables you to add an additional indented paragraph
+ following a literal example by leaving a blank line and beginning the
+ subsequent paragraph with space indentation.
+
+#. **Numbered Lists:** Numbered lists should use autonumbering by starting
+ a numbered indent with ``#.`` instead of the actual number so that
+ numbered paragraphs can be repositioned without requiring manual
+ renumbering.
+
+#. **Code Examples:** Ceph supports the use of the
+   ``.. code-block:: <language>`` directive, so that you can add highlighting
+   to source examples. This is preferred for source code. However, use of this
+   directive will cause autonumbering to restart at 1 if it is used as an
+   example within a numbered list. See `Showing code examples`_ for details.
+   (A short sketch combining some of these conventions follows this list.)
+
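+Here is that sketch; it uses autonumbering and introduces a literal example
+with ``::`` (the content is illustrative only)::
+
+   #. Check the cluster status by running the following command::
+
+         ceph -s
+
+   #. Review the output before making any changes.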
+
+Paragraph Level Markup
+----------------------
+
+The Ceph project uses `paragraph level markup`_ to highlight points. (A short
+sketch showing two of these directives in use follows this list.)
+
+#. **Tip:** Use the ``.. tip::`` directive to provide additional information
+ that assists the reader or steers the reader away from trouble.
+
+#. **Note**: Use the ``.. note::`` directive to highlight an important point.
+
+#. **Important:** Use the ``.. important::`` directive to highlight important
+ requirements or caveats (e.g., anything that could lead to data loss). Use
+ this directive sparingly, because it renders in red.
+
+#. **Version Added:** Use the ``.. versionadded::`` directive for new features
+ or configuration settings so that users know the minimum release for using
+ a feature.
+
+#. **Version Changed:** Use the ``.. versionchanged::`` directive for changes
+ in usage or configuration settings.
+
+#. **Deprecated:** Use the ``.. deprecated::`` directive when CLI usage,
+ a feature or a configuration setting is no longer preferred or will be
+ discontinued.
+
+#. **Topic:** Use the ``.. topic::`` directive to encapsulate text that is
+ outside the main flow of the document. See the `topic directive`_ for
+ additional details.
+
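+Here is that sketch (the wording is illustrative only)::
+
+   .. note:: The new option takes effect only after the daemon is restarted.
+
+   .. versionadded:: Reef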
+
+Table of Contents (TOC) and Hyperlinks
+---------------------------------------
+
+The documents in the Ceph documentation suite follow certain conventions that
+are explained in this section.
+
+Every document (every ``.rst`` file) in the Sphinx-controlled Ceph
+documentation suite must be linked either (1) from another document in the
+documentation suite or (2) from a table of contents (TOC). If any document in
+the documentation suite is not linked in this way, the ``build-doc`` script
+generates warnings when it tries to build the documentation.
+
+The Ceph project uses the ``.. toctree::`` directive. See `The TOC tree`_ for
+details. When rendering a table of contents (TOC), specify the ``:maxdepth:``
+parameter so that the rendered TOC is not too long.
+
+Use the ``:ref:`` syntax where a link target contains a specific unique
+identifier (for example, ``.. _unique-target-id:``). A link to the section
+designated by ``.. _unique-target-id:`` looks like this:
+``:ref:`unique-target-id```. If this convention is followed, the links within
+the ``.rst`` source files will work even if the source files are moved within
+the ``ceph/doc`` directory. See `Cross referencing arbitrary locations`_ for
+details.
+
+.. _start_external_hyperlink_example:
+
+External Hyperlink Example
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+It is also possible to create a link to a section of the documentation and to
+have custom text appear in the body of the link. This is useful when it is more
+important to preserve the text of the sentence containing the link than it is
+to refer explicitly to the title of the section being linked to.
+
+For example, RST that links to the Sphinx Python Document Generator homepage
+and generates a sentence reading "Click here to learn more about Python
+Sphinx." looks like this:
+
+::
+
+ ``Click `here <https://www.sphinx-doc.org>`_ to learn more about Python
+ Sphinx.``
+
+And here it is, rendered:
+
+Click `here <https://www.sphinx-doc.org>`_ to learn more about Python Sphinx.
+
+Pay special attention to the underscore after the backtick. If you forget to
+include it and this is your first day working with RST, there's a chance that
+you'll spend all day wondering what went wrong without realizing that you
+omitted that underscore. Also, pay special attention to the space between the
+substitution text (in this case, "here") and the less-than bracket that sets
+the explicit link apart from the substitution text. The link will not render
+properly without this space.
+
+Linking Customs
+~~~~~~~~~~~~~~~
+
+By a custom established when Ceph was still being developed by Inktank,
+contributors to the documentation of the Ceph project preferred to use the
+convention of putting ``.. _Link Text: ../path`` links at the bottom of the
+document and linking to them with references of the form `` `Link Text`_ ``.
+This convention was preferred because it made the documents more readable in a
+command line interface. As of 2023, though, we have no preference for one over
+the other. Use whichever convention makes the text easier to read.
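+
+For example, the bottom-of-document convention looks like this (the link
+target shown is illustrative)::
+
+   See the `Hardware Recommendations`_ page before purchasing drives.
+
+   .. _Hardware Recommendations: ../start/hardware-recommendations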
+
+Using a part of a sentence as a hyperlink, `like this
+<https://docs.ceph.com>`_, is discouraged. The convention of writing "See X"
+is preferred. Here are some preferred formulations:
+
+#. For more information, see `docs.ceph.com <https://docs.ceph.com>`_.
+
+#. See `docs.ceph.com <https://docs.ceph.com>`_.
+
+
+Quirks of ReStructured Text
+---------------------------
+
+External Links
+~~~~~~~~~~~~~~
+
+.. _external_link_with_inline_text:
+
+Use the formula immediately below to render links that direct the reader to
+addresses external to the Ceph documentation:
+
+::
+
+   `inline text <https://www.foo.com>`_
+
+.. note:: Do not fail to include the space between the inline text and the
+ less-than sign.
+
+ Do not fail to include the underscore after the final backtick.
+
+ To link to addresses that are external to the Ceph documentation, include a
+ space between the inline text and the angle bracket that precedes the
+ external address. This is precisely the opposite of the convention for
+ inline text that links to a location inside the Ceph documentation. See
+ :ref:`here <internal_link_with_inline_text>` for an exemplar of this
+ convention.
+
+ If this seems inconsistent and confusing to you, then you're right. It is
+ inconsistent and confusing.
+
+See also ":ref:`External Hyperlink Example<start_external_hyperlink_example>`".
+
+Internal Links
+~~~~~~~~~~~~~~
+
+To link to a section in the Ceph documentation, you must (1) define a target
+link before the section and then (2) link to that target from another location
+in the documentation. Here are the formulas for targets and links to those
+targets:
+
+Target::
+
+ .. _target:
+
+ Title of Targeted Section
+ =========================
+
+ Lorem ipsum...
+
+Link to target::
+
+ :ref:`target`
+
+.. _internal_link_with_inline_text:
+
+Link to target with inline text::
+
+ :ref:`inline text<target>`
+
+.. note::
+
+ There is no space between "inline text" and the angle bracket that
+ immediately follows it. This is precisely the opposite of :ref:`the
+ convention for inline text that links to a location outside of the Ceph
+ documentation<external_link_with_inline_text>`. If this seems inconsistent
+ and confusing to you, then you're right. It is inconsistent and confusing.
+
+Escaping Bold Characters within Words
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This section explains how to make certain letters within a word bold while
+leaving the other letters in the word regular (non-bold).
+
+The following single-line paragraph provides an example of this:
+
+**C**\eph **F**\ile **S**\ystem.
+
+In ReStructured Text, the following formula will not work:
+
+::
+
+ **C**eph **F**ile **S**ystem
+
+The bolded notation must be turned off by means of the escape character (\\), as shown here:
+
+::
+
+ **C**\eph **F**\ile **S**\ystem
+
+.. _Python Sphinx: https://www.sphinx-doc.org
+.. _restructuredText: http://docutils.sourceforge.net/rst.html
+.. _Fork and Pull: https://help.github.com/articles/using-pull-requests
+.. _github: http://github.com
+.. _ditaa: http://ditaa.sourceforge.net/
+.. _Document Title: http://docutils.sourceforge.net/docs/user/rst/quickstart.html#document-title-subtitle
+.. _Sections: http://docutils.sourceforge.net/docs/user/rst/quickstart.html#sections
+.. _Cross referencing arbitrary locations: http://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html#role-ref
+.. _The TOC tree: http://sphinx-doc.org/markup/toctree.html
+.. _Showing code examples: http://sphinx-doc.org/markup/code.html
+.. _paragraph level markup: http://sphinx-doc.org/markup/para.html
+.. _topic directive: http://docutils.sourceforge.net/docs/ref/rst/directives.html#topic
diff --git a/doc/start/get-involved.rst b/doc/start/get-involved.rst
new file mode 100644
index 000000000..4f5277e37
--- /dev/null
+++ b/doc/start/get-involved.rst
@@ -0,0 +1,99 @@
+.. _Get Involved:
+
+=====================================
+ Get Involved in the Ceph Community!
+=====================================
+
+These are exciting times in the Ceph community! Get involved!
+
++----------------------+-------------------------------------------------+-----------------------------------------------+
+|Channel | Description | Contact Info |
++======================+=================================================+===============================================+
+| **Blog** | Check the Ceph Blog_ periodically to keep track | http://ceph.com/community/blog/ |
+| | of Ceph progress and important announcements. | |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **Planet Ceph** | Check the blog aggregation on Planet Ceph for | https://old.ceph.com/category/planet/ |
+| | interesting stories, information and | |
+| | experiences from the community. **NOTE: NO | |
+| | longer updated as of 2023.** | |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **Wiki** | Check the Ceph Wiki is a source for more | http://wiki.ceph.com/ |
+| | community and development related topics. You | |
+| | can find there information about blueprints, | |
+| | meetups, the Ceph Developer Summits and more. | |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **IRC** | As you delve into Ceph, you may have questions | |
+| | or feedback for the Ceph development team. Ceph | - **Domain:** |
+| | developers are often available on the ``#ceph`` | ``irc.oftc.net`` |
+| | IRC channel particularly during daytime hours | - **Channels:** |
+| | in the US Pacific Standard Time zone. | ``#ceph``, |
+| | While ``#ceph`` is a good starting point for | ``#ceph-devel``, |
+| | cluster operators and users, there is also | ``#ceph-dashboard``, |
+| | ``#ceph-devel``, ``#ceph-dashboard`` and | ``#cephfs`` |
+| | ``#cephfs`` dedicated for Ceph developers. | |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **User List** | Ask and answer user-related questions by | |
+| | subscribing to the email list at | - `User Subscribe`_ |
+| | ceph-users@ceph.io. You can opt out of the email| - `User Unsubscribe`_ |
+| | list at any time by unsubscribing. A simple | - `User Archives`_ |
+| | email is all it takes! | |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **Devel List** | Keep in touch with developer activity by | |
+| | subscribing to the email list at dev@ceph.io. | - `Devel Subscribe`_ |
+| | You can opt out of the email list at any time by| - `Devel Unsubscribe`_ |
+| | unsubscribing. A simple email is all it takes! | - `Devel Archives`_ |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **Kernel Client** | Linux kernel-related traffic, including kernel | - `Kernel Client Subscribe`_ |
+| | patches and discussion of implementation details| - `Kernel Client Unsubscribe`_ |
+| | for the kernel client code. | - `Kernel Client Archives`_ |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **Commit List** | Subscribe to ceph-commit@ceph.com to get | |
+| | commit notifications via email. You can opt out | - `Commit Subscribe`_ |
+| | of the email list at any time by unsubscribing. | - `Commit Unsubscribe`_ |
+| | A simple email is all it takes! | - `Mailing list archives`_ |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **QA List** | For Quality Assurance (QA) related activities | |
+| | subscribe to this list. You can opt out | - `QA Subscribe`_ |
+| | of the email list at any time by unsubscribing. | - `QA Unsubscribe`_ |
+| | A simple email is all it takes! | - `Mailing list archives`_ |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **Community List** | For all discussions related to the Ceph User | |
+| | Committee and other community topics. You can | - `Community Subscribe`_ |
+| | opt out of the email list at any time by | - `Community Unsubscribe`_ |
+| | unsubscribing. A simple email is all it takes! | - `Mailing list archives`_ |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **Bug Tracker** | You can help keep Ceph production worthy by | http://tracker.ceph.com/projects/ceph |
+| | filing and tracking bugs, and providing feature | |
+| | requests using the Bug Tracker_. | |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **Source Code** | If you would like to participate in | |
+| | development, bug fixing, or if you just want | - http://github.com/ceph/ceph |
+| | the very latest code for Ceph, you can get it | - http://download.ceph.com/tarballs/ |
+| | at http://github.com. See `Ceph Source Code`_ | |
+| | for details on cloning from github. | |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **Ceph Calendar** | Learn about upcoming Ceph events. | https://ceph.io/contribute/ |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+
+
+
+.. _Devel Subscribe: mailto:dev-request@ceph.io?body=subscribe
+.. _Devel Unsubscribe: mailto:dev-request@ceph.io?body=unsubscribe
+.. _Kernel Client Subscribe: mailto:majordomo@vger.kernel.org?body=subscribe+ceph-devel
+.. _Kernel Client Unsubscribe: mailto:majordomo@vger.kernel.org?body=unsubscribe+ceph-devel
+.. _User Subscribe: mailto:ceph-users-request@ceph.io?body=subscribe
+.. _User Unsubscribe: mailto:ceph-users-request@ceph.io?body=unsubscribe
+.. _Community Subscribe: mailto:ceph-community-join@lists.ceph.com
+.. _Community Unsubscribe: mailto:ceph-community-leave@lists.ceph.com
+.. _Commit Subscribe: mailto:ceph-commit-join@lists.ceph.com
+.. _Commit Unsubscribe: mailto:ceph-commit-leave@lists.ceph.com
+.. _QA Subscribe: mailto:ceph-qa-join@lists.ceph.com
+.. _QA Unsubscribe: mailto:ceph-qa-leave@lists.ceph.com
+.. _Devel Archives: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/
+.. _User Archives: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/
+.. _Kernel Client Archives: https://www.spinics.net/lists/ceph-devel/
+.. _Mailing list archives: http://lists.ceph.com/
+.. _Blog: http://ceph.com/community/blog/
+.. _Tracker: http://tracker.ceph.com/
+.. _Ceph Source Code: http://github.com/ceph/ceph
+
diff --git a/doc/start/hardware-recommendations.rst b/doc/start/hardware-recommendations.rst
new file mode 100644
index 000000000..a63b5a457
--- /dev/null
+++ b/doc/start/hardware-recommendations.rst
@@ -0,0 +1,623 @@
+.. _hardware-recommendations:
+
+==========================
+ hardware recommendations
+==========================
+
+Ceph is designed to run on commodity hardware, which makes building and
+maintaining petabyte-scale data clusters flexible and economically feasible.
+When planning your cluster's hardware, you will need to balance a number
+of considerations, including failure domains, cost, and performance.
+Hardware planning should include distributing Ceph daemons and
+other processes that use Ceph across many hosts. Generally, we recommend
+running Ceph daemons of a specific type on a host configured for that type
+of daemon. We recommend using separate hosts for processes that utilize your
+data cluster (for example, OpenStack, CloudStack, and Kubernetes).
+
+The requirements of one Ceph cluster are not the same as the requirements of
+another, but below are some general guidelines.
+
+.. tip:: check out the `ceph blog`_ too.
+
+CPU
+===
+
+CephFS Metadata Servers (MDS) are CPU-intensive. They are single-threaded and
+perform best with CPUs with a high clock rate (GHz). MDS servers do not need
+a large number of CPU cores unless they are also hosting other services, such
+as SSD OSDs for the CephFS metadata pool.
+OSD nodes need enough processing power to run the RADOS service, to calculate data
+placement with CRUSH, to replicate data, and to maintain their own copies of the
+cluster map.
+
+With earlier releases of Ceph, we would make hardware recommendations based on
+the number of cores per OSD, but the cores-per-OSD metric is no longer as
+useful as the number of cycles per IOP and the number of IOPS per OSD.
+For example, with NVMe OSD drives, Ceph can easily utilize five or six cores on real
+clusters and up to about fourteen cores on single OSDs in isolation. So cores
+per OSD are no longer as pressing a concern as they were. When selecting
+hardware, select for IOPS per core.
+
+.. tip:: When we speak of CPU *cores*, we mean *threads* when hyperthreading
+   is enabled. Hyperthreading is usually beneficial for Ceph servers.
+
+Monitor nodes and Manager nodes do not have heavy CPU demands and require only
+modest processors. If your hosts will run CPU-intensive processes in
+addition to Ceph daemons, make sure that you have enough processing power to
+run both the CPU-intensive processes and the Ceph daemons. (OpenStack Nova is
+one example of a CPU-intensive process.) We recommend that you run
+non-Ceph CPU-intensive processes on separate hosts (that is, on hosts that are
+not your Monitor and Manager nodes) in order to avoid resource contention.
+If your cluster deploys the Ceph Object Gateway, RGW daemons may co-reside
+with your Mon and Manager services if the nodes have sufficient resources.
+
+RAM
+===
+
+Generally, more RAM is better. Monitor / Manager nodes for a modest cluster
+might do fine with 64GB; for a larger cluster with hundreds of OSDs, 128GB
+is advised.
+
+.. tip:: when we speak of RAM and storage requirements, we often describe
+ the needs of a single daemon of a given type. A given server as
+ a whole will thus need at least the sum of the needs of the
+ daemons that it hosts as well as resources for logs and other operating
+ system components. Keep in mind that a server's need for RAM
+ and storage will be greater at startup and when components
+ fail or are added and the cluster rebalances. In other words,
+ allow headroom past what you might see used during a calm period
+ on a small initial cluster footprint.
+
+There is an :confval:`osd_memory_target` setting for BlueStore OSDs that
+defaults to 4GB. Factor in a prudent margin for the operating system and
+administrative tasks (like monitoring and metrics) as well as increased
+consumption during recovery: provisioning ~8GB *per BlueStore OSD* is thus
+advised.
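+
+For example, to raise the target to roughly 8GB (8589934592 bytes) on all
+OSDs, the option can be set at runtime with ``ceph config set``. The value
+shown is only an example; size it to your hardware:
+
+.. prompt:: bash $
+
+   ceph config set osd osd_memory_target 8589934592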
+
+Monitors and managers (ceph-mon and ceph-mgr)
+---------------------------------------------
+
+Monitor and manager daemon memory usage scales with the size of the
+cluster. Note that at boot-time and during topology changes and recovery these
+daemons will need more RAM than they do during steady-state operation, so plan
+for peak usage. For very small clusters, 32 GB suffices. For clusters of up
+to, say, 300 OSDs, go with 64GB. For clusters built with (or which will grow
+to) even more OSDs, you should provision 128GB. You may also want to consider
+tuning the following settings:
+
+* :confval:`mon_osd_cache_size`
+* :confval:`rocksdb_cache_size`
+
+
+Metadata servers (ceph-mds)
+---------------------------
+
+CephFS metadata daemon memory utilization depends on the configured size of
+its cache. We recommend 1 GB as a minimum for most systems. See
+:confval:`mds_cache_memory_limit`.
+
+
+Memory
+======
+
+Bluestore uses its own memory to cache data rather than relying on the
+operating system's page cache. In Bluestore you can adjust the amount of memory
+that the OSD attempts to consume by changing the :confval:`osd_memory_target`
+configuration option.
+
+- Setting the :confval:`osd_memory_target` below 2GB is not
+ recommended. Ceph may fail to keep the memory consumption under 2GB and
+ extremely slow performance is likely.
+
+- Setting the memory target between 2GB and 4GB typically works but may result
+ in degraded performance: metadata may need to be read from disk during IO
+ unless the active data set is relatively small.
+
+- 4GB is the current default value for :confval:`osd_memory_target`. This
+  default was chosen for typical use cases, and is intended to balance RAM
+  cost and OSD performance.
+
+- Setting the :confval:`osd_memory_target` higher than 4GB can improve
+  performance when there are many (small) objects or when large (256GB/OSD
+ or more) data sets are processed. This is especially true with fast
+ NVMe OSDs.
+
+.. important:: OSD memory management is "best effort". Although the OSD may
+ unmap memory to allow the kernel to reclaim it, there is no guarantee that
+ the kernel will actually reclaim freed memory within a specific time
+ frame. This applies especially in older versions of Ceph, where transparent
+ huge pages can prevent the kernel from reclaiming memory that was freed from
+ fragmented huge pages. Modern versions of Ceph disable transparent huge
+ pages at the application level to avoid this, but that does not
+ guarantee that the kernel will immediately reclaim unmapped memory. The OSD
+ may still at times exceed its memory target. We recommend budgeting
+ at least 20% extra memory on your system to prevent OSDs from going OOM
+ (**O**\ut **O**\f **M**\emory) during temporary spikes or due to delay in
+ the kernel reclaiming freed pages. That 20% value might be more or less than
+ needed, depending on the exact configuration of the system.
+
+.. tip:: Configuring the operating system with swap to provide additional
+   virtual memory for daemons is not advised for modern systems. Doing so
+   may result in lower performance, and your Ceph cluster may well be happier
+   with a daemon that crashes than with one that slows to a crawl.
+
+When using the legacy FileStore back end, the OS page cache was used for
+caching data, so tuning was not normally needed. With FileStore, OSD memory
+consumption was related to the number of PGs per daemon in the system.
+
+
+Data Storage
+============
+
+Plan your data storage configuration carefully. There are significant cost and
+performance tradeoffs to consider when planning for data storage. Simultaneous
+OS operations and simultaneous requests from multiple daemons for read and
+write operations against a single drive can impact performance.
+
+OSDs require substantial storage drive space for RADOS data. We recommend a
+minimum drive size of 1 terabyte. OSD drives much smaller than one terabyte
+use a significant fraction of their capacity for metadata, and drives smaller
+than 100 gigabytes will not be effective at all.
+
+It is *strongly* suggested that (enterprise-class) SSDs are provisioned for, at a
+minimum, Ceph Monitor and Ceph Manager hosts, as well as CephFS Metadata Server
+metadata pools and Ceph Object Gateway (RGW) index pools, even if HDDs are to
+be provisioned for bulk OSD data.
+
+To get the best performance out of Ceph, provision the following on separate
+drives:
+
+* The operating systems
+* OSD data
+* BlueStore WAL+DB
+
+For more
+information on how to effectively use a mix of fast drives and slow drives in
+your Ceph cluster, see the `block and block.db`_ section of the Bluestore
+Configuration Reference.
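+
+For example, one way to create an OSD that keeps its data on an HDD and its
+BlueStore DB on a faster NVMe partition is with ``ceph-volume`` (the device
+names below are examples; cephadm-based deployments typically express the
+same layout with a service spec instead):
+
+.. prompt:: bash $
+
+   ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1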
+
+Hard Disk Drives
+----------------
+
+Consider carefully the cost-per-gigabyte advantage
+of larger disks. We recommend dividing the price of the disk drive by the
+number of gigabytes to arrive at a cost per gigabyte, because larger drives may
+have a significant impact on the cost-per-gigabyte. For example, a 1 terabyte
+hard disk priced at $75.00 has a cost of $0.07 per gigabyte (i.e., $75 / 1024 =
+0.0732). By contrast, a 3 terabyte disk priced at $150.00 has a cost of $0.05
+per gigabyte (i.e., $150 / 3072 = 0.0488). In the foregoing example, using the
+1 terabyte disks would generally increase the cost per gigabyte by roughly
+50%, rendering your cluster substantially less cost efficient.
+
+.. tip:: Hosting multiple OSDs on a single SAS / SATA HDD
+ is **NOT** a good idea.
+
+.. tip:: Hosting an OSD with monitor, manager, or MDS data on a single
+ drive is also **NOT** a good idea.
+
+.. tip:: With spinning disks, the SATA and SAS interface increasingly
+ becomes a bottleneck at larger capacities. See also the `Storage Networking
+ Industry Association's Total Cost of Ownership calculator`_.
+
+
+Storage drives are subject to limitations on seek time, access time, read and
+write times, as well as total throughput. These physical limitations affect
+overall system performance--especially during recovery. We recommend using a
+dedicated (ideally mirrored) drive for the operating system and software, and
+one drive for each Ceph OSD Daemon you run on the host.
+Many "slow OSD" issues (when they are not attributable to hardware failure)
+arise from running an operating system and multiple OSDs on the same drive.
+Also be aware that today's 22TB HDD uses the same SATA interface as a
+3TB HDD from ten years ago: more than seven times the data to squeeze
+through the same interface. For this reason, when using HDDs for
+OSDs, drives larger than 8TB may be best suited for storage of large
+files / objects that are not at all performance-sensitive.
+
+
+Solid State Drives
+------------------
+
+Ceph performance improves substantially when solid-state drives (SSDs) are
+used: they reduce random access time and latency while increasing throughput.
+
+SSDs cost more per gigabyte than do HDDs but SSDs often offer
+access times that are, at a minimum, 100 times faster than HDDs.
+SSDs avoid hotspot issues and bottleneck issues within busy clusters, and
+they may offer better economics when TCO is evaluated holistically. Notably,
+the amortized drive cost for a given number of IOPS is much lower with SSDs
+than with HDDs. SSDs do not suffer rotational or seek latency. In addition to
+improving client performance, they substantially speed up cluster changes,
+including the rebalancing that takes place when OSDs or Monitors are added,
+removed, or fail, and they reduce the client impact of those changes.
+
+SSDs do not have moving mechanical parts, so they are not subject
+to many of the limitations of HDDs. SSDs do have significant
+limitations though. When evaluating SSDs, it is important to consider the
+performance of sequential and random reads and writes.
+
+.. important:: We recommend exploring the use of SSDs to improve performance.
+ However, before making a significant investment in SSDs, we **strongly
+ recommend** reviewing the performance metrics of an SSD and testing the
+ SSD in a test configuration in order to gauge performance.
+
+Relatively inexpensive SSDs may appeal to your sense of economy. Use caution.
+Acceptable IOPS are not the only factor to consider when selecting SSDs for
+use with Ceph. Bargain SSDs are often a false economy: they may experience
+"cliffing", which means that after an initial burst, sustained performance
+once a limited cache is filled declines considerably. Consider also durability:
+a drive rated for 0.3 Drive Writes Per Day (DWPD or equivalent) may be fine for
+OSDs dedicated to certain types of sequentially-written read-mostly data, but
+are not a good choice for Ceph Monitor duty. Enterprise-class SSDs are best
+for Ceph: they almost always feature power less protection (PLP) and do
+not suffer the dramatic cliffing that client (desktop) models may experience.
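+
+One way to check a candidate drive for cliffing is to run a sustained write
+long enough to exhaust any onboard cache and watch whether throughput
+collapses. The ``fio`` invocation below is only a sketch, and it destroys data
+on ``/dev/sdX``, so run it only against a scratch drive:
+
+.. code-block:: console
+
+   # fio --filename=/dev/sdX --name=sustained-write --ioengine=libaio \
+         --direct=1 --rw=write --bs=1M --iodepth=16 --runtime=1800 --time_based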
+
+When using a single (or mirrored pair) SSD for both operating system boot
+and Ceph Monitor / Manager purposes, a minimum capacity of 256GB is advised
+and at least 480GB is recommended. A drive model rated at 1+ DWPD (or the
+equivalent in TBW, TeraBytes Written) is suggested. However, for a given write
+workload, a larger drive than technically required will provide more endurance
+because it effectively has greater overprovisioning. We stress that
+enterprise-class drives are best for production use, as they feature power
+loss protection and increased durability compared to client (desktop) SKUs
+that are intended for much lighter and intermittent duty cycles.
+
+SSDs have historically been cost prohibitive for object storage, but
+QLC SSDs are closing the gap, offering greater density with lower power
+consumption and less power spent on cooling. Also, HDD OSDs may see a
+significant write latency improvement by offloading WAL+DB onto an SSD.
+Many Ceph OSD deployments do not require an SSD with greater endurance than
+1 DWPD (aka "read-optimized"). "Mixed-use" SSDs in the 3 DWPD class are
+often overkill for this purpose and cost significantly more.
+
+To get a better sense of the factors that determine the total cost of storage,
+you might use the `Storage Networking Industry Association's Total Cost of
+Ownership calculator`_.
+
+Partition Alignment
+~~~~~~~~~~~~~~~~~~~
+
+When using SSDs with Ceph, make sure that your partitions are properly aligned.
+Improperly aligned partitions suffer slower data transfer speeds than do
+properly aligned partitions. For more information about proper partition
+alignment and example commands that show how to align partitions properly, see
+`Werner Fischer's blog post on partition alignment`_.
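+
+As a quick check, ``parted`` can report whether an existing partition is
+optimally aligned. The device and partition number below are placeholders:
+
+.. code-block:: console
+
+   # parted /dev/sdX align-check optimal 1
+   1 aligned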
+
+CephFS Metadata Segregation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+One way that Ceph accelerates CephFS file system performance is by separating
+the storage of CephFS metadata from the storage of the CephFS file contents.
+Ceph provides a default ``metadata`` pool for CephFS metadata. You will never
+have to manually create a pool for CephFS metadata, but you can create a CRUSH map
+hierarchy for your CephFS metadata pool that includes only SSD storage media.
+See :ref:`CRUSH Device Class<crush-map-device-class>` for details.
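+
+As a rough sketch, pinning an existing CephFS metadata pool to SSD OSDs might
+look like the following; the rule name is arbitrary and
+``{cephfs-metadata-pool}`` is a placeholder for your metadata pool's name:
+
+.. code-block:: console
+
+   # ceph osd crush rule create-replicated ssd-only default host ssd
+   # ceph osd pool set {cephfs-metadata-pool} crush_rule ssd-only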
+
+
+Controllers
+-----------
+
+Disk controllers (HBAs) can have a significant impact on write throughput.
+Carefully consider your selection of HBAs to ensure that they do not create a
+performance bottleneck. Notably, RAID-mode (IR) HBAs may exhibit higher latency
+than simpler "JBOD" (IT) mode HBAs. The RAID SoC, write cache, and battery
+backup can substantially increase hardware and maintenance costs. Many RAID
+HBAs can be configured with an IT-mode "personality" or "JBOD mode" for
+streamlined operation.
+
+You do not need an RoC (RAID-capable) HBA. ZFS or Linux MD software mirroring
+serve well for boot volume durability. When using SAS or SATA data drives,
+forgoing HBA RAID capabilities can reduce the gap between HDD and SSD
+media cost. Moreover, when using NVMe SSDs, you do not need *any* HBA. This
+additionally reduces the HDD vs SSD cost gap when the system as a whole is
+considered. The initial cost of a fancy RAID HBA plus onboard cache plus
+battery backup (BBU or supercapacitor) can easily exceed 1000 US dollars even
+after discounts, a sum that goes a long way toward SSD cost parity.
+An HBA-free system may also cost hundreds of US dollars less every year if one
+purchases an annual maintenance contract or extended warranty.
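+
+As a rough sketch (device names are placeholders, and most OS installers can
+configure this for you), a two-way Linux MD boot mirror can be created with:
+
+.. code-block:: console
+
+   # mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2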
+
+.. tip:: The `Ceph blog`_ is often an excellent source of information on Ceph
+ performance issues. See `Ceph Write Throughput 1`_ and `Ceph Write
+ Throughput 2`_ for additional details.
+
+
+Benchmarking
+------------
+
+BlueStore opens storage devices with ``O_DIRECT`` and issues ``fsync()``
+frequently to ensure that data is safely persisted to media. You can evaluate a
+drive's low-level write performance using ``fio``. For example, 4kB random write
+performance is measured as follows:
+
+.. code-block:: console
+
+ # fio --name=/dev/sdX --ioengine=libaio --direct=1 --fsync=1 --readwrite=randwrite --blocksize=4k --runtime=300
+
+Write Caches
+------------
+
+Enterprise SSDs and HDDs normally include power loss protection features which
+ensure data durability when power is lost while operating, and
+use multi-level caches to speed up direct or synchronous writes. These devices
+can be toggled between two caching modes -- a volatile cache flushed to
+persistent media with fsync, or a non-volatile cache written synchronously.
+
+These two modes are selected by either "enabling" or "disabling" the write
+(volatile) cache. When the volatile cache is enabled, Linux uses a device in
+"write back" mode, and when disabled, it uses "write through".
+
+The default configuration (usually, the volatile cache is enabled) may not be
+optimal: disabling this write cache can dramatically increase IOPS and
+decrease commit latency for OSDs.
+
+Users are therefore encouraged to benchmark their devices with ``fio`` as
+described earlier and persist the optimal cache configuration for their
+devices.
+
+The cache configuration can be queried with ``hdparm``, ``sdparm``,
+``smartctl`` or by reading the values in ``/sys/class/scsi_disk/*/cache_type``,
+for example:
+
+.. code-block:: console
+
+ # hdparm -W /dev/sda
+
+ /dev/sda:
+ write-caching = 1 (on)
+
+ # sdparm --get WCE /dev/sda
+ /dev/sda: ATA TOSHIBA MG07ACA1 0101
+ WCE 1 [cha: y]
+ # smartctl -g wcache /dev/sda
+ smartctl 7.1 2020-04-05 r5049 [x86_64-linux-4.18.0-305.19.1.el8_4.x86_64] (local build)
+ Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org
+
+ Write cache is: Enabled
+
+ # cat /sys/class/scsi_disk/0\:0\:0\:0/cache_type
+ write back
+
+The write cache can be disabled with those same tools:
+
+.. code-block:: console
+
+ # hdparm -W0 /dev/sda
+
+ /dev/sda:
+ setting drive write-caching to 0 (off)
+ write-caching = 0 (off)
+
+ # sdparm --clear WCE /dev/sda
+ /dev/sda: ATA TOSHIBA MG07ACA1 0101
+ # smartctl -s wcache,off /dev/sda
+ smartctl 7.1 2020-04-05 r5049 [x86_64-linux-4.18.0-305.19.1.el8_4.x86_64] (local build)
+ Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org
+
+ === START OF ENABLE/DISABLE COMMANDS SECTION ===
+ Write cache disabled
+
+In most cases, disabling this cache using ``hdparm``, ``sdparm``, or ``smartctl``
+results in the cache_type changing automatically to "write through". If this is
+not the case, you can try setting it directly as follows. (Verify that setting
+``cache_type`` this way actually changes and retains the device's caching
+mode; some drives require the setting to be reapplied at every boot):
+
+.. code-block:: console
+
+ # echo "write through" > /sys/class/scsi_disk/0\:0\:0\:0/cache_type
+
+ # hdparm -W /dev/sda
+
+ /dev/sda:
+ write-caching = 0 (off)
+
+.. tip:: This udev rule (tested on CentOS 8) will set all SATA/SAS device cache_types to "write
+ through":
+
+ .. code-block:: console
+
+ # cat /etc/udev/rules.d/99-ceph-write-through.rules
+ ACTION=="add", SUBSYSTEM=="scsi_disk", ATTR{cache_type}:="write through"
+
+.. tip:: This udev rule (tested on CentOS 7) will set all SATA/SAS device cache_types to "write
+ through":
+
+ .. code-block:: console
+
+ # cat /etc/udev/rules.d/99-ceph-write-through-el7.rules
+ ACTION=="add", SUBSYSTEM=="scsi_disk", RUN+="/bin/sh -c 'echo write through > /sys/class/scsi_disk/$kernel/cache_type'"
+
+.. tip:: The ``sdparm`` utility can be used to view/change the volatile write
+ cache on several devices at once:
+
+ .. code-block:: console
+
+ # sdparm --get WCE /dev/sd*
+ /dev/sda: ATA TOSHIBA MG07ACA1 0101
+ WCE 0 [cha: y]
+ /dev/sdb: ATA TOSHIBA MG07ACA1 0101
+ WCE 0 [cha: y]
+ # sdparm --clear WCE /dev/sd*
+ /dev/sda: ATA TOSHIBA MG07ACA1 0101
+ /dev/sdb: ATA TOSHIBA MG07ACA1 0101
+
+Additional Considerations
+-------------------------
+
+Ceph operators typically provision multiple OSDs per host, but you should
+ensure that the aggregate throughput of your OSD drives doesn't exceed the
+network bandwidth required to service a client's read and write operations.
+You should also consider each host's percentage of the cluster's overall capacity. If
+the percentage located on a particular host is large and the host fails, it
+can lead to problems such as recovery causing OSDs to exceed the ``full ratio``,
+which in turn causes Ceph to halt operations to prevent data loss.
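+
+As a rough worked example: a host with a dozen HDD OSDs, each sustaining on
+the order of 150 MB/s, can generate roughly 1.8 GB/s (about 14 Gb/s) of
+traffic during recovery or heavy client load, which is more than a single
+10 Gb/s link can carry. Likewise, a host that holds a large fraction of the
+cluster's capacity forces the surviving hosts to absorb that entire fraction
+if it fails.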
+
+When you run multiple OSDs per host, you also need to ensure that the kernel
+is up to date. See `OS Recommendations`_ for notes on ``glibc`` and
+``syncfs(2)`` to ensure that your hardware performs as expected when running
+multiple OSDs per host.
+
+
+Networks
+========
+
+Provision at least 10 Gb/s networking in your datacenter, both among Ceph
+hosts and between clients and your Ceph cluster. Network link active/active
+bonding across separate network switches is strongly recommended both for
+increased throughput and for tolerance of network failures and maintenance.
+Take care that your bonding hash policy distributes traffic across links.
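+
+As one illustrative sketch (the interface names, the address, and the
+availability of LACP on your switch pair are assumptions, not recommendations),
+an 802.3ad bond with a layer3+4 transmit hash policy might be expressed in
+netplan as follows:
+
+.. code-block:: console
+
+   # cat /etc/netplan/60-ceph-bond.yaml
+   network:
+     version: 2
+     ethernets:
+       eno1: {}
+       eno2: {}
+     bonds:
+       bond0:
+         interfaces: [eno1, eno2]
+         parameters:
+           mode: 802.3ad
+           transmit-hash-policy: layer3+4
+           mii-monitor-interval: 100
+         addresses: [192.0.2.10/24]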
+
+Speed
+-----
+
+It takes three hours to replicate 1 TB of data across a 1 Gb/s network and it
+takes thirty hours to replicate 10 TB across a 1 Gb/s network. But it takes only
+twenty minutes to replicate 1 TB across a 10 Gb/s network, and it takes
+about three hours to replicate 10 TB across a 10 Gb/s network.
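+
+These estimates follow from simple arithmetic: 1 TB is roughly 8,000 gigabits,
+so a fully utilized 1 Gb/s link needs about 8,000 seconds (a bit over two
+hours) before protocol overhead is considered, and a 10 Gb/s link needs about
+a tenth of that time; scaling to 10 TB multiplies each figure by ten.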
+
+Note that a 40 Gb/s network link is effectively four 10 Gb/s channels in
+parallel, and that a 100 Gb/s network link is effectively four 25 Gb/s channels
+in parallel. Thus, and perhaps somewhat counterintuitively, an individual
+packet on a 25 Gb/s network has slightly lower latency compared to a 40 Gb/s
+network.
+
+
+Cost
+----
+
+The larger the Ceph cluster, the more common OSD failures will be.
+The faster that a placement group (PG) can recover from a degraded state to
+an ``active + clean`` state, the better. Notably, fast recovery minimizes
+the likelihood of multiple, overlapping failures that can cause data to become
+temporarily unavailable or even lost. Of course, when provisioning your
+network, you will have to balance price against performance.
+
+Some deployment tools employ VLANs to make hardware and network cabling more
+manageable. VLANs that use the 802.1q protocol require VLAN-capable NICs and
+switches. The added expense of this hardware may be offset by the operational
+cost savings on network setup and maintenance. When using VLANs to handle VM
+traffic between the cluster and compute stacks (e.g., OpenStack, CloudStack,
+etc.), there is additional value in using 10 Gb/s Ethernet or better; as of
+2022, 40 Gb/s or, increasingly, 25/50/100 Gb/s networking is common for
+production clusters.
+
+Top-of-rack (TOR) switches also need fast and redundant uplinks to
+core / spine network switches or routers, often at least 40 Gb/s.
+
+
+Baseboard Management Controller (BMC)
+-------------------------------------
+
+Your server chassis should have a Baseboard Management Controller (BMC).
+Well-known examples are iDRAC (Dell), CIMC (Cisco UCS), and iLO (HPE).
+Administration and deployment tools may also use BMCs extensively, especially
+via IPMI or Redfish, so consider the cost/benefit tradeoff of an out-of-band
+network for security and administration. Hypervisor SSH access, VM image uploads,
+OS image installs, management sockets, etc. can impose significant loads on a network.
+Running multiple networks may seem like overkill, but each traffic path represents
+a potential capacity, throughput and/or performance bottleneck that you should
+carefully consider before deploying a large scale data cluster.
+
+Additionally, BMCs as of 2023 rarely sport network connections faster than 1 Gb/s,
+so dedicated and inexpensive 1 Gb/s switches for BMC administrative traffic
+may reduce costs by wasting fewer expensive ports on faster host switches.
+
+
+Failure Domains
+===============
+
+A failure domain can be thought of as any component loss that prevents access to
+one or more OSDs or other Ceph daemons. Examples include a stopped daemon on a
+host, a storage drive failure, an OS crash, a malfunctioning NIC, a failed
+power supply,
+a network outage, a power outage, and so forth. When planning your hardware
+deployment, you must balance the risk of reducing costs by placing too many
+responsibilities into too few failure domains against the added costs of
+isolating every potential failure domain.
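+
+For example, if your hosts are spread across racks and your CRUSH map reflects
+that topology, a replicated rule can place each copy in a different rack so
+that the loss of a single rack does not make data unavailable (assuming a
+replicated pool of size three and at least three racks). A sketch, with an
+arbitrary rule name and a placeholder pool name:
+
+.. code-block:: console
+
+   # ceph osd crush rule create-replicated rack-spread default rack
+   # ceph osd pool set {pool-name} crush_rule rack-spread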
+
+
+Minimum Hardware Recommendations
+================================
+
+Ceph can run on inexpensive commodity hardware. Small production clusters
+and development clusters can run successfully with modest hardware. As
+we noted above: when we speak of CPU *cores*, we mean *threads* when
+hyperthreading (HT) is enabled. Each modern physical x64 CPU core typically
+provides two logical CPU threads; other CPU architectures may vary.
+
+Bear in mind that many factors influence resource choices. The minimum
+resources that suffice for one purpose will not necessarily suffice for
+another. A sandbox cluster with one OSD built on a laptop with VirtualBox or on
+a trio of Raspberry Pis will get by with fewer resources than a production
+deployment with a thousand OSDs serving five thousand RBD clients. The classic
+Fisher-Price PXL 2000 captures video, as does an IMAX or RED camera.
+One would not expect the former to do the job of the latter. We especially
+cannot stress enough the criticality of using enterprise-quality storage
+media for production workloads.
+
+Additional insights into resource planning for production clusters are
+found above and elsewhere within this documentation.
+
++--------------+----------------+-----------------------------------------+
+| Process | Criteria | Bare Minimum and Recommended |
++==============+================+=========================================+
+| ``ceph-osd`` | Processor | - 1 core minimum, 2 recommended |
+| | | - 1 core per 200-500 MB/s throughput |
+| | | - 1 core per 1000-3000 IOPS |
+| | | |
+| | | * Results are before replication. |
+| | | * Results may vary across CPU and drive |
+| | | models and Ceph configuration: |
+| | | (erasure coding, compression, etc) |
+| | | * ARM processors specifically may |
+| | | require more cores for performance. |
+| | | * SSD OSDs, especially NVMe, will |
+| | | benefit from additional cores per OSD.|
+| | | * Actual performance depends on many |
+| | | factors including drives, net, and |
+| | | client throughput and latency. |
+| | | Benchmarking is highly recommended. |
+| +----------------+-----------------------------------------+
+| | RAM | - 4GB+ per daemon (more is better) |
+| | | - 2-4GB may function but may be slow |
+| | | - Less than 2GB is not recommended |
+| +----------------+-----------------------------------------+
+| | Storage Drives | 1x storage drive per OSD |
+| +----------------+-----------------------------------------+
+|              | DB/WAL         | 1x SSD partition per HDD OSD            |
+| | (optional) | 4-5x HDD OSDs per DB/WAL SATA SSD |
+|              |                | <= 10 HDD OSDs per DB/WAL NVMe SSD      |
+| +----------------+-----------------------------------------+
+| | Network | 1x 1Gb/s (bonded 10+ Gb/s recommended) |
++--------------+----------------+-----------------------------------------+
+| ``ceph-mon`` | Processor | - 2 cores minimum |
+| +----------------+-----------------------------------------+
+| | RAM | 5GB+ per daemon (large / production |
+| | | clusters need more) |
+| +----------------+-----------------------------------------+
+| | Storage | 100 GB per daemon, SSD is recommended |
+| +----------------+-----------------------------------------+
+| | Network | 1x 1Gb/s (10+ Gb/s recommended) |
++--------------+----------------+-----------------------------------------+
+| ``ceph-mds`` | Processor | - 2 cores minimum |
+| +----------------+-----------------------------------------+
+| | RAM | 2GB+ per daemon (more for production) |
+| +----------------+-----------------------------------------+
+| | Disk Space | 1 GB per daemon |
+| +----------------+-----------------------------------------+
+| | Network | 1x 1Gb/s (10+ Gb/s recommended) |
++--------------+----------------+-----------------------------------------+
+
+.. tip:: If you are running an OSD node with a single storage drive, create a
+ partition for your OSD that is separate from the partition
+ containing the OS. We recommend separate drives for the
+ OS and for OSD storage.
+
+
+
+.. _block and block.db: https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#block-and-block-db
+.. _Ceph blog: https://ceph.com/community/blog/
+.. _Ceph Write Throughput 1: http://ceph.com/community/ceph-performance-part-1-disk-controller-write-throughput/
+.. _Ceph Write Throughput 2: http://ceph.com/community/ceph-performance-part-2-write-throughput-without-ssd-journals/
+.. _Mapping Pools to Different Types of OSDs: ../../rados/operations/crush-map#placing-different-pools-on-different-osds
+.. _OS Recommendations: ../os-recommendations
+.. _Storage Networking Industry Association's Total Cost of Ownership calculator: https://www.snia.org/forums/cmsi/programs/TCOcalc
+.. _Werner Fischer's blog post on partition alignment: https://www.thomas-krenn.com/en/wiki/Partition_Alignment_detailed_explanation
diff --git a/doc/start/intro.rst b/doc/start/intro.rst
new file mode 100644
index 000000000..3a50a8733
--- /dev/null
+++ b/doc/start/intro.rst
@@ -0,0 +1,99 @@
+===============
+ Intro to Ceph
+===============
+
+Ceph can be used to provide :term:`Ceph Object Storage` to :term:`Cloud
+Platforms` and Ceph can be used to provide :term:`Ceph Block Device` services
+to :term:`Cloud Platforms`. Ceph can be used to deploy a :term:`Ceph File
+System`. All :term:`Ceph Storage Cluster` deployments begin with setting up
+each :term:`Ceph Node` and then setting up the network.
+
+A Ceph Storage Cluster requires the following: at least one Ceph Monitor and at
+least one Ceph Manager, and at least as many Ceph OSDs as there are copies of
+an object stored on the Ceph cluster (for example, if three copies of a given
+object are stored on the Ceph cluster, then at least three OSDs must exist in
+that Ceph cluster).
+
+The Ceph Metadata Server is necessary to run Ceph File System clients.
+
+.. note::
+
+ It is a best practice to have a Ceph Manager for each Monitor, but it is not
+ necessary.
+
+.. ditaa::
+
+ +---------------+ +------------+ +------------+ +---------------+
+ | OSDs | | Monitors | | Managers | | MDSs |
+ +---------------+ +------------+ +------------+ +---------------+
+
+- **Monitors**: A :term:`Ceph Monitor` (``ceph-mon``) maintains maps
+ of the cluster state, including the monitor map, manager map, the
+ OSD map, the MDS map, and the CRUSH map. These maps are critical
+ cluster state required for Ceph daemons to coordinate with each other.
+ Monitors are also responsible for managing authentication between
+ daemons and clients. At least three monitors are normally required
+ for redundancy and high availability.
+
+- **Managers**: A :term:`Ceph Manager` daemon (``ceph-mgr``) is
+ responsible for keeping track of runtime metrics and the current
+ state of the Ceph cluster, including storage utilization, current
+ performance metrics, and system load. The Ceph Manager daemons also
+ host python-based modules to manage and expose Ceph cluster
+ information, including a web-based :ref:`mgr-dashboard` and
+ `REST API`_. At least two managers are normally required for high
+ availability.
+
+- **Ceph OSDs**: An Object Storage Daemon (:term:`Ceph OSD`,
+ ``ceph-osd``) stores data, handles data replication, recovery,
+ rebalancing, and provides some monitoring information to Ceph
+ Monitors and Managers by checking other Ceph OSD Daemons for a
+ heartbeat. At least three Ceph OSDs are normally required for
+ redundancy and high availability.
+
+- **MDSs**: A :term:`Ceph Metadata Server` (MDS, ``ceph-mds``) stores
+ metadata on behalf of the :term:`Ceph File System` (i.e., Ceph Block
+ Devices and Ceph Object Storage do not use MDS). Ceph Metadata
+ Servers allow POSIX file system users to execute basic commands (like
+ ``ls``, ``find``, etc.) without placing an enormous burden on the
+ Ceph Storage Cluster.
+
+Ceph stores data as objects within logical storage pools. Using the
+:term:`CRUSH` algorithm, Ceph calculates which placement group (PG) should
+contain the object, and which OSD should store the placement group. The
+CRUSH algorithm enables the Ceph Storage Cluster to scale, rebalance, and
+recover dynamically.
+
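+You can see this calculation for yourself: given a pool and an object name,
+``ceph osd map`` reports the placement group and the set of OSDs that CRUSH
+selects. The pool and object names below are only examples and the output is
+representative:
+
+.. code-block:: console
+
+   # ceph osd map rbd foo
+   osdmap e42 pool 'rbd' (1) object 'foo' -> pg 1.7fc1f406 (1.6) -> up ([2,0,1], p2) acting ([2,0,1], p2)
+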
+.. _REST API: ../../mgr/restful
+
+.. container:: columns-2
+
+ .. container:: column
+
+ .. raw:: html
+
+ <h3>Recommendations</h3>
+
+ To begin using Ceph in production, you should review our hardware
+ recommendations and operating system recommendations.
+
+ .. toctree::
+ :maxdepth: 2
+
+ Hardware Recommendations <hardware-recommendations>
+ OS Recommendations <os-recommendations>
+
+ .. container:: column
+
+ .. raw:: html
+
+ <h3>Get Involved</h3>
+
+ You can avail yourself of help or contribute documentation, source
+ code or bugs by getting involved in the Ceph community.
+
+ .. toctree::
+ :maxdepth: 2
+
+ get-involved
+ documenting-ceph
diff --git a/doc/start/os-recommendations.rst b/doc/start/os-recommendations.rst
new file mode 100644
index 000000000..81906569e
--- /dev/null
+++ b/doc/start/os-recommendations.rst
@@ -0,0 +1,82 @@
+====================
+ OS Recommendations
+====================
+
+Ceph Dependencies
+=================
+
+As a general rule, we recommend deploying Ceph on newer releases of Linux.
+We also recommend deploying on releases with long-term support.
+
+Linux Kernel
+------------
+
+- **Ceph Kernel Client**
+
+ If you are using the kernel client to map RBD block devices or mount
+ CephFS, the general advice is to use a "stable" or "longterm
+ maintenance" kernel series provided by either http://kernel.org or
+ your Linux distribution on any client hosts.
+
+ For RBD, if you choose to *track* long-term kernels, we recommend
+ *at least* 4.19-based "longterm maintenance" kernel series. If you can
+ use a newer "stable" or "longterm maintenance" kernel series, do it.
+
+ For CephFS, see the section about `Mounting CephFS using Kernel Driver`_
+ for kernel version guidance.
+
+ Older kernel client versions may not support your `CRUSH tunables`_ profile
+ or other newer features of the Ceph cluster, requiring the storage cluster to
+ be configured with those features disabled. For RBD, a kernel of version 5.3
+ or CentOS 8.2 is the minimum necessary for reasonable support for RBD image
+ features.
+
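+A quick way to sanity-check a client host is to confirm the kernel it is
+running and the features enabled on the images it will map. The pool and
+image names below are placeholders:
+
+.. code-block:: console
+
+   # uname -r
+   # rbd info {pool-name}/{image-name} | grep features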
+
+Platforms
+=========
+
+The chart below shows which Linux platforms Ceph provides packages for, and
+which platforms Ceph has been tested on.
+
+Ceph does not require a specific Linux distribution. Ceph can run on any
+distribution that includes a supported kernel and supported system startup
+framework, for example ``sysvinit`` or ``systemd``. Ceph is sometimes ported to
+non-Linux systems but these are not supported by the core Ceph effort.
+
+
++---------------+---------------+-----------------+------------------+------------------+
+| | Reef (18.2.z) | Quincy (17.2.z) | Pacific (16.2.z) | Octopus (15.2.z) |
++===============+===============+=================+==================+==================+
+| Centos 7 | | | A | B |
++---------------+---------------+-----------------+------------------+------------------+
+| Centos 8 | A | A | A | A |
++---------------+---------------+-----------------+------------------+------------------+
+| Centos 9 | A | | | |
++---------------+---------------+-----------------+------------------+------------------+
+| Debian 10 | C | | C | C |
++---------------+---------------+-----------------+------------------+------------------+
+| Debian 11 | C | C | C | |
++---------------+---------------+-----------------+------------------+------------------+
+| OpenSUSE 15.2 | C | | C | C |
++---------------+---------------+-----------------+------------------+------------------+
+| OpenSUSE 15.3 | C | C | | |
++---------------+---------------+-----------------+------------------+------------------+
+| Ubuntu 18.04 | | | C | C |
++---------------+---------------+-----------------+------------------+------------------+
+| Ubuntu 20.04 | A | A | A | A |
++---------------+---------------+-----------------+------------------+------------------+
+| Ubuntu 22.04 | A | | | |
++---------------+---------------+-----------------+------------------+------------------+
+
+- **A**: Ceph provides packages and has done comprehensive tests on the software in them.
+- **B**: Ceph provides packages and has done basic tests on the software in them.
+- **C**: Ceph provides packages only. No tests have been done on these releases.
+
+.. note::
+   **For CentOS 7 Users**
+
+   ``Btrfs`` is no longer tested on CentOS 7 in the Octopus release. We recommend using ``bluestore`` instead.
+
+.. _CRUSH Tunables: ../../rados/operations/crush-map#tunables
+
+.. _Mounting CephFS using Kernel Driver: ../../cephfs/mount-using-kernel-driver#which-kernel-version
diff --git a/doc/start/quick-rbd.rst b/doc/start/quick-rbd.rst
new file mode 100644
index 000000000..c1cf77098
--- /dev/null
+++ b/doc/start/quick-rbd.rst
@@ -0,0 +1,69 @@
+==========================
+ Block Device Quick Start
+==========================
+
+Ensure your :term:`Ceph Storage Cluster` is in an ``active + clean`` state
+before working with the :term:`Ceph Block Device`.
+
+.. note:: The Ceph Block Device is also known as :term:`RBD` or :term:`RADOS`
+ Block Device.
+
+
+.. ditaa::
+
+ /------------------\ /----------------\
+ | Admin Node | | ceph-client |
+ | +-------->+ cCCC |
+ | ceph-deploy | | ceph |
+ \------------------/ \----------------/
+
+
+You may use a virtual machine for your ``ceph-client`` node, but do not
+execute the following procedures on the same physical node as your Ceph
+Storage Cluster nodes (unless you use a VM). See `FAQ`_ for details.
+
+Create a Block Device Pool
+==========================
+
+#. On the admin node, use the ``ceph`` tool to `create a pool`_
+   (we recommend the name 'rbd'); a combined example of both steps is shown
+   below.
+
+#. On the admin node, use the ``rbd`` tool to initialize the pool for use by RBD::
+
+ rbd pool init <pool-name>
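+
+For example, assuming the recommended pool name 'rbd', these two steps might
+look like the following (a sketch; adjust names and options for your
+environment)::
+
+    ceph osd pool create rbd
+    rbd pool init rbd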
+
+Configure a Block Device
+========================
+
+#. On the ``ceph-client`` node, create a block device image. ::
+
+ rbd create foo --size 4096 --image-feature layering [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring] [-p {pool-name}]
+
+#. On the ``ceph-client`` node, map the image to a block device. ::
+
+ sudo rbd map foo --name client.admin [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring] [-p {pool-name}]
+
+#. Use the block device by creating a file system on the ``ceph-client``
+ node. ::
+
+ sudo mkfs.ext4 -m0 /dev/rbd/{pool-name}/foo
+
+ This may take a few moments.
+
+#. Mount the file system on the ``ceph-client`` node. ::
+
+ sudo mkdir /mnt/ceph-block-device
+ sudo mount /dev/rbd/{pool-name}/foo /mnt/ceph-block-device
+ cd /mnt/ceph-block-device
+
+#. Optionally configure the block device to be automatically mapped and mounted
+ at boot (and unmounted/unmapped at shutdown) - see the `rbdmap manpage`_.
+
+
+See `block devices`_ for additional details.
+
+.. _create a pool: ../../rados/operations/pools/#create-a-pool
+.. _block devices: ../../rbd
+.. _FAQ: http://wiki.ceph.com/How_Can_I_Give_Ceph_a_Try
+.. _OS Recommendations: ../os-recommendations
+.. _rbdmap manpage: ../../man/8/rbdmap