author     Daniel Baumann <daniel.baumann@progress-linux.org>  2024-04-07 18:45:59 +0000
committer  Daniel Baumann <daniel.baumann@progress-linux.org>  2024-04-07 18:45:59 +0000
commit     19fcec84d8d7d21e796c7624e521b60d28ee21ed (patch)
tree       42d26aa27d1e3f7c0b8bd3fd14e7d7082f5008dc /doc/start
parent     Initial commit. (diff)
download   ceph-19fcec84d8d7d21e796c7624e521b60d28ee21ed.tar.xz
           ceph-19fcec84d8d7d21e796c7624e521b60d28ee21ed.zip

Adding upstream version 16.2.11+ds.

Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'doc/start')

 -rw-r--r--  doc/start/ceph.conf                      |   3
 -rw-r--r--  doc/start/documenting-ceph.rst           | 793
 -rw-r--r--  doc/start/get-involved.rst               |  91
 -rw-r--r--  doc/start/hardware-recommendations.rst   | 509
 -rw-r--r--  doc/start/intro.rst                      |  89
 -rw-r--r--  doc/start/os-recommendations.rst         |  65
 -rw-r--r--  doc/start/quick-rbd.rst                  |  69
 -rw-r--r--  doc/start/rgw.conf                       |  30

 8 files changed, 1649 insertions(+), 0 deletions(-)
diff --git a/doc/start/ceph.conf b/doc/start/ceph.conf
new file mode 100644
index 000000000..f3d558eb0
--- /dev/null
+++ b/doc/start/ceph.conf
@@ -0,0 +1,3 @@
+[global]
+ # list your monitors here
+ mon host = {mon-host-1}, {mon-host-2}
diff --git a/doc/start/documenting-ceph.rst b/doc/start/documenting-ceph.rst
new file mode 100644
index 000000000..93b266d16
--- /dev/null
+++ b/doc/start/documenting-ceph.rst
@@ -0,0 +1,793 @@
+==================
+ Documenting Ceph
+==================
+
+The **easiest way** to help the Ceph project is to contribute to the
+documentation. As the Ceph user base grows and the development pace quickens, an
+increasing number of people are updating the documentation and adding new
+information. Even small contributions like fixing spelling errors or clarifying
+instructions will help the Ceph project immensely.
+
+The Ceph documentation source resides in the ``ceph/doc`` directory of the Ceph
+repository, and Python Sphinx renders the source into HTML and manpages. The
+https://docs.ceph.com link currently displays the ``main`` branch by default,
+but you may view documentation for older branches (e.g., ``argonaut``) or future
+branches (e.g., ``next``) as well as work-in-progress branches by substituting
+``main`` with the branch name you prefer.
+
+One way to suggest a documentation correction is to send an email to
+ceph-docs@redhat.com describing the change. Another way is to make a pull
+request. The instructions for making a pull request against the Ceph
+documentation are in the section :ref:`making_contributions`.
+
+If this is your first time making an improvement to the documentation or
+if you have noticed a small mistake (such as a spelling error or a typo),
+it will be easier to send an email than to make a pull request. You will
+be credited for the improvement unless you instruct Ceph Upstream
+Documentation not to credit you.
+
+Location of the Documentation in the Repository
+===============================================
+
+The Ceph documentation source is in the ``ceph/doc`` directory of the Ceph
+repository. Python Sphinx renders the source into HTML and manpages.
+
+Viewing Old Ceph Documentation
+==============================
+The https://docs.ceph.com link displays the latest release branch by default
+(for example, if "Quincy" is the most recent release, then by default
+https://docs.ceph.com displays the documentation for Quincy), but you can view
+the documentation for older versions of Ceph (for example, ``pacific``) by
+replacing the version name in the URL (for example, ``quincy`` in
+`https://docs.ceph.com/en/quincy <https://docs.ceph.com/en/quincy>`_) with the
+branch name you prefer (for example, ``pacific``, to create a URL that reads
+`https://docs.ceph.com/en/pacific/ <https://docs.ceph.com/en/pacific/>`_).
+
+.. _making_contributions:
+
+Making Contributions
+====================
+
+Making a documentation contribution generally involves the same procedural
+sequence as making a code contribution, except that you must build documentation
+source instead of compiling program source. The sequence includes the following
+steps:
+
+#. `Get the Source`_
+#. `Select a Branch`_
+#. `Make a Change`_
+#. `Build the Source`_
+#. `Commit the Change`_
+#. `Push the Change`_
+#. `Make a Pull Request`_
+#. `Notify Us`_
+
+Get the Source
+--------------
+
+Ceph documentation lives in the Ceph repository right alongside the Ceph source
+code under the ``ceph/doc`` directory. For details on github and Ceph,
+see :ref:`Get Involved`.
+
+The most common way to make contributions is to use the `Fork and Pull`_
+approach. You must:
+
+#. Install git locally. For Debian/Ubuntu, execute:
+
+ .. prompt:: bash $
+
+ sudo apt-get install git
+
+ For Fedora, execute:
+
+ .. prompt:: bash $
+
+ sudo yum install git
+
+ For CentOS/RHEL, execute:
+
+ .. prompt:: bash $
+
+ sudo yum install git
+
+#. Ensure your ``.gitconfig`` file has your name and email address:
+
+ .. code-block:: ini
+
+ [user]
+ email = {your-email-address}
+ name = {your-name}
+
+ For example:
+
+ .. prompt:: bash $
+
+ git config --global user.name "John Doe"
+ git config --global user.email johndoe@example.com
+
+
+#. Create a `github`_ account (if you don't have one).
+
+#. Fork the Ceph project. See https://github.com/ceph/ceph.
+
+#. Clone your fork of the Ceph project to your local host.
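+
+ For example, a typical clone command looks like this (substitute your own
+ GitHub username for the placeholder):
+
+ .. prompt:: bash $
+
+ git clone https://github.com/{your-github-username}/ceph.git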
+
+
+Ceph organizes documentation into an information architecture primarily by its
+main components.
+
+- **Ceph Storage Cluster:** The Ceph Storage Cluster documentation resides
+ under the ``doc/rados`` directory.
+
+- **Ceph Block Device:** The Ceph Block Device documentation resides under
+ the ``doc/rbd`` directory.
+
+- **Ceph Object Storage:** The Ceph Object Storage documentation resides under
+ the ``doc/radosgw`` directory.
+
+- **Ceph File System:** The Ceph File System documentation resides under the
+ ``doc/cephfs`` directory.
+
+- **Installation (Quick):** Quick start documentation resides under the
+ ``doc/start`` directory.
+
+- **Installation (Manual):** Manual installation documentation resides under
+ the ``doc/install`` directory.
+
+- **Manpage:** Manpage source resides under the ``doc/man`` directory.
+
+- **Developer:** Developer documentation resides under the ``doc/dev``
+ directory.
+
+- **Images:** If you include images such as JPEG or PNG files, you should
+ store them under the ``doc/images`` directory.
+
+
+Select a Branch
+---------------
+
+When you make small changes to the documentation, such as fixing typographical
+errors or clarifying explanations, use the ``main`` branch (default). You
+should also use the ``main`` branch when making contributions to features that
+are in the current release. ``main`` is the most commonly used branch:
+
+.. prompt:: bash $
+
+ git checkout main
+
+When you make changes to documentation that affect an upcoming release, use
+the ``next`` branch. ``next`` is the second most commonly used branch:
+
+.. prompt:: bash $
+
+ git checkout next
+
+Create a separate branch when you are making substantial contributions (such
+as new features that are not yet in the current release), when your
+contribution is related to an issue that has a tracker ID, or when you want to
+see your documentation rendered on the Ceph.com website before it gets merged
+into the ``main`` branch. To distinguish branches that include only
+documentation updates, we
+prepend them with ``wip-doc`` by convention, following the form
+``wip-doc-{your-branch-name}``. If the branch relates to an issue filed in
+http://tracker.ceph.com/issues, the branch name incorporates the issue number.
+For example, if a documentation branch is a fix for issue #4000, the branch name
+should be ``wip-doc-4000`` by convention and the relevant tracker URL will be
+http://tracker.ceph.com/issues/4000.
+
+.. note:: Please do not mingle documentation contributions and source code
+ contributions in a single commit. When you keep documentation
+ commits separate from source code commits, it simplifies the review
+ process. We highly recommend that any pull request that adds a feature or
+ a configuration option should also include a documentation commit that
+ describes the changes.
+
+Before you create your branch name, ensure that it doesn't already exist in the
+local or remote repository:
+
+.. prompt:: bash $
+
+ git branch -a | grep wip-doc-{your-branch-name}
+
+If it doesn't exist, create your branch:
+
+.. prompt:: bash $
+
+ git checkout -b wip-doc-{your-branch-name}
+
+
+Make a Change
+-------------
+
+Modifying a document involves opening a reStructuredText file, changing
+its contents, and saving the changes. See `Documentation Style Guide`_ for
+details on syntax requirements.
+
+Adding a document involves creating a new reStructuredText file within the
+``doc`` directory tree with a ``*.rst``
+extension. You must also include a reference to the document: a hyperlink
+or a table of contents entry. The ``index.rst`` file of a top-level directory
+usually contains a TOC, where you can add the new file name. All documents must
+have a title. See `Headings`_ for details.
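+
+For example, if you added a hypothetical ``example.rst`` file to the
+``doc/rados`` directory (the file name is only an illustration), you could
+reference it from the TOC in ``doc/rados/index.rst`` by adding its name to the
+``toctree`` directive:
+
+.. code-block:: rst
+
+ .. toctree::
+
+    example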
+
+Your new document doesn't get tracked by ``git`` automatically. When you want
+to add the document to the repository, you must use ``git add
+{path-to-filename}``. For example, from the top level directory of the
+repository, adding an ``example.rst`` file to the ``rados`` subdirectory would
+look like this:
+
+.. prompt:: bash $
+
+ git add doc/rados/example.rst
+
+Deleting a document involves removing it from the repository with ``git rm
+{path-to-filename}``. For example:
+
+.. prompt:: bash $
+
+ git rm doc/rados/example.rst
+
+You must also remove any reference to a deleted document from other documents.
+
+
+Build the Source
+----------------
+
+To build the documentation, navigate to the ``ceph`` repository directory:
+
+
+.. prompt:: bash $
+
+ cd ceph
+
+.. note::
+ The directory that contains ``build-doc`` and ``serve-doc`` must be included
+ in the ``PATH`` environment variable in order for these commands to work.
+
+
+To build the documentation on Debian/Ubuntu, Fedora, or CentOS/RHEL, execute:
+
+.. prompt:: bash $
+
+ admin/build-doc
+
+To scan for the reachability of external links, execute:
+
+.. prompt:: bash $
+
+ admin/build-doc linkcheck
+
+Executing ``admin/build-doc`` will create a ``build-doc`` directory under
+``ceph``. You may need to create a directory under ``ceph/build-doc`` for
+output of Javadoc files:
+
+.. prompt:: bash $
+
+ mkdir -p output/html/api/libcephfs-java/javadoc
+
+The build script ``build-doc`` will report errors and warnings as it runs.
+You MUST fix errors in documents you modified before committing a change, and
+you SHOULD fix warnings that are related to syntax you modified.
+
+.. important:: You must validate ALL HYPERLINKS. If a hyperlink is broken,
+ it automatically breaks the build!
+
+Once you build the documentation set, you may start an HTTP server at
+``http://localhost:8080/`` to view it:
+
+.. prompt:: bash $
+
+ admin/serve-doc
+
+You can also navigate to ``build-doc/output`` to inspect the built documents.
+There should be an ``html`` directory and a ``man`` directory containing
+documentation in HTML and manpage formats respectively.
+
+Build the Source (First Time)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Ceph uses Python Sphinx, which is generally distribution agnostic. The first
+time you build the Ceph documentation, it will generate a doxygen XML tree,
+which is somewhat time-consuming.
+
+Python Sphinx does have some dependencies that vary across distributions. The
+first time you build the documentation, the script will notify you if you do not
+have the dependencies installed. To run Sphinx and build documentation successfully,
+the following packages are required:
+
+.. raw:: html
+
+ <style type="text/css">div.body h3{margin:5px 0px 0px 0px;}</style>
+ <table cellpadding="10"><colgroup><col width="30%"><col width="30%"><col width="30%"></colgroup><tbody valign="top"><tr><td><h3>Debian/Ubuntu</h3>
+
+- gcc
+- python3-dev
+- python3-pip
+- python3-sphinx
+- python3-venv
+- libxml2-dev
+- libxslt1-dev
+- doxygen
+- graphviz
+- ant
+- ditaa
+
+.. raw:: html
+
+ </td><td><h3>Fedora</h3>
+
+- gcc
+- python-devel
+- python-pip
+- python-docutils
+- python-jinja2
+- python-pygments
+- python-sphinx
+- libxml2-devel
+- libxslt-devel
+- doxygen
+- graphviz
+- ant
+- ditaa
+
+.. raw:: html
+
+ </td><td><h3>CentOS/RHEL</h3>
+
+- gcc
+- python-devel
+- python-pip
+- python-docutils
+- python-jinja2
+- python-pygments
+- python-sphinx
+- libxml2-devel
+- libxslt-devel
+- doxygen
+- graphviz
+- ant
+
+.. raw:: html
+
+ </td></tr></tbody></table>
+
+
+Install each dependency that is not installed on your host. For Debian/Ubuntu
+distributions, execute the following:
+
+.. prompt:: bash $
+
+ sudo apt-get install gcc python3-dev python3-pip python3-venv libxml2-dev libxslt1-dev doxygen graphviz ant ditaa
+ sudo apt-get install python3-sphinx
+
+For Fedora distributions, execute the following:
+
+.. prompt:: bash $
+
+ sudo yum install gcc python-devel python-pip libxml2-devel libxslt-devel doxygen graphviz ant
+ sudo pip install html2text
+ sudo yum install python-jinja2 python-pygments python-docutils python-sphinx
+ sudo yum install jericho-html ditaa
+
+For CentOS/RHEL distributions, it is recommended to install the ``epel`` (Extra
+Packages for Enterprise Linux) repository, as it provides some extra packages
+that are not available in the default repository. To install ``epel``, execute
+the following:
+
+.. prompt:: bash $
+
+ sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
+
+For CentOS/RHEL distributions, execute the following:
+
+.. prompt:: bash $
+
+ sudo yum install gcc python-devel python-pip libxml2-devel libxslt-devel doxygen graphviz ant
+ sudo pip install html2text
+
+For CentOS/RHEL distributions, the remaining python packages are not available
+in the default and ``epel`` repositories. So, use http://rpmfind.net/ to find
+the packages. Then, download them from a mirror and install them. For example:
+
+.. prompt:: bash $
+
+ wget http://rpmfind.net/linux/centos/7/os/x86_64/Packages/python-jinja2-2.7.2-2.el7.noarch.rpm
+ sudo yum install python-jinja2-2.7.2-2.el7.noarch.rpm
+ wget http://rpmfind.net/linux/centos/7/os/x86_64/Packages/python-pygments-1.4-9.el7.noarch.rpm
+ sudo yum install python-pygments-1.4-9.el7.noarch.rpm
+ wget http://rpmfind.net/linux/centos/7/os/x86_64/Packages/python-docutils-0.11-0.2.20130715svn7687.el7.noarch.rpm
+ sudo yum install python-docutils-0.11-0.2.20130715svn7687.el7.noarch.rpm
+ wget http://rpmfind.net/linux/centos/7/os/x86_64/Packages/python-sphinx-1.1.3-11.el7.noarch.rpm
+ sudo yum install python-sphinx-1.1.3-11.el7.noarch.rpm
+
+Ceph documentation makes extensive use of `ditaa`_, which is not presently built
+for CentOS/RHEL7. You must install ``ditaa`` if you are making changes to
+``ditaa`` diagrams so that you can verify that they render properly before you
+commit new or modified ``ditaa`` diagrams. You may retrieve compatible
+packages for CentOS/RHEL distributions and install them manually. To run
+``ditaa`` on CentOS/RHEL7, the following dependencies are required:
+
+- jericho-html
+- jai-imageio-core
+- batik
+
+Use http://rpmfind.net/ to find compatible ``ditaa`` and the dependencies.
+Then, download them from a mirror and install them. For example:
+
+.. prompt:: bash $
+
+ wget http://rpmfind.net/linux/fedora/linux/releases/22/Everything/x86_64/os/Packages/j/jericho-html-3.3-4.fc22.noarch.rpm
+ sudo yum install jericho-html-3.3-4.fc22.noarch.rpm
+ wget http://rpmfind.net/linux/centos/7/os/x86_64/Packages/jai-imageio-core-1.2-0.14.20100217cvs.el7.noarch.rpm
+ sudo yum install jai-imageio-core-1.2-0.14.20100217cvs.el7.noarch.rpm
+ wget http://rpmfind.net/linux/centos/7/os/x86_64/Packages/batik-1.8-0.12.svn1230816.el7.noarch.rpm
+ sudo yum install batik-1.8-0.12.svn1230816.el7.noarch.rpm
+ wget http://rpmfind.net/linux/fedora/linux/releases/22/Everything/x86_64/os/Packages/d/ditaa-0.9-13.r74.fc21.noarch.rpm
+ sudo yum install ditaa-0.9-13.r74.fc21.noarch.rpm
+
+Once you have installed all these packages, build the documentation by following
+the steps given in `Build the Source`_.
+
+
+Commit the Change
+-----------------
+
+Ceph documentation commits are simple, but follow a strict convention:
+
+- A commit SHOULD touch only one file (this simplifies rollback). You MAY
+  commit multiple files with related changes. Unrelated changes SHOULD NOT
+  be put into the same commit.
+- A commit MUST have a comment.
+- A commit comment MUST be prepended with ``doc:``. (strict)
+- The comment summary MUST be one line only. (strict)
+- Additional comments MAY follow a blank line after the summary,
+ but should be terse.
+- A commit MAY include ``Fixes: #{bug number}``.
+- Commits MUST include ``Signed-off-by: Firstname Lastname <email>``. (strict)
+
+.. tip:: Follow the foregoing convention particularly where it says
+ ``(strict)`` or you will be asked to modify your commit to comply with
+ this convention.
+
+The following is a common commit comment (preferred)::
+
+ doc: Fixes a spelling error and a broken hyperlink.
+
+ Signed-off-by: John Doe <john.doe@gmail.com>
+
+
+The following comment includes a reference to a bug::
+
+ doc: Fixes a spelling error and a broken hyperlink.
+
+ Fixes: #1234
+
+ Signed-off-by: John Doe <john.doe@gmail.com>
+
+
+The following comment includes a terse sentence following the comment summary.
+There is a carriage return between the summary line and the description::
+
+ doc: Added mon setting to monitor config reference
+
+ Describes 'mon setting', which is a new setting added
+ to config_opts.h.
+
+ Signed-off-by: John Doe <john.doe@gmail.com>
+
+
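+The conventions marked ``(strict)`` are mechanical enough to check before you
+push. For example, this small shell sketch (a hypothetical helper, not part of
+the Ceph tooling) inspects the most recent commit for the ``doc:`` summary
+prefix and for the sign-off line:
+
+.. prompt:: bash $
+
+ git log -1 --pretty=%s | grep -q '^doc:' || echo 'summary must start with doc:'
+ git log -1 --pretty=%B | grep -q '^Signed-off-by: ' || echo 'missing Signed-off-by line'
+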
+To commit changes, execute the following:
+
+.. prompt:: bash $
+
+ git commit -a
+
+
+An easy way to manage your documentation commits is to use visual tools for
+``git``. For example, ``gitk`` provides a graphical interface for viewing the
+repository history, and ``git-gui`` provides a graphical interface for viewing
+your uncommitted changes, staging them for commit, committing the changes and
+pushing them to your forked Ceph repository.
+
+
+For Debian/Ubuntu, execute:
+
+.. prompt:: bash $
+
+ sudo apt-get install gitk git-gui
+
+For Fedora/CentOS/RHEL, execute:
+
+.. prompt:: bash $
+
+ sudo yum install gitk git-gui
+
+Then, execute:
+
+.. prompt:: bash $
+
+ cd {git-ceph-repo-path}
+ gitk
+
+Finally, select **File->Start git gui** to activate the graphical user interface.
+
+
+Push the Change
+---------------
+
+Once you have one or more commits, you must push them from the local copy of the
+repository to ``github``. A graphical tool like ``git-gui`` provides a user
+interface for pushing to the repository. If you created a branch previously:
+
+.. prompt:: bash $
+
+ git push origin wip-doc-{your-branch-name}
+
+Otherwise:
+
+.. prompt:: bash $
+
+ git push
+
+
+Make a Pull Request
+-------------------
+
+As noted earlier, you can make documentation contributions using the `Fork and
+Pull`_ approach.
+
+
+
+Notify Us
+---------
+
+After you make a pull request, please email ceph-docs@redhat.com.
+
+
+
+Documentation Style Guide
+=========================
+
+One objective of the Ceph documentation project is to ensure the readability of
+the documentation in both native restructuredText format and its rendered
+formats such as HTML. Navigate to your Ceph repository and view a document in
+its native format. You may notice that it is generally as legible in a terminal
+as it is in its rendered HTML format. Additionally, you may also notice that
+diagrams in ``ditaa`` format also render reasonably well in text mode:
+
+.. prompt:: bash $
+
+ less doc/architecture.rst
+
+Review the following style guides to maintain this consistency.
+
+
+Headings
+--------
+
+#. **Document Titles:** Document titles use the ``=`` character overline and
+ underline with a leading and trailing space on the title text line.
+ See `Document Title`_ for details.
+
+#. **Section Titles:** Section titles use the ``=`` character underline with no
+ leading or trailing spaces for text. Two carriage returns should precede a
+ section title (unless an inline reference precedes it). See `Sections`_ for
+ details.
+
+#. **Subsection Titles:** Subsection titles use the ``-`` character underline
+ with no leading or trailing spaces for text. Two carriage returns should
+ precede a subsection title (unless an inline reference precedes it).
+
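+The three heading levels described above look like this in a source file (the
+titles are only illustrations):
+
+.. code-block:: rst
+
+ =================
+  A Document Title
+ =================
+
+ A Section Title
+ ===============
+
+ A Subsection Title
+ ------------------
+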
+
+Text Body
+---------
+
+As a general rule, we prefer text to wrap at column 80 so that it is legible in
+a command line interface without leading or trailing white space. Where
+possible, we prefer to maintain this convention with text, lists, literal text
+(exceptions allowed), tables, and ``ditaa`` graphics.
+
+#. **Paragraphs**: Paragraphs have a leading and a trailing carriage return,
+ and should be 80 characters wide or less so that the documentation can be
+ read in native format in a command line terminal.
+
+#. **Literal Text:** To create an example of literal text (e.g., command line
+ usage), terminate the preceding paragraph with ``::`` or enter a carriage
+ return to create an empty line after the preceding paragraph; then, enter
+ ``::`` on a separate line followed by another empty line. Then, begin the
+ literal text with tab indentation (preferred) or space indentation of 3
+ characters.
+
+#. **Indented Text:** Indented text such as bullet points
+ (e.g., ``- some text``) may span multiple lines. The text of subsequent
+ lines should begin at the same character position as the text of the
+ indented text (less numbers, bullets, etc.).
+
+ Indented text may include literal text examples. Whereas text indentation
+ should be done with spaces, literal text examples should be indented with
+ tabs. This convention enables you to add an additional indented paragraph
+ following a literal example by leaving a blank line and beginning the
+ subsequent paragraph with space indentation.
+
+#. **Numbered Lists:** Numbered lists should use autonumbering by starting
+ a numbered indent with ``#.`` instead of the actual number so that
+ numbered paragraphs can be repositioned without requiring manual
+ renumbering.
+
+#. **Code Examples:** Ceph supports the use of the
+ ``.. code-block:: <language>`` directive, so that you can add highlighting
+ to source examples. This is preferred for source code. However, use of this
+ directive will cause autonumbering to restart at 1 if it is used as an
+ example within a numbered list. See `Showing code examples`_ for details.
+
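+As an illustration of the literal text and autonumbering conventions above, a
+numbered list item that introduces a literal example looks like this in
+source:
+
+.. code-block:: rst
+
+ #. To list the files in a directory, execute the following::
+
+       ls -l
+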
+
+Paragraph Level Markup
+----------------------
+
+The Ceph project uses `paragraph level markup`_ to highlight points.
+
+#. **Tip:** Use the ``.. tip::`` directive to provide additional information
+ that assists the reader or steers the reader away from trouble.
+
+#. **Note**: Use the ``.. note::`` directive to highlight an important point.
+
+#. **Important:** Use the ``.. important::`` directive to highlight important
+ requirements or caveats (e.g., anything that could lead to data loss). Use
+ this directive sparingly, because it renders in red.
+
+#. **Version Added:** Use the ``.. versionadded::`` directive for new features
+ or configuration settings so that users know the minimum release for using
+ a feature.
+
+#. **Version Changed:** Use the ``.. versionchanged::`` directive for changes
+ in usage or configuration settings.
+
+#. **Deprecated:** Use the ``.. deprecated::`` directive when CLI usage,
+ a feature or a configuration setting is no longer preferred or will be
+ discontinued.
+
+#. **Topic:** Use the ``.. topic::`` directive to encapsulate text that is
+ outside the main flow of the document. See the `topic directive`_ for
+ additional details.
+
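+For example, a tip and a note look like this in source (the wording is only an
+illustration):
+
+.. code-block:: rst
+
+ .. tip:: Keep documentation commits separate from source code commits.
+
+ .. note:: A broken hyperlink breaks the build.
+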
+
+Table of Contents (TOC) and Hyperlinks
+---------------------------------------
+
+The documents in the Ceph documentation suite follow certain conventions that
+are explained in this section.
+
+Every document (every ``.rst`` file) in the Sphinx-controlled Ceph
+documentation suite must be linked either (1) from another document in the
+documentation suite or (2) from a table of contents (TOC). If any document in
+the documentation suite is not linked in this way, the ``build-doc`` script
+generates warnings when it tries to build the documentation.
+
+The Ceph project uses the ``.. toctree::`` directive. See `The TOC tree`_ for
+details. When rendering a table of contents (TOC), specify the ``:maxdepth:``
+parameter so that the rendered TOC is not too long.
+
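+For example, a TOC that renders only the top-level headings of the listed
+documents might look like this (the file names are illustrations drawn from
+``doc/start``):
+
+.. code-block:: rst
+
+ .. toctree::
+    :maxdepth: 1
+
+    intro
+    hardware-recommendations
+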
+Use the ``:ref:`` syntax where a link target contains a specific unique
+identifier (for example, ``.. _unique-target-id:``). A link to the section
+designated by ``.. _unique-target-id:`` looks like this:
+``:ref:`unique-target-id```. If this convention is followed, the links within
+the ``.rst`` source files will work even if the source files are moved within
+the ``ceph/doc`` directory. See `Cross referencing arbitrary locations`_ for
+details.
+
+.. _start_external_hyperlink_example:
+
+External Hyperlink Example
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+It is also possible to create a link to a section of the documentation and to
+have custom text appear in the body of the link. This is useful when it is more
+important to preserve the text of the sentence containing the link than it is
+to refer explicitly to the title of the section being linked to.
+
+For example, RST that links to the Sphinx Python Document Generator homepage
+and generates a sentence reading "Click here to learn more about Python
+Sphinx." looks like this:
+
+::
+
+ ``Click `here <https://www.sphinx-doc.org>`_ to learn more about Python
+ Sphinx.``
+
+And here it is, rendered:
+
+Click `here <https://www.sphinx-doc.org>`_ to learn more about Python Sphinx.
+
+Pay special attention to the underscore after the backtick. If you forget to
+include it and this is your first day working with RST, there's a chance that
+you'll spend all day wondering what went wrong without realizing that you
+omitted that underscore. Also, pay special attention to the space between the
+substitution text (in this case, "here") and the less-than bracket that sets
+the explicit link apart from the substitution text. The link will not render
+properly without this space.
+
+Linking Customs
+~~~~~~~~~~~~~~~
+
+By a custom established when Ceph was still being developed by Inktank,
+contributors to the documentation of the Ceph project preferred to use the
+convention of putting ``.. _Link Text: ../path`` links at the bottom of the
+document and linking to them with references of the form ```Link Text`_``. This
+convention was preferred because it made the documents more readable in a
+command line interface. As of 2023, though, we have no preference for one over
+the other. Use whichever convention makes the text easier to read.
+
+Quirks of ReStructured Text
+---------------------------
+
+External Links
+~~~~~~~~~~~~~~
+
+.. _external_link_with_inline_text:
+
+This is the formula for links to addresses external to the Ceph documentation:
+
+::
+
+ `inline text <http://www.foo.com>`_
+
+.. note:: Do not fail to include the space between the inline text and the
+ less-than sign.
+
+ Do not fail to include the underscore after the final backtick.
+
+ To link to addresses that are external to the Ceph documentation, include a
+ space between the inline text and the angle bracket that precedes the
+ external address. This is precisely the opposite of :ref:`the convention for
+ inline text that links to a location inside the Ceph
+ documentation<internal_link_with_inline_text>`. If this seems inconsistent
+ and confusing to you, then you're right. It is inconsistent and confusing.
+
+See also ":ref:`External Hyperlink Example<start_external_hyperlink_example>`".
+
+Internal Links
+~~~~~~~~~~~~~~
+
+To link to a section in the Ceph documentation, you must (1) define a target
+link before the section and then (2) link to that target from another location
+in the documentation. Here are the formulas for targets and links to those
+targets:
+
+Target::
+
+ .. _target:
+
+ Title of Targeted Section
+ =========================
+
+ Lorem ipsum...
+
+Link to target::
+
+ :ref:`target`
+
+.. _internal_link_with_inline_text:
+
+Link to target with inline text::
+
+ :ref:`inline text<target>`
+
+.. note::
+
+ There is no space between "inline text" and the angle bracket that
+ immediately follows it. This is precisely the opposite of :ref:`the
+ convention for inline text that links to a location outside of the Ceph
+ documentation<external_link_with_inline_text>`. If this seems inconsistent
+ and confusing to you, then you're right. It is inconsistent and confusing.
+
+
+.. _Python Sphinx: https://www.sphinx-doc.org
+.. _restructuredText: http://docutils.sourceforge.net/rst.html
+.. _Fork and Pull: https://help.github.com/articles/using-pull-requests
+.. _github: http://github.com
+.. _ditaa: http://ditaa.sourceforge.net/
+.. _Document Title: http://docutils.sourceforge.net/docs/user/rst/quickstart.html#document-title-subtitle
+.. _Sections: http://docutils.sourceforge.net/docs/user/rst/quickstart.html#sections
+.. _Cross referencing arbitrary locations: http://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html#role-ref
+.. _The TOC tree: http://sphinx-doc.org/markup/toctree.html
+.. _Showing code examples: http://sphinx-doc.org/markup/code.html
+.. _paragraph level markup: http://sphinx-doc.org/markup/para.html
+.. _topic directive: http://docutils.sourceforge.net/docs/ref/rst/directives.html#topic
diff --git a/doc/start/get-involved.rst b/doc/start/get-involved.rst
new file mode 100644
index 000000000..bc83f9899
--- /dev/null
+++ b/doc/start/get-involved.rst
@@ -0,0 +1,91 @@
+.. _Get Involved:
+
+=====================================
+ Get Involved in the Ceph Community!
+=====================================
+
+These are exciting times in the Ceph community! Get involved!
+
++----------------------+-------------------------------------------------+-----------------------------------------------+
+|Channel | Description | Contact Info |
++======================+=================================================+===============================================+
+| **Blog** | Check the Ceph Blog_ periodically to keep track | http://ceph.com/community/blog/ |
+| | of Ceph progress and important announcements. | |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **Planet Ceph** | Check the blog aggregation on Planet Ceph for | https://ceph.com/category/planet/ |
+| | interesting stories, information and | |
+| | experiences from the community. | |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **Wiki**            | The Ceph Wiki is a source for more community    | http://wiki.ceph.com/                         |
+|                      | and development related topics. There you can   |                                               |
+|                      | find information about blueprints, meetups,     |                                               |
+|                      | the Ceph Developer Summits, and more.           |                                               |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **IRC** | As you delve into Ceph, you may have questions | |
+| | or feedback for the Ceph development team. Ceph | - **Domain:** |
+| | developers are often available on the ``#ceph`` | ``irc.oftc.net`` |
+| | IRC channel particularly during daytime hours | - **Channels:** |
+| | in the US Pacific Standard Time zone. | ``#ceph``, |
+| | While ``#ceph`` is a good starting point for | ``#ceph-devel``, |
+| | cluster operators and users, there is also | ``#ceph-dashboard`` |
+| | ``#ceph-devel`` and ``#ceph-dashboard`` | |
+| | dedicated for Ceph developers. | |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **User List** | Ask and answer user-related questions by | |
+| | subscribing to the email list at | - `User Subscribe`_ |
+| | ceph-users@ceph.io. You can opt out of the email| - `User Unsubscribe`_ |
+| | list at any time by unsubscribing. A simple | - `User Archives`_ |
+| | email is all it takes! | |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **Devel List** | Keep in touch with developer activity by | |
+| | subscribing to the email list at dev@ceph.io. | - `Devel Subscribe`_ |
+| | You can opt out of the email list at any time by| - `Devel Unsubscribe`_ |
+| | unsubscribing. A simple email is all it takes! | - `Devel Archives`_ |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **Commit List** | Subscribe to ceph-commit@ceph.com to get | |
+| | commit notifications via email. You can opt out | - `Commit Subscribe`_ |
+| | of the email list at any time by unsubscribing. | - `Commit Unsubscribe`_ |
+| | A simple email is all it takes! | - `Mailing list archives`_ |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **QA List** | For Quality Assurance (QA) related activities | |
+| | subscribe to this list. You can opt out | - `QA Subscribe`_ |
+| | of the email list at any time by unsubscribing. | - `QA Unsubscribe`_ |
+| | A simple email is all it takes! | - `Mailing list archives`_ |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **Community List** | For all discussions related to the Ceph User | |
+| | Committee and other community topics. You can | - `Community Subscribe`_ |
+| | opt out of the email list at any time by | - `Community Unsubscribe`_ |
+| | unsubscribing. A simple email is all it takes! | - `Mailing list archives`_ |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **Bug Tracker** | You can help keep Ceph production worthy by | http://tracker.ceph.com/projects/ceph |
+| | filing and tracking bugs, and providing feature | |
+| | requests using the Bug Tracker_. | |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **Source Code** | If you would like to participate in | |
+| | development, bug fixing, or if you just want | - http://github.com/ceph/ceph |
+| | the very latest code for Ceph, you can get it | - http://download.ceph.com/tarballs/ |
+| | at http://github.com. See `Ceph Source Code`_ | |
+| | for details on cloning from github. | |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **Ceph Calendar** | Learn about upcoming Ceph events. | https://ceph.io/contribute/ |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+
+
+
+.. _Devel Subscribe: mailto:dev-request@ceph.io?body=subscribe
+.. _Devel Unsubscribe: mailto:dev-request@ceph.io?body=unsubscribe
+.. _User Subscribe: mailto:ceph-users-request@ceph.io?body=subscribe
+.. _User Unsubscribe: mailto:ceph-users-request@ceph.io?body=unsubscribe
+.. _Community Subscribe: mailto:ceph-community-join@lists.ceph.com
+.. _Community Unsubscribe: mailto:ceph-community-leave@lists.ceph.com
+.. _Commit Subscribe: mailto:ceph-commit-join@lists.ceph.com
+.. _Commit Unsubscribe: mailto:ceph-commit-leave@lists.ceph.com
+.. _QA Subscribe: mailto:ceph-qa-join@lists.ceph.com
+.. _QA Unsubscribe: mailto:ceph-qa-leave@lists.ceph.com
+.. _Devel Archives: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/
+.. _User Archives: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/
+.. _Mailing list archives: http://lists.ceph.com/
+.. _Blog: http://ceph.com/community/blog/
+.. _Tracker: http://tracker.ceph.com/
+.. _Ceph Source Code: http://github.com/ceph/ceph
+
diff --git a/doc/start/hardware-recommendations.rst b/doc/start/hardware-recommendations.rst
new file mode 100644
index 000000000..bf8eca4ba
--- /dev/null
+++ b/doc/start/hardware-recommendations.rst
@@ -0,0 +1,509 @@
+.. _hardware-recommendations:
+
+==========================
+ Hardware Recommendations
+==========================
+
+Ceph was designed to run on commodity hardware, which makes building and
+maintaining petabyte-scale data clusters economically feasible.
+When planning out your cluster hardware, you will need to balance a number
+of considerations, including failure domains and potential performance
+issues. Hardware planning should include distributing Ceph daemons and
+other processes that use Ceph across many hosts. Generally, we recommend
+running Ceph daemons of a specific type on a host configured for that type
+of daemon. We recommend using other hosts for processes that utilize your
+data cluster (e.g., OpenStack, CloudStack, etc).
+
+
+.. tip:: Check out the `Ceph blog`_ too.
+
+
+CPU
+===
+
+CephFS metadata servers (MDS) are CPU-intensive; they should therefore have
+quad-core (or better) CPUs and high clock rates (GHz). OSD nodes need enough
+processing power to run the RADOS service, to calculate data placement with
+CRUSH, to replicate data, and to maintain their own copies of the cluster map.
+
+The requirements of one Ceph cluster are not the same as the requirements of
+another, but here are some general guidelines.
+
+In earlier versions of Ceph, we made hardware recommendations based on the
+number of cores per OSD. This cores-per-OSD metric is no longer as useful as
+the number of cycles per IOP and the number of IOPS per OSD. For example, with
+NVMe drives, Ceph can easily utilize five or six cores on real clusters and up
+to about fourteen cores on single OSDs in isolation. Cores per OSD are
+therefore no longer as pressing a concern as they once were. When selecting
+hardware, select for IOPS per core.
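To make the IOPS-per-core guidance concrete, here is a rough sizing sketch. The helper and the example figures (400k IOPS per node, 80k IOPS per core) are illustrative assumptions, not Ceph defaults; both numbers should be measured on your own hardware.

```python
import math


def osd_node_cores(target_iops: int, iops_per_core: int) -> int:
    # Rough core-count estimate for an OSD node when sizing for
    # IOPS per core; both inputs are workload-dependent measurements.
    return math.ceil(target_iops / iops_per_core)


# e.g. a node expected to serve 400k IOPS where one core sustains ~80k IOPS
print(osd_node_cores(400_000, 80_000))  # -> 5
```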
+
+Monitor nodes and manager nodes have no heavy CPU demands and require only
+modest processors. If your host machines will run CPU-intensive processes in
+addition to Ceph daemons, make sure that you have enough processing power to
+run both the CPU-intensive processes and the Ceph daemons. (OpenStack Nova is
+one such example of a CPU-intensive process.) We recommend that you run
+non-Ceph CPU-intensive processes on separate hosts (that is, on hosts that are
+not your monitor and manager nodes) in order to avoid resource contention.
+
+RAM
+===
+
+Generally, more RAM is better. Monitor / manager nodes for a modest cluster
+might do fine with 64GB; for a larger cluster with hundreds of OSDs, 128GB
+is a reasonable target. There is a memory target for BlueStore OSDs that
+defaults to 4GB. Factor in a prudent margin for the operating system and
+administrative tasks (like monitoring and metrics) as well as increased
+consumption during recovery: provisioning ~8GB per BlueStore OSD
+is advised.
+
+Monitors and managers (ceph-mon and ceph-mgr)
+---------------------------------------------
+
+Monitor and manager daemon memory usage generally scales with the size of the
+cluster. Note that at boot-time and during topology changes and recovery these
+daemons will need more RAM than they do during steady-state operation, so plan
+for peak usage. For very small clusters, 32 GB suffices. For clusters of up to,
+say, 300 OSDs, go with 64 GB. For clusters built with (or which will grow to)
+even more OSDs, you should provision 128 GB. You may also want to consider
+tuning the following settings:
+
+* `mon_osd_cache_size`
+* `rocksdb_cache_size`
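The tiers above can be written out as a small sizing helper. This is only a sketch of the guidance in the text; the 25-OSD cutoff for a "very small" cluster is an illustrative assumption, while the 300-OSD threshold comes from the text.

```python
def mon_mgr_ram_gb(num_osds: int) -> int:
    # RAM to provision on monitor/manager nodes, following the tiers
    # described above. The <= 25 "very small" cutoff is an assumption.
    if num_osds <= 25:
        return 32
    if num_osds <= 300:
        return 64
    return 128


print(mon_mgr_ram_gb(200))  # -> 64
```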
+
+
+Metadata servers (ceph-mds)
+---------------------------
+
+The metadata daemon memory utilization depends on how much memory its cache is
+configured to consume. We recommend 1 GB as a minimum for most systems. See
+``mds_cache_memory``.
+
+Memory
+======
+
+BlueStore uses its own memory to cache data rather than relying on the
+operating system's page cache. In BlueStore you can adjust the amount of memory
+that the OSD attempts to consume by changing the `osd_memory_target`
+configuration option.
+
+- Setting the `osd_memory_target` below 2GB is typically not
+ recommended (Ceph may fail to keep the memory consumption under 2GB and
+ this may cause extremely slow performance).
+
+- Setting the memory target between 2GB and 4GB typically works but may result
+ in degraded performance: metadata may be read from disk during IO unless the
+ active data set is relatively small.
+
+- 4GB is the current default `osd_memory_target` size. This default
+ was chosen for typical use cases, and is intended to balance memory
+ requirements and OSD performance.
+
+- Setting the `osd_memory_target` higher than 4GB can improve
+  performance when there are many (small) objects or when large (256GB/OSD
+ or more) data sets are processed.
+
+.. important:: OSD memory autotuning is "best effort". Although the OSD may
+ unmap memory to allow the kernel to reclaim it, there is no guarantee that
+ the kernel will actually reclaim freed memory within a specific time
+ frame. This applies especially in older versions of Ceph, where transparent
+ huge pages can prevent the kernel from reclaiming memory that was freed from
+ fragmented huge pages. Modern versions of Ceph disable transparent huge
+ pages at the application level to avoid this, but that does not
+ guarantee that the kernel will immediately reclaim unmapped memory. The OSD
+ may still at times exceed its memory target. We recommend budgeting
+ approximately 20% extra memory on your system to prevent OSDs from going OOM
+ (**O**\ut **O**\f **M**\emory) during temporary spikes or due to delay in
+ the kernel reclaiming freed pages. That 20% value might be more or less than
+ needed, depending on the exact configuration of the system.
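Putting the memory target and the ~20% headroom together, a per-host budget can be sketched as follows. This is illustrative only: `osd_memory_target` is per OSD and defaults to 4 GB, and the 20% margin is the rule of thumb from the note above, not a guarantee.

```python
def osd_host_memory_gb(num_osds: int,
                       osd_memory_target_gb: float = 4.0,
                       headroom: float = 0.20) -> float:
    # Memory to budget on an OSD host: the aggregate osd_memory_target
    # plus ~20% headroom, since reclaiming unmapped memory is best-effort
    # and OSDs may temporarily exceed their target.
    return num_osds * osd_memory_target_gb * (1 + headroom)


# 12 OSDs at the 4 GB default: about 57.6 GB for the OSDs alone,
# before the OS and monitoring agents are counted.
print(osd_host_memory_gb(12))
```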
+
+When using the legacy FileStore back end, the page cache is used for caching
+data, so no tuning is normally needed. With FileStore, OSD memory consumption
+is related to the number of PGs per daemon in the system.
+
+
+Data Storage
+============
+
+Plan your data storage configuration carefully. There are significant cost and
+performance tradeoffs to consider when planning for data storage. Simultaneous
+OS operations and simultaneous requests from multiple daemons for read and
+write operations against a single drive can slow performance.
+
+Hard Disk Drives
+----------------
+
+OSDs should have plenty of storage drive space for object data. We recommend a
+minimum disk drive size of 1 terabyte. Consider the cost-per-gigabyte advantage
+of larger disks. We recommend dividing the price of the disk drive by the
+number of gigabytes to arrive at a cost per gigabyte, because larger drives may
+have a significant impact on the cost-per-gigabyte. For example, a 1 terabyte
+hard disk priced at $75.00 has a cost of $0.07 per gigabyte (i.e., $75 / 1024 =
+0.0732). By contrast, a 3 terabyte disk priced at $150.00 has a cost of $0.05
+per gigabyte (i.e., $150 / 3072 = 0.0488). In the foregoing example, using the
+1 terabyte disks would generally increase the cost per gigabyte by
+50%--rendering your cluster substantially less cost efficient.
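The arithmetic above can be checked directly; the prices are the hypothetical figures from the example, not market data.

```python
def cost_per_gb(price_usd: float, capacity_gb: int) -> float:
    # Cost per gigabyte: drive price divided by capacity in gigabytes.
    return price_usd / capacity_gb


# The example drives: $75 / 1 TB vs. $150 / 3 TB
print(round(cost_per_gb(75, 1024), 4))   # -> 0.0732
print(round(cost_per_gb(150, 3072), 4))  # -> 0.0488
```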
+
+.. tip:: Running multiple OSDs on a single SAS / SATA drive
+ is **NOT** a good idea. NVMe drives, however, can achieve
+ improved performance by being split into two or more OSDs.
+
+.. tip:: Running an OSD and a monitor or a metadata server on a single
+ drive is also **NOT** a good idea.
+
+.. tip:: With spinning disks, the SATA and SAS interface increasingly
+ becomes a bottleneck at larger capacities. See also the `Storage Networking
+ Industry Association's Total Cost of Ownership calculator`_.
+
+
+Storage drives are subject to limitations on seek time, access time, read and
+write times, as well as total throughput. These physical limitations affect
+overall system performance--especially during recovery. We recommend using a
+dedicated (ideally mirrored) drive for the operating system and software, and
+one drive for each Ceph OSD Daemon you run on the host (modulo NVMe above).
+Many "slow OSD" issues (when they are not attributable to hardware failure)
+arise from running an operating system and multiple OSDs on the same drive.
+
+It is technically possible to run multiple Ceph OSD Daemons per SAS / SATA
+drive, but this will lead to resource contention and diminish overall
+throughput.
+
+To get the best performance out of Ceph, run the following on separate drives:
+(1) operating systems, (2) OSD data, and (3) BlueStore db. For more
+information on how to effectively use a mix of fast drives and slow drives in
+your Ceph cluster, see the `block and block.db`_ section of the Bluestore
+Configuration Reference.
+
+Solid State Drives
+------------------
+
+Ceph performance can be improved by using solid-state drives (SSDs). This
+reduces random access time and reduces latency while accelerating throughput.
+
+SSDs cost more per gigabyte than do hard disk drives, but SSDs often offer
+access times that are, at a minimum, 100 times faster than hard disk drives.
+SSDs avoid hotspot issues and bottleneck issues within busy clusters, and
+they may offer better economics when TCO is evaluated holistically.
+
+SSDs do not have moving mechanical parts, so they are not necessarily subject
+to the same types of limitations as hard disk drives. SSDs do have significant
+limitations though. When evaluating SSDs, it is important to consider the
+performance of sequential reads and writes.
+
+.. important:: We recommend exploring the use of SSDs to improve performance.
+ However, before making a significant investment in SSDs, we **strongly
+ recommend** reviewing the performance metrics of an SSD and testing the
+ SSD in a test configuration in order to gauge performance.
+
+Relatively inexpensive SSDs may appeal to your sense of economy. Use caution.
+Acceptable IOPS are not the only factor to consider when selecting an SSD for
+use with Ceph.
+
+SSDs have historically been cost prohibitive for object storage, but emerging
+QLC drives are closing the gap, offering greater density with lower power
+consumption and less power spent on cooling. HDD OSDs may see a significant
+performance improvement by offloading WAL+DB onto an SSD.
+
+To get a better sense of the factors that determine the cost of storage, you
+might use the `Storage Networking Industry Association's Total Cost of
+Ownership calculator`_
+
+Partition Alignment
+~~~~~~~~~~~~~~~~~~~
+
+When using SSDs with Ceph, make sure that your partitions are properly aligned.
+Improperly aligned partitions suffer slower data transfer speeds than do
+properly aligned partitions. For more information about proper partition
+alignment and example commands that show how to align partitions properly, see
+`Werner Fischer's blog post on partition alignment`_.
+
+CephFS Metadata Segregation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+One way that Ceph accelerates CephFS file system performance is by segregating
+the storage of CephFS metadata from the storage of the CephFS file contents.
+Ceph provides a default ``metadata`` pool for CephFS metadata. You will never
+have to create a pool for CephFS metadata, but you can create a CRUSH map
+hierarchy for your CephFS metadata pool that points only to SSD storage media.
+See :ref:`CRUSH Device Class<crush-map-device-class>` for details.
+
+
+Controllers
+-----------
+
+Disk controllers (HBAs) can have a significant impact on write throughput.
+Carefully consider your selection of HBAs to ensure that they do not create a
+performance bottleneck. Notably, RAID-mode (IR) HBAs may exhibit higher latency
+than simpler "JBOD" (IT) mode HBAs. The RAID SoC, write cache, and battery
+backup can substantially increase hardware and maintenance costs. Some RAID
+HBAs can be configured with an IT-mode "personality".
+
+.. tip:: The `Ceph blog`_ is often an excellent source of information on Ceph
+ performance issues. See `Ceph Write Throughput 1`_ and `Ceph Write
+ Throughput 2`_ for additional details.
+
+
+Benchmarking
+------------
+
+BlueStore opens block devices in O_DIRECT and uses fsync frequently to ensure
+that data is safely persisted to media. You can evaluate a drive's low-level
+write performance using ``fio``. For example, 4kB random write performance is
+measured as follows:
+
+.. code-block:: console
+
+ # fio --name=/dev/sdX --ioengine=libaio --direct=1 --fsync=1 --readwrite=randwrite --blocksize=4k --runtime=300
+
+Write Caches
+------------
+
+Enterprise SSDs and HDDs normally include power loss protection features which
+use multi-level caches to speed up direct or synchronous writes. These devices
+can be toggled between two caching modes -- a volatile cache flushed to
+persistent media with fsync, or a non-volatile cache written synchronously.
+
+These two modes are selected by either "enabling" or "disabling" the write
+(volatile) cache. When the volatile cache is enabled, Linux uses a device in
+"write back" mode, and when disabled, it uses "write through".
+
+The default configuration (normally caching enabled) may not be optimal, and
+OSD performance may be dramatically increased in terms of increased IOPS and
+decreased commit_latency by disabling the write cache.
+
+Users are therefore encouraged to benchmark their devices with ``fio`` as
+described earlier and persist the optimal cache configuration for their
+devices.
+
+The cache configuration can be queried with ``hdparm``, ``sdparm``,
+``smartctl`` or by reading the values in ``/sys/class/scsi_disk/*/cache_type``,
+for example:
+
+.. code-block:: console
+
+ # hdparm -W /dev/sda
+
+ /dev/sda:
+ write-caching = 1 (on)
+
+ # sdparm --get WCE /dev/sda
+ /dev/sda: ATA TOSHIBA MG07ACA1 0101
+ WCE 1 [cha: y]
+ # smartctl -g wcache /dev/sda
+ smartctl 7.1 2020-04-05 r5049 [x86_64-linux-4.18.0-305.19.1.el8_4.x86_64] (local build)
+ Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org
+
+ Write cache is: Enabled
+
+ # cat /sys/class/scsi_disk/0\:0\:0\:0/cache_type
+ write back
+
+The write cache can be disabled with those same tools:
+
+.. code-block:: console
+
+ # hdparm -W0 /dev/sda
+
+ /dev/sda:
+ setting drive write-caching to 0 (off)
+ write-caching = 0 (off)
+
+ # sdparm --clear WCE /dev/sda
+ /dev/sda: ATA TOSHIBA MG07ACA1 0101
+ # smartctl -s wcache,off /dev/sda
+ smartctl 7.1 2020-04-05 r5049 [x86_64-linux-4.18.0-305.19.1.el8_4.x86_64] (local build)
+ Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org
+
+ === START OF ENABLE/DISABLE COMMANDS SECTION ===
+ Write cache disabled
+
+Normally, disabling the cache using ``hdparm``, ``sdparm``, or ``smartctl``
+results in the cache_type changing automatically to "write through". If this is
+not the case, you can try setting it directly as follows. (Note that setting
+``cache_type`` this way persists the caching mode of the device only until
+the next reboot):
+
+.. code-block:: console
+
+ # echo "write through" > /sys/class/scsi_disk/0\:0\:0\:0/cache_type
+
+ # hdparm -W /dev/sda
+
+ /dev/sda:
+ write-caching = 0 (off)
+
+.. tip:: This udev rule (tested on CentOS 8) will set all SATA/SAS device cache_types to "write
+ through":
+
+ .. code-block:: console
+
+ # cat /etc/udev/rules.d/99-ceph-write-through.rules
+ ACTION=="add", SUBSYSTEM=="scsi_disk", ATTR{cache_type}:="write through"
+
+.. tip:: This udev rule (tested on CentOS 7) will set all SATA/SAS device cache_types to "write
+ through":
+
+ .. code-block:: console
+
+ # cat /etc/udev/rules.d/99-ceph-write-through-el7.rules
+ ACTION=="add", SUBSYSTEM=="scsi_disk", RUN+="/bin/sh -c 'echo write through > /sys/class/scsi_disk/$kernel/cache_type'"
+
+.. tip:: The ``sdparm`` utility can be used to view/change the volatile write
+ cache on several devices at once:
+
+ .. code-block:: console
+
+ # sdparm --get WCE /dev/sd*
+ /dev/sda: ATA TOSHIBA MG07ACA1 0101
+ WCE 0 [cha: y]
+ /dev/sdb: ATA TOSHIBA MG07ACA1 0101
+ WCE 0 [cha: y]
+ # sdparm --clear WCE /dev/sd*
+ /dev/sda: ATA TOSHIBA MG07ACA1 0101
+ /dev/sdb: ATA TOSHIBA MG07ACA1 0101
+
+Additional Considerations
+-------------------------
+
+You typically will run multiple OSDs per host, but you should ensure that the
+aggregate throughput of your OSD drives doesn't exceed the network bandwidth
+required to service a client's need to read or write data. You should also
+consider what percentage of the overall data the cluster stores on each host. If
+the percentage on a particular host is large and the host fails, it can lead to
+problems such as exceeding the ``full ratio``, which causes Ceph to halt
+operations as a safety precaution that prevents data loss.
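A quick way to sanity-check the first point is to compare a host's aggregate drive throughput against its NIC bandwidth. The per-drive and NIC figures below are illustrative assumptions; substitute measured values for your hardware.

```python
def network_is_bottleneck(num_osds: int, drive_mb_s: float,
                          nic_gbps: float) -> bool:
    # True if the host's aggregate OSD drive throughput exceeds what its
    # network link can carry (1 Gb/s ~= 125 MB/s, ignoring protocol overhead).
    aggregate_mb_s = num_osds * drive_mb_s
    nic_mb_s = nic_gbps * 125
    return aggregate_mb_s > nic_mb_s


# 12 HDD OSDs at ~150 MB/s each easily saturate a single 10 Gb/s link
print(network_is_bottleneck(12, 150, 10))  # -> True
```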
+
+When you run multiple OSDs per host, you also need to ensure that the kernel
+is up to date. See `OS Recommendations`_ for notes on ``glibc`` and
+``syncfs(2)`` to ensure that your hardware performs as expected when running
+multiple OSDs per host.
+
+
+Networks
+========
+
+Provision at least 10 Gb/s networking in your racks.
+
+Speed
+-----
+
+It takes three hours to replicate 1 TB of data across a 1 Gb/s network and it
+takes thirty hours to replicate 10 TB across a 1 Gb/s network. But it takes only
+twenty minutes to replicate 1 TB across a 10 Gb/s network, and it takes only
+about three hours to replicate 10 TB across a 10 Gb/s network.
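At theoretical line rate, transfer time scales linearly with data size and inversely with link speed, as this sketch shows. Protocol overhead and replication write amplification make real transfers slower than line rate, which is why the figures quoted above are somewhat higher.

```python
def replication_hours(data_tb: float, link_gbps: float) -> float:
    # Hours to move data_tb terabytes over a link_gbps (gigabit/s) link
    # at full line rate, ignoring protocol and replication overhead.
    bits = data_tb * 8e12          # 1 TB = 8e12 bits
    seconds = bits / (link_gbps * 1e9)
    return seconds / 3600


print(round(replication_hours(1, 1), 1))   # 1 TB over 1 Gb/s: ~2.2 h line rate
print(round(replication_hours(10, 1), 1))  # 10 TB over 1 Gb/s: ~22.2 h
```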
+
+Cost
+----
+
+The larger the Ceph cluster, the more common OSD failures will be.
+The faster that a placement group (PG) can recover from a ``degraded`` state to
+an ``active + clean`` state, the better. Notably, fast recovery minimizes
+the likelihood of multiple, overlapping failures that can cause data to become
+temporarily unavailable or even lost. Of course, when provisioning your
+network, you will have to balance price against performance.
+
+Some deployment tools employ VLANs to make hardware and network cabling more
+manageable. VLANs that use the 802.1q protocol require VLAN-capable NICs and
+switches. The added expense of this hardware may be offset by the operational
+cost savings on network setup and maintenance. When using VLANs to handle VM
+traffic between the cluster and compute stacks (e.g., OpenStack, CloudStack,
+etc.), there is additional value in using 10 Gb/s Ethernet or better; as of
+2022, 40 Gb/s or 25/50/100 Gb/s networking is common for production clusters.
+
+Top-of-rack (TOR) switches also need fast and redundant uplinks to
+spine switches / routers, often at least 40 Gb/s.
+
+
+Baseboard Management Controller (BMC)
+-------------------------------------
+
+Your server chassis should have a Baseboard Management Controller (BMC).
+Well-known examples are iDRAC (Dell), CIMC (Cisco UCS), and iLO (HPE).
+Administration and deployment tools may also use BMCs extensively, especially
+via IPMI or Redfish, so consider the cost/benefit tradeoff of an out-of-band
+network for security and administration. Hypervisor SSH access, VM image uploads,
+OS image installs, management sockets, etc. can impose significant loads on a network.
+Running three networks may seem like overkill, but each traffic path represents
+a potential capacity, throughput and/or performance bottleneck that you should
+carefully consider before deploying a large scale data cluster.
+
+
+Failure Domains
+===============
+
+A failure domain is any failure that prevents access to one or more OSDs: a
+stopped daemon on a host, a disk failure, an OS crash, a malfunctioning NIC, a
+failed power supply, a network outage, a power outage, and so forth. When
+planning out your hardware needs, you must balance the temptation to reduce
+costs by placing too many responsibilities into too few failure domains
+against the added costs of isolating every potential failure domain.
+
+
+Minimum Hardware Recommendations
+================================
+
+Ceph can run on inexpensive commodity hardware. Small production clusters
+and development clusters can run successfully with modest hardware.
+
++--------------+----------------+-----------------------------------------+
+| Process | Criteria | Minimum Recommended |
++==============+================+=========================================+
+| ``ceph-osd`` | Processor | - 1 core minimum |
+| | | - 1 core per 200-500 MB/s |
+| | | - 1 core per 1000-3000 IOPS |
+| | | |
+| | | * Results are before replication. |
+| | | * Results may vary with different |
+| | | CPU models and Ceph features. |
+| | | (erasure coding, compression, etc) |
+| | | * ARM processors specifically may |
+| | | require additional cores. |
+| | | * Actual performance depends on many |
+| | | factors including drives, net, and |
+| | | client throughput and latency. |
+| | | Benchmarking is highly recommended. |
+| +----------------+-----------------------------------------+
+| | RAM | - 4GB+ per daemon (more is better) |
+| | | - 2-4GB often functions (may be slow) |
+| | | - Less than 2GB not recommended |
+| +----------------+-----------------------------------------+
+| | Volume Storage | 1x storage drive per daemon |
+| +----------------+-----------------------------------------+
+| | DB/WAL | 1x SSD partition per daemon (optional) |
+| +----------------+-----------------------------------------+
+| | Network | 1x 1GbE+ NICs (10GbE+ recommended) |
++--------------+----------------+-----------------------------------------+
+| ``ceph-mon`` | Processor | - 2 cores minimum |
+| +----------------+-----------------------------------------+
+| | RAM | 2-4GB+ per daemon |
+| +----------------+-----------------------------------------+
+| | Disk Space | 60 GB per daemon |
+| +----------------+-----------------------------------------+
+| | Network | 1x 1GbE+ NICs |
++--------------+----------------+-----------------------------------------+
+| ``ceph-mds`` | Processor | - 2 cores minimum |
+| +----------------+-----------------------------------------+
+| | RAM | 2GB+ per daemon |
+| +----------------+-----------------------------------------+
+| | Disk Space | 1 MB per daemon |
+| +----------------+-----------------------------------------+
+| | Network | 1x 1GbE+ NICs |
++--------------+----------------+-----------------------------------------+
+
+.. tip:: If you are running an OSD with a single disk, create a
+ partition for your volume storage that is separate from the partition
+ containing the OS. Generally, we recommend separate disks for the
+ OS and the volume storage.
+
+
+
+.. _block and block.db: https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#block-and-block-db
+.. _Ceph blog: https://ceph.com/community/blog/
+.. _Ceph Write Throughput 1: http://ceph.com/community/ceph-performance-part-1-disk-controller-write-throughput/
+.. _Ceph Write Throughput 2: http://ceph.com/community/ceph-performance-part-2-write-throughput-without-ssd-journals/
+.. _Mapping Pools to Different Types of OSDs: ../../rados/operations/crush-map#placing-different-pools-on-different-osds
+.. _OS Recommendations: ../os-recommendations
+.. _Storage Networking Industry Association's Total Cost of Ownership calculator: https://www.snia.org/forums/cmsi/programs/TCOcalc
+.. _Werner Fischer's blog post on partition alignment: https://www.thomas-krenn.com/en/wiki/Partition_Alignment_detailed_explanation
diff --git a/doc/start/intro.rst b/doc/start/intro.rst
new file mode 100644
index 000000000..d05ccbf7a
--- /dev/null
+++ b/doc/start/intro.rst
@@ -0,0 +1,89 @@
+===============
+ Intro to Ceph
+===============
+
+Whether you want to provide :term:`Ceph Object Storage` and/or
+:term:`Ceph Block Device` services to :term:`Cloud Platforms`, deploy
+a :term:`Ceph File System` or use Ceph for another purpose, all
+:term:`Ceph Storage Cluster` deployments begin with setting up each
+:term:`Ceph Node`, your network, and the Ceph Storage Cluster. A Ceph
+Storage Cluster requires at least one Ceph Monitor, Ceph Manager, and
+Ceph OSD (Object Storage Daemon). The Ceph Metadata Server is also
+required when running Ceph File System clients.
+
+.. ditaa::
+
+ +---------------+ +------------+ +------------+ +---------------+
+ | OSDs | | Monitors | | Managers | | MDSs |
+ +---------------+ +------------+ +------------+ +---------------+
+
+- **Monitors**: A :term:`Ceph Monitor` (``ceph-mon``) maintains maps
+ of the cluster state, including the monitor map, manager map, the
+ OSD map, the MDS map, and the CRUSH map. These maps are critical
+ cluster state required for Ceph daemons to coordinate with each other.
+ Monitors are also responsible for managing authentication between
+ daemons and clients. At least three monitors are normally required
+ for redundancy and high availability.
+
+- **Managers**: A :term:`Ceph Manager` daemon (``ceph-mgr``) is
+ responsible for keeping track of runtime metrics and the current
+ state of the Ceph cluster, including storage utilization, current
+ performance metrics, and system load. The Ceph Manager daemons also
+ host python-based modules to manage and expose Ceph cluster
+ information, including a web-based :ref:`mgr-dashboard` and
+ `REST API`_. At least two managers are normally required for high
+ availability.
+
+- **Ceph OSDs**: An Object Storage Daemon (:term:`Ceph OSD`,
+ ``ceph-osd``) stores data, handles data replication, recovery,
+ rebalancing, and provides some monitoring information to Ceph
+ Monitors and Managers by checking other Ceph OSD Daemons for a
+ heartbeat. At least three Ceph OSDs are normally required for
+ redundancy and high availability.
+
+- **MDSs**: A :term:`Ceph Metadata Server` (MDS, ``ceph-mds``) stores
+ metadata on behalf of the :term:`Ceph File System` (i.e., Ceph Block
+ Devices and Ceph Object Storage do not use MDS). Ceph Metadata
+ Servers allow POSIX file system users to execute basic commands (like
+ ``ls``, ``find``, etc.) without placing an enormous burden on the
+ Ceph Storage Cluster.
+
+Ceph stores data as objects within logical storage pools. Using the
+:term:`CRUSH` algorithm, Ceph calculates which placement group (PG) should
+contain the object, and which OSD should store the placement group. The
+CRUSH algorithm enables the Ceph Storage Cluster to scale, rebalance, and
+recover dynamically.
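Conceptually, this is a two-step mapping: object to placement group, then placement group to OSDs. The sketch below is a deliberate simplification for illustration only: real Ceph uses the rjenkins hash with a stable-mod variant and computes the OSD set with CRUSH over the cluster map, not MD5 or a plain rotation.

```python
import hashlib


def object_to_pg(object_name: str, pg_num: int) -> int:
    # Step 1 (simplified): hash the object name into one of pg_num
    # placement groups. Ceph uses rjenkins and a stable-mod variant.
    digest = hashlib.md5(object_name.encode()).hexdigest()
    return int(digest, 16) % pg_num


def pg_to_osds(pg_id: int, osd_ids: list, replicas: int = 3) -> list:
    # Step 2 stand-in: pick `replicas` distinct OSDs for the PG. Real
    # Ceph computes this with CRUSH, honoring failure domains; a simple
    # rotation is used here purely for illustration.
    return [osd_ids[(pg_id + i) % len(osd_ids)] for i in range(replicas)]


pg = object_to_pg("my-object", 128)
print(pg_to_osds(pg, osd_ids=list(range(10))))
```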
+
+.. _REST API: ../../mgr/restful
+
+.. raw:: html
+
+ <style type="text/css">div.body h3{margin:5px 0px 0px 0px;}</style>
+ <table cellpadding="10"><colgroup><col width="50%"><col width="50%"></colgroup><tbody valign="top"><tr><td><h3>Recommendations</h3>
+
+To begin using Ceph in production, you should review our hardware
+recommendations and operating system recommendations.
+
+.. toctree::
+ :maxdepth: 2
+
+ Hardware Recommendations <hardware-recommendations>
+ OS Recommendations <os-recommendations>
+
+
+.. raw:: html
+
+ </td><td><h3>Get Involved</h3>
+
+ You can avail yourself of help or contribute documentation, source
+ code or bugs by getting involved in the Ceph community.
+
+.. toctree::
+ :maxdepth: 2
+
+ get-involved
+ documenting-ceph
+
+.. raw:: html
+
+ </td></tr></tbody></table>
diff --git a/doc/start/os-recommendations.rst b/doc/start/os-recommendations.rst
new file mode 100644
index 000000000..64887c925
--- /dev/null
+++ b/doc/start/os-recommendations.rst
@@ -0,0 +1,65 @@
+====================
+ OS Recommendations
+====================
+
+Ceph Dependencies
+=================
+
+As a general rule, we recommend deploying Ceph on newer releases of Linux.
+We also recommend deploying on releases with long-term support.
+
+Linux Kernel
+------------
+
+- **Ceph Kernel Client**
+
+ If you are using the kernel client to map RBD block devices or mount
+ CephFS, the general advice is to use a "stable" or "longterm
+ maintenance" kernel series provided by either http://kernel.org or
+ your Linux distribution on any client hosts.
+
+ For RBD, if you choose to *track* long-term kernels, we currently recommend
+ 4.x-based "longterm maintenance" kernel series or later:
+
+ - 4.19.z
+ - 4.14.z
+ - 5.x
+
+ For CephFS, see the section about `Mounting CephFS using Kernel Driver`_
+ for kernel version guidance.
+
+ Older kernel client versions may not support your `CRUSH tunables`_ profile
+ or other newer features of the Ceph cluster, requiring the storage cluster
+ to be configured with those features disabled.
+
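+On a client host you can check the running kernel and whether the Ceph
+kernel modules are present (module availability varies by distribution;
+this is only a quick sanity check)::
+
+    # Show the running kernel version
+    uname -r
+
+    # Confirm the rbd and ceph (CephFS) kernel modules are available
+    modinfo rbd
+    modinfo ceph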
+
+Platforms
+=========
+
+The charts below show how Ceph's requirements map onto various Linux
+platforms. Generally speaking, there is very little dependence on
+specific distributions outside of the kernel and system initialization
+package (e.g., sysvinit or systemd).
+
++--------------+--------+------------------------+--------------------------------+-------------------+-----------------+
+| Release Name | Tag | CentOS | Ubuntu | OpenSUSE :sup:`C` | Debian :sup:`C` |
++==============+========+========================+================================+===================+=================+
+| Quincy | 17.2.z | 8 :sup:`A` | 20.04 :sup:`A` | 15.3 | 11 |
++--------------+--------+------------------------+--------------------------------+-------------------+-----------------+
+| Pacific | 16.2.z | 8 :sup:`A` | 18.04 :sup:`C`, 20.04 :sup:`A` | 15.2 | 10, 11 |
++--------------+--------+------------------------+--------------------------------+-------------------+-----------------+
+| Octopus      | 15.2.z | 7 :sup:`B`, 8 :sup:`A` | 18.04 :sup:`C`, 20.04 :sup:`A` | 15.2              | 10              |
++--------------+--------+------------------------+--------------------------------+-------------------+-----------------+
+
+- **A**: Ceph provides packages and has performed comprehensive testing on the software in them.
+- **B**: Ceph provides packages and has performed basic testing on the software in them.
+- **C**: Ceph provides packages only. No testing has been performed on these releases.
+
+.. note::
+   **For CentOS 7 Users**
+
+   ``Btrfs`` is no longer tested on CentOS 7 in the Octopus release. We
+   recommend using ``bluestore`` instead.
+
+.. _CRUSH Tunables: ../../rados/operations/crush-map#tunables
+
+.. _Mounting CephFS using Kernel Driver: ../../cephfs/mount-using-kernel-driver#which-kernel-version
diff --git a/doc/start/quick-rbd.rst b/doc/start/quick-rbd.rst
new file mode 100644
index 000000000..c1cf77098
--- /dev/null
+++ b/doc/start/quick-rbd.rst
@@ -0,0 +1,69 @@
+==========================
+ Block Device Quick Start
+==========================
+
+Ensure your :term:`Ceph Storage Cluster` is in an ``active + clean`` state
+before working with the :term:`Ceph Block Device`.
+
+.. note:: The Ceph Block Device is also known as :term:`RBD` or :term:`RADOS`
+ Block Device.
+
+
+.. ditaa::
+
+ /------------------\ /----------------\
+ | Admin Node | | ceph-client |
+ | +-------->+ cCCC |
+ | ceph-deploy | | ceph |
+ \------------------/ \----------------/
+
+
+You may use a virtual machine for your ``ceph-client`` node, but do not
+run the following procedures directly on a physical node that also hosts
+Ceph Storage Cluster daemons (running them in a VM on such a node is
+fine). See `FAQ`_ for details.
+
+Create a Block Device Pool
+==========================
+
+#. On the admin node, use the ``ceph`` tool to `create a pool`_
+   (we recommend the name ``rbd``).
+
+#. On the admin node, use the ``rbd`` tool to initialize the pool for use by RBD::
+
+ rbd pool init <pool-name>
+
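+For example, to create a pool named ``rbd`` (relying on the default
+placement-group autoscaling) and initialize it::
+
+    ceph osd pool create rbd
+    rbd pool init rbd
+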
+Configure a Block Device
+========================
+
+#. On the ``ceph-client`` node, create a block device image. ::
+
+ rbd create foo --size 4096 --image-feature layering [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring] [-p {pool-name}]
+
+#. On the ``ceph-client`` node, map the image to a block device. ::
+
+ sudo rbd map foo --name client.admin [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring] [-p {pool-name}]
+
+#. Use the block device by creating a file system on the ``ceph-client``
+ node. ::
+
+ sudo mkfs.ext4 -m0 /dev/rbd/{pool-name}/foo
+
+ This may take a few moments.
+
+#. Mount the file system on the ``ceph-client`` node. ::
+
+ sudo mkdir /mnt/ceph-block-device
+ sudo mount /dev/rbd/{pool-name}/foo /mnt/ceph-block-device
+ cd /mnt/ceph-block-device
+
+#. Optionally configure the block device to be automatically mapped and mounted
+ at boot (and unmounted/unmapped at shutdown) - see the `rbdmap manpage`_.
+
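+As a sketch of that configuration (the image, pool, and keyring path below
+match the example above; adjust them for your setup), an entry in
+``/etc/ceph/rbdmap``::
+
+    # map the image "foo" in pool "rbd" as client.admin at boot
+    rbd/foo id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
+
+and a matching ``/etc/fstab`` line, using ``noauto`` so the mount happens
+only after the device has been mapped::
+
+    /dev/rbd/rbd/foo /mnt/ceph-block-device ext4 defaults,noauto 0 0
+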
+
+See `block devices`_ for additional details.
+
+.. _create a pool: ../../rados/operations/pools/#create-a-pool
+.. _block devices: ../../rbd
+.. _FAQ: http://wiki.ceph.com/How_Can_I_Give_Ceph_a_Try
+.. _OS Recommendations: ../os-recommendations
+.. _rbdmap manpage: ../../man/8/rbdmap
diff --git a/doc/start/rgw.conf b/doc/start/rgw.conf
new file mode 100644
index 000000000..e1bee9986
--- /dev/null
+++ b/doc/start/rgw.conf
@@ -0,0 +1,30 @@
+FastCgiExternalServer /var/www/s3gw.fcgi -socket /tmp/radosgw.sock
+
+
+<VirtualHost *:80>
+
+ ServerName {fqdn}
+ # Uncomment the following line to add a server alias for S3 subdomains
+ # ServerAlias *.{fqdn}
+ ServerAdmin {email.address}
+ DocumentRoot /var/www
+ RewriteEngine On
+ RewriteRule ^/([a-zA-Z0-9-_.]*)([/]?.*) /s3gw.fcgi?page=$1&params=$2&%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
+
+ <IfModule mod_fastcgi.c>
+ <Directory /var/www>
+ Options +ExecCGI
+ AllowOverride All
+ SetHandler fastcgi-script
+ Order allow,deny
+ Allow from all
+ AuthBasicAuthoritative Off
+ </Directory>
+ </IfModule>
+
+ AllowEncodedSlashes On
+ ErrorLog /var/log/apache2/error.log
+ CustomLog /var/log/apache2/access.log combined
+ ServerSignature Off
+
+</VirtualHost> \ No newline at end of file