author    Daniel Baumann <daniel.baumann@progress-linux.org> 2024-04-27 18:24:20 +0000
committer Daniel Baumann <daniel.baumann@progress-linux.org> 2024-04-27 18:24:20 +0000
commit    483eb2f56657e8e7f419ab1a4fab8dce9ade8609 (patch)
tree      e5d88d25d870d5dedacb6bbdbe2a966086a0a5cf /doc/start
parent    Initial commit. (diff)
Adding upstream version 14.2.21.
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'doc/start')
-rw-r--r--  doc/start/ceph.conf | 3
-rw-r--r--  doc/start/documenting-ceph.rst | 596
-rw-r--r--  doc/start/get-involved.rst | 92
-rw-r--r--  doc/start/hardware-recommendations.rst | 365
-rw-r--r--  doc/start/index.rst | 47
-rw-r--r--  doc/start/intro.rst | 89
-rw-r--r--  doc/start/kube-helm.rst | 357
-rw-r--r--  doc/start/os-recommendations.rst | 154
-rw-r--r--  doc/start/quick-ceph-deploy.rst | 353
-rw-r--r--  doc/start/quick-cephfs.rst | 119
-rw-r--r--  doc/start/quick-common.rst | 21
-rw-r--r--  doc/start/quick-rbd.rst | 96
-rw-r--r--  doc/start/quick-rgw-old.rst | 30
-rw-r--r--  doc/start/quick-rgw.rst | 101
-rw-r--r--  doc/start/quick-start-preflight.rst | 356
-rw-r--r--  doc/start/rgw.conf | 30
16 files changed, 2809 insertions, 0 deletions
diff --git a/doc/start/ceph.conf b/doc/start/ceph.conf
new file mode 100644
index 00000000..f3d558eb
--- /dev/null
+++ b/doc/start/ceph.conf
@@ -0,0 +1,3 @@
+[global]
+ # list your monitors here
+ mon host = {mon-host-1}, {mon-host-2}
diff --git a/doc/start/documenting-ceph.rst b/doc/start/documenting-ceph.rst
new file mode 100644
index 00000000..b83c2288
--- /dev/null
+++ b/doc/start/documenting-ceph.rst
@@ -0,0 +1,596 @@
+==================
+ Documenting Ceph
+==================
+
+The **easiest way** to help the Ceph project is to contribute to the
+documentation. As the Ceph user base grows and the development pace quickens, an
+increasing number of people are updating the documentation and adding new
+information. Even small contributions like fixing spelling errors or clarifying
+instructions will help the Ceph project immensely.
+
+The Ceph documentation source resides in the ``ceph/doc`` directory of the Ceph
+repository, and Python Sphinx renders the source into HTML and manpages. The
+http://ceph.com/docs link currently displays the ``master`` branch by default,
+but you may view documentation for older branches (e.g., ``argonaut``) or future
+branches (e.g., ``next``) as well as work-in-progress branches by substituting
+``master`` with the branch name you prefer.
+
+
+Making Contributions
+====================
+
+Making a documentation contribution generally involves the same procedural
+sequence as making a code contribution, except that you must build documentation
+source instead of compiling program source. The sequence includes the following
+steps:
+
+#. `Get the Source`_
+#. `Select a Branch`_
+#. `Make a Change`_
+#. `Build the Source`_
+#. `Commit the Change`_
+#. `Push the Change`_
+#. `Make a Pull Request`_
+#. `Notify Us`_
+
+Get the Source
+--------------
+
+Ceph documentation lives in the Ceph repository right alongside the Ceph source
+code under the ``ceph/doc`` directory. For details on github and Ceph,
+see :ref:`Get Involved`.
+
+The most common way to make contributions is to use the `Fork and Pull`_
+approach. You must:
+
+#. Install git locally. For Debian/Ubuntu, execute::
+
+ sudo apt-get install git
+
+ For Fedora, execute::
+
+ sudo yum install git
+
+ For CentOS/RHEL, execute::
+
+ sudo yum install git
+
+#. Ensure your ``.gitconfig`` file has your name and email address. ::
+
+ [user]
+ email = {your-email-address}
+ name = {your-name}
+
+ For example::
+
+ git config --global user.name "John Doe"
+ git config --global user.email johndoe@example.com
+
+
+#. Create a `github`_ account (if you don't have one).
+
+#. Fork the Ceph project. See https://github.com/ceph/ceph.
+
+#. Clone your fork of the Ceph project to your local host.
+
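+For example, cloning your fork over HTTPS and entering the source tree might
+look like this (substitute your own github user name)::
+
+    git clone https://github.com/{your-github-username}/ceph.git
+    cd ceph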
+
+Ceph organizes documentation into an information architecture primarily by its
+main components.
+
+- **Ceph Storage Cluster:** The Ceph Storage Cluster documentation resides
+ under the ``doc/rados`` directory.
+
+- **Ceph Block Device:** The Ceph Block Device documentation resides under
+ the ``doc/rbd`` directory.
+
+- **Ceph Object Storage:** The Ceph Object Storage documentation resides under
+ the ``doc/radosgw`` directory.
+
+- **Ceph Filesystem:** The Ceph Filesystem documentation resides under the
+ ``doc/cephfs`` directory.
+
+- **Installation (Quick):** Quick start documentation resides under the
+ ``doc/start`` directory.
+
+- **Installation (Manual):** Manual installation documentation resides under
+ the ``doc/install`` directory.
+
+- **Manpage:** Manpage source resides under the ``doc/man`` directory.
+
+- **Developer:** Developer documentation resides under the ``doc/dev``
+ directory.
+
+- **Images:** If you include images such as JPEG or PNG files, you should
+ store them under the ``doc/images`` directory.
+
+
+Select a Branch
+---------------
+
+When you make small changes to the documentation, such as fixing typographical
+errors or clarifying explanations, use the ``master`` branch (default). You
+should also use the ``master`` branch when making contributions to features that
+are in the current release. ``master`` is the most commonly used branch. ::
+
+ git checkout master
+
+When you make changes to documentation that affect an upcoming release, use
+the ``next`` branch. ``next`` is the second most commonly used branch. ::
+
+ git checkout next
+
+Create a branch when you are making substantial contributions, such as new
+features that are not yet in the current release; when your contribution is
+related to an issue with a tracker ID; or when you want to see your
+documentation rendered on the Ceph.com website before it is merged into the
+``master`` branch. To distinguish branches that include only documentation
+updates, we
+prepend them with ``wip-doc`` by convention, following the form
+``wip-doc-{your-branch-name}``. If the branch relates to an issue filed in
+http://tracker.ceph.com/issues, the branch name incorporates the issue number.
+For example, if a documentation branch is a fix for issue #4000, the branch name
+should be ``wip-doc-4000`` by convention and the relevant tracker URL will be
+http://tracker.ceph.com/issues/4000.
+
+.. note:: Please do not mingle documentation contributions and source code
+ contributions in a single pull request. Editors review the documentation
+ and engineers review source code changes. When you keep documentation
+ pull requests separate from source code pull requests, it simplifies the
+ process and we won't have to ask you to resubmit the requests separately.
+
+Before you create your branch name, ensure that it doesn't already exist in the
+local or remote repository. ::
+
+ git branch -a | grep wip-doc-{your-branch-name}
+
+If it doesn't exist, create your branch::
+
+ git checkout -b wip-doc-{your-branch-name}
+
+
+Make a Change
+-------------
+
+Modifying a document involves opening a reStructuredText file, changing
+its contents, and saving the changes. See `Documentation Style Guide`_ for
+details on syntax requirements.
+
+Adding a document involves creating a new reStructuredText file under the
+``doc`` directory or its subdirectories and saving the file with a ``*.rst``
+file extension. You must also include a reference to the document: a hyperlink
+or a table of contents entry. The ``index.rst`` file of a top-level directory
+usually contains a TOC, where you can add the new file name. All documents must
+have a title. See `Headings`_ for details.
+
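+For illustration, a minimal sketch of a new document might look like the
+following (the title and content here are placeholders)::
+
+    =========
+     Example
+    =========
+
+    Example content goes here.
+
+You would then add the new file name to the ``.. toctree::`` directive of the
+``index.rst`` in the same directory so that the page is reachable.
+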
+Your new document doesn't get tracked by ``git`` automatically. When you want
+to add the document to the repository, you must use ``git add
+{path-to-filename}``. For example, from the top level directory of the
+repository, adding an ``example.rst`` file to the ``rados`` subdirectory would
+look like this::
+
+ git add doc/rados/example.rst
+
+Deleting a document involves removing it from the repository with ``git rm
+{path-to-filename}``. For example::
+
+ git rm doc/rados/example.rst
+
+You must also remove any reference to a deleted document from other documents.
+
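+One convenient way to find lingering references (a suggestion, not a required
+step) is to search the documentation tree with ``git grep``, using the file
+name from the example above::
+
+    git grep -n example.rst doc/
+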
+
+Build the Source
+----------------
+
+To build the documentation, navigate to the ``ceph`` repository directory::
+
+ cd ceph
+
+To build the documentation on Debian/Ubuntu, Fedora, or CentOS/RHEL, execute::
+
+ admin/build-doc
+
+To scan for the reachability of external links, execute::
+
+ admin/build-doc linkcheck
+
+Executing ``admin/build-doc`` will create a ``build-doc`` directory under ``ceph``.
+You may need to create a directory under ``ceph/build-doc`` for output of Javadoc
+files. ::
+
+ mkdir -p output/html/api/libcephfs-java/javadoc
+
+The build script ``build-doc`` will produce an output of errors and warnings.
+You MUST fix errors in documents you modified before committing a change, and you
+SHOULD fix warnings that are related to syntax you modified.
+
+.. important:: You must validate ALL HYPERLINKS. If a hyperlink is broken,
+ it automatically breaks the build!
+
+Once you build the documentation set, you may start an HTTP server at
+``http://localhost:8080/`` to view it::
+
+ admin/serve-doc
+
+You can also navigate to ``build-doc/output`` to inspect the built documents.
+There should be an ``html`` directory and a ``man`` directory containing
+documentation in HTML and manpage formats respectively.
+
+Build the Source (First Time)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Ceph uses Python Sphinx, which is generally distribution agnostic. The first
+time you build Ceph documentation, it will generate a doxygen XML tree, which
+is a bit time consuming.
+
+Python Sphinx does have some dependencies that vary across distributions. The
+first time you build the documentation, the script will notify you if you do not
+have the dependencies installed. To run Sphinx and build documentation successfully,
+the following packages are required:
+
+.. raw:: html
+
+ <style type="text/css">div.body h3{margin:5px 0px 0px 0px;}</style>
+ <table cellpadding="10"><colgroup><col width="30%"><col width="30%"><col width="30%"></colgroup><tbody valign="top"><tr><td><h3>Debian/Ubuntu</h3>
+
+- gcc
+- python-dev
+- python-pip
+- python-virtualenv
+- python-sphinx
+- libxml2-dev
+- libxslt1-dev
+- doxygen
+- graphviz
+- ant
+- ditaa
+
+.. raw:: html
+
+ </td><td><h3>Fedora</h3>
+
+- gcc
+- python-devel
+- python-pip
+- python-virtualenv
+- python-docutils
+- python-jinja2
+- python-pygments
+- python-sphinx
+- libxml2-devel
+- libxslt1-devel
+- doxygen
+- graphviz
+- ant
+- ditaa
+
+.. raw:: html
+
+ </td><td><h3>CentOS/RHEL</h3>
+
+- gcc
+- python-devel
+- python-pip
+- python-virtualenv
+- python-docutils
+- python-jinja2
+- python-pygments
+- python-sphinx
+- libxml2-dev
+- libxslt1-dev
+- doxygen
+- graphviz
+- ant
+
+.. raw:: html
+
+ </td></tr></tbody></table>
+
+
+Install each dependency that is not installed on your host. For Debian/Ubuntu
+distributions, execute the following::
+
+ sudo apt-get install gcc python-dev python-pip python-virtualenv libxml2-dev libxslt-dev doxygen graphviz ant ditaa
+ sudo apt-get install python-sphinx
+
+For Fedora distributions, execute the following::
+
+ sudo yum install gcc python-devel python-pip python-virtualenv libxml2-devel libxslt-devel doxygen graphviz ant
+ sudo pip install html2text
+ sudo yum install python-jinja2 python-pygments python-docutils python-sphinx
+ sudo yum install jericho-html ditaa
+
+For CentOS/RHEL distributions, it is recommended to enable the ``epel`` (Extra
+Packages for Enterprise Linux) repository, as it provides some extra packages
+which are not available in the default repository. To install ``epel``, execute
+the following::
+
+ sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
+
+For CentOS/RHEL distributions, execute the following::
+
+ sudo yum install gcc python-devel python-pip python-virtualenv libxml2-devel libxslt-devel doxygen graphviz ant
+ sudo pip install html2text
+
+For CentOS/RHEL distributions, the remaining python packages are not available in
+the default and ``epel`` repositories. So, use http://rpmfind.net/ to find the
+packages. Then, download them from a mirror and install them. For example::
+
+ wget http://rpmfind.net/linux/centos/7/os/x86_64/Packages/python-jinja2-2.7.2-2.el7.noarch.rpm
+ sudo yum install python-jinja2-2.7.2-2.el7.noarch.rpm
+ wget http://rpmfind.net/linux/centos/7/os/x86_64/Packages/python-pygments-1.4-9.el7.noarch.rpm
+ sudo yum install python-pygments-1.4-9.el7.noarch.rpm
+ wget http://rpmfind.net/linux/centos/7/os/x86_64/Packages/python-docutils-0.11-0.2.20130715svn7687.el7.noarch.rpm
+ sudo yum install python-docutils-0.11-0.2.20130715svn7687.el7.noarch.rpm
+ wget http://rpmfind.net/linux/centos/7/os/x86_64/Packages/python-sphinx-1.1.3-11.el7.noarch.rpm
+ sudo yum install python-sphinx-1.1.3-11.el7.noarch.rpm
+
+Ceph documentation makes extensive use of `ditaa`_, which is not presently built
+for CentOS/RHEL7. You must install ``ditaa`` if you are making changes to
+``ditaa`` diagrams so that you can verify that they render properly before you
+commit new or modified ``ditaa`` diagrams. You may retrieve compatible packages
+for CentOS/RHEL distributions and install them manually. To run ``ditaa`` on
+CentOS/RHEL7, the following dependencies are required:
+
+- jericho-html
+- jai-imageio-core
+- batik
+
+Use http://rpmfind.net/ to find compatible ``ditaa`` and the dependencies.
+Then, download them from a mirror and install them. For example::
+
+ wget http://rpmfind.net/linux/fedora/linux/releases/22/Everything/x86_64/os/Packages/j/jericho-html-3.3-4.fc22.noarch.rpm
+ sudo yum install jericho-html-3.3-4.fc22.noarch.rpm
+ wget http://rpmfind.net/linux/centos/7/os/x86_64/Packages/jai-imageio-core-1.2-0.14.20100217cvs.el7.noarch.rpm
+ sudo yum install jai-imageio-core-1.2-0.14.20100217cvs.el7.noarch.rpm
+ wget http://rpmfind.net/linux/centos/7/os/x86_64/Packages/batik-1.8-0.12.svn1230816.el7.noarch.rpm
+ sudo yum install batik-1.8-0.12.svn1230816.el7.noarch.rpm
+ wget http://rpmfind.net/linux/fedora/linux/releases/22/Everything/x86_64/os/Packages/d/ditaa-0.9-13.r74.fc21.noarch.rpm
+ sudo yum install ditaa-0.9-13.r74.fc21.noarch.rpm
+
+Once you have installed all these packages, build the documentation by following
+the steps given in `Build the Source`_.
+
+
+Commit the Change
+-----------------
+
+Ceph documentation commits are simple, but follow a strict convention:
+
+- A commit SHOULD touch only one file (this simplifies rollback). You MAY
+ commit multiple files with related changes. Unrelated changes SHOULD NOT
+ be put into the same commit.
+- A commit MUST have a comment.
+- A commit comment MUST be prepended with ``doc:``. (strict)
+- The comment summary MUST be one line only. (strict)
+- Additional comments MAY follow a blank line after the summary,
+ but should be terse.
+- A commit MAY include ``Fixes: #{bug number}``.
+- Commits MUST include ``Signed-off-by: Firstname Lastname <email>``. (strict)
+
+.. tip:: Follow the foregoing convention particularly where it says
+ ``(strict)`` or you will be asked to modify your commit to comply with
+ this convention.
+
+The following is a common commit comment (preferred)::
+
+ doc: Fixes a spelling error and a broken hyperlink.
+
+ Signed-off-by: John Doe <john.doe@gmail.com>
+
+
+The following comment includes a reference to a bug. ::
+
+ doc: Fixes a spelling error and a broken hyperlink.
+
+ Fixes: #1234
+
+ Signed-off-by: John Doe <john.doe@gmail.com>
+
+
+The following comment includes a terse sentence following the comment summary.
+There is a carriage return between the summary line and the description::
+
+ doc: Added mon setting to monitor config reference
+
+ Describes 'mon setting', which is a new setting added
+ to config_opts.h.
+
+ Signed-off-by: John Doe <john.doe@gmail.com>
+
+
+To commit changes, execute the following::
+
+ git commit -a
+
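+If you prefer, ``git`` can append the required ``Signed-off-by`` line for you,
+using the name and email address from your ``.gitconfig``, via the ``--signoff``
+(``-s``) option::
+
+    git commit -a -s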
+
+An easy way to manage your documentation commits is to use visual tools for
+``git``. For example, ``gitk`` provides a graphical interface for viewing the
+repository history, and ``git-gui`` provides a graphical interface for viewing
+your uncommitted changes, staging them for commit, committing the changes and
+pushing them to your forked Ceph repository.
+
+
+For Debian/Ubuntu, execute::
+
+ sudo apt-get install gitk git-gui
+
+For Fedora/CentOS/RHEL, execute::
+
+ sudo yum install gitk git-gui
+
+Then, execute::
+
+ cd {git-ceph-repo-path}
+ gitk
+
+Finally, select **File->Start git gui** to activate the graphical user interface.
+
+
+Push the Change
+---------------
+
+Once you have one or more commits, you must push them from the local copy of the
+repository to ``github``. A graphical tool like ``git-gui`` provides a user
+interface for pushing to the repository. If you created a branch previously::
+
+ git push origin wip-doc-{your-branch-name}
+
+Otherwise::
+
+ git push
+
+
+Make a Pull Request
+-------------------
+
+As noted earlier, you can make documentation contributions using the `Fork and
+Pull`_ approach.
+
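+In practice, after pushing your branch you open the pull request from your fork
+in the GitHub web interface, selecting the branch you pushed (for example,
+``wip-doc-{your-branch-name}``) and targeting the corresponding branch of the
+upstream ``ceph/ceph`` repository.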
+
+
+Notify Us
+---------
+
+After you make a pull request, please email ceph-docs@redhat.com.
+
+
+
+Documentation Style Guide
+=========================
+
+One objective of the Ceph documentation project is to ensure the readability of
+the documentation in both native reStructuredText format and its rendered
+formats such as HTML. Navigate to your Ceph repository and view a document in
+its native format. You may notice that it is generally as legible in a terminal
+as it is in its rendered HTML format. You may also notice that diagrams in
+``ditaa`` format render reasonably well in text mode. ::
+
+ less doc/architecture.rst
+
+Review the following style guides to maintain this consistency.
+
+
+Headings
+--------
+
+#. **Document Titles:** Document titles use the ``=`` character overline and
+ underline with a leading and trailing space on the title text line.
+ See `Document Title`_ for details.
+
+#. **Section Titles:** Section titles use the ``=`` character underline with no
+ leading or trailing spaces for text. Two carriage returns should precede a
+ section title (unless an inline reference precedes it). See `Sections`_ for
+ details.
+
+#. **Subsection Titles:** Subsection titles use the ``_`` character underline
+ with no leading or trailing spaces for text. Two carriage returns should
+ precede a subsection title (unless an inline reference precedes it).
+
+
+Text Body
+---------
+
+As a general rule, we prefer text to wrap at column 80 so that it is legible in
+a command line interface without leading or trailing white space. Where
+possible, we prefer to maintain this convention with text, lists, literal text
+(exceptions allowed), tables, and ``ditaa`` graphics.
+
+#. **Paragraphs**: Paragraphs have a leading and a trailing carriage return,
+ and should be 80 characters wide or less so that the documentation can be
+ read in native format in a command line terminal.
+
+#. **Literal Text:** To create an example of literal text (e.g., command line
+ usage), terminate the preceding paragraph with ``::`` or enter a carriage
+ return to create an empty line after the preceding paragraph; then, enter
+ ``::`` on a separate line followed by another empty line. Then, begin the
+ literal text with tab indentation (preferred) or space indentation of 3
+ characters (see the example following this list).
+
+#. **Indented Text:** Indented text such as bullet points
+ (e.g., ``- some text``) may span multiple lines. The text of subsequent
+ lines should begin at the same character position as the text of the
+ indented text (less numbers, bullets, etc.).
+
+ Indented text may include literal text examples. Whereas, text indentation
+ should be done with spaces, literal text examples should be indented with
+ tabs. This convention enables you to add an additional indented paragraph
+ following a literal example by leaving a blank line and beginning the
+ subsequent paragraph with space indentation.
+
+#. **Numbered Lists:** Numbered lists should use autonumbering by starting
+ a numbered indent with ``#.`` instead of the actual number so that
+ numbered paragraphs can be repositioned without requiring manual
+ renumbering.
+
+#. **Code Examples:** Ceph supports the use of the
+ ``.. code-block:: <language>`` directive, so that you can add highlighting to
+ source examples. This is preferred for source code. However, use of this
+ tag will cause autonumbering to restart at 1 if it is used as an example
+ within a numbered list. See `Showing code examples`_ for details.
+
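+For illustration, the literal text and code block conventions above might look
+like this in a document's source (the command shown is only an example)::
+
+    To check the health of the cluster, run the following command::
+
+        ceph status
+
+    .. code-block:: console
+
+        $ ceph status
+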
+
+Paragraph Level Markup
+----------------------
+
+The Ceph project uses `paragraph level markup`_ to highlight points; a brief
+example follows the list below.
+
+#. **Tip:** Use the ``.. tip::`` directive to provide additional information
+ that assists the reader or steers the reader away from trouble.
+
+#. **Note**: Use the ``.. note::`` directive to highlight an important point.
+
+#. **Important:** Use the ``.. important::`` directive to highlight important
+ requirements or caveats (e.g., anything that could lead to data loss). Use
+ this directive sparingly, because it renders in red.
+
+#. **Version Added:** Use the ``.. versionadded::`` directive for new features
+ or configuration settings so that users know the minimum release for using
+ a feature.
+
+#. **Version Changed:** Use the ``.. versionchanged::`` directive for changes
+ in usage or configuration settings.
+
+#. **Deprecated:** Use the ``.. deprecated::`` directive when CLI usage,
+ a feature or a configuration setting is no longer preferred or will be
+ discontinued.
+
+#. **Topic:** Use the ``.. topic::`` directive to encapsulate text that is
+ outside the main flow of the document. See the `topic directive`_ for
+ additional details.
+
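+For example, the ``note`` and ``versionadded`` directives described above might
+be written as follows (the wording and version number are placeholders)::
+
+    .. note:: The default value of this setting can be overridden in
+       ``ceph.conf``.
+
+    .. versionadded:: 0.56
+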
+
+TOC and Hyperlinks
+------------------
+
+All documents must be linked from another document or a table of contents,
+otherwise you will receive a warning when building the documentation.
+
+The Ceph project uses the ``.. toctree::`` directive. See `The TOC tree`_
+for details. When rendering a TOC, consider specifying the ``:maxdepth:``
+parameter so the rendered TOC is reasonably terse.
+
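+For example, a terse TOC similar to the one in ``doc/start/intro.rst`` might
+look like this::
+
+    .. toctree::
+       :maxdepth: 1
+
+       Hardware Recommendations <hardware-recommendations>
+       OS Recommendations <os-recommendations>
+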
+Document authors should prefer to use the ``:ref:`` syntax where a link target
+contains a specific unique identifier (e.g., ``.. _unique-target-id:``), and a
+reference to the target specifically references the target (e.g.,
+``:ref:`unique-target-id```) so that if source files are moved or the
+information architecture changes, the links will still work. See
+`Cross referencing arbitrary locations`_ for details.
+
+Ceph documentation also uses the backtick (accent grave) character followed by
+the link text, another backtick and an underscore. Sphinx allows you to
+incorporate the link destination inline; however, we prefer to use the
+``.. _Link Text: ../path`` convention at the bottom of the document, because it
+improves the readability of the document in a command line interface.
+
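+For example, a link written as follows keeps the body text readable while the
+target is collected at the bottom of the file (the path is illustrative)::
+
+    See the `Hardware Recommendations`_ guide before selecting drives.
+
+    .. _Hardware Recommendations: ../start/hardware-recommendations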
+
+.. _Python Sphinx: http://sphinx-doc.org
+.. _reStructuredText: http://docutils.sourceforge.net/rst.html
+.. _Fork and Pull: https://help.github.com/articles/using-pull-requests
+.. _github: http://github.com
+.. _ditaa: http://ditaa.sourceforge.net/
+.. _Document Title: http://docutils.sourceforge.net/docs/user/rst/quickstart.html#document-title-subtitle
+.. _Sections: http://docutils.sourceforge.net/docs/user/rst/quickstart.html#sections
+.. _Cross referencing arbitrary locations: http://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html#role-ref
+.. _The TOC tree: http://sphinx-doc.org/markup/toctree.html
+.. _Showing code examples: http://sphinx-doc.org/markup/code.html
+.. _paragraph level markup: http://sphinx-doc.org/markup/para.html
+.. _topic directive: http://docutils.sourceforge.net/docs/ref/rst/directives.html#topic
diff --git a/doc/start/get-involved.rst b/doc/start/get-involved.rst
new file mode 100644
index 00000000..ff0bf878
--- /dev/null
+++ b/doc/start/get-involved.rst
@@ -0,0 +1,92 @@
+.. _Get Involved:
+
+=====================================
+ Get Involved in the Ceph Community!
+=====================================
+
+These are exciting times in the Ceph community! Get involved!
+
++----------------------+-------------------------------------------------+-----------------------------------------------+
+|Channel | Description | Contact Info |
++======================+=================================================+===============================================+
+| **Blog** | Check the Ceph Blog_ periodically to keep track | http://ceph.com/community/blog/ |
+| | of Ceph progress and important announcements. | |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **Planet Ceph** | Check the blog aggregation on Planet Ceph for | https://ceph.com/category/planet/ |
+| | interesting stories, information and | |
+| | experiences from the community. | |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **Wiki** | Check the Ceph Wiki is a source for more | http://wiki.ceph.com/ |
+| | community and development related topics. You | |
+| | can find there information about blueprints, | |
+| | meetups, the Ceph Developer Summits and more. | |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **IRC** | As you delve into Ceph, you may have questions | |
+| | or feedback for the Ceph development team. Ceph | - **Domain:** |
+| | developers are often available on the ``#ceph`` | ``irc.oftc.net`` |
+| | IRC channel particularly during daytime hours | - **Channels:** |
+| | in the US Pacific Standard Time zone. | ``#ceph``, |
+| | While ``#ceph`` is a good starting point for | ``#ceph-devel``, |
+| | cluster operators and users, there is also | ``#ceph-dashboard`` |
+| | ``#ceph-devel`` and ``#ceph-dashboard`` | |
+| | dedicated for Ceph developers. | |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **User List** | Ask and answer user-related questions by | |
+| | subscribing to the email list at | - `User Subscribe`_ |
+| | ceph-users@ceph.com. You can opt out of | - `User Unsubscribe`_ |
+| | the email list at any time by unsubscribing. | - `Gmane for Users`_ |
+| | A simple email is all it takes! If you would | |
+| | like to view the archives, go to Gmane. | |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **Devel List** | Keep in touch with developer activity by | |
+| | subscribing to the email list at | - `Devel Subscribe`_ |
+| | ceph-devel@vger.kernel.org. You can opt out of | - `Devel Unsubscribe`_ |
+| | the email list at any time by unsubscribing. | - `Gmane for Developers`_ |
+| | A simple email is all it takes! If you would | |
+| | like to view the archives, go to Gmane. | |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **Commit List** | Subscribe to ceph-commit@ceph.com to get | |
+| | commit notifications via email. You can opt out | - `Commit Subscribe`_ |
+| | of the email list at any time by unsubscribing. | - `Commit Unsubscribe`_ |
+| | A simple email is all it takes! | - `Mailing list archives`_ |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **QA List** | For Quality Assurance (QA) related activities | |
+| | subscribe to this list. You can opt out | - `QA Subscribe`_ |
+| | of the email list at any time by unsubscribing. | - `QA Unsubscribe`_ |
+| | A simple email is all it takes! | - `Mailing list archives`_ |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **Community List** | For all discussions related to the Ceph User | |
+| | Committee and other community topics. You can | - `Community Subscribe`_ |
+| | opt out of the email list at any time by | - `Community Unsubscribe`_ |
+| | unsubscribing. A simple email is all it takes! | - `Mailing list archives`_ |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **Bug Tracker** | You can help keep Ceph production worthy by | http://tracker.ceph.com/projects/ceph |
+| | filing and tracking bugs, and providing feature | |
+| | requests using the Bug Tracker_. | |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+| **Source Code** | If you would like to participate in | |
+| | development, bug fixing, or if you just want | - http://github.com/ceph/ceph |
+| | the very latest code for Ceph, you can get it | - http://download.ceph.com/tarballs/ |
+| | at http://github.com. See `Ceph Source Code`_ | |
+| | for details on cloning from github. | |
++----------------------+-------------------------------------------------+-----------------------------------------------+
+
+
+
+.. _Devel Subscribe: mailto:majordomo@vger.kernel.org?body=subscribe+ceph-devel
+.. _Devel Unsubscribe: mailto:majordomo@vger.kernel.org?body=unsubscribe+ceph-devel
+.. _User Subscribe: mailto:ceph-users-join@lists.ceph.com
+.. _User Unsubscribe: mailto:ceph-users-leave@lists.ceph.com
+.. _Community Subscribe: mailto:ceph-community-join@lists.ceph.com
+.. _Community Unsubscribe: mailto:ceph-community-leave@lists.ceph.com
+.. _Commit Subscribe: mailto:ceph-commit-join@lists.ceph.com
+.. _Commit Unsubscribe: mailto:ceph-commit-leave@lists.ceph.com
+.. _QA Subscribe: mailto:ceph-qa-join@lists.ceph.com
+.. _QA Unsubscribe: mailto:ceph-qa-leave@lists.ceph.com
+.. _Gmane for Developers: http://news.gmane.org/gmane.comp.file-systems.ceph.devel
+.. _Gmane for Users: http://news.gmane.org/gmane.comp.file-systems.ceph.user
+.. _Mailing list archives: http://lists.ceph.com/
+.. _Blog: http://ceph.com/community/blog/
+.. _Tracker: http://tracker.ceph.com/
+.. _Ceph Source Code: http://github.com/ceph/ceph
+
diff --git a/doc/start/hardware-recommendations.rst b/doc/start/hardware-recommendations.rst
new file mode 100644
index 00000000..30e00a89
--- /dev/null
+++ b/doc/start/hardware-recommendations.rst
@@ -0,0 +1,365 @@
+.. _hardware-recommendations:
+
+==========================
+ Hardware Recommendations
+==========================
+
+Ceph was designed to run on commodity hardware, which makes building and
+maintaining petabyte-scale data clusters economically feasible.
+When planning out your cluster hardware, you will need to balance a number
+of considerations, including failure domains and potential performance
+issues. Hardware planning should include distributing Ceph daemons and
+other processes that use Ceph across many hosts. Generally, we recommend
+running Ceph daemons of a specific type on a host configured for that type
+of daemon. We recommend using other hosts for processes that utilize your
+data cluster (e.g., OpenStack, CloudStack, etc).
+
+
+.. tip:: Check out the `Ceph blog`_ too.
+
+
+CPU
+===
+
+Ceph metadata servers dynamically redistribute their load, which is CPU
+intensive. So your metadata servers should have significant processing power
+(e.g., quad core or better CPUs). Ceph OSDs run the :term:`RADOS` service, calculate
+data placement with :term:`CRUSH`, replicate data, and maintain their own copy of the
+cluster map. Therefore, OSDs should have a reasonable amount of processing power
+(e.g., dual core processors). Monitors simply maintain a master copy of the
+cluster map, so they are not CPU intensive. You must also consider whether the
+host machine will run CPU-intensive processes in addition to Ceph daemons. For
+example, if your hosts will run computing VMs (e.g., OpenStack Nova), you will
+need to ensure that these other processes leave sufficient processing power for
+Ceph daemons. We recommend running additional CPU-intensive processes on
+separate hosts.
+
+
+RAM
+===
+
+Generally, more RAM is better.
+
+Monitors and managers (ceph-mon and ceph-mgr)
+---------------------------------------------
+
+Monitor and manager daemon memory usage generally scales with the size of the
+cluster. For small clusters, 1-2 GB is generally sufficient. For
+large clusters, you should provide more (5-10 GB). You may also want
+to consider tuning settings like ``mon_osd_cache_size`` or
+``rocksdb_cache_size``.
+
+Metadata servers (ceph-mds)
+---------------------------
+
+The metadata daemon memory utilization depends on how much memory its cache is
+configured to consume. We recommend 1 GB as a minimum for most systems. See
+``mds_cache_memory``.
+
+OSDs (ceph-osd)
+---------------
+
+By default, OSDs that use the BlueStore backend require 3-5 GB of RAM. You can
+adjust the amount of memory the OSD consumes with the ``osd_memory_target``
+configuration option when BlueStore is in use. When using the legacy FileStore
+backend, the operating system page cache is used for caching data, so no tuning
+is normally needed, and the OSD memory consumption is generally related to the
+number of PGs per daemon in the system.
+
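+As a hedged sketch, raising the target in ``ceph.conf`` might look like this
+(the value is only an example and is expressed in bytes)::
+
+    [osd]
+    # aim for roughly 6 GiB of memory per BlueStore OSD daemon
+    osd memory target = 6442450944
+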
+
+Data Storage
+============
+
+Plan your data storage configuration carefully. There are significant cost and
+performance tradeoffs to consider when planning for data storage. Simultaneous
+OS operations and simultaneous requests for read and write operations from
+multiple daemons against a single drive can slow performance considerably.
+
+.. important:: Since Ceph has to write all data to the journal before it can
+ send an ACK (for XFS at least), having the journal and OSD
+ performance in balance is really important!
+
+
+Hard Disk Drives
+----------------
+
+OSDs should have plenty of hard disk drive space for object data. We recommend a
+minimum hard disk drive size of 1 terabyte. Consider the cost-per-gigabyte
+advantage of larger disks. We recommend dividing the price of the hard disk
+drive by the number of gigabytes to arrive at a cost per gigabyte, because
+larger drives may have a significant impact on the cost-per-gigabyte. For
+example, a 1 terabyte hard disk priced at $75.00 has a cost of $0.07 per
+gigabyte (i.e., $75 / 1024 = 0.0732). By contrast, a 3 terabyte hard disk priced
+at $150.00 has a cost of $0.05 per gigabyte (i.e., $150 / 3072 = 0.0488). In the
+foregoing example, using the 1 terabyte disks would generally increase the cost
+per gigabyte by 40%--rendering your cluster substantially less cost efficient.
+Also, the larger the storage drive capacity, the more memory per Ceph OSD Daemon
+you will need, especially during rebalancing, backfilling and recovery. A
+general rule of thumb is ~1GB of RAM for 1TB of storage space.
+
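+Under this rule of thumb, for example, a host with eight 4TB OSD drives would
+want on the order of 32GB of RAM for its OSD daemons alone, in addition to the
+memory needed by the operating system and any other processes.
+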
+.. tip:: Running multiple OSDs on a single disk--irrespective of partitions--is
+ **NOT** a good idea.
+
+.. tip:: Running an OSD and a monitor or a metadata server on a single
+ disk--irrespective of partitions--is **NOT** a good idea either.
+
+Storage drives are subject to limitations on seek time, access time, read and
+write times, as well as total throughput. These physical limitations affect
+overall system performance--especially during recovery. We recommend using a
+dedicated drive for the operating system and software, and one drive for each
+Ceph OSD Daemon you run on the host. Most "slow OSD" issues arise due to running
+an operating system, multiple OSDs, and/or multiple journals on the same drive.
+Since the cost of troubleshooting performance issues on a small cluster likely
+exceeds the cost of the extra disk drives, you can accelerate your cluster
+design planning by avoiding the temptation to overtax the OSD storage drives.
+
+You may run multiple Ceph OSD Daemons per hard disk drive, but this will likely
+lead to resource contention and diminish the overall throughput. You may store a
+journal and object data on the same drive, but this may increase the time it
+takes to journal a write and ACK to the client. Ceph must write to the journal
+before it can ACK the write.
+
+Ceph best practices dictate that you should run operating systems, OSD data and
+OSD journals on separate drives.
+
+
+Solid State Drives
+------------------
+
+One opportunity for performance improvement is to use solid-state drives (SSDs)
+to reduce random access time and read latency while accelerating throughput.
+SSDs often cost more than 10x as much per gigabyte when compared to a hard disk
+drive, but SSDs often exhibit access times that are at least 100x faster than a
+hard disk drive.
+
+SSDs do not have moving mechanical parts so they are not necessarily subject to
+the same types of limitations as hard disk drives. SSDs do have significant
+limitations though. When evaluating SSDs, it is important to consider the
+performance of sequential reads and writes. An SSD that has 400MB/s sequential
+write throughput may have much better performance than an SSD with 120MB/s of
+sequential write throughput when storing multiple journals for multiple OSDs.
+
+.. important:: We recommend exploring the use of SSDs to improve performance.
+ However, before making a significant investment in SSDs, we **strongly
+ recommend** both reviewing the performance metrics of an SSD and testing the
+ SSD in a test configuration to gauge performance.
+
+Since SSDs have no moving mechanical parts, it makes sense to use them in the
+areas of Ceph that do not use a lot of storage space (e.g., journals).
+Relatively inexpensive SSDs may appeal to your sense of economy. Use caution.
+Acceptable IOPS are not enough when selecting an SSD for use with Ceph. There
+are a few important performance considerations for journals and SSDs:
+
+- **Write-intensive semantics:** Journaling involves write-intensive semantics,
+ so you should ensure that the SSD you choose to deploy will perform equal to
+ or better than a hard disk drive when writing data. Inexpensive SSDs may
+ introduce write latency even as they accelerate access time, because
+ sometimes high performance hard drives can write as fast or faster than
+ some of the more economical SSDs available on the market!
+
+- **Sequential Writes:** When you store multiple journals on an SSD you must
+ consider the sequential write limitations of the SSD too, since they may be
+ handling requests to write to multiple OSD journals simultaneously.
+
+- **Partition Alignment:** A common problem with SSD performance is that
+ people like to partition drives as a best practice, but they often overlook
+ proper partition alignment with SSDs, which can cause SSDs to transfer data
+ much more slowly. Ensure that SSD partitions are properly aligned.
+
+While SSDs are cost prohibitive for object storage, OSDs may see a significant
+performance improvement by storing an OSD's journal on an SSD and the OSD's
+object data on a separate hard disk drive. The ``osd journal`` configuration
+setting defaults to ``/var/lib/ceph/osd/$cluster-$id/journal``. You can mount
+this path to an SSD or to an SSD partition so that it is not merely a file on
+the same disk as the object data.
+
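+As an illustration only, pointing a specific OSD's journal at a dedicated SSD
+partition in ``ceph.conf`` might look like this (the device path is a
+placeholder)::
+
+    [osd.0]
+    # journal on a dedicated SSD partition instead of the data disk
+    osd journal = /dev/disk/by-partlabel/ceph-journal-0
+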
+One way Ceph accelerates CephFS filesystem performance is to segregate the
+storage of CephFS metadata from the storage of the CephFS file contents. Ceph
+provides a default ``metadata`` pool for CephFS metadata. You will never have to
+create a pool for CephFS metadata, but you can create a CRUSH map hierarchy for
+your CephFS metadata pool that points only to a host's SSD storage media. See
+`Mapping Pools to Different Types of OSDs`_ for details.
+
+
+Controllers
+-----------
+
+Disk controllers also have a significant impact on write throughput. Carefully
+consider your selection of disk controllers to ensure that they do not create
+a performance bottleneck.
+
+.. tip:: The `Ceph blog`_ is often an excellent source of information on Ceph
+ performance issues. See `Ceph Write Throughput 1`_ and `Ceph Write
+ Throughput 2`_ for additional details.
+
+
+Additional Considerations
+-------------------------
+
+You may run multiple OSDs per host, but you should ensure that the sum of the
+total throughput of your OSD hard disks doesn't exceed the network bandwidth
+required to service a client's need to read or write data. You should also
+consider what percentage of the overall data the cluster stores on each host. If
+the percentage on a particular host is large and the host fails, it can lead to
+problems such as exceeding the ``full ratio``, which causes Ceph to halt
+operations as a safety precaution that prevents data loss.
+
+When you run multiple OSDs per host, you also need to ensure that the kernel
+is up to date. See `OS Recommendations`_ for notes on ``glibc`` and
+``syncfs(2)`` to ensure that your hardware performs as expected when running
+multiple OSDs per host.
+
+Hosts with high numbers of OSDs (e.g., > 20) may spawn a lot of threads,
+especially during recovery and rebalancing. Many Linux kernels default to
+a relatively small maximum number of threads (e.g., 32k). If you encounter
+problems starting up OSDs on hosts with a high number of OSDs, consider
+setting ``kernel.pid_max`` to a higher number of threads. The theoretical
+maximum is 4,194,303 threads. For example, you could add the following to
+the ``/etc/sysctl.conf`` file::
+
+ kernel.pid_max = 4194303
+
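+To apply the setting without rebooting, you can reload ``/etc/sysctl.conf``
+with::
+
+    sudo sysctl -p
+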
+
+Networks
+========
+
+We recommend that each host have at least two 1Gbps network interface
+controllers (NICs). Since most commodity hard disk drives have a throughput of
+approximately 100MB/second, your NICs should be able to handle the traffic for
+the OSD disks on your host. We recommend a minimum of two NICs to account for a
+public (front-side) network and a cluster (back-side) network. A cluster network
+(preferably not connected to the internet) handles the additional load for data
+replication and helps stop denial of service attacks that prevent the cluster
+from achieving ``active + clean`` states for placement groups as OSDs replicate
+data across the cluster. Consider starting with a 10Gbps network in your racks.
+Replicating 1TB of data across a 1Gbps network takes 3 hours, and 3TBs (a
+typical drive configuration) takes 9 hours. By contrast, with a 10Gbps network,
+the replication times would be 20 minutes and 1 hour respectively. In a
+petabyte-scale cluster, failure of an OSD disk should be an expectation, not an
+exception. System administrators will appreciate PGs recovering from a
+``degraded`` state to an ``active + clean`` state as rapidly as possible, with
+price / performance tradeoffs taken into consideration. Additionally, some
+deployment tools (e.g., Dell's Crowbar) deploy with five different networks,
+but employ VLANs to make hardware and network cabling more manageable. VLANs
+using 802.1q protocol require VLAN-capable NICs and Switches. The added hardware
+expense may be offset by the operational cost savings for network setup and
+maintenance. When using VLANs to handle VM traffic between the cluster
+and compute stacks (e.g., OpenStack, CloudStack, etc.), it is also worth
+considering using 10G Ethernet. Top-of-rack routers for each network also need
+to be able to communicate with spine routers that have even faster
+throughput--e.g., 40Gbps to 100Gbps.
+
+Your server hardware should have a Baseboard Management Controller (BMC).
+Administration and deployment tools may also use BMCs extensively, so consider
+the cost/benefit tradeoff of an out-of-band network for administration.
+Hypervisor SSH access, VM image uploads, OS image installs, management sockets,
+etc. can impose significant loads on a network. Running three networks may seem
+like overkill, but each traffic path represents a potential capacity, throughput
+and/or performance bottleneck that you should carefully consider before
+deploying a large scale data cluster.
+
+
+Failure Domains
+===============
+
+A failure domain is any failure that prevents access to one or more OSDs. That
+could be a stopped daemon on a host, a hard disk failure, an OS crash, a
+malfunctioning NIC, a failed power supply, a network outage, a power outage, and
+so forth. When planning out your hardware needs, you must balance the temptation
+to reduce costs by placing too many responsibilities into too few failure
+domains against the added cost of isolating every potential failure domain.
+
+
+Minimum Hardware Recommendations
+================================
+
+Ceph can run on inexpensive commodity hardware. Small production clusters
+and development clusters can run successfully with modest hardware.
+
++--------------+----------------+-----------------------------------------+
+| Process | Criteria | Minimum Recommended |
++==============+================+=========================================+
+| ``ceph-osd`` | Processor | - 1x 64-bit AMD-64 |
+| | | - 1x 32-bit ARM dual-core or better |
+| +----------------+-----------------------------------------+
+| | RAM | ~1GB for 1TB of storage per daemon |
+| +----------------+-----------------------------------------+
+| | Volume Storage | 1x storage drive per daemon |
+| +----------------+-----------------------------------------+
+| | Journal | 1x SSD partition per daemon (optional) |
+| +----------------+-----------------------------------------+
+| | Network | 2x 1GB Ethernet NICs |
++--------------+----------------+-----------------------------------------+
+| ``ceph-mon`` | Processor | - 1x 64-bit AMD-64 |
+| | | - 1x 32-bit ARM dual-core or better |
+| +----------------+-----------------------------------------+
+| | RAM | 1 GB per daemon |
+| +----------------+-----------------------------------------+
+| | Disk Space | 10 GB per daemon |
+| +----------------+-----------------------------------------+
+| | Network | 2x 1GB Ethernet NICs |
++--------------+----------------+-----------------------------------------+
+| ``ceph-mds`` | Processor | - 1x 64-bit AMD-64 quad-core |
+| | | - 1x 32-bit ARM quad-core |
+| +----------------+-----------------------------------------+
+| | RAM | 1 GB minimum per daemon |
+| +----------------+-----------------------------------------+
+| | Disk Space | 1 MB per daemon |
+| +----------------+-----------------------------------------+
+| | Network | 2x 1GB Ethernet NICs |
++--------------+----------------+-----------------------------------------+
+
+.. tip:: If you are running an OSD with a single disk, create a
+ partition for your volume storage that is separate from the partition
+ containing the OS. Generally, we recommend separate disks for the
+ OS and the volume storage.
+
+
+Production Cluster Examples
+===========================
+
+Production clusters for petabyte scale data storage may also use commodity
+hardware, but should have considerably more memory, processing power and data
+storage to account for heavy traffic loads.
+
+Dell Example
+------------
+
+A recent (2012) Ceph cluster project is using two fairly robust hardware
+configurations for Ceph OSDs, and a lighter configuration for monitors.
+
++----------------+----------------+------------------------------------+
+| Configuration | Criteria | Minimum Recommended |
++================+================+====================================+
+| Dell PE R510 | Processor | 2x 64-bit quad-core Xeon CPUs |
+| +----------------+------------------------------------+
+| | RAM | 16 GB |
+| +----------------+------------------------------------+
+| | Volume Storage | 8x 2TB drives. 1 OS, 7 Storage |
+| +----------------+------------------------------------+
+| | Client Network | 2x 1GB Ethernet NICs |
+| +----------------+------------------------------------+
+| | OSD Network | 2x 1GB Ethernet NICs |
+| +----------------+------------------------------------+
+| | Mgmt. Network | 2x 1GB Ethernet NICs |
++----------------+----------------+------------------------------------+
+| Dell PE R515 | Processor | 1x hex-core Opteron CPU |
+| +----------------+------------------------------------+
+| | RAM | 16 GB |
+| +----------------+------------------------------------+
+| | Volume Storage | 12x 3TB drives. Storage |
+| +----------------+------------------------------------+
+| | OS Storage | 1x 500GB drive. Operating System. |
+| +----------------+------------------------------------+
+| | Client Network | 2x 1GB Ethernet NICs |
+| +----------------+------------------------------------+
+| | OSD Network | 2x 1GB Ethernet NICs |
+| +----------------+------------------------------------+
+| | Mgmt. Network | 2x 1GB Ethernet NICs |
++----------------+----------------+------------------------------------+
+
+
+
+
+.. _Ceph blog: https://ceph.com/community/blog/
+.. _Ceph Write Throughput 1: http://ceph.com/community/ceph-performance-part-1-disk-controller-write-throughput/
+.. _Ceph Write Throughput 2: http://ceph.com/community/ceph-performance-part-2-write-throughput-without-ssd-journals/
+.. _Mapping Pools to Different Types of OSDs: ../../rados/operations/crush-map#placing-different-pools-on-different-osds
+.. _OS Recommendations: ../os-recommendations
diff --git a/doc/start/index.rst b/doc/start/index.rst
new file mode 100644
index 00000000..6c799c00
--- /dev/null
+++ b/doc/start/index.rst
@@ -0,0 +1,47 @@
+============================
+ Installation (ceph-deploy)
+============================
+
+.. raw:: html
+
+ <style type="text/css">div.body h3{margin:5px 0px 0px 0px;}</style>
+ <table cellpadding="10"><colgroup><col width="33%"><col width="33%"><col width="33%"></colgroup><tbody valign="top"><tr><td><h3>Step 1: Preflight</h3>
+
+A :term:`Ceph Client` and a :term:`Ceph Node` may require some basic
+configuration work prior to deploying a Ceph Storage Cluster. You can also
+avail yourself of help by getting involved in the Ceph community.
+
+.. toctree::
+
+ Preflight <quick-start-preflight>
+
+.. raw:: html
+
+ </td><td><h3>Step 2: Storage Cluster</h3>
+
+Once you have completed your preflight checklist, you should be able to begin
+deploying a Ceph Storage Cluster.
+
+.. toctree::
+
+ Storage Cluster Quick Start <quick-ceph-deploy>
+
+
+.. raw:: html
+
+ </td><td><h3>Step 3: Ceph Client(s)</h3>
+
+Most Ceph users don't store objects directly in the Ceph Storage Cluster. They typically use at least one of
+Ceph Block Devices, the Ceph Filesystem, and Ceph Object Storage.
+
+.. toctree::
+
+ Block Device Quick Start <quick-rbd>
+ Filesystem Quick Start <quick-cephfs>
+ Object Storage Quick Start <quick-rgw>
+
+.. raw:: html
+
+ </td></tr></tbody></table>
+
+
diff --git a/doc/start/intro.rst b/doc/start/intro.rst
new file mode 100644
index 00000000..18a41c6f
--- /dev/null
+++ b/doc/start/intro.rst
@@ -0,0 +1,89 @@
+===============
+ Intro to Ceph
+===============
+
+Whether you want to provide :term:`Ceph Object Storage` and/or
+:term:`Ceph Block Device` services to :term:`Cloud Platforms`, deploy
+a :term:`Ceph Filesystem` or use Ceph for another purpose, all
+:term:`Ceph Storage Cluster` deployments begin with setting up each
+:term:`Ceph Node`, your network, and the Ceph Storage Cluster. A Ceph
+Storage Cluster requires at least one Ceph Monitor, Ceph Manager, and
+Ceph OSD (Object Storage Daemon). The Ceph Metadata Server is also
+required when running Ceph Filesystem clients.
+
+.. ditaa::
+
+ +---------------+ +------------+ +------------+ +---------------+
+ | OSDs | | Monitors | | Managers | | MDSs |
+ +---------------+ +------------+ +------------+ +---------------+
+
+- **Monitors**: A :term:`Ceph Monitor` (``ceph-mon``) maintains maps
+ of the cluster state, including the monitor map, manager map, the
+ OSD map, and the CRUSH map. These maps are critical cluster state
+ required for Ceph daemons to coordinate with each other. Monitors
+ are also responsible for managing authentication between daemons and
+ clients. At least three monitors are normally required for
+ redundancy and high availability.
+
+- **Managers**: A :term:`Ceph Manager` daemon (``ceph-mgr``) is
+ responsible for keeping track of runtime metrics and the current
+ state of the Ceph cluster, including storage utilization, current
+ performance metrics, and system load. The Ceph Manager daemons also
+ host python-based modules to manage and expose Ceph cluster
+ information, including a web-based :ref:`mgr-dashboard` and
+ `REST API`_. At least two managers are normally required for high
+ availability.
+
+- **Ceph OSDs**: A :term:`Ceph OSD` (object storage daemon,
+ ``ceph-osd``) stores data, handles data replication, recovery,
+ rebalancing, and provides some monitoring information to Ceph
+ Monitors and Managers by checking other Ceph OSD Daemons for a
+ heartbeat. At least 3 Ceph OSDs are normally required for redundancy
+ and high availability.
+
+- **MDSs**: A :term:`Ceph Metadata Server` (MDS, ``ceph-mds``) stores
+ metadata on behalf of the :term:`Ceph Filesystem` (i.e., Ceph Block
+ Devices and Ceph Object Storage do not use MDS). Ceph Metadata
+ Servers allow POSIX file system users to execute basic commands (like
+ ``ls``, ``find``, etc.) without placing an enormous burden on the
+ Ceph Storage Cluster.
+
+Ceph stores data as objects within logical storage pools. Using the
+:term:`CRUSH` algorithm, Ceph calculates which placement group should
+contain the object, and further calculates which Ceph OSD Daemon
+should store the placement group. The CRUSH algorithm enables the
+Ceph Storage Cluster to scale, rebalance, and recover dynamically.
+
+.. _REST API: ../../mgr/restful
+
+.. raw:: html
+
+ <style type="text/css">div.body h3{margin:5px 0px 0px 0px;}</style>
+ <table cellpadding="10"><colgroup><col width="50%"><col width="50%"></colgroup><tbody valign="top"><tr><td><h3>Recommendations</h3>
+
+To begin using Ceph in production, you should review our hardware
+recommendations and operating system recommendations.
+
+.. toctree::
+ :maxdepth: 2
+
+ Hardware Recommendations <hardware-recommendations>
+ OS Recommendations <os-recommendations>
+
+
+.. raw:: html
+
+ </td><td><h3>Get Involved</h3>
+
+ You can avail yourself of help or contribute documentation, source
+ code, or bug reports by getting involved in the Ceph community.
+
+.. toctree::
+ :maxdepth: 2
+
+ get-involved
+ documenting-ceph
+
+.. raw:: html
+
+ </td></tr></tbody></table>
diff --git a/doc/start/kube-helm.rst b/doc/start/kube-helm.rst
new file mode 100644
index 00000000..a8ae0b4c
--- /dev/null
+++ b/doc/start/kube-helm.rst
@@ -0,0 +1,357 @@
+================================
+Installation (Kubernetes + Helm)
+================================
+
+The ceph-helm_ project enables you to deploy Ceph in a Kubernetes environment.
+This documentation assumes a Kubernetes environment is available.
+
+Current limitations
+===================
+
+ - The public and cluster networks must be the same
+ - If the storage class user id is not admin, you will have to manually create the user
+ in your Ceph cluster and create its secret in Kubernetes
+ - ceph-mgr can only run with 1 replica
+
+Install and start Helm
+======================
+
+Helm can be installed by following these instructions_.
+
+Helm finds the Kubernetes cluster by reading from the local Kubernetes config file; make sure this is downloaded and accessible to the ``helm`` client.
+
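+For example, if the config file is not in the default location, you might point
+both ``kubectl`` and ``helm`` at it explicitly (the path below is only an
+illustration)::
+
+    $ export KUBECONFIG=$HOME/.kube/ceph-cluster-config
+
+Running ``kubectl cluster-info`` is a quick way to confirm that the client can
+reach the cluster.
+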
+A Tiller server must be configured and running for your Kubernetes cluster, and the local Helm client must be connected to it. It may be helpful to look at the Helm documentation for init_. To run Tiller locally and connect Helm to it, run::
+
+ $ helm init
+
+The ceph-helm project uses a local Helm repo by default to store charts. To start a local Helm repo server, run::
+
+ $ helm serve &
+ $ helm repo add local http://localhost:8879/charts
+
+Add ceph-helm to Helm local repos
+==================================
+::
+
+ $ git clone https://github.com/ceph/ceph-helm
+ $ cd ceph-helm/ceph
+ $ make
+
+Configure your Ceph cluster
+===========================
+
+Create a ``ceph-overrides.yaml`` file that will contain your Ceph configuration. This file may exist anywhere, but for this document it is assumed to reside in the user's home directory. ::
+
+ $ cat ~/ceph-overrides.yaml
+ network:
+ public: 172.21.0.0/20
+ cluster: 172.21.0.0/20
+
+ osd_devices:
+ - name: dev-sdd
+ device: /dev/sdd
+ zap: "1"
+ - name: dev-sde
+ device: /dev/sde
+ zap: "1"
+
+ storageclass:
+ name: ceph-rbd
+ pool: rbd
+ user_id: k8s
+
+.. note:: If the journal is not set, it will be colocated with the device.
+
+.. note:: The ``ceph-helm/ceph/ceph/values.yaml`` file contains the full
+ list of options that can be set.
+
+Create the Ceph cluster namespace
+==================================
+
+By default, ceph-helm components assume they are to be run in the ``ceph`` Kubernetes namespace. To create the namespace, run::
+
+ $ kubectl create namespace ceph
+
+Configure RBAC permissions
+==========================
+
+Kubernetes v1.6 and later enables RBAC authorization by default. ceph-helm provides RBAC roles and permissions for each component::
+
+ $ kubectl create -f ~/ceph-helm/ceph/rbac.yaml
+
+The ``rbac.yaml`` file assumes that the Ceph cluster will be deployed in the ``ceph`` namespace.
+
+Label kubelets
+==============
+
+The following labels need to be set to deploy a Ceph cluster:
+ - ceph-mon=enabled
+ - ceph-mgr=enabled
+ - ceph-osd=enabled
+ - ceph-osd-device-<name>=enabled
+
+The ``ceph-osd-device-<name>`` label is created from the ``osd_devices`` name values defined in our ``ceph-overrides.yaml``.
+From the example above we will have the following two labels: ``ceph-osd-device-dev-sdd`` and ``ceph-osd-device-dev-sde``.
+
+For each Ceph Monitor::
+
+ $ kubectl label node <nodename> ceph-mon=enabled ceph-mgr=enabled
+
+For each OSD node::
+
+    $ kubectl label node <nodename> ceph-osd=enabled ceph-osd-device-dev-sdd=enabled ceph-osd-device-dev-sde=enabled
+
+Ceph Deployment
+===============
+
+Run the helm install command to deploy Ceph::
+
+ $ helm install --name=ceph local/ceph --namespace=ceph -f ~/ceph-overrides.yaml
+ NAME: ceph
+ LAST DEPLOYED: Wed Oct 18 22:25:06 2017
+ NAMESPACE: ceph
+ STATUS: DEPLOYED
+
+ RESOURCES:
+ ==> v1/Secret
+ NAME TYPE DATA AGE
+ ceph-keystone-user-rgw Opaque 7 1s
+
+ ==> v1/ConfigMap
+ NAME DATA AGE
+ ceph-bin-clients 2 1s
+ ceph-bin 24 1s
+ ceph-etc 1 1s
+ ceph-templates 5 1s
+
+ ==> v1/Service
+ NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ ceph-mon None <none> 6789/TCP 1s
+ ceph-rgw 10.101.219.239 <none> 8088/TCP 1s
+
+ ==> v1beta1/DaemonSet
+ NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE-SELECTOR AGE
+ ceph-mon 3 3 0 3 0 ceph-mon=enabled 1s
+ ceph-osd-dev-sde 3 3 0 3 0 ceph-osd-device-dev-sde=enabled,ceph-osd=enabled 1s
+ ceph-osd-dev-sdd 3 3 0 3 0 ceph-osd-device-dev-sdd=enabled,ceph-osd=enabled 1s
+
+ ==> v1beta1/Deployment
+ NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
+ ceph-mds 1 1 1 0 1s
+ ceph-mgr 1 1 1 0 1s
+ ceph-mon-check 1 1 1 0 1s
+ ceph-rbd-provisioner 2 2 2 0 1s
+ ceph-rgw 1 1 1 0 1s
+
+ ==> v1/Job
+ NAME DESIRED SUCCESSFUL AGE
+ ceph-mgr-keyring-generator 1 0 1s
+ ceph-mds-keyring-generator 1 0 1s
+ ceph-osd-keyring-generator 1 0 1s
+ ceph-rgw-keyring-generator 1 0 1s
+ ceph-mon-keyring-generator 1 0 1s
+ ceph-namespace-client-key-generator 1 0 1s
+ ceph-storage-keys-generator 1 0 1s
+
+ ==> v1/StorageClass
+ NAME TYPE
+ ceph-rbd ceph.com/rbd
+
+The output from helm install shows us the different types of resources that will be deployed.
+
+A StorageClass named ``ceph-rbd`` of type ``ceph.com/rbd`` will be created along with the ``ceph-rbd-provisioner`` Pods. These
+allow an RBD to be provisioned automatically upon creation of a PVC. RBDs will also be formatted when mapped for the first
+time. All RBDs use the ext4 filesystem, because ``ceph.com/rbd`` does not support the ``fsType`` option.
+By default, RBDs use image format 2 and the layering feature. You can override the following StorageClass defaults in your values file::
+
+ storageclass:
+ name: ceph-rbd
+ pool: rbd
+ user_id: k8s
+ user_secret_name: pvc-ceph-client-key
+ image_format: "2"
+ image_features: layering
+
+Check that all Pods are running with the command below. This might take a few minutes::
+
+ $ kubectl -n ceph get pods
+ NAME READY STATUS RESTARTS AGE
+ ceph-mds-3804776627-976z9 0/1 Pending 0 1m
+ ceph-mgr-3367933990-b368c 1/1 Running 0 1m
+ ceph-mon-check-1818208419-0vkb7 1/1 Running 0 1m
+ ceph-mon-cppdk 3/3 Running 0 1m
+ ceph-mon-t4stn 3/3 Running 0 1m
+ ceph-mon-vqzl0 3/3 Running 0 1m
+ ceph-osd-dev-sdd-6dphp 1/1 Running 0 1m
+ ceph-osd-dev-sdd-6w7ng 1/1 Running 0 1m
+ ceph-osd-dev-sdd-l80vv 1/1 Running 0 1m
+ ceph-osd-dev-sde-6dq6w 1/1 Running 0 1m
+ ceph-osd-dev-sde-kqt0r 1/1 Running 0 1m
+ ceph-osd-dev-sde-lp2pf 1/1 Running 0 1m
+ ceph-rbd-provisioner-2099367036-4prvt 1/1 Running 0 1m
+ ceph-rbd-provisioner-2099367036-h9kw7 1/1 Running 0 1m
+ ceph-rgw-3375847861-4wr74 0/1 Pending 0 1m
+
+.. note:: The MDS and RGW Pods are pending since we did not label any nodes with
+ ``ceph-rgw=enabled`` or ``ceph-mds=enabled``
+
+Once all Pods are running, check the status of the Ceph cluster from one Mon::
+
+ $ kubectl -n ceph exec -ti ceph-mon-cppdk -c ceph-mon -- ceph -s
+ cluster:
+ id: e8f9da03-c2d2-4ad3-b807-2a13d0775504
+ health: HEALTH_OK
+
+ services:
+ mon: 3 daemons, quorum mira115,mira110,mira109
+ mgr: mira109(active)
+ osd: 6 osds: 6 up, 6 in
+
+ data:
+ pools: 0 pools, 0 pgs
+ objects: 0 objects, 0 bytes
+ usage: 644 MB used, 5555 GB / 5556 GB avail
+ pgs:
+
+Configure a Pod to use a PersistentVolume from Ceph
+===================================================
+
+Create a keyring for the ``k8s`` user defined in ``~/ceph-overrides.yaml`` and convert
+it to base64::
+
+ $ kubectl -n ceph exec -ti ceph-mon-cppdk -c ceph-mon -- bash
+ # ceph auth get-or-create-key client.k8s mon 'allow r' osd 'allow rwx pool=rbd' | base64
+ QVFCLzdPaFoxeUxCRVJBQUVEVGdHcE9YU3BYMVBSdURHUEU0T0E9PQo=
+ # exit
+
+Edit the user secret present in the ``ceph`` namespace::
+
+ $ kubectl -n ceph edit secrets/pvc-ceph-client-key
+
+Replace the ``key`` value below with your own base64-encoded key, then save::
+
+ apiVersion: v1
+ data:
+ key: QVFCLzdPaFoxeUxCRVJBQUVEVGdHcE9YU3BYMVBSdURHUEU0T0E9PQo=
+ kind: Secret
+ metadata:
+ creationTimestamp: 2017-10-19T17:34:04Z
+ name: pvc-ceph-client-key
+ namespace: ceph
+ resourceVersion: "8665522"
+ selfLink: /api/v1/namespaces/ceph/secrets/pvc-ceph-client-key
+ uid: b4085944-b4f3-11e7-add7-002590347682
+ type: kubernetes.io/rbd
+
+We are going to create a Pod in the default namespace that consumes an RBD.
+Copy the user secret from the ``ceph`` namespace to ``default``::
+
+ $ kubectl -n ceph get secrets/pvc-ceph-client-key -o json | jq '.metadata.namespace = "default"' | kubectl create -f -
+ secret "pvc-ceph-client-key" created
+ $ kubectl get secrets
+ NAME TYPE DATA AGE
+ default-token-r43wl kubernetes.io/service-account-token 3 61d
+ pvc-ceph-client-key kubernetes.io/rbd 1 20s
+
+Create and initialize the RBD pool::
+
+ $ kubectl -n ceph exec -ti ceph-mon-cppdk -c ceph-mon -- ceph osd pool create rbd 256
+ pool 'rbd' created
+ $ kubectl -n ceph exec -ti ceph-mon-cppdk -c ceph-mon -- rbd pool init rbd
+
+.. important:: Kubernetes uses the RBD kernel module to map RBDs to hosts. Luminous requires
+ CRUSH_TUNABLES 5 (Jewel). The minimal kernel version for these tunables is 4.5.
+ If your kernel does not support these tunables, run ``ceph osd crush tunables hammer``
+
+
+.. important:: Since RBDs are mapped on the host system, hosts need to be able to resolve
+   the ceph-mon.ceph.svc.cluster.local name managed by the kube-dns service. To get the
+   IP address of the kube-dns service, run ``kubectl -n kube-system get svc/kube-dns``.
+
+Create a PVC::
+
+ $ cat pvc-rbd.yaml
+ kind: PersistentVolumeClaim
+ apiVersion: v1
+ metadata:
+ name: ceph-pvc
+ spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 20Gi
+ storageClassName: ceph-rbd
+
+ $ kubectl create -f pvc-rbd.yaml
+ persistentvolumeclaim "ceph-pvc" created
+ $ kubectl get pvc
+ NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
+ ceph-pvc Bound pvc-1c2ada50-b456-11e7-add7-002590347682 20Gi RWO ceph-rbd 3s
+
+You can check that the RBD has been created on your cluster::
+
+ $ kubectl -n ceph exec -ti ceph-mon-cppdk -c ceph-mon -- rbd ls
+ kubernetes-dynamic-pvc-1c2e9442-b456-11e7-9bd2-2a4159ce3915
+ $ kubectl -n ceph exec -ti ceph-mon-cppdk -c ceph-mon -- rbd info kubernetes-dynamic-pvc-1c2e9442-b456-11e7-9bd2-2a4159ce3915
+ rbd image 'kubernetes-dynamic-pvc-1c2e9442-b456-11e7-9bd2-2a4159ce3915':
+ size 20480 MB in 5120 objects
+ order 22 (4096 kB objects)
+ block_name_prefix: rbd_data.10762ae8944a
+ format: 2
+ features: layering
+ flags:
+ create_timestamp: Wed Oct 18 22:45:59 2017
+
+Create a Pod that will use the PVC::
+
+ $ cat pod-with-rbd.yaml
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: mypod
+ spec:
+ containers:
+ - name: busybox
+ image: busybox
+ command:
+ - sleep
+ - "3600"
+ volumeMounts:
+ - mountPath: "/mnt/rbd"
+ name: vol1
+ volumes:
+ - name: vol1
+ persistentVolumeClaim:
+ claimName: ceph-pvc
+
+ $ kubectl create -f pod-with-rbd.yaml
+ pod "mypod" created
+
+Check the Pod::
+
+ $ kubectl get pods
+ NAME READY STATUS RESTARTS AGE
+ mypod 1/1 Running 0 17s
+ $ kubectl exec mypod -- mount | grep rbd
+ /dev/rbd0 on /mnt/rbd type ext4 (rw,relatime,stripe=1024,data=ordered)
+
+Logging
+=======
+
+OSD and Monitor logs can be accessed via the ``kubectl logs [-f]`` command. Monitors have multiple streams of logging;
+each stream is accessible from a separate container running in the ceph-mon Pod.
+
+There are three containers running in the ceph-mon Pod:
+ - ceph-mon, equivalent of ceph-mon.hostname.log on bare metal
+ - cluster-audit-log-tailer, equivalent of ceph.audit.log on bare metal
+ - cluster-log-tailer, equivalent of ceph.log on bare metal or ``ceph -w``
+
+Each container is accessible via the ``--container`` or ``-c`` option.
+For instance, to access the cluster log (the ``cluster-log-tailer`` container), one can run::
+
+ $ kubectl -n ceph logs ceph-mon-cppdk -c cluster-log-tailer
+
+.. _ceph-helm: https://github.com/ceph/ceph-helm/
+.. _instructions: https://github.com/kubernetes/helm/blob/master/docs/install.md
+.. _init: https://github.com/kubernetes/helm/blob/master/docs/helm/helm_init.md
diff --git a/doc/start/os-recommendations.rst b/doc/start/os-recommendations.rst
new file mode 100644
index 00000000..84226c3f
--- /dev/null
+++ b/doc/start/os-recommendations.rst
@@ -0,0 +1,154 @@
+====================
+ OS Recommendations
+====================
+
+Ceph Dependencies
+=================
+
+As a general rule, we recommend deploying Ceph on newer releases of Linux.
+We also recommend deploying on releases with long-term support.
+
+Linux Kernel
+------------
+
+- **Ceph Kernel Client**
+
+ If you are using the kernel client to map RBD block devices or mount
+ CephFS, the general advice is to use a "stable" or "longterm
+ maintenance" kernel series provided by either http://kernel.org or
+ your Linux distribution on any client hosts.
+
+ For RBD, if you choose to *track* long-term kernels, we currently recommend
+ 4.x-based "longterm maintenance" kernel series:
+
+ - 4.14.z
+ - 4.9.z
+
+ For CephFS, see `CephFS best practices`_ for kernel version guidance.
+
+ Older kernel client versions may not support your `CRUSH tunables`_ profile
+ or other newer features of the Ceph cluster, requiring the storage cluster
+ to be configured with those features disabled.
+
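+  For example, a quick way to check a client host's kernel and the cluster's
+  current CRUSH tunables (a sketch; the second command needs an admin
+  keyring)::
+
+    uname -r
+    ceph osd crush show-tunables
+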
+
+Platforms
+=========
+
+The charts below show how Ceph's requirements map onto various Linux
+platforms. Generally speaking, there is very little dependence on
+specific distributions aside from the kernel and system initialization
+package (i.e., sysvinit, upstart, systemd).
+
+Luminous (12.2.z)
+-----------------
+
++----------+----------+--------------------+--------------+---------+------------+
+| Distro | Release | Code Name | Kernel | Notes | Testing |
++==========+==========+====================+==============+=========+============+
+| CentOS | 7 | N/A | linux-3.10.0 | 3 | B, I, C |
++----------+----------+--------------------+--------------+---------+------------+
+| Debian | 8.0 | Jessie | linux-3.16.0 | 1, 2 | B, I |
++----------+----------+--------------------+--------------+---------+------------+
+| Debian | 9.0 | Stretch | linux-4.9 | 1, 2 | B, I |
++----------+----------+--------------------+--------------+---------+------------+
+| Fedora | 22 | N/A | linux-3.14.0 | | B, I |
++----------+----------+--------------------+--------------+---------+------------+
+| RHEL | 7 | Maipo | linux-3.10.0 | | B, I |
++----------+----------+--------------------+--------------+---------+------------+
+| Ubuntu | 14.04 | Trusty Tahr | linux-3.13.0 | | B, I, C |
++----------+----------+--------------------+--------------+---------+------------+
+| Ubuntu | 16.04 | Xenial Xerus | linux-4.4.0 | 3 | B, I, C |
++----------+----------+--------------------+--------------+---------+------------+
+
+
+Jewel (10.2.z)
+--------------
+
++----------+----------+--------------------+--------------+---------+------------+
+| Distro | Release | Code Name | Kernel | Notes | Testing |
++==========+==========+====================+==============+=========+============+
+| CentOS | 7 | N/A | linux-3.10.0 | 3 | B, I, C |
++----------+----------+--------------------+--------------+---------+------------+
+| Debian | 8.0 | Jessie | linux-3.16.0 | 1, 2 | B, I |
++----------+----------+--------------------+--------------+---------+------------+
+| Fedora | 22 | N/A | linux-3.14.0 | | B, I |
++----------+----------+--------------------+--------------+---------+------------+
+| RHEL | 7 | Maipo | linux-3.10.0 | | B, I |
++----------+----------+--------------------+--------------+---------+------------+
+| Ubuntu | 14.04 | Trusty Tahr | linux-3.13.0 | | B, I, C |
++----------+----------+--------------------+--------------+---------+------------+
+
+Hammer (0.94.z)
+---------------
+
++----------+----------+--------------------+--------------+---------+------------+
+| Distro | Release | Code Name | Kernel | Notes | Testing |
++==========+==========+====================+==============+=========+============+
+| CentOS | 6 | N/A | linux-2.6.32 | 1, 2 | |
++----------+----------+--------------------+--------------+---------+------------+
+| CentOS | 7 | N/A | linux-3.10.0 | | B, I, C |
++----------+----------+--------------------+--------------+---------+------------+
+| Debian | 7.0 | Wheezy | linux-3.2.0 | 1, 2 | |
++----------+----------+--------------------+--------------+---------+------------+
+| Ubuntu | 12.04 | Precise Pangolin | linux-3.2.0 | 1, 2 | |
++----------+----------+--------------------+--------------+---------+------------+
+| Ubuntu | 14.04 | Trusty Tahr | linux-3.13.0 | | B, I, C |
++----------+----------+--------------------+--------------+---------+------------+
+
+Firefly (0.80.z)
+----------------
+
++----------+----------+--------------------+--------------+---------+------------+
+| Distro | Release | Code Name | Kernel | Notes | Testing |
++==========+==========+====================+==============+=========+============+
+| CentOS | 6 | N/A | linux-2.6.32 | 1, 2 | B, I |
++----------+----------+--------------------+--------------+---------+------------+
+| CentOS | 7 | N/A | linux-3.10.0 | | B |
++----------+----------+--------------------+--------------+---------+------------+
+| Debian | 7.0 | Wheezy | linux-3.2.0 | 1, 2 | B |
++----------+----------+--------------------+--------------+---------+------------+
+| Fedora | 19 | Schrödinger's Cat | linux-3.10.0 | | B |
++----------+----------+--------------------+--------------+---------+------------+
+| Fedora | 20 | Heisenbug | linux-3.14.0 | | B |
++----------+----------+--------------------+--------------+---------+------------+
+| RHEL | 6 | Santiago | linux-2.6.32 | 1, 2 | B, I, C |
++----------+----------+--------------------+--------------+---------+------------+
+| RHEL | 7 | Maipo | linux-3.10.0 | | B, I, C |
++----------+----------+--------------------+--------------+---------+------------+
+| Ubuntu | 12.04 | Precise Pangolin | linux-3.2.0 | 1, 2 | B, I, C |
++----------+----------+--------------------+--------------+---------+------------+
+| Ubuntu | 14.04 | Trusty Tahr | linux-3.13.0 | | B, I, C |
++----------+----------+--------------------+--------------+---------+------------+
+
+Notes
+-----
+
+- **1**: The default kernel has an older version of ``btrfs`` that we do not
+ recommend for ``ceph-osd`` storage nodes. We recommend using ``XFS``.
+
+- **2**: The default kernel has an old Ceph client that we do not recommend
+ for kernel client (kernel RBD or the Ceph file system). Upgrade to a
+ recommended kernel.
+
+- **3**: The default kernel regularly fails in QA when the ``btrfs``
+ file system is used. We do not recommend using ``btrfs`` for
+ backing Ceph OSDs.
+
+
+Testing
+-------
+
+- **B**: We build release packages for this platform. For some of these
+ platforms, we may also continuously build all ceph branches and exercise
+ basic unit tests.
+
+- **I**: We do basic installation and functionality tests of releases on this
+ platform.
+
+- **C**: We run a comprehensive functional, regression, and stress test suite
+ on this platform on a continuous basis. This includes development branches,
+ pre-release, and released code.
+
+.. _CRUSH Tunables: ../../rados/operations/crush-map#tunables
+
+.. _CephFS best practices: ../../cephfs/best-practices
diff --git a/doc/start/quick-ceph-deploy.rst b/doc/start/quick-ceph-deploy.rst
new file mode 100644
index 00000000..0c9e9bc8
--- /dev/null
+++ b/doc/start/quick-ceph-deploy.rst
@@ -0,0 +1,353 @@
+=============================
+ Storage Cluster Quick Start
+=============================
+
+If you haven't completed your `Preflight Checklist`_, do that first. This
+**Quick Start** sets up a :term:`Ceph Storage Cluster` using ``ceph-deploy``
+on your admin node. Create a three Ceph Node cluster so you can
+explore Ceph functionality.
+
+.. include:: quick-common.rst
+
+As a first exercise, create a Ceph Storage Cluster with one Ceph Monitor and three
+Ceph OSD Daemons. Once the cluster reaches an ``active + clean`` state, expand it
+by adding a Metadata Server, two more Ceph Monitors and Managers, and an RGW instance.
+For best results, create a directory on your admin node for maintaining the
+configuration files and keys that ``ceph-deploy`` generates for your cluster. ::
+
+ mkdir my-cluster
+ cd my-cluster
+
+The ``ceph-deploy`` utility will output files to the current directory. Ensure you
+are in this directory when executing ``ceph-deploy``.
+
+.. important:: Do not call ``ceph-deploy`` with ``sudo`` or run it as ``root``
+ if you are logged in as a different user, because it will not issue ``sudo``
+ commands needed on the remote host.
+
+
+Starting over
+=============
+
+If at any point you run into trouble and you want to start over, execute
+the following to purge the Ceph packages and erase all of their data and configuration::
+
+ ceph-deploy purge {ceph-node} [{ceph-node}]
+ ceph-deploy purgedata {ceph-node} [{ceph-node}]
+ ceph-deploy forgetkeys
+ rm ceph.*
+
+If you execute ``purge``, you must re-install Ceph. The last ``rm``
+command removes any files that were written out by ceph-deploy locally
+during a previous installation.
+
+
+Create a Cluster
+================
+
+On your admin node from the directory you created for holding your
+configuration details, perform the following steps using ``ceph-deploy``.
+
+#. Create the cluster. ::
+
+ ceph-deploy new {initial-monitor-node(s)}
+
+ Specify node(s) as hostname, fqdn or hostname:fqdn. For example::
+
+ ceph-deploy new node1
+
+ Check the output of ``ceph-deploy`` with ``ls`` and ``cat`` in the
+ current directory. You should see a Ceph configuration file
+ (``ceph.conf``), a monitor secret keyring (``ceph.mon.keyring``),
+ and a log file for the new cluster. See `ceph-deploy new -h`_ for
+ additional details.
+
+#. If you have more than one network interface, add the ``public network``
+ setting under the ``[global]`` section of your Ceph configuration file.
+ See the `Network Configuration Reference`_ for details. ::
+
+ public network = {ip-address}/{bits}
+
+   For example::
+
+ public network = 10.1.2.0/24
+
+ to use IPs in the 10.1.2.0/24 (or 10.1.2.0/255.255.255.0) network.
+
+#. If you are deploying in an IPv6 environment, run the following command to add
+   the ``ms bind ipv6`` setting to ``ceph.conf`` in the local directory::
+
+ echo ms bind ipv6 = true >> ceph.conf
+
+#. Install Ceph packages::
+
+ ceph-deploy install {ceph-node} [...]
+
+ For example::
+
+ ceph-deploy install node1 node2 node3
+
+ The ``ceph-deploy`` utility will install Ceph on each node.
+
+#. Deploy the initial monitor(s) and gather the keys::
+
+ ceph-deploy mon create-initial
+
+ Once you complete the process, your local directory should have the following
+ keyrings:
+
+ - ``ceph.client.admin.keyring``
+ - ``ceph.bootstrap-mgr.keyring``
+ - ``ceph.bootstrap-osd.keyring``
+ - ``ceph.bootstrap-mds.keyring``
+ - ``ceph.bootstrap-rgw.keyring``
+ - ``ceph.bootstrap-rbd.keyring``
+ - ``ceph.bootstrap-rbd-mirror.keyring``
+
+ .. note:: If this process fails with a message similar to "Unable to
+ find /etc/ceph/ceph.client.admin.keyring", please ensure that the
+ IP listed for the monitor node in ceph.conf is the Public IP, not
+ the Private IP.
+
+#. Use ``ceph-deploy`` to copy the configuration file and admin key to
+ your admin node and your Ceph Nodes so that you can use the ``ceph``
+ CLI without having to specify the monitor address and
+ ``ceph.client.admin.keyring`` each time you execute a command. ::
+
+ ceph-deploy admin {ceph-node(s)}
+
+ For example::
+
+ ceph-deploy admin node1 node2 node3
+
+#. Deploy a manager daemon (required only for luminous and later builds, i.e., >= 12.x)::
+
+     ceph-deploy mgr create node1
+
+#. Add three OSDs. For the purposes of these instructions, we assume you have an
+ unused disk in each node called ``/dev/vdb``. *Be sure that the device is not currently in use and does not contain any important data.* ::
+
+ ceph-deploy osd create --data {device} {ceph-node}
+
+ For example::
+
+ ceph-deploy osd create --data /dev/vdb node1
+ ceph-deploy osd create --data /dev/vdb node2
+ ceph-deploy osd create --data /dev/vdb node3
+
+ .. note:: If you are creating an OSD on an LVM volume, the argument to
+ ``--data`` *must* be ``volume_group/lv_name``, rather than the path to
+ the volume's block device.
+
+#. Check your cluster's health. ::
+
+ ssh node1 sudo ceph health
+
+ Your cluster should report ``HEALTH_OK``. You can view a more complete
+ cluster status with::
+
+ ssh node1 sudo ceph -s
+
+
+Expanding Your Cluster
+======================
+
+Once you have a basic cluster up and running, the next step is to expand the
+cluster. Add a Ceph Metadata Server to ``node1``. Then add a
+Ceph Monitor and Ceph Manager to ``node2`` and ``node3`` to improve reliability and availability.
+
+.. ditaa::
+
+ /------------------\ /----------------\
+ | ceph-deploy | | node1 |
+ | Admin Node | | cCCC |
+ | +-------->+ mon.node1 |
+ | | | osd.0 |
+ | | | mgr.node1 |
+ | | | mds.node1 |
+ \---------+--------/ \----------------/
+ |
+ | /----------------\
+ | | node2 |
+ | | cCCC |
+ +----------------->+ |
+ | | osd.1 |
+ | | mon.node2 |
+ | \----------------/
+ |
+ | /----------------\
+ | | node3 |
+ | | cCCC |
+ +----------------->+ |
+ | osd.2 |
+ | mon.node3 |
+ \----------------/
+
+Add a Metadata Server
+---------------------
+
+To use CephFS, you need at least one metadata server. Execute the following to
+create a metadata server::
+
+ ceph-deploy mds create {ceph-node}
+
+For example::
+
+ ceph-deploy mds create node1
+
+Adding Monitors
+---------------
+
+A Ceph Storage Cluster requires at least one Ceph Monitor and Ceph
+Manager to run. For high availability, Ceph Storage Clusters typically
+run multiple Ceph Monitors so that the failure of a single Ceph
+Monitor will not bring down the Ceph Storage Cluster. Ceph uses the
+Paxos algorithm, which requires a majority of monitors (i.e., greater
+than *N/2* where *N* is the number of monitors) to form a quorum.
+Odd numbers of monitors tend to be better, although this is not required.
+
+.. tip:: If you did not define the ``public network`` option above then
+ the new monitor will not know which IP address to bind to on the
+ new hosts. You can add this line to your ``ceph.conf`` by editing
+ it now and then push it out to each node with
+ ``ceph-deploy --overwrite-conf config push {ceph-nodes}``.
+
+Add two Ceph Monitors to your cluster::
+
+ ceph-deploy mon add {ceph-nodes}
+
+For example::
+
+ ceph-deploy mon add node2 node3
+
+Once you have added your new Ceph Monitors, Ceph will begin synchronizing
+the monitors and form a quorum. You can check the quorum status by executing
+the following::
+
+ ceph quorum_status --format json-pretty
+
+
+.. tip:: When you run Ceph with multiple monitors, you SHOULD install and
+ configure NTP on each monitor host. Ensure that the
+ monitors are NTP peers.
+
+Adding Managers
+---------------
+
+The Ceph Manager daemons operate in an active/standby pattern. Deploying
+additional manager daemons ensures that if one daemon or host fails, another
+one can take over without interrupting service.
+
+To deploy additional manager daemons::
+
+ ceph-deploy mgr create node2 node3
+
+You should see the standby managers in the output from::
+
+ ssh node1 sudo ceph -s
+
+
+Add an RGW Instance
+-------------------
+
+To use the :term:`Ceph Object Gateway` component of Ceph, you must deploy an
+instance of :term:`RGW`. Execute the following to create a new instance of
+RGW::
+
+ ceph-deploy rgw create {gateway-node}
+
+For example::
+
+ ceph-deploy rgw create node1
+
+By default, the :term:`RGW` instance will listen on port 7480. This can be
+changed by editing ``ceph.conf`` on the node running the :term:`RGW` as follows:
+
+.. code-block:: ini
+
+ [client]
+ rgw frontends = civetweb port=80
+
+To use an IPv6 address, use:
+
+.. code-block:: ini
+
+ [client]
+ rgw frontends = civetweb port=[::]:80
+
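+If you keep the master copy of ``ceph.conf`` on your admin node, one way to
+apply the change is to push the updated file and restart the gateway. A sketch,
+assuming the gateway runs on ``node1`` and that the systemd unit follows the
+``ceph-radosgw@rgw.<short-hostname>`` naming used by ``ceph-deploy``-created
+gateways (adjust to your deployment)::
+
+  ceph-deploy --overwrite-conf config push node1
+  ssh node1 sudo systemctl restart ceph-radosgw@rgw.node1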
+
+
+Storing/Retrieving Object Data
+==============================
+
+To store object data in the Ceph Storage Cluster, a Ceph client must:
+
+#. Set an object name
+#. Specify a `pool`_
+
+The Ceph Client retrieves the latest cluster map and the CRUSH algorithm
+calculates how to map the object to a `placement group`_, and then calculates
+how to assign the placement group to a Ceph OSD Daemon dynamically. To find the
+object location, all you need is the object name and the pool name. For
+example::
+
+ ceph osd map {poolname} {object-name}
+
+.. topic:: Exercise: Locate an Object
+
+   As an exercise, let's create an object. Specify an object name, a path to
+   a test file containing some object data, and a pool name using the
+   ``rados put`` command on the command line. For example::
+
+ echo {Test-data} > testfile.txt
+ ceph osd pool create mytest 8
+ rados put {object-name} {file-path} --pool=mytest
+ rados put test-object-1 testfile.txt --pool=mytest
+
+ To verify that the Ceph Storage Cluster stored the object, execute
+ the following::
+
+ rados -p mytest ls
+
+ Now, identify the object location::
+
+ ceph osd map {pool-name} {object-name}
+ ceph osd map mytest test-object-1
+
+ Ceph should output the object's location. For example::
+
+ osdmap e537 pool 'mytest' (1) object 'test-object-1' -> pg 1.d1743484 (1.4) -> up [1,0] acting [1,0]
+
+ To remove the test object, simply delete it using the ``rados rm``
+ command.
+
+ For example::
+
+ rados rm test-object-1 --pool=mytest
+
+ To delete the ``mytest`` pool::
+
+ ceph osd pool rm mytest
+
+ (For safety reasons you will need to supply additional arguments as
+ prompted; deleting pools destroys data.)
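+
+   In recent Ceph releases the monitor refuses to delete a pool unless deletion
+   is explicitly allowed; a sketch of the full form, assuming
+   ``mon_allow_pool_delete`` has been enabled on the monitors::
+
+      ceph osd pool rm mytest mytest --yes-i-really-really-mean-it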
+
+As the cluster evolves, the object location may change dynamically. One benefit
+of Ceph's dynamic rebalancing is that Ceph relieves you from having to perform
+data migration or balancing manually.
+
+
+.. _Preflight Checklist: ../quick-start-preflight
+.. _Ceph Deploy: ../../rados/deployment
+.. _ceph-deploy install -h: ../../rados/deployment/ceph-deploy-install
+.. _ceph-deploy new -h: ../../rados/deployment/ceph-deploy-new
+.. _ceph-deploy osd: ../../rados/deployment/ceph-deploy-osd
+.. _Running Ceph with Upstart: ../../rados/operations/operating#running-ceph-with-upstart
+.. _Running Ceph with sysvinit: ../../rados/operations/operating#running-ceph-with-sysvinit
+.. _CRUSH Map: ../../rados/operations/crush-map
+.. _pool: ../../rados/operations/pools
+.. _placement group: ../../rados/operations/placement-groups
+.. _Monitoring a Cluster: ../../rados/operations/monitoring
+.. _Monitoring OSDs and PGs: ../../rados/operations/monitoring-osd-pg
+.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
+.. _User Management: ../../rados/operations/user-management
diff --git a/doc/start/quick-cephfs.rst b/doc/start/quick-cephfs.rst
new file mode 100644
index 00000000..fda0919a
--- /dev/null
+++ b/doc/start/quick-cephfs.rst
@@ -0,0 +1,119 @@
+===================
+ CephFS Quick Start
+===================
+
+To use the :term:`CephFS` Quick Start guide, you must have executed the
+procedures in the `Storage Cluster Quick Start`_ guide first. Execute this quick
+start on the Admin Host.
+
+Prerequisites
+=============
+
+#. Verify that you have an appropriate version of the Linux kernel.
+ See `OS Recommendations`_ for details. ::
+
+ lsb_release -a
+ uname -r
+
+#. On the admin node, use ``ceph-deploy`` to install Ceph on your
+ ``ceph-client`` node. ::
+
+ ceph-deploy install ceph-client
+
+
+#. Ensure that the :term:`Ceph Storage Cluster` is running and in an ``active +
+ clean`` state. Also, ensure that you have at least one :term:`Ceph Metadata
+ Server` running. ::
+
+ ceph -s [-m {monitor-ip-address}] [-k {path/to/ceph.client.admin.keyring}]
+
+
+Create a Filesystem
+===================
+
+You have already created an MDS (`Storage Cluster Quick Start`_) but it will not
+become active until you create some pools and a filesystem. See :doc:`/cephfs/createfs`.
+
+::
+
+ ceph osd pool create cephfs_data <pg_num>
+ ceph osd pool create cephfs_metadata <pg_num>
+ ceph fs new <fs_name> cephfs_metadata cephfs_data
+
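+For example, a minimal sketch for a small test cluster (the filesystem name
+and ``pg_num`` values are only illustrative)::
+
+    ceph osd pool create cephfs_data 8
+    ceph osd pool create cephfs_metadata 8
+    ceph fs new cephfs cephfs_metadata cephfs_data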
+
+Create a Secret File
+====================
+
+The Ceph Storage Cluster runs with authentication turned on by default.
+You should have a file containing the secret key (i.e., not the keyring
+itself). To obtain the secret key for a particular user, perform the
+following procedure:
+
+#. Identify a key for a user within a keyring file. For example::
+
+ cat ceph.client.admin.keyring
+
+#. Copy the key of the user who will be using the mounted CephFS filesystem.
+ It should look something like this::
+
+ [client.admin]
+ key = AQCj2YpRiAe6CxAA7/ETt7Hcl9IyxyYciVs47w==
+
+#. Open a text editor.
+
+#. Paste the key into an empty file. It should look something like this::
+
+ AQCj2YpRiAe6CxAA7/ETt7Hcl9IyxyYciVs47w==
+
+#. Save the file with the user ``name`` as an attribute
+ (e.g., ``admin.secret``).
+
+#. Restrict the file's permissions so that it is readable by the intended
+   user but not visible to other users (see the example below).
+
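+For example, assuming the secret was saved as ``admin.secret``::
+
+    chmod 600 admin.secret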
+
+Kernel Driver
+=============
+
+Mount CephFS as a kernel driver. ::
+
+ sudo mkdir /mnt/mycephfs
+ sudo mount -t ceph {ip-address-of-monitor}:6789:/ /mnt/mycephfs
+
+The Ceph Storage Cluster uses authentication by default. Specify a user ``name``
+and the ``secretfile`` you created in the `Create a Secret File`_ section. For
+example::
+
+ sudo mount -t ceph 192.168.0.1:6789:/ /mnt/mycephfs -o name=admin,secretfile=admin.secret
+
+
+.. note:: Mount the CephFS filesystem on the admin node,
+ not the server node. See `FAQ`_ for details.
+
+
+Filesystem in User Space (FUSE)
+===============================
+
+Mount CephFS as a Filesystem in User Space (FUSE). ::
+
+ sudo mkdir ~/mycephfs
+ sudo ceph-fuse -m {ip-address-of-monitor}:6789 ~/mycephfs
+
+The Ceph Storage Cluster uses authentication by default. Specify a keyring if it
+is not in the default location (i.e., ``/etc/ceph``)::
+
+ sudo ceph-fuse -k ./ceph.client.admin.keyring -m 192.168.0.1:6789 ~/mycephfs
+
+
+Additional Information
+======================
+
+See `CephFS`_ for additional information. CephFS is not quite as stable
+as the Ceph Block Device and Ceph Object Storage. See `Troubleshooting`_
+if you encounter trouble.
+
+.. _Storage Cluster Quick Start: ../quick-ceph-deploy
+.. _CephFS: ../../cephfs/
+.. _FAQ: http://wiki.ceph.com/How_Can_I_Give_Ceph_a_Try
+.. _Troubleshooting: ../../cephfs/troubleshooting
+.. _OS Recommendations: ../os-recommendations
diff --git a/doc/start/quick-common.rst b/doc/start/quick-common.rst
new file mode 100644
index 00000000..25668f79
--- /dev/null
+++ b/doc/start/quick-common.rst
@@ -0,0 +1,21 @@
+.. ditaa::
+
+ /------------------\ /-----------------\
+ | admin-node | | node1 |
+ | +-------->+ cCCC |
+ | ceph-deploy | | mon.node1 |
+ | | | osd.0 |
+ \---------+--------/ \-----------------/
+ |
+ | /----------------\
+ | | node2 |
+ +----------------->+ cCCC |
+ | | osd.1 |
+ | \----------------/
+ |
+ | /----------------\
+ | | node3 |
+ +----------------->| cCCC |
+ | osd.2 |
+ \----------------/
+
diff --git a/doc/start/quick-rbd.rst b/doc/start/quick-rbd.rst
new file mode 100644
index 00000000..3fb04e8d
--- /dev/null
+++ b/doc/start/quick-rbd.rst
@@ -0,0 +1,96 @@
+==========================
+ Block Device Quick Start
+==========================
+
+To use this guide, you must have executed the procedures in the `Storage
+Cluster Quick Start`_ guide first. Ensure your :term:`Ceph Storage Cluster` is
+in an ``active + clean`` state before working with the :term:`Ceph Block
+Device`.
+
+.. note:: The Ceph Block Device is also known as :term:`RBD` or :term:`RADOS`
+ Block Device.
+
+
+.. ditaa::
+
+ /------------------\ /----------------\
+ | Admin Node | | ceph-client |
+ | +-------->+ cCCC |
+ | ceph-deploy | | ceph |
+ \------------------/ \----------------/
+
+
+You may use a virtual machine for your ``ceph-client`` node, but do not
+execute the following procedures on the same physical node as your Ceph
+Storage Cluster nodes (unless you use a VM). See `FAQ`_ for details.
+
+
+Install Ceph
+============
+
+#. Verify that you have an appropriate version of the Linux kernel.
+ See `OS Recommendations`_ for details. ::
+
+ lsb_release -a
+ uname -r
+
+#. On the admin node, use ``ceph-deploy`` to install Ceph on your
+ ``ceph-client`` node. ::
+
+ ceph-deploy install ceph-client
+
+#. On the admin node, use ``ceph-deploy`` to copy the Ceph configuration file
+ and the ``ceph.client.admin.keyring`` to the ``ceph-client``. ::
+
+ ceph-deploy admin ceph-client
+
+ The ``ceph-deploy`` utility copies the keyring to the ``/etc/ceph``
+ directory. Ensure that the keyring file has appropriate read permissions
+ (e.g., ``sudo chmod +r /etc/ceph/ceph.client.admin.keyring``).
+
+Create a Block Device Pool
+==========================
+
+#. On the admin node, use the ``ceph`` tool to `create a pool`_
+   (we recommend the name 'rbd'); see the example after this list.
+
+#. On the admin node, use the ``rbd`` tool to initialize the pool for use by RBD::
+
+ rbd pool init <pool-name>
+
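+For example, a minimal sketch of both steps (the pool name ``rbd`` and a PG
+count of 8 are only suitable for a small test cluster)::
+
+    ceph osd pool create rbd 8
+    rbd pool init rbd
+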
+Configure a Block Device
+========================
+
+#. On the ``ceph-client`` node, create a block device image. ::
+
+ rbd create foo --size 4096 --image-feature layering [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring] [-p {pool-name}]
+
+#. On the ``ceph-client`` node, map the image to a block device. ::
+
+ sudo rbd map foo --name client.admin [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring] [-p {pool-name}]
+
+#. Use the block device by creating a file system on the ``ceph-client``
+ node. ::
+
+ sudo mkfs.ext4 -m0 /dev/rbd/{pool-name}/foo
+
+ This may take a few moments.
+
+#. Mount the file system on the ``ceph-client`` node. ::
+
+ sudo mkdir /mnt/ceph-block-device
+ sudo mount /dev/rbd/{pool-name}/foo /mnt/ceph-block-device
+ cd /mnt/ceph-block-device
+
+#. Optionally configure the block device to be automatically mapped and mounted
+ at boot (and unmounted/unmapped at shutdown) - see the `rbdmap manpage`_.
+
+
+See `block devices`_ for additional details.
+
+.. _Storage Cluster Quick Start: ../quick-ceph-deploy
+.. _create a pool: ../../rados/operations/pools/#create-a-pool
+.. _block devices: ../../rbd
+.. _FAQ: http://wiki.ceph.com/How_Can_I_Give_Ceph_a_Try
+.. _OS Recommendations: ../os-recommendations
+.. _rbdmap manpage: ../../man/8/rbdmap
diff --git a/doc/start/quick-rgw-old.rst b/doc/start/quick-rgw-old.rst
new file mode 100644
index 00000000..db6474de
--- /dev/null
+++ b/doc/start/quick-rgw-old.rst
@@ -0,0 +1,30 @@
+:orphan:
+
+===========================
+ Quick Ceph Object Storage
+===========================
+
+To use the :term:`Ceph Object Storage` Quick Start guide, you must have executed the
+procedures in the `Storage Cluster Quick Start`_ guide first. Make sure that you
+have at least one :term:`RGW` instance running.
+
+Configure new RGW instance
+==========================
+
+The :term:`RGW` instance created by the `Storage Cluster Quick Start`_ will run using
+the embedded CivetWeb webserver. ``ceph-deploy`` will create the instance and start
+it automatically with default parameters.
+
+To administer the :term:`RGW` instance, see details in the
+`RGW Admin Guide`_.
+
+Additional details may be found in the `Configuring Ceph Object Gateway`_ guide, but
+the steps specific to Apache are no longer needed.
+
+.. note:: Deploying RGW using ``ceph-deploy`` and using the CivetWeb webserver instead
+   of Apache is new functionality as of the **Hammer** release.
+
+
+.. _Storage Cluster Quick Start: ../quick-ceph-deploy
+.. _RGW Admin Guide: ../../radosgw/admin
+.. _Configuring Ceph Object Gateway: ../../radosgw/config-fcgi
diff --git a/doc/start/quick-rgw.rst b/doc/start/quick-rgw.rst
new file mode 100644
index 00000000..5efda04f
--- /dev/null
+++ b/doc/start/quick-rgw.rst
@@ -0,0 +1,101 @@
+===============================
+Ceph Object Gateway Quick Start
+===============================
+
+As of `firefly` (v0.80), Ceph Storage dramatically simplifies installing and
+configuring a Ceph Object Gateway. The Gateway daemon embeds Civetweb, so you
+do not have to install a web server or configure FastCGI. Additionally,
+``ceph-deploy`` can install the gateway package, generate a key, configure a
+data directory and create a gateway instance for you.
+
+.. tip:: Civetweb uses port ``7480`` by default. You must either open port
+ ``7480``, or set the port to a preferred port (e.g., port ``80``) in your Ceph
+ configuration file.
+
+To start a Ceph Object Gateway, follow the steps below:
+
+Installing Ceph Object Gateway
+==============================
+
+#. Execute the pre-installation steps on your ``client-node``. If you intend to
+ use Civetweb's default port ``7480``, you must open it using either
+ ``firewall-cmd`` or ``iptables``. See `Preflight Checklist`_ for more
+ information.
+
+#. From the working directory of your administration server, install the Ceph
+ Object Gateway package on the ``client-node`` node. For example::
+
+ ceph-deploy install --rgw <client-node> [<client-node> ...]
+
+Creating the Ceph Object Gateway Instance
+=========================================
+
+From the working directory of your administration server, create an instance of
+the Ceph Object Gateway on the ``client-node``. For example::
+
+ ceph-deploy rgw create <client-node>
+
+Once the gateway is running, you should be able to access it on port ``7480``
+(e.g., ``http://client-node:7480``).
+
+Configuring the Ceph Object Gateway Instance
+============================================
+
+#. To change the default port (e.g., to port ``80``), modify your Ceph
+ configuration file. Add a section entitled ``[client.rgw.<client-node>]``,
+ replacing ``<client-node>`` with the short node name of your Ceph client
+ node (i.e., ``hostname -s``). For example, if your node name is
+ ``client-node``, add a section like this after the ``[global]`` section::
+
+ [client.rgw.client-node]
+ rgw_frontends = "civetweb port=80"
+
+   .. note:: Ensure that you leave no whitespace between ``port=`` and the port
+      number in the ``rgw_frontends`` key/value pair.
+
+   .. important:: If you intend to use port 80, make sure that the Apache
+      server is not running, otherwise it will conflict with Civetweb. We
+      recommend removing Apache in this case.
+
+#. To make the new port setting take effect, restart the Ceph Object Gateway.
+ On Red Hat Enterprise Linux 7 and Fedora, run the following command::
+
+ sudo systemctl restart ceph-radosgw.service
+
+ On Red Hat Enterprise Linux 6 and Ubuntu, run the following command::
+
+ sudo service radosgw restart id=rgw.<short-hostname>
+
+#. Finally, check to ensure that the port you selected is open on the node's
+ firewall (e.g., port ``80``). If it is not open, add the port and reload the
+ firewall configuration. For example::
+
+ sudo firewall-cmd --list-all
+ sudo firewall-cmd --zone=public --add-port 80/tcp --permanent
+ sudo firewall-cmd --reload
+
+ See `Preflight Checklist`_ for more information on configuring firewall with
+ ``firewall-cmd`` or ``iptables``.
+
+   You should be able to make an unauthenticated request and receive a
+   response. For example, a request with no parameters, like this::
+
+      http://<client-node>:80
+
+   should result in a response like this::
+
+ <?xml version="1.0" encoding="UTF-8"?>
+ <ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
+ <Owner>
+ <ID>anonymous</ID>
+ <DisplayName></DisplayName>
+ </Owner>
+ <Buckets>
+ </Buckets>
+ </ListAllMyBucketsResult>
+
+See the `Configuring Ceph Object Gateway`_ guide for additional administration
+and API details.
+
+.. _Configuring Ceph Object Gateway: ../../radosgw/config-ref
+.. _Preflight Checklist: ../quick-start-preflight
diff --git a/doc/start/quick-start-preflight.rst b/doc/start/quick-start-preflight.rst
new file mode 100644
index 00000000..bfb7b856
--- /dev/null
+++ b/doc/start/quick-start-preflight.rst
@@ -0,0 +1,356 @@
+=====================
+ Preflight Checklist
+=====================
+
+The ``ceph-deploy`` tool operates out of a directory on an admin
+:term:`node`. Any host with network connectivity, a modern Python
+environment, and SSH (such as a Linux host) should work.
+
+In the descriptions below, :term:`Node` refers to a single machine.
+
+.. include:: quick-common.rst
+
+
+Ceph-deploy Setup
+=================
+
+Add Ceph repositories to the ``ceph-deploy`` admin node. Then, install
+``ceph-deploy``.
+
+Debian/Ubuntu
+-------------
+
+For Debian and Ubuntu distributions, perform the following steps:
+
+#. Add the release key::
+
+ wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
+
+#. Add the Ceph packages to your repository. Use the command below and
+   replace ``{ceph-stable-release}`` with a stable Ceph release (e.g.,
+   ``luminous``). For example::
+
+ echo deb https://download.ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
+
+#. Update your repository and install ``ceph-deploy``::
+
+ sudo apt update
+ sudo apt install ceph-deploy
+
+.. note:: You can also use the EU mirror eu.ceph.com for downloading your packages by replacing ``https://ceph.com/`` with ``http://eu.ceph.com/``.
+
+
+RHEL/CentOS
+-----------
+
+For CentOS 7, perform the following steps:
+
+#. On Red Hat Enterprise Linux 7, register the target machine with
+ ``subscription-manager``, verify your subscriptions, and enable the
+ "Extras" repository for package dependencies. For example::
+
+ sudo subscription-manager repos --enable=rhel-7-server-extras-rpms
+
+#. Install and enable the Extra Packages for Enterprise Linux (EPEL)
+ repository::
+
+ sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
+
+ Please see the `EPEL wiki`_ page for more information.
+
+#. Add the Ceph repository to your yum configuration file at ``/etc/yum.repos.d/ceph.repo`` with the following command (run it as ``root``, since it writes to ``/etc/yum.repos.d``). Replace ``{ceph-stable-release}`` with a stable Ceph release (e.g.,
+   ``luminous``). For example::
+
+ cat << EOM > /etc/yum.repos.d/ceph.repo
+ [ceph-noarch]
+ name=Ceph noarch packages
+ baseurl=https://download.ceph.com/rpm-{ceph-stable-release}/el7/noarch
+ enabled=1
+ gpgcheck=1
+ type=rpm-md
+ gpgkey=https://download.ceph.com/keys/release.asc
+ EOM
+
+#. Update your repository and install ``ceph-deploy``::
+
+ sudo yum update
+ sudo yum install ceph-deploy
+
+.. note:: You can also use the EU mirror eu.ceph.com for downloading your packages by replacing ``https://ceph.com/`` with ``http://eu.ceph.com/``.
+
+
+openSUSE
+--------
+
+The Ceph project does not currently publish release RPMs for openSUSE, but
+a stable version of Ceph is included in the default update repository, so
+installing it is just a matter of::
+
+ sudo zypper install ceph
+ sudo zypper install ceph-deploy
+
+If the distro version is out-of-date, open a bug at
+https://bugzilla.opensuse.org/index.cgi and possibly try your luck with one of
+the following repositories:
+
+#. Hammer::
+
+ https://software.opensuse.org/download.html?project=filesystems%3Aceph%3Ahammer&package=ceph
+
+#. Jewel::
+
+ https://software.opensuse.org/download.html?project=filesystems%3Aceph%3Ajewel&package=ceph
+
+
+Ceph Node Setup
+===============
+
+The admin node must have password-less SSH access to Ceph nodes.
+When ceph-deploy logs in to a Ceph node as a user, that particular
+user must have passwordless ``sudo`` privileges.
+
+
+Install NTP
+-----------
+
+We recommend installing NTP on Ceph nodes (especially on Ceph Monitor nodes) to
+prevent issues arising from clock drift. See `Clock`_ for details.
+
+On CentOS / RHEL, execute::
+
+ sudo yum install ntp ntpdate ntp-doc
+
+On Debian / Ubuntu, execute::
+
+ sudo apt install ntp
+
+Ensure that you enable the NTP service. Ensure that each Ceph Node uses the
+same NTP time server. See `NTP`_ for details.
+
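+For example, on a systemd-based distro you might enable and start the service
+like this (the service name is typically ``ntpd`` on CentOS/RHEL and ``ntp`` on
+Debian/Ubuntu; adjust as needed)::
+
+    sudo systemctl enable ntpd
+    sudo systemctl start ntpd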
+
+Install SSH Server
+------------------
+
+For **ALL** Ceph Nodes perform the following steps:
+
+#. Install an SSH server (if necessary) on each Ceph Node::
+
+ sudo apt install openssh-server
+
+ or::
+
+ sudo yum install openssh-server
+
+
+#. Ensure the SSH server is running on **ALL** Ceph Nodes (see the check
+   sketched below).
+
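+For example, on a systemd-based distro you can check the daemon like this (the
+unit is typically ``sshd`` on CentOS/RHEL and ``ssh`` on Debian/Ubuntu)::
+
+    sudo systemctl status sshd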
+
+Create a Ceph Deploy User
+-------------------------
+
+The ``ceph-deploy`` utility must log in to a Ceph node as a user
+that has passwordless ``sudo`` privileges, because it needs to install
+software and configuration files without prompting for passwords.
+
+Recent versions of ``ceph-deploy`` support a ``--username`` option so you can
+specify any user that has password-less ``sudo`` (including ``root``, although
+this is **NOT** recommended). To use ``ceph-deploy --username {username}``, the
+user you specify must have password-less SSH access to the Ceph node, as
+``ceph-deploy`` will not prompt you for a password.
+
+We recommend creating a specific user for ``ceph-deploy`` on **ALL** Ceph nodes
+in the cluster. Please do **NOT** use "ceph" as the user name. A uniform user
+name across the cluster may improve ease of use (not required), but you should
+avoid obvious user names, because hackers typically use them with brute force
+hacks (e.g., ``root``, ``admin``, ``{productname}``). The following procedure,
+substituting ``{username}`` for the user name you define, describes how to
+create a user with passwordless ``sudo``.
+
+.. note:: Starting with the :ref:`Infernalis release <infernalis-release-notes>`, the "ceph" user name is reserved
+ for the Ceph daemons. If the "ceph" user already exists on the Ceph nodes,
+ removing the user must be done before attempting an upgrade.
+
+#. Create a new user on each Ceph Node. ::
+
+ ssh user@ceph-server
+ sudo useradd -d /home/{username} -m {username}
+ sudo passwd {username}
+
+#. For the new user you added to each Ceph node, ensure that the user has
+ ``sudo`` privileges. ::
+
+ echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
+ sudo chmod 0440 /etc/sudoers.d/{username}
+
+
+Enable Password-less SSH
+------------------------
+
+Since ``ceph-deploy`` will not prompt for a password, you must generate
+SSH keys on the admin node and distribute the public key to each Ceph
+node. ``ceph-deploy`` will attempt to generate the SSH keys for initial
+monitors.
+
+#. Generate the SSH keys, but do not use ``sudo`` or the
+ ``root`` user. Leave the passphrase empty::
+
+ ssh-keygen
+
+ Generating public/private key pair.
+ Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
+ Enter passphrase (empty for no passphrase):
+ Enter same passphrase again:
+ Your identification has been saved in /ceph-admin/.ssh/id_rsa.
+ Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.
+
+#. Copy the key to each Ceph Node, replacing ``{username}`` with the user name
+ you created with `Create a Ceph Deploy User`_. ::
+
+ ssh-copy-id {username}@node1
+ ssh-copy-id {username}@node2
+ ssh-copy-id {username}@node3
+
+#. (Recommended) Modify the ``~/.ssh/config`` file of your ``ceph-deploy``
+ admin node so that ``ceph-deploy`` can log in to Ceph nodes as the user you
+ created without requiring you to specify ``--username {username}`` each
+ time you execute ``ceph-deploy``. This has the added benefit of streamlining
+ ``ssh`` and ``scp`` usage. Replace ``{username}`` with the user name you
+ created::
+
+ Host node1
+ Hostname node1
+ User {username}
+ Host node2
+ Hostname node2
+ User {username}
+ Host node3
+ Hostname node3
+ User {username}
+
+
+Enable Networking On Bootup
+---------------------------
+
+Ceph OSDs peer with each other and report to Ceph Monitors over the network.
+If networking is ``off`` by default, the Ceph cluster cannot come online
+during bootup until you enable networking.
+
+The default configuration on some distributions (e.g., CentOS) has the
+networking interface(s) off by default. Ensure that, during boot up, your
+network interface(s) turn(s) on so that your Ceph daemons can communicate over
+the network. For example, on Red Hat and CentOS, navigate to
+``/etc/sysconfig/network-scripts`` and ensure that the ``ifcfg-{iface}`` file
+has ``ONBOOT`` set to ``yes``.
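+
+For example, the relevant line in ``/etc/sysconfig/network-scripts/ifcfg-eth0``
+(the interface name is illustrative) should read::
+
+    ONBOOT=yes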
+
+
+Ensure Connectivity
+-------------------
+
+Ensure connectivity using ``ping`` with short hostnames (``hostname -s``).
+Address hostname resolution issues as necessary.
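+
+For example, from the admin node (using the node names from this guide)::
+
+    ping -c 3 node1
+    ping -c 3 node2
+    ping -c 3 node3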
+
+.. note:: Hostnames should resolve to a network IP address, not to the
+ loopback IP address (e.g., hostnames should resolve to an IP address other
+ than ``127.0.0.1``). If you use your admin node as a Ceph node, you
+ should also ensure that it resolves to its hostname and IP address
+ (i.e., not its loopback IP address).
+
+
+Open Required Ports
+-------------------
+
+Ceph Monitors communicate using port ``6789`` by default. Ceph OSDs communicate
+in a port range of ``6800:7300`` by default. See the `Network Configuration
+Reference`_ for details. Ceph OSDs can use multiple network connections to
+communicate with clients, monitors, other OSDs for replication, and other OSDs
+for heartbeats.
+
+On some distributions (e.g., RHEL), the default firewall configuration is fairly
+strict. You may need to adjust your firewall settings to allow inbound requests so
+that clients in your network can communicate with daemons on your Ceph nodes.
+
+For ``firewalld`` on RHEL 7, add the ``ceph-mon`` service for Ceph Monitor
+nodes and the ``ceph`` service for Ceph OSDs and MDSs to the public zone and
+ensure that you make the settings permanent so that they are enabled on reboot.
+
+For example, on monitors::
+
+ sudo firewall-cmd --zone=public --add-service=ceph-mon --permanent
+
+and on OSDs and MDSs::
+
+ sudo firewall-cmd --zone=public --add-service=ceph --permanent
+
+Once you have finished configuring firewalld with the ``--permanent`` flag, you can make the changes live immediately without rebooting::
+
+ sudo firewall-cmd --reload
+
+For ``iptables``, add port ``6789`` for Ceph Monitors and ports ``6800:7300``
+for Ceph OSDs. For example::
+
+ sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6789 -j ACCEPT
+
+Once you have finished configuring ``iptables``, ensure that you make the
+changes persistent on each node so that they will be in effect when your nodes
+reboot. For example::
+
+ /sbin/service iptables save
+
+TTY
+---
+
+On CentOS and RHEL, you may receive an error while trying to execute
+``ceph-deploy`` commands. If ``requiretty`` is set by default on your Ceph
+nodes, disable it by executing ``sudo visudo`` and locating the ``Defaults
+requiretty`` setting. Change it to ``Defaults:ceph !requiretty`` or comment it
+out to ensure that ``ceph-deploy`` can connect using the user you created with
+`Create a Ceph Deploy User`_.
+
+.. note:: If editing ``/etc/sudoers``, ensure that you use
+ ``sudo visudo`` rather than a text editor.
+
+
+SELinux
+-------
+
+On CentOS and RHEL, SELinux is set to ``Enforcing`` by default. To streamline your
+installation, we recommend setting SELinux to ``Permissive`` or disabling it
+entirely and ensuring that your installation and cluster are working properly
+before hardening your configuration. To set SELinux to ``Permissive``, execute the
+following::
+
+ sudo setenforce 0
+
+To configure SELinux persistently (recommended if SELinux is an issue), modify
+the configuration file at ``/etc/selinux/config``.
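+
+For example, set the following line in ``/etc/selinux/config`` (the change
+takes effect on the next reboot; use ``setenforce 0`` as above for the running
+system)::
+
+    SELINUX=permissive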
+
+
+Priorities/Preferences
+----------------------
+
+Ensure that your package manager has priority/preferences packages installed and
+enabled. On CentOS, you may need to install EPEL. On RHEL, you may need to
+enable optional repositories. ::
+
+ sudo yum install yum-plugin-priorities
+
+For example, on RHEL 7 server, execute the following to install
+``yum-plugin-priorities`` and enable the ``rhel-7-server-optional-rpms``
+repository::
+
+ sudo yum install yum-plugin-priorities --enablerepo=rhel-7-server-optional-rpms
+
+
+Summary
+=======
+
+This completes the Quick Start Preflight. Proceed to the `Storage Cluster
+Quick Start`_.
+
+.. _Storage Cluster Quick Start: ../quick-ceph-deploy
+.. _OS Recommendations: ../os-recommendations
+.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
+.. _Clock: ../../rados/configuration/mon-config-ref#clock
+.. _NTP: http://www.ntp.org/
+.. _Infernalis release: ../../release-notes/#v9-1-0-infernalis-release-candidate
+.. _EPEL wiki: https://fedoraproject.org/wiki/EPEL
diff --git a/doc/start/rgw.conf b/doc/start/rgw.conf
new file mode 100644
index 00000000..e1bee998
--- /dev/null
+++ b/doc/start/rgw.conf
@@ -0,0 +1,30 @@
+FastCgiExternalServer /var/www/s3gw.fcgi -socket /tmp/radosgw.sock
+
+
+<VirtualHost *:80>
+
+ ServerName {fqdn}
+	# Uncomment the following line and add a server alias with *.{fqdn}
+	# for S3 subdomains
+	#ServerAlias *.{fqdn}
+ ServerAdmin {email.address}
+ DocumentRoot /var/www
+ RewriteEngine On
+ RewriteRule ^/([a-zA-Z0-9-_.]*)([/]?.*) /s3gw.fcgi?page=$1&params=$2&%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
+
+ <IfModule mod_fastcgi.c>
+ <Directory /var/www>
+ Options +ExecCGI
+ AllowOverride All
+ SetHandler fastcgi-script
+ Order allow,deny
+ Allow from all
+ AuthBasicAuthoritative Off
+ </Directory>
+ </IfModule>
+
+ AllowEncodedSlashes On
+ ErrorLog /var/log/apache2/error.log
+ CustomLog /var/log/apache2/access.log combined
+ ServerSignature Off
+
+</VirtualHost> \ No newline at end of file