author     Daniel Baumann <daniel.baumann@progress-linux.org>    2024-04-07 18:45:59 +0000
committer  Daniel Baumann <daniel.baumann@progress-linux.org>    2024-04-07 18:45:59 +0000
commit     19fcec84d8d7d21e796c7624e521b60d28ee21ed (patch)
tree       42d26aa27d1e3f7c0b8bd3fd14e7d7082f5008dc /doc/cephfs/index.rst
parent     Initial commit. (diff)
Adding upstream version 16.2.11+ds. (upstream/16.2.11+ds, upstream)
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'doc/cephfs/index.rst')
-rw-r--r--  doc/cephfs/index.rst | 211
1 file changed, 211 insertions, 0 deletions
diff --git a/doc/cephfs/index.rst b/doc/cephfs/index.rst
new file mode 100644
index 000000000..ad3863985
--- /dev/null
+++ b/doc/cephfs/index.rst
@@ -0,0 +1,211 @@

.. _ceph-file-system:

=================
 Ceph File System
=================

The Ceph File System, or **CephFS**, is a POSIX-compliant file system built on
top of Ceph's distributed object store, **RADOS**. CephFS endeavors to provide
a state-of-the-art, multi-use, highly available, and performant file store for
a variety of applications, including traditional use-cases like shared home
directories, HPC scratch space, and distributed workflow shared storage.

CephFS achieves these goals through the use of some novel architectural
choices. Notably, file metadata is stored in a separate RADOS pool from file
data and served via a resizable cluster of *Metadata Servers*, or **MDS**,
which may scale to support higher-throughput metadata workloads. Clients of
the file system have direct access to RADOS for reading and writing file data
blocks. For this reason, workloads may scale linearly with the size of the
underlying RADOS object store; that is, there is no gateway or broker mediating
data I/O for clients.

Access to data is coordinated through the cluster of MDS, which serve as
authorities for the state of the distributed metadata cache cooperatively
maintained by clients and MDS. Mutations to metadata are aggregated by each MDS
into a series of efficient writes to a journal on RADOS; no metadata state is
stored locally by the MDS. This model allows for coherent and rapid
collaboration between clients within the context of a POSIX file system.

.. image:: cephfs-architecture.svg

CephFS is the subject of numerous academic papers for its novel designs and
contributions to file system research. It is the oldest storage interface in
Ceph and was once the primary use-case for RADOS. Now it is joined by two
other storage interfaces to form a modern unified storage system: RBD (Ceph
Block Devices) and RGW (Ceph Object Storage Gateway).


Getting Started with CephFS
^^^^^^^^^^^^^^^^^^^^^^^^^^^

For most deployments of Ceph, setting up a CephFS file system is as simple as:

.. code:: bash

    ceph fs volume create <fs name>

The Ceph `Orchestrator`_ will automatically create and configure MDS for
your file system if the back-end deployment technology supports it (see
`Orchestrator deployment table`_). Otherwise, please `deploy MDS manually
as needed`_.

Finally, to mount CephFS on your client nodes, see the `Mount CephFS:
Prerequisites`_ page. Additionally, a command-line shell utility is available
for interactive access or scripting via the `cephfs-shell`_.

.. _Orchestrator: ../mgr/orchestrator
.. _deploy MDS manually as needed: add-remove-mds
.. _Orchestrator deployment table: ../mgr/orchestrator/#current-implementation-status
.. _Mount CephFS\: Prerequisites: mount-prerequisites
.. _cephfs-shell: cephfs-shell
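
As a concrete starting point, the following minimal sketch creates a file
system, verifies that an MDS has become active, authorizes a client, and
mounts the file system with the kernel driver. It assumes an
orchestrator-managed cluster, a file system named ``cephfs``, an illustrative
CephX user ``client.foo``, and a client node that already has ``ceph.conf``
and the generated keyring under ``/etc/ceph``; adjust the names and paths for
your deployment.

.. code:: bash

    # Create the file system; the orchestrator deploys MDS daemons for it.
    ceph fs volume create cephfs

    # Confirm the file system exists and an MDS rank is active.
    ceph fs ls
    ceph fs status cephfs

    # Authorize an illustrative client (client.foo) for read/write access to
    # the root of the file system and save its keyring for the client node.
    ceph fs authorize cephfs client.foo / rw > /etc/ceph/ceph.client.foo.keyring

    # On the client node: mount with the kernel driver. The mount.ceph helper
    # reads the monitor addresses from ceph.conf and the secret from the
    # keyring in /etc/ceph.
    mkdir -p /mnt/cephfs
    mount -t ceph :/ /mnt/cephfs -o name=foo

The same mount can be performed in user space with ``ceph-fuse --id foo
/mnt/cephfs``; see the `Mount CephFS: Prerequisites`_ page for the full
procedure.
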
.. raw:: html

   <!---

Administration
^^^^^^^^^^^^^^

.. raw:: html

   --->

.. toctree::
   :maxdepth: 1
   :hidden:

   Create a CephFS file system <createfs>
   Administrative commands <administration>
   Creating Multiple File Systems <multifs>
   Provision/Add/Remove MDS(s) <add-remove-mds>
   MDS failover and standby configuration <standby>
   MDS Cache Configuration <cache-configuration>
   MDS Configuration Settings <mds-config-ref>
   Manual: ceph-mds <../../man/8/ceph-mds>
   Export over NFS <nfs>
   Application best practices <app-best-practices>
   FS volume and subvolumes <fs-volumes>
   CephFS Quotas <quota>
   Health messages <health-messages>
   Upgrading old file systems <upgrading>
   CephFS Top Utility <cephfs-top>
   Scheduled Snapshots <snap-schedule>
   CephFS Snapshot Mirroring <cephfs-mirroring>

.. raw:: html

   <!---

Mounting CephFS
^^^^^^^^^^^^^^^

.. raw:: html

   --->

.. toctree::
   :maxdepth: 1
   :hidden:

   Client Configuration Settings <client-config-ref>
   Client Authentication <client-auth>
   Mount CephFS: Prerequisites <mount-prerequisites>
   Mount CephFS using Kernel Driver <mount-using-kernel-driver>
   Mount CephFS using FUSE <mount-using-fuse>
   Mount CephFS on Windows <ceph-dokan>
   Use the CephFS Shell <cephfs-shell>
   Supported Features of Kernel Driver <kernel-features>
   Manual: ceph-fuse <../../man/8/ceph-fuse>
   Manual: mount.ceph <../../man/8/mount.ceph>
   Manual: mount.fuse.ceph <../../man/8/mount.fuse.ceph>

.. raw:: html

   <!---

CephFS Concepts
^^^^^^^^^^^^^^^

.. raw:: html

   --->

.. toctree::
   :maxdepth: 1
   :hidden:

   MDS States <mds-states>
   POSIX compatibility <posix>
   MDS Journaling <mds-journaling>
   File layouts <file-layouts>
   Distributed Metadata Cache <mdcache>
   Dynamic Metadata Management in CephFS <dynamic-metadata-management>
   CephFS IO Path <cephfs-io-path>
   LazyIO <lazyio>
   Directory fragmentation <dirfrags>
   Multiple active MDS daemons <multimds>

.. raw:: html

   <!---

Troubleshooting and Disaster Recovery
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. raw:: html

   --->

.. toctree::
   :hidden:

   Client eviction <eviction>
   Scrubbing the File System <scrub>
   Handling full file systems <full>
   Metadata repair <disaster-recovery-experts>
   Troubleshooting <troubleshooting>
   Disaster recovery <disaster-recovery>
   cephfs-journal-tool <cephfs-journal-tool>
   Recovering file system after monitor store loss <recover-fs-after-mon-store-loss>

.. raw:: html

   <!---

Developer Guides
^^^^^^^^^^^^^^^^

.. raw:: html

   --->

.. toctree::
   :maxdepth: 1
   :hidden:

   Journaler Configuration <journaler>
   Client's Capabilities <capabilities>
   Java and Python bindings <api/index>
   Mantle <mantle>

.. raw:: html

   <!---

Additional Details
^^^^^^^^^^^^^^^^^^

.. raw:: html

   --->

.. toctree::
   :maxdepth: 1
   :hidden:

   Experimental Features <experimental-features>