From e6918187568dbd01842d8d1d2c808ce16a894239 Mon Sep 17 00:00:00 2001
From: Daniel Baumann
Date: Sun, 21 Apr 2024 13:54:28 +0200
Subject: Adding upstream version 18.2.2.

Signed-off-by: Daniel Baumann
---
 doc/cephfs/index.rst | 213 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 213 insertions(+)
 create mode 100644 doc/cephfs/index.rst

diff --git a/doc/cephfs/index.rst b/doc/cephfs/index.rst
new file mode 100644
index 000000000..3d52aef38
--- /dev/null
+++ b/doc/cephfs/index.rst
@@ -0,0 +1,213 @@
+.. _ceph-file-system:
+
+=================
+ Ceph File System
+=================
+
+The Ceph File System, or **CephFS**, is a POSIX-compliant file system built on
+top of Ceph's distributed object store, **RADOS**. CephFS endeavors to provide
+a state-of-the-art, multi-use, highly available, and performant file store for
+a variety of applications, including traditional use cases like shared home
+directories, HPC scratch space, and distributed workflow shared storage.
+
+CephFS achieves these goals through the use of some novel architectural
+choices. Notably, file metadata is stored in a separate RADOS pool from file
+data and served via a resizable cluster of *Metadata Servers*, or **MDS**,
+which may scale to support higher-throughput metadata workloads. Clients of
+the file system have direct access to RADOS for reading and writing file data
+blocks. For this reason, workloads may scale linearly with the size of the
+underlying RADOS object store; that is, there is no gateway or broker mediating
+data I/O for clients.
+
+Access to data is coordinated through the cluster of MDS, which serve as
+authorities for the state of the distributed metadata cache that is
+cooperatively maintained by clients and MDS. Each MDS aggregates mutations to
+metadata into a series of efficient writes to a journal on RADOS; no metadata
+state is stored locally by the MDS. This model allows for coherent and rapid
+collaboration between clients within the context of a POSIX file system.
+
+.. image:: cephfs-architecture.svg
+
+CephFS is the subject of numerous academic papers for its novel designs and
+contributions to file system research. It is the oldest storage interface in
+Ceph and was once the primary use case for RADOS. Now it is joined by two
+other storage interfaces to form a modern unified storage system: RBD (Ceph
+Block Devices) and RGW (Ceph Object Storage Gateway).
+
+
+Getting Started with CephFS
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+For most deployments of Ceph, setting up your first CephFS file system is as
+simple as:
+
+.. prompt:: bash
+
+   # Create a CephFS volume named (for example) "cephfs":
+   ceph fs volume create cephfs
+
+The Ceph `Orchestrator`_ will automatically create and configure MDS for
+your file system if the back-end deployment technology supports it (see the
+`Orchestrator deployment table`_). Otherwise, please `deploy MDS manually
+as needed`_. You can also `create other CephFS volumes`_.
+
+Finally, to mount CephFS on your client nodes, see the `Mount CephFS:
+Prerequisites`_ page. Additionally, a command-line shell utility is available
+for interactive access or scripting via `cephfs-shell`_.
+
+.. _Orchestrator: ../mgr/orchestrator
+.. _deploy MDS manually as needed: add-remove-mds
+.. _create other CephFS volumes: fs-volumes
+.. _Orchestrator deployment table: ../mgr/orchestrator/#current-implementation-status
+.. _Mount CephFS\: Prerequisites: mount-prerequisites
+.. _cephfs-shell: ../man/8/cephfs-shell
+
+.. toctree::
+   :maxdepth: 1
+   :hidden:
+
+   Create a CephFS file system
+   Administrative commands
+   Creating Multiple File Systems
+   Provision/Add/Remove MDS(s)
+   MDS failover and standby configuration
+   MDS Cache Configuration
+   MDS Configuration Settings
+   Manual: ceph-mds <../../man/8/ceph-mds>
+   Export over NFS
+   Application best practices
+   FS volume and subvolumes
+   CephFS Quotas
+   Health messages
+   Upgrading old file systems
+   CephFS Top Utility
+   Scheduled Snapshots
+   CephFS Snapshot Mirroring
+
+.. toctree::
+   :maxdepth: 1
+   :hidden:
+
+   Client Configuration Settings
+   Client Authentication
+   Mount CephFS: Prerequisites
+   Mount CephFS using Kernel Driver
+   Mount CephFS using FUSE
+   Mount CephFS on Windows
+   Use the CephFS Shell <../../man/8/cephfs-shell>
+   Supported Features of Kernel Driver
+   Manual: ceph-fuse <../../man/8/ceph-fuse>
+   Manual: mount.ceph <../../man/8/mount.ceph>
+   Manual: mount.fuse.ceph <../../man/8/mount.fuse.ceph>
+
+.. toctree::
+   :maxdepth: 1
+   :hidden:
+
+   MDS States
+   POSIX compatibility
+   MDS Journaling
+   File layouts
+   Distributed Metadata Cache
+   Dynamic Metadata Management in CephFS
+   CephFS IO Path
+   LazyIO
+   Directory fragmentation
+   Multiple active MDS daemons
+
+.. toctree::
+   :hidden:
+
+   Client eviction
+   Scrubbing the File System
+   Handling full file systems
+   Metadata repair
+   Troubleshooting
+   Disaster recovery
+   cephfs-journal-tool
+   Recovering file system after monitor store loss
+
+.. toctree::
+   :maxdepth: 1
+   :hidden:
+
+   Journaler Configuration
+   Client's Capabilities
+   Java and Python bindings
+   Mantle
+
+.. toctree::
+   :maxdepth: 1
+   :hidden:
+
+   Experimental Features
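
The Getting Started section in the page above stops once the volume has been
created and points to the mount documentation. A minimal sketch of the usual
next steps follows, assuming the file system is named ``cephfs`` as in the
example, that the Orchestrator has already started an MDS for it, and that the
client name ``client.alice`` and the mount point ``/mnt/cephfs`` are
illustrative placeholders:

.. prompt:: bash

   # Confirm that an MDS has come up and is active for the new file system:
   ceph fs status cephfs

   # Create a CephX user allowed to read and write the root of "cephfs".
   # The command prints a keyring that must be copied to the client host
   # (along with a minimal ceph.conf):
   ceph fs authorize cephfs client.alice / rw

   # On the client, mount with the kernel driver (device-string syntax of
   # recent kernels/mount.ceph; older clients use a monitor address) ...
   mkdir -p /mnt/cephfs
   mount -t ceph alice@.cephfs=/ /mnt/cephfs

   # ... or mount with the FUSE client instead:
   ceph-fuse --id alice /mnt/cephfs

The kernel driver, FUSE, and client authentication pages listed in the tables
of contents above cover these steps and their variants in detail.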
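
To make the pool layout described in the architecture overview concrete: every
CephFS file system is backed by one RADOS pool for metadata and one or more
pools for file data, and ``ceph fs volume create`` provisions both. A quick,
read-only way to see this split (the pool names in the comment below are
typical for a volume named ``cephfs`` but may differ between releases):

.. prompt:: bash

   # List file systems together with their backing pools, e.g.
   # "name: cephfs, metadata pool: cephfs.cephfs.meta, data pools: [cephfs.cephfs.data]":
   ceph fs ls

   # Report capacity and usage per pool, with the metadata and data pools
   # listed separately:
   ceph df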