.. _ceph-filesystem:

=================
 Ceph Filesystem
=================

The Ceph Filesystem (CephFS) is a POSIX-compliant filesystem that uses
a Ceph Storage Cluster to store its data. CephFS runs on the same Ceph
Storage Cluster that serves Ceph Block Devices, Ceph Object Storage (with its
S3 and Swift APIs), and native bindings (librados).

.. note:: If you are evaluating CephFS for the first time, please review
          the best practices for deployment: :doc:`/cephfs/best-practices`

.. ditaa::
            +-----------------------+  +------------------------+
            |                       |  |      CephFS FUSE       |
            |                       |  +------------------------+
            |                       |
            |                       |  +------------------------+
            |  CephFS Kernel Object |  |     CephFS Library     |
            |                       |  +------------------------+
            |                       |
            |                       |  +------------------------+
            |                       |  |        librados        |
            +-----------------------+  +------------------------+

            +---------------+ +---------------+ +---------------+
            |      OSDs     | |      MDSs     | |    Monitors   |
            +---------------+ +---------------+ +---------------+


Using CephFS
============

Using the Ceph Filesystem requires at least one :term:`Ceph Metadata Server` in
your Ceph Storage Cluster.



.. raw:: html

	<style type="text/css">div.body h3{margin:5px 0px 0px 0px;}</style>
	<table cellpadding="10"><colgroup><col width="33%"><col width="33%"><col width="33%"></colgroup><tbody valign="top"><tr><td><h3>Step 1: Metadata Server</h3>

To run the Ceph Filesystem, your Ceph Storage Cluster must include at
least one running :term:`Ceph Metadata Server`.


.. toctree:: 
	:maxdepth: 1

	Provision/Add/Remove MDS(s) <add-remove-mds>
	MDS failover and standby configuration <standby>
	MDS Configuration Settings <mds-config-ref>
	Client Configuration Settings <client-config-ref>
	Journaler Configuration <journaler>
	Manpage ceph-mds <../../man/8/ceph-mds>
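
The pages above cover provisioning an MDS; once a daemon is deployed, you can
confirm that it is active before moving on to mounting. The following is a
minimal sketch (daemon and filesystem names will vary with your deployment):

.. code-block:: bash

	# Show the state of all MDS daemons known to the cluster;
	# a healthy deployment reports at least one daemon in up:active.
	ceph mds stat

	# More detailed per-rank and standby information.
	ceph fs status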

.. raw:: html 

	</td><td><h3>Step 2: Mount CephFS</h3>

Once you have a healthy Ceph Storage Cluster with at least
one Ceph Metadata Server, you may create and mount your Ceph Filesystem.
Ensure that your client has network connectivity and the proper
authentication keyring.

.. toctree:: 
	:maxdepth: 1

	Create a CephFS file system <createfs>
	Mount CephFS <kernel>
	Mount CephFS as FUSE <fuse>
	Mount CephFS in fstab <fstab>
	Use the CephFS Shell <cephfs-shell>
	Supported Features of Kernel Driver <kernel-features>
	Manpage ceph-fuse <../../man/8/ceph-fuse>
	Manpage mount.ceph <../../man/8/mount.ceph>
	Manpage mount.fuse.ceph <../../man/8/mount.fuse.ceph>
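
The pages above walk through filesystem creation and each mount method in
detail; end to end, the procedure can be sketched as follows. The pool names,
placement-group count, monitor address, and mountpoint below are placeholders
for illustration:

.. code-block:: bash

	# Create the data and metadata pools, then the filesystem itself.
	ceph osd pool create cephfs_data 64
	ceph osd pool create cephfs_metadata 64
	ceph fs new cephfs cephfs_metadata cephfs_data

	# Mount with the kernel client, authenticating as client.admin.
	sudo mkdir -p /mnt/cephfs
	sudo mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
	    -o name=admin,secretfile=/etc/ceph/admin.secret

	# Alternatively, mount with the FUSE client.
	sudo ceph-fuse /mnt/cephfs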


.. raw:: html 

	</td><td><h3>Additional Details</h3>

.. toctree:: 
    :maxdepth: 1

    Deployment best practices <best-practices>
    MDS States <mds-states>
    Administrative commands <administration>
    Understanding MDS Cache Size Limits <cache-size-limits>
    POSIX compatibility <posix>
    Experimental Features <experimental-features>
    CephFS Quotas <quota>
    Using Ceph with Hadoop <hadoop>
    cephfs-journal-tool <cephfs-journal-tool>
    File layouts <file-layouts>
    Client eviction <eviction>
    Handling full filesystems <full>
    Health messages <health-messages>
    Troubleshooting <troubleshooting>
    Disaster recovery <disaster-recovery>
    Client authentication <client-auth>
    Upgrading old filesystems <upgrading>
    Configuring directory fragmentation <dirfrags>
    Configuring multiple active MDS daemons <multimds>
    Export over NFS <nfs>
    Application best practices <app-best-practices>
    Scrub <scrub>
    LazyIO <lazyio>
    FS volume and subvolumes <fs-volumes>

.. toctree::
    :hidden:

    Advanced: Metadata repair <disaster-recovery-experts>

.. raw:: html

	</td></tr></tbody></table>

For developers
==============

.. toctree:: 
    :maxdepth: 1

    Client's Capabilities <capabilities>
    libcephfs <../../api/libcephfs-java/>
    Mantle <mantle>