=================================
Developer Guide (Quick)
=================================

This guide describes how to build and test Ceph for development.

Development
-----------

The ``run-make-check.sh`` script installs the Ceph dependencies, compiles
everything in debug mode, and runs a number of tests to verify that the
result behaves as expected.

.. prompt:: bash $

   ./run-make-check.sh

Optionally, if you want to work on a specific component of Ceph, install the
dependencies and build Ceph in debug mode with the required CMake flags.

Example:

.. prompt:: bash $

   ./install-deps.sh
   ./do_cmake.sh -DWITH_MANPAGE=OFF -DWITH_BABELTRACE=OFF -DWITH_MGR_DASHBOARD_FRONTEND=OFF

You can also turn off the build of core components that are not relevant to
your development:

.. prompt:: bash $

   ./do_cmake.sh ... -DWITH_RBD=OFF -DWITH_KRBD=OFF -DWITH_RADOSGW=OFF

Finally, build Ceph:

.. prompt:: bash $

   cmake --build build [--target <target>...]

Omit ``--target <target>...`` if you want to do a full build.
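
For example, to build only the OSD daemon rather than the whole tree (the
target name here is only an illustration; ``cmake --build build --target help``
should list the targets available in your build):

.. prompt:: bash $

   cmake --build build --target ceph-osd
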
Running a development deployment
--------------------------------

Ceph contains a script called ``vstart.sh`` (see also
:doc:`/dev/dev_cluster_deployment`) which allows developers to quickly test
their code using a simple deployment on a development system. Once the build
finishes successfully, start the Ceph deployment with the following commands:

.. prompt:: bash $

   cd build
   ../src/vstart.sh -d -n

You can also configure ``vstart.sh`` to use only one monitor and one metadata
server:

.. prompt:: bash $

   env MON=1 MDS=1 ../src/vstart.sh -d -n -x
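
Once ``vstart.sh`` returns, a quick way to confirm that the cluster is up is to
query its status from the ``build`` directory (a minimal check, not part of the
script itself):

.. prompt:: bash $

   bin/ceph -s
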
Most logs from the cluster can be found in ``build/out``.
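
For example, to follow the log of the first OSD while the cluster runs (the
file names are the usual vstart ones, such as ``osd.0.log`` and ``mon.a.log``;
adjust to whatever appears in ``build/out`` on your system):

.. prompt:: bash $

   tail -f out/osd.0.log
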
The system creates two pools on startup: ``cephfs_data_a`` and
``cephfs_metadata_a``. Let's get some stats on the current pools:

.. code-block:: console

   $ bin/ceph osd pool stats
   *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
   pool cephfs_data_a id 1
     nothing is going on

   pool cephfs_metadata_a id 2
     nothing is going on

   $ bin/ceph osd pool stats cephfs_data_a
   *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
   pool cephfs_data_a id 1
     nothing is going on

   $ bin/rados df
   POOL_NAME         USED  OBJECTS  CLONES  COPIES  MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS  RD  WR_OPS    WR
   cephfs_data_a        0        0       0       0                   0        0         0       0   0       0     0
   cephfs_metadata_a 2246       21       0      63                   0        0         0       0   0      42  8192

   total_objects    21
   total_used       244G
   total_space      1180G

Make a pool and run some benchmarks against it:

.. prompt:: bash $

   bin/ceph osd pool create mypool
   bin/rados -p mypool bench 10 write -b 123
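
If you also want to benchmark reads, the write pass has to leave its objects in
place first; a sketch of that workflow (``--no-cleanup``, ``seq`` and
``cleanup`` are standard ``rados bench`` options, but the exact numbers are
arbitrary):

.. prompt:: bash $

   bin/rados -p mypool bench 10 write -b 123 --no-cleanup
   bin/rados -p mypool bench 10 seq
   bin/rados -p mypool cleanup
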
Place a file into the new pool:

.. prompt:: bash $

   bin/rados -p mypool put objectone <somefile>
   bin/rados -p mypool put objecttwo <anotherfile>

List the objects in the pool:

.. prompt:: bash $

   bin/rados -p mypool ls
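
To read one of the objects back out of the pool (the output path is just an
example):

.. prompt:: bash $

   bin/rados -p mypool get objectone /tmp/objectone.out
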
Once you are done, type the following to stop the development Ceph deployment:

.. prompt:: bash $

   ../src/stop.sh

Resetting your vstart environment
---------------------------------

The vstart script creates ``out/`` and ``dev/`` directories, which contain the
cluster's state. If you want to quickly reset your environment, you might do
something like this:

.. prompt:: bash [build]$

   ../src/stop.sh
   rm -rf out dev
   env MDS=1 MON=1 OSD=3 ../src/vstart.sh -n -d

Running a RadosGW development environment
-----------------------------------------

Set the ``RGW`` environment variable when running ``vstart.sh`` to enable the
RadosGW:

.. prompt:: bash $

   cd build
   RGW=1 ../src/vstart.sh -d -n -x
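
Before trying a client, you can check that the gateway answers HTTP requests;
port 8000 is the one used by the Swift examples below, so adjust it if your
vstart output shows a different address:

.. prompt:: bash $

   curl http://localhost:8000
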
You can now use the Swift Python client to communicate with the RadosGW:

.. prompt:: bash $

   swift -A http://localhost:8000/auth -U test:tester -K testing list
   swift -A http://localhost:8000/auth -U test:tester -K testing upload mycontainer ceph
   swift -A http://localhost:8000/auth -U test:tester -K testing list
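
The same credentials can be used to fetch the uploaded data back; ``download``
writes the container's objects into the current directory:

.. prompt:: bash $

   swift -A http://localhost:8000/auth -U test:tester -K testing download mycontainer
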
Run unit tests
--------------

The tests are located in ``src/test``. To run them, type:

.. prompt:: bash $

   (cd build && ninja check)
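
To run a single test (or a subset) after it has been built, CTest can be
invoked directly from the build directory; the regular expression is a
placeholder for whichever test you are interested in:

.. prompt:: bash $

   (cd build && ctest -R <test_name_regex> --output-on-failure)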