=================================
 Deploying a development cluster
=================================

When developing Ceph, a utility called
*vstart.sh* lets you deploy a fake local cluster for development purposes.

Usage
=====

*vstart.sh* deploys a fake local cluster on your machine for development purposes. It starts the rgw, mon, osd and/or mds daemons you request, or all of them if none is specified.

To start your development cluster, type the following::

	vstart.sh [OPTIONS]...

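For example, a fresh cluster in debug mode with cephx enabled can be
started as follows (this option combination is only an illustration;
all options are described below)::

	./vstart.sh -n -d -x
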
In order to stop the cluster, you can type::

	./stop.sh

Options
=======

.. option:: -b, --bluestore

    Use bluestore as the objectstore backend for osds.

.. option:: --cache <pool>

    Set a cache-tier for the specified pool.

.. option:: -d, --debug

    Launch in debug mode.

.. option:: -e

    Create an erasure-coded pool.

.. option:: -f, --filestore

    Use filestore as the osd objectstore backend.

.. option:: --hitset <pool> <hit_set_type>

    Enable hitset tracking.

.. option:: -i ip_address

    Bind to the specified *ip_address* instead of guessing it by resolving the hostname.

.. option:: -k

    Keep old configuration files instead of overwriting them.

.. option:: -K, --kstore

    Use kstore as the osd objectstore backend.

.. option:: -l, --localhost

    Use localhost instead of hostname.

.. option:: -m ip[:port]

    Specify the monitor *ip* address and *port*.

.. option:: --memstore

    Use memstore as the objectstore backend for osds.

.. option:: --multimds <count>

    Allow multiple active mds daemons, up to the specified maximum count.

.. option:: -n, --new

    Create a new cluster.

.. option:: -N, --not-new

    Reuse existing cluster config (default).

.. option:: --nodaemon

    Use ceph-run as a wrapper for mon/osd/mds.

.. option:: --nolockdep

    Disable lockdep.

.. option:: -o <config>

    Add *config* to all sections in the ceph configuration.

.. option:: --rgw_port <port>

    Specify the ceph rgw http listen port.

.. option:: --rgw_frontend <frontend>

    Specify the rgw frontend configuration (default is civetweb).

.. option:: --rgw_compression <compression_type>

    Specify the rgw compression plugin (default is disabled).

.. option:: --smallmds

    Configure the mds with a small cache size limit.

.. option:: --short

    Use short object names only; necessary for an ext4 dev directory.

.. option:: --valgrind[_{osd,mds,mon}] 'valgrind_toolname [args...]'

    Launch the osd, mds, mon, or all of the ceph binaries under valgrind with the specified tool and arguments.

.. option:: --without-dashboard

    Do not run the mgr dashboard.

.. option:: -x

    Enable cephx (on by default).

.. option:: -X

    Disable cephx.

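Options can be combined freely. As a sketch (the port number and option
mix here are purely illustrative), a new cluster with bluestore osds,
debug logging, and the rgw listening on port 8000 might be started with::

	./vstart.sh -n -b -d --rgw_port 8000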

Environment variables
=====================

{OSD,MDS,MON,RGW}

These environment variables specify the number of instances of each ceph daemon to start.

Example::

	OSD=3 MON=3 RGW=1 vstart.sh

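These variables combine with the command-line options above; for
instance, a minimal single-mon, single-osd cluster with no mds
(the counts are illustrative)::

	MON=1 OSD=1 MDS=0 vstart.sh -n -d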

============================================================
 Deploying multiple development clusters on the same machine
============================================================

To bring up multiple ceph clusters on the same machine, *mstart.sh*, a
small wrapper around the above *vstart.sh*, can help.

Usage
=====

To start multiple clusters, run mstart.sh once for each cluster you want to
deploy. It starts the monitors and rgws of each cluster on different ports,
allowing you to run multiple mons, rgws, etc. on the same machine. Invoke it in
the following way::

  mstart.sh <cluster-name> <vstart options>

For example::

  ./mstart.sh cluster1 -n

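Running it again with a different cluster name brings up a second,
independent cluster alongside the first (the names here are arbitrary)::

  ./mstart.sh cluster2 -n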

To stop a cluster, run::

  ./mstop.sh <cluster-name>
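
For example, to stop the first cluster started above::

  ./mstop.sh cluster1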