================
 Upgrading Ceph
================

Each release of Ceph may require additional steps. Refer to the `release notes
document of your release`_ to identify any release-specific procedures for your
cluster before you begin the upgrade procedures below.


Summary
=======

You can upgrade daemons in your Ceph cluster while the cluster is online and in
service! Certain types of daemons depend upon others. For example, Ceph Metadata
Servers and Ceph Object Gateways depend upon Ceph Monitors and Ceph OSD Daemons.
We recommend upgrading in this order:

#. `Ceph Deploy`_
#. Ceph Monitors
#. Ceph OSD Daemons
#. Ceph Metadata Servers
#. Ceph Object Gateways

As a general rule, we recommend upgrading all the daemons of a specific type
(e.g., all ``ceph-mon`` daemons, all ``ceph-osd`` daemons, etc.) to ensure that
they are all on the same release. We also recommend that you upgrade all the
daemons in your cluster before you try to exercise new functionality in a
release.

The `Upgrade Procedures`_ are relatively simple, but do look at the `release
notes document of your release`_ before upgrading. The basic process involves
three steps: 

#. Use ``ceph-deploy`` on your admin node to upgrade the packages for
   multiple hosts (using the ``ceph-deploy install`` command), or log in to each
   host and upgrade the Ceph package `using your distro's package manager`_.
   For example, when `Upgrading Monitors`_, the ``ceph-deploy`` syntax might
   look like this::
   
	ceph-deploy install --release {release-name} ceph-node1[ ceph-node2]
	ceph-deploy install --release firefly mon1 mon2 mon3

   **Note:** The ``ceph-deploy install`` command will upgrade the packages 
   in the specified node(s) from the old release to the release you specify. 
   There is no ``ceph-deploy upgrade`` command.

#. Log in to each Ceph node and restart each Ceph daemon.
   See `Operating a Cluster`_ for details.

#. Ensure your cluster is healthy. See `Monitoring a Cluster`_ for details.
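
   For example, a quick health check looks like this (``ceph -s`` prints a
   fuller status report)::

	ceph health
	ceph -s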

.. important:: Once you upgrade a daemon, you cannot downgrade it.


Ceph Deploy
===========

Before upgrading Ceph daemons, upgrade the ``ceph-deploy`` tool. ::

	sudo pip install -U ceph-deploy

Or::

	sudo apt-get install ceph-deploy
	
Or::

	sudo yum install ceph-deploy python-pushy
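
To confirm which version of ``ceph-deploy`` you ended up with, you can ask the
tool itself (this assumes your ``ceph-deploy`` build supports the standard
``--version`` flag)::

	ceph-deploy --version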


Upgrade Procedures
==================

The following sections describe the upgrade process. 

.. important:: Each release of Ceph may have some additional steps. Refer to
   the `release notes document of your release`_ for details **BEFORE** you
   begin upgrading daemons.


Upgrading Monitors
------------------

To upgrade monitors, perform the following steps:

#. Upgrade the Ceph package for each daemon instance. 

   You may use ``ceph-deploy`` to address all monitor nodes at once. 
   For example::

	ceph-deploy install --release {release-name} ceph-node1[ ceph-node2]
	ceph-deploy install --release hammer mon1 mon2 mon3

   You may also use the package manager for your Linux distribution on 
   each individual node. To upgrade packages manually on each Debian/Ubuntu 
   host, perform the following steps::

	ssh {mon-host}
	sudo apt-get update && sudo apt-get install ceph

   On CentOS/Red Hat hosts, perform the following steps::

	ssh {mon-host}
	sudo yum update && sudo yum install ceph
	
 
#. Restart each monitor. For Ubuntu distributions, use:: 

	sudo restart ceph-mon id={hostname}

   For CentOS/Red Hat/Debian distributions, use::

	sudo /etc/init.d/ceph restart {mon-id}

   For CentOS/Red Hat distributions deployed with ``ceph-deploy``, 
   the monitor ID is usually ``mon.{hostname}``.
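
   For example, on a monitor running on a host named ``ceph-node1`` (a
   hypothetical hostname), the command would look like this::

	sudo /etc/init.d/ceph restart mon.ceph-node1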
   
#. Ensure each monitor has rejoined the quorum::

	ceph mon stat

Ensure that you have completed the upgrade cycle for all of your Ceph Monitors.
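
If you want to double-check that a given monitor is actually running the new
code, you can query its version directly (substitute your monitor's ID for
``{mon-id}``)::

	ceph tell mon.{mon-id} version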


Upgrading an OSD
----------------

To upgrade a Ceph OSD Daemon, perform the following steps:

#. Upgrade the Ceph OSD Daemon package. 

   You may use ``ceph-deploy`` to address all Ceph OSD Daemon nodes at 
   once. For example::

	ceph-deploy install --release {release-name} ceph-node1[ ceph-node2]
	ceph-deploy install --release hammer osd1 osd2 osd3

   You may also use the package manager on each node to upgrade packages 
   `using your distro's package manager`_. For Debian/Ubuntu hosts, perform the
   following steps on each host::

	ssh {osd-host}
	sudo apt-get update && sudo apt-get install ceph

   For CentOS/Red Hat hosts, perform the following steps::

	ssh {osd-host}
	sudo yum update && sudo yum install ceph


#. Restart the OSD, where ``N`` is the OSD number. For Ubuntu, use:: 

	sudo restart ceph-osd id=N

   For multiple OSDs on a host, you may restart all of them with Upstart. ::

	sudo restart ceph-osd-all
	
   For CentOS/Red Hat/Debian distributions, use::

	sudo /etc/init.d/ceph restart osd.N


#. Ensure each upgraded Ceph OSD Daemon has rejoined the cluster::

	ceph osd stat

Ensure that you have completed the upgrade cycle for all of your 
Ceph OSD Daemons.
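
``ceph osd stat`` reports only the counts of OSDs that are ``up`` and ``in``.
For a per-daemon view while you work through the cluster, ``ceph osd tree``
lists every OSD and its state, and the following command (substitute the OSD
number for ``N``) reports the version a single daemon is running::

	ceph osd tree
	ceph tell osd.N version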


Upgrading a Metadata Server
---------------------------

To upgrade a Ceph Metadata Server, perform the following steps:

#. Upgrade the Ceph Metadata Server package. You may use ``ceph-deploy`` to 
   address all Ceph Metadata Server nodes at once, or use the package manager 
   on each node. For example::

	ceph-deploy install --release {release-name} ceph-node1
	ceph-deploy install --release hammer mds1

   To upgrade packages manually, perform the following steps on each
   Debian/Ubuntu host::

	ssh {mds-host}
	sudo apt-get update && sudo apt-get install ceph-mds

   Or the following steps on CentOS/Red Hat hosts::

	ssh {mds-host}
	sudo yum update && sudo yum install ceph-mds

 
#. Restart the metadata server. For Ubuntu, use:: 

	sudo restart ceph-mds id={hostname}
	
   For CentOS/Red Hat/Debian distributions, use::

	sudo /etc/init.d/ceph restart mds.{hostname}

   For clusters deployed with ``ceph-deploy``, the name is usually either
   the name you specified on creation or the hostname.
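
   For example, if the metadata server was created with the hostname
   ``ceph-node1`` (a hypothetical name), the command would look like this::

	sudo /etc/init.d/ceph restart mds.ceph-node1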

#. Ensure the metadata server is up and running::

	ceph mds stat


Upgrading a Client
------------------

Once you have upgraded the packages and restarted daemons on your Ceph
cluster, we recommend upgrading ``ceph-common`` and client libraries
(``librbd1`` and ``librados2``) on your client nodes too.

#. Upgrade the package:: 

	ssh {client-host}
	sudo apt-get update && sudo apt-get install ceph-common librados2 librbd1 python-rados python-rbd
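
   On CentOS/Red Hat client hosts, the equivalent would look something like
   this (a hedged sketch; the RPM package names below are the usual ones, but
   check what is actually installed on your client first)::

	ssh {client-host}
	sudo yum update && sudo yum install ceph-common librados2 librbd1 python-rados python-rbd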

#. Ensure that you have the latest version::

	ceph --version

If you do not have the latest version, you may need to uninstall the packages,
remove unused dependencies, and reinstall.
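
On a Debian/Ubuntu client, that recovery might look something like this (a
hedged sketch only; adjust the package list to match what the host actually
has installed)::

	sudo apt-get purge ceph-common librados2 librbd1
	sudo apt-get autoremove
	sudo apt-get update && sudo apt-get install ceph-common librados2 librbd1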


.. _using your distro's package manager: ../install-storage-cluster/
.. _Operating a Cluster: ../../rados/operations/operating
.. _Monitoring a Cluster: ../../rados/operations/monitoring
.. _release notes document of your release: ../../releases