===============
Perf counters
===============

The perf counters provide generic internal infrastructure for gauges and counters. The counted values can be either integer or floating point. There is also an "average" type (normally float) that combines a sum and a count, which can be divided to provide an average.

The intention is that this data will be collected and aggregated by a tool like ``collectd`` or ``statsd`` and fed into a tool like ``graphite`` for graphing and analysis. See also the :doc:`../mgr/prometheus` and :doc:`../mgr/telemetry` manager modules.

Users and developers can also access perf counter data locally to check a cluster's overall health, identify workload patterns, monitor cluster performance by daemon type, and troubleshoot issues with latency, throttling, memory management, and so on (see :ref:`Access`).

.. _Access:

Access
------

The perf counter data is accessed via the admin socket. For example::

    ceph daemon osd.0 perf schema
    ceph daemon osd.0 perf dump
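
The same data can be read programmatically. Here is a minimal Python sketch, assuming the ``ceph`` CLI is available on the node that hosts the daemon and using ``osd.0`` purely as an example daemon name; it shells out to the command above and parses the JSON result::

    import json
    import subprocess

    def perf_dump(daemon="osd.0"):
        """Return the perf counter dump of a local daemon as a dict.

        Assumes this host can reach the daemon's admin socket,
        i.e. it runs on the same node as the daemon.
        """
        out = subprocess.check_output(["ceph", "daemon", daemon, "perf", "dump"])
        return json.loads(out)

    # Print the top-level collection names, e.g. the throttle-* instances.
    for collection in sorted(perf_dump()):
        print(collection)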

Collections
-----------

The values are grouped into named collections, normally representing a subsystem or an instance of a subsystem. For example, the internal ``throttle`` mechanism reports statistics on how it is throttling, and each instance is named something like::

    throttle-msgr_dispatch_throttler-hbserver
    throttle-msgr_dispatch_throttler-client
    throttle-filestore_bytes
    ...

Schema
------

The ``perf schema`` command dumps a JSON description of which values are available, and what their type is. Each named value has a ``type`` bitfield, with the following bits defined.

+------+-------------------------------------+
| bit  | meaning                             |
+======+=====================================+
| 1    | floating point value                |
+------+-------------------------------------+
| 2    | unsigned 64-bit integer value       |
+------+-------------------------------------+
| 4    | average (sum + count pair)          |
+------+-------------------------------------+
| 8    | counter (vs gauge)                  |
+------+-------------------------------------+

Every value will have either bit 1 or 2 set to indicate the type (float or integer).

If bit 8 is set (counter), the value is monotonically increasing and the reader may want to subtract the previously read value to get the delta over the interval since the previous read.
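
For illustration, a small Python helper (not part of Ceph; the bit meanings are taken from the table above) can decode the ``type`` value reported by ``perf schema``::

    def decode_type(type_bits):
        """Decode a perf counter ``type`` bitfield into readable flags."""
        flags = []
        if type_bits & 1:
            flags.append("float")
        if type_bits & 2:
            flags.append("uint64")
        if type_bits & 4:
            flags.append("average (sum + count)")
        flags.append("counter" if type_bits & 8 else "gauge")
        return flags

    print(decode_type(10))  # ['uint64', 'counter'], e.g. "get" in the example below
    print(decode_type(5))   # ['float', 'average (sum + count)', 'gauge'], e.g. "wait"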

If bit 4 is set (average), there will be two values to read: a sum and a count. If it is a counter, the average over the most recent interval is the delta of the sum (since the previous read) divided by the delta of the count. Alternatively, dividing the raw values gives the lifetime average. These are normally used to measure latencies (a count of requests and a sum of request latencies), and the per-interval average is usually what is of interest.
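
As a sketch (the numbers are made up; ``avgcount`` and ``sum`` are the two values dumped for an average, as shown in the Dump section below), the per-interval average can be computed like this::

    def interval_average(prev, cur):
        """Average over the interval between two reads of an average counter.

        ``prev`` and ``cur`` are dicts of the form {"avgcount": N, "sum": S}.
        Returns None if nothing was recorded during the interval.
        """
        dcount = cur["avgcount"] - prev["avgcount"]
        dsum = cur["sum"] - prev["sum"]
        return dsum / dcount if dcount else None

    prev = {"avgcount": 100, "sum": 2.5}   # earlier read
    cur = {"avgcount": 160, "sum": 4.3}    # read some interval later
    print(interval_average(prev, cur))     # about 0.03 seconds of wait per request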

Instead of interpreting the bit field, you can use the ``metric_type`` property, which is either ``gauge`` or ``counter``, and the ``value_type`` property, which is one of ``real``, ``integer``, ``real-integer-pair`` (a real sum paired with an integer count), or ``integer-integer-pair`` (an integer sum paired with an integer count).

Here is an example of the schema output::

    {
        "throttle-bluestore_throttle_bytes": {
            "val": {
                "type": 2,
                "metric_type": "gauge",
                "value_type": "integer",
                "description": "Currently available throttle",
                "nick": ""
            },
            "max": {
                "type": 2,
                "metric_type": "gauge",
                "value_type": "integer",
                "description": "Max value for throttle",
                "nick": ""
            },
            "get_started": {
                "type": 10,
                "metric_type": "counter",
                "value_type": "integer",
                "description": "Number of get calls, increased before wait",
                "nick": ""
            },
            "get": {
                "type": 10,
                "metric_type": "counter",
                "value_type": "integer",
                "description": "Gets",
                "nick": ""
            },
            "get_sum": {
                "type": 10,
                "metric_type": "counter",
                "value_type": "integer",
                "description": "Got data",
                "nick": ""
            },
            "get_or_fail_fail": {
                "type": 10,
                "metric_type": "counter",
                "value_type": "integer",
                "description": "Get blocked during get_or_fail",
                "nick": ""
            },
            "get_or_fail_success": {
                "type": 10,
                "metric_type": "counter",
                "value_type": "integer",
                "description": "Successful get during get_or_fail",
                "nick": ""
            },
            "take": {
                "type": 10,
                "metric_type": "counter",
                "value_type": "integer",
                "description": "Takes",
                "nick": ""
            },
            "take_sum": {
                "type": 10,
                "metric_type": "counter",
                "value_type": "integer",
                "description": "Taken data",
                "nick": ""
            },
            "put": {
                "type": 10,
                "metric_type": "counter",
                "value_type": "integer",
                "description": "Puts",
                "nick": ""
            },
            "put_sum": {
                "type": 10,
                "metric_type": "counter",
                "value_type": "integer",
                "description": "Put data",
                "nick": ""
            },
            "wait": {
                "type": 5,
                "metric_type": "gauge",
                "value_type": "real-integer-pair",
                "description": "Waiting latency",
                "nick": ""
            }
        }
    }

Dump
----

The actual dump is similar to the schema, except that average values are grouped. For example::

    {
        "throttle-msgr_dispatch_throttler-hbserver" : {
            "get_or_fail_fail" : 0,
            "get_sum" : 0,
            "max" : 104857600,
            "put" : 0,
            "val" : 0,
            "take" : 0,
            "get_or_fail_success" : 0,
            "wait" : {
                "avgcount" : 0,
                "sum" : 0
            },
            "get" : 0,
            "take_sum" : 0,
            "put_sum" : 0
        },
        "throttle-msgr_dispatch_throttler-client" : {
            "get_or_fail_fail" : 0,
            "get_sum" : 82760,
            "max" : 104857600,
            "put" : 2637,
            "val" : 0,
            "take" : 0,
            "get_or_fail_success" : 0,
            "wait" : {
                "avgcount" : 0,
                "sum" : 0
            },
            "get" : 2637,
            "take_sum" : 0,
            "put_sum" : 82760
        }
    }
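
Tying the two commands together, here is an illustrative Python sketch (``osd.0`` is a placeholder daemon name) that walks a ``perf dump`` and reports the lifetime average of every grouped value::

    import json
    import subprocess

    def lifetime_averages(daemon="osd.0"):
        """Yield (collection, counter, lifetime average) for grouped values."""
        dump = json.loads(subprocess.check_output(
            ["ceph", "daemon", daemon, "perf", "dump"]))
        for collection, counters in dump.items():
            for name, value in counters.items():
                # Average values are dumped as {"avgcount": ..., "sum": ...}.
                if isinstance(value, dict) and "avgcount" in value:
                    count = value["avgcount"]
                    yield collection, name, (value["sum"] / count if count else 0.0)

    for collection, name, avg in lifetime_averages():
        print(f"{collection}/{name}: {avg:.6f}")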

Labeled Perf Counters
---------------------

A Ceph daemon can emit a set of perf counter instances with varying labels. These counters are intended for visualizing specific metrics in third-party tools like Prometheus and Grafana.

For example, the counters below show the number of put requests for different users on different buckets::

    {
        "rgw": [
            {
                "labels": {
                    "Bucket": "bkt1",
                    "User": "user1"
                },
                "counters": {
                    "put": 1
                }
            },
            {
                "labels": {},
                "counters": {
                    "put": 4
                }
            },
            {
                "labels": {
                    "Bucket": "bkt1",
                    "User": "user2"
                },
                "counters": {
                    "put": 3
                }
            }
        ]
    }

All labeled and unlabeled perf counters can be viewed with ``ceph daemon {daemon id} counter dump``.

The schema for all labeled and unlabeled perf counters can be viewed with ``ceph daemon {daemon id} counter schema``.

In the example above, the second instance, which has no labels, is a counter that would also be shown in ``ceph daemon {daemon id} perf dump``.

Because the ``counter dump`` and ``counter schema`` commands show both kinds of counters, they should be preferred over the ``perf dump`` and ``perf schema`` commands, which are retained for backwards compatibility and emit only unlabeled counters.

Some perf counters that are currently emitted by ``perf dump`` and ``perf schema`` may become labeled in a future release; once labeled, they will no longer appear in the output of ``perf dump`` and ``perf schema``.
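
As a final illustration (the data below mirrors the example output above and is otherwise made up), labeled counters can be aggregated across instances, for example summing ``put`` operations per user::

    from collections import defaultdict

    counter_dump = {
        "rgw": [
            {"labels": {"Bucket": "bkt1", "User": "user1"}, "counters": {"put": 1}},
            {"labels": {}, "counters": {"put": 4}},
            {"labels": {"Bucket": "bkt1", "User": "user2"}, "counters": {"put": 3}},
        ]
    }

    puts_per_user = defaultdict(int)
    for instance in counter_dump["rgw"]:
        user = instance["labels"].get("User", "<unlabeled>")
        puts_per_user[user] += instance["counters"].get("put", 0)

    print(dict(puts_per_user))  # {'user1': 1, '<unlabeled>': 4, 'user2': 3}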