3.0
2019-01-25T10:12:41Z
Templates
ceph-mgr Zabbix module
ceph-mgr Zabbix module
Templates
Ceph
-
Number of Monitors
2
0
ceph.num_mon
0
90
365
0
3
0
0
0
0
1
0
0
Number of Monitors configured in Ceph cluster
0
Ceph
-
Number of OSDs
2
0
ceph.num_osd
0
90
365
0
3
0
0
0
0
1
0
0
Number of OSDs in Ceph cluster
0
Ceph
-
Number of OSDs in state: IN
2
0
ceph.num_osd_in
0
90
365
0
3
0
0
0
0
1
0
0
Total number of IN OSDs in Ceph cluster
0
Ceph
-
Number of OSDs in state: UP
2
0
ceph.num_osd_up
0
90
365
0
3
0
0
0
0
1
0
0
Total number of UP OSDs in Ceph cluster
0
Ceph
-
Number of Placement Groups
2
0
ceph.num_pg
0
90
365
0
3
0
0
0
0
1
0
0
Total number of Placement Groups in Ceph cluster
0
Ceph
-
Number of Placement Groups in Temporary state
2
0
ceph.num_pg_temp
0
90
365
0
3
0
0
0
0
1
0
0
Total number of Placement Groups in pg_temp state
0
Ceph
-
Number of Placement Groups in Active state
2
0
ceph.num_pg_active
0
90
365
0
3
0
0
0
0
1
0
0
Total number of Placement Groups in active state
0
Ceph
-
Number of Placement Groups in Clean state
2
0
ceph.num_pg_clean
0
90
365
0
3
0
0
0
0
1
0
0
Total number of Placement Groups in clean state
0
Ceph
-
Number of Placement Groups in Peering state
2
0
ceph.num_pg_peering
0
90
365
0
3
0
0
0
0
1
0
0
Total number of Placement Groups in peering state
0
Ceph
-
Number of Placement Groups in Scrubbing state
2
0
ceph.num_pg_scrubbing
0
90
365
0
3
0
0
0
0
1
0
0
Total number of Placement Groups in scrubbing state
0
Ceph
-
Number of Placement Groups in Undersized state
2
0
ceph.num_pg_undersized
0
90
365
0
3
0
0
0
0
1
0
0
Total number of Placement Groups in undersized state
0
Ceph
-
Number of Placement Groups in Backfilling state
2
0
ceph.num_pg_backfilling
0
90
365
0
3
0
0
0
0
1
0
0
Total number of Placement Groups in backfilling state
0
Ceph
-
Number of Placement Groups in degraded state
2
0
ceph.num_pg_degraded
0
90
365
0
3
0
0
0
0
1
0
0
Total number of Placement Groups in degraded state
0
Ceph
-
Number of Placement Groups in inconsistent state
2
0
ceph.num_pg_inconsistent
0
90
365
0
3
0
0
0
0
1
0
0
Total number of Placement Groups in inconsistent state
0
Ceph
-
Number of Placement Groups in remapped state
2
0
ceph.num_pg_remapped
0
90
365
0
3
0
0
0
0
1
0
0
Total number of Placement Groups in remapped state
0
Ceph
-
Number of Placement Groups in recovering state
2
0
ceph.num_pg_recovering
0
90
365
0
3
0
0
0
0
1
0
0
Total number of Placement Groups in recovering state
0
Ceph
-
Number of Placement Groups in backfill_toofull state
2
0
ceph.num_pg_backfill_toofull
0
90
365
0
3
0
0
0
0
1
0
0
Total number of Placement Groups in backfill_toofull state
0
Ceph
-
Number of Placement Groups in backfill_wait state
2
0
ceph.num_pg_backfill_wait
0
90
365
0
3
0
0
0
0
1
0
0
Total number of Placement Groups in backfill_wait state
0
Ceph
-
Number of Placement Groups in recovery_wait state
2
0
ceph.num_pg_recovery_wait
0
90
365
0
3
0
0
0
0
1
0
0
Total number of Placement Groups in recovery_wait state
0
Ceph
-
Number of Pools
2
0
ceph.num_pools
0
90
365
0
3
0
0
0
0
1
0
0
Total number of pools in Ceph cluster
0
Ceph
-
Ceph OSD avg fill
2
0
ceph.osd_avg_fill
0
90
365
0
0
0
0
0
0
1
0
0
Average fill of OSDs
0
Ceph
-
Ceph OSD max PGs
2
0
ceph.osd_max_pgs
0
90
365
0
0
0
0
0
0
1
0
0
Maximum number of PGs on OSDs
0
Ceph
-
Ceph OSD min PGs
2
0
ceph.osd_min_pgs
0
90
365
0
0
0
0
0
0
1
0
0
Minimum number of PGs on OSDs
0
Ceph
-
Ceph OSD avg PGs
2
0
ceph.osd_avg_pgs
0
90
365
0
0
0
0
0
0
1
0
0
Average number of PGs on OSDs
0
Ceph
-
Ceph backfill full ratio
2
1
ceph.osd_backfillfull_ratio
0
90
365
0
0
0
0
0
0
100
0
0
Backfill full ratio setting of Ceph cluster as configured on OSDMap
0
Ceph
-
Ceph full ratio
2
1
ceph.osd_full_ratio
0
90
365
0
0
0
0
0
0
100
0
0
Full ratio setting of Ceph cluster as configured on OSDMap
0
Ceph
-
Ceph OSD Apply latency Avg
2
0
ceph.osd_latency_apply_avg
0
90
365
0
0
0
0
0
0
1
0
0
Average apply latency of OSDs
0
Ceph
-
Ceph OSD Apply latency Max
2
0
ceph.osd_latency_apply_max
0
90
365
0
0
0
0
0
0
1
0
0
Maximum apply latency of OSDs
0
Ceph
-
Ceph OSD Apply latency Min
2
0
ceph.osd_latency_apply_min
0
90
365
0
0
0
0
0
0
1
0
0
Minimum apply latency of OSDs
0
Ceph
-
Ceph OSD Commit latency Avg
2
0
ceph.osd_latency_commit_avg
0
90
365
0
0
0
0
0
0
1
0
0
Average commit latency of OSDs
0
Ceph
-
Ceph OSD Commit latency Max
2
0
ceph.osd_latency_commit_max
0
90
365
0
0
0
0
0
0
1
0
0
Maximum commit latency of OSDs
0
Ceph
-
Ceph OSD Commit latency Min
2
0
ceph.osd_latency_commit_min
0
90
365
0
0
0
0
0
0
1
0
0
Minimum commit latency of OSDs
0
Ceph
-
Ceph OSD max fill
2
0
ceph.osd_max_fill
0
90
365
0
0
0
0
0
0
1
0
0
Fill percentage of the most filled OSD
0
Ceph
-
Ceph OSD min fill
2
0
ceph.osd_min_fill
0
90
365
0
0
0
0
0
0
1
0
0
Fill percentage of the least filled OSD
0
Ceph
-
Ceph nearfull ratio
2
1
ceph.osd_nearfull_ratio
0
90
365
0
0
0
0
0
0
100
0
0
Near full ratio setting of Ceph cluster as configured on OSDMap
0
Ceph
-
Overall Ceph status
2
0
ceph.overall_status
0
90
0
0
4
0
0
0
0
1
0
0
Overall Ceph cluster status, e.g. HEALTH_OK, HEALTH_WARN or HEALTH_ERR
0
Ceph
-
Overall Ceph status (numeric)
2
0
ceph.overall_status_int
0
90
365
0
3
0
0
0
0
1
0
0
Overall Ceph status as a numeric value. OK: 0, WARN: 1, ERR: 2
0
Ceph
-
Ceph Read bandwidth
2
0
ceph.rd_bytes
0
90
365
0
3
b
1
0
0
0
1
0
0
Global read bandwidth
0
Ceph
-
Ceph Read operations
2
0
ceph.rd_ops
0
90
365
0
3
1
0
0
0
1
0
0
Global read operations per second
0
Ceph
-
Total bytes available
2
0
ceph.total_avail_bytes
0
90
365
0
3
B
0
0
0
0
1
0
0
Total bytes available in Ceph cluster
0
Ceph
-
Total bytes
2
0
ceph.total_bytes
0
90
365
0
3
B
0
0
0
0
1
0
0
Total (RAW) capacity of Ceph cluster in bytes
0
Ceph
-
Total number of objects
2
0
ceph.total_objects
0
90
365
0
3
0
0
0
0
1
0
0
Total number of objects in Ceph cluster
0
Ceph
-
Total bytes used
2
0
ceph.total_used_bytes
0
90
365
0
3
B
0
0
0
0
1
0
0
Total bytes used in Ceph cluster
0
Ceph
-
Ceph Write bandwidth
2
0
ceph.wr_bytes
0
90
365
0
3
b
1
0
0
0
1
0
0
Global write bandwidth
0
Ceph
-
Ceph Write operations
2
0
ceph.wr_ops
0
90
365
0
3
1
0
0
0
1
0
0
Global write operations per second
0
Ceph
Ceph OSD discovery
2
ceph.zabbix.osd.discovery
0
0
0
0
0
0
0
90
[osd.{#OSD}] OSD in
2
0
ceph.[osd.{#OSD},in]
0
90
365
0
3
0
0
0
0
1
0
0
0
Ceph CRUSH [{#CRUSH_RULE}]
[osd.{#OSD}] OSD PGs
2
0
ceph.[osd.{#OSD},num_pgs]
0
90
365
0
3
0
0
0
0
1
0
0
0
Ceph CRUSH [{#CRUSH_RULE}]
[osd.{#OSD}] OSD fill
2
0
ceph.[osd.{#OSD},osd_fill]
0
90
365
0
0
%
0
0
0
0
1
0
0
0
Ceph CRUSH [{#CRUSH_RULE}]
[osd.{#OSD}] OSD latency apply
2
0
ceph.[osd.{#OSD},osd_latency_apply]
0
90
365
0
0
ms
0
0
0
0
1
0
0
0
Ceph CRUSH [{#CRUSH_RULE}]
[osd.{#OSD}] OSD latency commit
2
0
ceph.[osd.{#OSD},osd_latency_commit]
0
90
365
0
0
ms
0
0
0
0
1
0
0
0
Ceph CRUSH [{#CRUSH_RULE}]
[osd.{#OSD}] OSD up
2
0
ceph.[osd.{#OSD},up]
0
90
365
0
3
0
0
0
0
1
0
0
0
Ceph CRUSH [{#CRUSH_RULE}]
{ceph-mgr Zabbix module:ceph.[osd.{#OSD},up].last()}=0
Ceph OSD osd.{#OSD} is DOWN
0
2
0
{ceph-mgr Zabbix module:ceph.[osd.{#OSD},osd_fill].last()}>={ceph-mgr Zabbix module:ceph.osd_full_ratio.last()}
Ceph OSD osd.{#OSD} is full: {ITEM.VALUE}%
0
4
0
{ceph-mgr Zabbix module:ceph.[osd.{#OSD},osd_fill].last()}>={ceph-mgr Zabbix module:ceph.osd_nearfull_ratio.last()}
Ceph OSD osd.{#OSD} is near full: {ITEM.VALUE}%
0
2
0
Ceph pool discovery
2
ceph.zabbix.pool.discovery
0
0
0
0
0
0
0
90
[{#POOL}] Pool Used
2
0
ceph.[{#POOL},bytes_used]
0
90
365
0
3
b
0
0
0
0
1
0
0
0
Ceph CRUSH [{#CRUSH_RULE}]
[{#POOL}] Pool RAW Used
2
0
ceph.[{#POOL},stored_raw]
0
90
365
0
3
b
0
0
0
0
1
0
0
0
Ceph CRUSH [{#CRUSH_RULE}]
[{#POOL}] Pool Percent Used
2
0
ceph.[{#POOL},percent_used]
0
90
365
0
0
%
0
0
0
0
1
0
0
0
Ceph CRUSH [{#CRUSH_RULE}]
[{#POOL}] Pool Read bandwidth
2
0
ceph.[{#POOL},rd_bytes]
0
90
365
0
3
bytes
0
0
0
0
1
0
0
0
Ceph CRUSH [{#CRUSH_RULE}]
[{#POOL}] Pool Read operations
2
0
ceph.[{#POOL},rd_ops]
0
90
365
0
3
ops
0
0
0
0
1
0
0
0
Ceph CRUSH [{#CRUSH_RULE}]
[{#POOL}] Pool Write bandwidth
2
0
ceph.[{#POOL},wr_bytes]
0
90
365
0
3
bytes
0
0
0
0
1
0
0
0
Ceph CRUSH [{#CRUSH_RULE}]
[{#POOL}] Pool Write operations
2
0
ceph.[{#POOL},wr_ops]
0
90
365
0
3
ops
0
0
0
0
1
0
0
0
Ceph CRUSH [{#CRUSH_RULE}]
Ceph
1
7
0
500
100
0
0
1
1
0
0
0
0
0
Ceph storage overview
ceph-mgr Zabbix module
3
0
900
200
0
1
1
1
0
0
0
0
0
Ceph free space
ceph-mgr Zabbix module
3
0
900
200
0
2
1
1
0
0
0
0
0
Ceph health
ceph-mgr Zabbix module
3
0
900
200
0
3
1
1
0
0
0
0
0
Ceph bandwidth
ceph-mgr Zabbix module
3
0
900
200
0
4
1
1
0
0
0
0
0
Ceph I/O
ceph-mgr Zabbix module
3
0
900
200
0
5
1
1
0
0
0
0
0
Ceph OSD utilization
ceph-mgr Zabbix module
3
0
900
200
0
6
1
1
0
0
0
0
0
Ceph OSD latency
ceph-mgr Zabbix module
3
{ceph-mgr Zabbix module:ceph.overall_status_int.last()}=2
Ceph cluster in ERR state
0
5
Ceph cluster is in ERR state
0
{ceph-mgr Zabbix module:ceph.overall_status_int.avg(1h)}=1
Ceph cluster in WARN state
0
4
Issue a trigger if Ceph cluster is in WARN state for >1h
0
{ceph-mgr Zabbix module:ceph.num_osd_in.abschange()}>0
Number of IN OSDs changed
0
2
Number of OSDs in IN state changed
0
{ceph-mgr Zabbix module:ceph.num_osd_up.abschange()}>0
Number of UP OSDs changed
0
2
Number of OSDs in UP state changed
0
Ceph bandwidth
900
200
0.0000
100.0000
1
1
1
1
0
0.0000
0.0000
0
0
0
0
0
0
1A7C11
0
4
0
-
ceph-mgr Zabbix module
ceph.rd_bytes
1
0
F63100
0
4
0
-
ceph-mgr Zabbix module
ceph.wr_bytes
Ceph free space
900
200
0.0000
100.0000
1
1
0
1
0
0.0000
0.0000
1
2
0
ceph-mgr Zabbix module
ceph.total_bytes
0
0
00AA00
0
4
0
-
ceph-mgr Zabbix module
ceph.total_avail_bytes
1
0
DD0000
0
4
0
-
ceph-mgr Zabbix module
ceph.total_used_bytes
Ceph health
900
200
0.0000
2.0000
1
1
0
1
0
0.0000
0.0000
1
1
0
0
0
0
1A7C11
0
7
0
-
ceph-mgr Zabbix module
ceph.overall_status_int
Ceph I/O
900
200
0.0000
100.0000
1
1
1
1
0
0.0000
0.0000
1
0
0
0
0
0
1A7C11
0
4
0
-
ceph-mgr Zabbix module
ceph.rd_ops
1
0
F63100
0
4
0
-
ceph-mgr Zabbix module
ceph.wr_ops
Ceph OSD latency
900
200
0.0000
100.0000
1
1
0
1
0
0.0000
0.0000
0
0
0
0
0
0
1A7C11
0
4
0
-
ceph-mgr Zabbix module
ceph.osd_latency_apply_avg
1
0
F63100
0
4
0
-
ceph-mgr Zabbix module
ceph.osd_latency_commit_avg
2
0
2774A4
0
4
0
-
ceph-mgr Zabbix module
ceph.osd_latency_apply_max
3
0
A54F10
0
4
0
-
ceph-mgr Zabbix module
ceph.osd_latency_commit_max
4
0
FC6EA3
0
4
0
-
ceph-mgr Zabbix module
ceph.osd_latency_apply_min
5
0
6C59DC
0
4
0
-
ceph-mgr Zabbix module
ceph.osd_latency_commit_min
Ceph OSD utilization
900
200
0.0000
100.0000
1
1
0
1
0
0.0000
0.0000
1
1
0
0
0
0
0000CC
0
2
0
-
ceph-mgr Zabbix module
ceph.osd_nearfull_ratio
1
0
F63100
0
2
0
-
ceph-mgr Zabbix module
ceph.osd_full_ratio
2
0
CC00CC
0
2
0
-
ceph-mgr Zabbix module
ceph.osd_backfillfull_ratio
3
0
A54F10
0
2
0
-
ceph-mgr Zabbix module
ceph.osd_max_fill
4
0
FC6EA3
0
2
0
-
ceph-mgr Zabbix module
ceph.osd_avg_fill
5
0
6C59DC
0
2
0
-
ceph-mgr Zabbix module
ceph.osd_min_fill
Ceph storage overview
900
200
0.0000
0.0000
0
0
2
1
0
0.0000
0.0000
0
0
0
0
0
0
F63100
0
2
0
-
ceph-mgr Zabbix module
ceph.total_used_bytes
1
0
00CC00
0
2
0
-
ceph-mgr Zabbix module
ceph.total_avail_bytes