From 03bf87dcb06f7021bfb2df2fa8691593c6148aff Mon Sep 17 00:00:00 2001
From: Daniel Baumann

Status of this cluster component. Primary - primary group configuration, quorum present. Non-Primary - non-primary group configuration, quorum lost. Disconnected - not connected to group, retrying.

Membership state of this cluster component. Undefined - undefined state. Joining - the node is attempting to join the cluster. Donor - the node has blocked itself while it sends a State Snapshot Transfer (SST) to bring a new node up to date with the cluster. Joined - the node has successfully joined the cluster. Synced - the node has established a connection with the cluster and synchronized its local databases with those of the cluster. Error - the node is not part of the cluster and does not replicate transactions. This state is provider-specific; check the wsrep_local_state_comment variable for a description.

Connections usage across all databases. The maximum number of concurrent connections to the database server is (max_connections - superuser_reserved_connections). As a general rule, if you need more than 200 connections it is advisable to use connection pooling. Available - new connections allowed. Used - connections currently in use.

Number of connections in each state across all databases. Active - the backend is executing a query. Idle - the backend is waiting for a new client command. IdleInTransaction - the backend is in a transaction, but is not currently executing a query. IdleInTransactionAborted - the backend is in a transaction, and not currently executing a query, but one of the statements in the transaction caused an error. FastPathFunctionCall - the backend is executing a fast-path function. Disabled - reported if track_activities is disabled in this backend.

Number of checkpoints that have been performed. Checkpoints are periodic maintenance operations the database performs to make sure that everything it's been caching in memory has been synchronized with the disk. Ideally checkpoints should be time-driven (scheduled) as opposed to load-driven (requested). Scheduled - checkpoints triggered as per schedule when time elapsed from the previous checkpoint is greater than checkpoint_timeout. Requested - checkpoints triggered due to WAL updates reaching the max_wal_size before the checkpoint_timeout is reached.

Connections usage. The maximum number of concurrent connections to the database server is max_connections minus superuser_reserved_connections. Available - new connections allowed. Used - connections currently in use.

Checkpoint timing information. An important indicator of how well checkpoint I/O is performing is the amount of time taken to sync files to disk. Write - amount of time spent writing files to disk during checkpoint processing. Sync - amount of time spent synchronizing files to disk during checkpoint processing.

Number of checkpoints that have been performed. Checkpoints are periodic maintenance operations the database performs to make sure that everything it's been caching in memory has been synchronized with the disk. It's desirable when checkpoints are scheduled rather than requested, as the latter can indicate that your databases are under heavy load. Scheduled - checkpoints triggered because the time elapsed since the previous checkpoint exceeds the checkpoint_timeout setting. Requested - checkpoints triggered because the uncheckpointed WAL size grew beyond the max_wal_size setting.

Checkpoint timing information. Write - amount of time spent in the portion of checkpoint processing where files are written to disk. Sync - amount of time spent in the portion of checkpoint processing where files are synchronized to disk.

Amount of data flushed from memory to disk. Checkpoint - buffers written during checkpoints. Backend - buffers written directly by a backend. It may happen that a dirty page is requested by a backend process; in this case the page is synced to disk before the page is returned to the client. BgWriter - buffers written by the background writer. PostgreSQL may clear pages with a low usage count in advance. The process scans for dirty pages with a low usage count so that they can be cleared if necessary. Buffers written by this process increment the counter.

Amount of data flushed from memory to disk. Checkpoint - buffers written during checkpoints. Backend - buffers written directly by a backend. It may happen that a dirty page is requested by a backend process; in this case the page is synced to disk before the page is returned to the client. Clean - buffers written by the background writer. PostgreSQL may clear pages with a low usage count in advance. The process scans for dirty pages with a low usage count so that they can be cleared if necessary. Buffers written by this process increment the counter.

Number of WAL logs stored in the directory pg_wal under the data directory. Written - generated log segment files. Recycled - old log segment files that are no longer needed, renamed to become future segments in the numbered sequence to avoid the need to create new ones.

WAL archiving. Ready - WAL files waiting to be archived. A non-zero value can indicate archive_command is in error, see Continuous Archiving and Point-in-Time Recovery. Done - WAL files successfully archived.
+ 'postgres.wal_archiving_files_count': {
+ info: 'WAL archiving. Ready - WAL files waiting to be archived. A non-zero value can indicate archive_command is in error, see Continuous Archiving and Point-in-Time Recovery. Done - WAL files successfully archived.'
},
- 'postgres.autovacuum_workers': {
+ 'postgres.autovacuum_workers_count': {
info: 'PostgreSQL databases require periodic maintenance known as vacuuming. For many installations, it is sufficient to let vacuuming be performed by the autovacuum daemon. For more information see The Autovacuum Daemon.'
},
- 'postgres.percent_towards_emergency_autovacuum': {
+ 'postgres.txid_exhaustion_towards_autovacuum_perc': {
info: 'Percentage towards emergency autovacuum for one or more tables. A forced autovacuum will run once this value reaches 100%. For more information see Preventing Transaction ID Wraparound Failures.'
},
- 'postgres.percent_towards_txid_wraparound': {
+ 'postgres.txid_exhaustion_perc': {
info: 'Percentage towards transaction wraparound. A transaction wraparound may occur when this value reaches 100%. For more information see Preventing Transaction ID Wraparound Failures.'
},
- 'postgres.oldest_transaction_xid': {
+ 'postgres.txid_exhaustion_oldest_txid_num': {
info: 'The oldest current transaction ID (XID). If for some reason autovacuum fails to clear old XIDs from a table, the system will begin to emit warning messages when the database\'s oldest XIDs reach eleven million transactions from the wraparound point. For more information see Preventing Transaction ID Wraparound Failures.'
},
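The wraparound and emergency-autovacuum percentages described in the entries above can be illustrated with a short sketch. This is not Netdata's collector code — the function names are made up, the 2^31 XID horizon is the commonly cited wraparound limit, and 200 million is PostgreSQL's default autovacuum_freeze_max_age:

```javascript
// Sketch only: illustrative helpers, not part of dashboard_info.js.
// Percent towards transaction ID wraparound, assuming the 2^31 XID horizon.
function percentTowardsWraparound(oldestXidAge) {
    const WRAPAROUND_HORIZON = Math.pow(2, 31); // ~2.1 billion XIDs
    return Math.min(100, (oldestXidAge / WRAPAROUND_HORIZON) * 100);
}

// A forced (emergency) autovacuum kicks in earlier, once a table's oldest
// XID age crosses autovacuum_freeze_max_age (default 200 million).
function percentTowardsEmergencyAutovacuum(oldestXidAge, freezeMaxAge) {
    return Math.min(100, (oldestXidAge / (freezeMaxAge || 200000000)) * 100);
}
```

When the first value reaches 100% the database can no longer assign new transaction IDs; the second reaching 100% only means a forced autovacuum will run.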
     'postgres.uptime': {
+        room: {
+            mainheads: [
+                function (os, id) {
+                    void (os);
+                    return '';
+                }
+            ],
+        },
+    },

Replication WAL delta. SentDelta - sent over the network. WriteDelta - written to disk. FlushDelta - flushed to disk. ReplayDelta - replayed into the database.

Replication WAL lag size. SentLag - sent over the network. WriteLag - written to disk. FlushLag - flushed to disk. ReplayLag - replayed into the database.

Replication WAL lag. WriteLag - time elapsed between flushing recent WAL locally and receiving notification that the standby server has written it, but not yet flushed it or applied it. FlushLag - time elapsed between flushing recent WAL locally and receiving notification that the standby server has written and flushed it, but not yet applied it. ReplayLag - time elapsed between flushing recent WAL locally and receiving notification that the standby server has written, flushed and applied it.

Replication WAL lag time. WriteLag - time elapsed between flushing recent WAL locally and receiving notification that the standby server has written it, but not yet flushed it or applied it. FlushLag - time elapsed between flushing recent WAL locally and receiving notification that the standby server has written and flushed it, but not yet applied it. ReplayLag - time elapsed between flushing recent WAL locally and receiving notification that the standby server has written, flushed and applied it.

Replication slot files. For more information see Replication Slots. WalKeep - WAL files retained by the replication slot. PgReplslotFiles - files present in pg_replslot.

Number of transactions that have been performed. Committed - transactions that have been committed. All changes made by the committed transaction become visible to others and are guaranteed to be durable if a crash occurs. Rollback - transactions that have been rolled back. Rollback aborts the current transaction and causes all the updates made by the transaction to be discarded. Single queries that have failed outside the transactions are also accounted as rollbacks.

Number of blocks read from shared buffer cache or from disk. disk - number of disk blocks read. memory - number of times disk blocks were found already in the buffer cache, so that a read was not necessary (this only includes hits in the PostgreSQL buffer cache, not the operating system's file system cache).

Amount of data read from shared buffer cache or from disk. Disk - data read from disk. Memory - data read from buffer cache (this only includes hits in the PostgreSQL buffer cache, not the operating system's file system cache).

Read queries throughput. Returned - number of rows returned by queries. The value keeps track of the number of rows read/scanned, not the rows actually returned to the client. Fetched - number of rows fetched that contained data necessary to execute the query successfully.

Read queries throughput. Returned - total number of rows scanned by queries. This value indicates rows returned by the storage layer to be scanned, not rows returned to the client. Fetched - subset of scanned rows (Returned) that contained data needed to execute the query.

Write queries throughput. Inserted - number of rows inserted by queries. Deleted - number of rows deleted by queries. Updated - number of rows updated by queries.

Number of queries canceled due to conflicts with recovery. Tablespace - queries that have been canceled due to dropped tablespaces. Lock - queries that have been canceled due to lock timeouts. Snapshot - queries that have been canceled due to old snapshots. Bufferpin - queries that have been canceled due to pinned buffers. Deadlock - queries that have been canceled due to deadlocks.

Statistics about queries canceled due to various types of conflicts on standby servers. Tablespace - queries that have been canceled due to dropped tablespaces. Lock - queries that have been canceled due to lock timeouts. Snapshot - queries that have been canceled due to old snapshots. Bufferpin - queries that have been canceled due to pinned buffers. Deadlock - queries that have been canceled due to deadlocks.

Number of rows. When you do an UPDATE or DELETE, the row is not actually physically deleted. For a DELETE, the database simply marks the row as unavailable for future transactions, and for UPDATE, under the hood it is a combined INSERT then DELETE, where the previous version of the row is marked unavailable. Live - rows that are currently in use and can be queried. Dead - deleted rows that will later be reused for new rows from INSERT or UPDATE.

Amount of data read from shared buffer cache or from disk. Disk - data read from disk. Memory - data read from buffer cache (this only includes hits in the PostgreSQL buffer cache, not the operating system's file system cache).

Amount of data read from all indexes from shared buffer cache or from disk. Disk - data read from disk. Memory - data read from buffer cache (this only includes hits in the PostgreSQL buffer cache, not the operating system's file system cache).

Amount of data read from TOAST table from shared buffer cache or from disk. Disk - data read from disk. Memory - data read from buffer cache (this only includes hits in the PostgreSQL buffer cache, not the operating system's file system cache).

Amount of data read from this table's TOAST table indexes from shared buffer cache or from disk. Disk - data read from disk. Memory - data read from buffer cache (this only includes hits in the PostgreSQL buffer cache, not the operating system's file system cache).

Number of scans initiated on this table. If you see that your database regularly performs more sequential scans over time, you can improve its performance by creating an index on data that is frequently accessed. Index - relying on an index to point to the location of specific rows. Sequential - have to scan through each row of a table sequentially. Sequential scans typically take longer than index scans.

Network traffic received and sent by pgbouncer. Received - received from clients. Sent - sent to servers.

     'fping': {
         info: 'fping is a program to send ICMP echo probes to network hosts, similar to ping, but much better performing when pinging multiple hosts. fping versions after 3.15 can be directly used as netdata plugins.'
},
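The shared-buffer-cache descriptions above report separate "disk" and "memory" block counters; the derived cache hit ratio is the usual way to read them together. A minimal sketch with an illustrative helper name (not part of the dashboard code):

```javascript
// Sketch only: cache hit ratio from the disk/memory counters described above.
// diskBlocksRead  - blocks that had to be read from disk
// bufferCacheHits - blocks found already in the PostgreSQL buffer cache
function cacheHitRatio(diskBlocksRead, bufferCacheHits) {
    const total = diskBlocksRead + bufferCacheHits;
    // Guard against division by zero when no blocks have been read yet.
    return total === 0 ? 0 : (bufferCacheHits / total) * 100;
}
```

Note this ratio only covers the PostgreSQL buffer cache; blocks served from the operating system's file system cache still count as "disk" reads here.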
+ 'ping': {
+ title: 'Ping',
+ icon: '',
+ info: 'Measures round-trip time and packet loss by sending ping messages to network hosts.'
+ },
+
'gearman': {
title: 'Gearman',
icon: '',
@@ -333,12 +345,24 @@ netdataDashboard.menu = {
info: 'Performance metrics for mysql, the open-source relational database management system (RDBMS).'
},
+ 'nvme': {
+ title: 'NVMe',
+ icon: '',
+ info: 'NVMe devices SMART and health metrics. Additional information on metrics can be found in the NVM Express Base Specification.'
+ },
+
'postgres': {
title: 'PostgreSQL',
icon: '',
info: 'Performance metrics for PostgreSQL, the open source object-relational database management system (ORDBMS).'
},
+ 'proxysql': {
+ title: 'ProxySQL',
+ icon: '',
+ info: 'Performance metrics for ProxySQL, a high-performance open-source MySQL proxy.'
+ },
+
'pgbouncer': {
title: 'PgBouncer',
icon: '',
@@ -418,6 +442,12 @@ netdataDashboard.menu = {
info: undefined
},
+ 'nginxplus': {
+ title: 'Nginx Plus',
+ icon: '',
+ info: undefined
+ },
+
'apache': {
title: 'Apache',
icon: '',
@@ -528,7 +558,7 @@ netdataDashboard.menu = {
'logind': {
title: 'Logind',
icon: '',
- info: undefined
+ info: 'Keeps track of user logins and sessions by querying the systemd-logind API.'
},
'powersupply': {
@@ -699,6 +729,16 @@ netdataDashboard.menu = {
icon: '',
info: 'VPN network interfaces and peers traffic.'
},
+
+ 'pandas': {
+ icon: ''
+ },
+
+ 'cassandra': {
+ title: 'Cassandra',
+ icon: '',
+ info: 'Performance metrics for Cassandra, the open source distributed NoSQL database management system.'
+ }
};
@@ -1083,6 +1123,11 @@ netdataDashboard.submenu = {
'IRQ (Hard IRQ and Soft IRQ ), Shared Memory, ' +
'Syscalls (Sync, Mount), and Network.'
},
+
+ 'postgres.connections': {
+ info: 'A connection is an established line of communication between a client and the PostgreSQL server. Each connection adds to the load on the PostgreSQL server. To guard against running out of memory or overloading the database the max_connections parameter (default = 100) defines the maximum number of concurrent connections to the database server. A separate parameter, superuser_reserved_connections (default = 3), defines the quota for superuser connections (so that superusers can connect even if all other connection slots are blocked).'
+ },
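The limit arithmetic described above — max_connections minus superuser_reserved_connections — can be sketched as follows. This is an illustration with made-up names, not the collector's implementation; the defaults shown (100 and 3) are PostgreSQL's documented defaults:

```javascript
// Sketch only: available/used connection slots and utilization, per the
// description above. Defaults mirror PostgreSQL's shipped configuration.
function connectionSlots(current, maxConnections, superuserReserved) {
    maxConnections = maxConnections || 100;
    superuserReserved = superuserReserved || 3;
    // Ceiling for non-superuser connections.
    const limit = maxConnections - superuserReserved;
    return {
        used: current,
        available: Math.max(0, limit - current),
        utilization: (current / limit) * 100,
    };
}
```

At 100% utilization only superusers can still connect, drawing on the reserved quota.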
+
};
// ----------------------------------------------------------------------------
@@ -2676,7 +2721,7 @@ netdataDashboard.context = {
info: 'Real memory (RAM) used per user group. This does not include shared memory.'
},
'users.mem': {
- info: 'Real memory (RAM) used per user group. This does not include shared memory.'
+ info: 'Real memory (RAM) used per user. This does not include shared memory.'
},
'apps.vmem': {
@@ -2687,7 +2732,7 @@ netdataDashboard.context = {
info: 'Virtual memory allocated per user group since the Netdata restart. Please check this article for more information.'
},
'users.vmem': {
- info: 'Virtual memory allocated per user group since the Netdata restart. Please check this article for more information.'
+ info: 'Virtual memory allocated per user since the Netdata restart. Please check this article for more information.'
},
'apps.minor_faults': {
@@ -3767,21 +3812,11 @@ netdataDashboard.context = {
},
     'mysql.galera_cluster_status': {
-        info: '-1: unknown, ' +
-            '0: primary (primary group configuration, quorum present), ' +
-            '1: non-primary (non-primary group configuration, quorum lost), ' +
-            '2: disconnected (not connected to group, retrying).'
+        info: 'Status of this cluster component. Primary - primary group configuration, quorum present. Non-Primary - non-primary group configuration, quorum lost. Disconnected - not connected to group, retrying.'
     },
     'mysql.galera_cluster_state': {
-        info: '0: Undefined, ' +
-            '1: Joining, ' +
-            '2: Donor/Desynced, ' +
-            '3: Joined, ' +
-            '4: Synced, ' +
-            '5: Inconsistent.'
+        info: 'Membership state of this cluster component. Undefined - undefined state. Joining - the node is attempting to join the cluster. Donor - the node has blocked itself while it sends a State Snapshot Transfer (SST). Joined - the node has successfully joined the cluster. Synced - the node has established a connection with the cluster and synchronized its local databases with those of the cluster. Error - the node is not part of the cluster and does not replicate transactions.'
     },
+ 'postgres.connections_utilization': {
+ room: {
+ mainheads: [
+ function (_, id) {
+ return '';
+ }
+ ],
+ },
+ info: 'Total connection utilization across all databases. Utilization is measured as a percentage of (max_connections - superuser_reserved_connections). If the utilization is 100% no more new connections will be accepted (superuser connections will still be accepted if superuser quota is available).'
},
-    'postgres.db_stat_tuple_write': {
-        info: ''
-    },
+    'postgres.connections_usage': {
+        info: 'Connections usage. The maximum number of concurrent connections to the database server is max_connections minus superuser_reserved_connections. Available - new connections allowed. Used - connections currently in use.'
+    },
+ 'postgres.transactions_duration': {
+ info: 'Running transactions duration histogram. The bins are specified as consecutive, non-overlapping intervals. The value is the number of observed transactions that fall into each interval.'
},
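The histogram entries above describe consecutive, non-overlapping bins where each value is the count of observations falling into that interval. A minimal sketch of that binning, with illustrative names and bounds (not the collector's code):

```javascript
// Sketch only: count observations into consecutive, non-overlapping bins.
// bounds are upper edges (e.g. seconds); a final +Inf bucket catches the rest.
function histogram(durations, bounds) {
    const counts = new Array(bounds.length + 1).fill(0);
    for (const d of durations) {
        // First bound the value fits under; -1 means it exceeds every bound.
        let i = bounds.findIndex(function (b) { return d <= b; });
        if (i === -1) i = bounds.length; // overflow (+Inf) bucket
        counts[i] += 1;
    }
    return counts;
}
```

For example, durations `[0.05, 0.2, 3, 42]` against bounds `[0.1, 0.5, 1, 10]` land in the first, second, fourth, and overflow buckets respectively.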
-    'postgres.archive_wal': {
-        info: 'WAL archiving.'
-    },
+ 'postgres.queries_duration': {
+ info: 'Active queries duration histogram. The bins are specified as consecutive, non-overlapping intervals. The value is the number of observed active queries that fall into each interval.'
},
-    'postgres.checkpointer': {
-        info: 'Number of checkpoints. ' +
-            'For more information see WAL Configuration.'
-    },
-    'postgres.autovacuum': {
-        info: 'PostgreSQL databases require periodic maintenance known as vacuuming. For many installations, it is sufficient to let vacuuming be performed by the autovacuum daemon. ' +
-            'For more information see The Autovacuum Daemon.'
-    },
-    'postgres.standby_delta': {
-        info: 'Streaming replication delta. ' +
-            'For more information see Synchronous Replication.'
-    },
-    'postgres.replication_slot': {
-        info: 'Replication slot files. ' +
-            'For more information see Replication Slots.'
-    },
-    'postgres.backend_usage': {
-        info: 'Connections usage against maximum connections allowed, as defined in the max_connections setting. ' +
-            'Assuming non-superuser accounts are being used to connect to Postgres (so superuser_reserved_connections are subtracted from max_connections). ' +
-            'For more information see Connections and Authentication.'
-    },
-    'postgres.forced_autovacuum': {
-        info: 'Percent towards forced autovacuum for one or more tables. ' +
-            'For more information see Preventing Transaction ID Wraparound Failures.'
-    },
-    'postgres.tx_wraparound_oldest_current_xid': {
-        info: 'The oldest current transaction id (xid). ' +
-            'If for some reason autovacuum fails to clear old XIDs from a table, the system will begin to emit warning messages when the database\'s oldest XIDs reach eleven million transactions from the wraparound point. ' +
-            'For more information see Preventing Transaction ID Wraparound Failures.'
-    },
-    'postgres.percent_towards_wraparound': {
-        info: 'Percent towards transaction wraparound. ' +
-            'For more information see Preventing Transaction ID Wraparound Failures.'
-    },
- // python version end
-    'postgres.connections_utilization': {
-        info: 'Connections in use as percentage of max_connections. Connection "slots" that are reserved for superusers (superuser_reserved_connections) are subtracted from the limit. If the utilization is 100% new connections will be accepted only for superusers, and no new replication connections will be accepted.'
+    'postgres.checkpoints_rate': {
+        info: 'Number of checkpoints that have been performed. Checkpoints are periodic maintenance operations the database performs to make sure that everything it\'s been caching in memory has been synchronized with the disk. It\'s desirable when checkpoints are scheduled rather than requested, as the latter can indicate that your databases are under heavy load. Scheduled - checkpoints triggered because the time elapsed since the previous checkpoint exceeds the checkpoint_timeout setting. Requested - checkpoints triggered because the uncheckpointed WAL size grew beyond the max_wal_size setting.'
+    },
+    'cassandra.jvm_gc_rate': {
+        info: 'Rate of garbage collections. ParNew - young-generation. cms (ConcurrentMarkSweep) - old-generation.'
+    },
+    'cassandra.jvm_gc_time': {
+        info: 'Elapsed time of garbage collection. ParNew - young-generation. cms (ConcurrentMarkSweep) - old-generation.'
+    },
+    'cassandra.client_requests_timeouts_rate': {
+        info: 'Requests which were not acknowledged within the configurable timeout window.'
+    },
+    'cassandra.client_requests_unavailables_rate': {
+        info: 'Requests for which the required number of nodes was unavailable.'
+    },
+    'cassandra.storage_exceptions_rate': {
+        info: 'Requests for which a storage exception was encountered.'
+    },
+
+    // ------------------------------------------------------------------------
+    // WMI (Process)
+
+    'wmi.processes_cpu_time': {
+        info: 'Total CPU utilization. The amount of time spent by the process in user and privileged modes.'
+    },
+    'wmi.processes_handles': {
+        info: 'Total number of handles the process has open. This number is the sum of the handles currently open by each thread in the process.'
+    },
+    'wmi.processes_io_bytes': {
+        info: 'Bytes issued to I/O operations in different modes (read, write, other). This property counts all I/O activity generated by the process to include file, network, and device I/Os. Read and write mode includes data operations; other mode includes those that do not involve data, such as control operations.'
+    },
+    'wmi.processes_io_operations': {
+        info: 'I/O operations issued in different modes (read, write, other). This property counts all I/O activity generated by the process to include file, network, and device I/Os. Read and write mode includes data operations; other mode includes those that do not involve data, such as control operations.'
+    },
+    'wmi.processes_page_faults': {
+        info: 'Page faults by the threads executing in this process. A page fault occurs when a thread refers to a virtual memory page that is not in its working set in main memory. This can cause the page not to be fetched from disk if it is on the standby list and hence already in main memory, or if it is in use by another process with which the page is shared.'
+    },
+    'wmi.processes_file_bytes': {
+        info: 'Current number of bytes this process has used in the paging file(s). Paging files are used to store pages of memory used by the process that are not contained in other files. Paging files are shared by all processes, and lack of space in paging files can prevent other processes from allocating memory.'
+    },
+    'wmi.processes_pool_bytes': {
+        info: 'Pool Bytes is the last observed number of bytes in the paged or nonpaged pool. The nonpaged pool is an area of system memory (physical memory used by the operating system) for objects that cannot be written to disk, but must remain in physical memory as long as they are allocated. The paged pool is an area of system memory (physical memory used by the operating system) for objects that can be written to disk when they are not being used.'
+    },
+    'wmi.processes_threads': {
+        info: 'Number of threads currently active in this process. An instruction is the basic unit of execution in a processor, and a thread is the object that executes instructions. Every running process has at least one thread.'
+    },
+
+    // ------------------------------------------------------------------------
+    // WMI (TCP)
+
+    'wmi.tcp_conns_active': {
+        info: 'Number of times TCP connections have made a direct transition from the CLOSED state to the SYN-SENT state.'
+    },
+    'wmi.tcp_conns_established': {
+        info: 'Number of TCP connections for which the current state is either ESTABLISHED or CLOSE-WAIT.'
+    },
+    'wmi.tcp_conns_failures': {
+        info: 'Number of times TCP connections have made a direct transition to the CLOSED state from the SYN-SENT state or the SYN-RCVD state, plus the number of times TCP connections have made a direct transition from the SYN-RCVD state to the LISTEN state.'
+    },
+    'wmi.tcp_conns_passive': {
+        info: 'Number of times TCP connections have made a direct transition from the LISTEN state to the SYN-RCVD state.'
+    },
+    'wmi.tcp_conns_resets': {
+        info: 'Number of times TCP connections have made a direct transition to the CLOSED state from either the ESTABLISHED state or the CLOSE-WAIT state.'
+    },
+    'wmi.tcp_segments_received': {
+        info: 'Rate at which segments are received, including those received in error. This count includes segments received on currently established connections.'
+    },
+    'wmi.tcp_segments_retransmitted': {
+        info: 'Rate at which segments are retransmitted, that is, segments transmitted that contain one or more previously transmitted bytes.'
+    },
+    'wmi.tcp_segments_sent': {
+        info: 'Rate at which segments are sent, including those on current connections, but excluding those containing only retransmitted bytes.'
+    },

     // ------------------------------------------------------------------------
     // APACHE

@@ -5911,15 +6158,16 @@ netdataDashboard.context = {
     },
     'logind.sessions': {
-        info: 'Shows the number of active sessions of each type tracked by logind.'
+        info: 'Local and remote sessions.'
     },
-    'logind.users': {
-        info: 'Shows the number of active users of each type tracked by logind.'
+    'logind.sessions_type': {
+        info: 'Sessions of each session type. Graphical - sessions are running under one of X11, Mir, or Wayland. Console - sessions are usually regular text mode local logins, but depending on how the system is configured may have an associated GUI. Other - sessions are those that do not fall into the above categories (such as sessions for cron jobs or systemd timer units).'
     },
-    'logind.seats': {
-        info: 'Shows the number of active seats tracked by logind. Each seat corresponds to a combination of a display device and input device providing a physical presence for the system.'
+    'logind.sessions_state': {
+        info: 'Sessions in each session state. Online - logged in and running in the background. Closing - nominally logged out, but some processes belonging to it are still around. Active - logged in and running in the foreground.'
+    },
+    'logind.users_state': {
+        info: 'Users in each user state. Offline - users are not logged in. Closing - users are in the process of logging out without lingering. Online - users are logged in, but have no active sessions. Lingering - users are not logged in, but have one or more services still running. Active - users are logged in, and have at least one active session.'
     },

     // ------------------------------------------------------------------------

@@ -7368,6 +7616,71 @@ netdataDashboard.context = {
         'More info.'
     },

+    // Ping
+
+    'ping.host_rtt': {
+        info: 'Round-trip time (RTT) is the time it takes for a data packet to reach its destination and return back to its original source.'
+    },
+    'ping.host_std_dev_rtt': {
+        info: 'Round-trip time (RTT) standard deviation. The average value of how far each RTT of a ping differs from the average RTT.'
+    },
+    'ping.host_packet_loss': {
+        info: 'Packet loss occurs when one or more transmitted data packets do not reach their destination. Usually caused by data transfer errors, network congestion or firewall blocking. ICMP echo packets are often treated as lower priority by routers and target hosts, so ping test packet loss may not always translate to application packet loss.'
+    },
+    'ping.host_packets': {
+        info: 'Number of ICMP messages sent and received. These counters should be equal if there is no packet loss.'
+    },
+
+    // NVMe
+
+    'nvme.device_estimated_endurance_perc': {
+        info: 'NVM subsystem lifetime used based on the actual usage and the manufacturer\'s prediction of NVM life. A value of 100 indicates that the estimated endurance of the device has been consumed, but may not indicate a device failure. The value can be greater than 100 if you use the storage beyond its planned lifetime.'
+    },
+    'nvme.device_available_spare_perc': {
+        info: 'Remaining spare capacity that is available. SSDs provide a set of internal spare capacity, called spare blocks, that can be used to replace blocks that have reached their write operation limit. After all of the spare blocks have been used, the next block that reaches its limit causes the disk to fail.'
+    },
+    'nvme.device_composite_temperature': {
+        info: 'The current composite temperature of the controller and namespace(s) associated with that controller. The manner in which this value is computed is implementation specific and may not represent the actual temperature of any physical point in the NVM subsystem.'
+    },
+    'nvme.device_io_transferred_count': {
+        info: 'The total amount of data read and written by the host.'
+    },
+    'nvme.device_power_cycles_count': {
+        info: 'Power cycles reflect the number of times this host has been rebooted or the device has been woken up after sleep. A high number of power cycles does not affect the device\'s life expectancy.'
+    },
+    'nvme.device_power_on_time': {
+        info: 'Power-on time is the length of time the device is supplied with power.'
+    },
+    'nvme.device_unsafe_shutdowns_count': {
+        info: 'The number of times a power outage occurred without a shutdown notification being sent. Depending on the NVMe device you are using, an unsafe shutdown can corrupt user data.'
+    },
+    'nvme.device_critical_warnings_state': {
+        info: 'Critical warnings for the status of the controller. A status is active if set to 1. AvailableSpare - the available spare capacity is below the threshold. TempThreshold - the composite temperature is greater than or equal to an over temperature threshold or less than or equal to an under temperature threshold. NvmSubsystemReliability - the NVM subsystem reliability is degraded due to excessive media or internal errors. ReadOnly - media is placed in read-only mode. VolatileMemBackupFailed - the volatile memory backup device has failed. PersistentMemoryReadOnly - the Persistent Memory Region has become read-only or unreliable.'
+    },
+    'nvme.device_media_errors_rate': {
+        info: 'The number of occurrences where the controller detected an unrecovered data integrity error. Errors such as uncorrectable ECC, CRC checksum failure, or LBA tag mismatch are included in this counter.'
+    },
+    'nvme.device_error_log_entries_rate': {
+        info: 'The number of entries in the Error Information Log. By itself, an increase in the number of records is not an indicator of any failure condition.'
+    },
+    'nvme.device_warning_composite_temperature_time': {
+        info: 'The time the device has been operating above the Warning Composite Temperature Threshold (WCTEMP) and below the Critical Composite Temperature Threshold (CCTEMP).'
+    },
+    'nvme.device_critical_composite_temperature_time': {
+        info: 'The time the device has been operating above the Critical Composite Temperature Threshold (CCTEMP).'
+    },
+    'nvme.device_thermal_mgmt_temp1_transitions_rate': {
+        info: 'The number of times the controller has entered lower active power states or performed vendor-specific thermal management actions, minimizing performance impact, to attempt to lower the Composite Temperature due to the host-managed thermal management feature.'
+    },
+    'nvme.device_thermal_mgmt_temp2_transitions_rate': {
+        info: 'The number of times the controller has entered lower active power states or performed vendor-specific thermal management actions, regardless of the impact on performance (e.g., heavy throttling), to attempt to lower the Composite Temperature due to the host-managed thermal management feature.'
+    },
+    'nvme.device_thermal_mgmt_temp1_time': {
+        info: 'The amount of time the controller has entered lower active power states or performed vendor-specific thermal management actions, minimizing performance impact, to attempt to lower the Composite Temperature due to the host-managed thermal management feature.'
+    },
+    'nvme.device_thermal_mgmt_temp2_time': {
+        info: 'The amount of time the controller has entered lower active power states or performed vendor-specific thermal management actions, regardless of the impact on performance (e.g., heavy throttling), to attempt to lower the Composite Temperature due to the host-managed thermal management feature.'
+    },

     // ------------------------------------------------------------------------
};
-- 
cgit v1.2.3