author     Daniel Baumann <daniel.baumann@progress-linux.org>    2022-07-14 18:28:04 +0000
committer  Daniel Baumann <daniel.baumann@progress-linux.org>    2022-07-16 15:12:07 +0000
commit     589986012c4b3ab68e299a2eadca18f90080113b (patch)
tree       f29a53b04a1950cdddae69344bccb3f0146fa728 /Documentation/nvme-wdc-smart-add-log.1
parent     Releasing debian version 1.16-4. (diff)
Merging upstream version 2.0.
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'Documentation/nvme-wdc-smart-add-log.1')
-rw-r--r--  Documentation/nvme-wdc-smart-add-log.1  496
1 file changed, 496 insertions, 0 deletions
diff --git a/Documentation/nvme-wdc-smart-add-log.1 b/Documentation/nvme-wdc-smart-add-log.1
new file mode 100644
index 0000000..fe1d3de
--- /dev/null
+++ b/Documentation/nvme-wdc-smart-add-log.1
@@ -0,0 +1,496 @@
+'\" t
+.\" Title: nvme-wdc-smart-add-log
+.\" Author: [FIXME: author] [see http://www.docbook.org/tdg5/en/html/author]
+.\" Generator: DocBook XSL Stylesheets vsnapshot <http://docbook.sf.net/>
+.\" Date: 01/08/2019
+.\" Manual: NVMe Manual
+.\" Source: NVMe
+.\" Language: English
+.\"
+.TH "NVME\-WDC\-SMART\-AD" "1" "01/08/2019" "NVMe" "NVMe Manual"
+.\" -----------------------------------------------------------------
+.\" * Define some portability stuff
+.\" -----------------------------------------------------------------
+.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+.\" http://bugs.debian.org/507673
+.\" http://lists.gnu.org/archive/html/groff/2009-02/msg00013.html
+.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+.ie \n(.g .ds Aq \(aq
+.el .ds Aq '
+.\" -----------------------------------------------------------------
+.\" * set default formatting
+.\" -----------------------------------------------------------------
+.\" disable hyphenation
+.nh
+.\" disable justification (adjust text to left margin only)
+.ad l
+.\" -----------------------------------------------------------------
+.\" * MAIN CONTENT STARTS HERE *
+.\" -----------------------------------------------------------------
+.SH "NAME"
+nvme-wdc-smart-add-log \- Send NVMe WDC smart\-add\-log Vendor Unique Command, return result
+.SH "SYNOPSIS"
+.sp
+.nf
+\fInvme wdc smart\-add\-log\fR <device> [\-\-interval=<NUM>, \-i <NUM>] [\-\-output\-format=<normal|json>, \-o <normal|json>]
+.fi
+.SH "DESCRIPTION"
+.sp
+For the given NVMe device, send the Vendor Unique WDC smart\-add\-log command and report the additional SMART log\&. The \-\-interval option selects the reporting interval whose performance statistics are returned\&.
+.sp
+The <device> parameter is mandatory and must be the NVMe character device (ex: /dev/nvme0)\&.
+.sp
+This will only work on WDC devices supporting this feature\&. Results for any other device are undefined\&.
+.sp
+On success it returns 0, error code otherwise\&.
+.SH "OPTIONS"
+.PP
+\-i <NUM>, \-\-interval=<NUM>
+.RS 4
+Return the statistics from the specified reporting interval; defaults to 14\&.
+.RE
+.PP
+\-o <format>, \-\-output\-format=<format>
+.RS 4
+Set the reporting format to
+\fInormal\fR
+or
+\fIjson\fR\&. Only one output format can be used at a time\&. Default is normal\&.
+.RE
+.sp
+Valid interval values and their descriptions:
+.TS
+allbox tab(:);
+ltB ltB.
+T{
+Value
+T}:T{
+Description
+T}
+.T&
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt.
+T{
+.sp
+\fB1\fR
+T}:T{
+.sp
+Most recent five (5) minute accumulated set\&.
+T}
+T{
+.sp
+\fB2\-12\fR
+T}:T{
+.sp
+Previous five (5) minute accumulated sets\&.
+T}
+T{
+.sp
+\fB13\fR
+T}:T{
+.sp
+The accumulated total of sets 1 through 12, i\&.e\&. the previous hour of accumulated statistics\&.
+T}
+T{
+.sp
+\fB14\fR
+T}:T{
+.sp
+The statistical set accumulated since power\-up\&.
+T}
+T{
+.sp
+\fB15\fR
+T}:T{
+.sp
+The statistical set accumulated during the entire lifetime of the device\&.
+T}
+.TE
+.sp 1
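+.sp
+For example, to retrieve the statistics accumulated over the entire lifetime of the device (interval 15), an invocation might look like the following sketch; the device name is illustrative:
+.sp
+.if n \{\
+.RS 4
+.\}
+.nf
+# nvme wdc smart\-add\-log /dev/nvme0 \-\-interval=15
+.fi
+.if n \{\
+.RE
+.\}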
+.SH "CA LOG PAGE DATA OUTPUT EXPLANATION"
+.TS
+allbox tab(:);
+ltB ltB.
+T{
+Field
+T}:T{
+Description
+T}
+.T&
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt.
+T{
+.sp
+\fBPhysical NAND bytes written\fR
+T}:T{
+.sp
+The number of bytes written to NAND\&. 16 bytes \- hi/lo
+T}
+T{
+.sp
+\fBPhysical NAND bytes read\fR
+T}:T{
+.sp
+The number of bytes read from NAND\&. 16 bytes \- hi/lo
+T}
+T{
+.sp
+\fBBad NAND Block Count\fR
+T}:T{
+.sp
+Raw and normalized count of the number of NAND blocks that have been retired after the drive\*(Aqs manufacturing tests (i\&.e\&. grown bad blocks)\&. 2 bytes normalized, 6 bytes raw count
+T}
+T{
+.sp
+\fBUncorrectable Read Error Count\fR
+T}:T{
+.sp
+Total count of NAND reads that were not correctable by read retries, all levels of ECC, or XOR (as applicable)\&. 8 bytes
+T}
+T{
+.sp
+\fBSoft ECC Error Count\fR
+T}:T{
+.sp
+Total count of NAND reads that were not correctable by read retries or first\-level ECC\&. 8 bytes
+T}
+T{
+.sp
+\fBSSD End to End Detection Count\fR
+T}:T{
+.sp
+A count of the errors detected by the SSD end\-to\-end error correction, which includes DRAM, SRAM, or other storage element ECC/CRC protection mechanisms (not NAND ECC)\&. 4 bytes
+T}
+T{
+.sp
+\fBSSD End to End Correction Count\fR
+T}:T{
+.sp
+A count of the errors corrected by the SSD end\-to\-end error correction, which includes DRAM, SRAM, or other storage element ECC/CRC protection mechanisms (not NAND ECC)\&. 4 bytes
+T}
+T{
+.sp
+\fBSystem Data % Used\fR
+T}:T{
+.sp
+A normalized cumulative count of the number of erase cycles per block since leaving the factory for the system (FW and metadata) area\&. Starts at 0 and increments; 100 indicates that the estimated endurance has been consumed\&.
+T}
+T{
+.sp
+\fBUser Data Max Erase Count\fR
+T}:T{
+.sp
+The maximum erase count across all NAND blocks in the drive\&. 4 bytes
+T}
+T{
+.sp
+\fBUser Data Min Erase Count\fR
+T}:T{
+.sp
+The minimum erase count across all NAND blocks in the drive\&. 4 bytes
+T}
+T{
+.sp
+\fBRefresh Count\fR
+T}:T{
+.sp
+A count of the number of blocks that have been re\-allocated due to background operations only\&. 8 bytes
+T}
+T{
+.sp
+\fBProgram Fail Count\fR
+T}:T{
+.sp
+Raw and normalized count of total program failures\&. The normalized count starts at 100 and shows the percent of remaining allowable failures\&. 2 bytes normalized, 6 bytes raw count
+T}
+T{
+.sp
+\fBUser Data Erase Fail Count\fR
+T}:T{
+.sp
+Raw and normalized count of total erase failures in the user area\&. The normalized count starts at 100 and shows the percent of remaining allowable failures\&. 2 bytes normalized, 6 bytes raw count
+T}
+T{
+.sp
+\fBSystem Area Erase Fail Count\fR
+T}:T{
+.sp
+Raw and normalized count of total erase failures in the system area\&. The normalized count starts at 100 and shows the percent of remaining allowable failures\&. 2 bytes normalized, 6 bytes raw count
+T}
+T{
+.sp
+\fBThermal Throttling Status\fR
+T}:T{
+.sp
+The current status of thermal throttling (enabled or disabled)\&. 2 bytes
+T}
+T{
+.sp
+\fBThermal Throttling Count\fR
+T}:T{
+.sp
+A count of the number of thermal throttling events\&. 2 bytes
+T}
+T{
+.sp
+\fBPCIe Correctable Error Count\fR
+T}:T{
+.sp
+Summation counter of all PCIe correctable errors (Bad TLP, Bad DLLP, Receiver error, Replay timeouts, Replay rollovers)\&. 8 bytes
+T}
+.TE
+.sp 1
+.SH "C1 LOG PAGE DATA OUTPUT EXPLANATION"
+.TS
+allbox tab(:);
+ltB ltB.
+T{
+Field
+T}:T{
+Description
+T}
+.T&
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt
+lt lt.
+T{
+.sp
+\fBHost Read Commands\fR
+T}:T{
+.sp
+Number of host read commands received during the reporting period\&.
+T}
+T{
+.sp
+\fBHost Read Blocks\fR
+T}:T{
+.sp
+Number of 512\-byte blocks requested during the reporting period\&.
+T}
+T{
+.sp
+\fBAverage Read Size\fR
+T}:T{
+.sp
+Average read size, calculated as (Host Read Blocks / Host Read Commands)\&.
+T}
+T{
+.sp
+\fBHost Read Cache Hit Commands\fR
+T}:T{
+.sp
+Number of host read commands that were serviced entirely from the on\-board read cache during the reporting period\&. No access to the NAND flash memory was required\&. This count is only updated if the entire command was serviced from the cache memory\&.
+T}
+T{
+.sp
+\fBHost Read Cache Hit Percentage\fR
+T}:T{
+.sp
+Percentage of host read commands satisfied from the cache\&.
+T}
+T{
+.sp
+\fBHost Read Cache Hit Blocks\fR
+T}:T{
+.sp
+Number of 512\-byte blocks of data that have been returned for Host Read Cache Hit Commands during the reporting period\&. This count is only updated with the blocks returned for host read commands that were serviced entirely from cache memory\&.
+T}
+T{
+.sp
+\fBAverage Read Cache Hit Size\fR
+T}:T{
+.sp
+Average size of read commands satisfied from the cache\&.
+T}
+T{
+.sp
+\fBHost Read Commands Stalled\fR
+T}:T{
+.sp
+Number of host read commands that were stalled due to a lack of resources within the SSD during the reporting period (NAND flash command queue full, low cache page count, cache page contention, etc\&.)\&. Commands are not considered stalled if the only reason for the delay was waiting for the data to be physically read from the NAND flash\&. It is normal to expect this count to be non\-zero on heavily utilized systems\&.
+T}
+T{
+.sp
+\fBHost Read Commands Stalled Percentage\fR
+T}:T{
+.sp
+Percentage of read commands that were stalled\&. If the figure is consistently high, consideration should be given to spreading the data across multiple SSDs\&.
+T}
+T{
+.sp
+\fBHost Write Commands\fR
+T}:T{
+.sp
+Number of host write commands received during the reporting period\&.
+T}
+T{
+.sp
+\fBHost Write Blocks\fR
+T}:T{
+.sp
+Number of 512\-byte blocks written during the reporting period\&.
+T}
+T{
+.sp
+\fBAverage Write Size\fR
+T}:T{
+.sp
+Average write size, calculated as (Host Write Blocks / Host Write Commands)\&.
+T}
+T{
+.sp
+\fBHost Write Odd Start Commands\fR
+T}:T{
+.sp
+Number of host write commands that started on a non\-aligned boundary during the reporting period\&. The size of the boundary alignment is normally 4K; therefore this returns the number of commands that started on a non\-4K aligned boundary\&. The SSD requires slightly more time to process non\-aligned write commands than aligned write commands\&.
+T}
+T{
+.sp
+\fBHost Write Odd Start Commands Percentage\fR
+T}:T{
+.sp
+Percentage of host write commands that started on a non\-aligned boundary\&. If this figure is equal to or near 100%, and the NAND Read Before Write value is also high, the user should investigate offsetting the file system\&. On Microsoft Windows systems, the user can use Diskpart; on Unix\-based operating systems, there is normally a method whereby file system partitions can be placed where required\&.
+T}
+T{
+.sp
+\fBHost Write Odd End Commands\fR
+T}:T{
+.sp
+Number of host write commands that ended on a non\-aligned boundary during the reporting period\&. The size of the boundary alignment is normally 4K; therefore this returns the number of commands that ended on a non\-4K aligned boundary\&.
+T}
+T{
+.sp
+\fBHost Write Odd End Commands Percentage\fR
+T}:T{
+.sp
+Percentage of host write commands that ended on a non\-aligned boundary\&.
+T}
+T{
+.sp
+\fBHost Write Commands Stalled\fR
+T}:T{
+.sp
+Number of host write commands that were stalled due to a lack of resources within the SSD during the reporting period\&. The most likely cause is that the write data was arriving faster than it could be saved to the NAND flash memory\&. If a large volume of read commands was being processed simultaneously, other causes might include the NAND flash command queue being full, a low cache page count, or cache page contention\&. It is normal to expect this count to be non\-zero on heavily utilized systems\&.
+T}
+T{
+.sp
+\fBHost Write Commands Stalled Percentage\fR
+T}:T{
+.sp
+Percentage of write commands that were stalled\&. If the figure is consistently high, consideration should be given to spreading the data across multiple SSDs\&.
+T}
+T{
+.sp
+\fBNAND Read Commands\fR
+T}:T{
+.sp
+Number of read commands issued to the NAND devices during the reporting period\&. This figure will normally be much higher than the host read commands figure, as the data needed to satisfy a single host read command may be spread across several NAND flash devices\&.
+T}
+T{
+.sp
+\fBNAND Read Blocks\fR
+T}:T{
+.sp
+Number of 512\-byte blocks requested from NAND flash devices during the reporting period\&. This figure would normally be about the same as the host read blocks figure\&.
+T}
+T{
+.sp
+\fBAverage NAND Read Size\fR
+T}:T{
+.sp
+Average size of NAND read commands\&.
+T}
+T{
+.sp
+\fBNAND Write Commands\fR
+T}:T{
+.sp
+Number of write commands issued to the NAND devices during the reporting period\&. There is no real correlation between the number of host write commands issued and the number of NAND Write Commands\&.
+T}
+T{
+.sp
+\fBNAND Write Blocks\fR
+T}:T{
+.sp
+Number of 512\-byte blocks written to the NAND flash devices during the reporting period\&. This figure would normally be about the same as the host write blocks figure\&.
+T}
+T{
+.sp
+\fBAverage NAND Write Size\fR
+T}:T{
+.sp
+Average size of NAND write commands\&. This figure should never be greater than 128K, as 128K is the maximum size of a write ever issued to a NAND device\&.
+T}
+T{
+.sp
+\fBNAND Read Before Write\fR
+T}:T{
+.sp
+The number of read\-before\-write operations that were required to process non\-aligned host write commands during the reporting period\&. See Host Write Odd Start Commands and Host Write Odd End Commands\&. NAND Read Before Write operations have a detrimental effect on the overall performance of the device\&.
+T}
+.TE
+.sp 1
+.SH "EXAMPLES"
+.sp
+.RS 4
+.ie n \{\
+\h'-04'\(bu\h'+03'\c
+.\}
+.el \{\
+.sp -1
+.IP \(bu 2.3
+.\}
+Issue the WDC smart\-add\-log Vendor Unique Command with the default interval (14):
+.sp
+.if n \{\
+.RS 4
+.\}
+.nf
+# nvme wdc smart\-add\-log /dev/nvme0
+.fi
+.if n \{\
+.RE
+.\}
+.RE
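+.sp
+.RS 4
+.ie n \{\
+\h'-04'\(bu\h'+03'\c
+.\}
+.el \{\
+.sp -1
+.IP \(bu 2.3
+.\}
+A sketch of retrieving the most recent five\-minute statistics set (interval 1) as JSON for machine parsing; the device name and interval choice are illustrative assumptions:
+.sp
+.if n \{\
+.RS 4
+.\}
+.nf
+# nvme wdc smart\-add\-log /dev/nvme0 \-\-interval=1 \-\-output\-format=json
+.fi
+.if n \{\
+.RE
+.\}
+.RE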
+.SH "NVME"
+.sp
+Part of the nvme\-user suite\&.