 doc/clzip.texi | 122 +-
 1 file changed, 60 insertions(+), 62 deletions(-)
diff --git a/doc/clzip.texi b/doc/clzip.texi
index ce4d9ac..ebfe833 100644
--- a/doc/clzip.texi
+++ b/doc/clzip.texi
@@ -6,8 +6,8 @@
@finalout
@c %**end of header
-@set UPDATED 23 November 2024
-@set VERSION 1.15-rc1
+@set UPDATED 10 January 2025
+@set VERSION 1.15
@dircategory Compression
@direntry
@@ -52,7 +52,7 @@ This manual is for Clzip (version @value{VERSION}, @value{UPDATED}).
@end menu
@sp 1
-Copyright @copyright{} 2010-2024 Antonio Diaz Diaz.
+Copyright @copyright{} 2010-2025 Antonio Diaz Diaz.
This manual is free documentation: you have unlimited permission to copy,
distribute, and modify it.
@@ -99,8 +99,7 @@ taking into account both data integrity and decoder availability:
@itemize @bullet
@item
-The lzip format provides very safe integrity checking and some data
-recovery means. The program
+The program
@uref{http://www.nongnu.org/lzip/manual/lziprecover_manual.html#Data-safety,,lziprecover}
can repair bit flip errors (one of the most common forms of data corruption)
in lzip files, and provides data recovery capabilities, including
@@ -129,13 +128,12 @@ the beginning is a thing of the past.
The member trailer stores the 32-bit CRC of the original data, the size of
the original data, and the size of the member. These values, together with
-the 'End Of Stream' marker, provide a 3-factor integrity checking which
-guarantees that the decompressed version of the data is identical to the
-original. This guards against corruption of the compressed data, and against
-undetected bugs in clzip (hopefully very unlikely). The chances of data
-corruption going undetected are microscopic. Be aware, though, that the
-check occurs upon decompression, so it can only tell you that something is
-wrong. It can't help you recover the original uncompressed data.
+the 'End Of Stream' marker, provide a 3-factor integrity checking that
+guards against corruption of the compressed data and against undetected bugs
+in clzip (hopefully very unlikely). The chances of data corruption going
+undetected are microscopic. Be aware, though, that the check occurs upon
+decompression, so it can only tell you that something is wrong. It can't
+help you recover the original uncompressed data.
Clzip uses the same well-defined exit status values used by bzip2, which
makes it safer than compressors returning ambiguous warning values (like
@@ -345,7 +343,8 @@ additionally checks that none of the files specified contain trailing data.
When compressing, set the match length limit in bytes. After a match this
long is found, the search is finished. Valid values range from 5 to 273.
Larger values usually give better compression ratios but longer compression
-times.
+times. A match is a Lempel-Ziv back-reference coded as a distance-length
+pair.
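[Editor's sketch: the match length limit described in this hunk caps the byte-by-byte comparison between the current position and an earlier occurrence of the data. A minimal illustration with hypothetical names, not clzip's actual search code:]

```cpp
#include <cstddef>
#include <string>

// Length of the match at 'pos' against the data 'dist' bytes back,
// capped at 'limit' (clzip's -m value, 5 to 273). Illustrative only.
std::size_t match_length( const std::string & data, const std::size_t pos,
                          const std::size_t dist, const std::size_t limit )
  {
  std::size_t len = 0;
  while( len < limit && pos + len < data.size() &&
         data[pos+len-dist] == data[pos+len] ) ++len;
  return len;
  }
```

[Once a match of length 'limit' is found, the search stops, which is why larger limits cost more time but can find longer matches.]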
@item -o @var{file}
@itemx --output=@var{file}
@@ -621,14 +620,14 @@ overflowing.
@chapter Format of the LZMA stream in lzip files
@cindex format of the LZMA stream
-The LZMA algorithm has three parameters, called 'special LZMA
-properties', to adjust it for some kinds of binary data. These
-parameters are: @samp{literal_context_bits} (with a default value of 3),
+The LZMA algorithm has three parameters, called 'special LZMA properties',
+to adjust it for some kinds of binary data. These parameters are:
+@samp{literal_context_bits} (with a default value of 3),
@samp{literal_pos_state_bits} (with a default value of 0), and
@samp{pos_state_bits} (with a default value of 2). As a general purpose
-compressor, lzip only uses the default values for these parameters. In
-particular @samp{literal_pos_state_bits} has been optimized away and
-does not even appear in the code.
+compressed format, lzip only uses the default values for these parameters.
+In particular @samp{literal_pos_state_bits} has been optimized away and does
+not even appear in the code.
The first byte of the LZMA stream is set to zero to help tools like grep
recognize lzip files as binary files.
@@ -671,7 +670,7 @@ reusing a recently used distance). There are 7 different coding sequences:
@multitable @columnfractions .35 .14 .51
@headitem Bit sequence @tab Name @tab Description
@item 0 + byte @tab literal @tab literal byte
-@item 1 + 0 + len + dis @tab match @tab distance-length pair
+@item 1 + 0 + len + dis @tab match @tab LZ distance-length pair
@item 1 + 1 + 0 + 0 @tab shortrep @tab 1 byte match at latest used distance
@item 1 + 1 + 0 + 1 + len @tab rep0 @tab len bytes match at latest used distance
@item 1 + 1 + 1 + 0 + len @tab rep1 @tab len bytes match at second
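[Editor's sketch: the bit sequences in the table above form a prefix-free decision tree. The sketch below maps leading bits to sequence names, assuming the two sequences elided by the diff context are rep2 (1+1+1+1+0+len) and rep3 (1+1+1+1+1+len), completing the 7 sequences mentioned above. In the real decoder each bit is obtained from the range decoder, one context-coded bit at a time.]

```cpp
#include <string>
#include <vector>

// Map the leading (already decoded) bits to the sequence name from the
// table of coding sequences. Illustrative only.
std::string classify( const std::vector<int> & bits )
  {
  if( bits[0] == 0 ) return "literal";               // 0 + byte
  if( bits[1] == 0 ) return "match";                 // 1 + 0 + len + dis
  if( bits[2] == 0 ) return ( bits[3] == 0 ) ? "shortrep" : "rep0";
  if( bits[3] == 0 ) return "rep1";
  return ( bits[4] == 0 ) ? "rep2" : "rep3";         // assumed encodings
  }
```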
@@ -721,16 +720,17 @@ alone. This seems to need 66 slots (twice the number of positions), but for
positions 0 and 1 there is no next bit, so the number of slots needed is 64
(0 to 63).
-The 6 bits representing this "slot number" are then context-coded. If
-the distance is @w{>= 4}, the remaining bits are encoded as follows.
+The 6 bits representing this "slot number" are then context-coded.
+If the distance is @w{>= 4}, the remaining bits are encoded as follows.
@samp{direct_bits} is the amount of remaining bits (from 1 to 30) needed
to form a complete distance, and is calculated as @w{(slot >> 1) - 1}.
If a distance needs 6 or more direct_bits, the last 4 bits are encoded
separately. The last piece (all the direct_bits for distances 4 to 127
(slots 4 to 13), or the last 4 bits for distances @w{>= 128}
-@w{(slot >= 14)}) is context-coded in reverse order (from LSB to MSB). For
-distances @w{>= 128}, the @w{@samp{direct_bits - 4}} part is encoded with
-fixed 0.5 probability.
+@w{(slot >= 14)}) is context-coded in reverse order (from LSB to MSB)
+because between distances the LSB tends to correlate better than more
+significant bits. For distances @w{>= 128}, the @w{@samp{direct_bits - 4}}
+part is encoded with fixed 0.5 probability.
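[Editor's sketch: the distance-to-slot mapping described in this hunk (position of the most significant bit plus the value of the next bit) and the direct_bits formula can be written out as follows. Illustrative code, not copied from clzip's source:]

```cpp
// The slot encodes the position of the most significant bit of the
// distance and the value of the next bit; direct_bits = (slot >> 1) - 1
// remaining bits complete the distance.
int get_slot( const unsigned dis )
  {
  if( dis < 4 ) return dis;              // positions 0 and 1 have no next bit
  int msb = 31; while( ( dis >> msb ) == 0 ) --msb;      // MSB position
  return ( msb << 1 ) | ( ( dis >> ( msb - 1 ) ) & 1 );  // append next bit
  }

int direct_bits( const int slot ) { return ( slot >> 1 ) - 1; }
```

[For example, distances 4 to 127 map to slots 4 to 13, and distance 128 is the first to reach slot 14 with 6 direct_bits, matching the ranges given above.]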
@multitable @columnfractions .5 .5
@headitem Bit sequence @tab Description
@@ -749,9 +749,8 @@ The indices used in these arrays are:
@table @samp
@item state
-A state machine (@samp{State} in the source) with 12 states (0 to 11),
-coding the latest 2 to 4 types of sequences processed. The initial state
-is 0.
+A state machine (@samp{State} in the source) with 12 states (0 to 11) coding
+the latest 2 to 4 types of sequences processed. The initial state is 0.
@item pos_state
Value of the 2 least significant bits of the current position in the
@@ -883,10 +882,10 @@ reviewed carefully and is believed to be free from design errors.
@section Format design
-When gzip was designed in 1992, computers and operating systems were much
-less capable than they are today. The designers of gzip tried to work around
-some of those limitations, like 8.3 file names, with additional fields in
-the file format.
+When gzip was designed in 1992, computers and operating systems were less
+capable than they are today. The designers of gzip tried to work around some
+of those limitations, like 8.3 file names, with additional fields in the
+file format.
Today those limitations have mostly disappeared, and the format of gzip has
proved to be unnecessarily complicated. It includes fields that were never
@@ -894,7 +893,8 @@ used, others that have lost their usefulness, and finally others that have
become too limited.
Bzip2 was designed 5 years later, and its format is simpler than the one of
-gzip.
+gzip. Both gzip and bzip2 lack the fields required to implement a reliable
+and efficient @option{--list} operation.
Probably the worst defect of the gzip format from the point of view of data
safety is the variable size of its header. If the byte at offset 3 (flags)
@@ -916,22 +916,22 @@ lzip format is extraordinarily safe. The simple and safe design of the file
format complements the embedded error detection provided by the LZMA data
stream. Any distance larger than the dictionary size acts as a forbidden
symbol, allowing the decompressor to detect the approximate position of
-errors, and leaving very little work for the check sequence (CRC and data
-sizes) in the detection of errors. Lzip is usually able to detect all
-possible bit flips in the compressed data without resorting to the check
-sequence. It would be difficult to write an automatic recovery tool like
-lziprecover for the gzip format. And, as far as I know, it has never been
-written.
+errors, and leaving little work for the check sequence (CRC and data sizes)
+in the detection of errors. Lzip is usually able to detect all possible bit
+flips in the compressed data without resorting to the check sequence. It
+would be difficult to write an automatic recovery tool like lziprecover for
+the gzip format. And, as far as I know, it has never been written.
Lzip, like gzip and bzip2, uses a CRC32 to check the integrity of the
decompressed data because it provides optimal accuracy in the detection of
errors up to a compressed size of about @w{16 GiB}, a size larger than that
of most files. In the case of lzip, the additional detection capability of
-the decompressor reduces the probability of undetected errors several
+the decompressor reduces the probability of undetected errors about 50
million times more, resulting in a combined integrity checking optimally
-accurate for any member size produced by lzip. Preliminary results suggest
-that the lzip format is safe enough to be used in critical safety avionics
-systems.
+accurate for any member size produced by lzip. Moreover, a CRC is better
+than a hash of the same size for detection of errors in lzip files because
+the decompressor catches almost all the large errors, while the CRC
+guarantees the detection of the small errors (which the hash does not).
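[Editor's sketch: the CRC discussed in this hunk is the standard CRC-32 also used by gzip (polynomial 0xEDB88320 in reflected form). A minimal bitwise version for reference; real implementations are table-driven for speed:]

```cpp
#include <cstddef>
#include <cstdint>

// Bitwise CRC-32 (IEEE polynomial 0xEDB88320, reflected), as used by
// gzip and lzip over the decompressed data. Clarity over speed.
uint32_t crc32( const uint8_t * const buf, const std::size_t size )
  {
  uint32_t crc = 0xFFFFFFFFU;
  for( std::size_t i = 0; i < size; ++i )
    {
    crc ^= buf[i];
    for( int k = 0; k < 8; ++k )
      crc = ( crc >> 1 ) ^ ( 0xEDB88320U & ( 0U - ( crc & 1 ) ) );
    }
  return ~crc;
  }
```

[The standard check value for this CRC is 0xCBF43926 for the input "123456789".]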
The lzip format is designed for long-term archiving. Therefore it excludes
any unneeded features that may interfere with the future extraction of the
@@ -943,10 +943,9 @@ decompressed data.
@item Multiple algorithms
Gzip provides a CM (Compression Method) field that has never been used
-because it is a bad idea to begin with. New compression methods may require
-additional fields, making it impossible to implement new methods and, at the
-same time, keep the same format. This field does not solve the problem of
-format proliferation; it just makes the problem less obvious.
+because it is too limiting. New compression methods may require additional
+fields, making it impossible to implement new methods and, at the same time,
+keep the same format.
@item Optional fields in header
@@ -959,12 +958,11 @@ compressed blocks.
@item Optional CRC for the header
-Using an optional CRC for the header is not only a bad idea, it is an error;
-it circumvents the Hamming distance (HD) of the CRC and may prevent the
-extraction of perfectly good data. For example, if the CRC is used and the
-bit enabling it is reset by a bit flip, then the header seems to be intact
-(in spite of being corrupt) while the compressed blocks seem to be totally
-unrecoverable (in spite of being intact). Very misleading indeed.
+Using an optional CRC for the header circumvents the Hamming distance (HD)
+of the CRC and may prevent the extraction of good data. For example, if the
+CRC is used and the bit enabling it is reset by a bit flip, then the header
+seems to be intact (in spite of being corrupt) while the compressed blocks
+seem to be unrecoverable (in spite of being intact).
@item Metadata
@@ -994,8 +992,8 @@ size.
@item Distributed index
The lzip format provides a distributed index that, among other things, helps
-plzip to decompress several times faster than pigz and helps lziprecover do
-its job. Neither the gzip format nor the bzip2 format do provide an index.
+plzip to decompress faster than pigz and helps lziprecover do its job.
+Neither the gzip format nor the bzip2 format do provide an index.
A distributed index is safer and more scalable than a monolithic index. The
monolithic index introduces a single point of failure in the compressed file
@@ -1029,7 +1027,7 @@ errors.
Three related but independent compressor implementations, lzip, clzip, and
minilzip/lzlib, are developed concurrently. Every stable release of any of
them is tested to check that it produces identical output to the other two.
-This guarantees that all three implement the same algorithm, and makes it
+This corroborates that all three implement the same algorithm, and makes it
unlikely that any of them may contain serious undiscovered errors. In fact,
no errors have been discovered in lzip since 2009.
@@ -1322,7 +1320,7 @@ find by running @w{@samp{clzip --version}}.
@verbatim
/* Lzd - Educational decompressor for the lzip format
- Copyright (C) 2013-2024 Antonio Diaz Diaz.
+ Copyright (C) 2013-2025 Antonio Diaz Diaz.
This program is free software. Redistribution and use in source and
binary forms, with or without modification, are permitted provided
@@ -1373,9 +1371,9 @@ public:
const int next[states] = { 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 4, 5 };
st = next[st];
}
- void set_match() { st = ( st < 7 ) ? 7 : 10; }
- void set_rep() { st = ( st < 7 ) ? 8 : 11; }
- void set_short_rep() { st = ( st < 7 ) ? 9 : 11; }
+ void set_match() { st = ( st < 7 ) ? 7 : 10; }
+ void set_rep() { st = ( st < 7 ) ? 8 : 11; }
+ void set_shortrep() { st = ( st < 7 ) ? 9 : 11; }
};
@@ -1679,7 +1677,7 @@ bool LZ_decoder::decode_member() // Return false if error
if( rdec.decode_bit( bm_rep0[state()] ) == 0 ) // 3rd bit
{
if( rdec.decode_bit( bm_len[state()][pos_state] ) == 0 ) // 4th bit
- { state.set_short_rep(); put_byte( peek( rep0 ) ); continue; }
+ { state.set_shortrep(); put_byte( peek( rep0 ) ); continue; }
}
else
{
@@ -1746,7 +1744,7 @@ int main( const int argc, const char * const argv[] )
"See the lzip manual for an explanation of the code.\n"
"\nUsage: %s [-d] < file.lz > file\n"
"Lzd decompresses from standard input to standard output.\n"
- "\nCopyright (C) 2024 Antonio Diaz Diaz.\n"
+ "\nCopyright (C) 2025 Antonio Diaz Diaz.\n"
"License 2-clause BSD.\n"
"This is free software: you are free to change and redistribute "
"it.\nThere is NO WARRANTY, to the extent permitted by law.\n"