author    | Daniel Baumann <mail@daniel-baumann.ch> | 2015-11-07 10:03:36 +0000
committer | Daniel Baumann <mail@daniel-baumann.ch> | 2015-11-07 10:03:36 +0000
commit    | e43c45c952bb5d273724bfc6dd69e4c2de1aa190 (patch)
tree      | bca9bb3d2f16f3edb5ab0d6f6b209be3f4fabb92 /doc
parent    | Adding upstream version 1.16~pre1. (diff)
download  | lzip-e43c45c952bb5d273724bfc6dd69e4c2de1aa190.tar.xz, lzip-e43c45c952bb5d273724bfc6dd69e4c2de1aa190.zip
Adding upstream version 1.16~pre2. (upstream/1.16_pre2)
Signed-off-by: Daniel Baumann <mail@daniel-baumann.ch>
Diffstat (limited to 'doc')
-rw-r--r-- | doc/lzip.1    |  10
-rw-r--r-- | doc/lzip.info | 124
-rw-r--r-- | doc/lzip.texi | 111
3 files changed, 135 insertions, 110 deletions
@@ -1,7 +1,7 @@
 .\" DO NOT MODIFY THIS FILE! It was generated by help2man 1.37.1.
-.TH LZIP "1" "January 2014" "Lzip 1.16-pre1" "User Commands"
+.TH LZIP "1" "May 2014" "lzip 1.16-pre2" "User Commands"
 .SH NAME
-Lzip \- reduces the size of files
+lzip \- reduces the size of files
 .SH SYNOPSIS
 .B lzip
 [\fIoptions\fR] [\fIfiles\fR]
@@ -89,13 +89,13 @@ This is free software: you are free to change and redistribute it.
 There is NO WARRANTY, to the extent permitted by law.
 .SH "SEE ALSO"
 The full documentation for
-.B Lzip
+.B lzip
 is maintained as a Texinfo manual. If the
 .B info
 and
-.B Lzip
+.B lzip
 programs are properly installed at your site, the command
 .IP
-.B info Lzip
+.B info lzip
 .PP
 should give you access to the complete manual.
diff --git a/doc/lzip.info b/doc/lzip.info
index 609da4f..244d120 100644
--- a/doc/lzip.info
+++ b/doc/lzip.info
@@ -11,7 +11,7 @@ File: lzip.info, Node: Top, Next: Introduction, Up: (dir)
 Lzip Manual
 ***********
-This manual is for Lzip (version 1.16-pre1, 11 January 2014).
+This manual is for Lzip (version 1.16-pre2, 16 May 2014).
 * Menu:
@@ -41,15 +41,27 @@ File: lzip.info, Node: Introduction, Next: Algorithm, Prev: Top, Up: Top
 Lzip is a lossless data compressor with a user interface similar to the one of gzip or bzip2. Lzip is about as fast as gzip, compresses most files more than bzip2, and is better than both from a data recovery
-perspective. Lzip is a clean implementation of the LZMA algorithm.
+perspective. Lzip is a clean implementation of the LZMA
+(Lempel-Ziv-Markov chain-Algorithm) algorithm.
- The lzip file format is designed for long-term data archiving and
-provides very safe integrity checking. It is as simple as possible (but
-not simpler), so that with the only help of the lzip manual it would be
-possible for a digital archaeologist to extract the data from a lzip
-file long after quantum computers eventually render LZMA obsolete.
-Additionally lzip is copylefted, which guarantees that it will remain
-free forever.
+ The lzip file format is designed for long-term data archiving, taking
+into account both data integrity and decoder availability:
+
+ * The lzip format provides very safe integrity checking and some data
+ recovery means. The lziprecover program can repair bit-flip errors
+ (one of the most common forms of data corruption) in lzip files,
+ and provides data recovery capabilities, including error-checked
+ merging of damaged copies of a file.
+
+ * The lzip format is as simple as possible (but not simpler). The
+ lzip manual provides the code of a simple decompressor along with
+ a detailed explanation of how it works, so that with the only help
+ of the lzip manual it would be possible for a digital
+ archaeologist to extract the data from a lzip file long after
+ quantum computers eventually render LZMA obsolete.
+
+ * Additionally lzip is copylefted, which guarantees that it will
+ remain free forever.
 The member trailer stores the 32-bit CRC of the original data, the size of the original data and the size of the member. These values,
@@ -63,16 +75,22 @@ though, that the check occurs upon decompression, so it can only tell
 you that something is wrong. It can't help you recover the original uncompressed data.
- If you ever need to recover data from a damaged lzip file, try the
-lziprecover program. Lziprecover makes lzip files resistant to bit-flip
-(one of the most common forms of data corruption), and provides data
-recovery capabilities, including error-checked merging of damaged copies
-of a file.
-
 Lzip uses the same well-defined exit status values used by bzip2, which makes it safer than compressors returning ambiguous warning values (like gzip) when it is used as a back end for tar or zutils.
+ The amount of memory required for compression is about 1 or 2 times
+the dictionary size limit (1 if input file size is less than dictionary
+size limit, else 2) plus 9 times the dictionary size really used. The
+option '-0' is special and only requires about 1.5 MiB at most. The
+amount of memory required for decompression is about 46 kB larger than
+the dictionary size really used.
+
+ Lzip will automatically use the smallest possible dictionary size for
+each file without exceeding the given limit. Keep in mind that the
+decompression memory requirement is affected at compression time by the
+choice of dictionary size limit.
+
 When compressing, lzip replaces every file given in the command line with a compressed version of itself, with the name "original_name.lz". When decompressing, lzip attempts to guess the name for the decompressed
@@ -111,31 +129,28 @@ multivolume compressed tar archives.
 automatically creating multi-member output. The members so created are large, about 64 PiB each.
- The amount of memory required for compression is about 1 or 2 times
-the dictionary size limit (1 if input file size is less than dictionary
-size limit, else 2) plus 9 times the dictionary size really used. The
-option '-0' is special and only requires about 1.5 MiB at most. The
-amount of memory required for decompression is about 46 kB larger than
-the dictionary size really used.
-
- Lzip will automatically use the smallest possible dictionary size
-without exceeding the given limit. Keep in mind that the decompression
-memory requirement is affected at compression time by the choice of
-dictionary size limit.
-
 File: lzip.info, Node: Algorithm, Next: Invoking lzip, Prev: Introduction, Up: Top
 2 Algorithm
 ***********
-Lzip implements a simplified version of the LZMA (Lempel-Ziv-Markov
-chain-Algorithm) algorithm. The high compression of LZMA comes from
-combining two basic, well-proven compression ideas: sliding dictionaries
-(LZ77/78) and markov models (the thing used by every compression
-algorithm that uses a range encoder or similar order-0 entropy coder as
-its last stage) with segregation of contexts according to what the bits
-are used for.
+There is no such thing as a "LZMA algorithm"; it is more like a "LZMA
+coding scheme". For example, the option '-0' of lzip uses the scheme in
+almost the simplest way possible; issuing the longest match it can find,
+or a literal byte if it can't find a match. Inversely, a much more
+elaborated way of finding coding sequences of minimum price than the one
+currently used by lzip could be developed, and the resulting sequence
+could also be coded using the LZMA coding scheme.
+
+ Lzip currently implements two variants of the LZMA algorithm; fast
+(used by option -0) and normal (used by all other compression levels).
+
+ The high compression of LZMA comes from combining two basic,
+well-proven compression ideas: sliding dictionaries (LZ77/78) and
+markov models (the thing used by every compression algorithm that uses
+a range encoder or similar order-0 entropy coder as its last stage)
+with segregation of contexts according to what the bits are used for.
 Lzip is a two stage compressor. The first stage is a Lempel-Ziv coder, which reduces redundancy by translating chunks of data to their
@@ -143,11 +158,6 @@ corresponding distance-length pairs.
 The second stage is a range encoder that uses a different probability model for each type of data; distances, lengths, literal bytes, etc.
- The match finder, part of the LZ coder, is the most important piece
-of the LZMA algorithm, as it is in many Lempel-Ziv based algorithms.
-Most of lzip's execution time is spent in the match finder, and it has
-the greatest influence on the compression ratio.
-
 Here is how it works, step by step:
 1) The member header is written to the output stream.
@@ -259,7 +269,7 @@ The format for running lzip is:
 '--dictionary-size=BYTES'
 Set the dictionary size limit in bytes. Valid values range from 4 KiB to 512 MiB. Lzip will use the smallest possible dictionary
- size for each member without exceeding this limit. Note that
+ size for each file without exceeding this limit. Note that
 dictionary sizes are quantized. If the specified size does not match one of the valid sizes, it will be rounded upwards by adding up to (BYTES / 16) to it.
@@ -450,7 +460,7 @@ that could be individually described.
 It seems that the only way of describing the LZMA-302eos stream is describing the algorithm that decodes it. And given the many details about the range decoder that need to be described accurately, the source
-code of a real decoder seems the only appropiate reference to use.
+code of a real decoder seems the only appropriate reference to use.
 What follows is a description of the decoding algorithm for LZMA-302eos streams using as reference the source code of "lzd", an
@@ -609,7 +619,7 @@ The LZMA stream is consumed one byte at a time by the range decoder.
 variable number of decoded bits, depending on how well these bits agree with their context. (See 'decode_bit' in the source).
- The range decoder state consists of two unsigned 32-bit variables,
+ The range decoder state consists of two unsigned 32-bit variables;
 'range' (representing the most significant part of the range size not yet decoded), and 'code' (representing the current point within 'range'). 'range' is initialized to (2^32 - 1), and 'code' is
@@ -627,7 +637,7 @@ source).
 After decoding the member header and obtaining the dictionary size, the range decoder is initialized and then the LZMA decoder enters a loop (See 'decode_member' in the source) where it invokes the range decoder
-with the appropiate contexts to decode the different coding sequences
+with the appropriate contexts to decode the different coding sequences
 (matches, repeated matches, and literal bytes), until the "End Of Stream" marker is decoded.
@@ -1005,7 +1015,7 @@ void LZ_decoder::flush_data()
     crc32.update_buf( crc_, buffer + stream_pos, size );
     errno = 0;
     if( std::fwrite( buffer + stream_pos, 1, size, stdout ) != size )
-      { std::fprintf( stderr, "Write error: %s\n", std::strerror( errno ) );
+      { std::fprintf( stderr, "Write error: %s.\n", std::strerror( errno ) );
        std::exit( 1 ); }
     if( pos >= dictionary_size ) { partial_data_pos += pos; pos = 0; }
     stream_pos = pos;
@@ -1104,12 +1114,12 @@ bool LZ_decoder::decode_member()	// Returns false if error
           }
         state.set_match();
         if( rep0 >= dictionary_size || rep0 >= data_position() )
-          return false;
+          { flush_data(); return false; }
         }
-      for( int i = 0; i < len; ++i )
-        put_byte( get_byte( rep0 ) );
+      for( int i = 0; i < len; ++i ) put_byte( get_byte( rep0 ) );
       }
     }
+  flush_data();
   return false;
   }
@@ -1171,7 +1181,7 @@ int main( const int argc, const char * const argv[] )
     }
   if( std::fclose( stdout ) != 0 )
-    { std::fprintf( stderr, "Can't close stdout: %s\n", std::strerror( errno ) );
+    { std::fprintf( stderr, "Can't close stdout: %s.\n", std::strerror( errno ) );
      return 1; }
   return 0;
   }
@@ -1202,15 +1212,15 @@ Concept index
 Tag Table:
 Node: Top208
-Node: Introduction1059
-Node: Algorithm5466
-Node: Invoking lzip7960
-Node: File format13638
-Node: Stream format16186
-Node: Examples25616
-Node: Problems27573
-Node: Reference source code28103
-Node: Concept index41592
+Node: Introduction1055
+Node: Algorithm5733
+Node: Invoking lzip8491
+Node: File format14167
+Node: Stream format16715
+Node: Examples26147
+Node: Problems28104
+Node: Reference source code28634
+Node: Concept index42151
 End Tag Table
diff --git a/doc/lzip.texi b/doc/lzip.texi
index 957af34..698526a 100644
--- a/doc/lzip.texi
+++ b/doc/lzip.texi
@@ -6,8 +6,8 @@
 @finalout
 @c %**end of header
-@set UPDATED 11 January 2014
-@set VERSION 1.16-pre1
+@set UPDATED 16 May 2014
+@set VERSION 1.16-pre2
 @dircategory Data Compression
 @direntry
@@ -61,15 +61,32 @@ to copy, distribute and modify it.
 Lzip is a lossless data compressor with a user interface similar to the one of gzip or bzip2. Lzip is about as fast as gzip, compresses most files more than bzip2, and is better than both from a data recovery
-perspective. Lzip is a clean implementation of the LZMA algorithm.
-
-The lzip file format is designed for long-term data archiving and
-provides very safe integrity checking. It is as simple as possible (but
-not simpler), so that with the only help of the lzip manual it would be
-possible for a digital archaeologist to extract the data from a lzip
-file long after quantum computers eventually render LZMA obsolete.
+perspective. Lzip is a clean implementation of the LZMA
+(Lempel-Ziv-Markov chain-Algorithm) algorithm.
+
+The lzip file format is designed for long-term data archiving, taking
+into account both data integrity and decoder availability:
+
+@itemize @bullet
+@item
+The lzip format provides very safe integrity checking and some data
+recovery means. The lziprecover program can repair bit-flip errors (one
+of the most common forms of data corruption) in lzip files, and provides
+data recovery capabilities, including error-checked merging of damaged
+copies of a file.
+
+@item
+The lzip format is as simple as possible (but not simpler). The lzip
+manual provides the code of a simple decompressor along with a detailed
+explanation of how it works, so that with the only help of the lzip
+manual it would be possible for a digital archaeologist to extract the
+data from a lzip file long after quantum computers eventually render
+LZMA obsolete.
+
+@item
 Additionally lzip is copylefted, which guarantees that it will remain free forever.
+@end itemize
 The member trailer stores the 32-bit CRC of the original data, the size of the original data and the size of the member. These values, together
@@ -82,16 +99,22 @@ going undetected are microscopic. Be aware, though, that the check
 occurs upon decompression, so it can only tell you that something is wrong. It can't help you recover the original uncompressed data.
-If you ever need to recover data from a damaged lzip file, try the
-lziprecover program. Lziprecover makes lzip files resistant to bit-flip
-(one of the most common forms of data corruption), and provides data
-recovery capabilities, including error-checked merging of damaged copies
-of a file.
-
 Lzip uses the same well-defined exit status values used by bzip2, which makes it safer than compressors returning ambiguous warning values (like gzip) when it is used as a back end for tar or zutils.
+The amount of memory required for compression is about 1 or 2 times the
+dictionary size limit (1 if input file size is less than dictionary size
+limit, else 2) plus 9 times the dictionary size really used. The option
+@samp{-0} is special and only requires about 1.5 MiB at most. The amount
+of memory required for decompression is about 46 kB larger than the
+dictionary size really used.
+
+Lzip will automatically use the smallest possible dictionary size for
+each file without exceeding the given limit. Keep in mind that the
+decompression memory requirement is affected at compression time by the
+choice of dictionary size limit.
+
 When compressing, lzip replaces every file given in the command line with a compressed version of itself, with the name "original_name.lz". When decompressing, lzip attempts to guess the name for the decompressed
@@ -132,30 +155,27 @@ Lzip is able to compress and decompress streams of unlimited size by
 automatically creating multi-member output. The members so created are large, about 64 PiB each.
-The amount of memory required for compression is about 1 or 2 times the
-dictionary size limit (1 if input file size is less than dictionary size
-limit, else 2) plus 9 times the dictionary size really used. The option
-@samp{-0} is special and only requires about 1.5 MiB at most. The amount
-of memory required for decompression is about 46 kB larger than the
-dictionary size really used.
-
-Lzip will automatically use the smallest possible dictionary size
-without exceeding the given limit. Keep in mind that the decompression
-memory requirement is affected at compression time by the choice of
-dictionary size limit.
-
 @node Algorithm
 @chapter Algorithm
 @cindex algorithm
-Lzip implements a simplified version of the LZMA (Lempel-Ziv-Markov
-chain-Algorithm) algorithm. The high compression of LZMA comes from
-combining two basic, well-proven compression ideas: sliding dictionaries
-(LZ77/78) and markov models (the thing used by every compression
-algorithm that uses a range encoder or similar order-0 entropy coder as
-its last stage) with segregation of contexts according to what the bits
-are used for.
+There is no such thing as a "LZMA algorithm"; it is more like a "LZMA
+coding scheme". For example, the option '-0' of lzip uses the scheme in
+almost the simplest way possible; issuing the longest match it can find,
+or a literal byte if it can't find a match. Inversely, a much more
+elaborated way of finding coding sequences of minimum price than the one
+currently used by lzip could be developed, and the resulting sequence
+could also be coded using the LZMA coding scheme.
+
+Lzip currently implements two variants of the LZMA algorithm; fast (used
+by option -0) and normal (used by all other compression levels).
+
+The high compression of LZMA comes from combining two basic, well-proven
+compression ideas: sliding dictionaries (LZ77/78) and markov models (the
+thing used by every compression algorithm that uses a range encoder or
+similar order-0 entropy coder as its last stage) with segregation of
+contexts according to what the bits are used for.
 Lzip is a two stage compressor. The first stage is a Lempel-Ziv coder, which reduces redundancy by translating chunks of data to their
@@ -163,11 +183,6 @@ corresponding distance-length pairs.
 The second stage is a range encoder that uses a different probability model for each type of data; distances, lengths, literal bytes, etc.
-The match finder, part of the LZ coder, is the most important piece of
-the LZMA algorithm, as it is in many Lempel-Ziv based algorithms. Most
-of lzip's execution time is spent in the match finder, and it has the
-greatest influence on the compression ratio.
-
 Here is how it works, step by step:
 1) The member header is written to the output stream.
@@ -282,7 +297,7 @@ Quiet operation. Suppress all messages.
 @itemx --dictionary-size=@var{bytes}
 Set the dictionary size limit in bytes. Valid values range from 4 KiB to 512 MiB. Lzip will use the smallest possible dictionary size for each
-member without exceeding this limit. Note that dictionary sizes are
+file without exceeding this limit. Note that dictionary sizes are
 quantized. If the specified size does not match one of the valid sizes, it will be rounded upwards by adding up to (@var{bytes} / 16) to it.
@@ -478,7 +493,7 @@ that could be individually described.
 It seems that the only way of describing the LZMA-302eos stream is describing the algorithm that decodes it. And given the many details about the range decoder that need to be described accurately, the source
-code of a real decoder seems the only appropiate reference to use.
+code of a real decoder seems the only appropriate reference to use.
 What follows is a description of the decoding algorithm for LZMA-302eos streams using as reference the source code of "lzd", an educational
@@ -648,7 +663,7 @@ The LZMA stream is consumed one byte at a time by the range decoder.
 variable number of decoded bits, depending on how well these bits agree with their context. (See @samp{decode_bit} in the source).
-The range decoder state consists of two unsigned 32-bit variables,
+The range decoder state consists of two unsigned 32-bit variables;
 @code{range} (representing the most significant part of the range size not yet decoded), and @code{code} (representing the current point within @code{range}). @code{range} is initialized to (2^32 - 1), and
@@ -665,7 +680,7 @@ the source).
 After decoding the member header and obtaining the dictionary size, the range decoder is initialized and then the LZMA decoder enters a loop (See @samp{decode_member} in the source) where it invokes the range
-decoder with the appropiate contexts to decode the different coding
+decoder with the appropriate contexts to decode the different coding
 sequences (matches, repeated matches, and literal bytes), until the "End Of Stream" marker is decoded.
@@ -1073,7 +1088,7 @@ void LZ_decoder::flush_data()
     crc32.update_buf( crc_, buffer + stream_pos, size );
     errno = 0;
     if( std::fwrite( buffer + stream_pos, 1, size, stdout ) != size )
-      { std::fprintf( stderr, "Write error: %s\n", std::strerror( errno ) );
+      { std::fprintf( stderr, "Write error: %s.\n", std::strerror( errno ) );
        std::exit( 1 ); }
     if( pos >= dictionary_size ) { partial_data_pos += pos; pos = 0; }
     stream_pos = pos;
@@ -1172,12 +1187,12 @@ bool LZ_decoder::decode_member()	// Returns false if error
           }
         state.set_match();
         if( rep0 >= dictionary_size || rep0 >= data_position() )
-          return false;
+          { flush_data(); return false; }
         }
-      for( int i = 0; i < len; ++i )
-        put_byte( get_byte( rep0 ) );
+      for( int i = 0; i < len; ++i ) put_byte( get_byte( rep0 ) );
       }
     }
+  flush_data();
   return false;
   }
@@ -1239,7 +1254,7 @@ int main( const int argc, const char * const argv[] )
     }
   if( std::fclose( stdout ) != 0 )
-    { std::fprintf( stderr, "Can't close stdout: %s\n", std::strerror( errno ) );
+    { std::fprintf( stderr, "Can't close stdout: %s.\n", std::strerror( errno ) );
      return 1; }
   return 0;
   }
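
The paragraphs added by this patch state two practical rules: compression needs roughly one or two times the dictionary size limit plus nine times the dictionary size really used (decompression needs about 46 kB more than the dictionary really used), and a requested dictionary size is quantized, being rounded upwards by a fraction of itself. The C++ sketch below illustrates both rules; it is not lzip source code, the function names are invented for the example, and the quantization grid (powers of two minus 1/16 wedges) is an assumption chosen to be consistent with the rounding rule quoted above.

#include <cstdio>

// Round a requested dictionary size up to a size of the form
// (2^n - k * 2^(n-4), k = 0..7), one grid compatible with the quantization
// rule quoted above. Names and constants are illustrative only.
unsigned round_up_dictionary_size( const unsigned size )
  {
  const unsigned min_size = 1U << 12;           // 4 KiB
  const unsigned max_size = 1U << 29;           // 512 MiB
  if( size <= min_size ) return min_size;
  if( size >= max_size ) return max_size;
  unsigned base = min_size;                     // smallest power of two >= size
  while( base < size ) base <<= 1;
  const unsigned wedge = base >> 4;             // 1/16 of the base size
  unsigned valid = base;
  while( valid - wedge >= size ) valid -= wedge;  // step down in 1/16 wedges
  return valid;                                 // never smaller than 'size'
  }

// Rough memory estimates, transcribing the figures quoted in the manual text.
unsigned long long compression_memory( const unsigned long long file_size,
                                       const unsigned dict_limit,
                                       const unsigned dict_used )
  {
  const unsigned factor = ( file_size < dict_limit ) ? 1 : 2;  // 1 or 2 times the limit
  return (unsigned long long)factor * dict_limit + 9ULL * dict_used;
  }

unsigned long long decompression_memory( const unsigned dict_used )
  { return dict_used + 46000ULL; }              // "about 46 kB" larger than the dictionary

int main()
  {
  const unsigned requested = 9961472;           // 9.5 MiB
  const unsigned used = round_up_dictionary_size( requested );
  std::printf( "requested %u bytes, quantized to %u bytes\n", requested, used );
  // For the example, the limit and the size really used are taken as equal.
  std::printf( "compressing a 100 MiB file: ~%llu bytes\n",
               compression_memory( 100ULL << 20, used, used ) );
  std::printf( "decompression: ~%llu bytes\n", decompression_memory( used ) );
  return 0;
  }

Under these assumptions, a request of 9.5 MiB is rounded up to 10 MiB, the nearest larger size on that grid.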
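
The stream-format hunks describe the range decoder only in prose: two unsigned 32-bit variables, 'range' initialized to (2^32 - 1) and 'code' to 0, with 'decode_bit' consuming a variable amount of input per decoded bit. The following sketch shows what such a decode_bit step can look like; it is a simplified stand-in rather than the decoder from lzd, and the constants (11-bit probabilities, move shift of 5, renormalization below 2^24) are assumptions based on common LZMA practice.

#include <cstdint>
#include <cstdio>

// Minimal sketch of an LZMA-style binary range decoder step as described in
// the manual text above. Class and member names mirror the prose, not lzd.
class Range_decoder
  {
  const uint8_t * const in;
  int pos, size;
  uint32_t range, code;

  uint8_t get_byte() { return ( pos < size ) ? in[pos++] : 0xFF; }

public:
  Range_decoder( const uint8_t * const buf, const int len )
    : in( buf ), pos( 0 ), size( len ), range( 0xFFFFFFFFU ), code( 0 )
    {
    get_byte();                                 // the first stream byte is skipped
    for( int i = 0; i < 4; ++i ) code = ( code << 8 ) | get_byte();
    }

  int decode_bit( int & probability )           // 'probability' is an adaptive 11-bit model
    {
    if( range < ( 1U << 24 ) )                  // renormalize: shift in one more input byte
      { range <<= 8; code = ( code << 8 ) | get_byte(); }
    const uint32_t bound = ( range >> 11 ) * probability;
    if( code < bound )                          // the more probable the bit, the less 'range' shrinks
      { range = bound; probability += ( 2048 - probability ) >> 5; return 0; }
    range -= bound; code -= bound;
    probability -= probability >> 5;
    return 1;
    }
  };

int main()
  {
  const uint8_t dummy[8] = { 0, 0x3F, 0x51, 0xB5, 0x24, 0x80, 0x00, 0x00 };
  Range_decoder rdec( dummy, 8 );
  int prob = 1024;                              // probabilities start at 50% (1024 of 2048)
  for( int i = 0; i < 5; ++i ) std::printf( "%d", rdec.decode_bit( prob ) );
  std::printf( "\n" );
  return 0;
  }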