\input texinfo   @c -*-texinfo-*-
@c %**start of header
@setfilename lzip.info
@settitle Lzip
@finalout
@c %**end of header

@set UPDATED 2 September 2009
@set VERSION 1.8

@dircategory Data Compression
@direntry
* Lzip: (lzip).               Data compressor based on the LZMA algorithm
@end direntry


@titlepage
@title Lzip
@subtitle A data compressor based on the LZMA algorithm
@subtitle for Lzip version @value{VERSION}, @value{UPDATED}
@author by Antonio Diaz Diaz

@page
@vskip 0pt plus 1filll
@end titlepage

@contents

@node Top
@top

This manual is for Lzip (version @value{VERSION}, @value{UPDATED}).

@menu
* Introduction::        Purpose and features of lzip
* Algorithm::           How lzip compresses the data
* Invoking Lzip::       Command line interface
* File Format::         Detailed format of the compressed file
* Examples::            A small tutorial with examples
* Lziprecover::         Recovering data from damaged compressed files
* Problems::            Reporting bugs
* Concept Index::       Index of concepts
@end menu

@sp 1
Copyright @copyright{} 2008, 2009 Antonio Diaz Diaz.

This manual is free documentation: you have unlimited permission to copy, distribute and modify it.


@node Introduction
@chapter Introduction
@cindex introduction

Lzip is a lossless data compressor based on the LZMA algorithm, with very safe integrity checking and a user interface similar to that of gzip or bzip2. Lzip decompresses almost as fast as gzip and compresses better than bzip2, which makes it well suited for software distribution and data archiving.

Lzip replaces every file given in the command line with a compressed version of itself, with the name "original_name.lz". Each compressed file has the same modification date, permissions, and, when possible, ownership as the corresponding original, so that these properties can be correctly restored at decompression time. Lzip is able to read from some types of non-regular files if the @samp{--stdout} option is specified.

If no file names are specified, lzip compresses (or decompresses) from standard input to standard output. In this case, lzip will decline to write compressed output to a terminal, as this would be entirely incomprehensible and therefore pointless.

Lzip will correctly decompress a file which is the concatenation of two or more compressed files. The result is the concatenation of the corresponding uncompressed files. Integrity testing of concatenated compressed files is also supported.

Lzip can produce multimember files, whose undamaged members can be safely recovered with lziprecover in case of file damage. Lzip can also split the compressed output into volumes of a given size, even when reading from standard input. This allows the direct creation of multivolume compressed tar archives.

The amount of memory required for compression is about 5 MiB, plus 1 or 2 times the dictionary size limit (1 if the input file size is less than the dictionary size limit, else 2), plus 8 times the dictionary size really used. For decompression it is a little more than the dictionary size really used. Lzip will automatically use the smallest possible dictionary size without exceeding the given limit. It is important to appreciate that the decompression memory requirement is affected at compression time by the choice of dictionary size limit.
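
To make the arithmetic above concrete, here is a small, purely illustrative C sketch (not part of lzip) that applies the estimate to hypothetical figures: a 100 MiB input file compressed with a 32 MiB dictionary size limit, all of which is really used.

@verbatim
/* Illustrative only: rough lzip memory estimate, in MiB, using the
   figures given above.  The sizes chosen here are hypothetical. */
#include <stdio.h>

int main( void )
  {
  const double dict_limit = 32;   /* dictionary size limit in MiB */
  const double dict_used  = 32;   /* dictionary size really used, in MiB */
  const double input_size = 100;  /* input file size in MiB */

  /* 1 if the input is smaller than the dictionary size limit, else 2 */
  const int factor = ( input_size < dict_limit ) ? 1 : 2;

  const double compression_mem = 5 + factor * dict_limit + 8 * dict_used;
  printf( "compression:   about %g MiB\n", compression_mem );
  printf( "decompression: a little more than %g MiB\n", dict_used );
  return 0;
  }
@end verbatim

With these example figures the estimate comes to about 325 MiB for compression and a little more than 32 MiB for decompression.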

When decompressing, lzip attempts to guess the name for the decompressed file from that of the compressed file as follows:

@multitable {anyothername} {becomes} {anyothername.out}
@item filename.lz   @tab becomes @tab filename
@item filename.tlz  @tab becomes @tab filename.tar
@item anyothername  @tab becomes @tab anyothername.out
@end multitable

As a self-check for your protection, lzip stores in the member trailer the 32-bit CRC of the original data and the size of the original data, to make sure that the decompressed version of the data is identical to the original. This guards against corruption of the compressed data, and against undetected bugs in lzip (hopefully very unlikely). The chances of data corruption going undetected are microscopic, less than one chance in 4000 million for each member processed. Be aware, though, that the check occurs upon decompression, so it can only tell you that something is wrong. It can't help you recover the original uncompressed data.

Return values: 0 for a normal exit, 1 for environmental problems (file not found, invalid flags, I/O errors, etc), 2 to indicate a corrupt or invalid input file, 3 for an internal consistency error (e.g., a bug) which caused lzip to panic.


@node Algorithm
@chapter Algorithm
@cindex algorithm

Lzip implements a simplified version of the LZMA (Lempel-Ziv-Markov chain-Algorithm) algorithm. The original LZMA algorithm was designed by Igor Pavlov.

The high compression of LZMA comes from combining two basic, well-proven compression ideas: sliding dictionaries (LZ77/78) and Markov models (the thing used by every compression algorithm that uses a range encoder or similar order-0 entropy coder as its last stage), with segregation of contexts according to what the bits are used for.

Lzip is a two stage compressor. The first stage is a Lempel-Ziv coder, which reduces redundancy by translating chunks of data to their corresponding distance-length pairs. The second stage is a range encoder that uses a different probability model for each type of data: distances, lengths, literal bytes, etc.

The match finder, part of the LZ coder, is the most important piece of the LZMA algorithm, as it is in many Lempel-Ziv based algorithms. Most of lzip's execution time is spent in the match finder, and it has the greatest influence on the compression ratio.

Here is how it works, step by step:

1) The member header is written to the output stream.

2) The first byte is coded literally, because there are no previous bytes to which the match finder can refer.

3) The main encoder advances to the next byte in the input data and calls the match finder.

4) The match finder fills an array with the minimum distances before the current byte where a match of a given length can be found.

5) Go back to step 3 until a sequence (formed of pairs, repeated distances and literal bytes) of minimum price has been formed, where the price represents the number of output bits produced.

6) The range encoder encodes the sequence produced by the main encoder and sends the produced bytes to the output stream.

7) Go back to step 3 until the input data is finished or until the member or volume size limits are reached.

8) The range encoder is flushed.

9) The member trailer is written to the output stream.

10) If there are more data to compress, go back to step 1.
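
As a rough illustration of the kind of output the first (Lempel-Ziv) stage produces, the following toy C program performs a naive greedy search for the longest match before the current position in a small buffer and prints the resulting literals and distance-length pairs. It is a didactic sketch only: lzip's real match finder and price-based optimization are far more elaborate, and the output shown here is not lzip's actual encoding.

@verbatim
/* Toy illustration of the Lempel-Ziv idea: emit literals and
   (distance, length) pairs using a naive greedy search.
   Not lzip's match finder. */
#include <stdio.h>
#include <string.h>

int main( void )
  {
  const char * data = "abcabcabcabcx";
  const int size = (int)strlen( data );
  const int min_match = 2;       /* toy threshold, chosen arbitrarily */

  for( int i = 0; i < size; )
    {
    int best_len = 0, best_dist = 0;
    for( int j = 0; j < i; ++j )            /* try every previous position */
      {
      int len = 0;
      while( i + len < size && data[j+len] == data[i+len] ) ++len;
      if( len > best_len ) { best_len = len; best_dist = i - j; }
      }
    if( best_len >= min_match )
      { printf( "pair( distance %d, length %d )\n", best_dist, best_len );
        i += best_len; }
    else
      { printf( "literal '%c'\n", data[i] ); ++i; }
    }
  return 0;
  }
@end verbatim

For the buffer "abcabcabcabcx" this prints three literals, one pair with distance 3 and length 9 (matches may overlap the current position), and a final literal.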

@node Invoking Lzip
@chapter Invoking Lzip
@cindex invoking
@cindex options
@cindex usage
@cindex version

The format for running lzip is:

@example
lzip [@var{options}] [@var{files}]
@end example

Lzip supports the following options:

@table @samp
@item --help
@itemx -h
Print an informative help message describing the options and exit.

@item --version
@itemx -V
Print the version number of lzip on the standard output and exit.

@item --member-size=@var{size}
@itemx -b @var{size}
Produce a multimember file and set the member size limit to @var{size} bytes. Minimum member size limit is 100kB. Small member size may degrade compression ratio, so use it only when needed. The default is to produce single member files.

@item --stdout
@itemx -c
Compress or decompress to standard output. Needed when reading from a named pipe (fifo) or from a device. Use it to recover as much of the uncompressed data as possible when decompressing a corrupt file.

@item --decompress
@itemx -d
Decompress.

@item --force
@itemx -f
Force overwrite of output file.

@item --keep
@itemx -k
Keep (don't delete) input files during compression or decompression.

@item --match-length=@var{length}
@itemx -m @var{length}
Set the match length limit in bytes. Valid values range from 5 to 273. Larger values usually give better compression ratios but longer compression times.

@item --output=@var{file}
@itemx -o @var{file}
When reading from standard input and @samp{--stdout} has not been specified, use @samp{@var{file}} as the virtual name of the uncompressed file. This produces a file named @samp{@var{file}} when decompressing, a file named @samp{@var{file}.lz} when compressing, and several files named @samp{@var{file}00001.lz}, @samp{@var{file}00002.lz}, etc, when compressing and splitting the output in volumes.

@item --quiet
@itemx -q
Quiet operation. Suppress all messages.

@item --dictionary-size=@var{size}
@itemx -s @var{size}
Set the dictionary size limit in bytes. Valid values range from 4KiB to 512MiB. Lzip will use the smallest possible dictionary size for each member without exceeding this limit. Note that dictionary sizes are quantized. If the specified size does not match one of the valid sizes, it will be rounded upwards.

@item --volume-size=@var{size}
@itemx -S @var{size}
Split the compressed output into several volume files with names @samp{original_name00001.lz}, @samp{original_name00002.lz}, etc, and set the volume size limit to @var{size} bytes. Each volume is a complete, maybe multimember, lzip file. Minimum volume size limit is 100kB. Small volume size may degrade compression ratio, so use it only when needed.

@item --test
@itemx -t
Check integrity of the specified file(s), but don't decompress them. This really performs a trial decompression and throws away the result. Use @samp{-tvv} or @samp{-tvvv} to see information about the file.

@item --verbose
@itemx -v
Verbose mode. Show the compression ratio for each file processed. Further -v's increase the verbosity level.

@item -1 .. -9
Set the compression parameters (dictionary size and match length limit) as shown in the table below. Note that @samp{-9} can be much slower than @samp{-1}. These options have no effect when decompressing.

@multitable {Level} {Dictionary size} {Match length limit}
@item Level @tab Dictionary size @tab Match length limit
@item -1    @tab 1MiB            @tab 10 bytes
@item -2    @tab 1MiB            @tab 12 bytes
@item -3    @tab 1MiB            @tab 17 bytes
@item -4    @tab 2MiB            @tab 26 bytes
@item -5    @tab 4MiB            @tab 44 bytes
@item -6    @tab 8MiB            @tab 80 bytes
@item -7    @tab 16MiB           @tab 108 bytes
@item -8    @tab 16MiB           @tab 163 bytes
@item -9    @tab 32MiB           @tab 273 bytes
@end multitable

@item --fast
@itemx --best
Aliases for GNU gzip compatibility.
@end table

@sp 1
Numbers given as arguments to options may be followed by a multiplier and an optional @samp{B} for "byte".

Table of SI and binary prefixes (unit multipliers):

@multitable {Prefix} {kilobyte (10^3 = 1000)} {|} {Prefix} {kibibyte (2^10 = 1024)}
@item Prefix @tab Value                  @tab | @tab Prefix @tab Value
@item k @tab kilobyte  (10^3 = 1000)     @tab | @tab Ki @tab kibibyte (2^10 = 1024)
@item M @tab megabyte  (10^6)            @tab | @tab Mi @tab mebibyte (2^20)
@item G @tab gigabyte  (10^9)            @tab | @tab Gi @tab gibibyte (2^30)
@item T @tab terabyte  (10^12)           @tab | @tab Ti @tab tebibyte (2^40)
@item P @tab petabyte  (10^15)           @tab | @tab Pi @tab pebibyte (2^50)
@item E @tab exabyte   (10^18)           @tab | @tab Ei @tab exbibyte (2^60)
@item Z @tab zettabyte (10^21)           @tab | @tab Zi @tab zebibyte (2^70)
@item Y @tab yottabyte (10^24)           @tab | @tab Yi @tab yobibyte (2^80)
@end multitable


@node File Format
@chapter File Format
@cindex file format

In the diagram below, a box like this:

@verbatim
+---+
|   | <-- the vertical bars might be missing
+---+
@end verbatim

represents one byte; a box like this:

@verbatim
+==============+
|              |
+==============+
@end verbatim

represents a variable number of bytes.

@sp 1
A lzip file consists of a series of "members" (compressed data sets). The members simply appear one after another in the file, with no additional information before, between, or after them.

Each member has the following structure:

@verbatim
+--+--+--+--+----+----+=============+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| ID string | VN | DS | Lzma stream | CRC32 |   Data size   |  Member size  |
+--+--+--+--+----+----+=============+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
@end verbatim

All multibyte values are stored in little endian order.

@table @samp
@item ID string
A four byte string, identifying the member type, with the value "LZIP".

@item VN (version number, 1 byte)
Just in case something needs to be modified in the future. Valid values are 0 and 1. Version 0 files have only one member and lack @samp{Member size}.

@item DS (coded dictionary size, 1 byte)
Bits 4-0 contain the base 2 logarithm of the base dictionary size.@*
Bits 7-5 contain the number of "wedges" to subtract from the base dictionary size to obtain the dictionary size. The size of a wedge is (base dictionary size / 16).@*
Valid values for dictionary size range from 4KiB to 512MiB.

@item Lzma stream
The lzma stream, finished by an end of stream marker. Uses default values for encoder properties.

@item CRC32 (4 bytes)
CRC of the uncompressed original data.

@item Data size (8 bytes)
Size of the uncompressed original data.

@item Member size (8 bytes)
Total size of the member, including header and trailer. This facilitates safe recovery of undamaged members from multimember files.
@end table


@node Examples
@chapter A small tutorial with examples
@cindex examples

WARNING! If your data is important, give the @samp{--keep} option to lzip and do not remove the original file until you verify the compressed file with a command like @samp{lzip -cd file.lz | cmp file -}.
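
Such verification relies on the fields that lzip stores in each member, described in the File Format chapter above. As a supplementary illustration only (not a substitute for @samp{lzip -t} or the @samp{cmp} check), the following C sketch prints the header and trailer fields of a single-member, version 1 file; the code and the file name @samp{file.lz} are examples, not part of lzip.

@verbatim
/* Illustrative only: print the header and trailer fields of a
   single-member, version 1 lzip file, as described in the File Format
   chapter.  Error handling is minimal. */
#include <stdio.h>

static unsigned long long get_le( const unsigned char * p, int n )
  {
  unsigned long long v = 0;
  for( int i = n - 1; i >= 0; --i ) v = ( v << 8 ) | p[i];
  return v;
  }

int main( void )
  {
  FILE * f = fopen( "file.lz", "rb" );
  unsigned char header[6], trailer[20];
  if( !f || fread( header, 1, 6, f ) != 6 ) { perror( "file.lz" ); return 1; }

  /* DS byte: bits 4-0 = log2 of base size, bits 7-5 = wedges to subtract */
  const unsigned base = 1U << ( header[5] & 0x1F );
  const unsigned dict_size = base - ( base / 16 ) * ( ( header[5] >> 5 ) & 7 );
  printf( "ID = %.4s  version = %u  dictionary size = %u bytes\n",
          (const char *)header, header[4], dict_size );

  /* trailer = last 20 bytes: CRC32 (4), data size (8), member size (8) */
  if( fseek( f, -20L, SEEK_END ) != 0 ||
      fread( trailer, 1, 20, f ) != 20 ) { perror( "file.lz" ); return 1; }
  printf( "stored CRC32 = 0x%08llX\n", get_le( trailer, 4 ) );
  printf( "data size    = %llu bytes\n", get_le( trailer + 4, 8 ) );
  printf( "member size  = %llu bytes\n", get_le( trailer + 12, 8 ) );
  fclose( f );
  return 0;
  }
@end verbatim

The printed data size can be compared with the size of the original file; a full integrity check still requires decompressing, for example with @samp{lzip -t}.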

@sp 1
@noindent
Example 1: Replace a regular file with its compressed version file.lz and show the compression ratio.

@example
lzip -v file
@end example

@sp 1
@noindent
Example 2: Like example 1 but the created file.lz is multimember with a member size of 1MiB.

@example
lzip -b 1MiB file
@end example

@sp 1
@noindent
Example 3: Compress a whole floppy in /dev/fd0 and send the output to file.lz.

@example
lzip -c /dev/fd0 > file.lz
@end example

@sp 1
@noindent
Example 4: Create a multivolume compressed tar archive with a volume size of 1440KiB.

@example
tar -c some_directory | lzip -S 1440KiB -o volume_name
@end example

@sp 1
@noindent
Example 5: Extract a multivolume compressed tar archive.

@example
lzip -cd volume_name*.lz | tar -xf -
@end example

@sp 1
@noindent
Example 6: Create a multivolume compressed backup of a big database file with a volume size of 650MB, where each volume is a multimember file with a member size of 32MiB.

@example
lzip -b 32MiB -S 650MB big_database
@end example

@sp 1
@noindent
Example 7: Recover the first volume of those created in example 6 from two copies, @samp{big_database1_00001.lz} and @samp{big_database2_00001.lz}, with member 00007 damaged in the first copy and member 00018 damaged in the second copy. (Indented lines are lzip error messages).

@example
lziprecover big_database1_00001.lz
lziprecover big_database2_00001.lz
lzip -t rec*big_database1_00001.lz
  rec00007big_database1_00001.lz: crc mismatch
lzip -t rec*big_database2_00001.lz
  rec00018big_database2_00001.lz: crc mismatch
cp rec00007big_database2_00001.lz rec00007big_database1_00001.lz
cat rec*big_database1_00001.lz > big_database3_00001.lz
@end example


@node Lziprecover
@chapter Lziprecover
@cindex lziprecover

Lziprecover is a program that searches for members in .lz files, and writes each member in its own .lz file. You can then use @w{@samp{lzip -t}} to test the integrity of the resulting files, and decompress those which are undamaged.

Lziprecover takes a single argument, the name of the damaged file, and writes a number of files @samp{rec00001file.lz}, @samp{rec00002file.lz}, etc, containing the extracted members. The output filenames are designed so that the use of wildcards in subsequent processing, for example, @w{@samp{lzip -dc rec*file.lz > recovered_data}}, processes the files in the correct order.


@node Problems
@chapter Reporting Bugs
@cindex bugs
@cindex getting help

There are probably bugs in lzip. There are certainly errors and omissions in this manual. If you report them, they will get fixed. If you don't, no one will ever know about them and they will remain unfixed for all eternity, if not longer.

If you find a bug in lzip, please send electronic mail to @email{lzip-bug@@nongnu.org}. Include the version number, which you can find by running @w{@samp{lzip --version}}.


@node Concept Index
@unnumbered Concept Index

@printindex cp

@bye