\input texinfo @c -*-texinfo-*-
@c %**start of header
@setfilename clzip.info
@documentencoding ISO-8859-15
@settitle Clzip Manual
@finalout
@c %**end of header
@set UPDATED 30 January 2014
@set VERSION 1.6-pre1

@dircategory Data Compression
@direntry
* Clzip: (clzip).               LZMA lossless data compressor
@end direntry

@ifnothtml
@titlepage
@title Clzip
@subtitle LZMA lossless data compressor
@subtitle for Clzip version @value{VERSION}, @value{UPDATED}
@author by Antonio Diaz Diaz

@page
@vskip 0pt plus 1filll
@end titlepage

@contents
@end ifnothtml

@node Top
@top

This manual is for Clzip (version @value{VERSION}, @value{UPDATED}).

@menu
* Introduction::        Purpose and features of clzip
* Algorithm::           How clzip compresses the data
* Invoking clzip::      Command line interface
* File format::         Detailed format of the compressed file
* Examples::            A small tutorial with examples
* Problems::            Reporting bugs
* Concept index::       Index of concepts
@end menu

@sp 1
Copyright @copyright{} 2010, 2011, 2012, 2013, 2014 Antonio Diaz Diaz.

This manual is free documentation: you have unlimited permission to copy, distribute and modify it.

@node Introduction
@chapter Introduction
@cindex introduction

Clzip is a lossless data compressor with a user interface similar to that of gzip or bzip2. Clzip decompresses almost as fast as gzip, compresses most files more than bzip2, and is better than both from a data recovery perspective. Clzip is a clean implementation of the LZMA algorithm.

Clzip uses the lzip file format; the files produced by clzip are fully compatible with lzip-1.4 or newer, and can be rescued with lziprecover. Clzip is in fact a C language version of lzip, intended for embedded devices or systems lacking a C++ compiler.

The lzip file format is designed for long-term data archiving and provides very safe integrity checking. It is as simple as possible (but not simpler), so that with the help of the lzip manual alone it would be possible for a digital archaeologist to extract the data from a lzip file long after quantum computers eventually render LZMA obsolete. Additionally, lzip is copylefted, which guarantees that it will remain free forever.

The member trailer stores the 32-bit CRC of the original data, the size of the original data, and the size of the member. These values, together with the value remaining in the range decoder and the end-of-stream marker, provide a 4-factor integrity checking which guarantees that the decompressed version of the data is identical to the original. This guards against corruption of the compressed data, and against undetected bugs in clzip (hopefully very unlikely). The chances of data corruption going undetected are microscopic. Be aware, though, that the check occurs upon decompression, so it can only tell you that something is wrong. It can't help you recover the original uncompressed data.

If you ever need to recover data from a damaged lzip file, try the lziprecover program. Lziprecover makes lzip files resistant to bit flips (one of the most common forms of data corruption), and provides data recovery capabilities, including error-checked merging of damaged copies of a file.

Clzip uses the same well-defined exit status values used by lzip and bzip2, which makes it safer than compressors returning ambiguous warning values (like gzip) when used as a back end for tar or zutils.

When compressing, clzip replaces every file given in the command line with a compressed version of itself, with the name "original_name.lz".
When decompressing, clzip attempts to guess the name for the decompressed file from that of the compressed file as follows:

@multitable {anyothername} {becomes} {anyothername.out}
@item filename.lz   @tab becomes @tab filename
@item filename.tlz  @tab becomes @tab filename.tar
@item anyothername  @tab becomes @tab anyothername.out
@end multitable

(De)compressing a file is much like copying or moving it; therefore clzip preserves the access and modification dates, permissions, and, when possible, ownership of the file just as "cp -p" does. (If the user ID or the group ID can't be duplicated, the file permission bits S_ISUID and S_ISGID are cleared).

Clzip is able to read from some types of non-regular files if the @samp{--stdout} option is specified.

If no file names are specified, clzip compresses (or decompresses) from standard input to standard output. In this case, clzip will decline to write compressed output to a terminal, as this would be entirely incomprehensible and therefore pointless.

Clzip will correctly decompress a file which is the concatenation of two or more compressed files. The result is the concatenation of the corresponding uncompressed files. Integrity testing of concatenated compressed files is also supported.

Clzip can produce multi-member files, whose undamaged members can be safely recovered with lziprecover in case of file damage. Clzip can also split the compressed output into volumes of a given size, even when reading from standard input. This allows the direct creation of multivolume compressed tar archives.

Clzip is able to compress and decompress streams of unlimited size by automatically creating multi-member output. The members so created are large, about 64 PiB each.

The amount of memory required for compression is about 1 or 2 times the dictionary size limit (1 if the input file size is less than the dictionary size limit, else 2) plus 9 times the dictionary size really used. The amount of memory required for decompression is about 46 kB larger than the dictionary size really used. Clzip will automatically use the smallest possible dictionary size without exceeding the given limit. Keep in mind that the decompression memory requirement is affected at compression time by the choice of dictionary size limit.

@node Algorithm
@chapter Algorithm
@cindex algorithm

Clzip implements a simplified version of the LZMA (Lempel-Ziv-Markov chain algorithm) algorithm. The high compression of LZMA comes from combining two basic, well-proven compression ideas: sliding dictionaries (LZ77/78) and Markov models (the thing used by every compression algorithm that uses a range encoder or similar order-0 entropy coder as its last stage) with segregation of contexts according to what the bits are used for.

Clzip is a two-stage compressor. The first stage is a Lempel-Ziv coder, which reduces redundancy by translating chunks of data to their corresponding distance-length pairs. The second stage is a range encoder that uses a different probability model for each type of data: distances, lengths, literal bytes, etc.

The match finder, part of the LZ coder, is the most important piece of the LZMA algorithm, as it is in many Lempel-Ziv based algorithms. Most of clzip's execution time is spent in the match finder, and it has the greatest influence on the compression ratio.

Here is how it works, step by step:

1) The member header is written to the output stream.

2) The first byte is coded literally, because there are no previous bytes to which the match finder can refer.

3) The main encoder advances to the next byte in the input data and calls the match finder.

4) The match finder fills an array with the minimum distances before the current byte where a match of a given length can be found.

5) Go back to step 3 until a sequence (formed of pairs, repeated distances, and literal bytes) of minimum price has been formed, where the price represents the number of output bits produced.

6) The range encoder encodes the sequence produced by the main encoder and sends the produced bytes to the output stream.

7) Go back to step 3 until the input data are finished or until the member or volume size limits are reached.

8) The range encoder is flushed.

9) The member trailer is written to the output stream.

10) If there are more data to compress, go back to step 1.

@sp 1
@noindent
The ideas embodied in clzip are due to (at least) the following people: Abraham Lempel and Jacob Ziv (for the LZ algorithm), Andrey Markov (for the definition of Markov chains), G.N.N. Martin (for the definition of range encoding), Igor Pavlov (for putting all the above together in LZMA), and Julian Seward (for bzip2's CLI).

@node Invoking clzip
@chapter Invoking clzip
@cindex invoking
@cindex options
@cindex usage
@cindex version

The format for running clzip is:

@example
clzip [@var{options}] [@var{files}]
@end example

Clzip supports the following options:

@table @samp
@item -h
@itemx --help
Print an informative help message describing the options and exit.

@item -V
@itemx --version
Print the version number of clzip on the standard output and exit.

@item -b @var{bytes}
@itemx --member-size=@var{bytes}
Set the member size limit to @var{bytes}. A small member size may degrade the compression ratio, so use it only when needed. Valid values range from 100 kB to 64 PiB. Defaults to 64 PiB.

@item -c
@itemx --stdout
Compress or decompress to standard output. Needed when reading from a named pipe (fifo) or from a device. Use it to recover as much of the uncompressed data as possible when decompressing a corrupt file.

@item -d
@itemx --decompress
Decompress.

@item -f
@itemx --force
Force overwrite of output files.

@item -F
@itemx --recompress
Force recompression of files whose name already has the @samp{.lz} or @samp{.tlz} suffix.

@item -k
@itemx --keep
Keep (don't delete) input files during compression or decompression.

@item -m @var{bytes}
@itemx --match-length=@var{bytes}
Set the match length limit in bytes. After a match this long is found, the search is finished. Valid values range from 5 to 273. Larger values usually give better compression ratios but longer compression times.

@item -o @var{file}
@itemx --output=@var{file}
When reading from standard input and @samp{--stdout} has not been specified, use @samp{@var{file}} as the virtual name of the uncompressed file. This produces a file named @samp{@var{file}} when decompressing, a file named @samp{@var{file}.lz} when compressing, and several files named @samp{@var{file}00001.lz}, @samp{@var{file}00002.lz}, etc, when compressing and splitting the output into volumes.

@item -q
@itemx --quiet
Quiet operation. Suppress all messages.

@item -s @var{bytes}
@itemx --dictionary-size=@var{bytes}
Set the dictionary size limit in bytes. Valid values range from 4 KiB to 512 MiB. Clzip will use the smallest possible dictionary size for each member without exceeding this limit. Note that dictionary sizes are quantized. If the specified size does not match one of the valid sizes, it will be rounded upwards by adding up to (@var{bytes} / 16) to it.
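
As an illustration of this quantization, the following C sketch rounds a requested size up to the nearest value of the form "power of 2 minus a multiple (0 to 7) of 1/16 of that power of 2", the family of sizes encodable in the @samp{DS} byte described in the File format chapter. It was written for this manual only and is not the code actually used by clzip; the function name @code{round_up_dictionary_size} is just a placeholder.

@verbatim
#include <stdio.h>

/* Illustrative sketch, not clzip's code: round a requested dictionary
   size up to the nearest valid size, i.e. a power of 2 (the base size)
   minus 0 to 7 "wedges" of base_size / 16 bytes each. */
static unsigned long round_up_dictionary_size( unsigned long size )
  {
  unsigned long base, wedge, wedges;

  if( size < ( 1UL << 12 ) ) size = 1UL << 12;     /* minimum,   4 KiB */
  if( size > ( 1UL << 29 ) ) size = 1UL << 29;     /* maximum, 512 MiB */
  base = 1UL << 12;
  while( base < size ) base <<= 1;    /* smallest power of 2 not below size */
  wedge = base / 16;
  wedges = ( base - size ) / wedge;   /* whole wedges that can be removed */
  if( wedges > 7 ) wedges = 7;
  return base - ( wedges * wedge );
  }

int main( void )
  {
  /* 300 KiB is not a valid size; it is rounded up to 320 KiB (327680). */
  printf( "%lu\n", round_up_dictionary_size( 300UL * 1024 ) );
  return 0;
  }
@end verbatim

In the sketch, doubling @code{base} finds the smallest power of 2 not smaller than the requested size; removing whole wedges then yields the smallest representable size that is still not smaller than the request.
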
For maximum compression you should use a dictionary size limit as large as possible, but keep in mind that the decompression memory requirement is affected at compression time by the choice of dictionary size limit.

@item -S @var{bytes}
@itemx --volume-size=@var{bytes}
Split the compressed output into several volume files with names @samp{original_name00001.lz}, @samp{original_name00002.lz}, etc, and set the volume size limit to @var{bytes}. Each volume is a complete, maybe multi-member, lzip file. A small volume size may degrade the compression ratio, so use it only when needed. Valid values range from 100 kB to 4 EiB.

@item -t
@itemx --test
Check integrity of the specified file(s), but don't decompress them. This really performs a trial decompression and throws away the result. Use it together with @samp{-v} to see information about the file.

@item -v
@itemx --verbose
Verbose mode.@*
When compressing, show the compression ratio for each file processed. A second @samp{-v} shows the progress of compression.@*
When decompressing or testing, further @samp{-v}s (up to 4) increase the verbosity level, showing status, compression ratio, dictionary size, and trailer contents (CRC, data size, member size).

@item -1 .. -9
Set the compression parameters (dictionary size and match length limit) as shown in the table below. Note that @samp{-9} can be much slower than @samp{-1}. These options have no effect when decompressing.

The two-dimensional parameter space of LZMA can't be mapped to a linear scale optimal for all files. If your files are large, very repetitive, etc, you may need to use the @samp{--match-length} and @samp{--dictionary-size} options directly to achieve optimal performance. For example, @samp{-9m64} usually compresses executables more (and faster) than @samp{-9}.

@multitable {Level} {Dictionary size} {Match length limit}
@item Level @tab Dictionary size @tab Match length limit
@item -1 @tab 1 MiB   @tab 5 bytes
@item -2 @tab 1.5 MiB @tab 6 bytes
@item -3 @tab 2 MiB   @tab 8 bytes
@item -4 @tab 3 MiB   @tab 12 bytes
@item -5 @tab 4 MiB   @tab 20 bytes
@item -6 @tab 8 MiB   @tab 36 bytes
@item -7 @tab 16 MiB  @tab 68 bytes
@item -8 @tab 24 MiB  @tab 132 bytes
@item -9 @tab 32 MiB  @tab 273 bytes
@end multitable

@item --fast
@itemx --best
Aliases for GNU gzip compatibility.
@end table

Numbers given as arguments to options may be followed by a multiplier and an optional @samp{B} for "byte".

Table of SI and binary prefixes (unit multipliers):

@multitable {Prefix} {kilobyte (10^3 = 1000)} {|} {Prefix} {kibibyte (2^10 = 1024)}
@item Prefix @tab Value                   @tab | @tab Prefix @tab Value
@item k @tab kilobyte (10^3 = 1000)       @tab | @tab Ki @tab kibibyte (2^10 = 1024)
@item M @tab megabyte (10^6)              @tab | @tab Mi @tab mebibyte (2^20)
@item G @tab gigabyte (10^9)              @tab | @tab Gi @tab gibibyte (2^30)
@item T @tab terabyte (10^12)             @tab | @tab Ti @tab tebibyte (2^40)
@item P @tab petabyte (10^15)             @tab | @tab Pi @tab pebibyte (2^50)
@item E @tab exabyte (10^18)              @tab | @tab Ei @tab exbibyte (2^60)
@item Z @tab zettabyte (10^21)            @tab | @tab Zi @tab zebibyte (2^70)
@item Y @tab yottabyte (10^24)            @tab | @tab Yi @tab yobibyte (2^80)
@end multitable

@sp 1
Exit status: 0 for a normal exit, 1 for environmental problems (file not found, invalid flags, I/O errors, etc), 2 to indicate a corrupt or invalid input file, 3 for an internal consistency error (e.g., a bug) which caused clzip to panic.
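
As an illustration of these exit status values, the following C fragment (written for this manual, not part of clzip) shows how a program that runs clzip as a child process might interpret them; the file name @samp{file.lz} is just a placeholder.

@verbatim
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

int main( void )
  {
  const int status = system( "clzip -t file.lz" );

  if( status < 0 || !WIFEXITED( status ) )
    { fputs( "could not run clzip\n", stderr ); return 1; }
  switch( WEXITSTATUS( status ) )
    {
    case 0: puts( "file tested fine" ); break;
    case 1: puts( "environmental problem (file not found, I/O error, etc)" ); break;
    case 2: puts( "corrupt or invalid input file" ); break;
    case 3: puts( "internal consistency error (bug) made clzip panic" ); break;
    default: puts( "unexpected exit status" ); break;
    }
  return 0;
  }
@end verbatim

Because the status values are well defined, scripts and tools that drive clzip (for example as a back end for tar or zutils) can distinguish a damaged file (status 2) from a mere environmental failure (status 1) without parsing any messages.
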
@node File format
@chapter File format
@cindex file format

Perfection is reached, not when there is no longer anything to add, but when there is no longer anything to take away.@*
--- Antoine de Saint-Exupery

@sp 1
In the diagram below, a box like this:

@verbatim
+---+
|   | <-- the vertical bars might be missing
+---+
@end verbatim

represents one byte; a box like this:

@verbatim
+==============+
|              |
+==============+
@end verbatim

represents a variable number of bytes.

@sp 1
A lzip file consists of a series of "members" (compressed data sets). The members simply appear one after another in the file, with no additional information before, between, or after them.

Each member has the following structure:

@verbatim
+--+--+--+--+----+----+=============+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| ID string | VN | DS | Lzma stream | CRC32 |   Data size   |  Member size  |
+--+--+--+--+----+----+=============+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
@end verbatim

All multibyte values are stored in little endian order.

@table @samp
@item ID string
A four byte string, identifying the lzip format, with the value "LZIP" (0x4C, 0x5A, 0x49, 0x50).

@item VN (version number, 1 byte)
Just in case something needs to be modified in the future. 1 for now.

@item DS (coded dictionary size, 1 byte)
Lzip divides the distance between any two powers of 2 into 8 equally spaced intervals, named "wedges". The dictionary size is calculated by taking a power of 2 (the base size) and subtracting from it a number of wedges between 0 and 7. The size of a wedge is (base_size / 16).@*
Bits 4-0 contain the base 2 logarithm of the base size (12 to 29).@*
Bits 7-5 contain the number of wedges (0 to 7) to subtract from the base size to obtain the dictionary size.@*
Example: 0xD3 = 2^19 - 6 * 2^15 = 512 KiB - 6 * 32 KiB = 320 KiB@*
Valid values for dictionary size range from 4 KiB to 512 MiB.

@item Lzma stream
The lzma stream, finished by an end of stream marker. Uses default values for encoder properties. See the lzip manual for a full description.

@item CRC32 (4 bytes)
CRC of the uncompressed original data.

@item Data size (8 bytes)
Size of the uncompressed original data.

@item Member size (8 bytes)
Total size of the member, including header and trailer. This field acts as a distributed index, allows the verification of stream integrity, and facilitates safe recovery of undamaged members from multi-member files.
@end table

@node Examples
@chapter A small tutorial with examples
@cindex examples

WARNING! Even if clzip is bug-free, other causes may result in a corrupt compressed file (bugs in the system libraries, memory errors, etc). Therefore, if the data you are going to compress are important, give the @samp{--keep} option to clzip and do not remove the original file until you verify the compressed file with a command like @w{@samp{clzip -cd file.lz | cmp file -}}.

@sp 1
@noindent
Example 1: Replace a regular file with its compressed version @samp{file.lz} and show the compression ratio.

@example
clzip -v file
@end example

@sp 1
@noindent
Example 2: Like example 1 but the created @samp{file.lz} is multi-member with a member size of 1 MiB. The compression ratio is not shown.

@example
clzip -b 1MiB file
@end example

@sp 1
@noindent
Example 3: Restore a regular file from its compressed version @samp{file.lz}. If the operation is successful, @samp{file.lz} is removed.

@example
clzip -d file.lz
@end example

@sp 1
@noindent
Example 4: Verify the integrity of the compressed file @samp{file.lz} and show status.

@example
clzip -tv file.lz
@end example

@sp 1
@noindent
Example 5: Compress a whole floppy in /dev/fd0 and send the output to @samp{file.lz}.

@example
clzip -c /dev/fd0 > file.lz
@end example

@sp 1
@noindent
Example 6: Decompress @samp{file.lz} partially until 10 KiB of decompressed data are produced.

@example
clzip -cd file.lz | dd bs=1024 count=10
@end example

@sp 1
@noindent
Example 7: Decompress @samp{file.lz} partially from decompressed byte 10000 to decompressed byte 15000 (5000 bytes are produced).

@example
clzip -cd file.lz | dd bs=1000 skip=10 count=5
@end example

@sp 1
@noindent
Example 8: Create a multivolume compressed tar archive with a volume size of 1440 KiB.

@example
tar -c some_directory | clzip -S 1440KiB -o volume_name
@end example

@sp 1
@noindent
Example 9: Extract a multivolume compressed tar archive.

@example
clzip -cd volume_name*.lz | tar -xf -
@end example

@sp 1
@noindent
Example 10: Create a multivolume compressed backup of a large database file with a volume size of 650 MB, where each volume is a multi-member file with a member size of 32 MiB.

@example
clzip -b 32MiB -S 650MB big_db
@end example

@node Problems
@chapter Reporting bugs
@cindex bugs
@cindex getting help

There are probably bugs in clzip. There are certainly errors and omissions in this manual. If you report them, they will get fixed. If you don't, no one will ever know about them and they will remain unfixed for all eternity, if not longer.

If you find a bug in clzip, please send electronic mail to @email{lzip-bug@@nongnu.org}. Include the version number, which you can find by running @w{@samp{clzip --version}}.

@node Concept index
@unnumbered Concept index
@printindex cp

@bye