Description

Lzip is a lossless data compressor based on the LZMA algorithm, with
very safe integrity checking and a user interface similar to that of
gzip or bzip2. Lzip decompresses almost as fast as gzip and compresses
better than bzip2, which makes it well suited for software distribution
and data archiving.

Lzip replaces every file given in the command line with a compressed
version of itself, with the name "original_name.lz". Each compressed
file has the same modification date, permissions, and, when possible,
ownership as the corresponding original, so that these properties can be
correctly restored at decompression time. Lzip is able to read from some
types of non-regular files if the "--stdout" option is specified.
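
For example, a typical round trip might look like this (the file name
'doc.txt' is just an assumption for illustration):

  lzip doc.txt          (compresses doc.txt into doc.txt.lz and
                         removes doc.txt)
  lzip -d doc.txt.lz    (recreates doc.txt and removes doc.txt.lz)

The "--keep" ("-k") option keeps the input file instead of deleting it.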

If no file names are specified, lzip compresses (or decompresses) from
standard input to standard output. In this case, lzip will decline to
write compressed output to a terminal, as this would be entirely
incomprehensible and therefore pointless.
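
For example (again with an assumed file name), redirection can be used
instead of naming the files in the command line:

  lzip < doc.txt > doc.txt.lz       (compress standard input)
  lzip -d < doc.txt.lz > doc.txt    (decompress standard input)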

Lzip will correctly decompress a file which is the concatenation of two
or more compressed files. The result is the concatenation of the
corresponding uncompressed files. Integrity testing of concatenated
compressed files is also supported.
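
As an illustrative sketch (the file names are assumptions), compressed
files can be joined with cat and then tested or decompressed as one:

  cat part1.lz part2.lz > whole.lz
  lzip -t whole.lz    (tests both members)
  lzip -d whole.lz    (produces 'whole', containing the data of part1
                       followed by the data of part2)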

Lzip can produce multimember files, and lziprecover can safely recover
the undamaged members in case of file damage. Lzip can also split the
compressed output into volumes of a given size, even when reading from
standard input. This allows the direct creation of multivolume
compressed tar archives.
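
One possible way of doing this, assuming GNU tar, a directory named
'somedir', and a lzip version supporting the "--volume-size" ("-S") and
"--output" ("-o") options, is:

  tar -cf - somedir | lzip -S 650000000 -o somedir_tar

Each resulting volume is an independently decompressible .lz file of at
most 650 million bytes. See "lzip --help" for the exact option syntax
and the multiplier suffixes accepted for sizes.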

Lzip will automatically use the smallest possible dictionary size
without exceeding the given limit. Keep in mind that the dictionary
size limit chosen at compression time determines the amount of memory
needed for decompression.
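
For example (the file name and sizes are assumptions), the limit can be
set explicitly with the "--dictionary-size" ("-s") option, or
indirectly by choosing a compression level:

  lzip -s 4194304 big_file    (limits the dictionary, and therefore the
                               memory needed for decompression, to
                               about 4 MiB)
  lzip -9 big_file            (best compression; selects a larger
                               dictionary)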

As a self-check for your protection, lzip stores in the member trailer
the 32-bit CRC and the size of the original data, to make sure that the
decompressed version of the data is identical to the original. This
guards against corruption of the compressed data, and against
undetected bugs in lzip (hopefully very unlikely). The chances of data
corruption going undetected are microscopic, less than one chance in
4000 million (roughly the 2^32 distinct values of a 32-bit CRC) for
each member processed. Be aware, though, that the check occurs upon
decompression, so it can only tell you that something is wrong. It
can't help you recover the original uncompressed data.
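
The stored check can be exercised without writing any decompressed data
to disk; for example (file name assumed):

  lzip -tv doc.txt.lz    (verifies the CRC and size stored in each
                          member trailer without writing any output)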

Lzip implements a simplified version of LZMA (the Lempel-Ziv-Markov
chain algorithm). The original LZMA algorithm was designed by Igor
Pavlov.

The high compression of LZMA comes from combining two basic, well-proven
compression ideas: sliding dictionaries (LZ77/78) and Markov models (the
technique used by every compression algorithm that uses a range encoder
or similar order-0 entropy coder as its last stage), with segregation of
contexts according to what the bits are used for.


Copyright (C) 2008, 2009 Antonio Diaz Diaz.

This file is free documentation: you have unlimited permission to copy,
distribute and modify it.

The file Makefile.in is a data file used by configure to produce the
Makefile. It has the same copyright owner and permissions as this
file.