From 06a01dc7a04d92c60210008435950afa9b6c0a69 Mon Sep 17 00:00:00 2001
From: Daniel Baumann
Date: Fri, 6 Nov 2015 13:53:27 +0100
Subject: Merging upstream version 1.7.

Signed-off-by: Daniel Baumann
---
 README | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/README b/README
index b358f08..e6464da 100644
--- a/README
+++ b/README
@@ -45,6 +45,13 @@ each file without exceeding the given limit. Keep in mind that the
 decompression memory requirement is affected at compression time by the
 choice of dictionary size limit.
 
+The amount of memory required for compression is about 1 or 2 times the
+dictionary size limit (1 if input file size is less than dictionary size
+limit, else 2) plus 9 times the dictionary size really used. The option
+'-0' is special and only requires about 1.5 MiB at most. The amount of
+memory required for decompression is about 46 kB larger than the
+dictionary size really used.
+
 When compressing, clzip replaces every file given in the command line
 with a compressed version of itself, with the name "original_name.lz".
 When decompressing, clzip attempts to guess the name for the decompressed
@@ -93,7 +100,7 @@ used by lzip could be developed, and the resulting sequence could also be
 coded using the LZMA coding scheme.
 
 Clzip currently implements two variants of the LZMA algorithm; fast
-(used by option -0) and normal (used by all other compression levels).
+(used by option '-0') and normal (used by all other compression levels).
 
 The high compression of LZMA comes from combining two basic, well-proven
 compression ideas: sliding dictionaries (LZ77/78) and markov models (the
-- 
cgit v1.2.3
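
The memory rule added by the first hunk can be checked with a small
back-of-the-envelope calculation. The C sketch below is not part of
clzip; it only encodes the rule of thumb quoted in the patch, and every
function and variable name in it is hypothetical:

    /* Estimate clzip memory use from the README's stated rule of thumb.
       Illustrative only; not taken from the clzip sources. */
    #include <stdio.h>

    /* Compression: (1 or 2) * dictionary size limit plus 9 * dictionary
       size really used; the factor is 1 when the input file is smaller
       than the limit, else 2. */
    static unsigned long long
    compression_memory( unsigned long long input_size,
                        unsigned long long dict_limit,
                        unsigned long long dict_used )
      {
      const unsigned long long factor = ( input_size < dict_limit ) ? 1 : 2;
      return factor * dict_limit + 9 * dict_used;
      }

    /* Decompression: about 46 kB larger than the dictionary size really
       used (46 kB read as 46 * 1000 bytes). */
    static unsigned long long
    decompression_memory( unsigned long long dict_used )
      { return dict_used + 46 * 1000ULL; }

    int main( void )
      {
      const unsigned long long MiB = 1ULL << 20;
      /* Example: 100 MiB input, 8 MiB dictionary limit, fully used. */
      const unsigned long long input = 100 * MiB;
      const unsigned long long limit = 8 * MiB;
      const unsigned long long used  = 8 * MiB;
      printf( "compression:   about %llu MiB\n",
              compression_memory( input, limit, used ) / MiB );
      printf( "decompression: about %llu bytes\n",
              decompression_memory( used ) );
      return 0;
      }

For this example the input exceeds the limit, so the estimate is
2*8 + 9*8 = 88 MiB for compression, and 8 MiB plus roughly 46 kB for
decompression.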