Diffstat (limited to 'README')
 README | 26 +++++++++++++++-----------
 1 file changed, 15 insertions(+), 11 deletions(-)
diff --git a/README b/README
index 3e26a40..7395a92 100644
--- a/README
+++ b/README
@@ -18,7 +18,7 @@ masked by a low compression ratio in the last members.
The xlunzip tarball contains a copy of the lzip_decompress module and can be
compiled and tested without downloading or applying the patch to the kernel.
-
+
My lzip patch for linux can be found at
http://download.savannah.gnu.org/releases/lzip/kernel/
@@ -61,8 +61,8 @@ data is uncompressible. The worst case is very compressible data followed by
uncompressible data, because in this case the output pointer increases faster
while the input pointer is still small.
- | * <-- input pointer
- | * , <-- output pointer
+ | * <-- input pointer (*)
+ | * , <-- output pointer (,)
| * , '
| x ' <-- overrun (x)
memory | * ,'
@@ -71,7 +71,7 @@ address | * ,'
| ,'
| ,'
|,'
- `--------------------------
+ '--------------------------
time
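
As a concrete illustration of the layout in the diagram above, here is a
minimal C sketch of setting up in-place decompression; the function and
variable names are hypothetical and this is not the kernel's actual code:

  /* Hypothetical sketch: reserve uncompressed_size + extra_bytes, move the
     compressed data to the end of the buffer, and decompress towards the
     start.  The extra bytes keep the output pointer (',') from overrunning
     the input pointer ('*') in the worst case shown above. */
  #include <string.h>

  unsigned char * setup_in_place( unsigned char * buffer,
                                  unsigned long uncompressed_size,
                                  unsigned long compressed_size,
                                  unsigned long extra_bytes )
    {
    /* buffer holds the compressed data at its start and is at least
       uncompressed_size + extra_bytes bytes long. */
    unsigned char * const in = buffer + uncompressed_size + extra_bytes -
                               compressed_size;
    memmove( in, buffer, compressed_size );
    return in;                /* decompress from 'in', writing to 'buffer' */
    }
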
All we need to know to calculate the minimum required extra space is:
@@ -82,19 +82,23 @@ All we need to know to calculate the minimum required extra space is:
The maximum expansion ratio of LZMA data is about 1.4%. Rounding this up
to 1/64 (1.5625%) and adding 36 bytes per input member, the extra space
required to decompress lzip data in place is:
+
extra_bytes = ( compressed_size >> 6 ) + members * 36
-Using the compressed size to calculate the extra_bytes (as in the equation
+Using the compressed size to calculate the extra_bytes (as in the formula
above) may slightly overestimate the amount of space required in the worst
case. But calculating the extra_bytes from the uncompressed size (as does
-linux) is wrong (and inefficient for high compression ratios). The formula
-used in arch/x86/boot/header.S
- extra_bytes = (uncompressed_size >> 8) + 65536
-fails with 1 MB of zeros followed by 8 MB of random data, and wastes memory
-for compression ratios > 4:1.
+linux currently) is wrong (and inefficient for high compression ratios). The
+formula used in arch/x86/boot/header.S
+
+ extra_bytes = ( uncompressed_size >> 8 ) + 65536
+
+fails to decompress 1 MB of zeros followed by 8 MB of random data, wastes
+memory for compression ratios larger than 4:1, and does not even consider
+multimember data.
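+
+To make the comparison concrete, the following small C sketch (illustrative
+sizes only, not part of the kernel) evaluates both formulas for roughly the
+failing case mentioned above:
+
+  /* Hypothetical example comparing the two formulas.  The compressed size
+     of 1 MB of zeros followed by 8 MB of random data is assumed to be a
+     little over 8 MiB; the exact figure depends on the compressor. */
+  #include <stdio.h>
+
+  static unsigned long lzip_extra( unsigned long compressed_size,
+                                   unsigned long members )
+    { return ( compressed_size >> 6 ) + members * 36; }
+
+  static unsigned long kernel_extra( unsigned long uncompressed_size )
+    { return ( uncompressed_size >> 8 ) + 65536; }
+
+  int main( void )
+    {
+    const unsigned long uncompressed_size = 9UL << 20;           /* 9 MiB */
+    const unsigned long compressed_size = ( 8UL << 20 ) + 8192;  /* assumed */
+    printf( "needed (from compressed size):     %lu\n",
+            lzip_extra( compressed_size, 1 ) );
+    printf( "reserved (from uncompressed size): %lu\n",
+            kernel_extra( uncompressed_size ) );
+    return 0;
+    }
+
+With these assumed sizes the second formula reserves about 100 KiB while the
+first one asks for about 128 KiB, which illustrates how calculating the extra
+space from the uncompressed size can fall short.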
-Copyright (C) 2016-2020 Antonio Diaz Diaz.
+Copyright (C) 2016-2021 Antonio Diaz Diaz.
This file is free documentation: you have unlimited permission to copy,
distribute, and modify it.