From 8139f66c36f8b437f5dbecb19607e2e09a9358d3 Mon Sep 17 00:00:00 2001
From: Daniel Baumann
Date: Wed, 27 Jan 2021 17:11:42 +0100
Subject: Merging upstream version 0.7.

Signed-off-by: Daniel Baumann
---
 README | 26 +++++++++++++++-----------
 1 file changed, 15 insertions(+), 11 deletions(-)

(limited to 'README')

diff --git a/README b/README
index 3e26a40..7395a92 100644
--- a/README
+++ b/README
@@ -18,7 +18,7 @@ masked by a low compression ratio in the last members.
 
 The xlunzip tarball contains a copy of the lzip_decompress module and can be
 compiled and tested without downloading or applying the patch to the kernel.
- 
+
 My lzip patch for linux can be found at
 http://download.savannah.gnu.org/releases/lzip/kernel/
 
@@ -61,8 +61,8 @@ data is uncompressible.
 The worst case is very compressible data followed by uncompressible data
 because in this case the output pointer increases faster when the input
 pointer is smaller.
-         |          * <-- input pointer
-         |         *    , <-- output pointer
+         |          * <-- input pointer (*)
+         |         *    , <-- output pointer (,)
          |         *   , '
          |        x  '    <-- overrun (x)
 memory   |       *  ,'
@@ -71,7 +71,7 @@ address  |      * ,'
          |    ,'
          |  ,'
          |,'
-         `--------------------------
+         '--------------------------
                                 time
 
 All we need to know to calculate the minimum required extra space is:
@@ -82,19 +82,23 @@ All we need to know to calculate the minimum required extra space is:
 The maximum expansion ratio of LZMA data is of about 1.4%. Rounding this up
 to 1/64 (1.5625%) and adding 36 bytes per input member, the extra space
 required to decompress lzip data in place is:
+
   extra_bytes = ( compressed_size >> 6 ) + members * 36
 
-Using the compressed size to calculate the extra_bytes (as in the equation
+Using the compressed size to calculate the extra_bytes (as in the formula
 above) may slightly overestimate the amount of space required in the worst
 case. But calculating the extra_bytes from the uncompressed size (as does
-linux) is wrong (and inefficient for high compression ratios). The formula
-used in arch/x86/boot/header.S
-  extra_bytes = (uncompressed_size >> 8) + 65536
-fails with 1 MB of zeros followed by 8 MB of random data, and wastes memory
-for compression ratios > 4:1.
+linux currently) is wrong (and inefficient for high compression ratios). The
+formula used in arch/x86/boot/header.S
+
+  extra_bytes = ( uncompressed_size >> 8 ) + 65536
+
+fails to decompress 1 MB of zeros followed by 8 MB of random data, wastes
+memory for compression ratios larger than 4:1, and does not even consider
+multimember data.
 
 
-Copyright (C) 2016-2020 Antonio Diaz Diaz.
+Copyright (C) 2016-2021 Antonio Diaz Diaz.
 
 This file is free documentation: you have unlimited permission to copy,
 distribute, and modify it.
-- 
cgit v1.2.3
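
For reference, a minimal C sketch of the sizing rule quoted in the patched
README above. It only restates the README's extra_bytes formula; the function
name lzip_extra_bytes and the example sizes are illustrative assumptions, not
part of xlunzip or of the kernel patch.

  #include <stddef.h>
  #include <stdio.h>

  /* Extra space needed to decompress lzip data in place, per the README:
     1/64 of the compressed size (covers the ~1.4% worst-case LZMA expansion,
     rounded up to 1.5625%) plus 36 bytes per input member. */
  static size_t lzip_extra_bytes( const size_t compressed_size,
                                  const size_t members )
    {
    return ( compressed_size >> 6 ) + members * 36;
    }

  int main( void )
    {
    const size_t csize = 9 * 1024 * 1024;    /* 9 MiB, single member */
    printf( "extra_bytes = %zu\n", lzip_extra_bytes( csize, 1 ) );
    return 0;
    }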