author     Nitin Gupta <ngupta@vflare.org>
           Thu, 11 Oct 2012 00:42:18 +0000 (17:42 -0700)
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
           Wed, 31 Oct 2012 17:09:59 +0000 (10:09 -0700)
commit     88d5e653e2282bda43459dd577f3841f6ef140bf
tree       1e04ef574c218cf84567fc5b8f9dcd6d304a5cfa
parent     e4c52c43352370daf54ba0cbe9476f593d89b0de
staging: zram: Fix handling of incompressible pages

commit c8f2f0db1d0294aaf37e8a85bea9bbc4aaf5c0fe upstream.

Commit 130f315a (staging: zram: remove special handle of uncompressed page)
introduced a bug in the handling of incompressible pages, resulting in
memory allocation failures for such pages.

When a page expands on compression, say from 4K to 4K+30, we were trying to
do zsmalloc(pool, 4K+30). However, the maximum size zsmalloc can allocate
is PAGE_SIZE (for obvious reasons), so such allocation requests always
fail: the returned handle is 0.
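
A minimal sketch of the failing write path (identifiers such as zs_malloc(),
clen and zram->mem_pool follow the zram/zsmalloc sources, but this is
illustrative, not the literal driver code):

	ret = lzo1x_1_compress(uncmem, PAGE_SIZE, src, &clen, workmem);

	/* For incompressible data, clen may exceed PAGE_SIZE (e.g. 4K+30).
	 * zs_malloc() cannot serve allocations larger than PAGE_SIZE, so
	 * the returned handle is always 0 and the write fails.
	 */
	handle = zs_malloc(zram->mem_pool, clen);
	if (!handle)
		return -ENOMEM;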

For a page whose compressed size is larger than the original (as can happen
with already-compressed or random data), there is no point in storing the
compressed version: it would take more space and would also require
decompression time when the page is needed again. So the fix is to store
any page whose compressed size exceeds a threshold (max_zpage_size) as-is,
i.e. without compression. The memory required to store such an uncompressed
page can then be requested from zsmalloc, which supports PAGE_SIZE-sized
allocations.
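
A sketch of the fix on the write side, assuming the max_zpage_size threshold
and the identifiers above (illustrative, not the exact patch):

	if (unlikely(clen > max_zpage_size)) {
		/* Incompressible: store the raw page instead. */
		clen = PAGE_SIZE;
		src = uncmem;	/* point at the uncompressed data */
	}

	/* clen is now at most PAGE_SIZE, which zsmalloc can satisfy. */
	handle = zs_malloc(zram->mem_pool, clen);
	cmem = zs_map_object(zram->mem_pool, handle, ZS_MM_WO);
	memcpy(cmem, src, clen);	/* raw copy when clen == PAGE_SIZE */
	zs_unmap_object(zram->mem_pool, handle);
	zram->table[index].size = clen;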

Lastly, the fix checks that we do not attempt to "decompress" the page which
we stored in the uncompressed form -- we just memcpy() out such pages.
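
On the read side, a stored size of PAGE_SIZE marks a page kept uncompressed,
so it is copied out rather than decompressed (again a sketch using the usual
zram identifiers, not the literal patch):

	cmem = zs_map_object(zram->mem_pool, handle, ZS_MM_RO);
	if (zram->table[index].size == PAGE_SIZE)
		memcpy(mem, cmem, PAGE_SIZE);	/* stored as-is */
	else
		ret = lzo1x_decompress_safe(cmem, zram->table[index].size,
					    mem, &clen);
	zs_unmap_object(zram->mem_pool, handle);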

Signed-off-by: Nitin Gupta <ngupta@vflare.org>
Reported-by: viechweg@gmail.com
Reported-by: paerley@gmail.com
Reported-by: wu.tommy@gmail.com
Acked-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
drivers/staging/zram/zram_drv.c