
zstd: Data corruption on SpeedBestCompression level #875

Closed
MichaelEischer opened this issue Oct 21, 2023 · 8 comments · Fixed by #876

@MichaelEischer

Following a bug report at restic/restic#4523, I've traced the data corruption back to the zstd library.

The following two minimized data chunks return corrupted data after a compression round trip (compress, then decompress) at the SpeedBestCompression level. There is no data corruption at the default compression level. I've tested versions v1.16.7 (via restic, using Go 1.20.7) and v1.17.1 (using Go 1.21.3).

correct-3d0e366bad4a5e9b443f407b114756a0f5a8153dbc242e0b2601c707136815eb.bin-minimized.txt
correct-dc82d97be7683ecd41097ab02d7b15de81e8bbcd1c476c50b254b1f458090929.bin-minimized.txt

To produce the minimized examples, I've used the following code snippet, which also serves as a reproducer:

package main

import (
	"bytes"
	"os"
	"sort"

	"github.com/klauspost/compress/zstd"
)

func buildEncoder() *zstd.Encoder {
	level := zstd.SpeedBestCompression

	opts := []zstd.EOption{
		// Set the configured compression level.
		zstd.WithEncoderLevel(level),
		// Disable CRC; we have enough checks in place, and this makes
		// the compressed data four bytes shorter.
		zstd.WithEncoderCRC(false),
		// Set a window of 512 KiB, so we have good lookbehind for
		// typical blob sizes.
		zstd.WithWindowSize(512 * 1024),
	}

	enc, err := zstd.NewWriter(nil, opts...)
	if err != nil {
		panic(err)
	}
	return enc
}

func buildDecoder() *zstd.Decoder {
	opts := []zstd.DOption{
		// Use all available cores.
		zstd.WithDecoderConcurrency(0),
		// Limit the maximum decompressed memory. Set to a very high,
		// conservative value.
		zstd.WithDecoderMaxMemory(16 * 1024 * 1024 * 1024),
	}

	dec, err := zstd.NewReader(nil, opts...)
	if err != nil {
		panic(err)
	}
	return dec
}

var enc = buildEncoder()
var dec = buildDecoder()

func verifyCompression(data []byte) bool {
	compressed := enc.EncodeAll(data, nil)
	decompressed, err := dec.DecodeAll(compressed, make([]byte, 0, len(data)))
	if err != nil {
		panic(err)
	}

	return bytes.Equal(data, decompressed)
}

func main() {
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}

	// Binary-search for the shortest prefix of the input that fails the
	// round trip. sort.Search assumes the predicate flips from false to
	// true exactly once; the check below verifies that the boundary it
	// found really does fail.
	idx := sort.Search(len(data), func(i int) bool {
		return !verifyCompression(data[:i])
	})
	if verifyCompression(data[:idx]) {
		panic("missing compression error")
	}

	// Then trim from the front: find the smallest start offset at which
	// the slice round-trips correctly, and step back one byte so that
	// data[startIdx:idx] still fails.
	startIdx := sort.Search(idx, func(i int) bool {
		return verifyCompression(data[i:idx])
	}) - 1
	if verifyCompression(data[startIdx:idx]) {
		panic("missing compression error")
	}

	println("minimal example from", startIdx, "to", idx)

	f, err := os.OpenFile(os.Args[1]+"-minimized", os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0666)
	if err != nil {
		panic(err)
	}
	_, err = f.Write(data[startIdx:idx])
	if err != nil {
		panic(err)
	}
	err = f.Close()
	if err != nil {
		panic(err)
	}
}

Run it using

go mod init test
go mod tidy
go run main.go correct-3d0e366bad4a5e9b443f407b114756a0f5a8153dbc242e0b2601c707136815eb.bin
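
On an affected version, the program prints the minimal failing range and writes it to a new file named after the input with a -minimized suffix. On a fixed version, the round trip never fails, so sort.Search finds no failing prefix and the program panics with "missing compression error".
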
@MichaelEischer (Author)

As restic 0.15.2 is not affected, the regression must have occurred between v1.16.0 and v1.16.7.

@klauspost (Owner)

Thanks for the detailed report. I will set up the reproducer and look for a fix tomorrow.

If at all possible, I will also see whether the corrupted data can be recovered.

@klauspost (Owner)

I can reproduce the issue just by plugging the input into the fuzz test.

@klauspost (Owner)

I have the issue isolated, but I still need to find the root cause - even though I plan to add more safety code around that area.

Unfortunately there doesn't seem to be a recovery, since it is an "offset 0" self-reference that has been added.
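
For context, a sketch of why an offset of 0 is unrecoverable: in LZ77-style formats such as zstd, a match copies length bytes from offset bytes behind the current end of the decoded output, so an offset of 0 points at the very byte being produced and there is no source data to copy. A minimal illustration in Go - illustrative only, not the library's actual decoder:

package main

import "fmt"

// copyMatch appends a back-reference of length bytes located offset bytes
// behind the current end of out. It copies byte by byte so overlapping
// matches work, as in LZ77/zstd.
func copyMatch(out []byte, offset, length int) ([]byte, error) {
	if offset <= 0 || offset > len(out) {
		// Offset 0 would copy the byte that is being produced right
		// now: a self-reference with no source data to recover.
		return out, fmt.Errorf("invalid match offset %d", offset)
	}
	for i := 0; i < length; i++ {
		out = append(out, out[len(out)-offset])
	}
	return out, nil
}

func main() {
	out := []byte("abc")
	out, _ = copyMatch(out, 1, 3) // valid overlap: repeats 'c' -> "abcccc"
	fmt.Println(string(out))

	if _, err := copyMatch(out, 0, 3); err != nil {
		fmt.Println(err) // offset 0 must be rejected
	}
}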

@klauspost (Owner)

Pretty crazy setup needed for this. I will send a PR with the fix and details.

klauspost added a commit that referenced this issue Oct 22, 2023
Regression from #784 and followup #793

Fixes #875

A 0-offset backreference was possible when "improve" succeeded twice in a row in the "skipBeginning" part, finding only 2 (previously unmatched) length-4 matches, but where the start offset decreased by 2 in both cases.

This would result in output where the end offset was equal to the next 's', thereby creating a self-reference.

Add a general check in "improve" and simply reject these. This will also guard against similar issues in the future.

This also hints at some potentially suboptimal hash indexing - but I will take that improvement separately.

Fuzz test set updated.
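
In spirit, the guard described above amounts to rejecting any "improved" candidate whose encoded back-reference distance would be zero. A minimal sketch of the idea, assuming hypothetical names - the real function and field names in github.com/klauspost/compress/zstd differ:

package zstdguard

// offsetIsValid is a hypothetical stand-in for the safety check described
// in the commit message above. s is the absolute input position where a
// candidate match starts, and src is the absolute position it would copy
// from.
func offsetIsValid(s, src int32) bool {
	// The encoded back-reference distance is s - src. A distance of 0
	// would copy from the byte currently being emitted - exactly the
	// "offset 0" self-reference that corrupted the output here.
	return s-src > 0
}
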
@klauspost (Owner)

Fix in #876 - I will leave fuzz tests running for a few hours and merge+release later today.

@MichaelEischer (Author)

Thanks a lot for fixing this so quickly!

@klauspost (Owner)

Merged and released as v1.17.2. 4+ hours of fuzzing hasn't turned up any regressions.
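
To pick up the fix, downstream users can bump the dependency, for example:

go get github.com/klauspost/compress@v1.17.2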

ItalyPaleAle added a commit to ItalyPaleAle/dapr-components-contrib that referenced this issue Feb 24, 2024
Fixes 2 possible data corruption issues. See:

- klauspost/compress#875
- klauspost/compress#922

Signed-off-by: ItalyPaleAle <43508+ItalyPaleAle@users.noreply.github.com>