
release-20.2: backupccl: fix rare failure in reading backup file #59744

Merged (1 commit, Feb 10, 2021)

Conversation

@pbardea (Contributor) commented Feb 3, 2021

Backport 1/1 commits from #59730.

/cc @cockroachdb/release


BackupManifests are compressed when written to ExternalStorage. When
reading a backup manifest, we check whether the content type indicates
that it is compressed (and thus needs to be decompressed). We do this
because some older backups were written uncompressed, so backup needs to
be able to detect whether a given manifest has been compressed.

However, very rarely (1 in 60,000 attempts in my case), the compressed
data might be detected as "application/vnd.ms-fontobject" rather than as
a gzipped file. This causes backup to skip decompression and then try to
unmarshal the still-compressed data. The file is misdetected because the
sniffing algorithm in http.DetectContentType looks at the 35th byte to
see if it matches a "magic pattern" before it checks whether the data is
in gzip format. And, sometimes, the 35th byte happens to match that
magic pattern.

This commit updates the check so that we only test whether the gzip
magic header bytes are present, rather than sniffing all possible
content types. The gzip header is not expected to conflict with the
first 6 bytes of normally generated protobuf messages that are
compressed.
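A minimal sketch of the described approach (the helper name and shape here are hypothetical, not necessarily the names used in backupccl): decide compression by checking for the gzip magic bytes directly, instead of sniffing a full content type.

```go
package main

import (
	"bytes"
	"fmt"
)

// gzipMagic is the two-byte header that begins every gzip stream
// (RFC 1952).
var gzipMagic = []byte{0x1f, 0x8b}

// isGZipped is a hypothetical helper illustrating the fix: report
// whether the data starts with the gzip magic bytes.
func isGZipped(data []byte) bool {
	return bytes.HasPrefix(data, gzipMagic)
}

func main() {
	fmt.Println(isGZipped([]byte{0x1f, 0x8b, 0x08})) // true
	fmt.Println(isGZipped([]byte("plain manifest"))) // false
}
```

Because the check is a fixed two-byte prefix test, it cannot be fooled by payload bytes deeper in the stream, unlike a general-purpose content-type sniffer.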

Closes #59685.
Closes #54550.

Release note (bug fix): Fixes a rare bug where reading a previously
written backup could fail with an error.

@pbardea pbardea requested review from dt and a team February 3, 2021 03:05
@cockroach-teamcity (Member) commented: This change is Reviewable

@pbardea (Contributor, Author) commented Feb 4, 2021

Note that I think the Bazel CI is expected to be failing on the release-* branches, so this should be ready for a review.

@pbardea pbardea merged commit bd46ccc into cockroachdb:release-20.2 Feb 10, 2021