Commit

Fixed chunk data corruption when querying back series using the blocks storage (#2400)

* Fixed chunk data corruption when querying back series using the blocks storage

Signed-off-by: Marco Pracucci <[email protected]>

* Added PR number to CHANGELOG

Signed-off-by: Marco Pracucci <[email protected]>
pracucci authored Apr 3, 2020
1 parent 3c899e6 commit 9b4841a
Showing 2 changed files with 17 additions and 2 deletions.
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -12,6 +12,7 @@
 * [ENHANCEMENT] Experimental TSDB: Added `cortex_querier_blocks_meta_synced`, which reflects current state of synced blocks over all tenants. #2392
 * [ENHANCEMENT] Added `cortex_distributor_latest_seen_sample_timestamp_seconds` metric to see how far behind Prometheus servers are in sending data. #2371
 * [ENHANCEMENT] FIFO cache to support eviction based on memory usage. The `-<prefix>.fifocache.size` CLI flag has been renamed to `-<prefix>.fifocache.max-size-items` as well as its YAML config option `size` renamed to `max_size_items`. Added `-<prefix>.fifocache.max-size-bytes` CLI flag and YAML config option `max_size_bytes` to specify memory limit of the cache. #2319
+* [BUGFIX] Experimental TSDB: fixed chunk data corruption when querying back series using the experimental blocks storage. #2400
 
 ## 1.0.0 / 2020-04-02
18 changes: 16 additions & 2 deletions pkg/querier/blocks_bucket_store_inmemory_server.go
@@ -30,8 +30,22 @@ func (s *bucketStoreSeriesServer) Send(r *storepb.SeriesResponse) error {
 		s.Warnings = append(s.Warnings, errors.New(r.GetWarning()))
 	}
 
-	if r.GetSeries() != nil {
-		s.SeriesSet = append(s.SeriesSet, r.GetSeries())
+	if recvSeries := r.GetSeries(); recvSeries != nil {
+		// Thanos uses a pool for the chunks and may use other pools in the future.
+		// Given we need to retain the reference after the pooled slices are recycled,
+		// we need to do a copy here. We prefer to stay on the safest side at this stage
+		// so we do a marshal+unmarshal to copy the whole series.
+		recvSeriesData, err := recvSeries.Marshal()
+		if err != nil {
+			return errors.Wrap(err, "marshal received series")
+		}
+
+		copiedSeries := &storepb.Series{}
+		if err = copiedSeries.Unmarshal(recvSeriesData); err != nil {
+			return errors.Wrap(err, "unmarshal received series")
+		}
+
+		s.SeriesSet = append(s.SeriesSet, copiedSeries)
 	}
 
 	return nil
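The fix copies each received series before retaining it, because the chunk bytes may live in a pool that the producer recycles after `Send` returns. The following is a minimal stand-alone sketch of the same aliasing problem and the copy-before-retain pattern; the `series` type and `copySeries` helper are hypothetical simplifications (the real code deep-copies the protobuf via `Marshal`/`Unmarshal` instead of copying slices directly):

```go
package main

import "fmt"

// series is a hypothetical stand-in for storepb.Series: its Chunks slice
// may point into a pooled buffer that the producer recycles later.
type series struct {
	Labels []string
	Chunks []byte
}

// copySeries makes a deep copy so the retained value no longer aliases
// pooled memory. The actual fix achieves this with a protobuf
// marshal+unmarshal round trip; here plain slice copies keep the
// sketch dependency-free.
func copySeries(s *series) *series {
	return &series{
		Labels: append([]string(nil), s.Labels...),
		Chunks: append([]byte(nil), s.Chunks...),
	}
}

func main() {
	pooled := []byte("chunk-data")
	orig := &series{Labels: []string{"__name__"}, Chunks: pooled}

	copied := copySeries(orig)

	// Simulate the pool recycling the buffer after Send returns:
	// the original still aliases it, the copy does not.
	copy(pooled, []byte("XXXXXXXXXX"))

	fmt.Println(string(orig.Chunks))   // corrupted by the recycled buffer
	fmt.Println(string(copied.Chunks)) // unaffected
}
```

Without the copy, the retained series would silently read whatever the pool wrote into the buffer next, which is exactly the corruption this commit addresses.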
