store: discard unneeded information directly (#4750)
Discard unneeded data directly by calling `Discard` instead of copying it to
`io.Discard`. The latter has a `sync.Pool` underneath from which it
retrieves byte slices, reads the data into them, and then immediately
throws those bytes away. So, save some time by discarding the unneeded
bytes directly.
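
For context, here is a minimal sketch (not part of the commit; names and values are illustrative) contrasting the two ways of skipping bytes from a buffered reader:

```go
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"io"
	"log"
)

func main() {
	payload := bytes.Repeat([]byte{0xAB}, 1024)

	// Old approach: copy the unwanted bytes into io.Discard (ioutil.Discard
	// in pre-1.16 code). Its ReadFrom grabs a scratch []byte from an
	// internal sync.Pool, reads the data into it, and immediately throws
	// it away.
	r1 := bufio.NewReader(bytes.NewReader(payload))
	written, err := io.CopyN(io.Discard, r1, 512) // written is int64
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("copied and dropped:", written)

	// New approach: skip the bytes inside the bufio.Reader itself, so no
	// intermediate buffer is filled just to be thrown away.
	r2 := bufio.NewReader(bytes.NewReader(payload))
	discarded, err := r2.Discard(512) // discarded is int
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("discarded:", discarded)
}
```

Note that `(*bufio.Reader).Discard` returns an `int` rather than the `int64` returned by `io.CopyN`, which is why the `written` variable changes type in the diff below.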

Comparison:

```
name                            old time/op    new time/op    delta
BlockSeries/concurrency:_1-16     8.81ms ± 3%    8.35ms ± 7%  -5.26%  (p=0.000 n=69+76)
BlockSeries/concurrency:_2-16     4.76ms ± 5%    4.36ms ± 5%  -8.41%  (p=0.000 n=80+74)
BlockSeries/concurrency:_4-16     2.83ms ± 4%    2.70ms ± 6%  -4.82%  (p=0.000 n=77+80)
BlockSeries/concurrency:_8-16     2.24ms ± 7%    2.21ms ± 5%  -1.20%  (p=0.002 n=80+78)
BlockSeries/concurrency:_16-16    2.36ms ± 7%    2.24ms ± 8%  -5.29%  (p=0.000 n=78+76)
BlockSeries/concurrency:_32-16    3.53ms ±10%    3.42ms ± 9%  -3.23%  (p=0.000 n=79+80)

name                            old alloc/op   new alloc/op   delta
BlockSeries/concurrency:_1-16     5.19MB ± 8%    5.17MB ± 5%    ~     (p=0.243 n=79+76)
BlockSeries/concurrency:_2-16     5.34MB ± 6%    5.27MB ± 8%  -1.31%  (p=0.006 n=79+79)
BlockSeries/concurrency:_4-16     5.28MB ±10%    5.28MB ± 9%    ~     (p=0.641 n=80+79)
BlockSeries/concurrency:_8-16     5.33MB ±12%    5.39MB ± 8%    ~     (p=0.143 n=80+77)
BlockSeries/concurrency:_16-16    6.39MB ± 9%    6.16MB ±12%  -3.66%  (p=0.000 n=75+78)
BlockSeries/concurrency:_32-16    9.20MB ±18%    9.03MB ±18%    ~     (p=0.061 n=79+80)

name                            old allocs/op  new allocs/op  delta
BlockSeries/concurrency:_1-16      31.6k ± 4%     31.7k ± 3%    ~     (p=0.325 n=80+76)
BlockSeries/concurrency:_2-16      31.9k ± 2%     30.9k ± 3%  -3.37%  (p=0.000 n=80+75)
BlockSeries/concurrency:_4-16      32.4k ± 3%     31.9k ± 4%  -1.39%  (p=0.000 n=80+80)
BlockSeries/concurrency:_8-16      32.2k ± 6%     32.5k ± 4%  +0.96%  (p=0.011 n=78+80)
BlockSeries/concurrency:_16-16     35.0k ± 7%     33.7k ± 8%  -3.70%  (p=0.000 n=78+76)
BlockSeries/concurrency:_32-16     51.6k ± 8%     50.6k ±10%  -1.81%  (p=0.012 n=80+80)
```
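
(The comparison above is in the format printed by `benchstat` from golang.org/x/perf when comparing two sets of `go test -bench` runs: p-values below 0.05 mark statistically significant deltas, and `~` marks differences it considers noise.)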

Signed-off-by: Giedrius Statkevičius <[email protected]>
GiedriusS authored Oct 8, 2021
1 parent 3040829 commit e342026
Showing 1 changed file with 3 additions and 3 deletions.
pkg/store/bucket.go:

```diff
@@ -2495,7 +2495,7 @@ func (r *bucketChunkReader) loadChunks(ctx context.Context, res []seriesEntry, a
 		readOffset = int(pIdxs[0].offset)
 
 		// Save a few allocations.
-		written  int64
+		written  int
 		diff     uint32
 		chunkLen int
 		n        int
@@ -2504,11 +2504,11 @@ func (r *bucketChunkReader) loadChunks(ctx context.Context, res []seriesEntry, a
 	for i, pIdx := range pIdxs {
 		// Fast forward range reader to the next chunk start in case of sparse (for our purposes) byte range.
 		for readOffset < int(pIdx.offset) {
-			written, err = io.CopyN(ioutil.Discard, bufReader, int64(pIdx.offset)-int64(readOffset))
+			written, err = bufReader.Discard(int(pIdx.offset) - int(readOffset))
 			if err != nil {
 				return errors.Wrap(err, "fast forward range reader")
 			}
-			readOffset += int(written)
+			readOffset += written
 		}
 		// Presume chunk length to be reasonably large for common use cases.
 		// However, declaration for EstimatedMaxChunkSize warns us some chunks could be larger in some rare cases.
```
