Bring 0.19.0 rc fixes to main. #3978

Merged 6 commits on Mar 26, 2021
36 changes: 21 additions & 15 deletions CHANGELOG.md
@@ -13,6 +13,7 @@ We use _breaking :warning:_ to mark changes that are not backward compatible (re
## Unreleased

### Added

- [#3977](https://github.com/thanos-io/thanos/pull/3977) Expose exemplars for `http_request_duration_seconds` histogram if tracing is enabled.
- [#3903](https://github.com/thanos-io/thanos/pull/3903) Store: Returning custom grpc code when reaching series/chunk limits.
- [#3919](https://github.com/thanos-io/thanos/pull/3919) Added `--web.disable-cors` flag to disable automatically setting CORS headers in each component that exposes an API.
@@ -24,35 +25,42 @@ We use _breaking :warning:_ to mark changes that are not backward compatible (re
- [#3960](https://github.com/thanos-io/thanos/pull/3960) Fix deduplication of equal alerts with different labels.

### Changed

- [#3948](https://github.com/thanos-io/thanos/pull/3948) Receiver: Adjust `http_request_duration_seconds` buckets for low latency requests.
- [#3856](https://github.com/thanos-io/thanos/pull/3856) Mixin: _breaking :warning:_ Introduce flexible multi-cluster/namespace mode for alerts and dashboards. Removes jobPrefix config option. Removes `namespace` by default.

### Removed

## [v0.19.0 - <in progress>](https://github.com/thanos-io/thanos/tree/release-0.19)
## [v0.19.0-rc.2](https://github.com/thanos-io/thanos/releases/tag/v0.19.0-rc.2) - 2021.03.24

### Added

- [#3862](https://github.com/thanos-io/thanos/pull/3862) Sidecar, Store, Query, Ruler, Receiver, Query-Frontend: Added request logging for gRPC and HTTP on the server side.
- [#3740](https://github.com/thanos-io/thanos/pull/3740) Query: Added `--query.default-step` flag to set default step.
- [#3700](https://github.com/thanos-io/thanos/pull/3700) ui: make old bucket viewer UI work with vanilla Prometheus blocks
- [#2641](https://github.com/thanos-io/thanos/issues/2641) Query Frontend: Added `--query-range.request-downsampled` flag enabling additional queries for downsampled data in case of empty or incomplete response to range request.
- [#3792](https://github.com/thanos-io/thanos/pull/3792) Receiver: Added `--tsdb.allow-overlapping-blocks` flag to allow overlapping tsdb blocks and enable vertical compaction
- [#3031](https://github.com/thanos-io/thanos/pull/3031) Compact/Sidecar/other writers: added `--hash-func`. If some function has been specified, writers calculate hashes using that function of each file in a block before uploading them. If those hashes exist in the `meta.json` file then Compact does not download the files if they already exist on disk and with the same hash. This also means that the data directory passed to Thanos Compact is only *cleared once at boot* or *if everything succeeds*. So, if you, for example, use persistent volumes on k8s and your Thanos Compact crashes or fails to make an iteration properly then the last downloaded files are not wiped from the disk. The directories that were created the last time are only wiped again after a successful iteration or if the previously picked up blocks have disappeared.
- [#3686](https://github.com/thanos-io/thanos/pull/3686) Query: Added federated metric metadata support.
- [#3846](https://github.com/thanos-io/thanos/pull/3846) Query: Added federated exemplars API support.
- [#3700](https://github.com/thanos-io/thanos/pull/3700) Compact/Web: Make old bucket viewer UI work with vanilla Prometheus blocks.
- [#3657](https://github.com/thanos-io/thanos/pull/3657) *: It's now possible to configure HTTP transport options for the S3 client.
- [#3752](https://github.com/thanos-io/thanos/pull/3752) Compact/Store: Added `--block-meta-fetch-concurrency` to configure the number of goroutines used for block metadata synchronization.
- [#3723](https://github.com/thanos-io/thanos/pull/3723) Query Frontend: Added `--query-range.request-downsampled` flag enabling additional queries for downsampled data in case of empty or incomplete response to range request.
- [#3579](https://github.com/thanos-io/thanos/pull/3579) Cache: Added inmemory cache for caching bucket.
- [#3792](https://github.com/thanos-io/thanos/pull/3792) Receiver: Added `--tsdb.allow-overlapping-blocks` flag to allow overlapping tsdb blocks and enable vertical compaction.
- [#3740](https://github.com/thanos-io/thanos/pull/3740) Query: Added `--query.default-step` flag to set default step. Useful when your tenant scrape interval is stable and far from default UI's 1s.
- [#3686](https://github.com/thanos-io/thanos/pull/3686) Query/Sidecar: Added metric metadata API support. You can now configure your Querier to fetch Prometheus metrics metadata from leaf Prometheus-es!
- [#3031](https://github.com/thanos-io/thanos/pull/3031) Compact/Sidecar/Receive/Rule: Added `--hash-func`. If some function has been specified, writers calculate hashes using that function of each file in a block before uploading them. If those hashes exist in the `meta.json` file then Compact does not download the files if they already exist on disk and with the same hash. This also means that the data directory passed to Thanos Compact is only *cleared once at boot* or *if everything succeeds*. So, if you, for example, use persistent volumes on k8s and your Thanos Compact crashes or fails to make an iteration properly then the last downloaded files are not wiped from the disk. The directories that were created the last time are only wiped again after a successful iteration or if the previously picked up blocks have disappeared.
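
A sketch of the download-skipping idea behind `--hash-func` described in the entry above, assuming SHA256 is the configured function; `needsDownload`, the example path, and the recorded hash are hypothetical illustrations, not Thanos' actual meta.json handling:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// hashFile returns the hex-encoded SHA256 of a file on disk.
func hashFile(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

// needsDownload reports whether a block file has to be fetched from object
// storage again: it is skipped only when it already exists locally and its
// hash matches the one recorded for it in meta.json.
func needsDownload(localPath, recordedHash string) bool {
	got, err := hashFile(localPath)
	if err != nil {
		return true // Missing or unreadable file: download it.
	}
	return got != recordedHash
}

func main() {
	// The file does not exist here, so this prints "true" (download needed).
	fmt.Println(needsDownload("data/compact/group/block/chunks/000001", "0123abcd"))
}
```

Only files that are missing locally or whose hash no longer matches are then re-fetched, which is what lets Compact keep its data directory across crashes.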

### Fixed

- [#3773](https://github.com/thanos-io/thanos/pull/3773) Compact: Pad compaction planner size check
- [#3795](https://github.com/thanos-io/thanos/pull/3795) s3: A truncated "get object" response is reported as error.
- [#3705](https://github.com/thanos-io/thanos/pull/3705) Store: Fix race condition leading to failing queries or possibly incorrect query results.
- [#3661](https://github.com/thanos-io/thanos/pull/3661) Compact: deletion-mark.json is now deleted last; deleting it earlier could in theory cause store gateway load or query errors for a block that is being deleted.
- [#3760](https://github.com/thanos-io/thanos/pull/3760) Store: Fix panic caused by a race condition happening on concurrent index-header reader usage and unload, when `--store.enable-index-header-lazy-reader` is enabled.
- [#3759](https://github.com/thanos-io/thanos/pull/3759) Store: Fix panic caused by a race condition happening on concurrent index-header lazy load and unload, when `--store.enable-index-header-lazy-reader` is enabled.
- [#3773](https://github.com/thanos-io/thanos/pull/3773) Compact: Fixed compaction planner size check, making sure we don't create too large blocks.
- [#3814](https://github.com/thanos-io/thanos/pull/3814) Store: Decreased memory utilisation while fetching block's chunks.
- [#3815](https://github.com/thanos-io/thanos/pull/3815) Receive: Improve handling of empty time series from clients.
- [#3795](https://github.com/thanos-io/thanos/pull/3795) s3: A truncated "get object" response is reported as error.
- [#3899](https://github.com/thanos-io/thanos/pull/3899) Receive: Correct the inference of client gRPC configuration.
- [#3943](https://github.com/thanos-io/thanos/pull/3943) Receive: Fixed memory regression introduced in v0.17.0.

### Changed

- [#3705](https://github.com/thanos-io/thanos/pull/3705) Store: Fix race condition leading to failing queries or possibly incorrect query results.
- [#3854](https://github.com/thanos-io/thanos/pull/3854) Mixin: Remove assumed metrics. Use `thanos_info` instead of `kube_pod_info` for dashboard selectors.
- [#3804](https://github.com/thanos-io/thanos/pull/3804) Ruler, Receive, Querier: Updated Prometheus dependency. TSDB characteristics might have changed.

## [v0.18.0](https://github.com/thanos-io/thanos/releases/tag/v0.18.0) - 2021.01.27

@@ -74,8 +82,6 @@ We use _breaking :warning:_ to mark changes that are not backward compatible (re

- [#3567](https://github.com/thanos-io/thanos/pull/3567) Mixin: Reintroduce `thanos_objstore_bucket_operation_failures_total` alert.
- [#3527](https://github.com/thanos-io/thanos/pull/3527) Query Frontend: Fix query_range behavior when start/end times are the same.
- [#3760](https://github.com/thanos-io/thanos/pull/3760) Store: Fix panic caused by a race condition happening on concurrent index-header reader usage and unload, when `--store.enable-index-header-lazy-reader` is enabled.
- [#3759](https://github.com/thanos-io/thanos/pull/3759) Store: Fix panic caused by a race condition happening on concurrent index-header lazy load and unload, when `--store.enable-index-header-lazy-reader` is enabled.
- [#3560](https://github.com/thanos-io/thanos/pull/3560) Query Frontend: Allow separate label cache.
- [#3672](https://github.com/thanos-io/thanos/pull/3672) Rule: Prevent crashing due to `no such host error` when using `dnssrv+` or `dnssrvnoa+`.
- [#3461](https://github.com/thanos-io/thanos/pull/3461) Compact, Shipper, Store: Fixed panic when no external labels are set in block metadata.
2 changes: 1 addition & 1 deletion docs/storage.md
@@ -92,7 +92,7 @@ config:
trace:
enable: false
list_objects_version: ""
part_size: 134217728
part_size: 67108864
sse_config:
type: ""
kms_key_id: ""
3 changes: 3 additions & 0 deletions go.mod
@@ -77,6 +77,9 @@ replace (
github.com/bradfitz/gomemcache => github.com/themihai/gomemcache v0.0.0-20180902122335-24332e2d58ab
// Update to v1.1.1 to make sure windows CI pass.
github.com/elastic/go-sysinfo => github.com/elastic/go-sysinfo v1.1.1

// TODO: Remove this: https://github.com/thanos-io/thanos/issues/3967.
github.com/minio/minio-go/v7 => github.com/bwplotka/minio-go/v7 v7.0.11-0.20210324165441-f9927e5255a6
// Make sure Prometheus version is pinned as Prometheus semver does not include Go APIs.
github.com/prometheus/prometheus => github.com/prometheus/prometheus v1.8.2-0.20210315220929-1cba1741828b
github.com/sercand/kuberesolver => github.com/sercand/kuberesolver v2.4.0+incompatible
5 changes: 2 additions & 3 deletions go.sum
@@ -185,6 +185,8 @@ github.com/blang/semver v3.5.0+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnweb
github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869/go.mod h1:Ekp36dRnpXw/yCqJaO+ZrUyxD+3VXMFFr56k5XYrpB4=
github.com/bmizerany/pat v0.0.0-20170815010413-6226ea591a40/go.mod h1:8rLXio+WjiTceGBHIoTvn60HIbs7Hm7bcHjyrSqYB9c=
github.com/boltdb/bolt v1.3.1/go.mod h1:clJnj/oiGkjum5o1McbSZDSLxVThjynRyGBgiAx27Ps=
github.com/bwplotka/minio-go/v7 v7.0.11-0.20210324165441-f9927e5255a6 h1:h9SZ0jmAKjtrZF6iZ77/jdXdHr+Usn29itI669SVRp4=
github.com/bwplotka/minio-go/v7 v7.0.11-0.20210324165441-f9927e5255a6/go.mod h1:td4gW1ldOsj1PbSNS+WYK43j+P1XVhX/8W8awaYlBFo=
github.com/c-bata/go-prompt v0.2.2/go.mod h1:VzqtzE2ksDBcdln8G7mk2RX9QyGjH+OVqOCSiVIqS34=
github.com/casbin/casbin/v2 v2.1.2/go.mod h1:YcPU1XXisHhLzuxH9coDNf2FbKpjGlbCg3n9yuLkIJQ=
github.com/cenkalti/backoff v0.0.0-20181003080854-62661b46c409/go.mod h1:90ReRw6GdpyfrHakVjL/QHaoyV4aDUVVkXQJJJ3NXXM=
@@ -888,9 +890,6 @@ github.com/minio/md5-simd v1.1.0 h1:QPfiOqlZH+Cj9teu0t9b1nTBfPbyTl16Of5MeuShdK4=
github.com/minio/md5-simd v1.1.0/go.mod h1:XpBqgZULrMYD3R+M28PcmP0CkI7PEMzB3U77ZrKZ0Gw=
github.com/minio/minio-go/v6 v6.0.44/go.mod h1:qD0lajrGW49lKZLtXKtCB4X/qkMf0a5tBvN2PaZg7Gg=
github.com/minio/minio-go/v6 v6.0.56/go.mod h1:KQMM+/44DSlSGSQWSfRrAZ12FVMmpWNuX37i2AX0jfI=
github.com/minio/minio-go/v7 v7.0.2/go.mod h1:dJ80Mv2HeGkYLH1sqS/ksz07ON6csH3S6JUMSQ2zAns=
github.com/minio/minio-go/v7 v7.0.10 h1:1oUKe4EOPUEhw2qnPQaPsJ0lmVTYLFu03SiItauXs94=
github.com/minio/minio-go/v7 v7.0.10/go.mod h1:td4gW1ldOsj1PbSNS+WYK43j+P1XVhX/8W8awaYlBFo=
github.com/minio/sha256-simd v0.1.1 h1:5QHSlgo3nt5yKOJrC7W8w7X+NFl8cMPZm96iu8kKUJU=
github.com/minio/sha256-simd v0.1.1/go.mod h1:B5e1o+1/KgNmWrSQK08Y6Z1Vb5pwIktudl0J58iy0KM=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
4 changes: 3 additions & 1 deletion pkg/compact/compact.go
@@ -957,7 +957,9 @@ func (c *BucketCompactor) Compact(ctx context.Context) (rerr error) {

ignoreDirs := []string{}
for _, gr := range groups {
ignoreDirs = append(ignoreDirs, gr.Key())
for _, grID := range gr.IDs() {
ignoreDirs = append(ignoreDirs, filepath.Join(gr.Key(), grID.String()))
}
}

if err := runutil.DeleteAll(c.compactDir, ignoreDirs...); err != nil {
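
The hunk above switches the cleanup exclusion list from one entry per compaction group to one entry per block inside each group, so a crash mid-iteration no longer wipes blocks that were already downloaded. A small sketch of how those relative paths are built, with a hypothetical group key and block ULIDs standing in for `gr.Key()` and `gr.IDs()`:

```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	groupKey := "0@17241709254077376921"
	blockIDs := []string{"01F1A2B3C4D5E6F7G8H9J0K1M2", "01F1A2B3C4D5E6F7G8H9J0K1M3"}

	var ignoreDirs []string
	for _, id := range blockIDs {
		// One entry per block, relative to the compaction data directory, so
		// cleanup keeps exactly the blocks the current groups still need.
		ignoreDirs = append(ignoreDirs, filepath.Join(groupKey, id))
	}
	fmt.Println(ignoreDirs) // Prints the group-key/block-ULID pairs.
}
```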
91 changes: 91 additions & 0 deletions pkg/compactv2/compactor_test.go
@@ -306,6 +306,97 @@ func TestCompactor_WriteSeries_e2e(t *testing.T) {
NumChunks: 2,
},
},
{
name: "1 blocks + delete modifier. For deletion request, full match is required. Delete the first two series",
input: [][]seriesSamples{
{
{lset: labels.Labels{{Name: "a", Value: "1"}, {Name: "b", Value: "2"}},
chunks: [][]sample{{{0, 0}, {1, 1}, {2, 2}}, {{10, 11}, {11, 11}, {20, 20}}}},
{lset: labels.Labels{{Name: "a", Value: "1"}, {Name: "b", Value: "2"}, {Name: "foo", Value: "bar"}},
chunks: [][]sample{{{0, 0}, {1, 1}, {2, 2}}, {{10, 11}, {11, 11}, {20, 20}}}},
{lset: labels.Labels{{Name: "a", Value: "1"}},
chunks: [][]sample{{{0, 0}, {1, 1}, {2, 2}, {10, 12}, {11, 11}, {20, 20}}}},
{lset: labels.Labels{{Name: "b", Value: "2"}},
chunks: [][]sample{{{0, 0}, {1, 1}, {2, 2}, {10, 12}, {11, 11}, {20, 20}}}},
{lset: labels.Labels{{Name: "c", Value: "1"}},
chunks: [][]sample{{{0, 0}, {1, 1}, {2, 2}, {10, 12}, {11, 11}, {20, 20}}}},
},
},
modifiers: []Modifier{WithDeletionModifier(
metadata.DeletionRequest{
Matchers: []*labels.Matcher{
labels.MustNewMatcher(labels.MatchEqual, "a", "1"),
labels.MustNewMatcher(labels.MatchEqual, "b", "2"),
},
})},
expected: []seriesSamples{
{lset: labels.Labels{{Name: "a", Value: "1"}},
chunks: [][]sample{{{0, 0}, {1, 1}, {2, 2}, {10, 12}, {11, 11}, {20, 20}}}},
{lset: labels.Labels{{Name: "b", Value: "2"}},
chunks: [][]sample{{{0, 0}, {1, 1}, {2, 2}, {10, 12}, {11, 11}, {20, 20}}}},
{lset: labels.Labels{{Name: "c", Value: "1"}},
chunks: [][]sample{{{0, 0}, {1, 1}, {2, 2}, {10, 12}, {11, 11}, {20, 20}}}},
},
expectedChanges: "Deleted {a=\"1\", b=\"2\"} [{0 20}]\nDeleted {a=\"1\", b=\"2\", foo=\"bar\"} [{0 20}]\n",
expectedStats: tsdb.BlockStats{
NumSamples: 18,
NumSeries: 3,
NumChunks: 3,
},
},
{
name: "1 blocks + delete modifier. Deletion request contains non-equal matchers.",
input: [][]seriesSamples{
{
{lset: labels.Labels{{Name: "a", Value: "1"}},
chunks: [][]sample{{{0, 0}, {1, 1}, {2, 2}}, {{10, 11}, {11, 11}, {20, 20}}}},
{lset: labels.Labels{{Name: "a", Value: "2"}},
chunks: [][]sample{{{0, 0}, {1, 1}, {2, 2}}, {{10, 11}, {11, 11}, {20, 20}}}},
{lset: labels.Labels{{Name: "a", Value: "2"}, {Name: "foo", Value: "1"}},
chunks: [][]sample{{{0, 0}, {1, 1}, {2, 2}}, {{10, 11}, {11, 11}, {20, 20}}}},
{lset: labels.Labels{{Name: "a", Value: "2"}, {Name: "foo", Value: "bar"}},
chunks: [][]sample{{{0, 0}, {1, 1}, {2, 2}}, {{10, 11}, {11, 11}, {20, 20}}}},
{lset: labels.Labels{{Name: "a", Value: "3"}, {Name: "foo", Value: "baz"}},
chunks: [][]sample{{{0, 0}, {1, 1}, {2, 2}}, {{10, 11}, {11, 11}, {20, 20}}}},
{lset: labels.Labels{{Name: "foo", Value: "bat"}},
chunks: [][]sample{{{0, 0}, {1, 1}, {2, 2}}, {{10, 11}, {11, 11}, {20, 20}}}},

// Label a is present but with an empty value.
{lset: labels.Labels{{Name: "a", Value: ""}, {Name: "foo", Value: "bat"}},
chunks: [][]sample{{{0, 0}, {1, 1}, {2, 2}}, {{10, 11}, {11, 11}, {20, 20}}}},
// Series with unrelated labels.
{lset: labels.Labels{{Name: "c", Value: "1"}},
chunks: [][]sample{{{0, 0}, {1, 1}, {2, 2}}, {{10, 11}, {11, 11}, {20, 20}}}},
},
},
modifiers: []Modifier{WithDeletionModifier(
metadata.DeletionRequest{
Matchers: []*labels.Matcher{
labels.MustNewMatcher(labels.MatchNotEqual, "a", "1"),
labels.MustNewMatcher(labels.MatchRegexp, "foo", "^ba.$"),
},
})},
expected: []seriesSamples{
{lset: labels.Labels{{Name: "a", Value: ""}, {Name: "foo", Value: "bat"}},
chunks: [][]sample{{{0, 0}, {1, 1}, {2, 2}}, {{10, 11}, {11, 11}, {20, 20}}}},
{lset: labels.Labels{{Name: "a", Value: "1"}},
chunks: [][]sample{{{0, 0}, {1, 1}, {2, 2}}, {{10, 11}, {11, 11}, {20, 20}}}},
{lset: labels.Labels{{Name: "a", Value: "2"}},
chunks: [][]sample{{{0, 0}, {1, 1}, {2, 2}}, {{10, 11}, {11, 11}, {20, 20}}}},
{lset: labels.Labels{{Name: "a", Value: "2"}, {Name: "foo", Value: "1"}},
chunks: [][]sample{{{0, 0}, {1, 1}, {2, 2}}, {{10, 11}, {11, 11}, {20, 20}}}},
{lset: labels.Labels{{Name: "c", Value: "1"}},
chunks: [][]sample{{{0, 0}, {1, 1}, {2, 2}}, {{10, 11}, {11, 11}, {20, 20}}}},
{lset: labels.Labels{{Name: "foo", Value: "bat"}},
chunks: [][]sample{{{0, 0}, {1, 1}, {2, 2}}, {{10, 11}, {11, 11}, {20, 20}}}},
},
expectedChanges: "Deleted {a=\"2\", foo=\"bar\"} [{0 20}]\nDeleted {a=\"3\", foo=\"baz\"} [{0 20}]\n",
expectedStats: tsdb.BlockStats{
NumSamples: 36,
NumSeries: 6,
NumChunks: 12,
},
},
} {
t.Run(tcase.name, func(t *testing.T) {
tmpDir, err := ioutil.TempDir("", "test-series-writer")
5 changes: 1 addition & 4 deletions pkg/compactv2/modifiers.go
@@ -62,12 +62,9 @@ SeriesLoop:
for _, deletions := range d.d.deletions {
for _, m := range deletions.Matchers {
v := lbls.Get(m.Name)
if v == "" {
continue
}

// Only if all matchers in the deletion request are matched can we proceed to deletion.
if !m.Matches(v) {
if v == "" || !m.Matches(v) {
continue DeletionsLoop
}
}
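
The change above tightens deletion semantics: a deletion request now applies only when every matcher matches a label value that is actually present on the series, so an absent label (empty value) no longer lets a matcher pass. A dependency-free sketch of that full-match rule, using a simplified equality-only matcher rather than Prometheus' real matcher types:

```go
package main

import "fmt"

type matcher struct {
	name, value string
}

func (m matcher) matches(v string) bool { return v == m.value }

// deletes reports whether a deletion request (a set of matchers) applies to a
// series. Every matcher must match, and a label that is absent from the
// series (empty value) never satisfies a matcher.
func deletes(series map[string]string, matchers []matcher) bool {
	for _, m := range matchers {
		v := series[m.name]
		if v == "" || !m.matches(v) {
			return false
		}
	}
	return true
}

func main() {
	req := []matcher{{"a", "1"}, {"b", "2"}}
	fmt.Println(deletes(map[string]string{"a": "1", "b": "2"}, req))               // true: full match
	fmt.Println(deletes(map[string]string{"a": "1"}, req))                         // false: "b" is missing
	fmt.Println(deletes(map[string]string{"a": "1", "b": "2", "foo": "bar"}, req)) // true: extra labels are fine
}
```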
6 changes: 2 additions & 4 deletions pkg/objstore/s3/s3.go
@@ -66,9 +66,7 @@ var DefaultConfig = Config{
MaxIdleConnsPerHost: 100,
MaxConnsPerHost: 0,
},
// Minimum file size after which an HTTP multipart request should be used to upload objects to storage.
// Set to 128 MiB as in the minio client.
PartSize: 1024 * 1024 * 128,
PartSize: 1024 * 1024 * 64, // 64MB.
}

// Config stores the configuration for s3 bucket.
@@ -85,6 +83,7 @@ type Config struct {
TraceConfig TraceConfig `yaml:"trace"`
ListObjectsVersion string `yaml:"list_objects_version"`
// PartSize used for multipart upload. Only used if uploaded object size is known and larger than configured PartSize.
// NOTE we need to make sure this number does not produce more parts than 10 000.
PartSize uint64 `yaml:"part_size"`
SSEConfig SSEConfig `yaml:"sse_config"`
}
@@ -449,7 +448,6 @@ func (b *Bucket) Upload(ctx context.Context, name string, r io.Reader) error {
size = -1
}

// partSize cannot be larger than object size.
partSize := b.partSize
if size < int64(partSize) {
partSize = 0
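
The NOTE above refers to the S3 multipart limit of 10,000 parts per upload: with a fixed part size, the largest object a single multipart upload can carry is `PartSize * 10000`. A quick sanity check of the new 64MiB default (the part-count limit comes from S3/minio, not from this file):

```go
package main

import "fmt"

func main() {
	const (
		partSize = 64 * 1024 * 1024 // New default: 67108864 bytes.
		maxParts = 10000            // S3 multipart upload part-count limit.
	)
	maxObject := int64(partSize) * maxParts
	fmt.Printf("part size: %d bytes, max multipart object: %d bytes (~%d GiB)\n",
		partSize, maxObject, maxObject/(1024*1024*1024))
}
```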
55 changes: 55 additions & 0 deletions pkg/objstore/s3/s3_e2e_test.go
@@ -0,0 +1,55 @@
// Copyright (c) The Thanos Authors.
// Licensed under the Apache License 2.0.

package s3_test

import (
"bytes"
"context"
"strings"
"testing"

"github.com/cortexproject/cortex/integration/e2e"
e2edb "github.com/cortexproject/cortex/integration/e2e/db"
"github.com/go-kit/kit/log"
"github.com/thanos-io/thanos/pkg/objstore/s3"
"github.com/thanos-io/thanos/test/e2e/e2ethanos"

"github.com/thanos-io/thanos/pkg/testutil"
)

// Regression benchmark for https://github.com/thanos-io/thanos/issues/3917.
func BenchmarkUpload(b *testing.B) {
b.ReportAllocs()
ctx := context.Background()

s, err := e2e.NewScenario("e2e_bench_minio_client")
testutil.Ok(b, err)
b.Cleanup(e2ethanos.CleanScenario(b, s))

const bucket = "test"
m := e2edb.NewMinio(8080, bucket)
testutil.Ok(b, s.StartAndWaitReady(m))

bkt, err := s3.NewBucketWithConfig(log.NewNopLogger(), s3.Config{
Bucket: bucket,
AccessKey: e2edb.MinioAccessKey,
SecretKey: e2edb.MinioSecretKey,
Endpoint: m.HTTPEndpoint(),
Insecure: true,
}, "test-feed")
testutil.Ok(b, err)

buf := bytes.Buffer{}
buf.Grow(1024 * 1024 * 100) // 100MB.
word := "abcdefghij"
for i := 0; i < buf.Cap()/len(word); i++ {
_, _ = buf.WriteString(word)
}
str := buf.String()

b.ResetTimer()
for i := 0; i < b.N; i++ {
testutil.Ok(b, bkt.Upload(ctx, "test", strings.NewReader(str)))
}
}
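
The benchmark above starts a real Minio container through the Cortex e2e framework, so it needs a working Docker environment; once that is available it should run with Go's standard benchmarking flags (for example `go test -bench BenchmarkUpload -benchmem` from `pkg/objstore/s3`), though the exact invocation may depend on the repository's test setup.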
2 changes: 1 addition & 1 deletion pkg/objstore/s3/s3_test.go
@@ -208,7 +208,7 @@ http_config:

cfg, err := parseConfig(input)
testutil.Ok(t, err)
testutil.Assert(t, cfg.PartSize == 1024*1024*128, "when part size not set it should default to 128MiB")
testutil.Assert(t, cfg.PartSize == 1024*1024*64, "when part size not set it should default to 64MiB")

input2 := []byte(`bucket: "bucket-name"
endpoint: "s3-endpoint"