
Simplify M3DB config #1371

Merged: 54 commits merged into master from ra/simple-conf on Feb 14, 2019

Conversation

@richardartoul (Contributor) commented on Feb 12, 2019

Hard-code all of our defaults for pooling, the filesystem, and the M3 client so they don't need to be provided in every YAML file.
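For readers skimming the thread, the sketch below illustrates the general pattern the description refers to: hard-coded defaults that are filled in when a YAML section is omitted. It is only an illustration; the type names, fields, and default values are placeholders rather than the actual m3dbnode config structs, although the pointer-field style matches the snippets quoted later in this review.

// Sketch only: placeholder types and values, not the real m3dbnode config.
// Optional YAML fields are pointers so that "omitted" can be distinguished
// from an explicit zero; the defaults themselves are hard-coded in the binary.
package config

type poolPolicy struct {
	Size          *int     `yaml:"size"`
	LowWatermark  *float64 `yaml:"lowWatermark"`
	HighWatermark *float64 `yaml:"highWatermark"`
}

var (
	defaultPoolSize      = 4096 // placeholder default
	defaultLowWatermark  = 0.7  // placeholder default
	defaultHighWatermark = 1.0  // placeholder default
)

// initDefaults fills in any field the operator's YAML left unset.
func (p *poolPolicy) initDefaults() {
	if p.Size == nil {
		p.Size = &defaultPoolSize
	}
	if p.LowWatermark == nil {
		p.LowWatermark = &defaultLowWatermark
	}
	if p.HighWatermark == nil {
		p.HighWatermark = &defaultHighWatermark
	}
}

In this style, ordinary deployments can omit whole sections, while the m3dbnode-all-config.yml file mentioned later in the thread remains the place to see every option spelled out.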

codecov bot commented Feb 12, 2019

Codecov Report

Merging #1371 into master will decrease coverage by 3.4%.
The diff coverage is 0%.


@@            Coverage Diff            @@
##           master   #1371      +/-   ##
=========================================
- Coverage    70.6%   67.1%    -3.5%     
=========================================
  Files         823     666     -157     
  Lines       70369   59352   -11017     
=========================================
- Hits        49692   39870    -9822     
+ Misses      17455   16897     -558     
+ Partials     3222    2585     -637
Flag          Coverage Δ
#aggregator   78.7%  <ø>    (-3.6%)   ⬇️
#cluster      84.1%  <ø>    (-1.8%)   ⬇️
#collector    63.7%  <ø>    (ø)       ⬆️
#dbnode       74.8%  <0%>   (-6.1%)   ⬇️
#m3em         57%    <ø>    (-16.2%)  ⬇️
#m3ninx       68.4%  <ø>    (-5.8%)   ⬇️
#m3nsch       79.1%  <ø>    (+27.9%)  ⬆️
#metrics      17.6%  <ø>    (ø)       ⬆️
#msg          74.9%  <ø>    (-0.3%)   ⬇️
#query        69.1%  <ø>    (+4.7%)   ⬆️
#x            72.2%  <ø>    (-3.9%)   ⬇️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 4f3fd6e...0df88d3.

codecov bot commented Feb 12, 2019

Codecov Report

Merging #1371 into master will decrease coverage by 0.1%.
The diff coverage is 62.4%.


@@           Coverage Diff            @@
##           master   #1371     +/-   ##
========================================
- Coverage    70.7%   70.5%   -0.2%     
========================================
  Files         823     823             
  Lines       70484   70754    +270     
========================================
+ Hits        49840   49890     +50     
- Misses      17415   17569    +154     
- Partials     3229    3295     +66
Flag          Coverage Δ
#aggregator   82.3%  <ø>      (-0.1%)  ⬇️
#cluster      85.9%  <ø>      (ø)      ⬆️
#collector    63.7%  <ø>      (ø)      ⬆️
#dbnode       80.4%  <62.1%>  (-0.6%)  ⬇️
#m3em         73.2%  <ø>      (ø)      ⬆️
#m3ninx       73.9%  <ø>      (-0.3%)  ⬇️
#m3nsch       51.1%  <ø>      (ø)      ⬆️
#metrics      17.6%  <ø>      (ø)      ⬆️
#msg          74.9%  <ø>      (ø)      ⬆️
#query        64.7%  <100%>   (ø)      ⬆️
#x            75.6%  <ø>      (-0.5%)  ⬇️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 7450997...d2dffb6.

@richardartoul changed the title from "[WIP - Dont Review] - Simplify M3DB config" to "Simplify M3DB config - Part 1" on Feb 12, 2019
@richardartoul requested review from benraskin92, robskillington, prateek, and justinjc, and removed the review request for benraskin92, on February 12, 2019 at 21:30
src/cmd/services/m3dbnode/config/fs.go (review thread, outdated, resolved)
src/cmd/services/m3dbnode/config/fs.go (review thread, resolved)
src/dbnode/server/server.go (review thread, outdated, resolved)
@richardartoul changed the title from "Simplify M3DB config - Part 1" to "Simplify M3DB config" on Feb 12, 2019
src/cmd/services/m3dbnode/config/config.go (review thread, outdated, resolved)
src/cmd/services/m3dbnode/config/config.go (review thread, resolved)
src/cmd/services/m3dbnode/config/fs.go (review thread, outdated, resolved)
src/cmd/services/m3dbnode/config/fs.go (review thread, resolved)
Richard Artoul added 2 commits February 13, 2019 09:42
@richardartoul (Contributor, Author) commented:

@benraskin92 Yeah, I'll add a "complete" YAML file that has everything filled out, and then we can start adding comments to it or something.

@richardartoul (Contributor, Author) commented:

@benraskin92 I created a file called m3dbnode-all-config.yml with comments; we can point people to it when they want to see what all the options are.

src/cmd/services/m3dbnode/config/config_test.go (review thread, outdated, resolved)
"tagEncoder": defaultPoolPolicy,
"tagDecoder": defaultPoolPolicy,
"context": poolPolicyDefault{
size: 262144,

Collaborator:

nit: throw these numbers into consts?

Contributor Author:

I kinda like how it is; it reads like a YAML file / table. They basically are constants anyway, since everything is declared at the top.

Collaborator:

Fair enough. I was thinking specifically of 262144 and some of the other commonly used values, but I guess doing that would kinda imply that they're all tied together.
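For context, the named-constant alternative floated in this exchange would look roughly like the sketch below. This is only an illustration of the suggestion, not code from the PR: the constant name and the 4096 placeholder are made up, while poolPolicyDefault, defaultPoolPolicy, and the 262144 value come from the snippet above.

// Sketch of the reviewer's suggestion; simplified stand-ins for the real types.
package config

const defaultContextPoolSize = 262144

type poolPolicyDefault struct {
	size int
}

var defaultPoolPolicy = poolPolicyDefault{size: 4096} // placeholder value

var defaultPoolPolicies = map[string]poolPolicyDefault{
	"tagEncoder": defaultPoolPolicy,
	"tagDecoder": defaultPoolPolicy,
	"context":    {size: defaultContextPoolSize},
}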

if err := p.IdentifierPool.initDefaultsAndValidate("identifier"); err != nil {
	return err
}
if err := p.FetchBlockMetadataResultsPool.initDefaultsAndValidate("fetchBlockMetadataResults"); err != nil {

Collaborator:

I think you may have FetchBlockMetadataResultsPool twice here

Contributor Author:

Nah, it's dumb; there's a FetchBlockMetadataResultsPool and a FetchBlocksMetadataResultsPool (note the s).

Collaborator:

Ah, gotcha

if err := p.BlockMetadataSlicePool.initDefaultsAndValidate("blockMetadataSlice"); err != nil {
	return err
}
if err := p.BlocksMetadataPool.initDefaultsAndValidate("blocksMetadata"); err != nil {

Collaborator:

May have BlocksMetadataPool twice too

Contributor Author:

Same comment as above; they're different (note the s).

@@ -351,6 +351,7 @@ func newM3DBStorage(
etcdCfg = &cfg.ClusterManagement.Etcd

case len(cfg.Clusters) == 1 &&
cfg.Clusters[0].Client.EnvironmentConfig != nil &&

Collaborator:

Is it fine for this to fail if the configs are not set?

Contributor Author:

I don't follow. Could you elaborate?

Collaborator:

My bad, I somehow convinced myself that cfg.Clusters[0].Client.EnvironmentConfig could currently be nil here and still continue properly.

src/query/api/v1/httpd/handler.go (review thread, resolved)
xretry.NewOptions().
	SetInitialBackoff(500 * time.Millisecond).
	SetBackoffFactor(2).
	SetMaxRetries(3).

Collaborator:

nit: Is it intentional that the write retrier has a backoff factor of 3 with 2 retries, while the fetch retrier has them flipped?

Contributor Author:

Yeah, I'm just copy-pasting what was in our old YAML files.
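To make the asymmetry being discussed concrete, here is a rough sketch of the two retriers using the same xretry options builder as the snippet above. The values simply mirror the numbers mentioned in the comment (write: backoff factor 3, 2 retries; fetch: backoff factor 2, 3 retries), and the import path is an assumption that may vary by m3 version, so treat this as illustration rather than the merged defaults.

package example

import (
	"time"

	// Import path is an assumption; at the time of this PR the retry package
	// was the standalone m3x repo, and it has since moved under m3/src/x.
	xretry "github.com/m3db/m3x/retry"
)

// retryOptions sketches the write/fetch asymmetry discussed above.
func retryOptions() (writeOpts, fetchOpts xretry.Options) {
	// Write retrier: backoff factor 3, max 2 retries (per the review comment).
	writeOpts = xretry.NewOptions().
		SetInitialBackoff(500 * time.Millisecond).
		SetBackoffFactor(3).
		SetMaxRetries(2)

	// Fetch retrier: backoff factor 2, max 3 retries (as in the snippet above).
	fetchOpts = xretry.NewOptions().
		SetInitialBackoff(500 * time.Millisecond).
		SetBackoffFactor(2).
		SetMaxRetries(3)

	return writeOpts, fetchOpts
}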

samplingRate: 1.0
extended: none

limits:

Collaborator:

Any chance you could add something like this to the config? I've been meaning to add it to the defaults and the docs so that users get collision-free ID generation by default.

# Configuration setting for generating metric IDs from tags.
tagOptions:
  metricName: "_new"
  idScheme: quoted

if writeBatchPoolPolicy.Size == 0 {
	// If no value is set, calculate a reasonable value based on the commit log.
	var writeBatchPoolSize int
	if policy.WriteBatchPool.Size != nil {

Collaborator:

Might need to check that it's not 0 too?

Contributor Author:

0 is a valid pool size, I think; it basically turns off pooling.
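For readers following along, the sketch below spells out the distinction being made here: a nil Size means the operator didn't configure the pool, while an explicit 0 is honored and effectively disables pooling. The helper name and the fallback formula are hypothetical; only the nil-versus-zero behavior reflects the discussion above.

package example

// resolveWriteBatchPoolSize is a hypothetical helper illustrating the logic
// discussed above. A nil pointer means "not configured in YAML" and triggers
// a derived default; an explicit 0 is respected and effectively disables
// pooling. The fallback formula is a placeholder, not the one in the PR.
func resolveWriteBatchPoolSize(configured *int, commitLogQueueSize int) int {
	if configured != nil {
		return *configured
	}
	const writesPerBatch = 128 // placeholder ratio
	return commitLogQueueSize / writesPerBatch
}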


"tagEncoder": defaultPoolPolicy,
"tagDecoder": defaultPoolPolicy,
"context": poolPolicyDefault{
size: 262144,
Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Fair enough, I was thinking specifically for 262144 and some of the other values which are commonly used, but I guess doing that would kinda imply that they're all tied together

@richardartoul merged commit 1464f33 into master on Feb 14, 2019
@robskillington deleted the ra/simple-conf branch on February 14, 2019 at 18:26