
storage: add constraint rule solver for allocation #8786

Closed
wants to merge 1 commit into from

Conversation

d4l3k
Contributor

@d4l3k d4l3k commented Aug 23, 2016

This builds on #8676, #8627.

The existing allocator is rather complicated and hard to understand. This is an attempt to make it easier to understand by having composable rules.

Rules are represented as a single function that returns the candidacy of the
store as well as a float value representing the score. These scores are then
aggregated across all rules, and the stores are returned sorted by them.

Current rules:

  • ruleReplicasUniqueNodes ensures that no two replicas are put on the same node.
  • ruleNoProhibitedConstraints ensures that the candidate store has no prohibited constraints.
  • ruleRequiredConstraints ensures that the candidate store has the required constraints.
  • rulePositiveConstraints ensures that nodes that match more of the positive constraints are given higher priority.
  • ruleDiversity ensures that nodes that have the fewest locality tiers in common are given higher priority.
  • ruleCapacity prioritizes placing data on empty nodes when the choice is available and prevents data from going onto mostly full nodes.
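The rule/score aggregation described above can be sketched roughly as follows. This is a hedged illustration of the design, not the PR's actual code; the identifiers (`candidate`, `rule`, `applyRules`) are hypothetical.

```go
package main

import (
	"fmt"
	"sort"
)

// candidate pairs a store ID with its aggregate score.
type candidate struct {
	storeID int
	score   float64
}

// rule reports whether a store is a viable candidate and contributes a
// score; scores from all rules are summed per store.
type rule func(storeID int) (valid bool, score float64)

// applyRules runs every rule against every store, drops stores rejected by
// any rule, and returns the remainder sorted by descending total score.
func applyRules(storeIDs []int, rules []rule) []candidate {
	var out []candidate
	for _, id := range storeIDs {
		total, valid := 0.0, true
		for _, r := range rules {
			ok, s := r(id)
			if !ok {
				valid = false
				break
			}
			total += s
		}
		if valid {
			out = append(out, candidate{storeID: id, score: total})
		}
	}
	sort.Slice(out, func(i, j int) bool { return out[i].score > out[j].score })
	return out
}

func main() {
	rules := []rule{
		func(id int) (bool, float64) { return id != 2, 0 },        // reject store 2
		func(id int) (bool, float64) { return true, float64(id) }, // prefer higher IDs
	}
	fmt.Println(applyRules([]int{1, 2, 3}, rules)) // [{3 3} {1 1}]
}
```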

I'm slightly worried that this will be too slow when there are lots of stores and constraints.

cc @bdarnell @petermattis @BramGruneir



@d4l3k
Contributor Author

d4l3k commented Aug 24, 2016

Since Peter and Ben are both out, @tamird, would you mind looking at this?

@tamird
Contributor

tamird commented Aug 26, 2016

I haven't read the RFC. @bdarnell: will you be able to give this some
attention?

On Wed, Aug 24, 2016 at 2:25 PM, Tristan Rice [email protected]
wrote:

Since Peter and Ben are both out, @tamird https://github.com/tamird,
would you mind looking at this?



@bdarnell
Contributor

Review status: 0 of 40 files reviewed at latest revision, 7 unresolved discussions, some commit checks failed.


storage/rule_solver.go, line 54 [r4] (raw file):

// rule is a generic rule that can be used to solve a constraint problem.
type rule func(

In general I prefer to see types defined before their first use. Move rule and candidate up to the top of the file.

Document the inputs and outputs of this function where it's not trivial; in particular mention that the scores from different rules are summed.


storage/rule_solver.go, line 67 [r4] (raw file):

}

// solveInternal solves given constraints. See (*ruleSolver).solveInternal.

Comment doesn't match code.


storage/rule_solver.go, line 83 [r4] (raw file):

}

// solvel solves given constraints and returns the score.

Comment doesn't match code.


storage/rule_solver.go, line 164 [r4] (raw file):

// ruleNoProhibitedConstraints ensures that the candidate store has no
// prohibited constraints.
func ruleNoProhibitedConstraints(

I think it would be a little simpler to combine the three constraint rules into a single pass over Constraints with a switch on the Type field.
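The suggested single pass might look roughly like this sketch. The constraint categories mirror the RFC's required/prohibited/positive split, but the Go identifiers here are hypothetical, not the PR's actual definitions.

```go
package main

import "fmt"

type constraintType int

const (
	required constraintType = iota
	prohibited
	positive
)

type constraint struct {
	typ   constraintType
	value string
}

// ruleConstraints folds the three constraint rules into one pass over the
// constraints: a missing required constraint or a matched prohibited one
// rejects the store outright, while each matched positive constraint adds
// to the score.
func ruleConstraints(storeAttrs map[string]bool, constraints []constraint) (bool, float64) {
	score := 0.0
	for _, c := range constraints {
		matched := storeAttrs[c.value]
		switch c.typ {
		case required:
			if !matched {
				return false, 0
			}
		case prohibited:
			if matched {
				return false, 0
			}
		case positive:
			if matched {
				score++
			}
		}
	}
	return true, score
}

func main() {
	attrs := map[string]bool{"ssd": true, "us-west": true}
	ok, score := ruleConstraints(attrs, []constraint{
		{required, "us-west"},
		{prohibited, "hdd"},
		{positive, "ssd"},
	})
	fmt.Println(ok, score) // true 1
}
```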


storage/rule_solver.go, line 233 [r4] (raw file):

  sl StoreList,
) (candidate bool, score float64) {
  const weight = 0.1

How were these weights determined? I think diversity is more important than positive constraints. It would be good for all the weights to be collected in one place since they're only meaningful relative to each other. Maybe all the rules should return scores in the range 0-1 and then defaultRules is a list of (rule, weight) pairs instead of just the rules.
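The suggested shape — rules returning normalized scores, paired with explicit weights in one list — might look something like this sketch; the rule stand-ins and weight values are purely illustrative.

```go
package main

import "fmt"

// weightedRule pairs a rule returning a score normalized to [0, 1] with an
// explicit weight, so every relative weight lives in one place.
type weightedRule struct {
	run    func(storeID int) (valid bool, score float64)
	weight float64
}

// scoreStore sums weight * normalized score across all rules; any rule can
// still veto the store outright.
func scoreStore(id int, rules []weightedRule) (bool, float64) {
	total := 0.0
	for _, wr := range rules {
		ok, s := wr.run(id)
		if !ok {
			return false, 0
		}
		// Weights are only meaningful relative to each other.
		total += wr.weight * s
	}
	return true, total
}

func main() {
	defaultRules := []weightedRule{
		{func(int) (bool, float64) { return true, 1.0 }, 0.6}, // stand-in for diversity
		{func(int) (bool, float64) { return true, 0.5 }, 0.3}, // stand-in for positive constraints
	}
	ok, total := scoreStore(1, defaultRules)
	fmt.Println(ok, total)
}
```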


storage/rule_solver.go, line 236 [r4] (raw file):

  stores := map[roachpb.StoreID]roachpb.StoreDescriptor{}
  for _, store := range sl.stores {

How big is sl.stores? We only need descriptors for the stores in existing. We should probably be passing this map in from the outside instead of reconstructing it for every candidate. Should StoreList provide a by-id interface?


storage/rule_solver.go, line 248 [r4] (raw file):

          st := store.Locality.Tiers
          if len(st) < i || st[i].Key != tier.Key {
              panic("TODO(d4l3k): Node locality configurations are not equivalent")

This shouldn't be a panic - store descriptors are gossiped and cannot be updated atomically, so we have to allow for times when the set of tiers doesn't match. We'll have to just skip tiers that are not present.
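A tolerant comparison along these lines — skipping rather than panicking when tier sets diverge — could look like this sketch (identifiers are illustrative, not the PR's code):

```go
package main

import "fmt"

// tier is a single locality level.
type tier struct{ key, value string }

// diversityScore counts matching locality tiers from the top down. When the
// gossiped tier sets diverge (keys at the same position differ, or one list
// is shorter), it simply stops comparing instead of panicking, since store
// descriptors cannot be updated atomically.
func diversityScore(a, b []tier) int {
	shared := 0
	for i, t := range a {
		if i >= len(b) || b[i].key != t.key {
			break // tier sets diverge; skip the rest rather than panic
		}
		if b[i].value == t.value {
			shared++
		}
	}
	return shared
}

func main() {
	a := []tier{{"country", "us"}, {"datacenter", "us-1"}}
	b := []tier{{"country", "us"}, {"rack", "12"}} // second key mismatches
	fmt.Println(diversityScore(a, b)) // 1
}
```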


Comments from Reviewable

@petermattis
Collaborator

@BramGruneir, @cdo: I'm not going to be able to give this a thorough review due to the stability code yellow. Can you guys pick this up? In particular, it is worth thinking about the performance implications of this approach and what can be done about them. Something that has been percolating in my mind is whether we can precompute lists of candidate stores given a set of constraints. My guess is that the sets of required and prohibited constraints required by zone configs will translate into a small(ish) number of sets of candidate stores. So performing an allocation would be a map lookup to find the set of candidate stores, followed by an additional bit of filtering and processing to apply rules such as "unique nodes", "diversity" and "capacity" that can't be determined ahead of time.
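The precomputation idea described here could be as simple as keying a cache on a canonical signature of the required/prohibited constraints, so allocation becomes a map lookup plus per-allocation filtering. This is a hypothetical sketch, not CockroachDB code; all identifiers are made up.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// candidateCache maps a constraint-set signature to precomputed candidate
// store IDs.
type candidateCache struct {
	sets map[string][]int
}

// signature builds a stable key for a set of required/prohibited constraints,
// so equivalent sets written in different orders hit the same cache entry.
func signature(constraints []string) string {
	s := append([]string(nil), constraints...)
	sort.Strings(s)
	return strings.Join(s, ",")
}

// lookup returns the cached candidate set, computing and caching it on miss.
func (c *candidateCache) lookup(constraints []string, compute func() []int) []int {
	k := signature(constraints)
	if stores, ok := c.sets[k]; ok {
		return stores
	}
	stores := compute()
	c.sets[k] = stores
	return stores
}

func main() {
	cache := &candidateCache{sets: map[string][]int{}}
	computeCalls := 0
	compute := func() []int { computeCalls++; return []int{1, 3} }
	cache.lookup([]string{"+ssd", "-mem"}, compute)
	cache.lookup([]string{"-mem", "+ssd"}, compute) // same signature, cache hit
	fmt.Println(computeCalls) // 1
}
```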


Review status: 0 of 40 files reviewed at latest revision, 7 unresolved discussions, some commit checks failed.



@BramGruneir
Member

I'll give it a thorough review this morning.



@d4l3k
Contributor Author

d4l3k commented Aug 29, 2016

Review status: 0 of 40 files reviewed at latest revision, 7 unresolved discussions, some commit checks failed.


storage/rule_solver.go, line 233 [r4] (raw file):

Previously, bdarnell (Ben Darnell) wrote…

How were these weights determined? I think diversity is more important than positive constraints. It would be good for all the weights to be collected in one place since they're only meaningful relative to each other. Maybe all the rules should return scores in the range 0-1 and then defaultRules is a list of (rule, weight) pairs instead of just the rules.

I disagree; the whole point of positive constraints is that they're slightly more flexible than required constraints. If they take lower precedence than diversity, they're pretty much useless. For instance, if you have three datacenters but SSDs in only one of them and you add `constraints: [ssd]`, you'll end up with only one replica having an SSD.

storage/rule_solver.go, line 236 [r4] (raw file):

Previously, bdarnell (Ben Darnell) wrote…

How big is sl.stores? We only need descriptors for the stores in existing. We should probably be passing this map in from the outside instead of reconstructing it for every candidate. Should StoreList provide a by-id interface?

`sl.stores` is every store in the cluster. Which in current use isn't that many, but I agree we can make it better.

storage/rule_solver.go, line 248 [r4] (raw file):

Previously, bdarnell (Ben Darnell) wrote…

This shouldn't be a panic - store descriptors are gossiped and cannot be updated atomically, so we have to allow for times when the set of tiers doesn't match. We'll have to just skip tiers that are not present.

Yeah, I hadn't gotten around to implementing that.


@BramGruneir
Member

Reviewed 3 of 3 files at r1, 25 of 25 files at r2, 18 of 18 files at r3, 3 of 3 files at r4.
Review status: all files reviewed at latest revision, 24 unresolved discussions, some commit checks failed.


cli/cliflags/flags.go, line 63 [r3] (raw file):

<PRE>

  --locality=country=us,region=us-west,datacenter=us-west-1b,rack=12`,

Please add a number of other examples!


config/config.go, line 112 [r2] (raw file):

// MarshalYAML implements yaml.Marshaler.
func (c Constraints) MarshalYAML() (interface{}, error) {
  short := make([]string, len(c.Constraints))

Why short?


config/config.go, line 121 [r2] (raw file):

// UnmarshalYAML implements yaml.Unmarshaler.
func (c *Constraints) UnmarshalYAML(unmarshal func(interface{}) error) error {
  var shortConstraints []string

Again, why short?


config/config.proto, line 17 [r2] (raw file):

// Author: Tamir Duberstein ([email protected])

syntax = "proto2";

While here, why not make this a proto3? It should clean up a lot of the gogoproto stuff.


config/config.proto, line 78 [r2] (raw file):

  // order in which the constraints are stored is arbitrary and may change.
  // https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/expressive_zone_config.md#constraint-system
  optional Constraints constraints = 6 [(gogoproto.nullable) = false, (gogoproto.moretags) = "yaml:\"constraints,flow\""];

Why not just make this a repeated Constraint here? Why do you need the Constraints message?


config/config_test.go, line 376 [r2] (raw file):

// TestZoneConfigMarshalYAML makes sure that ZoneConfig is correctly marshaled
// to YAML and back.
func TestZoneConfigMarshalYAML(t *testing.T) {

Probably a good idea to add a bunch of different test cases here. Empty constraints, single constraints, a bunch of constraints.


config/config_test.go, line 387 [r2] (raw file):

      NumReplicas: 1,
      Constraints: config.Constraints{
          Constraints: []config.Constraint{

Again, this seems unnecessary. The double constraints: seems strange.


config/migration_test.go, line 33 [r2] (raw file):

      input, want proto.Message
  }{
      {

You could make these YAML instead and maybe save some space. Not sure if it's worth it.


config/migration_test.go, line 76 [r2] (raw file):

              },
          },
      },

Please add an error case in which the migration failure happens.


roachpb/metadata.go, line 207 [r3] (raw file):

// String returns a string representation of the Tier.
func (t Tier) String() string {
  return fmt.Sprintf("%s=%s", t.Key, t.Value)

Do we escape equal signs?


roachpb/metadata.proto, line 124 [r3] (raw file):

// Tier represents one level of the locality hiearchy.
message Tier {

I think (but I'm not sure) that if this is a proto3, you can just nest the tier inside the locality.


sql/config.go, line 35 [r2] (raw file):

  // Look in the zones table.
  if zoneVal := cfg.GetValue(sqlbase.MakeZoneKey(sqlbase.ID(id))); zoneVal != nil {
      // We're done.

This old comment can go.


storage/allocator.go, line 72 [r2] (raw file):

func (ae *allocatorError) Error() string {
  anyAll := "all attributes"

You're going to need to rename/update a lot of the places where attribute is used.


storage/allocator.go, line 205 [r2] (raw file):

  // matching here is lenient, and tries to find a target by relaxing an
  // attribute constraint, from last attribute to first.
  for attrs := append([]config.Constraint(nil), constraints.Constraints...); ; attrs = attrs[:len(attrs)-1] {

attrs seems like the wrong name now.

Also, this should relax in a predetermined order now since we have different levels of constraints.


storage/allocator_test.go, line 245 [r2] (raw file):

}

func TestAllocatorThreeDisksSameDC(t *testing.T) {

Can you add in the equivalent of this test with the new system?


storage/rule_solver.go, line 164 [r4] (raw file):

Previously, bdarnell (Ben Darnell) wrote…

I think it would be a little simpler to combine the three constraint rules into a single pass over Constraints with a switch on the Type field.

👍

storage/rule_solver_test.go, line 46 [r4] (raw file):

  storePool.mu.Lock()
  storePool.mu.stores[1].desc.Attrs.Attrs = []string{"a"}

Can you add a helper function to add all of these?


storage/store_pool.go, line 115 [r2] (raw file):

  }

  // Does the store match the attributes?

This obviously needs to be updated. I'm guessing that's in one of the next commits.



@d4l3k
Contributor Author

d4l3k commented Aug 29, 2016

Thanks, @BramGruneir! Also, not sure if you noticed, but there are a stack of PRs for this to split it. ZoneConfig format stuff should go in #8627 (which was already merged), CLI stuff should go in #8676.


Review status: all files reviewed at latest revision, 24 unresolved discussions, some commit checks failed.


cli/cliflags/flags.go, line 63 [r3] (raw file):

Previously, BramGruneir (Bram Gruneir) wrote…

Please add a number of other examples!

This info will be folded into the https://www.cockroachlabs.com/docs/configure-replication-zones.html docs. As far as I can tell, none of the other cliflags have redundant examples.

config/config.go, line 112 [r2] (raw file):

Previously, BramGruneir (Bram Gruneir) wrote…

Why short?

Because the marshaled YAML format is the short hand notation.

config/config.go, line 121 [r2] (raw file):

Previously, BramGruneir (Bram Gruneir) wrote…

Again, why short?

Because the marshaled YAML format is the short hand notation.

config/config.proto, line 78 [r2] (raw file):

Previously, BramGruneir (Bram Gruneir) wrote…

Why not just make this a repeated Constraint here? Why do you need the Constraints message?

We need the Constraints message to do MarshalYAML/UnmarshalYAML because you can't put methods on slices.

config/config_test.go, line 387 [r2] (raw file):

Previously, BramGruneir (Bram Gruneir) wrote…

Again, this seems unnecessary. The double constraints: seems strange.

We need the Constraints message to do MarshalYAML/UnmarshalYAML because you can't put methods on slices.

config/migration_test.go, line 33 [r2] (raw file):

Previously, BramGruneir (Bram Gruneir) wrote…

You could make these YAML instead and maybe save some space. Not sure if it's worth it.

The Google style guide is to always prefer Go structures in tests so you get type checking.

roachpb/metadata.go, line 207 [r3] (raw file):

Previously, BramGruneir (Bram Gruneir) wrote…

Do we escape equal signs?

We only get a `Tier` struct from `(*Tier) FromString` which ensures that there aren't any extraneous `=` signs. Thus no need to escape.

storage/allocator.go, line 205 [r2] (raw file):

Previously, BramGruneir (Bram Gruneir) wrote…

attrs seems like the wrong name now.

Also, this should relax in a predetermined order now since we have different levels of constraints.

This function gets completely rewritten in a later PR that integrates the RuleSolver.

storage/allocator_test.go, line 245 [r2] (raw file):

Previously, BramGruneir (Bram Gruneir) wrote…

Can you add in the equivalent of this test with the new system?

Later PR migrates all these tests to use the new system.

storage/store_pool.go, line 115 [r2] (raw file):

Previously, BramGruneir (Bram Gruneir) wrote…

This obviously needs to be updated. I'm guessing that's in one of the next commits.

Yup!


@BramGruneir
Member

Ah, I didn't know. I was just doing a review of this alone. I'll take a look at both of those.


Review status: all files reviewed at latest revision, 17 unresolved discussions, some commit checks failed.


cli/cliflags/flags.go, line 63 [r3] (raw file):

Previously, d4l3k (Tristan Rice) wrote…

This info will be folded into the https://www.cockroachlabs.com/docs/configure-replication-zones.html docs. As far as I can tell, none of the other cliflags have redundant examples.

The store flag has a large number of examples, but we should ask @jseldess what he thinks.

config/config.proto, line 78 [r2] (raw file):

Previously, d4l3k (Tristan Rice) wrote…

We need the Constraint message to do MarshalYAML/UnmarshalYAML cause you can't put methods on slices.

Sure, but you can put methods on type aliases. This doubles the size of the "shorthand" version of it. constraints/constraints looks really bad.
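For context, Go does allow methods on a defined (named) slice type, which is what makes the wrapper message avoidable. A toy sketch of that approach, with a made-up `MarshalShorthand` method standing in for the real yaml.Marshaler implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// Constraint mirrors the config.proto message name; the method below is a
// hypothetical stand-in, not the project's actual YAML marshaler.
type Constraint struct{ Key, Value string }

// Constraints is a defined slice type; defined types can carry methods,
// so no wrapper struct is needed just to attach marshaling behavior.
type Constraints []Constraint

// MarshalShorthand renders the constraints in the shorthand notation.
func (c Constraints) MarshalShorthand() string {
	parts := make([]string, len(c))
	for i, con := range c {
		parts[i] = con.Key + "=" + con.Value
	}
	return "[" + strings.Join(parts, ", ") + "]"
}

func main() {
	c := Constraints{{"datacenter", "us-west-1b"}, {"rack", "12"}}
	fmt.Println(c.MarshalShorthand()) // [datacenter=us-west-1b, rack=12]
}
```

With gogoproto, a `casttype` option is one way to have the generated field use such a named type, which is presumably what the later discussion of casttype refers to.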

config/config_test.go, line 387 [r2] (raw file):

Previously, d4l3k (Tristan Rice) wrote…

We need the Constraint message to do MarshalYAML/UnmarshalYAML cause you can't put methods on slices.

There must be a way. This is really ugly.


@d4l3k
Contributor Author

d4l3k commented Aug 29, 2016

Review status: all files reviewed at latest revision, 17 unresolved discussions, some commit checks failed.


config/config_test.go, line 387 [r2] (raw file):

Previously, BramGruneir (Bram Gruneir) wrote…

There must be a way. This is really ugly.

It's the same way the existing constraint system works with `roachpb.Attributes.Attrs`.

storage/rule_solver.go, line 54 [r4] (raw file):

Previously, bdarnell (Ben Darnell) wrote…

In general I prefer to see types defined before their first use. Move rule and candidate up to the top of the file.

Document the inputs and outputs of this function where it's not trivial; in particular mention that the scores from different rules are summed.

Done.

storage/rule_solver.go, line 67 [r4] (raw file):

Previously, bdarnell (Ben Darnell) wrote…

Comment doesn't match code.

Done.

storage/rule_solver.go, line 83 [r4] (raw file):

Previously, bdarnell (Ben Darnell) wrote…

Comment doesn't match code.

Done.

storage/rule_solver.go, line 164 [r4] (raw file):

Previously, BramGruneir (Bram Gruneir) wrote…

👍

Done.

storage/rule_solver_test.go, line 46 [r4] (raw file):

Previously, BramGruneir (Bram Gruneir) wrote…

Can you add a helper function to add all of these?

To `storePool`? Or something like `func setAttrsTiers(s *StorePool, attrs []string, tiers []roachpb.Tier)`?

I feel like it's pretty clear what's going on here and a helper function isn't going to save much.



@d4l3k
Contributor Author

d4l3k commented Aug 29, 2016

Latest updates also have a bunch of smallish changes that I made while waiting for comments.


Review status: 8 of 21 files reviewed at latest revision, 17 unresolved discussions, some commit checks pending.



@bdarnell
Contributor

Review status: 8 of 21 files reviewed at latest revision, 17 unresolved discussions, some commit checks failed.


config/config.proto, line 17 [r2] (raw file):

Previously, BramGruneir (Bram Gruneir) wrote…

While here, why not make this a proto3? It should clean up a lot of the gogoproto stuff.

Avoid unnecessary refactorings during the code yellow. There's also not consensus that moving to proto3 is desirable (#5891)

roachpb/metadata.proto, line 124 [r3] (raw file):

Previously, BramGruneir (Bram Gruneir) wrote…

I think (but I'm not sure) that if this is a proto3, you can just nest the tier inside the locality.

I think nested messages are supported in both versions, but since go doesn't support nested types they get very awkward names.

storage/rule_solver.go, line 233 [r4] (raw file):

Previously, d4l3k (Tristan Rice) wrote…

I disagree, the whole point of positive constraints is that they're slightly more flexible than required constraints. If they take less precedence than diversity, they're pretty much useless. For instance, if you have three datacenters and you only have SSDs in one datacenter and you add constraints: [ssd], you'll end up with only one replica having an SSD.

If SSDs are a requirement, they can be a required constraint instead of a positive one. My thinking is that unless there is a locality constraint to a single datacenter or a required constraint that cannot be satisfied otherwise, we should prefer to maximize survivability.

Maybe this means that we can't choose a single weight for positive constraints and if we're going to allow them we need to allow the admin to specify how important it is.



@d4l3k d4l3k mentioned this pull request Aug 30, 2016
@d4l3k
Contributor Author

d4l3k commented Aug 30, 2016

Review status: 8 of 21 files reviewed at latest revision, 16 unresolved discussions, some commit checks failed.


storage/rule_solver.go, line 233 [r4] (raw file):

Previously, bdarnell (Ben Darnell) wrote…

If SSDs are a requirement, they can be a required constraint instead of a positive one. My thinking is that unless there is a locality constraint to a single datacenter or a required constraint that cannot be satisfied otherwise, we should prefer to maximize survivability.

Maybe this means that we can't choose a single weight for positive constraints and if we're going to allow them we need to allow the admin to specify how important it is.

The way the RFC describes positive constraints is that they act very similar to required constraints, except when they can't be satisfied. Required constraints will block replication, whereas positive constraints will allow less than ideal matches. The only time the RFC describes using required constraints is for legal purposes where your DB going down is preferable to your data being in a different location.

If we want to change that behavior we should probably update the RFC. cc @petermattis



@BramGruneir
Member

Reviewed 20 of 20 files at r5, 3 of 3 files at r6.
Review status: all files reviewed at latest revision, 18 unresolved discussions, some commit checks failed.


cli/cliflags/flags.go, line 56 [r5] (raw file):

      Name: "locality",
      Description: `
Not fully implemented.

Could you explain what is or what is not implemented? Or link to an issue number or something?


config/config.proto, line 17 [r2] (raw file):

Previously, bdarnell (Ben Darnell) wrote…

Avoid unnecessary refactorings during the code yellow. There's also not consensus that moving to proto3 is desirable (#5891)

I see your point about avoiding refactorings during the code yellow. But the move to proto3 seems like the right thing, once we're more stable.

storage/rule_solver.go, line 201 [r6] (raw file):

  sl StoreList,
) (candidate bool, score float64) {
  const weight = 0.1

Can you move these weights up to the top of the file? So they can be visible together in case we need to change them.


config/config_test.go, line 387 [r2] (raw file):

Previously, d4l3k (Tristan Rice) wrote…

It's the same as the existing constraint system works with roachpb.Attributes.Attrs.

Doesn't mean it shouldn't be updated to be cleaner. I've always hated that it was attributes.attrs. Anyway, that's an argument for another time.


@d4l3k
Contributor Author

d4l3k commented Aug 30, 2016

Review status: all files reviewed at latest revision, 18 unresolved discussions, some commit checks failed.


cli/cliflags/flags.go, line 63 [r3] (raw file):

Previously, BramGruneir (Bram Gruneir) wrote…

store has a large number of examples, but we should ask @jseldess what he thinks.

Done in #8676.

cli/cliflags/flags.go, line 56 [r5] (raw file):

Previously, BramGruneir (Bram Gruneir) wrote…

Could you explain what is or what is not implemented? Or link to an issue number or something?

Done in #8676.

storage/rule_solver.go, line 201 [r6] (raw file):

Previously, BramGruneir (Bram Gruneir) wrote…

Can you move these weights up to the top of the file? So they can be visible together in case we need to change them.

Done.


@BramGruneir
Member

The rule solver looks good. So LGTM for that. Probably best to wait on Ben's approval too. Might want to rename the PR to remove the WIP as well.
:lgtm:


Reviewed 7 of 7 files at r7, 3 of 3 files at r8.
Review status: all files reviewed at latest revision, 14 unresolved discussions, some commit checks pending.



@bdarnell
Contributor

LGTM


Review status: all files reviewed at latest revision, 7 unresolved discussions, some commit checks failed.



@jseldess
Contributor

cli/cliflags/flags.go, line 63 [r3] (raw file):

Previously, d4l3k (Tristan Rice) wrote…

Done in #8676.

Way late here, but examples for `--locality` look helpful to me.


@d4l3k
Contributor Author

d4l3k commented Aug 31, 2016

Review status: 1 of 3 files reviewed at latest revision, 8 unresolved discussions, some commit checks pending.


config/config.proto, line 78 [r2] (raw file):

Previously, BramGruneir (Bram Gruneir) wrote…

Sure, but you can put methods on type aliases. This doubles the size of the "shorthand" version of it. constraints/constraints looks really bad.

I'm thinking about using casttype, but I'm worried that since constraints.Constraints is already checked into develop it'll mess up everyone else unless I write another migration function. Thoughts?


@BramGruneir
Member

config/config.proto, line 78 [r2] (raw file):

Previously, d4l3k (Tristan Rice) wrote…

I'm thinking about using casttype, but I'm worried that since constraints.Constraints is already checked into develop it'll mess up everyone else unless I write another migration function. Thoughts?

Is it checked into dev or master? If dev, then you have time to update it.


@d4l3k
Contributor Author

d4l3k commented Aug 31, 2016

config/config.proto, line 78 [r2] (raw file):

Previously, BramGruneir (Bram Gruneir) wrote…

Is it checked into dev or master? If dev, then you have time to update it.

It's in `develop`, but it'll still break everyone's cockroach-data because the default zone-config won't be able to load.


@BramGruneir
Member

config/config.proto, line 78 [r2] (raw file):

Previously, d4l3k (Tristan Rice) wrote…

It's in develop, but it'll still break everyone's cockroach-data because the default zone-config won't be able to load.

I don't think that's a big deal. But you can ping everyone on eng to make sure it's cool.


@d4l3k
Contributor Author

d4l3k commented Aug 31, 2016

I just switched the rule functions to take a solveState struct instead, since that makes it easier to add new fields that multiple rules might need. We also now correctly handle nodes whose locality tiers are set in different orders.
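The solveState refactor described here might look roughly like this sketch; the field and rule names are illustrative, not the PR's actual definitions.

```go
package main

import "fmt"

// solveState bundles everything a rule needs, so adding a new shared field
// does not change every rule's signature.
type solveState struct {
	storeID  int
	existing []int          // node IDs that already hold a replica
	tiers    map[int]string // example of shared state computed once per solve
}

// rule evaluates one store against the current solve state.
type rule func(state solveState) (valid bool, score float64)

// ruleReplicasUniqueNodes rejects a store whose node already has a replica.
func ruleReplicasUniqueNodes(state solveState) (bool, float64) {
	for _, id := range state.existing {
		if id == state.storeID {
			return false, 0
		}
	}
	return true, 0
}

func main() {
	ok, _ := ruleReplicasUniqueNodes(solveState{storeID: 2, existing: []int{1, 3}})
	fmt.Println(ok) // true
}
```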

@BramGruneir, if it's okay I think I'll make the constraint changes in a separate PR so it's easier to deal with.


Review status: 1 of 3 files reviewed at latest revision, 9 unresolved discussions, some commit checks pending.


storage/rule_solver.go, line 236 [r4] (raw file):

Previously, d4l3k (Tristan Rice) wrote…

sl.stores is every store in the cluster. Which in current use isn't that many, but I agree we can make it better.

Done.


@d4l3k d4l3k changed the title WIP storage: add constraint rule solver for allocation storage: add constraint rule solver for allocation Aug 31, 2016
Rules are represented as a single function that returns the candidacy of the
store as well as a float value representing the score. These scores are then
aggregated across all rules and the stores are returned sorted by them.

Current rules:
- ruleReplicasUniqueNodes ensures that no two replicas are put on the same node.
- ruleConstraints enforces that required and prohibited constraints are
  followed, and that stores with more positive constraints are ranked higher.
- ruleDiversity ensures that nodes that have the fewest locality tiers in common
  are given higher priority.
- ruleCapacity prioritizes placing data on empty nodes when the choice is
  available and prevents data from going onto mostly full nodes.
@d4l3k
Contributor Author

d4l3k commented Sep 2, 2016

Closed after merging superset #8959.
