
golang-custom-raft: add ability to execute post-election script #43

Open
jberkus opened this issue Nov 17, 2016 · 14 comments

Comments

@jberkus

jberkus commented Nov 17, 2016

In order to enable Kubernetes services which communicate with the PostgreSQL master, I need to update some annotations in the Kubernetes API. As such, I want to execute a script each time we finish a leader election in golang-custom-raft.

Thoughts on what's the best way to add this? Obviously we could add it to the Go code, but we don't necessarily want the dependencies in the main codebase, do we?

@doodles526
Contributor

Hmm, interesting problem @jberkus. I agree; I don't think I want to add an "execute arbitrary code" bit to the main codebase. Currently you can poll the /fsm/leader endpoint from the API and check for changes, so that's a possible stop-gap for now.

Example output from that is

> curl http://localhost:5000/fsm/leader | json_pp
{
   "data" : {
      "exists" : true,
      "leader" : {
         "connection_string" : "postgres://replication:[email protected]:5432/postgres",
         "name" : "postgresql0"
      }
   }
}

In the long term, we might look at implementing something in the API similar to etcd's watchers: https://coreos.com/etcd/docs/latest/api.html#waiting-for-a-change

But in general, I can't think of a good way to have the core code execute something on an event without bloating it beyond what I'm comfortable with.
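
If it helps as a stop-gap, here's a rough sketch in Go of what an external poller could look like, assuming the /fsm/leader response shape shown above (the struct names and poll interval are just illustrative, not part of the API):

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"time"
)

// leaderResponse mirrors the JSON shape returned by /fsm/leader above.
type leaderResponse struct {
	Data struct {
		Exists bool `json:"exists"`
		Leader struct {
			Name             string `json:"name"`
			ConnectionString string `json:"connection_string"`
		} `json:"leader"`
	} `json:"data"`
}

func main() {
	var lastLeader string
	for {
		resp, err := http.Get("http://localhost:5000/fsm/leader")
		if err != nil {
			log.Printf("poll failed: %v", err)
			time.Sleep(5 * time.Second)
			continue
		}
		var lr leaderResponse
		err = json.NewDecoder(resp.Body).Decode(&lr)
		resp.Body.Close()
		if err != nil {
			log.Printf("decode failed: %v", err)
		} else if lr.Data.Exists && lr.Data.Leader.Name != lastLeader {
			lastLeader = lr.Data.Leader.Name
			// React to the new leader here, e.g. update Kubernetes annotations.
			log.Printf("leader is now %s at %s", lastLeader, lr.Data.Leader.ConnectionString)
		}
		time.Sleep(5 * time.Second)
	}
}
```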

@codepope
Contributor

codepope commented Nov 18, 2016

Why not an option to run a script built into the code so that it's just an extension point for integration? (Serious question - just wondering what the rationale is for not doing it)

Dj


@doodles526
Contributor

My reasoning is that I want Governor to be a tool that provides auto-failover and exposes endpoints for external processes to query the state. Those processes can do with that information what they wish.

I'm always rather cautious about giving a process the ability to say "when an event happens, fork and execute this user-provided string". I know buffer overflows are supposed to be impossible with standard Go code, but there have been published exploits that use data races, the garbage collector, and slice internals to induce a malicious buffer overflow. So I'm a little hesitant to expose Governor to the possibility of an arbitrary code execution exploit.

@jberkus
Author

jberkus commented Nov 21, 2016

Well, any approach which requires polling has two major drawbacks:

  1. it requires an additional container, or an additional daemon inside the Governor/Postgres container, and I thought one goal of this new approach was to reduce external requirements?

  2. it adds latency between the time failover happens and the time the orchestration system learns about it, thus adding additional downtime.

@doodles526
Contributor

@jberkus Agreed with the argument against polling; I don't like polling either. Since adding the ability to run a script on failover is quite a bit easier than implementing a "watchable" API endpoint, I'll add that, though it may be deprecated in the future if we add such an endpoint. I'm probably too cautious with security for my own good.

@jberkus
Author

jberkus commented Nov 22, 2016

Just make the requirement that the external script has to be in a certain directory and owned by the same user as Governor. There's no real point in restricting it further; if someone can shell in as the user who owns the Governor process, they can do anything they want anyway.
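
Something like this is all I'm picturing (a rough Go sketch; the hooks directory, function name, and error messages are placeholders, and the ownership check is Unix-only):

```go
package hooks

import (
	"fmt"
	"os"
	"path/filepath"
	"syscall"
)

// hookDir is a hypothetical fixed location for post-election scripts.
const hookDir = "/etc/governor/hooks"

// checkHookScript rejects a script that lives outside hookDir or that is
// not owned by the user running the Governor process.
func checkHookScript(path string) error {
	if filepath.Dir(filepath.Clean(path)) != hookDir {
		return fmt.Errorf("hook %s is outside %s", path, hookDir)
	}
	fi, err := os.Stat(path)
	if err != nil {
		return err
	}
	st, ok := fi.Sys().(*syscall.Stat_t)
	if !ok {
		return fmt.Errorf("cannot determine ownership of %s", path)
	}
	if int(st.Uid) != os.Getuid() {
		return fmt.Errorf("hook %s is not owned by the governor user", path)
	}
	return nil
}
```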

@doodles526
Contributor

Sounds good. I'm on an on-call shift this Sunday, so I'll probably shell this out to pass the time during that. Expect a release probably Sunday evening

@jberkus
Author

jberkus commented Nov 25, 2016

So at a minimum, we'd need to pass the external script the result of the election, which would be one of three things:

  • leader
  • follower
  • down (as in, no longer part of a valid cluster)

@doodles526
Contributor

Cool. That fits with what I was going to do. There are a few reasons a failover could occur (dead-man switch, Governor recognizes the underlying PG is unrecoverable for some reason, loss of quorum). I wasn't planning on revealing this to the script, just passing it the current state (follower, leader, no leader available).
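
Roughly what I have in mind for the calling side, as a sketch (the helper name and state strings are placeholders, nothing is settled):

```go
package hooks

import (
	"log"
	"os"
	"os/exec"
)

// runHook executes the post-election script with the node's new state
// ("leader", "follower", or "no_leader") as its only argument.
func runHook(script, state string) {
	cmd := exec.Command(script, state)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Printf("post-election hook %s failed: %v", script, err)
	}
}
```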

@jberkus
Author

jberkus commented Nov 27, 2016

Discussion: do we want two different failure states?

  1. Can't join cluster
  2. Local instance is inoperable (postgres won't start, etc.)

Just thinking that the system might react differently to those two different situations.

@jberkus
Author

jberkus commented Nov 27, 2016

Relevant to this: in failure state (1), the node probably can't message the orchestration system anyway. The only time that state would really be relevant is if cluster networking is messed up, or if you have a "no valid leader" situation.

@doodles526
Contributor

Good point, although "can't join cluster" could mean two things:

  • Network issues for the node
  • Loss of quorum due to other nodes dropping out on their own (for whatever reason)

As I look through the possible state changes, there are actually five possible states in a failover. Right now they are:

  • I am the new leader
  • I am a new follower - This only happens when the Governor process launches.
  • Read-only, following no leader - When a leader step-down occurs but this particular node's WAL hasn't progressed far enough to claim the leader position (set in the maximum_lag_on_failover flag). The node is waiting for a new leader to be elected so it can follow it.
  • Read-only, following no leader - When there is no quorum, so no leader can exist until the cluster regains quorum (caused by network issues or other nodes dying).
  • PG has become unhealthy - Currently, if PG isn't healthy and pg_ctl errors while attempting to fix it, then Governor gets killed. There are a couple of dangerous situations with outcomes like this that I need to fix; I'll create issues for those.

That being said, I see three failure states I think are worth reporting:

  • maximum_lag_on_failover condition not met when trying to gain leadership (this will auto-resolve after another node declares leadership)
  • Quorum issues with the cluster
  • Unhealthy PG

@jberkus
Author

jberkus commented Nov 27, 2016

Hmmm. Given that mess of states, I'm starting to think that we just want two failure states:

  • Fail: any condition in which Governor is somehow up but Postgres isn't taking connections, and
  • Orphan: any condition in which the node isn't part of a cluster, but is up for read-only access.

Really only those two states matter in terms of the orchestration system; a node in "Fail" status should be taken out of any connection pool, and "Orphan" would be handled according to local policy (some might want orphans in a pool, some might not).

For any other status refinements, I think it makes sense to just poll the HTTP status from the individual node, which would tell us all of the other information above.
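
To make that concrete, here's one hypothetical way the conditions listed above could be collapsed into those two reported states (the constant and function names are mine, not Governor's):

```go
package hooks

// Internal conditions discussed above; names here are illustrative only.
type condition int

const (
	pgUnhealthy condition = iota // Postgres isn't accepting connections
	lagTooHigh                   // maximum_lag_on_failover not met while trying to gain leadership
	quorumLost                   // cluster has lost quorum
)

// reportedState collapses an internal condition into the state string the
// orchestration-facing hook would receive.
func reportedState(c condition) string {
	switch c {
	case pgUnhealthy:
		return "fail" // take the node out of any connection pool
	case lagTooHigh, quorumLost:
		return "orphan" // node is up read-only; handle per local policy
	default:
		return "unknown"
	}
}
```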

@doodles526
Contributor

Agreed. I found an issue with how it handled the maximum_lag_on_failover flag that I'm working on correcting. After that, I'll get back to working on script execution.
