
haproxy_status.sh should get leader status from etcd #2

Open
Winslett opened this issue Mar 24, 2015 · 6 comments

Comments

@Winslett
Contributor

Given etcd is the proper location for leaders/followers, haproxy_status.sh should respond after checking leader information from etcd instead of checking for leadership in PostgreSQL.

This will reduce the chance of writing data to a PostgreSQL that has lost its lock on the leader key, but has not failed over.
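
For illustration, here is a minimal sketch of what that check could look like, assuming an etcd v2 key such as /v2/keys/service/postgres/leader on port 4001 and a member name equal to the local hostname (the key path, port, and naming are assumptions, not the project's actual layout):

# Hypothetical stand-in for haproxy_status.sh: report "up" only if the etcd
# leader key names this node. Written against Python 3's urllib for brevity.
import json
import socket
from urllib.request import urlopen

ETCD_LEADER_URL = "http://127.0.0.1:4001/v2/keys/service/postgres/leader"  # assumed key
MEMBER_NAME = socket.gethostname()  # assumed to match the name governor registers

def is_leader():
    try:
        node = json.load(urlopen(ETCD_LEADER_URL, timeout=2))["node"]
        return node["value"] == MEMBER_NAME
    except Exception:
        # If etcd cannot be reached we cannot prove leadership, so fail the check.
        return False

if __name__ == "__main__":
    # HAProxy's httpchk only inspects the status line.
    status = "200 OK" if is_leader() else "503 Service Unavailable"
    print("HTTP/1.1 %s\r\n" % status)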

Winslett changed the title from "haproxy_status read from etcd instead of from PostgreSQL" to "haproxy_status.sh should get leader status from etcd" on Mar 24, 2015
@tvb

tvb commented May 7, 2015

Hi @Winslett, any plans to implement this any time soon?

@Winslett
Contributor Author

Winslett commented May 7, 2015

@tvb I'm undecided about how this should work.

My core issue with moving this way is solving the "what if the etcd cluster goes away?" problem. I need to create another issue for that problem and reference this one from it. If we relied on etcd state for leader/follower in haproxy_status.sh, and etcd had a maintenance window, crashed, or had a network partition, then the Postgres cluster would go down. With the current behavior, etcd going away would cause governor.py to throw an urllib error, which would stop PostgreSQL.

In a perfect scenario, if etcd is unavailable to a running cluster, the cluster should maintain the current primary if possible, but not fail over. @jberkus and I chatted about this scenario. If etcd is inaccessible to the leader (network partition, etcd outage, or maintenance), a leader governor should expect a majority of follower governors to provide heartbeats to it. If follower heartbeats do not provide enough votes, the leader governor would go read-only and the cluster would wait for etcd to return. I would start the process by modifying the decision tree.

[update: created the issue at https://github.com//issues/7]

In the interim, until that problem is solved…

The more I think about this, the more I think governor.py should handle the state responses to HAProxy itself, removing the haproxy_status.sh files and moving the HTTP port configuration into the postgres*.yml files.

For people who know Python better than I do, is there a sensible way to run governor.py with both a looping runner and an HTTP listener?
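
One possible shape, as a rough sketch rather than a patch: keep the existing loop on the main thread and serve the HAProxy check from a daemon thread with the standard library's HTTP server. The is_leader(), governor_loop(), and port values below are placeholders, not real governor.py internals.

import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

def is_leader():
    # Placeholder: would consult governor's in-memory view of the etcd leader key.
    return False

def governor_loop():
    # Placeholder for the existing run loop in governor.py.
    while True:
        time.sleep(10)

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200 if is_leader() else 503)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep HAProxy's frequent checks out of the log

if __name__ == "__main__":
    port = 8008  # would come from the postgres*.yml files
    server = HTTPServer(("", port), StatusHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    governor_loop()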

@tvb

tvb commented May 7, 2015

"In a perfect scenario, if etcd is unavailable to a running cluster, the cluster should maintain the current primary if possible, but not fail over"

This is tricky as there would be no way for the primary to check its status.

@jberkus

jberkus commented May 7, 2015

" If etcd is unaccessible by the leader (network partition, etcd outage, or maintenance), a leader governor should expect a majority of follower governors to provide heartbeats to the leader. If follower heartbeats are not providing enough votes, the leader governor would go read-only and the cluster would wait for etcd to return. I would start the process by modifying the decision tree."

My thinking was this:

  • if etcd is not accessible to any follower db, it should remain as it is;
  • if etcd is not accessible to the leader db, it should restart in read-only mode, in case it is no longer the leader;
  • if etcd is not accessible to HAProxy, it should make no changes except for disabling failed nodes.

The last case is a good reason, IMHO, for HAProxy to do direct checks against each node as well as against etcd, via this logic (sketched in Python below the tree):

Is etcd responding?
    Is node marked leader in etcd?
        Is node responding?
            enable node
        else:
            disable node
    else:
        disable node
else:
    Is node responding?
        leave node enabled/disabled
    else:
        disable node
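
To make those branches explicit, here is a rough Python rendering of the same tree; etcd_leader, node_responding, and currently_enabled are placeholder inputs rather than real HAProxy or governor APIs:

def check_node(node, etcd_leader, node_responding, currently_enabled):
    """Return True to enable the node, False to disable it.

    etcd_leader:       leader name according to etcd, or None if etcd is not responding
    node_responding:   whether the node answers a direct check (e.g. pg_isready)
    currently_enabled: the node's present state, preserved while etcd is down
    """
    if etcd_leader is not None:        # etcd is responding
        if etcd_leader == node:        # node is marked leader in etcd
            return node_responding     # enable only if the node also responds
        return False                   # not the leader: disable
    if node_responding:                # etcd is down but the node is up
        return currently_enabled       # leave the node enabled/disabled as-is
    return False                       # node itself is down: disable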

One problem with the above logic is that it doesn't ever support load-balancing connections to the read replica. However, that seems to be a limitation of any HAProxy-based design if we want automated connection switching, due to the inability to add new backends to HAProxy without restarting. FYI, I plan to use Kubernetes networking to handle the load-balancing case instead.

One thing I don't understand is why we need to have an HTTP daemon for HAProxy auth. Isn't there some way it can check the postgres port? I'm pretty sure there is something for HAProxy; we really want a check based on pg_isready. This is a serious issue if you want to use Postgres in containers, because we really don't want a container listening on two ports.

Also, if we can do the check via the postgres port, then we can implement whatever logic we want on the backend, including checks against etcd and internal postgres status.

Parenthetically: at first, the idea of implementing a custom worker for Postgres that implements the leader-election portion of Raft is appealing. However, this does not work with binary replication, because without etcd we would have nowhere to store status information. And if we're using etcd anyway, we might as well rely on it as a source of truth. Therefore: let's keep governor/etcd.

@bjoernbessert

@jberkus
"One problem with the above logic is that this doesn't support ever load-balancing connections to the read replica. However, that seems to be a limitation with any HAProxy-based design if we want automated connection switching, due to an inability to add new backends to HAproxy without restarting. FYI, I plan to instead use Kubernetes networking to handle the load-balancing case"

You can add new backends (modify the HAProxy config) with zero downtime by reloading HAProxy with a little help from iptables. We're using this with great success: https://medium.com/@Drew_Stokes/actual-zero-downtime-with-haproxy-18318578fde6

@jberkus

jberkus commented May 8, 2015

Still seems like a heavy-duty workaround for something that Kubernetes does as a built-in feature.
