ROX-13924: provision failover RDS instance #764
Conversation
/test e2e
```go
if err := r.ensureDBInstanceCreated(instanceID, clusterID); err != nil {
    return fmt.Errorf("ensuring DB instance %s exists in cluster %s: %w", instanceID, clusterID, err)
}

failoverID := getFailoverInstanceID(databaseID)
if err := r.ensureDBInstanceCreated(failoverID, clusterID); err != nil {
```
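(The `getFailoverInstanceID` helper isn't part of this hunk; a minimal sketch, assuming the failover ID is simply the database ID with a suffix:)

```go
// Hypothetical sketch: derive the failover instance's ID from the database ID.
// The actual naming scheme used by the PR isn't visible in this hunk.
func getFailoverInstanceID(databaseID string) string {
    return databaseID + "-failover" // assumed suffix
}
```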
I'm not 100% sure whether it's a blocking operation, but if it is, we may want to execute it in a separate goroutine to add some parallelism. Just food for thought for further improvements.
It's not blocking; the two instances are created in parallel (see the sketch below).
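For illustration, a minimal sketch of why the call doesn't block, assuming the aws-sdk-go v1 RDS client; the receiver type, field values, and error handling are assumptions, not the PR's actual code. `CreateDBInstance` returns as soon as AWS accepts the creation request, so consecutive calls let both instances provision concurrently:

```go
import (
    "fmt"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/awserr"
    "github.com/aws/aws-sdk-go/service/rds"
    "github.com/aws/aws-sdk-go/service/rds/rdsiface"
)

// RDS is a hypothetical wrapper around the AWS SDK client; the PR's actual
// receiver type isn't visible in this diff.
type RDS struct {
    rdsClient rdsiface.RDSAPI
}

// CreateDBInstance is asynchronous: it returns once AWS accepts the creation
// request, and the instance keeps provisioning in the background. Calling it
// twice in a row therefore builds both instances in parallel on the AWS side.
func (r *RDS) ensureDBInstanceCreated(instanceID, clusterID string) error {
    _, err := r.rdsClient.CreateDBInstance(&rds.CreateDBInstanceInput{
        DBInstanceIdentifier: aws.String(instanceID),
        DBClusterIdentifier:  aws.String(clusterID),
        DBInstanceClass:      aws.String("db.serverless"),     // assumed instance class
        Engine:               aws.String("aurora-postgresql"), // assumed engine
    })
    if aerr, ok := err.(awserr.Error); ok && aerr.Code() == rds.ErrCodeDBInstanceAlreadyExistsFault {
        return nil // already provisioned; "ensure" stays idempotent
    }
    if err != nil {
        return fmt.Errorf("creating DB instance %s in cluster %s: %w", instanceID, clusterID, err)
    }
    return nil
}
```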
```go
if err := r.ensureDBInstanceCreated(failoverID, clusterID); err != nil {
    return fmt.Errorf("ensuring failover DB instance %s exists in cluster %s: %w", failoverID, clusterID, err)
}

return r.waitForInstanceToBeAvailable(ctx, instanceID, clusterID)
```
Don't we need to wait for the failover instance to be available too?
We don't need to, because Central only requires the primary one to start (the first instance we create is read/write; the second is a read-only instance that isn't used except as a failover, because Central doesn't support read-only instances). So we only poll for the primary, as shown in the sketch after this list.
There's the very unlikely scenario that the primary fails in those few minutes before the failover becomes available, but:
- I'd rather keep the provisioning time shorter, and
- there's not much to recover in a fresh DB anyway :)
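A minimal sketch of what that waiting could look like, reusing the hypothetical `RDS` wrapper from the earlier sketch and assuming the aws-sdk-go v1 client; the polling interval and structure are assumptions, not the PR's actual code:

```go
import (
    "context"
    "fmt"
    "time"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/service/rds"
)

// Poll DescribeDBInstances until the instance reports the "available" status.
// Only the primary instance is awaited; the failover instance finishes
// provisioning in the background.
func (r *RDS) waitForInstanceToBeAvailable(ctx context.Context, instanceID, clusterID string) error {
    ticker := time.NewTicker(10 * time.Second) // assumed polling interval
    defer ticker.Stop()
    for {
        out, err := r.rdsClient.DescribeDBInstances(&rds.DescribeDBInstancesInput{
            DBInstanceIdentifier: aws.String(instanceID),
        })
        if err != nil {
            return fmt.Errorf("describing DB instance %s in cluster %s: %w", instanceID, clusterID, err)
        }
        if len(out.DBInstances) > 0 &&
            aws.StringValue(out.DBInstances[0].DBInstanceStatus) == "available" {
            return nil
        }
        select {
        case <-ctx.Done():
            return ctx.Err()
        case <-ticker.C:
        }
    }
}
```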
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: kovayur, vladbologa

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
Description
For high availability, a failover RDS DB instance is created in a different Availability Zone (AZ).
This PR implements provisioning logic for the failover instance (also for existing Centrals), and makes sure that both instances are deleted when deprovisioning an ACSCS instance.
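For the deprovisioning side, a minimal sketch under the same assumptions (aws-sdk-go v1; the function name and structure are hypothetical, reusing the `RDS` wrapper and `getFailoverInstanceID` from the sketches above): both the primary and the failover instance are deleted, and a missing instance is treated as already gone:

```go
import (
    "fmt"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/awserr"
    "github.com/aws/aws-sdk-go/service/rds"
)

// Delete both the primary and the failover DB instance; an instance that no
// longer exists counts as already deprovisioned.
func (r *RDS) ensureInstancesDeleted(databaseID string) error {
    for _, id := range []string{databaseID, getFailoverInstanceID(databaseID)} {
        _, err := r.rdsClient.DeleteDBInstance(&rds.DeleteDBInstanceInput{
            DBInstanceIdentifier: aws.String(id),
            SkipFinalSnapshot:    aws.Bool(true), // assumption; the PR may take a final snapshot
        })
        if aerr, ok := err.(awserr.Error); ok && aerr.Code() == rds.ErrCodeDBInstanceNotFoundFault {
            continue // already gone
        }
        if err != nil {
            return fmt.Errorf("deleting DB instance %s: %w", id, err)
        }
    }
    return nil
}
```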
Note that during ACSCS provisioning, `fleetshard` does not wait for the failover instance to be completely created, as that would increase our provisioning time.

Checklist (Definition of Done)
Test manual
Tested in a local cluster. Verified in the AWS console that the DB was provisioned, that two instances were created, and that Central was able to connect:
```sh
INSTALL_OPERATOR=NO MANAGED_DB_ENABLED=TRUE ./dev/env/scripts/up.sh
./scripts/create-central.sh
```
To test locally, I had to make the DB publicly accessible. For this purpose, I added a new VPC, security group, and DB subnet group in the dev AWS account.