Implement runner for e2e tests #548
Conversation
@CyberDem0n your review will be much appreciated
e2e/tests/test_e2e.py
Outdated

```python
self.wait_for_pod_start("name=postgres-operator")
# HACK operator must register CRD / add existing PG clusters after pod start-up;
# for local execution ~10 seconds suffices
time.sleep(30)
```
This is sadly not a solution: the Travis build fails but the local one succeeds. Does anyone have other ideas on how to circumvent the issue?
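Instead of a fixed `time.sleep(30)`, the test could poll until the operator is actually ready. A minimal sketch of such a helper (the helper name and its usage are illustrative, not from the PR):

```python
import time

def wait_for(condition, timeout=60, interval=2):
    """Poll `condition` every `interval` seconds until it returns True,
    or give up after `timeout` seconds; returns whether it succeeded."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False
```

The test would then call something like `wait_for(lambda: operator_registered_crd(), timeout=120)`, where `operator_registered_crd` is a hypothetical check against the Kubernetes API; the wait ends as soon as the condition holds, so local runs stay fast while slow CI runs still pass.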
e2e/tests/test_e2e.py
Outdated

```python
# submit the most recent operator image built on the Docker host
with open("manifests/postgres-operator.yaml", 'r+') as f:
    operator_deployment = yaml.load(f, Loader=yaml.Loader)
```
`yaml.load` is unsafe; it is better to use `yaml.safe_load`.
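For context on the suggestion: `yaml.load` with the default loader can construct arbitrary Python objects from tagged YAML input, while `yaml.safe_load` only builds plain data types. A small sketch, assuming PyYAML is installed:

```python
import yaml  # PyYAML

manifest_text = """
metadata:
  name: postgres-operator
  namespace: default
"""

# safe_load restricts the result to plain types (dict, list, str, int, ...)
# and refuses !!python/... tags that could execute code
manifest = yaml.safe_load(manifest_text)
```

For trusted local manifests the difference is mostly hygiene, but `safe_load` is the habit reviewers generally ask for.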
e2e/tests/test_e2e.py
Outdated

```python
with open("manifests/complete-postgres-manifest.yaml", 'r+') as f:
    pg_manifest = yaml.load(f, Loader=yaml.Loader)
    pg_manifest["metadata"]["namespace"] = self.namespace
```
IIRC, one can specify the namespace as a kubectl parameter: `kubectl create -f foo.yaml --namespace=bar`
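Following that suggestion, the test could build the kubectl invocation instead of rewriting the manifest on disk. A hedged sketch (the helper and its `run` flag are illustrative; execution is kept off here so the command can be inspected):

```python
import subprocess

def kubectl_create(manifest_path, namespace, run=False):
    """Build `kubectl create -f <file> --namespace=<ns>`; only execute
    the command when run=True (left False for illustration)."""
    cmd = ["kubectl", "create", "-f", manifest_path,
           "--namespace=" + namespace]
    if run:
        subprocess.check_call(cmd)
    return cmd
```

This keeps `manifests/complete-postgres-manifest.yaml` untouched, so the same file can be submitted into any test namespace.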
e2e/tests/test_e2e.py
Outdated

```python
replica_pod_nodes = []
podsList = self.api.core_v1.list_namespaced_pod(namespace, label_selector=pod_labels)
for pod in podsList.items:
    if ('spilo-role', 'master') in pod.metadata.labels.items():
```
```python
if pod.metadata.labels.get('spilo-role') == 'master'
```
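The suggested form also behaves better for pods that lack the label entirely, since `dict.get` returns `None` instead of raising. A self-contained illustration (the helper name is hypothetical):

```python
def is_master(labels):
    """True when the spilo-role label marks the pod as master.
    dict.get avoids a KeyError for pods without the label."""
    return labels.get('spilo-role') == 'master'
```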
e2e/tests/test_e2e.py
Outdated

```python
k8s.wait_for_pod_start('spilo-role=replica')

new_master_node, new_replica_nodes = k8s.get_spilo_nodes(labels)
self.assertTrue(current_master_node != new_master_node,
```
`assertNotEqual`
e2e/tests/test_e2e.py
Outdated

```python
self.assertTrue(job.metadata.name == "logical-backup-acid-minimal-cluster",
                "Expected job name {}, found {}"
                .format("logical-backup-acid-minimal-cluster", job.metadata.name))
self.assertTrue(job.spec.schedule == schedule,
```
`assertEqual`
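The rationale behind both suggestions: `assertEqual` and `assertNotEqual` automatically report both operands on failure, so the hand-written `.format(...)` message becomes unnecessary, whereas `assertTrue` only reports that the expression was falsy. A minimal illustration:

```python
import io
import unittest

class AssertStyle(unittest.TestCase):
    def test_equality_asserts(self):
        # on failure these print both values and a diff automatically
        self.assertNotEqual("node-1", "node-2")
        self.assertEqual("logical-backup-acid-minimal-cluster",
                         "logical-backup-acid-minimal-cluster")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(AssertStyle)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
```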
e2e/tests/test_e2e.py
Outdated

```python
labels = 'version=acid-minimal-cluster'
while self.count_pods_with_label(labels) != number_of_instances:
    pass
```
sleep?
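The busy-wait above spins a CPU core and never gives up. A sketch of the suggested fix, sleeping between polls and failing fast on timeout (the helper name and defaults are illustrative):

```python
import time

def wait_for_pod_count(count_pods, expected, timeout=120, interval=2):
    """Poll `count_pods()` until it equals `expected`, sleeping between
    attempts; raise TimeoutError instead of looping forever."""
    deadline = time.time() + timeout
    while count_pods() != expected:
        if time.time() > deadline:
            raise TimeoutError(
                "still {} pods, expected {}".format(count_pods(), expected))
        time.sleep(interval)
```

In the test this would be called as something like `wait_for_pod_count(lambda: self.count_pods_with_label(labels), number_of_instances)`.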
e2e/tests/test_e2e.py
Outdated

```python
def wait_for_logical_backup_job(self, expected_num_of_jobs):
    while (len(self.get_logical_backup_job().items) != expected_num_of_jobs):
        pass
```
sleep?
```python
current_master_node, failover_targets = k8s.get_spilo_nodes(labels)
num_replicas = len(failover_targets)

# if all pods live on the same node, failover will happen to other worker(s)
```
I'd expand this comment to better explain why we can have `len(failover_targets) == 0`:
- single-pod clusters (e.g. in test environments): a new master pod must be started on another worker node, meaning longer downtime for such clusters
- clusters that have master and replica on the same node due to unused pod anti-affinity: a new replica must be started on another worker node for a failover to take place
- all other clusters: failover takes place, i.e. `failover_targets` is not empty
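The three cases above could also be made explicit in code. A hypothetical sketch (the function name and return labels are illustrative, not from the PR):

```python
def classify_failover(master_node, replica_nodes):
    """Label the scenarios the review comment describes, based on where
    the master and the replicas are scheduled (all names hypothetical)."""
    if not replica_nodes:
        # single-pod cluster, e.g. a test environment: a new master must
        # first start on another worker, so downtime is longer
        return "no-failover-target"
    if all(node == master_node for node in replica_nodes):
        # master and replicas share one node (pod anti-affinity unused):
        # a replica must move to another worker before failover
        return "colocated-replicas"
    # at least one replica lives on a different node: failover can proceed
    return "failover-possible"
```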
👍
1 similar comment
👍
early work on #475