[gce]: DeleteInstances 409 case #5192
Conversation
Force-pushed from ad4443f to f5e6fea: continue on 409s as the operation already exists
@x13n we have a P1 GCP support ticket related to this issue. I've asked support to forward the ticket to your team. Hopefully it gets to you, but I'll also try connecting with you on Slack.
JSON log of the autoscaler delete failure:
{
"insertId": "XXXXXXXXXX",
"jsonPayload": {
"reportingComponent": "",
"reason": "DeleteUnregisteredFailed",
"kind": "Event",
"type": "Warning",
"eventTime": null,
"apiVersion": "v1",
"involvedObject": {
"resourceVersion": "2173434705",
"uid": "XXXXXXXXXXXX",
"name": "cluster-autoscaler-status",
"kind": "ConfigMap",
"apiVersion": "v1",
"namespace": "kube-system"
},
"reportingInstance": "",
"source": {
"component": "cluster-autoscaler"
},
"metadata": {
"resourceVersion": "113800445",
"managedFields": [
{
"time": "XXXXXXXX",
"operation": "Update",
"fieldsV1": {
"f:count": {},
"f:lastTimestamp": {},
"f:source": {
"f:component": {}
},
"f:message": {},
"f:involvedObject": {},
"f:type": {},
"f:firstTimestamp": {},
"f:reason": {}
},
"fieldsType": "FieldsV1",
"apiVersion": "v1",
"manager": "cluster-autoscaler"
}
],
"creationTimestamp": "XXXXXXXXXXX",
"namespace": "kube-system",
"name": "cluster-autoscaler-status.XXXXXXXXXX",
"uid": "XXXXXXXXXXXX"
},
"message": "Failed to remove node gce://ctp-production-us/us-central1-f/gke-production-XXXXXXXXXX: error while getting operation operation-XXXXX-YYYYYYY-ZZZZZZ-000000 on https://www.googleapis.com/compute/v1/projects/XXXXXX/zones/XXXXXXX/instanceGroupManagers/gke-production-XXXXXXXX: <nil>"
},
"resource": {
"type": "k8s_cluster",
"labels": {
"cluster_name": "XXXXXXX",
"project_id": "XXXXXXX",
"location": "XXXXXXXX"
}
},
"timestamp": "XXXXXXX",
"severity": "WARNING",
"logName": "projects/XXXXXX/logs/events",
"receiveTimestamp": "XXXXXXXXX"
}
Corresponding API error:
{
"protoPayload": {
"@type": "type.googleapis.com/google.cloud.audit.AuditLog",
"status": {
"code": 3,
"message": "INVALID_USAGE",
"details": [
{
"@type": "type.googleapis.com/google.protobuf.Struct",
"value": {
"invalidUsage": {
"userVisibleReason": "Cannot flag instance https://www.googleapis.com/compute/v1/projects/$PROJECT/zones/$ZONE/instances/$NODE_POOL to be deleted. Instance is already being deleted.",
"resource": {
"resourceType": "INSTANCE",
"resourceName": "$NODE_POOL",
"project": {
"canonicalProjectId": "$PROJECT_ID"
},
"scope": {
"scopeType": "ZONE",
"scopeName": "$ZONE"
}
}
}
}
}
]
},
"authenticationInfo": {
"principalEmail": "$PRINCIPAL_EMAIL",
"principalSubject": "XXXXXX:$PRINCIPAL_EMAIL"
},
"requestMetadata": {
"callerIp": "$CLIENT_IP",
"callerSuppliedUserAgent": "google-api-go-client/0.5 cluster-autoscaler,gzip(gfe)",
"requestAttributes": {},
"destinationAttributes": {}
},
"serviceName": "compute.googleapis.com",
"methodName": "v1.compute.instanceGroupManagers.deleteInstances",
"resourceName": "projects/$PROJECT/zones/$ZONE/instanceGroupManagers/$INSTANCE_GROUP_MANAGERS",
"request": {
"@type": "type.googleapis.com/compute.instanceGroupManagers.deleteInstances"
}
},
"insertId": "$INSERT_ID",
"resource": {
"type": "gce_instance_group_manager",
"labels": {
"project_id": "$PROJECT",
"instance_group_manager_name": "$INSTANCE_GROUP_MANAGERS",
"location": "$ZONE",
"instance_group_manager_id": "$INSTANCE_GROUP_MANAGER_ID"
}
},
"timestamp": "$TIMESTAMP.029142Z",
"severity": "ERROR",
"logName": "projects/$PROJECT/logs/cloudaudit.googleapis.com%2Factivity",
"operation": {
"id": "$OPERATION_ID",
"producer": "compute.googleapis.com",
"last": true
},
"receiveTimestamp": "$TIMESTAMP.651600414Z"
}
@@ -259,7 +259,8 @@ func (client *autoscalingGceClientV1) DeleteInstances(migRef GceRef, instances [
 		req.Instances = append(req.Instances, GenerateInstanceUrl(i))
 	}
 	op, err := client.gceService.InstanceGroupManagers.DeleteInstances(migRef.Project, migRef.Zone, migRef.Name, &req).Do()
-	if err != nil {
+	wasConflictErr := op != nil && op.HttpErrorStatusCode == http.StatusConflict
I'd create a new function to evaluate the HTTP status code, for cleaner code, e.g.

// HttpErrorStatusCode on the operation is an int64, so take an int64 here.
func isDeleteInstanceConflict(errorCode int64) bool {
	return errorCode == http.StatusConflict
}

if err != nil && (op == nil || !isDeleteInstanceConflict(op.HttpErrorStatusCode)) {
	...
@@ -259,7 +259,8 @@ func (client *autoscalingGceClientV1) DeleteInstances(migRef GceRef, instances [
 		req.Instances = append(req.Instances, GenerateInstanceUrl(i))
 	}
 	op, err := client.gceService.InstanceGroupManagers.DeleteInstances(migRef.Project, migRef.Zone, migRef.Name, &req).Do()
-	if err != nil {
+	wasConflictErr := op != nil && op.HttpErrorStatusCode == http.StatusConflict
+	if !wasConflictErr && err != nil {
 		return err
Can you move the wasConflictErr var inside the if err != nil condition & log a warning whenever this happens? CA generally shouldn't attempt to delete the same VM twice, so it would be good to at least leave a trace that this happened.
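A minimal sketch of what that restructuring might look like, assuming the k8s.io/klog/v2 logger that cluster-autoscaler generally uses (the warning text is illustrative, not taken from the PR):

op, err := client.gceService.InstanceGroupManagers.DeleteInstances(migRef.Project, migRef.Zone, migRef.Name, &req).Do()
if err != nil {
	// Only tolerate the 409 "instance is already being deleted" case; any
	// other failure is still returned to the caller.
	wasConflictErr := op != nil && op.HttpErrorStatusCode == http.StatusConflict
	if !wasConflictErr {
		return err
	}
	// Leave a trace, since CA normally shouldn't try to delete the same VM twice.
	klog.Warningf("DeleteInstances for MIG %s returned 409; a delete of the same instance(s) is already in progress, continuing", migRef.Name)
}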
@@ -259,7 +259,8 @@ func (client *autoscalingGceClientV1) DeleteInstances(migRef GceRef, instances [
 		req.Instances = append(req.Instances, GenerateInstanceUrl(i))
 	}
 	op, err := client.gceService.InstanceGroupManagers.DeleteInstances(migRef.Project, migRef.Zone, migRef.Name, &req).Do()
What if there was a conflict deleting one instance, but not the others?
Hmmm... in my mind a "delete" request is idempotent, so if there is currently a delete in progress there is no reason to consider it an error.
So delete conflicts now become successes.
If you have N successes and N delete conflicts, you now have 2N successes.
If one of those N requests had an error besides a conflict error, you still return that.
What do you think? Is there something more to be concerned about here?
I guess at this point it's an issue with the API itself?
If the API is only returning a single error, that could be very problematic.
Or maybe you were thinking: what if there was a 500 error, but we wait for the op to finish because we think there was only a 409 error?
https://cloud.google.com/compute/docs/reference/rest/v1/instanceGroupManagers/deleteInstances

skipInstancesOnValidationError: Specifies whether the request should proceed despite the inclusion of instances that are not members of the group or that are already in the process of being deleted or abandoned. If this field is set to false and such an instance is specified in the request, the operation fails. The operation always fails if the request contains a malformed instance URL or a reference to an instance that exists in a zone or region other than the group's zone or region.
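For comparison, a rough sketch of what leaning on that request field might look like in this client. Assumptions: the compute/v1 package is imported as gce (as in the existing req type), the vendored client version already exposes SkipInstancesOnValidationError, and instanceUrls is a hypothetical slice holding the instance URLs built in the loop above:

req := gce.InstanceGroupManagersDeleteInstancesRequest{
	Instances: instanceUrls,
	// Instances that are already being deleted or abandoned are skipped
	// instead of failing the whole request with a 409.
	SkipInstancesOnValidationError: true,
}
op, err := client.gceService.InstanceGroupManagers.DeleteInstances(migRef.Project, migRef.Zone, migRef.Name, &req).Do()
if err != nil {
	return err
}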
Force-pushed from 1ca89a9 to 082e961
Which component this PR applies to?
cluster-autoscaler (gce provider).

What type of PR is this?
/kind bug

What this PR does / why we need it:
If the cluster autoscaler for some reason re-enters the DeleteInstances function after an operation to delete nodes has already started, the autoscaler will not correctly wait on that operation. Instead, the autoscaler will loop over this error (from what I've seen), putting the cluster in a state where it can't be modified again until the delete operation finishes.

Which issue(s) this PR fixes:

Special notes for your reviewer:
If this PR is merged it would close #5213. I expect #5213 is potentially the more correct PR, so I don't imagine this one will be merged.

Does this PR introduce a user-facing change?

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: