
Scaling Deployments to 0 replicas does not work with wait set to true #5102

Closed
scholzj opened this issue May 4, 2023 · 3 comments · Fixed by #5103

@scholzj
Contributor

scholzj commented May 4, 2023

Describe the bug

Since the 6.6.0 release, it no longer seems to be possible to scale down Deployments to 0 with the wait flag set to true. My guess is that this is caused by #4976, which removed some of the Deployment-specific scale-down logic, and the wait never completes. Possibly because when .spec.replicas is set to 0, .status.replicas is not set in the Deployment resource (Kube 1.26.0).

One can work around it by not setting wait to true when scaling to 0, but this worked fine in 6.5.1, so if nothing else it is a backward compatibility issue.
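For what it's worth, a minimal sketch of that workaround (using the client, NAMESPACE and NAME from the reproducer below): scale without waiting, then wait explicitly and treat a missing .status.replicas as 0:

client.apps().deployments().inNamespace(NAMESPACE).withName(NAME).scale(0, false);

// Explicit wait that also accepts a Deployment whose .status.replicas is no longer set
client.apps().deployments().inNamespace(NAMESPACE).withName(NAME)
        .waitUntilCondition(d -> d == null
                || d.getStatus() == null
                || d.getStatus().getReplicas() == null
                || d.getStatus().getReplicas() == 0,
                5, TimeUnit.MINUTES);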

Fabric8 Kubernetes Client version

6.6.0

Steps to reproduce

Use the following code to reproduce it:

package cz.scholz.sandbox.fabric8;

import io.fabric8.kubernetes.api.model.ContainerBuilder;
import io.fabric8.kubernetes.api.model.apps.Deployment;
import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.Map;
import java.util.concurrent.TimeUnit;

public class DeploymentScaling {
    private final static Logger LOGGER = LoggerFactory.getLogger(DeploymentScaling.class);

    private static final String NAMESPACE = "myproject";
    private static final String NAME = "fabric8-test";

    public static void main(final String[] args) {
        System.setProperty("org.slf4j.simpleLogger.defaultLogLevel", "info");
        System.setProperty("org.slf4j.simpleLogger.showThreadName", "false");

        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            Deployment dep = new DeploymentBuilder()
                    .withNewMetadata()
                        .withName(NAME)
                        .withNamespace(NAMESPACE)
                    .endMetadata()
                    .withNewSpec()
                        .withReplicas(2)
                        .withNewSelector()
                            .withMatchLabels(Map.of("app", NAME))
                        .endSelector()
                        .withNewTemplate()
                            .withNewMetadata()
                                .withLabels(Map.of("app", NAME))
                            .endMetadata()
                            .withNewSpec()
                                .withContainers(new ContainerBuilder()
                                        .withName("nginx")
                                        .withImage("nginx:1.14.2")
                                        .build())
                            .endSpec()
                        .endTemplate()
                    .endSpec()
                    .build();

            LOGGER.info("Creating deployment");
            client.apps().deployments().resource(dep).create();

            LOGGER.info("Waiting for readiness");
            client.apps().deployments().inNamespace(NAMESPACE).withName(NAME).waitUntilReady(5, TimeUnit.MINUTES);
            LOGGER.info("Deployment is ready");

            LOGGER.info("Scaling deployment to 1 replica");
            client.apps().deployments().inNamespace(NAMESPACE).withName(NAME).scale(1, true);

            LOGGER.info("Scaling deployment to 0 replica");
            client.apps().deployments().inNamespace(NAMESPACE).withName(NAME).scale(0, false);

            LOGGER.info("Scaling done, deleting resource");
            client.apps().deployments().inNamespace(NAMESPACE).withName(NAME).delete();
        }
    }
}

With Fabric8 6.5.1 it will complete just fine. With 6.6.0 it will get stuck at scale(0, true).

Expected behavior

The same behavior as with 6.5.1.

Runtime

Kubernetes (vanilla)

Kubernetes API Server version

other (please specify in additional context)

Environment

Linux

Fabric8 Kubernetes Client Logs

No response

Additional context

  • Tested with Kube 1.26.0 on Linux & Java 17 on MacOS
@shawkins
Contributor

shawkins commented May 4, 2023

The issue here is that scale.getStatus()/getSpec().getReplicas() will return null, not 0, when scaling to zero. That will have to be accounted for in HasMetadataOperation.
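For illustration only (not the actual change, which is in the linked PR), this is roughly the kind of null handling that implies when comparing the desired replica count against the autoscaling/v1 Scale object:

// Hypothetical helper: treat a missing replica count on the Scale object as 0,
// so that "scale to 0 and wait" can complete.
static boolean reachedDesiredReplicas(io.fabric8.kubernetes.api.model.autoscaling.v1.Scale scale, int desired) {
    Integer spec = scale.getSpec() == null ? null : scale.getSpec().getReplicas();
    Integer status = scale.getStatus() == null ? null : scale.getStatus().getReplicas();
    return (spec == null ? 0 : spec) == desired && (status == null ? 0 : status) == desired;
}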

@scholzj
Contributor Author

scholzj commented May 4, 2023

As a sidenote -> from a quick read of the scaling code in HasMetadataOperation, it seems to have hardcoded expectations that .spec.replicas and .status.replicas are always used. But that is not always the case -> this can use different fields. For example in one of the Strimzi resources, we use

      scale:
        specReplicasPath: .spec.tasksMax
        statusReplicasPath: .status.tasksMax

I do not know how many other CRDs like that exist. I do not really have a use case for this right now, and I know it can be worked around by changing the resource itself, so I did not want to open an issue for it just like that if I don't really need it. But I think this is a limitation, and if you want, I can open an issue to track it.

@shawkins
Contributor

shawkins commented May 4, 2023

it seems to have hardcoded expectations that .spec.replicas and .status.replicas are always used

No, the expectation is the status subresource is used. kube already handles the mapping to those standard fields.
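In other words, whatever paths the CRD declares (e.g. .spec.tasksMax above), the scale subresource of such a CRD serves the standard autoscaling/v1 Scale object, and the API server maps its replica fields onto the declared paths. A rough sketch of that payload with the fabric8 model builders (the resource name here is made up):

import io.fabric8.kubernetes.api.model.autoscaling.v1.Scale;
import io.fabric8.kubernetes.api.model.autoscaling.v1.ScaleBuilder;

// The scale payload always has this shape, even when the CRD maps the replica
// count to .spec.tasksMax / .status.tasksMax internally.
Scale desired = new ScaleBuilder()
        .withNewMetadata()
            .withName("my-connect")      // made-up resource name
            .withNamespace("myproject")
        .endMetadata()
        .withNewSpec()
            .withReplicas(3)             // the API server writes this to .spec.tasksMax
        .endSpec()
        .build();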

@shawkins shawkins linked a pull request May 4, 2023 that will close this issue
@shawkins shawkins self-assigned this May 4, 2023
@manusa manusa added the bug label May 4, 2023
@manusa manusa added this to the 6.6.1 milestone May 4, 2023
manusa pushed a commit that referenced this issue May 5, 2023