Mock KubernetesServer not performing cleanup #3461

Closed
kenfinnigan opened this issue Sep 10, 2021 · 17 comments
@kenfinnigan

When creating pods in the mock server with mockServer.getClient().pods().create(pod1), deleting them at the end of a test with mockServer.getClient().pods().delete() does not result in the pods being removed in subsequent tests.

@shawkins
Contributor

What version is this on? Older versions of the mock logic had an issue with inferring the namespace of the object created. That was addressed by #3156. This works fine on latest from what I can tell.

@kenfinnigan
Author

5.6.0 of the fabric8 client

@shawkins
Contributor

Would it be possible to provide a reproducer?

@rohanKanojia
Member

I'm not able to reproduce this. I tried adding a new test case to PodCrudTest like the one below, but it passes for me:

  @Test
  void testDeleteActuallyRemovesPods() {
    // Given
    Pod pod1 = new PodBuilder().withNewMetadata().withName("pod1").addToLabels("testKey", "testValue").endMetadata().build();

    // When
    client.pods().inNamespace("foo").create(pod1);
    client.pods().inNamespace("foo").withName("pod1").delete();

    // Then
    assertNull(client.pods().inNamespace("foo").withName("pod1").get());
  }

Would appreciate it if we could get some more information.

@kenfinnigan
Author

@rohanKanojia does your example use the mock server? It looks like it's using the actual client

@rohanKanojia
Member

PodCrudTest uses the @EnableKubernetesMockClient annotation, which initializes the mock server and client automatically:

  @EnableKubernetesMockClient(crud = true)
  class PodCrudTest {
    KubernetesMockServer server;
    KubernetesClient client;
    // ...
  }

@kenfinnigan
Author

Thanks @rohanKanojia

The main difference I see is that I didn't delete a specific pod, but all pods with:

mockServer.getClient().pods().delete();

Is that the problem? Do I need to delete the pods by name?

@shawkins
Contributor

Under the covers, pods().delete() performs a list and then deletes each pod individually.
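To make that behavior concrete, here is a minimal stdlib-only sketch (not the actual fabric8 client code; the class PodStoreSketch and all its method names are hypothetical) showing how a collection-level delete() can boil down to "list the namespace, then delete each item one by one":

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical in-memory stand-in for the mock server's pod store.
class PodStoreSketch {
  // namespace -> pod names
  final Map<String, List<String>> store = new HashMap<>();

  void create(String namespace, String podName) {
    store.computeIfAbsent(namespace, k -> new ArrayList<>()).add(podName);
  }

  List<String> list(String namespace) {
    // Return a copy so callers can iterate while we mutate the store.
    return new ArrayList<>(store.getOrDefault(namespace, List.of()));
  }

  // Analogous to pods().delete(): list the namespace, delete each pod individually.
  void deleteAll(String namespace) {
    for (String pod : list(namespace)) {
      delete(namespace, pod);
    }
  }

  void delete(String namespace, String podName) {
    List<String> pods = store.get(namespace);
    if (pods != null) {
      pods.remove(podName);
    }
  }

  public static void main(String[] args) {
    PodStoreSketch s = new PodStoreSketch();
    s.create("test", "pod1");
    s.create("test", "pod2");
    s.deleteAll("test");
    System.out.println(s.list("test").isEmpty()); // prints "true"
  }
}
```

Note that deleteAll only ever touches the one namespace it was given, which matters for the discussion below.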

@kenfinnigan
Author

Is there a problem with how I create the pod?

I've used mockServer.getClient().pods().create(pod1);

@rohanKanojia
Member

If you don't specify a namespace, the Kubernetes Mock Server assumes the namespace is test (this is done to mimic the default-context behavior of .kube/config files):

  public NamespacedKubernetesClient createClient() {
    Config config = new ConfigBuilder()
        .withMasterUrl(url("/"))
        .withTrustCerts(true)
        .withTlsVersions(TLS_1_2)
        .withNamespace("test")
        .build();
    return new DefaultKubernetesClient(createHttpClientForMockServer(config), config);
  }

Anyway, even if I remove the namespace, the test passes:

  @Test
  void testDeleteActuallyRemovesPods() {
    // Given
    Pod pod1 = new PodBuilder().withNewMetadata().withName("pod1").addToLabels("testKey", "testValue").endMetadata().build();

    // When
    client.pods().create(pod1);
    client.pods().delete();

    // Then
    assertNull(client.pods().withName("pod1").get());
  }

If you check the test logs, they may provide some insight into which namespace the tests are picking up:

[INFO] Running io.fabric8.kubernetes.client.mock.PodCrudTest
Sep 14, 2021 9:48:59 PM okhttp3.mockwebserver.MockWebServer$2 execute
INFO: MockWebServer[39897] starting to accept connections
Sep 14, 2021 9:48:59 PM okhttp3.mockwebserver.MockWebServer$2 execute
INFO: MockWebServer[48149] starting to accept connections
Sep 14, 2021 9:49:00 PM okhttp3.mockwebserver.MockWebServer$3 processOneRequest
INFO: MockWebServer[48149] received request: POST /api/v1/namespaces/test/pods HTTP/1.1 and responded: HTTP/1.1 200 OK
Sep 14, 2021 9:49:00 PM okhttp3.mockwebserver.MockWebServer$3 processOneRequest
INFO: MockWebServer[48149] received request: GET /api/v1/namespaces/test/pods HTTP/1.1 and responded: HTTP/1.1 200 OK
Sep 14, 2021 9:49:00 PM okhttp3.mockwebserver.MockWebServer$3 processOneRequest
INFO: MockWebServer[48149] received request: DELETE /api/v1/namespaces/test/pods/pod1 HTTP/1.1 and responded: HTTP/1.1 200 OK
Sep 14, 2021 9:49:00 PM okhttp3.mockwebserver.MockWebServer$3 processOneRequest
INFO: MockWebServer[48149] received request: GET /api/v1/namespaces/test/pods/pod1 HTTP/1.1 and responded: HTTP/1.1 404 Client Error
Sep 14, 2021 9:49:00 PM okhttp3.mockwebserver.MockWebServer$2 acceptConnections

@rohanKanojia
Member

rohanKanojia commented Sep 17, 2021

Okay, I am able to reproduce this issue with the test shown below. I've prepared a reproducer project https://github.com/rohankanojia-forks/fabric8-k8s-client-mockwebserver-issue3461-reproducer (which is based on v5.6.0). I'm also able to reproduce this on master.

  @Test
  void testDeleteAllPodsInDifferentNamespaces() {
    // Given
    Pod pod1 = new PodBuilder().withNewMetadata().withName("p1").addToLabels("testKey", "testValue").endMetadata().build();
    Pod pod2 = new PodBuilder().withNewMetadata().withName("p2").addToLabels("testKey", "testValue").endMetadata().build();
    Pod pod3 = new PodBuilder().withNewMetadata().withName("p3").addToLabels("testKey", "testValue").endMetadata().build();
    client.pods().inNamespace("ns1").create(pod1);
    client.pods().inNamespace("ns2").create(pod2);
    client.pods().inNamespace("ns3").create(pod3);

    // When
    client.pods().delete();
    PodList ns1Pods = client.pods().inNamespace("ns1").list();
    PodList ns2Pods = client.pods().inNamespace("ns2").list();
    PodList ns3Pods = client.pods().inNamespace("ns3").list();

    // Then
    assertTrue(ns1Pods.getItems().isEmpty());
    assertTrue(ns2Pods.getItems().isEmpty());
    assertTrue(ns3Pods.getItems().isEmpty());
  }

@rohanKanojia
Member

rohanKanojia commented Sep 17, 2021

Sorry, false alarm. client.pods().delete() deletes pods in all namespaces only when there is no namespace set in the Config. In the case of the mock web server, the namespace is always test; this mocks having a namespace set in the current context of .kube/config:

contexts:
- context:
    cluster: api-sandbox-test-com:6443
    namespace: rokumar-dev
    user: rokumar/api-sandbox-test-com:6443
  name: rokumar-dev/api-sandbox-test-com:6443/rokumar
current-context: rokumar-dev/api-sandbox-test-com:6443/rokumar

To delete pods in all namespaces, this should always work:

client.pods().inAnyNamespace().delete();

@kenfinnigan : Would it be possible for you to try specifying .inAnyNamespace() and see if it cleans up properly?
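The scoping difference can be sketched with a small stdlib-only simulation (again, a hypothetical illustration, not fabric8 internals; NamespaceScopeSketch and its methods are invented names): a plain delete() is pinned to the Config namespace ("test" for the mock server), while inAnyNamespace() widens the scope to every namespace.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of why delete() missed the pods in ns1/ns2/ns3.
class NamespaceScopeSketch {
  final Map<String, Set<String>> podsByNamespace = new HashMap<>();
  final String configNamespace = "test"; // what the mock server's Config sets

  void create(String namespace, String pod) {
    podsByNamespace.computeIfAbsent(namespace, k -> new HashSet<>()).add(pod);
  }

  // Analogous to pods().delete(): scoped to the Config namespace only.
  void deleteInConfigNamespace() {
    podsByNamespace.remove(configNamespace);
  }

  // Analogous to pods().inAnyNamespace().delete(): clears every namespace.
  void deleteInAnyNamespace() {
    podsByNamespace.clear();
  }

  public static void main(String[] args) {
    NamespaceScopeSketch c = new NamespaceScopeSketch();
    c.create("test", "podA");
    c.create("ns1", "p1");
    c.deleteInConfigNamespace(); // only "test" is cleared, ns1 survives
    System.out.println(c.podsByNamespace.keySet()); // prints "[ns1]"
    c.deleteInAnyNamespace();
    System.out.println(c.podsByNamespace.isEmpty()); // prints "true"
  }
}
```

This mirrors the reproducer above: pods created in ns1/ns2/ns3 survive a plain delete() because only the "test" namespace is in scope.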

@kenfinnigan
Author

@rohanKanojia that seems to work for most tests, but I did hit one error.

I happened to notice that any watcher I installed with client.pods().inAnyNamespace().watch(podWatcher) is not removed by the delete.

Is there a way to remove a watcher?

@shawkins
Contributor

Watches require an explicit close. If the client is closed that should do it as well.

@kenfinnigan
Author

By "explicit close", do you mean calling onClose() on the watcher? Or is there another way to close it? I can't find anything obvious.

@rohanKanojia
Member

I think he's referring to the close() method on the Watch instance returned by watch(). Something like this:

  CountDownLatch eventReceivedLatch = new CountDownLatch(1);
  Watch watch = client.pods().watch(new ListOptionsBuilder().withTimeoutSeconds(30L).build(), new Watcher<Pod>() {
    @Override
    public void eventReceived(Action action, Pod resource) { eventReceivedLatch.countDown(); }
    @Override
    public void onClose(WatcherException cause) { }
  });
  // Then
  assertTrue(eventReceivedLatch.await(3, TimeUnit.SECONDS));
  watch.close();

@kenfinnigan
Author

Thank you!

I'd completely missed there was a Watch instance I could capture that had the close() on it!

All seems good now, thanks again for the help
