Executing a command inside the pod in Reconciliation loop #4302
Comments
@psaini79 I'd like to better understand what you're trying to accomplish. Based on the information above I see a few things.
If you'd like to chat in real time, we could chat on Kubernetes Slack in #kubernetes-operators.
@jmrodri
To achieve this, I need to execute a command to delete the records on the testapp StatefulSet pod, which is testrepo0. What is the best way to achieve this? Sure, I will also connect with you on Slack.
Mentioned on Slack, but overall this is not something OSDK or Kubernetes provides.
Hi @psaini79, why not use finalizers? A finalizer lets you implement the actions/operations that should run before your CR is actually deleted. See: https://sdk.operatorframework.io/docs/building-operators/golang/advanced-topics/#handle-cleanup-on-deletion Then, in the finalizer, why not use the client provided in the controller to GET/LIST/DELETE/UPDATE all these resources as you wish before the CR is deleted? See here an example which indeed does HTTP requests (see here and here).
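The finalizer bookkeeping behind the docs linked above boils down to adding and removing a marker string on the CR's metadata before Kubernetes is allowed to delete it. A minimal sketch of the string helpers involved, assuming a hypothetical finalizer name (the linked operator-sdk page shows where these slot into the Reconcile flow):

```go
package main

import "fmt"

// cleanupFinalizer is a hypothetical finalizer name; any domain-qualified
// string works.
const cleanupFinalizer = "example.com/cleanup"

// containsString reports whether s is present in slice
// (e.g. in ObjectMeta.Finalizers).
func containsString(slice []string, s string) bool {
	for _, item := range slice {
		if item == s {
			return true
		}
	}
	return false
}

// removeString returns slice with every occurrence of s removed,
// used after the cleanup logic has run successfully.
func removeString(slice []string, s string) []string {
	out := make([]string, 0, len(slice))
	for _, item := range slice {
		if item != s {
			out = append(out, item)
		}
	}
	return out
}

func main() {
	finalizers := []string{cleanupFinalizer, "other.io/keep"}
	fmt.Println(containsString(finalizers, cleanupFinalizer))
	fmt.Println(removeString(finalizers, cleanupFinalizer))
}
```

In a reconcile loop: if the CR's DeletionTimestamp is zero, ensure the finalizer is present; if it is set, run the cleanup, remove the finalizer, and update the CR so the deletion can proceed.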
@camilamacedo86 and @coderanger Thanks for the reply. I discussed with @coderanger on Slack, and it seems we can use the exec method from the operator, but it should not be used, as exec is not the correct way to handle updates and deletes. Instead, I should use a sidecar container or a REST server specific to my app. A sidecar will not help in my case: for the app running on the testrepo StatefulSet, I need to log in to the app and delete the record. A REST server could do this job, but it is extra work for me. I am trying to understand how I can use a finalizer, as I am not deleting a CR but deleting a StatefulSet. For example:
Let us say, as per the above conf, I have 4 StatefulSets up and running: testapp, prodapp, devapp and testrepo. The user passes a new CR like below:
The operator logic reads the difference between the previous StatefulSet configuration and the new CR and deletes prodapp and testapp. However, before doing any deletion, the operator connects to testrepo and executes a command to delete all the records and rebalance the data on testapp. Once that part is done, the r.Client.Delete function is called to delete the StatefulSet. I am maintaining all the StatefulSet conf in the status struct along with their names, i.e. whenever there is a successful create operation, I record the StatefulSet name in the status struct, and when I find a difference between the status struct and the instance conf, I execute a delete operation on the missing StatefulSets. Please let me know if I can use a finalizer per StatefulSet, and if yes, how?
Hi @psaini79, If
@camilamacedo86 I have been playing around with execution in the container via:
// executors.go
package executors
import (
"bytes"
"net/http"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/remotecommand"
"k8s.io/kubectl/pkg/scheme"
)
// Executor implements the remote execution in pods.
type Executor struct {
KubeClient *kubernetes.Clientset
KubeConfig *rest.Config
Pod types.NamespacedName
Container string
}
// ExecutorResult contains the outputs of the execution.
type ExecutorResult struct {
Stdout bytes.Buffer
Stderr bytes.Buffer
}
// NewExecutor creates a new executor from a kube config.
func NewExecutor(kubeConfig *rest.Config) Executor {
return Executor{
KubeConfig: kubeConfig,
KubeClient: kubernetes.NewForConfigOrDie(kubeConfig),
}
}
// Select configures the pod and container that Exec will target.
func (e *Executor) Select(pod types.NamespacedName, container string) *Executor {
e.Pod = pod
e.Container = container
return e
}
// Exec runs an exec call on the container without a shell.
func (e *Executor) Exec(command []string) (*ExecutorResult, error) {
request := e.KubeClient.
CoreV1().
RESTClient().
Post().
Resource("pods").
Namespace(e.Pod.Namespace).
Name(e.Pod.Name).
SubResource("exec").
VersionedParams(&corev1.PodExecOptions{
Command: command,
Container: e.Container,
Stdout: true,
Stderr: true,
TTY: true,
}, scheme.ParameterCodec)
result := new(ExecutorResult)
exec, err := remotecommand.NewSPDYExecutor(e.KubeConfig, http.MethodPost, request.URL())
if err != nil {
return result, err
}
if err := exec.Stream(remotecommand.StreamOptions{Stdout: &result.Stdout, Stderr: &result.Stderr}); err != nil {
return result, err
}
return result, nil
}

// main.go
...
kubeConfig := ctrl.GetConfigOrDie()
mgr, err := ctrl.NewManager(kubeConfig, ctrl.Options{
Scheme: scheme,
MetricsBindAddress: metricsAddr,
Port: 9443,
LeaderElection: enableLeaderElection,
LeaderElectionID: "example.com",
Namespace: "",
})
if err != nil {
setupLog.Error(err, "Failed to start manager.")
os.Exit(1)
}
if err = (&testcontroller.TestReconciler{
Client: mgr.GetClient(),
Log: ctrl.Log.WithName("controller").WithName("test"),
Scheme: mgr.GetScheme(),
Executor: extensions.NewExecutor(kubeConfig),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "Failed to create controller.", "controller", "test")
os.Exit(1)
}
... The API is not perfect, because it does not support
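Once Exec returns, the caller still has to decide how to interpret the captured buffers. One way to surface stderr as an error, sketched against a local copy of the ExecutorResult struct (summarize is a hypothetical helper, not part of the snippet above):

```go
package main

import (
	"bytes"
	"fmt"
)

// ExecutorResult mirrors the struct from the executors.go snippet above.
type ExecutorResult struct {
	Stdout bytes.Buffer
	Stderr bytes.Buffer
}

// summarize is a hypothetical helper: treat any stderr output from the
// remote command as a failure, otherwise return stdout as a string.
func summarize(r *ExecutorResult) (string, error) {
	if r.Stderr.Len() > 0 {
		return "", fmt.Errorf("remote command failed: %s", r.Stderr.String())
	}
	return r.Stdout.String(), nil
}

func main() {
	r := &ExecutorResult{}
	r.Stdout.WriteString("5 rows deleted")
	out, err := summarize(r)
	fmt.Println(out, err)
}
```

Note that commands run through exec often report failure only via stderr or the stream error, so checking both is safer than trusting stdout alone.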
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close. /lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close. /lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue by commenting /reopen. /close
@openshift-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Type of question
Best practices
Question
What is the best way to run the command inside the Pod from the reconciliation loop?
What did you do?
I followed https://github.com/halkyonio/hal/blob/78f0b5ee8e27117b78fe9d6d5192bc5b04c0e5db/pkg/k8s/client.go and implemented the same in the reconciliation loop, and it works, but I am wondering whether it is the correct way, as in my case the user cannot pass the kubeconfig location:
Following is the function, which uses remotecommand.NewSPDYExecutor to execute the command:
What did you expect to see?
I wanted to know the best way to read the kubeconfig inside operator-sdk so that we can execute a command inside the pod.
What did you see instead? Under which circumstances?
Environment
Operator type:
/language go
Kubernetes cluster type:
Testing/Deployment
$ operator-sdk version
operator-sdk version: "v1.2.0", commit: "215fc50b2d4acc7d92b36828f42d7d1ae212015c", kubernetes version: "v1.18.8", go version: "go1.15.3", GOOS: "linux", GOARCH: "amd64"
$ go version
go version go1.15.5 linux/amd64
$ kubectl version
Additional context