Modify autodiscovery for ControlPlane considering new Arch #218

Closed

paologallinaharbur opened this issue Oct 4, 2021 · 2 comments

Labels: feature request (Categorizes issue or PR as related to a new feature or enhancement.)

Comments

@paologallinaharbur (Member)

Taking into account that only one component needs to scrape the control plane, and that the process must be long-running, we could greatly simplify autodiscovery.

@invidian added the feature request label on Oct 11, 2021
@invidian (Contributor)

Some questions and thoughts arose while we were discussing #217:

  • On what platforms do we want to support control plane discovery? Currently we have https://docs.newrelic.com/docs/integrations/kubernetes-integration/get-started/kubernetes-integration-compatibility-requirements/ and label selectors such as:
    // Kops / Kubeadm / ClusterAPI
    {"k8s-app": "kube-scheduler"},
    {"tier": "control-plane", "component": "kube-scheduler"},
    // OpenShift
    {"app": "openshift-kube-scheduler", "scheduler": "true"},
  • kube-apiserver scraping can always be done centrally via kubernetes.default.svc.cluster.local, without an extra discovery process (see the first sketch after this list).
  • etcd, kube-controller-manager and kube-scheduler scraping may only be possible from controller nodes, as they might be network-isolated from worker nodes. In that case, how can we scrape them? Also, do we want to scrape all the instances, or just the (active) one?
  • Configuration of discovery for control plane components may take some inspiration from https://github.com/prometheus-community/helm-charts/blob/accabfb2d3402cb915ff3a77c9b433fc929a7d38/charts/kube-prometheus-stack/values.yaml#L957-L1011, which, to my knowledge, supports a wide variety of clusters.
  • If we override URLs for scraping the control plane, do we want the scrapes to be executed from worker nodes, or from controller nodes only?
  • Scraping for all components could support either authentication via kubeconfig OR custom certificates, as it does right now:
    func (c *ControlPlaneComponentClient) getTLSConfigFromSecret() (*tls.Config, error) {
        namespace := c.tlsSecretNamespace
        if namespace == "" {
            c.logger.Debugf("TLS Secret name configured, but not TLS Secret namespace. Defaulting to `default` namespace.")
            namespace = "default"
        }

        secret, err := c.k8sClient.FindSecret(c.tlsSecretName, namespace)
        if err != nil {
            return nil, errors.Wrapf(err, "could not find secret %s containing TLS configuration", c.tlsSecretName)
        }

        var cert, key, cacert []byte
        var ok bool

        if cert, ok = secret.Data["cert"]; !ok {
            return nil, invalidTLSConfig{
                message: fmt.Sprintf("could not find TLS certificate in `cert` field in secret %s", c.tlsSecretName),
            }
        }

        if key, ok = secret.Data["key"]; !ok {
            return nil, invalidTLSConfig{
                message: fmt.Sprintf("could not find TLS key in `key` field in secret %s", c.tlsSecretName),
            }
        }

        cacert, hasCACert := secret.Data["cacert"]
        insecureSkipVerifyRaw, hasInsecureSkipVerify := secret.Data["insecureSkipVerify"]

        if !hasCACert && !hasInsecureSkipVerify {
            return nil, invalidTLSConfig{
                message: "both cacert and insecureSkipVerify are not set. One of them need to be set to be able to call ETCD metrics",
            }
        }

        // insecureSkipVerify is set to false by default, and can be overridden with the insecureSkipVerify field
        insecureSkipVerify := false
        if hasInsecureSkipVerify {
            insecureSkipVerify = strings.ToLower(string(insecureSkipVerifyRaw)) == "true"
        }

        return parseTLSConfig(cert, key, cacert, insecureSkipVerify)
    }
    This should likely be refactored to provide a RoundTripper that handles the authentication (see the second sketch after this list).
  • If we keep supporting scraping static pods on the cluster, we can likely share the discoverer with KSM (say, Endpoints discovery) and then add some extra logic on top of it, like checking TLS, if needed (see the third sketch below).
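
To make the kube-apiserver point concrete, here is a minimal sketch of central scraping with client-go, assuming in-cluster credentials; scrapeAPIServerMetrics is a hypothetical helper, not code from this repo:

    package main

    import (
        "fmt"
        "io"
        "net/http"

        "k8s.io/client-go/rest"
    )

    // scrapeAPIServerMetrics fetches /metrics from the API server using the
    // in-cluster service account credentials, with no discovery step at all.
    func scrapeAPIServerMetrics() (string, error) {
        config, err := rest.InClusterConfig()
        if err != nil {
            return "", fmt.Errorf("loading in-cluster config: %w", err)
        }

        // rest.TransportFor builds a round tripper that already carries the
        // service account token and the cluster CA.
        rt, err := rest.TransportFor(config)
        if err != nil {
            return "", fmt.Errorf("building transport: %w", err)
        }

        client := &http.Client{Transport: rt}

        resp, err := client.Get("https://kubernetes.default.svc/metrics")
        if err != nil {
            return "", fmt.Errorf("scraping metrics: %w", err)
        }
        defer resp.Body.Close()

        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return "", fmt.Errorf("reading response: %w", err)
        }

        return string(body), nil
    }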
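
For the RoundTripper refactor, a rough sketch of the shape it could take, building on the output of the existing getTLSConfigFromSecret; roundTripperFor is a hypothetical name:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"

        "k8s.io/client-go/rest"
    )

    // roundTripperFor hides the authentication mechanism from callers: either
    // custom certificates extracted from a TLS Secret (e.g. for etcd), or the
    // in-cluster kubeconfig credentials handled by client-go.
    func roundTripperFor(tlsConfig *tls.Config) (http.RoundTripper, error) {
        if tlsConfig != nil {
            // Custom certificates path, fed by getTLSConfigFromSecret.
            return &http.Transport{TLSClientConfig: tlsConfig}, nil
        }

        // Kubeconfig path: reuse the service account token and cluster CA.
        config, err := rest.InClusterConfig()
        if err != nil {
            return nil, fmt.Errorf("loading in-cluster config: %w", err)
        }

        return rest.TransportFor(config)
    }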
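
And for the shared Endpoints discovery, a sketch of the lookup that could be common with KSM, assuming client-go; endpointsAddresses is a hypothetical helper:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // endpointsAddresses returns host:port pairs for a service (for example
    // kube-scheduler in kube-system), the same lookup a KSM discoverer needs.
    func endpointsAddresses(client kubernetes.Interface, namespace, service string) ([]string, error) {
        endpoints, err := client.CoreV1().Endpoints(namespace).Get(context.Background(), service, metav1.GetOptions{})
        if err != nil {
            return nil, fmt.Errorf("getting endpoints %s/%s: %w", namespace, service, err)
        }

        var addresses []string
        for _, subset := range endpoints.Subsets {
            for _, addr := range subset.Addresses {
                for _, port := range subset.Ports {
                    addresses = append(addresses, fmt.Sprintf("%s:%d", addr.IP, port.Port))
                }
            }
        }

        return addresses, nil
    }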

@paologallinaharbur (Member, Author)

Possible issue with --bind-address: kubernetes/kubeadm#2388
