
Support for user-customizeable links in Pods, Service and StatefulSets through Annotations #1989

Open
mwitkow opened this issue May 25, 2017 · 27 comments
Labels:
  • help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
  • priority/awaiting-more-evidence: Lowest priority. Possibly useful, but not yet enough support to actually get it done.

@mwitkow commented May 25, 2017

Environment

Standard pre-built docker container.

Dashboard version: 1.5.1
Kubernetes version: 1.5.3
Feature Request

It would be fantastic to be able to leverage the Kubernetes Dashboard as a command-and-control hub for all your services by cross-linking the relevant resources for a Service, StatefulSet or Pod.

This could be done either through:

  • forking the dashboard and customizing the Angular templates to add these links
  • adding annotations to Pods/Services/StatefulSets
Use cases
Adding links to /debug pages of Pods

A lot of Go servers expose /debug pages, e.g. /debug/requests. It would be fantastic for users to be able to add links to the relevant endpoints using the Pod's DNS address.

For example, an annotation for Pod:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    alpha.dashboard.kubernetes.io/links/pod: |
      {
        "requests":"http://{{.pod.dns_name}}:9090/debug/requests",
        "pprof":"http://{{.pod.dns_name}}:9090/debug/pprof"
      }
Adding links to monitoring dashboards for Services

It would be fantastic to be able to link from a Service page to other, richer monitoring services. For example, with Grafana you can cross-link to a dashboard using templating variables.

For example, an annotation for a Service:

apiVersion: v1
kind: Service
metadata:
  annotations:
    alpha.dashboard.kubernetes.io/links/service: |
      {
        "monitoring":"https://mymonitoring.example.com/dashboard/file/service.json?var.cluster={{.cluster.name}}&var.job={{.service.name}}"
      }
  labels:
    component: myservice
    k8s-app: myservice
  name: myservice
  namespace: myteam

It would be fantastic if it were generic enough to access any property of the Pod or Service definition, but simple things such as the DNS name, cluster name and Pod/Service/StatefulSet name would be enough.

We would be happy to contribute support for this feature :)

@floreks added the kind/feature label May 25, 2017
@mwitkow (Author) commented May 31, 2017

@floreks any feedback on this FR? We're actually considering doing something like this ourselves and would rather do it in a way that could be upstreamed.

@cheld (Contributor) commented May 31, 2017

Dashboard can already show simple links (using labels, I guess). Maybe that's a starting point.

@floreks (Member) commented Jun 1, 2017

I like the general idea. I don't see any reason why we couldn't have that upstream. It is generic enough, and templating variables like in Grafana could be used by users in many different ways.

You are welcome to work on that, as I think we won't be able to for some time.

@maciaszczykm (Member) commented

I agree with the others, it sounds generic enough to push upstream.

@maciaszczykm added the help wanted and priority/P3 labels Jun 1, 2017
@cheld (Contributor) commented Jun 2, 2017

In general I like the idea.

I think the proposal for pods will not work exactly as you suggest.

apiVersion: v1
kind: Pod
metadata:
  annotations:
    alpha.dashboard.kubernetes.io/links/pod: |
      {
        "requests":"http://{{.pod.dns_name}}:9090/debug/requests",
        "pprof":"http://{{.pod.dns_name}}:9090/debug/pprof"
      }

First, pods are not reachable from outside the cluster; second, I am not sure pods have DNS entries, do they? So I think the annotation should rather be moved to the Service resource.

Let's make the use case a bit more generic: let's say you deploy WordPress. In case you create a service that is externally reachable (type LoadBalancer), we already create a clickable link.

This behavior has two problems:

  1. We do not know if the endpoint is a web page or even REST, so the link might not make sense in some cases.
  2. We do not create a link to the admin page (/admin).

So, some meta-tags that describe the service for dashboard would make sense.

The second use case, integration with a monitoring tool, I would suggest solving a bit differently.
We are working on two new features (settings and plugins) that would help make the experience better.

So, my proposal would be to add a URL setting on our global settings page (based on a ConfigMap) to add a link to every pod. If a user enabled the Prometheus plugin, we could automatically add this URL setting. That way all pods would automatically be linked to Prometheus without changing any pod definition.
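A minimal sketch of what such a settings ConfigMap could look like; the name, namespace, data key, and JSON format here are all hypothetical, not an existing Dashboard API:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Hypothetical name and namespace for the Dashboard's global settings.
  name: kubernetes-dashboard-settings
  namespace: kube-system
data:
  # Hypothetical key: links added to every pod detail page.
  global-pod-links: |
    {
      "prometheus": "https://prometheus.example.com/graph?var.pod={{.pod.name}}"
    }
```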

@floreks, @maciaszczykm WDYT?

@maciaszczykm (Member) commented Jun 2, 2017

@cheld You are right, there are a few things to consider, as the DNS entry or a specific monitoring tool will not always be up and running.

At the moment the only possibility is to check everything and then add the link, but as you said, we might wait for @floreks' pull request which will introduce the integrations/settings page. Then, if we are sure that for example Prometheus is installed and running, we can add links to all pods (or just to one, based on metadata from an annotation).

Still, DNS doesn't sound like a Dashboard integration, so I don't know if it would be considered one. If not, we would have to do the health check on our own, because there should be no link if it won't work.

However, I would wait for @mwitkow to elaborate a bit more on his proposal. Maybe preparing a simple proof of concept would clarify a lot of things.

@cheld (Contributor) commented Jun 2, 2017

@maciaszczykm OK, I'll try to explain my idea again.

  1. We could add some metadata to services and use this information to create links in Dashboard. This would be better than the current solution.

  2. We could add a global setting with a list of links to be added to all pods/RCs etc. This list is edited manually.

  3. In case a plugin (e.g. Scope) is activated, we automatically add an entry to the list above. Every pod would get a link to Scope. Nice integration, right?

@bryk (Contributor) commented Jun 2, 2017

This is an interesting proposal. What do you think of extending the scope of this a bit and inviting all K8s UIs to collaborate here? The idea is that if we introduce something only here, it is likely to stay niche and see low use. If something is Kubernetes-global, it is likely that service providers will integrate on their own. E.g., when you install Grafana, you get the integration for free.

@mwitkow @floreks @maciaszczykm @cheld WDYT?

@bryk self-assigned this Jun 2, 2017
@bryk (Contributor) commented Jun 2, 2017

Reference discussion: kubernetes/community#122

@bryk (Contributor) commented Jun 2, 2017

Similar request: #2008

@mwitkow (Author) commented Jun 2, 2017

@cheld

First, pods are not reachable from outside the cluster; second, I am not sure pods have DNS entries, do they? So I think the annotation should rather be moved to the Service resource.

The reason why we want per-Pod links is that a lot of servers (e.g. Elasticsearch, many Go servers) have per-instance troubleshooting interfaces. Accessing them via a Service is a bad idea: you never know which instance you're going to end up on.

And yes, there are DNS entries for Pods, see the Pods section of the DNS docs.

So, my proposal would be to add a URL setting on our global settings page (based on a ConfigMap) to add a link to every pod. If a user enabled the Prometheus plugin, we could automatically add this URL setting. That way all pods would automatically be linked to Prometheus without changing any pod definition.

The use case I'm particularly interested in is as follows:

I'm an operator of a service and I write my Deployment YAML file (with my Service and Pod templates). I do not really know what plugins are loaded into my Kubernetes Dashboard (as it is run by a different team, the one that provides Kubernetes clusters), but I know that my team's monitoring is set up at monitoring.example.com, because it uses, for example, an external monitoring service like Wavefront. Other teams have their own monitoring, and all I want is a link that is handy to me.

At the same time, I agree with @bryk's idea that certain things should be pluggable at a higher level, and would be great if certain things came for free.

So I propose: can we do both?

a) the ability to drive pod and service links from global ConfigMaps/plugins, so that cluster-wide components can "plug in"
b) the ability to drive links from annotations per resource (per Pod, per Service), so that owners of individual components can drive their own debug views

I'd be happy to get working on b), and it seems like a really easy win. I would happily partake in the discussion about a), but I'd propose we keep them separate.

@cheld (Contributor) commented Jun 4, 2017

OK, I understand. Let's focus on pod and service URLs for the moment. In general, we should aim for a generic and portable solution.

Services and pods can be accessed through the apiserver proxy, so we only have to describe the suffix path, e.g. /metrics, /admin, /ui.
We can generate the full URL from that information. I don't think we need templates for this use case.
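For illustration, such an annotation might carry only relative suffix paths, which the dashboard would prefix with the resource's apiserver proxy URL. The annotation key below mirrors the one proposed earlier in this thread and is hypothetical, not an existing API:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myservice
  annotations:
    # Hypothetical key; each value is a suffix path resolved via the apiserver
    # proxy, e.g. /api/v1/namespaces/<ns>/services/myservice/proxy/metrics.
    alpha.dashboard.kubernetes.io/links/service: |
      {
        "metrics": "/metrics",
        "admin": "/admin",
        "ui": "/ui"
      }
```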

@mwitkow (Author) commented Jun 6, 2017

@cheld Thanks.

As for the apiserver proxy URL, can we make it not hard-coded and instead expose it as a template variable like {{proxy_url}}?

Users have different ways of accessing pods and don't necessarily use the apiserver proxy. For example, we have our own edge server that routes based on DNS addresses. I know other people use a VPN and would prefer access over IP without DNS.

Having the proxy URL as a template would be the best of both worlds: easy to use for most early adopters, while providing flexibility for others.

@bryk (Contributor) commented Jun 6, 2017

I like the idea of templates for the reasons @mwitkow explained. Some folks run clusters tied to their corp network and are able to see individual pods. Some may want to do this via an external LB.

@bryk (Contributor) commented Jun 6, 2017

@mwitkow I'm wondering whether we've covered all use cases for this problem. Would implementing this proposal cover all your cases? Do you see any other extensions we could cover?

@mwitkow (Author) commented Jun 6, 2017

It'd be great to have per-resource annotations that are templated, yes! :)

Just to clarify, we're talking about annotations that have template variables for:

  • the values of all nested fields (for full flexibility), or alternatively just the most important ones like name, namespace, IP
  • the DNS name of a service or a pod (according to the DNS spec), though these can be implemented with just templates over the full nested fields
  • a template variable that expands to an auto-generated URL to the API server's proxy endpoint
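Putting the three kinds together, a single annotation could mix them. This is purely illustrative: the annotation key follows the proposal above, and the field paths and variable names are placeholders, not an implemented Dashboard API:

```yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    # Hypothetical key and variables: nested fields, pod DNS name, proxy URL.
    alpha.dashboard.kubernetes.io/links/pod: |
      {
        "logs": "https://logs.example.com/?ns={{.metadata.namespace}}&pod={{.metadata.name}}",
        "debug": "http://{{.pod.dns_name}}:9090/debug/requests",
        "pprof": "{{proxy_url}}/debug/pprof"
      }
```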

@cheld (Contributor) commented Jun 7, 2017

@mwitkow I think we can move forward as you suggested.

However, I am thinking about it in terms of absolute/relative URL paths. A URL like http://{{.pod.dns_name}}:9090/debug/requests is kind of an absolute URL, while /debug/requests is kind of a relative path. Such absolute URLs harm the portability of pod definitions.

  • Already, this might be an issue between your Minikube development environment and staging/production. Links might have to be adapted to make them work.

  • If you deploy a Helm chart with embedded links, they will not work (even if the URLs are relative), because they do not fit your environment.

  • We want to add a plugin mechanism. If we generate links for integrations, they might not work either.

In general, I don't object to absolute URLs as long as they are not the default or recommended.

In the documentation I would suggest making {{pod-proxy-url}}/debug/requests the default. In a subsequent PR we can add a global setting to change the proxy value to something else (e.g. pod-proxy-url=http://{{pod.dns-name}}:{{pod.port}}). So, we can move forward and improve the behavior later on.
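The two steps above could be sketched like this; both the annotation key and the setting name are placeholders from this thread, not an implemented API:

```yaml
# Step 1: annotation using the recommended proxy-based default.
apiVersion: v1
kind: Pod
metadata:
  annotations:
    alpha.dashboard.kubernetes.io/links/pod: |
      {
        "requests": "{{pod-proxy-url}}/debug/requests"
      }
---
# Step 2 (subsequent PR): hypothetical global setting that changes
# what {{pod-proxy-url}} expands to.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubernetes-dashboard-settings
data:
  pod-proxy-url: "http://{{pod.dns-name}}:{{pod.port}}"
```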

@mwitkow (Author) commented Jun 7, 2017

@cheld I see. These make sense.

I think we're mostly agreed on the semantics. The only thing is that I don't think we need the extra pod-proxy-url variable; instead, we could allow users to use template variables to define absolute URLs.

How about in documentation examples we use myLinkName: "{{pod-proxy-url-base}}/debug/requests" and outline that as the recommended solution (in documentation, Helm charts, etc.), but at the same time allow the user to set myLinkName: "http://{{pod.dns-name}}:{{pod.port}}/debug/requests" if they prefer that?

However, users who don't care can still

@cheld (Contributor) commented Jun 8, 2017

Yes, we can start as in your example. (In a future step we could make {{pod-proxy-url-base}} configurable, maybe.)

@cheld (Contributor) commented Jul 13, 2017

/CC @kenan435

@therc (Member) commented Jul 27, 2017

This is something I really miss from the Borg UI. In that case, it's the pod that writes out a file with a bunch of links, but that's mainly because Borg has no labels or annotations (and, I think, the feature predates Borg itself). My vote would be for cluster-wide settings like pod-proxy-url-base to be in a ConfigMap.

@kenan435 (Contributor) commented Aug 2, 2017

I'm looking at this issue and have started some development work. @mwitkow, do you think you can have something committed anytime soon? If not, I'll take it :)

@marusak commented Sep 12, 2017

Is there any progress on this? #2332 is quite similar and I am interested in whether and how it fits the current design.

@maciaszczykm (Member) commented

@marusak Check #2249.

@fejta-bot commented

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jan 5, 2018
@fejta-bot commented

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Feb 9, 2018
@maciaszczykm added the lifecycle/frozen label Feb 27, 2018
@maciaszczykm removed the lifecycle/rotten label Oct 11, 2018
@maciaszczykm added the priority/awaiting-more-evidence label and removed the priority/P3 label Nov 9, 2018
@zoidyzoidzoid commented

Something like this would also be super useful in a multi-cluster setup, to link to objects owned by another cluster.
