Add DomainInternal to status for Route and Service #1586
Conversation
We should probably update these: https://github.com/knative/serving/blob/master/docs/spec/spec.md#route
@@ -131,6 +131,12 @@ type RouteStatus struct {
// +optional
Domain string `json:"domain,omitempty"`

// ServiceName holds the name of a core Kubernetes Service resource that
// fonts this Route, this service would be an appropriate ingress target for
fronts?
@@ -134,6 +134,13 @@ type ServiceStatus struct {
// +optional
Domain string `json:"domain,omitempty"`

// From RouteStatus.
// ServiceName holds the name of a core Kubernetes Service resource that
// fonts this Route, this service would be an appropriate ingress target for
fronts?
@mattmoor PTAL
Since you're modifying the https://github.com/knative/serving/tree/master/docs/spec#knative-serving-api-spec
/approve
/lgtm
/hold Just saw @dprotaso's comment
/lgtm
@mattmoor added a conformance test, PTAL
/lgtm
@@ -131,6 +131,12 @@ type RouteStatus struct {
// +optional
Domain string `json:"domain,omitempty"`

// ServiceName holds the name of a core Kubernetes Service resource that
Could this also be a VirtualService? Since it's a string there's really no difference from this point, but just curious if we should document it somewhere?
A VirtualService maps traffic from one Service to another Service, so there is still a core Service created by the route controller. The VirtualService is an implementation detail behind the Service, much like a Deployment is; as far as the sender knows, they are talking to a Service.
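To ground that, here is a rough sketch of the kind of selector-less placeholder Service the route controller creates in front of the VirtualService; the port, package layout, and exact field values are assumptions for illustration, not the controller's actual output.

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// placeholderService sketches the core Service that fronts a Route; actual
// routing is done by the Istio VirtualService behind it, as described above.
func placeholderService(routeName, namespace string) *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      routeName, // the name this PR surfaces in status
			Namespace: namespace,
		},
		Spec: corev1.ServiceSpec{
			// No selector here (an assumption of this sketch): the mesh,
			// not plain endpoints, decides where requests actually land.
			Ports: []corev1.ServicePort{{Name: "http", Port: 80}},
		},
	}
}
```

The ServiceName (and later DomainInternal) added by this PR is how callers find that Service without reimplementing the controller's naming.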
/lgtm
Question: how does serviceName differ from domain? From my POV:
- Both should be DNS-resolvable names
- Both should be reachable within the cluster
- Both should apply Istio rules
I believe @tcnghia is already creating a k8s service for existing VirtualServices. If we want an "internal only" name, we could call it internalDomain, but I'd rather pretend that the DNS names presented between knative/serving and knative/eventing are unified within a single k8s cluster (which is as far as an ObjectReference can reach).
Is this solely about getting a shorter hostname for the dispatcher to make HTTP calls to?
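To make the comparison concrete, here is a rough sketch of the two addresses under discussion as an in-cluster caller (e.g. an eventing dispatcher) might construct them. The ".svc.cluster.local" form follows the wording quoted later in this thread; the helper itself is illustrative and not part of the codebase.

```go
package sketch

import (
	"fmt"

	"github.com/knative/serving/pkg/apis/serving/v1alpha1"
)

// urlsFor contrasts the two names: Domain resolves via external DNS and the
// ingress, while the in-cluster name only needs cluster DNS but forces the
// caller to know the serving controller's Service naming convention.
func urlsFor(route *v1alpha1.Route) (external, internal string) {
	external = "http://" + route.Status.Domain
	internal = fmt.Sprintf("http://%s.%s.svc.cluster.local",
		route.Status.ServiceName, route.Namespace)
	return external, internal
}
```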
pkg/controller/route/cruds.go
Outdated
@@ -81,6 +81,7 @@ func (c *Controller) reconcilePlaceholderService(ctx context.Context, route *v1a
return err
}
logger.Infof("Created service %s", name)
route.Status.ServiceName = resourcenames.K8sService(route)
Why doesn't this use desiredService.Name?
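For reference, the alternative the question suggests would look roughly like this; it is a sketch only, where desiredService stands for the Service object the surrounding reconcile code just created.

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"

	"github.com/knative/serving/pkg/apis/serving/v1alpha1"
)

// setServiceName records the backing Service's name on the Route status from
// the object that was just reconciled, instead of recomputing it via
// resourcenames.K8sService(route).
func setServiceName(route *v1alpha1.Route, desiredService *corev1.Service) {
	route.Status.ServiceName = desiredService.Name
}
```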
@evankanderson funny enough I started this work by adding an
I generally agree with your points. The difference is that domain is set using external DNS. This has two major drawbacks:
The length isn't my concern; it's instead having a resource I know I can reliably address from inside the cluster without needing to know the naming conventions used by the serving controller.
For #1, would defaulting to

With respect to whether the name is reachable outside the cluster, I wouldn't expect that traffic sent directly to a Revision for any purpose would necessarily work (for example, we might scale down to zero, remove the Revision's service, and instead route that traffic to an activator). If we're treating
I'm 👍 on creating a
For resources that own a core Kubernetes Service, the name of the service should be exposed on the resource's status. The Revision resource already exposes `.status.serviceName`; this PR adds the same property for the Route and Knative Service resources. Yes, it's a bit strange to have a Service include the Service name in its status, ¯\_(ツ)_/¯

Fixes: #1584
- typos
- added status property to spec.md
/test pull-knative-serving-integration-tests
@evankanderson I added it. I skipped conformance tests for the new property since it's well covered by unit tests and we'd need to resolve DNS from inside the cluster. The best method I could figure out for that is to deploy a pod into the cluster with nslookup or dig and then
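For context, a rough sketch of the kind of throwaway pod that approach implies; the busybox image, the nslookup command, and the names are assumptions for illustration.

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// dnsProbePod sketches a one-shot pod that resolves an in-cluster name from
// inside the cluster.
func dnsProbePod(namespace, hostname string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "dns-probe-",
			Namespace:    namespace,
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "probe",
				Image:   "busybox",
				Command: []string{"nslookup", hostname},
			}},
		},
	}
}
```

A test would create such a pod, wait for it to terminate, and fail if the container exited non-zero.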
We should (separately) probably do better about making sure that the cluster is in a proper state with respect to DNS. One thing I'm thinking of trying out would be to change the

With respect to having the spec include a service name vs a DNS name, my preference would be for using (only) a DNS name, rather than both a DNS name and a service name. The service name seems like it:
It sounds like Ville is not a huge fan of using
Opened #1598 for the serving changes to make DNS usually work.
@evankanderson I'm on board with using a DNS name instead of a service name; however, I don't like conflating internal and external DNS names. If you know you're talking to something inside the cluster, it seems better to use an internal DNS name:
This PR currently uses
I'm happy to approve the DomainInternal part, but let's keep k8s Service out (particularly of Knative Service).
For v1alpha2, we might want to rename domain and domainInternal to something like dnsName and localDnsName, but let's keep DomainInternal or LocalDomain names in v1alpha1 for parallelism.
docs/spec/spec.md
Outdated
@@ -78,6 +78,10 @@ status:
# along with a cluster-specific prefix (here, mydomain.com).
domain: my-service.default.mydomain.com

# serviceName: The name for the core Kubernetes Service that fronts this
# route. Typically, the name will be the same as the name of the route.
serviceName: my-service
Change this to domainInternal (or domainLocal), documented as:

A DNS name for the default (traffic-split) route which can be accessed without leaving the cluster environment.
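In Go terms, that suggestion would leave a Route's status carrying both names, roughly as below; the values are made up and only illustrate the contrast.

```go
package sketch

import "github.com/knative/serving/pkg/apis/serving/v1alpha1"

// exampleStatus pairs the externally resolvable Domain with the suggested
// in-cluster DomainInternal; only cluster DNS can resolve the latter.
var exampleStatus = v1alpha1.RouteStatus{
	Domain:         "my-service.default.mydomain.com",
	DomainInternal: "my-service.default.svc.cluster.local",
}
```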
# serviceName: The name for the core Kubernetes Service that fronts this
# revision. Typically, the name will be the same as the name of the
# revision.
serviceName: myservice-a1e34
I don't think we want a serviceName/domainInternal on Revision -- a Revision may have zero traffic and be non-routable if it is not addressed by any Route (and clients should not assume that they will know when that transition may happen). If clients want a name for a specific Revision, they should use the traffic.name subrouting feature in Route.
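A brief sketch of the traffic.name subrouting being referred to, using the v1alpha1 types as I understand them; the second revision name and the percentages are made up for illustration.

```go
package sketch

import "github.com/knative/serving/pkg/apis/serving/v1alpha1"

// namedTargets sketches a Route's traffic block where each Revision gets its
// own named subroute; a client needing a stable per-Revision name would rely
// on that name rather than a Revision-level serviceName/domainInternal.
var namedTargets = []v1alpha1.TrafficTarget{
	{Name: "current", RevisionName: "myservice-a1e34", Percent: 100},
	{Name: "candidate", RevisionName: "myservice-b2f45", Percent: 0},
}
```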
ServiceName predates this PR on RevisionStatus, but was missing from the spec.md. I left the docs since the values should be documented.
That sounds fine, as long as we don't suggest this is a good way to reach the revision. Thanks for documenting!
// over the provided targets from inside the cluster. It generally has the
// form {revision-name}.{revision-namespace}.svc.cluster.local
// +optional
DomainInternal string `json:"domainInternal,omitempty"`
I don't think we should add a DomainInternal to Revision -- this feels like something which should be managed through the Route if needed.
// fronts this Route, this service would be an appropriate ingress target
// for targeting the revision.
// +optional
ServiceName string `json:"serviceName,omitempty"`
Only DomainInternal here.
// fronts this Route, this service would be an appropriate ingress target
// for targeting the revision.
// +optional
ServiceName string `json:"serviceName,omitempty"`
Ditto.
test/conformance/route_test.go
Outdated
@@ -106,6 +106,14 @@ func assertResourcesUpdatedWhenRevisionIsReady(t *testing.T, logger *zap.Sugared
if err != nil {
	t.Fatalf("The Route %s was not updated to route traffic to the Revision %s: %v", names.Route, names.Revision, err)
}
logger.Infof("The Route has a core Kubernetes Service referenced in its status")
err = test.CheckRouteState(clients.Routes, names.Route, func(r *v1alpha1.Route) (bool, error) {
	err := test.CheckCoreServiceState(clients.Kube.CoreV1().Services(r.Namespace), r.Status.ServiceName, test.IsCoreServiceCreated)
Switch this to check DomainInternal?

Should this test also attempt to actually reach the DomainInternal DNS name? My preference would be "yes", since we're adding this to verify that there will always be a name that will be within-cluster reachable for the Route.
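A sketch of the predicate such a check could poll with via CheckRouteState; the suffix assertion assumes the *.svc.cluster.local form discussed above, and the helper is illustrative rather than part of the test package.

```go
package sketch

import (
	"strings"

	"github.com/knative/serving/pkg/apis/serving/v1alpha1"
)

// hasInternalDomain reports whether the Route's status carries an in-cluster
// name of the expected form; a conformance test could poll with this and then
// attempt to actually reach the name from inside the cluster.
func hasInternalDomain(r *v1alpha1.Route) (bool, error) {
	if r.Status.DomainInternal == "" {
		return false, nil // controller has not populated it yet
	}
	return strings.HasSuffix(r.Status.DomainInternal, ".svc.cluster.local"), nil
}
```

A reachability check could then reuse the probe-pod idea sketched earlier to resolve and hit that name from within the cluster.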
@evankanderson @vaikas-google PTAL
@@ -131,6 +131,12 @@ type RouteStatus struct {
// +optional
Domain string `json:"domain,omitempty"`

// DomainInternal holds the top-level domain that will distribute traffic over the provided
The field name DomainInternal sounds very odd to me. How about InClusterDomain or DomainInCluster?
I would avoid "internal" because it's not clear what internal is referring to.
This is for consistency with Domain. We'll change these later, but to minimize the name changes I wanted to keep them consistent for now. OK with changing in a follow-on PR?
I'm most definitely OK with it as a follow-on :)
I think we'll rename this and "domain" in v1alpha2. (Ville doesn't like it either, and I agree that it sounds a little weird given what it actually does.) Up here in Seattle, we've bandied around `dnsName` and `localDnsName` as terms for these, since "internal" is unclear whether it's "internal to the implementation", "internal to the cluster", or "internal to the customer's deployed environment" (which may be != a single cluster).
Kubernetes 1.11 has beta support for CRD versioning, which will enable transitioning cleanly from v1alpha1 to v1alpha2.
I still dream of unified in-cluster and out-of-cluster naming, but this LGTM for now. @cooperneil may be interested in this API addition as well.
/lgtm
/approve
@@ -367,7 +376,11 @@ status:
# domain: The hostname used to access the default (traffic-split)
# route. Typically, this will be composed of the name and namespace
# along with a cluster-specific prefix (here, mydomain.com).
domain: my-service.default.mydomain.com
domain: myservice.default.mydomain.com
Hah, thanks for fixing this name!
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: evankanderson, mattmoor, scothis

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
What happened to the conformance tests?

/lgtm cancel
@evankanderson the conformance tests were for the ServiceName property, per #1586 (comment).
The following is the coverage report on pkg/.
/lgtm
Resources exposed by a DNS name outside of the cluster should also have a DNS name that can target them from inside the cluster. Routes and Services (which already have a Domain status property) now include a DomainInternal status property.

Fixes: #1584

Proposed Changes

- Add `.status.domainInternal` to Service and Route

Release Note
/assign @vaikas-google