
fix(port): Update grpc executor target port #2131

Merged
merged 1 commit into SeldonIO:master on Jul 21, 2020

Conversation

groszewn
Contributor

@groszewn commented Jul 14, 2020

Update to have gRPC traffic forwarded to the gRPC target port.

Signed-off-by: Nick Groszewski [email protected]

What this PR does / why we need it:

Which issue(s) this PR fixes:

Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

Fixed gRPC port name in executor.

@seldondev
Collaborator

Hi @groszewn. Thanks for your PR.

I'm waiting for a SeldonIO member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the jenkins-x/lighthouse repository.

@ukclivecox requested a review from glindsell July 14, 2020 16:51
@groszewn
Contributor Author

Services created by 1.2.1

Service resource created for the custom named service

apiVersion: v1
kind: Service
...
spec:
  clusterIP: xx.xx.xx.xx
  ports:
  - name: http
    port: 8000
    protocol: TCP
    targetPort: 8000
  - name: grpc
    port: 5001
    protocol: TCP
    targetPort: 8000
  selector:
    seldon-app: test
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Service resource created for the regular service name (non-custom)

apiVersion: v1
kind: Service
...
spec:
  clusterIP: xx.xx.xx.xx
  ports:
  - name: grpc
    port: 9000
    protocol: TCP
    targetPort: 9000
  selector:
    seldon-app-svc: test-test-test
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Services created by 1.1.0

If we compare this to the custom service resource created with 1.1.0, we get the following

apiVersion: v1
kind: Service
...
spec:
  clusterIP: xx.xx.xx.xx
  ports:
  - name: http
    port: 8000
    protocol: TCP
    targetPort: 8000
  - name: grpc
    port: 5001
    protocol: TCP
    targetPort: 5001
  selector:
    seldon-app: test
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

@glindsell
Contributor

@groszewn thanks for creating the PR. However, the change to use the http port as the targetPort for the executor was only introduced because the executor now multiplexes all traffic on the http port.

What is the issue you are seeing on the Service resource created for the custom-named service that you believe is caused by this change?

@groszewn
Contributor Author

groszewn commented Jul 14, 2020

Gotcha, understood. Let me lay out what I've been testing. For all of the tests, I just have a "model" built with Seldon 1.2.1 that returns the identity function. It seems like Istio is the likely culprit, as namespaces without Istio enabled seem to work fine.

The following code is run from within a pod in the same namespace as the Seldon deployment in our cluster.

import grpc
from seldon_core.proto import prediction_pb2, prediction_pb2_grpc

channel = grpc.insecure_channel("<svc name>:<grpc port>")  # placeholders for the service DNS name and gRPC port
stub = prediction_pb2_grpc.SeldonStub(channel)

proto = prediction_pb2.SeldonMessage()
proto.strData = "dfasdf"
stub.Predict(proto)

Non-Istio Enabled Custom Service Endpoint (no sidecars)

Successful response

>>> channel = grpc.insecure_channel("<custom svc name>.<non-istio namespace>:5001")
...
>>> proto.strData = "dfasdf"
>>> stub.Predict(proto)
meta {
}
strData: "dfasdf"

Non-Istio Enabled Standard Service Endpoint (no sidecars)

Successful response

>>> channel = grpc.insecure_channel("<standard svc name>.<non-istio namespace>:9000")
...
>>> proto.strData = "dfasdf"
>>> stub.Predict(proto)
meta {
}
strData: "dfasdf"

Istio Enabled Custom Service Endpoint (with sidecar)

>>> channel = grpc.insecure_channel("<custom svc name>.<istio-enabled namespace>:5001")
...
>>> proto.strData = "dfasd"
>>> stub.Predict(proto)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/conda/envs/microservice/lib/python3.8/site-packages/grpc/_channel.py", line 826, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/opt/conda/envs/microservice/lib/python3.8/site-packages/grpc/_channel.py", line 729, in _end_unary_response_blocking
    raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
        status = StatusCode.UNIMPLEMENTED
        details = "Received http2 header with status: 404"
        debug_error_string = "{"created":"@1594748074.864071375","description":"Received http2 :status header with non-200 OK status","file":"src/core/ext/filters/http/client/http_client_filter.cc","file_line":130,"grpc_message":"Received http2 header with status: 404","grpc_status":12,"value":"404"}"
>
>>>

Istio Enabled Standard Service Endpoint (with sidecar)

>>> channel = grpc.insecure_channel("<standard svc name>.<istio-enabled namespace>:9000")
...
>>> proto.strData = "dfasd"
>>> stub.Predict(proto)
meta {
}
strData: "dfasdf"

@groszewn
Contributor Author

I'm a little confused as to what differs between the service call to the custom service endpoint (5001 -> 8000) and the standard service endpoint (9000 -> 9000). I would expect gRPC traffic to perform similarly for both standard and custom service names.

@glindsell
Contributor

@groszewn Thanks for all the info! Sounds like a problem only when using sidecars. Are you able to provide the sidecar yaml?

@groszewn
Contributor Author

@glindsell The sidecar is auto-injected by Istio, but I can share the pod resource that's created if that helps.

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/psp: restricted
    prometheus.io/path: /prometheus
    prometheus.io/scrape: "true"
    seccomp.security.alpha.kubernetes.io/pod: runtime/default
    seldon.io/svc-name: toymodel-maas
    sidecar.istio.io/status: '{"version":"023ae377d1a8981380141286422d04a98c39883f0647804d1ec0a7b5683da18d","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
    toymodel-maas: ""
  creationTimestamp: "2020-07-15T15:26:26Z"
  generateName: toymodel-maas-toymodel-maas-0-toymodel-maas-9fd96774b-
  labels:
    app: toymodel-maas-toymodel-maas-0-toymodel-maas
    app.kubernetes.io/managed-by: seldon-core
    fluentd: "true"
    pod-template-hash: 9fd96774b
    security.istio.io/tlsMode: istio
    seldon-app: toymodel-maas
    seldon-app-svc: toymodel-maas-toymodel-maas-toymodel-maas
    seldon-deployment-id: toymodel-maas
    version: toymodel-maas
  name: toymodel-maas-toymodel-maas-0-toymodel-maas-9fd96774b-2wl7w
  namespace: testns
...
spec:
  containers:
  - env:
    - name: PREDICTIVE_UNIT_SERVICE_PORT
      value: "9000"
    - name: PREDICTIVE_UNIT_ID
      value: toymodel-maas
    - name: PREDICTIVE_UNIT_IMAGE
      value: <model image>
    - name: PREDICTOR_ID
      value: toymodel-maas
    - name: PREDICTOR_LABELS
      value: '{"version":"toymodel-maas"}'
    - name: SELDON_DEPLOYMENT_ID
      value: toymodel-maas
    - name: PREDICTIVE_UNIT_METRICS_SERVICE_PORT
      value: "6000"
    - name: PREDICTIVE_UNIT_METRICS_ENDPOINT
      value: /prometheus
    image: <model image>
    imagePullPolicy: IfNotPresent
    lifecycle:
      preStop:
        exec:
          command:
          - /bin/sh
          - -c
          - /bin/sleep 10
    livenessProbe:
      failureThreshold: 3
      initialDelaySeconds: 60
      periodSeconds: 5
      successThreshold: 1
      tcpSocket:
        port: grpc
      timeoutSeconds: 1
    name: toymodel-maas
    ports:
    - containerPort: 6000
      name: metrics
      protocol: TCP
    - containerPort: 9000
      name: grpc
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      initialDelaySeconds: 20
      periodSeconds: 5
      successThreshold: 1
      tcpSocket:
        port: grpc
      timeoutSeconds: 1
    resources:
      limits:
        cpu: "1"
        memory: 1Gi
      requests:
        cpu: "1"
        memory: 1Gi
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/podinfo
      name: seldon-podinfo
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-ppdlc
      readOnly: true
  - args:
    - --sdep
    - toymodel-maas
    - --namespace
    - testns
    - --predictor
    - toymodel-maas
    - --port
    - "8000"
    - --protocol
    - seldon
    - --prometheus_path
    - /prometheus
    env:
    - name: ENGINE_PREDICTOR
      value: <Engine predictor value>
    - name: REQUEST_LOGGER_DEFAULT_ENDPOINT
      value: http://default-broker
    - name: SELDON_LOG_MESSAGES_EXTERNALLY
      value: "false"
    image: seldonio/seldon-core-executor:1.2.1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /live
        port: 8000
        scheme: HTTP
      initialDelaySeconds: 20
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 60
    name: seldon-container-engine
    ports:
    - containerPort: 8000
      protocol: TCP
    - containerPort: 8000
      name: metrics
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /ready
        port: 8000
        scheme: HTTP
      initialDelaySeconds: 20
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 60
    resources:
      requests:
        cpu: 100m
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsUser: 8888
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/podinfo
      name: seldon-podinfo
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-ppdlc
      readOnly: true
  - args:
    - proxy
    - sidecar
    - --domain
    - $(POD_NAMESPACE).svc.cluster.local
    - --configPath
    - /etc/istio/proxy
    - --binaryPath
    - /usr/local/bin/envoy
    - --serviceCluster
    - toymodel-maas-toymodel-maas-0-toymodel-maas.$(POD_NAMESPACE)
    - --drainDuration
    - 45s
    - --parentShutdownDuration
    - 1m0s
    - --discoveryAddress
    - istio-pilot.istio-system:15010
    - --zipkinAddress
    - zipkin.istio-system:9411
    - --proxyLogLevel=info
    - --dnsRefreshRate
    - 300s
    - --connectTimeout
    - 10s
    - --proxyAdminPort
    - "15000"
    - --controlPlaneAuthPolicy
    - NONE
    - --statusPort
    - "15020"
    - --applicationPorts
    - 6000,9000,8000,8000
    - --concurrency
    - "2"
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: ISTIO_META_POD_PORTS
      value: |-
        [
            {"name":"metrics","containerPort":6000,"protocol":"TCP"}
            ,{"name":"grpc","containerPort":9000,"protocol":"TCP"}
            ,{"containerPort":8000,"protocol":"TCP"}
            ,{"name":"metrics","containerPort":8000,"protocol":"TCP"}
        ]
    - name: ISTIO_META_CLUSTER_ID
      value: Kubernetes
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: INSTANCE_IP
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.podIP
    - name: SERVICE_ACCOUNT
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: spec.serviceAccountName
    - name: ISTIO_META_POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: ISTIO_META_CONFIG_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: SDS_ENABLED
      value: "false"
    - name: ISTIO_META_INTERCEPTION_MODE
      value: REDIRECT
    - name: ISTIO_META_INCLUDE_INBOUND_PORTS
      value: 6000,9000,8000,8000
    - name: ISTIO_METAJSON_ANNOTATIONS
      value: |
        {"kubernetes.io/psp":"restricted","prometheus.io/path":"/prometheus","prometheus.io/scrape":"true","seccomp.security.alpha.kubernetes.io/pod":"runtime/default","seldon.io/svc-name":"toymodel-maas","toymodel-maas":""}
    - name: ISTIO_METAJSON_LABELS
      value: |
        {"app":"toymodel-maas-toymodel-maas-0-toymodel-maas","app.kubernetes.io/managed-by":"seldon-core","fluentd":"true","pod-template-hash":"9fd96774b","seldon-app":"toymodel-maas","seldon-app-svc":"toymodel-maas-toymodel-maas-toymodel-maas","seldon-deployment-id":"toymodel-maas","version":"toymodel-maas"}
    - name: ISTIO_META_WORKLOAD_NAME
      value: toymodel-maas-toymodel-maas-0-toymodel-maas
    - name: ISTIO_META_OWNER
      value: <>
    imagePullPolicy: IfNotPresent
    name: istio-proxy
    ports:
    - containerPort: 15090
      name: http-envoy-prom
      protocol: TCP
    readinessProbe:
      failureThreshold: 30
      httpGet:
        path: /healthz/ready
        port: 15020
        scheme: HTTP
      initialDelaySeconds: 1
      periodSeconds: 2
      successThreshold: 1
      timeoutSeconds: 1
    resources:
      limits:
        cpu: "2"
        memory: 1Gi
      requests:
        cpu: 500m
        memory: 256Mi
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      privileged: false
      readOnlyRootFilesystem: true
      runAsGroup: 1337
      runAsNonRoot: true
      runAsUser: 1337
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/istio/proxy
      name: istio-envoy
    - mountPath: /etc/certs/
      name: istio-certs
      readOnly: true
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-ppdlc
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  initContainers:
  - command:
    - istio-iptables
    - -p
    - "15001"
    - -z
    - "15006"
    - -u
    - "1337"
    - -m
    - REDIRECT
    - -i
    - '*'
    - -x
    - ""
    - -b
    - '*'
    - -d
    - "15020"
    image: istio/proxyv2:1.4.3
    imagePullPolicy: IfNotPresent
    name: istio-init
    resources:
      limits:
        cpu: 100m
        memory: 50Mi
      requests:
        cpu: 10m
        memory: 10Mi
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
        drop:
        - ALL
      privileged: false
      readOnlyRootFilesystem: false
      runAsGroup: 0
      runAsNonRoot: false
      runAsUser: 0
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-ppdlc
      readOnly: true
  priority: 1
  priorityClassName: default-priority
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1
    runAsUser: 8888
    supplementalGroups:
    - 1
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 20
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - downwardAPI:
      defaultMode: 420
      items:
      - fieldRef:
          apiVersion: v1
          fieldPath: metadata.annotations
        path: annotations
    name: seldon-podinfo
  - name: default-token-ppdlc
    secret:
      defaultMode: 420
      secretName: default-token-ppdlc
  - emptyDir:
      medium: Memory
    name: istio-envoy
  - name: istio-certs
    secret:
      defaultMode: 420
      optional: true
      secretName: istio.default
...

Based on our discussion
[here](https://seldondev.slack.com/archives/C8Y9A8G0Y/p1594744362472100).

The port for gRPC traffic is currently named `grpc` (reasonably enough).
With the new pattern of multiplexing http/grpc traffic on the http port,
issues have popped up when sending gRPC traffic to a service defined by
`seldon.io/svc-name`, because the Istio sidecar infers the protocol from
the port name and then only allows gRPC traffic through (more info
[here](https://istio.io/latest/docs/ops/configuration/traffic-management/protocol-selection/)).

This fix renames the gRPC port from `grpc` to `http2` so that traffic can
be multiplexed when Istio sidecars are used, while still allowing Istio
gRPC metrics to be generated (more info
[here](https://istio.io/latest/docs/reference/config/policy-and-telemetry/metrics/#metrics)).

Signed-off-by: Nick Groszewski <[email protected]>
@ukclivecox
Contributor

/ok-to-test

@seldondev
Collaborator

Wed Jul 15 17:22:40 UTC 2020
The logs for [pr-build] [1] will show after the pipeline context has finished.
https://github.com/SeldonIO/seldon-core/blob/gh-pages/jenkins-x/logs/SeldonIO/seldon-core/PR-2131/1.log

impatient try
jx get build logs SeldonIO/seldon-core/PR-2131 --build=1

@ukclivecox
Contributor

/test notebooks

@seldondev
Collaborator

Wed Jul 15 17:22:49 UTC 2020
The logs for [lint] [2] will show after the pipeline context has finished.
https://github.com/SeldonIO/seldon-core/blob/gh-pages/jenkins-x/logs/SeldonIO/seldon-core/PR-2131/2.log

impatient try
jx get build logs SeldonIO/seldon-core/PR-2131 --build=2

@seldondev
Collaborator

Wed Jul 15 17:24:29 UTC 2020
The logs for [notebooks] [3] will show after the pipeline context has finished.
https://github.com/SeldonIO/seldon-core/blob/gh-pages/jenkins-x/logs/SeldonIO/seldon-core/PR-2131/3.log

impatient try
jx get build logs SeldonIO/seldon-core/PR-2131 --build=3

@glindsell
Contributor

/test integration

@seldondev
Collaborator

Wed Jul 15 17:55:37 UTC 2020
The logs for [integration] [4] will show after the pipeline context has finished.
https://github.com/SeldonIO/seldon-core/blob/gh-pages/jenkins-x/logs/SeldonIO/seldon-core/PR-2131/4.log

impatient try
jx get build logs SeldonIO/seldon-core/PR-2131 --build=4

@ukclivecox
Contributor

/test notebooks
/test integration

@seldondev
Collaborator

Thu Jul 16 09:34:00 UTC 2020
The logs for [notebooks] [5] will show after the pipeline context has finished.
https://github.com/SeldonIO/seldon-core/blob/gh-pages/jenkins-x/logs/SeldonIO/seldon-core/PR-2131/5.log

impatient try
jx get build logs SeldonIO/seldon-core/PR-2131 --build=5

@seldondev
Collaborator

Thu Jul 16 09:34:11 UTC 2020
The logs for [integration] [6] will show after the pipeline context has finished.
https://github.com/SeldonIO/seldon-core/blob/gh-pages/jenkins-x/logs/SeldonIO/seldon-core/PR-2131/6.log

impatient try
jx get build logs SeldonIO/seldon-core/PR-2131 --build=6

@ukclivecox
Contributor

@groszewn Would it be safer to add a new port to the SVC with name http2 rather than change the existing grpc port?

@groszewn
Contributor Author

groszewn commented Jul 16, 2020

@cliveseldon I think that sounds like a good idea. Are you thinking that change should propagate all the way out to the helm chart values and CRD? And any preference on a default port for http2?

@ukclivecox
Contributor

@groszewn I was thinking we can use the same port. So just duplicate the existing setting and add a new name?

@glindsell
Contributor

/test notebooks

@seldondev
Collaborator

Thu Jul 16 13:05:06 UTC 2020
The logs for [notebooks] [7] will show after the pipeline context has finished.
https://github.com/SeldonIO/seldon-core/blob/gh-pages/jenkins-x/logs/SeldonIO/seldon-core/PR-2131/7.log

impatient try
jx get build logs SeldonIO/seldon-core/PR-2131 --build=7

@groszewn
Contributor Author

groszewn commented Jul 16, 2020

@cliveseldon It's currently not possible in K8s to have a single service with multiple service ports using the same port.

As an example, this manifest would lead to the following error.

apiVersion: v1
kind: Service
metadata:
  labels:
    seldon-app: test
  name: testsvc
spec:
  ports:
  - name: http
    port: 8000
    protocol: TCP
    targetPort: 8000
  - name: grpc
    port: 5001
    protocol: TCP
    targetPort: 8000
  - name: http2
    port: 5001
    protocol: TCP
    targetPort: 8000
  selector:
    seldon-app: test
  type: ClusterIP

The Service "testsvc" is invalid: spec.ports[2]: Duplicate value: core.ServicePort{Name:"", Protocol:"TCP", Port:5001, TargetPort:intstr.IntOrString{Type:0, IntVal:0, StrVal:""}, NodePort:0}

However, flipping the http2 port to another number would work just fine.
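
For illustration, the same manifest is accepted once http2 is given its own port number (5002 below is just a hypothetical example value, not something this PR proposes):

apiVersion: v1
kind: Service
metadata:
  labels:
    seldon-app: test
  name: testsvc
spec:
  ports:
  - name: http
    port: 8000
    protocol: TCP
    targetPort: 8000
  - name: grpc
    port: 5001
    protocol: TCP
    targetPort: 8000
  - name: http2
    # hypothetical free port number; the only change from the rejected manifest above
    port: 5002
    protocol: TCP
    targetPort: 8000
  selector:
    seldon-app: test
  type: ClusterIP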

@ukclivecox
Contributor

@groszewn ok. But that would be a breaking change. Can we clarify again what we are solving? This would apply to all exposed Services, so users are probably assuming 5001 for existing applications, and we need to take Ambassador and other configurations into account.

Also, are we sure there is no downside to using http2 as the Istio port name for gRPC?

@groszewn
Contributor Author

@cliveseldon Sure. What we're solving here is the ability for Istio sidecars to multiplex traffic along with the executor. The Istio sidecar determines the protocol from the service port name, so multiplexing would have to happen over a port named http2, since http traffic wouldn't be understood by the sidecar on a grpc-named port and vice versa.

As far as I can tell from the Istio documentation, there should be no adverse impact to using http2 with Istio (traffic management, metrics, etc. all seem to be supported in the same way).

If we took the approach of having 3 service ports (http, grpc, and http2), with http2 on a new port, would you still consider it a breaking change? Theoretically the service would just gain the additional entry, and current users would see no difference for the http or grpc ports. I'm happy to make the necessary updates to take it all the way through; just let me know which direction you think is appropriate.

@ukclivecox
Contributor

If sidecars break things at present, this would seem to be an issue for people using Istio or Ambassador. So I think we need to keep just the 2 ports?

@groszewn
Contributor Author

I think that approach should work. As far as I can tell, only Istio uses the service port name to infer the protocol. Since metrics and traffic management don't seem to be affected by just changing the name, the switch should simply allow multiplexing when there is an Istio sidecar and work as it currently does otherwise.
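
For reference, a rough sketch of what the custom Service from the 1.2.1 example above would look like with only the port name changed (illustrative only; the operator generates the actual manifest, so exact values may differ):

apiVersion: v1
kind: Service
...
spec:
  ports:
  - name: http
    port: 8000
    protocol: TCP
    targetPort: 8000
  - name: http2
    # formerly "grpc"; the Istio sidecar treats this port as HTTP/2, which gRPC runs over
    port: 5001
    protocol: TCP
    targetPort: 8000
  selector:
    seldon-app: test
  sessionAffinity: None
  type: ClusterIP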

@groszewn
Contributor Author

@cliveseldon @glindsell just wanted to check back in on this before the weekend. Anything else needed from my end? Do the pipelines need to get kicked again?

@ukclivecox
Contributor

/test integration

@ukclivecox
Contributor

/test notebooks

@ukclivecox
Contributor

/approve

@ukclivecox
Contributor

Yes all good @groszewn

@seldondev
Collaborator

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: cliveseldon

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@seldondev
Collaborator

Fri Jul 17 13:44:31 UTC 2020
The logs for [notebooks] [9] will show after the pipeline context has finished.
https://github.com/SeldonIO/seldon-core/blob/gh-pages/jenkins-x/logs/SeldonIO/seldon-core/PR-2131/9.log

impatient try
jx get build logs SeldonIO/seldon-core/PR-2131 --build=9

@seldondev
Collaborator

Fri Jul 17 13:44:42 UTC 2020
The logs for [integration] [8] will show after the pipeline context has finished.
https://github.com/SeldonIO/seldon-core/blob/gh-pages/jenkins-x/logs/SeldonIO/seldon-core/PR-2131/8.log

impatient try
jx get build logs SeldonIO/seldon-core/PR-2131 --build=8

@ukclivecox
Contributor

/test integration

@ukclivecox
Contributor

/test notebooks

@seldondev
Collaborator

Mon Jul 20 11:00:25 UTC 2020
The logs for [integration] [10] will show after the pipeline context has finished.
https://github.com/SeldonIO/seldon-core/blob/gh-pages/jenkins-x/logs/SeldonIO/seldon-core/PR-2131/10.log

impatient try
jx get build logs SeldonIO/seldon-core/PR-2131 --build=10

@seldondev
Collaborator

Mon Jul 20 11:00:45 UTC 2020
The logs for [notebooks] [11] will show after the pipeline context has finished.
https://github.com/SeldonIO/seldon-core/blob/gh-pages/jenkins-x/logs/SeldonIO/seldon-core/PR-2131/11.log

impatient try
jx get build logs SeldonIO/seldon-core/PR-2131 --build=11

@groszewn
Contributor Author

@cliveseldon is there anything I need to do here, or are the CI checks just being a little flaky? It looks like the integration tests failed on different things in the different runs, and the notebook test failure didn't look particularly related.

@ukclivecox
Contributor

/test integration

@ukclivecox
Contributor

/test notebooks

@seldondev
Collaborator

Tue Jul 21 14:45:46 UTC 2020
The logs for [integration] [12] will show after the pipeline context has finished.
https://github.com/SeldonIO/seldon-core/blob/gh-pages/jenkins-x/logs/SeldonIO/seldon-core/PR-2131/12.log

impatient try
jx get build logs SeldonIO/seldon-core/PR-2131 --build=12

@seldondev
Collaborator

Tue Jul 21 14:45:53 UTC 2020
The logs for [notebooks] [13] will show after the pipeline context has finished.
https://github.com/SeldonIO/seldon-core/blob/gh-pages/jenkins-x/logs/SeldonIO/seldon-core/PR-2131/13.log

impatient try
jx get build logs SeldonIO/seldon-core/PR-2131 --build=13

@seldondev
Collaborator

seldondev commented Jul 21, 2020

@groszewn: The following test failed, say /retest to rerun them all:

Test name | Commit | Details | Rerun command
notebooks | def3eb4 | link | /test notebooks

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the jenkins-x/lighthouse repository. I understand the commands that are listed here.

@ukclivecox
Contributor

1 flaky notebook test failed, so I will force merge.

@ukclivecox merged commit a15b95c into SeldonIO:master Jul 21, 2020
4 participants