
Commit

Merge pull request #1369 from ryandawsonuk/1293-newrequestlogging
new request logging
seldondev authored Feb 12, 2020
2 parents ef722f9 + 558f5ba commit cbcb125
Showing 29 changed files with 881 additions and 235 deletions.
3 changes: 3 additions & 0 deletions .gitignore
@@ -117,6 +117,9 @@ wrappers/python/fbs/
examples/models/onnx_resnet50/cpu_codegen/Function_0_codegen.cpp
examples/models/onnx_resnet50/resnet.onnx

#logging example
examples/centralised-logging/request-logging/istio-1.1.6

#openapi
engine/src/main/resources/static/seldon.json
api-frontend/src/main/resources/static/seldon.json
4 changes: 2 additions & 2 deletions doc/source/analytics/analytics.md
@@ -6,8 +6,8 @@ The metrics are:

**Prediction Requests**

-* ```seldon_api_executor_server_requests_duration_seconds_(bucket,count,sum) ``` : Requests to the service orchestrator from an ingress, e.g. API gateway or Ambassador
-* ```seldon_api_executor_client_requests_duration_seconds_(bucket,count,sum) ``` : Requests from the service orchestrator to a component, e.g., a model
+* ```seldon_api_executor_server_requests_seconds_(bucket,count,sum) ``` : Requests to the service orchestrator from an ingress, e.g. API gateway or Ambassador
+* ```seldon_api_executor_client_requests_seconds_(bucket,count,sum) ``` : Requests from the service orchestrator to a component, e.g., a model

Each metric has the following key-value pairs for further filtering, which will be taken from the running SeldonDeployment custom resource:

50 changes: 35 additions & 15 deletions examples/centralised-logging/README.md
@@ -4,13 +4,29 @@

Here we will set up EFK (elasticsearch, fluentd/fluentbit, kibana) as a stack to gather logs from SeldonDeployments and make them searchable.

-This demo is aimed at minikube.
+This demo is aimed at KIND or minikube but can also work with a cloud provider. It uses helm v3.

Alternatives are available, and if you are running in the cloud you can consider a managed service from your cloud provider.

If you just want to bootstrap a full logging and request tracking setup for minikube, run ./full-setup.sh. That includes the [request logging setup](./request-logging/README.md).
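The commands below assume helm v3 (as noted above); a quick way to confirm which client you have:

```
helm version --short
```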

-## Setup
+## Setup Elastic - KIND

Start cluster:

```
kind create cluster --config kind_config.yaml --image kindest/node:v1.15.6
```

Install elastic with KIND config:

```
kubectl create namespace logs
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
helm install elasticsearch elasticsearch --version 7.5.2 --namespace=logs -f elastic-kind.yaml --repo https://helm.elastic.co
```
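Elasticsearch can take a minute or two to come up. You can wait for it with the same check the setup scripts in this folder use (the statefulset name is what the chart creates):

```
kubectl rollout status statefulset/elasticsearch-master -n logs
```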

## Setup Elastic - Minikube

Start Minikube with flags as shown:

@@ -22,11 +38,10 @@ Install elasticsearch with minikube configuration:

```
kubectl create namespace logs
+helm install elasticsearch elasticsearch --version 7.5.2 --namespace=logs -f elastic-minikube.yaml --repo https://helm.elastic.co
```

-```
-helm install elasticsearch elasticsearch --version 7.1.1 --namespace=logs -f elastic-minikube.yaml --repo https://helm.elastic.co
-```

## Fluentd and Kibana

Then fluentd as a collection agent (chosen in preference to fluentbit - see notes at end):

@@ -37,7 +52,7 @@ helm install fluentd fluentd-elasticsearch --namespace=logs -f fluentd-values.ya
And kibana UI:

```
-helm install kibana kibana --version 7.1.1 --namespace=logs --set service.type=NodePort --repo https://helm.elastic.co
+helm install kibana kibana --version 7.5.2 --namespace=logs --set service.type=NodePort --repo https://helm.elastic.co
```
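Optionally wait for Kibana to be ready before moving on (this is the same check the setup scripts here use):

```
kubectl rollout status deployment/kibana-kibana -n logs
```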

## Generating Logging
@@ -57,15 +72,16 @@ Check that it now recognises the seldon CRD by running `kubectl get sdep`.
Now a model:

```
-helm install seldon-single-model ../../helm-charts/seldon-single-model/ --set engine.env.LOG_MESSAGES_EXTERNALLY="false"
+helm install seldon-single-model ../../helm-charts/seldon-single-model/
```
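Before generating load it's worth checking the model is up; a quick sketch (exact pod names vary, so the grep is just a convenience):

```
kubectl get sdep
kubectl get pods | grep seldon-single-model
```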

-And the loadtester:
+And the loadtester (first line is only needed for KIND):

```
-kubectl label nodes kind-worker role=locust --overwrite
+kubectl label nodes $(kubectl get nodes -o jsonpath='{.items[0].metadata.name}') role=locust --overwrite
-helm install seldon-core-loadtesting ../../helm-charts/seldon-core-loadtesting/ --set locust.host=http://seldon-single-model-seldon-single-model:8000 --set oauth.enabled=false --set oauth.key=oauth-key --set oauth.secret=oauth-secret --set locust.hatchRate=1 --set locust.clients=1 --set loadtest.sendFeedback=0 --set locust.minWait=0 --set locust.maxWait=0 --set replicaCount=1
+helm install seldon-core-loadtesting ../../helm-charts/seldon-core-loadtesting/ --set locust.host=http://seldon-single-model-seldon-single-model-seldon-single-model:8000 --set oauth.enabled=false --set oauth.key=oauth-key --set oauth.secret=oauth-secret --set locust.hatchRate=1 --set locust.clients=1 --set loadtest.sendFeedback=0 --set locust.minWait=1000 --set locust.maxWait=1000 --set replicaCount=1
```
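To confirm the loadtester is actually running, you can look for its pods (names here are chart defaults and may differ):

```
kubectl get pods | grep locust
```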

## Inspecting Logging and Search for Requests
@@ -75,11 +91,17 @@ To find kibana URL
```
echo $(minikube ip)":"$(kubectl get svc kibana-kibana -n logs -o=jsonpath='{.spec.ports[?(@.port==5601)].nodePort}')
```
Or if not on minikube then port-forward to `localhost:5601`:
```
kubectl port-forward svc/kibana-kibana -n logs 5601:5601
```

If you want to check the elastic API with postman then also run `kubectl port-forward svc/elasticsearch-master -n logs 9200:9200`.
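With that port-forward in place, listing the indices is a quick way to confirm logs are arriving (standard elasticsearch cat API):

```
curl -s 'localhost:9200/_cat/indices?v'
```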

When Kibana appears for the first time there will be a brief animation while it initializes.
On the Welcome page click `Explore on my own`.
From the top-left or from the `Visualize and Explore Data` panel select the `Discover` item.
-In the form field Index pattern enter logstash-*
+In the form field Index pattern enter *
It should read "Success!" and click the `> Next step` button on the right.
In the next form select timestamp from the dropdown labeled `Time Filter field name`.
From the bottom-right of the form select `Create index pattern`.
@@ -88,13 +110,11 @@ From the top-left or the home screen's `Visualize and Explore Data` panel, selec
The log list will appear.
Refine the list a bit by selecting `log` near the bottom of the left-hand Selected fields list.
When you hover over or click on the word `log`, click the `Add` button to the right of the label.
-You can create a filter using the `Add Filter` button under `Search`. The field can be `kubernetes.labels.seldon-app` and the value can be an 'is' match on `seldon-single-model-seldon-single-model`.
-
-The custom fields in the request bodies may not currently be in the index. If you hover over one in a request you may see `No cached mapping for this field`.
+You can create a filter using the `Add Filter` button under `Search`. The field can be `kubernetes.labels.seldon-app` and the value can be an 'is' match on `seldon-single-model-seldon-single-model-seldon-single-model`.

-To add mappings, go to `Management` at the bottom-left and then `Index Patterns`. Hit `Refresh` on the index created earlier. The number of fields should increase and `request.data.names` should be present.
+To add mappings, go to `Management` at the bottom-left and then `Index Patterns`. Hit `Refresh` on the index created earlier. The number of fields should increase.

-Now we can go back and add a further filter for `data.names` with the operator `exists`. We can add further filters if we want, such as the presence of a feature name or the presence of a feature value.
+Now we can go back and add further filters if we want.

![picture](./kibana-custom-search.png)
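If you prefer hitting elasticsearch directly rather than the Kibana UI, a rough equivalent of the filter above (assuming the `9200` port-forward from earlier is still active):

```
curl -s -H 'Content-Type: application/json' localhost:9200/_search -d '
{
  "size": 1,
  "query": {
    "match": { "kubernetes.labels.seldon-app": "seldon-single-model-seldon-single-model-seldon-single-model" }
  }
}'
```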

45 changes: 45 additions & 0 deletions examples/centralised-logging/elastic-kind.yaml
@@ -0,0 +1,45 @@
---
# Permit co-located instances for solitary minikube virtual machines.
antiAffinity: "soft"

# Shrink default JVM heap.
esJavaOpts: "-Xmx256m -Xms256m"

podAnnotations:
  fluentbit.io/exclude: "true"

replicas: 1

# Allocate smaller chunks of memory per pod.
resources:
  requests:
    cpu: "200m"
    memory: "512M"
  limits:
    cpu: "1500m"
    memory: "1024M"

# Request smaller persistent volumes.
volumeClaimTemplate:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: "local-path"
  resources:
    requests:
      storage: 400M

extraInitContainers: |
  - name: create
    image: busybox:1.28
    command: ['mkdir', '-p', '/usr/share/elasticsearch/data/nodes/']
    securityContext:
      runAsUser: 0
    volumeMounts:
    - mountPath: /usr/share/elasticsearch/data
      name: elasticsearch-master
  - name: file-permissions
    image: busybox:1.28
    command: ['chown', '-R', '1000:1000', '/usr/share/elasticsearch/']
    securityContext:
      runAsUser: 0
    volumeMounts:
    - mountPath: /usr/share/elasticsearch/data
      name: elasticsearch-master
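The `local-path` storage class this file requests comes from the rancher local-path-provisioner applied in the KIND setup step of the README; a quick sanity check that it exists:

```
kubectl get storageclass local-path
```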
37 changes: 30 additions & 7 deletions examples/centralised-logging/full-setup-existing-kubeflow.sh
@@ -9,18 +9,35 @@ set -o xtrace

# Assumes existing cluster with kubeflow's istio gateway
# Will put services behind kubeflow istio gateway
-brokercrd=$(kubectl get crd inmemorychannels.messaging.knative.dev -o jsonpath='{.metadata.name}') || true
+# First check what parts of knative are present
+autoscaler=$(kubectl get deployment -n knative-serving autoscaler -o jsonpath='{.metadata.name}') || true
+if [[ $autoscaler == 'autoscaler' ]] ; then
+  echo "knative serving already installed"
+else
+  ./request-logging/install_knative.sh
+fi

+imc=$(kubectl get deployment -n knative-eventing imc-controller -o jsonpath='{.metadata.name}') || true

-if [[ $brokercrd == 'inmemorychannels.messaging.knative.dev' ]] ; then
-echo "knative already installed"
+if [[ $imc == 'imc-controller' ]] ; then
+  echo "knative eventing already installed"
else
-./kubeflow/knative-setup-existing-istio.sh
+  kubectl apply --selector knative.dev/crd-install=true --filename https://github.com/knative/eventing/releases/download/v0.11.0/eventing.yaml
+  sleep 5
+  kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.11.0/eventing.yaml
+  kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.11.0/in-memory-channel.yaml
fi

+#istio for knative needs to have cluster-local-gateway
+#script installs any missing istio components (but leaves existing ones)
+cd request-logging
+./install_istio.sh
+cd ..

sleep 5

kubectl create namespace seldon-system || echo "namespace seldon-system exists"
-helm upgrade --install seldon-core ../../helm-charts/seldon-core-operator/ --namespace seldon-system --set istio.gateway="kubeflow-gateway.kubeflow.svc.cluster.local" --set istio.enabled="true" --set engine.logMessagesExternally="true" --set certManager.enabled="true"
+helm upgrade --install seldon-core ../../helm-charts/seldon-core-operator/ --namespace seldon-system --set istio.gateway="kubeflow-gateway.kubeflow.svc.cluster.local" --set istio.enabled="true" --set certManager.enabled="true"

kubectl rollout status -n seldon-system deployment/seldon-controller-manager

@@ -29,22 +46,28 @@ sleep 5
helm upgrade --install seldon-core-analytics ../../helm-charts/seldon-core-analytics/ --namespace default -f ./kubeflow/seldon-analytics-kubeflow.yaml

kubectl create namespace logs || echo "namespace logs exists"
-helm upgrade --install elasticsearch elasticsearch --version 7.5.0 --namespace=logs --set service.type=ClusterIP --set antiAffinity="soft" --repo https://helm.elastic.co
+helm upgrade --install elasticsearch elasticsearch --version 7.5.2 --namespace=logs --set service.type=ClusterIP --set antiAffinity="soft" --repo https://helm.elastic.co
kubectl rollout status statefulset/elasticsearch-master -n logs

helm upgrade --install fluentd fluentd-elasticsearch --namespace=logs -f fluentd-values.yaml --repo https://kiwigrid.github.io
-helm upgrade --install kibana kibana --version 7.5.0 --namespace=logs --set service.type=ClusterIP -f ./kubeflow/kibana-values.yaml --repo https://helm.elastic.co
+helm upgrade --install kibana kibana --version 7.5.2 --namespace=logs --set service.type=ClusterIP -f ./kubeflow/kibana-values.yaml --repo https://helm.elastic.co

kubectl apply -f ./kubeflow/virtualservice-kibana.yaml
kubectl apply -f ./kubeflow/virtualservice-elasticsearch.yaml

kubectl rollout status deployment/kibana-kibana -n logs

+#have to delete logger if existing as otherwise get 'expected exactly one, got both' err if existing resource is v1alpha1
+kubectl delete -f ./request-logging/seldon-request-logger.yaml || true
kubectl apply -f ./request-logging/seldon-request-logger.yaml
+# remove and recreate broker if already have one to activate eventing
+kubectl delete broker -n default default || true
+kubectl label namespace default knative-eventing-injection- --overwrite=true
kubectl label namespace default knative-eventing-injection=enabled --overwrite=true
-#sleep 3
+sleep 6
+kubectl -n default get broker default

kubectl apply -f ./request-logging/trigger.yaml

ISTIO_INGRESS=$(kubectl get svc -n istio-system istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
4 changes: 2 additions & 2 deletions examples/centralised-logging/full-setup.sh
@@ -19,12 +19,12 @@ helm install --name seldon-core-loadtesting ../../helm-charts/seldon-core-loadte


kubectl create namespace logs || echo "namespace logs exists"
-helm install --name elasticsearch elasticsearch --version 7.5.0 --namespace=logs -f elastic-minikube.yaml --repo https://helm.elastic.co
+helm install --name elasticsearch elasticsearch --version 7.5.2 --namespace=logs -f elastic-minikube.yaml --repo https://helm.elastic.co
kubectl rollout status statefulset/elasticsearch-master -n logs
kubectl patch svc elasticsearch-master -n logs -p '{"spec": {"type": "LoadBalancer"}}'

helm install fluentd-elasticsearch --name fluentd --namespace=logs -f fluentd-values.yaml --repo https://kiwigrid.github.io
-helm install kibana --version 7.5.0 --name=kibana --namespace=logs --set service.type=NodePort --repo https://helm.elastic.co
+helm install kibana --version 7.5.2 --name=kibana --namespace=logs --set service.type=NodePort --repo https://helm.elastic.co

kubectl rollout status deployment/kibana-kibana -n logs

32 changes: 32 additions & 0 deletions examples/centralised-logging/kind_config.yaml
@@ -0,0 +1,32 @@
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: worker
  extraPortMappings:
  - containerPort: 30080
    hostPort: 8003
  - containerPort: 31380
    hostPort: 8004
  kubeadmConfigPatches:
  - |
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    metadata:
      name: config
    kubeReserved:
      cpu: "300m"
      memory: "300Mi"
      ephemeral-storage: "1Gi"
    kubeReservedCgroup: "/kube-reserved"
    systemReserved:
      cpu: "300m"
      memory: "300Mi"
      ephemeral-storage: "1Gi"
    evictionHard:
      memory.available: "200Mi"
      nodefs.available: "10%"
    featureGates:
      DynamicKubeletConfig: true
      RotateKubeletServerCertificate: true
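The `extraPortMappings` publish the two NodePorts on the host, so services bound to them later become reachable at `localhost:8003` and `localhost:8004`. A rough check, assuming services have already been installed on those NodePorts (31380 being the NodePort the istio ingress gateway uses in the istio versions this example targets, e.g. 1.1.x):

```
curl -s localhost:8003/   # forwarded to NodePort 30080 on the kind node
curl -s localhost:8004/   # forwarded to NodePort 31380 on the kind node
```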
16 changes: 1 addition & 15 deletions examples/centralised-logging/kubeflow/knative-setup-existing-istio.sh
@@ -1,17 +1,3 @@
#this assumes installing to cloud and istio already installed e.g. with kubeflow

-kubectl apply --selector knative.dev/crd-install=true \
-  --filename https://github.com/knative/serving/releases/download/v0.8.0/serving.yaml \
-  --filename https://github.com/knative/eventing/releases/download/v0.8.0/release.yaml \
-  --filename https://github.com/knative/serving/releases/download/v0.8.0/monitoring.yaml
-
-kubectl apply --filename https://github.com/knative/serving/releases/download/v0.8.0/serving.yaml \
-  --filename https://github.com/knative/eventing/releases/download/v0.8.0/release.yaml \
-  --filename https://github.com/knative/serving/releases/download/v0.8.0/monitoring.yaml
-
-kubectl label namespace default istio-injection=enabled
-
-kubectl apply -f https://github.com/knative/eventing/releases/download/v0.8.0/eventing.yaml
-
-#kafka if you have a kafka cluster setup already
-#kubectl apply -f https://github.com/knative/eventing/releases/download/v0.8.0/kafka.yaml
+./../request-logging/install_knative.sh
21 changes: 6 additions & 15 deletions examples/centralised-logging/kubeflow/seldon-analytics-kubeflow.yaml
@@ -1,15 +1,6 @@
-grafana_prom_service_type: ClusterIP
-grafana_prom_admin_password: admin
-grafana_anonymous_auth: true
-grafana:
-  virtualservice:
-    enabled: true
-    #trailing dash important and should be used when accessing
-    prefix: "/grafana/"
-    gateways:
-    - kubeflow-gateway.kubeflow.svc.cluster.local
-  extraEnv:
-    #replace with KF gateway URI
-    GF_SERVER_ROOT_URL: "%(protocol)s://%(domain)s/grafana"
-nodeExporter:
-  port: 9200
+prometheus:
+  nodeExporter:
+    hostNetwork: false
+    service:
+      hostPort: 9200
+      servicePort: 9200
