Is Slack integration actually working? #66
Hi @kubebn, please try adding your actual Slack payload to the flag on your pods.

These changes should work. I haven't tested them, but I think they will. Please let me know if they help.
I tried it like this:

It's not working.
Ok, let's try a simple message - do exactly as in the YAML below:

```yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    ...
spec:
  containers:
  - args:
    - -webhook.url=https://hooks.slack.com/services/T0000000/B0000000000/000000000000000
    - '-webhook.template={"text": "test message"}'
```

You don't need to add anything else.
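For reference, a quick way to sanity-check the hook itself, independently of the handler, is to post the same minimal payload with curl (the URL below is the placeholder from the YAML above); a working Slack incoming webhook replies with `ok`:

```sh
# post the minimal payload straight to the Slack incoming webhook;
# a valid hook responds with "ok", a broken payload returns an error body
curl -sS -X POST \
  -H 'Content-Type: application/json' \
  -d '{"text": "test message"}' \
  https://hooks.slack.com/services/T0000000/B0000000000/000000000000000
```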
That actually worked, so basically it's a matter of what is inside the message rather than the hook itself. Thank you!

However, I am wondering: if the `node_termination_event{node="{{ .Node }}"} 1` template does not work, is there any way to use variables in `webhook.template` at all? For example, in the aws-spot-instances handler we use `webhookTemplate` to see which nodes, AZs, instance IDs, and pods are about to be evicted:

```yaml
webhookTemplate: |-
  {
    "fields": [
      {
        "title": "Node",
        "value": "{{ .NodeName }}",
        "short": true
      },
      {
        "title": "InstanceType",
        "value": "{{ .InstanceType }}",
        "short": true
      },
      {
        "title": "AvailabilityZone",
        "value": "{{ .AvailabilityZone }}",
        "short": true
      },
      {
        "title": "InstanceID",
        "value": "{{ .InstanceID }}",
        "short": true
      },
      {
        "title": "Pods",
        "value": "{{ .Pods }}",
        "short": true
      }
    ]
  }
```
@kubebn yes, it's an interesting idea. I created dev changes that should close your issue - please help me test the new features. Now you can compose your payload as a file, and you can use the variables defined in aks-node-termination-handler/pkg/template/template.go (lines 27 to 40 in f095104).
Please follow the instructions below (change the placeholder values to your own):

```sh
# create request json for Slack, file can be templated
cat <<EOF | tee slack-config.json
{
  "channel": "#mychannel",
  "username": "webhookbot",
  "text": "This is message for {{ .NodeName }}, {{ .InstanceType }} from {{ .NodeRegion }}",
  "icon_emoji": ":ghost:"
}
EOF

# create configmap
kubectl -n kube-system create configmap aks-node-termination-handler-files --from-file=slack-config.json

# install/upgrade helm chart
helm upgrade aks-node-termination-handler \
  --install \
  --namespace kube-system \
  https://github.com/maksim-paskal/aks-node-termination-handler/releases/download/v1.0.9/f095104.tgz \
  --set priorityClassName=system-node-critical \
  --set image=paskalmaksim/aks-node-termination-handler:dev \
  --set imagePullPolicy=Always \
  --set configMap.create=false \
  --set configMap.name=aks-node-termination-handler-files \
  --set 'args[1]=-webhook.url=https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX' \
  --set 'args[2]=-webhook.template-file=/files/slack-config.json'
```
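One way to confirm the templated payload file is mounted where `-webhook.template-file` points (a sketch; the `app=aks-node-termination-handler` label selector is the one used later in this thread):

```sh
# grab one handler pod and print the mounted payload file
POD=$(kubectl -n kube-system get pods -lapp=aks-node-termination-handler -o name | head -n1)
kubectl -n kube-system exec "$POD" -- cat /files/slack-config.json
```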
Tried it. The message did not come to Slack. Likewise, there were no webhook error logs in the pods.

P.S. Do you think it would be possible to also add the names of the pods which are going to be gracefully evicted?
I think the problem is with the helm installation - note the args indices:

```sh
# install/upgrade helm chart
helm upgrade aks-node-termination-handler \
  --install \
  --namespace kube-system \
  https://github.com/maksim-paskal/aks-node-termination-handler/releases/download/v1.0.9/f095104.tgz \
  --set priorityClassName=system-node-critical \
  --set image=paskalmaksim/aks-node-termination-handler:dev \
  --set imagePullPolicy=Always \
  --set configMap.create=false \
  --set configMap.name=aks-node-termination-handler-files \
  --set 'args[0]=-webhook.url=https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX' \
  --set 'args[1]=-webhook.template-file=/files/slack-config.json'
```

Do you want the Slack message to include the names of all pods that were on the node during the drain?
Yes, it's possible. I added it in aks-node-termination-handler/pkg/template/template.go (lines 26 to 39 in 99776cc).
Please test - you need to modify your payload:

```sh
# create request json for Slack, file can be templated
cat <<EOF | tee slack-config.json
{
  "channel": "#mychannel",
  "username": "webhookbot",
  "text": "This is message for {{ .NodeName }}, {{ .InstanceType }} from {{ .NodeRegion }}, pods {{ .NodePods }}",
  "icon_emoji": ":ghost:"
}
EOF

# delete current configmap
kubectl -n kube-system delete configmap aks-node-termination-handler-files

# create configmap
kubectl -n kube-system create configmap aks-node-termination-handler-files --from-file=slack-config.json

# restart all pods to apply new payload
kubectl -n kube-system delete pods -lapp=aks-node-termination-handler
```
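A quick check that the new payload actually landed in the configmap before restarting the pods (the `\.` escapes the dot in the key name for kubectl's jsonpath):

```sh
# print the templated payload stored in the configmap
kubectl -n kube-system get configmap aks-node-termination-handler-files \
  -o jsonpath='{.data.slack-config\.json}'
```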
Recreated the configmap with:

Deleted the pods:

Logs:

P.S. Also, it would be nice to add `kubernetes.azure.com/cluster` so the cluster name can be included in the text.
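For reference, that label can be read straight from the nodes, so it is a natural source for a cluster-name variable (a sketch; the label key is the standard AKS one mentioned above):

```sh
# read the AKS cluster name label from the first node
kubectl get nodes \
  -o jsonpath='{.items[0].metadata.labels.kubernetes\.azure\.com/cluster}'
```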
The error says that the webhook address is incorrect - please reinstall the chart with the correct webhook address, see #66 (comment). I will add `.ClusterName` next week.
Hi Maksim, I have already reinstalled it multiple times. I am using the same webhook as before :) I manually changed it to "myhook" in the pasted logs. In the logs the correct webhook is set.

Is it possible that the webhook & Slack text can't handle that number of pods in the message?
It seems the request to Slack took longer than 5 seconds (the default value - I will raise this default to 30s soon) and it errored. You can raise this limit by adding a new flag to your chart installation ...
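A sketch of raising the limit on the dev install from above, assuming `args[0]` and `args[1]` are already set as in the earlier command; `-webhook.timeout` is the flag that appears later in this thread:

```sh
# re-run the upgrade keeping previous values, adding a longer webhook timeout
helm upgrade aks-node-termination-handler \
  --install \
  --namespace kube-system \
  https://github.com/maksim-paskal/aks-node-termination-handler/releases/download/v1.0.9/f095104.tgz \
  --reuse-values \
  --set 'args[2]=-webhook.timeout=30s'
```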
Yes, try to compose your Slack payload as best as you can and test it. Please share that payload with me - I will add it to the README as an example. Slack has rich functionality for composing messages, see https://api.slack.com/reference/surfaces/formatting#advanced

I think you do not need `{{ .NewLine }}` - I use this marker only when templates are loaded from a string. As you load the payload from a file instead, this option is not needed for your usage.
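For example, a slightly richer payload using Slack's mrkdwn formatting and only the variables confirmed earlier in this thread (a sketch, not an official example):

```sh
# payload with bold header and a pod list on its own line;
# "\n" is a JSON escape, so it survives the heredoc unmodified
cat <<EOF | tee slack-config.json
{
  "channel": "#mychannel",
  "username": "webhookbot",
  "text": "*Node termination* on {{ .NodeName }} ({{ .InstanceType }}, {{ .NodeRegion }})\npods: {{ .NodePods }}",
  "icon_emoji": ":ghost:"
}
EOF
```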
Looks like this: (the cluster name is shown as NodeZone on the screenshot). @maksim-paskal do you think there is a possibility to exclude DaemonSets from the `.NodePods` info? It seems their info can be taken from the drain. Thanks
Good catch, thanks. Yes, sure, I will remove DaemonSet pods from this list next week. My TODO list for the next release:

I will try to release these new features next week.
@kubebn these changes were released. I recommend you delete your current installation and move to the stable releases:

```sh
helm repo add aks-node-termination-handler https://maksim-paskal.github.io/aks-node-termination-handler/
helm repo update

# delete old configmap
kubectl -n kube-system delete configmap aks-node-termination-handler-files

# I recommend using values.yaml rather than creating a configmap
# https://github.com/maksim-paskal/aks-node-termination-handler?tab=readme-ov-file#send-notification-events
cat <<EOF | tee values.yaml
priorityClassName: system-node-critical

args:
- -webhook.url=https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX
- -webhook.template-file=/files/slack-payload.json
- -webhook.contentType=application/json
- -webhook.method=POST
- -webhook.timeout=30s

configMap:
  data:
    slack-payload.json: |
      {
        "channel": "#mychannel",
        "username": "webhookbot",
        "text": "This is message for {{ .NodeName }}, {{ .InstanceType }} from {{ .NodeRegion }}",
        "icon_emoji": ":ghost:"
      }
EOF

# install/upgrade helm chart
helm upgrade aks-node-termination-handler \
  --install \
  --namespace kube-system \
  aks-node-termination-handler/aks-node-termination-handler \
  --values values.yaml
```
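After the upgrade, one way to confirm the flags landed on the running pods (a sketch; label selector as used earlier in the thread):

```sh
# print the container args of each handler pod, one per line
kubectl -n kube-system get pods -lapp=aks-node-termination-handler \
  -o jsonpath='{range .items[*]}{.spec.containers[0].args}{"\n"}{end}'
```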
Hi @maksim-paskal, I have set these values:

I simulated an eviction; there are no webhook error logs, but no message in Slack either: https://paste.openstack.org/show/brLN9Az0Iqgr0MCcYFxf/

The ConfigMap seems to be fine:
I have tried with a simpler payload as well - same thing.
@kubebn maybe you are not using the latest chart - try to pin the latest version explicitly:

```sh
helm uninstall aks-node-termination-handler --namespace kube-system
helm repo update

helm upgrade aks-node-termination-handler \
  --install \
  --namespace kube-system \
  aks-node-termination-handler/aks-node-termination-handler \
  --version 1.1.3 \
  --values=/tmp/values.yaml
```

Also, please remove this from your values:

```yaml
resources:
  limits:
    cpu: 20m
    memory: 100Mi
  requests:
    cpu: 20m
    memory: 100Mi
```
Please add these args:

```yaml
args:
- -log.level=debug
- -webhook.url=https://hooks
- -webhook.template-file=/files/slack-payload.json
- -webhook.contentType=application/json
- -webhook.method=POST
- -webhook.timeout=30s
```
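With `-log.level=debug` set, the webhook request details should show up in the pod logs; something like this can follow them live while simulating an eviction (selector as used earlier in the thread):

```sh
# follow handler logs and watch for webhook request/response entries
kubectl -n kube-system logs -lapp=aks-node-termination-handler --tail=100 -f
```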
Hi, yeah, I definitely use the latest chart version. I noticed that when I removed the `resources` section, it started working.
I will extend the webhook logs in #71, for future webhook debugging.
Okay, that config worked: https://paste.openstack.org/show/bfRvfT5iZurBLsPT1gG0/
Hi @maksim-paskal, apologies for the off-topic question, but what actually is wrong with requests/limits on Windows nodes? Is it really better to avoid using them at all in Windows containers?
When I tested on a Windows node, the pod didn't show any logs; the same configuration works on Linux. When I removed the limits, it worked. It seems that Windows nodes have some CPU limiter other than the Linux one. If you need to set a pod CPU limit you can, but that value needs to be greater than the small values shown above.
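A hedged sketch of what that suggestion could look like - keep requests for scheduling and drop the CPU limits that stalled the Windows pods above (values taken from the earlier `resources` block):

```sh
# append a requests-only resources block to the chart values
cat <<EOF | tee -a values.yaml
resources:
  requests:
    cpu: 20m
    memory: 100Mi
EOF
```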
Hi @maksim-paskal, I am also facing a similar issue - the Slack notification is not getting triggered. We are running the application in an AKS (Linux) cluster through the helm chart. Below is my values.yaml file for reference:
Hi @ShashankV007, can you share the logs of the aks-node-termination-handler pods?
Hi @maksim-paskal, please find the logs attached here.
@ShashankV007 thanks for raising this problem. It will be fixed today in #90.
@ShashankV007 please restart all aks-node-termination-handler pods.
Thanks @maksim-paskal, the issue is fixed |
Hi, I have set the values this way:

In the logs I am getting:

When I try to send a message to the channel via curl: