This repository has been archived by the owner on Feb 5, 2020. It is now read-only.
*: enable debug logging for etcd-operator by default #1425
Merged: s-urbaniak merged 1 commit into coreos:master from hasbro17:haseeb/enable-default-debug-logging on Jul 19, 2017.
```diff
@@ -16,7 +16,11 @@ spec:
     metadata:
       labels:
         k8s-app: etcd-operator
     spec:
+      volumes:
+      - name: debug-volume
+        hostPath:
+          path: /var/tmp
       containers:
       - env:
         - name: MY_POD_NAMESPACE
@@ -31,6 +35,12 @@ spec:
           value: /tmp
         image: ${etcd_operator_image}
         name: etcd-operator
+        command:
+        - /usr/local/bin/etcd-operator
+        - --debug-logfile-path=/var/tmp/etcd-operator/debug/debug.log
+        volumeMounts:
+        - mountPath: /var/tmp/etcd-operator/debug
+          name: debug-volume
       nodeSelector:
         node-role.kubernetes.io/master: ""
       securityContext:
```

Review comment (on the `--debug-logfile-path` line): question: why does this have to go into a log file? Are these debug messages also going to be visible in …

Reply: for self-hosted etcd, when etcd is down, k8s is down; and when k8s is down, kubectl is unusable. The whole point of this is to make sure we log to disk for debugging purposes.
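Put together, the change amounts to the following fragment of the etcd-operator Deployment pod template. This is a minimal sketch only: the indentation and field placement are assumed from standard Kubernetes manifest conventions, the surrounding Deployment boilerplate (apiVersion, kind, metadata) is not shown, and unrelated fields from the original file (the env block, securityContext) are omitted.

```yaml
# Sketch of the pod template portion of the etcd-operator Deployment after
# this change (assumed indentation; unrelated fields omitted).
spec:
  template:
    metadata:
      labels:
        k8s-app: etcd-operator
    spec:
      volumes:
      - name: debug-volume
        hostPath:
          path: /var/tmp                # host directory backing the debug volume
      containers:
      - name: etcd-operator
        image: ${etcd_operator_image}   # template variable from the installer
        command:
        - /usr/local/bin/etcd-operator
        - --debug-logfile-path=/var/tmp/etcd-operator/debug/debug.log
        volumeMounts:
        - mountPath: /var/tmp/etcd-operator/debug   # the debug log is written under this mount
          name: debug-volume
      nodeSelector:
        node-role.kubernetes.io/master: ""          # schedule the operator onto master nodes
```

The hostPath volume is what makes the file written via `--debug-logfile-path` land on the master node's own filesystem rather than only inside the container, which is the trade-off debated in the comments below.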
Review comment: I am worried that we are hardcoding a host path here. The etcd-operator is a Deployment and is therefore subject to being rescheduled by k8s at any time, so this `/var/tmp/etcd-operator/debug/debug.log` file will eventually be sprinkled across all master nodes. Judging from https://github.com/coreos/etcd-operator/blob/c946e30490947dc8b171fc4439a98356c7a85078/pkg/debug/debug_logger.go#L51 I see that this at least opens the file using `O_APPEND`, but those logs would still be pretty inconsistent in the face of rescheduling. Can't the debug output simply go to stdout so that it is captured by the standard k8s logging facilities?
Reply: if we could force every tectonic user to use a logging system like Splunk, that would be a great help. But most of the users we interact with today have no logging system set up, which makes debugging self-hosted etcd a huge problem: when k8s is down, we have no easy way to get the logs. With this hack we can at least get the logging we want by downloading files from a well-known path on all master nodes. We are not really worried about the logs spreading too much; the operator is leader elected, and time skew should not be a real problem. And something is better than nothing.