Problem: tf-job-operator produces too many logs and quickly fills up the disk.

I logged in to the tf-job-operator pod using `kubectl exec -it <tf-job-operator-xxx>`. Running `/opt/kubeflow/tf-operator.v1 --help` shows that the `-v` parameter is the "log level for V logs", but I can't find any documentation explaining what the values of this parameter actually mean. We use Kubeflow 0.7, where the initial start option is `-v=1`; I tried changing it to 0, 2, 3, 4, and 5, but none of them took effect.

I then read the source code and found that the `-v` option is not included in the ServerOption definition.

What I want: Can anyone kindly tell me how to change the log level of tf-job-operator, or any way to limit the log size? Thanks.
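For context on what the `-v` values mean: `-v` is the standard glog/klog-style verbosity flag, where a message logged at level `N` is only emitted when `N` is less than or equal to the configured verbosity, so lower values mean fewer logs. The sketch below illustrates that gating behavior in plain Go; it uses only hypothetical names and is not tf-operator code (tf-operator itself uses the klog library, whose flags must be registered with the flag set for `-v` to take effect).

```go
package main

import (
	"flag"
	"fmt"
)

// verbosity mimics the glog/klog "-v" flag: higher values enable more
// detailed "V logs". Illustrative sketch only, not tf-operator code.
var verbosity = flag.Int("v", 1, "log level for V logs")

// V reports whether a message at the given level should be emitted:
// a V(2) log line is printed only when -v is 2 or higher.
func V(level int) bool { return level <= *verbosity }

func main() {
	flag.Parse()
	if V(1) {
		fmt.Println("v1: basic progress message")
	}
	if V(4) {
		fmt.Println("v4: very chatty debug message (suppressed at -v=1)")
	}
}
```

Under this model, the symptom described above (changing `-v` has no effect) is consistent with the binary never wiring the `-v` flag into its own flag parsing, which matches the observation that `-v` is missing from the ServerOption definition.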
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.