
Add kubectl desc nodes to minikube logs #7105

Merged
14 changes: 14 additions & 0 deletions pkg/minikube/logs/logs.go
@@ -23,16 +23,20 @@ import (
"fmt"
"os"
"os/exec"
"path"
"regexp"
"sort"
"strings"

"github.com/golang/glog"
"github.com/pkg/errors"
"github.com/spf13/viper"
"k8s.io/minikube/pkg/minikube/bootstrapper"
"k8s.io/minikube/pkg/minikube/command"
"k8s.io/minikube/pkg/minikube/config"
"k8s.io/minikube/pkg/minikube/cruntime"
"k8s.io/minikube/pkg/minikube/out"
"k8s.io/minikube/pkg/minikube/vmpath"
)

// rootCauseRe is a regular expression that matches known failure root causes
@@ -186,5 +190,15 @@ func logCommands(r cruntime.Manager, bs bootstrapper.Bootstrapper, length int, f
}
cmds[r.Name()] = r.SystemLogCmd(length)
cmds["container status"] = cruntime.ContainerStatusCommand()

cfg, err := config.Load(viper.GetString(config.ProfileName))
Contributor

@tstromberg tstromberg Mar 20, 2020

Nicely done! Thank you!

So, the frustrating part is that the configuration for this profile is already loaded, but the logs package does not have access to it. If it isn't too much trouble on your part, can you help us get that addressed? It'll be cleaner and faster that way.

I know I said differently in the issue, but let's start by moving this addition to the kubeadm bootstrapper, since it's the only one that guarantees where the kubelet is installed. Start by moving your code here:

func (k *Bootstrapper) LogCommands(o bootstrapper.LogOptions) map[string]string {

Now, to plumb in our missing loaded configuration (config.ClusterConfig). First pass the cfg variable to logs.Output:

err = logs.Output(cr, bs, runner, numberOfLines)

Then logs.logCommands, where we are now:

cmds := logCommands(r, bs, lines, false)

Then to bs.LogCommands (effectively kubeadm.LogCommands), which handles bootstrapper specific logs:

cmds := bs.LogCommands(bootstrapper.LogOptions{Lines: length, Follow: follow})

One tip: Some packages name the config.ClusterConfig variable as cfg, some name it cc. Use whichever nomenclature you find in the package, which I believe will usually be cfg in this part of the code base.

If this is all too much, please let me know. Thank you again! The code looks great otherwise.
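The plumbing the reviewer describes can be sketched as follows. Note that `ClusterConfig`, `LogOptions`, `Bootstrapper`, `logCommands`, and `Output` below are simplified stand-ins for the corresponding minikube types and functions, not their real signatures; this only illustrates the pattern of passing the already-loaded config down the call chain instead of re-loading it in the logs package.

```go
package main

import "fmt"

// ClusterConfig is a simplified stand-in for minikube's config.ClusterConfig.
type ClusterConfig struct {
	KubernetesVersion string
}

// LogOptions mirrors the shape of bootstrapper.LogOptions.
type LogOptions struct {
	Lines  int
	Follow bool
}

// Bootstrapper stands in for the kubeadm bootstrapper; with cfg plumbed in,
// it can build bootstrapper-specific log commands itself.
type Bootstrapper struct{}

func (k *Bootstrapper) LogCommands(cfg ClusterConfig, o LogOptions) map[string]string {
	// Hypothetical command string; the real code builds a kubectl invocation.
	return map[string]string{
		"describe nodes": fmt.Sprintf("kubectl (%s) describe nodes", cfg.KubernetesVersion),
	}
}

// logCommands now receives cfg and forwards it to the bootstrapper.
func logCommands(cfg ClusterConfig, bs *Bootstrapper, lines int, follow bool) map[string]string {
	return bs.LogCommands(cfg, LogOptions{Lines: lines, Follow: follow})
}

// Output is the entry point; cfg is passed in rather than loaded here.
func Output(cfg ClusterConfig, bs *Bootstrapper, lines int) error {
	for name, cmd := range logCommands(cfg, bs, lines, false) {
		fmt.Printf("%s: %s\n", name, cmd)
	}
	return nil
}

func main() {
	cfg := ClusterConfig{KubernetesVersion: "v1.18.0"}
	_ = Output(cfg, &Bootstrapper{}, 60)
}
```

The design point is that `config.Load` is called once by the caller that already owns the profile name, and every layer below simply threads the `cfg` value through.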

Contributor Author

I can definitely try. If I need help along the way, I will ask on the slack channel.

if err != nil && !config.IsNotExist(err) {

You can just say `if err != nil` here.

Contributor Author

Okay, thanks for the review! Are the failing checks unrelated to my changes? I couldn't really tell.

out.ErrLn("Error loading profile config: %v", err)
}

cmds["describe nodes"] = fmt.Sprintf("sudo %s describe node -A --kubeconfig=%s",
path.Join(vmpath.GuestPersistentDir, "binaries", cfg.KubernetesConfig.KubernetesVersion, "kubectl"),
path.Join(vmpath.GuestPersistentDir, "kubeconfig"))

return cmds
}
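The command string the diff adds is built with `path.Join` against the guest VM's persistent directory. A minimal, self-contained sketch of the same construction (assuming `vmpath.GuestPersistentDir` is `/var/lib/minikube`, which is its value in minikube, and using a hard-coded version for illustration):

```go
package main

import (
	"fmt"
	"path"
)

// guestPersistentDir mirrors vmpath.GuestPersistentDir in minikube.
const guestPersistentDir = "/var/lib/minikube"

// describeNodesCmd builds the same command string the diff adds, pointing at
// the kubectl binary that minikube caches inside the guest for this version.
func describeNodesCmd(kubernetesVersion string) string {
	return fmt.Sprintf("sudo %s describe node -A --kubeconfig=%s",
		path.Join(guestPersistentDir, "binaries", kubernetesVersion, "kubectl"),
		path.Join(guestPersistentDir, "kubeconfig"))
}

func main() {
	fmt.Println(describeNodesCmd("v1.18.0"))
	// → sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe node -A --kubeconfig=/var/lib/minikube/kubeconfig
}
```

Using the guest's own kubectl binary and kubeconfig (rather than the host's) means `minikube logs` can describe the nodes even when the host has no kubectl installed.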