Problem:
The wrangler version that node-manager currently depends on does not include the generic/fake package for mocking controllers, but a patchset in progress for another issue relies on that functionality for automated unit tests.
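For context, a minimal sketch of the kind of unit test this enables. It assumes the upgraded wrangler exposes gomock-generated mocks under pkg/generic/fake (e.g. NewMockNonNamespacedControllerInterface); the import paths, generic type parameters, and test body are illustrative, not code from this patchset.

```go
package nodemanager_test

import (
	"testing"

	"github.com/golang/mock/gomock"
	// Assumption: the fake package lives at this path in the upgraded wrangler.
	"github.com/rancher/wrangler/pkg/generic/fake"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func TestHandlerWithMockedNodeController(t *testing.T) {
	ctrl := gomock.NewController(t)
	defer ctrl.Finish()

	// gomock-backed Node controller; no running cluster is required.
	nodes := fake.NewMockNonNamespacedControllerInterface[*corev1.Node, *corev1.NodeList](ctrl)

	// Expect a single Get and return a canned Node object.
	nodes.EXPECT().
		Get("node-1", metav1.GetOptions{}).
		Return(&corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node-1"}}, nil)

	// In a real test, the handler under test would be constructed with
	// `nodes` in place of a real wrangler NodeController.
	got, err := nodes.Get("node-1", metav1.GetOptions{})
	if err != nil || got.Name != "node-1" {
		t.Fatalf("unexpected result: %v, %v", got, err)
	}
}
```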
Solution:
Upgrade to the latest version of wrangler so that the updated generic/fake package is available.
Also upgrade the github.com/google/gnostic module to commit 836f55b2639b105f02aa6786b9c1ade794570ff8. The new dependency graph pulls in gnostic-models, and an incompatibility between gnostic and gnostic-models caused a runtime panic; that incompatibility is patched upstream as of this commit.
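Roughly, the dependency bumps amount to the following (the gnostic commit is the one named above; the wrangler version selector is illustrative and should match whatever go.mod pins after review):

```sh
go get github.com/rancher/wrangler@latest
go get github.com/google/gnostic@836f55b2639b105f02aa6786b9c1ade794570ff8
go mod tidy
```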
Related Issue: harvester/harvester#4471
Test plan:
1. make build ; make package
2. docker image save harvester/harvester-node-manager:a59a988-amd64 -o harvester-node-manager.tar
3. scp harvester-node-manager.tar rancher@HARVESTER-NODE:
4. ctr -n=k8s.io image import /home/rancher/harvester-node-manager.tar
5. kubectl patch managedchart harvester -n fleet-local -p='{"spec":{"paused":true}}' --type=merge
6. kubectl edit daemonset/harvester-node-manager -n harvester-system
   (set image to the image tag of the container built in step 1)
7. kubectl rollout restart daemonset/harvester-node-manager -n harvester-system
8. kubectl logs harvester-node-manager-ll5w5 -n harvester-system
   -- just confirm there are no obvious issues and that the pod remains healthy
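As an additional health check after step 8, something like the following can confirm the rollout completed and the pods are running (the grep filter is just a convenience; the pod name prefix is assumed to match the daemonset name):

kubectl -n harvester-system rollout status daemonset/harvester-node-manager
kubectl -n harvester-system get pods | grep harvester-node-manager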