AWS: Failed to look for node info on ASG Launch Template #1754

Closed · lcasassa opened this issue Mar 6, 2019 · 26 comments
Labels: area/cluster-autoscaler, area/provider/aws, lifecycle/stale

Comments


lcasassa commented Mar 6, 2019

I have several ASGs, all with a current size of 0, and all created from a Launch Template. I'm using gcr.io/google-containers/cluster-autoscaler:v1.3.7 with the command:

    - ./cluster-autoscaler
    - --cloud-provider=aws
    - --namespace=kube-system
    - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,kubernetes.io/cluster/eks-tf-mi-playground-1-cluster-01
    - --expander=least-waste
    - --logtostderr=true
    - --stderrthreshold=info
    - --v=4

Somehow it is detecting them as using a Launch Configuration, which they are not. How can I tell the autoscaler to look for a Launch Template on all ASGs instead of a Launch Configuration?

These are some logs:

I0306 13:16:06.427091       1 auto_scaling.go:48] Failed LaunchConfiguration info request for : ValidationError: 1 validation error detected: Value '[]' at 'launchConfigurationNames' failed to satisfy constraint: Member must satisfy constraint: [Member must have length less than or equal to 1600, Member must have length greater than or equal to 1, Member must satisfy regular expression pattern: [\u0020-\uD7FF\uE000-\uFFFD\uD800\uDC00-\uDBFF\uDFFF\r\n\t]*]
	status code: 400, request id: ffed3ce5-4011-11e9-a015-cb400eaf9394
E0306 13:16:06.427123       1 utils.go:280] Unable to build proper template node for eks-worker-tf-eks-01-m5-2xl-2: ValidationError: 1 validation error detected: Value '[]' at 'launchConfigurationNames' failed to satisfy constraint: Member must satisfy constraint: [Member must have length less than or equal to 1600, Member must have length greater than or equal to 1, Member must satisfy regular expression pattern: [\u0020-\uD7FF\uE000-\uFFFD\uD800\uDC00-\uDBFF\uDFFF\r\n\t]*]
	status code: 400, request id: ffed3ce5-4011-11e9-a015-cb400eaf9394
I0306 13:16:06.513070       1 auto_scaling_groups.go:316] Regenerating instance to ASG map for ASGs: [eks-worker-tf-eks-01-m5-2xl-0 eks-worker-tf-eks-01-m5-2xl-1 eks-worker-tf-eks-01-m5-2xl-2 eks-worker-tf-eks-01-m5-4xl-0 eks-worker-tf-eks-01-m5-4xl-1 eks-worker-tf-eks-01-m5-4xl-2 eks-worker-tf-eks-01-m5-l-0 eks-worker-tf-eks-01-m5-l-1 eks-worker-tf-eks-01-m5-l-2 eks-worker-tf-eks-01-p2-xl-0 eks-worker-tf-eks-01-p2-xl-1 eks-worker-tf-eks-01-p2-xl-2]
I0306 13:16:06.626631       1 aws_manager.go:148] Refreshed ASG list, next refresh after 2019-03-06 13:16:16.626624752 +0000 UTC m=+715.550545349
I0306 13:16:06.626757       1 utils.go:541] No pod using affinity / antiaffinity found in cluster, disabling affinity predicate for this loop
I0306 13:16:06.626771       1 static_autoscaler.go:260] Filtering out schedulables
I0306 13:16:06.627040       1 static_autoscaler.go:270] No schedulable pods
I0306 13:16:06.627055       1 scale_up.go:249] Pod ml-dev/fuad-neural-training-job-pznxl is unschedulable
I0306 13:16:06.627059       1 scale_up.go:249] Pod ml-dev/ali-changeme-0 is unschedulable
E0306 13:16:06.627091       1 static_autoscaler.go:293] Failed to scale up: Could not compute total resources: No node info for: eks-worker-tf-eks-01-m5-2xl-0
I0306 13:16:08.094772       1 leaderelection.go:209] successfully renewed lease kube-system/cluster-autoscaler
I0306 13:16:10.104128       1 leaderelection.go:209] successfully renewed lease kube-system/cluster-autoscaler
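
For context, the --node-group-auto-discovery flag in the command above matches ASGs purely by tag key. Below is a minimal, illustrative sketch of applying those tags with the AWS SDK for Go; this is not part of the autoscaler itself, the ASG name is taken from the logs above, and the tag values are placeholders (auto-discovery only checks the keys). In practice these tags are usually set by Terraform or CloudFormation.

package main

import (
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/autoscaling"
)

func main() {
    sess := session.Must(session.NewSession())
    svc := autoscaling.New(sess)

    // ASG name from the logs above; the tag keys mirror the
    // --node-group-auto-discovery flag. Values are placeholders.
    asgName := "eks-worker-tf-eks-01-m5-2xl-0"
    tags := []*autoscaling.Tag{
        {
            ResourceId:        aws.String(asgName),
            ResourceType:      aws.String("auto-scaling-group"),
            Key:               aws.String("k8s.io/cluster-autoscaler/enabled"),
            Value:             aws.String("true"),
            PropagateAtLaunch: aws.Bool(true),
        },
        {
            ResourceId:        aws.String(asgName),
            ResourceType:      aws.String("auto-scaling-group"),
            Key:               aws.String("kubernetes.io/cluster/eks-tf-mi-playground-1-cluster-01"),
            Value:             aws.String("owned"),
            PropagateAtLaunch: aws.Bool(true),
        },
    }

    // CreateOrUpdateTags adds the tags to the existing ASG.
    if _, err := svc.CreateOrUpdateTags(&autoscaling.CreateOrUpdateTagsInput{Tags: tags}); err != nil {
        log.Fatalf("tagging ASG failed: %v", err)
    }
}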

@lcasassa lcasassa changed the title AWS: Failed to scale up: Could not compute total resources: No node info for... AWS: Failed to look for ASG Launch Template Mar 6, 2019
@lcasassa lcasassa changed the title AWS: Failed to look for ASG Launch Template AWS: Failed to look for node info on ASG Launch Template Mar 6, 2019

Jeffwan commented Mar 6, 2019

/assign @Jeffwan


Jeffwan commented Mar 6, 2019

@lcasassa It seems PR #1053 was not merged into the 1.3.x branch. LaunchTemplate is supported in v1.12, so I would suggest using a newer image; this change would be hard to cherry-pick now. Since you are on Kubernetes v1.11.x, it should be fine to run CA v1.12, although you may occasionally hit unexpected situations. If you run into any problems on EKS, just let me know.

func (m *AwsManager) buildInstanceType(asg *asg) (string, error) {
    if asg.LaunchConfigurationName != "" {
        return m.autoScalingService.getInstanceTypeByLCName(asg.LaunchConfigurationName)
    } else if asg.LaunchTemplateName != "" && asg.LaunchTemplateVersion != "" {
        return m.ec2Service.getInstanceTypeByLT(asg.LaunchTemplateName, asg.LaunchTemplateVersion)
    }
    return "", fmt.Errorf("Unable to get instance type from launch config or launch template")
}
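
For reference, here is a rough sketch of what a helper like getInstanceTypeByLT could do with the AWS SDK for Go: resolve the instance type from a launch template name and version via DescribeLaunchTemplateVersions. This is an illustration under that assumption, not the autoscaler's actual implementation; the template name and version used in main are placeholders.

package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/ec2"
)

// getInstanceTypeByLT looks up the instance type configured in a specific
// launch template version (illustrative helper, not autoscaler code).
func getInstanceTypeByLT(svc *ec2.EC2, name, version string) (string, error) {
    out, err := svc.DescribeLaunchTemplateVersions(&ec2.DescribeLaunchTemplateVersionsInput{
        LaunchTemplateName: aws.String(name),
        Versions:           []*string{aws.String(version)},
    })
    if err != nil {
        return "", err
    }
    if len(out.LaunchTemplateVersions) == 0 || out.LaunchTemplateVersions[0].LaunchTemplateData == nil {
        return "", fmt.Errorf("no launch template data found for %s@%s", name, version)
    }
    return aws.StringValue(out.LaunchTemplateVersions[0].LaunchTemplateData.InstanceType), nil
}

func main() {
    sess := session.Must(session.NewSession())
    svc := ec2.New(sess)

    // Placeholder template name; "$Latest" is a valid version specifier.
    instanceType, err := getInstanceTypeByLT(svc, "my-launch-template", "$Latest")
    if err != nil {
        fmt.Println("lookup failed:", err)
        return
    }
    fmt.Println("instance type:", instanceType)
}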


lcasassa commented Mar 7, 2019 via email


Jeffwan commented Mar 7, 2019

@lcasassa Sure. If something is missing and you want to cherry-pick it, feel free to submit a PR or let us know; we can also help with that.


lcasassa commented Mar 7, 2019 via email


Jeffwan commented Mar 7, 2019

/sig aws


benishak commented Mar 10, 2019

@Jeffwan
I have an ASG with a Launch Template. Because I want to use spot instances and diversify instance types, I had to configure the instance types in the ASG itself, not in the Launch Template; otherwise it won't work.
Now, from reading the source code, I expect CA won't work with my setup. Right?

Just FYI, I use 4 instance types; they have the same number of CPU cores but not the same memory. I use the following types:
c5.2xlarge, c5d.2xlarge, m5.2xlarge, m5d.2xlarge

Why does the CA need to care about instance types instead of just adjusting the Desired Capacity of the ASG regardless of instance type?

EDIT:
But I think that because I use EKS, which comes with Kubernetes 1.11, I must use CA 1.3.x, which does not support Launch Templates anyway. Right?


Jeffwan commented Mar 10, 2019

But I think that because I use EKS, which comes with Kubernetes 1.11, I must use CA 1.3.x, which does not support Launch Templates anyway. Right?

You don't have to limit yourself to 1.3.x. Even though compatibility testing hasn't been done for higher CA versions, based on my experience v1.12 should also work with Kubernetes v1.11.x.

Why does the CA need to care about instance types instead of just adjusting the Desired Capacity of the ASG regardless of instance type?

Without instance type information, CA doesn't know the right number of nodes to bring up.

because I want to use spot instances and diversify instance types

Do you want to use diverse instance types in one ASG, or a single instance type per ASG?
Mixed instance types in one ASG and spot instances are not supported yet.
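
To make the point above concrete, here is a toy sketch (not autoscaler code) of why the instance type matters: a scale-up has to translate the pending pods' resource requests into a node count, which requires knowing the capacity of one node in the group. The capacity figures and pod requests below are made-up examples.

package main

import (
    "fmt"
    "math"
)

// nodeCapacity holds the per-node capacity for a given instance type
// (illustrative values only).
var nodeCapacity = map[string]struct{ cpuMillis, memMiB int64 }{
    "m5.2xlarge": {cpuMillis: 8000, memMiB: 32 * 1024},
    "c5.2xlarge": {cpuMillis: 8000, memMiB: 16 * 1024},
}

// nodesNeeded estimates how many nodes of instanceType are required to fit
// the aggregate requests of the pending pods.
func nodesNeeded(instanceType string, pendingCPUMillis, pendingMemMiB int64) (int, error) {
    c, ok := nodeCapacity[instanceType]
    if !ok {
        return 0, fmt.Errorf("unknown instance type %q", instanceType)
    }
    byCPU := math.Ceil(float64(pendingCPUMillis) / float64(c.cpuMillis))
    byMem := math.Ceil(float64(pendingMemMiB) / float64(c.memMiB))
    return int(math.Max(byCPU, byMem)), nil
}

func main() {
    // 20 pending pods requesting 1 CPU and 4Gi of memory each.
    n, err := nodesNeeded("m5.2xlarge", 20*1000, 20*4*1024)
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println("nodes needed:", n) // 2.5 rounded up by both CPU and memory -> 3
}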

@aleksandra-malinowska aleksandra-malinowska added area/cluster-autoscaler area/provider/aws Issues or PRs related to aws provider labels Mar 11, 2019

CAR6807 commented Apr 1, 2019

I'm seeing this issue on Kubernetes 1.12 and 1.14 clusters with CA versions 1.12.x and 1.14.x respectively.
Is there a stable version that has this fix?


Jeffwan commented Apr 1, 2019

Let me run a test on these versions and prepare a patch. @CAR6807 Thanks for reporting.


Jeffwan commented Apr 3, 2019

@CAR6807 The PR for Launch Template support has been merged into v1.12 and later versions. Could you share more details about your setup? Attaching error messages would be super helpful.

@JasonCarter80

Do you want to use diverse instance types in one ASG, or a single instance type per ASG?
Mixed instance types in one ASG and spot instances are not supported yet.

@Jeffwan - Is this on the backlog anywhere?


Jeffwan commented Apr 17, 2019

@JasonCarter80 Not yet. This goes against CA's current structure of a single instance type per node group. We are still evaluating mixed instance type support and how to best fit it into CA.

Do you have any use cases? I'd love to hear more requirements, which will give us enough data points for the next iteration.

@JasonCarter80

@Jeffwan

Do you have any use cases? I'd love to hear more requirements, which will give us enough data points for the next iteration.
I'm attempting to run my nodes on spot as much as possible, but occasionally instance types get terminated or run out of spot capacity, so using multiple instance types would make this more reliable. Currently I am running multiple ASGs per AZ to cover this behavior.


Jeffwan commented Apr 18, 2019

@JasonCarter80
I see, so currently your major pain point is that spot capacity can run out for a single instance type. You want to use similar instance types (with the same CPU and memory) to fulfill your requests.

@davidquarles

@Jeffwan Is there any way to do this today, perhaps via #1656? Our use case is the same as @JasonCarter80's. When using spot instances, an unfulfilled request for an ASG with zero nodes never times out and blocks cluster autoscaling for all other ASGs. We want to use the new mixed ASGs, because they actually do time out gracefully. AFAICT they require at least two instance types to be specified, and we are more than fine using instance types that differ only in disk and/or bandwidth (with the same CPU and memory) to accomplish this.


choseh commented Jul 15, 2019

I'm running into this issue in one cluster with an instance group with 0 nodes; I set it to a minimum of 1 instance as a workaround. What's the progress on a more permanent solution here?

(k8s 1.12 with cluster-autoscaler 1.12.6)


Jeffwan commented Aug 12, 2019

@choseh @angeloskyratzakos Can you share some logs? I didn't see any issues scaling from 0.

@AlexBaturin

@Jeffwan Hi!
What logs do you need?
I use v1.15.1 and have the same issue scaling a 0-size ASG.

flags.go:52] FLAG: --test.mutexprofilefraction="1"
flags.go:52] FLAG: --test.outputdir=""
flags.go:52] FLAG: --test.parallel="4"
flags.go:52] FLAG: --test.run=""
flags.go:52] FLAG: --test.short="false"
flags.go:52] FLAG: --test.testlogfile=""
flags.go:52] FLAG: --test.timeout="0s"
flags.go:52] FLAG: --test.trace=""
flags.go:52] FLAG: --test.v="false"
flags.go:52] FLAG: --unremovable-node-recheck-timeout="5m0s"
flags.go:52] FLAG: --v="4"
flags.go:52] FLAG: --vmodule=""
flags.go:52] FLAG: --write-status-configmap="true"
main.go:354] Cluster Autoscaler 1.15.1
leaderelection.go:235] attempting to acquire leader lease kube-system/cluster-autoscaler...
leaderelection.go:344] lock is held by cluster-autoscaler-787d9d54c6-n4tjq and has not yet expired
leaderelection.go:240] failed to acquire lease kube-system/cluster-autoscaler
leaderelection.go:344] lock is held by cluster-autoscaler-787d9d54c6-n4tjq and has not yet expired
leaderelection.go:240] failed to acquire lease kube-system/cluster-autoscaler
leaderelection.go:344] lock is held by cluster-autoscaler-787d9d54c6-n4tjq and has not yet expired
leaderelection.go:240] failed to acquire lease kube-system/cluster-autoscaler
leaderelection.go:344] lock is held by cluster-autoscaler-787d9d54c6-n4tjq and has not yet expired
leaderelection.go:240] failed to acquire lease kube-system/cluster-autoscaler
leaderelection.go:344] lock is held by cluster-autoscaler-787d9d54c6-n4tjq and has not yet expired
leaderelection.go:240] failed to acquire lease kube-system/cluster-autoscaler
leaderelection.go:245] successfully acquired lease kube-system/cluster-autoscaler
event.go:258] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"cluster-autoscaler", UID:"4c9cc471-77ba-46c6-b3ce-e6704245ba1f", APIVersion:"v1", ResourceVersion:"31038", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' cluster-autoscaler-5c98d479b9-w59px became leader
reflector.go:122] Starting reflector *v1.DaemonSet (1h0m0s) from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:312
reflector.go:160] Listing and watching *v1.DaemonSet from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:312
reflector.go:122] Starting reflector *v1.ReplicationController (1h0m0s) from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:321
reflector.go:160] Listing and watching *v1.ReplicationController from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:321
reflector.go:122] Starting reflector *v1.Pod (1h0m0s) from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:188
reflector.go:160] Listing and watching *v1.Pod from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:188
reflector.go:122] Starting reflector *v1.Job (1h0m0s) from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:330
reflector.go:160] Listing and watching *v1.Job from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:330
reflector.go:122] Starting reflector *v1.Pod (1h0m0s) from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:212
reflector.go:160] Listing and watching *v1.Pod from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:212
reflector.go:122] Starting reflector *v1.Node (1h0m0s) from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:246
reflector.go:160] Listing and watching *v1.Node from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:246
reflector.go:122] Starting reflector *v1.ReplicaSet (1h0m0s) from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:339
reflector.go:160] Listing and watching *v1.ReplicaSet from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:339
reflector.go:122] Starting reflector *v1.StatefulSet (1h0m0s) from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:348
reflector.go:160] Listing and watching *v1.StatefulSet from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:348
reflector.go:122] Starting reflector *v1.Node (1h0m0s) from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:246
reflector.go:160] Listing and watching *v1.Node from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:246
reflector.go:122] Starting reflector *v1beta1.PodDisruptionBudget (1h0m0s) from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:301
reflector.go:160] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:301
defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
factory.go:345] Creating scheduler from algorithm provider 'DefaultProvider'
factory.go:433] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
predicates.go:189] Using predicate PodFitsResources
predicates.go:189] Using predicate PodToleratesNodeTaints
predicates.go:189] Using predicate GeneralPredicates
predicates.go:189] Using predicate ready
predicates.go:189] Using predicate MaxAzureDiskVolumeCount
predicates.go:189] Using predicate MaxCSIVolumeCountPred
predicates.go:189] Using predicate CheckNodeUnschedulable
predicates.go:189] Using predicate CheckVolumeBinding
predicates.go:189] Using predicate MaxEBSVolumeCount
predicates.go:189] Using predicate MatchInterPodAffinity
predicates.go:189] Using predicate NoVolumeZoneConflict
predicates.go:189] Using predicate MaxGCEPDVolumeCount
predicates.go:189] Using predicate NoDiskConflict
cloud_provider_builder.go:29] Building aws cloud provider.
reflector.go:122] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:133
reflector.go:160] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
reflector.go:122] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:133
reflector.go:160] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
reflector.go:122] Starting reflector *v1.StatefulSet (0s) from k8s.io/client-go/informers/factory.go:133
reflector.go:160] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
reflector.go:122] Starting reflector *v1.ReplicationController (0s) from k8s.io/client-go/informers/factory.go:133
reflector.go:160] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
reflector.go:122] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:133
reflector.go:160] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
reflector.go:122] Starting reflector *v1.ReplicaSet (0s) from k8s.io/client-go/informers/factory.go:133
reflector.go:160] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
reflector.go:122] Starting reflector *v1beta1.PodDisruptionBudget (0s) from k8s.io/client-go/informers/factory.go:133
reflector.go:160] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
reflector.go:122] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:133
reflector.go:160] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
reflector.go:122] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:133
reflector.go:160] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
reflector.go:122] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:133
reflector.go:160] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:133
request.go:530] Throttling request took 159.448437ms, request: GET:https://100.64.0.1:443/api/v1/nodes?limit=500&resourceVersion=0
auto_scaling_groups.go:320] Regenerating instance to ASG map for ASGs: [mixed.dev.env static.dev.env]
auto_scaling_groups.go:124] Registering ASG mixed.dev.env
auto_scaling_groups.go:124] Registering ASG static.dev.env
aws_manager.go:255] Refreshed ASG list, next refresh after 2019-09-05 19:13:42.358167218 +0000 UTC m=+77.723093389
main.go:268] Registered cleanup signal handler
node_instances_cache.go:155] Start refreshing cloud provider node instances cache
node_instances_cache.go:167] Refresh cloud provider node instances cache finished, refresh took 5.801µs
request.go:530] Throttling request took 351.204239ms, request: GET:https://100.64.0.1:443/api/v1/persistentvolumes?limit=500&resourceVersion=0
request.go:530] Throttling request took 553.343215ms, request: GET:https://100.64.0.1:443/api/v1/pods?limit=500&resourceVersion=0
static_autoscaler.go:187] Starting main loop
utils.go:318] Unable to build proper template node for mixed.dev.env: Unable to get instance type from launch config or launch template
static_autoscaler.go:187] Starting main loop
utils.go:318] Unable to build proper template node for mixed.dev.env: Unable to get instance type from launch config or launch template
static_autoscaler.go:187] Starting main loop
utils.go:318] Unable to build proper template node for mixed.dev.env: Unable to get instance type from launch config or launch template

And this is my Deployment.
IAM roles for access to the ASGs are assigned on the EC2 instances.


apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      serviceAccountName: cluster-autoscaler
      containers:
        - image: gcr.io/google-containers/cluster-autoscaler:v1.15.1
          name: cluster-autoscaler
          resources:
            limits:
              memory: 300Mi
            requests:
              memory: 300Mi
          command:
            - ./cluster-autoscaler
            - --v=4
            - --stderrthreshold=info
            - --cloud-provider=aws
            - --skip-nodes-with-local-storage=false
            - --expander=least-waste
            - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/dev.env
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-certificates.crt
              readOnly: true
          imagePullPolicy: "Always"
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/ssl/certs/ca-certificates.crt"

Thank You!

@lcasassa lcasassa closed this as completed Sep 5, 2019
@lcasassa lcasassa reopened this Sep 5, 2019

lcasassa commented Sep 5, 2019

Sorry....

@chrissound

Having the same issue over here:

I0922 13:37:36.829203       1 aws_manager.go:152] Refreshed ASG list, next refresh after 2019-09-22 13:37:46.829194089 +0000 UTC m=+100.536432316
E0922 13:37:36.829254       1 static_autoscaler.go:473] Failed to update node registry: Error while looking for instances of ASG: {terraform-eks-demo20190922124246790200000007}


Jeffwan commented Oct 11, 2019

I'm running into this issue in one cluster with an instance group with 0 nodes; I set it to a minimum of 1 instance as a workaround. What's the progress on a more permanent solution here?

(k8s 1.12 with cluster-autoscaler 1.12.6)

@choseh Do you use a node selector in this case? Does your ASG have tags?
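
For readers hitting scale-from-0 with node selectors: the AWS cloud provider README documents node-template tags on the ASG (for example k8s.io/cluster-autoscaler/node-template/label/<label-name>) that let CA build a template node while the group has no running instances; exact support depends on the CA version. A small illustrative sketch of adding such a label tag with the AWS SDK for Go follows, with a hypothetical node label and the ASG name borrowed from the logs above.

package main

import (
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/autoscaling"
)

func main() {
    sess := session.Must(session.NewSession())
    svc := autoscaling.New(sess)

    // Hypothetical node label: tells CA that nodes from this group will
    // carry the label nodegroup=spot even while the group is at 0 nodes.
    tag := &autoscaling.Tag{
        ResourceId:        aws.String("mixed.dev.env"),
        ResourceType:      aws.String("auto-scaling-group"),
        Key:               aws.String("k8s.io/cluster-autoscaler/node-template/label/nodegroup"),
        Value:             aws.String("spot"),
        PropagateAtLaunch: aws.Bool(false),
    }

    if _, err := svc.CreateOrUpdateTags(&autoscaling.CreateOrUpdateTagsInput{Tags: []*autoscaling.Tag{tag}}); err != nil {
        log.Fatalf("adding node-template label tag failed: %v", err)
    }
}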


Jeffwan commented Oct 11, 2019

#2248
This PR should address some of the issues users reported here.

We have cherry-picked it to version 1.14+ and will release it next week.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 9, 2020

Jeffwan commented Jan 10, 2020

This has been released. Feel free to reopen if there are any issues.

/close

@k8s-ci-robot

@Jeffwan: Closing this issue.

In response to this:

This has been released. Feel free to reopen if there are any issues.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

yaroslava-serdiuk pushed a commit to yaroslava-serdiuk/autoscaler that referenced this issue Feb 22, 2024
Bumps [sigs.k8s.io/jobset](https://github.com/kubernetes-sigs/jobset) from 0.3.1 to 0.3.2.
- [Release notes](https://github.com/kubernetes-sigs/jobset/releases)
- [Changelog](https://github.com/kubernetes-sigs/jobset/blob/main/RELEASE.md)
- [Commits](kubernetes-sigs/jobset@v0.3.1...v0.3.2)

---
updated-dependencies:
- dependency-name: sigs.k8s.io/jobset
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>