What problem are you trying to solve?
It seems that Karpenter's instance scheduling is based solely on EC2 economics: nodes are provisioned by instance-type price, so, given the node-type constraints and the resources that need to be scheduled, we can end up with, for example, 2 c7.xlarge instances instead of 1 c7.2xlarge. Since on-demand prices within a family scale roughly linearly with size, both packings cost about the same in EC2 terms.
This model does not work when using paid services that are billed per host (e.g. DataDog, outcoldsolutions, orca.security, etc.): every additional node adds a fixed per-host fee.
It would really be nice to have a fewest-nodes strategy for scheduling.
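As a rough illustration of the economics (all prices below are hypothetical, not actual EC2 or vendor rates): both packings cost about the same on EC2 alone, but every extra node carries the full per-host fee, so the 2-node packing ends up meaningfully more expensive.

```python
# Rough illustration; all prices below are hypothetical, not actual EC2 or vendor rates.
HOURS_PER_MONTH = 730

ec2_hourly = {"c7.xlarge": 0.17, "c7.2xlarge": 0.34}  # assumed: 2xlarge priced at ~2x the xlarge
per_host_monthly_fee = 55                             # e.g. two per-host-billed agents at $25 + $30

def monthly_cost(instance_type: str, count: int) -> float:
    """Total monthly cost of a packing: EC2 price plus per-host service fees."""
    ec2 = ec2_hourly[instance_type] * HOURS_PER_MONTH * count
    agents = per_host_monthly_fee * count
    return ec2 + agents

print(monthly_cost("c7.xlarge", 2))   # ~$358/month (EC2 ~$248 + $110 in per-host fees)
print(monthly_cost("c7.2xlarge", 1))  # ~$303/month (EC2 ~$248 + $55 in per-host fees)
```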
How important is this feature to you?
Given the price of the aforementioned services, it is really important :)
We could also consider adding the possibility to define custom costs that would be taken into account during scheduling, e.g.:
```yaml
---
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: dedicated-ingress-arm64
spec:
  # In the same currency as the instance price
  costs:
    - name: datadog per host cost
      perHostMonthlyPrice: 25
    - name: orca.security per host cost
      perHostMonthlyPrice: 30
```
With this, considering that all resources needed for scheduling correspond to an m7.2xlarge, we would have:
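A rough sketch of that comparison, converting the proposed perHostMonthlyPrice values to an hourly overhead; the costs field is only a proposal and the EC2 prices are placeholders, not real quotes:

```python
# Sketch of how the proposed perHostMonthlyPrice could feed into the packing decision.
# The costs entries below mirror the NodePool proposal above; EC2 prices are hypothetical.
HOURS_PER_MONTH = 730

node_pool_costs = [
    {"name": "datadog per host cost", "perHostMonthlyPrice": 25},
    {"name": "orca.security per host cost", "perHostMonthlyPrice": 30},
]
per_host_hourly_overhead = sum(c["perHostMonthlyPrice"] for c in node_pool_costs) / HOURS_PER_MONTH

ec2_hourly = {"m7.xlarge": 0.20, "m7.2xlarge": 0.40}  # assumed linear pricing within the family

def effective_hourly(instance_type: str, count: int) -> float:
    """Effective hourly cost of a packing: EC2 price plus per-host overhead, per node."""
    return (ec2_hourly[instance_type] + per_host_hourly_overhead) * count

# Both packings are equivalent on EC2 price alone; with per-host costs included,
# 1 x m7.2xlarge comes out cheaper than 2 x m7.xlarge.
print(effective_hourly("m7.xlarge", 2))   # ~0.55/hour
print(effective_hourly("m7.2xlarge", 1))  # ~0.48/hour
```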