I am referring to this sentence:
"This is one of the main advantages of the public cloud; you know that when you do need to scale up, the space will be there to take up the slack. With multiple organizations making use of the public cloud, spare capacity can always be made available to whoever needs it, so that no servers are sitting idle."
From: learn/docs/hardware-efficiency.mdx, line 66 (commit a9bcbe0)
It was pointed out in a working group at my current company that this allows you to add or remove embodied emissions from your calculation at will, because it implies the public cloud always has 100% hardware efficiency.

Do we have any proof of this? Logically I don't think it can be true; it must be well below 100%.
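To make the concern concrete, here is a minimal back-of-envelope sketch. It is not from the docs: the amortization approach (spread a server's embodied carbon over its lifespan, attribute it in proportion to utilization) and all the numbers are assumptions purely for illustration. The point it shows is that any utilization gap below 100% leaves a slice of amortized embodied carbon that no customer's calculation picks up.

```python
# Illustrative only: made-up numbers, not measured data.
# Amortize one server's embodied carbon over its expected lifespan, then
# see how much of that hourly share is actually attributed to customer
# workloads at different fleet utilization levels.

TOTAL_EMBODIED_KGCO2E = 1200.0   # hypothetical embodied carbon of one server
LIFESPAN_HOURS = 4 * 365 * 24    # assumed 4-year expected lifespan

embodied_per_hour = TOTAL_EMBODIED_KGCO2E / LIFESPAN_HOURS

for utilization in (1.00, 0.80, 0.50):
    attributed = embodied_per_hour * utilization   # carried by customer workloads
    orphaned = embodied_per_hour - attributed      # idle share nobody reports
    print(f"utilization {utilization:>4.0%}: "
          f"attributed {attributed * 1000:.2f} gCO2e/h, "
          f"idle remainder {orphaned * 1000:.2f} gCO2e/h")
```

Only at 100% utilization does the idle remainder vanish; at anything less, part of the embodied carbon is effectively orphaned if each customer only counts their own reserved share.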
If hardware is always in use by customer applications (leaving right-sizing aside for a moment), a customer would be unable to scale up an application unless someone else scaled down at the same time (or within a timeframe acceptable to both). Does this really happen?

I would argue that this would hurt the customer experience: customers would regularly run into a "full cloud" where nothing starts for the next 15 minutes (15 minutes being the AWS spot instance timeout before the instance is taken away from you). Since I have never had that experience with any of the clouds, I think there is always some spare capacity roaming free and unused.
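As a purely illustrative sketch (a toy model, not how any real cloud scheduler works), this is the trade-off I mean: with zero headroom, a scale-up request can only succeed once another tenant has scaled down, while any headroom that makes scale-ups instant is, from the provider's side, allocated hardware sitting idle and carrying embodied carbon.

```python
# Toy model of a fixed-size fleet where a scale-up request can only be
# placed if unallocated capacity already exists.

class Fleet:
    def __init__(self, total_units: int, allocated_units: int):
        self.total = total_units
        self.allocated = allocated_units

    def scale_up(self, units: int) -> bool:
        """Succeeds only if spare (unallocated) capacity exists right now."""
        if self.allocated + units <= self.total:
            self.allocated += units
            return True
        return False  # request must wait for someone else to scale down

    def scale_down(self, units: int) -> None:
        self.allocated -= units


# At 100% hardware efficiency there is no headroom, so a new request
# blocks until another tenant releases capacity.
full = Fleet(total_units=100, allocated_units=100)
print(full.scale_up(5))      # False: needs a matching scale-down first

# With spare capacity the request is served immediately, but that spare
# hardware is powered on and amortizing embodied carbon while unused.
headroom = Fleet(total_units=100, allocated_units=85)
print(headroom.scale_up(5))  # True
```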