
Can't see task level cpu/memory utilization in cloudwatch #106

Closed
rutchkiwi opened this issue Oct 21, 2016 · 37 comments
Labels
ECS (Amazon Elastic Container Service), Proposed (Community submitted issue)

Comments

@rutchkiwi

Hi!

In my team we're using ECS to run services. The ECS agent collects a bunch of metrics that we can then view in CloudWatch. Especially useful for us is the memory/cpu utilization metric, which we use to tune how much memory and cpu we allocate to services. This is really nice in that it allows us to catch services that are running at dangerously high memory before they go to 100% and get killed by the agent.

In our cluster we're also running a bunch of scheduled tasks, like ETLs, daily cleanups, sending emails at specific times etc. These are run by simply starting ECS tasks.

In CloudWatch, we can only see ECS cluster and service level metrics, not task level ones. This means that we can't see how much memory/cpu these tasks use (as they are not running under an ECS service).
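For context, this is roughly how we pull the service-level numbers today (a minimal boto3 sketch; the cluster and service names are placeholders):

# Sketch: read the existing service-level metric from the AWS/ECS namespace.
# "my-cluster" and "my-service" are placeholder names.
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/ECS",
    MetricName="MemoryUtilization",
    Dimensions=[
        {"Name": "ClusterName", "Value": "my-cluster"},
        {"Name": "ServiceName", "Value": "my-service"},
    ],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average", "Maximum"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"], point["Maximum"])

There is no dimension we can pass here to narrow this down to a single task or a standalone scheduled task.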

It would be really nice to get these metrics for these kinds of short-lived tasks as well. Are there any plans to support this in CloudWatch?

thanks for any replies!

@christianblunden

+1

@billyshambrook

Is this a limitation of the agent or cloudwatch?

@atifrizwan89

+1 for task level monitoring

@skatenerd

+1 it would be nice to have an official response at least telling us whether cloudwatch will eventually offer this

@bramswenson

@billyshambrook More likely a limitation of ECS itself, and the metrics it is emitting to CloudWatch. The current dimensions are ClusterName and ServiceName:
http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ecs-metricscollected.html
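A quick way to see that limitation from the API side is to list the dimension sets CloudWatch actually has under the AWS/ECS namespace; a boto3 sketch, with "my-cluster" as a placeholder:

# Sketch: list the metric/dimension combinations in the AWS/ECS namespace.
# Per the linked docs, only ClusterName and ClusterName+ServiceName come back.
import boto3

cloudwatch = boto3.client("cloudwatch")

paginator = cloudwatch.get_paginator("list_metrics")
for page in paginator.paginate(
    Namespace="AWS/ECS",
    Dimensions=[{"Name": "ClusterName", "Value": "my-cluster"}],
):
    for metric in page["Metrics"]:
        print(metric["MetricName"], sorted(d["Name"] for d in metric["Dimensions"]))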

@jonathonsim

+1 for adding a dimension for TaskName to these metrics. Without it we can't really get a full picture of what's running on the cluster, only what's running in a service. Tasks scheduled based on things like CloudWatch Events are invisible.

@aaithal

aaithal commented Jan 30, 2018

Hello,

One of the reasons for not having utilization metrics more granular than service name is the ephemeral nature of task IDs. Publishing utilization metrics by task ID can lead to metric spam, as most of these tasks are short-lived by nature. It's also very hard to alarm on something as ephemeral as a task ID. Having an identifier that's more human-readable makes these things easier.

Emitting utilization metrics aggregated by task definition family and version strings is something of a middle ground, which we have considered as an alternative. Is that something that you think would prove helpful here?

Thanks,
Anirudh
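To make that middle ground concrete, here is a purely hypothetical sketch of what a collector could publish today as custom metrics, keyed by task definition family and revision rather than by task ID (the namespace and dimension names are made up for illustration; this is not an existing AWS API):

# Hypothetical: publish utilization aggregated per task definition family/revision
# as a custom metric, so alarms can target a stable name instead of an ephemeral task ID.
import boto3

cloudwatch = boto3.client("cloudwatch")

def publish_family_utilization(cluster, family, revision, memory_percent):
    cloudwatch.put_metric_data(
        Namespace="Custom/ECS",  # placeholder namespace
        MetricData=[{
            "MetricName": "MemoryUtilization",
            "Dimensions": [
                {"Name": "ClusterName", "Value": cluster},
                {"Name": "TaskDefinitionFamily", "Value": family},
                {"Name": "TaskDefinitionRevision", "Value": str(revision)},
            ],
            "Value": memory_percent,
            "Unit": "Percent",
        }],
    )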

@jonathonsim

@aaithal - I think that would give us what we need for our use case

@mdamir

mdamir commented Feb 11, 2018

Yes, @aaithal, that could be helpful. An alternative would also be to have a completely new metric such as "maxMemoryUtilization" which would track the maximum memory across all tasks in an ECS service.

@bashilbers

+1. A task's container name does not change that much, right? Is that an alternative to use as a task metric?

@jrodr12

jrodr12 commented Apr 16, 2018

+1

4 similar comments
@mandeepbal

+1

@dustinbolton

+1

@hlopezvg

+1

@milanbrahmbhatt

+1

@blues4ugrl

Another reason for insight into task-level metrics is to help debug issues. I have Service A and it runs 30 tasks. By other means (i.e. alerting on CloudWatch events from the ECS agent) I get a notification that 1 or 2 tasks were stopped due to an OutOfMemoryError. When I view service-level metrics and look at max memory utilization during the timeframe in which those tasks were stopped, the max utilization is < 80%.

According to documentation:

Service memory utilization (metrics that are filtered by ClusterName and ServiceName) is measured as the total memory in use by the tasks that belong to the service, divided by the total memory that is reserved for the tasks that belong to the service.

Out of my 30 tasks, only 2 were stopped due to memory pressure. What about the other tasks? Are they only utilizing a small percentage compared to the 2 that fell over? Or were they high in utilization as well, and only 2 tasks hit that breaking point? Knowing that makes a difference - either you don't have enough capacity overall, or you have some code that in certain data scenarios uses a ton of memory.
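To make the blind spot concrete, a bit of illustrative arithmetic (the task counts and memory sizes below are made up):

# 30 tasks each reserving 1024 MiB; 28 sit at ~700 MiB while 2 hit their limit
# and are OOM-killed. The service-level number still looks healthy.
reserved = 30 * 1024
in_use = 28 * 700 + 2 * 1024
print(f"service memory utilization: {in_use / reserved:.0%}")  # ~70%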

If you already know the "total memory in use by the tasks that belong to the service" to be able to show us the overall utilization, I'm hoping that based on the conversations/feedback above, you'll find a way to expose it that makes sense to those looking for it. Thanks for listening! :)

@waffleshop

+1

It's hard for me to recommend ECS as a container solution without being able to monitor basic container-level metrics. The burden is pushed onto us, your customers, to develop our own means of container resource monitoring.

I wrote PowerShell and Python scripts to ship these metrics to CloudWatch, but depending on the number of containers you're running across your environments, the cost can be quite ridiculous. I recommend shipping these metrics to another monitoring solution if you have lots of containers.
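For anyone curious, the Python variant of such a script can be quite small. A rough sketch, assuming the v3 task metadata endpoint (ECS_CONTAINER_METADATA_URI) and the Docker-stats field names it returns; the namespace and dimension names are placeholders:

# Rough sketch of a per-container memory shipper using the ECS task metadata
# endpoint (v3). Field names follow the Docker stats JSON the endpoint returns.
import os

import boto3
import requests

metadata_uri = os.environ["ECS_CONTAINER_METADATA_URI"]  # injected by the ECS agent
task = requests.get(f"{metadata_uri}/task").json()
stats = requests.get(f"{metadata_uri}/task/stats").json()

cloudwatch = boto3.client("cloudwatch")

for container in task.get("Containers", []):
    mem = (stats.get(container["DockerId"]) or {}).get("memory_stats", {})
    usage, limit = mem.get("usage"), mem.get("limit")
    if not usage or not limit:
        continue
    cloudwatch.put_metric_data(
        Namespace="Custom/ECSContainers",  # placeholder namespace
        MetricData=[{
            "MetricName": "MemoryUtilization",
            "Dimensions": [
                {"Name": "TaskFamily", "Value": task.get("Family", "unknown")},
                {"Name": "ContainerName", "Value": container["Name"]},
            ],
            "Value": 100.0 * usage / limit,
            "Unit": "Percent",
        }],
    )

Each distinct dimension combination is billed as its own custom metric, which is where the cost mentioned above comes from.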

@ryanpagel

+1

1 similar comment
@vimmis

vimmis commented Oct 18, 2018

+1

@coultn

coultn commented Oct 31, 2018

Thanks for the feedback. I wanted to let you know that the ECS team is aware of this issue, and that it is under active consideration. We always appreciate +1's and additional details on use cases.

@kbhandar

kbhandar commented Nov 2, 2018

+1

@danielfosbery

+1 This would be really helpful. We have a service that hits 100% max CPU but with an average CPU of about 40%. Some tasks are doing more work than others; without task-level stats it is very hard to debug which tasks are running at capacity and why.

@sandeepboyapati

+1

2 similar comments
@kandoiNikhil

+1

@DionJones615

+1

@abby-fuller
Contributor

moving this over to the containers roadmap since this is a feature request and not an ecs-agent issue.

@abby-fuller abby-fuller transferred this issue from aws/amazon-ecs-agent Jan 10, 2019
@abby-fuller abby-fuller added the ECS (Amazon Elastic Container Service) and Proposed (Community submitted issue) labels Jan 10, 2019
@nicolas-modsy

+1

@deleugpn

deleugpn commented Feb 9, 2019

My use case is that I often see some of my services with max CPU utilization near 100% and min utilization near 10%. I can only assume that some tasks are working hard while others are being lazy, but I don't know which. I'd like to know, so I could either find out why or at least kill them and get a better one.

@enricopesce

+1

1 similar comment
@medbensalem

+1

@gdanielson

A big +1 👍 for task level resource tracking. Since the inside of a running container is normally so opaque, any additional information on run-time state is extremely valuable when things do not go according to plan.

@akshayram-wolverine
Contributor

Hi everyone,

This feature is now in preview: https://aws.amazon.com/about-aws/whats-new/2019/07/introducing-container-insights-for-ecs-and-aws-fargate-in-preview/

Look forward to feedback!

@akshayram-wolverine
Contributor

Shipped! More info here: https://aws.amazon.com/about-aws/whats-new/2019/08/container-monitoring-for-amazon-ecs-eks-and-kubernetes-is-now-available-in-amazon-cloudwatch/

@esbie

esbie commented Sep 3, 2019

It seems like for ECS, task-level metrics were not added to CloudWatch Container Insights. I only see "TaskDefinitionFamily" in the ECS supported dimensions. https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-metrics-ECS.html
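For what it's worth, the family-level numbers can be queried like this; a boto3 sketch using the namespace, metric, and dimension names from that page ("my-cluster" and "my-family" are placeholders):

# Sketch: pull Container Insights memory usage aggregated by task definition family.
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")

stats = cloudwatch.get_metric_statistics(
    Namespace="ECS/ContainerInsights",
    MetricName="MemoryUtilized",  # reported in MiB per the linked page
    Dimensions=[
        {"Name": "ClusterName", "Value": "my-cluster"},
        {"Name": "TaskDefinitionFamily", "Value": "my-family"},
    ],
    StartTime=datetime.utcnow() - timedelta(hours=3),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average", "Maximum"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"], point["Maximum"])

That still stops at the family level, though, not individual tasks.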

@vcolano

vcolano commented Oct 22, 2019

The docs explicitly state that this is not available for AWS Batch: "Currently, Container Insights isn't supported in AWS Batch."

When will this be supported for Batch?

@ayush-san

Is there any timeline for it to be supported for batch too?

@sasuolanderSito

Technically, is it possible to turn Container Insights on in a Batch compute environment by running:

aws ecs update-cluster-settings --cluster BatchComputeEnviromentClusterEC2 --settings "name=containerInsights,value=enabled" ?

The compute environment for Fargate seems to be just a normal EC2 cluster.
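For anyone scripting it, the boto3 equivalent of that CLI call looks like this (the cluster name is just the example from the comment; substitute whatever cluster Batch created for your compute environment):

# boto3 equivalent of the CLI command above; the cluster name is the
# example placeholder from the comment.
import boto3

ecs = boto3.client("ecs")
ecs.update_cluster_settings(
    cluster="BatchComputeEnviromentClusterEC2",
    settings=[{"name": "containerInsights", "value": "enabled"}],
)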
