This repository has been archived by the owner on May 15, 2024. It is now read-only.

Not able to get the emailable report even after successful cloudformation deployment #33

Open
kkashyap1707 opened this issue May 21, 2021 · 17 comments


@kkashyap1707

Hi team,

I am not able to get the emailable report even after a successful CloudFormation deployment.

Steps:
Updated the deploy.sh file.

export BUCKET=test-awsbilling-cost
export SES_TO=[email protected] -- verified from SES
export SES_FROM=[email protected] -- verified from SES

Please suggest the steps.

@nikolaigauss

Do you see the report in the S3 bucket?

@tanvir-appveen

> Do you see the report in the S3 bucket?

I'm facing the same issue, and I don't see the report in the S3 bucket.

@nikolaigauss

Have you set the bucket name in the S3_BUCKET environment value in the Lambda?

@tanvir-appveen

> Have you set the bucket name in the S3_BUCKET environment value in the Lambda?

Yes, I have.

@nikolaigauss

That's certainly odd; maybe you could try to put some logging around these lines:

        if os.environ.get('S3_BUCKET'):
            s3 = boto3.client('s3')
            s3.upload_file(filename, os.environ.get('S3_BUCKET'), filename)

Something like logging.info("hello!") should do the trick and hopefully give you more insight into the Lambda logs.
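For reference, a minimal sketch of how that could look around the handler (the logger setup is the standard Lambda pattern; the handler body here is illustrative, not the repo's actual code):

```python
import logging
import os

# Lambda's Python runtime attaches a handler to the root logger, but the
# default level hides INFO messages; raising it makes logging.info() show up
# in the function's CloudWatch log stream.
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # Illustrative placeholder for the report-generation steps.
    logging.info("handler entered")
    bucket = os.environ.get('S3_BUCKET')
    if bucket:
        logging.info("S3_BUCKET is set to %s, uploading report", bucket)
    else:
        logging.info("S3_BUCKET is not set, skipping upload")
    return bucket
```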

@tanvir-appveen

tanvir-appveen commented Jun 16, 2021

> That's certainly odd; maybe you could try to put some logging around these lines:
>
>         if os.environ.get('S3_BUCKET'):
>             s3 = boto3.client('s3')
>             s3.upload_file(filename, os.environ.get('S3_BUCKET'), filename)
>
> Something like logging.info("hello!") should do the trick and hopefully give you more insight into the Lambda logs.

Okay, first I had to deploy it and get it running. Then I had to change main_handler to lambda_handler.

When I deployed and ran it, I got this error:

botocore.errorfactory.AWSOrganizationsNotInUseException: An error occurred (AWSOrganizationsNotInUseException) when calling the ListAccounts operation: Your account is not a member of an organization.
END RequestId: a3fec461-570f-4a8d-9f9d-c344fbcf5b7f

We don't need to create an organization. I can just hardcode it, right?

@nikolaigauss

Yeah, this is what I've done:

        self.accounts = {} #We don't have permission to list accounts in org so this is irrelevant
        # try:
        #     self.accounts = self.getAccounts()
        # except:
        #     logging.exception("Getting Account names failed")
        #     self.accounts = {}
        
    # def getAccounts(self):
    #     accounts = {}
    #     client = boto3.client('organizations', region_name='us-east-1')
    #     paginator = client.get_paginator('list_accounts')
    #     response_iterator = paginator.paginate()
    #     for response in response_iterator:
    #         for acc in response['Accounts']:
    #             accounts[acc['Id']] = acc
    #     return accounts

We're inside an org, but we run linked accounts, so I just deploy this code in my different accounts with Terraform and that's it.
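If you go the hardcoding route, a hypothetical map keyed by account Id (the Ids and names below are placeholders, mirroring the `{Id: account-record}` shape that getAccounts built from ListAccounts) could stand in for the Organizations call:

```python
# Placeholder account Ids and names -- replace with your own. The dict
# mirrors the shape getAccounts built from organizations:ListAccounts.
ACCOUNTS = {
    "111111111111": {"Id": "111111111111", "Name": "prod"},
    "222222222222": {"Id": "222222222222", "Name": "staging"},
}

def account_name(account_id):
    # Fall back to the raw Id when an account is not in the map, so the
    # report still renders something sensible for unknown accounts.
    return ACCOUNTS.get(account_id, {}).get("Name", account_id)
```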

@tanvir-appveen

Can you please explain what I should enter in COST_TAGS, TAG_KEY and TAG_VALUE_FILTER?

My resources are tagged with Team as the key and their department as the value. So, I've entered TAG_KEY as Team and TAG_VALUE_FILTER as TeamName.

Is that correct, and what is an example of COST_TAGS?

@tanvir-appveen

tanvir-appveen commented Jun 16, 2021

Also, I'm getting this error:

[ERROR] S3UploadFailedError: Failed to upload cost_explorer_report.xlsx to mybucketsamplename-aws-cost-reports/cost_explorer_report.xlsx: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 465, in lambda_handler
    costexplorer.generateExcel()
  File "/var/task/lambda_function.py", line 408, in generateExcel
    s3.upload_file("cost_explorer_report.xlsx", os.environ.get('S3_BUCKET'), "cost_explorer_report.xlsx")
  File "/var/runtime/boto3/s3/inject.py", line 129, in upload_file
    return transfer.upload_file(
  File "/var/runtime/boto3/s3/transfer.py", line 285, in upload_file
    raise S3UploadFailedError(
END RequestId: cd5402e6-646e-4dd9-b8cc-623c23b7d5d8
REPORT RequestId: cd5402e6-646e-4dd9-b8cc-623c23b7d5d8	Duration: 13606.18 ms	Billed Duration: 13607 ms	Memory Size: 128 MB	Max Memory Used: 128 MB

FYI, for now the IAM role that this Lambda is using has been given full access to S3 (until I can understand the issue).

@nikolaigauss

nikolaigauss commented Jun 16, 2021

> Can you please explain what I should enter in COST_TAGS, TAG_KEY and TAG_VALUE_FILTER?
>
> My resources are tagged with Team as the key and their department as the value. So, I've entered TAG_KEY as Team and TAG_VALUE_FILTER as TeamName.
>
> Is that correct, and what is an example of COST_TAGS?

Well, that bit is quite confusing, to be honest. COST_TAGS refers to actual cost allocation tags: https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html. That's what I use to identify which actual resources are the cause of the expenditure. It's limited, though, because you can't tag every resource on AWS; a prime example is data transfer, where you get an aggregate value rather than a cost breakdown per tag, if that makes sense.
As for TAG_KEY and TAG_VALUE_FILTER, I don't use them, so apologies for not being able to offer any insight there.
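For what it's worth, COST_TAGS is typically a comma-separated list of cost allocation tag keys. Here is a sketch of how such a value might be turned into the Cost Explorer GroupBy entries that boto3's ce.get_cost_and_usage accepts (the parsing below is an assumption for illustration, not necessarily this repo's exact code):

```python
def cost_tag_groups(env_value):
    # Split a comma-separated COST_TAGS value (e.g. "Team,Project") into the
    # GroupBy dicts that boto3's ce.get_cost_and_usage accepts, skipping
    # empty entries and stray whitespace.
    return [{"Type": "TAG", "Key": tag.strip()}
            for tag in env_value.split(",") if tag.strip()]

groups = cost_tag_groups("Team,Project")
# groups == [{"Type": "TAG", "Key": "Team"}, {"Type": "TAG", "Key": "Project"}]
```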

On the Lambda roles this is what I'm using:

data "aws_iam_policy_document" "lambda_costs_report" {
  statement {
    sid       = "CostReportsConfection"
    effect    = "Allow"
    resources = ["*"]
    actions = [
      "ce:*",
      "ses:SendEmail",
      "ses:SendRawEmail",
      "s3:PutObject",
      "s3:PutObjectAcl"
    ]
  }
  
  statement {
    sid       = "PublishLogsOnCW"
    effect    = "Allow"
    resources = ["*"]
    actions = [
      "logs:CreateLogGroup",
      "logs:CreateLogStream",
      "logs:PutLogEvents"
    ]
  }
}

You could probably make it more restrictive by only allowing S3 writes to a particular bucket; I'm not that concerned about that, so I left it like this.
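For example, the rendered IAM policy JSON with the S3 actions scoped to a single bucket might look like this (the bucket name is a placeholder; the rest mirrors the Terraform statement above):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CostReportsConfection",
      "Effect": "Allow",
      "Action": ["ce:*", "ses:SendEmail", "ses:SendRawEmail"],
      "Resource": "*"
    },
    {
      "Sid": "WriteReportsBucket",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::my-report-bucket/*"
    }
  ]
}
```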

I hope some of that helps.

@davfaulk
Contributor

Hi. So in theory you can just ignore the first error. The ListAccounts error prints, but then it continues and just reports on the account it is running in instead.
The thing stopping this from working is that you have to supply an S3 bucket, and it looks like yours is set to the placeholder value "mybucketsamplename-aws-cost-reports". That used to throw an error for not existing, but I figure someone has made a bucket with that name by now.
Bucket names are globally unique, so you need to have made one and put its name in the CloudFormation template (or the deploy script, or the Lambda's environment variables).
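A quick local sanity check of the bucket-name format can catch placeholder or invalid values before deploying. The regex below is a rough approximation of the S3 naming rules, not the full specification; actual existence and uniqueness can only be confirmed against AWS itself (e.g. with `aws s3api head-bucket`):

```python
import re

# Rough approximation of S3 bucket naming rules: 3-63 characters, lowercase
# letters, digits, dots and hyphens, starting and ending with a letter or digit.
BUCKET_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def looks_like_valid_bucket_name(name):
    return bool(BUCKET_NAME_RE.match(name))

looks_like_valid_bucket_name("test-awsbilling-cost")  # True
looks_like_valid_bucket_name("My_Bucket")             # False: uppercase/underscore
```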

@tanvir-appveen

> Hi. So in theory you can just ignore the first error. The ListAccounts error prints, but then it continues and just reports on the account it is running in instead.
> The thing stopping this from working is that you have to supply an S3 bucket, and it looks like yours is set to the placeholder value "mybucketsamplename-aws-cost-reports". That used to throw an error for not existing, but I figure someone has made a bucket with that name by now.
> Bucket names are globally unique, so you need to have made one and put its name in the CloudFormation template (or the deploy script, or the Lambda's environment variables).

I didn't want to expose my bucket name here, so I just made up a fake name and replaced it.

It was a policy error, though not on my end. I created the correct policy via the console in the first place, but AWS only saved the first statement, which had the Cost Explorer part.

Manually writing the JSON and saving it worked.

Could you please explain the difference between using COST_TAGS and TAG_KEY with TAG_VALUE_FILTER?

Right now their values are as follows:

  1. COST_TAGS: Team
  2. TAG_KEY: Team
  3. TAG_VALUE_FILTER: TeamName

My main confusion is with COST_TAGS.

@ardasendurplentific

ardasendurplentific commented Oct 27, 2021

@kkashyap1707 did you solve the email error? I am getting the same error. Even though the boto client gave me a 200 response code, I could not receive the email. I have the same configuration as in this link (#33 (comment))

@nikolaigauss

nikolaigauss commented Oct 27, 2021

@ardasendurplentific Is the SES account in sandbox mode? If that's the case, you either need to request production access, or pre-verify the email addresses.

@ardasendurplentific

@nikolaigauss I am not using sandbox mode. The email addresses are verified.

@nikolaigauss

@ardasendurplentific What do you see in the Lambda logs in the monitoring section? You might need to grant permissions so that the Lambda can write to CloudWatch to get them. I can't remember off the top of my head what those permissions are, but I'm pretty sure there is plenty of documentation about it.

@ardasendurplentific

@nikolaigauss To be honest, if it were related to permissions, the boto client would give me a 403 error. By the way, I checked my Lambda execution role and it contains the SES-related permissions. We solved the issue: it was related to a filter on the mail domain. Thanks for your support and effort @nikolaigauss
