Encountered errors while running AWS CloudFormation query to list the stacks #1954
Hello @sivashankar-tomtom, I'm sorry to hear that you're encountering an issue, and I appreciate you bringing it to our attention. We will investigate and address it. Thank you!
@sivashankar-tomtom, sorry for the delayed response. I have raised a PR with fixes addressing this issue. It would be great if you could try it and let us know if you run into any other problems. Steps for testing out the code changes:
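Assuming the standard development workflow from the plugin's README (the specific steps are inferred, not quoted from this thread):

1. Clone turbot/steampipe-plugin-aws and check out the PR branch.
2. Run `make` to build the plugin and install it into your local Steampipe plugin directory.
3. Re-run the failing query (e.g. `select * from aws_cloudformation_stack`) against the local build.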
Thanks!
@ParthaI Can you please add some more details on what caused the bug, with an example CFN stack template, and what your approach for the fix was? I had a look at the PR, but it wasn't immediately clear to me why we needed a new transform. Thanks!
Hi @cbruno10, here is a summary of the changes. Please have a look.
Regrettably, these functions did not effectively process the following template block:

```yaml
CopyZipsFunction:
  Type: AWS::Lambda::Function
  Properties:
    Description: Copies objects from a source S3 bucket to a destination
    Handler: index.handler
    Runtime: python3.8
    Role: !GetAtt "CopyZipsRole.Arn"
    Timeout: 240
    Code:
      ZipFile: |
        import json
        import logging
        import threading
        import boto3
        import cfnresponse
        def copy_objects(source_bucket, dest_bucket, prefix, objects):
            s3 = boto3.client('s3')
            for o in objects:
                key = prefix + o
                copy_source = {
                    'Bucket': source_bucket,
                    'Key': key
                }
                print('copy_source: %s' % copy_source)
                print('dest_bucket = %s' % dest_bucket)
                print('key = %s' % key)
                s3.copy_object(CopySource=copy_source, Bucket=dest_bucket,
                               Key=key)
        def delete_objects(bucket, prefix, objects):
            s3 = boto3.client('s3')
            objects = {'Objects': [{'Key': prefix + o} for o in objects]}
            s3.delete_objects(Bucket=bucket, Delete=objects)
        def timeout(event, context):
            logging.error('Execution is about to time out, sending failure response to CloudFormation')
            cfnresponse.send(event, context, cfnresponse.FAILED, {}, None)
        def handler(event, context):
            # make sure we send a failure to CloudFormation if the function
            # is going to timeout
            timer = threading.Timer((context.get_remaining_time_in_millis()
                                     / 1000.00) - 0.5, timeout, args=[event, context])
            timer.start()
            print('Received event: %s' % json.dumps(event))
            status = cfnresponse.SUCCESS
            try:
                source_bucket = event['ResourceProperties']['SourceBucket']
                dest_bucket = event['ResourceProperties']['DestBucket']
                prefix = event['ResourceProperties']['Prefix']
                objects = event['ResourceProperties']['Objects']
                if event['RequestType'] == 'Delete':
                    delete_objects(dest_bucket, prefix, objects)
                else:
                    copy_objects(source_bucket, dest_bucket, prefix, objects)
            except Exception as e:
                logging.error('Exception: %s' % e, exc_info=True)
                status = cfnresponse.FAILED
            finally:
                timer.cancel()
                cfnresponse.send(event, context, status, {}, None)
```

To address this, I implemented additional manipulation logic to unescape the template string correctly. Subsequently, I employed the

Your suggestions about the implementation would be highly appreciated. Thanks!
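To make the failure mode concrete, here is a minimal sketch in Python (the plugin itself is written in Go, so this illustrates the general idea rather than the actual PR code). It assumes the template body reaches the parser with escaped `\n` sequences instead of real newlines, which destroys the line structure that the `ZipFile: |` block scalar depends on; unescaping first restores it. The `escaped_body` fragment is made up for illustration.

```python
# Minimal sketch of the failure mode, assuming the template body arrives with
# escaped "\n" sequences instead of real newlines. PyYAML stands in for the
# plugin's actual (Go) template parser; this is illustrative, not the PR code.
import yaml

# Hypothetical template body as received: one long line where every newline
# is the two characters "\" and "n".
escaped_body = (
    "CopyZipsFunction:\\n"
    "  Type: AWS::Lambda::Function\\n"
    "  Properties:\\n"
    "    Handler: index.handler\\n"
    "    Code:\\n"
    "      ZipFile: |\\n"
    "        import json\\n"
    "        import boto3\\n"
)

# Parsing the escaped form fails: the parser never sees the line breaks that
# the "|" block scalar (and the mapping structure) depend on.
try:
    yaml.safe_load(escaped_body)
except yaml.YAMLError as err:
    print("parse failed on escaped body:", type(err).__name__)

# Unescape first -- turning the literal "\n" sequences back into real
# newlines -- and the same document parses cleanly, ZipFile block included.
unescaped_body = escaped_body.encode("ascii").decode("unicode_escape")
template = yaml.safe_load(unescaped_body)
print(template["CopyZipsFunction"]["Properties"]["Code"]["ZipFile"])
# -> "import json\nimport boto3\n"
```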
This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 30 days. |
Describe the bug
I have started using the Steampipe DB as a service in our organization. I could not get the list of CloudFormation stacks running in an account.
Steampipe version (steampipe -v)
v0.21.1
Plugin version (steampipe plugin list)
hub.steampipe.io/plugins/turbot/[email protected]
To reproduce
I encountered an error while running the following query:

```sql
select * from aws.aws_cloudformation_stack
```
Expected behavior
The query should fetch the list of CloudFormation stacks running in an AWS account.