AWS Event Fork Pipelines: Event Storage and Backup Pipeline

This AWS Event Fork Pipelines app backs up events from the given Amazon SNS topic to an Amazon S3 bucket, using an Amazon Kinesis Data Firehose stream.

Architecture

(Architecture diagram: AWS Event Fork Pipelines event storage and backup pipeline)

  1. An Amazon SQS queue is subscribed to the given SNS Topic ARN with an optional subscription filter policy.
  2. An AWS Lambda function reads events from the SQS queue and publishes them to an Amazon Kinesis Data Firehose Delivery Stream, which saves them to an Amazon S3 bucket.
    1. An optional data transformation Lambda function can be specified to transform the data prior to saving it to the S3 bucket.
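For orientation, here is a rough, simplified SAM-style sketch of the first two hops described above: the SNS-to-SQS subscription and the queue-polling Lambda function. This is not the app's actual template; the Firehose delivery stream, IAM roles, queue policy, and filter policy are omitted, and the resource names and code path are illustrative.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Parameters:
  TopicArn:
    Type: String   # ARN of the SNS topic whose events should be backed up

Resources:
  # Queue that buffers the events delivered by the SNS subscription
  BackupQueue:
    Type: AWS::SQS::Queue

  # Subscribes the queue to the given topic
  BackupSubscription:
    Type: AWS::SNS::Subscription
    Properties:
      TopicArn: !Ref TopicArn
      Protocol: sqs
      Endpoint: !GetAtt BackupQueue.Arn

  # Function that polls the queue and forwards batches to the Firehose delivery stream
  BackupFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: python3.9
      Handler: index.handler
      CodeUri: ./backup_function/   # illustrative path
      Events:
        FromQueue:
          Type: SQS
          Properties:
            Queue: !GetAtt BackupQueue.Arn
            BatchSize: 10
```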

Installation

This app is meant to be used as part of a larger application, so the recommended way to use it is to embed it as a nested app in your serverless application. To do this, visit the app's page on the AWS Lambda Console. Click the "Copy as SAM Resource" button and paste the copied YAML into your SAM template, filling in any required parameters. Alternatively, you can deploy the application into your account directly via the AWS Lambda Console.
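For example, the copied YAML looks roughly like the snippet below. The ApplicationId and SemanticVersion are placeholders for the values you copy from the app's page, and MyEventTopic is a hypothetical SNS topic defined elsewhere in your template.

```yaml
Resources:
  EventStorageBackupPipeline:
    Type: AWS::Serverless::Application
    Properties:
      Location:
        ApplicationId: <ARN copied from the app's page>        # placeholder
        SemanticVersion: <version copied from the app's page>  # placeholder
      Parameters:
        TopicArn: !Ref MyEventTopic   # hypothetical SNS topic in the same template
```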

Parameters

  1. TopicArn (required) - The ARN of the SNS topic to which this instance of the pipeline should be subscribed.
  2. SubscriptionFilterPolicy (optional) - The SNS subscription filter policy, in JSON format, used for filtering the incoming events. The filter policy decides which events are processed by this pipeline. If you don’t enter any value, then no filtering is used, meaning all events are processed.
  3. StreamPrefix (optional) - The string prefix used for naming files stored in the S3 bucket. If you don’t enter any value, then no prefix is used.
  4. StreamCompressionFormat (optional) - The format used for compressing the incoming events. Three options are available, namely GZIP, ZIP, and SNAPPY. If you don’t enter any value, then data compression is disabled.
  5. BucketArn (optional) - The ARN of the S3 bucket to which incoming events are loaded. If you don't enter any value, then a new S3 bucket is created in your account.
  6. StreamBufferingIntervalInSeconds (optional) - The number of seconds for which the stream buffers incoming events before delivering them to the destination, as an integer from 60 to 900. If you don't enter any value, then 300 is used.
  7. StreamBufferingSizeInMBs (optional) - The amount of data, in MB, that the stream buffers before delivering it to the destination, as an integer from 1 to 100. If you don't enter any value, then 5 is used.
  8. DataTransformationFunctionArn (optional) - The ARN of the Lambda function used for transforming the incoming events. If you don’t enter any value, then data transformation is disabled.
  9. LogLevel (optional) - The level used for logging the execution of the Lambda function that polls events from the SQS queue. Four options are available, namely DEBUG, INFO, WARNING, and ERROR. If you don’t enter any value, then INFO is used.
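As an illustration, the nested-app resource from the Installation section might pass several of these parameters as follows. The topic reference and the message attribute used in the filter policy are hypothetical.

```yaml
  EventStorageBackupPipeline:
    Type: AWS::Serverless::Application
    Properties:
      Location:
        ApplicationId: <ARN copied from the app's page>        # placeholder, as above
        SemanticVersion: <version copied from the app's page>  # placeholder, as above
      Parameters:
        TopicArn: !Ref OrdersTopic                                    # hypothetical topic
        SubscriptionFilterPolicy: '{"event_type": ["order_placed"]}'  # hypothetical message attribute
        StreamPrefix: orders/
        StreamCompressionFormat: GZIP
        StreamBufferingIntervalInSeconds: 600
        LogLevel: WARNING
```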

Outputs

  1. BackupBucketName - Backup bucket name (only output if this app created a backup bucket).
  2. BackupBucketArn - Backup bucket ARN (only output if this app created a backup bucket).
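If the app created the bucket (that is, you did not pass BucketArn), the parent template can reference these outputs through the nested-app resource. For example, using the EventStorageBackupPipeline logical ID from the snippets above:

```yaml
Outputs:
  EventBackupBucketName:
    Description: Name of the S3 bucket created by the backup pipeline
    Value: !GetAtt EventStorageBackupPipeline.Outputs.BackupBucketName
```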

License Summary

This code is made available under a modified MIT license. See the LICENSE file.