Mozilla Push message log processors. These processors can be run under AWS Lambda or deployed to a machine. This project also contains common helper functions for parsing Heka protobuf messages and for running under AWS Lambda.
Currently this consists of a single processor, for the Push Messages API.
Processes push log messages to extract message metadata for registered crypto public keys.
Requires Redis.
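As a rough illustration of what this kind of processing involves (the function and field names below are hypothetical and do not reflect the project's actual schema or API), extracting metadata only for messages whose public key is registered might look like:

```python
# Hypothetical sketch: keep metadata only for messages whose crypto public
# key is registered. Field names ("crypto_key", "message_size", "timestamp")
# are illustrative, not the project's real log schema.

def extract_metadata(log_entry, registered_keys):
    """Return message metadata if the entry's public key is registered."""
    key = log_entry.get("crypto_key")
    if key not in registered_keys:
        return None  # skip messages for unregistered keys
    return {
        "crypto_key": key,
        "size": log_entry.get("message_size", 0),
        "timestamp": log_entry.get("timestamp"),
    }

entries = [
    {"crypto_key": "key-A", "message_size": 312, "timestamp": 1},
    {"crypto_key": "key-B", "message_size": 128, "timestamp": 2},
]
# Only "key-A" is registered, so only its entry yields metadata.
results = [m for m in (extract_metadata(e, {"key-A"}) for e in entries) if m]
```

In the real processor the registered keys would come from Redis rather than an in-memory set.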
Check out this repo; you will need Redis installed to test against.
Then:
$ virtualenv ppenv
$ source ppenv/bin/activate
$ pip install -r requirements.txt
$ python setup.py develop
After modifying push_processor/settings.js with appropriate settings, a Lambda-ready zipfile can be created with make:
$ make lambda-package
All files in the work directory are included except those listed in the ignore section of lambda.json. Remove from the work directory any other files that should not be included in the zipfile.
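For reference, the ignore section of lambda.json might look like the following (the entries shown are illustrative, not the project's actual list):

```json
{
  "ignore": [
    "*.pyc",
    "ppenv/",
    "tests/"
  ]
}
```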
The AWS handler you should set is lambda.handler, which will then call the Push Processor.
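As a rough sketch of what an entry point like this does (the processing function below is a placeholder; the real handler in this repo wires the event into the Push Processor), the handler receives the S3 event and hands each record to the processor:

```python
# Hypothetical sketch of a Lambda entry-point module. The process_record
# function is a stand-in for the real work (fetch the log file from S3,
# parse its Heka protobuf messages, update Redis).

def process_record(bucket, key):
    # Placeholder: return the record's location instead of processing it.
    return (bucket, key)

def handler(event, context):
    """AWS Lambda handler: unpack S3 event records and process each one."""
    results = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        results.append(process_record(s3["bucket"]["name"],
                                      s3["object"]["key"]))
    return results

# Abbreviated example of the S3 event shape Lambda delivers:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "push-logs"},
                "object": {"key": "2016/01/01/log.gz"}}}
    ]
}
out = handler(sample_event, None)
```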
The Push Processor needs access to both Redis and AWS S3. It will likely be necessary to run the Lambda function in a VPC that has access to the Elasticache Redis instance. Lambda requires the subnet(s) it runs in to have NAT Gateway access to the Internet. A rough overview of what this looks like:
- Define one or more 'private' subnets; these are where your Lambda function will run.
- Define one or more 'public' subnets; these are where the NAT Gateway(s) will be.
- Create the NAT Gateway(s) in the 'public' subnets.
- Create a route table for the 'public' subnet(s), and route '0.0.0.0/0' to the VPC Internet Gateway. The public subnets should then be explicitly associated with the new route table.
- Create a route table for the 'private' subnet(s), and route '0.0.0.0/0' to a NAT Gateway that you defined.
The Push Processor Lambda function should then be able to access both S3 and the Elasticache machines in the same 'private' subnet(s).
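The routing steps above could be scripted with the AWS CLI roughly as follows. This is a sketch of the steps, not a tested deployment script; every ID shown is a placeholder you must replace with your own VPC, gateway, and subnet IDs (create-route-table prints the new route table's ID, which the later commands need).

```shell
# Sketch only: substitute your real VPC, subnet, and gateway IDs.

# Route table for the 'public' subnet(s): 0.0.0.0/0 -> Internet Gateway
aws ec2 create-route-table --vpc-id vpc-11111111
aws ec2 create-route --route-table-id rtb-aaaaaaaa \
    --destination-cidr-block 0.0.0.0/0 --gateway-id igw-22222222
aws ec2 associate-route-table --route-table-id rtb-aaaaaaaa \
    --subnet-id subnet-public1

# Route table for the 'private' subnet(s): 0.0.0.0/0 -> NAT Gateway
aws ec2 create-route-table --vpc-id vpc-11111111
aws ec2 create-route --route-table-id rtb-bbbbbbbb \
    --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-33333333
aws ec2 associate-route-table --route-table-id rtb-bbbbbbbb \
    --subnet-id subnet-private1
```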