serverless api with aws lambda, aurora, and nodejs
- Trello
- C4 Diagrams
- Initialize a new environment
- Database Management
- Production
- Contributing
- Adding a new Method
- Stack & Toolchain
- List of Simplifications
- Next Steps
first time run
You'll need an aws account for all the infrastructure to live in. Terraform is used for idempotent environment deployments.
Once you have an iam user created, obtain the access key and secret key in the aws iam module.
Then create the following directory & file (with your credentials).
config/secrets-dev.env
export AWS_ACCESS_KEY_ID=AAAAAAAABBBBBBBCCCCCC
export AWS_SECRET_ACCESS_KEY=******************************
export AWS_DEFAULT_REGION=us-east-1
export DATABASE_PASS=supersecretpassword
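These are plain export statements, so the simplest way to load them is to source the file in your shell before running the commands below (assuming a bash-compatible shell):

source config/secrets-dev.env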
a useful tool for working with aws: the awscli
sudo apt update
sudo apt-get install python-dev
sudo pip install -U setuptools
curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
unzip awscli-bundle.zip
./awscli-bundle/install -b ~/bin/aws
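A quick sanity check that the bundled installer landed where expected:

~/bin/aws --version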
If you have the awscli, then you can run one command to create the perishable-dev-terraform-state bucket.
Keep in mind that I have already created the dev environment, and the aws s3 bucket namespace is global. This means you will have to use a different environment name if you follow this tutorial.
aws s3api create-bucket --bucket perishable-dev-terraform-state --region us-east-1
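An optional sanity check; head-bucket exits non-zero if the bucket does not exist or is unreachable:

aws s3api head-bucket --bucket perishable-dev-terraform-state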
infrastructure as code
If you do not have terraform installed, run make prerequisite to install it.
terraform -v should output Terraform v0.11.11.
warn: it is very important that you use at least this version of terraform.
package.json will mention the lambda-supported nodejs version, 8.10.
nvm can be used to swap versions quickly.
Once the proper versions of node and npm are installed, install the remaining dependencies with npm install.
wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.34.0/install.sh | bash
nvm install 8.10
npm install
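If npm install misbehaves, confirm the expected runtime is active first:

nvm use 8.10
node -v  # should print v8.10.x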
Remember to change the 'ENV' parameter to match your unique environment name.
The terraform commands will take about 20 minutes to complete because the datastore step takes a while to stand up an aurora cluster. The other three steps usually finish in seconds.
make deploy TARGET=core ENV=dev
make deploy TARGET=datastore ENV=dev
make deploy TARGET=gateway ENV=dev
make deploy TARGET=lambda ENV=dev
what you need to know to get the schema in place
Use these commands to get the endpoints for database access. You can also find this information in the aws console if you prefer.
db WRITER endpoint: aws rds describe-db-cluster-endpoints
ec2 proxy: aws ec2 describe-instances --query "Reservations[].Instances[][PublicIpAddress]"
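If you want just the writer address by itself, the describe call can be narrowed with a --query expression (a sketch using the field names the rds api returns):

aws rds describe-db-cluster-endpoints --query "DBClusterEndpoints[?EndpointType=='WRITER'].Endpoint" --output text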
note: You must have the private key mentioned here in order to connect to the ec2-proxy.
The database is secured behind a vpc; you must connect through an ssh proxy host in order to even reach it.
In a tool such as putty, datagrip, openssh, etc., these settings will allow you to connect to the proxy host (an openssh example follows the list).
proxy host: (ec2 proxy endpoint from step 1)
proxy user: ubuntu
auth type: private key (the private key mentioned in step 1)
port: 22
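As a concrete openssh example, those settings translate to a local port forward; the key path is illustrative, use the private key from step 1:

ssh -i ~/.ssh/perishable-proxy.pem -N -L 3306:<db WRITER endpoint>:3306 ubuntu@<ec2 proxy endpoint>

With that tunnel open, a database client on your machine can reach the cluster at 127.0.0.1:3306.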
Once in the same network as the database, connect with the following.
host: (make sure you put the db WRITER endpoint from step 1)
database: perishable
user: mom
password: (env var DATABASE_PASS in config/secrets-*.env)
port: 3306
Connect to the database cluster WRITER endpoint and run the scripts found in src/db/up, in order.
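From a shell, that step could look like the sketch below, assuming the ssh tunnel above is open and a mysql client is installed. The glob expands in lexical order, so confirm that matches the intended order of the scripts:

source config/secrets-dev.env
for f in src/db/up/*.sql; do
  mysql -h 127.0.0.1 -P 3306 -u mom -p"$DATABASE_PASS" perishable < "$f"
done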
This final step makes the system live!
In order for users (people with api keys who want access to perishable systems) to actually be able to hit the system, you'll need to associate one of the generated api keys with a user in the database. Then users will be able to authenticate with the perishable system.
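The generated key values can be listed with the cli, but the update itself depends on the schema in src/db/up, so treat the query below as a hypothetical sketch; the user table and api_key column are assumptions, not the real schema:

aws apigateway get-api-keys --include-values --query "items[].[name,value]" --output text
mysql -h 127.0.0.1 -P 3306 -u mom -p"$DATABASE_PASS" perishable -e "UPDATE user SET api_key = '<one of the key values>' WHERE id = 1;"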
Changing the system: adding a new method or updating an existing one.
Tests can be invoked with npm test or make test.
Changes made to infrastructure will require a deploy command in order to take effect.
ex:
- dev makes changes to ./infra/core/network.tf
make deploy TARGET=core ENV=dev
- changes are reflected in aws
Changes made to api methods will require a lambda redeploy.
ex:
- dev makes changes to ./src/api/get-unit-id.js
make deploy TARGET=lambda ENV=dev
- changes are live right away; all new requests pass through the new function
The scripts here are for record keeping and for standing up a new environment. Synchronizing the scripts here with the environment is currently a manual effort.
ex:
- dev makes changes to ./src/db/up/unit.sql
- dev connects to the database and runs the alter
- changes are live right away; all new queries run against the updated schema
what to expect
In ./infra/gateway/api.tf, the rules of access to the api are defined. Here are some notable takeaways from the current environment:
a. There is a limit of 1000 hits per day per key.
b. There are two keys that are generated and two fake companies. Each of these keys is to be associated with a member from a different org. (You must manually write the update query to make the association, as sketched in the production step above.)
c. No vanity url exists; instead aws generates one for us. https://hgembpmlo6.execute-api.us-east-1.amazonaws.com/v1
d. While the api is itself secured with https, the demo page is a simple static site over http. A non-issue, but should be noted.
e. Since no one is using this system, the first time you hit a lambda it may need to "heat up". This can take 1-3 seconds; after that "heat up", the methods are lightning fast.
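To check a key end to end, hit the generated url with the x-api-key header that api-gateway expects; the method path here is illustrative, see doc/api-spec.yml for the real routes:

curl -H "x-api-key: <your api key>" "https://hgembpmlo6.execute-api.us-east-1.amazonaws.com/v1/<method>"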
There are a few places that need to be touched in order to create a new method (a scaffolding sketch follows the list).
src/api/new-method.js
spec/new-method.spec.js
infra/lambda/new-method.tf
infra/gateway/oas-integrations.yml
doc/api-spec.yml
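One way to start is to copy an existing method as a template. A sketch, assuming get-unit-id has counterparts in spec/ and infra/lambda/ that follow the same naming pattern:

cp src/api/get-unit-id.js src/api/new-method.js
cp spec/get-unit-id.spec.js spec/new-method.spec.js
cp infra/lambda/get-unit-id.tf infra/lambda/new-method.tf
# then register the route in infra/gateway/oas-integrations.yml
# and describe it in doc/api-spec.yml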
Once development is satisfactory, deploy:
make deploy TARGET=gateway ENV=dev
make deploy TARGET=lambda ENV=dev
Finally, remember to make the endpoint public by creating a new deployment for the v1 stage in api-gateway (either with a command or in the aws console):
aws apigateway create-deployment --rest-api-id <value> --stage-name v1
The name suggests four diagrams; however, I think the first two diagrams in the C4 spec are the most useful, so here they are.