This is an opinionated template for a FastAPI-based Serverless Framework microservice running on AWS Lambda.
This template supports being run:
- Locally using uvicorn
- Through Docker with uvicorn (optionally run through Gunicorn)
- As a Serverless Framework application deployed to AWS Lambda
If you want to skip down to the Get Started section, here you go!
- FastAPI (Web framework)
- Starlette (ASGI toolkit FastAPI builds on)
- pydantic (Data validation)
- Mangum (ASGI adapter for Lambda)
- Loguru (Logging)
- uvicorn
- Gunicorn
- Serverless Framework
- Serverless Python Requirements (Dependency management)
- Docker
- AWS (Cloud provider)
- Lambda (Serverless host)
- IAM (AWS permissions)
- CloudFormation (Syntax used in Serverless config)
- API Gateway
- CLI
- pre-commit (pre-commit hook management)
- pre-commit-hooks (Out of the box hooks)
- Black (Code formatting)
- isort (Import sorting)
- Flake8 (Linting)
- mypy (Type checking)
- Python 3.9
- venv (Virtual environment management)
- pip (Python package management)
- npm (Node package management)
You're going to need a couple of prerequisites:
To find the version of your Python installation run:
~ python3 --version
Python 3.9.13
Note: For Windows users, always replace `python3` with `py` or `python`
First, fork this repo:
- Navigate to this repo on GitHub
- In the top-right corner of the page, click "Fork"
If you wish to rename your fork, do it now.
Then, clone the fork to your local device:
- Navigate to your fork
- Above the list of files, click the green "Code" button
- Either copy the link manually or click the copy icon next to it
- Open your favorite terminal and `cd` into the directory you want your project located
- Run `git clone [Link You Copied]`
- `cd` into the newly created directory
It's highly recommended to use a virtual environment to help with Python versioning and dependency hell (xkcd). I'm not going to talk about them too much, but a great article can be found here.
To create your virtual environment run:
~ python3 -m venv venv
On MacOS and Linux run the following to activate:
~ source venv/bin/activate
On Windows run:
> venv\Scripts\activate.bat
On both platforms, if you wish to deactivate the virtual environment run:
(venv) ~ deactivate
Every time you open a new terminal, you'll need to activate the virtual environment.
To start, you'll need to install the provided dependencies. This can be done by running:
(venv) ~ pip install -r dev-requirements.txt
Feel free to modify the layout of the repo as much as you want, but the given structure is as follows:
```
app/
├── __init__.py
├── main.py
├── dependencies.py
├── middleware.py
├── routers/
│   └── users.py
├── models/
│   └── user.py
└── stores/
    └── user.py
tests/
├── test_main.py
└── routers/
    └── test_users.py
serverless.yml
requirements.txt
dev-requirements.txt
```
Note: all the files with "user" in the name are files to demonstrate the recommended structure. They form two endpoints defined in `routers/users.py`.
This file structure is effectively an extension of the recommended file structure for "bigger applications" which can be found here.
`__init__.py` defines and initializes the app configuration.
`main.py` defines the FastAPI application, adds middleware, includes routers, and creates the Mangum handler.
`dependencies.py` defines... dependencies! This is where you can put common parameters or basic authentication. If necessary, this module can be split up into a package.
`middleware.py` is where custom middleware can be placed. The middleware used to log endpoint execution time is defined here.
`routers/` is for modules defining routers (pretty self-explanatory). Again, this can be expanded into even more nested packages, but at that point you might be leaving "microservice" territory.
`models/` is the space to define input, output, and database models. Once this grows it could be split into `models/db/` and `models/io/` if desired.
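The input/output split typically means separate pydantic models, along these lines (hypothetical fields; the template's `models/user.py` may differ):

```python
from pydantic import BaseModel


# Input model: what the client sends when creating a user.
class UserIn(BaseModel):
    name: str
    email: str


# Output model: what the API returns (no sensitive or internal fields).
class UserOut(BaseModel):
    id: int
    name: str
```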
`stores/` is the space to define database or "store" interfaces and corresponding wrapper classes. If that doesn't make sense right now, take a look at `stores/user.py` and how it's used in `routers/users.py`.
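The idea is that routers talk to a small wrapper class rather than to the database directly. A toy in-memory version (the template's `stores/user.py` wraps a real backing store instead) might look like:

```python
from typing import Optional


# Hypothetical in-memory store illustrating the "store interface" idea.
# Swapping the backing storage only requires changing this class.
class UserStore:
    def __init__(self) -> None:
        self._users: dict[int, dict] = {}
        self._next_id = 1

    def create(self, name: str) -> dict:
        user = {"id": self._next_id, "name": name}
        self._users[self._next_id] = user
        self._next_id += 1
        return user

    def get(self, user_id: int) -> Optional[dict]:
        return self._users.get(user_id)
```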
`tests/` should mimic the file structure of `routers/` when defining tests. To see how to use the FastAPI testing system, look here.
Now, you might want to spend a little bit of time starting at `main.py` and looking through the code to see how it's structured in practice. Once you're done, and you want to delete the example code:
- Delete the files `app/routers/users.py`, `app/models/user.py`, `app/stores/user.py`, and `tests/routers/test_users.py`
- Delete the parts involving "user(s)" in `app/main.py` and `app/dependencies.py`
Now would be a good time to replace the first line of `serverless.yml` with `service: [Insert App Name Here]`.
To run your microservice locally, you either need to create a new `.env.local` file for your local configuration, or use a `.env` file for an existing stage. To use an environment file, the environment variable `STAGE` must be set to the stage of the file. For example, if you want to use the `.env.staging` environment, run the following on MacOS and Linux:
~ export STAGE=staging
On Windows:
> set STAGE=staging
Then run uvicorn from the root of your project using:
~ uvicorn app.main:app --reload
This will host your API on `localhost` bound to port `8000` by default. When you update and save a file, it will automatically reload.
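The stage-to-file mapping described above can be sketched in a few lines. This is a hypothetical helper for illustration; the template's actual configuration loading (in `app/__init__.py`) may differ:

```python
import os


# Hypothetical helper: pick the env file based on the STAGE variable,
# falling back to the local stage when STAGE is unset.
def env_file_for_stage(default: str = "local") -> str:
    stage = os.getenv("STAGE", default)
    return f".env.{stage}"
```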
While this template primarily supports running serverlessly, it can also be run with Docker. There are two independent Dockerfiles to support this: one where uvicorn runs on its own, for use on a cluster, and one which runs multiple uvicorn workers through Gunicorn, for use on a single server or locally.
In short, if you have some sort of cluster of machines running Docker containers, you will likely want to create multiple Docker containers instead of running multiple uvicorn instances in one. To do this you can use the Dockerfile at `docker/cluster/Dockerfile`.
With Docker running, build the image by running:
~ docker build -t my-cluster-image -f docker/cluster/Dockerfile ./
Note: You can change the name of the image by replacing `my-cluster-image` with whatever you want.
Now you have an image which you can run by whatever mechanism you wish. You will have to expose port 80 and specify the `.env` file you want to use. To run it detached locally with the `.env.staging` file run:
~ docker run -d -p 80:80 --env-file .env.staging my-cluster-image
If you are running your Docker container on a single server or locally, you will likely want to use the Dockerfile at `docker/server/Dockerfile`. This file uses the base Dockerfile by tiangolo found here. It will use Gunicorn to start a number of uvicorn workers, determined automatically by the number of cores the system has.
With Docker running, build the image by running:
~ docker build -t my-server-image -f docker/server/Dockerfile ./
Note: You can change the name of the image by replacing `my-server-image` with whatever you want.
Now you have an image which can again be run any way you wish. Just make sure you expose port 80 and specify the `.env` file you want to use. To run it detached locally with the `.env.staging` file run:
~ docker run -d -p 80:80 --env-file .env.staging my-server-image
To deploy your application to Lambda, first install the latest `serverless` CLI. This can be done by running:
~ npm install -g serverless
You will also need to install any other serverless dependencies with:
~ npm install
Then, you need to get your AWS key and secret from the dashboard. A guide to do that can be found here. Configure them with the `serverless` CLI by running:
~ serverless config credentials --provider aws --key [Insert Key Here] --secret [Insert Secret Here]
Now you're ready to deploy! With Docker running, run:
~ sls deploy
The first time, this command can take up to 15 minutes to complete. Once it's done you can access your app at the link printed in the console.
The recommended code format is Black. isort is also run as a part of the pre-commit hooks by default. To save yourself a lot of effort, you can enable these to run on save in your editor or IDE. Details for VS Code are below, but a tutorial for your editor can often be found by Googling "automatically reformat code on save in [Insert Editor Name Here]".
Autoformatting in VS Code
If you don't already, you'll need an installation of VS Code with the Python plugin installed. You can find a guide to do that here. You'll also need to have set up your virtual environment as instructed in the Get Started section.
Open the Settings editor with the keyboard shortcut (⌘+, on macOS or Ctrl+, on Windows/Linux) or by going:
- On Windows/Linux - File > Preferences > Settings
- On macOS - Code > Preferences > Settings
In the search bar, enter "Python Formatting Provider" and select "black" from the dropdown menu. Then, search for "Editor: Format on Save" and enable it. Finally, enabling isort requires a bit more work. Search for "Editor: Code Actions On Save" and click "Edit in settings.json".
Add the following line:
"editor.codeActionsOnSave": {
"source.organizeImports": true
}
Done! Go ahead and try it out.
Using pre-commit is not required but heavily encouraged. It's an easy way to make sure that style is followed and simple bugs are found before code even makes it to a pull request. On commit, Black, isort, Flake8, and mypy are run, and if any changes are made or errors are raised, the commit will fail. Changes will then need to be staged and everything committed again.
To start using pre-commit with the provided config (located in `.pre-commit-config.yaml`) run:
(venv) ~ pre-commit install --install-hooks
Note: The `--install-hooks` flag is optional, but you save time on your first commit by installing the hooks now.
Note 2: The installed pre-commit hooks do not get committed. Everybody working on the repo will have to run the above command.
Other Useful Commands
- `pre-commit run` - Run hooks on currently staged files
- `pre-commit run --all-files` - Run hooks on all files in repo
- `pre-commit autoupdate` - Auto-update pre-commit config to the latest repos' versions
mypy is a Python static type checker that is run by default in pre-commit. However, it can be a bit of a double-edged sword. It has a lot of benefits, including being able to catch bugs that wouldn't be found even with 100% code coverage. Personally, I use it on every project with `strict` mode enabled. The default configuration is relatively lenient, but if you find it too pedantic for your needs, you can disable some features in `setup.cfg` or remove it entirely from `.pre-commit-config.yaml`. If you wish to enable additional features, those can be added in `setup.cfg`.