Welcome to Pixa, a sophisticated cloud-based image storage and management solution. This repository houses the deployment architecture, storage units, and other essential components that power Pixa. Explore the seamless deployment process, microservices structure, and CI/CD automation.
- Azure Service: Azure Database for MySQL flexible server
  - Used by Microservice: auth-service
  - Stores user credentials for authentication
- Azure Service: Azure Storage Account File Service
  - Used by Microservices: storage-account-service, storage-monitor-service
  - Stores images uploaded by users (see the upload sketch after this list)
- Azure Service: Azure Storage Account Blob Service
  - Used by Microservice: usage-monitor-service
  - Stores the logs that image storage generates on every event; these are written here automatically using Azure Diagnostic Tools
- Azure Service: Azure Storage Account File Service
  - Used by Microservice: all
  - Persistent storage mounted in each container of the pod to store application logs
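For illustration, here is a minimal sketch of how the storage-account-service might upload a user's image to that File Service; the connection-string environment variable, share name, and directory layout are assumptions for the example, not values taken from the repository:

import os
from azure.storage.fileshare import ShareFileClient  # azure-storage-file-share package

def upload_image(username: str, filename: str, data: bytes) -> None:
    # Connection string and share name are assumed for this sketch
    conn_str = os.environ["AZURE_STORAGE_CONNECTION_STRING"]
    file_client = ShareFileClient.from_connection_string(
        conn_str=conn_str,
        share_name="images",
        file_path=f"{username}/{filename}",  # parent directory is assumed to already exist
    )
    # Upload the raw image bytes to the file share
    file_client.upload_file(data)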
- All microservices are developed using FastAPI
- They run on different ports and communicate within the pod over localhost (see the sketch after this list)
- The node's public IP is attached only to the controller-service through a LoadBalancer Service, exposing it to external traffic
- A single pod in the Deployment consists of the following 5 services, whose images are pulled from the GitHub Container Registry
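As an illustration of that localhost communication, the controller-service could reach the auth-service in the same pod roughly like this; the endpoint path and payload are hypothetical, not taken from the repository:

import requests

AUTH_SERVICE_URL = "http://localhost:5000"  # auth-service port inside the pod

def verify_credentials(username: str, password: str) -> bool:
    # Hypothetical endpoint; the real auth-service routes may differ
    response = requests.post(
        f"{AUTH_SERVICE_URL}/login",
        json={"username": username, "password": password},
        timeout=5,
    )
    return response.status_code == 200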
Running Locally
cd microservices/auth-service
pip install -r requirements.txt
uvicorn main:app --host 0.0.0.0 --port 5000
Building Docker Image
cd microservices/auth-service
docker build -t pixa-auth-service .
docker run -p 5000:5000 pixa-auth-service
Using Remote Image
docker pull ghcr.io/geetu040/pixa-auth-service:latest
docker run -p 5000:5000 ghcr.io/geetu040/pixa-auth-service:latest
Definition in Deployment
kind: Deployment
...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - name: pixa-controller-service
        image: ghcr.io/geetu040/pixa-controller-service
        ports:
        - containerPort: 80
        ...
      - name: pixa-auth-service
        image: ghcr.io/geetu040/pixa-auth-service
        ports:
        - containerPort: 5000
        ...
      - name: pixa-storage-account-service
        image: ghcr.io/geetu040/pixa-storage-account-service
        ports:
        - containerPort: 5001
        ...
      - name: pixa-storage-monitor-service
        image: ghcr.io/geetu040/pixa-storage-monitor-service
        ports:
        - containerPort: 8000
        ...
      - name: pixa-usage-monitor-service
        image: ghcr.io/geetu040/pixa-usage-monitor-service
        ports:
        - containerPort: 8001
        ...
A total of 3 replicas have been created for the deployment, which means there will always be 3 pods running on the node. If one pod fails, the other 2 will still be available to serve traffic. Remember that each pod contains 5 containers (the 5 microservices).
apiVersion: apps/v1
kind: Deployment
...
spec:
  ...
  replicas: 3
This service serves the following functions:
- It balances the load between the multiple replicas of the deployment, distributing the traffic
- It balances at Layer 4, the transport layer
- It maps the node's public IP address to port 80 of the pod, which is the controller-service, thus allowing external traffic to communicate with the pod
apiVersion: v1
kind: Service
metadata:
  name: pixa-service
spec:
  selector:
    app: pixa-deploy
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
All the application secrets and variables are stored in a ConfigMap and are accessible as environment variables.
Declaration: manifests/configmap.yaml
kind: ConfigMap
metadata:
  name: pixa-config
data:
  PIX_DB_HOST: 'your-db-host-placeholder'
  PIX_DB_DATABASE: 'your-db-name-placeholder'
  PIX_DB_USER: 'your-db-user-placeholder'
  PIX_DB_PASSWORD: 'your-db-password-placeholder'
  ...
Referred in Deployment: manifests/deployment.yaml
kind: Deployment
...
containers:
- name: pixa-controller-service
  image: ghcr.io/geetu040/pixa-controller-service
  envFrom:
  - configMapRef:
      name: pixa-config
Usage in Containers: microservices/auth-service/config.py
import os
HOST = os.environ.get("PIX_DB_HOST")
DATABASE = os.environ.get("PIX_DB_DATABASE")
USER = os.environ.get("PIX_DB_USER")
PASSWORD = os.environ.get("PIX_DB_PASSWORD")
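For illustration, the auth-service could use these values to connect to the Azure Database for MySQL flexible server. This sketch assumes the pymysql driver, which may not be the driver the service actually uses:

import pymysql  # assumed driver for this sketch
from config import HOST, DATABASE, USER, PASSWORD

def get_connection():
    # Open a connection using the values injected from the ConfigMap
    return pymysql.connect(
        host=HOST,
        user=USER,
        password=PASSWORD,
        database=DATABASE,
    )

Note that Azure Database for MySQL flexible server typically enforces TLS, so a real connection will also need the appropriate SSL options.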
A persistent volume is mounted in each container to store application logs
If an application or container fails or stops, the logs remain saved in the persistent storage, making it easy to find and fix a bug from a complete record of the failure.
Declaration: manifests/storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pixa-sc
provisioner: kubernetes.io/azure-file
mountOptions:
- dir_mode=0777
- file_mode=0777
- uid=1000
- gid=1000
- mfsymlinks
- cache=strict
Creating Claim: manifests/persistentvolumeclaim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pixa-pvc
spec:
  accessModes:
  - ReadWriteMany # ReadWriteOnce, ReadOnlyMany or ReadWriteMany
  # storageClassName: hostpath
  storageClassName: pixa-sc
  resources:
    requests:
      storage: 1Gi
Referred in Deployment: manifests/deployment.yaml
kind: Deployment
...
volumes:
- name: volume
  persistentVolumeClaim:
    claimName: pixa-pvc
containers:
- name: pixa-controller-service
  image: ghcr.io/geetu040/pixa-controller-service
  ...
  volumeMounts:
  - mountPath: "/mnt"
    name: volume
Usage in Containers for saving logs: microservices/auth-service/Dockerfile
FROM python:3.9-slim
...
CMD ["sh", "-c", "uvicorn main:app --host 0.0.0.0 --port 5000 >> /mnt/auth-service.log"]
User credentials are authenticated, and JWT tokens sent in request headers are used to keep users logged in.
from jose import jwt

def create_token(username: str):
    to_encode = {'sub': username}
    token = jwt.encode(to_encode, SECRET_KEY, algorithm=ALGORITHM)
    return token

def decode_token(token: str):
    payload = jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])
    username = payload.get("sub", None)
    return username
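To show how the decode_token function above could be wired into an endpoint, here is a minimal sketch using FastAPI's OAuth2 bearer-token dependency; the token URL, route, and error detail are assumptions for illustration:

from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import OAuth2PasswordBearer
from jose import JWTError

app = FastAPI()
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="login")  # hypothetical token URL

def get_current_user(token: str = Depends(oauth2_scheme)) -> str:
    # Resolve the username from the bearer token, rejecting invalid tokens
    try:
        username = decode_token(token)
    except JWTError:
        username = None
    if username is None:
        raise HTTPException(status_code=400, detail="Invalid token")
    return username

@app.get("/me")
def read_me(username: str = Depends(get_current_user)):
    return {"username": username}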
Threading is used where the controller needs to communicate with other services to fetch data or create a user concurrently.
microservices/controller-service/service.py
from concurrent.futures import ThreadPoolExecutor

def create_user(username, password):
    with ThreadPoolExecutor() as executor:
        # Execute the functions concurrently
        creater1_future = executor.submit(
            create_user_in_auth_service,
            username, password
        )
        creater2_future = executor.submit(
            create_user_in_storage_account_service,
            username, password
        )

def load_user_data(username):
    with ThreadPoolExecutor() as executor:
        # Execute the functions concurrently
        storage_usage_future = executor.submit(get_storage_usage, username)
        bandwidth_usage_future = executor.submit(get_bandwidth_usage, username)
        images_links_future = executor.submit(get_images_links, username)
        # Get results from the futures
        storage_usage = storage_usage_future.result()
        bandwidth_usage = bandwidth_usage_future.result()
        images_links = images_links_future.result()
All endpoints of each service return valid JSON responses after error handling (see the sketch after this list).
- Status code 200 is used for valid responses
- Status code 400 when there is a failure on the client side
- Status code 500 on a server-side error
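A minimal sketch of that pattern in a FastAPI endpoint (the route, validation check, and messages are illustrative assumptions, reusing get_images_links from the threading example above):

from fastapi import FastAPI
from fastapi.responses import JSONResponse

app = FastAPI()

@app.get("/images/{username}")
def list_images(username: str):
    try:
        if not username.isalnum():
            # Client-side problem, e.g. a malformed username
            return JSONResponse(status_code=400, content={"error": "invalid username"})
        links = get_images_links(username)
        return JSONResponse(status_code=200, content={"images": links})
    except Exception as exc:
        # Unexpected server-side failure
        return JSONResponse(status_code=500, content={"error": str(exc)})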
From building Docker images, to pushing them to the GitHub Container Registry, to deploying them on Azure Kubernetes Service, everything is automated using GitHub Actions. The process involves the following steps:
- Save GitHub credentials in repository secrets
- Login to GitHub Container Registry
- Build Docker images
- Push Docker images to GitHub Container Registry
workflow manifest - .github/workflows/ghcr-to-aks-.yaml
github-container-registry:
  runs-on: ubuntu-latest
  steps:
  - name: Checkout Repository
    uses: actions/checkout@v2
  - name: Login to GitHub Container Registry
    run: echo "${{ secrets.TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
  - name: pixa-auth-service
    run: |
      docker build -t ghcr.io/geetu040/pixa-auth-service:latest microservices/auth-service/
      docker push ghcr.io/geetu040/pixa-auth-service:latest
  ...
...
- Save Azure credentials in repository secrets
- Set up kubelogin for non-interactive login
- Get the K8s context
- Deploy the manifests
azure-kubernetes-services:
  runs-on: ubuntu-latest
  needs: [github-container-registry]
  steps:
  - uses: actions/checkout@v3
  - name: Azure login
    uses: azure/login@v1
    with:
      creds: '${{ secrets.AZURE_CREDENTIALS }}'
  - name: Set up kubelogin for non-interactive login
    uses: azure/use-kubelogin@v1
    with:
      kubelogin-version: "v0.0.25"
  - name: Get K8s context
    uses: azure/aks-set-context@v3
    with:
      resource-group: pixa-resource
      cluster-name: pixa-cluster
      admin: "false"
      use-kubelogin: "true"
  - name: Deploys application
    uses: Azure/k8s-deploy@v4
    with:
      action: deploy
      manifests: |
        manifests/storageclass.yaml
        manifests/service.yaml
        manifests/persistentvolumeclaim.yaml
        manifests/deployment.yaml