- create a new VPC with CIDR 172.16.0.0/16
- create two new subnets with CIDR 172.16.1.0/24 and 172.16.2.0/24 in two different availability zones
- create 5 new EC2 instances based on Ubuntu 18.04 (Bionic)
- deploy the following Java application on these instances
- create a load balancer for the Java application on port 80
- set up Route 53 to host a CNAME record for the ELB URL
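A minimal boto3 sketch of the network and compute steps above, assuming a region and AZ names for illustration (the Ubuntu 18.04 AMI ID is a placeholder; the load balancer and Route 53 steps are omitted):

```python
# Hedged sketch: region, AZ names, and AMI ID are assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# VPC with two subnets in different availability zones.
vpc = ec2.create_vpc(CidrBlock="172.16.0.0/16")["Vpc"]
subnets = [
    ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock=cidr, AvailabilityZone=az)["Subnet"]
    for cidr, az in [("172.16.1.0/24", "eu-west-1a"), ("172.16.2.0/24", "eu-west-1b")]
]

# Spread 5 Ubuntu 18.04 instances across the two subnets.
for i in range(5):
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder: resolve the Bionic AMI per region
        InstanceType="t3.micro",          # instance type is an assumption
        MinCount=1, MaxCount=1,
        SubnetId=subnets[i % 2]["SubnetId"],
    )
```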
- creates a new Docker container running nginx and proxies all requests on /<container_name> to the appropriate container and port (see the sketch after this list)
- there is only one nginx container running at all times
- if the nginx container is down, it needs to be started
- when a new application container is created the nginx configuration is updated to proxy requests to the new container
- creates a new Docker container running Java and deploys demo-0.0.1-SNAPSHOT.jar from the previous step
- the Docker container publishes container port 8080 on a free host port between 8000 and 8200
- the container name is a unique identifier
- the container is only created if it does not already exist
- there can be multiple Docker containers with different names running at the same time
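A rough sketch of this container-management logic using the Docker SDK for Python; the container and image names are assumptions, and regenerating the nginx configuration is left as a comment:

```python
# Hedged sketch: the "nginx-proxy" and "demo-app" names are assumptions.
import docker

client = docker.from_env()

def ensure_nginx():
    """Keep exactly one nginx container; start it if it is down."""
    try:
        nginx = client.containers.get("nginx-proxy")
        if nginx.status != "running":
            nginx.start()
    except docker.errors.NotFound:
        client.containers.run("nginx", name="nginx-proxy",
                              detach=True, ports={"80/tcp": 80})

def create_app_container(name):
    """Create a Java app container only if it does not already exist,
    publishing container port 8080 on a free host port in 8000-8200."""
    try:
        return client.containers.get(name)  # already exists, nothing to do
    except docker.errors.NotFound:
        pass
    used = {int(binding[0]["HostPort"])
            for c in client.containers.list()
            for binding in c.ports.values() if binding}
    port = next(p for p in range(8000, 8201) if p not in used)
    container = client.containers.run("demo-app",  # image holding demo-0.0.1-SNAPSHOT.jar
                                      name=name, detach=True,
                                      ports={"8080/tcp": port})
    # At this point the nginx config would be regenerated to proxy
    # /<name> to the chosen port, and nginx reloaded.
    return container
```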
- Implement a piece of software exposing a JSON document:
{
"id": "1",
"message": "Hello world"
}
when visited with an HTTP client (see the sketch after this list)
- Dockerize the application
- Deploy the application to a kind Kubernetes cluster
- Create a second application that calls the first and displays the message text reversed
- Deploy the Docker image with a GitHub workflow, and log in to the GitHub registry using a Kubernetes Secret config
- Update the application with kubectl in a script
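A minimal FastAPI sketch of the two applications, assuming FastAPI as the framework (the task does not mandate one) and a hypothetical in-cluster service name for the first app:

```python
# first_app.py -- serves the required JSON document.
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_message():
    return {"id": "1", "message": "Hello world"}
```

```python
# second_app.py -- calls the first app and reverses the message text.
# The "first-app" service URL is an assumption for a kind cluster setup.
import httpx
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def reversed_message():
    doc = httpx.get("http://first-app:8080/").json()
    return {"id": doc["id"], "message": doc["message"][::-1]}
```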
- Please consider the work you submit here a small, but production-ready deliverable, in the sense that you are happy to ship such code, tests and documentation.
- Write a Roman numeral converter that converts integer numbers into Roman numerals:
func(36)
Output: "XXXVI"
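One possible implementation, using greedy subtraction over value/symbol pairs:

```python
def to_roman(n: int) -> str:
    """Convert a positive integer into a Roman numeral."""
    pairs = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
             (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
             (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for value, symbol in pairs:
        count, n = divmod(n, value)  # how many times this symbol fits
        out.append(symbol * count)
    return "".join(out)

assert to_roman(36) == "XXXVI"
```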
jupyter nbconvert --execute task05/go_tuturial.ipynb --to html
performs occasional test queries for the hostname "google.com" against each of the DNS servers configured in /etc/resolv.conf
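A hedged sketch of this check using dnspython (pip install dnspython); the query interval is an assumption:

```python
import time
import dns.resolver

def nameservers_from_resolv_conf(path="/etc/resolv.conf"):
    """Return the nameserver addresses listed in resolv.conf."""
    with open(path) as f:
        return [line.split()[1] for line in f if line.startswith("nameserver")]

while True:
    for server in nameservers_from_resolv_conf():
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server]
        try:
            answer = resolver.resolve("google.com", "A")
            print(server, "OK", [r.address for r in answer])
        except Exception as exc:
            print(server, "FAILED", exc)
    time.sleep(60)  # "occasional" -- the interval is an assumption
```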
- Technical scopes
github action
docker-compose
pytest
fastapi
minikube
make
Use Helm to provision MongoDB
Build an API that serves GET requests, create a Docker image on minikube, then create manifests to bring up the Service and Deployment.
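As one instance of the pytest scope above, a minimal test of the GET endpoint using FastAPI's TestClient; the app.main module path is an assumption:

```python
# test_api.py -- run with `pytest`; assumes the app lives in app/main.py.
from fastapi.testclient import TestClient
from app.main import app

client = TestClient(app)

def test_get_root():
    response = client.get("/")
    assert response.status_code == 200
```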
This program is going to be provided JSON lines as input on stdin, and should provide a JSON line of output for each one; imagine this as a stream of events arriving at the authorizer.
$ cat operations
{"account": {"active-card": true, "available-limit": 100}} {"transaction": {"merchant": "Burger King", "amount": 20, "time": "2019-02-13T10:00:00.000Z"}}
{"transaction": {"merchant": "Habbib's", "amount": 90, "time": "2019-02-13T11:00:00.000Z"}}
$ authorize < operations
{"account": {"active-card": true, "available-limit": 100}, "violations": []} {"account": {"active-card": true, "available-limit": 80}, "violations": []} {"account": {"active-card": true, "available-limit": 80}, "violations": ["insufficient-limit"]}
Use FastAPI and SQLAlchemy to create a URL shortener similar to https://goo.gl/
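A rough sketch of the shortener's data layer and routes; the table name, hex-encoded slug, and SQLite backend are all assumptions:

```python
from fastapi import FastAPI
from fastapi.responses import RedirectResponse
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Url(Base):
    __tablename__ = "urls"  # table name is an assumption
    id = Column(Integer, primary_key=True)
    slug = Column(String, unique=True, index=True)
    target = Column(String)

engine = create_engine("sqlite:///shortener.db")
Base.metadata.create_all(engine)
app = FastAPI()

@app.post("/shorten")
def shorten(target: str):
    with Session(engine) as session:
        url = Url(target=target)
        session.add(url)
        session.flush()                 # assigns the primary key
        url.slug = format(url.id, "x")  # hex id as the short slug
        session.commit()
        return {"slug": url.slug}

@app.get("/{slug}")
def redirect(slug: str):
    with Session(engine) as session:
        url = session.query(Url).filter_by(slug=slug).one()
        return RedirectResponse(url.target)
```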
Use Terraform to create a private S3 bucket and an authorized IAM user to upload files. Use whitelistIPs to grant the user's external public IP address permission to access the bucket.
⚠️ When you provision this task, you cannot depend on an STS token, because STS lacks support for creating a new IAM user. ⚠️
# sample for whitelist to access s3 bucket
whitelistIPs = ["127.0.0.1/32"]
# get access_key from ssm
aws ssm get-parameter --name /system_user/backup-dev-uploader/access_key_id --with-decryption | jq .Parameter.Value
# get secret_key from ssm
aws ssm get-parameter --name /system_user/backup-dev-uploader/secret_access_key --with-decryption | jq .Parameter.Value
# terraform output
bucket_domain_name = "backup-dev-upload-task14.s3.amazonaws.com"
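A hedged boto3 sketch of how an uploader script might consume these SSM parameters and the bucket from the terraform output:

```python
import boto3

ssm = boto3.client("ssm")

def param(name):
    """Fetch a decrypted SSM parameter value."""
    return ssm.get_parameter(Name=name, WithDecryption=True)["Parameter"]["Value"]

s3 = boto3.client(
    "s3",
    aws_access_key_id=param("/system_user/backup-dev-uploader/access_key_id"),
    aws_secret_access_key=param("/system_user/backup-dev-uploader/secret_access_key"),
)
# Bucket name taken from the terraform output above; the file is an example.
s3.upload_file("backup.tar.gz", "backup-dev-upload-task14", "backup.tar.gz")
```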
You will find two applications: a Golang-based and a Java-based application. Both need to be containerized according to industry best practices. The Golang application needs to be compiled from source, while the Java application is delivered as a pre-built jar file, runnable using Java 11. Both provide an HTTP service, binding to all interfaces on port 8080, with the same endpoints:
| Route    | Description                                          |
|----------|------------------------------------------------------|
| /        | A static site. Should not appear in the final setup  |
| /hotels  | JSON object containing hotel search results          |
| /health  | Exposes the health status of the application         |
| /ready   | Readiness probe                                      |
| /metrics | Exposes Prometheus metrics of the application        |
Your challenge will be to provide a load balancer setup like the following:

```
                         +--- 30% of traffic ---> Java app
                         |
User ---> load balancer -+
                         |
                         +--- 70% of traffic ---> Go app
```
The traffic distribution should be as follows: 70% of the requests go to the application written in Golang, and 30% go to the application written in Java. Each HTTP response also needs to carry a custom header, called x-trv-heritage, which indicates the application that responded. Your implementation must be runnable on a machine using x86_64 CPU architecture and must be built on top of Kubernetes. One should be able to see, in some form, that the traffic distribution works as expected. As a bonus, you can show other metrics, such as CPU usage, memory utilization, and latency, to compare the two services. Your implementation should:
- Build both container images locally
- Find a solution to make them available to a Kubernetes cluster
- Do not push them to a public registry on the internet!
- Setup an ingress solution of your choice
- Deploy both workloads
- Wait for the readiness of the system
- Run 100 requests against / of the applications under test
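A small sketch for the last step, tallying the x-trv-heritage header over 100 requests; the ingress URL is a placeholder for whatever your setup exposes:

```python
from collections import Counter
import httpx

counts = Counter()
for _ in range(100):
    response = httpx.get("http://localhost/")  # ingress endpoint is an assumption
    counts[response.headers.get("x-trv-heritage")] += 1

print(counts)  # expect roughly 70 Go responses and 30 Java responses
```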
You are tasked with the creation of a small infrastructure stack on AWS:
* Deploy a redundant and scalable EKS cluster
* Deploy on the cluster a simple web server application, exposing a simple home page with a custom message on the public internet.
* Provide basic monitoring for your infrastructure *[Optional Task]*
* Increase the scalability of the stack *[Optional Task]*
* Provide cost estimations *[Optional Task]*
EKS:
- The cluster must consist of at least 3 Worker Nodes
- Worker Nodes should be distributed across at least 2 AZs
- Worker Nodes should be assigned to at least 2 Worker Groups
- You must run the latest version of EKS (1.21)
- Deploy, using Helm, a web server of your choice on the above running cluster
- The deployment should span the 2 AZs where the EKS nodes are spread, and have minimal redundancy
- Customise the web server to show a home page with a custom message like "Hello bot, welcome to your simple web page"
- Securely expose the page so it can be reached from the public internet
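A hedged sanity check for the node requirements above, using the official Kubernetes Python client against the current kubeconfig context:

```python
from collections import Counter
from kubernetes import client, config

config.load_kube_config()
nodes = client.CoreV1Api().list_node().items

# Count nodes per availability zone using the standard topology label.
zones = Counter(n.metadata.labels.get("topology.kubernetes.io/zone") for n in nodes)
assert len(nodes) >= 3, "expected at least 3 worker nodes"
assert len(zones) >= 2, "expected nodes spread across at least 2 AZs"
print(zones)
```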
- Demonstrate Kubernetes with the DigitalOcean provider, using the digitalocean_kubernetes_cluster and digitalocean_kubernetes_node_pool resources.
- Deploy the bitnami/nginx-ingress-controller Helm chart using the helm_release resource.
- A simple example presenting how helmfile can include values.yaml files from different directories to share common settings between landscapes.