## Table of Contents

- Introduction
- System Architecture
- What You'll Learn
- Technologies
- Getting Started
- Watch the Video Tutorial
## Introduction

This project serves as a comprehensive guide to building an end-to-end data engineering pipeline. It covers each stage from data ingestion to processing and finally to storage, utilizing a robust tech stack that includes Apache Airflow, Python, Apache Kafka, Apache Zookeeper, Apache Spark, and Cassandra. Everything is containerized using Docker for ease of deployment and scalability.
## System Architecture

The project is designed with the following components (a short sketch of how the streaming pieces fit together follows the list):
- Data Source: We use the randomuser.me API to generate random user data for our pipeline.
- Apache Airflow: Responsible for orchestrating the pipeline and storing fetched data in a PostgreSQL database.
- Apache Kafka and Zookeeper: Used for streaming data from PostgreSQL to the processing engine.
- Control Center and Schema Registry: Help with monitoring and schema management of our Kafka streams.
- Apache Spark: For data processing with its master and worker nodes.
- Cassandra: Where the processed data will be stored.
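To make that data flow concrete, here is a minimal sketch of how the Spark side might consume the Kafka stream and write processed user records into Cassandra. It is an illustration under assumptions, not the project's actual job: the topic name (`users_created`), broker address, keyspace, table, and connector versions are placeholders you would adapt to the real configuration.

```python
# Hypothetical Spark Structured Streaming job: Kafka -> Cassandra.
# Topic, broker address, keyspace, and table names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType

# Simplified schema of the random-user records flowing through Kafka.
user_schema = StructType([
    StructField("id", StringType(), False),
    StructField("first_name", StringType(), True),
    StructField("last_name", StringType(), True),
    StructField("email", StringType(), True),
])

spark = (
    SparkSession.builder
    .appName("user_stream_processor")
    # Kafka source and Cassandra sink connectors pulled in as packages (versions are placeholders).
    .config("spark.jars.packages",
            "org.apache.spark:spark-sql-kafka-0-10_2.12:3.4.1,"
            "com.datastax.spark:spark-cassandra-connector_2.12:3.4.1")
    .config("spark.cassandra.connection.host", "localhost")
    .getOrCreate()
)

# Read the raw Kafka stream and parse the JSON payload into typed columns.
users = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "users_created")          # assumed topic name
    .option("startingOffsets", "earliest")
    .load()
    .selectExpr("CAST(value AS STRING) AS value")
    .select(from_json(col("value"), user_schema).alias("data"))
    .select("data.*")
)

# Continuously append parsed users into a Cassandra table
# (the keyspace and table are assumed to already exist).
query = (
    users.writeStream
    .format("org.apache.spark.sql.cassandra")
    .option("checkpointLocation", "/tmp/spark_checkpoint")
    .option("keyspace", "spark_streams")           # assumed keyspace
    .option("table", "created_users")              # assumed table
    .start()
)
query.awaitTermination()
```

Each micro-batch is appended as it arrives, so the Cassandra table grows continuously while the stream runs.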
## What You'll Learn

- Setting up a data pipeline with Apache Airflow (see the DAG sketch after this list)
- Real-time data streaming with Apache Kafka
- Distributed synchronization with Apache Zookeeper
- Data processing techniques with Apache Spark
- Data storage solutions with Cassandra and PostgreSQL
- Containerizing your entire data engineering setup with Docker
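As a starting point for the Airflow piece, the sketch below shows one way a DAG could fetch a record from the randomuser.me API and store it in PostgreSQL, matching the architecture described above. The DAG id, schedule, connection id (`postgres_default`), and `users` table are assumptions for illustration, not the project's actual code.

```python
# Hypothetical Airflow DAG: fetch one random user and store it in PostgreSQL.
# DAG id, connection id ("postgres_default"), and the "users" table are assumptions.
from datetime import datetime

import requests
from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.postgres.hooks.postgres import PostgresHook


def fetch_and_store_user():
    # Pull one random user from the public API.
    response = requests.get("https://randomuser.me/api/", timeout=10)
    response.raise_for_status()
    user = response.json()["results"][0]

    # Insert a few fields into PostgreSQL via the configured Airflow connection
    # (the "users" table is assumed to exist already).
    hook = PostgresHook(postgres_conn_id="postgres_default")
    hook.run(
        """
        INSERT INTO users (first_name, last_name, email)
        VALUES (%s, %s, %s)
        """,
        parameters=(
            user["name"]["first"],
            user["name"]["last"],
            user["email"],
        ),
    )


with DAG(
    dag_id="user_ingestion",
    start_date=datetime(2023, 9, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="fetch_and_store_user",
        python_callable=fetch_and_store_user,
    )
```

In the full pipeline, this ingested data is then streamed onward through Kafka to the processing engine, as outlined in the System Architecture section.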
## Technologies

- Apache Airflow
- Python
- Apache Kafka
- Apache Zookeeper
- Apache Spark
- Cassandra
- PostgreSQL
- Docker
## Getting Started

- Clone the repository:

  ```bash
  git clone https://github.com/airscholar/e2e-data-engineering.git
  ```

- Navigate to the project directory:

  ```bash
  cd e2e-data-engineering
  ```

- Run Docker Compose to spin up the services:

  ```bash
  docker-compose up
  ```
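Once the services are up, you can optionally sanity-check the Kafka broker by publishing a test message from your host. The broker address (`localhost:9092`) and topic name below are assumptions about how the compose file exposes Kafka, and the snippet uses the kafka-python package, so adjust both to your setup.

```python
# Hypothetical smoke test: publish one JSON message to the Kafka broker.
# Broker address and topic name are assumptions about the compose setup.
import json

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Send a single test record and block until the broker acknowledges it.
producer.send("users_created", {"first_name": "Test", "email": "test@example.com"})
producer.flush()
producer.close()
print("Test message published")
```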
For more detailed instructions, please check out the video tutorial linked below.
## Watch the Video Tutorial

For a complete walkthrough and practical demonstration, check out our YouTube Video Tutorial.