Run PySpark code in the 'cloud' with the Amazon Web Services (AWS) Elastic MapReduce (EMR) service in a few simple steps with this cookiecutter project template!
```bash
pip install -U "cookiecutter>=1.7"
cookiecutter --no-input https://github.com/daniel-cortez-stevenson/cookiecutter-pyspark-cloud.git
cd pyspark-cloud
make install
pyspark_cloud
```
- AWS ☁️ CloudFormation Template for EMR: Simple Spark cluster deployment with infrastructure as code
  - JupyterHub is installed on the EMR master node for development and is backed by AWS S3 for persistent storage
  - JupyterLab endpoint available at https://master-dns:9443/lab
  - Jupyter Notebook 📔 endpoint available at https://master-dns:9443/tree with the sparkmagic kernel
  - Includes examples of bootstrapping your cluster with bash scripts and EMR Steps 👀
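For reference, a CloudFormation stack like this can also be launched straight from Python with boto3. This is only a sketch: the template path, stack name, and parameter keys below are placeholders rather than the template's actual values, and the project's Makefile may wrap deployment differently.

```python
# Minimal sketch of launching an EMR CloudFormation stack with boto3.
# The file path, stack name, and parameter keys are placeholders.
import boto3

cloudformation = boto3.client("cloudformation")

with open("cloudformation/emr-cluster.yaml") as f:  # placeholder template path
    template_body = f.read()

cloudformation.create_stack(
    StackName="pyspark-cloud-emr",  # placeholder stack name
    TemplateBody=template_body,
    Parameters=[
        {"ParameterKey": "KeyName", "ParameterValue": "my-ec2-keypair"},  # placeholder
    ],
    Capabilities=["CAPABILITY_NAMED_IAM"],  # needed if the template creates IAM roles
)
```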
- A Command-Line Interface for Running PySpark 'Jobs': For production 🚀 runs via the EMR Step API
  - Uses the concept of 'jobs': each job runs a PySpark script as a Python function via a common entrypoint - this is an important point (see the sketch below)
  - Check out the Medium article that inspired much of this design
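To make the 'jobs' idea concrete, here is a minimal sketch of the pattern, assuming a hypothetical `jobs` package whose modules each expose an `analyze(spark, **kwargs)` function; the template's actual module and function names may differ.

```python
# Sketch of the common-entrypoint pattern: one function resolves a job by
# name and runs it. The package name "jobs" and the analyze() signature are
# assumptions for illustration.
import importlib

from pyspark.sql import SparkSession


def run_job(job_name: str, **job_args):
    """Create a SparkSession, import jobs.<job_name>, and call its analyze()."""
    spark = SparkSession.builder.appName(job_name).getOrCreate()
    job_module = importlib.import_module(f"jobs.{job_name}")
    return job_module.analyze(spark, **job_args)


# A job module (e.g. jobs/wordcount.py) then only needs to define:
#
# def analyze(spark, input_path, output_path):
#     (spark.read.text(input_path)
#          .selectExpr("explode(split(value, ' ')) AS word")
#          .groupBy("word").count()
#          .write.parquet(output_path))
```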
- Log Like a Pro: Save time debugging in style 💃 (a logging sketch follows below)
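The template's exact logging helpers aside, the underlying idea is ordinary Python logging inside a job. A minimal sketch, where the logger name and the S3 path are placeholders:

```python
# Minimal logging sketch for a PySpark job using only the standard library.
# The logger name and the S3 path are placeholders.
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s - %(message)s",
)
logger = logging.getLogger("wordcount_job")

logger.info("Reading input from %s", "s3://my-bucket/input/")
try:
    raise ValueError("bad record")  # stand-in for a failing transformation
except ValueError:
    logger.exception("Job step failed")  # logs the message plus the full traceback
```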
- Wrap Scala with Python 🐍: Use libraries that haven't been included in the PySpark API!
  - An example of wrapping Scala Spark API code with the PySpark API is provided with `SnowballStemmer` (see the sketch below)
  - Could be extended to other Scala MLlib classes (and other Scala classes that implement the UDF interface)
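A sketch of the general pattern for wrapping Scala from PySpark, assuming the Scala class is packaged in a jar on the cluster and implements Spark's Java UDF interface; the fully qualified class name below is a placeholder, not the template's actual class:

```python
# Register a Scala/Java UDF class from PySpark and call it from SQL.
# "com.example.SnowballStemmerUDF" is a placeholder class name; the jar
# containing it must be on the Spark classpath (e.g. via spark.jars).
from pyspark.sql import SparkSession
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("scala-wrapping-demo").getOrCreate()

# Works for any Scala/Java class implementing the UDF1/UDF2/... interfaces.
spark.udf.registerJavaFunction("stem", "com.example.SnowballStemmerUDF", StringType())

spark.sql("SELECT stem('running') AS stemmed").show()
```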
- Simplify Workflows with Make ✅: A Makefile with commands for installation, development, and deployment
  - Use with `make [COMMAND]`
  - For example, upload an executable .egg 🥚 distribution of your PySpark code to AWS S3 with `make s3dist`
- Organize Your Code: Package code shared between 'jobs' in a Python module of your package called `common` (illustrative layout below)
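As an illustration only (the package, module, and helper names are made up), shared helpers live in `common` and get imported by individual jobs:

```python
# my_package/common/io.py - a shared helper used by several jobs
# (package, module, and function names here are illustrative).
def read_events(spark, path):
    """Read the raw event data that many jobs start from."""
    return spark.read.parquet(path)


# my_package/jobs/daily_report.py - a 'job' that reuses the shared helper:
#
# from my_package.common.io import read_events
#
# def analyze(spark, input_path, output_path):
#     events = read_events(spark, input_path)
#     events.groupBy("event_type").count().write.parquet(output_path)
```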
- Extend the PySpark API: An example of extending the PySpark SQL `DataFrame` class is included, which allows chaining custom transformations with dot notation (see the sketch below) - check out Quinn, an awesome repo of PySpark utilities & extensions
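The chaining style looks roughly like this. `DataFrame.transform` is built into PySpark 3.x (older versions achieve the same effect by patching `DataFrame`, the approach Quinn popularized); the column and function names are illustrative:

```python
# Chain custom transformations with dot notation via DataFrame.transform.
# Column and function names are illustrative.
from pyspark.sql import DataFrame, SparkSession
import pyspark.sql.functions as F


def with_greeting(df: DataFrame) -> DataFrame:
    return df.withColumn("greeting", F.lit("hello"))


def with_upper(col_name: str):
    def inner(df: DataFrame) -> DataFrame:
        return df.withColumn(col_name, F.upper(F.col(col_name)))
    return inner


spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("ada",), ("grace",)], ["name"])

result = df.transform(with_greeting).transform(with_upper("name"))
result.show()
```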
- Development Framework: All the tools you need
  - Use bump2version to version your project
  - Use CodeCov to track the completeness of unit tests - see codecov.yml
  - Use Flake8 to write Python code with common style & formatting conventions
Architecture: as defined in the CloudFormation template.
To contribute changes to the template itself:

- Clone this repo:

```bash
git clone https://github.com/daniel-cortez-stevenson/cookiecutter-pyspark-cloud.git
cd cookiecutter-pyspark-cloud
```
- Create a Python environment with dependencies installed:

```bash
conda create -n cookiecutter -y "python=3.7"
conda activate cookiecutter
pip install -r requirements.txt
```
- Make any changes to the template, as you wish.
- Create your project from the template:

```bash
cd ..
cookiecutter ./cookiecutter-pyspark-cloud
```
- Initialize git:

```bash
cd *your-repo_name*
git init
git add .
git commit -m "Initial Commit"
```
- Create a new conda environment for your new project and install the project development dependencies:

```bash
conda deactivate
conda create -n *your-repo_name* -y "python=3.6"
conda activate *your-repo_name*
make install-dev
```
Contributions are welcome! Thanks!
Submit a Bug Report or Feature Request.
Most of the ideas in this repo are not new - they are just expressed in a new way. Thanks, folks! 🙌
- @MrPowers for the `DataFrame` extension snippet
- @ekampf for the original concept for the `pyspark_entrypoint`