If you haven't already, check out the quickstart guide on Feast's website (http://docs.feast.dev/quickstart), which
uses this repo. A quick view of what's in this repository's feature_repo/
directory:
- `data/` contains raw demo parquet data
- `feature_repo/example_repo.py` contains demo feature definitions
- `feature_repo/feature_store.yaml` contains a demo setup that configures where the data sources live, with the IBM Cloud Redis service as the online store and IBM Cloud Data Engine as the offline store
- `feature_repo/test_workflow.py` showcases how to run all key Feast commands, including defining, retrieving, and pushing features; you can run the overall workflow with `python test_workflow.py`
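For orientation, here is a minimal sketch of the kind of definitions `example_repo.py` contains, assuming standard Feast driver-stats-style definitions; the entity, feature, and file names below are illustrative, not taken from this repo (the actual repo wires its sources to IBM Cloud Data Engine via `feature_store.yaml`):

```python
from datetime import timedelta

from feast import Entity, FeatureView, Field, FileSource
from feast.types import Float32, Int64

# Entity: the join key that features are looked up by (illustrative name).
driver = Entity(name="driver", join_keys=["driver_id"])

# Source: raw parquet data, like the demo files under data/ (illustrative path).
driver_stats_source = FileSource(
    path="data/driver_stats.parquet",
    timestamp_field="event_timestamp",
)

# Feature view: a named group of features Feast can serve from the source.
driver_hourly_stats = FeatureView(
    name="driver_hourly_stats",
    entities=[driver],
    ttl=timedelta(days=1),
    schema=[
        Field(name="conv_rate", dtype=Float32),
        Field(name="avg_daily_trips", dtype=Int64),
    ],
    source=driver_stats_source,
)
```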
To set up and run the demo:

- Procure the IBM Cloud Redis and IBM Cloud Data Engine services.
- Set the environment variables below (a sanity-check sketch follows this list):

  ```shell
  export DATA_ENGINE_API_KEY=<DATA_ENGINE_API_KEY>
  export IBM_CLOUD_OBJECT_STORE_URL=<IBM_CLOUD_OBJECT_STORE_URL>
  export REDIS_HOST=<REDIS_HOST>
  export REDIS_PORT=<REDIS_PORT>
  export REDIS_PASSWORD=<REDIS_PASSWORD>
  export REDIS_CERT_PATH=<REDIS_CERT_PATH>
  ```
- Download the ibm-cloud-data-engine plugin project and configure its path in `pyproject.toml`.
- Install dependencies:

  ```shell
  poetry install
  ```
- Run `feast apply` to create or update the feature store deployment:

  ```shell
  poetry run feast -c ./feature_repo apply
  ```
- Run training, which retrieves historical feature data from the feature store (see the historical-retrieval sketch after this list):

  ```shell
  poetry run python training.py
  ```
- Materialize features from the offline store to the online store (a Python equivalent is sketched after this list):

  ```shell
  poetry run feast -c ./feature_repo materialize '<START_TIMESTAMP>' '<END_TIMESTAMP>'
  ```
- Run inference in production to retrieve features from the online store (see the online-retrieval sketch after this list):

  ```shell
  poetry run python inference.py
  ```
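A quick sanity check of the environment variables above can save a confusing failure later. The snippet below is a hypothetical helper, not a script in this repo; it only checks the variable names from the export list:

```python
import os
import sys

# Variable names taken from the export list above.
REQUIRED_VARS = [
    "DATA_ENGINE_API_KEY",
    "IBM_CLOUD_OBJECT_STORE_URL",
    "REDIS_HOST",
    "REDIS_PORT",
    "REDIS_PASSWORD",
    "REDIS_CERT_PATH",
]

# Report any variables that are missing or empty before Feast tries to use them.
missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
if missing:
    sys.exit(f"Missing environment variables: {', '.join(missing)}")
print("All required environment variables are set.")
```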
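For reference, historical retrieval in `training.py` presumably goes through Feast's `get_historical_features` API, roughly as below. This is a minimal sketch assuming the illustrative driver-stats definitions above; the feature names and entity rows are not taken from this repo:

```python
from datetime import datetime, timezone

import pandas as pd
from feast import FeatureStore

store = FeatureStore(repo_path="feature_repo")

# Entity dataframe: the keys and timestamps to fetch point-in-time features for.
entity_df = pd.DataFrame(
    {
        "driver_id": [1001, 1002],
        "event_timestamp": [
            datetime(2021, 8, 1, tzinfo=timezone.utc),
            datetime(2021, 8, 2, tzinfo=timezone.utc),
        ],
    }
)

# Point-in-time-correct join against the offline store (Data Engine here).
training_df = store.get_historical_features(
    entity_df=entity_df,
    features=[
        "driver_hourly_stats:conv_rate",
        "driver_hourly_stats:avg_daily_trips",
    ],
).to_df()
print(training_df.head())
```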
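The materialize step can also be driven from Python rather than the CLI, which is convenient inside scripts like `test_workflow.py`. The timestamps below are illustrative placeholders:

```python
from datetime import datetime, timezone

from feast import FeatureStore

store = FeatureStore(repo_path="feature_repo")

# Copy feature values in this time window from the offline store into Redis.
store.materialize(
    start_date=datetime(2021, 8, 1, tzinfo=timezone.utc),
    end_date=datetime(2021, 8, 4, tzinfo=timezone.utc),
)
```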
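Similarly, `inference.py` presumably uses Feast's `get_online_features` API along these lines, again with illustrative feature and entity names:

```python
from feast import FeatureStore

store = FeatureStore(repo_path="feature_repo")

# Low-latency lookup against the online store (Redis here).
feature_vector = store.get_online_features(
    features=[
        "driver_hourly_stats:conv_rate",
        "driver_hourly_stats:avg_daily_trips",
    ],
    entity_rows=[{"driver_id": 1001}],
).to_dict()
print(feature_vector)
```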