Test that parquet data saved in dataset format (a folder of parquet files rather than a single file) can be loaded and accessed using get_historical_features.
Steps:
1. Upload a parquet dataset to the test S3 folder.
2. Point a source at the parquet dataset S3 folder (not at a single file).
3. Load the source features using get_historical_features.
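A minimal sketch of these three steps, assuming Feast's Python SDK, a feature_store.yaml with a file offline store already configured, AWS credentials in the environment, and a placeholder bucket/prefix and feature names (none of which are part of the real test suite):

```python
# Hypothetical sketch of the test flow described above; bucket, prefix, and
# feature names are placeholders, not the Feast integration-test resources.
from datetime import datetime, timedelta

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

from feast import Entity, FeatureStore, FeatureView, Field, FileSource
from feast.types import Float32

DATASET_URI = "s3://my-test-bucket/driver_stats_dataset/"  # a folder, not a single file

# 1. Upload a parquet dataset (a directory of parquet files) to the test S3 folder.
df = pd.DataFrame(
    {
        "driver_id": [1001, 1002, 1003],
        "conv_rate": [0.1, 0.2, 0.3],
        "event_timestamp": [datetime.utcnow() - timedelta(hours=h) for h in range(3)],
        "created": [datetime.utcnow()] * 3,
    }
)
# write_to_dataset produces a directory of parquet files, which is the
# "dataset format" layout this issue wants covered.
pq.write_to_dataset(pa.Table.from_pandas(df), root_path=DATASET_URI)

# 2. Point a source at the parquet dataset S3 folder (not at a single file).
driver = Entity(name="driver", join_keys=["driver_id"])
source = FileSource(
    path=DATASET_URI,
    timestamp_field="event_timestamp",
    created_timestamp_column="created",
)
driver_stats = FeatureView(
    name="driver_stats",
    entities=[driver],
    schema=[Field(name="conv_rate", dtype=Float32)],
    source=source,
)

# 3. Load the source features using get_historical_features.
store = FeatureStore(repo_path=".")
store.apply([driver, driver_stats])
training_df = store.get_historical_features(
    entity_df=df[["driver_id", "event_timestamp"]],
    features=["driver_stats:conv_rate"],
).to_df()
assert "conv_rate" in training_df.columns
```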
Ideally, there'd be a test for this running, but you'd be blocked on making a file within the Feast integration tests S3 bucket. Mind making an issue for that separately? @mzwiessele
mzwiessele changed the title to "Test loading S3 parquet folder in dataset format" on Sep 20, 2022.
There is an S3FileDataSourceCreator in sdk/python/tests/integration/universal/feature_repos/universal/data_sources/file.py that uses MinIO. It hasn't been used in a while, but it could be updated and reused.
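If the MinIO route is revived, the fixture could look roughly like the sketch below. The image, credentials, port, bucket name, and log pattern are assumptions for illustration, not what S3FileDataSourceCreator actually does today:

```python
# Rough sketch of a MinIO-backed fixture (all names/credentials are placeholders).
import boto3
from testcontainers.core.container import DockerContainer
from testcontainers.core.waiting_utils import wait_for_logs

minio = (
    DockerContainer("minio/minio")
    .with_env("MINIO_ROOT_USER", "minio")
    .with_env("MINIO_ROOT_PASSWORD", "minio123")
    .with_exposed_ports(9000)
    .with_command("server /data")
)
minio.start()
wait_for_logs(minio, "API", timeout=30)  # wait until the MinIO API is listening

endpoint = f"http://{minio.get_container_host_ip()}:{minio.get_exposed_port(9000)}"
s3 = boto3.client(
    "s3",
    endpoint_url=endpoint,
    aws_access_key_id="minio",
    aws_secret_access_key="minio123",
)
s3.create_bucket(Bucket="feast-test")

# Upload parquet part files under one prefix, then build a FileSource with
# path="s3://feast-test/<prefix>/" and s3_endpoint_override=endpoint so Feast
# reads the dataset from MinIO instead of AWS S3.
```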
Instead of using MinIO, we could also just create a dataset and upload the data directly to an S3 bucket - for example the feast-integration-tests bucket, which is used in universal/data_sources/redshift.py.
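For the direct-upload route, a hedged sketch: write a small multi-file parquet dataset locally, then push every part file under one prefix in the integration-test bucket. The prefix, file names, and columns are made up; only the bucket name comes from the comment above:

```python
# Direct upload of a multi-file parquet dataset to the test bucket (placeholder prefix).
import tempfile
from pathlib import Path

import boto3
import pandas as pd

BUCKET = "feast-integration-tests"
PREFIX = "parquet_dataset_test"  # placeholder prefix

df = pd.DataFrame(
    {
        "driver_id": [1001, 1002, 1003],
        "conv_rate": [0.1, 0.2, 0.3],
        "event_timestamp": pd.to_datetime(["2022-09-01", "2022-09-02", "2022-09-03"]),
    }
)

s3 = boto3.client("s3")
with tempfile.TemporaryDirectory() as tmp:
    # Split into two part files so the S3 "folder" really is a multi-file dataset.
    for i, chunk in enumerate((df.iloc[:2], df.iloc[2:])):
        local = Path(tmp) / f"part-{i}.parquet"
        chunk.to_parquet(local)
        s3.upload_file(str(local), BUCKET, f"{PREFIX}/part-{i}.parquet")

# A FileSource can then point at f"s3://{BUCKET}/{PREFIX}/" as in the earlier sketch.
```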
@mzwiessele if you want to take this on, I'd be happy to help you and review any PRs!
Originally posted by @adchia in #3217 (comment)