We want to test various dataset sizes, but for now we are still assuming only one- and two-node execution. I think we can test on 1GB and 10GB datasets without running into too many scale issues, even on very complex queries.
For the various backends and file formats that we use for storing data, we need to upload 1GB and 10GB versions (we already have these in S3 for Parquet, for example) and have a subset of queries run on these files to make sure that we are still able to run queries at scale.
Look at #1460 to see the various places where we will need to upload these datasets.
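To make the upload targets concrete, here is a minimal sketch of how the scale/format variants of each table could be enumerated. The bucket name, prefix layout, and table name are all assumptions for illustration, not the project's actual S3 layout:

```python
from itertools import product

# Placeholder bucket/prefix layout; the real paths live in the locations
# referenced by #1460 and will differ.
BUCKET = "s3://benchmark-datasets"
SCALES = ["1gb", "10gb"]
FORMATS = ["csv", "parquet"]

def dataset_paths(table: str):
    """Yield one path per (scale, format) combination for a table."""
    for scale, fmt in product(SCALES, FORMATS):
        yield f"{BUCKET}/{scale}/{fmt}/{table}"

paths = list(dataset_paths("lineitem"))
```

Enumerating the combinations in one place like this keeps the upload script and the test matrix from drifting apart.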
As a start, I would pick CSV and Parquet as the file formats that we make available.
In addition, someone needs to modify the e2e testing framework so that we can run these kinds of tests while specifying the scale and file format to use.
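One way the framework change could look is a pair of command-line options on the test runner. This is only a sketch with assumed flag names (`--scale`, `--file-format`); the actual framework may wire these in differently:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Hypothetical CLI options for selecting the scale-test variant."""
    parser = argparse.ArgumentParser(description="e2e scale tests")
    parser.add_argument("--scale", choices=["1gb", "10gb"], default="1gb",
                        help="dataset size to run the queries against")
    parser.add_argument("--file-format", choices=["csv", "parquet"],
                        default="parquet",
                        help="storage format of the input data")
    return parser

# Example invocation: run the 10GB CSV variant.
args = build_parser().parse_args(["--scale", "10gb", "--file-format", "csv"])
```

The runner would then use `args.scale` and `args.file_format` to pick the matching dataset paths before executing the query subset.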