Improve tests #9
Yes, probably enough. Let's see how it goes.
I think so; your package is small enough that you don't really need to care about deprecated test data.
Use pytest fixtures!

```python
import numpy as np
import pytest
import xarray as xr


@pytest.fixture
def my_ds():
    return xr.Dataset(
        {"foo": xr.DataArray(np.random.randn(2, 3, 4), dims=("time", "lat", "lon"))}
    )


def test_dummy(my_ds):
    assert set(my_ds.dims) == {"time", "lat", "lon"}
```
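Not from the thread, but worth knowing if dataset construction ever gets expensive: pytest fixtures default to function scope, so `my_ds` is rebuilt for every test, while `scope="session"` builds it once per test run.

```python
import numpy as np
import pytest
import xarray as xr


# Built once per pytest session instead of once per test function.
@pytest.fixture(scope="session")
def my_ds():
    return xr.Dataset(
        {"foo": xr.DataArray(np.random.randn(2, 3, 4), dims=("time", "lat", "lon"))}
    )
```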
You could also have a fixture with cached downloaded data:

```python
import fsspec
import pytest
import xarray as xr


@pytest.fixture
def ds_ecco():
    url = "https://bla/bla/bla/ecco.nc"
    # simplecache:: downloads the file once and reuses the local copy afterwards.
    with fsspec.open(f"simplecache::{url}") as f:
        return xr.open_dataset(f)


def test_dummy(ds_ecco):
    assert isinstance(ds_ecco, xr.Dataset)
```
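A related knob (my addition, standard fsspec behavior rather than anything stated in the thread): by default `simplecache::` writes to a temporary directory, so the cache may not survive between runs. Passing `cache_storage` through the `simplecache` options pins it to a fixed path, which also lets CI cache it. The URL is the same placeholder as above.

```python
import fsspec
import xarray as xr

url = "https://bla/bla/bla/ecco.nc"  # placeholder URL from the thread
# Keep the cached copy in a known directory instead of a temp dir.
with fsspec.open(
    f"simplecache::{url}",
    simplecache={"cache_storage": "tests/cached_data"},
) as f:
    ds = xr.open_dataset(f)
```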
Cool. Here is my plan for reducing the data.
The fixture seems like a very good solution, thanks a lot!
Sounds good! BTW, if you need the cached file as a local path rather than a file object, `same_names=True` keeps the original filename and `f.name` points at the cached copy:

```python
import fsspec
import pytest
import xarray as xr


@pytest.fixture
def ds_ecco():
    url = "https://bla/bla/bla/ecco.nc"
    # same_names=True stores the cached copy under its original filename.
    with fsspec.open(f"simplecache::{url}", same_names=True) as f:
        return xr.open_dataset(f.name)
```
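One note on the difference from the earlier fixture (my gloss, not from the thread): `f.name` here is the path of the local cached copy, so xarray opens a plain file on disk instead of reading through a Python file object, which also lets it pick its default netCDF backend.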
Hi @MaceKuailv,
Let's start with the test data: it's taking too long to download.
I see you copied the workflow from OceanSpy, but OceanSpy's is not ideal and extra steps are needed because of some legacy code: use `fsspec` rather than `pooch`. The advantage is that it's very simple to use, and it's a dependency of dask, so it's very likely already installed. Here is an example (the cached-download fixture above):

Originally posted by @malmans2 in #8 (comment)
And most importantly, make sure you reduce the size of the data to download as much as possible.
Even better, create the data from scratch whenever possible.
Sometimes you can just create a dummy dataset yourself, something like the `my_ds` fixture above.

Originally posted by @malmans2 in #8 (comment)
I think the first one is pretty straightforward to implement. But I am not sure how much I need to reduce the size of the data. I am reluctant to generate the velocity field purely from scratch, so I will preserve at least one snapshot of it. And of course, I need to store many of the coords. My estimate is that I could probably reduce it to around 20-50 MB. Is that enough?
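For illustration, a minimal sketch of that kind of reduction, assuming the full dataset is a single NetCDF file with a `time` dimension; the file and variable names are hypothetical:

```python
import xarray as xr

# Hypothetical file and variable names, for illustration only.
ds = xr.open_dataset("ecco_full.nc")

# Keep one snapshot of the velocity field; coordinates come along automatically.
small = ds[["UVEL", "VVEL"]].isel(time=[0])

# Compress on write to shrink the download further.
encoding = {var: {"zlib": True, "complevel": 4} for var in small.data_vars}
small.to_netcdf("ecco_small.nc", encoding=encoding)
```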
Every time I upload to Zenodo, it asks me something like "are you sure you want to do this? We will create a DOI that lasts forever." Is it OK to upload things frequently?
If I were to generate the data, where should I store the functions that open and generate it?
Do you have any other comments not related to the tests? I developed the tests from the main branch, so every change I made to the tests on the cookiecut branch will almost certainly cause a merge conflict. I would prefer that we merge it first and create a new branch for the test-data problem.
Originally posted by @MaceKuailv in #8 (comment)