
Improve tests #9

Closed

malmans2 opened this issue May 16, 2023 · 4 comments

@malmans2
Contributor

Hi @MaceKuailv,
Let's start with the test data: it's taking too long to download.
I see you copied the workflow from OceanSpy, but OceanSpy's is not ideal, and extra steps are needed there because of some legacy code:

  1. Do not store all files in a single tar archive; store them separately. That way, pooch can cache each of them individually (see the pooch sketch after the fsspec example below).
  2. Do not download all of the data first, and don't use symlinks. Whenever test functions need files, just use pooch directly there.
  3. You could use fsspec rather than pooch. The advantage is that it's very simple to use, and it's a dependency of dask, so it's very likely already installed.
    Here is an example:
import xarray as xr
import fsspec

def test_dummy():
    url = "https://github.com/pydata/xarray-data/raw/master/air_temperature.nc"
    with fsspec.open(f"simplecache::{url}") as f:
        ds = xr.open_dataset(f)

Originally posted by @malmans2 in #8 (comment)
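
For items 1 and 2, a minimal pooch sketch of what that could look like (the URL is a placeholder, and known_hash=None just skips checksum verification):

import pooch
import xarray as xr


def test_dummy():
    # pooch downloads on first use and caches locally, so later runs are fast.
    path = pooch.retrieve(
        url="https://example.com/testdata/ecco_snapshot.nc",  # placeholder URL
        known_hash=None,
    )
    ds = xr.open_dataset(path)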

And most importantly, make sure you reduce the size of the data to download as much as possible.
Even better, create the data from scratch whenever possible.
Sometimes you can just create a dummy dataset yourself, something like:

import xarray as xr
import numpy as np

ds = xr.Dataset(
    {"foo": xr.DataArray(np.random.randn(2, 3, 4), dims=("time", "lat", "lon"))}
)

Originally posted by @malmans2 in #8 (comment)

I think the first one is pretty straightforward to implement, but I am not sure how much I need to reduce the size of the data. I am reluctant to generate the velocity field purely from scratch, so I will preserve at least one snapshot of it. And of course, I need to store many of the coords. My estimate is that I could probably reduce it to around 20-50 MB. Is that enough?

Every time I upload to Zenodo, it asks me something like "are you sure you want to do this? we will create a DOI that will exist forever". Is it OK to upload things frequently?

If I were to generate the data, where should I store the functions that open and generate it?

Do you have any other comments not related to the tests? I developed the tests on the main branch, so every change I made to the tests on the cookiecut branch will almost certainly cause a merge conflict. I would prefer we merge it first and create a new branch for the test data problem.

Originally posted by @MaceKuailv in #8 (comment)

@malmans2
Contributor Author

I think the first one is pretty straightforward to implement, but I am not sure how much I need to reduce the size of the data. I am reluctant to generate the velocity field purely from scratch, so I will preserve at least one snapshot of it. And of course, I need to store many of the coords. My estimate is that I could probably reduce it to around 20-50 MB. Is that enough?

Yes, probably enough. Let's see how it goes.

Every time I upload to Zenodo, it asks me something like "are you sure you want to do this? we will create a DOI that will exist forever". Is it OK to upload things frequently?

I think so; your package is small enough that you don't really need to care about deprecated test data.

If I were to generate the data, where should I store the functions that open and generate it?

Use pytest fixtures!
For example:

import numpy as np
import pytest
import xarray as xr


@pytest.fixture
def my_ds():
    return xr.Dataset(
        {"foo": xr.DataArray(np.random.randn(2, 3, 4), dims=("time", "lat", "lon"))}
    )


def test_dummy(my_ds):
    assert set(my_ds.dims) == {"time", "lat", "lon"}
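
If several test modules need the same fixture, pytest picks it up automatically from a conftest.py at the test root, so a possible layout (a sketch, not this repo's actual structure) is:

# tests/conftest.py -- fixtures defined here are visible to all test modules
import numpy as np
import pytest
import xarray as xr


@pytest.fixture
def my_ds():
    # Dummy dataset generated from scratch; nothing to download.
    return xr.Dataset(
        {"foo": xr.DataArray(np.random.randn(2, 3, 4), dims=("time", "lat", "lon"))}
    )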

@malmans2
Contributor Author

You could also have a fixture with cached downloaded data:

import fsspec
import pytest
import xarray as xr


@pytest.fixture
def ds_ecco():
    url = "https://bla/bla/bla/ecco.nc"
    with fsspec.open(f"simplecache::{url}") as f:
        return xr.open_dataset(f)


def test_dummy(ds_ecco):
    assert isinstance(ds_ecco, xr.Dataset)
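
One tweak worth considering (an assumption about your setup, not a requirement): giving the fixture session scope, so pytest evaluates it once per run rather than once per test function:

@pytest.fixture(scope="session")
def ds_ecco():
    # scope="session": evaluated once per pytest run; combined with
    # simplecache, the file is downloaded at most once.
    url = "https://bla/bla/bla/ecco.nc"
    with fsspec.open(f"simplecache::{url}") as f:
        return xr.open_dataset(f)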

@MaceKuailv
Owner

Cool. Here is my plan for reducing the data.

  1. Get rid of hycom.
  2. Maybe get rid of MITgcm_rect as well.
  3. Reduce the latitude range of AVISO.
  4. Get rid of all scalar variables in ECCO, preserve one snapshot of velocity, and generate the other snapshots by *(1+normal_distribution) (sketched below).
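
For item 4, a minimal sketch of what *(1+normal_distribution) could look like (u0 here is a stand-in for the one real snapshot, and the 1% noise scale is just a guess):

import numpy as np

rng = np.random.default_rng(0)  # seeded, so the synthetic snapshots are reproducible

# Stand-in for the one real ECCO velocity snapshot that is kept.
u0 = rng.standard_normal((50, 90, 90))

# Each extra snapshot is the real one times (1 + small Gaussian noise).
u1 = u0 * (1 + 0.01 * rng.standard_normal(u0.shape))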

Fixtures seem like a very good solution, thanks a lot!

@malmans2
Contributor Author

Sounds good! BTW, if you start using fsspec, I just remembered that you should actually get the cached filename, because most of xarray's backends need a filename rather than a file object:

import fsspec
import pytest
import xarray as xr


@pytest.fixture
def ds_ecco():
    url = "https://bla/bla/bla/ecco.nc"
    with fsspec.open(f"simplecache::{url}", same_names=True) as f:
        return xr.open_dataset(f.name)
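
If you also want the cached file to land in a fixed directory rather than the temp-dir default, simplecache accepts per-protocol options (the "testdata" path is just a placeholder):

@pytest.fixture
def ds_ecco():
    url = "https://bla/bla/bla/ecco.nc"
    # cache_storage pins the cache directory; same_names keeps the original
    # filename, so f.name points at a readable local path.
    with fsspec.open(
        f"simplecache::{url}",
        simplecache={"cache_storage": "testdata", "same_names": True},
    ) as f:
        return xr.open_dataset(f.name)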
