
Cookiecut #8

Merged: 11 commits into main, May 16, 2023
Conversation

MaceKuailv (Owner)

Changes related to CI

@malmans2 (Contributor) commented May 16, 2023

Hi @MaceKuailv,
Let's start with the test data: it's taking too long to download.
I see you copied the workflow from OceanSpy, but OceanSpy's workflow is not ideal, and its extra steps are only needed because of some legacy code:

  1. Do not store all files in a single tar archive; store them separately. That way, pooch can cache each file on its own.
  2. Do not download all of the data up front, and don't use symlinks. Whenever a test function needs a file, fetch it with pooch right there (see the pooch sketch after the example below).
  3. You could use fsspec rather than pooch. The advantage is that it's very simple to use, and it's a dependency of dask, so it's very likely already installed.
    Here is an example:
import fsspec
import xarray as xr

def test_dummy():
    # "simplecache::" makes fsspec download the file once and reuse the local copy
    url = "https://github.com/pydata/xarray-data/raw/master/air_temperature.nc"
    with fsspec.open(f"simplecache::{url}") as f:
        ds = xr.open_dataset(f)
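
For item 2, a minimal sketch of calling pooch directly inside a test (the test name is illustrative, the URL is the same sample file as above, and known_hash=None is only for brevity; pin a hash in real tests):

import pooch
import xarray as xr

def test_dummy_pooch():
    # downloads on first use, then serves the cached local copy
    path = pooch.retrieve(
        "https://github.com/pydata/xarray-data/raw/master/air_temperature.nc",
        known_hash=None,
    )
    ds = xr.open_dataset(path)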

@malmans2 (Contributor) commented May 16, 2023

And most importantly, make sure you reduce the size of the data to download as much as possible.
Even better, create the data from scratch whenever possible.
Sometimes you can just build a dummy dataset yourself, something like:

import numpy as np
import xarray as xr

# random values standing in for real data; dims mimic model output
ds = xr.Dataset(
    {"foo": xr.DataArray(np.random.randn(2, 3, 4), dims=("time", "lat", "lon"))}
)
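
As a sketch of how such generated data can be wired into tests (assuming pytest; the fixture and test names are hypothetical), a fixture keeps the construction in one place so tests never download anything:

import numpy as np
import pytest
import xarray as xr

@pytest.fixture
def dummy_ds():
    # built in memory, so no network access is needed
    return xr.Dataset(
        {"foo": xr.DataArray(np.random.randn(2, 3, 4), dims=("time", "lat", "lon"))}
    )

def test_uses_dummy(dummy_ds):
    assert set(dummy_ds.foo.dims) == {"time", "lat", "lon"}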

@MaceKuailv (Owner, Author)

I think the first one is pretty straightforward to implement. But I am not sure how much I need to reduce the size of the data. I am reluctant to generate the velocity field purely from scratch, so I will preserve at least one snapshot of it. And of course, I need to store many of the coords. My estimate is that I could probably reduce it to around 20-50 MB. Is that enough?
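
A minimal sketch of the kind of subsetting described above (file and variable names are hypothetical): keep a single velocity snapshot plus the coordinates, and write it to a much smaller file:

import xarray as xr

ds = xr.open_dataset("full_model_output.nc")  # hypothetical full dataset
small = ds[["U", "V"]].isel(time=[0])  # one snapshot of the velocity field
small.to_netcdf("test_snapshot.nc")  # coords on U/V are carried along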

Every time I upload to Zenodo, it asks me something like "are you sure you want to do this? We will create a DOI that will exist forever". Is it OK to upload things frequently?

If I were to generate the data, where should I store the function that opens and generates it?

Do you have any other comments not related to the tests? I developed the tests on the main branch, so any change I make to them on the cookiecut branch will almost certainly cause a merge conflict. I would prefer we merge this first and create a new branch for the test-data problem.

@malmans2 (Contributor)

How about you merge this, and I will open issues with a few tips?

I also want to show you a couple of improvements/changes to the template, but I think it's easier if I open a PR.

I'll reply to the test question in a dedicated issue!

@malmans2 mentioned this pull request May 16, 2023
@malmans2 (Contributor) commented May 16, 2023

OK, issue opened. After you merge this, I will open a PR with a couple of improvements for stuff like CI and packaging.

@MaceKuailv (Owner, Author)

Very nice

@MaceKuailv merged commit 772386f into main on May 16, 2023