Improve Dataset test maintainability/readability #1131

Closed
10 tasks done
mthrok opened this issue Dec 28, 2020 · 2 comments · Fixed by #1147 or #1148
Comments

@mthrok
Collaborator

mthrok commented Dec 28, 2020

In #821, we improved the dataset tests to work on mocked files. In those changes, we applied the pattern of creating the mock dataset in setUpClass. As we keep improving the dataset tests, setUpClass gets cluttered and it becomes harder to grasp what is going on, so we should refactor the pattern.

In #1126 we extracted the dataset mocking part into a separate function. We can apply the same pattern to the other tests too. In the end, the initialization should look as simple as:

@classmethod
def setUpClass(cls):
    cls.root_dir = cls.get_base_temp_dir()
    cls.data = get_mock_dataset(cls.root_dir)

with an extracted helper function that creates the mock data and returns the expected data:

def get_mock_dataset(root_dir):
    ...
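
A minimal sketch of what such a helper could look like, assuming a dataset of labeled WAV files; the file layout, sample rate, and returned tuple format below are illustrative assumptions, not the actual torchaudio test code.

import os

import torch
import torchaudio


def get_mock_dataset(root_dir):
    """Write a few mock WAV files under ``root_dir`` and return the samples
    the dataset under test is expected to yield.
    """
    sample_rate = 16000
    mocked_samples = []
    os.makedirs(root_dir, exist_ok=True)
    for label in range(3):
        path = os.path.join(root_dir, f"sample_{label}.wav")
        # Deterministic seed so the expected waveform can be reproduced by the test.
        torch.manual_seed(label)
        waveform = torch.rand(1, sample_rate)  # one second of noise
        torchaudio.save(path, waveform, sample_rate)
        mocked_samples.append((waveform, sample_rate, label))
    return mocked_samples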

Additionally, for CommonVoice, the mocking part is already extracted, but there are two helper functions that are very similar to each other, so we can refactor that part too.
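
For illustration, a hedged sketch of merging two near-duplicate helpers into one parameterized function; the parameters (`ext`, `sample_rate`) and the TSV columns are assumptions standing in for whatever actually differs between the existing CommonVoice helpers.

import csv
import os

import torch
import torchaudio

# Assumed subset of the CommonVoice TSV columns, for illustration only.
_HEADER = ["client_id", "path", "sentence"]


def get_mock_dataset(root_dir, ext="wav", sample_rate=48000):
    """Single parameterized helper in place of two near-duplicates.

    ``ext`` and ``sample_rate`` are hypothetical parameters capturing
    whatever differs between the original helper functions.
    """
    clip_dir = os.path.join(root_dir, "clips")
    os.makedirs(clip_dir, exist_ok=True)
    expected = []
    with open(os.path.join(root_dir, "train.tsv"), "w", newline="") as tsv:
        writer = csv.writer(tsv, delimiter="\t")
        writer.writerow(_HEADER)
        for i in range(3):
            filename = f"sample_{i}.{ext}"
            torch.manual_seed(i)
            waveform = torch.rand(1, sample_rate)  # one second of noise
            torchaudio.save(os.path.join(clip_dir, filename), waveform, sample_rate)
            row = [f"client_{i}", filename, f"sentence {i}"]
            writer.writerow(row)
            expected.append((waveform, sample_rate, dict(zip(_HEADER, row))))
    return expected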

@AzizCode92
Contributor

Hi @mthrok, a PR has been created for this issue, starting with a fix for the CommonVoice dataset.

@mthrok
Collaborator Author

mthrok commented Jan 5, 2021

@AzizCode92 and @krishnakalyan3, thank you for working on this.
