
Some expect tests? #204

Open
Lupus opened this issue Aug 21, 2023 · 8 comments

Comments

@Lupus
Contributor

Lupus commented Aug 21, 2023

I'm thinking that expect tests could help maintain the ocaml-protoc codebase with less effort. At the cost of an additional dependency, say ppx_yojson_conv, we could instrument all of the internal representations with yojson_of_t and use ppx_expect for the tests. You just feed a bunch of proto definitions, as strings, into the library, get some structures as output, and pretty-print them as JSON; ppx_expect captures the output and checks that the code still behaves the way it used to, without any hand-written unit tests. In my experience, expect tests with automatic pretty-printing of internal structures work incredibly well for compiler-like projects, which ocaml-protoc definitely is.
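A minimal sketch of what such a test could look like, assuming a hypothetical `Parse` module with `parse : string -> t` and a `yojson_of_t` derived via ppx_yojson_conv (the names are illustrative, not ocaml-protoc's actual API):

```ocaml
(* Feed a proto definition in, pretty-print the parse tree as JSON,
   and let ppx_expect capture the output.  The [%expect] block starts
   empty; running the tests and then [dune promote] fills it in. *)
let%expect_test "parse a simple message" =
  let tree =
    Parse.parse
      {|
        message Person {
          required string name = 1;
        }
      |}
  in
  print_endline (Yojson.Safe.pretty_to_string (yojson_of_t tree));
  [%expect {| |}]
```

After any change to the parser, a plain `dune runtest` shows the diff against the captured output, and `dune promote` accepts it if the change was intentional.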

@c-cube
Collaborator

c-cube commented Aug 21, 2023 via email

@Lupus
Contributor Author

Lupus commented Aug 21, 2023

Dune expect tests are quite heavyweight compared to ppx_expect. With the latter you can have dozens of granular tests in one source file, while with dune expect tests you get only one output per test, and having multiple tests means writing multiple dune rules, which gets pretty verbose. ppx_expect wouldn't become a runtime dependency; the tests are easily organized into a separate library. Or do you want a minimal dependency footprint for the library, and thus suggest using dune, which is already a dependency?
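To illustrate the verbosity, a dune-level expect test for a single case might look roughly like this (file and executable names are illustrative):

```dune
; One test case = two rules: generate the output, then diff it
; against the committed reference.  Every additional case needs
; another pair of rules like this.
(rule
 (with-stdout-to parse_options.output
  (run ./test_runner.exe %{dep:parse_options.proto})))

(rule
 (alias runtest)
 (action (diff parse_options.expected parse_options.output)))
```

With ppx_expect, by contrast, each `let%expect_test` in a source file is its own independent case, with no per-case dune plumbing.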

@c-cube
Collaborator

c-cube commented Aug 21, 2023 via email

@Lupus
Contributor Author

Lupus commented Aug 21, 2023

We don't need a ppx

I'm currently adding support for some missing cases in option parsing. Looking at the current tests, with lots of manual asserts for every field spelled out by hand, I feel very reluctant to write any tests for my changes. There are no printers for compilerlib/pb_parsing_parse_tree.ml or compilerlib/pb_typing_type_tree.ml, so I can't just print the structures and match them against expected output, whether via dune or ppx_expect...

I wonder how many contributions in the OCaml community were never completed or upstreamed because of the "don't bring in new dependencies! it's OCaml, we don't have a decent stdlib, nor runtime type information or standard derivers, you have to code everything by hand in every project, only hardcore!" principle... </rant> Sorry, I'm too accustomed to our corporate environment, where most of the tests are expect tests and all the code to print stuff is written by a machine; there's literally zero friction to maintaining tests alongside new features.

@mbarbin
Contributor

mbarbin commented Jan 13, 2024

Hello @Lupus, @c-cube, and team,

I've been following this discussion and wanted to share my thoughts and a
proposal.

Firstly, I agree with @Lupus's point about the potential benefits of expect
tests in maintaining the ocaml-protoc codebase. I've found that expect tests,
particularly when combined with automatic pretty-printing of internal
structures, can be incredibly effective for compiler-like projects such as this
one.

I also understand @c-cube's concerns about introducing heavyweight dependencies
and the desire to keep the dependency footprint minimal. I believe there's a way
to balance these considerations, at least as far as testing the generated OCaml
code goes.

In my recent PR #229, I made a small change that affects the generated code.
I noticed that this didn't yield any corresponding changes in the tests. This led me
to think about how we could improve the testing workflow.

Here's a proposal: for every ml and mli file generated by a dune rule in
the src/examples/ directory, we add a corresponding *.ml.expect and
*.mli.expect file. We could then create a dune rule that diffs the generated
files against their expected reference as part of the @runtest alias.
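A sketch of what such a rule could look like, assuming one of the examples generates a file named example01_pb.ml (the file name is illustrative):

```dune
; Diff the freshly generated file against the committed reference.
; If the diff is intentional, `dune promote` copies the generated
; file over the .expect reference.
(rule
 (alias runtest)
 (action (diff example01_pb.ml.expect example01_pb.ml)))
```

One such pair of files per generated module would make every change to the code generator visible as a reviewable diff in the PR itself.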

This approach would let us easily see the impact of every PR on the
generated code across all examples. It would also prove that the
generated code didn't change for any change that isn't expected to affect
it (such as refactors, etc.).

I believe this wouldn't add too much maintenance pressure as updating the tests
would simply involve running dune promote. Moreover, it would make the testing
process more robust and transparent, which could potentially encourage more
contributions.

If this proposal aligns with your vision for the project, I'd be happy to write
a PR that adds these dune rules. This could serve as a prerequisite to my
upcoming contribution.

Looking forward to hearing your thoughts. Thank you!

@c-cube
Collaborator

c-cube commented Jan 13, 2024 via email

@Lupus
Contributor Author

Lupus commented Jan 16, 2024

#230 looks really nice! Great work @mbarbin!

@mbarbin
Contributor

mbarbin commented Jan 16, 2024

Thanks for your kind words on #230, @Lupus! On a related note, I'm discussing roundtrip property testing for ocaml-protoc here: #233. Your input would be greatly appreciated. Thank you!
