Bridging the spec to the tests generation #793
Comments
I'm definitely thinking along the same lines, but I wasn't originally planning to create a separate repo.
Edit: do you see a significant advantage in pulling this out to a separate repo? My main concern is that it would limit the tests that actually get run in the spec CI, instead kicking them out to a generator repo that is too far removed from spec development, imo.
Additional comment: can easily tag tests in the
I'm opposed to this. For several reasons:
Alternatively, you could have it push to a separate, disconnected branch if you really have to do it in one repo. But at this point I'd rather give the pyspec project its own repo, and have anything that imports it cleanly download it through git, without dealing with any extra git history or file data. We can reference a version and/or commit hash to make sure we know the spec-source of a particular build (see the sketch at the end of this comment).

Also, we made this exact same decision for test-generators <-> test-cases. And it works, is easy to maintain, and has clear rules on where permanent modifications can be made and what isn't desirable.

RE additional comment:
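As a rough illustration of that pinning (the repo URL and package name are hypothetical, nothing in this thread fixes them), a downstream generator's requirements.txt could reference an exact tag or commit:

```
# requirements.txt of a downstream test generator (hypothetical repo/name)
git+https://github.com/ethereum/eth2.0-pyspec.git@v0.6.0#egg=pyspec
# or pin to an exact commit hash for full reproducibility:
# git+https://github.com/ethereum/eth2.0-pyspec.git@<commit-hash>#egg=pyspec
```

pip resolves the ref at install time, so every generator build is tied to one known spec commit.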
Good idea! There are multiple ways to implement this:
I would go with simplify. Or the plural, if anyone can come up with non-leveled requirements. My thoughts about tags:
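One concrete way to do the tagging (purely a sketch; pytest and the marker names are my assumption, not something settled here) is pytest markers, so each CI pipeline selects its subset with one flag:

```python
# Hypothetical sketch: tag tests by "level" with pytest markers, so the spec
# CI can run only the light checks (`pytest -m minimal`) while the dedicated
# test-generation CI runs everything.
import pytest

@pytest.mark.minimal
def test_empty_block_transition():
    # cheap sanity check, runs on every spec commit
    ...

@pytest.mark.full
def test_large_deposit_queue():
    # heavier case, kicked out to the generator CI
    ...
```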
Posting this here for transparency. The work on CI, previous proposal by me: (here, #793). Below is my personal interpretation; it's a lot to go through.

Agree on:
Pain points:
Proposal to resolve pain points:
Disagree on:
Unclear: how do the minimal (but hardcoded) tests in the specs repo get copied over to the test suite? Do we just ignore them?
I'd also support #412.

Pros:
Cons:
I intuitively think that we can update them manually if the sanity checks are light.
Also brought up: the idea of just having a Python spec, and referring to line numbers/function names in the readable spec. Then just have a bot compile it into a webpage/markdown file (a minimal sketch below). It would be neat, but it may be too far from a "specification"; it would be more like documentation.

#412 is interesting, and may be a good trade-off. But I would like to see some existing projects using it first, before taking that direction.

My priorities for now:
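On the bot idea above, a minimal sketch of what it could do, assuming the Python spec is an importable module (module and function names are hypothetical):

```python
# Hypothetical sketch of the "bot": for a given function in the executable
# spec, emit a markdown fragment with its source and line numbers, so the
# readable spec can reference them.
import inspect

def document_function(module, name):
    fn = getattr(module, name)
    source_lines, start = inspect.getsourcelines(fn)
    end = start + len(source_lines) - 1
    header = "### `%s` (%s, lines %d-%d)\n" % (name, module.__name__, start, end)
    return header + "```python\n" + "".join(source_lines) + "```\n"

# e.g., assuming a built `spec` module:
#   import spec
#   print(document_function(spec, "process_slot"))
```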
A form of this was implemented in #851.
For a while now we have had the concept of an "executable spec".
And we have two versions:
This weekend I was in Taipei with @hwwhww and @ChihChengLiang, and we discussed a way forward to make it easier to generate tests for the more complex parts of the spec, like state transition tests. We don't want to maintain duplicate code here in the test-generators repository.
Instead, we want to use the Python "executable spec" to create tests with.
Proposal
The following proposal is based on discussion with @ChihChengLiang and @hwwhww; credit to them too.
@djrtwo had plans (or so I heard) to move the spec-pythonizer to the specs repo. This is great, as it enables a clean and straightforward CI design:
1: Build script: run the code puller and merge its output with the helper/utils definitions into a fully executable spec, then push the result to an output repo. This process is very similar to the way the current test-generators repo builds the tests repo. (A rough sketch follows after this list.)
2: Each generator can switch to new pyspec versions (matching spec versioning) by updating its requirements.txt, and possibly making any necessary changes to its generation code to call the new/changed functions.
3: @hwwhww mentioned that we still have to do some work to create fake blocks, invalid attestations, etc., but this is not duplication; it's exactly what we want to do in the test generators :). We just import the spec to be able to replicate spec behavior super-reliably in the test suite. (A second sketch below.)
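For context, a minimal sketch of what step 1's code puller could look like (paths and the merge strategy are placeholders; a real build script needs more care with ordering and imports):

```python
# Hypothetical sketch of step 1: pull the python code blocks out of the
# markdown spec and concatenate them, after the helper/utils definitions,
# into one importable module.
import re

PYTHON_BLOCK = re.compile(r"```python\n(.*?)```", re.DOTALL)

def build_pyspec(spec_md_path, utils_py_path, out_path):
    with open(spec_md_path) as f:
        spec_md = f.read()
    with open(utils_py_path) as f:
        utils = f.read()
    blocks = PYTHON_BLOCK.findall(spec_md)
    with open(out_path, "w") as f:
        f.write(utils + "\n\n" + "\n\n".join(blocks))

# e.g. build_pyspec("specs/core/0_beacon-chain.md", "utils/phase0.py", "spec.py")
```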
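And a sketch of what step 3 looks like from a generator's point of view (the `spec` import and the field/convention names are my assumptions about the spec surface, not taken from this thread):

```python
# Hypothetical generator sketch: import the built pyspec and derive an
# *invalid* case from a valid one by corrupting a single field, so clients
# must reject the block.
import copy
import spec  # the executable spec, installed via requirements.txt

def invalid_parent_root_case(pre_state, valid_block):
    bad_block = copy.deepcopy(valid_block)
    bad_block.parent_root = b"\x42" * 32  # points to a block that doesn't exist
    return {
        "pre": pre_state,
        "block": bad_block,
        "post": None,  # convention: None means the transition must fail
    }
```

The valid starting material comes from the spec itself, so the generators never re-implement spec behavior.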