Allow marking an example as should-fail #397
First, I think we should clarify what it is to fail. I think that we have 3 distinct cases:
If I were to choose how to mark the expected results, I would go for folders named like this:
Any other prefix, or the lack of a sub-folder, should make Travis fail the tests. To implement this, it would be necessary to separate building from testing. Another thing that may be a problem is that, because we are substituting files in the same project, the cache for the examples seems to always be overwritten. Maybe we'll have to cache each of the examples separately to keep recompilation times reasonable. All this will probably make the
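The prefix-based classification described above could be sketched roughly like this. This is only an illustration: the prefix names (`success-`, `fail-`) and the folder layout are assumptions, not the actual names proposed in this thread.

```shell
#!/usr/bin/env sh
# Sketch only: hypothetical prefixes "success-" and "fail-".
# Any other prefix is reported as "unknown", which CI should treat as an error.
classify_example() {
  case "$(basename "$1")" in
    success-*) echo pass ;;
    fail-*)    echo fail ;;
    *)         echo unknown ;;
  esac
}

classify_example exercises/accumulate/examples/success-standard  # -> pass
classify_example exercises/accumulate/examples/fail-inefficient  # -> fail
```

A driver script could then build every example but invert the expected test outcome for the `fail` ones.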
The idea seems good to me.
Ah, I see. Yes, that might be needed. It appears I missed that a cached folder is being used as
Sometimes I am told it is not wise to write too-complex logic in shell scripts, and it makes me wonder if we should go back to having something like https://github.com/exercism/xhaskell/commits/master/_test/check-exercises.hs . Sure, the complexity would still be somewhere (we only change where it lives and what language it is in), but maybe it is more desirable to keep the complexity there.
I was working on something like that, but I still have a lot to learn before opening a PR. At first I thought it was going to be easy, but I discovered that Stack doesn't like my NixOS environment, so I'll have to spend a few more days preparing a Debian virtual machine for development.
Should I assume you are working on it and therefore I shouldn't, or might I give it a try sometime?
I got stuck in my solution and right now I don't have the time to continue, so please please please do it! 👍 The hardest part seems to be how to separate the
In both cases we have to link each example. I think that the second solution is better because it would avoid any present or future problems with the global Stack cache. I made a few tests and it seems that the first solution works too, but sharing exactly the same path for distinct builds feels wrong... Anyway, it is your call! 😄
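The "separate path per build" idea could look something like the following sketch. Everything here is an assumption for illustration: the `exercises/<name>/examples/<example>` layout, the `Example.hs` filename, and the helper names are hypothetical; the one real ingredient is Stack's `--work-dir` option, which points a build at its own work directory so distinct builds never share a path.

```shell
#!/usr/bin/env sh
# One stack work dir per (exercise, example) pair, so builds never
# share a cache path and recompilation stays reasonable.
work_dir_for() {
  printf '.stack-work-%s-%s' "$1" "$2"
}

# Hypothetical runner: link the example's source into the exercise's
# src/ directory, then test with that example's dedicated work dir.
test_example() {
  exercise=$1; example=$2
  ln -sf "$PWD/exercises/$exercise/examples/$example/Example.hs" \
         "exercises/$exercise/src/Example.hs"
  ( cd "exercises/$exercise" \
    && stack --work-dir "$(work_dir_for "$exercise" "$example")" test )
}
```

Whether linking or copying is used, the key point is that the work dir is derived from both the exercise and the example, so no two builds collide.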
I may extract some of the current functionality contained in travis.yml out into a shell script in bin/ or something. The reason is that I would really like to have something that lets me automatically test a single example locally. My general idea is to have:
(If it is desired, we could also have such scripts as …) I know we got rid of _test in #203, but I think being able to easily test an example locally would be helpful for me. The template issues for new tracks suggest that tests:
So it seems reasonable for us to follow the suggestions. One more argument for having this easy script: it makes the completion of #398 quite trivial: just say "Run the script" in the README.
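A minimal version of such a local-testing helper might look like this. The script name (`bin/test-exercise`), the `exercises/<name>` layout, and the error codes are assumptions for the sketch; it is written as a sourceable function rather than a standalone file so the argument handling is easy to exercise.

```shell
#!/usr/bin/env sh
# Hypothetical helper for testing a single exercise locally,
# e.g. saved as bin/test-exercise and invoked with an exercise name.
test_exercise() {
  if [ -z "$1" ]; then
    echo "usage: test_exercise EXERCISE" >&2
    return 2
  fi
  dir="exercises/$1"
  if [ ! -d "$dir" ]; then
    echo "no such exercise: $1" >&2
    return 1
  fi
  # Run the exercise's test suite from inside its own directory.
  ( cd "$dir" && stack test )
}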
I like the idea of being able to test an example/exercise easily! 👍 |
Note that I have not yet found any other track that does this. I imagine I would have pointed to any other examples I was aware of when opening this issue. I don't think I've looked particularly hard since then, so that information might be out of date. I would gladly defend this idea regardless of what any other tracks are doing, though. |
As an extension to multiple examples in #395, we'd like to be able to add examples that we can mark as expected to fail the test.
We can use this to make sure that our test cases properly reject some mistaken solutions that we expect them to reject. The example I gave earlier: in `accumulate`, we have an inefficient example noted at https://github.com/exercism/xhaskell/blob/master/exercises/accumulate/src/Example.hs#L15 and we say that it should fail the test suite. But this was never automatically verified, and indeed for the longest time that example did not even compile, until #209.
The things to figure out here are:
- How to mark an example as should-fail: a folder named `SHOULDFAIL`? Or something else?
- How to run it: `stack test`, of course. We'll have to keep in mind that we usually have `set -e` on, but this time we have to invert the exit code of `stack test`. Maybe something can do that for us. Maybe simply the `!` operator will work.