
cabal test sucks #2058

Closed
ezyang opened this issue Aug 26, 2014 · 11 comments

@ezyang
Contributor

ezyang commented Aug 26, 2014

How does it suck? Let me count the ways:

  1. It always tries to compile the test executable; there should be an option to interpret it (maybe even by default), since it's really annoying to wait several seconds for what should be an instantaneous test run.
  2. It doesn't stream output, even with --show-details=always: it always waits until the test executable has finished before printing the output.
  3. It's not obvious from the help text how to build a test suite without actually running it (cabal build test-name).
  4. It's really hard to tell what the options for the underlying test executables are: usually you have to build the executable yourself and then run it manually with --help. The help text gives no advice on how to do this.
  5. --test-option is really nonsense when there are multiple test suites, since you usually want to feed separate options to each test suite.
  6. cabal test passes hidden information to the underlying executables, making it hard to run the tests in isolation. This is best seen with the cabal-install test suite itself: if you run 'cabal test package-tests', the locally built cabal-install executable is picked up, but if you run the package-tests executable manually, it picks up whatever cabal is in your PATH. This is unobvious even with -v, which reports that Cabal is just running the relevant command.
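As a concrete illustration of points 3 and 4, this is roughly the manual workflow today (the dist/build path assumes the default build layout, and package-tests is just an example suite name):

```
# Build only the named test suite, without running it:
cabal build package-tests

# Then run the produced binary by hand to discover its options:
./dist/build/package-tests/package-tests --help
```

Neither step is suggested by the cabal test help text, which is exactly the discoverability gap described above.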
@23Skidoo
Member

/cc @ttuegel

@ttuegel
Member

ttuegel commented Aug 26, 2014

Thank you for taking the time to report these! I think most people wouldn't bother reporting so many documentation bugs. Each of these deserves its own ticket, so I'm going to split them up later; we can use this thread as a checklist. Right now I'm just going to leave some notes below. Also, you should know that (2) has an undocumented fix (see below).

  1. Interpreted test suites are a great idea! Unfortunately, this will only save linking time. The preprocessors for the package still need to run, which can take a long time.
  2. Streaming output is enabled with --show-details=streaming. I just checked, and apparently I never documented this.
  3. The help text for cabal build needs to note that it can build any kind of component. The help texts for cabal bench and cabal test need to be updated to say the same.
  4. Unfortunately, Cabal has no way to determine what the options for an executable test are. This was the goal of the detailed test type. In an example of "good enough" being the enemy of "perfect," that test type hasn't seen any adoption; it needs to be updated or abandoned. The documentation of cabal run should note that it can run any component.
  5. Yes, what we really need is --test-NAME-option (to set an option for test NAME, in keeping with --PROG-option tradition), or something like that.
  6. We have showCommandForUser, but what we want is showCommandEnvForUser.
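On point 6, showCommandEnvForUser could be a thin wrapper around showCommandForUser (which exists in System.Process) that also prints the environment overrides. A minimal sketch; the name and signature come from the proposal above, not from any existing API, and a real version would also need to shell-quote the environment values:

```haskell
import System.Process (showCommandForUser)

-- Render a command together with the extra environment variables it is
-- run with, so the user can reproduce the invocation by hand.
-- NB: env values are not shell-quoted here; a real version should quote them.
showCommandEnvForUser :: [(String, String)] -> FilePath -> [String] -> String
showCommandEnvForUser env cmd args =
  unwords ([ k ++ "=" ++ v | (k, v) <- env ] ++ [showCommandForUser cmd args])
```

With -v output produced this way, a user could copy-paste the whole line, environment included, and reproduce the test run outside of Cabal.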

@ezyang
Contributor Author

ezyang commented Aug 26, 2014

Thanks. Do we avoid running preprocessors when there is definitely no need for them? I personally care a lot about the fixed costs of compilation (which should be near-instantaneous for me), so if there are things we can do to speed this up, that would be great.

I can submit doc patches for some of these, but since I'm not a committer and they seem small, it might be easier for you to just fix them directly.

@23Skidoo
Member

@ezyang
I can add you to the committers group, on the condition that you won't start pushing large changes without prior review.

@tibbe
Member

tibbe commented Aug 27, 2014

@ezyang you're in the collaborators group, so feel free to push doc fixes directly to the repo.

Thomas addressed most of your points, so I will just add whatever extra tidbits I have:

  1. I think adding a flag to optionally run the tests through ghci would be interesting. It won't work with all test suites, because ghci doesn't work with all Haskell code (e.g. there are TH issues, GUI toolkit issues, etc). How about --interpreted?
  2. Perhaps we should make streaming the default when we don't run the tests in parallel. Streaming output is a quite recent addition, because it wasn't previously possible to create a pipe in a portable way.
  3. cabal help build gives an example towards the bottom, but it's not that easy to spot. I'd rather the first line read: Usage: cabal build [FLAGS] [COMPONENTS]. The same goes for other commands that take optional component names.
  4. You can do cabal run my-test -- --help, but that's not very discoverable.
  5. It's sometimes useful but we should also have --test-COMPONENT-option.
  6. I wasn't aware (or, more likely, didn't remember) that we did this. Also note that, in the case of the Cabal test suite, I manually figure out which Cabal the test suite was configured with. This is required for these tests to work correctly.
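In the spirit of point 1, and until something like the proposed --interpreted flag exists, one workaround is to load the test suite into ghci through cabal repl (available since cabal 1.20) and invoke its main by hand; package-tests is again just an example component name:

```
# Load the test-suite component into ghci instead of compiling a binary:
cabal repl package-tests

# Inside ghci, run the suite; arguments after :main go to the test driver:
# ghci> :main --help
```

This skips linking, though it hits the same ghci limitations mentioned above (Template Haskell, GUI toolkits, etc.).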

@ttuegel
Member

ttuegel commented Aug 28, 2014

@ezyang In response to your question about preprocessors, Cabal always runs them. Cabal doesn't keep track of source dependencies between the modules in a package, so it has no way of knowing if sources need to be re-preprocessed or not. The only thing we can do is re-run everything and check if GHC wants to rebuild anything.

@23Skidoo
Member

@ttuegel I think that we do at least check whether the target is newer than the source. Or do we?

@ttuegel
Member

ttuegel commented Aug 29, 2014

@23Skidoo Yes, you're right. It's still needlessly slow, because we have to search the filesystem for all the modules that could be used: even if we've built the package, we don't know which modules were actually involved in compilation. But what I was actually thinking of is module autogeneration. We check the autogenerated modules by regenerating them and comparing the contents. This is probably silly, because they shouldn't change unless the LBI changes. I haven't profiled it yet, but I'm going to now; the whole module autogeneration business uses String, so it's undoubtedly dog-slow. It's also ironic, because the result gets packed to a ByteString before writing.

@ttuegel
Member

ttuegel commented Aug 29, 2014

I stand corrected: for a do-nothing build, GHC accounts for 94% of the time. Another 5% of runtime is spent in the Read instance of LocalBuildInfo. There is basically nothing we can do in Cabal at this point to speed builds up, unless we can avoid calling GHC somehow.

@23Skidoo
Member

@ttuegel

Another 5% of runtime is spent in the Read instance of LocalBuildInfo.

We should make dist/setup-config a binary file; parsing it can be surprisingly slow for packages with many dependencies (see #1841).
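For what it's worth, a binary dist/setup-config could be as small a change as swapping the Show/Read round-trip for a generically derived Binary instance. A minimal sketch, where BuildConfig is a hypothetical stand-in for the real LocalBuildInfo (which has no Binary instance today):

```haskell
{-# LANGUAGE DeriveGeneric, DeriveAnyClass #-}

import Data.Binary (Binary, encodeFile, decodeFile)
import GHC.Generics (Generic)

-- Hypothetical stand-in for LocalBuildInfo; the real type is far larger.
data BuildConfig = BuildConfig
  { configFlags :: [String]
  , packageDeps :: [String]
  } deriving (Show, Generic, Binary)

-- Replace the slow Read-based parse of dist/setup-config with a
-- compact binary decode.
saveConfig :: FilePath -> BuildConfig -> IO ()
saveConfig = encodeFile

loadConfig :: FilePath -> IO BuildConfig
loadConfig = decodeFile
```

The encoding is also far more compact than the Show output, which should help the packages with very large dependency graphs mentioned in #1841.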

@ttuegel
Member

ttuegel commented Apr 24, 2015

We now have separate issues for the problems enumerated in this ticket, so I will close this one.

ttuegel closed this as completed Apr 24, 2015

4 participants