
Using ccache for full orchestrator rebuilds #190

Closed
m8pple opened this issue May 16, 2021 · 5 comments · Fixed by #217

@m8pple
Contributor

m8pple commented May 16, 2021

When moving between orchestrator branches it's useful to do a full rebuild with make -B, but because the build uses a custom mpicxx installation, ccache isn't picked up even if you have it installed.

I found it was useful to add the following to Makefile.dependencies:

# If ccache is installed then use it. Even if ccache is set up for
# g++ and gcc, it won't be for the custom install of mpi
CCACHE_PATH := $(strip $(shell which ccache 2> /dev/null))
ifneq ($(CCACHE_PATH),)
CC := $(CCACHE_PATH) $(CC)
CXX := $(CCACHE_PATH) $(CXX)
endif
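For reference, the same detection logic can be exercised standalone in a shell. This is just a minimal sketch of what the Makefile fragment above does; `g++` here stands in for whatever compiler the build actually selects:

```shell
# Mirror the Makefile logic: prefix the compiler command with ccache
# only when ccache is actually found on PATH.
CC="g++"
CCACHE_PATH="$(command -v ccache 2>/dev/null || true)"
if [ -n "$CCACHE_PATH" ]; then
    CC="$CCACHE_PATH $CC"
fi
echo "compiler command: $CC"
```

After a rebuild, running `ccache -s` prints hit/miss statistics, which is a quick way to confirm the cache is actually being exercised (`ccache -z` zeroes the counters first).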

This probably mainly affects people working on laptops with limited
numbers of CPUs and limited power, but it roughly halves the full
recompile time for me.

Results on a roughly four-year-old 4 CPU / 8 thread machine running under WSL (so slow disk accesses):

  • Single process, no caching: 1m18.764s
  • Single process, with caching: 0m31.378s
  • Eight processes, no caching: 0m19.563s
  • Eight processes, with caching: 0m9.243s

Time becomes dominated by sequential and/or non-compile steps (dependency
scanning and linking, I think). I didn't look at incremental compiles, as those
are already optimised by the dependency scanning.

This may be a bit niche, as the main usage model is that most people don't recompile the
orchestrator a lot, and currently there is less value in running the orchestrator on a local
machine. Probably most development is done on bigger workstations or servers too.

@m8pple
Contributor Author

m8pple commented May 17, 2021

This suggestion is implemented in 6c4034c. However, for people with powerful
machines it might not be worth it, and on a spinning disk it might even slow things down.

@mvousden
Contributor

When moving between orchestrator branches it's useful to do a make -B, but because it uses a custom mpicxx installation, ccache doesn't work if you have it installed.

I'm not sure what you mean by this. Why wouldn't ccache work in our setup if it's installed? The only gotcha is that the compiler wouldn't be registered.

I think this is a good idea.

@mvousden
Contributor

My times, single process:

  • No cache: ~100s
  • Yes cache: 4s

@heliosfa
Contributor

My times, single process:

  • No cache: ~100s
  • Yes cache: 4s

Under my WSL install, it goes from ~28s to ~4s. Good shout!

@m8pple
Contributor Author

m8pple commented May 26, 2021

When moving between orchestrator branches it's useful to do a make -B, but because it uses a custom mpicxx installation, ccache doesn't work if you have it installed.

I'm not sure what you mean by this. Why wouldn't ccache work in our setup if it's installed? The only gotcha is that the compiler wouldn't be registered.

I think this is a good idea.

My experience was that ccache wasn't being invoked via mpicxx: for some reason
the build was using the real g++ rather than the ccache wrapper. This might just be
something environment-specific; I can't think of a reason why mpicxx would deliberately
try to find the "true" cxx.
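One possible workaround, if the wrapper itself refuses to pick up ccache, is to point the MPI compiler driver's underlying-compiler environment variable at a ccache'd compiler. This is a sketch under assumptions: Open MPI's mpicxx honours OMPI_CXX and MPICH's honours MPICH_CXX, but which one applies to the custom MPI install used here hasn't been checked.

```shell
# Open MPI's mpicxx honours OMPI_CXX, and MPICH's honours MPICH_CXX;
# pointing either at "ccache g++" makes the MPI compiler driver call
# ccache instead of the raw compiler. Which variable takes effect
# depends on the MPI distribution actually installed.
export OMPI_CXX="ccache g++"
export MPICH_CXX="ccache g++"
# mpicxx invocations in this shell would now compile via ccache,
# e.g.: mpicxx -c foo.cpp
```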

On the 100s -> 4s: y'all have faster disks than me. Or more disk cache memory.

But glad it is providing an improvement. Ideally the existing careful header dependency
tracking in the makefile would provide the benefit, but I've found that with frequent
git branch switching this is less effective, so make -B has become kind of a default.
