
Push branches for testing #1

Closed
nrc opened this issue Jun 9, 2015 · 8 comments
Labels
P-med Medium priority

Comments

@nrc
Member

nrc commented Jun 9, 2015

Be able to push a local branch to the repo to test its performance.

@Mark-Simulacrum
Member

Is this referring to pushing backend branches or is this referring to what @brson is talking about here?

@nrc
Member Author

nrc commented Aug 10, 2016

This is more like what brson was talking about. If I'm a compiler hacker and have written something I'm not sure about, it would be great to do a trial run of the branch on the perf server without merging it to master.

@Mark-Simulacrum
Member

Mark-Simulacrum commented Aug 10, 2016

Okay, copy-pasting the reddit post and rewriting it into a bulleted list of what I think the next steps are.

  • We have a machine monitoring Rust PRs, just like buildbot and travis do.

For each head commit on a PR:

  • Build the compiler (with time-passes) on a pool of linux builders.
    • This means we lose accurate bootstrap benchmarks, but gain speed since building the compiler takes a long time.
  • Send off the compiler to a single machine to run the performance tests.
    • Sandboxing is needed here since we're effectively running arbitrary code.
    • Doing this on the same server as what's running perf.rust-lang.org isn't ideal due to noise from the backend serving the website.
  • Post a result to the PR with a link to the results on perf.rust-lang.org.
    • There is also a cumulative list of these, grouped by PR.
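The per-commit flow above can be sketched roughly as follows. This is purely illustrative: the function names (`build_compiler`, `run_benchmarks`, `process_head_commit`) and the timing numbers are made up, not part of any actual perf infrastructure.

```python
# Hypothetical sketch of the PR-benchmark pipeline described above.
# All names and numbers here are illustrative placeholders.
from dataclasses import dataclass


@dataclass
class Result:
    pr: int
    commit: str
    timings: dict  # phase name -> seconds, e.g. from -Z time-passes


def build_compiler(commit: str) -> str:
    # In reality this would dispatch to a pool of Linux builders;
    # here we just return a token standing in for the built artifacts.
    return f"rustc-{commit}"


def run_benchmarks(artifact: str, pr: int, commit: str) -> Result:
    # Would run on a dedicated, sandboxed machine so that website
    # traffic and other jobs don't add noise to the measurements.
    return Result(pr=pr, commit=commit, timings={"typeck": 1.2, "llvm": 3.4})


def process_head_commit(pr: int, commit: str) -> str:
    artifact = build_compiler(commit)
    result = run_benchmarks(artifact, pr, commit)
    total = sum(result.timings.values())
    # Post a summary back to the PR with a link to the full results.
    return f"PR #{pr} @ {commit}: total {total:.1f}s (details: perf.rust-lang.org)"


print(process_head_commit(12345, "abc123"))
```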

I think the main steps needed for this to happen are on the human side (getting and configuring the servers; configuring GitHub to ping the compiler build server). The main steps for this repository are in the UI, and most of that UI work can happen in parallel with the human work. The steps below are what needs to be done for the UI:

  • Add a JS page which generates a list of PRs with compile time changes for each benchmark.
    • Only show totals here.
  • Add a details view (for each PR) which shows:
    • difference per phase (memory and time).
    • graph summarizing the commits in that PR, showing whether each commit made a significant difference.
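The per-phase diff that the details view would display could be computed along these lines. The phase names and timings are invented for illustration; real data would come from the stored per-commit `-Z time-passes` output.

```python
# Illustrative sketch of the per-phase diff for the details view.
# Phase names and numbers below are made up.
def phase_diff(before: dict, after: dict) -> dict:
    """Percent change per compiler phase (positive = regression)."""
    return {
        phase: round((after[phase] - before[phase]) / before[phase] * 100, 1)
        for phase in before
        if phase in after
    }


before = {"parsing": 0.50, "typeck": 1.20, "llvm": 3.40}
after = {"parsing": 0.50, "typeck": 1.32, "llvm": 3.23}
print(phase_diff(before, after))  # {'parsing': 0.0, 'typeck': 10.0, 'llvm': -5.0}
```

Only the totals would appear on the list page; this per-phase breakdown belongs in the per-PR details view.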

@nrc
Member Author

nrc commented Aug 10, 2016

I think there are two similar but different ideas here - brson's idea is to test every push to the Rust repo after it is merged, the other idea is to test nominated branches before they are merged. Long term, we probably want both. The second idea is easier in the short term because we don't need any new hardware or load balancing, etc.

@Mark-Simulacrum
Member

I think I interpreted brson's idea as more of the second than the first...

I think you meant to say that the first (testing after merge) is easier. I'm not sure I follow you on not needing new hardware: wouldn't the compiler still need to be built? Or is it possible to get the built artifacts from buildbot? Running the benchmarks once we get a compiler though is a single-computer task and can be done (for now, at least) on the same server as what runs perf.rust-lang.org.

@nrc
Member Author

nrc commented Aug 12, 2016

No - testing after merge is harder. It would mean we would need more than one server, since we merge more quickly than we test. We do need to build the compiler ourselves, since that is where the bootstrap numbers come from (we could use a stage1 compiler from the bots, but it doesn't seem worth the effort; the build doesn't take that long compared to the other tasks).

If we only test on request (and I'm assuming we wouldn't get too many requests), then we should still be able to run on a single server.

(Note that the tests don't run on the same server as the website backend - we need a clean server to get consistent results).

@Mark-Simulacrum
Member

Testing on request for now sounds like a good plan (and I believe there's already a PR asking for that); I didn't realize that the merging happens faster than the tests, though I suppose that makes sense. Let me know if/how I can help!

@nrc nrc added the P-med Medium priority label Aug 26, 2016
@Mark-Simulacrum
Member

@bors try builds can be benchmarked with today's infrastructure. I don't think we're prepared to do more than that for the meantime, and not sure it's super worth tracking either. As such, closing.

rylev added a commit that referenced this issue Jul 19, 2021
Move comparison page to Vue.js