Difficulty getting feedback on score output #415
Comments
To kick off the discussion: the historical reason for this is compatibility with the DaCapo harness output, which people are used to, even though it lacks a lot of the information that is collected by the Renaissance harness. Currently, the JSON output is the most informative - perhaps we could dump that by default, possibly with a timestamp to avoid accidentally overwriting past results?
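A minimal sketch of what such a timestamped dump might look like from the command line (assuming the harness's `--json` option accepts an output path, as referenced elsewhere in this thread; the file name pattern is purely illustrative):

```sh
# Write the detailed JSON results to a timestamp-suffixed file so that
# repeated runs do not overwrite each other (illustrative file name).
java -jar renaissance-gpl-0.14.2.jar \
  --json "results-$(date +%Y%m%d-%H%M%S).json" \
  all
```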
I don't want to question whether the output data format should be CSV or JSON.
I think it would be more appropriate if entering the following command output a CSV file and a JSON file by default: `java -jar renaissance-gpl-0.14.2.jar all`
The ways to invoke Renaissance include `java -jar renaissance-gpl-0.14.2.jar` and `java -jar renaissance-gpl-0.14.2.jar all`.
While outputting a file by default can be convenient, we should really not run `all` when no benchmark is specified. Note that DaCapo also just shows the help message if no benchmark is specified. Actually, I think we should provide a giant warning message when users run `all`.
Thank you very much for your reply; I understand what you mean.
Since this has no impact on benchmarking, I think whatever makes it more convenient for users is a good idea, so yes.
The default output is to the screen (standard output), which is fine for quick experimenting. If one needs to do some serious benchmarking, they often need to fine-tune the JVM too (e.g. with specific JVM options).

From the beginning we intentionally wanted to provide a set of workloads open to different uses rather than a self-evaluating package (like SPEC does). For example, SPEC benchmarks are always driven by their ability to provide verified results that can be published on the SPEC website. We have no ambition to provide such a database of results; instead, we expect users to tweak the invocation to their needs. Some people might be interested in startup costs, some in long-running effects. We try not to limit them, which means we do not perform any analysis and leave as many choices as possible to the user.

To me, your use case seems reasonable for some initial experimenting, but once you start running heavier experiments, you would probably need to store data under well-defined filenames instead of searching the current directory for the last file in a list (or something along those lines).

I understand that this might be surprising behavior for some people. But some people (including me) would argue that creating (persistent) files without an (explicit) request is even more confusing. Perhaps I missed some clear and frequent use case for dumping output to a file by default?
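As a rough illustration of the kind of explicit, scripted setup described above (the JVM flags, benchmark name, and file names are assumptions made for this sketch, not recommendations):

```sh
# Illustrative script: pin the heap, label the experiment, and store the
# results under explicit, per-run file names chosen by the user.
RUN_ID="pagerank-4g-$(date +%Y%m%d-%H%M%S)"
java -Xms4G -Xmx4G -jar renaissance-gpl-0.14.2.jar \
  --csv  "${RUN_ID}.csv" \
  --json "${RUN_ID}.json" \
  page-rank
```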
I'm new to the performance field, and the question may be a little naive. Thank you for your patient answers. |
I ran the tool as follows and got no output:
java -jar renaissance-gpl-0.14.2.jar all
Later I realized that maybe I had to add `--json` or `--csv` to get the output. Such a design may unexpectedly leave users without any saved results.
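For reference, an invocation that does produce result files might look like the following (assuming `--csv` and `--json` take the output file path, as the flags are referenced above; the file names are placeholders):

```sh
# Explicitly request CSV and JSON result files in addition to the console output.
java -jar renaissance-gpl-0.14.2.jar --csv results.csv --json results.json all
```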
Taking SPECjvm2008, released by SPEC, as an example: its default output includes SPECjvm2008.001.html, SPECjvm2008.001.raw, SPECjvm2008.001.txt, and some images. This way, we can see clear results after running the benchmark once, and even get a visualized HTML report page.
Is it possible to consider improving Renaissance's default output?