Build the Nvidia MLPerf inference server Docker container:

```bash
cm docker script --tags=build,nvidia,inference,server
```
cmr "run-mlperf inference _find-performance" --scenario=Offline \
--model=bert-99 --implementation=nvidia-original --device=cuda --backend=tensorrt \
--category=edge --division=open --quiet
- Use `--model=bert-99.9` to run the high-accuracy model (only for datacenter)
- Use `--rerun` to force a rerun even when result files (from a previous run) exist (an example combining both flags follows this list)
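For example, a high-accuracy (bert-99.9) test run in the datacenter category that forces a fresh measurement could look like the following sketch; the flag combination is illustrative and should be adjusted to your system:

```bash
cmr "run-mlperf inference _find-performance" --scenario=Offline \
--model=bert-99.9 --implementation=nvidia-original --device=cuda --backend=tensorrt \
--category=datacenter --division=open --rerun --quiet
```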
cmr "run-mlperf inference _submission _all-scenarios" --model=bert-99 \
--device=cuda --implementation=nvidia-original --backend=tensorrt \
--execution-mode=valid --category=edge --division=open --quiet
- Use `--category=datacenter` to run datacenter scenarios (only for bert-99.9)
- Use `--power=yes` for measuring power. It is ignored for accuracy and compliance runs
- Use `--division=closed` to run all scenarios for the closed division, including the compliance tests
- `--offline_target_qps`, `--server_target_qps`, and `--singlestream_target_latency` can be used to pass in the performance numbers (see the example after this list)
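For instance, a closed-division datacenter submission run with power measurement and explicit performance targets might be invoked as shown below; the target QPS values are placeholders, not measured numbers:

```bash
# Placeholder target values; replace them with the numbers found during the _find-performance run
cmr "run-mlperf inference _submission _all-scenarios" --model=bert-99.9 \
--device=cuda --implementation=nvidia-original --backend=tensorrt \
--execution-mode=valid --category=datacenter --division=closed --power=yes \
--offline_target_qps=4000 --server_target_qps=3500 --quiet
```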
Follow this guide to generate the submission tree and upload your results.
Check the MLCommons Task Force on Automation and Reproducibility and get in touch via the public Discord server.
- CM automation for Nvidia's MLPerf inference implementation was developed by Arjun Suresh and Grigori Fursin.
- Nvidia's MLPerf inference implementation was developed by Zhihan Jiang, Ethan Cheng, Yiheng Zhang and Jinho Suh.