Merge from cTuning (#879)
gfursin authored Jul 27, 2023
2 parents f335dab + 0409f01 commit 2db33ea
Showing 3 changed files with 24 additions and 2 deletions.
@@ -61,5 +61,24 @@ CM will install a new Python virtual environment in CM cache and will install al
cm show cache
```

### Do a test run to detect and record the system performance

```bash
cm run script --tags=generate-run-cmds,inference,_find-performance \
--model=bert-99 --implementation=reference --device=cpu --backend=deepsparse \
--category=edge --division=open --quiet --scenario=Offline
```

### Do a full accuracy and performance run

```bash
cm run script --tags=generate-run-cmds,inference,_submission --model=bert-99 \
--device=cpu --implementation=reference --backend=deepsparse \
--execution-mode=valid --results_dir=$HOME/results_dir \
--category=edge --division=open --quiet --scenario=Offline
```
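The two runs above can be chained in a small wrapper script. A dry-run sketch follows: defaulting `CM` to `echo` is an assumption for illustration only (set `CM=cm` to actually execute), while the flags are taken verbatim from this guide.

```shell
#!/bin/sh
# Dry-run sketch chaining the find-performance and submission runs.
# CM defaults to `echo`, so the commands are only printed; set CM=cm
# in the environment to execute them for real.
CM=${CM:-echo}
step1=$($CM run script --tags=generate-run-cmds,inference,_find-performance \
  --model=bert-99 --implementation=reference --device=cpu --backend=deepsparse \
  --category=edge --division=open --quiet --scenario=Offline)
step2=$($CM run script --tags=generate-run-cmds,inference,_submission \
  --model=bert-99 --device=cpu --implementation=reference --backend=deepsparse \
  --execution-mode=valid --results_dir=$HOME/results_dir \
  --category=edge --division=open --quiet --scenario=Offline)
printf '%s\n%s\n' "$step1" "$step2"
```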

### Generate and upload MLPerf submission

Follow [this guide](https://github.com/mlcommons/ck/blob/master/docs/mlperf/inference/Submission.md) to generate the submission tree and upload your results.


5 changes: 4 additions & 1 deletion cm-mlops/script/get-dataset-openimages/run.sh
```diff
@@ -27,5 +27,8 @@ else
 test $? -eq 0 || exit 1
 fi
 cd ${INSTALL_DIR}
-ln -s ../ open-images-v6-mlperf
+if [[ ! -f "open-images-v6-mlperf" ]]; then
+  ln -s ../ open-images-v6-mlperf
+fi
+
 test $? -eq 0 || exit 1
```
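One caveat with the guard added above: `-f` follows the symlink and tests for a regular file, so a link pointing at a directory never satisfies it. `-L` (the link itself) together with `-e` (any existing target) is the more precise "link already exists" test. A minimal sketch of an idempotent link helper, with a hypothetical function name:

```shell
#!/bin/sh
# Idempotent symlink helper (sketch; make_link is a hypothetical name).
# -L catches an existing symlink (even a dangling one) and -e catches
# any other existing path, so repeated calls never fail or stack links.
make_link() {
  target=$1
  link=$2
  if [ ! -L "$link" ] && [ ! -e "$link" ]; then
    ln -s "$target" "$link"
  fi
}

# Example: safe to run twice in a scratch directory.
tmp=$(mktemp -d)
cd "$tmp" || exit 1
mkdir data
make_link data mylink
make_link data mylink
```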
2 changes: 1 addition & 1 deletion docs/mlperf/inference/bert/README_reference.md
```diff
@@ -14,7 +14,7 @@ cm run script --tags=generate-run-cmds,inference,_find-performance,_all-scenario
 * Use `--device=cuda` to run the inference on Nvidia GPU
 * Use `--division=closed` to run all scenarios for the closed division (compliance tests are skipped for `_find-performance` mode)
 * Use `--category=datacenter` to run datacenter scenarios
-* Use `--backend=pytorch` and `--backend=tf` to use the pytorch and tensorflow backends respectively
+* Use `--backend=pytorch` and `--backend=tf` to use the PyTorch and TensorFlow backends respectively. `--backend=deepsparse` runs the sparse int8 model with the DeepSparse backend (not allowed for submission under the closed division).
 * Use `--model=bert-99.9` to run the high accuracy bert-99.9 model. Since we are running the fp32 model, this is redundant and we can instead reuse the bert-99 results for bert-99.9
```
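The backend flags above can also be exercised as a sweep. A sketch that only builds and prints one find-performance command per backend (a dry run; the flags are copied from this README, and nothing is executed):

```shell
#!/bin/sh
# Dry-run sweep over the backends listed above: build (and here just
# print) one find-performance command line per backend.
cmds=""
for backend in pytorch tf deepsparse; do
  cmd="cm run script --tags=generate-run-cmds,inference,_find-performance \
--model=bert-99 --implementation=reference --device=cpu --backend=$backend \
--category=edge --division=open --quiet --scenario=Offline"
  echo "$cmd"
  cmds="$cmds$cmd
"
done
```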


