docs clean up #870

Merged
merged 11 commits into from
Jul 22, 2023
4 changes: 4 additions & 0 deletions cm-mlops/automation/experiment/module.py
@@ -57,6 +57,10 @@ def test(self, i):

return {'return':0}





############################################################
def run(self, i):
"""
@@ -1,20 +1,23 @@
### Introduction

Our goal is to help the community benchmark and optimize various AI/ML applications
across diverse software and hardware provided by volunteers, similar to SETI@home!

Open-source [MLPerf inference benchmarks](https://arxiv.org/abs/1911.02549)
were developed by a [consortium of 50+ companies and universities (MLCommons)](https://mlcommons.org)
to enable trustworthy and reproducible comparison of AI/ML systems
in terms of latency, throughput, power consumption, accuracy and other metrics
across diverse software/hardware stacks from different vendors.

However, running MLPerf inference benchmarks and submitting results [turned out to be a challenge](https://doi.org/10.5281/zenodo.8144274)
-even for experts and could easily take many weeks. That's why MLCommons,
+even for experts and could easily take many weeks. That's why [MLCommons](https://mlcommons.org),
[cTuning.org](https://www.linkedin.com/company/ctuning-foundation)
and [cKnowledge.org](https://www.linkedin.com/company/cknowledge)
decided to develop an open-source, technology-agnostic
and non-intrusive [Collective Mind automation language (CM)](https://github.com/mlcommons/ck)
and [Collective Knowledge Playground (CK)](https://access.cknowledge.org/playground/?action=experiments)
-to run, reproduce, optimize and compare MLPerf inference benchmarks out-of-the-box
-across diverse software, hardware, models and data sets from any vendor.
+to help anyone run, reproduce, optimize and compare MLPerf inference benchmarks out-of-the-box
+across diverse software, hardware, models and data sets.

You can read more about our vision, open-source technology and future plans
in this [presentation](https://doi.org/10.5281/zenodo.8105339).
@@ -23,19 +26,20 @@ in this [presentation](https://doi.org/10.5281/zenodo.8105339).

### Challenge

-We would like you to run as many MLPerf inference benchmarks on as many CPUs (Intel, AMD, Arm) and Nvidia GPUs
-as possible across different frameworks (ONNX, PyTorch, TF, TFLite)
+We would like to help volunteers run various MLPerf inference benchmarks
+on diverse CPUs (Intel, AMD, Arm) and Nvidia GPUs, similar to SETI@home,
+across different frameworks (ONNX, PyTorch, TF, TFLite)
either natively or in a cloud (AWS, Azure, GCP, Alibaba, Oracle, OVHcloud, ...)
-and submit official results to MLPerf inference v3.1.
+and submit results to MLPerf inference v3.1.

-However, since some benchmarks may take 1 day to run, we suggest to start in the following order:
+However, since some benchmarks may take 1-2 days to run, we suggest starting in the following order (these links describe the CM commands to run benchmarks and submit results):
* [CPU: Reference implementation of Image Classification with ResNet50 (open and then closed division)](https://github.com/mlcommons/ck/blob/master/docs/mlperf/inference/resnet50/README_reference.md)
* [CPU: TFLite C++ implementation of Image classification with variations of MobileNets and EfficientNets (open division)](https://github.com/mlcommons/ck/blob/master/cm-mlops/script/run-mlperf-inference-mobilenet-models/README-about.md)
* [Nvidia GPU: Nvidia optimized implementation of Image Classification with ResNet50 (open and then closed division)](https://github.com/mlcommons/ck/blob/master/docs/mlperf/inference/resnet50/README_nvidia.md)
* [Nvidia GPU: Nvidia optimized implementation of Language processing with BERT large (open and then closed division)](https://github.com/mlcommons/ck/blob/master/docs/mlperf/inference/bert/README_nvidia.md)
* [Nvidia GPU: Reference implementation of Image Classification with ResNet50 (open and then closed division)](https://github.com/mlcommons/ck/blob/master/docs/mlperf/inference/resnet50/README_reference.md)
* [Nvidia GPU: Reference implementation of Language processing with BERT large (open and then closed division)](https://github.com/mlcommons/ck/blob/master/docs/mlperf/inference/bert/README_reference.md)
* [Nvidia GPU (24GB of memory min): Reference implementation of Language processing with GPT-J 6B (open)](https://github.com/mlcommons/ck/blob/master/docs/mlperf/inference/gpt-j/README_reference.md)
* [CPU: TFLite C++ implementation of Image classification with variations of MobileNets and EfficientNets (open division)](https://github.com/mlcommons/ck/blob/master/cm-mlops/script/run-mlperf-inference-mobilenet-models/README-about.md)
* [Nvidia GPU: Nvidia optimized implementation of all other models (open and closed division)](https://github.com/ctuning/mlcommons-ck/blob/master/docs/mlperf/inference/README.md#run-benchmarks-and-submit-results)

Please read [this documentation](https://github.com/mlcommons/ck/blob/master/docs/mlperf/inference/README.md)
@@ -60,7 +64,7 @@ Looking forward to your submissions and happy hacking!
* *All submitters will participate in writing a common white paper about running and comparing MLPerf inference benchmarks out-of-the-box.*
* *All submitters will receive 1 point for submitting valid results for 1 complete benchmark on one system.*
* *All submitters will receive an official MLCommons Collective Knowledge contributor award (see [this example](https://ctuning.org/awards/ck-award-202307-zhu.pdf)).*
-* *The top 3 submitters by points will receive a prize of 200$ each.*
+* *The top contributors will receive cash prizes from [MLCommons organizations](https://mlcommons.org) and [cKnowledge.org](https://www.linkedin.com/company/cknowledge).*


### Organizers
@@ -22,6 +22,6 @@
"mlperf-inference-v3.1-2023",
"v3.1"
],
-"title": "Run and optimize MLPerf inference v3.1 benchmarks (latency, throughput, power consumption, accuracy, cost)",
+"title": "Participate in collaborative benchmarking of AI/ML systems similar to SETI@home (latency, throughput, power consumption, accuracy, cost)",
"uid": "3e971d8089014d1f"
}
@@ -18,7 +18,7 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn

* *All contributors will participate in writing a common white paper about running and comparing MLPerf inference benchmarks out-of-the-box.*
* *All contributors will receive an official MLCommons Collective Knowledge contributor award (see [this example](https://ctuning.org/awards/ck-award-202307-zhu.pdf)).*
-* *The first implementation will receive 3 points and a prize of 400$ and the fastest implementation will receive a prize of 300$.*
+* *The top contributors will receive cash prizes from [MLCommons organizations](https://mlcommons.org) and [cKnowledge.org](https://www.linkedin.com/company/cknowledge).*



@@ -19,7 +19,7 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn

* *All contributors will participate in writing a common white paper about running and comparing MLPerf inference benchmarks out-of-the-box.*
* *All contributors will receive an official MLCommons Collective Knowledge contributor award (see [this example](https://ctuning.org/awards/ck-award-202307-zhu.pdf)).*
-* *The first implementation will receive 2 points and a prize of 300$.*
+* *The top contributors will receive cash prizes from [MLCommons organizations](https://mlcommons.org) and [cKnowledge.org](https://www.linkedin.com/company/cknowledge).*


### Organizers
@@ -13,10 +13,10 @@ Looking forward to your submissions and happy hacking!

### Prizes

-*All submitters will participate in writing a common white paper about running and comparing MLPerf inference benchmarks out-of-the-box.*
-*All submitters will receive 1 point for submitting valid results for 1 complete benchmark on one system.*
-*All submitters will receive an official MLCommons Collective Knowledge contributor award (see [this example](https://ctuning.org/awards/ck-award-202307-zhu.pdf)).*
-*The top 3 submitters by points will receive a prize of 200$ each.*
+* *All submitters will participate in writing a common white paper about running and comparing MLPerf inference benchmarks out-of-the-box.*
+* *All submitters will receive 1 point for submitting valid results for 1 complete benchmark on one system.*
+* *All submitters will receive an official MLCommons Collective Knowledge contributor award (see [this example](https://ctuning.org/awards/ck-award-202307-zhu.pdf)).*
+* *The top contributors will receive cash prizes from [MLCommons organizations](https://mlcommons.org) and [cKnowledge.org](https://www.linkedin.com/company/cknowledge).*


### Organizers
@@ -20,7 +20,7 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn
* *All contributors will participate in writing a common white paper about running and comparing MLPerf inference benchmarks out-of-the-box.*
* *All contributors will receive 1 point for submitting valid results for 1 complete benchmark on one system.*
* *All contributors will receive an official MLCommons Collective Knowledge contributor award (see [this example](https://ctuning.org/awards/ck-award-202307-zhu.pdf)).*
-* *The top 3 submitters by points will receive a prize of 150$ each.*
+* *The top contributors will receive cash prizes from [MLCommons organizations](https://mlcommons.org) and [cKnowledge.org](https://www.linkedin.com/company/cknowledge).*

### Organizers

@@ -20,7 +20,7 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn

* *All contributors will participate in writing a common white paper about running and comparing MLPerf inference benchmarks out-of-the-box.*
* *All contributors will receive an official MLCommons Collective Knowledge contributor award (see [this example](https://ctuning.org/awards/ck-award-202307-zhu.pdf)).*
-* *The first implementation will receive 3 points and a prize of 400$ and the fastest implementation will receive a prize of 300$.*
+* *The top contributors will receive cash prizes from [MLCommons organizations](https://mlcommons.org) and [cKnowledge.org](https://www.linkedin.com/company/cknowledge).*

### Organizers

@@ -45,7 +45,7 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn
* *All contributors will participate in writing a common white paper about running and comparing MLPerf inference benchmarks out-of-the-box.*
* *All contributors will receive 1 point for submitting valid results for 1 complete benchmark on one system.*
* *All contributors will receive an official MLCommons Collective Knowledge contributor award (see [this example](https://ctuning.org/awards/ck-award-202307-zhu.pdf)).*
-* *The first implementation will receive 3 points and a prize of 300$.*
+* *The top contributors will receive cash prizes from [MLCommons organizations](https://mlcommons.org) and [cKnowledge.org](https://www.linkedin.com/company/cknowledge).*


### Organizers
@@ -19,8 +19,7 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn

* *All contributors will participate in writing a common white paper about running and comparing MLPerf inference benchmarks out-of-the-box.*
* *All contributors will receive an official MLCommons Collective Knowledge contributor award (see [this example](https://ctuning.org/awards/ck-award-202307-zhu.pdf)).*
-* *The first implementation will receive 2 points and a prize of 200$ and the fastest implementation will receive a prize of 200$.*
+* *The top contributors will receive cash prizes from [MLCommons organizations](https://mlcommons.org) and [cKnowledge.org](https://www.linkedin.com/company/cknowledge).*


### Organizers
@@ -23,7 +23,7 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn
* *The first implementation will receive a cash prize from organizers.*
* *All contributors will participate in writing a common white paper about running and comparing MLPerf inference benchmarks out-of-the-box.*
* *All contributors will receive 1 point for submitting valid results for 1 complete benchmark on one system.*
* *All contributors will receive an official MLCommons Collective Knowledge contributor award (see [this example](https://ctuning.org/awards/ck-award-202307-zhu.pdf)).*
* *The top contributors will receive cash prizes from [MLCommons organizations](https://mlcommons.org) and [cKnowledge.org](https://www.linkedin.com/company/cknowledge)*.

### Organizers

@@ -19,7 +19,7 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn

* *All contributors will participate in writing a common white paper about running and comparing MLPerf inference benchmarks out-of-the-box.*
* *All contributors will receive an official MLCommons Collective Knowledge contributor award (see [this example](https://ctuning.org/awards/ck-award-202307-zhu.pdf)).*
-* *The first implementation will receive 3 points and a prize of 200$ and the fastest implementation will receive a prize of 200$.*
+* *The top contributors will receive cash prizes from [MLCommons organizations](https://mlcommons.org) and [cKnowledge.org](https://www.linkedin.com/company/cknowledge).*



@@ -20,7 +20,7 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn
* *All contributors will participate in writing a common white paper about running and comparing MLPerf inference benchmarks out-of-the-box.*
* *All contributors will receive 1 point for submitting valid results for 1 complete benchmark on one system.*
* *All contributors will receive an official MLCommons Collective Knowledge contributor award (see [this example](https://ctuning.org/awards/ck-award-202307-zhu.pdf)).*
-* *The top 3 submitters by points will receive a prize of 150$ each.*
+* *The top contributors will receive cash prizes from [MLCommons organizations](https://mlcommons.org) and [cKnowledge.org](https://www.linkedin.com/company/cknowledge).*

### Organizers

@@ -18,7 +18,7 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn

* *All contributors will participate in writing a common white paper about running and comparing MLPerf inference benchmarks out-of-the-box.*
* *All contributors will receive an official MLCommons Collective Knowledge contributor award (see [this example](https://ctuning.org/awards/ck-award-202307-zhu.pdf)).*
-* *The first implementation will receive 2 points and a prize of 200$ and the fastest implementation will receive a prize of 200$.*
+* *The top contributors will receive cash prizes from [MLCommons organizations](https://mlcommons.org) and [cKnowledge.org](https://www.linkedin.com/company/cknowledge).*



@@ -18,7 +18,7 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn

* *All contributors will participate in writing a common white paper about running and comparing MLPerf inference benchmarks out-of-the-box.*
* *All contributors will receive an official MLCommons Collective Knowledge contributor award (see [this example](https://ctuning.org/awards/ck-award-202307-zhu.pdf)).*
-* *The first implementation will receive 2 points and a prize of 200$.*
+* *The top contributors will receive cash prizes from [MLCommons organizations](https://mlcommons.org) and [cKnowledge.org](https://www.linkedin.com/company/cknowledge).*


### Organizers
3 changes: 3 additions & 0 deletions cm-mlops/script/get-cuda-devices/_cm.json
@@ -14,6 +14,9 @@
],
"deps": [
{
+"names": [
+"cuda"
+],
"tags": "get,cuda,_toolkit"
}
],
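The added `names` field gives this dependency an alias, so other scripts and command lines can reference or customize it by name instead of by its full tag list. A minimal sketch of how a resolver might look up a dependency by such an alias (illustrative only, not the actual CM implementation):

```python
# Dependency entry mirroring the _cm.json fragment above.
deps = [
    {
        "names": ["cuda"],
        "tags": "get,cuda,_toolkit",
    }
]

def find_dep_by_name(deps, name):
    """Return the first dependency whose 'names' list contains `name`, else None."""
    for dep in deps:
        if name in dep.get("names", []):
            return dep
    return None

dep = find_dep_by_name(deps, "cuda")
print(dep["tags"])  # get,cuda,_toolkit
```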
10 changes: 5 additions & 5 deletions cm-mlops/script/gui/playground_challenges.py
@@ -204,7 +204,7 @@ def page(st, params):

prize = row.get('prize_short','')
if prize!='':
-x += '   Prizes from [MLCommons organizations]({}): **{}**\n'.format('https://mlcommons.org', prize)
+x += '   Prizes from [MLCommons organizations](https://mlcommons.org) and [cKnowledge.org](https://www.linkedin.com/company/cknowledge): **{}**\n'.format(prize)
if awards!='': awards+=' , '
awards += prize

@@ -225,7 +225,7 @@ def page(st, params):
import numpy as np

df = pd.DataFrame(data,
-columns=['Challenge', 'Closing&nbsp;date', 'Extension', 'Points', 'Contributor&nbsp;award and prizes from <a href="https://mlcommons.org">MLCommons&nbsp;organizations</a>'])
+columns=['Challenge', 'Closing&nbsp;date', 'Extension', 'Points', 'Contributor&nbsp;award and prizes from <a href="https://mlcommons.org">MLCommons&nbsp;organizations</a> and <a href="https://www.linkedin.com/company/cknowledge">cKnowledge.org</a>'])

df.index+=1

@@ -347,9 +347,9 @@ def page(st, params):
if prize_short!='':
z+='* **Prizes:** {}\n'.format(prize_short)

-prize = meta.get('prize','')
-if prize!='':
-z+='* **Student prizes:** {}\n'.format(prize)
+# prize = meta.get('prize','')
+# if prize!='':
+#     z+='* **Student prizes:** {}\n'.format(prize)


urls = meta.get('urls',[])
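The change above comments out the separate "Student prizes" line while keeping the short prize description; the string building itself is plain Python formatting. A self-contained sketch of the updated prize-line logic (the function name and `&nbsp;` spacing are illustrative, not the exact source):

```python
def prize_line(prize):
    # Build the markdown fragment shown on the challenge page when a
    # short prize description is present; empty string otherwise.
    if prize == '':
        return ''
    return ('&nbsp;&nbsp;&nbsp;Prizes from '
            '[MLCommons organizations](https://mlcommons.org) and '
            '[cKnowledge.org](https://www.linkedin.com/company/cknowledge): '
            '**{}**\n'.format(prize))

print(prize_line('co-authorship and contributor awards'))
```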
5 changes: 5 additions & 0 deletions cm-mlops/script/install-cuda-prebuilt/_cm.json
@@ -68,6 +68,11 @@
"env": {
"CM_CUDA_LINUX_FILENAME": "cuda_12.0.0_525.60.13_linux.run"
}
},
+"12.2.0": {
+"env": {
+"CM_CUDA_LINUX_FILENAME": "cuda_12.2.0_535.54.03_linux.run"
+}
+}
},
"variations": {
2 changes: 1 addition & 1 deletion cm-mlops/script/install-cuda-prebuilt/customize.py
@@ -10,7 +10,7 @@ def preprocess(i):
automation = i['automation']
version = env.get('CM_VERSION')
if version not in env.get('CM_CUDA_LINUX_FILENAME', ''):
-return {'return': 1, 'error': "Only CUDA versions 11.7.0 and 11.8.0 are supported now!"}
+return {'return': 1, 'error': "Only CUDA versions 11.7.0, 11.8.0, 12.0.0 and 12.2.0 are supported now!"}

recursion_spaces = i['recursion_spaces']
nvcc_bin = "nvcc"
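The check in `preprocess` is a substring match: the requested `CM_VERSION` must appear in the prebuilt installer filename selected from `_cm.json`. A self-contained sketch of that validation logic (the 11.7.0/11.8.0 filenames are illustrative placeholders; the 12.0.0 and 12.2.0 entries match the `_cm.json` fragment above):

```python
# Map of supported CUDA versions to prebuilt Linux installer filenames,
# mirroring the "versions" section of _cm.json (12.2.0 added in this PR).
SUPPORTED_CUDA = {
    "11.7.0": "cuda_11.7.0_515.43.04_linux.run",  # illustrative filename
    "11.8.0": "cuda_11.8.0_520.61.05_linux.run",  # illustrative filename
    "12.0.0": "cuda_12.0.0_525.60.13_linux.run",
    "12.2.0": "cuda_12.2.0_535.54.03_linux.run",
}

def check_version(version):
    """Return a CM-style result dict: 'return' 0 on success, 1 with 'error' otherwise."""
    filename = SUPPORTED_CUDA.get(version, "")
    # Substring check, as in customize.py: the version must appear in the filename.
    if not version or version not in filename:
        return {"return": 1,
                "error": "Only CUDA versions 11.7.0, 11.8.0, 12.0.0 and 12.2.0 are supported now!"}
    return {"return": 0, "env": {"CM_CUDA_LINUX_FILENAME": filename}}
```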