
docs clean up #870

Merged
merged 11 commits on Jul 22, 2023
4 changes: 4 additions & 0 deletions cm-mlops/automation/experiment/module.py
@@ -57,6 +57,10 @@ def test(self, i):

return {'return':0}





############################################################
def run(self, i):
"""
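The `module.py` hunk above follows the CM automation convention: every action such as `test` or `run` takes a single dict `i` and returns a dict whose integer `return` key is 0 on success and non-zero on error (with an `error` message). A minimal standalone sketch of that convention (the input validation shown is a hypothetical example, not the actual module logic):

```python
# Sketch of the CM automation return-dict convention seen in module.py:
# actions take a dict 'i' and return a dict where 'return' == 0 means
# success; a non-zero 'return' comes with an 'error' string.

def run(i):
    if not isinstance(i, dict):
        # Hypothetical validation step for illustration only
        return {'return': 1, 'error': 'input must be a dict'}
    # ... real work would happen here ...
    return {'return': 0}

def call_action(i):
    r = run(i)
    if r['return'] > 0:
        # Propagate the error dict unchanged, as CM modules do
        return r
    return {'return': 0}
```

Callers check `r['return']` rather than catching exceptions, which keeps the modules scriptable from both Python and the CLI.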
@@ -1,20 +1,23 @@
### Introduction

Our goal is to help the community benchmark and optimize various AI/ML applications
across diverse software and hardware provided by volunteers, in the spirit of SETI@home!

Open-source [MLPerf inference benchmarks](https://arxiv.org/abs/1911.02549)
were developed by a [consortium of 50+ companies and universities (MLCommons)](https://mlcommons.org)
to enable trustworthy and reproducible comparison of AI/ML systems
in terms of latency, throughput, power consumption, accuracy and other metrics
across diverse software/hardware stacks from different vendors.

However, running MLPerf inference benchmarks and submitting results [turned out to be a challenge](https://doi.org/10.5281/zenodo.8144274)
-even for experts and could easily take many weeks. That's why MLCommons,
+even for experts and could easily take many weeks. That's why [MLCommons](https://mlcommons.org),
[cTuning.org](https://www.linkedin.com/company/ctuning-foundation)
and [cKnowledge.org](https://www.linkedin.com/company/cknowledge)
decided to develop an open-source, technology-agnostic
and non-intrusive [Collective Mind automation language (CM)](https://github.com/mlcommons/ck)
and [Collective Knowledge Playground (CK)](https://access.cknowledge.org/playground/?action=experiments)
-to run, reproduce, optimize and compare MLPerf inference benchmarks out-of-the-box
-across diverse software, hardware, models and data sets from any vendor.
+to help anyone run, reproduce, optimize and compare MLPerf inference benchmarks out-of-the-box
+across diverse software, hardware, models and data sets.

You can read more about our vision, open-source technology and future plans
in this [presentation](https://doi.org/10.5281/zenodo.8105339).
@@ -23,19 +26,20 @@ in this [presentation](https://doi.org/10.5281/zenodo.8105339).

### Challenge

-We would like you to run as many MLPerf inference benchmarks on as many CPUs (Intel, AMD, Arm) and Nvidia GPUs
-as possible across different framework (ONNX, PyTorch, TF, TFLite)
+We would like to help volunteers run various MLPerf inference benchmarks
+on diverse CPUs (Intel, AMD, Arm) and Nvidia GPUs, similar to SETI@home,
+across different frameworks (ONNX, PyTorch, TF, TFLite)
either natively or in a cloud (AWS, Azure, GCP, Alibaba, Oracle, OVHcloud, ...)
-and submit official results to MLPerf inference v3.1.
+and submit results to MLPerf inference v3.1.

However, since some benchmarks may take 1 day to run, we suggest starting in the following order:
* [CPU: Reference implementation of Image Classification with ResNet50 (open and then closed division)](https://github.com/mlcommons/ck/blob/master/docs/mlperf/inference/resnet50/README_reference.md)
-* [CPU: TFLite C++ implementation of Image classification with variations of MobileNets and EfficientNets (open division)](https://github.com/mlcommons/ck/blob/master/cm-mlops/script/run-mlperf-inference-mobilenet-models/README-about.md)
* [Nvidia GPU: Nvidia optimized implementation of Image Classification with ResNet50 (open and then closed division)](https://github.com/mlcommons/ck/blob/master/docs/mlperf/inference/resnet50/README_nvidia.md)
* [Nvidia GPU: Nvidia optimized implementation of Language processing with BERT large (open and then closed division)](https://github.com/mlcommons/ck/blob/master/docs/mlperf/inference/bert/README_nvidia.md)
* [Nvidia GPU: Reference implementation of Image Classification with ResNet50 (open and then closed division)](https://github.com/mlcommons/ck/blob/master/docs/mlperf/inference/resnet50/README_reference.md)
* [Nvidia GPU: Reference implementation of Language processing with BERT large (open and then closed division)](https://github.com/mlcommons/ck/blob/master/docs/mlperf/inference/resnet50/README_reference.md)
* [Nvidia GPU (24GB of memory min): Reference implementation of Language processing with GPT-J 6B (open)](https://github.com/mlcommons/ck/blob/master/docs/mlperf/inference/gpt-j/README_reference.md)
+* [CPU: TFLite C++ implementation of Image classification with variations of MobileNets and EfficientNets (open division)](https://github.com/mlcommons/ck/blob/master/cm-mlops/script/run-mlperf-inference-mobilenet-models/README-about.md)
* [Nvidia GPU: Nvidia optimized implementation of all other models (open and closed division)](https://github.com/ctuning/mlcommons-ck/blob/master/docs/mlperf/inference/README.md#run-benchmarks-and-submit-results)

Please read [this documentation](https://github.com/mlcommons/ck/blob/master/docs/mlperf/inference/README.md)
@@ -60,7 +64,7 @@ Looking forward to your submissions and happy hacking!
* *All submitters will participate in writing a common white paper about running and comparing MLPerf inference benchmarks out-of-the-box.*
* *All submitters will receive 1 point for submitting valid results for 1 complete benchmark on one system.*
* *All submitters will receive an official MLCommons Collective Knowledge contributor award (see [this example](https://ctuning.org/awards/ck-award-202307-zhu.pdf)).*
-* *The top 3 submitters by points will receive a prize of 200$ each.*
+* *The top contributors will receive cash prizes from [MLCommons organizations](https://mlcommons.org) and [cKnowledge.org](https://www.linkedin.com/company/cknowledge)*.
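The scoring above (one point per valid, complete benchmark result on one system) is simple to tally. A hypothetical sketch of that arithmetic, not the official leaderboard code (submitter names and benchmarks below are made up):

```python
from collections import Counter

# Hypothetical submission log: (submitter, benchmark, system, valid?)
submissions = [
    ('alice', 'resnet50', 'laptop-cpu', True),
    ('alice', 'bert-99',  'gpu-box',    True),
    ('bob',   'resnet50', 'cloud-vm',   True),
    ('bob',   'gptj-6b',  'cloud-vm',   False),  # invalid result: no point
]

# 1 point per valid complete benchmark on one system, as described above
points = Counter(
    submitter for submitter, _, _, valid in submissions if valid
)
leaderboard = points.most_common()  # highest-scoring submitters first
```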


### Organizers
@@ -22,6 +22,6 @@
"mlperf-inference-v3.1-2023",
"v3.1"
],
-"title": "Run and optimize MLPerf inference v3.1 benchmarks (latency, throughput, power consumption, accuracy, cost)",
+"title": "Participate in collaborative benchmarking of AI/ML systems similar to SETI@home (latency, throughput, power consumption, accuracy, cost)",
"uid": "3e971d8089014d1f"
}
@@ -18,7 +18,7 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn

* *All contributors will participate in writing a common white paper about running and comparing MLPerf inference benchmarks out-of-the-box.*
* *All contributors will receive an official MLCommons Collective Knowledge contributor award (see [this example](https://ctuning.org/awards/ck-award-202307-zhu.pdf)).*
-* *The first implementation will receive 3 points and a prize of 400$ and the fastest implementation will receive a prize of 300$.*
+* *The top contributors will receive cash prizes from [MLCommons organizations](https://mlcommons.org) and [cKnowledge.org](https://www.linkedin.com/company/cknowledge)*.



@@ -19,7 +19,7 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn

* *All contributors will participate in writing a common white paper about running and comparing MLPerf inference benchmarks out-of-the-box.*
* *All contributors will receive an official MLCommons Collective Knowledge contributor award (see [this example](https://ctuning.org/awards/ck-award-202307-zhu.pdf)).*
-* *The first implementation will receive 2 points and a prize of 300$.*
+* *The top contributors will receive cash prizes from [MLCommons organizations](https://mlcommons.org) and [cKnowledge.org](https://www.linkedin.com/company/cknowledge)*.


### Organizers
@@ -13,10 +13,10 @@ Looking forward to your submissions and happy hacking!

### Prizes

-*All submitters will participate in writing a common white paper about running and comparing MLPerf inference benchmarks out-of-the-box.*
-*All submitters will receive 1 point for submitting valid results for 1 complete benchmark on one system.*
-*All submitters will receive an official MLCommons Collective Knowledge contributor award (see [this example](https://ctuning.org/awards/ck-award-202307-zhu.pdf)).*
-*The top 3 submitters by points will receive a prize of 200$ each.*
+* *All submitters will participate in writing a common white paper about running and comparing MLPerf inference benchmarks out-of-the-box.*
+* *All submitters will receive 1 point for submitting valid results for 1 complete benchmark on one system.*
+* *All submitters will receive an official MLCommons Collective Knowledge contributor award (see [this example](https://ctuning.org/awards/ck-award-202307-zhu.pdf)).*
+* *The top contributors will receive cash prizes from [MLCommons organizations](https://mlcommons.org) and [cKnowledge.org](https://www.linkedin.com/company/cknowledge)*.


### Organizers
@@ -20,7 +20,7 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn
* *All contributors will participate in writing a common white paper about running and comparing MLPerf inference benchmarks out-of-the-box.*
* *All contributors will receive 1 point for submitting valid results for 1 complete benchmark on one system.*
* *All contributors will receive an official MLCommons Collective Knowledge contributor award (see [this example](https://ctuning.org/awards/ck-award-202307-zhu.pdf)).*
-* *The top 3 submitters by points will receive a prize of 150$ each.*
+* *The top contributors will receive cash prizes from [MLCommons organizations](https://mlcommons.org) and [cKnowledge.org](https://www.linkedin.com/company/cknowledge)*.

### Organizers

@@ -20,7 +20,7 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn

* *All contributors will participate in writing a common white paper about running and comparing MLPerf inference benchmarks out-of-the-box.*
* *All contributors will receive an official MLCommons Collective Knowledge contributor award (see [this example](https://ctuning.org/awards/ck-award-202307-zhu.pdf)).*
-* *The first implementation will receive 3 points and a prize of 400$ and the fastest implementation will receive a prize of 300$.*
+* *The top contributors will receive cash prizes from [MLCommons organizations](https://mlcommons.org) and [cKnowledge.org](https://www.linkedin.com/company/cknowledge)*.

### Organizers

@@ -45,7 +45,7 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn
* *All contributors will participate in writing a common white paper about running and comparing MLPerf inference benchmarks out-of-the-box.*
* *All contributors will receive 1 point for submitting valid results for 1 complete benchmark on one system.*
* *All contributors will receive an official MLCommons Collective Knowledge contributor award (see [this example](https://ctuning.org/awards/ck-award-202307-zhu.pdf)).*
-* *The first implementation will receive 3 points and a prize of 300$.*
+* *The top contributors will receive cash prizes from [MLCommons organizations](https://mlcommons.org) and [cKnowledge.org](https://www.linkedin.com/company/cknowledge)*.


### Organizers
@@ -19,8 +19,7 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn

* *All contributors will participate in writing a common white paper about running and comparing MLPerf inference benchmarks out-of-the-box.*
* *All contributors will receive an official MLCommons Collective Knowledge contributor award (see [this example](https://ctuning.org/awards/ck-award-202307-zhu.pdf)).*
-* *The first implementation will receive 2 points and a prize of 200$ and the fastest implementation will receive a prize of 200$.*
-
+* *The top contributors will receive cash prizes from [MLCommons organizations](https://mlcommons.org) and [cKnowledge.org](https://www.linkedin.com/company/cknowledge)*.


### Organizers
@@ -23,7 +23,7 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn
* *The first implementation will receive a cash prize from organizers.*
* *All contributors will participate in writing a common white paper about running and comparing MLPerf inference benchmarks out-of-the-box.*
* *All contributors will receive 1 point for submitting valid results for 1 complete benchmark on one system.*
-* *All contributors will receive an official MLCommons Collective Knowledge contributor award (see [this example](https://ctuning.org/awards/ck-award-202307-zhu.pdf)).*
+* *The top contributors will receive cash prizes from [MLCommons organizations](https://mlcommons.org) and [cKnowledge.org](https://www.linkedin.com/company/cknowledge)*.

### Organizers

@@ -19,7 +19,7 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn

* *All contributors will participate in writing a common white paper about running and comparing MLPerf inference benchmarks out-of-the-box.*
* *All contributors will receive an official MLCommons Collective Knowledge contributor award (see [this example](https://ctuning.org/awards/ck-award-202307-zhu.pdf)).*
-* *The first implementation will receive 3 points and a prize of 200$ and the fastest implementation will receive a prize of 200$.*
+* *The top contributors will receive cash prizes from [MLCommons organizations](https://mlcommons.org) and [cKnowledge.org](https://www.linkedin.com/company/cknowledge)*.



@@ -20,7 +20,7 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn
* *All contributors will participate in writing a common white paper about running and comparing MLPerf inference benchmarks out-of-the-box.*
* *All contributors will receive 1 point for submitting valid results for 1 complete benchmark on one system.*
* *All contributors will receive an official MLCommons Collective Knowledge contributor award (see [this example](https://ctuning.org/awards/ck-award-202307-zhu.pdf)).*
-* *The top 3 submitters by points will receive a prize of 150$ each.*
+* *The top contributors will receive cash prizes from [MLCommons organizations](https://mlcommons.org) and [cKnowledge.org](https://www.linkedin.com/company/cknowledge)*.

### Organizers

@@ -18,7 +18,7 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn

* *All contributors will participate in writing a common white paper about running and comparing MLPerf inference benchmarks out-of-the-box.*
* *All contributors will receive an official MLCommons Collective Knowledge contributor award (see [this example](https://ctuning.org/awards/ck-award-202307-zhu.pdf)).*
-* *The first implementation will receive 2 points and a prize of 200$ and the fastest implementation will receive a prize of 200$.*
+* *The top contributors will receive cash prizes from [MLCommons organizations](https://mlcommons.org) and [cKnowledge.org](https://www.linkedin.com/company/cknowledge)*.



@@ -18,7 +18,7 @@ Check [this ACM REP'23 keynote](https://doi.org/10.5281/zenodo.8105339) to learn

* *All contributors will participate in writing a common white paper about running and comparing MLPerf inference benchmarks out-of-the-box.*
* *All contributors will receive an official MLCommons Collective Knowledge contributor award (see [this example](https://ctuning.org/awards/ck-award-202307-zhu.pdf)).*
-* *The first implementation will receive 2 points and a prize of 200$.*
+* *The top contributors will receive cash prizes from [MLCommons organizations](https://mlcommons.org) and [cKnowledge.org](https://www.linkedin.com/company/cknowledge)*.


### Organizers
10 changes: 5 additions & 5 deletions cm-mlops/script/gui/playground_challenges.py
@@ -204,7 +204,7 @@ def page(st, params):

prize = row.get('prize_short','')
if prize!='':
-x += '&nbsp;&nbsp;&nbsp;Prizes from [MLCommons organizations]({}): **{}**\n'.format('https://mlcommons.org', prize)
+x += '&nbsp;&nbsp;&nbsp;Prizes from [MLCommons organizations](https://mlcommons.org) and [cKnowledge.org](https://www.linkedin.com/company/cknowledge): **{}**\n'.format(prize)
if awards!='': awards+=' , '
awards += prize

@@ -225,7 +225,7 @@ def page(st, params):
import numpy as np

df = pd.DataFrame(data,
-columns=['Challenge', 'Closing&nbsp;date', 'Extension', 'Points', 'Contributor&nbsp;award and prizes from <a href="https://mlcommons.org">MLCommons&nbsp;organizations</a>'])
+columns=['Challenge', 'Closing&nbsp;date', 'Extension', 'Points', 'Contributor&nbsp;award and prizes from <a href="https://mlcommons.org">MLCommons&nbsp;organizations</a> and <a href="https://www.linkedin.com/company/cknowledge">cKnowledge.org</a>'])

df.index+=1

@@ -347,9 +347,9 @@ def page(st, params):
if prize_short!='':
z+='* **Prizes:** {}\n'.format(prize_short)

-prize = meta.get('prize','')
-if prize!='':
-z+='* **Student prizes:** {}\n'.format(prize)
+# prize = meta.get('prize','')
+# if prize!='':
+# z+='* **Student prizes:** {}\n'.format(prize)


urls = meta.get('urls',[])
5 changes: 5 additions & 0 deletions docs/news.md
@@ -2,6 +2,11 @@

## MLCommons CK and CM news

+### 202307
+
+The overview of the MedPerf project was published in Nature:
+[Federated benchmarking of medical artificial intelligence with MedPerf](https://www.nature.com/articles/s42256-023-00652-2)!

### 202306

We were honored to give a [keynote](https://doi.org/10.5281/zenodo.8105338) about our MLCommons automation and reproducibility language
59 changes: 4 additions & 55 deletions platform/register.md
@@ -1,59 +1,8 @@
# Register for Collective Knowledge challenges

Since the [MLCommons CK playground](https://access.cKnowledge.org)
is still under heavy development, registration is not yet automated via the CK GUI.

You can simply add your name, organization and URL in this [GitHub ticket](https://github.com/mlcommons/ck/issues/855).

Your name will be added to the [CK leaderboard](https://access.cknowledge.org/playground)
with 1 point after your PR is accepted (to support your intent to participate in our collaborative effort).

You can add yourself to this [GitHub repository](https://github.com/mlcommons/ck/tree/master/cm-mlops/contributor)
using our [CM automation language](https://doi.org/10.5281/zenodo.8105339) from the command line as follows.

Install [CM](../docs/installation.md) on your system.

Fork https://github.com/mlcommons/ck .

Pull it via CM as follows:

```bash
cm pull repo --url={URL of the fork of github.com/mlcommons/ck}
```

Note that if you already have the `mlcommons@ck` repository installed via CM,
you need to delete it and then install your fork:
```bash
cm rm repo mlcommons@ck --all
cm pull repo --url={URL of the fork of github.com/mlcommons/ck}
```
Create a new contributor with your name:
```bash
cm add contributor "your name"
```

CM will ask you a few questions and will create a new CM contributor entry with your name.

You can commit this entry to your fork and create a PR to https://github.com/mlcommons/ck .

*Note that you will need to sign the MLCommons CLA to contribute to MLCommons projects; approval by MLCommons may take a few days*.

Note that you will need CM and your fork of https://github.com/mlcommons/ck to participate in challenges,
so please keep and use it.

Happy hacking!

## Discussions

-You can now join the [public Discord server](https://discord.gg/JjWNWXKxwT)
+Please join the [public Discord server](https://discord.gg/JjWNWXKxwT)
from the [MLCommons Task Force on Automation and Reproducibility](../docs/taskforce.md)
to ask any questions, provide feedback and discuss challenges!

## Our mission

You can learn more about our mission [here](https://doi.org/10.5281/zenodo.8105339).

## Organizers
and send your name, organization and URL to @gfursin and @arjunsuresh
(task force co-chairs and organizers of open challenges).

* [Grigori Fursin](https://cKnowledge.org/gfursin) and [Arjun Suresh](https://www.linkedin.com/in/arjunsuresh)
([MLCommons](https://mlcommons.org), [cTuning.org](https://cTuning.org) and [cKnowledge.org](https://cKnowledge.org))
In the future, we plan to add a registration GUI to our [MLCommons Collective Knowledge playground](https://access.cKnowledge.org).