Merge branch 'mlcommons:master' into master

arjunsuresh authored Jun 25, 2024
2 parents 279d778 + c74ec8c commit c92cf52
Showing 21 changed files with 760 additions and 288 deletions.
24 changes: 23 additions & 1 deletion README.md
@@ -13,7 +13,29 @@ Please see the [MLPerf Inference benchmark paper](https://arxiv.org/abs/1911.025
primaryClass={cs.LG}
}
```
Please see the [MLPerf inference documentation website](https://docs.mlcommons.org/inference/benchmarks/), which includes automated commands to run the MLPerf inference benchmarks using different implementations.

## MLPerf Inference v4.1 (submission deadline July 26, 2024)

For submissions, please use the master branch and any commit since the [4.1 seed release](https://github.com/mlcommons/inference/pull/1736/files), although it is best to use the latest commit. The v4.1 tag will be created from the master branch after the results are published.

For power submissions, please use [SPEC PTD 1.10](https://github.com/mlcommons/power/tree/main/inference_v1.0) (needs special access) and any commit of the power-dev repository after the [code freeze](https://github.com/mlcommons/power-dev/pull/325).

| model | reference app | framework | dataset | category |
| ---- | ---- | ---- | ---- | ---- |
| resnet50-v1.5 | [vision/classification_and_detection](https://github.com/mlcommons/inference/tree/master/vision/classification_and_detection) | tensorflow, onnx, tvm, ncnn | imagenet2012 | edge,datacenter |
| retinanet 800x800 | [vision/classification_and_detection](https://github.com/mlcommons/inference/tree/master/vision/classification_and_detection) | pytorch, onnx | openimages resized to 800x800 | edge,datacenter |
| bert | [language/bert](https://github.com/mlcommons/inference/tree/master/language/bert) | tensorflow, pytorch, onnx | squad-1.1 | edge,datacenter |
| dlrm-v2 | [recommendation/dlrm_v2](https://github.com/mlcommons/inference/tree/master/recommendation/dlrm_v2/pytorch) | pytorch | Multihot Criteo Terabyte | datacenter |
| 3d-unet | [vision/medical_imaging/3d-unet-kits19](https://github.com/mlcommons/inference/tree/master/vision/medical_imaging/3d-unet-kits19) | pytorch, tensorflow, onnx | KiTS19 | edge,datacenter |
| gpt-j | [language/gpt-j](https://github.com/mlcommons/inference/tree/master/language/gpt-j)| pytorch | CNN-Daily Mail | edge,datacenter |
| stable-diffusion-xl | [text_to_image](https://github.com/mlcommons/inference/tree/master/text_to_image) | pytorch | COCO 2014 | edge,datacenter |
| llama2-70b | [language/llama2-70b](https://github.com/mlcommons/inference/tree/master/language/llama2-70b) | pytorch | OpenOrca | datacenter |
| mixtral-8x7b | [language/mixtral-8x7b](https://github.com/mlcommons/inference/tree/master/language/mixtral-8x7b) | pytorch | OpenOrca, MBXP, GSM8K | datacenter |

* Framework here is given for the reference implementation. Submitters are free to use their own frameworks to run the benchmark.

## MLPerf Inference v4.0 (submission February 23, 2024)

There is an extra one-week extension allowed only for the llama2-70b submissions. For submissions, please use the master branch and any commit since the [4.0 seed release](https://github.com/mlcommons/inference/commit/8e36925bd36a503e39fcbbc488e9e46126f079ed), although it is best to use the latest commit. The v4.0 tag will be created from the master branch after the results are published.

23 changes: 23 additions & 0 deletions docs/benchmarks/language/get-mixtral-8x7b-data.md
@@ -0,0 +1,23 @@
## Dataset

The benchmark implementation run command will automatically download the preprocessed validation and calibration datasets. If you want to download only the datasets, you can use the commands below.

=== "Validation"
The mixtral-8x7b validation run uses a combined dataset: OpenOrca, GSM8K, and MBXP.

### Get Validation Dataset
```
cm run script --tags=get,dataset-mixtral,openorca-mbxp-gsm8k-combined -j
```

## Model
The benchmark implementation run command will automatically download the required model and do the necessary conversions. If you want to download only the official model, you can use the commands below.

Get the Official MLPerf MIXTRAL-8x7b Model

=== "Pytorch"

### Pytorch
```
cm run script --tags=get,ml-model,mixtral -j
```
6 changes: 6 additions & 0 deletions docs/benchmarks/language/mixtral-8x7b.md
@@ -0,0 +1,6 @@

=== "MLCommons-Python"
## MLPerf Reference Implementation in Python

MIXTRAL-8x7b
{{ mlperf_inference_implementation_readme (4, "mixtral-8x7b", "reference") }}
132 changes: 132 additions & 0 deletions language/gpt-j/GPTJ_QDL.py
@@ -0,0 +1,132 @@
# For QDL
import threading
import requests
from time import sleep
import mlperf_loadgen as lg
import os
import numpy as np
import array
import time

class GPTJ_QDL:
    """QDL acting as a proxy to the SUT.
    This QDL communicates with the SUT via HTTP.
    It uses two endpoints to communicate with the SUT:
    - /predict/ : Send a query to the SUT and get a response.
    - /getname/ : Get the name of the SUT.
    """
    def __init__(self, qsl, sut_server_addr: list, scenario: str):
        self.scenario = scenario
        self.sut_server_addr = sut_server_addr
        self.num_nodes = len(sut_server_addr)
        self.qsl = qsl

        # Construct QDL from the python binding
        self.qdl = lg.ConstructQDL(
            self.issue_query, self.flush_queries, self.client_get_name)
        print("Finished constructing QDL!")

        # For round robin between the SUTs:
        self.next_sut_id = 0
        self.lock = threading.Lock()

    def issue_query(self, query_samples):
        """Process the query to send to the SUT"""
        threading.Thread(target=self.process_query_async,
                         args=[query_samples]).start()

    def flush_queries(self):
        """Flush the queries. Dummy implementation."""
        pass

    def process_query_async(self, query_samples):
        """
        This function is called by the Loadgen in a separate thread.
        It is responsible for
            1. Creating a query for the SUT, by reading the features from the QSL.
            2. Sending the query to the SUT.
            3. Waiting for the response from the SUT.
            4. Deserializing the response.
            5. Calling mlperf_loadgen.QuerySamplesComplete(query_samples, response)
        Args:
            query_samples: A list of QuerySample objects.
        """

        max_num_threads = int(os.environ.get('CM_MAX_NUM_THREADS', os.cpu_count()))
        if self.scenario == "Offline":
            # The client sends multiple requests using threads.
            # It pauses when the number of active threads reaches the configured maximum
            # and only issues the next request after receiving a response from the server
            # for one of the currently active threads.
            print("Executing Offline scenario!")
            for i in range(len(query_samples)):
                index = query_samples[i].index
                input_ids_tensor = self.qsl.data_object.source_encoded_input_ids[index]
                input_masks_tensor = self.qsl.data_object.source_encoded_attn_masks[index]
                text = self.qsl.data_object.sources[index]
                query = {
                    "input_text": text,
                    "input_ids_tensor": input_ids_tensor.tolist(),
                    "input_masks_tensor": input_masks_tensor.tolist()
                }
                n = threading.active_count()
                while n >= max_num_threads:
                    sleep(0.0001)
                    n = threading.active_count()
                threading.Thread(target=self.client_predict_worker,
                                 args=[query, query_samples[i].id]).start()
        if self.scenario == "Server":
            # The client sends one request at a time to the server;
            # the number of samples can vary based on the Poisson arrival distribution.
            index = query_samples[0].index
            input_ids_tensor = self.qsl.data_object.source_encoded_input_ids[index]
            input_masks_tensor = self.qsl.data_object.source_encoded_attn_masks[index]
            text = self.qsl.data_object.sources[index]
            query = {
                "input_text": text,
                "input_ids_tensor": input_ids_tensor.tolist(),
                "input_masks_tensor": input_masks_tensor.tolist()
            }
            self.client_predict_worker(query, query_samples[0].id)

    def get_sut_id_round_robin(self):
        """Get the SUT id in round robin."""
        with self.lock:
            res = self.next_sut_id
            self.next_sut_id = (self.next_sut_id + 1) % self.num_nodes
        return res

    def client_predict_worker(self, query, query_id):
        """Serialize the query, send it to a SUT in round robin, and complete the LoadGen query with the deserialized response."""
        url = '{}/predict/'.format(self.sut_server_addr[self.get_sut_id_round_robin()])
        responses = []
        # Start the timer
        startTime = time.time()
        # Send the query to the SUT via POST; this blocks until the response arrives
        response = requests.post(url, json={'query': query})
        # Stop the timer and report the end-to-end latency
        endTime = time.time()
        print(f"Latency = {endTime-startTime}")
        output = response.json()['result']
        response_text = output["response_text"]
        print(query["input_text"])
        print(response_text)

        output_batch = np.array(output["pred_output_batch"]).astype(np.int32)
        response_array = array.array("B", output_batch.tobytes())
        bi = response_array.buffer_info()

        responses.append(lg.QuerySampleResponse(query_id, bi[0], bi[1]))
        lg.QuerySamplesComplete(responses)

    def client_get_name(self):
        """Get the name of the SUT from ALL the SUTs."""
        if len(self.sut_server_addr) == 1:
            return requests.post(f'{self.sut_server_addr[0]}/getname/').json()['name']

        sut_names = [requests.post(f'{addr}/getname/').json()['name'] for addr in self.sut_server_addr]
        return "Multi-node SUT: " + ', '.join(sut_names)

    def __del__(self):
        lg.DestroyQDL(self.qdl)
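
GPTJ_QDL.py implements only the client side of this HTTP contract. The sketch below illustrates what a SUT-side server answering the `/predict/` and `/getname/` endpoints could look like; it is an illustrative assumption (Flask, port 8000, and the `run_inference` placeholder are not taken from the reference SUT), with the JSON keys mirroring what `client_predict_worker` and `client_get_name` read from the responses.

```
# Illustrative sketch (assumption): a minimal SUT-side HTTP server matching the
# /predict/ and /getname/ endpoints that GPTJ_QDL posts to. The reference SUT
# may differ; run_inference and the port are hypothetical placeholders.
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_inference(query):
    # Hypothetical placeholder: run the model on query["input_ids_tensor"] /
    # query["input_masks_tensor"] and return generated token ids plus text.
    pred_output_batch = [[0]]          # token ids, shape [batch, seq_len]
    response_text = "dummy summary"
    return pred_output_batch, response_text

@app.route("/predict/", methods=["POST"])
def predict():
    query = request.get_json()["query"]
    pred_output_batch, response_text = run_inference(query)
    # Keys must match what GPTJ_QDL.client_predict_worker reads from the response
    return jsonify(result={"pred_output_batch": pred_output_batch,
                           "response_text": response_text})

@app.route("/getname/", methods=["POST"])
def getname():
    return jsonify(name="Example GPT-J SUT (sketch)")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```
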
20 changes: 20 additions & 0 deletions language/gpt-j/GPTJ_QSL.py
@@ -0,0 +1,20 @@
import mlperf_loadgen as lg
from dataset import Dataset

class GPTJ_QSL():
    def __init__(self, dataset_path: str, max_examples: int):
        self.dataset_path = dataset_path
        self.max_examples = max_examples

        # creating data object for QSL
        self.data_object = Dataset(
            self.dataset_path, total_count_override=self.max_examples)

        # construct QSL from python binding
        self.qsl = lg.ConstructQSL(self.data_object.count, self.data_object.perf_count,
                                   self.data_object.LoadSamplesToRam, self.data_object.UnloadSamplesFromRam)

        print("Finished constructing QSL.")

def get_GPTJ_QSL(dataset_path: str, max_examples: int):
    return GPTJ_QSL(dataset_path, max_examples)
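
GPTJ_QSL.py pairs with GPTJ_QDL.py when LoadGen issues queries over the network. Below is a minimal sketch of how the two could be wired together; the dataset path, sample count, SUT address, and the use of the QDL handle in place of a SUT handle in `lg.StartTest` are illustrative assumptions rather than the reference harness.

```
# Illustrative sketch (assumption): wiring the QSL and QDL for a
# LoadGen-over-the-network run. Paths, counts, and addresses are hypothetical.
import mlperf_loadgen as lg

from GPTJ_QSL import get_GPTJ_QSL
from GPTJ_QDL import GPTJ_QDL

qsl = get_GPTJ_QSL(dataset_path="cnn_eval.json", max_examples=13368)  # hypothetical values
qdl = GPTJ_QDL(qsl=qsl,
               sut_server_addr=["http://localhost:8000"],
               scenario="Offline")

settings = lg.TestSettings()
settings.scenario = lg.TestScenario.Offline
settings.mode = lg.TestMode.PerformanceOnly

# Assumption: the QDL handle stands in for the SUT handle when LoadGen
# drives queries over the network.
lg.StartTest(qdl.qdl, qsl.qsl, settings)
lg.DestroyQSL(qsl.qsl)
```
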
23 changes: 23 additions & 0 deletions language/gpt-j/README.md
@@ -135,3 +135,26 @@ This is a comprehensive list of public datasets and models used by this repository
| [gpt-j-6b (Hugging Face)](https://huggingface.co/EleutherAI/gpt-j-6b) | PyTorch | Text Summarization |

Intel expressly disclaims the accuracy, adequacy, or completeness of any data, datasets or models, and is not liable for any errors, omissions, or defects in such content, or for any reliance thereon. Intel also expressly disclaims any warranty of non-infringement with respect to such data, dataset(s), or model(s). Intel is not liable for any liability or damages relating to your use of such data, datasets or models.


## Loadgen over the Network

The CM command below will launch the SUT server:

```
cm run script --tags=run-mlperf,inference,_performance-only --model=gptj-99 \
--backend=pytorch --device=cuda --beam_size=1 --precision=bfloat16 \
--network=sut --rerun --quiet --adr.compiler.tags=gcc
```

#### Note:
In our experiments, we found that, in addition to the memory occupied by the model, the KV cache occupies roughly 6 × beam_size GB of memory; for example, with `--beam_size=1` (as in the command above) that is about 6 GB on top of the model weights.

Once the SUT server is launched, the command below can be run on the loadgen node to issue queries to the SUT nodes. In this command, `--sut_servers` has just the localhost address; it can be changed to a comma-separated list of any hostnames/IPs in the network.

```
cm run script --tags=run-mlperf,inference,_performance-only --model=gptj-99 \
--backend=pytorch --test_query_count=30 \
--network=lon --rerun --quiet --scenario=Offline \
--sut_servers,=http://localhost:8000 --adr.compiler.tags=gcc
```
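
Before starting the loadgen run, you can optionally check that a SUT node is reachable by querying the same `/getname/` endpoint the QDL uses; a minimal sketch, assuming the SUT server from the first command is listening on `http://localhost:8000`:

```
# Sanity check (assumption: the SUT server is reachable at localhost:8000)
import requests

name = requests.post("http://localhost:8000/getname/").json()["name"]
print(f"Connected to SUT: {name}")
```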