Use human readable name vs underscore name #135

Merged 2 commits (Mar 1, 2024)

Changes from all commits

src/coffee/templates/benchmark.html (2 changes: 1 addition & 1 deletion)

@@ -43,7 +43,7 @@ <h2>AI Systems Evaluated</h2>
 <tbody>
 {% for benchmark_score in grouped_benchmark_scores[benchmark_definition] %}
 <tr>
-<td>{{ benchmark_score.sut.name }}</td>
+<td>{{ benchmark_score.sut.display_name }}</td>
 <td>
 <span class="mlc--stars-container mlc--stars-container__inline">{{ benchmark_score.stars() | display_stars("sm") }}</span>
 <span>{{ stars_description[benchmark_score.stars() | round | int]["rank"] }}</span>
src/coffee/templates/test_report.html (4 changes: 2 additions & 2 deletions)

@@ -14,7 +14,7 @@
 
 <div class="mlc--section">
 <h2>Test Report</h2>
-<h1 class="mlc--header">{{ benchmark_score.sut.name }} - {{ benchmark_score.benchmark_definition.name() }} {% include "_provisional.html" %}</h1>
+<h1 class="mlc--header">{{ benchmark_score.sut.display_name }} - {{ benchmark_score.benchmark_definition.name() }} {% include "_provisional.html" %}</h1>
 <p class="mlc--subheader">
 Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod
 tempor incididunt ut labore et dolore quis nostrud exercitation ullamcaz

@@ -104,7 +104,7 @@ <h6 class="mlc--test-detail-header">Last run {{ tooltip_info("Last run info") }}
 </div>
 <div>
 <h6 class="mlc--test-detail-header">Model Display Name {{ tooltip_info("Model display info") }}</h6>
-<p>{{ benchmark_score.sut.name }}</p>
+<p>{{ benchmark_score.sut.display_name }}</p>
 </div>
 <div>
 <h6 class="mlc--test-detail-header">Model UID {{ tooltip_info("Info for model UID") }}</h6>
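Both template edits work because Jinja2 resolves dotted names with plain attribute access: once the SUT object exposes a `display_name` attribute, `{{ benchmark_score.sut.display_name }}` renders it with no other template changes. A minimal sketch of that behavior (the `Sut` dataclass below is a stand-in for illustration, not the project's real class):

```python
from dataclasses import dataclass
from jinja2 import Template


@dataclass
class Sut:
    name: str          # machine identifier, e.g. "GPT2"
    display_name: str  # human-readable label, e.g. "OpenAI GPT-2"


sut = Sut(name="GPT2", display_name="OpenAI GPT-2")
html = Template("<td>{{ sut.display_name }}</td>").render(sut=sut)
print(html)  # <td>OpenAI GPT-2</td>
```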
tests/test_benchmark.py (7 changes: 7 additions & 0 deletions)

@@ -115,6 +115,13 @@ def test_benchmark_score_standard_case():
     assert bs.stars() == 3.0
 
 
+def test_newhelm_sut_display_name_and_name():
+    assert NewhelmSut.GPT2.display_name == "OpenAI GPT-2"
+    assert NewhelmSut.GPT2.name == "GPT2"
+    assert NewhelmSut.LLAMA_2_7B.display_name == "Meta Llama 2, 7b parameters"
+    assert NewhelmSut.LLAMA_2_7B.name == "LLAMA_2_7B"
+
+
 @pytest.mark.datafiles(SIMPLE_BBQ_DATA)
 def test_bias_scoring(datafiles):
     with open(pathlib.Path(datafiles) / "test_records.pickle", "rb") as out:
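The new test pins down both attributes: Enum's built-in `name` stays the underscore identifier, while `display_name` carries the human-readable label. One way an enum could satisfy these assertions, assuming the display name is stored as each member's value (a sketch only; the repo's actual `NewhelmSut` definition may differ):

```python
from enum import Enum


class NewhelmSut(Enum):
    # Hypothetical sketch: member values hold the human-readable labels.
    GPT2 = "OpenAI GPT-2"
    LLAMA_2_7B = "Meta Llama 2, 7b parameters"

    @property
    def display_name(self) -> str:
        # Enum already provides `.name` ("GPT2", "LLAMA_2_7B");
        # this property exposes the friendly label.
        return self.value


assert NewhelmSut.GPT2.name == "GPT2"
assert NewhelmSut.GPT2.display_name == "OpenAI GPT-2"
```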