Codeparrot/codecomplex #806

Open · wants to merge 5 commits into base: eval-hackathon
16 changes: 15 additions & 1 deletion promptsource/templates.py
@@ -27,7 +27,21 @@
# These are users whose datasets should be included in the results returned by
# filter_english_datasets (regardless of their metadata)

INCLUDED_USERS = {"Zaid", "craffel", "GEM", "aps", "khalidalt", "shanya", "rbawden", "BigScienceBiasEval", "gsarti"}
INCLUDED_USERS = {
"Zaid",
"craffel",
"GEM",
"aps",
"khalidalt",
"shanya",
"rbawden",
"BigScienceBiasEval",
"gsarti",
"Helsinki-NLP",
"Muennighoff",
"facebook",
"codeparrot",
}

# These are the metrics with which templates can be tagged
METRICS = {
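Per the comment in this hunk, INCLUDED_USERS is an allow-list consulted by filter_english_datasets: datasets owned by these users or organizations (now including codeparrot, which hosts this PR's dataset) are kept even when their metadata does not mark them as English. A minimal sketch of that kind of check; the helper name and signature below are illustrative, not the actual promptsource implementation:

```python
# Illustrative sketch, not the real filter_english_datasets from promptsource.
# It shows how an owner allow-list can short-circuit a metadata language filter.
INCLUDED_USERS = {
    "Zaid", "craffel", "GEM", "aps", "khalidalt", "shanya", "rbawden",
    "BigScienceBiasEval", "gsarti", "Helsinki-NLP", "Muennighoff",
    "facebook", "codeparrot",
}

def keep_dataset(dataset_id: str, languages: list) -> bool:
    # Namespaced ids look like "codeparrot/codecomplex"; the part before
    # the slash is the owning user or organization.
    owner = dataset_id.split("/")[0] if "/" in dataset_id else None
    if owner in INCLUDED_USERS:
        return True  # kept regardless of metadata
    return "en" in languages  # otherwise require English language metadata

assert keep_dataset("codeparrot/codecomplex", [])         # allow-listed owner
assert not keep_dataset("someone/french-corpus", ["fr"])  # filtered out
```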
69 changes: 69 additions & 0 deletions promptsource/templates/clue/afqmc/templates.yaml
@@ -0,0 +1,69 @@
dataset: clue
subset: afqmc
templates:
997437fd-6888-482d-95e9-ffd867b497ee: !Template
answer_choices: no ||| yes
id: 997437fd-6888-482d-95e9-ffd867b497ee
jinja: 'Do "{{ sentence1 }}" and "{{ sentence2 }}" express the same thing?

|||

{{ answer_choices[label] }}'
metadata: !TemplateMetadata
choices_in_prompt: false
languages:
- zh
metrics:
- Accuracy
original_task: true
name: express_same_yes_no
reference: ''
a28370c0-d43b-405c-a9b1-4d77b3a27244: !Template
answer_choices: no ||| yes
id: a28370c0-d43b-405c-a9b1-4d77b3a27244
jinja: "\"{{ sentence1 }}\" and \"{{ sentence2 }}\" have the same meaning. Would\
\ you agree? Answer yes or no. \n|||\n{{ answer_choices[label] }}"
metadata: !TemplateMetadata
choices_in_prompt: true
languages:
- zh
metrics:
- Accuracy
original_task: true
name: same_meaning_agree
reference: ''
d8c303a6-61a4-47f9-8623-cc72cc3294eb: !Template
answer_choices: null
id: d8c303a6-61a4-47f9-8623-cc72cc3294eb
jinja: 'Generate another sentence that has the same meaning as "{{ sentence1 }}".

|||

{% if label == 1 %}

{{ sentence2}}

{% endif %}'
metadata: !TemplateMetadata
choices_in_prompt: false
languages:
- zh
metrics:
- ROUGE
original_task: false
name: generate_similar_sentence
reference: ''
e3fcaefd-4e8e-4491-aab7-8efeb67a2909: !Template
answer_choices: no ||| yes
id: e3fcaefd-4e8e-4491-aab7-8efeb67a2909
jinja: "Sentence 1: {{ sentence1 }}\nSentence 2: {{ sentence2 }}\nAre the two\
\ sentences similar? Yes or no? \n|||\n{{ answer_choices[label] }}"
metadata: !TemplateMetadata
choices_in_prompt: true
languages:
- zh
metrics:
- Accuracy
original_task: true
name: is_similar_yes_no
reference: ''
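With this branch installed, the new templates can be exercised through promptsource's DatasetTemplates API. A quick sketch; the example record is made up but follows the clue/afqmc schema (sentence1, sentence2, integer label) that the templates above index into:

```python
from promptsource.templates import DatasetTemplates

# Load the clue/afqmc templates added in this file.
afqmc_prompts = DatasetTemplates("clue", "afqmc")
template = afqmc_prompts["is_similar_yes_no"]

# Made-up record following the clue/afqmc schema.
example = {"sentence1": "第一句话", "sentence2": "第二句话", "label": 1}

# apply() renders the jinja and splits on "|||", yielding input and target.
input_text, target_text = template.apply(example)
print(input_text)   # Sentence 1 / Sentence 2 prompt ending "Yes or no?"
print(target_text)  # "yes", since label 1 maps to the second answer choice
```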
82 changes: 82 additions & 0 deletions promptsource/templates/clue/c3/templates.yaml
@@ -0,0 +1,82 @@
dataset: clue
subset: c3
templates:
51b3c3fe-2fa2-474a-81f9-5b421c884109: !Template
answer_choices: '{{ choice | join(" ||| ") }}'
id: 51b3c3fe-2fa2-474a-81f9-5b421c884109
jinja: "{% for statement in context %} \n{{ statement }}\n{% endfor %}\nGiven\
\ the dialogue / passage above, use the following options to answer the question\
\ \"{{question}}\".\nOptions: \n- {{ answer_choices | join('\\n- ') }}\n|||\n\
{{ answer }}"
metadata: !TemplateMetadata
choices_in_prompt: true
languages:
- zh
metrics:
- Accuracy
original_task: true
name: answer-question-affirmative
reference: ''
5e06f05f-d7dd-4329-b6d8-3a62dcdba838: !Template
answer_choices: '{{ choice | join(" ||| ") }}'
id: 5e06f05f-d7dd-4329-b6d8-3a62dcdba838
jinja: "Passage: {% for statement in context %} \n{{ statement }}\n{% endfor %}\n\
Question: \"{{question}}\"\nAnswer choices: {{ answer_choices[:-1] | join(',\
\ ') }}, or {{ answer_choices[-1] }}?\n|||\n{{ answer }}"
metadata: !TemplateMetadata
choices_in_prompt: true
languages:
- zh
metrics:
- Accuracy
original_task: true
name: question_choices_context
reference: ''
63b5e5df-40d3-47ee-b77e-bf385c042fa9: !Template
answer_choices: null
id: 63b5e5df-40d3-47ee-b77e-bf385c042fa9
jinja: "Passage: {% for statement in context %} \n{{ statement }}\n{% endfor %}\n\
What kind of question would elicit an answer of {{ answer }}?\n|||\n\
{{ question }}"
metadata: !TemplateMetadata
choices_in_prompt: false
languages:
- zh
metrics:
- ROUGE
original_task: false
name: generate_question
reference: ''
a5820d05-a8df-4e31-a284-6969e478174b: !Template
answer_choices: '{{ choice | join(" ||| ") }}'
id: a5820d05-a8df-4e31-a284-6969e478174b
jinja: "{% for statement in context %} \n{{ statement }}\n{% endfor %}\nGiven\
\ the dialogue / passage above, what is the answer for the question \"{{question}}\"\
\nAnswer choices: {{ answer_choices[:-1] | join(', ') }}, or {{ answer_choices[-1]\
\ }}?\n|||\n{{ answer }}"
metadata: !TemplateMetadata
choices_in_prompt: true
languages:
- zh
metrics:
- Accuracy
original_task: true
name: answer-question-interrogative
reference: ''
f15acc3f-e067-488f-b426-f65aa604da55: !Template
answer_choices: null
id: f15acc3f-e067-488f-b426-f65aa604da55
jinja: "{% for statement in context %} \n{{ statement }}\n{% endfor %}\nGiven\
\ the dialogue / passage above, what is the answer for the question \"{{question}}\"\
\n|||\n{{ answer }}"
metadata: !TemplateMetadata
choices_in_prompt: false
languages:
- zh
metrics:
- ROUGE
- BLEU
- Other
original_task: false
name: answer-question-interrogative-no-choices
reference: ''
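Unlike afqmc, answer_choices here is itself a jinja expression: `{{ choice | join(" ||| ") }}` expands each record's choice list into the `|||`-separated options promptsource expects, so the choices vary per example. A hedged sketch with a made-up record using the clue/c3 fields the templates reference (context as a list of turns, question, choice, answer):

```python
from promptsource.templates import DatasetTemplates

c3_prompts = DatasetTemplates("clue", "c3")
template = c3_prompts["answer-question-affirmative"]

# Made-up record; clue/c3 stores the dialogue as a list of strings and
# a per-example list of answer options.
example = {
    "context": ["男：今天天气怎么样？", "女：下雨了。"],
    "question": "今天天气怎么样？",
    "choice": ["晴天", "下雨", "下雪"],
    "answer": "下雨",
}

input_text, target_text = template.apply(example)
print(input_text)   # dialogue, the question, and the options as a "- " list
print(target_text)  # "下雨"
```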
78 changes: 78 additions & 0 deletions promptsource/templates/clue/cluewsc2020/templates.yaml
@@ -0,0 +1,78 @@
dataset: clue
subset: cluewsc2020
templates:
321f55bb-c725-4fbf-bb7e-d46ea2f510b8: !Template
answer_choices: correct ||| wrong
id: 321f55bb-c725-4fbf-bb7e-d46ea2f510b8
jinja: 'In class, a teacher asks what the word "{{ target[''span2_text'']
}}" refers to in the text "{{ text }}". The student answers "{{ target[''span1_text'']
}}". What would the teacher say? {{ answer_choices[0] | capitalize }} or {{
answer_choices[1] }}?

|||

{{ answer_choices[label] }}'
metadata: !TemplateMetadata
choices_in_prompt: true
languages:
- zh
metrics:
- Accuracy
original_task: true
name: teacher_asking_student
reference: ''
7282b4b5-f854-42af-8e75-d509608d97bb: !Template
answer_choices: null
id: 7282b4b5-f854-42af-8e75-d509608d97bb
jinja: 'What does the word "{{ target[''span2_text''] }}" refer to in the text
"{{ text }}"?

|||

{% if label == 0 %}

{{ target[''span1_text''] }}

{% endif %}'
metadata: !TemplateMetadata
choices_in_prompt: false
languages:
- zh
metrics:
- ROUGE
original_task: false
name: generate_correct_response
reference: ''
e649a609-f7b2-43da-800d-a32090e92221: !Template
answer_choices: yes ||| no
id: e649a609-f7b2-43da-800d-a32090e92221
jinja: "In the sentence \"{{ text }}\", does \"{{ target['span2_text'] }}\" refer\
\ to \"{{ target['span1_text'] }}\"? \n|||\n{{ answer_choices[label] }}"
metadata: !TemplateMetadata
choices_in_prompt: false
languages:
- zh
metrics:
- Accuracy
original_task: true
name: are_they_same
reference: ''
fc436a38-d9f5-4d17-bcf8-1e506bba5681: !Template
answer_choices: yes ||| no
id: fc436a38-d9f5-4d17-bcf8-1e506bba5681
jinja: 'In the sentence "{{ text }}", the word "{{ target[''span2_text''] }}"
refers to "{{ target[''span1_text''] }}". Answer {{ answer_choices[0] }} if
you agree; otherwise, answer {{ answer_choices[1] }}.

|||

{{ answer_choices[label] }}'
metadata: !TemplateMetadata
choices_in_prompt: true
languages:
- zh
metrics:
- Accuracy
original_task: true
name: affirmative_are_they_same
reference: ''
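These templates index into a nested target dict, and per the generate_correct_response template above, label 0 is the case where span2 really does refer to span1. A minimal sketch with a made-up record:

```python
from promptsource.templates import DatasetTemplates

wsc_prompts = DatasetTemplates("clue", "cluewsc2020")
template = wsc_prompts["are_they_same"]

# Made-up record; label 0 means the coreference holds, so the
# "yes ||| no" answer_choices map it to "yes".
example = {
    "text": "小明把书落在家里了，他只好回去拿。",
    "target": {"span1_text": "小明", "span2_text": "他"},
    "label": 0,
}

input_text, target_text = template.apply(example)
print(target_text)  # "yes"
```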
76 changes: 76 additions & 0 deletions promptsource/templates/clue/cmrc2018/templates.yaml
@@ -0,0 +1,76 @@
dataset: clue
subset: cmrc2018
templates:
3bba02e6-9266-418b-9ba0-4f71755cf3b6: !Template
answer_choices: null
id: 3bba02e6-9266-418b-9ba0-4f71755cf3b6
jinja: 'Given this context "{{ context }}", generate a question that would return
the answer of "{{ answers[''text''][0] }}".

|||

{{ question }} '
metadata: !TemplateMetadata
choices_in_prompt: false
languages:
- zh
metrics:
- ROUGE
original_task: false
name: generate_question
reference: ''
8fe02215-7881-4a61-a6e7-579680e40b9b: !Template
answer_choices: null
id: 8fe02215-7881-4a61-a6e7-579680e40b9b
jinja: "In an exam, you are asked {{ question }}, and you are tasked to find the\
\ answer from the following passage. \n{{ context }}\nWhat's the answer?\n|||\n\
{{ answers['text'][0] }}"
metadata: !TemplateMetadata
choices_in_prompt: false
languages:
- zh
metrics:
- Squad
original_task: true
name: in_an_exam
reference: ''
9e82f5da-b206-4758-94e6-085cf2608378: !Template
answer_choices: null
id: 9e82f5da-b206-4758-94e6-085cf2608378
jinja: '{{ context }}

The answer to {{ question }} is in the passage above. What is it?

|||

{{ answers[''text''][0] }}'
metadata: !TemplateMetadata
choices_in_prompt: false
languages:
- zh
metrics:
- Squad
original_task: true
name: answer_in_the_passage
reference: ''
9fb15385-814e-419a-b862-2d4e06a58ef6: !Template
answer_choices: null
id: 9fb15385-814e-419a-b862-2d4e06a58ef6
jinja: 'Answer the question using the given context.

Question: {{ question }}

Context: {{ context }}

Answer: |||

{{ answers[''text''][0] }}'
metadata: !TemplateMetadata
choices_in_prompt: false
languages:
- zh
metrics:
- Squad
original_task: true
name: answer_following_question
reference: ''
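cmrc2018 uses the SQuAD-style schema, which is why every target above is `answers['text'][0]`, the first gold answer span, and why the extractive templates are tagged with the Squad metric. A short sketch with a made-up record:

```python
from promptsource.templates import DatasetTemplates

cmrc_prompts = DatasetTemplates("clue", "cmrc2018")
template = cmrc_prompts["answer_following_question"]

# Made-up record in the SQuAD-style clue/cmrc2018 schema.
example = {
    "context": "北京是中华人民共和国的首都。",
    "question": "中国的首都是哪里？",
    "answers": {"text": ["北京"], "answer_start": [0]},
}

input_text, target_text = template.apply(example)
print(input_text)   # question plus context in the "Answer the question" frame
print(target_text)  # "北京"
```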