Average grades in export #2366
base: main
Conversation
Force-pushed: e5117a8 → d51385d → af9697b → 382680b
if semesters:
    evaluations_filter &= Q(course__semester__in=semesters)
if evaluation_states:
    evaluations_filter &= Q(state__in=evaluation_states)
if program_ids:
    evaluations_filter &= Q(course__programs__in=program_ids)
if course_type_ids:
    evaluations_filter &= Q(course__type__in=course_type_ids)
The ifs should all use is not None checks to distinguish between an empty iterable and None.
But then empty iterables are not covered and the code breaks; we want to skip both cases.
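For readers of this thread, a minimal self-contained sketch of the two checking styles being discussed; the filter fields are copied from the quoted diff, everything else is illustrative:

from django.db.models import Q

def build_evaluations_filter(semesters=None, evaluation_states=None):
    # Illustrative only: shows both checking styles side by side.
    evaluations_filter = Q()

    # Truthiness check (current code): skips both None and an empty iterable,
    # so an empty selection does not add an always-false __in=[] filter.
    if semesters:
        evaluations_filter &= Q(course__semester__in=semesters)

    # "is not None" check (reviewer suggestion): an empty iterable would still
    # be applied here and match nothing, which the reply argues against.
    if evaluation_states is not None:
        evaluations_filter &= Q(state__in=evaluation_states)

    return evaluations_filter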
evap/results/exporters.py
Outdated
@@ -198,6 +203,8 @@ def write_headings_and_evaluation_info(
else:
    self.write_cell(export_name, "headline")

self.write_cell("Average for this Question", "evaluation")
I think "Average for this Question" would need to be translated here.
The styles used here and in the lines changed below confuse me. Why is this using "evaluation"? Why is the empty cell in the course type column using the "program" style below?
Please double-check that these all make sense, and if we really need some unintuitive value, add some explanation as to why.
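For illustration, a translated version of the quoted line could look like the excerpt below. This is a sketch, assuming the module aliases Django's gettext (or gettext_lazy) as _, as the _("Evaluation weight") call further down suggests:

from django.utils.translation import gettext as _

# Wrapping the literal in _() makes it translatable; the "evaluation" style is kept
# here only because the quoted diff uses it, pending the style question above.
self.write_cell(_("Average for this Question"), "evaluation")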
evap/results/exporters.py
Outdated
  # One more cell is needed for the question column
- self.write_empty_row_with_styles(["default"] + ["border_left_right"] * len(evaluations_with_results))
+ self.write_empty_row_with_styles(["default"] + ["border_left_right"] * (len(evaluations_with_results) + 1))
The comment does not adequately explain the code anymore. Suggestion:
One column for the question, one column for the average, n columns for the evaluations
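Applied to the quoted line, the suggested comment would read roughly:

# One column for the question, one column for the average, n columns for the evaluations
self.write_empty_row_with_styles(["default"] + ["border_left_right"] * (len(evaluations_with_results) + 1))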
evap/results/exporters.py
Outdated
  # Borders only if there is a course grade below. Offset by one column
  self.write_empty_row_with_styles(
-     ["default"] + ["border_left_right" if gt1 else "default" for gt1 in count_gt_1]
+     ["default", "default"] + ["border_left_right" if gt1 else "default" for gt1 in count_gt_1]
Comment doesn't match code anymore
  )

  self.write_cell(_("Evaluation weight"), "bold")
- weight_percentages = (
+ self.write_cell("")
+ weight_percentages = tuple(
Why?
We need to write an empty cell for the average column.
The tuple was for typing reasons.
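Putting both replies together, the changed lines might look roughly like this; the generator body is not shown in the quoted diff, so the helper and the iteration below are hypothetical placeholders:

self.write_cell(_("Evaluation weight"), "bold")
# Extra empty cell so this row stays aligned with the new average column.
self.write_cell("")
# tuple(...) instead of a bare generator expression, reportedly to satisfy the type
# checker: a tuple is a reusable Sequence, a generator can only be consumed once.
weight_percentages = tuple(
    format_weight(evaluation)  # hypothetical helper, not part of the diff
    for evaluation in evaluations_with_results
)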
evap/results/exporters.py
Outdated
avg_values = []
count_avg = 0
avg_approval = []
count_approval = 0
Same performance consideration here: Please don't build lists, just keep track of two numbers instead
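A self-contained sketch of the suggested change, with placeholder names and data since the surrounding loop is not shown in the hunk: keep running sums and counts instead of accumulating lists.

# Placeholder data standing in for the per-evaluation (grade, approval) results of one question.
per_evaluation_results = [(1.7, None), (2.3, None), (None, None)]

grade_sum = 0.0
grade_count = 0
approval_sum = 0.0
approval_count = 0

for grade, approval in per_evaluation_results:
    if grade is not None:
        grade_sum += grade
        grade_count += 1
    if approval is not None:
        approval_sum += approval
        approval_count += 1

# Averages fall back to None when no values were seen, avoiding division by zero.
question_average = grade_sum / grade_count if grade_count else None
question_approval = approval_sum / approval_count if approval_count else None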
evap/results/exporters.py
Outdated
    results.get(questionnaire_id) is None
):  # ignore all results without the questionaire for average calculation
    continue
avg, percent_approval = cls._calculate_display_result(questionnaire_id, question, results)
@janno42 how much is performance a concern with this exporter? For production, I can see this making the exporter take 7x longer.
It's not time critical but shouldn't take longer than ~10 seconds (with ~100 evaluations in the semester).
evap/results/exporters.py
Outdated
# first cell of row is printed above
self.write_empty_row_with_styles(["border_left_right"] * len(evaluations_with_results))

for question in self.filter_text_and_heading_questions(questionnaire.questions.all()):
    self.write_cell(question.text, "italic" if question.is_heading_question else "default")

    question_average, question_approval_count = self._calculate_display_result_average(
Suggestion: average_grade and approval_ratio?
evap/results/exporters.py
Outdated
if question_average is not None:
    if question.is_yes_no_question:
        self.write_cell(f"{question_approval_count:.0%}", self.grade_to_style(question_average))
    else:
        self.write_cell(question_average, self.grade_to_style(question_average))
else:
    self.write_cell("", "border_left_right")
Can be unnested:
if approval_ratio is not None:
    self.write_cell(...)
elif average_grade is not None:
    self.write_cell(...)
else:
    self.write_cell(...)
In what situation can we have a question result here that doesn't have an average? We're putting the result here, so there must be at least our result, so there should always be an average, or am I missing something?
I think it's fine.
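For reference, the unnested variant spelled out against the quoted diff might look like the sketch below; it assumes the approval ratio is only ever set for yes/no questions, and only together with an average, which matches the original nesting:

if approval_ratio is not None:
    # Yes/no questions: show the approval percentage, styled by the average grade.
    self.write_cell(f"{approval_ratio:.0%}", self.grade_to_style(average_grade))
elif average_grade is not None:
    self.write_cell(average_grade, self.grade_to_style(average_grade))
else:
    self.write_cell("", "border_left_right")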
evap/results/exporters.py
Outdated
if (
    results.get(questionnaire_id) is None
):  # ignore all results without the questionaire for average calculation
    continue
From the code, it is very clear to me that this snippet skips results without a questionnaire. I'd much rather the comment told me 1. why this can happen in the first place (aren't all results always mapped to some question, which needs to be part of some questionnaire?) and 2. why we want to skip them.
Force-pushed: dced350 → 6163186
closes #2001