
Average grades in export #2366

Open · wants to merge 3 commits into main from average-grades-in-export

Conversation

fidoriel (Collaborator) commented:
closes #2001

@fidoriel fidoriel force-pushed the average-grades-in-export branch 3 times, most recently from e5117a8 to d51385d Compare January 13, 2025 19:12
@fidoriel fidoriel force-pushed the average-grades-in-export branch from d51385d to af9697b Compare January 13, 2025 20:04
@fidoriel fidoriel force-pushed the average-grades-in-export branch from af9697b to 382680b Compare January 13, 2025 20:13
Comment on lines +129 to +136
if semesters:
evaluations_filter &= Q(course__semester__in=semesters)
if evaluation_states:
evaluations_filter &= Q(state__in=evaluation_states)
if program_ids:
evaluations_filter &= Q(course__programs__in=program_ids)
if course_type_ids:
evaluations_filter &= Q(course__type__in=course_type_ids)
Member:
The ifs should all use "is not None" checks to distinguish between an empty iterable and None.

fidoriel (Collaborator, Author) replied on Jan 27, 2025:

But then empty iterables would not be covered and the code would break; we want to skip both.
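The disagreement above comes down to Python truthiness. A minimal, hypothetical sketch (`applied_filters` is illustrative, not the project's code) showing that a plain `if value:` check skips both None and an empty iterable, while `is not None` keeps the empty iterable:

```python
def applied_filters(semesters, evaluation_states):
    """Return which filter names each check style would apply (hypothetical helper)."""
    candidates = {"semesters": semesters, "evaluation_states": evaluation_states}
    # Truthiness check: skips None AND empty iterables.
    truthy = [name for name, value in candidates.items() if value]
    # Explicit check: skips only None, keeps empty iterables.
    explicit = [name for name, value in candidates.items() if value is not None]
    return truthy, explicit

# An empty list is skipped by the truthiness check but kept by `is not None`:
truthy, explicit = applied_filters(semesters=[], evaluation_states=None)
print(truthy)    # -> []
print(explicit)  # -> ['semesters']
```

With `is not None`, an empty list would still be ANDed into the Q filter (matching nothing), which is the behavioral difference the two comments are debating.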

@@ -198,6 +203,8 @@ def write_headings_and_evaluation_info(
else:
self.write_cell(export_name, "headline")

self.write_cell("Average for this Question", "evaluation")
Member:

I think "Average for this Question" would need to be translated here.

The styles used here and in the lines changed below confuse me. Why is this using "evaluation"? Why is the empty cell in the course type column using the "program" style below?

Please double-check that these all make sense, and if we really need some unintuitive value, add some explanation as to why.

Comment on lines 229 to 230
# One more cell is needed for the question column
self.write_empty_row_with_styles(["default"] + ["border_left_right"] * len(evaluations_with_results))
self.write_empty_row_with_styles(["default"] + ["border_left_right"] * (len(evaluations_with_results) + 1))
Member:

The comment does not adequately explain the code anymore. Suggestion:

One column for the question, one column for the average, n columns for the evaluations

Comment on lines 262 to 264
# Borders only if there is a course grade below. Offset by one column
self.write_empty_row_with_styles(
["default"] + ["border_left_right" if gt1 else "default" for gt1 in count_gt_1]
["default", "default"] + ["border_left_right" if gt1 else "default" for gt1 in count_gt_1]
Member:

The comment doesn't match the code anymore.

)

self.write_cell(_("Evaluation weight"), "bold")
weight_percentages = (
self.write_cell("")
weight_percentages = tuple(
Member:

Why?

fidoriel (Collaborator, Author) replied on Jan 27, 2025:

We need to write an empty cell for the average column.

fidoriel (Collaborator, Author):

The tuple() conversion was something related to typing.

Comment on lines 325 to 328
avg_values = []
count_avg = 0
avg_approval = []
count_approval = 0
Member:

Same performance consideration here: Please don't build lists, just keep track of two numbers instead
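The suggestion above can be sketched as a standalone function (hypothetical name, not the exporter's code): instead of appending every value to a list such as avg_values and averaging at the end, keep only a running sum and a count.

```python
def streaming_average(values):
    """Average an iterable by tracking two numbers, without materializing a list."""
    total = 0.0
    count = 0
    for value in values:
        total += value
        count += 1
    # Return None when no values were seen, mirroring "no result" cells.
    return total / count if count else None

print(streaming_average(iter([2.0, 3.0, 4.0])))  # -> 3.0
```

This keeps memory constant per question regardless of how many evaluations are exported, which is the performance concern raised in this thread.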

if (
    results.get(questionnaire_id) is None
):  # ignore all results without the questionaire for average calculation
    continue
avg, percent_approval = cls._calculate_display_result(questionnaire_id, question, results)
Member:

@janno42 how much is performance a concern with this exporter? For production, I can see this making the exporter take 7x longer.

janno42 (Member) replied on Jan 20, 2025:

It's not time critical but shouldn't take longer than ~10 seconds (with ~100 evaluations in the semester).

# first cell of row is printed above
self.write_empty_row_with_styles(["border_left_right"] * len(evaluations_with_results))

for question in self.filter_text_and_heading_questions(questionnaire.questions.all()):
self.write_cell(question.text, "italic" if question.is_heading_question else "default")

question_average, question_approval_count = self._calculate_display_result_average(
Member:

Suggestion: average_grade and approval_ratio?

Comment on lines 371 to 377
if question_average is not None:
if question.is_yes_no_question:
self.write_cell(f"{question_approval_count:.0%}", self.grade_to_style(question_average))
else:
self.write_cell(question_average, self.grade_to_style(question_average))
else:
self.write_cell("", "border_left_right")
Member:

Can be unnested:

if approval_ratio is not None:
    self.write_cell(...)
elif average_grade is not None:
    self.write_cell(...)
else:
    self.write_cell(...)

In what situation can we have a question result here that doesn't have an average? We're putting the result here, so there must be at least our result, so there should always be an average, or am I missing something?

fidoriel (Collaborator, Author):

I think it's fine.

Comment on lines 331 to 334
if (
results.get(questionnaire_id) is None
): # ignore all results without the questionaire for average calculation
continue
Member:

From the code, it is very clear to me that this snippet skips results without a questionnaire. I'd much rather have the comment tell me 1. why this can happen in the first place (aren't all results always mapped to some question, which is part of some questionnaire?) and 2. why we want to skip them.

@fidoriel fidoriel force-pushed the average-grades-in-export branch from dced350 to 6163186 Compare January 27, 2025 18:58

Successfully merging this pull request may close these issues.

Add averages to results exports
3 participants