
analysis: Improve log messages, warnings and errors #313

Merged
5 commits merged into master from analysis-error-messages on Nov 17, 2023

Conversation

@EwoutH (Collaborator) commented Nov 17, 2023

Improve the errors, warnings, and log messages in ema_workbench/analysis by:

  • Making them more explicit
  • Including the current (unexpected) value or type of the variable
  • Including the expected value or type
  • Stating the explicit type of warning or error (DeprecationWarning, ValueError, etc.)
  • Converting them to f-strings (for better in-code readability)
  • Starting warnings and errors with a capital letter, while leaving log statements lowercase

This will make debugging easier, allowing more time for model development and experimentation.
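As an illustration of the style described above (a hypothetical check with a made-up parameter name, not a line from this PR):

```python
def set_groups(n_groups):
    # Before: vague, no error type, no offending value.
    #   raise ValueError("invalid input")

    # After: explicit error type, the actual value received, and what was expected.
    if not isinstance(n_groups, int):
        raise TypeError(f"n_groups must be an int, instead of a {type(n_groups)}")
```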

Very similar to #300, but targeting the analysis package instead of the em_framework.

@EwoutH requested a review from quaquel on November 17, 2023 at 10:02
@EwoutH added this to the 2.5.0 milestone on Nov 17, 2023
Inline review on the changed code:

```python
    f"outcomes_to_show must be a list of strings or None, instead of a {type(outcomes_to_show)}"
)
# TODO: Apparently an empty list [] is allowed. Is this intended?
elif len(outcomes_to_show) == 1:
```

@EwoutH (Collaborator, Author):

@quaquel when I changed the check to len(outcomes_to_show) < 2, tests failed because they were passing an empty list [] (with length 0). Apparently this is allowed; is this intended?

@quaquel (Owner):

If you pass None or an empty list, it defaults to all outcomes; see prepare_data.
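A minimal sketch of the defaulting behavior described here (a hypothetical simplification; the real prepare_data in ema_workbench has a different signature):

```python
def prepare_data(outcomes, outcomes_to_show=None):
    # Both None and an empty list are falsy, so either selects all outcomes.
    if not outcomes_to_show:
        outcomes_to_show = list(outcomes.keys())
    return {name: outcomes[name] for name in outcomes_to_show}
```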

@EwoutH (Collaborator, Author):

Right, then we should explicitly allow empty lists.
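One way the type check could explicitly allow both None and an empty list (a sketch of the suggestion, not the actual patch):

```python
if outcomes_to_show is not None and not isinstance(outcomes_to_show, list):
    raise TypeError(
        f"outcomes_to_show must be a list of strings or None, "
        f"instead of a {type(outcomes_to_show)}"
    )
# None and [] both fall through here and later default to all outcomes.
```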

@quaquel (Owner):

Not sure. I know that a while ago I changed everything explicitly to None instead of using empty lists. I can't recall exactly why (perhaps because lists are mutable?).
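The pitfall likely being alluded to is Python's mutable default argument, where a default list is created once at function definition time and shared across calls:

```python
def append_to(value, target=[]):  # the default list is created once
    target.append(value)
    return target

append_to(1)  # [1]
append_to(2)  # [1, 2] -- state leaks between calls

def append_to_safe(value, target=None):  # the conventional fix
    if target is None:
        target = []
    target.append(value)
    return target
```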

@coveralls commented Nov 17, 2023

Coverage Status: 80.578% (-0.007%) from 80.585%, when pulling 6e53fd3 on analysis-error-messages into 2cfca39 on master.

@EwoutH merged commit 0ecf4a8 into master on Nov 17, 2023 (17 of 18 checks passed).
@EwoutH deleted the analysis-error-messages branch on November 17, 2023 at 13:02.