There is no competency question coverage report #1419

Closed
1 task
areleu opened this issue Nov 24, 2022 · 6 comments · Fixed by #1420

areleu (Contributor) commented Nov 24, 2022

Description of the issue

I have an idea on how to implement a coverage report for the competency questions. If I'm successful, I would like to push towards Test-Driven Development in the project.

Ideas of solution

This issue is a reference for the PR I will prepare.

Workflow checklist

  • I am aware of the workflow for this repository
@areleu areleu added the To do Issues that haven't got discussed yet label Nov 24, 2022
@areleu areleu self-assigned this Nov 24, 2022
l-emele (Contributor) commented Nov 24, 2022

Can you please describe your idea?

areleu (Contributor, Author) commented Nov 24, 2022

Can you please describe your idea?

Yes. In general, I want to write a prototype of a coverage-checking framework that goes through the codes extracted using the existing terms-and-definitions solution and through the competency questions, and calculates a coverage from the comparison of both.

coverage = sum(concepts in CQs that are in the existing terms and definitions) / sum(all existing terms and definitions)

The coverage framework could also report some interesting metrics, such as in which CQ a certain term is covered and whether the CQ was successfully checked.
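For illustration, a minimal Python sketch of that calculation; the function name and the example sets are hypothetical, not taken from the actual tooling:

def cq_coverage(all_terms, concepts_per_cq):
    # Fraction of existing terms that appear in at least one CQ.
    covered = set()
    for concepts in concepts_per_cq.values():
        covered |= concepts & all_terms
    return len(covered) / len(all_terms) if all_terms else 0.0

# Made-up example data:
terms = {"wind turbine", "coal", "power plant", "battery"}
cqs = {"q42.omn": {"wind turbine", "coal"}}
print(f"coverage: {cq_coverage(terms, cqs):.0%}")  # coverage: 50%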

areleu (Contributor, Author) commented Nov 24, 2022

@l-emele Here is an example of the report: https://gist.github.com/areleu/c005c7ee8e5b0ebf9f70e41689dfb51e

I have a draft in the MR, but I have yet to integrate it into the CI. It probably won't work locally for you because it needs some adjustments, like adding relative paths.

We could use the CI to add a badge, although the 6% is still very discouraging. We have a lot of work to do.

Edit 1: Let me know if there is any nice-to-have you would like in the report.
Edit 2: I could also calculate the consistency percentage based on the existing CQs and their pass rate.
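As an illustration of that consistency percentage, a small Python sketch; the per-CQ results dictionary is a hypothetical stand-in for the reasoner output:

# Hypothetical pass/fail results per CQ; in practice these would come from the reasoner run in CI.
results = {"q1.omn": True, "q2.omn": True, "q42.omn": False}
passed = sum(results.values())
consistency = passed / len(results) if results else 0.0
print(f"consistency: {consistency:.0%}")  # consistency: 67%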

@github-actions github-actions bot removed the To do Issues that haven't got discussed yet label Nov 24, 2022
l-emele (Contributor) commented Nov 25, 2022

Edit 2: I could also calculate the consistency percentage based on the existing CQs and their pass rate.

All CQs should pass because merging a branch to dev is not possible if a CQ fails. So this value should always be 100%.

Edit: I opened issue #1421 to cover classes with competency questions.

areleu (Contributor, Author) commented Nov 25, 2022

Edit 2: I could also calculate the consistency percentage based on the existing CQs and their pass rate.

All CQs should pass because merging a branch to dev is not possible if a CQ fails. So this value should always be 100%.

Edit: I opened issue #1421 to cover classes with competency questions.

I was hoping this would be the case, but we have these declared:

NON_INFERABLE_BUT_SHOULD="q42.omn q43.omn q44.omn q46.omn q47.omn q48.omn q49.omn q50.omn"

@OpenEnergyPlatform/oeo-general-expert-formal-ontology there is also a single "negative" question that should be inferred false. Is this something that we should also consider including in general? I guess these would formally be soundness CQs. I think one needs to structure the folders differently to ease navigation. Right now it is relatively hard to tell which question does what.

I suggest a folder structure like this:

competency_questions
|---> completeness
|     |---> shared
|     |     |---> competency_question_with_explicit_wording.omn
|     |     |---> ...
|     |---> model
|     |     |---> ...
|     |---> social
|     |     |---> ...
|     |---> physical
|     |     |---> ...
|---> soundness
|     |---> ...

In general it may look more convoluted, but it is better for navigation.

MGlauer (Contributor) commented Nov 25, 2022

I was hoping this to be the case but we have these declared:

NON_INFERABLE_BUT_SHOULD="q42.omn q43.omn q44.omn q46.omn q47.omn q48.omn q49.omn q50.omn"

Competency questions are not unit tests. They can be used as such, but only once their respective concepts have been implemented. CQs are also used as scope definitions, that is, "questions that the ontology should be able to answer in the future". Therefore, contrary to unit tests in software development, there are not just two outcomes (success/failure); there is also "not yet covered".
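To make that three-valued outcome concrete, a small Python sketch; the enum and the classification rule are illustrative assumptions, not the repository's actual test harness:

from enum import Enum

class CQStatus(Enum):
    PASSED = "inferred as expected"
    FAILED = "concepts implemented, but not inferred"
    NOT_COVERED = "concepts not yet in the ontology"

def classify(cq_concepts, ontology_terms, inferred_ok):
    # A CQ whose concepts are missing from the ontology is "not yet covered" rather than failed.
    if not cq_concepts <= ontology_terms:
        return CQStatus.NOT_COVERED
    return CQStatus.PASSED if inferred_ok else CQStatus.FAILED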

The questions in the line that you quoted contain concepts that were not covered by the OEO (when we created them) and could therefore not be inferred. An example would be "A wind turbine is not powered by coal." But, to be fair, I think nobody has taken a look at these questions since we created them. I guess we should work through them again and see whether we can add some of them to the active tests. I am also very much in favour of restructuring the folder structure here. 👍

We should probably turn those questions into issues so that we can keep track of them!? But that might result in long-lasting issues, which is generally frowned upon in open-source projects.

@stale stale bot added the stale already discussed issues that haven't got worked on for a while label Dec 20, 2022
@l-emele l-emele added this to the oeo-release-1.15.0 milestone Jan 16, 2023
@stale stale bot removed the stale already discussed issues that haven't got worked on for a while label Jan 16, 2023
@stale stale bot added the stale already discussed issues that haven't got worked on for a while label Feb 2, 2023
@stale stale bot removed stale already discussed issues that haven't got worked on for a while labels May 31, 2023
@stale stale bot added the stale already discussed issues that haven't got worked on for a while label Jun 15, 2023
@stale stale bot removed the stale already discussed issues that haven't got worked on for a while label Aug 1, 2023