There is no competency question coverage report #1419
Comments
Can you please describe your idea?
Yes. In general, I want to write a prototype of a coverage-checking framework that goes through the terms extracted by the existing terms-and-definitions solution and through the competency questions, and from a comparison of both calculates a coverage:

coverage = sum(concepts in CQs that are in the existing terms and definitions) / sum(all existing terms and definitions)
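A minimal sketch of what that comparison could look like, assuming the extracted terms are available one label per line in a text file and the CQs live in plain-text files (all paths and helper names here are hypothetical, not the actual tooling):

```python
from pathlib import Path

def load_defined_terms(path: str) -> set[str]:
    """Read one concept label per line from the extracted terms file (hypothetical format)."""
    return {line.strip().lower()
            for line in Path(path).read_text().splitlines()
            if line.strip()}

def covered_terms(cq_dir: str, terms: set[str]) -> set[str]:
    """Collect every defined term mentioned in at least one competency question file."""
    covered: set[str] = set()
    for cq_file in Path(cq_dir).rglob("*.txt"):
        text = cq_file.read_text().lower()
        # Naive substring matching; a real implementation would need smarter concept extraction.
        covered |= {term for term in terms if term in text}
    return covered

terms = load_defined_terms("terms_and_definitions.txt")   # hypothetical path
covered = covered_terms("competency_questions", terms)    # hypothetical path
coverage = len(covered) / len(terms)                      # the formula above
print(f"CQ coverage: {coverage:.1%}")
```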
@l-emele Here is an example of the report: https://gist.github.com/areleu/c005c7ee8e5b0ebf9f70e41689dfb51e I have a draft in the MR, but I have yet to integrate it into the CI. It probably won't work locally for you because it needs some adjustments, like adding relative paths. We could use the CI to add a badge, although the 6% is still very discouraging. We have a lot of work to do. Edit 1: Let me know if there are any nice-to-haves you would like in the report.
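One way to surface the percentage as a badge from the CI would be to have the job emit a shields.io "endpoint" JSON file that a badge can point at; a sketch, with the file name and color thresholds chosen arbitrarily:

```python
import json

def write_badge_json(coverage: float, path: str = "cq_coverage_badge.json") -> None:
    """Write a shields.io 'endpoint' badge description for the CI job to publish."""
    badge = {
        "schemaVersion": 1,                     # required by the endpoint format
        "label": "CQ coverage",
        "message": f"{coverage:.0%}",
        "color": "red" if coverage < 0.5 else "brightgreen",
    }
    with open(path, "w") as f:
        json.dump(badge, f)

write_badge_json(0.06)  # e.g. the 6% from the draft report
```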
All CQs should pass because merging a branch to dev is not possible if a CQ fails, so this value should always be 100%. Edit: I opened issue #1421 to cover classes with competency questions.
I was hoping this would be the case, but we have these declared:
@OpenEnergyPlatform/oeo-general-expert-formal-ontology there is also a single "negative" question that should be inferred false. Is this something that we should consider including in general as well? I guess these would formally be soundness CQs. I also think one needs to structure the folders differently to ease navigation; right now it is relatively hard to tell which question does what. I suggest a folder structure under competency_questions along the lines sketched below. In general it can look more convoluted, but it is better for navigation.
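A hypothetical layout in that spirit, with purely illustrative directory names:

```
competency_questions/
├── inference/    # questions the reasoner should answer as true
├── soundness/    # "negative" questions that should be inferred false
└── scope/        # questions the OEO cannot answer yet but should in the future
```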
Competency questions are not unit tests. They can be used as such, but only once their respective concepts have been implemented. CQs are also used as scope definitions, that is, "questions that the ontology should be able to answer in the future". Therefore, contrary to unit tests in software development, there are not just two outcomes (success/failure); there is also "not yet covered". The questions in the line that you quoted contain concepts that were not covered by the OEO (when we created them) and could therefore not be inferred. An example would be … We should probably turn those questions into issues so that we can keep track of them!? But that might result in long-lasting issues, which is generally frowned upon in open-source projects.
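That third outcome could be made explicit in the test harness itself; a minimal sketch using pytest's skip mechanism, where the question texts, the `IMPLEMENTED` set, and `answer_cq` are purely illustrative stand-ins:

```python
import pytest

# Hypothetical registry of concepts the OEO already defines.
IMPLEMENTED = {"power plant", "energy transformation"}

def answer_cq(question: str) -> bool:
    """Stand-in for running the actual reasoner query behind a CQ."""
    return True  # placeholder: a real check would query the ontology

@pytest.mark.parametrize("concept, question", [
    ("power plant", "Is every power plant an artificial object?"),
    ("future concept", "A question whose concepts are not yet modelled"),
])
def test_competency_question(concept, question):
    if concept not in IMPLEMENTED:
        pytest.skip("not yet covered")  # the third outcome besides pass/fail
    assert answer_cq(question)          # ordinary pass/fail, as in a unit test
```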
Description of the issue
I have an idea of how to implement a coverage report for the competency questions. If I'm successful, I would like to push towards Test-Driven Development in the project.
Ideas of solution
This issue is a reference for the PR I will prepare.