I'm using this with npm test, which runs an npx jest command that runs many tests. A max-score: 100 that results in either 0 points or 100 points, with no partial (percentage) credit for partial test success, is not useful to me. For reference, see classroom-resources/autograding-example-node#4
It would be interesting if there were an option for partial success that scaled the awarded points based on the number of tests (inside the aggregate) that pass.
Each testing environment is different, but maybe a test command (maybe a special script) can pass some environment variables for the number of tests that pass/fail and the action can calculate the points accordingly.
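To make the idea concrete, here is a minimal sketch of what such a script could look like for Jest. It assumes the script has access to the summary fields Jest emits with its --json flag (numPassedTests, numTotalTests); the partialScore helper and the MAX_SCORE environment variable are illustrative assumptions, not part of the autograding action's existing API.

```javascript
// Hypothetical sketch: scale points by the fraction of passing tests.
// A wrapper script could obtain the report via `npx jest --json` and
// hand the resulting numbers to the action (e.g. through env vars).
function partialScore(report, maxScore) {
  // numPassedTests / numTotalTests are the summary counts Jest
  // includes in its --json output.
  const { numPassedTests: passed, numTotalTests: total } = report;
  if (!total) return 0; // no tests ran: award nothing
  return Math.round((passed / total) * maxScore);
}

// Example: 7 of 10 tests passing earns 70 of 100 points.
const maxScore = Number(process.env.MAX_SCORE || 100);
const report = { numPassedTests: 7, numTotalTests: 10 };
console.log(partialScore(report, maxScore));
```

The same shape would work for any harness that can report machine-readable pass/fail counts; only the report-parsing step is Jest-specific.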
Otherwise, I don't see this action getting much uptake (at least in my courses) where multiple tests are defined inside a project using standard harnesses. The Copilot script I proposed in the issue cited above works for Jest, but as a workaround it generates a static YAML file (dynamically, based on the Jest configuration) and still invokes the action once for each test file. Even that isn't perfect: each test file (suite) can contain multiple tests, and the action still awards either all the points (if every test passes) or none (if at least one fails).