Take advantage of .coverage being a SQLite database #847
Filed a related issue with some ideas against
On closer inspection, I don't know if there's that much useful stuff you can do with the data from the `.coverage` database. Consider the following query against one of those databases:

```sql
select file_id, context_id, numbits_to_nums(numbits) from line_bits
```

It looks like this tells me which lines of which files were executed during the test run. But... without the actual source code, I don't think I can calculate the coverage percentage for each file. I don't want to count comment lines or whitespace as untested, for example, and I don't know how many lines were in each file. So it may not be possible to calculate percentage coverage from just the `.coverage` database.
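For illustration, here is a minimal sketch of what that query does. The decoder reimplements coverage.py's numbits bitmap (bit `j` of byte `i` set means line `i*8 + j` was executed) in pure Python instead of using `coverage.numbits`, returns JSON so SQLite can handle the value, and the `line_bits` table is faked in an in-memory database rather than a real `.coverage` file:

```python
import json
import sqlite3

def numbits_to_nums(numbits: bytes) -> str:
    # Decode the bitmap: bit j of byte i set means line i*8 + j was executed.
    # Return JSON text, since SQLite can't store a Python list directly.
    nums = [
        byte_i * 8 + bit_i
        for byte_i, byte in enumerate(numbits)
        for bit_i in range(8)
        if byte & (1 << bit_i)
    ]
    return json.dumps(nums)

conn = sqlite3.connect(":memory:")  # stand-in for a real .coverage file
conn.create_function("numbits_to_nums", 1, numbits_to_nums)

# Fake line_bits row: bits 1, 2 and 4 set, i.e. lines 1, 2 and 4 were run.
conn.execute("CREATE TABLE line_bits (file_id INT, context_id INT, numbits BLOB)")
conn.execute("INSERT INTO line_bits VALUES (1, 0, ?)", (bytes([0b10110]),))

for row in conn.execute(
    "SELECT file_id, context_id, numbits_to_nums(numbits) FROM line_bits"
):
    print(row)  # (1, 0, '[1, 2, 4]')
```

This also shows the limitation above: the query yields executed line numbers only, with no total line count to divide by.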
Here's the plugin that adds those custom SQLite functions:

```python
from datasette import hookimpl
from coverage.numbits import register_sqlite_functions

@hookimpl
def prepare_connection(conn):
    register_sqlite_functions(conn)
```
I'm happy enough with https://codecov.io/gh/simonw/datasette that I'm not going to spend any more time on this.
The `.coverage` file generated by running `pytest-cov` is now a SQLite database! I could do something interesting with this. Maybe after each test run for a new commit I could store that database file somewhere?

Lots of interesting challenges here.
I got a change into `coveragepy` last year which helps make the custom SQL functions available for doing fun things in Datasette: nedbat/coveragepy#868

Bigger challenge: if I have a DB file for every commit, that's hundreds (potentially thousands) of DB files. Datasette isn't designed to handle thousands of files like that.
So, do I figure out how to have Datasette open a file on-command for just a single request? Or, an easier option, do I copy data from those files into a single database with a modified schema to include the commit hash in each table row?
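The "copy into a single database" option could be sketched like this. The combined `line_bits` schema with an added `commit_hash` column is my own assumption, not anything coverage.py defines; only the source table and column names follow the coverage.py schema:

```python
import sqlite3

def merge_coverage_db(dest_path, src_path, commit_hash):
    # Copy line_bits rows from one .coverage file into a combined database,
    # tagging each row with the commit it came from.
    dest = sqlite3.connect(dest_path)
    dest.execute(
        "CREATE TABLE IF NOT EXISTS line_bits "
        "(commit_hash TEXT, file_id INTEGER, context_id INTEGER, numbits BLOB)"
    )
    src = sqlite3.connect(src_path)
    rows = src.execute("SELECT file_id, context_id, numbits FROM line_bits")
    dest.executemany(
        "INSERT INTO line_bits VALUES (?, ?, ?, ?)",
        ((commit_hash, f, c, n) for f, c, n in rows),
    )
    dest.commit()
    src.close()
    dest.close()
```

The other per-file tables (`file`, `context`, etc.) would need the same treatment, with `file_id`/`context_id` remapped or namespaced per commit, which this sketch ignores.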
(Following on from #841 and #844)