sqlite-utils extract could handle nested objects #239
The Python `extract()` method currently starts like this:

```python
def extract(self, columns, table=None, fk_column=None, rename=None):
    rename = rename or {}
    if isinstance(columns, str):
        columns = [columns]
    if not set(columns).issubset(self.columns_dict.keys()):
        raise InvalidColumns(
            "Invalid columns {} for table with columns {}".format(
                columns, list(self.columns_dict.keys())
            )
        )
    ...
```

Note that it takes a list of columns (and treats a string as a single-item list). That's because it can be called with a list of columns, which it uses to populate another table of unique tuples of those column values. So a new mechanism that can instead read JSON values from a single column needs to be compatible with that existing design.
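To make the "unique tuples" design concrete, here is a minimal stdlib-only sketch of what that existing extraction does; the table and column names are illustrative, and this is not the sqlite-utils implementation itself:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE trees (id INTEGER PRIMARY KEY, species TEXT, common_name TEXT)")
db.executemany(
    "INSERT INTO trees (species, common_name) VALUES (?, ?)",
    [("Quercus robur", "Oak"), ("Quercus robur", "Oak"), ("Pinus sylvestris", "Pine")],
)

# 1. Build a lookup table from the unique tuples of the extracted columns
db.execute("CREATE TABLE species (id INTEGER PRIMARY KEY, species TEXT, common_name TEXT)")
db.execute(
    "INSERT INTO species (species, common_name) "
    "SELECT DISTINCT species, common_name FROM trees"
)

# 2. Point the original rows at the lookup table through a foreign key column
db.execute("ALTER TABLE trees ADD COLUMN species_id INTEGER REFERENCES species(id)")
db.execute(
    "UPDATE trees SET species_id = ("
    "  SELECT id FROM species"
    "  WHERE species.species = trees.species"
    "  AND species.common_name = trees.common_name)"
)

# The lookup table holds only the distinct tuples (2 here, not 3)
print(db.execute("SELECT count(*) FROM species").fetchone()[0])
```

Because the distinct tuples are known before the lookup table is populated, the table and its column types can be created up front, which is the constraint discussed later in this thread.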
Likewise the
For the Python version I'd like to be able to provide a transformation callback function - which can be
It would be OK if the CLI version only allows you to specify a single column if you are using the
Maybe the Python version takes an optional dictionary mapping column names to transformation functions? It could then merge all of those results together - and maybe throw an error if the same key is produced by more than one column.

```python
db["Reports"].extract(["Reported by"], transform={"Reported by": json.loads})
```

Or it could have an option for different strategies if keys collide: first wins, last wins, throw an exception, add a prefix to the new column name. That feels a bit too complex for an edge-case though.
I'm going to go with last-wins - so if multiple transform functions return the same key, the last one will overwrite the others.
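A minimal sketch of the last-wins merge being described; the column names and the `transform` dictionary shape are taken from the proposal above, but this is illustrative code, not the actual implementation:

```python
import json

# One row from a hypothetical table with two JSON-bearing columns
row = {
    "Reported by": '{"id": 1, "name": "Alice"}',
    "Reviewed by": '{"id": 2, "name": "Bob"}',
}
transform = {"Reported by": json.loads, "Reviewed by": json.loads}

merged = {}
for column, fn in transform.items():  # dicts preserve insertion order (Python 3.7+)
    merged.update(fn(row[column]))    # later columns overwrite earlier keys

print(merged)  # {"id": 2, "name": "Bob"} - "Reviewed by" wins for both keys
```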
Problem with calling this argument: I could use ... but that doesn't instantly make me think of turning a value into multiple columns. How about
I think that works. You're expanding a single value into several columns of information.
Here's the current implementation: sqlite-utils/sqlite_utils/db.py, lines 1049 to 1074 in 806c210.
Tricky detail here: I create the lookup table first, based on the types of the columns that are being extracted. I need to do this because extraction currently uses unique tuples of values, so the table has to be created in advance. But if I'm using these new expand functions to figure out what's going to be extracted, I don't know the names of the columns and their types in advance. I'm only going to find those out during the transformation. This may turn out to be incompatible with how I can still use the existing
WIP in a pull request. |
This came up in office hours! |
If there's no primary key in the JSON it could use the
Could this handle lists of objects too? That would be pretty amazing - if the column has a
I am super interested in this feature. After reading the other issues you referenced, I think the right way would be to use the current extract feature and then to use
I think I only wonder how I would parse the JSON. My naive approach would have been
I was looking for something like this today, for extracting columns containing objects (and arrays of objects) into separate tables. Would it make sense (especially for the fields containing arrays of objects) to create a one-to-many relationship, where each row of the newly created table would contain the id of the row that originally contained it? If the extracted objects have a unique id and are repeated, it could even create a many-to-many relationship, with a third table for the joins.
Yeah having a version of this that can set up m2m relationships would definitely be interesting.
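The many-to-many shape suggested above could look something like this stdlib-only sketch: repeated objects with their own ids land in one table, with a join table linking them back to the original rows. The `articles`/`tags` schema is hypothetical, purely for illustration:

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, tags TEXT)")
db.executemany(
    "INSERT INTO articles (id, tags) VALUES (?, ?)",
    [
        (1, json.dumps([{"id": 10, "name": "python"}, {"id": 11, "name": "sqlite"}])),
        (2, json.dumps([{"id": 10, "name": "python"}])),  # tag 10 repeats
    ],
)

db.execute("CREATE TABLE tags (id INTEGER PRIMARY KEY, name TEXT)")
db.execute(
    "CREATE TABLE articles_tags (article_id INTEGER, tag_id INTEGER, "
    "PRIMARY KEY (article_id, tag_id))"
)

for article_id, tags_json in db.execute("SELECT id, tags FROM articles").fetchall():
    for tag in json.loads(tags_json):
        # Repeated tags collapse into a single row thanks to the primary key
        db.execute("INSERT OR IGNORE INTO tags (id, name) VALUES (?, ?)",
                   (tag["id"], tag["name"]))
        db.execute("INSERT INTO articles_tags VALUES (?, ?)",
                   (article_id, tag["id"]))
```

The join table ends up with one row per (article, tag) pair, while the tags table holds each distinct object exactly once.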
Could the extract CLI command also accept --flatten? E.g. if you have a structure like this:

```json
{"batch_id": 1, "jobs": [{"job_id": 1, "name": "job 1"}, {"job_id": 2, "name": "job 2"}]}
```

Then after a sqlite-utils insert, the batch table will contain a jobs column of type text with:

```json
[{"job_id": 1, "name": "job 1"}, {"job_id": 2, "name": "job 2"}]
```

Running extract on this text field will create a new table, but its content will still be of type text. With extract --flatten it could populate a one-to-many relationship and create the job_id and name columns on the new table. If it's a nested object, it could populate a one-to-one relationship with the created columns.

I came across this great tool today and tested it, but didn't find a way to handle nested objects or arrays when inserting from JSON. But maybe there are some other ways to achieve this with the convert command?
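A stdlib-only sketch of what the proposed --flatten behavior might do with the batch/jobs document above; the flag does not exist, so the table names and steps here are an illustration of the idea, not sqlite-utils behavior:

```python
import json
import sqlite3

doc = {"batch_id": 1, "jobs": [{"job_id": 1, "name": "job 1"},
                               {"job_id": 2, "name": "job 2"}]}

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE batch (batch_id INTEGER PRIMARY KEY, jobs TEXT)")
db.execute("INSERT INTO batch VALUES (?, ?)",
           (doc["batch_id"], json.dumps(doc["jobs"])))

# The "flatten" step: one row per array element, keyed back to its batch
db.execute(
    "CREATE TABLE jobs (job_id INTEGER PRIMARY KEY, name TEXT, "
    "batch_id INTEGER REFERENCES batch(batch_id))"
)
for batch_id, jobs_json in db.execute("SELECT batch_id, jobs FROM batch").fetchall():
    for job in json.loads(jobs_json):
        db.execute("INSERT INTO jobs VALUES (?, ?, ?)",
                   (job["job_id"], job["name"], batch_id))

# The original text column is no longer needed once the rows are extracted
db.execute("UPDATE batch SET jobs = NULL")
```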
Imagine a table (imported from a nested JSON file) where one of the columns contains values that look like this:
The `sqlite-utils extract` command already uses single text values in a column to populate a new table. It would not be much of a stretch for it to be able to use JSON instead, including specifying which of those values should be used as the primary key in the new table.
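The core proposal can be sketched with the stdlib alone: read JSON objects out of a column, populate a new table using one of the JSON keys ("id" here) as its primary key, and replace the original column's value with that key as a foreign key. The `reports`/`people` schema is hypothetical, and a real implementation would presumably also rename the column and add a foreign-key constraint:

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE reports (report_id INTEGER PRIMARY KEY, reported_by TEXT)")
db.executemany(
    "INSERT INTO reports VALUES (?, ?)",
    [
        (1, '{"id": 7, "name": "Alice"}'),
        (2, '{"id": 7, "name": "Alice"}'),  # repeated object, same "id"
        (3, '{"id": 8, "name": "Bob"}'),
    ],
)

db.execute("CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT)")
rows = db.execute("SELECT report_id, reported_by FROM reports").fetchall()
for report_id, blob in rows:
    person = json.loads(blob)
    # The chosen JSON key becomes the new table's primary key;
    # repeated objects collapse into one row
    db.execute("INSERT OR REPLACE INTO people VALUES (?, ?)",
               (person["id"], person["name"]))
    # Replace the JSON blob with a reference to the extracted row
    db.execute("UPDATE reports SET reported_by = ? WHERE report_id = ?",
               (str(person["id"]), report_id))
```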