docs: documentation of batch_insert route (#11)
* docs: documentation for batch_insert

* fix: typo
francojreyes authored Jun 25, 2024
1 parent 754c705 commit abb60b4
Showing 2 changed files with 26 additions and 2 deletions.
4 changes: 2 additions & 2 deletions app/helpers/postgres.py
@@ -49,11 +49,11 @@ def execute_up_down(metadata: Metadata):
    )

    try:
        cur.execute(metadata.sql_down)
        cur.execute(metadata.sql_up)
    except Error as e:
        raise HTTPException(
            status_code=400,
-           detail=f"sql_down of '{metadata.table_name}' does not fully undo sql_up"
+           detail=f"sql_down of '{metadata.table_name}' does not fully undo sql_up:\n{e}"
        )


24 changes: 24 additions & 0 deletions scrapers.md
@@ -76,6 +76,30 @@ X-API-Key: my_key
```

### POST `/batch_insert` Route
If you have a scraper that collects data for multiple tables and inserts into each of them, you can use the `/batch_insert` route. This route ensures that if any one of the inserts fails, none of the inserts are committed.

The `/batch_insert` route accepts a list of objects, each containing `metadata` and `payload`. Note that this list is not wrapped in an object; the top-level JSON entity in the request body is the list itself.

Example:
```http request
POST /batch_insert HTTP/1.1
Content-Type: application/json
X-API-Key: my_key

[
    {
        "metadata": { ... },
        "payload": [ ... ]
    },
    {
        "metadata": { ... },
        "payload": [ ... ]
    },
    ...
]
```
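
As an illustration, a Python scraper might assemble and send a batch like the one above as follows. This is a minimal sketch only: the base URL, API key, table names, and row contents are placeholders, and each `metadata`/`payload` object follows the same format used for a regular single-table insert.

```python
import requests

# Minimal sketch of a scraper sending one atomic batch to /batch_insert.
# The base URL, API key, tables, and rows below are illustrative placeholders.
batch = [
    {
        "metadata": {
            "table_name": "courses",  # hypothetical table
            "sql_up": "CREATE TABLE Courses (code TEXT PRIMARY KEY, title TEXT);",
            "sql_down": "DROP TABLE Courses;",
            # ...any other metadata fields required for a normal insert go here too
        },
        "payload": [
            {"code": "COMP1511", "title": "Programming Fundamentals"},
        ],
    },
    {
        "metadata": {
            "table_name": "rooms",  # hypothetical table
            "sql_up": "CREATE TABLE Rooms (id TEXT PRIMARY KEY, name TEXT);",
            "sql_down": "DROP TABLE Rooms;",
        },
        "payload": [
            {"id": "K-J17-302", "name": "Brass Lab"},
        ],
    },
]

# The list itself is the request body, so it is passed directly as `json=`.
response = requests.post(
    "https://hasuragres.example.com/batch_insert",  # placeholder base URL
    json=batch,
    headers={"X-API-Key": "my_key"},
    timeout=30,
)
response.raise_for_status()  # a non-2xx status means none of the inserts were committed
```

Since nothing is committed unless every insert in the batch succeeds, a failed request can safely be retried as a whole.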

## Multiple Scrapers for One Table

If you want to connect multiple scrapers to the same table (for example, if you have multiple data sources), Hasuragres can support this. Follow the guidelines below to set this up.
