A humanitarian data service. The aspirational goal is to consolidate fragmented raw data sources (APIs, libraries, raw URLs, PDFs) behind a single REST API exposed via Swagger UI. This repository is the initial effort to lay the groundwork for that.
- install all dependencies and track them in reqs.txt files (e.g. swagger)
- use hdx api to download, parse, and merge some data (first focus on lake chad basin)
- lay out structure
- lay out skeleton for rest api and swagger ui
- add landing page with links to api docs and repo
- construct endpoints for accessing the data
- a single data source end to end
- a set of lake chad basin data sources end to end (more granular than country level, or historic data)
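The download-parse-merge step above might be sketched like this, assuming each source has already been parsed into per-country records (all names and figures here are illustrative, not the repo's actual schema):

```python
# Sketch of merging parsed per-country records from several sources.
# The structures and numbers are illustrative assumptions.

def merge_sources(*sources):
    """Merge per-country dicts; later sources add fields to earlier ones."""
    merged = {}
    for source in sources:
        for country, fields in source.items():
            merged.setdefault(country, {}).update(fields)
    return merged

# Hypothetical parsed outputs from two HDX resources:
funding = {"Chad": {"funding_total": 121_000_000}}
needs = {"Chad": {"people_in_need": 345_000},
         "Niger": {"people_in_need": 400_000}}

merged = merge_sources(funding, needs)
# merged["Chad"] now carries both funding and needs fields
```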
See the website with interactive API documentation here. Note: this currently runs locally; to see it, run the following:
python api.py
Current set of endpoints (for Lake Chad Basin, 2016-2017):
- GET /funding/totals/:country
- GET /funding/categories/:country
- GET /needs/totals/:country
- GET /needs/regions/:country
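The handlers behind these routes could look roughly like the following; the route names are taken from the list above, but the data shape and handler signature are assumptions, not the actual api.py:

```python
# Illustrative endpoint-to-data mapping; the real api.py may differ.
# Figures are made up for demonstration.

DERIVED = {
    "chad": {
        "funding_totals": {"appeal_funds_requested": 121_000_000},
        "needs_totals": {"people_in_need": 345_000},
    },
}

def funding_totals(country):
    """Hypothetical handler behind GET /funding/totals/:country."""
    record = DERIVED.get(country.lower())
    if record is None:
        return {"error": "unknown country"}, 404
    return record["funding_totals"], 200

body, status = funding_totals("Chad")
```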
To pull data from HDX (the Humanitarian Data Exchange), run the following:
python3 run_hdx.py
This data script is scheduled to run every Monday at 2:30am (system time) to fetch the latest data.
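That schedule is typically expressed as a crontab entry like the one below; the repository path and log location are assumptions to adjust for your environment:

```shell
# Pull the latest HDX data every Monday at 02:30 system time.
# /path/to/repo and the log path are placeholders, not the repo's config.
30 2 * * 1 cd /path/to/repo && python3 run_hdx.py >> hdx.log 2>&1
```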
See resources/constants.py and resources/data/raw for the current data sources:
- HDX Lake Chad Basin Key Figures January 2017
- HDX Lake Chad Basin FTS Appeal Data
- HDX Lake Chad Basin Crisis Displaced Persons
- Lake Chad Basin Humanitarian Needs Overview January 2017
- UNOCHA ORS ROWCA
See resources/data/derived: this cleaned and formatted data is what the API is ultimately serving.
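The raw-to-derived cleaning step might be sketched as below, assuming the raw resources are CSVs; the column names and normalization rules are illustrative, not the repo's actual pipeline:

```python
# Sketch of cleaning a raw CSV into the derived form the API serves.
# Column names ("Country", "Value") are assumptions for illustration.
import csv
import io

def clean_rows(raw_csv_text):
    """Normalize country names and coerce numeric fields."""
    cleaned = []
    for row in csv.DictReader(io.StringIO(raw_csv_text)):
        cleaned.append({
            "country": row["Country"].strip().title(),
            "value": int(row["Value"].replace(",", "")),
        })
    return cleaned

raw = 'Country,Value\n chad ,"1,200"\nNiger,300\n'
rows = clean_rows(raw)
# rows now holds normalized records ready to write to resources/data/derived
```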