Humdata

A humanitarian data service. The aspirational goal is to consolidate the various fragmented raw data sources, whether APIs, libraries, raw URLs, or PDFs, behind a single REST API and expose it via Swagger UI. This is the initial effort to lay the groundwork to make that possible.

Preliminary TODOs (by 3/10)

  • install all dependencies and track them in reqs.txt files (e.g. swagger)
  • use the HDX API to download, parse, and merge some data (first focus on the Lake Chad Basin)
  • lay out the project structure
  • lay out a skeleton for the REST API and Swagger UI
  • add a landing page with links to the API docs and repo
  • construct endpoints for accessing the data
  • a single data source end to end
  • a set of Lake Chad Basin data sources end to end (more granular than country level, or historic data)

Swagger UI

Interactive API documentation is available via Swagger UI. Note: this is currently local only; to see it, run the following:

python api.py

Current set of endpoints (for Lake Chad Basin, 2016-2017), with an example request sketched after the list:

  • GET /funding/totals/:country
  • GET /funding/categories/:country
  • GET /needs/totals/:country
  • GET /needs/regions/:country
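
For example, once api.py is running, an endpoint can be queried directly. This is a minimal sketch that assumes the API is served locally on port 5000 (the Flask default) and that a country name such as "Chad" is accepted as the path parameter; adjust the host, port, and country to match your setup:

import requests

# Query the funding totals endpoint on the locally running API.
response = requests.get("http://localhost:5000/funding/totals/Chad")
response.raise_for_status()
print(response.json())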

Raw data sources

To pull data from HDX (the Humanitarian Data Exchange), run the following:

python3 run_hdx.py

This data script is configured to run every Monday at 2:30am (system time) so the latest data is pulled automatically. See resources/constants.py for the configuration and resources/data/raw for the downloaded output.
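
The README does not pin down the scheduling mechanism itself; on a Unix-like host, a crontab entry matching the stated Monday 2:30am schedule might look like the following (the repository path is illustrative):

30 2 * * 1 cd /path/to/humdata && python3 run_hdx.py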

Derived data sources

See resources/data/derived; this cleaned and formatted data is what the API ultimately serves.
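
For a quick look at what the API serves, the derived files can be inspected directly. A minimal sketch, assuming the derived data is stored as CSV; the filename below is hypothetical, so list the directory first to find the real ones:

import os
import pandas as pd

derived_dir = os.path.join("resources", "data", "derived")
print(os.listdir(derived_dir))  # discover the actual derived filenames

# "funding_totals.csv" is a hypothetical example filename
df = pd.read_csv(os.path.join(derived_dir, "funding_totals.csv"))
print(df.head())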
