                 +---------------+
                 | parking scans |  ~4,000,000 a month
                 +-------+-------+
                         |
+--------------------+   |
| BGT kaart gegevens +---+  Large Scale Topography: Official City of Amsterdam Map
+--------------------+   |
    +--------------+     |
    | parkeerkaart +-----+  Map of parking spaces
    +--------------+     |
+-----+                  |
| BAG +------------------+  API of addresses and buildings
+-----+                  |
                         |
                +--------v--------+
                |                 |
  +-------------+    Database     |  in blocks of 500,000 / cleanup
  |             |                 |
  |             +--------+--------+
+-v-------+              |
|   API   |              |
+-^-------+              v
  |             +--------+--------+
  |             |                 |
  +-------------+  Elasticsearch  |
                |                 |
                +-----------------+
This project analyses parking scan data. The main goal is to create maps of the parking "pressure" in the city.
The project is divided into a few Docker containers, each with its own function:
- API
  - provides a web API on the scan data
  - uses the Postgres database for assets and measurements
  - uses Elasticsearch to create aggregations of all kinds
  - contains database building / migrations and the loading of related databases
- csvimporter
  - Go code which crunches and cleans up the raw CSV scan data into the Postgres database
- kibana
  - default Kibana to analyse the scan data, deployed at: https://kibana.parkeren.data.amsterdam.nl
- postgres
  - database Docker image with custom settings
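For orientation, here is a minimal sketch of consuming that web API from Python. The base URL, endpoint path and filter parameters below are assumptions for illustration, not the documented interface:

    # Sketch of querying the scan API over HTTP with the requests library.
    # NOTE: the endpoint path and query parameters are assumptions; check the
    # API container's URL configuration for the real routes.
    import requests

    BASE_URL = "https://api.data.amsterdam.nl/parkeerscans"  # assumed base URL

    response = requests.get(
        f"{BASE_URL}/scans/",
        params={"buurt": "A01a", "page_size": 10},  # hypothetical filters
        timeout=10,
    )
    response.raise_for_status()
    for scan in response.json().get("results", []):
        print(scan)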
These are the implemented stages:

- Prepare, combine and clean up the data.
- Visualize the data in Kibana.
- Visualize the occupancy in a specialized viewer.
- Create occupancy maps of the entire city.
Architecture docs (only available on the City of Amsterdam network): https://dokuwiki.datapunt.amsterdam.nl/doku.php?id=start:pparking:architectuur
We take data from a few sources and create a dataset usable for predictive parking analysis.
Source data:
- all known parking spots.
- all known roads / partial road sections (wegdelen) from the Official City of Amsterdam Map (BGT).
- all known neighborhoods (buurten).
- all 50+ million car scans of 2016/2017.
We combine all data sources into a single Postgres table, scan_scans, which contains all scans normalized with parking-spot, neighborhood and road information.
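Conceptually this normalization is a spatial join: each scan point is matched to the road section and neighborhood geometry it falls in. A minimal sketch of that idea with psycopg2 and PostGIS, where the wegdelen/buurten table names, column names and connection string are assumptions:

    # Sketch of the normalization step: attach road-section and neighborhood
    # information to every scan via a PostGIS point-in-polygon join.
    # Table/column names and the DSN are assumptions for illustration.
    import psycopg2

    conn = psycopg2.connect("dbname=parkeerscans user=parkeerscans")
    with conn, conn.cursor() as cur:
        cur.execute("""
            UPDATE scan_scans s
            SET bgt_wegdeel = w.id,
                buurt_code  = b.code
            FROM wegdelen_wegdeel w, buurten_buurt b
            WHERE ST_Within(s.geometrie, w.geometrie)
              AND ST_Within(s.geometrie, b.geometrie)
        """)
    conn.close()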
All this data is indexed in Elasticsearch, which allows us to build a Kibana dashboard.
The Kibana project has one customized view that loads the vector data of roads, neighborhoods and parking spots and allows us to create dynamic maps.
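A date-histogram aggregation over the scan index is the kind of query behind such dashboards. A sketch using the elasticsearch Python client, where the index name and field names are assumptions:

    # Sketch of a dashboard-style aggregation: scans per hour for one road
    # section. The index name and field names are assumptions; inspect the
    # real mapping in Elasticsearch/Kibana.
    from elasticsearch import Elasticsearch

    es = Elasticsearch(["http://localhost:9200"])
    result = es.search(
        index="scans",  # assumed index name
        body={
            "size": 0,
            "query": {"term": {"bgt_wegdeel": "some-road-id"}},  # hypothetical field
            "aggs": {
                "scans_per_hour": {
                    "date_histogram": {"field": "scan_moment", "interval": "hour"}
                }
            },
        },
    )
    print(result["aggregations"]["scans_per_hour"]["buckets"])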
The deploy folder contains import.sh, which triggers all the needed build steps.
Local development can be done using docker-compose up database elasticsearch.
To get quick results and fast visualizations we chose Kibana on top of Elasticsearch; the visualizations are done with a Kibana instance.
After experimenting with Kibana we decided to build a specialized viewer using Angular 4 and Leaflet. It shows parking pressure as year, month, week and day-by-hour summaries on the parking/road map of Amsterdam.
Deployed here: https://parkeren.data.amsterdam.nl/#/
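The pressure figure itself is conceptually simple: per road section and hour bucket, compare the number of distinct vehicles scanned with the section's parking capacity. A minimal in-memory sketch of that calculation (the data structures are simplified stand-ins for the real database rows):

    # Sketch of the occupancy ("parking pressure") calculation: distinct
    # vehicles seen per (road section, hour) divided by parking capacity.
    from collections import defaultdict

    def occupancy_per_hour(scans, capacity):
        """scans: iterable of (road_id, hour, vehicle_id) tuples;
        capacity: mapping road_id -> number of parking spots."""
        seen = defaultdict(set)
        for road_id, hour, vehicle_id in scans:
            seen[(road_id, hour)].add(vehicle_id)
        return {
            key: len(vehicles) / capacity[key[0]]
            for key, vehicles in seen.items()
            if capacity.get(key[0])
        }

    print(occupancy_per_hour(
        [("w1", 9, "carA"), ("w1", 9, "carB"), ("w1", 10, "carA")],
        {"w1": 4},
    ))  # {('w1', 9): 0.5, ('w1', 10): 0.25}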
- Set the environment variables TESTING=no/yes (when yes, a small subset of all data is loaded), ENVIRONMENT=acceptance, and the PARKEERVAKKEN_OBJECTSTORE (parking spaces) password.
- Run deploy/import.sh.
- Test with .jenkins-test.sh.
To run the API tests locally:

- docker-compose up -d test database elasticsearch
- cd into the api/parkeerscans folder and run:
  - bash testdata/loadtestdata.sh parkeerscans
  - bash testdata/loadelastic.sh parkeerscans
- manage.py test will work now.
Tips:
- Downloads are cached in named volumes; database downloads, zips and CSVs are saved. Forcefully remove the named volume (pp_unzip-volume), e.g. with docker volume rm pp_unzip-volume, if it contains the wrong data.
- When TESTING=no the unzipped files will be deleted. To follow the import flow, check the steps in deploy/import.sh.
There is an Angular project to visualize the data. See the README / Dockerfile in the angular directory.