Having ready-to-use data is convenient, but what about data that cannot simply be downloaded from the internet? And what if downloading by hand becomes a hassle because many sources and large amounts of information are involved?
What to expect?
- Getting and storing data as JSON or CSV
- Collecting information from webpages (a minimal sketch of the first two points follows this list)
- Automating scraping with GitHub Actions
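
To give a flavor of the first two points, here is a minimal sketch, assuming the `requests` and `beautifulsoup4` packages are installed; the URL, the CSS selector, and the output file names are placeholders to adapt to an actual source:

```python
import csv
import json

import requests
from bs4 import BeautifulSoup

# Hypothetical page to scrape; replace with a real source.
URL = "https://example.com/articles"


def scrape(url: str) -> list[dict]:
    """Fetch a page and collect title/link pairs from its anchor tags."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # The selector is an assumption; adjust it to the page's actual markup.
    return [
        {"title": a.get_text(strip=True), "url": a.get("href")}
        for a in soup.select("a")
        if a.get("href")
    ]


def save(records: list[dict]) -> None:
    """Store the scraped records as both JSON and CSV."""
    with open("data.json", "w", encoding="utf-8") as f:
        json.dump(records, f, ensure_ascii=False, indent=2)
    with open("data.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["title", "url"])
        writer.writeheader()
        writer.writerows(records)


if __name__ == "__main__":
    save(scrape(URL))
```

The third point then amounts to running a script like this on a schedule, for example from a GitHub Actions workflow with a cron trigger, and committing the refreshed JSON/CSV files back to the repository.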
Download or clone the repository from GitHub to see the demo scraper in action.