This project crawls through thousands of VG articles.
- When "deep dive" is enabled, Scrapy's dupefilter (the component that detects and drops duplicate requests) filters out some pages. As a result, the articles are not stored, because the collected lists end up with mismatched lengths.
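The failure mode above can be sketched with a small stdlib-only example. The `crawl` function and the data below are hypothetical stand-ins: they mimic the dupefilter fetching each URL at most once, so the fetched pages fall out of sync with titles collected beforehand.

```python
def crawl(urls):
    """Simulate Scrapy's dupefilter: each URL is fetched at most once."""
    seen = set()
    pages = []
    for url in urls:
        if url in seen:  # duplicate request -> silently filtered out
            continue
        seen.add(url)
        pages.append(f"<page for {url}>")
    return pages

# Three titles were collected, but one URL repeats.
titles = ["A", "B", "C"]
urls = [
    "https://www.vg.no/a",
    "https://www.vg.no/b",
    "https://www.vg.no/a",  # duplicate: dropped by the dupefilter
]

pages = crawl(urls)
# 3 titles but only 2 pages: storing the articles by pairing the two
# lists either drops data or fails an explicit length check.
assert len(titles) != len(pages)
```

In a real Scrapy spider, a specific request can bypass the dupefilter by passing `dont_filter=True` to `scrapy.Request`, which is one way to keep the lists aligned if the repeated fetches are intentional.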