
Page scraping timeouts are not handled properly #5

Open
deoren opened this issue Jan 21, 2018 · 0 comments
deoren commented Jan 21, 2018

Traceback (most recent call last):
  File "/usr/local/bin/email_ebook_deals.py", line 467, in <module>
    main()
  File "/usr/local/bin/email_ebook_deals.py", line 426, in main
    site_content = fetch_page(site)
  File "/usr/local/bin/email_ebook_deals.py", line 239, in fetch_page
    html_page = urllib2.urlopen(site['url'])
  File "/usr/lib/python2.6/urllib2.py", line 126, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib/python2.6/urllib2.py", line 391, in open
    response = self._open(req, data)
  File "/usr/lib/python2.6/urllib2.py", line 409, in _open
    '_open', req)
  File "/usr/lib/python2.6/urllib2.py", line 369, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.6/urllib2.py", line 1170, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/usr/lib/python2.6/urllib2.py", line 1145, in do_open
    raise URLError(err)
urllib2.URLError: 
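The traceback shows that `fetch_page()` calls `urllib2.urlopen()` with no timeout argument and no exception handling, so a single slow or unreachable site kills the whole run in `main()`. A minimal sketch of a guarded fetch is below, written against Python 3's `urllib.request` (the successor to `urllib2`); the function name, the default timeout, and the injectable `opener` parameter are illustrative assumptions, not the script's actual API.

```python
import socket
import urllib.error
import urllib.request

# Sketch only: the real script's fetch_page() takes a site dict, not a URL.
# The opener parameter is a hypothetical injection point for testing.
def fetch_page(url, timeout=10, opener=urllib.request.urlopen):
    """Fetch a page body, or return None on timeout / URL errors."""
    try:
        with opener(url, timeout=timeout) as response:
            return response.read()
    except (urllib.error.URLError, socket.timeout) as err:
        # Log and skip this site instead of letting the exception
        # propagate up and abort main() for every remaining site.
        print("skipping %s: %s" % (url, err))
        return None
```

With this shape, `main()` can test the return value for `None` and continue with the next site instead of crashing on the first timeout.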