A low-code tool that generates Python crawler code from a curl command or URL
Python >= 3.6
pip install kkba
Copy the curl command or URL from the browser; there is no need to paste it anywhere, just run the command directly:
kkba [options]
# After the command runs, a crawler directory is generated in the current directory (containing the crawler files and a readme).
# 1. Copy curl or url
# 2. Execute the command
kkba -F
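Conceptually, the generator reads the copied curl command, extracts the URL and headers, and emits crawler code built around them. A minimal sketch of that parsing step (simplified and hypothetical; kkba's real parser handles many more curl flags):

```python
import shlex

def parse_curl(command):
    """Split a browser-copied curl command into its URL and headers.

    Simplified sketch: only bare URLs and -H/--header flags are
    recognized here; a real parser also handles --data, -X, cookies, etc.
    """
    tokens = shlex.split(command)
    url, headers = None, {}
    i = 0
    while i < len(tokens):
        tok = tokens[i]
        if tok in ("-H", "--header"):
            # Header flags carry a "Name: value" string in the next token.
            i += 1
            name, _, value = tokens[i].partition(":")
            headers[name.strip()] = value.strip()
        elif tok != "curl" and not tok.startswith("-"):
            # The first non-flag token is treated as the request URL.
            url = tok
        i += 1
    return url, headers

url, headers = parse_curl(
    "curl 'https://example.com/api' -H 'Accept: application/json'"
)
# url == "https://example.com/api"
# headers == {"Accept": "application/json"}
```

The extracted URL and headers are what a generated requests/feapder/scrapy spider would be templated around.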
# use a proxy; Qingting (蜻蜓), Kuaidaili (快代理), and Abuyun (阿布云) are supported. See the source code for detailed usage.
from kkba.proxy import Proxy
p = Proxy(crawlerType='requests', proxyType='xxx', username='xxx', password='xxx')
proxies = p.get_proxy()
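Assuming `get_proxy()` returns a requests-style proxy mapping of the form `{"http": "...", "https": "..."}` (an assumption; check the kkba source for the exact shape), the same dict can also drive the standard library's urllib:

```python
import urllib.request

def build_proxied_opener(proxies):
    """Build a urllib opener that routes requests through the given
    {"scheme": "host:port"} proxy mapping (same shape requests uses)."""
    return urllib.request.build_opener(urllib.request.ProxyHandler(proxies))

# Hypothetical local proxy endpoint; substitute the dict from p.get_proxy().
opener = build_proxied_opener({
    "http": "http://127.0.0.1:8888",
    "https": "http://127.0.0.1:8888",
})
# opener.open(url) would now go through the proxy.
```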
kkba -h
Crawler generator
usage: kkba [options]
optional arguments:
-F, recommended: generate feapder async spider code from the clipboard curl or url (analogous to scrapy usage)
-s generate a scrapy single-file project from the clipboard curl or url
-f, generate feapder sync spider code from the clipboard curl or url (analogous to requests usage)
-r, generate requests spider code from the clipboard curl or url
-h, --help show this help
-v, --version show the version
# install feapder
pip install feapder
# generate feapder spider code
kkba -F
# install scrapy
pip install scrapy
# generate a scrapy single-file project
kkba -s
Thanks: curl2pyreqs, 令狐, 向娜