First, install the requirements:

```shell
pip install -r requirement.txt
```

Then run the crawler:

```shell
python crawler.py
```
`get_profile_details()`

Arguments:

| Argument | Type | Description |
| --- | --- | --- |
| twitter_username | String | Twitter username of the profile to scrape. |
| output_filename | String | Name of the file the output is stored in. |
| output_dir | String | Directory where the output file is saved. |
| proxy | String | Optional. Proxy to use for scraping. For an authenticated proxy, use the format `username:password@host:port`. |
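A minimal usage sketch with the arguments from the table above. The import path is an assumption (the function presumably lives in `crawler.py` in this repo); the username, filename, and proxy values are placeholders:

```python
import os

# Assumed import path; adjust to this repo's actual module layout.
# from crawler import get_profile_details

twitter_username = "nature2cosmos"     # profile to scrape (placeholder)
output_filename = "cosmos_tweets"      # file the JSON output is written to
output_dir = os.getcwd()               # directory for the output file
proxy = "username:password@host:port"  # authenticated-proxy format from the table

# get_profile_details(
#     twitter_username=twitter_username,
#     output_filename=output_filename,
#     output_dir=output_dir,
#     proxy=proxy,
# )
```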
Output:

| Key | Type | Description |
| --- | --- | --- |
| Topic | String | Topic of the tweet |
| Tweet | String | Text of the tweet |
| Username | String | Username of the tweet's author |
| Reposts | Integer | Number of reposts of the tweet |
| Likes | Integer | Number of likes of the tweet |
| Views | String | Number of views of the tweet (display string, e.g. "5.1K") |
| Replies | Integer | Number of replies to the tweet |
| Date | String | Date of the tweet (ISO 8601) |
Sample output:

```json
[
  {"Topic":"cosmos","Username":"Nature & Cosmos\n@nature2cosmos\n\u00b7\n17h","Tweet":"The Sun is 20 years old.\nYes, 20 \"galactic\" years.\n\n Understanding Meaning.","Likes":56,"Views":"5.1K","Reposts":12.0,"Replies":null,"Date":"2023-11-16T20:27:16.000Z"},
  {"Topic":"cosmos","Username":"jake\u00ae\n@jakestars_\n\u00b7\nNov 16","Tweet":"http:\/\/Stargaze.zone is your portal to the $STARS univ .....
```
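A short sketch of consuming the scraper's JSON output. The record below is abridged from the sample above; the keys match the output table:

```python
import json

# Abridged sample record; keys match the output table above.
raw = '''[
  {"Topic": "cosmos", "Username": "Nature & Cosmos\\n@nature2cosmos",
   "Tweet": "The Sun is 20 years old.", "Likes": 56, "Views": "5.1K",
   "Reposts": 12, "Replies": null, "Date": "2023-11-16T20:27:16.000Z"}
]'''

tweets = json.loads(raw)
for t in tweets:
    # Views is a display string like "5.1K"; Likes and Reposts are numbers,
    # and Username may contain newlines (display name, handle, timestamp).
    print(f'{t["Username"].splitlines()[0]}: {t["Likes"]} likes, {t["Views"]} views')
```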