Note
Reddit scraping no longer works because of Reddit's new (June 2023) API policy changes, due to which Pushshift had to shut down
Demo video: usage.mp4
Markify is an open-source command-line application written in Python which scrapes data from your social media accounts and utilises Markov chains to generate new sentences based on the scraped data
- Engineered a command-line application, Markify, leveraging Python to extract and analyze data from social media accounts
- Employed NLTK for meticulous data sanitization
- Demonstrated proficiency in interfacing with a variety of APIs (official and unofficial) to aggregate data
- Employed Markov chains to generate new sentences
- Packaged the application for widespread use by uploading it to PyPI
There are many methods to install markify on your device, such as:

Via pip (recommended):

```shell
python -m pip install markify
```

Via pip, directly from the GitHub repository:

```shell
python -m pip install git+https://github.com/msr8/markify.git
```

By cloning the repository and installing it:

```shell
git clone https://github.com/msr8/markify
cd markify
python setup.py install
```

By cloning the repository and running it from source:

```shell
git clone https://github.com/msr8/markify
cd markify
python -m pip install -r requirements.txt
cd src
python markify.py
```
To use, you can simply run `markify` on the command line, but you need to set up a config file first. If you're on Windows, the default location for the config file is `%LOCALAPPDATA%\markify\config.json`, and on Linux/macOS it is `~/.config/markify/config.json`. Alternatively, you can provide the path to the config file using the `-c`/`--config` flag. If you run the program and the config file doesn't exist, it creates an empty template. An ideal config file should look like:
```json
{
    "reddit": {
        "username": "..."
    },
    "discord": {
        "token": "..."
    },
    "twitter": {
        "username": "..."
    }
}
```
where the `username` under the `reddit` section is your Reddit username, `token` under `discord` is your Discord token, and `username` under `twitter` is your Twitter username. If any of them is not given, the program will skip the collection process for that social media platform
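The platform-dependent defaults described above can be sketched in Python. This mirrors the documented locations only; the function name and lookup logic are illustrative, not markify's actual code:

```python
import os
import sys

def default_config_path():
    # Windows: %LOCALAPPDATA%\markify\config.json
    # Linux/macOS: ~/.config/markify/config.json
    if sys.platform == "win32":
        base = os.environ.get("LOCALAPPDATA", "")
    else:
        base = os.path.expanduser("~/.config")
    return os.path.join(base, "markify", "config.json")
```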
You can view the available flags by running `markify --help`. It should show the following text:
```
-h, --help            show this help message and exit
-c CONFIG, --config CONFIG
                      The path to config file. By default, its {LOCALAPPDATA}/markify/config.json on
                      windows, and ~/.config/markify/config.json on other operating systems
-d DATA, --data DATA  The path to the json data file. If given, the program will not scrape any data and
                      will just compile the model and generate sentences
-n NUMBER, --number NUMBER
                      Number of sentences to generate. Default is 50
-v, --version         Print out the version number
```
More explanation is given below:
This is the path to the config file (`config.json`). By default, it's `{LOCALAPPDATA}/markify/config.json` on Windows, and `~/.config/markify/config.json` on other operating systems. For example:

```shell
markify -c /Users/tyrell/Documents/config.json
```
This is the path to the data file containing all the scraped content. If it is given, the program doesn't scrape any data and just compiles a model based on the data present in the file. By default, a new data file is generated in the `DATA` folder in the config folder and is named `x.json`, where `x` is the current epoch time in seconds. For example:

```shell
markify -d /Users/tyrell/.config/markify/DATA/1658433988.json
```
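The default naming scheme described above can be sketched as follows. `config_dir` here is a hypothetical stand-in for markify's resolved config directory:

```python
import os
import time

# Illustrative sketch: the default data file lives in DATA inside the config
# folder and is named after the current epoch time in seconds, e.g. 1658433988.json
config_dir = os.path.expanduser("~/.config/markify")
data_path = os.path.join(config_dir, "DATA", f"{int(time.time())}.json")
```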
This is the number of sentences to generate after compiling the model. Default is 50. For example:

```shell
markify -n 20
```
Print out the version of markify you're using via this flag. For example:

```shell
markify -v
```
This program has 4 main parts: scraping Reddit comments, scraping Discord messages, scraping tweets, and generating sentences using Markov chains. More explanation is given below
The program uses the Pushshift API to scrape your comments. Since Pushshift can only return 1000 comments at a time, the program gets the timestamp of the oldest comment and then sends a request to the API to get comments before that timestamp. This loop goes on until either all your comments are scraped, or 10000 comments are scraped. I chose to use the Pushshift API since it's faster, yields more results, and doesn't need a client ID or secret
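The pagination loop described above can be sketched as a pure function. Here `fetch_page` stands in for a request to Pushshift's (now defunct) comment-search endpoint: it takes a `before` timestamp and returns up to 1000 comment dicts with a `created_utc` field. The function name and shapes are illustrative, not markify's actual code:

```python
def scrape_comments(fetch_page, max_comments=10000):
    comments = []
    before = None  # no timestamp on the first request
    while len(comments) < max_comments:
        page = fetch_page(before)
        if not page:  # all comments have been scraped
            break
        comments.extend(page)
        # the next request asks only for comments older than the oldest one seen
        before = min(c["created_utc"] for c in page)
    return comments[:max_comments]
```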
To scrape Discord messages, the program first checks whether the token is valid by getting basic information (username, discriminator, and account ID) through the `/users/@me` endpoint. Then it gets all the DM channels you have participated in through the `/users/@me/channels` endpoint. Then it extracts the channel IDs from the response and gets the most recent 100 messages in each channel using the `/channels/channelid/messages` endpoint, where `channelid` is the channel ID. Finally, it goes through the response and adds to the data file those messages which are text messages, were sent by you, and aren't empty
The program uses the snscrape module to scrape your tweets. It keeps scraping until either it has scraped all your tweets, or has scraped 10000 tweets
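The capped consumption described above can be sketched generically: snscrape yields tweets lazily from an iterator, so stopping at the limit is just a bounded slice. The helper name is illustrative, and any Python iterable stands in for snscrape's tweet iterator here:

```python
from itertools import islice

def take_up_to(items, limit=10000):
    # Consume at most `limit` items from a possibly huge or lazy iterator,
    # stopping early if the iterator is exhausted first.
    return list(islice(items, limit))
```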
The program extracts all the useful text from the data file and builds a Markov chain model from it using the markovify module. Then the program generates new sentences (50 by default) and prints them out
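Markify delegates this step to markovify; to illustrate the underlying idea, here is a minimal hand-rolled word-level Markov chain — a sketch of the technique, not markify's actual implementation:

```python
import random
from collections import defaultdict

START = "__START__"

def build_chain(sentences):
    # Map each word to the list of words observed to follow it.
    chain = defaultdict(list)
    for sentence in sentences:
        words = sentence.split()
        if not words:
            continue
        chain[START].append(words[0])
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
    return chain

def generate(chain, max_words=15, rng=random):
    # Walk the chain from a random start word until a dead end or the word cap.
    word = rng.choice(chain[START])
    out = [word]
    while len(out) < max_words and chain.get(word):
        word = rng.choice(chain[word])
        out.append(word)
    return " ".join(out)
```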
Recently (as of July 2022), Discord reworked its token system and the format of the new tokens is a bit different. You can obtain your Discord token using this guide
Q) The program is throwing an error telling me to install "averaged_perceptron_tagger" or something. What do I do?
Running the command given below should work:

```shell
python3 -c "import nltk; nltk.download('averaged_perceptron_tagger')"
```
You can visit this page for more information
Sadly, all you can do is wait. It is a known issue with lxml