You'll need:

Before executing these steps, ensure you have an SSH keypair named `Area 51` on OpenStack and that the private key is at `~/.ssh/area51`. You will also need to `cp openrc.example .openrc` and fill in your OpenStack credentials in `.openrc`. Then:

```shell
sudo chmod +x deploy/deploy-k3s-cluster.sh
./deploy/deploy-k3s-cluster.sh
```
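The exact contents of `.openrc` depend on your provider's `openrc.example`, but an OpenStack RC file typically exports `OS_*` environment variables along these lines (all values below are placeholders, not real credentials):

```shell
# Hypothetical .openrc sketch -- copy openrc.example and fill in your own
# values; the exact variable names vary slightly between OpenStack deployments.
export OS_AUTH_URL="https://your-openstack-endpoint:5000/v3"
export OS_PROJECT_NAME="your-project"
export OS_USERNAME="your-username"
export OS_PASSWORD="your-password"
export OS_USER_DOMAIN_NAME="Default"
```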
A few important things to note:
- The file `twitter_streaming.py` is the main file for the streaming app, and is what the `Dockerfile` executes.
- There are still changes to be made to this file in terms of what data we collect from each tweet. We will need to discuss this and agree on what it should look like.
- The file `twitter_streaming_into_database.py` is the original version of `twitter_streaming.py`, except that it feeds tweets directly into a CouchDB instance on my local machine. This was mainly for testing; we now need to set up networking between the Docker containers for each harvester so they can feed into the database (I don't know how to do this yet).
- The environment variables (authentication keys) are missing from the `Dockerfile`. We need to add these to each harvester's container. I'm fairly sure this can be done with a `.env` file accompanying the `Dockerfile`, but I haven't figured this part out just yet.
- I still need to put together a `docker-compose.yml` file.
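On the `.env` point: a common pattern (an assumption about how this project might handle it, not something already in the repo) is to keep the keys out of the image entirely and pass them in at run time, either with `docker run --env-file` or docker-compose's `env_file`. A hypothetical `.env` might look like:

```
# Hypothetical .env -- key names are placeholders, not the project's
# actual variable names
TWITTER_CONSUMER_KEY=your-consumer-key
TWITTER_CONSUMER_SECRET=your-consumer-secret
TWITTER_ACCESS_TOKEN=your-access-token
TWITTER_ACCESS_TOKEN_SECRET=your-access-token-secret
```

The container would then be started with `docker run --env-file .env ...`, which avoids baking credentials into the `Dockerfile` itself.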
To run the app locally:

```shell
docker run --name twitter-streamer -p 5000:5000 -d twitter-streamer
```
I still need to review/test some of the above, but any help linking the CouchDB setup with the rest would be great. I have it working locally, but not in a clustered setup.
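For the clustered setup, one way to link the harvesters to CouchDB is a `docker-compose.yml` along these lines. This is a sketch under assumptions (service names, image tags, and credentials are all placeholders), not the project's final compose file:

```yaml
# Hypothetical docker-compose.yml sketch
version: "3"
services:
  couchdb:
    image: couchdb:3
    environment:
      COUCHDB_USER: admin          # placeholder credentials
      COUCHDB_PASSWORD: changeme
    ports:
      - "5984:5984"
  twitter-streamer:
    build: .                       # builds from the project Dockerfile
    env_file: .env                 # Twitter API keys, kept out of the image
    depends_on:
      - couchdb
    ports:
      - "5000:5000"
```

On the default compose network the harvester can reach the database at `http://couchdb:5984`, since each service name resolves as a hostname; the harvester would use that URL instead of `localhost`.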