# Simple simulations on a remote computer

The simplest thing you can do remotely is run simulations that you generate with Makita.
This should be a very straightforward process, assuming that you have permission to install what you need.

## SSH into your remote computer

The first thing you have to do is log in to your remote computer.
Find its IP address and your user name.
You might also need to perform additional steps, such as configuring SSH keys.
These depend on which cloud provider you use (SURF, Digital Ocean, AWS, Azure, etc.), so make sure you follow the provider's instructions.

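If your provider asks for a public key, generating one locally looks roughly like this (a sketch: the file name and comment are examples, and `-N ""` skips the passphrase only so the command runs non-interactively; consider setting a passphrase in practice):

```bash
# Generate an ed25519 key pair in the current directory (example file name)
ssh-keygen -q -t ed25519 -f ./id_ed25519_demo -N "" -C "demo-key"

# Two files are created: the private key and the .pub public key
ls id_ed25519_demo id_ed25519_demo.pub

# To install the public key on the remote machine you would then run
# (USER and IPADDRESS are placeholders):
#   ssh-copy-id -i ./id_ed25519_demo.pub USER@IPADDRESS
```
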
Connect to the remote computer using SSH.
If you are using Windows, use [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html) or some other SSH client.
On Linux, macOS, and WSL you should be able to use the terminal:

```bash
ssh USER@IPADDRESS
```

It is normal to receive a message such as "The authenticity ... can't be established".
Type "yes" and press Enter.

For more information on using SSH, check <https://www.digitalocean.com/community/tutorials/ssh-essentials-working-with-ssh-servers-clients-and-keys>.

## Copying data

To copy the `data` folder and any other files you need, use the `scp` command:

```bash
scp -r data USER@IPADDRESS:/location/data
```

The `-r` flag is only necessary for folders.

## Install and run tmux

The main issue with running things remotely is that when you close the SSH connection, the commands you are running are killed.
To avoid that, we run [tmux](https://github.com/tmux/tmux/wiki), which works like a virtual terminal that you can detach from and reattach to (it is more than that; see the link).

Install `tmux` through whatever means your remote computer allows.
E.g., for Ubuntu you can run

```bash
apt install tmux
```

To run tmux, just enter

```bash
tmux
```

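Optionally, you can give the session a name so it is easier to find later when several sessions exist. A sketch (the name `sim` is just an example; `-d` starts the session detached only so the commands run non-interactively here):

```bash
# Start a named session, detached
tmux new-session -d -s sim

# List running sessions and save the listing to a file to inspect it
tmux ls > tmux-sessions.txt
cat tmux-sessions.txt

# You would normally attach to work inside it:
#   tmux attach -t sim

# Clean up the example session
tmux kill-session -t sim
```
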
## Run your simulations

This is where you do what you already know how to do.
For example, let's assume that we want to run Makita with the ARFI template on one or more files in our `data` folder.
For that, we will install `makita` in a Python environment, install the packages from pip, and run the `jobs.sh` file.
**This is exactly what we would do on a local machine.**

```bash
apt install python3-venv
python3 -m venv env
. env/bin/activate
pip3 install --upgrade pip setuptools
pip3 install asreview asreview-makita asreview-insights asreview-wordcloud asreview-datatools
asreview makita template arfi
bash jobs.sh
```

Now the remote computer will be running the simulations.
To leave it running and come back later, follow the steps below.

## Detach and attach tmux and close the SSH session

Since your simulations are running inside tmux, you have to *detach* from it by pressing CTRL+b and then d (hold the CTRL key, press b, release both, then press d).

You will be back in your original terminal, with a `[detached (from session 0)]` or similar message.

**Your simulations are still running inside tmux.**

To go back to them, reattach using

```bash
tmux attach
```

It will be as if you never left.

Most importantly, you can now exit your SSH session and come back later, and the tmux session should still be reachable.

To close an SSH session, simply enter `exit` in the terminal.

### Test the persistence of your simulation run

To make sure that things work as expected before you leave your remote computer unattended, do the following:

- Connect through SSH and open tmux.
- Run a simulation that takes a few minutes.
- Detach and exit the SSH session.
- Connect again and attach tmux.

The simulation should still be running and should have made some progress.
To make sure that it keeps making progress, you can repeat the process and wait longer before reconnecting.

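If you do not want to spend a real simulation on this test, a throwaway stand-in works just as well. For example (the file name `progress.log` and the loop are made up for illustration):

```bash
# Append a timestamped line every second; after detaching and
# reattaching, new lines in the file show that work continued
rm -f progress.log
for i in 1 2 3 4 5; do
    echo "step $i: $(date)" >> progress.log
    sleep 1
done
cat progress.log
```

In a real test you would use a larger loop bound, detach while it runs, and check the tail of the file after reattaching.
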
## Copy things back to your local machine

We use `scp` again, this time to copy from the remote machine back to the local machine:

```bash
scp -r USER@IPADDRESS:/location/output ./
```

# Running the jobs.sh file with GNU parallel

These steps can be run locally or on a remote computer.
However, this kind of parallelization imposes no limit on memory usage per task, so your local computer can run out of memory.

If you run this method on a remote computer, first follow the [guide on running simulations remotely](10-simple.md).
When that guide tells you to run your simulations, stop and come back here.

## Install GNU parallel

Install [GNU parallel](https://www.gnu.org/software/parallel/) following the instructions on its website.
We recommend installing it via a package manager if you have one (such as `apt-get`, `homebrew`, or `chocolatey`).

> **Note**
>
> For SURF, that would be `sudo apt-get install parallel`.

If you do not have a package manager, you can follow the steps below:

- If you are using a UNIX-based system (Linux or macOS), you are going to need `wget`.

Run the commands below; `parallel-*` will be the downloaded file, with a version number in place of `*`:

```bash
wget https://ftp.gnu.org/gnu/parallel/parallel-latest.tar.bz2
tar -xjf parallel-latest.tar.bz2
cd parallel-NUMBER
./configure
make
sudo make install
```

Check that the package is installed with

```bash
parallel --version
```

## Running jobs.sh in parallel

To parallelize your `jobs.sh` file, we need to split it into blocks that can be parallelized.
To do that, we use the `split-file.py` script included in this repo.

To download it directly from the internet, you can issue the following command:

```bash
wget https://raw.githubusercontent.com/abelsiqueira/asreview-cloud/main/split-file.py
```

Now run the following to split the jobs.sh file into three parts:

```bash
python3 split-file.py jobs.sh
```

This will generate the files `jobs.sh.part1`, `jobs.sh.part2`, and `jobs.sh.part3`.
The first part contains all lines with "mkdir" or "describe" in them.
The second part contains all lines with "simulate" in them.
The remaining useful lines (non-empty and not comments) constitute the third part.

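To see what that split looks like, here is a rough shell approximation of the same idea (an illustration, not the actual `split-file.py`; the toy `jobs.sh` below is made up):

```bash
# A made-up jobs.sh standing in for one generated by Makita
cat > jobs.sh <<'EOF'
mkdir -p output/simulation
asreview data describe data/dataset.csv
asreview simulate data/dataset.csv -s output/sim.asreview
python3 scripts/merge_descriptives.py
EOF

# Part 1: setup lines ("mkdir" and "describe")
grep -E 'mkdir|describe' jobs.sh > jobs.sh.part1
# Part 2: the simulation lines
grep 'simulate' jobs.sh > jobs.sh.part2
# Part 3: remaining non-empty, non-comment lines
grep -vE 'mkdir|describe|simulate|^#|^$' jobs.sh > jobs.sh.part3
```
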
Each part must finish before the next one starts, and the first part must run sequentially.
The other two parts can be run using `parallel`.

To simplify your usage, we have created the script `parallel_run.sh`.
Download it by issuing

```bash
wget https://raw.githubusercontent.com/abelsiqueira/asreview-cloud/main/parallel_run.sh
```

Then you can just run the script, specifying the number of cores as an argument:

> **Warning**
> We recommend not using all of your CPU cores at once.
> Leave at least one or two free so your machine can process other tasks.
> Note that there is no limit on memory usage per task, so models that use a lot of memory may compete for resources.

```bash
bash parallel_run.sh NUMBER_OF_CORES
```

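Conceptually, the run order can be sketched like this (not the actual `parallel_run.sh`; `xargs -P` stands in for `parallel` here only so the sketch is self-contained, and the part files are toy examples):

```bash
# Toy part files (in real usage these come from split-file.py)
printf 'mkdir -p output\n' > jobs.sh.part1
printf 'echo sim-a > output/a.txt\necho sim-b > output/b.txt\n' > jobs.sh.part2
printf 'echo done > output/metrics.txt\n' > jobs.sh.part3

num_cores=2  # in parallel_run.sh this comes from the first argument

# Part 1 runs sequentially; parts 2 and 3 each run up to $num_cores
# lines at a time, and each part finishes before the next starts
bash jobs.sh.part1
xargs -P "$num_cores" -I {} bash -c '{}' < jobs.sh.part2
xargs -P "$num_cores" -I {} bash -c '{}' < jobs.sh.part3
```
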
# Running many jobs.sh files one after the other

A more advanced situation is running many simulations with changing parameters, for instance, simulating different combinations of models.

Ideally, you would want to parallelize your execution even further, but this guide assumes that you can't do that, for instance, because you don't have access to more computers.

In that case, our recommendation is to write a loop over the arguments and properly save the output of each run.

Let's assume that you want to run `asreview makita CONSTANT VARIABLE`, where `CONSTANT` is a fixed part that is the same for all runs and `VARIABLE` is what you are varying.

## Arguments

Create a file `makita-args.txt` and write the arguments that you want to run.
For instance, we could write

```plaintext
-m logistic -e tfidf
-m nb -e tfidf
```

## Execution script

Now, download the file `many-jobs.sh`:

```bash
wget https://raw.githubusercontent.com/abelsiqueira/asreview-cloud/main/many-jobs.sh
```

This file should contain something like

```bash
CONSTANT="template arfi" # Edit here to your liking
num_cores=$1

# Shortened for readability

while read -r arg
do
  # Answering "A" tells Makita to overwrite all existing files.
  # $CONSTANT and $arg are intentionally unquoted so they split
  # into separate arguments.
  echo "A" | asreview makita $CONSTANT $arg
  # Edit to your liking from here
  python3 split-file.py jobs.sh
  bash parallel_run.sh "$num_cores"
  mv output "output-args_$arg"
  # to here
done < makita-args.txt
```

Edit this file to reflect your usage:

1. The `CONSTANT` variable defines that we will run `template arfi` for every `asreview makita` call. If you use a custom template, change it here.

2. After running `asreview makita`, we chose to use the [parallelization strategy](20-parallel.md). If you prefer, you can use just `bash jobs.sh` instead of those two lines. The last line renames the output folder so it is not overwritten by the next run, so it is important, but you can do something else that you find more relevant, such as uploading the results.

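One caveat: `mv output "output-args_$arg"` produces folder names containing spaces and dashes (e.g. `output-args_-m logistic -e tfidf`). If that is inconvenient, one hypothetical way to build a safer name is to replace non-alphanumeric characters with underscores:

```bash
# Example value, as it would appear inside the loop
arg="-m logistic -e tfidf"

# Replace everything that is not alphanumeric with an underscore
safe=$(echo "$arg" | tr -c 'a-zA-Z0-9' '_')

# Use the sanitized string in the folder name
mkdir -p "output-args_$safe"
ls -d output-args_*
```
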
## Running

After you change everything that needs changing, simply run

```bash
bash many-jobs.sh NUMBER_OF_CORES
```