4. SuperCloud
Documentation: https://mit-supercloud.github.io/supercloud-docs/
Portal: https://txe1-portal.mit.edu/
IMPORTANT: There's a monthly downtime on the second Tuesday of each month.
Please refer to the documentation for detailed instructions: https://mit-supercloud.github.io/supercloud-docs/requesting-account/
- Complete the required online courses (see the account-request documentation linked above for the current list).
- Complete and submit the Account Request Form.
- Refer to the template on GitHub in /imaging-resources/4_supercloud/account_request_form.txt.
- Email Ernest and ask him to email [email protected] verifying that you need a SuperCloud account.
- This was recently added back as a requirement.
- While waiting for your account to be approved, complete the Practical HPC online course.
- Once approved, you will receive an email with your username and further instructions to set up your account.
We have a group directory located in /home/gridsan/groups/Fraenkel.
- There's 50TB of storage. Please use good practices for storing files in the group directory.
- Please refer to the documentation: https://supercloud.mit.edu/best-practices-and-performance-tips
- Please create a user folder and store your files there, i.e. /home/gridsan/groups/Fraenkel/<YOUR_NAME>.
- If you don't have access after your account is approved, please request access by emailing [email protected] and cc'ing an owner (e.g. Nhan).
You have a personal directory located in /home/gridsan/<USER_NAME>.
- There's 10TB of storage in your personal directory.
Note: SSH key setup has to be done for connecting to SuperCloud and for transferring files from both the NAS and your personal computer. It only has to be done once.
- Open the "Terminal".
- Run the command:
ssh-keygen -t rsa
- For the prompt "Enter file in which to save the key (/home/user1234/.ssh/id_rsa):"
  - On the imaging computer, enter /home/imaging/.ssh/id_rsa_<YOUR_NAME>.
  - On your personal computer, press enter to keep the default name (i.e. id_rsa).
- For the prompt "Enter passphrase (empty for no passphrase):"
  - Enter a passphrase if you want extra security and want to enter a password every time you log in.
  - Hit "return" twice to not require a password.
- From your "home" directory, run the command: cd .ssh
- Run the command: cat <RSA_NAME>.pub
  - On the imaging computer, replace <RSA_NAME> with id_rsa_<YOUR_NAME>.
  - On your personal computer, replace <RSA_NAME> with id_rsa.
- Copy the entire output, including the ssh-rsa at the beginning.
- Go to https://txe1-portal.mit.edu/
- Log in with "MIT Touchstone / InCommon".
- Click on "sshkeys".
- Paste the copied output from above in the box at the bottom of the page.
- Click "Update Keys".
- Open the "Terminal".
- SSH into SuperCloud.
ssh <USER_NAME>@txe1-login.mit.edu
  - Replace <USER_NAME> with your username (a combined sketch of the key setup and login is shown below).
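As an end-to-end illustration of the steps above, the commands on the imaging computer might look like the following. The name "Nhan" and the SuperCloud username "nhuynh" are hypothetical placeholders; substitute your own values.

```bash
# Hypothetical end-to-end example on the imaging computer.
# "Nhan" and the username "nhuynh" are placeholders; substitute your own values.

# 1. Generate a named RSA key pair with no passphrase (-N "") at a custom path (-f).
ssh-keygen -t rsa -f /home/imaging/.ssh/id_rsa_Nhan -N ""

# 2. Print the public key, then paste it into "sshkeys" on the SuperCloud portal.
cat /home/imaging/.ssh/id_rsa_Nhan.pub

# 3. Once the key is registered, log in. Use -i because the key has a non-default name;
#    on a personal computer with the default id_rsa, "ssh nhuynh@txe1-login.mit.edu" suffices.
ssh -i /home/imaging/.ssh/id_rsa_Nhan nhuynh@txe1-login.mit.edu
```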
- Open the "Terminal".
- Connect to the imaging computer.
- Run the command:
rsync -avz -e "ssh -i /home/imaging/.ssh/id_rsa_<YOUR_NAME>" <PATH_TO_NAS_FOLDER> <USER_NAME>@txe1-login.mit.edu:<PATH_TO_SUPERCLOUD_FOLDER>
  - Replace <YOUR_NAME> with your name.
  - Replace <USER_NAME> with your username for SuperCloud.
  - Replace <PATH_TO_NAS_FOLDER> with the path of the folder/file you want to transfer (e.g. /mnt/imagestore/Nhan/image_folder).
  - Replace <PATH_TO_SUPERCLOUD_FOLDER> with the destination path on SuperCloud (e.g. /home/gridsan/groups/Fraenkel/Nhan/image_folder).
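Putting the placeholders together, a filled-in command might look like the following. The folder names come from the examples above; the username "nhuynh" is a hypothetical placeholder.

```bash
# Filled-in example of the NAS-to-SuperCloud transfer described above.
# "Nhan", "image_folder", and the username "nhuynh" are illustrative placeholders.
# Note: a trailing slash on the source copies the folder's contents;
# without it, rsync recreates image_folder inside the destination path.
rsync -avz -e "ssh -i /home/imaging/.ssh/id_rsa_Nhan" \
    /mnt/imagestore/Nhan/image_folder/ \
    nhuynh@txe1-login.mit.edu:/home/gridsan/groups/Fraenkel/Nhan/image_folder
```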
- Open the "Terminal".
- Run the command:
rsync -avz <PATH_TO_LOCAL_FOLDER> <USER_NAME>@txe1-login.mit.edu:<PATH_TO_SUPERCLOUD_FOLDER>
  - Replace <USER_NAME> with your username for SuperCloud.
  - Replace <PATH_TO_LOCAL_FOLDER> with the path of the folder/file you want to transfer (e.g. /Users/nhanhuynh/image_folder).
  - Replace <PATH_TO_SUPERCLOUD_FOLDER> with the destination path on SuperCloud (e.g. /home/gridsan/groups/Fraenkel/Nhan/image_folder).
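A filled-in version of this command, using the example paths above and the same hypothetical username "nhuynh":

```bash
# Filled-in example of the local-to-SuperCloud transfer described above.
# The local path and the username "nhuynh" are illustrative placeholders.
rsync -avz \
    /Users/nhanhuynh/image_folder/ \
    nhuynh@txe1-login.mit.edu:/home/gridsan/groups/Fraenkel/Nhan/image_folder
```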
Note: For Windows, another FTP client could be used (e.g. Filezilla).
- Download and install the "Cyberduck" app: https://cyberduck.io/download/
- Click and open "Cyberduck".
- At the top, click "Open Connection".
- For the prompt box:
- In the dropdown, select "SFTP (SSH File Transfer Protocol)".
- For "Server", enter
txe1-login.mit.edu
. - For "Username", enter your username for SuperCloud.
- Leave "Password" empty.
- For "SSH Private Key", select "~/.ssh/id_rsa".
- Click "Connect".
- You are now connected to SuperCloud and can drag files into "Cyberduck" to transfer them.
To get detailed instructions, please refer to: https://supercloud.mit.edu/software-and-package-management
Most popular software packages are installed in modules on SuperCloud. To see the list of modules, run the command:
module avail
To see the packages installed in a module, list the contents of the path:
/state/partition1/llgrid/pkg/anaconda/<MODULE_NAME>/bin
- Replace <MODULE_NAME> with the module name.
A good module to use is anaconda3-2023a.
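For example, to check whether a tool ships with the anaconda3-2023a module, you could list that directory and filter by name; the grep target below ("jupyter") is purely an illustration.

```bash
# List the executables bundled with the anaconda3-2023a module and
# filter for a name of interest ("jupyter" here is just an example).
ls /state/partition1/llgrid/pkg/anaconda/anaconda3-2023a/bin | grep -i jupyter
```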
If a package is not installed, you can install it by:
- Load an Anaconda module:
module load anaconda/2023a
- Create a temporary directory for installing the package.
mkdir /state/partition1/user/$USER
export TMPDIR=/state/partition1/user/$USER
- Install the package:
pip install --user --no-cache-dir <PACKAGE_NAME>
  - Replace <PACKAGE_NAME> with the package name.
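Taken together, the install flow above might look like this on a login node; "scikit-image" is used purely as an example package name.

```bash
# Sketch of the user-level install flow described above.
# "scikit-image" is an example package name; substitute the package you need.
module load anaconda/2023a

# Point pip's temporary build space at local scratch on the login node.
mkdir -p /state/partition1/user/$USER
export TMPDIR=/state/partition1/user/$USER

# Install into your home directory (--user) without caching wheels.
pip install --user --no-cache-dir scikit-image
```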
To install a basic set of packages for running the examples provided in this GitHub:
- Download /imaging-resources/4_supercloud/requirements.txt from the GitHub:
wget https://raw.githubusercontent.com/fraenkel-lab/imaging-resources/main/4_supercloud/requirements.txt
- Create a temporary directory for installing the requirements file.
mkdir /state/partition1/user/$USER
export TMPDIR=/state/partition1/user/$USER
- Install the requirements file:
pip install --user --no-cache-dir -r requirements.txt
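As a combined sketch, the whole sequence on a login node could look like the following, assuming the Anaconda module from the previous section is loaded first.

```bash
# End-to-end sketch of installing the repository's requirements file on a login node.
module load anaconda/2023a
wget https://raw.githubusercontent.com/fraenkel-lab/imaging-resources/main/4_supercloud/requirements.txt
mkdir -p /state/partition1/user/$USER
export TMPDIR=/state/partition1/user/$USER
pip install --user --no-cache-dir -r requirements.txt
```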
To start a Jupyter notebook server on SuperCloud and access it from your personal computer:
- Go to the SuperCloud portal: https://txe1-portal.mit.edu
- Login with "MIT Touchstone / InCommon Federation".
- Click "/jupyter/".
- Click "Launch Notebook".
- If you need additional configurations (e.g. GPU), click "Show Advanced Launch Options".
To quickly test scripts on a compute node, you can start an interactive job:
- In the "Terminal", run the command:
LLsub -i
- To request a GPU, run the following:
LLsub -i -s 20 -g volta:1
Note: Compute nodes don't have network access, so packages need to be installed on a login node.
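Once the interactive session starts, you get a shell on a compute node and can test your code directly. A minimal sketch of a typical cycle, assuming the Anaconda module and a hypothetical script.py that is already in your SuperCloud directory:

```bash
# Request an interactive session with one Volta GPU and 20 slots, as above.
LLsub -i -s 20 -g volta:1

# Inside the interactive session, load your environment and test the script.
# (Packages must already be installed, since compute nodes have no network access.)
module load anaconda/2023a
python script.py
```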
To submit jobs to run scripts on the high-performance compute nodes:
- Create a shell script with the commands for running your script (a sketch is shown after this list).
  - At the top, include #!/bin/bash.
  - To request a GPU, include #SBATCH --gres=gpu:volta:1.
  - Load the required modules (e.g. module load anaconda/2023a).
  - Run your script (e.g. python script.py).
  - Refer to /imaging-resources/4_supercloud/example_submission.sh on the GitHub.
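A sketch of what such a submission script could look like is below. It is a hypothetical stand-in following the checklist above, not necessarily the exact contents of example_submission.sh; the job name and output file are illustrative.

```bash
#!/bin/bash
# Hypothetical submission script following the checklist above
# (not necessarily identical to example_submission.sh in the repository).
#SBATCH --job-name=imaging_job        # illustrative job name
#SBATCH --output=imaging_job_%j.log   # %j expands to the job ID
#SBATCH --gres=gpu:volta:1            # request one Volta GPU (omit if no GPU is needed)

# Load the software environment.
module load anaconda/2023a

# Run the analysis script.
python script.py
```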
- Submit your job:
sbatch example_submission.sh
- To check the status of your job:
squeue
- To cancel your job:
scancel <JOB_ID>
  - Replace <JOB_ID> with the job ID from squeue.
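A typical submit-and-monitor cycle, with a made-up job ID for illustration:

```bash
# Submit, monitor, and (if needed) cancel a batch job; 12345678 is a made-up job ID.
sbatch example_submission.sh   # prints "Submitted batch job 12345678"
squeue -u $USER                # check the state of your jobs
scancel 12345678               # cancel the job if needed
```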
(under construction)