
Is there a MESA docker image available on the docker hub? #19

Open
casesyh opened this issue Jun 3, 2022 · 6 comments

casesyh commented Jun 3, 2022

Hi, thanks a lot for putting the docker container together. It's great. I wonder if it would be possible to have a MESA docker image available on Docker Hub. That would make it more convenient to use MESA on high-performance computing clusters. (Sorry, this may not strictly belong on the issue board, but it's a convenient way to ask.)

Thanks,
Yuhao.

Repository owner deleted a comment from brucehev Jun 3, 2022
evbauer (Owner) commented Jun 3, 2022

Yes. It may be confusing from the way the scripts in this repository are structured, but they actually pull images from Docker Hub, so you should be able to pull those images and use them independently if you're familiar with Docker and want to do that.

The containers are basically a bare-bones Ubuntu install with MESA plus a few Python tools. You can find the docker images here: https://hub.docker.com/r/evbauer/mesa_lean/tags
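For example, pulling and entering one of those images with Docker should look roughly like this (a sketch only; the tag is just one of the releases listed on that page, so substitute whichever one you need):

```bash
# Pull one of the published MESA images from Docker Hub
docker pull evbauer/mesa_lean:r22.05.1.01

# Start an interactive shell inside the container
docker run --rm -it evbauer/mesa_lean:r22.05.1.01 /bin/bash
```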

Is that roughly what you're looking for?

casesyh (Author) commented Jun 3, 2022

Yes, that's what I was looking for. I pulled the docker image and launched it. The size of the image is 4.6 GB, but $MESA_DIR is not pointing to a location that contains the star/work folder. I'm running the image through a Singularity container. Where are MESA and the MESA modules located in the container? Thanks.

evbauer (Owner) commented Jun 3, 2022

$MESA_DIR should point to /home/docker/mesa, which should be a fully installed MESA directory, including a star/work folder. Can you post the output of the env command from within the container? I'm not very familiar with Singularity; is it possible that something about running under Singularity is causing the issue?
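For reference, a quick way to check this from a shell inside the container (a hypothetical session using the paths described above):

```bash
# Inside the container: confirm where MESA is expected to be installed
echo "$MESA_DIR"            # should print /home/docker/mesa

# The installed tree should include the star/work folder
ls "$MESA_DIR"/star/work

# List any MESA-related environment variables that are actually set
env | grep -i mesa
```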

casesyh (Author) commented Jun 3, 2022

Yes, it must be something different about the Singularity container, but Singularity is the only container runtime available on the cluster, so it would be nice to get it working. This is what the env command gave:
```
Singularity> env
SLURM_NODELIST=c14n02
SLURM_JOB_NAME=ondemand/sys/dashboard/sys/ycrc_desktop
MATE_DESKTOP_SESSION_ID=this-is-deprecated
MANPATH=/usr/share/lmod/lmod/share/man:
XDG_SESSION_ID=c495
ModuleTable003=cHMvYXZ4L21vZHVsZXMvbWF0aCIsIi9ncGZzL2xvb21pcy9hcHBzL2F2eC9tb2R1bGVzL21waSIsIi9ncGZzL2xvb21pcy9hcHBzL2F2eC9tb2R1bGVzL251bWxpYiIsIi9ncGZzL2xvb21pcy9hcHBzL2F2eC9tb2R1bGVzL3BlcmYiLCIvZ3Bmcy9sb29taXMvYXBwcy9hdngvbW9kdWxlcy9waHlzIiwiL2dwZnMvbG9vbWlzL2FwcHMvYXZ4L21vZHVsZXMvc3lzdGVtIiwiL2dwZnMvbG9vbWlzL2FwcHMvYXZ4L21vZHVsZXMvdG9vbGNoYWluIiwiL2dwZnMvbG9vbWlzL2FwcHMvYXZ4L21vZHVsZXMvdG9vbHMiLCIvZ3Bmcy9sb29taXMvYXBwcy9hdngvbW9kdWxlcy92aXMiLCIvZ3Bmcy9sb29taXMvYXBwcy9hdngvbW9kdWxlcy9yZXN0cmljdGVkIiwiL2V0Yy9tb2R1bGVmaWxl
SLURM_TOPOLOGY_ADDR=ibswitch4.ibswitch10.c14n02
SLURMD_NODENAME=c14n02
HOSTNAME=c14n02
SLURM_PRIO_PROCESS=0
SLURM_NODE_ALIASES=(null)
host=c14n02.grace.hpc.yale.internal
__LMOD_REF_COUNT_MODULEPATH=/gpfs/loomis/apps/avx/modules/base:1;/gpfs/loomis/apps/avx/modules/bio:1;/gpfs/loomis/apps/avx/modules/cae:1;/gpfs/loomis/apps/avx/modules/chem:1;/gpfs/loomis/apps/avx/modules/compiler:1;/gpfs/loomis/apps/avx/modules/data:1;/gpfs/loomis/apps/avx/modules/debugger:1;/gpfs/loomis/apps/avx/modules/devel:1;/gpfs/loomis/apps/avx/modules/geo:1;/gpfs/loomis/apps/avx/modules/ide:1;/gpfs/loomis/apps/avx/modules/lang:1;/gpfs/loomis/apps/avx/modules/lib:1;/gpfs/loomis/apps/avx/modules/math:1;/gpfs/loomis/apps/avx/modules/mpi:1;/gpfs/loomis/apps/avx/modules/numlib:1;/gpfs/loomis/apps/avx/modules/perf:1;/gpfs/loomis/apps/avx/modules/phys:1;/gpfs/loomis/apps/avx/modules/system:1;/gpfs/loomis/apps/avx/modules/toolchain:1;/gpfs/loomis/apps/avx/modules/tools:1;/gpfs/loomis/apps/avx/modules/vis:1;/gpfs/loomis/apps/avx/modules/restricted:1;/etc/modulefiles:1;/usr/share/modulefiles:1;/usr/share/modulefiles/Linux:1;/usr/share/modulefiles/Core:1;/usr/share/lmod/lmod/modulefiles/Core:1
VTE_VERSION=5204
TERM=xterm-256color
SLURM_EXPORT_ENV=NONE
SHELL=/bin/bash
BASH_FUNC_create_passwd()=() { tr -cd 'a-zA-Z0-9' < /dev/urandom 2> /dev/null | head -c${1:-8}
}
SLURM_JOB_QOS=normal
SLURM_HINT=nomultithread
LMOD_ROOT=/usr/share/lmod
HISTSIZE=1000
TMPDIR=/tmp
SLURM_TOPOLOGY_ADDR_PATTERN=switch.switch.node
MODULEPATH_ROOT=/usr/share/modulefiles
LMOD_SYSTEM_DEFAULT_MODULES=StdEnv
BASH_FUNC_ml()=() { eval $($LMOD_DIR/ml_cmd "$@")
}
LMOD_PACKAGE_PATH=/vast/palmer/apps/avx.grace
GNOME_TERMINAL_SCREEN=/org/gnome/Terminal/screen/47cd4e64_c99e_49be_9d3a_3b7ddc4f87ec
LMOD_PKG=/usr/share/lmod/lmod
SINGULARITY_APPNAME=
QTDIR=/usr/lib64/qt-3.3
QTINC=/usr/lib64/qt-3.3/include
LMOD_VERSION=8.5.8
LMOD_ADMIN_FILE=/vast/palmer/apps/avx.grace/admin.list
__LMOD_REF_COUNT_LOADEDMODULES=StdEnv:1
SINGULARITY_COMMAND=shell
SLURM_MEM_PER_CPU=5120
QT_GRAPHICSSYSTEM_CHECKED=1
__LMOD_REF_COUNT_CONDA_PKGS_DIRS=/gpfs/loomis/project/heeger/ys633/conda_pkgs:1
USER_PATH=/gpfs/loomis/bin:/opt/TurboVNC/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/ibutils/bin:/home/ys633/.local/bin:/home/ys633/bin:/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin
USER=ys633
SLURM_NNODES=1
LMOD_sys=Linux
LD_LIBRARY_PATH=/.singularity.d/libs
GNOME_TERMINAL_SERVICE=:1.35
LMOD_MODULERCFILE=/vast/palmer/apps/avx.grace/modulerc.lua
SINGULARITY_NAME=mesa_lean_r22.05.1.01.sif
LOOMIS_PROJECT=/gpfs/loomis/project/heeger/ys633
ModuleTable004=cyIsIi91c3Ivc2hhcmUvbW9kdWxlZmlsZXMiLCIvdXNyL3NoYXJlL21vZHVsZWZpbGVzL0xpbnV4IiwiL3Vzci9zaGFyZS9tb2R1bGVmaWxlcy9Db3JlIiwiL3Vzci9zaGFyZS9sbW9kL2xtb2QvbW9kdWxlZmlsZXMvQ29yZSIsfSxbInN5c3RlbUJhc2VNUEFUSCJdPSIvZXRjL21vZHVsZWZpbGVzOi91c3Ivc2hhcmUvbW9kdWxlZmlsZXM6L3Vzci9zaGFyZS9tb2R1bGVmaWxlcy9MaW51eDovdXNyL3NoYXJlL21vZHVsZWZpbGVzL0NvcmU6L3Vzci9zaGFyZS9sbW9kL2xtb2QvbW9kdWxlZmlsZXMvQ29yZSIsfQ==
SLURM_JOBID=58867273
PALMER_SCRATCH=/vast/palmer/scratch/heeger/ys633
_LMOD_REF_COUNT__LMFILES=/etc/modulefiles/StdEnv.lua:1
SESSION_MANAGER=local/unix:@/tmp/.ICE-unix/14609,unix/unix:/tmp/.ICE-unix/14609
BASH_FUNC_source_helpers()=() { function random_number ()
{
shuf -i ${1}-${2} -n 1
};
export -f random_number;
function port_used_python ()
{
python -c "import socket; socket.socket().connect(('$1',$2))" > /dev/null 2>&1
};
function port_used_python3 ()
{
python3 -c "import socket; socket.socket().connect(('$1',$2))" > /dev/null 2>&1
};
function port_used_nc ()
{
nc -w 2 "$1" "$2" < /dev/null > /dev/null 2>&1
};
function port_used_lsof ()
{
lsof -i :"$2" > /dev/null 2>&1
};
function port_used_bash ()
{
local bash_supported=$(strings /bin/bash 2>/dev/null | grep tcp);
if [ "$bash_supported" == "/dev/tcp//" ]; then
( : < /dev/tcp/$1/$2 ) > /dev/null 2>&1;
else
return 127;
fi
};
function port_used ()
{
local port="${1#*:}";
local host=$((expr "${1}" : '\(.*\):' || echo "localhost") | awk 'END{print $NF}');
local port_strategies=(port_used_nc port_used_lsof port_used_bash port_used_python port_used_python3);
for strategy in ${port_strategies[@]};
do
$strategy $host $port;
status=$?;
if [[ "$status" == "0" ]] || [[ "$status" == "1" ]]; then
return $status;
fi;
done;
return 127
};
export -f port_used;
function find_port ()
{
local host="${1:-localhost}";
local port=$(random_number "${2:-2000}" "${3:-65535}");
while port_used "${host}:${port}"; do
port=$(random_number "${2:-2000}" "${3:-65535}");
done;
echo "${port}"
};
export -f find_port;
function wait_until_port_used ()
{
local port="${1}";
local time="${2:-30}";
for ((i=1; i<=time*2; i++))
do
port_used "${port}";
port_status=$?;
if [ "$port_status" == "0" ]; then
return 0;
else
if [ "$port_status" == "127" ]; then
echo "commands to find port were either not found or inaccessible.";
echo "command options are lsof, nc, bash's /dev/tcp, or python (or python3) with socket lib.";
return 127;
fi;
fi;
sleep 0.5;
done;
return 1
};
export -f wait_until_port_used;
function create_passwd ()
{
tr -cd 'a-zA-Z0-9' < /dev/urandom 2> /dev/null | head -c${1:-8}
};
export -f create_passwd
}
ModuleTable001=X01vZHVsZVRhYmxlXz17WyJNVHZlcnNpb24iXT0zLFsiY19yZWJ1aWxkVGltZSJdPTg2NDAwLFsiY19zaG9ydFRpbWUiXT1mYWxzZSxkZXB0aFQ9e30sZmFtaWx5PXt9LG1UPXtTdGRFbnY9e1siZm4iXT0iL2V0Yy9tb2R1bGVmaWxlcy9TdGRFbnYubHVhIixbImZ1bGxOYW1lIl09IlN0ZEVudiIsWyJsb2FkT3JkZXIiXT0xLHByb3BUPXtsbW9kPXtbInN0aWNreSJdPTEsfSx9LFsic3RhY2tEZXB0aCJdPTAsWyJzdGF0dXMiXT0iYWN0aXZlIixbInVzZXJOYW1lIl09IlN0ZEVudiIsWyJ3ViJdPSJNLip6ZmluYWwiLH0sfSxtcGF0aEE9eyIvZ3Bmcy9sb29taXMvYXBwcy9hdngvbW9kdWxlcy9iYXNlIiwiL2dwZnMvbG9vbWlzL2FwcHMvYXZ4L21vZHVsZXMvYmlvIiwiL2dwZnMv
SLURM_TASKS_PER_NODE=1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MAIL=/var/spool/mail/ys633
SLURM_WORKING_CLUSTER=grace:scheduler.grace.hpc.yale.internal:6817:9472:109
SLURM_CONF=/etc/slurm/slurm.conf
SLURM_JOB_ID=58867273
SLURM_CPUS_PER_TASK=4
SLURM_JOB_USER=ys633
PWD=/home/ys633/MESA
LMFILES=/etc/modulefiles/StdEnv.lua
LANG=en_US.UTF-8
CONDA_PKGS_DIRS=/gpfs/loomis/project/heeger/ys633/conda_pkgs
MODULEPATH=/gpfs/loomis/apps/avx/modules/base:/gpfs/loomis/apps/avx/modules/bio:/gpfs/loomis/apps/avx/modules/cae:/gpfs/loomis/apps/avx/modules/chem:/gpfs/loomis/apps/avx/modules/compiler:/gpfs/loomis/apps/avx/modules/data:/gpfs/loomis/apps/avx/modules/debugger:/gpfs/loomis/apps/avx/modules/devel:/gpfs/loomis/apps/avx/modules/geo:/gpfs/loomis/apps/avx/modules/ide:/gpfs/loomis/apps/avx/modules/lang:/gpfs/loomis/apps/avx/modules/lib:/gpfs/loomis/apps/avx/modules/math:/gpfs/loomis/apps/avx/modules/mpi:/gpfs/loomis/apps/avx/modules/numlib:/gpfs/loomis/apps/avx/modules/perf:/gpfs/loomis/apps/avx/modules/phys:/gpfs/loomis/apps/avx/modules/system:/gpfs/loomis/apps/avx/modules/toolchain:/gpfs/loomis/apps/avx/modules/tools:/gpfs/loomis/apps/avx/modules/vis:/gpfs/loomis/apps/avx/modules/restricted:/etc/modulefiles:/usr/share/modulefiles:/usr/share/modulefiles/Linux:/usr/share/modulefiles/Core:/usr/share/lmod/lmod/modulefiles/Core
ModuleTable_Sz=4
SLURM_JOB_UID=15274
LOADEDMODULES=StdEnv
LMOD_SYSTEM_NAME=grace-rhel7
SLURM_NODEID=0
SINGULARITY_ENVIRONMENT=/.singularity.d/env/91-environment.sh
BASH_FUNC_wait_until_port_used()=() { local port="${1}";
local time="${2:-30}";
for ((i=1; i<=time*2; i++))
do
port_used "${port}";
port_status=$?;
if [ "$port_status" == "0" ]; then
return 0;
else
if [ "$port_status" == "127" ]; then
echo "commands to find port were either not found or inaccessible.";
echo "command options are lsof, nc, bash's /dev/tcp, or python (or python3) with socket lib.";
return 127;
fi;
fi;
sleep 0.5;
done;
return 1
}
SLURM_SUBMIT_DIR=/var/www/ood/apps/sys/dashboard
PS1=Singularity>
BASH_FUNC_random_number()=() { shuf -i ${1}-${2} -n 1
}
SLURM_TASK_PID=14371
SINGULARITY_BIND=
LMOD_CMD=/usr/share/lmod/lmod/libexec/lmod
SQUEUE_FORMAT=%18i %11P %18j %6u %.2t %.10M %.10l %.5D %.5C %.10m %R
SLURM_CPUS_ON_NODE=4
CONDA_ENVS_PATH=/gpfs/loomis/project/heeger/ys633/conda_envs
BASH_FUNC_find_port()=() { local host="${1:-localhost}";
local port=$(random_number "${2:-2000}" "${3:-65535}");
while port_used "${host}:${port}"; do
port=$(random_number "${2:-2000}" "${3:-65535}");
done;
echo "${port}"
}
SLURM_PROCID=0
HISTCONTROL=ignoredups
ENVIRONMENT=BATCH
SLURM_JOB_NODELIST=c14n02
LOOMIS_SCRATCH=/gpfs/loomis/scratch60/heeger/ys633
BASH_FUNC_port_used()=() { local port="${1#*:}";
local host=$((expr "${1}" : '\(.*\):' || echo "localhost") | awk 'END{print $NF}');
local port_strategies=(port_used_nc port_used_lsof port_used_bash port_used_python port_used_python3);
for strategy in ${port_strategies[@]};
do
$strategy $host $port;
status=$?;
if [[ "$status" == "0" ]] || [[ "$status" == "1" ]]; then
return $status;
fi;
done;
return 127
}
SHLVL=5
LMOD_CASE_INDEPENDENT_SORTING=yes
HOME=/home/ys633
__LMOD_REF_COUNT_PATH=/gpfs/loomis/bin:1;/opt/TurboVNC/bin:1;/usr/lib64/qt-3.3/bin:1;/usr/local/bin:1;/bin:1;/usr/bin:1;/usr/local/sbin:1;/usr/sbin:1;/opt/ibutils/bin:1;/home/ys633/.local/bin:1;/home/ys633/bin:1
SLURM_LOCALID=0
SLURM_GET_USER_ENV=1
__LMOD_REF_COUNT_CONDA_ENVS_PATH=/gpfs/loomis/project/heeger/ys633/conda_envs:1
ModuleTable002=bG9vbWlzL2FwcHMvYXZ4L21vZHVsZXMvY2FlIiwiL2dwZnMvbG9vbWlzL2FwcHMvYXZ4L21vZHVsZXMvY2hlbSIsIi9ncGZzL2xvb21pcy9hcHBzL2F2eC9tb2R1bGVzL2NvbXBpbGVyIiwiL2dwZnMvbG9vbWlzL2FwcHMvYXZ4L21vZHVsZXMvZGF0YSIsIi9ncGZzL2xvb21pcy9hcHBzL2F2eC9tb2R1bGVzL2RlYnVnZ2VyIiwiL2dwZnMvbG9vbWlzL2FwcHMvYXZ4L21vZHVsZXMvZGV2ZWwiLCIvZ3Bmcy9sb29taXMvYXBwcy9hdngvbW9kdWxlcy9nZW8iLCIvZ3Bmcy9sb29taXMvYXBwcy9hdngvbW9kdWxlcy9pZGUiLCIvZ3Bmcy9sb29taXMvYXBwcy9hdngvbW9kdWxlcy9sYW5nIiwiL2dwZnMvbG9vbWlzL2FwcHMvYXZ4L21vZHVsZXMvbGliIiwiL2dwZnMvbG9vbWlzL2Fw
SLURM_JOB_GID=10414
SLURM_JOB_CPUS_PER_NODE=4
SLURM_CLUSTER_NAME=grace
SLURM_SUBMIT_HOST=ondemand1.grace.hpc.yale.internal
SLURM_GTIDS=0
GTK_OVERLAY_SCROLLING=0
SLURM_JOB_PARTITION=interactive
BASH_ENV=/usr/share/lmod/lmod/init/bash
LOGNAME=ys633
QTLIB=/usr/lib64/qt-3.3/lib
CVS_RSH=ssh
XDG_DATA_DIRS=/home/ys633/.local/share/flatpak/exports/share:/var/lib/flatpak/exports/share:/usr/local/share:/usr/share
SLURM_JOB_ACCOUNT=heeger
DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-la4zv9JcJx,guid=1a6041c9c6b9671ac746e3a7629a3837
SLURM_JOB_NUM_NODES=1
MODULESHOME=/usr/share/lmod/lmod
LMOD_RC=/vast/palmer/apps/avx.grace/lmodrc.lua
LMOD_SETTARG_FULL_SUPPORT=no
LESSOPEN=||/usr/bin/lesspipe.sh %s
__Init_Default_Modules=1
port=5901
WEBSOCKIFY_CMD=/usr/bin/websockify
SINGULARITY_CONTAINER=/home/ys633/MESA/mesa_lean_r22.05.1.01.sif
XDG_RUNTIME_DIR=/dev/shm/tmp.yPLz2PmUz6
DISPLAY=:1
LMOD_CACHED_LOADS=yes
BASH_FUNC_module()=() { eval $($LMOD_CMD bash "$@") && eval $(${LMOD_SETTARG_CMD:-:} -s sh)
}
XDG_CURRENT_DESKTOP=MATE
LMOD_DIR=/usr/share/lmod/lmod/libexec
HISTTIMEFORMAT=%Y-%m-%d %T
COLORTERM=truecolor
_=/usr/bin/env
```

casesyh (Author) commented Jun 5, 2022

I found a possible solution here:
https://hub.docker.com/r/singularityware/docker2singularity
and here:
https://github.com/singularityhub/docker2singularity
I'm going to try it out and let you know how it goes.

Thanks.

PS: if you're ever tempted to build a native Singularity image for cluster computing with MESA, that would be super awesome, too : )
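As an alternative to docker2singularity, here is a minimal sketch of pulling the Docker Hub image straight into a .sif, assuming the cluster's Singularity (or Apptainer) build supports docker:// URIs; the resulting file name matches the .sif visible in the env output above:

```bash
# Build a .sif directly from the Docker Hub image (no docker2singularity step)
singularity pull docker://evbauer/mesa_lean:r22.05.1.01
# -> writes mesa_lean_r22.05.1.01.sif to the current directory

# Open a shell in it; --cleanenv stops host variables (Slurm, Lmod, host PATH)
# from overriding whatever the image itself defines
singularity shell --cleanenv mesa_lean_r22.05.1.01.sif
```

Docker ENV settings usually survive this conversion, while anything set only through a login-shell profile inside the image may not, so it's worth checking `echo $MESA_DIR` right after the shell starts.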

evbauer (Owner) commented Jun 6, 2022

Great! Hopefully that works out for you. At the moment it looks like I don't know enough about Singularity to help you much further. Sorry about that!
