HimMUC Quick Start
The HimMUC cluster uses SLURM for resource management. For further information on SLURM, visit the SLURM Homepage or read the various manual pages for the individual SLURM commands.
Access to the cluster has to be granted by one of the members of the chair. Your username and password are the same as your RBG login (used in the "Rechnerhalle") and can't be changed separately on the cluster. Note that logging into the cluster gives you a shell on the login VM, which is not an ARM machine and therefore can't be used for compilation.
On Linux and macOS, it is sufficient to use the ssh command:

ssh <login>@himmuc.caps.in.tum.de
For Windows, it is possible to log in using PuTTY. A detailed guide to using PuTTY can be found in the RBG-Wiki; use himmuc.caps.in.tum.de as host.
Files can be copied to and from the cluster using scp:

# Copy localfile to cluster
scp <localfile> <login>@himmuc.caps.in.tum.de:<path/to/target>

# Copy remotefile on the cluster to your computer
scp <login>@himmuc.caps.in.tum.de:<remotefile> <target>
To copy folders recursively, add the -r flag (e.g. scp -r <localfolder> <login>@himmuc.caps.in.tum.de:<path/to/target>).
State of the Cluster
To view the current state of the cluster, run sinfo anywhere on the cluster:
PARTITION  AVAIL  TIMELIMIT  NODES  STATE  NODELIST
odr        up     8:00:00    40     idle   odr[00-39]
rpi*       up     8:00:00    40     idle   rpi[00-39]
This shows the two partitions on the cluster: one for the Raspberry Pi nodes (rpi) and one for the ODroid nodes (odr). The default partition is rpi, marked with an asterisk.
To show all current and pending allocations of the cluster, use squeue.
To get access to a node in the cluster, you first need a job allocation, which grants exclusive access to a specific set of nodes. You can get an interactive job by running the following command:
salloc -p <partition> -N <number-of-nodes>
If only one node is requested, the -N parameter can be omitted. Jobs spanning multiple partitions are not possible. If you need jobs with more than 20 nodes, please ask a member of staff for confirmation.
Note that this command gives you a new sub-shell on the login node! To actually get onto the nodes, use ssh (e.g. ssh rpi00).
If you no longer need the job, type exit to relinquish the job and make the resources available for others.
Usage Example (Interactive Job)
username@vmschulz8:~$ salloc -p rpi
salloc: Granted job allocation 19401
salloc: Waiting for resource configuration
salloc: Nodes rpi00 are ready for job
username@vmschulz8:~$ ssh rpi00
username@rpi00:~$ hostname
rpi00
username@rpi00:~$ exit
logout
Connection to rpi00 closed.
username@vmschulz8:~$ exit
exit
salloc: Relinquishing job allocation 19401
username@vmschulz8:~$
Shortcut for Interactive Job
An alternative way to get an interactive shell on a Raspberry Pi node is to use our shortcut host (originally set up for our lab courses, hence the name), which gives you a node shared with other users. This can be used for development, where no exclusive access is required. For performance measurements, an exclusive node has to be used.
When using this host, the following command is executed (use man salloc to figure out what the individual switches do):
salloc -I -Q -n1 -s -J sshgate sh -c "ssh $SLURM_NODELIST"
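Besides interactive jobs, SLURM also accepts non-interactive batch jobs via sbatch. The following is a minimal sketch only; the script name, node count, and time limit are illustrative and not prescribed by this guide (the partition time limit shown by sinfo is 8:00:00):

```shell
#!/bin/sh
# Illustrative batch script; save e.g. as job.sh and submit with: sbatch job.sh
#SBATCH -p rpi            # partition, same choices as with salloc
#SBATCH -N 2              # number of nodes (illustrative)
#SBATCH -t 00:10:00       # wall-clock time limit (illustrative)

srun hostname             # run a command on the allocated nodes
```

As with interactive jobs, the allocation is released automatically when the script finishes.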
On the nodes, some software is not installed directly as part of the operating system image, but is provided as modules. The following software can be used on the boards:
- OpenMPI 3.0.0: module load mpi
- Perf (Linux profiling tool): module load perf
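For example, after loading the MPI module, the standard OpenMPI compiler wrapper and launcher become available. A hedged sketch (the source file name and process count are illustrative; remember that compilation must happen on a node, not on the login VM):

```shell
# On an allocated node:
module load mpi
mpicc -o hello hello.c    # compile with the OpenMPI compiler wrapper
mpirun -np 4 ./hello      # launch with 4 MPI processes
```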