
Tools and Software

Open OnDemand (OOD)

Open OnDemand provides remote desktop sessions, Jupyter notebooks, and RStudio sessions.

You can find the Open OnDemand URL under your project space in the portal.

To access Open OnDemand you need to log in using your EduID account with 2-factor authentication.

If you are already logged in to the portal, you won’t need to log in again.

Open OnDemand shell access

Access “Clusters >> scicore+ shell access” in the top menu to get a shell on the login node of your SLURM cluster.


Open OnDemand remote desktop access

Access “Interactive Apps >> Desktop” in the top menu


Select the node specs and runtime hours, then click “Launch”


Once you click “Launch”, your job will appear in the “My interactive sessions” list in the “queued” state (blue background)

Warning

After you launch the job, it can stay in the queued state for about a minute before it is ready to use. Please be patient.


Once the job is in the “Running” state (green background), you can click “Launch Desktop” to access your desktop session


Open OnDemand RStudio

Access “Interactive Apps >> Rstudio” in the top menu


Select the R version, node specs and runtime hours, then click “Launch”


Once you click “Launch”, your job will appear in the “My interactive sessions” list in the “queued” state (blue background)

Warning

After you launch the job, it can stay in the queued state for about a minute before it is ready to use. Please be patient.


Once the job is in the “Running” state (green background), you can click “Connect to Rstudio Server” to access your RStudio session


Batch Computing (SLURM)

SLURM is a queueing system (also known as a workload manager) used to ensure fair, shared usage of the computational resources at sciCORE+. Users who want to run calculations on the cluster must interact with SLURM to reserve the resources their calculations require.

You can use the SLURM cluster to submit batch jobs (sbatch) or to start interactive shells on the compute nodes (srun), as described in the sections below.

SLURM resources

You can check the available SLURM resources with the sinfo command, e.g.

$ sinfo
PARTITION                          AVAIL  TIMELIMIT  NODES  STATE NODELIST
dynamic-4cores-16g*                   up   infinite      4  idle~ demo-slurm-compute-dynamic-[01-04]
dynamic-8cores-32g                    up   infinite      4  idle~ demo-slurm-compute-dynamic-[05-08]
dynamic-16cores-64g                   up   infinite      4  idle~ demo-slurm-compute-dynamic-[09-12]
dynamic-32cores-128g                  up   infinite      2  idle~ demo-slurm-compute-dynamic-[13-14]
all-cpu-nodes                         up   infinite     14  idle~ demo-slurm-compute-dynamic-[01-14]
dynamic-8cores-64g-1gpu-titanxp       up   infinite      2  idle~ demo-slurm-compute-dynamic-gpu-titanxp-[01-02]
dynamic-64cores-256g-1gpu-A100-40g    up   infinite      1  idle~ demo-slurm-compute-dynamic-gpu-a100-01
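
If you want a compact overview of the CPUs and memory available per partition, you can also pass a custom output format to sinfo, e.g. (standard sinfo format fields, shown here only as an illustration):

sinfo -o "%P %D %c %m"    # partition, node count, CPUs per node, memory per node (in MB)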

sciCORE provides the following SLURM partitions:

  • dynamic-4cores-16g (default) >> 4 nodes with 4 cores and 16G RAM each >> 16 cores and 64G RAM in total
  • dynamic-8cores-32g >> 4 nodes with 8 cores and 32G RAM each >> 32 cores and 128G RAM in total
  • dynamic-16cores-64g >> 4 nodes with 16 cores and 64G RAM each >> 64 cores and 256G RAM in total
  • dynamic-32cores-128g >> 2 nodes with 32 cores and 128G RAM each >> 64 cores and 256G RAM in total
  • all-cpu-nodes >> includes all of the CPU partitions above
  • dynamic-8cores-64g-1gpu-titanxp >> 2 nodes with 1x TitanXP GPU, 8 cores and 64G RAM each
  • dynamic-64cores-256g-1gpu-A100-40g >> 1 node with 1x A100 GPU, 64 cores and 256G RAM

Warning

It is important to specify the partition for your jobs. If you don’t specify one, your jobs will land in the default partition “dynamic-4cores-16g”.

Depending on the requirements of each job, you can submit your jobs to different partitions.
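
For example, you can select a partition either on the sbatch command line or as a directive in your submission script (the script name below is only a placeholder):

sbatch --partition=dynamic-16cores-64g my_job.sh

or, inside the script:

#SBATCH --partition=dynamic-16cores-64g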

The SLURM configuration can be adjusted based on the project’s needs. Should the existing configuration not fit your project’s needs, please contact us at scicore-admin@unibas.ch.

Allocating the right resources for your SLURM batch jobs

When submitting your SLURM batch jobs, you should pay close attention to the resources you request.

Tip

To maximize efficiency and reduce job wait times, try to allocate all the resources available on a compute node (e.g., CPUs, memory) so that the scheduler can assign the entire node to your job without fragmenting resources.

However, the operating system requires a portion of the RAM to function properly, so you should not request 100% of a node’s memory. Doing so may cause your job to be terminated or to fail due to memory exhaustion.
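
You can check how much memory SLURM actually makes allocatable on a given node with scontrol (the node name below is only an example taken from the sinfo output above):

scontrol show node demo-slurm-compute-dynamic-01 | grep -i -E "RealMemory|CfgTRES"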

Examples

  • If you submit jobs requesting 4 CPUs and 10GB RAM to partition dynamic-4cores-16g, you can only allocate 1 job per node. In this case, 4GB of RAM per node would remain unused.
  • If you submit jobs requesting 10 CPUs and 20GB RAM to partition dynamic-16cores-64g, you can only allocate 1 job per node. Each node would have 6 cores and 40G RAM unused.
  • If you submit jobs requesting 4 CPUs and 14GB RAM to partition dynamic-16cores-64g, you can allocate 4 jobs per node. All the CPUs and RAM in every node would be in use (see the sketch below).
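
A minimal sketch of the jobs-per-node arithmetic behind these examples (the numbers are taken from the third example above; the exact allocatable memory per node is somewhat below the physical RAM because of the OS reservation mentioned earlier):

# jobs_per_node = min( cores_per_node / cpus_per_job , allocatable_mem / mem_per_job )
echo $(( 16 / 4 ))     # core limit:  16 cores / 4 CPUs per job -> 4 jobs per node
echo $(( 4 * 14 ))     # memory used: 4 jobs x 14G = 56G, which fits in the node's allocatable memory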

To optimize resource usage for your pipeline, you should adapt the requested resources per job based on your specific needs. You may need to experiment a bit before finalizing your configuration.

A helpful strategy is to submit test jobs that simply run “sleep 900”. This allows you to determine how many jobs can be allocated per node with various resource reservations. Once you’re done testing, you can cancel these jobs using “scancel”. Additionally, you may consider submitting jobs to different partitions, if that suits your workflow.
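
For example, a minimal test script along these lines (the job name, partition and resource values are only placeholders to experiment with) makes the packing behaviour easy to observe:

#!/bin/bash
#SBATCH --job-name=packing_test
#SBATCH --partition=dynamic-16cores-64g
#SBATCH --cpus-per-task=4
#SBATCH --mem=14G
#SBATCH --time=00:20:00

sleep 900

Submit several copies with sbatch, check how they are spread over the nodes with squeue -u $USER -o "%i %P %C %m %N", and cancel them with scancel when you are done.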

SLURM batch jobs

This is an example sbatch submission script:

#!/bin/bash

#SBATCH --job-name=test_JOB
#SBATCH --partition=dynamic-8cores-32g
#SBATCH --cpus-per-task=4
#SBATCH --mem=14G
#SBATCH --time=08:00:00

# activate the software stack (modules)
source /etc/profile.d/soft_stacks.sh

# enable EESSI or ComputeCanada soft stack (uncomment only one)
enable-software-stack-eessi
# enable-software-stack-compute-canada

# load your required software modules
module load SAMtools/1.18-GCC-12.3.0

# run your analysis (XXX YYY ZZZZ are placeholders for the real samtools subcommand and arguments)
samtools XXX YYY ZZZZ
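
To submit the script and follow your job, you can use the standard SLURM commands (the script filename is only an example):

sbatch test_job.sh
squeue -u $USER      # check the state of your jobs
scancel JOBID        # cancel a job if it is no longer needed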

SLURM interactive shell

If you need to work in an interactive shell on a compute node, you can use the srun command.

Allocating an interactive shell in partition dynamic-8cores-32g

srun --partition=dynamic-8cores-32g --cpus-per-task=8 --mem=0 --pty bash -li

Allocating an interactive shell in partition dynamic-16cores-64g

srun --partition=dynamic-16cores-64g --cpus-per-task=16 --mem=0 --pty bash -li

Allocating an interactive shell in partition dynamic-32cores-128g

srun --partition=dynamic-32cores-128g --cpus-per-task=32 --mem=0 --pty bash -li

Allocating an interactive shell on a TitanXP GPU node

srun --partition=dynamic-8cores-64g-1gpu-titanxp --cpus-per-task=8 --mem=0 --pty bash -li

Allocating an interactive shell on an A100 GPU node

srun --partition=dynamic-64cores-256g-1gpu-A100-40g --cpus-per-task=64 --mem=0 --pty bash -li
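
Once the interactive shell starts, you are working directly on a compute node. A quick way to sanity-check the allocation (standard SLURM environment variables; nvidia-smi applies to the GPU partitions):

hostname                                   # should print a compute node, not the login node
echo $SLURM_JOB_ID $SLURM_CPUS_PER_TASK    # job id and number of allocated CPUs
nvidia-smi                                 # on GPU nodes, shows the GPU(s) visible to your session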