Soleus Cluster Access
This page provides an overview of and instructions for using the Soleus cluster, hosted by the Feist Group at the Universidad Autónoma de Madrid. The cluster can be accessed directly via SSH (`ssh user@soleus.ftmc.uam.es`) or through the JupyterHub interface, which is the recommended way to use it for interactive work.
Accessing JupyterHub
The cluster provides a JupyterHub interface for interactive computing, allowing you to run Jupyter notebooks directly on the cluster hardware without manual SSH tunneling or job submission scripts.
- URL: https://soleus.ftmc.uam.es:8131/
- Authentication: Log in with your standard cluster username and password.
General Jupyter Usage
Important: Closing a tab does NOT stop your code.
When you close a notebook tab in your browser, the kernel (the process actually running your code on the cluster) continues to run in the background. This allows your long-running calculations to finish even if you disconnect.
However, this also means you must explicitly shut down kernels when you are finished to free up resources for other users.
- To stop a kernel: Go to “File” -> “Shut Down Kernel”, or use the “Running Terminals and Kernels” tab in the left sidebar to shut down active sessions.
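Because the kernel keeps running after the tab is closed, one practical pattern is to have long-running cells log their progress to a file; you can then reopen the notebook later (or `tail` the file in a terminal) to see how far the calculation has gotten. A minimal sketch, where the file name and loop body are placeholders for your own calculation:

```python
from pathlib import Path

# Hypothetical sketch: append a progress line to a file after each chunk of work,
# so progress stays visible even after the browser tab has been closed.
log = Path("progress.log")
for step in range(3):
    # ... one chunk of the real calculation would go here ...
    with log.open("a") as f:
        f.write(f"finished step {step}\n")
```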
Environment & Package Management
Python: “Batteries Included”
The default Python environment is pre-configured with a comprehensive set of scientific libraries. You likely do not need to install anything yourself, and indeed, you cannot do so in the shared environment.
- Includes: `numpy`, `scipy`, `matplotlib`, `qutip`, `pandas`, `jax`, and many more. If you need additional packages, please contact Johannes to have them added to the central environment.
- Custom Packages: In some specialized cases (e.g. version incompatibilities), you may need to set up your own environment with a new Jupyter kernel configured to run within it; ask Johannes for guidance if you think you need this.
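To check which libraries are already available in the shared environment before requesting additions, a quick standard-library-only check from any notebook might look like this (the package list is just an example):

```python
import importlib.util

# Report which of the listed packages are importable in the current environment,
# without actually importing them.
for name in ["numpy", "scipy", "matplotlib", "qutip", "pandas", "jax"]:
    found = importlib.util.find_spec(name) is not None
    print(f"{name}: {'available' if found else 'missing'}")
```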
Julia: Bare-bones & Project-based
The Julia environment is deliberately “bare-bones” and contains almost no pre-installed packages. This follows the recommended Julia workflow where each project manages its own dependencies.
Recommended Workflow:
- Create a Project: For every new project or analysis, create a `Project.toml` file (or let Julia create it for you).
- Activate Environment: In the very first cell of your notebook, activate your project environment:

```julia
import Pkg
Pkg.activate(".")           # Activates the project in the current folder
# or
Pkg.activate("@myproject")  # Activates a shared environment named 'myproject'
```

- Shared Environments: Using `Pkg.activate("@myproject")` is often easiest, as it stores the environment in your `~/.julia` folder, making it accessible from any notebook or folder on the cluster.
- Install Packages: The first time you use the environment, install what you need:

```julia
Pkg.add("Plots")
Pkg.add("DifferentialEquations")
```
Available Kernels
When creating a new notebook, you can choose from several kernel types. These determine where and how your code runs.
1. Standard Shared Kernels
- Kernels:
- Python 3
- Julia 1.11
These kernels run on a dedicated but shared compute node, separate from the login node, and are suitable for standard calculations and development. The shared node has 24 CPUs, 256 GB of RAM, and 8 GPUs, and is usually not heavily loaded; for any computation that uses only a single CPU or GPU, these kernels should be your default choice. They have no time limit, but please shut them down when you are done to free up resources for others.
2. Compute Node Kernels
These kernels automatically submit a Slurm job that runs on a compute node, reserving resources exclusively for that kernel. This makes them ideal for heavier computations or when you need guaranteed performance. However, because the reserved resources cannot be used by anyone else, please shut down the kernels when done. In particular, if you have simulations that take a long time to run, save their output to disk and shut down the kernel afterwards, so that you can retrieve the results later without keeping resources reserved.

For even heavier simulations, the recommendation is to write a "normal" Slurm job script that runs the code and saves the output to a file. This makes it easy to launch many simulation runs at the same time, and each run only occupies the resources it actually uses.
All compute node kernels have a running-time limit of 7 days (168 hours).
| Kernel Name | Description | Resources |
|---|---|---|
| node Py3 0 GPUs | Python 3, CPU only | 1 CPU, 10 GB RAM |
| node Py3 1 GPU | Python 3, Single GPU | 3 CPUs, 30 GB RAM, 1 GPU |
| node Py3 8 GPUs | Python 3, Multi-GPU | 24 CPUs, Full Node RAM, 8 GPUs |
| node Julia 0 GPUs | Julia 1.11, CPU only | 1 CPU, 10 GB RAM |
| node Julia 1 GPU | Julia 1.11, Single GPU | 1 CPU, 10 GB RAM, 1 GPU |
| node Julia 8 GPUs | Julia 1.11, Multi-GPU | 24 CPUs, Full Node RAM, 8 GPUs |
Note: The kernels with “8 GPUs” request a full exclusive node.
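The "save output to disk, then shut down the kernel" pattern recommended above can be sketched as follows. The file name and the placeholder "simulation" are hypothetical, and `numpy`'s `savez`/`load` is just one convenient serialization option:

```python
import numpy as np

# In the expensive compute-node kernel: run the simulation and save its output.
results = np.random.default_rng(0).normal(size=(100, 3))  # placeholder "simulation"
np.savez("results.npz", results=results)

# Later, in a cheap shared kernel: reload the results for plotting or analysis,
# without keeping the compute-node kernel (and its reserved resources) alive.
data = np.load("results.npz")
assert np.array_equal(data["results"], results)
```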