# Hyperion system

## How to connect to Hyperion

### Prerequisites
Before connecting to Hyperion, ensure you have:
- An active account on Hyperion.
- An SSH client installed on your local machine. UNIX-based systems, including macOS and Linux, typically come with an SSH client pre-installed; on Windows, tools such as PuTTY or MobaXTerm are recommended. You can verify that a client is available with the quick check below.
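As a quick check (standard OpenSSH command; the exact version string will differ on your system):

```
$ ssh -V   # prints the installed OpenSSH client version, e.g. "OpenSSH_9.x"
```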
### SSH Login
To establish a connection to Hyperion, use the following command:
```
$ ssh <username>@hyperion.sw.ehu.es
```
Replace `<username>` with your actual Hyperion username.
You can also connect directly to a specific login node:

```
$ ssh <username>@hyperion-01.sw.ehu.es
$ ssh <username>@hyperion-02.sw.ehu.es
```
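If you connect often, you can also define a host alias in your OpenSSH client configuration (a minimal sketch; the alias name `hyperion` is arbitrary, and `~/.ssh/config` is the standard OpenSSH location):

```
# ~/.ssh/config  (standard OpenSSH client configuration file)
Host hyperion
    HostName hyperion.sw.ehu.es
    User <username>
```

With this entry in place, `ssh hyperion` is equivalent to the full command above.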
Upon successful connection, you will be greeted with a shell on one of Hyperion's login nodes, where you can manage your files, compile code, or prepare batch submission scripts.
### Security Recommendations
- **Public Key Authentication**: While password-based logins are simple, authenticating with SSH key pairs is much more secure and makes unauthorized access significantly more difficult; see the sketch after this list.
- **Monitor Activity**: Regularly review the list of active sessions and last logins to your Hyperion account. If you detect any unfamiliar activity, report it to our support team immediately.
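As a minimal sketch of the key-based setup recommended above (standard OpenSSH commands; the Ed25519 key type and default file locations are assumptions, adjust to your own policy):

```
# Generate an Ed25519 key pair on your local machine; set a strong passphrase when prompted.
$ ssh-keygen -t ed25519

# Install the public key on Hyperion so that subsequent logins use the key instead of a password.
$ ssh-copy-id <username>@hyperion.sw.ehu.es

# Once logged in, review recent logins to spot unfamiliar activity.
$ last -a | head
```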
## Specifications
| Compute Node Range | Processor / Total cores | Memory | Accelerator | Nodes |
| --- | --- | --- | --- | --- |
| hyperion-[001-007], [023-029], [045-051], [067-073], [089-095], [111-117], [133-139], [155-161], [177-181], [206] | Intel Xeon Gold 6342 (Icelake) - 48 cores | 256 GB | - | 60 |
| hyperion-[008], [030], [052], [074], [096], [118], [140], [162], [182] | Intel Xeon Gold 6248R (Cascadelake) - 48 cores | 96 GB | 2x NVIDIA RTX 3090 24GB | 9 |
| hyperion-[009-022], [031-044], [053-066], [075-088], [097-110], [119-132], [141-154], [163-175] | Intel Xeon Gold 6248R (Cascadelake) - 48 cores | 96 GB | - | 111 |
| hyperion-[208-224], [234-251], [263] | Intel Xeon Platinum 8362 (Icelake) - 64 cores | 2 TB | - | 36 |
| hyperion-252 | Intel Xeon Gold 6348 (Icelake) - 56 cores | 1 TB | 8x NVIDIA A100 PCIe 80GB | 1 |
| hyperion-253 | Intel Xeon Gold 6348 (Icelake) - 56 cores | 1 TB | 8x NVIDIA A100 SXM4 80GB | 1 |
| hyperion-[254,257] | Intel Xeon Platinum 8358 (Icelake) - 64 cores | 2 TB | 8x NVIDIA A100 PCIe 80GB | 2 |
| hyperion-262 | Intel Xeon Platinum 8362 (Icelake) - 64 cores | 2 TB | 1x NVIDIA RTX A6000 48GB | 1 |
| hyperion-263 | Intel Xeon Platinum 8362 (Icelake) - 64 cores | 2 TB | - | 1 |
| hyperion-[282-284] | Intel Xeon Platinum 8358 (Icelake) - 64 cores | 2 TB | 8x NVIDIA A100 SXM4 80GB | 2 |
| hyperion-[285-292] | Intel Xeon Platinum 8570 (Emerald Rapids) - 112 cores | 2 TB | - | 8 |
| Compute Node Range | Processor / Total cores | Memory | Accelerator | Nodes |
| --- | --- | --- | --- | --- |
| hyperion-[176], [180-181], [183-205], [207] | Intel Xeon Gold 6248R (Cascadelake) - 48 cores | 192 GB | - | 27 |
| hyperion-[225-233] | Intel Xeon Platinum 8368 (Icelake) - 76 cores | 2 TB | - | 9 |
| hyperion-[255-256] | AMD EPYC 75F3 (Zen 3) - 64 cores | 1 TB | 8x NVIDIA A100 SXM4 80GB | 2 |
| hyperion-[258-259] | Intel Xeon Platinum 8362 (Icelake) - 64 cores | 4 TB | - | 2 |
| hyperion-[260-261] | Intel Xeon Gold 6348 (Icelake) - 56 cores | 1.5 TB | 1x NVIDIA A30 24GB | 2 |
| hyperion-264 | Intel Xeon Platinum 8362 (Icelake) - 64 cores | 2 TB | - | 1 |
| hyperion-[265-280] | Intel Xeon Platinum 8362 (Icelake) - 64 cores | 2 TB | - | 16 |
| hyperion-281 | Intel Xeon Platinum 8362 (Icelake) - 32 cores | 512 GB | - | 1 |
| Compute Node Range | Processor / Total cores | Memory | Accelerator | Nodes |
| --- | --- | --- | --- | --- |
| hyperion-253 | Intel Xeon Gold 6348 (Icelake) - 56 cores | 1 TB | 8x NVIDIA A100 SXM4 80GB | 1 |
| hyperion-[282-284] | Intel Xeon Platinum 8358 (Icelake) - 64 cores | 2 TB | 8x NVIDIA A100 SXM4 80GB | 3 |
As the tables above show, the cluster integrates nodes with several distinct microarchitectures: Intel Cascadelake, Icelake, and Emerald Rapids, plus AMD Zen 3 on some GPU nodes. When compiling software, it is important to target the specific microarchitecture in order to optimize performance.
For the interconnect, Hyperion employs InfiniBand HDR technology, which provides a bandwidth of up to 200 Gb/s per direction and ensures low-latency communication between nodes.
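If the standard InfiniBand diagnostic tools are installed on a node, you can check the link rate yourself (a sketch; `ibstat` ships with the common `infiniband-diags` package, which may or may not be present):

```
$ ibstat | grep -i rate   # an HDR link reports a rate of 200 (Gb/s)
```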
## Filesystems and IO
| Filesystem | Mount point | Quota | Size | Purpose | Backup |
| --- | --- | --- | --- | --- | --- |
| Home directories | /home | 50 GB | 56 TB | storage, dotfiles, config files | No |
| scratch | /scratch | 2 TB | 611 TB | running jobs | No |
| lscratch | /lscratch | None | - | running single node jobs | No |
| data | /data | 5 TB | 917 TB | storage | No |
Info

The `/data` filesystem is mounted read-only on the compute nodes.
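As a sketch of the intended workflow, a job would stage its working files on `/scratch` (meant for running jobs) and keep long-term copies under `/home` or `/data`. All paths and program names below are hypothetical, and scheduler directives are omitted because the batch system is not described in this section:

```bash
#!/bin/bash
# Hypothetical job script sketch: run on /scratch, copy results back afterwards.

WORKDIR=/scratch/$USER/myjob           # hypothetical per-job directory on scratch
mkdir -p "$WORKDIR"
cp ~/inputs/config.dat "$WORKDIR"/     # hypothetical input file staged from /home

cd "$WORKDIR"
./my_solver config.dat > results.out   # hypothetical application run

mkdir -p ~/results
cp results.out ~/results/              # copy results back: /scratch is not backed up
```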
## Login Nodes
- Hyperion has 2 login nodes: `hyperion-01.sw.ehu.es` and `hyperion-02.sw.ehu.es`.
- Each node has two sockets, each populated with a 32-core Intel Xeon Platinum 8362 (64 cores per node).
- Each node has 256 GB of RAM.
Warning
Remember that login nodes should only be used for small tasks or compilation, not for running interactive jobs.
If a multi-process or high-memory-demand process is detected on a login node, all of the user's processes will be terminated, and the user will be banned from the cluster until they contact support-hpc@dipc.org.
## Cluster Architecture Considerations
Hyperion is a heterogeneous cluster, composed of nodes with various microarchitectures, including Cascadelake, Icelake, and Emerald Rapids.
### Compiling Your Code
When compiling your code, it is essential to target the specific microarchitecture of the node you intend to run on. For reliable results and performance, compile and execute programs on nodes of the same microarchitecture: for instance, code compiled for a Cascadelake node should also run on a Cascadelake node. Running binaries built for one microarchitecture on a different one may lead to unpredictable behavior, performance degradation, or application crashes.
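For example, with GCC you could build one binary per target microarchitecture using the standard `-march` target names (a sketch; flag spellings differ for other compilers such as Intel's):

```
# Standard GCC target names for two of the microarchitectures present in Hyperion:
$ gcc -O2 -march=cascadelake    -o app.cascadelake app.c
$ gcc -O2 -march=icelake-server -o app.icelake     app.c

# Or optimize for the node you are compiling on (then run only on identical nodes):
$ gcc -O2 -march=native -o app app.c
```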
Warning
All software that the supercomputing center staff compiles system-wide is optimized for all three microarchitectures: Cascadelake, Icelake, and Emerald Rapids. Users will find the same environment modules available irrespective of the microarchitecture of the node they are working on. This ensures compatibility and performance across the different node types within Hyperion, allowing users to work seamlessly across microarchitectures.
## Software

### Compiling your code
Intel compilers are recommended for building your applications on Hyperion. No compiler module is loaded by default, so use the `module avail` command to see which versions are available and load an Intel compiler module before compiling. For example:

```
$ module load intel/2022a
```
Notice that when a compiler module is loaded, some environment variables are set or modified to add the paths of certain commands, include files, and libraries to your environment. This helps simplify your workflow.
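You can inspect exactly which variables a module sets with `module show` (a sketch; the `intel/2022a` module name is taken from the example above, and compiler driver names depend on the toolchain version):

```
$ module show intel/2022a   # list the environment changes the module makes
$ module load intel/2022a
$ which icc                 # the Intel compiler drivers should now be on your PATH
```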
As an alternative, Hyperion also offers a collection of open-source tools such as compilers and scientific libraries. Use the `module avail` command to see the available versions. For example:

```
$ module avail intel
$ module avail FFTW
```
To learn more about compilers and scientific libraries, check out the Environment Modules and Compilers sections.