Avoid running interactive jobs on the login nodes¶
Login nodes are meant to be used for:
- Job submission and management.
- Compilations (as long as they are serial and fast).
- Data transfer (from /dipc, for example).
Small-scale interactive runs may also be acceptable if your resource requirements are minimal. However, in general, running jobs interactively (that is, outside the queue system) on these nodes is strictly forbidden.
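If you do need an interactive session, request one through the queue system so it runs on a compute node. A minimal sketch, assuming the cluster uses Slurm (the resource and time values here are illustrative placeholders, not recommended settings):

```shell
# Request a short interactive shell on a compute node via the scheduler.
# Adjust tasks, CPUs, and the time limit to your actual needs.
srun --ntasks=1 --cpus-per-task=1 --time=00:30:00 --pty bash -i
```

When the allocation is granted, the shell prompt you get runs on a compute node, so the interactive work no longer loads the login node.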
Run jobs from scratch¶
The /dipc filesystem is only exported to the head nodes of the computing systems and the access nodes. Therefore, it is not visible from the computing nodes of any of the supercomputing systems. Within the computing nodes, /dipc is a symbolic link that points to /scratch. This is why submitting jobs from your home directories is not recommended: it is possible, but requires some scripting. Instead, submit your jobs from your /scratch directory.
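In practice, this means keeping the job script and its inputs in your scratch space and submitting from there. A sketch assuming Slurm; the path, username, and script name below are hypothetical:

```shell
# Work from scratch so the job's I/O lands on the fast filesystem.
cd /scratch/username/myproject   # hypothetical scratch directory
sbatch job.slurm                 # hypothetical job script submitted from here
```

Because the working directory at submission time is on /scratch, the job's default output files and any relative paths resolve there rather than on /dipc.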
Do not use scratch as permanent storage¶
The /scratch filesystem is designed for performance rather than reliability: it is a fast workspace to redirect the I/O of your jobs and store files temporarily.
When occupancy goes above 80%, the BeeGFS filesystem suffers a performance degradation that affects all users. If occupancy grows beyond that percentage, we will be forced to free up space by manually removing files and directories older than 90 days, without further notice.
Also keep in mind that data on /scratch is not backed up, so users are advised to move valuable data to their home directories under /dipc, where daily backups are performed.
Organize your files in /scratch¶
BeeGFS filesystems do not cope well with large numbers of small files in the same directory, and this kind of usage will severely degrade the performance of the /scratch filesystem. Try not to store more than a few hundred files per directory.
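To check whether a directory is approaching that limit, a non-recursive find piped to wc is enough. A small self-contained sketch (the temporary directory stands in for a /scratch path):

```shell
# Create a demo directory with a handful of files (stands in for a
# directory under /scratch).
dir=$(mktemp -d)
for i in 1 2 3 4 5; do touch "$dir/out_$i.dat"; done

# Count regular files directly inside the directory (-maxdepth 1: no
# recursion into subdirectories).
count=$(find "$dir" -maxdepth 1 -type f | wc -l)
echo "$count"
```

If the count runs into the thousands, consider grouping the files into subdirectories or packing them into a single archive with tar.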
Check your scratch usage¶
Since the usage of the /scratch filesystem is limited for each user (500 GB in Atlas-FDR and 1.5 TB in Atlas EDR), it may be useful to check your occupancy:
```
$ beegfs-ctl --getquota --uid username

      user/group     ||           size          ||    chunk files
     name     |  id  ||    used    |    hard    ||  used   |  hard
--------------|------||------------|------------||---------|---------
     username |  xxxx||   43.81 GiB|    1.46 TiB||   262281|  3000000
```