Parallel File System (GPFS)

Overview

Every HPC node of both LiCCA and ALCC has access to the same network filesystem /hpc/gpfs2, which is a shared resource. This filesystem contains the following folders, which currently share the same performance characteristics:

/hpc/gpfs2/home/u/$USER: User home directory
/hpc/gpfs2/scratch/u/$USER: User scratch directory
/hpc/gpfs2/home/g/$HPC-Projekt/: Group home directory
/hpc/gpfs2/scratch/g/$HPC-Projekt/: Group scratch directory

Backup: All content of /hpc/gpfs2/home is backed up once a day to the Tape Library of the Rechenzentrum. All important data (e.g. results of calculations, user-maintained software, etc.) should therefore be stored in the User home or Group home directories.

Pro Tip: All data that can easily be recreated (e.g. temporary files, python environments, etc.) should be stored in the User scratch directory (not part of the Backup).

Default Permissions and Ownerships for User and Group directories

Once Project and Cluster access have been approved, default permissions as well as user and group ownerships are applied to the four directories listed above. Permissions and ownerships of existing files and folders in these directories remain untouched.
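To see the ownership and permission bits that have been applied to your own directories, you can for example run the following (the group directories use your project's name in place of $USER):

johndoe@licca001:~$ ls -ld /hpc/gpfs2/home/u/$USER /hpc/gpfs2/scratch/u/$USER   # inspect owner, group and mode bits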
User directories

These directories can only be accessed by the owner and nobody else (except the root user). Default permissions of newly created files and folders are 0644 and 0755, respectively, due to the default umask setting of 0022. This does not mean that other cluster users can access your files, though, because no regular user can get past your personal home and scratch directories, which act as gatekeepers.
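For illustration, a hypothetical session (the file and directory names are arbitrary) shows the modes that result from this umask for newly created files and directories:

johndoe@licca001:~$ umask
0022
johndoe@licca001:~$ touch newfile && mkdir newdir    # names are arbitrary examples
johndoe@licca001:~$ ls -ld newdir newfile
drwxr-xr-x 2 johndoe johndoe ... newdir
-rw-r--r-- 1 johndoe johndoe ... newfile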
Group/Project directories

These directories can (only) be accessed and modified by all group members. Files and directories created by one member can be arbitrarily modified or removed by any other group member. Note that files and folders created by users in group directories won't have an ACL entry for the special group and other (everyone) permissions; therefore the last two mode bits (e.g. of 700) or the corresponding output of ls -l (e.g. -rwx------) are completely meaningless.

DO NOT attempt to "fix" file and folder permissions in group directories. Especially DO NOT run any kind of recursive chmod or chown in group folders (e.g. chmod -R or chown -R), even if you know what you are doing, because it is not necessary at all and will allocate useless extra metadata for every single file and folder.

Due to the nature of these ACLs on group home and scratch directories, all files are marked as executable, and the output of ls may show all files in green color. Again, there is no need to fix this.
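As a hypothetical example (project, group and file names are placeholders), a listing inside a group directory may look like this; despite the seemingly private and executable mode bits, every project member can still read and modify the file, because access is governed entirely by the ACL of the group directory:

johndoe@licca001:~$ ls -l /hpc/gpfs2/home/g/test/    # "test" is a placeholder project name
-rwx------ 1 johndoe rzhpc-test ... results.dat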
Granting Access to User and Group directories

User directories

DO NOT make your home or scratch folder world writable (e.g. chmod 777). This is explicitly forbidden and users doing so will receive a formal warning.

To grant read-only access for your home and/or scratch directory to a specific group: the IdM group of choice should contain as few people as possible, because all members of this group will gain read access to your personal home or scratch space this way. Recommendation: the respective rzhpc-* group of your project.

To grant read-only access for your home and/or scratch directory to a specific user, a per-user entry can be set in the same way (see the sketch below).
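The exact commands are site-specific; the following is only a minimal sketch, assuming POSIX ACL tools (setfacl) are honoured on this GPFS filesystem, with rzhpc-example and jdoe as placeholder names:

setfacl -m g:rzhpc-example:r-x /hpc/gpfs2/home/u/$USER      # read-only (read + traverse) access for an IdM group (placeholder name)
setfacl -m u:jdoe:r-x /hpc/gpfs2/scratch/u/$USER            # read-only access for a single user (placeholder name)

Such an entry can be removed again with setfacl -x, e.g. setfacl -x g:rzhpc-example /hpc/gpfs2/home/u/$USER.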
Group/Project directories

You cannot modify the ACL of home and scratch group/project directories. To get access to another group's home or scratch folder you have to apply for Project Membership.

Quota regulations and management

The GPFS filesystem is operated with quota enabled for the user and group directories in home and scratch. Users can check their current GPFS filesystem usage and quota situation on the login nodes with the command list-quota (/usr/local/bin/list-quota), which produces output of the following form:
johndoe@licca001:~$ list-quota
user quota: johndoe
Block Limits | File
Filesystem Fileset type blocks quota limit in_doubt grace | files quota limit in_doubt grace
gpfs2 home USR 0 512G 1.5T 80M none | 5 2000000 6000000 38 none
gpfs2 scratch USR 0 1T 3T 0 none | 1 4000000 12000000 0 none
If none is stated under the column grace, everything is fine. There are quota set on the used block storage (column blocks) and also on inode usage, i.e. the number of files and directories (column files). If you exceed a quota but are still below the hard limit, the time until the corresponding resource expires is stated under the column grace (for example: 28 days).

There are also quota set on the HPC-project-group directories in home and scratch. list-quota -g additionally shows the quota for all HPC-project-group directories where the user is a member:
johndoe@licca001:~$ list-quota -g
user quota: johndoe
Block Limits | File
Filesystem Fileset type blocks quota limit in_doubt grace | files quota limit in_doubt grace
gpfs2 home USR 0 512G 1.5T 80M none | 5 2000000 6000000 38 none
gpfs2 scratch USR 0 1T 3T 0 none | 1 4000000 12000000 0 none
group home fileset: home.g.test
Block Limits | File
Filesystem type blocks quota limit in_doubt grace | files quota limit in_doubt grace
gpfs2 FILESET 0 2.5T 7.5T 0 none | 1 10000000 30000000 0 none
group scratch fileset: scratch.g.test
Block Limits | File
Filesystem type blocks quota limit in_doubt grace | files quota limit in_doubt grace
gpfs2 FILESET 0 5.039T 15.12T 0 none | 1 20000000 60000000 0 none

You can exceed the quota for some time (the grace time) up to a hard limit (your quota times three; e.g. a 512G block quota corresponds to a 1.5T hard limit in the output above). The grace time (for blocks and inodes) is set to 30 days. There is a quota monitoring running, which will send you a one-time "warning" once you exceed any of your quota. You will get a second message ("critical") once the remaining grace time is under one week. Please try to clean up your directories at this point at the latest. Open a ticket with our Service-desk if this is a problem. After the end of the grace time, no further writes are possible!
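When a quota warning arrives, it helps to find out where blocks and inodes are being consumed before cleaning up. A minimal sketch (the scratch directory is only an example):

johndoe@licca001:~$ du -sh /hpc/gpfs2/scratch/u/$USER/*              # block usage per top-level entry
johndoe@licca001:~$ find /hpc/gpfs2/scratch/u/$USER -type f | wc -l  # number of files (inodes)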