
24th Gauss Call for Large Scale Projects


Twice per year, the Gauss Centre for Supercomputing (GCS) issues a Call for Large-Scale Projects, usually at the end of winter and at the end of summer.

The 24th Gauss Call for Large Scale Projects will be open

20 July to 17 August 2020, 17:00 CEST (strict deadline)

The call covers the period 1 November 2020 to 31 October 2021.

Applications are eligible from German universities and publicly funded German research institutions, e.g., the Max Planck Society and the Helmholtz Association. Researchers from outside Germany may apply through PRACE.

Important: Change of Budget Policy for Project Extensions on SuperMUC-NG

Beginning with this call, the LRZ will change its policy regarding the extension of projects: unused compute time budget for Gauss Large-Scale Projects will be cut off at the start of the new granting period. This applies equally, whether the project was a regular or a large-scale project before.

Example: A current Gauss Large-Scale Project with 50 million core-hours on SuperMUC-NG ends 31 October 2020 and has a remaining compute budget of 10 million core-hours. The PI applies for an extension of the project, asking for 75 million core-hours for the next call period. The GCS steering committee grants 60 million core-hours based on the reviews. On 1 November 2020, the project starts with a budget of 60 million core-hours. The remaining 10 million core-hours from the previous granting period could not be consumed in time and are therefore cut off. This new policy ensures comparability between project applications and helps the LRZ to evenly distribute the available compute time budget.

You can still request a cost-neutral prolongation of your project to consume a remaining budget, but no new compute time will be awarded.
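In short, the budget at the start of a new granting period is simply the newly granted amount; any leftover is forfeited unless a cost-neutral prolongation is requested. The arithmetic can be sketched as follows (a hypothetical helper for illustration, not an LRZ tool):

```python
def budget_at_renewal(granted_new: float, remaining_old: float,
                      cost_neutral_prolongation: bool = False) -> float:
    """Compute-time budget (in million core-hours) at the start of a new
    granting period, under the LRZ policy described above.

    - Regular extension: the unused budget from the previous period is
      cut off, so only the newly granted amount is available.
    - Cost-neutral prolongation: only the remaining old budget may be
      consumed; no new compute time is awarded.
    """
    if cost_neutral_prolongation:
        return remaining_old   # consume the leftover only
    return granted_new         # leftover is forfeited

# The example from the text: 10 M core-h remain, 60 M core-h are granted.
print(budget_at_renewal(granted_new=60, remaining_old=10))  # 60
print(budget_at_renewal(granted_new=0, remaining_old=10,
                        cost_neutral_prolongation=True))    # 10
```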

Supercomputing at the leading edge

The Gauss Centre for Supercomputing (GCS) provides computing power and services of the highest performance class for computational sciences and engineering at its three member sites in Garching (Leibniz Supercomputing Centre, LRZ), Jülich (Jülich Supercomputing Centre, JSC), and Stuttgart (High Performance Computing Center Stuttgart, HLRS). To ensure the most efficient utilisation of these highly valuable resources, GCS provides its users with world-leading support, education, and dissemination of best practices and methods in simulation science. The three members focus on different topics, with some overlap due to the centres' traditional user bases or specific system requirements: while LRZ supports all scientific fields equally, JSC focuses on fundamental and applied sciences, and HLRS specialises in engineering sciences and global system science. GCS aims, in particular, at innovative and scientifically challenging large-scale projects that cannot be carried out within smaller infrastructures. Such projects will also benefit most from the existing successful support structures within the GCS and from their continuous synchronisation and optimisation. Please be aware of the different priorities of the GCS member sites when you apply for computing time.

State-of-the-art systems

The GCS offers the highest level of computing and networking infrastructure.

  • JSC provides computing time on the modular supercomputer JUWELS (Jülich Wizard for European Leadership Science). The 2,511 nodes of its Cluster Module are equipped with dual-socket Intel Skylake Platinum 8168 CPUs; in addition, 56 dual Intel Xeon Gold 6148 nodes are each equipped with 4 NVIDIA Volta GPUs, yielding a total performance of about 12 PF/s. Furthermore, the JUWELS Booster Module is currently being installed and will be available for the first time with this call. The Booster Module comprises 936 nodes, each equipped with two AMD EPYC Rome 7502 CPUs, 512 GB DDR memory, and 4 NVIDIA A100 GPUs, adding up to about 75 PF/s. JUWELS thus provides 87 PF/s in total to its users.

  • LRZ provides SuperMUC-NG. It is equipped with 6,480 dual-socket nodes with Intel Xeon Platinum 8174 processors (48 cores/node), consisting of 6,336 thin nodes with 96 GByte of main memory and 144 additional fat nodes with 768 GByte of main memory. SuperMUC-NG delivers a peak performance of 26.9 PF/s.
  • HLRS provides the Hawk system with 5,632 nodes, each equipped with AMD EPYC 7742 processors (Zen2, aka “Rome”), offering 128 cores and 256 GByte of memory per node. Applicants should port their applications to the new system as soon as possible; HLRS support is available, and any required support should be outlined in the project application.

The systems within the GCS are continuously upgraded in a round-robin fashion.

Large-Scale Projects  

Large-scale projects and highly scalable parallel applications are characterised by large computing time requirements, not only for short time frames, but often for longer time periods. Projects are classified as “Large-Scale” if they require

  • >= 100 million core-hours on Hawk, or
  • >= 45,000 EFLOP on JUWELS, or
  • >= 45 million core-hours on SuperMUC-NG

per year. Please note that the architecture and sustainable performance of a core of each system may widely differ and that the “core-hours” of the systems are not comparable or interchangeable.
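The classification rule above can be sketched as a small lookup (a hypothetical helper for illustration, not part of the application process; thresholds and units are taken from the call as stated):

```python
# Yearly large-scale thresholds per system, as stated in the call.
# Note the units differ: Hawk and SuperMUC-NG are measured in core-hours,
# JUWELS in EFLOP, so values are not comparable across systems.
LARGE_SCALE_THRESHOLDS = {
    "Hawk": 100e6,         # core-hours per year
    "JUWELS": 45_000,      # EFLOP per year
    "SuperMUC-NG": 45e6,   # core-hours per year
}

def is_large_scale(system: str, requested_per_year: float) -> bool:
    """True if a yearly request qualifies as a GCS Large-Scale Project."""
    return requested_per_year >= LARGE_SCALE_THRESHOLDS[system]

print(is_large_scale("SuperMUC-NG", 50e6))  # True  -> GCS large-scale review
print(is_large_scale("Hawk", 80e6))         # False -> handled by the centre
```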

For these large-scale projects a competitive review and resource allocation process is established by the GCS. Requests above these limits will be processed according to joint procedures of the GCS and will be reviewed in a national context. Requests below these limits and requests for test projects will be directly processed by the individual member centres.

Answering the Call

Leading, ground-breaking projects should deal with complex, demanding, innovative simulations that would not be possible without the GCS infrastructure and that can benefit from the exceptional resources provided by GCS. Applications for a large-scale project must be made by filling in the appropriate electronic application form, which can be accessed from the GCS web page.

Please use the template for the project description of your GCS large-scale application, which can be reached from the above web page and is provided in PDF, DOCX, and LaTeX formats.

Note that the regular application forms of the GCS member centres can also be reached from there.

Please note:

  • Projects with a running large-scale grant must clearly indicate and justify this.
  • Projects targeting multiple GCS platforms must clearly indicate and justify this.
  • Projects applying for an extension must clearly indicate the differences from the previous application in the project description and must have submitted the reports for the previous application.
  • Accepted large-scale projects must fulfil their reporting obligations.
  • Project descriptions must not exceed 18 pages.
  • Grants from or applications to all German computing centres and PRACE have to be reported in the online application forms.

The proposals for large-scale projects will be reviewed with respect to their technical feasibility and peer-reviewed for a comparative scientific evaluation. On the basis of this evaluation by a GCS committee, the projects will be approved for a period of one year and given their allocations.

Criteria for decision

Applications for compute resources are evaluated only according to their scientific excellence and technical feasibility.

  • The proposed scientific tasks must be scientifically challenging, and their treatment must be of substantial interest.
  • Clear scientific goals and verifiable milestones on the way to reach these goals must be specified.
  • The implementation of the project must be technically feasible on the available computing systems, and must be in reasonable proportion to the performance characteristics of these systems.
  • The Principal Investigator must have a proven scientific record and must be able to successfully accomplish the proposed tasks. In particular, applicants must possess the necessary specialised know-how for the effective use of high-end computing systems. This has to be demonstrated in the application for compute resources, e.g. by presenting work done on smaller computing systems, scaling studies, etc.
  • The specific features of the high-end computers should be optimally exploited by the program implementations. This will be checked regularly during the course of the project.

Further information:

Application process

All applications for projects on SuperMUC-NG are managed through GCS-JARDS, a tool hosted by the Gauss Centre for Supercomputing (GCS). After finalising your application within the GCS-JARDS website, you will be asked to print out and sign the "Principal Investigator’s Agreement for Access to HPC Resources", then scan the signed document and return it by email. Once your project is approved, your SuperMUC-NG project will be created by LRZ. GCS-JARDS is also used to manage the review process, collect status reports, final reports, and dissemination material, and to manage project extensions.

Address of the GCS-Coordination Office for Large Scale Calls

GCS-Coordination Office
Jülich Supercomputing Centre
Forschungszentrum Jülich
52425 Jülich

