MSC Nastran on HPC Systems


Announcement of 22.01.2021: Due to the very limited use of MSC products and MSC licenses on the High-Performance Systems at LRZ, support for this software, including installations and the license server, will be discontinued by September 2021.
If you would like to contact us on this matter, please open a ticket with the subject "MSC Nastran" via https://servicedesk.lrz.de/.

What is MSC Nastran?

MSC Nastran is a multidisciplinary structural analysis application used by engineers to perform static, dynamic, and thermal analysis across the linear and nonlinear domains, complemented with automated structural optimization and award-winning embedded fatigue analysis technologies, all enabled by high-performance computing [1].

Getting Started

Once you are logged in on any of the LRZ clusters / SuperMUC systems, please check the available versions:

  • Outdated information: At the moment, the Nastran versions 20182, 20190, 20191 and 20200 are available.
    Updated information: The MSC/Hexagon software is currently no longer installed on the LRZ High-Performance Computers.

> module avail mscnastran

MSC Nastran can be used by loading the appropriate module:

> module load mscnastran

More information about the Nastran installation and its environment settings can be displayed with:

> module show mscnastran

SSH User Environment Settings

As described in the LRZ documentation "Using Secure Shell on LRZ HPC Systems → LRZ internal configuration", some applications require special SSH keys for cluster-internal communication. MSC Nastran belongs to this class of applications.

Therefore, MSC Nastran users should follow the SSH setup steps outlined here. This section is only relevant for setting up ssh to work without passwords (and without ssh-agent) inside the cluster. This is needed, e.g., for batch process startup in some MPI implementations. None of the keys (public or private) generated here should be transferred outside the cluster, since this may cause security problems. The following commands must therefore be executed on the LRZ target system, i.e. one of the Linux Cluster or SuperMUC-NG login nodes.

The required commands and their purpose:

  • mkdir ~/.ssh
    Create in your $HOME directory a hidden subfolder for the SSH keys and configuration files (if it does not exist yet).
  • chmod 700 ~/.ssh
    Make this .ssh subfolder accessible only to yourself, not to the project group and not to others. This is a mandatory setting; with different permission settings, SSH connections might not work correctly on the LRZ Linux Clusters.
  • ssh-keygen -t ed25519
    Generate an Ed25519 key. The command asks where to save the key; accept the default by pressing ENTER. Next you are prompted for a passphrase; press ENTER twice (no passphrase).
  • cd ~/.ssh
    cat id_ed25519.pub >> authorized_keys
    Change into the .ssh subfolder and add the internal public key to the list of authorized keys.
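
As a minimal sketch, the whole sequence can also be run non-interactively as shown below (assuming the default key file name id_ed25519 and that no key with this name exists yet); the last command is only an optional check that a password-less cluster-internal login works.

mkdir -p ~/.ssh
chmod 700 ~/.ssh
# Generate an Ed25519 key with an empty passphrase at the default location
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
# Authorize the new key for cluster-internal logins
cat ~/.ssh/id_ed25519.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# Optional check (accepts the node's host key on first use): should not ask for a password
ssh -o BatchMode=yes -o StrictHostKeyChecking=accept-new localhost hostname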

Nastran on SuperMUC-NG 

Use the batch scripts below to start a batch job on SuperMUC-NG. Please adjust nodes and ntasks to the requirements of your job. Also, please make yourself aware of which partition to use, depending on the resources your jobs require. More information can be found under Job Processing with SLURM on SuperMUC-NG.

Please note: Loading an MSC Nastran module will automatically set all necessary environment variables for you.

Single node / Shared memory / OpenMP Job
#!/bin/bash
#SBATCH -J nastran_job
#SBATCH -o ./%J.out
#SBATCH -D ./
#Notification and type
#SBATCH --mail-type=NONE  #ALL,NONE,BEGIN,END
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --time=00:10:00
#SBATCH --no-requeue
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --account=<project ID>
#SBATCH --partition=test
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=48
#SBATCH --switches=1
##Important

module load slurm_setup 
module load mscnastran/20200

module li

#============================
# User defined variables
#============================

data_file=<data file>
nastran_binary='nast20200'

#============================
#Do not change anything below
#============================
mkdir -p $SCRATCH/nastran
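# Nastran command keywords used below:
#   jid  = Nastran input (job) file
#   smp  = number of shared-memory (OpenMP) threads
#   sdir = scratch directory for temporary files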
$nastran_binary   jid=$data_file \
                  smp=$SLURM_CPUS_PER_TASK \
                  sdir=$SCRATCH/nastran
#===========================
Multi-node / Distributed memory / MPI Job
#!/bin/bash
#SBATCH -J nastran_job
#SBATCH -o ./%J.out
#SBATCH -D ./
#Notification and type
#SBATCH --mail-type=NONE  #ALL,NONE,BEGIN,END
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --time=00:10:00
#SBATCH --no-requeue
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --account=<project ID>
#SBATCH --partition=test
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48
#SBATCH --switches=1
##Important

module load slurm_setup 
module load mscnastran/20200

module li

#============================
# User defined variables
#============================

data_file=<data file>
nastran_binary='nast20200'

#============================
#Do not change anything below
#============================
hostlist=`scontrol show hostname | paste -d: -s`
echo $hostlist
mkdir -p $SCRATCH/nastran
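# Additional keywords for the DMP (distributed memory parallel) run below:
#   dmp   = number of MPI tasks
#   hosts = colon-separated list of the allocated compute nodes
# s.rsh=ssh and the I_MPI_FABRICS symbol configure how the MPI tasks are started across nodes.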
$nastran_binary   jid=$data_file \
                  dmp=$SLURM_NTASKS \
                  hosts=$hostlist \
                  s.rsh=ssh \
                  symbol=I_MPI_FABRICS=shm:tcp \
                  sdir=$SCRATCH/nastran
#===========================
Multi-node / Hybrid Job
#!/bin/bash
#SBATCH -J nastran_job
#SBATCH -o ./%J.out
#SBATCH -D ./
#Notification and type
#SBATCH --mail-type=NONE  #ALL,NONE,BEGIN,END
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --time=00:10:00
#SBATCH --no-requeue
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --account=<project ID>
#SBATCH --partition=test
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=2
#SBATCH --cpus-per-task=24
#SBATCH --switches=1
##Important

module load slurm_setup 
module load mscnastran/20200

module li

#============================
# User defined variables
#============================

data_file=<data file>
nastran_binary='nast20200'

#============================
#Do not change anything below
#============================
hostlist=`scontrol show hostname | paste -d: -s`
echo $hostlist
mkdir -p $SCRATCH/nastran
$nastran_binary   jid=$data_file \
                  dmp=$SLURM_NTASKS \
                  smp=$SLURM_CPUS_PER_TASK \
                  hosts=$hostlist \
                  s.rsh=ssh \
                  symbol=I_MPI_FABRICS=shm:tcp \
                  sdir=$SCRATCH/nastran
#===========================
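
To actually submit one of the scripts above, save it under a file name of your choice and hand it to sbatch; the running job can then be monitored with squeue. The file name nastran_job.slurm below is only an example:

> sbatch nastran_job.slurm
> squeue -u $USER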

Nastran on CoolMUC-3 

Use the batch scripts below to start a batch job on the CoolMUC-3 cluster. Please adjust nodes and ntasks to the requirements of your job. Also, please make yourself aware of which partition to use, depending on the resources your jobs require. More information can be found under Job Processing with SLURM on CoolMUC-3.

Please note: Loading an MSC Nastran module will automatically set all necessary environment variables for you.

Single node / Shared memory / OpenMP Job
#!/bin/bash
#SBATCH -J nastran_job
#SBATCH -o ./%J.out
#SBATCH -D ./
#Notification and type
#SBATCH --mail-type=NONE  #ALL,NONE,BEGIN,END
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --time=00:10:00
#SBATCH --no-requeue
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --clusters=mpp3
#SBATCH --nodes=1
#SBATCH --cpus-per-task=64
##Important

module load slurm_setup 
module load mscnastran

module li

#============================
# User defined variables
#============================

data_file=<data file>
nastran_binary='nast20190'

#============================
#Do not change anything below
#============================
mkdir -p $SCRATCH/nastran
$nastran_binary   jid=$data_file \
                  smp=$SLURM_CPUS_PER_TASK \
                  sdir=$SCRATCH/nastran
#===========================
Multi-node / Distributed memory / MPI Job
#!/bin/bash
#SBATCH -J nastran_job
#SBATCH -o ./%J.out
#SBATCH -D ./
#Notification and type
#SBATCH --mail-type=NONE  #ALL,NONE,BEGIN,END
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --time=00:10:00
#SBATCH --no-requeue
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --clusters=mpp3
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=64
##Important

module load slurm_setup 
module load mscnastran

module li

#============================
# User defined variables
#============================

data_file=<data file>
nastran_binary='nast20190'

#============================
#Do not change anything below
#============================
hostlist=`scontrol show hostname | paste -d: -s`
echo $hostlist
mkdir -p $SCRATCH/nastran
$nastran_binary   jid=$data_file \
                  dmp=$SLURM_NTASKS \
                  hosts=$hostlist \
                  s.rsh=ssh \
                  symbol=I_MPI_FABRICS=shm:tcp \
                  sdir=$SCRATCH/nastran
#===========================
Multi-node / Hybrid Job
#!/bin/bash
#SBATCH -J nastran_job
#SBATCH -o ./%J.out
#SBATCH -D ./
#Notification and type
#SBATCH --mail-type=NONE  #ALL,NONE,BEGIN,END
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --time=00:10:00
#SBATCH --no-requeue
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --clusters=mpp3
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=2
#SBATCH --cpus-per-task=32
##Important

module load slurm_setup 
module load mscnastran

module li

#============================
# User defined variables
#============================

data_file=<data file>
nastran_binary='nast20190'

#============================
#Do not change anything below
#============================
hostlist=`scontrol show hostname | paste -d: -s`
echo $hostlist
mkdir -p $SCRATCH/nastran
$nastran_binary   jid=$data_file \
                  dmp=$SLURM_NTASKS \
                  smp=$SLURM_CPUS_PER_TASK \
                  hosts=$hostlist \
                  s.rsh=ssh \
                  symbol=I_MPI_FABRICS=shm:tcp \
                  sdir=$SCRATCH/nastran
#===========================
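
Submission works in the same way as on SuperMUC-NG. Since the scripts above target the mpp3 cluster, the cluster should also be specified when querying the queue (nastran_job.slurm is again only an example file name):

> sbatch nastran_job.slurm
> squeue -M mpp3 -u $USER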

Nastran on CoolMUC-2 

Use the batch scripts below to start a batch job on the CoolMUC-2 cluster. Please adjust nodes and ntasks to the requirements of your job. Also, please make yourself aware of which partition to use, depending on the resources your jobs require. More information can be found under Job Processing with SLURM on CoolMUC-2.

Please note: Loading an MSC Nastran module will automatically set all necessary environment variables for you.

Single node / Shared memory / OpenMP Job
#!/bin/bash
#SBATCH -J nastran_job
#SBATCH -o ./%J.out
#SBATCH -D ./
#Notification and type
#SBATCH --mail-type=NONE  #ALL,NONE,BEGIN,END
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --time=00:10:00
#SBATCH --no-requeue
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --clusters=cm2_tiny
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=28
##Important

module load slurm_setup 
module load mscnastran/20200

module li

#============================
# User defined variables
#============================

data_file=<data file>
nastran_binary='nast20200'

#============================
#Do not change anything below
#============================
mkdir -p $SCRATCH/nastran
$nastran_binary   jid=$data_file \
                  smp=$SLURM_CPUS_PER_TASK \
                  sdir=$SCRATCH/nastran
#===========================
Multi-node / Distributed memory / MPI Job
#!/bin/bash
#SBATCH -J nastran_job
#SBATCH -o ./%J.out
#SBATCH -D ./
#Notification and type
#SBATCH --mail-type=NONE  #ALL,NONE,BEGIN,END
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --time=00:10:00
#SBATCH --no-requeue
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --clusters=cm2
#SBATCH --partition=cm2_std
#SBATCH --qos=cm2_std
#SBATCH --nodes=3
#SBATCH --ntasks-per-node=28
##Important

module load slurm_setup 
module load mscnastran/20200

module li

#============================
# User defined variables
#============================

data_file=<data file>
nastran_binary='nast20200'

#============================
#Do not change anything below
#============================
hostlist=`scontrol show hostname | paste -d: -s`
echo $hostlist
mkdir -p $SCRATCH/nastran
$nastran_binary   jid=$data_file \
                  dmp=$SLURM_NTASKS \
                  hosts=$hostlist \
                  s.rsh=ssh \
                  symbol=I_MPI_FABRICS=shm:tcp \
                  sdir=$SCRATCH/nastran
#===========================
Multi-node / Hybrid Job
#!/bin/bash
#SBATCH -J nastran_job
#SBATCH -o ./%J.out
#SBATCH -D ./
#Notification and type
#SBATCH --mail-type=NONE  #ALL,NONE,BEGIN,END
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --time=00:10:00
#SBATCH --no-requeue
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --clusters=cm2
#SBATCH --partition=cm2_std
#SBATCH --qos=cm2_std
#SBATCH --nodes=3
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=28
##Important

module load slurm_setup 
module load mscnastran/20200

module li

#============================
# User defined variables
#============================

data_file=<data file>
nastran_binary='nast20200'

#============================
#Do not change anything below
#============================
hostlist=`scontrol show hostname | paste -d: -s`
echo $hostlist
mkdir -p $SCRATCH/nastran
$nastran_binary   jid=$data_file \
                  dmp=$SLURM_NTASKS \
                  smp=$SLURM_CPUS_PER_TASK \
                  hosts=$hostlist \
                  s.rsh=ssh \
                  symbol=I_MPI_FABRICS=shm:tcp \
                  sdir=$SCRATCH/nastran
#===========================
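
Submission again works in the same way. Depending on which of the scripts above is used, the job runs on the cm2_tiny or the cm2 cluster, so both clusters can be specified when querying the queue (nastran_job.slurm is only an example file name):

> sbatch nastran_job.slurm
> squeue -M cm2,cm2_tiny -u $USER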

MSC Nastran Help

Once a module is loaded: 

> nast<VERSION> help -h

for example,

> nast20200 help -h


User Support

In case of any issues with the usage of the MSC Nastran software on LRZ-managed compute resources, or any other questions, please feel free to contact the LRZ support.

But please note that LRZ currently does not provide access to MSC/Hexagon products anymore.

Note

LRZ currently does not host any MSC licenses anymore, due to the declining usage of the software over the past years.
People interested in using the software on LRZ High-Performance Computing systems need to contact MSC/Hexagon themselves and provide the MSC/Hexagon licenses on a license server that they operate on their own.
The Enterprise products of the MSC Software solutions typically split into the following software packages:

  • Acumen
  • Adams
  • Akusmod
  • AMS
  • Dytran
  • Easy5
  • Enterprise Mvision
  • Explore
  • FlightLoads
  • GS-Mesher
  • Marc
  • MaterialCenter Databanks
  • MD Nastran
  • Mechanism Pro
  • MSC Nastran
  • MSC Software Development Kit
  • Mvision
  • Patran
  • Robust Design
  • SimDesigner Enterprise
  • SimManager
  • Sinda
  • Sofy
  • SuperForm

As of now (i.e. April 2024), we do not provide any of the above packages as a module on the LRZ clusters / supercomputers. LRZ no longer owns any of the above MSC/Hexagon licenses, so we are not able to provide the related installation packages either. In case of interest, please contact MSC/Hexagon directly.
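
If you run MSC Nastran with a self-operated MSC/Hexagon license server, the Nastran tools have to be told where to find it, which is typically done via an environment variable. The sketch below is only an illustration: the host name and port are hypothetical, and the exact variable name should be checked in the MSC/Hexagon licensing documentation for your product version.

# Point the MSC tools to a self-operated license server (hypothetical host and port)
export MSC_LICENSE_FILE=27500@license.example.com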