Gurobi Optimization on HPC Systems

What is the Gurobi Optimization Software Package?

Gurobi is a software package for solving optimization problems, providing state-of-the-art solvers for mathematical programming. The solvers in the Gurobi Optimizer were designed from the ground up to exploit modern architectures and multi-core processors, using the most advanced implementations of the latest algorithms. It includes the following solvers:

  • linear programming solver (LP)
  • mixed-integer linear programming solver (MILP)
  • mixed-integer quadratic programming solver (MIQP)
  • quadratic programming solver (QP)
  • quadratically constrained programming solver (QCP)
  • mixed-integer quadratically constrained programming solver (MIQCP)

More information can be found on Gurobi's website.

Gurobi Optimizer supports the following interfaces for a variety of programming and modeling languages:

  • Object-oriented interfaces for C++, Java, .NET and Python
  • Matrix-oriented interfaces for C, MATLAB® and R
  • Links to standard modeling languages: AIMMS, AMPL, GAMS and MPL
  • Links to Excel through Premium Solver Platform and Risk Solver Platform


Software Producer & Vendor

Gurobi GmbH
Ulmenstrasse 37-39
60325 Frankfurt am Main
Germany
Phone: +49 69 667737484
General Information: info@gurobi.com
Support: Gurobi Support Portal
Sales: sales@gurobi.de

Licensing

Gurobi Optimizer is commercial software which requires a valid user license. Until further notice, Gurobi GmbH has granted LRZ a Free Academic Named-User Floating License for the Gurobi software. This essentially implies:

  1. The Gurobi Optimizer can be used on the LRZ Linux Cluster systems for purely academic purposes free of charge. Please refer to the conditions for the use of the Free Academic Licenses on Gurobi's website.
  2. In particular, under the conditions of this Academic License, the Gurobi software may only be used for teaching or for research that will be published in a publicly available article.
  3. The license has been issued for specific, named LRZ user IDs. Adding new user IDs to the granted Free Academic License requires obtaining a new license key from Gurobi GmbH on request. Users interested in being included in the list of eligible Gurobi software users on LRZ systems are asked to file a corresponding LRZ Service Request with the licensing administration team at LRZ.
  4. Due to the necessary correspondence with Gurobi support, adding a new user ID to the LRZ license may take up to several working days, depending on how quickly Gurobi support responds.
  5. Once a user's ID has been added to the LRZ license for the Gurobi software, the user needs to create a file named "gurobi.lic" in their $HOME directory with the following contents, which points to the Gurobi license server at LRZ:

    TOKENSERVER=license1.lrz.de
    PORT=41954
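
As a convenience, this file can also be created directly from the command line on a login node (a minimal sketch using the server name and port quoted above):

$HOME> printf 'TOKENSERVER=license1.lrz.de\nPORT=41954\n' > ~/gurobi.lic
$HOME> cat ~/gurobi.lic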

Current Version

Since July 2024 the currently installed version of the Gurobi Optimizer software is version 11.0.2. The previously installed versions of Gurobi (8.1.1 / 9.0.1 / 9.1.1 / 10.0.0) can still be used; corresponding modules are provided, and the Gurobi license now serves all available versions (v8.1.1, v9.0.1, v9.1.1, v10.0.0 and v11.0.2).

This new version, Gurobi 11.0.2, has been made the default version of the Gurobi software on the LRZ Linux Clusters under SLES15. As new versions of Gurobi are installed, older versions will still be supported and provided as separate modules in the module system.

Getting Started

LRZ users are advised NOT to use Linux Cluster login nodes for any kind of Gurobi or MATLAB/Gurobi simulations which may put heavy processor load on these login nodes or consume large amounts of memory, in order not to disturb other cluster users. For such purposes, large-memory nodes are provided, e.g. in the interactive or serial cluster queues. Furthermore, it is not permitted to use multi-core cluster nodes for non-parallelized, i.e. serial, computations, thereby wasting the potential of N-1 cores of the corresponding cluster node and stealing urgently needed parallel resources from other users. If LRZ cluster administrators encounter such non-permitted usage of LRZ cluster resources, the corresponding users may be banned from any further usage of LRZ resources and their LRZ user accounts may be disabled.

Gurobi software is available in the default module system of the CoolMUC-2/3 Linux clusters. The available Gurobi software modules can be listed with the command:

> module avail gurobi

Load the preferred Gurobi version environment module, e.g.:

> module load gurobi/11.02
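
A quick way to confirm that the module has been loaded is, for example, to query the Gurobi command line tool and the GUROBI_HOME variable set by the module (used in the example below):

> gurobi_cl --version
> echo $GUROBI_HOME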

Verification of Gurobi Optimizer Functionality

Once you have been registered with your LRZ user ID for the Gurobi Optimizer Free Academic Floating License, you can, as an LRZ Linux Cluster user, verify the functionality of the Gurobi Optimizer software and your license access by carrying out the following steps:

ssh lxlogin1.lrz.de
#
# --> following commands are executed on the CoolMUC-2 login node lxlogin1.lrz.de:
#
$HOME> module load gurobi
$HOME> mkdir ~/gurobi_temp
$HOME> cd ~/gurobi_temp
~/gurobi_temp> cp $GUROBI_HOME/examples/data/coins.lp .
~/gurobi_temp> gurobi_cl ./coins.lp

As a result of executing the previous commands, you should end up with a file gurobi.log in the subdirectory ~/gurobi_temp of your $HOME directory, showing the successful execution of the Gurobi command line interface for this simple optimization example.

The content of this gurobi.log file should look like this:

Gurobi Optimizer version 11.0.2 build v11.0.2rc0 (linux64 - "SUSE Linux Enterprise Server 15 SP1")
Copyright (c) 2024, Gurobi Optimization, LLC

Read LP format model from file coins.lp
Reading time = 0.00 seconds
: 4 rows, 9 columns, 16 nonzeros

CPU model: Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz, instruction set [SSE2|AVX|AVX2]
Thread count: 28 physical cores, 56 logical processors, using up to 28 threads

Optimize a model with 4 rows, 9 columns and 16 nonzeros
Model fingerprint: 0x06e334a4
Variable types: 4 continuous, 5 integer (0 binary)
Coefficient statistics:
  Matrix range     [6e-02, 7e+00]
  Objective range  [1e-02, 1e+00]
  Bounds range     [5e+01, 1e+03]
  RHS range        [0e+00, 0e+00]
Found heuristic solution: objective -0.0000000
Presolve removed 1 rows and 5 columns
Presolve time: 0.00s
Presolved: 3 rows, 4 columns, 9 nonzeros
Variable types: 0 continuous, 4 integer (0 binary)
Found heuristic solution: objective 26.1000000

Root relaxation: objective 1.134615e+02, 2 iterations, 0.00 seconds (0.00 work units)

    Nodes    |    Current Node    |     Objective Bounds      |     Work
 Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time

     0     0  113.46154    0    1   26.10000  113.46154   335%     -    0s
H    0     0                     113.3000000  113.46154  0.14%     -    0s
H    0     0                     113.4500000  113.46154  0.01%     -    0s
     0     0  113.46154    0    1  113.45000  113.46154  0.01%     -    0s

Explored 1 nodes (2 simplex iterations) in 0.01 seconds (0.00 work units)
Thread count was 28 (of 56 available processors)

Solution count 4: 113.45 113.3 26.1 -0

Optimal solution found (tolerance 1.00e-04)
Best objective 1.134500000000e+02, best bound 1.134500000000e+02, gap 0.0000%

Wrote result file 'coins.sol'
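
For a quick check of the outcome without reading the whole log, the optimal objective value can, for example, be extracted with grep:

~/gurobi_temp> grep "Best objective" ./gurobi.log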

Running Gurobi Optimizer in Parallel on Linux Clusters

The Gurobi Optimizer command line interface supports the use of multiple threads for large parallel optimization runs. For the small coins.lp example above, a submission to the CoolMUC-2 (cm2_tiny) cluster with 28 threads would look like the following. First, one needs to write a corresponding SLURM submission shell script containing the call to Gurobi (e.g. with the filename gurobi_cm2_tiny_slurm.sh):

#!/bin/bash
#SBATCH -o ./job.gurobi.%j.%N.out
#SBATCH -D ./
#SBATCH -J gurobi_cm2_tiny
#SBATCH --clusters=cm2_tiny
#SBATCH --partition=cm2_tiny
#SBATCH --get-user-env
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=28
# --- 28 cores per node for cm2_tiny ---
#SBATCH --mail-type=end
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --time=0:10:00
#----------------------------------------------------
module load slurm_setup
module av gurobi
module load gurobi/11.02

echo ============================ Gurobi Start ======================
cd ~/Gurobi_Slurm_Test
echo gurobi_cl ResultFile=coins.sol Threads=$SLURM_NTASKS coins.lp
gurobi_cl ResultFile=coins.sol Threads=$SLURM_NTASKS coins.lp
echo ============================ Gurobi Stop =======================
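
As an alternative to passing Threads on the gurobi_cl command line, Gurobi can also read parameter settings from a file named gurobi.env in the working directory; a minimal sketch equivalent to the command line setting above would be to create this file before the job starts:

> echo "Threads 28" > ~/Gurobi_Slurm_Test/gurobi.env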

Assuming that this SLURM script is stored, together with the input data file coins.lp, in the user's $HOME subdirectory ~/Gurobi_Slurm_Test, the SLURM job can be submitted from the CoolMUC-2 login node as follows:

ssh lxlogin1.lrz.de
#
# --> following commands are executed on the CoolMUC-2 login node lxlogin1.lrz.de:
#
> cd ~/Gurobi_Slurm_Test
> sbatch gurobi_cm2_tiny_slurm.sh
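
The status of the submitted job can then be monitored with the standard SLURM commands, e.g. (the cluster name matches the --clusters setting in the script above):

> squeue --clusters=cm2_tiny --user=$USER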

After the execution of the SLURM job on CoolMUC-2, please check the output in gurobi.log and in the generated job output file. Watch for the statement "Set parameter Threads to value 28".
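
The relevant line can be located, for example, with grep in the Gurobi log and the generated job output file (the output filename follows the #SBATCH -o pattern of the script above):

> cd ~/Gurobi_Slurm_Test
> grep "Set parameter Threads" gurobi.log job.gurobi.*.out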

User Support

In case of any issues with the usage of the Gurobi software on LRZ-managed compute resources, or any other questions, please feel free to contact LRZ support. Please submit your LRZ Service Request with a clear specification of the encountered issue with the Gurobi software, e.g. by indicating "Gurobi Optimizer problem: ...." in the request subject. This will help the LRZ operators assign your support request to the most appropriate LRZ staff to assist you.