What is the Gurobi Optimization Software Package?
Gurobi is an optimization software package providing state-of-the-art solvers for mathematical programming. The solvers in the Gurobi Optimizer were designed from the ground up to exploit modern architectures and multi-core processors, using the most advanced implementations of the latest algorithms. It includes the following solvers:
- linear programming solver (LP)
- mixed-integer linear programming solver (MILP)
- mixed-integer quadratic programming solver (MIQP)
- quadratic programming solver (QP)
- quadratically constrained programming solver (QCP)
- mixed-integer quadratically constrained programming solver (MIQCP)
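As a small illustration of the kind of input these solvers consume, here is a minimal linear program in the LP file format that Gurobi reads (a purely illustrative toy model, not one of the examples shipped with the LRZ installation):

```text
\ Toy LP: maximize profit from two products sharing two resources
Maximize
 obj: 3 x + 2 y
Subject To
 c0: x + y <= 4
 c1: x + 3 y <= 6
Bounds
 0 <= x <= 3
End
```

Such a file can be passed directly to the command line interface, e.g. `gurobi_cl model.lp`.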
More information can be found on Gurobi's website.
Gurobi Optimizer supports the following interfaces for a variety of programming and modeling languages:
- Object-oriented interfaces for C++, Java, .NET and Python
- Matrix-oriented interfaces for C, MATLAB® and R
- Links to standard modeling languages: AIMMS, AMPL, GAMS and MPL
- Links to Excel through Premium Solver Platform and Risk Solver Platform
Software Producer & Vendor
Gurobi Optimizer is commercial software that requires a valid user license. Until further notice, Gurobi GmbH has granted LRZ a Free Academic Named-User Floating License for the Gurobi software. In essence this means:
- The Gurobi Optimizer can be used on the LRZ Linux Cluster systems free of charge for purely academic purposes. Please refer to the conditions for the use of the Free Academic Licenses on Gurobi's website.
- In particular, under the conditions of this Academic License the Gurobi software may only be used for teaching or for research that will be published in a publicly available article.
- The license has been issued for named, specific LRZ User-IDs. Adding new User-IDs to the granted Free Academic License requires obtaining a new license key from Gurobi GmbH on request. Users interested in being included in the list of eligible Gurobi software users on LRZ systems are asked to file a corresponding LRZ Service Request to the licensing administration team at LRZ.
- Due to the necessary correspondence with Gurobi support, adding a new User-ID to the LRZ license may take between several days and 1-2 weeks, depending on the response time of Gurobi support.
Once a user's User-ID has been added to the LRZ license for the Gurobi software, the user needs to place a file named "gurobi.lic" in their $HOME directory that points to the Gurobi license server at LRZ.
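For a floating (token) license, Gurobi expects a single TOKENSERVER line in gurobi.lic; the hostname below is a placeholder only, the actual LRZ license server name is supplied when your User-ID is registered:

```text
TOKENSERVER=<lrz-gurobi-license-server>
```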
Since March 2021 the currently installed version of the Gurobi Optimizer software is 9.1.1. The previously installed versions, Gurobi 8.1.1 and 9.0.1, can still be used; corresponding modules are provided, and the Gurobi license now serves all available versions (8.1.1, 9.0.1 and 9.1.1).
Gurobi 9.1.1 has been made the default version of the Gurobi software on the LRZ Linux Cluster. With upcoming new versions of Gurobi, older versions will still be supported and provided as separate modules in the module system.
LRZ users are advised NOT to use the Linux Cluster login nodes for any kind of Gurobi or Matlab/Gurobi simulations that could put a heavy processor load on these login nodes or consume large amounts of memory, in order not to disturb other cluster users. For such purposes, large-memory nodes are provided, e.g. in the interactive or serial cluster queues. Furthermore, it is not permitted to use multi-core cluster nodes for non-parallelized, i.e. serial, computations, as this wastes the potential of N-1 cores of the corresponding cluster node and takes urgently needed parallel resources away from other users. If LRZ cluster administrators encounter such non-permitted usage of LRZ cluster resources, the corresponding users may be banned from any further usage of LRZ resources and their LRZ user accounts may be disabled.
Gurobi software is available in the default module system of CoolMUC-2/3 Linux clusters. Available Gurobi software modules can be looked up by typing the command:
> module avail gurobi
Load the preferred Gurobi version environment module, e.g.:
> module load gurobi/9.11
If you intend to use the Matlab interface in combination with the Gurobi Optimizer, you may want to load the following combination of modules:
> module load gurobi/9.11
> module load matlab/R2018b-intel
When combining the Gurobi Optimizer with Matlab, the path to the Gurobi installation needs to be declared for the Matlab script. This can be done in either of two ways:
- Opening Matlab on the login node in GUI mode, including the path to the Matlab subdirectory within the Gurobi installation via "HOME → Set Path" and specifying "Add Folder" with the path "/lrz/sys/applications/gurobi/gurobi911/linux64/matlab". In this case a user-defined file "pathdef.m" will be added by Matlab to your working directory.
- Adding the path specification directly to your Matlab script:
addpath([filesep 'lrz' filesep 'sys' filesep 'applications' filesep 'gurobi' filesep])
addpath([filesep 'lrz' filesep 'sys' filesep 'applications' filesep 'gurobi' filesep 'gurobi911' filesep])
addpath([filesep 'lrz' filesep 'sys' filesep 'applications' filesep 'gurobi' filesep 'gurobi911' filesep 'linux64' filesep])
addpath([filesep 'lrz' filesep 'sys' filesep 'applications' filesep 'gurobi' filesep 'gurobi911' filesep 'linux64' filesep 'matlab' filesep])
Verification of Gurobi Optimizer Functionality
Once your LRZ User-ID has been registered for the Gurobi Optimizer Free Academic Floating License, you can, as an LRZ Linux Cluster user, verify the functionality of the Gurobi Optimizer software and your license access by carrying out the following steps:
ssh lxlogin8.lrz.de
#
# --> following commands are executed on the CoolMUC-3 login node lxlogin8.lrz.de:
#
$HOME> module load gurobi
$HOME> mkdir ~/gurobi_temp
$HOME> cd ~/gurobi_temp
~/gurobi_temp> cp $GUROBI_HOME/examples/data/coins.lp .
~/gurobi_temp> gurobi_cl ./coins.lp
After executing the previous commands, you should end up with a file gurobi.log in the subdirectory ~/gurobi_temp of your $HOME directory, showing the successful execution of the Gurobi command line interface for this simple optimization example.
The content of this gurobi.log file should look like this:
Gurobi Optimizer version 9.1.1 build v9.1.1rc0 (linux64)
Copyright (c) 2020, Gurobi Optimization, LLC
Read LP format model from file ./coins.lp
Reading time = 0.01 seconds
: 4 rows, 9 columns, 16 nonzeros
Optimize a model with 4 rows, 9 columns and 16 nonzeros
Variable types: 4 continuous, 5 integer (0 binary)
Coefficient statistics:
  Matrix range     [6e-02, 7e+00]
  Objective range  [1e-02, 1e+00]
  Bounds range     [5e+01, 1e+03]
  RHS range        [0e+00, 0e+00]
Found heuristic solution: objective -0.0000000
Presolve removed 1 rows and 5 columns
Presolve time: 0.01s
Presolved: 3 rows, 4 columns, 9 nonzeros
Variable types: 0 continuous, 4 integer (0 binary)

Root relaxation: objective 1.134615e+02, 2 iterations, 0.01 seconds

    Nodes    |    Current Node    |     Objective Bounds      |     Work
 Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time

     0     0  113.46154    0    1   -0.00000  113.46154      -     -    0s
H    0     0                     113.4500000  113.46154  0.01%     -    0s

Explored 1 nodes (2 simplex iterations) in 0.03 seconds
Thread count was 24 (of 24 available processors)

Solution count 2: 113.45 -0

Optimal solution found (tolerance 1.00e-04)
Best objective 1.134500000000e+02, best bound 1.134600000000e+02, gap 0.0088%
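The final log line reports the MIP gap, which Gurobi defines as |best bound - best objective| / |best objective|. A small sanity check in Python (the helper function below is our own illustration, not part of any Gurobi API), using the values from the log above:

```python
def relative_mip_gap(best_objective: float, best_bound: float) -> float:
    """Relative MIP gap as Gurobi reports it:
    |best_bound - best_objective| / |best_objective|."""
    return abs(best_bound - best_objective) / abs(best_objective)

# Values from the log above: incumbent 113.45, best bound 113.46
gap = relative_mip_gap(113.45, 113.46)
print(f"gap = {gap:.4%}")  # matches the 0.0088% printed by Gurobi
```

The gap is well below the default MIPGap tolerance of 1.00e-04 quoted in the log, so the solver declares the solution optimal.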
Running Gurobi Optimizer in Parallel on Linux Clusters
The Gurobi Optimizer command line interface supports the use of multiple threads for large parallel optimization runs. For the small coins.lp example above, a submission to the CoolMUC-3 cluster with 64 threads would look like the following. First, write a corresponding SLURM submission shell script containing the call to Gurobi (e.g. with the filename gurobi_mpp3_slurm.sh):
#!/bin/bash
#SBATCH -o ./job.gurobi.%j.%N.out
#SBATCH -D ./
#SBATCH -J gurobi_mpp3
#SBATCH --clusters=mpp3
#SBATCH --get-user-env
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=64   # --- multiples of 64 for mpp3 ---
#SBATCH --mail-type=end
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --time=0:10:00
#----------------------------------------------------
module load slurm_setup
module av gurobi
module load gurobi/9.11
echo ============================ Gurobi Start ======================
cd ~/Gurobi_Slurm_Test
echo gurobi_cl ResultFile=coins.sol Threads=$SLURM_NTASKS coins.lp
gurobi_cl ResultFile=coins.sol Threads=$SLURM_NTASKS coins.lp
echo ============================ Gurobi Stop =======================
Assuming that this SLURM script is stored together with the input data file coins.lp in the user's $HOME subdirectory ~/Gurobi_Slurm_Test, the SLURM job can be submitted on CoolMUC-3 (mpp3) as follows:
ssh lxlogin8.lrz.de
#
# --> following commands are executed on the CoolMUC-3 login node lxlogin8.lrz.de:
#
> cd ~/Gurobi_Slurm_Test
> sbatch gurobi_mpp3_slurm.sh
After the execution of the SLURM job on CoolMUC-3, please check the output in gurobi.log and in the generated job output file. Watch for the statement "Set parameter Threads to value 64".
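Instead of eyeballing the log, the reported thread setting can be extracted with a few lines of Python (a hypothetical helper, not an LRZ or Gurobi tool; the parameter line is quoted from the expected log output):

```python
import re

def threads_from_log(log_text: str):
    """Return the thread count Gurobi reports via the line
    'Set parameter Threads to value N', or None if the line is absent."""
    match = re.search(r"Set parameter Threads to value (\d+)", log_text)
    return int(match.group(1)) if match else None

sample = "Set parameter Threads to value 64\nGurobi Optimizer version 9.1.1\n"
print(threads_from_log(sample))  # 64
```

Such a check is easily added at the end of a submission script to confirm that $SLURM_NTASKS was actually propagated to the solver.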
In case of any issues with the usage of the Gurobi software on LRZ-managed compute resources, or any arising questions, please feel free to contact LRZ support. Please submit your LRZ Service Request with a clear description of the encountered issue, e.g. by indicating "Gurobi Optimizer problem: ..." in the request subject. This will help LRZ operators to assign your support request to the most appropriate LRZ staff member to assist you.