Gurobi
Gurobi is a proprietary software package for mathematical optimization.
0.0.1 Using Gurobi on CARC systems
Begin by logging in. You can find instructions for this in the Getting Started with Discovery or Getting Started with Endeavour user guides.
You can use Gurobi in either interactive or batch modes. In either mode, first load the corresponding software module:
module load gurobi
To see all available versions of Gurobi, enter:
module spider gurobi
0.0.1.1 Installing a different version of Gurobi
If you require a different version of Gurobi that is not currently installed, please submit a help ticket and we will install it for you.
0.0.1.2 Installing gurobipy
To use Gurobi from within Python, you will need to install the gurobipy package. The following is an example of how to do this using a Python virtual environment:
module purge
module load gurobi/10.0.0
module load gcc/11.3.0
module load python/3.11.3
python3 -m venv $HOME/gurobipy
source $HOME/gurobipy/bin/activate
pip3 install gurobipy==10.0.0
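After the installation, you can quickly confirm that gurobipy imports and can obtain a license. The following is a minimal sketch (the script name check_gurobipy.py is just a placeholder, not part of the installation):
# check_gurobipy.py -- minimal check that gurobipy imports and can obtain a license
import gurobipy as gp
from gurobipy import GRB

# Print the Gurobi library version linked to gurobipy
print(f"Gurobi {GRB.VERSION_MAJOR}.{GRB.VERSION_MINOR}.{GRB.VERSION_TECHNICAL}")

# Creating a model triggers a license check; dispose of it afterwards
m = gp.Model("license_check")
m.dispose()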
0.0.2 Running Gurobi in interactive mode
A common mistake for new users of HPC clusters is to run heavy workloads directly on a login node (e.g., discovery.usc.edu or endeavour.usc.edu). Unless you are only running a small test, please make sure to run your program as a job interactively on a compute node. Processes left running on login nodes may be terminated without warning. For more information on jobs, see our Running Jobs user guide.
To run the Gurobi command line tool interactively on a compute node, follow these two steps:
- Reserve job resources on a node using salloc
- Once resources are allocated, load the required modules and use gurobi_cl
[user@discovery1 ~]$ salloc --time=1:00:00 --ntasks=1 --cpus-per-task=8 --mem=16G --account=<project_id>
salloc: Pending job allocation 24737
salloc: job 24737 queued and waiting for resources
salloc: job 24737 has been allocated resources
salloc: Granted job allocation 24737
salloc: Waiting for resource configuration
salloc: Nodes d05-04 are ready for job
Make sure to change the resource requests (the --time=1:00:00 --ntasks=1 --cpus-per-task=8 --mem=16G --account=<project_id> part after your salloc command) as needed, such as the number of cores and memory required. Also make sure to substitute your project ID; enter myaccount to view your available project IDs.
Once you are granted the resources and logged in to a compute node, load the module and run gurobi_cl on your model file. For example:
[user@d05-04 ~]$ module purge
[user@d05-04 ~]$ module load gurobi/10.0.0
[user@d05-04 ~]$ gurobi_cl model.mps
In this example, model.mps is some model file that you have created. Gurobi will attempt to solve the given model.
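If you installed gurobipy as described above, one way to create such a model file is to build a small model in Python and write it out in MPS format. The following is a rough sketch, not a required workflow; the variables, coefficients, and the script name make_model.py are arbitrary placeholders:
# make_model.py -- build a tiny linear program and write it to model.mps
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("example")

# Two non-negative continuous variables (the default variable type and bounds)
x = m.addVar(name="x")
y = m.addVar(name="y")

# Maximize x + 2y subject to one linear constraint
m.setObjective(x + 2 * y, GRB.MAXIMIZE)
m.addConstr(x + 3 * y <= 4, name="c1")

# Write the model to disk; gurobi_cl model.mps can then solve it
m.write("model.mps")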
Notice that the shell prompt changes from user@discovery1 to user@<nodename> to indicate that you are now on a compute node (e.g., d05-04).
To exit the node and relinquish the job resources, enter exit in the shell. This will return you to the login node:
[user@d05-04 ~]$ exit
exit
salloc: Relinquishing job allocation 24737
[user@discovery1 ~]$
0.0.3 Running Gurobi in batch mode
To run Gurobi in batch mode, submit a job to the Slurm job scheduler:
- Create a Slurm job script
- Submit the job script to the job scheduler using sbatch
A Slurm job script is a special type of Bash shell script that the Slurm job scheduler recognizes as a job. For a job running Gurobi, a Slurm job script should look something like the following:
#!/bin/bash
#SBATCH --account=<project_id>
#SBATCH --partition=main
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=16G
#SBATCH --time=1:00:00
module purge
module load gurobi/10.0.0
gurobi_cl model.mps
Each line is described below:
Command or Slurm argument | Meaning |
---|---|
#!/bin/bash | Use Bash to execute this script |
#SBATCH | Syntax that allows Slurm to read your requests (ignored by Bash) |
--account=<project_id> | Charge compute resources used to <project_id>; enter myaccount to view your available project IDs |
--partition=main | Submit job to the main partition |
--nodes=1 | Use 1 compute node |
--ntasks=1 | Run 1 task (e.g., running a Gurobi model) |
--cpus-per-task=8 | Reserve 8 CPUs for your exclusive use |
--mem=16G | Reserve 16 GB of memory for your exclusive use |
--time=1:00:00 | Reserve resources described for 1 hour |
module purge | Clear environment modules |
module load gurobi/10.0.0 | Load the gurobi environment module |
gurobi_cl model.mps | Use gurobi_cl to solve model.mps |
Make sure to adjust the requested resources based on your needs, but remember that requesting fewer resources typically leads to less queue time for your job. Note that to fully utilize the resources, especially the number of CPUs, you may need to explicitly change the Gurobi model parameters (e.g., the Threads parameter).
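For example, the Gurobi command-line tool accepts parameter settings of the form name=value before the model file (e.g., gurobi_cl Threads=8 model.mps). If you instead solve the model from Python with gurobipy, you can set Threads from the CPU count that Slurm allocated. The following is a hedged sketch; the script name solve_model.py and the file names model.mps and model.sol are placeholders:
# solve_model.py -- solve model.mps with gurobipy, matching threads to the Slurm allocation
import os
import gurobipy as gp

m = gp.read("model.mps")

# Slurm sets SLURM_CPUS_PER_TASK inside the job; fall back to 1 if it is unset
m.Params.Threads = int(os.environ.get("SLURM_CPUS_PER_TASK", "1"))

m.optimize()

# Save the solution next to the model file
m.write("model.sol")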
Save the job script as gurobi.job, for example, and then submit it to the job scheduler with Slurm’s sbatch command:
[user@discovery1 ~]$ sbatch gurobi.job
Submitted batch job 13587
To check the status of your job, enter myqueue. If there is no job status listed, then this means the job has completed.
The results of the job will be logged and, by default, saved to a file of the form slurm-<jobid>.out in the same directory where the job script is located. To view the contents of this file, enter less slurm-<jobid>.out, and then enter q to exit the viewer.
For more information on job status and running jobs, see the Running Jobs user guide.
0.0.4 Additional resources
If you have questions about or need help with Gurobi, please submit a help ticket and we will assist you.