Using MATLAB

MATLAB is a proprietary scientific computing language and environment.

Using MATLAB on CARC systems

Begin by logging in. You can find instructions for this in the Getting Started with Discovery or Getting Started with Endeavour user guides.

MATLAB can be used in either interactive or batch mode. In either mode, first load a corresponding software module:

module purge
module load matlab/2022a

Older versions of MATLAB are also available. To see all available versions of MATLAB, enter:

module spider matlab

Requesting newer versions of MATLAB

You may need a version of MATLAB that is not currently installed. Licensed software such as MATLAB requires license setup by CARC staff, so if there is a new version of MATLAB that you want to use, please submit a help ticket and we will install it for you.

Installed toolboxes

To check installed toolboxes and their versions, enter the ver command when using MATLAB in interactive mode.
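
For example, a minimal interactive check might look like the following sketch (the Parallel Computing Toolbox is used here only as an illustration):

>> ver                                           % list MATLAB and all installed toolboxes with their versions
>> license('test','Distrib_Computing_Toolbox')   % returns 1 if the Parallel Computing Toolbox is licensed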

MATLAB GUI

The MATLAB GUI is available on compute nodes via our OnDemand service.

Running MATLAB in interactive mode

A common mistake for new users of HPC clusters is to run heavy workloads directly on a login node (e.g., discovery.usc.edu or endeavour.usc.edu). Unless you are only running a small test, please make sure to run your program as a job interactively on a compute node. Processes left running on login nodes may be terminated without warning. For more information on jobs, see our Running Jobs user guide.

To run MATLAB interactively on a compute node, follow these two steps:

  1. Reserve job resources on a node using salloc
  2. Once resources are allocated, load the required modules and enter matlab -nodisplay

[user@discovery1 ~]$ salloc --time=1:00:00 --ntasks=1 --cpus-per-task=8 --mem=16G --account=<project_id>
salloc: Pending job allocation 24316
salloc: job 24316 queued and waiting for resources
salloc: job 24316 has been allocated resources
salloc: Granted job allocation 24316
salloc: Waiting for resource configuration
salloc: Nodes d05-08 are ready for job

Make sure to change the resource requests (the --time=1:00:00 --ntasks=1 --cpus-per-task=8 --mem=16G --account=<project_id> part after your salloc command) as needed to reflect the number of cores and memory required. Also make sure to substitute your project ID; enter myaccount to view your available project IDs.

Once you are granted the resources and logged in to a compute node, load the module and then enter matlab -nodisplay:

[user@d05-08 ~]$ module load matlab/2022a
[user@d05-08 ~]$ matlab -nodisplay

                        < M A T L A B (R) >
              Copyright 1984-2022 The MathWorks, Inc.
         R2022a Update 2 (9.12.0.1956245) 64-bit (glnxa64)
                            May 11, 2022


To get started, type doc.
For product information, visit www.mathworks.com.

>>

Notice that the shell prompt changes from user@discovery1 to user@<nodename> to indicate that you are now on a compute node (e.g., d05-08).

To exit the node and relinquish the job resources, enter exit to exit MATLAB and then enter exit in the shell. This will return you to the login node:

>> exit
[user@d05-08 ~]$ exit
exit
salloc: Relinquishing job allocation 24316
[user@discovery1 ~]$

Running MATLAB in batch mode

To submit jobs to the Slurm job scheduler, you will need to use MATLAB in batch mode. There are a few steps to follow:

  1. Create a MATLAB script
  2. Create a Slurm job script that runs the MATLAB script
  3. Submit the job script to the job scheduler using sbatch

Your MATLAB script should consist of the sequence of MATLAB commands needed for your analysis or simulation.
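
For example, a minimal script.m might look like the following; the file name and its contents are only placeholders for your own analysis:

% script.m -- placeholder example of a MATLAB analysis script
A = rand(1000);                % generate a 1000 x 1000 random matrix
B = A * A';                    % compute a matrix product
save('results.mat','B');       % save the result to a .mat file
disp('Analysis complete.')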

A Slurm job script is a special type of Bash shell script that the Slurm job scheduler recognizes as a job. For a job running MATLAB, a Slurm job script should look something like the following:

#!/bin/bash

#SBATCH --account=<project_id>
#SBATCH --partition=main
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=16G
#SBATCH --time=1:00:00

module purge
module load matlab/2022a

# Do not include the .m extension in your script name (script.m)
matlab -batch 'script'

Each line is described below:

Command or Slurm argument     Meaning
#!/bin/bash                   Use Bash to execute this script
#SBATCH                       Syntax that allows Slurm to read your requests (ignored by Bash)
--account=<project_id>        Charge compute resources used to <project_id>; enter myaccount to view your available project IDs
--partition=main              Submit job to the main partition
--nodes=1                     Use 1 compute node
--ntasks=1                    Run 1 task (e.g., running a MATLAB script)
--cpus-per-task=8             Reserve 8 CPUs for your exclusive use
--mem=16G                     Reserve 16 GB of memory for your exclusive use
--time=1:00:00                Reserve resources described for 1 hour
module purge                  Clear environment modules
module load matlab/2022a      Load the matlab environment module
matlab -batch 'script'        Use matlab to run script.m in batch mode

Make sure to adjust the resources requested based on your needs, but remember that requesting fewer resources leads to shorter queue times for your job. Note that to fully utilize the resources, especially the number of CPUs, you may need to explicitly change your MATLAB code (see the section on the parallel computing toolbox below).

You can develop and edit MATLAB scripts and job scripts to run on CARC clusters in a few ways: on your local computer, then transferring the files to one of your directories on CARC file systems; with the Files app available on our OnDemand service; or with one of the available text editor modules (nano, micro, vim, or emacs).

Save the job script as matlab.job, for example, and then submit it to the job scheduler with Slurm's sbatch command:

[user@discovery1 ~]$ sbatch matlab.job
Submitted batch job 170554

To check the status of your job, enter myqueue. If no job status is listed, the job has completed.

The results of the job will be logged and, by default, saved to a plain-text file of the form slurm-<jobid>.out in the directory the job script was submitted from. To view the contents of this file, enter less slurm-<jobid>.out, and then enter q to exit the viewer.

For more information on running and monitoring jobs, see the Running Jobs guide.

Using the parallel computing toolbox

To run a MATLAB script using the parallel computing toolbox, first create a cluster profile to start a pool of parallel workers. There are two types of clusters: local and remote. A local cluster runs on only a single compute node; a remote cluster is necessary for running scripts across multiple compute nodes. You can also use GPUs to accelerate certain calculations (see the section on GPUs below). Additional help can be found in the MATLAB parallel computing documentation.

Setting up a local cluster (single node)

To set up a local cluster, add lines like the following to your MATLAB script:

pc = parallel.cluster.Local;   % create a local cluster object (workers run on the current node)
job_folder = fullfile('/scratch1/',getenv('USER'),getenv('SLURM_JOB_ID'));   % job-specific folder for worker data
mkdir(job_folder);
set(pc,'JobStorageLocation',job_folder);   % store worker job data in that folder
ncores = str2num(getenv('SLURM_CPUS_PER_TASK')) - 1;   % leave one CPU for overhead
pool = parpool(pc,ncores)   % start the pool of parallel workers

The ncores variable is defined as one less than the requested CPUs per task, to reserve one CPU for overhead that is used when starting the pool of workers.

Once the pool is started, you can then use parallel commands like parfor or spmd in your script.
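
For example, a parfor loop like the following minimal sketch distributes its iterations across the pool of workers (the computation shown is only an illustration):

results = zeros(1,100);
parfor i = 1:100
    results(i) = max(svd(rand(200)));   % each iteration runs independently on a worker
end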

When the pool is no longer needed, such as at the end of the script, add the following line to shut down the pool:

delete(pool)

Setting up a remote cluster (multiple nodes)

To start a worker pool larger than a single node can accommodate (based on the number of CPUs), you will need to set up a remote cluster. With the remote cluster, MATLAB will submit a Slurm job on your behalf to allocate resources. The initial Slurm job that you submit only needs 1 CPU; the Slurm job that MATLAB submits will allocate the additional CPUs.

To set up a remote cluster, add lines like the following to your MATLAB script:

pc = parallel.cluster.Slurm;   % create a Slurm cluster object; MATLAB submits a Slurm job for the workers
job_folder = fullfile('/scratch1/',getenv('USER'),getenv('SLURM_JOB_ID'));   % job-specific folder for worker data
mkdir(job_folder);
set(pc,'JobStorageLocation',job_folder);   % store worker job data in that folder
set(pc,'HasSharedFilesystem',true);   % client and workers share the same file system
set(pc,'SubmitArguments','--partition=main --time=1:00:00 --mem-per-cpu=4G');   % sbatch options for the worker job
set(pc,'ResourceTemplate','--ntasks=^N^');   % ^N^ is replaced with the number of workers
pool = parpool(pc,16)   % start a pool of 16 workers

Make sure to modify the SubmitArguments attribute and the number of workers (e.g., 16) as necessary. For example, you may wish to change the partition, time, or memory requirements for the pool of workers. Please note that the time requested for the worker pool should be less than the time requested for the initial Slurm job that you submitted. Also keep in mind that the job MATLAB submits may have to wait in the queue.

Once the pool is started, you can then use parallel commands like parfor or spmd in your script.
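
For example, an spmd block like the following minimal sketch runs once on every worker in the pool, with labindex identifying each worker (the computation shown is only an illustration):

spmd
    partial = sum(rand(1,1e6));                   % each worker computes its own partial result
    fprintf('Worker %d finished.\n', labindex);   % labindex identifies the worker
end
first_value = partial{1};                         % partial is a Composite; index it to retrieve a worker's value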

When the pool is no longer needed, such as at the end of the script, add the following line to shut down the pool:

delete(pool)

We also recommend a setup like the following in the job script to avoid worker communication issues:

#!/bin/bash

#SBATCH --account=<project_id>
#SBATCH --partition=main
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=4G
#SBATCH --time=2:00:00

module purge
module load matlab/2022a

ulimit -s unlimited
ulimit -u 23741
ulimit -n 16384

export SLURM_MPI_TYPE=pmi2

# Do not include the .m extension in your script name (script.m)
matlab -batch 'script'

The time request for the job script should be longer than the time request for the remote cluster in the MATLAB script.

Using GPUs

Some calculations can be accelerated by using GPUs. If the input data can be stored as a gpuArray and the function supports gpuArray data, then the function will automatically run on a GPU. To use multiple GPUs, you also need to set up a parallel pool, with the number of workers equal to the number of GPUs to be used. For more information, see the MATLAB GPU computing documentation.
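
For example, a sketch like the following moves data to the GPU, computes on it, and gathers the result back to CPU memory (the computation shown is only an illustration):

A = rand(5000);          % data in CPU memory
G = gpuArray(A);         % transfer the data to the GPU
F = fft(G);              % fft runs on the GPU because its input is a gpuArray
result = gather(F);      % transfer the result back to CPU memory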

The Slurm job script will need to request a GPU. For example, to request a V100 GPU:

#!/bin/bash

#SBATCH --account=<project_id>
#SBATCH --partition=gpu
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=2
#SBATCH --gpus-per-task=v100:1
#SBATCH --mem-per-cpu=4G
#SBATCH --time=1:00:00

module purge
module load matlab/2022a

# Do not include the .m extension in your script name (script.m)
matlab -batch 'script'

Also see our Using GPUs user guide.

Additional resources

If you have questions about or need help with MATLAB, please submit a help ticket and we will assist you.

MATLAB website
MATLAB documentation
MATLAB parallel computing documentation
MATLAB GPU computing documentation
