High-Performance Computing

Last updated March 04, 2024

Our high-performance computing resources are the backbone of everything we do at the Center for Advanced Research Computing (CARC).

What is high-performance computing?

Computational research often requires resources that exceed those of a personal laptop or desktop computer. High-performance computing (HPC) aggregates the resources of individual computers (known as nodes) into a cluster whose nodes work together to perform advanced, specialized computing jobs.

Many academic fields, including epigenetics, geophysics, materials science, engineering, natural language translation, and health sciences, utilize high-performance computing to advance their research beyond what would be possible with a personal computer.

As the amount of data used in research continues to grow with the popularity of such technologies as artificial intelligence (AI) and advanced data analysis, high-performance computing is becoming increasingly necessary for technological advancement.

Discovery cluster

CARC launched its high-performance computing cluster, Discovery, in August 2020. The Discovery cluster marks a significant upgrade to CARC’s cyberinfrastructure and the first step in a major, user-focused overhaul of the program. This cluster includes additional compute nodes and a rebuilt software stack, as well as new system configurations to better serve CARC users. Discovery consists of two shared login nodes and a total of around 20,000 CPU cores in around 500 compute nodes. Of these, over 200 nodes are equipped with graphics processing units (GPUs), with a total of over 180 NVIDIA GPUs available. The typical compute node has dual 8- to 16-core processors and resides on a 200 Gigabits-per-second (Gbps) NDR InfiniBand backbone.
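As a rough, illustrative sketch, the Python snippet below shows one way to survey the compute nodes described above from a Discovery login node by calling the Slurm scheduler’s sinfo command (Slurm manages jobs on the CARC clusters; see the OnDemand section below). The format specifiers are standard Slurm options rather than anything CARC-specific.

    # Sketch: list each node's name, CPU count, memory, and generic resources
    # (including GPUs) by running Slurm's sinfo from Python. Assumes it is run
    # on a login node where the Slurm client commands are available.
    import subprocess

    result = subprocess.run(
        ["sinfo", "-N", "-o", "%N %c %m %G"],  # node, CPUs, memory (MB), GRES
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)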

Discovery includes an array of scientific software packages, both licensed and open source, for engineering, molecular simulation, and computational chemistry. Researchers can also install software packages or develop their own code within their project’s allotted storage.

Discovery is free to use for all USC faculty, research staff, and graduate students (with the approval of their faculty advisor). For detailed information on Discovery’s computing resources, see the Discovery Resource Overview.

Endeavour condo cluster

To provide more comprehensive support to the USC research community, CARC built the Endeavour condo cluster, which gives researchers a way to customize their high-performance computing resources.

The Condo Cluster Program (CCP) was launched in December 2020 to serve USC researchers who require dedicated resources for their work. Compute nodes leased through the CCP form CARC’s Endeavour condo cluster. The CCP gives researchers the convenience of having their own dedicated compute nodes without the responsibility of purchasing and maintaining the hardware themselves. The program operates on two models, an annual subscription and a traditional system purchase, to offer researchers flexible and efficient options for their resources. All hardware is purchased and maintained by CARC for the duration of the lease or subscription term.

For more information on the CCP, including details on the two purchase models and pricing, see the Condo Cluster Program pages.

CARC OnDemand

CARC’s OnDemand service provides users with web access to the Discovery and Endeavour HPC clusters, including file storage systems. OnDemand offers:

  • Easy file management
  • Command line shell access
  • Slurm job management (see the example below)
  • Access to interactive applications, including Jupyter notebooks and RStudio Server

OnDemand is available to all users. For more information on how to use this service, see the CARC OnDemand pages.
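For example, through OnDemand’s command line shell access (or on any login node), a batch job can be handed to Slurm with sbatch. The Python sketch below submits a small job using sbatch’s --wrap option; the partition name and resource requests are illustrative placeholders, not CARC-recommended values.

    # Sketch: submit a minimal batch job to Slurm from Python.
    # The partition name ("main") and the resource requests are placeholders;
    # consult the CARC user guides for appropriate values.
    import subprocess

    result = subprocess.run(
        ["sbatch",
         "--job-name=example",
         "--partition=main",            # placeholder partition name
         "--ntasks=1",
         "--cpus-per-task=4",
         "--mem=8G",
         "--time=00:30:00",
         "--wrap", "echo Hello from $(hostname)"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)  # e.g. "Submitted batch job 123456"

On success, sbatch prints the assigned job ID, which can then be monitored with squeue or through OnDemand’s job management pages.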

Data transfer nodes

CARC has two dedicated, high-speed, 100 Gbps data transfer nodes that are especially useful for large transfers. The Discovery and Endeavour login nodes have a 40 Gbps connection, which is adequate for most transfers.
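As an illustration, a large directory can be pushed through a data transfer node with a standard tool such as rsync. The Python sketch below wraps one such call; the hostname, username, and paths are hypothetical placeholders, and the actual transfer node addresses are documented on the Data Management pages.

    # Sketch: copy a local directory to CARC storage via a data transfer node.
    # The hostname, username, and remote path below are hypothetical
    # placeholders; substitute the real transfer node address and your own
    # project directory.
    import subprocess

    subprocess.run(
        ["rsync", "-avh", "--progress",
         "results/",                                        # local source
         "ttrojan@transfer.example.edu:/project/ttrojan_123/results/"],
        check=True,
    )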

For more information on CARC’s data transfer services, see the Data Management pages.

Software stack

CARC offers a comprehensive software stack on both the Discovery cluster and the Endeavour condo cluster. The software stack allows users to find and load software using the Lmod module system.

The available software includes Singularity, MATLAB, Mathematica, and COMSOL, among many other packages. For more information on CARC’s software stack, see the Software pages.
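For scripted workflows, it may also help to know that Lmod provides a Python hook: invoking the lmod command with its python shell target prints Python statements that apply a module operation to the current environment. The sketch below is a minimal example of that pattern; it assumes the standard LMOD_CMD environment variable set by Lmod’s init scripts, and the module name is only a placeholder (run module avail in a shell to see what is actually installed).

    # Sketch: load an Lmod module from inside a Python process.
    # Lmod's "python" shell target prints Python statements that update
    # os.environ; exec'ing them applies the change to this process.
    import os
    import subprocess

    def module(*args):
        lmod_cmd = os.environ["LMOD_CMD"]   # set by Lmod's init scripts
        proc = subprocess.run([lmod_cmd, "python", *args],
                              capture_output=True, text=True)
        exec(proc.stdout)                   # e.g. os.environ["PATH"] = "..."

    module("load", "matlab")                # placeholder module name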

Operating system

CARC runs a customized distribution of the Community Enterprise Operating System (CentOS), built with the publicly available RPM Package Manager (RPM). CentOS is a high-quality Linux distribution that gives CARC complete control over its open-source software packages and can be fully tailored to advanced research computing needs without license fees.

CARC’s distribution of CentOS 7 has been modified with minor bug fixes and site-specific behavior, and many desktop and clustering-related packages have been added to the installation.

White papers, tutorials, FAQs, and other documentation on CentOS can be found on the official CentOS website.

Artemis private cloud

CARC launched Artemis in August 2023 as a cost-effective and comprehensive solution for cloud computing at USC.

Artemis is CARC’s private, on-premises cloud computing platform. It complements existing CARC systems and services (the Discovery and Endeavour clusters, file systems, etc.) by offering researchers access to virtual machines (VMs) on which they can run alternative operating system environments and deploy their own resources. Built on OpenNebula, Artemis provides a variety of VMs and microVMs for CARC users.

The development of Artemis was made possible by the 2020 NSF award “CC* Compute: A Customizable, Reproducible, and Secure Cloud Infrastructure as a Service for Scientific Research in Southern California” (NSF award #2019220).

For more information, see the Artemis user guides.