RIS has announced an upcoming HPC Training Series designed to equip both beginners and advanced users with essential computational skills. The training sessions will focus on speeding up C code and scaling AI models on RIS Compute2.
Training sessions available:
- Introduction to Parallel Computing (C & OpenMP)
  Best for: Researchers using traditional simulation codes or those new to parallel logic.
  Topics covered: Shared memory programming, thread management, and loop parallelization.
  Date and Time: March 23, 2026, from 2:00 PM to 3:00 PM CDT. Registration is open.
- AI Environments: PyTorch and Container Technologies
  Best for: Data scientists and AI researchers transitioning from local machines to the cluster.
  Topics covered: Creating reproducible environments with Singularity/Apptainer and running PyTorch on RIS Compute2 (a minimal container smoke test appears after this list).
  Date and Time: March 27, 2026, from 11:00 AM to 12:00 PM CDT. Registration is open.
- Scale Your AI: Multi-Node Training and Profiling
  Best for: Power users training large models who require multi-node speed and efficiency.
  Topics covered: Multi-node PyTorch (DDP), Slurm orchestration, and bottleneck analysis using NVIDIA Nsight (see the DDP sketch after this list).
  Date and Time: March 31, 2026, from 1:30 PM to 2:30 PM CDT. Registration is open.
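To give a flavor of the second session, here is a minimal sketch of the kind of smoke test one might run inside an Apptainer container before committing GPU hours: it checks that PyTorch can see a GPU and completes one forward/backward pass. The script is illustrative, not part of the RIS curriculum, and assumes PyTorch is installed in the container image.

```python
import torch

# Confirm the container sees a GPU before launching a full training job.
print(f"PyTorch {torch.__version__}; CUDA available: {torch.cuda.is_available()}")

if torch.cuda.is_available():
    device = torch.device("cuda")
    print(f"Device: {torch.cuda.get_device_name(device)}")

    # A tiny forward/backward pass exercises the CUDA toolchain end to end.
    model = torch.nn.Linear(64, 1).to(device)
    x = torch.randn(8, 64, device=device)
    model(x).sum().backward()
    print("Forward/backward pass succeeded.")
```

A script like this would be run with GPU support enabled, e.g. `apptainer exec --nv pytorch.sif python check_gpu.py`, where `pytorch.sif` is a hypothetical image name.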
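The third session's core technique, multi-node PyTorch DDP, looks roughly like the sketch below: each process joins a distributed process group and wraps its model in DistributedDataParallel, which averages gradients across all ranks automatically. This is a generic minimal example under standard PyTorch assumptions, not RIS course material; the model, batch size, and hyperparameters are placeholders.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, WORLD_SIZE, and the rendezvous address.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    device = torch.device(f"cuda:{local_rank}")

    # Toy model; a real job would build its actual network here.
    model = torch.nn.Linear(128, 10).to(device)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    # One synthetic training step; DDP all-reduces gradients across ranks
    # during backward(), so every rank applies the same averaged update.
    inputs = torch.randn(32, 128, device=device)
    targets = torch.randint(0, 10, (32,), device=device)
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()

    if dist.get_rank() == 0:
        print(f"step complete, loss = {loss.item():.4f}")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Under Slurm, a job script would typically launch one such process per GPU across the allocated nodes with `torchrun` (or `srun`); analyzing the resulting run for bottlenecks with NVIDIA Nsight is covered in the same session.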
Reasons to Attend:
As computational demands grow, efficiency remains essential. These workshops provide the hands-on skills needed to:
- Reduce Walltime: Obtain results faster.
- Ensure Reproducibility: Build once, run anywhere with containers.
- Maximize Resources: Use profiling tools to prevent wasting expensive GPU hours.
Prerequisites: General familiarity with the Linux command line is recommended for all sessions; workshop-specific prerequisites are listed on the registration pages. All attendees must have an RIS Compute2 account, and gaining access to Compute2 requires onboarding with RIS.
RIS encourages attendees to take advantage of these training opportunities to push the boundaries of their research. For further inquiries, contact the RIS Service Desk.