# Julia
> **Experimental support:** Support for the Julia programming language on Polaris is currently experimental. This guide provides a set of best practices, but you may encounter unexpected issues.
## Introduction
Julia is a high-level, high-performance programming language designed for technical and scientific computing. It combines the ease of use of dynamic languages with the performance of compiled languages, making it well-suited for large-scale simulations and data analysis.
This guide details how to configure and run Julia on the Polaris supercomputer, focusing on leveraging the system's key architectural features for large-scale parallel and GPU-accelerated computing.
> **Contributing:** This guide is a first draft of the Julia documentation for Polaris. If you have suggestions or find errors, please open a pull request or contact us by opening a ticket at the ALCF Helpdesk. All of the source files used in this documentation are located at https://github.com/anlsys/julia_alcf. Feel free to open PRs!
## Julia Setup
Julia is available on Polaris as a module.
We recommend setting your `JULIA_DEPOT_PATH` to a project directory `PROJECT` on Eagle or Flare for faster file access and to avoid filling up your home directory.
Add the following to your shell configuration file (e.g., ~/.bashrc or ~/.bash_profile):
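The exact depot location is site- and project-specific; the snippet below is a sketch, assuming a project directory on Eagle (replace `PROJECT` with your own project name):

```shell
# Store Julia packages and artifacts in project space instead of the home directory.
# The path below is an assumed example; substitute your own project directory.
export JULIA_DEPOT_PATH=/lus/eagle/projects/PROJECT/julia_depot
```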
If `JULIA_DEPOT_PATH` is not set, it defaults to `~/.julia`, and a warning is printed when you load the module.
### Loading Julia
Load the Julia module:
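A minimal sketch, assuming the module is simply named `julia` (this command is only available on Polaris):

```shell
module load julia
```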
By default, this loads the latest stable version of Julia. To load a specific version:
```shell
module load julia/1.12  # Latest version
module load julia/1.11  # Previous version
module load julia/1.10  # LTS (Long Term Support)
```
### Version Policy
Polaris maintains three Julia versions:
- Latest stable release (currently 1.12): The most recent stable version with the newest features and performance improvements
- Previous version (currently 1.11): The previous stable release for compatibility with recent projects
- LTS (Long Term Support) (currently 1.10): Provides long-term stability with bug fixes but no new features, ideal for production workloads requiring consistency
When a new version is released, the oldest non-LTS version is retired and removed from the system, and the LTS version is updated according to the Julia LTS release schedule.
## Configuring the Programming Environment
To leverage Polaris's architecture, you must configure Julia to use the system's optimized libraries for MPI.jl, CUDA.jl, and HDF5.jl. For a modern, interactive development experience, we recommend using Visual Studio Code with the official Julia and Remote - SSH extensions.
The Julia module on Polaris is pre-configured with system-specific preferences (via a `LocalPreferences.toml` in the system load path) to ensure these packages use the correct system libraries (MPICH, CUDA 12.6, HDF5).
Install the required packages in your Julia environment with the following commands:
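A sketch of the installation, assuming the three package names above, run from the Julia REPL with your project environment activated:

```julia
# Add the MPI, CUDA, and HDF5 bindings to the active environment.
using Pkg
Pkg.add(["MPI", "CUDA", "HDF5"])
```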
**Note:** `MPIPreferences` does not need to be explicitly added, as it is a dependency of MPI.jl.
### CUDA-Aware MPI
CUDA-aware MPI is enabled by default on Polaris. You can pass CuArray objects directly to MPI.jl functions without explicit host-device transfers, enabling efficient GPU-to-GPU communication across nodes:
```julia
using CUDA, MPI

MPI.Init()

# Create a CuArray and pass it directly to MPI operations
data = CUDA.rand(Float64, 100)
MPI.Allreduce!(data, +, MPI.COMM_WORLD)  # GPU-to-GPU communication
```
### Verify Configuration on a Compute Node
The Polaris login nodes do not have GPU access. You must request an interactive job to test your GPU configuration.
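For example, an interactive job can be requested with `qsub`; the queue, filesystems, walltime, and project name below are placeholders to adjust for your allocation:

```shell
# Request one node interactively (replace PROJECT with your allocation).
qsub -I -l select=1 -l walltime=0:30:00 -l filesystems=home:eagle -q debug -A PROJECT
```

Once on the compute node, you can check GPU visibility from Julia with `julia -e 'using CUDA; CUDA.versioninfo()'`.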
## Example Julia Code for Approximating Pi
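A minimal sketch of one way to approximate pi with MPI.jl, using Monte Carlo sampling; the file name `pi.jl`, the sample count, and the averaging scheme are assumptions for illustration:

```julia
# pi.jl -- Monte Carlo approximation of pi, parallelized with MPI.jl.
using MPI

# Estimate pi by sampling n random points in the unit square and
# counting the fraction that fall inside the quarter circle.
function approx_pi(n)
    hits = 0
    for _ in 1:n
        x, y = rand(), rand()
        hits += (x^2 + y^2 <= 1.0)
    end
    return 4.0 * hits / n
end

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)

# Each rank computes an independent estimate; rank 0 averages them.
local_pi = approx_pi(1_000_000)
sum_pi = MPI.Reduce(local_pi, +, comm; root=0)

if rank == 0
    println("pi ≈ ", sum_pi / nranks)
end
```

Run it with, e.g., `mpiexec -n 4 julia --project pi.jl`.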
## Job Submission Script
This PBS script requests resources and launches the Julia application using `mpiexec`:
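A sketch of such a script, assuming the application file is named `pi.jl` and four MPI ranks per node (one per GPU); the project name, queue, filesystems, and walltime are placeholders:

```shell
#!/bin/bash -l
#PBS -l select=2:system=polaris
#PBS -l walltime=0:30:00
#PBS -l filesystems=home:eagle
#PBS -q debug
#PBS -A PROJECT

cd ${PBS_O_WORKDIR}
module load julia

# Compute the total rank count from the node list PBS provides.
NNODES=$(wc -l < ${PBS_NODEFILE})
NRANKS_PER_NODE=4                        # one MPI rank per GPU
NTOTRANKS=$(( NNODES * NRANKS_PER_NODE ))

mpiexec -n ${NTOTRANKS} --ppn ${NRANKS_PER_NODE} julia --project pi.jl
```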