{| class="wikitable"
|-
| Availability: April 15, 2025
|-
| Login node: vulcan.alliancecan.ca
|-
| Globus endpoint: [https://app.globus.org/file-manager?origin_id=97bda3da-a723-4dc0-ba7e-728f35183b43 Vulcan Globus v5]
|-
| System Status Page: https://status.alliancecan.ca/system/Vulcan
|-
| Status: Testing
|}
Vulcan is a cluster dedicated to the needs of the Canadian scientific artificial intelligence community. It is located at the [https://www.ualberta.ca/ University of Alberta] and is managed by the University of Alberta and [https://amii.ca/ Amii]. The cluster is named after the town of [https://en.wikipedia.org/wiki/Vulcan,_Alberta Vulcan, Alberta], in the southern part of the province.
This cluster is part of the Pan-Canadian AI Compute Environment (PAICE).
==Site-specific Policies==
Internet access is not generally available from the compute nodes. A globally available Squid proxy is enabled by default with certain domains whitelisted. Contact [[Technical support|technical support]] if you are not able to connect to a domain and we will evaluate whether it belongs on the whitelist.
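To check whether a particular domain is reachable through the proxy, a quick test from a compute node looks like the following; this is a minimal sketch, and <code>pypi.org</code> is only a placeholder domain:
<pre>
# Quick reachability test from inside a job. The proxy is assumed to be
# enabled by default, so no extra configuration should be needed.
curl --silent --head --max-time 10 https://pypi.org/ > /dev/null \
  && echo "domain reachable through the proxy" \
  || echo "domain not reachable -- ask technical support about whitelisting"
</pre>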
The maximum duration of a job is 7 days.
Vulcan is currently open to Amii-affiliated principal investigators holding CCAI Chairs. Broader access will be announced at a later date.
==Vulcan hardware specifications==
{| class="wikitable sortable"
! Performance Tier !! Nodes !! Model !! CPU !! Cores !! System Memory !! GPUs per node !! Total GPUs
|-
| Standard Compute || 205 || Dell R760xa || 2 x Intel Xeon Gold 6448Y || 64 || 512 GB || 4 x NVIDIA L40S 48 GB || 820
|}
==Storage System==
Vulcan's storage system uses a combination of NVMe flash and HDD storage running on the Dell PowerScale platform, with a total usable capacity of approximately 5 PB. The home, scratch, and project spaces all reside on the same Dell PowerScale system.
{| class="wikitable sortable"
|-
| Home space||
* Location of /home directories.
* Each /home directory has a small fixed [[Storage and file management#Filesystem_quotas_and_policies|quota]].
* Not allocated via [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service RAS] or [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition RAC]. Larger requests go to the /project space.
* Has daily backup.
|-
| Scratch space ||
* For active or temporary (scratch) storage.
* Not allocated.
* Large fixed [[Storage and file management#Filesystem_quotas_and_policies|quota]] per user.
* Inactive data will be [[Scratch purging policy|purged]].
|-
| Project space ||
* Large adjustable [[Storage and file management#Filesystem_quotas_and_policies|quota]] per project.
* Has daily backup.
|}
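To see your current usage against these quotas, Alliance clusters generally provide the <code>diskusage_report</code> utility; assuming it is also available on Vulcan, a typical check is:
<pre>
# Summarize usage and quotas on home, scratch and project.
# (diskusage_report is the standard Alliance utility; its presence on
# Vulcan is an assumption.)
diskusage_report
</pre>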
==Network Interconnects==
Standard Compute nodes are interconnected with 100 Gbps Ethernet, with RoCE (RDMA over Converged Ethernet) enabled.
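If you need to confirm that RDMA-aware software (for example, NCCL for multi-node training) can see the RoCE interface, the standard rdma-core tools can be used. This is a generic sketch rather than a Vulcan-specific procedure:
<pre>
# List RDMA devices and their link state (rdma-core tools).
# On a RoCE fabric, link_layer should report "Ethernet", not "InfiniBand".
ibv_devinfo | grep -E 'hca_id|state|link_layer'
</pre>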
==Scheduling==
The Vulcan cluster uses the Slurm scheduler to run user workloads. The basic scheduling commands are similar to those on other national systems.
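A minimal GPU job script might look like the following. The account name is a placeholder, and the GRES type <code>l40s</code> is an assumption to be checked against the cluster's actual Slurm configuration:
<pre>
#!/bin/bash
#SBATCH --account=def-someuser     # replace with your own allocation
#SBATCH --gpus-per-node=l40s:1     # one L40S GPU; GRES name is an assumption
#SBATCH --cpus-per-task=8
#SBATCH --mem=64G
#SBATCH --time=1-00:00:00          # 1 day; must not exceed the 7-day limit

# Placeholder workload; replace with your own commands.
python train.py
</pre>
Submit the script with <code>sbatch job.sh</code> and monitor it with <code>squeue -u $USER</code>.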
==Software==
* Module-based software stack.
* Both the standard Alliance software stack and cluster-specific software are available; see the example below.
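For example, software can be discovered and loaded with the usual module commands; the versions shown here are illustrative only:
<pre>
module avail                  # list software visible in the current stack
module spider cuda            # search for all versions of a package
module load python/3.11       # load a module (version is an example)
module list                   # show currently loaded modules
</pre>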