Extended monthly maintenance
Scheduled Maintenance Report for Pawsey Supercomputing Centre
Completed
All HPC systems have been returned to service; the maintenance period completed a day earlier than anticipated.
Posted Sep 01, 2020 - 19:55 AWST
Update
Magnus and Zeus are now available for user access.
Galaxy is operational, but staff are finishing software tasks before opening it to user access.
Posted Sep 01, 2020 - 19:38 AWST
Update
Topaz and Garrawarla have been returned to service and are available for user access.
Posted Sep 01, 2020 - 19:02 AWST
Update
The Lustre filesystems have been brought back online, and staff are now bringing the supercomputing systems back up while performing maintenance and software upgrades. There is still no user access at this time.

The Nimbus network cable replacement completed successfully earlier, and those services are running normally.
Posted Sep 01, 2020 - 18:48 AWST
Update
Electrical work has been completed, and Pawsey staff are now bringing systems back online. We will send further announcements when services are available for public access.
Posted Sep 01, 2020 - 12:54 AWST
In progress
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Posted Sep 01, 2020 - 07:00 AWST
Scheduled
In preparation for installing new equipment in the Pawsey Centre as part of the capital refresh, we need to perform work on the electrical supply to the computer rooms. This requires shutting down a substantial amount of equipment, which is why this maintenance window is longer than usual.
Posted Aug 03, 2020 - 11:48 AWST
This scheduled maintenance affected: Zeus (Zeus login node, Zeus Compute nodes, Galaxy ingest nodes, Data Mover nodes (CopyQ), Slurm Controller (Zeus)), Galaxy (Galaxy Compute nodes, Galaxy login nodes, Slurm Controller (Galaxy)), Topaz (Slurm Controller (topaz), GPU partition), Garrawarla (Garrawarla compute nodes, Slurm Controller (Garrawarla)), Lustre filesystems (/scratch filesystem, /group filesystem, /astro filesystem, /askapbuffer filesystem), and Magnus (Magnus Compute nodes, Magnus login nodes, Slurm Controller (Magnus)).