Pawsey Supercomputing Research Centre

All Systems Operational

Setonix Operational
Login nodes Operational
Data-mover nodes Operational
Slurm scheduler Operational
Setonix work partition Operational
Setonix debug partition Operational
Setonix long partition Operational
Setonix copy partition Operational
Setonix askaprt partition Operational
Setonix highmem partition Operational
Setonix gpu partition Operational
Setonix gpu high mem partition Operational
Setonix gpu debug partition Operational
Lustre filesystems Operational
/scratch filesystem Operational
/software filesystem Operational
/askapbuffer filesystem Operational
/askapingest filesystem Operational
Storage Systems Operational
Acacia - Projects Operational
Banksia Operational
Data Portal Systems Operational
MWA Nodes Operational
CASDA Nodes Operational
Acacia - Ingest Operational
MWA ASVO Operational
ASKAP Operational
ASKAP ingest nodes Operational
ASKAP service nodes Operational
Nimbus Operational
Ceph storage Operational
Nimbus instances Operational
Nimbus dashboard Operational
Nimbus APIs Operational
Central Services Operational
Authentication and Authorization Operational
Service Desk Operational
License Server Operational
Application Portal Operational
Origin Operational
/home filesystem Operational
/pawsey filesystem Operational
Central Slurm Database Operational
Documentation Operational
Visualisation Services Operational
Remote Vis Operational
Vis scheduler Operational
Setonix vis nodes Operational
Nebula vis nodes Operational
Visualisation Lab Operational
Reservation Operational
CARTA - Stable Operational
CARTA - Test Operational
Pawsey Remote VR Operational
The Australian Biocommons Operational
Fgenesh++ Operational
Status legend: Operational | Degraded Performance | Partial Outage | Major Outage | Maintenance
System metrics: Allocated Cores (Setonix); Allocated Nodes (Setonix work partition); Allocated Nodes (Setonix askaprt partition); Active Instances (Nimbus); Active Cores (Nimbus)
May 24, 2025

No incidents reported today.

May 23, 2025

No incidents reported.

May 22, 2025

No incidents reported.

May 21, 2025
Resolved - There have been no further issues with the control plane overnight. We are currently writing a PIR to determine whether there are ways to minimise the impact of such incidents in the future.
May 21, 06:25 AWST
Monitoring - HPE are collecting logs, but Setonix appears to have soldiered on regardless.
May 20, 20:14 AWST
Investigating - Setonix's administrative control plane is unresponsive. Setonix continues to run jobs, but node name resolution is compromised.

A Critical Case has been raised with HPE.

May 20, 19:07 AWST
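As a hedged aside for affected users: one way to check from a login node whether a compute node's name resolves again is to query the system resolver directly. The node name below is a placeholder assumption for illustration; substitute a node from your own job.

```shell
# Placeholder node name (assumption) -- replace with a node allocated to your job.
NODE="nid001000"

# getent consults the same name-service lookup path the system uses;
# it prints the address on success and exits non-zero on failure.
getent hosts "$NODE" && echo "resolves" || echo "resolution failed"
```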
May 20, 2025
May 19, 2025

No incidents reported.

May 18, 2025

No incidents reported.

May 17, 2025

No incidents reported.

May 16, 2025
Resolved - This incident has been resolved.
May 16, 16:03 AWST
Monitoring - Flash pool usage has returned to less concerning levels. We are continuing to monitor usage and are implementing additional systems that migrate data which has not been used in a while from flash to disk, freeing up the highest-speed storage. Pawsey remains impressed with our colleagues' ability to generate data on Setonix faster than we can deal with it, and acknowledges that under normal circumstances we would be celebrating rather than panicking. Thank you for your patience.
May 16, 08:50 AWST
Investigating - There are performance and usability issues affecting the /scratch filesystem:
* Flash storage pools are approaching maximum capacity
* This affects the overall performance and usability of /scratch
* Data generation is outpacing the migration of data to non-flash pools (our mitigation effort)

If you have unnecessary files on /scratch, please remove them ASAP

May 15, 15:23 AWST
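As a hedged illustration of the clean-up request above: the sketch below lists the largest files in a scratch directory that have not been accessed recently, as candidates for removal or archiving. The path layout (`/scratch/$PAWSEY_PROJECT/$USER`) and the 30-day threshold are assumptions for illustration, not Pawsey policy.

```shell
# Assumption: your files live under /scratch/$PAWSEY_PROJECT/$USER.
SCRATCH_DIR="/scratch/${PAWSEY_PROJECT}/${USER}"

# List the 20 largest files not accessed in the last 30 days (size in bytes
# first, then path), as candidates for removal or archiving.
find "$SCRATCH_DIR" -type f -atime +30 -printf '%s\t%p\n' 2>/dev/null \
    | sort -rn | head -n 20
```

Review the list before deleting anything; `-atime` reflects last access, so recently read files are excluded even if they are old.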
May 15, 2025
May 14, 2025

No incidents reported.

May 13, 2025

No incidents reported.

May 12, 2025

No incidents reported.

May 11, 2025

No incidents reported.

May 10, 2025

No incidents reported.