Sidebar 7: Data-intensive computing, cherax, ruby, HSM, DMF and the Scientific Computing Data Store – a retrospective

30th April 2021

These pages attempt to give some of the history of CSIRO’s use of computing in its research, focussing mainly on the large shared systems and services.


Last updated: 2 Sep 2021.

Added link to Why HSM? presentation.
Robert C. Bell

With the commencement of service of CSIRO’s first Cray Y-MP (“cherax”) in March 1990, and the start of Cray’s Data Migration Facility on 14th November 1991, a unique partnership developed – a high-performance compute service and a large-scale data repository, as closely coupled as possible. When tape mounting was automated in June 1993 with the installation of a StorageTek Automated Tape Library, the foundation was set for a platform supporting large-scale computing on large-scale data (large-scale for the time).

Users were freed from the need to deal with mountable media (tapes, floppy discs, diskettes, CDs), and the users’ data was preserved safely and carried forward through many generations of tape technology, all done transparently behind the users’ backs on their behalf – 13 generations of drive/media so far, going from 240 Mbyte to 20 Tbyte (and more with compression) on a single tape cartridge.  There was a succession of hosts – three Cray vector systems (cheraxes running UNICOS) and four SGI hosts (three cheraxes and ruby, running Linux).

Users had virtually infinite storage capacity – though at times very little of it was available on-line.  The Hierarchical Storage Management (HSM) software would automatically recall data from lower levels when it was required, or users could issue a single command to recall sets of files in batches, as sketched below.  It was somewhat difficult for new users to get used to (to them, DMF stood for “Don’t Migrate my Files”), but experienced users valued the ability to have large holdings of data in one place.
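For readers who never used those hosts, the sketch below illustrates the batch-recall workflow.  It assumes the standard DMF user commands (dmget to stage files back on-line, dmls to show their migration state) are on the PATH, as they were on cherax and ruby; the wrapper script and file names are illustrative only, not part of the original service.

    #!/usr/bin/env python3
    # A minimal sketch, assuming the DMF user commands dmget and dmls are
    # available: recall a set of migrated files in one batch so that a
    # following analysis job does not block on one implicit recall per file.
    import subprocess
    import sys

    def show_state(paths):
        # List the files with their DMF migration state (on-line, dual-state,
        # off-line, ...), assuming dmls accepts ls-style options.
        subprocess.run(["dmls", "-l"] + paths, check=True)

    def recall(paths):
        # One dmget for the whole set lets DMF batch and order the tape
        # reads; the command returns once the files are back on-line.
        subprocess.run(["dmget"] + paths, check=True)

    if __name__ == "__main__":
        files = sys.argv[1:]        # e.g. a month of model output files
        if not files:
            sys.exit("usage: recall.py FILE [FILE ...]")
        show_state(files)           # before: some files may be off-line
        recall(files)               # stage everything back from tape
        show_state(files)           # after: files should be on-line again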

Unlike many sites, the Data Store was directly accessible as the /home filesystem on the hosts.  This overcame the problem of managing a large shared filesystem: as it filled, DMF took care of it by copying data to lower-cost media and removing the data (though not the metadata) from the on-line storage.  Other sites (such as NCI, Pawsey and the main US supercomputer centres) ran their mass storage as separate systems, with users having to explicitly send data from the compute systems to the storage servers and explicitly recall it: users then had a minimum of two areas to manage, rather than just one.

One of the main users ran atmospheric models on various HPC platforms in Australia, but always copied data back to the Data Store for safe-keeping, and to enable analysis to be carried out in one place.

The Data Store continues after the de-commissioning of ruby on 30th April 2021, as the filesystem will be available (via NFS) on the Scientific Computing cluster systems.  However, there is no longer a user-accessible host with the Data Store as its home filesystem.

Below is a graph showing the total holdings in the Data Store since 1992.  The compound annual growth factor over that period was 1.59: on average, the amount stored increased by 59% each year.
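To put that rate in perspective, compounding the quoted factor over the roughly 29 years from 1992 to 2021 gives the total growth it implies; the few lines below only illustrate the arithmetic, using the years and the factor quoted above.

    # Illustrative arithmetic only: total growth implied by a compound
    # annual growth factor of 1.59 between 1992 and 2021.
    factor_per_year = 1.59
    years = 2021 - 1992                  # about 29 years of growth
    total_growth = factor_per_year ** years
    print(f"about {total_growth:,.0f}x growth over {years} years")
    # At that rate, the holdings would have grown by several hundred
    # thousand times over the life of the Data Store.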

The Data Store depended heavily on the strong support from the vendors (principally Cray Research, SGI, StorageTek and IBM), and on the superb support from the systems staff – principally Peter Edwards, but also Ken Ho Le, Virginia Norling, Jeroen van den Muyzenberg and latterly Igor Zupanovic.

Thanks Team!

Rob.

P.S. Note that CSIRO developed its own operating system (DAD) and its own HSM (the Document Region) in the late 1960s, using drums, disc and tape (manually mounted), with automatic recall of files, and integration with the job scheduler so that jobs blocked until the required files had been recalled!  See Sidebar 1.

P.P.S.  Here’s a link to a talk given in 2012 entitled “Why HSM?”
