9. List of Pre-Configured Resources

The following resources are supported by the underlying layers of ExTASY.

Note

To configure your applications to run on these machines, you need to add entries to your kernel definitions that specify, for each machine, the environment to be loaded for execution, the executable, its arguments, and so on.
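
Below is a minimal sketch of what such a kernel-definition entry can look like. It follows the machine_configs layout used by Ensemble MD Toolkit kernel plugins; the kernel name, resource label, module name, and executable shown here are illustrative assumptions, not values prescribed by this guide:

    # Hypothetical kernel definition entry (sketch only): per-machine
    # environment, pre-execution steps, executable and MPI flag.
    _KERNEL_INFO = {
        "name": "md.gromacs",                      # illustrative kernel name
        "machine_configs": {
            "*": {                                 # fallback for unlisted resources
                "environment": {},
                "pre_exec":    [],
                "executable":  "gmx",
                "uses_mpi":    False,
            },
            "xsede.stampede": {                    # a resource label from this list
                "environment": {"OMP_NUM_THREADS": "1"},
                "pre_exec":    ["module load gromacs"],
                "executable":  "gmx_mpi",
                "uses_mpi":    True,
            },
        },
    }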

9.1. RESOURCE_LOCAL

9.1.1. LOCALHOST_SPARK_ANA

Your local machine, with Apache Spark enabled.

  • Resource label : local.localhost_spark_ana
  • Raw config : resource_local.json
  • Note : To use the ssh schema, make sure that ssh access to localhost is enabled.
  • Default values for ComputePilotDescription attributes:
  • queue         : None
  • sandbox       : $HOME
  • access_schema : local
  • Available schemas : local, ssh

9.1.2. LOCALHOST_YARN

Your local machine.

Uses the YARN resource management system.

  • Resource label : local.localhost_yarn
  • Raw config : resource_local.json
  • Note : To use the ssh schema, make sure that ssh access to localhost is enabled.
  • Default values for ComputePilotDescription attributes:
  • queue         : None
  • sandbox       : $HOME
  • access_schema : local
  • Available schemas : local, ssh

9.1.3. LOCALHOST_ANACONDA

Your local machine.

To be used when the anaconda python interpreter is enabled.

  • Resource label : local.localhost_anaconda
  • Raw config : resource_local.json
  • Note : To use the ssh schema, make sure that ssh access to localhost is enabled.
  • Default values for ComputePilotDescription attributes:
  • queue         : None
  • sandbox       : $HOME
  • access_schema : local
  • Available schemas : local, ssh

9.1.4. LOCALHOST

Your local machine.

  • Resource label : local.localhost
  • Raw config : resource_local.json
  • Note : To use the ssh schema, make sure that ssh access to localhost is enabled.
  • Default values for ComputePilotDescription attributes:
  • queue         : None
  • sandbox       : $HOME
  • access_schema : local
  • Available schemas : local, ssh
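
The resource label is what you pass as the resource attribute of a ComputePilotDescription. A minimal sketch using the RADICAL-Pilot API is shown below; the runtime and core counts are arbitrary example values:

    import radical.pilot as rp

    # Sketch: submit a pilot to the local machine using the label above.
    session = rp.Session()
    pmgr    = rp.PilotManager(session=session)

    pdesc = rp.ComputePilotDescription()
    pdesc.resource = "local.localhost"   # resource label from this entry
    pdesc.runtime  = 30                  # minutes (example value)
    pdesc.cores    = 4                   # example value

    pilot = pmgr.submit_pilots(pdesc)
    # ... attach a UnitManager and submit ComputeUnits here ...
    session.close()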

9.1.5. LOCALHOST_SPARK

Your local machine, with Apache Spark enabled.

  • Resource label : local.localhost_spark
  • Raw config : resource_local.json
  • Note : To use the ssh schema, make sure that ssh access to localhost is enabled.
  • Default values for ComputePilotDescription attributes:
  • queue         : None
  • sandbox       : $HOME
  • access_schema : local
  • Available schemas : local, ssh

9.2. RESOURCE_EPSRC

9.2.1. ARCHER

The EPSRC Archer Cray XC30 system (https://www.archer.ac.uk/)

  • Resource label : epsrc.archer
  • Raw config : resource_epsrc.json
  • Note : Always set the project attribute in the ComputePilotDescription or the pilot will fail (see the sketch after this list).
  • Default values for ComputePilotDescription attributes:
  • queue         : standard
  • sandbox       : /work/`id -gn`/`id -gn`/$USER
  • access_schema : ssh
  • Available schemas : ssh
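
Since a pilot on ARCHER fails without a project, set the project attribute explicitly. A minimal sketch follows; the budget code is a placeholder, not a real allocation:

    import radical.pilot as rp

    # Sketch: the project attribute carries the ARCHER budget code.
    pdesc = rp.ComputePilotDescription()
    pdesc.resource = "epsrc.archer"
    pdesc.project  = "e999"        # placeholder budget code
    pdesc.queue    = "standard"    # default queue for this entry
    pdesc.runtime  = 60            # example value
    pdesc.cores    = 24            # example value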

9.2.2. ARCHER_ORTE

The EPSRC Archer Cray XC30 system (https://www.archer.ac.uk/)

  • Resource label : epsrc.archer_orte
  • Raw config : resource_epsrc.json
  • Note : Always set the project attribute in the ComputePilotDescription or the pilot will fail.
  • Default values for ComputePilotDescription attributes:
  • queue         : standard
  • sandbox       : /work/`id -gn`/`id -gn`/$USER
  • access_schema : ssh
  • Available schemas : ssh

9.3. RESOURCE_NERSC

9.3.1. EDISON_CCM

The NERSC Edison Cray XC30 in Cluster Compatibility Mode (https://www.nersc.gov/users/computational-systems/edison/)

  • Resource label : nersc.edison_ccm
  • Raw config : resource_nersc.json
  • Note : For CCM you need to use special ccm_ queues.
  • Default values for ComputePilotDescription attributes:
  • queue         : ccm_queue
  • sandbox       : $SCRATCH
  • access_schema : ssh
  • Available schemas : ssh

9.3.2. EDISON

The NERSC Edison Cray XC30 (https://www.nersc.gov/users/computational-systems/edison/)

  • Resource label : nersc.edison
  • Raw config : resource_nersc.json
  • Note :
  • Default values for ComputePilotDescription attributes:
  • queue         : regular
  • sandbox       : $SCRATCH
  • access_schema : ssh
  • Available schemas : ssh, go

9.3.3. HOPPER

The NERSC Hopper Cray XE6 (https://www.nersc.gov/users/computational-systems/hopper/)

  • Resource label : nersc.hopper
  • Raw config : resource_nersc.json
  • Note :
  • Default values for ComputePilotDescription attributes:
  • queue         : regular
  • sandbox       : $SCRATCH
  • access_schema : ssh
  • Available schemas : ssh, go

9.3.4. HOPPER_APRUN

The NERSC Hopper Cray XE6 (https://www.nersc.gov/users/computational-systems/hopper/)

  • Resource label : nersc.hopper_aprun
  • Raw config : resource_nersc.json
  • Note : Only one CU per node in APRUN mode
  • Default values for ComputePilotDescription attributes:
  • queue         : regular
  • sandbox       : $SCRATCH
  • access_schema : ssh
  • Available schemas : ssh

9.3.5. HOPPER_CCM

The NERSC Hopper Cray XE6 in Cluster Compatibility Mode (https://www.nersc.gov/users/computational-systems/hopper/)

  • Resource label : nersc.hopper_ccm
  • Raw config : resource_nersc.json
  • Note : For CCM you need to use special ccm_ queues.
  • Default values for ComputePilotDescription attributes:
  • queue         : ccm_queue
  • sandbox       : $SCRATCH
  • access_schema : ssh
  • Available schemas : ssh

9.3.6. EDISON_APRUN

The NERSC Edison Cray XC30 (https://www.nersc.gov/users/computational-systems/edison/)

  • Resource label : nersc.edison_aprun
  • Raw config : resource_nersc.json
  • Note : Only one CU per node in APRUN mode
  • Default values for ComputePilotDescription attributes:
  • queue         : regular
  • sandbox       : $SCRATCH
  • access_schema : ssh
  • Available schemas : ssh, go

9.4. RESOURCE_STFC

9.4.1. JOULE

The STFC Joule IBM BG/Q system (http://community.hartree.stfc.ac.uk/wiki/site/admin/home.html)

  • Resource label : stfc.joule
  • Raw config : resource_stfc.json
  • Note : This currently needs a centrally administered outbound ssh tunnel.
  • Default values for ComputePilotDescription attributes:
  • queue         : prod
  • sandbox       : $HOME
  • access_schema : ssh
  • Available schemas : ssh

9.5. RESOURCE_RICE

9.5.1. DAVINCI

The DAVinCI Linux cluster at Rice University (https://docs.rice.edu/confluence/display/ITDIY/Getting+Started+on+DAVinCI).

  • Resource label : rice.davinci
  • Raw config : resource_rice.json
  • Note : DAVinCI compute nodes have 12 or 16 processor cores per node.
  • Default values for ComputePilotDescription attributes:
  • queue         : parallel
  • sandbox       : $SHARED_SCRATCH/$USER
  • access_schema : ssh
  • Available schemas : ssh

9.5.2. BIOU

The Blue BioU Linux cluster at Rice University (https://docs.rice.edu/confluence/display/ITDIY/Getting+Started+on+Blue+BioU).

  • Resource label : rice.biou
  • Raw config : resource_rice.json
  • Note : Blue BioU compute nodes have 32 processor cores per node.
  • Default values for ComputePilotDescription attributes:
  • queue         : serial
  • sandbox       : $SHARED_SCRATCH/$USER
  • access_schema : ssh
  • Available schemas : ssh

9.6. RESOURCE_LRZ

9.6.1. SUPERMUC

The SuperMUC petascale HPC cluster at LRZ, Munich (http://www.lrz.de/services/compute/supermuc/).

  • Resource label : lrz.supermuc
  • Raw config : resource_lrz.json
  • Note : Default authentication to SuperMUC uses X509 and is firewalled; make sure you can gsissh into the machine from your registered IP address. Because of outgoing traffic restrictions, your MongoDB needs to run on a port in the range 20000 to 25000 (see the sketch after this list).
  • Default values for ComputePilotDescription attributes:
  • queue         : test
  • sandbox       : $HOME
  • access_schema : gsissh
  • Available schemas : gsissh, ssh
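
A hedged sketch of working within those restrictions is shown below. RADICAL_PILOT_DBURL is the environment variable RADICAL-Pilot reads for its MongoDB endpoint; the hostname and database name are placeholders, and the port is picked from the allowed 20000 to 25000 range:

    import os
    import radical.pilot as rp

    # Sketch: point RADICAL-Pilot at a MongoDB on an allowed port, then
    # select the gsissh access schema for SuperMUC (placeholders below).
    os.environ["RADICAL_PILOT_DBURL"] = "mongodb://db.example.org:24000/extasy"

    pdesc = rp.ComputePilotDescription()
    pdesc.resource      = "lrz.supermuc"
    pdesc.access_schema = "gsissh"   # default schema for this entry
    pdesc.queue         = "test"     # default queue for this entry
    pdesc.runtime       = 30         # example value
    pdesc.cores         = 16         # example value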

9.7. RESOURCE_NCSA

9.7.1. BW_CCM

The NCSA Blue Waters Cray XE6/XK7 system in CCM (https://bluewaters.ncsa.illinois.edu/)

  • Resource label : ncsa.bw_ccm
  • Raw config : resource_ncsa.json
  • Note : Running ‘touch .hushlogin’ on the login node will reduce the likelihood of prompt detection issues.
  • Default values for ComputePilotDescription attributes:
  • queue         : normal
  • sandbox       : /scratch/sciteam/$USER
  • access_schema : gsissh
  • Available schemas : gsissh

9.7.2. BW

The NCSA Blue Waters Cray XE6/XK7 system (https://bluewaters.ncsa.illinois.edu/)

  • Resource label : ncsa.bw
  • Raw config : resource_ncsa.json
  • Note : Running ‘touch .hushlogin’ on the login node will reduce the likelihood of prompt detection issues.
  • Default values for ComputePilotDescription attributes:
  • queue         : normal
  • sandbox       : /scratch/sciteam/$USER
  • access_schema : gsissh
  • Available schemas : gsissh

9.7.3. BW_LOCAL

The NCSA Blue Waters Cray XE6/XK7 system (https://bluewaters.ncsa.illinois.edu/)

  • Resource label : ncsa.bw_local
  • Raw config : resource_ncsa.json
  • Note : Running ‘touch .hushlogin’ on the login node will reduce the likelihood of prompt detection issues.
  • Default values for ComputePilotDescription attributes:
  • queue         : normal
  • sandbox       : /scratch/training/$USER
  • access_schema : local
  • Available schemas : local

9.7.4. BW_APRUN

The NCSA Blue Waters Cray XE6/XK7 system (https://bluewaters.ncsa.illinois.edu/)

  • Resource label : ncsa.bw_aprun
  • Raw config : resource_ncsa.json
  • Note : Running ‘touch .hushlogin’ on the login node will reduce the likelihood of prompt detection issues.
  • Default values for ComputePilotDescription attributes:
  • queue         : normal
  • sandbox       : /scratch/sciteam/$USER
  • access_schema : gsissh
  • Available schemas : gsissh

9.8. RESOURCE_RADICAL

9.8.1. TUTORIAL

Our private tutorial VM on EC2

  • Resource label : radical.tutorial
  • Raw config : resource_radical.json
  • Default values for ComputePilotDescription attributes:
  • queue         : batch
  • sandbox       : $HOME
  • access_schema : ssh
  • Available schemas : ssh, local

9.9. RESOURCE_XSEDE

9.9.1. LONESTAR

The XSEDE ‘Lonestar’ cluster at TACC (https://www.tacc.utexas.edu/resources/hpc/lonestar).

  • Resource label : xsede.lonestar
  • Raw config : resource_xsede.json
  • Note : Always set the project attribute in the ComputePilotDescription or the pilot will fail.
  • Default values for ComputePilotDescription attributes:
  • queue         : normal
  • sandbox       : $HOME
  • access_schema : ssh
  • Available schemas : ssh, gsissh

9.9.2. COMET_SPARK

The Comet HPC resource at SDSC ‘HPC for the 99%’ (http://www.sdsc.edu/services/hpc/hpc_systems.html#comet).

  • Resource label : xsede.comet_spark
  • Raw config : resource_xsede.json
  • Note : Always set the project attribute in the ComputePilotDescription or the pilot will fail.
  • Default values for ComputePilotDescription attributes:
  • queue         : compute
  • sandbox       : $HOME
  • access_schema : ssh
  • Available schemas : ssh, gsissh

9.9.3. WRANGLER

The XSEDE ‘Wrangler’ cluster at TACC (https://www.tacc.utexas.edu/wrangler/).

  • Resource label : xsede.wrangler
  • Raw config : resource_xsede.json
  • Note : Always set the project attribute in the ComputePilotDescription or the pilot will fail.
  • Default values for ComputePilotDescription attributes:
  • queue         : normal
  • sandbox       : $WORK
  • access_schema : ssh
  • Available schemas : ssh, gsissh, go

9.9.4. STAMPEDE_YARN

The XSEDE ‘Stampede’ cluster at TACC (https://www.tacc.utexas.edu/stampede/).

  • Resource label : xsede.stampede_yarn
  • Raw config : resource_xsede.json
  • Note : Always set the project attribute in the ComputePilotDescription or the pilot will fail.
  • Default values for ComputePilotDescription attributes:
  • queue         : normal
  • sandbox       : $WORK
  • access_schema : gsissh
  • Available schemas : gsissh, ssh, go

9.9.5. STAMPEDE

The XSEDE ‘Stampede’ cluster at TACC (https://www.tacc.utexas.edu/stampede/).

  • Resource label : xsede.stampede
  • Raw config : resource_xsede.json
  • Note : Always set the project attribute in the ComputePilotDescription or the pilot will fail.
  • Default values for ComputePilotDescription attributes:
  • queue         : normal
  • sandbox       : $WORK
  • access_schema : gsissh
  • Available schemas : gsissh, ssh, go
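
The access_schema attribute selects one of the schemas listed for an entry. A minimal sketch switching xsede.stampede from its gsissh default to ssh is shown below; the allocation ID is a placeholder:

    import radical.pilot as rp

    # Sketch: override the default gsissh schema with plain ssh.
    pdesc = rp.ComputePilotDescription()
    pdesc.resource      = "xsede.stampede"
    pdesc.access_schema = "ssh"          # must be one of the listed schemas
    pdesc.project       = "TG-XXXXXXX"   # placeholder XSEDE allocation
    pdesc.queue         = "normal"       # default queue for this entry
    pdesc.runtime       = 60             # example value
    pdesc.cores         = 32             # example value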

9.9.6. COMET_SSH

The Comet HPC resource at SDSC ‘HPC for the 99%’ (http://www.sdsc.edu/services/hpc/hpc_systems.html#comet).

  • Resource label : xsede.comet_ssh
  • Raw config : resource_xsede.json
  • Note : Always set the project attribute in the ComputePilotDescription or the pilot will fail.
  • Default values for ComputePilotDescription attributes:
  • queue         : compute
  • sandbox       : $HOME
  • access_schema : ssh
  • Available schemas : ssh, gsissh

9.9.7. WRANGLER_YARN

The XSEDE ‘Wrangler’ cluster at TACC (https://www.tacc.utexas.edu/wrangler/).

  • Resource label : xsede.wrangler_yarn
  • Raw config : resource_xsede.json
  • Note : Always set the project attribute in the ComputePilotDescription or the pilot will fail.
  • Default values for ComputePilotDescription attributes:
  • queue         : hadoop
  • sandbox       : $WORK
  • access_schema : ssh
  • Available schemas : ssh, gsissh, go

9.9.8. GORDON

The XSEDE ‘Gordon’ cluster at SDSC (http://www.sdsc.edu/us/resources/gordon/).

  • Resource label : xsede.gordon
  • Raw config : resource_xsede.json
  • Note : Always set the project attribute in the ComputePilotDescription or the pilot will fail.
  • Default values for ComputePilotDescription attributes:
  • queue         : normal
  • sandbox       : $HOME
  • access_schema : ssh
  • Available schemas : ssh, gsissh

9.9.9. BLACKLIGHT

The XSEDE ‘Blacklight’ cluster at PSC (https://www.psc.edu/index.php/computing-resources/blacklight).

  • Resource label : xsede.blacklight
  • Raw config : resource_xsede.json
  • Note : Always set the project attribute in the ComputePilotDescription or the pilot will fail.
  • Default values for ComputePilotDescription attributes:
  • queue         : batch
  • sandbox       : $HOME
  • access_schema : ssh
  • Available schemas : ssh, gsissh

9.9.10. WRANGLER_SPARK

The XSEDE ‘Wrangler’ cluster at TACC (https://www.tacc.utexas.edu/wrangler/).

  • Resource label : xsede.wrangler_spark
  • Raw config : resource_xsede.json
  • Note : Always set the project attribute in the ComputePilotDescription or the pilot will fail.
  • Default values for ComputePilotDescription attributes:
  • queue         : normal
  • sandbox       : $WORK
  • access_schema : ssh
  • Available schemas : ssh, gsissh, go

9.9.11. COMET

The Comet HPC resource at SDSC ‘HPC for the 99%’ (http://www.sdsc.edu/services/hpc/hpc_systems.html#comet).

  • Resource label : xsede.comet
  • Raw config : resource_xsede.json
  • Note : Always set the project attribute in the ComputePilotDescription or the pilot will fail.
  • Default values for ComputePilotDescription attributes:
  • queue         : compute
  • sandbox       : $HOME
  • access_schema : ssh
  • Available schemas : ssh, gsissh

9.9.12. SUPERMIC

SuperMIC (pronounced ‘Super Mick’) is Louisiana State University’s (LSU) newest supercomputer funded by the National Science Foundation’s (NSF) Major Research Instrumentation (MRI) award to the Center for Computation & Technology. (https://portal.xsede.org/lsu-supermic)

  • Resource label : xsede.supermic
  • Raw config : resource_xsede.json
  • Note : Partially allocated through XSEDE. Primary access through GSISSH. Allows SSH key authentication too.
  • Default values for ComputePilotDescription attributes:
  • queue         : workq
  • sandbox       : /work/$USER
  • access_schema : gsissh
  • Available schemas : gsissh, ssh

9.9.13. STAMPEDE_SPARK

The XSEDE ‘Stampede’ cluster at TACC (https://www.tacc.utexas.edu/stampede/).

  • Resource label : xsede.stampede_spark
  • Raw config : resource_xsede.json
  • Note : Always set the project attribute in the ComputePilotDescription or the pilot will fail.
  • Default values for ComputePilotDescription attributes:
  • queue         : normal
  • sandbox       : $WORK
  • access_schema : gsissh
  • Available schemas : gsissh, ssh, go

9.9.14. TRESTLES

The XSEDE ‘Trestles’ cluster at SDSC (http://www.sdsc.edu/us/resources/trestles/).

  • Resource label : xsede.trestles
  • Raw config : resource_xsede.json
  • Note : Always set the project attribute in the ComputePilotDescription or the pilot will fail.
  • Default values for ComputePilotDescription attributes:
  • queue         : normal
  • sandbox       : $HOME
  • access_schema : ssh
  • Available schemas : ssh, gsissh

9.9.15. GREENFIELD

The XSEDE ‘Greenfield’ cluster at PSC (https://www.psc.edu/index.php/computing-resources/greenfield).

  • Resource label : xsede.greenfield
  • Raw config : resource_xsede.json
  • Note : Always set the project attribute in the ComputePilotDescription or the pilot will fail.
  • Default values for ComputePilotDescription attributes:
  • queue         : batch
  • sandbox       : $HOME
  • access_schema : ssh
  • Available schemas : ssh, gsissh

9.10. RESOURCE_ORNL

9.10.1. TITAN_ORTE

The Cray XK7 supercomputer located at the Oak Ridge Leadership Computing Facility (OLCF) (https://www.olcf.ornl.gov/titan/)

  • Resource label : ornl.titan_orte
  • Raw config : resource_ornl.json
  • Note : Requires the use of an RSA SecurID on every connection.
  • Default values for ComputePilotDescription attributes:
  • queue         : batch
  • sandbox       : $MEMBERWORK/`groups | cut -d' ' -f2`
  • access_schema : ssh
  • Available schemas : ssh, local, go

9.10.2. TITAN_APRUN

The Cray XK7 supercomputer located at the Oak Ridge Leadership Computing Facility (OLCF) (https://www.olcf.ornl.gov/titan/)

  • Resource label : ornl.titan_aprun
  • Raw config : resource_ornl.json
  • Note : Requires the use of an RSA SecurID on every connection.
  • Default values for ComputePilotDescription attributes:
  • queue         : batch
  • sandbox       : $MEMBERWORK/`groups | cut -d' ' -f2`
  • access_schema : ssh
  • Available schemas : ssh, local, go
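
The sandbox value above is a shell expression evaluated on the target machine: it resolves $MEMBERWORK and appends the second group reported by `groups` (typically the project ID). If that expansion is not what you want, the sandbox attribute can be set to an explicit path instead; the project ID and path below are placeholders:

    import radical.pilot as rp

    # Sketch: pin the pilot sandbox to an explicit path instead of the
    # default shell expression (project ID and path are placeholders).
    pdesc = rp.ComputePilotDescription()
    pdesc.resource = "ornl.titan_aprun"
    pdesc.project  = "ABC123"                                  # placeholder project
    pdesc.sandbox  = "/lustre/atlas/scratch/username/abc123"   # placeholder path
    pdesc.queue    = "batch"     # default queue for this entry
    pdesc.runtime  = 60          # example value
    pdesc.cores    = 16          # example value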