Storage Standards

Overview

The NYU Storage environment consists of block, file, and HPC specialized enterprise-level storage technologies. All three forms of storage are available in the main NYU IT data center; limited data storage capacity exists in other NYC campus locations, as well as the Syracuse, NY data center. Limited data replication capability is available for block and file storage between data centers.

Standard Resource Specification

Standard specifications are defined for storage sizing, storage tiers, Logical Unit Number (LUN) types and sizes, and network-attached storage (NAS).

Storage Sizing

  • Capacity allocations are based on data requirements
    • Calculations are based on the immediate need plus annual growth projections
    • Rate of growth projections are required for proper sizing; 5-year projections preferred
  • Annual or ad hoc requests are honored; additional charges could apply
  • Requests for unusually large capacity allocations require justification and are subject to review
  • By default, the maximum size on all block volumes is 4TB
    • Volume sizes larger than 4TB are subject to review
    • Capacity allocations are based on individual solution requirements
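As an illustrative sketch of the sizing arithmetic above (the function name, the compound-growth assumption, and the example numbers are ours, not part of the standard):

```python
def projected_capacity_gb(immediate_gb, annual_growth_rate, years=5):
    """Immediate need grown at a compound annual rate over the projection
    window (5-year projections preferred per the standard)."""
    return immediate_gb * (1 + annual_growth_rate) ** years

# Hypothetical request: 1 TB needed today, 20% projected annual growth
five_year_need = projected_capacity_gb(1024, 0.20, years=5)
```

A request sized this way covers the immediate need plus projected growth, so capacity does not have to be re-requested every year (though annual or ad hoc requests remain an option).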

VMware

  • Data store size – 4TB
  • Additional LUNs provisioned and logically joined for capacity over 4TB
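The number of 4TB LUNs to provision for a larger data store is a simple ceiling division; a minimal sketch (the helper name is ours, for illustration only):

```python
import math

MAX_LUN_TB = 4  # default maximum size on block volumes per the standard

def luns_needed(capacity_tb, lun_size_tb=MAX_LUN_TB):
    """Count of LUNs to provision and logically join for the requested capacity."""
    return math.ceil(capacity_tb / lun_size_tb)
```

For example, a 10TB VMware data store would span three 4TB LUNs joined logically.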

Database

  • Oracle Automatic Storage Management (ASM) is the standard
  • ASM data volume – 512GB
  • ASM log volume – 40GB
  • ASM OCR volume – 2GB

Physical

  • Red Hat Enterprise Linux or Windows
  • Boot volume size – 100GB

Storage Tiers

  • Array-based Dynamic Storage Tiering (DST) can utilize solid-state drives (SSD), Fibre Channel drives, SAS, and nearline SAS
  • DST is supported on the EMC, HDS, and 3PAR storage systems

High Performance Production

  • Hybrid, primarily SSD
  • Front-end cache
  • Latency: <5ms
  • I/O rate and IOPS depend on transaction size

Production

  • Hybrid, SSD with justification
  • Front-end cache
  • Latency: <10ms
  • I/O rate and IOPS depend on transaction size

Development / QA

  • No SSD
  • Development latency: <20ms
  • QA latency: <10ms
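The tier latency targets above can be read as a selection rule: choose the lowest (least costly) tier whose latency target still meets the workload's requirement. A minimal sketch, assuming that selection logic (the tier names and latency targets come from this standard; the function itself is illustrative):

```python
# Tiers ordered from least to most performant, with their latency targets in ms.
# "Development / QA" uses the development target (<20ms); QA itself is <10ms.
TIERS = [
    ("Development / QA", 20),
    ("Production", 10),
    ("High Performance Production", 5),
]

def tier_for_latency(required_ms):
    """Return the lowest tier whose latency target meets the requirement,
    or None if no standard tier can meet it."""
    for name, target_ms in TIERS:
        if target_ms <= required_ms:
            return name
    return None
```

A workload requiring 12ms response, for instance, would land on the Production tier; a sub-5ms requirement falls outside the standard tiers and would need review.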

Network-Attached Storage (NAS)

  • EMC Isilon is the default NAS platform
  • NFSv4 open standard is the default supported protocol
  • Static UIDs are provisioned
  • Asynchronous replication to Disaster Recovery (DR) site is supported
    • For DR use cases only
  • Snapshots
    • Taken every 30 days or when 20% of allocated storage is exceeded, whichever comes first
    • Additional retention in increments of 30 days is available for a fee
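The snapshot trigger above combines a time condition and a capacity condition. A minimal sketch of that "whichever comes first" logic, assuming usage is compared against 20% of the allocation (the function and parameter names are ours, for illustration):

```python
SNAPSHOT_INTERVAL_DAYS = 30
USAGE_TRIGGER_FRACTION = 0.20  # 20% of allocated storage

def snapshot_due(days_since_last, used_gb, allocated_gb):
    """A snapshot is taken every 30 days, or when usage exceeds 20% of the
    allocation, whichever condition is met first."""
    time_trigger = days_since_last >= SNAPSHOT_INTERVAL_DAYS
    usage_trigger = used_gb > USAGE_TRIGGER_FRACTION * allocated_gb
    return time_trigger or usage_trigger
```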

High Availability

  • All storage array installations come with redundant controllers and disk configurations
  • All SAN Fibre Channel fabrics are redundant across all data centers
  • Limited data replication capability available for block and file storage between data centers
  • Active/Active configurations are not supported

HPC

Please refer to High Performance Computing at NYU for additional information on HPC clusters and storage.

/home

  • 20GB
  • NFSv4
  • backed up
  • shared on all nodes
  • quota space can be shared by groups under a single mount point

/scratch

  • 5TB
  • Lustre
  • not backed up
  • files >60 days old are deleted
  • shared on all nodes

/archive

  • 2TB
  • NFSv4
  • backed up
  • shared only on login nodes

/beegfs

  • 2TB
  • BeeGFS
  • not backed up
  • files >60 days old are deleted