Data Center Networking Standards

Data Center Firewall Service

Introduction to Silos and Security Zones

In data center networks, NYU IT provisions a silo for each user group (e.g., an administrative department). A silo is a collection of subnets belonging to a single department. Typically, one silo is created per department, and a silo cannot mix networks or hosts from other departments.

Within a silo are four zones representing the security risk of the hosts within each zone:

  • High Security Zone
  • Medium Security Zone
  • Low Security Zone
  • Infrastructure

The security risk of a host is defined by NYU’s Electronic Data and System Risk Classification Policy.

Silos are broken up into security zones in order to aggregate resources with common security exposure or security risk behind the same virtual firewall. Because of this segmentation, a compromise in one security zone should not lead to compromise in other zones; damage should remain isolated to the compromised zone. Aggregating machines by security risk allows for the enforcement of a somewhat uniform access control policy across a zone.

Silos

  • Departments can have multiple silos allocated, subject to security review
  • IP allocation is based on client requirements; a two- to three-year growth estimate is considered

Zones

  • By default, hosts cannot communicate across zones in the same silo
  • To allow communication between zones (in the same silo) with different security classifications, an Access Control List (ACL) request must be submitted.
  • All requests are subject to review by the Office of Information Security (OIS).
  • Nodes in zones with the same level of security classification but in different silos can communicate by default (i.e., medium-zone hosts can reach other medium-zone hosts in a different silo)
  • Nodes cannot have interfaces in multiple zones; deviation from this policy is subject to security review
  • Users are not permitted to physically link network nodes across different zones; exceptions to this policy are subject to OIS review
  • Internet-facing hosts must always be in a different security zone than non-Internet-facing hosts; exceptions to this policy require an OIS review
  • A node’s security zone classification must correspond to the zone in which the node is deployed (i.e., a node classified for the high-security zone cannot be deployed in a low-security zone).

ACL and Rules Requests

  • Requests must be submitted through Network Engineering’s request intake and orchestration system. For access to the request intake system, please contact noc@nyu.edu.
  • Default inbound and outbound ACLs are deployed and can be reviewed with NYU IT upon request
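
For illustration only, a cross-zone permit of the kind reviewed by OIS might resemble the following Cisco IOS-style sketch. The ACL name, addresses, and port are hypothetical placeholders (not NYU IT’s actual rule set); production ACLs are authored by Network Engineering through the intake process above.

    ! Hypothetical sketch: permit a low-zone web front end to reach a
    ! medium-zone database in the same silo on TCP port 3306 only.
    ! Addresses use RFC 5737 documentation ranges, invented for illustration.
    ip access-list extended SILO-EXAMPLE-LOW-TO-MED
     permit tcp host 192.0.2.10 host 198.51.100.20 eq 3306
     deny   ip any any log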

Colocation

  • All colocated nodes must be deployed behind an NYU IT firewall
  • NYU IT solely manages NYU IT firewalls; guest access is not permitted
  • Host-based firewalls are permitted but are managed by the party responsible for administering the node
  • Access to nodes behind NYU IT firewalls (e.g., via ssh or remote desktop) is permitted upon request
  • Non-standard requests for node access require OIS review

Data Center Firewall Zone Classification

Per the Office of Information Security, the factors that determine the appropriate firewall zone for a host are described below. System criticality is defined by data criticality and system availability requirements, as outlined in the Electronic Data and System Risk Classification Policy.

Low Security Zone

Technical Triggers

  • Medium criticality system with a public audience for direct communication (audience includes people outside of the NYU community)
  • Front-end/portal/pivot point to a medium criticality system
  • Low criticality system

Logical Triggers

  • Systems with Public or Confidential data (no Protected or Restricted data)
    AND
  • No high availability requirements

Examples

  • Departmental web server
  • Departmental server WITHOUT sensitive data

Medium Security Zone

Technical Triggers

  • Medium criticality system with a small audience for direct communication (“all of NYU-NET”)
  • Front-end/portal/pivot point to a high criticality system
  • High criticality system with a wide audience for direct communication (“all of NYU-NET” or larger)

Logical Triggers

  • Backend systems with Protected data AND a small audience (“all of NYU-NET”)
  • Front-end systems to high criticality backends
  • Systems with high availability requirements AND large audience

Examples

  • WWW
  • Middleware
  • High availability public services

High Security Zone

Technical Triggers

  • High criticality system
    AND
  • Has a small audience for direct communication (“all of NYU-NET” or smaller)

Logical Triggers

  • Backend systems with Restricted data OR systems with high availability requirements
    AND
  • Systems with a small audience (“all of NYU-NET” or smaller)

Examples

  • Data Warehouses
  • Databases

Infrastructure Zone

Technical Triggers

  • Has no need to initiate outgoing connections outside of its local subnet
  • Only accepts incoming connections from a small, custom set of hosts or subnets (significantly smaller than “all of NYU-NET”)

Logical Triggers

  • This generally includes devices used to facilitate the management of the servers in High, Medium or Low

Examples

  • VMware console network
  • KVM over IP

Data Center Connectivity

Network Access Ports

Port Default Configuration

  • Auto-Negotiation of speed & duplex
  • Spanning-Tree Portfast is enabled
  • Access-mode
  • Administratively disabled and not assigned to a production VLAN
  • 9000-byte MTU
    • Other MTU options are available upon request and review
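
For reference, the defaults above correspond roughly to the Cisco IOS-style interface sketch below. This is a minimal illustration, assuming a Catalyst-style CLI; the interface name and exact syntax are assumptions that vary by platform and are not taken from NYU IT’s actual templates.

    ! Illustrative default access-port state (platform syntax varies)
    interface GigabitEthernet1/0/1
     switchport mode access    ! access-mode; no trunking
     spanning-tree portfast    ! Portfast enabled
     mtu 9000                  ! 9000-byte MTU (where per-interface MTU is supported)
     shutdown                  ! administratively disabled until assigned a production VLAN
    ! Speed/duplex auto-negotiation is the platform default (no explicit command needed).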

Copper Access Ports

  • 100Base-TX, 1000Base-T, 10GBase-T
  • For each standard server rack, there is a single top-of-rack copper patch panel containing a total of 24 network access ports.
    • Switch-to-patch panel connections are deployed by the Networks Team; Server-to-patch connections are deployed by either the Systems Team or the customer.
  • By default, 10GBase-T is available. Fiber ports (see below) are available upon request.
  • All ports in the South Data Center (NYC), copper and fiber, are 10 Gb/s capable

Fiber Access Ports

  • 1000Base-SX, 10GBase-SR
  • Fiber access ports are available in specific IDFs (inquire with NYU IT for available locations).
  • Supported fiber: OM3 Multimode
  • Fiber Distribution Panel (FDP) and Cartridge layout
    • An FDP is listed as “Top,” “Middle,” or “Bottom,” and cartridge positions are listed as “Left,” “Middle,” and “Right.”
    • Each cartridge has either six or twelve ports, and the activations are listed as “MMxx” (where xx is the number of the port).
    • Port 1 has fiber strands “1&2,” port 2 has fiber strands “3&4,” and so on (port n uses strands 2n−1 and 2n).

Supported Fault-Tolerant Port Configurations

  • High availability, bandwidth aggregation, and bandwidth load balancing through multiple network port connections are supported.
  • NIC Teaming (NIC Bonding) utilizes discrete network access switches by default
  • “A-side” and “B-side” switches for a given IDF provide diverse connectivity.
  • Active/Active mode of NIC Teaming is the default deployment.
  • Active/Standby is supported upon request
  • Link Aggregation Control Protocol (LACP) is the default protocol and provides high availability (see the sketch below)
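
On the switch side, an LACP-based Active/Active team might resemble the sketch below. The interface and port-channel numbers are hypothetical, and in practice the two member ports terminate on discrete A-side and B-side switches (typically via a multi-chassis link aggregation technology), which this single-switch sketch does not capture.

    ! Hypothetical LACP port-channel sketch (single-switch view; IDs are placeholders)
    interface range GigabitEthernet1/0/1 - 2
     channel-group 10 mode active   ! LACP active on both member ports
    interface Port-channel10
     switchport mode access
     switchport access vlan 100     ! example VLAN ID only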

Network Access Port Standard Features

The following port-based configurations are deployed to ensure a secure and stable data center network.

Broadcast Storm Protection

The rate of incoming broadcast packets on each port is monitored for “storm conditions”; if the maximum threshold is reached, the port is automatically disabled to protect the rest of the hosts on the network from disruption. This feature is enabled by default. Please contact the Network Team for details on current threshold values.
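
On Cisco platforms this protection is commonly implemented with storm-control, as in the sketch below; the threshold shown is an invented placeholder, not NYU IT’s production value.

    ! Illustrative storm-control sketch (threshold is a placeholder)
    interface GigabitEthernet1/0/1
     storm-control broadcast level 1.00   ! trigger when broadcasts exceed 1% of link bandwidth
     storm-control action shutdown        ! err-disable the offending port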

Loop Protection

The switch monitors for any “control frames,” including Spanning-Tree BPDUs, that it originated itself (i.e., frames looped back to the switch), and automatically disables the affected port to prevent disruption. This feature is enabled by default.
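
One common switch-side mechanism with a similar effect is BPDU Guard, sketched below, which err-disables a port on which a Spanning-Tree BPDU is received; the specific loop-detection feature NYU IT deploys may differ (e.g., keepalive-based loopback detection), so treat this as an assumption.

    ! Illustrative sketch: err-disable an access port on which a BPDU is seen
    interface GigabitEthernet1/0/1
     spanning-tree bpduguard enable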

Cisco Discovery Protocol (CDP)

By default, CDP is disabled on all access ports unless it is required by an application or for temporary diagnostic purposes. Please contact the Network Team to submit a request.
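
On Cisco IOS, the per-port default described above amounts to a one-line sketch (interface name is a placeholder):

    ! CDP disabled on an access port (the default described above)
    interface GigabitEthernet1/0/1
     no cdp enable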

DHCP Snooping

By default, ingress DHCP server traffic is blocked on access ports. In some cases, a server may need to generate DHCP server or similar traffic for purposes such as macOS NetBoot or PXE boot. In these cases, a request for “DHCP Snooping Bypass” should be submitted to noc@nyu.edu, including the server’s registration information (FQDN, IP, MAC address) and port information (Datajack).
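
On Cisco switches this behavior is typically implemented with DHCP snooping, where access ports are untrusted by default and a bypass amounts to marking the port trusted, as in the sketch below. The VLAN ID and interface name are placeholders; actual changes are made by NYU IT through the request process above.

    ! Illustrative DHCP snooping sketch (VLAN ID is a placeholder)
    ip dhcp snooping
    ip dhcp snooping vlan 100
    !
    interface GigabitEthernet1/0/1
     ip dhcp snooping trust   ! “DHCP Snooping Bypass”: permit ingress DHCP server traffic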

Support for 802.1q VLAN Tagging on Access Ports

Overview

The standard data center network access layer security features preclude trunking (“VLAN tagging”) on any access port except for approved use cases (see below) designated by NYU IT.

VLAN tagging can be enabled on data center access ports used by large-scale VMware ESX cluster hosts, provided the following parameters are followed on the host/server side:

  • A server NIC configured to perform VLAN tagging cannot generate Spanning-Tree Protocol (STP) Bridge PDUs (i.e., STP must be disabled on the host)
  • For high-availability (dual-homed) systems, physical NIC pairs must operate in Active/Active mode with no traffic “bridged” between them and forwarded back towards the network
  • VLANs containing Data-Plane networks (i.e., subnets for VMs functioning as application servers) and Control-Plane networks (e.g., vMotion, VMware console communication, etc.) cannot be tagged on the same physical links
  • Separate access ports must be used for each security zone. Only Data-Plane VLANs in the same security zone can be tagged together on the same physical link terminating on the same physical host
  • Separate access ports must be used for tagging Control-Plane VLANs.

Additional Service Notes & Requirements

  • All network VLAN IDs are allocated and assigned by NYU IT Network Engineering; requests for specific VLAN IDs are not permitted.
  • Unless noted otherwise, the native VLAN for untagged traffic on a trunk link is VLAN 1, and untagged traffic on the link will be discarded
  • Tagging access ports are configured statically; no dynamic trunk negotiation is enabled. Trunk (VLAN tagging) mode is “forced” (nonegotiate) on the given access port(s)
  • Only the minimum number of VLANs required will be configured on the trunk
  • All vSwitch and server NIC configuration is the responsibility of the group that administers the connected node(s)
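
Taken together, the notes above imply a switch-side trunk configuration along the lines of the sketch below. The VLAN IDs and interface name are placeholders invented for illustration; real IDs are allocated by NYU IT Network Engineering.

    ! Illustrative VLAN-tagging (trunk) sketch for an approved ESX host port
    interface GigabitEthernet1/0/1
     switchport mode trunk                   ! statically configured; no dynamic negotiation
     switchport nonegotiate                  ! trunking “forced” (DTP disabled)
     switchport trunk native vlan 1          ! VLAN 1 is not in the allowed list, so untagged traffic is discarded
     switchport trunk allowed vlan 101,102   ! only the minimum required Data-Plane VLANs (placeholders)
     spanning-tree bpduguard enable          ! the host must not generate STP BPDUs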