
Fibre Channel Switching

posted Nov 19, 2013, 4:58 PM by Rick McGee   [ updated May 7, 2014, 10:44 AM ]
What is Fibre Channel?
  • At a high level, it replaces the SCSI disk cable with a network
  • Protocol stack primarily used to send SCSI commands over the SAN
    • Technically you could run IP over FC, but the main application is SAN
    • RFC 2625 - IP and ARP over FC
  • Standardized by the T11 committee of the International Committee for Information Technology Standards (INCITS)
  • FCoE is T11's FC-BB-5 standard

  • Fibre Channel vs. OSI Stack
    • Fibre Channel Protocol (FCP) is analogous to TCP

Fibre Channel Topologies
  • Supports three types of topologies
    • Point-to-Point (FC-P2P)
      • Initiator (server) and Target (storage) directly connected
    • Arbitrated Loop (FC-AL) 
      • Logical ring topology, similar to Token Ring
      • Implies contention on the ring; only one device can read/write at a time
    • Switched Fabric (FC-SW) (MOST COMMON)
      • Logical equivalent to a Switched Ethernet LAN
      • Switches manage the fabric, allowing any-to-any communication without contention
      • Similar to how CSMA/CD is removed with Ethernet switching
        • Like moving from hubs to switches
Fibre Channel Port Types
  • FC has different port types to define a port's function
    • N_port - Node Port
      • End host (target or initiator) in P2P or switched Fabric
    • NL_port - Node Loop Port (like half duplex with Ethernet hubs)
      • End host in an Arbitrated Loop
    • F_port - Fabric Port
      • SAN switch's port that connects to a Node port
    • FL_port - Fabric Loop Port
      • Switch's port that connects to a Node loop port
    • E_port - Expansion Port (link between two FC switches)
      • Inter-Switch Link (ISL) DIFFERENT FROM THE ETHERNET VERSION
    • TE_port - Trunking Expansion Port
      • Extended ISL (similar to an 802.1Q trunk)
Fibre Channel Addressing
  • FC addressing is analogous to IP over Ethernet
    • IP addresses are logical and manually assigned
    • Ethernet MAC addresses are physical and burned in
  • FC World Wide Names (WWN's)
    • 8-byte address burned in by the manufacturer (like a MAC address)
  • FC Identifier (FCID)
    • 3-byte logical address assigned by the fabric (like an IP address)
FC World Wide Names
  • The WWN is FC's 8-byte physical address
  • WWN is subdivided further into:
    • World Wide Node Name (WWNN or nWWN)
      • A switch's, server's, or disk's physical address
    • World Wide Port Name (WWPN or pWWN)
      • A switch's, server's, or disk's port's physical address
      • E.g., switches, HBAs, and arrays have more than one port, so each port gets its own WWPN
    • WWN is not used for data plane switching
      • Just used to identify the port in the SAN
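  • Example format (the values below are hypothetical): a WWN is written as eight colon-separated bytes, e.g.
      nWWN (node): 20:00:00:25:b5:aa:bb:01
      pWWN (port): 20:00:00:25:b5:aa:bb:02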
FC Identifiers 
  • The FCID is FC's 3-byte logical address
  • The FCID is subdivided into three fields
    • Domain ID (where the node is located via the FSPF routing table)
      • Each switch gets a Domain ID
    • Area ID
      • A group of ports on a switch shares an Area ID
    • Port ID
      • Each end station connected to the switch gets a Port ID
    • FCID is used for the actual data plane traffic switching
  • Domain ID's
    • Domain ID's identify a switch in the fabric
    • Can be manually assigned, otherwise will be automatically assigned by the "Principal Switch"
    • The Principal Switch is analogous to the STP Root Bridge and is chosen by election
    • NO CONFIGURATION needed for the Principal Switch
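    • Worked example (values are hypothetical): FCID 0x210104 breaks down as Domain ID 0x21 (the switch), Area ID 0x01 (a group of ports on that switch), and Port ID 0x04 (the end station)
    • Domain assignments can be checked with:
        switch# show fcdomain domain-list vsan 1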
Fibre Channel Routing
  • FC doesn't use flooding to build topologies like Ethernet
  • Fabric Shortest Path First (FSPF) is used to route the traffic between switches
    • Same Dijkstra SPF as OSPF and IS-IS
    • Node ID in the SPT is the FCID's Domain ID (similar to the OSPF Router ID)
    • Traffic is routed via lowest cost path between Domain ID's 
    • ECMP is supported for equal SPT branches
    • Unequal cost load distribution is not supported
  • FSPF runs automatically as a Fabric Service
    • No configuration required
    • Can be customized, but typically none is required
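  • A quick way to verify (a minimal sketch; VSAN 1 is just an example):
      switch# show fspf vsan 1
      switch# show fspf database vsan 1
    • Displays the FSPF process settings and the link-state database of Domain IDs and link costs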

Fibre Channel Logins
  • Ethernet networks are connectionless
    • Traffic in the data plane results in topology learning in the control plane
  • Fibre Channel networks are connection oriented
    • All end stations must first register with the control plane of the fabric before sending any traffic
  • Fabric Registration has three parts
    • Fabric Login (FLOGI) (most important)
    • Port Login (PLOGI)
    • Process Login (PRLI)
  • Fabric Login (FLOGI)
    • Node Port (N_Port) tells the switch's Fabric Port (F_Port) it wants to register
    • Switch learns the WWNN and WWPN of Node
    • Switch assigns an FCID to the Node
  • Port Login (PLOGI)
    • End-to-End login between Node Ports
    • Initiator tells the Target it wants to perform Reads/Writes
    • Used for functions such as end-to-end flow control
  • Process Login (PRLI)
    • Upper-layer protocol login negotiation between Node Ports
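  • The result of the FLOGI process can be checked with (output abbreviated; values are hypothetical):
      switch# show flogi database
      INTERFACE  VSAN  FCID      PORT NAME                NODE NAME
      fc1/1      10    0x210104  20:00:00:25:b5:aa:bb:02  20:00:00:25:b5:aa:bb:01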
Fibre Channel Name Server
  • The Fibre Channel Name Server (FCNS) is similar to an ARP cache
  • Used to resolve the WWN (physical address) to the FCID (logical address)
  • Like the Principal Switch and FSPF, FCNS is a distributed fabric service that requires NO configuration
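  • The name server contents can be viewed with (output abbreviated; values are hypothetical):
      switch# show fcns database
      VSAN 10:
      FCID      TYPE  PWWN                     FC4-TYPE:FEATURE
      0x210104  N     20:00:00:25:b5:aa:bb:02  scsi-fcp:init
      0x210200  N     50:06:01:60:aa:bb:cc:01  scsi-fcp:target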


Zoning
  • By default all initiators learn about all targets during the login process
    • FCNS maintains the mappings of everyone's WWPN to FCID
  • Servers mounting the wrong volumes can corrupt the data
    • E.g., Windows NTFS/MBR is not compatible with Linux GPT
  • Zoning prevents this by limiting which resources an initiator can use
    • Zoning is similar to ACLs in the Ethernet and IP world
    • Associate WWNs, FCIDs, aliases, etc. to control who can talk to whom
  • Like FCNS, Zoning is a distributed fabric service
  • Controls which initiators can talk to which targets
  • Zoning is REQUIRED, and is not optional
    • Default zone policy is to deny
    • Can be changed to permit with:
      • "zone default-zone permit vsan 1"
      • "system default zone"

Virtual SAN's
  • Traditionally, multiple SANs were designed as physically separate networks
    • i.e. SAN Islands
    • Physical separation is costly in terms of equipment, power, space, cooling, management, etc.
  • VSANs solve the isolation problem similar to how VLANs segment broadcast domains
    • Isolate the management and failure domains of the network
    • Separate FLOGI, FCNS, Zoning, Aliases, etc. per VSAN
  • With VSAN's E Ports now become TE Ports
    • Similar to 802.1Q trunks in Ethernet
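  • A minimal configuration sketch (the VSAN number and interfaces are hypothetical):
      switch(config)# vsan database
      switch(config-vsan-db)# vsan 10 name PROD
      switch(config-vsan-db)# vsan 10 interface fc1/1
      switch(config-vsan-db)# exit
      switch(config)# interface fc1/5
      switch(config-if)# switchport mode E
      switch(config-if)# switchport trunk allowed vsan 10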
SAN Port Channeling
  • Like Ethernet Port-Channeling, SAN PCs can be used to aggregate the bandwidth of physical links
  • Supports Port Channeling Protocol (PCP) for negotiation of links
    • Similar to 802.3ad LACP in Ethernet 

Soft vs. Hard Zoning
  • Soft Zoning
    • Initiator registers with the FCNS to get zoning
    • Zoning is enforced in the control plane but not the data plane
    • Initiator can manually mount the wrong Target
  • Hard Zoning
    • Initiator registers with FCNS to get Zoning
    • Zoning enforced in the control AND data plane
    • Initiator cannot manually mount the wrong Target
  • NX-OS/SAN-OS runs Hard Zoning by default
Zone vs. Zoneset
  • Zone is used to create a mapping between
    • WWPNs, FCIDs, Aliases, Interfaces, Domain IDs, etc.
  • Zones are grouped together in a Zoneset
    • The Zoneset is the ACL, and the Zone is the ACE
  • Zoneset is applied to the VSAN and then activated
    • Makes the "Full" Zoneset the "Active" Zoneset
    • Zoneset must be re-activated after each change
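  • A minimal configuration sketch (zone/zoneset names and WWPNs are hypothetical):
      switch(config)# zone name Z_ESX1_ARRAY1 vsan 10
      switch(config-zone)# member pwwn 20:00:00:25:b5:aa:bb:02
      switch(config-zone)# member pwwn 50:06:01:60:aa:bb:cc:01
      switch(config-zone)# exit
      switch(config)# zoneset name ZS_VSAN10 vsan 10
      switch(config-zoneset)# member Z_ESX1_ARRAY1
      switch(config-zoneset)# exit
      switch(config)# zoneset activate name ZS_VSAN10 vsan 10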
Full vs Active Zoneset
  • Only one Zoneset per VSAN can be "Active" in the fabric at a time
    • Same logic as one ACL per interface per direction
  • "Full" Zoneset is the one in the configuration
  • "Active" Zoneset is the one being enforced in the Fabric
  • By default only the Active Zoneset is advertised, not the Full Zoneset
    • Can result in a misconfigured or "isolated" fabric
Zoning Configuration and Verification
  • "show zone status vsan 1"
    • Displays the zoning mode and default-zone action (permit or deny)
  • "show zone"
    • Display full zone info
  • "show zone active"
    • Displays the currently active zones
  • "show Zoneset"
    • Displays full Zoneset information
  • "show Zoneset active"
    • Displays the currently active Zoneset
  • "clear zone database vsan 1"
    • Deletes the local full zone but not the active one
  • "Zoneset distribute full vsan 1"
    • In the global config enables full distribution when new E ports com up
  • "Zoneset distribute vsan 1"
    • In the exec mode forces the distribution of the full Zoneset
FC Aliases
  • Zoning based on raw WWPNs is error-prone
    • Zoning errors can be catastrophic to the fabric
  • FC Aliases give user-friendly names to WWNs, FCIDs, etc.
    • Think DNS in IP
  • Configured with "fcalias name"
  • Can be advertised through Zoneset distribution
    • "Zoneset distribute vsan 1"

Basic vs Enhanced Zoning
  • By default the Full Zoneset is local and the Active Zoneset is Fabric-Wide
  • Order of operations errors can corrupt the Active Zoneset
    • Think VTP deleting all Ethernet VLAN's
  • "Enhanced Zoning" prevents this by "locking" the Fabric
    • Ensures that people don't accidentally overwrite each other
Using Enhanced Zoning
  • Admin logs into any switch in the Fabric and starts to configure Zoning
    • Lock is advertised to all switches in the Fabric
    • Other admins cannot edit Zonesets until the lock is released by "committing" the Zoneset
  • Configured with:
    • "zone mode enhanced vsan"
    • "system default zone mode enhanced"


FC Device Aliases
  • FC Aliases are locally significant
    • Can be distributed through manual Zoneset distribution
    • Still prone to becoming unsynchronized across the Fabric
    • "Device Aliases" solve this problem
    • Device Aliases serve the same purpose as FC Aliases
      • Bind a WWPN to a user-friendly name
    • The difference is that the binding is advertised to the Fabric
Using Device Aliases
  • Device Aliases are advertised like Enhanced Zoning
    • A Device Alias session is created and a "lock" is advertised to the fabric
    • Changes are made and "committed"
    • Aliases are advertised through CFS (Cisco Fabric Services) and the lock is removed
  • Configure with "device-alias database"
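  • A minimal sketch (the alias name and WWPN are hypothetical):
      switch(config)# device-alias database
      switch(config-device-alias-db)# device-alias name ESX1_HBA1 pwwn 20:00:00:25:b5:aa:bb:02
      switch(config-device-alias-db)# exit
      switch(config)# device-alias commit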


SAN Port Channels
  • Used to aggregate the bandwidth of physical links
  • Ethernet PC's and SAN PC's use the SAME number space
  • Created with the interface-level "channel-group 1"
  • New members added with the interface-level "channel-group 1 force"
  • Port Channeling Protocol (PCP) enabled with "channel mode active" under the port-channel interface
    • "interface Port-Channel" in MDS
    • "interface SAN-Port-Channel" in Nexus
  • Verified as "show {SAN-}port-channel summary"
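  • A minimal Nexus-style sketch (interface and channel numbers are hypothetical):
      switch(config)# interface fc1/1-2
      switch(config-if)# channel-group 1 force
      switch(config-if)# exit
      switch(config)# interface san-port-channel 1
      switch(config-if)# channel mode active
      switch(config-if)# switchport mode E
      switch# show san-port-channel summary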

Fibre Channel Switching Review
  • Fibre Channel forwards frames based on the 3 byte Fibre Channel ID (FCID)
  • FCID is subdivided into three fields
    • Domain ID
      • Each Switch gets a Domain ID
    • Area ID
      • A group of ports on a switch shares an Area ID
    • Port ID
      • Each end station connected to a switch gets a Port ID
Fibre Channel Domain ID's
  • Domain ID is first byte of the FCID
  • Used to identify the Switch in the Fabric's SPT
    • FSPF uses the Domain ID as the SPF Node ID
  • Implies that a hard limit of switches per Fabric would be 256
    • Some ID's are reserved so only 239 are usable
    • "Qualified" limit by the OSM's is ~50
      • I.E. no vendor will support your large Fabric when it crashes
  • Scaling the Fabric requires fixing the Domain ID limitation
Node Port Virtualization (NPV)
  • NPV fixes the Domain ID problem by removing the need for a switch to participate in Fabric Services
    • I.E. no FSPF, FCNS, Zoning, etc. 
  • Switches running NPV appear to the rest of the fabric as an end host
    • I.E. a Node Port (N_Port)
  • Upstream-facing links on the NPV switch are called NP_Ports
    • Proxy Node Port
Node Port ID Virtualization (NPIV)
  • Switch upstream of the NPV switch is the NPV core switch
  • NPV core switch runs Node Port ID Virtualization (NPIV)
  • Allows multiple FLOGIs and FCID assignments on its downstream-facing F port
  • NPIV is also applicable in virtualization environments
    • E.G. VMware host assigning separate WWPN/FCID's to the VMware guests
NPV/NPIV Configuration
  • Enable NPV on the NPV switch (downstream switch)
    • "feature npv"
    • Forces a write erase and reload of the switch
      • Not all the config is lost, but back it up before the switch restarts
    • On the 55UP, reallocate ports as FC after the reload
      • Have to reload the switch again
  • Configure NP Ports on the NPV switch
    • "switchport mode np"
  • Enable NPIV on the NPV core switch
    • "feature npiv"
  • Configure the previous E ports as F ports on the NPV core switch
    • "switchport mode f"
  • The NPV switch doesn't participate in Fabric Services (see the sketch below)
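  • A minimal sketch of both sides (interface numbers are hypothetical; remember the edge switch reloads after "feature npv" is enabled):
      On the NPV edge switch (downstream):
        switch(config)# feature npv
        switch(config)# interface fc1/1
        switch(config-if)# switchport mode np
      On the NPV core switch (upstream):
        switch(config)# feature npiv
        switch(config)# interface fc2/1
        switch(config-if)# switchport mode f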