posted Sep 30, 2013, 9:21 PM by Rick McGee
Video 30 Finished
DHCP in N1Kv
  • Works pretty much the same as DHCP snooping in a Nexus 5K/7K
    • Protects against man-in-the-middle attacks
    • Can set up approved DHCP servers
    • Works with IP Source Guard and Dynamic ARP Inspection
    • Enable globally or per desired VLAN
    • Trust the ports where the DHCP server is connected
    • Difference from DHCP snooping on a normal switch:
      • On a normal switch, no port is trusted by default
      • On the N1Kv, all Eth ports are trusted by default, while all vEth ports are untrusted
    • If the DHCP server is on a VEM, you have to enter the command "ip dhcp snooping trust" on its vEth interface
  • Otherwise works very much as it would on a N5K or N7K
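The steps above can be sketched as an NX-OS config fragment (VLAN 10 and vethernet5 are example values, assuming the DHCP server VM sits behind vethernet5):

```text
feature dhcp
ip dhcp snooping
ip dhcp snooping vlan 10

interface vethernet5
  description DHCP-Server-VM
  ip dhcp snooping trust
```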
  • For SPAN, both Source and Destination can be physical Eth or VEth interface, but must be on the same ESXi host
    • Bear in mind that the N1Kv, with up to 64 VEMs, looks like one big modular switch
    • It is important to keep clear which vEth and Eth ports are on the same host when configuring SPAN
    • If you need to SPAN across ESXi hosts, then you must use ERSPAN
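A minimal local SPAN session on the N1Kv might look like this (interface numbers are examples; both must live on the same ESXi host/VEM):

```text
monitor session 1
  source interface vethernet3 both
  destination interface vethernet10
  no shut
```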
  • ERSPAN is still a GRE tunnel overlay, just like on any other NX-OS switch, so the destination is simply an IP address
    • Destination must be configured to accept and de-encapsulate the GRE/ERSPAN session
    • The destination can also be a N5K/6K/7K (all see the same encapsulation)
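An ERSPAN-source session sketch on the N1Kv, assuming 10.10.10.10 is the (example) destination that will de-encapsulate the session:

```text
monitor session 2 type erspan-source
  source interface vethernet3 both
  destination ip 10.10.10.10
  erspan-id 100
  ip ttl 64
  no shut
```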
UCS VM-FEX (Low Latency Server Requirements)
  • Creates the same type of DVS in VMware as the N1Kv does
    • (Now supported on KVM and Hyper-V in UCS 2.1)
  • Made up of
    • UCS Fabric Interconnects that act as the control and management plane (in lieu of the N1Kv VSM)
    • VEM is still the Data Plane
      • XML configuration file to point it to the UCS Fabric Interconnect to get configuration
    • VM-FEX switched in hardware
      • Two modes
        • Pass-Through Switching (PTS), in software: Hypervisor ----> VEM ----> Fabric Interconnect
        • DirectPath I/O, in hardware: bypasses the hypervisor, goes straight to the UCS Fabric Interconnects
    • Configure in the VM tab in UCS Manager
C-Series server management choices
  • Have to choose from the following
    • UCSM integration: 2x Nexus 2232PP FEXs and UCSM v2.0 (both required)
      • With UCSM v1.4 you could use two N2248TP FEXs, but that is no longer supported
    • Use Adapter-FEX
      • 1 or 2 Cisco N5Ks required
        • With Palo P81E or newer VIC 1225 virtual interface cards
          • You have the ability to switch the adapter mode in CIMC on the C200 to VN-TAG
            • Instead of two physical links you now get multiple channels per link (physical port)
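On the N5K side, Adapter-FEX is enabled roughly as follows (the port-profile name and VLAN are example values):

```text
install feature-set virtualization
feature-set virtualization
vethernet auto-create

port-profile type vethernet VM-DATA
  switchport mode access
  switchport access vlan 100
  state enabled
```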
    • Use the C-Series server as a traditional Ethernet host
      • Connect to any 10GE switch
UCSM Management
  • A pair of Nexus 2232PP FEXs acts as the "IOM" in a blade chassis
  • Every pair of Nexus 2232PPs in UCSM takes up a chassis count
  • Requires 4 cables
    • Two 1GE cables connected from the C-Series Server LOM ports to 2232 FEX to provide OOB control/Management plane
    • Two 10GE cables connected from the C-Series server SFP ports to 2232 FEX provides the data plane
      • Requires P81E or VIC 1225 Palo cards
    • In UCS 2.1 "single wire management" means a single pair of 10GE cables from the C-Series server to the Nexus 2232 FEX with both providing Control/Management/Data planes
  • Another Cisco FEX solution to connect Nexus 5K's down to C-Series servers
    • To FEX the P81E or VIC 1225 PCIe CNA
    • Creates vEth and VFC ports in the Nexus 5K
    • Two physical 10GE SFP+ ports on the PCIe card, each with two channels, break out into four logical channels
      • You see something very similar within the UCS Fabric Interconnects with the Palo cards
        • Port 1, Channel 1 = Ethernet with Failover to Physical Port 1
        • Port 2, Channel 2 = Ethernet with Failover to Physical Port 2
        • Port 1, Channel 3 = HBA0 (no fabric failover; multipathing software handles it)
        • Port 2, Channel 4 = HBA1 (no fabric failover; multipathing software handles it)