Single vs. Multi Chassis EtherChannel

Port Channeling was originally between only two devices
- E.g. end host and Catalyst 3850 via 2 GE links
- For port-channeling it's best to hash on multiple fields from L3-L4; the hashing method can be different on each end of the port-channel and is only locally significant to that switch
- Increases the bandwidth but still has a single point of failure

Multi Chassis EtherChannel (MCEC/MEC) is between 3 devices
- 1 downstream device and 2 upstream devices
- E.g. end host to 2 Catalyst 3850s via 2 GE links
- Increases bandwidth and resiliency
- Logically appears the same as a 2-device port-channel

Three Main Types of MCEC
- Catalyst 3750, 3850, 3650, and 2960s/x/xr switches
  - Single control plane via stacking cable
- Catalyst 65xx Virtual Switching System (VSS)
  - Single control plane; can use L2/L3 interfaces via the Virtual Switch Link (VSL)
  - Only one Supervisor manages both chassis
- Nexus Virtual Port Channel (vPC)
  - Uses two separate control planes
  - Configurations managed independently
  - Separate control plane protocol instances
    - STP, FHRPs, IGPs, BGP, etc...
  - Synchronization via a Peer Link
    - Similar logic to VSS's VSL

High Level Overview
- vPC is made up of two physical switches - the vPC Peers
- vPC Peers each have...
  - Peer Link
  - Peer Keepalive Link
  - vPC Member Ports

vPC Peer Link
Layer 2 trunk link used to sync the control plane between vPC peers
- CAM table, ARP cache, IGMP Snooping DB, etc...
- If one of the peers learns a MAC address, both switches need to have the same forwarding decision.
- Cisco uses the Cisco Fabric Services over Ethernet (CFSoE) protocol to sync the control planes
  - This is why it has to be a Layer 2 trunk port-channel
- Used to elect the vPC primary and vPC secondary roles
- Normally not used in the data plane
  - The Peer Link generally has much lower bandwidth than the aggregate of the vPC member ports
  - If the Peer Link is used in the data plane, it becomes the bottleneck in the network

vPC Peer Keepalive Link (backup to the Peer Link)
- Layer 3 link used as a heartbeat in the control plane
- Used to prevent active/active or "Split Brain" vPC roles
- Not used in the vPC data plane
- Uses unicast UDP port 3200
- The Peer Keepalive Link can be...
  - Mgmt0 port
    - Potential problems with dual SUPs in N7Ks
  - L3 routed link or port-channel
    - Back to back or over routed infrastructure
    - Ideally in an isolated VRF

vPC Member Ports
- Data plane port-channel towards the downstream neighbor
- Each vPC peer has at least one member port per vPC
  - Can be more, up to the hardware platform's limits (M or F modules)
- From the perspective of the downstream neighbor, the upstream vPC peers appear as one switch
  - Physical result is a triangle
  - Logical result is a point-to-point port-channel with no STP blocking ports
- VLANs on vPC Member Ports must be allowed on the vPC Peer Link trunk
- Make sure the root bridge is an upstream switch, in this case one of the N5Ks

vPC Order of Operations
- Establish IP connectivity for the Peer Keepalive
- Enable the vPC and LACP features globally
- Create the vPC domain
  - Locally significant number, but it has to match between the vPC peers
- Define the Peer Keepalive address
- Establish the port-channel for the vPC Peer Link
  - Layer 2 trunk port (will run CFSoE)
- Verify vPC consistency parameters
  - Same speed, same duplex, same VLANs
- Disable vPC Member Ports (recommended)
- Configure vPC Member Ports
- Enable vPC Member Ports

1.) Make sure you have Layer 3 reachability from N5K1 to N5K2
  ping 192.168.0.52 vrf management
(because mgmt0 is in its own VRF)

2.)
Turn on the features on the Nexus 5K switches
  feature vpc
  feature lacp (if not already enabled)

3.) Assign a vPC domain; this will be the same on each N5K participating in the vPC
  vpc domain 1
After you issue this command it puts you into the (config-vpc-domain) configuration mode

4.) Configure the peer-keepalive
On N5K1:
  peer-keepalive destination 192.168.0.52
On N5K2:
  peer-keepalive destination 192.168.0.51
You can get granular with this command; if you just hit return it assumes you will use the Mgmt0 interface
Make sure the keepalive link is working with "show vpc"

5.) Configure the Peer Link
Make sure there is no previous configuration on these ports on both N5Ks
  int e1/1 - 3
   channel-group 50 mode active (LACP is more aggressive and gives faster failure notice)
  int po50
   switchport mode trunk
   vpc peer-link
The spanning-tree port type will be changed to NETWORK for Bridge Assurance
Both Nexus 5Ks should have the same configuration for the po50 interface
  interface port-channel50
   switchport mode trunk
   spanning-tree port type network
   speed 10000
   vpc peer-link
Check to see if the port-channel is up
  show port-channel summary
  show vpc (one more time, to see if the peer-link adjacency is up)
On N5K2 - this is using CFS (Cisco Fabric Services) for synchronization
  show cfs application
This will confirm CFSoE is enabled after the peer-link is established
  show vpc peer-keepalive
  show vpc consistency-parameters interface port-channel 50

6.)
Configure Member Ports on both N5Ks
  int e1/24
   shut
   channel-group 51 mode on (the router doesn't support LACP)
  int po51
   switchport mode access
   switchport access vlan 10
   vpc 51
   speed 1000 (one-gig port on the router)

R1 configuration (IOS):
  default int gig0/0
  default int gig0/1
  int po1
   ip address 10.0.0.1 255.255.255.0
  int range gig0/0 - 1
   channel-group 1
This is just a normal port-channel on the router side

Now on the N5Ks issue the no shut command on int e1/24
  int e1/24
   no shut
The link should be up

On the router you can issue
  show cdp neighbors
This will show both upstream parent switches (not the Nexus 2Ks)

On the N5Ks issue show vpc for a final check
Make sure that the root bridge is one of the upstream Nexus 5Ks

Tip: make the MAC address of port-channel1 on R1 0000.0000.0001 for easier troubleshooting on the N5Ks
You really can't see where the traffic is traversing unless you do a consistent ping to an IP address and look at the port-channels on each N5K to see which is incrementing.
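The behavior behind that tip - each switch independently hashes the flow's fields to pick one member link, so a single ping stream pins to one N5K - can be sketched in Python. This is a toy model: the `Flow` fields and the CRC32-based hash are my own stand-ins, since real Catalyst/Nexus hardware uses platform-specific hash functions.

```python
import zlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src_mac: str
    dst_mac: str
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int

def select_member(flow: Flow, num_links: int, method: str = "source-dest-port") -> int:
    """Pick an egress member-link index for a flow.

    The choice is locally significant: each end of the port-channel can
    run a different method and still interoperate.
    """
    if method == "source-dest-mac":
        key = f"{flow.src_mac}|{flow.dst_mac}"
    else:  # include L3/L4 fields for more hashing entropy
        key = (f"{flow.src_mac}|{flow.dst_mac}|{flow.src_ip}|{flow.dst_ip}"
               f"|{flow.src_port}|{flow.dst_port}")
    # Deterministic hash folded onto the number of member links
    return zlib.crc32(key.encode()) % num_links

# Two flows between the same hosts, differing only in source port:
flow_a = Flow("aa:aa", "bb:bb", "10.0.0.10", "20.0.0.30", 40000, 5001)
flow_b = Flow("aa:aa", "bb:bb", "10.0.0.10", "20.0.0.30", 40001, 5001)

# MAC-only hashing pins both flows to the same member link...
assert select_member(flow_a, 2, "source-dest-mac") == select_member(flow_b, 2, "source-dest-mac")
# ...and any given flow always stays on the same link (no reordering)
assert select_member(flow_a, 2) == select_member(flow_a, 2)
```

This is why a steady ping only increments counters on one N5K: the 5-tuple never changes, so the hash (and thus the member link) never changes.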
Load Balancing
  port-channel load-balance ethernet source-dest-port
This gives you a more even flow distribution across all member ports: the hash uses the L2 source and destination plus the L4 port numbers, so there is more information in the packet to hash on.

vPC Loop Prevention
- The goal of vPC is to hide redundant links from STP
  - This could result in Layer 2 flooding loops
- Loops are prevented by a "vPC Check"
  - Frames received on the vPC Peer Link cannot flood out a vPC Member Port while the remote vPC peer has active vPC members in the same vPC
  - Think of CFSoE: both peers know which member links are up and down (why the peer-link is so important)
- vPC Check exception
  - If a vPC peer's member ports are down, the remaining member ports become "Orphan Ports" and the vPC check is disabled
  - The vPC Peer Link is essentially a last-resort connection
  - The vPC check happens on a per-port-channel basis

vPC Consistency Check
- CFSoE runs on the vPC peer-link to synchronize the control plane
- Includes advertisements of "consistency parameters" that must match for the vPC to form successfully
  - E.g. line card type (M or F), speed, duplex, trunking, LACP mode, STP configuration, etc...
- Three types of consistency checks
  - Type 1 Global (will bring down the entire vPC)
    - Mismatch results in the vPC failing to form for a new vPC
  - Type 1 Interface (e.g. a VLAN allowed on one peer but not configured on the other disables that VLAN, not the entire vPC)
    - Mismatch results in VLANs being suspended if changed on an active vPC
  - Type 2, where a mismatch results in a log message but not a vPC failure
    - Could still result in failures in the data plane

Useful vPC commands
  show port-channel usage
    Shows currently used channel numbers so you don't reuse one
  show vpc consistency-parameters global
  show vpc consistency-parameters interface <interface>
    Verifies compatibility between vPC peers for a given vPC

Configure Active/Active FEX connections to upstream N5K parent switches

N5Ks:
  conf t
  feature vpc
  vpc domain 5
  (config-vpc-domain)
   peer-keepalive destination 192.168.1.51/52 (the peer's address on each switch)
  show vpc (to make sure the vPC peer is alive)

Configure the Peer Link; make sure no other configuration is on the interfaces first, or you could run into inconsistencies
  sh run int e1/1 - 3
  conf t
  int e1/1 - 3
   channel-group 50 mode active
  int po50
   vpc peer-link
   switchport mode trunk
   spanning-tree port type network
  show vpc

  conf t
  int e1/4 - 7
   shut
  (config) feature fex
  int e1/4 - 5
   channel-group 501 mode on (FEX uplink ports don't support LACP or active mode; end-host ports do support LACP)
  int po501
   vpc 501 (this number doesn't have to match between peers, but matching is nice for troubleshooting later on)
   switchport mode fex-fabric
   fex associate 101

Enable the ports on the primary vPC peer N5K1 and wait for the FEX association to complete
  show fex
  show fex detail
You'll notice that not all the host ports are shown; most likely the FEX is still rebooting.
Wait until you can see all the host ports before activating the member ports on N5K2
  conf t
  int e1/6 - 7
   channel-group 502
  int po502
   switchport mode fex-fabric
   fex associate 101 (the FEX number must match on both parent switches for an active/active FEX)
   vpc 502
The above commands would be on each N5K switch

Config Sync on the N5Ks
- When the FEX has two parent switches, configuration mismatches can have a negative effect
  - The parent switches run separate control and management planes
- Config Sync allows a template of a config to be pushed between N5Ks and applied simultaneously
- Uses CFSoIP to exchange config parameters; not specific to FEX and vPC, but that's its most common application
- Applied via config sync (instead of config t)
- Sometimes referred to as a switch profile
!In NX-OS 5.1(3) and later you have to use Config Sync for 100% of the configuration, not config t!

Configuration
  conf t
  cfs ipv4 distribute
Enter config sync mode
  config sync (will change the prompt to (config-sync))
  switch-profile PROFILE1 (will change the prompt to (config-sync-sp))
  sync-peer destination 192.168.0.51/52 (the peer's address on each switch)

Example of a server hanging off of ports e101/1/6 - 7
  int e101/1/6 - 7
   switchport mode access
   switchport access vlan 10
  verify
Will show "Verification Successful" if all is correct
  commit
This commits the changes to the configuration and sends them to the peer

Enhanced vPC
R4 is connected to ports e101/1/3 and e102/1/3 on N2K1 and N2K2

R4 configuration:
  conf t
  int g0/0 - 1
   no ip address
   duplex auto
   speed auto
   media-type rj45
   channel-group 1
  int po1
   ip address 10.0.0.4 255.255.255.0
   no shut

N5Ks configuration:
  conf t
  int e101/1/3 , e102/1/3
   shut
   speed 1000 (R4 only has 1GE ports)
   channel-group 123 mode on (R4 doesn't support LACP)
   switchport access vlan 10
  int e101/1/3 , e102/1/3
   no shut

Note: with the N5Ks (5548/96), even without the Layer 3 module you can still create SVIs and add an IP address to do basic pings back and forth between two Nexus 5Ks

Advanced vPC design - Back to Back vPCs
You see one vPC on the 7Ks and one on the 5Ks

Configuration

N7K1-1:
  conf t
  feature interface-vlan
  feature lacp
  feature vpc
  vrf context KEEPALIVE
  int vlan 3000
   vrf member KEEPALIVE
   ip address 169.254.0.71/24
   no shut
  int e1/1 - 2
   no shut
   switchport (M1 module)
   channel-group 3000 mode active
  int po3000
   switchport
   switchport access vlan 3000
  vpc domain 7
   peer-keepalive destination 169.254.0.72 vrf KEEPALIVE source 169.254.0.71 (source is used when not using Mgmt0)
   peer-switch (this gives the appearance that both peers have the same MAC and are both the STP root)

N7K1-2:
  conf t
  feature interface-vlan
  feature lacp
  feature vpc
  vrf context KEEPALIVE
  int vlan 3000
   vrf member KEEPALIVE
   ip address 169.254.0.72/24
   no shut
  int e1/9 - 10
   no shut
   switchport (M1 module)
   channel-group 3000 mode active
  int po3000
   switchport
   switchport access vlan 3000
Ping the other peer
  ping 169.254.0.71 vrf KEEPALIVE
  vpc domain 7
   peer-keepalive destination 169.254.0.71 vrf KEEPALIVE source 169.254.0.72 (source is used when not using Mgmt0)
   peer-switch (this gives the appearance that both peers have the same MAC and are both the STP root)
The peer should now be alive

Establish the Peer Link

N7K1-1:
  conf t
  int e2/1 - 2
   channel-group 3001 mode active
  int po3001
   vpc peer-link
   switchport mode trunk
   spanning-tree port type network
  int e2/1 - 2
   no shut

N7K1-2:
  conf t
  int e2/9 - 10
   channel-group 3001 mode active
  int po3001
   vpc peer-link
   switchport mode trunk
   spanning-tree port type network
  int e2/9 - 10
   no shut
  show vpc

Configure member ports

N7K1-1:
  int e2/3 - 6
   shut
   channel-group 5 mode active
  int po5
   vpc 5
   spanning-tree port type network
   switchport mode trunk

N7K1-2:
  int e2/11 - 14
   shut
   channel-group 5 mode active
  int po5
   vpc 5
   spanning-tree port type network
   switchport mode trunk

N5K configuration

N5K1:
  feature lacp
  feature vpc
  vpc domain 5
   peer-keepalive destination 192.168.0.52
  int e1/1 - 3
   switchport
   channel-group 50 mode active
   no shut
  int po50
   switchport mode trunk
   spanning-tree port type network
   vpc peer-link

N5K2:
  feature lacp
  feature vpc
  vpc domain 5
   peer-keepalive destination
192.168.0.51
  int e1/1 - 3
   switchport
   channel-group 50 mode active
   no shut
  int po50
   switchport mode trunk
   spanning-tree port type network
   vpc peer-link

Member ports

N5K1:
  int e1/8 - 11
   no shut (the other side, on the N7Ks, is still disabled)
   channel-group 7 mode active
  int po7
   switchport mode trunk
   spanning-tree port type network
   vpc 7

N5K2:
  int e1/8 - 11
   no shut (the other side, on the N7Ks, is still disabled)
   channel-group 7 mode active
  int po7
   switchport mode trunk
   spanning-tree port type network
   vpc 7

Enable the member ports on N7K1-1, which is the primary; wait for the control plane to settle down before enabling the member ports on N7K1-2

N7K1-1:
  int e2/3 - 6
   no shut

N7K1-2:
  int e2/11 - 14
   no shut

vPCs and FHRPs
- The Nexus 7K is typically the L2/L3 network boundary
- HSRP, GLBP, and VRRP (standard)
- FHRP behavior changes to accommodate Active/Active forwarding over vPC
  - Traffic received on a vPC member port of the FHRP Standby, destined to the FHRP virtual MAC, is not forwarded over the Peer Link to the FHRP Active member
  - In effect the Standby HSRP router acts as an HSRP Active router: Active/Active FHRP
- vPC can break certain non-standard vendor applications
  - Some applications respond to the actual MAC address of the Active or Standby device instead of the virtual MAC
  - Frames sent to the FHRP Standby with the physical DST MAC of the FHRP Active are sent out the Peer Link (which we don't want)
  - Peer-Gateway allows the FHRP Standby to forward frames on behalf of the DST MAC of the FHRP Active without going over the Peer Link

End host configuration

N5K1:
  conf t
  int e101/1/1
   shut
   speed 1000 (to R3)
   no shut

R3:
  conf t
  default int gig0/1
  int gig0/0 (going to N2K1)
   ip address 10.0.0.3 255.255.255.0
show cdp neighbors - you should see N5K1

Go to S1 and change team0 (two 10GE adapters) to use the default gateway 10.0.0.254
You should be able to ping R3 at 10.0.0.3

N5K1 - configure the S1 ports:
  show run int e101/1/4 - 5
  conf t
  int e101/1/4 - 5
   channel-group 10 mode active
  int po10
   switchport mode access
   switchport access vlan 10
   spanning-tree port type edge
  vlan 10
  int e101/1/1
   switchport access vlan 10

N5K2:
  conf t
  vlan 20
  int e102/1/1 (to R2)
   switchport access vlan 20
  int e102/1/6 - 7
   channel-group 30 mode active
  int po30
   switchport access vlan 20
   spanning-tree port type edge

R2:
  conf t
  default int g0/0
  int g0/0
   ip address 20.0.0.2 255.255.255.0
   no shut
  int g0/1
   shut

Configure S3
  IP address 20.0.0.30 255.255.255.0
  Default gateway 20.0.0.254
Test: ping from R2 to S3 to make sure you have local connectivity

N7K1-1:
  conf t
  vlan 10,20
  feature interface-vlan
  int vlan 10
   ip address 10.0.0.71/24
   no shut
  int vlan 20
   ip address 20.0.0.71/24
   no shut

N7K1-2:
  conf t
  vlan 10,20
  feature interface-vlan
  int vlan 10
   ip address 10.0.0.72/24
   no shut
  int vlan 20
   ip address 20.0.0.72/24
   no shut

Also create the VLANs on N5K1 and N5K2, because we are running back-to-back vPCs with the N7Ks
  ping 10.0.0.3
  ping 10.0.0.10
  ping 20.0.0.2
  ping 20.0.0.30

Configure HSRP on the N7Ks

N7K1-1:
  conf t
  feature hsrp
  int vlan 10
   hsrp 10
    (config-if-hsrp) ip 10.0.0.254
    priority 255
  int vlan 20
   hsrp 20
    ip 20.0.0.254
    priority 255
  vpc domain 7
   peer-gateway

N7K1-2:
  conf t
  feature hsrp
  int vlan 10
   hsrp 10
    (config-if-hsrp) ip 10.0.0.254
    priority 100
  int vlan 20
   hsrp 20
    ip 20.0.0.254
    priority 100
  vpc domain 7
   peer-gateway

  show hsrp
There will still be an Active and a Standby in the show hsrp output, but both can forward traffic since the MAC behaves the same on both peers with the peer-gateway command. The resulting traffic will not go over the peer-link but out the member port, which is the desired effect unless there is a failure in the network. This may not be the desired effect if you have northbound routers out to the public internet.
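The Active/Active FHRP and peer-gateway rules above can be sketched as a small decision function. This is a simplified model with made-up MAC strings and peer names, not NX-OS internals: it only shows when a frame is routed locally versus punted across the Peer Link.

```python
# Toy model of vPC + FHRP forwarding: two N7K peers sharing an HSRP
# virtual MAC. All names/values here are illustrative assumptions.

VMAC = "hsrp-vmac"                                     # shared HSRP virtual MAC
PEER_MAC = {"N7K1-1": "mac-71", "N7K1-2": "mac-72"}    # per-peer physical MACs

def routes_locally(receiving_peer: str, dst_mac: str, peer_gateway: bool) -> bool:
    """True if the receiving vPC peer routes the frame itself,
    False if the frame must cross the Peer Link to the other peer."""
    if dst_mac == VMAC:
        # Active/Active FHRP: either peer routes frames sent to the vMAC,
        # even the nominal HSRP Standby
        return True
    if dst_mac == PEER_MAC[receiving_peer]:
        return True  # addressed to this peer's own physical MAC
    if dst_mac in PEER_MAC.values():
        # Addressed to the *other* peer's physical MAC (non-standard hosts):
        # peer-gateway lets this peer route on the other's behalf
        return peer_gateway
    return False  # not a router MAC at all; the frame is bridged, not routed

# Standby receives a frame for the vMAC: routed locally, no Peer Link hop
assert routes_locally("N7K1-2", VMAC, peer_gateway=False) is True
# Frame to the other peer's physical MAC crosses the Peer Link...
assert routes_locally("N7K1-2", "mac-71", peer_gateway=False) is False
# ...unless peer-gateway is enabled
assert routes_locally("N7K1-2", "mac-71", peer_gateway=True) is True
```

The last two assertions are exactly the non-standard-host case in the notes: without peer-gateway the frame rides the Peer Link; with it, the Standby answers for the Active's MAC locally.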
vPC Failure Scenarios
- The worst case scenario in a vPC failure is "Split Brain"
  - The vPC control plane is broken and both vPC peers assume the vPC primary role
  - The Peer Keepalive and Peer Link have built-in protection against this
- Upon failure, the vPC Secondary suspends its local vPC member ports and SVIs
  - Normally the desired behavior to prevent Split Brain (Active/Active)
  - Can isolate orphan ports that use the vPC Secondary's SVI as their default gateway
  - !!!Don't use orphan ports!!! vPC peers should ideally have only vPC member ports, i.e. all downstream devices are dual-attached

vPC Failure Problems
- The vPC Peer Link goes down...
  - The Secondary waits for the hold-time and keepalive timeouts to expire
  - After the timers expire, if keepalives are not received, it assumes the vPC Primary role
  - Else, if keepalives are received, it suspends its vPC member ports
    - The vPC Secondary is now effectively disabled
- Next, the vPC Primary fails completely
  - The vPC Secondary already has its vPC member ports suspended and they don't come back
  - The Secondary does not continually check for the vPC Primary
  - Now both vPC Primary and vPC Secondary are effectively disabled

vPC Auto Recovery (off by default on N5K and N7K; replaces Reload Restore)
- Allows the vPC Secondary to assume the Primary role in certain failure scenarios
- The vPC Peer Link goes down...
  - The Secondary waits for the hold-timeout and keepalive timeouts to expire
  - Keepalives received: suspend the vPC member ports
- The Primary then completely fails
  - With Auto Recovery, the vPC Secondary actively checks for keepalives
  - The vPC Secondary promotes itself to Primary and un-suspends its vPC member ports
  - Upon recovery of the old Primary, there is no preemption
    - Bouncing the Peer Link will return the operational secondary to operational primary
- The second case is recovery after initial boot up
  - A power outage occurs on both Primary and Secondary
  - After boot, if the vPC Peer Link does not come up, role election cannot occur and the vPCs are never brought up
  - Auto Recovery allows a single vPC peer to elect itself vPC Primary after a configured timeout if the vPC Peer Link never comes up after reload

Problems with vPC Auto Recovery
- Certain failure scenarios will still cause Split Brain
  - Auto Recovery is on, and the Peer Link and Peer Keepalive cables are both cut
  - Eventually the vPC Secondary elects itself Primary
  - Dual Primary - oh no!, traffic forwarding fails
- The operational solution to a catastrophic failure like this is to power off the secondary, or no feature vpc
- The design solution is better physical redundancy
  - Dual SUPs
  - Redundant power grid
  - Peer Link and Keepalive as port-channels on separate line modules

vPC Peer Switch
- Makes the vPC peers appear as the same root bridge
  - Same priority and MAC address on primary and secondary vPC peers
- Useful in failure scenarios to reduce RSTP reconvergence time when the vPC primary fails and then recovers
  - With Peer Switch, the secondary vPC peer doesn't need to run RSTP sync when the primary comes back
- Configuration
  conf t
  vpc domain 7
   peer-switch (not supported on N5K's 5.2(1)N(1) or above)

Hints
- Have a local "term mon" on during the lab exam to see local log messages
- "show run vpc" shows all vPC-related configuration

Multicast with vPCs
- When the receiver is reachable via a vPC Member Port
  - IGMP Reports are synchronized over the Peer Link
  - Usually there is a PIM Assert winner (lowest IGP cost to the sending segment)
  - In vPC, the Primary vPC peer forwards
towards the Peer Link and its vPC member ports
  - Traffic the Secondary receives via the Peer Link is forwarded to orphan ports, but not to vPC members, because of the vPC check rule
  - The Peer Link is used in the data plane for multicast, so the bandwidth budget has to be taken into consideration and adjusted if necessary
- When the source is reachable via a vPC member port
  - Both vPC peers act as a PIM DR (designated router)
    - Called "Dual DR" or "Proxy DR"
  - The PIM DR has the role of informing the RP that a new sender is on the network
    - PIM Register message (unicast) to the RP with the new (S,G)
  - Allows either the vPC Primary or Secondary to receive traffic from the source and forward it northbound without having to cross the vPC Peer Link
- Nexus does not support SSM
- Nexus looks into the Layer 3 payload information to do Layer 2 forwarding based on the Layer 3 information
  - Catalyst IOS instead has a separate L2 table for the multicast MAC addresses
    show mac address-table multicast

Design
- No back-to-back vPCs: two separate vPCs, 51 and 52, to the corresponding N5Ks
- Sending multicast feeds from S3 to S1 via JPERF
- The multicast flow goes from N2K2 up to N5K2 to N7K1-2, then out int e2/14 (member port) and out int e2/9 (the peer-link)
- Once N7K1-1 receives the multicast flow it will not send it out any of its member ports, which are int e2/3 - 6
- When there is a receiver behind one of the member ports, the multicast flow will always be sent over the peer link as well as the member ports
  - The switches cannot guarantee that there are no orphan ports (for instance hanging off N7K1-1) that are trying to receive the multicast feed.
But once N7K1-1 gets the multicast flow it will not send it down any member ports

VLANs
- 10 = 10.0.0.0/24
- 20 = 20.0.0.0/24

Nexus IGMP Snooping

N7K1-1 and N7K1-2:
  feature pim
  int vlan 10
   ip pim sparse-mode
S1, through JPERF, is sending UDP to multicast group 255.5.5.5
  show ip igmp snooping groups

Layer 3 Multicast Routing Design
- Using the JPERF application on S1 and S3 to send multicast packets to multicast group 277.7.7.7
- S3 is the sender and S1 is the receiver, going over multiple L3 hops from N7K1-4 to N7K1-1 and N7K1-2

Layer 2 configuration

N7K1-4 (the L2-to-L3 demarc for the multicast sender):
  conf t
  int e2/27 - 28
   channel-group 333 mode active
   no shut
  int po333
   switchport mode trunk
   spanning-tree port type network
  conf t
  feature interface-vlan
  vlan 20
  int vlan 20
   ip address 20.0.0.254/24
   no shut
  show spanning-tree vlan 20 (on N7K1-4 and N5K2; po333 is in a FWD state)
You should be able to ping 20.0.0.30 (Server 3's IP address)

N7K1-1 and N7K1-2:
  no int vlan 20 (this SVI will only be on N7K1-4)
This will also break the previous HSRP address for VLAN 20 on these switches

N5K2:
  conf t
  int e1/12 - 13
   channel-group 333 mode active
  int po333
   switchport mode trunk
   spanning-tree port type network

Layer 3 configuration

N7K1-4:
  conf t
  int e1/25
   no switchport
   ip address 150.73.74.74/24
   no shut

N7K1-3:
  conf t
  int e1/17
   no switchport
   ip address 150.73.74.73/24
   no shut
Ping the host 150.73.74.74 to make sure you have basic L3 connectivity; once you've confirmed that, move on to enabling the OSPF feature for the routing instance

N7K1-3/4:
  conf t
  feature ospf (this requires the LAN_ENTERPRISE_SERVICES_PKG license)
  router ospf 1

N7K1-4:
  conf t
  int e1/25
   ip router ospf 1 area 0
  int vlan 20
   ip router ospf 1 area 0
  show ip ospf neighbors

N7K1-3:
  conf t
  int e1/17
   ip router ospf 1 area 0
  int vlan 20
   ip router ospf 1 area 0
  show ip route ospf

N7K1-3:
  conf t
  int e1/19
   description TO 7K1-1
   no switchport
   ip address 150.71.73.73/24
   ip router ospf 1 area 0
   no shut
  int e1/20
   desc TO 7K1-2
   no switchport
   ip address 150.72.73.73/24
   ip
router ospf 1 area 0
   no shut
  int lo0
   ip address 1.1.1.73/32 (PIM Rendezvous Point - root of the shared tree (*,G))
   ip router ospf 1 area 0
   ip pim sparse-mode

N7K1-1 and N7K1-2:
  conf t
  feature ospf

N7K1-1:
  conf t
  router ospf 1
  int e1/3
   no switchport
   ip address 150.71.73.71/24
   ip router ospf 1 area 0
   no shut
  int vlan 10
   ip router ospf 1 area 0
   ip ospf passive-interface

N7K1-2:
  conf t
  router ospf 1
  int e1/12
   no switchport
   ip address 150.72.73.72/24
   ip router ospf 1 area 0
   no shut
  int vlan 10
   ip router ospf 1 area 0
   ip ospf passive-interface (you don't want OSPF forming an adjacency with the peer switch over the peer-link)
If you need adjacencies in the distribution/aggregation layer, you should create another L3 connection between the two peer switches
  show ip ospf neighbors
You should be able to ping S1 or S3 from anywhere in the network.

Multicast configuration

N7K1-4:
  conf t
  feature pim
  ip pim rp-address 1.1.1.73
  int vlan 20
   ip pim sparse-mode
  int e1/25
   ip pim sparse-mode
  show ip pim interface brief

N7K1-3:
  conf t
  feature pim
  ip pim rp-address 1.1.1.73
  int e1/17
   ip pim sparse-mode
  int e1/19 - 20
   ip pim sparse-mode

N7K1-1:
  conf t
  int e1/3
   ip pim sparse-mode
  exit
  ip pim rp-address 1.1.1.73

N7K1-2:
  conf t
  int e1/12
   ip pim sparse-mode
  exit
  ip pim rp-address 1.1.1.73
  show ip pim neighbors

N7K1-3:
  show ip mroute
  show ip mroute 277.7.7.7 summary software-forwarded

The final result is that S1 can receive the multicast from S3 on multicast group 277.7.7.7
Both vPC peers receive the multicast, but only N7K1-1 (the Primary) forwards it to S1. N7K1-1 will send the multicast packets out the peer-link to N7K1-2 (the Secondary), but due to the vPC check N7K1-2 will not send them out any member ports.

Hint: if you turn on terminal monitor while connected to the line console, you have to bump the speed up to 38400 to receive severity level 7 messages
  conf t
  line console
   speed 38400
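The Peer Link failure handling from the vPC Failure Scenarios section can be sketched as a small decision function. The state and return-value names are my own simplification for illustration, not NX-OS behavior verbatim:

```python
# Toy model of the vPC Secondary's reaction once its hold/keepalive
# timers expire after a Peer Link failure, with and without Auto Recovery.

def secondary_action(peer_link_up: bool,
                     keepalives_received: bool,
                     already_suspended: bool,
                     auto_recovery: bool) -> str:
    """What the vPC Secondary does after its timers expire."""
    if peer_link_up:
        return "normal"  # vPC is healthy, nothing to do
    if already_suspended:
        # Member ports were suspended earlier; now the Primary has died too.
        if not keepalives_received and auto_recovery:
            # Auto Recovery keeps checking keepalives and promotes the
            # Secondary, un-suspending its member ports
            return "promote-to-primary"
        # Without Auto Recovery the Secondary never re-checks: both
        # peers are now effectively disabled
        return "stay-suspended"
    if keepalives_received:
        # Peer is alive but the Peer Link is down: suspend local vPC
        # member ports and SVIs to avoid Split Brain
        return "suspend-member-ports"
    # No keepalives at timer expiry: the peer looks dead, take over
    return "assume-primary"

# Peer Link cut, peer still answering keepalives -> Secondary backs off
assert secondary_action(False, True, False, False) == "suspend-member-ports"
# Later the Primary dies completely: only Auto Recovery saves the vPC
assert secondary_action(False, False, True, False) == "stay-suspended"
assert secondary_action(False, False, True, True) == "promote-to-primary"
```

Note the remaining hole the notes call out: if the Peer Link and Peer Keepalive are both cut while the Primary is actually still alive, this same logic drives the Secondary to promote itself, producing the Dual Primary / Split Brain condition.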