Blog

CCIE DC N1Kv Installation

posted Jul 20, 2014, 11:43 AM by Rick McGee   [ updated Aug 1, 2014, 4:20 PM ]


Start by installing the VSM on your hosts

You can do this in one of two ways:
    An OVA or OVF file
    Or via the Installer App (Java based)
The Installer App gives you a few choices; choose Cisco Nexus 1000V Complete Installation with the Custom option.

A list of prerequisites that need to be met.

Enter the IP address, user name, and password for the vCenter server credentials; in the lower left-hand section you'll see a status indication.

Click on the Browse radio button and it will open another window where you can pick the host you want.

Same action for the Datastore: click the Browse button and choose the correct Datastore.

Choose the correct vSwitch in the same manner as the host and datastore.
For the second host you can also key in the values, if you know them, without browsing. You'll also name the VDS and enter the credentials for the N1Kv, and finally choose the OVA file for the VSM.

In the next section you have to choose between Layer 2 (the older way) and Layer 3 connectivity. With Layer 2, as you would think, all VSMs and VEMs have to be on the same subnet. With Layer 3 you could have the VSMs in a main DC and the VEMs in another data center on different subnets. The Layer 3 option also gives you more troubleshooting advantages, such as ping and traceroute.
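If you later want to verify or change the transport mode from the VSM CLI, the setting lives under svs-domain. A minimal sketch for L3 mode (the domain ID of 100 is an assumption; use your own):

conf t
svs-domain
domain id 100
no control vlan
no packet vlan
svs mode L3 interface mgmt0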

Control Port Group used for:
    VEM-to-VSM heartbeats
    VSM-to-VSM AIPC traffic
    IGMP state information
    This could be the same as the Mgmt. VLAN or could be in a separate VLAN

Management 
    SNMP, Telnet, and SSH 

Click next, and it will validate the information we provided thus far regarding the hosts.
It will now give you the option to review the configuration before moving on. Click next...

You can now see that it's deploying the secondary VSM after successfully deploying the primary VSM.

You can see now that the Nexus 1000V plug-in is installed.

You can see that the N1000V has added the VSM-MGMT switch to the standard switch layout.
The installation will go through the checklist.

Final choices to add additional modules. Click Next... You can also add a VIB (vSphere Installation Bundle) file to add the VEMs at this time if you would like.

In this section you can migrate hosts over to the VSM, but it's better to do it manually, so you would just close the program.

You should now be able to log into the N1Kv and issue the command "show module", and it will give you all the information that was just configured.


Here is the "show run" You can see it's only running the Essential's version (no DAI or DHCP Snooping SGT, SGACL's)
Here you see the SVS connection status the remote IP address the the Vcenter IP address and the vmware-vim protocol
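For reference, the SVS connection the installer builds looks roughly like this on the VSM; the connection name, IP address, and datacenter name below are assumptions for illustration:

svs connection vcenter
protocol vmware-vim
remote ip address 10.0.0.20 port 80
vmware dvs datacenter-name DC1
connect

"show svs connections" displays the same status you see here.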

Next you would need to deploy the VEMs (Virtual Ethernet Modules).

On the CLI of the N1Kv:


conf t
vlan 4093
name MGMT
vlan 110
name VM-Guests
port-profile type ethernet VM-Sys-Uplink
vmware port-group
no shut
switchport mode trunk
switchport trunk allowed vlan 1,4093
system vlan 1,4093
state enabled

vlan 115
name vMotion 
port-profile type ethernet Vmotion-Uplink
vmware port-group
no shut 
switchport mode trunk
switchport trunk allowed vlan 115
system vlan 115
state enabled

port-profile type ethernet VM-Guests
vmware port-group
no shut
switchport mode trunk
switchport trunk allowed vlan 110
state enabled

copy run start (this will save the configuration to both VSMs)
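Before checking vCenter you can verify the profiles from the VSM itself; these are standard N1Kv show commands, and the profile name matches the one created above:

show port-profile brief
show port-profile name VM-Sys-Uplink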

As you can see from the above example, the changes we made on the N1Kv are now showing up in vCenter.


Now you can add a host by right-clicking on the N1Kv and clicking Add Host.

In this example we are moving every other vmnic from the standard vSwitch to the N1Kv VDS and choosing the uplink port from the port groups we created earlier.

It should look like the following; when you click next it will migrate the selected vmnics to the N1Kv VDS.

HINT: Make sure you don't overlap VLANs across the Ethernet port profiles; VMWare doesn't care for those.

This is a warning that while migrating the host ports, vmnics might lose connectivity.

Here is a graphical layout of the N1Kv VDS and the ports.

This example shows VUM (VMWare Update Mgr.) installing the VEMs onto the hosts.
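To confirm a VEM actually landed on a host, you can check from the ESXi shell; "vem status" is the standard check (the -v verbose flag is optional), and "vemcmd show card" shows the card details once the VEM is running:

vem status -v
vemcmd show card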

You can now see the hosts under the DVS (Distributed Virtual Switch); one has an error relating to no adapter redundancy on host 2.

You have to add the vmnics to the N1Kv DVS.
Click on Manage Physical Adapters (these are the Eth interfaces in the N1Kv).

Click to add NICs for the VM-Sys-Uplink.

Add vmnic1 to migrate it to the N1Kv DVS, then choose vmnic3 for vMotion and vmnic5 for VM-Guest.
It will ask you if you really want to move it from the vSwitch to the N1Kv-01 DVS. Hit Yes!....

It will look similar to this output; VM-Guest is grayed out because no guests are registered with the VM-Guest vmnic/VLAN 110 yet.
You would now do the same for the second host.


You now have to create the vEth port-profiles.

N1Kv-01
conf t
port-profile type vethernet VMKernel
vmware port-group
no shut
switchport mode access
switchport access vlan 1
system vlan 1
state enabled
You'll now see the VMKernel port group show up under the DVS in vCenter.

Back on the N1Kv-01 switch, issue the following (capability l3control is configured under the vEth port-profile):

config t
port-profile type vethernet VMKernel
capability l3control

You'll see a warning to make sure the system VLAN is the same for the port-profile.
This is what carries the VEM heartbeats: they are encapsulated in UDP port 4785 and sent to the mgmt. IP address.
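For reference, the complete VMKernel profile then looks roughly like this (same names and VLANs as configured above; only the capability line is new):

port-profile type vethernet VMKernel
capability l3control
vmware port-group
switchport mode access
switchport access vlan 1
no shut
system vlan 1
state enabled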

You can now create the other vEth port-profiles

conf t
port-profile type vethernet VSM-MGMT
vmware port-group
no shut
switchport mode access
switchport access vlan 4093
system vlan 4093
state enabled

port-profile type vethernet vMotion
vmware port-group
no shut
switchport mode access
switchport access vlan 115
system vlan 115
state enabled

port-profile type vethernet VM-Guest-110
vmware port-group
no shut
switchport mode access
switchport access vlan 110
system vlan 110
state enabled

You can see all the vEth port-profiles show up in the vCenter N1Kv-01 DVS.

Now we can move the vEths onto the Eth uplinks through the N1Kv DVS.


Click on Add

Click on Migrate existing virtual adapters.

You can do one at a time or both at the same time; you just pick the port group that we created earlier.

You'll see that the VMKernel vEth is associated with the VM-Sys-Uplink Eth port. Click Finish.


You'll see Mod 3 show up on the N1Kv switch in the "show module" output. You can also see the ESXi version and the IP address of the server, as well as the server UUID.
The N1Kv uses the UUID to distinguish the VEM, so even if the VEM were shut down and brought back up, it would still use the same VEM 3 designation.
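A quick way to see that UUID-to-module relationship on the VSM is the mapping command; the output will reflect your own hosts:

show module vem mapping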


CCIE DC Nexus 1000V VM-FEX Adapter-FEX

posted Jul 17, 2014, 9:00 PM by Rick McGee

Acts as a Cisco modular chassis switch
    There is actually no hardware

Creates DVS or vDS in VMWare 

Made up of:
    Virtual Supervisor Module (VSM): control and mgmt. plane
    Virtual Ethernet Module (VEM): data plane

Virtual Service Blades
    Virtual Security Gateway (VSG)
    ASA 1000v
    vWAAS
        All use vPath 2.0 for data interception/control

Each server in the data center is represented as a line card in the Cisco Nexus 1000v and can be managed as if it were a line card in a physical Cisco switch.

Nexus 1000v and Cisco UCS
    They're compatible with each other
    They don't have to know about each other to work

N1KV is compatible with vPC-HM using MAC Pinning
N1KV is not compatible with allocating "Dynamic vNICs" in a Service Profile
    Dynamic vNICs create VM-FEX

VM-FEX and N1KV are mutually exclusive 
    Both of these options create a VDS on the hypervisor. You wouldn't run multiple VDSs on the same
    host.
        VM-FEX and N1KV both use VEMs
        N1KV uses the VSM for its control plane
        VM-FEX uses the UCS FIs as its control plane

vPath
    The vPath protocol is always running in the VEM
        It directs traffic to the VSN (Virtual Services Node), which applies the security or optimization policy and sends the
        packet back to the VEM, along with the ability to then fast-switch that flow directly in the VEM

    Only a new traffic flow must first be sent to the VSN; subsequent traffic is forwarded directly by the 
    VEM on the ESX(i) host.

Installation 
    The VSM installs Opaque Data in VMware vCenter for its DVS
        Done using "svs connection"
        Server Virtualization Switch (SVS)

    VSMs and VEMs should all be the same version
    
    The Control/Mgmt. network should be low latency (more important than BW)

    vCenter downloads this information to ESXi for the VEMs to use whenever a host is added to the
    N1Kv-DVS
        All VEM heartbeats should increase at roughly the same rate
            "show module vem counters"
            If the VSM misses 6 heartbeats from a VEM it considers it offline

    Always hardcode the VEM-to-module number before you add an ESXi host to the N1Kv
        It's recommended to tie it to the UCS chassis and blade
        Get the UUID from the ESXi host:   
            #esxcfg-info -u (must be lower-case letters)
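A minimal sketch of that hardcoding on the VSM; the module number and UUID here are hypothetical, and on older releases the sub-command is "host vmware id" rather than "host id":

config t
vem 4
host id 01234567-89ab-cdef-0123-456789abcdef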

VEM Port Profiles 
    Eth (uplink, tied to HW ports)
    vEth (virtual, tied to VMs)

    System VLANs in Eth and vEth Port Profiles
        Used to give immediate cut-through access to the vmkernel (traffic on a system VLAN is forwarded
        even before the VEM has been programmed by the VSM)

Modes 
    L2
        VEMs must be in the same VLAN as the VSM Control VLAN

    L3 (Recommended)
        VEM traffic is encapsulated in UDP 4785
            capability l3control is needed on the vEth profile used for the ESXi VMKernel before moving it from 
            vSwitch0

    System VLAN for both vEth and Eth 

Port Channels in N1Kv

    Remember that the N1Kv doesn't have to run on UCS B-Series servers; it can run on any manufacturer's 
    server with ESXi

    Show commands
        module vem 3 execute vemcmd show port 
        module vem 3 execute vemcmd show pinning 

UCS VM-FEX
     Creates the same type of DVS in VMWare as the N1Kv does
        (can also run on KVM and Hyper-V in UCS 2.1)

    The UCS FIs act as the VSM for the Control and Mgmt. planes

    Virtual Ethernet Modules (VEMs) are used for the data plane.
         

Adapter-FEX
   Used to extend an N5K down to pizza-box C-Series rack-mount servers
        Specifically, to FEX the P81E (Palo) or VIC1225 PCIe CNA
        Creates vEth and vFC ports in the N5K

    2 10GE SFP physical ports on the PCIe card, each with 2 channels, breaking out into 4 logical channels
        Port 1, Channel 1 = Ethernet with F/O to physical port 2
        Port 1, Channel 2 = HBA0 (no F/O, multipathing software needed)
        Port 2, Channel 3 = Ethernet with F/O to physical port 1
        Port 2, Channel 4 = HBA1 (no F/O, multipathing software needed)

Can also use UCS Manager to manage your C-Series 
    Requires a pair of N2232PP FEXs to act as the "IOM" in a blade chassis
    
        UCS 2.0 requires 4 cables:
        two 1GE cables connect from the C-Series LOM to the 2232 FEX to provide the OOB control and mgmt. 
        plane
    
    In UCS 2.1, single-wire mgmt. means a single pair of 10GE cables from the C-Series SFP ports to the
    2232 FEX provides both the mgmt. and control planes.


N1Kv Topologies 

The VSMs in this example are running on Nexus 1110 hardware appliances that can run multiple VMs for vWAAS, VSG, vASA, or backup VSMs.

Can also run the VSMs on a virtual server alongside the VEMs


Logical view of the Eth and vEth ports for the Nexus 1Kv


The Eth ports are your northbound links to the N5K switches, and the vEth ports are assigned to VM hosts. If a VM moves from one blade to another (to another VEM) it keeps the same vEth port, so the configuration on the vEth port doesn't have to be redone.


CCIE DC UCS QoS and Network Control

posted Jul 16, 2014, 8:16 PM by Rick McGee

Flow control policies 

Determine whether the uplink Ethernet ports in a Cisco UCS domain send and receive IEEE 802.3x pause frames when the receive buffer for a port fills.


Network Control Policies 
 Have to turn on CDP, which is off by default, and allow MAC Security; otherwise you'll have issues with VM hosts coming up through the Fabric Interconnects to the rest of the network.

You can now apply this Network Control Policy to the vNIC template


QoS Policies 

First look at the QoS System Class under LAN Cloud.
As you can see, the Best Effort and Fibre Channel classes are already created. Also, the Platinum, Gold, Silver, and Bronze classes are all off by default. 
Once you enable them they will get assigned a Weight % as you see in the above example. The Weight and Weight % apply to the ingress port for this QoS system class. You can also change the MTU that is assigned, and designate a class for iSCSI traffic.


Apply the policy in the SP Template to the correct vNIC interfaces that we assigned as Mgmt. vNICs.

CCIE DC UCS Associating Service Profile with Server Blades

posted Jul 14, 2014, 11:42 AM by Rick McGee   [ updated Jul 15, 2014, 5:59 PM ]

From the ESXi Service Profile you can create a template from that service profile, and then you can build service profiles really quickly, as long as they are similar SPs, for instance an ESXi SP template or a Windows 2008/2012 Server template. You can also clone an SP from another SP. 
These will be determined by the NIC adapter and HBA policies for each individual OS.




This example shows an SP template for ESXi that is an updating template. An updating template has the behavior that once the template is updated, all SPs that were created from it will be updated as well. This may or may not cause the server to reboot, so changes to the template should be done after hours during a maintenance window.

Once you have created the SPs from the template, you will not be able to go into each SP and make changes; however, you'll be able to make changes to the template itself.

You can unbind the SP from the updating template with the following:
After you click this, you'll be able to make changes directly to the SP.


In the example the SPs have already been associated with blades from the server pools.
As you can see from this example, SP_ESXI-2 has been associated with chassis-1/blade-1.
The FSM is the Finite State Machine that shows the process of the blade being associated with the Service Profile. You can also disassociate the SP from the blade and manually assign it to another blade.


You can KVM to the device, and after a few boot cycles it will show the IOSLINUX image being loaded.
In this example we are KVM'ing from the Service Profile; you can also KVM from the blade itself.


This IOSLINUX makes all the changes to the BIOS and blade components such as the mezzanine adapters, vNICs, and vHBAs.


From the KVM window you would pick the Virtual Media tab and map a Virtual CD drive that has the ESXi installer.
Click "Add Image" and browse to the installer file and check mapped.

You may have to reboot the server once the CD is mapped, then hit F6 for the Boot Menu from the BIOS and enter the CD setup.

Arrow down to the Cisco Virtual CD/DVD drive


You'll see the following

After the installer package unpacks all the contents of the file, you'll have to choose a disk to install the hypervisor on. You can install to a local disk or a SAN disk device. If you have a SAN disk device, you have to make sure the proper zoning is configured on your MDS switches.

The SEAGATE is a SAN disk in a JBOD. Arrow down to that drive and hit Enter.

ESXi will ask what keyboard layout you want to use and for a root password.


You'll see the VMWare status for the installation if it was able to write to the drive.
After installation is complete, hit F2 to set the Management IP Address used to manage the VMWare hypervisor.

After this is done, assuming you have Layer 3 connectivity, you can go to your desktop and download the vSphere Client.


On your desktop, install the application and run the vSphere Client.


Through this GUI mgmt. you would create all of your LAN and related settings for that particular VMWare host.

Here you would add another vmnic for failover for the mgmt. network.
Click on Properties 
Click Add...

As you can see, it sees all the vNICs that we created as part of the Service Profile. Click the box for vmnic1 to add it for the mgmt. ports and click next.

Here it shows the purpose of the vmnics on the Mgmt. vSwitch.

Highlight the vSwitch and click edit to configure the hashing algorithm to use for the two vmnics.


Only use "Route based on the originating virtual port ID". Click OK and you'll now see the following.

It added the second vmnic to the mgmt. network. 

Next click on Add Networking to create another vSwitch
Click VMkernel 

Choose vmnic2 and vmnic3 and click next...
Label the network and enter the VLAN number used for vMotion.

Assign it an IP address from the subnet. Click next and then finish.
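If you'd rather script this step from the ESXi shell than click through the GUI, the classic esxcfg tools do the same thing; the vSwitch name and IP below are hypothetical values for illustration (VLAN 115 matches the lab's vMotion VLAN):

esxcfg-vswitch -a vSwitch1                      # create the vSwitch
esxcfg-vswitch -L vmnic2 vSwitch1               # link an uplink to it
esxcfg-vswitch -A vMotion vSwitch1              # add the vMotion port group
esxcfg-vswitch -v 115 -p vMotion vSwitch1       # tag the port group with VLAN 115
esxcfg-vmknic -a -i 10.1.115.11 -n 255.255.255.0 vMotion   # add the vmkernel NIC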

You'll now see the vSwitch; click on Properties and edit the vSwitch.
Move vmnic3 down to the Standby Adapters group. You really wouldn't need the second vmnic; it all depends on how your VMWare administrator likes the vmnics presented to VMWare. Click OK.

Click Add Networking again to create the last vSwitch for the Virtual Machines.

Click Virtual Machine and next....
Add the final two vmnics and click next.....
Click Next... This will be deleted later and replaced by the Cisco Nexus 1000V

Check the Storage Adapters settings.

Make sure the WWNs match each target....

Look at the Storage section for the Data Store...



Use the template for vCenter.
Use the provisioning as you normally would for your network; thin provisioning works best for lab setups.

You would next set up the second VMWare host with the same vSwitches as on host one. You would want to rename the datastore on VMWare host two to DataStore2.






Hint: You may run into issues if you use the same ISO file for both installs; just make a copy of the file, name it v2, and you'll be able to install two instances of VMWare at the same time.





CCIE DC UCS Service Profiles.....

posted Jul 3, 2014, 10:12 PM by Rick McGee   [ updated Jul 9, 2014, 8:27 PM ]

Topology 

1.) Identify Service Profile

The pool was created already in a previous lab. If you didn't create the pool you could invoke the creation from this page.

2.) Storage

No local storage has been selected; you could choose RAID 0 or RAID 5. This is what we configured previously as part of the server hardware capabilities.

Next, click expert mode
Create vHBA1 for FI-A and choose the VMware adapter policy for the vHBA

Create the QoS policy for FC

This is what it should look like in the end
Name vHBA1 WWPN from UCS-ESX1-FABA and the VMWare Adapter policy and FC QoS Policy

You would now create vHBA2 for FI-B, and the final result is as follows.

3.) Networking (create vNICs 1 and 2 for the VMKernel)
Click Expert and Add to create the vNICs.

Pick the proper pool of MAC addresses (UCS-1-ESXi).
Choose Fabric A and the VLANs this vNIC will be associated with.

Choose the Adapter Policy VMWare


Create VNIC1 (FAB-A) for VMKernel MAC Address Pool UCS-1-ESXi, VLAN Default (native) 110-115, and Adapter policy VMWare

Create VNIC2 (FAB-B) for VMKernel MAC Address Pool UCS-1-ESXi, VLAN Default (native) 110-115, and Adapter policy VMWare

Create VNIC3 (FAB-A) for VMotion MAC Address Pool UCS-1-ESXi, VLAN 115, and Adapter policy VMWare

Create VNIC4 (FAB-B) for VMotion MAC Address Pool UCS-1-ESXi, VLAN 115, and Adapter policy VMWare

Create VNIC5 (FAB-A) For VMWare hosts MAC Address Pool UCS-1-ESXi, VLANS 110-114, and Adapter policy VMWare

Create VNIC6 (FAB-B) For VMWare hosts MAC Address Pool UCS-1-ESXi, VLANS 110-114, and Adapter policy VMWare

You could create VNIC 7 and 8 for NFS Data Stores as well


4.) vNIC/vHBA Placement

This is going to be dependent on the server OS that will reside on the blade itself. For VMWare the order doesn't really matter, but for Windows servers you should pay particular attention to the order of the vNICs and vHBAs.

5.) Create Boot Policy

The boot order will start with Boot From SAN first, with vHBA1 as the primary.
Create vHBA2 for the secondary.

Create a boot target primary under vHBA1 (DISK or LUN pWWN) to boot from.
Create another boot target secondary under vHBA2 (DISK or LUN pWWN) to boot from.

Final configuration should look as follows

Finally pick the Boot Order Policy you just created and click next

6.) Maintenance Policy
I didn't choose any Maintenance Policy

7.) Server Assignments 

Choose one of the Server Pools that I created earlier and click Next

8.) Operational Policies 

Choose the BIOS Policy LoudBoot and the IPMI SoL-115200
Choose the Pooled option

 Choose the default option


 Choose the default option

Choose WipeBIOS

And Click Finish 

You'll see the final configuration; make sure it's correct and then click Yes.



You'll see the FSM associating the Service Profile to a blade from the server pool.




CCIE DC UCS Configuring Pool and Profiles

posted Jul 1, 2014, 2:06 PM by Rick McGee   [ updated Jul 2, 2014, 3:25 PM ]

Where to start? I would think the most logical first step would be to configure the:

1.>Management IP Pool....

 Give the pool the starting IP address and then the size, subnet mask, and finally the default gateway.

As you can see from the following output, blades are being assigned to the Mgmt. IP address pool. These shouldn't change when you move the Service Profile from one blade to the next.

2.>Next Create the UUID pools


You'll notice none of the UUIDs are assigned; this will be done during the creation of the Service Profile.

3.> Create the MAC Address Pool
I don't show it here, but I named the pool UCS1-ESXi. The pool size has to be a multiple of 16, so I placed the size value at 128.
As you can see, none of the MAC addresses are assigned to blades via a service profile.


4.> Create the nWWN/WWNN
In the address, hex digits 2, 3, and 4 are the ones you can change. The first digit can only be 2 or 5 (other values are reserved), so you want to keep it at 2. Those digits can be changed depending on the needs of the organization; the next 3 bytes are the OUI, which you cannot change; and the last 3 bytes can be changed according to your organization. I kept it as 01 for UCS Domain 1 and 01 to signify it's an ESXi host.

5.> Create the pWWN/WWPN

Here you have to create a pool for Fabric A and Fabric B

UCS1-ESXi-FabA

Here I have the naming as follows:

20:A1 (FI A):00:25:B5:01 (UCS Domain 1):01 (ESXi Host):00 for a size of 128

Do the same for FI B and use the naming as follows:

20:B1:00:25:B5:01:01:00

You should now see this


6.> Create BIOS Policy 

Disable Quiet Boot to see the full startup screens. Don't check "Reboot on BIOS Settings Change"; this would reboot the servers associated with this BIOS policy.

Here you can change the number of CPU cores used and various other CPU options.












Click Finish and you'll see the LoudBoot Option in the root tree

7.> Boot Policy 

I like to leave it at the defaults.

8.>IPMI (Intelligent Platform Mgmt. Interface)


9.> LOCAL Disk Config Policies 
You'll notice that it has Protect Configuration checked; this will preserve the RAID settings if the disk is disassociated from a service profile. You have to be cognizant of the Scrub Policy setting as well.


You have the following RAID Options

10.> Create Scrub Policy 
This will not scrub anything.

You can also set it up to scrub just the BIOS, or everything.



11.> Schedules 
  
  Here you can create a one-time occurrence or a recurring maintenance schedule.
This is an example of a recurring schedule. In the above example, if the tasks are not completed within 2 hours the server will cancel the tasks and reboot.


12.> Maintenance Policy 
This was based on the schedule that was created before.

13.> Power Caps (this is optional) 
You can assign Power Caps to particular servers; for instance, you can have all your high-priority servers in PowerCap 1 so they will always be powered on if there are power supply failures, or PowerCap 10 for servers that may be for testing purposes only.

14.> Server Pools (Optional)
            You can create Static Pools to which you can manually assign Blade Servers

Here you would pick what blades are to be part of this static pool

You can also create pools for Dynamic Allocation.
The above example shows a pool for servers that have a VIC CNA and 32GB of memory.

15.> Server Pool Policy Qualifications (Optional) 

Here you can state that it has to be a Virtualized VIC

You can specify which chassis you want these qualifications to apply to.

Here you would specify the amount of memory and/or clock speed

Here you can specify the number of cores etc...

 Here you can specify if the blade is diskless, etc....


16.> Server Pool Policy 
So this ties together the Server Pool Qualifications and the Server Pools.

Here you would pick the Target Pool that was configured previously.

Here you would pick the Qualifications that were configured previously.


CCIE DC UCS Addressing Recommendations

posted Jun 24, 2014, 10:09 PM by Rick McGee   [ updated Jun 25, 2014, 11:37 AM ]

Server UUID Addresses
        Don't change the PREFIX
        Allocate 512 suffixes to the default pool

LAN and SAN 
    Use pools whenever possible 
    UCS does allow you the option of using the actual hardware-derived addresses
        This is not recommended; it can lock you into the blade you are currently running the OS
        on, and requires intervention for upgrades and moves, adds, and changes.

    Pools are required for Palo mezzanine cards (M81KR, VIC1240, and VIC1280)
    
    Use pools that are multiples of 16, and less than 128 addresses each

MAC Addresses
    Format is a 3 byte OUI, 3 byte Ext ID
        00:25:B5: XX:XX:XX

    Good practice is to use an Ext ID scheme in different pools that makes it easy to identify
    UCS Domain/Blade/OS-Type (see the worked example after this list)
        XX:YZ:ZZ
        XX = UCS Domain (up to 255 domains with 40,800 blades)
        Y = OS type / pool #
            1= ESXi
            2=HyperV
            3=Win2k8 bare
            4=Win2k12 bare
            etc.....
        Z:ZZ = leave open for dynamic population 
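    As a worked example of this scheme (the values are an illustration, not from the lab): a pool for UCS
    domain 1, ESXi (OS type 1) might start at 00:25:B5:01:10:00 and run for 128 addresses, leaving the
    low digits free for dynamic population.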

SAN nWWN/WWNN Addresses 
    One nWWN per blade 
    Don't overlap nWWNs and pWWNs 
        You can do this, but it becomes impossible to troubleshoot
     The idea is to use "F" or "FF" in Section 2 of the address

WWN Addressing 4 Sections

    WX:XX:YY:YY:YY:ZZ:ZZ:ZZ

        W= Section 1
            It is either a 1, 2, or 5
            1 is the older "standard" format 
            2 is the newer "extended" format (this is what you want to use)
            5 is the "registered" format (typically found in disk array enclosures)

       X:XX = Section 2 
            Cisco recommends using this to denote whether this is an nWWN or a 
            pWWN, and which fabric the pWWN is on 
            0:FF = nWWN
            0:A1 = pWWN on Fabric A, VSAN 1
            0:B1 = pWWN on Fabric B, VSAN 1
    
    YY:YY:YY = Section 3 = OUI, DON'T CHANGE

    ZZ:ZZ:ZZ = Section 4, Ext ID 
        Configure this the same as you would for the MAC addresses 

SAN pWWN/WWPN
    MAC addressing and nWWN/pWWN addressing are similar in the back-half of the 
    structure.

    3 byte OUI, 3 Byte Ext ID
    Major difference is that pWWN/nWWN add on a 2 byte prefix

Boot-From-SAN
    Disable quiet boot in BIOS

    Will show the pWWN we are booting from 

    Boot order should always have SAN ordered first 
        No PXE or CDROM before vHBA1

    For Win2k8 on bare metal 
        No secondary vHBA/target (add it after installation is complete) and no local disks at 
        all in the boot order 
        
        The target LUN/volume must already have a GPT (GUID Partition Table) 

ESXi vNICs 
    Create 8 vNICs 
        2 for the vmkernel using an Active/Active load-balancing vSwitch
        No fabric failover in UCS; A/A LB takes care of that
        
        2 for vMotion using Active/Passive load balancing
            Used by vSwitch1, no Fabric Failover in UCS
                Remember that you would want this on the same VLAN/subnet; vMotion is 
                not supported over a Layer 3 network 
            Preference would be to use one vNIC with Failover checked

        2 vNICs for VMs 
            Possibly teamed using Active/Active, possibly using NICs completely separate
            from one another (separate vSwitches or DVS in ESXi)

    2 vNICs for future use
            You never know when you might need them
        
    Some of this may change if you use the Nexus 1000v 
            You will want to use 2 vNICs for VMs in vPC-HM using MAC Pinning
        
    Win2k8 on bare metal 
        1 vNIC using UCS Fabric Failover 
        2 vNICs if you wish to have disjoint L2 LANs for backup or such..

    Native VLAN (in UCS Manager, any traffic without an 802.1Q header goes into the native VLAN)
        To check, or not to check?
            Win2k8 on bare metal: you want to check Native VLAN

            ESXi: don't check unless using VLAN 0/untagged; otherwise ESXi expects 
            dot1q-tagged frames 
                ESXi uses VLAN 0 for its management VLAN; you can change it to another VLAN

CCIE DC UCS Server Pools

posted Jun 24, 2014, 8:57 AM by Rick McGee

Pools are Pools of Data
    Data Includes
        Management IP Pools
        UUID Pools
        MAC Addresses 
        WWNN/WWPN Pools
        Server Pools and Membership

Server Pools 
    Pools of Blades
        Can assign Service Profiles to a Server pool instead of individual blade
    
    Blades can exist in multiple pools

    Blades are assigned to Server Pool either manually or by a Pool Policy 

    Pool Policy states that if a blade meets certain criteria, it will get assigned to a particular pool
        This is determined based on:
            CPU model
                Number of Cores
            Memory 
                Speed and/or amount
            CNA/VIC adapters
   
    Create the Pool Policy before physically adding or discovering server blades; that way, as they come 
    on-line, you can manually acknowledge blades getting assigned to a server pool and service 
    profile

Server Pool Policy creation:
    Create an empty Server Pool

    Create the Server Pool Policy Qualifications
        Memory, CPU, CNA

    Finally create your Server Pool Policy that ties both together 
    

CCIE DC UCS Storage

posted Jun 23, 2014, 9:34 PM by Rick McGee

Fabric Interconnects Modes

    End Host (NPV) (default and recommended)
        FCoE is performed from the Server/Chassis up to the FIs 
        
        FCoE is decapsulated, and FLOGIs are turned into FDISCs and 
        transmitted to the upstream FC switch F-Port running in NPIV
        mode 
        
        FCoE northbound of the FIs is supported in 2.1 (Del Mar)

    FC Switching Mode 
        Limited use 
        
        No zoning configuration in 2.0 - gets zoning from the
        upstream switch (zoning is supported in UCS Mgr. 2.1)
        
        For UCS 2.0 the upstream switch must be an MDS or Nexus 5K to 
        get zoning
    
        FC Switching is necessary for storage array direct connect

        Designed for very small scale or demo environments 

    HBAs and pWWNs 
        Presented as 2 normal HBAs to the OS using standard PCIe virtualization 

        Standard OS-level multipathing software 
            PowerPath/DMP/MPIO
            No hardware failover (standard for a Fibre Channel network)

        WWN/pWWN
            Emulex and Qlogic have Burned-In Addresses (BIA)

            M81KR/PALO/VIC don't have a BIA

            There are three options for pWWN usage:
                Derived from the BIA (only for Emulex or Qlogic)
                Manually assigned 
                From a pool 

        VSAN Trunking and Port Channels 
                Can have an ISL with multiple VSANs heading northbound out of the FI to the FC fabric
                
                Cannot limit which VSANs head northbound out of the FI (they all do) in 2.0
                
                Can limit which VSANs are allowed to traverse the trunk on the MDS/N5K
                side 
                
                Limit of 32 VSANs per UCS system

                Port Channels can and should be used with trunking

                Can use up to 16 FC interfaces in a Port Channel 
                    Hashing algorithm uses Src, Dst, and Exchange ID (OxID, originator exchange identifier) - same as the
                    N5K
                    Non-configurable 

                Both Port Channeling and Trunking work in both NPV and FC Switching modes
                    (remember in FC Switching mode you need an MDS or N5K for F-PortChannel mode)
                        If you don't use Port-Channeling, the vHBAs would need to re-FLOGI on a link failure
                        Need to enable "feature fport-channel-trunk" on the MDS
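A minimal sketch of the MDS side of that F-port-channel, using the lab's fc1/7-8 interfaces; the port-channel number is an assumption, and VSAN/trunk-allowed configuration is omitted:

conf t
feature npiv
feature fport-channel-trunk
interface port-channel 2
channel mode active
switchport mode F
interface fc1/7-8
channel-group 2 force
no shutdown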

            Direct Connect FC or FCoE Storage 
                Must be in FC Switching mode
                    Remember FC and Ethernet modes are independent 

            Still must have an upstream FC/FCoE switch to configure zoning (in 2.0)
                This is the reasoning behind the need for an MDS or N5K for zoning

            Would only use this if you have to transition between FC and FCoE 
            
            Switching will be local on the FIs and they do get an FC domain ID
                
Configuration 

Topology 


Click on the SAN tab in UCS Manager 
    You'll see the following 
You'll see all the FC interfaces that we configured in the beginning. 

Click on SAN Cloud and then SAN Uplinks Manager

You'll notice that the Uplink mode is End Host, not Fibre Channel Switching.

On the MDSs, make sure the configuration for the ports is correct:

show run int fc1/7-8


Check the Port-Channel settings:
show run int po2
You'll see under Po2 that the switchport mode is forced to F. Also, the port-channel summary shows the total ports equal to two and the first port in the Po as fc1/7.

show run | in feature (to make sure the correct features are enabled)


Back in UCS Manager, click on the SAN tab, click on SAN Cloud, right-click on Fabric A, and click Create Port Channel.


Name the Port-Channel
 
Give it a name Po1 with an ID of 11

Choose Ports and click finish, and repeat for FI-B

Next enable Port Channel

You'll now see that all Port-Channels are up and Enabled


VSAN Creations 

You can see, as with the LAN tab, that you have the same VSAN options, where you can create VSANs independently on FI-A or FI-B, or Common/Global for both.

You also have some pre-defined policies that get assigned to the vHBAs, and you can also create your own if needed.


You can do a "show flogi database" to make sure the FIs have logged into the fabric correctly, and you can see the WWPN from the Equipment tab: click on FI A, click on Physical Ports, and finally click on Uplink FC Ports.

CCIE DC UCS LAB Pinning

posted Jun 19, 2014, 7:55 PM by Rick McGee

How does Pinning work?

If you look at the image below, pinning can be either dynamic, or static via a pin group that is configured in UCS Manager under the LAN tab.



In this example vNIC1 (blue) is dynamically pinned to the port-channel between IOM A and FI-A that goes up to N5K1. vNIC2 (green) is statically pinned to a single port from FI-A to N5K2. So if N5K1 has to talk to MAC 00B1, the traffic has to traverse to N5K2 and down that particular port to FI-A.

Remember that all FIs in End Host Mode (EHM) act as end hosts to the upstream switches, so no MAC learning takes place north of the FIs.

Broadcast/MultiCast

Each FI picks a port (which could be a Port-Channel) to be the Multicast/Broadcast receiver port. This is not configurable. In the above example the orange ports are the Broadcast/Multicast receiver ports; if a Broadcast/Multicast frame is received on a port other than the assigned Broadcast/Multicast port, it will be dropped.

You can see what the Multicast/Broadcast receiver port is with the commands

connect nxos
show platform software enm internal info vlandb all


Deja Vu Check 

Also in the above example, if FI-A sends out a Broadcast/Multicast packet sourced from vNIC2 (green) and it is received back on the same (blue) port-channel, the Deja Vu check will drop the packet, because FI-A has seen it before - it was sent by vNIC2 (green).


RPF (Reverse path forwarding Check)

If traffic from MAC 00A1 tries to come in via N5K2 to FI-A, it will fail the RPF check and be dropped. MAC 00A1 is only allowed in on the N5K1 (blue) port-channel.

Static Pinning 

If vNIC1 (blue) is statically pinned to the port-channel from FI-A and that port-channel fails, it will not fail over to another port on FI-A even if all the other ports on FI-A are operational; if you configured Failover on the vNIC, it will fail over to FI-B.
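To see how the server interfaces ended up pinned to border ports on the FI, there are NX-OS show commands for this; run them from the FI's NX-OS shell:

connect nxos
show pinning server-interfaces
show pinning border-interfaces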

