
CCIE DC UCS Configuring Pool and Profiles

posted Jul 1, 2014, 2:06 PM by Rick McGee   [ updated Jul 2, 2014, 3:25 PM ]
Where to start? I would think the most logical first step is to configure the

1.> Management IP Pool

Give the pool the starting IP address, then the size, subnet mask, and finally the default gateway.

As you can see from the following output, blades are being assigned addresses from the management IP pool. These addresses shouldn't change when you move a Service Profile from one blade to the next.
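The same pool can also be built from the UCS Manager CLI. This is just a sketch against the default ext-mgmt pool; the addresses below are made-up examples, and the exact block syntax can vary between UCSM releases, so verify it against your version:

```
UCS-A# scope org /
UCS-A /org # scope ip-pool ext-mgmt
UCS-A /org/ip-pool # create block 192.168.10.10 192.168.10.41 192.168.10.1 255.255.255.0
UCS-A /org/ip-pool # commit-buffer
```

The block arguments should correspond to the same fields as the GUI wizard: first address, last address, default gateway, and subnet mask.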

2.> Next, create the UUID Pools


You'll notice none of the UUIDs are assigned; this will be done during the creation of the Service Profile.
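If you prefer the CLI, a UUID pool sized for 128 suffixes would look roughly like this (the pool name is my assumption; check the block syntax on your UCSM release):

```
UCS-A# scope org /
UCS-A /org # create uuid-suffix-pool UCS1-ESXi
UCS-A /org/uuid-suffix-pool # create block 0000-000000000001 0000-000000000080
UCS-A /org/uuid-suffix-pool # commit-buffer
```

Hex 0080 is 128 decimal, so this block matches a pool size of 128.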

3.> Create the MAC address Pool
I don't show it here, but I named the pool UCS1-ESXi. The block size has to be a multiple of 16, so I set the size to 128.
As you can see, none of the MAC addresses are assigned to a blade via a service profile yet.
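A rough CLI equivalent for the same 128-address MAC block, using Cisco's default 00:25:B5 prefix (verify the syntax on your release):

```
UCS-A# scope org /
UCS-A /org # create mac-pool UCS1-ESXi
UCS-A /org/mac-pool # create block 00:25:B5:01:01:00 00:25:B5:01:01:7F
UCS-A /org/mac-pool # commit-buffer
```

00 through 7F is 128 addresses, matching the pool size above.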


4.> Create the nWWN/WWNN
The WWNN is eight bytes, and you change it byte by byte. The first hex digit can only be 2 or 5 (the other values are reserved), so you want to keep it at 2. The second byte can be changed depending on the needs of the organization, the next 3 bytes are the OUI, which you cannot change, and the last 3 bytes can be changed according to your organization. I kept them as 01 for UCS Domain 1 and 01 to signify it's an ESXi host.
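A sketch of the same WWNN pool from the CLI; the pool name is my assumption, and the node-wwn-assignment keyword should be checked against your UCSM release:

```
UCS-A# scope org /
UCS-A /org # create wwn-pool UCS1-ESXi node-wwn-assignment
UCS-A /org/wwn-pool # create block 20:00:00:25:B5:01:01:00 20:00:00:25:B5:01:01:7F
UCS-A /org/wwn-pool # commit-buffer
```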

5.> Create the pWWN/WWPN

Here you have to create a pool for Fabric A and Fabric B

UCS1-ESXi-FabA

Here I have the naming as follows

20:A1 (FI A):00:25:B5:01 (UCS Domain 1):01 (ESXi host):00, for a size of 128

Do the same for FI B and use the naming as follows

20:B1:00:25:B5:01:01:00

You should now see this
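The two pWWN pools can be sketched in the CLI the same way, one per fabric (again, verify the port-wwn-assignment keyword and block syntax on your release):

```
UCS-A# scope org /
UCS-A /org # create wwn-pool UCS1-ESXi-FabA port-wwn-assignment
UCS-A /org/wwn-pool # create block 20:A1:00:25:B5:01:01:00 20:A1:00:25:B5:01:01:7F
UCS-A /org/wwn-pool # commit-buffer
UCS-A /org/wwn-pool # exit
UCS-A /org # create wwn-pool UCS1-ESXi-FabB port-wwn-assignment
UCS-A /org/wwn-pool # create block 20:B1:00:25:B5:01:01:00 20:B1:00:25:B5:01:01:7F
UCS-A /org/wwn-pool # commit-buffer
```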


6.> Create BIOS Policy 

Disable Quiet Boot to see the full startup screens. Don't check Reboot on BIOS Settings Change, as this will reboot the servers associated with this BIOS policy.

Here you can change the number of CPU cores used, along with various other CPU options.

Click Finish and you'll see the LoudBoot Option in the root tree

7.> Boot Policy 

I like to leave it at the defaults.

8.>IPMI (Intelligent Platform Mgmt. Interface)


9.> LOCAL Disk Config Policies 
You'll notice that it has Protect Configuration checked; this will preserve the RAID settings if the blade is disassociated from a service profile. You also have to be cognizant of the Scrub Policy setting.


You have the following RAID Options
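As a rough CLI sketch, a mirrored-RAID local disk policy with Protect Configuration on might look like this (the policy name is my assumption; option keywords vary by UCSM release):

```
UCS-A# scope org /
UCS-A /org # create local-disk-config-policy UCS1-ESXi-Local
UCS-A /org/local-disk-config-policy # set mode raid-mirrored
UCS-A /org/local-disk-config-policy # set protect yes
UCS-A /org/local-disk-config-policy # commit-buffer
```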

10.> Create Scrub Policy 
As configured here, this policy will not scrub anything.

You can also set it up to scrub just the BIOS, or everything.
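A do-nothing scrub policy like the one shown would look roughly like this in the CLI (name assumed):

```
UCS-A# scope org /
UCS-A /org # create scrub-policy NoScrub
UCS-A /org/scrub-policy # set disk-scrub no
UCS-A /org/scrub-policy # set bios-settings-scrub no
UCS-A /org/scrub-policy # commit-buffer
```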



11.> Schedules 
  
Here you can create a one-time occurrence or a recurring maintenance schedule.

This is an example of a recurring schedule. In the above example, if the tasks are not completed within 2 hours, the server will cancel the tasks and reboot.


12.> Maintenance Policy 
This was based on the schedule that was created before.
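For reference, a maintenance policy can also be sketched from the CLI; user-ack is shown here, and tying the policy to a schedule instead would use the timer-automatic reboot policy (name assumed; verify keywords on your release):

```
UCS-A# scope org /
UCS-A /org # create maintenance-policy UserAck
UCS-A /org/maint-policy # set reboot-policy user-ack
UCS-A /org/maint-policy # commit-buffer
```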

13.> Power Caps (this is optional) 
You can assign Power Caps to particular servers. For instance, you can put all your high-priority servers in Power Cap group 1 so they always stay powered on if there are power supply failures, and use group 10 for servers that may be for testing purposes only.

14.> Server Pools (Optional)
You can create Static Pools to which you manually assign blade servers.

Here you would pick what blades are to be part of this static pool

You can also create pools for Dynamic Allocation 
The example above shows a pool for servers that have a VIC CNA and 32GB of memory.
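A static pool can be sketched from the CLI by adding chassis/slot entries by hand (the pool name and slots here are made-up examples):

```
UCS-A# scope org /
UCS-A /org # create server-pool UCS1-Static
UCS-A /org/server-pool # create server 1/1
UCS-A /org/server-pool # create server 1/2
UCS-A /org/server-pool # commit-buffer
```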

15.> Server Pool Policy Qualifications (Optional) 

Here you can state that it has to be a Virtualized VIC

You can specify which chassis you want these qualifications to apply to.

Here you would specify the amount of memory and/or clock speed.

Here you can specify the number of cores etc...

Here you can specify if the blade is diskless, etc.
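The memory qualification above could be sketched in the CLI roughly as follows; I'm assuming the qualifier name, and the sub-mode and option keywords (memory-qual, mincap) should be double-checked against your UCSM release:

```
UCS-A# scope org /
UCS-A /org # create server-qual UCS1-32GB-VIC
UCS-A /org/server-qual # create memory-qual
UCS-A /org/server-qual/memory-qual # set mincap 32768
UCS-A /org/server-qual/memory-qual # commit-buffer
```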


16.> Server Pool Policy 
So this ties together the Server Pool Qualifications and the Server Pools.

Here you would pick the Target Pool that was configured previously.

Here you would pick the Qualifications that were configured previously.
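A rough CLI sketch of the pooling policy that ties the two together (the names are assumptions carried over from the earlier steps; verify the command and keywords on your release):

```
UCS-A# scope org /
UCS-A /org # create pooling-policy UCS1-PoolPolicy
UCS-A /org/pooling-policy # set pool UCS1-Static
UCS-A /org/pooling-policy # set qualifier UCS1-32GB-VIC
UCS-A /org/pooling-policy # commit-buffer
```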

