
Cisco UCS Quick-Start Guide October 4, 2013

Posted by ramnathgr in Uncategorized.



Another deeper look at deploying Nimble with Cisco UCS October 4, 2013

Posted by ramnathgr in UCS, VMware.


We continue to get customer inquiries on the specifics of deploying Nimble with Cisco UCS – particularly on what the service profile should look like for iSCSI vNICs.  So here we go, we will dive straight into that bad boy:

We will start with the Fabric Interconnect, then the vNICs, then the Nimble array, and last but not least, the vSphere vSwitch.

1.  Fabric Interconnect

  • Configure cluster for the FI
    • The FIs should be configured in cluster mode, with a primary and subordinate (clustering of FI does NOT mean data traffic flows between the two – it is an active/passive cluster with management traffic flowing between the pair)
    • Configure appliance ports
      • The ports connected to the Nimble data interfaces should be configured in appliance port mode – why, you may ask?  Well, prior to the UCSM 1.4 release, the ports on the FI were plain Ethernet ports that would receive broadcast/multicast traffic from the Ethernet fabric.  Appliance ports are designed specifically to accommodate Ethernet-based storage devices such as Nimble, so those ports don’t get treated as just another host/VM connected to an Ethernet uplink port
      • Here’s what ours look like for each FI (under “Physical Ports” for each FI in the “Equipment” tab)
  • FI-A (connect one 10G port from each controller to the FI-A)
  • FI-B (connect remaining 10G port from each controller to FI-B)
2.  vNIC (it’s important to pin the iSCSI vNICs to a specific FI)

In our service profile, we have two vNICs defined for iSCSI traffic, and each vNIC is pinned to a specific FI.

Here’s what the vNIC setting looks like for each vNIC dedicated for iSCSI (under “General” tab):

We use VLANs 27 & 28, representing the two subnets we have.

Why didn’t we check “Enable Failover”?  Simply put, we let the ESX SATP/PSP handle failover for us.  More on this topic is discussed in my joint presentation with Mostafa Khalil from VMware.
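For reference, and strictly as a sketch (whether your Nimble volumes are claimed by the ALUA SATP is an assumption here – verify against Nimble’s docs for your ESXi version), setting the default path selection policy to round robin from the ESXi shell looks like this:

```shell
# Make round robin the default PSP for devices claimed by the ALUA SATP
# (assumption: the Nimble volumes are claimed by VMW_SATP_ALUA)
esxcli storage nmp satp set --satp=VMW_SATP_ALUA --default-psp=VMW_PSP_RR

# Confirm which SATP/PSP each device ended up with
esxcli storage nmp device list
```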

3.  Nimble Array

Notice we have subnets 127 & 128?  Why, you may ask – that is so we can leverage both FIs for iSCSI data traffic.


4.  vSphere vSwitch

We will need two VMkernel ports for data traffic, each configured on a separate subnet to match our design.  You could use either a single vSwitch or two vSwitches.  Note that if you use a single vSwitch, the NIC teaming policy for each VMkernel port must be overridden as shown below:
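From the ESXi shell, the same override can be sketched like this (the port group and vmnic names follow our example and are assumptions for your setup):

```shell
# Pin each iSCSI VMkernel port group to a single uplink; the uplink
# not listed as active is taken out of rotation for that port group
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=iSCSI-A --active-uplinks=vmnic1
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=iSCSI-B --active-uplinks=vmnic2
```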

How the hell do I know vmnic1 & vmnic2 are the correct vNICs dedicated for iSCSI?  Please don’t share this secret 🙂  If you click on “vNICs” under your service profile/service profile template, you get to see the “Desired Order” in which they will show up in ESX – remember, ESX assigns this based on the PCI bus number.  A desired order of “1” will show up as vmnic0, so our vNIC iSCSI-A with a desired order of “2” will show up as vmnic1, and so forth with vNIC iSCSI-B.
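In other words, the mapping is just an off-by-one between UCSM’s 1-based desired order and ESXi’s 0-based vmnic numbering – a quick sanity check:

```shell
# UCSM "Desired Order" N shows up in ESXi as vmnic(N-1)
desired_order=2                        # our vNIC iSCSI-A in this example
vmnic="vmnic$(( desired_order - 1 ))"
echo "$vmnic"                          # prints vmnic1
```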


VMware or Microsoft? Comparing vSphere 5.5 and Windows Server 2012 R2 Hyper-V At-A-Glance October 3, 2013

Posted by ramnathgr in Uncategorized.



Hyper-V Server 2012 or Windows Server 2012 with Hyper-V? October 3, 2013

Posted by ramnathgr in Uncategorized.

There has been a lot of talk about Hyper-V in its 2012 editions. One of the questions that I hear on a regular basis is about which version of Hyper-V to use. Usually the conversation goes something like this.

“So Microsoft makes Hyper-V Server 2012 and Windows Server 2012 with Hyper-V?”


“Hyper-V Server 2012 is a free download, right?”


“So why would I want to use Windows Server 2012 to get Hyper-V?”

The question is a good one, and it has a really good answer. Licensing! Here are the nitty-gritty details. First this from the Microsoft Hyper-V Server 2012 page:

“Hyper-V Server is a dedicated stand-alone product that contains the hypervisor, Windows Server driver model, virtualization capabilities, and supporting components such as failover clustering, but does not contain the robust set of features and roles as the Windows Server operating system.”

The language is clear that the focus of Hyper-V Server 2012 is to provide the platform components to run virtual machines for Windows Servers that are already licensed. For example, if you were consolidating a group of your existing Windows servers with their existing licenses onto the Hyper-V platform, it would be in your best interest to use Hyper-V Server 2012. If you were implementing a Virtual Desktop Infrastructure (VDI) for your existing licensed Windows environment, then Hyper-V Server 2012 is a great solution because here again you already have the OS licenses. But what if you don’t already have the licensed Windows Servers?

Windows Server 2012 is all about virtualization rights. Both editions (Standard and Datacenter) have the exact same functionality. The only difference between the two editions is the virtualization rights. Windows Server 2012 Standard edition gives the purchaser the rights to run 2 virtual instances of Windows Server, while the Datacenter Edition has unlimited virtualization rights. That’s it. Now of course Windows Server 2012 also has all of the cool roles and features that you can install on the host machine but when it comes right down to it we generally keep the hosts as clean as possible from running additional workloads outside of Hyper-V.
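The arithmetic behind that choice is simple – a sketch in license counts only (the VM count is an example, and pricing varies by agreement, so no dollar figures here):

```shell
# Standard edition covers 2 Windows Server VM instances per license;
# Datacenter covers unlimited VM instances on a licensed host.
vms=12                                    # example: 12 Windows VMs on one host
standard_licenses=$(( (vms + 1) / 2 ))    # ceil(vms / 2)
echo "$vms VMs: $standard_licenses Standard licenses vs 1 Datacenter license"
```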

I know someone reading this is thinking, “Couldn’t you run Hyper-V Server 2012 and just buy the Windows Server 2012 licenses?” The answer is: of course you could. Should you do it? No. Maximize your licensing position by using the full version of Windows Server 2012 and adding Hyper-V.

Next time someone asks you about Windows Server 2012 with Hyper-V, or Hyper-V Server 2012 you can smile and nod your head knowingly and then tell them “It seems to be a question of licensing.”

For more information about all things Hyper-V in Windows Server 2012, please check out the free e-book from Veeam, The Hands-On Guide on Understanding Hyper-V in Windows Server 2012.

New features and capabilities available in vSphere 5.5 October 2, 2013

Posted by ramnathgr in Uncategorized.

Summary of new features and capabilities available in vSphere 5.5

  1. Doubled Host-Level Configuration Maximums – vSphere 5.5 is capable of hosting any size workload; a fact that is punctuated by the doubling of several host-level configuration maximums. The maximum number of logical CPUs has doubled from 160 to 320, the number of NUMA nodes doubled from 8 to 16, the number of virtual CPUs has doubled from 2048 to 4096, and the amount of RAM has also doubled from 2TB to 4TB. There is virtually no workload that is too big for vSphere 5.5!
  2. Hot-pluggable PCIe SSD Devices – vSphere 5.5 provides the ability to perform hot-add and remove of SSD devices to/from a vSphere 5.5 host. With the increased adoption of SSD, having the ability to perform both orderly as well as unplanned SSD hot-add/remove operations is essential to protecting against downtime and improving host resiliency.
  3. Improved Power Management – ESXi 5.5 provides additional power savings by leveraging deep processor power states (C-states). By leveraging the deeper CPU sleep states, ESXi can minimize the amount of power consumed by idle CPUs during periods of inactivity. Along with the improved power savings comes an additional performance boost on Intel chipsets, as turbo mode frequencies can be reached more quickly when CPU cores are in a deep C-state.
  4. Virtual Machine Compatibility ESXi 5.5 (aka Virtual Hardware 10) – ESXi 5.5 provides a new Virtual Machine Compatibility level that includes support for a new virtual-SATA Advanced Host Controller Interface (AHCI) with support for up to 120 virtual disk and CD-ROM devices per virtual machine. This new controller is of particular benefit when virtualizing Mac OS X, as it allows you to present a SATA-based CD-ROM device to the guest.
  5. VM Latency Sensitivity – included with the new virtual machine compatibility level comes a new “Latency Sensitivity” setting that can be tuned to help reduce virtual machine latency. When the Latency sensitivity is set to high the hypervisor will try to reduce latency in the virtual machine by reserving memory, dedicating CPU cores and disabling network features that are prone to high latency.
  6. Expanded vGPU Support – vSphere 5.5 extends VMware’s hardware-accelerated virtual 3D graphics support (vSGA) to include GPUs from AMD. The multi-vendor approach provides customers with more flexibility in the data center for Horizon View virtual desktop workloads. In addition 5.5 enhances the “Automatic” rendering by enabling the migration of virtual machines with 3D graphics enabled between hosts running GPUs from different hardware vendors as well as between hosts that are limited to software backed graphics rendering.
  7. Graphics Acceleration for Linux Guests – vSphere 5.5 also provides out-of-the-box graphics acceleration for modern GNU/Linux distributions that include VMware’s guest driver stack, which was developed by VMware and made available to all Linux vendors at no additional cost.
  8. vCenter Single Sign-On (SSO) – in vSphere 5.5, SSO comes with many improvements. There is no longer an external database required for the SSO server, which together with the vastly improved installation experience helps to simplify the deployment of SSO for both new installations as well as upgrades from earlier versions. This latest release of SSO provides enhanced Active Directory integration, including support for multiple forests as well as one-way and two-way trusts. In addition, a new multi-master architecture provides built-in availability that helps not only improve resiliency for the authentication service, but also helps to simplify the overall SSO architecture.
  9. vSphere Web Client – the web client in vSphere 5.5 also comes with several notable enhancements. The web client is now supported on Mac OS X, to include the ability to access virtual machine consoles, attach client devices and deploy OVF templates. In addition there have been several usability improvements to include support for drag and drop operations, improved filters to help refine search criteria and make it easy to find objects, and the introduction of a new “Recent Items” icon that makes it easier to navigate between commonly used views.
  10. vCenter Server Appliance – with vSphere 5.5 the vCenter Server Appliance (VCSA) now uses a reengineered, embedded vPostgres database that offers improved scalability. I wasn’t able to officially confirm the max number of hosts and VMs that will be supported with the embedded DB. They are targeting 100 hosts and 3,000 VMs, but we’ll need to wait until 5.5 releases to confirm these numbers. However, regardless of what the final numbers are, with this improved scalability the VCSA is a very attractive alternative for folks who may be looking to move away from a Windows-based vCenter.
  11. vSphere App HA – App HA brings application awareness to vSphere HA helping to further improve application uptime. vSphere App HA works together with VMware vFabric Hyperic Server to monitor application services running inside the virtual machine, and when issues are detected perform restart actions as defined by the administrator in the vSphere App HA Policy.
  12. vSphere HA Compatibility with DRS Anti-Affinity Rules – vSphere HA will now honor DRS anti-affinity rules when restarting virtual machines. If you have anti-affinity rules defined in DRS that keep selected virtual machines on separate hosts, VMware HA will now honor those rules when restarting virtual machines following a host failure.
  13. vSphere Big Data Extensions(BDE) – Big Data Extensions is a new addition to the VMware vSphere Enterprise and Enterprise Plus editions. BDE is a vSphere plug-in that enables administrators to deploy and manage Hadoop clusters on vSphere using the vSphere web client.
  14. Support for 62TB VMDK – vSphere 5.5 increases the maximum size of a virtual machine disk file (VMDK) to 62TB (note the maximum VMFS volume size is 64TB where the max VMDK file size is 62TB). The maximum size for a Raw Device Mapping (RDM) has also been increased to 62TB.
  15. Microsoft Cluster Server (MSCS) Updates – MSCS clusters running on vSphere 5.5 now support Microsoft Windows 2012, round-robin path policy for shared storage, and iSCSI and Fibre Channel over Ethernet (FCoE) for shared storage.
  16. 16Gb End-to-End Support – in vSphere 5.5, 16Gb end-to-end FC support is now available. Both the HBAs and array controllers can run at 16Gb as long as the FC switch between the initiator and target supports it.
  17. Auto Remove of Devices on PDL – This feature automatically removes a device from a host when it enters a Permanent Device Loss (PDL) state. Each vSphere host is limited to 255 disk devices, removing devices that are in a PDL state prevents failed devices from occupying a device slot.
  18. VAAI UNMAP Improvements – vSphere 5.5 provides a new “esxcli storage vmfs unmap” command with the ability to specify the reclaim size in blocks, as opposed to just a percentage, along with the ability to reclaim space in increments rather than all at once.
  19. VMFS Heap Improvements – vSphere 5.5 introduces a much improved heap eviction process, which eliminates the need for large heap sizes. With vSphere 5.5 a maximum of 256MB of heap is needed to enable vSphere hosts to access the entire address space of a 64TB VMFS.
  20. vSphere Flash Read Cache – a new flash-based storage solution that enables the pooling of multiple flash-based devices into a single consumable vSphere construct called a vSphere Flash Resource, which can be used to enhance virtual machine performance by accelerating read-intensive workloads.
  21. Link Aggregation Control Protocol (LACP) Enhancements – with the vSphere Distributed Switch in vSphere 5.5 LACP now supports 22 new hashing algorithms, support for up to 64 Link Aggregation Groups (LAGs), and new workflows to help configure LACP across large numbers of hosts.
  22. Traffic Filtering Enhancements – the vSphere Distributed Switch now supports packet classification and filtering based on MAC SA and DA qualifiers, traffic type qualifiers (i.e. vMotion, Management, FT), and IP qualifiers (i.e. protocol, IP SA, IP DA, and port number).
  23. Quality of Service Tagging – vSphere 5.5 adds support for Differentiated Services Code Point (DSCP) marking. DSCP marking support enables users to insert tags in the IP header, which helps in layer 3 environments where physical routers function better with an IP header tag than with an Ethernet header tag.
  24. Single-Root I/O Virtualization (SR-IOV) Enhancements – vSphere 5.5 provides improved workflows for configuring SR-IOV as well as the ability to propagate port group properties to the virtual functions.
  25. Enhanced Host-Level Packet Capture – vSphere 5.5 provides an enhanced host-level packet capture tool that is equivalent to the command-line tcpdump tool available on the Linux platform.
  26. 40Gb NIC Support – vSphere 5.5 provides support for 40Gb NICs. In 5.5 the functionality is limited to the Mellanox ConnectX-3 VPI adapters configured in Ethernet mode.
  27. vSphere Data Protection (VDP) – VDP has also been updated in 5.5 with several great improvements to include the ability to replicate backup data to EMC Avamar, direct-to-host emergency restore, the ability to backup and restore of individual .vmdk files, more granular scheduling for backup and replication jobs, and the ability to mount existing VDP backup data partitions when deploying a new VDP appliance. For more information about these new features as well as more information about VDP vs. VDP advanced check out Jeff Hunter’s recent blog post.
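As an example of item 18, the new UNMAP command takes a volume label and a per-iteration reclaim unit in blocks (the datastore name here is a placeholder – run from the ESXi shell against your own datastore):

```shell
# Reclaim dead space on a VMFS datastore, 200 blocks per UNMAP iteration
esxcli storage vmfs unmap --volume-label=Datastore01 --reclaim-unit=200
```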


Installing vCenter 5.1 on Windows 2012: Profile Driven Storage installation error October 1, 2013

Posted by ramnathgr in Uncategorized.

I ran into an issue with vCenter Server 5.1 in my Windows Server 2012 lab. The vCenter installation fails with this error message.

Trying to manually start the vCenter service didn’t work as it was looking for a dependent service.

I stumbled on this tip from an online forum. Details and link below:

For those not aware, ESXi 5.1 was released today. It has official Win 8 and Server 2012 compatibility, and in addition to that you can actually see the console when the client is run from Win 8 now.

I also upgraded vCenter which was interesting. Gets a little more complex with multiple services now. Now to get to the reason I’m posting (because someone on the internet somewhere will come across this) – I attempted to install vCenter on Windows Server 2012. The installation fails just before it tries to install the vSphere Profile-Driven Storage Service.

Here is why: to install that service, the vCenter service needs to be running. However, the vCenter service does not start properly in Windows Server 2012. This is due to a missing dependency – in particular, the VirtualCenter Server service relies on the ProtectedStorage service, which was removed from Win8/Server 2012. The workaround is the following (at your own risk): open regedit, go to \System\CurrentControlSet\Services\vpxd, then open the DependOnService key and remove ProtectedStorage from the list. Reboot the machine and the vCenter service should come alive (it might take a while). Then restart ONLY the vCenter installation once everything has come up (you need to wait for the vCenter service to come alive, which can take a few minutes). The install will continue from where it left off and finish.

So the short version is: when the vCenter install fails, go to the registry and remove the ProtectedStorage dependency from the vpxd service, reboot, and it should work. Restart the vCenter install and it will finish as per normal.
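For the scripting-inclined, the same registry edit can be sketched in PowerShell (run elevated, and as the forum poster says, at your own risk – this assumes the hive path described above):

```powershell
# Drop ProtectedStorage from the vpxd service's dependency list;
# it no longer exists as a service on Windows Server 2012
$key  = "HKLM:\SYSTEM\CurrentControlSet\Services\vpxd"
$deps = (Get-ItemProperty -Path $key).DependOnService
Set-ItemProperty -Path $key -Name DependOnService `
    -Value @($deps | Where-Object { $_ -ne "ProtectedStorage" })
```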

That is how I got vCenter working on Server 2012. I also installed the Web Client service without an issue.

Obviously vCenter isn’t officially supported on Server 2012, so this is a workaround. Hopefully this helps someone else like me who upgrades their 2K8R2 install to 2012 just to see, and then finds vCenter dies.

Hello world! November 3, 2011

Posted by ramnathgr in Uncategorized.