Let's Design, Implement and Administer ESX3

Virtualization with VMWare Infrastructure 3.0

Archive for the ‘VMHA’ Category

VMWARE High Availability

ESX3.5 Notes -Part01

Posted by Preetam on April 30, 2008

IMPROVEMENTS

DRS (Distributed Resource Scheduling)

In the past, triggering maintenance mode would only move VMs that were powered on. In this release maintenance mode also moves VMs that are powered off or suspended.

In the past, entering maintenance mode with manual or partially automated DRS would generate a whole list of 5-star recommendations. This no longer happens; VMs are simply moved automatically. The assumption is that if you are entering maintenance mode, you want to evacuate all VMs from the ESX host.

 

VMware HA Clusters

The number of ESX hosts supported in a cluster has increased from 16 to 32.

Previously, when an ESX host failed, all its VMs were powered on on the next available host. Now VMs are powered on on the host that has the largest amount of free CPU and memory resources; some intelligence has been added to the placement.

 

ALARMS

A new set of alarms has been introduced. One of them checks the VM's heartbeat; if a VM is hung, an action to restart that VM can be triggered.

 

NEW FEATURES

 

Lock Down mode

When this option is enabled, you will not be able to log on to the ESX host with administrative privileges using the VI Client.

 

DPM (Distributed Power Management) (Experimental feature)

Its job is to monitor the cluster's usage and, during off-peak periods, consolidate VMs onto a smaller number of ESX hosts. The unneeded ESX hosts are put into standby mode, so they consume less power in the server room. DPM is integrated with DRS, so rules such as reservations and affinity rules are still obeyed. This can certainly be used for a development cluster.

DPM is initiated based on three conditions

• Guest CPU and Memory usage
• ESX host CPU and Memory usage
• ESX host power consumption

Before an ESX host is put into standby mode, the decision is based on the last 20 minutes of history. For a power-on event, the other nodes in the cluster are checked every 5 minutes to verify that they are not overloaded and that HA constraints are not being violated. Like DRS, DPM can also be configured in manual mode, which only offers recommendations, and a particular ESX host can be excluded from DPM. To test DPM it is recommended that you verify each ESX host can enter and exit standby mode. There is also an extra shutdown button available for the ESX host. Like maintenance mode, standby mode can hang while it waits for either the operator or automatic VMotion to move the VMs; if any VMs are still running, the host will not go into standby mode.

 

Enabling the iSCSI software initiator no longer opens the iSCSI port (3260) on the firewall by default. This now has to be done using the command:
esxcfg-firewall -e swISCSIClient
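To confirm the change afterwards, the rule can be queried; a quick sketch, assuming the -q (query) flag behaves the same on your ESX 3.x build:

esxcfg-firewall -q swISCSIClient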

Posted in DPM, DRS, ESX3.5, VMHA

MSCS AND VMWARE

Posted by Preetam on April 16, 2007


A few points to remember when you decide to build clustering inside VMs, whether cluster-in-a-box (CIB) or cluster-across-boxes (CAB).

Virtual machines (cluster nodes) have the following boundaries:

  • Only LSI Logic virtual SCSI Card
  • Only VMXnet
  • Only 32-Bit VMs
  • 2-Node Clustering only
  • NIC teaming is not supported
  • iSCSI clustering is not supported
  • Boot from SAN is not supported
  • VMs part of clustering cannot be part of VMHA & DRS
  • VMotion of VMs running clustering software is not supported
  • ESX 2.5 and ESX 3.0 are not supported
  • HBA cards from different manufacturers are not supported
  • When using N+I (physical-to-virtual clustering), the SCSIport miniport driver must be present on the physical node (not the Storport miniport driver), and no PowerPath software may be installed on the physical node

If you clone VMs with RDMs attached, the RDMs will be converted into VMDKs. You must zero out the disk that you would like to use as a shared disk; you can also use a mapped SAN LUN, in which case you don't need to use vmkfstools. The disk must map to a SAN LUN, and it is recommended to set the RDM up in physical (pass-through) mode. Upgrade of VMs that are using MSCS is supported only from ESX 2.5.2 to ESX 3.0.
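A minimal sketch of the two operations mentioned above (zeroing out a disk and creating a physical-mode RDM). The -w (write zeroes) and -z (create pass-through RDM) flags are the vmkfstools options I believe apply on ESX 3.x, and the datastore path and LUN device name are placeholders to replace with your own:

vmkfstools -w /vmfs/volumes/datastore1/node1/quorum.vmdk
vmkfstools -z /vmfs/devices/disks/vmhba1:0:12:0 /vmfs/volumes/datastore1/node1/quorum-rdm.vmdk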

UPGRADING RDM AND BOOT VOLUME VMFS ON DIFFERENT VOLUMES

  1. Power off the virtual nodes
  2. Upgrade the volumes from the VI Client
  3. Power on each node. If you get the error 'Invalid argument, you have misconfigured cluster setup', it is because a virtual disk created on ESX 2.x cannot be powered on under ESX 3.0; in this case you need to import the disk using the vmkfstools utility (see the sketch below).
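A hedged example of such an import; the source and destination paths are placeholders:

vmkfstools -i /vmfs/volumes/oldvol/node2/node2-esx2.vmdk /vmfs/volumes/newvol/node2/node2.vmdk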

UPGRADING RDM AND BOOT VOLUME ON SAME VMFS VOLUME

  1. Power off the virtual nodes
  2. Upgrade the volumes from the VI Client
  3. The upgrade from VMFS-2 to VMFS-3 relocates the RDM and the first node's VMX file. When you then upgrade the second node it will be unable to locate its VMDK file; ignore this, as the VMDK gets upgraded anyway. Now manually edit the second node's VMX file and point it to the new location of the quorum and RDM files (a sample of the relevant lines follows).
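A hedged illustration of the kind of lines you would adjust in the second node's .vmx file; the SCSI bus/target numbers and file names here are placeholders rather than values from any particular setup:

scsi1:0.present = "true"
scsi1:0.fileName = "/vmfs/volumes/newvol/node1/quorum-rdm.vmdk"
scsi1:1.present = "true"
scsi1:1.fileName = "/vmfs/volumes/newvol/node1/shared-data-rdm.vmdk"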

UPGRADING CLUSTER ACROSS BOX

Using a shared pass-through RDM, the procedure is similar to upgrading on the same VMFS volume.

Using files in shared VMFS-2 volumes
  1. Change the mode of volumes from shared to public
  2. Upgrade ESX server
  3. Upgrade VMFS volume from VI client
  4. Create LUN for each shared disk
  5. For each shared disk, create an RDM pointing to the respective LUN

e.g. vmkfstools -i oldvmdk.vmdk newrdm.vmdk -d <respective LUN>

  6. Finally, modify the VMX file of each node to point to the respective new RDMs

This is what I have understood from the document, but honestly this requires practical experience to say with confidence.

Posted in Advance Concepts, Licenses, MSCS, VMFS, VMHA, VMWare

Resource Mgmt Guide -03

Posted by Preetam on March 31, 2007

VMHA & Special Situations:

If you power off a host, the VMs on that host restart on other hosts.

When you are in the middle of a VMotion and the target or source fails:

  • If the target fails: the VM is powered on on the source
  • If the source fails: the VM is powered on on the target
  • If both the target and the source fail: the VM is powered on on a third host

The cluster turns red if the current failover capacity is less than the configured failover capacity; however, if you turn off strict admission control, the cluster won't turn red.

If the cluster turns red, HA fails over VMs with high priority first, so consider giving high priority to the VMs that are important to your organization.

By default, VMs are shut down on an isolated host; this shutdown is not graceful.

When you add a host to the cluster, the host has to communicate with a primary host in the same cluster to complete its configuration. When the first host is unavailable or is removed, a secondary host becomes primary. If all the hosts are unavailable, you won't be able to add a new host to the cluster. In this situation, you must disconnect all hosts that are not responding before you can add a new host.

When a host is manually disconnected from VirtualCenter, it is also not counted in the current failover capacity. Because the status of the host is not known and VirtualCenter is not communicating with it, HA cannot use it as a guaranteed failover target. VirtualCenter takes the same decision for a host that is not responding.

 

However, "host disconnected from VirtualCenter" and "host not responding" are quite different states.

NB: The VirtualCenter Server tries to reconnect to a managed host if the connection is lost. You can define how long VirtualCenter tries to re-establish the connection. This feature is not available when the VI Client is connected directly to an ESX Server. "Host not responding" means VirtualCenter no longer receives heartbeats; this could be because the host has failed or because the VirtualCenter agent has crashed. Host failure detection occurs every 15 seconds: if there is no response within the 12-14 second window, the host declares itself isolated. During that same interval the default isolation response is applied to the VMs, which is to shut them down. If the network is restored within this window, the VMs are still shut down but are not failed over; if the network is restored before 12 seconds, the host is not considered isolated.
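As an illustration of the timing knobs involved, HA exposes advanced options at the cluster level. The option names and defaults below are the ones commonly documented for VC 2.x HA; treat them as an assumption to verify on your build rather than a definitive reference:

das.failuredetectiontime = 15000   (milliseconds of missed heartbeats before a host is declared failed/isolated)
das.isolationaddress = 192.168.1.1 (an extra address the host pings to decide whether it is isolated; the address shown is a placeholder)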

If the isolated host has SAN access, it retains the disk locks on the VM files, and the attempt to fail over the virtual machines to another host fails. VMFS disk locking prevents simultaneous write operations to the virtual machine disk files and potential corruption. This is not true for iSCSI and NAS storage: an isolated host might lose access to its disks and lose the disk locks, even if the network connection is restored later. Since these VMs may still be generating and consuming network I/O, it is recommended that you keep the default isolation response unchanged when your storage is on iSCSI or NAS.

If more than one host goes down, the VMs on those hosts restart on the remaining hosts based on the restart priority set on the VMs; this priority is applied on a per-host basis, depending on which host fails first. The same applies to the isolation response.

A cluster can turn yellow if DRS is overcommitted, and it can turn red if a DRS or HA violation occurs.

A cluster is valid unless something makes it overcommitted (yellow) or invalid (red).

  • A host fails and DRS becomes overcommitted, i.e. the cluster turns yellow

 

E.g. suppose you have a cluster with 12 GHz of resources divided among 3 hosts, with reservations totalling 10 GHz divided into three resource pools: RP1 (3 GHz), RP2 (3 GHz) and RP3 (4 GHz). If one of the hosts goes down, the capacity left is 8 GHz, and the 2 VMs that were running on it are shut down as well. The cluster can still run the remaining VMs, because together they reserve no more than the remaining 8 GHz of capacity, but the 10 GHz of resource pool reservations can no longer be met. This makes the cluster yellow.

 

 

When you use a particularly large VM, e.g. 8 GB, make sure there is at least one host that is able to run that VM individually rather than only jointly.

A DRS cluster can become invalid if VirtualCenter becomes unavailable and you power on VMs using a VI Client connected directly to an ESX Server host. You can resolve a red DRS cluster either by powering off one or more VMs, moving VMs to parts of the tree that have sufficient resources, or editing the resource pool settings.

An HA cluster can become red when the number of VMs powered on exceeds the failover requirements, i.e. the current failover capacity is smaller than the configured failover capacity.

DRS behaviour is not affected if a cluster is red because of an HA issue.

 

In general, a cluster enabled for HA and DRS must meet the VMotion requirements. If the hosts are not on the VMotion network, DRS can still make initial placement recommendations. Each host in the cluster must be able to resolve the host name and IP address of all the other hosts in the cluster; to achieve this we can set up DNS on each host or fill in /etc/hosts entries (a sample follows). For VMHA, network redundancy is highly recommended; each host should have two NICs. For the VMotion requirement, processors must come from the same vendor (Intel, AMD) and the same processor family to be compatible for migration with VMotion. In most cases, processor versions within the same family are similar enough to maintain compatibility. In some cases, processor vendors have introduced significant architectural differences within the same processor family (such as 64-bit extensions and SSE3); VMware identifies these exceptions if it cannot guarantee successful migration. VMotion does not currently support raw or undoable VMDKs, or migration of VMs clustered with MSCS.
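For example, the service console's /etc/hosts on every host could carry entries along these lines (host names and addresses are placeholders):

192.168.10.11   esx01.example.local   esx01
192.168.10.12   esx02.example.local   esx02
192.168.10.13   esx03.example.local   esx03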

For migration of VMs you have two options:

  • Drag VM directly to the cluster object
  • Right-click VM name and choose migrate

For a DRS-enabled cluster, migrating directly to a host is not allowed because the resource pools control the resources.


Posted in Advance Concepts, DRS, iSCSI, VMHA, VMWare

Resource Mgmt Guide -02

Posted by Preetam on March 31, 2007

If a host is added to a cluster, you can no longer create child resource pools on the host.
While creating resource pools and assigning limits, reservations and shares, if any of the values is not valid you will see a yellow triangle against the resource pool.

E.g. you created a resource pool with a 10 GB reservation; inside it you create a child resource pool of 6 GB and then try to create another child pool of 6 GB with the reservation type set to fixed (non-expandable). Because 6 + 6 = 12 GB exceeds the parent's 10 GB, you will get a yellow triangle.

When you move VMs into a new resource pool, the VMs' existing limits and reservations do not change. If the shares value is custom, it is not changed either; but if the shares are set to high, normal or low, the percentage share value changes to reflect the new resource pool. The unreserved resource values also change to reflect the newly added reservations. That said, if a VM's reservations cannot be met by the resource pool, the move will fail.

When you move hosts into a cluster, the effect depends on what you choose:

  • DRS enabled
  • DRS disabled

Existing resource pools are affected accordingly. When you enable DRS on the cluster, you have the option of moving the host's entire resource pool tree into the new cluster, or the option of putting the host's VMs into the cluster's root resource pool. With the second option, the tree structure is flattened and all VMs and resource pools become children of the cluster root instead of the host. However, when you later move the host out of the cluster, the resource pool hierarchy does not move with it; in short, it has become completely independent of the host.

If DRS is disabled, all resource pools are deleted and the VMs become direct children of the cluster.

Also, in a non-DRS cluster there is no cluster-wide resource management based on shares; shares remain relative to the host.

You can create a cluster without a special license, but you must have a license to enable a cluster for DRS or HA.

What happens to DRS & HA when VirtualCenter goes down?

HA – continues to work and can still restart VMs on other hosts in case of a failover; however, VM-specific cluster properties (restart priority or isolation response) are based on the state of the cluster before VirtualCenter went down.

DRS – No recommendations are made for resource optimization.

By default the automation level is set at the cluster level, but you can customize it at the VM level as well. Migration recommendations are based on the priority and the associated reasons.

When a host is in maintenance mode, it does not allow you to deploy VMs. VMs that were already running on the host continue to run; you either migrate them to another host or shut them down. When no VMs are running on the host, the host's icon changes to indicate that it is under maintenance. If the DRS cluster is in automatic mode, all VMs are migrated to different hosts when the host is moved into maintenance mode. This matters when another host fails and HA tries to fail VMs over to this host: being in maintenance mode, it won't accept any VMs. Also, if a host goes into maintenance mode, VMHA computes the current failover capacity excluding hosts that are in maintenance mode; when the host exits maintenance mode, the failover capacity is computed again.
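If you prefer the service console, maintenance mode can also be toggled there. The vmware-vim-cmd sub-commands below are the ones I have seen referenced for ESX 3.5; treat them as an assumption to verify rather than the authoritative method (the VI Client is the documented path):

vmware-vim-cmd hostsvc/maintenance_mode_enter
vmware-vim-cmd hostsvc/maintenance_mode_exit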

When the option to allow VMs to be started even if they violate availability constraints is deselected (i.e. disabled), you will also not be able to:

  • Revert VMs to the last snapshot
  • Change CPU/memory reservations
  • Migrate a VM into the cluster

Posted in Advance Concepts, DRS, iSCSI, Limits, Reservations, Resource Pools, Resources, VMHA, VMWare

VMWARE HA

Posted by Preetam on March 6, 2007

The clustering configuration you choose with VMware depends on the customer's requirements.

Cluster-in-a-Box: both nodes are on the same physical host. This type of configuration is suitable where the concern is crashes or administrative errors within the guests, but there is no cover if the ESX host itself fails at the hardware level.

Cluster-across-Boxes: the nodes are placed on separate ESX hosts, and this takes care of an ESX host's hardware failure.

Physical-to-Virtual cluster: here node A is a physical box and node B is a virtual machine on an ESX host, acting as the standby node.

The VMware HA solution has some advantages that are not very obvious, but we should apply VMHA in any case for one simple reason: if an ESX host fails, all its VMs at least get restarted on another host without you having to do it manually. Downtime will be non-zero, though.

VMHA with VC 2.0 deals only with host failures; for VMs (node failures) you monitor the heartbeat using an alarm.

PRE-REQUISITES VMHA:

  • Each host must be able to power on the VMs, i.e. each host must have access to the VMs' files; in other words, all VMotion requirements are met.
  • Each ESX server must be reachable by its fully qualified domain name (a quick check is sketched below).
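A quick way to sanity-check name resolution from each host's service console, using standard Linux tools shipped with the ESX 3.x console OS (the host name below is a placeholder):

nslookup esx02.example.local
ping -c 2 esx02.example.local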

For VMHA heartbeats it is recommended to set up one of the following (a sample set of commands follows the list):

  • Two service console ports on different virtual switches
  • One service console with NIC teaming enabled at the virtual switch level
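A minimal sketch of adding a second service console port on its own vSwitch from the command line; the vSwitch name, port group name, uplink NIC and addresses are placeholders, and the same result can of course be achieved from the VI Client:

esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic2 vSwitch2
esxcfg-vswitch -A "Service Console 2" vSwitch2
esxcfg-vswif -a vswif1 -p "Service Console 2" -i 192.168.20.11 -n 255.255.255.0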

VMHA is fully integrated with DRS, which means that when a host fails and all its VMs are restarted on different hosts, DRS takes care of the resource management. VMHA is a reactive solution, meaning it acts only when one or more hosts fail, while DRS is a proactive solution; it is always best to implement both VMHA and DRS.

Failover capacity: when you enable a cluster, there are two important configurations you need to make, and they are again dependent upon the client's requirements.

  1. Number of host failures allowed

    The maximum is 4 and the minimum is 1. This configuration helps HA determine whether there are enough resources to power on the VMs in the cluster, but it is we who decide how much redundant capacity is made available.

  2. Admission control
    1. Do not power on VMs if they violate availability constraints (selected as the default option)
    2. Allow virtual machines to be powered on even if they violate availability constraints

Depending upon the admission control option you select, a VM will either be powered on or not. These values help VMHA balance and calculate whether there are enough resources across the hosts in case of host failures. The current failover capacity, shown under the cluster's Summary tab, tells you how many host failures the cluster can tolerate at that time while still being able to hold the VMs.

We only need to provide the number of host failures to tolerate; the rest, such as whether there are enough resources to power on the VMs across the remaining hosts or when only one host is alive, is decided by VMHA. If the resources are not enough, VMHA will not allow all VMs to be powered on (the default option). You can force VMHA to start the VMs (when you are willing to let the constraints be violated); in this case the cluster will show a red sign, which means failover might not be guaranteed. It is not recommended that you work with red clusters. Also, if you have 3 hosts and 2 fail, the cluster will turn red.

So when you enable VMHA, you should design the cluster in such a way that the ESX hosts are able to handle the additional VMs without any over-utilization of resources.

For example: two ESX hosts of equal capacity handle 50 VMs each. We should design them so that each host is able to handle all 100 VMs.


Posted in Virtual Center, VMHA, VMWare