Let's Design, Implement, and Administer ESX3

Virtualization with VMWare Infrastructure 3.0

Archive for the ‘DRS’ Category

Dynamic Resource Scheduler

ESX3.5 Notes -Part02

Posted by Preetam on April 30, 2008

Lab Manager 2.5.1 does not support ESX Server 3.5.
All hosts in a VMware HA cluster must have DNS configured so that the short host name (without the domain suffix) of any host in the cluster can be resolved to the appropriate IP address from any other host in the cluster.

If a host is added to a cluster, you can no longer create child resource pools of that host. You can create child resource pools of the cluster if the cluster is enabled for Distributed Resource Scheduler (DRS).

You cannot use VMotion to migrate a virtual machine whose guest operating system has 16GB of memory or more to ESX Server 3.5 hosts or earlier. Resize the guest operating system memory or migrate to a compatible version of ESX Server 3.

Using VI Client or VI Web Access ensures that the starting sectors of partitions are 64K aligned, which improves storage performance.

In centralized license server mode, license files are located at the following default location on the machine running the VMware license server: C:\Program Files\VMware\VMware License Server\Licenses. This is different from VirtualCenter 2.0, where the default location of the license file was C:\Documents and Settings\All Users\Application Data\VMware\VMware License Server\vmware.lic, which no longer exists.

The VI Client installer installs Microsoft .NET Framework 2.0 on your machine. If you have an older version, the VirtualCenter Server installer upgrades your version to version 2.0.

While installing ESX Server 3.5, the option to create a default network for virtual machines is selected by default. If you proceed with installing ESX Server 3.5 with this option selected, your virtual machines share a network adapter with the service console, which does not provide optimal security.

Manage remote console connections—You can now configure VirtualCenter 2.5 to set the maximum number of allowed console connections (0 to 100) to all virtual machines.

VirtualCenter 2.5 provides an unlicensed evaluation mode that doesn’t require that you install and configure a license server while installing VirtualCenter 2.5 and ESX Server 3.

VirtualCenter 2.5 can manage up to 200 hosts and 2000 virtual machines.
ESX Server 3.5 supports 256GB of physical memory and virtual machines with 64GB of RAM.
ESX Server hosts support up to 32 logical processors.
SATA support—ESX Server 3.5 supports selected SATA devices connected to dual SAS/SATA controllers.
ESX Server 3.5 introduces support for N-Port ID Virtualization (NPIV) for Fibre Channel SANs. Each virtual machine can now have its own World Wide Port Name (WWPN).

VMotion migration of virtual machines with local swap files is supported only across ESX Server 3.5 and later hosts with VirtualCenter 2.5 and later.

Enhanced HA provides experimental support for monitoring individual virtual machine failures. VMware HA can now be set up to either restart the failed virtual machine or send a notification to the administrator.

Storage VMotion simplifies array migration and upgrade tasks and reduces I/O bottlenecks by moving virtual machines to the best available storage resource in your environment. Migrations using Storage VMotion must be administered through the Remote Command-Line Interface (Remote CLI).

VirtualCenter 2.5 provides support for batch installation of VMware Tools: VMware Tools can now be updated for selected groups of virtual machines, and upgrades can be scheduled for the next boot cycle.


Posted in Advance Concepts, DRS, Limits, System Requirements, Virtual Center, VMWare, VMWare Tools | Leave a Comment »

ESX3.5 Notes -Part01

Posted by Preetam on April 30, 2008

IMPROVEMENTS

DRS (Distributed Resource Scheduling)

When maintenance mode was triggered in the past, it would move only VMs that were powered on. In this release, maintenance mode also moves VMs that are powered off or suspended.

In the past, maintenance mode with manual or partially automated DRS would generate a whole list of 5-star recommendations. This no longer happens; the VMs are simply moved automatically. It is assumed that if you are entering maintenance mode, you want to evacuate the ESX host of all VMs.

 

VMware HA Clusters

The number of ESX hosts supported in a cluster has increased from 16 to 32.

Previously, when an ESX host failed, all its VMs were powered on on the next available host. Now VMs are powered on on the host with the largest amount of available CPU and memory resources; some intelligence has been added to the placement.
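The improved placement behaviour can be sketched roughly as follows. This is an illustrative Python sketch, not VMware's actual algorithm; the host names, resource figures, and the "sum of free CPU and memory" ranking are all invented for the example:

```python
# Illustrative sketch of HA 3.5-style restart placement: pick the host
# with the most free capacity, rather than the "next available" one.
# All names and numbers here are hypothetical.

def pick_restart_host(hosts, vm_cpu_mhz, vm_mem_mb):
    """Return the candidate host with the largest free capacity that
    can still fit the VM's CPU and memory requirements."""
    candidates = [
        h for h in hosts
        if h["free_cpu_mhz"] >= vm_cpu_mhz and h["free_mem_mb"] >= vm_mem_mb
    ]
    if not candidates:
        return None  # no host can accommodate the VM
    # Rank by combined free CPU + memory headroom (invented metric).
    return max(candidates, key=lambda h: h["free_cpu_mhz"] + h["free_mem_mb"])

hosts = [
    {"name": "esx01", "free_cpu_mhz": 2000, "free_mem_mb": 4096},
    {"name": "esx02", "free_cpu_mhz": 6000, "free_mem_mb": 16384},
    {"name": "esx03", "free_cpu_mhz": 1000, "free_mem_mb": 2048},
]
best = pick_restart_host(hosts, vm_cpu_mhz=500, vm_mem_mb=1024)
print(best["name"])  # esx02, the host with the most headroom
```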

 

ALARMS

A new set of alarms has been introduced. One of them checks the VM's heartbeat: if a VM is hung, an action to restart that VM can be triggered.

 

NEW FEATURES

 

Lock Down mode

When this option is enabled, you cannot log on to the ESX host with administrative privileges using the VI Client.

 

DPM (Distributed Power Management) (Experimental feature)

Its job is to monitor the cluster's usage and move VMs during off-peak periods onto a smaller number of ESX hosts. The unneeded ESX hosts are put into standby mode, so they consume less power in the server room. DPM is integrated with DRS so that other rules, such as reservations and affinity rules, are still obeyed. This can certainly be used for a development cluster.

DPM is initiated based on three conditions:

• Guest CPU and Memory usage
• ESX host CPU and Memory usage
• ESX host power consumption

Before an ESX host is put into standby mode, the decision is based on the last 20 minutes of history. For a power-on event, DPM checks the other nodes in the cluster every 5 minutes to verify they are not overloaded and that HA constraints are not violated. Like DRS, DPM can also be configured in manual mode, which offers recommendations, and a particular ESX host can be excluded from DPM. To test DPM, it is recommended that you verify that each ESX host can enter and exit standby mode. An extra shutdown button is available for ESX hosts. As with maintenance mode, standby mode can hang while it waits either for the operator to move VMs or for automatic VMotion; if any VMs are still running, the host will not enter standby mode.
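The standby decision described above can be sketched as a small Python snippet. This is a hypothetical illustration: the 20-minute window comes from the text, but the utilisation threshold and the per-minute sample format are invented:

```python
# Hypothetical sketch of the DPM standby rule described above:
# decisions use the last 20 minutes of utilisation history, and a host
# with running VMs is never a candidate. The threshold is invented.

from statistics import mean

HISTORY_WINDOW_MIN = 20   # minutes of history for a standby decision
STANDBY_THRESHOLD = 0.30  # invented utilisation threshold (0.0-1.0)

def should_enter_standby(cpu_history, mem_history, running_vms):
    """A host is a standby candidate only if it has no running VMs and
    its average CPU and memory usage over the window are both low."""
    if running_vms > 0:
        return False  # like maintenance mode, standby waits for evacuation
    recent_cpu = cpu_history[-HISTORY_WINDOW_MIN:]
    recent_mem = mem_history[-HISTORY_WINDOW_MIN:]
    return mean(recent_cpu) < STANDBY_THRESHOLD and mean(recent_mem) < STANDBY_THRESHOLD

print(should_enter_standby([0.1] * 20, [0.2] * 20, running_vms=0))  # True
print(should_enter_standby([0.1] * 20, [0.2] * 20, running_vms=2))  # False
```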

 

Enabling the iSCSI software initiator no longer opens iSCSI port 3260 on the firewall by default. This now has to be done with the command:
esxcfg-firewall -e swISCSIClient

Posted in DPM, DRS, ESX3.5, VMHA | Leave a Comment »

Resource Mgmt Guide -03

Posted by Preetam on March 31, 2007

VMHA & Special Situations:

If you power off a host, the VMs on that host restart on another host.

When you are in the middle of a VMotion and the target or source fails:

  • If the target fails: the VM is powered on on the source
  • If the source fails: the VM is powered on on the target
  • If both the target and source fail: the VM is powered on on a third host

The cluster turns red if the current failover capacity is less than the configured capacity; however, if you turn off strict admission control, the cluster won't turn red.

If the cluster turns red, HA fails over VMs with high priority first, so consider giving high priority to the VMs that are important to your organization.
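The two rules above, the red-cluster condition and priority-first failover, can be sketched as a small illustrative snippet. The function names, return strings, and VM names are invented for the example and are not VMware APIs:

```python
# Illustrative sketch (not VMware code): the red-cluster condition and
# the priority-first failover order described above.

def cluster_state(current_failover_capacity, configured_failover_capacity,
                  strict_admission_control=True):
    """Red when current failover capacity drops below the configured
    capacity, unless strict admission control has been turned off."""
    if current_failover_capacity < configured_failover_capacity:
        return "red" if strict_admission_control else "valid (not red)"
    return "valid"

# HA fails over high-priority VMs first; hypothetical VM names.
vms = [("test01", "low"), ("db01", "high"), ("web01", "medium")]
rank = {"high": 0, "medium": 1, "low": 2}
failover_order = sorted(vms, key=lambda v: rank[v[1]])

print(cluster_state(0, 1))                   # red
print([name for name, _ in failover_order])  # ['db01', 'web01', 'test01']
```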

By default, VMs are shut down on an isolated host; this shutdown is not graceful.

When you add a host to the cluster, the host must communicate with a primary host in the same cluster to complete its configuration. When the first host is unavailable or removed, a secondary host becomes primary. If all hosts are unavailable, you cannot add a host to the cluster; in this situation, you must disconnect all hosts that are not responding before you can add a new host.

When a host is manually disconnected from VirtualCenter, it is no longer counted toward current failover capacity. Because the status of the host is not known and VirtualCenter is not communicating with it, HA cannot use it as a guaranteed failover target. VirtualCenter makes the same decision for a host that is not responding.

 

However, "Host disconnected from VirtualCenter" and "Host not responding" are quite different states.

NB: The VirtualCenter Server tries to reconnect to a managed host if the connection is lost, and you can define how long VirtualCenter tries to re-establish the connection. This feature is not available when the VI Client is connected directly to an ESX Server. "Host not responding" means VirtualCenter no longer receives heartbeats; this could be because the host has failed or because the VirtualCenter agent has crashed. Host failure detection occurs every 15 seconds. If there is no response within the 12-14 second window, the host declares itself isolated. During that same interval the default isolation response is applied to the VMs, which is to shut them down. If the network is restored within the window, the VMs are shut down but not failed over; but if the network is restored before 12 seconds, the host is not considered isolated.
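The timing behaviour above can be laid out as a hypothetical decision table. The 12 and 14 second boundaries come from the text; treating them as exact cutoffs, and the outcome strings, are assumptions for illustration:

```python
# Hypothetical timeline of the isolation window described above.
# Boundaries (12 s, 14 s) are from the text; exact cutoff semantics
# and the outcome labels are invented for illustration.

def isolation_outcome(seconds_until_network_restored):
    if seconds_until_network_restored < 12:
        return "not isolated"
    if seconds_until_network_restored <= 14:
        return "VMs shut down but not failed over"
    return "isolated: VMs shut down and failed over"

print(isolation_outcome(10))  # not isolated
print(isolation_outcome(13))  # VMs shut down but not failed over
print(isolation_outcome(30))  # isolated: VMs shut down and failed over
```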

If the isolated host has SAN access, it retains the disk locks on the VM files, so attempts to fail over the virtual machines to another host fail. VMFS disk locking prevents simultaneous write operations to the virtual machine disk files and potential corruption. This is not true for iSCSI and NAS storage: the host might lose access to its disks and lose the disk locks, even if the network connection is restored later. These VMs may still be generating and consuming network I/O, so it is recommended that you keep the default isolation response unchanged when your storage is on iSCSI or NAS.

If more than one host goes down, the VMs on those hosts restart on other hosts based on the restart priority set on the VMs; this priority is applied per host, depending on which host fails first. The same applies to the isolation response.

A cluster can turn yellow if DRS is overcommitted, and it can turn red if a DRS or HA violation occurs.

The cluster is valid unless something makes it overcommitted or invalid.

  • If a host fails, DRS becomes overcommitted, i.e. the cluster turns yellow

 

E.g., you have a cluster with 12 GHz of resources divided among 3 hosts, and the hosts run VMs consuming 10 GHz divided into three resource pools: RP1 (3 GHz), RP2 (3 GHz), and RP3 (4 GHz). If one of the hosts goes down, the capacity left is 8 GHz, and suppose we also shut down 2 VMs. The cluster can still run the remaining VMs, because their reservations now fit within the 8 GHz of capacity, but the 10 GHz of resource pool reservations can no longer be met. This makes the cluster yellow.
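The arithmetic in the example above, worked through (the GHz figures are from the text; the variable names are invented):

```python
# Working through the example above: a 12 GHz cluster of 3 hosts,
# resource pools reserving 3 + 3 + 4 GHz, and one 4 GHz host failing.

total_capacity_ghz = 12
pool_reservations_ghz = [3, 3, 4]               # RP1, RP2, RP3
capacity_after_failure = total_capacity_ghz - 4  # one host lost

reserved = sum(pool_reservations_ghz)            # 10 GHz of reservations
yellow = reserved > capacity_after_failure       # pool reservations no longer fit
print(capacity_after_failure, reserved, yellow)  # 8 10 True
```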

 

 

When you use a particularly large VM, e.g. one with 8 GB of memory, make sure there is at least one host that can run the VM individually; hosts cannot provide its resources jointly.

A DRS cluster can become invalid if VirtualCenter becomes unavailable and you power on VMs using a VI Client connected directly to an ESX Server host. You can resolve a red DRS cluster by powering off one or more VMs, moving VMs to parts of the tree that have sufficient resources, or editing the resource pool settings.

An HA cluster can become red when the number of powered-on VMs exceeds the failover requirements, i.e. the current failover capacity is smaller than the configured failover capacity.

DRS behaviour is not affected if a cluster is red because of an HA issue.

 

In general, a cluster enabled for HA and DRS must meet VMotion requirements. If the hosts are not on the VMotion network, DRS can still make initial placement recommendations. Each host in the cluster must be able to resolve the host name and IP address of all other hosts in the cluster; to achieve this, set up DNS on each host or fill in /etc/hosts entries. For VMware HA, network redundancy is highly recommended: each host should have two NICs. For VMotion, processors must come from the same vendor (Intel, AMD) and the same processor family to be compatible for migration. In most cases, processor versions within the same family are similar enough to maintain compatibility. In some cases, processor vendors have introduced significant architectural differences within the same processor family (such as 64-bit extensions and SSE3); VMware identifies these exceptions when it cannot guarantee successful migration. VMotion does not currently support raw or undoable VMDKs, or migration of virtual machines clustered using MSCS.

For migration of VMs, you have two options:

  • Drag the VM directly to the cluster object
  • Right-click the VM name and choose Migrate

For a DRS-enabled cluster, migrating directly to a host is not allowed because resource pools control the resources.


Posted in Advance Concepts, DRS, iSCSI, VMHA, VMWare | Leave a Comment »

Resource Mgmt Guide -02

Posted by Preetam on March 31, 2007

If a host is added to the cluster, you can no longer create child resource pools on the host.
While creating resource pools and assigning limits, reservations, and shares, if any value is not valid, you will see a yellow triangle against the resource pool.

E.g., you created a resource pool of 10 GB; in it you create a resource pool of 6 GB, and then you try to create another resource pool of 6 GB with reservation type Fixed: you will get a yellow triangle.
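The admission check behind that yellow triangle can be sketched as follows. This is an illustrative snippet, not VirtualCenter's actual logic; the function name and signature are invented, and the sizes mirror the example above:

```python
# Sketch of the Fixed-reservation admission check: a child pool's
# reservation must fit within the parent's unreserved capacity.
# Function name and units are hypothetical.

def can_create_child(parent_reservation_gb, existing_children_gb, new_child_gb):
    unreserved = parent_reservation_gb - sum(existing_children_gb)
    return new_child_gb <= unreserved

print(can_create_child(10, [6], 6))  # False -> yellow triangle
print(can_create_child(10, [6], 4))  # True -> pool can be created
```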

When you move VMs into a new resource pool, the VMs' existing limits and reservations do not change. A customized shares value is not changed either, but if shares are set to High, Normal, or Low, the % share value changes. The unreserved resource values also change to reflect the newly added reservations. That said, if a VM's reservation cannot be met by the resource pool, the move will fail.

When you move hosts into a cluster, the effect depends on what you choose:

  • DRS enabled
  • DRS disabled

Existing resource pools are affected accordingly. When you enable DRS on the cluster, you have the option of moving the host's entire resource pool tree into the new cluster, or of putting the host's VMs into the cluster's root resource pool. With the second option, the tree structure is flattened: all VMs and resource pools become children of the cluster root instead of the host. If you later move the host out of the cluster, the resource pool hierarchy does not move with it; in short, it becomes completely independent of the host.

If DRS is disabled, all resource pools are deleted and the VMs become direct children of the cluster.

Also, in a non-DRS cluster there is no cluster-wide resource management based on shares; shares remain relative to the host.

You can create a cluster without a special license, but you must have a license to enable a cluster for DRS or HA.

What happens to DRS & HA when Virtual center goes down?

HA – Continues to work and can still restart VMs on other hosts in case of a failure; however, VM-specific information such as cluster properties (priority or isolation response) is based on the state of the cluster before VirtualCenter went down.

DRS – No recommendations are made for resource optimization.

By default, the automation level is set at the cluster level, but you can customize it at the VM level as well. Migration recommendations made by DRS are based on priority and the associated reasons.

When a host is in maintenance mode, it does not allow you to deploy VMs. VMs that are running on the host continue to run; you either migrate them to another host or shut them down. When no VMs are running on the host, the host's icon changes to indicate maintenance mode. If the DRS cluster is in automated mode, all VMs are migrated to different hosts when the host enters maintenance mode. This makes sense: if another host fails while this one is in maintenance mode, HA will not fail VMs over to it, since it does not accept any VMs. Also, VMware HA computes the current failover capacity excluding hosts in maintenance mode; when a host exits maintenance mode, the failover capacity is computed again.

When "Allow virtual machines to be started even if they violate availability constraints" is deselected (i.e. disabled), you will also not be able to:

  • Revert VMs to the last snapshot
  • Change CPU/memory reservations
  • Migrate VMs into the cluster

Posted in Advance Concepts, DRS, iSCSI, Limits, Reservations, Resource Pools, Resources, VMHA, VMWare | Leave a Comment »