Let's Design, Implement and Administer ESX 3

Virtualization with VMWare Infrastructure 3.0


STORAGE-02 VMWARE

Posted by Preetam on March 18, 2007

The ESX Server host does not typically perform I/O load balancing across paths for a given storage device. At any given time, only a single path is used; this is called the active path. The ESX Server host automatically sets the multipathing policy according to the make and model of the array it detects. If the detected array is not supported, it is treated as active/active.

Manually changing MRU to Fixed is not recommended. If you are using the Fixed policy, you can identify the preferred path by its asterisk mark.

It is recommended to use the Fixed policy when the storage processors (SPs) are active/active; MRU should be used when the SPs are in active/passive mode.

An RDM (Raw Device Mapping) is a special mapping file in a VMFS volume that manages metadata for its mapped device. The mapping file has a .vmdk extension, but it contains only disk information describing the mapping to the LUN on the ESX Server system.

Benefits of Raw Device Mapping (RDM)

  • User-friendly persistent name
  • Dynamic name resolution
  • Distributed file locking: distributed locking on an RDM makes it safe for two VMs to access the same LUN through a shared raw SCSI device without losing data.
  • File permissions
  • File system operations
  • Snapshots
  • Vmotion

An RDM can operate in two modes: physical mode and virtual mode.

In physical mode, the VMkernel passes all SCSI commands through to the device; the exception is the REPORT LUNS command, which is virtualized so the VMkernel can isolate the LUN to the owning VM.

All mapped LUNs are uniquely identified by VMFS. An RDM lets you give a permanent, user-friendly name to a device instead of relying on the vmhba path name, which is relative to the first visible LUN and encodes the initiator (HBA), storage processor, and LUN; an HBA swap or FC failure can therefore change the vmhba name. Dynamic name resolution compensates for such changes.

Key contents of the metadata in the mapping file include the location of the mapped device (name resolution) and the locking state of the mapped device.

vmkfstools can be used to manage RDMs from the service console; typical operations are querying mapping information, creating a mapping file, and importing or exporting a virtual disk.
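
For example, here is a sketch of typical vmkfstools invocations, assuming ESX 3.x syntax (the device path vmhba1:0:3:0 and the datastore, directory, and file names are hypothetical):

    # Create a virtual compatibility mode RDM mapping file for a raw LUN
    vmkfstools -r /vmfs/devices/disks/vmhba1:0:3:0 /vmfs/volumes/storage1/vm1/rdm1.vmdk

    # Create a physical compatibility (pass-through) RDM instead
    vmkfstools -z /vmfs/devices/disks/vmhba1:0:3:0 /vmfs/volumes/storage1/vm1/rdm1p.vmdk

    # Query the mapping information of an existing RDM
    vmkfstools -q /vmfs/volumes/storage1/vm1/rdm1.vmdk

    # Import (clone) a virtual disk onto a VMFS volume
    vmkfstools -i /vmfs/volumes/storage1/vm1/disk.vmdk /vmfs/volumes/storage1/vm2/copy.vmdk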



VMFS

Posted by Preetam on February 16, 2007


VMFS (the VMware file system) is a file system optimized for storing virtual machines. A virtual disk stored on a VMFS volume always appears to the virtual machine as a mounted SCSI device. A VMFS datastore can also be used to store ISO images and templates.

VMFS volumes are accessible in the service console under the /vmfs/volumes directory.
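
For instance, you can browse the mounted datastores from the service console (the datastore label storage1 below is hypothetical):

    # List all mounted datastores; each appears by label and by UUID
    ls -l /vmfs/volumes

    # Browse the contents of a particular datastore
    ls -l /vmfs/volumes/storage1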

To create a VMFS datastore:

Configuration tab -> Hardware -> Storage (SCSI, SAN, and NFS) -> Add Storage


Adding extents to a datastore

A datastore can span up to 32 physical disks (extents). You generally add an extent when VMs need more space or when you need to create more space.

To add one or more extents to the datastore:

Configuration -> Storage -> Properties -> Volume Properties -> Extents


Select the disk you want to add as an extent and click Next.

If the disk or partition you add was formatted previously, it will be reformatted and will lose its file system and any data it contained. You have the option to decide how much of the disk space to utilize.


To remove an extent you have to delete the entire VMFS datastore. To remove a VMFS datastore, select it and click Remove, making sure there are no running VMs on it. Removing a datastore from the ESX Server host breaks the connection between the system and the storage device that holds the datastore and stops all functions of that storage device.

Managing Paths for Fibre Channel and iSCSI

ESX Server supports multipathing to maintain a constant connection between the server machine and the storage device in case of the failure of an HBA, switch, storage processor (SP), or cable. Multipathing support does not require specific failover drivers.

To support path switching, the server typically has two or more HBAs available, from which the storage array can be reached using one or more switches. Alternatively, the setup could include one HBA and two storage processors so that the HBA can use a different path to reach the disk array.

By default, ESX Server systems use only one path from the host to a given LUN at any given time. If the path being used by the ESX Server system fails, the server selects another of the available paths. The process of detecting a failed path and switching to another is called path failover. A path fails if any of the components—HBA, cable, switch port, or storage processor—along the path fails.
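
From the service console, the available paths and the current policy for each LUN can be listed with esxcfg-mpath (a sketch assuming the ESX 3.x command; the output format varies by release):

    # List every LUN together with its paths, policy, and path states
    esxcfg-mpath -l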


The process of one HBA taking over for another is called HBA failover. Similarly, the process of one SP taking over for another is called SP failover. VMware ESX Server supports both HBA and SP failover with its multipathing capability.

Setting Multipathing Policies for LUNs

MRU (Most Recently Used): the default. Once a failover occurs, the host does not automatically fail back to the original path. Recommended for active/passive storage arrays.

Fixed: the ESX Server host always tries to use the preferred path. Recommended for active/active storage arrays.


The ESX Server host automatically sets the multipathing policy according to the make and model of the array it detects. If the detected array is not supported, it is treated as active/active.
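
To change the policy manually from the service console, something like the following should work (a sketch assuming ESX 3.x esxcfg-mpath syntax; the LUN and path names are hypothetical, and the exact flags can vary between releases, so verify with esxcfg-mpath --help):

    # Set the Fixed policy on a LUN (for an active/active array)
    esxcfg-mpath --policy fixed --lun vmhba1:0:1

    # Designate the preferred path for that LUN
    esxcfg-mpath --preferred --path vmhba2:0:1 --lun vmhba1:0:1

    # Set MRU on a LUN behind an active/passive array
    esxcfg-mpath --policy mru --lun vmhba1:0:2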

NAS and NFS

NAS is a specialized storage device that connects to a network and can provide file-level access services to an ESX Server host. VMware supports only NFS for accessing file systems over the network.

NAS is lower cost and requires less infrastructure investment than Fibre Channel. NFS volumes are treated just like VMFS volumes: they can hold ISO images, templates, and VMs. On NFS-mounted volumes, ESX Server supports:

– VMotion

– Create VM

– Boot virtual machines

– Mount ISO files

– Create virtual machine snapshots. The snapshot feature lets you preserve the state of the virtual machine so you can return to the same state repeatedly.

The NFS client built into ESX Server lets you access an NFS server and use NFS volumes for storing VMs.
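
An NFS datastore can be mounted from the service console with esxcfg-nas (a sketch assuming ESX 3.x syntax; the server address, share path, and datastore label are hypothetical):

    # Mount the NFS export /vmfs_share from 192.168.10.5 as datastore "nfs-store1"
    esxcfg-nas -a -o 192.168.10.5 -s /vmfs_share nfs-store1

    # List the currently configured NFS datastores
    esxcfg-nas -l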


When ESX Server accesses a virtual machine disk file on an NFS-based datastore, a special .lck-XXX lock file is generated in the same directory where the disk file resides to prevent other ESX Server hosts from accessing this virtual disk file. Do not remove the .lck-XXX lock file; otherwise, the running virtual machine will not be able to access its virtual disk file.

NFS and Permissions

The ESX Server host must be configured with a VMkernel port defined on a virtual switch, and the VMkernel port must be able to reach the NFS server over the network.

/etc/exports on the NFS server defines the systems allowed to access the shared directory. The options used in this file are:

– Name of the directory to be shared

– Subnet allowed to access the share

The root squash feature maps root to a user with no significant privileges on the NFS server, limiting the root user's abilities. This feature is commonly used to prevent unauthorized access to files on an NFS volume. If the NFS volume was exported with root squash enabled, the NFS server might refuse access to the ESX Server host. To ensure that you can create and manage virtual machines from your host, the NFS administrator must turn off the root squash feature or add the ESX Server host's physical network adapter to the list of trusted servers.
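
For example, a hypothetical /etc/exports entry on the NFS server that shares a directory with root squash turned off (the path and subnet are made up):

    # Export /vmfs_share read-write to the 192.168.10.0/24 subnet,
    # with no_root_squash so the ESX host's root user retains its privileges
    /vmfs_share 192.168.10.0/255.255.255.0(rw,no_root_squash,sync)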

If the NFS administrator is unwilling to take either of these actions, you can change the delegate user to a different identity through experimental ESX Server functionality. This identity must match the owner of the directory on the NFS server; otherwise, the ESX Server host will be unable to perform file-level operations. To set up a different identity for the delegate user, acquire the following information:

• User name of the directory owner

• User ID (UID) of the directory owner

• Group ID (GID) of the directory owner

The delegate user is configured globally, and the same identity is used to access every volume.

Setting up the delegate user on an ESX Server host requires that you complete these activities:

• From the Users & Groups tab for a VI Client running directly on the ESX Server host, either:

• Edit the user named vimuser to add the correct UID and GID. vimuser is an ESX Server host user provided to you as a convenience for setting up delegate users. By default, vimuser has a UID of 12 and a GID of 20.

• Add a completely new user to the ESX Server host with the delegate user name, UID, and GID.

You must perform one of these steps regardless of whether you manage the host through a direct connection or through the VirtualCenter Server. Also, you need to make sure that the delegate user (vimuser or a delegate user you create) is identical across all ESX Server hosts that use the NFS datastore.
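
Since the service console is Linux-based, the UID and GID values can be checked with standard commands (a sketch; the export path is hypothetical):

    # On the NFS server: show the numeric UID/GID of the directory owner
    ls -lnd /vmfs_share

    # On the ESX Server host: confirm the delegate user's UID and GID
    id vimuser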

To change the virtual machine delegate

1. Log on to the VI Client through the ESX Server host.

2. Select the server from the inventory panel. The hardware configuration page for this server appears with the Summary tab displayed.

3. Click Enter Maintenance Mode.

4. Click the Configuration tab and click Security Profile.

5. Click Virtual Machine Delegate > Edit to open the Virtual Machine Delegate dialog box.

6. Enter the user name for the delegate user.

7. Click OK.

8. Reboot the ESX Server host.

After you reboot the host, the delegate user setting is visible in both VirtualCenter and the VI Client running directly on the ESX Server host.

Before you can access an NFS datastore, you have to create a VMkernel port manually. A VMkernel port can be created on an existing virtual switch or as a new connection on a new virtual switch.
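
From the service console, the VMkernel port can be created with the esxcfg utilities (a sketch assuming ESX 3.x syntax; the switch name, port group label, and addresses are hypothetical):

    # Add a port group named "VMkernel" to an existing virtual switch
    esxcfg-vswitch -A VMkernel vSwitch0

    # Create the VMkernel NIC on that port group with an IP address and netmask
    esxcfg-vmknic -a -i 192.168.10.20 -n 255.255.255.0 VMkernel

    # Set the VMkernel default gateway (needed if the NFS server is on another subnet)
    esxcfg-route 192.168.10.1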
