Let's Design, Implement and Administer ESX3

Virtualization with VMWare Infrastructure 3.0

Archive for the ‘VMFS’ Category

What are Thin Disks and Thick Disks?

Posted by Preetam on September 13, 2008

An excellent explanation of various storage technologies, especially thin disks, from vmmba.

 There are three main technologies that can accomplish storage oversubscription:

  1. Linked clones
    • This feature is available in VMware Lab Manager and VMware Workstation at the virtual disk level.  When a linked clone is used, the new VM uses pointers to the original VM for all common data.
    • The additional advantage of linked clones is that whitespace is not stored – for example if an empty data disk is part of a clone operation, the new disk will act as a "thin" disk and only consume the storage that it really requires for data
    • Linked clones can also be accomplished at the datastore level using technologies such as NetApp FlexClone (useful when cloning many VMs at once)
    • Keep in mind: linked clones pay a performance penalty on write operations (using copy-on-write), and put added stress on the source disks on read operations
  2. Thin Disks
    • Thin-provisioned disks are virtual disks that "appear" to the VM as one size, but only consume up to the amount of data that is required by that disk.  So, a 10 GB drive that is 50% utilized will only store 5 GB on disk (a traditional "thick" virtual disk would consume the entire 10 GB on disk)
    • Thin disks are an option in VMware Workstation, and are the default disk type when using NFS storage in VMware ESX Server – however, VMs cloned from templates are always thick (see the vmkfstools sketch after this list)
    • Storage vendors such as Hitachi and NetApp have LUN-level thin provisioning, but that would only apply to VMware if using RDMs
  3. Deduplication
    • Deduplication is a technology similar to memory page sharing (above), where common data is stored only once.  It is done "after the fact" (ex post), meaning deduplication opportunities are found by a background scanning process
    • Deduplication is primarily used for backups (e.g. Symantec PureDisk, EMC Avamar, or Quantum DXi-Series), but can also be used on the filesystem itself (today, using NetApp Deduplication, formerly A-SIS)
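
To make the thin/thick difference concrete, here is a rough sketch of creating both disk types from the ESX 3 service console with vmkfstools. The datastore path, folder and sizes are invented for illustration, and the exact -d format names should be verified against your ESX build:

# Thick (zeroedthick) disk: all 10 GB are allocated on the VMFS volume immediately
vmkfstools -c 10g -d zeroedthick /vmfs/volumes/datastore1/demo/demo-thick.vmdk

# Thin disk: space is consumed only as the guest actually writes data
vmkfstools -c 10g -d thin /vmfs/volumes/datastore1/demo/demo-thin.vmdk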

REFERENCE: www.vmmba.com

Posted in Advance Concepts, Storage, VM Provisioning, VMFS, VMWare | 1 Comment »

Workload Distribution and VMFS LUN Sizing Relations

Posted by Preetam on September 24, 2007

Because virtual machines share a common VMFS, it might be difficult to characterize peak‐access periods or optimize performance. You need to plan virtual machine storage access for peak periods, but different applications might have different peak‐access periods. The more virtual machines are sharing a VMFS, the greater the potential for performance degradation due to I/O contention.

Therefore VMware recommends that you load balance virtual machines over servers, CPU, and storage. You should run a mix of virtual machines on each given server so that not all experience high demand in the same area at the same time.

 

However, there is no definitive size recommendation, but a common LUN size is around 300-700 GB. As you can see, the nature of the workload can be an influencing factor in deciding the LUN size. For more information, please have a look at page 37 of the SAN Configuration Guide.

 

When making your LUN decision, keep in mind the following:

  • Each LUN should have the right RAID level and storage characteristic for applications in virtual machines that use it.
  • Each LUN must contain only a single VMFS volume.
  • If multiple virtual machines access the same LUN, use disk shares to prioritize virtual machines.

Posted in Advance Concepts, VMFS, VMWare | Leave a Comment »

MSCS AND VMWARE

Posted by Preetam on April 16, 2007


A few points to remember when you decide to build clustering inside VMs, which might be cluster-in-a-box (CIB) or cluster-across-boxes (CAB).

Virtual machines (cluster nodes) have the following limitations:

  • Only LSI Logic virtual SCSI Card
  • Only VMXnet
  • Only 32-Bit VMs
  • 2-Node Clustering only
  • NIC teaming is not supported
  • iSCSI clustering is not supported
  • Boot from SAN is not supported
  • VMs that are part of a cluster cannot be part of VMware HA & DRS
  • You cannot VMotion VMs that are using clustering software
  • Mixing ESX 2.5 and ESX 3.0 hosts is not supported
  • HBA cards from different manufacturers are not supported
  • When using N+1 clustering, the SCSIport miniport driver (not the Storport miniport driver) must be present on the physical node, and there must be no PowerPath software installed on the physical node.

If you clone VMs with RDMs enabled, the RDMs will be converted into VMDKs. You must zero out the disk that you would like to use as the shared disk; you can also use a mapped SAN LUN, in which case you don't need to use vmkfstools. The disk must map to a SAN LUN, and it is recommended to set the RDM up in physical mode. Upgrade of VMs that are using MSCS is supported only from ESX 2.5.2 to ESX 3.0.
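
For the shared-disk part of this, a hedged example from the service console (the sizes, paths and the vmhba device are made up, and the -d eagerzeroedthick and -z options are as I remember them from the ESX 3.x vmkfstools syntax, so verify with vmkfstools --help):

# Create a pre-zeroed (eager zeroed thick) quorum disk to use as a shared cluster disk
vmkfstools -c 1g -d eagerzeroedthick /vmfs/volumes/datastore1/cluster/quorum.vmdk

# Or map a SAN LUN directly as a physical-mode (pass-through) RDM
vmkfstools -z /vmfs/devices/disks/vmhba1:0:5:0 /vmfs/volumes/datastore1/cluster/quorum-rdm.vmdk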

UPGRADING RDM AND BOOT VOLUME VMFS ON DIFFERENT VOLUMES

  1. Power off the virtual nodes.
  2. Upgrade the volumes from the VI Client.
  3. Power on each node. If you get the error 'Invalid argument, you have misconfigured cluster setup', a virtual disk from ESX 2.x cannot be powered on under ESX 3.0; in this case you need to import the disk using the vmkfstools utility.
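
A minimal sketch of that import, with invented volume and file names (vmkfstools -i clones/imports a virtual disk into the new datastore):

# Import the ESX 2.x virtual disk into a VMFS-3 datastore so ESX 3.0 can power it on
vmkfstools -i /vmfs/volumes/oldvol/node1/node1.vmdk /vmfs/volumes/vmfs3vol/node1/node1.vmdk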

UPGRADING RDM AND BOOT VOLUME ON SAME VMFS VOLUME

  1. Power off the virtual nodes.
  2. Upgrade the volumes from the VI Client. The upgrade from VMFS-2 to VMFS-3 relocates the RDM and the first node's .vmx file; when you then upgrade the second node, it will be unable to locate its VMDK file. Ignore it; it will upgrade the VMDK anyway.
  3. Now manually edit the second node's .vmx file and point it to the new location of the quorum and RDM files.

UPGRADING CLUSTER ACROSS BOX

Using a shared pass-through RDM, the procedure is similar to upgrading on the same VMFS volume.

Using files in shared VMFS-2 volumes
  1. Change the mode of volumes from shared to public
  2. Upgrade ESX server
  3. Upgrade VMFS volume from VI client
  4. Create a LUN for each shared disk
  5. For each shared disk, create an RDM pointing to the respective LUN

e.g. vmkfstools -i oldvmdk.vmdk newrdm.vmdk -d <respective LUN>

  6. Finally, modify the .vmx file for each node to point to the respective new RDMs (a sketch follows below).
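
As a sketch of that last step (all paths and file names here are hypothetical), you can check and then edit the relevant .vmx entries from the service console:

# Hypothetical example: find the shared-disk entries in the second node's .vmx
grep "scsi1" /vmfs/volumes/datastore1/node2/node2.vmx
# then edit the file (vi node2.vmx) so the entry points at the new RDM, e.g.:
#   scsi1:0.fileName = "/vmfs/volumes/datastore1/cluster/quorum-rdm.vmdk"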

This is what I have understood from the document, but honestly it requires practical experience to say any of this with confidence.

Posted in Advance Concepts, Licenses, MSCS, VMFS, VMHA, VMWare | Leave a Comment »

VMFS -VMWARE

Posted by Preetam on March 18, 2007

CONSIDERATIONS WHEN CREATING VMFS

You should always have one VMFS volume per LUN; however, you can have multiple smaller volumes or one large VMFS volume. With ESX Server the minimum VMFS volume size is 1.2 GB, and you can create up to 256 VMFS volumes per system. You can connect up to 32 ESX Servers to a single volume.

Environments where you should go for a larger VMFS volume:

  • When you need more flexibility in creating VMs, and more flexibility for resizing VMDKs and taking snapshots
  • Fewer volumes mean easier management

If you go for smaller VMFS volumes, you get the following advantages:

  • Less wasted storage space
  • Less contention on each VMFS due to locking and SCSI reservation issues
  • More flexibility, as the multipathing policy and disk shares are set per LUN
  • Use of MSCS requires that each cluster disk resource has its own LUN

NB: A best practice would be to configure a few servers with larger VMFS volumes and a few with smaller VMFS volumes.

  • Maximum VMDK file size: 2 TB
  • Maximum file size: 2 TB
  • Block size: 1 MB to 8 MB

When you add a datastore, its name must be unique within the current Virtual Infrastructure instance. Before creating a new datastore on an FC device, rescan the Fibre Channel adapter to discover any newly added LUNs.
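
Both steps can also be done from the service console; a hedged sketch, where the adapter name, LUN path and datastore label are examples only (normally you would just use the Add Storage wizard):

# Rescan an FC adapter for newly presented LUNs
esxcfg-rescan vmhba1

# Create a VMFS-3 file system with a 1 MB block size on partition 1 of that LUN
vmkfstools -C vmfs3 -b 1m -S mydatastore vmhba1:0:3:1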

UPGRADING VMFS 2.0 TO VMFS 3.0

When upgrading to 3.0, the ESX Server file-locking mechanism ensures that no remote ESX Server or local process is accessing the VMFS volume being converted. ESX Server 3.0 supports VMFS-3. VMFS-3 is not backward compatible with earlier versions of ESX Server.

Before you carry out the upgrade process, make sure:
  • You commit or discard any changes to the VMDKs
  • You back up the VMDKs that are to be upgraded
  • No powered-on VM is using the VMFS-2 volume
  • No other ESX Server is accessing the VMFS-2 volume, and it is not mounted on any other ESX Server
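
Before and after the upgrade you can check which VMFS version a volume is running; a small sketch with an invented volume name:

# Query the file system attributes (VMFS version, capacity, block size) of a volume
vmkfstools -P /vmfs/volumes/myvolume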

Posted in Advance Concepts, VMFS, VMWare | Leave a Comment »

STORAGE-Advance Concepts

Posted by Preetam on March 18, 2007

To prepare for the VCP you first need to read the Exam Blueprint available on the VMware site; after going through it you will realize that one should go through:

  1. Basic Administration Guide
  2. Server configuration Guide
  3. Resource Administration

All of the above guides, plus additional guides, are available as VI3 documents in PDF.

Below are the contents from all three guides; they are actually a few important concepts rather than the entire text. This post talks about storage.

STORAGE

TYPES OF STORAGE

  • Local
  • Fibre Channel (FC)
  • iSCSI (hardware initiated)
  • iSCSI (software initiated)
  • NFS (an NFS client is built into ESX Server)

iSCSI

With iSCSI, the SCSI storage commands sent by a VM to its VMDKs are converted into TCP/IP packets and transmitted to a remote device, or target, that stores the virtual disk. iSCSI initiators are responsible for transporting SCSI requests between the ESX Server and the target storage device on the IP network.

There are two types of iSCSI initiators:

1. Software based

2. Hardware based

Software-based iSCSI initiators use code built into the VMkernel, which carries out the transport job. With software initiators, the ESX Server connects to the LAN through an existing NIC using the network stack; in short, you can implement iSCSI without purchasing specialized hardware. You also need to open a firewall port by enabling the iSCSI software client service.
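
A hedged sketch of those service-console steps (the service and option names are as I remember them from ESX 3.x, so double-check them against esxcfg-firewall -s and the man pages):

# Open the firewall port for the software iSCSI client
esxcfg-firewall -e swISCSIClient

# Enable the software iSCSI initiator and rescan for targets
esxcfg-swiscsi -e
esxcfg-swiscsi -s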

Hardware-based iSCSI initiators require HBA cards that are specialized to transport iSCSI commands over the LAN to the target. Currently ESX Server supports only the QLogic QLA4010 iSCSI HBA.

NB: ESX 3.0 does not support both types of initiators on a single system.

Naming requirements:

IQN (iSCSI qualified name)

e.g. iqn.1998-01.com.mycompany:myserver

Format Template: iqn.<year-mo>.<reversed_domain_name>:<unique_name>

Discovery methods

The initiator discovers iSCSI targets by sending a SendTargets request to a specific target address.

Static: Only available for hardware-based iSCSI initiators; you can manually add additional targets or remove unneeded targets. If you remove a static target that was added by dynamic discovery, the target can be returned to the list the next time a rescan happens, the HBA is reset, or the system is rebooted.

Dynamic: To use this method, enter the address of the target device so that the initiator can establish a discovery session with this target. The target device then responds by forwarding a list of additional targets that the initiator is allowed to access.


iSCSI Security

Since iSCSI communication between initiator and target happens over the TCP/IP stack, it is necessary to ensure the security of the connection. ESX Server supports CHAP, which iSCSI initiators can use for authentication purposes.

You can't store VMs on IDE or SATA storage; only on SCSI, NAS, or FC storage.

VMs communicate with the datastore (where the VMDK is placed) using SCSI commands; the SCSI commands are encapsulated into various protocols, e.g. FC, iSCSI, or NFS, depending on the type of physical storage.

HBA naming convention: vmhba1:1:3:1 means HBA card 1, storage processor (target) 1, LUN 3, and partition 1. The first two numbers can change (for example after a path failover), but the LUN and partition numbers remain unchanged.

Select a large LUN if you plan to create multiple virtual machines on it. If more space is needed, you can increase the VMFS volume at any time, up to 64 TB.

Posted in Networking, VMFS, VMWare | Leave a Comment »

VMFS

Posted by Preetam on February 16, 2007


The VMware File System (VMFS) is a file system optimized for storing VMs. A virtual disk stored on a VMFS volume always appears to the virtual machine as a mounted SCSI device. A VMFS datastore is also used to store ISO images and templates.

VMFS volumes are accessible in the service console under the /vmfs/volumes directory.
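
For example, from the service console you can see each datastore label as a symbolic link to its UUID directory (the datastore names shown are just examples):

# List datastores; labels such as datastore1 or mynfs point to UUID-named directories
ls -l /vmfs/volumes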

To create a VMFS datastore:

Configuration tab -> Hardware -> Storage (SCSI, SAN and NFS) -> Add Storage


Adding extents to a datastore

A datastore can span up to 32 physical storage extents. You generally wish to add an extent when VMs need more space or you need to create more space.

To add one or more extents to the datastore:

Configuration -> Storage -> Properties -> Volume Properties -> Extents


Select the disk that you want to add as an extent and click Next.

If the disk or partition you add was formatted previously, it will be reformatted and will lose the file system and any data it contained. You have the option to decide how much of the disk space to utilize.


To remove an extent you will have to delete the entire VMFS volume. To remove a VMFS volume, select it and click Remove; make sure there are no running VMs on it. Removing a datastore from the ESX Server breaks the connection between the system and the storage device that holds the datastore and stops all functions of that storage device.

Managing Paths for Fibre Channel and iSCSI

ESX Server supports multipathing to maintain a constant connection between the server machine and the storage device in case of the failure of an HBA, switch, storage processor (SP), or cable. Multipathing support does not require specific failover drivers.

To support path switching, the server typically has two or more HBAs available, from which the storage array can be reached using one or more switches. Alternatively, the setup could include one HBA and two storage processors so that the HBA can use a different path to reach the disk array.

By default, ESX Server systems use only one path from the host to a given LUN at any given time. If the path being used by the ESX Server system fails, the server selects another of the available paths. The process of detecting a failed path and switching to another is called path failover. A path fails if any of the components—HBA, cable, switch port, or storage processor—along the path fails.


The process of one HBA taking over for another is called HBA failover. The process of one SP taking over for another is called SP failover. VMware ESX Server supports both HBA and SP failover with its multipathing capability.

Setting multipathing policies for LUNs

MRU (Most Recently Used) [default]: once a failover occurs, the host does not automatically fail back to the previous path. Recommended for active/passive storage devices.

Fixed: the ESX Server will always try to use the preferred path. Recommended for active/active storage devices.
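
From the service console you can at least inspect the current paths and policies; the policy-setting form below is from memory of the ESX 3 syntax, so treat it as a sketch and verify it with esxcfg-mpath --help first:

# List all paths, their state and the current policy for each LUN
esxcfg-mpath -l

# Possible form for changing the policy of one LUN (verify the exact flags first):
# esxcfg-mpath --policy=mru --lun=vmhba1:0:3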


The ESX Server host automatically sets the multipathing policy according to the make and model of the array it detects. If the detected array is not supported, it is treated as active/active.

NAS and NFS

NAS is a specialised storage device that connects to a network and can provide file-level access services to an ESX Server. VMware supports only NFS for accessing file systems over the network.

NAS is low cost and requires less infrastructure investment than FC. NFS volumes are treated just like VMFS volumes; they can hold ISOs, templates, and VMs. ESX Server supports:

– VMotion

– Create VM

– Boot virtual Machines

– Mount ISO files

– Create virtual machine snapshots on NFS mounted volumes. The snapshot feature lets you preserve the state of the virtual machine so you can return to the same state repeatedly.

The NFS client built into ESX Server lets us access an NFS server and use NFS volumes for storing VMs.
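
A hedged example of adding such an NFS datastore from the service console (the server address, export path and datastore label are made up):

# Mount an NFS export as a datastore named nfs-ds1
esxcfg-nas -a -o 192.168.10.5 -s /export/vmstore nfs-ds1

# List currently configured NAS datastores
esxcfg-nas -l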


When ESX Server accesses a virtual machine disk file on an NFS-based datastore, a special .lck-XXX lock file is generated in the same directory where the disk file resides to prevent other ESX Server hosts from accessing this virtual disk file. Don’t remove the .lck-XXX lock file, otherwise the running virtual machine will not be able to access its virtual disk file.

NFS and Permissions

The ESX Server must be configured with a VMkernel port defined on a virtual switch. The VMkernel port must be able to access the NFS server over the network.

/etc/exports defines the systems allowed to access the shared directory. The options used in this file are:

  • Name of the directory to be shared
  • Subnet allowed to access the share
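
For illustration only, a hypothetical /etc/exports line on the NFS server (the directory and subnet are invented; note the no_root_squash option, which relates to the root squash discussion below):

# /etc/exports on the NFS server: share /export/vmstore with one subnet, root squash disabled
/export/vmstore 192.168.10.0/24(rw,no_root_squash,sync)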

The root squash feature maps root to a user with no significant privileges on the NFS server, limiting the root user’s abilities. This feature is commonly used to prevent unauthorized access to files on an NFS volume. If the NFS volume was exported with root squash enabled, the NFS server might refuse access to the ESX Server host. To ensure that you can create and manage virtual machines from your host, the NFS administrator must turn off the root squash feature or add the ESX Server host’s physical network adapter to the list of trusted servers

If the NFS administrator is unwilling to take either of these actions, you can change the delegate user to a different identity through experimental ESX Server functionality. This identity must match the owner of the directory on the NFS server otherwise the ESX Server host will be unable to perform file level operations. To set up a different identity for the delegate user, acquire the following information:

• User name of the directory owner

• User ID (UID) of the directory owner

• Group ID (GID) of the directory owner

The delegate user is configured globally, and the same identity is used to access every volume.

Setting up the delegate user on an ESX Server host requires that you complete these activities:

• From the Users & Groups tab for a VI Client running directly on the ESX Server host, either:

• Edit the user named vimuser to add the correct UID and GID. vimuser is an ESX Server host user provided to you as a convenience for setting up delegate users. By default, vimuser has a UID of 12 and a GID of 20.

• Add a completely new user to the ESX Server host with the delegate user name, UID, and GID.

You must perform one of these steps regardless of whether you manage the host through a direct connection or through the VirtualCenter Server. Also, you need to make sure that the delegate user (vimuser or a delegate user you create) is identical across all ESX Server hosts that use the NFS datastore.

To change the virtual machine delegate:

  1. Log on to the VI Client through the ESX Server host.
  2. Select the server from the inventory panel. The hardware configuration page for this server appears with the Summary tab displayed.
  3. Click Enter Maintenance Mode.
  4. Click the Configuration tab and click Security Profile.
  5. Click Virtual Machine Delegate > Edit to open the Virtual Machine Delegate dialog box, and enter the user name for the delegate user.
  6. Click OK.
  7. Reboot the ESX Server host.

After you reboot the host, the delegate user setting is visible in both VirtualCenter and the VI Client running directly on the ESX Server host.

Before you begin to access an NFS datastore, you have to create a VMkernel port manually. The VMkernel port can be created on an existing virtual switch or as a new connection on a new virtual switch.
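
A hedged sketch of creating that VMkernel port from the service console (the switch, uplink, port group name, and IP details are all examples; the VI Client networking page does the same thing):

# Create a virtual switch, attach an uplink, and add a port group for the VMkernel port
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -A VMkernel vSwitch1

# Create the VMkernel NIC on that port group with an IP address and netmask
esxcfg-vmknic -a -i 192.168.10.20 -n 255.255.255.0 VMkernel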

Posted in Multi-Path Policy, VMFS, VMWare | Leave a Comment »