Monday, October 24, 2011

ESXi - Storage

One of the most important areas of ESXi to understand is storage. Although large companies have dedicated Storage Administrators to look after storage, a VMware administrator still has a great deal to understand and manage in this area.

Depending on the type of storage used, datastores can use one of two file systems: VMFS or NFS. Datastores are logical containers, analogous to file systems, that hide the specifics of each storage device and provide a uniform model for storing virtual machine files. Datastores can also be used for storing ISO images, VM templates and so on. VMware ESX/ESXi hosts in the VMware vSphere environment support several storage technologies: locally attached storage, Fibre Channel, iSCSI and NAS (Network Attached Storage). Let us understand datastores, VMFS and NFS in more detail before proceeding further.

DATASTORES

A datastore is a logical storage unit that can use disk space on one physical device or one disk partition, or can span several physical devices. A VM is stored as a set of files in its own directory in the datastore. The datastore can be a VMFS or NFS datastore. Datastores can be used to store ISO images, floppy images, virtual machines and templates. A VMFS datastore can also hold a Raw Device Mapping (RDM), which a VM uses to access its data.

RDM (Raw Device Mapping) : For VMs running on an ESX/ESXi host, instead of storing VM data in a virtual disk file, you can store the data directly on a raw LUN. Storing the data this way is useful if you are running applications in your VMs that must know the physical characteristics of the storage device. Mapping a raw LUN also allows you to use existing SAN commands to manage storage for the disk. The RDM is a special file in a VMFS datastore that acts as a proxy for a raw LUN: it maps a file in a VMFS datastore to a raw volume. A VM then references the RDM, which in turn points to the raw volume holding the VM's data. Raw device mapping is recommended when it is essential that the VM interact with a real disk on the SAN, e.g. when you take disk array snapshots or when you have a large amount of data that you do not want to move onto a virtual disk. When there is a requirement to cluster VMs using MSCS, Raw Device Mapping is also very important.
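
For illustration, here is a minimal sketch using the pyVmomi Python bindings showing how an RDM is attached to a VM as a disk device. The LUN device path, controller key, unit number and size are placeholders, and the connection code is omitted; treat it as a sketch, not a finished script.

    from pyVmomi import vim

    def build_rdm_disk_spec(lun_device_path, controller_key, unit_number, size_kb):
        # Backing that points the virtual disk at a raw LUN instead of a VMDK
        backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo()
        backing.fileName = ""                       # mapping file is created in the VM's directory
        backing.deviceName = lun_device_path        # e.g. a /vmfs/devices/disks/... path
        backing.compatibilityMode = "physicalMode"  # or "virtualMode"
        backing.diskMode = "independent_persistent"

        disk = vim.vm.device.VirtualDisk()
        disk.backing = backing
        disk.controllerKey = controller_key
        disk.unitNumber = unit_number
        disk.capacityInKB = size_kb

        spec = vim.vm.device.VirtualDeviceSpec()
        spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
        spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
        spec.device = disk
        return spec

    # The spec would then be applied with
    # vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[spec]))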


Virtual Machine File System (VMFS) : A VMFS datastore is created by assigning unpartitioned disk space to it; once the space is assigned, the ESXi host automatically discovers the VMFS volume. This datastore is a clustered file system that allows multiple physical servers to read from and write to the same storage device simultaneously. The clustered file system enables unique virtualization-based services, which include:


Live migration of running VMs across physical servers.


Automatic restart of a failed VM on another physical server.


Clustering of VMs across different physical servers.


VMFS allows IT organisations to greatly simplify virtual machine provisioning by efficiently storing the entire machine state in a central location. VMFS allows multiple ESX/ESXi hosts to concurrently access shared virtual machine storage. The size of a VMFS datastore can be increased dynamically while VMs residing on it are powered on. A VMFS datastore efficiently stores both large and small files belonging to a VM: it can be configured with an 8 MB block size to support large virtual disk files up to 2 TB in size, and it uses sub-block addressing to make efficient use of storage for small files. VMFS provides block-level, distributed locking to ensure that the same virtual machine is not powered on by multiple servers at the same time. If a physical server fails, the on-disk lock for each virtual machine can be released so that virtual machines can be restarted on other physical servers. VMFS can be deployed on locally attached storage, Fibre Channel SAN or iSCSI storage, and the virtual disks stored on it appear to the VM as mounted SCSI devices. The virtual disk hides the physical storage layer from the virtual machine's OS. For the OS on a VM, VMFS preserves the internal file system semantics, which ensures correct application behaviour and data integrity for applications running in VMs.

To view existing datastores, log in to the host using the VMware vSphere Client and click Configuration >> Storage. Click on any datastore whose details are needed; it will show the volume label (datastore name), device, capacity, free space, file system and so on. To add another datastore, click Add Storage from here, leave the default option Disk/LUN selected and click Next, which brings you to a screen listing unassigned LUNs. Select the required storage and click Next >> review the layout and click Next >> give it a suitable name and click Next >> this is the step where you choose an appropriate block size: a virtual disk up to 256 GB requires a 1 MB block size, up to 512 GB a 2 MB block size, up to 1 TB a 4 MB block size, and up to 2 TB an 8 MB block size >> click Finish. To understand block size in relation to disk size in more detail, please refer to http://www.vmware.com/pdf/vi3_301_201_config_max.pdf . Though this PDF covers ESX 3.x hosts and earlier, it is still worth a look.
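
As a cross-check against what the vSphere Client shows under Configuration >> Storage, here is a minimal sketch using the pyVmomi Python bindings (the host name and credentials below are placeholders) that prints each datastore with its type, capacity and free space:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect

    context = ssl._create_unverified_context()   # lab use only; verify certificates in production
    si = SmartConnect(host="esxi01.example.com", user="root",
                      pwd="password", sslContext=context)
    try:
        content = si.RetrieveContent()
        gb = 1024 ** 3
        for dc in content.rootFolder.childEntity:
            for ds in dc.datastore:
                s = ds.summary
                print("%s: type=%s capacity=%.1f GB free=%.1f GB"
                      % (s.name, s.type, s.capacity / gb, s.freeSpace / gb))
    finally:
        Disconnect(si)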


LOCAL AND SHARED STORAGE


VMware ESX and ESXi themselves are installed on local storage, which is also well suited to small environments. When all VMs are located on local storage, management becomes easier. Shared storage, on the other hand, is used for:


>vMotion.


>Fast central repository for virtual machine templates.


>Recovery of VMs on another host in case of a host failure.


>Clustering of VMs across hosts.


>Allocating large amounts of storage.


IP Storage


ESX and ESXi hosts support two types of IP storage: iSCSI and NAS. iSCSI is used to hold one or more VMFS datastores, whereas NAS is used to hold one or more NFS datastores. Both are used to hold virtual machines, ISO images and templates, and features such as vMotion, HA and DRS are supported on these datastores. ESX and ESXi hosts support up to 64 NFS datastores, and both iSCSI and NAS can run over 10 Gbps Ethernet, which provides increased storage performance.
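
As an illustration, the sketch below mounts an NFS export as a datastore through the pyVmomi bindings, the API equivalent of the Add Storage wizard's Network File System option. The server name, export path and datastore name are placeholders, and host_system is assumed to be a vim.HostSystem already retrieved from the inventory.

    from pyVmomi import vim

    def mount_nfs_datastore(host_system, nfs_server, remote_path, ds_name):
        # Build the NAS volume specification the host uses to mount the export
        spec = vim.host.NasVolume.Specification()
        spec.remoteHost = nfs_server      # e.g. "nas01.example.com"
        spec.remotePath = remote_path     # e.g. "/exports/vm_datastore"
        spec.localPath = ds_name          # datastore name as seen by the host
        spec.accessMode = "readWrite"
        return host_system.configManager.datastoreSystem.CreateNasDatastore(spec)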


iSCSI Components


Let us take a scenario where an iSCSI SAN storage system has more than one LUN and two storage processors. Communication between the host and the storage happens over a TCP/IP network. The components for this scenario are as follows:


1. iSCSI storage, consisting of a number of physical disks presented as LUNs, with storage processors connected to the TCP/IP network.


2. TCP/IP Network


3. Physical servers with software or hardware iSCSI initiators (HBAs) connected to the TCP/IP network.


An iSCSI initiator transmits SCSI commands over the IP network. A target receives SCSI commands from the IP network. You can have multiple initiators and targets in your iSCSI network. iSCSI is SAN-oriented: the initiator discovers one or more targets, a target presents LUNs to the initiator, and the initiator sends SCSI commands to them. An initiator resides in the ESX/ESXi host. LUN masking is also available in iSCSI and works as it does in Fibre Channel. Ethernet switches do not implement zoning like Fibre Channel switches; instead, you can use VLANs to create zones.
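
To see the initiators on a host programmatically, here is a minimal pyVmomi sketch (host_system is assumed to be a vim.HostSystem already retrieved from the inventory) that lists each iSCSI HBA with its IQN:

    from pyVmomi import vim

    def list_iscsi_initiators(host_system):
        # Walk all HBAs on the host and keep only the iSCSI ones
        for hba in host_system.config.storageDevice.hostBusAdapter:
            if isinstance(hba, vim.host.InternetScsiHba):
                print("%s: model=%s iqn=%s" % (hba.device, hba.model, hba.iScsiName))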


Let us discuss the two types of iSCSI initiator: software and hardware. The software iSCSI initiator is VMware code built into the VMkernel. It allows the host to connect to the iSCSI storage device through standard network adapters. The software iSCSI initiator handles iSCSI processing while communicating with the network adapter. With the software iSCSI initiator, you can use iSCSI technology without purchasing specialised hardware.
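
Enabling the software iSCSI initiator can also be done through the API; a minimal sketch (again assuming host_system is a vim.HostSystem) follows:

    def enable_software_iscsi(host_system):
        storage_system = host_system.configManager.storageSystem
        # Turn the software iSCSI initiator on only if it is not already enabled
        if not storage_system.storageDeviceInfo.softwareInternetScsiEnabled:
            storage_system.UpdateSoftwareInternetScsiEnabled(True)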


Hardware iSCSI - A hardware iSCSI initiator is a specialised third-party adapter capable of accessing iSCSI storage over TCP/IP. Hardware iSCSI initiators are divided further into two categories: dependent and independent hardware. A dependent hardware iSCSI initiator or adapter depends on VMware networking and on the iSCSI configuration and management interfaces provided by VMware. This type of adapter, such as a Broadcom 5709 NIC, presents a standard network adapter and iSCSI offload functionality on the same port. To make this type of adapter functional, you must set up networking for the iSCSI traffic and bind the adapter to an appropriate VMkernel iSCSI port.


An independent hardware iSCSI adapter handles all iSCSI and network processing and management for the host. The QLogic 4062C is an example of an independent hardware iSCSI adapter.
