
Re: Esxi 5 - Single Disk or Multiple VDisk for SBS2011?


Just adding to the discussion a little...

 

From a performance point of view, there is no difference to ESXi whether the VM has 3 VMDKs (one per guest drive) or 1 VMDK with all 3 drives as partitions. Storage IO requests are mapped to a VMDK file, and it makes no real difference whether that's one file or three in your example. However, from a clean-design and flexibility point of view, there are some advantages to following the simple practice of one OS volume presented per VMDK.

 

The flexibility is that you can later detach a VMDK file from one VM and re-attach it to another - think about migrating a data volume to a new server OS in a new VM, for example. You can also easily migrate data from one storage area to another, one volume at a time. Say you need to move the second (D:) volume to a new, faster storage device: you can do that without also moving the other volumes, because they are maintained separately. You can argue that this is a standalone box, that it starts that way and will always stay that way - but for consistency across all the systems you manage, adopt a basic strategy and stick to it. You'll eliminate surprises when things go wrong, because you always do things the same way and you're used to it.
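
As a sketch of that per-volume migration, here's roughly how you might clone a single data VMDK to a faster datastore with vmkfstools from the ESXi shell (wrapped in Python purely for illustration; the datastore and VM paths are hypothetical, and you'd still need to re-point the VM's disk to the new file and clean up the old one afterwards):

```python
import subprocess

# Hypothetical paths - substitute your own datastore and VM names.
SRC = "/vmfs/volumes/slow-datastore/SBS2011/SBS2011_1.vmdk"  # the D: data volume
DST = "/vmfs/volumes/fast-datastore/SBS2011/SBS2011_1.vmdk"

def clone_vmdk(src: str, dst: str) -> None:
    """Clone a single VMDK to another datastore as a thin disk.

    Because each OS volume lives in its own VMDK, only this one file
    has to move - the other volumes stay where they are.
    """
    subprocess.run(["vmkfstools", "-i", src, dst, "-d", "thin"], check=True)

if __name__ == "__main__":
    clone_vmdk(SRC, DST)
```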

 

From a performance point of view, keep in mind that each VMDK is essentially presented to the guest VM as a spindle or a LUN. It's seen as a disk, and as such it has a command queue and limitations within the driver stack. If you have a single VMDK holding all 3 volumes, then IO to those 3 OS volumes shares the command queue on that one LUN. If instead you have 3 VMDKs (one per volume), then each volume has its own queue, and the sharing or balancing is passed down to the ESXi host to deal with (where applicable). The command queue limits at the VMFS level are capable of much more than the guest-OS level queues, so you can maximize your IO to the volumes in this manner.
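
A rough back-of-envelope illustration of that queue argument (the per-device queue depth here is an assumption for a common virtual SCSI adapter, not a measured value - check your own adapter and driver defaults):

```python
# Assumed default per-device command queue depth (illustrative only;
# verify against your virtual SCSI adapter, e.g. LSI Logic vs PVSCSI).
QUEUE_DEPTH_PER_VDISK = 32

def max_outstanding_io(num_vmdks: int, depth: int = QUEUE_DEPTH_PER_VDISK) -> int:
    """Aggregate outstanding IOs the guest can have in flight."""
    return num_vmdks * depth

# One VMDK holding C:, D: and E: - all three volumes share one queue.
print(max_outstanding_io(1))  # 32
# Three VMDKs, one per volume - each volume gets its own queue.
print(max_outstanding_io(3))  # 96
```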

 

So to that end, I would recommend 1 OS Volume per VMDK as a minimum plan of attack.

 

As for provisioning, these days there is next to no performance impact for general-use disks between thin- and thick-provisioned disks. There was a 'best practice' going around at some point that you should use thin-provisioned VMDKs sparingly and always use thick for DBs and the like - but recently at a course I heard 'thin, thin, thin - all the time, everywhere'. I'm a fan of thin-provisioned VMDKs because they offer the best option in terms of initial allocation overhead - and, with a little care, no administrative downside.

 

You suggested a 100 GB C: for the OS - in fact I've seen this become a standard sizing for most Windows Server OS installs. It's a good size for the OS (about 13-30 GB depending on roles and features) plus patches/updates over time. Generally it should sit at about 40-60 GB for most servers over their lifespan, and you've got a little overhead in there from day one, so you don't need to do a resize just to keep some logs happy. You really don't want to set yourself up for micro-managing disk size increases on a daily basis, so a safe overhead is a sane choice. As Andre mentioned, however, that sane choice depends on your OS allowing you to extend volumes: Windows 2008 R2/2012 allow C: increases with ease, but prior to that you'd want to make sure you got it right, because growing a C: drive was some work.
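
If you want to make that headroom rule explicit, here's a tiny sizing sketch (the ratio is just the rough numbers from above, not a VMware recommendation):

```python
import math

def recommended_volume_gb(expected_use_gb: float, headroom: float = 0.6) -> int:
    """Round expected steady-state usage up to a volume size with headroom,
    to the nearest 10 GB. Purely illustrative, not a VMware rule."""
    return math.ceil(expected_use_gb * (1 + headroom) / 10) * 10

# A server expected to settle at ~60 GB of OS + patches lands on the
# ~100 GB C: drive discussed above.
print(recommended_volume_gb(60))  # 100
```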

 

The benefits of thin provisioning are, again, in providing flexibility for administration. As you've discovered, so long as the VM doesn't write into that free space, thin disks will stay quite small over time. If you occasionally need snapshots and clones for software deployments, then this 'extra' free space can be handy for giving you some working overhead. However, this flexibility comes at a price: careful allocation and administration.

 

If you completely fill a VMFS while a guest OS wants space to allocate, things get rather nasty, and unless you can grow the underlying hardware LUN it's somewhat painful to fix. So, as a general rule for safe allocations: use thin provisioning for everything, and set realistic limits for growth of data within the OS. That is, don't make a 1 TB thin-provisioned disk with only 100 GB of user data on it; set it to something sane like 150 or 200 GB for now. Remember you can always grow it if need be - but with a safe limit it can't run away from you. Take care not to over-subscribe the VMFS sizes (unless you can grow the underlying LUNs). If you have fixed (or relatively fixed) LUNs, then make sure the total of your thin-provisioned disks is slightly less (say 5%) than the VMFS size. This way, if things go nuts in the guest OS, it won't wipe you out - and the upside is you get all that 'spare' space to allow for snapshots, clones, moving things around, testing, etc.
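
To make that 5% rule concrete, here's a small sanity-check sketch (the sizes are made up; in practice you'd pull the provisioned sizes from your own inventory):

```python
def thin_allocation_ok(vmfs_capacity_gb: float,
                       provisioned_gb: list[float],
                       safety_margin: float = 0.05) -> bool:
    """Check that the sum of thin-provisioned disk sizes stays below
    the VMFS capacity minus a safety margin (5% by default)."""
    return sum(provisioned_gb) <= vmfs_capacity_gb * (1 - safety_margin)

# Hypothetical example: 1 TB VMFS with a 100 GB C:, 200 GB D: and 150 GB E:.
print(thin_allocation_ok(1000, [100, 200, 150]))  # True  - 450 <= 950
print(thin_allocation_ok(1000, [400, 400, 200]))  # False - 1000 > 950
```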

 

Lastly, on thin-provisioned disks and 'freeing' space: there is a SCSI command (UNMAP) that allows zeroed blocks to be de-allocated. SANs that support thin provisioning can honour it, and Windows Server 2012 now uses it to release zeroed (deleted) blocks on a SAN. At present, however, VMFS support for passing these de-allocate commands through appears to have been suspended in 5.1. What I'd expect to see eventually is VMFS supporting the guest OS telling it to free blocks, releasing them at the VMFS level, and in turn passing that on to the hardware storage level to be freed as well. I doubt we're far from seeing it in reality, and once this happens, thick provisioning will be a thing of the past.
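
In the meantime, one manual way to shrink a thin disk back down is to zero the free space inside the guest (e.g. with Sysinternals sdelete on Windows) and then punch out the zeroed blocks from the ESXi shell. A hedged sketch (the VMDK path is hypothetical, the VM must be powered off, and you should confirm the vmkfstools option exists on your ESXi build first):

```python
import subprocess

# Hypothetical VMDK path - the VM must be powered off before this step.
VMDK = "/vmfs/volumes/datastore1/SBS2011/SBS2011_1.vmdk"

# After zeroing free space inside the guest (e.g. 'sdelete -z d:'),
# ask ESXi to de-allocate the zeroed blocks from the thin disk.
subprocess.run(["vmkfstools", "-K", VMDK], check=True)
```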

 

Best of luck in your journey.

