As a storage admin in the age of VMware, two of the most common questions I hear are:
- How large should the LUNs I present to VMware be?
- How many VMs should I run per LUN?
These two questions tie into each other; you cannot answer one without the other. That said, VMs per LUN is more of a rule of thumb, with the major influences being storage array performance and VM I/O considerations.
I am going to outline what has worked well for me in several different situations. Think of it as a flow chart, because there are some things that need to be accounted for. The bottom line is that there is no magical answer that will work for everyone. What I am recommending is based on best practices and experience; if you know your environment, by all means adjust some of the thresholds appropriately.
We are going to define two rough but good rules of thumb (none of these numbers are concrete, and you will not tear a hole in the universe if you violate them; they are just good starting numbers):
No more than 10 VMs per LUN.
Now let's talk about why (the more you know, the better you can fine-tune your environment). The biggest reason for this is SCSI reservations; there is already a lot of info on this (http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1005009). In a nutshell, there are some situations where VMware locks a LUN for exclusive use. The more VMs you have on a LUN, the more locks will potentially be placed on it, and a lot of cycles get spent trying to acquire those locks, causing delays, etc. The other reason is I/O. This is a highly variable number and is 100% dependent on what your VMs are used for (if your VM I/O is too high, that alone will reduce the number of VMs you want on that LUN), but for average usage with typical VMs, 10 VMs on a typical SAN LUN is a good number to start the conversation.
No more than a 2 TB LUN size.
This used to be a hard limit, but with vSphere 5 the 2 TB ceiling has been shattered (it is now 64 TB). Even with this ability, I would try not to exceed 2 TB unless necessary. For one, unless you are running the latest and greatest storage hardware, firmware, and OS, your storage array may have issues dealing with LUNs larger than 2 TB, and advanced features (replication, snapshots, etc.) may be affected. (Also, who wants to run disk repairs on a LUN larger than 2 TB, really?)
First, there are some variables we need to define:
avmS = How much storage does your average system take? "(Size of virtual machine’s hard disk(s)) + (size of RAM for virtual machine) + (100MB for log files per virtual machine) is the minimum space needed for each virtual machine." — from http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003755
avmPL = How many VMs per LUN. You can use 10 as a starting number, or lower it if you have a slow SAN, really big VMs, etc. This is an average, so don't let one or two really big (or small) VMs fully define it.
lunBS% = Buffer space: how much you want free on your LUN beyond your VMs (this space is used for snapshots, temporary move space, etc.). If you are on anything earlier than vSphere 4 Update 2, I would not go below 20% (if you have ever run a LUN out of space because of a snapshot, you know why); see http://www.yellow-bricks.com/2010/07/05/changes-to-snapshot-mechanism-delete-all/ for how snapshots collapse. With the changes to the snapshot collapse mechanism, 20% is still a good rule, but it can be fudged a bit more (i.e., you may not require as much buffer space). Other factors include how long you hold snapshots and how big your snapshots get.
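That avmS minimum from the VMware KB article can be sketched as a quick helper (the function name and GB units are my own choice, not anything from VMware):

```python
def min_vm_storage_gb(disks_gb, ram_gb, log_gb=0.1):
    """Minimum space per VM, per the VMware KB formula:
    virtual disk size + RAM (for the VM swap file) + ~100 MB of log files."""
    return disks_gb + ram_gb + log_gb

# e.g. a VM with a 150 GB disk and 8 GB of RAM needs at least:
print(min_vm_storage_gb(150, 8))  # 158.1
```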
Here is the math:
LUN size = (avmS × avmPL) × (1 + lunBS%)
2 TB LUN (1920 GB, rounded up) = (160 GB avmS × 10 avmPL) + 320 GB (20% buffer)
1 TB LUN (960 GB, rounded up) = (80 GB avmS × 10 avmPL) + 160 GB (20% buffer)
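The math above can be sketched as a quick calculation (a minimal sketch; the function and parameter names are mine, with the post's rules of thumb as defaults):

```python
def lun_size_gb(avg_vm_size_gb, vms_per_lun=10, buffer_pct=0.20):
    """Recommended LUN size: (average VM size x VMs per LUN), plus buffer space."""
    base = avg_vm_size_gb * vms_per_lun
    return base * (1 + buffer_pct)

# The two worked examples from the post:
print(lun_size_gb(160))  # 1920.0 GB -- round up to a 2 TB LUN
print(lun_size_gb(80))   # 960.0 GB  -- round up to a 1 TB LUN
```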
OK, so are we done? Well, in a simple world, yes. But in our world, no.
Now come the questions, like: what about thin provisioning?
OK, thin provisioning where? At the VM level, or at the storage array level?
I will go into detail in a later post, but here is my stance. If it is thin provisioning at the VM level, we are still done. (You would potentially have more free space, but what would you do with it? I would not add more VMs to that LUN, as that would violate the 10-VM limit.)
If you have enabled storage array thin provisioning, I personally would set your LUNs to 2 TB. Why, you might ask? Unused space does not consume much of anything (a very small bit, and you are already taking the SAN performance hit by thin provisioning), and it actually gives you more flexibility in the long run. One other note: if this is pre-vSphere 5, I personally have had the best flexibility when I formatted all of my LUNs with the 8 MB block size; this let me grow my systems and move systems around without worry or extra complication.
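On the 8 MB block-size point: on VMFS-3, the block size chosen at format time caps the maximum size of any single file (i.e., any VMDK) on that datastore, which is why the 8 MB block size gives the most room to grow. A quick lookup of those documented VMFS-3 limits (the dict itself is just my own sketch):

```python
# VMFS-3 block size -> maximum single-file (VMDK) size on that datastore
vmfs3_max_file_gb = {
    "1MB": 256,
    "2MB": 512,
    "4MB": 1024,  # 1 TB
    "8MB": 2048,  # 2 TB (minus 512 bytes, strictly speaking)
}

# A 500 GB VMDK will not fit on a datastore formatted with 1 MB blocks,
# but fits comfortably with the 8 MB block size:
print(vmfs3_max_file_gb["1MB"] >= 500)  # False
print(vmfs3_max_file_gb["8MB"] >= 500)  # True
```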