Tuesday 8 November 2011

Volume placement in a mixed EqualLogic environment


Situation:
So you have a mixture of HDD-based EQLs and SSD-based EQLs in your group: you add the units into the same pool and now you have lost control over what data resides where! Your dev/test environment is using valuable SSD, and half your production DB is sitting on SATA – what do you do?

Maybe if we had some 6110S's we would have some space for snapshots...
Background:
First, the reason this happens: EQL groups try to distribute data proportionately across all members in a pool. So if you have 8TB of SATA and 2TB of SSD, it will put 80% of each LUN on SATA and 20% on SSD. Which 20% is out of your control, unless you have Automated Performance Load Balancing (APLB) turned on, in which case it should be the most heavily used 20%. When you turn on APLB, you will also find that the percentages start to skew over time, so the percentage on SSD should start to drop on quiet LUNs and grow a little on hot LUNs. However, you still can’t directly control what goes where.

Your options:
The easy option is to split the SSD and HDD units into separate pools and move the LUNs accordingly, but then you have basically created trapped islands of capacity and performance. You don’t get any APLB benefit, and when you run out of capacity in a pool, you are out.

There are, however, 2 other mechanisms that can be employed – neither is perfect, but at least you have some options.

Firstly, there is volume binding. This is a CLI-only operation that allows you to specify the member on which a LUN must reside. In our SSD/HDD example, you could bind a LUN to the SSD member and then you know that the data will always be there. On the negative side: what happens if you run out of snapshot space on that member? You also cannot bind a LUN that is bigger than a single member, which could be an issue if you are using multiple SSD units to give you the capacity you require.
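
For illustration, a binding operation from the group CLI looks roughly like the session below. The volume and member names are made up, and the exact syntax should be checked against the CLI reference for your firmware release:

   volume select prod-db01 bind eql-ssd-01
   volume select prod-db01 show
   volume select prod-db01 unbind

Since binding is not visible in the Group Manager GUI, it is worth noting down which volumes are bound so the next admin knows why they are not being balanced.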

Next there is RAID preference. If your SSD and HDD members are on different RAID policies, you can specify the RAID type you want per LUN. This means that if the SSD member is RAID50 and the SATA member is RAID10, specifying a RAID50 preference on your target LUN would cause it to reside on the SSD members even if APLB is enabled. This is a best-effort operation: if the LUN’s space utilisation grows past the capacity available at the preferred RAID level, other pool capacity is used and the data is redistributed as normal. Once capacity is available again, the RAID preference is honoured and data is moved back to the RAID type of choice.
The big advantages of this method are that LUNs can span multiple units and there are no hard capacity limits – if you outgrow the preferred members you just get degraded performance.
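
Purely as a sketch, and with the caveat that the parameter name here is an assumption on my part rather than something lifted from the CLI reference, setting a RAID preference on a volume would be along these lines (from memory it is also exposed under the volume’s advanced settings in Group Manager):

   volume select prod-db01 raid-preference raid50
   volume select prod-db01 show

As above, prod-db01 is a made-up volume name – verify the command against the documentation for your firmware before using it.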

Summary:
You have 3 options:
Pool separation – putting members in separate pools – manual, and you lose APLB.
Volume binding – specifying a single member to host a LUN – limited in terms of size and flexibility.
RAID preference – specifying a preferred RAID level if members are on differing RAID levels – best effort, but the most flexible.
