Was doing some testing during my Compellent admin training yesterday, and came across some rather interesting behavior.
My IO testing tool of choice is SQLIO, and it allows for very quick and easy performance baselining of your storage.
It's not SQL-specific, as the name would suggest; it's command line and can easily be adapted to test almost any IO profile.
The best way to test is to create a test file larger than the cache on the SAN to ensure that the test is not completely cached.
(Unless of course you want to test cache performance...)
In this instance I created a 10GB file, which is significantly bigger than the 3.5GB read cache on the Compellent.
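For reference, this is roughly the shape of the run I was doing. The file path, thread count and durations here are my own illustrative values, and the flags are from memory, so double-check them against `sqlio -?` on your own install. SQLIO sizes the test file via a parameter file (path, threads, affinity mask, size in MB):

```
REM param.txt  --  path | threads | mask | size in MB
c:\testfile.dat 2 0x0 10240

REM 8KB random reads, 8 outstanding IOs, 60 seconds, latency stats
sqlio -kR -s60 -frandom -o8 -b8 -LS -Fparam.txt
```

On first run SQLIO creates the 10GB test file itself, which, as it turns out, is exactly where things went sideways.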
I ran my tests, but considering that the underlying disks were only 5 x 15K drives, the performance was WAY too high. (30K IOPS - would save a lot on SSDs!)
In addition, RAID 5 was writing as fast as RAID 10 - so something was fishy...
Having a look at the disk utilisation on the SAN I could see why straight away: the SAN was only storing changed data!
My 10GB file was using less than 150MB, because the rest was filled with zeros and the Compellent does not write large blocks of sequential zeros...
[Image: Volume with 10 GB data file...]
As a result my test file fit easily into cache - it was basically just metadata - which also explains why R5 and R10 were performing the same: 100% cache hit for both read and write!
Disabling the cache allowed me to get a bit further, but the fantastic thin-write technology means that I'm going to have to get creative if I want to continue using SQLIO in future.
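One way to get creative: pre-fill the test file with random, incompressible data before pointing SQLIO at it, so the array's zero-detection has nothing to skip. A minimal sketch (the function name and file path are mine, not part of SQLIO):

```python
import os

def make_incompressible_file(path, size_bytes, chunk=1024 * 1024):
    """Fill a test file with random bytes so zero-detecting /
    thin-provisioning storage actually has to write every block."""
    written = 0
    with open(path, "wb") as f:
        while written < size_bytes:
            n = min(chunk, size_bytes - written)
            f.write(os.urandom(n))  # random data defeats zero elimination
            written += n

# Small demo size here; for a real run, scale size_bytes past the SAN's cache
make_incompressible_file("testfile.dat", 10 * 1024 * 1024)
```

Point SQLIO at the pre-created file and it should reuse it rather than recreating a zero-filled one.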
So lessons learned (by me) are:
1) When doing testing, always have expectations of what the outcome should be, both as a sanity check and to make sure the results are valid.
2) Make sure that your synthetic test works for the platform you are testing!
P.S. I subsequently found a good explanation of how SQLIO works, and of similar phenomena, by Grant Fritchey, which would confirm my observations: