This is a somewhat odd issue: I had it partially working at one point, but it seems disabling the cache as a test disabled it for good, even after reboots, driver updates, recreating arrays, etc. Let's start with my specifications...
ASUS Maximus VII Hero BIOS 2012 (RAID mode)
Intel i7 4790k processor 4 GHz (Slight OC to 4.32 GHz)
16GB RAM
Windows 8.1 Pro
Intel RST RAID Drivers 13.1.0.1058 (from ASUS website), and later
Intel RST RAID Drivers 13.6.0.1002 (from Intel download center)
Boot disk: Samsung SSD 850 PRO 256GB
Hopeful array includes:
2x Toshiba DT01ACA300 (3TB)
3x Seagate ST3000DM001 (3TB)
Well, I wanted to set up a RAID 5 array with those 5 drives listed as a storage array. The problem is, when I first created the array, I just went with the default 128k stripe. This resulted in abysmal write speeds of 25 megabytes per second at most, which tends to be an issue when restoring a 7.6 TB backup. So I messed around with the cache settings: I went through each one, but it didn't change the write speeds at all, even if I turned the cache off completely. I then looked around and found that the default 128k stripe for RAID 5 apparently has known terrible performance (which raises the question of why it's the default). Anyways, I deleted the volume, started again (with a 64k stripe, seemingly the best for performance), and: I wasn't able to turn the cache back on. Uh oh.
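For perspective, here's the back-of-the-envelope arithmetic on why 25 MB/s is a non-starter for that restore (a quick Python sketch using decimal units and assuming a purely sequential write; the ~480 MB/s figure is the write speed I later measured on the RAID 0 test further down):

```python
# Rough restore-time estimates for a 7.6 TB backup at the write speeds involved.
backup_tb = 7.6
for label, mb_per_s in [("observed (RAID 5, cache stuck off)", 25),
                        ("later RAID 0 write-back test", 480)]:
    seconds = backup_tb * 1_000_000 / mb_per_s   # TB -> MB, then MB / (MB/s)
    print(f"{label}: {seconds / 3600:.1f} hours ({seconds / 86400:.1f} days)")
```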
To try to fix it, I reboot, which doesn't change anything. Note that the array is still initializing, but it's comically slow even by initialization standards: approximately 1% every 30 minutes, which I'm not waiting for. Also note that with the cache disabled I'm still getting that 25 MB/s write speed, which I shouldn't be getting even WITH initialization running. I try rebooting into the RAID settings during boot: no options there for it.
Let's try updating the drivers; I haven't touched them since I installed Windows, because I wasn't using RAID at the time. Good, a few versions have passed. Install the update. Restart. Create a new RAID 5 array, and... still can't enable the cache. Lovely.
After messing around, I find that if I start with a RAID 0 array, I can convert it into a RAID 5 array: awesome! I make a RAID 0 array (with 4 of the 5 drives, 64k stripe), and look at that: I can change the cache type! I set it to write-back, which is apparently the best-performing option, and run a small test: copy a 25 gigabyte file onto the array and then back off again to test the write and read speeds, with the SSD on the other end of the copy. Write: 475-485 MB/s. Read: 470-485 MB/s, dropping a little over time. Fair enough.
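For what it's worth, the "benchmark" is nothing fancier than timing a big file copy in each direction; roughly the following Python sketch (the paths are placeholders for my SSD and the array, and Windows file caching means the numbers are only ballpark):

```python
import os
import shutil
import time

# Crude sequential-throughput test: copy a large file onto the array, then back off.
# Placeholder paths: C: is the SSD boot drive, E: is the RAID volume.
SRC = r"C:\temp\testfile_25GB.bin"
ON_ARRAY = r"E:\testfile_25GB.bin"
BACK_AGAIN = r"C:\temp\testfile_copy.bin"

def timed_copy(src, dst, label):
    size_mb = os.path.getsize(src) / 1_000_000
    start = time.time()
    shutil.copyfile(src, dst)
    elapsed = time.time() - start
    print(f"{label}: {size_mb / elapsed:.0f} MB/s")

timed_copy(SRC, ON_ARRAY, "write to array")
timed_copy(ON_ARRAY, BACK_AGAIN, "read from array")
```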
Now, delete the NTFS volume and convert to RAID 5 (adding that last disk). Once again the default is 128k; change it to 64k so it's the same stripe as before. It starts migrating data, but uh oh! I still can't change the cache type. It still says it's using write-back, though, so let's try benchmarking it! Create a new NTFS volume, and run the same test again. Write: 100-200 MB/s. It bounces around, but at least it's better than 25 MB/s. Well, let's continue. Read: 20-35 MB/s. I didn't even wait the 20 or so minutes the copy would have taken to finish. I'd consider messing with the cache type, but that's still disabled. It might be this bad because it's 'migrating', but that's even slower than initializing: about 1% every 2-3 hours.
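At those rates, the simple arithmetic on how long the background work takes (again just a quick Python sketch based on the progress percentages I was seeing):

```python
# Time to reach 100% at the observed progress rates.
for label, hours_per_percent in [("initialization (1% per 30 min)", 0.5),
                                 ("migration (1% per 2 h)", 2),
                                 ("migration (1% per 3 h)", 3)]:
    total_hours = 100 * hours_per_percent
    print(f"{label}: ~{total_hours:.0f} hours (~{total_hours / 24:.1f} days)")
```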
So I try re-creating the array and, without touching it (no partition or anything!), wait for it to initialize. That takes 2-3 days. Maybe it'll let me enable the cache once it's done? Nope. Is performance any better? Nope.
I give up. Sorry about the style of this post, but as you can tell I've spent a good week and a half debugging this, so I'm starting to get rather annoyed. Here's what it looks like, by the way.
From a RAID 0 to RAID 5:
Before I show the created RAID 5 array, I should note that the option to Enable Write-Back cache is disabled during creation.
And now the Manage tab for the created RAID 5 array.
I really can't figure out what it is I'm missing here.