Why RAID 5 stops working in 2009 - GOOD INFO!
I'm sure lots of you have RAID5s set up; I know we just went through a RAID5 nightmare on a 4TB array. We lost a drive, then the hardware RAID controller wouldn't accept a replacement disk and recommended we "delete the volume"...
This led to huge downtime, a massive rsync, a rebuild of the whole array, and then a copy back. No fucking fun. http://blogs.zdnet.com/storage/?p=162
Seems like scaremongering to me. Modern RAID controllers have background consistency checks to actively prevent that sort of scenario from happening. If you use RAID6, the odds of that ever happening are basically zero.
Doesn't sound like you ran into the problem described in the article.
The "mathematical certainty of a read failure during rebuild" has been well known for some time. It's why nearly every modern RAID controller supports RAID6 :) You'll see it during an actual rebuild, say halfway through, where a *second* disk will throw offline, trashing the array completely.

Background consistency checks do not help in this scenario, either. The read failure rate given in the article is for fully operational disks - i.e. it's completely normal for them to throw a bad bit here and there.

The article does go a bit overboard, though. For one, very few arrays are going to be 100% maxed out on capacity, so your chances of hitting an error are substantially lower. We still use RAID5 for arrays of six drives or fewer, and have yet to see a dual-disk failure as described (although we've had two complete disk failures, which are unrelated to the problem discussed).

All in all, though: remember backups! RAID is in no way, whatsoever, not even close, a substitute for a proper backup strategy. If server availability is extremely important, keep a warm spare handy that is synced on a regular basis, as restores from backup can take quite some time depending on your content set.

-Phil
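The "mathematical certainty" Phil mentions is just compounding probabilities. Here's a minimal sketch of the article's style of math, assuming the commonly quoted consumer-disk URE rate of 1 error per 10^14 bits read and a hypothetical 7-drive RAID5 of 2 TB disks (both numbers are illustrative assumptions, not measurements):

```python
import math

def p_ure(bytes_read, ure_rate=1e-14):
    """Probability of at least one unrecoverable read error while
    reading `bytes_read` bytes, assuming independent bit errors at
    `ure_rate` errors per bit read."""
    bits = bytes_read * 8
    # 1 - (1 - rate)^bits, computed stably with log1p/expm1
    return -math.expm1(bits * math.log1p(-ure_rate))

# Rebuilding after one failure means reading every surviving disk in full:
# here, 6 remaining disks of 2 TB each (assumed array geometry).
surviving_disks = 6
disk_bytes = 2e12
p = p_ure(surviving_disks * disk_bytes)
print(f"P(rebuild hits a URE) = {p:.0%}")
```

With those assumed numbers the rebuild reads about 12 TB and has a better-than-even chance of hitting a URE, which is exactly why a second parity stripe (RAID6) matters: one URE during a RAID5 rebuild is fatal, while RAID6 can reconstruct around it.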
Where I live RAID kills roaches.
Quote:
RAID 5 sucks, it all broke down on me too in the past
The article headline is 100% sensationalist and makes it sound like there's a fundamental bug in the RAID5 algorithm that will cause it to fail in 2009. LOL

Another thing the author skipped over (which Phil21 points out) is the importance of backups. I had a lot of problems with RAID5 on my Windows PC (which I later determined were probably due to dodgy SATA cables) and had to rebuild several times. So what's the first thing you do when your controller says the array is degraded? You don't rebuild, you BACK UP first. I always had at least one full backup, so an immediate incremental backup of the degraded array only took about 10-20 minutes. At that point my data was a lot safer, and I could then think about rebuilding the array.

I do agree that increasing capacities will present some serious maintenance problems in the coming years. Maximum drive capacities have increased by a factor of more than 10 in the past 5 years, but raw read and write speeds haven't kept up with that improvement... which means it takes longer and longer to copy or rebuild your data safely.