As far as I’m concerned, open-source software RAID is the only way to go for home server applications. The ability to mix drive sizes and manufacturers over time is simply too good to pass up. In an enterprise environment with hot spares and 4-hour support contracts you can live without that flexibility, but in a home server environment your system could be down for weeks while you try to source very specific parts. The CPU overhead is marginal; the slight performance hit is easily soaked up by modern multi-core CPUs, where that load isn’t noticeable. SnapRAID also has the advantage of keeping only the drives needed for a given operation spinning, where traditional RAID, software or hardware, keeps all drives spinning.
As a side effect, if you lose more drives than you have parity for, with SnapRAID you only lose the contents of the failed drives, whereas a traditional RAID will be toast because data is striped across all drives. I’ve run RAID in various incarnations (1, 5, 6, 10) at home for the past 15 years, and have recently switched my NAS to a Btrfs RAID1 with 2 drives for stuff I’d be rather upset to lose, and a 4-drive MergerFS/SnapRAID pool for stuff I would be less upset about losing.
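For readers unfamiliar with SnapRAID, a pool like that is driven by a small config file. The mount points and drive names below are hypothetical; a minimal sketch might look like:

```
# Hypothetical snapraid.conf for a 4-drive pool (one parity + three data drives).
parity /mnt/parity1/snapraid.parity

# Content files record the state of the array; keep copies on several drives.
content /mnt/disk1/snapraid.content
content /mnt/disk2/snapraid.content

# Data drives, mounted individually (MergerFS merges them into one view).
data d1 /mnt/disk1/
data d2 /mnt/disk2/
data d3 /mnt/disk3/
```

Parity is then updated on a schedule with `snapraid sync` rather than on every write, which is why idle drives can stay spun down.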
Motherboard-integrated 'RAID' solutions are universally shit. They combine the disadvantages of both a real hardware RAID card and software RAID with almost none of the benefits of either: if anything happens to your motherboard, its BIOS, its RAID controller or its SATA controller, your data is going to be very hard to recover.
I'm a sysadmin by trade and as such I deal with RAID-enabled servers on a daily basis.
Best case scenario, you buy the exact same motherboard, down to the revision and BIOS version, and pray for it to work. Performance-wise, motherboard RAID tends to be a hacked-together firmware solution that does everything on the CPU anyway; using software RAID is basically skipping the middle-man. And because there is no proper write cache with independent power-loss protection, you might end up with botched data on disk every time you get an unexpected power cut. The only advantage I can think of is that Windows can boot off motherboard RAID, but not off a software one. Though I only really have experience with Linux, so maybe this has changed in the last decade :)

Secondly - RAID5 itself is not necessarily considered a good option nowadays, especially with large HDD capacities. Mostly because while rebuilding the array after losing one drive there is a surprisingly large risk of a second drive failing and all your data going down the drain.
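To put a rough number on that rebuild risk: consumer drives are commonly spec'd at one unrecoverable read error (URE) per 1e14 bits read, and a RAID5 rebuild must read every surviving drive end to end. A back-of-the-envelope sketch (drive size and URE rate are assumptions, and real-world URE behaviour is debated and often better than the spec sheet):

```python
# Back-of-the-envelope odds of completing a RAID5 rebuild without hitting a URE.
# All numbers are assumptions for illustration, not measurements.
URE_RATE = 1e-14      # assumed spec: one unrecoverable read error per 1e14 bits
DRIVE_TB = 12         # hypothetical drive size in TB
SURVIVORS = 3         # a 4-drive RAID5 after one drive has failed

bits_to_read = SURVIVORS * DRIVE_TB * 1e12 * 8   # every surviving bit gets read
p_clean_rebuild = (1 - URE_RATE) ** bits_to_read
print(f"chance of rebuilding without a URE: {p_clean_rebuild:.1%}")
```

With these assumed numbers the odds of a clean rebuild come out to only around 5–6%, which is why RAID6 (two parity drives) is the usual recommendation at this capacity.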
Lastly - I second this: before setting up any RAID you should really know what you are doing. It is surprisingly easy to end up with a configuration that has lower performance and reliability than using separate disks in the first place. Same as everyone else, I've used motherboard-based RAID (or 'hardware-assisted software RAID', as it's sometimes called) and it's been a disaster. Chipset manufacturers seemed to love putting their own unique spin on RAID configuration, making recovery just plain difficult for no reason (looking at you, nVidia, and your nForce 680i). A caveat to that would be Intel's RAID.
IMHO, Intel-based motherboard RAID is widespread and well-understood enough to be a reasonable option. Recovery software understands arrays created with it well enough for recovery to be workable, and it's compatible across many generations and revisions of boards, so replacing a failed one is easy. But if I were you, I'd stick with pure software RAID or pure hardware RAID, deciding which to go with from the start and making sure you understand the pros and cons of both. Oh, and backups.
We’re still on this RAID kick (and on a ‘vs’ kick). If you missed our RAID primer, it’s worth reading first. Now that you’re familiar with what RAID is, let’s go a little deeper to figure out who wins the hardware vs software RAID battle. Spoiler alert: neither. The winner is you!
You win, because you’ll have chosen the right one for you. Moving on! It takes processing power to handle all the calculations that go into RAID operations.
The more complex the RAID configuration, the more processing power needed. From a pure operations perspective, there is very little difference between hardware and software RAID. Ultimately, the difference comes down to where the RAID processing is performed.
It can either be performed in the host server’s CPU (software RAID) or in an external CPU (hardware RAID).

Hardware RAID

Let’s start the hardware vs software RAID battle with the hardware side. In a hardware RAID setup, the drives connect to a RAID controller card inserted in a fast PCI-Express (PCIe) slot in the motherboard. This works the same for larger servers as well as desktop computers. Many external RAID drive enclosures have the RAID controller card built into the enclosure.

Advantages:
- Better performance, especially in more complex RAID configurations. Processing is handled by the dedicated RAID processor rather than the main computer’s processor, which translates to less strain on the system when writing backups and less downtime when restoring data.
- More RAID configuration options, including hybrid configurations which may not be available with certain OS options.
- Compatible across different operating systems. This is critical if you plan to access your RAID system from both a Mac and a Windows machine; a hardware RAID would be recognizable by either system.

Disadvantages:

- Since there’s more hardware, there’s more cost involved in the initial setup.
- Inconsistent performance for certain hardware RAID setups that use flash storage (SSD) arrays.
Older RAID controllers disable the built-in fast caching functionality of the SSD that is needed for efficient programming and erasing of the drive.

Software RAID

When storage drives are connected directly to the computer or server without a RAID controller, the RAID configuration is managed by utility software in the operating system; this is referred to as a software RAID setup. Software RAID is also used in large systems (mainframes, Solaris RISC, Itanium, SAN systems) found in enterprise computing.
Numerous operating systems support RAID configuration, including those from Apple and Microsoft, various Linux flavors, and OpenBSD, FreeBSD, NetBSD and Solaris Unix.

Advantages:
- Low cost of entry: all you need to do is connect the drives and then configure them within your OS.
- Today’s computers are so powerful that their processors can easily handle RAID 0 and 1 processing with no noticeable performance hit.

Disadvantages:

- Software RAID is often specific to the OS being used, so it can’t generally be used for drive arrays that are shared between operating systems.
- You’ll be restricted to the RAID levels your specific OS can support.
- There’s a performance hit if you’re using more complex RAID configurations.
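For a concrete sense of what that “RAID processing” actually is, the parity math behind RAID 5 is just XOR. A minimal sketch in Python (toy byte strings, not real block I/O):

```python
# Toy sketch of RAID 5-style XOR parity; block contents are made up for illustration.
def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-sized blocks together byte by byte."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# Three equal-sized data blocks, one per data drive.
d1, d2, d3 = b"disk", b"data", b"here"
parity = xor_blocks(d1, d2, d3)   # stored on the parity drive

# If the drive holding d2 dies, XOR the survivors with parity to rebuild it.
rebuilt = xor_blocks(d1, d3, parity)
assert rebuilt == d2
```

Doing this for every byte on every write is exactly the work a dedicated controller offloads, and it’s why parity levels like RAID 5/6 cost more CPU in software than simple striping or mirroring.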
Hardware vs software RAID? The winner really depends on your use case. If you’re trying to save some money (and who isn’t, really?), you’ll be using a single operating system to access the array, and you’re using either RAID 0 or 1, then software RAID will give you the same RAID protection and experience as its more expensive counterpart.
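To illustrate why RAID 0 and 1 in particular are so cheap for the CPU, here’s a toy sketch of both layouts (made-up data, no real disks): striping just splits chunks across drives and mirroring just copies them, with no parity math at all.

```python
# Toy illustration of the two simplest RAID layouts (no real I/O).
data = b"ABCDEFGH"
chunk = 2

# RAID 0 (striping): alternate chunks across two drives -- fast, no redundancy.
drive0 = b"".join(data[i:i + chunk] for i in range(0, len(data), chunk * 2))
drive1 = b"".join(data[i:i + chunk] for i in range(chunk, len(data), chunk * 2))

# RAID 1 (mirroring): identical copies -- either drive alone restores the data.
mirror0 = mirror1 = data

# Reassemble the stripe set by interleaving the chunks back together.
restriped = b"".join(
    drive0[i:i + chunk] + drive1[i:i + chunk] for i in range(0, len(drive0), chunk)
)
assert restriped == data and mirror0 == data
```

Both are just copies and slices, which is why even a modest CPU handles them without breaking a sweat.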
If you can handle the initial investment, hardware RAID is definitely the way to go. It will free you from the limitations of software RAID and give you more flexibility in how it’s used and in the types of configurations available. What type of RAID processing do you use?
Let us know in the comments.