Either a RAID controller, a SAN storage controller, or a piece of software on the OS has to be in charge of your drives if you want any drive resiliency. Otherwise, it’s just a bunch of drives (JBOD) with no redundancy.
How these drives are managed and implemented greatly affects their overall usefulness and effectiveness. Drive management is undergoing some of its biggest changes in decades thanks to the sheer size of spinning drives, the speed of flash, and the huge upwelling of software-defined storage. How does each of these play into drive management, though?
RAID controllers have been around “forever”. RAID itself (at least in the form we see in modern servers) has been around since the 1980s. RAID controllers have essentially tweaked the formula over the last 40 years to sustain failures better, to fail fewer drives, to carry more cache for better performance, and to support multiple interfaces.
Even so, it’s old technology with new wrappers. With that in mind, RAID does have some shortcomings.
Part of RAID’s beauty was its drive agnosticism, which is great…as long as talking directly to the drives doesn’t give you a major benefit. With flash, however, talking directly to the drives is very important: it gives insight into the drive’s lifecycle, performance, and other operational parameters. RAID cards don’t know how to deal with that information, so these benefits can’t be properly realized. This is one of the reasons enterprise drives are so much more expensive than consumer drives: they have to be able to take care of themselves independently when treated as a generic drive on a RAID controller, so they have extra intelligence built in for garbage collection, data loss protection, and TRIM. Additionally, modern SSDs are so fast that a RAID controller, even in a fast full-bandwidth PCIe slot, can be outpaced by just a few SSDs, and the inherent nature of RAID controllers adds some latency to super-fast SSDs. So, they’re dead, right? RIP RAID, 1980s-2020…not quite.
This isn’t to say that RAID controllers don’t have their place with flash storage. As we’re about to discuss, ANY other (properly) redundant solution requires significantly more effort and cost than RAID. On top of that, for smaller environments or distributed storage arrays, the latency impact of a (modern) RAID card isn’t extreme, and most won’t notice the upper bandwidth cap. Simply putting four SSDs behind a RAID controller will have you up and running with a relatively painless and self-contained storage solution in minutes…for very little money.
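As a back-of-the-envelope illustration of what a four-SSD setup yields, here is the usable-capacity arithmetic for the common RAID levels (the 2 TB drive size is just an assumed example, not a figure from any specific product):

```python
# Usable capacity for four equal-size SSDs under common RAID levels.
drive_size_tb = 2   # assumed per-drive capacity; pick any size
n_drives = 4

raid0 = n_drives * drive_size_tb         # pure striping: all capacity, no redundancy
raid1 = (n_drives // 2) * drive_size_tb  # mirrored pairs (RAID 1/10): half the capacity
raid5 = (n_drives - 1) * drive_size_tb   # one drive's worth of space goes to parity
raid6 = (n_drives - 2) * drive_size_tb   # two drives' worth of space goes to parity

print(raid0, raid1, raid5, raid6)  # 8 4 6 4 (TB usable)
```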
These new NVMe RAID controllers are a slight variation because they aren’t something you can easily just install in your server and go. There are special cabling requirements that effectively mean the controller has to be sold with the server. They also further exacerbate the aforementioned bandwidth issues: each drive is optimized for an x4 PCIe connection to the CPU, so if you then stick 16 of those drives behind a single RAID card, there will obviously be bandwidth contention. All that being said, with more and more drives moving to NVMe, these controllers grant the ability to support those drives quickly and easily and get most of the benefits for small deployments with little overhead. The RAID controller also offers a hardware abstraction layer, which means the software on top of it doesn’t have to care what kind of individual drives are underneath.
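That contention is easy to quantify. As a rough sketch, assuming PCIe Gen4 at about 2 GB/s of usable bandwidth per lane (the exact figure varies by generation and protocol overhead):

```python
# Back-of-the-envelope PCIe bandwidth math for an NVMe RAID controller.
gbps_per_lane = 2     # assumed usable GB/s per PCIe Gen4 lane
lanes_per_drive = 4   # each NVMe drive wants an x4 link
n_drives = 16
uplink_lanes = 16     # the RAID card itself sits in an x16 slot

drive_demand = n_drives * lanes_per_drive * gbps_per_lane  # what the drives can push
uplink_capacity = uplink_lanes * gbps_per_lane             # what the slot can carry
oversubscription = drive_demand / uplink_capacity

print(f"{drive_demand} GB/s of drive bandwidth into a {uplink_capacity} GB/s slot "
      f"-> {oversubscription:.0f}:1 oversubscription")
```

In other words, 16 NVMe drives behind one x16 card are oversubscribed 4:1 on raw lane count alone, before the controller’s own processing overhead is considered.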
But what if you have need of large scale or extreme performance? Well….
SAN controllers are custom-built from the ground up for three purposes (all of the additional marketing around them always settles back to one of these three): protect data, share data, and make data performant. Depending on the vendor, those three are in a different priority order, but they are what every SAN controller works toward. SAN controllers are made for — and are great at — managing a lot of drives. SANs are often made to support hundreds of drives and hundreds of gigabits of bandwidth. Being on dedicated hardware also means there isn’t any contention in delivering that performance (if they’re properly designed, that is).
Since SAN controllers are purpose-built, the way they implement drive care and feeding can vary greatly. There could be just a RAID controller inside, or there could be extremely custom firmware with direct PCIe links to NVMe drives and detailed monitoring and data placement to optimize the life, performance, and reliability of the drives. It’s nearly impossible to know for sure, but you can get a rough idea by asking questions about how they do data redundancy and handle drive failures. If the array is doing something like distributed parity and sparing, that at least says there is some intelligence in the array beyond basic RAID levels (0-6), which means some real engineering effort has been made. Finding out how your vendor treats their drives gives you a good idea of how much actual engineering went into the product, which gives you better insight than the marketing fluff that goes with it.
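At its simplest, parity-based redundancy is just XOR: the parity block is the XOR of the data blocks, and any single lost block can be rebuilt by XORing the survivors back together. A minimal sketch of the idea (an illustration of the math, not any vendor’s actual implementation):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Three data blocks on three drives, plus one parity block (RAID 5 style).
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Simulate losing drive 1: rebuild its block from the survivors plus parity.
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data[1]  # the lost block comes back intact
```

Distributed parity schemes scatter those parity blocks across many drives (and, in scale-out systems, many servers) instead of dedicating one drive to them, but the underlying reconstruction math is the same.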
The final way of managing drives is software-defined storage. Software-defined storage is a very vague and open-ended term. Technically, every RAID controller that runs custom software could be considered “software-defined”. Windows Dynamic Disks have been “software-defined storage” since the early 2000s. These days, though, when most people say software-defined storage, they mean distributed, redundant storage across multiple x86 servers.
The two examples that immediately come to mind are VMware vSAN and Microsoft Storage Spaces Direct (S2D), but there are dozens of other companies doing similar scale-out redundant storage solutions. These systems can provide some of the best benefits of custom storage array controllers and RAID. With vSAN and S2D it’s easy to simply bring some servers online, add the disks, and say “make it a pool,” which gives you some of the simplicity of RAID configuration (assuming you did all the other work to get to that point). However, since these are just pieces of software without the hardware abstraction layer of a RAID controller, they talk directly to the drives, optimizing how each drive is handled and managed while providing redundancy across servers.
The downsides of these solutions come back around to their main benefit, ironically enough. Since the data is distributed across many drives and many servers, there is a directly proportional relationship between the redundancy and performance of the pool and the number of servers involved. With only two servers, if a single server goes down, all of the data has to be served from the other server, and half of the drives are no longer usable, which also means half the read performance is gone. Depending on how it’s configured, a single drive loss in the remaining server could mean data loss. However, if there are 16 servers in the pool, the loss of one server means only 1/16th of the drives/performance is lost, and there are many more servers to absorb the load. Also, since software is taking care of all of the drives, high-performance configurations can end up using a substantial amount of CPU resources to manage data flow. In extreme examples, up to 8 CPU cores per box can be consumed JUST to control the data flow and networking.
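The scaling trade-off above reduces to simple arithmetic; a quick sketch:

```python
def surviving_fraction(n_servers):
    """Fraction of drives (and read bandwidth) still online after one server fails."""
    return (n_servers - 1) / n_servers

print(surviving_fraction(2))   # 0.5    -> half the read performance is gone
print(surviving_fraction(16))  # 0.9375 -> only 1/16th is lost
```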
So which solution should you choose for drive management? As with everything, there’s no single right answer: a dozen variables go into the choice, and you have to weigh the pros and cons of each solution to figure out what will work best for your environment.
There you have it: the basics on the different types of drive management and some pointers on which you might want to choose.