Differentiating Flash Storage Series: Drive Interconnection

Feb 3, 2021 by Brent Earls

Welcome back to our flash storage series. We’ve discussed drive types, drive interfaces, and management, so check those out before reading on!

How hosts actually get and use the storage is the final piece of this flash puzzle. At this point, there are over a dozen ways that computers can connect to storage, and which one to use is still a very common question when discussing new storage environments. There are three main types of storage connectivity: iSCSI, Fibre Channel, and file (SMB/NFS).

File-Based Storage

File is one of the easiest ways of setting up storage: you share out a repository and remote users get access to it. Historically it was never the fastest or best way of sharing storage, but with the RDMA and offload capabilities built into modern cards, performance may no longer be a problem.
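
As a rough sketch of how simple the client side can be, here is a minimal Python example for Linux. The server name, export path, and mount point are hypothetical placeholders, and NFSv4 is assumed; it mounts an export and then treats it like any local directory:

```python
import subprocess
from pathlib import Path

# Hypothetical values -- substitute your own NFS server and export.
SERVER_EXPORT = "filer01.example.com:/exports/projects"
MOUNT_POINT = Path("/mnt/projects")

# Mount the export (requires root). NFSv4 is assumed here; the protocol
# version is one of the interoperability details discussed above.
MOUNT_POINT.mkdir(parents=True, exist_ok=True)
subprocess.run(
    ["mount", "-t", "nfs4", SERVER_EXPORT, str(MOUNT_POINT)],
    check=True,
)

# Once mounted, the remote repository behaves like any local path.
for entry in sorted(MOUNT_POINT.iterdir()):
    print(entry.name)
```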

The main issue with file storage is that each OS has a preferred protocol for accessing it, and the other protocols are less optimal. The same is true on the source side: a SAN or server built for SMB (Windows) isn't great at NFS, and a SAN or server built for NFS (Linux) isn't great at SMB. They can do it, but they don't support all of the features and performance optimizations of native connectivity.

If your environment is completely homogeneous that might not be a problem, but few environments are. Additionally, the way multi-pathing and redundancy are achieved for file storage varies greatly, not just between technologies but even between versions of the same technology (NFS 3 vs. 4). This can create a complicated support matrix: knowing which versions provide which features and what will work in each combination.

Block-Based Storage

Block-based storage access methods predominantly come down to iSCSI and Fibre Channel. Both of these technologies present raw blocks of storage across their interconnects and let the host OS take care of the file system. The benefit is that the file system is always optimized for that OS, and you don't have to worry about as many interoperability issues between host and storage. It also means you can serve storage to disparate OSes and it will be optimized for each one without needing different sources. The downside is that multiple servers can't access the same volume unless it is formatted with a file system designed for simultaneous access, such as a clustered file system.
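
To make the "raw blocks" idea concrete, here is a minimal Python sketch for Linux, assuming a LUN presented over iSCSI or Fibre Channel shows up as the (hypothetical) device /dev/sdb. The array only ever sees reads and writes of numbered blocks like this; the file system on top is entirely the host's business:

```python
import os

# Placeholder device node -- on Linux, a LUN presented over iSCSI or
# Fibre Channel appears as an ordinary block device like this.
DEVICE = "/dev/sdb"
BLOCK_SIZE = 4096  # A common logical block size; check your device.

# Open the device directly (requires root). The storage array only
# sees reads and writes of numbered blocks; it has no idea what file
# system the host OS has layered on top.
fd = os.open(DEVICE, os.O_RDONLY)
try:
    os.lseek(fd, 10 * BLOCK_SIZE, os.SEEK_SET)  # Seek to block 10.
    block = os.read(fd, BLOCK_SIZE)
    print(f"Read {len(block)} bytes from block 10 of {DEVICE}")
finally:
    os.close(fd)
```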

iSCSI

iSCSI is the SCSI protocol over IP. It rides on the existing Ethernet and TCP/IP standards, making it extremely compatible across disparate environments. iSCSI has NO actual requirements on speed or medium, so you can as easily run it across 11 Mbit 802.11b wireless as you can over brand-new 400 Gbit Ethernet. This can create some interesting interoperability situations: different NICs and different switches all have different functionality, offloads, performance, and other features designed for normal network traffic that can cause issues for storage if not tuned. RDMA and various offloads can make iSCSI extremely fast, but if the network isn't optimized there can be problems. Additionally, running iSCSI on top of TCP/IP, which itself runs on top of Ethernet, means there is a lot of overhead and encapsulation going on. iSCSI is extremely flexible, but that flexibility is what causes most of the issues people run into with it.
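
One way to see just how "plain TCP/IP" iSCSI really is: a target portal is simply a TCP listener, by default on the IANA-assigned port 3260. This minimal Python sketch (the portal address is a placeholder) checks reachability with nothing but a TCP handshake; it does not perform an actual iSCSI login:

```python
import socket

# Placeholder portal address; 3260 is the IANA-assigned iSCSI port.
TARGET_PORTAL = ("192.0.2.10", 3260)

# Because iSCSI is just SCSI commands inside ordinary TCP segments,
# basic reachability can be tested with a plain TCP connection.
try:
    with socket.create_connection(TARGET_PORTAL, timeout=5):
        print(f"iSCSI portal {TARGET_PORTAL[0]}:{TARGET_PORTAL[1]} is reachable")
except OSError as exc:
    print(f"Cannot reach portal: {exc}")
```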

Fibre Channel

Fibre Channel is built from the ground up just for storage. This means it is a pared-down stack optimized for that purpose. It often "just works" and doesn't need the tuning, or have the issues, that come with iSCSI's extreme flexibility. However, being purpose-built means that extra infrastructure has to be bought for Fibre Channel. (To be fair, proper storage environments for iSCSI are also often architected with dedicated ports and switches, so the extra cost/complexity may come out in the wash.) Fibre Channel also operates at specific speeds, with current mainstream Fibre Channel going up to 64 Gbit/s and 128 Gbit/s already present in specific use cases.
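
For the curious, Linux exposes FC HBA details through sysfs. This minimal Python sketch, assuming the host has FC HBAs registered under the standard fc_host class, prints each port's WWPN, link state, and negotiated speed:

```python
from pathlib import Path

# On Linux, Fibre Channel HBAs are exposed under this sysfs class.
FC_HOST_DIR = Path("/sys/class/fc_host")

def read_attr(host: Path, name: str) -> str:
    """Read a single sysfs attribute, or '?' if it is absent."""
    try:
        return (host / name).read_text().strip()
    except OSError:
        return "?"

# Print each HBA port's WWPN, link state, and negotiated speed.
for host in sorted(FC_HOST_DIR.glob("host*")):
    print(
        host.name,
        read_attr(host, "port_name"),   # WWPN
        read_attr(host, "port_state"),  # e.g. Online
        read_attr(host, "speed"),       # e.g. 32 Gbit
    )
```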

Scrabble Bag of Letters: FCoE, FCIP, NVMe

But wait! What about those other things I hear about? FCoE, FCIP, NVMe over… stuff? As of right now, almost all of these serve extremely limited use cases.

FCoE is Fibre Channel over Ethernet. It allows Fibre Channel packets to be transmitted over the Ethernet standard, cutting the TCP/IP stack out of the works. To make it function as needed, however, a lot of settings are required on the NICs and switches to get them performing in the lossless, low-latency manner that the FC protocol expects. FCoE gained popularity for a while, but has now somewhat fallen by the wayside.

FCIP is Fibre Channel over TCP/IP, which is essentially the worst combination of things: the extremely reliable and resilient FC protocol shoved over an uncontrolled, unpredictable IP network. FCIP is almost exclusively relegated to SAN replication at this point.

Finally, we get to what everyone is pinning their hopes and dreams on: NVMe over… things.

NVMe over Fibre Channel, NVMe over Ethernet, and NVMe over TCP are exactly what they sound like: the fast NVMe signaling and messaging we talked about earlier, put over familiar interfaces for long-range access. NVMe over Fibre Channel uses the fast, reliable FC protocol to encapsulate NVMe data and allow remote access. NVMe over Ethernet with RDMA encapsulates NVMe directly in Ethernet frames, much as NVMe over TCP encapsulates it in TCP. Because both of those rely on normal networks, they need a little more configuration, validation, and testing (like we talked about with iSCSI).
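
As a sketch of what discovery looks like in practice, assuming a Linux host with the nvme-cli package installed and a hypothetical NVMe/TCP target address, the standard discovery command can be driven from Python like this (4420 is the IANA-assigned NVMe-oF port):

```python
import subprocess

# Placeholder target address; 4420 is the IANA-assigned NVMe-oF port.
TARGET_ADDR = "192.0.2.20"

# nvme-cli ships with most Linux distributions; discovery asks the
# target which NVMe subsystems it is willing to export over TCP.
result = subprocess.run(
    ["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", "4420"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```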

But wait again! What about the way Storage Spaces Direct (S2D) and vSAN communicate? What do they use? In both cases, they use TCP/IP over normal networks. S2D uses a special implementation of SMB3 with RDMA for extra performance, and it also lets you share out files from an S2D cluster to normal servers over SMB. VMware vSAN uses a proprietary method for host connectivity and communication that can't be read by non-vSAN servers, although as of version 7 it can also serve file shares over NFS.

In short, there are lots of factors to keep in mind when buying storage, especially flash: everything from the type of drives and how you plan to deploy them, to the interfaces they use and the interconnect between hosts and storage.

If you need help sizing a storage solution, we’re here. Call us at 502-240-0404 or email info@mirazon.com!
