UPDATE: VMware 6.5 (and VMFS 6.0) supports 512e drives as direct-attached drives. vSphere and vSAN will expose 512-byte sectors to the guest OS. However, VMware still does not support 4K native drives. Read more in VMware’s KB announcement.
We went over the differences in hard disk sector sizes in the previous post. But why does this matter now? Let’s continue on into real-life implementations and implications …
We know that there are now three options: 512-native sectors, 512e sectors, and 4K native sectors. If the hard drive manufacturers agreed in 2009 that they were going to this, and it’s been six years, surely this isn’t a problem anymore, right? Well, not necessarily, since Seagate recently released its first 10,000 RPM (10K) enterprise drive that doesn’t natively support 512-byte sectors at all. The new 1.8 TB drives only come in 512e and 4K sector sizes. The 7.2K drives have been at 512e/4K for a while, but this is the first 10K drive. For a lot of organizations running VMs, the only drives they run on are 10K, maybe with some 7.2K for archive or something else.
Does Your Environment Support 4K or 512e Hard Disk Sector Size?
The major problem lies with operating system support of this new formatting methodology. Microsoft has fully supported 4K drives natively since Server 2012 (four years ago). 512e is supported all the way back through Windows Server 2008, except that nothing prior to 2012 supports 512e for Hyper-V. Even so, that’s a pretty large support base: not for everything, but it should cover most of your deployments, because you’ve gotten rid of all your 2003 boxes now that it’s past end of extended support … right? Most Linux distros also support 4K sectors.
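As a quick sanity check, you can ask the OS what sector sizes it actually sees and classify the drive from there. A minimal sketch; the `classify_sector_format` helper below is illustrative, not a standard tool:

```shell
#!/bin/sh
# On Linux, report each block device's logical and physical sector sizes:
#   lsblk -o NAME,LOG-SEC,PHY-SEC
# On Windows Server 2012 and later, from an elevated prompt:
#   fsutil fsinfo sectorinfo C:
# (compare "Bytes Per Sector" with "Bytes Per Physical Sector")

# Classify a drive given the logical and physical sizes reported above.
classify_sector_format() {
    logical=$1
    physical=$2
    if [ "$logical" -eq 4096 ]; then
        echo "4Kn"        # 4K logical and physical: 4K native
    elif [ "$logical" -eq 512 ] && [ "$physical" -eq 4096 ]; then
        echo "512e"       # 512 logical over a 4K physical sector: emulated
    else
        echo "512n"       # 512 logical and physical: native 512
    fi
}

classify_sector_format 512 4096   # prints "512e"
```

A 512/512 result means you are on classic native drives; 512/4096 means 512e (working, but watch for read-modify-write penalties on unaligned I/O); 4096/4096 means 4K native.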
VMware 6.0 and below (anything before VMFS 6), on the other hand, doesn’t support 512e, and VMware as a whole doesn’t support 4K. At all. In the previous post, we covered abstraction layers and the fact that you don’t necessarily need 512-native disks to get your SAN to present 512-byte sector sizes. That’s really what’s saved VMware this long: not having to see the back-end disks because something sits inline. However, that’s starting to be a problem. Neither ESXi 6 nor vSAN supports 4K disks or 512e disks, and 6.5 ONLY supports 512e. Because of the potential performance issues we listed before, VMware previously chose not to support 512e disks at all rather than support them with caveats, which left it with no support for the new drives.
If you’re on VMware 6.0 or lower and you’re running a SAN, that SAN probably abstracts those drives away from the hypervisor, so you likely don’t have a problem. But what if you’re running local storage in a server? If you’re running vSAN, well … you’re up a creek without a paddle, because it’s not supported.
But what about running a local RAID card on VMware 6.0 or earlier?
Well, it depends. Some RAID cards simply pass the sector size of the back-end disks through to the front-end clients. If that back-end disk is 4K, VMware won’t work with it behind the RAID card. If it’s 512e, it will look like 512 to VMware (even though it’s 512e) and it will work, albeit unsupported, which could present a performance problem. This is especially noteworthy, again, because we now have our first drive class with no native 512 option. That means if you have the 1.8 TB drives and you’re doing local storage (be it vSAN or local datastores), you’re basically unsupported on VMware 6.0 and lower, because one way or another you’d be running 4K or 512e. How this behaves through a RAID card varies from manufacturer to manufacturer, but in short, be very careful about which drives you buy right now from a VMware support standpoint.
Check your hardware and software compatibility lists before committing, to be sure.
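If you’re on 6.5 and want to verify what the host itself reports, `esxcli storage core device capacity list` lists per-device block sizes along with a format type (512n, 512e, or 4Kn). A hedged sketch; the sample line and its column order below are made up for illustration, so check the actual output on your own host:

```shell
#!/bin/sh
# On an ESXi 6.5 host, list what the hypervisor believes each device's
# block sizes are; the "Format Type" column reads 512n, 512e, or 4Kn:
#   esxcli storage core device capacity list

# Illustrative parsing of one hypothetical output line (device, logical
# block size, physical block size, block count, size, format type):
sample_line="naa.55cd2e404c123456  512  4096  3516328368  1716957  512e"
format_type=${sample_line##* }    # grab the last whitespace-separated field
echo "$format_type"               # prints "512e"
```

If the format type for a local datastore disk comes back 512e on 6.5, you’re in supported territory; 4Kn is still a no-go.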