Quite simply, AFS stands for “Acropolis File Services” and ABS stands for “Acropolis Block Services”, so there you have it… Oh, you’re still here? That wasn’t enough? You want to know more? Ok then, read on…
The Nutanix storage layer is referred to as the “Nutanix Distributed Storage Fabric” (NDSF) and can be presented in a number of ways. NFSv3, SMBv3, iSCSI and CIFS are the currently supported transport/network/presentation methods. Nutanix treats storage as a single pool, and its software can present that storage via whichever methods are required, simultaneously. Having a flexible single pool of storage that’s centrally managed and easy to scale negates the need for standalone solutions addressing single use-cases. Running silos of SANs for VMs and raw disk presentation, NAS for file data, and something expensive and whizzy for object storage makes no sense. Amazon, Google and other web giants don’t approach storage in this way, so why do it in the private data centre?
AFS is deployed initially using 3 VMs distributed within a Nutanix cluster for high availability/resilience; no two VMs can reside on the same hypervisor host. If there’s a need to keep file services physically and/or logically separated, AFS can be deployed on a standalone Nutanix cluster too. The AFS architecture mirrors the underlying AOS, which means file serving is balanced across the cluster, ensuring all users get the same performance. Once deployed, AFS is joined to an Active Directory and a single namespace is presented. The administrator can define shares, associate user permissions and, if required, enable hard/soft storage quotas.
“Can AFS be joined to an existing Microsoft DFS?”
This is a commonly asked question, and the answer is no.
“What hypervisors can I deploy AFS on?”
The supported hypervisors today are AHV and VMware’s vSphere deployed on Nutanix.
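Once AFS presents its namespace, clients connect to shares using standard SMB tooling. As a rough sketch (the server name `afs01.corp.example.com`, share name `projects`, and credentials below are all placeholders, not names from any real deployment):

```shell
# Hypothetical example: mounting an AFS SMB share from a Linux client.
# Substitute the namespace, share and credentials defined on your cluster.
sudo mount -t cifs //afs01.corp.example.com/projects /mnt/projects \
    -o username=jsmith,domain=CORP,vers=3.0

# Windows clients map the same share natively, e.g.:
#   net use P: \\afs01.corp.example.com\projects
```

The point being: because AFS speaks standard SMB, no special client software is needed on either platform.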
The Block Services feature simply carves out an allocation of capacity from the Nutanix storage pool and presents it over the network via iSCSI. Connect your standalone servers to this shared storage and hey-presto, you’re using Nutanix for high-performing, scale-out, distributed storage.
“Surely ABS is no different to carving up LUNs then?”
That’s not strictly true. The purpose of this functionality is to serve workloads that don’t fit the ‘virtual world’ for varying reasons, such as locally installed peripheral adapters, high socket-count compute demands, or licensing constraints (think Oracle)… If workloads need to remain physical but still need highly available and scalable storage, that’s where ABS really comes into its own.
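From the physical server’s point of view, attaching to ABS storage looks like any other iSCSI login. A minimal sketch using the standard open-iscsi tools on a Linux host, assuming a placeholder portal IP (`10.0.0.50`) and target IQN (neither is taken from a real cluster):

```shell
# Hypothetical sketch: attaching a physical Linux server to ABS storage
# via open-iscsi. The portal IP and target IQN below are placeholders
# for the values exposed by your own cluster.

# Discover iSCSI targets advertised at the cluster's data-services address
sudo iscsiadm -m discovery -t sendtargets -p 10.0.0.50:3260

# Log in to the discovered target
sudo iscsiadm -m node \
    -T iqn.2010-06.com.nutanix:example-volume-group \
    -p 10.0.0.50:3260 --login

# The new block device now appears locally (check with lsblk) and can be
# partitioned and formatted like any direct-attached disk
lsblk
```

Because the server sees an ordinary block device, nothing about the workload itself needs to change to benefit from the distributed storage behind it.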