People new to Nutanix are often confused by some of the terminology and concepts, so let's get back to basics with this '101' explanation, which also covers some of the underlying architecture.
What is a block?
- This is the chassis that contains the servers, disks, power supplies and cooling fans
- There are no RAID controllers or inter-node connecting backplanes
- The block permits the ‘hot-plugging’ of nodes (see below for node explanation), power supplies and disks but not fans
- The block can be populated with 1 to 4 nodes
- The block does not have to be fully populated
- Only nodes of the same model type and generation can reside in the same block
- Distributing blocks across multiple racks supports the idea of building out across the data centre as opposed to building up within racks, which in turn reduces the fault domain
- All models have 2 power supplies per chassis; a single power supply can support all the nodes in the chassis
What is a node?
- This is the collective name for a single server and the disks it comes with; you may also hear it referred to as a host
- A node will be populated with 1 or 2 physical CPUs, varying RAM density, network adapter(s), an onboard 'out-of-band' management NIC and an optional NVIDIA GRID card
- All nodes are configured to order; review the Hardware Specifications on the Nutanix website or from the OEM vendor of your choice
How do the nodes communicate?
- The nodes independently connect to a top of rack (ToR) switch as there is no backplane within a chassis
- It’s recommended to have 10GbE networking although 1GbE is supported for small Remote Branch Office deployments
Can I mix Nutanix nodes and blocks in the same Nutanix Cluster?
- Yes, Nutanix does not impose any restrictions, which means a Nutanix cluster can be formed of one model type, say compute-focused, and then have another node model added that has a more storage-centric capability
- The only item to consider is how the hypervisor manages the differing CPU models (if adding newer generations). In VMware, EVC would most likely be enabled; in Hyper-V this is CPU compatibility mode; and in AHV, CPU compatibility is on by default and not configurable
- You can mix All Flash and Hybrid configured nodes in the same cluster too. The Nutanix recommendation is to ensure you have the correct number of nodes per model type for the planned RF (see below for explanation). For example, mixing 2 x All Flash and 4 x Hybrid nodes in an RF2 cluster satisfies the minimum of 2 nodes per type. If RF3 were selected, this wouldn't be ideal for the All Flash nodes, as the tertiary write would have to be placed on a Hybrid node – this may well be OK, but the load on the SSDs of the Hybrid nodes needs to be considered.
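The sizing guidance above can be sketched as a quick check. This is a hypothetical helper for illustration, not a Nutanix tool; the function and dictionary names are invented:

```python
# Illustrative check: when mixing node types, each type should have at
# least RF nodes so that all copies of a write can stay within that
# tier (e.g. All Flash data staying on All Flash nodes).
# This is a sketch of the guidance above, not a Nutanix utility.

def enough_nodes_per_type(nodes_by_type, rf):
    """nodes_by_type: e.g. {'all_flash': 2, 'hybrid': 4}; rf: 2 or 3."""
    return {t: count >= rf for t, count in nodes_by_type.items()}

# The example from the text: 2 x All Flash and 4 x Hybrid.
print(enough_nodes_per_type({'all_flash': 2, 'hybrid': 4}, rf=2))
# -> {'all_flash': True, 'hybrid': True}

# With RF3 the All Flash tier falls short, so a tertiary write would
# have to land on a Hybrid node.
print(enough_nodes_per_type({'all_flash': 2, 'hybrid': 4}, rf=3))
# -> {'all_flash': False, 'hybrid': True}
```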
- A single Nutanix Cluster cannot comprise mixed hardware vendors, e.g. Nutanix NX with Dell XC nodes; software validation prevents this from happening. The reason is not software compatibility but support ownership: who would be called in the event of an issue?
What does the model number mean and how do I interpret it?
- This is best explained using the following diagram:
- The product series (correct as of 01/2020) reflects the NX (SuperMicro) offering from Nutanix and does not apply to other hardware vendors
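As a rough companion to the diagram, the NX model string can be broken into fields. The sketch below assumes a commonly cited breakdown (series digit, node-count digit, variant digits, generation suffix); treat the field meanings as illustrative assumptions, not an official specification:

```python
import re

# Hypothetical parser assuming the breakdown NX-<series><nodes><variant>-G<gen>,
# e.g. NX-3460-G6 -> series 3, 4 nodes in the block, variant 60, generation 6.
# The field meanings are assumptions for illustration, not an official spec.

def parse_nx_model(model):
    m = re.fullmatch(r"NX-(\d)(\d)(\d{2})-G(\d)", model)
    if not m:
        raise ValueError(f"Unrecognised model string: {model}")
    series, nodes, variant, gen = m.groups()
    return {
        "series": int(series),         # product series (e.g. 1, 3, 8)
        "nodes_in_block": int(nodes),  # nodes shipped in the chassis
        "variant": variant,            # model variant within the series
        "generation": int(gen),        # hardware generation (G suffix)
    }

print(parse_nx_model("NX-3460-G6"))
# -> {'series': 3, 'nodes_in_block': 4, 'variant': '60', 'generation': 6}
```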
What is AHV?
- It’s a branch of KVM with code optimisation focused on delivering the Acropolis architecture
- It has a reduced software footprint, and additional hardening applied
- Code that has been re-written and updated is contributed back to the KVM project where applicable
- You may hear it referred to as Nutanix’s own hypervisor, which isn’t too far from the truth
- There is no additional cost, it’s included in the purchase price and pre-loaded on all nodes shipped from the factory (unless you purchase through an OEM and choose a different hypervisor)
Has Nutanix released its own hypervisor to compete with the other hypervisor vendors?
- No, that’s not the case, regardless of what is discussed on social media
- The hypervisor isn’t a selling point of Nutanix, it’s there to underpin the Enterprise Cloud Platform architecture
What is the minimum Nutanix Cluster size?
- A Nutanix Cluster starts at 3 nodes, regardless of the node model
- There are 1 and 2 node options; however, these are not intended to be standalone but managed by Prism Central as part of a larger Nutanix deployment – only AHV and ESXi are supported
I see (and hear) RF mentioned a lot, what is it?
- RF is an abbreviation for Replication Factor for a Storage Container and also referred to as Redundancy Factor for a Nutanix Cluster
- There are two options for the RF: 2 copies or 3 copies of data. Remember, Nutanix does not rely on hardware (such as RAID) to protect data, so additional copies of the data have to be made
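The idea behind RF can be sketched in a few lines: every write is stored on RF distinct nodes, so the data survives the loss of RF - 1 nodes. This toy round-robin placement is purely illustrative; actual AOS placement also weighs disk utilisation, awareness domains and metadata:

```python
# A minimal sketch of Replication Factor (RF): every write is stored
# on RF distinct nodes, so the data survives RF - 1 node failures.
# Toy placement only; not the actual AOS algorithm.

def place_copies(extent_id, nodes, rf=2):
    """Choose rf distinct nodes for the copies of one write."""
    if len(nodes) < rf:
        raise ValueError("cluster needs at least RF nodes")
    start = extent_id % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(rf)]

nodes = ["node-A", "node-B", "node-C"]
print(place_copies(0, nodes, rf=2))  # -> ['node-A', 'node-B']
print(place_copies(1, nodes, rf=2))  # -> ['node-B', 'node-C']
```

Note the minimum cluster sizes follow from this: RF2 needs at least 3 nodes so a full set of copies can be rebuilt after one failure.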
What is block awareness?
- When multiple nodes/blocks have been deployed, this always-on feature ensures a secondary or tertiary disk write is completed in another block, which reduces the fault domain
- In RF2 a minimum of 3 blocks with 1 node per block is required
- In RF3 a minimum of 5 blocks with 1 node per block is required
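The placement rule can be illustrated with a toy example: when choosing nodes for the replica copies, prefer nodes in a different block from the one receiving the primary write, so a whole-block failure cannot take out every copy. This sketch uses invented names and is not the actual AOS algorithm:

```python
# Toy illustration of block awareness: replica targets are picked from
# blocks other than the primary's block, shrinking the fault domain to
# a whole chassis. Sketch only; not the actual AOS placement logic.

def block_aware_targets(primary, nodes, rf):
    """nodes: dict of node -> block id; returns rf - 1 replica targets."""
    targets, used_blocks = [], {nodes[primary]}
    for node, block in nodes.items():
        if len(targets) == rf - 1:
            break
        if block not in used_blocks:
            targets.append(node)
            used_blocks.add(block)
    return targets

# The RF2 minimum from the text: 3 blocks, 1 node per block.
cluster = {"n1": "block-1", "n2": "block-2", "n3": "block-3"}
print(block_aware_targets("n1", cluster, rf=2))  # -> ['n2']
```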
What is Prism?
- This is the management tool for administering a Nutanix Cluster
- No additional software needs to be deployed to run Prism, it’s natively installed in the Controller Virtual Machines – there’s no need to manage the management layer
- All metadata is distributed across the Nutanix Cluster, so there is no single point of failure in the management layer
What is Prism Central?
- Prism Central runs from a dedicated virtual machine, and as of AOS 5.5, it can be deployed in a scale-out model using 3 VMs
- Prism Central connects to standalone Nutanix Clusters regardless of their underlying hypervisor and hardware provider. It extracts the local cluster metrics into its own database to provide further intelligence via custom dashboards, intelligent searching and reporting
- Prism Central provides far more management tooling including capacity planning and ‘what if?’ scenario modelling
- Additional features such as Calm (Continuous Application Life-cycle Management) and network Micro-segmentation (Flow) are also (and only) available through Prism Central