High performance, low latency applications demand far more throughput and capability from a hypervisor kernel and, in Nutanix's case, from the Controller Virtual Machine (CVM). Installing faster hardware doesn't simply increase throughput if the underlying software remains the same. With Nutanix AHV Turbo, the hypervisor kernel is bypassed for disk I/O, which dramatically accelerates data flow; that faster data path in turn requires rapid network access for data and metadata replication, hence the RDMA and 40GbE connectivity.
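To make the bandwidth point concrete, here is a rough back-of-envelope sketch rather than an official sizing figure: it assumes a node sustaining around 2 GB/s of writes to local NVMe (an illustrative number, not an NX-9030 specification) and a replication factor of 2, so every local write also generates a remote copy that has to cross the network.

```python
# Hypothetical back-of-envelope check: can the replication network keep up
# with local NVMe writes? The write rate below is an assumed, illustrative
# figure, not a published NX-9030 specification.

ASSUMED_NVME_WRITE_GB_S = 2.0   # assumed sustained local NVMe write rate, GB/s
REPLICATION_FACTOR = 2          # RF2: one remote copy for every local write
LINKS_GB_S = {"10GbE": 10 / 8, "40GbE": 40 / 8}   # raw line rate converted to GB/s

# Remote replication traffic per node: (RF - 1) copies of each write.
needed_gb_s = ASSUMED_NVME_WRITE_GB_S * (REPLICATION_FACTOR - 1)

for name, line_rate in LINKS_GB_S.items():
    verdict = "headroom" if line_rate > needed_gb_s else "bottleneck"
    print(f"{name}: {line_rate:.2f} GB/s available vs {needed_gb_s:.2f} GB/s needed -> {verdict}")
```

Under those assumptions a 10GbE link is already the bottleneck for replication traffic alone, while 40GbE leaves headroom for metadata traffic and bursts.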
Before deploying these nodes in an environment, a few prerequisites must be met.
- Top-of-rack switching must be DCB (Data Center Bridging) capable, Wiki article here
- Top-of-rack switching must support RoCEv2, Wiki article here (a host-side check for RoCE-capable adapters is sketched after this list)
- The NX-9030 nodes cannot be included in an existing Nutanix Cluster
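Alongside the switch requirements, it is worth confirming that the hosts themselves expose RoCE-capable adapters. The sketch below is a minimal, illustrative check (not a Nutanix tool) that walks the standard Linux RDMA sysfs tree and reports each device's port link layer; for RoCEv2 the link layer should read Ethernet. It assumes a Linux host with the RDMA driver stack loaded.

```python
#!/usr/bin/env python3
"""Minimal sketch: list RDMA devices and their port link layer on a Linux host.

Uses the standard RDMA sysfs layout (/sys/class/infiniband). For RoCEv2 the
link layer should report "Ethernet". Illustrative only, not a Nutanix utility.
"""
from pathlib import Path

RDMA_SYSFS = Path("/sys/class/infiniband")

def main() -> None:
    if not RDMA_SYSFS.is_dir():
        print("No RDMA devices found (is the RDMA driver stack loaded?)")
        return
    for device in sorted(RDMA_SYSFS.iterdir()):
        for port in sorted((device / "ports").iterdir()):
            link_layer = (port / "link_layer").read_text().strip()
            state = (port / "state").read_text().strip()  # e.g. "4: ACTIVE"
            rate = (port / "rate").read_text().strip()    # e.g. "40 Gb/sec (4X QDR)"
            print(f"{device.name} port {port.name}: {link_layer}, {state}, {rate}")

if __name__ == "__main__":
    main()
```

If a port reports InfiniBand rather than Ethernet as its link layer, that port is not running RoCE.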
The diagram below is not representative of an actual deployment; it is simply here to show two nodes, their storage capability, and the network providing inter-CVM replication.
At the time of release, the available disk capacity options were…
- 1.6TB NVMe
- 960GB, 1.92TB & 3.84TB SSD
…per node.