Networking is a foundational component of a Nutanix cluster. Understanding how virtual networking works, how to configure VLANs and isolate traffic, and how to implement advanced features such as NIC Teaming, Microsegmentation, and VPCs will allow you to manage and secure network traffic effectively.
Nutanix leverages hypervisor-based networking to handle the flow of traffic within and between Virtual Machines (VMs) as well as management and storage communication.
Virtual networking in Nutanix is based on virtual switches that allow VMs to communicate with each other and external systems. These switches are created and managed by the hypervisor running on each node.
The vSwitch operates at Layer 2 of the OSI model (Data Link Layer), allowing VMs in the same VLAN to communicate with each other.
Layer 2 Networking:
VLAN Configuration:
NIC Teaming:
Nutanix AHV uses Open vSwitch (OVS) as its default virtual switch. OVS is an open-source virtual switch that is highly flexible and powerful.
VLAN Tagging:
Quality of Service (QoS):
Integration:
A VLAN (Virtual LAN) is used to segment network traffic logically without requiring separate physical switches or cables. VLANs improve security, performance, and traffic isolation.
Access Prism Element:
Create VLAN IDs:
Assign VLAN Tags to vNICs:
Verify VLAN Configuration:
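The steps above are performed in Prism; on AHV the same network can also be created from the aCLI on a Controller VM. A minimal sketch, assuming a hypothetical network name vlan100 and VLAN ID 100:

```shell
# Create an AHV network tagged with VLAN 100, then list networks to verify.
# Run from the acli shell on a Controller VM; the name and ID are examples.
acli net.create vlan100 vlan=100
acli net.list
```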
Network segmentation further improves security and traffic management by isolating different types of traffic into separate VLANs.
Separate Management, Storage, and VM Traffic:
Microsegmentation (Nutanix Flow):
NIC Teaming aggregates multiple physical NICs (Network Interface Cards) to improve redundancy and performance.
Active-Active:
Active-Backup:
Access Prism:
Create a Bond:
Verify Configuration:
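The bond-creation step can also be sketched from the command line, assuming bridge br0 and uplinks eth0/eth1 (actual interface names vary per host):

```shell
# Rebuild br0's uplink bond in active-backup mode with the Nutanix
# manage_ovs utility (run from a CVM; interface names are examples).
manage_ovs --bridge_name br0 --interfaces eth0,eth1 \
           --bond_mode active-backup update_uplinks
# Inspect bond membership and the currently active member on the AHV host:
ovs-appctl bond/show
```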
Advanced networking features in Nutanix provide greater control, security, and flexibility in managing network traffic. These features include Network I/O Control, Microsegmentation with Nutanix Flow, and Virtual Private Cloud (VPC). Let’s break them down step by step with detailed explanations, examples, and use cases.
Network I/O Control allows administrators to prioritize network traffic by defining Quality of Service (QoS) policies. This ensures that critical workloads get the bandwidth they need, while non-critical traffic can be limited to avoid contention.
Access Prism Element:
Set QoS Policies for vNICs:
Apply and Monitor:
Imagine you have two VMs:
Using Network I/O Control:
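QoS policies are applied through Prism; at the OVS layer, a comparable rate limit can be sketched with ingress policing on a VM's tap interface (vnet0 is a hypothetical interface name; rates are in kbps):

```shell
# Cap a vNIC (hypothetical tap interface vnet0) at roughly 100 Mbps.
# ingress_policing_rate is in kbps; a burst near 10% of the rate is typical.
ovs-vsctl set interface vnet0 ingress_policing_rate=100000
ovs-vsctl set interface vnet0 ingress_policing_burst=10000
```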
Microsegmentation is a zero-trust networking model that provides granular control over traffic between VMs. Instead of relying only on external firewalls, microsegmentation enforces security rules inside the cluster to protect East-West traffic (traffic between VMs).
Nutanix Flow uses distributed firewalls to enforce network security rules. These firewalls are:
Access Nutanix Flow:
Define Categories for VMs:
Create Security Policies:
Apply Policies to VMs:
Monitor Traffic:
Using Nutanix Flow:
Improved Security:
Granular Control:
Simplified Management:
A VPC allows you to create isolated private networks within a Nutanix cluster. VPCs logically segment workloads to provide better security and network control.
Access Prism Central:
Create a New VPC:
10.10.0.0/24 for a subnet with 256 addresses (254 usable hosts).
Create Subnets:
10.10.1.0/24 for Web Servers.
10.10.2.0/24 for Database Servers.
Assign VMs to the VPC:
Verify Connectivity:
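The subnet sizes used above follow directly from the prefix length; a quick shell sketch of the arithmetic:

```shell
# Address count for a /24 such as 10.10.1.0/24: 2^(32 - prefix).
prefix=24
addresses=$((1 << (32 - prefix)))   # 2^(32 - prefix) via bit shift
usable=$((addresses - 2))           # subtract network and broadcast addresses
echo "/$prefix gives $addresses addresses, $usable usable hosts"
```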
Network security is essential in any IT environment, and Nutanix provides tools and best practices to ensure that your cluster and workloads are protected.
Security policies are sets of rules that determine what traffic is allowed or denied between VMs, subnets, or categories in your Nutanix cluster. These policies are enforced using tools like Nutanix Flow and VLAN segmentation.
Access Prism Central:
Define Categories:
Create Security Policies:
Attach Policies to Categories:
Test the Rules:
Let’s say you have the following workloads:
Security Policies:
This segmentation ensures:
Monitoring network traffic helps you identify abnormal behavior, optimize performance, and ensure that your security policies are working effectively.
Access Network Traffic Analytics:
Identify Abnormal Traffic:
Analyze Bandwidth Usage:
Review Firewall Logs:
Nutanix can integrate with third-party tools like Splunk, ELK (Elasticsearch, Logstash, Kibana), or Syslog servers for advanced traffic analysis.
Network isolation ensures that sensitive workloads, management traffic, and storage communication are protected from unauthorized access. Isolation can be achieved using VLANs, microsegmentation, and Virtual Private Clouds (VPCs).
Use separate VLANs for different types of traffic:
Benefits:
10.1.0.0/24.
10.2.0.0/24.
Define Security Policies:
Monitor Traffic:
Enforce Network Isolation:
Review Logs Regularly:
Networking problems in a Nutanix cluster can arise from misconfigurations, hardware failures, or software issues. Here are the most common issues:
Misconfigured VLANs
Network Interface (NIC) Failures
Incorrect Firewall Rules
IP Address Conflicts
Routing and Connectivity Issues
Nutanix provides built-in tools, and you can use common network troubleshooting commands to identify and resolve issues.
The Prism Dashboard provides a real-time overview of network health and traffic.
The Nutanix Cluster Check (NCC) tool performs automated health checks and identifies misconfigurations or errors.
Access the CVM (Controller Virtual Machine) via SSH.
Run the following command:
ncc health_checks network_checks
Review Results:
Here are useful Linux commands for troubleshooting network issues:
| Command | Description | Example |
|---|---|---|
| ping | Tests connectivity to a specific IP. | ping 8.8.8.8 |
| traceroute | Shows the path packets take to reach a host. | traceroute google.com |
| ifconfig | Displays NIC configurations and statuses (legacy; ip addr is the modern equivalent). | ifconfig |
| ovs-vsctl | Manages Open vSwitch settings. | ovs-vsctl show |
| netstat | Displays network statistics. | netstat -an |
Since Nutanix AHV uses Open vSwitch (OVS), OVS-specific commands are important for identifying issues.
View OVS Configuration:
Check the virtual switch configuration and ports:
ovs-vsctl show
Check Port Status:
Verify if the VM's vNIC is connected to the vSwitch:
ovs-ofctl show br0
Replace br0 with the appropriate OVS bridge name.
Monitor Traffic:
View the MAC learning table (forwarding database) of a virtual switch:
ovs-appctl fdb/show br0
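Per-port traffic counters (packets, bytes, errors) can also be pulled from the bridge; a diagnostic sketch (replace br0 with your bridge name as above):

```shell
# Dump receive/transmit counters for every port on the bridge.
ovs-ofctl dump-ports br0
```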
Symptoms:
Steps to Troubleshoot:
Verify VLAN configuration on the VM’s virtual NIC:
Verify VLAN configuration on the switch:
Use ovs-vsctl to check VLANs on Open vSwitch:
ovs-vsctl show
Ping between VMs on the same VLAN to test connectivity:
ping <destination VM IP>
Symptoms:
Steps to Troubleshoot:
Check NIC status in Prism:
Verify NIC status using ifconfig:
ifconfig eth0
If using NIC Teaming:
Replace the failed NIC or move workloads to a different node.
Symptoms:
Steps to Troubleshoot:
Review firewall rules in Nutanix Flow:
Test basic reachability with ping, and use telnet to check whether specific TCP ports are open:
telnet <destination IP> <port>
Adjust the firewall rules in Flow to allow required communication.
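Where telnet is not installed, bash's built-in /dev/tcp pseudo-device can test a TCP port. A sketch, with the host and port as placeholder examples:

```shell
# Check whether a TCP port answers (127.0.0.1:59999 is an example target
# chosen to be closed; substitute the destination VM IP and service port).
host=127.0.0.1
port=59999
if timeout 2 bash -c "cat < /dev/null > /dev/tcp/$host/$port" 2>/dev/null; then
  result="open"
else
  result="closed or filtered"
fi
echo "$host:$port is $result"
```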
Symptoms:
Steps to Troubleshoot:
Verify default gateway settings for VMs:
Use traceroute to identify where the packet drops:
traceroute <destination IP>
Check DNS settings:
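A quick resolver sanity check from the host or CVM; localhost is used here as a known-good name, and any real hostname can be substituted:

```shell
# Resolve a name using the host's configured lookup order (/etc/nsswitch.conf).
getent hosts localhost
# Example with a real name: getent hosts example.com
```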
This section expands on Nutanix cluster networking by addressing VLAN trunking, LACP, MTU tuning, Nutanix Flow vs. traditional firewalls, IPFIX monitoring, Zero Trust Networking, and advanced troubleshooting techniques.
Nutanix Open vSwitch (OVS) supports both VLAN trunking and VLAN access ports, allowing administrators to design flexible network architectures.
To allow multiple VLANs through a single network port:
ovs-vsctl set port eth1 trunks=10,20,30
To configure a VLAN access port for a specific interface:
ovs-vsctl set port eth0 tag=10
To assign the bridge's internal port to a VLAN (for example, the native VLAN):
ovs-vsctl set port br0 tag=1
LACP dynamically manages NIC bonding, improving redundancy and load balancing.
| Mode | Description | Use Case |
|---|---|---|
| Active | Actively negotiates link aggregation with the switch. | Recommended for Nutanix hosts. |
| Passive | Waits for the switch to initiate link aggregation. | Use when the switch requires LACP initiation. |
Enable LACP active mode and configure load balancing (with LACP enabled, Nutanix recommends bond_mode balance-tcp; balance-slb does not require LACP):
ovs-vsctl set port bond0 lacp=active
ovs-vsctl set port bond0 bond_mode=balance-tcp
MTU tuning improves network performance by reducing packet fragmentation.
| Network Type | Recommended MTU |
|---|---|
| Management Traffic | 1500 (default) |
| Storage Traffic (iSCSI, RDMA, NFS) | 9000 (Jumbo Frames) |
ovs-vsctl set interface eth0 mtu_request=9000
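After raising the MTU, verify that jumbo frames actually pass end to end. The largest unfragmented ICMP payload is the MTU minus the 20-byte IP header and 8-byte ICMP header; a sketch of the arithmetic and the resulting ping test (the destination IP is a placeholder):

```shell
# Largest unfragmented ICMP payload for a 9000-byte MTU.
mtu=9000
payload=$((mtu - 20 - 8))   # subtract IP header (20) and ICMP header (8)
echo "max payload: $payload bytes"
# Then verify with fragmentation disallowed (substitute a storage-network IP):
# ping -M do -s $payload <storage IP>
```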
| Feature | Nutanix Flow (Microsegmentation) | Traditional Firewalls |
|---|---|---|
| Traffic Control | East-West (VM-to-VM) | North-South (External-to-Internal) |
| Deployment | Software-defined, no external hardware required | Requires physical or virtual appliances |
| Granularity | Per-VM firewall rules | Subnet-based filtering |
| Security Model | Zero Trust Networking (default deny) | Perimeter Security |
Prevent unauthorized VM communication:
Allow: Web-Servers → Database-Servers (TCP 3306)
Deny: All → Database-Servers (Default Deny)
IPFIX (IP Flow Information Export) is a flow-monitoring protocol supported by Nutanix Flow. It enables administrators to track VM communication patterns and detect anomalies.
ncli flow enable-ipfix
Security Central provides real-time security insights into Nutanix environments.
Zero Trust Networking (ZTN) enforces strict access controls, ensuring only authorized traffic is allowed.
| Principle | Description |
|---|---|
| Default Deny | Block all traffic unless explicitly allowed. |
| Least Privilege Access | Limit access to the minimum required level. |
| Multi-Factor Authentication (MFA) | Secure critical management interfaces. |
Allow Web Servers to Access Databases:
Allow: Web-Servers → Database-Servers (TCP 3306)
Deny Development-to-Production Access:
Deny: Dev-VLAN → Prod-VLAN
To perform a full network health check:
ncc health_checks network_checks
To check LACP status:
ovs-appctl bond/show bond0
| Issue | Possible Cause | Solution |
|---|---|---|
| LACP not negotiating | Switch-side LACP not enabled | Ensure LACP is active on the switch |
| LACP Disabled in OVS | Incorrect bond settings | Run ovs-vsctl set port bond0 lacp=active |
ping -I eth0 <destination IP>
traceroute <destination IP>
| Topic | Enhancements |
|---|---|
| Cluster Networking | Added VLAN Trunking, Native VLAN, LACP, MTU tuning. |
| Advanced Features | Expanded Nutanix Flow vs. Firewalls, IPFIX monitoring. |
| Security Best Practices | Added Security Central, Zero Trust Networking. |
| Troubleshooting | Improved NCC checks, LACP troubleshooting, ping/traceroute tests. |
A virtual machine on an AHV cluster cannot reach external networks after a VLAN configuration change. What is the most likely cause administrators should verify?
Administrators should verify that the VM network is configured with the correct VLAN ID and that the physical switch port supports the VLAN.
AHV networking relies on VLAN tagging to isolate and route traffic correctly. If the VLAN ID assigned to the VM network does not match the VLAN configured on the physical switch port, packets cannot be forwarded correctly outside the cluster. Administrators should confirm that the network configuration in Prism matches the VLAN configuration on the switch and that the switch port is configured as a trunk or access port as required. A common mistake is modifying VLAN settings in the cluster without updating corresponding switch configurations.
Demand Score: 84
Exam Relevance Score: 90
What role do AHV network bridges play in VM networking?
AHV network bridges connect virtual networks used by VMs to the physical network interfaces on cluster hosts.
A network bridge in AHV acts as the link between virtual machine network interfaces and the underlying physical NICs of the host. When a VM sends traffic, the bridge forwards packets to the physical network adapter associated with that bridge. This allows VMs to communicate with external networks and other hosts. Proper bridge configuration ensures traffic flows correctly between virtual and physical networks. A common mistake is assuming VLAN configuration alone enables connectivity, while overlooking the bridge that connects VM networks to the host interfaces.
Demand Score: 72
Exam Relevance Score: 86
When troubleshooting VM connectivity issues in AHV, why is verifying the VM’s network assignment important?
Because an incorrect network assignment can isolate the VM from required VLANs or network segments.
Each VM is attached to a specific network configured within the cluster. If the VM is assigned to the wrong network, it may be connected to a different VLAN or isolated segment. This prevents communication with expected services or external systems. Administrators should confirm that the VM is attached to the correct network object and that the network configuration matches the intended VLAN and bridge settings. A frequent mistake during troubleshooting is focusing on external network issues while overlooking incorrect VM network assignments.
Demand Score: 74
Exam Relevance Score: 84
Why is consistent VLAN configuration across cluster nodes important for AHV networking?
Because inconsistent VLAN configuration can cause communication failures between VMs and external networks.
In a Nutanix cluster, VMs may migrate between hosts through live migration processes. If VLAN configurations differ between nodes or physical switches, a migrated VM may lose network connectivity after relocation. Ensuring consistent VLAN support and trunk configurations across all hosts and connected switches allows VM traffic to remain accessible regardless of which node runs the VM. Administrators often overlook this requirement when expanding clusters or modifying switch configurations.
Demand Score: 70
Exam Relevance Score: 83