Designing NSX networking in a VCF context is about making a few big decisions that stay stable as everything scales: where routing happens, how segments connect, how traffic exits to the physical world, and how you extend those patterns across sites. A good mental model is to treat NSX as a “virtual network fabric”: a consistent layer of routing, switching, and security constructs that sits on top of the physical network.
In exams and real designs, you’ll often be judged on whether your choices produce predictable traffic paths and operational simplicity—not whether you can recite every feature name.
At a high level, most NSX traffic paths can be described as a sequence: Workload (VM/Pod) → Segment → Tier-1 (local routing/policy boundary) → Tier-0 (north-south edge and external routing) → Edge uplink → Physical network.
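The hop sequence above can be sketched as a small data model. This is purely a teaching aid for reasoning about where each decision happens; the `Hop` class and its fields are illustrative, not NSX API objects.

```python
# Illustrative model of the default NSX north-south traffic path.
# Class and field names are hypothetical teaching aids, not NSX API objects.
from dataclasses import dataclass


@dataclass
class Hop:
    name: str
    role: str  # what decision or policy applies at this hop


DEFAULT_PATH = [
    Hop("Workload (VM/Pod)", "traffic source"),
    Hop("Segment", "L2 attachment / overlay encapsulation"),
    Hop("Tier-1", "local routing and policy boundary"),
    Hop("Tier-0", "north-south edge and external routing"),
    Hop("Edge uplink", "handoff to the physical fabric"),
    Hop("Physical network", "external destination"),
]


def trace(path):
    """Return the hop names as an arrow-separated string."""
    return " -> ".join(h.name for h in path)


print(trace(DEFAULT_PATH))
```

Walking a scenario's flow through a model like this makes it obvious where a design places its routing and enforcement boundaries.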
Two design choices shape everything:
- Certificates, authentication, and trust at the base level
- Basic sizing and placement decisions
Scenario A: Designing connectivity inside a site (centralized vs distributed)
You’re given application tiers and required flows. Your job is to decide where routing boundaries live and what the “default path” should be.
Scenario B: Designing multi-site in VCF
A multi-site design usually forces you to answer: which segments stretch across sites, where North-South traffic egresses at each site, and how a failure stays contained to a single site.
Scenario C: Fleet design considerations
When you manage “a fleet” (multiple domains/instances/sites), consistency becomes a feature: the same gateway patterns, naming, and policies should mean the same thing in every domain.
Scenario D: Optimization and acceleration decisions
Design often includes “performance and resilience choices,” such as ECMP scale-out on Tier-0 gateways, Edge node sizing and placement, and failure-domain isolation.
The exam typically tests whether you can make reasonable design decisions from a scenario rather than recall isolated feature names.
You should now be able to read a scenario and quickly decide: where does routing happen, where does traffic exit, how do sites relate, and what the operator experience will be. Next, you’ll move from “design intent” into “how to deploy and configure” these constructs in real VCF/NSX workflows.
In scenario questions, the “right” design choice is usually the one that makes traffic paths predictable and makes enforcement happen where the scenario assumes it happens.
Use this compact mapping when you explain NSX architecture: workloads attach to segments, segments route through Tier-1 gateways, Tier-1 gateways link to Tier-0 gateways, and Tier-0 gateways hand traffic to the physical network.
A high-scoring exam explanation explicitly names the enforcement/decision point, for example “the Tier-1 gateway is the policy boundary here” or “egress happens at the Tier-0 on the Edge cluster.”
When options include multiple correct-sounding architectures, choose the one with the fewest hidden assumptions (no unexpected hairpin, clear egress, clear tenant boundaries).
When designing NSX routing architecture, when should a Tier-0 gateway operate in Active-Active mode?
Tier-0 should use Active-Active mode when high throughput and ECMP load balancing are required for North-South traffic.
In NSX architecture, the Tier-0 gateway connects the overlay network to the physical infrastructure. When configured in Active-Active mode, multiple Edge nodes simultaneously forward traffic using Equal Cost Multi-Path (ECMP) routing. This design distributes traffic across multiple paths, increasing throughput and scalability. Active-Active is ideal for environments with high external traffic volumes such as multi-tenant cloud environments or large enterprise workloads. However, certain services like stateful NAT or load balancing require Active-Standby mode because session state cannot be shared across multiple active nodes. Architects must evaluate service requirements and traffic patterns before selecting the gateway mode.
Demand Score: 91
Exam Relevance Score: 94
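The mode decision described above can be captured as a simple rule. This is a sketch of the decision logic only; the function name, the service labels, and the string return values are illustrative and do not correspond to an NSX API.

```python
# Decision sketch for Tier-0 HA mode, following the rule above: stateful
# centralized services (e.g. NAT, gateway load balancing) force Active-Standby
# because session state cannot be shared across active nodes; otherwise
# Active-Active enables ECMP scale-out for North-South throughput.
# Names and values are illustrative, not an NSX API.

def choose_tier0_ha_mode(required_services: set) -> str:
    """Return the HA mode a Tier-0 gateway should use."""
    # Services whose per-session state pins traffic to a single active node.
    STATEFUL = {"nat", "load_balancer"}
    if required_services & STATEFUL:
        return "ACTIVE_STANDBY"
    return "ACTIVE_ACTIVE"  # ECMP-capable; maximizes north-south throughput


print(choose_tier0_ha_mode({"nat"}))   # ACTIVE_STANDBY
print(choose_tier0_ha_mode(set()))     # ACTIVE_ACTIVE
```

The useful habit for scenario questions is the same as in the sketch: enumerate the required services first, then let the stateful ones drive the mode.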
What is the recommended minimum size for an NSX Edge cluster in production environments?
A production NSX Edge cluster should typically contain at least two Edge nodes to provide high availability.
Edge clusters host centralized networking services such as Tier-0 gateways, NAT, and load balancing. Deploying multiple Edge nodes ensures that services remain available if one node fails. In most environments, a two-node cluster provides basic redundancy, while larger deployments may use four or more Edge nodes to support ECMP routing and higher throughput. Workload demand, throughput requirements, and availability targets should influence cluster sizing decisions. If only one Edge node is deployed, the environment becomes vulnerable to service outages because centralized networking functions would stop during node failure.
Demand Score: 85
Exam Relevance Score: 90
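The sizing guidance above (two nodes for basic redundancy, more for ECMP scale-out) can be expressed as a tiny heuristic. This is not a VMware sizing formula; the function and its default are illustrative assumptions.

```python
# Sizing sketch for an Edge cluster: never fewer than two nodes (HA),
# and at least one node per required ECMP forwarding path.
# This heuristic is illustrative, not an official VMware sizing rule.

def edge_cluster_size(ecmp_paths_needed: int = 1) -> int:
    """Minimum Edge node count for availability and ECMP scale-out."""
    MIN_FOR_HA = 2  # a single node makes centralized services a single point of failure
    return max(MIN_FOR_HA, ecmp_paths_needed)


print(edge_cluster_size())    # 2 (basic redundancy)
print(edge_cluster_size(4))   # 4 (scale-out ECMP design)
```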
Why might an architect deploy multiple transport zones in an NSX environment?
Multiple transport zones can isolate networking domains and control which hosts participate in specific overlay networks.
Transport zones define the scope where logical switches and overlay segments can be deployed. In large environments, architects may create separate transport zones for different clusters, racks, or workload types. This approach helps isolate traffic domains and reduce unnecessary overlay participation by hosts that do not require access to certain networks. For example, management clusters and workload clusters may operate in separate transport zones to simplify network segmentation and reduce configuration complexity. Designing transport zones carefully helps improve scalability and maintain logical network boundaries.
Demand Score: 82
Exam Relevance Score: 88
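The scoping behavior described above (a segment is realizable only on hosts attached to its transport zone) can be modeled as a membership check. The dict-based inventory and all names here are hypothetical; real NSX expresses this through transport node and transport zone configuration.

```python
# Sketch of transport-zone scoping: only hosts attached to a segment's
# transport zone can participate in that overlay network.
# Host names and zone names are illustrative.

def hosts_for_segment(segment_tz: str, host_tz_map: dict) -> set:
    """Return the hosts that can realize a segment in the given transport zone."""
    return {host for host, zones in host_tz_map.items() if segment_tz in zones}


hosts = {
    "esx-mgmt-01": {"tz-mgmt"},
    "esx-wld-01": {"tz-workload"},
    "esx-wld-02": {"tz-workload"},
}
print(sorted(hosts_for_segment("tz-workload", hosts)))  # ['esx-wld-01', 'esx-wld-02']
```

The management host never appears in the workload result, which is exactly the isolation the design intends.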
What design consideration is important when connecting Tier-1 gateways to Tier-0 gateways?
Tier-1 gateways must be linked to a Tier-0 gateway to provide North-South connectivity: they connect logical segments and workloads to the upstream Tier-0, which in turn reaches external networks.
In NSX networking architecture, Tier-1 gateways handle workload routing within the overlay network, including East-West communication between segments. However, Tier-1 gateways do not directly connect to the physical network. Instead, they rely on Tier-0 gateways, which run on Edge nodes and provide connectivity to the external physical network. During design, architects must ensure that the correct Tier-0 gateway is attached to each Tier-1 gateway so workloads can reach external networks. Misconfigured Tier-1 connections can lead to traffic isolation where workloads cannot reach external resources.
Demand Score: 84
Exam Relevance Score: 89
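The reachability rule above (no Tier-0 attachment means no external connectivity) can be written as a one-line validation. The dict model and the `tier0_link` key are illustrative stand-ins for the Tier-1's Tier-0 attachment in NSX, not actual API fields.

```python
# Reachability sketch: a workload behind a Tier-1 can reach external networks
# only if that Tier-1 is linked to an upstream Tier-0.
# The dict shape and "tier0_link" key are illustrative, not NSX API fields.

def has_north_south_path(tier1: dict) -> bool:
    """True when the Tier-1 is attached to an upstream Tier-0 gateway."""
    return tier1.get("tier0_link") is not None


t1_ok = {"name": "t1-app", "tier0_link": "t0-prod"}
t1_isolated = {"name": "t1-lab", "tier0_link": None}
print(has_north_south_path(t1_ok))        # True
print(has_north_south_path(t1_isolated))  # False -> traffic isolation
```

A check like this is also the first thing to reason through when a scenario reports workloads that can reach each other but not the outside world.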
Why is ECMP commonly used in NSX Tier-0 gateway design?
ECMP allows traffic to be distributed across multiple active paths, improving scalability and bandwidth utilization.
Equal Cost Multi-Path (ECMP) routing enables multiple routes with the same cost to be used simultaneously. In NSX, this feature is typically enabled when Tier-0 gateways operate in Active-Active mode. Each Edge node can forward traffic using different physical uplinks, allowing the network to utilize available bandwidth more efficiently. ECMP also improves redundancy because traffic can continue flowing even if one path fails. Architects designing large VMware Cloud Foundation environments frequently use ECMP to support high traffic volumes and resilient North-South connectivity.
Demand Score: 88
Exam Relevance Score: 92
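The ECMP behavior described above can be illustrated with a toy path-selection function: each flow hashes to one of several equal-cost next hops, so a single flow stays on one path (no packet reordering) while different flows spread across all paths. Real ECMP implementations typically hash the full 5-tuple; this sketch hashes source and destination only, and all names are illustrative.

```python
# Toy ECMP path selection: hash the flow identifiers onto one of N
# equal-cost next hops. Deterministic per flow, spread across flows.
import hashlib


def ecmp_next_hop(src: str, dst: str, next_hops: list) -> str:
    """Pick a next hop deterministically from the flow identifiers."""
    digest = hashlib.sha256(f"{src}->{dst}".encode()).digest()
    return next_hops[digest[0] % len(next_hops)]


uplinks = ["edge-01-uplink", "edge-02-uplink"]
# The same flow always maps to the same uplink, so packets are not reordered:
assert ecmp_next_hop("10.0.0.5", "8.8.8.8", uplinks) == \
       ecmp_next_hop("10.0.0.5", "8.8.8.8", uplinks)
```

If one path fails, removing it from `next_hops` lets affected flows rehash onto the surviving paths, which is the redundancy property the answer highlights.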
What is an important design consideration when placing Edge nodes in a VMware Cloud Foundation environment?
Edge nodes should be placed in dedicated clusters or resource pools to ensure predictable performance for network services.
Edge nodes run centralized networking services such as routing, NAT, and load balancing. Because these services process significant traffic volumes, architects often deploy Edge nodes in dedicated Edge clusters rather than shared compute clusters. This design isolates networking workloads from application workloads and ensures adequate CPU and memory resources for network processing. It also simplifies scaling because additional Edge nodes can be added to the cluster as traffic demand grows. Proper placement improves network performance and prevents resource contention between network services and application workloads.
Demand Score: 86
Exam Relevance Score: 91
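The placement guidance above can be turned into a simple inventory check that flags Edge nodes sharing a cluster with application workloads. The inventory shape, role labels, and names are all hypothetical; this is a sketch of the design rule, not a vSphere or NSX API.

```python
# Placement sketch: flag Edge nodes that share a cluster with application
# workloads, per the dedicated-Edge-cluster guidance above.
# Inventory shape, roles, and names are illustrative.

def misplaced_edges(vms: list) -> list:
    """Return Edge VMs running in clusters that also host non-Edge workloads."""
    clusters_with_apps = {v["cluster"] for v in vms if v["role"] != "edge"}
    return [v["name"] for v in vms
            if v["role"] == "edge" and v["cluster"] in clusters_with_apps]


inventory = [
    {"name": "edge-01", "role": "edge", "cluster": "edge-cluster"},
    {"name": "edge-02", "role": "edge", "cluster": "compute-01"},
    {"name": "app-vm-1", "role": "app", "cluster": "compute-01"},
]
print(misplaced_edges(inventory))  # ['edge-02']
```

Here `edge-02` is flagged because it contends for CPU and memory with `app-vm-1`, which is exactly the resource contention the dedicated-cluster design avoids.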