This domain is about turning design intent into a working NSX environment inside VCF—reliably and repeatably. A helpful way to think about it is: you are assembling building blocks in a safe order.
Typical build order (conceptual): prepare the foundations (host transport nodes, transport zones, TEP addressing), deploy Edge forwarding capacity, create routing (Tier-0/Tier-1) and segments, layer on services and tenancy, and then monitor day-2 health.
When you configure NSX, you’re shaping two “worlds” that must align: the physical underlay (VLANs, MTU, and IP reachability between TEPs) and the logical overlay (segments, Tier-1 and Tier-0 gateways, and the policy attached to them).
A beginner-friendly flow to keep in mind: Workload → Segment → Tier-1 (local routing/policy boundary) → Tier-0 (north-south boundary) → Edge uplink → Physical network.
Where many new operators struggle is in assuming that configuration automatically equals forwarding. In reality, most outages are cases of “the intent exists, but it’s not being realized,” caused by health, trust, transport, or placement issues.
Scenario A: Deploying NSX Federation in VCF (high-level steps)
Federation is usually chosen when you need consistent policy intent across multiple locations. In practice, your “safe order” is a series of milestones: confirm the prerequisites between sites (reachability, MTU, time sync, identity/trust), bring each location under the common management plane, and only then stretch networking intent across locations, validating after every step.
Scenario B: Deploying an Edge Cluster and establishing north-south connectivity
Edge clusters provide the attachment point to the physical network. A common workflow is to deploy the Edge nodes and group them into an Edge cluster, map their uplinks to VLAN-backed segments that match the VLANs trunked on the physical switches, attach a Tier-0 gateway to the cluster, configure routing (for example BGP) with the upstream routers, and then verify adjacency and route exchange. A hedged configuration sketch follows.
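To make the uplink step concrete, here is a minimal sketch using Python’s requests library against the NSX Policy API. The manager address, credentials, gateway and segment IDs, Edge path, and addressing are placeholders invented for the example, and the payload fields should be verified against the API guide for your NSX version.

```python
import requests

NSX = "https://nsx-mgr.example.com"            # placeholder manager FQDN
session = requests.Session()
session.auth = ("admin", "VMware1!VMware1!")   # placeholder credentials
session.verify = False                         # lab only; use trusted certs in production

# Attach an external (uplink) interface of the Tier-0 gateway to a
# VLAN-backed segment, realized on a specific Edge node. All paths and
# IDs below are illustrative, not taken from a real environment.
t0_interface = {
    "type": "EXTERNAL",
    "segment_path": "/infra/segments/edge-uplink-vlan100",
    "edge_path": "/infra/sites/default/enforcement-points/default/"
                 "edge-clusters/ec-01/edge-nodes/edge-01",
    "subnets": [{"ip_addresses": ["192.168.100.2"], "prefix_len": 24}],
}
resp = session.patch(
    f"{NSX}/policy/api/v1/infra/tier-0s/t0-gw/locale-services/default/"
    "interfaces/uplink-1",
    json=t0_interface,
)
resp.raise_for_status()
print("Tier-0 uplink interface submitted:", resp.status_code)
```

Routing itself (the BGP neighbor definition) is covered in a separate sketch later in this section.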
Scenario C: Creating segments, Tier-0/Tier-1, and app connectivity
For a multi-tier app, you might create one overlay segment per tier, attach those segments to a Tier-1 gateway for local routing and policy, link the Tier-1 to a Tier-0 so the networks are advertised northbound, and then confirm reachability from an external client. The sketch below illustrates the idea.
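As a hedged illustration of that flow, the sketch below declares a Tier-1 gateway linked to an existing Tier-0 and two overlay segments attached to it, using the NSX Policy API. All names, IDs, subnets, and the transport zone path are invented for the example and should be adapted and verified for a real environment.

```python
import requests

NSX = "https://nsx-mgr.example.com"            # placeholder manager FQDN
session = requests.Session()
session.auth = ("admin", "VMware1!VMware1!")   # placeholder credentials
session.verify = False                         # lab only

def patch_policy_object(path: str, body: dict) -> None:
    """Send a declarative Policy API PATCH and fail loudly on errors."""
    resp = session.patch(f"{NSX}/policy/api/v1{path}", json=body)
    resp.raise_for_status()

# 1. Tier-1 gateway linked to the Tier-0, advertising its connected segments.
patch_policy_object("/infra/tier-1s/t1-app", {
    "display_name": "t1-app",
    "tier0_path": "/infra/tier-0s/t0-gw",
    "route_advertisement_types": ["TIER1_CONNECTED"],
})

# 2. One overlay segment per application tier, each with its gateway on the Tier-1.
tiers = {
    "seg-web": "10.10.1.1/24",
    "seg-app": "10.10.2.1/24",
}
for seg_id, gateway_cidr in tiers.items():
    patch_policy_object(f"/infra/segments/{seg_id}", {
        "display_name": seg_id,
        "connectivity_path": "/infra/tier-1s/t1-app",
        "subnets": [{"gateway_address": gateway_cidr}],
        "transport_zone_path": "/infra/sites/default/enforcement-points/"
                               "default/transport-zones/overlay-tz",
    })

print("Tier-1 and segments submitted; verify realization before testing traffic.")
```

The declarative style matters here: you state the desired objects and relationships, and NSX is responsible for realizing them, which is why the “configured but not realized” failure mode described earlier is worth checking explicitly.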
Scenario D: VPC, Projects, and Tenancy (organizing shared platforms)
As environments become multi-tenant, you need structure: Projects give each tenant its own administrative boundary and object tree, VPCs give application teams a simplified, self-service networking space inside that boundary, and the platform team keeps ownership of the shared Tier-0 gateways, Edge capacity, and guardrails.
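One way to see how that structure shows up in practice is in the API paths themselves. The sketch below is speculative and assumes the Project-scoped URL layout used by recent NSX releases, where tenant objects live under an org/project prefix while provider-owned objects stay under the default infra tree; the org, project, and object names are hypothetical, and the exact scheme should be confirmed for your version.

```python
import requests

NSX = "https://nsx-mgr.example.com"            # placeholder manager FQDN
session = requests.Session()
session.auth = ("admin", "VMware1!VMware1!")   # placeholder credentials
session.verify = False                         # lab only

# Provider-owned (shared) object: lives under the default /infra tree.
provider_tier0 = f"{NSX}/policy/api/v1/infra/tier-0s/t0-gw"

# Tenant-owned object: the same kind of declarative call, but scoped to a
# Project ("default" org and "tenant-a" project are illustrative names),
# so the tenant admin manages only their own tree.
project_segment = (f"{NSX}/policy/api/v1/orgs/default/projects/tenant-a"
                   "/infra/segments/seg-tenant-a-web")

resp = session.patch(project_segment, json={
    "display_name": "seg-tenant-a-web",
    "subnets": [{"gateway_address": "172.16.10.1/24"}],
})
print("Project-scoped segment request status:", resp.status_code)
```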
Scenario E: Stateful services and integrations
When you add services (NAT, firewalling, load balancing, IDS/IPS-style capabilities, or third-party integrations), you introduce new decision points in the packet path. Operationally, you need to know where each service is realized (Tier-1 or Tier-0, and on which Edge node), whether it pins traffic to a particular Edge, and how to verify the packet path before and after the change; a small NAT example follows.
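As one small example of a stateful service and its placement, the sketch below adds a source NAT rule to a Tier-1 gateway through the Policy API; because the rule is stateful, it is realized on the Tier-1’s service component on an Edge node, which is exactly the kind of packet-path decision point described above. The gateway name, rule ID, and addresses are placeholders to adapt.

```python
import requests

NSX = "https://nsx-mgr.example.com"            # placeholder manager FQDN
session = requests.Session()
session.auth = ("admin", "VMware1!VMware1!")   # placeholder credentials
session.verify = False                         # lab only

# SNAT rule: traffic leaving the app segment is translated to a routable
# address before heading north through the Tier-0. Addresses are illustrative.
snat_rule = {
    "action": "SNAT",
    "source_network": "10.10.2.0/24",
    "translated_network": "192.168.100.50",
    "enabled": True,
}
resp = session.patch(
    f"{NSX}/policy/api/v1/infra/tier-1s/t1-app/nat/USER/nat-rules/snat-app-out",
    json=snat_rule,
)
resp.raise_for_status()
print("SNAT rule submitted:", resp.status_code)
```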
Scenario F: Monitoring and day-2 operations
Day-2 work includes backups, certificate changes, upgrades, compliance checks, capacity monitoring, and “is it healthy?” triage. A consistent approach is to baseline what healthy looks like, automate the routine checks (backups, certificates, alarms, capacity), and re-verify health before and after every change; one scripted health sweep is sketched below.
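A minimal scripted “is it healthy?” sweep might look like the sketch below, which polls the management cluster status and any open alarms through what I understand to be the NSX REST endpoints /api/v1/cluster/status and /api/v1/alarms. The connection details are placeholders, and the endpoints and response fields should be confirmed for your release.

```python
import requests

NSX = "https://nsx-mgr.example.com"            # placeholder manager FQDN
session = requests.Session()
session.auth = ("admin", "VMware1!VMware1!")   # placeholder credentials
session.verify = False                         # lab only

# 1. Management cluster health: a quick proxy for "is the management plane OK?"
status = session.get(f"{NSX}/api/v1/cluster/status").json()
print("Cluster overall status:",
      status.get("detailed_cluster_status", {}).get("overall_status"))

# 2. Alarms: anything still OPEN deserves triage before you make changes.
alarms = session.get(f"{NSX}/api/v1/alarms").json().get("results", [])
for alarm in alarms:
    if alarm.get("status") == "OPEN":
        print(alarm.get("feature_name"), alarm.get("event_type"),
              alarm.get("severity"), sep=" | ")
```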
In this domain, the exam often checks practical operator thinking: given a goal or a failure, can you name the next milestone, the minimal proof test, and the most likely prerequisite category?
You now have a “build and operate” mental model: prepare foundations, deploy forwarding capacity, create routing and networks, add services/tenancy, and then monitor day-2 health. Next, you’ll focus on troubleshooting and repair: taking a symptom and narrowing quickly to the most likely layer and the most efficient verification path.
Federation scenarios often fail in ways that look like “random NSX issues,” but they usually reduce to a missing prerequisite, a trust/identity mismatch, or a skipped validation milestone.
Think in milestones, with a proof test after each one before you move on.
A strong exam response names: the next milestone, the minimal proof test, and the most likely prerequisite category (reachability, MTU, time, identity/trust).
Why must Tunnel Endpoint (TEP) IP addresses be configured when preparing transport nodes?
TEP IP addresses enable overlay tunnel creation between transport nodes.
Tunnel Endpoints (TEPs) are essential for Geneve encapsulated overlay communication in NSX. When a host is prepared as a transport node, the NSX Virtual Distributed Switch creates a TEP interface that uses an assigned IP address to establish tunnels with other transport nodes. These tunnels carry encapsulated traffic for logical switches and routers across the physical network. If TEP addresses are missing or incorrectly configured, overlay networks cannot form and virtual machines on different hosts cannot communicate. Administrators usually assign TEP IP addresses using IP pools or DHCP and place them on a dedicated VLAN supported by the underlay network. Ensuring IP reachability between TEP interfaces is a critical step in validating NSX deployment.
Demand Score: 91
Exam Relevance Score: 95
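Building on the explanation above, TEP addressing is often handled with an IP pool defined up front; the following is a minimal sketch of creating such a pool through the Policy API. Pool and subnet names, the CIDR, and the allocation range are invented for the example, and the resource_type and field names should be checked against your NSX version.

```python
import requests

NSX = "https://nsx-mgr.example.com"            # placeholder manager FQDN
session = requests.Session()
session.auth = ("admin", "VMware1!VMware1!")   # placeholder credentials
session.verify = False                         # lab only

# 1. The pool object itself.
session.patch(f"{NSX}/policy/api/v1/infra/ip-pools/tep-pool",
              json={"display_name": "tep-pool"}).raise_for_status()

# 2. A static subnet inside the pool. Hosts draw TEP addresses from this
#    range, which must sit on the dedicated TEP VLAN and be reachable
#    (with sufficient MTU) across the underlay.
subnet = {
    "resource_type": "IpAddressPoolStaticSubnet",
    "cidr": "172.16.50.0/24",
    "gateway_ip": "172.16.50.1",
    "allocation_ranges": [{"start": "172.16.50.10", "end": "172.16.50.200"}],
}
session.patch(
    f"{NSX}/policy/api/v1/infra/ip-pools/tep-pool/ip-subnets/tep-subnet",
    json=subnet,
).raise_for_status()
print("TEP IP pool and subnet submitted.")
```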
What configuration step is required before ESXi hosts can participate in NSX overlay networking?
The hosts must be prepared as transport nodes and assigned to a transport zone.
Preparing a host as a transport node installs the NSX kernel modules and networking components required for overlay networking. During this process, administrators configure the NSX virtual switch, assign uplink profiles, configure TEP interfaces, and attach the host to a transport zone. The transport zone defines which logical networks the host can access. Without completing this preparation step, ESXi hosts cannot participate in NSX logical switching or routing. This preparation is often automated by VMware Cloud Foundation through SDDC Manager, but administrators still need to verify uplink mappings and VLAN connectivity to ensure proper deployment.
Demand Score: 88
Exam Relevance Score: 93
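One way to verify that host preparation actually completed is to read the transport nodes back and check their realization state. The sketch below assumes the manager API endpoints /api/v1/transport-nodes and /api/v1/transport-nodes/<id>/state behave as described here; connection details are placeholders, and the state values should be confirmed for your release.

```python
import requests

NSX = "https://nsx-mgr.example.com"            # placeholder manager FQDN
session = requests.Session()
session.auth = ("admin", "VMware1!VMware1!")   # placeholder credentials
session.verify = False                         # lab only

# List transport nodes, then check each node's realization state.
nodes = session.get(f"{NSX}/api/v1/transport-nodes").json().get("results", [])
for node in nodes:
    node_id = node["id"]
    name = node.get("display_name", node_id)
    state = session.get(f"{NSX}/api/v1/transport-nodes/{node_id}/state").json()
    # A prepared, healthy host is expected to report a success state.
    print(f"{name}: {state.get('state')}")
```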
How are Edge nodes connected to the physical network during deployment?
Edge nodes connect to the physical network through uplink interfaces mapped to VLAN-backed segments.
When deploying an NSX Edge node, administrators configure uplink interfaces that connect to the physical network infrastructure. These uplinks are typically mapped to VLAN-backed segments which correspond to VLANs configured on the physical switches. Through these uplinks, Edge nodes exchange routing information with physical routers and provide North-South connectivity for workloads. Proper configuration of VLAN IDs, MTU settings, and physical switch trunking is required to ensure reliable communication. If these parameters are misconfigured, routing adjacency and external connectivity may fail.
Demand Score: 86
Exam Relevance Score: 92
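To connect this to configuration, a VLAN-backed uplink segment might be declared as in the sketch below; the segment name, VLAN ID, and VLAN transport zone path are placeholders, and the field names reflect the Policy API as I understand it, so verify them for your version. The VLAN used here must also be trunked to the Edge node uplinks on the physical switches.

```python
import requests

NSX = "https://nsx-mgr.example.com"            # placeholder manager FQDN
session = requests.Session()
session.auth = ("admin", "VMware1!VMware1!")   # placeholder credentials
session.verify = False                         # lab only

# VLAN-backed segment used as an Edge uplink attachment point.
uplink_segment = {
    "display_name": "edge-uplink-vlan100",
    "vlan_ids": ["100"],
    "transport_zone_path": "/infra/sites/default/enforcement-points/"
                           "default/transport-zones/vlan-tz",
}
resp = session.patch(
    f"{NSX}/policy/api/v1/infra/segments/edge-uplink-vlan100",
    json=uplink_segment,
)
resp.raise_for_status()
print("Uplink segment submitted:", resp.status_code)
```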
What is required to enable BGP routing on a Tier-0 gateway?
BGP must be enabled on the Tier-0 gateway and configured with neighbor IP addresses and Autonomous System numbers.
Border Gateway Protocol (BGP) is commonly used in NSX environments to exchange routes between the Tier-0 gateway and physical routers. Administrators configure the local Autonomous System (AS) number, neighbor router IP addresses, and route advertisement settings. Once the BGP session is established, the Tier-0 gateway can advertise overlay network routes to the physical infrastructure and learn external routes from upstream routers. Proper configuration ensures workloads can reach external networks while maintaining dynamic routing updates. Incorrect AS numbers or neighbor IP addresses will prevent BGP adjacency from forming.
Demand Score: 90
Exam Relevance Score: 95
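The pieces named above map to two Policy API calls in the sketch below: enabling BGP with the local AS number on the Tier-0 gateway’s locale services, and defining a neighbor with its address and remote AS. The AS numbers, neighbor address, and object IDs are placeholders, and the payload fields should be verified against your version’s API reference.

```python
import requests

NSX = "https://nsx-mgr.example.com"            # placeholder manager FQDN
session = requests.Session()
session.auth = ("admin", "VMware1!VMware1!")   # placeholder credentials
session.verify = False                         # lab only

BGP_BASE = f"{NSX}/policy/api/v1/infra/tier-0s/t0-gw/locale-services/default/bgp"

# 1. Enable BGP and set the local autonomous system number on the Tier-0.
session.patch(BGP_BASE, json={
    "enabled": True,
    "local_as_num": "65001",
}).raise_for_status()

# 2. Define a neighbor: the physical router's address and its AS number.
session.patch(f"{BGP_BASE}/neighbors/tor-a", json={
    "neighbor_address": "192.168.100.1",
    "remote_as_num": "65000",
}).raise_for_status()

print("BGP configuration submitted; check neighbor status once sessions form.")
```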
What is the purpose of an uplink profile in NSX host configuration?
An uplink profile defines how physical NICs connect to the NSX virtual switch and physical network.
The uplink profile standardizes host networking configuration by defining NIC teaming policies, VLAN settings, and MTU values. When preparing transport nodes, administrators apply uplink profiles to ensure that ESXi hosts connect to the physical network consistently. This simplifies configuration across clusters and ensures the underlay network can support overlay traffic requirements. Using uplink profiles also helps enforce best practices such as correct MTU sizes required for Geneve encapsulation. Misconfigured uplink profiles can lead to connectivity issues between hosts and Edge nodes.
Demand Score: 84
Exam Relevance Score: 89
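A hedged sketch of defining such a profile through the manager API is shown below, with a failover-order teaming policy, named uplinks, a TEP transport VLAN, and an MTU. The profile name, VLAN, and uplink names are placeholders, and the schema (resource_type UplinkHostSwitchProfile and its fields) should be treated as an assumption to verify for your release.

```python
import requests

NSX = "https://nsx-mgr.example.com"            # placeholder manager FQDN
session = requests.Session()
session.auth = ("admin", "VMware1!VMware1!")   # placeholder credentials
session.verify = False                         # lab only

# Uplink profile: teaming policy, named uplinks, TEP transport VLAN, and MTU.
uplink_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "host-uplink-profile",
    "teaming": {
        "policy": "FAILOVER_ORDER",
        "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
        "standby_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
    },
    "transport_vlan": 50,   # VLAN carrying TEP traffic (illustrative)
    "mtu": 9000,            # must also be supported end to end in the underlay
}
resp = session.post(f"{NSX}/api/v1/host-switch-profiles", json=uplink_profile)
resp.raise_for_status()
print("Uplink profile created with id:", resp.json().get("id"))
```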
Why must the underlay network support sufficient MTU size when deploying NSX?
The underlay network must support larger MTU values to accommodate Geneve encapsulated packets.
Overlay networking adds encapsulation headers to packets before they traverse the physical network. In NSX, Geneve encapsulation increases packet size, which can exceed the standard Ethernet MTU of 1500 bytes. If the underlay network does not support larger MTU values (commonly around 1600 or higher), packets may be fragmented or dropped. This leads to connectivity problems between virtual machines on different hosts. To prevent these issues, administrators configure jumbo frames on physical switches and ESXi hosts before deploying overlay networking. Verifying MTU compatibility is an important step during installation and troubleshooting.
Demand Score: 87
Exam Relevance Score: 91
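The arithmetic behind that requirement is simple enough to sketch: the inner Ethernet header plus the outer IP, UDP, and Geneve headers add overhead on top of the guest MTU, so the underlay MTU must cover the sum. The figures below are approximate (Geneve options are variable length), which is why guidance such as 1600 or full jumbo frames leaves a comfortable margin.

```python
# Rough Geneve encapsulation overhead, in bytes. Values are approximate and
# the options allowance is illustrative; actual option length varies.
INNER_ETHERNET = 14   # inner Ethernet header carried inside the tunnel
GENEVE_BASE = 8       # Geneve base header, before variable-length options
GENEVE_OPTIONS = 24   # illustrative allowance for options/metadata
OUTER_UDP = 8         # outer UDP header
OUTER_IPV4 = 20       # outer IPv4 header

def required_underlay_mtu(guest_mtu: int = 1500) -> int:
    """Minimum underlay IP MTU needed to carry a guest packet without fragmentation."""
    overhead = INNER_ETHERNET + GENEVE_BASE + GENEVE_OPTIONS + OUTER_UDP + OUTER_IPV4
    return guest_mtu + overhead

if __name__ == "__main__":
    # With a 1500-byte guest MTU this lands around 1574, hence the common
    # recommendation of 1600 or more (or 9000 jumbo frames) in the underlay.
    print(required_underlay_mtu(1500))
```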