IP Fabric is a networking concept that forms the foundation of modern data center architectures. It simplifies data center design, enabling scalable, high-performance, and resilient communication between devices like servers, storage systems, and switches. This section explains IP Fabrics step by step to help you understand its concepts and benefits clearly.
An IP Fabric is a type of network design that uses a "flat" architecture to efficiently connect devices in a data center. Unlike traditional hierarchical networks with core, distribution, and access layers, IP Fabrics use a simpler, more efficient layout based on a spine-leaf topology.
The spine-leaf topology is the backbone of IP Fabrics, with two primary layers of switches:

Spine switches: Form the core of the fabric and interconnect all leaf switches. They forward traffic between leaves but do not connect directly to servers.

Leaf switches: Sit at the edge of the fabric and connect directly to endpoints such as servers and storage systems.

Key Characteristic: Every leaf switch connects to every spine switch, ensuring predictable performance and redundancy.
Flat Architecture: Removes the traditional core/distribution/access hierarchy, so any two endpoints are always the same number of hops apart.

Non-blocking Design: With sufficient spine capacity, any leaf can send traffic to any other leaf at full rate without contention.

Scalable Design: Capacity grows horizontally; adding leaf switches adds ports, and adding spine switches adds bandwidth.
In an IP Fabric, traffic moves across the network using Layer 3 (IP-based) protocols. This approach:

Eliminates the need for Spanning Tree Protocol, since there are no Layer 2 loops to block.

Enables equal-cost multipath (ECMP) routing, so traffic is load-balanced across all spine switches.

Keeps failure domains small, because each link and switch is an independently routed element.
Here’s a simplified process for setting up an IP Fabric in a data center:
Enable Underlay Routing: Assign point-to-point IP addresses to all spine-leaf links and run a routing protocol (OSPF, IS-IS, or eBGP) so every switch loopback is reachable.

Configure the Control Plane: Establish BGP sessions between the switches to exchange reachability information.

Set Up the Overlay: Define the logical networks (for example, VXLAN segments) that will run on top of the routed underlay.

Validate and Monitor: Verify routing adjacencies and reachability, then monitor the fabric for link failures and traffic imbalances.
Imagine a data center with:

Two spine switches and four leaf switches, with every leaf connected to both spines.

Servers attached to Leaf 1 and Leaf 3.

How traffic flows: A server on Leaf 1 sends a packet to a server on Leaf 3. Leaf 1 routes the packet to either spine (chosen by ECMP hashing), and that spine forwards it to Leaf 3, which delivers it to the destination server. Every such path is exactly two hops through the fabric.
EVPN-VXLAN builds on the foundation of IP Fabrics by introducing network virtualization and scalability features. It integrates Layer 2 and Layer 3 functionalities while enabling efficient traffic routing, multi-tenancy, and workload mobility.
EVPN-VXLAN is a combination of two technologies:

VXLAN (Virtual Extensible LAN): The data plane, which encapsulates Layer 2 frames inside UDP/IP packets.

EVPN (Ethernet VPN): The control plane, which uses BGP to advertise MAC and IP reachability between switches.
Together, EVPN-VXLAN allows you to create scalable, flexible, and virtualized networks in a data center.
VXLAN is a protocol that allows Layer 2 networks (like VLANs) to be extended across a Layer 3 infrastructure. It uses encapsulation to achieve this.
EVPN is a control-plane technology that works with VXLAN. It uses BGP (Border Gateway Protocol) to distribute MAC and IP address information across the data center.
Multi-Tenant Data Centers: Each tenant receives isolated VXLAN segments, so thousands of tenants can share the same physical fabric.

Workload Mobility: Because Layer 2 segments extend anywhere in the fabric, virtual machines can move between racks without changing their IP addresses.

Disaster Recovery: Extending Layer 2 across sites allows workloads to fail over to another location while keeping their addressing intact.
Server A sends data to Server B: Leaf Switch A (the local VTEP) looks up Server B's location, which it learned through EVPN, and encapsulates the frame in a VXLAN packet addressed to Leaf Switch B's VTEP address. The underlay then routes the packet across the fabric.

Leaf Switch B forwards the packet: It decapsulates the VXLAN packet, recovers the original Ethernet frame, and delivers it to Server B on the local port.
In traditional Layer 2 networks, devices use broadcast traffic (e.g., ARP) to discover MAC or IP addresses. EVPN replaces this with a more efficient approach.
A distributed anycast gateway ensures that all leaf switches in the network can serve as a gateway for devices in the same VXLAN segment.
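On Junos, a distributed anycast gateway is commonly built by giving every leaf the same virtual gateway address on its IRB interface. A minimal sketch (the VLAN name, unit number, and addresses are assumed values; the per-leaf address differs on each switch while the virtual gateway address is identical everywhere):

```
set vlans VLAN-10 vlan-id 10
set vlans VLAN-10 l3-interface irb.10
set interfaces irb unit 10 family inet address 10.0.10.2/24 virtual-gateway-address 10.0.10.1
```

Hosts use 10.0.10.1 as their default gateway, so whichever leaf they attach to can route their traffic locally.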
Broadcast and multicast traffic within VXLAN segments can be handled using two approaches:

Ingress replication: The source VTEP sends a unicast copy of the packet to every remote VTEP in the segment. Simple to deploy, but the source bears the replication load.

Underlay multicast: VTEPs join an underlay multicast group per VNI and let the network replicate the packet. More efficient, but it requires multicast routing in the underlay.
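Ingress replication is the more common choice in EVPN fabrics because EVPN Type 3 routes already tell each VTEP which remote VTEPs need a copy. On Junos it is enabled with a single statement:

```
set protocols evpn multicast-mode ingress-replication
```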
EVPN uses specific BGP route types to share information about endpoints. The most important ones in a data center fabric are Type 2 (MAC/IP advertisement), Type 3 (inclusive multicast, used for BUM traffic), and Type 5 (IP prefix advertisement).
Let’s walk through a simplified EVPN-VXLAN configuration on a spine-leaf topology.
The underlay ensures connectivity between all spine and leaf switches.
Assign IP Addresses to Interfaces: Give each spine-leaf link a point-to-point address and each switch a loopback address.
Enable a Routing Protocol: Configure OSPF or IS-IS so that every loopback address is reachable across the fabric.
Example OSPF Configuration on Leaf1:
set interfaces lo0 unit 0 family inet address 192.168.1.1/32
set protocols ospf area 0.0.0.0 interface xe-0/0/0.0
set protocols ospf area 0.0.0.0 interface lo0.0 passive
Set up BGP EVPN as the control plane to distribute endpoint information.
Define BGP AS Numbers: Assign the fabric an autonomous system number (for iBGP, all switches share one AS; in eBGP designs, each switch gets its own).

Enable EVPN Address Family: Configure family evpn signaling on the BGP sessions so EVPN routes can be exchanged.
Example on Leaf1:
set routing-options autonomous-system 65000
set protocols bgp group EVPN type internal
set protocols bgp group EVPN local-address 192.168.1.1
set protocols bgp group EVPN family evpn signaling
set protocols bgp group EVPN neighbor 192.168.1.11
Configure Route Reflectors: In an iBGP design, make the spine switches route reflectors so that each leaf only peers with the spines instead of with every other leaf.
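As a sketch of the spine side of this design (the loopback addresses here are assumed values; the cluster ID is conventionally set to the spine's own loopback):

```
set protocols bgp group EVPN-RR type internal
set protocols bgp group EVPN-RR local-address 192.168.0.1
set protocols bgp group EVPN-RR family evpn signaling
set protocols bgp group EVPN-RR cluster 192.168.0.1
set protocols bgp group EVPN-RR neighbor 192.168.1.1
```

With the cluster statement in place, the spine reflects EVPN routes learned from one leaf to all the others.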
The overlay enables logical Layer 2 networks over the Layer 3 fabric.
Map VLAN to VNI: Associate each local VLAN with a VXLAN Network Identifier so the segment can extend across the fabric.
Example on Leaf1:
set routing-instances VXLAN-10 instance-type virtual-switch
set routing-instances VXLAN-10 bridge-domains BD-10 vlan-id 10
set routing-instances VXLAN-10 bridge-domains BD-10 vxlan vni 1000
Enable VTEP Functionality: Source the VXLAN tunnels from the loopback address so they ride on a stable, always-up interface.
Example:
set routing-instances VXLAN-10 vtep-source-interface lo0.0
set routing-instances VXLAN-10 protocols evpn encapsulation vxlan
set routing-instances VXLAN-10 protocols evpn extended-vni-list 1000
Check BGP Peering: Confirm all EVPN sessions are established:

show bgp summary

Verify EVPN routes:

show route table bgp.evpn.0
Verify VXLAN Tunnels:
Check tunnel status and endpoint mappings:
show interfaces vtep
EVPN Routes Missing: Check that family evpn signaling is configured on every BGP session and that the sessions are established.

VXLAN Traffic Not Flowing: Verify that the VTEP source (loopback) addresses are reachable through the underlay and that the VLAN-to-VNI mapping matches on both ends.

Broadcast Storms: Confirm that BUM traffic is handled by exactly one mechanism (ingress replication or underlay multicast) and that it is configured consistently on all VTEPs.
To streamline revision and reinforce clarity, it is beneficial to create a dedicated terminology section. Below are key EVPN-VXLAN and IP Fabric terms with definitions relevant to both practical use and JN0-480 exam content:
| Term | Description |
|---|---|
| VXLAN | Virtual Extensible LAN; encapsulates Layer 2 Ethernet frames in UDP packets |
| VNI | VXLAN Network Identifier; 24-bit identifier used to separate VXLAN segments |
| VTEP | VXLAN Tunnel Endpoint; encapsulates/decapsulates VXLAN traffic |
| EVPN | Ethernet VPN; BGP-based control plane for VXLAN overlays |
| RD (Route Distinguisher) | Differentiates overlapping prefixes in different VRFs |
| RT (Route Target) | Used for importing/exporting VPN routes into routing instances (VRFs) |
| MAC/IP Advertisement (Type 2 Route) | BGP EVPN route type that announces both MAC and IP of endpoints |
| Inclusive Multicast (Type 3 Route) | Announces VXLAN group membership, used for BUM traffic |
| IP Prefix (Type 5 Route) | Advertises IP prefixes for inter-subnet routing across VNIs |
| Anycast Gateway | A shared IP address assigned to all leaf switches in the same VLAN/VNI |
Tip: Group these by function — encapsulation (VNI, VXLAN), endpoint mapping (VTEP, MAC/IP Type 2), and control plane (EVPN, RD/RT, BGP) — for better mental mapping.
When preparing for the JN0-480 exam, it’s critical to understand how Juniper implements EVPN-VXLAN features within JUNOS. The exam often assumes familiarity with Juniper CLI conventions.
Sample JUNOS Configuration:
set routing-instances VRF-A instance-type vrf
set routing-instances VRF-A route-distinguisher 192.0.2.1:100
set routing-instances VRF-A vrf-target target:64512:100
Explanation:
route-distinguisher makes the route globally unique across VRFs.
vrf-target controls import/export of routes between VRFs.
Note: Unlike some vendors that auto-generate RDs/RTs, Juniper expects explicit declarations.
set protocols evpn encapsulation vxlan
set protocols evpn extended-vni-list 1000
set protocols evpn multicast-mode ingress-replication
This sets up basic EVPN VXLAN support with VNI 1000 and ingress replication for BUM traffic.
set protocols bgp group EVPN type internal
set protocols bgp group EVPN family evpn signaling
set protocols bgp group EVPN neighbor 192.0.2.2
This enables BGP signaling for EVPN. Ensure this is configured on both leaf and spine switches with proper peering.
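Once committed, the peering and the EVPN instance state can be checked with standard Junos show commands:

```
show bgp summary
show evpn instance extensive
```

The first confirms the session is Established with the evpn address family negotiated; the second summarizes the VNIs, route counts, and neighbors of the EVPN instance.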
While theoretical explanations are helpful, real-world troubleshooting is a vital exam focus. These examples demonstrate how to diagnose problems tied to route types, which are a common exam topic.
Symptoms:
Remote MAC/IP information is not visible.
Ping between hosts in the same VXLAN segment fails.
Troubleshooting Steps:
| Checkpoint | Command | Expected Outcome |
|---|---|---|
| 1. Verify EVPN is enabled | show configuration protocols evpn | Confirm VNI and encapsulation present |
| 2. Validate BGP peering | show bgp summary | BGP session is established |
| 3. Confirm MAC learning locally | show ethernet-switching table | Local MACs should be learned |
| 4. Verify Type-2 route received | show route table bgp.evpn.0 extensive | Type-2 route with MAC/IP should exist |
| 5. Check VRF and interface bind | show routing-instances | Interfaces must be bound to VRF |
Possible Misconfigurations:
VNI not mapped to correct VLAN.
Interface not part of EVPN-aware routing instance.
Missing evpn signaling in BGP family config.
Tip for Exams: If asked which command best verifies MAC/IP distribution via EVPN, the correct answer is often: show route table bgp.evpn.0 extensive
Why do modern data centers commonly use a leaf-spine architecture instead of a traditional three-tier architecture?
Leaf-spine architectures provide predictable low latency and equal-cost paths between all servers, making them more scalable and suitable for east-west traffic common in modern data centers.
Traditional three-tier networks (core, aggregation, access) were designed primarily for north-south traffic, where clients accessed centralized servers. Modern cloud applications generate heavy east-west traffic between servers inside the data center.
Leaf-spine architecture connects every leaf switch to every spine switch, creating multiple equal-cost paths (ECMP). This provides:
Consistent latency between hosts
High bandwidth utilization
Simple horizontal scalability
Adding capacity is also straightforward—operators simply add more spine switches without redesigning the topology.
A common mistake is assuming spine switches forward traffic between themselves. In reality, spines never connect to each other; they only connect to leaf switches.
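On Junos devices, ECMP across the spines is typically activated with a load-balancing policy exported to the forwarding table. A minimal sketch (the policy name is an assumed value; despite the per-packet keyword, modern Junos load-balances per flow):

```
set policy-options policy-statement PFE-LB then load-balance per-packet
set routing-options forwarding-table export PFE-LB
```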
Demand Score: 85
Exam Relevance Score: 90
What is the difference between an underlay network and an overlay network in EVPN-VXLAN fabrics?
The underlay provides IP connectivity between switches, while the overlay carries tenant Layer-2 or Layer-3 networks encapsulated inside VXLAN tunnels.
In EVPN-VXLAN fabrics, the network is divided into two logical layers.
Underlay network
Provides IP reachability between all fabric devices
Typically uses routing protocols such as OSPF, IS-IS, or eBGP
Carries transport traffic between VTEPs
Overlay network
Uses VXLAN encapsulation to carry tenant traffic
Uses EVPN as the control plane to distribute MAC/IP reachability
Enables Layer-2 extension and Layer-3 tenant routing across the fabric
This separation allows operators to design a stable transport network while independently scaling tenant networks.
A frequent misconception is thinking VXLAN replaces the underlay; it actually runs on top of the underlay.
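As an illustration of an eBGP underlay on one leaf (the AS numbers, addresses, and policy name here are assumed values; each leaf gets its own private AS and advertises its loopback so VTEPs can reach each other):

```
set routing-options autonomous-system 65001
set protocols bgp group UNDERLAY type external
set protocols bgp group UNDERLAY neighbor 10.0.0.0 peer-as 65000
set protocols bgp group UNDERLAY export EXPORT-LO0
set policy-options policy-statement EXPORT-LO0 term 1 from interface lo0.0
set policy-options policy-statement EXPORT-LO0 term 1 then accept
```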
Demand Score: 80
Exam Relevance Score: 88
Why is BGP commonly used as the control plane protocol for EVPN-VXLAN fabrics?
BGP is used because it provides scalable route distribution, policy control, and built-in support for EVPN address families.
EVPN uses MP-BGP (Multiprotocol BGP) to distribute MAC and IP reachability information between VTEPs.
Reasons BGP is preferred:
Scalability for large fabrics with thousands of endpoints
Policy control through route targets and filtering
Multiprotocol support for EVPN address families
Loop prevention through BGP attributes
Unlike traditional flood-and-learn Layer-2 networks, EVPN allows switches to learn MAC/IP information through the control plane, reducing broadcast traffic.
In Juniper fabrics, BGP also integrates naturally with EVPN route types, which advertise host reachability and tenant routing information.
Demand Score: 78
Exam Relevance Score: 92
What problem does VXLAN solve in modern data center networks?
VXLAN solves the scalability limitations of VLANs by expanding the number of Layer-2 segments from 4096 VLANs to approximately 16 million VXLAN Network Identifiers (VNIs).
Traditional VLAN-based networks are limited by the 12-bit VLAN ID, which allows only 4096 unique networks. Large multi-tenant data centers quickly exhaust this limit.
VXLAN uses a 24-bit VNI field, enabling roughly 16 million logical networks. It also encapsulates Layer-2 frames inside UDP packets, allowing Layer-2 networks to extend across Layer-3 infrastructure.
This enables:
Large-scale multi-tenant data centers
Workload mobility across racks or pods
Segmentation across cloud environments
VXLAN alone only provides encapsulation. When combined with EVPN, the network gains a scalable control plane that distributes MAC and IP information.
Demand Score: 82
Exam Relevance Score: 90
What role do VTEPs play in an EVPN-VXLAN fabric?
VTEPs (VXLAN Tunnel Endpoints) encapsulate and decapsulate Layer-2 frames into VXLAN packets for transport across the IP fabric.
A VTEP is typically implemented on leaf switches in a data center fabric. Its responsibilities include:
Encapsulating Ethernet frames into VXLAN UDP packets
Decapsulating received VXLAN traffic
Mapping VLANs to VXLAN Network Identifiers (VNIs)
Participating in the EVPN control plane
Each VTEP uses an IP address reachable through the underlay network. When traffic is sent to a remote host, the VTEP encapsulates the frame and forwards it through the IP fabric toward the destination VTEP.
A common exam trap is assuming spine switches act as VTEPs. In most architectures, only leaf switches perform VTEP functions.
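On a Juniper leaf, the VTEP function is typically anchored to the loopback interface at the switch level. A minimal sketch (the RD and route-target values are assumed):

```
set switch-options vtep-source-interface lo0.0
set switch-options route-distinguisher 192.168.1.1:1
set switch-options vrf-target target:65000:1
```

Because the loopback never goes down with a single link failure, sourcing tunnels from lo0.0 keeps VXLAN tunnels stable across underlay reroutes.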
Demand Score: 76
Exam Relevance Score: 88