
350-401 Architecture

Architecture Detailed Explanation

1. Enterprise Network Design Models

Enterprise networks must be carefully designed to be scalable, efficient, easy to manage, and resilient. Cisco promotes certain models to help network engineers create structured and reliable networks.

1.1 Hierarchical Network Design (Three-Tier Model)

This model breaks the network into three logical layers, each with its own role. It’s like organizing a company into departments so that everything runs smoothly.

Access Layer (Bottom Layer)
  • What it does: This is where devices like computers, phones, printers, and wireless access points connect to the network.

  • Equipment used: Typically Layer 2 switches (but can also be Layer 3).

  • Key features:

    • VLAN assignment (each department or team may be in its own VLAN).

    • Port security (to prevent unauthorized access).

    • Power over Ethernet (PoE) for devices like phones or APs.

Imagine this layer as the building’s entrance where employees and guests enter. It controls who can come in and where they can go.
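To make this concrete, here is a minimal sketch of an access-layer port configuration (the interface number, VLAN IDs, and MAC limit are illustrative, not from any real network):

```
interface GigabitEthernet1/0/10
 description IP-Phone-and-PC
 switchport mode access
 switchport access vlan 10                 ! VLAN assignment
 switchport voice vlan 20                  ! separate voice VLAN for the phone
 switchport port-security                  ! port security
 switchport port-security maximum 2        ! allow the phone plus one PC
 switchport port-security violation restrict
 power inline auto                         ! PoE for the phone or AP
```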

Distribution Layer (Middle Layer)
  • What it does: This layer connects all the access layer switches and acts as a traffic cop — enforcing security, routing between VLANs, and QoS policies.

  • Equipment used: Usually Layer 3 switches or routers.

  • Key functions:

    • Inter-VLAN routing (e.g., communication between Sales and HR VLANs).

    • Policy enforcement (e.g., ACLs for security).

    • Redundancy and load balancing.

This is like the building’s lobby security — checking IDs and deciding who can go where.
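As a sketch of these distribution-layer functions, a Layer 3 switch might route between the Sales and HR VLANs with SVIs and enforce policy with an ACL (the VLAN numbers, addresses, and ACL name are hypothetical):

```
ip routing
!
interface Vlan10                  ! Sales SVI
 ip address 10.1.10.1 255.255.255.0
!
interface Vlan20                  ! HR SVI
 ip address 10.1.20.1 255.255.255.0
 ip access-group BLOCK-SALES in   ! policy enforcement at the distribution layer
!
ip access-list extended BLOCK-SALES
 deny ip 10.1.20.0 0.0.0.255 10.1.10.0 0.0.0.255
 permit ip any any
```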

Core Layer (Top Layer)
  • What it does: This is the high-speed backbone of the network. It connects the distribution layers together and provides fast data transfer between different parts of the organization.

  • Equipment used: High-performance Layer 3 switches or routers.

  • Key features:

    • Speed and low latency.

    • High availability (often with redundant links).

    • No complex policy processing — just fast transport.

Think of this layer as the building’s elevator system — it connects every floor quickly, without checking who you are (that's already done below).

Summary of the Three Layers

Layer | Role | Device Type | Features
Access | Connects end-user devices | Switches, APs | VLANs, port security, PoE
Distribution | Connects access switches and routes between VLANs | Layer 3 switches | Routing, ACLs, redundancy
Core | High-speed backbone | Core routers/switches | Speed, uptime, low latency

1.2 Collapsed Core Model (Two-Tier Architecture)

Why it exists:

In smaller or medium-sized networks, having all three layers is often too expensive and complex. So we combine the Core and Distribution layers into one.

Structure:
  • Two layers:

    • Access Layer

    • Collapsed Core/Distribution Layer

Benefits:
  • Lower cost (fewer devices to buy).

  • Easier management (less complexity).

  • Good performance for networks with fewer users or devices.

Trade-offs:
  • Less scalability.

  • Less isolation of responsibilities (same devices do both routing and backbone functions).

It’s like combining the building’s lobby and elevator controls into one small team — efficient for a small office but may not scale well to a skyscraper.

1.3 Spine-Leaf Architecture

This model is most commonly used in data centers and environments that need predictable, high-speed, and scalable performance.

How it works:
  • There are only two types of switches:

    • Spine switches: Connect to every leaf switch. They don’t connect to each other.

    • Leaf switches: Connect to servers, firewalls, and other endpoints. Each leaf switch connects to every spine switch.

Key Benefits:
  • Low latency: Any two endpoints are at most two switch hops apart (leaf → spine → leaf), so latency is predictable.

  • High redundancy: Multiple paths between any two endpoints.

  • Scalable: Easy to add more leaf switches without changing the whole structure.

Use Case:

Perfect for cloud computing, virtualization, and software-defined networking (SDN).

Imagine a city where every building (leaf) has direct highways (spine) to every other building — no traffic lights or stop signs, just fast travel!

2. WAN Architecture and Options

A WAN (Wide Area Network) connects different sites of a business across cities, countries, or continents. Unlike LANs (which connect devices in a single location), WANs use public or private service providers to link distant networks.

2.1 MPLS (Multiprotocol Label Switching)

What is MPLS?

MPLS is a WAN technology used by service providers to move packets efficiently and quickly across a network. Instead of routing packets based only on IP addresses, MPLS adds a label to each packet. This label tells routers where to send the packet — like an express lane in a highway system.

Key Features:
  • Label switching: Faster forwarding than traditional IP routing.

  • Supports multiple services: Internet, VoIP, VPN, etc.

  • QoS (Quality of Service): You can prioritize critical traffic like voice or video.

  • Scalability: Handles complex topologies for large enterprises.

Use Case Example:

A large company has offices in Beijing, Shanghai, and Guangzhou. MPLS connects them with reliable, high-speed service that supports video conferencing and secure data transfer.

2.2 VPN Options (Virtual Private Networks)

VPNs let you create a secure "tunnel" over the internet. It’s like sending a sealed envelope through the postal system: only the receiver can open and understand it.

A. Site-to-Site VPN (IPsec)
  • What it does: Securely connects two or more physical office locations using IPsec (Internet Protocol Security).

  • Common setup: A router or firewall at each site with VPN configured.

  • Example: Headquarters and branch office can share files and systems securely.

B. Remote Access VPN
  • What it does: Allows individual users (like work-from-home staff) to securely connect to the office network from anywhere.

  • Types:

    • SSL VPN: Uses a web browser for access.

    • IPsec VPN: Requires client software.

C. DMVPN (Dynamic Multipoint VPN)
  • What it does: Allows branch offices to communicate directly with each other without needing tunnels to be manually created.

  • Uses:

    • mGRE (multipoint GRE) tunnels: one interface, many connections.

    • NHRP (Next Hop Resolution Protocol): helps devices find each other dynamically.

  • Big benefit: Scalable and dynamic — ideal for organizations with many branches.
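A minimal hub-side sketch of the mGRE and NHRP pieces described above (the addresses and IDs are hypothetical, and a real deployment would also apply IPsec protection to the tunnel):

```
interface Tunnel0
 ip address 172.16.0.1 255.255.255.0
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint       ! mGRE: one interface, many spoke connections
 ip nhrp network-id 1             ! NHRP lets spokes register and resolve each other
 ip nhrp map multicast dynamic
```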

2.3 SD-WAN (Software-Defined WAN)

What is SD-WAN?

SD-WAN is a modern approach to WAN that uses software to steer traffic intelligently across multiple transport links, such as MPLS, broadband, and LTE.

Key Features:
  • Centralized control: A controller decides how traffic flows (instead of manual configuration on each router).

  • Application-aware routing: Voice calls get the best path; backup traffic gets a cheaper link.

  • Built-in security: IPsec encryption, firewall, and sometimes content filtering.

  • Transport independence: Works with any type of connection.

Common Cisco SD-WAN Solutions:
  • Cisco SD-WAN (Viptela): Full-featured SD-WAN platform with advanced policies, managed centrally through vManage.

  • Cisco Meraki SD-WAN: Easy to manage, ideal for small/medium networks.

Example Use Case:

A retail chain with 200 stores wants to connect all stores reliably, prioritize payment data, use low-cost broadband for guest Wi-Fi, and enforce security. SD-WAN solves all of these.

Summary of WAN Technologies

Technology | Used For | Secure? | Centralized? | Best Use Case
MPLS | Site-to-site WAN | Yes (private) | No | Large, stable enterprise networks
Site-to-Site VPN | Site-to-site | Yes (IPsec) | No | Branch-to-HQ secure tunnels
Remote Access VPN | Users to site | Yes (SSL/IPsec) | No | Remote workers
DMVPN | Dynamic branches | Yes (IPsec + mGRE) | No | Many dynamic connections
SD-WAN | All WAN traffic | Yes | Yes | Flexible, smart WAN with cost savings

3. Cloud Architecture

Cloud computing is a way of delivering IT services — such as servers, storage, databases, networking, software — over the internet (“the cloud”) on demand. Instead of buying and maintaining your own data centers or servers, you rent computing resources from a provider like AWS, Microsoft Azure, or Google Cloud.

3.1 Cloud Service Models

There are three main types of cloud service models, each offering different levels of control, flexibility, and management.

A. IaaS (Infrastructure as a Service)
  • What it is: You rent virtualized hardware resources like virtual machines, storage, and networking.

  • You are responsible for:

    • Installing your own OS

    • Managing applications

    • Configuring the network and firewall

  • Example providers: AWS EC2, Microsoft Azure VMs

  • When to use: When you want full control over your environment, like building custom applications or hosting a private website.

Think of it like renting an empty apartment: you bring your own furniture, appliances, and decorate it however you want.

B. PaaS (Platform as a Service)
  • What it is: You rent a platform where you can develop and deploy applications without managing the underlying infrastructure.

  • You control the app, but the provider manages:

    • Servers

    • OS

    • Middleware (e.g., database engines)

  • Example providers: Google App Engine, Microsoft Azure App Services

  • When to use: When you are a developer who just wants to write and run code, not manage servers.

Like renting a furnished apartment: you don’t need to buy furniture — you just move in and live.

C. SaaS (Software as a Service)
  • What it is: You use a ready-to-use application over the internet.

  • No need to install, manage, or update anything.

  • Examples:

    • Gmail

    • Microsoft 365 / Office 365

    • Salesforce

  • When to use: For everyday apps like email, collaboration, or CRM tools.

Like staying in a hotel: everything is ready — just walk in and start using it.

Summary Table

Model | You manage | Provider manages | Example
IaaS | OS, apps, data | Virtual machines, storage, network | AWS EC2
PaaS | Applications, data | OS, servers, network | Google App Engine
SaaS | Nothing | Everything | Office 365

3.2 Cloud Deployment Models

This refers to where and how the cloud services are deployed. There are four main models:

A. Public Cloud
  • Hosted by third-party providers.

  • Resources are shared among multiple customers.

  • Examples: AWS, Microsoft Azure, Google Cloud

  • Benefits:

    • Scalable

    • Cost-effective

    • No infrastructure to manage

  • Drawbacks:

    • Less control

    • Security concerns in shared environments

B. Private Cloud
  • Infrastructure is used only by one organization.

  • Can be hosted on-site or in a third-party data center.

  • Benefits:

    • Greater control and security

  • Drawbacks:

    • More expensive

    • Requires management

C. Hybrid Cloud
  • Combines public and private clouds.

  • Data and applications can move between the two.

  • Common use case:

    • Sensitive data in private cloud

    • Other data or services in public cloud

D. Community Cloud
  • Shared infrastructure for a specific group (e.g., universities, government agencies).

  • Shared responsibilities, data policies, and requirements.

Summary Table

Deployment | Used By | Managed By | Control Level | Example Use
Public Cloud | Many customers | Cloud provider | Low | Email, backup
Private Cloud | One company | Internal or hosted | High | Finance, healthcare
Hybrid Cloud | One company | Shared | Medium-High | Flexible workloads
Community Cloud | Multiple orgs with same goals | Shared | Medium | Education systems

3.3 On-Prem vs Cloud Comparison

Criteria | On-Premises | Cloud
Latency | Usually lower (local) | Can be higher (depends on network)
Security | You control everything | Shared responsibility model
Control | Full hardware/software control | Limited to your service level
Cost | High up-front costs (CAPEX) | Pay-as-you-go (OPEX)
Scalability | Limited by physical capacity | Virtually unlimited
Maintenance | You manage hardware, software, patches | Managed by provider

Real-world scenario:
A university may use:

  • IaaS to host its custom-built apps,

  • PaaS for development projects in the computer science department,

  • SaaS like Google Workspace for students and faculty.

4. Network Management and Assurance Architecture

Modern enterprise networks are no longer just collections of devices. They must be centrally managed, monitored, and intelligently analyzed. Cisco offers tools that help engineers control networks from a central platform, automate configurations, and ensure network health using real-time data and artificial intelligence.

4.1 Cisco DNA Center (Cisco Digital Network Architecture Center)

What is Cisco DNA Center?

Cisco DNA Center is a centralized management platform that allows you to:

  • Automate network configuration and policies

  • Monitor the health of the network

  • Analyze performance with real-time data

  • Secure the network with identity-based policies

It’s like a mission control center for your enterprise network.

Key Features of Cisco DNA Center:
1. Provisioning
  • Automatically configure devices like switches, routers, and wireless controllers.

  • Uses templates and profiles to apply consistent settings.

  • Reduces human error and saves time.

Instead of logging into 50 switches to configure VLANs, you do it once and apply it across the network.

2. Policy-Based Automation
  • Uses intent-based networking: you tell the system what outcome you want, and it figures out how to make it happen.

  • You can apply:

    • Access policies (who can access what)

    • Security policies (isolate devices or departments)

    • Traffic rules (e.g., prioritize video)

Like setting a rule in a smart home system: “Turn off the lights if no one is home.”

3. Telemetry and Analytics
  • DNA Center collects real-time data from network devices.

  • Uses AI/ML (artificial intelligence and machine learning) to detect:

    • Congestion

    • Failing links

    • Poor wireless coverage

  • You get health scores and troubleshooting suggestions.

Think of it like a fitness tracker for your network — it shows what’s healthy and what needs help.

4. Device Inventory and Configuration History
  • Shows a list of all network devices and their details (model, software, uptime).

  • Keeps a record of all configuration changes (what changed, when, and by whom).

  • Allows for easy rollbacks if something goes wrong.

Like having a “time machine” to reverse any bad configuration.

4.2 Cisco SDA (Software-Defined Access)

Cisco SDA is built on top of DNA Center. It brings next-generation access control and segmentation to enterprise networks, using software-defined networking principles.

What is SDA?
  • SDA replaces traditional VLANs and access lists with identity-based policies.

  • You define who or what a user/device is — not just their IP address — and assign policies based on that.

Key Technologies Behind SDA:
1. LISP (Locator/ID Separation Protocol)
  • Separates who the device is (identity) from where the device is (location).

  • Supports mobility: users can move around, and their policies follow them.

Like your phone automatically knowing your apps and settings when you log into a new device.

2. VXLAN Tunnels (Virtual Extensible LAN)
  • Creates secure tunnels between switches and routers across the network.

  • Supports segmentation even across different locations or buildings.

Like having private corridors inside a large building — only people with a key can enter.

3. Macro and Micro Segmentation
  • Macro segmentation: separates large groups (e.g., employees vs guests).

  • Micro segmentation: enforces rules between individual devices or users within the same group.

Think of it as walls (macro) and locked doors (micro) in a secure building.

4. Identity-Based Access Policies
  • Policies are based on user identity, device type, role, or location.

  • Enforced dynamically — you don’t have to reconfigure switches when a new device joins.

For example, a printer and a visitor’s phone may be in the same VLAN but have very different permissions.

Summary: DNA Center vs SDA

Feature | Cisco DNA Center | Cisco SDA
Purpose | Central control for automation and analytics | Secure and dynamic access control
Includes | Inventory, telemetry, templates, APIs | Segmentation, VXLAN, identity-based policy
Underlying Technologies | SNMP, NETCONF, CLI, REST APIs | VXLAN, LISP, TrustSec

Why it matters:
As networks become more complex, managing them manually is no longer feasible. Platforms like DNA Center and SDA help ensure your network is secure, scalable, and self-correcting.

5. Control Plane vs Data Plane vs Management Plane

These three "planes" define how a network device — like a router or switch — functions internally. Understanding them is crucial for designing, securing, and troubleshooting networks.

5.1 Control Plane

What is it?

The control plane is the "brain" of the device. It is responsible for making decisions about where traffic should go.

Functions of the Control Plane:
  • Runs routing protocols:

    • OSPF

    • EIGRP

    • BGP

  • Maintains the routing table

  • Calculates the best path

  • Manages Layer 2 functions like Spanning Tree Protocol (STP)

  • Handles protocols like ARP, ICMP, and DHCP relay

Where it runs:
  • It runs in software, typically on the device’s main CPU.

Example:
  • When a router receives a new OSPF route from a neighbor, it uses the control plane to:

    • Validate the route

    • Add it to the RIB (Routing Information Base)

    • Decide which path is best

Think of the control plane as the GPS system in a car: it figures out the best route before you start driving.

5.2 Data Plane (Forwarding Plane)

What is it?

The data plane is the "muscle" of the device. It is responsible for moving packets based on the decisions made by the control plane.

Functions of the Data Plane:
  • Forwards packets through the device

  • Applies access control lists (ACLs)

  • Performs QoS (Quality of Service) marking

  • Handles NAT, fragmentation, and encapsulation

  • Filters and drops traffic when needed

Where it runs:
  • It runs in hardware, often using ASICs (Application-Specific Integrated Circuits) for speed.

Example:
  • Once the control plane decides that packets to 10.1.1.0/24 should go out interface G0/1, the data plane takes over and forwards those packets quickly without asking again.

Think of the data plane as the driver of the car, following the route set by the GPS.

5.3 Management Plane

What is it?

The management plane is the "communication interface" for configuring and monitoring the device. It handles device-to-human or device-to-system communication.

Functions of the Management Plane:
  • Accepts configuration commands via:

    • SSH

    • HTTP/HTTPS (GUI)

    • SNMP

    • RESTCONF/NETCONF

  • Sends logs and telemetry (e.g., Syslog, NetFlow)

  • Supports AAA authentication

  • Allows firmware upgrades and backups

Example:
  • When an engineer logs into a switch via SSH to change the hostname, they are using the management plane.

Think of the management plane as the dashboard or touchscreen interface where you configure your car’s settings.

Summary Table

Plane | Role | Runs On | Examples
Control Plane | Decides how traffic flows | CPU (software) | OSPF, BGP, STP, ARP
Data Plane | Moves traffic based on decisions | Hardware (ASIC) | Packet forwarding, ACLs, NAT
Management Plane | Allows human/system interaction | CPU (or separate management port) | SSH, SNMP, REST API

Why this matters:

  • Understanding planes helps in troubleshooting: e.g., if packets aren’t being forwarded, check data plane; if routing protocols fail, check control plane.

  • Helps design secure and segmented networks, e.g., securing management plane access with ACLs or out-of-band interfaces.
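For example, management-plane access might be restricted to a management subnet and to SSH only (the ACL name and address range are hypothetical):

```
ip access-list standard MGMT-ONLY
 permit 10.0.0.0 0.0.0.255        ! management subnet only (example range)
!
line vty 0 4
 transport input ssh              ! no Telnet
 access-class MGMT-ONLY in        ! apply the ACL to remote-access sessions
 login local
```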

6. High Availability and Redundancy

High availability (HA) means keeping the network running without interruptions, even if some components go down. Redundancy is the key technique used to achieve HA — by having backup links, paths, or devices ready to take over when needed.

6.1 First Hop Redundancy Protocols (FHRPs)

When a host (like a PC or printer) sends traffic to a different subnet, it sends the data to its default gateway — usually a router.
But what happens if that router fails?

FHRPs allow multiple routers to share a virtual IP address, so if one fails, another takes over — seamlessly.

A. HSRP (Hot Standby Router Protocol) – Cisco Proprietary
  • One active router and one or more standby routers.

  • Uses virtual IP and virtual MAC address.

  • Active router responds to ARP and forwards packets.

  • If active fails, the standby takes over.

Like a backup generator that turns on automatically if the power goes out.
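A minimal HSRP sketch on one router’s VLAN interface (the group number, addresses, and priority are illustrative; the standby router would carry the same group with a lower priority):

```
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1          ! virtual gateway IP shared by both routers
 standby 10 priority 110          ! higher priority = preferred active router
 standby 10 preempt               ! reclaim the active role after recovery
```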

B. VRRP (Virtual Router Redundancy Protocol) – Open Standard
  • Similar to HSRP, but supports multiple vendors.

  • Allows preemption: if a higher-priority router comes back, it takes control again.

C. GLBP (Gateway Load Balancing Protocol) – Cisco Proprietary
  • Adds load balancing on top of redundancy.

  • Multiple routers can actively forward traffic at the same time.

  • Balances the traffic using multiple virtual MAC addresses.

FHRP Comparison Table

Protocol | Standard | Backup Style | Load Balancing | Preemption
HSRP | Cisco | One Active + Standby | No | Yes (off by default)
VRRP | Open | One Master + Backup | No | Yes (on by default)
GLBP | Cisco | All Active | Yes | Yes

6.2 Link Redundancy

Just like routers, links (cables or paths between devices) can also fail. Link redundancy ensures there is more than one path for traffic.

EtherChannel / PortChannel
  • Combines multiple physical links into one logical interface.

  • Advantages:

    • Higher bandwidth (up to 8 links can be combined).

    • Redundancy: if one link fails, traffic keeps flowing over the others.

  • Uses protocols:

    • PAgP (Cisco proprietary)

    • LACP (IEEE 802.3ad standard)
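A minimal EtherChannel sketch using LACP (the interface range and channel number are illustrative):

```
interface range GigabitEthernet1/0/1 - 2
 channel-group 1 mode active      ! LACP; use "desirable" for PAgP instead
!
interface Port-channel1
 switchport mode trunk            ! the bundle is configured as one logical link
```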

Redundant Uplinks
  • Switches can have multiple uplinks to higher-layer switches.

  • Often used with Spanning Tree Protocol (STP) to prevent loops.

  • With STP, one link is active, and the other is blocked (but can take over if the first fails).

Load Balancing with ECMP (Equal-Cost Multi-Path)
  • Routers can send traffic across multiple equal-cost paths to improve performance and fault tolerance.

  • Common in OSPF and EIGRP.
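For example, under OSPF the number of equal-cost routes installed into the routing table is controlled with maximum-paths (a sketch; 4 is already the IOS default for OSPF):

```
router ospf 1
 maximum-paths 4                  ! install up to 4 equal-cost paths per prefix
```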

6.3 Device Redundancy

Sometimes, even hardware components inside a device can fail. Cisco enterprise devices support internal redundancy:

A. Dual Power Supplies
  • If one power supply fails, the other keeps the device running.

  • Often found in switches and routers used in data centers or critical offices.

B. Dual Supervisor Engines (for Modular Switches)
  • A supervisor engine is like the "brain" of a switch.

  • Having two means one can failover instantly if the other has issues.

  • Used in large modular chassis switches such as the Cisco Catalyst 9400 or 9600 series.

Real-world example:
In a hospital’s network:

  • Redundant core switches ensure no single failure shuts down the system.

  • HSRP provides gateway failover for life-critical monitoring systems.

  • EtherChannel uplinks double bandwidth and reliability.

7. Infrastructure Services Integration

Infrastructure services are essential for supporting core network functions like naming, time synchronization, and device monitoring. These services must be integrated into the network for everything else to work smoothly and securely.

7.1 Time Synchronization – NTP (Network Time Protocol)

What is NTP?

NTP keeps the clocks on all devices in a network synchronized.
Why is this important?

  • Logs must be time-aligned for troubleshooting.

  • Security certificates rely on accurate time.

  • Time-stamped data must be consistent across systems.

Key Concepts:
  • NTP uses a hierarchy of time servers:

    • Stratum 0: Atomic clocks, GPS receivers (most accurate)

    • Stratum 1: Directly connected to Stratum 0 devices

    • Stratum 2: Gets time from Stratum 1, and so on…

Basic Configuration Example (Cisco IOS):

ntp server 192.168.1.1
clock timezone UTC 0

Best Practices:
  • Use at least two NTP servers for redundancy.

  • Don’t rely on the Internet for NTP in secure environments — use internal time servers.

7.2 DNS and DHCP Integration

These two services help devices join and operate on the network without manual setup.

A. DHCP (Dynamic Host Configuration Protocol)
  • Automatically assigns:

    • IP address

    • Subnet mask

    • Default gateway

    • DNS server

DHCP Relay Agent
  • Used when the DHCP server is not on the same subnet as the client.

  • The router or switch forwards DHCP requests using:

ip helper-address 192.168.100.10

Split Scope DHCP
  • Provides redundancy by having two servers manage the same range.

  • Example:

    • Server A: 80% of addresses

    • Server B: 20%

  • If one server fails, the other still handles requests.
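A sketch of the 80% side of a split scope on a Cisco IOS DHCP server (all addresses are hypothetical); Server B would exclude the bottom of the range instead:

```
! Server A hands out .10 - .199 (roughly 80% of the scope)
ip dhcp excluded-address 192.168.50.1 192.168.50.9
ip dhcp excluded-address 192.168.50.200 192.168.50.254
!
ip dhcp pool LAN
 network 192.168.50.0 255.255.255.0
 default-router 192.168.50.1
 dns-server 192.168.100.10        ! DHCP hands the DNS server to clients
```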

B. DNS (Domain Name System)
  • Translates domain names into IP addresses.

  • Example:

    • You type www.cisco.com, DNS resolves it to 72.163.4.161.

Best Practices for DNS/DHCP Integration:
  • DHCP should provide DNS server addresses to clients.

  • DHCP leases should be short in dynamic environments (like Wi-Fi).

  • Secure DNS servers against spoofing and poisoning.

7.3 SNMP and Syslog Integration

These services are used for monitoring, alerting, and centralized log storage.

A. SNMP (Simple Network Management Protocol)
  • Enables network monitoring tools (like SolarWinds or Cisco Prime) to:

    • Query device status

    • Track performance (CPU, memory, interface traffic)

    • Receive alerts (SNMP traps)

SNMP Versions:
  • v1/v2c: Basic, uses “community strings” (like a password).

  • v3: Secure — supports encryption and authentication.

Basic SNMP Example:
snmp-server community public RO

This allows read-only access with the password “public”.
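For production monitoring, SNMPv3 is preferred because it adds authentication and encryption. A sketch with a hypothetical group, user, and passwords:

```
snmp-server group NETADMIN v3 priv
snmp-server user monitor NETADMIN v3 auth sha AuthPass123 priv aes 128 PrivPass123
```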

B. Syslog
  • Stores log messages from network devices:

    • Interface up/down

    • Configuration changes

    • Security violations

Logging Levels (0 to 7):

Level | Meaning
0 | Emergencies
1 | Alerts
2 | Critical
3 | Errors
4 | Warnings
5 | Notifications
6 | Informational
7 | Debugging

Syslog Server Integration:
logging 192.168.1.50
logging trap warnings

This sends log messages of severity warnings (level 4) and more severe (levels 0–3) to a remote server.

Real-world benefits of integration:

  • DHCP gives out IPs → DNS resolves names → NTP timestamps everything → SNMP and Syslog monitor it all.

  • This combination provides full operational visibility for network admins.

8. QoS Architectural Concepts

(QoS = Quality of Service)

QoS is a set of techniques used to prioritize certain types of traffic over others, avoid congestion, and ensure predictable performance. This is critical for apps like voice over IP (VoIP) or video conferencing, which need low latency and minimal jitter.

8.1 Traffic Classification and Marking

Classification:

This is the first step — it’s about identifying what kind of traffic is passing through (e.g., voice, video, bulk data).

  • Can be based on:

    • IP address

    • Protocol (TCP/UDP)

    • Port number (e.g., TCP 80 = HTTP)

    • Application

    • Interface

Marking:

Once identified, traffic can be tagged with a priority value to signal how it should be treated.

CoS (Class of Service) – Layer 2:
  • Used in Ethernet frames (802.1Q).

  • 3-bit field, values from 0–7.

  • CoS 5 is commonly used for voice traffic.

DSCP (Differentiated Services Code Point) – Layer 3:
  • Found in the IP header.

  • 6 bits, values from 0 to 63.

  • Common DSCP values:

    • EF (Expedited Forwarding) = 46 (used for voice)

    • AF31 = 26 (used for video)

    • Default = 0 (best effort)

Example:
A router inspects a packet and sees it’s VoIP. It classifies it and marks it with DSCP EF (46) so switches and routers downstream know it should be given top priority.
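Classification and marking like this is typically configured with Cisco’s Modular QoS CLI; a sketch with hypothetical class and policy names:

```
class-map match-all VOIP
 match protocol rtp               ! NBAR-based match; an ACL match could be used instead
!
policy-map MARK-IN
 class VOIP
  set dscp ef                     ! mark voice with DSCP 46
!
interface GigabitEthernet0/1
 service-policy input MARK-IN     ! classify and mark traffic as it enters
```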

8.2 Queuing and Scheduling

When there’s congestion (e.g., during peak hours), QoS controls which packets wait and which go first.

Common Queuing Techniques:
FIFO (First In, First Out)
  • Default behavior

  • Simple — no prioritization

  • Problem: voice packets may be delayed behind large downloads

WFQ (Weighted Fair Queuing)
  • Traffic is grouped into flows, and each gets fair bandwidth.

  • Better than FIFO, but still doesn’t guarantee low delay for voice.

LLQ (Low Latency Queuing)
  • Combines strict priority queuing (PQ) with class-based weighted fair queuing (CBWFQ).

  • The priority queue ensures that voice/video is always sent first.

  • Most commonly used in enterprise QoS for real-time applications.

Think of LLQ like an ambulance lane on a highway: even during traffic jams, the ambulance (voice) goes through immediately.

8.3 Congestion Avoidance

This is about preventing the network from becoming overloaded by selectively dropping packets before queues are full.

WRED (Weighted Random Early Detection)
  • Monitors queue depth (how full the buffer is).

  • Begins to randomly drop lower-priority packets as the queue grows.

  • Helps avoid total congestion and global TCP synchronization.

Benefits of WRED:
  • Maintains throughput under heavy load.

  • Protects high-priority traffic (e.g., voice).

  • Prevents queue starvation.
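A sketch of enabling WRED inside an MQC policy (the policy name and class usage are illustrative):

```
policy-map WAN-OUT
 class class-default
  fair-queue
  random-detect dscp-based        ! drop lower-DSCP packets earlier as the queue fills
```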

Summary: QoS Flow

  1. Classify traffic (e.g., voice, video, web, backup)

  2. Mark packets with DSCP or CoS

  3. Queue based on priority (LLQ for voice)

  4. Avoid congestion with WRED and shaping

Real-World QoS Example:

You work at a company with:

  • Zoom meetings

  • Cloud backups

  • Web browsing

You:

  • Mark Zoom packets with DSCP EF

  • Use LLQ to guarantee Zoom traffic is prioritized

  • Use WRED to drop backup traffic during congestion

  • Limit YouTube usage for guests via QoS policy

Architecture (Additional Content)

1. Modular QoS CLI (MQC) Framework

Overview:

MQC is Cisco’s standardized framework used to configure and apply Quality of Service (QoS) policies on devices. It separates the classification, policy definition, and application steps, making QoS modular and scalable.

MQC Components:

a. class-map

Defines traffic classification based on Layer 2 to Layer 7 parameters (e.g., access control lists, DSCP, protocol type).

Example:

class-map match-any VOICE
 match ip dscp ef
 match access-group 101

b. policy-map

Specifies the QoS actions (e.g., policing, shaping, marking, queueing) to be taken on the classified traffic.

Example:

policy-map QOS-POLICY
 class VOICE
  priority percent 30
 class class-default
  fair-queue

c. service-policy

Applies the defined policy to a physical or logical interface (input or output direction).

Example:

interface GigabitEthernet0/1
 service-policy output QOS-POLICY

2. Virtual Network Functions (VNFs)

Definition:

VNFs are software-based versions of traditional hardware network appliances such as routers, firewalls, load balancers, etc. They are deployed on virtual machines or containers and provide network services without the need for dedicated hardware.

Use Cases:

  • Key component in SD-WAN service chaining (e.g., vFirewall, vRouter).

  • Deployed on NFV Infrastructure (NFVi).

  • Support rapid provisioning and scaling.

3. Cisco Express Forwarding (CEF)

Purpose:

CEF is Cisco’s default Layer 3 packet forwarding mechanism that improves speed and scalability by separating the control and data planes.

Components:

a. FIB (Forwarding Information Base)
  • Derived from the routing table.

  • Stores next-hop information used for Layer 3 forwarding.

b. Adjacency Table
  • Maintains Layer 2 addressing and encapsulation information.

  • Used to quickly construct the packet headers.

Key Features:

  • Deterministic performance

  • Scalability in high-speed environments

  • Required for features like NetFlow, QoS, MPLS
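On a Cisco router, both CEF tables can be inspected directly (the prefix shown is illustrative):

```
show ip cef 10.1.1.0 255.255.255.0   ! FIB entry: prefix, next hop, outgoing interface
show adjacency detail                ! adjacency table: prebuilt Layer 2 rewrite info
```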

4. NetFlow vs Streaming Telemetry

Feature     | NetFlow/SNMP (Polling)        | Streaming Telemetry (Push-based)
------------|-------------------------------|---------------------------------
Model       | Pull (device is polled)       | Push (device streams data)
Protocols   | SNMP, NetFlow v5/v9           | gRPC, HTTP/2, Kafka
Format      | Mostly unstructured or custom | Structured (YANG-based)
Frequency   | Periodic polling              | Real-time streaming
Scalability | Limited (high CPU usage)      | High (efficient transmission)
Use Cases   | Traffic statistics, billing   | Anomaly detection, automation
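
As a sketch of the polling-era approach, a minimal traditional NetFlow export configuration might look like this (the collector address 10.1.1.100 and port 9996 are placeholders):

interface GigabitEthernet0/1
 ip flow ingress
!
ip flow-export destination 10.1.1.100 9996
ip flow-export version 9

This samples ingress flows on the interface and exports version 9 records to the collector; newer platforms use Flexible NetFlow (flow record/exporter/monitor) instead.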

5. AAA (Authentication, Authorization, Accounting)

Although primarily discussed under security, AAA has significant architectural relevance for device access and infrastructure services.

Server Types:

Server  | Description
--------|---------------------------------------------------
TACACS+ | Cisco proprietary; encrypts the entire packet body
RADIUS  | Industry standard; encrypts only the password field

Common Commands:

aaa new-model
aaa authentication login default group tacacs+ local
tacacs-server host 192.168.1.1 key secret123

These commands enable centralized TACACS+ authentication, falling back to the local user database if the TACACS+ server is unreachable.
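
Authorization and accounting can be layered onto the same model. An illustrative extension of the example above (command availability varies by platform):

aaa authorization exec default group tacacs+ local
aaa accounting exec default start-stop group tacacs+

The first line has the TACACS+ server decide the privilege level granted at login; the second sends start and stop records for each EXEC session to the server for auditing.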

6. Overlay vs Underlay Networking

Underlay Network:

  • The physical transport network

  • Responsible for basic packet delivery

  • Examples: MPLS, Broadband Internet, LTE

Overlay Network:

  • A logical/virtual network built on top of the underlay

  • Encapsulates customer traffic (e.g., GRE, IPSec, VXLAN)

  • Provides features like segmentation, encryption, tunneling

Example: In SD-WAN

  • Underlay: MPLS/Internet circuits

  • Overlay: IPsec tunnels forming the WAN fabric
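
A simple way to see the overlay/underlay split on a router is a GRE tunnel riding over physical transport (all addresses here are hypothetical):

interface Tunnel0
 ip address 10.255.0.1 255.255.255.252
 tunnel source GigabitEthernet0/0
 tunnel destination 203.0.113.2

The Tunnel0 interface and its 10.255.0.0/30 addressing form the overlay; the physical path that delivers packets between the tunnel source and destination is the underlay.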

Key Comparison:

Aspect     | Underlay               | Overlay
-----------|------------------------|-------------------------------------
Visibility | Network admin          | Application/service admin
Technology | Physical transport     | Tunnels, encapsulation
Control    | Static/dynamic routing | Centralized via controllers (vSmart)

7. REST vs NETCONF/RESTCONF

Feature      | REST API                               | NETCONF                 | RESTCONF
-------------|----------------------------------------|-------------------------|-----------------------------------------
Model        | RESTful                                | RPC over SSH            | RESTful over HTTP
Transport    | HTTP/HTTPS                             | SSH                     | HTTP/HTTPS
Data Format  | JSON                                   | XML                     | JSON/XML
Schema       | Ad hoc or OpenAPI                      | YANG                    | YANG
Use Case     | Application control (e.g., DNA Center) | Device config and state | Device config/state with REST simplicity
Example Tool | Cisco DNA Center API                   | IOS-XE                  | IOS-XE, RESTCONF clients

Summary:

  • REST: Commonly used for controller and application integration (e.g., DNA Center).

  • NETCONF/RESTCONF: Used for device-level configuration and operational data, structured by YANG.
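
On IOS-XE, both device-level interfaces can be enabled with a few global commands. A minimal sketch (the username and secret are placeholders; RESTCONF additionally requires the HTTPS server and an authenticated privilege-15 user):

username admin privilege 15 secret StrongPassword
netconf-yang
restconf
ip http secure-server

After this, NETCONF listens on SSH port 830 and RESTCONF is reachable over HTTPS, both exposing the device's YANG models.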

Frequently Asked Questions

Why is first hop redundancy important in enterprise campus architecture?

Answer:

First Hop Redundancy Protocols (FHRPs) ensure that hosts maintain gateway connectivity even if the primary router fails.

Explanation:

Protocols such as HSRP and VRRP create a virtual default gateway shared by multiple routers. One router acts as the active gateway while another remains in standby mode. If the active router fails, the standby router quickly assumes the virtual IP address, allowing hosts to continue sending traffic without reconfiguration. This redundancy is essential in enterprise networks where access-layer devices rely on a default gateway to reach other networks. A common operational issue occurs when load-balancing is not implemented correctly across multiple VLANs, leading to uneven traffic distribution across redundant gateways.
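
As an illustration, a minimal HSRP configuration on the active router might look like this (10.1.1.1 is the shared virtual gateway IP; all values are hypothetical):

interface Vlan10
 ip address 10.1.1.2 255.255.255.0
 standby 10 ip 10.1.1.1
 standby 10 priority 110
 standby 10 preempt

The standby router is configured identically except for its own interface address and a lower (default 100) priority; hosts in VLAN 10 use 10.1.1.1 as their default gateway, and "preempt" lets the primary reclaim the active role after recovering from a failure.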

Demand Score: 66

Exam Relevance Score: 82

What role does the vSmart controller play in Cisco SD-WAN?

Answer:

The vSmart controller distributes routing information and security policies between SD-WAN edge devices.

Explanation:

vSmart acts as the centralized control-plane component in Cisco SD-WAN. It establishes secure control connections with WAN edge devices and distributes routing updates, policy rules, and segmentation information. These policies determine how traffic flows between sites, including segmentation via VPNs and application-aware routing rules. A key misconception is that vSmart manages devices directly; device lifecycle and configuration tasks are handled by vManage, while vSmart focuses purely on control-plane operations.

Demand Score: 69

Exam Relevance Score: 86

What is the primary architectural difference between a two-tier and three-tier enterprise campus design?

Answer:

A two-tier design collapses the core and distribution layers into a single layer, while a three-tier design separates access, distribution, and core layers.

Explanation:

In a traditional three-tier architecture, access switches connect endpoints, distribution switches aggregate access layers and enforce policies, and core switches provide high-speed backbone connectivity. A two-tier design combines the distribution and core into a collapsed core layer, reducing hardware and operational complexity. This architecture is commonly used in smaller campuses where the scale does not justify a dedicated core layer. A frequent design mistake is deploying a collapsed core in environments requiring high scalability or redundancy between multiple distribution blocks. In such scenarios, a full three-tier architecture is preferred.

Demand Score: 65

Exam Relevance Score: 80

How do the control plane and data plane interact in a Cisco SD-WAN architecture?

Answer:

The control plane distributes routing and policy information, while the data plane forwards actual traffic between WAN edges.

Explanation:

In Cisco SD-WAN, controllers such as vSmart operate in the control plane, exchanging routing information and distributing policies to WAN edge devices. WAN edge routers then populate their forwarding tables based on these policies and route information. Once the control information is installed, the data plane forwards packets across encrypted tunnels (often IPsec) between sites. Separation of planes allows centralized policy enforcement and simplified routing management. A common misunderstanding is assuming controllers forward traffic; in reality, controllers only manage routing intelligence, while edge devices handle packet forwarding and encryption.

Demand Score: 71

Exam Relevance Score: 84
