Avi Load Balancer (now part of VMware) is:
A next-generation load balancing solution.
Built entirely in software – no special hardware needed.
Designed for modern applications that run:
On-premises
In public cloud
Across hybrid or multi-cloud environments
The terms L4 and L7 refer to network layers:
L4 (Layer 4) = Transport layer (TCP, UDP)
L7 (Layer 7) = Application layer (HTTP, HTTPS)
Avi supports both L4 and L7, and more:
SSL termination
Content-based routing
Advanced security
Avi is not just a load balancer — it's a platform that combines:
Load balancing
Analytics
Security (WAF)
Automation tools
All managed from one central place.
Knowing the history gives us context about where Avi fits in the VMware world.
Founded as a startup to build a software-defined, analytics-driven load balancer.
Focused on cloud-native and scalable architectures.
VMware saw the value in Avi’s modern design.
It became part of VMware’s Application Networking and Security (ANS) portfolio.
Now works closely with:
NSX-T for networking/firewall
Tanzu for Kubernetes
vRealize Suite for automation and observability
This makes it a strong alternative to legacy load balancers like F5 or Citrix ADC, especially in modern VMware environments.
This section focuses on what Avi can do — its core functions and advanced features that make it stand out from traditional load balancers.
Avi Load Balancer supports full-featured Layer 4 to Layer 7 services — from basic TCP/UDP balancing to advanced web security.
Avi decides how to distribute incoming traffic across multiple backend servers.
Layer 4:
Balances TCP/UDP traffic without inspecting content.
Example: Load balancing database connections or streaming traffic.
Layer 7:
Inspects HTTP/S traffic to make smart routing decisions.
Example: Send users to different backends based on URL or cookies.
Avi can decrypt incoming HTTPS traffic, process it, and then:
Forward it unencrypted to backend servers (offloading).
Or re-encrypt before sending it to the servers.
Why it matters:
Offloading SSL reduces the workload on your backend apps.
Avi can make decisions based on content, such as:
Path (/api, /login)
Hostname (site1.example.com, site2.example.com)
HTTP header or cookie values
This allows:
Hosting multiple apps on a single IP.
Custom routing rules for traffic control.
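As a rough illustration of how content-based routing decisions work (this is not Avi's actual rule engine; the hostnames, paths, and pool names below are hypothetical), a first-match dispatcher might look like:

```python
# Minimal sketch of content-based (L7) routing: match on hostname and
# path prefix, first match wins. Pool names and rules are made-up examples.
ROUTES = [
    # (hostname, path_prefix, pool)
    ("site1.example.com", "/api",   "pool-api"),
    ("site1.example.com", "/login", "pool-auth"),
    ("site2.example.com", "/",      "pool-site2"),
]

DEFAULT_POOL = "pool-default"

def choose_pool(hostname: str, path: str) -> str:
    """Return the backend pool for a request; first matching rule wins."""
    for host, prefix, pool in ROUTES:
        if hostname == host and path.startswith(prefix):
            return pool
    return DEFAULT_POOL
```

Because the match is on hostname plus path, many applications can share one VIP and still land on different backend pools.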
Avi boosts application performance by:
Compressing responses (gzip, Brotli)
Caching content (e.g., static files)
Using modern protocols like HTTP/2 or QUIC
These features help apps load faster and reduce server load.
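You can see why compression matters with a quick standard-library experiment; this only demonstrates the bandwidth effect of gzip on text-heavy responses, not how Avi applies it on the wire:

```python
import gzip

# Repetitive HTML-like payloads compress very well, which is why enabling
# response compression cuts bandwidth for text-heavy applications.
body = ("<div class='row'>hello world</div>\n" * 200).encode()

compressed = gzip.compress(body)
ratio = len(compressed) / len(body)
print(f"original={len(body)} bytes, gzipped={len(compressed)} bytes, ratio={ratio:.2%}")
```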
Avi has a built-in WAF for Layer 7 security.
Protects against common attacks (based on OWASP Top 10), like:
SQL Injection
Cross-site scripting (XSS)
Command injection
Highly configurable rules
Can run in learning, logging, or blocking mode
Avi supports GSLB to:
Load balance traffic across multiple sites, regions, or data centers.
Choose the best site based on:
Proximity (GeoDNS)
Server health
Site load
Use case example:
Users in Europe go to the London site, while users in Asia go to the Singapore site.
Unlike legacy systems where each device must be managed separately, Avi uses a central controller to manage everything.
One Avi Controller cluster manages:
SEs in the data center
SEs in the cloud
Kubernetes integrations
Consistent policies and visibility everywhere
You define what you want, not how to do it.
For example, you declare "Send /api traffic to Pool A" — you don't need to configure the low-level details. You can manage policies via:
Web UI
CLI
RESTful APIs
This is especially helpful for DevOps and automation.
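Because the whole platform is API-driven, objects like pools can be created with plain REST calls. The sketch below builds such a request with the standard library; the controller address, session cookie, API version header, and payload fields are assumptions modeled on Avi's documented `/api/pool` endpoint and should be checked against your controller's version:

```python
import json
import urllib.request

CONTROLLER = "https://avi-controller.example.com"  # hypothetical address

def build_pool_payload(name: str, member_ips: list[str]) -> dict:
    """Build a pool definition; field names follow Avi's /api/pool schema
    (verify against your controller's API version)."""
    return {
        "name": name,
        "lb_algorithm": "LB_ALGORITHM_ROUND_ROBIN",
        "servers": [{"ip": {"addr": ip, "type": "V4"}} for ip in member_ips],
    }

def prepare_create_pool(session_cookie: str, payload: dict) -> urllib.request.Request:
    """Prepare (not send) the POST request that would create the pool."""
    return urllib.request.Request(
        f"{CONTROLLER}/api/pool",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Cookie": session_cookie,
            "X-Avi-Version": "22.1.3",  # match your controller version
        },
        method="POST",
    )
```

The same payload shape is what tools like Ansible and Terraform generate under the hood, which is why the UI, CLI, and API stay consistent.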
Avi offers built-in real-time analytics to show how your apps are performing.
Latency — how long it takes for your app to respond
Error rates — number of 500 errors, 404s, timeouts, etc.
Throughput — how much data is going in and out
Health scores — overall app health on a 0–100 scale
These insights help:
Identify problems fast
Track trends over time
Show usage for capacity planning
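Avi computes health scores with its own weighted model; the toy function below only illustrates the general idea of collapsing latency and error rate into a single 0-100 number, and its thresholds and weights are invented for the example:

```python
def toy_health_score(avg_latency_ms: float, error_rate: float) -> int:
    """Illustrative 0-100 score: penalize latency above 100 ms and any errors.
    This is NOT Avi's actual health-score formula."""
    score = 100.0
    score -= max(0.0, avg_latency_ms - 100.0) * 0.1  # latency penalty
    score -= error_rate * 200.0                      # e.g. 5% errors -> -10
    return max(0, min(100, round(score)))
```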
Avi was built for automation — it’s API-first.
Avi can scale SEs or Virtual Services up or down automatically, based on:
CPU usage
Connections per second
Throughput
Application behavior
This keeps your apps responsive under load and saves resources when idle.
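The scaling triggers listed above boil down to threshold checks. This is a simplified sketch of the decision logic with made-up thresholds, not Avi's actual autoscaling policy engine:

```python
def scale_decision(cpu_pct: float, conns_per_sec: int,
                   cpu_high: float = 80.0, cps_high: int = 10_000,
                   cpu_low: float = 20.0, cps_low: int = 1_000) -> str:
    """Illustrative scale-out/in trigger; thresholds are example defaults.
    Scale out if any metric is hot; scale in only if all metrics are idle."""
    if cpu_pct > cpu_high or conns_per_sec > cps_high:
        return "scale-out"
    if cpu_pct < cpu_low and conns_per_sec < cps_low:
        return "scale-in"
    return "hold"
```

Note the asymmetry: any hot metric triggers scale-out, but scale-in requires everything to be idle, which avoids flapping.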
Avi works with popular automation tools:
| Tool | Use Case |
|---|---|
| Ansible | Automate provisioning of load balancing services |
| Terraform | Infrastructure-as-code, define Avi services in code |
| vRealize Automation (vRA) | Let users request services through a self-service portal |
This section explains the different ways you can deploy Avi Load Balancer — in a data center, in the public cloud, or even in Kubernetes. Flexibility is one of Avi’s biggest strengths.
Let’s start with the most traditional model — on-premises, which means running in your own data center.
In a VMware vSphere environment
Or on other supported hypervisors (like KVM)
Avi Controllers are deployed as virtual machines (VMs).
Service Engines (SEs) are also deployed as VMs.
These VMs run on your existing infrastructure.
No need for physical hardware appliances.
Replacing legacy load balancers (F5, Citrix)
Supporting internal apps in a corporate data center
Full control over your network and data
Avi is cloud-native, which means it works in major public clouds just like it does on-prem.
AWS
Microsoft Azure
Google Cloud Platform (GCP)
Oracle Cloud Infrastructure (OCI)
Avi Controllers and SEs can be deployed:
Using native VM images from the cloud provider
Using automation tools like Terraform
The Controller manages SEs even across cloud accounts or regions.
SEs can be auto-scaled using cloud-native tools (e.g., EC2 autoscaling in AWS).
VIPs are assigned using cloud-native networking (Elastic IPs, Load Balancer IPs).
Integration with cloud DNS, IAM, tagging, and VPCs is possible.
Hosting public-facing websites
Running microservices-based apps in the cloud
Spinning up test environments dynamically
This is where things get interesting — you don’t have to choose between on-prem and cloud.
Avi supports hybrid cloud deployments, which means:
You can run Controllers in your data center, and
Have SEs deployed across both:
On-prem infrastructure
Public cloud VMs (AWS, Azure, etc.)
One management interface across all locations
Consistent:
Policies
Logging
Security
Perfect for gradual cloud migration
Migrate apps from on-prem to cloud without breaking traffic flow
Maintain DR (disaster recovery) sites in the cloud
Load balance across sites (with GSLB)
Avi is also designed for modern, container-based environments like Kubernetes.
Tanzu Kubernetes Grid (TKG) — VMware’s Kubernetes platform
Also works with vanilla Kubernetes, OpenShift, and other K8s platforms
Avi acts as the Ingress Controller, meaning:
It handles traffic coming into the Kubernetes cluster.
Routes it to the correct services based on:
Hostnames
Paths
Ports
It also provides:
L7 visibility into Kubernetes services
TLS termination (HTTPS support)
WAF protection
Autoscaling of services based on traffic
Unified load balancing for VMs and containers
Centralized control for both traditional and modern apps
Deep analytics for microservices traffic
Understanding the editions and licensing options is important, especially if you’re preparing for real-world deployment or the 6V0-22.25 exam.
VMware Avi Load Balancer comes in different editions, depending on the features you need and the level of scalability required.
This edition is designed for small to medium-sized environments with core load balancing needs.
Included features:
Layer 4 (TCP/UDP) and Layer 7 (HTTP/HTTPS) load balancing
SSL termination and basic content switching
Basic analytics (real-time traffic graphs, latency)
Limited automation
Not included:
Web Application Firewall (WAF)
Global Server Load Balancing (GSLB)
Advanced multi-cloud or enterprise features
Best for: Simple apps, internal environments, entry-level deployments
This is the full-featured edition, used in enterprise environments with demanding scalability, security, and automation needs.
Included features:
Everything in Essentials
Web Application Firewall (WAF) – OWASP Top 10 protection
GSLB – Load balancing across data centers or regions
Advanced analytics and health scoring
Multi-cloud and hybrid cloud support
Full automation (API-first, integration with Ansible, Terraform)
Elastic scale-out of SEs
Kubernetes integration
Best for: Modern cloud-native apps, multi-site architectures, enterprise SLAs
Now let’s talk about how Avi is licensed. VMware offers flexibility based on the kind of environment and deployment size.
You buy a license for X gigabits per second of total traffic.
Avi will enforce or alert based on that usage.
Example: If you purchase a 10 Gbps license, you can handle that much total traffic (in and out).
You pay based on the number of vCPUs assigned to Service Engines (SEs).
The more vCPUs across all SEs, the more licenses you need.
Example: If you have 3 SEs with 4 vCPUs each, that's 12 cores total.
You pay per application or Virtual Service.
Useful for service providers, multi-tenant environments, or app-level billing.
Example: If you're hosting 20 websites for clients, you can license 20 Virtual Services.
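The three metering models above reduce to simple arithmetic. A sketch of each, using the examples from the text:

```python
def licensed_cores(se_vcpus: list[int]) -> int:
    """CPU-based model: license covers total vCPUs across all Service Engines."""
    return sum(se_vcpus)

def within_throughput_license(current_gbps: float, licensed_gbps: float) -> bool:
    """Throughput model: total in+out traffic must stay under the licensed cap."""
    return current_gbps <= licensed_gbps

def apps_remaining(licensed_apps: int, deployed_vs: int) -> int:
    """Per-application model: how many more Virtual Services can be created."""
    return max(0, licensed_apps - deployed_vs)
```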
VMware offers two main types of license duration:
| License Type | Description |
|---|---|
| Perpetual | One-time purchase. You own the license. Pay separately for support. |
| Subscription | Pay monthly or yearly. Includes support and upgrades in the plan. |
Most modern deployments go with subscription licensing, especially in the cloud.
If your company uses VMware Cloud Foundation (VCF) — the full-stack private cloud solution — Avi is included as the default Layer 7 load balancer.
Native integration with NSX-T and vSphere
Lifecycle managed via VMware SDDC Manager
Seamless deployment as part of VCF automation workflows
No need to install F5 or other third-party load balancers — Avi is ready out of the box.
| Feature | Essentials (Basic) | Enterprise (Advanced) |
|---|---|---|
| L4/L7 Load Balancing | Y | Y |
| SSL Offloading | Y | Y |
| WAF | N | Y |
| GSLB | N | Y |
| Multi-cloud | N | Y |
| Auto-scaling | N | Y |
| Kubernetes Integration | N | Y |
| Licensing Model | When to Use |
|---|---|
| Throughput (Gbps) | When scaling by total traffic volume |
| CPU-based | When you want predictable core usage |
| Per-Application | Good for SaaS and service providers |
This section helps you understand where and how VMware Avi Load Balancer is used in real-world environments.
There are four major categories of use cases:
The first category is classic enterprise applications — the kind that run in data centers, often with older architectures.
Many companies use hardware load balancers like:
F5 BIG-IP
Citrix ADC (NetScaler)
These devices are:
Expensive
Hard to scale quickly
Not built for cloud or automation
Avi replaces them with a software-only, scalable solution that runs on:
vSphere
Public clouds
Containers
Avi is great for monolithic apps — the traditional kind where all functions are in one big server or service.
Use cases:
Load balancing access to:
Web front-ends (Apache, IIS)
App servers (Java, .NET)
Databases (MySQL, PostgreSQL)
Even if your apps aren’t modern or cloud-native, Avi works perfectly for them.
Today, many apps are built using microservices and containers, often managed by Kubernetes.
Avi is designed to work natively in these modern environments.
Avi acts as a Layer 7 Ingress Controller in Kubernetes platforms like:
Tanzu Kubernetes Grid (TKG)
OpenShift
Vanilla Kubernetes
It routes external traffic into your microservices based on hostname, path, or headers.
Also supports:
TLS termination
Path-based routing
WAF for microservices
Real-time metrics and logging per service
In continuous integration / continuous deployment (CI/CD) setups, applications:
Change frequently
Scale dynamically
Require fast deployment of networking services
Avi helps by:
Automatically creating/removing Virtual Services and SEs
Integrating with pipelines (via API, Terraform, Ansible)
Providing instant visibility into app performance
Think of Avi as “infrastructure that adapts to your code.”
Avi was built for environments that span across multiple clouds or regions.
One Avi Controller (cluster) can manage:
Service Engines in AWS
Service Engines in Azure
SEs in your on-prem vSphere environment
All with one single interface and consistent policies.
Instead of:
Different load balancers in each cloud
Separate teams managing each one
You can use Avi everywhere, and:
Apply the same security rules
Get centralized logging
Use GSLB for global traffic routing
Avi = one control plane for all locations and clouds.
Security is a core strength of Avi.
It helps organizations meet compliance standards like:
PCI-DSS (for payment systems)
HIPAA (for healthcare)
GDPR (for EU data protection)
Built-in WAF defends against:
OWASP Top 10 threats (XSS, SQLi, etc.)
Custom attack signatures
Bots and scanning tools
Can run in:
Detection mode (learn and log)
Blocking mode (stop threats)
Avi terminates SSL/TLS at the edge and:
Scans traffic for threats
Manages certificates (including auto-renewals and expiration alerts)
Supports Perfect Forward Secrecy (PFS) and TLS 1.3
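The same TLS posture (a modern minimum protocol version) can be expressed with Python's standard-library `ssl` module. This illustrates the policy an SSL profile enforces, not how Avi itself is configured:

```python
import ssl

# Build a server-side TLS context that refuses anything older than TLS 1.2,
# mirroring the kind of minimum-version policy an Avi SSL profile enforces.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
# Modern OpenSSL builds negotiate TLS 1.3 automatically when both sides support it.
print(ctx.minimum_version)
```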
Everything is logged:
Who made what changes
When and how
Full history for compliance audits
Role-Based Access Control (RBAC) ensures:
Only the right users have access
Multi-tenant support for service providers
| Use Case | Description |
|---|---|
| Traditional Apps | Replace F5/Citrix, load balance monolithic apps and DBs |
| Modern Apps | Kubernetes Ingress, microservices, CI/CD integrations |
| Multi-Cloud Strategy | One Avi Controller for all clouds and sites |
| Security & Compliance | WAF, SSL inspection, certificate management, audit logs, RBAC |
Avi Load Balancer’s true power comes from how well it integrates into the broader VMware ecosystem and third-party tools used by modern IT teams.
You don’t use Avi in isolation — it’s designed to fit perfectly into real environments with many tools.
If your company uses VMware, Avi fits in naturally with the rest of your infrastructure.
Avi Controllers and SEs run as virtual machines inside vSphere.
Avi supports:
DRS (Distributed Resource Scheduler)
vMotion (live migration of SEs)
HA (automatic restart of VMs on other hosts)
This means Avi takes advantage of everything vSphere offers, like scalability, resource pools, host affinity rules, and more.
What this looks like in practice:
NSX-T handles:
Routing
Firewalling
Network segmentation
Avi handles:
HTTP/S inspection
SSL offload
Application-level routing
WAF
You can combine NSX and Avi to create a powerful, full-stack software-defined network and security platform.
Avi is now the default Layer 7 load balancer in VCF — VMware’s complete SDDC (Software-Defined Data Center) solution.
Benefits:
Avi is pre-integrated into the VCF lifecycle
Managed via SDDC Manager
Automated deployment with VCF infrastructure
You don’t need F5 or any third-party device anymore. Avi is the official VMware load balancer in VCF environments.
Avi integrates with these vRealize tools:
| Tool | Integration |
|---|---|
| vRealize Operations (vROps) | Import Avi’s metrics and alerts for infrastructure visibility |
| vRealize Log Insight | Stream Avi logs for centralized log management |
| vRealize Automation (vRA) | Automate Virtual Service and SE creation through self-service portals |
This enables self-service, observability, and policy-based automation.
Avi is also designed to work well with tools outside of VMware — especially in modern, DevOps-driven environments.
Avi can stream logs and metrics to:
| Tool | Purpose |
|---|---|
| Splunk | Log search and analytics |
| ELK Stack (Elasticsearch, Logstash, Kibana) | Real-time logging and dashboards |
| Kafka | Stream logs to big data platforms |
| Prometheus | Metric collection and alerting |
| Grafana | Custom dashboards |
This is important for:
Operations teams
Security monitoring
Incident response
Avi is API-first, which means everything you do in the UI can also be done in code.
You can integrate Avi with:
| Tool | Use Case |
|---|---|
| Ansible | Automate tasks like creating Virtual Services and Pools |
| Terraform | Define and deploy infrastructure, including Avi resources, as code |
| vRealize Automation | Self-service portal for DevOps teams to request app delivery |
These tools are vital in CI/CD pipelines and DevOps workflows, where infrastructure needs to be:
Reproducible
Scripted
Scalable
Avi supports:
| Tool/Protocol | Purpose |
|---|---|
| SAML / OAuth2 | Single Sign-On (SSO) for Avi Controller access |
| LDAP / AD | Role-based access control (RBAC) |
| SIEM Integration | For security analytics and compliance reporting |
This makes Avi suitable for enterprise security standards and compliance.
| Integration Area | Examples and Benefits |
|---|---|
| VMware Tools | vSphere, NSX-T, VCF, vRealize – seamless fit into VMware environments |
| Logging/Monitoring | Splunk, ELK, Kafka, Prometheus, Grafana – for visibility and alerting |
| Automation/DevOps | Terraform, Ansible, vRA – full lifecycle automation |
| Security Tools | SAML, OAuth, AD, SIEM – secure and auditable access control |
Understanding how Avi differs from traditional load balancers is critical for migration scenario questions and articulating its value in modern architectures.
| Category | VMware NSX Advanced Load Balancer (Avi) | Traditional LB (F5 / Citrix ADC) |
|---|---|---|
| Architecture | Software-defined, distributed (SE + Controller) | Hardware or VM appliance, centralized |
| Scalability | Elastic auto-scale with N+M SE model | Limited to appliance size or license |
| Automation | Full REST API, SDKs, Ansible, Terraform | Basic scripting or proprietary tools |
| Cost Control | No appliance lock-in, better TCO | High CapEx (appliance), additional license for features |
| Multi-cloud Readiness | Native support for vSphere, AWS, Azure, OpenStack, K8s | Limited or manual setup in public cloud |
| DevOps Support | Integrates with CI/CD pipelines, supports GitOps | Typically separate from DevOps toolchains |
Key Takeaway:
Avi is a cloud-native, programmable, and fully distributed load balancer that scales elastically and integrates seamlessly with modern DevOps environments, unlike static, appliance-centric traditional solutions.
Avi Load Balancer is deeply integrated with VMware Tanzu as its default ingress controller, making it ideal for container-native applications in VMware ecosystems.
Ingress Controller Functionality:
Avi automatically discovers Kubernetes Ingress and Service resources from TKG clusters.
Dynamically provisions Virtual Services (VS) and routes traffic to appropriate pods.
Advanced Routing Capabilities:
Supports hostname-based and path-based routing rules.
Enables TLS offloading, rate limiting, and Web Application Firewall (WAF) directly at the ingress level.
Multi-Tenant Integration:
Each TKG cluster can be mapped to a dedicated Avi tenant.
Ensures role-based access and resource isolation across teams or business units.
Automatic Lifecycle Management:
Virtual Services and their pools are created, updated, and removed automatically as the corresponding Kubernetes Ingress and Service objects change.
Key Benefit:
The integration provides cloud-like L4-L7 services for containerized apps, with full visibility, elasticity, and security.
Avi’s Controller architecture ensures control-plane resilience, even in multi-AZ or multi-region deployments.
3-node cluster using a distributed consensus algorithm (similar to Raft).
Each node maintains a copy of the config database.
Quorum is required (minimum 2 out of 3) for write operations and cluster leadership.
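The quorum rule is a simple majority, which is why a 3-node cluster tolerates exactly one node failure. Sketched:

```python
def majority(cluster_size: int) -> int:
    """Minimum number of nodes needed for quorum (simple majority)."""
    return cluster_size // 2 + 1

def has_quorum(alive: int, cluster_size: int = 3) -> bool:
    """A 3-node Controller cluster keeps accepting writes with 2 of 3 nodes up."""
    return alive >= majority(cluster_size)
```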
Controllers can be placed in different Availability Zones or Regions.
Requires low-latency interconnects for sync traffic (< 10ms latency is ideal).
Use fault domains to logically separate nodes for higher availability.
If the Controller is unreachable, SEs continue to process traffic.
No new config changes can be made, but existing services remain functional.
Once the Controller recovers, SEs sync state back.
Best Practice:
Use DNS + NTP consistency, enforce data redundancy, and maintain external backup schedules.
This area is frequently tested in feature coverage questions. Memorize the layers and supported protocols.
TCP
UDP
DNS
FTP
ICMP (for health checks)
HTTP
HTTPS
HTTP/2
WebSocket
gRPC
QUIC (experimental support)
TLS 1.2 and TLS 1.3
Perfect Forward Secrecy (PFS)
Server Name Indication (SNI)
SSL Offload / Passthrough
Full RESTful API
Ansible modules
Terraform provider
vRealize Automation (vRA) integration
Log integration with ELK, Splunk, Kafka
Metrics via Prometheus
Visualization via Grafana
These are high-yield exam topics. You may be asked to identify the most suitable topology for a given business case.
One Avi Controller cluster is deployed in a primary site.
Separate SE Groups are deployed in:
vSphere (on-prem)
AWS or Azure (public cloud)
Each SE Group manages Virtual Services in its own cloud.
Useful for hybrid cloud deployments, with centralized control and distributed data plane.
TKG deployed on vSphere with NSX-T as the CNI.
Avi is integrated via Kubernetes CRDs to act as Ingress Controller.
SEs provide VIPs and route to backend pods.
Enables:
East-west and north-south routing
TLS termination
Per-namespace tenant mapping
In VMware Cloud Foundation (VCF), Avi can be automatically deployed via SDDC Manager.
Integrated with vRealize Suite and NSX-T.
Used to:
Load balance NSX Edge services
Provide tenant-level L7 services per workload domain
Manage GSLB, WAF, and analytics natively
Avi is deployed across multiple geographic regions (on-prem + cloud).
GSLB (Global Server Load Balancing) is configured using Avi’s built-in DNS features.
Each site has its own SE Group.
With autoscaling enabled, SEs scale in/out based on traffic or CPU usage.
Use case: Disaster recovery, performance-based traffic steering, cloud bursting.
What application services are commonly supported by VMware Avi Load Balancer?
Avi supports Layer 4–Layer 7 load balancing, SSL termination, web application firewall (WAF), and analytics.
The platform provides a wide set of application delivery services including:
TCP/UDP load balancing
HTTP/HTTPS application load balancing
SSL/TLS offloading
Global Server Load Balancing (GSLB)
Web Application Firewall (WAF)
These services allow organizations to deliver applications securely while maintaining performance.
One exam hint: if the question mentions application analytics or integrated WAF, those capabilities are native features of Avi rather than external add-ons.
What is Global Server Load Balancing (GSLB) in Avi?
GSLB distributes client traffic across geographically distributed data centers.
Global Server Load Balancing enables applications to remain available even if an entire data center fails.
Avi achieves this by monitoring the health and performance of multiple sites. DNS responses are dynamically adjusted to direct users to the most optimal or available location.
Common decision methods include:
geo-location
round-robin
latency-based routing
health-based failover
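Two of these methods, health-based failover and latency-based routing, compose naturally: drop unhealthy sites first, then prefer the closest remaining one. A toy sketch (site names and measurements are hypothetical, and this is not Avi's GSLB algorithm):

```python
# Toy GSLB decision: exclude unhealthy sites, then pick the lowest-latency one.
sites = [
    {"name": "london",    "healthy": True,  "latency_ms": 18},
    {"name": "singapore", "healthy": True,  "latency_ms": 210},
    {"name": "virginia",  "healthy": False, "latency_ms": 5},
]

def pick_site(sites: list[dict]) -> str:
    """Return the name of the best healthy site for a client."""
    candidates = [s for s in sites if s["healthy"]]
    if not candidates:
        raise RuntimeError("no healthy sites available")
    return min(candidates, key=lambda s: s["latency_ms"])["name"]
```

Note that the lowest-latency site (virginia) loses because it is unhealthy: health-based failover takes precedence over proximity.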
Exam scenarios often describe multiple sites or regions, which indicates the question is about GSLB rather than standard local load balancing.
How does Avi Load Balancer provide application visibility?
Through built-in analytics and real-time monitoring provided by the Avi Controller.
The Controller collects telemetry data from Service Engines and presents detailed analytics such as:
client latency
server response time
application errors
throughput metrics
This data allows administrators to quickly identify performance issues.
Avi also includes log streaming and alerting capabilities, which can integrate with external monitoring platforms.
Exam questions often highlight troubleshooting or application insight, pointing to Avi’s analytics features as the correct concept.