
6V0-22.25 Performance-Tuning, Optimization, and Upgrades

Detailed Explanation of the 6V0-22.25 Knowledge Points

1. Performance Tuning Principles

Before you dive into specific tuning details, it helps to understand the high‑level principles that guide performance improvement in the VMware Avi Load Balancer environment.

1.1 Vertical vs. Horizontal Scaling

  • Vertical scaling: Increasing capacity of an existing node (for example giving a Service Engine more vCPUs and more RAM).

  • Horizontal scaling: Adding more Service Engines (SEs) to distribute traffic across multiple instances rather than relying on a single more‑powerful node.

  • In many cases, horizontal scaling is preferred for fault tolerance and better resource distribution: if one node fails, the others continue handling traffic.

  • When designing, ask: will traffic growth come as more simultaneous sessions (favor horizontal), or heavier compute per session (vertical might help)?

1.2 Application‑Aware Optimization

  • Use Application Profiles to match how your application behaves, rather than using one generic profile for everything.

    • Example: An HTTP profile may enable features such as compression, caching, and connection multiplexing, which benefit web traffic.

    • Example: A TCP profile may tune timeouts, buffer sizes, and window scaling for non-HTTP protocols.

  • Matching the profile to the workload type helps ensure you’re not paying performance overhead for unused features or missing optimizations for your actual traffic.

  • Ask: is your application a streaming video server (high throughput, large payloads), an API server (many small requests), or a legacy database front‑end (raw TCP)? Then choose the profile accordingly.

2. Service Engine (SE) Tuning

Service Engines (SEs) are the data plane of Avi—they process all live traffic. To maximize performance and efficiency, they must be sized and configured appropriately for your workload.

2.1 CPU and Memory Allocation

Performance is directly impacted by how much compute and memory you allocate to each SE.

  • More vCPUs = Higher Throughput:

    • Especially important for SSL/TLS offload, where encryption/decryption is CPU-intensive.

    • Also improves parallel connection handling.

  • More RAM = Higher Connection Capacity:

    • RAM is consumed by:

      • Connection state tracking

      • HTTP request buffers

      • Analytics logging

  • Guidance:

    • Use VMware’s Avi Sizing Calculator (available via support portal) to estimate requirements based on:

      • Number of Virtual Services (VS)

      • Expected connections per second (CPS)

      • SSL throughput
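As a rough illustration of how connection counts drive memory sizing, here is a back-of-envelope sketch in Python. The per-connection cost and base overhead below are placeholder assumptions, not Avi-published figures; use the Avi Sizing Calculator for real estimates.

```python
def estimate_se_memory_gb(concurrent_conns: int,
                          per_conn_kb: float = 10.0,
                          base_gb: float = 2.0) -> float:
    """Back-of-envelope SE memory estimate.

    per_conn_kb and base_gb are illustrative placeholders, not
    Avi-published figures; real sizing should come from the
    Avi Sizing Calculator.
    """
    return base_gb + (concurrent_conns * per_conn_kb) / (1024 * 1024)
```

The useful takeaway is the shape of the formula: memory grows linearly with tracked connections on top of a fixed base, which is why connection-heavy apps need RAM before they need vCPUs.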

2.2 Packet Processing

Avi SEs can be optimized to handle high packet rates, especially in large data centers or telecom environments.

DPDK (Data Plane Development Kit):

  • Enables user-space packet processing for ultra-high performance.

  • Must be enabled during SE group creation.

  • Bypasses the kernel network stack for faster packet handling.

  • Suitable for:

    • Low-latency apps

    • 10 Gbps+ environments

Receive Side Scaling (RSS):

  • Distributes network traffic processing across multiple CPU cores.

  • Helps avoid bottlenecks on a single vCPU.

  • Improves concurrency and reduces latency.

Best Practice:

  • Use multi-vCPU SEs with DPDK enabled for high-traffic deployments.

2.3 Connection Handling

Avi SEs are connection-aware. You can fine-tune how they manage client-server connections.

  • Max Concurrent Connections:

    • Set limits per SE to prevent overload.

    • Helps control memory usage and failover behavior.

  • Connection Multiplexing (HTTP):

    • For HTTP/1.1: Keep-alive reuse reduces overhead.

    • For HTTP/2: Multiplex multiple streams over one TCP connection.

    • Reduces backend connections significantly—improves resource efficiency.

Best Practice:

  • Enable HTTP/2 if clients support it.

  • Avoid disabling keep-alives unless you have specific latency concerns.

Summary of SE Tuning

  • CPU/Memory: Scale SE vCPUs for SSL; add RAM for connection-heavy apps.
  • Packet Processing: Use DPDK and RSS to handle high packet volume with low latency.
  • Connection Handling: Tune connection reuse and multiplexing for HTTP and TCP performance.

3. Traffic Optimization

This section focuses on reducing latency, improving throughput, and relieving backend server load using features such as caching, compression, TCP tuning, and protocol optimizations.

3.1 Caching

Avi allows you to cache HTTP responses at the SE level to offload work from backend servers.

Benefits:

  • Reduces repeated requests to backend.

  • Speeds up response time for clients.

Configuration Options:

  • Enable caching per Application Profile.

  • Define cacheable object types:

    • Based on Content-Type headers (e.g., text/css, application/javascript)

    • File extensions (.jpg, .css, .js, etc.)

  • Set cache expiration rules (e.g., using Cache-Control, Expires headers)

Best Practices:

  • Use for static content (images, scripts, style sheets)

  • Avoid caching dynamic or personalized content unless explicitly safe
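The decision rules above (content types, file extensions, and cache-control markers) can be sketched as follows; the sets and the function name are illustrative, not Avi configuration objects.

```python
# Illustrative cacheability rules: static types/extensions are cached,
# anything marked dynamic or personalized is not.
CACHEABLE_TYPES = {"text/css", "application/javascript", "image/jpeg"}
CACHEABLE_EXTENSIONS = (".jpg", ".css", ".js")

def is_cacheable(content_type: str, path: str, cache_control: str = "") -> bool:
    """Cache only static content, and honor no-store/private markers."""
    if "no-store" in cache_control or "private" in cache_control:
        return False  # dynamic or personalized content: never cache
    return (content_type in CACHEABLE_TYPES
            or path.lower().endswith(CACHEABLE_EXTENSIONS))
```

Note that the `Cache-Control` check runs first: an explicit `private` or `no-store` marker should always win over a matching extension.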

3.2 Compression

Avi supports on-the-fly compression of HTTP responses to reduce bandwidth usage and speed up delivery.

Options:

  • Gzip (widely supported)

  • Brotli (newer, more efficient for modern browsers)

Control compression based on:

  • MIME types (e.g., compress only text/html, JSON)

  • Response size (e.g., compress only if > 1 KB)

  • Request headers (e.g., Accept-Encoding from client)

Best Practices:

  • Enable Brotli for modern browsers; fallback to Gzip for older clients

  • Don’t compress already-compressed files like .zip, .mp4, .jpg
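Putting the MIME-type, size, and Accept-Encoding rules together, a sketch of the selection logic might look like this (an illustration of the policy, not Avi's implementation):

```python
def pick_encoding(mime: str, size: int, accept_encoding: str):
    """Choose Brotli, Gzip, or no compression for one HTTP response."""
    # Skip already-compressed payloads: recompressing wastes CPU.
    already_compressed = mime in ("image/jpeg", "video/mp4", "application/zip")
    if size < 1024 or already_compressed:
        return None  # too small to benefit, or no gain expected
    if not mime.startswith(("text/", "application/json", "application/javascript")):
        return None
    if "br" in accept_encoding:
        return "br"    # Brotli for modern browsers
    if "gzip" in accept_encoding:
        return "gzip"  # fallback for older clients
    return None
```

This mirrors the best practices above: prefer Brotli when the client advertises it, fall back to Gzip, and never touch media or archive formats.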

3.3 TCP Optimization

At the transport layer, tuning TCP parameters can significantly impact throughput and latency, especially for mobile or long-distance connections.

Features:

  • TCP Fast Open:

    • Reduces connection setup time

    • Useful in environments with many short-lived connections

  • Selective ACK (SACK):

    • Improves packet recovery efficiency during packet loss

  • Window Scaling & Delayed ACK:

    • Increases throughput for high-bandwidth or high-latency links

    • Avoids unnecessary acknowledgments

  • Timeout Tuning:

    • Idle Timeout: Drop idle connections earlier to save memory

    • FIN Timeout: Controls how long SEs wait for proper connection closure

Best Practices:

  • Use optimized TCP profiles for WAN-heavy applications

  • Tune timeouts per application type (APIs vs. web vs. streaming)
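Avi exposes these behaviors through TCP profiles; for readers who want to see the underlying mechanics, the sketch below sets two equivalent OS-level options on a raw Python socket. This is illustration only, not Avi configuration.

```python
import socket

def tune_tcp_socket(sock: socket.socket) -> None:
    """Apply two common transport-level optimizations to a TCP socket.

    Illustrative only: Avi manages these behaviors via TCP profiles,
    not by user code on individual sockets.
    """
    # Disable Nagle's algorithm: send small writes immediately, which
    # helps APIs with many short request/response exchanges.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    # Enable keep-alive probes so dead peers are detected and their
    # connection state can be reclaimed (cf. the idle-timeout tuning above).
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
```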

3.4 HTTP/2 and WebSockets

Modern applications benefit from protocol-level enhancements that improve concurrency and responsiveness.

HTTP/2:

  • Multiplexes multiple streams over one TCP connection

  • Built-in header compression (HPACK)

  • Reduces latency caused by TCP connection limits

WebSockets:

  • Long-lived, full-duplex communication channel

  • Used in real-time apps (chat, trading, live updates)

Avi Support:

  • Enable HTTP/2 in the application profile (client-side and/or server-side)

  • WebSocket support is native for VS with L7 profiles

Best Practices:

  • Enable HTTP/2 where backend supports it

  • Use WebSockets carefully; they consume persistent resources

Summary: Traffic Optimization

  • Caching: Reduces backend load and speeds up repeated requests.
  • Compression: Lowers bandwidth usage and improves perceived response time.
  • TCP Tuning: Increases throughput, especially on lossy or long-distance networks.
  • HTTP/2 / WebSockets: Modernize app delivery; support real-time, highly interactive workloads.

4. Auto-Scaling and Elasticity

Avi Load Balancer supports intelligent, metrics-driven auto-scaling, which ensures that your infrastructure grows or shrinks dynamically based on live application demands — without manual intervention.

4.1 Auto-Scaling Policies

Auto-scaling allows Avi to automatically:

  • Add or remove SEs within a group

  • Distribute or consolidate Virtual Services

  • Adjust resource allocation based on thresholds

Key Triggers for Auto-Scaling:

  • CPU Utilization: Add SEs when CPU exceeds a defined threshold (e.g., 80%)

  • Throughput (Mbps): Scale up if a single SE is handling too much bandwidth

  • Concurrent Connections: Trigger scaling if too many open sessions exist

You can define:

  • Upper and lower limits

  • Reaction delay or cool-down periods

  • Threshold values per metric

Example:

If CPU > 75% for 5 minutes → add 1 SE
If CPU < 40% for 10 minutes → remove 1 SE
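The two example rules above can be expressed as a small decision function; this is a sketch of the policy logic with a hypothetical helper name, not Avi's actual auto-scaler.

```python
def scaling_decision(cpu_pct: float, minutes_sustained: int,
                     se_count: int, min_se: int = 1, max_se: int = 10) -> int:
    """Return the new SE count under the example thresholds above."""
    if cpu_pct > 75 and minutes_sustained >= 5 and se_count < max_se:
        return se_count + 1   # sustained high CPU: scale out
    if cpu_pct < 40 and minutes_sustained >= 10 and se_count > min_se:
        return se_count - 1   # sustained low CPU: scale in
    return se_count           # inside the band, or at a limit: hold
```

The `minutes_sustained` guard plays the role of the reaction delay described above, and the min/max bounds correspond to the upper and lower limits you define on the SE group.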

4.2 SE Group Configuration for Auto-Scaling

To support auto-scaling, you must configure the SE Group properly.

Key Settings:

  • Minimum SEs:

    • Minimum number of SEs that should always be available

  • Maximum SEs:

    • Maximum number of SEs the group can grow to (enforces resource limits)

  • Buffer SEs:

    • Extra SEs that are powered on and ready for immediate use

    • Useful for fast reaction to sudden traffic spikes

  • Scale-In Cool-Down:

    • Prevents rapid scaling up and down (flapping)

    • Defines a delay before reducing SE count after a scale-out

  • Elastic HA (N+M):

    • Supports highly available scaling

    • E.g., 3 active SEs (N) with 1 standby (M) = 3+1 redundancy

Example Auto-Scaling Scenario

You are running an e-commerce platform that sees surges during flash sales:

  • SE Group Configuration:

    • Min: 2 SEs

    • Max: 6 SEs

    • Scale-out trigger: CPU > 80%

    • Scale-in trigger: CPU < 30%

    • Cool-down: 10 minutes

  • During a sale:

    • Traffic spikes → SE count grows to 5

  • After the sale:

    • CPU drops → Avi scales SEs down to 2 over time

Summary: Auto-Scaling and Elasticity

  • Auto-Scaling Metrics: Defines how scaling reacts to traffic patterns (CPU, connections, Mbps).
  • Min/Max SEs: Keeps control over scaling boundaries and prevents resource overuse.
  • Buffer SEs: Enables fast response without waiting for new VMs to boot.
  • Cool-Down: Avoids excessive churn due to short traffic fluctuations.

5. Analytics-Driven Optimization

Avi’s real-time analytics engine gives you deep visibility into traffic, performance issues, and system behavior. You can use this data not only to troubleshoot but also to fine-tune your environment proactively.

5.1 Real-Time Metrics

Every Virtual Service and Service Engine continuously exports telemetry data to the Avi Controller.

Key Metrics Tracked:

  • Client RTT (Round-Trip Time): Measures latency between client and Avi SE

  • Server RTT: Measures latency between SE and backend server

  • Application Response Time: Time taken by the app to generate a response

  • HTTP Error Codes:

    • 4xx errors (client issues) and 5xx errors (server issues)

    • Useful for spotting app/API issues or misconfigurations

  • TCP Metrics:

    • Retransmissions (packet loss indicator)

    • Connection drops

    • SSL handshake failures

Use Cases:

  • Identify slow backend servers

  • Detect latency spikes during peak times

  • Monitor SSL handshake delays

5.2 FlightPath

FlightPath is Avi’s built-in traffic debugging and trace tool.

How It Works:

  • You enter a source IP, destination IP/hostname, and protocol (e.g., TCP or HTTP).

  • The tool traces how the traffic flows through the Avi system, showing:

    • Each decision point (e.g., DNS resolution, pool selection, policy match)

    • Any failures, delays, or errors

    • TLS handshakes and connection reuse

Benefits:

  • Real-time, visual trace

  • Fast root-cause analysis for:

    • Traffic not reaching backend

    • Incorrect routing

    • Security/policy blocks

Best Practices:

  • Use during troubleshooting instead of guessing

  • Save traces for audit trails or RCA documentation

5.3 Health Score Tuning

Each Virtual Service and application has an automatically calculated Health Score ranging from 0 to 100.

Health Score Inputs:

  • Server health (backend status)

  • App responsiveness

  • Infrastructure conditions (CPU, memory, interface errors)

  • Error rates (HTTP 5xx, TCP resets)

You Can Tune:

  • Alert Thresholds:

    • Avoid being flooded with warnings for minor, known issues

  • Weighting of Metrics:

    • Increase weight on server response time if it’s critical to your SLA

  • Suppress Known False Positives:

    • E.g., temporarily suppress alerts from a known flaky test server

Use Case:
If your app returns 5xx errors during normal auto-scaling, you might adjust the health score logic to prevent alerts from triggering unnecessarily during those transitions.
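To make the weighting idea concrete, here is a minimal weighted-score sketch. The input names and weights are hypothetical; this is not Avi's actual scoring formula.

```python
def weighted_health_score(subscores: dict, weights: dict) -> float:
    """Combine 0-100 sub-scores using tunable per-metric weights."""
    total = sum(weights.values())
    return sum(subscores[name] * w for name, w in weights.items()) / total

# Raising the weight of a metric (e.g. server response time) makes the
# overall score track that metric more closely, as described above.
```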

Summary: Analytics-Driven Optimization

  • Real-Time Metrics: View latency, errors, and connection stats to guide tuning.
  • FlightPath: Trace the traffic path in real time for fast root-cause analysis.
  • Health Score: Monitor app health holistically and customize alert thresholds.

6. Upgrades and Patching

Upgrading Avi involves both the Controller and Service Engines (SEs). A proper upgrade plan ensures feature enhancements, bug fixes, and security patches are applied without service disruption.

6.1 Upgrade Strategy

Upgrade Sequence:

  1. Controller Cluster First – upgrade all Controller nodes (usually 3-node cluster).

  2. Service Engines Next – SEs are upgraded via the Controller interface.

Key Concepts:

  • Always perform backups and VM snapshots before starting.

  • Use Maintenance Mode to drain traffic from components before upgrading.

  • Upgrades can be manual (UI/API) or automated via scripts/tools.

Best Practices:

  • Schedule upgrades during low-traffic windows.

  • Confirm high availability is working before starting.

  • Test upgrades in staging before production.

6.2 Upgrade Process

Controller Upgrade:

  • Navigate to Administration > System > Upgrade in the UI.

  • Upload the new Controller image.

  • Start rolling upgrade (nodes are upgraded one by one).

  • The cluster remains active throughout.

Service Engine Upgrade:

  • Performed from the Controller UI/API.

  • Avi upgrades SEs one at a time (rolling fashion).

  • Traffic is moved off each SE during its upgrade (using maintenance mode).

  • Supports zero-downtime upgrades in HA setups.

Example Process:

  • Upload image

  • Initiate Controller upgrade

  • Wait for quorum to restore

  • Initiate SE group upgrade

  • SEs rotate in and out of traffic handling safely

6.3 Compatibility Checks

Before upgrading, check:

  • Controller–SE compatibility matrix

    • SEs must not be running a higher version than the Controller.

    • Controller can manage older SEs temporarily (for staged rollouts).

  • Resource Requirements:

    • Does the new version require more CPU, RAM, disk?

    • Check system prerequisites in the release notes.

  • API or feature deprecations:

    • Some features may be renamed, modified, or removed.

    • Read the release notes carefully to avoid breaking integrations.

6.4 Patch Management

Patches are usually:

  • Minor fixes or enhancements

  • Delivered as .pkg or .ova images (Controller or SE)

Patch Application:

  • Same process as upgrades — use the Controller UI/API

  • No need to replace or rebuild VMs

  • Patches are backward-compatible in most cases

Best Practices:

  • Apply critical patches quickly (e.g., security issues)

  • For minor patches, combine with scheduled upgrade cycles

Summary: Upgrades and Patching

  • Upgrade Strategy: Always start with the Controller, then SEs; use maintenance mode and back up first.
  • Rolling Upgrades: Perform upgrades node by node to avoid downtime.
  • Compatibility Check: Confirm the version matrix, resource sizing, and deprecated features.
  • Patch Management: Use the same upgrade tools, apply selectively, and test when possible.

7. Troubleshooting Performance Bottlenecks

This final section teaches you how to identify and resolve performance issues in real-world scenarios. Using built-in tools and metrics, you can track down the root cause of common problems such as high CPU usage, latency, and SSL processing delays.

7.1 High CPU or Memory Usage

Common Symptoms:
  • Service Engines are slow or unresponsive.

  • Virtual Services become unavailable.

  • Delays in traffic processing.

Diagnostic Steps:
  • In the Avi UI, go to:

    • Infrastructure > Service Engines

    • Sort by CPU or Memory usage

  • Use Analytics > Virtual Services to see if a specific app is overloading SEs.

Typical Causes:
  • Overloaded Virtual Services sharing one SE.

  • Too many SSL connections on small SEs.

  • Unoptimized health monitors (e.g., frequent checks, long responses).

Solutions:
  • Scale vertically: Add more CPU/RAM to affected SEs.

  • Scale horizontally: Split traffic across more SEs.

  • Optimize health checks: Reduce frequency or use lighter protocols (e.g., TCP vs HTTP).

7.2 Latency Issues

Common Symptoms:
  • Pages or APIs respond slowly.

  • End users experience high load times.

Troubleshooting with Real-Time Metrics:
  • Client RTT: High = network issue on client side.

  • Server RTT: High = backend app is slow.

  • Application Response Time: Indicates internal app logic delay.

  • Use FlightPath to trace the request and visualize the delay.

Potential Root Causes:
  • DNS resolution delay

  • Backend server saturation

  • Network congestion

  • SSL handshake bottlenecks

Fixes:
  • Move SEs closer to backend servers (e.g., same subnet/AZ).

  • Use HTTP keep-alive or HTTP/2 to reduce handshake overhead.

  • Load balance across more servers.

7.3 SSL Offload Performance

Common Symptoms:
  • Slow HTTPS connections.

  • High SE CPU usage during TLS handshakes.

Causes:
  • SE under-provisioned for SSL load.

  • Use of RSA certificates instead of ECC (RSA = higher CPU).

  • SSL re-encryption enabled unnecessarily.

Optimization Tips:
  • Use ECC (Elliptic Curve) certificates — faster and lighter on CPU.

  • Disable SSL re-encryption if your backend doesn’t need HTTPS.

  • Move SSL termination to larger SEs or enable DPDK.

Bonus: SSL Session Reuse
  • Enabled by default; reduces handshake overhead by reusing negotiated session state for returning clients.

  • Check that backend supports session reuse or session tickets for best performance.

Summary: Troubleshooting Performance

  • High CPU/Memory: symptoms are slow SEs and unavailable apps; fix by scaling SEs and optimizing health checks.
  • Latency Issues: long page/API load times; use FlightPath, move SEs closer, tune TCP.
  • SSL Bottlenecks: slow HTTPS and CPU spikes; use ECC certs, offload SSL, reuse sessions.

Performance-Tuning, Optimization, and Upgrades (Additional Content)

1. Performance Tuning in Multi-Tenant Environments

In multi-tenant architectures, performance isolation is critical to prevent one tenant's workload from impacting others — commonly referred to as the “noisy neighbor” problem.

Best Practices:

  • Dedicated SE Groups per Tenant:

    • Assign separate Service Engine (SE) Groups to high-priority or high-throughput tenants.

    • Enables control over sizing, placement, and fault isolation.

  • SE Resource Quotas:

    • Configure:

      • Max bandwidth per tenant

      • Max number of VSs per SE Group

      • CPU/memory limits for SEs

  • Rate Limiting and Analytics Control:

    • Use rate limiting to prevent traffic bursts from low-priority tenants.

    • Disable or reduce analytics granularity on non-critical VSs to save CPU/disk.

  • CPU Pinning (Advanced):

    • On bare-metal SEs or DPDK-enabled setups, pin vCPUs to SE processes.
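Rate limiting of the kind described above is commonly modeled as a token bucket. The class below is a concept sketch with explicit timestamps (so the behavior is deterministic and easy to test), not Avi's implementation.

```python
import time

class TokenBucket:
    """Token-bucket limiter: `rate` tokens/second refill, `burst` capacity."""

    def __init__(self, rate, burst, now=None):
        self.rate = rate
        self.capacity = burst
        self.tokens = burst
        self.last = time.monotonic() if now is None else now

    def allow(self, now=None):
        """Consume one token if available; False means the request is dropped."""
        now = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A low-priority tenant would get a small `rate` and `burst`, so short spikes are absorbed but sustained bursts are shed before they can starve other tenants.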

2. Detailed Upgrade Rollback Strategy

Even with careful planning, upgrades can fail. Avi provides several mechanisms to roll back quickly.

Rollback Tactics:

  • Controller Snapshot Restore:

    • Before upgrading, take a VM snapshot (via vSphere or cloud console).

    • If the upgrade breaks the UI/API or DB, restore snapshot and restart.

  • SE Image Downgrade:

    • Go to Infrastructure > SE Groups.

    • Select “Previous Image” or manually upload an older SE image.

  • Automated Downgrade via API:

    • Use the upgrade API object:

      POST /api/upgrade/segroup
      {
        "se_group_ref": "/api/serviceenginegroup?name=se-group1",
        "image_ref": "/api/image/previous"
      }
      
  • Disaster Recovery (DR) vs Minor Rollback:

    • DR: Full controller + SE + config recovery (usually after critical failures).

    • Minor Rollback: Downgrade within a major version (e.g., 30.2.5 → 30.2.3).

Best Practice:

  • Always export config backup before upgrade.

  • Don’t allow automatic SE upgrade unless tested.
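For scripted rollbacks, the request body from the API example above can be assembled programmatically. The helper name below is hypothetical, and authentication/session handling is omitted since it depends on your Controller setup.

```python
import json

def build_segroup_rollback_body(se_group_name: str) -> str:
    """Build the JSON body for POST /api/upgrade/segroup, as shown above."""
    return json.dumps({
        "se_group_ref": f"/api/serviceenginegroup?name={se_group_name}",
        "image_ref": "/api/image/previous",
    })
```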

3. Application-Aware Health Monitoring Optimization

Relying only on basic TCP or HTTP probes can miss application-specific issues. Avi supports custom health checks.

Techniques:

  • Custom Scripted Health Monitors:

    • Upload Python or Bash scripts to perform advanced checks (e.g., database query, login endpoint test).

    • Scripts run from the SE and return exit code.

  • Layered Health Checks:

    • Combine multiple protocols:

      • TCP (connectivity)

      • HTTP with specific status codes (200, 302)

      • SSL negotiation

  • Tuning Frequency:

    • Critical services: shorter intervals (e.g., 5s), lower timeout

    • Less critical: longer intervals to reduce SE load

  • Health Monitor Pools:

    • Assign different health monitors per pool member to isolate root causes.
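A scripted monitor of the kind described above might look like the following sketch. The login-endpoint check is an example, and how the result is consumed (exit code vs. script output) depends on the monitor type you configure in Avi.

```python
import sys
import urllib.request

def check_login_endpoint(url: str, expected_status: int = 200,
                         timeout: float = 5.0) -> bool:
    """Probe an application-specific endpoint; True means healthy."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == expected_status
    except Exception:
        return False  # connection refused, timeout, non-2xx, etc.

if __name__ == "__main__":
    # Exit 0 (healthy) or 1 (unhealthy), per the exit-code convention
    # described above.
    sys.exit(0 if check_login_endpoint(sys.argv[1]) else 1)
```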

4. Virtual Service-Level Performance Tuning

Each Virtual Service (VS) can be individually optimized based on workload characteristics.

Tuning Areas:

  • SE Allocation Mode:

    • Use Dedicated SE Mode for high-throughput or security-sensitive services.

    • Use Shared SE Mode to conserve resources across lightweight VSs.

  • Performance Profiles:

    • TCP Profile:

      • Enable/disable TCP fast open, selective ACK, window scaling

    • SSL Profile:

      • Offload options, cipher suite selection, session reuse

    • Analytics Profile:

      • Enable real-time logs only for debug or high-SLA apps

  • Connection Multiplexing:

    • Reduce backend connections by reusing client-side connections (HTTP Keep-Alive, HTTP/2)

  • Advanced Load Balancing Algorithm:

    • Least Connections vs Consistent Hashing — choose based on session stickiness and backend capacity.

5. Certificate Management Optimization (Performance-Aware)

TLS offloading affects CPU usage and session performance. Efficient certificate design improves both performance and security.

Optimization Methods:

  • ECC vs RSA Certificates:

    • ECC (Elliptic Curve Cryptography):

      • Smaller key size

      • ~30–40% faster handshake

      • Recommended for mobile and high-load services

    • RSA is heavier and slower, especially with 2048+ bit keys

  • Wildcard or SAN Certificates:

    • SAN or wildcard certs reduce SSL handshakes and memory footprint.

    • Better than per-VS certs for subdomain-heavy apps.

  • Intermediate Chain Optimization:

    • Merge and deduplicate intermediate certs.

    • Avoid excessive depth in certificate chains.

    • Use tools like OpenSSL to test:

      openssl s_client -connect vip.example.com:443 -showcerts
      
  • Session Reuse & Caching:

    • Enable SSL session reuse across clients.

    • Reduces handshake CPU load.

6. Performance Monitoring at Scale

As deployments grow to hundreds of SEs and VSs, proactive monitoring becomes essential.

Scalable Monitoring Practices:

  • Performance Baselines:

    • Define expected:

      • Bandwidth

      • CPU per SE

      • Response time

    • Set per-tenant or per-VS thresholds.

  • Layered Alerts:

    • System-level:

      • Controller CPU, memory, disk usage

    • SE-level:

      • High latency or CPU over 85%

    • App-level:

      • Error spikes (5xx)

      • Health score drops

  • Log Retention & Streaming:

    • Local SE log retention should be short (1–2 days).

    • Use Kafka, ELK, or Splunk for long-term storage.

  • Metric Aggregation:

    • Use Prometheus exporters or Avi API to push data to:

      • Grafana

      • vRealize Operations (vROps)

      • DataDog / New Relic

  • Daily/Weekly Reports:

    • Use Avi Controller scheduled exports for:

      • SLA conformance

      • Resource utilization trends
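As a concrete illustration of layered alerting, the sketch below evaluates one metrics sample against system-, SE-, and app-level thresholds. The metric names and limits are illustrative, not Avi alert objects.

```python
def evaluate_alerts(sample: dict) -> list:
    """Return triggered alerts for one metrics sample (illustrative names)."""
    alerts = []
    # System level: Controller resources
    if sample.get("controller_disk_pct", 0) > 90:
        alerts.append("system: controller disk usage high")
    # SE level: sustained CPU pressure
    if sample.get("se_cpu_pct", 0) > 85:
        alerts.append("se: CPU over 85%")
    # App level: error spikes and health-score drops
    if sample.get("http_5xx_rate", 0) > 0.05:
        alerts.append("app: 5xx error spike")
    if sample.get("health_score", 100) < 70:
        alerts.append("app: health score dropped")
    return alerts
```

In practice a loop like this would feed an exporter or alerting pipeline (Prometheus, Grafana, etc.) rather than returning strings, but the layering logic is the same.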

Summary

  • Multi-Tenant Tuning: SE group per tenant, resource quotas.
  • Upgrade Rollback: snapshots, SE downgrade, API rollback.
  • Health Checks: custom scripts, layered probes.
  • VS-Level Tuning: dedicated SEs; SSL/TCP/Analytics profiles.
  • Certificate Optimization: ECC certs, SAN, chain cleanup.
  • Large-Scale Monitoring: Kafka + ELK, layered alerts, Grafana dashboards.

Frequently Asked Questions

What configuration helps optimize load balancing performance for high traffic applications?

Answer:

Using multiple Service Engines and proper resource allocation improves performance.

Explanation:

Because Service Engines handle traffic processing, their CPU, memory, and network resources directly impact performance.

Administrators can optimize performance by:

  • deploying additional Service Engines

  • increasing CPU and memory allocation

  • distributing Virtual Services across multiple SEs

Avi automatically distributes traffic between Service Engines when scaling is enabled.

Exam questions involving traffic spikes or throughput issues typically expect answers involving Service Engine scaling rather than controller tuning.

Demand Score: 76

Exam Relevance Score: 89

What upgrade method ensures minimal disruption when updating an Avi Controller cluster?

Answer:

A rolling upgrade of Controller nodes.

Explanation:

During a rolling upgrade, Controller nodes are upgraded sequentially instead of all at once.

This allows the cluster to remain operational while one node is upgraded.

The process generally follows this order:

  1. upgrade one Controller node

  2. verify cluster stability

  3. upgrade the remaining nodes sequentially

This approach preserves configuration management and analytics availability during the upgrade.

Exam questions referencing cluster upgrades with minimal downtime typically indicate rolling upgrades.

Demand Score: 70

Exam Relevance Score: 87

How does Avi maintain traffic availability during upgrades?

Answer:

Service Engines continue handling traffic while Controller nodes are upgraded.

Explanation:

Because Avi separates the control plane and data plane, Service Engines can continue forwarding traffic even if controllers are temporarily unavailable.

During upgrades:

  • controllers are upgraded sequentially

  • Service Engines remain active

  • application traffic continues uninterrupted

This architecture reduces downtime during maintenance.

Demand Score: 68

Exam Relevance Score: 88

Which metric should administrators monitor to detect load balancer performance issues?

Answer:

Application latency and throughput metrics.

Explanation:

Avi provides analytics that display performance indicators such as:

  • client latency

  • server response time

  • throughput

  • connection rate

Monitoring these metrics allows administrators to identify performance bottlenecks.

If latency increases or throughput decreases under normal conditions, additional Service Engines or configuration adjustments may be required.

Demand Score: 72

Exam Relevance Score: 86
