ESXi is VMware’s Type-1 (bare-metal) hypervisor. This means:
It installs directly onto physical hardware, not on top of a general-purpose OS.
It provides the virtualization layer that allows multiple virtual machines (VMs) to share CPU, memory, storage, and network resources.
It abstracts hardware differences so VMs can move between hosts seamlessly (vMotion).
Key responsibilities:
Hardware abstraction – VMs see standardized virtual hardware (vNICs, vDisks).
Resource scheduling – ESXi decides which VM uses CPU or memory at any moment.
Isolation – VMs run independently; a crash in one VM does not affect others.
Security – The hypervisor provides secure boundaries between workloads.
To function correctly in a cluster, ESXi hosts require consistent and secure configuration:
Management IPs
The ESXi management network allows vCenter to manage the host.
Each host needs a stable IP address reachable by vCenter.
DNS
Hostname → IP mapping must be correct.
Forward and reverse DNS resolution must work.
DNS issues frequently break vCenter–ESXi communication.
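The forward/reverse consistency requirement can be sketched as a simple check. This is a minimal illustration using hypothetical hostnames and in-memory record maps; in practice the records would come from your DNS servers (e.g., via `socket` or `nslookup`).

```python
# Sketch: verify that forward (A) and reverse (PTR) DNS records agree
# for a set of ESXi hosts. Hostnames, IPs, and record maps below are
# hypothetical stand-ins for real DNS lookups.

def check_dns_consistency(forward: dict, reverse: dict) -> list:
    """Return the FQDNs whose forward and reverse records disagree."""
    mismatches = []
    for fqdn, ip in forward.items():
        # A reverse lookup of the host's IP must return the same FQDN.
        if reverse.get(ip) != fqdn:
            mismatches.append(fqdn)
    return mismatches

forward = {
    "esxi01.lab.local": "192.168.10.11",
    "esxi02.lab.local": "192.168.10.12",
}
reverse = {
    "192.168.10.11": "esxi01.lab.local",
    "192.168.10.12": "esxi99.lab.local",  # stale PTR record
}

print(check_dns_consistency(forward, reverse))  # ['esxi02.lab.local']
```

A stale PTR record like the one above is exactly the kind of mismatch that silently breaks vCenter–ESXi communication.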
NTP (Network Time Protocol)
Time synchronization is essential for:
HA and DRS
vCenter tasks
Log correlation
Authentication
Time drift can cause cluster failures or authentication errors.
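The drift tolerance can be made concrete with a small sketch. The 300-second threshold below mirrors the default Kerberos clock-skew tolerance (relevant to Active Directory authentication); the timestamps are hypothetical.

```python
# Sketch: flag clock drift between a host and its NTP reference.
# The 300 s default mirrors the Kerberos clock-skew tolerance;
# the epoch timestamps are illustrative.

def drift_seconds(host_time: float, reference_time: float) -> float:
    return abs(host_time - reference_time)

def drift_ok(host_time: float, reference_time: float,
             tolerance: float = 300.0) -> bool:
    return drift_seconds(host_time, reference_time) <= tolerance

ref = 1_700_000_000.0              # reference epoch time from NTP
print(drift_ok(ref + 12.0, ref))   # True: small drift, acceptable
print(drift_ok(ref + 600.0, ref))  # False: 10 minutes off, auth will break
```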
Host profiles
A feature used to standardize ESXi configuration across multiple hosts.
Ensures consistent settings for networking, storage, security, and services.
SSH lockdown
SSH is disabled by default for security.
VMware encourages enabling it only for troubleshooting.
“Lockdown Mode” restricts direct host access to increase security.
ESXi interacts tightly with hardware, so compatibility is critical:
Device drivers
ESXi needs certified drivers for NICs, storage controllers, GPUs, etc.
Incorrect drivers can cause performance issues or host instability.
Firmware and BIOS versions
Vendors provide recommended firmware levels for optimal performance.
Lifecycle Manager can update firmware with vendor add-ons.
VMware Compatibility Guide (VCG)
A database listing supported hardware (servers, controllers, NICs).
Ensures the entire configuration is supported by VMware.
vCenter is the centralized management platform for vSphere. It provides:
Management of ESXi hosts and clusters
Performance monitoring
Role-based access control
High-level features (HA, DRS, vMotion, templates)
Without vCenter:
You cannot use DRS.
You cannot manage clusters.
You cannot perform advanced operations like template management or vMotion between clusters.
The logical structure managed by vCenter includes:
Data centers – top-level containers for physical resources.
Clusters – groups of ESXi hosts that share resources and enable HA/DRS.
Hosts – individual ESXi servers.
VMs (Virtual Machines) – compute workloads.
Templates – VM images used for rapid deployment.
Folders – organizational grouping for VMs or hosts.
These structures help administrators organize and control large environments.
RBAC (Role-Based Access Control)
Assigns permissions to roles (e.g., Administrator, Read-Only).
Roles are assigned to users or groups on specific inventory objects.
SSO (Single Sign-On)
Central authentication system in vSphere.
Provides identity domains and authentication tools.
Identity Federation
Allows vCenter to use external identity providers (e.g., Active Directory, ADFS, Azure AD).
Supports MFA and enterprise-grade security integration.
vMotion
Moves a running VM between hosts without downtime.
Useful for maintenance and load balancing.
Storage vMotion
Moves VM disk files between datastores while the VM is running.
Useful for storage balancing or migrating from older hardware.
vSphere HA (High Availability)
Automatically restarts VMs on surviving hosts when an ESXi host fails.
Components: an elected primary (master) FDM agent, network heartbeats, and datastore heartbeating as a fallback.
Settings include:
Restart priority
Admission control (ensures capacity for failover)
Host isolation response
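The admission-control math can be sketched for the percentage-based policy: a reserved failover percentage is held back, and a power-on is admitted only if total reservations stay within the remainder. Figures are hypothetical; real admission control also accounts for memory overhead and per-host slot sizes.

```python
# Sketch of percentage-based HA admission control. With a reserved
# failover percentage, only (100 - pct)% of cluster capacity may be
# consumed by VM reservations. Numbers are illustrative.

def can_power_on(total_mhz: float, reserved_mhz: float,
                 new_vm_reservation_mhz: float,
                 failover_pct: float) -> bool:
    usable = total_mhz * (1 - failover_pct / 100)
    return reserved_mhz + new_vm_reservation_mhz <= usable

# Hypothetical 3-host cluster: 30,000 MHz total, 25% held for failover.
print(can_power_on(30_000, 20_000, 1_000, 25))  # True:  21,000 <= 22,500
print(can_power_on(30_000, 22_000, 1_000, 25))  # False: 23,000 >  22,500
```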
DRS (Distributed Resource Scheduler)
Balances workloads across hosts.
Considers CPU/memory utilization and VM priority.
Allows grouping via resource pools, which control shares/limits/reservations.
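The balancing idea can be illustrated with a toy sketch: find the most- and least-loaded hosts and propose moving one VM to narrow the gap. This is not the DRS algorithm; real DRS weighs shares, reservations, affinity rules, and migration cost. Host and VM names are hypothetical.

```python
# Toy sketch of a DRS-style balancing decision. Real DRS considers
# far more factors; this only illustrates the load-gap idea.

def propose_migration(hosts: dict):
    """hosts maps host name -> {vm name: CPU demand in MHz}."""
    load = {h: sum(vms.values()) for h, vms in hosts.items()}
    src = max(load, key=load.get)   # most loaded host
    dst = min(load, key=load.get)   # least loaded host
    if load[src] == load[dst]:
        return None                 # already balanced
    # Pick the VM whose move best narrows the load gap.
    gap = load[src] - load[dst]
    vm = min(hosts[src], key=lambda v: abs(gap - 2 * hosts[src][v]))
    return (vm, src, dst)

hosts = {
    "esxi01": {"db01": 4000, "web01": 1000},
    "esxi02": {"web02": 1500},
}
print(propose_migration(hosts))  # ('web01', 'esxi01', 'esxi02')
```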
Fault Tolerance (FT)
Provides zero-downtime protection for select workloads.
Creates a secondary VM that mirrors the primary in real time.
Suitable for workloads that cannot tolerate restart delays.
vSphere Lifecycle Manager (vLCM)
Standardizes lifecycle operations using image-based management.
Applies ESXi versions, drivers, firmware consistently across clusters.
Replaces older baseline-based patching workflows.
vSAN aggregates local storage (SSD/HDD) from ESXi hosts into a shared, distributed datastore.
Eliminates the need for external SAN arrays.
Storage grows by adding more hosts or disks.
A vSAN-capable host typically has:
Cache tier (SSD or NVMe)
Capacity tier (SSD/HDD or all-flash)
A disk group = 1 cache device + 1 to 7 capacity devices.
Multiple disk groups improve performance and fault tolerance.
Policies define how data is stored on vSAN:
FTT (Failures To Tolerate):
FTT=1 → 1 component failure tolerated
FTT=2 → 2 failures tolerated
RAID type
RAID-1 (mirroring) – best performance
RAID-5/6 (erasure coding) – better capacity efficiency
Stripe width
Number of capacity devices used per object
Helps performance for large I/O workloads
Policies can be applied per VM, per virtual disk, or per vSAN object.
Allows tiered storage within the same datastore.
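The raw-capacity overhead implied by these policies can be sketched as simple multipliers. These reflect the standard OSA layouts (RAID-1 mirroring, RAID-5 as 3+1, RAID-6 as 4+2); ESA component layouts differ, and the figures ignore checksum and metadata overhead.

```python
# Sketch: raw capacity consumed by a VM under common vSAN policies.
# Multipliers assume standard OSA layouts (RAID-5 = 3 data + 1 parity,
# RAID-6 = 4 data + 2 parity); they are illustrative, not exhaustive.

def raw_capacity_gb(vm_size_gb: float, ftt: int, raid: str) -> float:
    if raid == "RAID-1":
        return vm_size_gb * (ftt + 1)   # one full mirror per tolerated failure
    if raid == "RAID-5" and ftt == 1:
        return vm_size_gb * 4 / 3       # 3 data + 1 parity
    if raid == "RAID-6" and ftt == 2:
        return vm_size_gb * 6 / 4       # 4 data + 2 parity
    raise ValueError("unsupported policy combination")

print(raw_capacity_gb(100, 1, "RAID-1"))            # 200.0 GB
print(raw_capacity_gb(100, 2, "RAID-1"))            # 300.0 GB
print(round(raw_capacity_gb(100, 1, "RAID-5"), 1))  # 133.3 GB
print(raw_capacity_gb(100, 2, "RAID-6"))            # 150.0 GB
```

The same 100 GB VM thus costs anywhere from 133 GB to 300 GB of raw capacity depending on policy, which is why RAID-5/6 is described as more capacity-efficient.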
When disks/hosts fail or policy changes occur, vSAN rebalances or rebuilds data.
Resync operations consume IOPS and should be monitored.
vSAN includes built-in health checks for:
Network connectivity
Disk performance
Cluster integrity
Storage policy compliance
vSAN requires free space for rebuilds and resyncs.
VMware recommends keeping ~25–30% free capacity.
Stretched cluster: two sites + witness for site-level protection.
Fault domains: group hosts by rack or location to avoid correlated failures.
NSX Manager
The management and control plane for NSX.
Handles the API, configuration, security policies, and cluster state.
Edge Nodes
Provide north-south connectivity, linking logical networks to physical networks.
Support:
Load balancing
NAT
VPN (IPsec, SSL)
BGP/OSPF routing
Transport Nodes
ESXi or KVM hosts that run the NSX data plane.
Encapsulate and forward overlay traffic (Geneve).
Segments (Logical Switches)
Layer-2 broadcast domains, analogous to VLANs.
Used to isolate different application tiers.
Gateways (Logical Routers)
Tier-1 (T1): Handles east-west routing within application domains.
Tier-0 (T0): Provides north-south connectivity to physical networks.
Overlay Networks
Virtual networks created on top of the physical underlay.
Provide isolation, agility, and scalability.
Distributed Firewall (DFW)
Enforces security policies at each VM’s vNIC.
Enables micro-segmentation.
Policies move with the VM during vMotion.
Distributed IDS/IPS
Advanced NSX features available depending on license tier.
Provide deep packet inspection and threat detection.
Management Domain
Runs core infrastructure components:
vCenter
NSX
vSAN
SDDC Manager (if applicable)
Used only for platform administration.
Workload Domains
Dedicated resource pools for applications or tenants.
Provide isolation, lifecycle independence, and scalability.
BOM (Bill of Materials)
Ensures consistent versions across ESXi, vCenter, NSX, and vSAN.
Reduces compatibility risks.
Deployment Automation
Deployment workflows automate cluster creation and configuration.
Scaling is done using guided workflows.
Lifecycle Management (LCM)
Automates upgrades for:
ESXi
vCenter
NSX
vSAN
Ensures consistency across workload domains.
Performance Monitoring
CPU, memory, disk I/O, network throughput.
Capacity usage and forecasting.
Automated detection of performance issues.
Hardware and software health checks.
Log Management
Collects logs from ESXi, vCenter, NSX, and applications.
Essential for troubleshooting and compliance.
Helps identify root causes across multiple components.
Enables event-driven alerts.
Backup and Recovery
vCenter, NSX, and VCF components must be backed up regularly.
Some components require dedicated backup tools.
Must ensure consistency between application data and VM snapshots.
Consider multi-VM application recovery sequences.
Scripting Tools
PowerCLI (PowerShell-based)
Python with VMware SDKs
Useful for bulk operations and repeatable tasks.
Infrastructure as Code (IaC)
Tools like Terraform automate:
VM creation
Resource provisioning
Network configuration
Entire VCF/domain deployments (depending on provider)
Automation improves consistency and reduces operational overhead.
The vSphere Distributed Switch centralizes network configuration and delivers advanced enterprise networking capabilities essential for visibility, performance control, and security.
LACP (Link Aggregation Control Protocol)
LACP provides dynamic link aggregation between ESXi hosts and physical switches. It automatically negotiates and verifies aggregated uplinks, improving bandwidth availability and providing redundancy. LACP requires configuration on both the VDS and the physical switch.
Network I/O Control (NIOC)
NIOC enforces bandwidth allocation rules for traffic types such as vMotion, management traffic, vSAN, replication, and VM traffic. Shares help prioritize traffic during contention, while limits cap maximum bandwidth consumption. NIOC provides predictable performance in oversubscribed network environments.
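The share mechanics can be sketched as proportional division of the uplink during contention, with limits applied as caps. The traffic classes and share values below are illustrative, not NIOC defaults.

```python
# Sketch of NIOC-style share math: under contention each traffic type
# receives bandwidth proportional to its shares, capped by any limit.
# Class names, shares, and limits are hypothetical.

def allocate(link_gbps: float, classes: dict) -> dict:
    """classes maps name -> {'shares': int, 'limit': Gbps or None}."""
    total_shares = sum(c["shares"] for c in classes.values())
    out = {}
    for name, c in classes.items():
        share = link_gbps * c["shares"] / total_shares
        limit = c["limit"]
        out[name] = round(min(share, limit) if limit else share, 2)
    return out

classes = {
    "management": {"shares": 20,  "limit": None},
    "vmotion":    {"shares": 50,  "limit": 4.0},
    "vsan":       {"shares": 100, "limit": None},
    "vm":         {"shares": 30,  "limit": None},
}
print(allocate(10.0, classes))  # vSAN gets half the 10 Gbps uplink
```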
Traffic Shaping (Ingress and Egress)
Traffic shaping regulates bandwidth at the port or port-group level.
Ingress shaping controls incoming traffic toward the virtual switch.
Egress shaping controls outgoing traffic.
Shaping is commonly used for multi-tenant environments or when specific applications require rate limiting.
Port Mirroring (SPAN / ERSPAN)
Port mirroring duplicates traffic from specified source ports or port groups to a destination port or remote analyzer.
SPAN mirrors traffic locally.
ERSPAN supports remote mirroring encapsulated over IP networks.
This capability is critical for packet inspection, intrusion analysis, and advanced troubleshooting.
NetFlow / IPFIX
NetFlow and IPFIX allow the VDS to export flow statistics to network analytics tools. Administrators can identify which VMs generate the most traffic, detect anomalies, evaluate east-west traffic patterns, and measure bandwidth consumption across the virtual network.
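The top-talker analysis an analytics tool performs on exported flows can be sketched as a simple aggregation. The flow tuples (source VM, destination VM, byte count) are hypothetical samples, not a real export format.

```python
# Sketch: aggregate exported flow records to find top talkers.
# The (src, dst, bytes) tuples stand in for parsed NetFlow/IPFIX records.

from collections import Counter

def top_talkers(flows, n=2):
    totals = Counter()
    for src, _dst, nbytes in flows:
        totals[src] += nbytes      # sum bytes sent per source VM
    return totals.most_common(n)

flows = [
    ("web01",    "db01",  5_000_000),
    ("web01",    "db01",  7_000_000),
    ("backup01", "nas01", 40_000_000),
    ("app01",    "db01",  2_000_000),
]
print(top_talkers(flows))  # backup01 dominates east-west traffic
```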
Health Check (VLAN, MTU, Teaming)
VDS health checks verify alignment between virtual and physical network configurations.
VLAN checks validate trunking.
MTU checks detect mismatched jumbo frame settings.
Teaming checks ensure uplinks follow proper failover and hashing expectations.
vSphere contains multiple layers of security across compute, network, and management planes.
VM Encryption
VM Encryption provides per-VM encryption of virtual disks and metadata using a Key Management Server (KMS). It protects VM data at rest and prevents unauthorized access even if the storage medium is compromised.
vTPM (Virtual Trusted Platform Module)
vTPM provides secure key storage and enables guest OS security features such as BitLocker, Secure Boot, and attestation. vTPM devices are isolated and encrypted within vSphere.
ESXi Secure Boot
Secure Boot ensures ESXi loads only signed and trusted components. The hypervisor verifies the integrity of boot modules, drivers, and firmware. It prevents tampering or rootkit-based compromises.
VM Secure Boot
VM Secure Boot validates EFI firmware and guest OS bootloaders. Only signed OS components are permitted to run. This is essential for hardened workloads and compliance-driven environments.
Virtualization-Based Security (VBS) Support
vSphere supports VBS for Windows workloads by enabling virtualized trust layers such as Credential Guard and Device Guard. This requires VM hardware version alignment, vTPM, and compatible CPU virtualization extensions.
Lockdown Mode (Normal and Strict)
Lockdown Mode restricts direct ESXi host access.
Normal mode allows access through DCUI for break-glass scenarios.
Strict mode removes DCUI access entirely, allowing only vCenter-mediated administration.
This enforces strong access boundaries and reduces attack surface.
Certificate Management Basics (Machine SSL, Solution User Certificates)
Machine SSL certificates secure communications to and from vCenter.
Solution user certificates authenticate services such as vpxd, vpxd-extension, and others within the vSphere architecture.
Certificate rotation and renewal are critical to maintaining trust and avoiding service interruption.
CPU Ready, Co-Stop, and Contention Indicators
CPU Ready measures how long a VM waits for physical CPU resources.
Co-Stop indicates scheduling delays for multi-vCPU VMs that must run vCPUs simultaneously.
High values indicate oversubscription or improperly sized VMs.
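The conversion from the raw counter to a percentage follows the standard formula: vCenter reports Ready as a summation in milliseconds per sampling interval (20 seconds for real-time charts). Dividing by the vCPU count gives a per-vCPU view for multi-vCPU VMs.

```python
# Standard CPU Ready conversion: ready time (ms) over the sampling
# interval, expressed as a percentage. The 20 s default matches
# vCenter's real-time performance charts.

def cpu_ready_pct(ready_ms: float, interval_s: float = 20.0,
                  vcpus: int = 1) -> float:
    return ready_ms / (interval_s * 1000 * vcpus) * 100

print(cpu_ready_pct(1000))           # 5.0% on one vCPU: worth investigating
print(cpu_ready_pct(1000, vcpus=4))  # 1.25% per vCPU: usually benign
```

The same raw counter value can therefore look alarming or harmless depending on vCPU count, which is why per-vCPU normalization matters when sizing VMs.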
Memory Compression
When memory is under pressure, ESXi compresses cold memory pages before resorting to swapping. Compression reduces performance impact but still indicates resource contention.
Memory Reclamation (Ballooning vs Swap) Granular Behavior
Ballooning reclaims memory through the balloon driver inside the guest OS. It affects non-critical memory and preserves performance.
Swapping moves memory pages to disk. This severely impacts performance and occurs only when memory pressure is extreme.
Latency Sensitivity: High Mode
Latency Sensitivity set to High dedicates CPU cores to a VM and disables co-scheduling delays. It improves determinism but reduces cluster flexibility and increases resource fragmentation.
Resource Pool Operational Considerations
Resource pools apply shares, limits, and reservations across groups of VMs. Improper configuration can cause unintended performance throttling. Child pools share resources relative to siblings, not globally.
Reservations and Impact on HA Admission Control
VM reservations reduce the available unreserved cluster capacity.
HA admission control must ensure enough reserved resources exist to restart VMs after a host failure. Overuse of reservations reduces consolidation efficiency.
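The interaction can be sketched with a worst-case check: after losing the largest host, does surviving capacity still cover all reservations? The host sizes and reservations are hypothetical; real admission control also accounts for VM memory overhead and fragmentation.

```python
# Sketch: how VM reservations erode the capacity HA needs for restarts.
# Worst case assumed: the largest host fails. Numbers are illustrative.

def unreserved_after_failure(host_mb: list, reservations_mb: list) -> float:
    """Surviving capacity minus the sum of all VM reservations."""
    surviving = sum(host_mb) - max(host_mb)
    return surviving - sum(reservations_mb)

hosts = [256_000, 256_000, 256_000]   # 3 hosts, 256 GB RAM each
light = [8_000] * 10                  # 10 VMs with modest reservations
heavy = [60_000] * 10                 # reservation sprawl

print(unreserved_after_failure(hosts, light) >= 0)  # True: restarts fit
print(unreserved_after_failure(hosts, heavy) >= 0)  # False: failover at risk
```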
vSAN File Services
vSAN File Services provides NFS and SMB shares directly from vSAN clusters. This removes the need for external file-storage appliances and integrates file workloads into the vSAN storage policy framework.
vSAN HCI Mesh (Compute-Only and Storage-Only Models)
HCI Mesh allows vSAN storage to be shared across clusters.
Compute-only clusters consume vSAN storage without contributing disks.
Storage-only models centralize storage for multiple compute clusters.
vSAN Encryption (KMS Integration)
vSAN Encryption encrypts data at the disk-group level using KEK/DEK models.
All objects remain encrypted regardless of datastore movement.
Requires an external KMS compatible with KMIP.
vSAN ESA (Express Storage Architecture)
ESA introduces a new architecture leveraging high-performance NVMe storage.
It optimizes log-structured writes, distributed RAID, and enhanced compression.
ESA provides significantly higher throughput and efficiency compared to the original OSA architecture.
Compression-Only vs Deduplication-and-Compression
Compression-only reduces storage consumption with low CPU overhead.
Deduplication-and-compression provides higher space efficiency but requires all-flash configurations and increases CPU cost.
vSAN Object Repair Timer
The repair delay timer determines when vSAN begins reconstructing absent components after a host failure. This prevents unnecessary rebuilds during transient failures.
vSAN Stretched Cluster Additional Behaviors (Delayed Ack, Site Locality)
Delayed acknowledgement improves write efficiency across sites.
Site locality ensures read operations use the closest replica when possible to reduce latency.
NSX Load Balancer (L4/L7)
The NSX Load Balancer supports both L4 (TCP/UDP) and L7 (HTTP/HTTPS) services.
It provides SSL offload, health checks, and application-aware routing.
DNS and DHCP Services in NSX
NSX can provide distributed or centralized DHCP relay, DHCP servers, and DNS forwarding mechanisms. This reduces dependency on external network services.
NSX Federation (Global Manager and Local Manager)
NSX Federation enables multi-site management under a global policy framework.
The Global Manager distributes configuration and enforces consistent networking and security across Local Manager instances.
Traceflow, Port Mirroring, and Packet Capture Tools
Traceflow simulates packet paths through the NSX fabric, showing firewall rule evaluations and routing stages.
Port mirroring and packet captures enable detailed traffic analysis on Edge Nodes and transport nodes.
NSX Identity Firewall (IDFW)
IDFW applies firewall rules based on user identity, integrating with AD domain membership. It secures workloads using user-level context rather than IP addresses alone.
NSX DFW Rule Publishing and Section Hierarchy
Rules are processed top-down, with system sections preceding user-defined sections.
Publishing behavior determines how rules are applied across transport nodes.
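The top-down, first-match evaluation order can be sketched as follows. The rules and match fields are heavily simplified stand-ins; real DFW rules match on groups, services, and context profiles rather than bare strings.

```python
# Sketch of top-down, first-match rule evaluation as in the NSX DFW.
# Rule fields are simplified placeholders for real group/service matches.

def evaluate(rules, src, dst, port):
    for rule in rules:                       # top-down: first match wins
        if (rule["src"] in (src, "any") and
                rule["dst"] in (dst, "any") and
                rule["port"] in (port, "any")):
            return rule["action"]
    return "drop"                            # implicit default deny

rules = [
    {"src": "web", "dst": "db", "port": 3306,  "action": "allow"},
    {"src": "any", "dst": "db", "port": "any", "action": "drop"},
]
print(evaluate(rules, "web", "db", 3306))  # allow (first rule matches)
print(evaluate(rules, "app", "db", 3306))  # drop  (falls to second rule)
```

Because evaluation stops at the first match, rule ordering within and across sections directly determines the outcome, which is why section hierarchy matters.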
Password Rotation Workflows
VCF maintains credential governance through password rotation for vCenter, NSX, ESXi, and SDDC Manager. Automated workflows ensure consistency and prevent stale credentials.
Certificate Rotation Workflows
VCF manages certificate lifecycle through coordinated rotation processes across vCenter, NSX, and SDDC Manager. This prevents trust failures and service outages.
BOM (Bill of Materials) Validation
The VCF BOM defines the supported versions of ESXi, vCenter, NSX, and vSAN.
SDDC Manager enforces BOM consistency and prevents unsupported upgrade paths.
VCF Backup and Restore Requirements
Backups must include SDDC Manager, vCenter databases, NSX Manager clusters, and configuration exports.
Restore procedures must follow validated ordering to avoid dependency failures.
VCF Troubleshooting Workflows
Bring-up logs help diagnose deployment issues.
LCM logs track upgrade and patch lifecycle problems.
Task failures within SDDC Manager provide detailed error codes and remediation suggestions.
Fleet Management and Multi-Instance Management (VCF 9.x)
VCF 9 introduces multi-instance awareness, allowing centralized governance of multiple VCF deployments.
VCF Lifecycle Drift Detection
SDDC Manager detects configuration drift from the desired state defined by the BOM.
Drift remediation workflows realign versions, firmware, and configuration.
Which VMware component performs the initial deployment of the management domain during the VCF bring-up process?
Cloud Builder
Cloud Builder is a deployment appliance used during the initial bring-up of a VMware Cloud Foundation environment. It validates prerequisites and automatically deploys the core infrastructure components required for the management domain, including ESXi hosts, vCenter Server, NSX Manager, and VCF Operations management components. Administrators supply configuration parameters through a deployment workbook or JSON configuration file. Cloud Builder then orchestrates the automated deployment and configuration process to ensure the environment meets VMware’s validated architecture standards. Once the management domain is successfully deployed, administrators transition to VCF Operations management tools to create workload domains and perform lifecycle operations. A frequent misconception is that vCenter performs the initial deployment, but vCenter itself is deployed during the Cloud Builder process.
Demand Score: 76
Exam Relevance Score: 90
What management component is responsible for lifecycle management of VCF infrastructure?
VCF Operations Fleet Management (formerly SDDC Manager).
VCF Operations Fleet Management automates the deployment, configuration, patching, and upgrading of the VMware Cloud Foundation stack. It coordinates lifecycle operations across vSphere, vSAN, NSX, and vCenter components. Administrators use its interface or APIs to manage infrastructure updates and ensure compatibility between platform components. This centralized lifecycle management reduces operational complexity and prevents unsupported version combinations. It also maintains the inventory of hosts, clusters, and workload domains. A common misunderstanding is that vCenter handles lifecycle management, but vCenter primarily manages virtualization resources. VCF Operations Fleet Management handles the platform-level lifecycle orchestration that ensures the entire SDDC stack remains compliant and properly updated.
Demand Score: 73
Exam Relevance Score: 91
During a VMware Cloud Foundation upgrade, what is the typical sequence of component upgrades?
Operations components → SDDC Manager/VCF Operations → NSX → vCenter → ESXi hosts.
VMware Cloud Foundation upgrades follow a controlled lifecycle workflow to maintain platform stability and compatibility. The process typically begins with operations and management components. After those are updated, the lifecycle workflow upgrades the central management components, followed by networking infrastructure (NSX). Once the networking layer is updated, the process continues with vCenter upgrades and finally upgrades the ESXi hosts. This sequence ensures that the management layer remains compatible with the infrastructure it controls. Upgrading components in the wrong order can lead to version mismatches or operational failures. Lifecycle workflows in VCF Operations enforce this sequence automatically to reduce the risk of administrator error.
Demand Score: 71
Exam Relevance Score: 86
What VMware components form the core software-defined data center stack within VCF?
vSphere, vSAN, NSX, and VCF Operations management components.
VMware Cloud Foundation integrates several technologies to create a complete software-defined data center (SDDC). vSphere provides compute virtualization and cluster management through ESXi and vCenter. vSAN delivers software-defined storage by pooling local disks across hosts into a shared datastore. NSX provides network virtualization, enabling software-defined networking, routing, security policies, and microsegmentation. VCF Operations management components orchestrate lifecycle management and automation across these technologies. Together, these components form the foundation of the VCF platform and allow administrators to deploy scalable private cloud environments. Understanding how these components interact is essential for troubleshooting, design, and operational tasks in VCF environments.
Demand Score: 72
Exam Relevance Score: 89
What function does VCF Automation (formerly VMware Aria Automation) provide in a VCF environment?
VCF Automation provides self-service provisioning and automation of infrastructure resources for application teams.
VCF Automation enables infrastructure-as-a-service capabilities within VMware Cloud Foundation. It allows developers and application teams to request virtual machines, Kubernetes clusters, or other resources through a self-service portal. Administrators define blueprints, policies, and governance controls that regulate how resources are provisioned. When users submit requests, the platform automatically deploys the required infrastructure using automation workflows. This capability transforms the VCF environment into a private cloud platform rather than a manually managed virtualization environment. Without automation, administrators must manually provision infrastructure, which slows down delivery and increases operational overhead. VCF Automation helps organizations accelerate application deployment while maintaining governance and compliance.
Demand Score: 70
Exam Relevance Score: 83