VMware Cloud Foundation (VCF) 9.x is VMware’s integrated Software-Defined Data Center (SDDC) platform. It consolidates compute, storage, networking, and lifecycle management into a single solution. For modern workloads—including VMs, containers, and Kubernetes clusters—VCF serves as the standardized foundation.
VCF bundles four major VMware technologies:
vSphere – compute virtualization
vSAN – software-defined, policy-driven storage
NSX – software-defined networking and security
SDDC Manager – centralized lifecycle and domain management
As a whole, these components deliver a consistent operational model across private, hybrid, and multi-cloud environments.
VCF organizes infrastructure into domains, each serving a specific operational purpose.
The Management Domain hosts all platform-level services required to run the environment:
SDDC Manager
Management vCenter Server
NSX managers
Monitoring solutions such as VMware Aria Operations
Infrastructure services (DNS, NTP, certificate authority integrations, etc.)
It is typically compact but built with high availability to ensure platform resiliency.
A Workload Domain (WLD) is a dedicated environment used to host tenant or business workloads. Each WLD:
Uses its own vCenter Server (and optionally its own NSX instance)
Contains one or more vSphere clusters
Can serve different purposes, such as:
General VM workloads
Virtual Desktop Infrastructure (VDI)
vSphere with Tanzu / VKS (vSphere Kubernetes Service) clusters
Mission-critical or isolated environments
WLDs allow predictable scaling and strong isolation boundaries.
Within each domain, clusters act as modular units of capacity. You expand capacity by adding:
More hosts to an existing cluster
More clusters to a Workload Domain
This modular approach simplifies lifecycle operations, scaling, and workload placement.
VCF includes full-stack lifecycle automation:
Automated, end-to-end upgrades of:
ESXi
vCenter
NSX
vSAN
Pre-checks and drift remediation
Version consistency enforced through VMware’s interoperability matrix
This ensures the platform remains secure, stable, and compliant without manual patch coordination.
vSphere is VMware’s foundational virtual infrastructure platform, enabling organizations to run VMs at scale with high performance and reliability.
At its foundation is ESXi:
A lightweight, purpose-built hypervisor running directly on physical servers
Provides CPU, memory, storage, and network virtualization services
Forms the building block for vSphere clusters
vCenter is the centralized management plane for vSphere environments. It provides:
Inventory and configuration management for hosts, clusters, and VMs
High-level operational tools (performance charts, tasks/events, alarms)
Template and content library management
Integration with backup, monitoring, automation, and security solutions
Through vCenter, administrators gain complete visibility and control over the compute infrastructure.
vSphere includes several enterprise-grade features that ensure workload continuity and resource efficiency:
vMotion – live migration of VMs between hosts
Storage vMotion – migration of VM disks between datastores
HA (High Availability) – automatic VM recovery after host failures
DRS (Distributed Resource Scheduler) – automatic workload balancing
Fault Tolerance – continuous availability through VM mirroring
Resource Pools – hierarchical allocation of CPU and RAM
These capabilities ensure predictable performance and minimal downtime.
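As a concrete illustration, the sketch below triggers a compute vMotion programmatically with pyVmomi. The vCenter address, credentials, and VM/host names are placeholders, and production code would verify certificates and wait for task completion rather than disconnecting immediately.

    # Minimal pyVmomi sketch: connect to vCenter and live-migrate (vMotion) a VM
    # to another host. Hostnames, credentials, and object names are placeholders.
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim
    import ssl

    ctx = ssl._create_unverified_context()   # lab convenience only; use valid certs in production
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="***", sslContext=ctx)
    content = si.RetrieveContent()

    def find_by_name(vimtype, name):
        """Walk the inventory and return the first managed object with this name."""
        view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
        try:
            return next(obj for obj in view.view if obj.name == name)
        finally:
            view.DestroyView()

    vm = find_by_name(vim.VirtualMachine, "app-vm-01")
    target = find_by_name(vim.HostSystem, "esxi-02.example.com")

    # A RelocateSpec with only a destination host set performs a compute vMotion.
    task = vm.RelocateVM_Task(vim.vm.RelocateSpec(host=target))
    # ...wait for task completion in real code...
    Disconnect(si)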
vSAN is VMware’s hyperconverged storage technology. It aggregates local storage devices from ESXi hosts to form a resilient, shared datastore across the cluster.
Key characteristics:
Eliminates the need for external SAN/NAS arrays
Delivers storage through distributed architecture
Scales as hosts are added or removed
Integrates deeply with vSphere clusters
vSAN allows applications to benefit from high availability and performance without complex storage fabrics.
Instead of configuring storage manually, vSAN uses policies to define:
Redundancy requirements
Failure tolerance
Stripe width
Checksum usage
Compression and deduplication
Policies can be applied at the level of a VM or even an individual virtual disk, offering fine-grained control.
vSAN supports two architectural models:
ESA (Express Storage Architecture)
Optimized for NVMe-based hardware
Offers higher performance and efficiency
Recommended in modern VCF deployments
OSA (Original Storage Architecture)
SAS/SATA-based disk groups
Still supported but gradually being phased out
ESA represents VMware’s future direction for high-performance software-defined storage.
NSX is VMware’s platform for network virtualization and security. It decouples network services from physical hardware and enables flexible, programmable networking.
NSX provides:
Logical switching
Logical routing
Load balancing
Distributed firewalling (DFW)
VPN and NAT services
Micro-segmentation for east–west traffic control
These features enable dynamic, secure networking for both VM-based and container-based workloads.
NSX is a mandatory component in VCF and provides:
Overlay networks (using GENEVE encapsulation) for flexible network segmentation
CNI (Container Network Interface) integration for VKS workloads
Security policies at VM and Pod levels
Load-balancing and north–south routing
Support for application-level network policies in Kubernetes
NSX is essential for delivering a unified, multi-tenant Kubernetes platform in VCF.
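To show what "security policies at VM and Pod levels" looks like from the Kubernetes side, here is a minimal NetworkPolicy created with the official kubernetes Python client; in an NSX-backed cluster, NCP translates such policies into distributed firewall rules. The namespace and label names are illustrative.

    # Sketch: restrict east-west traffic so only frontend pods reach backend pods on 8443.
    from kubernetes import client, config

    config.load_kube_config()   # or load_incluster_config() when running inside a pod

    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="allow-frontend-to-backend", namespace="team-a"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(match_labels={"app": "backend"}),
            policy_types=["Ingress"],
            ingress=[client.V1NetworkPolicyIngressRule(
                _from=[client.V1NetworkPolicyPeer(
                    pod_selector=client.V1LabelSelector(match_labels={"app": "frontend"}))],
                ports=[client.V1NetworkPolicyPort(port=8443, protocol="TCP")],
            )],
        ),
    )
    client.NetworkingV1Api().create_namespaced_network_policy("team-a", policy)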
vSphere with Tanzu transforms vSphere clusters into Kubernetes-enabled platforms, allowing admins and DevOps teams to run containers and VMs in a unified manner.
When Workload Management is enabled:
The ESXi hosts in the cluster become Kubernetes worker nodes
The Kubernetes control plane is deployed and integrated directly with vSphere
Workloads become accessible as native Kubernetes resources
This architecture enables Kubernetes to be a “first-class citizen” within vSphere.
VKS supports multiple workload execution models:
PodVMs
Pods running as highly isolated VMs
Provide strong security boundaries and performance guarantees
VM Service
Allows developers to request and manage VMs through Kubernetes YAML
Integrates VM lifecycle with Kubernetes APIs
Tanzu Kubernetes Cluster (TKC)
Full guest Kubernetes clusters running inside the Supervisor
Suitable for multi-team environments
Allows version selection, scaling, and lifecycle control per cluster
This flexible workload model supports both modern microservices and legacy workloads.
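For the VM Service model specifically, the sketch below requests a VM declaratively through the Kubernetes API using CustomObjectsApi. The vmoperator.vmware.com group, the API version, and the spec field names vary between releases, so treat this as illustrative and check the CRDs installed on your Supervisor.

    # Illustrative VM Service request: a VM described as a Kubernetes custom resource.
    from kubernetes import client, config

    config.load_kube_config()

    vm_manifest = {
        "apiVersion": "vmoperator.vmware.com/v1alpha1",   # assumption: version differs per release
        "kind": "VirtualMachine",
        "metadata": {"name": "db-vm-01", "namespace": "team-a"},
        "spec": {
            "className": "best-effort-small",      # VM class published to the namespace
            "imageName": "ubuntu-22.04",           # content library image
            "storageClass": "vsan-default-policy",
            "powerState": "poweredOn",
        },
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="vmoperator.vmware.com",
        version="v1alpha1",
        namespace="team-a",
        plural="virtualmachines",
        body=vm_manifest,
    )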
Namespaces act as project-level logical boundaries for Kubernetes-based workloads.
Key features:
Resource limits (CPU, memory, storage)
Access control policies
Storage policies and allowed StorageClasses
Network and security policies
Visibility in both Kubernetes and vSphere UI
They offer a structured, multi-tenant model across teams and applications.
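The resource limits listed above surface in Kubernetes as quota objects. A minimal sketch, assuming a namespace named team-a (in VCF these limits are normally configured per namespace from the vSphere UI rather than applied directly):

    # Sketch: namespace-level limits expressed as a standard Kubernetes ResourceQuota.
    from kubernetes import client, config

    config.load_kube_config()

    quota = client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name="team-a-quota", namespace="team-a"),
        spec=client.V1ResourceQuotaSpec(hard={
            "requests.cpu": "20",
            "requests.memory": "64Gi",
            "requests.storage": "500Gi",
        }),
    )
    client.CoreV1Api().create_namespaced_resource_quota("team-a", quota)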
The Tanzu portfolio extends Kubernetes capabilities in VCF environments.
Tanzu Kubernetes Grid (TKG) is VMware’s enterprise-ready Kubernetes distribution:
Can run inside or outside VCF
Provides consistent cluster deployment models
Offers tested, validated Kubernetes versions
Tanzu Mission Control (TMC) delivers centralized management across many clusters:
Cluster lifecycle operations
Policy management
Backup and restore
Fleet-wide observability
It enables organizations to manage Kubernetes at scale across multiple data centers and clouds.
Depending on the deployment, additional Tanzu services may be available:
Tanzu Observability provides:
Metrics
Logs
Traces
Dashboards
Tanzu Service Mesh offers:
Secure service-to-service communication
Traffic shaping and control
mTLS encryption
Advanced routing for microservices
These tools improve visibility, reliability, and security for cloud-native applications.
The Aria Suite delivers management, automation, and operational insights across VMware environments.
Aria Operations provides:
Resource monitoring
Performance tuning
Capacity forecasting
Anomaly detection
Root cause analytics
Aria Automation provides:
Self-service catalog
Blueprint-based provisioning
Infrastructure-as-Code (IaC)
Policy-based governance
Multi-cloud automation
Aria Operations for Logs provides:
Centralized log collection
Log search and correlation
Alerting
Dashboards for NSX, vSphere, VKS, and applications
Together, these tools improve observability and automation across the SDDC.
Protecting workloads is essential in any enterprise environment.
VM-level backup
VADP-based solutions (e.g., Veeam, Commvault)
Image-based backups of VMs and configuration
Cloud Disaster Recovery / SRM (Site Recovery Manager)
Automated failover and failback
Integration with replication solutions
DR workflows for both VMs and management components
Can extend to cloud-based DR options
Backup and DR strategies ensure business continuity across regions and failure domains.
VCF enables organizations to build private cloud platforms that deliver:
Self-service VM provisioning
Automated networking and storage allocation
Consistent governance and compliance
Backend integration with:
VMware Aria Automation
ITSM platforms (ServiceNow)
CMDB systems
This model enhances agility while maintaining enterprise control.
VCF combined with Tanzu/VKS forms a Kubernetes-based application platform:
Supports microservices architectures
Enables CI/CD pipelines and GitOps workflows
Provides multi-tenant cluster environments
Integrates with observability and DevOps tooling
It helps organizations modernize applications while leveraging existing VMware investments.
VMware solutions extend seamlessly beyond the data center:
VCF on-premises integrates with hyperscaler-hosted VMware environments:
VMware Cloud on AWS
Azure VMware Solution
Google Cloud VMware Engine
Benefits include:
Unified operations across clouds
Consistent security and networking
Simplified workload mobility
Disaster recovery and cloud bursting capabilities
SDDC Manager is the central automation and lifecycle management engine of VMware Cloud Foundation (VCF). It manages not only vSphere and vSAN, but also NSX, vRealize/Aria components, and Workload Domains. A deep understanding of its behavior is essential for designing and operating VCF.
Lifecycle operations follow a strict order. SDDC Manager enforces upgrade sequences such as:
First updating core management components
Then updating NSX
Then updating vCenter
Then updating ESXi hosts and vSAN
Finally upgrading edge components or add-on services
This ensures version compatibility across all layers. Administrators do not choose the order; SDDC Manager applies VMware-defined Bill of Materials (BOM) sequencing.
Lifecycle bundles include updates for multiple products. Each bundle comes with:
A specific version of vCenter, ESXi, NSX, and vSAN
Defined compatibility between components
Dependency rules (for example, NSX must be upgraded before ESXi hosts)
Bundles cannot be applied out of order. Attempting to skip versions or manually upgrade components breaks compliance.
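Bundle state can be inspected through SDDC Manager’s public REST API. A hedged sketch using the documented /v1/tokens and /v1/bundles endpoints follows; verify the exact paths and response fields against your VCF release, and replace the placeholder host and credentials.

    # Sketch: list lifecycle bundles known to SDDC Manager via its REST API.
    import requests

    SDDC = "https://sddc-manager.example.com"

    # Obtain an API token (assumption: these credentials are accepted by /v1/tokens).
    token = requests.post(f"{SDDC}/v1/tokens",
                          json={"username": "administrator@vsphere.local", "password": "***"},
                          verify=False).json()["accessToken"]

    headers = {"Authorization": f"Bearer {token}"}

    # List available bundles and the components they contain.
    bundles = requests.get(f"{SDDC}/v1/bundles", headers=headers, verify=False).json()
    for b in bundles.get("elements", []):
        print(b.get("version"), b.get("components"))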
SDDC Manager continuously validates that environments match the expected state:
It checks whether each component is at the correct version
It identifies hosts that have drifted from the vLCM image
It confirms consistency across clusters and Workload Domains
Drift detection prevents configuration inconsistencies that could cause failures during upgrades.
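Conceptually, drift detection reduces to comparing an inventory of running versions against the BOM’s desired state. The snippet below only illustrates that idea with made-up version strings; it is not an SDDC Manager API.

    # Conceptual sketch of drift detection: actual versions vs. the BOM's desired state.
    DESIRED_BOM = {"vcenter": "8.0.3", "nsx": "4.2.0", "esxi": "8.0U3"}   # illustrative versions

    def find_drift(inventory: dict[str, str]) -> dict[str, tuple[str, str]]:
        """Return components whose running version differs from the BOM."""
        return {name: (actual, DESIRED_BOM[name])
                for name, actual in inventory.items()
                if name in DESIRED_BOM and actual != DESIRED_BOM[name]}

    print(find_drift({"vcenter": "8.0.3", "nsx": "4.1.2", "esxi": "8.0U3"}))
    # -> {'nsx': ('4.1.2', '4.2.0')}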
SDDC Manager controls the host lifecycle:
Commissioning validates hardware, firmware, image compatibility, and network configuration
Hosts become available for inclusion in Workload Domains
Decommissioning safely removes a host by evacuating workloads, removing it from vCenter, and cleaning up NSX/vSAN configuration
These workflows ensure that no manual steps break the managed state.
SDDC Manager relies on vLCM images to maintain uniform host configurations:
Each cluster has a desired image
vLCM ensures firmware, drivers, ESXi version, and vendor addons match the image
SDDC Manager triggers image remediation as part of lifecycle workflows
This integrates hardware and software lifecycle into a single process.
The entire VCF environment is managed based on a desired state model:
Workload Domains must match their defined BOM
NSX must match its cluster-level configuration
Hosts must match the vLCM image
SDDC Manager synchronizes configuration across all management components
Consistency ensures predictable behavior and reduces operational risk.
vLCM simplifies host lifecycle by replacing traditional baselines with cluster images.
The baseline model uses multiple patch baselines that may vary per host. This can lead to inconsistencies.
The image-based model uses a single cluster image containing:
ESXi version
Vendor firmware/driver addons
Hardware support packages
All hosts must match this image for full compliance.
Cluster images define the complete desired state of a host:
ESXi base image
Optional vendor add-ons
Firmware and driver packages
Vendor-specific configuration layers
Once defined, vLCM enforces this state across all hosts.
vLCM integrates firmware updates by:
Using vendor-provided hardware support packages
Updating firmware during host remediation
Ensuring driver–firmware compatibility through vendor add-ons
This eliminates the need for separate hardware update utilities.
vLCM performs hardware validation:
NICs, storage controllers, CPUs, and disk devices
Firmware and driver versions
Compatibility with ESXi versions
These checks prevent remediation from applying incompatible images.
Remediation rules require:
All hosts must be able to enter maintenance mode
vSAN needs enough spare capacity for evacuation
NSX transport nodes must remain available
HA must maintain sufficient failover capacity
vLCM proceeds only when conditions for safe remediation are met.
NSX is the network virtualization platform for VCF and the foundation for VKS pod networking.
Tier-0 provides north–south routing.
Active/Active: uses ECMP across multiple Edge nodes, suitable for high throughput
Active/Standby: one active instance, one standby instance, typically used with stateful services
The choice impacts throughput, failover behavior, and routing design.
Tier-1 routing components include:
Distributed Router (DR): runs on ESXi hosts and handles east–west traffic
Service Router (SR): runs on Edge nodes and handles north–south services such as NAT or load balancing
A correct design ensures efficient traffic paths.
Edge nodes are essential for:
North–south routing
Load balancing
NAT and firewalling
Scale-out patterns include:
Multiple Edge nodes in a cluster
ECMP routing for throughput
Redundancy across racks or AZs
NSX Federation enables:
Multi-region networking
Centralized policy management
Cross-site failover
Global managers controlling local managers
Useful for large enterprises or regulated environments.
Traffic flows:
North–south: between external clients and workloads
East–west: between workloads inside the data center
Understanding paths is critical for troubleshooting and firewall design.
The NSX Load Balancer:
Provides VIPs for Kubernetes Ingress traffic
Routes external connections into Kubernetes clusters
Supports L4 and L7 services
Designing load balancing correctly is essential for production-grade applications.
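From a developer’s point of view, the NSX load balancer is consumed through ordinary Kubernetes objects. The sketch below creates a Service of type LoadBalancer with the kubernetes Python client; in an NSX-backed Supervisor or TKC, the external VIP is allocated by NSX. Names and ports are illustrative.

    # Sketch: expose a workload through an NSX-provisioned VIP via a LoadBalancer Service.
    from kubernetes import client, config

    config.load_kube_config()

    svc = client.V1Service(
        metadata=client.V1ObjectMeta(name="web-lb", namespace="team-a"),
        spec=client.V1ServiceSpec(
            type="LoadBalancer",                      # VIP provisioned by NSX via NCP
            selector={"app": "web"},
            ports=[client.V1ServicePort(port=443, target_port=8443)],
        ),
    )
    client.CoreV1Api().create_namespaced_service("team-a", svc)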
For VKS, NSX becomes the CNI (Container Network Interface):
NCP (NSX Container Plugin) programs pod networks
NSX creates logical segments for pod traffic
PodVMs use NSX overlay for networking
Policies apply down to pod vNICs
This results in consistent networking between VMs and containers.
vSAN provides storage for virtual machines and Kubernetes persistent volumes.
RAID1: best performance, highest capacity consumption
RAID5/6: better capacity efficiency, more overhead for writes
Selection depends on performance needs and available capacity.
Storage policies define:
FTT levels
RAID type
Stripe width
Compression and encryption
Policies influence where objects are placed and how much usable capacity exists.
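The capacity impact can be estimated with simple multipliers: RAID1 stores FTT+1 full copies, while RAID5 (FTT=1) and RAID6 (FTT=2) consume roughly 1.33x and 1.5x raw capacity respectively. A back-of-the-envelope helper (ESA’s adaptive RAID-5 layouts can differ slightly):

    # Raw capacity consumed per amount of usable data under common vSAN policies.
    RAID1_MULTIPLIER = {1: 2.0, 2: 3.0, 3: 4.0}               # FTT -> number of full copies
    ERASURE_MULTIPLIER = {("RAID5", 1): 4 / 3, ("RAID6", 2): 1.5}

    def raw_needed(usable_gib: float, raid: str, ftt: int) -> float:
        """Raw capacity (GiB) consumed to store `usable_gib` with the given policy."""
        if raid == "RAID1":
            return usable_gib * RAID1_MULTIPLIER[ftt]
        return usable_gib * ERASURE_MULTIPLIER[(raid, ftt)]

    print(raw_needed(1000, "RAID1", 1))   # 2000.0 GiB
    print(raw_needed(1000, "RAID5", 1))   # ~1333.3 GiB
    print(raw_needed(1000, "RAID6", 2))   # 1500.0 GiB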
Fault domains ensure that object replicas do not reside in the same rack or chassis. This protects against rack-level failures.
ESA (Express Storage Architecture): optimized for NVMe, simplified architecture, higher performance
OSA (Original Storage Architecture): uses discrete cache and capacity tiers
Operations such as rebuilds, resyncs, and compression behavior differ significantly between the two.
vSAN integrates with:
Snapshots
vSphere Replication
Third-party backup solutions
These tools protect VM and PV data.
Kubernetes PVs map to vSphere storage policies:
Higher FTT improves resilience but consumes more resources
Stripe width affects IO patterns
Encryption affects CPU overhead
StatefulSets rely on consistent storage policy behavior during scaling and recovery.
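The mapping from a PV request to a vSAN policy is expressed through a StorageClass. The sketch below uses the vSphere CSI provisioner name csi.vsphere.vmware.com and its storagepolicyname parameter; the policy and class names are illustrative, and in a Supervisor the StorageClasses are normally created automatically when a policy is assigned to the namespace.

    # Sketch: a StorageClass bound to a vSAN SPBM policy, plus a PVC that consumes it.
    from kubernetes import client, config

    config.load_kube_config()

    sc = {
        "apiVersion": "storage.k8s.io/v1",
        "kind": "StorageClass",
        "metadata": {"name": "vsan-raid5"},
        "provisioner": "csi.vsphere.vmware.com",
        "parameters": {"storagepolicyname": "RAID5-FTT1"},   # name of the vSAN storage policy
    }
    client.StorageV1Api().create_storage_class(sc)

    pvc = {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": "db-data", "namespace": "team-a"},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": "vsan-raid5",
            "resources": {"requests": {"storage": "50Gi"}},
        },
    }
    client.CoreV1Api().create_namespaced_persistent_volume_claim("team-a", pvc)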
The Supervisor is the Kubernetes control plane integrated directly into vSphere.
Spherelet is VMware’s adaptation of kubelet:
Runs on ESXi hosts
Manages PodVM lifecycle
Integrates with vSphere resource scheduler
This ensures containers behave like first-class workloads.
PodVMs:
Are lightweight VMs that represent pods
Obtain scheduling from Supervisor
Use vSphere DRS for placement decisions
Receive networking and storage from NSX and vSAN
This hybrid model combines VM isolation with Kubernetes agility.
Each Namespace maps to a Resource Pool:
Controls CPU, memory, and resource boundaries
Enforces quotas and isolation
Provides an operational boundary for workloads
This links Kubernetes and vSphere resource management.
Networking includes:
Pod networks
Node networks
Service networks
Load balancers
NSX ensures consistent routing across Supervisor and TKCs.
Node networking deals with VM or PodVM connectivity
Pod networking handles intra-cluster communication
Separation allows applying different security and routing rules
Storage provisioning involves:
CNS creating First-Class Disks
PVs mapped to vSAN objects
TKCs inheriting StorageClasses from Supervisor
This ensures data persistence across restarts and migrations.
Identity governs how users interact with vSphere and Kubernetes.
Identity Federation allows vSphere to authenticate users via an external identity provider using modern authentication protocols.
Kubernetes clusters authenticate users via OIDC:
Tokens issued by the IDP
RBAC controls permissions
Used for Supervisor and TKC access
Namespaces:
Inherit some permissions from vSphere
Apply Kubernetes RBAC for workload deployment
Define clear access boundaries between teams
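A typical team boundary combines an IDP group with namespace-scoped RBAC. A minimal sketch, assuming a group named team-a-developers issued by the external IDP and the built-in edit ClusterRole:

    # Sketch: grant an IDP group edit rights inside its own namespace.
    from kubernetes import client, config

    config.load_kube_config()

    binding = {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {"name": "team-a-developers-edit", "namespace": "team-a"},
        "roleRef": {"apiGroup": "rbac.authorization.k8s.io",
                    "kind": "ClusterRole", "name": "edit"},
        "subjects": [{"apiGroup": "rbac.authorization.k8s.io",
                      "kind": "Group", "name": "team-a-developers"}],
    }
    client.RbacAuthorizationV1Api().create_namespaced_role_binding("team-a", binding)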
NSX can enforce rules based on:
User identity
Group membership
VM or pod identity
This creates granular access control.
Teams can be isolated using:
Namespaces
Separate TKCs
Dedicated Workload Domains
This ensures security and limits blast radius.
Kubernetes workloads require both metadata and data protection.
Backups include:
vCenter and Supervisor VMs
NSX configuration
Supervisor cluster state
Supervisor is tightly integrated into vSphere, so platform-level backups are critical.
Guest clusters need etcd backups to protect cluster state:
etcd is the authoritative store for API objects
Regular backups prevent catastrophic loss during upgrade or failure
Velero can:
Back up Kubernetes API objects
Back up PVs (via snapshot or CSI integration)
Restore workloads across clusters
Velero enables flexible operational recovery.
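Backups can be requested either with the velero CLI or by creating a Backup custom resource. The sketch below does the latter via the kubernetes Python client; the velero.io/v1 fields shown follow Velero’s documented schema, but verify them against your installed version, and the namespace and TTL values are illustrative.

    # Sketch: request a Velero backup of one namespace, including volume snapshots.
    from kubernetes import client, config

    config.load_kube_config()

    backup = {
        "apiVersion": "velero.io/v1",
        "kind": "Backup",
        "metadata": {"name": "team-a-daily", "namespace": "velero"},
        "spec": {
            "includedNamespaces": ["team-a"],
            "snapshotVolumes": True,
            "ttl": "168h0m0s",      # retain for one week
        },
    }
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="velero.io", version="v1", namespace="velero",
        plural="backups", body=backup,
    )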
Policies determine:
Replication behavior
Restore speed
Availability of PVs after failover
Applications with strict RTO/RPO require more resilient storage policies.
Constraints include:
StorageClass mismatch between clusters
Network differences in target clusters
IP addressing changes
Load balancer reconfiguration
These issues must be addressed for successful recovery.
What is the primary role of Tanzu Mission Control in a VMware Kubernetes ecosystem?
Tanzu Mission Control provides centralized lifecycle management and policy governance for Kubernetes clusters across multiple environments.
Tanzu Mission Control allows organizations to manage Kubernetes clusters deployed on vSphere, public clouds, or other infrastructures from a single control plane. Administrators can apply consistent policies, manage access control, enforce security standards, and perform cluster lifecycle operations such as upgrades and backups. It integrates with Tanzu Kubernetes clusters running on vSphere Kubernetes Service but does not replace the Supervisor Cluster. Instead, it provides higher-level management across multiple clusters and environments. A common misunderstanding is thinking it deploys clusters directly; its main role is governance, visibility, and lifecycle management at scale.
Demand Score: 80
Exam Relevance Score: 87
What is the difference between Tanzu Kubernetes Grid (TKG) and vSphere Kubernetes Service?
Tanzu Kubernetes Grid provides a standardized Kubernetes distribution across multiple environments, while vSphere Kubernetes Service embeds Kubernetes directly into the vSphere platform.
TKG is designed to deploy Kubernetes clusters on multiple infrastructures including vSphere, public cloud platforms, and edge environments. It focuses on portability and consistent Kubernetes deployment across environments. vSphere Kubernetes Service integrates Kubernetes directly with vSphere infrastructure and exposes Kubernetes APIs through vCenter. This integration allows administrators to manage Kubernetes resources alongside virtual machines using familiar tools. In exam scenarios, the key distinction is that VKS is tightly integrated with vSphere infrastructure, whereas TKG is a portable Kubernetes distribution for multi-cloud environments.
Demand Score: 77
Exam Relevance Score: 86
Which VMware component provides centralized cluster policy management?
Tanzu Mission Control.
Tanzu Mission Control allows administrators to apply consistent policies such as access control, security policies, quotas, and backup rules across multiple Kubernetes clusters. It integrates with clusters deployed on vSphere Kubernetes Service or other infrastructures and provides a unified dashboard for monitoring and governance. This central management simplifies compliance and operational management in large environments with many clusters. In exam contexts, questions often test the distinction between infrastructure management tools like vCenter and governance platforms like Tanzu Mission Control.
Demand Score: 72
Exam Relevance Score: 83
What VMware solution enables Kubernetes workloads to run natively on vSphere infrastructure?
vSphere Kubernetes Service (formerly vSphere with Tanzu).
vSphere Kubernetes Service integrates Kubernetes directly into the vSphere platform, allowing administrators to deploy and manage Kubernetes clusters alongside traditional virtual machines. The platform introduces the Supervisor Cluster, which provides the Kubernetes control plane integrated with ESXi and vCenter. Administrators can create namespaces, apply storage policies, and deploy Tanzu Kubernetes Clusters for application workloads. This integration simplifies operations by unifying VM and container management under the same infrastructure platform. A key exam concept is that Kubernetes is embedded into vSphere rather than running as an external platform.
Demand Score: 70
Exam Relevance Score: 88