Installation covers the initial setup of ESXi and vCenter Server.
Designers are not expected to memorize step-by-step procedures, but they must understand:
Different installation options
Their pros/cons
Which method fits which environment
ESXi is the hypervisor running on every server. Before you have a working cluster, you must install ESXi on each host.
You can install ESXi using several approaches:
You boot the server from an ISO image (usually mounted through a remote console such as iDRAC or iLO, or from physical DVD/USB media).
Most common method for small or medium environments.
Manual but simple.
The server boots from the network using PXE (Preboot Execution Environment).
Good for environments with standardized boot infrastructure.
Use answer files (kickstart scripts) to automate the installation.
Great for consistent builds at scale.
ESXi runs stateless (no local installation).
Hosts boot ESXi entirely from the network.
Best for very large environments, where you want full automation.
Preferred by enterprises with hundreds or thousands of hosts.
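The scripted-install option above relies on a kickstart answer file. A minimal sketch (the password, IP addresses, and hostname are placeholders you would replace for your site):

```
# Minimal ESXi kickstart answer file (illustrative values only)
vmaccepteula
install --firstdisk --overwritevmfs
rootpw MySecurePass123!
network --bootproto=static --ip=192.168.10.11 --netmask=255.255.255.0 --gateway=192.168.10.1 --nameserver=192.168.10.2 --hostname=esxi01.lab.local
reboot
```

Because the same file drives every install, builds stay consistent at scale.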
ESXi can boot from several types of storage:
Local disk
Traditional and reliable
Often mirrored (RAID1) SSD/SAS drives
Host boots ESXi from SAN LUN
Good for diskless servers
Requires careful network/storage configuration
SD card / USB media
Used heavily in the past, but no longer recommended in vSphere 7+ because:
Flash media cannot sustain the write load (wear-out)
Frequent device failures
VMware strongly recommends alternative boot media now.
Stateless (Auto Deploy): no boot disk at all
ESXi image streamed at boot
Configuration stored in Host Profiles
Extremely fast to scale large environments
After ESXi is installed, you must configure the management network:
It must have a static IP address
It is used for connecting ESXi to vCenter
Critical for stability:
ESXi must have:
Correct DNS A & PTR records
Accurate time (NTP)
Wrong DNS or time settings lead to:
HA failures
vCenter connection issues
Authentication problems
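The A/PTR requirement above can be verified with a short script. This is a sketch with injectable lookup functions so the logic is testable offline; in a live environment you would pass real resolver calls (e.g., `socket.gethostbyname` for the A record), and the hostnames/IPs shown are hypothetical:

```python
# Sketch: verify that forward (A) and reverse (PTR) DNS records agree for an
# ESXi host; mismatched records cause HA and vCenter connection problems.

def dns_records_consistent(fqdn, ip, forward_lookup, reverse_lookup):
    """Return True when fqdn resolves to ip and ip resolves back to fqdn."""
    resolved_ip = forward_lookup(fqdn)    # A record lookup
    resolved_name = reverse_lookup(ip)    # PTR record lookup
    return resolved_ip == ip and resolved_name.lower() == fqdn.lower()

# Fake records standing in for a lab DNS zone (illustrative names):
records_a = {"esxi01.lab.local": "192.168.10.11"}
records_ptr = {"192.168.10.11": "esxi01.lab.local"}

ok = dns_records_consistent(
    "esxi01.lab.local", "192.168.10.11",
    forward_lookup=records_a.__getitem__,
    reverse_lookup=records_ptr.__getitem__,
)
print(ok)  # True when A and PTR agree
```

Running such a check for every host before joining it to vCenter catches the most common cause of join and HA failures.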
You add the ESXi host to vCenter
Once joined, you manage almost everything from vCenter
Clustering, vMotion, HA, DRS become available
vCenter Server is the central management server for your VMware environment.
In older versions, PSC could be external
Today the embedded PSC is the standard, meaning everything (SSO + vCenter services) runs in one appliance
Design implication:
Simple
Easier failover and lifecycle management
No more multi-PSC replication topologies needed
VCSA comes in sizes:
Tiny
Small
Medium
Large
X-Large
Sizing is based on:
Number of hosts
Number of VMs
Expected load
As a rule:
Tiny → labs
Small/Medium → common production clusters
Large/X-Large → enterprise-scale
Designers must ensure that vCenter sizing matches current + future growth.
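The sizing rule can be sketched as a lookup. The host/VM thresholds below follow the commonly published VCSA deployment-size limits, but treat them as assumptions and confirm against the documentation for your vSphere version:

```python
# Illustrative VCSA sizing helper: (name, max hosts, max powered-on VMs).
# Thresholds are assumed limits; verify for your specific release.
SIZES = [
    ("Tiny",     10,   100),
    ("Small",   100,  1000),
    ("Medium",  400,  4000),
    ("Large",  1000, 10000),
    ("X-Large", 2500, 45000),
]

def pick_vcsa_size(hosts, vms):
    """Return the smallest deployment size that fits the inventory."""
    for name, max_hosts, max_vms in SIZES:
        if hosts <= max_hosts and vms <= max_vms:
            return name
    raise ValueError("Inventory exceeds X-Large limits; consider multiple vCenters")

print(pick_vcsa_size(5, 50))      # lab -> Tiny
print(pick_vcsa_size(250, 3000))  # mid-size production -> Medium
```

Sizing against projected growth (not just current counts) avoids a disruptive resize later.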
The SSO domain (example: vsphere.local)
Houses identity information, permissions, and tokens
Important in multi-site or multi-vCenter designs
Helps with topology awareness
vCenter uses certificates to secure connections.
Options:
Internal VMware CA
External enterprise CA (for compliance)
Design considerations:
If company has strict security policy, external CA may be required
Certificate replacement must be tracked as part of operations
Configuration refers to setting up ESXi hosts, clusters, storage, and networking so that they form a working vSphere environment.
Each ESXi host needs consistent configuration. This is key for:
HA
DRS
vMotion
vSAN
iSCSI/NFS
Security
You must configure virtual networking:
vSS (Standard Switch) on each host
vDS (Distributed Switch), centrally managed by vCenter
Design implication:
For enterprise → use vDS for consistency and advanced features
For small sites → vSS may be acceptable
VMkernel interfaces (vmk) are special ports used for host-level services:
Management
vMotion
vSAN
iSCSI/NFS
Fault Tolerance logging
Storage Replication traffic
Each function often needs:
Correct VLAN
Correct NIC teaming
Proper MTU configuration
Correct routing
Example design hint:
vMotion and vSAN often use Jumbo Frames (MTU 9000)
Management stays at MTU 1500
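The MTU hint above can be expressed as a design-rule check. A minimal sketch; the vmk names and the service-to-MTU expectations are illustrative design choices from these notes, not values mandated by vSphere:

```python
# Design-rule check: jumbo frames (9000) for vMotion and vSAN,
# standard 1500 for Management. Rules are illustrative.
EXPECTED_MTU = {"Management": 1500, "vMotion": 9000, "vSAN": 9000}

def mtu_violations(vmk_plan):
    """Return a list of (vmk, service, actual, expected) mismatches."""
    issues = []
    for vmk, (service, mtu) in vmk_plan.items():
        expected = EXPECTED_MTU.get(service)
        if expected is not None and mtu != expected:
            issues.append((vmk, service, mtu, expected))
    return issues

plan = {
    "vmk0": ("Management", 1500),
    "vmk1": ("vMotion", 1500),   # wrong: design calls for jumbo frames
    "vmk2": ("vSAN", 9000),
}
print(mtu_violations(plan))  # [('vmk1', 'vMotion', 1500, 9000)]
```

Remember that jumbo frames only work when the MTU matches end to end (vmk, virtual switch, and physical switch ports).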
You configure storage connectivity:
Connect to SAN (FC, iSCSI)
Connect to NAS (NFS)
Discover LUNs
Create & format VMFS datastores
Mount NFS shares
Configure vSAN if applicable
Design implication:
Storage type affects:
HA
Performance
Fault domains
DR strategy
Capacity planning
Clusters enable advanced features like HA and DRS.
Design considerations:
Admission control policy
Heartbeat datastores
Isolation response
VM restart priority
Failure tolerance (N+1, N+2)
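The failure-tolerance choice maps directly to reserved capacity under the percentage-based admission control model: tolerating N failures in a cluster of H hosts means holding back roughly N/H of total resources. A simplified sketch that assumes identical hosts:

```python
# HA admission control, cluster-resource-percentage model (identical hosts
# assumed). N+1 in a 4-host cluster reserves 25% of CPU and memory.

def ha_reserved_percentage(hosts, failures_to_tolerate):
    if failures_to_tolerate >= hosts:
        raise ValueError("Cannot tolerate failure of every host")
    return round(100 * failures_to_tolerate / hosts, 1)

print(ha_reserved_percentage(4, 1))   # N+1 on 4 hosts -> 25.0
print(ha_reserved_percentage(8, 2))   # N+2 on 8 hosts -> 25.0
print(ha_reserved_percentage(10, 1))  # N+1 on 10 hosts -> 10.0
```

Larger clusters pay a smaller relative overhead for the same failure tolerance, which is itself a design lever.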
DRS requires:
vMotion network
Shared storage
Compatible CPUs (for cross-host migration)
Designers must choose:
Automation level
Migration threshold
Rules (affinity/anti-affinity)
Configuration includes:
Disk groups
Caching/capacity disks
Storage policies
Fault domains
Design implications:
Network design → vSAN all-flash requires 10 GbE (hybrid supports 1 GbE, but 10 GbE is recommended)
Storage planning → RAID 1 vs RAID 5/6
Host count → minimum 3 hosts (2 hosts + witness for ROBO)
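The RAID 1 vs RAID 5/6 decision above is largely a capacity-overhead trade-off. An illustrative sketch of the usable-capacity math (slack space and metadata overheads are ignored): RAID 1 with FTT=1 keeps two full copies, RAID 5 stripes 3 data + 1 parity, RAID 6 stripes 4 data + 2 parity:

```python
# Approximate vSAN usable capacity per storage policy.
# RAID1 FTT=1 -> 2x overhead; RAID5 -> 4/3x; RAID6 -> 1.5x.
OVERHEAD = {"RAID1": 2.0, "RAID5": 4 / 3, "RAID6": 1.5}

def usable_tb(raw_tb, policy):
    return round(raw_tb / OVERHEAD[policy], 2)

raw = 100  # TB raw across the cluster
for policy in ("RAID1", "RAID5", "RAID6"):
    print(policy, usable_tb(raw, policy))
# RAID1 50.0, RAID5 75.0, RAID6 66.67
```

RAID 5/6 buy capacity at the cost of higher host-count minimums and extra write amplification, which is why the network and host-count implications travel together.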
Host profiles enforce consistency, such as:
Network settings
Storage configuration
Security settings
NTP
Firewall settings
Used heavily in:
Large environments
Auto Deploy environments
Environments with strict compliance
In design scenarios:
If the environment requires consistent host configuration, Host Profiles are recommended.
Administration is what happens daily in a vSphere environment.
Architects must understand it because operational requirements influence design choices.
Deploy from template
Fast, consistent
Golden images reduce errors
Clone VMs
Snapshots
Temporary checkpoints
Useful for patching/testing
Must not be used long-term
Can grow large → performance risk
Delete/retire VMs
Free up resources
Clean up datastore consumption
Design impact: VM sprawl and stale snapshots consume capacity, so retention and cleanup policies matter.
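The snapshot risk above is usually policed with a retention check. A minimal sketch; the VM names, timestamps, and 72-hour limit are illustrative:

```python
# Flag snapshots that have outlived a retention window, since snapshots are
# temporary checkpoints that grow into a performance risk if kept.
from datetime import datetime, timedelta

def stale_snapshots(snapshots, now, max_age=timedelta(hours=72)):
    """snapshots: iterable of (vm_name, snapshot_name, created) tuples."""
    return [(vm, snap) for vm, snap, created in snapshots
            if now - created > max_age]

now = datetime(2024, 6, 10, 12, 0)
inventory = [
    ("app01", "pre-patch",  datetime(2024, 6, 1, 9, 0)),  # ~9 days old
    ("db01",  "pre-change", datetime(2024, 6, 9, 8, 0)),  # ~1 day old
]
print(stale_snapshots(inventory, now))  # [('app01', 'pre-patch')]
```

In practice the inventory would come from the vSphere API, but the policy logic is this simple.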
Enter maintenance mode
DRS migrates VMs away
Host can be patched safely
Patching/upgrades via vSphere Lifecycle Manager (vLCM)
Centralized
Ensures consistency
Supports firmware integration via vendor hardware support managers
Design implication:
Clusters must have enough free capacity to allow hosts to enter maintenance mode.
Usually factored with N+1 or N+2 policy.
Custom roles are commonly created to meet security requirements
Give minimal privileges needed for the user’s job
Examples:
Backup Operator
VM Operator
Network Admin
Storage Admin
Scopes include:
vCenter root
Datacenter
Cluster
Folder
VM
Design implication:
The more restrictive and specific the scope, the safer the access model
Large organizations rely heavily on folder/role assignments
Built-in alarms monitor:
CPU usage
Memory usage
Disk latency
Host connectivity
HA/DRS events
vSAN health
You can also create custom alarms.
Examples:
vRealize / Aria Operations
Capacity forecasting
Workload balancing
Anomaly detection
Heat maps
Third-party systems
SNMP
Syslog
Security tools
Design implication:
Monitoring tools influence where you store logs, how you size vCenter, and how you manage compliance.
Performance metrics
CPU Ready
Memory Ballooning
Disk latency
Network saturation
Capacity trends
CPU/Memory growth
Storage utilization
Datastore fragmentation
Configuration drift
Host or cluster settings that deviate from standards
Very important for compliance
Tools like Aria Operations help detect drift automatically.
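Of the metrics above, CPU Ready is the one most often misread, because vCenter charts report it in milliseconds summed over the sampling interval. The standard conversion is ready % = ready_ms / (interval_seconds × 1000) × 100, optionally averaged per vCPU:

```python
# Convert CPU Ready summation (ms) from vCenter charts into a percentage.
def cpu_ready_percent(ready_ms, interval_seconds, vcpus=1):
    return round(ready_ms / (interval_seconds * 1000 * vcpus) * 100, 2)

# Real-time charts use 20-second samples: 1000 ms of ready time -> 5%,
# a commonly cited warning threshold.
print(cpu_ready_percent(1000, 20))           # 5.0
# Same reading averaged per vCPU on a 4-vCPU VM:
print(cpu_ready_percent(1000, 20, vcpus=4))  # 1.25
```

The 5% threshold is a rule of thumb, not a hard limit; sustained values above it usually mean CPU contention worth investigating.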
VCHA protects the vCenter Server service by deploying three coordinated nodes:
Active Node
Runs all vCenter services and handles client connections.
Passive Node
Maintains a continuously synchronized copy of the Active node using synchronous replication and is ready to take over during failures.
Witness Node
Acts as a quorum member to prevent split-brain conditions and ensures failover decisions are valid.
Uses synchronous state replication between Active and Passive nodes, ensuring no data loss.
Failover occurs automatically and typically completes in a few minutes.
Protects only the vCenter service; it does not protect ESXi hosts, virtual machines, or cluster services such as HA or DRS.
Requires separate, dedicated networks:
Management network
VCHA replication network
Not intended for stretched clusters or environments with high network latency; synchronous replication requires low latency.
Active and Passive nodes should not share the same datastore to avoid a single point of failure.
VCHA is not a backup replacement; normal vCenter backups remain mandatory.
Enhanced Linked Mode enables multiple vCenter Servers to share:
A single SSO domain
Global permissions
A unified global inventory view across vCenters
This allows administrators to navigate and manage multiple vCenters seamlessly.
All vCenter Servers must join the same SSO domain during deployment.
Existing, separate SSO domains cannot be merged later.
Versions of all vCenters participating in ELM must be compatible.
Proper naming of the SSO domain (for example, vsphere.local) is important for long-term scalability.
Multi-site deployments that require consistent administration.
Large environments with multiple vCenters and distributed operational teams.
Designs that require centralized view and unified RBAC across diverse infrastructure.
vLCM manages ESXi lifecycle using a cluster-level image that defines:
ESXi version
Vendor add-ons or OEM customizations
Firmware and driver versions (when supported by vendor hardware integration)
All hosts in a cluster are maintained according to this defined image.
Ensures uniform configuration across all hosts in a cluster.
Minimizes configuration drift and unplanned discrepancies.
Simplifies combined firmware, driver, and ESXi updates through a single workflow.
Baseline Mode
Patch-driven, flexible, but prone to inconsistencies and configuration drift.
Image Mode (Recommended)
Enforces a fixed, validated cluster image, providing predictability, consistency, and simplified lifecycle management.
Host Profiles are powerful but have limitations. They do not handle well:
Local user accounts, which are host-specific.
Storage devices with unpredictable identifiers (such as NAA IDs that may differ across hosts).
Network interface mappings that rely on specific physical NIC order or port positions.
Host Profiles are ideal for:
Auto Deploy environments, where hosts are stateless and must be configured at boot.
Large enterprise environments that require consistent host configuration.
Compliance-driven environments where configuration drift must be minimized.
Define a stable reference host with known-good configuration.
Validate compliance regularly to find configuration drift.
Avoid embedding host-specific identifiers such as MAC addresses or device IDs in profiles.
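The compliance-validation idea above boils down to diffing each host against a reference. A minimal sketch of drift detection; the setting names are illustrative stand-ins for what Host Profiles or Aria Operations track:

```python
# Detect configuration drift by comparing a host's settings against a
# known-good reference host.
def drift(reference, host_config):
    """Return {setting: (expected, actual)} for values that deviate."""
    return {key: (want, host_config.get(key))
            for key, want in reference.items()
            if host_config.get(key) != want}

reference = {"ntp": "pool.ntp.org",
             "syslog": "tcp://loginsight:514",
             "firewall.sshServer": False}
host = {"ntp": "pool.ntp.org",
        "syslog": "udp://old-syslog:514",
        "firewall.sshServer": True}
print(drift(reference, host))
# {'syslog': ('tcp://loginsight:514', 'udp://old-syslog:514'),
#  'firewall.sshServer': (False, True)}
```

Note the reference deliberately contains no host-specific identifiers, mirroring the best practice above.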
The vCenter Server Appliance (VCSA) supports file-based backup through:
FTP or FTPS
HTTP or HTTPS
SCP
A backup includes:
vCenter configuration data
Inventory
Certificates
SSO domain configuration
Virtual machine data is not part of this backup and must be handled by separate backup solutions.
Restoring vCenter requires deploying a new appliance and applying the backup during the restore workflow.
In VCHA environments, only the Active node should be backed up and restored.
Restore does not recover virtual machine data; only the management plane is restored.
Backup frequency must match RPO requirements.
Backup repositories must be durable, protected, and ideally offsite.
Certificate retention and management must align with the restore process to avoid authentication issues.
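The RPO requirement above reduces to simple arithmetic: with file-based backups, the worst case loses up to one full backup interval of management-plane changes. A sketch with illustrative values:

```python
# A backup schedule satisfies an RPO only if the interval between backups
# does not exceed the RPO (worst-case loss = one interval).
def meets_rpo(backup_interval_hours, rpo_hours):
    return backup_interval_hours <= rpo_hours

print(meets_rpo(24, 24))  # daily backups vs 24h RPO -> True
print(meets_rpo(24, 4))   # daily backups vs 4h RPO  -> False
```

If the required RPO is tighter than a practical backup interval, the design needs a different protection approach, not just a faster schedule.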
ESXi hosts must use UEFI firmware to support Secure Boot.
ESXi boot components and installed VIBs must be digitally signed.
Updates must maintain signature integrity or Secure Boot will block the component from loading.
TPM 2.0 devices store cryptographic measurements of host boot components.
Enables Host Attestation in vCenter, confirming integrity at boot.
Enhances compliance and provides tamper-resistant verification.
VCSA supports Secure Boot when underlying hardware and hypervisor support it.
Secure Boot may affect patching, as unsigned extensions or drivers will not load.
By default ESXi writes logs under /var/log; on hosts without a persistent scratch location this resides in RAM, so logs are lost upon reboot unless redirected.
Persistent logging requires configuration to forward logs or store them on shared storage.
ESXi hosts should forward logs to centralized log collectors such as:
Aria Log Insight
Standard syslog servers using UDP, TCP, or TLS
Centralized logging enhances troubleshooting and security analysis.
Stateless hosts lose all local logs on reboot; therefore, remote syslog is mandatory.
Host Profiles should be used to apply consistent syslog settings across all hosts.
vCenter logs are stored within VCSA and include multiple components such as vpxd, vmdird, and appliance management logs.
Support bundles can be exported for advanced troubleshooting and vendor support.
What is a common cause of VCF bring-up failure?
Incorrect network configuration or DNS resolution issues.
VCF bring-up relies heavily on proper DNS, NTP, and network settings. Misconfigured DNS entries or unreachable services often cause failures early in deployment. Verifying prerequisites is critical.
Demand Score: 90
Exam Relevance Score: 90
Why might Aria Automation deployment fail?
Failures often occur due to certificate, network, or integration misconfigurations.
Aria Automation requires proper certificate trust and connectivity to endpoints like vCenter and NSX. Misaligned configurations cause deployment or integration errors.
Demand Score: 88
Exam Relevance Score: 88
How should NSX be configured for automation readiness?
Ensure API access, proper transport zones, and integration endpoints are configured.
Automation relies on NSX APIs to create networking components. Missing configurations prevent successful provisioning.
Demand Score: 85
Exam Relevance Score: 87
What is a key administrative task in VCF automation environments?
Maintaining lifecycle updates through SDDC Manager.
SDDC Manager ensures consistent patching and upgrades across the stack. Skipping lifecycle management leads to drift and instability.
Demand Score: 82
Exam Relevance Score: 85
Why is certificate management critical in VCF automation?
Certificates ensure secure communication between components and APIs.
Expired or untrusted certificates break integrations and automation workflows. Proper certificate lifecycle management is essential.
Demand Score: 84
Exam Relevance Score: 86
What causes API authentication failures in VCF?
Incorrect credentials, expired tokens, or misconfigured identity sources.
Automation tools rely on API authentication. Misconfigurations in identity providers or credentials lead to failures. Regular validation is necessary.
Demand Score: 83
Exam Relevance Score: 85