This 4-week plan builds from storage fundamentals (HCI vs traditional, protocol use cases) into VCF storage choices (vSAN ESA/OSA, principal vs supplemental, Supervisor storage), then moves through design+sizing, deployment/configuration of vSAN and supported (non-vSAN) storage, and finishes with monitoring/troubleshooting drills and exam-style mixed scenarios across all domains.
Daily target: 4 pomodoros (about 100 minutes of focused study).
1 pomodoro = 25 minutes focus + 5 minutes break (keep breaks strict).
Pomodoro 1–2: Learn (read + annotate) + build one compact artifact (diagram/checklist/matrix).
Pomodoro 3: Apply (mini-scenario, validation steps, or “teach-back” summary out loud).
Pomodoro 4: Spaced review (yesterday’s notes/flashcards) + quick self-quiz (5–10 prompts).
You will establish a clean mental model of storage architectures and protocols, then map that model onto VCF storage choices (vSAN ESA/OSA, solution components, principal vs supplemental) and basic Supervisor storage translation, producing a small set of decision artifacts you'll reuse in Weeks 2–4.
IT Architectures, Technologies, Standards — Differentiate between types of Storage Architecture (HCI vs Traditional)
IT Architectures, Technologies, Standards — Identify the use case for different storage architectures
Build a “blast radius” mental model: what fails first, and how far the impact spreads (host vs fabric vs array)
Review your glossary notes for HCI, datastore, NFS/iSCSI/FC, multipathing, and failure domain; add missing definitions in your own words.
Deliverable: 15 flashcards (Q/A) in a notes app; Verification: you can answer each card in ≤10 seconds without looking.
Write a one-page “when to choose which” card with 5 signals for HCI and 5 for traditional, using requirement language (growth, ops ownership, failure isolation).
Deliverable: 1-page decision card; Verification: you can map 3 sample requirements to a choice with a one-sentence justification each.
Draw a simple diagram showing where failures occur for HCI (device/host/cluster) vs traditional (path/fabric/controller/array) and what symptom each causes.
Deliverable: a diagram (photo or digital); Verification: for each failure point, you can state the most likely first observable symptom.
Write 8 short prompts (e.g., “only some hosts see datastore—what layer first?”) and answer them from memory, then fix mistakes by updating your decision card.
Deliverable: quiz prompts + corrected notes; Verification: re-answer missed prompts correctly without referencing the source text.
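The failure-domain sketch from this day can be captured as a small lookup you quiz yourself against. This is a study-aid sketch: the symptom wording is my illustrative assumption, not an authoritative reference.

```python
# Illustrative mapping of failure points to a likely first observable symptom,
# mirroring the HCI vs traditional failure-domain diagram. Symptom wording is
# a study aid, not official documentation.
FIRST_SYMPTOM = {
    "hci": {
        "device":  "health warning on one host; affected objects begin rebuilding",
        "host":    "HA restarts; reduced redundancy and resync activity begins",
        "cluster": "datastore-wide unavailability or capacity pressure",
    },
    "traditional": {
        "path":       "path-down events; I/O continues on surviving paths",
        "fabric":     "many hosts lose paths at once; latency spikes broadly",
        "controller": "brief failover pause, then degraded performance on one array",
        "array":      "all hosts lose the datastore simultaneously",
    },
}

def first_symptom(architecture: str, failure_point: str) -> str:
    """Return the most likely first observable symptom for a failure point."""
    return FIRST_SYMPTOM[architecture][failure_point]
```

Used as a self-quiz: name a failure point, answer from memory, then check the lookup.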
IT Architectures, Technologies, Standards — Differentiate between the use cases for supported storage types
External storage “trust gates”: exports vs CHAP vs zoning/masking (symptom signatures)
Quick verification order for “partial visibility” incidents
Re-read your decision card and failure-domain sketch and refine any unclear wording; aim for “exam stem language” phrasing.
Deliverable: revised decision card + sketch; Verification: you can explain both in 90 seconds without pausing.
Create a 4-column matrix (NFS, iSCSI, FC, NVMe-oF) with rows: access control primitive, ESXi-side constructs, common failure signature, first verification check.
Deliverable: protocol matrix; Verification: given a symptom, you can point to the matching column+row within 5 seconds.
Write a 5-step checklist: visibility across hosts → access controls → host drift → multipathing → backend saturation, with one concrete example per step.
Deliverable: triage checklist; Verification: run through a fictional scenario and confirm each step produces a clear yes/no outcome.
Write 3 mini-stems (2–3 sentences each) that imply NFS/iSCSI/FC issues, and answer: “first check” + “most likely misconfiguration.”
Deliverable: 3 stems + answers; Verification: each answer explicitly references a row from your protocol matrix.
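The 4-column protocol matrix from this day can be kept as a filled-in sketch like the one below. Entries reflect common defaults (NFS export lists, iSCSI CHAP/IQN ACLs, FC zoning and LUN masking, NQN-based access for NVMe-oF); treat the wording as study notes to verify, not authoritative values.

```python
# A filled-in sketch of the protocol matrix: access control primitive,
# ESXi-side constructs, common failure signature, first verification check.
# Entries are study notes reflecting common defaults, not official docs.
PROTOCOL_MATRIX = {
    "NFS": {
        "access_control": "server-side export list (host IPs/subnets)",
        "esxi_constructs": "NFS datastore mount (server + export path)",
        "failure_signature": "mount fails or is read-only on specific hosts",
        "first_check": "is this host's VMkernel IP in the export list?",
    },
    "iSCSI": {
        "access_control": "CHAP secrets and initiator IQN ACLs",
        "esxi_constructs": "iSCSI adapter, discovered targets, sessions",
        "failure_signature": "discovery works but login/session fails",
        "first_check": "does the initiator IQN/CHAP match the target config?",
    },
    "FC": {
        "access_control": "fabric zoning plus array-side LUN masking",
        "esxi_constructs": "HBA, paths, claimed LUNs",
        "failure_signature": "some hosts see zero paths to the LUN",
        "first_check": "is the host's WWN in the zone and masking view?",
    },
    "NVMe-oF": {
        "access_control": "host NQN registration / namespace masking",
        "esxi_constructs": "NVMe adapter, controllers, namespaces",
        "failure_signature": "controller connects but namespace is not visible",
        "first_check": "is the host NQN granted access to the namespace?",
    },
}

def first_check(protocol: str) -> str:
    """Return the first verification check for a given protocol."""
    return PROTOCOL_MATRIX[protocol]["first_check"]
```

Given a symptom, the drill is to name the column and row from memory, then confirm against this sheet.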
VMware Cloud Foundation (VCF) Products and Solutions — Differentiate between vSAN OSA and vSAN ESA
VMware Cloud Foundation (VCF) Products and Solutions — Identify the components of a vSAN Architecture/Solution
Component-to-symptom mapping (health, compliance, resync, latency)
Quiz yourself on each protocol’s access control and first check; fix any ambiguity in your matrix wording.
Deliverable: updated protocol matrix; Verification: 12/12 quick prompts answered correctly from memory.
Write a “what changes operationally” card: constructs you’d expect, hardware readiness emphasis, and what “normal vs abnormal” looks like post-deploy.
Deliverable: ESA vs OSA card; Verification: you can explain which choice is implied by 3 different stems (legacy runbooks vs modern devices vs migration constraints).
Draw a “parts list” map (ESXi Host, vSAN network, SPBM, vCenter Server, Skyline Health for vSAN) with one “break symptom” line per component.
Deliverable: component map; Verification: for each component you can name one likely symptom and one first check.
Write 6 prompts about “policy noncompliant” and answer what you would verify first (capability/headroom vs recovery state vs fault domains).
Deliverable: 6 prompts + answers; Verification: each answer contains an explicit verification cue (what would confirm/refute your hypothesis).
VMware Cloud Foundation (VCF) Products and Solutions — Differentiate between Principal and Supplemental storage in a VCF Workload Domain cluster
VMware Cloud Foundation (VCF) Products and Solutions — Identify the role of supported Storage within a VMware Supervisor context
Translate PVC/PV symptoms into datastore + policy readiness checks
Review ESA/OSA card and vSAN component map; add one “common trap” note to each (why an answer looks right but isn’t).
Deliverable: revised cards with trap notes; Verification: you can state the trap and the correct reasoning in two sentences.
Write a short guideline: what “principal” implies for lifecycle and Day 2 expectations, and what risks “supplemental” introduces (drift/change control).
Deliverable: principal vs supplemental guideline; Verification: for 3 stems, you can label storage as principal/supplemental and justify with lifecycle reasoning.
Create a translation chain: PVC → storage class → SPBM policy → eligible datastore visibility/capabilities, with 3 failure cues (PVC pending, provisioning failed, snapshot not supported).
Deliverable: translation sheet; Verification: you can map each failure cue to one vSphere-side check (policy capability, datastore accessibility, headroom).
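The PVC → storage class → SPBM policy → datastore translation chain can be drilled as a simple mapping from each failure cue to the first vSphere-side check named on the translation sheet. Wording here is a study aid, not official reference text.

```python
# Sketch of the Supervisor storage translation sheet: each PVC-side failure
# cue maps to the first vSphere-side check. Phrasing is a study aid.
PVC_FAILURE_TO_VSPHERE_CHECK = {
    "pvc_pending": (
        "storage class exists and maps to an SPBM policy; at least one "
        "compatible datastore is visible to the cluster"
    ),
    "provisioning_failed": (
        "datastore accessibility from all hosts and capacity headroom "
        "for the requested size"
    ),
    "snapshot_not_supported": (
        "the SPBM policy / datastore capability actually advertises "
        "snapshot support for this volume type"
    ),
}

def vsphere_check(failure_cue: str) -> str:
    """Translate a Supervisor-side failure cue into the first vSphere check."""
    return PVC_FAILURE_TO_VSPHERE_CHECK[failure_cue]
```

The verification drill: given a cue, recite the check before looking it up.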
Write 10 mixed questions (single sentence each) spanning Day 1–4 topics and answer them from memory; update your weakest artifact with corrections.
Deliverable: quiz + corrected artifact; Verification: second pass score ≥9/10 without looking at notes.
Consolidate Week 1 artifacts into a single “Week 1 Storage Pack” (decision card + protocol matrix + vSAN component map)
Build a personal checklist of “first checks” for common symptoms (partial visibility, noncompliance, resync/latency)
Prepare for Week 2 design+sizing by identifying which assumptions you need in stems (headroom, failure domain, growth)
Do a fast recall pass: 20 flashcards + 5 “explain this in 30 seconds” prompts across all Week 1 topics.
Deliverable: scored recall log (right/wrong list); Verification: re-run the missed items until you can answer them cleanly twice in a row.
Combine your best versions of: HCI vs Traditional card, protocol matrix, vSAN component map, principal vs supplemental notes, and Supervisor translation sheet.
Deliverable: one consolidated document (1–3 pages); Verification: you can locate any concept within 15 seconds and explain it in exam-stem terms.
Write one longer scenario (6–8 sentences) that includes: a requirement, a protocol choice, and a symptom (e.g., “some hosts can’t see datastore” or “policy noncompliant”).
Deliverable: scenario + a structured answer (architecture choice, first checks, likely root cause, verification step); Verification: your answer follows a consistent ladder (scope → visibility/compliance → access controls/drift → pathing → backend).
List the top 10 data points you’d want in a design/sizing stem (capacity headroom, failure domain, maintenance window, growth rate, service dependencies, etc.).
Deliverable: “Sizing & Design Inputs” checklist; Verification: you can explain why each data point changes the design decision in one sentence.
This week you will learn to convert real requirements into a defensible vSAN design for a VCF Workload Domain: picking the right failure domain assumptions, expressing intent with SPBM policies, sizing for steady-state plus repairs, and recognizing when stretched/2-node or advanced services (encryption, protection) change the dependency chain and risk profile.
Plan and Design the VMware Solution — Design a vSAN Storage Solution for VCF
Plan and Design the VMware Solution — Appropriately size a storage solution based on VMware vSAN (focus: which inputs matter)
VMware Cloud Foundation (VCF) Products and Solutions — Differentiate between Principal and Supplemental storage in a VCF Workload Domain cluster (lifecycle framing)
IT Architectures, Technologies, Standards — Identify the use case for different storage architectures (ops ownership and blast radius lens)
Review your Week 1 Storage Pack and pick the 5 weakest concepts (the ones you hesitate on).
Deliverable: a “Weak-5” list + corrected flashcards; Verification: you can explain each concept in ≤30 seconds without notes.
Create a checklist of the minimum stem facts you need to design safely (availability target, failure domain, growth, maintenance window, workload IO sensitivity, headroom).
Deliverable: checklist (10–15 lines); Verification: for each line you can state what design choice it influences (one sentence each).
Take 3 sample requirements (e.g., “site resilience,” “low ops overhead,” “tight maintenance windows”) and write what SPBM intent they imply and what the cluster must be capable of.
Deliverable: 3 requirement→intent mappings; Verification: each mapping includes one “capability check” you would verify (headroom/fault domain/repair state).
Write a 5–6 sentence scenario where external storage exists alongside vSAN, then decide what is principal vs supplemental and why that matters for lifecycle/change control.
Deliverable: scenario + decision + 3 verification steps; Verification: your steps include one “all hosts see storage” check and one “policy/compliance expectation” check.
Plan and Design the VMware Solution — Appropriately size a storage solution based on VMware vSAN
Plan and Design the VMware Solution — Design a vSAN Storage Solution for VCF (trade-offs under failures/maintenance)
Troubleshoot and optimize the VMware Solution — Monitor VMware vSAN using tools in VCF (what sizing failures look like in signals)
Do a recall pass on your Day 1 checklist and SPBM mappings, then revise any line that feels vague or untestable.
Deliverable: revised checklist + 8 flashcards; Verification: you can answer each flashcard correctly twice in a row.
Create a one-page worksheet with four sections: usable capacity, performance assumptions, repair budget, operational windows (maintenance).
Deliverable: worksheet template; Verification: each section has at least 3 prompts that produce numeric or yes/no outputs when filled.
Invent a deliberately incomplete stem (missing 2–3 key facts) and fill what you can, then list the missing facts as “blocking questions.”
Deliverable: completed worksheet + blocking questions; Verification: every blocking question maps to a design risk (compliance, rebuild time, or performance).
Write 6 symptom prompts (e.g., “resync backlog never shrinks,” “latency spikes after maintenance,” “persistent noncompliance”) and state the most likely sizing dimension at fault and the first check.
Deliverable: 6 prompts + answers; Verification: each answer includes a measurable verification cue (trend/backlog/headroom/compliance).
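The worksheet's four sections (usable capacity, performance assumptions, repair budget, operational windows) can be prototyped as a toy calculator. The protection-overhead factors and reserve percentages below are illustrative assumptions, not official vSAN sizing guidance — always validate against the real sizer for an actual design.

```python
# Toy sizing sketch: raw capacity, protection overhead, an operations (slack)
# reserve, and a one-host "repair budget". All factors are illustrative
# assumptions for study, not official vSAN sizing guidance.
PROTECTION_OVERHEAD = {
    "RAID-1 FTT=1": 2.0,    # mirror: 2x raw consumed per usable
    "RAID-5 FTT=1": 4 / 3,  # 3+1 erasure coding: ~1.33x
    "RAID-6 FTT=2": 1.5,    # 4+2 erasure coding: 1.5x
}

def usable_capacity_tib(
    hosts: int,
    raw_per_host_tib: float,
    policy: str,
    ops_reserve_pct: float = 0.25,   # assumed slack for resync/rebalance
    reserve_one_host: bool = True,   # repair budget: survive one host loss
) -> float:
    """Estimate usable capacity after protection overhead and reserves."""
    effective_hosts = hosts - 1 if reserve_one_host else hosts
    raw = effective_hosts * raw_per_host_tib
    after_ops_reserve = raw * (1.0 - ops_reserve_pct)
    return after_ops_reserve / PROTECTION_OVERHEAD[policy]

# Example: 6 hosts x 20 TiB raw, RAID-5 FTT=1, one host reserved, 25% slack:
# (5 * 20) * 0.75 / (4/3) = 56.25 TiB usable.
print(round(usable_capacity_tib(6, 20.0, "RAID-5 FTT=1"), 2))  # -> 56.25
```

Working this arithmetic by hand also makes the Day 3 point concrete: in a small cluster, reserving one host's worth of capacity consumes a much larger fraction of the total, which is exactly why small clusters are more sensitive to maintenance.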
Install, Configure, Administrate the VMware Solution — Deploy a vSAN Stretched Cluster within a VCF Workload Domain
Install, Configure, Administrate the VMware Solution — Deploy a vSAN 2-Node Cluster
Troubleshoot and optimize the VMware Solution — Troubleshoot and resolve issues with VMware vSAN Storage (site vs host failure reasoning)
Review your sizing worksheet template and explain aloud what “repair budget” means and why small clusters are more sensitive to maintenance.
Deliverable: a 10-line teach-back transcript (bullet notes are fine); Verification: you can deliver it smoothly in ≤90 seconds.
Write a short card that distinguishes site impairment signals from host/component failures, including what you would verify first for each.
Deliverable: decision card; Verification: you can classify 5 sample symptoms correctly (site vs host) and justify each in one sentence.
Create a checklist for stretched/2-node designs: what must be reachable, what “stable” looks like, and what symptoms appear when the witness is unhealthy or unreachable.
Deliverable: checklist; Verification: your checklist includes at least 3 concrete “how to know” cues (not just “check health”).
Write one scenario with geography constraints and maintenance requirements, then choose the topology and list 4 design validations you would require.
Deliverable: chosen topology + validations; Verification: validations include one fault-domain check and one “expected behavior during maintenance” statement.
Install, Configure, Administrate the VMware Solution — Configure vSAN Encryption
Install, Configure, Administrate the VMware Solution — Deploy/Configure vSAN Data Protection and create/configure a vSAN Data Protection Recovery Plan
VMware Cloud Foundation (VCF) Products and Solutions — Identify the use cases for advanced VMware vSAN features/services/capabilities
Review your stretched/2-node cards and run a 5-question recall test on witness dependency and failure classification.
Deliverable: 5 Q/A + corrected notes; Verification: you can answer all 5 correctly without notes on the second pass.
Draw a simple dependency chain map showing what encryption adds (trust/reachability/key availability) and what symptoms appear if the dependency breaks.
Deliverable: dependency map; Verification: you can name one “first check” for an encryption enablement failure and one for a Day 2 key-availability issue.
Write a recovery plan skeleton: ordering, verification steps, and what you would document as outputs (what restored, where, how validated).
Deliverable: recovery plan skeleton (10–15 lines); Verification: includes at least 3 explicit verification steps (not generic “test restore”).
Write 8 prompts that describe a requirement and ask “which vSAN feature fits” or “which is risky and why,” covering File Services, iSCSI Target Service, Data Protection, HCI Mesh, and stretched clusters.
Deliverable: 8 prompts + answers; Verification: each answer includes one dependency/prerequisite reason (not just the feature name).
Plan and Design the VMware Solution — Design a vSAN Storage Solution for VCF (full loop: requirements → intent → verification)
Install, Configure, Administrate the VMware Solution — Create/configure a vSAN Storage policy (interpret compliance safely)
Troubleshoot and optimize the VMware Solution — Monitor VMware vSAN using tools in VCF (confirm design with signals)
Do a 25-minute retrieval pass: 15 flashcards + 5 “explain in 30 seconds” prompts across Week 2.
Deliverable: scored log + missed-item fixes; Verification: re-test missed items until you get 100% in a final mini-pass.
Create a checklist that starts with design intent (failure domain, policy, sizing) and ends with what you verify after deployment (health, compliance, resync trend, latency baseline).
Deliverable: checklist (12–18 lines); Verification: each line is testable (you can say what evidence would satisfy it).
Write an 8–10 sentence design scenario (include: growth, maintenance, availability target, one advanced service dependency). Then answer with: architecture/topology choice, policy intent, sizing assumptions, and validation plan.
Deliverable: scenario + structured answer; Verification: your answer includes at least 6 explicit checks (capability, headroom, compliance expectation, recovery load expectation, and one dependency check).
Create 12 rapid-fire stems (one sentence each) and answer them with “best next step” or “best design choice” in under 90 seconds total, then review mistakes.
Deliverable: 12 stems + answers + correction notes; Verification: second run improves by at least 3 correct choices or reduces time by 15 seconds without accuracy loss.
This week you will turn your Week 2 design intent into concrete deployment and configuration workflows: deploying vSAN clusters (standard, stretched, 2-node), implementing SPBM policies, enabling key services (encryption, file/iSCSI, capacity sharing), and integrating supported (non-vSAN) datastores and datastore clusters with disciplined verification so you can recognize “deployment succeeded but not healthy” patterns quickly.
Install, Configure, Administrate the VMware Solution — Deploy a vSAN Cluster within a VCF Workload Domain
VMware Cloud Foundation (VCF) Products and Solutions — Identify the components of a vSAN Architecture/Solution (component-to-symptom mapping)
Troubleshoot and optimize the VMware Solution — Monitor VMware vSAN using tools in VCF (baseline health/compliance/resync/latency)
Plan and Design the VMware Solution — Design a vSAN Storage Solution for VCF (turn intent into proof checks)
Recall from memory your Week 2 “Design-to-Verification” checklist and rewrite it without looking, then compare to the original and fix gaps.
Deliverable: rewritten checklist + corrections; Verification: at least 10/12 lines match your original intent and are testable (evidence stated).
Write a “minimum viable proof set” for a newly deployed vSAN Workload Domain cluster: what must be true for health, datastore visibility, and basic placement/policy behavior.
Deliverable: proof set (12–15 lines); Verification: each line can be answered with a clear pass/fail observation (not vague “looks good”).
Create 10 prompts: each prompt names one component (ESXi Host, vSAN network, SPBM, vCenter Server, Skyline Health for vSAN) and asks “if this fails, what do you see first?”
Deliverable: 10 prompts + answers; Verification: each answer includes a first check and the expected scope (single host vs cluster-wide).
Write a 6–8 sentence scenario where deployment completes but the cluster shows warnings or noncompliance; answer with a safe triage plan (scope → health → compliance → resync trend).
Deliverable: scenario + triage plan; Verification: your plan includes at least 3 explicit verification cues (e.g., resync backlog trend, headroom check, compliance persistence).
Install, Configure, Administrate the VMware Solution — Create/configure a vSAN Storage policy
Install, Configure, Administrate the VMware Solution — Complete Day 2 administration tasks on a vSAN Cluster
Troubleshoot and optimize the VMware Solution — Troubleshoot and resolve issues with VMware vSAN Storage (triage ladder)
Plan and Design the VMware Solution — Appropriately size a storage solution based on VMware vSAN (repair budget lens)
Do a rapid recall pass: define “noncompliant” and list 3 reasons it can be transient versus persistent, then write the first check for each reason.
Deliverable: 3 transient + 3 persistent reasons with checks; Verification: each check is observable and distinguishes the cases.
Create a short table with 5 policy intents (availability, site tolerance, performance sensitivity, capacity efficiency, maintenance friendliness) and write what consequence you must be prepared to see (overhead, rebuild pressure, compliance behavior).
Deliverable: 5-row table; Verification: you can explain each row in one sentence using exam-stem language.
Write a Day 2 “maintenance safety script”: what you verify before maintenance, what you watch during, and what you verify after (health, compliance, resync convergence, latency trend).
Deliverable: script (15–20 lines); Verification: includes at least 4 “after” checks and one “stop condition” (when you halt the operation).
Write a scenario where performance degrades after host maintenance and resync is running; answer: what you check first, what you do not change yet, and what confirms recovery is progressing.
Deliverable: scenario + structured answer; Verification: answer includes a trend-based verification (backlog shrinking or latency returning to baseline).
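The trend-based verification in this scenario can be sketched as a tiny check: sample the resync backlog periodically and decide whether recovery is progressing before changing anything. The drop-ratio threshold is an arbitrary study-aid assumption.

```python
# Minimal sketch of trend-based verification: given periodic resync backlog
# samples (e.g., GB remaining), is recovery progressing? The 5% threshold
# is an arbitrary study-aid assumption, not a vSAN-defined value.
def recovery_progressing(backlog_samples: list[float],
                         min_drop_ratio: float = 0.05) -> bool:
    """True if the resync backlog shrank meaningfully over the window."""
    if len(backlog_samples) < 2:
        return False  # not enough evidence yet -- keep observing
    first, last = backlog_samples[0], backlog_samples[-1]
    if first <= 0:
        return True  # nothing left to resync
    return (first - last) / first >= min_drop_ratio

print(recovery_progressing([800, 650, 500, 340]))  # -> True (backlog shrinking)
print(recovery_progressing([800, 820, 790, 805]))  # -> False (flat: investigate)
```

The point of the sketch is the discipline, not the code: a single sample proves nothing; only a trend across the window justifies “recovery is progressing, don't change anything yet.”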
Install, Configure, Administrate the VMware Solution — Deploy a vSAN Stretched Cluster within a VCF Workload Domain
Install, Configure, Administrate the VMware Solution — Deploy a vSAN 2-Node Cluster
Troubleshoot and optimize the VMware Solution — Monitor VMware vSAN using tools in VCF (site vs host symptom cues)
Plan and Design the VMware Solution — Design a vSAN Storage Solution for VCF (failure domain reasoning)
From memory, write the difference between “site impairment” and “host/component failure” in 6 lines, then add one first check for each.
Deliverable: 6-line distinction + 2 checks; Verification: each check would produce a clear yes/no outcome.
Create a checklist that proves two-site correctness: fault domain assignment, witness reachability, and expected behavior under a simulated site impairment scenario.
Deliverable: checklist (12–16 lines); Verification: includes at least 3 witness-related checks and 2 site-fault-domain checks.
Write a “2-node constraints” card focusing on witness dependency and maintenance tolerance (what becomes risky, what must be verified).
Deliverable: 2-node card (10–12 lines); Verification: you can explain why a common maintenance action is riskier in 2-node than in a larger cluster.
Create 8 one-sentence stems that imply standard vs stretched vs 2-node; choose the topology and list one validation for each.
Deliverable: 8 stems + choices + validations; Verification: each validation explicitly references fault domain or witness stability (not generic “check health”).
Install, Configure, Administrate the VMware Solution — Configure vSAN Encryption
Install, Configure, Administrate the VMware Solution — Configure the vSAN File Service and configure a File Share using vSAN File Services
Install, Configure, Administrate the VMware Solution — Configure the vSAN iSCSI Target Service
Install, Configure, Administrate the VMware Solution — Configure vSAN Cross-Cluster Capacity Sharing and vSAN Storage Clusters
Quiz yourself on the dependency chains for encryption, file, and iSCSI: write one “added dependency” and one “first failure symptom” for each, from memory.
Deliverable: 3 dependency chains + symptoms; Verification: each chain includes an explicit first check (reachability/trust/identity alignment).
Write a checklist for encryption readiness focusing on trust/reachability and Day 2 consequences (what breaks if the dependency is unstable).
Deliverable: readiness checklist (10–14 lines); Verification: includes one enablement check and one Day 2 “key availability” check with evidence cues.
Draft a validation script: service health stable, share creation succeeds, client access behaves as intended, and what you check if access is denied or intermittent.
Deliverable: validation script (12–16 lines); Verification: includes at least 3 client-side verification cues and 2 admin-side checks.
Write a ladder that starts at initiator discovery and ends at LUN visibility and stable access; include what changes when only some initiators fail.
Deliverable: ladder (10–15 lines); Verification: includes one “partial failure” branch and names identity/access alignment as a first check.
Write a short provider vs consumer roles checklist and a “what would you verify if capacity is not visible” first-response plan.
Deliverable: roles checklist + first-response plan; Verification: your plan starts with role clarity, then connectivity/permissions, then placement expectations.
Install, Configure, Administrate the VMware Solution — Deploy a VCF Workload Domain cluster with supported (non-vSAN) Storage
Install, Configure, Administrate the VMware Solution — Configure a Datastore (non-vSAN) in a VCF Workload Domain Cluster
Install, Configure, Administrate the VMware Solution — Configure a Datastore Cluster in a VCF Workload Domain Cluster
Troubleshoot and optimize the VMware Solution — Troubleshoot and resolve issues with supported (non-vSAN) Storage
Recall the external storage troubleshooting ladder (visibility → access controls → drift → multipathing → backend) and rewrite it without notes, adding one example for each step.
Deliverable: rewritten ladder + examples; Verification: each example clearly matches the step (no duplicates or vague cases).
Create a sheet listing the first three checks for NFS, iSCSI, and FC/NVMe-oF when a datastore is missing on one host.
Deliverable: protocol sheet; Verification: each protocol’s checks include one access-control check and one host-consistency check.
Write a card explaining what changes when you introduce a Datastore Cluster (Storage DRS influence) and how that can confuse troubleshooting if you expect manual placement.
Deliverable: expectations card (10–12 lines); Verification: includes one “placement surprise” symptom and what you verify first to confirm Storage DRS influence.
Write a scenario where a patch/host replacement occurs and afterward only some hosts see the datastore; answer with the exact order you check and what confirms the root cause.
Deliverable: scenario + ordered checks; Verification: the first two checks are access control alignment and host configuration consistency, each with a concrete evidence cue.
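The external-storage ladder drilled all week (visibility → access controls → drift → multipathing → backend) can be kept as an ordered checklist you walk in sequence. The step questions below are study-aid phrasing of the ladder, not an official procedure.

```python
# The external-storage troubleshooting ladder as an ordered checklist.
# Question wording is study-aid phrasing; the ordering is the point:
# never skip ahead until the earlier layer has a clear pass.
EXTERNAL_LADDER = [
    ("visibility",      "do all hosts see the datastore, or only some?"),
    ("access_controls", "exports / CHAP-IQN / zoning-masking aligned for every host?"),
    ("host_drift",      "did a patch or replacement change host config (IQN, WWN, VMkernel IP)?"),
    ("multipathing",    "path count and path policy consistent across hosts?"),
    ("backend",         "array/fabric saturation affecting all hosts uniformly?"),
]

def next_step(steps_passed: int) -> tuple[str, str]:
    """Given how many steps have passed cleanly, return the next (layer, question)."""
    return EXTERNAL_LADDER[steps_passed]
```

For the patch/host-replacement scenario above, the first two calls land exactly where the verification criterion demands: visibility scoping, then access-control alignment and host drift.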
This week you will operationalize everything you learned by drilling the monitoring signals and troubleshooting ladders for both vSAN and supported (non-vSAN) storage, then completing timed exam-style simulations that force you to choose the safest “best next step” and verify outcomes (compliance trends, resync convergence, datastore visibility consistency, and dependency-chain readiness).
Troubleshoot and optimize the VMware Solution — Monitor VMware vSAN using tools in VCF
Troubleshoot and optimize the VMware Solution — Troubleshoot and resolve issues with VMware vSAN Storage
Plan and Design the VMware Solution — Appropriately size a storage solution based on VMware vSAN (symptoms of sizing failure)
Review your Week 3 “minimum proof set” and “maintenance safety script” and rewrite the top 8 checks from memory.
Deliverable: 8 checks (one line each) + a one-sentence reason per check; Verification: each check states what evidence would confirm it (pass/fail).
Create a one-page card listing: health status, policy compliance, capacity headroom, resync backlog/trend, and latency trend, with one “normal vs abnormal” note each.
Deliverable: dashboard card; Verification: you can explain how you’d classify a stem as recovery-load vs contention vs availability in ≤60 seconds.
Write 6 short stems that mix “slow” with other signals (resync running, capacity low, maintenance done, network warning) and answer: first check + what you would not change yet.
Deliverable: 6 stems + structured answers; Verification: every answer includes a trend-based cue (backlog shrinking/growing, compliance persistence, latency trend).
Do a 12-question rapid set focused on vSAN monitoring interpretation (compliance, resync, capacity pressure, latency). Answer in under 6 minutes, then review mistakes.
Deliverable: 12 Q/A + corrected dashboard card; Verification: second run improves by at least 3 correct answers without taking longer.
Troubleshoot and optimize the VMware Solution — Monitor supported (non-vSAN) Storage using tools in VCF
Troubleshoot and optimize the VMware Solution — Troubleshoot and resolve issues with supported (non-vSAN) Storage
Install, Configure, Administrate the VMware Solution — Configure a Datastore (non-vSAN) in a VCF Workload Domain Cluster
Rewrite your external troubleshooting ladder from memory (visibility → access controls → drift → multipathing → backend), then add one concrete “evidence cue” per step.
Deliverable: ladder + evidence cues; Verification: each evidence cue is observable (e.g., “host A sees 0 targets”) and not a generic “check logs.”
Create a one-page sheet with three sections (NFS, iSCSI, FC/NVMe-oF): first three checks + common failure signature + most likely misconfiguration.
Deliverable: protocol quick-checks sheet; Verification: given a symptom, you can pick the protocol section and name the first check in ≤10 seconds.
Write a 7–9 sentence scenario where only some hosts see the datastore after a change; produce an ordered response plan with a pass/fail test per step.
Deliverable: scenario + ordered plan; Verification: the first two steps test access control alignment and host configuration drift with explicit evidence.
Write 6 prompts about “latency increased after a link event” and decide whether you suspect host-path queueing vs backend saturation, and what would prove it.
Deliverable: 6 prompts + answers; Verification: every answer includes a comparison (uniform across hosts vs only some hosts) and a measurable cue (path count, queueing symptom).
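The comparison cue these prompts train — uniform latency across hosts suggests backend saturation, latency on only some hosts suggests host-side path/queueing — can be sketched as a small classifier. The elevation threshold is an arbitrary study-aid assumption.

```python
# Sketch of the uniform-vs-partial latency comparison: if elevated latency is
# uniform across hosts, suspect backend saturation; if only some hosts are
# affected, suspect host-side path/queueing. The 20 ms threshold is an
# arbitrary study-aid assumption, not a product-defined limit.
def classify_latency(per_host_latency_ms: dict[str, float],
                     elevated_ms: float = 20.0) -> str:
    """Classify a latency event as backend-wide or host-path-local."""
    affected = [h for h, ms in per_host_latency_ms.items() if ms >= elevated_ms]
    if not affected:
        return "no elevated latency"
    if len(affected) == len(per_host_latency_ms):
        return "uniform: suspect backend saturation"
    return f"partial ({', '.join(sorted(affected))}): suspect host path/queueing"

print(classify_latency({"esx01": 35.0, "esx02": 33.0, "esx03": 36.0}))
print(classify_latency({"esx01": 38.0, "esx02": 4.0, "esx03": 5.0}))
```

Either classification still needs the measurable cue from your answers (path count change, queueing symptom) to confirm, because a uniform spike can also follow a shared fabric event rather than array saturation.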
Troubleshoot and optimize the VMware Solution — Troubleshoot and resolve issues with VMware vSAN Storage
Install, Configure, Administrate the VMware Solution — Complete Day 2 administration tasks on a vSAN Cluster
Install, Configure, Administrate the VMware Solution — Complete Day 2 administration tasks on a vSAN Stretched Cluster
From memory, write your vSAN triage flow: define scope → classify (availability/compliance/performance) → highest-signal checks → safe action → verify.
Deliverable: triage flow (10–14 lines); Verification: you can apply it to a new stem and produce a consistent “first check” without hesitation.
Create 8 noncompliance stems that vary the context (after maintenance, capacity low, host failure, stretched cluster, resync running) and choose the best next check/action.
Deliverable: 8 stems + answers; Verification: each answer includes what would confirm the hypothesis (e.g., “resync backlog shrinking” or “fault domain mis-assigned”).
Write 10 one-sentence symptoms and classify each as site impairment vs host/component failure vs witness instability; add one first check per item.
Deliverable: 10 classifications + checks; Verification: at least 8/10 classifications are consistent with your own decision card logic (no contradictory reasoning).
Write 6 pairs of answers for the same stem: one risky/shortcut response and one safe/root-cause response; then explain why the safe one is better.
Deliverable: 6 pairs + explanations; Verification: each safe response ends with a verification outcome (compliance trend, resync convergence, latency improvement).
VMware Cloud Foundation (VCF) Products and Solutions — Identify the role of supported Storage within a VMware Supervisor context
Install, Configure, Administrate the VMware Solution — Configure vSAN Encryption, vSAN File Service, vSAN iSCSI Target Service, vSAN Data Protection
Troubleshoot and optimize the VMware Solution — Monitor/Troubleshoot both vSAN and non-vSAN storage
Recall from memory the dependency chain for encryption (trust/reachability), file services (service health + access), iSCSI targets (discovery/session/visibility), and data protection (enablement → restore points → recovery plan).
Deliverable: 4 dependency chains (3–5 lines each); Verification: each chain includes one “first failure symptom” and one “first check.”
Write 6 short stems about PVC/PV issues (pending, provisioning failed, snapshot not supported) and translate each into the vSphere-side checks (storage class/SPBM capability, datastore eligibility/visibility, headroom).
Deliverable: 6 stems + translations; Verification: each translation contains at least one specific eligibility check (capability or visibility) and one verification cue.
Write 4 medium scenarios (6–9 sentences each): (1) vSAN noncompliance, (2) resync storm + performance, (3) external datastore partial visibility, (4) service dependency failure (encryption or iSCSI or file).
Deliverable: 4 scenarios + structured answers (scope → layer → first checks → likely root cause → verification); Verification: every answer includes a “what I would verify after remediation” line.
Create 15 multiple-choice-style prompts (you write the options) where two answers look plausible; practice eliminating the wrong one by citing the missing prerequisite or wrong failure-domain assumption.
Deliverable: 15 prompts + elimination notes; Verification: for each, you can state the deciding clue in one sentence (e.g., “partial visibility implies access control drift”).
All domains: architecture/protocol mapping, VCF storage choices, design+sizing, deployment+services, monitoring+troubleshooting
Focus on consistency: best-next-step, safest remediation, and verification outcomes
Run through 30 flashcards + 10 “30-second teach-back” prompts (mixed across all parent domains) and log every miss.
Deliverable: miss log + fixes; Verification: re-test misses until you can answer them correctly twice consecutively.
Do a timed simulation of 25 questions you create: 10 vSAN, 8 non-vSAN, 4 services/dependencies, 3 Supervisor translation.
Deliverable: answer sheet + confidence rating (high/med/low) per question; Verification: review all low-confidence items and write the missing fact or rule you needed.
Compress your key anchors into one page: HCI vs traditional signals, protocol matrix, vSAN dashboard, external ladder, stretched/2-node classification, Supervisor translation chain.
Deliverable: one-page anchors sheet; Verification: you can answer a random prompt from any domain using only this sheet in ≤20 seconds.
Analyze your simulation misses and group them into 3–5 error patterns (e.g., “ignored partial visibility clue,” “changed policy too early,” “forgot dependency chain”).
Deliverable: error-pattern list + a 7-day micro-fix plan (one action per day); Verification: each pattern has a specific prevention rule you can recite.