DP-600 Exam Training Course Study Plan

DP-600 Study Plan — Implementing Analytics Solutions Using Microsoft Fabric

This 4-week plan sequences DP-600 around how real Fabric solutions are built and maintained. Week 1 establishes the mental model across governance, ingestion, and semantic models; Week 2 deepens data preparation with tool choice, transformation placement, and validation; Week 3 focuses on semantic model design, security, and reuse; Week 4 hardens enterprise readiness through optimization, lifecycle promotion, and full practice and review. Throughout, the three exam domains (Maintain a data analytics solution; Prepare data; Implement and manage semantic models) are woven into daily build-and-verify cycles.

Daily target: 4–6 pomodoros (25 minutes each) on study days.

Pomodoro definition: 25 min focus + 5 min break; after 4 pomodoros take a longer break.

Micro-task mapping: each pomodoro ends with a tangible artifact (notes, checklist, diagram, mini-solution).

Verification rule: every day includes at least one “prove it” check (validation query, role test, impact checklist).

Spaced review: end each day by recalling yesterday’s key points from memory and updating a 1-page cheat sheet.

Week 1 — Foundations: governance, lifecycle, and data preparation basics

Week 1 Theme

This week builds your baseline Fabric mental model: how security and governance layers differ (workspace vs item vs data-level), how changes move safely (version control, .pbip, deployment pipelines, impact analysis), and how data preparation flows from ingestion through transformation to validation. You’ll produce a small set of reusable study artifacts (checklists and diagrams) that you will keep refining in Weeks 2–4.

Day 1 — Security & governance layers in a Fabric solution

Study Content
  • Maintain a data analytics solution — Implement security and governance: workspace-level access controls vs item-level access controls.

  • Data-level controls: Row-level security (RLS) vs Column-level/object-level security (CLS/OLS).

  • Governance signals: sensitivity labels and endorsements as trust/discoverability cues (not access).

  • Troubleshooting framing: “can’t open” vs “opens but blank/wrong totals.”

Tasks
Task 1: 2 pomodoros — Build a “security layering” one-pager (workspace → item → RLS/OLS → file)

Write a single-page diagram that shows each control layer, what it protects, and a real example (e.g., region-based RLS).

Deliverable: 1-page security layering note/diagram; Verification: explain the layering in 90 seconds without reading.

Task 2: 1 pomodoro — Create a governance signals mini-checklist (labels + endorsements)

List what sensitivity labels and endorsements affect (discoverability/trust/handling expectations) and what they do not (permissions).

Deliverable: 10-line checklist; Verification: for two sample scenarios, point to the exact checklist line that resolves the confusion.

Task 3: 1 pomodoro — Draft an “access issue triage” flow (4-step)

Create a 4-step decision order: workspace access → item access → RLS mapping → OLS/CLS/relationships/measures.

Deliverable: triage flowchart; Verification: apply it to a “user can open report but sees blanks” scenario and identify the likely layer.

Task 4: 1 pomodoro — Spaced review + flash prompts

From memory, write 8 Q/A prompts (e.g., “When does endorsement help?” “RLS vs OLS?”) and answer them without notes first.

Deliverable: 8 flash prompts; Verification: re-answer after checking notes and mark at least 2 corrections.

Day 2 — Lifecycle basics: version control, .pbip, deployment pipelines, impact analysis

Study Content
  • Maintain a data analytics solution — Maintain the analytics development lifecycle: why reviewable changes matter.

  • Power BI Desktop project (.pbip) as a collaboration-friendly artifact format.

  • Deployment Pipeline: dev → test → prod promotion mindset and post-deploy validation.

  • Impact analysis: “blast radius” across data layer → model layer → reports.

Tasks
Task 1: 2 pomodoros — Write a lifecycle “definition of done” checklist for model/report changes

Include: what must be reviewed, what must be tested, and what must be validated after promotion (access + correctness + performance).

Deliverable: checklist with 12+ items; Verification: for each item, add a “how to verify” cue (test identity, KPI spot-check, benchmark page).

Task 2: 1 pomodoro — Build an impact analysis template (upstream/middle/downstream)

Create a 3-column template: Upstream (Lakehouse/Warehouse/Dataflow Gen2), Middle (Semantic Model), Downstream (Reports).

Deliverable: impact template; Verification: fill it with one example change (rename a column used by a measure) and list likely breakpoints.

Task 3: 1 pomodoro — Draft a rollback/roll-forward decision note

Write criteria for rollback vs quick fix vs forward-only change when production reports fail after deployment.

Deliverable: 8–10 line decision note; Verification: apply it to “prod dashboards slow down after deploy” and choose an action with one reason.

Task 4: 1 pomodoro — Spaced review (Day 1 + Day 2)

Recall the security layering and lifecycle checklist from memory, then refine both with 3 improvements.

Deliverable: updated one-pagers; Verification: improvements must be specific (added step, clarified boundary, added validation cue).

Day 3 — Ingestion choices: Dataflow Gen2 vs Data Pipeline vs Notebook, plus gateway constraints

Study Content
  • Prepare data — Get data: choose the ingestion tool based on complexity and operations needs.

  • Connectivity constraints: when On-premises Data Gateway is required.

  • Landing surface choice: Lakehouse vs Warehouse vs Eventhouse (KQL Database) as a destination mindset.

  • Reliability basics: schedule, retries, and load-status outputs.

Tasks
Task 1: 2 pomodoros — Create a tool-selection matrix (Dataflow Gen2 / Data Pipeline / Notebook)

For each tool, write: best-fit constraints, common pitfalls, and a sample use case.

Deliverable: 1-page matrix; Verification: for three scenario prompts, pick a tool and justify in 2 sentences each.

Task 2: 1 pomodoro — Write a gateway “must-check” list

List the minimum checks when on-prem ingestion fails (connectivity, credentials, identity, schedule).

Deliverable: 8-line gateway checklist; Verification: map each check to a symptom it would explain.

Task 3: 1 pomodoro — Define a basic load-status artifact

Design a minimal “load status” table/log entry: run id, start/end, rows in/out, rejected, max date, success flag.

Deliverable: schema sketch; Verification: explain how a dashboard could detect “stale data” using max date + success flag.
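As a concrete starting point for the schema sketch, here is a minimal Python sketch of a load-status record and a staleness check. All field names (run_id, max_date, success, and so on) are illustrative assumptions, not a Fabric API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical load-status record; column names are illustrative.
@dataclass
class LoadStatus:
    run_id: str
    start_time: datetime
    end_time: datetime
    rows_in: int
    rows_out: int
    rows_rejected: int
    max_date: datetime  # latest business date seen in this load
    success: bool

def is_stale(status: LoadStatus, now: datetime, max_lag: timedelta) -> bool:
    """Dashboard-style staleness check: data is stale if the last run
    failed or the newest business date lags too far behind 'now'."""
    return (not status.success) or (now - status.max_date > max_lag)
```

A dashboard tile could surface `is_stale(...)` directly: a failed run or an old `max_date` both flip the same "stale data" flag.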

Task 4: 1 pomodoro — Spaced review (Days 1–3)

Write 10 quick prompts that mix governance + lifecycle + ingestion, then answer without notes before checking.

Deliverable: 10 prompts + answers; Verification: mark at least 3 places where your first answer was incomplete and fix them.

Day 4 — Transformation placement and data quality patterns

Study Content
  • Prepare data — Transform data: where transformations should live (Dataflow Gen2 vs Notebook vs SQL).

  • Maintainability vs pushdown: keep transforms closest to maintainers but validate performance.

  • Data quality: uniqueness, nulls, duplicates, reject/quarantine outputs.

  • Preventing data explosion: join grain and cardinality checks.

Tasks
Task 1: 2 pomodoros — Write “where should the transform live?” decision rules

Create 6 rules that choose between Dataflow Gen2, Notebook, and Warehouse/SQL Analytics Endpoint.

Deliverable: 6-rule decision card; Verification: apply rules to two scenarios (nested JSON, simple column standardization) and confirm the tool choice.

Task 2: 1 pomodoro — Design a data quality checklist with thresholds

Include: key uniqueness check, null-rate checks, duplicate detection, referential integrity spot-checks.

Deliverable: checklist with at least 8 checks; Verification: for each check, specify what output proves pass/fail (count query result, reject table row count).

Task 3: 1 pomodoro — Create a “row explosion” debugging mini-flow

List the top 3 causes (many-to-many joins, dimension duplicates, grain mismatch) and the 3 fastest validations.

Deliverable: mini-flow; Verification: show how each validation would change (what numbers would be “bad”).
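The three validations in the mini-flow can be sketched as one function. This is a naive in-memory illustration (the nested-loop join and the `k` key are assumptions for the example), not how you would run it against a Lakehouse table.

```python
def explosion_checks(fact_rows, dim_rows, key):
    """Three fast row-explosion validations after a join:
    1) duplicate keys in the dimension, 2) joined row count vs fact
    row count, 3) the resulting explosion factor."""
    dim_keys = [r[key] for r in dim_rows]
    dup_dim_keys = len(dim_keys) - len(set(dim_keys))
    # Naive join simulation, for illustration only.
    joined = [(f, d) for f in fact_rows for d in dim_rows if f[key] == d[key]]
    factor = len(joined) / max(len(fact_rows), 1)
    return {"duplicate_dim_keys": dup_dim_keys,
            "joined_rows": len(joined),
            "explosion_factor": factor}
```

"Bad" numbers here are any `duplicate_dim_keys > 0` or `explosion_factor > 1.0`: both mean the join is multiplying fact rows.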

Task 4: 1 pomodoro — Spaced review (Days 1–4)

Update your Week 1 cheat sheet with the most exam-relevant decision rules (security layers, lifecycle checklist, tool matrix, transform rules).

Deliverable: 1-page cheat sheet v1; Verification: keep it to one page and ensure each rule has a “trigger phrase” you can recognize in questions.

Day 5 — Validation: SQL-level and DAX-level checks, plus weekly consolidation

Study Content
  • Prepare data — Query and analyze data: validation flow from ingestion → transforms → serving layer.

  • Choosing the query surface: SQL Analytics Endpoint/Warehouse SQL vs DAX Query View (measure behavior).

  • Common discrepancy causes: time zone boundaries and late-arriving data.

  • Week 1 consolidation: connect governance + lifecycle + data prep into one scenario narrative.

Tasks
Task 1: 2 pomodoros — Build a 4-step validation flow you can reuse in any scenario

Write the flow: ingest checks → transform checks → SQL aggregate checks → semantic measure checks (context + security).

Deliverable: validation flow card; Verification: apply it to “totals changed after transform update” and identify the most likely failing step.

Task 2: 1 pomodoro — Create a “DAX vs SQL validation” note

Explain when SQL is sufficient and when you must validate at the semantic layer (filter context, security).

Deliverable: 10-line note; Verification: include 2 examples where SQL matches but DAX differs, and state why.

Task 3: 1 pomodoro — Weekly mini-scenario write-up

Write a short scenario: ingest + transform + model + security + deployment, and list the top 5 risks and mitigations.

Deliverable: 1-page scenario + risk list; Verification: each risk must map to a concrete control or validation artifact you created this week.

Task 4: 1 pomodoro — Spaced review + self-test

Without notes, answer: “How do labels/endorsement differ from permissions?” “What is impact analysis?” “How to prevent duplicates?” “Why validate measures with DAX?”

Deliverable: self-test answers; Verification: compare against your cheat sheet and correct inaccuracies in redline notes.


Week 2 — Data preparation mastery: transformation, quality checks, query validation

Week 2 Theme

This week turns data preparation into a repeatable system. You will practice choosing ingestion and transformation approaches under real constraints (incremental loads, schema drift, late-arriving data), build data-quality outputs that are easy to audit, and adopt a validation flow that separates ingestion problems from transformation mistakes and downstream semantic/report issues.

Day 1 — Incremental ingestion, re-runs, and “no duplicates” design

Study Content
  • Prepare data — Get data: incremental vs full loads; designing re-runnable pipelines.

  • Failure recovery patterns: retries, partial failure detection, idempotency.

  • Load-status outputs: run id, watermark, rows in/out, rejected, max date, success flag.

  • Late-arriving data basics: watermark windows and backfill strategy.

Tasks
Task 1: 2 pomodoros — Write an “idempotent load” playbook (copy + merge mindset)

Draft a short playbook that explains how you prevent duplicates on re-run (keys, merge logic, overwrite partitions, or staging tables).

Deliverable: 1-page idempotent load playbook; Verification: walk through a “job re-run after partial failure” story and show exactly why duplicates won’t occur.
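The "merge mindset" above can be sketched in a few lines: keying every write on a business key makes a re-run overwrite instead of append. This toy dictionary-based upsert is an assumption-laden stand-in for a real MERGE into a Lakehouse/Warehouse table.

```python
def idempotent_merge(target: dict, incoming: list, key: str) -> dict:
    """Upsert-style merge keyed on a business key: re-running the same
    batch overwrites rather than appends, so duplicates cannot accumulate."""
    for row in incoming:
        target[row[key]] = row  # last write wins per key
    return target
```

The "job re-run after partial failure" story follows directly: running the same batch twice leaves the target unchanged, which is the definition of idempotency.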

Task 2: 1 pomodoro — Design a watermark strategy for late-arriving data

Define how you choose the watermark column, how far you look back (window), and how you mark completion.

Deliverable: watermark rules (5–8 lines) + a small example timeline; Verification: apply it to a “yesterday changes today” scenario and show which rows will be re-processed.
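A minimal sketch of the lookback-window rule, assuming a `modified_date` watermark column (the column name and window size are illustrative choices, not fixed guidance):

```python
from datetime import date, timedelta

def rows_to_reprocess(rows, watermark: date, lookback_days: int):
    """Select rows at or after (watermark - lookback window) so that
    late-arriving changes inside the window are picked up on re-run."""
    cutoff = watermark - timedelta(days=lookback_days)
    return [r for r in rows if r["modified_date"] >= cutoff]
```

In the "yesterday changes today" scenario, a row modified yesterday falls inside the window and is re-processed, while older, untouched rows are skipped.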

Task 3: 1 pomodoro — Create a load-status table schema + alert rules

Specify columns for status and 3 alert rules (e.g., max date older than expected, rejected rows > threshold, runtime spike).

Deliverable: schema sketch + 3 alert rules; Verification: for each rule, explain the exact signal and the likely root-cause category.
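The three alert rules can be expressed as simple predicates over a status row. Field names and thresholds (one-day lag, 100 rejects, 2x runtime) are assumptions for the sketch; tune them to your own pipeline.

```python
from datetime import datetime, timedelta

def evaluate_alerts(status: dict, now: datetime) -> list:
    """Illustrative alert rules over a load-status row (field names assumed):
    stale max_date, reject count over threshold, runtime spike vs baseline."""
    alerts = []
    if now - status["max_date"] > timedelta(days=1):
        alerts.append("stale_data")       # likely ingestion or upstream gap
    if status["rows_rejected"] > 100:
        alerts.append("reject_spike")     # likely quality or schema-drift issue
    if status["runtime_sec"] > 2 * status["baseline_runtime_sec"]:
        alerts.append("runtime_spike")    # likely volume growth or plan change
    return alerts
```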

Task 4: 1 pomodoro — Spaced review + tighten Week 1 cheat sheet

Recall Week 1 tool-selection and governance rules from memory, then add 5 Week 2 “re-runs/duplicates” rules.

Deliverable: cheat sheet v2 (still 1 page); Verification: read it once, then recite the 5 new rules without looking.

Day 2 — Schema drift, gateway constraints, and destination choices (Lakehouse vs Warehouse vs Eventhouse)

Study Content
  • Prepare data — Get data: schema drift detection and quarantine/reject patterns.

  • On-premises Data Gateway: connectivity/credential/identity checks for scheduled runs.

  • Choosing the destination: Lakehouse vs Warehouse vs Eventhouse (KQL Database) by workload shape.

  • Discovery vs ingestion: OneLake catalog / Real-Time hub as “find first, ingest second.”

Tasks
Task 1: 2 pomodoros — Build a schema drift response runbook

Include: detection (column diff), safe handling (quarantine), notification, and how/when to update transforms.

Deliverable: runbook with 6+ steps; Verification: simulate “new column added + type changed” and show the runbook decision you’d take.
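The detection step (column diff) can be sketched by comparing an expected schema against an observed one, each modeled here as a column-to-type mapping (the representation is an assumption for illustration):

```python
def detect_drift(expected: dict, observed: dict) -> dict:
    """Column-diff drift detection: expected/observed map column -> type.
    New, missing, and type-changed columns each drive a different
    runbook step (quarantine, notify, update transforms)."""
    new = sorted(set(observed) - set(expected))
    missing = sorted(set(expected) - set(observed))
    retyped = sorted(c for c in expected.keys() & observed.keys()
                     if expected[c] != observed[c])
    return {"new": new, "missing": missing, "retyped": retyped}
```

For the "new column added + type changed" simulation, both the `new` and `retyped` lists come back non-empty, which is the signal to quarantine the batch before updating transforms.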

Task 2: 1 pomodoro — Create a “gateway failure triage” card (symptom → check)

Map at least 6 symptoms (timeout, auth failure, intermittent schedule miss) to the first check you’d do.

Deliverable: triage card; Verification: pick 2 symptoms and explain why that check is the best first move.

Task 3: 1 pomodoro — Make a destination decision table (Lakehouse/Warehouse/Eventhouse)

For each, write the best-fit workloads, query style, and typical consumers.

Deliverable: 1-page decision table; Verification: classify 3 prompts (dimensional BI, telemetry/logs, mixed engineering + BI) and justify in 2 sentences each.

Task 4: 1 pomodoro — Spaced review (Days 1–2) with “trigger phrase” practice

Write 10 trigger phrases (e.g., “telemetry,” “retry dependencies,” “on-prem,” “schema drift”) and the immediate best-fit action/tool.

Deliverable: 10 triggers + responses; Verification: time yourself—answer all 10 in under 3 minutes, then correct any misses.

Day 3 — Transformation placement and performance-aware shaping

Study Content
  • Prepare data — Transform data: deciding where transforms live (Dataflow Gen2 vs Notebook vs Warehouse SQL).

  • Pushdown and maintainability: keep logic close to maintainers while preserving performance.

  • Grain and joins: preventing row explosion with uniqueness and cardinality checks.

  • Transform documentation: making steps reviewable and handoff-ready.

Tasks
Task 1: 2 pomodoros — Write 8 “placement rules” for transformations (with examples)

For each rule, include a “when you’ll regret it” warning (e.g., heavy JSON parsing in low-code).

Deliverable: 8-rule decision card; Verification: apply the rules to two scenarios and show the chosen approach plus one risk.

Task 2: 1 pomodoro — Create a join-grain validation checklist

Include distinct key checks, expected row-count bounds, and a “stop the pipeline” threshold.

Deliverable: checklist with 8 checks; Verification: for each check, state what output indicates failure (count, distinct count, reject rows).

Task 3: 1 pomodoro — Draft a transformation handoff note template

Include: inputs, outputs, assumptions, known edge cases, and validation queries to run after changes.

Deliverable: handoff template; Verification: fill the template for one transform (standardize customer IDs) in under 10 minutes.

Task 4: 1 pomodoro — Spaced review + refine your tool-selection matrix

Add a row: “Best place for heavy joins and dimension shaping?” and update the matrix with one new example per tool.

Deliverable: updated matrix; Verification: explain one improvement you made and how it helps avoid a real failure mode.

Day 4 — Query validation: SQL vs semantic checks, time zones, and reconciliation

Study Content
  • Prepare data — Query and analyze data: 4-step validation flow (ingest → transform → SQL aggregates → semantic checks).

  • Query surfaces: Warehouse SQL / SQL Analytics Endpoint vs Notebook exploration vs DAX-level validation.

  • Discrepancies: time zone boundaries and late-arriving transactions.

  • Performance diagnosis basics: data growth vs join changes vs filter selectivity.

Tasks
Task 1: 2 pomodoros — Build a “validation query pack” outline (10 queries)

Define query intents (counts, distinct keys, null rates, max/min dates, top-N sanity checks).

Deliverable: list of 10 validation queries with purpose; Verification: for each query, state what “good” vs “bad” looks like.
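A runnable sketch of part of such a query pack, using SQLite as a stand-in for the Warehouse / SQL Analytics Endpoint. The `sales` table and its columns are invented for the example; only the query intents carry over.

```python
import sqlite3

# Query intents from the pack; names and SQL are illustrative.
QUERIES = {
    "row_count":      "SELECT COUNT(*) FROM sales",
    "distinct_keys":  "SELECT COUNT(DISTINCT order_id) FROM sales",
    "null_key_count": "SELECT SUM(order_id IS NULL) FROM sales",
    "max_date":       "SELECT MAX(order_date) FROM sales",
}

def run_pack(conn):
    """Run every validation query and collect its single scalar result."""
    return {name: conn.execute(sql).fetchone()[0] for name, sql in QUERIES.items()}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (order_id INT, order_date TEXT)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [(1, "2024-05-01"), (2, "2024-05-02"), (2, "2024-05-02")])
results = run_pack(conn)
```

"Good" here means `row_count == distinct_keys` and `null_key_count == 0`; in this sample data the counts disagree, which is exactly the "bad" signal (a duplicated key) the pack should catch.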

Task 2: 1 pomodoro — Write a reconciliation note for time zones and cutoffs

Define the business “day” rule and how you align source and destination validation to the same cutoff.

Deliverable: 8–12 line reconciliation note; Verification: explain how your rule prevents “daily totals drift” across midnight boundaries.
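The cutoff rule can be made explicit with a tiny helper that assigns a UTC timestamp to a business "day" under an assumed fixed offset (real deployments may need full time-zone rules with DST; the fixed offset is a simplification):

```python
from datetime import datetime, timedelta

def business_day(utc_ts: datetime, offset_hours: int) -> str:
    """Assign a UTC timestamp to a business 'day' defined by a local offset,
    so source and destination validations use the same cutoff."""
    local = utc_ts + timedelta(hours=offset_hours)
    return local.date().isoformat()
```

A transaction at 23:30 UTC lands on different business days depending on the offset, which is precisely the midnight-boundary drift the reconciliation note must prevent: both sides of the comparison have to use the same `business_day` rule.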

Task 3: 1 pomodoro — Create a bottleneck isolation checklist for slow dashboards

Separate causes into: data explosion, join path, calculation complexity, and environment differences.

Deliverable: checklist with 8 items; Verification: apply it to “table grew 10x and visuals slowed” and choose the first two validations.

Task 4: 1 pomodoro — Spaced review: teach-back the full validation flow

Explain the 4-step validation flow out loud (or in writing) as if teaching a teammate, with one example per step.

Deliverable: teach-back script (12–18 lines); Verification: after reading once, rewrite the 4 steps from memory in your own words.

Day 5 — End-to-end data prep mini-design + weekly consolidation

Study Content
  • Prepare data — Get data + Transform data + Query and analyze data: end-to-end design under constraints.

  • Reliability outputs: load-status + reject tables + data quality status table.

  • Maintain a data analytics solution — Maintain the analytics development lifecycle: impact notes for transform changes.

  • Week 2 consolidation: convert decision rules into “exam-ready” short answers.

Tasks
Task 1: 3 pomodoros — Mini-design: a resilient daily pipeline with quality outputs

Design an end-to-end approach for: SaaS + on-prem ingestion, incremental loads, schema drift handling, and validation outputs.

Deliverable: 1–2 page design note (tools chosen + why + outputs); Verification: ensure each requirement maps to a concrete artifact (status table, reject table, validation pack).

Task 2: 1 pomodoro — Write 6 exam-style “because” sentences

For each common choice (tool selection, transform placement, validation surface), write a one-sentence decision + one-sentence justification.

Deliverable: 6 decision+justification pairs; Verification: each pair must mention the constraint it addresses (e.g., “complex parsing,” “orchestration,” “filter context”).

Task 3: 1 pomodoro — Produce a weekly risk register (top 5) + mitigations

List the 5 biggest Week 2 failure modes (duplicates, drift, late data, explosion, wrong validation layer) and mitigation checks.

Deliverable: risk register table (text); Verification: each risk must reference at least one specific validation or monitoring output you created.

Task 4: 1 pomodoro — Spaced review + upgrade cheat sheet to v3

From memory, write the “Week 2 core loop” (ingest → transform → validate → publish outputs) and add your top 10 triggers.

Deliverable: cheat sheet v3 (1 page); Verification: do a 5-minute closed-book recall, then compare and patch gaps.


Week 3 — Semantic model build: star schema, measures, security, reuse

Week 3 Theme

This week turns “tables into answers” by focusing on semantic model design: modeling a clean star schema, implementing correct relationships (including bridge and many-to-many patterns), and writing robust DAX calculations (variables, iterators, filtering, windowing). You will also build reusable enterprise features (calculation groups, dynamic format strings, field parameters, composite models, and large semantic model storage format) while validating behavior under filters, security, and performance constraints.

Day 1 — Star schema in the semantic model + relationship correctness

Study Content
  • Implement and manage semantic models — Design and build semantic models: implement a star schema for a semantic model.

  • Implement relationships, such as bridge tables and many-to-many relationships.

  • Grain thinking: fact table grain, dimension uniqueness, and ambiguity avoidance.

  • Validation: filter propagation and “numbers change when I add a slicer” diagnosis.

Tasks
Task 1: 2 pomodoros — Draw a semantic model diagram (fact + 4 dimensions + grain statement)

Pick a simple scenario (Sales) and define the fact grain (e.g., one row per order line).

Deliverable: diagram + 5-line “grain + keys” note; Verification: show one example filter path and explain why it’s unambiguous.

Task 2: 1 pomodoro — Build a relationship checklist (cardinality + direction + ambiguity)

Write a checklist that forces you to declare: one-to-many vs many-to-many, direction, and the reason for any non-default choice.

Deliverable: 10–12 item checklist; Verification: apply it to your diagram and identify at least 2 relationships you would double-check.

Task 3: 1 pomodoro — Create a bridge-table mini-pattern note

Write when a bridge table is needed and what symptom it fixes (e.g., many-to-many categories, tagging, shared ownership).

Deliverable: 8–10 line pattern note; Verification: give one example of “wrong totals” that the bridge pattern prevents.

Task 4: 1 pomodoro — Spaced review: relationship “red flags”

From memory, list 6 red flags (duplicate keys, multiple paths, hidden many-to-many) and the fastest validation for each.

Deliverable: 6 red flags + validations; Verification: time-box to 7 minutes and refine any unclear validation into a single concrete check.

Day 2 — DAX calculations: variables, iterators, filtering, windowing, and information functions

Study Content
  • Write calculations that use DAX variables and functions, such as iterators, table filtering, windowing, and information functions.

  • Semantic validation: why SQL aggregates can match but measures still differ (filter context).

  • DAX Query View: testing measures under controlled filters.

  • Performance mindset: avoid “correct but expensive” patterns for heavily used KPIs.

Tasks
Task 1: 2 pomodoros — Create a “measure anatomy” template (VARs + base measure + final measure)

Write a reusable template that starts with base measures, adds VAR blocks for intermediate logic, then returns the final calculation.

Deliverable: template + 3 example measures (e.g., Total Sales, YoY %, Rolling 28-day); Verification: explain what each VAR does and what would break if you removed it.

Task 2: 1 pomodoro — Build a DAX validation script pack (5 test contexts)

Define five contexts you’ll always test: All, Single Region, Single Product Category, Single Month, and a “multi-select” slice.

Deliverable: 5-context test plan; Verification: for each context, specify the expected directional change and one “surprise” that indicates relationship or filter issues.

Task 3: 1 pomodoro — Write a “SQL vs DAX validation” decision card

Explain when SQL is enough (table-level aggregates) and when you must validate measures in DAX (filter context, security).

Deliverable: 10-line decision card; Verification: include 2 concrete examples where SQL looks right but the DAX measure can still be wrong.

Task 4: 1 pomodoro — Spaced review: DAX pitfalls list

From memory, list 8 common DAX pitfalls (context confusion, over-iteration, filter misuse) and one safer alternative tactic per pitfall.

Deliverable: 8 pitfalls + alternatives; Verification: mark your top 3 “most likely to appear on the exam” and add a trigger phrase for each.

Day 3 — Enterprise modeling features: calculation groups, dynamic format strings, field parameters, composite models

Study Content
  • Implement calculation groups, dynamic format strings, and field parameters.

  • Design and build composite models (mixing sources/modes).

  • Large semantic model storage format: recognizing when model size/scale requires it.

  • Governance and maintainability: reducing duplicated KPIs across many reports.

Tasks
Task 1: 2 pomodoros — Create a “calc group vs measure” decision guide

Write when to use calculation groups (time intelligence patterns, consistent transformations) vs separate measures.

Deliverable: 1-page guide with 4 decision rules; Verification: classify 4 examples (YoY, MTD, Currency Format, KPI Variants) into calc group vs measures and justify.

Task 2: 1 pomodoro — Build a field parameters use-case note (end-user slicing without many report pages)

Describe how field parameters reduce report sprawl and keep visuals flexible for business users.

Deliverable: 8–12 line note; Verification: write one example scenario prompt and your “why field parameters” answer in 2 sentences.

Task 3: 1 pomodoro — Draft a composite model risk checklist

List the risks: ambiguous relationships across sources, inconsistent refresh/freshness, security propagation assumptions, and performance surprises.

Deliverable: checklist with 10 items; Verification: for each risk, add one verification cue (test identity, benchmark page, filter-context test).

Task 4: 1 pomodoro — Spaced review: “enterprise feature flash prompts”

Create 10 prompts (calc group, dynamic format strings, field parameters, composite model, large model format) and answer from memory.

Deliverable: 10 prompts + answers; Verification: correct at least 2 answers after checking your notes and annotate why you missed them.

Day 4 — Storage mode decisions + Direct Lake concepts (fallback and refresh behavior)

Study Content
  • Choose a storage mode for the semantic model based on constraints (freshness vs performance vs complexity).

  • Configure Direct Lake, including default fallback and refresh behavior (conceptual decision logic).

  • Choose between Direct Lake on OneLake and Direct Lake on SQL endpoints (what the constraint wording implies).

  • Troubleshooting: why a model “suddenly got slower” after a mode/fallback change.

Tasks
Task 1: 2 pomodoros — Build a storage-mode decision matrix (3 modes, 6 constraints)

Create a matrix that maps typical constraints (near real-time, large data, strict performance, limited refresh window, complex transformations, governance) to the best-fit mode choice.

Deliverable: matrix; Verification: answer 5 scenario prompts with mode + one-sentence justification tied to a constraint.

Task 2: 1 pomodoro — Write a “fallback behavior” explanation you can reuse on the exam

Explain what it means for Direct Lake to fall back and how you’d detect it conceptually (symptom-based).

Deliverable: 8–10 line explanation; Verification: list 3 symptoms and one confirmation step per symptom (benchmark, query path reasoning, regression timing).

Task 3: 1 pomodoro — Create a refresh behavior checklist (correctness + freshness)

List the minimum checks after changing mode/refresh: KPI spot-check, date freshness indicator, and one heavy visual benchmark.

Deliverable: checklist (8 items); Verification: apply it to a “today’s data missing” scenario and identify the first two checks.

Task 4: 1 pomodoro — Spaced review: mode-choice triggers

Write 12 trigger phrases (e.g., “near real-time,” “very large model,” “strict freshness,” “shared model at scale”) and the recommended mode reasoning in one line each.

Deliverable: 12 triggers + reasoning; Verification: complete in 10 minutes and refine any vague reasoning into a constraint-based statement.

Day 5 — Security, reuse, and weekly consolidation (linking model design to lifecycle and governance)

Study Content
  • Maintain a data analytics solution — Implement security and governance: RLS/CLS/OLS and how it changes user experience.

  • Maintain a data analytics solution — Maintain the analytics development lifecycle: safe promotion and post-deploy validation for shared models.

  • Reuse assets: shared semantic models, plus reusable assets like Power BI template (.pbit) and Power BI data source (.pbids) files.

  • Governance signals: sensitivity labels and endorsements for trusted discovery (not permissions).

Tasks
Task 1: 2 pomodoros — Build a “security in semantic models” test protocol (RLS + OLS/CLS)

Write steps to validate role mapping, effective data slices, and “blanks vs access denied” outcomes.

Deliverable: test protocol (10–12 steps); Verification: include one scenario where totals differ under RLS and explain how your protocol detects relationship/measure issues.

Task 2: 1 pomodoro — Create a post-deploy validation checklist for a shared semantic model

Include: item-level access, RLS slice checks, KPI spot-checks, refresh success/freshness, and benchmark page performance.

Deliverable: checklist (12+ items); Verification: apply it to “reports broke after deployment” and identify which item would catch the issue earliest.

Task 3: 2 pomodoros — Weekly mini-scenario: design → measures → mode → security → promotion

Write a 1–2 page scenario answer: model diagram, 3 KPI measures, storage mode choice, security approach, and a promotion/validation plan.

Deliverable: mini-scenario write-up; Verification: each decision must cite the constraint it addresses (scale, reuse, security, freshness, maintainability).

Task 4: 1 pomodoro — Spaced review + upgrade cheat sheet to v4 (semantic focus)

From memory, write your “semantic model core loop”: model shape → relationships → measures → mode/refresh → security → validate → publish/govern.

Deliverable: cheat sheet v4 (1 page); Verification: do a 6-minute closed-book recall, then patch gaps and mark your top 5 trigger phrases for Week 4.


Week 4 — Enterprise readiness: optimization, lifecycle promotion, full review & practice

Week 4 Theme

This week hardens your DP-600 readiness by treating the solution like a production product. You will tune semantic model performance (shape, cardinality, measures), reason about mode/refresh and fallback symptoms, and practice lifecycle operations (version control, deployments, impact analysis, rollback decisions). You will finish with full scenario rehearsals that mix Prepare data, Implement and manage semantic models, and Maintain a data analytics solution into exam-style decision making.

Day 1 — Performance triage: model slimming, cardinality, and measure efficiency

Study Content
  • Implement and manage semantic models — Optimize enterprise-scale semantic models: model slimming and column/cardinality discipline.

  • Measure performance reasoning: “correct but expensive” patterns and safer alternatives.

  • Bottleneck isolation: data explosion vs relationship ambiguity vs calculation complexity.

  • Benchmarking: define a small “heavy page” set and compare before/after changes.

Tasks
Task 1: 2 pomodoros — Build a semantic model performance checklist (shape + measures)

Write a checklist that forces you to check: unused columns, high-cardinality fields, relationship paths, and the top 5 expensive measures.

Deliverable: checklist (12+ items); Verification: for each item, add one concrete “how to confirm” cue (distinct count, field removal test, benchmark visual).

Task 2: 2 pomodoros — Create a “before/after benchmark” protocol

Define 3 benchmark visuals/pages and the exact steps to compare performance and correctness after a change.

Deliverable: benchmark protocol (10–14 lines); Verification: include a pass/fail rule (e.g., “no regression beyond X%” + KPI spot-check requirement).
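The "no regression beyond X%" rule from the protocol can be written as a one-line verdict function; the 10% default threshold is an assumption for the sketch, and correctness (the KPI spot-check) still has to pass separately.

```python
def regression_verdict(before_ms: float, after_ms: float,
                       max_regression_pct: float = 10.0) -> str:
    """Pass/fail rule for a before/after benchmark: fail when the visual
    slows down by more than the allowed percentage (threshold assumed)."""
    change_pct = (after_ms - before_ms) / before_ms * 100
    return "fail" if change_pct > max_regression_pct else "pass"
```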

Task 3: 1 pomodoro — Write a “row explosion” rapid diagnosis card

List the top causes (grain mismatch, dimension duplicates, many-to-many surprises) and the first 3 validations you run.

Deliverable: diagnosis card (8–10 lines); Verification: apply it to a “table grew 10x” scenario and name the first two checks you would run.
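The "dimension duplicates" cause on the card can be confirmed with one check before any join: a dimension key that is not unique multiplies matching fact rows. A minimal Python sketch with hypothetical table and key names:

```python
from collections import Counter

def duplicate_keys(dim_rows, key):
    """Return dimension key values that appear more than once — each duplicate multiplies fact rows on join."""
    counts = Counter(r[key] for r in dim_rows)
    return {k: c for k, c in counts.items() if c > 1}

def expected_join_rows(fact_rows, dim_rows, key):
    """Predict the joined row count: each fact row matches as many dimension rows as share its key."""
    counts = Counter(r[key] for r in dim_rows)
    return sum(counts.get(r[key], 0) for r in fact_rows)

dim = [{"CustomerKey": 1}, {"CustomerKey": 2}, {"CustomerKey": 2}]   # key 2 duplicated → grain problem
fact = [{"CustomerKey": 1}, {"CustomerKey": 2}, {"CustomerKey": 2}]

print(duplicate_keys(dim, "CustomerKey"))            # {2: 2}
print(expected_join_rows(fact, dim, "CustomerKey"))  # 5, not 3 → row explosion
```

For the "table grew 10x" scenario, this is checks one and two from the card in executable form: confirm key uniqueness on the dimension, then compare actual vs expected row counts.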

Task 4: 1 pomodoro — Spaced review (Weeks 1–3) + update cheat sheet v5

From memory, write the 6 most important performance triggers and your default response for each.

Deliverable: cheat sheet v5 (1 page); Verification: recite the 6 triggers without looking, then patch any missing constraint language.

Day 2 — Mode/refresh and fallback: keeping performance and freshness predictable

Study Content
  • Implement and manage semantic models — Optimize enterprise-scale semantic models: refresh strategy concepts and stability checks.

  • Storage/mode decision logic: match constraints (freshness, scale, complexity) to the conceptual mode choice.

  • Direct Lake concepts: DirectQuery fallback behavior and “sudden slowdown” symptom reasoning.

  • Validation after changes: freshness indicator + KPI spot-check + benchmark page.

Tasks
Task 1: 2 pomodoros — Build a mode/refresh decision matrix (constraints → choice)

Create a matrix with at least 6 constraints (near real-time, very large model, strict refresh window, complex transforms, governance, heavy concurrency).

Deliverable: matrix; Verification: answer 6 scenario prompts with choice + one-sentence constraint-based justification.
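The matrix can be rehearsed as a toy routing function. This is a deliberately simplified study aid, not official guidance — real mode decisions weigh more factors, and the constraint flags here are invented for the exercise:

```python
def choose_mode(constraints):
    """Toy constraint → conceptual storage-mode router (study aid only, simplified on purpose)."""
    c = set(constraints)
    if "near_real_time" in c and "data_in_onelake" in c:
        return "Direct Lake"      # fresh lakehouse data without scheduled import refresh
    if "model_exceeds_memory" in c or "near_real_time" in c:
        return "DirectQuery"      # query the source directly when import won't fit or lag is unacceptable
    return "Import"               # default when transforms are complex and scale/freshness allow it

print(choose_mode({"near_real_time", "data_in_onelake"}))  # Direct Lake
print(choose_mode({"model_exceeds_memory"}))               # DirectQuery
print(choose_mode({"complex_transforms"}))                 # Import
```

Writing the matrix as code forces the exam habit the verification step asks for: every choice must trace back to a named constraint, not a preference.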

Task 2: 1 pomodoro — Write a fallback symptom decoder (3 symptoms → likely cause)

Define three “fallback-like” symptoms (performance regression, unexpected query behavior, inconsistent freshness) and what you check first.

Deliverable: decoder note (9–12 lines); Verification: for each symptom, include one confirmation cue and one likely root-cause category.

Task 3: 1 pomodoro — Create a post-change validation checklist (correctness + freshness + performance)

Include: date freshness indicator, 2 KPI spot-checks under filters, one security slice check, and one benchmark page.

Deliverable: checklist (10 items); Verification: apply it to “today’s data missing” and state the first two checks and what a fail looks like.
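The freshness item on the checklist is the first thing to script for a "today's data missing" incident. A minimal sketch; the dates and the one-day lag tolerance are illustrative assumptions:

```python
from datetime import date

def freshness_check(max_loaded_date, as_of, max_lag_days=1):
    """First post-change check: is the newest loaded date within the allowed lag?"""
    lag = (as_of - max_loaded_date).days
    return {"lag_days": lag, "fresh": lag <= max_lag_days}

# Hypothetical: newest OrderDate in the fact table vs a fixed "as of" date.
print(freshness_check(date(2024, 6, 10), as_of=date(2024, 6, 11)))  # within lag → fresh
print(freshness_check(date(2024, 6, 7),  as_of=date(2024, 6, 11)))  # 4-day lag → stale
```

A fail here points you upstream (ingestion/refresh) before you spend time on the model; a pass sends you to the KPI spot-checks and security slice next.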

Task 4: 1 pomodoro — Spaced review: teach-back mode/refresh in 2 minutes

Write a short teach-back script that explains how you choose and validate mode/refresh choices without naming irrelevant details.

Deliverable: teach-back script (10–14 lines); Verification: rewrite the core decision logic from memory after 5 minutes, keeping it to 5 lines.

Day 3 — Production operations: version control, deployment pipeline promotion, impact analysis, rollback

Study Content
  • Maintain a data analytics solution — Maintain the analytics development lifecycle: version control and reviewable changes (.pbip).

  • Deployment pipelines: promotion mindset and environment-specific differences (access, bindings, performance).

  • Impact analysis: blast radius across Prepare data → Semantic Model → Reports.

  • XMLA endpoint: where it fits for enterprise automation and semantic model management.

Tasks
Task 1: 2 pomodoros — Build a “promotion gate” checklist for shared semantic models

Include access checks, RLS slice checks, KPI correctness checks, refresh success/freshness, and benchmark performance checks.

Deliverable: promotion gate checklist (15+ items); Verification: map each checklist item to the failure symptom it would catch earliest.

Task 2: 2 pomodoros — Write a rollback vs fix vs roll-forward decision guide

Define criteria and a short decision flow for production incidents after deployment (broken visuals, wrong totals, performance regression).

Deliverable: decision guide (12–16 lines); Verification: apply it to two incidents and justify your choice in 2 sentences each.

Task 3: 1 pomodoro — Create an impact analysis template you can fill in under pressure

Use a 3-layer template: Upstream data changes, Semantic model changes, Downstream report impacts, plus “tests to run.”

Deliverable: template; Verification: fill it for “rename column used in measures” and list 5 tests you would run.
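The semantic-model layer of the "rename column" exercise can be checked mechanically: scan measure expressions for references to the old Table[Column] name. A minimal sketch; the measure names and DAX text are hypothetical, and in practice you would pull real definitions from the model's metadata:

```python
import re

def measures_referencing(measures, table, column):
    """Flag measures whose expression text references Table[Column] — candidates to break on rename."""
    pattern = re.compile(rf"{re.escape(table)}\s*\[\s*{re.escape(column)}\s*\]", re.IGNORECASE)
    return [name for name, dax in measures.items() if pattern.search(dax)]

# Hypothetical measure definitions.
measures = {
    "Total Sales": "SUM(Sales[Amount])",
    "Avg Price":   "DIVIDE(SUM(Sales[Amount]), SUM(Sales[Quantity]))",
    "Order Count": "COUNTROWS(Sales)",
}

print(measures_referencing(measures, "Sales", "Amount"))  # ['Total Sales', 'Avg Price']
```

The flagged list feeds the template's middle layer directly, and every hit implies a downstream test: rerun each affected measure's KPI spot-check after the rename.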

Task 4: 1 pomodoro — Spaced review: lifecycle quick prompts

Write 12 prompts (version control, .pbip, pipeline promotion, impact analysis, XMLA endpoint) and answer without notes.

Deliverable: 12 prompts + answers; Verification: mark the 3 weakest answers and rewrite them using constraint-based wording.

Day 4 — Governance and security final pass: align permissions, data security, and trust signals

Study Content
  • Maintain a data analytics solution — Implement security and governance: workspace-level vs item-level access controls.

  • Data-level security: RLS vs CLS/OLS and how it changes user experience and totals.

  • Governance signals: sensitivity labels and endorsements for trusted discovery and reuse (not permissions).

  • Troubleshooting: “can’t access” vs “can access but sees blanks/wrong totals.”

Tasks
Task 1: 2 pomodoros — Build a unified “security + governance” decision tree

Create a decision tree that routes requirements to workspace role, item permission, RLS, OLS/CLS, and labeling/endorsement.

Deliverable: decision tree (1 page); Verification: solve 5 mini-prompts by pointing to the exact branch you used.

Task 2: 1 pomodoro — Write a “blank visuals” troubleshooting flow (fast)

Draft a 6-step flow: access layer → model permission → RLS mapping → relationship propagation → OLS/CLS → measure context assumptions.

Deliverable: flow (6 steps); Verification: apply it to “report opens but nothing shows” and name the top two likely layers.

Task 3: 1 pomodoro — Create a governance rollout checklist for shared assets

Include: labeling policy, endorsement policy, naming/description standards, and “how users find the certified model.”

Deliverable: checklist (10–12 items); Verification: for 3 items, add a measurable verification cue (search behavior, usage adoption, reduced duplicate models).

Task 4: 1 pomodoro — Spaced review + consolidate your “exam answer patterns”

Write 8 reusable answer starters (e.g., “Because the constraint is X, choose Y…”) for security/governance scenarios.

Deliverable: 8 answer starters; Verification: use 3 starters to answer 3 prompts in under 5 minutes.

Day 5 — Full rehearsal: mixed-domain scenarios + final review loop

Study Content
  • Mixed scenario synthesis: Prepare data + Semantic models + Maintain a data analytics solution in one story.

  • Decision discipline: choose tools, place transformations, validate outputs, model for reuse, secure and govern, deploy safely.

  • Error-driven learning: identify your top recurring mistakes and harden them into checklists.

  • Exam readiness: time management and option elimination using constraints.

Tasks
Task 1: 3 pomodoros — Write and solve two end-to-end scenarios (short-answer style)

Create two scenarios: (1) ingestion + transform + validation with late data/drift, (2) shared semantic model + security + deployment regression.

Deliverable: 2 scenario write-ups + your solution steps; Verification: each solution must include at least 5 explicit checks (validation, security, impact, performance).

Task 2: 2 pomodoros — Build a “mistake log” and convert it into prevention rules

List your top 10 mistakes (tool mismatch, validation layer confusion, relationship ambiguity, security assumptions, governance confusion).

Deliverable: mistake log + 10 prevention rules; Verification: each rule must reference one artifact you built (checklist/flow/matrix) as the prevention mechanism.

Task 3: 1 pomodoro — Create a final 1-page DP-600 decision map

Compress your best matrices and flows into one page: tool selection, transform placement, validation flow, model design checks, security layering, promotion gate.

Deliverable: 1-page decision map; Verification: answer 10 rapid prompts using only this page and mark any missing rule.

Task 4: 1 pomodoro — Spaced review + exam pacing plan

Write a pacing plan: how you read prompts, extract constraints, eliminate options, and verify assumptions before committing.

Deliverable: pacing plan (8–12 lines); Verification: apply it to one scenario and show the constraint list you extracted before choosing an answer.