This study plan is a structured, execution-oriented roadmap designed specifically for candidates preparing for the AI-102: Designing and Implementing a Microsoft Azure AI Solution exam.
Unlike generic reading lists or loosely organized schedules, this plan is built around how people actually learn and retain complex technical knowledge, and around how the AI-102 exam actually evaluates understanding. Its primary purpose is not only to help you pass the exam but to develop correct architectural judgment, which is the core competency AI-102 tests.
This week builds the architectural foundation for the entire AI-102 exam.
The goal is not memorization, but correct service selection, deployment reasoning, and lifecycle management.
Everything you learn later (OpenAI, Agents, Vision, NLP, Search) depends on this week.
By the end of Week 1, you must be able to:
Explain the purpose and boundaries of each major Azure AI service in plain language.
Select the correct Azure AI service for a given business scenario and justify the choice.
Decide how an AI solution should be deployed (API, container, or edge) and explain why.
Describe how an Azure AI solution is monitored, scaled, and cost-controlled in production.
Explain Responsible AI requirements and compliance considerations at an exam-appropriate level.
If you cannot do all five, you are not ready to move on.
Daily structure:
4 to 6 Pomodoro sessions per day
Each Pomodoro is 25 minutes
Every day includes:
New content learning
Active processing (rewriting, comparing, explaining)
Spaced review based on the forgetting curve
No passive reading is allowed.
Build a correct mental model of what Azure AI is and what it is not.
Task:
Read the section explaining what Azure AI is.
Rewrite the concept in your own words focusing on:
Why Azure provides prebuilt AI services
Why most solutions do not require custom model training
Write a short paragraph answering:
Expected outcome:
Task:
Study the four categories of Cognitive Services:
Vision
Speech
Language
Decision
For each category, write:
Typical input
Typical output
One real-world business use case
Do not include API or SDK details.
Expected outcome:
Task:
Study Azure OpenAI Service at a conceptual level.
Write a comparison table:
Azure Cognitive Services vs Azure OpenAI
Deterministic vs generative output
Cost and risk differences
Write a paragraph answering:
Expected outcome:
Task:
Without looking at notes, write a one-page explanation answering:
What Azure AI services exist
When to use Cognitive Services
When to use Azure OpenAI
Then check and correct gaps.
Forgetting curve action:
Learn how the exam tests service selection through scenarios.
Task:
Study the selection criteria:
Performance
Scalability
Cost
Security
Compliance
For each criterion, write:
Expected outcome:
Task:
Take three example scenarios:
E-commerce
Healthcare
Enterprise knowledge base
For each scenario:
Choose services
Explain why alternatives are weaker choices
Expected outcome:
Task:
Write three incorrect architectural decisions, such as:
Using OpenAI without grounding
Using Azure ML instead of Cognitive Services
Explain why each decision is wrong.
Expected outcome:
Task:
Re-explain Day 1 concepts in exam-style language.
Focus on service boundaries and terminology.
Forgetting curve action:
Understand how AI solutions are deployed and why deployment choice matters.
Task:
Study API-based deployment.
Write:
When API deployment is ideal
When it becomes a limitation
Provide one business example.
Expected outcome:
Task:
Study container deployment using AKS or ACI.
Write:
What control containers provide
Why regulated industries prefer this model
Expected outcome:
Task:
Study edge AI deployment.
Write:
Why latency and privacy drive edge AI
Why edge AI is not suitable for all workloads
Expected outcome:
Task:
Review service selection decisions from Day 2.
Add deployment reasoning to each scenario.
Forgetting curve action:
Understand how AI solutions are operated after deployment.
Task:
Study Azure Monitor concepts.
Write:
What metrics matter for AI services
Why monitoring is critical for reliability
Task:
Study Application Insights.
Write:
How it differs from Azure Monitor
What problems it helps diagnose
Task:
Study autoscaling and cost optimization.
Write:
Why AI cost can grow unexpectedly
How batching and caching reduce cost
Task:
Forgetting curve action:
Understand ethical, legal, and governance expectations.
Task:
Study the six Microsoft Responsible AI principles.
Write one concrete failure example per principle.
Task:
Study GDPR and HIPAA at a conceptual level.
Write:
What the system must do
What the system must never do
Task:
Study human oversight patterns.
Write:
When human review is mandatory
Why automation alone is risky
Task:
Forgetting curve action:
Integrate everything into a single architectural mindset.
Task:
Design a full Azure AI solution for:
Include:
Service selection
Deployment
Monitoring
Compliance
Task:
Forgetting curve action:
Lock knowledge into long-term memory.
Task:
Task:
Compare with notes.
Identify weak areas.
Task:
Task:
Forgetting curve action:
This week focuses on generative AI in enterprise systems, specifically how Azure OpenAI is used safely, reliably, and correctly.
The exam does not test creativity.
It tests engineering judgment.
By the end of Week 2, you must be able to:
Explain how Azure OpenAI differs from public OpenAI services.
Decide when generative AI is appropriate and when it is not.
Design a Retrieval-Augmented Generation (RAG) solution at a conceptual level.
Explain why grounding, security, and cost control are mandatory in enterprise AI.
Identify common generative AI failure modes and how Azure mitigates them.
You should be able to explain these clearly without referencing APIs or code.
This week emphasizes:
Conceptual precision
Scenario reasoning
Error analysis
Daily structure:
4 to 6 Pomodoro sessions
Every day includes:
New concept learning
Scenario-based reasoning
Spaced review from Week 1
Understand what Azure OpenAI is designed to do in enterprise environments.
Task:
Study the conceptual description of Azure OpenAI Service.
Write a short explanation answering:
Why Microsoft offers OpenAI models through Azure
Why enterprises prefer Azure OpenAI over public endpoints
Expected outcome:
Task:
Study the types of generative tasks supported:
Text generation
Summarization
Question answering
Write a list of tasks that generative AI should not be trusted to do alone.
Expected outcome:
Task:
Compare:
Cognitive Services output
Azure OpenAI output
Write examples where determinism is required.
Expected outcome:
Task:
Write a half-page explanation:
When Azure OpenAI is appropriate
When it is not appropriate
Forgetting curve action:
Understand prompting as system behavior control, not creative writing.
Task:
Study how prompts guide behavior.
Write:
What prompts can control
What prompts cannot guarantee
Expected outcome:
Task:
Study system-level instructions.
Write examples of:
Safety constraints
Behavioral constraints
Explain why these are critical for compliance.
Expected outcome:
Task:
Write examples of:
Hallucination
Overconfidence
Ambiguous responses
Explain why these failures occur.
Expected outcome:
Task:
Forgetting curve action:
Understand why RAG is essential for enterprise AI.
Task:
Study the limitations of standalone LLMs.
Write a paragraph explaining:
Expected outcome:
Task:
Study the high-level RAG flow:
User query
Retrieval
Generation
Draw a simple conceptual diagram in words.
Expected outcome:
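The query → retrieval → generation flow above can be sketched in a few lines of Python. Everything here is a hypothetical stub (a toy in-memory corpus, keyword overlap instead of a real index, a fake `generate`); the point is only the shape of the pipeline:

```python
# Toy corpus standing in for an enterprise document store.
CORPUS = {
    "doc1": "Refunds are processed within 14 days of the return.",
    "doc2": "Standard shipping takes 3-5 business days.",
}

def retrieve(query: str, corpus: dict) -> list:
    """Toy retrieval: return documents sharing any word with the query."""
    words = set(query.lower().split())
    return [text for text in corpus.values()
            if words & set(text.lower().split())]

def generate(query: str, context: list) -> str:
    """Stub for the LLM call: answer only from retrieved context."""
    if not context:
        return "I don't know."  # grounded systems refuse rather than guess
    return f"Based on: {context[0]}"

answer = generate("How long do refunds take?", retrieve("refunds", CORPUS))
```

Note that when retrieval returns nothing, the system declines to answer instead of letting the model improvise. That behavior is the core of grounding.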
Task:
Study how search indexes provide grounding.
Write:
Expected outcome:
Forgetting curve action:
Understand how Azure controls generative AI risk.
Task:
Study grounding concepts:
Retrieved documents
Citations
Write why grounding reduces hallucination but does not eliminate it.
Task:
Study enterprise data protection concepts.
Write:
What data is not used for model training
Why this matters legally and ethically
Expected outcome:
Task:
Study content filtering at a conceptual level.
Write examples of:
Harmful content
Regulatory risks
Expected outcome:
Forgetting curve action:
Understand why generative AI requires strict operational controls.
Task:
Study how generative AI cost is calculated conceptually.
Write:
Why costs scale unpredictably
Why cost monitoring is essential
Task:
Study latency considerations.
Write:
Why generative AI is slower than traditional APIs
How this affects architecture decisions
Task:
Write examples of:
Fallback responses
Graceful degradation
Explain why systems must handle model failure.
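A fallback pattern can be sketched as follows. `call_model` is a hypothetical stub that simulates an outage; a real system would wrap the actual service call the same way:

```python
FALLBACK = "Sorry, I can't answer right now. A support agent will follow up."

def answer_with_fallback(question: str) -> str:
    try:
        return call_model(question)
    except (TimeoutError, ConnectionError):
        return FALLBACK  # graceful degradation: predictable, safe response

def call_model(question: str) -> str:
    """Hypothetical stub simulating a model endpoint outage."""
    raise TimeoutError("model endpoint did not respond")

reply = answer_with_fallback("Where is my order?")
```

The user still gets a coherent response, and the failure is contained rather than propagated.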
Forgetting curve action:
Apply generative AI concepts to real exam-style scenarios.
Task:
Design a generative AI solution for:
Include:
Whether RAG is needed
How data is protected
How cost is controlled
Task:
Forgetting curve action:
Convert knowledge into long-term memory.
Task:
Write everything you remember about:
Azure OpenAI
RAG
Safety
Cost
Task:
Task:
Task:
Forgetting curve action:
This week focuses on agentic AI systems, which are tested in AI-102 as goal-driven, multi-step, action-capable solutions.
The exam does not expect you to build agents from scratch.
It expects you to recognize when agents are required, how they are structured, and how risk is controlled.
By the end of Week 3, you must be able to:
Clearly distinguish between:
A chatbot
A RAG-based system
An agentic solution
Explain why tool calling is essential for reliable agent behavior.
Describe the core components of an agentic system:
Model
Tools
Memory
Orchestration
Identify risks introduced by agents and explain how they are mitigated.
Decide when an agent is justified and when it is unnecessary or harmful.
You should be able to explain these decisions in exam-style reasoning.
This week emphasizes:
Systems thinking
Step-by-step reasoning
Failure analysis
Daily structure:
4 to 6 Pomodoro sessions
Every day includes:
Conceptual learning
Decomposition of agent behavior
Spaced review of generative AI concepts from Week 2
Understand what an agentic solution is in the exam context, and what it is not.
Task:
Study the definition of an agentic solution in Azure AI.
Write a clear explanation answering:
How an agent differs from a chatbot
How an agent differs from a RAG system
Expected outcome:
Task:
Compare:
Single-response systems
Goal-driven multi-step systems
Write examples where a single response is insufficient.
Expected outcome:
Task:
Study how AI-102 frames agents as workflow orchestrators.
Write:
What the exam expects you to know
What the exam does not test
Expected outcome:
Task:
Write a short explanation:
When an agent is the correct architectural choice
When it is a mistake
Forgetting curve action:
Understand tools as the foundation of reliable agent behavior.
Task:
Study why agents must call tools.
Write:
Why text-only reasoning is unreliable
Why tools provide authoritative results
Expected outcome:
Task:
Study different tool categories:
Retrieval tools
Action tools
Computation tools
For each category, write one enterprise example.
Expected outcome:
Task:
Study why structured inputs and outputs matter.
Write:
How schemas reduce ambiguity
Why validation is mandatory
Expected outcome:
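Schema validation can be illustrated with a small sketch. The tool arguments and field names here are invented for illustration; the principle is that a tool call is checked against a schema before it is ever executed:

```python
# Hypothetical schema for a refund tool: required fields and expected types.
SCHEMA = {"order_id": str, "refund_amount": float}

def validate_tool_args(args: dict, schema: dict) -> list:
    """Return a list of validation errors; empty means the call is safe to run."""
    errors = [f"missing field: {k}" for k in schema if k not in args]
    errors += [f"wrong type for {k}" for k, t in schema.items()
               if k in args and not isinstance(args[k], t)]
    return errors

good = validate_tool_args({"order_id": "A-17", "refund_amount": 25.0}, SCHEMA)
bad = validate_tool_args({"order_id": 17}, SCHEMA)  # wrong type, missing field
```

A call that fails validation is rejected before any side effect occurs, which is why validation is mandatory rather than optional.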
Forgetting curve action:
Understand how agents maintain continuity and context.
Task:
Study session-level memory.
Write:
What information must persist during a task
What should not persist
Expected outcome:
Task:
Study long-term memory concepts.
Write:
When storing user preferences is useful
Why long-term memory introduces privacy risk
Expected outcome:
Task:
Study why agent actions must be traceable.
Write:
What should be logged
What must not be logged
Expected outcome:
Forgetting curve action:
Understand how agent workflows are executed safely.
Task:
Study the standard agent loop:
Interpret
Plan
Act
Observe
Iterate
Rewrite this loop in your own words.
Expected outcome:
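The interpret-plan-act-observe loop above can be sketched with a hard iteration cap, which also previews the stopping-condition concept. `plan` and `act` are hypothetical stubs:

```python
def run_agent(goal: str, max_steps: int = 5) -> str:
    observations = []
    for _ in range(max_steps):             # guardrail: never loop forever
        action = plan(goal, observations)  # interpret goal, plan next step
        if action == "DONE":
            return "completed"
        observations.append(act(action))   # act, then observe the result
    return "stopped: step limit reached"   # safe, enforced termination

def plan(goal, observations):
    """Hypothetical stub: finish once at least one observation exists."""
    return "DONE" if observations else "lookup"

def act(action):
    """Hypothetical stub for a tool call."""
    return f"result of {action}"

status = run_agent("check order status")
```

The cap guarantees termination even if the planner never decides it is done.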
Task:
Study failure scenarios:
Tool timeout
Partial results
Invalid responses
Write how an agent should respond safely.
Expected outcome:
Task:
Study guardrails and stopping conditions.
Write:
Why agents must not loop indefinitely
How termination is enforced
Expected outcome:
Forgetting curve action:
Understand why agents amplify risk and how Azure mitigates it.
Task:
Study how agents can be manipulated.
Write:
Why retrieved content is untrusted
How tool access must be restricted
Expected outcome:
Task:
Study identity-based access for tools.
Write:
Why least privilege is essential
Why agents must respect user permissions
Expected outcome:
Task:
Study approval gates.
Write:
Which actions require human approval
Why full automation is unsafe
Expected outcome:
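An approval gate can be sketched as a simple dispatch rule: high-impact actions are queued for a human reviewer instead of executed automatically. The action names are made up for illustration:

```python
# Hypothetical set of actions that must never run without human approval.
REQUIRES_APPROVAL = {"issue_refund", "delete_record"}

def dispatch(action: str, review_queue: list) -> str:
    if action in REQUIRES_APPROVAL:
        review_queue.append(action)  # park for a human reviewer
        return "pending approval"
    return "executed"                # low-risk actions run automatically

queue = []
status = dispatch("issue_refund", queue)
```

The key design decision is the boundary of `REQUIRES_APPROVAL`: it encodes which consequences the organization refuses to fully automate.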
Forgetting curve action:
Apply agentic concepts to realistic exam scenarios.
Task:
Design an agent for:
Decide:
Which tools are required
Which steps are automated
Where human review is needed
Task:
Write exam-style justifications for:
Why an agent is used
Why alternatives are insufficient
Forgetting curve action:
Convert understanding into long-term memory.
Task:
Task:
Task:
Task:
Forgetting curve action:
This week focuses on computer vision services in Azure, with an emphasis on service selection, capability boundaries, and compliance-aware design.
The exam does not test image-processing theory.
It tests whether you can choose the correct vision service and apply it appropriately.
By the end of Week 4, you must be able to:
Distinguish between Azure Computer Vision, Custom Vision, and Document Intelligence.
Select the correct vision service for image analysis, OCR, and document processing scenarios.
Explain when prebuilt models are sufficient and when custom models are required.
Identify compliance and ethical risks related to visual data, especially face recognition.
Explain how vision solutions are integrated, deployed, and scaled in enterprise systems.
You should be able to justify all choices in exam-style scenario questions.
This week emphasizes:
Input–output thinking
Service boundary clarity
Scenario-driven differentiation
Daily structure:
4 to 6 Pomodoro sessions
Every day includes:
New vision concepts
Comparative reasoning between services
Spaced review from Weeks 2 and 3
Understand what Azure Computer Vision can and cannot do.
Task:
Study the core purpose of Azure Computer Vision.
Write a clear explanation covering:
What types of images it analyzes
What kinds of insights it returns
Avoid implementation details.
Expected outcome:
Task:
Study the following features conceptually:
Image analysis
Object detection
Image tagging
Image captioning
For each feature, write:
Typical input
Typical output
One business use case
Expected outcome:
Task:
Write:
Three problems Azure Computer Vision solves well
Three problems it should not be used for
Explain why these limitations exist.
Expected outcome:
Task:
Forgetting curve action:
Understand text extraction from images and documents.
Task:
Study what OCR does conceptually.
Write examples of:
Printed text extraction
Handwritten text extraction
Explain why OCR is critical for document automation.
Expected outcome:
Task:
Compare OCR with general image analysis.
Write:
Why OCR is not just “reading text from images”
Why structured output matters
Expected outcome:
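The difference structured output makes can be shown with an illustrative result shape. The field names below are hypothetical, loosely modeled on typical OCR responses, not a real API contract:

```python
# OCR returns structured lines with positions, not one flat string.
ocr_result = {
    "lines": [
        {"text": "INVOICE #1042", "bounding_box": [40, 30, 320, 60]},
        {"text": "Total: $89.00", "bounding_box": [40, 400, 220, 430]},
    ]
}

# Structure lets downstream code target specific regions, e.g. the total line.
total_line = next(line["text"] for line in ocr_result["lines"]
                  if line["text"].startswith("Total"))
```

A flat text blob would force downstream code to guess where the total is; positions and line boundaries make document automation reliable.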
Task:
Write examples of:
Low-quality image issues
Language and layout challenges
Explain how these affect system design.
Expected outcome:
Forgetting curve action:
Understand structured document extraction.
Task:
Study what Document Intelligence is designed to do.
Write:
Why it exists separately from OCR
What “structured extraction” means
Expected outcome:
Task:
Study prebuilt models (invoices, receipts, IDs).
Compare with custom document models.
Write when each is appropriate.
Expected outcome:
Task:
Write how Document Intelligence fits into:
Accounts payable
Contract processing
Compliance workflows
Expected outcome:
Forgetting curve action:
Understand when and why custom vision models are required.
Task:
Study limitations of prebuilt vision models.
Write scenarios where domain-specific recognition is required.
Expected outcome:
Task:
Study training data requirements at a conceptual level.
Write:
Why labeling quality matters
Why dataset bias affects results
Expected outcome:
Task:
Write:
Why custom models cost more to maintain
When maintenance effort is justified
Expected outcome:
Forgetting curve action:
Understand sensitive vision capabilities and governance requirements.
Task:
Study the difference between face detection and face recognition at a conceptual level.
Write:
What face detection does
What face recognition does
Explain why the distinction matters legally.
Expected outcome:
Task:
Study privacy and bias concerns.
Write:
Why face recognition is high-risk
When it should be avoided
Expected outcome:
Task:
Write:
Required controls for face-related systems
Role of human oversight
Expected outcome:
Forgetting curve action:
Integrate vision services into complete systems.
Task:
Design a vision-based solution for:
Decide:
Which vision service is used at each step
How results are stored and validated
Task:
Forgetting curve action:
Lock vision knowledge into long-term memory.
Task:
Task:
Task:
Task:
Forgetting curve action:
This week focuses on language understanding and speech processing using Azure AI services.
The exam does not test linguistic theory.
It tests whether you can select the correct language service, understand capability boundaries, and integrate NLP safely into enterprise systems.
By the end of Week 5, you must be able to:
Distinguish between Azure AI Language services and Azure OpenAI for text-based tasks.
Select the correct NLP capability for sentiment analysis, entity extraction, classification, and summarization.
Understand custom language models and when they are justified.
Explain speech-to-text and text-to-speech use cases and constraints.
Design NLP solutions that are secure, scalable, and appropriate for enterprise use.
You should be able to justify every service choice using exam-style reasoning.
This week emphasizes:
Intent-to-service mapping
Boundary awareness between deterministic NLP and generative AI
Scenario-driven differentiation
Daily structure:
4 to 6 Pomodoro sessions
Every day includes:
New NLP concepts
Service comparison exercises
Spaced review from Week 4 (Vision) and earlier weeks
Understand what Azure AI Language services are designed to do.
Task:
Study the role of Azure AI Language services.
Write a concise explanation covering:
What types of text it processes
What kinds of structured outputs it produces
Avoid references to APIs or SDKs.
Expected outcome:
Task:
Study the following capabilities conceptually:
Language detection
Sentiment analysis
Key phrase extraction
Named entity recognition
For each capability, write:
Typical input
Typical output
One realistic business scenario
Expected outcome:
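To anchor the input/output exercise, here is an illustrative shape for deterministic NLP output: structured fields with scores, not free text. The field names are hypothetical but typical of language-analysis responses:

```python
analysis = {
    "language": "en",
    "sentiment": {"label": "negative", "score": 0.91},
    "key_phrases": ["late delivery", "damaged box"],
    "entities": [{"text": "Seattle", "category": "Location"}],
}

# Structured output can drive business rules directly, with no interpretation.
needs_escalation = (analysis["sentiment"]["label"] == "negative"
                    and analysis["sentiment"]["score"] > 0.8)
```

This is the contrast you will draw with generative output later in the week: a rule like `needs_escalation` cannot be written reliably against free-form prose.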
Task:
Write:
Three problems Azure AI Language solves well
Three problems it should not be used for
Explain why these limitations exist.
Expected outcome:
Task:
Forgetting curve action:
Understand how text can be categorized and structured.
Task:
Study text classification at a conceptual level.
Write:
What classification does
How it differs from sentiment analysis
Expected outcome:
Task:
Study custom text classification.
Write:
When prebuilt models are insufficient
Why domain-specific labels matter
Expected outcome:
Task:
Study entity extraction use cases.
Write:
How entities support downstream automation
Why structured extraction is valuable
Expected outcome:
Forgetting curve action:
Clearly separate deterministic NLP from generative AI.
Task:
Compare:
Azure AI Language outputs
Azure OpenAI outputs
Write examples where determinism is required.
Expected outcome:
Task:
Take three scenarios:
Customer feedback analysis
Legal document review
Internal knowledge summarization
Choose the correct service for each and explain why.
Expected outcome:
Task:
Study scenarios combining Language services and OpenAI.
Write:
Why hybrid approaches improve reliability
Where boundaries must be enforced
Expected outcome:
Forgetting curve action:
Understand speech processing capabilities and constraints.
Task:
Study speech recognition at a conceptual level.
Write:
Typical inputs and outputs
Common enterprise use cases
Expected outcome:
Task:
Study speech synthesis.
Write:
Why voice output is used
Accessibility and UX considerations
Expected outcome:
Task:
Write:
Accent and noise challenges
Latency considerations
Explain how these affect system design.
Expected outcome:
Forgetting curve action:
Understand governance requirements for text and speech data.
Task:
Study how NLP systems handle sensitive data.
Write:
What constitutes PII
Why text data is high-risk
Expected outcome:
Task:
Study retention considerations.
Write:
What data should be stored
What data should be discarded
Expected outcome:
Task:
Write:
When NLP outputs require review
How audits support accountability
Expected outcome:
Forgetting curve action:
Integrate language and speech into complete systems.
Task:
Design an NLP solution for:
Decide:
Which language capabilities are used
Whether generative AI is involved
How outputs are validated
Task:
Forgetting curve action:
Convert NLP knowledge into long-term memory.
Task:
Task:
Task:
Task:
Forgetting curve action:
This final week focuses on enterprise knowledge mining, where AI systems extract, enrich, index, and retrieve information from large volumes of documents.
This is where search, language, vision, and generative AI come together.
The exam heavily uses knowledge mining scenarios to test whether you can design end-to-end AI solutions, not isolated features.
By the end of Week 6, you must be able to:
Explain what knowledge mining is and why it is critical in enterprise AI systems.
Design a document ingestion and enrichment pipeline conceptually.
Explain how unstructured data becomes searchable and usable.
Decide when to combine search with NLP, vision, and generative AI.
Answer full AI-102 case-study questions with correct architectural reasoning.
You should be able to reason across all six AI-102 domains fluently.
This week emphasizes:
Integration of all prior knowledge
End-to-end system thinking
Exam-style synthesis
Daily structure:
4 to 6 Pomodoro sessions
Every day includes:
Knowledge mining concepts
Cross-domain integration
Spaced review from Weeks 1–5
Understand what knowledge mining is and why enterprises need it.
Task:
Study the concept of knowledge mining.
Write a clear explanation answering:
What problem knowledge mining addresses
Why traditional databases are insufficient
Expected outcome:
Task:
Study differences between structured and unstructured data.
Write:
Examples of each
Why most enterprise data is unstructured
Expected outcome:
Task:
Write explanations for:
Internal document search
Compliance and audit support
Customer support knowledge bases
Expected outcome:
Task:
Forgetting curve action:
Understand how raw documents become searchable knowledge.
Task:
Study how documents enter a knowledge mining system.
Write:
Common document sources
Why ingestion must be automated
Expected outcome:
Task:
Study enrichment at a conceptual level.
Write:
What enrichment adds to raw text
Why enrichment improves retrieval quality
Expected outcome:
Task:
Write how:
OCR extracts text
NLP extracts entities and key phrases
Explain why multiple AI services are chained.
Expected outcome:
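The chaining idea can be sketched as a pipeline where each stage adds fields to a document. Both stages are hypothetical stubs (the "NER" here just picks capitalized tokens); only the pipeline shape matters:

```python
def ocr_step(doc: dict) -> dict:
    """Stub OCR: attach extracted text."""
    doc["text"] = "Contract signed by Contoso Ltd on 2024-03-01."
    return doc

def entity_step(doc: dict) -> dict:
    """Stub NER for illustration: treat capitalized tokens as entities."""
    doc["entities"] = [w for w in doc["text"].split() if w[0].isupper()]
    return doc

def enrich(doc: dict) -> dict:
    for step in (ocr_step, entity_step):  # services are chained in order
        doc = step(doc)
    return doc

enriched = enrich({"id": "doc-1"})
```

Each service consumes what the previous one produced, which is why enrichment is designed as an ordered pipeline rather than independent calls.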
Forgetting curve action:
Understand how enriched data becomes searchable.
Task:
Study what a search index represents.
Write:
Why indexes are not raw document stores
What fields matter for search
Expected outcome:
Task:
Study:
Keyword search
Semantic search
Write when each is appropriate.
Expected outcome:
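The limits of keyword search can be made concrete with a toy index. Exact-term matching misses a document about "annual leave" when the user asks about "vacation policy"; that gap is exactly what semantic search closes. This sketch is keyword-only, with a made-up two-document corpus:

```python
DOCS = {
    "hr-01": "Annual leave requests must be submitted two weeks ahead.",
    "it-07": "Password resets are handled by the IT service desk.",
}

def keyword_search(query: str) -> list:
    """Toy keyword search: match documents sharing any exact term."""
    terms = set(query.lower().split())
    return [doc_id for doc_id, text in DOCS.items()
            if terms & set(text.lower().rstrip(".").split())]

misses = keyword_search("vacation policy")   # no shared terms with hr-01
hits = keyword_search("password resets")
```

Keyword search is cheap and predictable; semantic search is appropriate when users phrase intent differently from the documents.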
Task:
Write:
Why poor indexing causes poor answers
Why search quality impacts downstream AI
Expected outcome:
Forgetting curve action:
Understand how generative AI uses retrieved knowledge safely.
Task:
Write:
Why generative models cannot be trusted as knowledge sources
How retrieval grounds responses
Expected outcome:
Task:
Study how retrieved data is summarized and reformulated.
Write:
Why citations matter
Why hallucination is still possible
Expected outcome:
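Carrying citations through the answer can be sketched as follows; `summarize` is a hypothetical stub standing in for the LLM call, and the document shape is invented for illustration:

```python
def answer_with_citations(query: str, retrieved: list) -> dict:
    if not retrieved:
        return {"answer": "No supporting documents found.", "citations": []}
    return {
        "answer": summarize(retrieved),
        "citations": [doc["id"] for doc in retrieved],  # traceable sources
    }

def summarize(docs: list) -> str:
    """Stub for the generative summarization step."""
    return " ".join(doc["text"] for doc in docs)

result = answer_with_citations(
    "refund window?",
    [{"id": "policy-3", "text": "Refunds are accepted within 30 days."}],
)
```

Citations make every claim in the answer checkable against a source document, which is why they matter even though hallucination is still possible inside the summarization step.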
Task:
Write examples of:
Missing documents
Outdated content
Incorrect enrichment
Explain mitigation strategies.
Expected outcome:
Forgetting curve action:
Understand governance requirements in enterprise knowledge systems.
Task:
Study document-level security concepts.
Write:
Why search must respect user permissions
Risks of overexposed knowledge
Expected outcome:
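Security trimming can be sketched as filtering results by the caller's group membership before anything is returned. The ACL field and group names are hypothetical:

```python
RESULTS = [
    {"id": "handbook", "allowed_groups": {"all-staff"}},
    {"id": "salaries", "allowed_groups": {"hr"}},
]

def trim_results(results: list, user_groups: set) -> list:
    """Return only documents the caller is entitled to see."""
    return [r for r in results if r["allowed_groups"] & user_groups]

visible = trim_results(RESULTS, {"all-staff"})  # handbook only
```

Without this step, a knowledge system can surface documents a user was never authorized to open, which is the "overexposed knowledge" risk the task asks about.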
Task:
Study PII and sensitive document handling.
Write:
What should be indexed
What should be excluded or redacted
Expected outcome:
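Redaction before indexing can be illustrated with a minimal sketch. Real PII detection uses trained models, not a single regex; this only shows the "redact, then index" step, using a US-style SSN pattern as a stand-in:

```python
import re

# Illustrative pattern only: matches a US-style SSN like 123-45-6789.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str) -> str:
    """Replace matched PII with a placeholder before the text is indexed."""
    return SSN_PATTERN.sub("[REDACTED]", text)

clean = redact("Employee SSN 123-45-6789 on file.")
```

The design principle is that sensitive values are removed before the document ever reaches the index, so no later query can recover them.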
Task:
Write:
Why knowledge systems must be auditable
What logs are required
Expected outcome:
Forgetting curve action:
Integrate all AI-102 domains into a single solution mindset.
Task:
Design a complete AI solution for:
Include:
Service selection
Ingestion and enrichment
Search and retrieval
Generative AI usage
Security and compliance
Task:
Write answers as if responding to a case-study question:
Why each service is used
Why alternatives are rejected
Forgetting curve action:
Lock all AI-102 knowledge into long-term memory and exam readiness.
Task:
Write everything you remember about:
Task:
Identify remaining weak topics.
Create a focused revision list.
Task:
Task:
Mentally walk through:
Forgetting curve action: