SPLK-1002 Study Plan

This study plan incorporates the Pomodoro Technique and Forgetting Curve strategies to support an efficient, structured, and retention-oriented approach to the SPLK-1002 exam. Each day's tasks are broken down in detail to guide you step by step.

Learning Goals
  1. Primary Goal: Pass the SPLK-1002 exam and gain strong proficiency in Splunk.
  2. Subgoals:
    • Understand foundational Splunk concepts and commands.
    • Master advanced Splunk functionalities like field management, macros, and data models.
    • Apply knowledge in real-world scenarios and practice hands-on.

Learning Duration
  • Total Duration: 4 weeks.
  • Daily Study Commitment: 2–3 hours (3–4 Pomodoro cycles).

Week 1: Foundational Knowledge

Day 1: Introduction to SPLK-1002 and Splunk Basics

Goals:

  • Understand the SPLK-1002 exam structure.
  • Familiarize yourself with Splunk’s core components and interface.

Tasks:

  1. Exam Blueprint:
    • Review the exam objectives and identify key areas of focus.
  2. Splunk Architecture:
    • Study components like indexers, forwarders, search heads, and deployment servers.
    • Understand how these components interact.
  3. Splunk Interface:
    • Navigate the Search & Reporting app.
    • Explore menus like Search, Reports, and Dashboards.
  4. Hands-On Practice:
    • Run a search in the Splunk environment using index=_internal.
    • Inspect fields like sourcetype and source.
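
A minimal sketch of the hands-on step, using the default _internal index that ships with every Splunk installation (head and table simply keep the output short and readable):

      index=_internal | head 100 | table _time, sourcetype, source, host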

Day 2: Transforming Commands for Visualizations (Part 1)

Goals:

  • Understand and apply the stats command for data aggregation.

Tasks:

  1. Study the syntax and functions of stats:
    • Key functions: count, sum, avg, max, min.
  2. Hands-On Practice:
    • Query example: index=_internal | stats count BY sourcetype.
    • Modify the query to group by host or source.
  3. Experiment with combining multiple functions:
    • Query: index=web_logs | stats count, avg(response_time) BY status_code.
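
A slightly fuller sketch that combines several stats functions in one query; web_logs, response_time, and status_code are the assumed example names used above, and AS renames the output columns:

      index=web_logs
      | stats count AS requests, avg(response_time) AS avg_response BY status_code
      | sort - requests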

Day 3: Transforming Commands for Visualizations (Part 2)

Goals:

  • Master chart and timechart commands for visualizations.

Tasks:

  1. Study chart syntax and structure:
    • Query: index=web_logs | chart count BY http_status.
  2. Explore timechart for time-based data:
    • Query: index=web_logs | timechart span=1h count.
  3. Create line and bar visualizations in Splunk.
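
For the visualization task, splitting a timechart by a field yields one series per value, which renders naturally as a line chart (switch the Visualization tab to a bar chart to compare); web_logs and http_status are the assumed names from the queries above:

      index=web_logs | timechart span=1h count BY http_status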

Day 4: Filtering and Formatting Results (Part 1)

Goals:

  • Learn to filter data using search and where.

Tasks:

  1. Study search for basic filtering:
    • Query: index=orders | search quantity > 10.
  2. Use where for complex logic:
    • Query: index=orders | where quantity > 10 AND price < 50.
  3. Hands-On:
    • Test conditions with varying operators (AND, OR, NOT).
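
A sketch for the hands-on step that mixes the boolean operators in a single where clause; orders, quantity, price, and customer_id are assumed example fields:

      index=orders
      | where (quantity > 10 OR price < 50) AND NOT isnull(customer_id)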

Day 5: Filtering and Formatting Results (Part 2)

Goals:

  • Format data using fields and eval.

Tasks:

  1. Practice field inclusion/exclusion:
    • Query: index=employees | fields + name, department.
  2. Create calculated fields:
    • Query: eval total_price = price * quantity.
  3. Test conditional logic with eval:
    • Query: eval price_category = if(price > 100, "High", "Low").
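
Putting the three tasks together in one pipeline (a sketch; sales, product, price, and quantity are assumed example names, and eval accepts several assignments separated by commas):

      index=sales
      | eval total_price = price * quantity, price_category = if(price > 100, "High", "Low")
      | fields + product, total_price, price_category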

Day 6: Correlating Events

Goals:

  • Correlate related events using transaction and eventstats.

Tasks:

  1. Study transaction for event grouping:
    • Query: index=web_logs | transaction startswith="login" endswith="logout" maxspan=10m.
  2. Add contextual data with eventstats:
    • Query: eventstats avg(price) AS avg_price BY category.
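
A sketch that chains the two commands: transaction builds sessions and adds a duration field, eventstats attaches the overall average to every session, and where keeps only longer-than-average sessions (web_logs and session_id are assumed names):

      index=web_logs
      | transaction session_id startswith="login" endswith="logout" maxspan=10m
      | eventstats avg(duration) AS avg_session_duration
      | where duration > avg_session_duration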

Day 7: Review and Practice

Goals:

  • Consolidate Week 1 topics through practice and reflection.

Tasks:

  1. Review key commands and examples (stats, chart, transaction).
  2. Solve practice questions related to filtering and visualizations.
  3. Reflect on challenges and focus on weak areas for improvement.

Week 2: Advanced Skills

Day 1: Creating and Managing Fields (Part 1)

Goal: Understand how to manage fields in Splunk, including field extraction and customization.

Tasks:

  1. Learn Automatic Field Extraction:

    • Open a dataset (e.g., index=_internal) in the Splunk Search bar.
    • Review the fields extracted by default (e.g., _time, source, host) in the Field Sidebar.
  2. Practice Manual Field Extraction:

    • Use the rex command to extract a field dynamically:

      index=web_logs | rex field=_raw "user_id=(?<user_id>\d+)"
      
    • Verify the extracted field appears in the results table.

  3. Test the fields Command:

    • Include specific fields:

      index=web_logs | fields + user_id, session_id
      
    • Exclude specific fields:

      index=web_logs | fields - eventtype, source
      
  4. Hands-On Exercise:

    • Run a query on your dataset, extract new fields with rex, and limit results with fields.
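
One way to combine the steps into a single exercise query (a sketch; web_logs, user_id, and session_id are assumed example names):

      index=web_logs
      | rex field=_raw "user_id=(?<user_id>\d+)"
      | fields + _time, user_id, session_id
      | head 20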

Day 2: Creating and Managing Fields (Part 2)

Goal: Learn how to use calculated fields and field aliases effectively.

Tasks:

  1. Learn Calculated Fields with eval:

    • Create new fields based on existing data:

      index=sales | eval total_price = price * quantity
      
    • Use conditional logic:

      eval price_category = if(price > 100, "High", "Low")
      
    • Verify that the calculated fields are added to your results table.

  2. Create Field Aliases:

    • Map raw field names to user-friendly names via Settings > Fields > Field Aliases, or with a props.conf entry such as:

      FIELDALIAS-src = client_ip AS src

    • Test the alias with a search query:

      search src=192.168.0.1
      
  3. Hands-On Exercise:

    • Use a dataset of your choice.
    • Extract fields with rex, create a calculated field using eval, and verify aliases by searching the mapped field.
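
Once the alias and the calculated field exist, a quick check that both resolve in one search (a sketch; sales, price, and quantity are assumed example names, and src is the alias created above):

      index=sales src=192.168.0.1
      | eval total_price = price * quantity
      | table _time, src, total_price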

Day 3: Tags and Event Types

Goal: Understand how to categorize and label events with tags and event types.

Tasks:

  1. Learn Tags:

    • Assign tags to specific field values via the event's Actions menu or Settings > Tags, then search for the tagged value with the tag::<field> syntax:

      tag::status_code=ClientError

    • Test the tag with a search query:

      tag=ClientError
      
    • Assign multiple tags to a single field value (e.g., web and proxy).

  2. Learn Event Types:

    • Create an event type:

      • Name: ServerError
      • Definition (search string): status_code=500
    • Test the event type:

      search eventtype=ServerError
      
  3. Hands-On Exercise:

    • Assign tags and event types to your dataset, then run searches using those tags or event types to verify correctness.
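
Behind the UI, event types and tags live in configuration files; a minimal sketch assuming the ServerError example above (error is an illustrative tag name):

      # eventtypes.conf
      [ServerError]
      search = status_code=500

      # tags.conf -- tags attach to field/value pairs, here the eventtype field
      [eventtype=ServerError]
      error = enabled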

Day 4: Macros

Goal: Simplify repetitive searches using macros.

Tasks:

  1. Create a Static Macro:

    • Define a macro for a common search query:

      search status_code=404 OR status_code=500
      
    • Save the macro as error_filter.

    • Test the macro:

      `error_filter`
      
  2. Create a Dynamic Macro:

    • Define a macro with parameters:

      search status_code=$status$
      
    • Save the macro as dynamic_error_filter.

    • Test the macro with a parameter:

      `dynamic_error_filter("404")`
      
  3. Hands-On Exercise:

    • Create both static and dynamic macros.
    • Test them on different datasets to confirm correct behavior.
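
The same macros expressed in macros.conf (a sketch; the definitions omit the leading search keyword so the macros can also be dropped into the middle of an existing search):

      # macros.conf
      [error_filter]
      definition = status_code=404 OR status_code=500

      # one argument, referenced as $status$ in the definition
      [dynamic_error_filter(1)]
      args = status
      definition = status_code=$status$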

Day 5: Workflow Actions

Goal: Create actionable links to external resources or dashboards directly from search results.

Tasks:

  1. Learn GET Actions:

    • Configure a GET Workflow Action:
      • Example URI: http://logs.example.com?ip=$src_ip$
    • Test the GET action by clicking on it in your search results.
  2. Learn Search Actions:

    • Create a Workflow Action that initiates a new search:

      search index=web_logs src_ip=$src_ip$
      
    • Test the action to ensure it opens a new search window.

  3. Hands-On Exercise:

    • Configure both a GET and a Search Workflow Action.
    • Use your dataset to confirm the actions work as expected.
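
Workflow actions are normally created under Settings > Fields > Workflow actions; the GET example above corresponds roughly to this workflow_actions.conf sketch (the stanza name and label are illustrative):

      [show_external_logs]
      type = link
      label = View logs for $src_ip$
      fields = src_ip
      display_location = both
      link.method = get
      link.uri = http://logs.example.com?ip=$src_ip$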

Day 6: Review and Practice

Goal: Consolidate Week 2 topics through targeted practice and reflection.

Tasks:

  1. Review Key Commands:
    • Revisit rex, eval, alias, tag, and macro functionalities.
  2. Solve Practical Scenarios:
    • Create macros for repetitive queries in a dataset.
    • Extract fields using rex and create aliases to normalize data.
  3. Reflection:
    • Identify weak areas and revisit challenging concepts.

Day 7: Weekly Quiz and Self-Assessment

Goal: Evaluate your progress and address any knowledge gaps.

Tasks:

  1. Take a self-created quiz covering:
    • Field extraction and management.
    • Tagging and event type creation.
    • Macro and Workflow Action implementation.
  2. Perform hands-on exercises:
    • Apply tags, macros, and Workflow Actions to a real-world dataset.
  3. Review quiz results and focus on areas needing improvement.

Week 3: Data Models and CIM

Day 1: Creating Data Models (Part 1)

Goal: Understand how to create and manage Splunk data models, focusing on event datasets.

Tasks:

  1. Understand Data Models:

    • Review the purpose of data models: organizing and accelerating data for analysis.
    • Study the difference between Event, Search, and Transaction datasets.
  2. Create an Event Dataset:

    • Open Settings > Data Models.

    • Create a new data model titled Web Traffic, with the ID Web_Traffic.

    • Add an event dataset with the following definition:

      index=web_logs
      
    • Save and validate the dataset.

  3. Hands-On Exercise:

    • Query the dataset using:

      | datamodel Web_Traffic search
      

Day 2: Creating Data Models (Part 2)

Goal: Learn to refine data models using Search datasets.

Tasks:

  1. Add a Search Dataset:

    • Open the Web Traffic data model.

    • Add a child dataset (Search dataset) with the definition:

      index=web_logs status_code=200
      
    • Save and validate the dataset.

  2. Refine the Dataset with Calculated Fields:

    • Add a calculated field:

      eval success_rate = count / total_count * 100
      
    • Test the field in your query:

      | datamodel Web_Traffic search | stats avg(success_rate)
      
  3. Hands-On Exercise:

    • Create a search dataset for error events:

      status_code=404 OR status_code=500
      

Day 3: Transaction Datasets

Goal: Understand how to use transaction datasets to correlate events.

Tasks:

  1. Create a Transaction Dataset:

    • Open the Web Traffic data model.
    • Add a transaction dataset grouped by session_id with:
      • Maximum span: 15 minutes.
      • Maximum pause: 5 minutes.
    • Save and validate the dataset.
  2. Test the Transaction Dataset:

    • Query the transaction dataset (assuming you named the dataset transactions):

      | datamodel Web_Traffic transactions search
      
    • Observe how events are grouped by session_id.

  3. Hands-On Exercise:

    • Create a transaction dataset to group user actions starting with "login" and ending with "logout".

Day 4: Data Model Acceleration

Goal: Learn how to accelerate data models for improved performance.

Tasks:

  1. Enable Acceleration:

    • Open the Web Traffic data model.
    • Enable acceleration for the past 7 days.
    • Save and rebuild the model.
  2. Test Acceleration:

    • Query the accelerated dataset:

      | datamodel Web_Traffic search | stats count BY status_code
      
    • Measure the response time compared to unaccelerated datasets.

  3. Hands-On Exercise:

    • Enable acceleration for an error dataset and validate the performance improvement.
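
To confirm the summary is actually being used, tstats can query the accelerated data model directly (a sketch; Web_Traffic is the assumed data model ID, and summariesonly=true restricts the search to accelerated data):

      | tstats summariesonly=true count from datamodel=Web_Traffic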

Day 5: Common Information Model (CIM) Overview

Goal: Understand the purpose and structure of the CIM Add-On.

Tasks:

  1. Learn CIM Basics:

    • Study how CIM provides a unified schema for normalizing data.
    • Understand key CIM concepts: field aliases, tags, and data models.
  2. Review CIM Data Models:

    • Study common CIM models (Authentication, Network Traffic, Web).
  3. Hands-On Exercise:

    • Validate data against the Authentication model:

      | datamodel Authentication search
      

Day 6: Normalizing Data with CIM

Goal: Learn how to normalize data fields and tags to match CIM standards.

Tasks:

  1. Field Normalization:

    • Use props.conf to map raw fields to CIM-compliant names:

      FIELDALIAS-src_ip = client_ip AS src
      
  2. Add Tags:

    • Assign CIM tags to data in tags.conf (replace the eventtype stanza name with one of your own):

      [eventtype=<your_auth_eventtype>]
      authentication = enabled

  3. Validate Tags:

    • Search with the CIM tag:

      tag=authentication
      
  4. Hands-On Exercise:

    • Normalize a dataset using field aliases and CIM tags, then validate it.

Day 7: Review and Comprehensive Practice

Goal: Consolidate knowledge of data models and CIM through practical applications.

Tasks:

  1. Review Key Concepts:
    • Revisit event, search, and transaction datasets.
    • Review data model acceleration and validation.
  2. Practical Challenge:
    • Create a data model for a specific use case (e.g., monitoring network traffic).
    • Normalize data for CIM compliance using aliases and tags.
  3. Reflection:
    • Identify areas needing improvement and plan for further practice.

Week 4: Final Review and Mock Exams

Days 1–3: Comprehensive Review

Goal: Systematically review all topics covered in Weeks 1–3.

Tasks:

  1. Revise foundational topics:
    • Transforming commands (stats, chart, timechart).
    • Filtering and formatting (search, eval, fields).
  2. Review advanced topics:
    • Field management, macros, Workflow Actions.
    • Data models and CIM concepts.
  3. Practice with real-world scenarios:
    • Solve complex queries.
    • Configure a data model and validate it.

Days 4–6: Mock Exams and Practice

Goal: Simulate exam conditions to assess readiness.

Tasks:

  1. Take a full-length practice test.
  2. Analyze results and revisit weak areas.
  3. Perform practical tasks:
    • Normalize data with CIM.
    • Create advanced searches and dashboards.

Day 7: Rest and Final Preparation

Goal: Mentally prepare for the exam.

Tasks:

  1. Light review of key concepts and commands.
  2. Reflect on achievements and boost confidence.