
ISTQB-CTFL Managing the Test Activities


5.1 Test Planning

What is a Test Plan?

A Test Plan is a critical document that outlines the objectives, scope, strategy, schedule, resources, and deliverables of testing. It serves as a blueprint for the testing process and ensures that all stakeholders have a clear understanding of how testing will be conducted.

Why is Test Planning Important?

  1. Provides Clarity: Clearly defines the goals, scope, and strategy of testing.
  2. Manages Resources: Helps in organizing and allocating resources effectively.
  3. Reduces Risks: Identifies risks early and plans mitigation strategies.
  4. Improves Efficiency: Ensures testing activities are organized and systematic.
  5. Tracks Progress: Provides measurable milestones to track testing progress.

Contents of a Test Plan

Let’s break down the key components of a test plan one by one:

1. Test Objectives
  • Definition: Test objectives define the specific goals to achieve through testing.
  • Purpose: To ensure that testing activities are aligned with project goals.
  • Examples of Test Objectives:
    • Verify that the login feature works with valid and invalid inputs.
    • Ensure the system meets the specified performance requirements.
    • Detect defects in the payment gateway functionality.
    • Achieve 90% requirements coverage before release.
2. Scope of Testing
  • Definition: The scope specifies what will be tested and what will not be tested.
  • Why it’s important: Clearly defining the scope prevents misunderstandings and ensures focus on critical areas.

Example:

| In-Scope            | Out-of-Scope                     |
| ------------------- | -------------------------------- |
| Login functionality | Third-party integration testing  |
| User registration   | Backend database migration       |
| Payment processing  | Mobile app compatibility testing |

3. Test Approach/Strategy

The test approach outlines how testing will be performed, including techniques, tools, and strategies.

Key Elements:

  1. Techniques to be Used:

    • Black-Box Testing: Verify inputs and outputs without focusing on the internal logic.
    • White-Box Testing: Test the code structure and internal logic.
    • Experience-Based Testing: Leverage tester intuition (e.g., exploratory testing).
  2. Tools and Automation Strategy:

    • Identify tools for manual and automated testing.
    • Example Tools: Selenium (for automation), JIRA (for defect tracking), JMeter (for performance testing).
  3. Levels of Testing:

    • Unit Testing → Integration Testing → System Testing → Acceptance Testing.

Example of Test Approach for a Web Application:

  • Functional Testing: Use black-box techniques to verify user-facing functionalities like login, search, and checkout.
  • Non-Functional Testing: Use JMeter for load testing to ensure the system can handle 1,000 concurrent users.
  • Regression Testing: Automate test cases using Selenium to quickly verify unchanged functionality after code changes.
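The black-box idea above (checking inputs against expected outputs, ignoring internal logic) can be sketched in a few lines of Python. The `login` function and its credential rules here are purely illustrative stand-ins for a real system under test, not part of any real application:

```python
# Hypothetical stand-in for the system under test; in black-box testing
# we only care about its inputs and outputs, never its internals.
def login(username: str, password: str) -> bool:
    return username == "alice" and password == "s3cret"

# Black-box test cases: valid credentials, invalid password, empty input.
test_cases = [
    (("alice", "s3cret"), True),   # valid credentials
    (("alice", "wrong"), False),   # invalid password
    (("", ""), False),             # empty input
]

for (user, pwd), expected in test_cases:
    actual = login(user, pwd)
    assert actual == expected, f"login({user!r}, {pwd!r}) returned {actual}"

print("All login test cases passed")
```

In practice such cases would live in a test framework (e.g. pytest) and be automated with a tool like Selenium, as the strategy above suggests.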
4. Resource Requirements
  • Definition: Specifies the resources needed for testing.
  • Resources Include:
    • People: Test manager, test analysts, automation engineers.
    • Tools: Testing tools, defect tracking tools, performance testing tools.
    • Hardware: Servers, machines for testing environments.
    • Software: Test environments, test data, simulators.

Example:

| Resource Type   | Details                          |
| --------------- | -------------------------------- |
| Human Resources | 1 Test Manager, 3 Test Analysts  |
| Tools           | Selenium, JIRA, Postman          |
| Hardware        | 2 Windows servers, 2 Linux VMs   |

5. Schedule and Milestones
  • Definition: A schedule defines the testing phases, timelines, and milestones.
  • Why it’s important: It helps track progress and ensures testing is completed on time.

Example of a Test Schedule:

| Activity                    | Start Date | End Date |
| --------------------------- | ---------- | -------- |
| Test Planning               | Jan 1      | Jan 5    |
| Test Case Design            | Jan 6      | Jan 15   |
| Test Execution              | Jan 16     | Jan 31   |
| Defect Fixing and Retesting | Feb 1      | Feb 7    |
| Test Closure and Reporting  | Feb 8      | Feb 10   |

Milestones:

  • Test Plan Approval: Jan 5.
  • 50% Test Case Design Completion: Jan 10.
  • Test Execution Start: Jan 16.
6. Risk Management
  • Definition: Identifying potential problems (risks) and planning how to address them.

Steps in Risk Management:

  1. Risk Identification: List possible risks that may affect testing.
  2. Risk Assessment: Evaluate each risk based on likelihood and impact.
  3. Risk Mitigation: Define actions to reduce or eliminate risks.

Example of Risk Management:

| Risk                                | Likelihood | Impact | Mitigation Plan                                |
| ----------------------------------- | ---------- | ------ | ---------------------------------------------- |
| Delay in test environment setup     | High       | High   | Arrange alternative testing environments.      |
| Test data is unavailable            | Medium     | High   | Generate synthetic test data or request backup.|
| Team member unavailability          | Medium     | Medium | Cross-train team members for critical tasks.   |

7. Exit Criteria
  • Definition: The conditions that must be met to stop testing and declare it complete.

Examples of Exit Criteria:

  1. Test Coverage: 90% of all requirements are covered by test cases.
  2. Defect Resolution: All critical and major defects are fixed and retested.
  3. Test Execution: All planned test cases have been executed successfully.
  4. Acceptance Criteria: The system meets business and user acceptance requirements.
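Exit criteria like these are deliberately objective, so they can be evaluated mechanically from collected metrics. A minimal sketch, using the thresholds from the examples above (90% coverage, no open critical or major defects, all planned cases executed); the function name and signature are illustrative:

```python
# Sketch: checking whether exit criteria are met from raw metrics.
# Thresholds mirror the examples above; adjust per project.
def exit_criteria_met(coverage_pct: float, open_critical: int,
                      open_major: int, executed: int, planned: int) -> bool:
    return (coverage_pct >= 90.0        # test coverage target reached
            and open_critical == 0      # no critical defects remain
            and open_major == 0         # no major defects remain
            and executed == planned)    # all planned test cases executed

print(exit_criteria_met(92.0, 0, 0, 300, 300))  # True: testing may stop
print(exit_criteria_met(92.0, 1, 0, 300, 300))  # False: a critical defect is open
```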

Example of a Complete Test Plan (Summary)

| Section                 | Description                                      |
| ----------------------- | ------------------------------------------------ |
| Test Objectives         | Ensure login functionality works correctly.      |
| Scope                   | Test login, search, and payment modules only.    |
| Test Approach           | Black-box testing for functionalities.           |
| Resource Requirements   | 2 testers, Selenium, 2 Windows machines.         |
| Schedule and Milestones | Execution: Jan 16–31; Closure: Feb 10.           |
| Risks                   | Delay in environment setup → Use backup server.  |
| Exit Criteria           | 90% test coverage, 0 critical defects remaining. |

Summary of Test Planning

| Component               | Purpose                                        | Example                                      |
| ----------------------- | ---------------------------------------------- | -------------------------------------------- |
| Test Objectives         | Define measurable goals for testing.           | Verify login functionality works correctly.  |
| Scope of Testing        | Specify what will and will not be tested.      | Include: Login; Exclude: Database testing.   |
| Test Approach/Strategy  | Outline techniques, tools, and testing methods.| Use Selenium for automated regression tests. |
| Resource Requirements   | Specify team, tools, and environments.         | 3 testers, JIRA for defect tracking.         |
| Schedule and Milestones | Define timelines and key milestones.           | Test execution: Jan 16–Jan 31.               |
| Risk Management         | Identify and mitigate risks.                   | Mitigation: Use backup test environments.    |
| Exit Criteria           | Define conditions to stop testing.             | 90% coverage, 0 critical defects.            |

5.2 Test Monitoring and Control

What is Test Monitoring and Control?

  • Test Monitoring: The process of collecting, analyzing, and reporting on testing progress and performance metrics to understand where the testing process stands.
  • Test Control: Activities performed to adjust the test plan or processes based on monitoring data to ensure project objectives are met.

1. Test Monitoring

Purpose of Test Monitoring
  • To measure progress toward test objectives.
  • To identify delays, bottlenecks, or problems early in the process.
  • To provide accurate information to stakeholders about the status of testing.
Metrics Used in Test Monitoring

Test metrics are measurable values used to assess testing progress and performance. Below are common metrics:

| Metric                        | Description                                    | Example                                  |
| ----------------------------- | ---------------------------------------------- | ---------------------------------------- |
| Number of Test Cases Executed | How many test cases have been run.             | “200 out of 300 test cases executed.”    |
| Number of Defects Found       | Total defects identified during testing.       | “15 defects identified so far.”          |
| Defects Fixed vs. Open        | How many defects have been fixed vs. pending.  | “10 defects fixed, 5 still open.”        |
| Test Coverage Percentage      | Percentage of requirements or code tested.     | “80% of requirements are tested.”        |
| Defect Detection Rate         | Rate at which defects are being detected.      | “Finding 5 defects per day on average.”  |

How Test Monitoring Works
  1. Collect Metrics: Regularly gather data on test execution, defects, and coverage.
  2. Analyze Metrics: Compare actual progress to planned progress. Identify deviations or delays.
  3. Report Findings: Share updates with stakeholders (e.g., project managers, developers).
Example of Test Monitoring

Let’s assume a test team is working on a project with 100 test cases.

| Day   | Planned Execution | Actual Execution | Defects Found |
| ----- | ----------------- | ---------------- | ------------- |
| Day 1 | 20                | 15               | 5             |
| Day 2 | 40                | 35               | 8             |
| Day 3 | 60                | 55               | 12            |

Observation:

  • Execution is behind schedule by 5 test cases per day.
  • Immediate action is required to reallocate resources or extend testing time.
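Monitoring like this is straightforward to compute once planned and actual figures are collected. A minimal sketch of the comparison above (the data structures are illustrative; real teams would pull these numbers from a test management tool):

```python
# Sketch: comparing planned vs. actual test execution, as in the table above.
planned = {"Day 1": 20, "Day 2": 40, "Day 3": 60}
actual  = {"Day 1": 15, "Day 2": 35, "Day 3": 55}

for day in planned:
    gap = planned[day] - actual[day]
    print(f"{day}: {gap} test cases behind plan")
```

A consistent gap (here, 5 cases every day) is exactly the kind of trend that triggers test control actions.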

2. Test Control

What is Test Control?

Test control involves making decisions and taking corrective actions based on the insights gathered through test monitoring.

Activities in Test Control
  1. Reallocating Resources:

    • If one area is causing delays, assign more testers to it.
    • Example: Add two more testers to the payment module to speed up execution.
  2. Updating Test Plans:

    • Modify the test plan when the scope or risks change.
    • Example: Include an additional test case if a new requirement is added mid-project.
  3. Prioritizing Test Cases:

    • Focus on testing high-risk features or critical functionalities first.
    • Example: Prioritize testing login and payment over non-critical UI fixes.
  4. Rescheduling Test Activities:

    • Adjust deadlines or testing phases if progress is slow.
Example of Test Control

Scenario: Testing is behind schedule because of delayed test environment setup.

Control Actions:

  1. Shift focus to manual testing temporarily.
  2. Use a backup environment to proceed with testing.
  3. Update the test plan with a new execution schedule.
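Control action 3 above (prioritizing test cases) is often just a re-ordering by risk. A minimal sketch, where the test-case names and numeric risk scores are invented for illustration:

```python
# Sketch: reordering a test backlog so high-risk cases run first.
# Risk scores (1 = low, 3 = high) are illustrative.
test_cases = [
    {"name": "UI tooltip alignment", "risk": 1},
    {"name": "Login with valid credentials", "risk": 3},
    {"name": "Payment processing", "risk": 3},
    {"name": "Search filters", "risk": 2},
]

# Highest-risk cases first; Python's sort is stable, so ties keep their order.
ordered = sorted(test_cases, key=lambda tc: tc["risk"], reverse=True)
for tc in ordered:
    print(f"risk {tc['risk']}: {tc['name']}")
```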

5.3 Configuration Management

What is Configuration Management?

Configuration management involves managing changes to software, test artifacts, and related documents to ensure consistency, traceability, and version control.

Key Objectives of Configuration Management

  1. Ensure that changes are implemented systematically.
  2. Maintain version control for all work products (e.g., code, test scripts).
  3. Keep the team synchronized by managing the latest versions of all artifacts.
  4. Avoid confusion caused by untracked changes.

Key Activities in Configuration Management

  1. Version Control

    • Keep track of different versions of code, test cases, and documents.
    • Tools: Git, SVN, or Mercurial for version control.
    • Example: Developers save their code changes in version 1.2. Testers use version 1.2 for testing.
  2. Change Control

    • A systematic process to manage requested changes in work products.
    • Example:
      • A developer requests to add a new “Forgot Password” feature.
      • The change is reviewed, approved, and tracked through a Change Request.
  3. Baseline Management

    • Define a stable version (baseline) of artifacts like requirements, test plans, or code.
    • Purpose: Any future changes to the baseline must go through the change control process.
    • Example: Baseline version 1.0 of the test plan is finalized and saved. Any updates to it will result in version 1.1.
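The "1.0 → 1.1" convention above amounts to a minor-version bump for each approved change request. A tiny sketch of that rule (the major.minor scheme is illustrative; real projects may use richer schemes like semantic versioning):

```python
# Sketch: bumping a baseline's minor version after an approved change request.
def next_version(baseline: str) -> str:
    major, minor = baseline.split(".")
    return f"{major}.{int(minor) + 1}"

print(next_version("1.0"))  # 1.1
print(next_version("2.1"))  # 2.2
```

In practice this bookkeeping is delegated to a version control system such as Git, with baselines marked as tags.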

Example of Configuration Management in Action

Scenario:

  • A test team is testing version 2.0 of a web application.

Configuration Management Activities:

  1. Version Control: Developers check in their code for version 2.0 in a Git repository.
  2. Change Control: If a change request is approved to add new login validation, the updated test plan is tracked as version 2.1.
  3. Baseline Management: The final approved version of the test plan for 2.0 is saved as Baseline 2.0.

Benefits of Configuration Management

  1. Ensures all team members work with the latest versions.
  2. Maintains consistency across software, documents, and test artifacts.
  3. Tracks changes systematically to avoid confusion or duplication.
  4. Supports traceability for audits and compliance.

Summary of Test Monitoring, Control, and Configuration Management

| Activity                 | Purpose                                      | Examples                                         |
| ------------------------ | -------------------------------------------- | ------------------------------------------------ |
| Test Monitoring          | Measure test progress using metrics.         | Track executed test cases, defects, coverage.    |
| Test Control             | Adjust plans and resources to stay on track. | Reallocate resources, update test schedules.     |
| Configuration Management | Manage changes and versions systematically.  | Use Git for version control and baseline tracking.|

5.4 Risk Management

What is Risk?

A risk is a potential problem or uncertain event that may impact the success of the project or product.

Types of Risks:

  1. Product Risks: Related to software quality, functionality, or performance.
    • Example: “The payment gateway may fail to process transactions during peak loads.”
  2. Project Risks: Related to project schedule, budget, or resources.
    • Example: “Key testers might become unavailable during critical testing phases.”

The Risk Management Process

1. Risk Identification
  • The process of identifying potential risks that might impact the project or software.
  • Techniques to identify risks:
    • Brainstorming with the team.
    • Reviewing past project experiences.
    • Using checklists of common risks.

Examples of Risks:

  • Product Risks:
    • Incorrect calculations in a financial system.
    • Slow response time under heavy user load.
  • Project Risks:
    • Test environment not ready on time.
    • Sudden change in requirements mid-project.
2. Risk Assessment
  • Once risks are identified, evaluate them based on:
    • Likelihood: Probability of the risk occurring (Low, Medium, High).
    • Impact: The severity of the risk if it occurs (Low, Medium, High).

Risk Priority = Likelihood × Impact

| Risk                                   | Likelihood | Impact | Priority |
| -------------------------------------- | ---------- | ------ | -------- |
| Test environment delay                 | High       | High   | High     |
| Incorrect tax calculations in software | Medium     | High   | Medium   |
| Tester unavailability                  | Medium     | Medium | Medium   |

3. Risk Mitigation
  • Mitigation involves planning actions to reduce the likelihood or impact of a risk.
  • Example Mitigation Strategies:
    • Risk: “Test environment may be delayed.”
      • Mitigation: Use a backup test environment or cloud-based environments.
    • Risk: “Sudden change in requirements.”
      • Mitigation: Use Agile methodology to handle changes iteratively.
4. Risk Monitoring
  • Continuously track identified risks and evaluate new ones during the project lifecycle.
  • Update the risk plan and mitigation actions as needed.

Example of Risk Management

| Step           | Details                                                          |
| -------------- | ---------------------------------------------------------------- |
| Identification | Test environment may not be ready for execution.                 |
| Assessment     | Likelihood: High, Impact: High → Priority: High                   |
| Mitigation     | Plan to use a cloud environment (AWS or Azure) if delays occur.  |
| Monitoring     | Weekly checks on environment readiness; escalate delays immediately. |

Benefits of Risk Management

  1. Prevents unexpected project delays or failures.
  2. Ensures critical issues are addressed proactively.
  3. Improves test planning and resource allocation.
  4. Enhances stakeholder confidence by reducing uncertainty.

5.5 Defect Management

What is a Defect?

A defect is an issue where the actual behavior of the software deviates from the expected behavior.

Defect Lifecycle

The defect lifecycle represents the stages a defect passes through from identification to resolution.

| Status      | Description                                                       |
| ----------- | ----------------------------------------------------------------- |
| New         | The defect is reported and logged for the first time.             |
| Assigned    | The defect is assigned to a developer for fixing.                 |
| In Progress | The developer is working on resolving the defect.                 |
| Fixed       | The defect has been fixed by the developer.                       |
| Retested    | Testers verify that the defect fix works as expected.             |
| Closed      | The defect has been verified and is now resolved.                 |
| Deferred    | The defect will be fixed in a later release due to low priority.  |
| Rejected    | The defect is invalid, not reproducible, or works as designed.    |
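Defect tracking tools typically enforce which status transitions are legal. The transition map below is one plausible reading of the lifecycle above (for instance, allowing a defect to be reassigned if the fix fails retest); it is an illustrative sketch, not an ISTQB-mandated workflow:

```python
# Sketch: enforcing the defect lifecycle as a set of allowed transitions.
# The map is an illustrative reading of the table above.
TRANSITIONS = {
    "New":         {"Assigned", "Rejected", "Deferred"},
    "Assigned":    {"In Progress"},
    "In Progress": {"Fixed"},
    "Fixed":       {"Retested"},
    "Retested":    {"Closed", "Assigned"},  # reopen if the fix failed retest
}

def move(status: str, new_status: str) -> str:
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"Illegal transition: {status} -> {new_status}")
    return new_status

# Walk one defect through the happy path.
status = "New"
for step in ("Assigned", "In Progress", "Fixed", "Retested", "Closed"):
    status = move(status, step)
print(status)  # Closed
```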

Defect Reporting

A defect report provides a detailed description of the issue to help developers understand and resolve it quickly.

Elements of a Defect Report
| Field              | Description                                                   | Example                             |
| ------------------ | ------------------------------------------------------------- | ----------------------------------- |
| Defect ID          | A unique identifier for the defect.                           | DEF_001                             |
| Summary            | A brief description of the defect.                            | “Login button does not respond.”    |
| Steps to Reproduce | Clear, step-by-step instructions to reproduce the defect.     | 1. Open login page. 2. Click ‘Login’. |
| Actual Result      | What the system does incorrectly.                             | Login button does nothing.          |
| Expected Result    | What the system should do.                                    | Redirect to homepage.               |
| Severity           | Impact of the defect (Critical, Major, Minor).                | Critical                            |
| Priority           | Urgency to fix the defect (P1 = High, P2 = Medium, P3 = Low). | P1                                  |
| Environment        | The environment where the defect was found.                   | Windows 10, Chrome 95.              |

Example of a Defect Report
| Field              | Details                                                                      |
| ------------------ | ---------------------------------------------------------------------------- |
| Defect ID          | DEF_002                                                                      |
| Summary            | “Password reset link throws 404 error.”                                      |
| Steps to Reproduce | 1. Open login page. 2. Click ‘Forgot Password’. 3. Click reset link in email. |
| Actual Result      | 404 error page is displayed.                                                 |
| Expected Result    | Password reset page should open.                                             |
| Severity           | Critical                                                                     |
| Priority           | P1                                                                           |
| Environment        | Windows 10, Chrome 95                                                        |

5.6 Test Reporting

Test reporting provides stakeholders with updates about testing progress, defects, and results.

1. Test Progress Report

Purpose
  • Real-time updates about ongoing testing activities.
Key Metrics
  • Number of test cases executed vs. pending.
  • Number of defects found, resolved, and open.
  • Percentage of test coverage achieved.

Example:

| Metric              | Value         |
| ------------------- | ------------- |
| Test Cases Executed | 120/150 (80%) |
| Defects Found       | 20            |
| Defects Fixed       | 15            |
| Test Coverage       | 85%           |
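The percentages in a progress report are derived directly from raw counts. A minimal sketch using the figures above (the variable names are illustrative):

```python
# Sketch: deriving progress-report figures from raw counts.
executed, planned = 120, 150
defects_found, defects_fixed = 20, 15

execution_rate = executed / planned          # 0.8
open_defects = defects_found - defects_fixed # 5

print(f"Test Cases Executed: {executed}/{planned} ({execution_rate:.0%})")
print(f"Defects Open: {open_defects}")
```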

2. Test Summary Report

Purpose

A summary report is created at the end of testing to communicate overall testing results and outcomes.

Key Contents
  1. Scope of Testing: Features tested and not tested.
  2. Test Execution Summary: Total test cases executed and their status.
  3. Defect Summary: Number of defects, their severity, and resolution status.
  4. Exit Criteria Status: Whether testing goals have been achieved.
  5. Lessons Learned: Insights for improving future testing efforts.

Example of a Defect Summary:

| Severity | Count | Fixed | Open |
| -------- | ----- | ----- | ---- |
| Critical | 5     | 5     | 0    |
| Major    | 10    | 8     | 2    |
| Minor    | 5     | 5     | 0    |

Managing the Test Activities (Additional Content)

1. Static Testing of the Test Plan

While the test plan is a management artifact, it is also a work product subject to static testing techniques.

Why is this important?

The test plan should be reviewed (e.g., walkthrough, inspection) before execution begins to ensure:

  • Clarity of objectives and scope
  • Correct allocation of resources
  • Realistic scheduling
  • Risk management is adequately addressed

Exam Insight:

A test plan is not just written and forgotten—it should be reviewed as part of static testing, just like requirements or design documents.

2. Comparison Table: Test Management vs. Test Execution Activities

This table helps clarify the difference between “management-level” and “execution-level” testing activities—a frequent source of confusion in ISTQB questions.

| Activity Type | Test Management Activity                                    | Test Execution Activity                       |
| ------------- | ----------------------------------------------------------- | --------------------------------------------- |
| Planning      | Define objectives, scope, schedule in the test plan         | Design test cases based on requirements       |
| Control       | Adjust scope, schedule, or resources based on test progress | Execute test cases; retest fixed defects      |
| Monitoring    | Collect and analyze metrics (e.g., coverage, defect trends) | Log defects, record test results              |
| Reporting     | Create progress reports and summaries for stakeholders      | Document status of each test case             |
| Closure       | Ensure exit criteria are met, assess lessons learned        | Finalize defect retests and close test cycles |

ISTQB Exam Tip:
Be prepared to identify which activities belong to test management vs. execution, especially in scenario-based questions.

3. Recap Box – You Should Know

You Should Know – Managing the Test Activities

  • The test plan is a management artifact that should undergo static review.
  • Test monitoring tracks progress using metrics like coverage, execution rate, defect trends.
  • Test control involves adjusting resources and priorities based on monitoring data.
  • Configuration management ensures consistency and version control of test artifacts.
  • Risk management addresses both product and project risks with mitigation plans.
  • Defect management includes lifecycle tracking and defect reporting.
  • Be able to differentiate between test management and test execution activities.

Frequently Asked Questions

What is the purpose of a test plan?

Answer:

A test plan defines the scope, objectives, approach, resources, and schedule for testing activities.

Explanation:

The test plan provides guidance for the testing process and ensures that testing activities align with project goals. It typically includes information about test levels, test types, responsibilities, environments, entry and exit criteria, and risk considerations. By documenting these elements, the test plan helps coordinate the work of testers, developers, and stakeholders.


What is test monitoring in ISTQB?

Answer:

Test monitoring is the activity of tracking testing progress and comparing actual results against the planned objectives.

Explanation:

Metrics such as test case execution progress, defect discovery rates, and coverage levels are used to evaluate whether testing is proceeding as expected. Monitoring allows managers to identify deviations from the plan and determine whether corrective actions are required.


What is configuration management in testing?

Answer:

Configuration management ensures that all test artifacts and system components are properly identified, versioned, and controlled.

Explanation:

Testing involves multiple artifacts such as test cases, scripts, data sets, environments, and software versions. Configuration management tracks these elements so that tests are executed against the correct versions of the system and associated artifacts.


What is defect management?

Answer:

Defect management is the process of identifying, recording, tracking, and resolving defects throughout the testing lifecycle.

Explanation:

When testers discover failures, they create defect reports describing the issue, reproduction steps, and severity. The defect management process tracks these reports through states such as open, assigned, fixed, retested, and closed. Proper defect tracking ensures visibility of issues, supports communication between testers and developers, and helps prioritize fixes based on severity and business impact.

