ISTQB-CTFL Testing Throughout the Software Development Lifecycle

Detailed list of ISTQB-CTFL knowledge points

Testing Throughout the Software Development Lifecycle Detailed Explanation

2.1 Software Development Lifecycle (SDLC)

The Software Development Lifecycle (SDLC) is a process used to plan, develop, test, and deliver software. Testing plays a critical role at every phase of SDLC to ensure that the final product is reliable, functional, and meets user requirements.

Different SDLC models determine how and when testing is performed. Let’s explore the most common models in detail:

1. Waterfall Model

The Waterfall Model is one of the earliest SDLC models. It follows a linear, sequential approach where each phase depends on the completion of the previous phase.

Phases of the Waterfall Model
  1. Requirements:
    • Gather and document all functional and non-functional requirements.
  2. Design:
    • System architecture and design specifications are created.
  3. Implementation (Coding):
    • Developers write the code based on the design.
  4. Testing:
    • Testing begins after the implementation phase is complete.
  5. Deployment:
    • The product is delivered to users for production use.
Role of Testing in the Waterfall Model
  • Testing occurs late in the lifecycle (after implementation).
  • This means defects may only be discovered at the end, making them expensive and time-consuming to fix.

Example:

  • Imagine you’re building a house. You only inspect the quality of the plumbing and electrical work after the entire house is built.
  • If issues are found, fixing them may require tearing down parts of the house.
Advantages of the Waterfall Model
  • Simple and easy to understand for small projects.
  • Each phase has well-defined outputs and deliverables.
Limitations of the Waterfall Model
  • Late testing increases defect-fixing costs.
  • It’s not suitable for projects with evolving or unclear requirements.
  • Defects introduced in earlier phases (e.g., requirements) are often carried forward undetected, causing major problems later.

2. V-Model (Verification and Validation)

The V-Model improves upon the Waterfall Model by integrating testing into every development phase. It is also known as the Verification and Validation model.

Key Concept of the V-Model
  • For each development phase (on the left side of the "V"), there is a corresponding testing phase (on the right side of the "V").
  • Testing and development happen in parallel, which allows defects to be detected earlier.
Phases of the V-Model
Development Phase → Corresponding Testing Phase
  • Requirements Analysis → Acceptance Testing: validate that the system meets user requirements.
  • System Design → System Testing: verify that the entire system works as specified.
  • Architecture Design → Integration Testing: test interfaces between integrated modules.
  • Coding (Implementation) → Unit Testing: test individual components or units of the code.
Role of Testing in the V-Model
  • Testing begins as early as the requirements phase, ensuring that issues are identified and addressed sooner.
  • Each testing phase focuses on verifying the deliverables from the corresponding development phase.

Example:

  • In the V-Model, as soon as requirements are defined, testers design acceptance tests to validate those requirements.
  • If requirements are unclear, defects can be caught immediately.
Advantages of the V-Model
  • Testing occurs early, reducing the cost of defect fixes.
  • Clear relationships between development and testing phases.
  • Well-suited for projects with stable and clear requirements.
Limitations of the V-Model
  • Still sequential and inflexible to changes once development starts.
  • Not ideal for projects where requirements are likely to evolve.

3. Iterative/Incremental Model

The Iterative/Incremental Model breaks the software development process into small increments. Each increment builds upon the previous one.

Key Features
  • Development is done in small, manageable chunks (increments).
  • Testing occurs after each increment to ensure the new functionality works and integrates well with the previous increment.
How It Works:
  1. The project is divided into small iterations (mini Waterfalls).
  2. Each iteration includes:
    • Requirements → Design → Implementation → Testing → Delivery
  3. New functionality is added incrementally.

Example:

  • Imagine developing a website.
  • In the first iteration, you build the “Login” feature and test it.
  • In the second iteration, you add the “Product Catalog” feature and test both it and the login functionality.
  • This continues until the entire system is built.
Advantages of the Iterative/Incremental Model
  • Allows for early delivery of partial functionality.
  • Testing after each increment ensures that defects are caught early.
  • Easier to incorporate changes based on user feedback.
Limitations of the Iterative/Incremental Model
  • Requires careful planning and design for each increment.
  • Can become complex if many iterations are required.

4. Agile Model

The Agile Model is a modern development approach where development and testing occur iteratively and collaboratively. Agile focuses on incremental delivery and adaptability to changing requirements.

Key Features of the Agile Model
  1. Development is divided into short, time-boxed iterations (usually 1-4 weeks).
  2. Testing occurs continuously throughout the development process.
  3. Emphasis on collaboration:
    • Developers, testers, and business stakeholders work together closely.
    • Frequent feedback is encouraged.
Key Agile Practices
  1. Test-Driven Development (TDD):

    • Write tests before writing the code.
    • Developers first write a failing test case, then write the code to make the test pass.
  2. Continuous Integration (CI):

    • Developers frequently integrate code changes into a shared repository.
    • Automated builds and tests are triggered to detect integration issues early.
  3. Frequent Automated Testing:

    • Tests (e.g., unit, regression tests) are automated to ensure quick feedback.
    • Tools like Selenium and JUnit are often used.
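
The TDD cycle described above (red → green → refactor) can be sketched with Python's built-in unittest module. The Cart class and its methods here are hypothetical examples, not part of the syllabus:

```python
import unittest

# Step 1 (red): write the test before the code exists; it fails at first.
class TestCartTotal(unittest.TestCase):
    def test_total_of_two_items(self):
        cart = Cart()
        cart.add_item(price=10.0)
        cart.add_item(price=5.5)
        self.assertEqual(cart.total(), 15.5)

# Step 2 (green): write just enough code to make the test pass.
class Cart:
    def __init__(self):
        self._prices = []

    def add_item(self, price):
        self._prices.append(price)

    def total(self):
        return sum(self._prices)

# Step 3 (refactor): improve the code while keeping the test green.
```

In a CI pipeline, a test runner would execute this suite automatically on every commit, which is what makes the frequent feedback loop possible.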
Advantages of the Agile Model
  • Testing occurs early and frequently, ensuring defects are caught quickly.
  • Adaptable to changes and user feedback.
  • Faster delivery of working software.
Limitations of the Agile Model
  • Requires a high level of collaboration and communication.
  • Can be challenging to manage for large, complex projects without proper planning.

Summary of SDLC Models

Model: Key Features / When Testing Occurs / Best Use Case
  • Waterfall: Sequential phases; testing occurs after implementation. Best for small, well-defined projects.
  • V-Model: Each development phase has a corresponding testing phase; testing runs in parallel with development. Best for projects with clear, stable requirements.
  • Iterative/Incremental: Development in small increments; testing after each increment. Best for evolving or large projects.
  • Agile: Iterative, collaborative development; testing is continuous during sprints. Best for projects with frequent changes.

2.2 Test Levels

In software testing, there are four main test levels:

  1. Unit Testing
  2. Integration Testing
  3. System Testing
  4. Acceptance Testing

Each level has its purpose, scope, and focus, as outlined below:

1. Unit Testing

Definition

Unit Testing involves testing individual components or modules of the software. It ensures that each unit of the code works as expected.

Key Characteristics
  • Performed at the lowest level of the system (e.g., a single function or method).
  • Focuses on the internal logic of the code.
  • Conducted by developers during the coding phase.
Goals of Unit Testing
  1. Verify the correctness of individual functions, classes, or methods.
  2. Detect and fix low-level defects early.
  3. Test for boundary conditions and error handling.
How Unit Testing is Performed
  • Developers write test cases for each unit of code.
  • They execute the code and compare the actual output to the expected output.
  • Tools like JUnit (Java), NUnit (.NET), and PyTest (Python) automate unit testing.

Example of Unit Testing:
Suppose you have a function to add two numbers:

def add(a, b):
    return a + b

Unit test cases:

  • Input: add(2, 3) → Expected Output: 5
  • Input: add(-1, 1) → Expected Output: 0
  • Input: add(0, 0) → Expected Output: 0
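
These cases can be automated with pytest, one of the tools mentioned above. A minimal sketch (the test names are illustrative):

```python
def add(a, b):
    return a + b

# pytest discovers functions prefixed with "test_" and reports
# each failing assertion individually.
def test_add_two_positives():
    assert add(2, 3) == 5

def test_add_negative_and_positive():
    assert add(-1, 1) == 0

def test_add_zeros():
    assert add(0, 0) == 0
```

Running `pytest` in the project directory executes all three cases and gives immediate feedback on any regression in the unit.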

2. Integration Testing

Definition

Integration Testing verifies the interfaces and interactions between integrated components or systems.

Key Characteristics
  • Focuses on testing data flow and communication between modules.
  • Conducted after unit testing to ensure units work together correctly.
  • Defects often involve interface mismatches or data communication issues.
Types of Integration Testing
  • Top-Down: Testing starts with high-level modules, integrating lower-level modules step by step.
  • Bottom-Up: Testing starts with low-level modules, integrating higher-level modules gradually.
  • Big-Bang: All components are integrated at once, and testing is performed on the complete system.
Examples of Integration Testing
  1. Top-Down Approach:

    • Start by testing the main function and progressively add sub-functions.
    • Example: Test the “Order Checkout” module first, then integrate the “Payment” and “Shipping” modules.
  2. Bottom-Up Approach:

    • Start by testing low-level components like “Payment Calculation” before integrating them into the “Order Checkout” module.
  3. Big-Bang Integration:

    • Test all integrated components (e.g., Login, Catalog, Payment) at once after combining them.
    • This method is faster but makes defect isolation harder.

Real-Life Example:
Suppose you’re building an e-commerce website. Integration testing ensures:

  • The “Add to Cart” feature sends the correct product details to the “Checkout” module.
  • The “Payment” module processes correct amounts from the “Cart” module.
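
A minimal sketch of the second check, verifying that the Cart module passes the correct amount across the interface to a Payment module. The Cart and PaymentGateway classes here are hypothetical stand-ins for the real modules:

```python
class Cart:
    """Hypothetical cart module."""
    def __init__(self):
        self.items = []

    def add(self, name, price, qty=1):
        self.items.append((name, price, qty))

    def total(self):
        return sum(price * qty for _, price, qty in self.items)

class PaymentGateway:
    """Hypothetical payment module."""
    def charge(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return {"status": "approved", "charged": amount}

def checkout(cart, gateway):
    # Integration point: the cart total must flow unchanged into the gateway.
    return gateway.charge(cart.total())

# Integration test: exercise the interface between the two modules.
cart = Cart()
cart.add("Laptop", 999.0)
cart.add("Mouse", 25.0, qty=2)
receipt = checkout(cart, PaymentGateway())
assert receipt == {"status": "approved", "charged": 1049.0}
```

Note that the test targets the hand-off between modules, not the internal logic of either one; that internal logic was already covered by unit tests.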

3. System Testing

Definition

System Testing verifies the entire software system as a whole against the specified requirements.

Key Characteristics
  • Conducted in an environment that mimics production (real-world usage).
  • Focuses on testing end-to-end functionality.
  • Performed by a dedicated testing team (not developers).
Goals of System Testing
  1. Verify the system meets functional and non-functional requirements.
  2. Ensure all components integrate and work together correctly.
  3. Identify defects in the system’s behavior under real conditions.
Types of System Testing
  • Functional Testing: Check that all features (e.g., login, search, payment) work as expected.
  • Non-Functional Testing: Validate performance, usability, security, and compatibility.

Example of System Testing:
For an online banking system, system testing would include:

  • Testing login with valid and invalid credentials.
  • Transferring money between accounts and verifying the transaction.
  • Ensuring the system responds quickly under high user load.

4. Acceptance Testing

Definition

Acceptance Testing is the final level of testing that determines if the system meets business needs and is ready for deployment.

Key Characteristics
  • Performed after system testing.
  • Conducted by end-users or business stakeholders.
  • Focuses on real-world scenarios and business processes.
Goals of Acceptance Testing
  1. Verify the system meets user requirements.
  2. Ensure the software is ready for production.
  3. Validate that the software provides a positive user experience.
Types of Acceptance Testing
  • User Acceptance Testing (UAT): Performed by end-users to ensure the system works as expected.
  • Operational Acceptance Testing: Validates whether the system is ready for deployment (e.g., installation, backups, and failover testing).
Example of Acceptance Testing

Imagine testing a Hospital Management System:

  1. User Acceptance Testing (UAT):

    • Doctors and nurses use the system to ensure:
      • Patient details can be entered and retrieved correctly.
      • Appointment scheduling works smoothly.
  2. Operational Acceptance Testing:

    • IT administrators test:
      • System backups occur successfully.
      • Failover recovery works in case of a server crash.

Summary of Test Levels

  • Unit Testing: Individual components or modules; performed by developers; verifies internal logic and code correctness.
  • Integration Testing: Interaction between components; performed by developers or testers; verifies interfaces and data flow.
  • System Testing: The entire system as a whole; performed by the testing team; validates end-to-end functionality.
  • Acceptance Testing: Business processes and real usage; performed by end-users/stakeholders; ensures the system meets business needs.

2.3 Test Types

The main test types include:

  1. Functional Testing
  2. Non-Functional Testing
  3. Structural Testing
  4. Change-Related Testing

Each test type targets specific aspects of the software to ensure it meets both functional and non-functional requirements.

1. Functional Testing

Definition

Functional Testing verifies that the software behaves as expected and performs its functional requirements correctly.

Key Characteristics
  • Focuses on what the software does (not how it does it).
  • Tests are based on requirements, user stories, or specifications.
  • Uses black-box testing techniques, where the tester does not need to know the internal code structure.
How Functional Testing Works
  1. Identify the functional requirements (e.g., login, payment, search features).
  2. Design test cases for different scenarios:
    • Positive scenarios: Valid inputs and expected outcomes.
    • Negative scenarios: Invalid inputs and error handling.
  3. Execute test cases and compare the actual results with expected results.
Examples of Functional Testing
  • Login with valid credentials (Username: user1, Password: Pass123) → “Login successful”
  • Login with invalid password (Username: user1, Password: wrong) → “Invalid credentials”
  • Search functionality (search term: “Laptop”) → a list of laptops is displayed
  • Checkout payment (valid credit card details) → payment successful, order confirmed
Functional Testing Techniques
  • Equivalence Partitioning: Divide inputs into valid and invalid partitions and test one representative value from each.
  • Boundary Value Analysis: Test inputs at and just beyond the edges of valid ranges.
  • Decision Table Testing: Test combinations of input conditions against their expected outcomes.
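
For example, if a form accepts ages from 18 to 65 inclusive, the first two techniques suggest the following test values. The is_valid_age function is a hypothetical example:

```python
def is_valid_age(age):
    """Accept ages in the inclusive range 18-65 (hypothetical rule)."""
    return 18 <= age <= 65

# Equivalence partitioning: one representative value per partition.
assert is_valid_age(10) is False   # below-range partition
assert is_valid_age(40) is True    # valid partition
assert is_valid_age(70) is False   # above-range partition

# Boundary value analysis: values at and just beyond each edge.
assert is_valid_age(17) is False
assert is_valid_age(18) is True
assert is_valid_age(65) is True
assert is_valid_age(66) is False
```

Seven targeted values cover the input space far more efficiently than testing every age, which is the point of both techniques.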
Tools for Functional Testing
  • Manual: Testers execute scenarios step by step.
  • Automated: Tools like Selenium, QTP (UFT), and TestComplete automate repetitive functional tests.

2. Non-Functional Testing

Definition

Non-Functional Testing evaluates aspects of the software other than functionality, such as performance, usability, security, and compatibility.

Key Characteristics
  • Focuses on how the system works under specific conditions.
  • Ensures the software meets quality attributes like speed, security, and reliability.
Types of Non-Functional Testing
  1. Performance Testing

    • Measures how the system behaves under normal and peak loads.

    • Includes:

      • Load Testing: Simulate a specific number of users to check performance.

      • Stress Testing: Push the system beyond its limits to identify breaking points.

      • Scalability Testing: Test how well the system scales as user load increases.

Example:
An e-commerce website must handle 10,000 concurrent users during a sale. Performance testing verifies page load times and server response.
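
In practice, load is generated with tools like JMeter or Gatling. The idea can be illustrated with a toy Python sketch that fires concurrent requests against a stub handler and checks every response time (the handler and the 1-second target are illustrative assumptions):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    """Stand-in for a real HTTP call to the system under test."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulate server processing time
    return time.perf_counter() - start

# Simulate 100 concurrent users and record each response time.
with ThreadPoolExecutor(max_workers=100) as pool:
    latencies = list(pool.map(handle_request, range(100)))

# Load-test pass criterion: no response slower than the target.
assert max(latencies) < 1.0, "a response exceeded the 1-second target"
```

A real tool additionally ramps users up gradually, sustains the load, and reports percentiles (e.g., 95th-percentile response time) rather than just the maximum.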

  2. Usability Testing

    • Ensures the system is user-friendly and easy to navigate.

    • Focuses on:

      • Interface design.

      • Navigation flow.

      • User satisfaction.

Example:
A banking app is tested to ensure users can transfer funds quickly and intuitively.

  3. Security Testing

    • Identifies vulnerabilities and ensures the system is protected from threats like hacking, unauthorized access, and data breaches.

    • Includes:

      • Penetration Testing: Simulate attacks to find security weaknesses.

      • Authentication Testing: Verify login and access control mechanisms.

Example:
Testing an e-commerce app to ensure payment details are encrypted and only authorized users can view orders.

  4. Compatibility Testing

    • Verifies the system works correctly on:

      • Different browsers (Chrome, Firefox, Safari).

      • Different operating systems (Windows, Mac, Linux).

      • Various devices (mobile phones, tablets, desktops).

Example:
A website is tested on Chrome, Firefox, and Safari to ensure consistent performance and appearance.

Tools for Non-Functional Testing
  • Performance Testing: JMeter, LoadRunner, Gatling.
  • Usability Testing: Manual testing with user feedback.
  • Security Testing: OWASP ZAP, Burp Suite.
  • Compatibility Testing: BrowserStack, Sauce Labs.

3. Structural Testing (White-Box Testing)

Definition

Structural Testing verifies the internal code structure of the software. It ensures that the code behaves as expected and meets quality standards.

Key Characteristics
  • Based on the internal design of the system (requires knowledge of the code).
  • Uses white-box testing techniques, such as:
    • Statement Coverage: Execute all code statements at least once.
    • Branch Coverage: Execute all decision branches (e.g., “if-else” conditions).
    • Path Coverage: Execute all possible paths through the program.
Example of Structural Testing
def check_even(number):
    if number % 2 == 0:
        return "Even"
    else:
        return "Odd"
  • Test Case 1: Input: 2 → Path: True condition → Output: "Even"
  • Test Case 2: Input: 3 → Path: False condition → Output: "Odd"

Structural testing ensures all paths (True and False) are tested.
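
The two test cases above can be executed directly; together they achieve 100% branch coverage of check_even, since each assertion drives the condition down a different branch:

```python
def check_even(number):
    if number % 2 == 0:
        return "Even"   # True branch
    else:
        return "Odd"    # False branch

# Each assertion exercises one branch of the if/else,
# so the pair gives full branch coverage of this function.
assert check_even(2) == "Even"
assert check_even(3) == "Odd"
```

A coverage tool such as Coverage.py can confirm this by reporting which statements and branches the test run actually executed.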

Tools for Structural Testing
  • Code Coverage Tools: JaCoCo, Clover, Coverage.py.

4. Change-Related Testing

Change-Related Testing focuses on verifying the software after changes are made.

Types of Change-Related Testing
  1. Regression Testing

    • Ensures that changes (bug fixes or new features) do not introduce new defects in the existing system.
    • Example: Adding a new feature like “Wishlist” on an e-commerce site should not break the “Add to Cart” feature.
  2. Confirmation Testing (Re-Testing)

    • Verifies that previously reported defects are fixed.
    • Example: If a defect in the login feature is reported and fixed, confirmation testing ensures the fix works as expected.
Tools for Change-Related Testing
  • Automated regression testing tools like Selenium, TestNG, or UFT.
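
The Wishlist example can be sketched as a tiny test suite that separates the two concerns. The feature functions here are hypothetical illustrations:

```python
def add_to_cart(cart, item):
    """Existing feature that must keep working."""
    cart.append(item)
    return cart

def add_to_wishlist(wishlist, item):
    """Newly added feature: no duplicate entries allowed."""
    if item not in wishlist:
        wishlist.append(item)
    return wishlist

# Confirmation tests: verify the behavior of the new/fixed code.
assert add_to_wishlist([], "Laptop") == ["Laptop"]
assert add_to_wishlist(["Laptop"], "Laptop") == ["Laptop"]  # no duplicate

# Regression tests: re-run the unchanged checks on the old feature
# to confirm the new code did not break it.
assert add_to_cart([], "Laptop") == ["Laptop"]
assert add_to_cart(["Laptop"], "Mouse") == ["Laptop", "Mouse"]
```

Because the regression assertions never change, they are ideal candidates for automation and can run after every build.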

Summary of Test Types

  • Functional Testing: Verifies system functionality. Example: the login feature works with valid credentials.
  • Non-Functional Testing: Checks quality attributes (e.g., performance, usability). Example: the system handles 10,000 users simultaneously.
  • Structural Testing: Tests internal code and logic. Example: all decision branches in a function are executed.
  • Change-Related Testing: Ensures changes don’t break existing functionality. Example: a new feature does not impact existing features.

Testing Throughout the Software Development Lifecycle (Additional Content)

1. Mapping: Test Levels vs. Test Types

Understanding which test types are commonly applied at each test level is essential for both real-world testing and ISTQB exam questions.

  • Unit Testing: structural (white-box) and functional testing.
  • Integration Testing: functional and structural testing (especially interface-level white-box testing).
  • System Testing: functional and non-functional testing (e.g., performance, security, usability).
  • Acceptance Testing: functional and non-functional testing (focused on the business context, such as usability or operational readiness).

Quick Summary:

  • Structural (white-box) testing is mostly found in unit and integration levels.
  • Non-functional testing (e.g., performance, security) appears more in system and acceptance levels.
  • Functional testing applies to all levels, depending on scope.

2. Dynamic vs. Static Testing Across the SDLC

ISTQB emphasizes the difference between static testing and dynamic testing, and it's important to understand how both can be applied across the Software Development Lifecycle (SDLC).

Dynamic Testing:

  • Involves executing the software or code.
  • Used in phases like unit, integration, system, and acceptance testing.
  • Examples: Running test cases, checking outputs, performance testing, usability testing.

Static Testing:

  • Involves reviewing documents or code without execution.
  • Can be applied throughout all SDLC models, even before code exists.
  • Examples: Reviewing requirement specs, checking design documents, performing code reviews, using static analysis tools.

Key Message for Exam:

Static testing is not limited to any one test level. It is universally applicable across all SDLC phases and models, including Waterfall, V-Model, Agile, etc.

3. Change-Related Testing: Priority & Automation

Change-related testing includes two main types:

  • Regression Testing: Ensures that recent changes haven’t broken existing features. Performed after every build, deployment, or bug fix.
  • Confirmation Testing: Verifies that a specific defect has been successfully fixed. Performed immediately after the defect fix is deployed.

Regression Testing: Automation Priority

  • Best candidate for automation.
  • Automated tests can be run frequently and consistently.
  • Helps detect side effects or unintended defects after updates.

Typical Exam Question:

“Which type of testing is most suitable to automate after every code deployment?”

Correct Answer: Regression Testing

Confirmation Testing: Manual or Automated?

  • Usually manual, unless the test case is repeatable and automation-ready.
  • Focus is narrow and targeted on a specific fix.

Bonus: Visual Aid – Summary Table

  • Focus: regression testing looks for unintended impact on existing features; confirmation testing verifies a specific defect fix.
  • Timing: regression runs after any change (feature, fix, or configuration); confirmation runs immediately after a reported defect is fixed.
  • Automation suitability: high for regression (stable, reusable, run frequently); medium for confirmation (case by case).
  • Common test levels: regression at integration, system, and acceptance; confirmation at whichever level the defect was found.

Frequently Asked Questions

What is the difference between test levels and test types in ISTQB?

Answer:

Test levels represent stages of testing related to the software development lifecycle, while test types focus on specific objectives or qualities being tested.

Explanation:

Test levels structure testing activities based on the scope of the component being evaluated. Typical levels include component testing, integration testing, system testing, and acceptance testing. Each level verifies different artifacts and responsibilities within the lifecycle. Test types, however, describe the purpose of the test regardless of level. Examples include functional testing, usability testing, performance testing, and security testing. A test type can occur at multiple levels—for instance, performance testing may be conducted at both system and integration levels. Understanding this distinction helps testers plan comprehensive coverage without confusing structural scope with testing objectives.


What is the main objective of component testing?

Answer:

The objective of component testing is to verify that individual software units function correctly in isolation.

Explanation:

Component testing focuses on the smallest testable parts of the application, such as functions, classes, or modules. It is typically performed early in the development process and often executed by developers. The purpose is to validate internal logic, data structures, and individual operations before the components are integrated with others. By identifying defects at this stage, teams can reduce the cost and complexity of fixing issues later in the development lifecycle. Component tests may involve white-box techniques, unit testing frameworks, and controlled test environments that isolate the component from external dependencies.


What distinguishes system testing from acceptance testing?

Answer:

System testing verifies the complete integrated system against specified requirements, while acceptance testing determines whether the system satisfies user or business needs.

Explanation:

System testing evaluates the fully integrated application in an environment similar to production. The focus is on verifying that the system meets functional and non-functional requirements defined in specifications. Acceptance testing occurs later and is typically conducted by customers, users, or stakeholders. Its purpose is to confirm that the delivered system is ready for operational use and meets business expectations. While system testing validates the product from a technical perspective, acceptance testing validates it from a business perspective.


What is maintenance testing and when is it performed?

Answer:

Maintenance testing is performed after changes are made to software to verify that modifications work correctly and have not introduced new defects.

Explanation:

Maintenance testing occurs when an existing system is modified due to bug fixes, enhancements, environment changes, or platform upgrades. It usually includes two major activities: confirmation testing and regression testing. Confirmation testing verifies that a specific defect has been successfully fixed. Regression testing ensures that previously working functionality still behaves correctly after the change. Because software changes can unintentionally affect unrelated components, regression testing is critical to maintaining system stability during updates or releases.

