The test basis refers to the set of documents, specifications, or knowledge that provide the foundation for creating test cases. It serves as the source of information for identifying test conditions.
Typical sources include:

- Requirements Documents
- Design Specifications
- User Stories (Agile Projects)
- Risk Analysis Reports
A test condition is an item, feature, or event that can be verified by one or more test cases. It represents what you need to test.
Typical test conditions include:

- Specific System Inputs
- Expected System Responses
- Boundary Values
| Requirement | Test Condition |
|---|---|
| “The system must allow users to log in with a valid username and password.” | Verify login with a valid username and password. |
| | Verify error message for an invalid password. |
| | Verify error message when both fields are empty. |
| “The email field must allow up to 50 characters.” | Verify email input accepts exactly 50 characters. |
| | Verify error message for input over 50 characters. |
A test case is a set of inputs, actions, and expected results that verify whether a specific test condition is met.
Test case design techniques are categorized into three main types:
| Technique | Description | Example |
|---|---|---|
| Equivalence Partitioning | Divide inputs into groups where each group behaves similarly. Test one value per group. | Input range 1–100 → Test 50 (valid), -1 (invalid). |
| Boundary Value Analysis | Test at the edges of input ranges (minimum, maximum, just outside the range). | Input range 1–100 → Test 0, 1, 100, 101. |
| Decision Table Testing | Represent combinations of inputs and corresponding outputs in a table format. | Loan approval system: Income >50K, Credit Score >700 → Approved. |
| State Transition Testing | Verify changes in system states based on inputs or events. | Verify account lock after 3 failed login attempts. |
| Use Case Testing | Derive test cases based on user workflows or scenarios. | Simulate placing an order in an e-commerce app. |
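As a concrete sketch of equivalence partitioning, the helper below classifies candidate inputs for the 1–100 range from the table into partitions and keeps one representative value per partition. The function name and partition labels are illustrative, not part of any standard library.

```python
def partition(value, low=1, high=100):
    """Classify a value into an equivalence partition for the range [low, high]."""
    if value < low:
        return "invalid_below"   # e.g. -1, 0
    if value > high:
        return "invalid_above"   # e.g. 101, 500
    return "valid"               # e.g. any value in 1..100

# Under the equivalence-partitioning assumption, one representative
# value per partition is enough.
representatives = {partition(v): v for v in [-1, 50, 101]}
print(representatives)
# {'invalid_below': -1, 'valid': 50, 'invalid_above': 101}
```

Three test cases then stand in for the entire input space of the field.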
Example: Loan approval rules:
| Income | Credit Score | Loan Approved? |
|---|---|---|
| >50,000 | >700 | Yes |
| ≤50,000 | >700 | No |
| >50,000 | ≤700 | No |
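The loan-approval rules above can be encoded directly as a decision table, with one entry per rule. This is a minimal sketch: the dictionary representation is an assumption for illustration, and the fourth combination (both conditions false), which the table omits, is added here with the obvious outcome.

```python
# Each rule: (income > 50K?, credit score > 700?) -> loan approved?
decision_table = {
    (True, True): True,     # income > 50,000 and score > 700 -> approved
    (False, True): False,   # income too low
    (True, False): False,   # credit score too low
    (False, False): False,  # both conditions fail (implied by the rules)
}

def loan_approved(income, credit_score):
    # Evaluate the two conditions and look up the matching rule.
    return decision_table[(income > 50_000, credit_score > 700)]

print(loan_approved(60_000, 750))  # True
print(loan_approved(40_000, 750))  # False
```

Testing every column of the table means calling the function once per rule.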
| Technique | Description | Example |
|---|---|---|
| Statement Coverage | Ensures that every statement in the code is executed at least once. | Run all lines of code in a simple “if-else” structure. |
| Branch Coverage | Ensures that every decision branch (e.g., true/false in “if” statements) is executed. | Test both true and false outcomes of an “if-else.” |
| Path Coverage | Ensures that all possible paths through the code are executed. | Test all routes in a decision tree. |
Consider the following code:
```python
def check_even(number):
    if number % 2 == 0:
        print("Even")
    print("Done")
```
Test Cases for Statement Coverage:

- number = 2 → the condition is true; both “Even” and “Done” are printed.
- number = 3 → the condition is false; only “Done” is printed.

Observation: number = 2 alone already executes every statement at least once; number = 3 adds nothing for statement coverage, but it does exercise the false branch.
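To make the statement-coverage example checkable programmatically, the sketch below adapts check_even to record which statements ran instead of only printing. The recording list is an addition for testability, not part of the original example.

```python
def check_even(number):
    executed = []             # record which statements ran
    if number % 2 == 0:
        executed.append("Even")
    executed.append("Done")   # runs unconditionally
    return executed

print(check_even(2))  # ['Even', 'Done'] -> every statement executed
print(check_even(3))  # ['Done']         -> the if-body was skipped
```

A single even input reaches every statement; the odd input only matters once you care about branch coverage.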
Consider the following code:
```python
def check_number(number):
    if number > 0:
        print("Positive")
    else:
        print("Non-Positive")
```
Test Cases for Branch Coverage:

- number = 5 → true branch → prints “Positive”.
- number = -1 → false branch → prints “Non-Positive”.

Observation: Both the true and false branches are executed, achieving full branch coverage.
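The same branch-coverage example can be made directly assertable by returning the label instead of printing it; returning a value is an adaptation for testability, not the original code.

```python
def check_number(number):
    if number > 0:
        return "Positive"      # true branch
    return "Non-Positive"      # false branch

# Two inputs, one per branch -> 100% branch coverage for this function.
print(check_number(5))   # Positive
print(check_number(-1))  # Non-Positive
```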
Consider the following code:
```python
def check_grade(score):
    if score >= 90:
        print("Grade A")
    elif score >= 80:
        print("Grade B")
    else:
        print("Grade C")
```
Possible Paths:

- Path 1: score = 95 → score >= 90 is true → “Grade A.”
- Path 2: score = 85 → score >= 90 is false, score >= 80 is true → “Grade B.”
- Path 3: score = 70 → both conditions are false → “Grade C.”

Test Cases for Path Coverage:

- score = 95 → Path 1.
- score = 85 → Path 2.
- score = 70 → Path 3.

Observation: All possible paths are executed, achieving full path coverage.
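Returning the grade instead of printing it makes the three-path example executable as a test; the return-based variant is a sketch adapted for testability.

```python
def check_grade(score):
    if score >= 90:
        return "Grade A"   # Path 1
    elif score >= 80:
        return "Grade B"   # Path 2
    else:
        return "Grade C"   # Path 3

# One input per path -> full path coverage for this function.
for score in (95, 85, 70):
    print(score, check_grade(score))
```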
| Technique | Focus | Purpose |
|---|---|---|
| Statement Coverage | Execute all statements at least once. | Detect unused lines of code. |
| Branch Coverage | Execute all decision branches. | Test both true and false outcomes. |
| Path Coverage | Execute all possible code paths. | Test all combinations of branches. |
Experience-based techniques rely on the knowledge, intuition, and experience of testers to identify defects. These techniques are useful when formal documentation is unavailable or incomplete.
| Technique | Description |
|---|---|
| Error Guessing | Testers “guess” where defects might occur based on experience. |
| Exploratory Testing | Testers simultaneously design and execute tests to explore the system. |
Error guessing example: imagine testing a registration form. An experienced tester might immediately try empty mandatory fields, special characters in the name, or an already-registered email address, because these are common failure points.

Exploratory testing example: suppose you are testing a new mobile banking app. Without a formal script, the tester explores core workflows such as transfers and balance checks, designing and executing tests on the fly and following up on anything that looks suspicious.
Test data refers to the input values or data sets used during test case execution to verify whether the system behaves as expected.
Common types of test data:

- Valid Data: input the system should accept, e.g. user123, Password123.
- Invalid Data: input the system should reject, e.g. -5, 0, 101, abc.
- Boundary Data: values at the edges of a valid range, e.g. 1 and 100 (valid), 0 and 101 (invalid).
- Null or Empty Data: missing or blank input, e.g. an empty required field.
- Special Characters: e.g. !@#$%, <script>. Entering !@#$% in a name field should return an error.

| Scenario | Valid Data | Invalid Data | Boundary Data | Null/Empty Data |
|---|---|---|---|---|
| Login Form | user123 / Pass123 | user123 / wrongpass | N/A | Empty username field |
| Age Input (1–100) | 25 | -1, 101, abc | 1, 100 | Empty age field |
| Email Field | user@example.com | user@@mail.com, invalid | Long email (50 chars) | Empty email |
| Password Strength Check | Pass@123 | 123, password | N/A | Empty password |
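The age-input scenario above can be turned into a small data-driven check. validate_age is a hypothetical validator written for this sketch; in a real project such cases would typically be driven through a framework feature like pytest.mark.parametrize.

```python
def validate_age(raw):
    """Hypothetical validator for an age field accepting 1-100."""
    try:
        age = int(raw)
    except (TypeError, ValueError):
        return False           # non-numeric or empty input
    return 1 <= age <= 100

# (input, expected) pairs covering valid, invalid, boundary, and empty data
cases = [
    ("25", True),                                    # valid
    ("-1", False), ("101", False), ("abc", False),   # invalid
    ("1", True), ("100", True),                      # boundaries
    ("", False),                                     # empty field
]
for raw, expected in cases:
    assert validate_age(raw) is expected, (raw, expected)
print("all age cases passed")
```

Each table row becomes one entry in the case list, keeping test data and expected outcomes side by side.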
Imagine you are testing a user registration form with fields: Name, Age, and Email.
| Field | Test Data Type | Test Data Example | Expected Outcome |
|---|---|---|---|
| Name | Valid | John Doe | Registration successful. |
| Name | Invalid | !@#123 | Error: “Invalid characters.” |
| Age | Valid | 25 | Registration successful. |
| Age | Invalid | -5, abc | Error: “Age must be a number.” |
| Age | Boundary | 1, 100 | Registration successful. |
| Age | Boundary Invalid | 0, 101 | Error: “Age out of range.” |
| Email | Valid | user@example.com | Registration successful. |
| Email | Invalid | user@@mail.com, testmail | Error: “Invalid email format.” |
| Email | Null/Empty | Empty field | Error: “Email cannot be blank.” |
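For the email rows, a minimal format check can be sketched with a regular expression. The pattern below is deliberately simplistic (real email validation is considerably more involved) and the sample addresses are illustrative.

```python
import re

# Naive pattern: something@something.tld - for illustration only.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def valid_email(value):
    # Empty input fails before the pattern is even consulted.
    return bool(value) and EMAIL_RE.fullmatch(value) is not None

print(valid_email("user@example.com"))  # True
print(valid_email("user@@mail.com"))    # False (double @)
print(valid_email("testmail"))          # False (no @)
print(valid_email(""))                  # False (empty field)
```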
Traceability ensures that every requirement is linked to one or more test cases. This guarantees that no requirement goes untested, that coverage gaps are visible, and that the impact of requirement changes can be assessed.
The Requirements Traceability Matrix (RTM) is a document that maps requirements to test cases. It helps verify that all requirements are adequately covered by testing.
| Requirement ID | Requirement Description | Test Case ID | Test Case Description | Test Result |
|---|---|---|---|---|
| R1 | Login with valid credentials | TC_001 | Verify login with valid username/password | Passed |
| R2 | Handle invalid login attempts | TC_002 | Verify error message for invalid login | Failed |
| R3 | Password reset functionality | TC_003 | Verify reset password link works | Passed |
Imagine a project with the three requirements shown in the RTM example above (R1–R3).

In the RTM, each requirement is mapped to the test case that verifies it, together with the latest result; the Failed entry for R2 immediately shows where rework is needed.

By reviewing the RTM, the testing team can ensure that all requirements have corresponding test cases and have been tested.
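A basic RTM review can even be scripted: given a requirement-to-test-case mapping like the table above, flag requirements with no test cases and requirements whose tests failed. The data structures here are an illustrative sketch, and R4 is a hypothetical uncovered requirement added to show the gap check.

```python
rtm = {
    "R1": {"tests": ["TC_001"], "results": {"TC_001": "Passed"}},
    "R2": {"tests": ["TC_002"], "results": {"TC_002": "Failed"}},
    "R3": {"tests": ["TC_003"], "results": {"TC_003": "Passed"}},
    "R4": {"tests": [], "results": {}},  # hypothetical uncovered requirement
}

# Requirements with no linked test case at all (coverage gaps).
uncovered = [req for req, entry in rtm.items() if not entry["tests"]]

# Requirements with at least one failing test case.
failing = [req for req, entry in rtm.items()
           if any(result == "Failed" for result in entry["results"].values())]

print("uncovered:", uncovered)  # ['R4']
print("failing:", failing)      # ['R2']
```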
| Aspect | Description |
|---|---|
| What is RTM? | A matrix linking requirements to test cases. |
| Purpose | Ensure full test coverage of requirements. |
| Benefits | Identifies gaps, supports change management, and improves reporting. |
| Concept | Definition | Example |
|---|---|---|
| Test Data | Input values used during test execution. | Valid, invalid, boundary, null data. |
| Traceability | Mapping requirements to test cases to ensure full coverage. | Requirements Traceability Matrix (RTM). |
This table helps learners quickly compare and memorize the key differences between Black-box, White-box, and Experience-based techniques.
| Technique Type | Key Focus | Requires Code Knowledge? | Example Techniques |
|---|---|---|---|
| Black-box | System’s external behavior (inputs, outputs, responses) | No | Boundary Value Analysis, Equivalence Partitioning, Decision Table |
| White-box | System’s internal logic and control flow | Yes | Statement Coverage, Branch Coverage, Path Coverage |
| Experience-based | Tester’s intuition, experience, and creativity | No | Error Guessing, Exploratory Testing |
Exam Tip: Questions often ask which techniques require code knowledge or are suited for well-documented vs. undocumented systems.
RTM stands for Requirements Traceability Matrix. It maps each requirement to one or more test cases, ensuring complete coverage.
| Requirement ID | Requirement | Test Case ID(s) |
|---|---|---|
| R1 | Users must log in with valid credentials. | TC_001, TC_002 |
| R2 | Password reset must send an email. | TC_003 |
When a requirement changes, you must quickly identify all affected test cases. The RTM enables this by maintaining traceability between requirements, test cases, and test results.
Typical Exam Question:
Q: Which document helps assess the impact of requirement changes on testing activities?
A: Requirements Traceability Matrix (RTM)
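Impact analysis with an RTM amounts to a reverse lookup: when a requirement changes, list every test case mapped to it. The mapping below uses the example IDs from the table above; the helper function name is illustrative.

```python
rtm = {
    "R1": ["TC_001", "TC_002"],  # login with valid credentials
    "R2": ["TC_003"],            # password reset must send an email
}

def affected_tests(changed_requirement, matrix):
    """Return the test cases to re-review when a requirement changes."""
    return matrix.get(changed_requirement, [])

print(affected_tests("R1", rtm))  # ['TC_001', 'TC_002']
```

If R1 changes, both TC_001 and TC_002 must be revisited before the next test cycle.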
What is equivalence partitioning and why is it used in test design?
Equivalence partitioning divides input data into groups where all values are expected to behave similarly, allowing testers to select representative test cases from each group.
Instead of testing every possible input value, testers identify partitions of valid and invalid inputs. If one value from a partition behaves correctly, other values in that same partition are assumed to behave similarly. For example, if a system accepts ages from 18 to 60, partitions might include valid values (18–60) and invalid values (<18 or >60). A few representative test cases can then be selected from each partition. This technique reduces the number of tests while maintaining reasonable coverage.
What is boundary value analysis (BVA) in software testing?
Boundary value analysis is a black-box test design technique that focuses on testing values at the edges of input ranges.
Defects frequently occur at boundary conditions where comparisons or limit checks are implemented in code. BVA therefore selects test cases at minimum values, maximum values, and values just inside or outside those limits. For example, if an input range is 1–100, typical BVA test values might include 0, 1, 2, 99, 100, and 101. These tests verify whether the system correctly handles transitions between valid and invalid input ranges.
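Boundary value selection can be automated for a numeric range. The helper below implements the simpler two-value form of BVA (each boundary plus the value just outside it, matching the 0, 1, 100, 101 example); the function name is illustrative.

```python
def boundary_values(low, high):
    """Two-value BVA: each boundary plus the value just outside it."""
    return [low - 1, low, high, high + 1]

print(boundary_values(1, 100))  # [0, 1, 100, 101]
```

The three-value variant mentioned above would also include low + 1 and high - 1 (here 2 and 99).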
What is the difference between equivalence partitioning and boundary value analysis?
Equivalence partitioning selects representative values from groups of inputs, while boundary value analysis specifically targets the edges between those groups.
Equivalence partitioning reduces the test set by assuming that all values within a partition behave similarly. Testers therefore choose one or a few values from each partition. Boundary value analysis complements this by focusing on the transitions between partitions where errors are more likely. For instance, if the valid input range is 10–50, equivalence partitioning may choose values like 20 or 30, whereas BVA tests values such as 9, 10, 11, 49, 50, and 51. Using both techniques together improves defect detection efficiency.
What is decision table testing used for?
Decision table testing is used to verify system behavior when multiple conditions combine to produce different outcomes.
A decision table lists conditions and corresponding actions in a structured format. Each column in the table represents a rule describing a specific combination of conditions and the expected result. This technique ensures that all meaningful combinations of inputs are tested. It is particularly useful for complex business logic where many rules interact, such as discount calculations or eligibility checks.
What is statement coverage in white-box testing?
Statement coverage measures the percentage of executable statements in code that are executed by the test cases.
In white-box testing, the internal structure of the code is considered when designing tests. Statement coverage is achieved when every executable statement is executed at least once during testing. This metric helps identify untested code sections and provides insight into how thoroughly the codebase has been exercised. However, achieving 100% statement coverage does not guarantee that all logical paths or defects are detected, since some conditional paths may remain untested.
What are experience-based test techniques?
Experience-based techniques rely on the tester’s knowledge, intuition, and past experience to design test cases.
Unlike systematic techniques such as equivalence partitioning, experience-based techniques depend on insights gained from previous projects or domain expertise. Examples include error guessing, exploratory testing, and checklist-based testing. Testers may predict likely failure areas based on common mistakes, complex functionality, or historical defect patterns. These techniques are particularly useful when documentation is limited or when rapid feedback is needed.