Software Testing Cheat Sheet

Core Concepts
What is Software Testing?
Process to verify software meets requirements and works as expected. Identifies bugs, gaps, and missing functionality before release. Ensures quality, reliability, and proper functionality.
Why Testing Matters
- Early Defect Detection: Finds bugs when they're cheaper to fix
- Quality Assurance: Ensures product meets standards and requirements
- Customer Satisfaction: Delivers reliable, functional products
- Risk Mitigation: Prevents costly failures in production
- Performance Verification: Confirms system performs as expected under various conditions
Testing Principles
- Testing shows defects exist, not their absence
- Exhaustive testing is impossible - use risk-based approaches instead
- Early testing saves time and money
- Defects cluster in specific modules (80/20 rule)
- Pesticide paradox: Tests lose effectiveness if not updated regularly
- Testing depends on context - different applications need different approaches
- Bug-free doesn't mean useful - software must meet user needs
Test Strategy & Planning
Test Strategy
Organization-level approach defining general testing principles, tools, and processes. Includes risk-based testing approaches and overall methodology.
Test Planning
Project-specific document detailing what, when, how, and by whom testing will be performed. Includes:
- Scope and objectives
- Test deliverables
- Features to test/not test
- Testing schedule
- Resource allocation
- Entry/exit criteria
Test Design
Creation of test cases based on requirements. Includes test conditions, test data, expected results, and execution procedures. Ensures requirements coverage.
Test Execution
Process of running test cases against the application. Involves:
- Preparing test environment
- Executing test scripts
- Logging results
- Reporting defects
- Retesting fixes
- Regression testing
Testing Types
By Execution Method
- Manual Testing: Human execution following test cases
- Automated Testing: Using scripts and tools for repetitive tests
By Knowledge of Structure
- Black Box: Testing without knowledge of internal code - focuses on inputs/outputs
- White Box: Testing with code knowledge - focuses on code paths and coverage
- Gray Box: Combination approach with limited internal knowledge
By Testing Level
- Unit Testing: Tests individual components in isolation
- Integration Testing: Tests component interactions
- System Testing: Tests complete integrated system
- Acceptance Testing: Verifies user requirements are met
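The distinction between unit and integration testing can be sketched with a hypothetical `PriceService` that depends on a `TaxApi` collaborator: a unit test replaces the collaborator with a mock so only the component's own logic runs, while an integration test would wire in the real dependency.

```python
# Minimal unit-testing sketch; PriceService and TaxApi are hypothetical.
from unittest.mock import Mock

class PriceService:
    """Unit under test: computes a gross price via a tax-rate collaborator."""
    def __init__(self, tax_api):
        self.tax_api = tax_api

    def total(self, net):
        return round(net * (1 + self.tax_api.rate()), 2)

def test_total_in_isolation():
    # Unit testing: the collaborator is mocked, so the test exercises
    # only PriceService's logic, in isolation from any real tax service.
    fake_api = Mock()
    fake_api.rate.return_value = 0.20
    assert PriceService(fake_api).total(100) == 120.0

test_total_in_isolation()
```

An integration test for the same class would construct it with the real `TaxApi` and verify the two components interact correctly.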
Functional Testing Types
- Smoke Testing: Quick check for critical functionality
- Sanity Testing: Focused check after minor changes
- Regression Testing: Ensures changes don't break existing functionality
- Interface Testing: Verifies communication between components
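A smoke suite can be as small as a few fast checks that critical paths respond at all, run before any deeper testing. The `App` below is a hypothetical stand-in:

```python
# Smoke-testing sketch; App and its methods are illustrative only.
class App:
    def health(self):
        return "ok"

    def login(self, user, password):
        return user == "admin" and password == "secret"

app = App()
# Smoke checks: is the build even testable?
assert app.health() == "ok"
assert app.login("admin", "secret") is True
```

If any smoke check fails, the build is typically rejected without running the full regression suite.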
Non-Functional Testing
- Performance Testing: Speed, scalability, stability
- Load Testing: Behavior under expected load
- Stress Testing: Behavior beyond normal capacity
- Security Testing: Vulnerability identification
- Usability Testing: User experience quality
- Compatibility Testing: Works across environments
Bug Management
Bug Life Cycle
- New: Bug identified and reported
- Assigned: Developer tasked with fixing
- Open/In Progress: Under investigation
- Fixed: Solution implemented
- Verified: QA confirms fix works
- Closed: Issue resolved
- Reopened: If issue returns after fix
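The life cycle above is essentially a state machine, and defect trackers enforce it as one. A minimal sketch, with an illustrative (not standardized) transition table:

```python
# Bug life cycle as a state machine; allowed transitions are illustrative.
TRANSITIONS = {
    "New": {"Assigned"},
    "Assigned": {"Open"},
    "Open": {"Fixed"},
    "Fixed": {"Verified", "Reopened"},
    "Verified": {"Closed"},
    "Closed": {"Reopened"},
    "Reopened": {"Assigned"},
}

def move(state, new_state):
    # Reject transitions the workflow does not allow.
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

state = move("New", "Assigned")
state = move(state, "Open")
assert state == "Open"
```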
Bug Classification
- Severity: Impact on system functionality
- Critical: System crash, data loss
- Major: Feature unusable
- Minor: Feature works with limitations
- Trivial: Cosmetic issues
- Priority: Order of fix implementation
- High: Fix immediately
- Medium: Fix in current cycle
- Low: Fix when resources available
Testing Techniques
Black Box Techniques
- Equivalence Partitioning: Divide inputs into valid/invalid groups, test one from each
- Boundary Value Analysis: Test at input boundaries (min/max values)
- Decision Table: Test logical combinations of inputs
- State Transition: Test system state changes
- Use Case Testing: Test user scenarios end-to-end
- Error Guessing: Test based on experience where errors likely occur
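Equivalence partitioning and boundary value analysis can be shown together on a hypothetical validator that accepts ages 18 to 65 inclusive:

```python
# Black-box technique sketch; is_eligible is a hypothetical validator.
def is_eligible(age):
    return 18 <= age <= 65

# Equivalence partitioning: one representative value per partition.
assert is_eligible(40) is True    # valid partition
assert is_eligible(10) is False   # invalid partition: below range
assert is_eligible(70) is False   # invalid partition: above range

# Boundary value analysis: test at and just beyond each boundary.
assert is_eligible(17) is False
assert is_eligible(18) is True
assert is_eligible(65) is True
assert is_eligible(66) is False
```

Partitioning keeps the test count small; boundary analysis targets the off-by-one errors that cluster at range edges.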
White Box Techniques
- Statement Coverage: Execute each line of code
- Branch Coverage: Execute each decision outcome (if/else paths)
- Path Coverage: Execute all possible code paths
- Loop Testing: Test loops at 0, 1, and multiple iterations
- Code Complexity: Test based on cyclomatic complexity
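The difference between statement and branch coverage, and the 0/1/many rule for loops, can be seen on two tiny illustrative functions:

```python
# White-box coverage sketch on hypothetical functions.
def classify(n):
    label = "non-negative"
    if n < 0:
        label = "negative"
    return label

# This single test executes every statement (100% statement coverage)...
assert classify(-1) == "negative"
# ...but full branch coverage also requires the false outcome of the `if`:
assert classify(5) == "non-negative"

def total(values):
    s = 0
    for v in values:
        s += v
    return s

# Loop testing: zero, one, and many iterations.
assert total([]) == 0
assert total([7]) == 7
assert total([1, 2, 3]) == 6
```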
Test Documentation
Test Plan
Document outlining overall testing approach, scope, schedule, deliverables, and resources.
Test Case
Specific test condition with steps, data, and expected results:
- Test ID and description
- Preconditions
- Test steps
- Expected results
- Actual results
- Pass/Fail status
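The fields above map naturally onto a structured record, which is roughly how test management tools store cases. A sketch with illustrative IDs and steps:

```python
# Test-case record sketch; the IDs and steps are hypothetical.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    test_id: str
    description: str
    preconditions: list = field(default_factory=list)
    steps: list = field(default_factory=list)
    expected: str = ""
    actual: str = ""
    status: str = "Not Run"

tc = TestCase(
    test_id="TC-001",
    description="Login with valid credentials",
    preconditions=["User account exists"],
    steps=["Open login page", "Enter credentials", "Submit"],
    expected="Dashboard is displayed",
)
tc.actual = "Dashboard is displayed"
tc.status = "Pass" if tc.actual == tc.expected else "Fail"
assert tc.status == "Pass"
```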
Requirements Traceability Matrix (RTM)
Maps requirements to test cases ensuring complete test coverage. Tracks which requirements have passing/failing tests.
Test Data Management
Strategies for creating, maintaining and using test data. Includes synthetic data generation, data masking, and maintaining test data integrity across test cycles.
Root Cause Analysis
Process to identify underlying causes of defects. Uses techniques like 5 Whys, Fishbone diagrams, and Pareto analysis to prevent recurrence.
Debugging
Systematic process to isolate, identify, and resolve bugs. Involves:
- Reproducing the issue
- Isolating the source
- Analyzing code or conditions
- Fixing the root cause
- Verifying the solution
Agile Testing
Agile Testing Principles
- Test continuously throughout development
- Whole team responsible for quality
- Test early, test often
- Automate regression tests
- Tests drive development (TDD)
Agile Testing Practices
- TDD (Test-Driven Development): Write tests before code
- BDD (Behavior-Driven Development): Tests based on user behavior
- ATDD (Acceptance Test-Driven Development): Tests based on acceptance criteria
- Continuous Integration: Tests run automatically with code commits
- Testing Quadrants: Balance business-facing and technology-facing tests, both manual and automated
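The TDD rhythm (red, green, refactor) can be sketched in a few lines; `slugify` here is a hypothetical function, not from any library:

```python
# TDD sketch: the test is written first ("red"), then just enough
# code is added to make it pass ("green").

# Step 1 (red): the test exists before the implementation does,
# so running it at this point would fail with a NameError.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): the minimal implementation that satisfies the test.
def slugify(text):
    return text.lower().replace(" ", "-")

test_slugify()
# Step 3 (refactor): clean up the code while keeping the test green.
```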
Test Automation
When to Automate
- Repetitive tests (regression, smoke)
- Data-driven scenarios
- Performance/load testing
- High-risk functionality
- Cross-browser/platform tests
Automation Frameworks
- Data-Driven: Separate test data from logic
- Keyword-Driven: Action keywords define test steps
- Hybrid: Combines multiple approaches
- BDD: Uses natural language specifications
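The data-driven idea is simply that test data lives in a table separate from the test logic, so new cases are added without touching code. A minimal sketch with a hypothetical discount rule:

```python
# Data-driven framework sketch; the discount rule is illustrative.
def discount(order_total):
    # Hypothetical rule: 10% off orders of 100 or more.
    return order_total * 0.9 if order_total >= 100 else order_total

# (input, expected) rows drive a single test routine.
TEST_DATA = [
    (50, 50),
    (100, 90.0),
    (200, 180.0),
]

for amount, expected in TEST_DATA:
    assert discount(amount) == expected
```

Tools like pytest formalize this pattern with parametrized tests, reporting each data row as a separate result.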
Popular Tools
- UI Automation: Selenium, Cypress, Playwright
- API Testing: Postman, RestAssured, SoapUI
- Mobile: Appium, XCUITest, Espresso
- Performance: JMeter, LoadRunner, Gatling
- CI/CD Integration: Jenkins, GitHub Actions, GitLab CI
Test Metrics
Effectiveness Metrics
- Defect Density: Defects per code size unit
- Defect Removal Efficiency: % of defects found before release
- Requirements Coverage: % of requirements tested
- Code Coverage: % of code executed during tests
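Two of these metrics reduce to simple arithmetic; a worked example with illustrative numbers:

```python
# Effectiveness-metric sketch; all figures are made up for illustration.
defects_found_pre_release = 45
defects_found_post_release = 5
kloc = 12.5  # code size in thousands of lines

# Defect density: total defects per KLOC.
defect_density = (defects_found_pre_release + defects_found_post_release) / kloc

# Defect Removal Efficiency: share of all defects caught before release.
dre = defects_found_pre_release / (defects_found_pre_release + defects_found_post_release)

assert defect_density == 4.0  # defects per KLOC
assert dre == 0.9             # 90% of defects removed before release
```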
Efficiency Metrics
- Test Execution Time: Time to run test suite
- Test Case Productivity: Defects found per test case
- Automation Coverage: % of tests automated
- Cost per Defect: Resources spent per defect found
Tools and Reporting
Test Management Tools
- TestRail, Zephyr, qTest: Manage test cases and execution
- JIRA, Azure DevOps: Track defects and requirements
Reporting Metrics
- Test execution progress
- Pass/fail ratios
- Defect trends
- Test coverage
- Open/closed defect counts
Mobile and Web Testing Specifics
Mobile Testing Challenges
- Device fragmentation
- OS versions
- Network conditions
- Battery usage
- Interruptions handling
Web Testing Specifics
- Browser compatibility
- Responsive design
- Accessibility compliance
- Security (OWASP Top 10)
- Performance optimization
Performance Testing
Performance Testing Types
- Load Testing: Normal load behavior
- Stress Testing: Breaking point identification
- Endurance Testing: Long-duration stability
- Spike Testing: Sudden load increase handling
- Volume Testing: Data volume impact
Key Performance Metrics
- Response time
- Throughput
- Resource utilization
- Error rates
- Concurrent users capacity
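Response time and throughput are straightforward to measure once you can invoke the operation under test. A rough sketch, with a trivial function standing in for a real request:

```python
# Performance-metric sketch; operation() is a stand-in for a real request.
import time

def operation():
    return sum(range(1000))

n = 100
start = time.perf_counter()
for _ in range(n):
    operation()
elapsed = time.perf_counter() - start

avg_response_ms = (elapsed / n) * 1000  # mean response time per call
throughput = n / elapsed                # operations per second
print(f"avg response: {avg_response_ms:.3f} ms, throughput: {throughput:.0f} ops/s")
```

Real load tools (JMeter, Gatling, etc.) add concurrency, ramp-up profiles, and percentile reporting on top of this basic timing loop.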
Security Testing
Common Security Tests
- Authentication/authorization verification
- Input validation and sanitization
- Session management
- Data protection and encryption
- API security
Security Testing Approaches
- Vulnerability scanning
- Penetration testing
- Security code review
- Compliance checking
- Threat modeling
Common Interview Differences Questions
Verification vs Validation
- Verification: Are we building the product right? (Reviews, inspections, walkthroughs)
- Validation: Are we building the right product? (Testing against requirements)
Testing vs Debugging
- Testing: Finding defects/bugs in software
- Debugging: Finding root cause and fixing bugs
Severity vs Priority
- Severity: Impact on functionality (critical, major, minor, trivial)
- Priority: Order of fix implementation (high, medium, low)
Quality Assurance vs Quality Control
- QA: Preventive process ensuring quality standards are met
- QC: Detective process finding defects in existing products
Black Box vs White Box vs Grey Box
- Black Box: No knowledge of internals, focus on inputs/outputs
- White Box: Full knowledge of code internals, focus on coverage
- Grey Box: Limited knowledge of internals, combines both approaches
Smoke Testing vs Sanity Testing
- Smoke: Basic verification that critical functionality works
- Sanity: Focused check of specific functionality after changes
Regression Testing vs Retesting
- Regression: Ensuring unchanged areas still work after changes
- Retesting: Verifying fixed defects work properly
Alpha Testing vs Beta Testing
- Alpha: Testing by internal teams before release
- Beta: Testing by real users in real environments before full release
Static Testing vs Dynamic Testing
- Static: Reviewing code/documents without execution
- Dynamic: Testing with actual code execution
Load Testing vs Stress Testing
- Load: Testing at expected normal/peak loads
- Stress: Testing beyond normal capacity until breaking point
Manual Testing vs Automated Testing
- Manual: Human execution of test cases
- Automated: Tool-based execution of scripted tests
System Testing vs Acceptance Testing
- System: Testing complete integrated system against specifications
- Acceptance: Verifying system meets business/user requirements
Functional Testing vs Non-functional Testing
- Functional: Testing what the system does
- Non-functional: Testing how the system performs (performance, usability, security)
Test Plan vs Test Strategy
- Test Plan: Project-specific detailed testing approach
- Test Strategy: Organization-level general testing guidelines
Defect vs Error vs Failure vs Fault
- Error: Mistake made by developer
- Defect/Bug: Implementation that doesn't match requirements
- Failure: System not performing required function
- Fault: Incorrect step/process/data definition
STLC vs SDLC
- SDLC: Software Development Life Cycle (requirements to maintenance)
- STLC: Software Testing Life Cycle (planning to closure)
TDD vs BDD
- TDD: Test-Driven Development (write tests before code)
- BDD: Behavior-Driven Development (tests based on system behavior)
SDLC & STLC
Software Development Life Cycle (SDLC)
- Requirements gathering
- Design
- Implementation
- Testing
- Deployment
- Maintenance
SDLC Models
- Waterfall: Sequential phases
- V-Model: Testing paired with each development phase
- Agile: Iterative, incremental approach
- Spiral: Risk-driven approach
- Prototype: Build-evaluate-refine cycle
Software Testing Life Cycle (STLC)
- Requirements Analysis: Understand what to test
- Test Planning: Develop testing strategy
- Test Design: Create test cases
- Test Environment Setup: Prepare testing infrastructure
- Test Execution: Run tests, report defects
- Test Closure: Evaluate test completion criteria
Key Testing Concepts
Entry/Exit Criteria
- Entry: Conditions to start testing (requirements, code ready)
- Exit: Conditions to complete testing (coverage, defect thresholds)
Testing Approaches
- Risk-Based: Focus on highest-risk areas first
- Requirement-Based: Tests derived from requirements
- Exploratory: Simultaneous learning and testing
- Session-Based: Time-boxed exploratory testing
This cheat sheet provides a quick reference for all essential software testing knowledge. Use it as a refresher before interviews or as a daily reference in your testing activities.