Wednesday, January 12, 2011

Some Ideas for Effective Software Testing

Requirements Phase
  • Idea 1: Involve Testers from the Beginning
  • Idea 2: Verify the Requirements
  • Idea 3: Design Test Procedures As Soon As Requirements Are Available
  • Idea 4: Ensure That Requirement Changes Are Communicated
  • Idea 5: Beware of Developing and Testing Based on an Existing System

Test Planning
  • Idea 6: Understand the Task At Hand and the Related Testing Goal
  • Idea 7: Consider the Risks
  • Idea 8: Base Testing Efforts on a Prioritized Feature Schedule
  • Idea 9: Keep Software Issues in Mind
  • Idea 10: Acquire Effective Test Data
  • Idea 11: Plan the Test Environment
  • Idea 12: Estimate Test Preparation and Execution Time

The Testing Team
  • Idea 13: Define Roles and Responsibilities
  • Idea 14: Require a Mixture of Testing Skills, Subject-Matter Expertise, and Experience
  • Idea 15: Evaluate the Tester's Effectiveness

The System Architecture
  • Idea 16: Understand the Architecture and Underlying Components
  • Idea 17: Verify That the System Supports Testability
  • Idea 18: Use Logging to Increase System Testability
  • Idea 19: Verify That the System Supports Debug and Release Execution Modes

Test Design and Documentation
  • Idea 20: Divide and Conquer
  • Idea 21: Mandate the Use of a Test-Procedure Template and Other Test-Design Standards
  • Idea 22: Derive Effective Test Cases from Requirements
  • Idea 23: Treat Test Procedures As "Living" Documents
  • Idea 24: Utilize System Design and Prototypes
  • Idea 25: Use Proven Testing Techniques when Designing Test-Case Scenarios
  • Idea 26: Avoid Including Constraints and Detailed Data Elements within Test Procedures
  • Idea 27: Apply Exploratory Testing

Unit Testing
  • Idea 28: Structure the Development Approach to Support Effective Unit Testing
  • Idea 29: Develop Unit Tests in Parallel or Before the Implementation
  • Idea 30: Make Unit-Test Execution Part of the Build Process

Automated Testing Tools
  • Idea 31: Know the Different Types of Testing-Support Tools
  • Idea 32: Consider Building a Tool Instead of Buying One
  • Idea 33: Know the Impact of Automated Tools on the Testing Effort
  • Idea 34: Focus on the Needs of Your Organization
  • Idea 35: Test the Tools on an Application Prototype

Automated Testing: Selected Best Practices
  • Idea 36: Do Not Rely Solely on Capture/Playback
  • Idea 37: Develop a Test Harness When Necessary
  • Idea 38: Use Proven Test-Script Development Techniques
  • Idea 39: Automate Regression Tests When Feasible
  • Idea 40: Implement Automated Builds and Smoke Tests

Nonfunctional Testing
  • Idea 41: Do Not Make Nonfunctional Testing an Afterthought
  • Idea 42: Conduct Performance Testing with Production-Sized Databases
  • Idea 43: Tailor Usability Tests to the Intended Audience
  • Idea 44: Consider All Aspects of Security, for Specific Requirements and System-Wide
  • Idea 45: Investigate the System's Implementation To Plan for Concurrency Tests
  • Idea 46: Set Up an Efficient Environment for Compatibility Testing

Managing Test Execution
  • Idea 47: Clearly Define the Beginning and End of the Test-Execution Cycle
  • Idea 48: Isolate the Test Environment from the Development Environment
  • Idea 49: Implement a Defect-Tracking Life Cycle
  • Idea 50: Track the Execution of the Testing Program

Reference:
  • Dustin, Elfriede. Effective Software Testing: 50 Specific Ways to Improve Your Testing. Addison-Wesley, 2002.

My Experience in Software Testing

What is Software testing?
  • Finding the gaps between the customer's requirements and the system to be delivered.
Or,
  • Testing is a process used to help identify the correctness, completeness and quality of developed computer software.

Why is testing necessary?

  • Software is not defect free
  • Defects cause failures, and failures make the software unreliable
  • Failures have associated costs, such as loss of business
  • Testing helps to find defects and to learn about the reliability of the software

Fundamental test process

  • Test planning and control
  • Test analysis and design
  • Test implementation and execution
  • Evaluating exit criteria and reporting
  • Test closure activities

General testing principles

  • Testing shows presence of defects
  • Exhaustive testing is impossible
  • Early testing
  • Defect clustering (i.e. A small number of modules contain most of the defects discovered during pre-release testing, or show the most operational failures.)
  • Pesticide paradox (i.e. if the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new bugs. To overcome this “pesticide paradox”, the test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects.)
  • Testing is context dependent (i.e. Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.)
  • Absence-of-errors fallacy (i.e. finding and fixing defects does not help if the system built is unusable and does not fulfill the users' needs and expectations.)

What is Quality?

  • Meeting the customer's needs and requirements.

What is Quality assurance?

  • Part of quality management focused on providing confidence that quality requirements will be fulfilled.

What is Quality attribute/Quality characteristic?

  • A feature or characteristic that affects an item’s quality.

How does testing improve quality?

  • Finding defects and measuring quality in terms of defects
  • Building confidence
  • Preventing defects
  • Reducing risk

How much testing is enough?

Factors deciding how much to test:
  • Level of risk, including technical risk, business product risk and project risk
  • Project constraints, such as time and budget

Objectives of testing

  • Finding defects early in the SDLC
  • Gaining confidence about the quality and providing information
  • Preventing defects

Error
  • A human action that produces an incorrect result.
Defect
  • A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.
Failure
  • Deviation of the component or system from its expected delivery, service or result.
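
To see the chain from error to defect to failure in code, here is a minimal sketch (the function and its values are made up for illustration):

```python
def rectangle_area(width, height):
    # DEFECT: a human ERROR while coding ("+" typed instead of "*")
    # left an incorrect statement in the code.
    return width + height

# Executing the defective code produces a FAILURE: the actual result
# deviates from the expected result.
actual, expected = rectangle_area(3, 4), 12
if actual != expected:
    print(f"Failure: expected {expected}, got {actual}")
```
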
Smoke testing
  • A subset of all defined/planned test cases that covers the main functionality of a component or system, ascertaining that the most crucial functions of a program work, without bothering with finer details. A daily build and smoke test is among industry best practices.
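
As a sketch of what such a subset can look like, here is a minimal smoke test using Python's unittest; the application functions are stand-ins, not a real API:

```python
import unittest

# Stand-ins for the system under test; in practice these would be
# imported from the application being built.
def start_app():
    return True

def login(user, password):
    return {"user": user}

def place_order(item_id, quantity):
    return True

class SmokeTest(unittest.TestCase):
    """Covers only the most crucial functions; finer details are left
    to the full functional test suite."""

    def test_app_starts(self):
        self.assertTrue(start_app())

    def test_user_can_log_in(self):
        self.assertIsNotNone(login("demo", "demo"))

    def test_order_can_be_placed(self):
        self.assertTrue(place_order(item_id=1, quantity=1))

if __name__ == "__main__":
    unittest.main()
```

Run as the last step of the daily build: if any of these fail, the build is rejected before detailed testing starts.
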
Functional testing
  • Testing based on an analysis of the specification of the functionality of a component or system.
System testing
  • The process of testing an integrated system to verify that it meets specified requirements.

Performance testing
  • The process of testing to determine the performance of a software product.
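
A minimal sketch of a dynamic performance check, assuming a hypothetical `search` operation and an arbitrary 50 ms average response-time requirement:

```python
import time

def search(query):
    # Stand-in for the operation whose performance is under test.
    return [w for w in ("apple", "banana", "mango") if query in w]

N = 1000
start = time.perf_counter()
for _ in range(N):
    search("an")
avg_ms = (time.perf_counter() - start) * 1000 / N  # average per call

# The 50 ms threshold is an assumed requirement, for illustration only.
assert avg_ms < 50, f"average response time {avg_ms:.3f} ms exceeds 50 ms"
print(f"average response time: {avg_ms:.3f} ms")
```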

Re-testing
  • Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.
Regression testing
  • Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.
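
The difference can be sketched in a few pytest-style tests (the `discount` function and its fix are hypothetical):

```python
def discount(total):
    # Fixed code: 10% off orders of 100 or more
    # (the defect was ">" where ">=" was required).
    return total * 0.9 if total >= 100 else total

# Re-testing: re-run the test case that failed before the fix,
# to verify the corrective action.
def test_discount_applies_at_exactly_100():
    assert discount(100) == 90

# Regression testing: re-run tests for behaviour that was NOT changed,
# to verify the fix introduced no new defects elsewhere.
def test_no_discount_below_100():
    assert discount(99) == 99

def test_discount_above_100():
    assert discount(200) == 180
```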

Usability testing/UI testing
  • Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions.

Use case testing
  • A black box test design technique in which test cases are designed to execute user scenarios.
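
A use-case test drives the system through a whole user scenario rather than a single function. A sketch, using a toy stand-in for the system under test:

```python
class ATM:
    """Toy system under test; a stand-in for the real application."""
    def __init__(self, balance):
        self.balance = balance
        self.authenticated = False

    def insert_card_and_pin(self, pin):
        self.authenticated = (pin == "1234")
        return self.authenticated

    def withdraw(self, amount):
        if not self.authenticated or amount > self.balance:
            return False
        self.balance -= amount
        return True

def test_withdraw_cash_main_scenario():
    # Steps mirror the use case: authenticate, withdraw, check balance.
    atm = ATM(balance=500)
    assert atm.insert_card_and_pin("1234")
    assert atm.withdraw(100)
    assert atm.balance == 400

def test_withdraw_cash_insufficient_funds_alternative():
    atm = ATM(balance=50)
    assert atm.insert_card_and_pin("1234")
    assert not atm.withdraw(100)
```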

User acceptance testing/Acceptance testing
  • Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.

What is Bug life cycle?
  • The duration or time span between the first time a bug is found (status: ‘New’) and when it is closed successfully (status: ‘Closed’), rejected, postponed or deferred is called the ‘Bug/Error Life Cycle’.
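
One way to picture the life cycle is as a set of statuses and allowed transitions. The status names and transitions below are typical examples; the exact set varies between defect trackers:

```python
from enum import Enum

class BugStatus(Enum):
    NEW = "New"
    ASSIGNED = "Assigned"
    FIXED = "Fixed"
    RETEST = "Retest"
    REOPENED = "Reopened"
    DEFERRED = "Deferred"
    REJECTED = "Rejected"
    CLOSED = "Closed"

# Allowed transitions; the exact set is an assumption and differs
# from one organization or tool to another.
TRANSITIONS = {
    BugStatus.NEW:      {BugStatus.ASSIGNED, BugStatus.REJECTED, BugStatus.DEFERRED},
    BugStatus.ASSIGNED: {BugStatus.FIXED},
    BugStatus.FIXED:    {BugStatus.RETEST},
    BugStatus.RETEST:   {BugStatus.CLOSED, BugStatus.REOPENED},
    BugStatus.REOPENED: {BugStatus.ASSIGNED},
}

def move(current, new):
    """Advance a bug to a new status, rejecting illegal jumps."""
    if new not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current.value} -> {new.value}")
    return new
```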

Is it possible to test everything?

  • It may be possible, but it is impractical (a quick calculation below shows why).
Possible Solution:
  • “Prioritize tests” to do the best testing in the available time.
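
The back-of-the-envelope calculation: even a single function taking two 32-bit integers has more input combinations than could ever be executed.

```python
inputs = 2 ** 32 * 2 ** 32            # every pair of 32-bit integer inputs
rate = 1_000_000                      # optimistic: one million tests per second
seconds_per_year = 60 * 60 * 24 * 365

years = inputs / (rate * seconds_per_year)
print(f"{inputs:.2e} combinations would take about {years:.0f} years")
# ~1.84e+19 combinations -> roughly 585,000 years, for one function
```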

Methods of testing
  • Static testing (non-execution method)
  • Dynamic testing (test-execution method); a small illustration of both follows
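
Here, static testing inspects the source without running it (as reviews, linters and type checkers do), while dynamic testing executes the code with test inputs. The toy lint rule below is an illustration, not a real tool:

```python
import ast

source = "def divide(a, b):\n    return a / b\n"

# Static testing: examine the code without executing it.
# Toy check: flag divisions that are not guarded against zero.
tree = ast.parse(source)
for node in ast.walk(tree):
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Div):
        print("static finding: unguarded division on line", node.lineno)

# Dynamic testing: execute the code with test inputs and check results.
namespace = {}
exec(source, namespace)
assert namespace["divide"](10, 2) == 5
print("dynamic test passed")
```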

Test Planning
  • Defining testing objectives and scope
  • Identifying risks and constraints
  • Defining the test policy and/or the test strategy
  • Deciding the test approach (techniques, test items, coverage, identifying and interfacing)
  • Defining entry and exit criteria for testing
  • Identifying teams and skills involved in testing
  • Identifying test resources (e.g. people, test environment, hardware, software, etc.)
  • Scheduling testing activities/tasks

Test control
  • Measuring and analyzing results
  • Monitoring and documenting progress, test coverage and exit criteria
  • Initiation of corrective actions
  • Making decisions

Test analysis and design
  • Reviewing the test basis
  • Identifying test conditions or test requirements
  • Designing test conditions
  • Evaluating testability
  • Designing the test environment set-up

Test implementation and execution
  • Developing and prioritizing test cases
  • Creating test suites from the test cases
  • Verifying the test environment
  • Executing test cases
  • Logging the outcome of test execution
  • Comparing actual results with expected results (a minimal sketch of these steps follows this list)
  • Reporting discrepancies as incidents and analyzing them
  • Repeating test activities as a result of actions taken for each discrepancy (confirmation/re-testing) and regression testing
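
A minimal sketch of the execute/compare/log/report steps; the system under test and the test cases are stand-ins:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

def add(a, b):
    return a + b  # stand-in for the system under test

# Each test case pairs a name and inputs with an expected result.
test_cases = [
    ("two positives", (2, 3), 5),
    ("with zero",     (0, 7), 7),
    ("two negatives", (-1, -1), -2),
]

for name, args, expected in test_cases:
    actual = add(*args)                        # execute the test case
    if actual == expected:                     # compare actual vs. expected
        logging.info("PASS %s", name)          # log the outcome
    else:
        # A mismatch would be raised as an incident and analyzed.
        logging.error("FAIL %s: expected %r, got %r", name, expected, actual)
```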

Evaluating exit/ completion criteria and reporting
  • Check the test logs against the exit/completion criteria defined in the test plan
  • Assess whether more tests are needed or whether the exit criteria specified should be changed (a sketch of such a check follows this list)
  • Creating a test summary report for stakeholders
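
Such a check can be automated against the test logs; the figures and thresholds below are assumed examples, not standard values:

```python
# Figures that would be read from the test logs (illustrative values).
results = {"executed": 180, "planned": 200, "passed": 171,
           "open_critical_defects": 1}

# Example exit criteria from a test plan (assumed thresholds).
criteria = [
    ("all planned tests executed",
     results["executed"] >= results["planned"]),
    ("pass rate at least 95%",
     results["passed"] / results["executed"] >= 0.95),
    ("no open critical defects",
     results["open_critical_defects"] == 0),
]

for name, met in criteria:
    print(("MET     " if met else "NOT MET ") + name)

if not all(met for _, met in criteria):
    print("=> more testing is needed, or the exit criteria must be revisited")
```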

Test closure activities
  • Checking which planned deliverables have been delivered
  • Analyzing and archiving testware
  • Handover of testware
  • Analyzing lessons learned

Psychology of testing

-Why test?
  • Build confidence in the software under test
  • Find defects
  • Prove that the software conforms to user requirements/functional specifications
  • Reduce failure costs
  • Assess the quality of the software

-Can testing prove software is correct?
  • Not possible to prove system has no defects
  • Only possible to prove system has defects
-The purpose of testing is to build confidence that the system is working
-But purpose of testing is also to find defects
-Finding defects destroys confidence
-So, purpose of testing is to destroy confidence?

Paradox of testing
The way to build confidence is to try and destroy it