Wednesday, January 12, 2011

My Experience in Software Testing

What is Software testing?
  • Finding gaps between the customer's requirements and the system to be delivered.
Or,
  • Testing is a process used to help identify the correctness, completeness and quality of developed computer software.

Why is testing necessary?

  • Software is not defect free
  • Defects cause failures
  • Unreliable software can cause failures
  • Failures have associated costs, such as loss of business
  • Testing helps to find defects and to learn about the reliability of the software

Fundamental test process

  • Test planning and control
  • Test analysis and design
  • Test implementation and execution
  • Evaluating exit criteria and reporting
  • Test closure activities

General testing principles

  • Testing shows presence of defects
  • Exhaustive testing is impossible (a back-of-the-envelope sketch follows this list)
  • Early testing
  • Defect clustering (i.e. A small number of modules contain most of the defects discovered during pre-release testing, or show the most operational failures.)
  • Pesticide paradox (i.e. If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new bugs. To overcome this “pesticide paradox”, the test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects.)
  • Testing is context dependent (i.e. Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.)
  • Absence-of-errors fallacy (i.e. Finding and fixing defects does not help if the system built is unusable and does not fulfill the users' needs and expectations.)
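
To see why exhaustive testing is impractical, here is a back-of-the-envelope sketch for a hypothetical function that takes two 32-bit integer inputs; the one-million-tests-per-second rate is an assumption made up purely for the arithmetic.

# Rough arithmetic for a hypothetical function with two 32-bit integer inputs.
inputs_per_parameter = 2 ** 32                    # about 4.3 billion values each
total_combinations = inputs_per_parameter ** 2    # about 1.8e19 input pairs

tests_per_second = 1_000_000                      # assumed execution rate
seconds_per_year = 60 * 60 * 24 * 365

years_needed = total_combinations / (tests_per_second * seconds_per_year)
print(f"{total_combinations:.2e} combinations, about {years_needed:,.0f} years to test them all")
# Roughly 585,000 years, which is why real testing relies on risks and priorities
# rather than trying every input.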

What is Quality?

  • Meeting customer's needs and requirements.

What is Quality assurance?

  • Part of quality management focused on providing confidence that quality requirements will be fulfilled.

What is Quality attribute/Quality characteristic?

  • A feature or characteristic that affects an item’s quality.

How does testing improve quality?

  • Finding defects and measuring quality in terms of defects
  • Building confidence
  • Preventing defects
  • Reducing risk

How much testing is enough?

  • Factors deciding how much to test:
      • Level of risk
          • Technical Risk
          • Business Product Risk
          • Project Risk
      • Project Constraints
          • Time
          • Budget

Objectives of testing

  • Finding defects early in the SDLC
  • Gaining confidence about the quality and providing information
  • Preventing defects

Error
  • A human action that produces an incorrect result.
Defect
  • A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.
Failure
  • Deviation of the component or system from its expected delivery, service or result.
Smoke testing
  • A subset of all defined/planned test cases that cover the main functionality of a component or system, run to ascertain that the most crucial functions of a program work, without bothering with finer details. A daily build and smoke test is among industry best practices.
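
As an illustration, a minimal smoke suite in Python's unittest; the start_app and home_page stand-ins are invented here so the sketch is self-contained, and in practice the tests would exercise the real build.

import unittest

# Tiny stand-ins for the application under test, invented for this sketch.
def start_app():
    return {"running": True}

def home_page():
    return {"status_code": 200}

class SmokeTests(unittest.TestCase):
    """A small subset of tests covering only the most crucial functions."""

    def test_application_starts(self):
        self.assertTrue(start_app()["running"])

    def test_home_page_responds(self):
        self.assertEqual(home_page()["status_code"], 200)

if __name__ == "__main__":
    unittest.main()
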
Functional testing
  • Testing based on an analysis of the specification of the functionality of a component or system.
System testing
  • The process of testing an integrated system to verify that it meets specified requirements.

Performance testing
  • The process of testing to determine the performance of a software product.
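
A very rough sketch of a timing check, assuming a hypothetical response-time budget of 0.5 seconds; real performance testing would use dedicated load tools and many measurements rather than a single run.

import time

def operation_under_test():
    # Stand-in workload, invented for this sketch.
    return sum(i * i for i in range(100_000))

start = time.perf_counter()
operation_under_test()
elapsed = time.perf_counter() - start

BUDGET_SECONDS = 0.5   # assumed performance requirement
print(f"elapsed: {elapsed:.4f}s (budget {BUDGET_SECONDS}s)")
assert elapsed <= BUDGET_SECONDS, "performance requirement not met"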

Re-testing
  • Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.
Regression testing
  • Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.
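
A small sketch of the idea in Python's unittest: tests that already passed are kept and re-run after every change so they can catch defects introduced in supposedly unchanged areas. The apply_discount function is invented for illustration.

import unittest

def apply_discount(price, percent):
    """Function under test; a stand-in for real, previously tested code."""
    return round(price * (1 - percent / 100), 2)

class RegressionTests(unittest.TestCase):
    """Pins behaviour that already worked; re-run after every change."""

    def test_existing_discount_still_correct(self):
        # This expectation passed before the latest change; a failure now
        # would signal a regression in an 'unchanged' area.
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

if __name__ == "__main__":
    unittest.main()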

Usability testing/UI testing
  • Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions.

Use case testing
  • A black box test design technique in which test cases are designed to execute user scenarios.
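
A sketch of the technique: the test drives one user scenario end to end rather than a single function in isolation. The tiny Cart class is invented so the scenario is self-contained.

import unittest

class Cart:
    """Stand-in shopping cart, invented for this sketch."""
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def checkout(self):
        return sum(price for _, price in self.items)

class BuyOneItemUseCase(unittest.TestCase):
    def test_user_adds_item_and_checks_out(self):
        # Scenario: the user adds a single item to the cart and pays for it.
        cart = Cart()
        cart.add("book", 12.50)
        self.assertEqual(cart.checkout(), 12.50)

if __name__ == "__main__":
    unittest.main()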

User acceptance testing/Acceptance testing
  • Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.

What is Bug life cycle?
  • The duration or time span between the first time a bug is found (status: ‘New’) and the time it is closed successfully (status: ‘Closed’), rejected, postponed or deferred is called the ‘Bug/Error Life Cycle’.
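
The exact statuses differ between defect-tracking tools; the sketch below assumes a simplified set (New, Assigned, Fixed, Retest, Closed, Rejected, Deferred) and only lets a bug move along the usual transitions.

from enum import Enum

class BugStatus(Enum):
    NEW = "New"
    ASSIGNED = "Assigned"
    FIXED = "Fixed"
    RETEST = "Retest"
    CLOSED = "Closed"
    REJECTED = "Rejected"
    DEFERRED = "Deferred"

# Allowed transitions in this assumed, simplified life cycle.
TRANSITIONS = {
    BugStatus.NEW: {BugStatus.ASSIGNED, BugStatus.REJECTED, BugStatus.DEFERRED},
    BugStatus.ASSIGNED: {BugStatus.FIXED, BugStatus.DEFERRED},
    BugStatus.FIXED: {BugStatus.RETEST},
    BugStatus.RETEST: {BugStatus.CLOSED, BugStatus.ASSIGNED},  # reopened if the retest fails
    BugStatus.CLOSED: set(),
    BugStatus.REJECTED: set(),
    BugStatus.DEFERRED: {BugStatus.ASSIGNED},
}

def move(current, new):
    """Return the new status if the transition is allowed, otherwise raise."""
    if new not in TRANSITIONS[current]:
        raise ValueError(f"Cannot move from {current.value} to {new.value}")
    return new

# Example: New -> Assigned -> Fixed -> Retest -> Closed
status = BugStatus.NEW
for nxt in (BugStatus.ASSIGNED, BugStatus.FIXED, BugStatus.RETEST, BugStatus.CLOSED):
    status = move(status, nxt)
print(status.value)  # Closed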

Is it possible to test everything?

  • It may be possible, but it is impractical.
Possible Solution:
  • “Prioritize tests” to do the best possible testing in the available time.
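
One common way to prioritize is by risk, e.g. likelihood times impact; the sketch below sorts an invented list of test cases that way and picks as many as fit into an assumed time budget.

# Invented test cases with rough likelihood/impact scores (1 = low, 5 = high).
test_cases = [
    {"name": "login",         "likelihood": 4, "impact": 5, "minutes": 10},
    {"name": "payment",       "likelihood": 3, "impact": 5, "minutes": 20},
    {"name": "profile photo", "likelihood": 2, "impact": 1, "minutes": 15},
    {"name": "search",        "likelihood": 4, "impact": 3, "minutes": 10},
]

# Risk score = likelihood x impact; run the riskiest cases first.
for case in test_cases:
    case["risk"] = case["likelihood"] * case["impact"]
test_cases.sort(key=lambda c: c["risk"], reverse=True)

available_minutes = 30   # assumed time budget
selected, used = [], 0
for case in test_cases:
    if used + case["minutes"] <= available_minutes:
        selected.append(case["name"])
        used += case["minutes"]

print("run in this order:", [c["name"] for c in test_cases])
print(f"fits in {available_minutes} minutes:", selected)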

Methods of testing
  • Static testing (non-execution method)
  • Dynamic testing (test execution method)
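
A small sketch contrasting the two methods: the static check inspects the source without executing it (here, Python's ast module flags a bare except clause), while the dynamic check actually runs the code with test inputs. Both the sample source and the checks are invented for illustration.

import ast

SOURCE = """
def divide(a, b):
    try:
        return a / b
    except:            # bare except - something a static check can flag
        return None
"""

# Static testing: examine the code without executing it.
tree = ast.parse(SOURCE)
bare_excepts = [node.lineno for node in ast.walk(tree)
                if isinstance(node, ast.ExceptHandler) and node.type is None]
print("static finding: bare except on lines", bare_excepts)

# Dynamic testing: execute the code and check its behaviour against expectations.
namespace = {}
exec(SOURCE, namespace)
divide = namespace["divide"]
assert divide(10, 2) == 5
assert divide(1, 0) is None   # observed behaviour when dividing by zero
print("dynamic checks passed")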

Test Planning
  • Defining test objectives and scope
  • Identifying risks and constraints
  • Defining test policy and/or the test strategy
  • Deciding the test approach (techniques, test items, coverage, identifying and interfacing with the teams involved in testing)
  • Defining entry and exit criteria for testing
  • Identifying teams and skills involved in testing
  • Identifying test resources (e.g. people, test environment, hardware, software, etc.)
  • Scheduling testing activities/tasks

Test control
  • Measuring and analyzing results
  • Monitoring and documenting progress, test coverage and exit criteria
  • Initiation of corrective actions
  • Making decisions

Test analysis and design
  • Reviewing the test basis
  • Identifying test conditions or test requirements
  • Designing the test cases
  • Evaluating testability
  • Designing the test environment set-up

Test implementation and execution
  • Developing and prioritizing test cases
  • Creating test suites from the test cases
  • Verifying the test environment
  • Executing test cases
  • Logging the outcome of test execution
  • Comparing actual results with expected results (see the sketch after this list)
  • Reporting discrepancies as incidents and analyzing them
  • Repeating test activities as a result of the action taken (confirmation/re-testing and regression testing)
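
A minimal sketch of the execute-compare-log loop mentioned above, using an invented triangle_type function and a hand-written table of expected results; a real project would use a test framework and an incident tracker rather than print statements.

def triangle_type(a, b, c):
    """Function under test, invented for this sketch."""
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Test cases as (inputs, expected result) pairs.
test_cases = [
    ((3, 3, 3), "equilateral"),
    ((3, 3, 4), "isosceles"),
    ((3, 4, 5), "scalene"),
]

# Execute each case, compare actual with expected, and log the outcome.
log = []
for inputs, expected in test_cases:
    actual = triangle_type(*inputs)
    outcome = "PASS" if actual == expected else "FAIL"
    log.append((inputs, expected, actual, outcome))
    print(f"{inputs}: expected={expected}, actual={actual} -> {outcome}")

# Discrepancies would be reported as incidents and analysed further.
failures = [entry for entry in log if entry[3] == "FAIL"]
print(f"{len(log) - len(failures)} passed, {len(failures)} failed")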

Evaluating exit/ completion criteria and reporting
  • Check the test logs against the exit/completion criteria defined in the test plan
  • Assess whether more tests are needed or whether the exit criteria specified should be changed
  • Creating a test summary report for stakeholders
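
A small sketch of checking figures from the test log against assumed exit criteria (at least a 95% pass rate and no open critical defects); the thresholds and numbers are invented for illustration.

# Assumed exit criteria from the test plan.
REQUIRED_PASS_RATE = 0.95
MAX_OPEN_CRITICAL_DEFECTS = 0

# Illustrative figures, as they might be read from the test log.
tests_run = 200
tests_passed = 192
open_critical_defects = 1

pass_rate = tests_passed / tests_run
criteria_met = (pass_rate >= REQUIRED_PASS_RATE
                and open_critical_defects <= MAX_OPEN_CRITICAL_DEFECTS)

print(f"pass rate {pass_rate:.0%}, open critical defects: {open_critical_defects}")
print("exit criteria met" if criteria_met else
      "exit criteria not met: more testing (or a revised plan) is needed")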

Test closure activities
  • Checking which planned deliverables have been delivered
  • Analyzing and archiving testware
  • Handover of testware
  • Analyzing lessons learned

Psychology of testing

-Why test?
  • Build confidence in software under test
  • To find defects
  • Prove that the software conforms to user requirements/functional specifications
  • Reduce failure costs
  • Assess the quality of the software

-Can testing prove software is correct?
  • Not possible to prove system has no defects
  • Only possible to prove system has defects
-The purpose of testing is to build confidence that the system is working
-But the purpose of testing is also to find defects
-Finding defects destroys confidence
-So, is the purpose of testing to destroy confidence?

Paradox of testing
The way to build confidence is to try and destroy it

Reference: Foundations of Software Testing - ISTQB Certification
