Sunday, 16 September 2007

Full Lifecycle Testing Concept

What is Software Testing?

>>> Process of identifying defects

A defect is any variance between actual and expected results

>>> Testing should intentionally attempt to make things go wrong, to determine whether

- things happen when they shouldn't, or

- things don't happen when they should

Types of Testing

>>> Static Testing

>>> Dynamic Testing

Static Testing

>> Involves testing of the development work products without executing any code.

> Examples:

Plan reviews

Requirements walkthroughs

Design or code inspections

Test plan inspections

Test case reviews

Dynamic Testing

>> Process of validation by exercising or operating a work product under scrutiny and observing its behavior in response to changing inputs or environments

>> Some representative examples of dynamic testing are:

Executing test cases in a working system

Simulating usage scenarios with real end-users to test usability

Parallel testing in a production environment

Purpose of Static Testing

>> Early removal of defects in the software development life cycle

>> Increases productivity by shortening testing lifecycles and reducing rework

>> Increases the quality of the project deliverables

Static Testing Techniques

> Prototyping

> Desk checks

> Checklists

> Mapping

> Reviews

> Walkthroughs

> Inspections

> Prototyping

Prototyping is the review of a work product model (an initial version) built without the full capability of the proposed final product

- The prototype is demonstrated and the result of the exercise is evaluated, leading to an enhanced design

Used to verify and validate the User Interface design and to confirm usability

> Desk checks

In a desk check, the product author reads his or her own work product to identify defects

>> Checklists

Checklists are a series of common items or prompting questions used to verify the completeness of task steps.

>> Mapping

The mapping technique identifies functions against the specification and shows how each function, directly or indirectly, maps to the requirements (a simple sketch follows below).

- Also used to map test scripts to test cases, test conditions to test scripts, and so on
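
As a rough illustration (the requirement and test case identifiers below are invented, not from the original material), a mapping can be kept as a simple traceability table from requirements to the test cases that cover them, which makes coverage gaps easy to spot:

# Hypothetical traceability mapping: requirement id -> covering test cases.
requirement_to_tests = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # no coverage yet
}

# Any requirement with no mapped test case is an immediate coverage gap.
uncovered = [req for req, tests in requirement_to_tests.items() if not tests]
print("Requirements without test coverage:", uncovered)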

> Reviews

Reviews are a useful mechanism for getting quick feedback from peers and team members

> Walkthroughs

Walkthroughs are generally run as scheduled meetings and participants are invited to attend.

- The minutes of the meeting are recorded, as are the issues and action items resulting from the meeting.

- Owners are assigned for issues and actions, and follow-up is generally done.

> Inspections

A defect detection activity aimed at producing a "defect-free" work product before it is passed on to the next phase of the development process

> Objectives of an inspection

Increase quality and productivity

Minimize costs and elapsed cycle time

Facilitate project management

Inspection Process

> An inspection has the following key properties

A moderator

Definite participant roles

Author, Inspector, Recorder

Stated entry and exit criteria

Clearly defined defect types

A record of detected defects

Re-inspection criteria

Detected defect feedback to author

Follow-up to ensure defects are fixed / resolved

Inspection Process Results

> Major defects

Missing (M) - an item is missing

Wrong (W) - an item has been implemented incorrectly.

Extra (E) - an item is included which is not part of the specifications.

Issues (I) - an item not implemented in a satisfactory manner.

Suggestion (S) - a suggestion to improve the work product.

> Minor defects

Lack of clarity in comments and descriptions

Insufficient / excessive documentation

Incorrect spelling / punctuation

Testing Techniques

> Black Box Testing

The testers have an "outside" view of the system.

They are concerned with "what is done", NOT "how it is done".

> White Box Testing

In the White Box approach, the testers have an "inside" view of the system. They are concerned with "how it is done", NOT "what is done".
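
A minimal sketch of the contrast, using Python's unittest and a hypothetical discount() function (none of these names come from the original notes): the black-box tests check only externally visible behaviour, while the white-box test is chosen with knowledge of the internal branch structure.

import unittest

def discount(order_total):
    # Hypothetical function under test: 10% off orders of 100 or more.
    if order_total >= 100:
        return order_total * 0.9
    return order_total

class BlackBoxTests(unittest.TestCase):
    # "What is done": only the externally visible behaviour is checked.
    def test_large_order_gets_discount(self):
        self.assertEqual(discount(200), 180)

    def test_small_order_unchanged(self):
        self.assertEqual(discount(50), 50)

class WhiteBoxTests(unittest.TestCase):
    # "How it is done": this case is chosen to exercise the >= boundary
    # that the implementation is known to use internally.
    def test_boundary_value_hits_discount_branch(self):
        self.assertEqual(discount(100), 90)

if __name__ == "__main__":
    unittest.main()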

Levels of Testing

  • Unit testing

  • Integration testing

  • System testing

  • Systems integration testing

  • User acceptance testing

Unit Testing

> Unit-level testing is the initial testing of new and changed code in a module.

> Verifies the internal logic of the program or module against the program specifications and validates that logic.
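
As an illustration only (the module and collaborator names are hypothetical), a unit-level test exercises one module's internal logic in isolation, stubbing its collaborator so that only the new or changed code is validated:

import unittest
from unittest import mock

def calculate_invoice(order, tax_service):
    # Hypothetical module under unit test: applies a regional tax rate
    # obtained from a collaborating service.
    return round(order["net"] * (1 + tax_service.rate(order["region"])), 2)

class InvoiceUnitTest(unittest.TestCase):
    def test_applies_regional_tax_rate(self):
        # The collaborator is stubbed, so only this module's logic is tested.
        tax_service = mock.Mock()
        tax_service.rate.return_value = 0.20
        result = calculate_invoice({"net": 100.0, "region": "EU"}, tax_service)
        self.assertEqual(result, 120.0)
        tax_service.rate.assert_called_once_with("EU")

if __name__ == "__main__":
    unittest.main()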

Integration Testing

> Integration-level tests verify proper execution of application components and do not require that the application under test interface with other applications.

> Communication between modules within the sub-system is tested in a controlled and isolated environment within the project.
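
Continuing the same hypothetical example, an integration-level test wires the real modules of the sub-system together and checks the communication between them, still without touching any external application:

import unittest

class TaxService:
    # Real collaborator within the same sub-system (hypothetical).
    RATES = {"EU": 0.20, "US": 0.07}

    def rate(self, region):
        return self.RATES[region]

def calculate_invoice(order, tax_service):
    return round(order["net"] * (1 + tax_service.rate(order["region"])), 2)

class InvoiceIntegrationTest(unittest.TestCase):
    def test_modules_cooperate_for_us_orders(self):
        # Both real components are exercised together, isolated from
        # other applications, as integration-level testing requires.
        result = calculate_invoice({"net": 50.0, "region": "US"}, TaxService())
        self.assertEqual(result, 53.5)

if __name__ == "__main__":
    unittest.main()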

System Testing

> System-level tests verify proper execution of all application components, including interfaces to other applications.

> Functional and structural types of tests are performed to verify that the system is functionally and operationally sound.

Systems Integration Testing

> Systems Integration testing is a test level that verifies the integration of all applications

Includes interfaces internal and external to the organization, with their hardware, software and infrastructure components.

> Carried out in a production-like environment

User Acceptance Testing

> Verifies that the system meets user requirements as specified.

> Simulates the user environment and emphasizes security, documentation and regression tests.

> Demonstrates to the sponsor and end users that the system performs as expected, so that they may accept the system.

Types of Tests

> Functional Testing

> Structural Testing

Functional Testing

> Audit and Controls testing

> Conversion testing

> Documentation & Procedures testing

> Error Handling testing

> Functions / Requirements testing

> Interface / Inter-system testing

> Installation testing

> Parallel testing

> Regression testing

> Transaction Flow (Path) testing

> Usability testing

Audit And Controls Testing

> Verifies the adequacy and effectiveness of controls and ensures the capability to prove the completeness of data processing results

Their validity would have been verified during design

> Normally carried out as part of System Testing once the primary application functions have been stabilized

Conversion Testing

> Verifies the compatibility of the converted program, data, and procedures with those from existing systems that are being converted or replaced.

> Most programs that are developed for conversion purposes are not totally new. They are often enhancements or replacements for old, deficient, or manual systems.

> The conversion may involve files, databases, screens, report formats, etc.

User Documentation And Procedures Testing

> Ensures that the interface between the system and the people works and is usable.

> Done as part of procedure testing to verify that the instruction guides are helpful and accurate.

Both areas of testing are normally carried out late in the cycle as part of System Testing or in the UAT.

> Not generally done until the externals of the system have stabilized.

Ideally, the persons who will use the documentation and procedures are the ones who should conduct these tests.

Error-Handling Testing

> Error-handling is the system function for detecting and responding to exception conditions (such as erroneous input)

> Ensures that incorrect transactions will be properly processed and that the system will terminate in a controlled and predictable way in case of a disastrous failure
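
A small sketch of what an error-handling test can look like (the parse_amount function is invented for illustration): invalid input must be rejected explicitly rather than causing an uncontrolled failure downstream.

import unittest

def parse_amount(text):
    # Hypothetical transaction field parser: rejects bad input explicitly
    # instead of failing unpredictably later in processing.
    try:
        value = float(text)
    except (TypeError, ValueError):
        raise ValueError(f"invalid amount: {text!r}")
    if value < 0:
        raise ValueError(f"amount must not be negative: {value}")
    return value

class ErrorHandlingTests(unittest.TestCase):
    def test_non_numeric_input_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_amount("ten dollars")

    def test_negative_amount_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_amount("-5")

if __name__ == "__main__":
    unittest.main()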

Function Testing

> Function Testing verifies, at each stage of development, that each business function operates as stated in the Requirements and as specified in the External and Internal Design documents.

> Function testing is usually completed in System Testing so that by the time the system is handed over to the user for UAT, the test group has already verified that the system meets requirements.

Installation Testing

> Any application that will be installed and run in an environment remote from the development location requires installation testing.

This is especially true of network systems that may be run in many locations.

This is also the case with packages where changes were developed at the vendor's site.

Necessary if the installation is complex or critical, must be completed in a short window, or involves high volumes, as in microcomputer installations.

This type of testing should always be performed by those who will perform the installation process.

Interface / Inter-system Testing

> Application systems often interface with other application systems. Most often, there are multiple applications involved in a single project implementation.

> Ensures that the interconnections between applications function correctly.

> More complex if the applications operate on different platforms, in different locations or use different languages.

Parallel Testing

> Parallel testing compares the results of processing the same data in both the old and new systems.

> Parallel testing is useful when a new application replaces an existing system, when the same transaction input is used in both, and when the output from both is reconcilable.

> Useful when switching from a manual system to an automated system.
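
A minimal sketch of the mechanics, assuming the old and new systems can both be driven as simple callables (the names below are placeholders): the same transactions are fed to both, and any results that do not reconcile are reported.

def run_parallel_comparison(transactions, old_system, new_system):
    # Feed identical input to both systems and collect any results
    # that do not reconcile between them.
    mismatches = []
    for txn in transactions:
        old_result = old_system(txn)
        new_result = new_system(txn)
        if old_result != new_result:
            mismatches.append((txn, old_result, new_result))
    return mismatches

# Usage with trivial stand-ins for the legacy and replacement systems:
legacy = lambda txn: txn * 2
replacement = lambda txn: txn * 2
assert run_parallel_comparison([1, 2, 3], legacy, replacement) == []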

Regression Testing

> Verifies that no unwanted changes were introduced to one part of the system as a result of making changes to another part of the system.
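
One common way to automate this (a sketch only, assuming the project keeps its automated tests under a tests/ directory) is to keep every previously passing test in the suite and re-run the whole suite after each change, so a side effect in an untouched area is caught immediately:

import unittest

# Hypothetical regression run: discover and re-execute the full existing
# suite after every change, not just the tests for the changed area.
suite = unittest.TestLoader().discover(start_dir="tests", pattern="test_*.py")

if __name__ == "__main__":
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    # A failure here means a change elsewhere broke existing behaviour.
    raise SystemExit(0 if result.wasSuccessful() else 1)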

Transaction Flow Testing

> Testing of the path of a transaction from the time it enters the system until it is completely processed and exits a suite of applications

Usability Testing

> Ensures that the final product is usable in a practical, day-to-day fashion

> Looks for simplicity and user-friendliness of the product

> Usability testing would normally be performed as part of functional testing during System and User Acceptance Testing.

Structural Testing

> Ensures that the technical and "housekeeping" functions of the system work

> Designed to verify that the system is structurally sound and can perform the intended tasks.

The categories for structural testing are:

> Backup and Recovery testing

> Contingency testing

> Job Stream testing

> Operational testing

> Performance testing

> Security testing

> Stress / Volume testing

Backup And Recovery Testing

> Recovery is the ability of an application to be restarted after failure.

> The process usually involves backing up to a point in the processing cycle where the integrity of the system is assured and then re-processing the transactions past the original point of failure.

> The nature of the application, the volume of transactions, the internal design for handling a restart, the skill level of the people involved in the recovery procedures, and the documentation and tools provided all affect the recovery process.

Contingency Testing

> Operational situations may occur which result in major outages or "disasters". Some applications are so crucial that special precautions need to be taken to minimize the effects of these situations and speed the recovery process. This is called Contingency.

Job Stream Testing

>> Done as a part of operational testing (the test type, not the test level), although this is still performed during Operability Testing.

> Starts early and continues throughout all levels of testing.

- Conformance to standards is checked in User Acceptance and Operability testing.

Operational Testing

>> All products delivered into production must obviously perform according to user requirements. However, a product's performance is not limited solely to its functional characteristics. Its operational characteristics are just as important since users expect and demand a guaranteed level of service from Computer Services. Therefore, even though Operability Testing is the final point where a system's operational behavior is tested, it is still the responsibility of the developers to consider and test operational factors during the construction phase.

Performance Testing

> Performance Testing is designed to test whether the system meets the desired level of performance in a production environment. Performance considerations may relate to response times, turnaround times (throughput), technical design issues and so on. Performance testing can be conducted using a production system, a simulated environment, or a prototype.
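
A rough sketch of a response-time measurement, assuming the transaction under test can be wrapped in a plain Python callable (sample_transaction here is just a stand-in); the actual threshold would come from the stated performance requirement:

import statistics
import time

def measure_response_times(operation, samples=100):
    # Time repeated calls to the operation under test and summarise.
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        operation()
        timings.append(time.perf_counter() - start)
    return statistics.mean(timings), max(timings)

def sample_transaction():
    # Trivial stand-in for the real transaction being measured.
    sum(range(10_000))

mean_time, worst_time = measure_response_times(sample_transaction)
print(f"mean {mean_time * 1000:.2f} ms, worst {worst_time * 1000:.2f} ms")
# The pass/fail threshold comes from the documented requirement, e.g.:
assert worst_time < 1.0, "response time target exceeded"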

Security Testing

> Security testing ensures that confidential information in the system, and in other affected systems, is protected against loss, corruption, or misuse, whether through deliberate or accidental actions. The amount of testing needed depends on the risk assessment of the consequences of a security breach. Tests should focus on, and be limited to, the security features developed as part of the system.


Stress/ Volume Testing

Stress testing is defined as the processing of a large number of transactions through the system in a defined period of time. It is done to measure the performance characteristics of the system under peak load conditions.

Stress factors may apply to different aspects of the system such as input transactions, report lines, internal tables, communications, computer processing capacity, throughput, disk space, I/O and so on.

Stress testing should not begin until the system functions are fully tested and stable. The need for Stress Testing must be identified in the Design Phase and should commence as soon as operationally stable system units are available.
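
As a sketch only (process_transaction is a placeholder for the real system call, and the volumes are arbitrary), a stress test drives a large number of transactions through the system concurrently and reports the throughput achieved under that load:

import concurrent.futures
import time

def process_transaction(txn_id):
    # Placeholder for the real transaction; the genuine system call goes here.
    time.sleep(0.001)
    return txn_id

def stress_test(total_transactions=5000, workers=50):
    # Push a high volume of transactions through in parallel and measure
    # the throughput achieved under that peak load.
    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(process_transaction, range(total_transactions)))
    elapsed = time.perf_counter() - start
    return len(results) / elapsed  # transactions per second

if __name__ == "__main__":
    print(f"throughput: {stress_test():.0f} transactions/second")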

Bottom-up Testing

An approach to integration testing in which the lowest-level components are tested first and then used to facilitate the testing of higher-level components. This process is repeated until the component at the top of the hierarchy has been tested.
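
A small sketch of the idea with two invented components: the lowest-level parser is tested first, and the higher-level loader is then tested on top of the already-verified parser.

import unittest

def parse_record(line):
    # Lowest-level component, tested first.
    name, amount = line.split(",")
    return {"name": name, "amount": float(amount)}

def load_records(lines):
    # Higher-level component, tested next, built on the verified parser.
    return [parse_record(line) for line in lines]

class Level1ParserTests(unittest.TestCase):
    def test_parse_single_record(self):
        self.assertEqual(parse_record("rent,900"), {"name": "rent", "amount": 900.0})

class Level2LoaderTests(unittest.TestCase):
    def test_loader_builds_on_tested_parser(self):
        self.assertEqual(len(load_records(["rent,900", "food,120"])), 2)

if __name__ == "__main__":
    unittest.main()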

Test Bed

(1) A test environment containing the hardware, instrumentation tools, simulators, and other support software necessary for testing a system or system component. (2) A set of test files (including databases and reference files) in a known state, used with input test data to test one or more test conditions, measuring results against expected results.
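
In the second sense, a sketch of a minimal test bed in Python (the schema and data are invented): a database in a known state is created before each test and discarded afterwards, so every run starts from the same baseline.

import sqlite3
import unittest

class AccountTests(unittest.TestCase):
    def setUp(self):
        # Build the test bed: an in-memory database in a known state.
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE accounts (id INTEGER, balance REAL)")
        self.db.execute("INSERT INTO accounts VALUES (1, 100.0)")

    def tearDown(self):
        self.db.close()

    def test_known_starting_state(self):
        row = self.db.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()
        self.assertEqual(row[0], 100.0)

if __name__ == "__main__":
    unittest.main()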
