Tuesday, 11 December 2007

Software Testing Best Practices

Every time we conclude a study or task force on the subject of software development process, one recommendation comes out loud and clear.

"We need to adopt the best practices in the industry." While it appears as an obvious conclusion, the most glaring lack of it's presence continues to astound the study team.

The best practices can be classified into the following types:

a) Basic Best Practices
b) Foundational Best Practices
c) Incremental Best Practices

a) Basic Best Practices

They are the training wheels you need to get started, and when you take them off, it is evident that you know how to ride. But remember: taking them off does not mean you forget how to ride. This is an important difference, which is all too often forgotten in software. "Yeah, we used to write functional specifications but we don't do that anymore" means you have forgotten how to ride, not that you no longer need that step. The Basic practices have been around for a long time.

Functional Specifications
Testers use the functional specification to write test cases from a black box testing perspective. The advantage of having a functional specification is that test generation can happen in parallel with the development of the code. This is ideal along several dimensions: it gains parallelism in execution, removing a serious serialization bottleneck in the development process.

Reviews and Inspection
It is argued that software inspection can easily provide a ten times gain in the process of debugging software. Not much needs to be said about this, since it is a fairly well known and understood practice.

Formal Entry and Exit Criteria
The idea is that every process step, be it inspection, functional test, or software design, has precise entry and exit criteria. These are defined by the development process and are monitored by management to gate movement from one stage to the next.
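As a minimal sketch, entry and exit criteria can even be made executable so that a gate check is mechanical rather than a judgment call. The criteria names, metrics, and thresholds below are hypothetical illustrations, not taken from any specific process:

```python
# Hypothetical sketch: gating a process step on formal entry/exit criteria.
# Each criterion is a named predicate over project metrics.

entry_criteria = {
    "code complete": lambda m: m["features_done"] == m["features_planned"],
    "unit tests pass": lambda m: m["unit_failures"] == 0,
}
exit_criteria = {
    "no open severity-1 defects": lambda m: m["sev1_open"] == 0,
    "coverage target met": lambda m: m["coverage"] >= 0.80,
}

def gate(criteria, metrics):
    """Return the list of criteria that are not yet satisfied."""
    return [name for name, check in criteria.items() if not check(metrics)]

metrics = {"features_done": 12, "features_planned": 12,
           "unit_failures": 0, "sev1_open": 2, "coverage": 0.85}

may_enter = not gate(entry_criteria, metrics)   # both entry criteria hold
blockers = gate(exit_criteria, metrics)         # open sev-1 defects block exit
```

Management then gates the stage transition on the returned list being empty.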

Functional Test – Variations
Most functional tests are written as black box tests working off a functional specification. The test cases generated are usually variations on the input space, coupled with visiting the output conditions.
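For instance, such variations can be captured as a table of inputs and expected outputs and replayed mechanically; the function under test here (`clamp`) is a made-up example:

```python
# Hypothetical function under test: clamps a value into [low, high].
def clamp(value, low, high):
    return max(low, min(value, high))

# Black-box variations: sample the input space and record expected outputs.
variations = [
    ((5, 0, 10), 5),    # inside the range
    ((-3, 0, 10), 0),   # below the range
    ((42, 0, 10), 10),  # above the range
    ((0, 0, 10), 0),    # on the lower edge
    ((10, 0, 10), 10),  # on the upper edge
]

results = [(args, clamp(*args) == expected) for args, expected in variations]
all_passed = all(ok for _, ok in results)
```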

Multi-platform Testing
When code is ported from one platform to another, modifications are sometimes done for performance purposes. The net result is that testing on multiple platforms has become a necessity for most products. Therefore techniques to do this better, both in development and testing, are essential.

Internal Betas
Techniques to best conduct an internal Beta test are essential to obtain good coverage and use internal resources efficiently. This best practice covers how to run a Beta program internally, on a smaller scale, to reduce the cost and expense of an external Beta.

Automated Test Execution
The goal of automated test execution is to minimize the amount of manual work involved in test execution and to gain higher coverage with a larger number of test cases. Automated test execution has a significant impact both on the tool sets for test execution and on the way tests are designed.
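A minimal sketch of the idea, using Python's standard unittest module; the function under test and its cases are invented for illustration:

```python
import unittest

# Hypothetical function under test.
def word_count(text):
    return len(text.split())

class WordCountTests(unittest.TestCase):
    def test_simple_sentence(self):
        self.assertEqual(word_count("the quick brown fox"), 4)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

# Run the suite programmatically, as a nightly automation script might.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The same runner scales to thousands of cases with no additional manual effort, which is the point of the practice.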

‘Nightly’ Builds
The concept of a nightly build has been in vogue for a long time. While a build is not necessarily done every single day, the concept captures frequent builds from changes being promoted into the change control system.

b) Foundational Best Practices

The Foundational practices are the rock in the soil that protects your efforts against the harshness of nature, be it a redesign of your architecture or enhancements to sustain unforeseen growth. They need to be put down thoughtfully and will make the difference in the long haul, whether you build a ranch or a skyscraper. Their added value is significant and has been established by a few leaders in the industry. Unlike the Basics, they are probably not as well known and therefore need implementation help. While there may be no textbooks on them yet, there is plenty of documentation to dig up.

User Scenarios
One of the viable methods of testing is to develop user scenarios that exercise the functionality of the applications. This best practice should capture methods of recording user scenarios and developing test cases based on them. In addition it could discuss potential diagnosis methods when a specific failure scenario occurs.

Usability Testing
Usability testing needs to not only assess how usable a product is but also provide feedback on methods to improve the user experience and thereby gain a positive quality image. The best practice for usability testing should also track advances in the area of Human-Computer Interaction.

In-Process ODC Feedback Loops
Orthogonal defect classification is a measurement method that uses the defect stream to provide precise measurability into the product and the process. Given the measurement, a variety of analysis techniques have been developed to assist management and decision-making on a range of software engineering activities.

Multi-Release ODC/Butterfly
The technology of multi-release ODC/Butterfly analysis allows a product manager to make strategic development decisions so as to optimize development costs, time to market, and quality issues by recognizing customer trends, usage patterns, and product performance.

“Requirements” for Test Planning
One of the roles of software testing is to ensure that the product meets the requirements of the clientele. Capturing the requirements therefore becomes an essential part not only to help develop but to create test plans that can be used to gauge if the developed product is likely to meet customer needs.

Automated Test Generation
Almost 30% of the testing task can be the writing of test cases. To a first order of approximation, this is a completely manual exercise and a prime candidate for savings through automation.
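One common automation approach, sketched below, is to generate the test input space mechanically as the cross product of each parameter's chosen values; the parameter names and values are hypothetical:

```python
import itertools

# Sketch: generate test inputs by crossing the selected values of each
# parameter, instead of writing each combination by hand.
browsers = ["firefox", "ie"]
user_types = ["guest", "member", "admin"]
payloads = ["empty", "typical", "oversized"]

generated_cases = list(itertools.product(browsers, user_types, payloads))
# 2 * 3 * 3 = 18 cases generated without writing any of them manually.
```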

c) Incremental Best Practices

The Incremental practices provide specific advantages in special conditions. While they may not provide broad gains across the board of testing, they are more specialized. These are the right-angle drills: when you need one, nothing else can get between narrow studs and drill a hole perfectly square. At the same time, if there were just one drill you were going to buy, it might not be your first choice. Not all of these practices are widely known or well documented. But they are all powerful when judiciously applied.

Conclusion
So clear is the presence of best practices that it distinguishes the winners from the also-rans like no other factor. The search for best practices is constant. Some are known and well recognized, others debated, and several hidden.

Testing : Introduction to CMM

Software Quality:
Quality software should reasonably be bug-free, delivered on time and within budget. It should meet the given requirements and/or expectations, and should be maintainable.
In order to produce error free and high quality software certain standards need to be followed.

Quality Standards
ISO 9001:2000 is a Quality Management System certification. To achieve it, an organization must satisfy the ISO 9001:2000 clauses (clauses 1 - 8).
Six Sigma is a process improvement methodology focused on reducing the variation of processes around the mean. Its objective is to make the process defect free.
SEI CMM is a de facto standard for assessing and improving processes related to software development, developed by the software community in 1986 with leadership from the SEI. It is a software-specific process maturity model. It provides guidance for measuring software process maturity and helps process improvement programs.

SEI CMM is organized into 5 maturity levels:
Initial
Repeatable
Defined
Managed
Optimizing
1) Initial:
The software process is characterized as ad hoc, and occasionally even chaotic. Few processes are defined, and success depends on individual effort and heroics.

2) Repeatable:
Basic project management processes are established to track cost, schedule, and functionality. The necessary process discipline is in place to repeat earlier successes on projects with similar applications.

3) Defined:
The software process for both management and engineering activities is documented, standardized, and integrated into a standard software process for the organization. All projects use an approved, tailored version of the organization's standard software process for developing and maintaining software.

4) Managed:
Detailed measures of the software process and product quality are collected. Both the software process and products are quantitatively understood and controlled.

5) Optimizing:
Continuous process improvement is enabled by quantitative feedback from the process and from piloting innovative ideas and technologies.

Comparison of CMM and ISO


Conclusion:
CMM helps ensure that the process followed for developing a product produces an error-free product. A company that is process-driven is more successful than one that is people-driven. Hence, a company needs a good software development process to be successful.

Sunday, 2 December 2007

Life Cycle of Testing Process

Life Cycle of Testing

This article explains the different steps in the life cycle of the testing process. Each phase of the development process has a specific input and a specific output. Once the project is confirmed to start, development can be divided into the following phases:

* Software Requirements
* Software Design
* Implementation
* Testing
* Maintenance

In the whole development process, testing consumes the highest amount of time. But most developers overlook this, and the testing phase is generally neglected. As a consequence, erroneous software is released. The testing team should be involved right from the requirements stage itself.

The various phases involved in testing, with regard to the software development life cycle are:

1. Requirements stage
2. Test Plan
3. Test Design
4. Design Reviews
5. Code Reviews
6. Test Cases preparation
7. Test Execution
8. Test Reports
9. Bugs Reporting
10. Reworking on patches
11. Release to production


Requirements Stage :
Normally, in many companies, the developers themselves take part in the requirements stage. Especially in product-based companies, a tester should also be involved in this stage, since a tester thinks from the user's side whereas a developer may not. A separate panel should be formed for each module, comprising a developer, a tester and a user. Panel meetings should be scheduled in order to gather everyone's views. All the requirements should be documented properly for further use; this document is called the "Software Requirements Specification".

Test Plan :
Without a good plan, no work is a success. The testing process likewise requires a good plan. The test plan is the most important document for bringing a process-oriented approach to testing. It should be prepared after the requirements of the project are confirmed, and must contain the following information:
• Total number of features to be tested.
• Testing approaches to be followed.
• The testing methodologies
• Number of man-hours required.
• Resources required for the whole testing process.
• The testing tools that are to be used.
• The test cases, etc

Test Design :
Test design is done based on the requirements of the project. Tests have to be designed based on whether manual or automated testing will be done. For automation testing, the different paths for testing are to be identified first. An end-to-end checklist has to be prepared covering all the features of the project.

The test design is represented pictographically. The test design involves various stages. These stages can be summarized as follows:
• The different modules of the software are identified first.
• Next, the paths connecting all the modules are identified.

Then the design is drawn. The test design is the most critical stage, since it decides the test case preparation. The test design thus determines the quality of the testing process.

Test Cases Preparation :
Test cases should be prepared based on the following scenarios:
• Positive scenarios
• Negative scenarios
• Boundary conditions and
• Real World scenarios
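To make the categories concrete, here is a small sketch with one case per category for a hypothetical age-field validator (the function, range, and cases are invented for illustration):

```python
# Hypothetical function under test: validates an age field (0-120 inclusive).
def is_valid_age(value):
    return isinstance(value, int) and 0 <= value <= 120

# One test case from each scenario category.
cases = {
    "positive":   (30, True),      # a normal, expected input
    "negative":   (-5, False),     # an invalid input
    "boundary":   (120, True),     # the upper edge of the valid range
    "real_world": ("30", False),   # a string from a web form, not an int
}

outcomes = {name: is_valid_age(value) == expected
            for name, (value, expected) in cases.items()}
```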

Design Reviews :
The software design is done in a systematic manner or using UML. The tester can review the design and suggest ideas and needed modifications.

Code Reviews :
Code reviews are similar to unit testing. Once the code is ready for release, the tester should be ready to do unit testing of the code, with his own unit test cases prepared. Though a developer does the unit testing, a tester should also do it: developers may overlook some of the minute mistakes in the code which a tester may find.

Test Execution and Bugs Reporting :
Once the unit testing is completed and the code is released to QA, the functional testing is done. A top-level testing is done at the beginning of the testing to find out the top-level failures. If any top-level failures occur, the bugs should be reported to the developer immediately to get the required workaround.

The test reports should be documented properly and the bugs have to be reported to the developer after the testing is completed.
Release to Production :

Once the bugs are fixed, another release is given to the QA with the modified changes. Regression testing is executed. Once the QA assures the software, the software is released to production. Before releasing to production, another round of top-level testing is done.

The testing process is an iterative process. Once the bugs are fixed, the testing has to be done repeatedly. Thus the testing process is an unending process.


Friday, 30 November 2007

Basic Test Case Concepts

A testcase is simply a test with formal steps and instructions; testcases are valuable because they are repeatable, reproducible under the same environments, and easy to improve upon with feedback. A testcase is the difference between saying that something seems to be working okay and proving that a set of specific tasks are known to be working correctly.

Some tests are more straightforward than others. For example, say you need to verify that all the links in your web site work. There are several different approaches to checking this:

* you can read your HTML code to see that all the link code is correct
* you can run an HTML DTD validator to see that all of your HTML syntax is correct, which would imply that your links are correct
* you can use your browser (or even multiple browsers) to check every link manually
* you can use a link-checking program to check every link automatically
* you can use a site maintenance program that will display graphically the relationships between pages on your site, including links good and bad
* you could use all of these approaches to test for any possible failures or inconsistencies in the tests themselves
Verifying that your site's links are not broken is relatively unambiguous. You simply need to decide which one or more of these tests best suits your site structure, your test resources, and your need for granularity of results. You run the test, and you get your results showing any broken links.
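As a sketch of the automated approach, the standard library's html.parser can extract every href for checking. Here "resolving" a link is simulated with a set of known pages; a real checker would issue HTTP requests instead:

```python
from html.parser import HTMLParser

# Extract every href from a page, then flag targets that do not resolve.
class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = '<a href="/home">Home</a> <a href="/pricing">Pricing</a> <a href="/oops">Typo</a>'
known_pages = {"/home", "/pricing", "/about"}  # stand-in for a live site

extractor = LinkExtractor()
extractor.feed(page)
broken = [link for link in extractor.links if link not in known_pages]
```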

Notice that you now have a list of broken links, not of incorrect links. If a link is valid syntactically, but points at the incorrect page, your link test won't catch the problem. My general point here is that you must understand what you are testing. A testcase is a series of explicit actions and examinations that identifies the "what".

A testcase for checking links might specify that each link is tested for functionality, appropriateness, usability, style, consistency, etc. For example, a testcase for checking links on a typical page of a site might include these steps:
Link Test: for each link on the page, verify that

* the link works (i.e., it is not broken)
* the link points at the correct page
* the link text effectively and unambiguously describes the target page
* the link follows the approved style guide for this web site (for example, closing punctuation is or is not included in the link text, as per the style guide specification)
* every instance of a link to the same target page is coded the same way.

As you can see, this is a detailed testing of many aspects of the link, with the result that on completion of the test, you can say definitively what you know works. However, this is a simple example: testcases can run to hundreds of instructions, depending on the types of functionality being tested and the need for iterations of steps.

Defining Test and Testcase Parameters
A testcase should set up any special environment requirements the test may have, such as clearing the browser cache, enabling JavaScript support, or turning on the warnings for the dropping of cookies.
In addition to specific configuration instructions, testcases should also record browser types and versions, operating system, machine platform, connection speed -- in short, the testcase should record any parameter that would affect the reproducibility of the results or could aid in troubleshooting any defects found by testing. To state this a little differently: specify what platforms this testcase should be run against, record what platforms it is run against, and in the case of defects report the exact environment in which the defect was found. The required fields of a test case are as follows:

Test Case ID: A unique number given to the test case so it can be identified.

Test Description: The description of the test case you are going to test.

Revision History: Each test case has to have its revision history in order to know when and by whom it was created or modified.

Function to be Tested: The name of the function to be tested.

Environment: The environment in which you are testing.

Test Setup: Anything you need to set up outside of your application, for example printers, network and so on.

Test Execution: A detailed description of every step of execution.

Expected Results: The description of what you expect the function to do.

Actual Results: pass/fail. If pass, what actually happened when you ran the test; if failed, a description of what you observed.

Sample Testcase
Here is a simple test case for applying bold formatting to a text.

Test case ID: B 001
Test Description: verify B - bold formatting to the text
Revision History:
3/23/00 1.0 - Valerie - Created
Function to be tested: B - bold formatting to the text
Environment: Win 98
Test setup: N/A
Test Execution:

Open program
Open new document
Type any text
Select the text to make bold.
Click Bold

Expected Result: Applies bold formatting to the text
Actual Result: pass

Testcase definition
Define testcases in the Definition pane of the Component Test perspective. This is also where you define the hosts on which the testcases will run. Once you define the testcase element in the Definition pane, its contents appear in the Outline pane. You can add elements to the testcase's main block, and once your definition is complete you can prepare it to run and create a testcase instance.

Testcase stages
As you work with testcases in the Component Test perspective, they go through different stages, from definition to analysis. Each stage is generated from the previous one, but is otherwise unrelated: for example, although a testcase instance is generated from a testcase definition, changes to the definition will not affect the instance.

Creating manual testcases
Create manual testcases to guide a tester through the steps necessary to test a component or application. Once you have created a manual testcase, you can prepare it to run.

Adding manual testcases
To add a manual testcase to the Component Test perspective, follow these steps:
1. In the Definition pane, right-click on Testcases and click: New > Testcase
2. In the New Testcase wizard, select the project you want to define the testcase in.
3. Name the project and click Next.
4. Select the Manual scheduler.
5. Click Finish to add the testcase to the Testcase folder under the selected project.
The contents of the testcase appear in the Outline pane. To start with, it contains a main block, which will organize all the other contents of the testcase.

Creating HTTP testcases
Create HTTP testcases to run methods and queries against an HTTP server. You can define HTTP testcases by importing an HTTP XML file that defines a set of interactions, or you can define it using the tasks below. Once you have defined the testcase, you can prepare it to run.
Creating Java testcases
Create Java testcases to test static Java methods by calling them and verifying the results. Once you have defined the testcase, you can generate an instance of it, and edit the instance's code to provide the logic for evaluating each task and verification point.

Adding Java testcases
To add a Java testcase to the Component Test perspective:
1. In the Definition pane, right-click on Testcases and click: New > Testcase
2. In the New Testcase wizard, select the project you want to define the testcase in.
3. Name the project and click Next.
4. Select the Java scheduler.
5. Click Finish to add the testcase to the Testcase folder under the selected project.
The contents of the testcase appear in the Outline pane. To start with, it contains a main block, which will organize all the other contents of the testcase.

Reusing testcases
You can reuse existing testcase definitions when you define new ones. This lets you define testcases for common sequences (such as logging into an application) that you can then reuse in more complex compound testcases.
To reuse a testcase:
1. Select the testcase you want to add the existing testcase to.
2. In the Outline pane, right-click the block you want to add the testcase to and click Add Testcase Definition Reference.
3. In the Add Testcase Definition Reference wizard, select the testcase you want to reuse.
4. Click Finish.
The reused testcase is incorporated by reference: its definition is still maintained separately, and the compound testcase definition will pick up changes to the testcases it reuses. However, when you create a testcase instance, the generated code for the referenced testcase definition will be stored as part of the referencing testcase instance. In other words, reuse happens only at the definition level: at the instance level, each reusing testcase creates its own copy of the reused testcases.

Test Cases & Explanation
We will not supply you with test input for most of your assignments. Part of your job will be to select input cases to show that your program works correctly. You should select input from the following categories:

Normal Test Cases: These are inputs that would be considered "normal" or "average" for your program. For example, if your program computes square roots, you could try several positive numbers, both less than and greater than 1, including some perfect squares such as 16 and some numbers without rational square roots.

Boundary Test Cases: These are inputs that are legal, but on or near the boundary between legal and illegal values. For example, in a square root program, you should try 0 as a boundary case.

Exception Test Cases: These are inputs that are illegal. Your program may give an error message or it might crash. In a square root program, negative numbers would be exception test cases.
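The three categories might look like this for a square-root routine; the `safe_sqrt` wrapper below is an illustrative stand-in for "your program":

```python
import math

# Illustrative program under test; it rejects negative input explicitly.
def safe_sqrt(x):
    if x < 0:
        raise ValueError("square root of a negative number")
    return math.sqrt(x)

# Normal cases: a perfect square, and values on either side of 1.
normal_ok = (safe_sqrt(16) == 4.0
             and 0 < safe_sqrt(0.25) < 1
             and safe_sqrt(2) > 1)

# Boundary case: zero, the edge of the legal input range.
boundary_ok = safe_sqrt(0) == 0.0

# Exception case: a negative number must be rejected, not computed.
try:
    safe_sqrt(-9)
    exception_ok = False
except ValueError:
    exception_ok = True
```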

You must hand in outputs (saved in file form) of your test runs. In addition to handing in your actual test runs, give us a quick explanation of how you picked them. For example, if you write a program to compute square roots, you might say "my test input included zero, small and large positive numbers, perfect squares and numbers without a rational square root, and a negative number to demonstrate error handling". You may give this explanation in the separate README file, or include it alongside the test cases.

You will be marked for how well the test cases you pick demonstrate that your program works correctly. If your program doesn't work correctly in all cases, please be honest about it. It is perfectly valid to have test cases which illustrate the circumstances in which your program does not yet work. If your program doesn't run at all, you can hand in a set of test cases with an explanation of how you picked them and what the correct output would be. Both of these will get you full marks for testing. If you pick test cases to hide the faults in your program, you will lose marks.

Black Box Test Case Design
Objective and Purpose
The purpose of Black Box Test Case Design (BBTD) is to discover circumstances under which the assessed object does not react and behave according to the requirements or specifications.
Operational Sequence
The test cases in a black box test case design are derived from the requirements or specifications. The object to be assessed is considered as a black box, i.e. the assessor is not interested in the internal structure and behavior of the object to be assessed.

The following black box test case designs can be differentiated:
* Generation of equivalence classes
* Marginal value analysis
* Intuitive test case definition
* Function coverage

1. Generation of Equivalence Classes :
Objective and Purpose :
It is the objective of the generation of equivalence classes to achieve an optimal probability of detecting errors with a minimum number of test cases.
Operational Sequence :
The principle of the generation of equivalence classes is to group all input data of a program into a finite number of equivalence classes, so that it can be assumed that any representative of a class will detect the same errors as any other representative of that class.
The definition of test cases via equivalence classes is realized by means of the following steps:
1. Analysis of the input data requirements, the output data requirements, and the conditions according to the specifications
2. Definition of the equivalence classes by setting up the ranges for input and output data
3. Definition of the test cases by means of selecting values for each class

When defining equivalence classes, two groups of equivalence classes have to be differentiated:
* valid equivalence classes
* invalid equivalence classes
For valid equivalence classes, valid input data are selected; in the case of invalid equivalence classes, erroneous input data are selected. If the specification is available, the definition of equivalence classes is predominantly a heuristic process.
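A small sketch of the idea, for a hypothetical field that accepts integers 1 - 100: three classes, one representative each (the validator and ranges are invented for illustration):

```python
# Hypothetical validator under test: accepts integers 1-100.
def accepts(value):
    return 1 <= value <= 100

# Equivalence classes: one representative stands in for every value
# in its class (one valid class, two invalid classes).
classes = {
    "valid: in range":      (50, True),
    "invalid: below range": (0, False),
    "invalid: above range": (101, False),
}

# One test case per class is enough under the equivalence assumption.
class_results = {name: accepts(rep) == expected
                 for name, (rep, expected) in classes.items()}
```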

2. Marginal Value Analysis :
Objective and Purpose :
It is the objective of the marginal value analysis to define test cases that can be used to discover errors connected with the handling of range margins.
Operational Sequence :
The principle of the marginal value analysis is to consider the range margins in connection with the definition of test cases. This analysis is based on the equivalence classes defined by means of the generation of equivalence classes. Contrary to the generation of equivalence classes, not any one representative of the class is selected as test case but only the representatives at the class margins. Therefore, the marginal value analysis represents an addition to the test case design according to the generation of equivalence classes.
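Continuing the hypothetical 1 - 100 field used to illustrate equivalence classes, marginal value analysis replaces arbitrary representatives with values at and just beyond each margin:

```python
# Hypothetical validator under test (accepts integers 1-100).
def accepts(value):
    return 1 <= value <= 100

# Marginal value analysis: test at and just beyond each range margin.
margin_cases = [
    (0, False),    # just below the lower margin
    (1, True),     # on the lower margin
    (2, True),     # just above the lower margin
    (99, True),    # just below the upper margin
    (100, True),   # on the upper margin
    (101, False),  # just beyond the upper margin
]

margins_ok = all(accepts(v) == expected for v, expected in margin_cases)
```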

3. Intuitive Test Case Definition
Objective and Purpose :
It is the objective of the intuitive test case definition to improve systematically detected test cases qualitatively, and also to detect supplementary test cases.
Operational Sequence :
The basis for this methodical approach is the intuitive ability and experience of human beings in selecting test cases according to expected errors. A regulated procedure does not exist. Apart from the analysis of the requirements and the systematically defined test cases (if realized), it is most practical to generate a list of possible errors and error-prone situations. In this connection it is possible to make use of experience with repeatedly occurring standard errors. Based on these identified errors and critical situations, the additional test cases are then defined.

4. Function Coverage
Objective and Purpose
It is the purpose of function coverage to identify test cases that can be used to prove that the corresponding function is available and can be executed. In this connection the test cases concentrate on the normal behavior and the exceptional behavior of the object to be assessed.
Operational Sequence
Based on the defined requirements, the functions to be tested must be identified. Then the test cases for the identified functions can be defined.
Recommendation
With the help of a test case matrix it is possible to check if functions are covered by several test cases. In order to improve the efficiency of the tests, redundant test cases ought to be deleted.

White Box Test Case Design
Objective and Purpose
The objective of the "White Box Test Case Design" (WBTD) is to detect errors by means of execution-oriented test cases.
Operational Sequence
White Box Testing is a test strategy which investigates the internal structure of the object to be assessed in order to specify execution-oriented test cases on the basis of the program logic. In this connection the specifications have to be taken into consideration, though. In a test case design, the portion of the assessed object which is addressed by the test cases is taken into consideration. The considered aspect may be a path, a statement, a branch, and a condition. The test cases are selected in such a manner that the correspondingly addressed portion of the assessed object is increased.

The following White Box Test Case methods exist:
1. Path coverage
2. Statement coverage
3. Branch coverage
4. Condition coverage
5. Branch/condition coverage
6. Coverage of all multiple conditions

1.Path Coverage
Objective and Purpose
It is the objective of the path coverage to identify test cases executing a required minimum number of paths in the object to be assessed. The execution of all paths cannot be realized as a rule.
Operational Sequence
By taking into consideration the specification, the paths to be executed and the corresponding test cases will be defined.

2. Statement Coverage
Objective and Purpose
It is the objective of the statement coverage to identify test cases executing a required minimum number of statements in the object to be assessed.
Operational Sequence
By taking into consideration the specification, statements are identified and the corresponding test cases are defined. Depending on the required coverage degree, either all or only a certain number of statements are to be used for the test case definition.

3. Branch Coverage
Objective and Purpose
It is the objective of the branch coverage to identify test cases executing a required minimum number of branches in the object to be assessed, i.e. each branch at least once.
Operational Sequence
By taking into consideration the specification, a sufficiently large number of test cases must be designed by means of an analysis so both the THEN and the ELSE branch are executed at least once for each decision. I. e. the exit for the fulfilled condition and the exit for the unfulfilled must be utilized and each entry must be addressed at least once. For multiple decisions there exists the additional requirement to test each possible exit at least once and to address each entry at least once.

4. Condition Coverage
Objective and Purpose
The objective of the condition coverage is to identify test cases executing a required minimum number of conditions in the object to be assessed.
Operational Sequence
By taking into consideration the specification, conditions are identified and the corresponding test cases are defined. The test cases are defined on the basis of a path sequence analysis.

5. Branch/Condition Coverage
Objective and Purpose
The objective of the branch/condition coverage is to identify test cases executing a required minimum number of branches and conditions in the object to be assessed.
Operational Sequence
By taking into consideration the specification, branches and conditions are identified and the corresponding test cases are defined.

6. Coverage of all Multiple Conditions
Objective and Purpose
The objective of the coverage of all multiple conditions is to identify test cases executing a required minimum number of all possible condition combinations for a decision in the object to be assessed.
Operational Sequence
By taking into consideration the specification, condition combinations for decisions are identified and the corresponding test cases are defined. When defining test cases it must be observed that all entries are addressed at least once.
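As an illustration of how these criteria differ, here is a minimal Python sketch (the function and its conditions are invented for illustration, not taken from any particular program): a single decision containing two conditions needs only two tests for branch coverage, but four for coverage of all multiple conditions.

```python
# Illustrative only: one decision with two conditions joined by OR.
def access_allowed(is_admin, has_token):
    if is_admin or has_token:   # the decision under test
        return True
    return False

# Branch coverage: one test taking the THEN branch, one taking the ELSE branch.
branch_tests = [(True, False), (False, False)]

# Coverage of all multiple conditions: every combination of condition values.
multiple_condition_tests = [(True, True), (True, False),
                            (False, True), (False, False)]

for args in multiple_condition_tests:
    print(args, access_allowed(*args))
```

Note how the stronger criterion forces more test cases for the same piece of code.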

Test Cases

How to write TEST CASES?
To write test cases one should be clear on the specifications required for a particular case. Once the case is decided, check the requirements and then write the test cases. Before writing test cases you must first perform boundary value analysis. Let us write a test case for a Consignee Details form. (Consignee details: the consignee is the customer who purchases our product; here he gives information about himself, for example name, address, etc.)

Here is the screen shot of the form


Software Requirement Specification
According to the software requirement specification (SRS), test cases should be written through to their expected results.

Here is the screen shot of SRS

Boundary Value Analysis:
Boundary value analysis concentrates on the range between the minimum and maximum values; it does not concentrate on the centre values.

For example, here is how to calculate the boundary values for the Company Name field.

Minimum length is 4 and maximum length is 15.

For the boundary values you check the minimum length and the maximum length, each plus and minus one:
for the Company Name field, minimum values = 3, 4, 5
maximum values = 14, 15, 16

According to the software requirement specification, among the boundary values given above:

Valid values = 4, 5, 14, 15
Invalid values = 3, 16, because these values are outside the range given in the software requirement specification.
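The boundary check above can be sketched as a small Python snippet (the validation function is an assumption for illustration; the real form would enforce the SRS rule in its own code):

```python
# Boundary value analysis for the Company Name field (SRS: length 4 to 15).
MIN_LEN, MAX_LEN = 4, 15

def company_name_valid(name):
    return MIN_LEN <= len(name) <= MAX_LEN

# Boundary values: min-1, min, min+1 and max-1, max, max+1.
boundary_lengths = [MIN_LEN - 1, MIN_LEN, MIN_LEN + 1,
                    MAX_LEN - 1, MAX_LEN, MAX_LEN + 1]   # 3, 4, 5, 14, 15, 16

for n in boundary_lengths:
    print(n, company_name_valid("x" * n))   # False only for 3 and 16
```

Only lengths 3 and 16 fall outside the valid range, matching the invalid values listed above.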




> You have to write test cases for the boundary values as well. For a single user id field you may have 11 test cases including the boundary values.
> You have to write test cases through to the expected result; you can start writing test cases as soon as you have the software requirement specification.
> After the creation of the test cases is complete, the build arrives in the testing environment.
> Build: the complete project.
> After that you have to execute the test cases.

EXECUTION OF TEST CASES
You have to supply all the possible test inputs given in the test cases and then check whether all the test cases have been executed.

How to execute? For example, to check that Company Name is a mandatory (i.e., compulsory) field: give no input in the Company Name field, enter the password, and then click the OK button. The alert message “Enter Company name:” must be displayed; this is your expected result. The test case passes if this happens while you are executing it against the project.
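As a sketch of this execution step in Python (the `submit_form` handler and its messages are hypothetical stand-ins for the real form under test):

```python
# Hypothetical form handler: Company Name is a mandatory field.
def submit_form(company_name):
    if not company_name:                  # mandatory field left empty
        return "Enter Company name:"      # alert message per the SRS example
    return "OK"

# Executing the test case: give no input for Company Name, click OK.
actual = submit_form("")
expected = "Enter Company name:"
print("PASS" if actual == expected else "FAIL")   # prints PASS
```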

Test Case 1
Test Case ID : Test Case Title
The test case ID may be any convenient identifier, as decided upon by the tester. Identifiers should follow a consistent pattern within test cases, and a similar consistency should apply across test modules written for the same project.



Purpose:
The purpose of the Test case, usually to verify a specific requirement.

Owner:
The persons or department responsible for keeping the Test cases accurate.

Expected Result :
Describe the expected results and outputs from this Test Case. It is also desirable to include some method of recording whether or not the expected results actually occurred (i.e.) if the test case, or even individual steps of the test case, passed.

Test Data:
Any required data input for the Test Case.

Test Tools:
Any specific or unusual tools or utilities required for the execution of this Test Case.

Dependencies :
If correct execution of this test case depends on its being preceded by any other test cases, that fact should be mentioned here. Similarly, any dependency on factors outside the immediate test environment should also be mentioned.

Initialization :
If the system software or hardware has to be initialized in a particular manner in order for this Test case to succeed, such initialization should be mentioned here.

Description:
Describe what will take place during the test case. The description should take the form of a narrative description of the test case, along with a test procedure, which in turn can be specified by test case steps, tables of values or configurations, further narrative, or whatever is most appropriate to the type of testing taking place.

Test Case 2


Test Case 3

Test Case 4

Test Case Description : Identify the Items or features to be tested by this test case.

Pre and post conditions: Description of any changes to the standard environment; any modification should be applied automatically.

Test Case 4 - Description

Case : Test Case Name

Component : Component Name

Author : Developer Name

Date : MM – DD – YY

Version : Version Number

Input / Output Specifications:
Identify all inputs and outputs required to execute the test case. Be sure to identify all required inputs and outputs, not just data elements and values:

> Data (Values , ranges, sets )
> Conditions (States: initial, intermediate, final)
> Files (database, control files)

Test Procedure
Identify any special constraints on the test case. Focus on key elements such as special setup.

Expected Results
Fill this row with a description of the expected test results.

Failure Recovery
Explanations regarding which actions should be performed in case of test failure.

Comments
Suggestions, description of possible improvements, etc.
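The template fields above could be captured as a simple record; here is a minimal Python sketch (the field names mirror the template sections and are illustrative, not a standard format):

```python
from dataclasses import dataclass

# Illustrative record mirroring the test case template sections above.
@dataclass
class TestCase:
    case: str                 # Test Case Name
    component: str            # Component Name
    author: str
    date: str                 # MM-DD-YY
    version: str
    inputs_outputs: list      # data, conditions, files
    procedure: str            # special constraints, setup
    expected_results: str
    failure_recovery: str = ""
    comments: str = ""

tc = TestCase(case="Consignee name - mandatory", component="Consignee Details",
              author="Tester", date="12-11-07", version="1.0",
              inputs_outputs=["Company Name: (empty)"],
              procedure="Leave Company Name blank and click OK",
              expected_results='Alert "Enter Company name:" is displayed')
print(tc.case)
```

Keeping every test case in one structured shape makes the consistency requirements above easy to enforce.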

Test Case 5




WEB TESTING

Writing Test Cases for Web Browsers

This is a guide to making test cases for Web browsers, for example test cases that demonstrate HTML, CSS, SVG, DOM, or JS bugs. There are always exceptions to the rules when making test cases; the most important thing is to show the bug without distractions. This isn't something that can be done just by following steps mechanically; you have to be intelligent about it. The first part of this guide covers minimising existing testcases.

STEP ONE: FINDING A BUG
The first step to making a testcase is finding a bug in the first place. There are four ways of doing this:
1. Letting someone else do it for you: Most of the time, the testcases you write will be for bugs that other people have filed. In those cases, you will typically have a Web page which renders incorrectly, either a demo page or an actual Web site. However, it is also possible that the bug report will have no problem page listed, just a problem description.
2. Alternatively, you can find a bug yourself while browsing the Web. In such cases, you will have a Web site that renders incorrectly.
3. You could also find the bug because one of the existing testcases fails. In this case, you have a Web page that renders incorrectly.
4. Finally, the bug may be hypothetical: you might be writing a test suite for a feature without knowing if the feature is broken or not, with the intention of finding bugs in the implementation of that feature. In this case you do not have a Web page, just an idea of what a problem could be.

If you have a Web page showing a problem, move on to the next step. Otherwise, you will have to create an initial testcase yourself; this is covered in the section on "Creating testcases from scratch" later.

STEP TWO: REMOVING DEPENDENCIES
You have a page that renders incorrectly.
Make a copy of this page and all the files it uses, and update the links so they all point to your copies. Make sure that it still renders incorrectly in the same way; if it doesn't, find out why not. Make your copy's environment as close to the original as needed to reproduce the bug. For example, instead of loading the files locally, put the files on a remote server and try it from there; make sure the MIME types are the same if they need to be, and so on.
Once you have your page and its dependencies all set up and still showing the same problem, embed the dependencies one by one.
For example, change markup like this:

  <link rel="stylesheet" href="foo.css">

...to this:

  <style type="text/css">
  ...contents of foo.css...
  </style>
Each time you do this, check that you haven't broken any relative URIs and that the page still shows the problem. If the page stops showing the problem, you either made a mistake when embedding the external files, or you found a bug specifically related to the way that particular file was linked. Move on to the next file.

STEP THREE: MAKING THE TEST FILE SMALLER
Once you have put as many of the external dependencies into the test file as you can, start cutting the file down.
Go to the middle of the file. Delete everything from the middle of the file to the end. (Don't pay attention to whether the file is still valid or not.) Check that the error still occurs. If it doesn't, put that part back, and remove the top half instead, or a smaller part.
Continue in this vein until you have removed almost all the file and are left with 20 or fewer lines of markup, or at least, the smallest amount that you need to reproduce the problem.
Now, start being intelligent. Look at the file. Remove bits that clearly will have no effect on the bug. For example if the bug is that the text "investments are good" is red but should be green, replace the text with just "test" and check it is still the wrong colour.
Remove any scripts. If the scripts are needed, try doing what the scripts do and then removing them. For example, replace this:

  <script>document.write("test");</script>

...with:

  test

...and check that the bug still occurs.
Merge any <style> blocks together.
Change presentational markup to CSS. For example, change this:

  <font color="red">

...to:

  span { color: red; } /* in the stylesheet */

Do the same with style="" attributes (remove the attributes, and put the rules in a <style> block instead).
Remove any classes, and use element names instead. For example:

  .a { color: red; }
  .b { color: green; }

  <div class="a"><p class="b">This should be green.</p></div>

...becomes:

  div { color: red; }
  p { color: green; }

  <div><p>This should be green.</p></div>
Do the same with IDs. Make sure there is a strict mode DOCTYPE, for example:

  <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN">

Remove any <meta> elements. Remove any "lang" attributes or anything else that isn't needed to show the bug.
If you have images, replace them with very simple images, e.g.:
http://hixie.ch/resources/images/sample
If there is script that is required, remove as many functions as possible, merge functions together, and put code inline instead of in functions.

STEP FOUR: GIVE THE TEST AN OBVIOUS PASS CONDITION
The final step is to make sure that the test can be used quickly. It must be possible to look at a test and determine if it has passed or failed within about 2 seconds.
There are many tricks to do this, which are covered in other documents such as the CSS2.1 Test Case Authoring
Guidelines:
http://www.w3.org/Style/CSS/Test/guidelines.html
Make sure your test looks like it has failed even if no script runs at all. Make sure the test doesn't look blank if it fails.

Creating testcases from scratch

STEP ONE: FIND SOMETHING TO TEST

Read the relevant specification.
Read it again.
Read it again, making sure you read every last bit of it, cover to cover.
Read it one more time, this time checking all the cross-references.
Read the specification in random order, making sure you understand every last bit of it.
Now, find a bit you think is likely to be implemented wrongly.
Work out a way in which a page could be created so that if the browser gets it right, the page will look like the test has passed, and if the browser gets it wrong, the page will look like it failed.
Write that page.
Now jump to step four above.

Note:
This information is collected from other sources.

Friday, 23 November 2007

ISTQB Foundation Exam Preparation

Exam Preparation follows :

Each and every syllabus module contains :
> Description of the module.
> Content of the module.
> Module Regarding Document.
> A simple test on the module.

Fundamentals of testing :
This section looks at why testing is necessary, what testing is, explains general testing principles, the fundamental test process, and psychological aspects of testing.

1.0 Fundamentals (or) Principles of testing
1.1 Why is testing necessary
1.2 What is testing
1.3 General testing principles
1.4 Fundamental test process
1.5 Psychology of testing
Prepare a bit from here Chapter 1

Take a Small Test now :
1. Use numbers 1 to 5 to indicate which fundamental test process the following major tasks belong to:
1 for planning and control,
2 for analysis and design,
3 for implementation and execution,
4 for evaluating exit criteria and reporting, and
5 for test closure activities.

A. _____ Creating the test data
B. _____ Designing test cases
C. _____ Analyzing lessons learned
D. _____ Defining the testing objectives
E. _____ Assessing whether more tests are needed
F. _____ Identifying the required test data
G. _____ Comparing actual progress against the plan
H. _____ Preparing a test summary report
I. _____ Documenting the acceptance of the system
J. _____ Re-executing a test that previously failed

2. What should be taken into account to determine when to stop testing?
I. Technical risk
II. Business risk
III. Project constraints
IV. Product documentation

A. I and II are true; III and IV are false
B. III is true; I, II, and IV are false
C. I, II, and IV are true; III is false
D. I, II and III are true; IV is false

3. How can software defects in future projects be prevented from reoccurring?
A. Creating documentation procedures and allocating resource contingencies
B. Asking programmers to perform a thorough and independent testing
C. Combining levels of testing and mandating inspections of all documents
D. Documenting lessons learned and determining the root cause of problems

4. Use numbers 1 to 5 to indicate which fundamental test process the following major tasks belong to:

1 for planning and control,
2 for analysis and design,
3 for implementation and execution,
4 for evaluating exit criteria and reporting, and
5 for test closure activities.

K. _____ Reporting the status of testing
L. _____ Documenting the infrastructure for reuse later
M. _____ Checking the test logs against the exit criteria
N. _____ Identifying the required test environment
O. _____ Developing and prioritizing test procedures
P. _____ Comparing actual vs. expected results
Q. _____ Designing and prioritizing test cases
R. _____ Assessing whether the exit criteria should be changed
S. _____ Receiving feedback and monitoring test activities
T. _____ Handing over the testware to the operations team


Testing throughout the software lifecycle :
Explains the relationship between testing and life cycle development models, including the V-model and iterative development. Outlines four levels of testing:
• Component testing
• Integration testing
• System testing
• Acceptance testing
Describes four test types, the targets of testing:
• Functional
• Non-functional characteristics
• Structural
• Change-related
Outlines the role of testing in maintenance.

2.0 Testing throughout the life cycle
2.1 Software development models
2.2 Test levels
2.3 Test types: the targets of testing
2.4 Maintenance testing
Prepare a bit from here Chapter 2

Take a small test now :

1. What test can be conducted for off-the-shelf software to get market feedback?
A. Beta testing
B. Usability testing
C. Alpha testing
D. COTS testing

2. Fill in the Blanks now :

1. _____ are the capabilities that a component or system must perform.
2. Reliability, usability, and portability are examples of _____.
3. Hardware and instrumentation needed for testing are parts of a _____.
4. _____ is also known as structural testing.
5. _____ ignores the internal mechanisms of a system being tested.
6. Which test level tests individual components or a group of related units?
7. Which test level determines if the customer will accept the system?
8. _____ checks the interactions between components.
9. _____ is usually performed on a complete, integrated system.
10. _____ is another name for unit testing.

3. Which test levels are USUALLY included in the common type of V-model?

A. Integration testing, system testing, acceptance testing and regression testing
B. Component testing, integration testing, system testing and acceptance testing
C. Incremental testing, exhaustive testing, exploratory testing and data driven testing
D. Alpha testing, beta testing, black-box testing and white-box testing

Static techniques :
Explains the differences between the various types of review and outlines the characteristics of a formal review. Describes how static analysis can find defects.

3.0 Static techniques
3.1 Reviews and the test process
3.2 Review process
3.3 Static analysis by tools
Prepare a bit from here Chapter 3

Take a small test now :

1.Which typical defects are easier to find using static instead of dynamic testing?

L. Deviation from standards
M. Requirements defects
N. Insufficient maintainability
O. Incorrect interface specifications

A. L, M, N and O
B. L and N
C. L, N and O
D. L, M and N

2. In a formal review, who is primarily responsible for the documents to be reviewed?

A. Author
B. Manager
C. Moderator
D. Reviewers

3.What are the typical six main phases of a formal review?

Test Design Techniques :
This section explains how to identify test conditions (things to test) and how to design test cases and procedures. It also explains the difference between white-box and black-box testing. Techniques described include equivalence partitioning, boundary value analysis, decision tables, state transition testing, and statement and decision testing, along with use case testing, experience-based testing (such as exploratory testing), and advice on choosing techniques.

4.0 Test Design Techniques
4.1 Identifying test conditions and designing test cases
4.2 Categories of test design techniques
4.3 Specification-based or black box techniques
4.4 Structure-based or white box techniques
4.5 Experience-based techniques
4.6 Choosing test techniques
Prepare a bit from here Chapter 4

Take a small test now :

1. Features to be tested, approach, item pass/fail criteria and test deliverables should be specified in which document?
A. Test case specification
B. Test procedure specification
C. Test plan
D. Test design specification

2. Which aspects of testing will establishing traceability help?

A. Configuration management and test data generation
B. Test specification and change control
C. Test condition and test procedures specification
D. Impact analysis and requirements coverage

Test Management
This section covers test organisation, test planning and estimation, test progress monitoring and control, configuration management, risk and testing, and incident management.

5.0 Test management
5.1 Test organisation
5.2 Test planning and estimation
5.3 Test progress monitoring and control
5.4 Configuration management
5.5 Risk and testing
5.6 Incident or bug management
Prepare a bit from here Chapter 5

Take a small test now :

1. Which of the following is a KEY task of a tester?
A. Reviewing tests developed by others
B. Writing a test strategy for the project
C. Deciding what should be automated
D. Writing test summary reports

2. Which of the following are test leader's vs. tester's tasks?
A. Adjust plans as needed
B. Analyze design documents
C. Analyze overall test progress
D. Assess user requirements
E. Automate tests as needed
F. Contribute to test plans
G. Coordinate configuration management
H. Coordinate the test strategy
I. Create test specifications
J. Decide what to automate

Tool support for testing :
Different types of tool support for testing are described throughout the course. This session summarises them, discusses how to use them effectively and how to best introduce a new tool.

6.0 Tool support for testing
6.1 Types of test tools
6.2 Effective use of tools, potential benefits and risks
6.3 Introducing a tool into an organisation
Prepare a bit from here Chapter 6


Take a small test now :

1) Match the test tool classifications to the test tools.

1. Test management—applies to all test activities
2. Static testing—facilitates static analysis in detecting problems early
3. Test specification—generates tests and prepares data
4. Test execution and logging—runs tests and provides framework
5. Performance and monitoring—observes systems behavior
6. Specialized—caters to specific environment or platform
7. Other—assists in other miscellaneous testing tasks

A. ___ Configuration management tools
B. ___ Coverage measurement tools
C. ___ Debugging tools
D. ___ Dynamic analysis tools
E. ___ Incident management tools
F. ___ Industry-specific tools
G. ___ Modeling tools
H. ___ Monitoring tools
I. ___ Performance testing tools
J. ___ Platform-specific tools

2. Which of the following are potential benefits of using test support tools?

A. Ensuring greater consistency and minimizing software project risks
B. Reducing repetitive work and gaining easy access to test information
C. Performing objective assessment and reducing the need for training
D. Allowing for greater reliance on the tool to automate the test process

Mail me for answers.
All the best :-)

Please follow this post often to see latest questions updated.

Note:- This is just for reference. Do not rely on it completely for your exam.

Saturday, 17 November 2007

Statement Coverage & Decision Coverage

The ISEB Foundation Certification Syllabus covers three categories of TEST DESIGN TECHNIQUES:
1) Specification-based or black-box techniques.
2) Structure-based or white-box techniques.
3) Experience-based techniques.

As per the request of my blog readers I would like to post the "Structure-based or White-Box Techniques" first and then continue with the other design techniques later.

TEST DESIGN TECHNIQUES for Structured-based (or) White-Box Techniques are:
-> Statement Testing Coverage
-> Decision Testing Coverage

{Statement Coverage & Decision Coverage: these two topics are covered in the 4th chapter of the ISEB Foundation syllabus, "TEST DESIGN TECHNIQUES".}

Structured-based or White-Box Techniques :

White Box Testing :
-> Testing based on knowledge of the internal structure and logic.
-> Logic errors and incorrect assumptions are inversely proportional to a path's execution probability.
-> We often believe that a path is not likely to be executed, but reality is often counter-intuitive.
-> Measures coverage.

Structure-based or white-box techniques are based on an identified structure of the software or system, as seen in the following examples:
Component level: the structure is that of the code itself, i.e., statements, decisions or branches.
Integration level: the structure may be a call tree (a diagram in which modules call other modules).
System level: the structure may be a menu structure, business process or web page structure.

Structure-based or white-box testing can be applied at different levels of testing. Here we will be focusing on white-box testing at the code level, but it can be applied wherever we want to test the structure of something - for example, ensuring that all modules in a particular system have been executed.

** Below, two code-related structural techniques for code coverage, based on statements and decisions, are discussed.

** For decision testing, a control flow diagram may be used to visualize the alternatives for each decision.

As said earlier, I focus mainly on code-related structural techniques. These techniques identify paths through the code that need to be exercised in order to achieve the required level of code coverage.

There are methods that can make the identification of white-box test cases easier - one method is control-flow graphing, which uses nodes, edges and regions. I will show them in detail with examples here.

Now coming to the actual topic :

TEST DESIGN TECHNIQUES for Structured-based (or) White-Box Techniques are:
-> Statement Testing Coverage
-> Decision Testing Coverage

1. Statement Testing & Coverage :
A statement is:
>> 'An entity in a programming language, which is typically the smallest indivisible unit of execution' (ISTQB definition).
Statement coverage is:
>> 'The percentage of executable statements that have been exercised by a test suite' (ISTQB definition).

Statement coverage:
-> Does not ensure coverage of all functionality.

The objective of statement testing is to show that the executable statements within a program have been executed at least once. An executable statement can be described as a line of program source code that will carry out some type of action. For example:


If all statements in a program have been executed by a set of tests then 100% statement coverage has been achieved. However, if only half of the statements have been executed by a set of tests then 50% statement coverage has been achieved.

The aim is to achieve the maximum amount of statement coverage with the minimum number of test cases.

>> Statement testing derives test cases to execute specific statements, normally to increase statement coverage.
>> 100% statement coverage for a component is achieved by executing all of the executable statements in that component.

If we are required to carry out statement testing, the amount of statement coverage required for the component should be stated in the test coverage requirements in the test plan. We should aim to achieve at least the minimum coverage requirements with our test cases. If 100% statement coverage is not required, then we need to determine which areas of the component are more important to test by this method.

>>Consider the following lines of code:

>> 1 test would be required to execute all three executable statements.

If our component consists of three lines of code we will execute all of them with one test case, thus achieving 100% statement coverage. There is only one way to execute the code - starting at line 1 and finishing at line 3.
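Sketching the straight-line case in Python (the three statements are invented for illustration):

```python
# Three executable statements with no decisions: straight-line code.
def total_price(price, quantity):
    subtotal = price * quantity      # line 1
    tax = subtotal // 10             # line 2: 10% tax, integer arithmetic
    return subtotal + tax            # line 3

# One test case exercises lines 1-3: 100% statement coverage.
print(total_price(10, 2))   # 22
```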

>> Statement testing is more complicated when there is logic in the code.
>> For example..

>> Here there is one executable statement, i.e., "Display error message".
>> Hence 1 test is required to execute all executable statements.

Program code becomes harder to test when logic is introduced. It is likely that a component will have to carry out different actions depending upon circumstances at the time of execution. In the code example shown, the component will do different things depending on whether the age input is less than 17 or is 17 and above. With statement testing we have to determine the routes through the code we need to take in order to execute the statements, and the input required to get us there!

In this example, the statement will be executed if the age is less than 17, so we would create a test case accordingly.
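A minimal Python sketch of the age check described above (the message text follows the example; the function name is an assumption):

```python
# One decision guarding one executable statement, with no ELSE branch.
def check_age(age):
    if age < 17:
        return "Display error message"   # the only guarded statement
    # no ELSE branch: nothing else to execute

# A single test with age < 17 reaches every executable statement:
test_input = 16                          # chosen so the condition is true
print(check_age(test_input))             # 100% statement coverage with one test
```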

>> For more complex logic we could use control flow graphing
>> Control flow graphs consist of nodes, edges and regions.

Control flow graphs describe the logic structure of software programs: they chart the flows through the program logic, using the code itself rather than the program specification. Each flow graph consists of nodes and edges. The nodes represent computational statements or expressions, and the edges represent transfers of control between the nodes. Together the nodes and edges enclose an area known as a region.

In the diagram, the structure represents an 'If Then Else Endif' construct. Nodes are shown for the 'If' and the 'Endif'. Edges are shown for the 'Then' (the true path) and the 'Else' (the false path). The region is the area enclosed by the nodes and the edges.

>>All programs consists of these basic structures..

This is Hetzel notation, which shows only logic flow.

There are four basic structures used within control-flow graphing.

The 'Do While' structure will execute a section of code while a field or indicator is set to a certain value. For example,


The 'Do Until' structure will execute a section of code until a field or indicator is set to a certain value. For example,

The evaluation of the condition occurs after the code is executed.

The 'Go To' structure will divert the program execution to the program section in question. For example

>> So the logic flow code could now be shown as follows:

If we applied control-flow graphing to our sample code, the 'If Then Else' structure would apply.
However, while it shows us the structure of the code, it doesn't show us where the executable statements are, and so it doesn't yet help us determine the tests we require for statement coverage.

>> We can introduce extra nodes to indicate where the executable statements are.

>> And we can see the path we need to travel to execute the statement in the code.

What we can do is introduce extra nodes to indicate where the statements occur in the program code.
Now, in our example, we can see that we need to answer 'yes' to the question being posed in order to traverse the code and execute the statement on line 2.

>> Now consider this code and control flow graph:

>> We will need 2 tests to achieve 100% statement coverage.

Program logic can be a lot more complicated than the examples I have given so far!
In the source code shown here, we have executable statements associated with each outcome of the question being asked. We have to display an error message if the age is less than 17 (answering 'yes' to the question), and we have to display 'Customer OK' if we answer 'no'.
We can only traverse the code once with a given test; therefore we require two tests to achieve 100% statement coverage.
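The if/else case can be sketched in Python as follows (the function and message names are illustrative):

```python
# Executable statements on both outcomes of the decision.
def check_customer(age):
    if age < 17:
        return "Display error message"   # executed when the answer is 'yes'
    else:
        return "Customer OK"             # executed when the answer is 'no'

# Two tests, one per branch, give 100% statement coverage.
print(check_customer(16))   # Display error message
print(check_customer(20))   # Customer OK
```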

>> And this example...

>> We will need 3 tests to achieve 100% statement coverage.

Now it gets even more complicated!
In this example, we have a supplementary question, or what is known as a 'nested if'. If we answer 'yes' to 'Is the fuel tank empty?' we are then asked a further question, and each outcome of this question has an associated statement.

Therefore we will need two tests that answer 'yes' to 'Is the fuel tank empty?':
* Fuel tank empty AND petrol engine (to execute line 3)
* Fuel tank empty AND NOT petrol engine (to execute line 5)
One further test will be required where we answer 'no' to 'Is the fuel tank empty?' to enable us to execute the statement at line 8.
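A Python sketch of the nested-if example (the actions on lines 3, 5 and 8 are assumed, since the original listing is not shown):

```python
# A 'nested if': a supplementary question inside the first decision.
def fuel_check(tank_empty, petrol_engine):
    if tank_empty:                       # line 1
        if petrol_engine:                # line 2
            return "Fill with petrol"    # line 3
        else:
            return "Fill with diesel"    # line 5
    else:
        return "Tank not empty"          # line 8

# Each test traverses the code only once, so three tests are needed:
tests = [(True, True),    # executes line 3
         (True, False),   # executes line 5
         (False, False)]  # executes line 8
for t in tests:
    print(fuel_check(*t))
```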

>> And this will be the last example for statement coverage; we will then move on to decision coverage.

>> We will need 2 tests to achieve 100% statement coverage.

In this example,,, we ahve two saperate questions that are being asked.

The tests have shown are
* A coffee drinker who wants cream
* A non coffee drinker who doesn`t want cream

Our 2 tests achieve 100% statement coverage, but equally we could have had 2 tests with:
* A coffee drinker who doesn't want cream
* A non-coffee drinker who wants cream

If we were asked to achieve 100% statement coverage, and if all statements were of equal importance, it wouldn't matter which set of tests we choose.
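A Python sketch of the two separate questions (the drink and topping names are assumptions) shows why either pairing works:

```python
def serve(coffee_drinker, wants_cream):
    """Two independent If-Then-Else decisions, one after the other."""
    if coffee_drinker:      # first question
        drink = "coffee"
    else:
        drink = "tea"
    if wants_cream:         # second question
        topping = "cream"
    else:
        topping = "milk"
    return drink, topping

# Either pairing executes all four assignment statements:
print(serve(True, True), serve(False, False))   # set 1
print(serve(True, False), serve(False, True))   # set 2
```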

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Checking your calculation values:

Minimum tests required to achieve 100%

Decision coverage >= Statement coverage

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Decision Testing & Coverage
A decision is:
>> 'A program point at which the control flow has two or more alternative routes. A node with two or more links to separate branches.' (ISTQB definition)
Decision coverage is:
>> 'The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.' (ISTQB definition)

Decision Coverage:

The objective of decision coverage testing is to show all the decisions within a component have been executed at least once.
A decision can be described as a line of source code that asks a question.
For example:
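The slide's fragment is not included in these notes; in Python, for instance, any `if` (or loop) line is a decision (the variable names below are made up for illustration):

```python
balance = -5          # example data, chosen arbitrarily
overdrawn = False
if balance < 0:       # this line is the decision: control flow splits here
    overdrawn = True  # the 'yes' (True) outcome
# with no 'else', the 'no' (False) outcome simply falls through
print(overdrawn)
```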

If all decisions within a component have been exercised by a given set of tests, then 100% decision coverage has been achieved. However, if only half of the decision outcomes have been taken with a given set of tests, then you have achieved only 50% decision coverage.
Again, as with statement testing, the aim is to achieve the maximum amount of coverage with the minimum number of tests.

>> Decision testing derives test cases to execute specific decision outcomes, normally to increase decision coverage.
>> Decision testing is a form of control flow testing as it generates a specific flow of control through the decision points.

If we are required to carry out decision testing, the amount of decision coverage required for a component should be stated in the test requirements in the test plan.
We should aim to achieve at least the minimum coverage requirements with our test cases. If 100% decision coverage is not required, then we need to determine which areas of the component are the most important to test by this method.

>> Decision coverage is stronger than statement coverage.
>> 100% decision coverage for a component is achieved by exercising all decision outcomes in the component.
>> 100% decision coverage guarantees 100% statement coverage, but not vice versa.

Decision testing can be considered the next logical progression from statement testing, in that we are concerned not so much with executing every statement as with exercising the true and false outcomes of every decision.
As we saw in our earlier examples of statement testing, not every decision outcome has a statement (or statements) to execute.
If we achieve 100% decision coverage, we will have executed every outcome of every decision, regardless of whether there were associated statements or not.

>> Let's take an earlier example we used for statement testing:

>>This would require 2 tests to achieve 100% decision coverage, but only 1 test to achieve 100% statement coverage.

In this example there is one decision, and therefore 2 outcomes.
To achieve 100% decision coverage we could have two tests:
* Age less than 17 (answer 'yes')
* Age equal to or greater than 17 (answer 'no')
This is a greater number of tests than would be required for statement testing, as statements are associated with only one decision outcome (line 2).

>> Again, consider this earlier example :

>> We will need 2 tests to achieve 100% decision coverage, and also 2 tests to achieve 100% statement coverage.

This example would still result in two tests, as there is one decision and therefore 2 outcomes to test.
However, we would also need two tests to achieve 100% statement coverage, as there are statements associated with each outcome of the decision.
So, in this instance, statement and decision testing give us the same number of tests. Note that if 100% coverage is required, statement testing can give us the same number of tests as decision testing, BUT NEVER MORE!

>> Let's look at some more examples now...

>> We will need 3 tests to achieve 100% decision coverage, but only 1 test to achieve 100% statement coverage.

Here we have an example of a supplementary question, or a 'nested if'.
We have 2 decisions, so you may think that 4 tests would be required to achieve 100% decision coverage (two for each decision).
This is NOT the case! We can achieve 100% decision coverage with three tests - we need to exercise the 'yes' outcome from the first decision (line 1) twice, in order to subsequently exercise the 'yes' and then the 'no' outcome from the supplementary question (line 2).
We need a further, third test to ensure we exercise the 'no' outcome of the first decision (line 1).
There is only one decision outcome that has an associated statement - this means that 100% statement coverage can be achieved with one test.
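To make this concrete, here is an instrumented Python sketch (the identifiers are assumptions) that records which decision outcomes each test exercises, showing that three tests cover all four outcomes:

```python
outcomes = set()  # (decision, outcome) pairs exercised so far

def component(tank_empty, petrol_engine):
    """The nested-if example, instrumented to log each decision outcome."""
    outcomes.add(("line 1", tank_empty))          # first decision
    if tank_empty:
        outcomes.add(("line 2", petrol_engine))   # supplementary decision
        if petrol_engine:
            pass                                  # the only associated statement
    return None

component(True, True)    # exercises line 1 'yes' and line 2 'yes'
component(True, False)   # exercises line 1 'yes' again and line 2 'no'
component(False, False)  # exercises line 1 'no' (line 2 is never reached)

print(len(outcomes))     # all 4 decision outcomes -> 100% decision coverage
```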

>> As more statements are added, the tests for decision coverage are the same:

>> 3 tests to achieve 100% decision coverage, and 2 tests to achieve 100% statement coverage.

We have now introduced a statement that is associated with the 'no' outcome of the decision on line 2.
This change affects the number of tests required to achieve 100% statement coverage, but does NOT alter the number of tests required to achieve 100% decision coverage - it is still three!

>> And again, another example...

>> 3 tests to achieve both decision and statement coverage.

Finally, we have statements associated with each outcome of each decision - the number of tests to achieve 100% statement coverage and 100% decision coverage are now the same.

>> And the last example...

>> We will need 2 tests to achieve 100% decision coverage and 100% statement coverage.

We looked at this example of the 'If Then Else' structure when considering statement testing.
As the decisions are separate questions, we only need two tests to achieve 100% decision coverage (the same as the number required for statement coverage).
You may have thought that four tests were required - exercising the four different routes through the code - but remember, with decision testing our concern is to exercise each outcome of each decision at least once; as long as we have answered 'yes' and 'no' to each decision, we have satisfied the requirements of the technique.
The tests we have illustrated would need the following input conditions:
* A coffee drinker wanting cream.
* A non-coffee drinker not wanting cream (but milk).

Equally, we could have chosen the following input conditions:
* A coffee drinker not wanting cream (but milk).
* A non-coffee drinker wanting cream.

>> What about loops, then?

>> If we choose an initial value of p=4, we only need 1 test to achieve 100% statement and 100% decision coverage.

The control-flow graphs we showed earlier depicted a 'Do While' construct.
To reiterate, the 'Do While' structure will execute a section of code while a field or indicator is set to a certain value. For example,

The evaluation of the condition occurs before the code is executed.

Unlike the 'If Then Else', we can loop around the 'Do While' structure, which means that we exercise different routes through the code with one test.
As in the above diagram, if we set 'p' to an initial value of '4', the first time through the code we will:
* Go from line 1 to line 2
* Answer 'yes' to the 'If' on line 2 (If p < 5)
* Execute the statement on line 3 (p = p * 2, so p now equals 8)
* Go from line 3, through line 4, to line 5
* Execute the statement on line 5 (which adds 1 to 'p', making its value '9')
* Execute the statement on line 6, which takes us back up to line 1.

Again we execute the code, with the value of 'p' now '9':
* Go from line 1 to line 2
* Answer 'no' to the 'If' on line 2 (If p < 5)
* Go from line 2, through line 4, to line 5
* Execute the statement on line 5 (which adds 1 to 'p', making its value '10')
* Execute the statement on line 6, which takes us back up to line 1.

Once more we execute the code:
* Line 1 - 'p' is not less than '10' (it is equal to 10); therefore, we exit this structure.

1 test - it achieves 100% statement coverage and 100% decision coverage.
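Python has no 'Do While' keyword, but the loop traced above can be sketched with a `while`; the loop condition p < 10 is inferred from the trace, since the slide's code is not in these notes:

```python
def do_while(p):
    """Sketch of the traced loop; comments give the original line numbers."""
    while p < 10:       # line 1: condition evaluated before the body
        if p < 5:       # line 2
            p = p * 2   # line 3
        # line 4: end of the 'If'
        p = p + 1       # line 5
        # line 6: loop back to line 1
    return p

# One test, p = 4: first pass 4 -> 8 -> 9 ('yes' to the If),
# second pass 9 -> 10 ('no' to the If), then the loop exits.
print(do_while(4))  # -> 10
```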

>> And it's the same for the 'Do Until' structure.

>> If we choose an initial value of A = 15, we only need 1 test to achieve 100% decision coverage and 100% statement coverage.

The control flow structures we showed earlier also depicted a 'Do Until' structure.
To reiterate, the 'Do Until' structure will execute a section of code until a field or indicator is set to a certain value. For example,

The evaluation of the condition occurs after the code is executed.
Unlike the 'If Then Else', we can loop around the 'Do Until' structure, which means that we exercise different routes through the code with one test.
In the example above, if we set 'A' to an initial value of '15', the first time through the code we will:
* Go from line 1 to line 2
* Answer 'yes' to the 'If' on line 2 (If A < 20)
* Execute the statement on line 3 (A = A * 2, which makes A = 30)
* Go from line 3, through line 4, to line 5
* Execute the statement on line 5 (which adds 1 to 'A', making its value '31')
* Execute the statement on line 6, which takes us back to line 1.

Again we execute the code, with the value of 'A' now '31':
* Go from line 1 to line 2
* Answer 'no' to the 'If' on line 2 (If A < 20)
* Go from line 2, through line 4, to line 5
* Execute the statement on line 5 (which adds 1 to 'A', making its value '32')
* Execute the statement on line 6, which exits the structure ('A' is greater than 31).

1 test - it achieves 100% statement coverage and 100% decision coverage.
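Similarly, Python has no 'Do Until'; a `while True` loop with a trailing `break` models the post-tested structure, with the exit condition A > 31 inferred from the trace above:

```python
def do_until(a):
    """Sketch of the traced 'Do Until' loop; the exit condition is
    tested after the body has executed."""
    while True:          # line 1: Do
        if a < 20:       # line 2
            a = a * 2    # line 3
        # line 4: end of the 'If'
        a = a + 1        # line 5
        if a > 31:       # line 6: Until A > 31
            break
    return a

# One test, a = 15: first pass 15 -> 30 -> 31 (31 > 31 is false, so loop),
# second pass 31 -> 32 (32 > 31 is true), and the structure exits.
print(do_until(15))  # -> 32
```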

END...