Wednesday 26 September 2007

Automation Testing versus Manual Testing

Automation Testing versus Manual Testing Guidelines


Collected this info from a blog I read!!
I met with my team's automation experts a few weeks back to get their input on when to automate and when to manually test. The general rule of thumb has always been to use common sense. If you're only going to run the test one or two times or the test is really expensive to automate, it is most likely a manual test. But then again, what good is saying "use common sense" when you need to come up with a deterministic set of guidelines on how and when to automate?



Pros of Automation

* If you have to run a set of tests repeatedly, automation is a huge win for you
* It gives you the ability to run automation against code that frequently changes to catch regressions in a timely manner
* It gives you the ability to run automation in mainstream scenarios to catch regressions in a timely manner (see What is a Nightly)
* Aids in testing a large test matrix (different languages on different OS platforms). Automated tests can be run at the same time on different machines, whereas the manual tests would have to be run sequentially.

Cons of Automation

* It costs more to automate. Writing the test cases and writing or configuring the automation framework you're using costs more initially than running the test manually.
* You can't automate visual verification; for example, if you can't determine the font color via code or the automation tool, it has to be a manual test.

Pros of Manual

* If the test case only runs twice per coding milestone, it most likely should be a manual test. It costs less than automating it.
* It allows the tester to perform more ad-hoc (random) testing. In my experience, more bugs are found via ad-hoc testing than via automation. And the more time a tester spends playing with the feature, the greater the odds of finding real user bugs.

Cons of Manual

* Running tests manually can be very time consuming
* Each time there is a new build, the tester must rerun all required tests - which after a while would become very mundane and tiresome.

Other deciding factors

* What you automate depends on the tools you use. If the tools have limitations that prevent certain checks, those tests remain manual.
* Is the return on investment worth automating? Is what you get out of automation worth the cost of setting up and supporting the test cases, the automation framework, and the system that runs the test cases?

Criteria for automating

There are two sets of questions to determine whether automation is right for your test case:


Is this test scenario automatable?

1. Yes, and it will cost a little
2. Yes, but it will cost a lot
3. No, it is not possible to automate

How important is this test scenario?

1. I must absolutely test this scenario whenever possible
2. I need to test this scenario regularly
3. I only need to test this scenario once in a while

If you answered #1 to both questions – definitely automate that test

If you answered #1 or #2 to both questions – you should automate that test

If you answered #2 to both questions – you need to consider if it is really worth the investment to automate
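The matrix above is simple enough to jot down as a small helper. Here is a toy sketch in Python; the numeric codes mirror the answer choices above and the wording of the recommendations is my own, purely illustrative:

```python
def automation_recommendation(cost: int, importance: int) -> str:
    """Toy helper for the two questions above.

    cost:       1 = cheap to automate, 2 = expensive, 3 = not possible
    importance: 1 = must test whenever possible, 2 = test regularly, 3 = once in a while
    """
    if cost == 3:
        return "cannot automate - run manually, or look for new tools / test hooks"
    if cost == 1 and importance == 1:
        return "definitely automate"
    if cost == 2 and importance == 2:
        return "consider whether the automation investment is really worth it"
    if cost <= 2 and importance <= 2:
        return "should automate"
    return "probably keep it as a manual test"


print(automation_recommendation(1, 1))  # definitely automate
print(automation_recommendation(2, 2))  # consider whether the automation investment is really worth it
```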


What happens if you can’t automate?

Let’s say that you have a test that you absolutely need to run whenever possible, but it isn’t possible to automate. Your options are

* Reevaluate – do I really need to run this test this often?
* What’s the cost of doing this test manually?
* Look for new testing tools
* Consider test hooks

Software Testing Types

Software Testing Glossary


Acceptance Testing: Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate the software meets a set of agreed acceptance criteria.

Accessibility Testing: Verifying that a product is accessible to people with disabilities (e.g. visual, hearing, motor or cognitive impairments).

Ad Hoc Testing: A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well. See also Monkey Testing.

Agile Testing: Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.

Application Binary Interface (ABI): A specification defining requirements for portability of applications in binary form across different system platforms and environments.

Application Programming Interface (API): A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.

Automated Software Quality (ASQ): The use of software tools, such as automated testing tools, to improve software quality.

Automated Testing:

* Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing.
* The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.
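As a minimal sketch of that second definition (software controlling test execution and comparing actual to predicted outcomes), here is a Python unittest example; the add_tax function is invented purely for illustration:

```python
import unittest


def add_tax(amount: float, rate: float = 0.07) -> float:
    """Hypothetical function under test."""
    return round(amount * (1 + rate), 2)


class AddTaxTests(unittest.TestCase):
    def test_actual_matches_predicted(self):
        # the predicted outcome is fixed up front; the framework does the comparison
        self.assertEqual(add_tax(100.0), 107.0)

    def test_zero_amount(self):
        self.assertEqual(add_tax(0.0), 0.0)


if __name__ == "__main__":
    unittest.main()  # runs unattended, e.g. from a nightly job
```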

B

Backus-Naur Form: A metalanguage used to formally describe the syntax of a language.

Basic Block: A sequence of one or more consecutive, executable statements containing no branches.

Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests.

Basis Set: The set of tests derived using basis path testing.

Baseline: The point at which some deliverable produced during the software engineering process is put under formal change control.

Benchmark Testing: Tests that use representative sets of programs and data designed to evaluate the performance of computer hardware and software in a given configuration.

Beta Testing: Testing of a pre-release version of a software product conducted by customers.

Binary Portability Testing: Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.

Black Box Testing: Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.

Bottom Up Testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.

Boundary Testing: Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)

Boundary Value Analysis: In boundary value analysis, test cases are generated using the extremes of the input domain, e.g. maximum, minimum, just inside/outside boundaries, typical values, and error values. BVA is similar to Equivalence Partitioning but focuses on "corner cases".
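For example, if a field is specified to accept integers from 1 to 100 (an assumed spec, just for illustration), a boundary-value test set might look like this:

```python
# Hypothetical spec: the field accepts integers 1..100
LOWER, UPPER = 1, 100


def is_valid(value: int) -> bool:
    """Stand-in for the component under test."""
    return LOWER <= value <= UPPER


boundary_cases = [
    LOWER - 1,  # just outside the lower boundary (error value)
    LOWER,      # on the lower boundary
    LOWER + 1,  # just inside the lower boundary
    50,         # typical value
    UPPER - 1,  # just inside the upper boundary
    UPPER,      # on the upper boundary
    UPPER + 1,  # just outside the upper boundary (error value)
]

for value in boundary_cases:
    print(value, "accepted" if is_valid(value) else "rejected")
```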

Branch Testing: Testing in which all branches in the program source code are tested at least once.

Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail.

Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner.

C

CAST: Computer Aided Software Testing.

Capture/Replay Tool: A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.

CMM: The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.

Cause Effect Graph: A graphical representation of inputs and the associated output effects which can be used to design test cases.

Code Complete: Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.

Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

Code Walkthrough: A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.

Coding: The generation of source code.

Compatibility Testing: Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.

Component: A minimal software item for which a separate specification is available.

Component Testing: See Unit Testing.

Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.

Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.

Context Driven Testing: The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.

Conversion Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

Cyclomatic Complexity: A measure of the logical complexity of an algorithm, used in white-box testing.
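One common way to compute it is V(G) = E - N + 2P (edges, nodes and connected components of the control-flow graph), which for a single routine works out to the number of decision points plus one. A small made-up example:

```python
def grade(score: int) -> str:
    # two decision points (the two ifs), so cyclomatic complexity = 2 + 1 = 3
    if score >= 90:
        return "A"
    if score >= 60:
        return "pass"
    return "fail"


# Basis path testing would therefore aim for at least 3 independent paths, e.g.:
print(grade(95), grade(75), grade(40))  # A pass fail
```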

D

Data Dictionary: A database that contains definitions of all data items defined during analysis.

Data Flow Diagram: A modeling notation that represents a functional decomposition of a system.

Data Driven Testing: Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
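A minimal sketch of the idea in Python: the test action stays fixed while the data varies. In practice the rows would live in an external CSV file or spreadsheet; they are embedded here only to keep the sketch self-contained, and the login function is invented for illustration:

```python
import csv
import io

# In a real data-driven test this would be an external file, e.g. a credentials CSV
CSV_DATA = """username,password,expected
admin,secret,True
admin,wrong,False
guest,guest,False
"""


def login(username: str, password: str) -> bool:
    """Stand-in for the action under test."""
    return username == "admin" and password == "secret"


for row in csv.DictReader(io.StringIO(CSV_DATA)):
    expected = row["expected"] == "True"
    actual = login(row["username"], row["password"])
    print("PASS" if actual == expected else "FAIL", row["username"])
```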

Debugging: The process of finding and removing the causes of software failures.

Defect: Nonconformance to requirements or functional / program specification.

Dependency Testing: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

Depth Testing: A test that exercises a feature of a product in full detail.

Dynamic Testing: Testing software through executing it. See also Static Testing.

E
Emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.

Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution.

End-to-End testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Equivalence Class: A portion of a component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification.

Equivalence Partitioning: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
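For example, if an age field is specified to accept values from 18 to 65 (an assumed spec), the input domain falls into three equivalence classes and one representative per class is usually enough:

```python
# Hypothetical spec: ages 18..65 are accepted
def is_eligible(age: int) -> bool:
    return 18 <= age <= 65


# One representative value per equivalence class
representatives = {
    "below range (invalid class)": 10,
    "within range (valid class)": 40,
    "above range (invalid class)": 70,
}

for partition, age in representatives.items():
    print(partition, "->", "accepted" if is_eligible(age) else "rejected")
```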

Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.

F

Functional Decomposition: A technique used during planning, analysis and design; creates a functional hierarchy for the software.

Functional Specification: A document that describes in detail the characteristics of the product with regard to its intended features.

Functional Testing: See also Black Box Testing.

* Testing the features and operational behavior of a product to ensure they correspond to its specifications.
* Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.

G

Glass Box Testing: A synonym for White Box Testing.

Gorilla Testing: Testing one particular module or functionality heavily.

Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.

H

High Order Tests: Black-box tests conducted once the software has been integrated.

I
Independent Test Group (ITG): A group of people whose primary responsibility is software testing.

Inspection: A group review quality improvement process for written material. It consists of two aspects: product (the document itself) improvement and process improvement (of both document production and inspection).

Integration Testing: Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.

Installation Testing: Confirms that the application under test installs, loads, and runs correctly in its target environments and configurations.

J

K

L

Load Testing: See Performance Testing.

Localization Testing: Testing software that has been adapted for a specific locale, verifying that language, formats, and other locale-specific behavior are correct.

Loop Testing: A white box testing technique that exercises program loops.

M

Metric: A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something such as number of bugs per lines of code.

Monkey Testing: Testing a system or application on the fly, i.e. a few random tests here and there, to ensure the system or application does not crash.

Mutation Testing: Testing in which bugs are purposely added to the application to check whether the existing tests detect them.
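A toy illustration of the idea: deliberately flip one operator (the "mutant") and check whether the existing tests notice. Everything below is invented for illustration:

```python
def max_of(a, b):
    """Original code."""
    return a if a > b else b


def max_of_mutant(a, b):
    """Mutant: the comparison operator has been deliberately flipped."""
    return a if a < b else b


def existing_tests_pass(fn) -> bool:
    """The existing test suite, run against either version."""
    return fn(2, 5) == 5 and fn(7, 3) == 7


print("original passes:", existing_tests_pass(max_of))             # True
print("mutant killed:  ", not existing_tests_pass(max_of_mutant))  # True - the suite caught the bug
```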

N

Negative Testing: Testing aimed at showing software does not work. Also known as "test to fail". See also Positive Testing.

N+1 Testing: A variation of Regression Testing. Testing conducted with multiple cycles in which errors found in test cycle N are resolved and the solution is retested in test cycle N+1. The cycles are typically repeated until the solution reaches a steady state and there are no errors. See also Regression Testing.

O

P

Path Testing: Testing in which all paths in the program source code are tested at least once.

Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".

Positive Testing: Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.

Q

Quality Assurance: All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.

Quality Audit: A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.

Quality Circle: A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.

Quality Control: The operational techniques and the activities used to fulfill and verify requirements of quality.

Quality Management: That aspect of the overall management function that determines and implements the quality policy.

Quality Policy: The overall intentions and direction of an organization as regards quality as formally expressed by top management.

Quality System: The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.

R

Race Condition: A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.
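A minimal sketch of the problem using Python threads: two writers update a shared counter with no lock, so interleaved read-modify-write updates can be lost (how often this shows up depends on the interpreter and timing):

```python
import threading

counter = 0  # shared resource, deliberately unprotected


def increment_many(times: int) -> None:
    global counter
    for _ in range(times):
        counter += 1  # read-modify-write with no lock: not atomic


threads = [threading.Thread(target=increment_many, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# May print less than 200000 because concurrent updates can overwrite each other;
# wrapping the increment in a threading.Lock() removes the race.
print(counter)
```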

Ramp Testing: Continuously raising an input signal until the system breaks down.

Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.

Regression Testing: Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.

Release Candidate: A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).

S

Sanity Testing: Brief test of major functional elements of a piece of software to determine if it's basically operational. See also Smoke Testing.

Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in work load.

Security Testing: Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.

Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

Soak Testing: Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.

Software Requirements Specification: A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.

Software Testing: A set of activities conducted with the intent of finding errors in software.

Static Analysis: Analysis of a program carried out without executing the program.

Static Analyzer: A tool that carries out static analysis.

Static Testing: Analysis of a program carried out without executing the program.

Storage Testing: Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.

Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.

Structural Testing: Testing based on an analysis of internal workings and structure of a piece of software. See also White Box Testing.

System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.

T

Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.

Testing:

* The process of exercising software to verify that it satisfies specified requirements and to detect errors.
* The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).
* The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

Test Automation: See Automated Testing.

Test Bed: An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.

Test Case:

* Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as the requirement being tested, test steps, verification steps, prerequisites, outputs, test environment, etc.
* A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

Test Driven Development: Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, often roughly as many lines of test code as production code.
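A minimal sketch of the rhythm, under an assumed requirement ("turn a title into a URL slug"): the unit test is written first and fails, then just enough production code is written to make it pass:

```python
import unittest


# Step 2: production code, written only after the test below existed and failed
def slugify(title: str) -> str:
    return "-".join(title.lower().split())


# Step 1: the test, written first
class SlugifyTests(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_single_word_is_lowercased(self):
        self.assertEqual(slugify("Testing"), "testing")


if __name__ == "__main__":
    unittest.main()
```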

Test Driver: A program or test tool used to execute tests. Also known as a Test Harness.

Test Environment: The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test including stubs and test drivers.

Test First Design: Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.

Test Harness: A program or test tool used to execute tests. Also known as a Test Driver.

Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref IEEE Std 829.

Test Procedure: A document providing detailed instructions for the execution of one or more test cases.

Test Scenario: Definition of a set of test cases or test scripts and the sequence in which they are to be executed.

Test Script: Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.

Test Specification: A document specifying the test approach for a software feature or combination or features and the inputs, predicted results and execution conditions for the associated tests.

Test Suite: A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product for example. In most cases however a Test Suite is a high level concept, grouping together hundreds or thousands of tests related by what they are intended to test.

Test Tools: Computer programs used in the testing of a system, a component of the system, or its documentation.

Thread Testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.

Top Down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

Total Quality Management: A company commitment to develop a process that achieves high quality product and customer satisfaction.

Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases.

U

Usability Testing: Testing the ease with which users can learn and use a product.

Use Case: The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.

User Acceptance Testing: A formal product evaluation performed by a customer as a condition of purchase.

Unit Testing: Testing of individual software components.

V

Validation: The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection and reviewing.

Verification: The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing.

Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.

W
Walkthrough: A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review.

White Box Testing: Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing. Contrast with Black Box Testing.

Workflow Testing: Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.

X

Y

Z


Software Testing Types

COMPATIBILITY TESTING. Testing to ensure compatibility of an application or Web site with different browsers, OSs, and hardware platforms. Compatibility testing can be performed manually or can be driven by an automated functional or regression test suite.

CONFORMANCE TESTING. Verifying implementation conformance to industry standards. Producing tests for the behavior of an implementation to be sure it provides the portability, interoperability, and/or compatibility a standard defines.

FUNCTIONAL TESTING. Validating that an application or Web site conforms to its specifications and correctly performs all its required functions. This entails a series of tests which perform a feature-by-feature validation of behavior, using a wide range of normal and erroneous input data. This can involve testing of the product's user interface, APIs, database management, security, installation, networking, etc. Functional testing can be performed on an automated or manual basis using black box or white box methodologies.

LOAD TESTING. Load testing is a generic term covering Performance Testing and Stress Testing.

PERFORMANCE TESTING. Performance testing can be applied to understand your application or WWW site's scalability, or to benchmark the performance in an environment of third party products such as servers and middleware for potential purchase. This sort of testing is particularly useful to identify performance bottlenecks in high use applications. Performance testing generally involves an automated test suite as this allows easy simulation of a variety of normal, peak, and exceptional load conditions.
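A bare-bones sketch of simulating concurrent load and collecting response times is shown below; the URL and the user/request counts are placeholders, and a real performance test would normally use a dedicated tool rather than hand-rolled code:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://example.com/"  # placeholder target
USERS = 10                   # simulated concurrent users
REQUESTS_PER_USER = 5


def one_user(_) -> list:
    """One simulated user issuing a handful of requests and timing each."""
    timings = []
    for _request in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        urlopen(URL).read()
        timings.append(time.perf_counter() - start)
    return timings


with ThreadPoolExecutor(max_workers=USERS) as pool:
    all_timings = [t for user_timings in pool.map(one_user, range(USERS)) for t in user_timings]

print("mean response (s):", round(statistics.mean(all_timings), 3))
print("max response (s): ", round(max(all_timings), 3))
```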

REGRESSION TESTING. Similar in scope to a functional test, a regression test allows a consistent, repeatable validation of each new release of a product or Web site. Such testing ensures reported product defects have been corrected for each new release and that no new quality problems were introduced in the maintenance process. Though regression testing can be performed manually an automated test suite is often used to reduce the time and resources needed to perform the required testing.

SMOKE TESTING. A quick-and-dirty test that the major functions of a piece of software work without bothering with finer details. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

STRESS TESTING. Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. A graceful degradation under load leading to non-catastrophic failure is the desired result. Often Stress Testing is performed using the same process as Performance Testing but employing a very high level of simulated load.

UNIT TESTING. Functional and reliability testing in an Engineering environment. Producing tests for the behavior of components of a product to ensure their correct behavior prior to system integration.

Test Director

TestDirector:

TestDirector is Mercury Interactive’s software test management tool. It helps quality assurance personnel plan and organize the testing process. With TestDirector you can create a database of manual and automated tests, build test cycles, run tests, and report and track defects. You can also create reports and graphs to help review the progress of planning tests, running tests, and tracking defects before a software release.

What is the use of Test Director software?
>>Test Director is an efficient test management tool, typically used by testers with more than 2-3 years of experience. Test Leads/Quality Leads mostly use this tool for managing, organizing and planning software projects. It is an easy tool to understand.

>>Using TestDirector, we can define application requirements and testing objectives, design test plans and develop test cases, create automated scripts and store them in the repository, run manual and automated tests, report execution results, enter defects, review and fix defects by logging into the database. Using application status reports we can decide whether an application is ready to be released.


>>But we do not create scripts in Test Director; that is only done in WinRunner, right?

>>Test Director is a test management tool with which we can manage our entire testing process. It is a central repository where we can store our requirements, test plans, test cases and tests scripts and execute the test cases and test scripts. We can share the work with other QA testers using Test Director since it is a web based test management tool.


>>Through TD we can generate automated scripts using either WinRunner or QTP.

>>TestDirector is Mercury Interactive’s software test management tool (as described above). It is used to manage test assets and is very useful for technical analysts.

>>We can store scripts generated by WinRunner; whenever you want to run a script, you can launch it from Test Director.

>>We can create test scripts in Test Director.

>>Test Director software is a test management tool which helps us, or a new person, to know the status of the functions present in an application. The status of the application covers things such as whether a function passed or failed, the status of defects, analyzing reports, writing test cases, creating subjects, and gathering requirements.



>>Test Director is a test management tool where we manage the entire testing process. We can define requirements, design test plans, Test cases, Test script and execute them. We can integrate the tool with automation tools. It can be used as a defect tracking tool.

>>WinRunner generates automation scripts in TSL, but 'test scripts' are written manually and can be stored in TD's repository.

>>Test Director is a test management tool. Usually Test Director is used to design test plans, test cases and bug reports. It uses the Business Requirements Document (BRD) when comparing expected and actual results.

>>Test Director is Mercury Interactive’s software test case management & defect tracking tool. It helps in planning & organizing the testing process. With TestDirector you can create a database of manual and automated tests, build test cycles, run tests, and report and track defects. You can also create reports and graphs to help review the progress of planning tests, running tests, and tracking defects before a software release.


>>TestDirector has since been renamed Quality Center (QC) by Mercury. It really is a test management tool that helps quality assurance personnel plan and organize the entire software testing process. It has four modules: Requirements, Test Plan, Run Tests (Test Lab) and Defects.

Test Management - Interesting Questions

Test Management - Interesting questions for newcomers

1. Why test - what is Testing?
Testing is a process used to help identify the correctness, completeness and quality of developed computer software.

2. System Testing myths and legends - What are they?
Myth1: There is no need to test
Myth2: If testing must be done; two weeks at the end of the project is sufficient for testing
Myth3: Re-testing is not necessary
Myth4: Any fool can test
Myth5: The last thing you want is users involved in test
Myth6: The V-model is too complicated

3. What are the Concepts for Application Test Management?
Testing should be pro-active following the V-model
Test execution can be a manual process
Test execution can be an automated process
It is possible to plan the start date for testing
It is not possible to accurately plan the end date of testing
Ending testing is through risk assessment
A fool with a tool is still a fool
Testing is not a diagnosis process
Testing is a triage process
Testing is expensive
Not testing, can be more expensive

4. What Test Principles do you Recommend?
• Test involvement early in the lifecycle - Test Architect signs off Requirements - Test Architect signs off Use Cases
• Fail Fast - Identify failures early via core test scripts
• All Test Phases have equal value - Each Test Phase has its own value add
• RACI chart everything
• Testing is a pro-active activity - Plan the Test - Test the Plan
• Finding defects is good - Ignorance of faults in a non-conformant system is no excuse

5. Test Analysts - What is their Value Add?
Understand the system under test
Document Assumptions
Create and execute repeatable tests
Value add through negative testing
Contribute to Impact Analysis when assessing Changes
Contribute to the risk assessment when considering whether to end testing

6. What do Test Analysts Need?
Education
Test Environment
Test Tools
Access

Requirements Traceability

7. What is this about?
Tracing requirements to test cases
Tracing test cases to requirements
Should be a feature of the Test Asset Management tool
Automatic on-demand process
Pie chart reporting

8. What is involved in the Application Test Lifecycle?
Unit testing
Module testing
Component testing
Component integration testing
Subsystem testing
System testing
Functional testing
Technical integration testing
System integration testing
Non-functional testing
Integration testing
Regression testing
Model Office testing
User Acceptance testing

9. How to manage Risk Mitigation?
Identify risks before the adversity affects the project
Analyse risk data for interpretation by the project team
Plan actions for probability, magnitude & consequences
Track risks and actions, maintaining a risk register
Control risk action plan, correct plan deviations

10. What should the Test Team do?
Programme Management
Strong Change Management
Strict Configuration Control
Pro Active Scope Creep Management
Inclusion in the decision making process

11. What are the Test Team Deliverables?
Test Plans
Test Script Planner
Test Scripts
Test Execution Results
Defect Reports

Sunday 16 September 2007

General Interview Questions!!

1. Tell me about yourself
The most often asked question in interviews. You need to have a short statement prepared in your mind. Be careful that it does not sound rehearsed. Limit it to work-related items unless instructed otherwise. Talk about things you have done and jobs you have held that relate to the position you are interviewing for. Start with the item farthest back and work up to the present.

2. Why did you leave your last job?
Stay positive regardless of the circumstances. Never refer to a major problem with management and never speak ill of supervisors, co-workers or the organization. If you do, you will be the one looking bad. Keep smiling and talk about leaving for a positive reason such as an opportunity, a chance to do something special or other forward-looking reasons.

3. What experience do you have in this field?
Speak about specifics that relate to the position you are applying for. If you do not have specific experience, get as close as you can.

4. Do you consider yourself successful?
You should always answer yes and briefly explain why. A good explanation is that you have set goals, and you have met some and are on track to achieve the others.

5. What do co-workers say about you?
Be prepared with a quote or two from co-workers. Either a specific statement or a paraphrase will work.

6. What do you know about this organization?
This question is one reason to do some research on the organization before the interview. Find out where they have been and where they are going. What are the current issues and who are the major players?

7. What have you done to improve your knowledge in the last year?
Try to include improvement activities that relate to the job. A wide variety of activities can be mentioned as positive self-improvement. Have some good ones handy to mention.

8. Are you applying for other jobs?
Be honest but do not spend a lot of time in this area. Keep the focus on this job and what you can do for this organization. Anything else is a distraction.

9. Why do you want to work for this organization?
This may take some thought and certainly, should be based on the research you have done on the organization. Sincerity is extremely important here and will easily be sensed. Relate it to your long-term career goals.

10. Do you know anyone who works for us?
Be aware of the policy on relatives working for the organization. This can affect your answer even though they asked about friends not relatives. Be careful to mention a friend only if they are well thought of.

11. What kind of salary do you need?
A loaded question. A nasty little game that you will probably lose if you answer first. So, do not answer it. Instead, say something like, That's a tough question. Can you tell me the range for this position? In most cases, the interviewer, taken off guard, will tell you. If not, say that it can depend on the details of the job. Then give a wide range.

12. Are you a team player?
You are, of course, a team player. Be sure to have examples ready. Specifics that show you often perform for the good of the team rather than for yourself are good evidence of your team attitude. Do not brag, just say it in a matter-of-fact tone. This is a key point.

13. How long would you expect to work for us if hired?
Specifics here are not good. Something like this should work: I'd like it to be a long time. Or As long as we both feel I'm doing a good job.

14. Have you ever had to fire anyone? How did you feel about that?
This is serious. Do not make light of it or in any way seem like you like to fire people. At the same time, you will do it when it is the right thing to do. When it comes to the organization versus the individual who has created a harmful situation, you will protect the organization. Remember firing is not the same as layoff or reduction in force.

15. What is your philosophy towards work?
The interviewer is not looking for a long or flowery dissertation here. Do you have strong feelings that the job gets done? Yes. That's the type of answer that works best here. Short and positive, showing a benefit to the organization.

16. If you had enough money to retire right now, would you?
Answer yes if you would. But since you need to work, this is the type of work you prefer. Do not say yes if you do not mean it.

17. Have you ever been asked to leave a position?
If you have not, say no. If you have, be honest, brief and avoid saying negative things about the people or organization involved.

18. Explain how you would be an asset to this organization
You should be anxious for this question. It gives you a chance to highlight your best points as they relate to the position being discussed. Give a little advance thought to this relationship.

19. Why should we hire you?
Point out how your assets meet what the organization needs. Do not mention any other candidates to make a comparison.

20. Tell me about a suggestion you have made
Have a good one ready. Be sure and use a suggestion that was accepted and was then considered successful. One related to the type of work applied for is a real plus.

21. What irritates you about co-workers?
This is a trap question. Think real hard but fail to come up with anything that irritates you. A short statement that you seem to get along with folks is great.

22. What is your greatest strength?
Numerous answers are good, just stay positive. A few good examples: Your ability to prioritize, Your problem-solving skills, Your ability to work under pressure, Your ability to focus on projects, Your professional expertise, Your leadership skills, Your positive attitude .

23. Tell me about your dream job.
Stay away from a specific job. You cannot win. If you say the job you are contending for is it, you strain credibility. If you say another job is it, you plant the suspicion that you will be dissatisfied with this position if hired. The best is to stay generic and say something like: A job where I love the work, like the people, can contribute and can't wait to get to work.

24. Why do you think you would do well at this job?
Give several reasons and include skills, experience and interest.

25. What are you looking for in a job?
See answer # 23

26. What kind of person would you refuse to work with?
Do not be trivial. It would take disloyalty to the organization, violence or lawbreaking to get you to object. Minor objections will label you as a whiner.

27. What is more important to you: the money or the work?
Money is always important, but the work is the most important. There is no better answer.

28. What would your previous supervisor say your strongest point is?
There are numerous good possibilities: Loyalty, Energy, Positive attitude, Leadership, Team player, Expertise, Initiative, Patience, Hard work, Creativity, Problem solver

29. Tell me about a problem you had with a supervisor
Biggest trap of all. This is a test to see if you will speak ill of your boss. If you fall for it and tell about a problem with a former boss, you may well blow the interview right there. Stay positive and develop a poor memory about any trouble with a supervisor.

30. What has disappointed you about a job?
Don't get trivial or negative. Safe areas are few but can include: not enough of a challenge; you were laid off in a reduction; the company did not win a contract which would have given you more responsibility.

31. Tell me about your ability to work under pressure.
You may say that you thrive under certain types of pressure. Give an example that relates to the type of position applied for.

32. Do your skills match this job or another job more closely?
Probably this one. Do not give fuel to the suspicion that you may want another job more than this one.

33. What motivates you to do your best on the job?
This is a personal trait that only you can say, but good examples are: Challenge, Achievement, Recognition

34. Are you willing to work overtime? Nights? Weekends?
This is up to you. Be totally honest.

35. How would you know you were successful on this job?
Several ways are good measures: You set high standards for yourself and meet them. Your outcomes are a success. Your boss tells you that you are successful.

36. Would you be willing to relocate if required?
You should be clear on this with your family prior to the interview if you think there is a chance it may come up. Do not say yes just to get the job if the real answer is no. This can create a lot of problems later on in your career. Be honest at this point and save yourself future grief.

37. Are you willing to put the interests of the organization ahead of your own?
This is a straight loyalty and dedication question. Do not worry about the deep ethical and philosophical implications. Just say yes.

38. Describe your management style.
Try to avoid labels. Some of the more common labels, like progressive, salesman or consensus, can have several meanings or descriptions depending on which management expert you listen to. The situational style is safe, because it says you will manage according to the situation, instead of one size fits all.

39. What have you learned from mistakes on the job?
Here you have to come up with something or you strain credibility. Make it a small, well-intentioned mistake with a positive lesson learned. An example would be working too far ahead of colleagues on a project and thus throwing coordination off.

40. Do you have any blind spots?
Trick question. If you know about blind spots, they are no longer blind spots. Do not reveal any personal areas of concern here. Let them do their own discovery on your bad points. Do not hand it to them.

41. If you were hiring a person for this job, what would you look for?
Be careful to mention traits that are needed and that you have.

42. Do you think you are overqualified for this position?
Regardless of your qualifications, state that you are very well qualified for the position.

43. How do you propose to compensate for your lack of experience?
First, if you have experience that the interviewer does not know about, bring that up: Then, point out (if true) that you are a hard working quick learner.

44. What qualities do you look for in a boss?
Be generic and positive. Safe qualities are knowledgeable, a sense of humor, fair, loyal to subordinates and holder of high standards. All bosses think they have these traits.

45. Tell me about a time when you helped resolve a dispute between others.
Pick a specific incident. Concentrate on your problem solving technique and not the dispute you settled.

46. What position do you prefer on a team working on a project?
Be honest. If you are comfortable in different roles, point that out.

47. Describe your work ethic.
Emphasize benefits to the organization. Things like, determination to get the job done and work hard but enjoy your work are good.

48. What has been your biggest professional disappointment?
Be sure that you refer to something that was beyond your control. Show acceptance and no negative feelings.

49. Tell me about the most fun you have had on the job.
Talk about having fun by accomplishing something for the organization.

50. Do you have any questions for me?
Always have some questions prepared. Questions prepared where you will be an asset to the organization are good. How soon will I be able to be productive? and What type of projects will I be able to assist on? are examples.

Full Lifecycle Testing Concept

What is Software Testing?

>>>Process of identifying defects

Defect is any variance between actual and expected results

>>>Testing should intentionally attempt to make things go wrong to determine if

things happen when they shouldn't or

things don't happen when they should

Types of Testing

>>> Static Testing

>>> Dynamic Testing

Static Testing

>> Involves testing of the development work products before any code is developed.

> Ex

Plan reviews

Requirements walkthroughs

Design or code inspections

Test plan inspections

Test case reviews

Dynamic Testing

>> Process of validation by exercising or operating a work product under scrutiny & observing its behavior to changing inputs or environments

>>Some representative examples of dynamic testing are

Executing test cases in a working system

Simulating usage scenarios with real end-users to test usability

Parallel testing in a production environment

Purpose of Static Testing

>> Early removal of the Defects in the software development life cycle

>>Increases the productivity by shortening the testing lifecycles & reducing work

>>> Increases the Quality of the project deliverables

Static Testing Techniques

>Prototyping

>Desk checks

>Checklists

>Mapping

>Reviews

>Walkthroughs

>Inspections

>Prototyping
Prototyping is reviewing a work product model (initial version) defined without the full capability of the proposed final product

- The prototype is demonstrated and the result of the exercise is evaluated, leading to an enhanced design

Used to verify and validate the User Interface design and to confirm usability

>Desk checks

This technique involves the process of a product author reading his or her own product to identify defects

>> Checklists

Checklists are a series of common items, or prompting questions to verify completeness of task steps.

>> Mapping

The mapping technique identifies functions against the specification and shows how each function, directly or indirectly, maps to the requirements.

- Used to map Test Scripts to Test Cases, Test Conditions to Test Scripts, etc.

> Reviews

Reviews are a useful mechanism for getting feedback quickly from peers and team members

> Walkthroughs

Walkthroughs are generally run as scheduled meetings and participants are invited to attend.

- The minutes of the meeting are recorded, as are the issues and action items resulting from the meeting.

- Owners are assigned for issues and actions and there is generally follow up done.

> Inspections

Defect detection activity, aimed at producing a "defect free" work product, before it is passed on to the next phase of the development process

> Objectives of an Inspection.

Increase quality and productivity

Minimize costs and cycle elapsed time

Facilitate project management

Inspection Process

> An inspection has the following key properties

A moderator

Definite participant roles

Author, Inspector, Recorder

Stated entry and exit criteria

Clearly defined defect types

A record of detected defects

Re-inspection criteria

Detected defect feedback to author

Follow-up to ensure defects are fixed / resolved

Inspection Process Results

> Major defects

Missing (M) - an item is missing

Wrong (W) - an item has been implemented incorrectly.

Extra (E) - an item is included which is not part of the specifications.

Issues (I) - an item not implemented in a satisfactory manner

Suggestion (S) - suggestion to improve the work product.

> Minor defects

Clarity in comments and description

Insufficient / excessive documentation

Incorrect spelling / punctuation

Testing Techniques

>Black Box Testing

The testers have an "outside" view of the system.

They are concerned with "what is done" NOT "how it is done."

>White Box Testing

In the White Box approach, the testers have an inside view of the system. They are concerned with "how it is done" NOT "what is done."

Levels of Testing

  • Unit testing

  • Integration testing

  • System testing

  • Systems integration testing

  • User acceptance testing

Unit Testing

>Unit level test is the initial testing of new and changed code in a module.

>Verifies the program specifications to the internal logic of the program or module and validates the logic.

Integration Testing

> Integration level tests verify proper execution of application components and do not require that the application under test interface with other applications

>Communication between modules within the sub-system is tested in a controlled and isolated environment within the project

System Testing

System level tests verify proper execution of the entire application components including interfaces to other applications

> Functional and structural types of tests are performed to verify that the system is functionally and operationally sound.

Systems Integration Testing

>Systems Integration testing is a test level which verifies the integration of all applications

Includes interfaces internal and external to the organization, with their hardware, software and infrastructure components.

> Carried out in a production-like environment

User Acceptance Testing

> Verify that the system meets user requirements as specified.

> Simulates the user environment and emphasizes security, documentation and regression tests

> Demonstrate that the system performs as expected to the sponsor and end-user so that they may accept the system.

Types of Tests

> Functional Testing

> Structural Testing

Functional Testing

> Audit and Controls testing

> Conversion testing

> Documentation & Procedures testing

> Error Handling testing

> Functions / Requirements testing

> Interface / Inter-system testing

> Installation testing

> Parallel testing

> Regression testing

> Transaction Flow (Path) testing

> Usability testing

Audit And Controls Testing

> Verifies the adequacy and effectiveness of controls and ensures the capability to prove the completeness of data processing results

Their validity would have been verified during design

> Normally carried out as part of System Testing once the primary application functions have been stabilized

Conversion Testing

> Verifies the compatibility of the converted program, data, and procedures with those from existing systems that are being converted or replaced.

> Most programs that are developed for conversion purposes are not totally new. They are often enhancements or replacements for old, deficient, or manual systems.

> The conversion may involve files, databases, screens, report formats, etc.

User Documentation And Procedures Testing

> Ensures that the interface between the system and the people works and is useable.

> Done as part of procedure testing to verify that the instruction guides are helpful and accurate.

Both areas of testing are normally carried out late in the cycle as part of System Testing or in the UAT.

> Not generally done until the externals of the system have stabilized.

Ideally, the persons who will use the documentation and procedures are the ones who should conduct these tests.

Error-Handling Testing

> Error-handling is the system function for detecting and responding to exception conditions (such as erroneous input)

> Ensures that incorrect transactions will be properly processed and that the system will terminate in a controlled and predictable way in case of a disastrous failure
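A small sketch of the idea: feed erroneous input and assert that the system responds in the defined, controlled way instead of crashing. The parse_age routine is invented for illustration:

```python
import unittest


def parse_age(text: str) -> int:
    """Hypothetical input routine: rejects non-numeric or out-of-range ages."""
    if not text.isdigit():
        raise ValueError("age must be a whole number")
    age = int(text)
    if age > 130:
        raise ValueError("age out of range")
    return age


class ErrorHandlingTests(unittest.TestCase):
    def test_rejects_non_numeric_input(self):
        with self.assertRaises(ValueError):
            parse_age("twenty")

    def test_rejects_out_of_range_input(self):
        with self.assertRaises(ValueError):
            parse_age("200")

    def test_accepts_valid_input(self):
        self.assertEqual(parse_age("30"), 30)


if __name__ == "__main__":
    unittest.main()
```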

Function Testing

> Function Testing verifies, at each stage of development, that each business function operates as stated in the Requirements and as specified in the External and Internal Design documents.

> Function testing is usually completed in System Testing so that by the time the system is handed over to the user for UAT, the test group has already verified that the system meets requirements.

Installation Testing

> Any application that will be installed and run in an environment remote from the development location requires installation testing.

This is especially true of network systems that may be run in many locations.

This is also the case with packages where changes were developed at the vendor's site.

Necessary if the installation is complex, critical, should be completed in a short window, or of high volume such as in microcomputer installations.

This type of testing should always be performed by those who will perform the installation process

Interface / Inter-system Testing

> Application systems often interface with other application systems. Most often, there are multiple applications involved in a single project implementation.

> Ensures that the interconnections between applications function correctly.

> More complex if the applications operate on different platforms, in different locations or use different languages.

Parallel Testing

> Parallel testing compares the results of processing the same data in both the old and new systems.

> Parallel testing is useful when a new application replaces an existing system, when the same transaction input is used in both, and when the output from both is reconcilable.

> Useful when switching from a manual system to an automated system.
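
A minimal sketch of the idea, assuming two invented payroll routines (old and new): feed the same transactions to both and reconcile the results.

    # Parallel-testing sketch: same input through the old and new implementations.

    def old_payroll(hours, rate):
        return hours * rate

    def new_payroll(hours, rate):
        # Replacement system; must produce the same result for the same input.
        return round(hours * rate, 2)

    def test_old_and_new_systems_reconcile():
        transactions = [(40, 12.50), (37.5, 20.00), (0, 15.00)]
        for hours, rate in transactions:
            assert old_payroll(hours, rate) == new_payroll(hours, rate)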

Regression Testing

> Verifies that no unwanted changes were introduced to one part of the system as a result of making changes to another part of the system.
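
In practice this usually means re-running a suite of previously passing cases after every change. The sketch below is hypothetical; the tax function and the captured expected values are invented.

    # Regression sketch: known-good results are re-checked after each change.

    def sales_tax(amount, rate=0.08):
        return round(amount * rate, 2)

    # Expected results captured while the system was known to be correct.
    REGRESSION_CASES = [
        (100.00, 8.00),
        (19.99, 1.60),
        (0.00, 0.00),
    ]

    def test_no_regressions_in_sales_tax():
        for amount, expected in REGRESSION_CASES:
            assert sales_tax(amount) == expected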

Transaction Flow Testing

> Testing of the path of a transaction from the time it enters the system until it is completely processed and exits a suite of applications
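
As a toy illustration (all stage functions invented), a transaction-flow test follows one transaction through each processing stage and checks its state at every hop:

    # Hypothetical transaction-flow test across invented processing stages.

    def capture(txn):
        txn["status"] = "captured"
        return txn

    def validate(txn):
        txn["status"] = "validated"
        return txn

    def settle(txn):
        txn["status"] = "settled"
        return txn

    def test_transaction_path_end_to_end():
        txn = {"id": "T-1", "amount": 75.00}
        for stage, expected in [(capture, "captured"), (validate, "validated"), (settle, "settled")]:
            txn = stage(txn)
            assert txn["status"] == expected  # verify the transaction at every hop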

Usability Testing

> Ensures that the final product is usable in a practical, day-to-day fashion

> Looks for simplicity and user-friendliness of the product

> Usability testing would normally be performed as part of functional testing during System and User Acceptance Testing.

Structural Testing

> Ensures that the technical and "housekeeping" functions of the system work

> Designed to verify that the system is structurally sound and can perform the intended tasks.

The categories for structural testing are:

> Backup and Recovery testing

> Contingency testing

> Job Stream testing

> Operational testing

> Performance testing

> Security testing

> Stress / Volume testing

Backup And Recovery Testing

> Recovery is the ability of an application to be restarted after failure.

> The process usually involves backing up to a point in the processing cycle where the integrity of the system is assured and then re-processing the transactions past the original point of failure.

> The nature of the application, the volume of transactions, the internal design of the application to handle a restart process, the skill level of the people involved in the recovery procedures, and the documentation and tools provided all affect the recovery process
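
A very simplified sketch of the restart idea, with everything (checkpoint file, failure point, transaction list) invented for illustration: processing records a checkpoint at each point of assured integrity, and recovery resumes past the failure.

    # Simplified checkpoint/restart sketch; all names and behaviour are assumptions.
    import json, os, tempfile

    def run_with_checkpoint(transactions, checkpoint_path, fail_after=None):
        for i, txn in enumerate(transactions):
            if fail_after is not None and i == fail_after:
                raise RuntimeError("simulated failure")
            # ... process txn ...
            with open(checkpoint_path, "w") as f:
                json.dump({"last_index": i}, f)  # point of assured integrity

    def restart(transactions, checkpoint_path):
        with open(checkpoint_path) as f:
            last = json.load(f)["last_index"]
        return transactions[last + 1:]  # re-process only past the failure point

    def test_restart_after_failure():
        txns = ["t1", "t2", "t3", "t4"]
        path = os.path.join(tempfile.mkdtemp(), "ckpt.json")
        try:
            run_with_checkpoint(txns, path, fail_after=2)
        except RuntimeError:
            pass
        assert restart(txns, path) == ["t3", "t4"]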

Contingency Testing

> Operational situations may occur which result in major outages or "disasters". Some applications are so crucial that special precautions need to be taken to minimize the effects of these situations and speed the recovery process. Testing these precautions is called Contingency Testing.

Job Stream Testing

> Done as part of operational testing (the test type, not the test level), although it is still performed during Operability Testing.

> Starts early and continues throughout all levels of testing.

> Conformance to standards is checked in User Acceptance and Operability testing.

Operational Testing

> All products delivered into production must obviously perform according to user requirements. However, a product's performance is not limited solely to its functional characteristics. Its operational characteristics are just as important since users expect and demand a guaranteed level of service from Computer Services. Therefore, even though Operability Testing is the final point where a system's operational behavior is tested, it is still the responsibility of the developers to consider and test operational factors during the construction phase.

Performance Testing

> Performance Testing is designed to test whether the system meets the desired level of performance in a production environment. Performance considerations may relate to response times, turnaround times (throughput), technical design issues and so on. Performance testing can be conducted using a production system, a simulated environment, or a prototype.
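
A minimal sketch of a response-time check, assuming an invented lookup operation and an example target of 0.5 seconds (the threshold is illustrative, not a standard):

    # Illustrative response-time check against an assumed target.
    import time

    def lookup_customer(customer_id):
        # Stand-in for the operation under test.
        return {"id": customer_id, "name": "example"}

    def test_lookup_meets_response_time_target():
        start = time.perf_counter()
        lookup_customer(1234)
        elapsed = time.perf_counter() - start
        assert elapsed < 0.5, f"response took {elapsed:.3f}s against a 0.5s target"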

Security Testing

> Ensures that confidential information in the system, and in other affected systems, is protected against loss, corruption, or misuse, whether through deliberate or accidental actions. The amount of testing needed depends on the risk assessment of the consequences of a breach in security. Tests should focus on, and be limited to, the security features developed as part of the system
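
For instance, a basic access-control test (with an invented role check) confirms that a protected function refuses callers without the required privilege:

    # Hypothetical security test for a role-protected function.
    import pytest

    class PermissionDenied(Exception):
        pass

    def read_salary_report(user_roles):
        if "payroll_admin" not in user_roles:
            raise PermissionDenied("payroll_admin role required")
        return "confidential report"

    def test_unauthorised_access_is_refused():
        with pytest.raises(PermissionDenied):
            read_salary_report(user_roles={"clerk"})

    def test_authorised_access_is_allowed():
        assert read_salary_report({"payroll_admin"}) == "confidential report"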


Stress/ Volume Testing

Stress testing is defined as the processing of a large number of transactions through the system in a defined period of time. It is done to measure the performance characteristics of the system under peak load conditions.

Stress factors may apply to different aspects of the system such as input transactions, report lines, internal tables, communications, computer processing capacity, throughput, disk space, I/O and so on.

Stress testing should not begin until the system functions are fully tested and stable. The need for Stress Testing must be identified in the Design Phase and should commence as soon as operationally stable system units are available.
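
As a rough sketch only (transaction counts, worker counts, and the time budget are all made-up numbers), a stress test pushes a large batch of transactions through in a fixed window and checks that nothing is dropped:

    # Minimal stress/volume sketch; all figures are illustrative.
    import time
    from concurrent.futures import ThreadPoolExecutor

    def process(txn_id):
        return txn_id * 2  # stand-in for real transaction processing

    def test_peak_load_throughput():
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=8) as pool:
            results = list(pool.map(process, range(10_000)))
        elapsed = time.perf_counter() - start
        assert len(results) == 10_000          # nothing dropped under load
        assert elapsed < 5.0, f"10,000 transactions took {elapsed:.2f}s"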

Bottom-up Testing

Approach to integration testing where the lowest-level components are tested first, then used to facilitate the testing of higher-level components. This process is repeated until the component at the top of the hierarchy has been tested.
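
A tiny example of the ordering (both components invented): the low-level parser is verified first, then the higher-level component that depends on it is tested on top of it.

    # Bottom-up integration sketch with two invented components.

    def parse_amount(text):                 # lowest-level component
        return round(float(text.strip()), 2)

    def total_invoice(line_items):          # higher-level component built on parse_amount
        return round(sum(parse_amount(item) for item in line_items), 2)

    def test_parse_amount_first():
        assert parse_amount(" 19.991 ") == 19.99

    def test_total_invoice_uses_verified_parser():
        assert total_invoice(["10.00", "5.25", "4.75"]) == 20.00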

Test Bed

(1) A test environment containing the hardware, instrumentation tools, simulators, and other support software necessary for testing a system or system component. (2) A set of test files (including databases and reference files) in a known state, used with input test data to test one or more test conditions, measuring against expected results.
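
In modern terms, the second sense often shows up as test fixtures. The sketch below (file names and data invented) uses a pytest fixture to create a small set of reference files in a known state before each test:

    # Sketch of a tiny test bed built as a pytest fixture; data is invented.
    import json
    import pytest

    @pytest.fixture
    def test_bed(tmp_path):
        # Reference file in a known state, used by the tests that follow.
        customers = tmp_path / "customers.json"
        customers.write_text(json.dumps([{"id": 1, "status": "active"}]))
        return {"customers": customers}

    def test_reads_reference_data(test_bed):
        data = json.loads(test_bed["customers"].read_text())
        assert data[0]["status"] == "active"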

Software Development Life Cycle (SDLC)

Various Software Life Cycle Models


Software life cycle models describe phases of the software cycle and the order in which those phases are executed. There are tons of models, and many companies adopt their own, but all have very similar patterns. Some of the models are as follows.


  • General Model
  • Waterfall model / Linear Sequential / Classic Life Cycle Model
  • V-Model
  • Rapid Application Development (RAD) model
  • Incremental Model
  • Spiral Model
  • Prototype model
  • Fourth Generation (4GT) Techniques

General Life Cycle Model


The general, basic model is shown below

Water fall / Linear Sequential /Classic Life Cycle Model


The "waterfall model", documented in 1970 by Royce was the first publicly documented life cycle model. The model was developed to help with the increasing complexity of aerospace products.


This is the most common and classic of the life cycle models, also referred to as a linear-sequential life cycle model. It is very simple to understand and use. In a waterfall model, each phase must be completed in its entirety before the next phase can begin. At the end of each phase, a review takes place to determine whether the project is on the right path and whether to continue with or discard the project. Unlike what I mentioned in the general model, phases do not overlap in a waterfall model.


The least flexible and most obsolete of the life cycle models. It is well suited to projects that have low risk in the areas of user interface and performance requirements, but high risk in budget and schedule predictability and control.

Advantages

    • Simple and easy to use.
    • Easy to manage due to the rigidity of the model – each phase has specific deliverables and a review process.
    • Phases are processed and completed one at a time.
    • Works well for smaller projects where requirements are very well understood/stable.

Disadvantages

    • It’s difficult to respond to changing customer requirements.
    • Adjusting scope during the life cycle can kill a project.
    • No working software is produced until late in the life cycle.
    • High amounts of risk and uncertainty.
    • Poor model for complex and object-oriented projects.
    • Poor model for long-running and ongoing projects.

V - model

  • Just like the waterfall model, the V-model follows a sequential path of execution: each phase must be completed before the next phase begins. The difference is that testing is emphasized much earlier; for each development phase there is a corresponding test phase, and test plans are prepared in parallel with the development activity they will verify (for example, acceptance tests are planned against the requirements and unit tests against the detailed design).

  • Like the waterfall model, it is rigid, and it is best suited to small projects where requirements are well understood and stable.

Advantages

  • Simple and easy to use.
  • Each phase has specific deliverables.
  • Higher chance of success over the waterfall model due to the development of test plans early on during the life cycle.
  • Works well for small projects where requirements are easily understood.

Disadvantages

  • Very rigid, like the waterfall model.
  • Little flexibility and adjusting scope is difficult and expensive.
  • Software is developed during the implementation phase, so no early prototypes of the software are produced.
  • Model doesn’t provide a clear path for problems found during testing phases.

Incremental/Iterative model


This model does not attempt to start with full specification of requirements. Multiple development cycles take place here, making the life cycle a “multi-waterfall” cycle. Cycles are divided up into smaller, more easily managed iterations. Each iteration passes through the requirements, design, implementation and testing phases.

A working version of software is produced during the first iteration, so you have working software early on during the software life cycle. Subsequent iterations build on the initial software produced during the first iteration.


Key Points

  • Development and delivery is broken down into increments
  • Each increment delivers part of the required functionality
  • Requirements are prioritised and the highest priority requirements are included in early increments
  • Once the development of an increment is started, the requirements are frozen
  • Requirements for later increments can continue to evolve

Advantages

  • System functionality is available earlier and customer does not have to wait as long.
  • Early increments act as a prototype to help elicit requirements for later increments.
  • The highest priority functionalities tend to receive more testing.
  • More flexible – less costly to change scope and requirements.
  • Easier to test and debug during a smaller iteration.
  • Easier to manage risk because risky pieces are identified and handled during its iteration.
  • Each iteration is an easily managed milestone.

Disadvantages

  • Each phase of an iteration is rigid and does not overlap with the others.
  • Problems may arise pertaining to system architecture because not all requirements are gathered up front for the entire software life cycle.

Prototype model

In this model, a prototype (an early approximation of a final system or product) is built, tested, and then reworked as necessary until an acceptable prototype is finally achieved from which the complete system or product can now be developed.

The prototyping paradigm begins with requirements gathering. The developer and customer meet to define the overall objectives for the software, identify whatever requirements are known, and outline areas where further definition is required.

A quick design then takes place, which leads to the construction of a prototype.

The prototype is evaluated by the customer/user and used to refine the requirements for the software to be developed.

Iteration occurs as the prototype is tuned to satisfy the user's requirements, while at the same time enabling the developer to better understand what needs to be done.

Spiral - model

  • This model of development combines the features of the prototyping model and the waterfall model. The spiral model is favored for large, expensive, and complicated projects.
  • The spiral model is similar to the incremental model, with more emphasis placed on risk analysis. The spiral model has four phases: Planning, Risk Analysis, Engineering and Evaluation. A software project repeatedly passes through these phases in iterations (called spirals in this model). In the baseline spiral, starting in the planning phase, requirements are gathered and risk is assessed. Each subsequent spiral builds on the baseline spiral.
  • Requirements are gathered during the planning phase. In the risk analysis phase, a process is undertaken to identify risk and alternate solutions. A prototype is produced at the end of the risk analysis phase.
  • Software is produced in the engineering phase, along with testing at the end of the phase. The evaluation phase allows the customer to evaluate the output of the project to date before the project continues to the next spiral.
  • In the spiral model, the angular component represents progress, and the radius of the spiral represents cost.


Advantages

  • High amount of risk analysis.
  • Risks are explicitly assessed and resolved throughout the process
  • Focus on early error detection and design flaws.
  • Good for large and mission-critical projects.
  • Software is produced early in the software life cycle.

Disadvantages

  • Can be a costly model to use.
  • Risk analysis requires highly specific expertise.
  • Project’s success is highly dependent on the risk analysis phase.
  • Doesn’t work well for smaller projects.

Rapid Application Development (RAD) model

RAD model makes heavy use of reusable software components with an extremely short development cycle.

RAD is a linear sequential software development process that emphasizes an extremely short development cycle. The RAD model is a "high-speed" adaptation of the linear sequential model in which rapid development is achieved by using a component-based construction approach. Used primarily for information systems applications, the RAD approach encompasses the following phases:

  • Business modeling
  • Data modeling
  • Process modeling
  • Application generation
  • Testing

The RAD process emphasizes reuse: many of the program components have already been tested, which minimizes testing and development time.


Fourth Generation (4GT) Techniques


A software tool is used to generate the source code automatically for a software system from a high-level specification representation.