Wednesday, March 16, 2011

Software Testing Methodologies

Software testing is an integral part of the software development life cycle (SDLC). Testing a piece of code effectively and efficiently is as important as, if not more important than, writing it. So what is software testing? Well, for those of you who are new to software testing and quality assurance, here's the answer to this question.

Software testing is nothing but subjecting a piece of code to both controlled and uncontrolled operating conditions, in an attempt to observe the output and examine whether it is in accordance with certain pre-specified conditions. Different sets of test cases and testing strategies are prepared, all of which aim at achieving one common goal - finding and removing as many bugs and errors as possible, and making the software capable of providing accurate and reliable outputs. There are different types of software testing techniques and methodologies. A software testing methodology is different from a software testing technique. We will have a look at a few software testing methodologies in the later part of this article.

Software Testing Methods
There are different types of testing methods or techniques used as part of the software testing process. I have listed a few of them below.
• White box testing
• Black box testing
• Gray box testing
• Unit testing
• Integration testing
• Regression testing
• Usability testing
• Performance testing
• Scalability testing
• Software stress testing
• Recovery testing
• Security testing
• Conformance testing
• Smoke testing
• Compatibility testing
• System testing
• Alpha testing
• Beta testing
The above software testing methods can be implemented in two ways - manually or by automation. Manual software testing is done by human software testers who manually check, test and report errors or bugs in the product or piece of code. In automated software testing, the same process is performed by a computer by means of automated testing software such as WinRunner, LoadRunner, Test Director, etc.
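
As a minimal illustration of the automated approach, the sketch below uses Python's built-in unittest module to check a hypothetical discount() function against pre-specified expected outputs; the function and its test values are invented for this example.

    import unittest

    def discount(price, percent):
        """Hypothetical function under test: apply a percentage discount."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class DiscountTest(unittest.TestCase):
        def test_typical_value(self):
            # Controlled condition: the output must match the pre-specified value.
            self.assertEqual(discount(200.0, 25), 150.0)

        def test_invalid_percent_rejected(self):
            # Uncontrolled condition: invalid input must be handled, not ignored.
            with self.assertRaises(ValueError):
                discount(100.0, 120)

    if __name__ == "__main__":
        unittest.main()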

Software Testing Methodologies
These are some commonly used software testing methodologies:
•Waterfall model
•V model
•Spiral model
•RUP
•Agile model
•RAD
Let us have a look at each of these methodologies.

Waterfall Model
The waterfall model adopts a 'top-down' approach, regardless of whether it is being used for software development or testing. The basic steps involved in this software testing methodology are:
1. Requirement analysis
2. Test case design
3. Test case implementation
4. Testing, debugging and validating the code or product
5. Deployment and maintenance
In this methodology, you move on to the next step only after you have completed the present step. There is no scope for jumping backward or forward or performing two steps simultaneously. Also, this model follows a non-iterative approach. The main benefit of this methodology is its simple, systematic and orthodox approach. However, it has many shortcomings, since bugs and errors in the code are not discovered until the testing stage is reached. This can often lead to a waste of time, money and valuable resources.

V Model
The V model gets its name from the fact that the graphical representation of the different test process activities involved in this methodology resembles the letter 'V'. The basic steps involved in this methodology are more or less the same as those in the waterfall model. However, this model follows both a 'top-down' and a 'bottom-up' approach (you can visualize them forming the letter 'V'). The benefit of this methodology is that the development and testing activities go hand-in-hand. For example, as the development team goes about its requirement analysis activities, the testing team simultaneously begins planning its acceptance testing activities. By following this approach, time delays are minimized and optimum utilization of resources is assured.

Spiral Model
As the name implies, the spiral model follows an approach in which there are a number of cycles (or spirals) of all the sequential steps of the waterfall model. Once the initial cycle is completed, a thorough analysis and review of the achieved product or output is performed. If it is not as per the specified requirements or expected standards, a second cycle follows, and so on. This methodology follows an iterative approach and is generally suited for very large projects having complex and constantly changing requirements.

Rational Unified Process (RUP)
The RUP methodology is also similar to the spiral model in the sense that the entire testing procedure is broken up into multiple cycles or processes. Each cycle consists of four phases, namely inception, elaboration, construction and transition. At the end of each cycle, the product or output is reviewed, and a further cycle (made up of the same four phases) follows if necessary. Today, you will find certain organizations and companies adopting a slightly modified version of the RUP, which goes by the name of Enterprise Unified Process (EUP).

Agile Model
This methodology follows neither a purely sequential approach nor does it follow a purely iterative approach. It is a selective mix of both of these approaches in addition to quite a few new developmental methods. Fast and incremental development is one of the key principles of this methodology. The focus is on obtaining quick, practical and visible outputs and results, rather than merely following theoretical processes. Continuous customer interaction and participation is an integral part of the entire development process.

Rapid Application Development (RAD)
The name says it all. This methodology adopts a rapid development approach by using the principle of component-based construction. After the various requirements are understood, a rapid prototype is prepared and then compared with the expected set of output conditions and standards. Necessary changes and modifications are made after joint discussions with the customer and the development team. Though this approach has its share of advantages, it can be unsuitable if the project is large and complex, or of an extremely dynamic nature wherein the requirements are constantly changing.

This was a short overview of some commonly used software testing methodologies. With the applications of information technology growing with every passing day, the importance of proper software testing has grown manifold.

Operations and Maintenance

Corrections, modifications and extensions are bound to occur even for small programs, and testing is required every time there is a change. Testing during maintenance is termed regression testing. The test set, the test plan, and the test results for the original program should exist.

The test set and the test plan must be modified to accommodate the program changes, and then all portions of the program affected by the modifications must be re-tested. After regression testing is complete, the program and test documentation must be updated to reflect the changes.
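
A minimal sketch of this idea, assuming the original test set was saved as (input, expected output) pairs: after any modification, the whole set is rerun and failures are reported. All names here are invented for illustration.

    # Hypothetical saved test set for the original program: (input, expected output).
    SAVED_TEST_SET = [
        (0, 0),
        (3, 9),
        (-4, 16),
    ]

    def program_under_maintenance(x):
        """Stand-in for the modified program; here it squares its input."""
        return x * x

    def regression_test(func, test_set):
        """Rerun every saved case and collect failures for the test report."""
        return [(inp, expected, func(inp))
                for inp, expected in test_set
                if func(inp) != expected]

    failed = regression_test(program_under_maintenance, SAVED_TEST_SET)
    print("regression passed" if not failed else f"failures: {failed}")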

Programming/Construction

Here the main testing points are:

- Check the code for consistency with the design - the areas to check include modular structure, module interfaces, data structures, functions, algorithms and I/O handling.

- Perform the testing process in an organized and systematic manner, with test runs dated, annotated and saved. A plan or schedule can be used as a checklist to help the programmer organize testing efforts. If errors are found and changes are made to the program, all tests involving the erroneous segment (including those which previously succeeded) must be rerun and recorded.

- Ask a colleague for assistance - some independent party, other than the programmer of the specific part of the code, should analyze the development product at each phase. The programmer should explain the product to this party, who will then question the logic and search for errors with a checklist to guide the search. This is needed to locate errors the programmer has overlooked.

- Use available tools - the programmer should be familiar with various compilers and interpreters available on the system for the implementation language being used because they differ in their error analysis and code generation capabilities.

- Apply Stress to the Program - Testing should exercise and stress the program structure, the data structures, the internal functions and the externally visible functions or functionality. Both valid and invalid data should be included in the test set.
- Test one at a time - pieces of code, individual modules and small collections of modules should be exercised separately before they are integrated into the total program, one by one. Errors are easier to isolate when the number of potential interactions is kept small. Instrumentation – insertion of some code into the program solely to measure various program characteristics – can be useful here. A tester should perform array bound checks, check loop control variables, determine whether key data values are within permissible ranges, trace program execution, and count the number of times a group of statements is executed.
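
A small sketch of such instrumentation, with all names invented for this example: a counter records how many times the loop body runs, and an assertion checks that an index stays within the array bounds.

    def find_first_negative(values):
        """Return the index of the first negative value, or -1 if none exists."""
        loop_iterations = 0  # instrumentation: count executions of the loop body
        for i, v in enumerate(values):
            loop_iterations += 1
            assert 0 <= i < len(values)  # instrumentation: array bound check
            if v < 0:
                print(f"loop body executed {loop_iterations} times")
                return i
        print(f"loop body executed {loop_iterations} times")
        return -1

    find_first_negative([5, 3, -2, 8])  # prints: loop body executed 3 times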

- Measure testing coverage/When should testing stop? - If errors are still found every time the program is executed, testing should continue. Because errors tend to cluster, modules appearing particularly error-prone require special scrutiny.

The metrics used to measure testing thoroughness include statement testing (whether each statement in the program has been executed at least once), branch testing (whether each exit from each branch has been executed at least once) and path testing (whether all logical paths, which may involve repeated execution of various segments, have been executed at least once). Statement testing is the coverage metric most frequently used as it is relatively simple to implement.
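
The difference between the first two metrics is easiest to see on a small invented function. In the sketch below, a single test input of -5 executes every statement of absolute_value (full statement coverage), but it only takes the True exit of the if; a second input such as 5 is needed for the False exit, completing branch coverage.

    def absolute_value(x):
        result = x
        if x < 0:        # branch point with a True exit and a False exit
            result = -x
        return result

    assert absolute_value(-5) == 5   # executes every statement (statement coverage)
    assert absolute_value(5) == 5    # exercises the False exit (branch coverage)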

The amount of testing depends on the cost of an error. Critical programs or functions require more thorough testing than the less significant functions.

Thursday, February 17, 2011

Design

The design document aids in programming, communication, and error analysis and test data generation. The requirements statement and the design document should together give the problem and the organization of the solution i.e. what the program will do and how it will be done.

The design document should contain:

- Principal data structures.

- Functions, algorithms, heuristics or special techniques used for processing.

- The program organization, how it will be modularized, and the external and internal interfaces.

- Any additional information.

Here the testing activities should consist of:

- Analysis of the design to check its completeness and consistency - the total process should be analyzed to determine that no steps or special cases have been overlooked. Internal interfaces, I/O handling and data structures should especially be checked for inconsistencies.

- Analysis of the design to check whether it satisfies the requirements - check that both the requirements and design documents use the same form, format and units for input and output, and that all functions listed in the requirements document have been included in the design document. Selected test data generated during the requirements analysis phase should be manually simulated to determine whether the design will yield the expected values.

- Generation of test data based on the design - the tests generated should cover the structure as well as the internal functions of the design: the data structures, algorithms, functions, heuristics and general program structure. Standard, extreme and special values should be included, and the expected output should be recorded with the test data (a sketch follows below this list).

- Re-examination and refinement of the test data set generated at the requirements analysis phase.

The first two steps should also be performed by a colleague, not only by the designer/developer.
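
As a sketch of design-based test data with recorded expected outputs, assume the design specifies a search over a sorted list; the function and the cases below, mixing standard, extreme and special values, are invented for this example.

    import bisect

    def binary_search(sorted_values, target):
        """Design under test: return the index of target, or -1 if absent."""
        i = bisect.bisect_left(sorted_values, target)
        return i if i < len(sorted_values) and sorted_values[i] == target else -1

    # Test data derived from the design, each case recorded with its expected output.
    DESIGN_TEST_DATA = [
        (([1, 3, 5, 7], 5), 2),   # standard value
        (([1, 3, 5, 7], 1), 0),   # extreme value: first element
        (([1, 3, 5, 7], 7), 3),   # extreme value: last element
        (([], 5), -1),            # special value: empty structure
        (([1, 3, 5, 7], 4), -1),  # special value: absent element
    ]

    for (values, target), expected in DESIGN_TEST_DATA:
        assert binary_search(values, target) == expected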

Requirements Analysis

The following test activities should be performed during this stage:

1.1 Invest in analysis at the beginning of the project - Having a clear, concise and formal statement of the requirements facilitates programming, communication, error analysis and test data generation.

The requirements statement should record the following information and decisions:

a. Program function - what the program must do.

b. The form, format, data types and units for input.

c. The form, format, data types and units for output.

d. How exceptions, errors and deviations are to be handled.

e. For scientific computations, the numerical method or at least the required accuracy of the solution.

f. The hardware/software environment required or assumed (e.g. the machine, the operating system, and the implementation language).

Deciding the above issues is one of the activities related to testing that should be performed during this stage.

1.2 Start developing the test set at the requirements analysis phase - Data should be generated that can be used to determine whether the requirements have been met. To do this, the input domain should be partitioned into classes of values that the program will treat in a similar manner and for each class a representative element should be included in the test data.

In addition, the following should also be included in the data set:

(1) boundary values

(2) any non-extreme input values that would require special handling.

The output domain should be treated similarly.

Invalid input requires the same analysis as valid input.
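
To make the partitioning concrete, suppose the requirements accept an integer age from 18 to 65 inclusive; the validator and its classes below are invented for this example. Each equivalence class contributes one representative element, plus the boundary values and invalid inputs.

    def is_eligible(age):
        """Hypothetical requirement: accept integer ages from 18 to 65 inclusive."""
        return isinstance(age, int) and 18 <= age <= 65

    # One representative per equivalence class, plus boundary and invalid values.
    TEST_DATA = [
        (40, True),     # representative of the valid class 18..65
        (18, True),     # boundary value: lower edge
        (65, True),     # boundary value: upper edge
        (17, False),    # representative of the invalid class below 18
        (66, False),    # representative of the invalid class above 65
        ("40", False),  # invalid input: wrong type, needs the same analysis
    ]

    for value, expected in TEST_DATA:
        assert is_eligible(value) == expected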

1.3 The correctness, consistency and completeness of the requirements should also be analyzed - Consider whether the correct problem is being solved, check for conflicts and inconsistencies among the requirements and consider the possibility of missing cases.

Tuesday, January 11, 2011

Testing Start Process

Testing is sometimes incorrectly thought of as an after-the-fact activity, performed after programming is done for a product. Instead, testing should be performed at every development stage of the product. Test data sets must be derived, and their correctness and consistency should be monitored throughout the development process.

If we divide the lifecycle of software development into "Requirements Analysis", "Design", "Programming/Construction" and "Operation and Maintenance", then testing should accompany each of these phases. If testing is isolated as a single phase late in the cycle, errors in the problem statement or design may incur exorbitant costs. Not only must the original error be corrected, but the entire structure built upon it must also be changed. Therefore, testing should not be isolated as an inspection activity. Rather, testing should be involved throughout the SDLC in order to bring out a quality product.

The following testing activities should be performed during the phases:

Requirements Analysis

- Determine correctness
- Generate functional test data.

Design

- Determine correctness and consistency
- Generate structural and functional test data.


Programming/Construction

- Determine correctness and consistency
- Generate structural and functional test data
- Apply test data
- Refine test data

Operation and Maintenance

- Retest

What is the difference between Quality Assurance, Quality Control, and Testing?

Many people and organizations are confused about the difference between quality assurance (QA), quality control (QC), and testing. They are closely related, but they are different concepts.

All these three are useful for managing the risks of developing and maintaining software.
Quality Assurance: A set of activities designed to ensure that the development and/or maintenance process is adequate to ensure a system will meet its objectives.
Quality Control: A set of activities designed to evaluate a developed work product.
Testing: The process of executing a system with the intent of finding defects. (Note that the "process of executing a system" includes test planning prior to the execution of the test cases.)
QA activities ensure that the process is defined and appropriate. Methodology and standards development are examples of QA activities. A QA review would focus on the process elements of a project - e.g., are requirements being defined at the proper level of detail?

QC activities focus on finding defects in specific deliverables - e.g., are the defined requirements the right requirements?

Testing is one example of a QC activity, but there are others, such as inspections.

The difference is that QA is process oriented and QC is product oriented.

Testing therefore is product oriented and thus is in the QC domain. Testing for quality isn't assuring quality, it's controlling it.

Quality Assurance makes sure you are doing the right things, the right way.
Quality Control makes sure the results of what you've done are what you expected.

Tuesday, January 4, 2011

Software development life cycle


The Systems Development Life Cycle (SDLC), or Software Development Life Cycle in systems engineering and software engineering, is the process of creating or altering systems, and the models and methodologies that people use to develop these systems. The concept generally refers to computer or information systems.

In software engineering the SDLC concept underpins many kinds of software development methodologies. These methodologies form the framework for planning and controlling the creation of an information system: the software development process.

Software Testing Life Cycle


The software testing life cycle identifies what test activities to carry out and when the best time is to accomplish those test activities. Even though testing differs between organizations, there is a common testing life cycle.

The Software Testing Life Cycle consists of seven (generic) phases:

* Test Planning
* Test Analysis
* Test Design
* Construction and Verification
* Testing Cycles
* Final Testing and Implementation
* Post Implementation


Software testing has its own life cycle that intersects with every stage of the SDLC. The basic requirement of the software testing life cycle is to handle all types of software testing – manual, automated and performance.

Test Planning

This is the phase where the project manager has to decide what needs to be tested, whether the budget is appropriate, and so on. Naturally, proper planning at this stage greatly reduces the risk of low-quality software. This planning is an ongoing process with no fixed end point.

Activities at this stage include preparation of a high-level test plan. According to the IEEE test plan template, the Software Test Plan (STP) is designed to prescribe the scope, approach, resources, and schedule of all testing activities. The plan must identify the items to be tested, the features to be tested, the types of testing to be performed, the personnel responsible for testing, the resources and schedule required to complete testing, and the risks associated with the plan. Almost all of the activities done during this stage are included in this software test plan and revolve around it.

Test Analysis

Once the test plan is made and decided upon, the next step is to delve a little deeper into the project and decide what types of testing should be carried out at the different stages of the SDLC; whether we need or plan to automate and, if so, when the appropriate time to automate is; and what type of specific documentation is needed for testing.

Proper and regular meetings should be held between the testing team, project managers, development teams and business analysts to check the progress of things. This gives a fair idea of the movement of the project, ensures the completeness of the test plan created in the planning phase, and helps refine the testing strategy created earlier. At this stage we start creating test case formats and the test cases themselves. We need to develop a functional validation matrix based on the business requirements to ensure that all system requirements are covered by one or more test cases, identify which test cases to automate, and begin reviewing the documentation, i.e. functional design, business requirements, product specifications, product externals, etc. We also have to define the areas for stress and performance testing.
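
One lightweight way to record such a matrix, with all requirement and test case identifiers invented for illustration, is a mapping from business requirements to the test cases that cover them; a requirement left without coverage is then easy to detect.

    # Hypothetical functional validation matrix: requirement -> covering test cases.
    VALIDATION_MATRIX = {
        "BR-01 user login":      ["TC-001", "TC-002"],
        "BR-02 password reset":  ["TC-003"],
        "BR-03 session timeout": [],  # not yet covered by any test case
    }

    uncovered = [req for req, cases in VALIDATION_MATRIX.items() if not cases]
    print("requirements without coverage:", uncovered)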

Test Design

The test plans and cases which were developed in the analysis phase are revised. The functional validation matrix is also revised and finalized. In this stage the risk assessment criteria are developed. If you have decided on automation, you have to select which test cases to automate and begin writing scripts for them. Test data is prepared. Standards for unit testing and pass/fail criteria are defined here. The schedule for testing is revised (if necessary) and finalized, and the test environment is prepared.

Construction and verification

In this phase we have to complete all the test plans and test cases, finish scripting the automated test cases, and complete the stress and performance testing plans. We have to support the development team in their unit testing phase, and obviously bug reporting is done as and when bugs are found. Integration tests are performed and errors (if any) are reported.

Testing Cycles

In this phase we have to repeat testing cycles until the test cases are executed without errors or a predefined condition is reached: run test cases --> report bugs --> revise test cases (if needed) --> add new test cases (if needed) --> bug fixing --> retesting (test cycle 2, test cycle 3, ...).
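
This loop can be sketched as a small driver that repeats until a clean run or a predefined cycle limit is reached; run_test_cases and fix_bugs are invented placeholders for a real test suite and a real bug-fixing step.

    def run_test_cycles(run_test_cases, fix_bugs, max_cycles=3):
        """Repeat run -> report -> fix -> retest until clean or the limit is hit."""
        for cycle in range(1, max_cycles + 1):
            failures = run_test_cases()                         # run test cases
            if not failures:
                return f"clean after cycle {cycle}"
            print(f"cycle {cycle}: reporting bugs {failures}")  # report bugs
            fix_bugs(failures)                                  # bug fixing, then retest
        return "predefined cycle limit reached"

    # Illustrative usage with a fake suite that passes once its bug is 'fixed'.
    state = {"fixed": False}
    result = run_test_cycles(
        run_test_cases=lambda: [] if state["fixed"] else ["TC-007"],
        fix_bugs=lambda bugs: state.update(fixed=True),
    )
    print(result)  # clean after cycle 2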

Final Testing and Implementation

In this phase we have to execute the remaining stress and performance test cases, complete and update the testing documentation, and provide and complete the various testing metrics. Acceptance, load and recovery testing will also be conducted, and the application needs to be verified under production conditions.

Post Implementation

In this phase, the testing process is evaluated and the lessons learnt from it are documented. A line of attack to prevent similar problems in future projects is identified, and plans are created to improve the processes. The recording of new errors and enhancements is an ongoing process. Cleaning up of the test environment is done and test machines are restored to baselines in this stage.
***