Manual Testing

Functional testing is the dynamic testing (as opposed to reviews, which are static) of each function of the application. Functional testing validates that each requirement is implemented in the application and that each requirement is implemented correctly. It also checks that previously implemented requirements continue to function (functional regression testing), that functionality is consistent across the application and that statutory/ regulatory requirements are implemented. The test phases usually follow the V-model, and the testing team is involved in various phases of the V-model in the Software Development Life Cycle.

Process for functional testing

At the beginning of the project, we focus on understanding requirements and designing the overall strategy of the project.

1. Knowledge Transfer and Requirements Analysis

Even before reviewing the requirements, we go through any existing application and documentation. The purpose is to gain an overview of the application and the context of the project. Although the team has experience across various industry domains, it still asks questions about the client, the industry domain and the application, all within the context of the project.

Next, we analyze the requirements. Requirements are usually available in a variety of formats - business requirements, requirements specifications, design documents, bug reports approved for release or even notes from meetings. The team reviews the requirements and seeks clarification where they are incomplete, conflict with other requirements or with current application behavior, lack clarity or have low testability. If required, the statutory/ regulatory requirements are also provided to the team. Once the requirements are finalized, each requirement is assigned a unique ID for traceability and the scope of the project is defined.
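The ID assignment described above can be sketched in code. This is a minimal illustration, not our actual tooling: the `REQ-nnn` numbering scheme and the `Requirement` fields are assumptions chosen for the example.

```python
# Hypothetical sketch of assigning unique, traceable IDs to finalized
# requirements. The ID scheme (REQ-001, ...) and field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str        # unique ID used for traceability
    description: str
    source: str        # e.g. "requirements specification", "meeting notes"
    test_case_ids: list = field(default_factory=list)  # filled in during Test Design

def assign_ids(descriptions_with_sources):
    """Assign a sequential REQ-nnn ID to each finalized requirement."""
    return [
        Requirement(req_id=f"REQ-{i:03d}", description=desc, source=src)
        for i, (desc, src) in enumerate(descriptions_with_sources, start=1)
    ]

reqs = assign_ids([
    ("User can reset password via email", "requirements specification"),
    ("Session expires after 30 minutes of inactivity", "meeting notes"),
])
# reqs[0].req_id == "REQ-001"
```

Each test case written later links back to one of these IDs, which is what makes end-of-phase traceability reporting possible.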

2. Test Planning

First, we understand the Test Strategy suggested by the client. The team then designs a Test Strategy covering each requirement in scope and the widest practical range of input data values. The focus of the Test Strategy is to minimize the Test Design and Test Execution effort while still providing coverage of each requirement. This is done by categorizing and grouping testing tasks, simplifying them and eliminating duplication. Useful test design techniques (e.g. equivalence partitioning, boundary value analysis, cause-effect graphing and error guessing) are chosen in the Test Strategy.

Second, we design the Test Approach. This includes defining the simplest possible Test Environment that covers each supported hardware/ software combination. Next, we define the test data approach, including the data types, volume, generation and storage of test data. Based on the requirements, we also define the other required sub-types of testing, such as web testing, installation testing, localization/ internationalization testing and compatibility testing. Finally, the schedule, the distribution of responsibilities among team members, the risk management plan and the communication management plan are defined. The test plan is submitted to the client for their information, review and suggestions.
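Two of the test design techniques named above can be illustrated briefly. The sketch below shows equivalence partitioning and boundary value analysis for a numeric input with a valid range; the example range (18 to 65) is an assumption, not taken from any particular project.

```python
# Illustrative sketch of two standard test design techniques for a numeric
# input whose valid range is [lo, hi]. The range 18..65 is an assumed example.

def boundary_values(lo, hi):
    """Boundary value analysis: test at and adjacent to each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def equivalence_partitions(lo, hi):
    """Equivalence partitioning: one representative value per partition."""
    return {
        "invalid_low": lo - 10,       # below the valid range
        "valid": (lo + hi) // 2,      # inside the valid range
        "invalid_high": hi + 10,      # above the valid range
    }

print(boundary_values(18, 65))         # [17, 18, 19, 64, 65, 66]
print(equivalence_partitions(18, 65))  # one representative per partition
```

Both techniques serve the goal stated above: covering the input space with far fewer test cases than exhaustive testing would require.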

The team repeats the following activities in the various phases of the testing life cycle, e.g. System Testing, System Integration Testing and User Acceptance Testing.

3. Test Design

We write the test cases in accordance with the requirements traceability and the Test Strategy. Test cases are documented in the client-specified format or our internal format. The team focuses on covering each functional scenario, keeping each test case simple to reduce effort while still ensuring that it covers the entire requirement end-to-end so that defects do not slip through. Test data is identified, sourced/ generated and stored along with each test case. Both the test cases and the test data are stored in the repository for further use. The Test Environment is also prepared in time, before Test Execution begins. Regular Progress Reports are submitted to the client for their information and suggestions.
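A test case record of the kind described above might look like the following. This is a hypothetical internal format: the field names, the `TC-nnn` ID scheme and the example scenario are all assumptions chosen for illustration, not a client-mandated template.

```python
# Hypothetical test case record pairing the test case with its requirement ID
# (for traceability) and its test data. All field names and IDs are assumptions.
import json

test_case = {
    "tc_id": "TC-001",
    "req_id": "REQ-001",  # traceability link back to the requirement
    "title": "Password reset email is sent for a registered address",
    "steps": [
        "Open the login page",
        "Click 'Forgot password'",
        "Enter the registered email address and submit",
    ],
    "test_data": {"email": "user@example.com"},
    "expected_result": "A reset email is sent and a confirmation message is shown",
}

# Stored in the repository alongside its test data, for reuse in later phases.
print(json.dumps(test_case, indent=2))
```

Keeping the requirement ID inside the record is what allows traceability to be updated mechanically during Test Execution.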

4. Test Execution

We review the release notes provided with the build. The build is deployed in the Test Environment per the release notes. Special attention is given to the correct versions of required components, download media and installation steps. If required, the test data is partially or completely dropped and re-populated. The Test Environment is backed up as a safeguard. In accordance with the defined test approach and schedule, the team executes all test cases against each requirement delivered in the build. After execution of each test case, the traceability is updated with the result (blocked, passed or failed). Each executed test case is reviewed internally and the traceability is updated. Regular Status Reports are submitted to the client for their information and to get suggestions.
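The traceability update described above, with its three result statuses, can be sketched as follows. The data structure is an assumption; only the statuses (blocked, passed, failed) come from the process itself.

```python
# Sketch of updating requirements-to-test-cases traceability with execution
# results. The dict-based structure is an assumption; the three statuses are
# the ones named in the process above.

VALID_STATUSES = {"blocked", "passed", "failed"}

def record_result(traceability, tc_id, status):
    """Record one test case's result; reject anything but the agreed statuses."""
    if status not in VALID_STATUSES:
        raise ValueError(f"unknown status: {status}")
    traceability[tc_id] = status
    return traceability

trace = {}
record_result(trace, "TC-001", "passed")
record_result(trace, "TC-002", "failed")   # re-executed after the defect fix
record_result(trace, "TC-003", "blocked")  # e.g. blocked by the TC-002 defect

# Summary counts for the Status Report:
summary = {s: sum(1 for v in trace.values() if v == s) for s in VALID_STATUSES}
```

Rejecting unknown statuses at the point of entry keeps the traceability data clean enough to aggregate directly into the Status Reports.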

5. Defects and Test Results Reporting

Any discrepancy between expected results and actual results is analyzed by the team. The team isolates the defect by using different test data, re-ordering the user operations, trying other workflows and other methods. Once isolated, the team creates the defect report with a concise description, steps to reproduce, the test data used, Test Environment details and the failed test case ID. Defects and non-conformances to requirements are reported to the Development Team or designated coordinator immediately after internal review. On completion of the Test Execution phase, the Test Results (showing the test cases executed, the defects reported and closed, etc.) are submitted to the client.
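A defect report containing the fields listed above might look like this. The exact template, the `DEF-nnn` ID scheme and the example defect are assumptions for illustration only.

```python
# Minimal defect report carrying the fields listed in the process above.
# The template, ID scheme and example content are illustrative assumptions.
defect_report = {
    "defect_id": "DEF-042",  # hypothetical ID scheme
    "summary": "Password reset email not sent for addresses containing '+'",
    "steps_to_reproduce": [
        "Open the login page and click 'Forgot password'",
        "Enter 'user+test@example.com' and submit",
    ],
    "test_data": {"email": "user+test@example.com"},
    "environment": "Test Environment build 1.4.2, Chrome 120 / Windows 11",
    "failed_test_case_id": "TC-001",  # links the defect back to traceability
}
```

Carrying the failed test case ID in the report is what lets the Test Results roll up defects against requirements at the end of the phase.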

6. Re-testing

The team deploys the new build in the Test Environment and performs a Sanity Test. Next, the team re-executes the relevant failed test cases for each defect fix delivered in the new build and updates the requirements-to-test-cases traceability.

7. Regression Test Design and Execution

First, we identify the modified areas of the application. Next, the team identifies (and modifies) existing regression test cases or writes new ones, depending on the particular phase of the testing life cycle. We focus on selecting effective regression test cases: those that have the potential to find defects, provide wide coverage and are fast to execute. As the regression test cases are executed, defects and non-conformances to requirements are reported immediately to the Development Team or designated coordinator.
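The selection criteria above (defect-finding potential, coverage, execution speed) can be sketched as a simple ranking. The scoring fields, the weights implied by the sort order and the time budget are all illustrative assumptions, not a formal selection algorithm we prescribe.

```python
# Sketch of selecting regression test cases for the modified areas, ranked by
# the criteria above: defect-finding potential, coverage and execution speed.
# The test case fields and the time-budget cutoff are illustrative assumptions.

def select_regression_tests(test_cases, modified_areas, budget_minutes):
    """Pick high-value test cases touching modified areas, within a time budget."""
    relevant = [tc for tc in test_cases if tc["area"] in modified_areas]
    # Higher past-defect count and coverage first; faster tests break ties.
    relevant.sort(key=lambda tc: (-tc["defects_found"], -tc["coverage"], tc["minutes"]))
    selected, used = [], 0
    for tc in relevant:
        if used + tc["minutes"] <= budget_minutes:
            selected.append(tc["tc_id"])
            used += tc["minutes"]
    return selected

cases = [
    {"tc_id": "TC-001", "area": "login",   "defects_found": 3, "coverage": 5, "minutes": 10},
    {"tc_id": "TC-002", "area": "billing", "defects_found": 1, "coverage": 2, "minutes": 5},
    {"tc_id": "TC-003", "area": "login",   "defects_found": 1, "coverage": 4, "minutes": 30},
]
picked = select_regression_tests(cases, {"login"}, budget_minutes=40)
# picked == ["TC-001", "TC-003"]  (TC-002 is outside the modified areas)
```

In practice the ranking inputs would come from defect history and coverage records rather than hand-entered numbers, but the trade-off shown (value versus execution time) is the one described above.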

8. Change Requests

A Change Request can arrive at any time during the project. We analyze the Change Request and perform each applicable activity in our test process. Towards the end of the project, we focus on capturing lessons learned, backing up artifacts and submitting deliverables.

9. Process Improvement

We make it a point to meet at the end of the project and discuss where and how the team could improve. The team logs the agreed improvement opportunities. This log becomes an input to all our subsequent projects, whether it results in an improvement to our test approach or to our overall test process.

10. Deliver/ Archive Artifacts

Project artifacts are delivered to the client on completion of an activity. Examples of project artifacts that we deliver after internal review include Test Plans, Test Cases, Progress Reports, Defect Reports and Test Results. At the end of the project, each project artifact is labeled, archived and delivered to the client as a complete set.