Difference Between Verification and Validation in Software Quality Assurance


The terms "verification" and "validation" are frequently used in the software testing world, but their meanings are often vague and debated, and the two are often used interchangeably even though they have different definitions. Generally speaking, verification is the process of confirming that something—software—meets its specification, while validation is the process of confirming that it meets the user's requirements.



Difference Between Verification and Validation

Definition
Verification: The process of evaluating the work products (not the actual final product) of a development phase to determine whether they meet the specified requirements for that phase.
Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies the specified business requirements.

Objective
Verification: To ensure that the product is being built according to the requirements and design specifications; in other words, to ensure that the work products meet their specified requirements.
Validation: To ensure that the product actually meets the user's needs and that the specifications were correct in the first place; in other words, to demonstrate that the product fulfills its intended use when placed in its intended environment.

Question Answered
Verification: Are we building the product right?
Validation: Are we building the right product?

Evaluation Items
Verification: Plans, requirement specifications, design specifications, code, and test cases.
Validation: The actual product/software.

Activities
Verification: Reviews, walkthroughs, and inspections.
Validation: Testing.

Nature
Verification: A static practice of verifying documents, design, code, and programs; it does not involve executing the code.
Validation: A dynamic mechanism of validating and testing the actual product; it always involves executing the code.

Level and Order
Verification: Can catch errors that validation cannot catch; it is a low-level exercise and generally comes first, before validation.
Validation: Can catch errors that verification cannot catch; it is a high-level exercise and generally follows verification.
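The static/dynamic distinction above can be made concrete in a few lines of Python. This is only an illustrative sketch (the `add` function and the "spec" are hypothetical): verification inspects the work product without running it, while validation executes it and checks its behavior.

```python
import ast

SOURCE = """
def add(a, b):
    return a + b
"""

# Verification (static): inspect the work product without executing it.
# Does the code contain the function the specification calls for?
tree = ast.parse(SOURCE)
func_names = [n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
assert "add" in func_names  # the spec requires an `add` function

# Validation (dynamic): execute the code and confirm it actually
# behaves the way the user needs it to.
namespace = {}
exec(SOURCE, namespace)
assert namespace["add"](2, 3) == 5
print("verified and validated")
```

Note that the verification step would still pass even if `add` returned the wrong result; only the validation step, which runs the code, can catch that class of error.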


Verification is an activity carried out to confirm that something conforms to its documented specifications, standards, regulations, etc. It is confirmation that "the job is done right" and that all the required components are present in the right quantity.

Process of Software Verification

Software Walkthroughs (Peer Reviews)

A walkthrough is an activity in which people other than the author walk through every sentence of the software information artifacts (mainly documents) and every line of code of the software code artifacts (source code, table scripts, stored procedures, interfaces, routines, etc.).

There are five types of walkthroughs:

1. Independent walkthroughs

2. Guided walkthroughs

3. Group walkthroughs

4. Expert reviews

5. Managerial reviews

Software Inspection

The software inspection is structured to serve the needs of quality management in verifying that a software artifact complies with the standard of excellence for software engineering artifacts. The focus is one of verification, on doing the job right. The software inspection is a formal review held at the conclusion of a life-cycle activity and serves as a quality gate, with exit criteria for moving on to subsequent activities.

The software inspection follows a structured review process of planning, preparation, entry criteria, conduct, exit criteria, report-out, and follow-up. It ensures a close and strict examination of the product artifact against the standard-of-excellence criteria, which span completeness, correctness, style, rules of construction, and multiple views, and may also include technology and metrics. This close and strict examination results in the early detection of defects. The software inspection is led by a moderator, assisted by other role players including a recorder, reviewer, reader, and producer, and is initiated as an exit criterion for each activity in the life cycle. Product and process measurements are captured during the inspection session on specially formatted forms and reports, and the resulting issues and defects are tracked to closure.


The activities of the structured review process are organized for software inspections. Software walkthroughs may employ variations for planning, conduct, and follow-up.

The steps in the review process are:

  1. Planning

  2. Preparation

  3. Conduct

  4. Reporting

  5. Follow-Up
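The five steps above form a strictly linear workflow, which can be sketched as a small state machine. This is a hypothetical illustration, not a real review tool; the stage names simply mirror the list.

```python
from enum import Enum
from typing import Optional

class ReviewStage(Enum):
    PLANNING = 1
    PREPARATION = 2
    CONDUCT = 3
    REPORTING = 4
    FOLLOW_UP = 5

def next_stage(stage: ReviewStage) -> Optional[ReviewStage]:
    # Advance linearly through the review process; None marks completion.
    members = list(ReviewStage)
    i = members.index(stage)
    return members[i + 1] if i + 1 < len(members) else None

assert next_stage(ReviewStage.PLANNING) is ReviewStage.PREPARATION
assert next_stage(ReviewStage.FOLLOW_UP) is None
```

Modeling the process this way makes the quality-gate idea explicit: a review cannot reach `REPORTING` without passing through `CONDUCT` first.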

Audit Process

Audits are used mainly in organizations that have a defined software development process that has been implemented in their projects. Audits are document verification systems in which project documents and records are compared with the organization’s standards or defined processes. They generally are short in duration, with about one to two hours spent on auditing a project or a function.

Audits are used as a QA tool mainly to ensure conformance of project execution to the organization’s defined software development processes. They ensure that a project is being executed in conformance with the organization’s defined processes and that it is ready for the next phase of execution.

The audit process describes the types of audits to be conducted in the organization as well as the roles and responsibilities in requesting and conducting audits. It offers a procedure for conducting each of the following types of audits:

1. Periodic audits

The QA department prepares the annual audit plan for periodic audits. Normally, these audits are conducted once every quarter. The plan is reviewed by the software engineering process group and approved by the head of the QA department before the start of the new fiscal year. In each cycle of audits, 25% of the projects in the state of execution are covered, and all service departments (human resources, administration, network and systems administration, technical heads, and QA) are audited. The types of audit conducted in periodic audits are conformance audits and vertical audits. Figure A.1 depicts yearly audit planning.

2. Phase-end audits

Phase-end audits are conducted for every project at the end of the project initiation, requirements analysis, software design, construction, testing, and project closure phases.

3. Investigative audits
4. Any other organizational audits

Parameters of Comparison: Audit vs. Inspection

Depth of Investigation
Audit: Involves a deeper investigation to identify and analyze process issues.
Inspection: Centers on reviewing issues one by one.

Frequency
Audit: Occurs less frequently, but when it happens it can continue throughout the day.
Inspection: Occurs more frequently and repeatedly to ensure operating conditions.

Mode of Investigation
Audit: Qualitative; employs means such as interviews and data collection.
Inspection: Quantitative; deals largely with questions of a numerical order.

Structure
Audit: Has no particular well-defined structure, since the process entails tasks such as evaluative examination and equipment analysis.
Inspection: Often follows a more rigid, checklist-based structure.

Subject of Interest
Audit: Mainly focuses on the entire management system.
Inspection: Focuses on the workplace, work equipment, and work activities.


Process of Software Validation

Validation indicates confirmation or corroboration of a claim. In the context of software development, validation refers to the activities performed on a software product to confirm that all of the designed (required) functionalities are indeed built and work in adherence with the original specifications (intended use), along with other implicit functions that ensure safety, security, and usability.

There are different methods that can be used for software testing. This section briefly describes the methods available.

Black-Box Testing

The technique of testing without any knowledge of the interior workings of the application is called black-box testing. The tester is oblivious to the system architecture and does not have access to the source code. Typically, while performing a black-box test, a tester interacts with the system's user interface, providing inputs and examining outputs without knowing how and where the inputs are processed.
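A minimal sketch of the black-box approach: the `discounted_price` function below is hypothetical and stands in for code the tester never sees; every test case is derived from the documented behavior ("return the price after a percentage discount, rounded to two decimal places"), not from the implementation.

```python
# Hypothetical unit under test. To a black-box tester this is only a
# documented interface; the body stands in for unseen production code.
def discounted_price(price: float, percent: float) -> float:
    return round(price * (100 - percent) / 100, 2)

# Cases come from the specification alone: a typical value plus the
# boundary values (0% and 100%) the spec implies.
assert discounted_price(200.0, 25) == 150.0   # typical case
assert discounted_price(99.99, 0) == 99.99    # boundary: no discount
assert discounted_price(80.0, 100) == 0.0     # boundary: full discount
print("all black-box cases passed")
```

Because the cases are spec-driven, they stay valid even if the implementation is completely rewritten.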



Advantages

  • Well suited and efficient for large code segments.

  • Code access is not required.

  • Clearly separates the user's perspective from the developer's perspective through visibly defined roles.

  • Large numbers of moderately skilled testers can test the application with no knowledge of implementation, programming language, or operating system.

Disadvantages

  • Limited coverage, since only a selected number of test scenarios is actually performed.

  • Inefficient testing, because the tester has only limited knowledge of the application.

  • Blind coverage, since the tester cannot target specific code segments or error-prone areas.

  • Test cases are difficult to design.

White-Box Testing

White-box testing is the detailed investigation of the internal logic and structure of the code. It is also called glass-box testing or open-box testing. In order to perform white-box testing on an application, a tester needs to know the internal workings of the code.

The tester needs to have a look inside the source code and find out which unit/chunk of the code is behaving inappropriately.
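A minimal white-box sketch, using a hypothetical `shipping_cost_cents` function: here the tester has read the source, sees that the `<=` comparison creates two branches and a boundary value, and chooses test inputs specifically to cover both.

```python
# Hypothetical unit under test. A white-box tester reads this body and
# derives cases from its structure, not just from its documentation.
def shipping_cost_cents(weight_kg: int) -> int:
    if weight_kg <= 5:                        # branch 1: flat rate
        return 499
    return 499 + (weight_kg - 5) * 150        # branch 2: per-kg surcharge

# One case per branch, including the boundary the `<=` creates --
# inputs a black-box tester could only guess at.
assert shipping_cost_cents(5) == 499          # boundary of branch 1
assert shipping_cost_cents(6) == 649          # first value in branch 2
assert shipping_cost_cents(10) == 1249        # deeper into branch 2
```

Prices are kept in integer cents so the assertions are exact; the branch-and-boundary selection is the essence of the technique.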



Advantages

  • As the tester has knowledge of the source code, it becomes very easy to find out which type of data can help in testing the application effectively.

  • It helps in optimizing the code.

  • Extra lines of code that could bring in hidden defects can be removed.

  • Because of the tester's knowledge of the code, maximum coverage is attained when writing test scenarios.

Disadvantages

  • Because a skilled tester is needed to perform white-box testing, costs are increased.

  • It is sometimes impossible to look into every nook and corner for hidden errors that may create problems, and many paths will go untested.

  • It is difficult to maintain, as it requires specialized tools such as code analyzers and debugging tools.

Grey-Box Testing

Grey-box testing is a technique for testing an application with limited knowledge of its internal workings. In software testing, the phrase "the more you know, the better" carries a lot of weight while testing an application.

Mastering the domain of a system always gives the tester an edge over someone with limited domain knowledge. Unlike black-box testing, where the tester tests only the application's user interface, in grey-box testing the tester has access to design documents and the database. With this knowledge, a tester can prepare better test data and test scenarios while making a test plan.
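A small grey-box sketch: the `register_user` function and the `users` table below are hypothetical stand-ins. The tester does not read the application source, but the design documents expose the database schema, so after driving the public interface the test can check the stored data directly.

```python
import sqlite3

# Hypothetical application code, exercised only through its interface.
def register_user(conn: sqlite3.Connection, name: str) -> None:
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()

# Schema known from the design documents, not from the source code.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

register_user(conn, "ada")                       # drive the public API
row = conn.execute("SELECT name FROM users").fetchone()
assert row == ("ada",)                           # verify against the known schema
```

The direct schema check is what distinguishes this from a pure black-box test, which could only observe the interface's outputs.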



Advantages

  • Offers the combined benefits of black-box and white-box testing wherever possible.

  • Grey-box testers don't rely on the source code; instead, they rely on interface definitions and functional specifications.

  • Based on the limited information available, a grey-box tester can design excellent test scenarios, especially around communication protocols and data-type handling.

  • Testing is done from the point of view of the user, not the designer.

Disadvantages

  • Since access to the source code is not available, the ability to go over the code is limited and test coverage is limited.

  • Tests can be redundant if the software designer has already run a test case.

  • Testing every possible input stream is unrealistic because it would take an unreasonable amount of time; therefore, many program paths will go untested.

A Comparison of Testing Methods

The following points differentiate black-box testing, grey-box testing, and white-box testing.

Knowledge of Internals
Black-box: The internal workings of the application need not be known.
Grey-box: The tester has limited knowledge of the internal workings of the application.
White-box: The tester has full knowledge of the internal workings of the application.

Other Names
Black-box: Also known as closed-box testing, data-driven testing, or functional testing.
Grey-box: Also known as translucent testing, as the tester has limited knowledge of the insides of the application.
White-box: Also known as clear-box testing, structural testing, or code-based testing.

Performed By
Black-box: End users, and also testers and developers.
Grey-box: End users, and also testers and developers.
White-box: Normally testers and developers.

Basis of Testing
Black-box: Based on external expectations; the internal behavior of the application is unknown.
Grey-box: Based on high-level database diagrams and data-flow diagrams.
White-box: Internal workings are fully known, and the tester can design test data accordingly.

Exhaustiveness and Time
Black-box: The least exhaustive and the least time-consuming.
Grey-box: Partly exhaustive and partly time-consuming.
White-box: The most exhaustive and time-consuming.

Algorithm Testing
Black-box: Not suited for algorithm testing.
Grey-box: Not suited for algorithm testing.
White-box: Suited for algorithm testing.

Data Domains and Internal Boundaries
Black-box: Can only be tested by trial and error.
Grey-box: Can be tested, if known.
White-box: Can be better tested.

Reviewed by Nischal Lal Shrestha on February 05, 2021.