Test strategy

1. Document Revision History 

| Date | Version No. | Author | Description |
| <dd/mm/yyyy> | <x.y> |  | <Document revision details / Approvals> |

2. Approvals 

The Test Strategy is approved by the following stakeholders: 

| Stakeholder | Full Name | Status | Date of Sign Off |
| Client Representative |  |  |  |
| Project Manager |  |  |  |
| DEV Lead |  |  |  |
| FE Lead |  |  |  |

3. Purpose

The purpose of the test strategy is to define the testing approach, the types of tests, the test environments, the tools to be used for testing, and the high-level details of how the test strategy will be aligned with other processes. The test strategy is intended to be a living document and will be updated as more clarity is gained on the requirements, test environment, build-management approach, etc.

4. Project Overview

The main purpose of the <Project Name> project is to create and style the site based on the requirements.

5. Tools for QA planning and testing purposes

Confluence will be used for storing all project-related information.

JIRA will be used as a bug tracking system, as well as for planning, tracking, and analyzing the project activities and tasks.

Google Docs will be used for creating supporting documentation (e.g., checklists or test cases).

BrowserStack will be used for cross-browser/device testing.

6. Requirements references for <Project Name>

| Type | Link | Comment |
| Active Sprint |  |  |
| Backlog |  |  |
| Requirements |  |  |
| Design |  |  |

7. Test Environment

| Name | Link |
| <Test environment 1> | <link> |
| <Test environment 2> | <link> |

8. Testing Types

The following testing types will be executed during the <Project Name> project:

8.1.  Smoke testing

The smoke testing will be performed to ensure that the most important functions work and all expected functional areas are available for testing. The results of smoke testing are used to decide if a build is stable enough to proceed with further testing. In other words, smoke tests play the role of acceptance criteria for each new build.

8.2. Functional testing

The functional testing will be executed to evaluate the compliance of a system, component, or third-party integration with the specified functional requirements and the corresponding expected results. Functional testing is performed for each planned feature and is guided by the approved client requirements.

8.3. Regression testing

The regression testing will be performed to ensure that reported bugs have been fixed, that previously working functionality has not failed as a result of the fixes, and that newly added features have not caused problems in earlier-delivered parts of the software.

Regression testing is usually performed once all components have been tested against the created high-priority test cases and no critical or blocking bugs found during component testing remain open.

The regression testing is usually done after the code freeze and is always done before the deployment to production.

8.4. Design (Responsive) testing

The design testing will be performed at all testing levels to ensure that the implementation meets the design-related specifications.

Responsive testing on tablet and mobile devices focuses on the project's business logic within the feature scope.

Design testing will be based on the approved scope of the UI designs - <link to the design>

(blue star) Design testing will NOT be based on pixel-to-pixel verification, but on general appearance only (element positions, colors).

(blue star) Responsive testing at other intermediate resolution values is out of scope (OOS).

8.5. Cross-browser compatibility testing

The cross-browser compatibility testing will be performed to verify that the solution works correctly in the agreed list of browsers.

Cross-browser testing will be performed manually, on the Test Environment only, in the browsers defined for cross-browser testing.

9. Planned testing types on the test environments (browsers, devices, email clients)

9.1. Browsers and Devices Scope

| Browser/device | Component testing (full scope) | Smoke/Sanity testing | Regression testing |
|  | detailed testing per component using the whole testing scope, based on created test cases | only high-priority tests plus exploratory testing (an experience-based technique) | regression test scenarios to confirm correct behavior of previously delivered critical functionality when a new build is deployed |
| Win10 + Chrome (the latest) * primary for Desktop scope | (blue star) | (blue star) | (blue star) |
| Win10 + Edge (the latest) | (blue star) | (blue star) | (blue star) |
| Win10 + Firefox (the latest) | (blue star) | (blue star) | (blue star) |
| MacOS + Safari (the latest) * primary for Desktop scope | (blue star) | (blue star) | (blue star) |
| iPhone 13 Pro Max (the latest iOS + Safari) * primary for Mobile scope | (blue star) | (blue star) | (blue star) |
| iPhone 12 Pro Max (the latest iOS + Safari) | (blue star) | (blue star) | (blue star) |
| Samsung Galaxy S22 Ultra (the latest Android + Chrome) * primary for Mobile scope | (blue star) | (blue star) | (blue star) |
| Samsung Galaxy S21 Plus (the latest Android + Chrome) | (blue star) | (blue star) | (blue star) |

Clarification:

(blue star) - used

(blue star) - not used

9.2. Email Clients

| Browser/Device | Email Client |
| Chrome, Windows | Gmail web |
| Android | Gmail application |
| iOS | Default Mail application |

10. Approach for Process Flow

10.1. Work with Tasks

Tasks will be split into BE (back-end) and FE (front-end).

(warning) Pay attention:

All specific statuses and labels should be defined according to the project.

  1. All Tasks selected for the current/next Sprint can be picked up for Test design.

  2. All Tasks that have the status “Ready for QA” should be assigned to QA.

  3. All found issues that relate to the Task should be linked to it.

  4. In case acceptance criteria are met, QA leaves a comment that the Task is "Ready for Demo".

  5. After the Demo, all Accepted Tasks will be closed by the responsible person.

10.2. Work with Bugs

Bug creation tips:

  • In case the found bug is related to a certain task, it should be linked to that task.

  • In case QA finds a Blocker/Critical bug that is not related to the task under test, it should be reported and added to the Active sprint.

  • In case QA finds a Major/Minor/Trivial bug that is not related to the task under test, it should be reported and added to the backlog.
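As a rough sketch, the routing rules above can be expressed in code; the `Bug` fields and the returned strings are illustrative assumptions made for this example, not an actual JIRA integration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Bug:
    priority: str                       # "Blocker", "Critical", "Major", "Minor", "Trivial"
    related_task: Optional[str] = None  # key of the related task, if any

def route_bug(bug: Bug) -> str:
    """Return where a newly found bug should go, per the tips above."""
    if bug.related_task is not None:
        # Bug relates to a certain task: link it to that task.
        return f"link to {bug.related_task}"
    if bug.priority in ("Blocker", "Critical"):
        # Unrelated blocker/critical bug: add it to the Active sprint.
        return "active sprint"
    # Unrelated Major/Minor/Trivial bug: report it and add it to the backlog.
    return "backlog"
```

For example, an unrelated Blocker is routed to the active sprint, while an unrelated Minor bug goes to the backlog.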

Bug verification tips:

  • In case the ticket passes verification, QA should add a detailed comment with a screenshot (video if needed) and move it to the "Approved"/"Done" status.

  • In case the ticket fails verification, QA should add a detailed comment with a screenshot (video if needed) and move the ticket to the "Reopened" status.

Bugs priority

Priority shows the degree of importance for the business to resolve the Issue. In other words, the priority is driven by business value and indicates how soon the Issue should be fixed.

Blocker

Indicates that the Defect needs to be fixed immediately: core functionality fails, test execution is completely blocked, and/or (some part of) the project is blocked.

Critical

Indicates that this Issue is causing a problem and requires urgent attention.

Major

Indicates that the issue has a significant impact and the Defect should be fixed as early as possible. Important functionality fails, but a workaround exists, so testing can continue in the meantime.

Minor

Indicates that this issue has a relatively minor impact. So, it may be fixed after the release / in the next release.

Trivial

Indicates that fixing this Defect can be deferred until all higher-priority Issues are fixed.
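The five levels above can be condensed into a small lookup table, together with a helper reflecting the rule that blocker/critical bugs must be fixed prior to the release; the dictionary structure and names are assumptions made for this sketch:

```python
# Illustrative summary of the priority levels defined above.
PRIORITY_POLICY = {
    "Blocker":  "fix immediately; testing and/or the project is blocked",
    "Critical": "urgent attention required",
    "Major":    "fix as early as possible; a workaround exists",
    "Minor":    "may be fixed after the release / in the next release",
    "Trivial":  "defer until all higher-priority issues are fixed",
}

def must_fix_before_release(priority: str) -> bool:
    """Blocker/critical bugs must be fixed prior to the release."""
    return priority in ("Blocker", "Critical")
```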

10.3. Regression testing procedure

The regression testing will be performed before the UAT, based on impact analysis, to ensure that reported bugs have been fixed, that no previously working functions have failed as a result of the changes, and that newly added features have not caused problems in earlier versions of the software.

The scope for regression testing is planned based on the priorities of the planned test cases and is complemented by impact analysis, if any.

Entrance criteria:

  • Planned Tasks are done; all found defects are registered in JIRA;

  • All blocker and critical defects for all features are fixed and acceptance criteria are met;

  • The features are deployed to the test environment - DEV;

  • The Production Candidate build is accepted by the QA team.

Exit criteria:

  • All blocker and critical defects, found during Regression testing for all features are fixed and all acceptance criteria are met.

  • The PO (Product Owner) confirms that the results are acceptable.

  • PO provides the final Go/NoGo decision.
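The entrance and exit criteria above can be sketched as boolean gates; the field names are assumptions made for illustration, not terms from the strategy:

```python
from dataclasses import dataclass

@dataclass
class RegressionState:
    # Entrance criteria
    tasks_done: bool
    defects_registered: bool
    blockers_fixed: bool            # blocker/critical defects fixed, AC met
    deployed_to_test_env: bool
    rc_build_accepted: bool
    # Exit criteria
    regression_blockers_fixed: bool = False
    po_confirmed: bool = False
    po_go_decision: bool = False

def can_start_regression(s: RegressionState) -> bool:
    """All entrance criteria must hold before regression testing starts."""
    return all([s.tasks_done, s.defects_registered, s.blockers_fixed,
                s.deployed_to_test_env, s.rc_build_accepted])

def can_release(s: RegressionState) -> bool:
    """All exit criteria must hold for the final Go decision."""
    return all([s.regression_blockers_fixed, s.po_confirmed, s.po_go_decision])
```

The gates are deliberately all-or-nothing: a single unmet criterion keeps regression testing (or the release) from proceeding.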

General tips:

  • After the QA team finishes regression testing, an official email with the results and the list of issues should be sent to the client team;

  • All regression bugs have to be reviewed by the PO to confirm the business priority;

  • All blocker/critical bugs found during regression should be fixed prior to the release.

11. Test Cases creation approach

Test design will start once any initial requirements (project passport, design, etc.) are provided to the development team.

As a result, high-level test cases for component testing will be created in the internal checklists for component and regression testing.

Whenever existing requirements are changed or new ones are received, the test documentation will be expanded with new checklists and test cases.

12. Build procedure

The main QA activities after a new build is deployed are described below:

12.1. In case it's a Standard build with new features and bug fixes:

The QA team performs smoke testing:

  • If no blocker/critical issues are found and the build is stable, the QA team prepares a list of bugs found during smoke testing (if any) and informs the team that the build is accepted. The Product Owner prioritizes which bugs should be part of the Sprint and which can go to the backlog.

  • If blocker/critical issues are found during smoke testing, the QA team prepares a list of the bugs found and informs the team that the build is declined. (The issues should then be fixed and a new build deployed ASAP.)

After smoke testing is done and the build is accepted, the QA team verifies fixed bugs and tests the included tasks.

12.2. In case it's a Release Candidate (Code Freeze):

The QA team performs smoke testing:

  • If no blocker/critical issues are found and the build is stable, the QA team prepares a list of bugs found during smoke testing (if any) and informs the team that the build is accepted.

  • If blocker/critical issues are found during smoke testing, the QA team prepares a list of the bugs found and informs the team that the build is declined. (The issues should then be fixed and a new build deployed ASAP.)

After smoke testing is done and the build is accepted, the QA team verifies fixed bugs and starts/continues Regression testing.

12.3. In case it's a Release build:

The QA team performs smoke testing:

  • If no blocker/critical issues are found and the build is stable, the QA team prepares a list of bugs found during smoke testing (if any) and informs the team that the build is accepted.

  • If blocker/critical issues are found during smoke testing, the QA team prepares a list of the bugs found. (After discussion with the Product Owner, two options are possible: 1. Rollback 2. Hotfix)
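The post-smoke decision for the three build types above can be sketched as follows; the build-type names and verdict strings are assumptions made for this example, not defined by the strategy:

```python
def smoke_verdict(build_type: str, blocker_or_critical_found: bool) -> str:
    """Decide the next step after smoke testing a newly deployed build."""
    if not blocker_or_critical_found:
        # Stable build: accepted, planned testing continues.
        return "accepted"
    if build_type == "release":
        # A declined release build is already live: the Product Owner
        # chooses between a rollback and a hotfix.
        return "escalate: rollback or hotfix"
    # Standard or release-candidate build: decline, fix, redeploy ASAP.
    return "declined: fix and redeploy"
```

The only asymmetry is the release build: since it is already deployed, a failed smoke test cannot simply wait for the next build and must be escalated.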
