Overall Application Test Plan Deliverable Template

This deliverable template is used to describe a sample of the MIKE2.0 Methodology (typically at a task level). More templates are now being added to MIKE2.0 as this has been a frequently requested aspect of the methodology. Contributors are strongly encouraged to assist in this effort.
Deliverable templates are illustrative as opposed to fully representative. Please help add examples to this template that are representative of the proposed output.

An Overall Application Test Plan is created for testing applications, using the categories defined below.

For testing Applications, Functional Testing and E2E Testing will be the areas of highest complexity.

The test plan for Infrastructure focuses primarily on integration and infrastructure management functions (backup & recovery, security, operations & management). For testing Infrastructure, Functional Testing (for Integration), SIT Testing and SVT Testing will be the areas of highest complexity.

For testing Information Development, Functional Testing and E2E Testing will be the areas of highest complexity.


Testing should be defined across the workstreams to be tested (applications, infrastructure, information) and by cycle of testing (Functional, SIT Testing, E2E Testing, SVT, UAT and PVT). Each cycle of testing will include:

Definition of overall testing scope and approach:

  • Define scope of each testing cycle
  • Identify business and system functions that require validation
  • Identify the different types of testing required
  • Determine what is required from a development perspective and dependencies between test cycles
  • Define test documentation and provide a revision of baseline testing documentation

Definition of Entry and Exit Criteria for Testing

  • Defect Definition
  • Defect Status
  • Escalation
  • Define Testing Signoff process and Handover procedures
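
As an illustration only, the entry and exit criteria above could be captured in code. The following Python sketch assumes hypothetical severity levels, status values, and escalation windows; the actual definitions would come from the programme's agreed criteria.

    from enum import Enum


    class Severity(Enum):
        CRITICAL = 1  # blocks a test cycle; no workaround available
        MAJOR = 2     # blocks a scenario; a workaround exists
        MINOR = 3     # cosmetic or low-impact defect


    class Status(Enum):
        OPEN = "open"
        IN_FIX = "in fix"
        RETEST = "ready for retest"
        CLOSED = "closed"


    # Hypothetical escalation windows: hours a defect may remain open
    # before it is escalated to programme management.
    ESCALATION_HOURS = {Severity.CRITICAL: 4, Severity.MAJOR: 24, Severity.MINOR: 120}


    def needs_escalation(severity, hours_open):
        """True when a defect has exceeded its escalation window."""
        return hours_open > ESCALATION_HOURS[severity]


    assert needs_escalation(Severity.CRITICAL, 6.0)    # overdue: escalate
    assert not needs_escalation(Severity.MINOR, 48.0)  # within window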

Definition of testing resources and responsibilities

  • Create a skills development plan
  • Assign responsibilities
  • Define test environments and configurations
  • Define testing tool suite
  • Establish test case libraries
  • Define the defect report template
  • Develop the overall traceability matrix
  • Ensure alignment with Risk and Issue Management Processes

Testing Plan

  • Produce testing task plan
  • Define detailed schedule
  • Define major milestones and dependencies

Examples

Listed below is an example test plan.

Sample Test Plan

Objectives

CSS Program Background

Client is a leading provider of retail loans and leases to consumers who purchase automobiles, motorcycles, and power equipment from authorized Client dealerships. In an effort to increase customer-servicing capabilities and improve business flexibility and responsiveness, Client has undertaken the Customer Service System (CSS) initiative. The technical objective of CSS is to replace some of the legacy applications with newer systems.

Testing Objectives

The objectives of the Client testing initiative will be to:

  • Validate that the selected requirements specified in the Business Process Flows (BPF) are satisfied
  • Validate that the CSS software satisfies user defined requirements or identifies those requirements that have not been satisfied
  • Validate that the custom developed code functions as designed
  • Validate that new incidents were not introduced into the existing functionality
  • Validate that the operational processes perform within defined quality and time metrics
  • Validate that the systems and network process the estimated volume requirements within defined service level agreements
  • Validate that the interfaces between both internal and external applications interact and process data correctly
  • Validate that the application systems are configured and process data according to pre-determined parameters
  • Validate that the system meets all stated security requirements
  • Validate that the back-up and restore processes function as designed
  • Validate that failure analysis and fail over processes function as designed to maintain data integrity
  • Validate that the disaster recovery plan is properly documented and tested to enable a system recovery in the event of a defined disaster
  • Validate the end-to-end processing of the Client solution

The following system components will be included in the testing initiative.

  • XYZ AF application systems
  • Legacy system modifications
  • ABC system
  • Graphical user interfaces
  • System interfaces (both internal and external)
  • Reporting applications
  • Data Conversion

CSS Program Lifecycle

This section outlines the lifecycle stages of the CSS Program and the testing activities in each.

The Client testing effort will use our Methodology as an initial framework for the CSS Program lifecycle. The Methodology provides a standard, repeatable and proven implementation process for delivering multi-vendor systems integration solutions. The methodology and the development lifecycle methodologies used by the five individual development groups were merged to form the CSS Program Methodology. The five development groups include:

  • XYZ AF
  • GUIs – Reports – Interfaces
  • ISD Legacy
  • ABC
  • Data Conversion

Planning and Analysis

The high level requirements for the project are defined and documented during the Planning and Analysis stage. Preparations for the testing phases are also identified during this stage; preparing for testing requires a significant amount of planning, communication, and coordination. The following testing activities will be performed during this stage:

  • Define the testing objectives
  • Define the testing approach
  • Define testing roles and responsibilities
  • Identify key milestone dates

Solution Definition

The detailed functional and technical requirements for the project are defined and documented during the Solution Definition stage. The testing deliverables map the business functions and process flows to testing scenarios. The success of the testing effort is dependent on the thoroughness and coverage of the testing scenarios. The Solution Definition and Test Plan documents are the key deliverables during this stage. The following testing activities will be performed during this stage:

  • Develop functional test plans
  • Evaluate and select testing tools
  • Define the incident reporting process
  • Define the business processes / activities to be tested
  • Identify the system functionality to be tested

Detail Design (Test Cases)

Once the requirements are defined and approved, the system modifications necessary to meet those requirements are documented in design documents. As the changes to the system are identified, test cases and scripts will be developed to fulfill the testing requirements derived from process flows and design documents. The Detail Design documents and Test Cases are the key deliverables during this stage. The following testing activities will be performed during this stage:

  • Develop test cases
  • Develop test scripts including expected results, to validate business and system functionality
  • Define and describe the test data required to execute test scripts
  • Identify the modifications and custom development to be tested
  • Identify the system interfaces to be tested

Build/Unit Test

The modifications to the system, outlined in the design documents, are implemented during the Build / Unit Test stage. As each component is modified, it is unit tested, application tested, Beta tested, and functionally tested to verify that the modifications meet the previously defined requirements. During unit testing, the smallest units of the system are tested. The definition and version of builds are controlled and communicated so the code can be appropriately tested through all testing levels. The executable code, together with the unit, application, Beta, and functional test results, are the key deliverables during this stage. During the Build / Unit Test stage, the system architects and system users will be available for guidance. The following testing activities will be performed during this stage:

  • Establish the testing environment, including the hardware, software, databases, screens, and tools
  • Acquire or create the test data required to execute test scripts
  • Develop the installation, backup, restoration, and construction procedures for the testing environment
  • Conduct Unit Testing
  • Conduct Application Testing
  • Conduct Functional Testing

Handshake / System Testing

During this stage, the components previously tested as individual units are compiled and they are tested as a complete system. Handshake Testing verifies that the physical interfaces between components are operating correctly. Handshake Testing does not focus on data validation, but additional integration testing will be performed to validate the data being passed and interface functionality. System Testing is responsible for testing the complete end-to-end functional interaction between components. During System Testing, all the systems will be integrated and they will be tested as one unit. Performance, stress, regression, and security testing are also performed during this stage. Test scripts are executed to verify that the desired results are achieved and to identify any discrepancies between expected and actual results. The Handshake and System Test results are the key deliverables during this stage. The following testing activities will be performed during this stage:

  • Execute the Handshake test scripts and evaluate results
  • Execute the functional system test scripts and evaluate results
  • Execute performance and stress test scripts and evaluate results
  • Execute security test scripts and evaluate results
  • Execute failover test scripts and evaluate results
  • Execute disaster recovery test scripts and evaluate results

User Acceptance Testing

User Acceptance Testing (UAT) is performed by the system users in a production-like environment. This stage is the last stage before delivery to production and verifies that the complete system meets all requirements set forth by the users. Test scripts will be executed to verify that the desired results are achieved, and to identify any discrepancies between expected and actual results. The User Acceptance Test result is the key deliverable during this stage. The following testing activities will be performed during this stage:

  • Execute the user acceptance test scripts and evaluate results
  • Verify the accuracy of the new business processes
  • Verify the accuracy of the training materials for the new business processes

Delivery

During the delivery stage, the system is migrated to the production environment and ownership is transferred to the appropriate support group(s). A Dress Rehearsal Playbook is the key deliverable during this stage. The following testing activities will be performed during this stage:

  • Produce Dress Rehearsal Test scripts
  • Execute the Dress Rehearsal Testing scripts and evaluate results for system production readiness
  • Verify the accuracy of the new business processes
  • Verify the accuracy of the training materials for the new business processes

Post-Production

The production support group(s) assumes ownership and maintains the systems.

Testing Definitions

This section defines the types of testing in the CSS Program.

The objective of the CSS Program testing initiative is to demonstrate that the functional and technical requirements, defined during the Solution Definition and Detail Design stages, have been satisfied. Integrating the testing methodologies of the different development groups with our methodology produced the CSS Program testing methodology.

The primary phases of testing in the CSS Program are listed and described below in section 4.1. Each of these testing phases occurs during a particular CSS Program Stage, as defined in section 3. Within each of the primary testing phases, specific functionality testing may occur, including: performance testing, stress testing, security testing, regression testing, disaster recovery and failover testing. These testing activities are defined in section 4.2.

(Refer to section 6.1 for detailed descriptions of the CSS Program teams.)

Program Testing Phases

These testing phases are performed during a particular stage of the CSS Program lifecycle.

Unit Testing

Unit Testing validates modifications to the smallest unit of the configuration or software application to verify that the functionality satisfies the requirement and design specifications. The purpose of unit testing is to verify that every executable statement can execute successfully and achieve the desired results. Both positive and negative testing will be performed during this phase. The test will be scripted to enable a quality review and the capability to re-execute the test. The test limits itself to the internal logic of a unit; however, any required interfaces may be simulated.

  Development Group Testing:
    Stage of Lifecycle: Build/Unit Test
    Responsibility: System Development Teams
    Participants: System Development Teams
    Location: Torrance, Little Rock
    Test Data Type: Seed
  CSS Program Testing: N/A
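
A minimal sketch of the unit-testing approach described above: scripted, covering both positive and negative cases, with a required interface simulated. The payoff function and rate service are hypothetical stand-ins for actual CSS units; the simulation uses Python's standard mock tooling.

    import unittest
    from unittest.mock import Mock


    def payoff_amount(principal, rate_service):
        """Unit under test: principal plus one period of accrued interest."""
        return round(principal * (1 + rate_service.current_rate()), 2)


    class PayoffAmountTest(unittest.TestCase):
        def test_positive_case(self):
            rates = Mock()
            rates.current_rate.return_value = 0.05  # simulated interface
            self.assertEqual(payoff_amount(1000.0, rates), 1050.0)

        def test_negative_case(self):
            rates = Mock()
            rates.current_rate.return_value = 0.05
            with self.assertRaises(TypeError):
                payoff_amount(None, rates)  # invalid input must fail


    if __name__ == "__main__":
        unittest.main()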

Application Testing

Application Testing is performed to verify that all internal application components for the package satisfy the requirements identified in the detailed design specification. Additionally, Application Testing verifies that all affected products, transactions, and outputs are processed and produced successfully. Application Testing will be performed only for the XYZ AF applications and the Data Conversion applications. Specifically for the Data Conversion Development Group, reconciliation and balancing will be included in the Application Testing.

  Development Group Testing:
    Stage of Lifecycle: Build/Unit Test
    Responsibility: XYZ Testing Support Team, Data Conversion Team
    Participants: XYZ Testing Support Team, Data Conversion Team
    Location: Little Rock
    Test Data Type: XYZ Test Data, Partial Volume / Fully Converted Data
  CSS Program Testing: N/A

Functional Testing

Functional Testing serves to validate incremental functionality developed for the solution to date. Unlike system testing, functional testing does not require a complete and entire solution to be available. Functional testing is, however, dependent on the availability of a limited scope of functionality across multiple components of the solution, so that a single process (which uses GUI, Interfaces, ABC, Reports/Letters) can be verified during end-to-end testing. In the event that partial functionality for one of the components is not available for functional testing (i.e., development and build/unit testing are not complete), functionality will be simulated using harnesses and stubs in order to achieve the results of end-to-end testing. Functional testing attempts to validate completed pieces of the solution in order to identify shortcomings in the fulfillment of business requirements as early as possible. Functional Testing encompasses more than unit testing because it builds upon the AF software with additional components of the CSS solution, especially the GUI.

  Development Group Testing:
    Stage of Lifecycle: Build/Unit Test
    Responsibility: Business Architecture Team
    Participants: Business Architecture Team, Delivery Management Team, XYZ Testing Support Team, System Development Teams
    Location: Little Rock
    Test Data Type: XYZ Test Data
  CSS Program Testing: N/A
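
The following Python sketch illustrates the harness-and-stub approach described above: a stub stands in for an unfinished component (here, a hypothetical letter-generation service) so an end-to-end process can still be exercised. All names are illustrative, not actual CSS interfaces.

    class LetterServiceStub:
        """Stands in for the unfinished Reports/Letters component."""

        def __init__(self):
            self.requests = []

        def send_letter(self, account_id, template):
            self.requests.append((account_id, template))  # record the call
            return True  # always "succeeds" so downstream steps can run


    def close_account(account_id, letter_service):
        """Simplified business process: close an account, notify the customer."""
        if not letter_service.send_letter(account_id, "closure-notice"):
            return "failed"
        return "closed"


    # Test harness: drive the process with the stub, check observable results.
    stub = LetterServiceStub()
    assert close_account("ACCT-001", stub) == "closed"
    assert stub.requests == [("ACCT-001", "closure-notice")]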

Handshake Testing

Handshake Testing verifies that the physical interfaces between components are operational. It validates logical groupings of units, components, or modules by combining them and testing them together, verifying the full and correct linkage between separate components and confirming that data and control are transferred properly between them.

Handshake Testing, on the CSS Program level, will be performed multiple times to accommodate the iterative software release process of the development groups. Please reference section 5.1 for a detailed explanation.

  Development Group Testing:
    Stage of Lifecycle: Build/Unit Test, Handshake/System Test
    Responsibility: XYZ Testing Support Team, Delivery Management Team
    Participants: Business Architecture Team, System Development Teams
    Location: Little Rock
    Test Data Type: Seed, XYZ Test Data
  CSS Program Testing:
    Stage of Lifecycle: Handshake/System Test, UAT
    Responsibility: Delivery Management Team
    Participants: Business Architecture Team, System Development Teams
    Location: Little Rock, Torrance
    Test Data Type: Partial Volume / Fully Converted Data, Full Volume / Fully Converted Data

System Testing

System Testing validates the functionality of the entire solution. This testing is driven from the business requirements. The purpose of system testing is to test both positive and negative conditions of business functions in an isolated and controlled environment to validate the quality of the system.

System Testing, on the CSS Program level, will be performed multiple times to accommodate the iterative software release process of the development groups. Please reference section 5.1 for a detailed explanation.

  Development Group Testing: N/A
  CSS Program Testing:
    Stage of Lifecycle: Handshake/System Test
    Responsibility: System Development Teams, Business Architecture Team
    Participants: System Development Teams, Delivery Management Team, Business Architecture Team, XYZ Testing Support Team
    Location: Little Rock
    Test Data Type: Partial Volume / Fully Converted Data

CSS Program User Acceptance Testing (UAT)

CSS Program User Acceptance Testing is performed by the user community to verify that the documented user requirements have been satisfied and that the system is ready for release to the user population. UAT utilizes converted data files to complete testing and validate the accuracy of data conversion efforts. UAT confirms full functionality as defined by the user, including screen navigation and the processing of data in a predictable manner.

  Development Group Testing: N/A
  CSS Program Testing:
    Stage of Lifecycle: User Acceptance Test
    Responsibility: XYZ Testing Support Team, System Development Teams, Delivery Management Team, Business Architecture Team
    Participants: End Users
    Location: Torrance
    Test Data Type: Full Volume / Fully Converted Data

CSS Program Dress Rehearsal Testing

CSS Program Dress Rehearsal Testing is performed to verify that all the systems and the data are ready to be moved into the production environment. Testing confirms that all the components and data are configured for the production environment.

  Development Group Testing: N/A
  CSS Program Testing:
    Stage of Lifecycle: Delivery
    Responsibility: Delivery Management Team, Data Conversion Team
    Participants: XYZ Testing Support Team, System Development Teams, Business Architecture Team
    Location: Torrance
    Test Data Type: Full Volume / Fully Converted Data

Specific Testing Activities (not coupled to a Stage of the Program)

The following testing activities may be performed in various stages of the CSS Program lifecycle. These testing activities will be performed throughout the CSS Program testing phases described above in section 4.1.

Performance Testing

Performance Testing validates the performance of the system in terms of architecture and structural capabilities. Both normal and large volumes of data are used to measure response time, database input/output, network throughput, and CPU utilization rates. This testing verifies that the system will support production volumes and that the response times for the system satisfy the documented performance requirements.

  Development Group Testing:
    Stage of Lifecycle: Build/Unit Test
    Responsibility: Delivery Management Team
    Participants: XYZ Testing Support Team, System Development Teams, Business Architecture Team
    Location: Little Rock
    Test Data Type: Seed, XYZ Test Data
  CSS Program Testing:
    Stage of Lifecycle: Handshake/System Test, User Acceptance Test
    Responsibility: Delivery Management Team
    Participants: System Development Teams, Business Architecture Team
    Location: Little Rock, Torrance
    Test Data Type: Partial Volume / Fully Converted Data (System Test), Full Volume / Fully Converted Data (UAT)
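
The following Python sketch illustrates the kind of response-time measurement described above, assuming a hypothetical transaction and a 2-second service level; the actual programme would use its selected performance-testing tool suite.

    import statistics
    import time


    def run_timed(transaction, iterations=100):
        """Execute a transaction repeatedly and summarise response times."""
        samples = []
        for _ in range(iterations):
            start = time.perf_counter()
            transaction()
            samples.append(time.perf_counter() - start)
        return {
            "mean_s": statistics.mean(samples),
            "p95_s": sorted(samples)[int(0.95 * len(samples)) - 1],
            "max_s": max(samples),
        }


    # Check a stand-in transaction against a hypothetical 2-second SLA.
    results = run_timed(lambda: time.sleep(0.01))
    assert results["p95_s"] < 2.0, "response time exceeds the service level"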

Stress Testing

Stress Testing demonstrates that strains in the environment due to excessive data volumes, concurrent users, batch jobs, or other resource consuming activities will not result in system performance failure, general system failure, damaged data, or lost data. It measures the capacity and resiliency of the system to discover the limitations of the system. Problem areas may include memory leaks, excessive use of system resources and disk space, and lockout of multi-user access.

  Development Group Testing:
    Stage of Lifecycle: Build/Unit Test
    Responsibility: XYZ Conversion Team, Delivery Management Team
    Participants: System Development Teams, Business Architecture Team
    Location: Little Rock
    Test Data Type: Seed, XYZ Test Data
  CSS Program Testing:
    Stage of Lifecycle: Handshake/System Test, User Acceptance Test
    Responsibility: Delivery Management Team
    Participants: System Development Teams, Business Architecture Team
    Location: Little Rock, Torrance
    Test Data Type: Partial Volume / Fully Converted Data (System Test), Full Volume / Fully Converted Data (UAT)

Security Testing

Security Testing validates that only legitimate users have access to the system. It will test system access controlled by the application, architecture, hardware, and software. Security testing verifies that internal processes and data remain inside the Client systems.

  Development Group Testing:
    Stage of Lifecycle: Build/Unit Test
    Responsibility: XYZ Testing Support Team, Delivery Management Team
    Participants: System Development Teams, Business Architecture Team
    Location: Little Rock
    Test Data Type: Seed, XYZ Test Data
  CSS Program Testing:
    Stage of Lifecycle: Handshake/System Test, User Acceptance Test
    Responsibility: Delivery Management Team
    Participants: System Development Teams, Business Architecture Team
    Location: Little Rock, Torrance
    Test Data Type: Partial Volume / Fully Converted Data (System Test), Full Volume / Fully Converted Data (UAT)

Failover Testing

Failover Testing verifies system recovery during the fail-over processes. Testing verifies that the backup systems start up and continue processing when and where the primary system stopped. This testing also validates that data integrity is maintained and that the proper documentation is in place.

  Development Group Testing: N/A
  CSS Program Testing:
    Stage of Lifecycle: User Acceptance Test
    Responsibility: Delivery Management Team
    Participants: System Development Teams, Business Architecture Team
    Location: Torrance
    Test Data Type: Full Volume / Fully Converted Data (UAT)

Disaster Recovery Testing

Disaster Recovery Testing validates the functionality of the backup, restore, and system recovery processes. The tests demonstrate that the database and software can recover from partial or full catastrophic failures of system hardware and software. Disaster Recovery Testing is the responsibility of AC; its requirements have yet to be determined.

  Development Group Testing: N/A
  CSS Program Testing:
    Stage of Lifecycle: User Acceptance Test
    Responsibility: AC, Delivery Management Team
    Participants: AC
    Location: Torrance
    Test Data Type: Full Volume / Fully Converted Data (UAT)

Regression Testing

Regression testing validates that changes to the existing software did not adversely affect existing functionality or interfaces. Regression testing is a technique applied at any level of testing to detect any unintended side effects resulting from changes made to an established software base.

  Development Group Testing:
    Stage of Lifecycle: Build/Unit Test, Application Test, Handshake Test, Functional Test
    Responsibility: XYZ Testing Support, System Development Teams, Delivery Management Team
    Participants: Business Architecture Team
    Location: Little Rock
    Test Data Type: Seed, XYZ Test Data
  CSS Program Testing:
    Stage of Lifecycle: Handshake Test, System Test, User Acceptance Test
    Responsibility: System Development Teams, Delivery Management Team
    Participants: Business Architecture Team, End Users
    Location: Little Rock, Torrance
    Test Data Type: Partial Volume / Fully Converted Data (System Test), Full Volume / Fully Converted Data (UAT)
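
The following Python sketch illustrates regression testing as baseline comparison: an established calculation is re-run and compared against previously accepted results, so any unintended side effect surfaces as a mismatch. The payment function and baseline figures are hypothetical.

    def monthly_payment(principal, annual_rate, months):
        """Existing functionality whose behaviour must not change."""
        r = annual_rate / 12
        return round(principal * r / (1 - (1 + r) ** -months), 2)


    # Baseline results captured from the established software base before
    # the change; any mismatch is an unintended side effect.
    BASELINE = {
        (10_000.00, 0.06, 36): 304.22,
        (25_000.00, 0.045, 60): 466.08,
    }

    for (principal, rate, months), expected in BASELINE.items():
        actual = monthly_payment(principal, rate, months)
        assert actual == expected, f"regression detected: {actual} != {expected}"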

Testing Approach

This section outlines the relationship between the different development group lifecycles and the CSS Program lifecycle.

Approach for Iterative Software Development

The five development groups in the CSS Program may promote their software to the program level in multiple releases. This iterative software development process will be accommodated by executing the program level Handshake Testing and System Testing multiple times. For example, for each major release of the XYZ AF system, program level Handshake Testing and program level System Testing will be performed on the software. User Acceptance Testing will not be performed until all the releases are final. Once all the development groups have finalized their systems and published their last software release, all the systems will be retested as one unit. The program level Handshake and System Tests will be re-executed and then the systems will be migrated to User Acceptance Testing.

CSS Program Testing Approach

The system modifications required by the CSS Program extend across various systems and development groups. The five development groups (XYZ AF, GUIs-Interfaces-Reports, ISD Legacy, Data Conversion, ABC) will each follow their own development lifecycle, patterned on the CSS Program lifecycle. As each development group completes its own development lifecycle, it will merge its systems with those of the other development groups. As the systems and functionality of the different development groups combine, the various testing phases will be repeated on the program level, testing the systems as one whole unit.

Test Team Responsibilities

This section outlines the roles and responsibilities for each team involved in the testing process.

Resources from the various System Development Teams, Delivery Management Team, Business Architecture Team, and the End Users will conduct the CSS Program testing effort. The specific resources for each group will be defined at a later stage in the project.

The responsibilities listed in this section are preliminary and further investigation is needed to finalize this list.

Team Responsibilities

The following teams and testing responsibilities have been defined for the CSS Program.

System Development Team(s)

Each of the development groups (XYZ AF, GUIs-Interfaces-Reports, ABC, ISD Legacy and Data Conversion) will have a separate System Development Team.

  • Design and build the individual applications
  • Perform Unit Test of the individual applications
  • Perform Handshake Testing of the individual applications
  • Perform System Testing of the individual applications
  • Perform Performance Testing of the individual applications
  • Perform Stress Testing of the individual applications
  • Incident remediation for the individual applications during all phases of testing

Delivery Management Team

  • Perform Handshake Testing of the CSS applications, as a whole unit
  • Perform Performance Testing of the CSS applications, as a whole unit
  • Perform Stress Testing of the CSS applications, as a whole unit
  • Perform Security Testing of the CSS applications, as a whole unit
  • Perform Failover Testing of the CSS applications, as a whole unit
  • Perform Disaster Recovery Testing of the CSS applications, as a whole unit
  • Specify incident remediation procedures
  • Configure tools for incident capture, tracking and remediation
  • Develop and implement standard test management process
  • Design and implement automated testing tools and procedures
  • Specify software configuration management (SCM) procedures
  • Manage the testing environments

XYZ Test Support Team

  • Perform Application Testing of the ASF Applications
  • Support Handshake Testing of the ASF Applications
  • Support System Testing of the ASF Applications
  • Support Performance Testing of the ASF Applications
  • Ensure that the ASF testing environments provide the ability to meet testing deliverables on time and with quality
  • Provide standard and customized incident tracking reporting
  • Coordinate incident logging, correction and validation
  • Report status of incidents (meetings, e-mail, etc.)
  • Schedule, coordinate and execute ASF batch test cycles
  • Coordinate, perform and communicate ASF code and PARMS migrations
  • Build JCL for ASF and perform updates as needed
  • Assist with account data selection for ASF test regions
  • Provide account data selection reports

Data Conversion Test Team

  • Transfer necessary data from the Client legacy applications to the ASF system
  • Perform field to field mapping
  • Perform data conversions
  • Perform application to application balancing
  • Assist with data selection reports
  • Perform data capture for Application and System Testing
  • Perform data conversion processes during program dress rehearsal
  • Support Data Conversion testing

Business Architecture Team

  • Develop test plans for the system testing of business process flows
  • Develop test scenarios, test cases, and scripts for business requirements
  • Perform functional testing for CSS business functionality
  • Support handshake/integration testing for CSS interfaces
  • Perform system testing for CSS business functionality
  • Plan and facilitate user acceptance testing
  • Complete testing components applicable to system testing: traceability matrix, testing and audit trail, testing metrics and incident reports, testing/incident meetings
  • Coordinate End-Users during User Acceptance Test

End-Users

  • Provide user requirements to Business Architecture Team for test case creation
  • Perform User Acceptance Testing of the CSS applications, as a whole unit

Testing Components

This section lists and describes the testing related components produced during the lifecycle of the CSS Program.

Test Strategy

This document defines the testing strategy for the Client CSS Program. The strategy document should outline overall objectives, approach, roles and responsibilities, deliverables, and quality assurance procedures.

Test Plan

Test Plans will be created for each phase of testing – unit testing, handshake testing, system testing, and user acceptance testing. In addition, plans will be created for performance & stress testing, security testing, failover testing, and disaster recovery testing. These documents will define the specific processes and functionality to be tested, resource requirements, and timelines.

Test Scripts

Test scripts outline what is to be tested, the setup requirements, clear instructions on the steps necessary to perform the test, the expected results, and the documentation of actual results and comments. Test scripts will contain sufficient information for a system user to be able to execute and validate the test. Test scripts will also give explicit details of the expected results.
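
As an illustration, a test script record covering the elements listed above might look like the following Python sketch; the field names and example values are assumptions, not a prescribed CSS format.

    from dataclasses import dataclass, field


    @dataclass
    class TestScript:
        script_id: str
        objective: str           # what is to be tested
        setup: list              # setup requirements
        steps: list              # explicit instructions for the tester
        expected_results: list   # explicit expected outcomes
        actual_results: list = field(default_factory=list)
        comments: str = ""


    script = TestScript(
        script_id="CSS-GUI-014",
        objective="Verify payoff quote screen navigation",
        setup=["Test account ACCT-001 loaded", "User logged in with CSR role"],
        steps=["Open account search", "Select ACCT-001", "Request payoff quote"],
        expected_results=["Quote shows principal plus accrued interest"],
    )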

Traceability Matrix

Traceability Matrices map the functional requirements to test scripts. The matrix documents the pass / fail status of the test scripts and the incident numbers for the failed scripts. This matrix helps determine if all the functional requirements have been tested, as well as the status of each test script.
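
A minimal Python sketch of such a matrix, with hypothetical requirement and script identifiers, showing how coverage and failed-script incident numbers can be checked:

    # Requirements mapped to test scripts with pass/fail status and
    # incident numbers; all identifiers are illustrative.
    MATRIX = {
        "BPF-4.2 Payoff quote": [
            {"script": "CSS-GUI-014", "status": "pass", "incident": None},
            {"script": "CSS-INT-031", "status": "fail", "incident": "INC-0203"},
        ],
        "BPF-7.1 Account closure letter": [],  # no coverage yet
    }

    # Coverage checks: every requirement needs at least one script, and
    # every failed script needs an incident number.
    untested = [req for req, rows in MATRIX.items() if not rows]
    unlogged = [row["script"] for rows in MATRIX.values() for row in rows
                if row["status"] == "fail" and row["incident"] is None]
    print("untested requirements:", untested)
    print("failures without incidents:", unlogged)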

Testing and Audit Trail

As the test scripts are executed, as outlined in the test script-tracking tool, an audit trail will be captured via hard copies of test results. In addition, the test script-tracking tool will be updated with the status of each executed test. Any incidents found will be logged into an incident-reporting tracking tool, and will be referred to the System Development Teams for fixes before re-testing.

Incident Tracking Inventory

Incidents from all testing efforts will be tracked using an incident-reporting tracking tool. The project will follow an incident reporting process, which will be outlined in a later section. The specific tracking tools will not be discussed in this document.
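
As an illustration, an incident record in such an inventory might carry the following fields; since the tracking tool is deliberately unspecified here, the structure and values below are assumptions.

    from dataclasses import dataclass
    from typing import Optional


    @dataclass
    class Incident:
        incident_id: str
        found_in: str         # testing phase, e.g. "System Test"
        script_id: str        # the failing test script
        severity: int         # 1 = critical ... 3 = minor
        status: str = "open"  # open -> in fix -> ready for retest -> closed
        assigned_team: Optional[str] = None


    incident = Incident(
        incident_id="INC-0203",
        found_in="System Test",
        script_id="CSS-INT-031",
        severity=2,
        assigned_team="System Development Team",
    )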

Testing Metrics and Incident Reports

Testing metrics and incident reports will be summarized into a periodic report for distribution and review in scheduled meetings.

Testing / Incident Meeting

Periodic meetings will be held to report on testing progress and to review open incidents with Project Management and Team Leads.

Sign-offs

The Program Management Team is responsible for verifying and acknowledging that the system fulfills the functional and / or technical requirements, before the application is migrated to the next testing phase or testing environment.
