Test Design


Activity: Test Design

Objective

The Test Design phase covers the definition of Test Requirements and Test Cases and confirmation that the Test Environments are ready. The schedule for Test Execution is also finalised, and revisions to the overall approach are made, during this phase.

The primary focus of the Test Design phase, however, is on the definition of Test Cases. Test Cases document the steps, pre-conditions, data requirements, and expected results required to test each of the scenarios identified within the Test Plan. These Test Cases are then run during the Test Execution phase of the project.

Test Cases are defined across the workstreams to be tested (applications, infrastructure, information) and by cycle of testing (Functional Testing, SIT Testing, E2E Testing, SVT, UAT and PVT). For each cycle of testing, requirements must be defined and tracked before the Test Cases themselves can be defined. Definition of Test Cases includes the following steps:

  • Identify and document testing scenarios
  • Produce test cases (happy path and exception cases)
  • Verify test coverage against traceability matrix
  • Develop overall testing scripts
  • Automate testing processes (where applicable)

These are iterative processes that may involve multiple review cycles and refinements.
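
Where Test Cases are captured in a structured or automated form, it helps to record the steps, pre-conditions, data requirements and expected results in a consistent layout that links back to the Test Plan and the traceability matrix. The following Python sketch is one illustrative way of doing this; the field names and example identifiers are assumptions, not part of the MIKE2.0 templates.

  # Illustrative sketch only: field names and identifiers are assumptions,
  # not taken from the MIKE2.0 templates.
  from dataclasses import dataclass, field
  from typing import List

  @dataclass
  class TestCase:
      case_id: str                  # unique identifier, e.g. "APP-FUNC-001"
      scenario: str                 # scenario from the Test Plan being exercised
      cycle: str                    # Functional, SIT, E2E, SVT, UAT or PVT
      preconditions: List[str]      # state that must hold before execution
      data_requirements: List[str]  # test data needed to run the case
      steps: List[str]              # ordered execution steps
      expected_results: List[str]   # results that determine pass or fail
      requirement_ids: List[str] = field(default_factory=list)  # links to the traceability matrix

  happy_path = TestCase(
      case_id="APP-FUNC-001",
      scenario="Customer record is created from a valid input file",
      cycle="Functional",
      preconditions=["Test environment available", "Reference data loaded"],
      data_requirements=["Valid customer input file (10 records)"],
      steps=["Load the input file", "Run the customer load job", "Query the customer table"],
      expected_results=["10 customer records created", "No rejects written"],
      requirement_ids=["FR-CUST-012"],
  )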

Major Deliverables

  • Test Cases for each phase of Testing across Applications, Infrastructure and Information

Tasks

Define Test Requirements

Objective:

Test Requirements translate the specification of the software to be developed into specific requirements for testing that software. The major steps for defining Test Requirements are as follows:

  • Create the initial Test Requirements from the design specifications
  • Create a traceability matrix of Test Requirements to Functional Requirements (a coverage check is sketched after this list)
  • Prioritise Test Requirements by risk and impact
  • Refine Test Requirements to a testable level
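
A traceability matrix can be checked mechanically to confirm that every Functional Requirement is covered by at least one Test Requirement. The Python sketch below assumes a simple dictionary-shaped matrix; the requirement identifiers are illustrative only.

  # Illustrative sketch: the matrix shape and requirement identifiers are assumptions.
  from typing import Dict, List

  def uncovered_requirements(
      functional_requirements: List[str],
      traceability: Dict[str, List[str]],  # Test Requirement -> Functional Requirements it covers
  ) -> List[str]:
      """Return Functional Requirements not covered by any Test Requirement."""
      covered = {fr for frs in traceability.values() for fr in frs}
      return [fr for fr in functional_requirements if fr not in covered]

  matrix = {
      "TR-001": ["FR-CUST-012", "FR-CUST-013"],
      "TR-002": ["FR-CUST-014"],
  }
  print(uncovered_requirements(["FR-CUST-012", "FR-CUST-013", "FR-CUST-014", "FR-CUST-015"], matrix))
  # ['FR-CUST-015'] -> a coverage gap that needs a new Test Requirement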


Input:

  • Testing Strategy
  • Detailed Business Requirements for Increment
  • Solution Architecture
  • Respective design areas for integration, applications and information management (done in parallel)


Output:

  • Test Requirements
  • Traceability matrix of Test Requirements to Functional Requirements

Design Test Cases for Application Development

Objective:

An overall set of test cases is created for testing applications, using the categories defined above. For testing Applications, Functional Testing and E2E Testing will be the areas of highest complexity.


Input:

  • Test Strategy
  • Test Plans for Applications (output of the Develop Test Plans for Applications task)
  • Test Requirements
  • Requirements and Design for Application Development (done in parallel with design)


Output:

Application Development Test Cases for:

  • Functional Testing
  • SIT Testing
  • E2E Testing
  • SVT
  • UAT
  • PVT

Test Case development should be staggered to align with an implementation schedule.
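
Where Test Cases for an application are automated, a Functional Testing case can be expressed directly as executable code covering both a happy path and an exception case. The sketch below uses pytest against a hypothetical create_customer function; the function and its validation rules are assumptions used only for illustration.

  # Illustrative sketch: create_customer and its validation rules are hypothetical.
  import pytest

  def create_customer(name: str, email: str) -> dict:
      """Stand-in for the application function under test."""
      if not name or "@" not in email:
          raise ValueError("invalid customer details")
      return {"name": name, "email": email, "status": "ACTIVE"}

  def test_create_customer_happy_path():
      customer = create_customer("Jane Doe", "jane@example.com")
      assert customer["status"] == "ACTIVE"

  def test_create_customer_rejects_invalid_email():
      # Exception case: there should be more of these than happy path cases.
      with pytest.raises(ValueError):
          create_customer("Jane Doe", "not-an-email")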

Design Test Cases for Infrastructure Development

Objective:

This task provides test cases focused primarily on management functions and integration. For testing Infrastructure, Functional Testing (for integration), SIT Testing and SVT will be the areas of highest complexity.


Input:

  • Test Strategy
  • Test Plans for Infrastructure (output of the Develop Test Plans for Infrastructure task)
  • Test Requirements
  • Requirements and Design for Infrastructure Development (done in parallel with design)


Output:

Infrastructure Development Test Cases for:

  • Functional Testing
  • SIT Testing
  • E2E Testing
  • SVT
  • UAT
  • PVT

Test Case development should be staggered to align with the implementation schedule.
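
Many infrastructure Functional (integration) and SIT cases reduce to verifying that managed components can be reached from the test environment. The Python sketch below is a minimal connectivity check; the host names, ports and timeout are placeholders, not values from a real environment.

  # Illustrative sketch: endpoints and timeout are placeholders.
  import socket

  ENDPOINTS = [
      ("app-server.test.local", 8080),
      ("db-server.test.local", 1521),
      ("mq-broker.test.local", 61616),
  ]

  def endpoint_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
      """Return True if a TCP connection to host:port succeeds within the timeout."""
      try:
          with socket.create_connection((host, port), timeout=timeout):
              return True
      except OSError:
          return False

  for host, port in ENDPOINTS:
      status = "OK" if endpoint_reachable(host, port) else "UNREACHABLE"
      print(f"{host}:{port} {status}")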

Design Test Cases for Information Development

Objective:

An overall set of test cases is created for testing information development, focused primarily on Data Re-Engineering processes and Metadata Management. For testing Information Development, Functional Testing and E2E Testing will be the areas of highest complexity.

Input:

  • Test Strategy
  • Test Plans for Information (output of the Develop Test Plans for Information task)
  • Test Requirements
  • Requirements and Design for Information Development (done in parallel with design)


Output:

Information Development Test Cases for:

  • Functional Testing
  • SIT Testing
  • E2E Testing
  • SVT
  • UAT
  • PVT

Test Case development should be staggered to align with the implementation schedule.
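
A common Functional Testing case for Data Re-Engineering is a reconciliation of a source extract against the re-engineered target. The Python sketch below compares record counts and per-record checksums between two CSV files; the file names and key column are assumptions used only for illustration.

  # Illustrative sketch: file names, key column and layout are assumptions.
  import csv
  import hashlib

  def load_checksums(path: str, key_column: str) -> dict:
      """Map each record's key to a checksum of the full row."""
      checksums = {}
      with open(path, newline="") as f:
          for row in csv.DictReader(f):
              digest = hashlib.sha256("|".join(row[c] for c in sorted(row)).encode()).hexdigest()
              checksums[row[key_column]] = digest
      return checksums

  source = load_checksums("source_extract.csv", key_column="customer_id")
  target = load_checksums("target_extract.csv", key_column="customer_id")

  missing = set(source) - set(target)
  changed = {k for k in set(source) & set(target) if source[k] != target[k]}
  print(f"source={len(source)} target={len(target)} missing={len(missing)} changed={len(changed)}")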

Check Availability of Test Data and Testing Environment

Objective:

In this step we complete the acquisition of test data for Test Execution, an activity that was started during Test Planning.

Input:

  • Detailed Business Requirements for Increment
  • Testing Strategy
  • Test Planning

The gathering of test data was started in the previous phase and should continue in parallel with this task. Test data will also be required for unit testing, which is conducted by the developer.

Output:

  • Test Data Extracts
  • Test Environment Ready
  • Revisions to Test Plan
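
Where test data is extracted from production-like sources, sensitive fields usually have to be sampled and masked before the extract is loaded into the test environment. The Python sketch below illustrates one way to do this; the column names, sample rate and masking rule are assumptions, not a prescribed approach.

  # Illustrative sketch: column names, sample rate and masking rule are assumptions.
  import csv
  import hashlib
  import random

  SENSITIVE_COLUMNS = {"email", "phone"}
  SAMPLE_RATE = 0.1  # keep roughly 10% of records

  def mask(value: str) -> str:
      """Replace a sensitive value with a stable, non-reversible token."""
      return hashlib.sha256(value.encode()).hexdigest()[:12]

  with open("source_extract.csv", newline="") as src, \
       open("test_data_extract.csv", "w", newline="") as dst:
      reader = csv.DictReader(src)
      writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
      writer.writeheader()
      for row in reader:
          if random.random() > SAMPLE_RATE:
              continue
          for column in SENSITIVE_COLUMNS & set(row):
              row[column] = mask(row[column])
          writer.writerow(row)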

Core Supporting Assets

Yellow Flags

  • Too few exception test cases; there should be many more exception test cases than "happy path" test cases
  • Test Design does not trace back to business and technical requirements
  • Test Design does not include regression testing
  • Test environments are not ready
  • Testing approach is very document-centric and does not incorporate automated testing

Key Resource Requirements
