System Integration Testing

From MIKE2.0 Methodology

Activities in Phase 5
Phase 5 - Incremental Development, Testing, Deployment and Improvement


Activity: System Integration Testing

Objective

System Integration Testing (SIT) verifies the proper execution of software components and the proper interfacing between components within the solution. The objective of SIT Testing is to validate that all software module dependencies are functionally correct and that data integrity is maintained between separate modules across the entire solution. Whereas Functional Testing focuses on testing all business rules and transformations, SIT Testing focuses principally on the automated aspects of the solution and its integration touch points.

As testing for dependencies between different components is a primary function of SIT Testing, this area is often the most subject to Regression Testing. It therefore makes sense to ensure that the test data used for SIT Testing is small enough to allow test cases to run in the minimum amount of time. End-to-end integrity within the test data set must also be assured, to enable the highest levels of automation in the testing process.

The capability to do some automated testing is ideal at all stages, and it is particularly valuable during SIT Testing. While a specialised vendor tool can be very helpful for this process, automated SIT Testing can generally be done very effectively through scripts.
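As a minimal sketch of such a script-driven check, the function below verifies end-to-end integrity of a small SIT data set by confirming that every record emitted by an upstream module arrives intact at the downstream module. The module hand-off is represented by in-memory rows and a hypothetical `id` key; a real harness would read these from the actual staging tables.

```python
# Script-based SIT check: compare record keys across a module boundary.
# "upstream_rows"/"downstream_rows" are hypothetical stand-ins for real
# extracts from the two sides of an integration touch point.

def sit_check_handoff(upstream_rows, downstream_rows, key="id"):
    """Return a list of defect messages; an empty list means the handoff passed."""
    defects = []
    up_keys = {r[key] for r in upstream_rows}
    down_keys = {r[key] for r in downstream_rows}
    missing = up_keys - down_keys
    if missing:
        defects.append(f"{len(missing)} record(s) lost between modules: {sorted(missing)}")
    extra = down_keys - up_keys
    if extra:
        defects.append(f"{len(extra)} unexpected record(s) downstream: {sorted(extra)}")
    return defects

upstream = [{"id": 1}, {"id": 2}, {"id": 3}]
downstream = [{"id": 1}, {"id": 3}]
print(sit_check_handoff(upstream, downstream))
```

Because the check is a plain script, it can be re-run cheaply in every regression cycle, which is exactly why a small test data set matters.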

Major Deliverables

  • SIT Test Results
  • Changes to software environment (if required)

Tasks

Migrate Software to SIT Testing Environment

Objective:

Move software from the configuration management environment to the SIT Testing environment, ensuring that all changes have been made prior to propagating the software.

Input:

  • Availability of a suitable and stable SIT Testing environment

Output:

  • SIT Testing Environment ready
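One way to enforce the "ensure all changes have been made" condition is a scripted readiness gate that blocks propagation until every required configuration item is present. The configuration item names below are hypothetical examples, not part of the methodology.

```python
# Hypothetical readiness gate before migrating software into the SIT
# environment: propagation proceeds only when every required
# configuration item has been set.

REQUIRED_CONFIG = {"db_connection", "schema_version", "etl_schedule"}

def environment_ready(env_config):
    """Return (ready?, sorted list of missing configuration items)."""
    missing = REQUIRED_CONFIG - env_config.keys()
    return (len(missing) == 0, sorted(missing))

ok, missing = environment_ready({"db_connection": "sit-db:5432",
                                 "schema_version": "2.1"})
print(ok, missing)  # blocked: etl_schedule not yet configured
```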

Test BI Application Development Work Products

Objective:

SIT Testing from an application perspective is generally focused on initial connectivity and access to actual data passed in from systems. Ideally, additional functionality testing of applications (from a data-use perspective) is also done during this phase, but this may be difficult without complete sets of data; BI Application processes are fully tested for functionality during E2E Testing.

There may also be integration and automation components as part of the application environment (e.g. information delivery mechanism of reports to end-users); these should be tested during the SIT Testing phase.
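A connectivity-focused SIT pass can be sketched as a small suite that attempts a lightweight query against each source system before any data-use testing begins. The `fetch_one` callable and source names here are illustrative stand-ins for a real database driver and the project's actual systems.

```python
# Sketch of an SIT connectivity check for a BI application: "ping" each
# source with a trivial query and record pass/fail per system.

def connectivity_suite(sources, fetch_one):
    """Run a trivial query against each named source; return per-source results."""
    results = {}
    for name in sources:
        try:
            fetch_one(name, "SELECT 1")
            results[name] = "PASS"
        except Exception as exc:
            results[name] = f"FAIL: {exc}"
    return results

def fake_fetch(source, query):
    # Simulated driver: one source is unreachable in this example.
    if source == "crm":
        raise ConnectionError("host unreachable")
    return 1

print(connectivity_suite(["warehouse", "crm"], fake_fetch))
```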


Input:

  • SIT Test Environment ready
  • SIT Application Test Plans and Test Cases
  • Completed Test Case Execution for BI Application Development Functional Testing


Output:

  • Completed Initial Test Case Execution for BI Application Development SIT Testing

Test Integration Work Products

Objective:

The primary function of SIT Testing is to test the automation of integration processes and the dependencies between these processes. This may be a complex task in some environments, as there may be a very large number of integration processes and dependencies. SIT Testing should ensure that it replicates the likely timing scenarios and all dependencies in a sequence. Automated Testing provides the most value for SIT Testing of Integration processes, due to the complexity of this task.
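The sequencing and dependency behaviour described above can be sketched as a dependency-aware test runner: each integration process runs only after its upstream dependencies succeed, and dependents of a failed process are skipped rather than run out of sequence. The process names are illustrative; `run` stands in for executing a real integration job.

```python
# Dependency-aware execution sketch for SIT of integration processes.
# deps maps each process to the set of processes it depends on.

from graphlib import TopologicalSorter

def run_in_dependency_order(deps, run):
    """Run processes in topological order; skip dependents of any failure."""
    status = {}
    for proc in TopologicalSorter(deps).static_order():
        if any(status.get(up) != "PASS" for up in deps.get(proc, ())):
            status[proc] = "SKIPPED"          # an upstream process failed
        else:
            status[proc] = "PASS" if run(proc) else "FAIL"
    return status

deps = {"load_dim": set(),
        "load_fact": {"load_dim"},
        "build_cube": {"load_fact"}}
print(run_in_dependency_order(deps, lambda p: p != "load_fact"))
```

Encoding the dependency graph once and letting the runner derive the sequence is what makes this kind of SIT automation tractable when there are hundreds of processes.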


Input:

  • SIT Test Environment ready
  • SIT Integration Test Plans and Test Cases
  • Automation Components Developed and Unit Tested
  • Completed Test Case Execution for Infrastructure Development Functional Testing


Output:

  • Completed Initial Test Case Execution for Infrastructure Development SIT Testing

Test Information Development Work Products

Objective:

SIT Testing of Information Management processes generally overlaps with SIT Testing of Integration processes, as it often involves bringing together Data Management Services. It can also be applied at a lower level of detail, such as examining specific results of table loads, referential integrity issues or audit records.

For projects where there is integration of metadata between systems, this is a particularly critical part of SIT.
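The lower-level checks mentioned above (table loads, referential integrity) can be sketched as a script that flags fact-table rows whose foreign keys have no matching dimension row. The table contents and the `customer_id` key are toy data for illustration.

```python
# Lower-level information-management SIT check: find fact rows that
# violate referential integrity against a loaded dimension.

def check_referential_integrity(fact_rows, dim_keys, fk="customer_id"):
    """Return the fact rows whose foreign key is absent from the dimension."""
    return [r for r in fact_rows if r[fk] not in dim_keys]

fact = [{"order": 100, "customer_id": 1},
        {"order": 101, "customer_id": 9}]
dims = {1, 2, 3}
orphans = check_referential_integrity(fact, dims)
print(orphans)  # rows violating the foreign-key relationship
```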

Input:

  • SIT Test Environment ready
  • SIT Information Management Test Plans and Test Cases
  • Completed Test Case Execution for Information Development Functional Testing


Output:

  • Completed Initial Test Case Execution for Information Development SIT Testing

Defect Resolution and Re-Testing

Objective:

This task covers the re-execution of SIT test cases that had issues in the initial round of SIT Testing. Because of the dependencies between test cases, this may require regression testing of some other test cases; since testing for dependencies between components is the primary function of SIT Testing, this area is more subject to Regression Testing than the other test cycles. Regression testing may also require adding some new test cases.
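Scoping the regression run can itself be scripted: given the failed cases and a (hypothetical) dependency map between test cases, re-run each failed case plus everything transitively downstream of it.

```python
# Sketch of regression scoping after defect fixes: select each failed
# case and, transitively, every case that depends on it.

def regression_set(failed, depends_on):
    """depends_on maps a case to the cases it depends on; return cases to re-run."""
    # Invert the map: which cases sit downstream of each case.
    downstream = {}
    for case, ups in depends_on.items():
        for up in ups:
            downstream.setdefault(up, set()).add(case)
    to_run, stack = set(), list(failed)
    while stack:
        case = stack.pop()
        if case not in to_run:
            to_run.add(case)
            stack.extend(downstream.get(case, ()))
    return to_run

deps = {"T2": {"T1"}, "T3": {"T2"}, "T4": set()}
print(sorted(regression_set({"T1"}, deps)))
```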


Input:

  • Completed Initial Test Case Execution for BI Application Development SIT Testing
  • Completed Initial Test Case Execution for Infrastructure Development SIT Testing
  • Completed Initial Test Case Execution for Information Development SIT Testing


Output:

  • Completed Test Case Execution for BI Application Development SIT Testing
  • Completed Test Case Execution for Infrastructure Development SIT Testing
  • Completed Test Case Execution for Information Development SIT Testing

Exit criteria depend on the project's metrics, but typical exit criteria for SIT Testing may include:

  • 98% of performance test cases successfully completed
  • No Severity 1 or 2 Defects outstanding including performance issues
  • Agreed plans to resolve outstanding Severity 3 and 4 defects

These metrics should be agreed-upon as part of the overall Test Strategy.
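Once agreed, the criteria can be evaluated mechanically at the end of the cycle. The sketch below mirrors the example figures above; the metric names and thresholds are placeholders to be taken from the actual Test Strategy.

```python
# Evaluate the example SIT exit criteria against test-cycle metrics.
# Thresholds are illustrative and should come from the agreed Test Strategy.

def sit_exit_ok(metrics):
    """True when the pass rate, severity, and planning criteria are all met."""
    pass_rate = metrics["passed"] / metrics["total"]
    return (pass_rate >= 0.98                    # 98% of test cases passed
            and metrics["open_sev1_2"] == 0      # no Sev 1/2 defects open
            and metrics["sev3_4_have_plans"])    # Sev 3/4 resolution agreed

metrics = {"total": 200, "passed": 197,
           "open_sev1_2": 0, "sev3_4_have_plans": True}
print(sit_exit_ok(metrics))
```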

Core Supporting Assets

Yellow Flags

  • Defects that are functional in nature suggest that Functional Testing was insufficient, and that it may have been inappropriate to progress to SIT Testing
  • Large numbers of defects in connectivity and other aspects of technical testing may indicate deployment issues down the track if the root causes of these problems are not resolved
  • Inconsistent defects during SIT Testing may be related to latency problems or poor exception handling, and may be difficult to resolve

Key Resource Requirements
