
Performing Automated and Semi-automated Tests of Effectiveness


Process Control can facilitate automation of the effectiveness testing of controls that exist in your ERP system. This increases testing efficiency and standardizes testing if several organizations have similar controls. You can customize your automated tests based on filter parameters, and you can run them at any frequency based upon your configuration. Automated test rules automate the test procedures and can fully or partially automate your tests of effectiveness.

Test of Effectiveness

In a fully automated test of effectiveness, the system creates an issue when it identifies exceptions based upon your rule criteria. The process flow for an automated test of effectiveness is as follows:


  1. The system performs the test of control effectiveness. If the test passes, the workflow is complete.

  2. If the test fails, the system creates issues and routes them to the issue owner.

  3. The issue owner reviews the issues for validity. If an issue is not valid, the workflow is complete.

  4. If the issue is valid, the issue owner assigns a remediation plan owner and submits the issue.

    The plan owner creates, executes, and completes the plan.

  5. The issue owner reviews the remediation activities and closes the issue. The workflow is complete.
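The decision flow above can be sketched as a simple walk-through. This is an illustrative sketch only; the function and state names below are not part of Process Control.

```python
def run_automated_effectiveness_test(test_passed, issue_is_valid):
    """Illustrative walk-through of the automated test workflow.

    Returns the sequence of workflow states, mirroring steps 1-5 above.
    All state names are hypothetical labels for this sketch.
    """
    states = ["test_executed"]
    if test_passed:
        states.append("workflow_complete")        # step 1: test passes
        return states
    states.append("issue_routed_to_owner")        # step 2: issues created and routed
    if not issue_is_valid:
        states.append("workflow_complete")        # step 3: issue not valid
        return states
    states.append("remediation_plan_assigned")    # step 4: plan owner assigned
    states.append("remediation_plan_completed")   # plan created, executed, completed
    states.append("issue_closed")                 # step 5: issue owner closes the issue
    states.append("workflow_complete")
    return states
```

For example, a failed test with a valid issue passes through all five steps, while a passing test ends the workflow immediately.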

Semi-automated Test of Effectiveness

In a semi-automated test of effectiveness, the tester receives the test results, with any identified issues. The tester must review and validate the exceptions. The tester can then void the issue or assign the issue to an owner for processing.

Automated and semi-automated tests of effectiveness differ in certain workflow tasks. The table below shows how tasks are routed for each type of test.

Note

The receivers of issues and tasks in the table below represent the configuration predelivered by SAP. You can define your own settings in the Customizing activity found at   Governance, Risk and Compliance   General Settings   Workflow   Maintain Custom Agent Determination Rules  . For more information, see the SAP BusinessObjects Process Control 10.0 Security Guide.

End of the note.
Routing of Tasks for Automated and Semi-automated Tests of Effectiveness

Deficiency Rating of Issue              | Automated: Issues Go to | Semi-automated: Tasks Go to
----------------------------------------|-------------------------|----------------------------
Rule with Deficiency (High/Medium/Low)  | Subprocess Owner        | Tester
Rule with Review Required               | Subprocess Owner        | Tester
Rule with No Deficiency                 | N/A                     | N/A
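The routing table above can be expressed as a small lookup. Role and rating names follow the delivered configuration described in this document; the function name is illustrative, and the Review Required row is assumed to share the routing of the deficiency row.

```python
# Default task routing per deficiency rating, as predelivered by SAP.
# Values: (automated recipient, semi-automated recipient).
# None means no issue or task is routed for that rating.
ROUTING = {
    "Deficiency (High/Medium/Low)": ("Subprocess Owner", "Tester"),
    "Review Required": ("Subprocess Owner", "Tester"),  # assumed same as above
    "No Deficiency": (None, None),
}

def route(rating, semi_automated=False):
    """Return the default recipient role for a deficiency rating, or None."""
    automated_recipient, semi_recipient = ROUTING[rating]
    return semi_recipient if semi_automated else automated_recipient
```

A custom agent determination rule (see the note above) would replace these defaults.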

Procedure

System Execution of Automated or Semi-automated Test of Effectiveness
  1. Process Control performs automated tests based on the plan you created in the Planner. The plan includes information such as start and due date of testing, organization name, and control selection. When the plan start date occurs, the test executes in the ERP system based on business rule assignments.

    Automatic retesting is not applicable to automated and semi-automated tests of effectiveness. This is because if the test is rerun for the same period, it would return the same results based upon the ERP data. For this reason, some companies perform automated testing on a more frequent basis than manual testing.

    For more information, see Planner and Assigning a Business Rule to a Control.

  2. The ERP system returns any test exceptions to Process Control. The exceptions have a deficiency rating of High, Medium, Low, or Review Required, depending on the rule settings and the data in your ERP system. You define your tolerance settings for High, Medium, or Low deficiencies within the rule parameters for specific rule criteria.

  3. If no exceptions are identified, the system performs the following depending on whether the test is fully or partially automated:

    • Automated Test of Effectiveness — Testing of the plan is complete. The system assigns the test a deficiency rating of Adequate.

    • Semi-automated Test of Effectiveness — The system assigns the test a deficiency rating of Adequate.

    Note

    For monitoring, no task is generated if no exceptions are found. For testing purposes, a task is generated, even if no exceptions are found.

    End of the note.
  4. If exceptions are identified, the system performs the following depending on whether the test is fully or partially automated:

    • Automated Test of Effectiveness — The system automatically creates an issue. The system routes the issue to the person assigned the task Receive Issues from Automated Test of Control Effectiveness. In the delivered business content (BC Set), this person has the role Subprocess Owner.

    • Semi-automated Test of Effectiveness — The system automatically creates an issue. The system routes the test results to the person assigned the task Perform semi-automated Test of Effectiveness. In the BC Set, this person has the Process Tester role. The tester can void the issue or assign the issue to an owner for processing.

    Note

    You can assign this task to another role, depending on your business requirements.

    End of the note.
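The tolerance settings described in step 2 can be sketched as a simple threshold rule. The thresholds below are hypothetical examples; in Process Control you maintain them in the rule parameters for specific rule criteria, and a Review Required rating depends on the rule settings rather than on a count.

```python
def rate_exceptions(exception_count, high_threshold=100, medium_threshold=10):
    """Map an exception count to a deficiency rating.

    The thresholds are illustrative defaults, not delivered values;
    actual tolerances come from your rule parameters.
    """
    if exception_count == 0:
        return "No Deficiency"
    if exception_count >= high_threshold:
        return "High"
    if exception_count >= medium_threshold:
        return "Medium"
    return "Low"
```

With these sample thresholds, 150 exceptions would rate High, 10 would rate Medium, and 3 would rate Low.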
Accessing Tasks Related to Automated or Semi-automated Test of Effectiveness

To access your tasks and reports for compliance tests or control monitoring, choose   My Home   Work Inbox   Work Inbox  .

Performing Tasks Related to Issues from Automated/Semi-automated Test of Effectiveness
  1. To perform a task, select and open it.

  2. To review exceptions, select the Evaluation tab. Choose the Fail link under the Results column to display details.

    The following instructions apply to semi-automated test of effectiveness only:

    • To review and validate the exceptions, select the Issue tab. Enter the issue owner and choose Submit. The issue status changes to Ready.

    • To void the issue, select the Issue tab. Choose Void the Issue, then choose Submit. The issue status changes to Canceled.

    Note

    The overall rating of the test is based upon its issues:

    • Adequate (green icon) – Test with no open issues.

    • Deficient (yellow icon) – Test with open issues, none of which are high priority.

    • Significantly Deficient (red icon) – Test with open issues, at least one of which is high priority.

    End of the note.
  3. To perform tasks related to remediation, see Remediation of Open Issues.
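The overall-rating rule described in the note above can be sketched as follows. Representing the open issues as a list of priority strings is an assumption made for this illustration only.

```python
def overall_test_rating(open_issue_priorities):
    """Derive the overall test rating from its open issues.

    open_issue_priorities: priorities of the issues still open
    (e.g. "high", "medium", "low") -- a hypothetical representation.
    """
    if not open_issue_priorities:
        return "Adequate"                  # green icon: no open issues
    if any(p == "high" for p in open_issue_priorities):
        return "Significantly Deficient"   # red icon: at least one high-priority issue
    return "Deficient"                     # yellow icon: open issues, none high priority
```

For example, a test whose only open issues are low and medium priority is rated Deficient, while one open high-priority issue makes it Significantly Deficient.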