
Legacy: Performing Automated and Semi-automated Tests of Effectiveness

 

Process Control can facilitate automation of the effectiveness testing of controls that exist in your ERP system. This increases testing efficiency and standardizes testing if several organizations have similar controls. You can customize your automated tests based on filter parameters. You can also run the automated tests at any frequency based upon your configuration. Automated test rules automate the test procedures. These rules use a script and rule criteria to identify control exceptions on data in the ERP system. Automated test rules can fully or partially automate your tests of effectiveness.

Test of Effectiveness

In a fully automated test of effectiveness, the system creates an issue when the system identifies exceptions based upon your rule criteria. The following figure displays the process flow for an automated test of effectiveness scenario:

[Figure: Process flow for an automated test of effectiveness]

  1. The system performs the test of control effectiveness. If the test passes, the work flow is complete.

  2. If the test fails, the system creates issues and routes them to the issue owner.

  3. The issue owner reviews the issues for validity. If it is not a valid issue, the work flow is complete.

  4. If it is a valid issue, the issue owner assigns a remediation plan owner and submits it.

    The plan owner creates, executes, and completes the plan.

  5. The issue owner reviews the remediation activities and closes the issue. The work flow is complete.
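The five steps above can be sketched as a simple decision sequence. This is an illustrative model only; all function and step names below are hypothetical, and Process Control performs these steps internally through its own workflow engine.

```python
# Illustrative model of the automated test-of-effectiveness workflow.
# All names are hypothetical; this is not a Process Control API.

def automated_workflow(test_passed: bool, issue_is_valid: bool) -> list:
    """Return the ordered workflow steps for one control test."""
    steps = ["perform_test"]
    if test_passed:
        return steps + ["workflow_complete"]          # step 1
    steps += ["create_issue", "route_to_issue_owner",  # step 2
              "review_issue"]                          # step 3
    if not issue_is_valid:
        return steps + ["workflow_complete"]
    steps += ["assign_remediation_plan_owner",         # step 4
              "create_execute_complete_plan",
              "review_remediation", "close_issue",     # step 5
              "workflow_complete"]
    return steps

# A failed test with a valid issue walks the full remediation path:
print(automated_workflow(False, True)[-2])  # close_issue
```

The two early exits mirror the figure: a passing test and an invalid issue both end the workflow without remediation.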

Semi-automated Test of Effectiveness

In a semi-automated test of effectiveness, the tester receives the test results, along with any issues the system created for identified exceptions. The tester must review and validate the exceptions, and can then void each issue or assign it to an owner for processing.

Automated and semi-automated tests of effectiveness differ in certain workflow tasks. The table below shows the routing of tasks for each.

Routing of Tasks for Automated and Semi-automated Tests of Effectiveness

Deficiency Rating of Issue             | Automated: Issues Go to | Semi-automated: Tasks Go to
Rule with Deficiency (High/Medium/Low) | Subprocess Owner        | Tester
Rule with Review Required              | Subprocess Owner        | Tester
Rule with No Deficiency                | N/A                     | N/A
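The routing table can be restated as a small lookup. In Process Control the actual recipients come from BC Set role assignments; the dictionary below is purely illustrative and only restates the table.

```python
# Hypothetical lookup mirroring the routing table above; recipients are
# really determined by BC Set role assignments, not by code like this.

ROUTING = {
    # deficiency rating: (automated recipient, semi-automated recipient)
    "High":            ("Subprocess Owner", "Tester"),
    "Medium":          ("Subprocess Owner", "Tester"),
    "Low":             ("Subprocess Owner", "Tester"),
    "Review Required": ("Subprocess Owner", "Tester"),
    "No Deficiency":   (None, None),  # N/A: nothing is routed
}

def recipient(rating: str, semi_automated: bool):
    """Who receives the issue (automated) or the task (semi-automated)."""
    automated_target, semi_target = ROUTING[rating]
    return semi_target if semi_automated else automated_target

print(recipient("High", semi_automated=False))  # Subprocess Owner
```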

Prerequisites

Activities
  • System execution of automated or semi-automated test of effectiveness

  • Access tasks related to automated or semi-automated test of effectiveness

  • Perform tasks related to automated or semi-automated test of effectiveness

  • Create and perform remediation plans

Note

Automatic retesting is not applicable to automated and semi-automated tests of effectiveness. This is because if the test is rerun for the same period, it would return the same results based upon the ERP data. For this reason, some companies perform automated testing on a more frequent basis than manual testing.

End of the note.

Procedure

System Execution of Automated or Semi-automated Test of Effectiveness
  1. Process Control performs automated tests based on the plan you created in the Planner. The plan includes information such as the start and due dates of testing, the organization name, and the control selection. On the plan start date, the test executes in the ERP system based on control-rule assignments. For more information, see Planner and Legacy: Control Rule Assignments.

  2. The ERP system returns any test exceptions to Process Control. Each exception has a deficiency rating of High, Medium, Low, or Review Required, depending on the rule settings and the data in your ERP system. You define the tolerance settings for High, Medium, and Low deficiencies within the rule parameters for specific rule criteria.

  3. If no exceptions are identified, the system performs the following depending on whether the test is fully or partially automated:

    • Automated Test of Effectiveness — Testing of the plan is complete. The system assigns the test a deficiency rating of Adequate.

    • Semi-automated Test of Effectiveness — The system assigns the test a deficiency rating of Adequate.

    Note

    For monitoring, no task is generated if no exceptions are found. For testing purposes, a task is generated, even if no exceptions are found.

    End of the note.
  4. If exceptions are identified, the system performs the following depending on whether the test is fully or partially automated:

    • Automated Test of Effectiveness — The system automatically creates an issue. The system routes the issue to the person assigned the task Receive Issues from Automated Test of Control Effectiveness. In the delivered business content (BC Set), this person has the role Subprocess Owner.

    • Semi-automated Test of Effectiveness — The system automatically creates an issue. The system routes the test results to the person assigned the task Perform semi-automated Test of Effectiveness. In the BC Set, this person has the Process Tester role. The tester can void the issue or assign the issue to an owner for processing.

    Note

    You can assign this task to another role, depending on your business requirements.

    End of the note.
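The logic of steps 2 through 4 can be sketched as follows. The tolerance thresholds below are invented examples; in Process Control you define them yourself in the rule parameters, and the routing targets assume the delivered BC Set roles.

```python
# Hypothetical sketch of steps 2-4: an exception count is mapped to a
# deficiency rating via tolerance thresholds (example values only), and
# the rating plus the test type determines what happens next.

EXAMPLE_TOLERANCES = [(0, "Adequate"), (2, "Low"), (10, "Medium")]

def rate(exception_count: int) -> str:
    """Map an exception count to a deficiency rating (step 2)."""
    for limit, rating in EXAMPLE_TOLERANCES:
        if exception_count <= limit:
            return rating
    return "High"

def outcome(rating: str, fully_automated: bool) -> str:
    """What the system does with the rating (steps 3-4)."""
    if rating == "Adequate":
        # No exceptions: the test receives an Adequate rating.
        return "rated Adequate"
    # Exceptions found: an issue is created and routed.
    return ("issue routed to Subprocess Owner" if fully_automated
            else "results routed to Tester")

print(outcome(rate(5), fully_automated=True))  # issue routed to Subprocess Owner
```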
Accessing Tasks Related to Automated or Semi-automated Test of Effectiveness

To access your tasks for compliance tests or control monitoring, choose a path from the following:

  • My Home → Work Inbox → Work Inbox – lists all tasks and reports delivered to your Work Inbox.

  • Evaluation Results → My Tasks → My Tasks – lists all your tasks.

  • Evaluation Results → Compliance → My Tasks – lists just your compliance tasks.

Performing Tasks Related to Issues from Automated/Semi-automated Test of Effectiveness
  1. To perform the task, select and open it.

  2. To review exceptions identified by the system, select the Evaluation tab. Choose the Fail link under the Results column to display details.

    The following instructions apply to semi-automated test of effectiveness only:

    • To review and validate the exceptions, select the Issue tab. Enter the issue owner and choose Submit. The issue status changes to Ready.

    • To void the issue, select the Issue tab. Choose Void the Issue, then choose Submit. The issue status changes to Canceled.

    Note

    The overall rating of the test is based upon the issues. A test with no open issues has passed and displays a green icon. A test with open issues (not voided) has failed and displays a red or yellow icon, depending upon the priority of the issues. If at least one issue with high priority exists, the rating is red. If no issues with high priority exist, the rating is yellow.

    End of the note.
  3. To perform tasks related to remediation, see Remediation of Open Issues.
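The overall-rating rule described in the note above can be sketched as a small function. Voided issues are assumed to be excluded before the call; the color names match the icons described in the note, while the function and parameter names are hypothetical.

```python
# Sketch of the overall test rating rule: no open issues means the test
# passed (green); otherwise the icon color depends on whether any open
# issue has high priority.

def overall_rating(open_issue_priorities: list) -> str:
    """Priorities of non-voided open issues, e.g. ["High", "Low"]."""
    if not open_issue_priorities:
        return "green"   # no open issues: the test passed
    if "High" in open_issue_priorities:
        return "red"     # at least one high-priority open issue
    return "yellow"      # open issues, but none with high priority

print(overall_rating([]))               # green
print(overall_rating(["Low", "High"]))  # red
print(overall_rating(["Medium"]))       # yellow
```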