Modeling Guide

Execute an SAP Data Hub Pipeline

Use the Pipeline operator in the SAP Data Hub Modeler tool to execute an SAP Data Hub Pipeline in an SAP Data Hub system.


You can use the Pipeline operator to execute a data pipeline in a remote SAP Data Hub system or in the local system. A data pipeline represents a concrete, complex data flow that transforms data as it passes between elements connected in series. Executing a data pipeline processes raw data from multiple sources and makes it available for different use cases.


  1. Start the SAP Data Hub Modeler.
  2. In the navigation pane, select the Graphs tab.
  3. In the navigation pane toolbar, choose + (Create Graph).
    The tool opens an empty graph editor in the same window, where you can define your graph.
  4. Select the operator.
    A graph can contain a single operator or a network of operators, depending on your business requirements.
    1. In the navigation pane, choose the Operators tab.
    2. In the search bar, search for the Pipeline operator.
    3. In the search results, double-click the Pipeline operator (or drag and drop it to the graph editor) to add it as a process in the graph execution.
  5. Configure the operator.
    1. In the graph editor, select the Pipeline operator and choose (Open Configuration).
    2. In the VFlow Connection text field, enter a connection ID that references a connection to a remote SAP Data Hub system.
      You can also use the form-based editor that the tool provides to select or enter the connection details. In the configuration pane, choose (Open editor). If you have already created connections in the SAP Data Hub Connection Management application, you can browse and select the required connection in the Connection ID dropdown list. If you want to enter the connection details for the remote system manually, select Manual in the Configuration Type dropdown list and enter the required connection details.
    3. In the Graph Name dropdown list, select the SAP Data Hub pipeline (graph) that you want to execute.
      The tool populates the dropdown list with all graphs in the remote system (based on the connection ID) or all graphs available in the local system.
    4. (Optional) In the Retry Interval text field, enter the time interval, in seconds, that the engine waits between status updates.
      The default value is 20 seconds.
    5. (Optional) In the Retry Attempts text field, enter the maximum number of times the engine queries for a status update.
      The default value is 10 attempts.
    6. In the Running Permanently dropdown list, select a value to indicate whether the selected SAP Data Hub pipeline is a permanently running data pipeline.

      If set to true (the default), the operator checks whether the data pipeline is already running, starts the pipeline execution if it is not, and then terminates immediately (status: completed) while the data pipeline remains running. If the pipeline was already running with a different task version, that instance is stopped, the new task version is started, and the operator again terminates immediately (status: completed).

      If set to false, the operator executes the data pipeline once, and the operator execution terminates after the pipeline execution terminates.

  6. Save and execute the graph.
    Use the Workflow Trigger and Workflow Terminator operators in the graph to control the start and stop of the graph execution, respectively.
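The configuration values from step 5 can be summed up as a plain settings map. The following Python sketch is only illustrative: the key names are hypothetical and need not match the operator's actual configuration keys, and the connection ID and graph name are placeholders.

```python
# Illustrative summary of the Pipeline operator settings from step 5.
# Key names, the connection ID, and the graph name are hypothetical.
pipeline_operator_config = {
    "vflowConnection": "MY_DATAHUB_CONNECTION",  # connection ID (step 5.2)
    "graphName": "com.example.mypipeline",       # pipeline to execute (step 5.3)
    "retryInterval": 20,         # seconds between status queries; default 20 (step 5.4)
    "retryAttempts": 10,         # maximum number of status queries; default 10 (step 5.5)
    "runningPermanently": True,  # default behavior (step 5.6)
}
```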
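The polling and Running Permanently semantics described in steps 5.4 to 5.6 can be sketched as a small control-flow function. This is a hedged illustration, not the operator's actual implementation: `execute_pipeline_operator`, `status_fn`, and `start_fn` are hypothetical stand-ins for calls to the remote or local SAP Data Hub system, and the task-version handling from step 5.6 is omitted.

```python
import time

def execute_pipeline_operator(graph_name, status_fn, start_fn,
                              retry_interval=20.0, retry_attempts=10,
                              running_permanently=True):
    """Illustrative sketch of the Pipeline operator's control flow.

    status_fn(graph_name) -> "running" | "completed" | "dead" | ...
    start_fn(graph_name)  -> starts an execution of the named graph.
    """
    if running_permanently:
        # Running Permanently = true: ensure the pipeline is running,
        # then terminate immediately with status "completed".
        if status_fn(graph_name) != "running":
            start_fn(graph_name)
        return "completed"

    # Running Permanently = false: execute once and poll for the result,
    # waiting retry_interval seconds between up to retry_attempts queries.
    start_fn(graph_name)
    for _ in range(retry_attempts):
        status = status_fn(graph_name)
        if status in ("completed", "dead"):
            return status
        time.sleep(retry_interval)
    raise TimeoutError(f"{graph_name} did not finish within "
                       f"{retry_attempts} status queries")

# Example with an in-memory stand-in for the remote system:
states = iter(["running", "completed"])
result = execute_pipeline_operator(
    "com.example.mypipeline",
    status_fn=lambda name: next(states),
    start_fn=lambda name: None,
    retry_interval=0.0,  # no waiting in this toy example
    running_permanently=False,
)
# result == "completed"
```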