
SAP IQ Output Adapter

Adapter type: sybase_iq_out. The SAP IQ Output adapter reads data from Event Stream Processor and loads it into the SAP IQ database.

Prerequisites

Before running the adapter, create a directory for the primary file location and one for the overflow file location.

To be able to load data using the SAP IQ Output adapter, you need to:
  • Source the SAP IQ shell script (IQ-15_4.sh or IQ-15_4.csh) from your SAP IQ installation or SAP IQ client install before running the ESP Server. Then source the Event Stream Processor shell script ($STREAMING_HOME/../STREAMING.sh or $STREAMING_HOME/../STREAMING.csh).
  • On UNIX and Windows systems, install an SAP IQ ODBC driver using the SAP IQ client installation process.
  • On UNIX systems, you also need to install an ODBC driver manager. You can use the ODBC driver manager that is shipped with SAP IQ. See the SAP IQ documentation for more information.
  • On UNIX systems, Event Stream Processor expects your ODBC driver manager library to be called libodbc.so.1. Ensure that your driver manager library has this name, or create a symbolic link from libodbc.so.1 to your ODBC driver manager library.
  • Make sure that the Data Service has the ODBC Driver Library set correctly. Use streamingdbodbc_lib for the Windows platform. For non-Windows platforms, the choice of driver library depends on the size of SQLLEN in the driver manager. If SQLLEN is 4 bytes, use streamingdbodbc_lib. If SQLLEN is 8 bytes, use streamingdbodbc64_lib. See the SAP Event Stream Processor: Configuration and Administration Guide for more information.
  • On UNIX systems, SAP recommends that you upgrade to version 2.3.0 or later of unixODBC. If you are using a version earlier than 2.3.0, set a parameter for the driver that instructs the database manager not to synchronize database access. To do this, add a line that says “Threading = 0” for your driver in the odbcinst.ini file.
  • Within SAP IQ, grant the required permissions to the user that the adapter uses to connect to the SAP IQ database. The SQL sketch after this list illustrates one possible setup.
    • Enable reading permissions: set the allow_read_client_file option to on for the database into which the data is being loaded. Grant the user the READCLIENTFILE authority for versions of SAP IQ prior to 16.0, or READ CLIENT FILE system privileges for versions of SAP IQ 16.0 and later.
    • Enable permissions to load data into a table:

      SAP IQ prior to 16.0: If the -gl option is set to ALL, grant the user ALTER privileges. If the -gl option is set to DBA, grant the user DBA authority.

      SAP IQ from 16.0 and later: If the -gl option is set to ALL, grant the user ALTER or LOAD permission on the table, or the ALTER ANY TABLE, LOAD ANY TABLE, or ALTER ANY OBJECT system privileges. If the -gl option is set to DBA, grant the user ALTER ANY TABLE, LOAD ANY TABLE, or ALTER ANY OBJECT system privileges.

    The SAP IQ Output adapter executes a client-side load of a file. Regardless of the number of SAP IQ nodes in a multiplex or the number of connections, SAP IQ only allows one writer per table. The SAP IQ Output adapter requires a write lock to function and cannot obtain one while other processes or queries are writing to the table. If another process or query is writing to the table, the SAP IQ Output adapter waits until that process or query is done. For more information on the required permissions, refer to SAP IQ documentation.
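
    For example, on SAP IQ 16.0 or later, the database-side setup might resemble the following sketch. The user name esp_loader is hypothetical, and the exact options and privileges depend on your SAP IQ version and the -gl server option, so verify the syntax against the SAP IQ documentation for your release.

      -- Allow client-side loads into this database (SAP IQ 16.0 and later syntax).
      SET OPTION PUBLIC.allow_read_client_file = 'On';

      -- esp_loader is a hypothetical user that the adapter uses to connect.
      GRANT READ CLIENT FILE TO esp_loader;

      -- With -gl set to DBA, a system privilege such as LOAD ANY TABLE covers the load;
      -- with -gl set to ALL, ALTER or LOAD permission on the target table also suffices.
      GRANT LOAD ANY TABLE TO esp_loader;
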
Using the Adapter

The adapter writes data to SAP IQ load files in the native SAP IQ binary format, and loads these files in sequence into the database.  The adapter supports persisting Event Stream Processor insert, update, and delete records. To improve performance, the data warehousing mode allows you to configure the adapter to either ignore updates and deletes, or to treat updates as inserts and ignore deletes.

The adapter creates files in the primary or overflow file locations as rows are received from Event Stream Processor. Once a file is successfully loaded into SAP IQ, the data is visible within the database and the file is removed from the file system. If an error occurs while loading a file, an error is logged to the ESP project log, but the file remains on the file system. Once the problem preventing the load is resolved, you can manually attempt to reload the file using the SQL statement provided in the Event Stream Processor logs.
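
The statement to run is recorded in the project log; do not construct it by hand. Purely as an illustration of the general shape of such a client-side binary load (the table, column, and file names below are hypothetical), it resembles:

  -- Illustration only: always copy the actual statement from the project log.
  LOAD TABLE iq_owner.target_table ( col1, col2, col3 )
      USING CLIENT FILE '/espload/primary/loadfile_00001.bin'
      QUOTES OFF
      ESCAPES OFF
      FORMAT BINARY;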

You can track the progress of this adapter using SAP ESP Cockpit. The file activity report shows each of the files processed by this adapter and lists its current state. To view the file activity report, you must add a special table to the database into which the adapter is loading. See Enabling File Activity Monitoring for the SAP IQ Adapter. For additional information on the file activity report, see Viewing File Activity for the SAP IQ Output Adapter in the ESP Cockpit online help.

When the adapter receives a shutdown request because the project is stopping, the ESP Server is shutting down, or the adapter itself is stopped, it continues processing data until certain conditions are satisfied. Any data the adapter receives before the shutdown request is written into the currently writing file. This file remains on the file system until you run the adapter again, at which time it is loaded into SAP IQ. If a file is being loaded into SAP IQ when the adapter receives the shutdown request, that file continues to load until it has completed; but no further files are loaded. When the adapter restarts, it resumes processing any files created, but not loaded, by a previous instance. The same process also occurs if the ESP Server or adapter terminates unexpectedly while a file is being loaded.
Note: Because the adapter continues to process data after it receives the shutdown signal, it may appear to be slow to shut down.

The adapter also supports schema discovery and permutations. Permutations allow mapping between a compatible Event Stream Processor schema and a database schema when the two schemas are not identical. If a permutation does not provide a mapping for a database column, that database column must be nullable. If the column is nullable, the adapter inserts NULL for each row into this column.

The default character set of the SAP IQ database must be either ASCII (if no international characters are to be loaded) or UTF-8. 

If you use the CCL ATTACH ADAPTER statement to attach an adapter, you must supply the adapter type.
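
For example, a minimal CCL attachment might look like the following sketch. The adapter instance name, window name, database service name, table name, and directory path are hypothetical; only the required properties are shown, leaving the remaining properties at their defaults.

  // Minimal sketch: attach the SAP IQ Output adapter to a window.
  // IqLoader, TradesWindow, IQService, Trades, and the path are hypothetical values.
  ATTACH OUTPUT ADAPTER IqLoader TYPE sybase_iq_out TO TradesWindow
  PROPERTIES
      service = 'IQService',
      table = 'Trades',
      primaryFileLocation = '/espload/primary';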

Rows

The SAP IQ Output adapter reports the following record loading metrics to the project log file.

Row Type Definition
Posted Row

A row that the ESP engine passes to the adapter.

Skipped Row

A row that the adapter ignores because the opcode does not match the data warehouse mode. See the Data Warehouse Mode adapter parameter below for accepted values.

Processed Row

A row that has been posted and not skipped, and is sitting in an internal buffer waiting to be written to a file, or in a partially written file waiting to be loaded.

Bad Row

A row that is passed to the adapter but cannot be converted into the format required for writing to the file; for example, a numeric value whose range does not fit into the type of the database column.

A bad row can also be a row that was not loaded because it was contained in a file that failed to load. If a file fails to load, the number of bad rows is incremented by the number of rows in the file.

Adapter Parameters

The following table defines the required and optional properties to configure.

Property Label Description
Database Service Name

Property ID: service

Type: string

(Required for adapter operation and schema discovery) The name of the database service that represents the IQ database into which information will be loaded. You must define an ODBC service for the database service. See the SAP Event Stream Processor: Configuration and Administration Guide. No default value.

Target Database Table Name

Property ID: table

Type: string

(Required for adapter operation; optional only if you are using schema discovery) A string value representing the name of the table in SAP IQ into which you wish to load data. No default value.

Include Base Content

Property ID: outputBase

Type: boolean

(Optional) If set to true, the adapter outputs the initial stream contents in addition to stream updates.

If this option is enabled and the adapter is running in GD mode, once the adapter has done a GD commit on the entire base data, the ESP Server does not redeliver the base data on adapter restart and only sends deltas that are saved for delivery. The default value is false.

Primary File Location

Property ID: primaryFileLocation

Type: directory

(Required for adapter operation; optional only if you are using schema discovery) Specify the directory in which SAP IQ load files are stored temporarily until they are loaded into the system. If this directory does not exist, create it before running the adapter. Because there is sensitive information being loaded into the SAP IQ database and anyone with access to the load files can change the data being loaded, you must secure this directory as you do your database. SAP recommends that only users with access to the database have access to this directory. No default value.

Overflow File Location

Property ID: overflowFileLocation

Type: directory

(Optional) Specify the directory for overflow files. If this directory does not exist, create it before running the adapter. If the primary file location does not contain enough space to write out load files, the files are created in the overflow location.

Because there is sensitive information being loaded into the SAP IQ database and anyone with access to these files can change the data being loaded, you must secure this directory as you do your database. SAP recommends that only users with access to the database have access to this directory. No default value.

Field Mapping

Property ID: permutation

Type: permutation

(Advanced) The permutation list maps the Event Stream Processor column names to the database schema column names in the specified table. If you do not specify a permutation, ensure that the Event Stream Processor stream or window columns exactly match the database schema of the destination table. For example, both must have the same order, same number of columns, and compatible datatypes.

If the Data Warehouse Mode property is OFF, and you want to specify a permutation, include a mapping for at least one column of the primary key of the ESP window attached to the adapter. Without mapping at least one primary key column, the adapter fails to start.

The format for this property is: <streaming_columnname>=<database_columnname>:<streaming_columnname>=<database_columnname>

Use a colon to separate mappings. No default value.
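
For example, a hypothetical permutation that maps two ESP columns named TradeId and TradePrice to database columns named trade_id and price would be written as:

  TradeId=trade_id:TradePrice=price

Any database column not listed in the mapping must be nullable, as described above.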

Only Base Content

Property ID: onlyBase

Type: boolean

(Advanced) If true, the adapter processes only the base data of the window or stream to which it is attached. No further message flow is processed. The default value is false.

Recover Existing Files Only

Property ID: recoverOnly

Type: boolean

(Advanced) If true, the adapter processes any files that have been created, but not loaded, by a previous instance of the adapter. After these files are processed, the adapter does not process further message flow. The default value is false.

Data Warehouse Mode

Property ID: dataWarehouseMode

Type: choice

(Advanced) Specifies the type of data warehousing mode the adapter uses. Valid values are:
    • ON: updates are converted to inserts, and deletes are ignored.
    • INSERTONLY: only inserts are processed; updates and deletes are ignored.
    • OFF (default): all inserts, updates, and deletes are processed as such.
    • ARCHIVE: updates and deletes become inserts. The opcode must be stored as a column in the database table (see the opcodeColumnName property). This column must be of character type with a length of one. Use this mode if you wish to store incoming data as a series of events; see the sketch after this list.
    If you want to specify a field mapping (permutation), map at least one column of the primary key of the ESP window attached to the adapter. Without mapping at least one primary key column, the adapter fails to start.
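
    For example, a hedged CCL sketch of an ARCHIVE configuration might look like the following. All names are hypothetical, and op_code stands for a one-character column in the target table.

      // Sketch: archive every event as an insert and record its opcode.
      // IqArchive, TradesWindow, IQService, TradeEvents, and op_code are hypothetical.
      ATTACH OUTPUT ADAPTER IqArchive TYPE sybase_iq_out TO TradesWindow
      PROPERTIES
          service = 'IQService',
          table = 'TradeEvents',
          primaryFileLocation = '/espload/primary',
          dataWarehouseMode = 'ARCHIVE',
          opcodeColumnName = 'op_code';
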
Timestamp Column Name

Property ID: timestampColumnName

Type: string

(Advanced) If a column name is provided, the time at which the record is written to the load file is stored in that column of the database record. If this property is empty, no timestamp is stored. 

The timestamp is always provided in UTC. No default value.

Operation Code Column Name

Property ID: opcodeColumnName

Type: string

(Required if adapter is running in the ARCHIVE data warehouse mode; advanced for other data warehouse modes) The name of the database column that the adapter uses to store the operation code. If specified for other modes, the opcode is written into the specified column.

Target Temporary File Size

Property ID: targetFileSize

Type: uint (in MB)

(Advanced) Specifies the maximum size for the SAP IQ load file before it is loaded into SAP IQ. SAP IQ load performance is better for larger files, but latency increases with file size. Default value is 2000.

Not every SAP IQ load file reaches this size. If the adapter is not in data warehousing mode, any updates or deletes received reduce the file size. The Idle Buffer Write Delay and Buffer Age Limit parameters may also cause smaller files to be generated.

Idle Buffer Write Delay

Property ID: idleBufferWriteDelay

Type: uint (in seconds)

(Advanced) If the adapter has not received a row during the number of seconds specified, the previously created file is loaded. This provides latency control in situations where message flow may be sporadic. Default value is 10.

Buffer Age Limit

Property ID: bufferAgeLimit

Type: uint (in seconds)

(Advanced) The maximum amount of time (in seconds) between the first and last record in an I/O buffer. This ensures that consistent but low volume message flow does not create very high latency. Default value is 600.

Disable Referential Integrity

Property ID: disableReferentialIntegrity

Type: boolean

(Advanced) Disables the referential integrity constraints on the table before the loading operation is performed.  This can significantly improve performance of the load, but causes problems if the source does not maintain data integrity. Default value is false.

I/O Buffer Size

Property ID: ioBufferSizeMB

Type: uint (in MB)

(Advanced) Determines how much data is buffered in memory before it is written to disk. Increasing this value may increase write performance, but may also increase latency.

To determine the amount of memory the adapter requires for buffering, multiply this value by the number of I/O buffers. The target file size should be a multiple of this number. Default value is 20.

Number of I/O Buffers

Property ID: numIOBuffers

Type: uint

(Advanced) The number of data buffers to maintain. This number should be sufficiently high so that a buffer is always available to write into. The slower the file system, the more I/O buffers required. Default value is 5.

PropertySet

Property ID: propertyset

Type: string

(Advanced) Specifies the name of the property set. Property sets are reusable sets of properties that are stored in the project configuration file. Using these sets allows you to move adapter configuration properties out of the CCL file and into the CCR file. If you specify the same properties in the project configuration file and the ATTACH ADAPTER statement, the values in the property set override the values defined in the ATTACH ADAPTER statement. No default value.

Timezone For Statistics

Property ID: timezoneForStats

Type: string

(Advanced) Controls the time zone in which the adapter's midnight statistics roll over. For example, if this property is left at its default value of GMT, midnight statistics reset at midnight GMT.

This setting does not affect any other dates processed or returned by this adapter. The default value is GMT.

Delay between Reconnection Attempts

Property ID: reconnectAttemptDelayMSec

Type: uint (in milliseconds)

(Advanced) Controls the time to delay before a new reconnection attempt. A higher value increases the time between reconnection attempts, which may impact performance. Default value is 1000.

Maximum Number of Reconnection Attempts

Property ID: maxReconnectAttempts

Type: uint

(Advanced) The maximum number of times the adapter attempts to reconnect to the database. To retry indefinitely, set this value to -1. Default value is 1.

Enable Join Delete

Property ID: joinDelete

Type: boolean

(Advanced) If set to true, the adapter creates a temporary table in SAP IQ, populates this table with the key values of the rows to delete, and then issues a DELETE statement that uses a join between the main table and the temporary table to delete rows (a conceptual sketch follows the restrictions below). The default value is false.

Some restrictions on the use of this property:
  • Ensure there is sufficient room in the tempdb database to carry the delete data. As a general guideline, allow the value of targetFileSize multiplied by the number of adapters.
  • Ensure that, aside from permissions needed to load client files, the database user who is using the adapter has the RESOURCE permission, or on SAP IQ 16+, the CREATE TABLE permission.
  • The name of the temporary table follows the #[adaptername][tablename] naming convention. Ensure your adapters and tables are named in a way that avoids duplicate temporary table names.
  • This property affects performance. Large batches of deletes are faster when this property is set to true, but deletes are batched separately from updates, which hurts performance if you alternate between sending updates and deletes. If your data follows this pattern, set this property to false.
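
Conceptually, the join-delete approach resembles the following sketch. This is an illustration of the general shape only, not the SQL the adapter actually generates; table and column names are hypothetical, and the adapter derives its own temporary table name as described above.

  -- Conceptual illustration only; not the adapter's exact statements.
  CREATE TABLE #iqloader_trades ( trade_id INT );      -- temporary key table
  -- ...the adapter bulk-loads the key values of the rows to delete here...
  DELETE FROM Trades
    FROM Trades JOIN #iqloader_trades
      ON Trades.trade_id = #iqloader_trades.trade_id;  -- delete via join
  DROP TABLE #iqloader_trades;
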
Runs Adapter in GD Mode

Property ID: enableGDMode

Type: boolean

(Advanced) Specifies whether the adapter runs in guaranteed delivery (GD) mode. GD ensures that data continues to be processed in the case that the ESP Server fails, or the destination (third-party server) fails or does not respond for a long time. See Guaranteed Delivery in the SAP Event Stream Processor: Developer Guide for details on enabling GD for your project.

The default value is false.

Enable the Adapter Internal Cache Mode

Property ID: enableGDCache

Type: boolean

(Advanced) If set to true, only rows that can be recovered (that is, checkpointed) by the ESP Server on restart are sent to the database. Other rows are cached internally by the adapter.

When this option is enabled, you may see a significant increase in latency depending on how frequently the attached stream delivers checkpoint messages. Streams support three modes related to GD: GD not supported, GD supported, and GD supported with checkpoint messages. This setting is ignored if the attached stream does not support GD or does not deliver checkpoint messages. The default value is true.

GD Batch Size

Property ID: gdBatchSize

Type: int

(Advanced) Specify after how many message blocks (transactions or envelopes) the adapter issues a commit command to the database and a GD commit to the stream to which the adapter is attached. The default value is 10. Increasing this value improves performance at the expense of latency (specifically, how soon rows are available in the database to be consumed by other applications).

Increasing this value also increases memory consumption in the ESP Server because the uncommitted rows need to be preserved for redelivery in case of failure.

Custom Statistics

The SAP IQ Output adapter maintains custom statistics to show its status and to track its loading activities. Enable the time-granularity option in the project configuration (ccr) file to get these statistics reported by the _ESP_Adapter_Statistics metadata stream. This option helps determine how often the ESP Server gathers statistics.

Rows
  • Received in the last hour
  • Received since midnight
  • Loaded in the last hour
  • Loaded since midnight
  • Skipped* in the last hour
  • Skipped* since midnight

Files
  • Number loaded in the last minute
  • Number loaded in the last hour
  • Number rejected in the last minute
  • Number rejected in the last hour
  • Number pending load
  • Last file loaded

Bytes
  • Received since midnight
  • Loaded since midnight
  • Skipped since midnight

Average Rows
  • Received over the last minute
  • Received over the last hour
  • Loaded over the last minute
  • Loaded, per second, over the last minute
  • Loaded over the last hour
  • Loaded, per second, over the last hour

* A row is skipped when the opcode does not match the adapter's current data warehouse mode. For example, if the data warehouse mode is INSERTONLY and a delete or an update is passed to the adapter, the row is skipped.

The "since midnight" statistics reset after midnight in the time zone specified by the timezoneForStats adapter property. If no value is set, the default behavior is reset at midnight GMT.