Handling Duplicate Data Records

Use

DataSources for texts or attributes can transfer several data records with the same key to BW within one request; whether a DataSource does so is a property of the DataSource. There are cases in which you want to transfer multiple data records with the same key (referred to below as duplicate data records) within one request, and this should not be evaluated as an error. BW provides functions for resolving this ambiguity when handling duplicate data records.

Features

In a dataflow that is modeled using 3.x objects, you can only work with duplicate data records for time-independent attributes and texts.

If the DataSource transfers potentially duplicate data records, this information is passed to the scheduler when you create a new InfoPackage, and on the Processing tab page the DataSource Transfers Duplicate Data Records indicator is set. The Ignore Duplicate Data Records indicator is also set. When duplicate data records are transferred, only the last data record in the request for a given key is updated to BW; all other data records in the request with the same key are ignored.
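The keep-last behavior described above can be sketched as follows. This is a minimal Python illustration, not SAP code; the record layout and the field names MATERIAL and COLOR are hypothetical:

```python
def keep_last_per_key(request_records, key_fields):
    """Mimic 'Ignore Duplicate Data Records': for each key, only the
    last data record in the request survives; earlier duplicates in the
    same request are ignored."""
    latest = {}
    for record in request_records:  # records arrive in request order
        key = tuple(record[field] for field in key_fields)
        latest[key] = record  # a later record overwrites an earlier one
    return list(latest.values())

# Hypothetical attribute records for a characteristic keyed by MATERIAL
request = [
    {"MATERIAL": "M1", "COLOR": "red"},
    {"MATERIAL": "M2", "COLOR": "blue"},
    {"MATERIAL": "M1", "COLOR": "green"},  # duplicate key: this one wins
]
print(keep_last_per_key(request, ["MATERIAL"]))
```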

Note

To maintain consistency, ignoring duplicate data records is only possible if the data is updated serially. A serial update is when data is first updated into the PSA and then, after it has been successfully written to the PSA, it is updated into the master data or text tables of the InfoObject.

If a DataSource transfers potentially duplicate data records, or if you manually set the Ignore Duplicate Data Records indicator, the PSA Only update type is automatically selected in the scheduler.

If you deselect the Ignore Duplicate Data Records indicator, any duplicate data records cause errors. The error message is displayed in the extraction monitor.
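With the indicator deselected, the system instead treats any repeated key within a request as an error. A sketch of that behavior, again as hypothetical Python rather than the actual extraction-monitor check:

```python
def assert_unique_keys(request_records, key_fields):
    """Mimic the behavior with 'Ignore Duplicate Data Records' deselected:
    any repeated key within the request is reported as an error."""
    seen = set()
    for record in request_records:
        key = tuple(record[field] for field in key_fields)
        if key in seen:
            raise ValueError(f"Duplicate data record for key {key}")
        seen.add(key)

# A request with a repeated MATERIAL key raises an error
try:
    assert_unique_keys(
        [{"MATERIAL": "M1"}, {"MATERIAL": "M1"}], ["MATERIAL"]
    )
except ValueError as err:
    print(err)
```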

Note

Since the characteristic key is used to identify duplicate data records, make sure that all the key fields of a characteristic are filled by the DataSource. If this is not the case, for example because key fields from the extract structure are hidden, the system notifies you of this when you activate the transfer rules.
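To illustrate why complete keys matter, a check of this kind can be sketched as follows. This is a hypothetical Python illustration with invented field names, not the actual transfer-rule activation check; if a key field such as PLANT is empty, otherwise distinct records could collapse onto the same partial key and be wrongly treated as duplicates:

```python
def find_incomplete_keys(records, key_fields):
    """Return records whose characteristic key is not fully filled.
    Records with missing key fields cannot be reliably checked for
    duplicates, since distinct records may share the same partial key."""
    incomplete = []
    for record in records:
        if any(record.get(field) in (None, "") for field in key_fields):
            incomplete.append(record)
    return incomplete

# Hypothetical records keyed by MATERIAL and PLANT
records = [
    {"MATERIAL": "M1", "PLANT": "P1", "TEXT": "Pump"},
    {"MATERIAL": "M1", "PLANT": "", "TEXT": "Pump, plant missing"},
]
print(find_incomplete_keys(records, ["MATERIAL", "PLANT"]))
```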

Note

You can specify how duplicate data records within a request are handled, independently of whether the DataSource is flagged as potentially delivering duplicate data records. This is useful if the setting is not made in the DataSource, but the system knows from other sources that duplicate data records are being transferred (when loading flat files, for example).