Handling Duplicate Data Records
DataSources for texts or attributes can transfer multiple data records with the same key into BI in one request; whether a DataSource does so is a property of that DataSource. There are also cases in which you deliberately want to transfer data records with the same key (referred to below as duplicate data records) within one request, so this is not necessarily an error. To accommodate this, BI provides functions for handling duplicate data records.
In a dataflow that is modeled using 3.x objects, you can only work with duplicate data records for time-independent attributes and texts.
If the DataSource can transfer potentially duplicate data records, this information is passed to the scheduler when you create a new InfoPackage: on the Processing tab page, the DataSource Transfers Duplicate Data Records indicator is set, and the Ignore Duplicate Data Records indicator is set as well. When multiple data records with the same key are transferred, only the last data record in the request is updated to BI; all other data records in the request with the same key are ignored.
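The "last record wins" behavior described above can be sketched as follows. This is an illustrative model, not SAP code; the record layout and field names (MATERIAL, LANGU, TEXT) are invented for the example.

```python
def deduplicate_request(records, key_fields):
    """Resolve duplicates within one request: for each key, keep the
    value of the LAST record in the request and ignore the others."""
    latest = {}
    for record in records:
        key = tuple(record[f] for f in key_fields)
        latest[key] = record  # a later record overwrites an earlier one
    return list(latest.values())

# A request containing a duplicate key (MATERIAL + LANGU):
request = [
    {"MATERIAL": "M100", "LANGU": "EN", "TEXT": "Old description"},
    {"MATERIAL": "M200", "LANGU": "EN", "TEXT": "Other material"},
    {"MATERIAL": "M100", "LANGU": "EN", "TEXT": "New description"},
]

result = deduplicate_request(request, ["MATERIAL", "LANGU"])
# Only the last M100 record survives; M200 is untouched.
```

Note that only the record values are resolved this way; the full request, duplicates included, is still written to the PSA first (see below on serial updates).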

To maintain consistency, duplicate data records can only be ignored if the data is updated serially: the data is first written to the PSA and, only after it has been successfully written there, updated into the master data or text tables of the InfoObject.
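The two-step ordering of a serial update can be sketched like this. Again, this is not SAP code; the tables and field names are invented, and the PSA and master data table are modeled as plain Python containers.

```python
def serial_update(request_records, psa_table, master_data_table):
    """Serial update: step 1 persists the raw request (duplicates and
    all) in the PSA; step 2 runs only after step 1 succeeded and posts
    the records to the master data table, where the last record per
    key overwrites any earlier one."""
    psa_table.extend(request_records)      # step 1: write to the PSA
    psa_write_ok = True                    # assume the PSA write succeeded
    if psa_write_ok:                       # step 2: update master data
        for record in request_records:
            master_data_table[record["KEY"]] = record["VALUE"]
    return master_data_table

psa = []
result = serial_update(
    [{"KEY": "M100", "VALUE": "Text A"}, {"KEY": "M100", "VALUE": "Text B"}],
    psa,
    {},
)
# The PSA keeps both records; the master data table holds only the last.
```

The point of the ordering is that the PSA retains the complete request, so ignoring duplicates during the second step never loses the original data.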
If a DataSource transfers potentially duplicate data records or if you manually set the Ignore Duplicate Data Records indicator, the PSA Only update type is automatically selected in the scheduler.
If you deselect the Ignore Duplicate Data Records indicator, any duplicate data records cause errors. The error message is displayed in the extraction monitor.

Since the characteristic key is used to identify duplicate data records, make sure that all key fields of the characteristic are filled by the DataSource. If this is not the case, for example because key fields are hidden in the extract structure, the system notifies you when you activate the transfer rules.
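The check the system performs at activation time amounts to comparing the characteristic's key fields against the fields the extract structure actually supplies. A minimal sketch, with hypothetical field names:

```python
def find_missing_key_fields(characteristic_key, extract_structure_fields):
    """Return the key fields of the characteristic that the DataSource
    does not fill, e.g. because they are hidden in the extract
    structure. An empty result means duplicates can be identified."""
    visible = set(extract_structure_fields)
    return [field for field in characteristic_key if field not in visible]

missing = find_missing_key_fields(
    characteristic_key=["MATERIAL", "LANGU"],
    extract_structure_fields=["MATERIAL", "TEXT"],  # LANGU is hidden
)
# A non-empty result would trigger the notification at activation.
```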
You can specify how duplicate data records within a request are handled, regardless of whether the DataSource itself is flagged as delivering potentially duplicate data records. This is useful if the DataSource does not carry this setting but you know from other sources that duplicate data records are transferred, for example when flat files are loaded.