
Data Archiving Process Flow

Purpose

You can use data archiving for data in InfoCubes and DataStore objects.

Process Flow

For general information about the process, see Schematic Data Archiving Process. For the specific process flow in a BI system, refer to the following process description.


       1.      To archive data from an InfoProvider, you first need to create an archiving object for the InfoProvider.

See also: Archiving Object

       2.      Archiving is carried out in Archive Administration.

You can call up the following functions from here:

·         Maintain archiving variants

·         Schedule the archiving run; you can start a test run before starting the actual archiving run

·         Schedule a delete run

·         Store archive files in, and retrieve them from, a storage system

Also refer to the detailed documentation in Archive Administration.

Special technical features in the BI system:

Graphic: the BI data archiving architecture, explained in the text that follows.

Description of special technical features in the BI system:

Variant maintenance:

To schedule the execution of the write program, you first need to create a variant. The structure of the selection screen depends on the archiving method you chose when defining the archiving object. When archiving time slots (time periods with a start and an end point), you can only specify selection conditions for the selected time characteristic; this keeps variant maintenance to a minimum. See also: Time Restrictions
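Archive Administration normally schedules the write run for you, so the following is only an illustration. It is a minimal ABAP sketch of how a variant-driven write run could be scheduled as a background job; the program name Z_BIW_WRITE_PROG, the job name and the variant ZTIME_2003 are placeholders, not objects delivered with the system.

REPORT z_schedule_write_run.

DATA: lv_jobname  TYPE btcjob VALUE 'BIW_ARCHIVE_WRITE',
      lv_jobcount TYPE btcjobcnt.

* Create a background job for the write run.
CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname  = lv_jobname
  IMPORTING
    jobcount = lv_jobcount.

* Z_BIW_WRITE_PROG stands for the generated write program and ZTIME_2003
* for the archiving variant maintained above (both placeholders).
SUBMIT z_biw_write_prog
  USING SELECTION-SET 'ZTIME_2003'
  VIA JOB lv_jobname NUMBER lv_jobcount
  AND RETURN.

* Release the job for immediate execution.
CALL FUNCTION 'JOB_CLOSE'
  EXPORTING
    jobname   = lv_jobname
    jobcount  = lv_jobcount
    strtimmed = 'X'.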

Writing to the archive:

The archiving processes read and delete data through the Data Manager interface. When data is read from an InfoCube, it is transformed from the star-shaped table structure into a flat format that contains only the actual characteristic values. The archive is therefore not affected by any reorganization of the IDs in the star schema. The disadvantage is an increase in data volume, which is at least partially offset by the compression of the archive file.
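To make this transformation concrete, the following sketch contrasts a hypothetical fact table row (surrogate IDs only) with the corresponding flat archive record (resolved characteristic values). Both structures and all field names are invented for illustration; they do not correspond to any particular InfoCube.

* Hypothetical fact table row in the star schema: surrogate dimension IDs plus key figures.
TYPES: BEGIN OF ty_fact_row,
         key_dim_cust TYPE int4,                    " DIM ID pointing to the customer dimension
         key_dim_time TYPE int4,                    " DIM ID pointing to the time dimension
         amount       TYPE p LENGTH 15 DECIMALS 2,  " key figure
       END OF ty_fact_row.

* Hypothetical flat archive record: characteristic values instead of IDs, so a later
* reorganization of the IDs in the star schema cannot invalidate the archived data.
TYPES: BEGIN OF ty_archive_row,
         customer TYPE c LENGTH 10,                 " characteristic value
         calday   TYPE d,                           " time characteristic value
         amount   TYPE p LENGTH 15 DECIMALS 2,      " key figure
       END OF ty_archive_row.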

For each archiving object, an executable ABAP program is generated to write to the archive. The program generates and processes archiving requests.

First, it creates an archiving request, in which the selection criteria and information about the archiving run are stored and for which a status is managed. The system uses the InfoProvider interface to read the InfoCube and DataStore object data according to the grouping characteristics that were set. For InfoCubes, the data is written to the archive files in the I structure (no SIDs, no navigation attributes, table-compatible field names); for DataStore objects, it is written in the structure of the A table (table of active data).
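The generated write program itself is BW-specific, but at its core it follows the standard ADK write pattern. The following minimal sketch shows that pattern only; the archiving object ZBWCUBE1 and the flat record structure ZBW_FLAT_RECORD are placeholders, and the reading of the data via the InfoProvider interface is omitted.

DATA: lv_handle TYPE sy-tabix,                        " archive handle returned by the ADK
      ls_record TYPE zbw_flat_record,                 " placeholder: record in the flat (I) structure
      lt_data   TYPE STANDARD TABLE OF zbw_flat_record.

* Open a new archive file for the archiving object of the InfoProvider.
CALL FUNCTION 'ARCHIVE_OPEN_FOR_WRITE'
  EXPORTING
    object         = 'ZBWCUBE1'
  IMPORTING
    archive_handle = lv_handle.

* lt_data is assumed to have been filled via the InfoProvider interface.
LOOP AT lt_data INTO ls_record.
  " Each data object groups records that belong together in the archive.
  CALL FUNCTION 'ARCHIVE_NEW_OBJECT'
    EXPORTING
      archive_handle = lv_handle.

  CALL FUNCTION 'ARCHIVE_PUT_RECORD'
    EXPORTING
      archive_handle   = lv_handle
      record           = ls_record
      record_structure = 'ZBW_FLAT_RECORD'.

  CALL FUNCTION 'ARCHIVE_SAVE_OBJECT'
    EXPORTING
      archive_handle = lv_handle.
ENDLOOP.

* Close the archive file; verification, deletion, or storage then follows as configured.
CALL FUNCTION 'ARCHIVE_CLOSE_FILE'
  EXPORTING
    archive_handle = lv_handle.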

Deleting archived data:

A separate background process is started for each archive file in order to delete the archived data. Depending on the archiving object settings, this process is started automatically during the write phase, started manually in Archive Administration, or triggered by a particular event. Each productive delete process consists of three steps:

1.       In the first step, the archive file is verified in test mode: the system checks whether the archive file is complete and whether it can be accessed (a read-back sketch follows at the end of this section). Successful verification of the file is recorded persistently in the corresponding archiving request. The background process ends here if the write phase is still running, or if the system has not yet verified all archive files of the associated archiving run.

2.       The second step begins once the write phase of the archiving run has ended successfully and all generated archive files have been verified. In this step, the data is deleted from the database using the selection criteria of the archiving run; the system determines an optimal deletion strategy for these selection criteria (see also Selective Deletion from an InfoCube and from a DataStore Object). For InfoCubes, the aggregates are also adjusted in this step.

3.       In the third step, once the data has been successfully deleted, all archive files of the archiving run are confirmed. As a result, the ADK regards the data as successfully deleted.

Graphic: schematic display of the deletion process, explained in the text above.

For more information on the strategies that the system chooses for deletion, read the Background Information.
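The verification in the first step is essentially a read-back of the archive file. The following is a minimal sketch of such a read-back using the standard ADK read calls; the archiving object ZBWCUBE1 and the record structure ZBW_FLAT_RECORD are again placeholders.

DATA: lv_handle TYPE sy-tabix,
      ls_record TYPE zbw_flat_record.   " placeholder flat record structure

* Open the files of the archiving object for reading; the ADK handles file selection.
CALL FUNCTION 'ARCHIVE_OPEN_FOR_READ'
  EXPORTING
    object         = 'ZBWCUBE1'
  IMPORTING
    archive_handle = lv_handle.

DO.
  " Position on the next data object; leave the loop at the end of the file.
  CALL FUNCTION 'ARCHIVE_GET_NEXT_OBJECT'
    EXPORTING
      archive_handle = lv_handle
    EXCEPTIONS
      end_of_file    = 1.
  IF sy-subrc <> 0.
    EXIT.
  ENDIF.

  DO.
    " Read every record of the data object to prove that the file is complete and accessible.
    CALL FUNCTION 'ARCHIVE_GET_NEXT_RECORD'
      EXPORTING
        archive_handle = lv_handle
      IMPORTING
        record         = ls_record
      EXCEPTIONS
        end_of_object  = 1.
    IF sy-subrc <> 0.
      EXIT.
    ENDIF.
  ENDDO.
ENDDO.

CALL FUNCTION 'ARCHIVE_CLOSE_FILE'
  EXPORTING
    archive_handle = lv_handle.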

       3.      You can use the Extractor Checker (transaction RSA3) to check an archive file.

Also refer to Check Archive File.

       4.      You can extract data from BI archives in order to reload it into the InfoProvider.

We do not recommend reloading data directly into the original object. As an alternative, you can use the archive connection of the Export DataSource of an InfoCube or DataStore object: archived data is extracted from the archive of the original object and loaded, using the data mart interface, into an InfoProvider with the same (or a similar) structure. Using a MultiProvider, reporting can then combine data that has not yet been archived with data that has been extracted from the archive.

Also refer to Reload Data from Archive File.

You can find detailed step-by-step instructions under Archiving Data.

For more information about possible errors during the archiving run and how to correct them, read the Background Information.

 
