
Passing and Storing MAI Data in the Business Warehouse

 

Note

This section is part of the technical documentation of the E2E Monitoring and Alerting Infrastructure reporting scenario (Interactive Reporting). It is the continuation of the section Getting the MAI Data for Interactive Reporting.

End of the note.

When DPC has obtained monitoring data, it forwards the data not only to technical monitoring, but also to the Business Warehouse for long-term storage.

Note

In the technical monitoring configuration in the transaction SOLMAN_SETUP, in the step Template Management, on the Data Usage tab, you can specify for individual metrics whether the values obtained by DPC are to be forwarded to MAI (Send Values to Event Calculation Engine flag) or to BW (Send Values to SAP NetWeaver Business Warehouse flag).

End of the note.

A data loader passes the data to BW. This function module is called in the Business Warehouse system by the main extractor after the extractors have run. The data loader writes the values obtained by the extractors to BW.
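
To illustrate this sequence, the following Python sketch models the data loader flow under simplifying assumptions: records delivered by the extractors are mapped and then written to a staging object. All names and structures in the sketch are invented for illustration; they are not an actual Solution Manager or BW interface.

    from dataclasses import dataclass

    @dataclass
    class MetricRecord:
        metric_name: str       # technical name delivered by the extractor
        managed_object: str    # system or host the value belongs to
        timestamp: int         # time of the measurement
        value: float

    def map_record(record: MetricRecord) -> MetricRecord:
        # Placeholder for the mapping step described under "Mapping" below.
        return record

    def write_to_dso(records: list) -> None:
        # Placeholder for writing the mapped records to a DataStore object.
        for record in records:
            print(f"DSO <- {record.metric_name} @ {record.timestamp}: {record.value}")

    def load_monitoring_data(extracted: list) -> None:
        # The data loader maps each record and writes the result to BW.
        write_to_dso([map_record(r) for r in extracted])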

Note

The data loader SMD_DATA_LOADER_S passes data from MAI.

End of the note.
Mapping

Before the data is saved in BW, the data structures are mapped. The mapping is performed for the following reasons (a sketch of the name mapping and gap handling follows the list):

  • A metric with the same semantic meaning should have the same name in BW, independently of the technical collection infrastructure. There are metrics in MAI which were already collected in CCMS, but have different technical names in the two infrastructures. To ensure data continuity, the MAI names of these metrics are mapped onto the CCMS names when they are saved in BW.

  • If there is no data for a metric for a long time, the mapping mechanism creates gap data records for this period, for display in reporting.

  • Host metrics are copied to the data records of the systems that run on the host in question.

  • The availability is calculated, taking downtimes into account.

  • Data rows, the BW objects to which the data is written, are specified in BW.
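
The following Python sketch illustrates the first two points, the name mapping and the gap data records, under simplifying assumptions; the mapping entries, field names, and record layout are invented for illustration and do not reflect the content of the mapping tables.

    # Hypothetical mapping of MAI metric names onto CCMS names.
    MAI_TO_CCMS = {
        "ExampleMaiMetricName": "ExampleCcmsMetricName",   # invented entry
    }

    def bw_metric_name(mai_name: str) -> str:
        # Use the CCMS name in BW if the metric was already collected in CCMS,
        # so that the data remains continuous across both infrastructures.
        return MAI_TO_CCMS.get(mai_name, mai_name)

    def gap_records(last_seen: int, now: int, resolution: int) -> list:
        # Create one gap data record per missing interval, so that the gap
        # becomes visible in reporting.
        return [
            {"timestamp": t, "value": None, "gap": True}
            for t in range(last_seen + resolution, now, resolution)
        ]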

This data is written to mapping tables. Up to SP11 of SAP Solution Manager 7.1, this is the table E2EREP_MAPPING.

This mapping mechanism has been optimized for SP12 of SAP Solution Manager 7.1, to improve performance and reduce database load. The original table E2EREP_MAPPING is now only used for metric master data. There are two additional tables for transaction data:

  • E2EREP_MAPP_CURR

    This table contains the current period of uninterrupted data provision for each metric.

  • E2EREP_MAPP_HIST

    This table contains the history of data provision periods for each metric. If the data provision for a metric is interrupted, the entry is copied from the table E2EREP_MAPP_CURR into this table, and an interruption counter in the table E2EREP_MAPP_CURR is incremented (see the sketch below).
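
The following Python sketch shows, under simplifying assumptions, how such an interruption could be handled; the dictionaries are only in-memory stand-ins for E2EREP_MAPP_CURR and E2EREP_MAPP_HIST, and the field names are invented.

    # In-memory stand-ins for the two transaction-data tables; the field
    # names are invented and do not reflect the real table definitions.
    curr_table = {}    # stand-in for E2EREP_MAPP_CURR: one entry per metric
    hist_table = []    # stand-in for E2EREP_MAPP_HIST: closed periods

    def record_value(metric: str, timestamp: int) -> None:
        # A value arrives: open a period of uninterrupted data provision
        # if none is open yet.
        entry = curr_table.setdefault(
            metric, {"metric": metric, "start": None, "interruptions": 0}
        )
        if entry["start"] is None:
            entry["start"] = timestamp

    def record_interruption(metric: str, timestamp: int) -> None:
        # The data provision is interrupted: copy the current entry into the
        # history table and increment the interruption counter.
        entry = curr_table.get(metric)
        if entry is None or entry["start"] is None:
            return
        hist_table.append({**entry, "end": timestamp})
        entry["interruptions"] += 1
        entry["start"] = None    # the next value opens a new period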

Note

The mapping tables are reorganized by the weekly job E2EREP_MAPPING_CLEANUP, which deletes obsolete entries.

End of the note.
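
A minimal sketch of such a reorganization, assuming the history entries are represented as in the previous sketch and using an invented cutoff timestamp; the actual job works on the database tables.

    def cleanup_mapping_history(history: list, cutoff: int) -> list:
        # Keep only closed periods that ended at or after the cutoff timestamp;
        # older entries are considered obsolete and are dropped.
        return [entry for entry in history if entry["end"] >= cutoff]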

For more information, see Using the Mapping Table Analysis Program.

Monitoring Data Flow in Business Warehouse

The data loader writes the data to various DataStore objects (DSOs) in BW. The data is then usually written from the DSOs into InfoCubes by jobs. Twin Cubes are used for this.

Twin Cubes are two InfoCubes with the same structure. The data is initially written to the first InfoCube, until the end of the retention period for the resolution specified in the configuration. The data is then written to the second cube, until the end of its retention period. The data in the first cube is then deleted, and the cycle is repeated. A MultiProvider connects the two twin cubes.
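
The following Python sketch illustrates the rotation principle under simplifying assumptions: two stores with the same structure, one of which is actively written to, and a switch at the end of the retention period that deletes all data in the newly active store. The class and its methods are invented for illustration; they are not a BW API.

    class TwinCubes:
        # Illustrative model of two InfoCubes with the same structure.
        def __init__(self, retention_seconds: int, start: int) -> None:
            self.cubes = ([], [])            # the two "cubes"
            self.active = 0                  # index of the cube being written to
            self.retention = retention_seconds
            self.period_start = start

        def write(self, record: dict, now: int) -> None:
            if now - self.period_start >= self.retention:
                # End of the retention period: switch to the other cube and
                # delete all of its data instead of selectively deleting
                # obsolete records.
                self.active = 1 - self.active
                self.cubes[self.active].clear()
                self.period_start = now
            self.cubes[self.active].append(record)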

Note

You specify the lifetime of the monitoring data in BW, depending on the time resolution, in the configuration of Technical Monitoring, in the transaction SOLMAN_SETUP. This is in the step Configure Infrastructure > Periodic Tasks, in the group box BI Data Lifetime for Each Granularity.

One advantage of this concept is that it avoids the resource-intensive selective deletion of obsolete data records. Deleting all data in an InfoCube when the active cube changes is much quicker than selectively deleting obsolete data records.

End of the note.

Queries and web templates use a MultiProvider over the twin cubes, because they do not know which of the twin cubes contains the desired data at a particular time.
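
A minimal sketch of this access pattern, assuming the two twin cubes are represented as lists of records as in the previous sketch: the query does not address either cube directly, but reads the union of both.

    def read_via_multiprovider(cubes, predicate):
        # Queries do not address either cube directly; they read the union
        # of both cubes, which is what the MultiProvider provides.
        return [record for cube in cubes for record in cube if predicate(record)]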

For a detailed description of the data flow, see Monitoring Data Flow.