Performance Tips: InfoCubes


To optimize performance when loading data into and deleting data from InfoCubes, see the following recommendations:

  • Indexes:

    As the number of data records in the InfoCube increases, performance can be impaired for both loading and queries. This is due to the increasing system load for index maintenance. To alleviate this situation, you can check, delete, repair, or rebuild the secondary indexes of the F and E fact tables and the aggregate table indexes on the InfoCube administration screen.

    If the F fact table of your InfoCube is small and has not yet been compressed, you should delete the secondary indexes before loading and rebuild them directly after loading. With a small F fact table, this is faster than maintaining the indexes during the load. By small, we mean that the number of records to be loaded is larger than the number of records already in the F fact table. For standard InfoCubes, parallel loading is only possible if the indexes have been deleted.

    More information: InfoCube Performance
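The rule of thumb above can be sketched as a small check. This is an illustration only, not a BW API; the function name and record counts are hypothetical:

```python
# Illustrative sketch: drop the secondary indexes of the F fact table
# before loading whenever more records are being loaded than the
# (uncompressed) F fact table already contains, then rebuild them
# directly after loading.

def should_drop_indexes(records_to_load: int, f_table_records: int) -> bool:
    """The F fact table counts as 'small' when the load is larger than it."""
    return records_to_load > f_table_records

# Loading 5 million records into an F fact table holding 1 million:
print(should_drop_indexes(5_000_000, 1_000_000))  # True: drop, load, rebuild
print(should_drop_indexes(100_000, 1_000_000))    # False: keep the indexes
```

Remember that for standard InfoCubes, deleting the indexes is also what makes parallel loading possible.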

  • Aggregates:

    When loading data to aggregates, there are a number of points that can help you to optimize performance.

    More information: Loading Data into Aggregates Efficiently

  • Various settings for indexes/aggregates/statistics:

    You can make various other settings in the InfoCube modeling transaction.

    More information: InfoProvider Properties

  • Line item and high cardinality:

    If your InfoCube has a dimension with almost as many entries as the fact table itself, you can optimize performance by setting the Line Item or High Cardinality flag for this dimension.

    More information: Line Item and High Cardinality
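The criterion above ("almost as many entries as the fact table itself") can be sketched as a ratio check. This is purely illustrative, not a BW API; the 20% threshold is an assumption for the example, not taken from this document:

```python
# Illustrative check: suggest the Line Item / High Cardinality flag when
# the number of distinct dimension entries approaches the number of
# records in the fact table. The threshold is a hypothetical cutoff.

def suggest_high_cardinality(dim_entries: int, fact_records: int,
                             threshold: float = 0.2) -> bool:
    if fact_records == 0:
        return False
    return dim_entries / fact_records >= threshold

print(suggest_high_cardinality(900_000, 1_000_000))  # True: flag the dimension
print(suggest_high_cardinality(5_000, 1_000_000))    # False: normal dimension
```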

  • Change run:

    If hierarchies and attributes of InfoCube characteristics have changed, structural changes need to be made in the aggregates in order to adapt the data. Note that a structural change adapts all aggregates of all InfoCubes affected by the changes to the hierarchies and InfoObjects. This can be time-consuming.

    You therefore need to note the following:

    • You can optimize the change run. For more information, see SAP Note 176606.

    • Try to define aggregates that do not reference master data. This will prevent these aggregates from having to be adapted after changes to master data.

    • In some cases, the change run can be optimized if the DB statistics run is left out. For more information, see SAP Note 555030.

    • If you use master data with a large number of navigation attributes, the change run will take longer. In cases like this, you have the option of splitting the SQL statements. For more information, see SAP Note 536223.

  • Number range buffer:

    When loading large amounts of data to an InfoCube, you should increase the number range buffer for the dimensions in question. If possible, reset it after loading. This will prevent unnecessary memory consumption.

    For more information, see SAP Note 130253.

  • Compression:

    When compressing an InfoCube's data for the first time, you should select no more than one request. After the first compression, you can select more than one request for subsequent compressions.

    If compression takes a particularly long time, you can optimize performance in the following case: the data that you are loading to the InfoCube is disjoint from the data in other requests. By disjoint, we mean that every record in a request differs from the records in other requests in at least one of the user-defined dimensions.

    For more information, see SAP Note 375132.
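The disjointness condition defined above can be sketched as follows. Records are modeled here as tuples of user-defined dimension keys; the function and sample data are illustrative, not a BW API:

```python
# Two requests are disjoint when no combination of user-defined
# dimension keys occurs in both, i.e. every record of one request
# differs from every record of the other in at least one dimension.

def requests_disjoint(request_a, request_b) -> bool:
    """True if no dimension-key combination occurs in both requests."""
    return not (set(request_a) & set(request_b))

# Hypothetical dimension-key tuples (plant, fiscal year):
old = [("plant_1", "2023"), ("plant_2", "2023")]
new = [("plant_1", "2024"), ("plant_2", "2024")]

print(requests_disjoint(old, new))  # True: a new year makes them disjoint
print(requests_disjoint(old, old))  # False: identical keys overlap
```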

  • Non-cumulatives:

    How you model non-cumulatives in the BW system depends on your scenario. If the non-cumulatives change infrequently, you should choose non-cumulative management with non-cumulative key figures. If the non-cumulatives change frequently, you should choose non-cumulative management with normal key figures (that is, cumulative values).

    More information: Non-Cumulatives