If the analysis of BEx queries or BW statistics has suggested several aggregate proposals, it is not advisable to activate all of them. Aggregates reduce query runtimes, and the roll-up of data is optimized so that aggregates that are already rolled up and available can be used. As a rule, however, the total memory space required for all aggregates is too great, and filling them takes too long.
In addition to the runtime for the queries, a complete optimization must also take into account the dependencies of the aggregates, their memory requirements, the time taken to roll up new data, and other factors.
You can run a simplified optimization by choosing Proposal → Optimize.
For this optimization, the heuristic assumption is that the number of aggregates should be reduced first. The system selects those aggregates that have been called least often and that together account for 20% of all calls. These aggregates are checked, one after the other, to see whether there is an aggregate with exactly one additional component.
If the system finds more than one aggregate with exactly one additional component, it chooses the one that has been called most often. The calls for the checked aggregate are added to this number, and the checked aggregate (from the 20% set) is then deleted from the list of proposed aggregates.
However, this only happens if the number of calls for the checked aggregate is no more than double the number of calls for the aggregate with the additional component. This prevents aggregates from being replaced by others that are used relatively rarely.
Optimization continues until the set of aggregates is small enough, or until no more aggregates can be combined.
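The heuristic described above can be sketched in pseudocode-style Python. This is an illustrative model only, not SAP's internal implementation: the `Aggregate` class, its fields, and the `optimize` function are hypothetical names invented for this sketch, and the components of each aggregate are represented as simple sets of characteristic names.

```python
# Illustrative sketch of the optimization heuristic; all names are
# hypothetical and do not correspond to any SAP BW API.
from dataclasses import dataclass


@dataclass
class Aggregate:
    name: str
    components: frozenset  # characteristics the aggregate is built on
    calls: int = 0         # how often queries have used this aggregate


def optimize(proposals):
    """Fold rarely used aggregates into near-identical larger ones."""
    aggs = list(proposals)
    changed = True
    while changed:
        changed = False
        total_calls = sum(a.calls for a in aggs)
        # Candidates: least-used aggregates that together account
        # for at most 20% of all calls.
        candidates, running = [], 0
        for a in sorted(aggs, key=lambda a: a.calls):
            if running + a.calls > 0.2 * total_calls:
                break
            running += a.calls
            candidates.append(a)
        for cand in candidates:
            # Aggregates with exactly one extra component can answer
            # the same queries as the candidate.
            supersets = [a for a in aggs
                         if cand.components < a.components
                         and len(a.components - cand.components) == 1]
            if not supersets:
                continue
            # Of several matches, take the one called most often.
            best = max(supersets, key=lambda a: a.calls)
            # Only merge if the candidate is called no more than
            # twice as often as the aggregate replacing it.
            if cand.calls <= 2 * best.calls:
                best.calls += cand.calls
                aggs.remove(cand)
                changed = True
                break  # recompute the 20% set after each merge
    return aggs
```

Under these assumptions, a rarely used aggregate on (a, b) would be absorbed by a frequently used aggregate on (a, b, c), with its call count carried over.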
Since the optimizer has no information about the data structure, you should check the proposals again before filling aggregates with data. For example, a proposed aggregate may contain a characteristic that would make the aggregate almost the same size as the InfoCube. This would mean that when the aggregate is filled, the system virtually creates a copy of the InfoCube. This is not generally the objective when using aggregates.