Performance

The Offline OData service is the mobile services component that transmits data between the back end (the OData service) and the client offline store.

Improving Server Performance

  • Use “NEVER” for delta tracking

    When the back-end server supports delta queries, set delta tracking to NEVER. This makes the Offline OData service use the back-end server's delta token directly, which improves sync performance by reducing database access.

  • Set the Number of Max Delta Resends to 0

    When the back-end server data doesn't change frequently, set the Number of Max Delta Resends to 0 so that the Offline OData service doesn't send extra requests to the back-end server, thus improving sync performance.
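    Both of these settings are configured per endpoint in mobile services. As a sketch, assuming the ini-style offline application configuration file (the endpoint name is a placeholder, and the key names should be verified against your mobile services version):

```ini
[endpoint]
name=com.example.sampleservice
delta_tracking=NEVER
max_delta_resends=0
```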

  • Set automaticallyRetrievesStreams to false

    For media entity sets, set the automaticallyRetrievesStreams parameter to false when adding the defining query; otherwise, mobile services always sends an extra request to the back-end server to fetch the media content for every entity. If the media entity set contains a significant number of entities, an excessive number of HTTP requests could be sent to the back-end server, decreasing sync performance.
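    As a sketch of how this looks in the client SDK (the defining query name and entity set are illustrative, and the OfflineODataDefiningQuery constructor with a trailing automaticallyRetrievesStreams flag is an assumption to verify against your SDK version):

```java
OfflineODataParameters parameters = new OfflineODataParameters();
// false: do not automatically fetch the media stream for every entity
parameters.addDefiningQuery(new OfflineODataDefiningQuery("Pictures", "Pictures", false));
```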

Improve Performance Using the Client SDK

  • Use “$filter” query options

    When an entity set contains a significant amount of data, use $filter to avoid loading all of the data, thus improving query performance. For example:

    DataQuery dataQuery = new DataQuery();
    dataQuery.setRequestPath("T1");
    dataQuery.setQueryString("?$filter=pk eq 15");
    
    let dataQuery: DataQuery = DataQuery()
    dataQuery.requestPath = "T1"
    dataQuery.queryString = "?$filter=pk eq 15"
    
  • Use the client-side pagination query

    When you must load all of the data for sets that contain many entities, use the client-side pagination function, which loads one page of data per query and then fetches the remaining data with subsequent queries, if necessary. The recommended page size is between 1,000 and 2,000 entities:

    DataQuery dataQuery = new DataQuery().from( ... );
    dataQuery.page( 2000 );
    do {
        QueryResult queryResult = provider.executeQuery( dataQuery );
        ...
        dataQuery = queryResult.getNextQuery();
    } while (dataQuery.getUrl() != null);
    
    var dataQuery: DataQuery = DataQuery().from( ... )
    dataQuery.page( 2000 )
    repeat {
        let result: QueryResult = try provider.executeQuery( dataQuery )
        ...
        dataQuery = try result.nextQuery().page( 2000 )
    } while dataQuery.url != nil
    
  • Avoid using the $expand query option

    Avoid creating queries that use the $expand query option; otherwise, additional database access is required to get the navigation property for every entity instance. If there are a significant number of entity instances, this can cause poor query performance due to excessive database access.
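    If the related entities are only needed for some parent entities, a sketch of the alternative is to query the related entity set directly with a filter instead of expanding it for every instance (the entity set and property names here are illustrative):

```java
// Expanding pulls the navigation property for every Customer:
DataQuery expandQuery = new DataQuery();
expandQuery.setRequestPath("Customers");
expandQuery.setQueryString("?$expand=contacts");

// Instead, fetch the related entities on demand with a filtered query:
DataQuery contactsQuery = new DataQuery();
contactsQuery.setRequestPath("Contacts");
contactsQuery.setQueryString("?$filter=customerID eq 1");
```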

  • Avoid filtering on a collection property

    When filtering on a collection property, the Offline OData service loads all of the collection properties into memory first and then filters them there.

    Avoid filtering on a collection property if the entity contains many items (for example, more than 1,000). If possible, replace the collection property with a navigation property to improve performance.

    For example, in this query: Customers(id=1)/contacts?$filter=$it/city eq 'NewYork'

    contacts is a collection property of the Customers entity with type Collection(Contact). Contact is a complex type, and city is one of its fields.

    If one Customer entity has many contacts, the query will be slow, so consider converting Contact from a complex type to an entity type.

    The query should be changed to:

    Customers(id=1)?$expand=contacts&$filter=contacts/city eq 'NewYork'

  • Undo pending changes in a batch

    When there are many entities to undo, undoing them one by one can be time consuming. Undoing an array of entities in a single batch reduces the number of calls and improves performance.

    For example:

    An order with 10 OrderItems is created locally.

    1. Create OrderItem(1), Create OrderItem(2), Create OrderItem(3) … Create OrderItem(10)

    2. Create Order(1)

    If Order(1) is canceled before the upload, Order(1) as well as OrderItems 1 through 10 need to be reverted. Undoing them one at a time works, but it is time consuming. Undoing all the pending changes in a single batch call is the recommended approach:

    OrderItem item1 = new OrderItem();
    // set item1 properties, etc.
    dataService.createEntity(item1);
    
    OrderItem item2 = new OrderItem();
    // set item2 properties, etc.
    dataService.createEntity(item2);
    
    
    
    OrderItem item10 = new OrderItem();
    // set item10 properties, etc.
    dataService.createEntity(item10);
    
    Order order1 = new Order();
    // set order1 properties, etc.
    dataService.createEntity(order1);
    
    List<EntityValue> entities = new ArrayList<>();
    entities.add(item1);
    entities.add(item2);
    
    
    
    entities.add(item10);
    entities.add(order1);
    
    // undo all the entities in a single batch
    offlineODataProvider.undoPendingChanges(entities.toArray(new EntityValue[0]));
    
    let item1 : OrderItem = OrderItem()
    // set item1 properties, etc.
    try dataService.createEntity(item1)
    
    let item2 : OrderItem = OrderItem()
    // set item2 properties, etc.
    try dataService.createEntity(item2)
    
    
    
    let item10 : OrderItem = OrderItem()
    // set item10 properties, etc.
    try dataService.createEntity(item10)
    
    let order1 : Order = Order()
    // set order1 properties, etc.
    try dataService.createEntity(order1)
    
    var entities:[ EntityValue ] = []
    entities.append(item1)
    entities.append(item2)
    
    
    
    entities.append(item10)
    entities.append(order1)
    
    // undo all the entities in a single batch
    try offlineODataProvider.undoPendingChanges(for: entities)
    

    For additional information, see Undoing Pending Changes.

  • Avoid using random defining request names

    Mobile services stores a defining request as metadata even if the defining request is created by the app rather than defined in the mobile services cockpit.

    Using random defining request names causes mobile services to store a large amount of metadata, which slows down download processing.

    If defining requests need to be created on the fly, use incremental numbers, such as ‘defining-request-1’, ‘defining-request-2’, and so forth, so that devices share the same defining request names rather than each device having its own.
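    The naming scheme above can be sketched as a small helper (the helper itself is illustrative, not part of the SDK): a deterministic, counter-based name means every device produces the same sequence, unlike randomly generated names.

```java
import java.util.ArrayList;
import java.util.List;

public class DefiningRequestNames {
    // Deterministic, counter-based defining request names: every device
    // generates the same sequence, so mobile services stores each name once.
    static List<String> names(int count) {
        List<String> result = new ArrayList<>();
        for (int i = 1; i <= count; i++) {
            result.add("defining-request-" + i);
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(names(3)); // [defining-request-1, defining-request-2, defining-request-3]
    }
}
```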

  • Avoid accumulating defining requests on a device

    The best practice is to keep a limited number of defining requests per device. During download, each defining request creates one roundtrip to the back end (if not using batching), so as defining requests accumulate, the download becomes slower and slower. An extreme example would be an app that adds a separate defining request for each entity the user wants to view after the initial download.

    The recommended approach would be:

    1. Try to download enough data during the initial download if the data size is not huge and does not significantly impact the onboarding experience.

    2. Use the remove defining request capability to clean up defining requests and their local data when that data is no longer used. The number of defining requests that impacts download performance is largely determined by the latency of one roundtrip to the back end. If the app is likely to add many defining requests on the fly, it should clean them up to avoid potential performance problems.

    3. Try to use defining request batching if possible.
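    As a sketch of step 2 (the method name and signature here are assumptions based on the "remove defining request" capability described above; check the exact API in your SDK version):

```java
// Remove a defining request that is no longer needed, along with its
// local data, so it no longer adds a roundtrip to every download.
// (Assumed API name/signature; verify against your SDK.)
offlineODataProvider.removeDefiningQuery("defining-request-1");
```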

  • Increase the maximum cache size of the internal database

    The internal database can use a cache to store table rows, and increasing the maximum cache size may reduce the cost of querying rows. Currently, the default maximum cache size is 75 MB, and the hard limit is 250 MB. To modify the maximum cache size, use extra database connection parameters.

    OfflineODataParameters parameters = new OfflineODataParameters();
    parameters.setExtraDatabaseConnectionParameters("CACHE_MAX_SIZE=100M");
    
    var params = OfflineODataParameters()
    params.extraDatabaseConnectionParameters = "CACHE_MAX_SIZE=100M"
    

    Note

    • The internal database allocates cache from native memory, so a larger cache size will result in the application consuming more memory.
    • The default value is suitable for most applications. We recommend conducting a performance test before making any adjustments to this value.

Last update: June 15, 2023