Import and Prepare Fact Data for a Model with Measures

Once your new model is created, you might want to import fact data to your model.

In this section, we describe each step required to import data acquired from an external data source into the fact table of your new model type. There are four main steps in the import process: creating an import job, preparing the data, mapping the data, and running the import. You can also schedule import jobs to import data on a one-time or recurring basis. Depending on whether you're working with an SAP data center (Neo) or a non-SAP data center (Cloud Foundry), the user interface and procedure can differ slightly. Where that is the case, it is explicitly mentioned in the sections below.

Creating an Import Job

Context

Before you can move on to the data preparation step, you first have to import data using the Import Data from option. You can import data either from a file or from a data source. If you're importing data from a data source, make sure to select an existing connection or create a new one. For more information on how to do so, see Import Data to Your Model.
Restriction
The following data sources are not supported by the new model type:
  • SAP BPC
  • SAP Concur
  • SAP ERP
  • SAP Fieldglass
  • Workforce Analytics
  • Dow Jones

Also, for the Salesforce data source, importing data into an existing model isn't supported.

Procedure

  1. Select Import Data from > File or Import Data from > Data Source, depending on where your data is stored.
  2. Select your file if your data is stored in a local file or connect to your data source.
  3. Select the data you want to import.

    The import job is added to the Draft Sources or Import Jobs list, depending on whether you're working with a classic account model or a model with measures (i.e., the new model type). At this point, you can save and exit the process and come back to it later. The uploaded draft data expires 7 days after the upload.

  4. Set up the import job:
    • If you're importing data to a model with measures, click Set up import.
    • If you're importing data to a classic account model, click the draft source you've just imported listed under Draft Sources.

Results

Now that you've imported data to your model, you're ready for the data preparation step. See Preparing the Data for more information.

Preparing the Data

The data preparation step is where you can resolve data quality issues before the mapping step, but also wrangle data and make edits such as renaming columns, creating transformations, and so on.

First, make sure to resolve data quality issues if there are any. After the initial data import, the Details panel gives you a summary of the characteristics of the model with general information about the imported data, including any data quality issues.

For tenants based on an SAP Data Center, issues are listed and described under the Model Requirements section if they are related to the model itself, or under the Data Quality section when you click a particular dimension.

For tenants based on a non-SAP Data Center, the Details panel lists all the dimensions and measures of the model. You can click the icon to access more information. If the application has detected issues with a dimension, the number of issues is indicated right next to its name, and clicking that number takes you straight to the Validation tab, where you can see a description of each issue.

Select a message to see the options available to resolve any identified quality issue. Use the context-sensitive editing features and the transform bar to edit data in a cell, column, or card. When you select a column, a cell, or content in a cell, a menu appears with options for performing transforms. This menu has two parts:

  • Choose the Quick Actions option to perform actions such as duplicate column, trim whitespace, hide columns, or delete columns or rows. The list below includes all the available actions.
  • Select the Smart Transformations icon to list suggested transformations to apply to the column, such as replacing the value in a cell with a suggested value. You can also select Create a Transform and choose from the options listed under the transformation bar displayed above the columns. The transformation bar is used to edit and create transforms.

As you hover over a suggested transformation, the anticipated results are previewed in the grid. To apply the transformation, select it. You can also manually enter your own transformation in the transformation bar; as the transform is built, a preview is provided in the affected column, cell, or content within the cell. The list below summarizes the available transformations you can apply to selected columns, cells, or content within cells, and is followed by a few illustrative entries.

  • Delete Rows
    Description: Delete rows in the data, either by selecting individual members or by specifying a range (ranges are available for numerical and Date columns only, not for text columns).
    Transform bar format: Provided only as a quick action.
  • Trim Whitespace
    Description: Remove spaces, including non-printing characters, from the start and end of strings.
    Transform bar format: Provided only as a quick action.
  • Duplicate Column
    Description: Create a copy of an existing column.
    Transform bar format: Provided only as a quick action.
  • Delete Column
    Description: Delete a column. Use the Shift key to select and then delete multiple columns.
    Transform bar format: Provided only as a quick action.
  • Remove duplicate rows
    Description: Remove duplicate rows when creating or adding data to a model.
    Transform bar format: Provided only as a task bar icon.
  • Concatenate
    Description: Combine two or more columns into one. An optional value can be entered to separate the column values.
    Transform bar format: Concatenate [<Column1>], [<Column2>]… using "value"
  • Split
    Description: Split a text column on a chosen delimiter, working from left to right. You can choose the number of splits.
    Transform bar format: Split [<Column>] on "delimiter" repeat "#"
  • Extract
    Description: Extract a block of text, specified as numbers, words, or targeted values, from a column into a new column.
    Transform bar format: Extract [<what to extract>] [<where to extract>] [<which occurrence>] [""] from [<column name>] [<include value option>]
    Options for what to extract:
      • number: Limits the extracted text to one number from the column cell.
      • word: Limits the extracted text to a word from the column cell.
      • everything: Includes all text.
    Options for where to extract, relative to the target value:
      • before
      • after
      • between
        Note
        You must specify two target values when using between.
      • containing: Extracts a word or number containing the target. For example, if your target is ship, both ship and shipping are extracted.
      • equal to: Extracts the specific target value if it exists in the cell.
    Options for specifying the occurrence:
      • first
      • last
      • occurrence: Allows you to specify the position of the target, from one to ten.
        Note
        Use occurrence when there are multiple instances of the target.
    To extract the first number from all column cells, you would specify the following:
    Extract number before first "" from [<column name>].
    To extract all text between parentheses in column cells, you would specify:
    Extract everything between "(" and ")" from [<column name>].
  • Change
    Description: Change a column to uppercase, lowercase, or title case.
    Transform bar format: Change [<Column>] to (<UPPERCASE>/<lowercase>/<TitleCase>)
  • Replace
    Description: Replaces either an entire cell or content that can be found in multiple different cells.
    Note
    You can optionally add a <where> clause to the transform bar. You need to specify an associated column and value when using a <where> clause to limit the replace action to a given column.
    Transform bar format: Replace (<cell/content>) in [<Column>] matching "value" with "value"
    With a <where> clause: Replace (<cell/content>) in [<Column>] matching "value" with "value" where [<Column>] is "value"
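
For instance, assuming hypothetical source columns named First Name, Last Name, Full Name, and Country (used here for illustration only), entries following the transform bar formats above could look like this:

Concatenate [First Name], [Last Name] using " "
Split [Full Name] on " " repeat "1"
Change [Country] to UPPERCASE

The exact labels and tokens offered can vary slightly depending on your environment; use the suggestions shown in the transform bar itself as the authoritative reference.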

A history of all the transforms you implement is displayed in the Transform Log. Two levels of transformation logs are recorded: transforms on the entire dataset, and transforms on the currently selected column. Hover over a specific transform in the Transform Log to highlight the impacted column. You can roll back changes by either deleting entries in the history or using the Undo/Redo buttons on the toolbar. You can remove transforms in the Transform Log panel out of sequential order, provided that there are no dependencies.

When selecting a dimension, measure, or attribute, the Data Distribution section gives contextual information about a dimension with numerical or textual histograms:
  • Numerical histograms are vertical, and represent the range of values along the x-axis. Hover over any bar to show the count, minimum, and maximum values for the data in the bar. The number of bars can also be adjusted by using the slider above the histogram. Checking the Show Outliers (SAP Data Center) or Include Outliers (Non-SAP Data Center) option includes or removes outliers from the histogram. Below the histogram, a box and whisker plot helps you visualize the histogram's distribution of values.
  • Text histograms are horizontal, and the values are clustered by count or percent. The number of clusters can be adjusted by dragging the slider shown above the histogram. When a cluster contains more than one value, the displayed count is the average count for the cluster. The count is prefaced by a tilde symbol (~) if there are multiple different occurrences. Expand the cluster for a more detailed view of the values in the cluster along with individual counts. Use the search tool to look up specific column values, and press Enter to initiate the search. When you select a value in the histogram, the corresponding column is sorted and the value is highlighted in the grid.
    Note
    The displayed histogram is determined by the data type of the column. A column with numbers could still be considered as text if the data type is set to text.
Once you have checked and fixed all errors, if you're working with a dataset sample, click Validate Data to apply your transforms across the entire dataset and check for data quality errors.
Note
Validating data in the data preparation step is only possible for tenants based on a non-SAP Data Center. For tenants based on an SAP Data Center, you can only validate the full dataset when reviewing the import. See Reviewing and Running the Import for more information.

Selecting Dimension Types

When you create an import job, the application automatically qualifies the data. Typically, columns containing text are identified as dimensions, and columns containing numeric data are identified as measures. You can still change this qualification to another type if needed. For example, you can change a Date dimension to an Organization dimension.

If you’re unsure about dimension types, make sure to check out Learn About Dimensions and Measures.

After you've selected your dimensions, follow the best practices described in the sections below.

Maximum Number of Dimension Members

To maintain optimal performance, the application sets a limit to the number of unique members per dimension when importing data to a new model. For more information, check out System Requirements and Technical Prerequisites.

For non-planning-enabled models only, you can import dimensions with more than the maximum number of members. However, the following restrictions apply:
  • In the data integration view, these dimensions cannot have any dimension attributes added to them, such as description or property.
  • Once the dimensions are imported into the Modeler, they have only one ID column, and are read-only.
  • The dimensions can't be used as exception aggregation dimensions or required dimensions.
  • The dimensions can't be referenced in formulas.

Calculated Columns

While you're preparing data in the data integration view, you can create a calculated column based on input from another column and the application of a formula expression.

From the menu toolbar, open the Create Calculated Column dialog to build your calculated column. Add a name, and build the formula for your column in the Edit Formula space. You can either select an entry from Formula Functions as a starting point or type "[" to view all the available columns. Press Ctrl + Space or Cmd + Space to view all the available functions and columns.

Click Preview to view a 10 line sample of the results of the formula. Click OK to add the calculated column to the model. If necessary, you can go back and edit the calculated column’s formula by clicking Edit Formula in the Designer panel.
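
For instance, assuming your imported data contains Revenue and Cost columns (hypothetical names used only for illustration), a simple calculated column formula could compute a margin by referencing the columns in square brackets:

[Revenue] - [Cost]

More elaborate formulas can combine column references with the entries listed under Formula Functions.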

For a listing of supported functions, see Supported Functions for Calculated Columns.

Dimension Attributes

Dimension attributes hold information that isn't suitable for standalone dimensions. You can use them to create charts, filters, calculations, input controls, linked analyses, and tables (with "import data" models only). For example, if you have a Customers column and a Phone Numbers column, you could set Phone Numbers to be an attribute of the Customers dimension.

There are multiple types of attributes available:
  • Description: The column can be used for descriptive labels of the dimension members when the member IDs (the unique identifiers for the dimension members) are technical and not easily understandable.

    For example, if your imported data contains a pair of related columns Product_Description and Product_ID with data descriptions and data identifiers, you can set Product_Description to be the Description attribute for the Product_ID dimension. Note that the Product_ID column would then need to contain unique identifiers for the dimension members.

  • Property: The column represents information that is related to the dimension; for example, phone numbers.
  • Parent-Child Hierarchy (Parent): The column is the parent of the parent/child hierarchy pair.

    For example, if your imported data contains the two columns Country and City, you can set the Country column to be the parent of the Country-City hierarchy.

    The hierarchy column is a free-format text attribute where you enter the ID value of the parent member (see the example after this list). By maintaining parent-child relationships in this way, you can build up a data hierarchy that is used when viewing the data to accumulate high-level values that can be analyzed at lower levels of detail.

    You can also create level-based hierarchies when your data is organized into levels, such as Product Category, Product Group, and Product. When the data is displayed in a story, hierarchies can be expanded or collapsed. In the toolbar, select the Level Based Hierarchy option. For more information about hierarchies, see Learn About Hierarchies.

  • Currency: If you set a column to be an Organization dimension, the Currency attribute is available. The Organization dimension offers an organizational analysis of the account data, based, for example, on geographic entities. You can add the Currency attribute to provide currency information for the geographic entities.
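
To illustrate the parent-child hierarchy attribute (with illustrative values only): if the source rows contain the City and Country pairs (Paris, France) and (Lyon, France), setting Country as the parent places Paris and Lyon under the parent member France, and a member whose parent cell is left empty becomes a top-level node of the hierarchy.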

Data Quality Considerations

During data import, anomalies in your data can prevent the data from being imported properly, or prevent the model from being created. If issues are found in your data, the impacted data cells are highlighted, and messages in the Details panel explain the issues. You'll need to resolve the following issues before the data can be fully imported or the model can be created:
  • In account dimensions, dimension member IDs cannot contain the following characters: , ; : ' [ ] =.
  • Numeric data cells in measures cannot contain non-numeric characters, and scientific notation is not supported.
  • When importing data to an existing model, cells in a column that is mapped to an existing dimension must match the existing dimension members. Unmatched values will result in those rows being omitted from the model.
  • For stories, if any member IDs are empty, you can type values in those cells, or select Delete empty rows in the Details panel to remove those rows.
  • When creating a new model, if member IDs are empty, they are automatically filled with the “#” value if you select the Fill applicable empty ID cells with the "#" value option. Otherwise, those rows are omitted from the model.
    Note
    This option is only available for tenants based on an SAP Data Center (Neo).
  • In dimensions and properties, a single member ID cannot correspond to different Descriptions in multiple rows (but a single Description can correspond to multiple member IDs).

    For example, if member IDs are employee numbers, and Descriptions are employee names, you can have more than one employee with the same name, but cannot have more than one employee with the same member ID.

  • In a Date dimension column, cell values must match the format specified in the Details panel.

    The following date formats are supported: dd-mm-yy, dd.mm.yy, dd/mm/yy, dd-mm-yyyy, dd.mm.yyyy, dd/mm/yyyy, dd-mmm-yyyy, dd-mmmm-yyyy, mm-dd-yy, mm.dd.yy, mm/dd/yy, mm-dd-yyyy, mm.dd.yyyy, mm/dd/yyyy, mm.yyyy, mmm yyyy, yy-mm-dd, yy.mm.dd, yy/mm/dd, yyyq, yyyy, yyyymm, yyyy-mm, yyyy.mm, yyyy/mm, yyyymmdd, yyyy-mm-dd, yyyy-mm/dd, yyyy.mm.dd, yyyy/mm-dd, yyyy/mm/dd, yyyy.mmm, yyyyqq.

    Examples:
    • mmm: JAN/Jan/jan
    • mmmm: JANUARY/January/january
    • q: 1/2/3/4
    • qq: 01/02/03/04
  • Latitude and longitude columns, from which location dimensions are created, must contain values within the valid latitude and longitude ranges.
  • For planning-enabled models, measure data for non-leaf-node members of a hierarchy is not allowed.
    Note
    For analytic (non-planning-enabled) models only, non-leaf-node members are allowed, but be aware of the effects of this behavior. For example: in an organizational chart that includes employee salaries, the manager has her individual salary, and her staff members have their own salaries as well. In a visualization, do you expect the manager's data point to reflect her individual salary, or the sum of her staff members' salaries?

Mapping the Data (Non-SAP Data Center Tenants)

Now that the data is prepared, you can start the mapping process. The application automatically pre-maps some parts of the data, and you can map the remaining data manually.

Context

Restriction
Restrictions apply when mapping dimensions and attributes. These issues are listed in the Review Import step just before running the import. To avoid having issues during the import, make sure to follow these best practices:
  • All dimensions are mapped.
  • If the Version is mapped to a column, then the Category is also mapped to a column.
  • There is only one Actuals version in the model.
  • The Actuals category is mapped to public.Actuals for planning models.
  • There is at least one measure mapped for the model.

Procedure

  1. Map all the unmapped dimensions and at least one measure: drag a dimension or measure from the Source column and drop it on a dimension or measure in the Unmapped column to map it. You can also hover over an unmapped source column, open the Quick Map menu, and select a target column. If needed, filter the source columns to show only the mapped or unmapped columns.
    If at any time you want to discard the mapping and start from scratch, reset the mapping from the toolbar.
  2. Optional: If needed, you can also set default values for dimensions. In the Unmapped column, click next to an unmapped dimension in the Source section, and select Set default value to "#" or Set default value....
  3. Map the date dimension.
    If you are importing a custom date dimension, decide whether you want your source data to match the target date dimension of your model. That would be the case, for example, if your source data has a traditional calendar month granularity while the target date dimension has a week granularity. However, you can also decide to keep the granularity differences between the source and target data. The main things to consider when importing data into date dimensions are fiscal periods, weeks, and additional periods (see the example after the sub-steps below).
    1. Click Mapping Summary to edit both the data type and conversion format of the source date dimension if needed.
      The formats available depend on both the granularity and data type you select. You can customize the source dimension and select whether you want to Map by ID or Map by Date.

      If you leave the data type set to Date, you can change the date format in the data preparation. If you set the data type to String, you can choose from formats allowed by the target date dimension, especially if the target dimension has fiscal period settings and week granularity. Lastly, if you set the data type to Integer, your values are assumed to be dates formatted without separators.

    2. If the target date dimension has fiscal settings and you want to automatically replicate the fiscal settings to the source date dimension, click Apply fiscal settings to column values.
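    For instance (illustrative values only), if your source column stores integer values such as 20240301, setting the data type to Integer treats each value as a date written without separators (here, March 1, 2024), so with a target date dimension at month granularity the row would fall into the 2024.03 period. If the same column were typed as String against a target with fiscal settings, you would instead choose one of the formats allowed by that target dimension.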
  4. Map the version:
    • If you have a single version, you can select a default value and map all rows to that version. Click next to the version in the Source section, select Set default value, and choose a version. Each version is described with its version category.
    • Alternatively, if the data you're importing includes a version column, you have to handle multiple versions in the dataset. Map the version column you have just imported to the version dimension in the Target column. This allows you to assign rows to multiple versions.
      Note
      If you have multiple versions, you must map the Category attribute in the next step. If you don't have a category column, go back to the data preparation step and create one.
  5. Once you have mapped all the dimensions and at least one measure, click Next.
  6. Map the attributes.
    If you’re looking for specific attributes to map, use the expand and collapse functions, or the search function. You can also filter on unmapped or mapped attributes. For each dimension, if the data you’re importing has new members, the application indicates the exact number.
  7. Click Next.
    You're now in the Review Import screen.

Results

Once you've fixed remaining issues, you're ready to validate the data, review and run the import. For more information, check out Reviewing and Running the Import.

Mapping the Data (SAP Data Center Tenants)

Now that the data is prepared, you can start the mapping process. The application automatically pre-maps some parts of the data, and you can map the remaining data manually.

Context

Restriction
Restrictions apply when mapping dimensions and attributes. These issues are listed in the Review Import step just before running the import. To avoid having issues during the import, make sure to follow these best practices:
  • All dimensions are mapped.
  • If the Version is mapped to a column, then the Category is also mapped to a column.
  • There is only one Actuals version in the model.
  • The Actuals category is mapped to public.Actuals for planning models.
  • There is at least one measure mapped for the model.

    If the source data doesn't contain a column that can be used directly as a measure, you can create a measure column based on a count of an existing column that contains 100% unique values. The generated measure column will contain the value 1 for all rows, representing the count of the unique values.

    If you proceed to create the model without having created a measure column, a measure column will be generated automatically, based on the first column that contains 100% unique values.

    To create the measure column yourself, select a column with 100% unique values, and then select Create Count in the Details panel. If there isn't an existing column that contains 100% unique values, a measure column will be generated automatically when the model is created, containing the value 0 for all rows.

Procedure

  1. Switch to the Card View and determine what remains to be mapped. Cards display as either mapped or unmapped entities. A card represents mapped data if it is shaded solid and has defined borders. Cards that must be mapped appear transparent and borderless.

    Dimensions with attributes appear stacked and can be expanded. Mapping attributes is optional.

  2. Drag and drop an unmapped imported dimension, measure, or attribute onto the associated card.
    The date dimension can either be matched, or set with a default value. To set a default value, select the date dimension and click Set a default value in the Details panel. After you've set a default value, you can change it by clicking Change Default Value.
    Note
    Check Apply fiscal settings to column values in Details to map imported fiscal period data into a date dimension in a model enabled to support fiscal year. Review the format listed under Format and change it if required.
  3. Repeat step 2 until you have mapped all dimensions, at least one measure, and the attributes.
    You can check the progress of the mapping under the dedicated Dimensions, Attributes, and Measures headings. When each progress bar is fully gray, the mapping is complete.
  4. In the Details panel, specify the version for which you're importing data by checking either Existing Version or New Version. If you have multiple versions available, select the source version dimension card and click Map Versions in the Details panel to map each source version to the desired target version and category.
  5. Once you have mapped all cards, check for mapping issues. If there are any, fix them before moving to the next step.
    A red dot in the top right corner of the card indicates there are mapping errors that need to be addressed. A blue dot indicates that new values have been added from the import to the existing data.

Results

Once you've fixed remaining issues, you're ready to validate the data, review and run the import. For more information, check out Reviewing and Running the Import.

Reviewing and Running the Import

Once the mapping is complete, you're ready to review the import options and method before running the import job.

Context

Procedure

  1. Check whether there are pending data issues after the mapping.
    All issues must be solved to make sure all data is imported into your model. If you decide to create the model ignoring some of the detected errors, either entire rows or invalid cells will be omitted from the new model.
  2. Review the import options. For tenants based on an SAP Data Center, click View all options under Mapping Options in the Details panel. For tenants based on a non-SAP Data Center, open the preferences.
    • Update dimensions with new values (SAP Data Center only): Select this option if you're importing data to a model that already contains data and want to update the data with new dimension members, attributes, and hierarchy changes.
      Note
      This option doesn't apply to public dimensions.
    • Convert value symbol by account type (SAP Data Center only): Select this option to match the value symbol, positive or negative, to each account type in the model, for when values are stored as positive regardless of whether they represent income or expense accounts.
    • Fill applicable empty ID cells with the "#" value (SAP Data Center only): Select this option to fill empty ID cells with a "#" value to preserve rows without a value in the Dimension ID column. Otherwise, empty ID cells are deleted.
    • Reverse Sign by Account Type (Non-SAP Data Center only): Select this option to import your INC and LEQ account values with reversed signs (+/-). This option is only available if your model has an account dimension. If you want your INC and LEQ values to show up as positive, import them as negative values, and vice-versa. Before importing data, make sure to check the signs of the original source values first, and then set the option accordingly.
    • Update Local Dimensions with New Members (Non-SAP Data Center only): Select this option to map your source data to dimension properties to update their members during the import.
    • Conditional Validation: Select which hierarchies to validate against. Validating against selected hierarchies will prevent data from being imported to non-leaf members. This option is only available if you have at least one parent-child hierarchy in your model. For more details about storing data in non-leaf members, see Entering Values with Multiple Hierarchies.
  3. Select an import method.
    Note
    For tenants based on an SAP Data Center, import methods are listed under Import Method in the Details panel. For tenants based on a non-SAP Data Center, import methods can be found in the preferences.
    • Update: The target model's measure values for the dimension member combinations specified by the source data are updated with the corresponding measure values in the source data. If a particular dimension member combination has no measure values in the model prior to the import, a new value is inserted.
    • Append: The corresponding measure values from the source data are added to the target model's measure values for the dimension member combinations specified by the source data (summed together). If a particular dimension member combination has no measure values in the model prior to the import, a new value is inserted (see the worked example after the note below). For a more refined scope, use either the Clean and replace selected version data or Clean and replace subset of data options.
    • Clean and replace selected version data: Deletes the existing data and adds new entries to the target model, only for the versions that you specify in the import. You can choose to use either the existing version or specify a new version under Version. If you specify to import data for the "actual" version, only the data in the "actual" version is cleaned and replaced. Other versions, for example "planning", are not affected.
    • Clean and replace subset of data: Replaces existing data and adds new entries to the target model for a defined subset of the data, based on a scope of selected versions using either the Existing Version or New Version buttons. You can also limit the scope to specific dimensions. To define a scope based on a combination of dimensions, select + Add Scope and use the Select a dimension field to specify a dimension.

      When a Date dimension is defined in the scope, the time range in the source data (determined by the minimum time value and maximum time value of the source data, as well as the granularity of the date dimension) combined with other dimensions in the scope, will determine what existing data is cleaned and replaced.

      If for example, Date and Region dimensions are defined as part of a scope, only entries that fall within the time range and match Region from the source data will be replaced in the target model. Existing data that does not match the scope will be kept as is. Other dimensions that are not part of the scope will be cleaned and replaced regardless of whether the dimension members are in the source data or not.

    Note
    These options affect measures and dimensions. To include both measures and dimensions, see Update and Schedule Models.
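    As a worked example (with illustrative dimension and measure names), suppose the model already stores a Quantity of 100 for the combination Region = EMEA, Date = 2024.01, and the source data contains 40 for the same combination. With Update, the stored value becomes 40; with Append, it becomes 140 (100 + 40). A combination that appears only in the source data is inserted as a new value under either method.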
  4. For tenants based on an SAP Data Center, once you have checked and fixed all errors, if you're working with a dataset sample, click Validate Data to apply your transforms across the entire dataset and check for data quality errors.
    Note
    Validating data right before running the import is only possible for tenants based on an SAP Data Center. For tenants based on a non-SAP Data Center, you can only validate the full dataset during the data preparation step. See Preparing the Data for more information.
  5. Click Run Import (non-SAP Data Center tenant) or Finish Mapping (SAP Data Center tenant).

Scheduling an Import Job

Schedule a data import job if you want to refresh data against the original data source. You can import data from multiple queries and data sources into a model, and each of these imports can be separately scheduled.

Context

Procedure

  1. Select one or multiple import jobs you want to schedule.
    If you select multiple jobs, you can order them, set a group name, and select one of the group processing options:
    • Stop if any query fails: If any of the import jobs fails, the group processing stops. You can then cancel the remaining jobs, or try to fix the cause of the failure, and later resume execution of the grouping from the same point where execution stopped.
    • Skip any failed query: If any of the import jobs fails, the remaining jobs are still processed.
    Note
    A grouping can include jobs from public dimensions as well as the model. Running the grouping refreshes the public dimensions and model together. You can ungroup your import at any time.
  2. Define the frequency for the scheduling:
    • None: Select this option when you want to update the data manually.
    • Once: The import is performed only once, at a preselected time.
    • Recurring: The import is executed according to a recurrence pattern.
    You can update the schedule at any time.
  3. Define or update your import settings.