
 Using the Table Content Aging Report

 

You use this report (/CAR/TABLE_CONTENT_AGING) to copy your transaction log (TLOG) data and its extensions from your SAP HANA database to an alternative storage technology such as SAP IQ or Hadoop. Depending on the target technology used in your implementation scenario, different parameters must be provided.

SAP IQ

The SAP HANA database points to your SAP IQ database using smart data access (SDA), which exposes data from SAP IQ remote sources as virtual tables.

For more information, see http://help.sap.com/hana_platform, under System Administration > SAP HANA Administration Guide > SAP HANA Data Provisioning > SAP HANA Smart Data Access.
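The virtual tables that the report writes to are created in SAP HANA on top of a remote source that points to your SAP IQ database. The following sketch, using hdbsql on the command line, shows what creating such a virtual table could look like; the remote source name IQ_SOURCE, the remote database iqdb, the owner DBA, the virtual table name VT_TLOGF, and the target schema TARGET_SCHEMA are illustrative assumptions only, and the remote source is assumed to exist already.

Example

hdbsql -n <hana_host>:<port> -u <user> -p <password> \
  'CREATE VIRTUAL TABLE "TARGET_SCHEMA"."VT_TLOGF" AT "IQ_SOURCE"."iqdb"."DBA"."TLOGF"'

The same pattern applies to the TLOGF_X and TLOGF_EXT tables. The resulting virtual tables are the values you enter in the TLOGF Table, TLOGF_X Table, and TLOGF_EXT Table report parameters below.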

Report Parameters

Source Connection
The source SAP HANA database connection, as defined in transaction DBCO.

Source Schema
The database schema containing your /POSDW/TLOGF, /POSDW/TLOGF_EXT, and /POSDW/TLOGF_X tables.

Target Connection
The destination SAP HANA database connection, as defined in transaction DBCO.

Target Schema
The SAP HANA database schema containing your SAP IQ virtual tables.

TLOGF Table
The SAP HANA virtual table pointing to your SAP IQ TLOGF table.

TLOGF_X Table
The SAP HANA virtual table pointing to your SAP IQ TLOGF_X table.

TLOGF_EXT Table
The SAP HANA virtual table pointing to your SAP IQ TLOGF_EXT table.

Retention Value
The number of days a record is kept in memory, after which it is copied to SAP IQ.

Package Size
The number of table rows written to the internal table in one operation (to reduce the risk of running out of memory). The value you enter here determines the size of each package. For example, an entry of 3 means that the table is copied 3 rows at a time.

LUW Size
A database logical unit of work (LUW) is an inseparable sequence of database operations that ends with a database commit. Specify the size (in table rows) of this logical unit of work; it should be greater than or equal to the package size. For example, with a package size of 1,000 and an LUW size of 5,000, a commit is issued after every five packages.

Parallel Processes
Parallel processing is implemented in the report to enable background mode. Specify the number of processes that should run in parallel.
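Before running the report, you can check that the virtual tables actually exist in the target schema. The following query against the SAP HANA system view VIRTUAL_TABLES is a sketch only; host, port, credentials, and schema are placeholders.

Example

hdbsql -n <hana_host>:<port> -u <user> -p <password> \
  "SELECT TABLE_NAME, REMOTE_SOURCE_NAME FROM SYS.VIRTUAL_TABLES WHERE SCHEMA_NAME = '<target_schema>'"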

Hadoop

Prerequisites

The TLOG data and its extensions are copied from your SAP HANA database to Hadoop using the HDFS NFS Gateway on your Hadoop system. To enable this, you must create a mount point on your SAP NetWeaver system so that the data files can be created directly in the Hadoop Distributed File System (HDFS).

Note

The following steps are guidelines only and provide an example of how to mount a Network File System (NFS) share on an SAP NetWeaver Linux-based client.

End of the note.
  1. Make sure the NFS client is installed. Example installation commands per operating system:

     Red Hat, CentOS: sudo yum install nfs-utils

     Ubuntu: sudo apt-get install nfs-common

     SUSE: sudo zypper install nfs-client

  2. List the NFS shares exported on the server.

    Example

    showmount -e <host>

  3. Set up a mount point for an NFS share.

    Example

    sudo mkdir <folder>

    Note

    You must ensure that the folder paths follow these naming conventions:

    Temporary data folder: /tmp/tct_csv_out/temp

    Data folder: /tmp/tct_csv_out/data

    End of the note.
  4. Mount the cluster using NFS, where <share> is one of the exported shares listed in step 2 (a consolidated example of all four steps follows this list).

    Example

    sudo mount -o hard,nolock <host>:<share> <folder>
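Taken together, the four steps above might look as follows on a SUSE-based host. The host name hdfs-nfs-host, the exported share /, and the mount point /tmp/tct_csv_out are illustrative assumptions only; use the values from your own environment and make sure the resulting folders match the naming conventions from step 3.

Example

sudo zypper install nfs-client                    # Step 1: install the NFS client (SUSE)
showmount -e hdfs-nfs-host                        # Step 2: list the shares exported by the HDFS NFS Gateway
sudo mkdir -p /tmp/tct_csv_out                    # Step 3: create the local mount point
sudo mount -o hard,nolock hdfs-nfs-host:/ /tmp/tct_csv_out   # Step 4: mount the exported share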

On your HDFS, the different tables are stored under a folder using the following convention:

<data directory>/<schema>/<table>/<businessdaydate=partition_value>/{files}
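For example, assuming the hypothetical schema SAP_CAR, table folder TLOGF, business day 20151124, and the data directory used later in this section, the files for one business day can be listed directly through the NFS mount:

Example

ls /tmp/tct_csv_out/data/hdp/apps/hive/warehouse/SAP_CAR/TLOGF/businessdaydate=20151124/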

On the SAP NetWeaver file system, the Hadoop files are stored under a physical path and file name that is derived from a customer-definable logical path or file name. This configuration is maintained in the FILE transaction, where you also need to make use of the parameters PARAM_1 and PARAM_2: PARAM_1 is populated at runtime by the program with the generated file name, and PARAM_2 is populated at runtime with <schema>/<table>/<businessdaydate=partition_value>.

Example (Data Directory)

If the Hadoop data files are stored in the Unix/Linux folder /tmp/tct_csv_out/data/hdp/apps/hive/warehouse/<schema>/<table>/businessdaydate=partition_value/{files}, the data directory should point to the physical file <PARAM_1>.CSV and the physical directory /tmp/tct_csv_out/data/hdp/apps/hive/warehouse/<PARAM_2><FILENAME>.

You create the following logical path in the FILE transaction:

Logical path: /CAR/HDFS_DATA
Name: HDFS Data
Syntax group: UNIX
Physical path: /tmp/tct_csv_out/data/hdp/apps/hive/warehouse/<PARAM_2><FILENAME>

You create the following logical file in the FILE transaction:

Logical file: /CAR/HDFS_DATA
Name: HDFS Data
Physical file: <PARAM_1>.CSV
Data format: WK1
Application area: IS
Logical path: /CAR/HDFS_DATA
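At runtime, this logical path and logical file resolve to a concrete physical file. As a purely hypothetical illustration, if the program generates the file name TLOGF_0001 for PARAM_1 and the value SAP_CAR/TLOGF/businessdaydate=20151124/ for PARAM_2, the data file is written to:

Example

/tmp/tct_csv_out/data/hdp/apps/hive/warehouse/SAP_CAR/TLOGF/businessdaydate=20151124/TLOGF_0001.CSV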

Example (Temporary Directory)

In addition to the Hadoop data files, you also need to provide a temporary directory in which the program creates script files and temporarily stores data files to be compressed.

If the temporary files are stored in the Unix/Linux folder /tmp/tct_csv_out/temp/{files}, you create the following logical path in the FILE transaction:

Logical path: /CAR/HDFS_TEMP
Name: HDFS Temp
Syntax group: UNIX
Physical path: /tmp/tct_csv_out/temp/<FILENAME>

You create the following logical file in the FILE transaction:

Logical file: /CAR/HDFS_TEMP
Name: HDFS Temp
Physical file: <PARAM_1>.SH
Data format:
Application area: IS
Logical path: /CAR/HDFS_TEMP
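Assuming the same hypothetical generated file name TLOGF_0001 for PARAM_1, the corresponding script file in the temporary directory is created as:

Example

/tmp/tct_csv_out/temp/TLOGF_0001.SH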

Report Parameters

Source Connection
The source SAP HANA database connection, as defined in transaction DBCO.

Source Schema
The database schema containing your /POSDW/TLOGF, /POSDW/TLOGF_EXT, and /POSDW/TLOGF_X tables.

Target Schema
<schema> in <data directory>/<schema>/<table>/<businessdaydate=partition_value>/{files}.

TLOGF Table
<table> pointing to the TLOGF table, based on the format provided above.

TLOGF_X Table
<table> pointing to the TLOGF_X table, based on the format provided above.

TLOGF_EXT Table
<table> pointing to the TLOGF_EXT table, based on the format provided above.

Data Directory
The logical data directory created in the previous example using the FILE transaction.

Temporary Directory
The logical temporary directory created in the previous example using the FILE transaction.

Partitioning Enabled
Indicate whether partitioning is enabled using the pattern <schema>/<table>/<businessdaydate=partition_value>.

Retention Value
The number of days a record is kept in memory, after which it is copied to Hadoop.

Package Size
The number of table rows written to the internal table in one operation (to reduce the risk of running out of memory). The value you enter here determines the size of each package. For example, an entry of 3 means that the table is copied 3 rows at a time.

LUW Size
A database logical unit of work (LUW) is an inseparable sequence of database operations that ends with a database commit. Specify the size (in table rows) of this logical unit of work; it should be greater than or equal to the package size.

Parallel Processes
Parallel processing is implemented in the report to enable background mode. Specify the number of processes that should run in parallel.