You use this report (/CAR/TABLE_CONTENT_AGING) to copy your transaction log (TLOG) data and its extensions from your SAP HANA database to an alternate storage technology such as SAP IQ or Hadoop. Depending on the target technology you adopt in your implementation scenario, you must provide different parameters.
The SAP HANA database points to your SAP IQ database using smart data access (SDA), which exposes data from SAP IQ remote sources as virtual tables.
For more information, see .
Parameter | Description |
---|---|
| The source SAP HANA database connection as defined in transaction |
| The database schema containing your |
| The destination SAP HANA database connection as defined in transaction |
| The SAP HANA database schema containing your SAP IQ virtual tables. |
| The SAP HANA virtual table pointing to your SAP IQ |
| The SAP HANA virtual table pointing to your SAP IQ |
| The SAP HANA virtual table pointing to your SAP IQ |
| The number of days a record is kept in memory, after which it will be copied to SAP IQ. |
| The number of table rows written into the internal table during one operation (to reduce the risk of running out of memory). The value you enter here determines the size of the package. For example, an entry of 3 means that the table is copied 3 rows at a time. |
| A database logical unit of work (LUW) is an inseparable sequence of database operations that ends with a database commit. Specify the size (in table rows) of this logical unit of work. It should be greater than or equal to the package size. |
| Parallel processing is implemented in the report to enable background mode. Specify the number of processes that should run in parallel. |
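The parameters above point the report at SAP HANA virtual tables that expose your SAP IQ tables through SDA. As a rough illustration only (all names in angle brackets are placeholders, the statement is not generated by the report, and the SDA remote source to SAP IQ must already be configured), such a virtual table could be created with SAP HANA SQL, for example from the hdbsql command line:
Example
# Placeholders only - adjust all names to your landscape before running.
hdbsql -n <hana_host>:<sql_port> -u <user> -p <password> \
  'CREATE VIRTUAL TABLE "<hana_schema>"."<virtual_table>" AT "<remote_source>"."<iq_database>"."<iq_owner>"."<iq_table>"'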
The TLOG data and its extensions are copied from your SAP HANA database to Hadoop using the HDFS NFS Gateway on your Hadoop system. To enable this, you must create a mount point on your SAP NetWeaver system so that the data files are created directly in the Hadoop Distributed File System (HDFS).
Note
The following steps are guidelines only; they provide an example of how to mount a Network File System (NFS) share on an SAP NetWeaver Linux-based client.
Make sure the NFS client is installed based on the examples provided:
Operating System | Command |
---|---|
Red Hat, CentOS | |
Ubuntu | |
SUSE | |
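As a rough guide only (package names can differ between releases, so verify them against your distribution's documentation), the NFS client is typically installed as follows:
Example
sudo yum install nfs-utils       # Red Hat, CentOS
sudo apt-get install nfs-common  # Ubuntu
sudo zypper install nfs-client   # SUSE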
List the NFS shares exported on the server.
Example
showmount -e <host>
Set up a mount point for an NFS share.
Example
sudo mkdir <folder>
Note
You must ensure that the folder paths follow the same naming conventions, as follows:
Temporary data folder | |
Data folder | |
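For example, assuming the /tmp/tct_csv_out prefix that appears in the sample paths later in this section (adapt it to your own layout), the two folders could be created as follows:
Example
sudo mkdir -p /tmp/tct_csv_out/temp   # temporary data folder
sudo mkdir -p /tmp/tct_csv_out/data   # data folder (used here as the HDFS NFS mount point)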
Mount the cluster using NFS.
Example
sudo mount -o hard,nolock <host> <folder>
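The HDFS NFS Gateway typically requires NFS version 3. As a fuller sketch (assuming the gateway exports the HDFS root as / and the data folder from the earlier example is used as the mount point), the mount command could look as follows:
Example
sudo mount -t nfs -o vers=3,proto=tcp,nolock,sync <host>:/ /tmp/tct_csv_out/data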
On your HDFS, the different tables are stored under a folder using the following convention:
<data directory>/<schema>/<table>/<businessdaydate=partition_value>/{files}
On the SAP NetWeaver file system, the Hadoop files are stored under a physical path and file name that is derived from a customer-definable logical path or file name. The configuration is provided via the FILE transaction. Inside the FILE transaction, you also need to make use of the parameters PARAM_1 and PARAM_2. At runtime, the program populates PARAM_1 with the generated file name and PARAM_2 with <schema>/<table>/<businessdaydate=partition_value>.
If the Hadoop data files are stored in the Unix/Linux folder /tmp/tct_csv_out/data/hdp/apps/hive/warehouse/<schema>/<table>/businessdaydate=partition_value/{files}, the data directory should point to the physical file <PARAM_1>.CSV and the physical path /tmp/tct_csv_out/data/hdp/apps/hive/warehouse/<PARAM_2><FILENAME>.
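To illustrate how these placeholders resolve at runtime, assume purely hypothetical values of SAPABAP1 for the schema, TLOG for the table, 20240101 for the partition value, and TLOG_20240101_0001 for the generated file name:
Example
# Hypothetical runtime values: <PARAM_1> = TLOG_20240101_0001
#                              <PARAM_2> = SAPABAP1/TLOG/businessdaydate=20240101/
# The generated file could then be checked on the NFS mount:
ls /tmp/tct_csv_out/data/hdp/apps/hive/warehouse/SAPABAP1/TLOG/businessdaydate=20240101/TLOG_20240101_0001.CSV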
You create the logical path in the FILE transaction as follows:
Logical path | |
Name | HDFS Data |
Syntax group | UNIX |
Physical path | |
You create the logical file in the FILE transaction as follows:
Logical file | |
Name | HDFS Data |
Physical file | |
Data format | WK1 |
Application area | IS |
Logical path | |
In addition to the Hadoop data files, you also need to provide a temporary directory in which the program creates script files and temporarily stores data files to be compressed.
If the temporary files are stored in the Unix/Linux folder /tmp/tct_csv_out/temp/{files}, you create the logical path in the FILE transaction as follows:
Logical path | |
Name | HDFS Temp |
Syntax group | UNIX |
Physical path | |
You create the logical file in the FILE transaction as follows:
Logical file | |
Name | HDFS Temp |
Physical file | |
Data format | |
Application area | IS |
Logical path | |
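Before running the report, it can be useful to check on the SAP NetWeaver host that both locations from the examples above exist and that the data path resolves onto the HDFS NFS mount:
Example
ls -ld /tmp/tct_csv_out/temp   # temporary directory for script and compression files
df -h /tmp/tct_csv_out/data    # should show the NFS mount of your Hadoop cluster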
Parameter | Description |
---|---|
| The source SAP HANA database connection as defined in transaction |
| The database schema containing your |
|
|
|
|
|
|
|
|
| The logical data directory created in the previous example using the |
| The logical temporary directory created in the previous example using the |
| Indicate whether partitioning is enabled under pattern |
| The number of days a record is kept in memory, after which it will be copied to Hadoop. |
| The number of table rows written into the internal table during one operation (to reduce the risk of running out of memory). The value you enter here determines the size of the package. For example, an entry of 3 means that the table is copied 3 rows at a time. |
| A database logical unit of work (LUW) is an inseparable sequence of database operations that ends with a database commit. Specify the size (in table rows) of this logical unit of work. It should be greater than or equal to the package size. |
| Parallel processing is implemented in the report to enable background mode. Specify the number of processes that should run in parallel. |