hana_ml.artifacts package

The artifacts package consists of the following sections:

The hana_ml.artifacts.generators.abap and hana_ml.artifacts.deployers.amdp modules provide methods that help you embed machine learning algorithms of SAP HANA (e.g. the Predictive Analytics Library (PAL)) via the Python API into SAP S/4HANA business applications with the Intelligent Scenario Lifecycle Management (ISLM) framework. The ISLM framework is integrated into the ABAP layer (SAP Basis) so that intelligent scenarios from the layers above in the SAP S/4HANA stack can fully utilize the framework. Specifically, a custom ABAP Managed Database Procedure (AMDP) class for a machine learning model needs to be created, which can then be consumed by ISLM.

Suppose you have a machine learning model developed in hana-ml and decide to embed it into an SAP S/4HANA business application. First, you create an AMDPGenerator to produce a corresponding AMDP class, and then an AMDPDeployer to upload that class into the ISLM framework by creating an intelligent scenario. With ISLM, you can then perform operations such as training, activating, and monitoring of the intelligent scenario for a specific SAP S/4HANA system.

SAP S/4HANA System Requirement: S/4HANA 2020 FPS1 or higher.

Supported hana-ml algorithm for AMDP: UnifiedClassification.

AMDP Examples

Let's assume we have a connection to SAP HANA called connection_context and a basic Random Decision Trees Classifier 'rfc' with training data 'diabetes_train_valid' and prediction data 'diabetes_test'. Remember that every model has to contain fit and predict logic; therefore, the methods fit() and predict() have to be called at least once.

>>> rfc_params = dict(n_estimators=5, split_threshold=0, max_depth=10)
>>> rfc = UnifiedClassification(func="randomdecisiontree", **rfc_params)
>>> rfc.fit(diabetes_train_valid,
...         key='ID',
...         label='CLASS',
...         categorical_variable=['CLASS'],
...         partition_method='stratified',
...         stratified_column='CLASS')
>>> rfc.predict(diabetes_test.drop(cols=['CLASS']), key="ID")

Then, generate the ABAP Managed Database Procedure (AMDP) artifact by creating an AMDPGenerator:

>>> generator = AMDPGenerator(project_name="PIMA_DIAB", version="1", connection_context=connection_context, outputdir="out/")
>>> generator.generate()

The generate() call creates a .abap file on your local machine based on the work done previously. This .abap file contains the SQL logic, wrapped in AMDPs, that you created by interacting with the hana-ml package.
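As a quick sanity check, you can list the generated artifacts under the chosen output directory. This is a minimal sketch using only the standard library; "out" matches the outputdir passed to AMDPGenerator in the example above, and the exact subfolder layout may vary between hana-ml versions:

```python
from pathlib import Path

# List the generated AMDP artifacts under the output directory.
# "out" is the outputdir used in the example above; the subfolder
# layout below it is an implementation detail of the generator.
out_dir = Path("out")
if out_dir.exists():
    for abap_file in sorted(out_dir.rglob("*.abap")):
        print(abap_file)
```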

You can now take the generated code in the 'outputdir' and deploy it to SAP S/4HANA with ISLM. All you need to provide is the .abap file and some basic parameters for the ISLM registration.

>>> deployer = AMDPDeployer(backend_url=backend_url,
...                         backend_auth=(backend_user,
...                                       backend_password),
...                         frontend_url=frontend_url,
...                         frontend_auth=(frontend_user,
...                                        frontend_password))
>>> guid = deployer.deploy(fp="XXX.abap",
...                        model_name="MODEL_01",
...                        catalog="$TMP",
...                        scenario_name="DEMO_CUSTOM01",
...                        scenario_description="Hello S/4 demo!",
...                        scenario_type="CLASSIFICATION",
...                        force_overwrite=True)

After the deployment is completed, you can see an intelligent scenario in the 'Intelligent Scenarios' Fiori app of the ISLM framework, under the name specified during the deployment step.

hana_ml.artifacts.deployers.amdp

This module provides AMDP related functionality.

The following function and class are available:

hana_ml.artifacts.deployers.amdp.gen_pass_key(url, user, passwd)

This function encrypts the user name and password and returns the key for future access.

Parameters
url : str

The URL of the backend/frontend.

user : str

User name.

passwd : str

Password.

Returns
pass_key
class hana_ml.artifacts.deployers.amdp.AMDPDeployer(backend_url, backend_auth, frontend_url, frontend_auth, backend_key=None, frontend_key=None)

Bases: object

This class provides AMDP deployer related functionality. After you have used an AMDPGenerator to establish a corresponding AMDP class, you can create an AMDPDeployer to upload that class into the ISLM framework by creating an intelligent scenario.

Parameters
backend_url : str

The URL of the backend.

backend_auth : tuple

The authentication information of the backend, which contains the user name and password.

frontend_url : str

The URL of the frontend.

frontend_auth : tuple

The authentication information of the frontend, which contains the user name and password.

backend_key : bytes, optional

If a backend key has been generated, it can be used instead of the password.

Defaults to None.

frontend_key : bytes, optional

If a frontend key has been generated, it can be used instead of the password.

Defaults to None.

Examples

After you use an AMDPGenerator to generate a .abap file, you can take the generated code in the 'outputdir' and deploy it to SAP S/4HANA (or, for that matter, any ABAP stack with ISLM). All you need to provide is the .abap file and some basic parameters for the ISLM registration.

>>> deployer = AMDPDeployer(backend_url=backend_url,
...                         backend_auth=(backend_user,
...                                       backend_password),
...                         frontend_url=frontend_url,
...                         frontend_auth=(frontend_user,
...                                        frontend_password))
>>> guid = deployer.deploy(fp="XXX.abap",
...                        model_name="MODEL_01",
...                        catalog="$TMP",
...                        scenario_name="DEMO_CUSTOM01",
...                        scenario_description="Hello S/4 demo!",
...                        scenario_type="CLASSIFICATION",
...                        force_overwrite=True)

After the deployment is completed, you can see an intelligent scenario in the 'Intelligent Scenarios' Fiori app of the ISLM framework, under the name specified during the deployment step.

Methods

deploy(self, fp, model_name, catalog, ...[, ...])

The deploy method deploys an AMDP class into SAP S/4HANA with Intelligent Scenario Lifecycle Management (ISLM).

deploy_class(self, class_name, abap_class_code)

Deploy the class.

format(self, abap_class_code, master_system)

Format from AMDP session.

get_is_information_from_islm(self, ...)

Get Intelligent Scenario Lifecycle Management (ISLM) information.

register_islm(self, class_name, model_name, ...)

Register in Intelligent Scenario Lifecycle Management (ISLM).

deploy(self, fp, model_name, catalog, scenario_name, scenario_type, class_description=None, scenario_description=None, force_overwrite=False, master_system='ER9', transport_request='$TMP', sap_client='000')

The deploy method deploys an AMDP class into SAP S/4HANA with Intelligent Scenario Lifecycle Management (ISLM).

Parameters
fp : str

Name of the .abap file to be opened.

model_name : str

Name of the model.

catalog : str

Name of the catalog.

scenario_name : str

Name of the intelligent scenario.

scenario_type : str

Type of the intelligent scenario.

class_description : str, optional

Description of the class.

Defaults to None.

scenario_description : str, optional

Description of the intelligent scenario.

Defaults to None.

force_overwrite : bool, optional

Whether to overwrite the class if it already exists.

Defaults to False.

master_system : str, optional

Name of the master system.

Defaults to "ER9".

transport_request : str, optional

Name of the package.

Defaults to '$TMP'.

sap_client : str, optional

The SAP client.

Defaults to '000'.

Returns
GUID (Globally Unique Identifier).

Examples

Create an AMDPDeployer object:

>>> deployer = AMDPDeployer(backend_url=backend_url,
...                         backend_auth=(backend_user,
...                                       backend_password),
...                         frontend_url=frontend_url,
...                         frontend_auth=(frontend_user,
...                                        frontend_password))

Deploy:

>>> guid = deployer.deploy(fp="XXX.abap",
...                        model_name=model_name,
...                        catalog="XXX",
...                        scenario_name=scenario_name,
...                        scenario_description=scenario_description,
...                        scenario_type=scenario_type,
...                        force_overwrite=True)
deploy_class(self, class_name, abap_class_code, class_description=None, master_system='ER9', force_overwrite=False, transport_request='$TMP')

Deploy the class.

Note that all request data in this class is kept in XML, because that allows for easier development in combination with the SAP ABAP Development Tools (ADT, Eclipse): in the communication log that can be viewed in the IDE, everything is done in XML, which translates easily to this method.
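To illustrate the kind of XML request body involved, such a payload can be assembled with the standard library. The element names below are hypothetical placeholders, not the actual ADT protocol schema:

```python
import xml.etree.ElementTree as ET

# Illustrative only: the real ADT protocol defines its own namespaces and
# element names. This sketch merely shows assembling an XML request body
# of the kind deploy_class exchanges with the backend.
root = ET.Element("classCreation")                  # hypothetical element name
ET.SubElement(root, "className").text = "ZCL_DEMO"  # hypothetical class name
ET.SubElement(root, "description").text = "Generated AMDP class"
payload = ET.tostring(root, encoding="unicode")
print(payload)
```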

Parameters
class_name : str

Name of the class.

abap_class_code : str

Code of the SAP ABAP class.

class_description : str, optional

Description of the class.

Defaults to None.

master_system : str, optional

Name of the master system.

Defaults to "ER9".

force_overwrite : bool, optional

Whether to overwrite the class if it already exists.

Defaults to False.

transport_request : str, optional

Name of the package.

Defaults to '$TMP'.

register_islm(self, class_name, model_name, catalog, scenario_name, scenario_type, scenario_description, sap_client)

Register in Intelligent Scenario Lifecycle Management (ISLM).

Parameters
class_name : str

Name of the class.

model_name : str

Name of the model.

catalog : str

Name of the catalog.

scenario_name : str

Name of the intelligent scenario.

scenario_type : str

Type of the intelligent scenario.

scenario_description : str

Description of the intelligent scenario.

sap_client : str

The SAP client.

get_is_information_from_islm(self, scenario_name, sap_client)

Get Intelligent Scenario Lifecycle Management (ISLM) information.

Parameters
scenario_name : str

Name of the intelligent scenario.

sap_client : str

The SAP client.

format(self, abap_class_code, master_system)

Format from AMDP session.

Parameters
abap_class_code : str

Code of the SAP ABAP class.

master_system : str

Name of the master system.

hana_ml.artifacts.generators.abap

This module handles the generation of all AMDP (ABAP Managed Database Procedure) related artifacts based on the provided consumption layer elements. Currently, this is experimental code only.

The following class is available:

class hana_ml.artifacts.generators.abap.AMDPGenerator(project_name, version, connection_context, outputdir)

Bases: object

This class provides AMDP (ABAP Managed Database Procedure) specific generation functionality. It also extends the configuration to cater for AMDP-specific generation settings.

Parameters
project_name : str

Name of the project.

version : str

The version.

connection_context : ConnectionContext

The connection to the SAP HANA system.

outputdir : str

The output directory.

Examples

Let's assume we have a connection to SAP HANA called connection_context and a basic Random Decision Trees Classifier 'rfc' with training data 'diabetes_train_valid' and prediction data 'diabetes_test'. Remember that every model has to contain fit and predict logic; therefore, the methods fit() and predict() have to be called at least once.

>>> rfc_params = dict(n_estimators=5, split_threshold=0, max_depth=10)
>>> rfc = UnifiedClassification(func="randomdecisiontree", **rfc_params)
>>> rfc.fit(diabetes_train_valid,
...         key='ID',
...         label='CLASS',
...         categorical_variable=['CLASS'],
...         partition_method='stratified',
...         stratified_column='CLASS')
>>> rfc.predict(diabetes_test.drop(cols=['CLASS']), key="ID")

Then, generate the ABAP Managed Database Procedure (AMDP) artifact by creating an AMDPGenerator:

>>> generator = AMDPGenerator(project_name="PIMA_DIAB", version="1", connection_context=connection_context, outputdir="out/")
>>> generator.generate()

The generate() call creates a .abap file on your local machine based on the work done previously. This .abap file contains the SQL logic, wrapped in AMDPs, that you created by interacting with the hana-ml package.

Methods

generate(self[, training_dataset, ...])

Generate artifacts by first building up the required folder structure for artifacts storage and then generating different required files.

generate(self, training_dataset='', apply_dataset='', no_reason_features=3)

Generate artifacts by first building up the required folder structure for artifacts storage and then generating different required files.

Parameters
training_dataset : str, optional

Name of the training dataset.

Defaults to ''.

apply_dataset : str, optional

Name of the apply dataset.

Defaults to ''.

no_reason_features : int, optional

The number of features that contribute most to the classification decision. This reason-code information is displayed during the prediction phase.

Defaults to 3.
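Conceptually, no_reason_features=3 keeps the three features that contribute most to a prediction. A minimal sketch of that selection, with made-up feature names and contribution values:

```python
# Hypothetical per-prediction feature contributions (reason codes).
contributions = {"BMI": 0.42, "AGE": 0.07, "GLUCOSE": 0.31,
                 "INSULIN": 0.12, "PEDIGREE": 0.08}

# Keep the no_reason_features most influential features, largest first.
no_reason_features = 3
top = sorted(contributions.items(), key=lambda kv: kv[1],
             reverse=True)[:no_reason_features]
print(top)  # [('BMI', 0.42), ('GLUCOSE', 0.31), ('INSULIN', 0.12)]
```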

hana_ml.artifacts.generators.hana

This module handles the generation of all HANA design-time artifacts based on the provided base and consumption layer elements. These artifacts can be incorporated into development projects in SAP Web IDE for SAP HANA or SAP Business Application Studio and deployed via the HANA Deployment Infrastructure (HDI) into an SAP HANA system.

The following class is available:

class hana_ml.artifacts.generators.hana.HanaGenerator(project_name, version, grant_service, connection_context, outputdir, generation_merge_type=1, generation_group_type=12, sda_grant_service=None, remote_source='')

Bases: object

This class provides HANA-specific generation functionality. It also extends the configuration file to cater for HANA-specific generation settings.

Parameters
project_name : str

The name of the project.

version : str

The version name.

grant_service : str

The grant service.

connection_context : ConnectionContext

The connection to the SAP HANA system.

outputdir : str

The output directory.

generation_merge_type : int, optional

The merge type for generation.

Defaults to 1.

generation_group_type : int, optional

The group type for generation.

Defaults to 12.

sda_grant_service : str, optional

The grant service of Smart Data Access (SDA).

Defaults to None.

remote_source : str, optional

The name of the remote source.

Defaults to ''.

Examples

Let's assume we have a connection to SAP HANA called connection_context and a basic Random Decision Trees Classifier 'rfc' with training data 'diabetes_train_valid' and prediction data 'diabetes_test'.

>>> rfc_params = dict(n_estimators=5, split_threshold=0, max_depth=10)
>>> rfc = UnifiedClassification(func="randomdecisiontree", **rfc_params)
>>> rfc.fit(diabetes_train_valid,
...         key='ID',
...         label='CLASS',
...         categorical_variable=['CLASS'],
...         partition_method='stratified',
...         stratified_column='CLASS')
>>> rfc.predict(diabetes_test.drop(cols=['CLASS']), key="ID")

Then, we could generate HDI artifacts:

>>> hg = hana.HanaGenerator(project_name="test", version='1', grant_service='', connection_context=connection_context, outputdir="./hana_out")
>>> hg.generate_artifacts()

This returns the output path of the root folder where the HANA-related artifacts are stored:

>>> './hana_out\test\hana'
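The returned root path is simply outputdir, the project name, and a hana subfolder joined together (the backslashes above reflect a run on Windows). As a sketch, the expected location can be reconstructed like this:

```python
import os

# Reconstruct the expected artifact root for the example above:
# outputdir="./hana_out", project_name="test". The path separator
# depends on the operating system.
root = os.path.join("./hana_out", "test", "hana")
print(root)
```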

Methods

generate_artifacts(self[, base_layer, ...])

Generate the artifacts by first building up the required folder structure for artifacts storage and then generating the different required files.

generate_artifacts(self, base_layer=True, consumption_layer=True, sda_data_source_mapping_only=False)

Generate the artifacts by first building up the required folder structure for artifacts storage and then generating the different required files. Be aware that this method only generates the generic files and offloads the generation of artifacts where traversal of base and consumption layer elements is required.

Parameters
base_layerbool, optional

The base layer is the low level procedures that will be generated.

Defaults to True.

consumption_layerbool, optional

The consumption layer is the layer that will consume the base layer artifacts.

Defaults to True.

sda_data_source_mapping_onlybool, optional

In case data source mapping is provided, you can force to only do this for the Smart Data Access (SDA) HANA deployment infrastructure (HDI) container.

Defaults to False.

Returns
str

The output path of the root folder where the HANA-related artifacts are stored.