MLPRegressor

class hana_ml.algorithms.pal.neural_network.MLPRegressor(activation=None, activation_options=None, output_activation=None, output_activation_options=None, hidden_layer_size=None, hidden_layer_size_options=None, max_iter=None, training_style='stochastic', learning_rate=None, momentum=None, batch_size=None, normalization=None, weight_init=None, categorical_variable=None, resampling_method=None, evaluation_metric=None, fold_num=None, repeat_times=None, search_strategy=None, random_search_times=None, random_state=None, timeout=None, progress_indicator_id=None, param_values=None, param_range=None, thread_ratio=None, reduction_rate=None, aggressive_elimination=None)

Multi-layer perceptron (MLP) Regressor.

Parameters
activationstr

Specifies the activation function for the hidden layer.

Valid activation functions include:
  • 'tanh',

  • 'linear',

  • 'sigmoid_asymmetric',

  • 'sigmoid_symmetric',

  • 'gaussian_asymmetric',

  • 'gaussian_symmetric',

  • 'elliot_asymmetric',

  • 'elliot_symmetric',

  • 'sin_asymmetric',

  • 'sin_symmetric',

  • 'cos_asymmetric',

  • 'cos_symmetric',

  • 'relu'

Mandatory if activation_options is not provided; otherwise, it must not be specified.

activation_optionslist of str, optional

A list of activation functions for parameter selection.

See activation for the full set of valid activation functions.

output_activationstr

Specifies the activation function for the output layer.

Valid choices are the same as those listed in activation.

Mandatory if output_activation_options is not provided; otherwise, it must not be specified.

output_activation_optionslist of str, conditionally mandatory

A list of activation functions for the output layer for parameter selection.

See activation for the full set of valid activation functions for the output layer.

hidden_layer_sizelist of int or tuple of int

Sizes of all hidden layers.

Mandatory if hidden_layer_size_options is not provided; otherwise, it must not be specified.

hidden_layer_size_optionslist of tuples, optional

A list of optional sizes of all hidden layers for parameter selection.

max_iterint, optional

Maximum number of iterations.

Defaults to 100.

training_style{'batch', 'stochastic'}, optional

Specifies the training style.

Defaults to 'stochastic'.

learning_ratefloat, optional

Specifies the learning rate.

Mandatory and valid only when training_style is 'stochastic'.

momentumfloat, optional

Specifies the momentum for gradient descent update.

Mandatory and valid only when training_style is 'stochastic'.

batch_sizeint, optional

Specifies the size of mini batch.

Valid only when training_style is 'stochastic'.

Defaults to 1.

normalization{'no', 'z-transform', 'scalar'}, optional

Specifies the normalization type applied to the input data.

Defaults to 'no'.

weight_init{'all-zeros', 'normal', 'uniform', 'variance-scale-normal', 'variance-scale-uniform'}, optional

Specifies the weight initial value.

Defaults to 'all-zeros'.

categorical_variablestr or list of str, optional

Specifies the column name(s) in the data table to be treated as categorical.

Valid only for columns of INTEGER type.

thread_ratiofloat, optional

Controls the proportion of available threads to use for training.

The value range is from 0 to 1, where 0 indicates a single thread, and 1 indicates up to all available threads.

Values between 0 and 1 will use that percentage of available threads.

Values outside this range tell PAL to heuristically determine the number of threads to use.

Defaults to 0.
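The thread_ratio semantics above can be sketched conceptually (resolve_thread_count is a hypothetical helper, not part of hana_ml; the heuristic for out-of-range values is internal to PAL):

```python
def resolve_thread_count(thread_ratio, available):
    """Conceptual mapping of thread_ratio to a thread count.

    0 -> single thread, 1 -> all available threads,
    values in (0, 1) -> that fraction of available threads,
    out-of-range values -> left to PAL's internal heuristic (None here).
    """
    if thread_ratio is None or not (0 <= thread_ratio <= 1):
        return None  # PAL decides heuristically
    if thread_ratio == 0:
        return 1
    return max(1, round(thread_ratio * available))
```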

resampling_methodstr, optional

Specifies the resampling method for model evaluation or parameter selection. Valid options are listed as follows: 'cv', 'bootstrap', 'cv_sha', 'bootstrap_sha', 'cv_hyperband', 'bootstrap_hyperband'.

If not specified, neither model evaluation nor parameter selection is triggered.

Note

Resampling methods with suffix 'sha' or 'hyperband' are for parameter selection only, not for model evaluation.

evaluation_metric{'rmse'}, optional

Specifies the evaluation metric for model evaluation or parameter selection. Must be specified together with resampling_method to activate model evaluation or parameter selection.

No default value.

fold_numint, optional

Specifies the fold number for the cross-validation.

Mandatory and valid only when resampling_method is specified as one of the following: 'cv', 'cv_sha', 'cv_hyperband'.

repeat_timesint, optional

Specifies the number of repeat times for resampling.

Defaults to 1.

search_strategy{'grid', 'random'}, optional

Specifies the method for parameter selection.

  • if resampling_method is specified as 'cv_sha' or 'bootstrap_sha', then this parameter is mandatory.

  • if resampling_method is specified as 'cv_hyperband' or 'bootstrap_hyperband', then this parameter defaults to 'random' and cannot be changed.

  • otherwise this parameter has no default value, and parameter selection will not be activated if it is not specified.

random_search_timesint, optional

Specifies the number of times to randomly select candidate parameters.

Mandatory and valid only when search_strategy is set to 'random'.

random_stateint, optional

Specifies the seed for random generation.

When 0 is specified, system time is used.

Defaults to 0.

timeoutint, optional

Specifies maximum running time for model evaluation/parameter selection, in seconds.

No timeout when 0 is specified.

Defaults to 0.

progress_indicator_idstr, optional

Sets the ID of the progress indicator for model evaluation/parameter selection.

If not provided, no progress indicator is activated.

param_valuesdict or list of tuples, optional

Specifies the values of following parameters for model parameter selection:

learning_rate, momentum, batch_size.

If input is list of tuples, then each tuple must contain exactly two elements:

  • 1st element is the parameter name(str type),

  • 2nd element is a list of valid values for that parameter.

Otherwise, if the input is a dict, each key must be a parameter name and the corresponding value a list of valid values for that parameter.

A simple example for illustration:

[('learning_rate', [0.1, 0.2, 0.5]), ('momentum', [0.2, 0.6])],

or

dict(learning_rate=[0.1, 0.2, 0.5], momentum=[0.2, 0.6]).

Valid only when resampling_method and search_strategy are both specified, and training_style is 'stochastic'.
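Since both shapes describe the same search space, their equivalence can be shown with a minimal sketch (normalize_param_values is a hypothetical helper, not part of hana_ml):

```python
def normalize_param_values(param_values):
    """Normalize param_values to dict form.

    Accepts either a dict mapping parameter name -> list of candidate
    values, or a list of (name, values) tuples, as described above.
    """
    if isinstance(param_values, dict):
        return dict(param_values)
    return {name: list(values) for name, values in param_values}

# Both forms from the example above normalize to the same search space.
as_list = [('learning_rate', [0.1, 0.2, 0.5]), ('momentum', [0.2, 0.6])]
as_dict = dict(learning_rate=[0.1, 0.2, 0.5], momentum=[0.2, 0.6])
```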

param_rangedict or list of tuple, optional

Sets the range of the following parameters for model parameter selection:

learning_rate, momentum, batch_size.

If the input is a list of tuples, then each tuple should contain exactly two elements:

  • 1st element is the parameter name(str type),

  • 2nd element is a list that specifies the range of that parameter as follows: first value is the start value, second value is the step, and third value is the end value. The step value can be omitted, and will be ignored, if search_strategy is set to 'random'.

Otherwise, if the input is a dict, each key should be a parameter name and the corresponding value should specify the range of that parameter.

Valid only when resampling_method and search_strategy are both specified, and training_style is 'stochastic'.
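The [start, step, end] convention for grid search can be illustrated with a small sketch (expand_range is a hypothetical helper, not part of hana_ml):

```python
def expand_range(start, step, end):
    """Expand a [start, step, end] range spec into grid candidates
    (endpoints inclusive). With search_strategy='random', the step is
    ignored and values are drawn from [start, end] instead."""
    values, v = [], start
    # small epsilon guards against float accumulation error
    while v <= end + 1e-9:
        values.append(round(v, 10))
        v += step
    return values
```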

reduction_ratefloat, optional

Specifies reduction rate in SHA or Hyperband method.

In each round, the number of remaining parameter candidates is divided by the value of this parameter; valid values must therefore be greater than 1.0.

Valid only when resampling_method takes one of the following values: 'cv_sha', 'bootstrap_sha', 'cv_hyperband', 'bootstrap_hyperband'.

Defaults to 3.0.
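The effect of reduction_rate on successive rounds can be sketched as follows (a conceptual illustration of SHA's candidate elimination, not hana_ml code):

```python
import math

def sha_round_sizes(n_candidates, reduction_rate=3.0):
    """Candidate counts per successive-halving round: each round keeps
    roughly 1/reduction_rate of the surviving candidates."""
    sizes = [n_candidates]
    while sizes[-1] > 1:
        sizes.append(max(1, math.ceil(sizes[-1] / reduction_rate)))
    return sizes
```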

aggressive_eliminationbool, optional

Specifies whether to apply aggressive elimination while using SHA method.

Aggressive elimination applies when the number of parameter candidates does not match the data size: if many candidates remain to be searched once the data size reaches its upper limit, the lower bound of the data size is reused several times in the early rounds to reduce the number of candidates first.

Valid only when resampling_method is 'cv_sha' or 'bootstrap_sha'.

Defaults to False.

Examples

Training data:

>>> df.collect()
   V000  V001 V002  V003  T001  T002  T003
0     1  1.71   AC     0  12.7   2.8  3.06
1    10  1.78   CA     5  12.1   8.0  2.65
2    17  2.36   AA     6  10.1   2.8  3.24
3    12  3.15   AA     2  28.1   5.6  2.24
4     7  1.05   CA     3  19.8   7.1  1.98
5     6  1.50   CA     2  23.2   4.9  2.12
6     9  1.97   CA     6  24.5   4.2  1.05
7     5  1.26   AA     1  13.6   5.1  2.78
8    12  2.13   AC     4  13.2   1.9  1.34
9    18  1.87   AC     6  25.5   3.6  2.14

Training the model:

>>> mlpr = MLPRegressor(hidden_layer_size=(10,5),
...                     activation='sin_asymmetric',
...                     output_activation='sin_asymmetric',
...                     learning_rate=0.001, momentum=0.00001,
...                     training_style='batch',
...                     max_iter=10000, normalization='z-transform',
...                     weight_init='normal', thread_ratio=0.3)
>>> mlpr.fit(data=df, label=['T001', 'T002', 'T003'])

The training result may differ from the output below due to randomness in model initialization.

>>> mlpr.model_.collect()
   ROW_INDEX                                      MODEL_CONTENT
0          1  {"CurrentVersion":"1.0","DataDictionary":[{"da...
1          2  3782583596893},{"from":10,"weight":-0.16532599...
>>> mlpr.train_log_.collect()
     ITERATION       ERROR
0            1   34.525655
1            2   82.656301
2            3   67.289241
3            4  162.768062
4            5   38.988242
5            6  142.239468
6            7   34.467742
7            8   31.050946
8            9   30.863581
9           10   30.078204
10          11   26.671436
11          12   28.078312
12          13   27.243226
13          14   26.916686
14          15   26.782915
15          16   26.724266
16          17   26.697108
17          18   26.684084
18          19   26.677713
19          20   26.674563
20          21   26.672997
21          22   26.672216
22          23   26.671826
23          24   26.671631
24          25   26.671533
25          26   26.671485
26          27   26.671460
27          28   26.671448
28          29   26.671442
29          30   26.671439
..         ...         ...
705        706   11.891081
706        707   11.891081
707        708   11.891081
708        709   11.891081
709        710   11.891081
710        711   11.891081
711        712   11.891081
712        713   11.891081
713        714   11.891081
714        715   11.891081
715        716   11.891081
716        717   11.891081
717        718   11.891081
718        719   11.891081
719        720   11.891081
720        721   11.891081
721        722   11.891081
722        723   11.891081
723        724   11.891081
724        725   11.891081
725        726   11.891081
726        727   11.891081
727        728   11.891081
728        729   11.891081
729        730   11.891081
730        731   11.891081
731        732   11.891081
732        733   11.891081
733        734   11.891081
734        735   11.891081

[735 rows x 2 columns]

Data for prediction:

>>> pred_df.collect()
   ID  V000  V001 V002  V003
0   1     1  1.71   AC     0
1   2    10  1.78   CA     5
2   3    17  2.36   AA     6

Prediction:

>>> res = mlpr.predict(data=pred_df, key='ID')

The prediction result may differ from the output below due to model randomness.

>>> res.collect()
   ID TARGET      VALUE
0   1   T001  12.700012
1   1   T002   2.799133
2   1   T003   2.190000
3   2   T001  12.099740
4   2   T002   6.100000
5   2   T003   2.190000
6   3   T001  10.099961
7   3   T002   2.799659
8   3   T003   2.190000

Attributes
model_DataFrame

Model content.

train_log_DataFrame

Provides mean squared error between predicted values and target values for each iteration.

stats_DataFrame

Names and values of statistics.

optim_param_DataFrame

Provides optimal parameters selected.

Available only when parameter selection is triggered.

Methods

create_model_state([model, function, ...])

Create PAL model state.

delete_model_state([state])

Delete PAL model state.

fit(data[, key, features, label, ...])

Fit the model when given training dataset.

predict(data[, key, features, thread_ratio])

Predict using the multi-layer perceptron model.

score(data, key[, features, label, thread_ratio])

Returns the coefficient of determination R^2 of the prediction.

set_model_state(state)

Set the model state by state information.

create_model_state(model=None, function=None, pal_funcname='PAL_MULTILAYER_PERCEPTRON', state_description=None, force=False)

Create PAL model state.

Parameters
modelDataFrame, optional

Specify the model for AFL state.

Defaults to self.model_.

functionstr, optional

Specify the function in the unified API.

A placeholder parameter, not effective for Multilayer Perceptron.

pal_funcnameint or str, optional

PAL function name.

Defaults to 'PAL_MULTILAYER_PERCEPTRON'.

state_descriptionstr, optional

Description of the state as model container.

Defaults to None.

forcebool, optional

If True, the existing state will be deleted first.

Defaults to False.

delete_model_state(state=None)

Delete PAL model state.

Parameters
stateDataFrame, optional

Specifies the state.

Defaults to self.state.

property fit_hdbprocedure

Returns the generated hdbprocedure for fit.

property predict_hdbprocedure

Returns the generated hdbprocedure for predict.

set_model_state(state)

Set the model state by state information.

Parameters
state: DataFrame or dict

If state is DataFrame, it has the following structure:

  • NAME: VARCHAR(100), it must contain STATE_ID, HINT, HOST and PORT.

  • VALUE: VARCHAR(1000), the values according to NAME.

If state is dict, the key must have STATE_ID, HINT, HOST and PORT.
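A minimal sketch of the dict form (the key values are illustrative placeholders, not real connection data):

```python
# The dict form of `state` must provide these four keys; the values
# below are hypothetical placeholders.
state = {
    'STATE_ID': '<state-uuid>',
    'HINT': '<routing-hint>',
    'HOST': '<hana-host>',
    'PORT': '<hana-port>',
}
```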

fit(data, key=None, features=None, label=None, categorical_variable=None)

Fit the model when given training dataset.

Parameters
dataDataFrame

DataFrame containing the data.

keystr, optional

Name of the ID column.

If key is not provided, then:

  • if data is indexed by a single column, then key defaults to that index column

  • otherwise, it is assumed that data contains no ID column

featureslist of str, optional

Names of the feature columns.

If features is not provided, it defaults to all the non-ID and non-label columns.

labelstr or list of str, optional

Name of the label column, or list of names of multiple label columns.

If label is not provided, it defaults to the last column.

categorical_variablestr or list of str, optional

Specifies INTEGER column(s) that should be treated as categorical.

Other INTEGER columns will be treated as continuous.
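The default resolution of key, features and label described above can be sketched as follows (resolve_columns is a hypothetical helper, not part of hana_ml):

```python
def resolve_columns(columns, key=None, features=None, label=None):
    """Sketch of fit()'s default column resolution: label defaults to
    the last column, features to all non-ID, non-label columns."""
    if label is None:
        label = columns[-1]
    labels = [label] if isinstance(label, str) else list(label)
    if features is None:
        features = [c for c in columns if c != key and c not in labels]
    return key, features, labels
```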

predict(data, key=None, features=None, thread_ratio=None)

Predict using the multi-layer perceptron model.

Parameters
dataDataFrame

DataFrame containing the data.

keystr, optional

Name of the ID column.

Mandatory if data is not indexed, or the index of data contains multiple columns.

Defaults to the single index column of data if not provided.

features : list of str, optional

Names of the feature columns.

If features is not provided, it defaults to all the non-ID columns.

thread_ratio : float, optional

Controls the proportion of available threads to be used for prediction.

The value range is from 0 to 1, where 0 indicates a single thread, and 1 indicates up to all available threads.

Values between 0 and 1 will use that percentage of available threads.

Values outside this range tell PAL to heuristically determine the number of threads to use.

Defaults to 0.

Returns
DataFrame

Predicted results, structured as follows:

  • ID column, with the same name and type as data's ID column.

  • TARGET, type NVARCHAR, target name.

  • VALUE, type DOUBLE, regression value.
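Since the result is in long format (one row per ID/target pair), a common follow-up is pivoting it to one row per ID; a plain-Python sketch based on the structure above (pivot_predictions is a hypothetical helper):

```python
def pivot_predictions(rows):
    """Pivot (ID, TARGET, VALUE) rows into {ID: {TARGET: VALUE}}."""
    wide = {}
    for id_, target, value in rows:
        wide.setdefault(id_, {})[target] = value
    return wide

# Illustrative rows mirroring the (ID, TARGET, VALUE) result layout.
rows = [(1, 'T001', 12.7), (1, 'T002', 2.8), (2, 'T001', 12.1)]
```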

score(data, key, features=None, label=None, thread_ratio=None)

Returns the coefficient of determination R^2 of the prediction.

Parameters
dataDataFrame

DataFrame containing the data.

keystr, optional

Name of the ID column.

Mandatory if data is not indexed, or the index of data contains multiple columns.

Defaults to the single index column of data if not provided.

featureslist of str, optional

Names of the feature columns.

If features is not provided, it defaults to all the non-ID and non-label columns.

labelstr or list of str, optional

Name of the label column, or list of names of multiple label columns.

If label is not provided, it defaults to the last column.

Returns
float

Returns the coefficient of determination R^2 of the prediction.
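As a reminder of the metric, for a single target column R^2 compares residual variance against total variance; a minimal reference computation (illustrative only; hana_ml computes the score server-side):

```python
def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean) ** 2 for y in y_true)
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))
    return 1 - ss_res / ss_tot
```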

Inherited Methods from PALBase

Besides those methods mentioned above, the MLPRegressor class also inherits methods from PALBase class, please refer to PAL Base for more details.