GRUAttention

class hana_ml.algorithms.pal.tsa.rnn.GRUAttention(learning_rate=None, batch_size=None, time_dim=None, hidden_dim=None, num_layers=None, max_iter=None, interval=None)

A Gated Recurrent Units (GRU) based encoder-decoder model with an attention mechanism for time series prediction.

Parameters:
learning_rate : float, optional

Learning rate for gradient descent.

Defaults to 0.05.

batch_size : int, optional

Number of pieces of data for training in one iteration.

Defaults to 32.

time_dim : int, optional

Specifies the number of time steps in a sequence that the GRU is trained on and then uses for time series prediction.

Its value must be smaller than the length of the input time series minus 1.

Defaults to 16.
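The constraint above can be sketched in plain Python (this helper is illustrative only, not part of hana_ml; the PAL procedure enforces the check server-side):

```python
# Illustrative check of the documented constraint: time_dim must be
# smaller than the length of the input time series minus 1.
def valid_time_dim(time_dim: int, series_length: int) -> bool:
    """Return True when time_dim < series_length - 1."""
    return time_dim < series_length - 1

# A series of 100 observations supports the default time_dim of 16,
# but a series of 17 observations does not (16 is not < 16).
print(valid_time_dim(16, 100))  # True
print(valid_time_dim(16, 17))   # False
```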

hidden_dim : int, optional

Number of hidden neurons within every GRU layer.

Defaults to 64.

num_layers : int, optional

Number of GRU layers in both the encoder and the decoder.

Defaults to 1.

max_iter : int, optional

Number of batches of data used to train the attention model.

Defaults to 1000.

interval : int, optional

Outputs the average loss once every interval iterations.

Defaults to 100.

Examples

Input dataframe df:

>>> df.head(3).collect()
    TIMESTAMP  SERIES
0           0    20.7
1           1    17.9
2           2    18.8

Create a GRUAttention model:

>>> from hana_ml.algorithms.pal.tsa import rnn
>>> att = rnn.GRUAttention(max_iter=1000,
                           learning_rate=0.01,
                           batch_size=32,
                           hidden_dim=128,
                           num_layers=1,
                           interval=1)

Perform fit on the given data:

>>> att.fit(df)

Perform predict on the fitted model:

>>> res = att.predict(df_predict)

Expected output:

>>> res.head(3).collect()
    ID                                      MODEL_CONTENT
0    0  {"PreprocessArgs1":[16.191999999999998],"Prepr...
1    1  2943487073086303,-0.14125998608870073,0.016476...
2    2  9670601,0.26983407416799118,-0.125378179522916...
Attributes:
loss_ : DataFrame

Loss values produced during training.

model_ : DataFrame

Model content.

Methods

build_report()

Generate time series report.

fit(data[, key, endog, exog])

Generates a GRUAttention model with given data and specified parameters.

generate_html_report([filename])

Display function.

generate_notebook_iframe_report()

Display function.

predict(data[, top_k_attributions, explain_mode])

Makes time series forecast based on the trained GRUAttention model.

fit(data, key=None, endog=None, exog=None)

Generates a GRUAttention model with given data and specified parameters.

Parameters:
data : DataFrame

Input data, which should contain at least 2 columns, described as follows:

  • Index (i.e. time-stamp) column, type INTEGER. The time-stamps do not need to be in order, but must be unique and evenly spaced.

  • Time-series values column, type INTEGER, DOUBLE or DECIMAL(p,s).

key : str, optional

Specifies the name of the index column in data.

If not provided, it defaults to:

  • 1st column of data if data is not indexed or indexed by multiple columns.

  • the index column of data if data is indexed by a single column.

endog : str, optional

Specifies the name of the endogenous variable, i.e. the time series values in data. The endog column can be of type INTEGER, DOUBLE, or DECIMAL(p,s).

Defaults to the 1st non-key column of data if not provided.

exog : str or ListOfStrings, optional

Specifies the name(s) of the exogenous variables. An exog column can be of type INTEGER, DOUBLE, DECIMAL(p,s), VARCHAR, or NVARCHAR.

Defaults to all columns in data except the key column and the endog column.
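The default resolution of exog described above can be sketched in plain Python (an illustrative helper, not a hana_ml function):

```python
# Illustrative sketch of the documented default: exog falls back to
# all columns of data except the key column and the endog column.
def resolve_exog(columns, key, endog):
    """Return the default exogenous columns given all column names."""
    return [c for c in columns if c not in (key, endog)]

# With a timestamp key and one series column, the two remaining
# columns become the exogenous variables by default.
cols = ['TIMESTAMP', 'SERIES', 'TEMP', 'HOLIDAY']
print(resolve_exog(cols, key='TIMESTAMP', endog='SERIES'))
# ['TEMP', 'HOLIDAY']
```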

Returns:
A fitted object of class "GRUAttention".
predict(data, top_k_attributions=None, explain_mode=None)

Makes time series forecast based on the trained GRUAttention model.

Parameters:
data : DataFrame

Data for prediction, structured as follows:

  • 1st column : Record IDs.

  • Other columns: Columns holding the time-series and external data values for all records, arranged in time order. The columns' data types need to be consistent with those of the time-series or external data.

Note

In this DataFrame, each row contains a piece of time-series data with external data for prediction, and the number of columns for time-series values should be equal to time_dim * (M-1), where M is the number of columns of the input data in the training phase.
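The width rule in the note above can be checked with plain arithmetic (this helper is illustrative, not part of hana_ml):

```python
# Illustrative sketch of the documented layout rule: each prediction
# row must hold time_dim * (M - 1) value columns, where M is the
# number of columns of the training data (key + endog + exog).
def prediction_row_width(time_dim: int, num_train_columns: int) -> int:
    """Number of value columns each prediction row must contain."""
    return time_dim * (num_train_columns - 1)

# Training data with 2 columns (TIMESTAMP key + SERIES endog, M = 2)
# and time_dim = 16 -> each prediction row holds 16 values.
print(prediction_row_width(16, 2))  # 16

# With 3 additional exogenous columns (M = 5), each row widens to 64.
print(prediction_row_width(16, 5))  # 64
```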

top_k_attributions : int, optional

Specifies the number of features with highest attributions to output.

  • If explain_mode is 'time-wise', this value needs to be smaller than the length of time series data for prediction;

  • If explain_mode is 'feature-wise', the value needs to be smaller than the number of exogenous variables.

Defaults to 0 (i.e. an empty reason code).
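The two limits on top_k_attributions listed above can be sketched as a plain-Python validator (illustrative only; the PAL procedure performs the actual validation):

```python
# Illustrative sketch of the documented limits on top_k_attributions:
# it must stay below the prediction series length in 'time-wise' mode,
# and below the number of exogenous variables in 'feature-wise' mode.
def valid_top_k(top_k: int, explain_mode: str,
                series_length: int, num_exog: int) -> bool:
    if explain_mode == 'time-wise':
        return top_k < series_length
    if explain_mode == 'feature-wise':
        return top_k < num_exog
    raise ValueError("explain_mode must be 'time-wise' or 'feature-wise'")

print(valid_top_k(5, 'time-wise', series_length=16, num_exog=3))     # True
print(valid_top_k(5, 'feature-wise', series_length=16, num_exog=3))  # False
```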

explain_mode : {'time-wise', 'feature-wise'}, optional

Specifies the mechanism for generating the reason code for inference results.

  • 'time-wise' : Use attention weights to assign time-dimension-wise contributions.

  • 'feature-wise' : Use Bayesian Structural Time Series (BSTS) to assign feature-wise contributions.

Defaults to 'time-wise'.

Returns:
DataFrame

The forecasted values, structured as follows:

  • ID, type INTEGER, representing record ID.

  • VALUE, type DOUBLE, containing the forecast value of the corresponding record.

  • REASON_CODE, type NCLOB, containing sorted SHAP values for test data at each time step/each feature component.

build_report()

Generate time series report.

generate_html_report(filename=None)

Display function.

generate_notebook_iframe_report()

Display function.

property fit_hdbprocedure

Returns the generated hdbprocedure for fit.

property predict_hdbprocedure

Returns the generated hdbprocedure for predict.

Inherited Methods from PALBase

Besides the methods mentioned above, the GRUAttention class also inherits methods from the PALBase class; please refer to PAL Base for more details.