Regression with Leonardo MLF Inference Client
This is an example graph that uses a Leonardo Machine Learning Foundation (MLF) service. To get familiar with the service, you can run the graph and use the terminal to iteratively send parameters.
- InferenceClient: The API client responsible for connecting to the service and carrying out the inferences.
- Terminal1: Terminal where the parameters for the regression are entered and the results are displayed.
- Terminal2: Terminal where the user can optionally alter the configuration of the InferenceClient operator.
- Python2Operator: Creates a message whose header contains the features used for the regression (a sketch follows this list).
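To make the data flow concrete, below is a minimal sketch of what a Python2Operator script along these lines could look like. It is an illustration, not the script shipped with the graph: the port names ("input", "output"), the attribute key ("features"), and the api.Message constructor are assumptions based on the generic Python operator API.

```python
# Hypothetical Python2Operator script (not the one shipped with the
# graph): parses a terminal line such as "1.5,2.0" and forwards the
# values in the message header for the InferenceClient.

def on_input(msg):
    # The terminal delivers the raw line in the message body.
    line = msg.body if isinstance(msg.body, str) else msg.body.decode("utf-8")
    features = [float(v) for v in line.strip().split(",")]

    # Assumed attribute key "features"; the inference client is expected
    # to read the regression inputs from the message header.
    api.send("output", api.Message("", {"features": features}))

api.set_port_callback("input", on_input)
```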
Configuration Parameters
Parameter | Type | Description
---|---|---
oauthClientId | string | Mandatory. Client ID used for the OAuth2 authentication.
oauthClientSecret | string | Mandatory. Client secret used for the OAuth2 authentication.
oauthTokenUrl | string | Mandatory. URL of the endpoint where the OAuth2 authentication is performed.
deploymentAPI | list | Mandatory. URL where the status of the server is checked and where the certificate and model host/port are acquired.
numberResults | integer | Mandatory. Maximum number of results to be returned.
modelName | string | Mandatory. Name of the model that will process the input.
signatureName | string | Mandatory. Server signature name defined when building the model.
inputTag | string | Mandatory.
inputShape | list | Mandatory. List of integers with the input dimensions.
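For orientation, a filled-in configuration could look like the JSON below. Every value is a placeholder: the URLs, model name, signature, input tag, and shape depend on your MLF tenant and on the model you deployed ("serving_default" is only the common TensorFlow Serving default, not a value prescribed by this graph).

```json
{
  "oauthClientId": "<client-id>",
  "oauthClientSecret": "<client-secret>",
  "oauthTokenUrl": "https://<tenant>.example.com/oauth/token",
  "deploymentAPI": ["https://<mlf-host>/<deployment-api-path>"],
  "numberResults": 3,
  "modelName": "<deployed-model-name>",
  "signatureName": "serving_default",
  "inputTag": "<input-tag>",
  "inputShape": [1, 2]
}
```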
Configure and Run the Graph
- Select the InferenceClient operator.
- Fill in the configuration according to the service and the desired model.
- Run the graph, right-click the Terminal1 operator, and select Open UI.
- Once in the terminal, input the parameters in the format `<number>,<number>` (for example, `1.5,2.0`).
- If desired, to change the configuration of the InferenceClient operator during execution:
  - Select the "Terminal2" operator and open its UI.
  - Input a line in the following format: `{"<configuration name>": <value>}`, for example `{"numberResults": 2}`.