Inference Client
Performs OAuth2 authentication with oauthClientId and oauthClientSecret against oauthTokenUrl and sends the inference request to the MLF model server. Using the obtained authentication token, a request built from signatureName, inputTag, inputShape and the input port data is sent to deploymentAPI. The response from the server contains at most numberResults results.
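The sketch below is meant only to make this sequence of steps concrete; it assumes details that go beyond this section, namely a standard OAuth2 client-credentials grant, a deployment API response exposing `host` and `port` fields, and a TensorFlow-Serving-style predict path.

```python
import requests

def fetch_token(oauth_token_url, client_id, client_secret):
    """Obtain an OAuth2 access token via the client-credentials grant (assumed grant type)."""
    resp = requests.post(
        oauth_token_url,
        data={"grant_type": "client_credentials"},
        auth=(client_id, client_secret),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def get_model_endpoint(deployment_api_url, token):
    """Query the deployment API for the model host/port (field names are hypothetical)."""
    resp = requests.get(
        deployment_api_url,
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    info = resp.json()
    return info["host"], info["port"]  # hypothetical field names

def predict(host, port, token, model_name, signature_name, input_tag, payload):
    """Send the inference request built from the configured names and the input data."""
    body = {"signature_name": signature_name, "inputs": {input_tag: payload}}
    resp = requests.post(
        f"https://{host}:{port}/v1/models/{model_name}:predict",  # hypothetical path
        json=body,
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.text  # forwarded to the output port as a string
```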
Configuration Parameters
Parameter | Type | Description
---|---|---
oauthClientId | string | Mandatory. Client ID used for the OAuth2 authentication.
oauthClientSecret | string | Mandatory. Client Secret used for the OAuth2 authentication.
oauthTokenUrl | string | Mandatory. URL where the OAuth2 authentication is performed.
deploymentAPI | list | Mandatory. URL where the status of the server is checked and the certificate and model host/port are acquired.
numberResults | integer | Maximum number of results to be returned.
modelName | string | Mandatory. Name of the model that will process the input.
signatureName | string | Server signature name defined when building the model.
inputTag | string | Mandatory.
inputShape | list | List of integers with the input dimensions.
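As a quick illustration, a configuration using these parameters could look like the following; every value below is a placeholder, not a real endpoint or model:

```python
# Illustrative configuration values only.
configuration = {
    "oauthClientId": "my-client-id",
    "oauthClientSecret": "my-client-secret",
    "oauthTokenUrl": "https://auth.example.com/oauth/token",
    "deploymentAPI": ["https://mlf.example.com/api/deployments/1234"],
    "numberResults": 5,
    "modelName": "image-classifier",
    "signatureName": "serving_default",
    "inputTag": "inputs",
    "inputShape": [1, 224, 224, 3],
}
```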
Input
Input | Type | Description
---|---|---
config | message | Input to dynamically change the configuration. Only the message headers are considered. If a header field has the same name as a configuration parameter, its value is used.
input | message | Defines two modes of operation, according to the presence of the header in the message: a dictionary whose keys are strings and whose values are single values or lists of values of type unicode, integer, or float.
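An input dictionary of the form described above could look like the following; the field names are purely illustrative:

```python
# Example input dictionary: string keys, with values or lists of values
# of type unicode, integer, or float (names are illustrative).
input_data = {
    "customer_id": 42,
    "temperature": 21.5,
    "recent_measurements": [3.1, 2.8, 3.4],
    "label": u"sensor-a",
}
```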
Output
Output | Type | Description
---|---|---
response | string | The response returned by the server as a string.
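The content of the string depends on the deployed model. If the server happens to respond with JSON (an assumption, not something this section guarantees), a downstream component could decode it like this:

```python
import json

# Hypothetical response string as it might arrive on the output port.
response_string = '{"predictions": [0.1, 0.7, 0.2]}'

# Parsing as JSON is only valid if the deployed model actually returns JSON.
results = json.loads(response_string)
print(results["predictions"])
```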