Modeling Guide

Classification with Leonardo MLF Inference Client

This is an example graph that uses a Leonardo Machine Learning Foundation (MLF) service. To get familiar with the service, you can run the graph and use the terminal to send parameters interactively.

The graph is composed of the following components:
  • InferenceClient: The API client responsible for connecting to the service and carrying out the inference requests.

  • Terminal1: Terminal where the parameters for the classification are set and the results are displayed.

  • Terminal2: Terminal where the user can optionally alter the configuration of the InferenceClient operator.

  • Read File: An operator that loads the file specified via "Terminal1".

  • Python2Operator: A Python script that creates a message containing only the body (the serialized data); a minimal sketch follows this list.
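
For illustration, here is a minimal sketch of such a script, assuming the standard Python operator API (api.set_port_callback, api.send, api.Message); the port names "input" and "output" are assumptions rather than fixed names:

    # Forward only the serialized body, dropping the attributes of the
    # incoming message.
    def on_input(msg):
        # msg.body holds the serialized file contents produced by Read File.
        api.send("output", api.Message(msg.body))

    api.set_port_callback("input", on_input)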

Prerequisites

  • InferenceClient operator configuration (an example sketch follows this list):

    Parameter          Type     Description
    oauthClientId      string   Mandatory. Client ID used for the OAuth2 authentication.
    oauthClientSecret  string   Mandatory. Client secret used for the OAuth2 authentication.
    oauthTokenUrl      string   Mandatory. URL of the address where the OAuth2 authentication will be performed.
    deploymentAPI      list     Mandatory. URL where the status of the server will be checked and the certificate and model host/port will be acquired.
    numberResults      integer  Mandatory. Maximum number of results to be returned.
    modelName          string   Mandatory. Name of the model that will process the input.
    signatureName      string   Mandatory. Server signature name defined when building the model.
    inputTag           string   Mandatory.
    inputShape         list     Mandatory. List of integers with the input dimensions.
  • A locally available file to be used as the input to the query. Example: a .jpg image.
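
For orientation, the following hypothetical configuration shows how these parameters fit together, written as a Python dictionary; every value below is a placeholder, not a real endpoint, credential, or model:

    inference_client_config = {
        "oauthClientId": "<your client id>",
        "oauthClientSecret": "<your client secret>",
        "oauthTokenUrl": "https://auth.example.com/oauth/token",
        "deploymentAPI": ["https://mlf.example.com/api/v2/deployments"],
        "numberResults": 3,
        "modelName": "image-classifier",
        "signatureName": "serving_default",
        "inputTag": "image",
        "inputShape": [1, 224, 224, 3],  # e.g. one 224x224 RGB image
    }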

Configure and Run the Graph

Follow the steps below to run the inference example from the Data Pipeline UI:
  1. Select the InferenceClient operator.
  2. Fill in the configurations according to the service and the desired model.
  3. Run the graph, then right-click the "Terminal1" operator and select Open UI.
  4. Once in the terminal, input the file path following the format: filepath
  5. Optionally, to change the InferenceClient operator configuration during execution:
    • Select the "Terminal2" operator.
    • Input a line with the following format:
      {"<configuration name>": <value>}
      Example: {"numberResults": 2}