Modeling Guide

Classify Video Stream with Leonardo MLF Inference Client

This example graph uses a Leonardo Machine Learning Foundation (MLF) service. To get familiar with the service, you can run the graph and use the terminal to send the parameters iteratively.

The graph is composed of the following components:
  • Capture Video: Reads the video into frames according to the defined frame rate.

  • Frame Diviser: Sends one image for every secondaryNumber images received, where secondaryNumber is defined in the operator configuration.

  • Motion JPG Stream: Runs a server that listens for requests at <host>:<port>/stream/view.mjpg. The host, port, and frame rate can be defined by the user; an example host and port are preconfigured.

  • MLFInference: The API client responsible for connecting to the service and carrying out the inferences. The result contains a base64-encoded string.

  • Terminal1: Terminal where the serialized image is displayed.

  • Terminal2: Terminal where the JSON from the inference is displayed.

  • JSON Adapter: Takes the JSON result from the inference and decodes the base64 string to UTF-8.

  • UI for Image Recognition: Displays the stream of images served by the Motion JPG Stream operator together with the recognition results received on its input port.
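The Frame Diviser and JSON Adapter behavior described above can be sketched in Python. This is only an illustration of the logic, not the operators' actual implementation; the function names and the "result" field name are assumptions:

```python
import base64
import json

def frame_diviser(frames, secondary_number):
    """Emit one frame for every `secondary_number` frames received,
    mirroring the Frame Diviser operator's behavior."""
    return [f for i, f in enumerate(frames) if i % secondary_number == 0]

def adapt_inference_json(raw_json):
    """Decode the base64-encoded string in the inference result to UTF-8,
    as the JSON Adapter does. The 'result' field name is an assumption."""
    payload = json.loads(raw_json)
    payload["result"] = base64.b64decode(payload["result"]).decode("utf-8")
    return payload
```

For example, with secondary_number = 3, the diviser forwards frames 0, 3, 6, 9, … and drops the rest, which reduces the load on the inference service.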

Prerequisites

  • MLFInference operator configured for a valid service (details in the operator documentation).

  • The connection (host and port) between the Motion JPG Stream and the UI for Image Recognition operators correctly configured.

  • A locally available file to be used as the query input, for example an .mp4 video.

Configure and Run the Graph

  1. After it is configured, the graph does not require further input.
  2. With the graph running, the terminals display the text information, and the UI for Image Recognition streams the video and displays the classification result.
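While the graph is running, the MJPEG endpoint can also be checked directly. A minimal sketch, assuming a hypothetical host and port (substitute the values configured in the Motion JPG Stream operator):

```python
import urllib.request

def stream_url(host, port):
    """Build the URL of the Motion JPG Stream endpoint."""
    return f"http://{host}:{port}/stream/view.mjpg"

def check_stream(host, port, timeout=5):
    """Open the MJPEG endpoint and confirm the server answers with a
    multipart stream (MJPEG is typically served as multipart/x-mixed-replace)."""
    with urllib.request.urlopen(stream_url(host, port), timeout=timeout) as resp:
        ctype = resp.headers.get("Content-Type", "")
        return resp.status == 200 and "multipart" in ctype

# Example (hypothetical host and port):
# check_stream("localhost", 8090)
```

The same URL can be opened in a browser to view the stream.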