Release Notes

4.10.2

Features

  • Added support for OpenAI o1 and o3-mini models.

  • Added support for AWS amazon--nova-micro, amazon--nova-lite, and amazon--nova-pro models.

  • Added support for asynchronous calls to Bedrock models.

  • Added support for asynchronous calls to Vertex models (a brief async sketch follows this list).

  • Added support for masked_grounding_input and for an allowlist on the grounding output in the orchestration service. See Configuration Options for details.

  • Deprecated input_filters and output_filters in the orchestration configuration; use ContentFiltering instead (a migration sketch follows this list). See Content Filtering for details.

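A minimal async sketch for the items above, assuming the init_llm helper from the SDK's langchain integration and langchain's standard ainvoke; the model names shown are examples, and the native async clients may expose a different interface.

    import asyncio

    # Assumption: init_llm resolves a deployment in the generative AI hub and
    # returns a langchain chat model for the given model name.
    from gen_ai_hub.proxy.langchain.init_models import init_llm

    async def main():
        # Example model names; adjust to the deployments available in your tenant.
        bedrock_llm = init_llm("amazon--nova-lite", max_tokens=256)
        vertex_llm = init_llm("gemini-1.5-flash", max_tokens=256)

        # ainvoke issues the requests asynchronously, so both calls can run concurrently.
        answers = await asyncio.gather(
            bedrock_llm.ainvoke("Summarize the benefits of async I/O in one sentence."),
            vertex_llm.ainvoke("Summarize the benefits of async I/O in one sentence."),
        )
        for answer in answers:
            print(answer.content)

    asyncio.run(main())
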
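For the input_filters/output_filters deprecation, a migration sketch assuming the ContentFiltering, InputFiltering, OutputFiltering, and AzureContentFilter classes from the orchestration models; verify the class and parameter names against Content Filtering.

    # Assumed module paths and class names; see Content Filtering for the
    # authoritative configuration options.
    from gen_ai_hub.orchestration.models.content_filtering import (
        ContentFiltering, InputFiltering, OutputFiltering,
    )
    from gen_ai_hub.orchestration.models.azure_content_filter import (
        AzureContentFilter, AzureThreshold,
    )

    # Instead of passing input_filters/output_filters separately, group the
    # filters in a single ContentFiltering object and attach it to the config.
    filtering = ContentFiltering(
        input_filtering=InputFiltering(
            filters=[AzureContentFilter(hate=AzureThreshold.ALLOW_SAFE,
                                        violence=AzureThreshold.ALLOW_SAFE)]
        ),
        output_filtering=OutputFiltering(
            filters=[AzureContentFilter(hate=AzureThreshold.ALLOW_SAFE)]
        ),
    )

    # config = OrchestrationConfig(template=..., llm=..., filtering=filtering)
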
4.4.3

Features

4.3.1

Features

  • Add support for prompt registry APIs. You can create, retrieve, and modify prompt templates from the prompt repository (a short client sketch follows this list). For example usage, see Prompt Registry.

  • Add support for grounding in orchestration. You can now configure the grounding module in the orchestration service.

  • Add support for structured output in the orchestration service by specifying the response format, for instance text or json (an example follows this list). See Overview of response_format Parameter Options for details.

  • Add autodiscovery for orchestration deployments. See Understanding Deployment Resolution for details.

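As an illustration of the prompt registry workflow, a sketch with assumed client and method names; the actual API surface is documented under Prompt Registry.

    # Class and method names below are assumptions for illustration only;
    # consult Prompt Registry for the documented client API.
    from gen_ai_hub.prompt_registry.client import PromptTemplateClient

    client = PromptTemplateClient()

    # Create (or update) a template in the prompt repository.
    created = client.create_prompt_template(
        scenario="travel",
        name="itinerary",
        version="0.0.1",
        prompt_template_spec={
            "template": [{"role": "user", "content": "Plan a trip to {{?city}}."}]
        },
    )

    # Retrieve templates later, e.g. by scenario and name.
    templates = client.get_prompt_templates(scenario="travel", name="itinerary")
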
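For structured output, a sketch assuming the response format can be set on the orchestration Template; the accepted values and the JSON-schema variant are described under Overview of response_format Parameter Options.

    # Assumption: response_format is a Template parameter; see "Overview of
    # response_format Parameter Options" for the exact values.
    from gen_ai_hub.orchestration.models.message import SystemMessage, UserMessage
    from gen_ai_hub.orchestration.models.template import Template

    template = Template(
        messages=[
            SystemMessage("You extract structured data and answer only in JSON."),
            UserMessage("Extract name and city from: {{?text}}"),
        ],
        response_format="json_object",  # e.g. "text" or "json_object"
    )
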
Bugfixes

  • OpenAI deprecated the max_tokens parameter in favor of max_completion_tokens. The generative AI Hub SDK now supports max_completion_tokens, so the pin on the langchain-openai version could be relaxed.

4.1.1

Features

  • Add support for prompt registry templates in orchestration. You can now configure a prompt registry template in the orchestration service call by referencing either its ID or its scenario, template name, and version (see the sketch after this list). See Referencing Templates in the Prompt Registry.

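A sketch of referencing a registry template in an orchestration call; the TemplateRef class and its constructors are assumptions to verify against Referencing Templates in the Prompt Registry.

    # Assumed class and constructor names; see "Referencing Templates in the
    # Prompt Registry" for the documented API.
    from gen_ai_hub.orchestration.models.template_ref import TemplateRef

    # Reference by scenario, template name, and version ...
    template_ref = TemplateRef.from_tuple(scenario="travel", name="itinerary", version="0.0.1")

    # ... or by the template ID.
    # template_ref = TemplateRef.from_id("<template-id>")

    # The reference then replaces an inline template in the configuration:
    # config = OrchestrationConfig(template=template_ref, llm=LLM(name="gpt-4o"))
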
Bugfixes

  • Set langchain-openai==0.2.9 due to max_completion_tokens issues with later versions.

4.0.0

Breaking Changes

  • Switch to langchain 0.3.x. This also upgrades the dependent langchain libraries and transitions to pydantic v2. You need to ensure your code works with the upgraded dependencies; langchain 0.2.x is no longer supported by the SDK. Please refer to Package dependencies for details.

Features

  • Add support for streaming in the orchestration service (a streaming example follows this list). See the example notebook here: Streaming.

  • Add enhanced debug logging: when log level DEBUG is enabled, the source of the configuration is logged to support troubleshooting.

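A streaming sketch, assuming OrchestrationService exposes a stream method that yields partial results with an OpenAI-style delta; method and attribute names should be verified against the Streaming notebook.

    # Method and attribute names are assumptions based on the Streaming notebook.
    from gen_ai_hub.orchestration.models.config import OrchestrationConfig
    from gen_ai_hub.orchestration.models.llm import LLM
    from gen_ai_hub.orchestration.models.message import UserMessage
    from gen_ai_hub.orchestration.models.template import Template, TemplateValue
    from gen_ai_hub.orchestration.service import OrchestrationService

    config = OrchestrationConfig(
        template=Template(messages=[UserMessage("Write a haiku about {{?topic}}.")]),
        llm=LLM(name="gpt-4o"),
    )
    service = OrchestrationService(config=config)

    # stream() yields chunks as the model produces them.
    for chunk in service.stream(template_values=[TemplateValue(name="topic", value="autumn")]):
        delta = chunk.orchestration_result.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)
    print()
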
3.8.0

Features

  • Add support for mistralai--mistral-large-instruct model

  • Add support for ibm--granite-13b-chat model

  • Add capability to access unsupported models; see the example notebook Using New Models in Gen AI Hub Before They Are Added to SDK for details.

  • Add enhanced logging for API calls

    • Set the environment variable DEBUG_LOG_API_CALLS to true to log all calls to the backend for better error diagnosis (a snippet follows this list).

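For example, the flag can be set from Python before the first SDK call (exporting it in the shell works equally well); the variable name and value are as stated above.

    import os

    # Enable request/response logging for all backend calls made by the SDK.
    # Set this before the SDK issues its first request.
    os.environ["DEBUG_LOG_API_CALLS"] = "true"
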
3.2.6

Features

  • Add support for orchestration service: data masking (a configuration sketch follows this list). See the example notebook section Content Filtering for details.

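A data-masking configuration sketch, assuming the DataMasking and SAP Data Privacy Integration models from the orchestration package; class, enum, and keyword names are assumptions to check against the example notebook.

    # Assumed module, class, and enum names; consult the example notebook for
    # the documented data-masking configuration.
    from gen_ai_hub.orchestration.models.data_masking import DataMasking
    from gen_ai_hub.orchestration.models.sap_data_privacy_integration import (
        SAPDataPrivacyIntegration, MaskingMethod, ProfileEntity,
    )

    data_masking = DataMasking(
        providers=[
            SAPDataPrivacyIntegration(
                method=MaskingMethod.ANONYMIZATION,   # or MaskingMethod.PSEUDONYMIZATION
                entities=[ProfileEntity.EMAIL, ProfileEntity.PHONE],
            )
        ]
    )

    # Attached to the orchestration configuration (keyword name assumed):
    # config = OrchestrationConfig(template=..., llm=..., data_masking=data_masking)
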
Bugfixes

  • Bugfix for x509 certificate authentication support

3.1.1

Features

  • Add support for gpt-4o model

3.1.0

Breaking Changes

  • Switch to the Vertex AI SDK for native Google model access. The previous library, google-generativeai, is no longer supported by the generative AI Hub SDK.

Features

  • Add support for orchestration service: templating, content safety, and inference (a minimal sketch follows this list). See the example notebook Orchestration Service for details.

  • Add support for anthropic--claude-3.5-sonnet model
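
A minimal end-to-end sketch of templating plus inference through the orchestration service; the class names and result attributes follow the Orchestration Service example notebook for recent SDK versions and should be treated as assumptions.

    # Class names and result attributes are assumptions based on the
    # Orchestration Service example notebook.
    from gen_ai_hub.orchestration.models.config import OrchestrationConfig
    from gen_ai_hub.orchestration.models.llm import LLM
    from gen_ai_hub.orchestration.models.message import SystemMessage, UserMessage
    from gen_ai_hub.orchestration.models.template import Template, TemplateValue
    from gen_ai_hub.orchestration.service import OrchestrationService

    config = OrchestrationConfig(
        template=Template(
            messages=[
                SystemMessage("You are a concise translation assistant."),
                UserMessage("Translate to {{?to_lang}}: {{?text}}"),
            ]
        ),
        llm=LLM(name="anthropic--claude-3.5-sonnet", parameters={"max_tokens": 256}),
    )

    # Without an explicit deployment URL, recent SDK versions resolve a running
    # orchestration deployment automatically (see Understanding Deployment Resolution).
    service = OrchestrationService(config=config)

    result = service.run(
        template_values=[
            TemplateValue(name="to_lang", value="German"),
            TemplateValue(name="text", value="The orchestration service is running."),
        ]
    )
    print(result.orchestration_result.choices[0].message.content)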