Christian Tzolov

Spring Framework Engineer at Broadcom; Committer, PMC member at Apache Software Foundation.

Anything about Integration and Interoperability Architectures, Distributed and Data-Intensive Systems

Blog posts by Christian Tzolov

Supercharging Your AI Applications with Spring AI Advisors

Engineering | October 02, 2024 | ...

In the rapidly evolving world of artificial intelligence, developers are constantly seeking ways to enhance their AI applications. Spring AI, a Java framework for building AI-powered applications, has introduced a powerful feature: the Spring AI Advisors.

The advisors can supercharge your AI applications, making them more modular, portable and easier to maintain.

If reading the post isn't convenient, you can listen to an experimental podcast, AI-generated from the blog's content.

What are Spring AI Advisors?

At their core, Spring AI Advisors are components that intercept and potentially modify the flow of chat-completion requests and responses in your AI applications. The key player in this system is the AroundAdvisor.
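
As a rough illustration of where an around advisor sits in that flow, the sketch below registers the built-in SimpleLoggerAdvisor on a ChatClient, so every request is logged on the way to the model and every response on the way back. The builder calls are assumed from the Spring AI 1.0 ChatClient API; a custom advisor would be attached the same way.

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.chat.client.advisor.SimpleLoggerAdvisor;
import org.springframework.ai.chat.model.ChatModel;
import org.springframework.stereotype.Service;

@Service
class JokeService {

    private final ChatClient chatClient;

    JokeService(ChatModel chatModel) {
        // Advisors registered on the builder intercept every chat-completion
        // request/response flowing through this client; SimpleLoggerAdvisor
        // simply logs both sides of the exchange.
        this.chatClient = ChatClient.builder(chatModel)
                .defaultAdvisors(new SimpleLoggerAdvisor())
                .build();
    }

    String tellJoke() {
        return this.chatClient.prompt()
                .user("Tell me a joke about Spring")
                .call()
                .content();
    }
}
```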

Spring AI with NVIDIA LLM API

Engineering | August 20, 2024 | ...

Spring AI now supports NVIDIA's Large Language Model API, offering integration with a wide range of models. By leveraging NVIDIA's OpenAI-compatible API, Spring AI allows developers to use NVIDIA's LLMs through the familiar Spring AI API.

We'll explore how to configure and use the Spring AI OpenAI chat client to connect with the NVIDIA LLM API.

  • The demo application code is available in the nvidia-llm GitHub repository.
  • The SpringAI / NVIDIA integration documentation.

Prerequisites

  • Create an NVIDIA account with sufficient credits.
  • Select your preferred LLM model from NVIDIA's offerings, for example meta/llama-3.1-70b-instruct.
  • From the model's page, obtain the API key for your chosen model.
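
With those prerequisites in place, connecting is mostly configuration: the existing Spring AI OpenAI starter is simply pointed at NVIDIA's endpoint. A minimal application.properties sketch follows; the property keys come from the OpenAI auto-configuration, while the base URL, model name, and the explicit max-tokens setting are assumptions based on NVIDIA's OpenAI-compatible API.

```properties
# Point the Spring AI OpenAI starter at NVIDIA's OpenAI-compatible endpoint.
spring.ai.openai.base-url=https://integrate.api.nvidia.com
spring.ai.openai.api-key=${NVIDIA_API_KEY}
spring.ai.openai.chat.options.model=meta/llama-3.1-70b-instruct
# NVIDIA's API expects the completion length to be set explicitly.
spring.ai.openai.chat.options.max-tokens=2048
```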

Spring AI Embraces OpenAI's Structured Outputs: Enhancing JSON Response Reliability

Engineering | August 09, 2024 | ...

OpenAI recently introduced a powerful feature called Structured Outputs, which ensures that AI-generated responses adhere strictly to a predefined JSON schema. This feature significantly improves the reliability and usability of AI-generated content in real-world applications. Today, we're excited to announce that Spring AI (1.0.0-SNAPSHOT) has fully integrated support for OpenAI's Structured Outputs, bringing this capability to Java developers in a seamless, Spring-native way.

[Diagram: how the new Structured Outputs feature extends the OpenAI Chat API]

Note: Spring AI already provides powerful, model-agnostic Structured Output utilities that can be used with various AI models, including OpenAI. The OpenAI Structured Outputs feature offers an additional, consistent, but model-specific solution, currently available only for gpt-4o, gpt-4o-mini and later models.
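
As a rough sketch of what this looks like from the application side, the model-agnostic path maps the model's reply straight onto a Java type through the ChatClient's entity(...) call; the record and prompt below are illustrative only, and the OpenAI-specific JSON-schema mode is enabled separately through the OpenAI chat options.

```java
import org.springframework.ai.chat.client.ChatClient;

// Illustrative target type; the fields are made up for this example.
record MathReasoning(String explanation, String finalAnswer) {}

class StructuredOutputExample {

    // Model-agnostic structured output: Spring AI appends format instructions to the
    // prompt and converts the model's JSON reply into the requested Java type.
    MathReasoning solve(ChatClient chatClient) {
        return chatClient.prompt()
                .user("Solve 8x + 7 = -23, show your reasoning")
                .call()
                .entity(MathReasoning.class);
    }
}
```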

Spring AI with Groq - a blazingly fast AI inference engine

Engineering | July 31, 2024 | ...

Faster information processing not only informs - it transforms how we perceive and innovate.

Spring AI, a powerful framework for integrating AI capabilities into Spring applications, now offers support for Groq - a blazingly fast AI inference engine with support for Tool/Function calling.

Leveraging Groq's OpenAI-compatible API, Spring AI seamlessly integrates by adapting its existing OpenAI Chat client. This approach enables developers to harness Groq's high-performance models through the familiar Spring AI API.

We'll explore how to configure and use the Spring AI OpenAI chat client to connect with Groq. For detailed information, consult the Spring AI Groq documentation and related tests.
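
As with other OpenAI-compatible providers, the integration is mostly configuration. A minimal application.properties sketch, with the base URL and model name taken as assumptions (check Groq's console for the current model list):

```properties
# Reuse the Spring AI OpenAI starter against Groq's OpenAI-compatible API.
spring.ai.openai.base-url=https://api.groq.com/openai
spring.ai.openai.api-key=${GROQ_API_KEY}
spring.ai.openai.chat.options.model=llama3-70b-8192
```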

Spring AI with Ollama Tool Support

Engineering | July 26, 2024 | ...

Earlier this week, Ollama introduced an exciting new feature: tool support for Large Language Models (LLMs).

Today, we're thrilled to announce that Spring AI (1.0.0-SNAPSHOT) has fully embraced this powerful feature, bringing Ollama's function calling capabilities to the Spring ecosystem.

Ollama's tool support allows models to make decisions about when to call external functions and how to use the returned data. This opens up a world of possibilities, from accessing real-time information to performing complex calculations. Spring AI takes this concept and integrates it seamlessly with the…
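
A minimal sketch of what that looks like in application code, assuming the 1.0.0-SNAPSHOT era API this post targets: the tool is a plain Function bean with a @Description the model can read, and it is referenced by bean name from the ChatClient request (this part of the request spec has been renamed in later releases). The weather lookup itself is a made-up stub.

```java
import java.util.function.Function;

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Description;

@Configuration
class WeatherToolConfig {

    // Illustrative request/response types for a hypothetical weather lookup.
    record WeatherRequest(String city) {}
    record WeatherResponse(double temperatureCelsius) {}

    // The tool is a plain bean; the @Description text is what the model sees
    // when deciding whether to call the function.
    @Bean
    @Description("Get the current temperature for a city")
    Function<WeatherRequest, WeatherResponse> currentWeather() {
        return request -> new WeatherResponse(22.0); // stubbed lookup for the sketch
    }
}

class WeatherClient {

    String ask(ChatClient chatClient) {
        // functions(...) reflects the 1.0.0-SNAPSHOT era ChatClient API this post targets.
        return chatClient.prompt()
                .user("What is the temperature in Sofia right now?")
                .functions("currentWeather")
                .call()
                .content();
    }
}
```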

Spring AI - Structured Output

Engineering | May 09, 2024 | ...

UPDATE: (04.06.2024) Added snippets for using structured output with the new, fluent ChatClient API.

UPDATE: (17.05.2024) Generic Types support for BeanOutputConverter added.

Science works with chunks and bits and pieces of things with the continuity presumed, and Art works only with the continuities of things with the chunks and bits and pieces presumed. - Robert M. Pirsig

The ability of LLMs to produce structured outputs is important for downstream applications that rely on reliably parsing output values. Developers want to quickly turn results from an AI model into data types, such as…
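
The generic-types update mentioned above means a whole collection can be the target type. A minimal sketch using the fluent ChatClient API, with an illustrative ActorsFilms record:

```java
import java.util.List;

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.core.ParameterizedTypeReference;

// Illustrative target type for the structured output conversion.
record ActorsFilms(String actor, List<String> movies) {}

class StructuredListExample {

    // Generic-type support: the ParameterizedTypeReference tells the converter to map
    // the model's JSON reply onto a List<ActorsFilms> rather than a single bean.
    List<ActorsFilms> filmography(ChatClient chatClient) {
        return chatClient.prompt()
                .user("Generate the filmography of 5 movies for Tom Hanks and Bill Murray.")
                .call()
                .entity(new ParameterizedTypeReference<List<ActorsFilms>>() {});
    }
}
```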

Spring AI - Multimodality - Orbis Sensualium Pictus

Engineering | April 19, 2024 | ...

UPDATE 20.07.2024: Updated the Message API hierarchy diagram and the model names supporting multimodality.

UPDATE 02.06.2024: Added a code snippet showing how to use the new ChatClient API.

Humans process knowledge simultaneously across multiple modes of data input. The way we learn and our experiences are all multimodal. We don't have just vision, just audio, or just text.

These foundational principles of learning were articulated by the father of modern education, John Amos Comenius, in his work "Orbis Sensualium Pictus", dating back to 1658.

"All things that are naturally connected ought to be taught in combination"

Function Calling in Java and Spring AI using the latest Mistral AI API

Engineering | March 06, 2024 | ...

UPDATE: As of March 13, 2024, Mistral AI has integrated support for parallel function calling into their large model, a feature that was absent at the time of this blog's initial publication.

Mistral AI, a leading developer of open-source large language models, unveiled the addition of Function Calling support to its cutting-edge models.

Function Calling is a feature that facilitates the integration of LLMs with external tools and APIs. It enables the language model to request the execution of client-side functions, allowing it to access necessary run-time information or perform tasks…
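
The function beans themselves stay model-agnostic (see the Ollama sketch above); switching to Mistral AI is then largely a configuration change. A minimal application.properties sketch, with the property keys assumed from the Spring AI Mistral AI starter:

```properties
# Use the Spring AI Mistral AI starter; the same @Description function beans apply.
spring.ai.mistralai.api-key=${MISTRAL_AI_API_KEY}
spring.ai.mistralai.chat.options.model=mistral-large-latest
```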

Spring Cloud Function for Azure Function

Engineering | February 24, 2023 | ...

What is Spring Cloud Function?

Spring Cloud Function is a Spring Boot-based framework that lets users concentrate on their business logic by implementing it as Java functions (i.e., Supplier, Function, Consumer). In turn, the framework provides the necessary abstractions to run these functions in various environments (e.g., REST, streaming) as well as in serverless environments such as AWS Lambda or Azure Functions, without having to worry about the underlying platform-specific details. This allows developers to focus on writing their business logic and let the framework handle…
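
A minimal sketch of that programming model: the business logic is a single Function bean, and the same class can run as a local web endpoint or be packaged for Azure Functions or AWS Lambda through the corresponding adapter, without changing the function itself.

```java
import java.util.function.Function;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class UppercaseApplication {

    public static void main(String[] args) {
        SpringApplication.run(UppercaseApplication.class, args);
    }

    // The business logic is just a java.util.function.Function bean; the framework
    // decides how to expose it (HTTP endpoint, stream listener, or a cloud-provider
    // function handler) depending on the adapters on the classpath.
    @Bean
    public Function<String, String> uppercase() {
        return String::toUpperCase;
    }
}
```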

Case Study: Change Data Capture (CDC) Analysis with CDC Debezium source and Analytics sink in Real-Time

Engineering | December 14, 2020 | ...
