Christian Tzolov

R&D Software Engineer on the Spring team, leading the Spring AI and MCP Java SDK projects. Committer and PMC member at the Apache Software Foundation.

I help Java developers build intelligent AI applications with a pragmatic approach that makes advanced AI concepts accessible for real-world enterprise systems.

Find my recent blogs below, plus my talks and LinkedIn posts.

Recent blog posts by Christian Tzolov

Announcing Spring AI MCP: A Java SDK for the Model Context Protocol

Engineering | December 11, 2024 | ...

We're excited to introduce Spring AI MCP, a robust Java SDK implementation of the Model Context Protocol (MCP). This new addition to the Spring AI ecosystem brings standardized AI model integration capabilities to the Java platform.

What is MCP?

The Model Context Protocol (MCP) is an open protocol that standardizes how applications provide context to Large Language Models (LLMs). MCP provides a standardized way to connect AI models to different data sources and tools, making integration seamless and consistent. It helps you build agents and complex workflows on top of LLMs. LLMs frequently…
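As a minimal sketch of what the SDK enables, the following connects to an MCP server over stdio and lists its tools. Class and package names follow the current MCP Java SDK and may differ from the initial Spring AI MCP milestone; the server command is illustrative.

```java
import io.modelcontextprotocol.client.McpClient;
import io.modelcontextprotocol.client.McpSyncClient;
import io.modelcontextprotocol.client.transport.ServerParameters;
import io.modelcontextprotocol.client.transport.StdioClientTransport;

public class McpToolDiscovery {
    public static void main(String[] args) {
        // Launch an MCP server as a subprocess and talk to it over stdio
        var transport = new StdioClientTransport(
                ServerParameters.builder("npx")
                        .args("-y", "@modelcontextprotocol/server-everything")
                        .build());

        McpSyncClient client = McpClient.sync(transport).build();
        client.initialize(); // protocol handshake: capabilities, server info

        // Discover the tools the server exposes
        client.listTools().tools()
              .forEach(tool -> System.out.println(tool.name() + ": " + tool.description()));

        client.closeGracefully();
    }
}
```

The same client can then invoke a discovered tool with `callTool`, passing the tool name and a map of arguments.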

Introducing Spring AI Amazon Bedrock Nova Integration via Converse API

Engineering | December 10, 2024 | ...

The Amazon Bedrock Nova models represent a new generation of foundation models supporting a broad range of use cases, from text and image understanding to video-to-text analysis.

With the Spring AI Bedrock Converse API integration, developers can seamlessly connect to these advanced Nova models and build sophisticated conversational applications with minimal effort.

This blog post introduces the key features of Amazon Nova models, demonstrates their integration with Spring AI's Bedrock Converse API, and provides practical examples for text, image, video, document processing, and function…
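The integration is configured like any other Spring AI chat model. A minimal configuration sketch, assuming the Bedrock Converse starter, the Nova Pro model id, and standard AWS credentials; property names may vary between Spring AI milestones:

```properties
# application.properties -- Spring AI Bedrock Converse with an Amazon Nova model
spring.ai.bedrock.aws.region=us-east-1
spring.ai.bedrock.aws.access-key=${AWS_ACCESS_KEY_ID}
spring.ai.bedrock.aws.secret-key=${AWS_SECRET_ACCESS_KEY}
spring.ai.bedrock.converse.chat.options.model=amazon.nova-pro-v1:0
```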

Audio Multimodality: Expanding AI Interaction with Spring AI and OpenAI

Engineering | December 05, 2024 | ...

This blog post is co-authored by our great contributor Thomas Vitale.

OpenAI provides specialized models for speech-to-text and text-to-speech conversion, recognized for their performance and cost-efficiency. Spring AI integrates these capabilities via Voice-to-Text and Text-to-Speech (TTS).

The new Audio Generation feature (gpt-4o-audio-preview) goes further, enabling mixed input and output modalities. Audio inputs can contain richer data than text alone. Audio can convey nuanced information like tone and inflection, and together with the audio outputs it enables asynchronous speech-to-speech interactions. Additionally, this new multimodality opens up possibilities for innovative applications, such as structured data extraction. Developers can extract structured information not just from simple text, but also from images and audio, building complex, structured objects…
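As a sketch of the audio-input side, the following sends an audio file to the model through the fluent `ChatClient` API. The file name is hypothetical, and exact option and method names may differ between Spring AI milestones; the model is selected via `spring.ai.openai.chat.options.model=gpt-4o-audio-preview`.

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.chat.model.ChatModel;
import org.springframework.core.io.ClassPathResource;
import org.springframework.util.MimeTypeUtils;

public class AudioInputDemo {

    // Ask the model to reason over the content of an audio recording
    public static String summarize(ChatModel chatModel) {
        var audio = new ClassPathResource("speech.mp3"); // hypothetical audio file
        return ChatClient.create(chatModel)
                .prompt()
                .user(u -> u.text("Summarize what is said in this recording.")
                            .media(MimeTypeUtils.parseMimeType("audio/mp3"), audio))
                .call()
                .content();
    }
}
```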

Leverage the Power of 45k, free, Hugging Face Models with Spring AI and Ollama

Engineering | October 22, 2024 | ...

This blog post is co-authored by our great contributor Thomas Vitale.

Ollama now supports all GGUF models on Hugging Face, giving Spring AI's Ollama integration access to over 45,000 community-created models that can be run locally.


We'll explore using this new feature with Spring AI. The Spring AI Ollama integration can automatically pull models that are not available locally, for both chat completion and embedding models. This is useful when switching models or deploying to new environments.

Setting Up Spring AI with Ollama

Install Ollama on your system: https://ollama.com/download.

Tip: Spring AI also supports running Ollama via Testcontainers or integrating with an external Ollama service via Kubernetes Service Bindings.
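With Ollama installed, pointing Spring AI at a Hugging Face GGUF model is a matter of configuration. A sketch, where the `hf.co/{username}/{repository}` model name is a placeholder and the pull-strategy property name may differ slightly between Spring AI releases:

```properties
# application.properties -- run a Hugging Face GGUF model through Ollama
spring.ai.ollama.base-url=http://localhost:11434
spring.ai.ollama.chat.options.model=hf.co/{username}/{gguf-repository}
# Automatically pull the model if it is not present locally
spring.ai.ollama.init.pull-model-strategy=when_missing
```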

Supercharging Your AI Applications with Spring AI Advisors

Engineering | October 02, 2024 | ...

In the rapidly evolving world of artificial intelligence, developers are constantly seeking ways to enhance their AI applications. Spring AI, a Java framework for building AI-powered applications, has introduced a powerful feature: the Spring AI Advisors.

The advisors can supercharge your AI applications, making them more modular, portable and easier to maintain.

If reading the post isn't convenient, you can listen to this experimental podcast, AI-generated from the blog's content.

What are Spring AI Advisors?

At their core, Spring AI Advisors are components that intercept and potentially modify the flow of chat-completion requests and responses in your AI applications. The key player in this system is the AroundAdvisor.
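A minimal sketch of a custom around advisor that times each chat call, based on the advisor API from the Spring AI 1.0.0 milestones (`CallAroundAdvisor`, `AdvisedRequest`); interface and method names may differ in later releases.

```java
import org.springframework.ai.chat.client.advisor.api.AdvisedRequest;
import org.springframework.ai.chat.client.advisor.api.AdvisedResponse;
import org.springframework.ai.chat.client.advisor.api.CallAroundAdvisor;
import org.springframework.ai.chat.client.advisor.api.CallAroundAdvisorChain;

public class TimingAdvisor implements CallAroundAdvisor {

    @Override
    public AdvisedResponse aroundCall(AdvisedRequest request, CallAroundAdvisorChain chain) {
        long start = System.nanoTime();
        // Delegate to the next advisor (and ultimately the model) in the chain
        AdvisedResponse response = chain.nextAroundCall(request);
        System.out.printf("Chat call took %d ms%n", (System.nanoTime() - start) / 1_000_000);
        return response;
    }

    @Override
    public String getName() { return "timingAdvisor"; }

    @Override
    public int getOrder() { return 0; } // lower values run earlier in the chain
}
```

The advisor is attached when building the client, e.g. `ChatClient.builder(chatModel).defaultAdvisors(new TimingAdvisor()).build()`, and from then on wraps every request/response pair.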

Spring AI with NVIDIA LLM API

Engineering | August 20, 2024 | ...

Spring AI now supports NVIDIA's Large Language Model API, offering integration with a wide range of models. By leveraging NVIDIA's OpenAI-compatible API, Spring AI allows developers to use NVIDIA's LLMs through the familiar Spring AI API.


We'll explore how to configure and use the Spring AI OpenAI chat client to connect with NVIDIA LLM API.

  • The demo application code is available in the nvidia-llm GitHub repository.
  • The Spring AI / NVIDIA integration documentation.

Prerequisite

  • Create an NVIDIA account with sufficient credits.
  • Select your preferred LLM model from NVIDIA's offerings, such as meta/llama-3.1-70b-instruct.
  • From the model's page, obtain the API key for your chosen model.
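Because NVIDIA exposes an OpenAI-compatible API, the configuration reuses the Spring AI OpenAI properties, pointed at NVIDIA's endpoint. A sketch, with the base URL and the explicit max-tokens requirement as stated assumptions:

```properties
# application.properties -- reuse the Spring AI OpenAI client against NVIDIA's endpoint
spring.ai.openai.base-url=https://integrate.api.nvidia.com
spring.ai.openai.api-key=${NVIDIA_API_KEY}
spring.ai.openai.chat.options.model=meta/llama-3.1-70b-instruct
# The NVIDIA LLM API requires max-tokens to be set explicitly
spring.ai.openai.chat.options.max-tokens=2048
```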

Spring AI Embraces OpenAI's Structured Outputs: Enhancing JSON Response Reliability

Engineering | August 09, 2024 | ...

OpenAI recently introduced a powerful feature called Structured Outputs, which ensures that AI-generated responses adhere strictly to a predefined JSON schema. This feature significantly improves the reliability and usability of AI-generated content in real-world applications. Today, we're excited to announce that Spring AI (1.0.0-SNAPSHOT) has fully integrated support for OpenAI's Structured Outputs, bringing this capability to Java developers in a seamless, Spring-native way.

The following diagram shows how the new Structured Outputs feature extends the OpenAI Chat API:


Note: Spring AI already provides powerful, model-agnostic Structured Output utilities that can be used with various AI models, including OpenAI's. The OpenAI Structured Outputs feature offers an additional, consistent, but model-specific solution, currently available only for the gpt-4o, gpt-4o-mini, and later models.
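A sketch of enabling the model-specific feature: the request carries a JSON schema in a `JSON_SCHEMA` response format, and the model is then constrained to reply with conforming JSON. The schema and prompt are illustrative, and builder method names may vary across Spring AI milestones.

```java
import org.springframework.ai.chat.model.ChatModel;
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.openai.OpenAiChatOptions;
import org.springframework.ai.openai.api.ResponseFormat;

public class StructuredOutputsDemo {

    public static String solve(ChatModel chatModel) {
        // JSON schema the response must strictly adhere to
        String jsonSchema = """
            {
              "type": "object",
              "properties": {
                "steps": { "type": "array", "items": { "type": "string" } },
                "final_answer": { "type": "string" }
              },
              "required": ["steps", "final_answer"],
              "additionalProperties": false
            }
            """;

        Prompt prompt = new Prompt("Solve 8x + 7 = -23. Explain step by step.",
                OpenAiChatOptions.builder()
                        .withModel("gpt-4o-2024-08-06")
                        .withResponseFormat(
                                new ResponseFormat(ResponseFormat.Type.JSON_SCHEMA, jsonSchema))
                        .build());

        // The returned content is guaranteed to match the schema above
        return chatModel.call(prompt).getResult().getOutput().getContent();
    }
}
```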

Spring AI with Groq - a blazingly fast AI inference engine

Engineering | July 31, 2024 | ...

Faster information processing not only informs - it transforms how we perceive and innovate.

Spring AI, a powerful framework for integrating AI capabilities into Spring applications, now offers support for Groq - a blazingly fast AI inference engine with support for Tool/Function calling.

Leveraging Groq's OpenAI-compatible API, Spring AI seamlessly integrates by adapting its existing OpenAI Chat client. This approach enables developers to harness Groq's high-performance models through the familiar Spring AI API.


We'll explore how to configure and use the Spring AI OpenAI chat client to connect with Groq. For detailed information, consult the Spring AI Groq documentation and related tests.
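Since the integration reuses the OpenAI client, configuration amounts to redirecting it to Groq's endpoint. A sketch, where the base URL and model name are stated assumptions (pick any model from Groq's catalog):

```properties
# application.properties -- point the Spring AI OpenAI client at Groq
spring.ai.openai.base-url=https://api.groq.com/openai
spring.ai.openai.api-key=${GROQ_API_KEY}
spring.ai.openai.chat.options.model=llama3-70b-8192
```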

Spring AI with Ollama Tool Support

Engineering | July 26, 2024 | ...

Earlier this week, Ollama introduced an exciting new feature: tool support for Large Language Models (LLMs).

Today, we're thrilled to announce that Spring AI (1.0.0-SNAPSHOT) has fully embraced this powerful feature, bringing Ollama's function calling capabilities to the Spring ecosystem.

Ollama's tool support allows models to make decisions about when to call external functions and how to use the returned data. This opens up a world of possibilities, from accessing real-time information to performing complex calculations. Spring AI takes this concept and integrates it seamlessly with the…
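As a sketch of how a Java function becomes a tool the model can call: register a `java.util.function.Function` bean with a `@Description`, then opt it into the request. The weather function is hypothetical, and the option setters follow 1.0.0-SNAPSHOT-era Spring AI APIs, which may differ in later releases.

```java
import java.util.function.Function;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Description;

@Configuration
public class WeatherToolConfig {

    public record WeatherRequest(String city) {}
    public record WeatherResponse(double tempC) {}

    // The description helps the model decide when to call this function
    @Bean
    @Description("Get the current temperature in Celsius for a city")
    public Function<WeatherRequest, WeatherResponse> currentWeather() {
        return request -> new WeatherResponse(22.0); // stubbed lookup for the sketch
    }
}
```

At call time the function is enabled per request, e.g. `new Prompt("What is the weather in Sofia?", OllamaOptions.builder().withFunction("currentWeather").build())`; the model decides whether to invoke it and how to weave the returned data into its answer.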

Spring AI - Structured Output

Engineering | May 09, 2024 | ...

UPDATE (04.06.2024): Added snippets for using structured output with the new, fluent ChatClient API.

UPDATE (17.05.2024): Added generic types support for BeanOutputConverter.

Science works with chunks and bits and pieces of things with the continuity presumed, and Art works only with the continuities of things with the chunks and bits and pieces presumed. - Robert M. Pirsig

The ability of LLMs to produce structured outputs is important for downstream applications that rely on reliably parsing output values. Developers want to quickly turn results from an AI model into data types, such as…
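A sketch of the model-agnostic approach with Spring AI's `BeanOutputConverter`: it generates format instructions to append to the prompt, then maps the model's JSON reply onto a Java record. The record and input are illustrative.

```java
import java.util.List;

import org.springframework.ai.converter.BeanOutputConverter;

public class StructuredOutputConverterDemo {

    public record ActorFilms(String actor, List<String> movies) {}

    public static ActorFilms parse(String modelText) {
        var converter = new BeanOutputConverter<>(ActorFilms.class);
        // converter.getFormat() yields schema instructions to append to the prompt,
        // steering the model toward parseable JSON
        return converter.convert(modelText);
    }
}
```

With the fluent ChatClient API, both steps collapse into one call: `.prompt().user(...).call().entity(ActorFilms.class)`.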
