Spring AI 1.0.0 M6 Released

Releases | Mark Pollack | February 14, 2025

We are excited to announce the release of Spring AI 1.0.0 Milestone 6. To celebrate this release, we have created a special AI-generated music playlist to enhance your blog reading and coding experiences!

As usual, this release includes several new features and bug fixes. We have continued to focus on reviewing the codebase from a design perspective. While we have tried to make this transition smooth by deprecating methods and classes for one release cycle, there are some breaking changes we know about and potentially some that we don't, so please bear with us. See the Breaking Changes section at the bottom of this post for details.

New Features

🎵 It's Tool time!

There have been significant improvements to the overall design and feature set of function calling, now adopting the more prevalent term tool calling. Many thanks to Thomas Vitale for driving these improvements.

There are now several ways you can define tools:

  • Declarative method-based tools using @Tool and @ToolParam annotations
  • Programmatic method-based tools using MethodToolCallback
  • Function-based tools using FunctionToolCallback
  • Dynamic tool resolution from Spring beans

Here's an example that creates a tool to get the current date and time using the @Tool annotation.

import java.time.LocalDateTime;
import org.springframework.ai.tool.annotation.Tool;
import org.springframework.context.i18n.LocaleContextHolder;

class DateTimeTools {

    @Tool(description = "Get the current date and time in the user's timezone")
    String getCurrentDateTime() {
        return LocalDateTime.now().atZone(LocaleContextHolder.getTimeZone().toZoneId()).toString();
    }
}

// Using the tool with ChatClient
String response = ChatClient.create(chatModel)
        .prompt("What day is tomorrow?")
        .tools(new DateTimeTools())
        .call()
        .content();

When the model needs to know the current date and time, it will automatically request the tool to be called. The ChatClient handles the tool execution internally and returns the result to the model, which then uses this information to generate the final response. The JSON Schema is automatically generated using the class and tool annotations.
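You can also define tools programmatically. Here is a minimal sketch of the function-based style; WeatherRequest, WeatherResponse, and WeatherService are hypothetical types introduced for illustration, with WeatherService implementing java.util.function.Function:

import java.util.function.Function;
import org.springframework.ai.tool.ToolCallback;
import org.springframework.ai.tool.function.FunctionToolCallback;

// Hypothetical request/response types for illustration
record WeatherRequest(String location) {}
record WeatherResponse(double temperatureCelsius, String conditions) {}

// A plain java.util.function.Function exposed as a tool
class WeatherService implements Function<WeatherRequest, WeatherResponse> {
    @Override
    public WeatherResponse apply(WeatherRequest request) {
        return new WeatherResponse(22.5, "Sunny"); // stub; call a real weather API here
    }
}

class FunctionToolExample {
    ToolCallback weatherTool() {
        // The JSON input schema for the tool is derived from WeatherRequest
        return FunctionToolCallback.builder("currentWeather", new WeatherService())
                .description("Get the current weather for a location")
                .inputType(WeatherRequest.class)
                .build();
    }
}

The resulting ToolCallback can then be passed to the ChatClient through the tools(...) overloads, just like the annotated object above.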

In terms of deprecations, everything in the package org.springframework.ai.model.function has been deprecated and the new interfaces and classes are available in the org.springframework.ai.tool package.

Related APIs have been updated:

  • FunctionCallingOptions to ToolCallingChatOptions
  • ChatClient.builder().defaultFunctions() to ChatClient.builder().defaultTools()
  • ChatClient.functions() to ChatClient.tools()
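As a quick before/after sketch of the ChatClient rename (assuming a function bean registered under the name currentWeather):

// Before (deprecated)
String before = ChatClient.create(chatModel)
        .prompt("What's the weather in Amsterdam?")
        .functions("currentWeather") // resolve the tool by bean name
        .call()
        .content();

// After
String after = ChatClient.create(chatModel)
        .prompt("What's the weather in Amsterdam?")
        .tools("currentWeather") // same behavior, new terminology
        .call()
        .content();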

Please refer to the Tools Migration Guide for more information.

There are various other improvements to the tool calling capabilities; please check the tool reference documentation for more information.

🎵 MCP, it's as easy as 1, 2, 3.

Instead of an October surprise, last year we had a November surprise: the Model Context Protocol was released and was incredibly well received by the AI community.

In short, the Model Context Protocol (MCP) provides a unified way to connect AI models to different data sources and tools, making integration seamless and consistent. It helps you build agents and complex workflows on top of Large Language Models (LLMs). Since LLMs frequently need to integrate with data and tools, MCP provides:

  • A growing list of pre-built integrations that your LLM can directly plug into
  • The flexibility to switch between LLM providers and vendors
  • Standardized interfaces for tool discovery and execution

The spring-ai-mcp experimental project was started last November and has been evolving since then. The Spring AI team has collaborated with David Soria Parra and others at Anthropic to move the experimental project into the official MCP Java SDK. The MCP Java SDK has the following core capabilities:

  • Synchronous and asynchronous MCP client/server implementations
  • Protocol version compatibility negotiation
  • Tool discovery and execution with change notifications
  • Resource management with URI templates
  • Roots list management and notifications
  • Prompt handling and management
  • Sampling support for AI model interactions

There are also multiple transport options:

  • Stdio-based transport for process-based communication
  • Java HttpClient-based SSE client transport
  • Servlet-based SSE server transport
  • Spring-specific transports: WebFlux SSE transport for reactive HTTP streaming and WebMVC SSE transport for servlet-based HTTP streaming

Please check out the MCP Java SDK documentation for more information on getting started with the SDK, and visit the MCP Java SDK GitHub repository to open issues and join discussions.
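To give a flavor of the SDK, here is a hedged sketch of a synchronous client talking to the Brave Search server over stdio; the class and method names follow the SDK documentation at the time of writing, so verify the exact signatures against the docs:

import java.util.Map;
import io.modelcontextprotocol.client.McpClient;
import io.modelcontextprotocol.client.McpSyncClient;
import io.modelcontextprotocol.client.transport.ServerParameters;
import io.modelcontextprotocol.client.transport.StdioClientTransport;
import io.modelcontextprotocol.spec.McpSchema;

public class McpClientSketch {
    public static void main(String[] args) {
        // Launch the Brave Search MCP server as a child process over stdio
        var params = ServerParameters.builder("npx")
                .args("-y", "@modelcontextprotocol/server-brave-search")
                .build();

        McpSyncClient client = McpClient.sync(new StdioClientTransport(params)).build();
        client.initialize(); // negotiate protocol version and capabilities

        // Discover the tools the server exposes
        McpSchema.ListToolsResult tools = client.listTools();
        tools.tools().forEach(tool -> System.out.println(tool.name()));

        // Invoke a tool by name with a map of arguments ("brave_web_search" is illustrative)
        McpSchema.CallToolResult result = client.callTool(
                new McpSchema.CallToolRequest("brave_web_search", Map.of("query", "Spring AI")));
        System.out.println(result.content());

        client.closeGracefully();
    }
}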

With the core components moved into the MCP Java SDK, the Spring AI integration centers on providing an even easier developer experience for creating client and server implementations, leveraging Spring Boot auto-configuration.

Here is a small client example showing the use of the Brave Search API as part of a simple chatbot application:

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.tool.ToolCallbackProvider;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @Bean
    public CommandLineRunner predefinedQuestions(
            ChatClient.Builder chatClientBuilder,
            ToolCallbackProvider tools,
            ConfigurableApplicationContext context) {
        return args -> {
            // Register the MCP-provided tools with the ChatClient
            var chatClient = chatClientBuilder
                    .defaultTools(tools)
                    .build();

            String question = "Does Spring AI support the Model Context Protocol?";
            System.out.println("ASSISTANT: "
                    + chatClient.prompt(question).call().content());
        };
    }
}

The configuration is simple:

spring.ai.mcp.client.stdio.enabled=true
spring.ai.mcp.client.stdio.servers-configuration=classpath:/mcp-servers-config.json

and the server configuration is:

{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-brave-search"
      ],
      "env": {
      }
    }
  }
}

In your dependency configuration, import the Spring AI BOM and add the Spring Boot client starter, shown here for Maven.

<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-mcp-client-spring-boot-starter</artifactId>
</dependency>
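If you build with Gradle, the equivalent is shown below; note that milestone releases also require the Spring milestone repository to be configured:

dependencies {
    implementation platform("org.springframework.ai:spring-ai-bom:1.0.0-M6")
    implementation "org.springframework.ai:spring-ai-mcp-client-spring-boot-starter"
}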

MCP is a big topic. The reference documentation has more information, and there are several examples to explore.

Vector Store Enhancements

The VectorStore API has been improved, though it does come with some breaking changes.
The VectorStore interface's delete method has been simplified to a void operation, removing the previous Optional return type. Instead of checking return values, developers should now use exception handling to manage deletion failures. See the reference docs for more information.
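A minimal sketch of the new style, assuming a configured vectorStore bean, a document ID docId, and an SLF4J logger:

// Previously, delete returned a value you had to inspect.
// Now it is void, and failures surface as runtime exceptions:
try {
    vectorStore.delete(List.of(docId));
}
catch (Exception ex) {
    logger.warn("Failed to delete document {}", docId, ex);
    // recover, retry, or rethrow as appropriate
}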

Filter-based Deletion

Now you can delete documents based on metadata criteria rather than just document IDs.

Here is an example:

// Create and add documents to the store
Document bgDocument = new Document("content", 
    Map.of("country", "BG", "year", 2020));
Document nlDocument = new Document("content", 
    Map.of("country", "NL", "year", 2021));
vectorStore.add(List.of(bgDocument, nlDocument));

// Delete documents using string filter expression
vectorStore.delete("country == 'BG'");

// Or use the type-safe Expression-based API
Filter.Expression expr = new FilterExpressionBuilder()
    .eq("country", "BG")
    .build();
vectorStore.delete(expr);

This functionality has been implemented across all vector store providers including Chroma, Elasticsearch, PostgreSQL, Weaviate, Redis, Milvus, and more.

Native Client Access

A new getNativeClient() API allows developers to access the underlying native client implementation when needed:

WeaviateVectorStore vectorStore = // get vector store
Optional<WeaviateClient> nativeClient = vectorStore.getNativeClient();
if (nativeClient.isPresent()) {
    WeaviateClient client = nativeClient.get();
    // Use native client capabilities
}

Simple Vector Store Improvements

The SimpleVectorStore implementation now supports metadata filtering using Spring Expression Language (SpEL).
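A minimal sketch of a filtered search against the in-memory store, assuming an EmbeddingModel bean is available (the builder-style SearchRequest comes from the recent API refactoring):

SimpleVectorStore vectorStore = SimpleVectorStore.builder(embeddingModel).build();
vectorStore.add(List.of(
        new Document("Spring AI supports tool calling",
                Map.of("country", "NL", "year", 2021)),
        new Document("Vector stores hold embeddings",
                Map.of("country", "BG", "year", 2020))));

// Similarity search restricted by a metadata filter expression
List<Document> results = vectorStore.similaritySearch(
        SearchRequest.builder()
                .query("tool calling")
                .topK(5)
                .filterExpression("country == 'NL' && year >= 2021")
                .build());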

Postgres Vector Store Improvements

The PostgreSQL vector store implementation has been improved to handle different ID column types more flexibly. Instead of enforcing UUID as the only primary key type, it now supports:

  • UUID
  • TEXT
  • INTEGER
  • SERIAL
  • BIGSERIAL

This makes it easier to integrate with existing database schemas and different ID management strategies.
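As a hedged sketch, selecting a non-UUID key type when building the store might look like the following; the idType(...) builder method and the PgIdType enum are assumptions based on this feature description, so verify the names against the PgVectorStore reference docs:

// Assumption: the ID column type is selected through the store's builder
PgVectorStore vectorStore = PgVectorStore.builder(jdbcTemplate, embeddingModel)
        .idType(PgVectorStore.PgIdType.TEXT) // match an existing TEXT primary key column
        .build();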

Removed deprecated Amazon Bedrock Chat models

These have been replaced by the Amazon Bedrock Converse API, which is more flexible and supports different models through the same API.
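If you were using one of the removed models, moving to the Converse integration is largely a dependency-and-properties change. A hedged sketch of the configuration (the model ID is illustrative; check the Bedrock Converse documentation for the exact property names):

spring.ai.bedrock.aws.region=us-east-1
spring.ai.bedrock.converse.chat.options.model=anthropic.claude-3-5-sonnet-20240620-v1:0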

🎵 It's all 'bout those agents...

Everyone is talking about agents. We won't be building an agent framework in the near future, as the blog post "Building Effective Agents" from Anthropic resonates strongly with the team.

From the blog post:

Over the past year, we've worked with dozens of teams building large language model (LLM) agents across industries. Consistently, the most successful implementations weren't using complex frameworks or specialized libraries. Instead, they were building with simple, composable patterns.

The two main categories of agentic systems are workflows and agents:

  • Workflows: Systems where LLMs and tools are orchestrated through predefined code paths
  • Agents: Systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks

Spring AI already provides the key building block, the "augmented LLM": an LLM enhanced with augmentations such as retrieval, tools, and memory. Building on it, the Anthropic post describes several workflows for building effective agents:

  1. Chain Workflow – Breaks a task into sequential steps where each LLM call processes the output of the previous step, ensuring structured progress.
  2. Evaluator-Optimizer – Uses an evaluator LLM to assess the output of another LLM, improving quality by providing feedback and refining responses.
  3. Orchestrator-Workers – A central orchestrator LLM delegates tasks to specialized worker LLMs, enabling modular and scalable task execution.
  4. Parallelization Workflow – Splits tasks into independent subtasks that can run concurrently, improving efficiency and aggregating multiple perspectives.
  5. Routing Workflow – Classifies inputs and directs them to the appropriate LLM model or processing path, optimizing for accuracy and efficiency.

For detailed implementations of these patterns, see Spring AI's Building Effective Agents with Spring AI (Part 1) and accompanying examples.

Here is an example of the first pattern:

Chain Workflow in Spring AI

In this example, we'll create a chain workflow that processes a business report by:

  1. Extracting numerical metrics
  2. Standardizing their format
  3. Sorting them
  4. Presenting them in a clean table format

Here's how to implement this pattern using Spring AI:

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    // Sample business report with various metrics in natural language
    String report = """
            Q3 Performance Summary:
            Our customer satisfaction score rose to 92 points this quarter.
            Revenue grew by 45% compared to last year.
            Market share is now at 23% in our primary market.
            Customer churn decreased to 5% from 8%.
            New user acquisition cost is $43 per user.
            Product adoption rate increased to 78%.
            Employee satisfaction is at 87 points.
            Operating margin improved to 34%.
            """;

    @Bean
    public CommandLineRunner commandLineRunner(ChatClient.Builder chatClientBuilder) {
        return args -> {
            new ChainWorkflow(chatClientBuilder.build()).chain(report);
        };
    }
}

/**
 * Implements a prompt chaining workflow that breaks down complex tasks into a sequence
 * of simpler LLM calls, where each step's output feeds into the next step.
 */
public class ChainWorkflow {
    /**
     * System prompts that define each transformation step in the chain.
     * Each prompt acts as a gate that validates and transforms the output
     * before proceeding to the next step.
     */
    private static final String[] CHAIN_PROMPTS = {
        // Step 1: Extract numerical values
        """
        Extract only the numerical values and their associated metrics from the text.
        Format each as 'value: metric' on a new line.
        Example format:
        92: customer satisfaction
        45%: revenue growth""",
        
        // Step 2: Standardize to percentages
        """
        Convert all numerical values to percentages where possible.
        If not a percentage or points, convert to decimal (e.g., 92 points -> 92%).
        Keep one number per line.
        Example format:
        92%: customer satisfaction
        45%: revenue growth""",
        
        // Step 3: Sort in descending order
        """
        Sort all lines in descending order by numerical value.
        Keep the format 'value: metric' on each line.
        Example:
        92%: customer satisfaction
        87%: employee satisfaction""",
        
        // Step 4: Format as markdown
        """
        Format the sorted data as a markdown table with columns:
        | Metric | Value |
        |:--|--:|
        | Customer Satisfaction | 92% |"""
    };

    private final ChatClient chatClient;

    public ChainWorkflow(ChatClient chatClient) {
        this.chatClient = chatClient;
    }

    public String chain(String userInput) {
        String response = userInput;
        
        // Feed each step's output into the next step's prompt
        for (String prompt : CHAIN_PROMPTS) {
            response = chatClient.prompt(
                String.format("{%s}\n{%s}", prompt, response)
            ).call().content();
        }
        
        return response;
    }
}

The key data flow is that the output of each step becomes the input for the next step. For our sample report, the data flows like this:

  • Step 1 extracts metrics like "92: customer satisfaction", "45%: revenue growth"
  • Step 2 standardizes to percentage format: "92%: customer satisfaction"
  • Step 3 sorts values in descending order
  • Step 4 creates a formatted markdown table

The full source code, along with implementations of the other agentic patterns, is available in the spring-ai-examples repository and in the Building Effective Agents with Spring AI (Part 1) blog post.

Contributors

There was also other refactoring, bug fixing, and documentation enhancement work across the board from a wide range of contributors. If we haven't gotten to your PR yet, we will; please be patient. Thank you to everyone who contributed!

Breaking Changes

This release includes several breaking changes as we continue to refine the API design. Key changes include:

  • Renaming of function-related packages and classes to tool-related equivalents (e.g., org.springframework.ai.model.function to org.springframework.ai.tool)
  • Updates to the VectorStore API
  • Changes to model client configuration properties

For a complete list of breaking changes and detailed upgrade instructions, please refer to our Upgrade Notes in the reference documentation.
