Spring Data Moore ships with 16 modules and over 700 tickets completed. It includes tons of improvements and new features across the portfolio and has a strong focus on three major topics: Reactive, Kotlin, and Performance. The release adds features such as declarative reactive transactions and Coroutines/Flow support and comes with up to 60%* faster finder methods.
Let’s start with a look at some of the Reactive features of Moore.
The Lovelace Release introduced early support for reactive transactions in a closure-fashioned style that left some room for improvement. The following listing shows that style:
Reactive Transactions in Lovelace (with MongoDB)
public Mono<Process> doSomething(Long id) {

  return template.inTransaction().execute(txTemplate -> {

    return txTemplate.findById(id)
      .flatMap(it -> start(txTemplate, it))
      .flatMap(it -> verify(it))
      .flatMap(it -> finish(txTemplate, it));
  }).next();
}
In the preceding snippet, the transaction has to be initiated by explicitly calling inTransaction(), a transaction-aware template has to be used within the closure, and next() has to be called at the end to turn the returned Flux into a Mono to satisfy the method signature, even though findById(…) already emits only a single element.
Obviously, this is not the most intuitive way of doing reactive transactions. So let's have a look at the same flow using declarative reactive transaction support. As with Spring's transaction support, you need a component to handle the transaction for you. For reactive transactions, a ReactiveTransactionManager is currently provided by the MongoDB and R2DBC modules. The following listing shows such a component:
@EnableTransactionManagement
class Config extends AbstractReactiveMongoConfiguration {

  // …

  @Bean
  ReactiveTransactionManager mgr(ReactiveMongoDatabaseFactory f) {
    return new ReactiveMongoTransactionManager(f);
  }
}
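The R2DBC module ships an analogous transaction manager. The following is only a sketch of the equivalent setup, assuming a ConnectionFactory bean is available; the configuration class name is illustrative:

@EnableTransactionManagement
class R2dbcConfig {

  // …

  @Bean
  ReactiveTransactionManager mgr(ConnectionFactory factory) {
    // binds the R2DBC connection to the Reactor Context, analogous to the MongoDB manager above
    return new R2dbcTransactionManager(factory);
  }
}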
From there, you can annotate methods with @Transactional and rely on the infrastructure to start, commit, and roll back transactional flows, handling the lifecycle via the Reactor Context. This lets you turn the code from Lovelace into the following listing, removing the need for the closure with its scoped template and the superfluous Flux-to-Mono transformation:
Declarative Reactive Transactions in Moore (with MongoDB)
@Transactional
public Mono<Process> doSomething(Long id) {

  return template.findById(id)
    .flatMap(it -> start(template, it))
    .flatMap(it -> verify(it))
    .flatMap(it -> finish(template, it));
}
Another notable addition to the reactive family can be found in one of the community modules: Spring Data Elasticsearch now offers reactive template and repository support built upon a fully reactive Elasticsearch REST client that is, in turn, based on Spring's WebClient.
The client offers first-class support for everyday search operations by exposing a familiar API close to the Java High Level REST Client, making cuts where needed. The combination of template and repository API lets you seamlessly transition to reactive data access without getting lost. The following listing shows how to configure Elasticsearch to use a reactive client:
Reactive Elasticsearch
class Config extends AbstractReactiveElasticsearchConfiguration {

  // …

  @Bean
  public ReactiveElasticsearchClient reactiveClient() {
    return ReactiveRestClients.create(localhost());
  }
}

@Autowired
ReactiveElasticsearchTemplate template;

// …

Criteria criteria = new Criteria("topics").contains("spring")
  .and("date").greaterThanEqual(today());

Flux<Conference> result = template.find(new CriteriaQuery(criteria), Conference.class);
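The same query could also be expressed through a reactive repository. The following is a sketch only, reusing the Conference type from above with an illustrative derived query method; the method name and parameter types are assumptions, not part of the original example:

interface ConferenceRepository extends ReactiveCrudRepository<Conference, String> {

  // derived query, translated into an Elasticsearch criteria query behind the scenes
  Flux<Conference> findByTopicsAndDateGreaterThanEqual(String topic, LocalDate date);
}

@Autowired
ConferenceRepository repository;

// …

Flux<Conference> result = repository.findByTopicsAndDateGreaterThanEqual("spring", today());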
Speaking of getting lost in transition: Querydsl offers a remarkable way of defining type-safe queries for several data stores and has been supported for non-reactive data access for quite a while already. To support it in reactive scenarios, we added a reactive execution layer that lets you run Predicate-backed queries. The ReactiveQuerydslPredicateExecutor, when added to the repository interface, provides all entry points, as the following example shows:
Reactive Querydsl
interface SampleRepository extends …, ReactiveQuerydslPredicateExecutor<…> {
  // …
}
@Autowired
SampleRepository repository;
// …
Predicate predicate = QCustomer.customer.lastname.eq("Matthews");
Flux<Customer> result = repository.findAll(predicate);
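Besides findAll(…), the executor exposes the other entry points you would expect from its imperative sibling, each returning a reactive type. A quick sketch, reusing the predicate from above (the sort property is illustrative):

Mono<Customer> single = repository.findOne(predicate);
Mono<Long> count = repository.count(predicate);
Mono<Boolean> exists = repository.exists(predicate);
Flux<Customer> sorted = repository.findAll(predicate, Sort.by("firstname"));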
Along the lines of the enhanced reactive support in Moore, we continued the Kotlin story that we started with the Lovelace Release. In particular, we provide several extensions for Kotlin Coroutines and Flow, offering methods such as awaitSingle() and asFlow(). The following example uses the awaitSingle() method:
Kotlin Coroutine Support
val result = runBlocking {

  operations.query<Person>()
    .matching(query(where("lastname").isEqualTo("Matthews")))
    .awaitSingle()
}
Another great enhancement that uses Kotlin language features was contributed by the community: a type-safe query DSL for the Spring Data MongoDB criteria API. It lets you transform code such as query(where("lastname").isEqualTo("Matthews")) into the following notation:
Kotlin type safe queries
val people = operations.query<Person>()
  .matching(query(Person::lastname isEqualTo "Matthews"))
  .all()
Along with crafting all these new features, we also took some time to investigate potential bottlenecks of the current implementations and found some areas for improvement. This included getting rid of Optional instances, capturing lambdas, and Stream executions in a lot of places, adding caches, and avoiding unnecessary lookup operations. In the end, the benchmarks showed an almost 60% increase in throughput for JPA single-attribute finder methods, such as findByTitle(…).
This is great and was worth the time it took! However, and I want to be clear about this, all benchmarks use clean-room scenarios that avoid any kind of overhead whatsoever. If you move them to a more real-world scenario (for example, by replacing an in-memory H2 database with an actual production-ready database), results look very different, as the performance bottlenecks shift to network interaction, query execution, and result transmission. The improvements are still visible but are usually down to single-digit percentages. The benchmarks can be found in this GitHub repository.
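For context, such a single-attribute finder method is nothing more than a derived query on a repository interface. A minimal sketch (the Book entity and its title property are made up for illustration):

interface BookRepository extends JpaRepository<Book, Long> {

  // derived query: select b from Book b where b.title = ?1
  List<Book> findByTitle(String title);
}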
We also refined our existing hooks to intercept an entity's lifecycle during persistence operations by moving away from the current ApplicationEvent-based approach to a more direct interaction model. The EntityCallback API introduces better support for immutable types, provides runtime guarantees, and seamlessly integrates into a reactive flow. Of course, we still support and publish ApplicationEvents, but we highly recommend switching to EntityCallbacks when changes to the processed entity should be made.
In the following sample, the BeforeConvertCallback modifies a given immutable entity by using a wither method that assigns an id to a copy of the entity, which is then returned and, in the next step, converted into the store-specific representation:
EntityCallback API
@Bean
BeforeConvertCallback<Person> beforeConvert() {

  return (entity, collection) -> {
    return entity.withId(…);
  };
}
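For illustration, such an immutable entity might look like the following sketch; the Person type and its properties are assumptions, not part of the original example:

class Person {

  private final String id;
  private final String lastname;

  Person(String id, String lastname) {
    this.id = id;
    this.lastname = lastname;
  }

  // wither method: returns a copy of the entity with the given id assigned
  Person withId(String id) {
    return new Person(id, this.lastname);
  }
}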
Unlike ApplicationEvents (which can be configured with an AsyncTaskExecutor, leaving it pretty much open when the action is executed), the EntityCallback API is guaranteed to be invoked right before the actual event is triggered, even in a reactive stream. The following listing shows how it works:
Reactive EntityCallback API
@Bean
ReactiveBeforeConvertCallback<Person> beforeConvert() {

  return (entity, collection) -> {
    return Mono.just(entity.withId(…));
  };
}
Speaking of streams, Spring Data Redis now has support for Redis Streams, which have almost nothing to do with reactive streams but are a new Redis append-only data structure that models a log where each entry consists of an id (typically a timestamp plus a sequence number) and multiple key/value pairs. Along with the usual suspects, such as adding to the log and reading from it, Spring Data Redis provides containers that allow infinite listening and processing of entries added to the log. It works like tail -f, but for a Redis Stream. The following example shows a Redis Stream listener:
Redis Streams listener
@Autowired
RedisConnectionFactory factory;

StreamListener<String, MapRecord<…>> listener = (msg) -> {

  // … msg.getId()
  // … msg.getStream()
  // … msg.getValue()
};

StreamMessageListenerContainer container = StreamMessageListenerContainer.create(factory);

container.receive(StreamOffset.fromStart("my-stream"), listener);
The StreamMessageListenerContainer in the preceding sample reads all existing entries of my-stream and gets notified about newly added ones. For each message received, the StreamListener is invoked. A single container can receive messages from multiple streams.
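Appending entries to the stream goes through the template API. The following is only a sketch, assuming a StringRedisTemplate bean and using an illustrative field name:

@Autowired
StringRedisTemplate redisTemplate;

// …

// roughly equivalent to: XADD my-stream * payload 42
redisTemplate.opsForStream()
    .add("my-stream", Collections.singletonMap("payload", "42"));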
Of course, stream-like structures are best consumed by a reactive infrastructure, as the following example shows:
StreamReceiver receiver = // …

receiver.receive(StreamOffset.fromStart("my-stream"))
  .doOnNext(msg -> {
    // …
  })
  .subscribe();
On the JPA side of things, a tiny improvement now lets you have multiple OUT parameters for stored procedures, which are returned within a Map. The following example shows how to do so:
Out parameters with JPA Stored Procedures
@NamedStoredProcedureQuery(name = "User.s1p", procedureName = "s1p",
  parameters = {
    @StoredProcedureParameter(mode = IN, name = "in_1", type = …),
    @StoredProcedureParameter(mode = OUT, name = "out_1", type = …),
    @StoredProcedureParameter(mode = OUT, name = "out_2", type = …)})
@Table(name = "SD_User")
class User { … }

interface UserRepository extends JpaRepository<…> {

  @Procedure(name = "User.s1p")
  Map<String, Integer> callS1P(@Param("in_1") Integer arg);
}
All of the OUT parameters declared in JPA's @StoredProcedureParameter annotations will eventually be available in the Map returned by the repository query method.
With MongoDB, complex data processing is done with aggregations, for which Spring Data offers a dedicated (fluent) API with abstractions for the operations and expressions. However, Stack Overflow taught us that people tend to craft their aggregations on the command line and translate them into Java code later on. That translation turned out to be one major pain point.
So we took the opportunity to introduce @Aggregation as a direct way to run aggregations in a repository method. The following example shows how to do so:
Declarative MongoDB Aggregations
interface OrderRepository extends CrudRepository<Order, Long> {

  @Aggregation("{ $group : { _id : '$cust_id', total : { $sum : '$amount' }}}")
  List<TotalByCustomer> totalByCustomer(Sort sort);

  @Aggregation(pipeline = {
    "{ $match : { customerId : ?0 }}",
    "{ $count : total }"
  })
  Long totalOrdersForCustomer(String customerId);
}
Like its relative, the @Query annotation, @Aggregation supports parameter replacement and adds sorting to the aggregation if provided by a query method argument, as shown in the preceding example. We even took it one step further, extracting single-attribute document values for methods that return simple types, such as the totalOrdersForCustomer method in the preceding example. The $count stage in this case returns a document like { "total" : 101 } that would normally require mapping to either a plain org.bson.Document or a corresponding domain type. However, since the method declares Long as its return type, we inspect the result document and extract and convert the value from there, removing the need for a dedicated type.
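For the first method, the $group stage emits documents carrying an _id and a total field, so a matching projection type is enough for the mapping. A possible sketch of TotalByCustomer, with property names chosen here to mirror the pipeline output rather than taken from the original example:

class TotalByCustomer {

  @Id
  private String customerId; // populated from the aggregation's _id
  private Double total;

  // getters and setters omitted
}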
To round things off for now, I want to mention some additional features across other modules. If you’re interested in all of them, please have a look at our release wiki or refer to the "What’s New" section in the reference documentation of the individual modules. So, without further ado, here are yet more improvements provided by this release:
GemFire/Apache Geode: Improved SSL support & dynamic port configuration
JDBC: Read-only properties, SQL generation & embeddable load options
REST: Making use of HATEOAS 1.0 and all the cool stuff in there!
MongoDB: Reactive GridFS, declarative collation support & JSON Schema generator
Neo4j: Spatial types & exists projections
Apache Cassandra: Range queries, optimistic locking & auditing support
Redis: Cluster caching & non-blocking connect methods
Elasticsearch: High Level REST Client support & non-Jackson-based entity mapping
If you’d like to know more, here's a 30-minute presentation recorded at SpringOne 2019 in Austin, TX.