Hi, Spring fans! This week I am in beautiful Tokyo, Japan, where I just spoke at the always lovely annual Spring Fest event. I loved the show and I hope that they got something out of my performance.
Last week was tough. Possibly the toughest week of my life. I didn’t publish an episode of A Bootiful Podcast as such. You won’t see that episode reflected on the blog because it was my heartbroken dedication to my father, who passed away last week at the age of 81. There is no interview in that brief, less-than-20-minute episode.
This blog post is the third in a series of posts that aim at providing a deeper look into Reactor’s more advanced concepts and inner workings.
In this post, we explore the threading model, how some (most) operators are concurrency agnostic, the
Scheduler abstraction, and how to hop from one thread to another mid-sequence with operators like
publishOn and subscribeOn.
This series is derived from the Flight of the Flux talk, whose content I found to be better suited to a blog post format.
The table below will be updated with links when the other posts are published, but here is the planned content:
Hi, Spring fans! Welcome to another installment of This Week in Spring! I just wrapped up my appearance in Brisbane, Australia, where I have been for the epic YOW! conference. Truly, one of my all-time favorite shows on the planet. I feel like an imposter in the ranks of the other speakers. I cannot recommend this show enough.
I’m just about to board my flight back to San Francisco, and we’ve got a ton of stuff to get to, so let’s press on!
- Spring Cloud Data Flow 2.3.0 GA Released
- Stream Processing with Spring Cloud Stream and Apache Kafka Streams. Part 6 - State Stores and Interactive Queries
- Stream Processing with Spring Cloud Stream and Apache Kafka Streams. Part 5 - Application Customizations
- Spring Data R2DBC goes GA
- Spring Boot 2.2.2 is now available
- Spring Boot 2.1.11 is now available
- In last week’s A Bootiful Podcast, I talked to Pivotal’s Katrina Bakas about the Pivotal HealthWatch product, Kubernetes, Cloud Foundry and so much more.
- Stream Processing with Spring Cloud Stream and Apache Kafka Streams. Part 4 - Error Handling
- Stream Processing with Spring Cloud Stream and Apache Kafka Streams. Part 3 - Data deserialization and serialization
- Spring Data Moore SR3 and Lovelace SR14 released
- Stream Processing with Spring Cloud Stream and Apache Kafka Streams. Part 2 - Programming Model Continued
- Spring Framework maintenance roadmap in 2020 (including 4.3 EOL)
- Spring Framework 5.2.2 and 5.1.12 available now
- Reactor team member Sergei Egorov’s new Daily Reactive series looks awesome! Well worth a read, too!
- Have you checked out this month’s This Month in RabbitMQ roundup yet?
- I love Trisha Gee’s tutorial series introducing reactive Spring Boot.
Stream Processing with Spring Cloud Stream and Apache Kafka Streams. Part 6 - State Stores and Interactive Queries
In this part (the sixth and final one of this series), we are going to look into the ways Spring Cloud Stream Binder for Kafka Streams supports state stores and interactive queries in Kafka Streams.
When you need to maintain state in the application, Kafka Streams lets you materialize that state information into a named state store. Several operations in Kafka Streams require it to keep track of state, such as
aggregations and windowing operations. Kafka Streams uses an embedded key-value store called RocksDB for maintaining this state store in most cases (unless you explicitly change the store type). By default, the information in the state store is also backed up to a changelog topic within Kafka, for fault-tolerance reasons.
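The changelog-backup idea can be illustrated with a plain-Java sketch. This is a conceptual model only, not the Kafka Streams API; all class and method names below are invented for illustration. A store mirrors every write into a changelog, and a fresh instance can replay that changelog to rebuild the lost state.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Conceptual model: a named key-value state store that mirrors every write
// into a changelog, the way Kafka Streams backs a RocksDB store with a
// changelog topic. All names here are hypothetical, not Kafka Streams API.
class ChangelogBackedStore {
    private final String name;
    private final Map<String, Long> store = new HashMap<>();
    private final List<Map.Entry<String, Long>> changelog = new ArrayList<>();

    ChangelogBackedStore(String name) { this.name = name; }

    void put(String key, long value) {
        store.put(key, value);
        changelog.add(Map.entry(key, value)); // every update is also "published"
    }

    Long get(String key) { return store.get(key); }

    List<Map.Entry<String, Long>> changelog() { return changelog; }

    // Fault tolerance: replaying the changelog reconstructs the exact
    // contents of a lost store.
    static ChangelogBackedStore restore(String name, List<Map.Entry<String, Long>> changelog) {
        ChangelogBackedStore fresh = new ChangelogBackedStore(name);
        for (Map.Entry<String, Long> entry : changelog) {
            fresh.store.put(entry.getKey(), entry.getValue());
        }
        return fresh;
    }
}
```

Replaying the log always yields the latest value written for each key, which is why a changelog topic (compacted, in Kafka's case) is enough to recover the store.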
Stream Processing with Spring Cloud Stream and Apache Kafka Streams. Part 5 - Application Customizations
In this blog post, we continue our discussion on the support for Kafka Streams in Spring Cloud Stream. We are going to elaborate on the ways in which you can customize a Kafka Streams application.
The Kafka Streams binder uses the StreamsBuilderFactoryBean, provided by the Spring for Apache Kafka project, to build the StreamsBuilder object that is the foundation of a Kafka Streams application. This factory bean is a Spring lifecycle bean. Oftentimes, this factory bean must be customized before it is started, for various reasons. As described in the previous blog post on error handling, you need to customize the
StreamsBuilderFactoryBean if you want to register a production exception handler. Let’s say that you want to register a custom production exception handler.
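As a sketch of one way to wire such a handler in, Kafka Streams' standard `default.production.exception.handler` property can be passed through the binder's configuration map. The handler class name below is made up for illustration; only the property key is Kafka Streams' own.

```yaml
spring:
  cloud:
    stream:
      kafka:
        streams:
          binder:
            configuration:
              # Kafka Streams' standard config key; the handler class is hypothetical
              default.production.exception.handler: com.example.MyProductionExceptionHandler
```

Customizing the StreamsBuilderFactoryBean directly, as the post describes, achieves the same registration programmatically.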
A Bootiful Podcast: Pivotal's Katrina Bakas on the Pivotal HealthWatch product, Kubernetes, Cloud Foundry, and so much more.
Stream Processing with Spring Cloud Stream and Apache Kafka Streams. Part 4 - Error Handling
Continuing with this series on the Spring Cloud Stream binder for Kafka Streams, in this blog post we look at the various error-handling strategies available in the Kafka Streams binder.
The error handling in Kafka Streams is largely centered around errors that occur during deserialization on the inbound and during production on the outbound.
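For the inbound (deserialization) side, one sketch of a configuration is to pass Kafka Streams' own `default.deserialization.exception.handler` property through the binder's configuration map, using the LogAndContinueExceptionHandler that ships with Kafka Streams; whether your binder version also exposes a dedicated property for this is an assumption to verify against its docs.

```yaml
spring:
  cloud:
    stream:
      kafka:
        streams:
          binder:
            configuration:
              # Log and skip records that fail to deserialize instead of failing the app
              default.deserialization.exception.handler: org.apache.kafka.streams.errors.LogAndContinueExceptionHandler
```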
Stream Processing with Spring Cloud Stream and Apache Kafka Streams. Part 3 - Data deserialization and serialization
Continuing from the previous two blog posts in this series on writing stream processing applications with Spring Cloud Stream and Kafka Streams, we now look at the details of how these applications handle deserialization on the inbound and serialization on the outbound.
All three major higher-level types in Kafka Streams -
KStream<K,V>, KTable<K,V>, and GlobalKTable<K,V> - work with a key and a value.
With Spring Cloud Stream Kafka Streams support, keys are always deserialized and serialized by using the native
Serde mechanism. A
Serde is a container object that provides both a deserializer and a serializer.
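A minimal plain-Java sketch of what a Serde pairs together might look like the following. It mirrors the shape of Kafka's org.apache.kafka.common.serialization.Serde, but the class below is a standalone illustration with invented names, not the Kafka interface.

```java
import java.nio.charset.StandardCharsets;
import java.util.function.Function;

// A Serde-like container: one object exposing both a serializer and a
// deserializer for the same type. Standalone illustration, not Kafka's API.
class SimpleSerde<T> {
    private final Function<T, byte[]> serializer;
    private final Function<byte[], T> deserializer;

    SimpleSerde(Function<T, byte[]> serializer, Function<byte[], T> deserializer) {
        this.serializer = serializer;
        this.deserializer = deserializer;
    }

    byte[] serialize(T value) { return serializer.apply(value); }

    T deserialize(byte[] bytes) { return deserializer.apply(bytes); }

    // A String serde, analogous in spirit to Kafka's Serdes.String()
    static SimpleSerde<String> string() {
        return new SimpleSerde<>(
                s -> s.getBytes(StandardCharsets.UTF_8),
                b -> new String(b, StandardCharsets.UTF_8));
    }
}
```

Bundling both directions in one object is what lets the framework pick a single configured Serde per type and use it on both the inbound and the outbound.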
Stream Processing with Spring Cloud Stream and Apache Kafka Streams. Part 2 - Programming Model Continued
On the heels of the previous blog post, in which we introduced the basic functional programming model for writing streaming applications with Spring Cloud Stream and Kafka Streams, in this part we are going to explore that programming model further.
Let’s look at a few scenarios.
If your application consumes data from a single input binding and produces data into an output binding, you can use Java’s Function interface to do that. Keep in mind that a binding in this sense is not necessarily mapped to a single input Kafka topic, because topics can be multiplexed and attached to a single input binding (with multiple comma-separated topics configured on a single binding). On the outbound, the binding maps to a single topic.
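The shape of such a handler is just java.util.function.Function. In a real Spring Cloud Stream application it would typically be declared as a @Bean of type Function<KStream<K,V>, KStream<K,V>>; the plain-JDK sketch below shows only the functional shape, with an invented class name.

```java
import java.util.function.Function;

// Plain-JDK sketch of the functional programming model: a single input
// transformed into a single output. In a Spring Cloud Stream app this would
// be a @Bean of type Function<KStream<K, V>, KStream<K, V>>.
class UppercaseProcessor {
    static Function<String, String> process() {
        return input -> input.toUpperCase();
    }
}
```

The framework binds the function's single argument to the input binding and its return value to the output binding, so no messaging code appears in the business logic.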