The Spring Blog

News and Events

Spring for Apache Hadoop 2.2 M1 released

We are pleased to announce the Spring for Apache Hadoop 2.2 M1 milestone releases.

We continue to provide version specific artifacts with their respective transitive dependencies in the Spring IO milestone repository:

  • 2.2.0.M1 (default - Apache Hadoop stable 2.6.0)
  • 2.2.0.M1-phd21 (Pivotal HD 2.1)
  • 2.2.0.M1-phd30 (Pivotal HD 3.0)
  • 2.2.0.M1-cdh5 (Cloudera CDH5)
  • 2.2.0.M1-hdp22 (Hortonworks HDP 2.2)
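As a sketch of how a milestone artifact like these is typically consumed from a Maven build (the repository URL and artifact coordinates below follow the usual Spring conventions for this project; verify them against the project page before use):

```xml
<repositories>
  <repository>
    <id>spring-milestones</id>
    <name>Spring Milestones</name>
    <url>https://repo.spring.io/milestone</url>
  </repository>
</repositories>

<dependencies>
  <dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-hadoop</artifactId>
    <version>2.2.0.M1</version>
  </dependency>
</dependencies>
```

To target a specific distribution, swap the version for one of the variants listed above (for example `2.2.0.M1-cdh5`).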

The most important enhancements in the Spring for Apache Hadoop 2.2 M1 release are:

  • Remove support for running with JDK 6; Java 7 or later is now required
  • Improvements to the HDFS writer to support syncable writes and a new timeout option
  • Add support for Pivotal HD 3.0
  • Update Cloudera CDH 5 to use version 5.3.3
  • Update the Hortonworks HDP 2.2 version
  • Update Kite SDK to version 1.0
  • Update Spring project versions to the latest

Spring XD 1.2 M1 and 1.1.2 released

On behalf of the Spring XD team, I am very pleased to announce the first milestone release of Spring XD 1.2 and the 1.1.2 maintenance release.

Download Links:

  • 1.1.2.RELEASE: zip
  • 1.2.0.M1: zip

You can also install using brew or rpm.

The full list of issues fixed for 1.1.2 is available in JIRA. Of note, the 1.1.2 release provides PHD 3.0 support.

The 1.2 M1 release includes bug fixes as well as several new features and enhancements:

  • PHD 3.0 support
  • MongoDB Source, a community contribution from Abhinav Gandhi
  • Module registry backed by HDFS
  • Greenplum gpload provided as a batch job, allowing efficient loading from CSV files into Greenplum DB/HAWQ.
  • gpfdist sink that adheres to the gpfdist protocol, allowing data to be streamed in parallel into Greenplum DB/HAWQ.
  • ZooKeeper distributed-queue-based deployment for streams and jobs.
  • Improved error handling for RabbitMQ, with dead letter queue and durable queue support for pub/sub named channels (tap: and topic:)
  • Sqoop integration improvements: support for the merge and codegen commands, as well as running against a secured Hadoop cluster.
  • Kafka message bus improvements: customizable partition count for topics created by the message bus (module.[modulename].producer.minPartitionCount)
  • Improved performance characteristics for TupleBuilder and the JDBC to HDFS job
  • Spark Streaming integration improvements: reliable receiver support and bug fixes.
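As an illustrative sketch of how the new MongoDB source and the tap named channels fit into the XD shell stream DSL (module options are omitted here; the module and sink names shown are the standard ones, but consult the reference guide for the actual options):

```
xd:> stream create --name mongoIngest --definition "mongodb | log" --deploy
xd:> stream create --name mongoTap --definition "tap:stream:mongoIngest > file" --deploy
```

The first stream reads documents from MongoDB and logs them; the second taps the live data flowing through `mongoIngest` without disturbing it.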

Spring LDAP 2.0.3 Released

I’m pleased to announce the release of Spring LDAP 2.0.3.RELEASE. The highlights of this release include:

  • LDAP-330 - Support for Spring Data Commons 1.10 (Spring Data Fowler)
  • LDAP-304 - NullPointerException DirContextAdapter.collectModifications
  • LDAP-314 - repository methods ignoring @Entry(base=)
  • LDAP-317 - ldap:context-source/url not parsing properties #{}
  • LDAP-321 - IllegalStateException: No value for key PoolingContextSource

For additional information on the release, refer to the changelog.

Project Site | Reference | Issues


This Week in Spring - April 29th, 2015

Welcome to another installment of This Week in Spring! This week, I’m in Barcelona, Spain for the Spring I/O conference.

(can you spot your favorite Spring team or community member?)


SpringOne2GX 2014 Replay: Building highly modular and testable business systems with Spring Integration

Recorded at SpringOne2GX 2014.

Speaker: Marius Bogoevici

Data / Integration Track


By its very nature, Spring Integration allows for building sophisticated business systems that aggregate multiple sources of data and orchestrate a complex set of business services. But complex functionality doesn’t have to translate into complex design. In fact, through its emphasis on low coupling, Spring Integration fosters a highly modular application design, with huge benefits in terms of understandability, reusability and testability. In this session you will learn how to design your Spring Integration applications in a modular fashion by grouping logically related components into subsystems that interact with each other; this is a core concept of Spring XD, but it can be successfully applied in any application. Besides the benefit of a heightened level of abstraction, this approach has a number of other important benefits: first, such subsystems are reusable, and secondly, and equally important, they can be tested in isolation. So, after a brief discussion of reusability, the presentation will focus on how to unit test such subsystems and even complete Spring Integration applications, with the ultimate goal of applying business-centric techniques such as Behaviour-Driven Development.


SpringOne2GX 2014 Replay: Server-side JavaScript with Nashorn and Spring

Recorded at SpringOne2GX 2014.

Speakers: Topher Bullock, Will Tran

Web / JavaScript Track


To stay competitive, enterprises are scrambling to find ways to rapidly deliver applications that are a pleasure to use on a wide range of devices. Microservice architectures, continuous delivery and the cloud can give businesses the agility to transform into great software businesses, but how do you actually turn those buzzwords into reality? Here we present our take on a solution. Using Spring Boot, Java 8’s Nashorn JavaScript engine, and Cloud Foundry, we’ve created a framework that makes it really easy to deliver APIs to support the rich and highly contextualized experiences that users expect in world-class applications. We’d like to share with you what we’ve built, and what we’ve learned along the way.


SpringOne2GX 2014 Replay: Efficient Client-Server Communication with Differential Synchronization and JSON Patch

Recorded at SpringOne2GX 2014.

Speaker: Brian Cavalier

Web / JavaScript Track


The world of client-server has changed. The traditional application of REST is no longer the best fit. We're deploying applications into a world where users expect responsive UIs on all their devices, even while disconnected. We're deploying into a world where connection latency, mobile radio usage and battery life have become primary concerns. Differential Synchronization (DS) is an algorithm that syncs data across N parties, even in the face of dropped connections, offline devices, etc. It makes more efficient use of connections by batching and sending only changes, in both directions, from client to server and from server to client. We’ll look at how it can be used with JSON Patch to synchronize application data between clients and servers over HTTP PATCH, WebSocket, and STOMP, and how it can be integrated into the Spring ecosystem.


Spring Social Facebook 2.0.1 Released

I’m pleased to announce the release of Spring Social Facebook 2.0.1.RELEASE. This maintenance release addresses a handful of bugs that were discovered following the 2.0.0.RELEASE two weeks ago. For complete details regarding this release, see the changelog.

Note that if you’re using Spring Social Facebook with Spring Boot, the Spring Boot starter for Spring Social Facebook still references 1.1.1.RELEASE. You can override that by explicitly declaring the 2.0.1.RELEASE dependency in your Maven or Gradle build. See the Spring Social Showcase Spring Boot example for how this is done.
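In a Gradle build, overriding the version pulled in by the starter can look like the following sketch (the coordinates follow the standard Spring Boot and Spring Social group ids; double-check them against your dependency tree):

```groovy
dependencies {
    compile("org.springframework.boot:spring-boot-starter-social-facebook")
    // Explicitly declare the newer version to override the starter's
    // transitive 1.1.1.RELEASE dependency:
    compile("org.springframework.social:spring-social-facebook:2.0.1.RELEASE")
}
```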


Binding to Data Services with Spring Boot in Cloud Foundry

In this article we look at how to bind a Spring Boot application to data services (JDBC, NoSQL, messaging, etc.) and at the various sources of default and automatic behaviour in Cloud Foundry, providing some guidance about which ones to use and which ones will be active under what conditions. Spring Boot provides a lot of autoconfiguration and external binding features, some of which are relevant to Cloud Foundry, and many of which are not. Spring Cloud Connectors is a library that you can use in your application if you want to create your own components programmatically, but it doesn’t do anything “magical” by itself. And finally there is the Cloud Foundry Java buildpack, which has an “auto-reconfiguration” feature that tries to ease the burden of moving simple applications to the cloud. The key to correctly configuring middleware services, like JDBC or AMQP or Mongo, is to understand what each of these tools provides, how they influence each other at runtime, and how to switch parts of them on and off.

The goal should be a smooth transition from local execution of an application on a developer’s desktop to a test environment in Cloud Foundry, and ultimately to production in Cloud Foundry (or otherwise), with no changes in source code or packaging, per the twelve-factor application guidelines.
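To make the Spring Cloud Connectors piece concrete, creating a component programmatically usually takes the shape of a configuration class like the sketch below. This only resolves a connection when running in an actual cloud environment, and the service name "mydb" is a placeholder, not something from the article:

```java
import javax.sql.DataSource;

import org.springframework.cloud.Cloud;
import org.springframework.cloud.CloudFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
@Profile("cloud")
public class CloudDataSourceConfig {

    @Bean
    public DataSource dataSource() {
        // CloudFactory detects the environment (on Cloud Foundry,
        // via VCAP_SERVICES) and throws if none is found
        Cloud cloud = new CloudFactory().getCloud();
        // Look up the bound service by name and build a connector for it;
        // "mydb" stands in for whatever service is actually bound
        return cloud.getServiceConnector("mydb", DataSource.class, null);
    }
}
```

Guarding the bean with the "cloud" profile keeps local development on whatever DataSource Spring Boot autoconfigures from your local settings.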