Spring Team
Mark Paluch

Spring Data Committer

Weinheim, Germany

Mark is a Software Craftsman, a Spring Data Engineer at Pivotal, a member of the CDI 2.0 expert group, and the lead of the lettuce Redis driver. He has been developing Java server-side, frontend, and web applications for over 12 years, and his focus is now on software architecture, Spring, and Redis clients.
Blog Posts by Mark Paluch

Spring Vault 1.0 goes GA

On behalf of the community, it’s my pleasure to announce the general availability of Spring Vault 1.0 – the very first GA release of Spring Vault after almost a year of development.

The artifacts are available from Maven Central and Bintray.

<dependency>
  <groupId>org.springframework.vault</groupId>
  <artifactId>spring-vault-core</artifactId>
  <version>1.0.0.RELEASE</version>
</dependency>

The release ships more than 50 tickets fixed in total. Here’s a very truncated list of the most important features shipping with the release:

Read more...

Spring Vault 1.0 RC1 is now available

On behalf of the community, I am pleased to announce Spring Vault 1.0 RC1.

The artifacts are available in the Milestone repo.

Spring Vault 1.0 RC1 includes 15 fixes, improvements, and dependency upgrades.

Here’s a short-list of the most important features shipping with the release:

  • Support for renewable @VaultPropertySource with credentials rotation.
  • Reshaped APIs, dropping VaultClient in favor of RestTemplate.
  • Added EnvironmentVaultConfiguration for simplified configuration without the need to create a derived configuration class (see the sketch after this list).
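
A rough sketch of what that simplification can look like: instead of subclassing a configuration class, the Environment-driven configuration is imported and the connection settings come from regular Spring properties such as vault.uri and vault.token. The property file name below is purely hypothetical.

import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
import org.springframework.context.annotation.PropertySource;
import org.springframework.vault.config.EnvironmentVaultConfiguration;

// Import the Environment-driven configuration instead of subclassing
// AbstractVaultConfiguration; vault.uri and vault.token are resolved
// from the Spring Environment.
@Configuration
@PropertySource("classpath:vault.properties") // hypothetical properties file
@Import(EnvironmentVaultConfiguration.class)
public class VaultConfig {
}
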
Read more...

What's New in Spring Data Release Ingalls?

As you probably have seen, we have just announced the GA release of Spring Data release train Ingalls. As the release is packed with way too many features to cover them in a release announcement, I would like to use this post to take a deeper look at the changes and features that come with the 15 modules on the train.

Housekeeping

A very fundamental change in the release train’s dependencies is the upgrade to Spring Framework 4.3 (currently 4.3.6) as the baseline. Other dependency upgrades are mostly driven by major version bumps of the underlying store drivers and implementations that need to be reflected in potential breaking changes to the API exposed by those modules.

Ingalls also ships with a new Spring Data module: Spring Data LDAP. The Spring LDAP project has shipped Spring Data repository support for quite a while. After a couple of glitches and incompatibilities, we decided to move LDAP repository support into a separate Spring Data module so that it is more closely aligned with the release train.

Another big change to the module setup is that Spring Data for Apache Cassandra has now become a core module, which means it is now, and will continue to be, maintained by the Spring Data team at Pivotal. This is a great opportunity to thank the previous core maintainers, David Webb and Matthew T. Adams, for all their efforts.

Besides those very fundamental changes, the team has been working on a whole bunch of new features:

  • Use of method handles for property access in the conversion subsystem.

  • Support for XML and JSON based projections for REST payloads (Commons).

  • Cross-origin resource sharing with Spring Data REST.

  • More MongoDB Aggregation Framework operators for array, arithmetic, date and set operations.

  • Support for Redis Geo commands.

  • Upgrade to Cassandra 3.0 with support for query derivation in repository query methods, user-defined types, Java 8 types (Optional, Stream), JSR-310, and ThreeTen Backport.

  • Support for Javaslang’s Option, collection and map types for repository query methods.

These are the ones that I would like to discuss in the remainder of this post.
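
To give a flavor of the query method support mentioned above, here is a minimal, hypothetical repository using query derivation together with Java 8 Optional and Stream return types. The Person domain type and its properties are assumptions for illustration.

import java.util.Optional;
import java.util.stream.Stream;

import org.springframework.data.repository.CrudRepository;

// Hypothetical repository illustrating query derivation and Java 8 return types.
public interface PersonRepository extends CrudRepository<Person, String> {

  // Derived query returning a Java 8 Optional.
  Optional<Person> findOneByFirstnameAndLastname(String firstname, String lastname);

  // Derived query streaming its results.
  Stream<Person> findByLastname(String lastname);
}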

Read more...

Going reactive with Spring Data

Last week's Spring Data Kay M1 is the first release ever that comes with support for reactive data access. Its initial set of supported stores — MongoDB, Apache Cassandra and Redis — all ship reactive drivers already, which made them very natural candidates for such a prototype. Let's take a more detailed look at the new programming model and the APIs that make up that support.

Reactive Repositories

The repositories programming model is the most high-level abstraction Spring Data users usually deal with. A repository typically consists of a set of CRUD methods defined in a Spring Data provided interface plus domain-specific query methods. Here's what a reactive Spring Data repository definition would look like:

public interface ReactivePersonRepository
  extends ReactiveCrudRepository<Person, String> {

  Flux<Person> findByLastname(Mono<String> lastname);

  @Query("{ 'firstname': ?0, 'lastname': ?1}")
  Mono<Person> findByFirstnameAndLastname(String firstname, String lastname);
}

As you can see, there's not too much difference from what you're used to. However, in contrast to traditional repository interfaces, a reactive repository uses reactive types as return types and can do so for parameter types, too. The CRUD methods in the newly introduced ReactiveCrudRepository, of course, make use of these types as well.

By default, reactive repositories use Project Reactor types, but other reactive libraries can be used as well. We provide custom repository base interfaces (e.g. RxJava1CrudRepository) for those and also automatically adapt the types as needed for query methods, e.g. RxJava's Observable and Single. The rest basically stays the same. Note, however, that the current milestone does not support pagination yet, and you of course need the necessary reactive libraries on the classpath to activate support for a particular library.
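
As a quick sketch (reusing the hypothetical Person type from the example above), an RxJava 1 variant of such a repository could look like this:

import rx.Observable;
import rx.Single;

import org.springframework.data.repository.reactive.RxJava1CrudRepository;

// Hypothetical RxJava 1 flavored repository; query methods use RxJava types.
public interface RxJavaPersonRepository extends RxJava1CrudRepository<Person, String> {

  Observable<Person> findByLastname(String lastname);

  Single<Person> findFirstByLastname(String lastname);
}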

Activating reactive Spring Data

Similarly to what we have in the blocking world, the support for reactive Spring Data is activated through an @Enable… annotation alongside some infrastructure setup:

@Configuration
@EnableReactiveMongoRepositories
public class AppConfig extends AbstractReactiveMongoConfiguration {

  @Bean
  public MongoClient mongoClient() {
    return MongoClients.create();
  }

  @Override
  protected String getDatabaseName() {
    return "reactive";
  }
}

See how we use a different base class for the infrastructure configuration, as we need to make use of the MongoDB async driver.
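
To round things off, here is a hypothetical component that consumes the ReactivePersonRepository declared earlier; the getFirstname() accessor on Person is an assumption for illustration.

import org.springframework.stereotype.Component;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

// Hypothetical consumer of the ReactivePersonRepository shown earlier.
@Component
class PersonLookup {

  private final ReactivePersonRepository repository;

  PersonLookup(ReactivePersonRepository repository) {
    this.repository = repository;
  }

  Flux<String> firstnamesByLastname(String lastname) {
    return repository.findByLastname(Mono.just(lastname))
        .map(Person::getFirstname); // assumes Person exposes getFirstname()
  }
}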

Read more...

Spring Vault and Spring Cloud Vault 1.0.0.M1 are now available

On behalf of the community, I am pleased to announce the first milestone releases of Spring Vault and Spring Cloud Vault 1.0.0.M1.

The artifacts are available in the Milestone repo.

What is Spring Vault and Spring Cloud Vault?

Spring Vault is a client for HashiCorp Vault that provides familiar Spring abstractions. It comes with @VaultPropertySource, which exposes encrypted properties from Vault to the Environment, and VaultTemplate to access secrets stored and encrypted inside Vault.

@Configuration
@VaultPropertySource("secret/my-application")
public class AppConfig extends AbstractVaultConfiguration {

    /**
     * Specify an endpoint for connecting to Vault.
     */
    @Override
    public VaultEndpoint vaultEndpoint() {
        return VaultEndpoint.create("localhost", 8200);
    }

    /**
     * Configure a client authentication.
     */
    @Override
    public ClientAuthentication clientAuthentication() {
        return new TokenAuthentication("…");
    }
}
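
With a configuration like this in place, the VaultTemplate bean can be injected (here through the VaultOperations interface) and used to read secrets. A minimal usage sketch; the "api-key" entry below is purely illustrative:

import org.springframework.stereotype.Component;
import org.springframework.vault.core.VaultOperations;
import org.springframework.vault.support.VaultResponse;

// Hypothetical component reading a secret through the injected VaultOperations.
@Component
class ApiKeyProvider {

    private final VaultOperations vaultOperations;

    ApiKeyProvider(VaultOperations vaultOperations) {
        this.vaultOperations = vaultOperations;
    }

    String apiKey() {
        // "secret/my-application" matches the @VaultPropertySource above;
        // the "api-key" entry is an assumed example.
        VaultResponse response = vaultOperations.read("secret/my-application");
        return (String) response.getData().get("api-key");
    }
}
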
Read more...

Managing your Database Secrets with Vault

In my previous post about Managing Secrets with Vault, I introduced you to Vault and how to store arbitrary secrets using the generic secret backend. Vault can manage more than just secret data like API keys, passwords, and other sensitive string-like data. Today we’re taking a look at Vault’s integration with databases, services, and certificates.

Database credentials tend to be static

When it comes to databases, the regular workflow for getting credentials is asking an operator or a self-service tool to issue credentials so your application can log into the database. From that point on, the credentials are effectively static: they usually only change when the database is migrated or when there is a security breach.

Read more...

Spring Data Release Train Ingalls M1 Released

On behalf of the Spring Data team, I’m happy to announce the first milestone of the Ingalls release train. The release ships 230 tickets fixed! The most noteworthy new features are:

  • Use of method handles for property access in the conversion subsystem (Commons, MongoDB).
  • Upgrade to Cassandra 3.0 for Spring Data Cassandra (see the updated examples for details).
  • Support for declarative query methods for Cassandra repositories.
  • Support for Redis geo commands.
  • Any-match mode for query-by-example (see the sketch after this list).
  • Support for XML and JSON based projections for REST payloads (see the example for details).
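
A minimal sketch of the any-match mode, assuming a Person type with setters and a repository that implements QueryByExampleExecutor:

import org.springframework.data.domain.Example;
import org.springframework.data.domain.ExampleMatcher;
import org.springframework.data.repository.query.QueryByExampleExecutor;

// Hypothetical lookup using the any-match mode of query-by-example.
class AnyMatchLookup {

  private final QueryByExampleExecutor<Person> repository;

  AnyMatchLookup(QueryByExampleExecutor<Person> repository) {
    this.repository = repository;
  }

  Iterable<Person> byFirstnameOrLastname(String firstname, String lastname) {

    // Probe carrying the values to match; unset properties are ignored.
    Person probe = new Person();
    probe.setFirstname(firstname);
    probe.setLastname(lastname);

    // matchingAny() combines the individual predicates with OR instead of AND.
    ExampleMatcher matcher = ExampleMatcher.matchingAny().withIgnoreCase("lastname");

    return repository.findAll(Example.of(probe, matcher));
  }
}
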
Read more...

Managing Secrets with Vault

Passwords, API keys and confidential data fall into the category of secrets. Storing secrets securely is a challenge: access has to be limited and the storage itself has to be truly secure. Let's take a look at HashiCorp Vault and how you can use it to store and access secrets.

How do you store Secrets?

Passwords, API keys, secure tokens, and confidential data fall into the category of secrets.
That's data that shouldn't lie around: it mustn't be available in plaintext in easy-to-guess locations and, in fact, it must not be stored in plaintext in any location.

Read more...

Spring Data release train Hopper SR2 released

On behalf of the Spring Data team, I'd like to announce the availability of the second service release of the Spring Data Hopper release train. The release ships 103 issues fixed. We fixed a couple of bugs in the area of repository projections, especially for JPA users, and introduced Hibernate 5.2 compatibility with this release (already back-ported to the Gosling release train for inclusion in its upcoming service release). Hopper SR2 is a recommended upgrade for all Hopper users as well as for users of previous release trains.

Read more...