This is the fifth article of a series about how to tackle various challenges you might encounter when using Spring Data JDBC. The series consists of:
Spring Data JDBC - How do I Make Bidirectional Relationships?
Spring Data JDBC - How Can I Do a Partial Update of an Aggregate Root?
Spring Data JDBC - How do I Generate the Schema for my Domain Model? (This article)
If you are new to Spring Data JDBC, start with its introduction and this article, which explains the relevance of aggregates in the context of Spring Data JDBC, to pick up the basic concepts.
With any Object Relational Mapper (ORM), you have to create two things, and they have to match each other: your domain model, consisting of Java classes, and your database schema.
Starting with version 3.2.0-M1, Spring Data Relational will help you do this. This article shows you how to make it work.
The first thing to do is to find a place to put the code for the schema generation. We recommend using a test for this. You can use the configuration of your main application from it, and it cannot run by accident in production.
The next thing to do is to get a RelationalMappingContext. This is the class at the core of Spring Data Relational, which is the parent to both Spring Data JDBC and Spring Data R2DBC. Once fully initialized, it holds all the mapping meta information about your aggregates. But this initialization happens lazily, so you have to register your aggregate roots yourself. Then you need to create a LiquibaseChangeSetWriter from it and use that to write out a Liquibase change set.
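The examples in this article use a small Minion aggregate. The article does not spell out its source, but judging from the change set generated below, it could look roughly like this sketch (the field types are inferred; @Id is Spring Data's org.springframework.data.annotation.Id):

class Minion {

    @Id
    Long id;

    String name;
}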
// context is a RelationalMappingContext that you autowire in your test.
context.setInitialEntitySet(Collections.singleton(Minion.class));
LiquibaseChangeSetWriter writer = new LiquibaseChangeSetWriter(context);
writer.writeChangeSet(new FileSystemResource("cs-minimum.yaml"));
For this to work, you need Liquibase in your dependencies:
<dependency>
    <groupId>org.liquibase</groupId>
    <artifactId>liquibase-core</artifactId>
</dependency>
Note: If you use Spring Boot, the Liquibase dependency will trigger schema initialization using Liquibase, which will fail because it cannot find any change sets. You can easily disable this by adding the following line to your application.properties:
spring.liquibase.enabled=false
If you run this test, you should find a file named cs-minimum.yaml in the root folder of your project:
databaseChangeLog:
- changeSet:
    id: '1692728224754'
    author: Spring Data Relational
    objectQuotingStrategy: LEGACY
    changes:
    - createTable:
        columns:
        - column:
            autoIncrement: true
            constraints:
              nullable: true
              primaryKey: true
            name: id
            type: BIGINT
        - column:
            constraints:
              nullable: true
            name: name
            type: VARCHAR(255 BYTE)
        tableName: minion
You should review this change set, modify it as desired, and put it in a location where Liquibase will pick it up. If you disabled schema initialization earlier, enable it again so that the change set actually gets applied.
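For example, if you copy the change set to Spring Boot's default change log location, the relevant properties could look like the following sketch. The path shown is just the Spring Boot default; adjust it to wherever you put the file.

# re-enable Liquibase; the change log path assumes the Spring Boot default location
spring.liquibase.enabled=true
spring.liquibase.change-log=classpath:db/changelog/db.changelog-master.yaml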
For the second release of your application, you probably have some updates to the database schema. Spring Data JDBC can help you with these as well.
In order to create such an incremental schema update, we need to provide the current state of the database. This is done with an instance of liquibase.database.Database, which you can create from a DataSource.
@Autowired
DataSource ds;

// ...

context.setInitialEntitySet(Collections.singleton(Minion.class));
LiquibaseChangeSetWriter writer = new LiquibaseChangeSetWriter(context);

try (Database db = new HsqlDatabase()) {
    db.setConnection(new JdbcConnection(ds.getConnection()));
    writer.writeChangeSet(new FileSystemResource("cs-diff.yaml"), db);
} catch (IOException | SQLException | LiquibaseException e) {
    throw new RuntimeException("Changeset generation failed", e);
}
The example above uses an HsqlDatabase. You would use the implementation that matches your actual database, for example PostgresDatabase for PostgreSQL.
By default, the change set never drops columns or tables from your schema.
Just because they are not modeled in the domain model does not mean that you do not need them, right?
However, if you actually want to delete some or all of the tables and columns not present in your Java domain model, register a DropTableFilter or DropColumnFilter, as in the following example, which drops all unmapped columns except those named special.
writer.setDropColumnFilter((table, column) -> !column.equalsIgnoreCase("special"));
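Filtering tables works the same way. The following one-liner is a sketch that assumes setDropTableFilter takes a predicate over the table name; legacy_audit is just a made-up example of an unmapped table you want to keep.

// drop all unmapped tables except one named "legacy_audit" (example name)
writer.setDropTableFilter(table -> !table.equalsIgnoreCase("legacy_audit"));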
Spring Data JDBC does not have annotations for specifying the exact database type for columns.
But it does offer a hook to use the types that you want.
You can provide a SqlTypeMapping to the LiquibaseChangeSetWriter.
writer.setSqlTypeMapping(((SqlTypeMapping) property -> {
    if (property.getName().equalsIgnoreCase("name")) {
        return "VARCHAR(500)";
    }
    return null;
}).and(new DefaultSqlTypeMapping()));
You have to implement just a single method of that interface: String getColumnType(RelationalPersistentProperty property). In the likely case that you want to modify the types for only some cases, you can combine it with a DefaultSqlTypeMapping, which will be used for all the cases where your implementation returns null, as shown in the example.
RelationalPersistentProperty has some very helpful methods, like findAnnotation, which gives you access to an annotation (including meta annotations) on the property or its owning entity. You can use this to control the database types of your domain model with your own annotations and meta annotations. For example, you may create a layer of annotations that specify database-level types and another, domain-specific, set of annotations built on top of the first, as demonstrated in the following code snippets:
@Retention(RetentionPolicy.RUNTIME)
public @interface Varchar {

    /**
     * The size of the varchar.
     */
    int value();
}

@Varchar(20)
@Retention(RetentionPolicy.RUNTIME)
public @interface Name {
}
You could then use this annotation to annotate properties in your domain model and use a matching SqlTypeMapping:
@Name
String name;
writer.setSqlTypeMapping(((SqlTypeMapping) property -> {
    if (!property.getType().equals(String.class)) {
        return null;
    }
    // findAnnotation will find meta annotations
    Varchar varchar = property.findAnnotation(Varchar.class);
    if (varchar == null) {
        return null;
    }
    return "VARCHAR(" + varchar.value() + ")";
}).and(new DefaultSqlTypeMapping()));
Schema generation does not currently support references; they are silently ignored. Of course, we will improve on that in the future.
If you are coming from JPA/Hibernate, you are used to a simple configuration that directly generates the schema in the database, and to having schema information as part of the mapping annotations. It's natural to ask why we chose a different way.
There are multiple answers to this question:
Applying schema changes automatically and directly to the database is risky: you can easily do things that you can recover from only by restoring a database backup. We do not think it is a good idea to condition developers to apply such changes without really seeing, let alone thinking about, what they are applying. That is why we generate the changes but leave applying them as a separate step, and it is also why we chose Liquibase for creating and managing changes.
Exact column types and similar schema details are a concern of the database schema, not of your Java domain model. Therefore, this kind of information should not be part of the mapping annotations used by Spring Data JDBC. Instead, it should be derived from your model in a way that is independent of Spring Data JDBC. We think the demonstrated meta annotation approach is a good way to do that.
With the current milestone and upcoming GA releases, Spring Data JDBC provides a flexible and powerful way to generate database migrations from your domain model. We look forward to hearing about your opinions and experiences with it.
The complete example code is available in the Spring Data Example repository.