On behalf of the team, I am excited to announce the release of the first milestone of Spring Cloud Data Flow 1.2.
Note: A great way to start using this new release is to follow the release matrix on the project page, which includes the download coordinates and links to the reference guide.
Over the last few weeks, we have added new features and improvements to the overall orchestration of data microservices. The following new features were included in the 1.2.0.M1 release:
Core
Introduce a dedicated prefix for deployment properties. Deployer properties can now be supplied as deployer.<appname>.xxx instead of the longer app.<appname>.spring.cloud.deployer.xxx
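As a quick sketch (the app name and property below are illustrative), a deployer-level setting that previously required the full spring.cloud.deployer prefix now reads:

```
# before (long form)
app.http.spring.cloud.deployer.memory=512m

# 1.2.0.M1 onward (dedicated prefix)
deployer.http.memory=512m
```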
Introduce a new REST-API controller and shell support to clean up Task Executions
Foundation work to consolidate the use of controllers between Task deployments and Task Executions
Consolidate REST-API call traces and return codes for consistency
Performance optimizations to the “stream list” operation. Instead of making an individual call for each app associated with a stream, the newly introduced MultiStateAppDeployer SPI operation queries the statuses of all of a stream’s applications in a single network call
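The batching idea can be sketched with a small, self-contained example; the interface shape, names, and statuses below are hypothetical stand-ins, not the actual MultiStateAppDeployer SPI:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the optimization: instead of one network round trip
// per app, a single bulk call resolves the state of every app in a stream.
// All names and return types here are hypothetical.
public class BatchedStatusSketch {

    // Old path: one round trip per app, so N apps cost N calls.
    static String statusOf(String deploymentId) {
        return "deployed"; // stand-in for a remote status lookup
    }

    // New path: one round trip per stream, whatever the app count.
    static Map<String, String> statesOf(String... deploymentIds) {
        Map<String, String> states = new HashMap<>();
        for (String id : deploymentIds) {
            states.put(id, "deployed"); // resolved server-side in one query
        }
        return states;
    }

    public static void main(String[] args) {
        List<String> apps = Arrays.asList("ticktock.time", "ticktock.log");

        // Per-app lookups: two network calls for a two-app stream.
        for (String app : apps) {
            System.out.println(app + ": " + statusOf(app));
        }

        // Bulk lookup: one call covering the whole stream.
        System.out.println(statesOf(apps.toArray(new String[0])));
    }
}
```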
Improve error reporting for the “stream list” operation
Dashboard
Adds a convenient option in the “About” tab of the Dashboard to download a compatible Shell application
Adds connectivity between Tasks and Batch-jobs in the Dashboard. The batch-job “details view” can be accessed from the Task-list page and likewise, the task “details view” can be accessed from the Batch-list page.
Adds role-based access control integration to the Dashboard
The following new applications were added and are targeted for the upcoming Bacon release-train:
MongoDB Sink
PGCopy Sink
Aggregator Processor
Header-enricher Processor
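Once released, these starters can be composed like any other out-of-the-box application. For instance, a stream draining an HTTP endpoint into MongoDB might be defined from the shell as follows (the stream name is illustrative; connection properties for the sink are omitted):

```
dataflow:> stream create --name http-to-mongo --definition "http | mongodb" --deploy
```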
Add improvements to the core app-generation framework in the app-starters project that allow selectively upgrading dependency versions. Spring Boot, Spring Integration, or any other dependency can now be upgraded independently at each application level, and Kafka, RabbitMQ, or other binder-based applications can be generated more easily.
Include core foundation work to support Docker artifacts as first-class citizens in the shell, DSL, and UI.
The ability to orchestrate “compositions of batch-jobs or tasks” is making progress. A new set of DSL primitives to support this from the shell and UI is underway, too.
Significant refactoring of the core constructs around controllers, the DSL, and the REST-APIs is underway to support “application grouping” functionality. Beyond orchestrating Spring Cloud Stream or Spring Cloud Task applications, this new model would allow orchestration of any Spring Boot application. There will be an option to define application groups, and such “groups” can be tagged with “labels”, making it easy to perform group operations at the “label” level, such as group-deploy or group-destroy. For example, a stream is a specialization of a “group” that includes source, processor, and sink applications.
We envision further evolving the “application grouping” capability toward stream versioning, too. Stay tuned!
A few of us from the Spring Cloud Data Flow team will be at DevNexus next week. Please do consider attending the sessions to learn more about these capabilities.