Debugging DSLD Scripts

Engineering | Andrew Eisenberg | August 02, 2011 | ...

Not too long ago, I introduced DSL descriptors (DSLDs) for Groovy-Eclipse. DSLDs are Groovy scripts that provide rich editing support (content assist, navigation, etc.) for Groovy projects in your Eclipse workspace. Since DSLDs can only be executed inside a running Eclipse process, debugging is not as simple as firing up the Eclipse debugger and stepping through a Groovy script. In this post, I'll describe some simple and some more complex techniques that you can use for debugging your DSLDs.

To get all of this working, you will need the latest development builds:

Simple and crude

The simplest and crudest way to debug your DSLDs is by using println. This will print expressions to the standard out of the running Eclipse process, which can be seen if you launched your Eclipse from the command line. However, I recommend using a log statement instead. This will print logging information to the Groovy event console.

Spring Security Configuration with Scala

Engineering | Luke Taylor | August 01, 2011 | ...

In a previous article, Behind the Spring Security Namespace, I talked about how the Spring Security namespace has been very successful in providing a simple alternative to plain Spring bean configuration, but how there is still a steep learning curve when you want to start customizing its behaviour. Behind the XML elements and attributes, various filters and helper strategies are created and wired together, but, short of reading the code which handles the XML parsing, there is no easy way of working out which classes are involved or the details of how they interact.

For some time now, we've been trying to come up with an alternative Java-based solution using Spring's @Configuration classes that retains the simplicity of the XML namespace but also makes the underlying behavior more transparent and easier to customize. While theoretically possible, no Java-based solution seemed to meet…

Fine-tuning Spring Data repositories

Engineering | Oliver Drotbohm | July 27, 2011 | ...

It's only been a few days since we released Spring Data JPA 1.0 GA, which is the first major version of a Spring Data project shipping with an implementation of the repository abstraction inside our Spring Data Commons module. The repository abstraction consists of three major parts: defining a repository interface, exposing CRUD methods and adding query methods. Adding query methods was discussed in detail in the first Spring Data JPA blog post, but defining a repository interface and exposing CRUD methods triggered quite some questions in earlier blog posts. That's why we will have a…

This week in Spring: July 26th, 2011

Engineering | Josh Long | July 26, 2011 | ...

Welcome back to another installment of This Week in Spring! This week finds @springsource at OSCON (and OSCON Java and OSCON Data) in Portland, OR. If you're here, come visit our booth in the exhibition hall or check the schedule for any of the numerous Spring-talks!

If you missed us at OSCON, or if you're simply looking for an even better Spring experience, be sure to register for SpringOne 2GX 2011, the premier event for Spring, Grails and CloudFoundry developers. SpringOne 2GX is a one-of-a-kind conference for application developers, solution architects, web operations and IT teams who develop business applications, create multi-device aware web applications, design cloud architectures, and manage high performance infrastructure. The sessions are specifically tailored for developers using the hugely popular open source Spring technologies, Groovy & Grails, and Tomcat. Whether you're building and running mission-critical business applications or designing the next killer cloud application, SpringOne 2GX will keep you up to date with the latest enterprise technology.

  1. OSCON's great, but I will be taking an hour to watch the webinar, Getting Started with Spring Data Redis, for North America and Europe.
    You should too: Redis is an open source, advanced key-value store known for its excellent performance, its small footprint and embeddability. The Spring Data project makes it easier to build Spring-powered applications that use new data access technologies such as non-relational "NOSQL" databases and cloud based data services. Check it out!
    
  2. Spring Data Graph 1.1.0.RC1 with Neo4j support Released
    The key changes in the Spring Data Graph 1.1.…

This week in Spring: July 19th, 2011

Engineering | Josh Long | July 20, 2011 | ...

Welcome back to another installment of This Week in Spring. Lots of good stuff to get into, so let's get to it.

  1. The video from Grails Advocate Peter Ledbrook's webinar, "Tuning Your Grails Applications," has been made available here. Lots of great tidbits for web developers in general, as well as Grails developers in specific. Be sure to check out the other great content on the SpringSource YouTube channel.
  2. OSCON is right around the corner, and SpringSource is going to be there in full force! Come see yours truly (Josh Long), Steve Mayzak, Ezra Zygmuntowicz, Derek Collison, Bruce Snyder, David McCrory, James Watters and others talk about Spring, CloudFoundry, and much more at OSCON (as well as OSCON Java, and OSCON Data!). Also, feel free to check out our booth where we'd be happy to answer questions, introduce you to new technologies and meet and greet. Going to be there? Let us know, send us a message on Twitter to @SpringSource.
  3. Spring Data Redis 1.0.0.M4 has been released. The new release features several improvements. My favorite? A Spring 3.1 CacheManager implementation that uses Redis! Out of the box, Spring 3.1's cache abstraction supports a CacheManager implementation based on the java.util.Map<K,V> interface, and an Ehcache implementation. The Spring Gemfire project ships with a CacheManager implementation that delegates to GemFire, as well. This new Redis implementation adds to the raft of options available already, and Spring 3.1's not even GA!
  4. Speaking of Redis, check out this upcoming webinar, Getting Started with Spring Data Redis. From the description, "This webinar will introduce Redis, its data structures, the fundamental concepts behind it and the Redis support in Spring Data, and will showcase how easy it is to get started and scale out into a cloud environment such as Cloud Foundry." Be sure to tune in!
  5. Spring Integration 2.0.5 has just been released.
    This release addresses 48 issues, of which roughly half were bugs and half were improvements. For details, see the Release Notes.
  6. Dr. David Syer, lead of the Spring Batch project, lead of the Spring Hadoop project, committer on just about everything else, and all-around nice guy, has just posted an amazingly clear…

Social Coding: Pull Requests - What to Do When Things Get Complicated

Engineering | Dave Syer | July 18, 2011 | ...

Scenario: you want to contribute some code to an open source project hosted on a public git repository service like github. Lots of people make pull requests to projects I'm involved in and many times they are more complicated to merge than they need to be, which slows down the process a bit. The basic workflow is conceptually simple:

  1. fork a public open source project
  2. make some changes to it locally and push them up to your own remote fork
  3. ask the project lead to merge your changes with the main codebase

and there is an excellent account of this basic workflow in a blog by Keith Donald.

Complications arise when the main codebase changes in between the time you fork it and the time you send the pull request, or (worse) you want to send multiple pull requests for different features or bugfixes, and need to keep them separate so the project owner can deal with them individually. This tutorial aims to help you navigate the complications using git.

The descriptions here use github domain language ("pull request", "fork", "merge" etc.), but the same principles apply to other public git services. We assume for the purposes of this tutorial that the public project is accepting pull requests on its master branch. Most Spring projects work that way, but some other public projects don't. You can substitute the word "master" below with the correct branch name and the same examples should all be roughly correct.

To help you follow what's going on locally, the shell commands below beginning with "$" can be extracted into a script and run in the order they appear. The endpoint should be a local repository in a directory called "work" that has an origin linked to its master branch (simulating the remote public project) and two branches on a private fork. The two branches have the same contents at their heads, but different commit histories (as per the ASCII diagram at the bottom).

The Two Remote Repositories

If you are going to send a pull request, there are two remote repositories in the mix: the main public project, and the fork where you push your changes.

It's a matter of taste to some extent, but what I like to do is make the main project the remote "origin" of my working copy, and use my fork as a second remote called "fork". This makes it easy to keep track of what's happening in the main project because all I have to do is

# git fetch origin

and all the changes are available locally. It also means that I never get confused when I do my natural git workflow

# git checkout master
# git pull --rebase
... build, test, install etc ...

which always brings me up to date with the main project. I can keep my fork in sync with the main project simply by doing this after a pull from master:

# git push fork
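In the real github scenario described below (where "repo" stands for git://github.com/SpringSource/repo.git and the fork lives under your own account), git remote -v in the working copy would then show something like this; the URLs are just placeholders for whatever your real project and fork use:

# git remote -v
origin  git://github.com/SpringSource/repo.git (fetch)
origin  git://github.com/SpringSource/repo.git (push)
fork    git@github.com:myuserid/repo.git (fetch)
fork    git@github.com:myuserid/repo.git (push)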

Initial Set Up

Let's create a simple "remote" repo to work with in a sandbox. Instead of using a git service provider we'll just do it locally in your filesystem (using UN*X commands as an example).

$ rm -rf repo fork work
$ git init repo
$ (cd repo; echo foo > foo; git add .; git commit -m "initial"; git checkout `git rev-parse HEAD`)

(The last checkout there was to leave the repository in a detached head state, so we can later push to it from a clone.) From now on, pretend "repo" is a public github project (e.g. git://github.com/SpringSource/repo.git).

The "fork" URL in this clone command would be something like [email protected]/myuserid/repo.git. Now we'll create the fork. This is equivalent to what github does when you ask it to fork a repository:

$ git clone repo fork
$ (cd fork; git checkout `git rev-parse HEAD`)

Finally we need to set up a working directory where we make our changes (remember "repo" = git://github.com/SpringSource/repo.git):

$ git clone repo work
$ cd work
$ git checkout origin/master

Because we cloned the main public repo, it is by default the remote "origin". We are going to add a new remote so we can push our changes:

$ git remote add fork ../fork
$ git fetch fork
$ git push fork

The local repository now has a single commit and looks something like this in gitk (or your favourite git visualization tool):

A (origin/master, fork/master, master)

In this diagram, "A" is the commit label, and in brackets we list the branches associated with the commit.
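If you prefer the command line to gitk, you can get essentially the same picture (one line per commit, with the branches that point at it in brackets) using standard git options:

# git log --graph --oneline --decorate --all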

Get the Latest Stuff

You can always get the latest stuff from the main repo using

# git checkout master
# git pull --rebase

and sync it with the fork

# git push fork

If you operate this way, keeping master synchronized between the main repo and your fork as much as possible, and never making any local changes to the master branch, you will never have any confusion about where the rest of the world is. Also, if you are going to send multiple pull requests to the same public project, they will not overlap each other if you keep them separate on their own branches (i.e. not on master).
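A quick way to confirm that the three master refs really are in step is to compare the commit ids directly; if the three hashes printed are identical, everything is synchronized:

# git fetch origin
# git fetch fork
# git rev-parse master origin/master fork/master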

The Pull Request

When you want to start work on a pull request, start from a master branch fully up to date as above, and make a new local branch

$ git checkout -b mynewstuff

Make changes, test etc:

$ echo bar > bar
$ echo myfoo > foo
$ git add .
$ git commit -m "Added bar, edited foo"

and push it up to your fork repository with the new branch name (not master)

$ git push fork mynewstuff

If nothing has changed in the origin, you can send a pull request from there.
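If you want to check that before sending it, fetch and look for commits on origin/master that your branch doesn't have; no output means the pull request will merge as a simple fast-forward:

# git fetch origin
# git log --oneline mynewstuff..origin/master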

What if the Origin Changes?

For the purpose of this tutorial we simulate a change in the origin like this:

$ cd ../repo
$ git checkout master
$ echo spam > spam; git add .; git commit -m "add spam"
$ git checkout `git rev-parse HEAD`
$ cd ../work

Now we're ready to react to the change. First we'll bring our local master up to date

$ git checkout master
$ git pull
$ git push fork

The local repository now looks like this:

A -- B (mynewstuff, fork/mynewstuff)
 \
  -- D (master, fork/master, origin/master)

Notice how your new stuff does not have origin/master as a direct ancestor (it's on another branch). This makes it awkward for the project owner to merge your changes. You can make it easier by doing some of the work yourself locally, and pushing it up to your fork before sending the pull request.
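You can see the same thing without a graph by asking git for the common ancestor of the two branches; here it prints commit A rather than D, which is exactly the problem:

# git merge-base origin/master mynewstuff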

Re-writing History on your Branch

If you aren't collaborating with anyone on your branch it should be absolutely fine to rebase onto the latest changes from the remote repo and force a push:

# git checkout mynewstuff
# git rebase master

The rebase might fail if you have made changes that are incompatible with something that happened in the remote repo. You will want to resolve the conflicts and continue the rebase before moving on. This makes life difficult for you, but easy for the remote project owner, because the pull request is guaranteed to merge successfully.
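Schematically, the conflict round trip looks like this (the file name is just the one from this tutorial):

# git rebase master
... CONFLICT (content): Merge conflict in foo ...
... edit foo to resolve the conflict ...
# git add foo
# git rebase --continue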

While you are re-writing history, maybe you want to squash some commits together to make the patch easier to read, e.g.

# git rebase -i HEAD~2
...
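The interactive rebase opens an editor with a "todo" list, one line per commit; changing pick to squash on the later lines folds them into the commit above. The hashes and the second message here are just placeholders for whatever your branch contains:

pick 1a2b3c4 Added bar, edited foo
squash 5d6e7f8 polish the bar file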

In any case (even if the rebase went smoothly), if you have already pushed to your fork you will need to force the next push, because the rebase has re-written history (assuming the remote repo has changed).

# git push --force fork mynewstuff

The local repository now looks like this (the B commit isn't actually identical to the previous version, but the difference isn't important here):

A -- D (master, fork/master, origin/master) -- B (mynewstuff, fork/mynewstuff)

Your new branch has a direct ancestor which is origin/master, so everyone is happy. Then you are ready to go into the github UI and send a pull request for your branch against repo:master.

What if I want to Keep my Local Commits?

If you committed your changes locally in multiple steps, maybe you want to keep all your little bitty commits, and still present your pull request as a single commit to the remote repo. That's OK: you can create a new branch for that and send the pull request from there. This is also a good thing to do if you are collaborating with someone on your feature branch and don't want to force the push.

First we'll push the new stuff to the fork repo so that our collaborators can see it (this is unnecessary if you want to keep the changes local):

$ git checkout mynewstuff
$ git push fork

then we'll create a new branch for the squashed pull request:

$ git checkout master
$ git checkout -b mypullrequest
$ git merge --squash mynewstuff
$ git commit -m "comment for pull request"
$ git push fork mypullrequest

Here's the local repository:

A -- B (mynewstuff, fork/mynewstuff)
 \
  -- D (master, fork/master, origin/master) -- E (mypullrequest, fork/mypullrequest)

You are good to go with this and your new branch has a direct ancestor which is origin/master so it will be trivial to merge.

If you weren't collaborating on the mynewstuff branch, you could even throw it away at this point. I often do that to keep my fork clean:

# git branch -D mynewstuff
# git push fork :mynewstuff
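If your git client is recent enough, the --delete form does the same thing as the colon refspec above and is a bit easier to read:

# git push fork --delete mynewstuff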

Here's the local repo, fully synchronized with both of its remotes:

A -- D (master, fork/master, origin/master) -- E (mypullrequest, fork/mypullrequest)

Continue Working on your New Stuff

Let's say your pull request is rejected and the project owner wants you to make some changes, or the new stuff turns into something more interesting and you need to do some more work on it.

If you didn't delete it above, you can continue to work on your granular branch...

$ git checkout mynewstuff
$ echo yetmore > foo; git commit -am "yet more"
$ git push fork

and then move the changes over to the pull request branch when you are ready

$ git rebase --onto mypullrequest master mynewstuff
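Reading the three arguments against the git rebase synopsis (git rebase --onto <newbase> <upstream> <branch>): take the commits that are on mynewstuff but not on master, and replay them on top of mypullrequest.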

All the changes we want are in place now, but the branches are on the wrong commits. As you can see below, mynewstuff is where I want mypullrequest to be, and the remote fork/mynewstuff doesn't have a corresponding local branch:

A -- B -- C (fork/mynewstuff)
 \
  -- D (master, fork/master, origin/master) -- E (mypullrequest, fork/mypullrequest) -- F (mynewstuff)

We can use git reset to switch the two branches to where we want them (you can probably do this in a graphical UI if you like):

$ git checkout mypullrequest
$ git reset --hard mynewstuff
$ git checkout mynewstuff
$ git reset --hard fork/mynewstuff

and the new repository looks like this:

A -- B -- C (mynewstuff, fork/mynewstuff)
 \
  -- D (master, fork/master, origin/master) -- E (fork/mypullrequest) -- F (mypullrequest)

If we are OK with the pull request being 2 commits, we can just push it as it is:

$ git checkout mypullrequest
$ git push fork

The endpoint looks like this:

A -- B -- C (mynewstuff, fork/mynewstuff)
 \
  -- D (master, fork/master, origin/master) -- E -- F (mypullrequest, fork/mypullrequest)

Or we could rebase it to squash the commits together and force the push, schematically:

# git rebase -i HEAD~2
...
# git push --force fork

Because origin/master is a direct ancestor of fork/mypullrequest I know that my pull request will be trivial to merge.

Wrap Up

Hopefully this tutorial has given you enough git ammunition to go ahead and make some changes to your favourite open source project and be confident that the merge will be easy. Remember there is always more than one way to do it, and git is a powerful, low-level tool, so your mileage may vary and you might find variants of the approach above preferable or even necessary, depending on your changes.

This week in Spring: July 12th, 2011

Engineering | Josh Long | July 13, 2011 | ...

Welcome back to another installment of "This Week in Spring." Today saw a new sunrise, and - more importantly - the release of vSphere 5, the next step in cloud infrastructure!

My head's still buzzing after the excitement that accompanied this morning's launch.
This - and the recent release of vFabric 5 - represent the next stage in cloud innovation, and a huge part of taking your applications to production, and to the cloud, with Spring.

  1. O'Reilly has published a fantastic roundup on the seven Java projects that…

Countdown to Grails 2.0: Static resources

Engineering | Peter Ledbrook | June 30, 2011 | ...

Web applications typically rely heavily on what we call static resources, such as Javascript, CSS and image files. In a Grails application, they are put into a project's web-app directory and then referenced from the HTML. For example,

<link rel="stylesheet" href="${resource(dir: 'css', file: 'main.css')}" type="text/css">

will create a link to the file web-app/css/main.css. All very straightforward. You might even think that the current support is more than sufficient for anyone's needs. What else would you want to do?

That's a good point. The answer depends on the complexity of your application, but let's start with the example CSS link above. Why do we have to type out the <link rel="..." href=...>? Just by looking at the extension, we know that the resource is a CSS file. We also know that CSS files should be linked into an HTML page using the…

This week in Spring: June 28th, 2011

Engineering | Josh Long | June 29, 2011 | ...

Welcome back to another installment of "This Week in Spring."

Lots of great stuff this week, as usual. When we compile this list, we trawl the internet looking for interesting stuff and try to bring it to you, digest style, in this weekly roundup. Some of the resources that we commonly check are Twitter, the SpringSource blogs, CloudFoundry.org, and Tomcat Expert.

We try to not miss anything, but we might. If you know of something that we've missed or think should be included, don't hesitate to ping your humble editors with any suggestions.

While SpringSource has a strong presence at numerous conferences and industry events, the premier conference for Spring developers remains the SpringOne conference, held yearly in the United States. Work is well underway in planning the final program. Check out the SpringOne 2GX page to see news and activity, and to register for the upcoming SpringOne 2GX conference.

  1. Spring Social 1.0.0.RC1…

This week in Spring: June 21st, 2011

Engineering | Josh Long | June 22, 2011 | ...

Welcome back to yet another This Week in Spring. SpringSource is out in full force at JAX San Jose this week and we will be at OSCON in July. These events are great avenues for us to connect with the userbase. As usual, we've got a nice complement of stuff to cover this week, so let's get to it!

  1. There has been loads of interest and discussion surrounding last week's Spring 3.1 second milestone. Sam Brannen writes about the…
