Thursday, July 16, 2015

Spring Boot with a non-JDBC database URL

Using Spring Boot, a DataSource is usually configured with spring.datasource.* properties, as shown in the following example:
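A minimal sketch of such a configuration (the URL, schema name and credentials are placeholders):

spring.datasource.url=jdbc:postgresql://localhost:5432/demo
spring.datasource.username=demo
spring.datasource.password=secret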

However, database-as-a-service providers (like Heroku Postgres, Compose PostgreSQL, or ClearDB MySQL) supply the connection parameters in the format
scheme://user:password@host:port/path
It would be nice to have a single URI property in Spring Boot for configuring the database connection, similar to how we can configure a MongoDB connection via spring.data.mongodb.uri or a Redis connection via spring.redis.uri.

Below you can see how you could use a single URI property, in this case spring.datasource.uri, to specify the connection parameters of a Heroku PostgreSQL database service. The demo application runs on Heroku and uses three datastore services:



The connection properties are managed on Heroku:



In the application we just reference the Heroku config variables, encapsulating them in a heroku profile.
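An application-heroku.properties along these lines would do it (a sketch; DATABASE_URL is the config variable provided by the Heroku Postgres add-on, so referencing it here is an assumption about which add-on is used):

spring.datasource.uri=${DATABASE_URL}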

Then we just need to extract the connection properties from the spring.datasource.uri and create a DataSource. In this example we use the Tomcat JDBC Connection Pool to create the DataSource instance.
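A sketch of what that could look like (the class name is illustrative; it assumes a PostgreSQL URI of the form shown above and the Tomcat JDBC pool on the classpath):

import java.net.URI;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
@Profile("heroku")
public class HerokuDataSourceConfig {

    // e.g. postgres://user:password@host:5432/dbname
    @Value("${spring.datasource.uri}")
    private String datasourceUri;

    @Bean
    public DataSource dataSource() {
        URI uri = URI.create(datasourceUri);
        String[] userInfo = uri.getUserInfo().split(":", 2);
        String jdbcUrl = "jdbc:postgresql://" + uri.getHost() + ":" + uri.getPort() + uri.getPath();

        org.apache.tomcat.jdbc.pool.DataSource ds = new org.apache.tomcat.jdbc.pool.DataSource();
        ds.setUrl(jdbcUrl);
        ds.setUsername(userInfo[0]);
        ds.setPassword(userInfo[1]);
        ds.setDriverClassName("org.postgresql.Driver");
        return ds;
    }
}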

The demo application is available on my github profile.

Monday, February 09, 2015

Spring Boot with AngularJS form authentication leveraging Spring Session

In this blog post I would like to show you how you can distribute the session on Heroku via Spring Session. In order to get started quickly I am using Dave Syer's code from part II of the awesome "Spring and Angular JS" blog series. I highly recommend reading the series.

I made some modifications to the initial code, such as using npm and bower instead of wro4j to manage the front-end dependencies. If you would like to jump right to the code, you can find it here.

The HTTP sessions will be stored in a Redis instance that all web dynos have access to. This makes it possible to deploy the web application on multiple dynos while the login still works. Heroku has a stateless architecture: the routers use a random selection algorithm for HTTP request load balancing across web dynos, and there is no sticky-session functionality.
I chose the Redis Cloud service on Heroku since it offers a free 25 MB data plan. After adding it with
heroku addons:add rediscloud
the REDISCLOUD_URL environment variable becomes available, providing the connection settings as seen below.



The BUILDPACK_URL config variable was used to set up a multi-buildpack build using this library. Basically it allows running npm install first and then the ./gradlew build command.
Via the embedded-redis library it is possible to start a Redis server during initialisation. The related Redis configuration can be found below.
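A sketch of such a configuration, assuming the embedded-redis library and Spring Data Redis (Jedis) are on the classpath; the fixed port and class name are illustrative:

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import redis.embedded.RedisServer;

@Configuration
@Profile("default")
public class EmbeddedRedisConfig {

    private RedisServer redisServer;

    @PostConstruct
    public void startRedis() throws Exception {
        // start an embedded Redis server for local development
        redisServer = new RedisServer(6379);
        redisServer.start();
    }

    @PreDestroy
    public void stopRedis() throws Exception {
        redisServer.stop();
    }

    @Bean
    public JedisConnectionFactory connectionFactory() {
        // connects to the embedded server started above
        JedisConnectionFactory factory = new JedisConnectionFactory();
        factory.setHostName("localhost");
        factory.setPort(6379);
        return factory;
    }
}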

When running on Heroku we need another Redis configuration, one which connects to the previously added Redis Cloud service.
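A possible heroku-profile counterpart that parses REDISCLOUD_URL (a sketch; the variable has the form redis://user:password@host:port):

import java.net.URI;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;

@Configuration
@Profile("heroku")
public class RedisCloudConfig {

    @Bean
    public JedisConnectionFactory connectionFactory() {
        // REDISCLOUD_URL looks like redis://rediscloud:password@host:port
        URI uri = URI.create(System.getenv("REDISCLOUD_URL"));
        JedisConnectionFactory factory = new JedisConnectionFactory();
        factory.setHostName(uri.getHost());
        factory.setPort(uri.getPort());
        factory.setPassword(uri.getUserInfo().split(":", 2)[1]);
        return factory;
    }
}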


You can also connect to the Redis Cloud service from your localhost via
redis-cli -h hostname -p port -a password
and you will see the created keys, which correspond to the value of your SESSION cookie.
Try scaling up the dynos of your web application and you will see that the login still works.

Saturday, October 25, 2014

Datasource configuration with Spring Boot

For configuring a datasource with Spring Boot we have a couple of options. With the help of a simple example project, which is available on my github profile, I will walk you through these options. The example contains two simple entities, Portal and Page, with a 1:N bidirectional relationship.

With the help of the Spring Data JPA and Spring Data REST modules, the example creates and exposes CRUD operations for these entities using plain HTTP/REST semantics.

In development mode it is very convenient to use an in-memory database. By simply including H2 as a Maven dependency, Spring Boot will use it as an in-memory database; no connection URL needs to be provided. This example uses a Spring Data JPA repository to populate the database with some example data.
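The population could be done, for instance, with a CommandLineRunner bean along these lines (a sketch; the repository, constructors and addPage method are assumptions based on the Portal/Page model):

import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
public class DevelopmentDataConfig {

    @Bean
    @Profile("default")
    public CommandLineRunner populateDatabase(PortalRepository portalRepository) {
        return args -> {
            // create a portal with one page so the REST endpoints have data to expose
            Portal portal = new Portal("example-portal");
            portal.addPage(new Page("home"));
            portalRepository.save(portal);
        };
    }
}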

With @Profile("default") this bean is only initialised in the default Spring profile (when no other profile is specified). We could also have created an explicit profile for development mode, but this example just uses the default.
Next we just need to start the application. With the following glue code we can run it as a standalone application (java -jar). It will also work if we drop the produced war file into a Servlet 3.0 container.
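The glue code is the usual Spring Boot entry point; a sketch (the class name is illustrative, the servlet initializer package is the one used in Spring Boot 1.x):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.context.web.SpringBootServletInitializer;

@SpringBootApplication
public class Application extends SpringBootServletInitializer {

    // used when running as a standalone application (java -jar)
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    // used when the war is deployed to a Servlet 3.0 container
    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        return application.sources(Application.class);
    }
}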

Setting the org.hibernate.SQL logger to debug level, we can see that the database schema was created and populated during application startup.

In a production environment, however, it is unlikely that we need to create the database schema and populate the database. We just need to tell Spring Boot where and how to connect to the database. Since we would like to keep the ability to work with the app in development mode, the solution is to use another Spring profile for the production setup. By creating the application-production.properties configuration with the properties below (using MySQL in this example), we can instruct Spring Boot to switch to production mode by setting spring.profiles.active to production.
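For example (a sketch; host, schema and credentials are placeholders, and the ddl-auto setting simply reflects that we do not want the schema recreated in production):

spring.datasource.url=jdbc:mysql://localhost:3306/portal
spring.datasource.username=portal
spring.datasource.password=secret
spring.jpa.hibernate.ddl-auto=validate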

As you can see, the JDBC driver class name is not needed; Spring Boot can infer it from the database URL. Have a look at DriverClassNameProvider for the details.
With the next release of Spring Boot (it is already available in 1.2.0-SNAPSHOT) it will be possible to reference the datasource via JNDI, which is very common in traditional deployment setups where the datasources are configured inside the application server. Of course we can already do this with Spring using JndiDataSourceLookup, but it gets easier with Spring Boot, without any infrastructure beans. To demonstrate this, I have deployed the application to JBoss WildFly and created another Spring profile called jboss with the following configuration.
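The jboss profile then only needs the JNDI name of the container-managed datasource (a sketch; the JNDI name is a placeholder and must match the one defined in WildFly below):

spring.datasource.jndi-name=java:jboss/datasources/PortalDS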

Using the default standalone.xml as the initial configuration, the datasource connecting to MySQL can be added as in the example below.
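A sketch of the relevant entries (the JNDI name, database URL and credentials are placeholders; the driver module is the one defined in the next step):

<!-- inside the datasources subsystem of standalone.xml -->
<datasource jndi-name="java:jboss/datasources/PortalDS" pool-name="PortalDS" enabled="true">
    <connection-url>jdbc:mysql://localhost:3306/portal</connection-url>
    <driver>mysql</driver>
    <security>
        <user-name>portal</user-name>
        <password>secret</password>
    </security>
</datasource>

<!-- inside the <drivers> element of the same subsystem -->
<driver name="mysql" module="com.mysql">
    <driver-class>com.mysql.jdbc.Driver</driver-class>
</driver>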

We also need to create a module.xml in the JBOSS_WILDFLY_HOME/modules/com/mysql/main folder with the following configuration. The MySQL JDBC driver jar needs to be put into this folder as well.
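The module descriptor could look like this (a sketch; the driver jar file name is a placeholder for whatever version you copy into the folder):

<module xmlns="urn:jboss:module:1.3" name="com.mysql">
    <resources>
        <resource-root path="mysql-connector-java-5.1.34.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>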

Next, by dropping the created war file into the JBOSS_WILDFLY_HOME/standalone/deployments folder and executing the following command, we can start up this simple application in JBoss.
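Assuming you are in the WildFly installation directory, starting the server is simply:

./bin/standalone.sh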

Accessing http://localhost:8080/datasource-configuration-1.0-SNAPSHOT/ you will see that the data is loaded from MySQL and exposed with Spring Data REST. If you would like to try out this example, have a look at my github profile.

Monday, October 13, 2014

Software configuration with Spring Boot

In this blog post I would like to show you the configuration possibilities of a Spring bean's name property in combination with Spring Boot. Let's consider the following simple bean.
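A minimal version of that bean (a sketch; the class name and the way the greeting is rendered are illustrative):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class Greeter {

    // configurable via the greeting.name property, "World" is the default
    @Value("${greeting.name:World}")
    private String name;

    public String greet() {
        return "Hello " + name + "!";
    }
}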

The @Value("${greeting.name:World}") annotation means that the name can be configured via the greeting.name property and defaults to "World". You can quickly try it by cloning the example repository which I created for this blog post and accessing http://localhost:8080

With the help of the Spring Boot Maven Plugin this example creates a very simple war artifact that starts an embedded Tomcat instance.
Now let's see what configuration options we have.
We can configure the name property using a command line argument.
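For example (the war file path is an assumption based on the artifact name mentioned later in this post):

java -jar target/configuration-with-spring-boot-0.0.1-SNAPSHOT.war --greeting.name="Command Line Argument"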

We can also set it via a system property.
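For example:

java -Dgreeting.name="System Property" -jar target/configuration-with-spring-boot-0.0.1-SNAPSHOT.war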

Or we can use an OS environment variable.
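For example (Spring Boot's relaxed binding matches GREETING_NAME to the greeting.name property):

GREETING_NAME="Environment Variable" java -jar target/configuration-with-spring-boot-0.0.1-SNAPSHOT.war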

In the environment variable example I used underscores, since the OS does not allow periods in variable names. But that is not a problem for Spring Boot; it can still match the property.
The Spring Boot Actuator module's /env endpoint can be very useful for analysing the configuration in use.

We could also set it via a JNDI attribute. To demonstrate this, I will use WildFly (formerly known as JBoss AS). Just drop the generated war file into /standalone/deployments, and after you have started the server (/bin/standalone.sh), add a JNDI binding via the JBoss CLI tool.
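One possible form of that CLI command, shown only as a sketch (the exact binding name and how the application resolves it depend on the modifications described in the repository):

$JBOSS_HOME/bin/jboss-cli.sh --connect
/subsystem=naming/binding=java\:global\/greeting.name:add(binding-type=simple, type=java.lang.String, value="Hello JNDI")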

Accessing http://localhost:8080/configuration-with-spring-boot-0.0.1-SNAPSHOT/ you will see that the name property was set via JNDI. The interested reader can have a look at what modifications I needed to make to be able to deploy it to JBoss EAP 6.3.0.

Another option is to set it via an external properties file. By default Spring Boot uses application.properties, however you can easily override the file name via spring.config.name as shown below.
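For example, to read myapp.properties instead of application.properties (the file name is illustrative):

java -jar target/configuration-with-spring-boot-0.0.1-SNAPSHOT.war --spring.config.name=myapp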

You can group configuration into profiles. With Spring profiles you can have one deployable artifact across development, staging and live environments, running on your laptop, on an application server or at a PaaS provider. This makes testing very easy.

Lastly I would like to show you how Spring Boot can help in setting up the logging configuration. Spring Boot already provides a default base configuration for each logging implementation that you can include if you just want to set levels. The base configuration is set up with console output and file output (rotating, 10 MB file size), which is usually enough.

With the logging.file property you can configure the file output location.
Mostly, however, you would set up an externalised logging configuration. For logging I recommend Logback, which can automatically reload its configuration upon modification. The external logging configuration can be set via the logging.config property.
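Both can go into the application properties; a sketch (the paths are placeholders):

logging.file=/var/log/myapp/application.log
logging.config=/etc/myapp/logback.xml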

You should also customise the banner for your app :) I used this tool.



I hope you can see the great flexibility Spring Boot provides regarding configuration. There are other goodies like using YAML instead of properties files, which is a very convenient format for specifying hierarchical configuration data.

Sunday, June 29, 2014

Flexibility with Spring's cache abstraction

This blog post demonstrates how easily you can switch the caching provider, without modifying your business logic, if you are using the caching abstraction from the Spring Framework. As an example, let's consider an expensive operation like calling the Facebook Graph API to get the website of a company. We could speed up this operation with caching. If you would like to jump right ahead to the code, have a look at my github profile

With the @Cacheable annotation we mark the method which performs the expensive remote call. The very first time, the method will be executed and the result will be put into the pages cache. Repeated calls of the method with the same parameter will not execute the method; instead the cached value will be returned.
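The lookup service might look along these lines (a sketch; the class and method names are illustrative and the actual Facebook call is only a placeholder):

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class FacebookLookupService {

    @Cacheable("pages")
    public String findWebsite(String name) {
        // the expensive remote call to the Facebook Graph API would happen here
        return callFacebookGraphApi(name);
    }

    private String callFacebookGraphApi(String name) {
        // placeholder for the actual remote call
        return "http://www." + name + ".com";
    }
}
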
In this simple example the service is exposed via a Spring MVC controller as seen below, where we also measure how long it takes to call the service method.
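A sketch of such a controller (the URL and names are illustrative):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.util.StopWatch;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class FacebookLookupController {

    @Autowired
    private FacebookLookupService lookupService;

    @RequestMapping("/website/{name}")
    public String website(@PathVariable String name) {
        StopWatch stopWatch = new StopWatch();
        stopWatch.start();
        String website = lookupService.findWebsite(name);
        stopWatch.stop();
        return website + " (lookup took " + stopWatch.getTotalTimeMillis() + " ms)";
    }
}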

To build and run the example, issue the following commands in a terminal:
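Assuming a Maven build (the artifact name is a placeholder for the one produced by the project):

mvn clean package
java -jar target/cache-example.jar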

The last command will start up an embedded Tomcat instance using Spring Boot.
Now, in another terminal, let's call the service a couple of times with the same name.
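For example (the endpoint path matches the controller sketch above and is an assumption):

curl http://localhost:8080/website/google
curl http://localhost:8080/website/google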

As you can see above, the lookup took a similar amount of time on each client invocation. This is because caching is not yet activated, it was only declared. In order to activate caching you need a caching provider. The following code snippet configures EhCache as the caching provider for our Facebook lookup service.
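A sketch of an EhCache-based configuration (it assumes an ehcache.xml on the classpath that defines the pages cache):

import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.ehcache.EhCacheCacheManager;
import org.springframework.cache.ehcache.EhCacheManagerFactoryBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.core.io.ClassPathResource;

@Configuration
@EnableCaching
@Profile("ehcache")
public class EhCacheConfiguration {

    @Bean
    public EhCacheManagerFactoryBean ehCacheManagerFactory() {
        EhCacheManagerFactoryBean factory = new EhCacheManagerFactoryBean();
        factory.setConfigLocation(new ClassPathResource("ehcache.xml"));
        factory.setShared(true);
        return factory;
    }

    @Bean
    public CacheManager cacheManager() {
        return new EhCacheCacheManager(ehCacheManagerFactory().getObject());
    }
}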

In order to enable caching for our Facebook lookup service with EhCache as the caching provider, we activate the ehcache Spring profile:
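For example:

java -jar target/cache-example.jar --spring.profiles.active=ehcache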

Again, calling the service a couple of times with the same name in another terminal, you can see that the first call took more than half a second, while the subsequent calls were near instantaneous.

Later on, we might want to scale out our service by starting more than one Tomcat instance. In this case we want a distributed cache, where a result cached on one node is transparently available on the other nodes. The following code snippet contains a configuration for using Hazelcast as a distributed cache.
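A sketch of a Hazelcast-based configuration (it relies on the hazelcast and hazelcast-spring dependencies; default multicast discovery between the two local nodes is assumed):

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.spring.cache.HazelcastCacheManager;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
@EnableCaching
@Profile("hazelcast")
public class HazelcastConfiguration {

    @Bean
    public HazelcastInstance hazelcastInstance() {
        // with the default config the two local instances discover each other via multicast
        return Hazelcast.newHazelcastInstance(new Config());
    }

    @Bean
    public CacheManager cacheManager() {
        return new HazelcastCacheManager(hazelcastInstance());
    }
}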

Run the following two commands in separate terminals, enabling caching with Hazelcast as a provider by activating the hazelcast profile.
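For example (ports as described below; the jar name is a placeholder):

java -jar target/cache-example.jar --spring.profiles.active=hazelcast --server.port=8081
java -jar target/cache-example.jar --spring.profiles.active=hazelcast --server.port=8082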

This will start up two Tomcat instances, one on port 8081 and the other on port 8082. As shown below, we have added a distributed cache as the caching provider without changing our business logic.

In the sample project the interested reader can also check out a configuration for using Redis as the caching provider.

Saturday, February 22, 2014

Two factor authentication with Spring Security

In this blog post I would like to show you how you could implement (simulate) two-factor authentication with Spring Security. If you would like to jump right ahead to the code, have a look at my github profile. To make the simple demo application easy to test, I have uploaded it to Heroku. Note that by default the application uses a single dyno (Heroku's term for a scalable unit) and it goes to sleep after one hour of inactivity. This causes a delay of a few seconds for the first request; subsequent requests will perform normally.

I mentioned "simulate" previously since the demo application turns the two factor authentication problem into a normal authentication plus authorisation problem. When valid credentials (here: email and password) are provided the PRE_AUTH_USER role is assigned to the user. With this role the user is authorised only to access the view where the verification code can be provided. If the correct verification code is provided the user will be granted with the USER role, with which all the views can be accessed.

Below you can see how easy it is to configure Spring Security with the Java config introduced in version 3.2.
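A sketch of such a configuration (the URLs are illustrative; the injected userDetailsService is the adapter described below):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.core.userdetails.UserDetailsService;

@Configuration
@EnableWebSecurity
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {

    @Autowired
    private UserDetailsService userDetailsService;

    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth.userDetailsService(userDetailsService);
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .authorizeRequests()
                .antMatchers("/login").permitAll()
                // with valid credentials the user only gets PRE_AUTH_USER and may access the verification view
                .antMatchers("/verify").hasRole("PRE_AUTH_USER")
                // everything else requires the USER role granted after the verification code check
                .anyRequest().hasRole("USER")
                .and()
            .formLogin()
                .loginPage("/login")
                .and()
            .logout()
                .permitAll();
    }
}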

In order to support non-security related user information, the AccountRepository is adapted to the UserDetailsService, so Spring Security can use it as an authentication source.
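The adapter could look roughly like this (the Account fields and the repository finder method are assumptions):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.security.core.authority.AuthorityUtils;
import org.springframework.security.core.userdetails.User;
import org.springframework.security.core.userdetails.UserDetails;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.core.userdetails.UsernameNotFoundException;
import org.springframework.stereotype.Service;

@Service
public class AccountUserDetailsService implements UserDetailsService {

    @Autowired
    private AccountRepository accountRepository;

    @Override
    public UserDetails loadUserByUsername(String email) throws UsernameNotFoundException {
        Account account = accountRepository.findByEmail(email);
        if (account == null) {
            throw new UsernameNotFoundException("No account found for email: " + email);
        }
        // after the password check only the PRE_AUTH_USER role is granted
        return new User(account.getEmail(), account.getPassword(),
                AuthorityUtils.createAuthorityList("ROLE_PRE_AUTH_USER"));
    }
}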

For the second verification step a time-based one-time password (TOTP) algorithm is used, which is described very well here.

Sunday, February 16, 2014

Non-HTTP-initiated broadcast with the Atmosphere framework

In this blog post I would like to show you how you can push server-side events to a web application using the Atmosphere framework. I will also show how you can broadcast to a specific set of AtmosphereResource instances rather than to all suspended connections. If you want to jump right to the code, have a look at my github account.

In this simple example the backend will notify the front-end about the following three events:
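These can be modelled as a simple enum (a sketch; the event names are the ones used below):

public enum AccountEvent {
    ACCOUNT_DEBITED,
    ACCOUNT_CREDITED,
    ACCOUNT_LOCKED
}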

In order to demonstrate the selective broadcast functionality we will broadcast ACCOUNT_DEBITED to Chrome, ACCOUNT_CREDITED to Firefox and ACCOUNT_LOCKED to IE browser clients. In a real application you would broadcast the event to the currently logged-in user whose account was debited, credited or locked.

On the front-end, with the help of atmosphere.js, I send information about the client (currently the type of the browser) as a request header.

The initial transport is set to websocket; however, if the browser or backend doesn't support it, it will fall back transparently to long polling. The subscribe function initiates a GET request which is handled by the following managed service.
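A sketch of the managed service (the path and the header name are illustrative):

import org.atmosphere.config.service.Disconnect;
import org.atmosphere.config.service.ManagedService;
import org.atmosphere.config.service.Ready;
import org.atmosphere.cpr.AtmosphereResource;
import org.atmosphere.cpr.AtmosphereResourceEvent;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@ManagedService(path = "/notifications")
public class NotificationService {

    private final Logger logger = LoggerFactory.getLogger(NotificationService.class);

    @Ready
    public void onReady(AtmosphereResource resource) {
        // the client type is sent by atmosphere.js as a request header
        logger.info("Client {} connected, client type: {}",
                resource.uuid(), resource.getRequest().getHeader("X-Client-Type"));
    }

    @Disconnect
    public void onDisconnect(AtmosphereResourceEvent event) {
        logger.info("Client {} disconnected", event.getResource().uuid());
    }
}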

With a Camel timer component we simulate continuous event generation. The events are broadcast to the appropriate AtmosphereResource instances, where an AtmosphereResource models a connection between a client and the server.
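The selective broadcast itself could be sketched like this (the broadcaster lookup path and header name match the sketch above; the Camel timer route would simply call this method periodically):

import org.atmosphere.cpr.AtmosphereResource;
import org.atmosphere.cpr.Broadcaster;
import org.atmosphere.cpr.BroadcasterFactory;

public class AccountEventBroadcaster {

    public void broadcastToClientType(AccountEvent event, String clientType) {
        Broadcaster broadcaster = BroadcasterFactory.getDefault().lookup("/notifications");
        for (AtmosphereResource resource : broadcaster.getAtmosphereResources()) {
            // only broadcast to the connections opened by the given browser type
            if (clientType.equalsIgnoreCase(resource.getRequest().getHeader("X-Client-Type"))) {
                broadcaster.broadcast(event.name(), resource);
            }
        }
    }
}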

With the help of the Atmosphere framework the events are pushed via the websocket or long-polling transport. As you can see below, with IE 9 the transport will be long polling since it does not support websockets. With Chrome or Firefox the websocket transport will be chosen.