Posts tagged with: distributed

This tag marks posts exploring the technical implications of operating in a distributed environment.

snowcast – Migration and Failover – Feature Complete

When I started snowcast back at the end of 2014, I didn't think people would really be interested, but things often work out differently from what you imagine. A still fairly small group of interested people showed up, and I got a lot of nice words and

snowcast – Hazelcast Client and the snowcast logo


In December I started a new project called snowcast. It arose from a need in one of my own private projects, and I decided to open-source this part of the work.

snowcast is an auto-configuring, distributed, scalable ID generator on top of Hazelcast. Since snowcast is not an official Hazelcast

snowcast – like christmas in the distributed Hazelcast world

snowcast is an auto-configuring, distributed, scalable ID generator on top of Hazelcast. Since snowcast is not an official Hazelcast project, Hazelcast will not offer any kind of commercial support for it; it is one of my private spare-time projects!

Why this project?

While working on a side project I

Press Release: OrientDB becomes Distributed using Hazelcast, Leading Open Source In-Memory Data Grid


Elastic distributed scalability added to OrientDB, a graph database that supports hybrid document-database features. Palo Alto, CA – Hazelcast and Orient Technologies today announced that OrientDB has gained a multi-master replication feature powered by Hazelcast. Clustering multiple server nodes is the […]

Writing a Hazelcast / CastMapR MapReduce Task in Java

Hazelcast is a distributed In-Memory Data Grid written in Java. In addition to built-in features like EntryProcessors and queries, you can write MapReduce tasks using the CastMapR project, which adds MapReduce capabilities on top of Hazelcast 3.x.

To make it comparable to other MapReduce frameworks, we will try to reimplement

CastMapR – The Hazelcast 3 MapReduce Framework

A few days ago, while porting our current system to the Hazelcast 3 snapshots, I finally decided to start a MapReduce implementation for Hazelcast, which I had been missing for a long time.

Whereas there has always been a way to query IMaps in a distributed manner using Predicates, I missed a solution

Hazelcast and MongoDB


In this article, I will implement a sample (getting-started) project that uses MongoDB as the persistence layer for a Hazelcast distributed cache.

Hazelcast has a flexible persistence layer: you just implement an interface (MapStore) to store your in-memory grid in your preferred database. As of version 2.1, Hazelcast supports MongoDB persistence in a smoother way using the Spring-MongoDB data library. Let's implement a simple project step by step to illustrate this feature. Our project will have a single model class, and we will see it persisted to MongoDB when we put it into a Hazelcast distributed map.
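To get a feel for the contract, here is a minimal, self-contained sketch of the MapStore idea. The interface below mirrors the core methods of Hazelcast's `com.hazelcast.core.MapStore` (the real interface declares a few more, such as `storeAll`, `deleteAll`, `loadAll` and `loadAllKeys`), and the implementation uses a plain HashMap where a real one would talk to MongoDB:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified copy of the MapStore contract; the real interface is
// com.hazelcast.core.MapStore and has a few more bulk methods.
interface SimpleMapStore<K, V> {
    void store(K key, V value); // called when an entry is put into the map
    void delete(K key);         // called when an entry is removed
    V load(K key);              // called on a cache miss
}

// Toy backing store for illustration only; a real implementation
// would issue MongoDB inserts, removes and finds instead.
class InMemoryUserStore implements SimpleMapStore<String, String> {
    private final Map<String, String> db = new HashMap<>();

    @Override
    public void store(String key, String value) { db.put(key, value); }

    @Override
    public void delete(String key) { db.remove(key); }

    @Override
    public String load(String key) { return db.get(key); }
}
```

With Spring-MongoDB, Hazelcast ships such an implementation for you, so in this article we will not write one by hand.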
1- Project Set-Up
I will use Maven. Here are the dependencies:
The dependencies are libraries for the Spring, Spring-MongoDB,
and Hazelcast projects. Hazelcast makes use of the Spring Data project for connecting to MongoDB and mapping objects to it.
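The dependency block itself did not survive extraction. Under the assumption of Hazelcast 2.1 and the Spring-MongoDB stack described above, it might look roughly like this (the artifact ids are the usual ones, but the versions are guesses you should verify against Maven Central):

```xml
<dependencies>
    <!-- Hazelcast core and its Spring integration (version assumed) -->
    <dependency>
        <groupId>com.hazelcast</groupId>
        <artifactId>hazelcast</artifactId>
        <version>2.1</version>
    </dependency>
    <dependency>
        <groupId>com.hazelcast</groupId>
        <artifactId>hazelcast-spring</artifactId>
        <version>2.1</version>
    </dependency>
    <!-- Spring Data MongoDB pulls in spring-core and the Mongo driver -->
    <dependency>
        <groupId>org.springframework.data</groupId>
        <artifactId>spring-data-mongodb</artifactId>
        <version>1.0.0.RELEASE</version>
    </dependency>
</dependencies>
```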

2- MongoDB Set-Up
Install and run MongoDB on your local machine. One of the things that makes MongoDB attractive is that its quick start is really quick.
You can follow this guide:
3- Model
A simple POJO to store basic info about users. The only thing you need to care about is that it must be Serializable.
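The POJO itself was lost in extraction; based on the fields visible in the Mongo shell output later in the article (name, age, an id used as the map key), a minimal sketch could look like this:

```java
import java.io.Serializable;

// Minimal user model. Serializable is required so Hazelcast can
// move instances between cluster members.
class User implements Serializable {
    private static final long serialVersionUID = 1L;

    private String id;
    private String name;
    private int age;

    public User(String id, String name, int age) {
        this.id = id;
        this.name = name;
        this.age = age;
    }

    public String getId() { return id; }
    public String getName() { return name; }
    public int getAge() { return age; }
}
```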
4- Configuration
As we use Spring, all configuration is bundled in the Spring configuration XML. I named the file beans.xml.
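The configuration file did not survive either. A rough sketch of the relevant beans, assuming the `MongoMapStore` class from the Hazelcast 2.x hazelcast-spring module (verify the class names against the Hazelcast version you actually use):

```xml
<!-- Sketch of beans.xml: wire a Mongo connection, a MongoTemplate,
     and Hazelcast's Mongo-backed MapStore on top of it. -->
<bean id="mongo" class="com.mongodb.Mongo">
    <constructor-arg value="localhost"/>
</bean>

<bean id="mongoTemplate"
      class="org.springframework.data.mongodb.core.MongoTemplate">
    <constructor-arg ref="mongo"/>
    <constructor-arg value="test"/> <!-- database name -->
</bean>

<bean id="mongomapstore"
      class="com.hazelcast.spring.mongodb.MongoMapStore">
    <property name="mongoTemplate" ref="mongoTemplate"/>
</bean>
```

The Hazelcast map configuration then references this MapStore bean so that puts into the distributed map are written through to MongoDB.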
5- Run and Test
Now we can test the Mongo-Hazelcast integration. We will get the user map from the Spring context and put a new User object into it. We do not add any code related to Mongo or the database layer; the object should be saved to MongoDB automatically. There is also no Hazelcast code in this class: it seems to just put an object into a map, but in fact the object is put into the distributed data grid and also persisted to MongoDB. The code is this clean thanks to Spring and Hazelcast's standard Map implementation.
Here is the main class for that:
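The original listing is missing; a minimal sketch, assuming the Spring context exposes the Hazelcast-backed map as a bean named "user" (the bean name and class names here are illustrative, not from the original):

```java
import java.util.Map;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class Main {
    public static void main(String[] args) {
        // Boot the Spring context defined in beans.xml
        // (wires Hazelcast and the Mongo-backed MapStore).
        ApplicationContext context =
                new ClassPathXmlApplicationContext("beans.xml");

        // The Hazelcast distributed map, exposed as a Spring bean.
        @SuppressWarnings("unchecked")
        Map<String, User> userMap = (Map<String, User>) context.getBean("user");

        // A plain put: Hazelcast distributes the entry and the MapStore
        // persists it to MongoDB behind the scenes.
        userMap.put("id-134", new User("id-134", "Enes", 29));
    }
}
```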
And let’s see if it is in Mongo:

MongoDB shell version: 2.0.2
connecting to: test
> db.user.find()
{ "_id" : "id-134", "_class" : "com.hazelmongo.User", "name" : "Enes", "age" : 29 }
As you see, Mongo stores two fields besides the ones defined in the POJO. The _id field is assigned from the key you used when putting the object into the map, and _class is used to map the record back to the corresponding Java class.

This sample illustrates the default usage of MongoDB with Hazelcast. You can override the default behaviour and object mapping (by annotating the POJO) thanks to the Spring Data project. Have a look here for further details.

Distribute Grails with Hazelcast



In this article I will try to integrate my two favorite technologies: Grails and Hazelcast.
(Bias: I currently work for Hazelcast.)

Ruby on Rails gained popularity among people who seek productivity in web programming.
Java is often criticized for being too heavy for rapid development.
But the richness of the Java community has given birth to flexible and dynamic JVM languages like Groovy.
Grails is a synthesis of the power of Java (with the help of Groovy) and the productivity of Rails, built on the philosophy of "convention over configuration".
Another technology that amazes me is Hazelcast.
I remember the days when I first met socket programming and RMI at university.
When I first tried Hazelcast, my reaction was: how could "distributing your data over machines" be so easy?
A single jar, no dependencies; distribute your data over maps, queues, topics…

So I decided to integrate these two technologies and write a simple plugin so anyone can easily distribute data over memory with Hazelcast.
I called it hazelgrails and pushed it to GitHub:
Here is an introduction to using this plugin.
How to Install Plugin

Run the command:

install-plugin hazelgrails

You will see hazelcast.xml in the conf directory under the plugins directory.
There you can configure Hazelcast in detail.
For the available options have a look at:

To see hazelcast logs add following to Config.groovy:

info 'com.hazelcast'

Use Hazelcast as Hibernate 2nd Level Cache
In DataSource.groovy, replace the following line in the hibernate configuration block:

cache.region.factory_class = 'com.hazelcast.hibernate.HazelcastCacheRegionFactory'

For more details about 2nd level cache configuration have a look at:

Test the Plugin

Create a Grails application and install the plugin. Then create a domain class and two controllers.

create-domain-class com.hazelgrails.Customer
create-controller com.hazelgrails.Server1
create-controller com.hazelgrails.Server2

As you see, Customer is serializable. Hazelcast requires objects to be serializable in order to distribute them across the cluster.

Now create the war file (command "grails war"), then copy the file under a different name (app2.war).
You may deploy the wars to different machines on the same network, to different servers (Tomcat, Jetty) on the same machine, or even into the same Tomcat.
For simplicity I ran the current app with "grails run-app" and deployed the war to an external Tomcat.
And test them:

Cities:[2:New York, 1:London] 
Timestamps:[1333447087796, 1333447112863]
Cities:[2:New York, 1:London] 
First customer name:tom, age:20 
Timestamps:[1333447087796, 1333447112863]

In practice, if you see the following, you can conclude that the nodes formed a cluster successfully. (You should add "info 'com.hazelcast'" to Config.groovy.)

Members [2] {
        Member []
        Member [] this
}

Usage Examples
There are two new methods defined for domain classes.
The saveHz() method first persists the domain object (like the original save()) and then puts it into a Hazelcast map.
The getHz() method tries to find the object with the given id first in the Hazelcast map; if it is not found there, it falls back to the datasource.
Hazelcast creates a distributed map for each domain class.
So by using saveHz() and getHz() you can fetch your objects from distributed memory instead of going through database operations.
Also, by injecting hazelService, you can access Hazelcast instances.
Here are the usage examples:
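The example listings were lost in extraction. A hypothetical controller sketch of the two methods described above (class, action, and property names are illustrative, and getHz() is assumed to be callable on the domain class):

```groovy
class Server1Controller {

    def save = {
        // Persists via GORM like save(), then puts the instance into
        // the Hazelcast distributed map for the Customer class.
        def customer = new Customer(name: 'tom', age: 20)
        customer.saveHz()
    }

    def show = {
        // Looks in the distributed map first; falls back to the
        // datasource if the id is not found in memory.
        def customer = Customer.getHz(params.id)
        render "First customer name:${customer.name}, age:${customer.age}"
    }
}
```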

Hazelcast 2.0 is coming! What is new?

Hazelcast 2.0 is a huge step forward in building the best IMDG and making the Hazelcast experience even more pleasant. As always, this release contains many fixes, enhancements and improvements. But there are other reasons that make 2.0 very special: we have made many big changes to the internals of Hazelcast. Many of these changes are made […]