Community News

Dear Infinispan users, we thought CR3 was going to be the last candidate release before Final... but we were mistaken! The reason for yet another CR is that we decided to make some changes which affect some default behaviours:
  • enabling optimistic transactions with repeatable read now turns on write-skew by default
  • retrieving an already configured cache by passing in a template doesn't redefine that cache's configuration
Other important changes:
  • big improvements to the client/server rolling upgrade process
  • allow indexes to be stored in off-heap caches
  • lots of bug fixes
For the full list of changes check the release notes, download the 9.0.0.CR4 release and let us know if you have any questions or suggestions.

The Infinispan team
In the latest 9.0.0.CR3 version, the Infinispan REST endpoint is secured by default, and in order to facilitate remote access, the Docker image has some changes related to security.

The image now creates a default user login upon start; this user can be changed via environment variables if desired:

You can check that the settings are in place by manipulating data via REST. A curl request without credentials should result in a 401 response:

So from now on, make sure to always include the credentials when interacting with the REST endpoint! If using curl, this is the syntax:
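For illustration, the Basic Authorization header that curl sends can also be built by hand. The sketch below shows how; the "user"/"pass" credentials are placeholders for the ones your container created (or the ones you set via environment variables):

```java
import java.util.Base64;

public class BasicAuthHeader {
    // Build the value of the HTTP Authorization header for Basic auth.
    // "user"/"pass" are placeholder credentials for this example.
    static String basicAuth(String user, String password) {
        String token = Base64.getEncoder()
                .encodeToString((user + ":" + password).getBytes());
        return "Basic " + token;
    }

    public static void main(String[] args) {
        System.out.println(basicAuth("user", "pass")); // prints "Basic dXNlcjpwYXNz"
    }
}
```

Sending this header with your REST requests is exactly what curl's `-u` option does for you.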

And that's all for this post. To find out more about the Infinispan Docker image, check the documentation, give it a try and let us know if you have any issues or suggestions!

In one of our previous blog posts we wrote about different configuration options for our Docker image. Now we've gone a step further, adding auto-configuration for memory and CPU constraints.

Before we dig in...
Setting memory and CPU constraints on containers is a very popular technique, especially for public cloud offerings (such as OpenShift). Behind the scenes, everything works by adding extra Docker settings to the containers. There are two very popular switches: --memory (which is responsible for setting the amount of available memory) and --cpu-quota (which throttles CPU usage).

Now here comes the best part... the JDK has no idea about those settings! We will probably need to wait until JDK 9 for full cgroups support.

What can we do about it?
The answer is very simple: we need to tell the JDK how much memory is available (at least by setting -Xmx) and how many CPUs are available (by setting -XX:ParallelGCThreads, -XX:ConcGCThreads and -Djava.util.concurrent.ForkJoinPool.common.parallelism).

And we have some very good news! We already did it for you!

Let's test it out!
At first you need to pull our latest Docker image:

Then run it with CPU and memory limits using the following command:

Note that the JAVA_OPTS variable was overridden. Let's have a look at what happened:
  • -Xms64m -Xmx350m - it is always a good idea to set the heap size explicitly inside a Docker container. Here we set -Xmx to 70% of the available memory. 
  • -XX:ParallelGCThreads=6 -XX:ConcGCThreads=6 -Djava.util.concurrent.ForkJoinPool.common.parallelism=6 - next, we align the GC and ForkJoinPool parallelism with the CPU quota, as explained above.
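The derivation of those values can be sketched as follows. This is an illustrative reconstruction of the startup script's logic (the 70% ratio and the flag names come from the list above; everything else is an assumption), not the actual script:

```java
public class ContainerJavaOpts {
    // Sketch: derive JAVA_OPTS from the container limits.
    // Heap = 70% of the memory limit; GC/ForkJoinPool parallelism = CPU quota.
    static String javaOpts(long memoryLimitMb, int cpuLimit) {
        long xmxMb = memoryLimitMb * 70 / 100;
        return "-Xms64m -Xmx" + xmxMb + "m"
                + " -XX:ParallelGCThreads=" + cpuLimit
                + " -XX:ConcGCThreads=" + cpuLimit
                + " -Djava.util.concurrent.ForkJoinPool.common.parallelism=" + cpuLimit;
    }

    public static void main(String[] args) {
        // e.g. a 500 MB memory limit and a quota equivalent to 6 CPUs
        System.out.println(javaOpts(500, 6)); // -Xmx350m, parallelism 6
    }
}
```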
There might be cases where you don't want those properties set automatically. In that case, just pass the -n switch to the starter script:

More reading
If this topic sounds interesting to you, do not forget to have a look at those links:
  • A great series of articles about memory and CPU in the containers by Andrew Dinn [1][2]
  • A practical implementation by Fabric8 Team [3]
  • A great article about memory limits by Rafael Benevides [4]
  • OpenShift guidelines for creating Docker images [5]
Dear Infinispan community,

As announced in a previous post, starting from version 8.1.0 the C++/C# clients can also receive and process Infinispan events.

Here's an example of C++ event listeners in action: with a good dose of imagination, it pretends to be a customer behaviour tracking system for our store chain (don't take this too seriously, we're just trying to add some fiction).

As a first requirement, our tracking system will record every single purchase made in our stores. How many stores do we have? 1, 100, a million? It doesn't matter: we're backed by an Infinispan data grid.
This is version 0.x and hence the checker must use the keyboard to enter all the needed information.

As you can see, our entry key is a concatenation of the product name and the timestamp, and the entry value is an unstructured string: maybe too simple, but it works for now.
It seems we are at a good point: we have the data and we can run analytics on it. So far so good, but now our boss makes a new request: he wants a runtime monitor of sales performance. That's a perfect request to fulfil with an event listener: the monitor application will be a Hot Rod C++ client that registers a client listener on the server, then receives the data flow and shows it on the boss's laptop.
A client listener, once registered on the server, can receive events related to the creation, modification, deletion and expiration of cache entries; in our example only the creation and expiration events are processed (expired events can be useful for moving-average statistics, perhaps?). Below is a snippet of code that creates and registers a listener that writes the event keys to stdout.

You can git this quickstart here [1]. On startup, a multiple-choice menu is shown with all the available operations. By running several instances you can act as the checker (data entry) or as the boss (installing the listener and watching the events flow).

Filters
Again, so far so good, but then the marketing department asks for support for targeted advertising, such as soliciting customers who bought product Y to also buy product X.
Let's suppose that X="harmonica" and Y="hiking boots" (it's a well-known fact of life that in the high mountains you feel the desire to play a harmonica).

To do that we register another listener on the server, but this time we're not interested in the whole flow of purchase data: to run our marketing campaign, we're only interested in cache entries whose keys start with "hiking". The Infinispan server can filter out events for us if we pass, in the AddClientListener operation, the name of the desired filter along with any configuration arguments.
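Conceptually, the server-side filter just applies a key-prefix predicate to each event before forwarding it. Here is a plain-Java sketch of that idea (illustrative only; the real filter is a Java class deployed on the server, as noted below, and the keys shown are made up):

```java
import java.util.List;
import java.util.stream.Collectors;

public class KeyPrefixFilter {
    // Only events whose entry key starts with the configured
    // prefix are forwarded to the listening client.
    static boolean accept(String key, String prefix) {
        return key.startsWith(prefix);
    }

    public static void main(String[] args) {
        List<String> keys = List.of("hiking boots-1490001200", "harmonica-1490001300");
        List<String> forwarded = keys.stream()
                .filter(k -> accept(k, "hiking"))
                .collect(Collectors.toList());
        System.out.println(forwarded); // only the "hiking" key survives
    }
}
```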

Filters are Java classes that must be deployed into the Infinispan server (more here [2]).
and converters
Predefined events contain very little information: basically the event type and the entry key. This prevents flooding the network by spreading around very long entry values. Users can overcome this limitation using a converter: a Java class, deployed into the server, that can create custom events containing all the data the application needs.
As in the previous case, we pass into the add operation the name of the converter and any configuration arguments.

That's all, let us know your feedback: do you like it? Could it be better? Tell us how it can be improved by creating an issue [3], or fork and improve it yourself [4]!

Thanks for reading and enjoy! The Infinispan Team
I'm happy to announce a new release (the first feature-complete!) of Infinispan Spring Boot Starters.

We added new properties for managing Hot Rod client mode, as well as automatic Spring Cache support. Finally, we fixed a couple of smaller issues.

For the complete changelog, please refer to the release page.

The artifacts should be available in Maven Central as soon as the sync completes. In the meantime grab them from JBoss Repository.
I'm happy to announce a new release of KUBE_PING JGroups protocol.

Since this is a minor maintenance release, there are no groundbreaking changes, but we fixed a couple of issues that prevented our users from using JGroups 3.6.x with KUBE_PING 0.9.1.

Have a look at the release page to learn more details.

The artifacts should be available in Maven Central as soon as the sync completes. In the meantime grab them from JBoss Repository.

We're pleased to announce that the 8.1.0.CR2 release for C++/C# clients is out!

Check the release notes; the focus was on bug fixes this round, so you have the opportunity to download the cleanest code so far!

Spring cleaning will continue in the next release iteration. Stay tuned and, if you like, take part by signalling new issues here!


The Infinispan Team
Infinispan 9 has introduced many improvements to its marshalling codebase in order to improve performance and allow for greater flexibility. Consequently, data marshalled and persisted by Infinispan 8.x is no longer compatible with Infinispan 9.x. Furthermore, as part of our ongoing efforts to improve the cache stores provided by Infinispan, we have removed both the JdbcBinaryStore and JdbcMixedStore in Infinispan 9.0.

To assist users migrating from Infinispan 8.x, we have created the JDBC Migrator that enables existing JDBC stores to be migrated to Infinispan 9's JdbcStringBasedStore.

No More Binary Keyed Stores!
The original intention of the JdbcBinaryStore was to provide greater flexibility than the JdbcStringBasedStore, as it did not require a Key2StringMapper implementation. This was achieved by utilising the hash code of an entry's key for the table's ID column. However, due to the possibility of hash collisions, all entries had to be placed inside a Bucket object, which was then serialised and inserted into the underlying table. Utilising buckets in this manner was far from optimal, as each read/write to the underlying table required the existing bucket for a given hash to be retrieved, deserialised, updated, serialised and then re-inserted into the db.
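To make that cost concrete, here is a toy sketch of the read-deserialise-update-serialise-write cycle. This is plain Java, not Infinispan code: an in-memory map stands in for the DB table, and a HashMap stands in for the Bucket object:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.HashMap;
import java.util.Map;

public class BucketStoreSketch {
    // Stands in for the DB table: ID column -> serialized bucket.
    static final Map<Integer, byte[]> table = new HashMap<>();

    // Every write pays the full cycle: fetch bucket, deserialize,
    // update, re-serialize, re-insert the whole bucket.
    static void write(String key, String value) throws Exception {
        int id = key.hashCode();                   // hash used as the ID column
        HashMap<String, String> bucket = read(id); // retrieve + deserialize
        bucket.put(key, value);                    // update the bucket
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(bucket);               // re-serialize everything
        }
        table.put(id, bos.toByteArray());          // re-insert into the "table"
    }

    @SuppressWarnings("unchecked")
    static HashMap<String, String> read(int id) throws Exception {
        byte[] bytes = table.get(id);
        if (bytes == null) return new HashMap<>();
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (HashMap<String, String>) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        write("k1", "v1");
        write("k1", "v2"); // the second write repeats the whole cycle
        System.out.println(read("k1".hashCode()).get("k1"));
    }
}
```

The JdbcStringBasedStore avoids all of this by writing one row per entry.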

Introducing JDBC Migrator
The JDBC Migrator is a standalone application that takes a single argument: the path to a .properties file, which must contain the configuration properties for both the source and target stores. To use the migrator you need the infinispan-tools-9.x.jar, as well as the JDBC drivers required by your source and target databases, on your classpath.

An example Maven pom that launches the migrator via mvn exec:java is presented below:

Migration Examples
Below are several example .properties files used for migrating various stores; an exhaustive list of all available properties can be found in the Infinispan user guide.  
Before attempting to migrate your existing stores please ensure you have backed up your database!

8.x JdbcBinaryStore -> 9.x JdbcStringBasedStore
The most important property to set in this example is "source.marshaller.type=LEGACY", as this instructs the migrator to utilise the Infinispan 8.x marshaller to unmarshal data stored in your existing DB tables. 
If you specified custom AdvancedExternalizer implementations in your Infinispan 8.x configuration, then you must also specify these in the migrator configuration and ensure that they are available on the migrator's classpath. To specify the AdvancedExternalizers to load, define the "source.marshaller.externalizers" property with a comma-separated list of class names. If an ID was explicitly set for your externalizer, then it is possible to prepend the externalizer's class name with "<id>:" to ensure the ID is respected by the marshaller. 
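To illustrate the expected format of that property value, here is a hypothetical parser for it (the class names are invented for the example, and the actual parsing lives inside the migrator):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ExternalizerPropertyParser {
    // Splits a value such as
    //   "256:com.example.FooExternalizer,com.example.BarExternalizer"
    // into class-name -> optional-explicit-ID pairs.
    static Map<String, Integer> parse(String property) {
        Map<String, Integer> externalizers = new LinkedHashMap<>();
        for (String entry : property.split(",")) {
            String[] parts = entry.split(":", 2);
            if (parts.length == 2) {
                externalizers.put(parts[1], Integer.parseInt(parts[0])); // explicit ID
            } else {
                externalizers.put(parts[0], null); // ID comes from the externalizer itself
            }
        }
        return externalizers;
    }

    public static void main(String[] args) {
        System.out.println(parse("256:com.example.FooExternalizer,com.example.BarExternalizer"));
    }
}
```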

TwoWayKey2StringMapper Migration
As well as using the JDBC Migrator to migrate from Infinispan 8.x, it is also possible to utilise it to migrate from one DB dialect to another or to migrate from one TwoWayKey2StringMapper implementation to another. 

Infinispan 9 stores are no longer compatible with Infinispan 8.x stores due to internal marshalling changes. Furthermore, the JdbcBinary and JdbcMixed stores have been removed due to their poor performance characteristics.  To aid users in their transition from Infinispan 8.x we have created the JDBC Migrator to enable users to migrate their existing JDBC stores.

If you're a user of the JDBC stores and have any feedback on the latest changes, let us know via the forum, issue tracker or the #infinispan channel on Freenode. 
Dear users, the last release candidate for Infinispan 9 is out!

This milestone contains mostly bug fixes and documentation improvements ahead of 9.0.0.Final. Noteworthy changes:
  • Kubernetes Rolling Updates are fully supported
  • Infinispan Rolling Upgrades on Kubernetes are fully supported
  • Library updates: JGroups 4.0.1, Protostream 4.0.0.Alpha9, Log4j2 2.8.1
  • Deadlock detection hasn't kept up with the improvements to our locking algorithm and has been removed.
  • Support for authentication in the REST endpoint
For the full list of changes check the release notes, download the 9.0.0.CR3 release and let us know if you have any questions or suggestions.

The Infinispan team

    Modern applications and microservices often need to expose their health status. A common example is Spring Actuator but there are also many different ways of doing that. 
    Starting from Infinispan 9.0.0.Beta2 we introduced the HealthCheck API. It is accessible in both Embedded and Client/Server mode. 
    Cluster Health and Embedded Mode
    The HealthCheck API might be obtained directly from EmbeddedCacheManager and it looks like this:

    The nice thing about the API is that it is exposed in JMX by default:

    More information about using HealthCheck API in Embedded Mode might be found here:
    Cluster Health and Server Mode
    Since the Infinispan server is based on WildFly, we decided to expose health through the CLI as well as the built-in management REST interface.
    Here's an example of checking the status of a running server:

    Querying the HealthCheck API using the Management REST is also very simple:

    Note that for the REST endpoint, you have to use proper credentials. 
    More information about the HealthCheck API in Server Mode might be found here:
    Cluster Health and Kubernetes/OpenShift
    Monitoring cluster health is crucial for cloud platforms such as Kubernetes and OpenShift. Those platforms use a concept of immutable Pods, which means that every time you need to change anything in your application (e.g. its configuration), you need to replace the old instances with new ones. There are several ways of doing this, but we highly recommend using Rolling Updates. We also recommend tuning the configuration to instruct Kubernetes/OpenShift to replace Pods one by one (I will show you an example in a moment). 
    Our goal is to configure Kubernetes/OpenShift in such a way that each time a new Pod joins or leaves the cluster, a State Transfer is triggered. While data is being transferred between the nodes, the Readiness Probe needs to report failure and prevent Kubernetes/OpenShift from making progress in the Rolling Update procedure. Once the cluster is back in a stable state, Kubernetes/OpenShift can replace another node. This loops until all nodes are replaced. 
    Luckily, we introduced two scripts in our Docker image which can be used out of the box for Liveness and Readiness Probes. At this point we are ready to put all the pieces together and assemble the DeploymentConfig:

    Interesting parts of the configuration:
    • lines 13 and 14: We allocate additional capacity for the Rolling Update and allow one Pod to be down. This ensures Kubernetes/OpenShift replaces nodes one by one.
    • line 44: Sometimes shutting a Pod down takes a little while. It is always better to wait until it terminates gracefully than taking the risk of losing data.
    • lines 45 - 53: The Liveness Probe definition. Note that when a node is transferring data it might be highly occupied; it is wise to set a higher value of 'failureThreshold'.
    • lines 54 - 62: The same rule as above: the bigger the cluster, the higher the values of 'successThreshold' and 'failureThreshold'.
    Feel free to check out other articles about deploying Infinispan on Kubernetes/OpenShift:
    We've just released Infinispan Node.js Client version 0.4.0 which comes with encrypted client connectivity via SSL/TLS (with optional TLS/SNI support), as well as cross-site client failover.

    Thanks to the encryption integration, Node.js Hot Rod clients can talk to Hot Rod servers via an encrypted channel, allowing trusted and/or authenticated clients to connect. Check the documentation for information on how to enable encryption in the Node.js Hot Rod client.

    Also, we've added the possibility for the client to connect to multiple clusters. Normally, the client is connected to a single cluster, but if all nodes fail to respond, the client can failover to a different cluster, as long as one or more initial addresses have been provided. On top of that, clients can manually switch clusters using switchToCluster and switchToDefaultCluster APIs. Check documentation for more info.

    On top of that, we've applied several bug fixes that further tighten the inner workings of the Node.js client.

    If you're a Node.js user and want to store data remotely in Infinispan Server instances, please give the client a go and tell us what you think of it via our forum, via our issue tracker or via IRC on the #infinispan channel on Freenode.
    Dear community.

    We are one step closer to the final release of Infinispan 9: we gladly announce the release of Infinispan 9.0.0.CR2.

    The highlights of this release are:
    • Many dependencies have been upgraded to the latest and greatest:
      • JGroups 4.0.0.Final
      • Apache Lucene 5.5.4
      • Hibernate Search 5.7.0.Final
      • Protostream 4.0.0.Alpha7 
    • Transactional caches changes:
      • Removed asynchronous configuration since it won't be supported anymore.
      • Introduced EmbeddedTransactionManager: a basic transaction manager implementation.
    • Query now supports java.time.Instant natively
    • Changes in the configuration
    • Significant performance improvements for embedded and client/server mode
    • And finally, quite a few bug fixes preparing us for the final release!

      You can read all about these in the release notes. Keep an eye on the upgrade guide and start preparing your project for the final Infinispan 9 release.

      So, please head over to the download page and try it out. If you have an issue, please report it in our bug tracker, ask us on the forum, or join us for a friendly chat on the #infinispan IRC channel on Freenode.

      Infinispan Team.
      I'm happy to announce Spring Boot Starters 1.0.0.Beta1.

      The changelog includes:

      • Fixed path (now it uses 
      • ISPN-7468 Added Spring Cache automatic discovery and creation 
      • Fixed typo in artifact name 
      • Upgraded to the latest artifact versions 
      • Removed deprecated classes from tests
      Grab them while they are hot!
      Those busy hackers over in the Infinispan dungeon have brewed up a new release, and it is the first candidate on the road to the final 9.
      Infinispan 9.0.0.CR1 (codenamed "Ruppaner") includes a number of fixes and component upgrades over the last Beta release. You can read all about these in the list of fixed issues. We have also done a lot of work to restructure the user guide, upgrade guide and server admin guide to make it easier to find the answers you need.

      So, please head over to the download page and try it out. If you have an issue, please report it in our bug tracker, ask us on the forum, or join us for a friendly chat on the #infinispan IRC channel on Freenode.

      Your friendly Infinispan team

      We're pleased to announce that the 8.1.0.CR1 release for C++/C# clients is out! Downloading the code you'll find these changes (and many more):
      • C++11 instead of the old portable custom classes
      • fewer bugs
      • more safety through TLS client authentication
      We're getting closer to a final release, more updates on what's going on here:

      It's release day at Infinispan HQ and we've just released a couple of new versions:
      • Infinispan 9.0.0.Beta2 includes:
        • New:
          • Multi-tenancy support for the Hot Rod and REST endpoints improves the Infinispan Server experience on OpenShift.
          • Transactional support for Functional API (thx Radim!)
          • Internal data container changes, see Will's blog posts (here and here) for more info.
          • Off-heap bounded data container has been added.
          • Elasticsearch storage for indexes.
          • Multiple additions and enhancements to the management console.
          • Further performance improvements.
        • Backwards compatibility:
          • Binary and mixed JDBC cache stores have been removed. To migrate data over, use the JDBC cache store migrator.
          • Dropped default cache inheritance.
        • Full release notes.

      Before the end of the year I wrote a blog post detailing some of the more recent changes that Infinispan has introduced to the in-memory data container. As mentioned there, we would be detailing some other new changes. If you poked around in our new schema after Beta 1, you may have spoiled the surprise for yourself.

      With the upcoming Beta 2, I am excited to announce that Infinispan will support storing entries off heap, that is, outside of the JVM heap. This has some interesting benefits and drawbacks, but we hope you'll agree that in many cases the benefits far outweigh the drawbacks. Before we get into that, let's first see how you can configure your cache to utilize off-heap storage.

      New Configuration
      The off-heap configuration is another option under the new memory element that was discussed in the previous post. It is used in the same way as either OBJECT or BINARY.
      As you can see, the configuration is almost identical to the other types of storage. The only real difference is the new address pointer argument, which is explained below.

      Our off-heap implementation uses Java's Unsafe to allocate memory outside of the Java heap. The data is stored as buckets of linked-list pointers, just like a standard Java HashMap. When an entry is added, the key's serialized byte[] is hashed and an appropriate offset into the buckets is found. The entry is then added to the bucket as the first element or, if entries are already present, appended to the rear of the linked list.

      All of this data is protected by an array of ReadWriteLock instances. The number of address pointers is evenly divisible by the number of lock instances, and the number of lock instances is the number of cores your machine has, doubled and rounded to the nearest power of two. Thus each lock protects an equal number of address spaces. This provides good lock granularity: reads will not block each other, but unfortunately writes will wait and block reads.
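A rough sketch of that striping scheme (illustrative only, not the actual implementation; we assume "rounded to the nearest power of two" rounds upward):

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockStripingSketch {
    // Lock count: core count doubled, rounded up to a power of two.
    static int lockCount(int cores) {
        int target = cores * 2;
        int count = 1;
        while (count < target) count <<= 1;
        return count;
    }

    // Pointers divide evenly among locks, so each lock guards a
    // contiguous, equally sized block of address slots.
    static int lockIndex(int pointerIndex, int pointerCount, int lockCount) {
        return pointerIndex / (pointerCount / lockCount);
    }

    public static void main(String[] args) {
        int locks = lockCount(4); // 8 locks on a 4-core machine
        ReadWriteLock[] stripes = new ReentrantReadWriteLock[locks];
        for (int i = 0; i < locks; i++) stripes[i] = new ReentrantReadWriteLock();

        ReadWriteLock guard = stripes[lockIndex(42, 1 << 20, locks)];
        guard.readLock().lock(); // concurrent reads on a stripe don't block each other
        try {
            // ... read the entry behind address pointer 42 ...
        } finally {
            guard.readLock().unlock();
        }
        System.out.println(locks);
    }
}
```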

      If you are using a bounded off-heap container, whether by count or by memory, a backing LRU doubly linked list keeps track of which elements were accessed most recently, and the least recently accessed element is removed when there are too many entries in the cache.
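Conceptually, the bounded container behaves like an access-ordered map with eviction. A minimal on-heap analogue (the real list lives off heap as a doubly linked list):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruSketch {
    // Access-ordered map that evicts the least recently used entry
    // once the configured maximum is exceeded.
    static <K, V> Map<K, V> boundedLru(int maxEntries) {
        return new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, String> cache = boundedLru(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");        // touch "a" so "b" becomes least recently used
        cache.put("c", "3");   // evicts "b"
        System.out.println(cache.keySet()); // prints [a, c]
    }
}
```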

      Our off-heap implementation supports all existing features of Infinispan. There are, however, some limitations and drawbacks to using it. This section describes these in further detail.

      Off-heap storage runs in what is essentially BINARY mode, which requires entries to be serialized into their byte[] forms. Thus all keys and values must be Serializable or have Infinispan Externalizers provided.

      Currently a key and a value must each fit in a byte[], so a key or value in serialized form cannot be larger than just over 2 gigabytes. This could possibly be enhanced at a later point if the need arose. I hope you aren't transferring that over your network, though!

      Memory Overhead
      As with all cache implementations, there is overhead required to store entries. We have a fixed overhead and a variable overhead that scales with the number of entries. I will detail these and briefly mention what they are used for.
      Fixed overhead
      As was mentioned, there is a new address count parameter when configuring off-heap storage. This value determines how many linked-list pointers are available. Normally you want more pointers than entries in the cache, since then the chances are that each linked list holds at most one element. This is very similar to the int-argument constructor of HashMap, the big difference being that this off-heap implementation will not resize. Thus your read/write times will be slower if you have a lot of collisions. The overhead of a pointer is 8 bytes, so approximately one million pointers take 8 megabytes of off-heap memory.

      Bounded off heap requires very little fixed memory, just 32 bytes for head/tail pointers and a counter and an additional Java lock object.
      Variable overhead
      Unfortunately, to store your entries we may need to wrap them with some data: for every entry you add to the cache, we store an additional 25 bytes. This data is used for header information and for our linked-list forward pointer.

      Bounded off-heap storage requires an additional address pointer for its LRU list, so each entry adds a further 36 bytes on top of the number above. It is larger because it requires a doubly linked list and pointers to and from the entry and its eviction node.
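Putting the numbers above together, a back-of-the-envelope overhead estimate (the 8-, 25- and 36-byte figures are taken from this post; the arithmetic is just for illustration):

```java
public class OffHeapOverheadEstimate {
    // 8 bytes per address pointer (fixed), 25 bytes per entry,
    // plus 36 more per entry when the container is bounded.
    static long overheadBytes(long pointers, long entries, boolean bounded) {
        long fixed = pointers * 8;
        long perEntry = bounded ? 25 + 36 : 25;
        return fixed + entries * perEntry;
    }

    public static void main(String[] args) {
        // ~1M pointers alone cost 8 MB of off-heap memory
        System.out.println(overheadBytes(1 << 20, 0, false)); // prints 8388608
        // one million unbounded entries add another ~25 MB on top
        System.out.println(overheadBytes(1 << 20, 1_000_000, false));
    }
}
```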

      The off-heap container was designed with the intent that key lookups be quite fast. In general these should offer about the same performance. However, local reads and stream operations can be a little slower, as an additional deserialization phase is required.

      We hope you all try out our new off-heap feature! Please contact us if you have any feedback, find any bugs or have any questions! You can get in touch on our forum, our issue tracker, or directly in the #infinispan channel on Freenode.
      Infinispan 9 introduces several changes to the JDBC stores, in summary:
      • Configuration of DB version
      • Upsert support for store writes
      • Timestamp indexing
      • c3p0 connection pool replaced by HikariCP

      DB Version Configuration
      Previously, when configuring a JDBC store it was only possible for a user to specify the vendor of the underlying DB. Consequently, it was not possible for Infinispan to utilise more recent DB features, as the SQL used by our JDBC stores had to satisfy the capabilities of the oldest supported DB version.

      In Infinispan 9 we have completely refactored the code responsible for generating SQL queries, enabling our JDBC stores to take greater advantage of optimisations and features applicable to a given database vendor and version. See the gist below for examples of how to specify the major and minor versions of your database.

      Programmatic config:
      XML Config:
      Note: If no version information is provided, then we attempt to retrieve version data via the JDBC driver.  This is not always possible and in such cases we default to SQL queries which are compatible with the lowest supported version of the specified DB dialect.

      Upsert Support
      As a consequence of the refactoring mentioned above, writes to the JDBC stores finally utilise upserts. Previously, the JDBC stores had to first select an entry before inserting or updating a DB row, depending on whether the entry previously existed. Now, on supported DBs, store writes are performed atomically via a single SQL statement.
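As a sketch of the difference, here is what the two approaches look like in SQL, using PostgreSQL's ON CONFLICT syntax as one example dialect. The table and column names are invented for illustration; the store generates its own dialect-specific SQL internally:

```java
public class UpsertSketch {
    // Old behaviour: a read followed by a separate INSERT or UPDATE.
    static final String OLD_SELECT_FIRST =
            "SELECT id FROM ispn_entries WHERE id = ?"; // then INSERT or UPDATE

    // New behaviour: one atomic statement does both.
    static String upsert(String table) {
        return "INSERT INTO " + table + " (id, data, timestamp) VALUES (?, ?, ?) "
                + "ON CONFLICT (id) DO UPDATE SET data = EXCLUDED.data, "
                + "timestamp = EXCLUDED.timestamp";
    }

    public static void main(String[] args) {
        System.out.println(upsert("ispn_entries"));
    }
}
```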

      In some cases it may be desirable for the previous store behaviour to be utilised, in such cases the following property should be passed to your store's configuration and set to true: `infinispan.jdbc.upsert.disabled`.

      Timestamp Indexing
      By default, an index is now created on the `timestamp-column` of a JDBC store when the "create-on-start" option is set to true for the store's table. The advantage of this index is that it prevents the DB from having to perform full table scans when purging a table of expired cache entries. Similar to upsert support, this index is optional and can be disabled by setting the property `infinispan.jdbc.indexing.disabled` to true.  
      Hello HikariCP
      In Infinispan 9 we welcome HikariCP as the new default implementation of the JDBC PooledConnectionFactory. HikariCP provides superior performance to c3p0 (the previous default), whilst also providing a much smaller footprint. The PooledConnectionFactoryConfiguration remains the same as before, except that we now include the ability to explicitly define a properties file where additional configuration parameters can be specified for the underlying HikariCP. For a full list of the available HikariCP configuration properties, please see the official documentation.
      Note: Support for c3p0 has been deprecated and will be removed in a future release. However, users can force c3p0 to be utilised as before by providing the system property `-Dinfinispan.jdbc.c3p0.force=true`.

      We have introduced the above new features to the JDBC stores in order to improve performance and to enable us to further the stores' capabilities in the future. If you're a user of the JDBC stores and have any feedback on the latest changes, or would like to request new features/optimisations, let us know via the forum, issue tracker or the #infinispan channel on Freenode.
      Dear Readers,

      As mentioned in our previous post about the new C++/C# release 8.1.0.Beta1, the clients are now equipped with near cache support.

      The near cache is an additional cache level that keeps the most recently used cache entries in an "in memory" data structure. Near-cached objects are synchronized with the remote server value in the background and can be read as fast as a map[] operation.

      So, does your client tend to periodically focus its operations on a subset of your entries? This feature could help: it's easy to use; just enable it and you'll have a near cache working seamlessly under the hood.

      A C++ example of a cache with near cache configuration
      The last line does the magic: INVALIDATED is the active mode for the near cache (the default mode is DISABLED, which means no near cache; see the Java docs), and maxEntries is the maximum number of entries that can be stored in the near cache. If the near cache is full, the oldest entry is evicted. Set maxEntries=0 for an unbounded cache (do you have enough memory?).
      Now, a full example of an application that does some gets and puts and counts how many of them are served remotely and how many are served from the near cache. As you can see, the cache object is an instance of the well-known RemoteCache class.
      Entry values in the near cache are kept aligned with the remote cache state via the events subsystem: if something changes on the server, an update event (modified, expired, removed) is sent to the client, which updates the cache accordingly.

      By the way: did you know that C++/C# clients can subscribe listeners to events? In the next "native" post we will see how.

      and thank you for reading.
      New Year, New (Beta) Clients!

      I'm pleased to announce that the C++/C# clients version 8.1.0.Beta1 are out!
      The big news in this release is:

      • Near Caching Support

      Find the bits in the usual place:

      Features list for 8.1 is almost done... not bad :)
      Feedback, proposals, hints and lines of code are welcome!

      Happy New Year,
      The Infinispan Team
      Ho, ho, hooo! It looks like all members of the Infinispan community have been nice, and Santa brought you Spring Boot Starters!

      This will make you even more productive and your code less verbose!
      Why do I need starters?
      Spring Boot Starters make the bootstrapping process much easier and faster. The starter brings in the required Maven dependencies as well as some predefined configuration bits.
      What do I need to get started?
      The starter can operate in two modes: client/server (when you connect to a remote Infinispan Server cluster) and embedded (packaged along with your app). The former is the default. It's also possible to use both those modes at the same time (store some data along with your app and connect to a remote Infinispan Server cluster to perform some other type of operations).
Assuming you have an Infinispan Server running at a known address, all you need to do is use the following dependencies:

By default, the starter will try to locate a Hot Rod client properties file on the classpath. The file should contain at least the server list:
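A minimal sketch of such a properties file, assuming the conventional Hot Rod client file name `hotrod-client.properties` (the exact name expected by the starter is not stated above):

```properties
# Assumed file: hotrod-client.properties on the classpath.
# The server list is the only required entry.
infinispan.client.hotrod.server_list=127.0.0.1:11222
```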

      It is also possible to create RemoteCacheManager's configuration manually:
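A hedged sketch of what that manual configuration might look like, using the Hot Rod client's programmatic builder (whether the starter picks up such a bean by type, and the bean name, are assumptions here; not compiled against the starter):

```java
// Sketch only: expose a Hot Rod client Configuration as a Spring bean
// instead of relying on a properties file. Host/port are illustrative.
@Bean
public org.infinispan.client.hotrod.configuration.Configuration hotRodConfiguration() {
    return new org.infinispan.client.hotrod.configuration.ConfigurationBuilder()
            .addServer().host("127.0.0.1").port(11222)
            .build();
}
```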

      That's it! Your app should successfully connect to a remote cluster and you should be able to inject RemoteCacheManager.
Using Infinispan embedded is even simpler than that. All you need to do is add an additional dependency to the classpath:

      The starter will provide you a preconfigured EmbeddedCacheManager. In order to customize the configuration, use the following code snippet:
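As a hedged sketch (the starter's exact customization hook is an assumption here; this simply builds a standard embedded Configuration programmatically):

```java
// Sketch only: a programmatic cache Configuration bean that a customized
// EmbeddedCacheManager could use. The bean name is illustrative.
@Bean
public org.infinispan.configuration.cache.Configuration defaultCacheConfiguration() {
    return new org.infinispan.configuration.cache.ConfigurationBuilder()
            .simpleCache(true)   // a local, lock-free cache for simple workloads
            .build();
}
```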
      Further reading
There are two links I highly recommend reading: the first is the Spring Boot tutorial, and the second is the GitHub page of the Starters project.

Special thanks go to Marco Yuen, who donated the Spring Boot Starters code, to Tomasz Zabłocki, who updated it to the current version, and to Stéphane Nicoll, who spent a tremendous amount of time reviewing the Starters.
      Infinispan 9.0 Beta 1 introduces some big changes to the Infinispan data container.  This is the first of two blog posts detailing those changes.

This post will cover the changes to eviction, which now utilizes a new provider, Caffeine.  As you may already know, Infinispan has supported our own implementations of the LRU (Least Recently Used) and LIRS (Low Inter-reference Recency Set) algorithms for our bounded caches.

Our eviction implementations were even rewritten for Infinispan 8, but we found we still had some issues and limitations, especially with LIRS.  Our old implementation had problems keeping the correct number of entries.  The new implementation, while not having that issue, had others, such as being considerably more complex; and although it implemented the entire LIRS specification, it could have memory usage issues.  This led us to look at alternatives, and Caffeine seemed like a logical fit: it is well maintained, and its author, Ben Manes, is quite responsive.

      Enter Caffeine
Caffeine doesn't utilize LRU or LIRS for its eviction algorithm and instead implements TinyLFU with an admission window.  This has the benefit of a high hit ratio like LIRS, while also requiring low memory overhead like LRU.  Caffeine also provides custom weighting for objects, which allows us to reuse the code that was developed for MEMORY based eviction as well.

      The only thing that Caffeine doesn't support is our idea of a custom Equivalence.  Thus Infinispan now wraps byte[] instances to ensure equals and hashCode methods work properly.  This also gives us a good opportunity to reevaluate the dataContainer configuration element.

The data container configuration has thus been deprecated and is replaced by a new configuration element named memory.  Since we were adding a new element anyway, the eviction configuration has been consolidated into memory as well, and thus eviction is also deprecated.

      New Configuration
      The new memory configuration will start out pretty simple and new elements can be added as there is a need.  The memory element will be composed of a single sub element that can be of three different choices.  For this post we will go over two of the sub elements: OBJECT and BINARY.

      Object storage stores the actual objects as provided from the user in the Java Heap.  This is the default storage method when no memory configuration is provided.  This method will provide the best performance when using operations that operate upon the entire data set, such as distributed streams, indexing and local reads etc.

Unfortunately, OBJECT storage only allows COUNT based eviction, as we cannot properly estimate the size of arbitrary user object types.  This could be improved in a future version if there is enough interest. Note that you can technically configure the MEMORY eviction type with the OBJECT storage type in declarative configuration, but it will throw an exception when you build the configuration.  Therefore OBJECT has only a single attribute, named size, to determine the number of entries that can be stored in the cache.

      An example of how Object storage can be configured:
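A minimal declarative sketch (element and attribute names as introduced for the 9.0 memory configuration; the cache name is illustrative):

```xml
<local-cache name="example">
   <memory>
      <object size="1000"/>
   </memory>
</local-cache>
```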
Binary storage stores the object in its serialized form in a byte array.  This has an interesting side effect: objects are always stored as deep copies, which can be useful if you want to modify an object after retrieving it without affecting the underlying stored value.  Since objects have to be deserialized when operated upon, operations such as distributed streams and local gets will be a little slower.

      A nice benefit of storing entries as BINARY is that we can estimate the total on heap size of the object.  Thus BINARY supports both COUNT and MEMORY based eviction types.

      An example of how Binary storage can be configured:
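A minimal declarative sketch along the same lines (the MEMORY eviction type and size value are illustrative):

```xml
<local-cache name="example">
   <memory>
      <binary size="1000000" eviction="MEMORY"/>
   </memory>
</local-cache>
```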
      This option will be described in more detail in the next blog post.  Stay tuned!

Caffeine should bring us a great solution while also greatly reducing our own maintenance burden.  The new memory configuration also provides a simpler setup by replacing two other configuration elements.

We hope you enjoy the new changes to the data container, and look out for another blog post coming soon detailing the other new changes!  In the meantime, please check out our latest Infinispan 9.0 before it goes final and give us any feedback on IRC or JIRA.
      Last month I presented about building functional reactive applications with Infinispan, Node.js and Elm at both Soft-Shake in Geneva (slides) and Devoxx Morocco (slides).
      Thanks a lot to all the participants who attended the talks and thanks also to the organisers for accepting my talk. Both conferences were really enjoyable!
      At Soft-Shake I managed to attend a few presentations, and the one that really stuck with me was the one from Alexandre Masselot on "Données CFF en temps réel: tribulations techniques dans la stack Big Data" (slides). It was a very interesting use case on doing big data with the information from the Swiss Rail system. Although there was no live demo, Alexandre gave the link to a repo where you can run stuff yourself. Very cool!
      On top of that, I also attended a talk by Tom Bujok on Scaling Your Application Out. Tom happens to be an old friend who since I last met him has joined Hazelcast ;)

      Shortly after Shoft-Shake I headed to Casablanca to speak at Devoxx Morocco. This was a fantastic conference with a lot of young attendees. The room was almost packed up for my talk and I got good reaction from the audience on both the talk and the live demo.
      During the conference I also attended other talks, including a couple of Kubernetes talks by Ray Tsang, who is an Infinispan committer himself. In his presentations he uses a Kubernetes visualizer which is very cool and I'm hoping to use it in future presentations :)
      No more conferences for this year, thanks to all who've attended Infinispan presentations throughout the year!

      As you’ve already learned from an earlier post this week, Infinispan 9 is on its final approach to landing and is bringing a new query language. Hurray! But wait, was there something wrong with the old one(s)? Not wrong really ...  I’ll explain.
Infinispan is a data grid with several query languages. Historically, it offered search support early in its existence by integrating with Hibernate Search, which provides a powerful Java-based DSL enabling you to build Lucene queries and run them on top of your Java domain model living in the data grid. Usage of this integration is confined to embedded mode, but that still succeeds in making Java users happy.
      While the Hibernate Search combination is neat and very appealing to Java users it completely leaves non-JVM languages accessing Infinispan via remote protocols out in the cold.
Enter Remote Query. Infinispan 6.0 started to address the need for searching the grid remotely via Hot Rod. The internals are still built on top of the Lucene and Hibernate Search bedrock, but these technologies are now hidden behind a new query API, the QueryBuilder, an internal DSL resembling JPA criteria queries. The QueryBuilder has implementations for both embedded mode and Hot Rod. This new API provides all the relational operators you can think of, but no full-text search initially; we planned to add that later.
Creating a new internal DSL was fun. However, having a long-term strategy for evolving it while keeping complete backward compatibility, and doing so uniformly across implementations in multiple languages, proved to be a difficult challenge. So while we were contemplating adding new full-text operators to this DSL, we decided to make a long leap forward and adopt a more flexible alternative: our own string-based query language, another DSL really, albeit an external one this time.
So, after this long preamble, let me introduce Ickle, Infinispan's new query language, conspicuously resembling JP-QL.
      • is a light and small subset of JP-QL, hence the lovely name
      • queries Java classes and supports Protocol Buffers too
      • queries can target a single entity type
      • queries can filter on properties of embedded objects too, including collections
      • supports projections, aggregations, sorting, named parameters
      • supports indexed and non-indexed execution
      • supports complex boolean expressions
      • does not support computations in expressions (e.g. user.age > sqrt(user.shoeSize + 3) is not allowed but user.age >= 18 is fine)
      • does not support joins
        • but, navigations along embedded entities are implicit joins and are allowed
        • joining on embedded collections is allowed
        • other join types not supported
      • subqueries are not supported
      • besides the normal relational operators it offers full-text operators, similar to Lucene’s  query parser
      • is now supported across various Infinispan APIs, wherever a Query produced by the QueryBuilder is accepted (even for continuous queries or in event filters for listeners!)

      That is to say we squeezed JP-QL to the bare minimum and added full-text predicates that closely follow the syntax of Lucene’s query parser.
      If you are familiar with JPA/JP-QL then the following example will speak for itself:
      select accountId, sum(amount) from com.acme.Transaction
          where amount < 20.0
          group by accountId
          having sum(amount) > 1000.0
          order by accountId
      The same query can be written using the QueryBuilder:
Query query = queryFactory.from(Transaction.class)
    .select(Expression.property("accountId"), Expression.sum("amount"))
    .having("amount").lt(20.0)
    .toBuilder()
    .groupBy("accountId")
    .having(Expression.sum("amount")).gt(1000.0)
    .toBuilder()
    .orderBy("accountId")
    .build();
      Both examples look nice but I hope you will agree the first one is better.
Ickle supports several new predicates for full-text matching that the QueryBuilder is missing. These predicates use the : operator that you are probably familiar with from Lucene's own query language.  This example demonstrates a simple full-text term query:
      select transactionId, amount, description from com.acme.Transaction
      where amount > 10 and description : "coffee"
      As you can see, relational predicates and full-text predicates can be combined with boolean operators at will.
The only important thing to remark here is that relational predicates are applicable to non-analyzed fields, while full-text predicates can be applied to analyzed fields only. How does indexing work, what is analysis, and how do I turn it on or off for my fields? That's the topic of a future post, so please be patient or start reading here.
      Besides term queries we support several more:
      • Term                     description : "coffee"
      • Fuzzy                    description : "cofee"~2
      • Range                   amount : [40 to 90}
      • Phrase                  description : "hello world"
      • Proximity               description : "canceling fee"~3
      • Wildcard                description : "te?t"
      • Regexp                 description : /[mb]oat/
      • Boosting                description : "beer"^3 and description : "books"
      You can read all about them starting from here.
But is Ickle really new? Not quite. The name is new, and the full-text features are new, but a JP-QL-ish query string has always been internally present in the Query objects produced by the QueryBuilder, ever since the beginning of Remote Query. That language was just never exposed and specified until now. It has evolved significantly over time, and now it is ready for you to use. The QueryBuilder / criteria-like API is still there as a convenience, but it may fall out of favor over time, and it will be limited to non-full-text functionality. As Ickle grows, we will probably not be able to include some of its additions in the QueryBuilder in a backward-compatible manner. If that causes too much pain we might deprecate the QueryBuilder in favor of Ickle; or, if there is serious demand, we might continue to evolve it in a non-compatible manner.
      Being a string based query language, Ickle is very convenient for our REST endpoint, the CLI, and the administration console allowing you to quickly inspect the contents of the grid. You’ll be able to use it there pretty soon. We’ll also continue to expand Ickle with more advanced full-text features like spatial queries and faceting, but that’s a subject for another major version. Until then, why not grab the current 9.0 Beta1 and test drive the new query language yourself? We’d love to hear your feedback on the forum, on our issue tracker or on IRC on the #infinispan channel on Freenode.
      Happy coding!
      As I hope most people reading this already know, since Infinispan 8 you can utilize the entire Java 8 Stream API and have it be distributed across your cluster.  This performs the various intermediate and terminal operations on the data local to the node it lives on, providing for extreme performance.  There are some limitations and things to know as was explained at distributed-streams.

      The problem with the API up to now was that, if you wanted to use lambdas, it was quite an ugly scene.  Take for example the following code snippet:
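To illustrate the pre-9.0 ergonomics, here is a runnable sketch of the intersection cast that was required to make a lambda serializable (a plain HashMap stands in for the Cache so the example is self-contained; with a real distributed stream the cast was needed so the operation could be marshalled to other nodes):

```java
import java.io.Serializable;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class SerializableLambdaDemo {
    public static void main(String[] args) {
        Map<Integer, String> data = new HashMap<>();
        data.put(1, "JBoss AS");
        data.put(2, "WildFly");

        // Pre-Infinispan 9 style: the lambda must be cast to an intersection
        // of Serializable and the functional interface.
        Predicate<Map.Entry<Integer, String>> filter =
                (Serializable & Predicate<Map.Entry<Integer, String>>)
                        e -> e.getValue().contains("JBoss");

        List<Integer> keys = data.entrySet().stream()
                .filter(filter)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());

        System.out.println(keys); // prints [1]
    }
}
```

The cast is pure boilerplate; it carries no logic, which is exactly why it reads so badly when repeated on every stream stage.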

8.0 Distributed Streams Example

However, for Infinispan 9 we utilize a little syntax feature added in Java 8 [1] to make some much-needed quality-of-life improvements: the most specific interface is chosen when a method is overloaded.  This allows for a neat interaction when we add new interfaces that implement Serializable as well as the various function interfaces (SerializableFunction, SerializablePredicate, SerializableSupplier, etc.).  All of the Stream methods have been overridden on the CacheStream interface to take these arguments.

      This allows for the code to be much cleaner as we can see here:
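A hedged sketch of the 9.0 style (assumes the Infinispan 9 CacheStream overloads that accept serializable lambdas; not compiled here):

```java
// Sketch only: no intersection cast needed, the CacheStream overloads
// accept serializable lambdas directly. 'cache' is an Infinispan Cache.
Map<Object, String> jbossValues = cache.entrySet().stream()
        .filter(e -> e.getValue().contains("JBoss"))
        .collect(() -> Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
```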
9.0 Distributed Streams Example

Extra Methods

This is not the only benefit of providing the CacheStream interface: we can also provide new methods that aren't available on the standard Stream interface.  One example is the forEach method, which lets the user more easily work with a Cache that is injected on each node as required.  This way you don't have to use the clumsy CacheAware interface and can use lambdas directly.

      Here is an example of the new forEach method in action:
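A hedged sketch of what such a call could look like (assumes the Infinispan 9 forEach overload taking a BiConsumer of Cache and element; cache names are illustrative and this is not compiled here):

```java
// Sketch only: copy each entry of this cache into another cache,
// using the Cache instance injected on each owning node.
cache.entrySet().stream()
     .forEach((c, e) -> c.getCacheManager()
                         .getCache("target")
                         .put(e.getKey(), e.getValue()));
```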

      In this example we take a cache and, based on the keys in it, write those values into another cache. Since forEach doesn't have to be side effect free, you can do whatever you want inside here.

      All in all these improvements should make using Distributed Streams with Infinispan much easier.  The extra methods could be extended further if users have use cases they would love to suggest.  Just let us know, and I hope you enjoy using Infinispan!

      It took us quite a bit to get here, but we're finally ready to announce Infinispan 9.0.0.Beta1, which comes loaded with a ton of goodies.

      • Performance improvements
        • JGroups 4
        • A new algorithm for non-transactional writes (aka the Triangle) which reduces the number of RPCs required when performing writes 
        • A new, faster internal marshaller which produces smaller payloads
        • A new asynchronous interceptor core
      • Off-Heap support
        • Avoid the size of the data in the caches affecting your GC times
      • Caffeine-based bounded data container
        • Superior performance
        • More reliable eviction
      • Ickle, Infinispan's new query language
        • A limited yet powerful subset of JPQL
        • Supports full-text predicates
      • The Server Admin console now supports both Standalone and Domain modes
      • Pluggable marshallers for Kryo and ProtoStuff
      • The LevelDB cache store has been replaced with the better-maintained and faster RocksDB
      • Spring Session support
      • Upgraded Spring to 4.3.4.RELEASE

      We will be blogging about the above in detail over the coming weeks, including benchmarks and tutorials.
      The following improvements were also present in our previous Alpha releases:
      • Graceful clustered shutdown / restart with persistent state
      • Support for streaming values over Hot Rod, useful when you are dealing with very large entries
      • Cloud and Containers
        • Out-of-the box support for Kubernetes discovery
      • Cache store improvements
        • The JDBC cache store now uses transactions and upserts. Also, the internal connection pool is now based on HikariCP

      Also, our documentation has received a big overhaul and we believe it is vastly better than before.

      There will be one more Beta including further performance improvements as well as additional features, so stay tuned.

      Infinispan 9 is codenamed "Ruppaner" in honor of the Konstanz brewery, since many of the improvements in this release have been brewed on the shores of the Bodensee!


            In the previous post we showed how to manipulate the Infinispan Docker container configuration at both runtime and boot time.

            Before diving into multi-host Docker usage, in this post we'll explore how to create multi-container Docker applications involving Infinispan with the help of Docker Compose.

            For this we'll look at a typical scenario of an Infinispan server backed by an Oracle database as a cache store.

            All the code for this sample can be found on github.

             Infinispan with Oracle JDBC cache store 
In order to have a cache persisted to Oracle, we need some configuration: configure the driver in the server, create the data source associated with the driver, and configure the cache itself with JDBC persistence.
Let's take a look at each of those steps:

Obtaining and configuring the driver

The driver (ojdbc6.jar) should be downloaded and placed in the 'driver' folder of the sample project.

            The module.xml declaration used to make it available on the server is as follows:
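A sketch of such a module.xml (the module name and dependency list are assumptions here; they must match the datasource driver declaration in the server configuration):

```xml
<module xmlns="urn:jboss:module:1.3" name="com.oracle.ojdbc6">
   <resources>
      <resource-root path="ojdbc6.jar"/>
   </resources>
   <dependencies>
      <module name="javax.api"/>
      <module name="javax.transaction.api"/>
   </dependencies>
</module>
```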

Configuring the data source

The data source is configured in the "datasource" element of the server configuration file as shown below:
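A sketch of the datasource declaration (WildFly-style syntax; the JNDI name, host `oracle` from the Compose service, and the image's default system/oracle credentials are assumptions):

```xml
<datasource jndi-name="java:jboss/datasources/ExampleOracleDS"
            pool-name="ExampleOracleDS" enabled="true">
   <connection-url>jdbc:oracle:thin:@oracle:1521:XE</connection-url>
   <driver>oracle</driver>
   <security>
      <user-name>system</user-name>
      <password>oracle</password>
   </security>
</datasource>
```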

            and inside the "datasource/drivers" element, we need to declare the driver:
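A sketch of the driver declaration (the driver and module names are assumptions and must match the module.xml above):

```xml
<drivers>
   <driver name="oracle" module="com.oracle.ojdbc6">
      <driver-class>oracle.jdbc.OracleDriver</driver-class>
   </driver>
</drivers>
```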

Creating the cache

The last piece is to define a cache with the proper JDBC store:
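A sketch of what such a cache definition could look like (cache name, table prefix, and column types are illustrative; the exact JDBC store schema version depends on the server release used):

```xml
<local-cache name="oracle-cache">
   <persistence>
      <string-keyed-jdbc-store datasource="java:jboss/datasources/ExampleOracleDS">
         <string-keyed-table prefix="ISPN">
            <id-column name="id" type="VARCHAR(255)"/>
            <data-column name="datum" type="BLOB"/>
            <timestamp-column name="ts" type="BIGINT"/>
         </string-keyed-table>
      </string-keyed-jdbc-store>
   </persistence>
</local-cache>
```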

Putting it all together

Without Docker, at this point we'd have to download and install Oracle following the specific instructions for your OS, then download the Infinispan Server, edit the configuration files, copy over the driver jar, and figure out how to launch the database and the server, taking care to avoid port conflicts.

If this sounds like too much work, it's because it really is. Wouldn't it be nice to have all of this wired together and launched with a single command? Let's take a look at the Docker way next.

             Enter Docker Compose
Docker Compose is a tool, part of the Docker stack, that facilitates the configuration, execution and management of related Docker containers.

By describing the application aspects in a single YAML file, it allows centralized control of the containers, including custom configuration and parameters, and it also allows runtime interactions with each of the exposed services.

Composing Infinispan

Our Docker Compose file to assemble the application is given below:
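A sketch of what this Compose file could look like (the command arguments and mount paths are assumptions matching the description below; the images are those named in the post):

```yaml
version: "2"
services:
  oracle:
    image: wnameless/oracle-xe-11g
    environment:
      - ORACLE_ALLOW_REMOTE=true
  infinispan:
    image: jboss/infinispan-server:8.2.5.Final
    # Assumed: point the server at the changed configuration file
    command: custom/oracle-config.xml
    volumes:
      # driver jar plus its module.xml
      - ./driver:/opt/jboss/infinispan-server/modules/system/layers/base/com/oracle/ojdbc6/main
      # folder holding the server xml configuration
      - ./config:/opt/jboss/infinispan-server/standalone/configuration/custom
```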

            It contains two services:
            • one called oracle that uses the wnameless/oracle-xe-11g Docker image, with an environment variable to allow remote connections.
            •  another one called infinispan that uses version 8.2.5.Final of the Infinispan Server image. It is launched with a custom command pointing to the changed configuration file and it also mounts two volumes in the container: one for the driver and its module.xml and another for the folder holding our server xml configuration.
Launching

To start the application, just execute:
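Presumably (standard Compose usage; the command itself is not preserved above) this is simply:

```shell
docker-compose up
```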

            To inspect the status of the containers:
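The standard Compose status command would be:

```shell
docker-compose ps
```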

            To follow the Infinispan server logs, use:
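Assuming the service is named infinispan as described above, that would be:

```shell
docker-compose logs -f infinispan
```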

Infinispan usually starts faster than the database, and since the server waits until the database is ready (more on that later), keep an eye on the log output for "Infinispan Server 8.2.5.Final (WildFly Core 2.0.10.Final) started". After that, both Infinispan and Oracle are properly initialized.
Testing it

Let's insert a value using the REST endpoint from Infinispan and verify it was saved to the Oracle database:
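A sketch of such a round trip via curl (the cache name, port and key are assumptions; adjust them to the sample's actual configuration):

```shell
# Store a value, then read it back through the Infinispan 8.x REST endpoint
curl -X PUT -H 'Content-Type: text/plain' -d 'world' \
     http://localhost:8080/rest/oracle-cache/hello
curl http://localhost:8080/rest/oracle-cache/hello
```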

            To check the Oracle database, we can attach to the container and use Sqlplus:
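Presumably along these lines (the container name placeholder and the system/oracle credentials of the wnameless image are assumptions):

```shell
docker exec -it <oracle-container> sqlplus system/oracle
```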

            Other operations
It's also possible to scale the number of containers for each of the services up and down:
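With the Compose v1 tooling current at the time, that would look like (service name as assumed above):

```shell
docker-compose scale infinispan=3
docker-compose scale infinispan=1
```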

A thing or two about startup order

When dealing with dependent containers in Docker-based environments, it's highly recommended to make connection acquisition between the parties robust enough that one dependency not being fully initialized doesn't cause the whole application to fail at startup.

Although Compose does have a depends_on instruction, it simply starts the containers in the declared order; it has no means to detect when a certain container is fully initialized and ready to serve requests before launching a dependent one.

One may be tempted to simply write some glue script to detect if a certain port is open, but that does not work in practice: the network socket may be open while the background service is still in a transient initialization state.

The recommended solution is to make whatever depends on a service retry periodically until the dependency is ready. In the Infinispan + Oracle case, we specifically configured the data source with retries to avoid failing outright if the database is not ready:

When starting the application via Compose, you'll notice that Infinispan prints some WARN messages with connection exceptions until Oracle becomes available: don't panic, this is expected!

Docker Compose is a powerful and easy-to-use tool to launch applications involving multiple containers: in this post it allowed us to start Infinispan plus Oracle with custom configurations using a single command.
It's also a handy tool to have during the development and testing phases of a project, especially when using or evaluating Infinispan with its many possible integrations.

            Be sure to check other examples of using Docker Compose involving Infinispan: the Infinispan+Spark Twitter demo, and the Infinispan+Apache Flink demo.