Community News

Dear Infinispan Community,

Infinispan 9.1.0.Beta1 is out and can be found on our downloads page.


Full details of the new features and enhancements included in this release can be found here.

Short list of highlights:
  • [ISPN-7114] Consistency Checker, Conflict Resolution and Automatic merge policies
  • [ISPN-5218] Batching for CacheStores
  • [ISPN-7896] On-demand data conversion in caches
  • [ISPN-6676] HTTP/2 support in the REST endpoint with TLS/ALPN upgrade
  • [ISPN-7841] Add stream operations that can operate upon data exclusively
  • [ISPN-7868] Add encryption and authentication support to the Remote Store
  • [ISPN-7772] Hot Rod Client create/remove cache operations
  • [ISPN-6994] Add an AdvancedCache.withSubject(Subject) method for explicit impersonation
  • [ISPN-7803] Functional commands-based AtomicMaps
  • The usual slew of bug fixes, clean ups and general improvements.
As usual, we will be blogging about each feature and improvement.

Always consult the Upgrading guide to see what has changed. Thank you for following us and stay tuned!

The Infinispan Team
The implementation of cache authorization in Infinispan has traditionally followed the JAAS model of wrapping calls in a PrivilegedAction invoked through Subject.doAs(). This led to the following cumbersome pattern:
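A minimal sketch of that pattern (the cache and subject variables are illustrative):

    import java.security.PrivilegedAction;
    import javax.security.auth.Subject;

    // Every cache invocation wrapped in a PrivilegedAction run as the Subject
    Subject.doAs(subject, new PrivilegedAction<Void>() {
        @Override
        public Void run() {
            cache.put("key", "value");
            return null;
        }
    });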


We also provided an implementation which, instead of relying on enabling the SecurityManager, could use a lighter and faster ThreadLocal for storing the Subject:
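Assuming the same variables, the call has the same shape but goes through Infinispan's Security helper instead (a sketch; Security lives in org.infinispan.security):

    import java.security.PrivilegedAction;
    import org.infinispan.security.Security;

    // Same shape as Subject.doAs, but the Subject is kept in a ThreadLocal
    // instead of the AccessControlContext
    Security.doAs(subject, new PrivilegedAction<Void>() {
        @Override
        public Void run() {
            cache.put("key", "value");
            return null;
        }
    });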


While this solves the performance issue, it still leads to unreadable code.
This is why, in Infinispan 9.1, we have introduced a new way to perform authorization on caches:
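A sketch of the new API, with cache and subject assumed to be in scope:

    // Perform the operation as the given Subject, no wrapping required
    cache.getAdvancedCache().withSubject(subject).put("key", "value");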


Obviously, for multiple invocations, you can hold on to the "impersonated" cache and reuse it:
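For example, along these lines (again a sketch with illustrative keys and values):

    import org.infinispan.AdvancedCache;

    AdvancedCache<String, String> asSubject = cache.getAdvancedCache().withSubject(subject);
    asSubject.put("key1", "value1");
    asSubject.put("key2", "value2");
    asSubject.get("key1");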

We hope this will make your life simpler and your code more readable !
Are you attending Berlin Buzzwords and want to find out more about how Infinispan can help your systems react to real-time data quickly, and see the cool stuff we have for data analytics? Then make sure you come to my talk on Big Data In Action with Infinispan on Tuesday, 13th June at 16:30.



Cheers,
Galder
Dear Infinispan Community,

The first Alpha release of Infinispan 9.1 is out and can be found on our downloads page.

Highlights include:



Full details of the new features and enhancements included in this release can be found here.

Check out the new features and enhancements, download the release and tell us all about it on the forum, on our issue tracker or on IRC on the #infinispan channel on Freenode.

Cheers,
The Infinispan Team
Dear Infinispanners,

we're pleased to announce that the 8.1.1.Final release of the C++/C# clients is out!

Check the release notes and browse the source code; this time the effort has gone into reducing code complexity.

This is the first release built by our new Jenkins CI environment. This should not affect the binaries, but if you feel that something has gone wrong, please file a JIRA issue.

Enjoy and thanks for reading!

The Infinispan Team
I'm happy to announce that JGroups KUBE_PING 0.9.3 was released. The major changes include:
  • Fixed releasing connections for embedded HTTP Server
  • Fixed JGroups 3/4 compatibility issues
  • Fixed test suite
  • Fixed `Message.setSrc` compatibility issues
  • Updated documentation
The bits can be downloaded from the JBoss Repository as soon as the sync completes. In the meantime, please download them from here.
I would also like to recommend a recent blog post by Bela Ban. KUBE_PING has been completely revamped (no embedded HTTP server, reduced dependencies) and we plan to use the new 1.0.0 version in Infinispan soon! If you'd like to try it out, grab it from here.
Dear Infinispan Community,

We have just released Infinispan 9.0.1.Final which can be found on our downloads page. Full details of the fixes included in this release can be found here.

Check out the fixed issues, download the release and tell us all about it on the forum, on our issue tracker or on IRC on the #infinispan channel on Freenode.

Cheers,
The Infinispan Team
We are happy to announce that Infinispan Spring Boot Starters 1.0.0.Final have been released.

Change-list:

You can grab the bits from JBoss Repository after the sync is complete. In the meantime, grab them from here.
J On The Beach was a blast! It's only their second year doing the conference, but it was really well managed and it had an amazing lineup of speakers. To top it off, it was in Malaga, so the good weather made it possible to stay outside in the garden at La Termica chatting with attendees and speakers.

The evening before the start of the conference, we had a welcome reception at the Ayuntamiento de Malaga, learning about the IT and Big Data promotion that the mayor and his team are helping with.

The conference started with a mind-blowing keynote on quantum computing by Eric Ladizinsky. It was a super talk with very interesting information about what the future might hold in terms of computing. The challenges of quantum computing are immense, but the possibilities it opens up are staggering as well.

That first morning I had the chance to see Kyle Kingsbury's Jepsen talk, which was very entertaining. He gave an intro to Jepsen and looked back at the results of different distributed environments. This allowed the audience to get a good overview of what each system is capable of and what guarantees they provide. Also on the first day, I attended Christopher Meiklejohn's talk on Antidote, a geo-replicated NoSQL database with strong guarantees based on Riak. It uses CRDTs and Highly Available Transactions to achieve this.

On the second day I had my presentation on Functional Reactive Programming with Elm, Node.js and Infinispan. It was well received and got good feedback. Slides can be found here, and the demo repository is here. Unfortunately, due to scheduling and preparations for my talk, I couldn't go to Duarte Dunes' ScyllaDB and Tyler Akidau's Apache Beam talks, but I hope to catch up on those when the videos are shared.

However, I was able to attend Caitie McCaffrey's talk on Distributed Sagas, a protocol for coordinating microservices. Even though such a protocol would be hard to implement in all situations, e.g. an online ticket shop for a very popular artist, it had some interesting characteristics. The talk itself was delivered masterfully.

Finally, I was at Martin Thompson's High Performance Managed Languages talk, which was superb! With years of experience and the development of Aeron behind him, he was able to give an interesting overview of the performance characteristics of managed vs unmanaged languages. Flexibility in managed languages, such as C#, seems to be the best way to achieve top performance.

All in all it was a fantastic conference, and I was delighted to have been part of it. Valo, the company behind J On The Beach, were fantastic hosts, and I met some amazing people who are or have been part of the company, including Luis, Justo, Michael, Danielle and others.

I hope to come back another time :)

Cheers,
Galder
Are you in Malaga for J On The Beach 2017 and want to know more about functional reactive programming with Elm, Node.js and Infinispan? Then, make sure you come to this talk on Friday, 11am at Mollete Hall. It's a fun, live coding talk that you just can't miss :)

Cheers,
Galder
NoSQL Unit is a JUnit extension that helps you write NoSQL unit tests, created by Alex Soto. It brings the ideas first introduced by DBUnit to the world of NoSQL databases.

The essence of DBUnit or NoSQL Unit is that before running each test, the persistence layer is put into a known state. This makes your tests repeatable and independent of other test failures or potential database corruption.
You can use NoSQL Unit for testing embedded or remote Infinispan instances, and since version 1.0.0-rc.5, which was released a few days back, it supports the latest Infinispan 9.0.0.Final.

We have created a little demo GitHub repository showing you how to test Infinispan using NoSQL Unit. Go and give it a go! :)
Thanks Alex for bringing NoSQL Unit to my attention!
Cheers,
Galder
Over the past few years we've been blogging a lot on how to use Infinispan in cloud environments based on Docker, Kubernetes or OpenShift.

Continuing with this series of blog posts, Bela Ban, chief-in-charge of JGroups, posted an unmissable blog post yesterday showing not only how to run Infinispan with Kubernetes on Google Container Engine (GKE), but also how to load test it with IspnPerfTest.

If any of these topics interests you, don't miss out and head to Bela's blog to read all about it!

Thanks Bela for the blog post!!!

Cheers,
Galder
Thanks a lot to everyone who attended the Infinispan sessions I gave at the Great Indian Developer Summit! Your questions after the talks were really insightful.

One of the talks I gave was titled Big Data In Action with Infinispan (slides are available here), where I looked at how Infinispan-based in-memory data grids can help you deal with the problems of real-time big data and how to do big data analytics.
During the talk I live coded a demo showing both the real-time and analytics parts, running on top of OpenShift and using Vert.x for joining the different parts. The demo repository contains background information and instructions to get started with the demo, but I thought it'd be useful to provide focused step-by-step instructions in this blog post.
Set Up
Before we start with any of the demos, it's necessary to run some set up steps:
    1. Check out the git repository:
        git clone https://github.com/galderz/swiss-transport-datagrid
    2. Install OpenShift Origin or Minishift to get an OpenShift environment running on your own machine. I decided to use OpenShift Origin, so the instructions below are tailored for that environment, but similar instructions could be used with Minishift.
    3. Install Anaconda for Python 3; this is required to run the Jupyter notebook for plotting.
Demo Domain
Once the set up is complete, it's time to talk about the demos before we run them.
Both demos shown below work with the same application domain: swiss rail transport systems. In this domain, we differentiate between physical stations, trains, station boards which are located in stations, and finally stops, which are individual entries in station boards.
Real Time Demo
The first demo is about working with real-time data from station boards around the country and presenting a centralised dashboard of delayed trains. The following diagram shows how the components interact with each other to achieve this:


Infinispan, which provides the in-memory data grid storage, and Vert.x, which provides the glue for the centralised delayed dashboard to interact with Infinispan, both run within the OpenShift cloud.
Within the cloud, the Injector verticle cycles through station board data and injects it into Infinispan. Also within the cloud, a Vert.x verticle uses Infinispan's Continuous Query to listen for station board entries that are delayed; these are pushed onto the Vert.x event bus and, via a SockJS bridge, consumed over WebSockets by the dashboard. The centralised dashboard is written with JavaFX and runs outside the cloud.
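As a rough sketch of what the continuous-query part of that verticle could look like (the Stop type, the cache name, the query string, and the remoteCacheManager, vertx and toJson references are illustrative assumptions; the real code lives in the demo repository):

    import org.infinispan.client.hotrod.RemoteCache;
    import org.infinispan.client.hotrod.Search;
    import org.infinispan.query.api.continuous.ContinuousQuery;
    import org.infinispan.query.api.continuous.ContinuousQueryListener;
    import org.infinispan.query.dsl.Query;
    import org.infinispan.query.dsl.QueryFactory;

    // Continuously match delayed stops and push each one onto the Vert.x event bus
    RemoteCache<String, Stop> stationBoards = remoteCacheManager.getCache("station-boards");
    QueryFactory queryFactory = Search.getQueryFactory(stationBoards);
    Query delayed = queryFactory.create("FROM demo.Stop s WHERE s.delayMin > 0");

    ContinuousQuery<String, Stop> continuousQuery = Search.getContinuousQuery(stationBoards);
    continuousQuery.addContinuousQueryListener(delayed, new ContinuousQueryListener<String, Stop>() {
        @Override
        public void resultJoining(String id, Stop stop) {
            // Consumed by the dashboard over the SockJS bridge / WebSockets
            vertx.eventBus().publish("delayed-trains", toJson(stop));
        }
    });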
To run the demo, do the following:
    1. Start OpenShift Origin if you've not already done so:
        oc cluster up --public-hostname=127.0.0.1
    2. Deploy all the OpenShift cloud components:
        cd ~/swiss-transport-datagrid
        ./deploy-all.sh
    3. Open the OpenShift console and verify that all pods are up.
    4. Load the GitHub repository into your favourite IDE and run the delays.query.continuous.fx.FxApp JavaFX application. This will load the centralised dashboard. Within seconds delayed trains will start appearing. For example:

Analytics Demo
The second demo is focused on how you can use Infinispan for doing offline analytics. In particular, this demo tries to answer the following question:
Q. What is the time of the day when there is the biggest ratio of delayed trains?
Once again, this demo runs on top of OpenShift cloud, uses Infinispan as in-memory data grid for storage and Vert.x for glueing components together.
To answer this question, the Infinispan data grid will be loaded with 3 weeks' worth of data from station boards using a Vert.x verticle. Once the data is loaded, the Jupyter notebook will invoke an HTTP RESTful endpoint, which in turn will invoke a Vert.x verticle called AnalyticsVerticle.
This verticle will invoke a remote server task which will use Infinispan Distributed Java Streams to calculate the two pieces of information required to answer the question: per hour, how many trains are going through the system, and out of those, how many are delayed.
An important aspect to bear in mind about this server task is that it will only be executed in one of the nodes in the cluster; it does not matter which one. In turn, this node will ship the lambdas required to do the computation to each of the nodes so that they can be executed against their local data. The other nodes will reply with their results and the node where the server task was invoked will aggregate them.
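As a rough sketch of the shape of that computation using Infinispan's distributed streams (Stop, getHour(), isDelayed() and the cache name are illustrative stand-ins for the demo's actual domain model, and the real task merges the two results before replying):

    import java.io.Serializable;
    import java.util.Map;
    import java.util.function.Predicate;
    import java.util.stream.Collectors;
    import org.infinispan.Cache;
    import org.infinispan.stream.CacheCollectors;

    // Runs on the node that received the task; the lambdas are shipped to every
    // owner, executed against local data, and the partial results merged back.
    Cache<String, Stop> stops = cacheManager.getCache("station-board-stops");

    Map<Integer, Long> totalPerHour = stops.values().stream()
        .collect(CacheCollectors.serializableCollector(
            () -> Collectors.groupingBy(Stop::getHour, Collectors.counting())));

    Map<Integer, Long> delayedPerHour = stops.values().stream()
        .filter((Predicate<Stop> & Serializable) Stop::isDelayed)
        .collect(CacheCollectors.serializableCollector(
            () -> Collectors.groupingBy(Stop::getHour, Collectors.counting())));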
The results will be sent back to the originating invoker, the Jupyter notebook, which will plot them. The following diagram shows how these components interact with each other to achieve this:



Here is the demo step-by-step guide:
    1. Start OpenShift Origin and deploy all components as shown in previous demo.
    2. Start the Jupyter notebook:
        cd ~/swiss-transport-datagrid/analytics/analytics-jupyter
        ~/anaconda/bin/jupyter notebook
    3. Once the notebook opens, open the live-demo.ipynb notebook and execute each of the cells in order. You should end up seeing a plot like this:

So, the answer to the question:
Q. What is the time of the day when there is the biggest ratio of delayed trains?
is 2am! That's because the last connecting trains of the day wait for each other to avoid leaving passengers stranded.
Conclusion
This has been a summary of the demos that I presented at the Great Indian Developer Summit, with the intention of getting you running these demos as quickly as possible. The repository contains more detailed information about these demos. If there's anything unclear or any of the instructions above are not working, please let us know!
Once again, a very special thanks to Alexandre Masselot for being the inspiration for these demos. Merci @Alex!!
Over the next few months we will be enhancing the demo and hopefully we'll be able to do some more live demonstrations at other conferences.
Cheers,
Galder
I've just arrived in India where I'll be speaking about Infinispan, JBoss Data Grid and other related technologies at the Great Indian Developer Summit in Bangalore. So if you're attending and want to find out more about how Infinispan can help your systems react to real-time data quickly, and see the cool stuff we have for data analytics, make sure you come!!




For more details, check the conference schedule :)

Cheers,
Galder
Dears,

we're pleased to announce that the 8.1.0.Final release of the C++/C# clients is out!

Check the Release Notes and try it yourself without fear; it's tagged as stable!

As in the best TV series: Final doesn't mean the last! Stay tuned for the next 8.2.0 "More Fun Is Coming" season :)

Enjoy and thanks for reading!

The Infinispan Team
I'm pleased to announce that we have just released version 1.0.19 of the infinispan-archetype. This release focuses on making the archetype compatible with Infinispan 9.0 as well as adding a store archetype for creating custom cache writer/loader implementations.

Archetype Usage
To utilise the archetypes use the following commands:

Contributing
If you encounter any issues with the archetypes, or would like to request additional archetypes, please raise an issue on GitHub.
    Devoxx France 2017 was a blast!! Emmanuel and I would like to thank all attendees of our in-memory data grid patterns talk. The room was full and we thoroughly enjoyed the experience!

    During the talk we presented a couple of small demos that showcased some in-memory data grid use cases. The demos are located here, but I thought it'd be useful to provide some step-by-step instructions here so that you can get them running as quickly as possible.

    Before we start with any of the demos, it's necessary to run some set up steps:

      1. Check out git repository:

        git clone https://github.com/galderz/datagrid-patterns

      2. Download Infinispan Server 9.0.0.Final and unzip it at the same level as the git repository.

      3. Go into the datagrid-patterns directory, start the servers and wait until they've started:

        cd datagrid-patterns
        ./run-servers.sh

      4. Install Anaconda for Python 3; this is required to run the Jupyter notebook for plotting.

      5. Install Maven 3.

    Once the set up is complete, it's time to start with the individual demos.

    Both demos shown below work with the same application domain: rail transport systems. In this domain, we differentiate between physical stations, trains, station boards which are located in stations, and finally stops, which are individual entries in station boards.

    Analytics Demo
    The first demo is focused on how you can use Infinispan for doing offline analytics. In particular, this demo tries to answer the following question:

    Q. What is the time of the day when there is the biggest ratio of delayed trains?

    To answer this question, the Infinispan data grid will be loaded with 3 weeks' worth of data from station boards. Once the data is loaded, we will execute a remote server task which will use Infinispan Distributed Java Streams to calculate the two pieces of information required to answer the question: per hour, how many trains are going through the system, and out of those, how many are delayed.
    An important aspect to bear in mind about this server task is that it will only be executed in one of the nodes in the cluster; it does not matter which one. In turn, this node will ship the lambdas required to do the computation to each of the nodes so that they can be executed against their local data. The other nodes will reply with their results and the node where the server task was invoked will aggregate them.
    Then these results are sent back to the client, which in turn stores them as JSON in an intermediate cache. Once the results are in place, we will use a Jupyter notebook to read those results and plot them.
    Let's see these steps in action:
      1. First, we need to install the server tasks in the running servers above:
        cd datagrid-patterns/analytics
        mvn clean install package -am -pl analytics-server
        mvn wildfly:deploy -pl analytics-server
      2. Open the datagrid-patterns repo with your favourite IDE and run the delays.java.stream.InjectApp class located in the analytics/analytics-server project. This command will inject the data into the cache. On my environment, it takes between 1 and 2 minutes.
      3. With the data loaded, we need to run the remote task that will calculate the total number of trains per hour and how many of those are delayed. To do that, execute the delays.java.stream.AnalyticsApp class located in the analytics/analytics-server project from your IDE.
      4. You can verify that the results have been calculated by going to the following address:
        http://localhost:8180/rest/analytics-results/results
      5. With the results in place, it's time to start the Jupyter notebook:
        cd datagrid-patterns/analytics/analytics-jupyter
        ~/anaconda/bin/jupyter notebook
      6. Once the notebook opens, open the live-demo.ipynb notebook and execute each of the cells in order. You should end up seeing a plot like this:

    So, the answer to the question:
    Q. What is the time of the day when there is the biggest ratio of delayed trains?
    is 2am! That's because the last connecting trains of the day wait for each other to avoid leaving passengers stranded.
    Real Time Demo
    The second demo that we presented uses the same application domain as above, but this time we're trying to use our data grid as a way of storing the station board state of each station at a given point in time. So, the idea is to use Infinispan as an in-memory data grid for working with real-time data.
    So, what can we do with this type of data? In our demo, we will create a centralised dashboard of delayed trains around the country. To do that, we will take advantage of Infinispan's Continuous Query functionality, which allows us to find those station boards which contain stops that are delayed, and as new delayed trains appear they will be pushed to our dashboard.
    To run this demo, keep the same servers running as above and do the following:
    1. Run the delays.query.continuous.FxApp application located in the real-time project inside the datagrid-patterns demo. This app will inject some live station board data and will launch a JavaFX dashboard that shows delayed trains as they appear. It should look something like this:


    Conclusion
    This has been a summary of the demos that we ran in our talk at Devoxx France, with the intention of getting you running these demos as quickly as possible. The repository contains more detailed information about these demos. If there's anything unclear or any of the instructions above are not working, please let us know!
    Thanks to Emmanuel Bernard for partnering with me for this Devoxx France talk and for the continuous feedback while developing the demos. Thanks as well to Tristan Tarrant for the input in the demos and many thanks to all Devoxx France attendees who attended our talk :)
    A very special thanks to Alexandre Masselot whose "Swiss Transport in Real Time: Tribulations in the Big Data Stack" talk at Soft-Shake 2016 was the inspiration for these demos. @Alex, thanks a lot for sharing the demos and data with me and the rest of the community!!
    In just a few weeks I'll be at the Great Indian Developer Summit presenting these demos and much more! Stay tuned :)
    Cheers,
    Galder
    Infinispan will be present at Devoxx France from 5th to 7th April 2017. Emmanuel Bernard and I will be speaking about in-memory data grid use cases with some cool demos around rail transport (who doesn't love trains?).

    So, if you're at Devoxx France, or considering going there, and want to find out more about in-memory data grids and Infinispan, make sure you come to our talk!!

    Cheers,
    Galder
    The Infinispan Spark connector offers seamless integration between Apache Spark and Infinispan Servers.
    Apart from supporting Infinispan 9.0.0.Final and Spark 2.1.0, this release brings many usability improvements, and support for another major Spark API.

    Configuration changes
    The connector no longer uses a java.util.Properties object to hold the configuration; that is now the duty of org.infinispan.spark.config.ConnectorConfiguration, which is type safe and both Java and Scala friendly:
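    For example, a configuration can now be built fluently along these lines (a sketch; the setter names shown are assumptions, so check the connector documentation for the full list):

        import org.infinispan.spark.config.ConnectorConfiguration;

        // Fluent, type-safe configuration instead of a raw java.util.Properties object
        ConnectorConfiguration connectorConfig = new ConnectorConfiguration()
            .setServerList("server1:11222")   // Hot Rod server(s) to connect to
            .setCacheName("exampleCache");    // cache backing the RDD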


     Filtering by query String
    The previous version introduced the possibility of filtering an InfinispanRDD by providing a Query instance, which required going through the Query DSL, which in turn required a properly configured remote cache.

    It's now possible to simply use an Ickle query string:



    Improved support for Protocol Buffers
    Support for reading from a Cache with protobuf encoding was present in the previous connector version, but it's now also possible to write using protobuf encoding and to have protobuf schema registration handled automatically.

    To see this in practice, consider an arbitrary non-Infinispan based RDD<Integer, Hotel> where Hotel is given by:


    In order to write this RDD to Infinispan it's just a matter of doing:

    Internally the connector will trigger the auto-generation of the .proto file and message marshallers related to the configured entity(ies) and will handle registration of schemas in the server prior to writing.



    Splitter is now pluggable
    The Splitter is the interface responsible for creating one or more partitions from an Infinispan cache, each partition being related to one or more segments. The Infinispan Spark connector can now be created using a custom implementation of Splitter, allowing for different data partitioning strategies during job processing.


    Goodbye Scala 2.10
    Scala 2.10 support was removed; Scala 2.11 is currently the only supported version. Scala 2.12 support will follow https://issues.apache.org/jira/browse/SPARK-14220


     Streams with initial state
    It is possible to configure the InfinispanInputDStream with an extra boolean parameter to receive the current cache state as events.

     Dataset support
    The Infinispan Spark connector now ships with support for Spark's Dataset API, with support for pushing down predicates, similar to rdd.filterByQuery. The entry point of this API is the Spark session:


    To create an Infinispan-based DataFrame, the "infinispan" data source needs to be used, along with the usual connector configuration:

    From here it's possible to use the untyped API, for example:

    or execute SQL queries by setting a view:
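    Putting those pieces together, a Java sketch could look like the following (the option keys, cache name and Hotel columns are assumptions; only the "infinispan" format name comes from the connector):

        import org.apache.spark.sql.Dataset;
        import org.apache.spark.sql.Row;
        import org.apache.spark.sql.SparkSession;

        SparkSession spark = SparkSession.builder()
            .appName("infinispan-dataset-example")
            .master("local[*]")
            .getOrCreate();

        // DataFrame backed by an Infinispan cache; the option keys are illustrative
        // and normally come from the ConnectorConfiguration shown earlier.
        Dataset<Row> hotels = spark.read()
            .format("infinispan")
            .option("infinispan.client.hotrod.server_list", "server1:11222")
            .option("infinispan.rdd.cacheName", "hotels")
            .load();

        // Untyped API: the predicate is pushed down to the server as an Ickle filter
        hotels.filter(hotels.col("country").equalTo("Spain")).show();

        // Or register a view and run SQL over it
        hotels.createOrReplaceTempView("hotel");
        spark.sql("SELECT name, country FROM hotel WHERE stars >= 4").show();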

    In both cases above, the predicates and the required columns will be converted to an Infinispan Ickle filter, thus filtering data at the source rather than in the Spark processing phase.


    For the full list of changes see the release notes. For more information about the connector, the official documentation is the place to go. Also check the Twitter data processing sample, and to report bugs or request new features use the project JIRA.



    Infinispan 9 is the culmination of nearly a year of work. It is codenamed "Ruppaner" in honor of the city of Konstanz, where we designed many of the improvements we've made. Prost!

    Performance
    We decided it was time to revisit Infinispan's performance and scalability. So we went back to our internals design and we made a number of improvements. Infinispan 9.0 is faster than any previous release by quite a sizeable margin in a number of key aspects:

    • distributed writes, thanks to a new algorithm which reduces the number of RPCs required to write to the owners
    • distributed reads, which scale much better under load
    • replicated writes, also with better scalability under load
    • eviction, thanks to a new in-memory container
    • internal marshalling, which was completely rewritten

    We will have a post dedicated to benchmarks detailing the difference against previous versions and in various scenarios.

    Marshalling
    We've made several improvements in the cluster and persistent storage marshalling layer which have resulted in increased performance and smaller payloads. Also, the new marshalling layer makes JBoss Marshalling an optional component: it is only used when no Infinispan Externalizers (or AdvancedExternalizers) are available for a given type, relying on the standard JDK Serializable/Externalizable capabilities for those types.
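    For reference, keeping a type off that fallback path is a matter of registering an externalizer for it; a minimal sketch for a hypothetical Person class:

        import java.io.IOException;
        import java.io.ObjectInput;
        import java.io.ObjectOutput;
        import java.util.Collections;
        import java.util.Set;
        import org.infinispan.commons.marshall.AdvancedExternalizer;

        // Hypothetical user type we want marshalled without the JDK/JBoss Marshalling fallback
        class Person {
            final String name;
            Person(String name) { this.name = name; }
        }

        public class PersonExternalizer implements AdvancedExternalizer<Person> {
            @Override
            public void writeObject(ObjectOutput output, Person person) throws IOException {
                output.writeUTF(person.name);
            }

            @Override
            public Person readObject(ObjectInput input) throws IOException, ClassNotFoundException {
                return new Person(input.readUTF());
            }

            @Override
            public Set<Class<? extends Person>> getTypeClasses() {
                return Collections.<Class<? extends Person>>singleton(Person.class);
            }

            @Override
            public Integer getId() {
                return 2500;   // an application-chosen id that must not clash with other externalizers
            }
        }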

    Remote Hot Rod Clients
    We now ship alternate marshallers for remote clients based on Kryo and ProtoStuff.

    Additionally, the Hot Rod protocol now supports streaming operations for dealing with large objects.
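    For illustration, a sketch of what that looks like from the Java Hot Rod client (the remoteCacheManager, cache name and chunk variables are assumptions):

        import java.io.InputStream;
        import java.io.OutputStream;
        import org.infinispan.client.hotrod.RemoteCache;

        // Write a large value in chunks instead of materialising it in memory
        RemoteCache<String, byte[]> cache = remoteCacheManager.getCache("largeObjects");
        try (OutputStream out = cache.streaming().put("big-report")) {
            out.write(chunk);   // repeat for each chunk
        }

        // Stream it back
        try (InputStream in = cache.streaming().get("big-report")) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                // process buffer[0..read)
            }
        }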

    Off-Heap and data-container changes
    An In-Memory Data Grid likes to eat through your memory (because you want it to be fast!), but in the world of the JVM that is not ideal: that huge chunk of data gives Garbage Collectors a hard time when the heap goes into double-digit gigabyte territory. Long GC pauses can make individual nodes unresponsive, compromising the stability of your cluster.

    Infinispan 9 introduces an improved data container which can optionally store entries off-heap.

    Additionally, our bounded container has been replaced with Ben Manes' excellent Caffeine which provides much better performance. Check out Ben's benchmarks where he compares, among other things, against Infinispan's old bounded container.

    Configuration-wise, the previously separate concepts of eviction, store-as-binary and data-container have been merged into a single 'memory' configuration element.
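    For example, a bounded off-heap data container can be configured programmatically along these lines (a sketch of the 9.0 API; the size is arbitrary):

        import org.infinispan.configuration.cache.Configuration;
        import org.infinispan.configuration.cache.ConfigurationBuilder;
        import org.infinispan.configuration.cache.StorageType;
        import org.infinispan.eviction.EvictionType;

        // Bounded off-heap storage expressed through the single 'memory' element
        Configuration config = new ConfigurationBuilder()
            .memory()
                .storageType(StorageType.OFF_HEAP)
                .evictionType(EvictionType.COUNT)
                .size(1_000_000)   // keep at most one million entries
            .build();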

    Persistence
    The JDBC cache store received quite an overhaul:

    • The internal connection pool is now based on HikariCP, for improved performance
    • Writes will now use database-specific upsert functionality when available
    • Transactional writes to the cache translate to transactional writes to the database
    • The JdbcBinaryStore and JdbcMixedStore have been removed as detailed here

    We have also replaced the LevelDB cache store with the better-maintained and faster RocksDB cache store.

    Ickle, our new query language
    We decided it was time for Infinispan to have a proper query language, which would take full advantage of our query capabilities. We have therefore grafted Lucene's full-text operators on top of a subset of JP-QL to obtain Ickle. We have already started describing Ickle in a recent blog post. For a taste of Ickle, the following query shows how to combine a traditional boolean clause with a full-text term query:


    select transactionId, amount, description from com.acme.Transaction
    where amount > 10 and description : "coffee"
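    From Java, such a query can be created and run through the query factory (a sketch using the embedded query API; the Hot Rod client has an equivalent Search entry point, and cache is assumed to hold indexed com.acme.Transaction objects):

        import java.util.List;
        import org.infinispan.query.Search;
        import org.infinispan.query.dsl.Query;
        import org.infinispan.query.dsl.QueryFactory;

        // Build and run the Ickle query shown above
        QueryFactory queryFactory = Search.getQueryFactory(cache);
        Query query = queryFactory.create(
            "select transactionId, amount, description from com.acme.Transaction " +
            "where amount > 10 and description : \"coffee\"");
        List<Object[]> results = query.list();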

    Cloud integrations
    Infinispan continues to play nicely in cloud environments thanks to a number of improvements that have been made to discovery (such as KUBE_PING for Kubernetes/OpenShift), health probes and our pre-built Docker images.

    Multi-tenant server and SNI support
    Infinispan Server is now capable of exposing multiple cache containers through a single Hot Rod or REST endpoint. The selection of the container is performed via SNI. This allows you to have a single cluster serve all your applications while maintaining each one's data isolated.

    Administration Console
    The administration console has been completely rewritten in a more modular fashion using TypeScript to allow for greater extensibility and ease of maintenance. In addition to this refactor, the console now supports the following:

    • Stateless views
    • HTTP Digest Authentication
    • Management of individual and clustered Standalone server instances
    • Internet Explorer

    Documentation overhaul
    Our documentation has been completely overhauled with entire chapters being added or rewritten for readability and consistency.

    What's coming
    We will be blogging in more detail about some of the things above, so watch out for more content coming soon !


    We've already started working on Infinispan 9.1 which will bring a number of new features and improvements, such as clustered counters, consistency checker with merge policies, a new distributed cache for even better write performance, and more.

    Get it now !
    Head over to our download page to get binaries, sources, clients, etc.

    Please join us to let us know what you think about this release.


    The Infinispan team
    Dear Infinispan users, we thought CR3 was going to be the last candidate release before Final... but we were mistaken! The reason for yet another CR is that we decided to make some changes which affect some default behaviours:
    • enabling optimistic transactions with repeatable read now turns on write-skew by default
    • retrieving an already configured cache by passing in a template doesn't redefine that cache's configuration
    Other important changes:
    • big improvements to the client/server rolling upgrade process
    • allow indexes to be stored in off-heap caches
    • lots of bug fixes
    For the full list of changes check the release notes, download the 9.0.0.CR4 release and let us know if you have any questions or suggestions.

    Cheers,
    The Infinispan team
    In the latest 9.0.0.CR3 version, the Infinispan REST endpoint is secured by default, and in order to facilitate remote access, the Docker image has some changes related to the security.

    The image now creates a default user login upon start; this user can be changed via environment variables if desired:

    You can check if the settings are in place by manipulating data via REST. Trying to do a curl without credentials should lead to a 401 response:

    So make sure to always include the credentials from now on when interacting with the Rest endpoint! If using curl, this is the syntax:

    And that's all for this post. To find out more about the Infinispan Docker image, check the documentation, give it a try and let us know if you have any issues or suggestions!



    In one of the previous blog posts we wrote about different configuration options for our Docker image. Now we have gone a step further, adding auto-configuration for memory and CPU constraints.

    Before we dig in...
    Setting memory and CPU constraints on containers is a very popular technique, especially for public cloud offerings (such as OpenShift). Behind the scenes everything works by adding additional Docker settings to the containers. There are two very popular switches: --memory (which sets the amount of available memory) and --cpu-quota (which throttles CPU usage).

    Now here comes the best part... the JDK has no idea about those settings! We will probably need to wait until JDK 9 to get full cgroups support.

    What can we do about it?
    The answer is very simple: we need to tell the JDK how much memory is available (at least by setting -Xmx) and how many CPUs are available (by setting -XX:ParallelGCThreads, -XX:ConcGCThreads and -Djava.util.concurrent.ForkJoinPool.common.parallelism).

    And we have some very good news! We already did it for you!

    Let's test it out!
    At first you need to pull our latest Docker image:

    Then run it with CPU and memory limits using the following command:

    Note that the JAVA_OPTS variable was overridden. Let's have a look at what happened:
    • -Xms64m -Xmx350m - it is always a good idea to set -Xms inside a Docker container. Next we set -Xmx to 70% of the available memory.
    • -XX:ParallelGCThreads=6 -XX:ConcGCThreads=6 -Djava.util.concurrent.ForkJoinPool.common.parallelism=6 - the next thing is to account for the CPU throttling, as explained above.
    There might be some cases where you wouldn't like to set those properties automatically. In that case, just pass the -n switch to the starter script:


    More reading
    If this topic sounds interesting to you, do not forget to have a look at those links:
    • A great series of articles about memory and CPU in the containers by Andrew Dinn [1][2]
    • A practical implementation by Fabric8 Team [3]
    • A great article about memory limits by Rafael Benevides [4]
    • OpenShift guidelines for creating Docker images [5]
    Dear Infinispan community,

    as announced in a previous post, starting from version 8.1.0 the C++/C# clients can also receive and process Infinispan events.

    Here's an example of C++ event listener usage that, with a good dose of imagination, pretends to be a customer behavior tracking system for our store chain (don't take this too seriously; we're just trying to add some fiction).

    As a first requirement, our tracking system will record every single purchase made in our stores. How many stores do we have? 1, 100, millions? It doesn't matter: we're backed by an Infinispan data grid.
    This is version 0.x and hence the checker must use the keyboard to enter all the needed information.

    As you can see, our entry key is a concatenation of the product name and the timestamp, and the entry value is an unstructured string; maybe too simple, but it works for now.
    It seems we are at a good point: we have the data and we can do analytics on it. So far so good, but now our boss makes a new request: he wants a runtime monitor of the sales performance. That's a perfect request to fulfil with an event listener: the monitor application will be a Hot Rod C++ client that registers a client listener on the server, receives the data flow and shows it on the boss's laptop.
    A client listener, once registered on the server, can receive events related to the creation, modification, deletion and expiration of cache entries; in our example only the creation and expiration events are processed (expiration events can be useful for some moving-average statistics). Below is a snippet of code that creates and registers a listener that writes the event keys to stdout.

    You can get this quickstart here [1]. On startup a multiple-choice menu is shown with all the available operations. Running several instances, you can act as the checker (data entry) or the boss (installing the listener and seeing the flow of events).


    Filters
    Again, so far so good, but then the marketing department asks for support to do targeted advertising, like soliciting customers that bought product Y to buy product X.
    Let's suppose that X="harmonica" and Y="hiking boots" (it's a well-known fact of life that in the high mountains you feel the desire to play a harmonica).

    To do that we register another listener on the server, but this time we're not interested in the whole flow of purchase data: to run our marketing campaign, we are only interested in cache entries whose key starts with "hiking". The Infinispan server can filter out events for us if we pass, in the add client listener operation, the name of the desired filter along with any configuration arguments.

    Filters are Java classes that must be deployed into the Infinispan server (more here [2]).
    and converters
    Predefined events contain very little information: basically the event type and the entry key; this is to avoid flooding the network by spreading around very long entry values. Users can overcome this limitation using a converter, that is, a Java class deployed into the server that can create custom events containing all the data needed by the application.
    As in the previous case, we pass the name of the converter and any configuration arguments into the add operation.

    That's all, guys! Let us know your feedback: do you like it? Could it be better? Tell us how it can be improved by creating an issue [3], or fork and improve it yourself [4]!

    Thanks for reading and enjoy! The Infinispan Team
    [1] https://github.com/rigazilla/infinispan-simple-tutorials/tree/new_event_tutorial/c%2B%2B/events
    [2] http://blog.infinispan.org/2014/08/hot-rod-remote-events-1-getting-started.html
    [2] http://blog.infinispan.org/2014/08/hot-rod-remote-events-2-filtering-events.html
    [2] http://blog.infinispan.org/2014/09/hot-rod-remote-events-3-customizing.html
    [3] https://issues.jboss.org/projects/HRCPP/issues
    [4] https://github.com/infinispan/cpp-client
    I'm happy to announce a new release (the first feature-complete!) of Infinispan Spring Boot Starters.

    We finally added new properties for managing Hot Rod client mode in application.properties, as well as automatic Spring Cache support. We also fixed a couple of smaller issues.

    For the complete changelog, please refer to the release page.

    The artifacts should be available in Maven Central as soon as the sync completes. In the meantime grab them from JBoss Repository.
    I'm happy to announce a new release of KUBE_PING JGroups protocol.

    Since this is a minor maintenance release, there are no groundbreaking changes, but we fixed a couple of issues that prevented our users from using JGroups 3.6.x and KUBE_PING 0.9.1.

    Have a look at the release page to learn more details.

    The artifacts should be available in Maven Central as soon as the sync completes. In the meantime grab them from JBoss Repository.
    Dears,

    we're pleased to announce that the 8.1.0.CR2 release of the C++/C# clients is out!

    Check the release notes; the focus was on bug fixes this round, so you have the opportunity to download the cleanest code so far!

    Spring cleaning will continue in the next release iteration. Stay tuned and, if you like, take part by reporting new issues here!

    Enjoy!

    The Infinispan Team
    Infinispan 9 has introduced many improvements to its marshalling codebase in order to improve performance and allow for greater flexibility. Consequently, data marshalled and persisted by Infinispan 8.x is no longer compatible with Infinispan 9.x. Furthermore, as part of our ongoing efforts to improve the cache stores provided by Infinispan, we have removed both the JdbcBinaryStore and JdbcMixedStore in Infinispan 9.0.

    To assist users migrating from Infinispan 8.x, we have created the JDBC Migrator that enables existing JDBC stores to be migrated to Infinispan 9's JdbcStringBasedStore.


    No More Binary Keyed Stores!
    The original intention of the JdbcBinaryStore was to provide greater flexibility over the JdbcStringBasedStore as it did not require a Key2StringMapper implementation. This was achieved by utilising the hashcode of an entry's key for a table's ID column entry. However, due to the possibility of hash collisions all entries had to be placed inside a Bucket object which was then serialised and inserted into the underlying table. Utilising buckets in this manner was far from optimal as each read/write to the underlying table required an existing bucket for a given hash to be retrieved, deserialised, updated, serialised and then re-inserted back into the database.


    Introducing JDBC Migrator
    The JDBCMigrator is a standalone application that takes a single argument, the path to a .properties file which must contain the configuration properties for both the source and target stores.  To use the migrator you need the infinispan-tools-9.x.jar, as well as the jdbc drivers required by your source and target databases, on your classpath.

    An example maven pom that launches the migrator via mvn exec:java is presented below:


    Migration Examples
    Below are several example .properties files used for migrating various stores, however an exhaustive list of all available properties can be found in the Infinispan user guide.  
    Before attempting to migrate your existing stores please ensure you have backed up your database!

    8.x JdbcBinaryStore -> 9.x JdbcStringBasedStore
    The most important property to set in this example is "source.marshaller.type=LEGACY" as this instructs the migrator to utilise the Infinispan 8.x marshaller to unmarshall data stored in your existing DB tables. 
    If you specified custom AdvancedExternalizer implementations in your Infinispan 8.x configuration, then it is necessary for you to specify these in the migrator configuration and ensure that they are available on the migrator's classpath. To specify the AdvancedExternalizers to load, it is necessary to define the "source.marshaller.externalizers" property with a comma-separated list of class names. If an ID was explicitly set for your externalizer, then it is possible to prepend the externalizer's class name with "<id>:" to ensure the ID is respected by the marshaller.


    TwoWayKey2StringMapper Migration
    As well as using the JDBC Migrator to migrate from Infinispan 8.x, it is also possible to utilise it to migrate from one DB dialect to another or to migrate from one TwoWayKey2StringMapper implementation to another. 


    Summary
    Infinispan 9 stores are no longer compatible with Infinispan 8.x stores due to internal marshalling changes. Furthermore, the JdbcBinary and JdbcMixed stores have been removed due to their poor performance characteristics.  To aid users in their transition from Infinispan 8.x we have created the JDBC Migrator to enable users to migrate their existing JDBC stores.

    If you're a user of the JDBC stores and have any feedback on the latest changes, let us know via the forum, issue tracker or the #infinispan channel on Freenode. 
    Dear users, the last release candidate for Infinispan 9 is out!

    This milestone contains mostly bug fixes and documentation improvements ahead of 9.0.0.Final. Noteworthy changes:
    • Kubernetes Rolling Updates are fully supported
    • Infinispan Rolling Upgrades on Kubernetes are fully supported
    • Library updates: JGroups 4.0.1, Protostream 4.0.0.Alpha9, Log4j2 2.8.1
    • Deadlock detection hasn't kept up with the improvements to our locking algorithm and has been removed.
    • Support for authentication in the REST endpoint
    For the full list of changes check the release notes, download the 9.0.0.CR3 release and let us know if you have any questions or suggestions.

    Cheers,
    The Infinispan team