JasperReports Server supports partial session replication and provides the beans to enable it. The app server usually manages the user session for a web application and is responsible for the policies that allow the session to be replicated in a cluster environment. However, you must also configure parts of JasperReports Server, including the Ehcache component.
To configure JasperReports Server nodes for partial session replication:
1. Make sure that the subnet containing all the cluster nodes is configured to allow IP multicasting. The app server usually requires this for session replication, and JasperReports Server's Ehcache component also requires it in a cluster environment. In some of the alternative configurations described in step 5, Ehcache relies on other services and does not require IP multicasting.
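If you are unsure whether multicast works between your nodes, one quick way to test it is to run a small standalone program on two or more of them and confirm that each node sees the messages sent by the others. The following Java sketch is not part of JasperReports Server; the group address and port (228.0.0.4:45564) simply mirror the Tomcat membership example in step 3, and you can substitute any multicast address available on your subnet.

import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

public class MulticastCheck {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("228.0.0.4");
        int port = 45564;
        try (MulticastSocket socket = new MulticastSocket(port)) {
            socket.joinGroup(group);
            // Announce this node to the multicast group.
            byte[] hello = ("hello from " + InetAddress.getLocalHost()).getBytes("UTF-8");
            socket.send(new DatagramPacket(hello, hello.length, group, port));
            // Print whatever the other nodes announce.
            byte[] buffer = new byte[256];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                socket.receive(packet);
                System.out.println(new String(packet.getData(), 0, packet.getLength(), "UTF-8"));
            }
        }
    }
}

If each node only ever prints its own message, multicast traffic is being blocked, and both Tomcat membership discovery and Ehcache RMI peer discovery will fail.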
2. On each node of the cluster, edit the file <web-app>/WEB-INF/web.xml to make the following changes:
a. Locate the listener of class RequestContextListener and replace it with the listener of class TolerantRequestContextListener. The new listener class is provided in a comment; uncomment it and comment out the default Spring listener as follows:
<!-- Replace the default Spring listener with the Tolerant listener to enable replication -->
<listener-class>com.jaspersoft.jasperserver.core.util.TolerantRequestContextListener</listener-class>
<!--listener-class>org.springframework.web.context.request.RequestContextListener</listener-class-->
b. Locate the ClusterFilter that is given in comments and uncomment it as follows:
<filter>
    <filter-name>ClusterFilter</filter-name>
    <filter-class>com.jaspersoft.jasperserver.war.TolerantSessionFilter</filter-class>
</filter>
c. Locate the corresponding mapping for the ClusterFilter and uncomment it as well. You must also uncomment the <distributable/> element:
<filter-mapping>
    <filter-name>ClusterFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

<distributable/>
3. On each node of the cluster, enable session replication in your app server or web container. For example, to enable session replication on Apache Tomcat 6.x, edit the file <tomcat>/conf/server.xml as follows.
Add the Cluster definition within the <Engine name="Catalina" defaultHost="localhost"> configuration. In this example, 123.45.6.701 is the IP address of the node being configured. This example uses Delta Manager, but you can also use Backup Manager:
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="8">
    <Manager className="org.apache.catalina.ha.session.DeltaManager"
             expireSessionsOnShutdown="false"
             notifyListenersOnReplication="true"/>
    <Channel className="org.apache.catalina.tribes.group.GroupChannel">
        <Membership className="org.apache.catalina.tribes.membership.McastService"
                    address="228.0.0.4"
                    port="45564"
                    frequency="500"
                    dropTime="3000"/>
        <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
            <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
        </Sender>
        <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                  address="123.45.6.701"
                  port="4000"
                  autoBind="100"
                  selectorTimeout="5000"
                  maxThreads="6"/>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
    </Channel>
    <!--<Valve className="org.apache.catalina.ha.tcp.ForceReplicationValve"/>-->
    <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
    <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>

    <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
    <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
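Note that Tomcat's DeltaManager (and BackupManager) can replicate a session attribute only if the stored object, and everything it references, implements java.io.Serializable. If your deployment adds custom objects to the HTTP session, for example from a servlet filter or a customization, they should follow the pattern in this sketch; the class name and fields here are purely illustrative and not part of JasperReports Server.

import java.io.Serializable;

// Hypothetical example of an object stored in the HTTP session.
// Non-serializable attributes cannot be replicated and typically cause
// errors when the session is serialized for another node.
public class UserPreferences implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String theme;
    private final int pageSize;

    public UserPreferences(String theme, int pageSize) {
        this.theme = theme;
        this.pageSize = pageSize;
    }

    public String getTheme() { return theme; }
    public int getPageSize() { return pageSize; }
}

// Stored in the session with, for example:
//   request.getSession().setAttribute("prefs", new UserPreferences("dark", 25));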
4. On each node, edit the <web-app>/WEB-INF/ehcache.xml file to uncomment the following section:
<cache name="attributeCache" ...>
    <cacheEventListenerFactory
        class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
        properties="replicateAsynchronously=true, replicatePuts=false,
                    replicateUpdates=true, replicateUpdatesViaCopy=false,
                    replicateRemovals=true"/>
    <bootstrapCacheLoaderFactory
        class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory"
        properties="bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000"/>
</cache>
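If you want to verify that the edited ehcache.xml still parses and that attributeCache is defined, a small standalone check such as the following can help. This is only a sketch: it assumes the Ehcache 2.x (net.sf.ehcache) API that JasperReports Server bundles, and the configuration path is an example you must adjust for your deployment.

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;

public class EhcacheConfigCheck {
    public static void main(String[] args) {
        // Path to the edited file; pass your own path as the first argument.
        String configPath = args.length > 0 ? args[0] : "ehcache.xml";

        // Creating the CacheManager fails fast if the edited XML is malformed.
        CacheManager manager = CacheManager.newInstance(configPath);
        try {
            Cache cache = manager.getCache("attributeCache");
            System.out.println(cache != null
                    ? "attributeCache is defined and the configuration parsed correctly"
                    : "attributeCache is not defined in " + configPath);
        } finally {
            manager.shutdown();
        }
    }
}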
5. JasperReports Server's internal Ehcache must also be configured so that it can be distributed among all nodes. There are several distribution mechanisms available:
• RMI – Remote Method Invocation is the simplest and fastest cache distribution mechanism. Use RMI distribution if your cluster runs on your own real or virtual computers, as long as their addresses will not change. You cannot use RMI distribution if your cluster is hosted in a cloud environment such as Amazon EC2, because the IP addresses of the nodes may change. RMI distribution relies on IP multicast, which you must set up as described in step 1.
• JMS – Java Message Service can provide cache distribution for nodes in a cloud where IP addresses may change. Jaspersoft provides a configuration for using the Apache ActiveMQ JMS server. You must first install and configure ActiveMQ on one of the computers in your cluster.
• Amazon SNS/SQS – Simple Notification Service and Simple Queue Service can provide cache distribution for nodes in Amazon Web Services (AWS). Using this option may incur additional costs as Amazon charges customers per API call. Before you can use SNS, you must create an SNS topic from your AWS Console. Amazon SNS/SQS support is experimental; the ActiveMQ JMS option also works for AWS and is the preferred method for Ehcache distribution.
On each node, edit both <web-app>/WEB-INF/ehcache_hibernate.xml and <web-app>/WEB-INF/classes/ehcache_hibernate.xml files as described below for your chosen distribution mechanism. The files are identical and you should make the same changes in both:
• For all distribution mechanisms, comment out the section marked "NO CLUSTERING" as follows:
<!-- ********************* NO CLUSTERING ******************** -->
<!-- START
<cache name="defaultRepoCache"
       maxElementsInMemory="10000"
       eternal="false"
       overflowToDisk="false"
       timeToIdleSeconds="36000"
       timeToLiveSeconds="180000"
       diskPersistent="false"
       diskExpiryThreadIntervalSeconds="120"
       statistics="true">
</cache>

<cache name="aclCache"
       maxElementsInMemory="10000"
       eternal="false"
       overflowToDisk="false"
       timeToIdleSeconds="360000"
       timeToLiveSeconds="720000"
       diskPersistent="false">
</cache>
END -->
<!-- ******************* END of NO CLUSTERING ******************* -->
• For RMI distribution, uncomment the RMI section on every node and make sure the properties are correct for your IP multicast. You must also add the hostName property with the value of the node's real IP address:
<!-- ************************** RMI ************************* -->
<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
    properties="peerDiscovery=automatic,multicastGroupAddress=228.0.0.1,
                multicastGroupPort=4446,timeToLive=1"/>
<cacheManagerPeerListenerFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
    properties="hostName=123.45.6.701,port=40011,remoteObjectPort=40012,
                socketTimeoutMillis=120000"/>
...
<!-- *********************** END of RMI *********************** -->
Add the hostName property to the cacheManagerPeerListenerFactory, right before port=40011. This specifies the real IP address of the host, as shown in the example above.
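Because problems with RMI replication often show up only in the logs, you may also want to confirm from each node that the other nodes' port and remoteObjectPort are reachable. The following sketch is not part of JasperReports Server; the host and ports mirror the example values above and should be replaced with your own.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class RmiPortCheck {
    public static void main(String[] args) {
        // Peer node and the ports from its cacheManagerPeerListenerFactory
        // (port and remoteObjectPort); example values only.
        String peer = "123.45.6.701";
        int[] ports = {40011, 40012};
        for (int port : ports) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(peer, port), 3000);
                System.out.println(peer + ":" + port + " is reachable");
            } catch (IOException e) {
                System.out.println(peer + ":" + port + " is NOT reachable: " + e.getMessage());
            }
        }
    }
}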
• For JMS distribution, install the JMS server on one computer in your cluster. Then uncomment the JMS section on every node and set the providerURL properties to the address of your JMS server, in this example 123.45.6.701. There are 5 providerURL properties to set in all; only the first one is shown below:
<!-- ************************** JMS ************************* -->
<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.jms.JMSCacheManagerPeerProviderFactory"
    properties="initialContextFactoryName=com.jaspersoft.jasperserver.api.engine.replication.JRSActiveMQInitialContextFactory,
                providerURL=tcp://123.45.6.701:61616,
                replicationTopicConnectionFactoryBindingName=topicConnectionFactory,
                replicationTopicBindingName=ehcache,
                getQueueConnectionFactoryBindingName=queueConnectionFactory,
                getQueueBindingName=ehcacheQueue,
                topicConnectionFactoryBindingName=topicConnectionFactory,
                topicBindingName=ehcache"
    propertySeparator=","/>
...
<!-- *********************** END of JMS *********************** -->
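Before restarting the nodes, it can be useful to confirm that each node can open a JMS connection to the broker named in providerURL. The following sketch is not part of JasperReports Server and requires the ActiveMQ client libraries on the classpath; the broker URL mirrors the example above and should be replaced with your own.

import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

public class JmsBrokerCheck {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://123.45.6.701:61616");
        // Creating and starting the connection fails if the broker is unreachable.
        Connection connection = factory.createConnection();
        try {
            connection.start();
            System.out.println("Connected to the ActiveMQ broker");
        } finally {
            connection.close();
        }
    }
}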
• For Amazon SNS/SQS, uncomment the AWS section on every node. As noted above, this option may incur additional costs because Amazon charges per API call, and its support is experimental; the ActiveMQ JMS distribution also works for AWS and is the preferred method for Ehcache distribution.
<!-- ************************** AWS ************************* -->
<cacheManagerPeerProviderFactory
    class="com.jaspersoft.jasperserver.api.engine.replication.JRSNevadoCacheManagerPeerProviderFactory"
    properties=""
    propertySeparator=","/>
...
<!-- *********************** END of AWS *********************** -->
Before you can use SNS, you must create an SNS topic from your AWS Console. Then edit the <web-app>/WEB-INF/classes/aws.properties file to specify your AWS credentials, the SNS topic (same for every node), the SQS queue (unique for each node), and the client ID (unique for each node). You can also specify the desired number of queue reading threads (10 is recommended).
aws.accessKey=AKILRCPWVYTY3MPDPS6A
aws.secretKey=tSlV/scTtHUfe6JggTO56lkeZFb+0DEBDyUWuQMe
aws.queuesuffix=_aws3
aws.topicsuffix=_aws3
aws.nevadoTopicName=ehcacheJMSTopic
aws.nevadoQueueName=ehcacheAWS3
aws.clientID=AWS3Server
aws.threadCount=10
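If you prefer to script the topic creation instead of using the AWS Console, the same topic can also be created with the AWS SDK for Java (v1), as in the sketch below. This is only an illustration: credentials and region come from the SDK's default provider chain, and the topic name matches the aws.properties example above.

import com.amazonaws.services.sns.AmazonSNS;
import com.amazonaws.services.sns.AmazonSNSClientBuilder;

public class CreateEhcacheTopic {
    public static void main(String[] args) {
        // Uses the default credentials and region from the environment.
        AmazonSNS sns = AmazonSNSClientBuilder.defaultClient();
        // createTopic is idempotent: it returns the ARN of an existing topic
        // with the same name instead of failing.
        String topicArn = sns.createTopic("ehcacheJMSTopic").getTopicArn();
        System.out.println("SNS topic ARN: " + topicArn);
    }
}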
6. On each node, edit the file <web-app>/META-INF/context.xml. Locate the Manager pathname element near the end and comment it out as follows:
<!-- <Manager pathname="" /> -->
7. Restart or redeploy JasperReports Server on each node.