Ehcache RMI cluster replication with manual peer discovery

A while back I needed to test an ehcache cluster using manually configured peer discovery. I followed the configurations from various blog posts, but when I tested the result, replication simply would not happen. After several days of struggling, I realised that almost every blog out there repeats essentially the same configuration and omits the same crucial points: none of them mention that RMI replication needs two ports, that the firewall must allow both of them, or that the remoteObjectPort parameter of cacheManagerPeerListenerFactory even exists. If remoteObjectPort is not specified, a random free port is assigned at startup; if the firewall does not allow that random port, the connection is blocked and replication fails. Presumably those authors had their firewalls disabled or fully open.
After digging through the official configuration documentation and testing, I found the cause and the fix, and finally got cluster replication working. Here is what I learned:

Ehcache supports two ways of configuring RMI-based cluster replication: manual configuration and automatic discovery. Since automatic discovery involves multicast and therefore server and subnet networking issues, only manual configuration is tested here.
Two identical instances of the service are deployed on two machines:

Notes:
(1) server1 and server2 below are two different hosts, so their web ports, RMI registry ports (which listen for replication requests), and RMI remote-object ports (which carry the replicated data) may be identical or different across the two hosts. For this demonstration they are deliberately not identical.
(2) If server1 and server2 run on the same host, their web ports, RMI registry ports, and RMI remote-object ports must all be different.
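If both instances were to run on the same host, the two listener configurations might look like this sketch (all port numbers here are illustrative, and each element would live in its own instance's ehcache.xml):

```xml
<!-- instance 1: RMI registry port 40001, data-transfer port 40004 -->
<cacheManagerPeerListenerFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
    properties="hostName=localhost,port=40001,remoteObjectPort=40004"/>

<!-- instance 2: RMI registry port 40002, data-transfer port 40003 -->
<cacheManagerPeerListenerFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
    properties="hostName=localhost,port=40002,remoteObjectPort=40003"/>
```

All four ports are distinct, so the two instances never collide.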

I. ehcache.xml configuration on server1:
1. First configure the cacheManagerPeerProviderFactory. It tells server1 (211.87.227.223) which peer caches to replicate with; here that is the ehcache named UserCache on port 40002 of server2 (211.87.227.226). peerDiscovery=manual selects manual discovery:
<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
    properties="peerDiscovery=manual,rmiUrls=//211.87.227.226:40002/UserCache"
/>

The cacheManagerPeerProviderFactory lists the peers in the cluster, other than the local node itself, that participate in replication. To replicate with several caches in the cluster, add multiple addresses to rmiUrls, separated by a pipe character ("|"):
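For example (the OrderCache name below is hypothetical, added only to illustrate the pipe separator), replicating two caches on the same remote peer could look like this:

```xml
<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
    properties="peerDiscovery=manual,
                rmiUrls=//211.87.227.226:40002/UserCache|//211.87.227.226:40002/OrderCache"
    propertySeparator=","/>
```

Each rmiUrl names one cache on one peer, so a node replicating n caches with m peers lists n×m URLs.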

The official documentation's explanation:

2. Next, configure the cacheManagerPeerListenerFactory. It listens for replication messages sent by other nodes in the cluster; simply put, it publishes the address at which server1's (211.87.227.223) ehcache can be reached (at this point it is not yet known which cache a peer will ask to synchronise):

<cacheManagerPeerListenerFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
    properties="hostName=localhost,port=40001,remoteObjectPort=40004"
/>

This sets 40001 as the port on which the host (211.87.227.223) listens for replication requests from other cluster nodes, and 40004 as the port on which it receives the replicated data itself.

Note: this is the big trap, and the point most blog tutorials never mention. I had not set remoteObjectPort, and that is why replication kept failing. RMI replication actually needs two ports. The first is the fixed RMI registry port, i.e. the port parameter of cacheManagerPeerListenerFactory, on which replication requests arrive; it must be specified, otherwise peers cannot locate the cache. The second is the port over which the replicated data is transferred, i.e. the remoteObjectPort parameter of cacheManagerPeerListenerFactory; if it is not specified, a random free port is assigned every time the service starts. So when remoteObjectPort is unset while the host's firewall is running, and the randomly assigned data port is not open in the firewall, replication fails and connection errors appear in the logs. Therefore:
(1) If all the caches to be synchronised live on the same host, no firewall is involved and remoteObjectPort can be left unconfigured.
(2) If the caches are on different hosts and remoteObjectPort is not configured, replication can only succeed by disabling the firewall or opening all local ports (which defeats the purpose of having a firewall).
(3) If the caches are on different hosts and the firewall must stay effective, remoteObjectPort has to be configured so that the data-transfer port is fixed; then only the two RMI ports, port and remoteObjectPort, need to be opened in the host's firewall for replication to pass through.
Accordingly, open ports 40001 and 40004 in the firewall on server1 (211.87.227.223), for example with firewalld: `firewall-cmd --permanent --add-port=40001/tcp --add-port=40004/tcp` followed by `firewall-cmd --reload`.
The official documentation's explanation:

3. Finally, configure the cache element itself. Inside it, add a cacheEventListenerFactory to register the replication listener that handles cache events such as put, remove, update and expire:

<cacheEventListenerFactory
    class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
    properties="replicateAsynchronously=true,
                replicatePuts=true,
                replicateUpdates=true,
                replicateUpdatesViaCopy=true,
                replicateRemovals=true"/>

replicatePuts=true | false – whether an element newly added to the cache is replicated to the other peers. Default is true.
replicateUpdates=true | false – whether an element already in the cache is replicated when it is overwritten. Default is true.
replicateRemovals=true | false – whether element removals are replicated. Default is true.
replicateAsynchronously=true | false – whether replication happens asynchronously. Default is true. Set it to false when consistency is urgent; when timing requirements are loose, asynchronous mode greatly improves performance, since it returns immediately and can batch its messages.
replicatePutsViaCopy=true | false – whether a newly added element is copied to the other caches (true means copy). Default is true.
replicateUpdatesViaCopy=true | false – whether an updated object is copied to all nodes, or only an invalidation message is sent so the peer drops its copy and recomputes or reloads it on next use. Default is true. Since copying objects is expensive, involves locking, and the peer may never need the object, setting this to false is often advisable.
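As a sketch of the invalidation style described by the last point (the property values shown are illustrative, not from the article's test setup), a listener that propagates invalidations instead of copies on update might be configured as:

```xml
<cacheEventListenerFactory
    class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
    properties="replicateAsynchronously=true,
                replicatePuts=true,
                replicateUpdates=true,
                replicateUpdatesViaCopy=false,
                replicateRemovals=true"/>
```

With replicateUpdatesViaCopy=false, an update causes peers to discard their stale copy rather than receive the new object over the wire.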

In my tests it worked best to set all of these to true. A shorthand that sets everything to true is:
<cacheEventListenerFactory class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"/>

Immediately after the cacheEventListenerFactory element, add a bootstrapCacheLoaderFactory element, which pre-populates this cache from the cluster when it is initialised:

<bootstrapCacheLoaderFactory class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory"/>

The official documentation's explanation:

That completes server1.
A few points to note:
(1) The cacheManagerPeerProviderFactory and cacheManagerPeerListenerFactory elements must appear after the diskStore element, otherwise an error is raised.
(2) The cacheEventListenerFactory and bootstrapCacheLoaderFactory elements must be placed inside the <cache> element.
(3) In a clustered environment, the keys and values of every ehcache cached object must be serializable, i.e. implement java.io.Serializable; this requirement applies to the other clustering modes as well.
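Putting the element-placement rules above together, a minimal ehcache.xml skeleton for server1 might look like the following (the UserCache name, hosts and ports come from this article; the cache tuning attributes are illustrative):

```xml
<ehcache>
    <diskStore path="java.io.tmpdir"/>

    <!-- peer provider and listener must come after diskStore -->
    <cacheManagerPeerProviderFactory
        class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
        properties="peerDiscovery=manual,rmiUrls=//211.87.227.226:40002/UserCache"/>
    <cacheManagerPeerListenerFactory
        class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
        properties="hostName=localhost,port=40001,remoteObjectPort=40004"/>

    <cache name="UserCache"
           maxEntriesLocalHeap="1000"
           eternal="false"
           timeToIdleSeconds="300"
           timeToLiveSeconds="600">
        <!-- both replication sub-elements go inside <cache> -->
        <cacheEventListenerFactory
            class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"/>
        <bootstrapCacheLoaderFactory
            class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory"/>
    </cache>
</ehcache>
```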

II. ehcache.xml configuration on server2:
Server2's configuration file is identical to server1's except for the cacheManagerPeerProviderFactory and cacheManagerPeerListenerFactory elements; simply swap the corresponding addresses and port numbers in those two elements:

<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
    properties="peerDiscovery=manual,rmiUrls=//211.87.227.223:40001/UserCache"
/>

<cacheManagerPeerListenerFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
    properties="hostName=localhost,port=40002,remoteObjectPort=40003"
/>

Likewise, open ports 40002 and 40003 in the firewall on server2 (211.87.227.226).

With that, server1 and server2 are both configured. Package the two instances, deploy them to the two servers, start them, and a quick test shows the caches synchronising.

All in all, manual RMI cluster replication in ehcache is not complicated at all; the trap is simply that most online tutorials never mention setting remoteObjectPort in cacheManagerPeerListenerFactory, nor the firewall ports that must be opened.

Note: my development environment was Spring Boot 2.0.5.RELEASE, ehcache 2.10.5 and Java 1.8.0_131; both servers sit on the university intranet.

As for the multicast-based automatic discovery mode, I have not tested it yet; I will look into it when I find the time.

This is my first blog post, so please bear with it. Corrections, questions, new insights and experiences are all welcome in the comments.

Appendix 1: my test ehcache.xml configuration file

<?xml version="1.0" encoding="UTF-8"?>

<cacheManagerPeerListenerFactory
	class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
	properties="hostName=localhost,port=40002,remoteObjectPort=40003"
/>
<!-- <cacheManagerPeerProviderFactory 
    class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory" 
    properties="peerDiscovery=manual,rmiUrls=//211.87.227.226:40002/UserCache"
/>
<cacheManagerPeerListenerFactory
	class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
	properties="hostName=localhost,port=40001,remoteObjectPort=40004"
/> -->







Appendix 2: the official ehcache.xml configuration file. It is very long; you can also view it at https://www.cnblogs.com/zno2/p/4856865.html

<?xml version="1.0" encoding="UTF-8"?>

<!--
CacheManager Configuration

An ehcache.xml corresponds to a single CacheManager.

See instructions below or the ehcache schema (ehcache.xsd) on how to configure.

System property tokens can be specified in this file which are replaced when the configuration
is loaded. For example multicastGroupPort=${multicastGroupPort} can be replaced with the System
property either from an environment variable or a system property specified with a command line
switch such as -DmulticastGroupPort=4446. Another example, useful for Terracotta server based
deployments is <terracottaConfig url="${serverAndPort}"/> and specify a command line switch
of -DserverAndPort=server36:9510

The attributes of <ehcache> are:

  • name - an optional name for the CacheManager. The name is optional and primarily used
    for documentation or to distinguish Terracotta clustered cache state. With Terracotta
    clustered caches, a combination of CacheManager name and cache name uniquely identify a
    particular cache store in the Terracotta clustered memory. There is a restriction on characters
    in the name in case MBeans are used. See the section describing restrictions for unquoted values
    in the javax.management.ObjectName javadoc.
  • dynamicConfig - an optional setting that can be used to disable dynamic configuration of caches
    associated with this CacheManager. By default this is set to true - i.e. dynamic configuration
    is enabled. Dynamically configurable caches can have their TTI, TTL and maximum disk and
    in-memory capacity changed at runtime through the cache’s configuration object.
  • monitoring - an optional setting that determines whether the CacheManager should
    automatically register the SampledCacheMBean with the system MBean server.

Currently, this monitoring is only useful when using Terracotta clustering and using the
Terracotta Developer Console. With the “autodetect” value, the presence of Terracotta clustering
will be detected and monitoring, via the Developer Console, will be enabled. Other allowed values
are “on” and “off”. The default is “autodetect”. This setting does not perform any function when
used with JMX monitors.

  • maxBytesLocalHeap - optional setting that constraints the memory usage of the Caches managed by the CacheManager
    to use at most the specified number of bytes of the local VM’s heap.
  • maxBytesLocalOffHeap - optional setting that constraints the offHeap usage of the Caches managed by the CacheManager
    to use at most the specified number of bytes of the local VM’s offHeap memory.
  • maxBytesLocalDisk - optional setting that constraints the disk usage of the Caches managed by the CacheManager
    to use at most the specified number of bytes of the local disk.

These settings let you define “resource pools”, caches will share. For instance setting maxBytesLocalHeap to 100M, will result in
all caches sharing 100 MegaBytes of ram. The CacheManager will balance these 100 MB across all caches based on their respective usage
patterns. You can allocate a precise amount of bytes to a particular cache by setting the appropriate maxBytes* attribute for that cache.
That amount will be subtracted from the CacheManager pools, so that if a cache a specified 30M requirement, the other caches will share
the remaining 70M.

Also, specifying a maxBytesLocalOffHeap at the CacheManager level will result in overflowToOffHeap to be true by default. If you don’t want
a specific cache to overflow to off heap, you’ll have to set overflowToOffHeap=“false” explicitly

Here is an example of CacheManager level resource tuning, which will use up to 400M of heap and 2G of offHeap:

-->

<!--
Management Rest Service configuration
=====================================

The managementRESTService element is optional.  By default the REST service that exposes monitoring and
management features for the caches within the cache manager is disabled.  Enabling this feature will
affect cache performance.

The 'bind' attribute defaults to "0.0.0.0:9888" and sets the IP Address and Port to bind the web service
to.  "0.0.0.0" binds to all local addresses / network interfaces.

If you provide the 'securityServiceLocation' attribute, this will also enable authentication and other
security measures on the REST service - which are only available for the enterprise-edition of the
service.  The location should be the URL to the Terracotta Management Server that is being used to
manage the ehcache instance. Enabling security requires that the management REST service be provided with
a terracotta keychain in the default location ${user.home}/.tc/mgmt/keychain or as defined by the system property
com.tc.management.keychain.file. The keychain is expected to hold a secret shared by the management client
and keyed with this REST service's URI.

Related to the the enterprise-edition security setup is the 'securityServiceTimeout' attribute. Setting this
value will allow adjustment of the connection timeout to the security service location. The default value is
5000 millis.

If the 'sslEnabled' attribute is set to true, this will enable a non-blocking ssl connection to the management
REST service. Turning this ssl connection on requires an identity store be provided at the default location
${user.home}/.tc/mgmt/keystore and that the JKS passphrase be included in the REST service keychain, keyed with
the identity store file URI, or that the keystore and passphrase be identified with the ssl system properties
javax.net.ssl.keyStore and javax.net.ssl.keyStorePassword.

The 'needClientAuth' attribute requires ssl client certificate authorization if the 'sslEnabled' attribute has been
set to true. Otherwise, it will be ignored.  Setting this attribute to true will require that the client's
identity is imported as trusted into a truststore which is provided in the default location
${user.home}/.tc/mgmt/keystore and that the JKS passphrase be included in the REST service keychain, keyed with
the trust store file URI, or that the truststore and passphrase be identified with the ssl system properties
javax.net.ssl.trustStore and javax.net.ssl.trustStorePassword.

Finally, several attributes exist to configure sampling history.

- 'sampleHistorySize' allows the configuration of how many statistical samples will be kept in memory for
each cache. The default value is set to 30.
- 'sampleIntervalSeconds' allows the configuration of how often cache statistics will be obtained in seconds.
The default value is set to 1 second.
- 'sampleSearchIntervalSeconds' allows the configuration of how often cache seach statistics will be obtained in
seconds. The default value is set to 10 seconds.

examples:

<managementRESTService enabled="true" bind="0.0.0.0:9888" />

<managementRESTService enabled="true" securityServiceLocation="http://localhost:9889/tmc/api/assertIdentity"  />

 -->

<!--
DiskStore configuration
=======================

The diskStore element is optional. To turn off disk store path creation, comment out the diskStore
element below.

Configure it if you have disk persistence enabled for any cache or if you use
unclustered indexed search.

If it is not configured, and a cache is created which requires a disk store, a warning will be
 issued and java.io.tmpdir will automatically be used.

diskStore has only one attribute - "path". It is the path to the directory where
any required disk files will be created.

If the path is one of the following Java System Property it is replaced by its value in the
running VM. For backward compatibility these should be specified without being enclosed in the ${token}
replacement syntax.

The following properties are translated:
* user.home - User's home directory
* user.dir - User's current working directory
* java.io.tmpdir - Default temp file path
* ehcache.disk.store.dir - A system property you would normally specify on the command line
  e.g. java -Dehcache.disk.store.dir=/u01/myapp/diskdir ...

Subdirectories can be specified below the property e.g. java.io.tmpdir/one

-->
<diskStore path="java.io.tmpdir"/>

<!--
TransactionManagerLookup configuration
======================================
This class is used by ehcache to lookup the JTA TransactionManager use in the application
using an XA enabled ehcache. If no class is specified then DefaultTransactionManagerLookup
will find the TransactionManager in the following order

 *GenericJNDI (i.e. jboss, where the property jndiName controls the name of the
                TransactionManager object to look up)
 *Bitronix
 *Atomikos

You can provide you own lookup class that implements the
net.sf.ehcache.transaction.manager.TransactionManagerLookup interface.
-->

<transactionManagerLookup class="net.sf.ehcache.transaction.manager.DefaultTransactionManagerLookup"
                          properties="jndiName=java:/TransactionManager" propertySeparator=";"/>


<!--
CacheManagerEventListener
=========================
Specifies a CacheManagerEventListenerFactory which is notified when Caches are added
or removed from the CacheManager.

The attributes of CacheManagerEventListenerFactory are:
* class - a fully qualified factory class name
* properties - comma separated properties having meaning only to the factory.

Sets the fully qualified class name to be registered as the CacheManager event listener.

The events include:
* adding a Cache
* removing a Cache

Callbacks to listener methods are synchronous and unsynchronized. It is the responsibility
of the implementer to safely handle the potential performance and thread safety issues
depending on what their listener is doing.

If no class is specified, no listener is created. There is no default.
-->
<cacheManagerEventListenerFactory class="" properties=""/>


<!--
CacheManagerPeerProvider
========================
(For distributed operation)

Specifies a CacheManagerPeerProviderFactory which will be used to create a
CacheManagerPeerProvider, which discovers other CacheManagers in the cluster.

One or more providers can be configured. The first one in the ehcache.xml is the default, which is used
for replication and bootstrapping.

The attributes of cacheManagerPeerProviderFactory are:
* class - a fully qualified factory class name
* properties - comma separated properties having meaning only to the factory.

Providers are available for RMI, JGroups and JMS as shown following.

RMICacheManagerPeerProvider
+++++++++++++++++++++++++++

Ehcache comes with a built-in RMI-based distribution system with two means of discovery of
CacheManager peers participating in the cluster:
* automatic, using a multicast group. This one automatically discovers peers and detects
  changes such as peers entering and leaving the group
* manual, using manual rmiURL configuration. A hardcoded list of peers is provided at
  configuration time.

Configuring Automatic Discovery:
Automatic discovery is configured as per the following example:
<cacheManagerPeerProviderFactory
                    class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
                    properties="hostName=fully_qualified_hostname_or_ip,
                                peerDiscovery=automatic, multicastGroupAddress=230.0.0.1,
                                multicastGroupPort=4446, timeToLive=32"/>

Valid properties are:
* peerDiscovery (mandatory) - specify "automatic"
* multicastGroupAddress (mandatory) - specify a valid multicast group address
* multicastGroupPort (mandatory) - specify a dedicated port for the multicast heartbeat
  traffic
* timeToLive - specify a value between 0 and 255 which determines how far the packets will
  propagate.

  By convention, the restrictions are:
  0   - the same host
  1   - the same subnet
  32  - the same site
  64  - the same region
  128 - the same continent
  255 - unrestricted

 * hostName - the hostname or IP of the interface to be used for sending and receiving multicast
   packets (relevant to multi-homed hosts only)

Configuring Manual Discovery:
Manual discovery requires a unique configuration per host. It contains a list of rmiURLs for
the peers, other than itself. So, if we have server1, server2 and server3 the configuration will
be:

In server1's configuration:
<cacheManagerPeerProviderFactory class=
                      "net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
                      properties="peerDiscovery=manual,
                      rmiUrls=//server2:40000/sampleCache1|//server3:40000/sampleCache1
                      | //server2:40000/sampleCache2|//server3:40000/sampleCache2"
                      propertySeparator="," />

In server2's configuration:
<cacheManagerPeerProviderFactory class=
                      "net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
                      properties="peerDiscovery=manual,
                      rmiUrls=//server1:40000/sampleCache1|//server3:40000/sampleCache1
                      | //server1:40000/sampleCache2|//server3:40000/sampleCache2"
                      propertySeparator="," />

In server3's configuration:
<cacheManagerPeerProviderFactory class=
                      "net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
                      properties="peerDiscovery=manual,
                      rmiUrls=//server1:40000/sampleCache1|//server2:40000/sampleCache1
                      | //server1:40000/sampleCache2|//server2:40000/sampleCache2"
                      propertySeparator="," />


Valid properties are:
* peerDiscovery (mandatory) - specify "manual"
* rmiUrls (mandatory) - specify a pipe separated list of rmiUrls, in the form
                        //hostname:port
* hostname (optional) - the hostname is the hostname of the remote CacheManager peer. The port is the listening
  port of the RMICacheManagerPeerListener of the remote CacheManager peer.

JGroupsCacheManagerPeerProvider
+++++++++++++++++++++++++++++++
<cacheManagerPeerProviderFactory
     class="net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory"
     properties="channel=ehcache^connect=UDP(mcast_addr=231.12.21.132;mcast_port=45566;ip_ttl=32;
     mcast_send_buf_size=150000;mcast_recv_buf_size=80000):
     PING(timeout=2000;num_initial_members=6):
     MERGE2(min_interval=5000;max_interval=10000):
     FD_SOCK:VERIFY_SUSPECT(timeout=1500):
     pbcast.NAKACK(gc_lag=10;retransmit_timeout=3000):
     UNICAST(timeout=5000):
     pbcast.STABLE(desired_avg_gossip=20000):
     FRAG:
     pbcast.GMS(join_timeout=5000;join_retry_timeout=2000;shun=false;print_local_addr=false)"
     propertySeparator="^"
 />
 JGroups configuration is done by providing a connect string using connect= as in the above example which uses
 multicast, or since version 1.4, a file= to specify the location of a JGroups configuration file.

 If neither a connect or file property is specified, the default JGroups JChannel will be used.

 Multiple JGroups clusters may be run on the same network by specifying a different CacheManager name. The name
 is used as the cluster name.

 Since version 1.4 you can specify a channelName to avoid conflicts.


JMSCacheManagerPeerProviderFactory
++++++++++++++++++++++++++++++++++
<cacheManagerPeerProviderFactory
        class="net.sf.ehcache.distribution.jms.JMSCacheManagerPeerProviderFactory"
        properties="..."
        propertySeparator=","
        />

The JMS PeerProviderFactory uses JNDI to maintain message queue independence. Refer to the manual for full configuration
examples using ActiveMQ and Open Message Queue.

Valid properties are:
* initialContextFactoryName (mandatory) - the name of the factory used to create the message queue initial context.
* providerURL (mandatory) - the JNDI configuration information for the service provider to use.
* topicConnectionFactoryBindingName (mandatory) - the JNDI binding name for the TopicConnectionFactory
* topicBindingName (mandatory) - the JNDI binding name for the topic name
* getQueueBindingName (mandatory only if using jmsCacheLoader) - the JNDI binding name for the queue name
* securityPrincipalName - the JNDI java.naming.security.principal
* securityCredentials - the JNDI java.naming.security.credentials
* urlPkgPrefixes - the JNDI java.naming.factory.url.pkgs
* userName - the user name to use when creating the TopicConnection to the Message Queue
* password - the password to use when creating the TopicConnection to the Message Queue
* acknowledgementMode - the JMS Acknowledgement mode for both publisher and subscriber. The available choices are
                        AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE and SESSION_TRANSACTED. The default is AUTO_ACKNOWLEDGE.
-->
<cacheManagerPeerProviderFactory
        class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
        properties="peerDiscovery=automatic,
                    multicastGroupAddress=230.0.0.1,
                    multicastGroupPort=4446, timeToLive=1"
        propertySeparator=","
        />


<!--
CacheManagerPeerListener
========================
(Enable for distributed operation)

Specifies a CacheManagerPeerListenerFactory which will be used to create a
CacheManagerPeerListener, which listens for messages from cache replicators participating in the cluster.

The attributes of cacheManagerPeerListenerFactory are:
class - a fully qualified factory class name
properties - comma separated properties having meaning only to the factory.

Ehcache comes with a built-in RMI-based distribution system. The listener component is
RMICacheManagerPeerListener which is configured using
RMICacheManagerPeerListenerFactory. It is configured as per the following example:

<cacheManagerPeerListenerFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
    properties="hostName=fully_qualified_hostname_or_ip,
                port=40001,
                remoteObjectPort=40002,
                socketTimeoutMillis=120000"
                propertySeparator="," />

All properties are optional. They are:
* hostName - the hostName of the host the listener is running on. Specify
  where the host is multihomed and you want to control the interface over which cluster
  messages are received. Defaults to the host name of the default interface if not
  specified.
* port - the port the RMI Registry listener listens on. This defaults to a free port if not specified.
* remoteObjectPort - the port number on which the remote objects bound in the registry receive calls.
                     This defaults to a free port if not specified.
* socketTimeoutMillis - the number of ms client sockets will stay open when sending
  messages to the listener. This should be long enough for the slowest message.
  If not specified it defaults to 120000ms.

-->
<cacheManagerPeerListenerFactory
        class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"/>

<!--
TerracottaConfig
========================
(Enable for Terracotta clustered operation)

Note: You need to install and run one or more Terracotta servers to use Terracotta clustering.
See http://www.terracotta.org/web/display/orgsite/Download.

Specifies a TerracottaConfig which will be used to configure the Terracotta
runtime for this CacheManager.

Configuration can be specified in two main ways: by reference to a source of
configuration or by use of an embedded Terracotta configuration file.

To specify a reference to a source (or sources) of configuration, use the url
attribute.  The url attribute must contain a comma-separated list of:
* path to Terracotta configuration file (usually named tc-config.xml)
* URL to Terracotta configuration file
* <server host>:<port> of running Terracotta Server instance

Simplest example for pointing to a Terracotta server on this machine:
<terracottaConfig url="localhost:9510"/>

This element has one attribute "rejoin" which can take values of either "true" or "false":
<terracottaConfig rejoin="true" url="localhost:9510" />

By default, this attribute is false.

Without rejoin, if the Terracotta Server is restarted the client cannot connect back to the
server. When enabled, this allows the client to connect to the new cluster without the
need to restart the node.

Example using a path to Terracotta configuration file:
<terracottaConfig url="/app/config/tc-config.xml"/>

Example using a URL to a Terracotta configuration file:
<terracottaConfig url="http://internal/ehcache/app/tc-config.xml"/>

Example using multiple Terracotta server instance URLs (for fault tolerance):
<terracottaConfig url="host1:9510,host2:9510,host3:9510"/>

To embed a Terracotta configuration file within the ehcache configuration, simply
place a normal Terracotta XML config within the <terracottaConfig> element.

Example:
<terracottaConfig>
    <tc-config>
        <servers>
            <server host="server1" name="s1"/>
            <server host="server2" name="s2"/>
        </servers>
        <clients>
            <logs>app/logs-%i</logs>
        </clients>
    </tc-config>
</terracottaConfig>

For more information on the Terracotta configuration, see the Terracotta documentation.
-->

<!--
Cache configuration
===================

The following attributes are required.

name:
Sets the name of the cache. This is used to identify the cache. It must be unique.
There is a restriction on characters in the name in case MBeans are used.
See the section describing restrictions for unquoted values in the javax.management.ObjectName javadoc.

The following attributes and elements are optional.

maxEntriesLocalHeap:
Sets the maximum number of objects that will be held on heap memory.  0 = no limit.

maxEntriesLocalDisk:
Sets the maximum number of objects that will be maintained in the DiskStore
The default value is zero, meaning unlimited.

eternal:
Sets whether elements are eternal. If eternal,  timeouts are ignored and the
element is never expired.

maxEntriesInCache:
This feature is applicable only to Terracotta distributed caches.
Sets the maximum number of entries that can be stored in the cluster. 0 = no limit.
Note that clustered cache will still perform eviction if resource usage requires it.
This property can be modified dynamically while the cache is operating.

overflowToOffHeap:
(boolean) This feature is available only in enterprise versions of Ehcache.
When set to true, enables the cache to utilize off-heap memory
storage to improve performance. Off-heap memory is not subject to Java
GC. The default value is false.

maxBytesLocalHeap:
Defines how many bytes the cache may use from the VM's heap. If a CacheManager
maxBytesLocalHeap has been defined, this Cache's specified amount will be
subtracted from the CacheManager. Other caches will share the remainder.
This attribute's values are given as <number>k|K|m|M|g|G for
kilobytes (k|K), megabytes (m|M), or gigabytes (g|G).
For example, maxBytesLocalHeap="2g" allots 2 gigabytes of heap memory.
If you specify a maxBytesLocalHeap, you can't use the maxEntriesLocalHeap attribute.
maxEntriesLocalHeap can't be used if a CacheManager maxBytesLocalHeap is set.

Elements put into the cache will be measured in size using net.sf.ehcache.pool.sizeof.SizeOf
If you wish to ignore some part of the object graph, see net.sf.ehcache.pool.sizeof.annotations.IgnoreSizeOf

maxBytesLocalOffHeap:
This feature is available only in enterprise versions of Ehcache.
Sets the amount of off-heap memory this cache can use, and will reserve.

This setting will set overflowToOffHeap to true. Set explicitly to false to disable overflow behavior.

Note that it is recommended to set maxEntriesLocalHeap to at least 100 elements
when using an off-heap store, otherwise performance will be seriously degraded,
and a warning will be logged.

The minimum amount that can be allocated is 128MB. There is no maximum.

maxBytesLocalDisk:
As for maxBytesLocalHeap, but specifies the limit of disk storage this cache will ever use.

timeToIdleSeconds:
Sets the time to idle for an element before it expires.
i.e. The maximum amount of time between accesses before an element expires
Is only used if the element is not eternal.
Optional attribute. A value of 0 means that an Element can idle for infinity.
The default value is 0.

timeToLiveSeconds:
Sets the time to live for an element before it expires.
i.e. The maximum time between creation time and when an element expires.
Is only used if the element is not eternal.
Optional attribute. A value of 0 means that an Element can live for infinity.
The default value is 0.

diskExpiryThreadIntervalSeconds:
The number of seconds between runs of the disk expiry thread. The default value
is 120 seconds.

diskSpoolBufferSizeMB:
This is the size to allocate the DiskStore for a spool buffer. Writes are made
to this area and then asynchronously written to disk. The default size is 30MB.
Each spool buffer is used only by its cache. If you get OutOfMemory errors consider
lowering this value. To improve DiskStore performance consider increasing it. Trace level
logging in the DiskStore will show if put back ups are occurring.

clearOnFlush:
whether the MemoryStore should be cleared when flush() is called on the cache.
By default, this is true i.e. the MemoryStore is cleared.

memoryStoreEvictionPolicy:
Policy would be enforced upon reaching the maxEntriesLocalHeap limit. Default
policy is Least Recently Used (specified as LRU). Other policies available -
First In First Out (specified as FIFO) and Less Frequently Used
(specified as LFU)

copyOnRead:
Whether an Element is copied when being read from a cache.
By default this is false.

copyOnWrite:
Whether an Element is copied when being added to the cache.
By default this is false.

Cache persistence is configured through the persistence sub-element.  The attributes of the
persistence element are:

strategy:
Configures the type of persistence provided by the configured cache.  This must be one of the
following values:

* localRestartable - Enables the RestartStore and copies all cache entries (on-heap and/or off-heap)
to disk. This option provides fast restartability with fault tolerant cache persistence on disk.
It is available for Enterprise Ehcache users only.

* localTempSwap - Swaps cache entries (on-heap and/or off-heap) to disk when the cache is full.
"localTempSwap" is not persistent.

* none - Does not persist cache entries.

* distributed - Defers to the <terracotta> configuration for persistence settings. This option
is not applicable for standalone.

synchronousWrites:
When set to true write operations on the cache do not return until after the operations data has been
successfully flushed to the disk storage.  This option is only valid when used with the "localRestartable"
strategy, and defaults to false.

The following example configuration shows a cache configured for localTempSwap restartability.

<cache name="persistentCache" maxEntriesLocalHeap="1000">
    <persistence strategy="localTempSwap"/>
</cache>
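For comparison, a restartable cache with synchronous writes could be configured as
follows (Enterprise Ehcache only; the cache name here is illustrative, and
synchronousWrites is only valid with the "localRestartable" strategy):

<cache name="restartableSafeCache" maxEntriesLocalHeap="1000">
    <persistence strategy="localRestartable" synchronousWrites="true"/>
</cache>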

Cache elements can also contain sub elements which take the same format of a factory class
and properties. Defined sub-elements are:

* cacheEventListenerFactory - Enables registration of listeners for cache events, such as
  put, remove, update, and expire.

* bootstrapCacheLoaderFactory - Specifies a BootstrapCacheLoader, which is called by a
  cache on initialisation to prepopulate itself.

* cacheExtensionFactory - Specifies a CacheExtension, a generic mechanism to tie a class
  which holds a reference to a cache to the cache lifecycle.

* cacheExceptionHandlerFactory - Specifies a CacheExceptionHandler, which is called when
  cache exceptions occur.

* cacheLoaderFactory - Specifies a CacheLoader, which can be used both asynchronously and
  synchronously to load objects into a cache. More than one cacheLoaderFactory element
  can be added, in which case the loaders form a chain which are executed in order. If a
  loader returns null, the next in chain is called.

* copyStrategy - Specifies a fully qualified class which implements
  net.sf.ehcache.store.compound.CopyStrategy. This strategy will be used for copyOnRead
  and copyOnWrite in place of the default which is serialization.
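As a sketch of loader chaining (the factory class names below are placeholders),
two cacheLoaderFactory elements can be declared on one cache; if the first loader
returns null for a key, the second is called:

<cache name="chainedLoaderCache" maxEntriesLocalHeap="1000">
    <cacheLoaderFactory class="com.example.PrimaryLoaderFactory"/>
    <cacheLoaderFactory class="com.example.FallbackLoaderFactory"/>
</cache>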

Example of cache level resource tuning:
<cache name="memBound" maxBytesLocalHeap="100m" maxBytesLocalOffHeap="4g" maxBytesLocalDisk="200g" />


Cache Event Listeners
+++++++++++++++++++++

All cacheEventListenerFactory elements can take an optional property listenFor that describes
which events will be delivered in a clustered environment.  The listenFor attribute has the
following allowed values:

* all - the default is to deliver all local and remote events
* local - deliver only events originating in the current node
* remote - deliver only events originating in other nodes

Example of setting up a logging listener for local cache events:

<cacheEventListenerFactory class="my.company.log.CacheLogger"
    listenFor="local" />


Search
++++++

A <cache> can be made searchable by adding a <searchable/> sub-element. By default the keys
and value objects of elements put into the cache will be attributes against which
queries can be expressed.

<cache>
    <searchable/>
</cache>


An "attribute" of the cache elements can also be defined to be searchable. In the example below
an attribute with the name "age" will be available for use in queries. The value for the "age"
attribute will be computed by calling the method "getAge()" on the value object of each element
in the cache. See net.sf.ehcache.search.attribute.ReflectionAttributeExtractor for the format of
attribute expressions. Attribute values must also conform to the set of types documented in the
net.sf.ehcache.search.attribute.AttributeExtractor interface

<cache>
    <searchable>
        <searchAttribute name="age" expression="value.getAge()"/>
    </searchable>
</cache>


Attributes may also be defined using a JavaBean style. With the following attribute declaration
a public method getAge() will be expected to be found on either the key or value for cache elements

<cache>
    <searchable>
        <searchAttribute name="age"/>
    </searchable>
</cache>

In more complex situations you can create your own attribute extractor by implementing the
AttributeExtractor interface. Providing your extractor class is shown in the following example:

<cache>
    <searchable>
        <searchAttribute name="age" class="com.example.MyAttributeExtractor"/>
    </searchable>
</cache>

Use properties to pass state to your attribute extractor if needed. Your implementation must provide
a public constructor that takes a single java.util.Properties instance

<cache>
    <searchable>
        <searchAttribute name="age" class="com.example.MyAttributeExtractor" properties="foo=1,bar=2"/>
    </searchable>
</cache>

Attributes may also be defined with an optional type constraint on their values. The type specified 
must be one of the supported types, or resolve to an enum. It is possible to use either a fully 
qualified name of the class that represents the type, or its shortened version when the type is not
an enum. The type names are case sensitive, i.e. "double" is distinct from "Double".

<cache>
    <searchable>
        <searchAttribute name="address" type="java.lang.String" expression="value.address.toString()"/>
        <searchAttribute name="income" type="Long"/>
        <searchAttribute name="age" type="int"/>
        <searchAttribute name="gender" expression="value.gender" type="com.example.Gender"/>
    </searchable>
</cache>

If you intend to use dynamic attribute extraction (see net.sf.ehcache.Cache.registerDynamicAttributesExtractor) then
you need to enable it as follows:

<cache>
    <searchable allowDynamicIndexing="true"/>
</cache>


RMI Cache Replication
+++++++++++++++++++++

Each cache that will be distributed needs to set a cache event listener which replicates
messages to the other CacheManager peers. For the built-in RMI implementation this is done
by adding a cacheEventListenerFactory element of type RMICacheReplicatorFactory to each
distributed cache's configuration as per the following example:

<cacheEventListenerFactory class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
     properties="replicateAsynchronously=true,
     replicatePuts=true,
     replicatePutsViaCopy=false,
     replicateUpdates=true,
     replicateUpdatesViaCopy=true,
     replicateRemovals=true,
     asynchronousReplicationIntervalMillis=<number of milliseconds>,
     asynchronousReplicationMaximumBatchSize=<number of operations>"
     propertySeparator="," />

The RMICacheReplicatorFactory recognises the following properties:

* replicatePuts=true|false - whether new elements placed in a cache are
  replicated to others. Defaults to true.

* replicatePutsViaCopy=true|false - whether the new elements are
  copied to other caches (true), or whether a remove message is sent. Defaults to true.

* replicateUpdates=true|false - whether new elements which override an
  element already existing with the same key are replicated. Defaults to true.

* replicateRemovals=true|false - whether element removals are replicated. Defaults to true.

* replicateAsynchronously=true | false - whether replications are
  asynchronous (true) or synchronous (false). Defaults to true.

* replicateUpdatesViaCopy=true | false - whether the new elements are
  copied to other caches (true), or whether a remove message is sent. Defaults to true.

* asynchronousReplicationIntervalMillis=<number of milliseconds> - The asynchronous
  replicator runs at a set interval of milliseconds. The default is 1000. The minimum
  is 10. This property is only applicable if replicateAsynchronously=true

* asynchronousReplicationMaximumBatchSize=<number of operations> - The maximum
  number of operations that will be batched within a single RMI message.  The default
  is 1000. This property is only applicable if replicateAsynchronously=true

JGroups Replication
+++++++++++++++++++

For JGroups replication this is done with:
<cacheEventListenerFactory class="net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory"
                        properties="replicateAsynchronously=true, replicatePuts=true,
           replicateUpdates=true, replicateUpdatesViaCopy=false,
           replicateRemovals=true,asynchronousReplicationIntervalMillis=1000"/>
This listener supports the same properties as the RMICacheReplicatorFactory.


JMS Replication
+++++++++++++++

For JMS-based replication this is done with:
<cacheEventListenerFactory
      class="net.sf.ehcache.distribution.jms.JMSCacheReplicatorFactory"
      properties="replicateAsynchronously=true,
                   replicatePuts=true,
                   replicateUpdates=true,
                   replicateUpdatesViaCopy=true,
                   replicateRemovals=true,
                   asynchronousReplicationIntervalMillis=1000"
       propertySeparator=","/>

This listener supports the same properties as the RMICacheReplicatorFactory.

Cluster Bootstrapping
+++++++++++++++++++++

Bootstrapping a cluster may use a different mechanism from replication, e.g. you can mix
JMS replication with bootstrap via RMI - just make sure you have the cacheManagerPeerProviderFactory
and cacheManagerPeerListenerFactory configured.
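For the built-in RMI mechanism, a minimal sketch of those two CacheManager-level
elements with manual peer discovery might look as follows (hostnames, ports and the
cache name are placeholders). Note that fixing remoteObjectPort makes the RMI
data-transfer port predictable, which matters when a firewall sits between the
peers; if it is not set, a random free port is chosen at startup:

<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
    properties="peerDiscovery=manual,rmiUrls=//192.168.0.2:40001/sampleReplicatedCache1"/>

<cacheManagerPeerListenerFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
    properties="hostName=192.168.0.1,port=40001,remoteObjectPort=40002"/>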

There are two bootstrapping mechanisms: RMI and JGroups.

RMI Bootstrap

The RMIBootstrapCacheLoader bootstraps caches in clusters where RMICacheReplicators are
used. It is configured as per the following example:

<bootstrapCacheLoaderFactory
    class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory"
    properties="bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000"
    propertySeparator="," />

The RMIBootstrapCacheLoaderFactory recognises the following optional properties:

* bootstrapAsynchronously=true|false - whether the bootstrap happens in the background
  after the cache has started. If false, bootstrapping must complete before the cache is
  made available. The default value is true.

* maximumChunkSizeBytes=<integer> - Caches can potentially be very large, larger than the
  memory limits of the VM. This property allows the bootstrapper to fetch elements in
  chunks. The default chunk size is 5000000 (5MB).

JGroups Bootstrap

Here is an example of bootstrap configuration using JGroups bootstrap:

<bootstrapCacheLoaderFactory class="net.sf.ehcache.distribution.jgroups.JGroupsBootstrapCacheLoaderFactory"
                                properties="bootstrapAsynchronously=true"/>

The configuration properties are the same as for RMI above. Note that JGroups bootstrap only supports
asynchronous bootstrap mode.


Cache Exception Handling
++++++++++++++++++++++++

By default, most cache operations will propagate a runtime CacheException on failure. An
interceptor, using a dynamic proxy, may be configured so that a CacheExceptionHandler can
be configured to intercept Exceptions. Errors are not intercepted.

It is configured as per the following example:

  <cacheExceptionHandlerFactory class="com.example.ExampleExceptionHandlerFactory"
                                  properties="logLevel=FINE"/>

Caches with ExceptionHandling configured are not of type Cache, but are of type Ehcache only,
and are not available using CacheManager.getCache(), but using CacheManager.getEhcache().


Cache Loader
++++++++++++

A default CacheLoader may be set which loads objects into the cache through asynchronous and
synchronous methods on Cache. This is different to the bootstrap cache loader, which is used
only in distributed caching.

It is configured as per the following example:

    <cacheLoaderFactory class="com.example.ExampleCacheLoaderFactory"
                                  properties="type=int,startCounter=10"/>

Element value comparator
++++++++++++++++++++++++

These two cache atomic methods:
  removeElement(Element e)
  replace(Element old, Element element)

rely on comparison of the cached elements' values. The default implementation relies on Object.equals()
but that can be changed in case you want to use a different way to compute equality of two elements.

This is configured as per the following example:

<elementValueComparator class="com.company.xyz.MyElementComparator"/>

The MyElementComparator class must implement the net.sf.ehcache.store.ElementValueComparator
interface. The default implementation is net.sf.ehcache.store.DefaultElementValueComparator.


SizeOf Policy
+++++++++++++

Control how deep the SizeOf engine can go when sizing on-heap elements.

This is configured as per the following example:

<sizeOfPolicy maxDepth="100" maxDepthExceededBehavior="abort"/>

maxDepth controls how many linked objects can be visited before the SizeOf engine takes any action.
maxDepthExceededBehavior specifies what happens when the max depth is exceeded while sizing an object graph.
 "continue" makes the SizeOf engine log a warning and continue the sizing. This is the default.
 "abort"    makes the SizeOf engine abort the sizing, log a warning and mark the cache as not correctly tracking
            memory usage. This makes Ehcache.hasAbortedSizeOf() return true when this happens.

The SizeOf policy can be configured at the cache manager level (directly under <ehcache>) and at
the cache level (under <cache> or <defaultCache>). The cache policy always overrides the cache manager
one if both are set. This element has no effect on distributed caches.
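For example, a cache-level policy overriding a manager-level one could be sketched as
(the cache name and limits here are illustrative):

<cache name="sizedCache" maxBytesLocalHeap="64m">
    <sizeOfPolicy maxDepth="10000" maxDepthExceededBehavior="continue"/>
</cache>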

Transactions
++++++++++++

To enable transactions on a cache, set the transactionalMode attribute:

transactionalMode="xa" - high performance JTA/XA implementation
transactionalMode="xa_strict" - canonically correct JTA/XA implementation
transactionalMode="local" - high performance local transactions involving caches only
transactionalMode="off" - the default, no transactions

If set, all cache operations will need to be done through transactions.

To prevent users keeping references on stored elements and modifying them outside of any transaction's control,
transactions also require the cache to be configured copyOnRead and copyOnWrite.
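A minimal sketch of a locally transactional cache combining these settings (the cache
name is illustrative):

<cache name="localTxCache"
       maxEntriesLocalHeap="500"
       transactionalMode="local"
       copyOnRead="true"
       copyOnWrite="true"/>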

CacheWriter
++++++++++++

A CacheWriter can be set to write to an underlying resource. Only one CacheWriter can be
configured per cache.

The following is an example of how to configure CacheWriter for write-through:

    <cacheWriter writeMode="write-through" notifyListenersOnException="true">
        <cacheWriterFactory class="net.sf.ehcache.writer.TestCacheWriterFactory"
                            properties="type=int,startCounter=10"/>
    </cacheWriter>

The following is an example of how to configure CacheWriter for write-behind:

    <cacheWriter writeMode="write-behind" minWriteDelay="1" maxWriteDelay="5"
                 rateLimitPerSecond="5" writeCoalescing="true" writeBatching="true" writeBatchSize="1"
                 retryAttempts="2" retryAttemptDelaySeconds="1">
        <cacheWriterFactory class="net.sf.ehcache.writer.TestCacheWriterFactory"
                            properties="type=int,startCounter=10"/>
    </cacheWriter>

The cacheWriter element has the following attributes:
* writeMode: the write mode, write-through or write-behind

These attributes only apply to write-through mode:
* notifyListenersOnException: Sets whether to notify listeners when an exception occurs on a writer operation.

These attributes only apply to write-behind mode:
* minWriteDelay: Set the minimum number of seconds to wait before writing behind. If set to a value greater than 0,
  it permits operations to build up in the queue. This is different from the maximum write delay in that by waiting
  a minimum amount of time, work is always being built up. If the minimum write delay is set to zero and the
  CacheWriter performs its work very quickly, the overhead of processing the write behind queue items becomes very
  noticeable in a cluster since all the operations might be done for individual items instead of for a collection
  of them.
* maxWriteDelay: Set the maximum number of seconds to wait before writing behind. If set to a value greater than 0,
  it permits operations to build up in the queue to enable effective coalescing and batching optimisations.
* writeBatching: Sets whether to batch write operations. If set to true, writeAll and deleteAll will be called on
  the CacheWriter rather than write and delete being called for each key. Resources such as databases can perform
  more efficiently if updates are batched, thus reducing load.
* writeBatchSize: Sets the number of operations to include in each batch when writeBatching is enabled. If there are
  fewer entries in the write-behind queue than the batch size, the queue length is used.
* rateLimitPerSecond: Sets the maximum number of write operations to allow per second when writeBatching is enabled.
* writeCoalescing: Sets whether to use write coalescing. If set to true and multiple operations on the same key are
  present in the write-behind queue, only the latest write is done, as the others are redundant.
* retryAttempts: Sets the number of times the operation is retried in the CacheWriter, this happens after the
  original operation.
* retryAttemptDelaySeconds: Sets the number of seconds to wait before retrying a failed operation.

Pinning
+++++++

Use this element when data should remain in the cache regardless of resource constraints.
Unexpired entries can never be flushed to a lower tier or be evicted.

This element has a required attribute (store) to specify which data tiers the cache should be pinned to:
* localMemory: Cache data is pinned to the local heap (or off-heap for BigMemory Go and BigMemory Max).
* inCache: Cache data is pinned in the cache, which can be in any tier cache data is stored.

Example:
    <pinning store="inCache"/>

Cache Extension
+++++++++++++++

CacheExtensions are a general purpose mechanism to allow generic extensions to a Cache.
CacheExtensions are tied into the Cache lifecycle.

CacheExtensions are created using the CacheExtensionFactory which has a
<code>createCacheExtension()</code> method which takes a Cache and a Properties instance
as parameters. It can thus call back into any public method on Cache, including, of
course, the load methods.

Extensions are added as per the following example:

     <cacheExtensionFactory class="com.example.FileWatchingCacheRefresherExtensionFactory"
                         properties="refreshIntervalMillis=18000, loaderTimeout=3000,
                                     flushPeriod=whatever, someOtherProperty=someValue ..."/>

Cache Decorator Factory
+++++++++++++++++++++++

Cache decorators can be configured directly in ehcache.xml. The decorators will be created and added to the CacheManager.
It accepts the name of a concrete class that extends net.sf.ehcache.constructs.CacheDecoratorFactory
The properties will be parsed according to the delimiter (default is comma ',') and passed to the concrete factory's
<code>createDecoratedEhcache(Ehcache cache, Properties properties)</code> method along with the reference to the owning cache.

It is configured as per the following example:

    <cacheDecoratorFactory
  class="com.company.DecoratedCacheFactory"
  properties="property1=true ..." />

Distributed Caching with Terracotta
+++++++++++++++++++++++++++++++++++

Distributed Caches connect to a Terracotta Server Array. They are configured with the <terracotta> sub-element.

The <terracotta> sub-element has the following attributes:

* clustered=true|false - indicates whether this cache should be clustered (distributed) with Terracotta. By
  default, if the <terracotta> element is included, clustered=true.

* copyOnRead=true|false - indicates whether cache values are deserialized on every read or if the
  materialized cache value can be re-used between get() calls. This setting is useful if a cache
  is being shared by callers with disparate classloaders or to prevent local drift if keys/values
  are mutated locally without being put back in the cache.

  The default is false.

* consistency=strong|eventual - Indicates whether this cache should have strong consistency or eventual
  consistency. The default is eventual. See the documentation for the meaning of these terms.

* synchronousWrites=true|false

  Synchronous writes (synchronousWrites="true")  maximize data safety by blocking the client thread until
  the write has been written to the Terracotta Server Array.

  This option is only available with consistency=strong. The default is false.

* concurrency - the number of segments that will be used by the map underneath the Terracotta Store.
  It is optional and has a default value of 0, which means that defaults based on the
  internal Map being used underneath the store will apply.

  This value cannot be changed programmatically once a cache is initialized.

The <terracotta> sub-element also has a <nonstop> sub-element to allow configuration of cache behaviour if a distributed
cache operation cannot be completed within a set time or in the event of a clusterOffline message. If this element does not appear, nonstop behavior is off.

<nonstop> has the following attributes:

*  enabled="true" - defaults to true.

*  timeoutMillis - An SLA setting, so that if a cache operation takes longer than the allowed ms, it will timeout.

*  searchTimeoutMillis - If a cache search operation in the nonstop mode takes longer than the allowed ms, it will timeout.

*  immediateTimeout="true|false" - Whether to time out cache operations immediately on receipt of a
   ClusterOffline event indicating that communications with the Terracotta Server Array were interrupted.

<nonstop> has one sub-element, <timeoutBehavior> which has the following attribute:

*  type="noop|exception|localReads|localReadsAndExceptionOnWrite" - What to do when a timeout has occurred. Exception is the default.
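A sketch combining these elements (the timeout values are illustrative):

<terracotta>
    <nonstop enabled="true" timeoutMillis="3000" immediateTimeout="true">
        <timeoutBehavior type="localReads"/>
    </nonstop>
</terracotta>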

Simplest example to indicate clustering:
    <terracotta/>

To indicate the cache should not be clustered (or remove the <terracotta> element altogether):
    <terracotta clustered="false"/>

To indicate the cache should be clustered using "eventual" consistency mode for better performance :
    <terracotta clustered="true" consistency="eventual"/>

To indicate the cache should be clustered using synchronous-write locking level:
    <terracotta clustered="true" synchronousWrites="true"/>
-->

<!--
Default Cache configuration. These settings will be applied to caches
created programmatically using CacheManager.add(String cacheName).
This element is optional, and using CacheManager.add(String cacheName) when
it is not present will throw CacheException

The defaultCache has an implicit name "default" which is a reserved cache name.
-->
<defaultCache
        maxEntriesLocalHeap="10000"
        eternal="false"
        timeToIdleSeconds="120"
        timeToLiveSeconds="120"
        diskSpoolBufferSizeMB="30"
        maxEntriesLocalDisk="10000000"
        diskExpiryThreadIntervalSeconds="120"
        memoryStoreEvictionPolicy="LRU">
    <persistence strategy="localTempSwap"/>
</defaultCache>

<!--
Sample caches. Following are some example caches. Remove these before use.
-->

<!--
Sample cache named sampleCache1
This cache contains a maximum in memory of 10000 elements, and will expire
an element if it is idle for more than 5 minutes and lives for more than
10 minutes.

If there are more than 10000 elements it will overflow to the
disk cache, which in this configuration will go to wherever java.io.tmp is
defined on your system. On a standard Linux system this will be /tmp
-->
<cache name="sampleCache1"
       maxEntriesLocalHeap="10000"
       maxEntriesLocalDisk="1000"
       eternal="false"
       diskSpoolBufferSizeMB="20"
       timeToIdleSeconds="300"
       timeToLiveSeconds="600"
       memoryStoreEvictionPolicy="LFU"
       transactionalMode="off">
    <persistence strategy="localTempSwap"/>
</cache>


<!--
Sample cache named sampleCache2
This cache has a maximum of 1000 elements in memory. There is no overflow to disk, so 1000
is also the maximum cache size. Note that when a cache is eternal, timeToLive and
timeToIdle are not used and do not need to be specified.
-->
<cache name="sampleCache2"
       maxEntriesLocalHeap="1000"
       eternal="true"
       memoryStoreEvictionPolicy="FIFO"
        />


<!--
Sample cache named sampleCache3. This cache overflows to disk. The disk store is
persistent between cache and VM restarts. The disk expiry thread interval is set to 10
minutes, overriding the default of 2 minutes.
-->
<cache name="sampleCache3"
       maxEntriesLocalHeap="500"
       eternal="false"
       overflowToDisk="true"
       diskPersistent="true"
       timeToIdleSeconds="300"
       timeToLiveSeconds="600"
       diskExpiryThreadIntervalSeconds="1"
       memoryStoreEvictionPolicy="LFU">
</cache>


<!--
Sample distributed cache named sampleReplicatedCache1.
This cache replicates using defaults.
It also bootstraps from the cluster, using default properties.
-->
<cache name="sampleReplicatedCache1"
       maxEntriesLocalHeap="10"
       eternal="false"
       timeToIdleSeconds="100"
       timeToLiveSeconds="100">

    <cacheEventListenerFactory
            class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"/>
    <bootstrapCacheLoaderFactory
            class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory"/>
</cache>


<!--
Sample distributed cache named sampleReplicatedCache2.
This cache replicates using specific properties.
It only replicates updates and does so synchronously via copy
-->
<cache name="sampleReplicatedCache2"
       maxEntriesLocalHeap="10"
       eternal="false"
       timeToIdleSeconds="100"
       timeToLiveSeconds="100">
    <cacheEventListenerFactory
            class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
            properties="replicateAsynchronously=false, replicatePuts=false,
                        replicatePutsViaCopy=false, replicateUpdates=true,
                        replicateUpdatesViaCopy=true, replicateRemovals=false"/>
</cache>

<!--
Sample distributed cache named sampleReplicatedCache3.
This cache replicates using defaults except that the asynchronous replication
interval is set to 200ms.
This one includes / and # which were illegal in ehcache 1.5.
-->
<cache name="sampleReplicatedCache3"
       maxEntriesLocalHeap="10"
       eternal="false"
       timeToIdleSeconds="100"
       timeToLiveSeconds="100">
    <cacheEventListenerFactory
            class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
            properties="asynchronousReplicationIntervalMillis=200"/>
    <persistence strategy="localTempSwap"/>
</cache>

<!--
Sample Terracotta clustered cache named sampleTerracottaCache.
This cache uses Terracotta to cluster the contents of the cache.
-->
<!--
<cache name="sampleTerracottaCache"
       maxBytesLocalHeap="10m"
       eternal="false"
       timeToIdleSeconds="3600"
       timeToLiveSeconds="1800">
    <terracotta/>
</cache>
-->

<!--
  Sample xa enabled cache named xaCache
<cache name="xaCache"
       maxEntriesLocalHeap="500"
       eternal="false"
       timeToIdleSeconds="300"
       timeToLiveSeconds="600"
       diskExpiryThreadIntervalSeconds="1"
       transactionalMode="xa_strict">
</cache>
-->

<!--
  Sample copy on both read and write cache named copyCache
  using the default (explicitly configured here as an example) ReadWriteSerializationCopyStrategy
  class could be any implementation of net.sf.ehcache.store.compound.CopyStrategy
<cache name="copyCache"
       maxEntriesLocalHeap="500"
       eternal="false"
       timeToIdleSeconds="300"
       timeToLiveSeconds="600"
       diskExpiryThreadIntervalSeconds="1"
       copyOnRead="true"
       copyOnWrite="true">
    <copyStrategy class="net.sf.ehcache.store.compound.ReadWriteSerializationCopyStrategy" />
</cache>
-->
<!--
  Sample, for Enterprise Ehcache only, demonstrating a tiered cache with in-memory, off-heap
  and disk stores. In this example the in-memory (on-heap) store is limited to 10,000 items
  (which, for example at 1k per item, would use 10MB of memory), the off-heap store is
  limited to 4GB and the disk store is unlimited in size.
<cache name="tieredCache"
       maxEntriesLocalHeap="10000"
       eternal="false"
       timeToLiveSeconds="600"
       overflowToOffHeap="true"
       maxBytesLocalOffHeap="4g"
       diskExpiryThreadIntervalSeconds="1">
    <persistence strategy="localTempSwap"/>
 </cache>
-->
<!--
  Sample, for Enterprise Ehcache only, demonstrating a restartable cache with in-memory and off-heap stores.
<cache name="restartableCache"
       maxEntriesLocalHeap="10000"
       eternal="true"
       overflowToOffHeap="true"
       maxBytesLocalOffHeap="4g">
     <persistence strategy="localRestartable"/>
 </cache>
 -->

Author: 今天写Bug了吗
Source: CSDN
Original: https://blog.csdn.net/weixin_30379625/article/details/84377126
Copyright notice: This is an original post by the author; please include a link to it when reposting.
