In-process caches like Ehcache or JCS can be somewhat faster than distributed cache engines like Redis or Memcached, but they cannot scale out beyond a single process.
Here is a brief comparison between the two:
| Considerations | In-process cache | Distributed cache |
|---|---|---|
| Consistency | Cache entries are local to the application instance. A large application load-balanced across multiple instances will have a different cache state on each instance, leading to inconsistency. | Offers a single logical view of the cache. In most cases an object is stored on a single node, chosen by a hashing algorithm, so there is no data inconsistency. |
| Overheads | Can hurt application performance through garbage-collection overhead, depending on the cache size and how quickly objects are evicted. | No such GC overhead on the application heap, but slightly slower due to network latency and object serialization. |
| Reliability | Uses the same heap memory as the application, so you must carefully set an upper limit on cache memory; if the process runs out of memory, there is no easy way to recover. | Runs as an independent process across multiple nodes, so the failure of a single node does not mean complete failure of the cache. A failover's worst case is degraded performance rather than complete failure. |
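The "single node by means of a hashing algorithm" point can be sketched in a few lines. This is a deliberately simplified modulo-hash scheme (real engines such as Redis Cluster use hash slots and handle node membership changes more carefully); the `NodeSelector` class and its node list are illustrative, not part of any library API:

```java
import java.util.List;

// Sketch: how a distributed cache client maps a key to one node.
// Because every client hashes the same key to the same node, there is
// a single authoritative copy of each entry -- hence no inconsistency.
public class NodeSelector {
    private final List<String> nodes; // e.g. host:port of each cache node

    public NodeSelector(List<String> nodes) {
        this.nodes = nodes;
    }

    public String nodeFor(String key) {
        // floorMod keeps the slot non-negative even for negative hashes
        int slot = Math.floorMod(key.hashCode(), nodes.size());
        return nodes.get(slot);
    }
}
```

Note that with plain modulo hashing, adding or removing a node remaps most keys; consistent hashing exists precisely to limit that remapping.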
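The reliability row is why every serious in-process cache makes you configure a maximum size: entries live on the same heap as the application, so an unbounded cache eventually causes GC pressure or an `OutOfMemoryError`. A minimal sketch of a bounded LRU cache, built on the JDK's `LinkedHashMap` (Ehcache and JCS expose similar limits through configuration; this only illustrates the underlying idea):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal in-process LRU cache with a hard entry cap, so the cache
// cannot grow until it exhausts the application's heap.
public class BoundedLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BoundedLruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder=true gives LRU iteration order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least-recently-used entry once the cap is exceeded
        return size() > maxEntries;
    }
}
```

In production you would also cap by memory footprint, not just entry count, since entries can vary wildly in size.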