tl;dr: ElastiCache forces you to run a single Redis process per node, which is sub-optimal.
The long version:
I realize this is an old post (two years at the time of this writing), but I think it's important to note a point I don't see mentioned here.
On ElastiCache, your Redis deployment is managed by Amazon, which means you're stuck with however they choose to run it.
Redis uses a single thread of execution for reads and writes, which ensures consistency without locking. Not having to manage locks and latches is a major performance asset. The unfortunate consequence, though, is that if the underlying EC2 instance has more than one vCPU, the extra cores go unused. This is the case for every ElastiCache instance type with more than one vCPU.
The default ElastiCache instance size is cache.r3.large, which has two cores.
![Amazon's ElastiCache setup menu with defaults populated.]()
In fact, a number of instance sizes have multiple vCPUs, so there are plenty of opportunities for this issue to manifest.
![Table of ElastiCache instance types with multiple vCPUs.]()
It seems Amazon is already aware of this issue, but they seem a bit dismissive of it.
![Amazon's reply acknowledging the issue.]()
The part that makes this especially relevant to this question is that on your own EC2 instance (since you're managing the deployment yourself) you can implement multi-tenancy: running many Redis processes, each listening on a different port. By choosing which port to read from and write to based on a hash of the record's key, the application can spread load across all of its vCPUs.
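The port-selection scheme described above can be sketched as follows. This is a minimal illustration, not a production client; the port base, shard count, and function name are all assumptions for the example.

```python
import zlib

# Hypothetical multi-tenant layout: one Redis process per vCPU,
# listening on consecutive ports starting at BASE_PORT.
BASE_PORT = 6379   # assumed base port
NUM_SHARDS = 4     # assumed: one Redis process per vCPU

def port_for_key(key: str) -> int:
    """Pick the Redis port for a key by hashing it into a shard."""
    shard = zlib.crc32(key.encode("utf-8")) % NUM_SHARDS
    return BASE_PORT + shard
```

Because the hash is deterministic, reads and writes for the same key always land on the same Redis process, so the single-threaded consistency guarantee still holds within each shard.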
As a side note: a single-process Redis ElastiCache deployment on a multi-core machine should always underperform a Memcached ElastiCache deployment on the same instance size, since Memcached is multi-threaded. With multi-tenancy, Redis tends to be the winner.
Update:
Amazon now provides a separate metric for your Redis process's CPU, EngineCPUUtilization. You no longer need to estimate it with the shoddy multiplication, but multi-tenancy is still not an option.
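For context, the "shoddy multiplication" was scaling the host-level CPUUtilization metric (which averages across all vCPUs) by the core count to approximate how busy the single Redis thread was. A sketch, with an assumed function name:

```python
def estimated_engine_cpu(host_cpu_pct: float, num_vcpus: int) -> float:
    """Approximate the Redis thread's core usage from host-level CPU.

    CPUUtilization averages across all vCPUs, but Redis only runs on
    one of them, so multiplying by the vCPU count recovers a rough
    per-core figure (capped at 100%).
    """
    return min(host_cpu_pct * num_vcpus, 100.0)
```

For example, on a two-vCPU cache.r3.large, a host reading of 45% implies the Redis thread is running at roughly 90% of its single core.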