Redis Cache Pattern Review
Review Redis caching logic for key design flaws, invalidation bugs, and operational footguns.
Tags: Cursor, redis, caching, cache invalidation, ttl, connection pooling, stampede, serialization, key design, database
How to Use
Save the agent definition to .cursor/rules/redis-cache-pattern-review.mdc with a glob pattern targeting your Redis client files, e.g., **/cache/**, **/redis/**, **/*cache*.ts, **/*cache*.py, **/*redis*.java. Activate it by opening any file matching the glob or by typing @redis-cache-pattern-review in Cursor chat. To verify installation, open Cursor Settings, then Rules, and confirm redis-cache-pattern-review appears in the list. For on-demand review of a specific file, open the file and ask Cursor to review the caching logic. The agent focuses on application-level Redis usage patterns, not Redis server configuration or cluster topology.
Agent Definition
Review Redis caching code for the mistakes that cause production outages: unbounded key growth, missing TTLs, cache stampede under load, silent serialization mismatches, and broken invalidation logic. This targets application-level Redis usage in any language, with emphasis on the patterns that bite teams after they ship.
Key Design
Examine every key construction site. Keys must be namespaced, deterministic, and bounded. Flag these problems:
Critical -- Unbounded key cardinality. If a key includes a user-generated value (email, free-text input, request ID) without hashing or truncation, the keyspace grows without limit. Prefer hashed segments: cache:user:{sha256(email)[:12]} over cache:user:{email} (see the key-builder sketch after this list).
Critical -- Missing TTL on write. Every SET, HSET, or cache-write wrapper must include an explicit TTL. "We will clean it up later" is how you end up with 40 GB of dead keys. If the caller omits a TTL, the wrapper must enforce a default (see the wrapper sketch after this list).
Warning -- Key collisions from weak namespacing. Keys like "user:123" collide across features. Require a feature prefix: auth:user:123 vs billing:user:123.
Suggestion -- Overly long keys. Redis keeps every key in memory, so keys beyond 128 bytes waste RAM. Truncate or hash verbose segments.
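A minimal key-builder sketch tying these rules together, assuming a Python codebase; the cache_key name, the auth namespace, and the 12-character digest length are illustrative choices, not part of the agent definition:

```python
import hashlib

def cache_key(namespace: str, entity: str, raw_id: str) -> str:
    """Build a namespaced, deterministic, bounded cache key.

    User-supplied values (emails, free text) are hashed so both key
    cardinality and key length stay bounded.
    """
    digest = hashlib.sha256(raw_id.encode("utf-8")).hexdigest()[:12]
    return f"{namespace}:{entity}:{digest}"

# cache_key("auth", "user", "alice@example.com") -> "auth:user:<12 hex chars>"
```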
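And a sketch of a write wrapper that enforces a default TTL when the caller omits one; cache_set, DEFAULT_TTL, and the redis-py client are assumptions for illustration:

```python
import json
import redis

DEFAULT_TTL = 300  # seconds; tune per workload

r = redis.Redis(decode_responses=True)

def cache_set(key: str, value: dict, ttl: int | None = None) -> None:
    # ex= puts an expiry on every write, so no caller can create an
    # immortal key by forgetting the TTL.
    r.set(key, json.dumps(value), ex=ttl or DEFAULT_TTL)
```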
Serialization
Critical -- Mismatched serializer between write and read paths. If one path uses JSON and another uses msgpack or a language-native pickle/marshal, reads silently return garbage or throw opaque errors. Verify that the same serialization format is used on both sides, ideally via a single shared serializer (see the sketch below). Prefer explicit JSON unless benchmarks justify binary formats.
Warning -- Storing language-specific objects (Python dataclasses, Java POJOs) without a defined schema. When the class definition changes, cached values become undeserializable. Include a version prefix in the value or key: v2:cache:product:456.
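One way to satisfy both checks is a shared serializer pair that every read and write path goes through, sketched here with explicit JSON and a version prefix; SCHEMA_VERSION and the v{n}: convention are illustrative:

```python
import json

SCHEMA_VERSION = 2

def cache_dumps(obj: dict) -> str:
    # All writers use this function, so the format cannot silently
    # diverge between code paths.
    return f"v{SCHEMA_VERSION}:{json.dumps(obj)}"

def cache_loads(raw: str) -> dict | None:
    prefix, _, payload = raw.partition(":")
    if prefix != f"v{SCHEMA_VERSION}":
        return None  # stale schema reads as a cache miss and repopulates
    return json.loads(payload)
```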
Cache Invalidation
Critical -- Write-through inconsistency. If the code updates the database and then deletes or updates the cache in a separate step without a transaction or retry, a failure between the two leaves stale data. Prefer delete-on-write (delete the cache key and let the next read repopulate) over update-on-write unless latency requirements demand otherwise; see the first sketch after this list.
Critical -- No invalidation path exists. If cached data has a source of truth that can change (database row, external API), but no code path deletes or refreshes the cache key on mutation, the cache will serve stale data indefinitely. Every cache key must have a documented invalidation trigger.
Warning -- Bulk invalidation via KEYS or SCAN in the hot path. KEYS blocks the Redis event loop. SCAN is safe for background jobs but too slow for request-time invalidation. Use explicit key construction so you can delete by known key, or use a cache tag/generation counter pattern; the generation-counter variant is sketched below.
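A delete-on-write sketch, assuming redis-py; update_user_row is a hypothetical stand-in for whatever persistence call owns the source of truth:

```python
import logging
import redis

log = logging.getLogger(__name__)
r = redis.Redis(decode_responses=True)

def update_user(user_id: int, fields: dict) -> None:
    update_user_row(user_id, fields)  # commit the source of truth first
    try:
        r.delete(f"auth:user:{user_id}")  # the next read repopulates
    except redis.RedisError:
        # Stale data now survives until the TTL; make the failure
        # visible without failing the write itself.
        log.warning("cache invalidation failed for user %s", user_id)
```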
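And a generation-counter sketch for bulk invalidation without KEYS or SCAN; the billing:catalog namespace is illustrative:

```python
import redis

r = redis.Redis(decode_responses=True)

def catalog_key(product_id: int) -> str:
    # The current generation is baked into every key, so bumping the
    # counter orphans the whole old generation in O(1). Orphaned keys
    # expire via their TTLs.
    gen = r.get("billing:catalog:gen") or "0"
    return f"billing:catalog:g{gen}:product:{product_id}"

def invalidate_catalog() -> None:
    r.incr("billing:catalog:gen")  # no keyspace scan required
```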
Stampede and Thundering Herd
Critical -- No stampede protection on hot keys. When a popular cache key expires, hundreds of concurrent requests hit the database simultaneously. Require one of: mutex/lock-based recomputation (SETNX a lock key; one caller rebuilds while the others wait or serve stale), probabilistic early expiration (refresh the key before the TTL hits, using a jitter window), or background refresh (a worker refreshes keys before expiry, so requests never miss). A lock-based variant is sketched after this list.
Warning -- Fixed TTL without jitter on bulk-loaded keys. If 10,000 keys all expire at the same second, Redis and the backing store spike simultaneously. Add random jitter: base_ttl + random(0, base_ttl * 0.1).
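A lock-based recomputation sketch that also applies TTL jitter, assuming redis-py; rebuild_from_db, the 10-second lock TTL, and the retry delay are illustrative assumptions:

```python
import json
import random
import time
import redis

r = redis.Redis(decode_responses=True)
BASE_TTL = 300

def get_or_rebuild(key: str):
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    # SETNX-style lock: exactly one caller wins and hits the database.
    if r.set(f"{key}:lock", "1", nx=True, ex=10):
        value = rebuild_from_db(key)  # hypothetical loader
        ttl = BASE_TTL + random.randint(0, BASE_TTL // 10)  # jitter
        r.set(key, json.dumps(value), ex=ttl)
        r.delete(f"{key}:lock")
        return value
    # Losers back off briefly and retry instead of stampeding the DB.
    time.sleep(0.05)
    return get_or_rebuild(key)
```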
Connection and Operational Patterns
Critical -- No connection pooling. Opening a new Redis connection per request adds latency and risks exhausting file descriptors. Use a connection pool (ioredis pool, redis-py ConnectionPool, Jedis pool, etc.); see the pooling sketch after this list.
Warning -- Fire-and-forget writes without error handling. If a cache write fails silently, the app works but performance degrades without any signal. Log cache write failures at Warning level, and never let a cache failure crash the request; degrade gracefully, as the same sketch shows.
Warning -- Using SELECT to switch databases in production. Multi-database Redis is a legacy pattern that complicates monitoring and clustering. Use key namespacing instead.
Suggestion -- No health check or circuit breaker. If Redis goes down, every request pays the cache-miss penalty plus a connection timeout. Wrap Redis calls in a circuit breaker that falls back to direct database access after repeated failures; a minimal breaker is sketched below.
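A pooling and graceful-degradation sketch with redis-py; the pool size, host, and log wording are illustrative:

```python
import logging
import redis

log = logging.getLogger(__name__)

# One pool per process, created at startup and shared by all requests.
pool = redis.ConnectionPool(
    host="localhost", port=6379, max_connections=50, decode_responses=True
)
r = redis.Redis(connection_pool=pool)

def safe_cache_set(key: str, value: str, ttl: int = 300) -> None:
    try:
        r.set(key, value, ex=ttl)
    except redis.RedisError:
        # Degrade, don't crash -- but leave a signal for operators.
        log.warning("cache write failed for %s", key, exc_info=True)
```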
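And a deliberately minimal circuit-breaker sketch: after repeated Redis failures, treat every read as a miss for a cooldown window so requests go straight to the database. The failure threshold and cooldown are arbitrary examples:

```python
import time
import redis

r = redis.Redis(decode_responses=True)
_failures = 0
_open_until = 0.0  # monotonic timestamp while the breaker is open

def breaker_get(key: str):
    global _failures, _open_until
    if time.monotonic() < _open_until:
        return None  # breaker open: skip Redis, caller falls back to DB
    try:
        value = r.get(key)
        _failures = 0  # any success closes the breaker
        return value
    except redis.RedisError:
        _failures += 1
        if _failures >= 5:
            _open_until = time.monotonic() + 30  # 30s cooldown
        return None
```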
Data Structure Misuse
Warning -- Storing large blobs (over 1MB) in a single key. Large values cause latency spikes and block other operations during transfer. Break large payloads into chunks or store them in object storage with a Redis pointer.
Warning -- Using plain GET/SET when HSET fits. If you cache a user profile and only need one field on most reads, storing and fetching the whole object wastes bandwidth. Use a hash and HGET the specific field (see the sketch after this list).
Suggestion -- Not using MGET/MSET for batch operations. Multiple sequential GET calls add round-trip latency. Batch them.
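Both patterns in a short redis-py sketch; the profile fields and key names are illustrative:

```python
import redis

r = redis.Redis(decode_responses=True)

# Hash storage: hot reads fetch one field instead of the whole object.
r.hset("auth:user:123:profile", mapping={"name": "Ada", "plan": "pro"})
plan = r.hget("auth:user:123:profile", "plan")

# One round trip for many keys instead of N sequential GETs.
values = r.mget(["auth:user:1", "auth:user:2", "auth:user:3"])
```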