Most developers use Redis for one thing: storing session data or caching query results as simple key-value pairs. That is using a Swiss Army knife as a butter knife. Redis ships with a family of distinct data structures, each designed for specific access patterns. Let me walk through the ones that solve real problems in ways you probably did not realize.

Sorted Sets: Leaderboards and Rate Limiting

A sorted set stores members with a floating-point score and keeps them ordered by score. Insertions, removals, and rank lookups are O(log n).

Leaderboards are the obvious use case:

ZADD leaderboard 1420 "user:chirag"
ZADD leaderboard 1380 "user:priya"
ZADD leaderboard 1510 "user:rajan"

ZREVRANGE leaderboard 0 9 WITHSCORES
# Returns top 10 users with scores

ZREVRANK leaderboard "user:chirag"
# Returns rank (0-indexed, highest score first)

Building this in SQL means a window function over the whole table or careful index maintenance. In Redis it is three commands.
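The mechanics are easy to sketch without a server. This is a toy in-memory model of the sorted-set semantics, not the redis-py API: real Redis pairs a hash with a skiplist to get O(log n) updates, while here `bisect` on a plain Python list stands in.

```python
import bisect

class Leaderboard:
    """Toy sketch of sorted-set semantics (ZADD / ZREVRANGE / ZREVRANK).

    Illustrative only: Redis uses a skiplist plus a hash internally.
    """
    def __init__(self):
        self._scores = {}    # member -> score
        self._sorted = []    # ascending list of (score, member)

    def zadd(self, member, score):
        if member in self._scores:   # remove the old entry before re-inserting
            old = (self._scores[member], member)
            self._sorted.pop(bisect.bisect_left(self._sorted, old))
        self._scores[member] = score
        bisect.insort(self._sorted, (score, member))

    def top(self, n):                # like ZREVRANGE 0 n-1 WITHSCORES
        return [(m, s) for s, m in reversed(self._sorted[-n:])]

    def revrank(self, member):       # like ZREVRANK: 0 = highest score
        idx = bisect.bisect_left(self._sorted, (self._scores[member], member))
        return len(self._sorted) - 1 - idx

lb = Leaderboard()
lb.zadd("user:chirag", 1420)
lb.zadd("user:priya", 1380)
lb.zadd("user:rajan", 1510)
print(lb.top(2))                  # [('user:rajan', 1510), ('user:chirag', 1420)]
print(lb.revrank("user:chirag"))  # 1
```

Note that updating a member's score means removing the old entry first; Redis does the same bookkeeping for you inside ZADD.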

Rate limiting is the less obvious use case. Use timestamps as scores:

# Sliding window rate limiter (pseudocode; now, window, limit are variables)
ZADD requests:user:123 1714000000.5 "req:abc"
ZREMRANGEBYSCORE requests:user:123 -inf (now - window)
count = ZCARD requests:user:123
EXPIRE requests:user:123 window    # let idle keys clean themselves up

if count > limit: reject

You slide the window by removing old entries. This is more accurate than fixed-window counters, which can admit up to 2x the limit around window boundaries.

Streams: Durable Event Log

Redis Streams (added in Redis 5.0) implement an append-only log with consumer groups. Think of it as a simpler Kafka, built into a server you already run.

XADD events * action "page_view" user_id "123" path "/posts/redis"
# Returns ID like 1714000000000-0

XREAD COUNT 10 STREAMS events 0
# Read first 10 entries

XGROUP CREATE events analytics $ MKSTREAM
XREADGROUP GROUP analytics worker1 COUNT 5 STREAMS events >
# Consumer group - each message delivered to one consumer
XACK events analytics 1714000000000-0
# Acknowledge processed

Streams give you pub/sub with persistence, consumer groups with at-least-once delivery, and message replay. For event logging, audit trails, and simple task queues, this replaces a dedicated message broker.
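To make the delivery semantics concrete, here is a toy in-memory sketch of the XADD/XREADGROUP/XACK lifecycle, invented names throughout. Real Redis keeps a per-consumer pending entries list (the PEL) so another worker can claim a crashed consumer's unacknowledged messages; this sketch collapses that into one pending dict.

```python
class MiniStream:
    """Toy sketch of stream + consumer-group semantics. Not a Redis client."""
    def __init__(self):
        self.entries = []        # (id, fields), append-only
        self.next_seq = 0
        self.delivered = 0       # group's last-delivered index
        self.pending = {}        # id -> fields, delivered but not yet acked

    def xadd(self, fields):
        entry_id = f"{self.next_seq}-0"
        self.next_seq += 1
        self.entries.append((entry_id, fields))
        return entry_id

    def xreadgroup(self, count):
        batch = self.entries[self.delivered:self.delivered + count]
        self.delivered += len(batch)
        for entry_id, fields in batch:
            self.pending[entry_id] = fields     # awaiting XACK
        return batch

    def xack(self, entry_id):
        return 1 if self.pending.pop(entry_id, None) is not None else 0

s = MiniStream()
first = s.xadd({"action": "page_view", "path": "/posts/redis"})
s.xadd({"action": "page_view", "path": "/"})
batch = s.xreadgroup(count=5)
print(len(batch), len(s.pending))   # 2 2
s.xack(first)
print(len(s.pending))               # 1
```

The pending dict is what buys you at-least-once delivery: anything delivered but never acked is still known to the server and can be redelivered.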

HyperLogLog: Approximate Unique Counts

Counting unique visitors is expensive. Storing a set of every user ID that visited a page does not scale.

HyperLogLog uses probabilistic counting to estimate cardinality in O(1) space - at most 12KB per counter, regardless of how many unique items you add (small counters use an even more compact sparse encoding).

PFADD visitors:2026-04-23 "user:123" "user:456" "user:789"
PFCOUNT visitors:2026-04-23
# Returns approximate count, 0.81% standard error

# Merge multiple day counts
PFMERGE visitors:april visitors:2026-04-01 visitors:2026-04-02

For analytics where an approximate answer is fine - “we had roughly 50,000 unique visitors today” - HyperLogLog uses kilobytes instead of megabytes. The 0.81% standard error is negligible for a dashboard.
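A quick back-of-envelope comparison in Python shows the trade. The 36-bytes-per-ID figure is an assumption (roughly a UUID string); 12KB is the dense HLL representation.

```python
# Memory comparison for unique-visitor counting: exact set vs HyperLogLog.
unique_visitors = 50_000
bytes_per_id = 36                                  # assumed: UUID-sized IDs

exact_set_bytes = unique_visitors * bytes_per_id   # store every ID exactly
hll_bytes = 12 * 1024                              # dense HLL, fixed size

print(f"exact set:   ~{exact_set_bytes / 1_000_000:.1f} MB")  # ~1.8 MB
print(f"hyperloglog: {hll_bytes / 1024:.0f} KB")              # 12 KB
print(f"typical error: ±{round(0.0081 * unique_visitors)} visitors")
```

And unlike the exact set, the HLL stays 12KB whether you count fifty thousand visitors or fifty million.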

Bitmaps: Per-User Feature Flags and Presence

Bitmaps let you set individual bits on a string value. Here: one key per month, one bit per user, with the user ID as the bit offset.

# Record that user 123 was active this month
SETBIT active_users:2026-04 123 1

# Check whether user 123 was active
GETBIT active_users:2026-04 123

# Count active users this month
BITCOUNT active_users:2026-04

# Users active in both months
BITOP AND active_both active_users:2026-03 active_users:2026-04
BITCOUNT active_both

100 million users require 12.5MB - one bit each. The entire monthly active user count for a large app fits in memory, and BITCOUNT over it runs in milliseconds. This is how analytics platforms track daily active users efficiently.
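The bit operations are simple enough to sketch in a few lines of Python. This `Bitmap` class is illustrative, not a Redis client; like Redis, it treats the value as a growable byte string with offset 0 as the most significant bit of the first byte.

```python
class Bitmap:
    """Sketch of SETBIT / GETBIT / BITCOUNT over a byte string."""
    def __init__(self):
        self.buf = bytearray()

    def setbit(self, offset, value):
        byte, bit = divmod(offset, 8)
        if byte >= len(self.buf):                # grow on demand, zero-filled
            self.buf.extend(b"\x00" * (byte - len(self.buf) + 1))
        mask = 1 << (7 - bit)                    # MSB-first, like Redis
        if value:
            self.buf[byte] |= mask
        else:
            self.buf[byte] &= ~mask

    def getbit(self, offset):
        byte, bit = divmod(offset, 8)
        if byte >= len(self.buf):
            return 0
        return (self.buf[byte] >> (7 - bit)) & 1

    def bitcount(self):
        return sum(bin(b).count("1") for b in self.buf)

def bitop_and(a, b):                             # like BITOP AND
    out = Bitmap()
    out.buf = bytearray(x & y for x, y in zip(a.buf, b.buf))
    return out

march, april = Bitmap(), Bitmap()
for user_id in (45, 123, 900):
    march.setbit(user_id, 1)
for user_id in (123, 900):
    april.setbit(user_id, 1)
both = bitop_and(march, april)
print(both.bitcount())   # 2 users active in both months
```

The memory math falls out directly: ceil(max_user_id / 8) bytes, so 100,000,000 / 8 = 12.5MB.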

Hashes: Object Storage

Hashes store multiple fields per key. Better than storing serialized JSON when you frequently access individual fields.

HSET user:123 name "Chirag" email "[email protected]" plan "pro"
HGET user:123 plan
HMGET user:123 name email
HINCRBY user:123 credits -10

HINCRBY is atomic. No read-modify-write cycle. No race conditions. Decrementing credits or incrementing a counter in a hash is safe under concurrent access.
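You can see why this matters by writing out one bad interleaving by hand. This is a pure-Python illustration of the lost-update problem, not Redis code: two clients each read 100, each write back their own result, and one decrement vanishes. HINCRBY performs the read and write as a single server-side step, so this interleaving cannot occur.

```python
store = {"user:123": {"credits": 100}}

# Read-modify-write race: both clients read before either writes.
a_read = store["user:123"]["credits"]       # client A sees 100
b_read = store["user:123"]["credits"]       # client B sees 100
store["user:123"]["credits"] = a_read - 10  # A writes 90
store["user:123"]["credits"] = b_read - 10  # B overwrites with 90

print(store["user:123"]["credits"])  # 90 - one decrement was lost

def hincrby(store, key, field, delta):
    """Single-step increment, mimicking what HINCRBY does server-side."""
    store[key][field] += delta
    return store[key][field]

store["user:123"]["credits"] = 100
hincrby(store, "user:123", "credits", -10)
hincrby(store, "user:123", "credits", -10)
print(store["user:123"]["credits"])  # 80 - both decrements applied
```

In real deployments the race window is between your app's GET and SET over the network, which is exactly the span HINCRBY eliminates.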

The Practical Combinations

Real applications combine these:

Problem                        Solution
-----------------------------  ------------------------------
Real-time leaderboard          Sorted set
Unique visitor count           HyperLogLog
Rate limiting                  Sorted set with TTL
Task queue                     List (LPUSH/BRPOP) or Streams
Session storage                Hash
Feature flags per user         Bitmap
Event log with replay          Streams
Caching with expiry            String with EXPIRE

What People Get Wrong

Using Redis for things it should not do:

  • Primary data storage for complex relational data - Redis is not a relational database
  • Full-text search - use Postgres FTS or Elasticsearch
  • Large blobs - not designed for it, memory is expensive

Redis is a complement to your primary database, not a replacement. The sweet spot is anything that benefits from microsecond latency, atomic operations on data structures, or approximate algorithms that trade memory for accuracy.

Bottom Line

The key-value cache is one Redis use case. Sorted sets for leaderboards and rate limiting, Streams for event logs, HyperLogLog for approximate counts, and Bitmaps for per-user tracking all solve real problems faster and more reliably than custom implementations. Before reaching for a specialized service, check whether Redis already has the right data structure for your problem.