🎯 TRUE DROP-IN REPLACEMENT powered by Valkey GLIDE's high-performance Rust core
Production-Ready Core Features - BullMQ, Socket.IO, Express Sessions, JSON module fully validated
A production-ready ioredis replacement that seamlessly integrates Valkey GLIDE's high-performance Rust core with your existing Node.js applications. Zero code changes required for core functionality - simply change your import statement and gain the benefits of GLIDE's resilient, high-performance architecture while maintaining API compatibility.
This project is built exclusively on Valkey GLIDE - a high-performance, language-independent Valkey client library with a Rust core and a Node.js wrapper.
v0.5.0 Major Updates:
- ✅ Complete Architecture Refactor - Rebuilt for optimal GLIDE integration
- ✅ Binary Pub/Sub Support - Full support for binary data in real-time applications
- ✅ Enhanced Connection Management - Improved auto-connect and cleanup logic
- ✅ ES Module Test Suite - Complete migration to modern ES modules
- ✅ 35+ Test Improvements - Enhanced reliability and test infrastructure
| Component | Status | Test Coverage | Production Use |
|---|---|---|---|
| Valkey Data Types | ✅ Production Ready | String (37), Hash (13), List (16), Set (19), ZSet (14) | Core operations validated |
| ValkeyJSON Module | ✅ Production Ready | 29 commands tested | Document storage ready |
| Bull/BullMQ Integration | ✅ Production Ready | 10/10 integration tests | Job queues validated |
| Express Sessions | ✅ Production Ready | 10/10 session tests | Web apps validated |
| Socket.IO | ✅ Production Ready | 7/7 real-time tests | Live apps validated |
| Connection Management | ✅ Production Ready | 24 pipeline tests | Enterprise ready |
| Cluster Support | ✅ Production Ready | All cluster operations tested | Full Bull/BullMQ compatibility |
- All Valkey Data Types: String, Hash, List, Set, ZSet operations - fully functional
- Bull/BullMQ Job Queues: Complete integration - production ready
- Express Sessions: Session storage with connect-redis - production ready
- Socket.IO Real-time: Cross-instance messaging - production ready
- JSON Document Storage: 29 ValkeyJSON commands - production ready
- Cluster Operations: Complete cluster support with sharded pub/sub, Bull/BullMQ integration
- Complex Lua Scripts: Full Lua scripting support with EVAL, EVALSHA, defineCommand
- Enhanced ZSET Operations: Complete ZSET support with proper WITHSCORES formatting
- Pure GLIDE Architecture: Built exclusively on Valkey GLIDE APIs (no ioredis dependency)
- High Performance: Leverages GLIDE's Rust core for optimal performance
- TypeScript Ready: Full type safety with comprehensive interfaces
- Zero Migration: Change import statement only - your existing code works
npm install valkey-glide-ioredis-adapter
Requirements:
- Node.js 18+ (ES2022 support)
- Valkey 6.0+ or Redis 6.0+ server
- TypeScript 4.5+ (for TypeScript projects)
Simply change your import - no other code changes needed:
// Before (ioredis)
import Redis from 'ioredis';
// After (GLIDE adapter)
import { Redis } from 'valkey-glide-ioredis-adapter';
// Everything else stays exactly the same!
const client = new Redis({
host: 'localhost',
port: 6379
});
All standard database operations work identically to ioredis:
// String operations
await client.set('user:name', 'John Doe');
await client.setex('session:abc', 3600, 'session_data'); // with TTL
const name = await client.get('user:name');
// Hash operations
await client.hset('user:123', 'name', 'Alice', 'age', '30');
await client.hset('user:123', { email: '[email protected]', city: 'NYC' });
const userData = await client.hgetall('user:123');
// List operations
await client.lpush('notifications', 'Welcome!', 'New message');
const notification = await client.rpop('notifications');
const allNotifications = await client.lrange('notifications', 0, -1);
// Set operations
await client.sadd('tags', 'javascript', 'nodejs', 'valkey');
const allTags = await client.smembers('tags');
const hasTag = await client.sismember('tags', 'javascript');
// Sorted Set operations with proper WITHSCORES handling
await client.zadd('leaderboard', 100, 'player1', 85, 'player2', 92, 'player3');
const topPlayers = await client.zrange('leaderboard', 0, 2, 'WITHSCORES');
// Returns: ['player2', '85', 'player3', '92', 'player1', '100']
// Transactions (MULTI/EXEC)
const pipeline = client.multi();
pipeline.set('counter', 1);
pipeline.incr('counter');
pipeline.get('counter');
const results = await pipeline.exec();
// Lua Scripts
const result = await client.eval(
'return redis.call("incr", KEYS[1])',
1, // number of keys
'mycounter' // key
);
// Custom commands via defineCommand
client.defineCommand('myCommand', {
lua: 'return redis.call("get", KEYS[1])',
numberOfKeys: 1
});
await client.myCommand('somekey');
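// Supplemental sketch (not part of the original examples): EVALSHA with a cached script.
// Assumes evalsha() is exposed, since the feature list claims EVAL/EVALSHA support.
import { createHash } from 'node:crypto';
const luaSource = 'return redis.call("incr", KEYS[1])';
await client.eval(luaSource, 1, 'mycounter');              // first EVAL caches the script server-side
const sha1 = createHash('sha1').update(luaSource).digest('hex');
await client.evalsha(sha1, 1, 'mycounter');                // later runs can reference it by SHA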
// Streams
await client.xadd('events', '*', 'user', 'john', 'action', 'login');
const messages = await client.xread('STREAMS', 'events', '0');
// Pub/Sub
await client.subscribe('news');
client.on('message', (channel, message) => {
console.log(`Received ${message} from ${channel}`);
});
await client.publish('news', 'Breaking: New Valkey adapter released!');
All ioredis constructor patterns are supported:
// Various connection methods
const client = new Redis(); // defaults to localhost:6379
const client = new Redis(6380); // port only
const client = new Redis(6379, 'localhost'); // port, host
const client = new Redis('redis://localhost:6379'); // connection URL
const client = new Redis('rediss://localhost:6380'); // TLS connection
// Full configuration object
const client = new Redis({
host: 'localhost',
port: 6379,
password: 'your-password',
db: 0,
connectTimeout: 10000,
lazyConnect: true,
retryDelayOnFailover: 100
});
Cluster operations work identically to ioredis cluster:
import { Cluster } from 'valkey-glide-ioredis-adapter';
const cluster = new Cluster([
{ host: '127.0.0.1', port: 7000 },
{ host: '127.0.0.1', port: 7001 },
{ host: '127.0.0.1', port: 7002 }
]);
// All same operations work on cluster
await cluster.set('key', 'value');
const value = await cluster.get('key');
// Sharded pub/sub (Valkey 7.0+)
await cluster.spublish('shard-channel', 'message');
All ioredis connection options are supported, plus GLIDE-specific enhancements:
| Option | Type | Default | Description |
|---|---|---|---|
| Basic Connection | | | |
| host | string | 'localhost' | Server hostname or IP address |
| port | number | 6379 | Server port number |
| username | string | - | Username for ACL authentication |
| password | string | - | Password for authentication |
| db | number | 0 | Database number (standalone only) |
| Connection Management | | | |
| connectTimeout | number | 10000 | Connection timeout in milliseconds |
| commandTimeout | number | 5000 | Command execution timeout |
| requestTimeout | number | 5000 | Request timeout for operations |
| lazyConnect | boolean | false | Don't connect immediately, wait for first command |
| keepAlive | boolean | true | Enable TCP keep-alive |
| family | number | 4 | IP version (4 or 6) |
| Retry & Error Handling | | | |
| retryDelayOnFailover | number | 100 | Retry delay during failover (ms) |
| maxRetriesPerRequest | number \| null | 3 | Max retries per command (null = unlimited) |
| enableReadyCheck | boolean | true | Check server ready state on connect |
| enableOfflineQueue | boolean | true | Queue commands when disconnected |
| TLS/Security | | | |
| tls | boolean | false | Enable TLS encryption |
| useTLS | boolean | false | Alternative TLS flag (same as tls) |
| Performance | | | |
| enableAutoPipelining | boolean | false | Automatically pipeline commands |
| maxLoadingTimeout | number | 0 | Max time to wait for server loading |
| keyPrefix | string | - | Prefix for all keys |
| Client Identity | | | |
| clientName | string | - | Client name for identification |
| 🚀 GLIDE-Specific Extensions | | | |
| readFrom | ReadFrom | - | Read preference strategy (GLIDE equivalent of scaleReads) |
| clientAz | string | - | Availability zone for AZ-affinity routing |
| enableEventBasedPubSub | boolean | false | Event-based pub/sub mode for binary data compatibility |
| inflightRequestsLimit | number | 1000 | Maximum number of concurrent in-flight requests |
Extended cluster configuration options:
| Option | Type | Default | Description |
|---|---|---|---|
| Cluster Behavior | | | |
| maxRedirections | number | 16 | ✅ ioredis-compatible: Max cluster redirections to follow |
| enableReadFromReplicas | boolean | false | Read from replica nodes (maps to GLIDE readFrom) |
| scaleReads | string | 'master' | ✅ ioredis-compatible: Read scaling strategy |
| readOnly | boolean | false | ✅ ioredis-compatible: Read-only cluster mode |
| Failover & Retry | | | |
| retryDelayOnFailover | number | 100 | ✅ ioredis-compatible: Retry delay during failover |
| Connection | | | |
| redisOptions | RedisOptions | {} | ✅ ioredis-compatible: Options applied to each node |
| lazyConnect | boolean | false | ✅ ioredis-compatible: Don't connect immediately |
| enableOfflineQueue | boolean | true | ✅ ioredis-compatible: Queue commands when unavailable |
GLIDE uses sophisticated exponential backoff with jitter. The adapter automatically configures these based on your ioredis settings:
How ioredis options map to GLIDE backoff:
- maxRetriesPerRequest → sets the retry count (null = 50 retries, a number = exact count)
- retryDelayOnFailover → converted to a jitter percentage (5-100%)
- connectTimeout → maps to GLIDE's connection timeout
- enableOfflineQueue: false → sets inflightRequestsLimit: 0 (no queuing)
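A rough sketch of this mapping; the function and return shape are illustrative rather than the adapter's actual internals, and the jitter formula is inferred from the 200 ms → 40% example shown later in this README:
// Illustrative sketch of the ioredis -> GLIDE backoff mapping described above
function toGlideBackoff(opts: {
  maxRetriesPerRequest?: number | null;
  retryDelayOnFailover?: number;
  enableOfflineQueue?: boolean;
}) {
  return {
    connectionBackoff: {
      // null means "retry forever" in ioredis; mapped to 50 retries here
      numberOfRetries: opts.maxRetriesPerRequest === null ? 50 : opts.maxRetriesPerRequest ?? 3,
      // retryDelayOnFailover (ms) becomes a jitter percentage, clamped to 5-100%
      jitterPercent: Math.min(100, Math.max(5, Math.round((opts.retryDelayOnFailover ?? 100) / 5))),
    },
    // enableOfflineQueue: false disables request queuing entirely
    inflightRequestsLimit: opts.enableOfflineQueue === false ? 0 : undefined,
  };
}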
| Feature | ioredis | GLIDE | Adapter Support |
|---|---|---|---|
| Read Scaling | scaleReads: 'master'/'slave'/'all' | readFrom: Primary/Replica | ✅ Both supported |
| Request Queuing | enableOfflineQueue: boolean | inflightRequestsLimit: number | ✅ Mapped automatically |
| Connection Timeout | connectTimeout: ms | connectionTimeout: ms | ✅ Direct mapping |
| Retry Strategy | maxRetriesPerRequest + retryDelayOnFailover | connectionBackoff: {numberOfRetries, jitterPercent} | ✅ Advanced mapping |
| AZ Affinity | ❌ Not available | ✅ clientAz: string | ✅ Supported via clientAz |
| Binary Pub/Sub | ❌ Limited support | ✅ Native + TCP modes | ✅ Supported via enableEventBasedPubSub |
// Basic connection
const client = new Redis({ host: 'localhost', port: 6379 });
// With authentication
const client = new Redis({
host: 'prod-server.example.com',
port: 6380,
username: 'myapp',
password: 'secure-password',
tls: true
});
// GLIDE-specific features (requires Valkey 8.0+)
const client = new Redis({
host: 'localhost',
port: 6379,
readFrom: ReadFrom.AzAffinity, // AZ-aware read preference
clientAz: 'us-west-2a', // Availability zone affinity
enableEventBasedPubSub: true // Binary pub/sub compatibility
});
// Performance-tuned configuration
const client = new Redis({
host: 'localhost',
port: 6379,
connectTimeout: 5000,
commandTimeout: 3000,
maxRetriesPerRequest: 5,
retryDelayOnFailover: 50,
enableAutoPipelining: true,
lazyConnect: true
});
// Advanced backoff configuration (enterprise-grade)
const enterprise = new Redis({
host: 'prod-cluster.company.com',
port: 6379,
password: 'secure-password',
maxRetriesPerRequest: 15, // Maps to connectionBackoff.numberOfRetries: 15
retryDelayOnFailover: 200, // Maps to connectionBackoff.jitterPercent: 40%
connectTimeout: 8000, // Maps to advancedConfiguration.connectionTimeout
enableOfflineQueue: false, // Maps to inflightRequestsLimit: 0 (no queuing)
readFrom: ReadFrom.Replica, // Prefer replica reads
clientAz: 'us-east-1a' // Same-AZ affinity for lower latency
});
// Cluster configuration
const cluster = new Cluster([
{ host: '10.0.1.1', port: 7000 },
{ host: '10.0.1.2', port: 7001 },
{ host: '10.0.1.3', port: 7002 }
], {
enableReadFromReplicas: true,
maxRedirections: 10,
retryDelayOnFailover: 100,
redisOptions: {
password: 'cluster-password',
connectTimeout: 5000,
maxRetriesPerRequest: 8
}
});
Configure using environment variables:
# Basic connection
VALKEY_HOST=localhost
VALKEY_PORT=6379
VALKEY_PASSWORD=your-password
VALKEY_USERNAME=your-username
# TLS
VALKEY_TLS=true
# Cluster nodes (comma-separated)
VALKEY_CLUSTER_NODES=10.0.1.1:7000,10.0.1.2:7001,10.0.1.3:7002
# Testing with modules
VALKEY_BUNDLE_HOST=localhost
VALKEY_BUNDLE_PORT=6380
ENABLE_CLUSTER_TESTS=true
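These variables are not necessarily picked up by the adapter automatically; a minimal sketch of wiring them into a client yourself:
// Sketch: building a client from the environment variables above
import { Redis } from 'valkey-glide-ioredis-adapter';

const client = new Redis({
  host: process.env.VALKEY_HOST ?? 'localhost',
  port: Number(process.env.VALKEY_PORT ?? 6379),
  username: process.env.VALKEY_USERNAME,
  password: process.env.VALKEY_PASSWORD,
  tls: process.env.VALKEY_TLS === 'true',
});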
Q: "Connection timeout" or "Unable to connect"
// ❌ Problem: Default settings too aggressive
const client = new Redis({ host: 'slow-server.com' });
// ✅ Solution: Increase timeouts
const client = new Redis({
host: 'slow-server.com',
connectTimeout: 10000, // 10 seconds
commandTimeout: 5000, // 5 seconds
lazyConnect: true // Connect on first command
});
Q: "ECONNREFUSED" errors
// ✅ Check server is running and port is correct
const client = new Redis({
host: 'localhost',
port: 6379,
retryDelayOnFailover: 1000,
maxRetriesPerRequest: 5
});
client.on('error', (err) => {
console.error('Connection error:', err.message);
});
Q: "Unknown command 'JSON.SET'" - ValkeyJSON not working
# ✅ Solution: Use valkey-bundle with modules
docker run -d -p 6379:6379 valkey/valkey-bundle:latest
# ✅ Or check module loading
redis-cli MODULE LIST
Q: JSON commands return "WRONGTYPE" errors
// ❌ Problem: Using JSON commands on non-JSON keys
await client.set('key', 'string-value');
await client.jsonGet('key', '$'); // Error!
// ✅ Solution: Use correct data types
await client.jsonSet('json-key', '$', { name: 'John' });
await client.jsonGet('json-key', '$.name'); // Works!
Q: "CLUSTERDOWN" or "MOVED" errors
// ❌ Problem: Insufficient redirections or timeouts
const cluster = new Cluster(nodes, {
maxRedirections: 3,
retryDelayOnFailover: 50
});
// ✅ Solution: Increase cluster tolerance
const cluster = new Cluster(nodes, {
maxRedirections: 16, // Default is sufficient
retryDelayOnFailover: 100, // Allow failover time
enableOfflineQueue: true // Queue commands during failover
});
Q: Bull/BullMQ not working with cluster
// ✅ Use createClient factory pattern
import { Cluster } from 'valkey-glide-ioredis-adapter';
import { Queue } from 'bullmq';
const queue = new Queue('jobs', {
connection: {
createClient: (type) => Cluster.createClient(type, {
nodes: [{ host: '127.0.0.1', port: 7000 }]
})
}
});
Q: Commands feel slower than ioredis
// ❌ Problem: Not leveraging GLIDE optimizations
const client = new Redis({
enableAutoPipelining: false,
maxRetriesPerRequest: 20
});
// ✅ Solution: Optimize for GLIDE
const client = new Redis({
enableAutoPipelining: true, // Let GLIDE optimize pipelining
maxRetriesPerRequest: 5, // GLIDE has better backoff
lazyConnect: true, // Faster startup
inflightRequestsLimit: 2000 // Higher throughput
});
Q: High memory usage with large datasets
// ✅ Use streaming for large operations
// scanStream() yields batches of keys on each iteration
for await (const keys of client.scanStream({ match: 'prefix:*', count: 100 })) {
  // Process each batch of keys
  if (keys.length) await client.del(...keys);
}
Q: Type errors with commands
// ❌ Problem: Missing types
const result = client.zrange('key', 0, -1, 'WITHSCORES');
// ✅ Solution: Import proper types
import { Redis } from 'valkey-glide-ioredis-adapter';
const client = new Redis();
const result: string[] = await client.zrange('key', 0, -1, 'WITHSCORES');
# Enable GLIDE debug output
DEBUG=valkey-glide:* node your-app.js
# Check connection status
client.on('connect', () => console.log('✅ Connected'));
client.on('error', (err) => console.error('❌ Error:', err));
client.on('ready', () => console.log('🎯 Ready for commands'));
// Quick connection test
async function testConnection() {
const client = new Redis({ host: 'localhost', port: 6379 });
try {
await client.ping();
console.log('✅ Connection successful');
// Test basic operations
await client.set('test-key', 'test-value');
const value = await client.get('test-key');
console.log('✅ Basic operations work:', value);
await client.del('test-key');
console.log('✅ All tests passed');
} catch (error) {
console.error('❌ Connection failed:', error.message);
} finally {
await client.quit();
}
}
// Monitor command performance
const client = new Redis();
const originalSendCommand = client.sendCommand;
client.sendCommand = function(command) {
const start = Date.now();
const promise = originalSendCommand.call(this, command);
promise.finally(() => {
const duration = Date.now() - start;
if (duration > 100) { // Log slow commands
console.warn(`Slow command: ${command.name} took ${duration}ms`);
}
});
return promise;
};
- GitHub Issues: Report bugs at valkey-glide-ioredis-adapter/issues
- GLIDE Documentation: Valkey GLIDE Docs
- Migration Issues: Check the Migration Guide
// ❌ Search functionality temporarily disabled
// client.ft.create() // Not available
// client.ft.search() // Not available
// ✅ Use ValkeyJSON for document queries instead
await client.jsonSet('doc:1', '$', { name: 'John', age: 30 });
const results = await client.jsonGet('doc:1', '$.name');
Reason: GLIDE doesn't yet support valkey-bundle module syntax. Will be re-enabled when GLIDE adds support.
// ⚠️ ValkeyJSON requires valkey-bundle or manual module loading
const client = new Redis({ host: 'localhost', port: 6379 });
try {
await client.jsonSet('key', '$', { data: 'value' });
} catch (error) {
if (error.message.includes('unknown command')) {
console.error('❌ ValkeyJSON module not loaded on server');
// Fallback to regular JSON storage
await client.set('key', JSON.stringify({ data: 'value' }));
}
}
// ⚠️ GLIDE native pub/sub doesn't support binary data
// Use enableEventBasedPubSub for binary compatibility
const client = new Redis({
enableEventBasedPubSub: true // Required for Socket.IO, binary messages
});
// ✅ Now binary data works
await client.publish('channel', Buffer.from('binary-data'));
// ioredis: ZRANGE WITHSCORES returns flat array
// ['member1', '1', 'member2', '2']
// This adapter: Same format maintained for compatibility
const result = await client.zrange('key', 0, -1, 'WITHSCORES');
// Returns: ['member1', '1', 'member2', '2'] - Consistent with ioredis
// ⚠️ Some ioredis events may have different timing
client.on('connect', () => {
// Fired when TCP connection established
});
client.on('ready', () => {
// Fired when ready for commands (use this for business logic)
});
// ✅ Always use 'ready' event for application logic
// ⚠️ Error objects may have different properties
try {
await client.get('nonexistent');
} catch (error) {
// GLIDE errors may have different structure than ioredis errors
console.log('Error:', error.message); // Safe to use
// error.code may differ from ioredis
}
// ⚠️ GLIDE uses more memory for connection management
// But provides better performance for concurrent operations
// ✅ For memory-constrained environments:
const client = new Redis({
lazyConnect: true, // Reduce initial memory
inflightRequestsLimit: 500, // Limit concurrent requests
enableOfflineQueue: false // Disable command queuing
});
// ⚠️ First connection may be slower due to GLIDE initialization
// ✅ Use lazyConnect for faster application startup
const client = new Redis({
lazyConnect: true,
connectTimeout: 10000 // Allow time for GLIDE initialization
});
// First command triggers connection
await client.ping(); // May take longer on first call
// ⚠️ Tied to Valkey GLIDE release cycle
// Features depend on GLIDE capabilities
// Check compatibility: npm list @valkey/valkey-glide
// AZ affinity requires Valkey 8.0+
const client = new Redis({
clientAz: 'us-east-1a', // Availability Zone for affinity routing
readFrom: ReadFrom.AzAffinity // Requires Valkey 8.0+
});
// ⚠️ Requires Node.js 18+ (ES2022 support)
// GLIDE's Rust core has specific requirements
These limitations are expected to be resolved in future versions:
| Limitation | Status | Expected Resolution |
|---|---|---|
| ValkeySearch Module | ❌ Disabled | When GLIDE supports valkey-bundle syntax |
| Advanced RESP3 Features | | Future GLIDE releases |
| Custom Protocol Options | ❌ Not exposed | If needed by community |
| Direct Binary Commands | | Enhanced in future versions |
Despite limitations, the adapter maintains complete compatibility for:
- ✅ Core Operations: All data types (String, Hash, List, Set, ZSet)
- ✅ Production Libraries: Bull/BullMQ, Express Sessions, Socket.IO
- ✅ Cluster Operations: Full cluster support with Bull integration
- ✅ JSON Operations: 29 ValkeyJSON commands fully functional
- ✅ Transactions: MULTI/EXEC, WATCH/UNWATCH support (see the sketch after this list)
- ✅ Streaming: All stream operations (XADD, XREAD, etc.)
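A minimal optimistic-locking sketch for the WATCH/UNWATCH support above, assuming the ioredis-style watch()/unwatch() methods:
// Sketch only: optimistic locking with WATCH + MULTI/EXEC
await client.watch('balance');
const current = Number(await client.get('balance'));
if (Number.isNaN(current)) {
  await client.unwatch();                       // abandon without running EXEC
} else {
  const res = await client.multi()
    .set('balance', String(current + 100))
    .exec();
  if (res === null) {
    // 'balance' changed after WATCH, so EXEC aborted - retry if needed
  }
}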
Most limitations have practical workarounds:
// Instead of Search module → Use ValkeyJSON queries
// Instead of custom protocols → Use standard configuration
// Instead of complex binary ops → Use enableEventBasedPubSub
// Instead of bleeding-edge features → Use proven, stable APIs
Store and query JSON documents natively with full RedisJSON v2 compatibility:
import { Redis } from 'valkey-glide-ioredis-adapter';
const client = new Redis({ host: 'localhost', port: 6379 });
// Store JSON documents
await client.jsonSet('user:123', '$', {
name: 'John Doe',
age: 30,
address: {
city: 'San Francisco',
country: 'USA'
},
hobbies: ['programming', 'gaming']
});
// Query with JSONPath
const name = await client.jsonGet('user:123', '$.name');
const city = await client.jsonGet('user:123', '$.address.city');
// Update specific paths
await client.jsonNumIncrBy('user:123', '$.age', 1);
await client.jsonArrAppend('user:123', '$.hobbies', 'reading');
// Array operations
const hobbyCount = await client.jsonArrLen('user:123', '$.hobbies');
const removedHobby = await client.jsonArrPop('user:123', '$.hobbies', 0);
29 JSON Commands Available: Complete ValkeyJSON/RedisJSON v2 compatibility with jsonSet, jsonGet, jsonDel, jsonType, jsonNumIncrBy, jsonArrAppend, jsonObjKeys, jsonToggle, and more!
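A few of the additional commands listed above, sketched on the assumption that they follow the same jsonCommand(key, path, ...) convention as the examples; exact return shapes may differ:
// Sketch: additional ValkeyJSON commands (return shapes may differ)
await client.jsonSet('user:123', '$.active', true);
const kind = await client.jsonType('user:123', '$.age');         // type of the value at the path
const keys = await client.jsonObjKeys('user:123', '$.address');  // keys of the nested object
await client.jsonToggle('user:123', '$.active');                 // flips the boolean at the path
await client.jsonDel('user:123', '$.hobbies');                   // deletes the value at the path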
Use valkey-bundle for testing JSON functionality:
# Start valkey-bundle with JSON module
docker-compose -f docker-compose.valkey-bundle.yml up -d
# Test JSON functionality
npm test tests/unit/json-commands.test.mjs
# Clean up
docker-compose -f docker-compose.valkey-bundle.yml down
See TESTING-VALKEY-MODULES.md for the complete testing guide.
This adapter is production-ready with major Node.js libraries. Zero code changes required - just switch your import:
Complete compatibility with job queue libraries:
import { Redis } from 'valkey-glide-ioredis-adapter';
import Bull from 'bull';
import { Queue as BullMQQueue } from 'bullmq';
// Method 1: Direct configuration (Bull)
const queue = new Bull('email processing', {
redis: { host: 'localhost', port: 6379 }
});
// Method 2: createClient factory (BullMQ)
const client = Redis.createClient('client', { host: 'localhost', port: 6379 });
const bullmqQueue = new BullMQQueue('tasks', { connection: client });
// Method 3: Cluster support for job queues
import { Cluster } from 'valkey-glide-ioredis-adapter';
const clusterQueue = new Bull('cluster-jobs', {
createClient: (type) => Cluster.createClient(type, {
nodes: [
{ host: '127.0.0.1', port: 7000 },
{ host: '127.0.0.1', port: 7001 }
]
})
});
// Custom Lua scripts work via defineCommand
client.defineCommand('customJobScript', {
lua: 'return redis.call("lpush", KEYS[1], ARGV[1])',
numberOfKeys: 1
});
// Blocking operations for job processing
const job = await client.brpop('job:queue', 10);
Session storage works without any code changes:
import session from 'express-session';
import RedisStore from 'connect-redis';
import { Redis } from 'valkey-glide-ioredis-adapter';
const client = new Redis({ host: 'localhost', port: 6379 });
app.use(session({
store: new RedisStore({ client: client }),
secret: 'your-session-secret',
resave: false,
saveUninitialized: false,
cookie: { maxAge: 1800000 } // 30 minutes
}));
Cross-instance messaging and scaling:
import { createServer } from 'http';
import { Server } from 'socket.io';
import { createAdapter } from '@socket.io/redis-adapter';
import { Redis } from 'valkey-glide-ioredis-adapter';
const httpServer = createServer();
const io = new Server(httpServer);
// Database adapter for horizontal scaling
const pubClient = new Redis({ host: 'localhost', port: 6379 });
const subClient = pubClient.duplicate();
io.adapter(createAdapter(pubClient, subClient));
// Your Socket.IO logic works unchanged
io.on('connection', (socket) => {
socket.on('message', (data) => {
io.emit('broadcast', data); // Scales across instances
});
});
Rate limiting with express-rate-limit:
import rateLimit from 'express-rate-limit';
import RedisStore from 'rate-limit-redis';
import { Redis } from 'valkey-glide-ioredis-adapter';
const client = new Redis({ host: 'localhost', port: 6379 });
const limiter = rateLimit({
store: new RedisStore({
client: client,
prefix: 'rl:'
}),
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100 // limit each IP to 100 requests per windowMs
});
app.use('/api', limiter);
Common caching implementations:
// Cache-aside pattern
async function getUser(userId) {
const cacheKey = `user:${userId}`;
// Try cache first
const cached = await client.get(cacheKey);
if (cached) {
return JSON.parse(cached);
}
// Cache miss - fetch from database
const user = await database.findUser(userId);
// Store in cache with TTL
await client.setex(cacheKey, 3600, JSON.stringify(user));
return user;
}
// Write-through caching with hash operations
async function updateUserProfile(userId, updates) {
await client.hset(`user:${userId}`, updates);
await database.updateUser(userId, updates);
}
We've validated our adapter against 19 real-world usage patterns found in production applications across GitHub and Stack Overflow. All tests pass, proving true drop-in compatibility:
| Pattern Category | Examples | Status |
|---|---|---|
| Basic Operations | String operations, complex operations with WITHSCORES | ✅ Validated |
| Hash Operations | Object-based hset, individual operations, analytics | ✅ Validated |
| Bull Queue Integration | Job serialization, configuration patterns | ✅ Validated |
| Session Store | Express sessions with TTL, user data storage | ✅ Validated |
| Caching Patterns | JSON serialization, cache miss/hit patterns | ✅ Validated |
| Analytics & Counters | Page views, user activity tracking | ✅ Validated |
| Task Queues | List-based queues with lpush/rpop | ✅ Validated |
| Rate Limiting | Sliding window with sorted sets | ✅ Validated |
| Pub/Sub | Channel subscriptions and publishing | ✅ Validated |
| Error Handling | Connection resilience, type mismatches | ✅ Validated |
// All these real-world patterns work without any code changes:
// 1. Bull Queue Pattern (from production configs)
const client = new Redis({ host: 'localhost', port: 6379 });
// Works with Bull without any modifications
// 2. Express Session Pattern
await client.setex('sess:abc123', 1800, JSON.stringify(sessionData));
// 3. Complex Operations (from ioredis examples)
await client.zadd('sortedSet', 1, 'one', 2, 'dos');
const result = await client.zrange('sortedSet', 0, 2, 'WITHSCORES'); // ✅ Works perfectly
// 4. Caching Pattern with JSON
await client.setex(cacheKey, 3600, JSON.stringify(userData));
const cached = JSON.parse(await client.get(cacheKey));
// 5. Rate Limiting Pattern
await client.zadd(`rate_limit:${userId}`, Date.now(), `req:${Date.now()}`);
await client.zremrangebyscore(`rate_limit:${userId}`, 0, Date.now() - 60000);
🔍 Patterns Sourced From:
- GitHub repositories with 1000+ stars
- Stack Overflow top-voted solutions
- Production applications from major companies
- Popular database library documentation examples
🧪 Run Validation Tests:
npm test tests/integration/real-world-patterns.test.ts
- Native GLIDE Methods: Uses GLIDE's optimized implementations instead of generic database commands
- Result Translation: Efficient conversion between GLIDE's structured responses and ioredis formats (see the sketch after this list)
- Type Safety: Leverages GLIDE's TypeScript interfaces for better development experience
- Rust Core: Benefits from GLIDE's high-performance Rust implementation
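For example, GLIDE-style sorted-set results come back as structured member/score pairs, while ioredis callers expect the flat WITHSCORES array; an illustrative (not actual) translation step looks like this:
// Illustrative sketch of the result-translation idea - not the adapter's real code.
// A hypothetical GLIDE-style response shape:
type ScoredMember = { element: string; score: number };

function toIoredisWithScores(pairs: ScoredMember[]): string[] {
  const flat: string[] = [];
  for (const { element, score } of pairs) {
    flat.push(element, String(score));   // ['member1', '1', 'member2', '2', ...]
  }
  return flat;
}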
- 🔄 Migration Guide: Zero-code migration from ioredis
- 🏆 Compatibility Matrix: Complete compatibility validation results
- Pub/Sub Guide: Comprehensive guide to both pub/sub patterns
- Development Rules: Pure GLIDE development principles
- API Migration: Detailed API mapping from ioredis to GLIDE
# Core Database Operations (All Pass)
npm test tests/unit/string-commands.test.mjs # String operations: 37 tests ✅
npm test tests/unit/hash-commands.test.mjs # Hash operations: 13 tests ✅
npm test tests/unit/list-commands.test.mjs # List operations: 16 tests ✅
npm test tests/unit/set-commands.test.mjs # Set operations: 19 tests ✅
npm test tests/unit/zset-commands.test.mjs # Sorted set operations: 14 tests ✅
# Advanced Features (All Pass)
npm test tests/unit/json-commands.test.mjs # JSON documents: 29 tests ✅
npm test tests/unit/stream-commands.test.mjs # Stream operations: 15 tests ✅
npm test tests/unit/script-commands.test.mjs # Lua scripts: 12 tests ✅
npm test tests/unit/transaction-commands.test.mjs # Transactions: 3 tests ✅
# Real-World Integrations (All Pass)
npm test tests/integration/bullmq/ # Job queues: Bull/BullMQ ✅
npm test tests/integration/socketio/ # Real-time: Socket.IO ✅
npm test tests/integration/session-store/ # Sessions: Express/connect-redis ✅
What This Means for You:
- ✅ Immediate Use: Drop-in replacement for most common ioredis use cases
- ✅ Battle Tested: Major server libraries (Bull, Socket.IO, sessions) validated
- ✅ Enterprise Ready: Connection management, transactions, pipelines work
- ✅ Cluster Ready: Full cluster support with sharded pub/sub, multi-node operations
# Test your specific use case
npm test -- --testNamePattern="your-pattern" # Run targeted tests
npm test tests/integration/ # Test all integrations
// Before (ioredis)
import Redis from 'ioredis';
const client = new Redis({ host: 'localhost', port: 6379 });
// After (GLIDE adapter) - Just change the import!
import { Redis } from 'valkey-glide-ioredis-adapter';
const client = new Redis({ host: 'localhost', port: 6379 });
// All your existing code works without changes:
await client.set('key', 'value');
await client.hset('hash', 'field', 'value');
await client.zadd('zset', 1, 'member');
const results = await client.zrange('zset', 0, -1, 'WITHSCORES');
// Bull queues work without changes:
const queue = new Bull('email', { redis: { host: 'localhost', port: 6379 } });
// Express sessions work without changes:
app.use(session({
store: new RedisStore({ client: client }),
// ... other options
}));
Application Code
↓
ioredis API
↓
Parameter Translation
↓
Native GLIDE Methods
↓
Result Translation
↓
ioredis Results
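As a purely illustrative example of the parameter-translation step, ioredis-style variadic hset arguments can be normalized into the explicit field/value structure a native client expects (the names below are hypothetical, not the adapter's internals):
// Sketch: normalizing ioredis hset arguments into a field/value map
function normalizeHsetArgs(args: Array<string | Record<string, string>>): Record<string, string> {
  // Object form: hset('key', { field: value, ... })
  if (args.length === 1 && typeof args[0] === 'object') {
    return args[0] as Record<string, string>;
  }
  // Variadic form: hset('key', 'f1', 'v1', 'f2', 'v2')
  const fields: Record<string, string> = {};
  for (let i = 0; i + 1 < args.length; i += 2) {
    fields[String(args[i])] = String(args[i + 1]);
  }
  return fields;
}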
src/
├── BaseClient.ts # Core GLIDE client wrapper
├── Redis.ts # ioredis-compatible Redis class
├── Cluster.ts # ioredis-compatible Cluster class
├── StandaloneClient.ts # Standalone-specific implementation
├── ClusterClient.ts # Cluster-specific implementation
└── utils/ # Translation and utility functions
This project follows pure GLIDE principles:
- Use only GLIDE APIs
- Implement custom logic when needed
- Maintain ioredis compatibility through translation
- Comprehensive testing required
Apache-2.0 License - see LICENSE file for details.
- Valkey GLIDE - The underlying high-performance Rust-based client that powers this adapter
- ioredis - The original Redis client whose API we maintain full compatibility with
- Bull - Redis-based queue for Node.js, fully compatible
- BullMQ - Modern Redis-based queue with advanced features
- Bee Queue - Simple, fast, robust job/task queue for Node.js
- connect-redis - Redis session store for Express/Connect
- express-rate-limit - Rate limiting middleware for Express
- socket.io-redis-adapter - Socket.IO Redis adapter for horizontal scaling
- ValkeyJSON - JSON document storage and manipulation module
- Valkey - High-performance server with module support