MagicLogger Transport System Documentation

Overview

MagicLogger's transport system is designed from the ground up for high-performance logging with minimal impact on your application. Where Pino achieves this by offloading I/O to worker threads, MagicLogger batches logs through a pre-allocated ring buffer and asynchronous transports, keeping I/O off your application's hot path without cross-thread serialization.

Key Features

  • 🚀 Async-First: All transports operate asynchronously by default for maximum throughput
  • 🎯 Zero Overhead: Tree-shakeable design - only pay for what you use
  • 💪 High Performance: Pre-allocated ring buffer and batched flushes keep I/O off the hot path
  • 🔄 Backpressure Handling: Explicit feedback when buffers are full
  • 📦 Modular: Each transport is independently importable for optimal bundle size
  • 🔌 Extensible: Simple interface for creating custom transports

Architecture

Async-First Design

MagicLogger's architecture is fundamentally different from traditional loggers. All transports are asynchronous by default because modern applications demand non-blocking I/O:

// MagicLogger - Async by default
import { createLogger } from 'magiclogger';

const logger = createLogger(); // Already async with console output
logger.info('Non-blocking by default'); // Returns immediately

Transport Pipeline

Application Code
        ↓
Logger API (info, error, etc.)
        ↓
Operational Utilities
  • Sampling (reduce volume)
  • Rate Limiting (prevent flooding)
  • Redaction (remove PII)
        ↓
Transport Strategy
  • Console: Synchronous for immediate feedback
  • File/HTTP: Buffered, asynchronous I/O
  • Each transport manages its own buffering
        ↓
Transport Manager
  • Batch dispatch
  • Error handling
  • Lifecycle management
        ↓
Individual Transports
  • Console, File, HTTP, etc.
  • Each processes batches independently
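
As a sketch, here is how the utility stages map onto configuration. The sampler, rateLimiter, and redactor option shapes follow the AsyncLogger example later in this guide:

import { createLogger } from 'magiclogger';
import { FileTransport } from 'magiclogger/transports/file';

const logger = createLogger({
  sampler: { rate: 0.5},                    // Sampling: keep 50% of logs
  rateLimiter: { max: 1000, window: 60000 }, // Rate limiting: 1000 logs/minute
  redactor: { preset: 'strict' },            // Redaction: strip PII
  transports: [new FileTransport({ filepath: './app.log' })]
});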

MAGIC Schema

All transports work with the standardized MAGIC Schema - an open JSON format that preserves styling information across your entire stack:

{
  "id": "1733938475123-abc123",
  "timestamp": "2024-12-11T12:34:35.123Z",
  "timestampMs": 1733938475123,
  "level": "info",
  "message": "\u001b[32mSuccess:\u001b[39m User logged in", // Preserved ANSI
  "plainMessage": "Success: User logged in", // Searchable text
  "context": { "userId": 123, "ip": "192.168.1.1" },
  "tags": ["auth", "api"],
  "metadata": { "hostname": "api-01", "pid": 1234 }
}
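
Because each entry carries both the styled message and the stripped plainMessage, a consumer can index clean text for search while keeping the original for faithful terminal replay. A minimal sketch of such a receiver (the stores here are illustrative):

const searchIndex = [];       // Stand-in for a real search backend
const replayStore = new Map();

function ingest(entry) {
  // Index the plain text - ANSI escape codes would pollute search matches
  searchIndex.push({ id: entry.id, text: entry.plainMessage, level: entry.level });
  // Keep the styled message so terminals can replay it with colors intact
  replayStore.set(entry.id, entry.message);
}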

Core Concepts

Async vs Sync: Architecture Trade-offs

MagicLogger provides two logger implementations with different trade-offs.

AsyncLogger (Default)

The AsyncLogger uses a high-performance ring buffer with batching:

import { createLogger } from 'magiclogger';

// Default async logger with console output
const logger = createLogger({
  buffer: {
    size: 8192,         // Ring buffer size
    flushInterval: 100, // Flush every 100ms
    flushSize: 1000     // Or when 1000 entries accumulate
  }
});

// All logging is non-blocking
logger.info('User logged in', { userId: 123 }); // Returns AddResult immediately

✅ PROS:

  • High Throughput: Batched flushes amortize I/O cost across many log calls
  • Non-blocking: Logging never blocks the main thread, ideal for production services
  • Natural Batching: Logs accumulate for efficient I/O operations
  • Explicit Backpressure: Returns AddResult so you know if logs dropped
  • Memory Efficient: Pre-allocated ring buffer, minimal GC pressure

⚠️ CONS:

  • Potential Log Loss: Unflushed logs lost on crash (mitigated by auto-shutdown)
  • Delayed Output: Logs appear in batches (50-100ms delay)
  • Memory Usage: Holds logs in memory until flush
  • Order: High-concurrency logs may arrive slightly out of order
  • Not for Audit Logs: Can't guarantee immediate persistence

SyncLogger (Special Cases)

The SyncLogger provides zero-overhead synchronous logging for specific needs:

import { SyncLogger, SyncConsoleTransport } from 'magiclogger/sync';

const logger = new SyncLogger({
  transports: [new SyncConsoleTransport()]
});

// Direct, immediate output - no promises
logger.info('Immediate output'); // ~220,000 ops/sec

✅ PROS:

  • Maximum Performance: ~220,000 ops/sec - matches Pino sync
  • Zero Overhead: No promises, buffers, or allocations
  • Immediate Output: Critical for CLIs and debugging
  • Guaranteed Delivery: No log loss on crash
  • Predictable Order: Logs always in exact call order

⚠️ CONS:

  • Blocking I/O: Can freeze app during writes
  • No Batching: Every log is a syscall (inefficient)
  • Limited Transports: Only Console, Stream, Null
  • No Backpressure: Can overwhelm destinations
  • Poor for Production: Not suitable for high-throughput services

When to Use Each

| Scenario | Use AsyncLogger | Use SyncLogger |
|---|---|---|
| Production Services | ✅ Best choice | ❌ Blocks event loop |
| High Throughput | ✅ Batching benefits | ❌ Too many syscalls |
| Microservices | ✅ Non-blocking | ❌ Poor performance |
| CLI Tools | ⚠️ Delayed output | ✅ Immediate feedback |
| Debugging | ⚠️ Async complexity | ✅ Simple stack traces |
| Audit Logs | ❌ Can lose logs | ✅ Guaranteed delivery |
| Benchmarks | ⚠️ Includes buffer overhead | ✅ Raw performance |
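
For example, a tool that runs both interactively and as a long-lived service might choose its implementation at startup (a sketch; the SERVER_MODE flag is illustrative):

import { createLogger } from 'magiclogger';
import { SyncLogger, SyncConsoleTransport } from 'magiclogger/sync';

// CLI runs want immediate, ordered output; services want the
// non-blocking AsyncLogger.
const isInteractive = process.stdout.isTTY && !process.env.SERVER_MODE;

const logger = isInteractive
  ? new SyncLogger({ transports: [new SyncConsoleTransport()] })
  : createLogger();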

Buffering and Batching

Default Batching Behavior

Important: Transport batching behavior depends on the transport type and logger:

| Transport Type | Default Behavior | With AsyncLogger | With Sync Logger |
|---|---|---|---|
| Console | No batching | Receives batches, writes individually | Writes immediately |
| File | No batching | Receives batches, writes individually | Writes immediately |
| HTTP | Batches automatically | Receives pre-batched arrays | Batches internally |
| WebSocket | Batches automatically | Receives pre-batched arrays | Batches internally |
| S3 | Batches automatically | Receives pre-batched arrays | Batches internally |
| MongoDB | Batches automatically | Receives pre-batched arrays | Batches internally |

Key Points:

  • Network transports (HTTP, WebSocket, S3, MongoDB) extend BatchingTransport and batch automatically
  • Local transports (Console, File) do NOT batch - they write immediately
  • AsyncLogger uses a ring buffer that flushes periodically - transports still control their own batching
  • Sync Logger does NOT wait for async transports - logs are fire-and-forget
  • Each transport independently decides whether to batch, regardless of logger type

Controlling Batching

// Disable batching for a network transport
const immediateTransport = new HTTPTransport({
  url: 'https://api.example.com/logs',
  batch: false // Send each log immediately (not recommended)
});

// Configure batch settings
const batchedTransport = new HTTPTransport({
  url: 'https://api.example.com/logs',
  batch: {
    enabled: true,    // Default for network transports
    maxSize: 100,     // Max entries per batch
    maxTime: 5000,    // Max wait time (ms)
    maxBytes: 1048576 // Max batch size (1MB)
  }
});

// Force batching for console (unusual but possible)
import { BatchingTransport } from 'magiclogger/transports/base';

class BatchedConsole extends BatchingTransport {
  async sendBatch(entries) {
    entries.forEach(e => console.log(e));
  }
}

AsyncLogger Ring Buffer

The AsyncLogger's ring buffer provides several advantages:

  1. Zero Allocations: Pre-allocated buffer avoids GC pressure
  2. Natural Batching: Logs accumulate between event loop ticks
  3. Configurable Triggers: Flush on size, time, or manually

const logger = createLogger({
  buffer: {
    size: 16384,       // Larger buffer for high-volume
    flushInterval: 50, // More frequent flushes for lower latency
    flushSize: 5000    // Bigger batches for efficiency
  },
  onFlush: async (entries) => {
    // Entries are batched for efficient processing
    await sendToElasticsearch(entries);
  }
});

Backpressure Handling

Unlike Pino, MagicLogger provides explicit backpressure feedback:

const result = logger.info('High volume log');

if (!result.success) {
  switch (result.reason) {
    case 'buffer_full':
      // Implement application-level throttling
      console.warn('Logger buffer full, throttling...');
      break;
    case 'rate_limited':
      // Exceeded rate limits
      metrics.increment('logs.rate_limited');
      break;
  }
}

// For critical logs, use guaranteed delivery
await logger.logCritical('error', 'Database connection lost', {
  severity: 'critical'
});

Available Transports

Core Transports

Console Transport

Note: The console transport is enabled by default when you create a Logger instance.

Output to stdout/stderr with full color support:

import { Logger } from 'magiclogger';

// Console transport is created automatically by default
const logger = new Logger(); // Console output enabled

// Explicitly disable console transport for production
const prodLogger = new Logger({
  useConsole: false, // Disable automatic console transport
  transports: [/* your production transports */]
});

// Or override the default console with custom settings
import { ConsoleTransport } from 'magiclogger/transports/console';

const customLogger = new Logger({
  transports: [
    new ConsoleTransport({
      level: 'warn',    // Only warnings and errors
      useColors: false, // No colors (e.g., for log aggregators)
      format: 'json'    // JSON format instead of pretty
    })
  ]
  // Note: When you provide a transports array, the default console is not added
});

File Transport

Write to files with rotation support:

import { FileTransport } from 'magiclogger/transports/file';

const transport = new FileTransport({
  filepath: './logs/app.log',
  maxSize: '10MB',
  maxFiles: 7,
  compress: true,
  format: 'json'
});

Stream Transport

Write to any Node.js stream:

import { StreamTransport } from 'magiclogger/transports/stream';

const transport = new StreamTransport({
  stream: process.stdout,
  format: 'json'
});

Null Transport

Discard all logs (useful for testing):

import { NullTransport } from 'magiclogger/transports/null';

const transport = new NullTransport();

Network Transports

HTTP Transport

Send logs to HTTP endpoints with batching and retry:

import { HTTPTransport } from 'magiclogger/transports/http';

const transport = new HTTPTransport({
  url: 'https://logs.example.com/ingest',
  method: 'POST',
  headers: { 'X-API-Key': process.env.LOG_API_KEY },
  batch: {
    maxSize: 100, // Max entries per batch
    maxTime: 5000 // Max wait time (ms)
  },
  retry: {
    attempts: 3,
    delay: 1000,
    backoff: 2
  },
  compress: true
});

WebSocket Transport

Real-time log streaming:

import { WebSocketTransport } from 'magiclogger/transports/websocket';

const transport = new WebSocketTransport({
  url: 'wss://logs.example.com/stream',
  reconnect: true,
  reconnectDelay: 1000,
  heartbeat: 30000
});

Database Transports

MongoDB Transport

Direct database writes with TTL support:

import { MongoDBTransport } from 'magiclogger/transports/mongodb';

const transport = new MongoDBTransport({
  uri: 'mongodb://localhost:27017',
  database: 'logs',
  collection: 'events',
  ttl: 2592000, // 30 days
  createIndex: true,
  batchSize: 100,
  transformDocument: (entry) => ({
    // Custom document structure
    timestamp: new Date(entry.timestampMs),
    severity: entry.level.toUpperCase(),
    data: entry
  })
});

PostgreSQL Transport

Write to PostgreSQL with automatic table creation:

import { PostgreSQLTransport } from 'magiclogger/transports/postgresql';

const transport = new PostgreSQLTransport({
  connectionString: process.env.DATABASE_URL,
  table: 'application_logs',
  createTable: true,
  poolSize: 10,
  batchSize: 100,
  flushInterval: 5000
});

Cloud Storage

S3 Transport

Upload logs to AWS S3 with partitioning:

import { S3Transport } from 'magiclogger/transports/s3';

const transport = new S3Transport({
  bucket: 'my-app-logs',
  prefix: 'production/',
  region: 'us-east-1',
  compression: 'gzip',
  partitioning: {
    strategy: 'daily', // 'hourly' | 'daily' | 'monthly'
    format: 'year=%Y/month=%m/day=%d/'
  },
  batchSize: 1000,
  flushInterval: 60000,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
  }
});

Observability Platforms

OTLP Transport

OpenTelemetry Protocol for modern observability:

import { OTLPTransport } from 'magiclogger/transports/otlp';

const transport = new OTLPTransport({
  endpoint: 'https://otlp.example.com/v1/logs',
  protocol: 'http/protobuf', // or 'grpc'
  headers: { 'x-api-key': process.env.OTLP_KEY },
  serviceName: 'my-service',
  resource: {
    'service.version': '1.0.0',
    'deployment.environment': 'production'
  },
  includeTraceContext: true // Auto-attach trace/span IDs
});

Using Transports

Basic Usage

Default Console Transport

By default, MagicLogger automatically creates a console transport:

import { Logger } from 'magiclogger';

// Console transport is enabled by default
const logger = new Logger();
logger.info('This appears in console automatically');

// Explicitly control console behavior
const customLogger = new Logger({
  useConsole: true, // Default: true - console enabled
  useColors: true,  // Default: true - colored output
  verbose: false    // Default: false - debug level disabled
});

Performance Optimization - Disable Console

For production environments, disable console to improve performance:

import { Logger } from 'magiclogger';
import { FileTransport } from 'magiclogger/transports/file';
import { HTTPTransport } from 'magiclogger/transports/http';

// Production setup - no console overhead
const prodLogger = new Logger({
  useConsole: false, // Disable console for better performance
  transports: [
    new FileTransport({
      filepath: './app.log',
      buffer: { size: 1000 } // Buffer writes for performance
    }),
    new HTTPTransport({
      url: 'https://logs.example.com',
      batch: { maxSize: 100, maxTime: 5000 } // Batch for efficiency
    })
  ]
});

prodLogger.info('Fast logging without console overhead');

Multiple Transports

The createLogger factory provides the easiest setup:

import { createLogger } from 'magiclogger';
import { FileTransport } from 'magiclogger/transports/file';
import { HTTPTransport } from 'magiclogger/transports/http';

const logger = createLogger({
  // Console is included by default unless you set useConsole: false
  transports: [
    new FileTransport({ filepath: './app.log' }),
    new HTTPTransport({ url: 'https://logs.example.com' })
  ]
});

logger.info('Logs go to console, file, and HTTP');

Custom onFlush Handler

For maximum flexibility, use a custom flush handler:

const logger = createLogger({
  onFlush: async (entries) => {
    // Custom processing logic
    await Promise.all([
      writeToCustomDatabase(entries),
      sendToAnalytics(entries),
      archiveToS3(entries)
    ]);
  }
});

AsyncLogger with Transports

The AsyncLogger provides high-throughput logging with batching:

import { AsyncLogger } from 'magiclogger';
import { ConsoleTransport, S3Transport } from 'magiclogger/transports';

const logger = new AsyncLogger({
  buffer: {
    size: 16384,
    flushInterval: 100
  },
  transports: [
    new ConsoleTransport({ format: 'pretty' }),
    new S3Transport({ bucket: 'logs' })
  ],
  // Operational utilities
  redactor: { preset: 'strict' },
  rateLimiter: { max: 1000, window: 60000 },
  sampler: { rate: 0.1 }
});

// Check for backpressure
const result = logger.info('High volume log');
if (!result.success) {
  console.warn('Log dropped:', result.reason);
}

SyncLogger with Transports

For scenarios requiring immediate, synchronous output:

import { SyncLogger } from 'magiclogger/sync';
import { SyncConsoleTransport, SyncStreamTransport } from 'magiclogger/sync/transports';

const logger = new SyncLogger({
  transports: [
    new SyncConsoleTransport({ useColors: true }),
    new SyncStreamTransport({
      stream: process.stdout,
      format: 'json'
    })
  ]
});

// Direct, synchronous writes - no promises
logger.info('Immediate output');
logger.error('Instant error logging');

Note: Only Console, Stream, and Null transports have synchronous implementations. Network and database transports are inherently asynchronous.

Tree-Shaking and Bundle Size

MagicLogger's modular design ensures you only include what you use:

// ✅ GOOD - Tree-shakeable, minimal bundle
import { Logger } from 'magiclogger';
import { ConsoleTransport } from 'magiclogger/transports/console';

// Logger with a single explicit transport
const logger = new Logger({
  transports: [new ConsoleTransport()]
});
// Bundle: ~41KB (33KB core + 8KB console)

// ❌ BAD - Imports all transports
import * as transports from 'magiclogger/transports';
// Bundle: ~55KB+ (includes everything)

Transport Comparison

| Transport | Type | Performance | Use Case | Bundle Size |
|---|---|---|---|---|
| Console | Sync/Async | Very High | Development, debugging | 8KB |
| Stream | Sync/Async | Very High | Pipes, stdout/stderr | 6KB |
| File | Async | High | Local logging, rotation | 14KB |
| HTTP | Async | Medium | Remote endpoints, APIs | 22KB |
| WebSocket | Async | High | Real-time streaming | 14KB |
| MongoDB | Async | Medium | Direct DB writes | 13KB |
| PostgreSQL | Async | Medium | Structured storage | 8KB |
| S3 | Async | Low | Long-term archival | 14KB |
| OTLP | Async | Medium | Observability platforms | 16KB |
| Null | Sync | Maximum | Testing, benchmarking | 1KB |

Creating Custom Transports

Basic Transport

Create a simple custom transport:

import { Transport } from 'magiclogger/transports/base';

class CustomTransport extends Transport {
  constructor(options = {}) {
    super('custom', options);
  }

  async log(entry) {
    // Process a single log entry (send() is your own delivery helper)
    await this.send(entry);
  }

  async logBatch(entries) {
    // Process multiple entries efficiently
    await this.sendBatch(entries);
  }

  async close() {
    // Release sockets, file handles, etc.
    await this.cleanup();
  }
}

Batching Transport

For efficient batch processing:

import { BatchingTransport } from 'magiclogger/transports/base';

class CustomBatchingTransport extends BatchingTransport {
  constructor(options = {}) {
    super('custom-batch', {
      batchSize: 100,
      flushInterval: 5000,
      ...options
    });
  }

  async sendBatch(entries) {
    // Send batch to destination
    const response = await fetch(this.options.url, {
      method: 'POST',
      body: JSON.stringify(entries),
      headers: { 'Content-Type': 'application/json' }
    });

    if (!response.ok) {
      throw new Error(`Failed to send batch: ${response.status}`);
    }
  }
}

Network Transport with Retry

Handle network failures gracefully:

import { NetworkTransport } from 'magiclogger/transports/base';

class ResilientTransport extends NetworkTransport {
  constructor(options = {}) {
    super('resilient', {
      retry: {
        attempts: 3,
        delay: 1000,
        backoff: 2
      },
      ...options
    });
  }

  async send(data) {
    return this.retryable(async () => {
      const response = await fetch(this.options.endpoint, {
        method: 'POST',
        body: JSON.stringify(data)
      });

      if (!response.ok) {
        throw new Error(`HTTP ${response.status}`);
      }

      return response;
    });
  }
}

Performance Considerations

AsyncLogger Performance

The AsyncLogger achieves high performance through:

  1. Zero-allocation ring buffer - No object pooling overhead
  2. Microtask batching - Natural aggregation without promises
  3. Efficient flushing - Timer-based with size triggers

// Optimized for throughput
const logger = createLogger({
  buffer: {
    size: 32768,       // Large buffer for bursts
    flushInterval: 10, // Frequent flushes
    flushSize: 5000    // Large batches
  }
});
// ~130,000 ops/sec with batching benefits

Why We Chose Ring Buffers Over Worker Threads

Pino v7+ moved from separate processes to Worker Threads for transport isolation. MagicLogger deliberately chose a different approach after careful analysis:

Pino's Worker Thread Approach

How it works: Pino serializes logs and sends them to a Worker Thread where transports run in isolation.

✅ PROS:

  • Complete Isolation: Transport crashes can't affect main thread
  • Parallel Processing: True CPU parallelism for heavy processing
  • Framework Compatibility: Works well with frameworks that manage workers

⚠️ CONS:

  • Serialization Overhead: Every log must be serialized/deserialized between threads
  • Complex Debugging: Cross-thread issues are harder to diagnose
  • Higher Memory: Each Worker Thread has its own V8 instance (~10MB baseline)
  • Startup Cost: Workers take time to spawn and warm up
  • Limited Shared State: Can't share objects between threads

MagicLogger's Ring Buffer Approach

How it works: MagicLogger uses a pre-allocated ring buffer in the main thread with microtask-based flushing.

✅ PROS:

  • No Serialization: Direct object references, no copying needed
  • Simple Debugging: Single thread, straightforward stack traces
  • Fast Startup: No worker spawn time
  • Explicit Backpressure: Know immediately when buffers are full
  • Simpler Architecture: Easier to understand and maintain

⚠️ CONS:

  • No Isolation: Transport errors need careful handling
  • Single Thread: Heavy processing can block (mitigated by async I/O)
  • Manual Shutdown: Need to flush on exit (handled automatically by default; see the sketch below)
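
If you disable auto-shutdown, or want to drain logs before a deliberate exit, explicit flushing looks roughly like this (a sketch assuming the logger exposes flush() and close(), mirroring the Transport interface in the API reference; with the default autoShutdown: true this happens for you):

import { createLogger } from 'magiclogger';

const logger = createLogger({ autoShutdown: false });

process.on('SIGTERM', async () => {
  await logger.flush(); // Drain the ring buffer to all transports
  await logger.close(); // Let each transport finish in-flight work
  process.exit(0);
});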

Performance Comparison

// Pino with Worker Thread
const pino = require('pino');
const transport = pino.transport({
  target: 'pino-pretty'
});
const logger = pino(transport);
logger.info('test'); // High performance with isolation

// MagicLogger with Ring Buffer
import { createLogger } from 'magiclogger';
const logger = createLogger();
logger.info('test'); // Comparable performance, simpler architecture

The Result: Both achieve excellent performance. The choice comes down to whether you prioritize isolation (Pino) or simplicity (MagicLogger).

When Worker Threads Make Sense

Despite choosing ring buffers by default, Worker Threads are valuable for:

  1. CPU-Intensive Processing: Log encryption, complex transformations
  2. Untrusted Code: Running third-party transports safely
  3. Framework Requirements: When your framework manages workers

You can still use Worker Threads with MagicLogger when needed:

import { createLogger } from 'magiclogger';
import { Worker } from 'worker_threads';

const worker = new Worker('./log-processor.js');

const logger = createLogger({
onFlush: async (entries) => {
// Send to worker for CPU-intensive processing
worker.postMessage({ type: 'logs', data: entries });
}
});

Migration from Pino

Pino v7 Transport

// Pino v7
const transport = pino.transport({
  target: 'pino-pretty',
  options: { destination: 1 }
});
const logger = pino(transport);

MagicLogger Equivalent

// MagicLogger
import { createLogger } from 'magiclogger';

const logger = createLogger(); // Pretty console output by default

// OR with an explicit transport
import { ConsoleTransport } from 'magiclogger/transports/console';

const prettyLogger = createLogger({
  transports: [new ConsoleTransport({ format: 'pretty' })]
});

Key Differences

  1. Default Behavior: MagicLogger is async by default, Pino is sync by default
  2. Transport Architecture: MagicLogger uses async buffers, Pino uses Worker Threads
  3. Backpressure: MagicLogger provides explicit AddResult, Pino may silently drop
  4. Bundle Size: MagicLogger is more modular with better tree-shaking
  5. Schema: MagicLogger uses the open MAGIC Schema format

Migration Strategy

// Step 1: Install MagicLogger
//   npm install magiclogger

// Step 2: Create a compatibility wrapper
import { createLogger } from 'magiclogger';
import { HTTPTransport } from 'magiclogger/transports/http';

function createPinoCompatible(options = {}) {
  const transports = [];

  // Map Pino transports to MagicLogger equivalents
  if (options.transport) {
    if (options.transport.target === 'pino-pretty') {
      // Pretty console output is already included by default
    } else if (options.transport.target === 'pino-http-send') {
      transports.push(new HTTPTransport({
        url: options.transport.options.url
      }));
    }
  }

  return createLogger({ transports });
}

// Step 3: Replace gradually
const logger = createPinoCompatible(pinoOptions);

API Reference

Logger Creation

import { createLogger } from 'magiclogger';

interface CreateLoggerOptions {
  // Buffer configuration
  buffer?: {
    size?: number;          // Default: 8192
    flushInterval?: number; // Default: 100ms
    flushSize?: number;     // Default: 1000
  };

  // Transports
  transports?: Transport[];

  // Custom flush handler (alternative to transports)
  onFlush?: (entries: LogEntry[]) => void | Promise<void>;

  // Operational utilities
  redactor?: RedactorOptions;
  rateLimiter?: RateLimiterOptions;
  sampler?: SamplerOptions;
  queueManager?: QueueManagerOptions;

  // Behavior
  autoShutdown?: boolean; // Default: true
  sync?: boolean;         // Force sync mode
}

const logger = createLogger(options);

Transport Interface

interface Transport {
  readonly name: string;

  // Core methods
  log(entry: LogEntry): void | Promise<void>;
  logBatch?(entries: LogEntry[]): void | Promise<void>;

  // Lifecycle
  init?(): void | Promise<void>;
  close?(): void | Promise<void>;
  flush?(): void | Promise<void>;

  // Control
  shouldLog?(entry: LogEntry): boolean;
  pause?(): void;
  resume?(): void;
}
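
As an illustration, here is a minimal in-memory transport satisfying this interface, handy for asserting log output in unit tests (a sketch, not a shipped transport):

class MemoryTransport implements Transport {
  readonly name = 'memory';
  readonly entries: LogEntry[] = [];

  log(entry: LogEntry): void {
    this.entries.push(entry);
  }

  logBatch(entries: LogEntry[]): void {
    this.entries.push(...entries);
  }

  shouldLog(): boolean {
    return true; // Accept everything; a real transport might filter by level
  }
}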

LogEntry Schema

interface LogEntry {
  id: string;
  timestamp: string;   // ISO 8601
  timestampMs: number; // Unix ms
  level: LogLevel;
  message: string;      // With ANSI codes
  plainMessage: string; // Without ANSI

  // Optional
  context?: Record<string, unknown>;
  tags?: string[];
  error?: {
    name: string;
    message: string;
    stack?: string;
  };
  metadata?: {
    hostname?: string;
    pid?: number;
  };
}
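
These fields are designed for downstream processing. For instance, a formatter for an alerting webhook might combine the optional error and metadata blocks (a sketch):

function toAlert(entry: LogEntry): string {
  const host = entry.metadata?.hostname ?? 'unknown-host';
  const err = entry.error ? ` (${entry.error.name}: ${entry.error.message})` : '';
  // plainMessage keeps alerts free of ANSI escape codes
  return `[${entry.level}] ${entry.plainMessage}${err} @ ${host}`;
}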

AddResult (Backpressure)

interface AddResult {
  success: boolean;
  reason?: 'buffer_full' | 'closing' | 'rate_limited';
  dropped?: LogEntry;
  bufferStats?: {
    size: number;
    capacity: number;
    utilization: number;
  };
}

Summary

MagicLogger's transport system represents a fundamental rethinking of how JavaScript applications handle logging:

  • Async-First Architecture: Built for modern async applications
  • High Performance: Ring buffer approach matches Pino's throughput
  • Explicit Backpressure: Never silently drop logs
  • Modular Design: Pay only for what you use
  • Open Schema: The MAGIC Schema format works across languages and platforms

Whether you're building a high-throughput microservice or a simple CLI tool, MagicLogger's transport system provides the performance and flexibility you need.