Overview

The caching layer provides intelligent performance optimization for expensive operations in the documentation generation pipeline. It significantly improves generation speed by caching type analysis results and API resolution operations.
Performance Improvements:
  • Type Analysis: 30-50% faster for complex type parsing
  • API Resolution: 20-40% faster for cross-reference resolution
  • Overall: Significant speedup for large codebases with repetitive patterns

Architecture

Cache Types

TypeAnalysisCache

  • Purpose: Caches expensive TypeScript type parsing operations
  • Location: src/cache/TypeAnalysisCache.ts
  • Caches: Complex type string parsing, object property extraction, union/intersection analysis
  • Key Benefit: Eliminates redundant parsing of identical type strings across multiple API items

ApiResolutionCache

  • Purpose: Caches API model cross-reference resolution
  • Location: src/cache/ApiResolutionCache.ts
  • Caches: Declaration reference resolution, symbol lookups, cross-file references
  • Key Benefit: Prevents repeated expensive API model traversals for the same references

CacheManager

  • Purpose: Centralized coordination of all caching operations
  • Location: src/cache/CacheManager.ts
  • Manages: Multiple cache instances, statistics collection, global enable/disable
  • Key Benefit: Unified interface for cache operations and performance monitoring

TypeAnalysisCache

Uses a Least Recently Used (LRU) eviction strategy:
  • Default Size: 1000 cached items
  • Eviction: Removes the least recently used item when the cache is full
  • Thread-Safe: Safe for concurrent access
  • Hit Tracking: Monitors cache effectiveness
private readonly _cache: Map<string, TypeAnalysis>;
private readonly _maxSize: number;
private _hitCount: number = 0;
private _missCount: number = 0;
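The fields above leave the eviction logic implicit. As a minimal sketch (class and method names here are illustrative, not the actual implementation), an LRU cache can exploit the fact that a JavaScript `Map` iterates keys in insertion order: a hit re-inserts the entry so it becomes most recently used, and on overflow the first key is the least recently used one.

```typescript
// Minimal LRU sketch: Map preserves insertion order, so re-inserting on a
// hit keeps the most recently used entries at the end, and the first key
// is always the least recently used candidate for eviction.
class LruSketch<V> {
  private readonly cache = new Map<string, V>();

  constructor(private readonly maxSize: number = 1000) {}

  get(key: string): V | undefined {
    if (!this.cache.has(key)) return undefined;
    const value = this.cache.get(key)!;
    this.cache.delete(key); // move entry to the most-recent position
    this.cache.set(key, value);
    return value;
  }

  set(key: string, value: V): void {
    if (this.cache.has(key)) this.cache.delete(key);
    this.cache.set(key, value);
    if (this.cache.size > this.maxSize) {
      // The first key in iteration order is the least recently used entry.
      const oldest = this.cache.keys().next().value as string;
      this.cache.delete(oldest);
    }
  }
}
```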
Creates deterministic cache keys from type strings:
private _createCacheKey(type: string): string {
  return type.trim(); // Simple but effective for TypeScript types
}
Examples:
  • "string" → "string"
  • "Promise<User>" → "Promise<User>"
  • "{ name: string; age: number }" → "{ name: string; age: number }"
Provides utility for caching any type analysis function:
public static createCachedFunction<T extends (...args: any[]) => TypeAnalysis>(
  fn: T,
  options: TypeAnalysisCacheOptions = {}
): T
Usage:
const cachedAnalyzer = TypeAnalysisCache.createCachedFunction(
  analyzeTypeString,
  { maxSize: 500, enabled: true }
);

// This call will be cached
const result1 = cachedAnalyzer("ComplexType<T>");
const result2 = cachedAnalyzer("ComplexType<T>"); // Cache hit!

ApiResolutionCache

Caches API model resolution operations:
public get(
  declarationReference: any,
  contextApiItem?: ApiItem
): IResolveDeclarationReferenceResult | undefined
Cache Key Strategy:
const refString = JSON.stringify(declarationReference);
const contextString = contextApiItem?.canonicalReference?.toString() || '';
return `${refString}|${contextString}`;
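As a self-contained sketch of that key strategy (the interface and function names here are illustrative), the composite key joins the serialized reference with the context item's canonical reference, so the same reference resolved from different contexts gets distinct cache entries:

```typescript
// Sketch of the composite cache key: serialized declaration reference
// plus the context item's canonical reference, joined with "|".
interface ContextItemLike {
  canonicalReference?: { toString(): string };
}

function createResolutionKey(
  declarationReference: unknown,
  contextApiItem?: ContextItemLike
): string {
  const refString = JSON.stringify(declarationReference);
  const contextString = contextApiItem?.canonicalReference?.toString() ?? '';
  return `${refString}|${contextString}`;
}
```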
Creates cached wrappers for API resolution functions:
public createCachedResolver(
  resolveFn: (
    declarationReference: any,
    contextApiItem?: ApiItem
  ) => IResolveDeclarationReferenceResult
): (
  declarationReference: any,
  contextApiItem?: ApiItem
) => IResolveDeclarationReferenceResult
Integration:
// In CustomMarkdownEmitter
this._apiResolutionCache = new ApiResolutionCache({
  enabled: true,
  maxSize: 500
});

// Cache resolution operations
const cachedResult = this._apiResolutionCache.get(declarationRef, contextItem);
if (cachedResult) {
  return cachedResult;
}

CacheManager

Manages multiple cache instances with unified interface:
export class CacheManager {
  private readonly _typeAnalysisCache: TypeAnalysisCache;
  private readonly _apiResolutionCache: ApiResolutionCache;
  private readonly _enabled: boolean;
  private readonly _enableStats: boolean;
}
Key Features:
  • Global enable/disable for all caches
  • Individual cache configuration
  • Unified statistics collection
  • Environment-specific presets
Provides optimized configurations for different environments:
CacheManager.createDevelopment({
  enabled: true,
  enableStats: true,
  typeAnalysis: { maxSize: 500, enabled: true },
  apiResolution: { maxSize: 200, enabled: true }
});
Rationale:
  • Development: Smaller caches, statistics enabled for debugging
  • Production: Larger caches, statistics disabled for performance
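Assuming a production preset symmetric to `createDevelopment` above (the method name `createProduction` is an assumption, not confirmed by the source), a configuration following that rationale might look like:

```typescript
// Hypothetical production preset: larger caches, statistics disabled
// to avoid bookkeeping overhead during generation.
CacheManager.createProduction({
  enabled: true,
  enableStats: false,
  typeAnalysis: { maxSize: 2000, enabled: true },
  apiResolution: { maxSize: 1000, enabled: true }
});
```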
Comprehensive cache performance monitoring:
public getStats(): {
  enabled: boolean;
  typeAnalysis: ReturnType<TypeAnalysisCache['getStats']>;
  apiResolution: ReturnType<ApiResolutionCache['getStats']>;
  totalHitRate: number;
}

public printStats(): void
Output Example:
📊 Cache Statistics:
   Overall Hit Rate: 42.3%
   Type Analysis Cache: 45.7% hit rate (457/1000)
   API Resolution Cache: 38.9% hit rate (194/500)
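The overall hit rate in output like the above can be derived by pooling hits and lookups across both caches. A minimal sketch (the `hits`/`misses` field names are assumptions about the per-cache stats shape):

```typescript
// Pool hits and total lookups across all caches to compute an
// overall hit rate in the range [0, 1].
interface CacheStatsLike {
  hits: number;
  misses: number;
}

function totalHitRate(stats: CacheStatsLike[]): number {
  const hits = stats.reduce((sum, s) => sum + s.hits, 0);
  const lookups = stats.reduce((sum, s) => sum + s.hits + s.misses, 0);
  return lookups === 0 ? 0 : hits / lookups;
}
```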

Integration Points

1. ObjectTypeAnalyzer Integration

ObjectTypeAnalyzer automatically uses TypeAnalysisCache:
export class ObjectTypeAnalyzer {
  private readonly _cache: TypeAnalysisCache;

  constructor(cache?: TypeAnalysisCache) {
    this._cache = cache ?? new TypeAnalysisCache({ enabled: true, maxSize: 500 });
  }

  public analyzeType(type: string): TypeAnalysis {
    // Check cache first
    const cached = this._cache.get(type);
    if (cached) {
      return cached;
    }

    // Parse type and cache result
    const result = this._parseType(type);
    this._cache.set(type, result);
    return result;
  }
}
2. CustomMarkdownEmitter Integration

CustomMarkdownEmitter uses ApiResolutionCache for link resolution:
export class CustomMarkdownEmitter extends MarkdownEmitter {
  private _apiResolutionCache: ApiResolutionCache;

  public constructor(apiModel: ApiModel) {
    super();
    this._apiResolutionCache = new ApiResolutionCache({ enabled: true, maxSize: 500 });
  }
}
3. MarkdownDocumenter Integration

MarkdownDocumenter coordinates all caching through CacheManager:
// Initialize cache manager
const cacheManager = getGlobalCacheManager({
  enabled: true,
  enableStats: true,
  typeAnalysis: { maxSize: 1000, enabled: true },
  apiResolution: { maxSize: 500, enabled: true }
});

// Print statistics after generation
cacheManager.printStats();

Performance Characteristics

Cache Hit Rates

Typical Performance:
  • Type Analysis: 40-60% hit rate
  • API Resolution: 30-50% hit rate
  • Overall: Significant speedup for large codebases
Factors:
  • Codebase size and complexity
  • Repetitive type patterns
  • Cross-reference density

Memory Usage

Configurable Limits:
  • TypeAnalysisCache: Default 1000 items
  • ApiResolutionCache: Default 500 items
  • Memory-efficient LRU eviction
Optimization:
  • Environment-specific presets
  • Adjustable cache sizes
  • Automatic cleanup

Speed Improvements

Measured Benefits:
  • Type Analysis: 30-50% faster
  • API Resolution: 20-40% faster
  • Large Projects: Up to 2x improvement
Scenarios:
  • Complex generic types
  • Repeated interface definitions
  • Dense cross-references

Best Practices

Development:
  • Smaller caches (500-1000 items)
  • Enable statistics for monitoring
  • Focus on debugging cache behavior
Production:
  • Larger caches (1000-2000 items)
  • Disable statistics for performance
  • Monitor hit rates in logs
Large Codebases:
  • Increase TypeAnalysisCache size
  • Consider separate cache instances
  • Monitor memory usage
Disable for:
  • Testing cache behavior
  • Debugging type analysis issues
  • Memory-constrained environments
  • Very small projects (minimal benefit)
How to Disable:
    const cacheManager = new CacheManager({
      enabled: false // Disables all caches
    });

    // Or disable individual caches
    const cacheManager = new CacheManager({
      typeAnalysis: { enabled: false },
      apiResolution: { enabled: true }
    });
Enable Statistics:
    const cacheManager = new CacheManager({
      enableStats: true,
      typeAnalysis: { enabled: true },
      apiResolution: { enabled: true }
    });

    // Print after generation
    cacheManager.printStats();
Interpret Results:
  • Hit Rate > 40%: Good cache effectiveness
  • Hit Rate < 20%: Consider tuning or disabling
  • Memory Usage: Monitor cache sizes vs. benefit
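Those thresholds can be encoded in a small helper (the function and its return labels are illustrative, not part of the cache API):

```typescript
// Encode the interpretation thresholds: >40% is effective, <20%
// suggests tuning or disabling, anything between is borderline.
function interpretHitRate(hitRate: number): 'effective' | 'borderline' | 'tune-or-disable' {
  if (hitRate > 0.4) return 'effective';
  if (hitRate < 0.2) return 'tune-or-disable';
  return 'borderline';
}
```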

Troubleshooting

Symptoms: Cache statistics show < 20% hit rate
Causes:
  • Too many unique type patterns
  • Insufficient repetitive structures
  • Cache size too small
Solutions:
  • Increase cache size
  • Analyze type diversity
  • Consider disabling for small projects
Symptoms: High memory usage during generation
Causes:
  • Cache sizes too large
  • Retaining too many cached items
  • Memory leaks in cache implementation
Solutions:
  • Reduce cache sizes
  • Enable LRU eviction
  • Monitor memory usage patterns
Symptoms: Incorrect type analysis or resolution results
Causes:
  • Hash collisions in cache keys
  • Stale cache data
  • Race conditions (rare)
Solutions:
  • Clear caches between runs
  • Check cache key generation
  • Validate cached data integrity