Overview
The emission layer transforms the TSDoc AST (Abstract Syntax Tree) into Mintlify-compatible MDX format. It handles the final rendering step, converting semantic document structures into formatted markdown with Mintlify-specific components.
Primary Component: src/markdown/CustomMarkdownEmitter.ts
Recent Enhancement: The emission layer now includes API resolution caching for improved performance during cross-reference resolution.
Key Responsibilities
- AST to MDX Conversion: Transforms TSDoc AST nodes into MDX strings
- Mintlify Component Generation: Creates specialized components such as `<ParamField>` and `<ResponseField>`
- API Resolution Caching: Caches cross-reference resolution results for performance
- Link Resolution: Resolves cross-references between API items
Architecture
Core Components
CustomMarkdownEmitter
Constructor and Initialization
Initializes with the API model and caching.
Caching Integration: Automatically creates an API resolution cache for improved performance.
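A minimal sketch of that wiring, assuming the emitter wraps an ApiModel from @microsoft/api-extractor-model and a plain Map-based cache (the field names are illustrative):

```typescript
import { ApiItem, ApiModel } from '@microsoft/api-extractor-model';

export class CustomMarkdownEmitter {
  private readonly _apiModel: ApiModel;
  // Hypothetical cache shape: resolved API items keyed by reference string.
  private readonly _resolutionCache: Map<string, ApiItem | undefined>;

  public constructor(apiModel: ApiModel) {
    this._apiModel = apiModel;
    // Created up front so every emit pass can reuse previously resolved
    // cross-references.
    this._resolutionCache = new Map();
  }
}
```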
Main Emission Method
The primary method for AST-to-MDX conversion (a sketch of the traversal follows the list). Process:
- Traverse TSDoc AST recursively
- Identify node types and apply appropriate rendering
- Handle special cases (tables, links, etc.)
- Generate Mintlify-specific components
- Resolve cross-references with caching
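A simplified sketch of that traversal using @microsoft/tsdoc types; renderLink stands in for the emitter's actual link-rendering logic:

```typescript
import {
  DocLinkTag,
  DocNode,
  DocNodeKind,
  DocParagraph,
  DocPlainText
} from '@microsoft/tsdoc';

// Placeholder for the emitter's link renderer (illustrative only).
declare function renderLink(link: DocLinkTag): string;

// Simplified recursive traversal; the real emitter also tracks indentation,
// escaping, and Mintlify component state.
function writeNode(node: DocNode, output: string[]): void {
  switch (node.kind) {
    case DocNodeKind.PlainText:
      output.push((node as DocPlainText).text);
      break;
    case DocNodeKind.Paragraph:
      for (const child of (node as DocParagraph).nodes) {
        writeNode(child, output);
      }
      output.push('\n\n');
      break;
    case DocNodeKind.LinkTag:
      // Cross-references are resolved (with caching) before the link is written.
      output.push(renderLink(node as DocLinkTag));
      break;
    default:
      // Generic child traversal for node kinds not handled specially here.
      for (const child of node.getChildNodes()) {
        writeNode(child, output);
      }
      break;
  }
}
```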
Node Type Handling
Specialized rendering is applied for each DocNode kind, dispatched along the lines of the traversal sketch above.
Mintlify Component Generation
Intelligent Table Conversion
DocTable to Mintlify Components
The emitter’s most critical feature is intelligent table handling.
Detection Logic: Analyzes table headers and content to identify the appropriate Mintlify component (a sketch of such a check follows).
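A sketch of the kind of header check this detection might perform; the exact header strings matched by `_isPropertyTable()` and its siblings are assumptions here:

```typescript
// Hypothetical detection helpers operating on already-extracted header titles.
function isPropertyTable(headerTitles: string[]): boolean {
  const normalized = headerTitles.map((title) => title.trim().toLowerCase());
  // Assumed convention: a property table carries name/type/description columns.
  return (
    normalized.includes('type') &&
    normalized.includes('description') &&
    (normalized.includes('name') || normalized.includes('property'))
  );
}

function isMethodTable(headerTitles: string[]): boolean {
  const normalized = headerTitles.map((title) => title.trim().toLowerCase());
  return normalized.includes('method') && normalized.includes('description');
}
```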
Property Section Generation
Property Section Generation
Converts property tables to `<ParamField>` components (an illustrative output example follows the list). Features:
- Type analysis with nested property support
- Automatic required/optional detection
- Complex type expansion
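For illustration, output of this shape (the property names and types are invented for the example):

```mdx
<ParamField path="options.timeout" type="number">
  Request timeout in milliseconds.
</ParamField>

<ParamField path="options.retries" type="number" required>
  Number of retry attempts before the request fails.
</ParamField>
```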
Method Section Generation
Converts method tables to `<ResponseField>` components. Output example (illustrative):
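A sketch of what such output can look like (the method names and descriptions are invented):

```mdx
<ResponseField name="connect" type="method">
  Opens a connection and returns a Promise that resolves when the client is ready.
</ResponseField>

<ResponseField name="disconnect" type="method">
  Closes the connection and releases any held resources.
</ResponseField>
```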
API Resolution Caching
Cached Resolution Strategy
Implements caching for API reference resolution; one possible cache-key strategy is sketched below.
Performance Benefits: A 20-40% improvement for documentation with dense cross-references.
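A minimal sketch of cached resolution built on @microsoft/api-extractor-model; the cache-key format shown here (reference text plus the context item's canonical reference) is an assumption, not the project's documented strategy:

```typescript
import { ApiItem, ApiModel } from '@microsoft/api-extractor-model';
import { DocDeclarationReference } from '@microsoft/tsdoc';

const resolutionCache = new Map<string, ApiItem | undefined>();

function resolveWithCache(
  apiModel: ApiModel,
  declarationReference: DocDeclarationReference,
  contextApiItem: ApiItem
): ApiItem | undefined {
  // Assumed key strategy: the reference text scoped by the context item.
  const key =
    declarationReference.emitAsTsdoc() +
    '|' +
    contextApiItem.canonicalReference.toString();

  if (resolutionCache.has(key)) {
    return resolutionCache.get(key); // cache hit: skip the model traversal
  }

  const result = apiModel.resolveDeclarationReference(
    declarationReference,
    contextApiItem
  );
  resolutionCache.set(key, result.resolvedApiItem); // undefined when unresolved
  return result.resolvedApiItem;
}
```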
Link Generation
Generates proper relative links for cross-references (sketched after the list). Features:
- Automatic path resolution
- Relative link generation
- Fallback for unresolved references
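A small illustration of the path handling involved; the filename conventions here are assumptions:

```typescript
import * as path from 'path';

// Computes a relative MDX link from the current page to the target page.
function getRelativeLink(currentFilename: string, targetFilename: string): string {
  const relative = path.relative(path.dirname(currentFilename), targetFilename);
  // Normalize Windows separators and drop the extension for MDX routing.
  return relative.replace(/\\/g, '/').replace(/\.mdx$/, '');
}

// Example: getRelativeLink('docs/classes/client.mdx', 'docs/interfaces/options.mdx')
// returns '../interfaces/options'.
```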
Type Analysis Integration
DocumentationHelper Integration
Leverages DocumentationHelper for sophisticated type analysis (a sketch follows the list). Capabilities:
- Recursive type analysis
- Nested property documentation
- Union and intersection type handling
- Generic type parameter support
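To show how these capabilities fit together, a hypothetical analysis result and a recursive renderer that turns it into nested component markup (neither mirrors the real DocumentationHelper API):

```typescript
// Hypothetical shape produced by type analysis.
interface AnalyzedType {
  name: string;
  typeText: string;          // e.g. "string | number"
  required: boolean;
  members?: AnalyzedType[];  // nested object properties, if any
}

// Recursively renders analyzed types as nested <ParamField> markup.
function renderParamFields(types: AnalyzedType[], indent = ''): string {
  return types
    .map((t) => {
      const requiredAttr = t.required ? ' required' : '';
      const children = t.members
        ? '\n' + renderParamFields(t.members, indent + '  ') + '\n' + indent
        : '';
      return (
        `${indent}<ParamField path="${t.name}" type="${t.typeText}"${requiredAttr}>` +
        `${children}</ParamField>`
      );
    })
    .join('\n');
}
```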
Complex Type Handling
Sophisticated handling of complex TypeScript types (illustrative declarations follow the list). Supported Patterns:
- Nested object literals
- Generic types with parameters
- Union types (A | B)
- Intersection types (A & B)
- Array and Promise wrappers
- Conditional types
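As a concrete, invented input, a declaration set that exercises several of these patterns:

```typescript
interface User {
  id: string;
  profile: {                               // nested object literal
    displayName: string;
    avatarUrl?: string;
  };
}

interface PagedResult<T> {                 // generic type with a parameter
  items: T[];                              // array wrapper
  next?: string | null;                    // union type on an optional member
}

type AdminUser = User & { role: 'admin' }; // intersection type

declare function fetchUsers(): Promise<PagedResult<User>>; // Promise wrapper

type UnwrapPaged<T> = T extends PagedResult<infer U> ? U : T; // conditional type
```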
Performance Optimization
Caching Benefits
API resolution caching provides significant performance improvements; a before/after sketch follows the list. Measured Improvements:
- 20-40% faster for documentation with dense cross-references
- Significant improvement for large codebases
- Reduced memory pressure from repeated traversals
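A schematic before/after comparison of the resolution path (placeholder names, not the project's code):

```typescript
import { ApiItem } from '@microsoft/api-extractor-model';

// Placeholder for the expensive traversal of the API model.
declare function walkApiModel(reference: string): ApiItem | undefined;

// Before caching: every cross-reference repeats the full traversal.
function resolveUncached(reference: string): ApiItem | undefined {
  return walkApiModel(reference);
}

// After caching: repeated references hit the Map and skip the traversal.
const cache = new Map<string, ApiItem | undefined>();
function resolveCached(reference: string): ApiItem | undefined {
  if (!cache.has(reference)) {
    cache.set(reference, walkApiModel(reference));
  }
  return cache.get(reference);
}
```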
Cache Configuration
Configurable cache settings for different use cases (an illustrative configuration shape follows the list). Tuning Guidelines:
- Small Projects: 200-500 items
- Medium Projects: 500-1000 items
- Large Projects: 1000+ items
- Memory-Constrained: Disable or reduce size
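One possible configuration shape; the option names are assumptions, but the 500-item default matches the guidance under Best Practices:

```typescript
// Hypothetical cache options; the project's actual option names may differ.
interface ApiResolutionCacheOptions {
  enabled: boolean;  // disable for very small projects or while debugging
  maxItems: number;  // upper bound to control memory usage
}

const defaultCacheOptions: ApiResolutionCacheOptions = {
  enabled: true,
  maxItems: 500,     // default; raise to 1000+ for large codebases
};
```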
Error Handling and Fallbacks
Graceful Degradation
Robust error handling with fallback strategies (sketched after the list). Fallback Hierarchy:
- Success: Generate proper relative link
- Unresolved: Use plain text with warning
- Error: Use plain text with error logging
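A sketch of that hierarchy (the logging calls and link format are illustrative):

```typescript
// Fallback hierarchy: relative link on success, plain text with a warning when
// unresolved, plain text with error logging when resolution throws.
function renderReference(
  linkText: string,
  resolveUrl: () => string | undefined
): string {
  try {
    const url = resolveUrl();
    if (url !== undefined) {
      return `[${linkText}](${url})`; // 1. success
    }
    console.warn(`Unresolved reference: ${linkText}`);
    return linkText;                  // 2. unresolved
  } catch (error) {
    console.error(`Error resolving reference "${linkText}":`, error);
    return linkText;                  // 3. error
  }
}
```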
Cache Error Handling
Defensive programming for cache operations (a sketch follows the list). Resilience Features:
- Cache failures don’t break resolution
- Resolution failures are properly logged
- Always returns valid result or throws clear error
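A defensive wrapper along those lines (the cache interface is illustrative):

```typescript
interface ResolutionCacheLike<T> {
  get(key: string): T | undefined;
  set(key: string, value: T): void;
}

// Cache failures degrade to direct resolution; resolution errors propagate.
function getOrResolve<T>(
  cache: ResolutionCacheLike<T>,
  key: string,
  resolve: () => T
): T {
  try {
    const cached = cache.get(key);
    if (cached !== undefined) {
      return cached;
    }
  } catch (error) {
    console.warn('Cache read failed; resolving directly:', error);
  }

  const value = resolve();
  try {
    cache.set(key, value);
  } catch (error) {
    console.warn('Cache write failed; continuing without caching:', error);
  }
  return value;
}
```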
Best Practices
Cache Optimization
Monitor Cache Performance: See the monitoring sketch after the lists below.
Tune Cache Size:
- Start with default (500 items)
- Monitor hit rates during development
- Increase size for large codebases
- Consider memory usage vs. performance trade-offs
Consider Disabling Caching For:
- Very small projects (minimal benefit)
- Memory-constrained environments
- Debugging resolution issues
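A simple hit/miss counter for monitoring during development (hypothetical, not an existing API):

```typescript
// Hypothetical counters for tuning the cache size against observed hit rates.
class CacheStats {
  private hits = 0;
  private misses = 0;

  public recordHit(): void {
    this.hits++;
  }

  public recordMiss(): void {
    this.misses++;
  }

  public report(): void {
    const total = this.hits + this.misses;
    const hitRate = total === 0 ? 0 : (this.hits / total) * 100;
    console.log(
      `API resolution cache: ${this.hits} hits / ${this.misses} misses ` +
        `(${hitRate.toFixed(1)}% hit rate)`
    );
  }
}
```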
Component Generation
Consistent Table Structure (an example table follows this list):
- Maintain standard column order: Name, Type, Description
- Use consistent header names for detection
- Include modifier information in type column
- Provide complete type information
- Include examples for complex types
- Document nested object structures
- Use proper declaration references
- Maintain consistent naming conventions
- Test cross-references in generated docs
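For illustration, a source table following the Name/Type/Description convention (contents invented):

```markdown
| Name    | Type                | Description                      |
| ------- | ------------------- | -------------------------------- |
| timeout | number _(optional)_ | Request timeout in milliseconds. |
| retries | number              | Number of retry attempts.        |
```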
Troubleshooting
Component Generation Issues
Symptoms: Tables are not converted to Mintlify components.
Causes:
- Incorrect table header format
- Missing required columns
- Non-standard table structure
Solutions:
- Verify that table headers match the expected patterns
- Ensure a consistent column structure
- Check the detection logic in `_isPropertyTable()` and related helpers
Cache Performance Issues
Symptoms: Low cache hit rates or poor performance.
Causes:
- Cache size too small for project
- Too many unique references
- Cache key collisions
Solutions:
- Increase the cache size
- Analyze reference patterns
- Check cache key generation
- Consider disabling for small projects
Link Resolution Failures
Symptoms: Broken links or unresolved references.
Causes:
- Invalid declaration references
- Missing API items
- Incorrect context resolution
Solutions:
- Verify the declaration reference format
- Check API model completeness
- Validate context item
- Review fallback behavior

