Write better TSDoc comments to improve AI navigation of your API reference
Mintlify automatically generates llms.txt from your documentation. This guide focuses on writing TSDoc comments that produce helpful descriptions in your generated API reference. For general llms.txt information, see Mintlify’s llms.txt documentation.
mint-tsdocs extracts frontmatter descriptions from your TSDoc comments:
```typescript
/**
 * Centralized cache coordinator for TypeScript analysis and API resolution.
 *
 * @remarks
 * Provides production and development presets with configurable cache sizes
 * and optional statistics tracking for performance monitoring.
 */
export class CacheManager {
  // ...
}
```
Generates:
```mdx
---
title: "CacheManager"
description: "Centralized cache coordinator for TypeScript analysis and API resolution"
---
```
```typescript
/**
 * Main orchestrator for documentation generation.
 *
 * @remarks
 * Coordinates template rendering, MDX generation, and navigation updates.
 * Processes API models from API Extractor into Mintlify-compatible documentation.
 */
export class MarkdownDocumenter {
  // ...
}
```
✅ Good: Explains role, lists main responsibilities, mentions output format
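For contrast, a comment that omits those details gives an AI assistant almost nothing to summarize. This counter-example is hypothetical, written only to show what the good version avoids:

```typescript
/**
 * Documenter class.
 *
 * @remarks
 * Handles documentation stuff.
 */
// ❌ Vague: names neither responsibilities nor output format,
// so the generated description cannot distinguish this class from any other.
export class MarkdownDocumenter {
  // ...
}
```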
Use concrete terms over vague ones:

✅ “Converts API models to template data for Liquid rendering”

❌ “Handles template data conversion”

✅ “LRU cache for type structure analysis with configurable size limits”

❌ “Cache implementation”
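In a TSDoc comment, the same contrast looks like this. The class names here are hypothetical, chosen only to illustrate the wording difference:

```typescript
/**
 * Handles template data conversion.
 */
// ❌ Vague: what is converted, into what, and for whom?
export class TemplateDataHandler { }

/**
 * Converts API models to template data for Liquid rendering.
 */
// ✅ Concrete: names the input (API models), the output (template data),
// and the consumer (the Liquid rendering step).
export class ApiModelConverter { }
```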
```typescript
/**
 * Centralized cache coordinator for TypeScript analysis and API resolution.
 */
export class CacheManager { }

/**
 * LRU cache for type structure analysis with eviction and statistics.
 */
export class TypeAnalysisCache { }

/**
 * LRU cache for API reference resolution with cached decorator support.
 */
export class ApiResolutionCache { }
```
All cache classes follow the pattern: “[Type] cache for [purpose] with [features]”
User: "What caching options are available?"AI: "I see CacheManager, TypeAnalysisCache, and ApiResolutionCache classes" [Can't explain what each does without reading full pages]
After optimization:
User: "What caching options are available?"AI: "There are three caching components: - CacheManager: Centralized coordinator with presets - TypeAnalysisCache: For type structure analysis - ApiResolutionCache: For API reference resolution Each supports LRU eviction and optional statistics."
The AI can answer from llms.txt alone, without fetching all pages.
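For illustration, the generated llms.txt entries for these classes might look roughly like the following. The paths and exact layout are assumptions; the real file is produced by Mintlify, so treat this as a sketch of the shape, not the actual output:

```md
- [CacheManager](/api/cachemanager): Centralized cache coordinator for TypeScript analysis and API resolution
- [TypeAnalysisCache](/api/typeanalysiscache): LRU cache for type structure analysis with eviction and statistics
- [ApiResolutionCache](/api/apiresolutioncache): LRU cache for API reference resolution with cached decorator support
```

Because each description already names the type, purpose, and features, an assistant reading only these lines can answer the question above without fetching any page.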