For organisations with multiple teams using AI, a shared prompt library prevents reinvention + spreads tested patterns. Treat prompts like an internal code library: structured, versioned, reviewed, tested. Makes the org as a whole better at AI.
- Library structure: prompts as YAML / Markdown in a repo, organised by use case (extraction, classification, summary, etc.)
- Governance: PR review for changes; eval harness per prompt; deprecation process
- Versioning: prompts referenced by ID + version
- Discovery: searchable internal docs
- Lift: reuse + quality + onboarding speed
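The "referenced by ID + version" idea can be sketched as a small in-memory registry. This is a minimal illustration, not a real library API; the class and field names (`PromptRegistry`, `Prompt`, `summarise-ticket`) are assumptions for the example:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class Prompt:
    id: str
    version: int
    template: str


class PromptRegistry:
    """In-memory registry; a real one would be backed by the repo."""

    def __init__(self) -> None:
        self._prompts: dict[tuple[str, int], Prompt] = {}

    def register(self, prompt: Prompt) -> None:
        self._prompts[(prompt.id, prompt.version)] = prompt

    def get(self, prompt_id: str, version: Optional[int] = None) -> Prompt:
        # Pinned lookup when a version is given; latest otherwise.
        if version is not None:
            return self._prompts[(prompt_id, version)]
        versions = [v for (pid, v) in self._prompts if pid == prompt_id]
        return self._prompts[(prompt_id, max(versions))]


registry = PromptRegistry()
registry.register(Prompt("summarise-ticket", 1, "Summarise: {ticket}"))
registry.register(Prompt("summarise-ticket", 2, "Summarise in one line: {ticket}"))

latest = registry.get("summarise-ticket")        # unpinned -> version 2
pinned = registry.get("summarise-ticket", 1)     # pinned   -> version 1
```

Callers that pin a version keep working when the prompt evolves; unpinned callers pick up improvements automatically. Both behaviours are useful, which is why the ID + version scheme pays off.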
Structure
- Repo / directory layout: `prompts/<use-case>/<name>/`
- Each prompt: YAML with template, variables, expected output schema, metadata
- Eval examples: per prompt, 10-50 representative test cases with expected outputs
- Documentation: Markdown explaining purpose, parameters, known limitations
- Example use: Python / TypeScript samples showing how to invoke
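One way the pieces above might fit together: a hypothetical prompt definition (mirrored here as a Python dict so the sketch has no YAML dependency; the field names follow the list above but are not a standard) plus a minimal loader that validates variables before rendering:

```python
# Hypothetical contents of prompts/summary/ticket-summary/prompt.yaml,
# written as a dict for a self-contained sketch:
PROMPT_DEF = {
    "id": "ticket-summary",
    "version": 3,
    "template": (
        "Summarise the support ticket below in {max_sentences} sentences:"
        "\n\n{ticket}"
    ),
    "variables": ["max_sentences", "ticket"],
    "output_schema": {"type": "string"},
    "metadata": {"owner": "support-platform", "deprecated": False},
}


def render(prompt_def: dict, **variables: str) -> str:
    """Fill the template, refusing missing or unknown variables."""
    expected = set(prompt_def["variables"])
    if set(variables) != expected:
        raise ValueError(f"expected variables {expected}, got {set(variables)}")
    return prompt_def["template"].format(**variables)


text = render(PROMPT_DEF, max_sentences="2", ticket="App crashes on login.")
```

Validating variables up front turns a silently broken prompt (a `{ticket}` left unfilled) into a loud failure at call time, which is the same discipline you'd expect from a typed function signature.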
Governance
- PR review for changes to library prompts
- Eval harness CI on every change
- Deprecation process (90/60/30 days)
- Per-prompt owner (engineer responsible for maintenance)
- Periodic review (quarterly): which prompts need updating or retiring
- Internal forum for discussing prompt patterns
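The "eval harness CI on every change" item can be sketched as follows. `call_model` is a stand-in stub so the example runs without an API key, and exact-match scoring is the simplest possible rule; real harnesses often use fuzzy or rubric-based scoring:

```python
def call_model(prompt: str) -> str:
    # Stub standing in for a real model client call.
    return "billing"


# Per-prompt eval cases, per the 10-50 representative cases above
# (two shown here to keep the sketch short):
EVAL_CASES = [
    {"input": "Classify this ticket: 'I was charged twice.'", "expected": "billing"},
    {"input": "Classify this ticket: 'Refund posted twice.'", "expected": "billing"},
]


def run_evals(cases: list[dict]) -> tuple[int, int]:
    """Return (passed, total) so CI can fail the build on any regression."""
    passed = sum(
        1 for case in cases if call_model(case["input"]) == case["expected"]
    )
    return passed, len(cases)


passed, total = run_evals(EVAL_CASES)
# In CI you would exit nonzero when passed < total.
```

Running this on every PR is what makes prompt review meaningful: a change that looks harmless in a diff but breaks known cases gets caught before merge.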
Verdict
For mid-to-large organisations adopting AI, a shared prompt library is the difference between every team reinventing patterns and the organisation improving collectively. Structure it as code; review changes; eval against tests; document well. The library becomes the institutional memory of which AI patterns work for your domain.
Bottom line
Prompt library = institutional AI memory. See prompt versioning.