For application engineers consuming an internal AI tier, developer experience determines adoption. Bad DX (ad-hoc auth, opaque errors, non-deterministic responses, no tracing) makes engineers route around the AI tier. Good DX makes it the obvious choice.
Good AI DX: OpenAI-compatible endpoint, per-engineer API keys with rate limits, stable model identifiers, structured outputs default, request tracing, local-dev parity. Anti-patterns: bespoke API shape, shared keys, version drift between dev / prod, opaque costs.
Good DX
- OpenAI-compatible API: engineers don't learn a bespoke client; `OpenAI(base_url=...)` works
- Per-engineer API keys: trace usage to a person; rate-limit to prevent runaway tests breaking prod
- Stable model identifiers: `company/production-llm-v3`, not `llama-3.1-8b-fp8-instruct-q4_K_M-pinned-abc123`
- Structured outputs by default: schema validation built into the platform, not per-app
- Request tracing: `request_id` in every response; one-click trace lookup in dashboards
- Local-dev parity: dev environment mirrors prod model + prompt versions
- Cost transparency: per-request cost shown in tracing
- Documentation: working examples in your stack's primary languages
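Concretely, "OpenAI-compatible" means the call shape is the standard one — with the official SDK it's just `OpenAI(base_url=..., api_key=...)`. A minimal stdlib sketch of the request such a client would assemble against the gateway; the gateway URL, env-var names, and model alias are hypothetical placeholders, not a real platform's values:

```python
# Sketch: the request an OpenAI-compatible client sends to an internal gateway.
# AI_GATEWAY_URL / AI_GATEWAY_KEY and the model alias are illustrative only.
import os

def build_request(model_alias: str, messages: list[dict]) -> dict:
    """Assemble the URL, headers, and body of an OpenAI-style chat call."""
    base_url = os.environ.get("AI_GATEWAY_URL", "https://ai.internal.example/v1")
    return {
        "url": base_url + "/chat/completions",
        "headers": {
            # Per-engineer key: usage is attributed to a person, not a shared team key.
            "Authorization": f"Bearer {os.environ.get('AI_GATEWAY_KEY', 'sk-engineer-demo')}",
        },
        # Stable alias, not a pinned backend checkpoint name.
        "body": {"model": model_alias, "messages": messages},
    }

req = build_request("company/production-llm-v3", [{"role": "user", "content": "ping"}])
```

Because the shape matches the public API, the same code works in local dev against a dev gateway just by changing `AI_GATEWAY_URL` — which is what local-dev parity buys you.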
Anti-patterns
- Bespoke API shape (re-learning curl invocations for every team)
- Shared API keys (no usage attribution)
- Production-only access (engineers can't test locally)
- Opaque errors ("something went wrong" instead of structured error codes)
- Version drift between dev / prod (subtle quality differences nobody anticipated)
- Hidden costs (engineers don't know their feature is £500/mo of inference)
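The opaque-errors anti-pattern has a cheap fix: structured error bodies with machine-readable codes, so callers can decide retry vs. fail without string-matching messages. A sketch, assuming a hypothetical error schema (the codes and JSON shape are illustrative, not any real platform's contract):

```python
# Sketch: structured gateway errors instead of "something went wrong".
# The error codes and payload shape below are invented for illustration.
import json

# Codes a client may safely retry (with backoff); everything else is terminal.
RETRYABLE = {"rate_limited", "upstream_timeout", "model_overloaded"}

def classify_error(body: str) -> tuple[str, bool]:
    """Return (error_code, should_retry) from a structured error response."""
    err = json.loads(body)["error"]
    return err["code"], err["code"] in RETRYABLE

code, retry = classify_error(
    '{"error": {"code": "rate_limited", "message": "quota exceeded", "request_id": "req_123"}}'
)
# A rate-limit error is retryable; a schema-validation error would not be.
```

Note the `request_id` in the error payload: pairing every error with a traceable id is what makes the one-click trace lookup above possible.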
Tools
- OpenAI SDK (Python / Node / Go) pointed at your endpoint — familiar to all engineers
- LiteLLM as the routing layer — transparent fallback / retry
- Per-engineer keys via your auth platform (Auth0 / Okta / native)
- Tracing: OpenTelemetry → Honeycomb / Datadog / Jaeger
- Cost dashboard: per-tenant / per-feature / per-engineer attribution
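The cost-dashboard piece reduces to simple arithmetic once the gateway records token counts per request. A sketch of per-request cost attribution — the price table values are made-up placeholders, and a real dashboard would key this by tenant/feature/engineer from the API key:

```python
# Sketch: per-request cost from token usage. Prices are placeholder values,
# not real rates; a production table would be loaded from config.
PRICES_PER_1K = {
    "company/production-llm-v3": {"in": 0.0005, "out": 0.0015},
}

def request_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Cost of one request: input and output tokens priced per 1K."""
    p = PRICES_PER_1K[model]
    return prompt_tokens / 1000 * p["in"] + completion_tokens / 1000 * p["out"]

# e.g. a 1K-in / 1K-out request against the placeholder rates:
cost = request_cost("company/production-llm-v3", 1000, 1000)
```

Surfacing this number in the tracing response is what turns "£500/mo of inference" from a quarterly surprise into a per-feature line item.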
Verdict
Good AI DX is mostly standard developer-platform discipline applied to a new domain. OpenAI compatibility removes the biggest friction; per-engineer keys plus cost transparency make AI usage visible and accountable. Invest here early; the alternative is engineers building shadow AI integrations that bypass your platform.
Bottom line
OpenAI-compatible + per-engineer + transparent. See the OpenAI API guide.