A customer applies for insurance through your platform. Your self-hosted LLM analyses their application and flags it for manual review with a risk score of 78/100. The customer asks why. Under GDPR Article 22 and the UK Data Protection Act 2018, they have the right to meaningful information about the logic involved in automated decision-making that significantly affects them. “The AI said so” is not a lawful explanation. You need to explain which factors contributed to the score, how those factors were weighted, and what the customer can do to challenge the decision. This guide covers implementing explainability for AI systems on dedicated GPU servers.
## Legal Requirements
The right to explanation under UK data protection law has several dimensions:
| Requirement | Source | What You Must Provide |
|---|---|---|
| Meaningful information about logic | GDPR Art. 13(2)(f), Art. 14(2)(g) | How the system works in general terms |
| Significance and consequences | GDPR Art. 13(2)(f) | What the decision means for the individual |
| Right not to be subject to automated decisions | GDPR Art. 22(1) | Human review pathway |
| Right to contest the decision | GDPR Art. 22(3) | Process to challenge and obtain human intervention |
| Safeguards for automated decisions | UK DPA 2018 s.14 | Measures to prevent discrimination |
Note: Article 22 applies to solely automated decisions that produce legal effects or similarly significant effects. Not every AI output triggers these requirements — a chatbot suggesting products does not, but an AI scoring a credit application does.
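The applicability test can be expressed as a simple gate at the top of a decision pipeline. A minimal sketch (the function and parameter names are illustrative, not a legal standard):

```python
def article_22_applies(solely_automated: bool,
                       legal_effect: bool,
                       similarly_significant: bool) -> bool:
    """Return True when a decision triggers GDPR Article 22 rights.

    Article 22 covers decisions made solely by automated means that
    produce legal effects or similarly significant effects.
    """
    return solely_automated and (legal_effect or similarly_significant)

# A chatbot suggesting products: automated, but not significant.
print(article_22_applies(True, False, False))   # False
# Scoring a credit application: automated and similarly significant.
print(article_22_applies(True, False, True))    # True
```

Note that adding a rubber-stamp human step does not flip `solely_automated` to `False`; the human involvement must be meaningful (see the review pathway section below).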
## Inference Logging for Explainability
You cannot explain a decision you did not record. Log every decision-grade inference with: the complete input (or a reference to it), the model version and configuration used, the output and any scores generated, and a timestamp. For vLLM deployments, implement a logging middleware that captures request-response pairs before they leave the server:
```python
import datetime

def log_decision(request_data, response_data, model_id):
    """Log an AI decision for explainability compliance."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": "llama-3.1-70b-q4",
        # hash_pii, store_encrypted and write_audit_log are
        # deployment-specific helpers.
        "input_hash": hash_pii(request_data),  # hash PII, keep structure
        "input_reference": store_encrypted(request_data),
        "output": response_data,
        "decision_type": "risk_assessment",
        "requires_explanation": True,
        "retention_days": 365,
    }
    write_audit_log(record)
```
Store logs on private infrastructure with appropriate retention periods. Decision logs may themselves contain personal data and must be handled accordingly.
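The helper functions named in the example above (`hash_pii`, `write_audit_log`) are deployment-specific. A stdlib-only sketch, assuming a per-deployment salt and an append-only JSONL log file (the salt handling and log path are illustrative; production systems should use managed keys and encrypted storage):

```python
import hashlib
import json
from pathlib import Path

AUDIT_LOG = Path("/var/log/ai-decisions/audit.jsonl")  # illustrative path

def hash_pii(data: dict, salt: str = "per-deployment-secret") -> str:
    """Hash the input deterministically so auditors can match log
    records to requests without storing raw personal data in the log."""
    canonical = json.dumps(data, sort_keys=True)
    return hashlib.sha256((salt + canonical).encode()).hexdigest()

def write_audit_log(record: dict, path: Path = AUDIT_LOG) -> None:
    """Append one JSON line per decision. Append-only files make
    tampering easier to detect and retention easier to enforce."""
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Canonicalising with `sort_keys=True` means the same input always produces the same hash, regardless of key order in the incoming request.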
## Technical Explainability Approaches
For LLM-based decisions, several techniques provide explanations. Prompt-based explanations ask the model to articulate its reasoning as part of the output — include a “reasoning” field in the prompt template. Factor attribution uses chain-of-thought prompting to identify which input features most influenced the output. Counterfactual explanations show what would need to change for a different outcome. With open-source models, you have full access to implement these techniques without API restrictions.
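One way to combine these techniques is a prompt template that requests a structured explanation alongside the score. A sketch — the field names and schema are illustrative, not a standard:

```python
EXPLANATION_PROMPT = """Assess the following insurance application and respond
in JSON with exactly these fields:
  "risk_score": integer 0-100,
  "key_factors": list of input factors that most influenced the score,
  "factor_weights": relative importance of each key factor, summing to 1.0,
  "reasoning": step-by-step explanation in plain language,
  "counterfactual": the smallest change to the application that would
                    move the score below the manual-review threshold.

Application:
{application}
"""

def build_explanation_prompt(application: str) -> str:
    """Embed the application text in the structured-explanation prompt."""
    return EXPLANATION_PROMPT.format(application=application)
```

Parsing the JSON response gives you the raw material for both the audit log and the plain-language explanation sent to the data subject.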
The explanation must be meaningful to the data subject — technical model internals are not sufficient. Translate model reasoning into plain language that a non-technical person can understand and act upon.
## Human Review Pathways
Every automated decision system needs a human review pathway. When a data subject contests a decision, a trained human reviewer must be able to access the original input, the model’s output and logged reasoning, the explanation provided to the data subject, and relevant context. The reviewer must have genuine authority to override the AI decision. A rubber-stamp review where humans always defer to the model does not satisfy Article 22 requirements. Train reviewers on model limitations and known failure modes.
## Privacy Notice and Documentation
Your privacy notice must inform data subjects before automated decisions are made. Describe: which decisions use AI, what data is processed, how the AI system works in general terms, the significance of the decision, and how to request human review. Teams deploying AI chatbots for customer service, document processing for applications, or vision models for identity verification should each document their specific explainability approach.
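One way to keep the privacy notice accurate is a machine-readable registry of AI decision points, with one entry per disclosure item listed above. A sketch — the field names and contact address are illustrative:

```python
AI_DECISION_REGISTRY = [
    {
        "decision": "insurance application risk scoring",
        "uses_ai": True,
        "data_processed": ["application form fields", "claims history"],
        "logic_summary": (
            "An LLM weighs application factors to produce a 0-100 risk "
            "score; scores above a threshold go to manual review."
        ),
        "significance": "May delay or affect the application outcome.",
        "human_review": "Contact dpo@example.com to request human review.",
        "article_22": True,
    },
]

def registry_entries_requiring_notice() -> list[dict]:
    """Entries that must appear in the privacy notice."""
    return [e for e in AI_DECISION_REGISTRY if e["uses_ai"]]
```

Generating the privacy-notice section from this registry means a new AI deployment cannot go live undisclosed without also skipping the registry.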
## Implementation Checklist
Map every AI deployment to determine which produce decisions triggering Article 22 rights. For those that do: implement inference logging, build explanation generation into the prompt pipeline, create human review workflows, update privacy notices, train review staff, and test the end-to-end process from data subject request to explanation delivery. Review GDPR compliance documentation for data protection specifics, governance frameworks for organisational structure, and industry use cases for sector-specific requirements. See technical tutorials for logging implementation.
## Explainable AI Infrastructure
Dedicated GPU servers with full inference logging, audit trails, and data control. Build AI systems you can explain. UK-hosted.
Browse GPU Servers