You will build a pipeline that ingests customer reviews, classifies sentiment (positive, negative, neutral) with aspect-level detail, stores results in a database, and serves a live dashboard showing sentiment trends. The end result: your product team sees real-time sentiment breakdowns by product feature, with drill-down to individual reviews, updated every 5 minutes. No customer feedback leaves your infrastructure. The sections below walk through the full pipeline, built to run on dedicated GPU infrastructure.
## Pipeline Architecture
| Component | Tool | Role |
|---|---|---|
| LLM classifier | Llama 3.1 8B Instruct via vLLM | Sentiment + aspect extraction |
| Database | PostgreSQL | Store classified results |
| Dashboard | FastAPI + Chart.js | Real-time visualisation |
| Scheduler | APScheduler | Periodic batch processing |
## LLM-Based Sentiment Classification
```python
from openai import OpenAI
import json

# vLLM exposes an OpenAI-compatible API; the api_key value is unused.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

def classify_sentiment(review: str) -> dict:
    response = client.chat.completions.create(
        model="meta-llama/Llama-3.1-8B-Instruct",
        messages=[{
            "role": "system",
            "content": """Classify the sentiment of this customer review. Return JSON:
{"overall": "positive|negative|neutral|mixed",
 "score": 0.0-1.0,
 "aspects": [{"feature": "name", "sentiment": "positive|negative", "quote": "relevant excerpt"}],
 "key_issues": ["list of specific complaints if any"],
 "key_praise": ["list of specific positive points if any"]}"""
        }, {"role": "user", "content": review}],
        max_tokens=300,
        temperature=0.0,  # deterministic output for consistent classification
    )
    return json.loads(response.choices[0].message.content)
```
Compared with traditional sentiment classifiers, an instruction-tuned LLM extracts aspect-level sentiment in the same pass (e.g., “pricing: negative, support: positive”) and copes better with nuance, sarcasm, and context. vLLM's continuous batching groups concurrent classification requests on the GPU, so throughput holds up as the batch job fans out.
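Even at temperature 0, instruction-tuned models occasionally wrap the JSON in a markdown fence or add a sentence of preamble, which breaks a bare `json.loads`. A small helper (the function name and required-key set are illustrative, not part of the pipeline above) makes the classifier tolerant of that:

```python
import json

# Keys the rest of the pipeline depends on; key_issues/key_praise are optional.
REQUIRED_KEYS = {"overall", "score", "aspects"}

def parse_classification(raw: str) -> dict:
    """Extract the first {...} object from a model reply, tolerating
    markdown fences or surrounding prose."""
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end <= start:
        raise ValueError("no JSON object in model reply")
    result = json.loads(raw[start : end + 1])
    missing = REQUIRED_KEYS - result.keys()
    if missing:
        raise ValueError(f"reply missing keys: {sorted(missing)}")
    return result
```

`classify_sentiment` can then return `parse_classification(response.choices[0].message.content)` instead of calling `json.loads` directly.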
## Batch Processing Pipeline
```python
import json

import psycopg2
from apscheduler.schedulers.background import BackgroundScheduler

def process_new_reviews():
    conn = psycopg2.connect("dbname=sentiment")
    try:
        cur = conn.cursor()
        cur.execute("SELECT id, text FROM reviews WHERE sentiment IS NULL LIMIT 100")
        for review_id, text in cur.fetchall():
            try:
                result = classify_sentiment(text)  # classifier defined above
            except Exception:
                # Leave the row NULL so it is retried on the next run.
                continue
            cur.execute(
                """
                UPDATE reviews SET
                    sentiment = %s, score = %s,
                    aspects = %s, processed_at = NOW()
                WHERE id = %s
                """,
                (result["overall"], result["score"],
                 json.dumps(result["aspects"]), review_id),
            )
        conn.commit()
    finally:
        conn.close()

scheduler = BackgroundScheduler()
scheduler.add_job(process_new_reviews, "interval", minutes=5)
scheduler.start()
```
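The batch job and the dashboard queries assume a `reviews` table with a JSONB `aspects` column (so `jsonb_array_elements(aspects)` works). A schema along these lines would fit; the column names are taken from the queries in this tutorial, everything else (types, index name) is an assumption:

```python
# DDL for the table the pipeline expects. Apply it once with the same
# psycopg2 connection the batch job uses:
#   cur.execute(SCHEMA); conn.commit()
SCHEMA = """
CREATE TABLE IF NOT EXISTS reviews (
    id           BIGSERIAL PRIMARY KEY,
    text         TEXT NOT NULL,
    sentiment    TEXT,        -- positive | negative | neutral | mixed
    score        REAL,        -- model confidence in [0.0, 1.0]
    aspects      JSONB,       -- aspect-level output; JSONB so
                              -- jsonb_array_elements(aspects) works
    processed_at TIMESTAMPTZ  -- NULL until the batch job classifies it
);

-- Partial index keeps the "WHERE sentiment IS NULL" poll fast as the
-- table grows.
CREATE INDEX IF NOT EXISTS reviews_unprocessed_idx
    ON reviews (id) WHERE sentiment IS NULL;
"""
```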
## Dashboard API
```python
import psycopg2
from fastapi import FastAPI

app = FastAPI()

# A single shared connection keeps the example small; use a connection
# pool (e.g. psycopg2.pool) under real load.
conn = psycopg2.connect("dbname=sentiment")

@app.get("/api/sentiment-summary")
async def sentiment_summary(days: int = 30):
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT sentiment, COUNT(*), AVG(score),
                   DATE(processed_at) AS date
            FROM reviews
            WHERE processed_at > NOW() - INTERVAL '1 day' * %s
            GROUP BY sentiment, DATE(processed_at)
            ORDER BY date
            """,
            (days,),
        )
        return {"data": cur.fetchall()}

@app.get("/api/aspect-breakdown")
async def aspect_breakdown():
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT aspect->>'feature' AS feature,
                   aspect->>'sentiment' AS sentiment,
                   COUNT(*) AS count
            FROM reviews, jsonb_array_elements(aspects) AS aspect
            GROUP BY feature, sentiment
            ORDER BY count DESC
            """
        )
        return {"data": cur.fetchall()}
```
## Dashboard Frontend
The dashboard renders sentiment trends over time (line chart), aspect-level breakdown (stacked bar chart), recent negative reviews requiring attention (filterable table), and sentiment distribution (pie chart). Connect Chart.js to the API endpoints with auto-refresh every 5 minutes. Add filters for date range, product line, and sentiment category.
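A minimal page wiring Chart.js to the summary endpoint might look like the sketch below; the endpoint path matches the API above, while the page structure, element IDs, and chart configuration are illustrative. Serve it from the FastAPI app (e.g. return `HTMLResponse(DASHBOARD_HTML)` from a `GET /` route):

```python
# Minimal dashboard page: a line chart of daily sentiment counts that
# re-fetches every 5 minutes.
DASHBOARD_HTML = """<!doctype html>
<html>
<head><script src="https://cdn.jsdelivr.net/npm/chart.js"></script></head>
<body>
<canvas id="trend"></canvas>
<script>
let chart;
async function refresh() {
  const res = await fetch('/api/sentiment-summary?days=30');
  const rows = (await res.json()).data;  // [sentiment, count, avg, date]
  const labels = [...new Set(rows.map(r => r[3]))];
  const bySentiment = {};
  for (const [sentiment, count, , date] of rows) {
    (bySentiment[sentiment] ??= {})[date] = count;
  }
  const datasets = Object.entries(bySentiment).map(([name, byDate]) => ({
    label: name,
    data: labels.map(d => byDate[d] ?? 0),
  }));
  if (chart) chart.destroy();
  chart = new Chart(document.getElementById('trend'),
                    {type: 'line', data: {labels, datasets}});
}
refresh();
setInterval(refresh, 5 * 60 * 1000);  // auto-refresh every 5 minutes
</script>
</body>
</html>
"""
```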
## Scaling and Accuracy
For higher throughput:

- Send classification requests concurrently so vLLM's continuous batching can group them on the GPU; submitting reviews in groups of roughly 10 typically yields a 3-5x throughput improvement over one-at-a-time calls.
- Use a smaller model (e.g., Phi-3 or Gemma 2B) when you only need positive/negative classification without aspect extraction.
- Apply a confidence threshold: flag low-confidence classifications for human review rather than trusting them in the dashboard.

For multilingual reviews, add language detection and either translate to English before classification or switch to a multilingual model. Deploy on private infrastructure to keep customer data secure. See chatbot integrations for conversational analytics, more tutorials and industry examples for sentiment analysis in practice, and infrastructure guides for production deployment.
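The confidence-threshold idea above can be sketched as a small routing step. The cutoff value, the `needs_review` flag, and the choice to always escalate "mixed" results are illustrative assumptions, not part of the pipeline:

```python
REVIEW_THRESHOLD = 0.6  # illustrative cutoff; tune on a labelled sample

def route_classification(result: dict, threshold: float = REVIEW_THRESHOLD) -> dict:
    """Mark low-confidence classifications for human review instead of
    trusting them in the dashboard."""
    needs_review = (
        result.get("score", 0.0) < threshold
        or result.get("overall") == "mixed"  # ambiguous either way
    )
    return {**result, "needs_review": needs_review}
```

The batch job can store the flag alongside the sentiment and the dashboard can surface flagged reviews in its "requiring attention" table.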
Analytics AI GPU Servers
Dedicated GPU servers for real-time sentiment analysis and NLP pipelines. Process customer data on isolated UK infrastructure.