CrewAI is an agent framework built around role-based agents: each agent has a role, a goal, and a set of tools, and agents collaborate on tasks. On our dedicated GPU hosting, CrewAI pointed at a self-hosted LLM is a clean way to build structured multi-agent workflows without per-token API costs.
Install
pip install crewai crewai-tools
LLM Config
CrewAI uses LiteLLM under the hood, which supports OpenAI-compatible endpoints:
from crewai import LLM

llm = LLM(
    model="openai/llama-3.3-70b",
    base_url="http://localhost:8000/v1",
    api_key="not-needed",  # self-hosted servers usually ignore the key
)
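Since the server is OpenAI-compatible, it can help to know what request LiteLLM ultimately sends on CrewAI's behalf. A minimal sketch of the equivalent raw chat-completions payload (the model name and base URL are the ones from the config above; the "openai/" prefix is a LiteLLM routing hint and is not sent to the server):

```python
import json

base_url = "http://localhost:8000/v1"
url = f"{base_url}/chat/completions"

payload = {
    "model": "llama-3.3-70b",  # served model name, without the "openai/" prefix
    "messages": [{"role": "user", "content": "Say hello"}],
}
body = json.dumps(payload)

# POST `body` to `url` with any HTTP client to smoke-test the endpoint
# before wiring it into CrewAI.
```

If this call fails, CrewAI will fail in the same way, so it is a quick sanity check of the self-hosted stack.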
Example
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Research Analyst",
    goal="Find accurate, current information on the topic",
    backstory="Experienced analyst with a sharp eye for sources",
    llm=llm,
    tools=[web_search_tool],  # any CrewAI tool instance, e.g. from crewai_tools
)

writer = Agent(
    role="Technical Writer",
    goal="Produce a clear, structured summary",
    backstory="Writes for a technical audience",
    llm=llm,
)

research_task = Task(
    description="Research the current state of open-weights LLMs",
    expected_output="Bullet-point research notes with sources",  # required field
    agent=researcher,
)

write_task = Task(
    description="Write a 500-word summary based on the research",
    expected_output="A 500-word summary for a technical audience",
    agent=writer,
    context=[research_task],  # receives the research task's output
)

crew = Crew(agents=[researcher, writer], tasks=[research_task, write_task])
result = crew.kickoff()
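Conceptually, a default (sequential) crew is an ordered pipeline: each task runs with its agent, and a task that lists earlier tasks in `context` receives their outputs. A plain-Python sketch of that control flow, with hypothetical names and no CrewAI dependency:

```python
def run_sequential(tasks):
    """Run tasks in order, feeding each task the outputs of its context tasks."""
    outputs = {}
    for task in tasks:
        # Collect outputs of previously completed tasks named in `context`.
        context = [outputs[dep] for dep in task.get("context", [])]
        outputs[task["name"]] = task["run"](context)
    return outputs

tasks = [
    {"name": "research",
     "run": lambda ctx: "notes on open-weights LLMs"},
    {"name": "write", "context": ["research"],
     "run": lambda ctx: f"summary based on: {ctx[0]}"},
]

results = run_sequential(tasks)
```

The real framework adds prompting, tool use, and retries on top, but the task-ordering and context-passing behave like this pipeline.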
CrewAI vs AutoGen
- CrewAI: role-oriented, declarative, good for structured workflows with known steps
- AutoGen: message-passing between agents, good for open-ended conversations and code execution
Pick CrewAI when the workflow is predictable. Pick AutoGen when agents need free-form collaboration.
Self-Hosted CrewAI Hosting
UK dedicated GPU servers with LLM and CrewAI preconfigured.
Browse GPU Servers