Open Interpreter executes LLM-generated code locally to complete user tasks: running shell commands, manipulating files, installing packages. Pointed at a self-hosted LLM on our dedicated GPU hosting, it becomes a private automation agent with no per-token API fees.
Install
pip install open-interpreter
Config
from interpreter import interpreter
interpreter.llm.model = "openai/qwen-coder-32b"
interpreter.llm.api_base = "http://localhost:8000/v1"
interpreter.llm.api_key = "not-needed"
interpreter.auto_run = False # require confirmation per command
interpreter.chat("Analyse the sales CSV files in ./data and produce a summary")
Setting auto_run=False prompts the user before each command executes. This is the right production default – never auto-run against a live system.
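Human confirmation can be paired with an automated screen. The sketch below is a hypothetical helper, not part of Open Interpreter's API: a crude deny-list that flags obviously risky commands before a person even sees the approval prompt.

```python
import re

# Hypothetical deny-list of known-risky shell patterns (assumption:
# extend this for your own environment).
DENY_PATTERNS = [
    r"\brm\s+-rf\b",        # recursive force delete
    r"\bsudo\b",            # privilege escalation
    r"\bcurl\b.*\|\s*sh\b", # pipe-to-shell install
]

def looks_dangerous(command: str) -> bool:
    """Return True if the command matches any known-risky pattern."""
    return any(re.search(p, command) for p in DENY_PATTERNS)

print(looks_dangerous("ls -la ./data"))  # False
print(looks_dangerous("sudo rm -rf /"))  # True
```

A check like this is a convenience filter, not a security boundary; it complements, rather than replaces, per-command confirmation.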
Use Cases
- Data analysis on local files (when uploading is not an option)
- System administration via natural language
- Repetitive refactoring tasks with user oversight
- Interactive debugging
Safety
Open Interpreter executes arbitrary code. Running it as root or in a production environment is dangerous. Safer patterns:
- Run in a container with limited filesystem access
- Keep auto_run=False at all times
- Scope the working directory: mount only the data the agent needs
- Never give it credentials it does not need
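A container provides the strongest isolation. A lighter-weight complement, sketched below with only the standard library, is to start the session inside a throwaway working directory so the agent's relative paths resolve away from your real files. This is a convenience measure under stated assumptions, not a security boundary.

```python
import os
import tempfile

# Create a disposable working directory for the session.
workdir = tempfile.mkdtemp(prefix="agent-")

# Change into it before starting the chat, so relative paths the
# agent generates resolve inside the scratch area rather than your
# home directory. Copy in only the files the task actually needs.
os.chdir(workdir)

print(os.getcwd())  # e.g. /tmp/agent-abc123
```

Combined with a read-only mount of the input data inside a container, this keeps the blast radius of a bad command small.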
For code execution in unattended production workflows, prefer smolagents with explicit sandboxing.
Private Code Automation Hosting
Open Interpreter + self-hosted LLM on UK dedicated GPUs with sandboxing.
Browse GPU Servers | See smolagents