How to Use Claude API with Aider in 2026: Custom Endpoint Setup Guide
Aider is one of the few coding agents that still feels built for working developers. It's terminal-first, git-aware, and blunt about what it changed. Pair it with Claude and you get a setup that's especially good at refactors, bug hunts, and messy multi-file edits.
The annoying part isn't Aider itself. It's getting Claude wired in the right way. A lot of guides still assume direct Anthropic access, but in 2026 many teams want an OpenAI-compatible endpoint instead: one API key, one billing path, and the option to switch models without rebuilding the toolchain.
Short version: set OPENAI_API_BASE, set OPENAI_API_KEY, then run Aider with --model openai/claude-sonnet-4-6. If your endpoint speaks the OpenAI chat format, Aider is happy.
Why use Claude with Aider?
Claude is still excellent at code editing. It usually keeps structure intact, writes cleaner diffs than most chat-first tools, and it's less likely to bulldoze a file just to fix one small bug. In Aider, that matters. You're not asking for a pretty explanation. You're asking for a patch you can actually commit.
For most repos, Claude Sonnet 4.6 is the right default. It's fast enough for interactive edits and good enough for almost all day-to-day work. Save Claude Opus 4.6 for planning-heavy prompts, larger refactors, or cases where one better answer is cheaper than three retries.
The setup Aider expects
Aider can talk to any OpenAI-compatible API. That means you don't have to use OpenAI's own models. You point Aider at a compatible base URL, give it a key, and tell it which model name to use.
The two environment variables that matter are:
```shell
export OPENAI_API_BASE=https://api.kissapi.ai/v1
export OPENAI_API_KEY=your_api_key_here
```
Then launch Aider with an OpenAI-style model reference:
```shell
aider --model openai/claude-sonnet-4-6
```
That openai/ prefix trips people up all the time. It tells Aider to route the request through its OpenAI-compatible client against your custom base URL. Without it, Aider may apply the wrong provider logic and you'll waste 20 minutes chasing a fake auth problem.
Step 1: Get a Claude-capable endpoint
If you already have direct Anthropic access, that works. But there are good reasons to use an OpenAI-compatible gateway instead:
- one key for Claude, GPT-5, Gemini, and other models
- easier team billing
- fewer region and card-issuer headaches
- the same endpoint works across Aider, Cursor, Cline, and your own scripts
KissAPI is one option here. It exposes Claude models behind an OpenAI-compatible /v1 endpoint, so Aider setup is straightforward and you can switch providers later without changing your workflow.
Step 2: Test the API before opening Aider
This is the step most people skip, and it's why setup guides turn into guesswork. Before you blame Aider, make sure your endpoint and model actually work.
curl test
```shell
curl https://api.kissapi.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "claude-sonnet-4-6",
    "messages": [
      {"role": "user", "content": "Reply with the word ready."}
    ],
    "temperature": 0
  }'
```
If that returns a normal chat completion, your key, base URL, and model name are all valid. Good. Now Aider has a fair chance of working on the first try.
Python test
```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.kissapi.ai/v1",
)

resp = client.chat.completions.create(
    model="claude-sonnet-4-6",
    messages=[
        {"role": "user", "content": "Say ready in one word."}
    ],
    temperature=0,
)
print(resp.choices[0].message.content)
```
Node.js test
```javascript
// Top-level await requires an ES module (e.g. save as test.mjs).
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://api.kissapi.ai/v1",
});

const resp = await client.chat.completions.create({
  model: "claude-sonnet-4-6",
  messages: [
    { role: "user", content: "Say ready in one word." }
  ],
  temperature: 0,
});
console.log(resp.choices[0].message.content);
```
If curl works but Aider doesn't, the problem is usually model naming or missing environment variables in the shell session where Aider was launched.
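A quick way to rule out the missing-variable case: run this in the same shell you're about to launch Aider from.

```shell
# Both lines must print real values. "<unset>" or a blank "key set:" means
# Aider will start without credentials in this shell session.
echo "base: ${OPENAI_API_BASE:-<unset>}"
echo "key set: ${OPENAI_API_KEY:+yes}"
```

If either prints empty, re-export the variables in this shell (or put them in your shell profile) before starting Aider.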
Step 3: Point Aider at Claude
Once the endpoint is proven, the actual Aider setup is short:
```shell
# Mac/Linux
export OPENAI_API_BASE=https://api.kissapi.ai/v1
export OPENAI_API_KEY=your_api_key_here

# open your repo
cd /path/to/your/project

# start aider with Claude
aider --model openai/claude-sonnet-4-6
```
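If re-exporting variables in every new shell gets old, Aider can also load a `.env` file from the working directory or git root. A sketch, worth confirming against your Aider version's docs:

```shell
# Persist the settings per-repo so every new shell picks them up.
cat > .env <<'EOF'
OPENAI_API_BASE=https://api.kissapi.ai/v1
OPENAI_API_KEY=your_api_key_here
EOF

# Keep the key out of version control.
echo ".env" >> .gitignore
```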
Want a stronger model for harder tasks? Swap in:
```shell
aider --model openai/claude-opus-4-6
```
My advice: don't use Opus as your all-day default unless your repo is truly ugly or the cost barely matters. Sonnet is the better trade for most edit loops.
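If you do escalate, you don't have to restart. A sketch assuming a recent Aider version (the `--weak-model` flag and the `/model` chat command exist in current Aider releases, but check `aider --help` on your install):

```shell
# Start on Sonnet for the edit loop; route cheap side tasks like
# commit messages to Haiku via --weak-model.
aider --model openai/claude-sonnet-4-6 \
      --weak-model openai/claude-haiku-4-5

# Inside the session, escalate one hard task without restarting
# by typing this at the aider prompt:
#   /model openai/claude-opus-4-6
```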
Which Claude model should you pick?
| Model | Best for | My take |
|---|---|---|
| Claude Sonnet 4.6 | daily edits, bug fixes, test updates, refactors | Best default in Aider |
| Claude Opus 4.6 | hard architecture work, larger changes, tricky debugging | Use when quality matters more than speed |
| Claude Haiku 4.5 | small automation, cheap helper tasks | Usually too weak for serious Aider sessions |
The mistakes that waste the most time
1) Forgetting the openai/ prefix
This is the big one. With OpenAI-compatible endpoints, Aider expects model names like openai/claude-sonnet-4-6, not just claude-sonnet-4-6.
2) Using the wrong base URL
Many providers want /v1 at the end of the base URL. If you get a 404 or a weird path error, check that first.
3) Assuming every provider exposes the same model IDs
Model names are not universal. One provider might offer claude-sonnet-4-6; another might expose a versioned alias. Check /v1/models if you're unsure.
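Checking is one request. A sketch assuming the gateway implements the standard OpenAI `/v1/models` listing (most compatible providers do):

```shell
# List the model IDs the gateway actually serves and filter for Claude.
curl -s https://api.kissapi.ai/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  | python3 -c 'import json, sys; [print(m["id"]) for m in json.load(sys.stdin)["data"] if "claude" in m["id"]]'
```

Whatever IDs come back are what you pass to Aider, with the openai/ prefix added.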
4) Debugging inside Aider first
Don't. Use curl first. It cuts the problem in half immediately.
5) Overspending on the wrong prompts
Aider gets expensive when you feed it huge repos, giant pasted logs, and flagship models for every tiny edit. Keep context tight. Use Sonnet as the default. Escalate only when the task actually deserves it.
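A hypothetical session start that follows that advice (the file names are illustrative, not from a real repo):

```shell
# Scope the session to just the files the task touches; a smaller file
# set means smaller prompts and a smaller bill per edit.
aider --model openai/claude-sonnet-4-6 src/auth.py tests/test_auth.py

# Useful in-session commands for keeping context lean
# (typed at the aider prompt):
#   /tokens   show what the current context costs
#   /drop     remove files you no longer need
#   /clear    reset chat history between unrelated tasks
```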
Need a Claude endpoint that works with Aider?
Create a free account and get an OpenAI-compatible API key for Claude, GPT, Gemini, and more. One endpoint. No toolchain rewrite.
Final take
Aider + Claude is still one of the best terminal coding setups around. It's faster than bouncing through a browser chat, and it's a lot more honest than the “magic AI teammate” pitch most coding tools push now.
If you remember only three things, make them these: test the endpoint before you open Aider, include the openai/ prefix in the model name, and don't pay Opus prices for Sonnet jobs. Do that, and the setup takes five minutes instead of fifty.