OpenCode Custom API Key Setup Guide (2026): Use Any OpenAI-Compatible Endpoint

OpenCode is one of the better terminal coding agents to show up this year. It feels fast, it handles tools well, and unlike some competitors, it doesn’t force you into one billing stack. That part is great. The annoying part is setup.

Most tutorials stop at /connect. That’s only half the job. In OpenCode, credentials and provider definitions are two different things. /connect stores the API key. opencode.json tells OpenCode which endpoint exists, which SDK package to use, and which models should appear in /models. Miss one piece and you get the classic mess: the key saves fine, but no models show up, or streaming breaks, or tool calls act weird.

This guide is the version I wish existed earlier. We’ll test the endpoint with curl, wire it into OpenCode, and then verify the same endpoint from Python and Node.js. If you want a drop-in endpoint, KissAPI is one practical option because it exposes multiple coding models behind one OpenAI-compatible base URL.

Short version: /connect stores credentials. opencode.json defines the provider. If your endpoint speaks /v1/chat/completions, use @ai-sdk/openai-compatible. If it speaks /v1/responses, use @ai-sdk/openai. Most setup bugs are boring string mismatches, not deep magic.

What OpenCode actually needs from a custom provider

Before touching config, get the moving parts straight:

- /connect stores a credential under a provider ID. It does not define an endpoint.
- opencode.json defines the provider: the base URL, the SDK package, and the models that should appear in /models.
- The npm package must match the endpoint type: @ai-sdk/openai-compatible for /v1/chat/completions, @ai-sdk/openai for /v1/responses.

This last point matters more than people think. A lot of “OpenCode is broken” complaints are really “I pointed a responses endpoint at the chat completions adapter.”
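If you want that adapter rule as something checkable rather than a sentence, here it is as a throwaway Python helper. The endpoint paths are the standard OpenAI-style ones; the package names are the two AI SDK packages OpenCode accepts in the provider config:

```python
def sdk_package(endpoint_path: str) -> str:
    """Pick the OpenCode `npm` package for an OpenAI-style endpoint path."""
    if endpoint_path.endswith("/chat/completions"):
        return "@ai-sdk/openai-compatible"
    if endpoint_path.endswith("/responses"):
        return "@ai-sdk/openai"
    raise ValueError(f"unrecognized endpoint type: {endpoint_path}")

print(sdk_package("/v1/chat/completions"))  # @ai-sdk/openai-compatible
print(sdk_package("/v1/responses"))         # @ai-sdk/openai
```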

Step 1: Smoke-test the endpoint with curl first

Don’t start inside OpenCode. Start with the dumbest possible request. If the endpoint can’t answer a one-line curl call, OpenCode won’t fix it for you.

export KISSAPI_API_KEY="your-api-key"

curl https://api.kissapi.ai/v1/chat/completions \
  -H "Authorization: Bearer $KISSAPI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5-mini",
    "messages": [
      {"role": "user", "content": "Reply with OK"}
    ]
  }'

Replace the base URL and model name with whatever your provider exposes. This quick check tells you almost everything:

- A 401 means the key is wrong or not reaching the server.
- A 404 means the base URL or path is wrong.
- An unknown-model error means the model name doesn’t match what the provider actually serves.
- A clean JSON reply means the endpoint works, and anything left to debug is OpenCode wiring.

Do this first. It saves ten minutes every single time.
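If you'd rather script the smoke test than eyeball raw JSON, the response body follows the standard OpenAI chat-completions shape, so extracting the reply (or surfacing the error) takes a few lines. A sketch, assuming the standard schema:

```python
import json

def reply_text(raw_body: str) -> str:
    """Extract the assistant reply from a chat-completions response body."""
    body = json.loads(raw_body)
    if "error" in body:
        # OpenAI-style error envelope: {"error": {"message": ...}}
        raise RuntimeError(body["error"].get("message", "unknown error"))
    return body["choices"][0]["message"]["content"]

# Pipe your curl output into this instead of the hardcoded sample.
sample = '{"choices": [{"message": {"role": "assistant", "content": "OK"}}]}'
print(reply_text(sample))  # OK
```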

Step 2: Add the credential inside OpenCode

Now open OpenCode and run:

/connect

Scroll to Other, then enter a provider ID such as kissapi. Paste the API key when prompted. OpenCode stores that credential locally.

The gotcha is simple: the provider ID you type here must match the key in opencode.json exactly. If you save the credential under kissapi but define the provider as kiss-api, you’ve built yourself a bug for no reason.

If you prefer environment variables, that works too. OpenCode’s config supports values like {env:KISSAPI_API_KEY}. I still like using /connect for day-to-day use because it keeps project config cleaner.
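For the curious, the `{env:...}` placeholder is plain string substitution against your environment. Here's a toy resolver to show the shape of the behavior; it is not OpenCode's actual code, and how OpenCode handles an unset variable may differ (this version falls back to an empty string):

```python
import os
import re

def resolve_env_placeholders(value: str) -> str:
    """Replace {env:NAME} with the value of $NAME, or '' if unset."""
    return re.sub(
        r"\{env:([A-Za-z0-9_]+)\}",
        lambda m: os.environ.get(m.group(1), ""),
        value,
    )

os.environ["KISSAPI_API_KEY"] = "sk-demo"
print(resolve_env_placeholders("{env:KISSAPI_API_KEY}"))  # sk-demo
```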

Step 3: Create opencode.json

Here’s a working project-level example for a normal OpenAI-compatible endpoint that supports /v1/chat/completions:

{
  "$schema": "https://opencode.ai/config.json",
  "model": "kissapi/gpt-5-mini",
  "small_model": "kissapi/gpt-5-mini",
  "provider": {
    "kissapi": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "KissAPI",
      "options": {
        "baseURL": "https://api.kissapi.ai/v1",
        "apiKey": "{env:KISSAPI_API_KEY}"
      },
      "models": {
        "gpt-5-mini": {
          "name": "GPT-5 Mini",
          "limit": {
            "context": 200000,
            "output": 32768
          }
        },
        "claude-sonnet-4-5": {
          "name": "Claude Sonnet 4.5",
          "limit": {
            "context": 200000,
            "output": 32768
          }
        }
      }
    }
  }
}

If you already added the key through /connect, you can remove the apiKey line. The important pieces are the provider ID, the base URL, and the model map.
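Because the most common failure is a boring string mismatch, it's worth sanity-checking the file before blaming anything deeper. Here's a small validator for the fields above; it only checks internal consistency (the model/small_model references against the provider map), not whether the upstream actually serves those models:

```python
import json

def config_errors(cfg: dict) -> list:
    """Flag model references in opencode.json that don't match the provider map."""
    errors = []
    providers = cfg.get("provider", {})
    for field in ("model", "small_model"):
        ref = cfg.get(field)
        if not ref:
            continue
        provider_id, _, model_key = ref.partition("/")
        if provider_id not in providers:
            errors.append(f"{field}: no provider with ID {provider_id!r}")
        elif model_key not in providers[provider_id].get("models", {}):
            errors.append(f"{field}: {model_key!r} missing from {provider_id!r} models")
    return errors

# Inline sample; in practice: config_errors(json.load(open("opencode.json")))
cfg = {
    "model": "kissapi/gpt-5-mini",
    "provider": {"kissapi": {"models": {"gpt-5-mini": {}}}},
}
print(config_errors(cfg))  # []
```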

If your upstream only supports /v1/responses, change one line:

"npm": "@ai-sdk/openai"

That one line is easy to miss and causes a surprising amount of pain.

Step 4: Load the model inside OpenCode

Restart OpenCode or reload the project, then run:

/models

You should now see your custom provider and the models you defined. Pick one, then ask it to do something trivial: explain a function, rename a variable, or summarize the repo layout. Keep the first test boring. Fancy prompts are bad for debugging.

If the model does not appear, one of these is almost always true:

- The provider ID in opencode.json doesn’t match the ID you gave /connect.
- The config file isn’t where OpenCode is looking (project root vs global).
- OpenCode wasn’t restarted after the config change.
- The model key in the models map doesn’t match the name the upstream serves.

Python and Node.js quick checks

If OpenCode is being stubborn, test the same endpoint outside the agent. These tiny scripts tell you whether the problem is the provider or your OpenCode config.

Python

from openai import OpenAI

client = OpenAI(
    api_key="your-api-key",
    base_url="https://api.kissapi.ai/v1"
)

resp = client.chat.completions.create(
    model="gpt-5-mini",
    messages=[
        {"role": "user", "content": "Say hello in one sentence."}
    ]
)

print(resp.choices[0].message.content)

Node.js

import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.KISSAPI_API_KEY,
  baseURL: "https://api.kissapi.ai/v1"
});

const resp = await client.chat.completions.create({
  model: "gpt-5-mini",
  messages: [
    { role: "user", content: "Say hello in one sentence." }
  ]
});

console.log(resp.choices[0].message.content);

If both scripts work and OpenCode doesn’t, the problem is not your API key. It’s your OpenCode wiring.

Common OpenCode setup failures

| Problem | What it usually means | Fix |
| --- | --- | --- |
| 401 Unauthorized | Bad key, missing key, or wrong auth source | Re-test with curl and use one clear key path |
| Model missing from /models | Provider ID or model key mismatch | Match the /connect ID, config key, and upstream model name exactly |
| 404 on request | Wrong base URL | Point at the real API root, usually ending in /v1 |
| Streaming or tool calls fail oddly | Wrong SDK package for the endpoint type | Use @ai-sdk/openai-compatible for chat completions or @ai-sdk/openai for responses |
| Agent feels dumb with tools | The model is weak at tool use | Switch to a stronger coding model instead of blaming the config |
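On the “streaming fails oddly” row: it helps to know what a healthy stream looks like on the wire. OpenAI-style chat-completions endpoints stream server-sent events, one `data:` line per chunk, terminated by `data: [DONE]`. A minimal parser for that format (the sample chunks here are illustrative, not captured output):

```python
import json

def stream_text(sse_lines) -> str:
    """Reassemble assistant text from chat-completions SSE lines."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines between events
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        delta = json.loads(payload)["choices"][0].get("delta", {})
        parts.append(delta.get("content") or "")
    return "".join(parts)

sample = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
print(stream_text(sample))  # Hello
```

If your raw stream looks like this but OpenCode still chokes, that points back at the SDK package mismatch in the table above.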

One opinionated recommendation

Don’t over-engineer your first setup. Get one model working. Then add a second model. Then decide whether you want a cheaper small_model. People love building a perfect multi-model config before they’ve even confirmed one request can stream cleanly. That’s backwards.

Once the boring setup is done, OpenCode gets a lot more interesting. You can swap providers without changing your workflow, keep one terminal agent while testing multiple model families, and route high-cost work only when it’s actually needed. That flexibility is the whole point. KissAPI is useful here because you can point one OpenAI-compatible config at a mix of coding models instead of juggling separate provider setups on day one.

Need a drop-in endpoint for OpenCode?

Start with a single API key, one OpenAI-compatible base URL, and multiple coding models behind it. That’s the easiest way to get OpenCode running without burning an hour on provider sprawl.


Final takeaway

The hard part isn’t auth. It’s being honest about what endpoint you actually have. If it speaks chat completions, wire it as chat completions. If it speaks responses, wire it as responses. Match the provider ID exactly, use the real model names, and test with curl before you touch the TUI. Do that, and OpenCode setup becomes boring in the best possible way.