Gemini CLI Setup Guide (2026): Install, Auth & Headless Workflows
Gemini CLI got attention fast because it does something developers actually want: it puts a serious model in the terminal without making you live inside a web app. Google open-sourced it, gave it built-in tools, and made the first-run path pretty light. That's the good news.
The annoying part is setup. The docs are spread across install pages, auth pages, and GitHub notes. If you're new to it, it's easy to wonder which login path to use, whether you need a Google Cloud project, and why headless mode works on one machine but not another.
This guide cuts through that. You'll install Gemini CLI, pick the right authentication method, test your setup, and get a few workflows worth using on day one.
What You Need Before You Start
According to Google's current docs, Gemini CLI needs Node.js 20+, an internet connection, and a supported OS like macOS 15+, Ubuntu 20.04+, or Windows 11 24H2+. You don't need a monster machine for casual use, but larger repos and long sessions feel better with more memory.
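The Node 20 floor is the requirement that bites most often. Here's a quick sketch for checking the major version before you install, parsing the `node --version` string by hand (the hard-coded example stands in for the real command output):

```shell
# Parse a Node version string like "v18.19.0" and compare the major part
# against Gemini CLI's documented minimum (20).
version="v18.19.0"              # in practice: version="$(node --version)"
major="${version#v}"            # strip the leading "v"
major="${major%%.*}"            # keep only the major component
if [ "$major" -ge 20 ]; then
  echo "Node.js is new enough"
else
  echo "upgrade Node.js to 20+ before installing"
fi
```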
| Choice | Best for | What you need |
|---|---|---|
| Google sign-in | Most individual developers | Browser login |
| GEMINI_API_KEY | Headless mode, scripts, quick API-key setup | Google AI Studio key |
| Vertex AI | Teams, enterprise, stricter cloud controls | Project, location, and Google Cloud auth |
My take: start with Google sign-in if you're just testing the CLI. Use GEMINI_API_KEY if you want predictable headless automation. Jump to Vertex AI only if you already live in Google Cloud or your org needs that control plane.
Install Gemini CLI
The two sane install options are npm and Homebrew. Use npx if you just want a quick trial without committing anything to your machine.
# npm
npm install -g @google/gemini-cli
# Homebrew
brew install gemini-cli
# No install
npx @google/gemini-cli
Google also ships preview and nightly channels. That's useful if you like living near the edge, but I'd stick to stable unless you need a specific fix. Terminal agents change fast enough already.
Pick the Right Auth Method
1. Sign in with Google
This is the default path and still the easiest. Launch gemini, pick Sign in with Google, finish the browser flow, and the CLI caches credentials locally.
Google's current public notes say personal accounts get a free tier with up to 60 requests per minute and 1,000 requests per day. That's more than enough to see whether you like the tool.
gemini
One catch: company, school, and some Google Workspace setups may still require a Google Cloud project. If your login succeeds but the CLI complains later, that's usually why.
2. Use a Gemini API key
If you want headless mode, CI, or cleaner scripting, use a key from Google AI Studio and export GEMINI_API_KEY. This is the path I prefer for automation because it's explicit and easy to debug.
export GEMINI_API_KEY="YOUR_GEMINI_API_KEY"
gemini
Once the env var is set, Gemini CLI can use it directly. This also makes non-interactive runs much less annoying.
3. Use Vertex AI
Vertex AI is the heavier setup, but it makes sense for enterprise teams. You usually need a project ID and location, then either ADC via gcloud auth application-default login, a service account JSON file, or a Google Cloud API key.
export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"
export GOOGLE_CLOUD_LOCATION="us-central1"
gcloud auth application-default login
gemini
If you previously set GEMINI_API_KEY or GOOGLE_API_KEY, unset them before testing ADC. Mixed auth variables create messy failures.
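One way to make that cleanup explicit, assuming the only leftovers are the two key variables named above:

```shell
# Clear API-key variables so ADC is the only credential source,
# then confirm nothing lingers in the environment.
unset GEMINI_API_KEY GOOGLE_API_KEY
if env | grep -qE '^(GEMINI_API_KEY|GOOGLE_API_KEY)='; then
  echo "conflicting auth vars still set" >&2
else
  echo "environment clean; safe to test ADC"
fi
```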
First Commands Worth Learning
Once the CLI opens, don't overcomplicate the first session. Start with a repo you know and use plain prompts.
# Interactive mode
gemini
# One-shot prompt
gemini -p "summarize this codebase"
# Pick a model
gemini -m gemini-2.5-flash
# Safe execution inside the sandbox
gemini --sandbox -p "review package.json for bad dependencies"
# Structured output for scripts
gemini -p "list TODOs from this repo" --output-format json
Headless mode is underrated. If you already have shell scripts for linting, release notes, or PR triage, Gemini CLI can slot into those without much ceremony.
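If you script against --output-format json, parse the output rather than scraping text. The exact schema is an assumption here — the sketch below assumes a top-level "response" field, so check what your CLI version actually emits:

```shell
# Assumed shape for illustration: {"response": "...", "stats": {...}}.
# In a real script the sample would come from:
#   out="$(gemini -p 'list TODOs from this repo' --output-format json)"
out='{"response":"Found 3 TODOs","stats":{"turns":1}}'
printf '%s' "$out" | python3 -c 'import json,sys; print(json.load(sys.stdin)["response"])'
```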
Test Your API Key Before You Blame the CLI
If you're using GEMINI_API_KEY, test the key outside the CLI first. This saves time. A bad key, expired shell session, or wrong environment file looks like a Gemini CLI issue when it usually isn't.
curl
curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent?key=$GEMINI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"contents": [
{
"parts": [
{"text": "Say hello from the Gemini API."}
]
}
]
}'
Python
import os
from google import genai
client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
response = client.models.generate_content(
model="gemini-2.5-flash",
contents="Summarize why terminal agents are useful for developers."
)
print(response.text)
Node.js
import { GoogleGenAI } from "@google/genai";
const ai = new GoogleGenAI({
apiKey: process.env.GEMINI_API_KEY
});
const response = await ai.models.generateContent({
model: "gemini-2.5-flash",
contents: "Explain one good use case for Gemini CLI in CI."
});
console.log(response.text);
If these work but the CLI still fails, the problem is usually local auth state, conflicting env vars, or a workspace-specific issue. Not the API itself.
Headless Workflows That Are Actually Useful
The flashy demos are fine, but the practical stuff matters more. Gemini CLI is good at three things in headless mode: summarizing diffs, turning logs into short diagnoses, and generating first-pass docs from code.
# Summarize a diff
git diff --staged | gemini -p "summarize these changes for a PR description"
# Analyze logs
cat app.log | gemini -p "find the top 3 likely causes of failure"
# Stream events for longer jobs
gemini -p "run a repo audit" --output-format stream-json
If your stack already mixes several coding tools, keep one thing in mind: Gemini CLI itself authenticates against Google's systems, but the rest of your tooling probably doesn't. That's where a unified endpoint like KissAPI helps. You can keep Gemini CLI for terminal workflows while routing Cursor, Codex, Claude Code, or app traffic through one billing layer instead of four separate dashboards.
The Setup Mistakes That Waste the Most Time
- Old Node.js. If you're below Node 20, fix that first. Don't debug around it.
- Conflicting auth variables. API key env vars left over from earlier tests can break Vertex AI or Google sign-in flows.
- Workspace account confusion. Personal Gmail is simple. Company and school accounts often need a Google Cloud project configured.
- Headless mode without env-based auth. If nothing is cached and no key is set, headless runs won't magically open a browser for you.
- Testing only interactive mode. A setup that works in the TUI can still fail in scripts if your shell profile isn't loading the same env vars.
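That last point is easy to verify directly: a variable that is set but not exported never reaches child processes, which is exactly what happens when a script runs without your interactive shell profile loaded:

```shell
unset GEMINI_API_KEY                 # start clean for the demo
GEMINI_API_KEY="abc123"              # set, but NOT exported
bash -c 'echo "child sees: ${GEMINI_API_KEY:-nothing}"'   # prints "nothing"
export GEMINI_API_KEY                # now part of the environment
bash -c 'echo "child sees: ${GEMINI_API_KEY:-nothing}"'   # prints "abc123"
```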
Need One API Layer for the Rest of Your AI Stack?
Use Gemini CLI for terminal work, and run your app traffic or other coding tools through KissAPI with one OpenAI-compatible endpoint.
Final Verdict
Gemini CLI is worth trying. It's fast, open source, and the headless mode is more useful than most launch-day terminal tools. But the setup goes smoothly only if you make one clean decision up front: browser sign-in, API key, or Vertex AI. Pick one, test it properly, and don't mix auth paths unless you mean to.
Do that, and you'll be productive in minutes instead of burning half an afternoon on env var nonsense.