Gemini CLI
What Is Gemini CLI?
Gemini CLI is a command-line tool that allows developers to interact with Google's Gemini language models directly from the terminal. It is part of the Google AI Studio and Gemini API ecosystem and supports rapid prototyping, scripting, and automation using generative AI.
Gemini CLI empowers developers with instant access to expert-level help through their terminal. It reduces context switching, accelerates learning, and improves productivity with minimal setup.
Google-Gemini GitHub Repository: github.com/google-gemini/gemini-cli
See also: ai.google.dev/gemini-api/docs
Strategic Advantages
- Streamlined Prototyping: Quickly test prompts and code without writing boilerplate.
- Terminal-First Workflow: Fully integrates with terminal workflows for speed and convenience.
- Script Integration: Input/output can be piped into scripts for automation or data processing.
- Lightweight & Open Source: Minimal footprint and open source transparency.
- Access to Latest Gemini Models: Works with current Gemini models (e.g., the 1.5 and 2.5 series), including multimodal support.
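The script-integration point above is just ordinary shell plumbing. The sketch below uses a stand-in `gemini` shell function (an assumption made here so the pipeline can be dry-run without the CLI installed); remove the function to call the real binary instead:

```shell
#!/bin/sh
# Stand-in for the real CLI so this pipeline can be dry-run offline.
# Remove this function to invoke the installed gemini binary instead.
gemini() { printf 'response to: %s\n' "$1"; }

# Pipe the model's output into further shell processing (logging with tee,
# then uppercasing with tr), exactly as with any other command-line tool.
gemini "Write a haiku about UNIX pipes" | tee session.log | tr '[:lower:]' '[:upper:]'
```

The same pattern works for any downstream tool: `jq` for JSON output, `grep` for filtering, or redirection into a file.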
Usage Examples
Basic Prompt
gemini "Write a Python script that converts Celsius to Fahrenheit."
Interactive Mode
gemini
Using Files as Input
gemini -f input.txt
Multimodal Input (if supported)
gemini -f description.txt -i image.png
Use Cases
- Code Snippet Generation: Generate boilerplate or common logic, e.g., gemini "Generate a Flask API with a /predict endpoint"
- Debugging Helper: Paste stack traces and get fixes, e.g., gemini "Here's my Python traceback, help me fix it: ..."
- Learning Concepts: Ask questions about programming topics, e.g., gemini "Explain the difference between threading and multiprocessing in Python"
- Writing Tests: Generate unit tests for your code, e.g., gemini "Write unit tests for this function: def add(a, b): return a + b"
- Shell Scripting Assistant: Automate terminal tasks, e.g., gemini "Write a bash script to back up a folder and compress it"
- API Docs Companion: Explain endpoints or summarize API behavior.
- DevOps & Infra Support: Generate Dockerfiles, systemd units, or Terraform configs.
Tips
- Use --model to specify the model, e.g., gemini --model gemini-pro
- Combine with | tee to log outputs: gemini "Document this Go function" | tee output.md
Gemini CLI Power-User Tips
Loading Code into Context
- Use @file-or-dir syntax: @src/main.py Refactor this module for better readability. Supports loading entire directories and respects .gitignore.
- Pipe content from other tools: cat config.yaml | gemini "Summarize this config file." Also works with output from ls, grep, etc.
Command-Line Switches & Flags
- Model and output tuning: --model gemini-2.5-pro (or -m) for model selection; --temperature 0.2 for deterministic output, 0.9 for creative results; --format [markdown|text|json|html|code] for output formatting.
- Other useful flags: --stream to receive streaming responses; --verbose or --quiet to control verbosity; --output path/to/file to redirect output to a file.
CLI Meta and Session Commands
| Command | Description |
| --- | --- |
| /chat save <tag> | Save the current session state |
| /chat resume <tag> | Resume a saved session |
| /chat list | List saved sessions |
| /compress | Shrink the session context |
| /stats | Show token usage, rate info, and latency |
| /mcp | Manage Model Context Protocol settings |
| @ | Inject a file or directory |
| !cmd | Run a shell command |
| /theme, /auth, /clear, /quit | Session and UI controls |
Advanced Features & Workflows
- GEMINI.md support: Define config and conventions for project context awareness.
- Checkpointing with /restore: Revert changes made by tools.
- Embedding and vector DB support: Use embed content or embed db to index files semantically.
- Token budgeting: Use counttok or /stats to manage and estimate token usage.
Pro Tips
- Shell alias: alias gpr='gemini -m gemini-2.5-pro --stream'
- Combined flags:
gemini --stream --verbose --file main.py "optimize this function"
- Raw code output:
gemini -f script.js "refactor this" --format code > refactored.js
- Auto-completion: Check gemini completion to enable CLI autocomplete.
Quick Starter Commands
gemini chat
gemini -f src/app.js "Explain this code"
gemini --model gemini-1.5-flash -f *.py "Check for unused imports"
Best Practices for Using GEMINI.md with Gemini CLI
Purpose of GEMINI.md
- Acts as the long-term memory or workspace definition for the Gemini CLI.
- Provides persistent system instructions and contextual grounding.
- Enables multi-turn reasoning across sessions in the same directory.
Best Practices
1. Define the Project or Task Scope Clearly
# GEMINI.md
This is a TypeScript CLI tool for analyzing CSV data and generating reports. It will be used by non-technical users and should be robust against malformed inputs.
2. Include Key Design Decisions
Use Deno instead of Node.js. Prefer functional programming style. Minimize third-party dependencies.
3. Specify Coding Conventions or Style
Follow Google TypeScript Style Guide. Unit tests must be written using Deno.test. Avoid using `any` type.
4. State Goals and Subgoals
## Current Goals
- Add support for multi-file CSV merges.
- Improve error handling for missing headers.
- Write unit tests for `analyze.ts`.
5. Track AI Interaction Expectations
Avoid rewriting files unless explicitly asked. Provide minimal diffs and explain the rationale.
6. Use Structured Markdown
Use sections with headings such as ## Goals, ## Constraints, and ## Known Bugs for better parsing.
7. Avoid Overloading with Logs or Irrelevant Content
Keep GEMINI.md focused and relevant; don’t use it as a README or error log.
8. Update Frequently
Maintain accuracy as the project evolves. Capture behavioral notes about Gemini itself.
9. Keep Instructions Stable Between Sessions
Archive outdated context instead of deleting it outright.
10. Keep It Under ~1,500 Tokens
Excessively long files can cause truncation of important sections in the context.
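If the CLI's own counter (counttok or /stats, per the notes above) isn't at hand while editing, a rough budget check can be scripted with the common heuristic of about 4 characters per token. This is an approximation, not the tokenizer's exact count:

```shell
#!/bin/sh
# Rough token estimate: ~4 characters per token on average. This is a
# heuristic only; use the CLI's own token counter for exact numbers.
estimate_tokens() { echo "$(( $(wc -c < "$1") / 4 ))"; }

# Demo on a small sample file standing in for a real GEMINI.md.
printf '%s\n' '# GEMINI.md' 'This is a sample project memory file.' > sample_GEMINI.md
echo "approx. $(estimate_tokens sample_GEMINI.md) tokens"
```

Run it against your real GEMINI.md and trim or archive sections once the estimate approaches the ~1,500-token budget.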
Example Template
# GEMINI.md
This is a command-line Python tool that analyzes server logs and produces error summaries in Markdown.
## Project Goals
- Parse `.log` files and extract errors, warnings.
- Output summary report with counts and examples.
- Add timestamp filtering option (`--since`, `--until`).
## Constraints
- Must run with Python 3.10+
- Avoid using pandas or other heavy libraries
- Must run on Ubuntu with no special dependencies
## Style Guide
- Use snake_case for functions
- Docstrings in Google style
- Write unittests using pytest
## Current Work
- Refactor `parser.py`
- Add test for `summarize()` function
## Gemini Instructions
- Provide Python code suggestions with inline comments
- Only write one file at a time unless told otherwise
Using Context Engineering Documents with Gemini CLI
The Gemini CLI allows developers to interact with Google's Gemini models. To apply context engineering principles effectively, it's important to organize and supply structured context to the model.
Recommended Directory Structure
project-root/
│
├── context/
│ ├── background.md
│ ├── glossary.md
│ ├── architecture.md
│ └── tone_guidelines.md
│
├── prompts/
│ ├── initial_prompt.txt
│ └── refine_prompt.txt
│
├── src/
│ └── ...
- context/: Holds reusable context engineering documents.
- prompts/: Contains initial and refining prompt templates.
- src/: Source code or other content relevant to the task.
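The layout above can be scaffolded in one step. Directory and file names follow the tree shown; adjust them to your project:

```shell
#!/bin/sh
# Create the recommended directory layout with empty placeholder files.
mkdir -p project-root/context project-root/prompts project-root/src
touch project-root/context/background.md \
      project-root/context/glossary.md \
      project-root/context/architecture.md \
      project-root/context/tone_guidelines.md
touch project-root/prompts/initial_prompt.txt \
      project-root/prompts/refine_prompt.txt
```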
Supplying Context to Gemini CLI
You can include context manually or via scripting using command-line options:
gemini chat \
--prompt "$(cat prompts/initial_prompt.txt)" \
--context "$(cat context/architecture.md context/tone_guidelines.md)"
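A more defensive variant of the invocation above first warns about any context file that is not present yet, then writes the assembled call into a small runner script you can inspect before executing. The --prompt and --context flags are taken from the example above; verify them against your installed CLI version:

```shell
#!/bin/sh
set -u
# Warn about context files that don't exist yet (the cat calls below
# would otherwise fail at run time).
for f in context/architecture.md context/tone_guidelines.md prompts/initial_prompt.txt; do
  [ -f "$f" ] || echo "warning: missing $f" >&2
done

# Write the assembled invocation into a runner script for inspection.
# The quoted 'EOF' keeps the $(cat ...) substitutions literal until run time.
cat > run_gemini.sh <<'EOF'
#!/bin/sh
gemini chat \
  --prompt "$(cat prompts/initial_prompt.txt)" \
  --context "$(cat context/architecture.md context/tone_guidelines.md)"
EOF
chmod +x run_gemini.sh
echo "wrote run_gemini.sh"
```

Review run_gemini.sh, then execute it once the context files are in place.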
Or, with a merged input file:
cat context/*.md prompts/initial_prompt.txt > final_prompt.txt
gemini chat --input_file final_prompt.txt
Standard Workflow Summary
- Organize context documents in a
context/
folder. - Compose final prompt by concatenating selected context files with a base prompt.
- Pass the composed prompt to Gemini CLI using
--prompt
or--input_file
. - Automate with a shell script, Makefile, or Python runner for repeatability.
Optional: Prompt Composition Script
Example shell script:
#!/bin/bash
set -euo pipefail
# Concatenate all context documents and the base prompt into one file.
cat context/*.md prompts/initial_prompt.txt > final_prompt.txt
# Send the composed prompt to Gemini CLI.
gemini chat --input_file final_prompt.txt