Gemini CLI: Boost Productivity Without Sacrificing Privacy
Understand the Gemini CLI ecosystem and gain the essential knowledge you need to properly take advantage of Google's Gemini generative AI model right from your shell, without compromising your privacy.

Gemini CLI installation and authentication setup
The fastest way to get access to Gemini CLI, without having to install it or configure authentication yourself, is to use Cloud Shell.
If you want to learn how to install and authenticate with Gemini CLI, have a look at:
To uninstall Gemini CLI have a look at this.
Gemini CLI privacy policy
Gemini CLI's data handling policy is governed by:
- The general Google Privacy Policy
Also, depending on the type of authentication (Gemini Code Assist via Google Cloud, Google AI Studio API key, Vertex AI API key, etc.) and the account type (Individual, Standard, Paid, Unpaid, etc.) you use for Gemini CLI, different Terms of Service and Privacy Notices may apply.
Have a look at Gemini CLI TOS and Privacy per account type for details.
Here are some important points to highlight:
- Chat data you send to Gemini, along with other info like your location, IP address, and your Home or Work addresses, is collected and stored for up to 18 months to develop and improve Google technologies. This data collection is effective when Gemini App Activity is turned on in your Google account, and it is turned on by default if the age on your account is 18 or older.
- When Gemini App Activity is turned on, the collected data can also be reviewed and processed by human reviewers to improve Google products. Before human review, personal data that could be used to identify you is removed for privacy. However, sensitive info sent in conversations remains accessible, so be careful about what you send.
- You can turn off Gemini App Activity to stop future conversations from being reviewed or used to improve Google's machine-learning technologies. When you do so, your conversations will still be saved in your account for 72 hours to provide the service and process any feedback.
- You can also review and delete past conversations in Gemini App Activity.
Gemini CLI modes and commands
There are two ways of using Gemini CLI: interactive and non-interactive modes.
Non-interactive mode
In non-interactive mode, you can directly specify your prompt when invoking gemini to get an answer straight into your shell:
$ gemini -p "my prompt" [--model <model>]
Use 'gemini -h' for a list of all the available options.
For the '-m' flag (short for '--model'), you can use for instance:
- gemini-2.5-pro - for enhanced thinking and reasoning, multimodal understanding, advanced coding, and more
- gemini-2.5-flash - for adaptive thinking and cost efficiency
Have a look at Gemini Model Variations for more.
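For example, a one-shot invocation using the Flash model could look like this (the prompt text is just an illustration):
$ gemini -p "Explain what the command 'tar -xzf archive.tar.gz' does" -m gemini-2.5-flash
The answer is printed to standard output, so you can redirect or pipe it like any other shell command.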
Interactive mode
For interactive mode, simply run 'gemini'. The console app will be launched and you will get something like this:
███ █████████ ██████████ ██████ ██████ █████ ██████ █████ █████
░░░███ ███░░░░░███░░███░░░░░█░░██████ ██████ ░░███ ░░██████ ░░███ ░░███
░░░███ ███ ░░░ ░███ █ ░ ░███░█████░███ ░███ ░███░███ ░███ ░███
░░░███ ░███ ░██████ ░███░░███ ░███ ░███ ░███░░███░███ ░███
███░ ░███ █████ ░███░░█ ░███ ░░░ ░███ ░███ ░███ ░░██████ ░███
███░ ░░███ ░░███ ░███ ░ █ ░███ ░███ ░███ ░███ ░░█████ ░███
███░ ░░█████████ ██████████ █████ █████ █████ █████ ░░█████ █████
░░░ ░░░░░░░░░ ░░░░░░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░
Tips for getting started:
1. Ask questions, edit files, or run commands.
2. Be specific for the best results.
3. Create GEMINI.md files to customize your interactions with Gemini.
4. /help for more information.
╭────────────────────────────────────────╮
│ > Type your message or @path/to/file   │
╰────────────────────────────────────────╯
~ no sandbox (see /docs) gemini-2.5-pro (100% context left)
Now you can interact with Gemini by writing your prompts.
Interactive mode commands
In interactive mode, you can use the slash (/), exclamation mark (!) and at sign (@) prefixes for the following (see the examples after this list):
- / - to run Gemini CLI built-in commands. Use '/help' for the list of all the available built-in commands and their descriptions. Have a look at Gemini CLI Slash Commands for more.
- ! - to run local shell commands. Have a look at Shell Commands From Gemini CLI for detailed descriptions and examples. Gemini CLI shell command history is stored in project-specific directories. Have a look at Gemini CLI Shell History for more.
- @ - to specify files or folders for Gemini CLI context (@src/app.py for instance). Have a look at Gemini CLI At Commands for detailed description and examples.
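For example, here is what those prefixes could look like inside the interactive prompt (the shell command and file path are just illustrations):
> /help
> !git status
> @src/app.py Explain what this file does and suggest improvements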
In addition to the '/help' interactive mode command, here is a shortened list of other available commands (a short usage example follows the list):
- /about - info about Gemini CLI version
- /docs - open Gemini CLI full documentation
- /chat - manage conversations history
- /chat list
- /chat save mychat
- /chat resume mychat
- /chat delete mychat
- /stats - session stats and performance
- /stats model - stats about model API calls
- /stats tools - stats about tool calls
- /settings - view and edit settings
- /privacy - display privacy notice
- /tools - list Gemini CLI tools
- /extensions - list Gemini CLI extensions
- /mcp - manage MCP servers and tools
- /quit - exit Gemini CLI interactive mode
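As an illustration, you could save the current conversation, quit, and resume it in a later session (the 'mychat' tag comes from the list above):
> /chat save mychat
> /quit
$ gemini
> /chat resume mychat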
Gemini CLI configuration
Settings file
In addition to Gemini CLI command line flags, you can customize Gemini behavior by using configuration files and environment variables.
Depending on the scope you want for the configuration file, you should create a settings.json file at the following locations:
- Project scoped settings - create the .gemini/settings.json file inside the project directory for which you want the settings to apply.
- User scoped settings - create the ~/.gemini/settings.json file inside the home directory of the user for which you want the settings to apply.
- System scoped settings - create the /etc/gemini/settings.json file containing settings you want to apply to the whole system (regardless of the system user or the project directory in which Gemini is used).
Project scoped settings override User scoped settings that in turn override System scoped settings.
Here is a sample configuration for settings.json. Also, here are all the available settings you can put inside the settings.json file.
The settings.json file also supports environment variable expansion. You can use it to avoid putting sensitive info directly inside that file.
Here is a sample settings.json file for configuring an MCP server through Docker, containing an API key variable:
{
  "mcpServers": {
    "myDockerServer": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "-e", "API_KEY", "ghcr.io/foo/bar"],
      "env": {
        "API_KEY": "$MY_API_KEY"
      }
    }
  }
}
The variable $MY_API_KEY will be automatically resolved when the settings are loaded.
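In practice, you would export the variable before launching Gemini CLI, for example (the key value is obviously a placeholder):
$ export MY_API_KEY="your-secret-api-key"
$ gemini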
Environment variables
Some of the Gemini configuration settings can also be set through environment variables directly.
Those environment variables can also be put inside a .env file, placed at one of the following locations depending on the scope you want them to be loaded for:
- Project scoped .env file - create the .env file inside the project directory for which you want the environment variables to be loaded.
- User scoped .env file - create the .env file inside the home directory of the user for which you want the environment variables to be loaded (regardless of the project directory from which Gemini is called).
For a list of environment variable settings and additional info about how Gemini searches for and loads the .env file, have a look at Gemini Environment Variables Settings.
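As an illustration, a project scoped .env file could look like this, assuming you authenticate with a Google AI Studio API key and want to pin a default model (both values are placeholders):
$ cat .env
GEMINI_API_KEY=your-google-ai-studio-api-key
GEMINI_MODEL=gemini-2.5-flash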
GEMINI.md
A 'GEMINI.md' file can be placed inside your project folder to give specific instructions to Gemini when it answers your questions or performs specific tasks on your project files. Example:
(my-python-scripts)$ ls
GEMINI.md debug.py cleanup.py
(my-python-scripts)$ cat GEMINI.md
During Python code generation or refactoring, follow the rules stated at this page:
* https://google.github.io/styleguide/pyguide.html
The sample 'GEMINI.md' file shown above makes Gemini follow the Google Python Style Guide rules listed at the provided link during code generation or refactoring inside the my-python-scripts project folder.
For detailed explanations and examples about Gemini instructional context (or memory) configuration, have a look at Configuring Instruction Context For Gemini.
Gemini CLI custom commands
For the full documentation about Gemini CLI custom commands, have a look at Gemini CLI Custom Commands Full Doc.
Creating custom commands
You can also create custom commands to accomplish specific tasks through prompt templates containing variables for command arguments. Here is an example showing how to create and call a custom command: Example Refactoring Custom Command.
Retrieving command arguments with args
Inside your custom command template, you can use {{args}} in the main body of your prompt (the prompt = "..." section) to inject the inputs typed after the custom command name. Have a look at Custom Command Argument Handling With Args for examples.
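As a minimal sketch, assuming the TOML format used by Gemini CLI custom commands, a user scoped command could be defined like this (the command name, description and prompt are illustrative):
$ cat ~/.gemini/commands/explain.toml
description = "Explain a piece of code or text"
prompt = "Explain the following in simple terms and point out potential issues: {{args}}"
Everything you type after '/explain' is then injected where {{args}} appears in the prompt.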
Parsing command arguments with instructions
You can also inject custom command arguments (e.g., arg1 and arg2 from the command /mycommand arg1 arg2 ...) into your custom command template by instructing Gemini CLI how to parse the args inside the template's prompt section. Have a look at Custom Command Default Argument Handling for examples.
Gemini CLI tools
To serve your requests, Gemini CLI relies on its built-in tools. Tools help Gemini CLI accomplish specific tasks like reading or writing files on the system, running shell commands, searching the web, fetching specific web contents, etc.
Here is a list of the tools that are natively built into Gemini CLI and their descriptions. By clicking on a tool link, you will get a detailed description and usage examples: Gemini CLI Tools.
You can also make additional tools available to Gemini CLI through MCP Servers. This makes the Gemini model automatically use those tools when appropriate to answer your questions or accomplish specific tasks.
Expanding Gemini CLI capabilities with MCP servers
What is an MCP (Model Context Protocol) server?
An MCP server is an application that exposes tools and resources to the Gemini CLI through the Model Context Protocol, allowing it to interact with external systems and data sources. MCP servers act as a bridge between the Gemini model and your local environment or other services like APIs.
With an MCP server, you can extend the Gemini CLI's capabilities to perform actions beyond its built-in features, such as interacting with databases, APIs, custom scripts, or specialized workflows.
Source: What is an MCP server?
Setting up an MCP server for Gemini CLI
Here is an example tutorial that demonstrates how to set up a GitHub MCP server, which makes Gemini CLI capable of interacting with GitHub repositories. Once the setup is complete, Gemini CLI will be able to accomplish these tasks on the configured repositories.
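As a hedged sketch, the corresponding settings.json entry could follow the same Docker pattern shown earlier; the image name and token variable are based on the official GitHub MCP server, the $GITHUB_PAT variable name is arbitrary, and both may differ from the tutorial you follow:
{
  "mcpServers": {
    "github": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "-e", "GITHUB_PERSONAL_ACCESS_TOKEN", "ghcr.io/github/github-mcp-server"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "$GITHUB_PAT"
      }
    }
  }
}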
That's all. I hope you now have a better understanding of the possibilities offered by Gemini CLI.
As an IT pro, understanding the Gemini CLI ecosystem and using it properly could be a game changer. Now the ball is in your court. Put your imagination to work and leverage this powerful tool to boost your productivity without sacrificing your privacy.
Want to report a mistake or ask questions? Feel free to email me at gmkziz@hackerstack.org. I will be glad to answer.
If you like my articles, consider subscribing to my newsletter to receive the latest posts as soon as they are available.
Take care, keep learning and see you in the next post 🚀