Google Gemini brings some unique tricks to the table — native search grounding and strong multimodal reasoning. In the shipped apps, Gemini is especially useful for chat and the default Vision path.

Get your API key

1. Sign into Google AI Studio

Head to Google AI Studio and sign in with your Google account.

2. Create an API key

Navigate to the API keys section and create a new key.

3. Add it to your env

Paste the key into your .env.local file:

GOOGLE_GENERATIVE_AI_API_KEY=your_google_api_key

Keep your API key safe and don't share it. You can regenerate it from Google AI Studio at any time.
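The AI SDK's Google provider reads this variable automatically, but a small guard makes a missing key fail loudly instead of surfacing as an opaque request error. A minimal sketch — the helper name is an assumption, not part of the codebase:

```typescript
// Hypothetical helper: fail fast if the Gemini key is missing.
// The variable name matches what the Google provider reads by default.
function getGoogleApiKey(): string {
  const key = process.env.GOOGLE_GENERATIVE_AI_API_KEY;
  if (!key) {
    throw new Error("Missing GOOGLE_GENERATIVE_AI_API_KEY - add it to .env.local");
  }
  return key;
}
```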

Available models

All models are defined in the unified model registry at lib/ai/models.ts.
| Model | ID | Features |
| --- | --- | --- |
| Gemini 3 Pro | `gemini-3-pro-preview` | Vision, Search Grounding |
| Gemini 3 Pro Image | `gemini-3-pro-image-preview` | Vision, Search Grounding |
| Gemini 3.1 Flash Lite | `gemini-3.1-flash-lite-preview` | Vision |
| Gemini 2.5 Flash | `gemini-2.5-flash` | Vision, Search Grounding, Thinking/Reasoning |
Search grounding is a standout Gemini feature — it lets the model pull in live web results when answering questions. Your users get up-to-date info without you needing to build a separate search integration.
Gemini 2.5 Flash supports thinking/reasoning mode, showing its chain of thought before the final answer. Combine that with search grounding and you’ve got a seriously capable model.
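The table above maps naturally onto a typed registry with a capability check. The sketch below mirrors the table's data; the entry shape and helper names are assumptions, not the actual definitions in lib/ai/models.ts:

```typescript
type GeminiFeature = "vision" | "search-grounding" | "thinking";

interface GeminiModelEntry {
  id: string;
  label: string;
  features: GeminiFeature[];
}

// Entries mirror the model table above; the structure itself is illustrative.
const geminiModels: GeminiModelEntry[] = [
  { id: "gemini-3-pro-preview", label: "Gemini 3 Pro", features: ["vision", "search-grounding"] },
  { id: "gemini-3-pro-image-preview", label: "Gemini 3 Pro Image", features: ["vision", "search-grounding"] },
  { id: "gemini-3.1-flash-lite-preview", label: "Gemini 3.1 Flash Lite", features: ["vision"] },
  { id: "gemini-2.5-flash", label: "Gemini 2.5 Flash", features: ["vision", "search-grounding", "thinking"] },
];

// Check whether a given model supports a feature before enabling it in the UI.
function hasFeature(modelId: string, feature: GeminiFeature): boolean {
  const entry = geminiModels.find((m) => m.id === modelId);
  return entry ? entry.features.includes(feature) : false;
}
```

A lookup like this lets the chat UI show or hide the search-grounding toggle per selected model instead of hardcoding model names.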

Apps using Google Gemini

Google Gemini is integrated through Vercel AI SDK 6.0, with provider routing handled by lib/ai/ai-utils.ts.

Chat

Multi-provider chat with search grounding support

Vision

Default vision-model path for the meal-analysis app

Marketing Plan

Generate structured marketing plans using Gemini models

Launch Simulator

Generate Product Hunt launch simulations using Gemini models
The shared model registry includes gemini-3-pro-image-preview, but the shipped Image Studio currently uses its own OpenAI and Replicate-backed model catalog.

How it works

The codebase uses Vercel AI SDK 6.0 with a unified model registry — no direct Google API calls needed. Here’s the typical flow for an AI request:
  1. You select a model from the unified registry
  2. The request goes through getModelInstance() in lib/ai/ai-utils.ts
  3. The provider is determined via getProviderFromModelId()
  4. The model is instantiated with customModel()
  5. The response is streamed back to you
  6. Results are stored in PostgreSQL
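Step 3 of the flow above can be sketched as a simple prefix check. The shipped getProviderFromModelId() in lib/ai/ai-utils.ts covers every provider in the unified registry, so treat this body as an assumption about its shape rather than the real implementation:

```typescript
type Provider = "google" | "openai" | "unknown";

// Illustrative routing only - the real helper in lib/ai/ai-utils.ts
// handles all providers registered in lib/ai/models.ts.
function getProviderFromModelId(modelId: string): Provider {
  if (modelId.startsWith("gemini-")) return "google";
  if (modelId.startsWith("gpt-")) return "openai";
  return "unknown";
}
```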

Structure

Understand the full project structure of the codebase.