## Get your API key

### Sign in to Google AI Studio

Head to Google AI Studio and sign in with your Google account.
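Once you have a key, make it available to the app. A minimal sketch, assuming the project reads the Vercel AI SDK Google provider's default environment variable (`GOOGLE_GENERATIVE_AI_API_KEY`); check the project's own `.env.example` for the exact name it expects:

```shell
# .env.local
# Assumed variable name (the AI SDK Google provider default);
# verify against this project's .env.example before relying on it.
GOOGLE_GENERATIVE_AI_API_KEY=your-api-key-here
```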
## Available models
All models are defined in the unified model registry at `lib/ai/models.ts`.
| Model | ID | Features |
|---|---|---|
| Gemini 3 Pro | gemini-3-pro-preview | Vision, Search Grounding |
| Gemini 3 Pro Image | gemini-3-pro-image-preview | Vision, Search Grounding |
| Gemini 3.1 Flash Lite | gemini-3.1-flash-lite-preview | Vision |
| Gemini 2.5 Flash | gemini-2.5-flash | Vision, Search Grounding, Thinking/Reasoning |
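For illustration, registry entries for the models above might look something like the following. This is a hedged sketch, not the actual contents of `lib/ai/models.ts`: the real field names, types, and feature flags may differ.

```typescript
// Hypothetical shape for entries in lib/ai/models.ts;
// the real registry's fields and feature names may differ.
interface ModelEntry {
  label: string;
  id: string;
  features: string[];
}

const geminiModels: ModelEntry[] = [
  { label: "Gemini 3 Pro", id: "gemini-3-pro-preview", features: ["vision", "search-grounding"] },
  { label: "Gemini 3 Pro Image", id: "gemini-3-pro-image-preview", features: ["vision", "search-grounding"] },
  { label: "Gemini 3.1 Flash Lite", id: "gemini-3.1-flash-lite-preview", features: ["vision"] },
  { label: "Gemini 2.5 Flash", id: "gemini-2.5-flash", features: ["vision", "search-grounding", "thinking"] },
];

// Example helper: list the IDs of models that support search grounding.
const groundedIds = geminiModels
  .filter((m) => m.features.includes("search-grounding"))
  .map((m) => m.id);

console.log(groundedIds);
```

A registry like this lets UI code filter models by capability instead of hard-coding model IDs in each app.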
Search grounding is a standout Gemini feature — it lets the model pull in live web results when answering questions. Your users get up-to-date info without you needing to build a separate search integration.
## Apps using Google Gemini
Google Gemini is integrated through Vercel AI SDK 6.0, with provider routing handled by `lib/ai/ai-utils.ts`.
- **Chat**: multi-provider chat with search grounding support
- **Vision**: default vision-model path for the meal-analysis app
- **Marketing Plan**: generates structured marketing plans using Gemini models
- **Launch Simulator**: generates Product Hunt launch simulations using Gemini models
The shared model registry includes `gemini-3-pro-image-preview`, but the shipped Image Studio currently uses its own OpenAI- and Replicate-backed model catalog.

## How it works
The codebase uses Vercel AI SDK 6.0 with a unified model registry — no direct Google API calls needed. Here's the typical flow for an AI request:

1. You select a model from the unified registry.
2. The request goes through `getModelInstance()` in `lib/ai/ai-utils.ts`.
3. The provider is determined via `getProviderFromModelId()`.
4. The model is instantiated with `customModel()`.
5. The response is streamed back to you.
6. Results are stored in PostgreSQL.
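The routing step can be sketched as a prefix check on the model ID. This is an illustrative guess at the logic, not the actual body of `getProviderFromModelId()` in `lib/ai/ai-utils.ts`:

```typescript
// Illustrative sketch of ID-based provider routing; the real
// implementation in lib/ai/ai-utils.ts may check more providers
// or use the registry itself rather than string prefixes.
type Provider = "google" | "openai" | "unknown";

function getProviderFromModelId(modelId: string): Provider {
  if (modelId.startsWith("gemini-")) return "google";
  if (modelId.startsWith("gpt-")) return "openai";
  return "unknown";
}

console.log(getProviderFromModelId("gemini-2.5-flash")); // "google"
```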
## Structure
Understand the full project structure of the codebase.