Think of the AI SDK as the translation layer between your app and different model providers. It lets you talk to OpenAI, Anthropic, Google, xAI, Groq, DeepSeek, and Replicate through one consistent developer experience.
Why This Matters
Without an abstraction layer, every provider has a different API shape, a different streaming format, and different quirks. With the Vercel AI SDK you can:
- Stream chat responses in real time
- Switch models without rebuilding your whole app
- Generate structured JSON safely
- Run tool calls inside chat
- Generate embeddings for document search
- Keep your codebase way cleaner than a pile of provider-specific SDK logic
Where the Repo Uses It
The AI SDK isn’t just used in one demo. It powers multiple core parts of the product:
- Chat — `streamText()` + `useChat()` for streaming conversations, tool calling, browsing, and reasoning
- Marketing Plan & Launch Simulator — `generateText()` with `Output.object()` for validated JSON output
- Vision — `generateText()` with a schema so meal photos come back as structured nutrition data
- Audio — structured generation for summaries and action items after transcription
- RAG — `embed()` and `embedMany()` to convert document text into vectors for semantic retrieval
The Main Patterns
- Streaming Chat
- Structured Outputs
- Provider Switching
- Embeddings for RAG
Streaming is what makes the chat app feel alive instead of waiting for one big response at the end. The flow:
- The user sends a message
- The server prepares the system prompt, tools, and optional document context
- `streamText()` starts streaming tokens back immediately
- `useChat()` updates the UI as the answer arrives
- The final assistant message is stored in PostgreSQL
What You Need to Set Up
At minimum, you need one provider key. For most users, the easiest start is:
- OpenAI for chat, structured generation, and embeddings

The other supported providers are optional and can be added later:
- Anthropic
- Groq
- xAI
- DeepSeek
- Replicate
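Keys go in your env file under the variable names the official `@ai-sdk/*` provider packages read by default — a sketch, worth double-checking against each provider package's docs:

```bash
# .env.local — only the OpenAI key is required to start
OPENAI_API_KEY=sk-...

# Optional providers
ANTHROPIC_API_KEY=...
GROQ_API_KEY=...
XAI_API_KEY=...
DEEPSEEK_API_KEY=...
REPLICATE_API_TOKEN=...
```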
Files to Know
Model catalog and provider setup
- `package.json` — installed AI SDK packages
- `lib/ai/models.ts` — the shared model catalog
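The shared catalog is what makes provider switching cheap: the rest of the app refers to models by id, and only the catalog knows which provider backs each one. A sketch of that idea in pure TypeScript — the field names and entries are hypothetical, not the actual contents of `lib/ai/models.ts`:

```typescript
// Hypothetical shape of a shared model catalog.
type ModelEntry = {
  id: string;                                                    // model id used throughout the app
  provider: 'openai' | 'anthropic' | 'groq' | 'xai' | 'deepseek'; // which SDK provider backs it
  label: string;                                                  // name shown in the UI
  free: boolean;                                                  // false = credit-gated
};

export const models: ModelEntry[] = [
  { id: 'gpt-4o-mini', provider: 'openai', label: 'GPT-4o mini', free: true },
  { id: 'claude-3-5-sonnet-latest', provider: 'anthropic', label: 'Claude 3.5 Sonnet', free: false },
];

// Swapping models anywhere in the app is a catalog lookup, not a rewrite.
export function findModel(id: string): ModelEntry | undefined {
  return models.find((m) => m.id === id);
}
```

Route handlers can then map the entry's `provider` field to the matching `@ai-sdk/*` factory, so adding or removing a provider touches one file.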
Chat and tool calling
- `app/(apps)/chat/api/chat/*` — streaming chat logic
- `app/(apps)/chat/tools/*` — tool calling implementations
Structured generation apps
- `app/(apps)/marketing-plan/api/route.ts` — growth plan generation
- `app/(apps)/launch-simulator/api/route.ts` — schema-based launch simulations
- `app/(apps)/vision/api/route.ts` — structured vision output
Embeddings and RAG
- `lib/rag/*` — embeddings and retrieval pipeline
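Once `embedMany()` has turned document chunks into vectors and `embed()` has done the same for the query, retrieval is just similarity math. A sketch of that step in pure TypeScript — the `topK` helper and its shapes are illustrative, not the repo's actual pipeline:

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored chunks against a query vector and keep the k best matches.
function topK(
  query: number[],
  chunks: { text: string; embedding: number[] }[],
  k = 3,
): { text: string; score: number }[] {
  return chunks
    .map((c) => ({ text: c.text, score: cosineSimilarity(query, c.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}
```

In production the repo stores vectors in PostgreSQL rather than ranking in application code, but the scoring idea is the same.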
Good First Customizations
If you’re building on top of this starter, here’s what people usually do first:
- Remove providers you don’t plan to support
- Add or remove models from the shared model list
- Change which models are free vs. credit-gated
- Tighten prompts for your specific use case
- Duplicate a schema-based generation app and tailor its prompt, schema, and UI
- Add new tools to the chat app
Common Mistakes
- Chat — see the flagship app that uses streaming, tool calling, browsing, and RAG.
- Vector RAG — learn how embeddings and document retrieval work in this repo.
- OpenAI — the easiest provider to set up first.

