Think of the AI SDK as the translation layer between your app and different model providers. It lets you talk to OpenAI, Anthropic, Google, xAI, Groq, DeepSeek, and Replicate through one consistent developer experience.

Why This Matters

Without an abstraction layer, every provider has a different API shape, a different streaming format, and different quirks. With the AI SDK, you can:
  • Stream chat responses in real time
  • Switch models without rebuilding your whole app
  • Generate structured JSON safely
  • Run tool calls inside chat
  • Generate embeddings for document search
  • Keep your codebase way cleaner than a pile of provider-specific SDK logic

Where the Repo Uses It

The AI SDK isn’t just used in one demo. It powers multiple core parts of the product:
  • Chat — streamText() + useChat() for streaming conversations, tool calling, browsing, and reasoning
  • Marketing Plan & Launch Simulator — generateText() with Output.object() for validated JSON output
  • Vision — generateText() with a schema so meal photos come back as structured nutrition data
  • Audio — structured generation for summaries and action items after transcription
  • RAG — embed() and embedMany() to convert document text into vectors for semantic retrieval

The Main Patterns

Streaming is what makes the chat app feel alive instead of waiting for one big response at the end. The flow:
  1. The user sends a message
  2. The server prepares the system prompt, tools, and optional document context
  3. streamText() starts streaming tokens back immediately
  4. useChat() updates the UI as the answer arrives
  5. The final assistant message is stored in PostgreSQL
That’s the backbone of the flagship chat app. Users get instant feedback and the conversation persists across sessions.

What You Need to Set Up

At minimum, you need one provider key. For most users, the easiest start is:
  • OpenAI for chat, structured generation, and embeddings
Then add more providers only if you want more model choice:

  • Anthropic
  • Google
  • Groq
  • xAI
  • DeepSeek
  • Replicate

Files to Know

  • package.json — installed AI SDK packages
  • lib/ai/models.ts — the shared model catalog
  • app/(apps)/chat/api/chat/* — streaming chat logic
  • app/(apps)/chat/tools/* — tool calling implementations
  • app/(apps)/marketing-plan/api/route.ts — growth plan generation
  • app/(apps)/launch-simulator/api/route.ts — schema-based launch simulations
  • app/(apps)/vision/api/route.ts — structured vision output
  • lib/rag/* — embeddings and retrieval pipeline

Good First Customizations

If you’re building on top of this starter, here’s what people usually do first:
  1. Remove providers you don’t plan to support
  2. Add or remove models from the shared model list
  3. Change which models are free vs. credit-gated
  4. Tighten prompts for your specific use case
  5. Duplicate a schema-based generation app and tailor its prompt, schema, and UI
  6. Add new tools to the chat app

Common Mistakes

Watch out for these pitfalls:
  • Adding provider keys but forgetting to expose the models in the UI
  • Assuming every model supports browsing, vision, or thinking controls
  • Treating AI SDK abstractions like magic and forgetting provider-level costs
  • Using free-form text when a structured schema would be safer
  • Enabling document chat without also setting up storage and embeddings

Chat

See the flagship app that uses streaming, tool calling, browsing, and RAG.

Vector RAG

Learn how embeddings and document retrieval work in this repo.

OpenAI

OpenAI is the easiest provider to set up first.