If you are new to AI products, think of the AI SDK as the translation layer between your app and different model providers. It lets this repo talk to OpenAI, Anthropic, Google, xAI, Groq, DeepSeek, and Replicate through one consistent developer experience.

Why This Matters

Without an abstraction layer, every provider has a different API shape, a different streaming format, and different quirks. AnotherWrapper uses the Vercel AI SDK so you can:
  • stream chat responses in real time
  • switch models without rebuilding your whole app
  • generate structured JSON safely
  • run tool calls inside chat
  • generate embeddings for document search
  • keep the codebase much cleaner than a pile of provider-specific SDK logic
In plain English: it makes the AI parts of the product easier to build, easier to swap, and easier to maintain.

Where The Repo Uses It

The AI SDK is not just used in one demo. It powers multiple core parts of the product:
  • Chat uses streamText(...) plus useChat(...) for streaming conversations, tool calling, browsing, and reasoning controls.
  • Structured Output uses generateText(...) with Output.object(...) so models return validated JSON instead of messy free-form text.
  • Vision uses generateText(...) with a schema so meal photos come back as usable nutrition data.
  • Audio uses structured generation for summaries and action items after a transcript is created.
  • RAG uses embed(...) and embedMany(...) to convert document text into vectors for semantic retrieval.

The Main AI SDK Patterns In This Repo

1. Streaming chat

This is what makes the chat app feel alive instead of waiting for one big response at the end. The flow is:
  1. the user sends a message
  2. the server prepares the system prompt, tools, and optional document context
  3. streamText(...) starts streaming tokens back immediately
  4. useChat(...) updates the UI as the answer arrives
  5. the final assistant message is stored in Supabase
That is the backbone of the flagship chat app.
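The server side of those steps can be sketched as a route handler. This is a minimal sketch, not the repo's exact code: the route path, model choice, and system prompt below are illustrative, and the exact method names depend on your AI SDK version.

```typescript
// app/api/chat/route.ts — minimal streaming chat sketch (illustrative, not the repo's code).
// Assumes an OPENAI_API_KEY is set and AI SDK v4-style APIs.
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o-mini'),            // any configured provider/model works here
    system: 'You are a helpful assistant.',  // the real app builds this dynamically
    messages,
    onFinish: async ({ text }) => {
      // This is where the final assistant message would be persisted (e.g. to Supabase).
    },
  });

  // Streams tokens to the client as they arrive; useChat(...) consumes this stream.
  return result.toDataStreamResponse();
}
```

On the client, `useChat(...)` handles the incremental UI updates automatically as chunks arrive from this response.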

2. Structured outputs

For things like the nutrition analyzer or the structured-output app, normal plain text is not enough. You want fields like title, summary, calories, or actionItems in a predictable shape. That is why the repo uses schema-based outputs. The model is asked to return data that matches a Zod schema, and the app renders that data cleanly. This is a big quality-of-life upgrade compared with trying to parse random AI text after the fact.
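That pattern can be sketched like this. The schema fields and model are illustrative, and the option name (`experimental_output`) reflects recent AI SDK versions and may differ in yours:

```typescript
// Structured-output sketch: ask for JSON matching a Zod schema (illustrative fields).
// Assumes an OPENAI_API_KEY and AI SDK v4-style APIs.
import { generateText, Output } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const nutritionSchema = z.object({
  title: z.string(),
  summary: z.string(),
  calories: z.number(),
  actionItems: z.array(z.string()),
});

const { experimental_output: data } = await generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'Analyze this meal description: ...',
  experimental_output: Output.object({ schema: nutritionSchema }),
});

// `data` has been validated against the schema, so the app can render it directly.
```

Because the output is validated before your code sees it, a typo in a field name fails loudly instead of silently rendering `undefined`.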

3. Provider switching

AnotherWrapper ships with provider adapters for:
  • OpenAI
  • Anthropic
  • Google
  • Groq
  • xAI
  • DeepSeek
  • Replicate
That means you can expose several models in one UI and let users switch between them without rebuilding each app from scratch.
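One way to organize that switching is a shared model catalog keyed by UI id. This is a simplified sketch: the repo's actual catalog lives in lib/ai/models.ts and maps to AI SDK provider instances, while the ids and capability flags below are made up for illustration.

```typescript
// Simplified model-catalog sketch. The real catalog (lib/ai/models.ts) resolves to
// AI SDK provider factories; plain strings are used here so the lookup logic is visible.
// All ids and capability flags are illustrative.
type ModelEntry = {
  provider: 'openai' | 'anthropic' | 'google' | 'groq' | 'xai' | 'deepseek';
  modelId: string;
  supportsVision: boolean;
};

const MODELS: Record<string, ModelEntry> = {
  'gpt-4o-mini':   { provider: 'openai',    modelId: 'gpt-4o-mini',              supportsVision: true },
  'claude-sonnet': { provider: 'anthropic', modelId: 'claude-3-5-sonnet-latest', supportsVision: true },
  'deepseek-chat': { provider: 'deepseek',  modelId: 'deepseek-chat',            supportsVision: false },
};

// Resolve a UI selection to a provider + model id, falling back to a default.
function resolveModel(uiId: string): ModelEntry {
  return MODELS[uiId] ?? MODELS['gpt-4o-mini'];
}
```

Keeping capability flags (like `supportsVision`) in the catalog also guards against the common mistake of assuming every model supports every feature.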

4. Embeddings for RAG

The repo also uses the AI SDK for embeddings in the document chat system. In simple terms:
  • text from a PDF is split into chunks
  • each chunk is converted into a vector
  • the vectors are stored in Supabase with pgvector
  • when the user asks a question, the question is also embedded
  • the app finds the most relevant chunks and injects them into the prompt
That is how the chat app can answer questions about uploaded documents instead of only relying on the model’s general knowledge.
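The "find the most relevant chunks" step reduces to vector math. Here is a sketch of that ranking logic: in the real app the vectors come from embedMany(...) and live in Supabase with pgvector, but the tiny hand-made vectors below make the math visible.

```typescript
// Cosine-similarity ranking: the core of semantic retrieval.
// In the repo, embeddings come from embedMany(...) and are queried via pgvector;
// this sketch ranks in-memory chunks so the logic is self-contained.
type Chunk = { text: string; embedding: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the k chunks most similar to the query embedding, best first.
function topChunks(query: number[], chunks: Chunk[], k: number): Chunk[] {
  return [...chunks]
    .sort((x, y) => cosineSimilarity(query, y.embedding) - cosineSimilarity(query, x.embedding))
    .slice(0, k);
}
```

The selected chunks are then injected into the prompt as context, which is exactly the step described above.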

What You Need To Set Up

At minimum, you need one provider key. For most users, the easiest path is:
  • OpenAI for chat, structured generation, and embeddings
Then add more providers only if you want more model choice.
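In practice that means setting environment variables. The names below are the defaults each provider's SDK reads, which is an assumption; verify them against the repo's own env example file:

```
# Illustrative env-var names — check the repo's env example for the authoritative list.
OPENAI_API_KEY=sk-...        # chat, structured generation, embeddings
ANTHROPIC_API_KEY=...        # optional
GOOGLE_GENERATIVE_AI_API_KEY=...  # optional
GROQ_API_KEY=...             # optional
```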

Files To Know

If you want to understand or customize the AI layer, these are the most useful places to start:
  • package.json for the installed AI SDK packages
  • lib/ai/models.ts for the shared model catalog
  • app/(apps)/chat/api/chat/* for streaming chat logic
  • app/(apps)/chat/tools/* for tool calling
  • app/(apps)/structured-output/api/route.ts for schema-based generation
  • app/(apps)/vision/api/route.ts for structured vision output
  • lib/rag/* for embeddings and retrieval

Good First Customizations

If you are building on top of this starter, these are usually the first AI changes people make:
  • remove providers they do not plan to support
  • add or remove models from the shared model list
  • change which models are free vs. credit-gated
  • tighten prompts for their niche use case
  • add custom structured-output templates
  • add new tools to the chat app

Common Mistakes

  • adding provider keys but forgetting to expose the models in the UI
  • assuming every model supports browsing, vision, or thinking controls
  • treating AI SDK abstractions like magic and forgetting provider-level costs
  • using free-form text when a structured schema would be safer
  • enabling document chat without also setting up storage and embeddings