This is the most advanced app in AnotherWrapper Premium. It’s not a basic chatbot — it’s a multi-model AI workspace with streaming responses, web browsing, PDF chat, citations, generative UI, and persistent conversation history.

What Your Users Get

From the user’s perspective, this feels like a serious AI assistant, not a toy prompt box. They can:
  • Switch models on the fly — GPT, Claude, Gemini, Grok, DeepSeek, Llama
  • Chat with streaming responses that feel instant
  • Upload PDFs and ask questions about them (with citations!)
  • Browse the web when browsing mode is enabled
  • Work with images and other multimodal inputs
  • Generate documents and UI blocks inside the chat
  • Keep conversation history tied to their account
If you want a “main AI app” in your product, this is the one.

Supported Models

| Provider | Models |
| --- | --- |
| OpenAI | GPT-5, GPT-5 mini, GPT-5 nano, GPT-4o, o3 |
| Anthropic | Claude Opus 4.5, Sonnet 4.5, Haiku 4.5 |
| Google | Gemini 3 Pro, Gemini 3 Pro Image, Gemini 3.1 Flash Lite, Gemini 2.5 Flash |
| Groq | Llama 4 Scout, Llama 4 Maverick |
| xAI | Grok 4, Grok 4.1, Grok 4.1 Fast Reasoning |
| DeepSeek | DeepSeek Chat |
Models are configured in lib/ai/models.ts. You can add, remove, or tweak available models there.
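As a rough sketch of what a model registry in lib/ai/models.ts can look like (the interface and field names below are illustrative assumptions, not the boilerplate's actual types):

```typescript
// Hypothetical shape of the model registry -- the real file in
// lib/ai/models.ts may use different names and fields.
interface ChatModel {
  id: string;      // provider-specific model identifier
  label: string;   // name shown in the model picker
  provider: "openai" | "anthropic" | "google" | "groq" | "xai" | "deepseek";
  premium: boolean; // whether selecting this model should cost credits
}

const models: ChatModel[] = [
  { id: "gpt-5-mini", label: "GPT-5 mini", provider: "openai", premium: false },
  { id: "claude-opus-4-5", label: "Claude Opus 4.5", provider: "anthropic", premium: true },
];

// Removing an entry from the array removes it from the picker;
// derived lists (like the free tier) update automatically.
const freeModels = models.filter((m) => !m.premium);
```

With a structure like this, adding a model is a one-line change, and free/premium splits fall out of a single flag.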

What Makes It Feel Like a Real Product

This isn’t just “call an API and show text.” It combines several systems:
  • Vercel AI SDK for streaming, tools, and provider abstraction
  • Multiple providers so users aren’t locked into one model family
  • Native web search tools for supported providers
  • RAG so uploaded PDFs can be searched semantically
  • Better Auth + PostgreSQL for auth, sessions, and persistence
  • Object storage for uploads
  • Credit gating so premium models and actions can be monetized
That combination is what makes it feel like a product instead of a demo.

Setup

Get chat running with the essentials.
1. Set up core services

You need auth, a database, and at least one configured chat provider. Storage is optional unless you want uploads or PDF chat.

  • Better Auth + PostgreSQL: auth and a hosted Postgres database for this boilerplate
  • One AI provider: OpenAI, Anthropic, Google, Groq, xAI, or DeepSeek
  • Storage (optional): only needed for uploads, file attachments, and PDF workflows
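In practice, step 1 mostly means filling in environment variables. The entries below are illustrative, common defaults rather than the repo's authoritative names; check the boilerplate's own .env.example:

```shell
# Illustrative .env entries -- variable names are assumptions,
# verify against the boilerplate's .env.example.
DATABASE_URL="postgres://user:pass@host:5432/db"   # Better Auth + app data
BETTER_AUTH_SECRET="generate-a-long-random-string" # session signing
OPENAI_API_KEY="sk-..."                            # at least one chat provider
```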
2. Run the app

Start the dev server and open the chat page. You should see streaming responses from your configured model.
3. Verify it works

Switch between models, send a few messages, and confirm that conversations persist after a refresh.
Web search uses the native tool support from each provider — there’s no separate search API to configure. Providers with native search support:
  • OpenAI
  • Anthropic
  • Google
  • xAI
Search availability depends on the selected model/provider. Models without native web search support (like some Groq or DeepSeek paths) won’t get a fallback search provider automatically.
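One way to enforce this in the UI is to gate the browsing toggle on provider support. A minimal sketch (the function name and set are illustrative; the provider list mirrors the one above):

```typescript
// Providers with native web search support, per the docs above.
const NATIVE_SEARCH_PROVIDERS = new Set(["openai", "anthropic", "google", "xai"]);

// Hypothetical helper: disable or hide the browsing toggle
// when the selected model's provider cannot search natively.
function supportsNativeSearch(provider: string): boolean {
  return NATIVE_SEARCH_PROVIDERS.has(provider);
}
```

For example, `supportsNativeSearch("deepseek")` returns `false`, so the browsing toggle would be disabled for DeepSeek models rather than silently falling back.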

How a Chat Request Works

1. User sends a message

The message hits the server along with the chosen model, browsing mode, and any attached documents.
2. Context assembly

If documents are active, the app retrieves relevant chunks from PostgreSQL via semantic search.
3. Prompt building

The server builds the system prompt and tool list based on the model, context, and active features.
4. Streaming response

The answer streams back in real time to the user’s browser.
5. Persistence

The final messages and metadata are saved in PostgreSQL for conversation history.
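The context-assembly and prompt-building steps above can be sketched as a pure function: retrieved chunks are numbered and appended to the base system prompt so the model can cite them. Everything here (names, prompt wording) is illustrative, not the app's actual implementation:

```typescript
// A retrieved document chunk (illustrative shape).
interface Chunk {
  docId: string;
  text: string;
}

// Sketch: fold retrieved chunks into the system prompt, numbered so the
// model's answer can reference sources by index for citation rendering.
function buildSystemPrompt(base: string, chunks: Chunk[]): string {
  if (chunks.length === 0) return base; // no active documents: plain prompt
  const context = chunks
    .map((c, i) => `[${i + 1}] (doc ${c.docId}) ${c.text}`)
    .join("\n");
  return `${base}\n\nUse the following document excerpts and cite them by number:\n${context}`;
}
```

The server would then pass this prompt, plus the tool list for the active features, to the streaming call.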

PDF Chat, Vector Search, and Citations

The document workflow is one of the most powerful parts of this app.
When a user uploads a PDF:
  1. The file is stored in object storage
  2. Text is extracted from the PDF
  3. The text is split into chunks
  4. Each chunk is embedded with OpenAI
  5. Embeddings are stored in PostgreSQL with pgvector
Later, when the user asks a question, those chunks are matched semantically and injected into the prompt.
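Step 3 of the pipeline (splitting) can be as simple as a fixed-size chunker with overlap, so that sentences straddling a boundary appear in two chunks. Real splitters are usually token- or sentence-aware; this is only a sketch:

```typescript
// Illustrative fixed-size chunker with overlap. Production splitters
// typically count tokens, not characters, and respect sentence breaks.
function chunkText(text: string, size = 500, overlap = 50): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```

Each returned chunk would then be embedded and stored alongside its source document id, which is what later makes citations possible.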
When the chat retrieves context from uploaded documents, the relevant source chunks are tracked. The UI then renders citations alongside the AI’s response so users can see exactly where the information came from.
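Conceptually, the retrieval step is a cosine-similarity top-k search over the stored vectors; pgvector does this in SQL, but an in-memory sketch shows the idea, with each result keeping its source document id for citation rendering (shapes and names are illustrative):

```typescript
// A stored chunk with its embedding (illustrative shape).
interface Embedded {
  docId: string;
  text: string;
  vector: number[];
}

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k chunks most similar to the query embedding.
// pgvector performs the equivalent search with its cosine operator.
function topK(query: number[], rows: Embedded[], k: number): Embedded[] {
  return [...rows]
    .sort((x, y) => cosine(query, y.vector) - cosine(query, x.vector))
    .slice(0, k);
}
```

Because each result carries its `docId`, the UI can map every injected excerpt back to the file it came from.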
Check out the full guide: Vector Database & RAG

Generative UI and Tools

This chat app isn’t text-only. It supports tool-driven interactions like:
  • Document creation — generate and update docs inside the chat
  • App suggestions — contextual recommendations
  • Web search — provider-native browsing when enabled
That’s what makes the chat feel “more like an agent” and less like a plain chatbot.
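The repo keeps its tools under app/(apps)/chat/tools/. A tool entry boils down to a description the model sees plus an execute function; the shape below is an illustrative sketch, not the actual tool interface the boilerplate (or the AI SDK) uses:

```typescript
// Hypothetical tool shape -- the real implementation likely uses the
// Vercel AI SDK's tool helpers with schema-validated parameters.
interface ChatTool {
  name: string;
  description: string; // what the model reads when deciding to call it
  execute: (args: Record<string, unknown>) => unknown;
}

// Illustrative "document creation" tool: the result would be rendered
// as a generative UI block in the chat instead of plain text.
const createDocument: ChatTool = {
  name: "createDocument",
  description: "Generate a document block inside the chat",
  execute: ({ title }) => ({ title, status: "created" }),
};
```

Adding a new capability is then a matter of dropping another entry into the tools directory and including it in the tool list built for the request.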

Credit Gating

The repo includes an app-wide credit layer that makes monetization straightforward:
  • Free models can stay free
  • Premium models can cost credits
  • Browsing can also cost credits
  • The app returns usage metadata so the UI can show what happened
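The pricing rules above reduce to a small function. The rates here are made up for illustration; the repo's actual credit costs will differ:

```typescript
// Sketch of per-request credit pricing. The numbers are illustrative
// assumptions, not the repo's actual rates.
function requestCost(premiumModel: boolean, browsing: boolean): number {
  let cost = premiumModel ? 2 : 0; // free models stay free
  if (browsing) cost += 1;         // browsing can also cost credits
  return cost;
}
```

The server would compute this before calling the provider, reject the request if the user's balance is too low, and return the charged amount in the usage metadata so the UI can show what happened.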

Good First Customizations

Ready to make this your own? Here are the usual first edits:
  • Update the model list in lib/ai/models.ts
  • Change which models are free vs. premium
  • Adjust the system prompt in app/(apps)/chat/prompt.ts
  • Add or remove tools under app/(apps)/chat/tools/
  • Tune the document retrieval flow under lib/rag/

Verification

Your chat setup is working if:
  • The page loads and streams responses
  • Switching models changes the provider/model badge
  • Browsing works on models that support native search
  • Uploaded PDFs can be indexed and cited
  • New messages persist after refresh

Learn More

  • AI SDK: why streaming, tools, and model switching work the way they do
  • Vector RAG: how document chat and citations are implemented
  • Storage: uploads and file-backed workflows depend on storage being configured