This is the most advanced app in AnotherWrapper Premium. It’s not a basic chatbot — it’s a multi-model AI workspace with streaming responses, web browsing, PDF chat, citations, generative UI, and persistent conversation history.
What Your Users Get
From the user’s perspective, this feels like a serious AI assistant, not a toy prompt box. They can:

- Switch models on the fly — GPT, Claude, Gemini, Grok, DeepSeek, Llama
- Chat with streaming responses that feel instant
- Upload PDFs and ask questions about them (with citations!)
- Browse the web when browsing mode is enabled
- Work with images and other multimodal inputs
- Generate documents and UI blocks inside the chat
- Keep conversation history tied to their account
Supported Models
| Provider | Models |
|---|---|
| OpenAI | GPT-5, GPT-5 mini, GPT-5 nano, GPT-4o, o3 |
| Anthropic | Claude Opus 4.5, Sonnet 4.5, Haiku 4.5 |
| Google | Gemini 3 Pro, Gemini 3 Pro Image, Gemini 3.1 Flash Lite, Gemini 2.5 Flash |
| Groq | Llama 4 Scout, Llama 4 Maverick |
| xAI | Grok 4, Grok 4.1, Grok 4.1 Fast Reasoning |
| DeepSeek | DeepSeek Chat |
The full model list lives in lib/ai/models.ts. You can add, remove, or tweak available models there.
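As a rough illustration of what a model registry entry in lib/ai/models.ts might look like (the field names and values here are assumptions, not the repo's actual types), it could be a typed array plus a helper the UI uses to split free from premium models:

```typescript
// Hypothetical sketch of a model registry; the real shape in
// lib/ai/models.ts may differ.
type ChatModel = {
  id: string; // provider-specific model identifier
  provider: "openai" | "anthropic" | "google" | "groq" | "xai" | "deepseek";
  label: string; // name shown in the model switcher
  premium: boolean; // whether selecting this model costs credits
};

const models: ChatModel[] = [
  { id: "gpt-5", provider: "openai", label: "GPT-5", premium: true },
  { id: "deepseek-chat", provider: "deepseek", label: "DeepSeek Chat", premium: false },
];

// Filter used to populate the free tier of the model switcher.
function freeModels(all: ChatModel[]): ChatModel[] {
  return all.filter((m) => !m.premium);
}
```

Keeping a single typed registry like this is what makes "add, remove, or tweak" a one-file change: the switcher UI, credit gating, and provider routing can all read from the same list.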
What Makes It Feel Like a Real Product
This isn’t just “call an API and show text.” It combines several systems:

- Vercel AI SDK for streaming, tools, and provider abstraction
- Multiple providers so users aren’t locked into one model family
- Native web search tools for supported providers
- RAG so uploaded PDFs can be searched semantically
- Better Auth + PostgreSQL for auth, sessions, and persistence
- Object storage for uploads
- Credit gating so premium models and actions can be monetized
Setup
- Basic Setup
- Full Setup with RAG + All Providers
Get chat running with the essentials.
Set up core services
You need auth, a database, and at least one configured chat provider. Storage is optional unless you want uploads or PDF chat.
Better Auth + PostgreSQL
Auth + hosted Postgres for this boilerplate
One AI Provider
OpenAI, Anthropic, Google, Groq, xAI, or DeepSeek
Storage (optional)
Only needed for uploads, file attachments, and PDF workflows
Run the app
Start the dev server and open the chat page. You should see streaming responses from your configured model.
Web Search
Web search uses the native tool support from each provider — there’s no separate search API to configure. Providers with native search support:

- OpenAI
- Anthropic
- xAI
Search availability depends on the selected model and provider. Models without native web search support (such as the Groq and DeepSeek models) won’t automatically fall back to a separate search provider.
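A minimal sketch of how the app could decide whether to expose the browsing toggle for the selected model (the set and helper name are assumptions, not the repo's actual API):

```typescript
// Providers with native web search support, per the list above.
const NATIVE_SEARCH_PROVIDERS = new Set(["openai", "anthropic", "xai"]);

// Hypothetical helper: gate the browsing toggle on provider support,
// since there is no fallback search provider for the others.
function supportsWebSearch(provider: string): boolean {
  return NATIVE_SEARCH_PROVIDERS.has(provider);
}
```

The UI can call a check like this when the user switches models, so browsing mode is only offered where it will actually work.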
How a Chat Request Works
User sends a message
The message hits the server along with the chosen model, browsing mode, and any attached documents.
Context assembly
If documents are active, the app retrieves relevant chunks from PostgreSQL via semantic search.
Prompt building
The server builds the system prompt and tool list based on the model, context, and active features.
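The context-assembly and prompt-building steps above can be sketched as a small pure function (the type and prompt wording here are illustrative assumptions, not the repo's actual implementation):

```typescript
// A retrieved document chunk from semantic search.
type Chunk = { documentId: string; text: string };

// Hypothetical sketch: fold retrieved chunks into the system prompt
// before the model is called. With no active documents, the base
// prompt is used unchanged.
function buildSystemPrompt(base: string, chunks: Chunk[]): string {
  if (chunks.length === 0) return base;
  const context = chunks
    .map((c, i) => `[${i + 1}] (${c.documentId}) ${c.text}`)
    .join("\n");
  return `${base}\n\nUse the following document excerpts when relevant:\n${context}`;
}
```

Numbering the excerpts at this stage is also what lets the citation UI later map a reference like "[1]" back to a specific source chunk.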
PDF Chat, Vector Search, and Citations
The document workflow is one of the most powerful parts of this app.
How PDF upload and indexing works
When a user uploads a PDF:
- The file is stored in object storage
- Text is extracted from the PDF
- The text is split into chunks
- Each chunk is embedded with OpenAI
- Embeddings are stored in PostgreSQL with pgvector
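Step 3 of the pipeline (splitting extracted text into chunks) can be sketched as a simple sliding window. The chunk size and overlap values below are illustrative defaults, not the repo's actual settings:

```typescript
// Minimal chunking sketch: fixed-size windows with a small overlap so
// sentences spanning a boundary still appear intact in some chunk.
function chunkText(text: string, size = 800, overlap = 100): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last window reached the end
  }
  return chunks;
}
```

Each chunk would then be embedded individually (step 4) and written to the pgvector column alongside its document ID (step 5).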
How citations work
When the chat retrieves context from uploaded documents, the relevant source chunks are tracked. The UI then renders citations alongside the AI’s response so users can see exactly where the information came from.
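The tracking step can be pictured as mapping the retrieved chunks to lightweight citation records for the UI (the types and field names here are assumptions for illustration, not the repo's actual schema):

```typescript
// A chunk returned by semantic search, with enough metadata to cite it.
type RetrievedChunk = { documentId: string; page: number; text: string };

// What the UI needs to render a citation marker next to the response.
type Citation = { index: number; documentId: string; page: number };

// Hypothetical sketch: assign each source chunk a stable citation number.
function toCitations(chunks: RetrievedChunk[]): Citation[] {
  return chunks.map((c, i) => ({
    index: i + 1,
    documentId: c.documentId,
    page: c.page,
  }));
}
```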
Setting up RAG
Check out the full guide: Vector Database & RAG
Generative UI and Tools
This chat app isn’t text-only. It supports tool-driven interactions like:

- Document creation — generate and update docs inside the chat
- App suggestions — contextual recommendations
- Web search — provider-native browsing when enabled
Credit Gating
The repo includes an app-wide credit layer that makes monetization straightforward:

- Free models can stay free
- Premium models can cost credits
- Browsing can also cost credits
- The app returns usage metadata so the UI can show what happened
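A toy sketch of how such a credit layer could price a request (the costs and function name are hypothetical, not the repo's actual pricing logic):

```typescript
// Illustrative per-message credit cost: premium models and browsing
// each add to the price; a free model with browsing off costs nothing.
function messageCost(opts: { premiumModel: boolean; browsing: boolean }): number {
  let cost = 0;
  if (opts.premiumModel) cost += 2; // assumed premium surcharge
  if (opts.browsing) cost += 1;     // assumed browsing surcharge
  return cost;
}
```

Returning the computed cost alongside the response is what lets the UI show users exactly what each message consumed.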
Good First Customizations
Ready to make this your own? Here are the usual first edits:

- Update the model list in lib/ai/models.ts
- Change which models are free vs. premium
- Adjust the system prompt in app/(apps)/chat/prompt.ts
- Add or remove tools under app/(apps)/chat/tools/
- Tune the document retrieval flow under lib/rag/
Verification
Your chat setup is working if:
- The page loads and streams responses
- Switching models changes the provider/model badge
- Browsing works on models that support native search
- Uploaded PDFs can be indexed and cited
- New messages persist after refresh
AI SDK
Learn why streaming, tools, and model switching work the way they do.
Vector RAG
Learn how document chat and citations are implemented.
Storage
Uploads and file-backed workflows depend on storage being configured.

