Chat is the most advanced app in AnotherWrapper Premium. It is not a basic chatbot demo. It is a multi-model AI workspace with streaming responses, web browsing, PDF chat, citations, generative UI, and persistent conversation history.

What This App Actually Does

From the user’s point of view, this app feels like a serious AI assistant, not just a prompt box. Users can:
  • switch between models like GPT, Claude, Gemini, Grok, DeepSeek, and Llama
  • chat with streaming responses
  • upload PDFs and ask questions about them
  • browse the web when browsing is enabled
  • work with images and other multimodal inputs
  • generate documents and UI blocks inside the chat flow
  • keep conversation history in their account
  • see citations when answers are grounded in uploaded documents
If you want a “main AI app” in your product, this is usually the one.

Why It Feels More Capable Than A Basic Chat Demo

The current chat app combines several systems working together:
  • Vercel AI SDK for streaming, tools, and provider abstraction
  • multiple providers so users are not locked into one model family
  • native web search tools for supported providers
  • RAG so uploaded PDFs can be searched semantically
  • Supabase for auth, chat history, and document records
  • object storage for uploads
  • credit gating so premium models and premium actions can be monetized
That combination is what makes it feel like a product instead of a toy.

Supported Models

  • OpenAI: GPT-5, GPT-5 mini, o3
  • Anthropic: Claude Opus 4.5, Claude Sonnet 4.5, Claude Haiku 4.5
  • Google: Gemini 3 Pro, Gemini 2.5 Flash
  • Groq: Llama 4 Scout, Llama 4 Maverick
  • xAI: Grok 4, Grok 4.1
  • DeepSeek: DeepSeek models
Models are configured in lib/ai/models.ts. You can add, remove, or modify available models there.
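The exact shape of lib/ai/models.ts is not shown here, but a model entry plausibly looks something like the sketch below. Every field name is an assumption for illustration; check the actual file before copying anything verbatim.

```typescript
// Hypothetical shape of an entry in lib/ai/models.ts.
// Field names are illustrative, not the repo's actual schema.
export interface ChatModel {
  id: string;            // internal id sent to the API route
  provider: "openai" | "anthropic" | "google" | "groq" | "xai" | "deepseek";
  label: string;         // name shown in the model switcher
  premium: boolean;      // premium models can be credit-gated
  nativeSearch: boolean; // whether the provider exposes a native web-search tool
}

export const chatModels: ChatModel[] = [
  { id: "gpt-5", provider: "openai", label: "GPT-5", premium: true, nativeSearch: true },
  { id: "llama-4-scout", provider: "groq", label: "Llama 4 Scout", premium: false, nativeSearch: false },
];
```

Adding a model is then a matter of appending one entry; removing one hides it from the switcher.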

What You Need To Run It

At minimum, the chat app needs Supabase (auth, chat history, document records), object storage for uploads, and an API key for at least one model provider; OpenAI is also used for document embeddings. To expose every model listed above, configure API keys for the additional providers.

Web search is optional and uses the native tool support exposed by the selected model provider. There is no separate search-provider setup in the current chat implementation. Providers with native search support in the app:
  1. OpenAI
  2. Anthropic
  3. Google
  4. xAI
Search availability therefore depends on the selected model and provider. Models without native web search support (for example, some Groq or DeepSeek models) do not fall back to a separate search provider automatically.
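The availability rule above boils down to a simple provider check. This helper is hypothetical (the app may gate search differently), but it captures the documented behavior:

```typescript
// Which providers expose native web-search tools, per the list above.
// Hypothetical helper; the real app may implement this check differently.
type Provider = "openai" | "anthropic" | "google" | "groq" | "xai" | "deepseek";

const NATIVE_SEARCH_PROVIDERS: ReadonlySet<Provider> = new Set<Provider>([
  "openai", "anthropic", "google", "xai",
]);

export function supportsNativeSearch(provider: Provider): boolean {
  return NATIVE_SEARCH_PROVIDERS.has(provider);
}
```

A UI can call this to disable the browsing toggle when the selected model's provider has no native search tool.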

How A Chat Request Works

Here is the simple version of the pipeline:
  1. the user sends a message
  2. the app checks the chosen model, browsing mode, and attached documents
  3. if documents are active, the app retrieves relevant chunks from the vector database
  4. the server builds the system prompt and tool list
  5. the answer streams back in real time
  6. the final messages and metadata are saved in Supabase
That is why this app can do more than “send prompt, receive text”.
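Steps 2 through 4 of that pipeline can be sketched as a pure function. The names below are illustrative, not the repo's actual code; in the real handler, the resulting prompt is passed to the Vercel AI SDK's streamText() along with the tool list, and the final messages are persisted to Supabase:

```typescript
// Illustrative sketch of steps 2-4: inspect the request options and build
// the system prompt. Not the repo's actual implementation.
interface ChatOptions {
  modelId: string;
  browsing: boolean;
  documentChunks: string[]; // chunks already retrieved from pgvector (step 3)
}

export function buildSystemPrompt(base: string, opts: ChatOptions): string {
  const parts = [base];
  if (opts.browsing) {
    parts.push("You may use the web search tool for up-to-date information.");
  }
  if (opts.documentChunks.length > 0) {
    parts.push(
      "Answer using the following document excerpts and cite them:",
      ...opts.documentChunks.map((chunk, i) => `[${i + 1}] ${chunk}`),
    );
  }
  return parts.join("\n\n");
}
```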

PDF Chat, Vector Search, And Citations

The document workflow is one of the strongest parts of the app. When a user uploads a PDF:
  1. the file is stored
  2. text is extracted
  3. the text is split into chunks
  4. each chunk is embedded with OpenAI
  5. those embeddings are stored in Supabase with pgvector
  6. later, user questions are matched against those chunks
  7. the retrieved context is injected into the prompt
  8. citations are shown in the UI
If you are new to this concept, read the full guide here:
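Step 3 of the workflow, splitting text into chunks, can be sketched with a sliding window. The code under lib/rag/ may split differently (for example, on sentence or page boundaries), so treat this as a minimal illustration:

```typescript
// Minimal sliding-window chunker with overlap, sketching step 3 above.
// The repo's lib/rag/ code may use a different splitting strategy.
export function chunkText(text: string, size = 800, overlap = 100): string[] {
  if (size <= overlap) throw new Error("size must be larger than overlap");
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```

Each chunk is then embedded (the docs say with OpenAI) and stored in Supabase via pgvector; overlap keeps sentences that straddle a boundary retrievable from either side.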

Generative UI And Tools

This chat app is not text-only. It also supports tool-driven interactions such as:
  • document creation
  • document updates
  • app suggestions
  • provider-native web search
That is what people usually mean when they say the chat feels “more like an agent” and less like a plain chatbot.
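In the Vercel AI SDK, each such tool pairs a description, an input schema, and an execute function. The dependency-free sketch below shows the shape without the SDK; in the app, tools under app/(apps)/chat/tools/ would be wrapped with the SDK's tool() helper and a zod schema, and the name createDocumentTool is hypothetical:

```typescript
// Dependency-free sketch of a chat tool. In the real app this would be
// defined with the Vercel AI SDK's tool() helper and a zod input schema;
// the tool name and return shape here are illustrative assumptions.
interface CreateDocumentArgs {
  title: string;
  content: string;
}

export const createDocumentTool = {
  description: "Create a document the user can view and edit inside the chat UI",
  execute: async (args: CreateDocumentArgs) => {
    // A real implementation would persist the document (e.g. to Supabase)
    // and return an id the generative UI can render as a document block.
    return { id: "doc_demo", title: args.title, length: args.content.length };
  },
};
```

When the model calls the tool, the returned object is what the generative UI renders as a block inside the conversation.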

Credit Gating

The repo includes an app-wide credit layer. In practical terms:
  • free models can stay free
  • premium models can cost credits
  • browsing can also cost credits
  • the app returns usage metadata so the UI can show what happened
This makes the chat app much easier to monetize.
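The gating rules above amount to a small cost function. The credit amounts below are invented for illustration; the repo's actual pricing will differ:

```typescript
// Illustrative credit-cost calculator. The per-action credit values are
// assumptions, not the repo's real pricing.
export function messageCost(opts: { premiumModel: boolean; browsing: boolean }): number {
  const PREMIUM_MODEL_COST = 5; // assumed credits per premium-model message
  const BROWSING_COST = 2;      // assumed credits when browsing is enabled
  let cost = 0;
  if (opts.premiumModel) cost += PREMIUM_MODEL_COST;
  if (opts.browsing) cost += BROWSING_COST;
  return cost; // free models with browsing off stay at 0
}
```

The server would check the user's balance against this cost before streaming, then return usage metadata so the UI can show what was charged.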

Good First Customizations

If you are building your own product on top of this app, the usual first edits are:
  • update the model list in lib/ai/models.ts
  • change which models are free vs. premium
  • adjust the system prompt in app/(apps)/chat/prompt.ts
  • add or remove tools under app/(apps)/chat/tools/
  • tune the document retrieval flow under lib/rag/

Verification

Your chat setup is working if:
  • the page loads and streams responses
  • switching models changes the provider/model badge
  • browsing works on models that support native search
  • uploaded PDFs can be indexed and cited
  • new messages persist after refresh