DeepSeek offers powerful reasoning capabilities at competitive pricing. Their chat model comes with built-in thinking/reasoning mode, making it a solid choice if you want to give your users another strong reasoning option alongside the other providers.

Get your API key

1. Create or sign in to your DeepSeek account

   Head to the DeepSeek Platform and sign up or log in.

2. Create an API key

   Navigate to the API keys section and create a new key.

3. Add it to your env

   Paste the key in your .env.local file:

   DEEPSEEK_API_KEY=your_deepseek_api_key

   Save your API key somewhere safe right away — you won’t be able to see it again.

Available models

All models are defined in the unified model registry at lib/ai/models.ts.
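As a rough sketch, a registry entry in lib/ai/models.ts might carry the model id, its provider, and a reasoning flag. The interface and field names below are illustrative assumptions, not the actual shape used in the codebase:

```typescript
// Hypothetical sketch of a unified model registry entry;
// the real lib/ai/models.ts may define a different shape.
interface ModelDefinition {
  id: string;          // e.g. "deepseek-chat"
  provider: string;    // e.g. "deepseek"
  label: string;       // display name shown in the UI
  reasoning: boolean;  // whether the model exposes thinking/reasoning
}

const models: ModelDefinition[] = [
  { id: "deepseek-chat", provider: "deepseek", label: "DeepSeek Chat", reasoning: true },
];

// Look up a model definition by its id.
function getModel(id: string): ModelDefinition | undefined {
  return models.find((m) => m.id === id);
}
```

A central registry like this lets the rest of the app treat every provider's models uniformly — the UI lists entries, and routing code only needs the `provider` field.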
| Model | ID | Features |
| --- | --- | --- |
| DeepSeek Chat | `deepseek-chat` | Thinking/Reasoning |
DeepSeek Chat supports thinking/reasoning mode — the model shows its chain of thought before giving you the final answer. This is particularly useful for complex problem-solving and analysis tasks.
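As a minimal sketch of calling DeepSeek through the Vercel AI SDK directly (assuming the `@ai-sdk/deepseek` provider package is installed; in this codebase, requests instead go through `getModelInstance()` as described below):

```typescript
import { deepseek } from "@ai-sdk/deepseek";
import { streamText } from "ai";

// Streams a response from DeepSeek Chat.
// The provider reads DEEPSEEK_API_KEY from the environment.
const result = streamText({
  model: deepseek("deepseek-chat"),
  prompt: "Explain the trade-offs between optimistic and pessimistic locking.",
});

// Print the answer as it streams in.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```

This is a sketch under the stated assumptions, not the codebase's own call path — the unified registry exists precisely so app code never imports a provider package directly.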

Apps using DeepSeek

DeepSeek is integrated through Vercel AI SDK 6.0, with provider routing handled by lib/ai/ai-utils.ts.

Chat

Multi-provider chat app — DeepSeek is available as an LLM provider

How it works

The codebase uses Vercel AI SDK 6.0 with a unified model registry — no direct DeepSeek API calls needed. Here’s the typical flow for an AI request:
  1. You select a model from the unified registry
  2. The request goes through getModelInstance() in lib/ai/ai-utils.ts
  3. The provider is determined via getProviderFromModelId()
  4. The model is instantiated with customModel()
  5. The response is streamed back to you
  6. Results are stored in PostgreSQL
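The routing steps above (2–4) can be sketched as follows. The function names mirror those cited from lib/ai/ai-utils.ts, but the bodies here are simplified illustrations, and the non-DeepSeek provider prefixes are assumptions:

```typescript
// Simplified sketch of the provider-routing flow; the real
// lib/ai/ai-utils.ts implementation will differ in detail.
type Provider = "deepseek" | "openai" | "anthropic";

// Step 3: derive the provider from the model id (prefix matching
// is an assumed convention here).
function getProviderFromModelId(modelId: string): Provider {
  if (modelId.startsWith("deepseek")) return "deepseek";
  if (modelId.startsWith("gpt")) return "openai";
  return "anthropic";
}

// Step 4: wrap the id and provider into a model handle.
// In the real codebase, customModel() returns a Vercel AI SDK model instance.
function customModel(modelId: string): { provider: Provider; modelId: string } {
  return { provider: getProviderFromModelId(modelId), modelId };
}

// Step 2: the single entry point an API route calls.
function getModelInstance(modelId: string) {
  return customModel(modelId);
}
```

Centralizing routing in one entry point means adding a provider touches only the registry and these helpers, not every API route.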

Structure

Understand the full project structure of the codebase.