Obsidian is the king of local knowledge bases, and the Copilot plugin (by Logan Yang) is its most powerful AI companion. Connecting DeepSeek V3 lets you leverage your entire vault for RAG (Retrieval-Augmented Generation) at a fraction of the cost of GPT-4.
Why DeepSeek + Obsidian?
- Context Master: DeepSeek V3's large context window is perfect for analyzing multiple notes or long documents within your vault.
- Privacy & Cost: DeepSeek is extremely affordable and privacy-friendly, making it ideal for personal knowledge management (PKM).
Configuration Steps
We use the OpenAI Compatible mode in the Copilot plugin.
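"OpenAI Compatible" here just means the plugin sends the same POST {Base URL}/chat/completions payload that OpenAI's API uses, and DeepSeek accepts it unchanged. A rough sketch of that request shape (field names follow the OpenAI chat spec; the exact payload Copilot builds may include additional options):

```typescript
// Shape of an OpenAI-compatible chat request -- DeepSeek accepts the same fields.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatCompletionRequest {
  model: string;          // e.g. "deepseek-chat"
  messages: ChatMessage[];
  temperature?: number;
  stream?: boolean;       // chat UIs typically stream responses
}

// Example payload sent to {Base URL}/chat/completions
const examplePayload: ChatCompletionRequest = {
  model: "deepseek-chat",
  messages: [{ role: "user", content: "Summarize my note on Zettelkasten." }],
  stream: true,
};

console.log(JSON.stringify(examplePayload, null, 2));
```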
Step 1: Install Plugin
- Go to Obsidian Settings -> Community Plugins -> Turn on community plugins.
- Browse and search for "Copilot" (by Logan Yang). Install and Enable it.
Step 2: Add Custom Model
- Open the Copilot plugin settings.
- Scroll to the "Models" section or select "Add Custom Model" from the dropdown menu in the sidebar chat.
- Fill in the details (a quick sanity check of these values follows this list):
  - Config Name: DeepSeek V3
  - Provider: Select OpenAI Compatible
  - Base URL: https://api.deepseek.com
  - API Key: Your DeepSeek key (sk-xxxx)
  - Model ID: deepseek-chat
  - Context Window: 64000 (or 128000)
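Before (or after) entering these values in Copilot, it helps to confirm the key and Model ID work outside Obsidian. A minimal sketch assuming Node 18+ (built-in fetch); /chat/completions is the path the plugin appends to the Base URL, and sk-xxxx is a placeholder for your real key:

```typescript
// Quick sanity check: call DeepSeek's OpenAI-compatible endpoint directly.
// Requires Node 18+ (global fetch). Replace sk-xxxx with your actual API key.
const BASE_URL = "https://api.deepseek.com";
const API_KEY = "sk-xxxx"; // placeholder -- never commit a real key

async function testDeepSeek(): Promise<void> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({
      model: "deepseek-chat", // same Model ID you enter in Copilot
      messages: [{ role: "user", content: "Say hello in five words." }],
    }),
  });

  if (!res.ok) {
    // 401 usually means a bad key; 404 usually means a wrong Base URL/path.
    throw new Error(`DeepSeek request failed: ${res.status} ${await res.text()}`);
  }

  const data = await res.json();
  console.log(data.choices[0].message.content);
}

testDeepSeek().catch(console.error);
```

If this prints a reply, the same Base URL, key, and Model ID will work inside the plugin.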
Step 3: Set as Default
- In the Copilot chat sidebar, click the model name at the top.
- Select your new DeepSeek V3 model to switch to it.
Pro Tip: Vault QA
Enable "Vault QA" mode in the chat interface. This allows DeepSeek to search through your local markdown notes to answer questions, effectively chatting with your second brain.
FAQ
Q: Connection failed?
A: Ensure the Base URL is exactly https://api.deepseek.com. Do NOT include /v1/chat/completions unless the plugin specifically asks for the full endpoint (most versions of Copilot auto-append it).
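To see why the trailing path breaks things, remember that the plugin builds the final URL by appending the endpoint itself. A small sketch (Node 18+; sk-xxxx is a placeholder); the GET /models route is DeepSeek's OpenAI-style model list as of this writing, and if it ever changes you can fall back to the chat call from Step 2:

```typescript
// Why the Base URL matters: the plugin appends the endpoint path itself.
const goodBase = "https://api.deepseek.com";
const badBase = "https://api.deepseek.com/v1/chat/completions";

console.log(`${goodBase}/chat/completions`); // -> valid endpoint
console.log(`${badBase}/chat/completions`);  // -> .../chat/completions/chat/completions (404)

// Optional: verify the key + Base URL pair via the model-listing endpoint.
async function listModels(): Promise<void> {
  const res = await fetch(`${goodBase}/models`, {
    headers: { Authorization: "Bearer sk-xxxx" }, // placeholder key
  });
  console.log(res.status, await res.text()); // expect a JSON list including "deepseek-chat"
}

listModels().catch(console.error);
```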
Q: Use Reasoning Model?
A: Use deepseek-reasoner as the Model ID if you want to use the R1 reasoning model for complex logic tasks.
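Note that deepseek-reasoner returns its chain of thought as a separate reasoning_content field alongside the final answer in content (per DeepSeek's API docs); depending on your Copilot version, the reasoning trace may or may not be displayed in chat. A hedged sketch of reading both fields when testing outside Obsidian (Node 18+, placeholder key):

```typescript
// deepseek-reasoner returns the chain of thought separately from the answer.
async function testReasoner(): Promise<void> {
  const res = await fetch("https://api.deepseek.com/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer sk-xxxx", // placeholder key
    },
    body: JSON.stringify({
      model: "deepseek-reasoner",
      messages: [{ role: "user", content: "Which is larger, 9.11 or 9.9?" }],
    }),
  });

  const data = await res.json();
  const msg = data.choices[0].message;
  console.log("Reasoning:", msg.reasoning_content); // chain-of-thought trace
  console.log("Answer:", msg.content);              // final answer shown in chat
}

testReasoner().catch(console.error);
```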

