
DeepClaude: A Chat Interface Fusing DeepSeek R1 Chained Reasoning and Claude Creativity

General Introduction

DeepClaude is a high-performance Large Language Model (LLM) inference API and chat interface that combines DeepSeek R1's chain-of-thought (CoT) reasoning with Anthropic Claude's creativity and code-generation capabilities. The project claims to significantly outperform OpenAI o1, DeepSeek R1, and Claude Sonnet 3.5, and provides a unified interface that leverages the strengths of both models while keeping you in full control of your API keys and data. DeepClaude features zero-latency responses, end-to-end security, high configurability, and an open-source code base. Users manage their own API keys, ensuring data privacy and security. Best of all, DeepClaude is completely free and open source.

DeepClaude: High-Performance LLM Inference API and Chat Interface Integrating R1 and Claude-1

DeepClaude uses R1 for inference and then lets Claude output the result
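The R1-then-Claude flow described above can be sketched in a few lines of Python. The `call_r1` and `call_claude` functions below are hypothetical stand-ins for the two upstream model APIs, not DeepClaude's real internals (the actual server implements this pipeline in Rust):

```python
# Hedged sketch of the two-stage pipeline: R1 produces a chain-of-thought,
# then Claude produces the final answer conditioned on that reasoning.
# Both model-calling functions are illustrative stubs.

def call_r1(prompt: str) -> str:
    """Stand-in for a DeepSeek R1 call that returns its reasoning trace."""
    return f"<thinking about: {prompt}>"

def call_claude(prompt: str, reasoning: str) -> str:
    """Stand-in for a Claude call that conditions on R1's reasoning."""
    return f"Answer to {prompt!r}, informed by {reasoning!r}"

def deepclaude(prompt: str) -> str:
    reasoning = call_r1(prompt)            # stage 1: chain-of-thought from R1
    return call_claude(prompt, reasoning)  # stage 2: final output from Claude

print(deepclaude("Explain BYOK"))
```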


 

DeepClaude: High Performance LLM Inference API with Chat Interface Integrating R1 and Claude-1

DeepClaude also offers a high-performance LLM inference API

 

Feature List

  • Zero-latency responses: Instant responses through a high-performance Rust API.
  • Private and secure: Local API key management ensures data privacy.
  • Highly configurable: Users can customize every aspect of the API and interface to suit their needs.
  • Open source: A free and open-source code base that users can contribute to, modify, and deploy.
  • Dual AI capabilities: Combines the creativity and code-generation power of Claude Sonnet 3.5 with the reasoning power of DeepSeek R1.
  • Managed BYOK API: Bring your own API keys, ensuring complete control.
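A BYOK request means your own provider keys travel in the request you send to your DeepClaude instance. A minimal sketch of building such a request body follows; the field names `deepseek_api_token` and `anthropic_api_token` are illustrative assumptions, not DeepClaude's documented schema, so check the project README for the real field names:

```python
import json

# Hedged sketch: construct a BYOK request body carrying the user's own
# provider keys. Field names here are assumptions for illustration only.

def build_byok_request(prompt: str, deepseek_key: str, anthropic_key: str) -> str:
    payload = {
        "prompt": prompt,
        "deepseek_api_token": deepseek_key,    # your own DeepSeek key
        "anthropic_api_token": anthropic_key,  # your own Anthropic key
    }
    return json.dumps(payload)

body = build_byok_request("Hello", "sk-deepseek-...", "sk-ant-...")
print(body)
```

Because the keys never leave the request you send to your own server, you keep full control over billing and revocation.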

 

Using Help

Installation process

  1. Prerequisites:
    • Rust 1.75 or higher
    • DeepSeek API key
    • Anthropic API key
  2. Clone the repository:
   git clone https://github.com/getAsterisk/deepclaude.git
   cd deepclaude
  3. Build the project:
   cargo build --release
  4. Configuration file: create a config.toml file in the project root:
   [server]
   host = "127.0.0.1"
   port = 3000

   [pricing]
   # configure pricing options
  5. Run the service:
   cargo run --release

Guidelines for use

  1. API usage:
    • Basic example:
     import requests

     url = "http://127.0.0.1:3000/api"
     payload = {
         "model": "claude",
         "prompt": "Hello, how can I help you today?"
     }
     response = requests.post(url, json=payload)
     print(response.json())

    • Streaming response example:
     import requests

     url = "http://127.0.0.1:3000/api/stream"
     payload = {
         "model": "claude",
         "prompt": "Tell me a story."
     }
     response = requests.post(url, json=payload, stream=True)
     for line in response.iter_lines():
         if line:
             print(line.decode('utf-8'))

  2. Self-hosting:
    • Configuration options: users can modify the options in config.toml to customize aspects of the API and interface.
  3. Security:
    • Local API key management: keeps API keys and data private.
    • End-to-end encryption: protects data in transit.
  4. Contributing:
    • Contribution guidelines: users can contribute code and improve the project by submitting pull requests or reporting issues.
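The streaming example in the usage guidelines prints each raw line as it arrives; to reconstruct the full reply you would typically accumulate the chunks. A minimal sketch follows, assuming each streamed line is a plain UTF-8 text fragment (a real SSE-style stream may need its framing, e.g. "data:" prefixes, stripped first, which is not shown here):

```python
def assemble_stream(lines):
    """Accumulate non-empty streamed chunks into one string.

    Assumes each chunk is a plain UTF-8 text fragment; adapt if the
    server's wire format wraps chunks in SSE framing.
    """
    parts = []
    for line in lines:
        if line:  # iter_lines() yields b"" for keep-alive blank lines
            parts.append(line.decode("utf-8"))
    return "".join(parts)

# Simulated chunks, as requests' iter_lines() would yield them:
chunks = [b"Once upon ", b"", b"a time..."]
print(assemble_stream(chunks))  # -> Once upon a time...
```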
May not be reproduced without permission: Chief AI Sharing Circle » DeepClaude: A Chat Interface Fusing DeepSeek R1 Chained Reasoning and Claude Creativity