
Private Deployment of DeepSeek + Dify: Building a Secure and Controllable Local AI Assistant System

Summary

DeepSeek is a groundbreaking open-source large language model whose advanced algorithmic architecture and reflective chain-of-thought reasoning deliver a new level of AI dialog interaction. With a private deployment, you keep full control over data security and usage security, and you can flexibly adjust the deployment scheme and implement a customized system with ease.

Dify, likewise open source, is an AI application development platform that offers a complete private deployment solution. By seamlessly integrating a locally deployed DeepSeek service into the Dify platform, organizations can build powerful AI applications within their own server environment while keeping data private.


The private deployment option offers the following advantages:

  • Superior performance: a dialog interaction experience comparable to commercial models
  • Environment isolation: fully offline operation, eliminating the risk of data leakage
  • Data control: full ownership of data assets to meet compliance requirements

 

Prerequisites

Hardware Environment:

  • CPU ≥ 2 cores
  • VRAM / RAM ≥ 16 GiB (recommended)

Software environment:

  • Docker
  • Docker Compose
  • Ollama
  • Dify Community Edition

Starting Deployment

1. Install Ollama

Ollama is a cross-platform large-model management client (macOS, Windows, Linux) designed to deploy large language models (LLMs) such as DeepSeek, Llama, and Mistral seamlessly. Ollama offers one-click model deployment, and all usage data is stored locally on your own machine, providing full data privacy and security.

Visit the Ollama website and follow the prompts to download and install the Ollama client. After installation, running the ollama -v command will print the installed version number.

➜ ~ ollama -v
ollama version is 0.5.5
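
If you prefer installing from the terminal on Linux, Ollama also publishes a one-line install script (shown as documented at the time of writing; verify the current command on the official site before running):

# Official Linux install script from ollama.com
curl -fsSL https://ollama.com/install.sh | sh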

Select a DeepSeek model size appropriate to your actual environment; the 7B model is recommended for a first installation.


Run the command ollama run deepseek-r1:7b to download and launch the DeepSeek R1 model.

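Once the model is running, you can sanity-check it over Ollama's local HTTP API before wiring it into Dify. The sketch below assumes Ollama is listening on its default address; adjust the host if yours differs:

# Query the local model via Ollama's /api/generate endpoint
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:7b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

A JSON reply containing a response field confirms the model is serving requests.
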

2. Install Dify Community Edition

Visit the Dify GitHub project page and run the following commands to clone the repository and complete the installation:

git clone https://github.com/langgenius/dify.git
cd dify/docker
cp .env.example .env
docker compose up -d # If you are using Docker Compose V1, run: docker-compose up -d

After running the commands, you should see the status and port mappings of all containers. For detailed instructions, please refer to the Docker Compose Deployment documentation.

Dify Community Edition uses port 80 by default; visit http://your_server_ip to access your private Dify platform.
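
Before opening the browser, you can confirm the deployment by listing the containers (run from the dify/docker directory):

# All services should report a running (or healthy) state
docker compose ps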

3. Connecting DeepSeek to Dify

Click your avatar in the top-right corner of the Dify platform → Settings → Model Provider, select Ollama, and tap Add Model.


Within Model Provider, the DeepSeek entry corresponds to the online API service, while a locally deployed DeepSeek model corresponds to the Ollama client. Please ensure that the DeepSeek model has been successfully deployed via the Ollama client, as detailed in the deployment instructions above.

Select the LLM model type.

  • Model Name: fill in the name of the deployed model. The model deployed above is deepseek-r1 7b, so enter deepseek-r1:7b.
  • Base URL: the address where the Ollama client is reachable, usually http://your_server_ip:11434. If you run into connection problems, see Common Problems below; a quick connectivity check is also sketched after this list.
  • Leave the other options at their default values. According to the description of the DeepSeek model, the maximum generation length is 32,768 tokens.
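
Before saving, it is worth confirming that the Base URL you entered actually answers. Ollama's root endpoint returns a short status string, so a quick check looks like this (substitute the host you configured above):

# Should print: Ollama is running
curl http://your_server_ip:11434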


 

Building AI Applications

DeepSeek AI Chatbot (simple application)

  • Tap "Create a Blank App" on the left side of the Dify platform homepage, select the "Chat Assistant" type of app and name it simply.


  • In the upper-right corner of the application, select the deepseek-r1:7b model provided by the Ollama framework.


  • Verify that the AI application works by entering content in the preview dialog box. If a response is generated, the AI application build is complete.


  • Tap the Publish button at the top right of the app to get a link to the AI application, which you can share with others or embed in another website. Published apps can also be called via API, as sketched below.
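
The sketch below assumes an API key created from the app's API Access page; the key value and user identifier are placeholders:

# Send a chat message to the published app through Dify's REST API
curl -X POST 'http://your_server_ip/v1/chat-messages' \
  --header 'Authorization: Bearer YOUR_APP_API_KEY' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "inputs": {},
    "query": "Hello!",
    "response_mode": "blocking",
    "user": "demo-user"
  }'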

DeepSeek AI Chatflow / Workflow (advanced application)

Chatflow / Workflow apps can help you build AI applications with more complex functionality, such as document recognition, image recognition, and speech recognition. For a detailed description, please refer to the Workflow documentation.

  • Tap "Create a Blank App" on the left side of the Dify platform homepage, select a "Chatflow" or "Workflow" type app and name it simply.


  • Add an LLM node, select the deepseek-r1:7b model within the Ollama framework, and insert the {{#sys.query#}} variable into the system prompt to connect to the Start node.


  • Add an End node to complete the configuration. Enter content in the preview box to test; if a response is generated, the AI application build is complete.


 

Common Problems

1. Connection errors during Docker deployment

When deploying Dify and Ollama with Docker, the following errors may be encountered:

httpconnectionpool(host=127.0.0.1, port=11434): max retries exceeded with url: /cpi/chat
(Caused by NewConnectionError('<urllib3.connection.HTTPConnection>: Failed to establish a new connection: [Errno 111] Connection refused'))
httpconnectionpool(host=localhost, port=11434): max retries exceeded with url: /cpi/chat
(Caused by NewConnectionError('<urllib3.connection.HTTPConnection>: Failed to establish a new connection: [Errno 111] Connection refused'))

Cause of error: the Ollama service is not reachable from inside the Docker container. localhost usually points to the container itself, not to the host machine or another container. To resolve this issue, you need to expose the Ollama service on the network.

macOS environment configuration method:

If Ollama is running as a macOS application, set the environment variable using launchctl:

  1. Call launchctl setenv to set the environment variable:
launchctl setenv OLLAMA_HOST "0.0.0.0"
  2. Restart the Ollama application.
  3. If the above steps do not work, use the following method:

The problem is that inside Docker, localhost refers to the container itself; use host.docker.internal to reach the Docker host. Replacing localhost with host.docker.internal makes the service take effect:

http://host.docker.internal:11434
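
To confirm reachability from inside a Dify container, one option is the container's own Python interpreter, since curl may not be installed in the image. The container name docker-api-1 is an assumption; list yours with docker compose ps:

# Should print: Ollama is running (container name is an assumption)
docker exec -it docker-api-1 python -c "import urllib.request; print(urllib.request.urlopen('http://host.docker.internal:11434').read().decode())"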

Linux environment configuration method:

If Ollama is running as a systemd service, set the environment variable using systemctl:

  1. Run systemctl edit ollama.service to edit the systemd service. This will open an editor.
  2. For each environment variable, add an Environment line under the [Service] section:
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
  3. Save and exit.
  4. Reload systemd and restart Ollama:
systemctl daemon-reload
systemctl restart ollama

Windows environment configuration method:

On Windows, Ollama inherits your user and system environment variables.

  1. First quit Ollama by clicking its icon in the taskbar.
  2. Edit the system environment variables from the Control Panel.
  3. Edit or create new variables for your user account, such as OLLAMA_HOST, OLLAMA_MODELS, etc. (a command-line alternative is shown after this list).
  4. Click OK / Apply to save.
  5. Run ollama from a new terminal window.
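
As a command-line alternative to the Control Panel, you can persist a user environment variable from a cmd.exe terminal and then restart Ollama:

:: Persist OLLAMA_HOST for the current user (takes effect in new processes)
setx OLLAMA_HOST "0.0.0.0"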

2. How do I change the Ollama service address and port number?

Ollama binds to 127.0.0.1 on port 11434 by default. You can change the bind address and port via the OLLAMA_HOST environment variable.
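
For example, to bind Ollama to all interfaces on a custom port when launching it manually (a minimal sketch for a Linux/macOS shell; port 8080 is an arbitrary choice):

# Bind to all interfaces on port 8080, then start the server
export OLLAMA_HOST=0.0.0.0:8080
ollama serve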
