Connecting Locally Deployed Ollama Models to Dify

Dify supports connecting to the inference and embedding capabilities of large language models deployed with Ollama.

 

Quick Access

  1. Download Ollama
    See Ollama Installation and Configuration; for more details, refer to the Ollama Local Deployment tutorial.
  2. Run Ollama and Chat with Llama
    ollama run llama3.1
    

    After a successful start, Ollama runs an API service on local port 11434, which can be reached at http://localhost:11434 (a quick curl check of this API is sketched after these steps).
    Additional models are listed at https://ollama.ai/library.

  3. Accessing Ollama in Dify
    In Settings > Model Providers > Ollama, fill in:

    • Model Name: llama3.1
    • Base URL: http://<your-ollama-host>:11434
      Enter an address at which the Ollama service can be reached.
      If Dify is deployed with Docker, it is recommended to use the host's LAN IP address, for example http://192.168.1.100:11434, or the Docker host gateway IP, for example http://172.17.0.1:11434.

      Note: To find the LAN IP address:

      • On Linux/macOS, use the ip addr show or ifconfig command.
      • On Windows, use the ipconfig command to find similar addresses.
      • Typically, this address is displayed under the eth0 or wlan0 interface, depending on whether you are using a wired or wireless network.

      If Dify is deployed from source code on the same machine, fill in http://localhost:11434.

    • Model Type: Chat
    • Model context length: 4096
      The maximum context length of the model; if unsure, use the default value of 4096.
    • Upper bound for max tokens: 4096
      The maximum number of tokens the model may return; unless the model specifies otherwise, this can be set to the same value as the model context length.
    • Vision support: Yes
      Check this option if the model supports image understanding (multimodal), e.g. llava.

    Click "Save" to verify that the model is correct and can be used in your application.
    The Embedding model is accessed in a similar way to LLM, by changing the model type to Text Embedding.

  4. Using the Ollama Model
    Open the prompt orchestration page of the app you want to configure, select the llama3.1 model under the Ollama provider, configure the model parameters, and start using it.
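
Before wiring things up in Dify, you can sanity-check the Ollama API from step 2 directly with curl. This is a minimal sketch; it assumes the llama3.1 model has already been pulled by the ollama run command above.

    # List locally available models
    curl http://localhost:11434/api/tags
    # Send a simple chat request to llama3.1
    curl http://localhost:11434/api/chat -d '{
      "model": "llama3.1",
      "messages": [{ "role": "user", "content": "Hello" }],
      "stream": false
    }'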

 

FAQ

⚠️ If you are using Docker to deploy Dify and Ollama, you may encounter the following error.

httpconnectionpool(host=127.0.0.1, port=11434): max retries exceeded with url: /api/chat (Caused by NewConnectionError(': failed to establish a new connection: [Errno 111] Connection refused'))
httpconnectionpool(host=localhost, port=11434): max retries exceeded with url: /api/chat (Caused by NewConnectionError(': failed to establish a new connection: [Errno 111] Connection refused'))

This error occurs because the Docker container cannot reach the Ollama service: inside a container, localhost refers to the container itself, not to the host or to other containers. To resolve this, you need to expose the Ollama service on the network.
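
A quick way to confirm this is the problem is to check, on the Docker host, which address Ollama is actually listening on. The commands below are a diagnostic sketch; the platform-specific fixes follow.

    # Linux: 127.0.0.1:11434 means loopback only; 0.0.0.0:11434 or *:11434 means exposed
    ss -tlnp | grep 11434
    # macOS equivalent
    lsof -iTCP:11434 -sTCP:LISTEN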

Setting Environment Variables on a Mac

If Ollama is running as a macOS application, the environment variable should be set with launchctl:

  1. Set the environment variable by calling launchctl setenv:
    launchctl setenv OLLAMA_HOST "0.0.0.0"
    
  2. Restart the Ollama application.
  3. If the steps above do not work, use the following approach:
    The issue is that, inside Docker, the host must be reached via host.docker.internal; replace localhost with host.docker.internal in the Base URL and the service becomes reachable:

    http://host.docker.internal:11434
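
To confirm the change took effect on macOS, the checks below can help (a sketch; the LAN IP is a placeholder for your own). Ollama replies with "Ollama is running" when the address is reachable.

    launchctl getenv OLLAMA_HOST      # should print 0.0.0.0
    curl http://192.168.1.100:11434   # replace with your LAN IP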
    

Setting Environment Variables on Linux

If Ollama is running as a systemd service, set the environment variable with systemctl:

  1. Edit the systemd service by calling systemctl edit ollama.service. This will open an editor.
  2. For each environment variable, add an Environment line under the [Service] section:
    [Service]
    Environment="OLLAMA_HOST=0.0.0.0"
    
  3. Save and exit.
  4. Reload systemd and restart Ollama:
    systemctl daemon-reload
    systemctl restart ollama
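
Optionally, verify that the override is active and that Ollama is now bound to all interfaces (a sketch; exact output varies by distribution):

    systemctl show ollama --property=Environment   # should include OLLAMA_HOST=0.0.0.0
    ss -tlnp | grep 11434                          # should show 0.0.0.0:11434 or *:11434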
    

Setting Environment Variables on Windows

On Windows, Ollama inherits your user and system environment variables.

  1. First, quit Ollama by clicking its icon in the taskbar.
  2. Edit system environment variables from the Control Panel.
  3. Edit or create variables for your user account, such as OLLAMA_HOST, OLLAMA_MODELS, etc.
  4. Click OK/Apply to save.
  5. In a new terminal window, run ollama.
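
If you prefer a terminal over the Control Panel dialog, the same user-level variable can be set with setx; this is an equivalent sketch and only affects terminals opened afterwards.

    setx OLLAMA_HOST "0.0.0.0"   # persists a user-level environment variable
    # open a NEW terminal so the variable is picked up, then start the server:
    ollama serve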

How do I expose Ollama on my network?

Ollama binds to 127.0.0.1 on port 11434 by default. Change the bind address with the OLLAMA_HOST environment variable, for example:
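
The sketch below binds Ollama to all interfaces (optionally with a custom port) and then checks reachability from another machine on the LAN; the IP address is a placeholder for your own.

    OLLAMA_HOST=0.0.0.0:11434 ollama serve
    curl http://192.168.1.100:11434/api/tags   # run from another machine; lists available models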

