
HiOllama: a clean chat interface for interacting with native Ollama models

General Introduction

HiOllama is a user-friendly interface built with Python and Gradio for interacting with Ollama models. It provides a simple, intuitive web interface that supports real-time text generation and model management. Users can adjust parameters such as temperature and maximum token count, manage multiple Ollama models, and configure a custom server URL.


Function List

  • Simple and intuitive web interface
  • Real-time text generation
  • Adjustable parameters (temperature, maximum token count)
  • Model management features
  • Support for multiple Ollama models
  • Custom server URL configuration

 

Using Help

Installation steps

  1. Clone the repository:
    git clone https://github.com/smaranjitghose/HiOllama.git
    cd HiOllama
    
  2. Create and activate a virtual environment:
    • Windows:
      python -m venv env
      .\env\Scripts\activate
      
    • Linux/Mac:
      python3 -m venv env
      source env/bin/activate
      
  3. Install the required packages:
    pip install -r requirements.txt
    
  4. Install Ollama (if not already installed):
    • Linux/Mac:
      curl -fsSL https://ollama.ai/install.sh | sh
      
    • Windows:
      Install WSL2 first, then run the command above inside WSL.

Usage

  1. Start the Ollama service:
    ollama serve
    
  2. Run HiOllama:
    python main.py
    
  3. Open your browser and navigate to:
    http://localhost:7860
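Before opening the browser, you can check that the Ollama service is actually reachable. A minimal sketch using only the Python standard library, assuming Ollama's default REST endpoint `GET /api/tags` on port 11434 (the function name `ollama_is_running` is ours, not part of HiOllama):

```python
import json
import urllib.error
import urllib.request

def ollama_is_running(url: str = "http://localhost:11434",
                      timeout: float = 2.0) -> bool:
    """Return True if an Ollama server answers on /api/tags, False otherwise."""
    try:
        with urllib.request.urlopen(f"{url}/api/tags", timeout=timeout) as resp:
            # A healthy server returns a JSON object containing a "models" list.
            return "models" in json.load(resp)
    except (urllib.error.URLError, OSError, ValueError):
        return False

if __name__ == "__main__":
    print("Ollama reachable:", ollama_is_running())
```

If this prints `False`, start the service with `ollama serve` before running `main.py`.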
    

Quick Start Guide

  1. Select a model from the drop-down menu.
  2. Enter a prompt in the text area.
  3. Adjust the temperature and maximum number of tokens as needed.
  4. Click "Generate" to get the response.
  5. Use the model management option to pull new models.
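Under the hood, a generation request with these parameters maps onto Ollama's `POST /api/generate` endpoint, where the maximum number of tokens is passed as the `num_predict` option. A rough sketch with the standard library (the helper name `build_generate_request` is ours; adjust the URL for your server):

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str,
                           temperature: float = 0.7,
                           max_tokens: int = 256,
                           url: str = "http://localhost:11434") -> urllib.request.Request:
    """Build a non-streaming request for Ollama's /api/generate endpoint."""
    payload = {
        "model": model,
        "prompt": prompt,
        "stream": False,  # return the whole response as one JSON object
        "options": {
            "temperature": temperature,
            "num_predict": max_tokens,  # Ollama's name for the token limit
        },
    }
    return urllib.request.Request(
        f"{url}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With Ollama running, you could send it like this:
# with urllib.request.urlopen(build_generate_request("llama3", "Hello")) as resp:
#     print(json.load(resp)["response"])
```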

Configuration

The default settings can be modified in main.py:

DEFAULT_OLLAMA_URL = "http://localhost:11434"
DEFAULT_MODEL_NAME = "llama3"
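If you prefer not to edit the file for every environment, a common pattern is to let environment variables override these defaults. A sketch of that idea (the variable names `OLLAMA_URL` and `OLLAMA_MODEL` are our suggestion, not part of HiOllama):

```python
import os

DEFAULT_OLLAMA_URL = "http://localhost:11434"
DEFAULT_MODEL_NAME = "llama3"

# Environment variables win over the hard-coded defaults when set.
OLLAMA_URL = os.environ.get("OLLAMA_URL", DEFAULT_OLLAMA_URL)
MODEL_NAME = os.environ.get("OLLAMA_MODEL", DEFAULT_MODEL_NAME)
```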

Common Problems

  • Connection error: Make sure Ollama is running (ollama serve), check that the server URL is correct, and make sure port 11434 is accessible.
  • Model not found: Pull the model first with ollama pull model_name; check the available models with ollama list.
  • Port conflict: Change the port in main.py:
    app.launch(server_port=7860)  # change to another port
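Instead of hard-coding a replacement port, you can also let the operating system pick a free one. A small standard-library sketch (`find_free_port` is our helper, not HiOllama code):

```python
import socket

def find_free_port() -> int:
    """Ask the OS for an unused TCP port by binding to port 0."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

# e.g. in main.py:
# app.launch(server_port=find_free_port())
```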