General Introduction
Harbor is a containerized LLM toolset focused on simplifying the deployment and management of local AI development environments. Through a concise command-line interface (CLI) and companion applications, it lets developers start and manage all AI service components, including LLM backends, API layers, and front-end interfaces, with a single command. As an open-source project, Harbor is particularly suitable for developers who need to quickly build and experiment with LLM applications: it supports mainstream AI models and services, and provides flexible configuration options and a complete toolchain, so developers can focus on application development rather than environment setup. Harbor is released under the Apache 2.0 license, has active community support, and has earned more than 770 stars on GitHub.
Function List
- One-click deployment: start a complete LLM service environment with a single command
- Container management: integrates Docker and Docker Compose for service orchestration
- Multiple backend support: compatible with many LLM engines and model formats (GGUF, SafeTensors, etc.)
- Service integration: pre-configured API services and front-end interfaces that work together
- Development tools: provides a complete local development toolchain
- Configuration flexibility: supports customizing service components and configuration options
- SSL certificates: built-in Certbot support for easy HTTPS configuration
- Environment migration: supports configuration export for easy migration to production
- Monitoring and management: service status monitoring and log viewing
- Version control: manages different versions of AI service components
Using Help
1. Environment preparation
1.1 System requirements
- Operating system: Linux, macOS, or Windows (WSL2)
- Docker Engine 20.10+
- Docker Compose 1.18.0+
- Node.js 16+ (optional, for npm install method)
1.2 Installing Harbor
# Option 1: install via npm
npm install -g @avlab/harbor
# Option 2: install via curl
curl -sfL https://get.harbor.ai | sh
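After installing by either method, it is worth confirming that the prerequisites and the Harbor CLI are actually on your PATH before going further. This is a minimal sketch; `check_cmd` is a helper name introduced here, not part of Harbor.

```bash
#!/bin/sh
# check_cmd: print "ok: NAME" if a command is on PATH, "missing: NAME" otherwise.
check_cmd() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "missing: $1"
  fi
}

# Verify the prerequisites and the Harbor CLI itself.
check_cmd docker
check_cmd harbor
```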
2. Basic use
2.1 Starting services
# Initializing the Harbor Environment
harbor init
# Start all services
harbor up
# Check service status
harbor ps
2.2 Service configuration
# Configure the model path
harbor config set models.path /path/to/models
# Enable specific services
harbor enable chatui
harbor enable api
# Disable a service
harbor disable <service-name>
3. Advanced functions
3.1 SSL Certificate Configuration
# Setting environment variables
export NGINX_SSL_CERT_FILENAME=fullchain.pem
export NGINX_SSL_CERT_KEY_FILENAME=privkey.pem
export CERTBOT_DOMAIN=your_domain.com
export CERTBOT_EMAIL=your@email.com
# Getting a certificate
harbor ssl setup
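The exports and setup command above can be collected into one reusable script. The variable names come from the section above; `valid_domain` and `setup_ssl` are hypothetical helper names added here as a sketch.

```bash
#!/bin/sh
# valid_domain: rough sanity check that a value looks like a hostname
# (letters, digits, dots, hyphens; no spaces). Returns 0 on match.
valid_domain() {
  echo "$1" | grep -Eq '^[A-Za-z0-9]([A-Za-z0-9.-]*[A-Za-z0-9])?$'
}

# setup_ssl DOMAIN EMAIL: export the certificate variables, then run
# `harbor ssl setup` if the CLI is available.
setup_ssl() {
  domain="$1"; email="$2"
  valid_domain "$domain" || { echo "invalid domain: $domain" >&2; return 1; }
  export NGINX_SSL_CERT_FILENAME=fullchain.pem
  export NGINX_SSL_CERT_KEY_FILENAME=privkey.pem
  export CERTBOT_DOMAIN="$domain"
  export CERTBOT_EMAIL="$email"
  if command -v harbor >/dev/null 2>&1; then
    harbor ssl setup
  else
    echo "harbor CLI not found; environment variables prepared only" >&2
  fi
}
```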
3.2 Customized configuration
# Export Configuration
harbor eject
# Modify configuration file
vim harbor.yaml
# Start up with custom configuration
harbor up -c custom-config.yaml
4. Common workflows
4.1 Deploying a new service
- Check the list of services:
harbor list
- Enable the required services:
harbor enable <service-name>
- Configure the service parameters:
harbor config set
- Start the service:
harbor up
- Verify the status of the service:
harbor ps
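The five steps above can be sketched as a single script. `deploy_service` is a hypothetical wrapper name introduced here; every command it runs is taken from the list above.

```bash
#!/bin/sh
# deploy_service: enable, optionally configure, start, and verify one service.
# Usage: deploy_service <service-name> [config-key config-value]
deploy_service() {
  svc="$1"; key="$2"; val="$3"
  [ -n "$svc" ] || { echo "usage: deploy_service <service-name> [key value]" >&2; return 1; }
  if ! command -v harbor >/dev/null 2>&1; then
    echo "harbor CLI not found" >&2
    return 1
  fi
  harbor enable "$svc"
  [ -n "$key" ] && harbor config set "$key" "$val"
  harbor up "$svc"
  harbor ps
}

# Typical usage:
#   deploy_service webui webui.port 8080
```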
4.2 Troubleshooting
# Viewing Service Logs
harbor logs <service-name>
# Check service status
harbor status
# Restart the service
harbor restart
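When skimming logs during troubleshooting, it can help to count error lines first. `count_errors` is a helper introduced here; it reads from stdin, so its input can be piped from `harbor logs`.

```bash
#!/bin/sh
# count_errors: count lines on stdin containing "error" (case-insensitive).
count_errors() {
  grep -ci 'error' || true   # grep exits 1 when nothing matches; still print 0
}

# Typical usage (requires a running Harbor install):
#   harbor logs webui | count_errors
```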
5. Best practices
- Regular backup of configuration files
- Managing Custom Configurations with Version Control
- Monitor service resource utilization
- Keep Harbor and related components up to date
- Accelerate the deployment process with project presets
Installable AI services
User interfaces
Open WebUI ⦁︎ ComfyUI ⦁︎ LibreChat ⦁︎ HuggingFace ChatUI ⦁︎ Lobe Chat ⦁︎ Hollama ⦁︎ parllama ⦁︎ BionicGPT ⦁︎ AnythingLLM ⦁︎ Chat Nio
Backend services
Ollama ⦁︎ llama.cpp ⦁︎ vLLM ⦁︎ TabbyAPI ⦁︎ Aphrodite Engine ⦁︎ mistral.rs ⦁︎ openedai-speech ⦁︎ faster-whisper-server ⦁︎ Parler ⦁︎ text-generation-inference ⦁︎ LMDeploy ⦁︎ AirLLM ⦁︎ SGLang ⦁︎ KTransformers ⦁︎ Nexa SDK
Extension Tools
Harbor Bench ⦁︎ Harbor Boost ⦁︎ SearXNG ⦁︎ Perplexica ⦁︎ Dify ⦁︎ Plandex ⦁︎ LiteLLM ⦁︎ LangFuse ⦁︎ Open Interpreter ⦁︎ cloudflared ⦁︎ cmdh ⦁︎ fabric ⦁︎ txtai RAG ⦁︎ TextGrad ⦁︎ Aider ⦁︎ aichat ⦁︎ omnichain ⦁︎ lm-evaluation-harness ⦁︎ JupyterLab ⦁︎ ol1 ⦁︎ OpenHands ⦁︎ LitLytics ⦁︎ Repopack ⦁︎ n8n ⦁︎ Bolt.new ⦁︎ Open WebUI Pipelines ⦁︎ Qdrant ⦁︎ K6 ⦁︎ Promptfoo ⦁︎ Webtop ⦁︎ OmniParser ⦁︎ Flowise ⦁︎ Langflow ⦁︎ OptiLLM
See also the Service Documentation for a brief overview of each service.
Detailed steps for installing Open WebUI with Harbor
1. Prerequisites
- Ensure that Docker and Docker Compose are installed
- Ensure that the Harbor CLI is properly installed
- Ensure that the system meets the basic requirements (8GB or more RAM recommended)
2. Initializing the Harbor environment
# Initializing the Harbor Environment
harbor init
# Verifying the Harbor Environment
harbor doctor
3. Installation and configuration of Open WebUI
3.1 Enabling WebUI Services
# Enable Open WebUI Service
harbor enable webui
3.2 Configuration of basic parameters (optional)
# Pin the WebUI to a specific version, if needed
harbor webui version <tag>
# Configure the WebUI port (default is 8080)
harbor config set webui.port <port>
4. Starting the services
# Start all enabled services, including WebUI
harbor up
# or start only WebUI services
harbor up webui
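After `harbor up webui`, the container may take a few seconds before it answers HTTP requests. A small polling loop can confirm readiness; this is a sketch, where `wait_for_url` is a name introduced here and 8080 is the default port from step 3.

```bash
#!/bin/sh
# wait_for_url URL [ATTEMPTS]: poll URL once per second until it answers
# or ATTEMPTS runs out. Returns 0 on success, 1 on timeout.
wait_for_url() {
  url="$1"; attempts="${2:-30}"
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if curl -fsS -o /dev/null --max-time 2 "$url" 2>/dev/null; then
      echo "ready: $url"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out waiting for $url" >&2
  return 1
}

# Typical usage after starting the service:
#   wait_for_url http://localhost:8080 30
```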
5. Verifying the installation
- Open http://localhost:8080 (or the port you configured) in your browser
- Check the status of the service:
harbor ps
6. Common management commands
Check Service Status
# Viewing all running services
harbor ps
# View WebUI logs
harbor logs webui
Service management
# Stop WebUI service
harbor stop webui
# Restart WebUI service
harbor restart webui
# Update WebUI version
harbor webui version latest
harbor restart webui
7. Integrating Ollama (optional)
To connect WebUI to Ollama:
# Enable Ollama service
harbor enable ollama
# Restart the service
harbor restart
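Once both services are restarted, you can confirm that the Ollama API is reachable: Ollama listens on port 11434 by default and exposes an `/api/tags` endpoint listing pulled models. `ollama_url` is a helper name introduced here.

```bash
#!/bin/sh
# ollama_url [HOST] [PORT]: build the tags-endpoint URL for an Ollama instance.
ollama_url() {
  host="${1:-localhost}"; port="${2:-11434}"
  echo "http://$host:$port/api/tags"
}

# Typical usage (requires a running Ollama container):
#   curl -fsS "$(ollama_url)"
```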
8. Troubleshooting common problems
Checking service health status
harbor doctor
View specific error messages
harbor logs webui
Port conflict resolution
If port 8080 is occupied:
# Modifying the WebUI port
harbor config set webui.port 8081
harbor restart webui
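If you would rather search for a free port programmatically before running `harbor config set webui.port`, a sketch like the following can help. `first_free_port` is a name introduced here, and the `ss`-based check is a best-effort assumption: it falls back to treating the port as free when `ss` is unavailable.

```bash
#!/bin/sh
# port_in_use PORT: best-effort check whether something is listening locally.
port_in_use() {
  port="$1"
  if command -v ss >/dev/null 2>&1; then
    ss -ltn 2>/dev/null | grep -q ":$port "
  else
    return 1  # cannot check; assume free
  fi
}

# first_free_port BASE: print the first port >= BASE that appears free.
first_free_port() {
  p="${1:-8080}"
  while port_in_use "$p"; do p=$((p + 1)); done
  echo "$p"
}

# Typical usage:
#   harbor config set webui.port "$(first_free_port 8080)"
```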
9. Configuration file location
- Master Configuration File:
~/.harbor/.env
- WebUI Configuration:
~/.harbor/open-webui/
10. Backup recommendations
# Export the current configuration
harbor eject > harbor-backup.yaml
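To keep multiple backups apart, you can date-stamp the exported file. `backup_name` is a helper name introduced here; the export itself uses the `harbor eject` command shown above.

```bash
#!/bin/sh
# backup_name: build a dated backup filename, e.g. harbor-backup-20250101.yaml
backup_name() {
  echo "harbor-backup-$(date +%Y%m%d).yaml"
}

# Export the current configuration to a dated file (requires the harbor CLI).
if command -v harbor >/dev/null 2>&1; then
  harbor eject > "$(backup_name)"
fi
```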
Caveats:
- Ensure that the system has sufficient resources to run the service
- The first startup may take some time while images are downloaded
- If you encounter permission issues, check the Docker permission settings
- Regular backup of configuration files is recommended
- Keep Harbor and related services updated to the latest version
Getting Started:
1. After completing the installation, open your browser and visit http://localhost:8080.
2. On your first visit, you will be prompted to complete basic setup.
3. You can then start using Open WebUI for AI conversations.
If you need help, you can use:
```bash
harbor help webui
```