General Introduction
UniAPI is an API forwarder compatible with the OpenAI protocol. Its core function is to manage the APIs of multiple large-model service providers, such as OpenAI, Azure OpenAI, and Claude, through a unified OpenAI-format interface, so developers can call models from different vendors with one interface and without frequent code changes. UniAPI supports model prioritization, a circuit breaker mechanism, and streaming output optimization to keep requests efficient and stable. It offers one-click deployment on Vercel, making it suitable for quickly building a personal or team API service station. The project is developed by GitHub user zhangtyzzz and is still being actively updated.
Function List
- Support for OpenAI and OpenAI protocol-compliant services, including Azure OpenAI, Claude, and more.
- Unify APIs from different vendors into OpenAI format to simplify the calling process.
- Supports model mapping, calling actual models from different vendors with uniform model names.
- Provides model prioritization based on the 72-hour success rate and first-token response time, so the best service is selected automatically.
- Built-in circuit breaker mechanism: when a service fails repeatedly, requests to it are automatically suspended to protect system stability.
- Optimizes streaming output by breaking large response chunks into smaller ones for smoother display.
- Supports custom API keys, Base URLs and model lists for flexible configuration.
- Deployed via Vercel, it provides an administration panel and secure authentication.
Using Help
Using UniAPI involves two parts: deployment and configuration. The following describes in detail how to install, configure, and operate it so that you can get started quickly.
Installation process
UniAPI supports two deployment methods: local runtime and Vercel one-click deployment. Vercel deployment is the main focus here, which is suitable for most users.
Vercel One-Click Deployment
- Access the deployment link
Click the official Vercel deployment address.
- Configure environment variables
Fill in the following variables on the Vercel deployment page:
  - `ADMIN_API_KEY`: administrator key for logging into the admin panel; must be set, e.g. `mysecretkey`.
  - `TEMP_API_KEY`: keys that are allowed to access the API; up to 2 can be set, e.g. `key1` and `key2`.
  - `REDIS_URL`: Redis connection address for persisting the configuration (optional).
  - `ENVIRONMENT`: set to `production` to disable the development-mode default key.
When the configuration is complete, click "Deploy".
- Get the deployment address
After a successful deployment, Vercel generates a URL such as `https://your-vercel-url.vercel.app`. Use `ADMIN_API_KEY` to log in to the admin panel.
Local running (optional)
- Preparing the environment
Make sure Python 3.8 or above is installed on your device. Check the version:
python --version
- Download the file
Visit https://github.com/zhangtyzzz/uni-api/releases and download the latest binary, e.g. `uni-api-linux-x86_64-0.0.99.pex`.
- Run the program
Execute in the terminal:
chmod +x uni-api-linux-x86_64-0.0.99.pex
./uni-api-linux-x86_64-0.0.99.pex
By default it listens on http://localhost:8000.
Configuring the API
- Log in to the administration panel (Vercel deployment)
Open the deployed URL and enter `ADMIN_API_KEY` to log in. The screen displays "Add Configuration" and "Configuration List".
- Add an API configuration
Click "Add Configuration" and fill in the following information:
  - Service provider: choose OpenAI, Claude, etc.
  - Base URL: the API address of the service provider, e.g. `https://api.openai.com/v1`
  - API key: the key obtained from the service provider, e.g. `sk-xxxx`
  - Model name: the actual model name or a mapping name, e.g. `gpt-3.5-turbo`
After saving, the configuration appears in the list.
- Model mapping
Add a mapping to the configuration, for example:
  - The unified name `gpt-4` maps to OpenAI's `gpt-4` and Claude's `claude-2`.
When a request uses `gpt-4`, the system automatically selects an available service.
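The mapping idea can be sketched in a few lines of Python. This is an illustrative model, not UniAPI's actual code: the `MODEL_MAP` table and `resolve` function are hypothetical names, assuming one unified name fans out to candidate (provider, actual model) pairs as described above.

```python
# Hypothetical sketch of UniAPI-style model mapping: one unified name
# fans out to candidate backends; names and structure are illustrative.
MODEL_MAP = {
    "gpt-4": [
        ("openai", "gpt-4"),
        ("claude", "claude-2"),
    ],
}

def resolve(model: str):
    """Return candidate (provider, actual_model) pairs for a unified
    model name, falling back to the name itself when unmapped."""
    return MODEL_MAP.get(model, [("default", model)])

print(resolve("gpt-4"))  # both mapped backends, in priority order
```

At request time, the forwarder would try these candidates in order of its own priority scoring, which is what makes a single `gpt-4` request transparently reach either vendor.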
Using the main functions
- Send Request
Test the API with curl:
curl -X POST https://your-vercel-url.vercel.app/v1/chat/completions \
  -H "Authorization: Bearer your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello"}], "stream": true}'
A streaming response indicates that the configuration succeeded.
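With `"stream": true`, an OpenAI-compatible endpoint sends the reply as Server-Sent Events, one `data:` line per chunk. A minimal Python sketch of parsing such lines on the client side (the function name is illustrative; the payload shape follows the standard OpenAI streaming format):

```python
import json

def parse_sse_chunk(line: str):
    """Parse one line of an OpenAI-style SSE stream.
    Returns the delta text, or None for non-data lines and [DONE]."""
    line = line.strip()
    if not line.startswith("data:"):
        return None  # comments / keep-alives
    payload = line[len("data:"):].strip()
    if payload == "[DONE]":
        return None  # end-of-stream sentinel
    event = json.loads(payload)
    return event["choices"][0]["delta"].get("content")

# Example chunk as an OpenAI-compatible endpoint would send it:
sample = 'data: {"choices": [{"delta": {"content": "Hi"}}]}'
print(parse_sse_chunk(sample))  # -> Hi
```

Accumulating the returned deltas reconstructs the full reply as it streams in.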
- Model selection
The system automatically selects the best service provider based on the success rate and first-token response time over the last 72 hours; no manual intervention is needed.
- Circuit breaker mechanism
A circuit breaker is triggered when a service fails repeatedly:
- 3 failures: 5-minute suspension.
- 4 failures: 10-minute suspension.
- 9 failures: 48-hour suspension.
During the suspension, the system switches to another service provider.
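The escalating thresholds above can be sketched as a small state machine. This is an illustrative model of the documented behavior, not UniAPI's implementation; the class and table names are hypothetical.

```python
import time

# Documented thresholds: consecutive failures -> suspension in seconds.
# (Illustrative sketch; UniAPI's actual escalation logic may differ.)
SUSPENSIONS = {3: 5 * 60, 4: 10 * 60, 9: 48 * 3600}

class CircuitBreaker:
    def __init__(self):
        self.failures = 0
        self.suspended_until = 0.0

    def record_failure(self, now=None):
        """Count a consecutive failure; suspend if a threshold is hit."""
        now = time.time() if now is None else now
        self.failures += 1
        pause = SUSPENSIONS.get(self.failures)
        if pause:
            self.suspended_until = now + pause

    def record_success(self):
        """A success resets the failure streak and lifts any suspension."""
        self.failures = 0
        self.suspended_until = 0.0

    def available(self, now=None):
        """True when the service may receive requests again."""
        now = time.time() if now is None else now
        return now >= self.suspended_until
```

While one provider's breaker is open (`available()` is False), the forwarder simply routes requests to the next candidate, which is why the suspension is invisible to callers.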
- Streaming Optimization
For models such as Gemini that return responses in large chunks, UniAPI automatically splits them into smaller chunks for output.
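The splitting itself is simple; a one-function sketch under the assumption that chunks are re-emitted as fixed-size slices (the size and function name are illustrative, not UniAPI's actual policy):

```python
def split_chunk(text: str, size: int = 20):
    """Split one large response chunk into smaller pieces so the
    client sees smoother incremental output instead of one big jump.
    (Illustrative sketch; UniAPI's chunking policy may differ.)"""
    return [text[i:i + size] for i in range(0, len(text), size)]

# A 45-character chunk becomes three smaller deltas: 20 + 20 + 5.
pieces = split_chunk("a" * 45)
print([len(p) for p in pieces])  # -> [20, 20, 5]
```

Because the slices concatenate back to the original text, the split is lossless; only the pacing of the stream changes.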
Frequently Asked Questions
- Request fails with 401: check whether the Authorization header contains the correct `Bearer your_api_key`.
- Model unavailable: ensure that the configured model name matches the one provided by the service provider, or check the mapping settings.
- Inaccessible after deployment: confirm that `ENVIRONMENT` is set to `production`, and check the Vercel logs.
With these steps, you can easily deploy and use UniAPI, which is easy to configure, powerful, and ideal for scenarios where you need to manage multi-vendor APIs.
Application Scenarios
- Developers testing multi-vendor models
You want to compare the output of OpenAI and Claude; UniAPI lets you call both through a single interface and saves time.
- Teams building stable API services
A team needs a reliable API station to support its business; UniAPI's circuit breaker and prioritization mechanisms keep the service uninterrupted.
- Education and research
Students can use UniAPI to study the responsiveness and stability of different models, suitable for academic experiments.
QA
- What service providers does UniAPI support?
OpenAI, Azure OpenAI, Claude, and other OpenAI protocol-compliant services; see GitHub for updates.
- What happens when the circuit breaker triggers?
The system automatically switches to other service providers and retries after a cool-down period; no manual action is required.
- What are the advantages of streaming output?
It splits large response chunks into smaller pieces for a smoother user experience, especially in real-time chat scenarios.