General Introduction
AkashChat is a decentralized, cloud-based AI chat platform developed and powered by Akash Network. It uses the high-performance computing power of NVIDIA GPUs to run multiple open-source large language models (e.g., QwQ-32B, Llama 3.3 70B, Llama 3.1 405B, DeepSeek R1 671B) and gives users a fast, free, and privacy-friendly conversation experience. It can be used without registering an account, and chat logs are stored only in the user's local browser to keep data secure. Built on the Akash Supercloud's decentralized architecture, AkashChat breaks the limitations of traditional centralized cloud services: it responds quickly and supports switching between multiple AI models, making it well suited to anyone who needs an efficient conversation tool.
The second recommendation is AkashChat, which provides login-free access to a variety of large inference models for chatting and offers a free API key. It is very friendly to anyone who wants to use large models from a client application but does not want to pay for an API (it even provides the embedding model BAAI-bge-large-en-v1-5 free of charge). The combination of Page Assist + AkashChat is very lightweight for daily use:
- Akash Chat API endpoint: https://chatapi.akash.network/
- Configure the key in Page Assist (a minimal sketch of calling the API directly follows this list).
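If you prefer to call the API directly rather than go through Page Assist, the sketch below shows one way to do it with plain fetch (Node 18+ or a browser). It assumes the service is OpenAI-compatible and exposed under an /api/v1 path, and that your free key is stored in an AKASH_API_KEY environment variable; neither detail is confirmed above, so check the API page for the exact base URL and model names.

```typescript
// Minimal sketch of a direct chat request to the Akash Chat API.
// Assumptions: OpenAI-compatible /chat/completions route under /api/v1,
// and a free key stored in the AKASH_API_KEY environment variable.
const API_BASE = "https://chatapi.akash.network/api/v1";
const API_KEY = process.env.AKASH_API_KEY ?? "your-free-key";

async function chat(prompt: string): Promise<string> {
  const res = await fetch(`${API_BASE}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({
      model: "Meta-Llama-3-3-70B-Instruct", // any model from the free list at the end
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}: ${await res.text()}`);
  const data = await res.json();
  return data.choices[0].message.content; // OpenAI-style response shape
}

chat("Summarize the Akash Network in one sentence.").then(console.log);
```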
Feature List
- Multi-model selection: Supports QwQ-32B, Llama 3.3 70B, DeepSeek R1 671B, Llama 3.1 405B, AkashGen (an image generation model), and other open-source AI models; users can switch between them as needed (see the model-listing sketch after this list).
- High-performance conversations: Responses reach up to 27 tokens/second on NVIDIA H100 and A100 GPUs.
- Privacy: Chat logs are stored locally rather than uploaded to the cloud, keeping user data secure.
- No registration required: Just open the web page; no account or login is needed.
- Open-source support: The code is fully open source, and users can deploy it themselves or contribute features.
- User-friendly interface: A modern chat interface that is intuitive for both novice and professional users.
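To see which models the API currently serves without opening the web UI, a small request like the one below can help. It assumes an OpenAI-style GET /models endpoint under the same base URL as above, which the text does not confirm, so treat the path as an assumption.

```typescript
// Sketch: query the model list from the API (endpoint path is an assumption).
const API_BASE = "https://chatapi.akash.network/api/v1";

async function listModels(apiKey: string): Promise<string[]> {
  const res = await fetch(`${API_BASE}/models`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const data = await res.json();
  // OpenAI-style shape: { data: [{ id: "Meta-Llama-3-3-70B-Instruct" }, ...] }
  return data.data.map((m: { id: string }) => m.id);
}

listModels(process.env.AKASH_API_KEY ?? "").then((ids) => console.log(ids.join("\n")));
```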
Usage Guide
How to get started with AkashChat
AkashChat does not require any software to be installed; simply visit https://chat.akash.network/ in your browser. The detailed instructions below will help you get started quickly and take full advantage of its features.
Accessing the website
- Open any modern browser (e.g. Chrome, Firefox, Edge).
- Type https://chat.akash.network/ in the address bar and press Enter.
- Once the page has loaded, you will see a clean, simple chat interface; no login or registration is required, so you can start chatting right away.
Selecting an AI model
AkashChat supports a variety of open-source AI models, and users can choose the one that fits their needs. The steps are as follows:
- At the top of the chat interface or in the sidebar (the exact location may change depending on the latest interface design), find the "Model Selection" drop-down menu.
- Clicking on the drop-down menu displays a list of currently supported models, for example:
  - Llama 3.1 405B: Suitable for complex question answering, with strong overall performance.
  - Mistral-7B: A lightweight model that responds quickly and is suitable for everyday conversation.
  - QwQ-32B: A recent high-performance model that outperforms some much larger models.
- Click the name of the model you want and the system switches automatically; the switch usually takes only a few seconds.
Enter a question and get an answer
- In the text input box on the chat screen, type the question or request you want to send, for example: "What's the weather like today?" or "Write me a short essay."
- Press Enter or click the Send button (usually an arrow icon).
- The AI generates a response within a few seconds and displays it in the dialog box. Response speed varies with the model and network conditions and can reach up to about 27 tokens/second; the streaming sketch below shows how tokens arrive incrementally when calling the API.
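Because replies are generated token by token, streaming is the natural way to consume them programmatically. The sketch below assumes the API honors the OpenAI-style stream: true option and returns Server-Sent Events; the endpoint, key handling, and model name are the same assumptions as in the earlier sketch.

```typescript
// Sketch: stream a reply token by token (assumes OpenAI-style SSE streaming).
const API_BASE = "https://chatapi.akash.network/api/v1";

async function streamChat(prompt: string): Promise<void> {
  const res = await fetch(`${API_BASE}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.AKASH_API_KEY}`,
    },
    body: JSON.stringify({
      model: "DeepSeek-R1-Distill-Llama-70B",
      messages: [{ role: "user", content: prompt }],
      stream: true,
    }),
  });
  if (!res.ok || !res.body) throw new Error(`HTTP ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffered = "";
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    buffered += decoder.decode(value, { stream: true });
    const lines = buffered.split("\n");
    buffered = lines.pop() ?? ""; // keep any partial line for the next chunk
    for (const line of lines) {
      const payload = line.replace(/^data: /, "").trim();
      if (!payload || payload === "[DONE]") continue;
      const delta = JSON.parse(payload).choices?.[0]?.delta?.content ?? "";
      process.stdout.write(delta); // print tokens as they arrive
    }
  }
}

streamChat("Explain quantum mechanics in three sentences.");
```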
Manage Chat Logs
AkashChat's privacy design is one of its highlights: chat logs are not uploaded to a server but stored locally in your browser. Here's how to manage them:
- View history: Each time you open the page, previous conversations are loaded automatically (provided you are using the same browser and have not cleared its cache).
- Clear records: To clear your chat history, clear the cache in your browser settings, or simply browse in a private/incognito window so that no records are kept (see the local-storage sketch after this list).
- Caveat: Previous chats cannot be synchronized after you change devices or browsers, because the data is not stored in the cloud.
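For the curious, the locally stored history can be inspected straight from the browser's developer console on the AkashChat page. The snippet below is only a hypothetical illustration: the actual localStorage key names used by chat.akash.network are not documented here, so the "chat" filter is an assumption.

```typescript
// Hypothetical sketch: inspect and selectively clear locally stored chat data.
// Run in the browser console on chat.akash.network; key names are assumptions.
function listLocalChatKeys(): string[] {
  const keys: string[] = [];
  for (let i = 0; i < localStorage.length; i++) {
    const key = localStorage.key(i);
    if (key && key.toLowerCase().includes("chat")) keys.push(key);
  }
  return keys;
}

function clearLocalChatData(): void {
  for (const key of listLocalChatKeys()) {
    localStorage.removeItem(key); // removes only matching entries, not the whole cache
  }
}

console.log(listLocalChatKeys());
// clearLocalChatData(); // uncomment to actually delete the stored conversations
```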
Open source deployment (advanced users)
For users with a technical background, AkashChat provides open source code that can be deployed locally or on the Akash Network itself. The steps are as follows:
- Get Code: Visit the GitHub repository (https://github.com/akash-network/akash-chat), click "Clone" or "Download ZIP" to download the source code.
- Install dependencies:
  - Make sure Node.js and npm are installed locally.
  - In a terminal, go to the project folder and run npm i to install the required dependencies.
- Configure the environment:
  - In the project root directory, create a .env.local file.
  - Enter the following configuration (adjust to your actual requirements; a sketch of how these values might be read appears after this step list):
    - DEFAULT_MODEL=mistral
    - DEFAULT_SYSTEM_PROMPT=You are a helpful assistant with accurate information.
    - API_KEY=your key
    - API_HOST=your API endpoint (e.g., an Ollama deployment address)
- Run the project:
  - In the terminal, run npm run dev to start the local development server.
  - Open your browser and visit http://localhost:3000 to use the self-deployed version.
- Deploy to Akash:
  - Deploy the project to the Akash Supercloud using the Akash Provider Console; refer to the official Akash documentation (https://akash.network/) for detailed steps.
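As a rough illustration of how the .env.local values above might be consumed, here is a small loader in the project's own TypeScript style. The variable names come from the configuration shown earlier; the interface, function, and fallback values are illustrative assumptions rather than the repository's actual code.

```typescript
// Hypothetical sketch of reading the .env.local configuration (not the repo's real code).
interface AppConfig {
  defaultModel: string;
  defaultSystemPrompt: string;
  apiKey: string;
  apiHost: string;
}

function loadConfig(): AppConfig {
  return {
    defaultModel: process.env.DEFAULT_MODEL ?? "mistral",
    defaultSystemPrompt:
      process.env.DEFAULT_SYSTEM_PROMPT ?? "You are a helpful assistant with accurate information.",
    apiKey: process.env.API_KEY ?? "",
    apiHost: process.env.API_HOST ?? "http://localhost:11434", // e.g. a local Ollama endpoint
  };
}

console.log(loadConfig());
```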
Feature Highlights
- Switch models to experience different styles: Try the different response styles; for example, Llama 3.1 is good for in-depth analysis, while Mistral-7B is more concise and direct.
- Rapid response testing: Enter a complex question (e.g., "Explain quantum mechanics") and observe the response speed and answer quality (a simple throughput-measurement sketch follows this list).
- Privacy mode: If you are worried about data leakage, use your browser's private/incognito window throughout to ensure no traces are left behind.
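To put a number on "rapid response", you can time a completion and divide by the number of generated tokens. The sketch below assumes the OpenAI-compatible response includes a usage.completion_tokens field; the endpoint and key handling are the same assumptions as in the earlier sketches.

```typescript
// Rough throughput check: estimate tokens/second for a given model (field names assumed).
const API_BASE = "https://chatapi.akash.network/api/v1";

async function measureTokensPerSecond(model: string, prompt: string): Promise<number> {
  const start = Date.now();
  const res = await fetch(`${API_BASE}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.AKASH_API_KEY}`,
    },
    body: JSON.stringify({ model, messages: [{ role: "user", content: prompt }] }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const data = await res.json();
  const seconds = (Date.now() - start) / 1000;
  const tokens: number = data.usage?.completion_tokens ?? 0;
  return tokens / seconds;
}

measureTokensPerSecond("Meta-Llama-3-3-70B-Instruct", "Explain quantum mechanics.")
  .then((tps) => console.log(`~${tps.toFixed(1)} tokens/second`));
```

Because the elapsed time here also includes queueing and prompt processing, this gives a conservative lower bound compared with the advertised 27 tokens/second.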
Caveats
- Network requirements: Ensure your network is stable; AkashChat relies on real-time computation, and high latency can affect the experience.
- Model constraints: Some models may not handle domain-specific problems well; try switching models if the answers are unsatisfactory.
- Feedback: If you run into problems or have feature suggestions, submit feedback via GitHub and help improve the project as part of the open-source community.
List of free models (not exactly equivalent to the models offered in the chat interface)
Chat + Completions
DeepSeek-R1-Distill-Llama-70B
DeepSeek-R1-Distill-Qwen-14B
DeepSeek-R1-Distill-Qwen-32B
Meta-Llama-3-1-8B-Instruct-FP8
Meta-Llama-3-1-405B-Instruct-FP8
Meta-Llama-3-2-3B-Instruct
Meta-Llama-3-3-70B-Instruct
Embedding (an example call is sketched below)
BAAI-bge-large-en-v1-5
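The embedding model in the list can be called in much the same way as the chat models, just against an embeddings route. The sketch below assumes an OpenAI-compatible /embeddings endpoint under the same base URL; verify the actual path on the API page.

```typescript
// Sketch: request an embedding from BAAI-bge-large-en-v1-5 (endpoint path assumed).
const API_BASE = "https://chatapi.akash.network/api/v1";

async function embed(text: string): Promise<number[]> {
  const res = await fetch(`${API_BASE}/embeddings`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.AKASH_API_KEY}`,
    },
    body: JSON.stringify({ model: "BAAI-bge-large-en-v1-5", input: text }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const data = await res.json();
  return data.data[0].embedding; // OpenAI-style embeddings response
}

embed("Akash Network is a decentralized compute marketplace.")
  .then((v) => console.log(`embedding dimension: ${v.length}`));
```

This pairs naturally with the free API key mentioned earlier for lightweight client-side retrieval experiments.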