AI hands-on tutorials

Using the Ollama API in C++

This article describes how to use the Ollama API in C++. This document is designed to help C++ developers get up to speed quickly and take full advantage of Ollama's capabilities. By studying this document, you can easily integrate Ollama into your projects. Note that the Ollama community and documentation may be more...

Using the Ollama API in JavaScript

This article describes how to use the Ollama API in JavaScript. This document is designed to help developers get started quickly and take full advantage of Ollama's capabilities. You can use it in a Node.js environment or import the corresponding module directly in the browser. By studying this document, you can easily set...


Using the Ollama API in Java

This article describes how to use the Ollama API in Java. This document is designed to help developers get started quickly and take full advantage of Ollama's capabilities. You can call the Ollama API directly in your program, or you can call Ollama from a Spring AI component. By studying this document, you can easily set...

Using the Ollama API in Python

In this article, we take a brief look at how to use the Ollama API in Python. Whether you want to have a simple chat conversation, handle large amounts of data with streaming responses, or create, copy, and delete models locally, this article can guide you. In addition, we show ...
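As a taste of what the article covers, below is a minimal sketch of a chat call against a local Ollama server, using only the Python standard library. The base URL, port 11434, and the `llama3` model name are assumptions (a default local install), not details from the article:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # assumed default local install

def build_chat_request(model, messages, stream=False):
    """Build the JSON payload for Ollama's /api/chat endpoint."""
    return {"model": model, "messages": messages, "stream": stream}

def chat(model, messages):
    """Send a non-streaming chat request and return the reply text."""
    payload = build_chat_request(model, messages)
    req = urllib.request.Request(
        OLLAMA_URL + "/api/chat",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

# chat("llama3", [{"role": "user", "content": "Why is the sky blue?"}])
# requires a running `ollama serve`, so it is left commented out here.
```

The official `ollama` Python package wraps these endpoints; the raw-HTTP version is shown only to make the request shape visible.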


Ollama API User's Guide

Introduction Ollama provides a powerful REST API that enables developers to interact easily with large language models. With the Ollama API, users can send requests and receive model-generated responses for tasks such as natural language processing and text generation. In this guide, we will introduce in detail completion generation, dialog generation ...
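To make the request/response cycle concrete: when streaming is enabled, the generate endpoint returns one JSON object per line, each carrying a `response` fragment and a `done` flag. A small helper can reassemble them (a sketch based on those documented fields; the sample lines below are invented for illustration):

```python
import json

def join_stream_chunks(ndjson_lines):
    """Stitch streamed /api/generate chunks back into the full text."""
    text = []
    for line in ndjson_lines:
        chunk = json.loads(line)
        text.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(text)

# Invented sample of what a streamed reply looks like on the wire:
sample = [
    '{"response": "Hello", "done": false}',
    '{"response": ", world", "done": true}',
]
print(join_stream_chunks(sample))  # prints "Hello, world"
```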


Customizing Ollama to Run on the GPU

Taking Windows as an example, this article shows how to customize Ollama to run on the GPU. Ollama uses the CPU for inference by default; for faster inference, you can configure which GPU Ollama uses. This tutorial will guide you through setting up the environment on a Windows...
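On NVIDIA hardware, one documented knob is the `CUDA_VISIBLE_DEVICES` environment variable, which limits which GPUs the Ollama server can see. Here is a sketch of setting it from Python before launching the server; the GPU index `0` is an example, and this assumes an NVIDIA GPU with working drivers:

```python
import os

# Expose only the first NVIDIA GPU to the Ollama server process.
env = dict(os.environ, CUDA_VISIBLE_DEVICES="0")

# import subprocess
# subprocess.run(["ollama", "serve"], env=env)  # start the server with the restricted GPU view
```

Setting the variable system-wide (via Windows environment-variable settings) has the same effect for every Ollama launch; the snippet only affects the one child process.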


Ollama Custom Model Storage Locations

Taking Windows as an example, the models pulled by Ollama are stored on the C drive by default. If you need to pull several models, the C drive can fill up, eating into its storage space. Therefore, this section introduces how to customize the Ollama model storage location on Windows, Linux and macOS...
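The knob involved is the `OLLAMA_MODELS` environment variable, which tells Ollama where to keep pulled models; the server must be restarted for it to take effect. A small sketch (the `D:\ollama\models` path is just an example, not a recommendation):

```python
import os

# Redirect model storage off the C drive; the path below is an example.
custom_dir = r"D:\ollama\models"
env = dict(os.environ, OLLAMA_MODELS=custom_dir)

# import subprocess
# subprocess.run(["ollama", "serve"], env=env)  # restart Ollama so the new location is used
```

On Windows, making the change persistent is done through the system environment-variable settings rather than per-process as shown here.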


Ollama Custom Model Import

Introduction In this section, we learn how to use a Modelfile to import custom models, divided into the following parts: importing from GGUF, importing from PyTorch or Safetensors, importing directly from a model, and customizing prompts. I. Importing from GGUF: GGUF (GPT-Generated Unified ...
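For the GGUF path, the core of the workflow is a Modelfile whose `FROM` line points at the local weights file, registered with `ollama create`. A sketch that writes such a Modelfile; the filename, model name, and system prompt are placeholders:

```python
import pathlib

# Minimal Modelfile: FROM points at a local GGUF file; SYSTEM sets a default prompt.
modelfile = """FROM ./my-model.gguf
SYSTEM You are a helpful assistant.
"""
pathlib.Path("Modelfile").write_text(modelfile)

# Then register and run the model with Ollama:
#   ollama create my-model -f Modelfile
#   ollama run my-model
```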


Ollama Installation and Configuration - Docker

Introduction This section covers how to install and configure Ollama in Docker. Docker is a container virtualization technology based on images that can start various containers in seconds. Each container is a complete runtime environment, enabling isolation between containers. Ollama Download ...
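The command at the heart of that workflow is the one documented on Ollama's Docker Hub page (CPU-only variant); the sketch below just assembles it so each flag can be annotated:

```python
import shlex

docker_cmd = (
    "docker run -d "               # run detached
    "-v ollama:/root/.ollama "     # named volume: pulled models survive container restarts
    "-p 11434:11434 "              # expose the default Ollama API port
    "--name ollama ollama/ollama"  # container name and official image
)
args = shlex.split(docker_cmd)
# subprocess.run(args)  # requires Docker to be installed and running
```

GPU variants add runtime flags (e.g. `--gpus=all` for NVIDIA), which the article's download section covers.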


Ollama Installation and Configuration - Linux Systems

Introduction This section covers how to install and configure Ollama on a Linux system, as well as updating Ollama, installing specific versions, viewing logs, and uninstalling. I. Quick Installation of Ollama Download: https://ollama.com/download Ollama official homepage: https://ollama....


Ollama Installation and Configuration - Windows Systems

Introduction This section covers how to install and configure Ollama on a Windows system, divided into the following parts: download directly from the official website, configure environment variables, and run Ollama to verify a successful installation 🎉 First, download directly from the official website: visit the official Ollama homepage under ...


Ollama Installation and Configuration - macOS Systems

Introduction In this section, we learn how to install and configure Ollama on macOS, divided into three main parts: download directly from the official website, run Ollama, and install Enchanted. a. Download directly from the official website: visit the official Ollama download page: https://ollama.com/d...


Ollama Installation and Usage Tutorial

I've published many tutorials on Ollama installation and deployment before, but the information was quite scattered, so this time I've organized complete instructions for using Ollama on a local computer in one place. This tutorial is aimed at beginners and helps them avoid common pitfalls; if you're comfortable doing so, we recommend reading the official Ollama documentation as well. Next I'll go step by step...


Dify Builds a Private Data Visualization and Analytics Agent

Artificial intelligence technology continues to evolve, and chat apps are becoming increasingly feature-rich. Recently, the Dify platform launched a notable update with a newly released chat app that enables data visualization and analytics directly in the conversation, bringing users a more intuitive and efficient communication experience. Although the title of the article mentions the feature...


Dify Workflow: Say goodbye to cumbersome API integration, generate code and query parameters with one click

In the digital age, APIs (Application Programming Interfaces) have become the cornerstone of interaction between software systems. However, traditional API integration is often inefficient and painful for developers. Have you ever faced the following dilemmas: interface documentation is obscure and difficult to understand, parameter descriptions are ambiguous ...


Unsloth Solves the Repeated-Inference Problem in Quantized QwQ-32B

Recently, the Qwen team released the QwQ-32B model, an inference model whose performance on many benchmarks is comparable to DeepSeek-R1. However, many users have encountered infinite generation, excessive repetition, token issues, and fine-tuning problems. This article aims to provide ...
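One family of mitigations for runaway repetition is tightening the sampling options on the request. The option names below (`temperature`, `repeat_penalty`) are standard Ollama parameters, but the numeric values and the `qwq` model tag are illustrative placeholders, not the article's recommendations:

```python
import json

payload = {
    "model": "qwq",             # assumed tag for a local QwQ-32B pull
    "prompt": "How many r's are in \"strawberry\"?",
    "stream": False,
    "options": {
        "temperature": 0.6,     # lower randomness
        "repeat_penalty": 1.1,  # penalize repeated tokens
    },
}
body = json.dumps(payload)  # POST this to http://localhost:11434/api/generate
```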


Local Deployment of the QwQ-32B Large Model: An Easy Guide for Personal Computers

The field of Artificial Intelligence (AI) models is always full of surprises, and every technological breakthrough grips the industry's attention. Recently, Alibaba's QwQ team released its latest inference model, QwQ-32B, in the early hours of the morning, once again attracting widespread attention. According to the official announcement, QwQ-32B has a parameter scale...
