AI Personal Learning and practical guidance

AI hands-on tutorials

One-Click Upgrade of Dify to the Latest Version (1.1.1) with Cursor: A Tutorial

This article guides readers through an easy Dify upgrade. Before you begin, make sure you have the following two tools in place: a local Dify deployment, which is the foundation of the upgrade, and Cursor, an AI programming tool that dramatically improves development efficiency. Optional tool: SiliconFlow, an API aggregation platform that makes it easier to use Dify...

OpenManus New WebUI and Domestic Search Engine Configuration Guide

OpenManus has been updated frequently recently. In addition to support for local Ollama and web API providers, it has gained support for Chinese domestic search engines and several WebUI adaptations. This article introduces several community-contributed OpenManus WebUIs and explains how to configure domestic search engines. OpenMan...

Wenxin Agent Product-Link Monetization: A Practical Tutorial

Background: A book recommendation assistant built on the Wenxin Agent Platform with the latest DeepSeek model can recommend products based on the user's conversation content, achieving accurate conversion and monetization in a closed business loop. This tutorial analyzes the development of the DeepSeek book recommendation assistant in depth and helps ...

Connecting Dify to External Knowledge Bases: A Tutorial

For ease of differentiation, knowledge bases outside of the Dify platform are collectively referred to as "external knowledge bases" in this article. Introduction Dify's built-in knowledge base functionality and text retrieval mechanisms may not meet the needs of some advanced developers, who may need more precise control over text recall results. Some teams choose to build their own...
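Dify's external knowledge base feature works by calling a retrieval API that your own service exposes. The sketch below shapes a response the way that API expects; the field names follow Dify's published external-knowledge retrieval contract, but verify them against the current docs, and the keyword-overlap scoring is a deliberately crude stand-in for real vector retrieval:

```python
def retrieve_records(query, documents, top_k=3, score_threshold=0.0):
    """Score documents by naive keyword overlap and shape the result
    in the record format Dify's external-knowledge endpoint expects."""
    q_terms = set(query.lower().split())
    scored = []
    for doc in documents:
        doc_terms = set(doc["content"].lower().split())
        score = len(q_terms & doc_terms) / max(len(q_terms), 1)
        if score >= score_threshold:
            scored.append({
                "content": doc["content"],
                "score": round(score, 3),
                "title": doc["title"],
                "metadata": doc.get("metadata", {}),
            })
    scored.sort(key=lambda r: r["score"], reverse=True)
    return {"records": scored[:top_k]}
```

A real implementation would replace the overlap scoring with retrieval against your own vector index, which is exactly the "more precise control over text recall results" the article refers to.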

Convert Figma Designs into Full-Stack Applications with One Click

A while ago, bolt.new partnered with Anima to introduce an impressive feature: it generates a working full-stack application from nothing more than a Figma design URL. On the bolt.new homepage, click "Import from Figma"; next, paste the Figma frame URL into the text field...

Implementing a Local RAG Application with DeepSeek R1 and Ollama

Introduction: This document details how to build a local RAG (Retrieval-Augmented Generation) application using DeepSeek R1 and Ollama, complementing the tutorial on building local RAG applications with LangChain. We demonstrate the complete implementation flow with examples, including document processing, vector storage...
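The generation half of such a pipeline boils down to stuffing the retrieved chunks into the model's prompt. A minimal sketch of that step (the prompt template here is illustrative, not the one used in the article):

```python
def build_rag_prompt(question, retrieved_chunks):
    """Assemble a retrieval-augmented prompt: retrieved context first,
    then the user question, so the model answers from the context."""
    context = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(retrieved_chunks))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
```

The assembled string is then sent to the model (here, DeepSeek R1 served by Ollama) as an ordinary completion request.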

Implementing a Local Agent with Ollama+LlamaIndex

Introduction: This document describes how to combine LlamaIndex's ReActAgent with Ollama to implement a simple local agent. The LLM used here is the qwen2:0.5b model; since models differ in their ability to invoke tools, you can try a different model to achieve ...

Implementing a Local Agent with Ollama+LangChain

Introduction: ReAct (Reasoning and Acting) is a framework that combines reasoning with action to improve agent performance on complex tasks. By tightly integrating logical reasoning with concrete actions, the framework enables agents to accomplish tasks more effectively in dynamic environments. Source: ReAct: ...
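The thought → action → observation cycle that ReAct describes can be illustrated with a toy loop and a scripted stand-in for the model. This is a didactic sketch of the control flow, not LangChain's agent API:

```python
import re

def calculator(expression):
    """The single tool the agent can call; eval is fine for this toy demo."""
    return str(eval(expression, {"__builtins__": {}}))

def scripted_llm(history):
    """Stand-in for a real model: emits one Action, then a Final Answer
    once an Observation has appeared in the transcript."""
    if "Observation:" in history:
        obs = history.rsplit("Observation:", 1)[1].strip()
        return f"Thought: I have the result.\nFinal Answer: {obs}"
    return "Thought: I need arithmetic.\nAction: calculator[2 * 21]"

def react_loop(question, llm, tools, max_steps=5):
    """Minimal ReAct loop: ask the model, run any Action it names,
    append the Observation, and repeat until a Final Answer appears."""
    history = f"Question: {question}"
    for _ in range(max_steps):
        step = llm(history)
        history += "\n" + step
        if "Final Answer:" in step:
            return step.rsplit("Final Answer:", 1)[1].strip()
        match = re.search(r"Action: (\w+)\[(.+?)\]", step)
        if match:
            name, arg = match.groups()
            history += f"\nObservation: {tools[name](arg)}"
    return None
```

In a real agent the scripted function is replaced by an LLM call, and the transcript accumulated in `history` is exactly what lets the model reason over its own earlier actions.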

Building a Local RAG Application with Ollama+LlamaIndex

Introduction: This document details how to use the LlamaIndex framework to build a local RAG (Retrieval-Augmented Generation) application. By integrating LlamaIndex, you can build a local RAG system that combines retrieval and generation capabilities to improve the efficiency of information access...
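Indexing in such a pipeline starts by splitting documents into chunks. Below is a minimal character-based chunker with overlap; it is a simplification of LlamaIndex's node parsers, which additionally respect sentence boundaries:

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into fixed-size character chunks. Consecutive chunks
    overlap, so content cut at a boundary still appears whole in one chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size]
            for i in range(0, len(text), step)
            if text[i:i + chunk_size]]
```

Each chunk is then embedded and stored; at query time only the most similar chunks are handed to the model.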

Building a Local RAG Application with Ollama+LangChain

This tutorial assumes that you are already familiar with the following concepts: chat models, chaining runnables, embeddings, vector stores, and retrieval-augmented generation. Many popular projects such as llama.cpp, Ollama, and llamafile have shown that running a large language model in a local environment is a good idea. A local environment for running large language models...
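The "embeddings + vector store + retrieval" combination in that list can be boiled down to a few lines. The bag-of-words embedding below is a deliberately crude stand-in for a real embedding model, but the store's add/search shape mirrors what LangChain's vector stores do:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words embedding; a real app would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

class VectorStore:
    """Minimal in-memory vector store: add texts, retrieve top-k by cosine."""
    def __init__(self):
        self.items = []

    def add(self, text):
        self.items.append((text, embed(text)))

    def search(self, query, k=2):
        q = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[1]), reverse=True)
        return [text for text, _ in ranked[:k]]
```

Swapping `embed` for a real model (e.g. one served by Ollama) turns this from keyword matching into semantic retrieval without changing the store's interface.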

Connecting Locally Deployed Ollama Models to Dify

Dify supports the inference and embedding capabilities of large language models deployed with Ollama. Quick access: download Ollama (see the Ollama installation, configuration, and local deployment tutorial), then run Ollama and chat with Llama: ollama run llama3.1. Launch into ...
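Under the hood, Dify (or any other client) talks to Ollama over its local HTTP API, which listens on port 11434 by default. A sketch using only the standard library; the endpoint path and response shape follow Ollama's /api/chat documentation, so verify them against your installed version:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default listen address

def chat_payload(model, prompt):
    """Build a non-streaming request body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def chat(model, prompt):
    """POST the payload to a locally running Ollama server and
    return the assistant's reply text."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/chat",
        data=json.dumps(chat_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]
```

One common gotcha: when Dify itself runs in Docker, `localhost` inside the container is not the host machine, so the Ollama base URL typically needs to be something like `http://host.docker.internal:11434`.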

Connecting Ollama to a Local AI Copilot Programming Assistant

Introduction: This document describes how to build a local Copilot-style programming assistant to help you write cleaner, more efficient code. In this course you will learn how to integrate local programming assistants with Ollama, including Continue and Aider. Note: we will focus on VS Code...
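For example, pointing Continue at a local Ollama model is mostly a matter of configuration. A hedged sketch of a `config.json` entry: the field names follow Continue's documented Ollama provider, but the model names here are assumptions, so use whatever you have actually pulled locally:

```json
{
  "models": [
    {
      "title": "Local Llama 3.1",
      "provider": "ollama",
      "model": "llama3.1"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Local Autocomplete",
    "provider": "ollama",
    "model": "qwen2.5-coder:1.5b"
  }
}
```

A smaller model for `tabAutocompleteModel` keeps inline completions fast, while the chat model can be larger.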

Deploying a Visual Chat Interface for Ollama with OpenWebUI

I. Deploying with Node.js. 1. Install Node.js: download and install the Node.js tooling from https://www.nodejs.com.cn/download.html, then set up a mirror source, for example: npm config set registry http://mirrors.cloud.tencent.com/np...

Deploying a Visual Chat Interface for Ollama with FastAPI

I. Directory structure. Under the C6 folder of the repository notebook:
fastapi_chat_app/
│
├── app.py
├── websocket_handler.py
├── static/
│   └── index.html
└── requirements.txt
app.py: the FastAPI application's main settings and routing. webso...

Ollama in LangChain - JavaScript Integration

Introduction: This document describes how to use Ollama in a JavaScript environment to integrate with LangChain and create powerful AI applications. Ollama is an open-source deployment tool for large language models, while LangChain is a framework for building applications on top of language models. By combining...

Ollama in LangChain - Python Integration

Introduction: This document describes how to use Ollama in a Python environment to integrate with LangChain and create powerful AI applications. Ollama is an open-source deployment tool for large language models, while LangChain is a framework for building applications on top of language models. By combining these two...
