AI Personal Learning and practical guidance

AI knowledge Page 4


Workflow: Understanding How Workflows Run, in One Article

Before we start, let's go over a few key terms. Workflow: simply put, the complete set of steps for getting something done. It is like an instruction manual that tells you what needs to be done, in what order, and by whom, in order to reach your goal. Input: before the workflow begins, you need to...
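To make the idea concrete, here is a minimal sketch of a workflow as an ordered list of steps applied to an input; the step names and functions are hypothetical illustrations, not taken from the article.

```python
# A minimal sketch of a workflow: an input flows through an ordered list of steps.
# The step functions and their names are hypothetical.

def clean_text(data: str) -> str:
    """Step 1: normalize the raw input."""
    return data.strip().lower()

def summarize(data: str) -> str:
    """Step 2: stand-in for a model call that produces a summary."""
    return data[:50]  # placeholder for an LLM call

def publish(data: str) -> str:
    """Step 3: hand the result to whoever needs it."""
    return f"published: {data}"

WORKFLOW = [clean_text, summarize, publish]  # what to do, and in what order

def run_workflow(user_input: str) -> str:
    result = user_input          # the workflow's input
    for step in WORKFLOW:        # each step receives the previous step's output
        result = step(result)
    return result                # the workflow's final output

print(run_workflow("  An Example Document About Workflows  "))
```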

Turn Cursor into Devin in an hour and learn the difference!

This article is part of the series "Understanding and Deploying Agentic AI": Agentic AI Series 1: A Comparison of Devin and the Cursor Agent; Agentic AI Series 2: From Thinker to Doer - The Paradigm Revolution and Technical Architecture of Agentic AI; Agentic AI Series 3: Turning $20 into $50...


Five Ways to Implement an LLM Memory System

When building large language model (LLM) applications, the memory system is one of the key technologies for managing conversation context, storing long-term information, and supporting semantic understanding. An efficient memory system helps the model stay consistent across long conversations, extract key information, and even retrieve historical conversations...
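As a rough illustration of the simplest kind of scheme in this space, here is a sliding-window conversation buffer; it is only a sketch of one possible memory approach, not any of the article's five methods.

```python
# A minimal sliding-window conversation memory: keep only the most recent turns
# so the prompt stays within the model's context budget. Purely illustrative.
from collections import deque

class ConversationMemory:
    def __init__(self, max_turns: int = 6):
        self.turns = deque(maxlen=max_turns)  # old turns are dropped automatically

    def add(self, role: str, content: str) -> None:
        self.turns.append({"role": role, "content": content})

    def as_messages(self, system_prompt: str) -> list[dict]:
        """Build the message list sent to the LLM on every call."""
        return [{"role": "system", "content": system_prompt}, *self.turns]

memory = ConversationMemory(max_turns=4)
memory.add("user", "My name is Ada.")
memory.add("assistant", "Nice to meet you, Ada.")
memory.add("user", "What's my name?")
print(memory.as_messages("You are a helpful assistant."))
```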


OpenAI Function calling

Features of Function Calling V2: the core goal of Function Calling V2 is to give OpenAI models the ability to interact with the outside world, which comes down to two core capabilities. Fetching data, a function-calling implementation of RAG: essentially RAG (Retrieval-Augmented...
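As a hedged illustration of the "fetching data" pattern, the sketch below declares a single tool through the Chat Completions tools parameter; the get_weather function, its schema, and the model name are placeholders chosen for this example, not details from the article.

```python
# A minimal function-calling sketch: declare a tool, let the model decide to call it,
# then execute it locally. The get_weather tool and model name are placeholders.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Fetch the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def get_weather(city: str) -> str:
    return f"Sunny, 22°C in {city}"  # stand-in for a real weather API call

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# Assumes the model chose to call the tool; production code should check first.
call = resp.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)   # arguments arrive as a JSON string
print(get_weather(**args))                   # we run the function ourselves
```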


Retrieval: What Is Retrieval? Explaining the Common Retrieval Techniques Used in RAG

Basic concepts: in the field of information technology, retrieval refers to the process of efficiently locating and extracting relevant information from a large dataset (usually documents, web pages, images, audio, video, or other forms of information) in response to a user's query or need. Its core goal is to find information that is relevant to the use...
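To ground the definition, here is a small sketch of one classic retrieval technique, TF-IDF scoring with scikit-learn; the toy corpus and query are invented for illustration and real systems index far larger collections.

```python
# A small retrieval sketch: rank a toy corpus against a query with TF-IDF vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "RAG combines retrieval with text generation.",
    "Embedding models map text to dense vectors.",
    "Cats are popular pets around the world.",
]
query = "How does retrieval help generation?"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)      # index the documents
query_vector = vectorizer.transform([query])        # vectorize the query

scores = cosine_similarity(query_vector, doc_vectors)[0]
ranked = sorted(zip(scores, corpus), reverse=True)  # highest score first
for score, doc in ranked:
    print(f"{score:.3f}  {doc}")
```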


CAG: A Cache-Augmented Generation Method that Is 40x Faster than RAG

CAG (Cache-Augmented Generation) is claimed to be 40 times faster than RAG (Retrieval-Augmented Generation). CAG rethinks knowledge acquisition: instead of retrieving external data in real time, all knowledge is preloaded into the model's context. It is like condensing a huge library into an on-the-go toolkit that can be used whenever needed...
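A rough sketch of the idea (not the paper's implementation): rather than retrieving per query, the whole knowledge base is concatenated into the prompt once and reused for every question, so prompt or KV caching can amortize the cost. The answer_with_llm call below is a placeholder, not a specific provider's API.

```python
# A conceptual sketch of cache-augmented generation: preload *all* knowledge into
# the context once, then answer every query against that fixed prefix.

KNOWLEDGE_BASE = [
    "Policy A: refunds are accepted within 30 days.",
    "Policy B: shipping is free for orders over $50.",
    # ...the entire (small enough) knowledge base goes here; there is no retrieval step.
]

PRELOADED_CONTEXT = "\n".join(KNOWLEDGE_BASE)   # built once, reused for every query;
                                                # providers can cache this shared prefix

def answer_with_llm(prompt: str) -> str:
    """Placeholder for the actual model call."""
    return f"[LLM would answer based on a {len(prompt)}-character prompt]"

def answer(question: str) -> str:
    prompt = (
        "Answer using only the knowledge below.\n\n"
        f"{PRELOADED_CONTEXT}\n\nQuestion: {question}"
    )
    return answer_with_llm(prompt)

print(answer("When are refunds accepted?"))
```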


Google Agents and Basic Applications White Paper (Chinese version)

By Julia Wiesinger, Patrick Marlow and Vladimir Vuskovic. Originally published at https://www.kaggle.com/whitepaper-agents. Table of contents: Introduction; What is an agent?; Models; Tools; Orchestration layers; Agents and models; Cognitive architecture: how agents work; Tools ...


Approaching Multi-Agent Systems (MAS): A Collaborative AI World

A multi-agent system (MAS) is a computing system composed of multiple interacting intelligent agents. Multi-agent systems can be used to solve problems that are difficult or impossible for a single agent or a single system to solve. The agents can be robots, humans, or soft...
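As a toy illustration of interacting agents, the sketch below has a planner and a worker exchange messages until the planner is satisfied; the classes and protocol are hypothetical, not from the article.

```python
# A toy multi-agent sketch: a planner agent and a worker agent cooperate by
# exchanging messages. The classes and the stopping rule are hypothetical.

class PlannerAgent:
    def act(self, message: str) -> str:
        if "done" in message:
            return "STOP"
        return "plan: split the report into sections"

class WorkerAgent:
    def act(self, message: str) -> str:
        return f"done executing '{message}'"

def run(max_rounds: int = 3) -> None:
    planner, worker = PlannerAgent(), WorkerAgent()
    message = "task: write a report"
    for round_no in range(max_rounds):
        plan = planner.act(message)      # planner decides the next step
        if plan == "STOP":
            print(f"round {round_no}: planner is satisfied, stopping")
            break
        message = worker.act(plan)       # worker executes and reports back
        print(f"round {round_no}: {message}")

run()
```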


Understanding RAG (Retrieval-Augmented Generation) in One Article: Conceptual Introduction plus Hands-On Code

First, LLMs already have strong capabilities, so why do we still need RAG (Retrieval-Augmented Generation)? Although LLMs have demonstrated impressive capabilities, the following challenges still deserve attention. Hallucination: LLMs generate text word by word using a statistics-based probabilistic approach, a mechanism that inherently leaves room for...
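For orientation before the article's own code walkthrough, here is a bare-bones RAG loop with placeholder retrieval and generation functions; it sketches the general pattern, not the article's implementation.

```python
# A bare-bones RAG loop: retrieve the most relevant chunks, then ground the
# model's answer in them. embed() and generate() are placeholders.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding; a real system would call an embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(8)

def generate(prompt: str) -> str:
    """Placeholder for the LLM call."""
    return f"[answer grounded in a {len(prompt)}-character prompt]"

chunks = [
    "The warranty covers manufacturing defects for two years.",
    "Batteries are consumables and are covered for six months.",
    "The office is closed on public holidays.",
]
chunk_vecs = np.stack([embed(c) for c in chunks])   # offline indexing step

def rag_answer(question: str, top_k: int = 2) -> str:
    q = embed(question)
    scores = chunk_vecs @ q                          # similarity to each chunk
    best = np.argsort(scores)[::-1][:top_k]          # indices of the top-k chunks
    context = "\n".join(chunks[i] for i in best)
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generate(prompt)

print(rag_answer("How long is the battery warranty?"))
```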


OpenAI-o3 and Monte-Carlo Ideas

o3 is here, and I would like to share a few personal observations. Progress on the test-time scaling law has been much faster than we expected. But I would also say the path is a bit roundabout: it is OpenAI taking an indirect route in its pursuit of AGI. Reinforcement learning and shortcut thinking: for ...
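The column does not give code; as a generic illustration of Monte-Carlo-flavoured test-time scaling, the sketch below samples several candidate answers and keeps the one a placeholder verifier scores highest. It is not a description of OpenAI's o3 mechanism.

```python
# Generic Monte-Carlo-style test-time compute: sample many candidate answers,
# score each with a verifier, keep the best. Both functions are placeholders.
import random

def sample_answer(question: str) -> str:
    """Stand-in for sampling one reasoning path from an LLM at temperature > 0."""
    return f"candidate answer {random.randint(1, 100)} to '{question}'"

def verify(answer: str) -> float:
    """Stand-in for a reward model or verifier scoring the candidate."""
    return random.random()

def best_of_n(question: str, n: int = 8) -> str:
    candidates = [sample_answer(question) for _ in range(n)]  # more samples, more compute
    return max(candidates, key=verify)                        # keep the highest-scored one

print(best_of_n("What is 17 * 23?"))
```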


How to choose the best Embedding model for a RAG application

Vector embeddings are at the core of today's Retrieval-Augmented Generation (RAG) applications. They capture the semantic information of data objects (e.g., text, images) and represent it as arrays of numbers. In current generative AI applications, these vector embeddings are usually produced by embedding models. How to choose one for a RAG ...
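As a small example of what "generating vector embeddings" looks like in practice, the sketch below uses the sentence-transformers library; the model name is one common open-source choice picked for illustration, not the article's recommendation.

```python
# Generate vector embeddings for a few texts and compare them by cosine similarity.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")   # assumes the package/model is available

texts = [
    "How do I reset my password?",
    "Steps to recover account access",
    "Today's lunch menu",
]
embeddings = model.encode(texts)                  # one dense vector per text

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print("related:  ", cosine(embeddings[0], embeddings[1]))
print("unrelated:", cosine(embeddings[0], embeddings[2]))
```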


A 10,000-word deep dive into RAG optimization in real-world DB-GPT deployment scenarios

Preface: over the past two years, Retrieval-Augmented Generation (RAG) has gradually become a core component for enhancing agents. By combining the dual capabilities of retrieval and generation, RAG can bring in external knowledge, opening up more applications of large models in complex scenarios...


Top 5 AI Agent Frameworks Worth Getting Into in 2025

Agent: the most common Chinese translation I have seen so far is 智能体 ("intelligent body"), while the literal translation is "agent". And how should "agentic" be translated? I feel that simply keeping the English word works best. So, to avoid confusing readers, I use the English terms directly in this article. With the development of LLMs, the ability of AI...


A simple, effective RAG retrieval strategy: hybrid sparse + dense retrieval with reranking, plus "prompt caching" to generate document-wide context for each text chunk

For an AI model to be useful in a specific scenario, it usually needs access to background knowledge. For example, a customer-support chatbot needs to understand the particular business it serves, while a legal-analysis bot needs access to a large body of past cases. Developers often use Retrieval-Augmente...
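To make the strategy concrete, here is a small sketch that fuses a sparse ranking and a dense ranking with reciprocal rank fusion (RRF); the two scorers are simplified stand-ins for BM25 and an embedding model, and the reranking and prompt-caching steps of the article are omitted.

```python
# Hybrid retrieval sketch: fuse a sparse ranking and a dense ranking with
# reciprocal rank fusion (RRF). The scorers below are simplified stand-ins.

docs = [
    "Refunds are processed within 5 business days.",
    "Our return policy allows refunds for 30 days.",
    "The cafeteria serves lunch from noon to two.",
]

def sparse_scores(query: str) -> list[float]:
    """Stand-in for BM25: count shared lowercase terms."""
    q_terms = set(query.lower().split())
    return [len(q_terms & set(d.lower().split())) for d in docs]

def dense_scores(query: str) -> list[float]:
    """Stand-in for embedding similarity: shared character trigrams."""
    grams = lambda s: {s[i:i + 3] for i in range(len(s) - 2)}
    q = grams(query.lower())
    return [len(q & grams(d.lower())) for d in docs]

def rrf(rankings: list[list[int]], k: int = 60) -> list[int]:
    """Reciprocal rank fusion: docs ranked high by either ranker float to the top."""
    fused = {i: 0.0 for i in range(len(docs))}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            fused[doc_id] += 1.0 / (k + rank + 1)
    return sorted(fused, key=fused.get, reverse=True)

def hybrid_search(query: str) -> list[str]:
    s, d = sparse_scores(query), dense_scores(query)
    by_sparse = sorted(range(len(docs)), key=lambda i: -s[i])
    by_dense = sorted(range(len(docs)), key=lambda i: -d[i])
    return [docs[i] for i in rrf([by_sparse, by_dense])]

for doc in hybrid_search("how long do refunds take"):
    print(doc)
```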


Large-model fine-tuning essentials that even a beginner can understand

The full process of fine-tuning a large model: it is recommended to follow the process strictly during fine-tuning and avoid skipping steps, which can lead to wasted effort. For example, if the dataset has not been fully constructed and it eventually turns out that the fine-tuned model performs poorly because of dataset quality, then the earlier work is wasted, and the matter...
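The article covers the whole process; as one narrow, hedged example of the training step itself, here is a minimal LoRA setup with Hugging Face peft. The base model name and hyperparameters are illustrative assumptions, not the article's recommended configuration.

```python
# A minimal sketch of one common fine-tuning setup (LoRA via Hugging Face peft).
# Model name and hyperparameters are placeholders, not a recommendation.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "Qwen/Qwen2.5-0.5B"                 # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # depends on the model architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()         # only the LoRA adapters are trained
# ...then construct the dataset, train (e.g. with transformers.Trainer), and evaluate.
```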
