
LLM-based Query Expansion

Have you ever typed a keyword into a search engine and found that what comes up is not what you wanted? Or wanted to search for something but didn't know which words would express it most accurately? Don't worry, "query expansion" technology can help you solve these problems. Recently, the query expansion...
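To make the idea concrete, here is a minimal sketch of LLM-based query expansion using the OpenAI Python SDK; the model name, prompt wording, and helper function are illustrative assumptions rather than the article's own implementation.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def expand_query(query: str, n_variants: int = 3) -> list[str]:
    """Hypothetical helper: ask an LLM for alternative phrasings of a search query."""
    prompt = (
        f"Rewrite the search query '{query}' into {n_variants} alternative queries "
        "that capture the same intent. Return one query per line."
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    variants = [line.strip("-*• ").strip() for line in reply.splitlines() if line.strip()]
    return [query] + variants


# The expanded queries are then sent to the retriever alongside the original one,
# so documents phrased differently from the user's wording can still be found.
print(expand_query("laptop battery drains fast"))
```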

Guide to Building Multi-Agent Systems Based on CrewAI

1. Introduction In the field of Artificial Intelligence (AI), multi-agent systems are gradually becoming a key technology for solving complex problems and enabling efficient collaboration. CrewAI, as a powerful multi-agent collaboration tool, provides developers with a convenient way to build intelligent collaboration systems. In this paper, we will introduce how to build an intelligent collaboration system based on Cr...
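The pattern the article builds on can be sketched with CrewAI's core Agent/Task/Crew objects; the roles, goals, and tasks below are made-up placeholders, field names may vary slightly between CrewAI versions, and an LLM API key is assumed to be configured.

```python
from crewai import Agent, Task, Crew

# Two illustrative agents with distinct roles.
researcher = Agent(
    role="Researcher",
    goal="Collect key facts about the given topic",
    backstory="An analyst who digs up reliable background information.",
)
writer = Agent(
    role="Writer",
    goal="Turn research notes into a short, readable summary",
    backstory="A technical writer who values clarity.",
)

# Tasks assigned to each agent; later tasks can build on earlier results.
research_task = Task(
    description="Research the current state of multi-agent frameworks.",
    expected_output="A bullet list of key findings.",
    agent=researcher,
)
writing_task = Task(
    description="Write a 200-word summary based on the research notes.",
    expected_output="A concise summary paragraph.",
    agent=writer,
)

crew = Crew(agents=[researcher, writer], tasks=[research_task, writing_task])
result = crew.kickoff()  # agents execute their tasks in order and pass results along
print(result)
```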

Agentic Chunking: AI Agent-Driven Semantic Text Chunking

Introduction Text chunking plays a crucial role in applications of Large Language Models (LLMs), especially in Retrieval Augmented Generation (RAG) systems. The quality of text chunking directly determines how useful the retrieved contextual information is, which in turn affects the accuracy and completeness of the answers generated by the LLM...
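One way to picture agentic chunking, not necessarily the article's exact method, is to let an LLM decide for each incoming sentence whether it continues an existing chunk or opens a new one; the sketch below is a hypothetical illustration using the OpenAI SDK, with the prompt and model choice as assumptions.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set


def assign_to_chunk(sentence: str, chunk_summaries: list[str]) -> int:
    """Hypothetical helper: ask the model which chunk a sentence belongs to, or -1 for a new one."""
    listing = "\n".join(f"{i}: {s}" for i, s in enumerate(chunk_summaries)) or "(no chunks yet)"
    prompt = (
        "Existing chunk summaries:\n" + listing +
        f"\n\nSentence: {sentence}\n"
        "Reply with only the index of the chunk this sentence belongs to, or -1 if it starts a new topic."
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content.strip()
    try:
        return int(reply)
    except ValueError:
        return -1  # fall back to starting a new chunk if the reply is not a clean integer


def agentic_chunk(sentences: list[str]) -> list[list[str]]:
    """Group sentences into semantically coherent chunks, one LLM decision per sentence."""
    chunks: list[list[str]] = []
    for sent in sentences:
        idx = assign_to_chunk(sent, [c[0] for c in chunks])  # first sentence stands in as a summary
        if 0 <= idx < len(chunks):
            chunks[idx].append(sent)
        else:
            chunks.append([sent])
    return chunks
```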

ZEP-Graphiti: a temporal knowledge graph architecture for agent memory

Quick Reads Challenges of agent memory and Zep's innovation: AI agents face memory bottlenecks in complex tasks. Traditional Large Language Model (LLM)-based AI agents are constrained by context windows, which makes it difficult to efficiently integrate long-term dialog history and dynamic data, limits performance, and makes them prone to hallucinations. Zep is ...


OpenAI Release: Applications and Best Practices for AI Reasoning Models

In the field of Artificial Intelligence, the choice of model is crucial, and OpenAI, as an industry leader, offers two main model families: Reasoning Models and GPT Models. The former is represented by the o-series models, such as o1 and o3-mini, while the latter is represented by ...


Clearing up the confusion: are reasoning models like o1 and DeepSeek-R1 actually thinking?

I found an interesting paper, "Thoughts Are All Over the Place: On the Underthinking of o1-Like LLMs", which analyzes how o1-like reasoning models frequently switch thinking paths and fail to stay focused, a behavior it calls "underthinking", and at the same time proposes a method to alleviate ...


What is Model Quantization: FP32, FP16, INT8, INT4 Data Types Explained

Introduction In the vast starry sky of AI technology, deep learning models drive innovation in many fields with their excellent performance. However, the continuous growth in model size is a double-edged sword: while it improves performance, it brings a dramatic increase in compute demand and storage pressure. Especially in resource-constrained applications ...
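To make the data types concrete, here is a small, self-contained sketch of symmetric per-tensor INT8 quantization with NumPy; it illustrates the general idea of trading precision for storage, not any specific framework's implementation.

```python
import numpy as np


def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor INT8 quantization: map FP32 weights onto [-127, 127]."""
    scale = max(np.abs(weights).max(), 1e-12) / 127.0  # guard against an all-zero tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale


def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate FP32 values from the INT8 representation."""
    return q.astype(np.float32) * scale


w = np.random.randn(4, 4).astype(np.float32)   # stand-in for an FP32 weight matrix
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("max absolute error:", np.abs(w - w_hat).max())  # small error, 4x less storage than FP32
```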


Think&Cite: Improving Text Citation Accuracy Using Tree Search Techniques

Abstract Although Large Language Models (LLMs) perform well, they are prone to hallucinating and generating factually inaccurate information. This challenge has motivated efforts in attributed text generation, which prompts LLMs to generate content that includes supporting evidence. In this paper, we present a new approach called Think&Cite ...


Limitations of LLM OCR: The Document Parsing Challenge Behind the Glossy Surface

For any application built on a Retrieval Augmented Generation (RAG) system, converting massive PDF documents into machine-readable blocks of text (also known as "PDF chunking") is a big headache. There are both open-source tools and commercial products on the market, but honestly, no solution can really...
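To see why this is hard, consider a naive baseline: extract page text with a library such as pypdf and split it into fixed-size chunks. The sketch below is a hypothetical example (the file path and size limit are placeholders), and it is exactly the kind of approach that multi-column layouts, tables, and figures quickly defeat.

```python
from pypdf import PdfReader  # pip install pypdf


def pdf_to_chunks(path: str, max_chars: int = 1000) -> list[str]:
    """Naive baseline: dump all page text and split on blank lines into ~max_chars chunks."""
    reader = PdfReader(path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)

    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) > max_chars:
            chunks.append(current.strip())
            current = ""
        current += para + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks


# Works tolerably on clean single-column reports; breaks down on scanned pages,
# tables, and multi-column layouts, which is the gap the article discusses.
for chunk in pdf_to_chunks("report.pdf")[:3]:
    print(chunk[:80], "...")
```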


DeepSeek R1 Jailbreak: Trying to Break DeepSeek's Censorship

The official DeepSeek R1 is a great experimental environment for jailbreaking, since it triggers basically every type of censorship mechanism, and you can learn a lot of defense techniques from it. This article is a primer on large-model censorship mechanisms that walks you through examples of large-model jailbreaks over the years. Large-model censorship mechanisms are usually used...


OpenAI o3-mini System Card (Chinese)

Original: https://cdn.openai.com/o3-mini-system-card.pdf 1 Introduction The OpenAI o model family is trained with large-scale reinforcement learning to reason using chains of thought. These advanced reasoning capabilities provide new ways to improve the safety and robustness of our models. In particular, ...
