AI Personal Learning
and practical guidance

AI Knowledge, Page 3

Agentic Chunking: AI Agent-Driven Semantic Text Chunking

Introduction: Text chunking plays a crucial role in Large Language Model (LLM) applications, especially in Retrieval-Augmented Generation (RAG) systems. The quality of chunking directly determines the validity of the contextual information, which in turn affects the accuracy and completeness of the answers an LLM generates...
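The idea behind semantic chunking can be sketched in a few lines: split text into sentences, then merge consecutive sentences only while they remain similar. This is a minimal illustration, not the article's actual method; a real agentic system would use an LLM or embedding model to judge similarity, whereas the `jaccard` word-overlap function here is a crude stand-in, and the `threshold` value is arbitrary.

```python
import re

def jaccard(a, b):
    """Word-overlap similarity between two sentences (a crude stand-in
    for the embedding- or LLM-based scoring a real system would use)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def semantic_chunks(text, threshold=0.2):
    """Greedily merge consecutive sentences while they stay similar;
    start a new chunk whenever similarity drops below the threshold."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    chunks, current = [], []
    for sent in sentences:
        if current and jaccard(current[-1], sent) < threshold:
            chunks.append(" ".join(current))
            current = []
        current.append(sent)
    if current:
        chunks.append(" ".join(current))
    return chunks
```

On a text whose topic shifts mid-way, the sentences before and after the shift land in separate chunks, which is exactly the boundary a RAG retriever wants.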

ZEP-Graphiti: A Temporal Knowledge Graph Architecture for Agent Memory

Quick Read: The Challenges of Agent Memory and Zep's Innovation. AI agents face memory bottlenecks in complex tasks. Traditional Large Language Model (LLM)-based agents are constrained by context windows, which makes it difficult to integrate long-term dialogue history and dynamic data efficiently, limits performance, and makes them prone to hallucinations. Zep is ...


OpenAI Release: Applications and Best Practices for AI Reasoning Models

In the field of Artificial Intelligence, the choice of model is crucial. OpenAI, as an industry leader, offers two main model families: Reasoning Models and GPT Models. The former is represented by the o-series models, such as o1 and o3-mini, while the latter is represented by ...


Clearing Up the Confusion: Are Reasoning Models Like o1 and DeepSeek-R1 Actually Thinking?

I found an interesting paper, "Thoughts Are All Over the Place: On the Underthinking of o1-Like LLMs", which analyzes the frequent switching of reasoning paths and the lack of focused thinking in o1-like reasoning models, referred to as "underthinking", and also proposes a method to alleviate ...


What is Model Quantization: FP32, FP16, INT8, INT4 Data Types Explained

Introduction: Deep learning models drive innovation across many fields with their excellent performance. However, the continuous growth of model scale is a double-edged sword: it improves performance while dramatically increasing compute demand and storage pressure. Especially in resource-constrained applications ...
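The core of quantization is mapping FP32 values onto a small integer range via a scale factor. The sketch below shows symmetric per-tensor INT8 quantization, one common scheme among several; the function names are illustrative, and production frameworks use calibrated or learned scales rather than this simple max-based one.

```python
def quantize_int8(values):
    """Symmetric per-tensor quantization: map FP32 values to INT8 codes.
    The scale places the largest magnitude at 127."""
    max_abs = max(abs(v) for v in values) or 1.0
    scale = max_abs / 127.0
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate FP32 values from the INT8 codes."""
    return [x * scale for x in q]
```

Note the round trip is lossy: the error per value is bounded by half a quantization step (scale / 2), which is the storage-versus-accuracy trade-off the article describes.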


Think&Cite: Improving Text Citation Accuracy Using Tree Search Techniques

Abstract: Although Large Language Models (LLMs) perform well, they are prone to hallucinating and generating factually inaccurate information. This challenge has motivated efforts in attributed text generation, prompting LLMs to generate content that contains supporting evidence. In this paper, we present a new approach called Think&Cite ...


Limitations of LLM OCR: The Document Parsing Challenge Behind the Glossy Surface

For any application that relies on a Retrieval-Augmented Generation (RAG) system, converting massive PDF documents into machine-readable blocks of text (also known as "PDF chunking") is a major headache. There are both open-source projects and commercial products on the market, but honestly, none of them can really...


DeepSeek R1 Jailbreak: Trying to Break DeepSeek's Censorship

The official DeepSeek R1 is a great experimental environment for jailbreaking: it triggers basically every type of censorship mechanism, so you can learn a lot of defense techniques from it. This article is a study of large-model censorship mechanisms that walks you through examples of model jailbreaks over the years. Large-model censorship mechanisms are usually used...


OpenAI o3-mini System Card (Chinese)

Original: https://cdn.openai.com/o3-mini-system-card.pdf 1 Introduction: The OpenAI o-series model family is trained with large-scale reinforcement learning to reason using chains of thought. These advanced reasoning capabilities provide new ways to improve the safety and robustness of our models. In particular, ...


Agentic Retrieval-Augmented Generation: A Survey of Agentic RAG Techniques

Abstract Large-scale language models (LLMs), such as OpenAI's GPT-4, Google's PaLM, and Meta's LLaMA, have dramatically transformed Artificial Intelligence (AI) by enabling human-like text generation and natural language understanding. However, their reliance on static training data limits their ability to respond to dynamic, real-time queries...


CoRAG: A Dynamic Chained RAG Model Using MCTS (Monte Carlo Tree Search)

Summary of CORAG's Key Contributions: CORAG (Cost-Constrained Retrieval Optimization for Retrieval-Augmented Generation) is an innovative retrieval-augmented generation (RAG) system designed to address key challenges in existing RAG approaches. The following CORAG ...
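The "cost-constrained" idea in the teaser can be illustrated with a deliberately simplified sketch: pick retrieved chunks by utility per unit cost until a token budget runs out. This greedy heuristic is only a stand-in for the tree-search optimization CORAG actually describes; the function name, the `(chunk_id, utility, cost)` triples, and the budget are all illustrative assumptions, not the paper's API.

```python
def select_chunks(candidates, budget):
    """Greedy cost-constrained selection: rank candidate chunks by
    utility-per-cost, then add them while the token budget allows.
    candidates: list of (chunk_id, utility, cost) triples."""
    chosen, spent = [], 0
    ranked = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)
    for chunk_id, utility, cost in ranked:
        if spent + cost <= budget:
            chosen.append(chunk_id)
            spent += cost
    return chosen
```

Greedy selection can miss the optimal combination (the usual knapsack caveat), which is one motivation for exploring the space with tree search instead.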
