Have you ever typed a keyword into a search engine only to get results that are not what you wanted? Or wanted to search for something, but couldn't find the words to express it accurately? Don't worry, "query expansion" technology can help you solve these problems. Recently, the query expansion...
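To make the idea concrete, here is a minimal sketch of one common way to do query expansion: ask an LLM to paraphrase the user's query, then search with every variant. It uses the official openai-python client; the model name and prompt wording are illustrative assumptions, not a prescribed recipe.

```python
# Minimal query-expansion sketch: ask an LLM to rewrite a user query into
# several alternative phrasings, then issue all of them to the search
# backend and merge the results. Model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def expand_query(query: str, n: int = 3) -> list[str]:
    """Return the original query plus up to n LLM-generated paraphrases."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Rewrite this search query in {n} different ways, "
                       f"one per line, no numbering:\n{query}",
        }],
    )
    lines = resp.choices[0].message.content.splitlines()
    variants = [line.strip() for line in lines if line.strip()]
    return [query] + variants[:n]

# Every variant is searched; merging the result sets broadens recall.
print(expand_query("laptop battery drains fast"))
```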
Amidst the ever-changing wave of translation technologies, the emergence of ChatGPT (Chat Generative Pre-trained Transformer) has undoubtedly attracted global attention. As a state-of-the-art Large Language Model (LLM), ChatGPT demonstrates impressive natural language...
1. Introduction In the field of Artificial Intelligence (AI), multi-agent systems are gradually becoming a key technology for solving complex problems and enabling efficient collaboration. CrewAI, a powerful multi-agent collaboration tool, provides developers with a convenient way to build intelligent collaboration systems. In this article, we will introduce how to build an intelligent collaboration system based on Cr...
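Before the walkthrough, a minimal two-agent CrewAI sketch shows the shape of such a system: a researcher agent hands its findings to a writer agent. It assumes a recent version of the crewai package with an LLM API key configured; the roles, goals, and task wording are illustrative, not the article's exact setup.

```python
# Minimal two-agent CrewAI sketch: a researcher gathers facts, a writer
# summarizes them. Assumes `pip install crewai` and a configured LLM key;
# roles and task text are illustrative placeholders.
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Researcher",
    goal="Collect key facts about a given topic",
    backstory="A meticulous analyst who gathers reliable information.",
)
writer = Agent(
    role="Writer",
    goal="Turn research notes into a short, readable summary",
    backstory="A clear and concise technical writer.",
)

research_task = Task(
    description="Research the current state of multi-agent frameworks.",
    expected_output="A bullet list of key facts.",
    agent=researcher,
)
write_task = Task(
    description="Write a three-paragraph summary based on the research notes.",
    expected_output="A short summary article.",
    agent=writer,
)

# Tasks run in order; the writer sees the researcher's output as context.
crew = Crew(agents=[researcher, writer], tasks=[research_task, write_task])
print(crew.kickoff())
```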
After OpenAI's Deep Research tool burst onto the scene, all the major vendors launched their own Deep Research tools. What sets so-called Deep Research apart from ordinary search is that a simple RAG search generally performs only one round of retrieval, whereas Deep Research can act like a human, based on a...
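The multi-round loop is the essential difference, and a toy sketch makes it visible. Here `search()` and `ask_llm()` are hypothetical stubs standing in for a real search backend and a real LLM call; no vendor's actual Deep Research pipeline is implied.

```python
# Sketch of a Deep-Research-style loop: unlike single-round RAG, the model
# repeatedly decides what to search for next based on what it has learned.
# search() and ask_llm() are stubs so the sketch runs; a real system would
# call a web/vector search API and an LLM here.

def search(query: str) -> list[str]:
    return [f"(stub result for: {query})"]   # real: search API call

def ask_llm(prompt: str) -> str:
    return "DONE"                            # real: an LLM completion

def deep_research(question: str, max_rounds: int = 5) -> str:
    notes: list[str] = []
    query = question
    for _ in range(max_rounds):
        notes.extend(search(query))          # one round of retrieval
        decision = ask_llm(
            f"Question: {question}\nNotes: {notes}\n"
            "Reply DONE if the notes suffice; otherwise reply with the next search query."
        )
        if decision.strip() == "DONE":
            break
        query = decision                     # refine the query and loop again
    return ask_llm(f"Answer using these notes.\nQuestion: {question}\nNotes: {notes}")

print(deep_research("What is query expansion?"))
```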
Technology Core: Retrieval Interleaved Generation (RIG) What is RIG? RIG is an innovative generation method designed to address hallucination when large language models process statistical data. Traditional models may fabricate inaccurate numbers or facts out of thin air, while...
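A toy sketch of the interleaving idea: instead of guessing a number, the model emits a retrieval marker, and the pipeline substitutes a value looked up from a trusted statistical source. The `[QUERY: ...]` marker syntax and the `lookup_statistic` helper are illustrative assumptions, not the exact format of any particular RIG system.

```python
import re

# Toy Retrieval Interleaved Generation sketch: the model is prompted to
# emit a retrieval marker instead of guessing a statistic, and each marker
# is replaced with a value from a trusted source (e.g. a statistics DB).
def lookup_statistic(query: str) -> str:
    """Stand-in for a call to a statistical database."""
    fake_db = {"population of France 2023": "68.2 million"}
    return fake_db.get(query, "[no data found]")

def fill_statistics(model_output: str) -> str:
    # Replace every [QUERY: ...] marker with the retrieved value.
    return re.sub(r"\[QUERY: (.*?)\]",
                  lambda m: lookup_statistic(m.group(1)),
                  model_output)

draft = "France had a population of [QUERY: population of France 2023]."
print(fill_statistics(draft))
# -> France had a population of 68.2 million.
```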
If your RAG application is failing to deliver the desired results, perhaps it's time to revisit your chunking strategy. Better chunking means more accurate searches and, ultimately, higher quality responses. However, chunking is not a one-size-fits-all technique, and no single approach is absolutely optimal. You'll need to tailor your...
Introduction Text chunking plays a crucial role in applications of Large Language Models (LLMs), especially in Retrieval Augmented Generation (RAG) systems. The quality of text chunking directly determines the validity of the contextual information, which in turn affects the accuracy and completeness of the answers generated by the LLM...
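Since both teasers above center on chunking, here is the most common baseline they implicitly compare against: fixed-size character windows with overlap. The chunk size and overlap values are illustrative defaults, not a recommendation.

```python
# Baseline chunker: fixed-size character windows with overlap, so that
# sentences cut at a boundary still appear whole in the next chunk.
# chunk_size and overlap are illustrative defaults.
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

# A 1200-character document becomes three overlapping chunks:
print(len(chunk_text("x" * 1200)))  # -> 3
```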
Quick Reads The Memory Challenge for AI Agents and Zep's Innovation AI Agents face memory bottlenecks in complex tasks. Traditional Large Language Model (LLM)-based AI Agents are constrained by context windows, making it difficult to efficiently integrate long-term dialog history and dynamic data, which limits performance and makes them prone to hallucinations. Zep is ...
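A toy sketch of the problem Zep targets: instead of replaying the full dialog history into every prompt until the context window overflows, store turns externally and retrieve only the most relevant ones. The class below uses naive keyword overlap for scoring and is not Zep's actual API, which builds far richer memory structures.

```python
import re

# Toy external memory: store dialog turns outside the prompt and pull back
# only the k most relevant ones. Scoring here is naive word overlap; this
# is an illustration of the idea, not Zep's real memory layer.
class ToyMemory:
    def __init__(self) -> None:
        self.turns: list[str] = []

    def add(self, turn: str) -> None:
        self.turns.append(turn)

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        q = set(re.findall(r"\w+", query.lower()))
        def score(turn: str) -> int:
            return len(q & set(re.findall(r"\w+", turn.lower())))
        return sorted(self.turns, key=score, reverse=True)[:k]

mem = ToyMemory()
mem.add("User prefers vegetarian restaurants.")
mem.add("User lives in Berlin.")
mem.add("User asked about train schedules last week.")
print(mem.retrieve("find a vegetarian restaurant", k=1))
# -> ['User prefers vegetarian restaurants.']
```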
The emergence of the Ollama framework has certainly attracted a lot of attention in the field of Artificial Intelligence and Large Language Models (LLMs). This open source framework is focused on simplifying the deployment and operation of large language models locally, making it easy for more developers to experience LLMs. However, looking at the market, Ollama is not alone...
In the field of Artificial Intelligence, the choice of models is crucial, and OpenAI, as an industry leader, offers two main types of model families: Reasoning Models and GPT Models. The former is represented by the o-series of models, such as o1 and o3-mini, while the latter is represented by ...
I found an interesting paper, "Thoughts Are All Over the Place: On the Underthinking of o1-Like LLMs", which analyzes how o1-like reasoning models frequently switch thinking paths and fail to stay focused, a phenomenon referred to as "underthinking", and which also proposes a method to alleviate ...
Introduction In the vast starry sky of AI technology, deep learning models drive innovation across many fields with their excellent performance. However, the continuous expansion of model scale is a double-edged sword: while it improves performance, it brings a dramatic increase in compute demand and storage pressure. Especially in resource-constrained applications ...
Abstract Although Large Language Models (LLMs) perform well, they are prone to hallucinating and generating factually inaccurate information. This challenge has motivated efforts in attributed text generation, which prompts LLMs to generate content that contains supporting evidence. In this paper, we present a new approach called Think&Cite ...
Introduction The purpose of this document is to help readers quickly understand and grasp the core concepts and applications of Prompt Engineering through a selection of prompt examples. These examples are all drawn from an academic paper that systematically reviews prompt engineering techniques ("The Prompt Report: A Systematic Survey of Pr...
Titans: Learning to Memorize at Test Time Original text: https://arxiv.org/pdf/2501.00663v1 Titans architecture unofficial implementation: https://github.com/lucidrains/titans-pytorch I. Research Background and Motivation: Transformer of ...
For any application that requires a Retrieval Augmented Generation (RAG) system, converting massive PDF documents into machine-readable chunks of text (also known as "PDF chunking") is a big headache. There are both open-source programs and commercial products on the market, but honestly, there is no program that can really...
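For a sense of why this is hard, here is a minimal open-source baseline using the pypdf library: extract raw text page by page, then split on blank lines. Exactly this kind of naive extraction is what breaks down on multi-column layouts, tables, and scanned pages. The file path and thresholds are illustrative.

```python
# Naive PDF-chunking baseline with the open-source pypdf library:
# extract text per page, split on blank lines where present (pypdf often
# yields few, so whole pages can become single chunks), and drop short
# fragments such as headers and footers. Thresholds are illustrative.
from pypdf import PdfReader

def pdf_to_chunks(path: str, min_len: int = 40) -> list[str]:
    reader = PdfReader(path)
    chunks: list[str] = []
    for page in reader.pages:
        text = page.extract_text() or ""
        for block in text.split("\n\n"):
            block = block.strip()
            if len(block) >= min_len:
                chunks.append(block)
    return chunks

# Usage (path is illustrative):
# chunks = pdf_to_chunks("report.pdf")
```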
Jailbreaks of the official DeepSeek R1 are a great experimental environment for triggering basically all types of censorship mechanisms, and you can learn many defense techniques from them, so this article is a study of large-model censorship mechanisms that will take you through examples of large-model jailbreaks over the years. Large-model censorship mechanisms are usually used...
Original: https://cdn.openai.com/o3-mini-system-card.pdf 1 Introduction The OpenAI o model family is trained with large-scale reinforcement learning to reason using chains of thought. These advanced reasoning capabilities provide new ways to improve the security and robustness of our models. In particular, ...
Quick Reads A comprehensive and in-depth look at the past and present of the Scaling Law of Large Language Models (LLMs) and the future direction of AI research. With clear logic and rich examples, author Cameron R. Wolfe takes the reader from the basic concepts to the...
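For orientation, the scaling laws such surveys build on are usually power-law fits of the general shape below; this Chinchilla-style parameterization is one widely cited example, with constants that vary from study to study, and is shown purely as an illustration rather than as the article's exact formula.

```latex
% Chinchilla-style scaling law: expected loss as a function of parameter
% count N and training tokens D; E, A, B, alpha, beta are fitted constants.
\[
  L(N, D) \;=\; E \;+\; \frac{A}{N^{\alpha}} \;+\; \frac{B}{D^{\beta}}
\]
```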