Knowledge distillation is a machine learning technique that aims to transfer what a large pre-trained model (the "teacher model") has learned to a smaller "student model". Distillation techniques can help us develop lighter-weight generative models for intelligent conversation, content creation, and other areas. Recently Distil...
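As a rough illustration of the idea, here is a minimal sketch of the classic soft-label distillation loss (in the style of Hinton et al.): the student is trained against the teacher's softened output distribution as well as the ground-truth labels. The temperature and loss weighting below are illustrative assumptions, not values from any particular paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend a soft-label KL term (teacher -> student) with the usual
    hard-label cross-entropy. temperature and alpha are illustrative."""
    # Soften both distributions with the temperature before comparing them.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean")
    kd = kd * (temperature ** 2)          # standard rescaling of the softened term
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```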
Recently, many people working on large model training and inference have been discussing the relationship between a model's parameter count and its size. For example, the well-known LLaMA ("alpaca") family of large models comes in four versions with different parameter counts: LLaMA-7B, LLaMA-13B, LLaMA-33B and LLaMA-65B. Here "...
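As a back-of-the-envelope illustration of how parameter count maps to model size, here is a small sketch; it counts weights only and ignores optimizer state, activations, and file-format overhead.

```python
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "bf16": 2, "int8": 1, "int4": 0.5}

def rough_model_size_gb(num_params: float, dtype: str = "fp16") -> float:
    """Rough weights-only size; ignores optimizer state and overhead."""
    return num_params * BYTES_PER_PARAM[dtype] / 1024**3

# e.g. a 7B-parameter model: ~13 GB of weights in fp16, ~3.3 GB in int4.
print(round(rough_model_size_gb(7e9, "fp16"), 1))
print(round(rough_model_size_gb(7e9, "int4"), 1))
```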
Original article: https://arxiv.org/pdf/2412.15479 INTERPRETATION: This paper itself is not very innovative and has limited practical application. However, it reminds me of three highly informative articles I read a long, long time ago. Reading it in conjunction with those three earlier articles will hopefully bring you more inspiration. Recommended reading: the...
In the field of artificial intelligence and machine learning, especially when building applications such as RAG (Retrieval Augmented Generation) systems and semantic search, efficiently processing and retrieving massive amounts of unstructured data becomes crucial. Vector databases have emerged as a core technology for addressing this challenge. They are not only for storing high-dimensional ...
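To make the retrieval side concrete, here is a minimal brute-force cosine-similarity search over stored embeddings in NumPy; real vector databases add approximate indexes (HNSW, IVF, etc.), persistence, and filtering on top of this idea. The embedding dimension and data below are made up for illustration.

```python
import numpy as np

def top_k_cosine(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 3):
    """Return indices and scores of the k stored vectors most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                               # cosine similarity per document
    top = np.argsort(-scores)[:k]
    return top, scores[top]

# Toy example: 1,000 documents embedded into 384-dimensional vectors.
rng = np.random.default_rng(0)
docs = rng.normal(size=(1000, 384)).astype(np.float32)
query = rng.normal(size=384).astype(np.float32)
idx, scores = top_k_cosine(query, docs, k=3)
print(idx, scores)
```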
Xiaohongshu, a popular social e-commerce platform in China and across Asia, has long gone beyond being a simple shopping app to become a bellwether for young people's lifestyles and a key venue for brand marketing. For overseas brands and individuals hoping to enter the Chinese market or reach young consumers, mastering Xiaohongshu...
Unexpectedly, AI has upended the programming field. From v0 and bolt.new to agent-style programming tools such as Cursor and Windsurf, AI coding shows huge potential for turning ideas into MVPs. From traditional AI-assisted coding to today's direct end-to-end project generation, what ultimately lies behind it is...
Before we start, let's understand a few key terms. Workflow: simply put, it is "the complete set of steps to accomplish something". It's like an instruction manual that tells you what needs to be done, in what order, and by whom, in order to achieve your goal. Input: before the workflow begins, you need to...
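To make these key terms concrete, here is a tiny sketch of a workflow modeled as an ordered list of steps that pass an input along; the step names are made up for illustration.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Workflow:
    """An 'instruction manual': named steps executed in order on an input."""
    steps: List[Callable[[str], str]] = field(default_factory=list)

    def run(self, workflow_input: str) -> str:
        result = workflow_input
        for step in self.steps:          # each step's output feeds the next step
            result = step(result)
        return result

# Illustrative steps: clean the text, then produce a (stubbed) summary.
wf = Workflow(steps=[str.strip, lambda text: f"summary of: {text[:40]}"])
print(wf.run("  a long customer email ...  "))
```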
This article is part of the series "Understanding and Deploying Agentic AI": Agentic AI Series 1: A Comparison of Devin and Cursor Agents; Agentic AI Series 2: From Thinker to Doer - The Paradigm Revolution and Technical Architecture of Agentic AI; Agentic AI Series 3: Turning $20 into $50...
When building large language model (LLM) applications, memory systems are one of the key technologies for improving conversation context management, long-term information storage, and semantic understanding. An efficient memory system can help the model maintain consistency over long conversations, extract key information, and even retrieve historical conversations...
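As one simple way to picture such a memory system, here is a sketch of a short-term conversation buffer combined with a long-term store searched by embedding similarity; the `embed` function below is a stand-in for any real embedding model.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding (hash-seeded); swap in a real embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

class ConversationMemory:
    def __init__(self, short_term_limit: int = 10):
        self.short_term = []                 # recent turns, kept verbatim
        self.long_term = []                  # (text, embedding) pairs
        self.limit = short_term_limit

    def add(self, turn: str) -> None:
        self.short_term.append(turn)
        if len(self.short_term) > self.limit:
            old = self.short_term.pop(0)     # overflow moves to the long-term store
            self.long_term.append((old, embed(old)))

    def recall(self, query: str, k: int = 2):
        """Return the k stored turns most similar to the query."""
        if not self.long_term:
            return []
        q = embed(query)
        scored = sorted(self.long_term, key=lambda p: -float(p[1] @ q))
        return [text for text, _ in scored[:k]]
```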
OpenAI Function Calling V2 Features: The core goal of Function Calling V2 is to give OpenAI models the ability to interact with the outside world, which is reflected in two core capabilities: Fetching Data - a function calling implementation of RAG: essentially RAG (Retrieval Augmented...
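As a minimal sketch of the "fetching data" pattern, here is a tool definition in the style of the OpenAI Python SDK's chat completions interface; the model name and the `get_weather` function are illustrative assumptions, and the exact request shape may differ from what the article calls "V2".

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",            # hypothetical function for illustration
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",                  # illustrative model name
    messages=[{"role": "user", "content": "What's the weather in Beijing?"}],
    tools=tools,
)

# If the model decides to call the function, the arguments come back as JSON
# that our own code must execute; the result is sent back in a follow-up turn.
print(response.choices[0].message.tool_calls)
```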
Basic Concepts: In the field of information technology, retrieval refers to the process of efficiently locating and extracting relevant information from a large dataset (usually documents, web pages, images, audio, video, or other forms of information) in response to a user's query or need. Its core goal is to find information that is relevant to the use...
Agent AI: Surveying the Horizons of Multimodal Interaction Originally published at https://ar5iv.labs.arxiv.org/html/2401.03568 Abstract Multimodal AI systems are likely to be ubiquitous in our daily lives. Making these systems more interactive a...
GraphReader: a graph-based agent that enhances long-text processing for large language models. Graph expert: like a tutor who is good at making mind maps, it transforms lengthy text into a clear knowledge network, so that the AI can easily find each key point needed for an answer, as if exploring along a map, and effectively gr...
CAG (Cache Augmented Generation): 40 times faster than RAG (Retrieval Augmented Generation). CAG revolutionizes knowledge acquisition: instead of retrieving external data in real time, all knowledge is pre-loaded into the model's context. It's like condensing a huge library into an on-the-go toolkit that can be used whenever needed...
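A very rough way to picture the difference: RAG retrieves per query, while CAG builds the full knowledge context once and reuses it for every question (in practice the pre-filled context's KV cache is what gets reused). The `ask_llm` function and the documents below are hypothetical stand-ins for a real model call and a real knowledge base.

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call."""
    return f"<answer based on a {len(prompt)}-char prompt>"

knowledge_base = ["doc 1 ...", "doc 2 ...", "doc 3 ..."]   # illustrative documents

# CAG-style: assemble the full context once, then reuse it for every question.
preloaded_context = "\n\n".join(knowledge_base)

def answer_with_cag(question: str) -> str:
    # No per-query retrieval step; in a real system the KV cache of
    # `preloaded_context` would be computed once and reused across queries.
    return ask_llm(f"{preloaded_context}\n\nQuestion: {question}\nAnswer:")

print(answer_with_cag("What does doc 2 say?"))
```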
By Julia Wiesinger, Patrick Marlow and Vladimir Vuskovic. Originally published at https://www.kaggle.com/whitepaper-agents Table of Contents: Introduction What is an Agent? Models Tools Orchestration Layers Agents and Models Cognitive Architecture: How Agents Work Tools ...
Retrieval Augmented Generation (RAG) is becoming one of the most popular applications of Large Language Models (LLMs) and vector databases. RAG is the process of augmenting the input to an LLM with context retrieved from a vector database (e.g., Weaviate). The RAG application passes...
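To illustrate that flow end to end, here is a schematic sketch: embed the query, retrieve the most relevant chunks, stuff them into the prompt, and call the model. The `embed`, `vector_search`, and `ask_llm` functions are hypothetical stubs standing in for a real embedding model, a vector database client (such as Weaviate), and an LLM call.

```python
# Hypothetical stand-ins so the sketch runs; replace with a real embedding
# model, a vector database client (e.g. Weaviate), and a real LLM call.
def embed(text: str) -> list[float]:
    return [float(len(text))]

def vector_search(query_vector: list[float], top_k: int = 3) -> list[str]:
    corpus = ["chunk about topic A", "chunk about topic B", "chunk about topic C"]
    return corpus[:top_k]

def ask_llm(prompt: str) -> str:
    return f"<answer based on a {len(prompt)}-char prompt>"

def rag_answer(query: str) -> str:
    chunks = vector_search(embed(query), top_k=3)   # retrieve relevant context
    prompt = (
        "Answer the question using only the context below.\n\n"
        "Context:\n" + "\n---\n".join(chunks) +
        f"\n\nQuestion: {query}\nAnswer:"
    )
    return ask_llm(prompt)                          # augment, then generate

print(rag_answer("What is topic B about?"))
```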
A Multi-Agent System (MAS) is a computing system consisting of multiple interacting intelligent agents. Multi-agent systems can be used to solve problems that are difficult or impossible for a single agent or a single system to solve. The agents can be robots, humans, or soft...
First, given that LLMs already have strong capabilities, why do we still need RAG (Retrieval Augmented Generation)? Although LLMs have demonstrated impressive capabilities, the following challenges still warrant attention: the hallucination problem: LLMs generate text word by word using a statistically based probabilistic approach, a mechanism that inherently leads to the possibility of...
o3 is here, and I'd like to share some personal insights. Progress on the test-time scaling law has been much faster than we expected. But I'd also like to say that the path is actually a bit convoluted - it is OpenAI taking a roundabout route in its pursuit of AGI. Reinforcement Learning and Shortcut Thinking: For ...