AI Knowledge - Page 11


Proposition Retrieval

Original paper: "Dense X Retrieval: What Retrieval Granularity Should We Use?" Note: this method suits only a small number of models, such as the OpenAI series, the Claude series, Mixtral, Yi, and Qwen. Abstract: In open-domain natural language processing (NLP) tasks, ...


A new approach to Prompting - Analogical Prompting

Today I read an interesting paper, "Large Language Models as Analogical Reasoners", which proposes a new prompting approach called "Analogical Prompting". If you are familiar with prompt engineering, you have surely heard of Chain of Thought (CoT), a similar prompting method...


BoT: Boosting of Thoughts - Trial-and-Error Problem Solving with Large Language Models

Abstract: The reasoning performance of Large Language Models (LLMs) on a wide range of problems relies heavily on chain-of-thought prompting, which involves providing a few chain-of-thought demonstrations as exemplars in the prompt. Recent work, e.g., Tree of Thoughts, has pointed to the exploration and self-evaluation of reasoning steps in complex problem solving ...

Intents: How to Use Zep to Make a Large Model Understand Customer Intents

In Natural Language Processing (NLP), an intent is a user's expression of some purpose, want, or desire. By analyzing the messages a user sends and recognizing the intention behind them, we can reply with relevant content. For example, "order food", "check the weather", and "I want to go to Paris" are all valid intents. For chatbots to...
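The idea above can be sketched in a few lines. Zep's actual intent extraction is LLM-based; the toy matcher below only illustrates the concept, by picking the intent whose example utterances share the most tokens with the incoming message. The intent names and example phrases are illustrative, not from Zep.

```python
def match_intent(message, intents):
    """Pick the intent whose example utterances share the most tokens with the message."""
    msg_tokens = set(message.lower().split())
    best, best_overlap = None, 0
    for intent, examples in intents.items():
        # best token overlap between the message and any example utterance
        overlap = max(len(msg_tokens & set(e.lower().split())) for e in examples)
        if overlap > best_overlap:
            best, best_overlap = intent, overlap
    return best

# Hypothetical intent catalogue mirroring the examples in the text
intents = {
    "order_food": ["i want to order food", "get me a pizza"],
    "check_weather": ["what is the weather", "check the weather today"],
    "book_travel": ["i want to go to paris", "book a flight"],
}
```

Given this catalogue, "I want to go to Paris" matches `book_travel`; a real system would replace the token-overlap score with an embedding similarity or an LLM classification call.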


Tokenization

Hello everyone, today we are going to explore tokenization in large language models (LLMs). Unfortunately, tokenization is one of the more complex and tricky parts of today's top LLMs, but understanding some of its details is very necessary, because many people blame shortcomings of LLMs on the neural network or other seemingly mysterious...
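Most LLM tokenizers are built with byte-pair encoding (BPE). As a minimal sketch (not any production tokenizer), the loop below repeatedly merges the most frequent adjacent symbol pair in a tiny toy corpus; the corpus and merge count are made up for illustration:

```python
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs across the corpus and return the most frequent."""
    pairs = Counter()
    for word, freq in words.items():
        syms = word.split()
        for a, b in zip(syms, syms[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0][0] if pairs else None

def merge_pair(pair, words):
    """Replace every occurrence of the pair with the merged symbol.
    (Naive string replace -- fine for this toy corpus, not boundary-safe in general.)"""
    merged = " ".join(pair)
    return {word.replace(merged, "".join(pair)): freq for word, freq in words.items()}

# Words as space-separated symbols, with corpus frequencies
words = {"l o w": 5, "l o w e r": 2, "n e w e s t": 6, "w i d e s t": 3}
merges = []
for _ in range(3):  # learn 3 merges
    pair = most_frequent_pair(words)
    merges.append(pair)
    words = merge_pair(pair, words)
```

After three merges the corpus contains subword units such as "est" and "lo"; real tokenizers learn tens of thousands of such merges over bytes rather than characters.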


LangChain Plan-and-Execute Agents

Plan-and-execute agents provide a faster, more cost-effective, and better-performing approach to task execution than previous designs. In this article we walk you through building three planning agents in LangGraph. We introduce three agents structured in the "plan-execute" pattern on the LangGraph platform. These agents ...


CoD: Chain of Density

Original: https://arxiv.org/pdf/2309.04269 Quick read: "From Sparse to Dense: GPT-4 Summarization with Chain of Density Prompting" Filed under: Summarization, Commonly Used Prompts. Abstract: Determining the "right" amount of information to include in an automated text summary is a challenging task...

ChatGPT Custom Instructions FAQ

Overview: The Custom Instructions feature allows you to share any information you would like ChatGPT to take into account in its responses. Your instructions will be applied to new conversations. Availability: Web, iOS, and Android. How your data is used: You can always edit for future conversations or...


How to Write Structured Image-Generation Prompts

Structured prompts: a paradigm. Quality words >> generally fairly fixed: masterpiece, best quality, highly detailed, official art, Tyndall effect, fine CG quality, 8K, oversized wallpaper, etc. ... Generally start by typing "masterpiece, best quality," in order to...

BM25

Introduction: Why introduce it separately? In many scenarios, GPT-3 embedding vector representations may be less efficient and less effective than this traditional model, which deserves continued attention. BM25 is a vector space model, but it is not a word-vector, document-vector, image-vector, or knowledge-graph-vector model...
