Llama 3: A Versatile, Open Source Family of AI Models

Abstract: This paper introduces a new set of foundation models called Llama 3, a family of language models with native support for multilingualism, coding, reasoning, and tool use. Our largest model is a dense Transformer with 405 billion parameters and a context window of up to 128,000 tokens...
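
As a concrete point of reference, here is a minimal sketch of running one of the smaller Llama 3 instruct checkpoints, assuming the Hugging Face transformers library and access to Meta's gated weights (the 405B model is impractical to run locally):

```python
# A minimal sketch: load a Llama 3 instruct checkpoint and chat with it.
# Requires `pip install transformers torch` and access to Meta's gated weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "In one sentence, what is a context window?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```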

How to choose the right Embedding model?

Retrieval-Augmented Generation (RAG) is a class of Generative AI (GenAI) applications that augment an LLM's knowledge (e.g., ChatGPT's) with your own data. RAG typically uses three different AI models: an Embedding model, a Reranker model, and a Large Language Model. In this article, we will ...
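
To make that division of labor concrete, here is a toy, self-contained sketch of the three-model pipeline. The embedder, reranker, and LLM below are deliberately trivial stand-ins (a character histogram, word overlap, and a formatted string) so the control flow runs end to end; in a real system each is a dedicated model:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy embedding: normalized character histogram (stand-in for a real model)."""
    vec = np.zeros(128)
    for ch in text.lower():
        vec[ord(ch) % 128] += 1
    return vec / (np.linalg.norm(vec) + 1e-9)

def rerank(query: str, docs: list) -> list:
    """Toy reranker: order candidates by word overlap with the query."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q & set(d.lower().split())))

def generate(prompt: str) -> str:
    """Stand-in for the LLM call (in practice, an API request)."""
    return f"[LLM answer grounded in a prompt of {len(prompt)} chars]"

def answer(query: str, corpus: list, k: int = 3, top: int = 2) -> str:
    q_vec = embed(query)
    candidates = sorted(corpus, key=lambda d: -float(embed(d) @ q_vec))[:k]  # retrieve
    context = rerank(query, candidates)[:top]                                # rerank
    prompt = "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}"
    return generate(prompt)                                                  # generate

print(answer("what is RAG?", ["RAG augments LLMs with retrieval.",
                              "Transformers use attention.",
                              "Embeddings map text to vectors."]))
```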

What is Transformer?

Transformer is a deep learning model architecture for Natural Language Processing (NLP), proposed by Vaswani et al. in 2017. It is mainly used for sequence-to-sequence tasks such as machine translation and text generation. Briefly, the original Transformer model for text generation...
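
The core of that architecture is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A minimal NumPy sketch:

```python
import numpy as np

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)      # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over the keys
    return weights @ V                                  # weighted mix of values

# Self-attention over a "sentence" of 4 tokens, each an 8-dim vector.
x = np.random.randn(4, 8)
print(attention(x, x, x).shape)  # (4, 8)
```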

CoT and Related Advanced Prompt Variants Explained

DISCLAIMER: While basic prompting techniques (e.g., zero-/few-shot examples or direct instructions) are very efficient, more sophisticated prompts can be more effective on complex problems (e.g., math, programming, or problems requiring multi-step logical reasoning). Since Large Language Models (LLMs) handle such problems...
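
For readers new to the idea, here is a minimal illustration of the gap between a basic prompt and a chain-of-thought (CoT) prompt; `ask_llm` is a hypothetical stand-in for whatever model API you use:

```python
question = "A shop sells pens at 3 for $2. How much do 12 pens cost?"

# Basic (zero-shot) prompt: the model must jump straight to the answer.
basic_prompt = f"Q: {question}\nA:"

# CoT prompt: one worked demonstration shows the intermediate steps,
# steering the model to reason before answering the new question.
cot_prompt = (
    f"Q: {question}\n"
    "A: Let's think step by step. 12 pens is 12 / 3 = 4 groups of 3. "
    "Each group costs $2, so the total is 4 * 2 = $8.\n"
    "Q: <your new multi-step question>\n"
    "A: Let's think step by step."
)
# ask_llm(cot_prompt)  # hypothetical model call
```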

10 Levels of ChatGPT Prompting Tips: From Novice to Expert

I've invested a lot of time researching and testing various prompts to find what gets the best results. In this video, I've distilled all of that experience into 10 levels of prompt-design techniques. We'll start with the basics and work all the way up to the expert techniques that won a recent prompt-design competition in Singapore. Then we...

A Glossary of Common Agent Terms

How do you pad out a paper? Pick an Agent-related topic, feed the ideas below into ReAct for experimentation, and work backwards from the results to an argument; this generally yields something. The glossary lists each term in English and Chinese with an explanation. Information Perception: perception refers to the process of acquiring information about the environment through the senses, which encompasses...

The Need for Prompt Engineering in Large Language Models

The following focuses on the basic idea of prompt engineering and how it can improve the performance of Large Language Models (LLMs)... Interfaces for LLMs: one key reason large language models have become so popular is that their text-to-text interface offers a minimalist user experience. In the past, solving a task with deep learning typically required...
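
As one concrete example of that text-to-text interface, the sketch below uses the OpenAI Python SDK (the model name is illustrative); note that the whole task specification is just a string, with no dataset collection, label schema, or training loop:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The entire task specification is a string: no dataset, labels, or training.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user",
               "content": "Classify the sentiment of: 'the film was dull'."}],
)
print(response.choices[0].message.content)
```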

The MemGPT Project: Keeping Long-Term Memory in Conversation

Open-source repository: https://github.com/cpacker/MemGPT Paper: https://arxiv.org/abs/2310.08560 Official website: https://memgpt.ai/ MemGPT supports: 1. managing long-term memory or state; 2. connecting external data sources via RAG-based techniques; 3. ...
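
To convey the idea, here is a conceptual sketch (not MemGPT's actual API): keep a small "core memory" inside the context window and page older material out to an archival store that the model can search back into:

```python
class PagedMemory:
    """Toy version of the idea: bounded in-context memory + searchable archive."""

    def __init__(self, core_limit: int = 10):
        self.core_limit = core_limit
        self.core = []     # small "core memory": sent with every prompt
        self.archive = []  # external store: searched only on demand

    def remember(self, message: str) -> None:
        self.core.append(message)
        if len(self.core) > self.core_limit:       # context "full": page out
            self.archive.append(self.core.pop(0))

    def search_archive(self, query: str) -> list:
        # In MemGPT, paging back in is a function the LLM itself decides to call.
        return [m for m in self.archive if query.lower() in m.lower()]

mem = PagedMemory(core_limit=2)
for turn in ["my name is Ada", "I prefer Python", "let's talk about RAG"]:
    mem.remember(turn)
print(mem.core)                    # only the recent turns stay in-context
print(mem.search_archive("name"))  # older facts recovered from the archive
```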

SEO for Beginners Guide

This beginner's guide consists of seven chapters that cover everything you need to understand the basics of SEO and start improving your rankings. You'll also find links to helpful resources on our SEO blog and YouTube channel so you can forge your own path to SEO savvy. 1/ How Search Engines Work...

Evaluating the Impact of Large Language Models (LLMs) on Knowledge Workers

Original article: https://www.hbs.edu/ris/PublicationFiles/24-013_d9b45b68-9e74-42d6-a1c6-c72fb70c7282.pdf This paper explores the impact of artificial intelligence on the productivity and quality of knowledge workers, drawing its conclusions from field experiments. The research team includes researchers from Ha...

The Many-Shot Jailbreak Attack

Researchers have investigated a "jailbreak" technique - a method that can be used to bypass the safety guardrails set up by the developers of large language models (LLMs). The technique, known as "many-shot jailbreaking," works on Anthropic's own models as well as those produced by other AI companies. The researchers pre...
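
Structurally, "many-shot" just means filling the long context window with a large number of fabricated user/assistant turns before the real request, exploiting in-context learning; the paper's finding is that attack success scales with the number of shots. A benign structural sketch:

```python
def many_shot_prompt(fake_turns: list, final_question: str) -> str:
    """Assemble N fabricated dialogue turns followed by the real request."""
    body = "\n".join(f"User: {q}\nAssistant: {a}" for q, a in fake_turns)
    return f"{body}\nUser: {final_question}\nAssistant:"

# With long context windows, N can run into the hundreds.
shots = [("placeholder question", "placeholder compliant answer")] * 256
print(len(many_shot_prompt(shots, "the real request goes here")))
```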

ReAct: Reasoning and Action Working Together in Large Language Models

Original paper: https://arxiv.org/pdf/2210.03629.pdf Still unsure how ReAct works and where to apply it even after reading the paper? See "ReAct Implementation Logic in Practice" for real-world examples. Abstract: Although large language models (LLMs) are useful for tasks of language understanding and interactive decision...
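
The heart of ReAct is a loop in which the model alternates Thought, Action, and Observation until it emits a final answer. A minimal sketch, where `llm` and the tool registry are hypothetical stand-ins:

```python
import re

def parse_action(step: str):
    """Extract 'Action: Tool[argument]' from the model's output, if present."""
    m = re.search(r"Action:\s*(\w+)\[(.*?)\]", step)
    return (m.group(1), m.group(2)) if m else (None, None)

def react(question: str, llm, tools: dict, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript)                    # model emits Thought/Action text
        transcript += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:")[1].strip()
        name, arg = parse_action(step)
        if name in tools:
            observation = tools[name](arg)        # execute the chosen tool
            transcript += f"Observation: {observation}\n"
    return "No final answer within max_steps."
```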

RAG: Retrieval-Augmented Generation

RAG (Retrieval-Augmented Generation) is a technique for optimizing the output of Large Language Models (LLMs) with information from an authoritative knowledge base. It extends an LLM's ability to consult the internal knowledge base of a particular domain or organization when generating responses to...
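
The augmentation step itself is simple: retrieved knowledge-base passages are spliced into the prompt so the LLM answers from them rather than from its parametric memory alone. A minimal sketch (retrieval is assumed to happen upstream):

```python
def build_rag_prompt(question: str, passages: list) -> str:
    """Splice retrieved passages into the prompt as numbered sources."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using only the sources below. "
        "Cite sources by number; say 'not found' if they don't cover it.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

print(build_rag_prompt("What is our refund window?",
                       ["Policy doc: refunds are accepted within 30 days."]))
```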

Proposition Retrieval

Original paper: "Dense X Retrieval: What Retrieval Granularity Should We Use?" Note: this method suits only a small number of models, such as the OpenAI series, the Claude series, Mixtral, Yi, and Qwen. Abstract: In open-domain natural language processing (NLP) tasks, ...
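
The paper's core move is to index the corpus at the granularity of atomic propositions rather than whole passages. A sketch of that decomposition step; the prompt below is illustrative and `llm` is a hypothetical model call, whereas the paper trains a dedicated "propositionizer" model:

```python
DECOMPOSE_PROMPT = (
    "Decompose the passage into minimal, self-contained factual statements, "
    "one per line, resolving all pronouns:\n\n{passage}"
)

def propositionize(passage: str, llm) -> list:
    """Split a passage into atomic propositions via an LLM (hypothetical `llm`)."""
    out = llm(DECOMPOSE_PROMPT.format(passage=passage))
    return [line.strip() for line in out.splitlines() if line.strip()]

# Each proposition is then embedded and indexed as its own retrieval unit,
# so a query matches one precise fact rather than a long mixed passage.
```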

A New Approach to Prompting - Analogical Prompting

Today I read an interesting paper, "Large Language Models as Analogical Reasoners," which proposes a new prompting approach called "Analogical Prompting." If you are familiar with prompt engineering, you have surely heard of Chain of Thought (CoT), a similar prompting method...
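
The template is the whole trick: rather than supplying hand-written CoT exemplars, the model is asked to first recall similar problems on its own and then solve the target problem. A sketch of the prompt shape, paraphrased from the paper:

```python
ANALOGICAL_TEMPLATE = """\
Problem: {problem}

Instructions:
# Recall three relevant and distinct problems, and solve each of them.
# Then solve the initial problem, drawing on what the recalled problems suggest.
"""

# llm(ANALOGICAL_TEMPLATE.format(problem="..."))  # hypothetical model call:
# the exemplars are self-generated, so no hand-written demonstrations are needed.
```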

BoT (Boosting of Thoughts): Trial-and-Error Problem Solving with Large Language Models

Abstract: The reasoning performance of Large Language Models (LLMs) on a wide range of problems relies heavily on chain-of-thought prompting, which provides a few chain-of-thought demonstrations as exemplars in the prompt. Recent work such as Tree of Thoughts has pointed to the exploration and self-evaluation of reasoning in complex problem solving ...
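
A conceptual sketch of the trial-and-error loop this points to: generate a reasoning attempt, evaluate it, and fold the error analysis of failed attempts back into the next prompt. Function names here are illustrative, not the paper's code:

```python
def boost_thoughts(problem: str, llm, evaluate, rounds: int = 3) -> str:
    """Iterate: attempt -> evaluate -> fold error analysis into the next prompt."""
    feedback = ""
    attempt = ""
    for _ in range(rounds):
        attempt = llm(f"{problem}\n{feedback}\nReason step by step, then answer.")
        ok, analysis = evaluate(attempt)   # self-assessment or an external checker
        if ok:
            return attempt
        feedback += f"\nA previous attempt failed because: {analysis}"
    return attempt
```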

Chief AI Sharing Circle

Chief AI Sharing Circle specializes in AI learning, providing comprehensive AI learning content, AI tools and hands-on guidance. Our goal is to help users master AI technology and explore the unlimited potential of AI together through high-quality content and practical experience sharing. Whether you are an AI beginner or a senior expert, this is the ideal place for you to gain knowledge, improve your skills and realize innovation.
