
Promptimizer: an experimental library for automatically optimizing LLM prompts

General Introduction

Promptimizer is an experimental prompt optimization library designed to help users systematically improve the prompts used in their AI systems. By automating the optimization process, Promptimizer can improve prompt performance on specific tasks. Users simply provide an initial prompt, a dataset, and a custom evaluator (with optional human feedback), and Promptimizer runs an optimization loop that produces a refined prompt intended to outperform the original.


Function List

  • Prompt optimization: automatically optimizes prompts to improve the AI system's performance on specific tasks.
  • Dataset support: supports a variety of dataset formats, making prompt optimization easier.
  • Custom evaluators: users can define custom evaluators to quantify prompt performance.
  • Human feedback: supports human feedback to further improve prompt optimization.
  • Quick start guide: a detailed quick start guide helps users get up and running quickly.
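Conceptually, the loop described above works by proposing prompt variants, scoring each against the dataset with the evaluator, and keeping the best one. The sketch below is an illustrative outline only, not promptim's actual implementation; in the real library an LLM proposes the rewrites, which are faked here with fixed suffixes.

```python
def optimize_prompt(initial_prompt, dataset, evaluator, rounds=3):
    """Toy hill-climbing loop: propose prompt variants, keep the best scorer."""
    def avg_score(prompt):
        return sum(evaluator(prompt, ex) for ex in dataset) / len(dataset)

    best, best_score = initial_prompt, avg_score(initial_prompt)
    for _ in range(rounds):
        # In promptim an LLM rewrites the prompt; here we append fixed hints.
        for candidate in (best + " Be concise.", best + " Avoid hashtags."):
            score = avg_score(candidate)
            if score > best_score:
                best, best_score = candidate, score
    return best

# Toy evaluator: reward prompts that ask for conciseness.
dataset = [{"input": "topic"}]
evaluator = lambda prompt, ex: int("concise" in prompt)
print(optimize_prompt("Write a tweet.", dataset, evaluator))
# prints: Write a tweet. Be concise.
```

The real library replaces both the candidate generation and the scoring with LLM calls and LangSmith datasets, but the control flow is the same shape: evaluate, propose, keep improvements.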

 

Using Help

Installation

  1. First install the CLI tool:
    pip install -U promptim
    
  2. Ensure that valid LangSmith and Anthropic API keys are set in your environment:
    export LANGSMITH_API_KEY=your-langsmith-api-key
    export ANTHROPIC_API_KEY=your-anthropic-api-key
    
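Before running any commands, it can help to confirm both keys are actually visible to the process. A small standalone check using only the standard library (no promptim dependency):

```python
import os

def missing_keys(env, required=("LANGSMITH_API_KEY", "ANTHROPIC_API_KEY")):
    """Return the names of required API keys that are absent or empty."""
    return [k for k in required if not env.get(k)]

print(missing_keys(os.environ))  # [] when both keys are exported
```

If the list is non-empty, re-run the `export` commands above in the same shell session before continuing.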

Creating Tasks

  1. Create an optimization task:
    promptim create task ./my-tweet-task \
    --name my-tweet-task \
    --prompt langchain-ai/tweet-generator-example-with-nothing:starter \
    --dataset https://smith.langchain.com/public/6ed521df-c0d8-42b7-a0db-48dd73a0c680/d \
    --description "Write informative tweets on any subject."
    

    This command will generate a directory containing the task configuration file and task code.

Defining the Evaluator

  1. Open task.py in the generated task directory and find the evaluation logic:
    score = len(str(predicted.content)) < 180
    
  2. Modify the evaluation logic, for example to penalize output that contains hashtags:
    score = int("#" not in result)
    
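The two scoring rules above can be combined into a single check. The sketch below pulls the logic out into a standalone function so it can be tested without promptim installed; `predicted_text` is a stand-in for `predicted.content` in the real task.py.

```python
def score_tweet(predicted_text: str) -> int:
    """Return 1 if the tweet is under 180 characters and hashtag-free, else 0."""
    under_limit = len(predicted_text) < 180
    no_hashtags = "#" not in predicted_text
    return int(under_limit and no_hashtags)

print(score_tweet("Short tweet with no tags."))  # → 1
print(score_tweet("Bad #hashtag tweet."))        # → 0
```

Because the score is a plain 0/1 integer, the optimizer can average it across the dataset to compare candidate prompts.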

Training

  1. Run the training command to start optimizing the prompt:
    promptim train --task ./my-tweet-task/config.json
    

    Once training is complete, the terminal outputs the final optimized prompt.

Adding Manual Annotations

  1. Set up the annotation queue:
    promptim train --task ./my-tweet-task/config.json --annotation-queue my_queue
    
  2. Open the LangSmith UI and navigate to the designated queue to annotate runs manually.
Source: Chief AI Sharing Circle, "Promptimizer: an experimental library for automatically optimizing LLM prompts". May not be reproduced without permission.
