General Introduction
gpt-prompt-engineer is an open-source project on GitHub focused on prompt engineering for GPT models. You provide a task description and test cases, and the tool generates, tests, and ranks candidate prompts to find the best performer. The project uses large language models such as GPT-4 and GPT-3.5-Turbo, employs an ELO rating system to rank the effectiveness of the generated prompts, and offers optional logging and tracing of the prompt chain.
Prompt engineering is a bit like alchemy: there is no reliable way to predict what will work best, so it comes down to experimenting until you find the right prompt. gpt-prompt-engineer is a tool that takes this experimentation to a whole new level.
It helps you generate, optimize, and test prompts, with support for both GPT and Claude models, and it can optimize prompts for Claude 3 Haiku to achieve strong results. Notably, Claude 3 Haiku is cheaper than GPT-3.5 yet performs well and supports vision input.
Function List
Prompt generation: generates a variety of candidate prompts from the task description and test cases.
Prompt testing: tests and ranks prompt performance using the ELO rating system.
ELO rating system: dynamically adjusts each prompt's ELO rating by comparing how well its responses perform on the test cases.
Classification version: designed for classification tasks; reports a test score for each prompt.
Optional logging: supports logging with Weights & Biases and tracing with Portkey.
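The ELO mechanism above can be sketched in a few lines. The following is an illustrative implementation of the standard ELO update, not the project's actual code; the K-factor of 32 and the starting rating are conventional defaults.

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that prompt A beats prompt B under the ELO model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update_elo(rating_a: float, rating_b: float, a_won: bool,
               k: float = 32) -> tuple[float, float]:
    """Return new (rating_a, rating_b) after one head-to-head comparison."""
    exp_a = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (score_a - exp_a)
    new_b = rating_b + k * ((1 - score_a) - (1 - exp_a))
    return new_a, new_b

# Example: two prompts start at 1200; prompt A wins one comparison.
a, b = update_elo(1200, 1200, a_won=True)
print(a, b)  # 1216.0 1184.0 — ratings move symmetrically
```

Because the expected score depends on the current rating gap, an upset (a low-rated prompt beating a high-rated one) moves the ratings much more than an expected win does.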
Using Help
Open the project notebook in Google Colab or a local Jupyter notebook.
Add your OpenAI API key to the code.
Define task descriptions and test cases.
Select GPT-4 or GPT-3.5-Turbo as the model, as needed.
Call the generate_optimal_prompt() function to generate, test, and score prompts.
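Taken together, the steps above form a generate → test → rank loop. The sketch below simulates that loop end to end with stubbed-out model calls; in the real notebook, candidate generation and pairwise judging are performed via the OpenAI API, and the helper names `generate_candidate_prompts` and `judge` here are illustrative, not the project's API.

```python
import itertools

def generate_candidate_prompts(description: str, n: int) -> list[str]:
    # Stub: the real tool asks a GPT model to write n prompt variants.
    return [f"{description} (variant {i})" for i in range(n)]

def judge(prompt_a: str, prompt_b: str, test_case: str) -> bool:
    # Stub: the real tool runs both prompts on the test case and asks
    # a model which response is better. Here A wins ties.
    return len(prompt_a) <= len(prompt_b)

def rank_prompts(description: str, test_cases: list[str],
                 n: int = 4, k: float = 32) -> list[str]:
    """Generate n candidates, run all pairwise ELO matches, rank best-first."""
    prompts = generate_candidate_prompts(description, n)
    ratings = {p: 1200.0 for p in prompts}
    for a, b in itertools.combinations(prompts, 2):
        for case in test_cases:
            exp_a = 1 / (1 + 10 ** ((ratings[b] - ratings[a]) / 400))
            score_a = 1.0 if judge(a, b, case) else 0.0
            ratings[a] += k * (score_a - exp_a)
            ratings[b] += k * ((1 - score_a) - (1 - exp_a))
    return sorted(ratings, key=ratings.get, reverse=True)

best = rank_prompts("Summarize the text", ["case 1", "case 2"])[0]
print(best)  # the highest-rated variant comes first
```

Every candidate plays every other candidate on every test case, so the final ratings reflect relative quality across the whole test set rather than a single comparison.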