Prompt Aids

33 posts in total

Agenta: a tool for evaluating prompt and model effectiveness, integrated into AI applications

Comprehensive Introduction: Agenta is an open-source AI model management tool that helps users easily experiment with prompts, test model performance, and monitor runs. It suits people who want to develop AI applications quickly, offering an easy-to-use platform. You can use it to try the effect of different prompts on...
5mos ago

Confident AI: an automated large language model evaluation framework for comparing prompt output quality across different large models

Comprehensive Introduction: DeepEval is an easy-to-use open-source LLM evaluation framework for evaluating and testing large language model systems. It is similar to Pytest but focuses on unit testing of LLM outputs. DeepEval incorporates the latest research results through G-Eval, hallucination...
6mos ago

ChainForge: an open-source visual programming environment for testing and evaluating the effectiveness of large language model prompts

Comprehensive Introduction: ChainForge is an open-source visual programming environment designed for testing and evaluating the effectiveness of prompts for large language models (LLMs). It provides a dataflow prompt-engineering environment in which users can quickly explore and analyze how different prompts affect the quality of LLM responses...
8mos ago

GPTsApp.io: GPTs App Store Database

Comprehensive Introduction: GPTsApp.io is a platform that aggregates a wide range of customized GPT (Generative Pre-trained Transformer) applications. It is available for Apple and Android devices and OpenAI GPTs, and also as a Chrome extension. Users can use this platform to...
12mos ago