Prompt Jailbreak

82 articles in total
Agentic Security: an open-source LLM vulnerability scanner with comprehensive fuzzing and attack techniques

General Introduction: Agentic Security is an open-source LLM (Large Language Model) vulnerability scanning tool designed to give developers and security professionals a comprehensive set of fuzzing and attack techniques. The tool supports custom rule sets as well as agent-based attacks, and can integrate LLM AP...
6 months ago
1.8K
Breaking: A Universal Prompt That Cracks Any Large Model's System Prompt in One Shot

When studying the engineering behind many AI applications, the prompts they use are often a core part of the product. I have picked up countless prompt-extraction commands the hard way, and usually had to run one or more rounds of probing, tailored to the quirks of each large model, before uncovering the prompt behind it. Now the problem gets much easier; here's this...
6 months ago
2.3K
A Collection of Jailbreak Prompts for Mainstream Large Models

Large model jailbreaking is an interesting technique, similar in spirit to SQL injection: you inject a crafted prompt that makes the model "leak" its prompt instructions. The purpose of jailbreaking is threefold: to learn the best prompts designed by others, to bypass the model's ethical constraints, and to get...
8 months ago
2.9K