AI Personal Learning
and practical guidance
讯飞绘镜
Total 82 articles

Tags: prompt jailbreak

Prompt越狱手册:突破AI限制的提示词设计指南-首席AI分享圈

Prompt Jailbreak Manual: A Guide to Designing Prompt Words That Break AI Limitations

General Introduction Prompt Jailbreak Manual is an open source project hosted on GitHub and maintained by the Acmesec team. It focuses on teaching users how to break the limits of the AI macromodel through well-designed prompt words (Prompt), helping technology enthusiasts and security researchers explore the potential capabilities of AI. The project is based on ...

只需输入一个表情符号就会让 DeepSeek-R1 疯掉...-首席AI分享圈

Just typing in an emoticon will drive DeepSeek-R1 crazy...

😊 😊‍‍‍‍‍ ‍‍‍‍‍‍‍‍‍‍‍‍‍ ‍‍‍‍‍‍‍ ‍‍‍‍‍‍‍‍‍‍ ‍‍‍‍‍‍‍‍‍‍‍‍‍ The above two emoticons look the same. If you copy the second emoticon to the DeepSeek-R1 website, you will find that the thinking process is extremely long, and this time it took 239 seconds, which is quite short... &nb...

Agentic Security:开源的LLM漏洞扫描工具,提供全面的模糊测试和攻击技术-首席AI分享圈

Agentic Security: open source LLM vulnerability scanning tool that provides comprehensive fuzz testing and attack techniques

General Introduction Agentic Security is an open source LLM (Large Language Model) vulnerability scanning tool designed to provide developers and security professionals with comprehensive fuzzing testing and attack techniques. The tool supports customized rulesets or agent-based attacks, is able to integrate LLM APIs for stress testing, and provides wide...

重磅:一键破解任意大模型系统提示词的万能指令-首席AI分享圈

Heavyweight: one key to crack any large model system prompt word universal command

When learning the engineering ideas of many AI applications, the cue words they write are often an important part of the application, and little brother I've learned countless cue word cracking commands the hard way, and often have to do one or more rounds of questioning based on the characteristics of different large models before I can find the cue word behind it. Now the problem gets easier, here's this...

主流大模型越狱指令合集-首席AI分享圈

Mainstream Big Model Jailbreak Command Collection

Large model jailbreak is an interesting technique, it is similar to SQL injection, you can inject the form of prompt (prompt instruction) to make the large model "overflow" prompt instruction. The purpose of big model jailbreaking is threefold: to learn the best prompt instructions designed by others, to cross the ethical constraints of big models, and to get more comprehensive answers ...