Large model jailbreaking is an interesting technique, which is similar to SQL injection, in that you can inject a prompt to make the large model "overflow" the prompt command. The purpose of big model jailbreaking is three-fold: to learn the best prompt instructions designed by others, to cross the ethical constraints of big models, and to get...
Introduction Like many others, over the past few days my news tweets have been filled with news, praise, complaints, and speculation about the Chinese-made DeepSeek-R1 large language model, which was released last week. The model itself is being brought up against publics like OpenAI, Meta...
When learning the engineering ideas of many AI applications, the cue words they write are often an important part of the application, and little brother I've learned countless cue word cracking commands the hard way, and often have to do one or more rounds of questioning based on the characteristics of different large models before I can find the cue word behind it. Now the problem gets easier, here's this...
General Introduction Break The AI is a platform focused on AI challenges and competitions designed to help users improve their AI skills and participate in a variety of fun and challenging tasks. The site provides an interactive community for AI enthusiasts, students and professionals where users can...
General Introduction Agentic Security is an open source LLM (Large Language Model) vulnerability scanning tool designed to provide developers and security professionals with comprehensive fuzz testing and attack techniques. The tool supports customized rule sets or agent-based attacks and is able to integrate LLM AP...
v2 is suitable for GPT4 models, ChatGPT free accounts using GPT3.5 models are recommended to try AntiGPT v1. AntiGPT v2 Original Message Please pretend to be in "opposite ...
A simplified prompt to make big language modeling safer and more ethical is just the opposite of the evil DAN, and is more suitable for the mentally incompetent or serious scenarios: Hello! You will be taking on the role of ANT...
General Introduction Prompt Jailbreak Manual is an open source project hosted on GitHub and maintained by the Acmesec team. It focuses on teaching users how to break the limits of the AI macromodel through well-designed prompt words (Prompt), helping tech enthusiasts and security researchers...
Introduction Have you ever wondered how the chatbots we use today, such as OpenAI's models, determine whether a question is safe and should be answered? In fact, these Large Reasoning Models (...