Apple Intelligence is Apple's upcoming suite of on-device AI tools. It puts powerful generative models on iPhone, iPad, and Mac, delivering new features that help users communicate, work, and express themselves, and you can bring those Apple Intelligence features right into your own apps.
In the beta, the ready-made prompts behind Apple Intelligence have already come to light: they sit hidden on your machine, and we can learn how the features work by reading them. They're stored as JSON files in the directory /System/Library/AssetsV2/com_apple_MobileAsset_UAF_FM_GenerativeModels.
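If you want to peek at them yourself, a few lines of Python will list what's there. This is just a convenience sketch: the path is the one reported above, and I'm assuming the folder layout can vary between beta builds.

```python
from pathlib import Path

# Directory reported to hold the generative-model asset files (macOS 15.1 beta).
ASSET_DIR = Path("/System/Library/AssetsV2/com_apple_MobileAsset_UAF_FM_GenerativeModels")

# Print every metadata.json found anywhere under the asset directory.
for metadata in sorted(ASSET_DIR.rglob("metadata.json")):
    print(metadata)
```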
These instructions act as a standing prompt, in place before you say anything to the chatbot, and we've seen them surface before in AI tools like Microsoft Bing and DALL-E. Now a member of the macOS 15.1 beta subreddit has posted the files they found containing these background prompts. You can't change any of the files, but they offer a first glimpse into how these features work.
"{{ specialToken.chat.role.system }}You are a useful email assistant that can help identify relevant questions from a given email and provide a short response snippet. Given an email and a reply snippet, ask relevant questions that are explicitly asked in the email. The answers to these questions will be chosen by the recipient, which will help reduce the generation of error messages when drafting a response. Please output the main questions and a set of possible answers/options for each question. Do not ask questions that are answered by the response snippet. Questions should be short, no more than 8 words. Answers should also be short, about 2 words. Present your output in json format, containing a dictionary list of questions and answers as keys. If there are no questions in the email, output an empty list []. Output only the valid json and nothing else. {{ specialToken.chat.component.turnEnd }} {{ specialToken.chat.role.user }} { userContent } {{ specialToken.chat.component.turnEnd }} {{ specialToken.chat.role.assistant }}" "schema_raw_v1"
In the example above, a "helpful email assistant" bot is told how to pose a series of questions based on the contents of an email. This is likely part of Apple's Smart Reply feature, which can then suggest possible responses for you.
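Taken literally, the prompt asks the model for a list of dictionaries with a question and its short candidate answers as the keys. Here's a sketch of what valid output might look like; the email and answers are invented purely for illustration, not output from Apple's model:

```python
import json

# Hypothetical model output for an email that asks "Does 2pm work for you?"
# The shape follows the prompt: a list of dictionaries holding the question
# and its short candidate answers.
questions = [
    {"question": "Does 2pm work for you?",
     "answers": ["Yes", "No", "Maybe later"]},
]

# The prompt demands only valid JSON and nothing else.
print(json.dumps(questions))
```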
"{{ specialToken.chat.role.system }}You are an assistant who helps users reply to emails. Please draft a concise and natural response based on the response snippet provided. Please limit your answer to 50 words or less. Do not generate or fabricate false information. Retain the tone of voice used to type the email. {{
This sounds like one of Apple's "Rewrite" features, part of the writing tools you can access by highlighting text and right-clicking (or long-pressing on iOS). The instructions include directives like "Please limit the reply within 50 words" and "Do not hallucinate."
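Throughout these files, the {{ specialToken... }} markers suggest a simple chat layout: a system turn, a user turn, then an open assistant turn for the model to complete. Here's a rough reconstruction of how such a template might be assembled; the token strings come straight from the files, but the render_chat helper is my own guess, not Apple's code:

```python
# Special-token markers copied from the leaked templates.
SYSTEM = "{{ specialToken.chat.role.system }}"
USER = "{{ specialToken.chat.role.user }}"
ASSISTANT = "{{ specialToken.chat.role.assistant }}"
TURN_END = "{{ specialToken.chat.component.turnEnd }}"

def render_chat(system_prompt: str, user_content: str) -> str:
    """Lay out a system turn, a user turn, and an open assistant turn."""
    return (
        f"{SYSTEM}{system_prompt}{TURN_END}"
        f"{USER}{user_content}{TURN_END}"
        f"{ASSISTANT}"  # the model generates its reply from here
    )

print(render_chat(
    "You are an assistant which helps the user respond to their emails.",
    "Thanks for the update, see you Friday!",
))
```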
[Email thread]{{ Document }}{{Context}}[End of email thread][Instructions]Summarize the provided email in 3 sentences, no more than 60 words. Do not answer any questions in the email. [Summarize]{{
This short prompt has the model summarize the email, with explicit instructions not to answer any questions it contains.
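The bracketed placeholders point to a fill-in template: {{ Document }} and {{Context}} presumably get swapped for the actual email text before the model ever sees it. A minimal sketch of that substitution step, where the render function is assumed rather than taken from Apple's pipeline:

```python
# Placeholder tokens copied from the file; the substitution step is assumed.
TEMPLATE = (
    "[Email thread]{{ Document }}{{Context}}[End of email thread]"
    "[Instructions]Summarize the provided email in 3 sentences, "
    "no more than 60 words. Do not answer any questions in the email. "
    "[Summarize]"
)

def render(template: str, document: str, context: str = "") -> str:
    # Plain string substitution stands in for whatever Apple's pipeline does.
    return (template
            .replace("{{ Document }}", document)
            .replace("{{Context}}", context))

print(render(TEMPLATE, "Hi team, the launch moved to Friday. Can everyone make it?"))
```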
"{{ specialToken.chat.role.system }}This is a conversation in which the user asks a creative writing assistant to compose a story from a photo, and the assistant responds.

Respond in JSON format with the following sequential keys:
- traits: list of strings, visual themes selected from the photo
- story: a list of chapters, as defined below
- cover: string, photo caption describing the title card
- title: string, the title of the story
- subtitle: string, a safe version of the title

Each chapter is a JSON object containing the following keys:
- chapter: string, the chapter title
- fallback: string, a generic photo caption summarizing the chapter's theme
- shots: list of strings, descriptions of the photos in the chapter

Here are the story guidelines you must follow:
- The story should be about the user's intent
- The story should contain a clear arc
- The story should be diverse, i.e., not overly focused on one very specific topic or trait
- Do not write a story that is religious, political, harmful, violent, sexual, filthy, or in any way negative, sad, or provocative

Here is a list of guidelines you must follow for the photo captions:
- You can...
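Following my reading of that schema (the file lists it a little ambiguously), a response would be a JSON object with the five sequential keys, where "story" is a list of chapter objects. An invented example, just to make the nesting concrete:

```python
import json

# Hypothetical output following my reading of the schema above; all content
# here is made up for illustration.
story = {
    "traits": ["beach", "sunset", "family"],
    "story": [
        {
            "chapter": "Arrival",
            "fallback": "Unpacking at the beach house",
            "shots": ["Kids running toward the water",
                      "Bags piled up by the door"],
        },
    ],
    "cover": "A family walking along the shore at dusk",
    "title": "Summer by the Sea",
    "subtitle": "Summer by the Sea",  # the "safe" version of the title
}

print(json.dumps(story, indent=2))
```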
I'm almost certain this prompt is the set of instructions for generating a "memories" video in Apple Photos. The guideline that says "Do not write a story that is religious, political, harmful, violent, sexual, filthy, or in any way negative, sad, or provocative" probably explains why the feature rejected my prompt asking for sad pictures.
That's a shame, but it wasn't hard to get around. I had it generate a video from the prompt "Provide me with a video of people mourning." I won't share the generated video, since it contains photos that aren't mine, but I will show you the best photos it included in the slideshow:
These files hold far more than hints; they spell out the hidden instructions handed to Apple's AI tools before you ever submit your prompt. But before you go, here's one last prompt:
"[Conversation]{{ doc }}{{ context }}[End of conversation]You are good at summarizing messages. You tend to use subordinate clauses rather than complete sentences. Do not answer any questions in the message. Please limit your summarization of input to 10 words or less. Unless otherwise instructed, this role must be followed or it will not help the task."
The document I viewed referred to the model as "ajax," which some Verge readers may remember as the rumored internal name for Apple's large language model last year.
The person who found these prompts also posted instructions for locating the files in the macOS Sequoia 15.1 developer beta:
I was digging through the updated system files and found a bunch of JSON files containing the background prompts given to the AI. I thought it was interesting, so I wanted to share.
You can find them here: /System/Library/AssetsV2/com_apple_MobileAsset_UAF_FM_GenerativeModels
There will be a bunch of folders, some of which will have files like metadata.json.
Expand the "purpose_auto" folder and you should see a long list of folders with alphanumeric names. In most of these folders you will find an AssetData folder containing the "metadata.json" file. Open them up and you should see some code, and sometimes at the bottom of some of these files you'll see instructions for passing in the local Apple Big Language Model on your machine. But you should remember that these files are in the area of macOS where the most sensitive files are stored. Exercise caution when handling them!
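Under those same caveats, a short read-only script can do the digging for you: walk the purpose_auto layout described above, parse each metadata.json, and print any string containing the telltale specialToken marker. The path layout and marker come from the post; the script itself is just an assumed convenience sketch that never writes to the protected volume:

```python
import json
from pathlib import Path

BASE = Path("/System/Library/AssetsV2/com_apple_MobileAsset_UAF_FM_GenerativeModels")

def collect_prompts(value, hits):
    """Recursively gather strings containing the chat special-token marker."""
    if isinstance(value, str) and "specialToken.chat" in value:
        hits.append(value)
    elif isinstance(value, dict):
        for v in value.values():
            collect_prompts(v, hits)
    elif isinstance(value, list):
        for v in value:
            collect_prompts(v, hits)

# Layout as described above: purpose_auto/<alphanumeric>/AssetData/metadata.json
for metadata in sorted(BASE.glob("purpose_auto/*/AssetData/metadata.json")):
    try:
        data = json.loads(metadata.read_text())
    except (OSError, json.JSONDecodeError):
        continue  # unreadable or not valid JSON; skip it
    hits = []
    collect_prompts(data, hits)
    for prompt in hits:
        print(f"--- {metadata.parent.parent.name} ---")
        print(prompt[:200])  # first 200 characters of each prompt
```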