Dify workflows were released in v0.6.9 as reusable tools (for use in Agents or in other Workflows). This allows a workflow to be integrated with new agents and other workflows, eliminating duplication of effort. Two new workflow nodes and one improved node have been added:
Iteration: Ensures the input is an array. The iteration node processes each item in the array in turn until all items have been processed. For example, to produce a long article, simply enter a few headings: an article containing a paragraph for each heading will be generated, eliminating the need for complex prompt organization.
Parameter Extractor: Uses a Large Language Model (LLM) to extract structured parameters from natural language, simplifying the use of tools and HTTP requests in workflows.
Variable Aggregator: An improved variable assigner that supports more flexible variable selection. Node connectivity has also been improved, enhancing the user experience.
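The Iteration node is, in effect, a map over an array. A minimal Python sketch of the idea, where `generate_paragraph` is a hypothetical stand-in for the per-item LLM call (not Dify's implementation):

```python
# Sketch of the Iteration node's behavior: process each item of an array input
# in turn. generate_paragraph is a hypothetical stand-in for the per-item LLM call.

def generate_paragraph(heading: str) -> str:
    return f"[paragraph expanding on: {heading}]"

def iterate(items: list[str]) -> list[str]:
    # The Iteration node requires an array input and handles items one by one.
    return [generate_paragraph(item) for item in items]

headings = ["Introduction", "Method", "Results"]
article = "\n\n".join(iterate(headings))
print(article)
```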
Workflow in Dify is categorized into Chatflow and Workflow:
Chatflow: Conversational applications oriented towards dialog scenarios, including customer service, semantic search, and other applications that require multi-step logic to build responses.
Workflow: Oriented towards automation and batch scenarios, suitable for applications such as high-quality translation, data analysis, content generation, email automation, and more.
Chatflow entry
Workflow entry
I. Three-step translation workflow
1. Start node
Input variables defined within the Start node support four types: text, paragraph, drop-down options, and numbers. This is shown below:
In Chatflow, the Start node provides the system built-in variables sys.query and sys.files. sys.query holds the user's question in dialog-based applications, and sys.files holds files uploaded in the dialog, such as an image uploaded for interpretation, which must be used together with an image-comprehension model or a tool that accepts image input.
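For context, the Start node's input variables are what you supply when invoking a published workflow over HTTP. A sketch of building such a request body, assuming the shape of Dify's workflow-run API (`inputs`, `response_mode`, `user`) and the variable name `input_text` from this tutorial's workflow — verify the exact fields against your Dify version:

```python
import json

# Sketch of a request body for running a published workflow over HTTP.
# Dify typically exposes POST {api_base}/v1/workflows/run with an
# "Authorization: Bearer <api_key>" header; treat the field names below as
# assumptions to check against your Dify version.
payload = {
    "inputs": {"input_text": "Transformer is the basis of the Big Language Model."},
    "response_mode": "blocking",  # or "streaming"
    "user": "demo-user",          # an identifier for the end user
}
print(json.dumps(payload, ensure_ascii=False, indent=2))
```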
2. LLM (Recognizing Proper Nouns) node
SYSTEM provides high-level guidance for the dialog as follows:
Recognize technical terms entered by the user. Please show the correspondence of technical terms before and after translation in the format {XXX} -> {XXX}.
{{#1711067409646.input_text#}}
Transformer -> Transformer
Token -> Token
Zero Shot -> Zero Shot
Few Shot -> Few Shot
<Specialized Nouns>
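Because the node emits `{XXX} -> {XXX}` pairs one per line, downstream code can parse them easily. A minimal sketch with an illustrative helper (not part of Dify):

```python
import re

# Illustrative helper (not part of Dify): parse lines of the form
# "source -> target" produced by the term-recognition prompt.
def parse_term_pairs(text: str) -> dict[str, str]:
    pairs = {}
    for line in text.splitlines():
        m = re.match(r"\s*(.+?)\s*->\s*(.+?)\s*$", line)
        if m:
            pairs[m.group(1)] = m.group(2)
    return pairs

output = "Transformer -> Transformer\nToken -> Token\nZero Shot -> Zero Shot"
print(parse_term_pairs(output))
```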
Within the LLM node, the model's input prompts can be customized. If you select a Chat model, you can customize the SYSTEM / USER / ASSISTANT prompts. This is shown below:
| No. | Role | Description | Note |
|---|---|---|---|
| 1 | SYSTEM | Provides high-level guidance for the conversation | Prompt |
| 2 | USER | Provides commands, queries, or any text-based input to the model | User question |
| 3 | ASSISTANT | The model's response based on the user's message | Assistant answer |
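These three roles mirror the common chat-message format. A minimal sketch of how such a prompt is assembled; the exact wire format depends on the model provider:

```python
# The three chat roles assembled in order: SYSTEM steers the conversation,
# USER carries the question, ASSISTANT holds prior model replies.
messages = [
    {"role": "system", "content": "Recognize technical terms entered by the user."},
    {"role": "user", "content": "Transformer is the basis of the Big Language Model."},
    {"role": "assistant", "content": "Transformer -> Transformer"},
]
for m in messages:
    print(f'{m["role"].upper()}: {m["content"]}')
```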
3. LLM2 (direct translation) node
SYSTEM provides high-level guidance for the dialog as follows:
You are a professional translator who is proficient in English, especially in converting specialized academic papers into easy-to-understand popular science articles. Please assist me in translating the following Chinese passage into English in a style similar to that of a popular science article in English.
<Restrictions>
Please translate directly from the Chinese content, maintaining the original format without omitting any information.
{{#1711067409646.input_text#}}
<Direct Translation>
4. LLM3 (pointing out problems with direct translation)
SYSTEM provides high-level guidance for the dialog as follows:
Identify specific problems with the direct translation based on its results. Precise descriptions need to be provided, avoiding ambiguity, and without adding content or formatting not included in the original. Specifics include, but are not limited to:
- Does not conform to natural English expression; clearly point out where it is inappropriate.
- Sentence structure is awkward; point out the specific location. No need to provide suggestions for changes; they will be adjusted in the subsequent free translation.
- Expression is ambiguous and difficult to understand; if possible, try to explain it.
{{#1711067578643.text#}}
{{#1711067409646.input_text#}}
5. LLM4 (free translation -- second translation)
SYSTEM provides high-level guidance for the dialog as follows:
Based on the results of the initial direct translation and the problems subsequently identified, we conduct a re-translation aimed at conveying the meaning of the original text more accurately. We aim to ensure that the content is both faithful to the original meaning and better aligned with natural English expression, making it easier to understand, while keeping the original format unchanged.
<Direct Translation>
{{#1711067578643.text#}}
{{#1711067817657.text#}}
6. End node
Define the output variable name: second_translation
7. Publish Workflow as a tool
Publish the Workflow as a tool so that it can be used in an Agent, as shown below:
Click to access the Tools page, as shown below:
8. Jinja templates
While writing prompts, I found that Jinja templates are supported. See [3][4] for details.
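As a quick taste of what Jinja enables beyond plain variable substitution, a minimal sketch using the jinja2 Python package (rendering the template outside Dify; this is illustrative, not Dify's internal mechanism):

```python
from jinja2 import Template  # third-party: pip install jinja2

# A prompt template with a loop, which plain {{variable}} substitution cannot do.
template = Template(
    "Translate the following headings:\n"
    "{% for h in headings %}- {{ h }}\n{% endfor %}"
)
prompt = template.render(headings=["Introduction", "Method"])
print(prompt)
```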
II. Using Workflow in Workflow
III. Using Workflow in Agents
Essentially, a Workflow can be thought of as a tool, thereby extending the Agent's capabilities, similar to other tools such as Internet search or scientific computing.
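Conceptually, the Agent sees a published workflow as just another tool: a name, a description (which drives tool selection), and a callable. A minimal sketch of that idea; the names and the `run` stub are illustrative, not Dify's API:

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative model of a tool as the Agent sees it; not Dify's actual API.
@dataclass
class Tool:
    name: str
    description: str           # the Agent picks tools based on this text
    run: Callable[[str], str]

def run_translation_workflow(text: str) -> str:
    # Stand-in for invoking the published three-step translation workflow.
    return f"[translated] {text}"

translation_tool = Tool(
    name="three_step_translation",
    description="Translate Chinese text into English via direct translation, critique, and free translation.",
    run=run_translation_workflow,
)
print(translation_tool.run("Transformer 是大语言模型的基础"))
```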
IV. Individual testing of the three-step translation workflow
The input is shown below:
Transformer is the basis of the Big Language Model.
The output is shown below:
The Transformer serves as the cornerstone for large-scale language models.
The details page is shown below:
The tracking page is shown below:
1. Start
(1) Input
{
"input_text": "Transformer is the foundation of the Big Language Model.",
"sys.files": [],
"sys.user_id": "7d8864c3-c456-4588-9b0a-9368c94ca377"
}
(2) Output
{
"input_text": "Transformer is the foundation of the Big Language Model.",
"sys.files": [],
"sys.user_id": "7d8864c3-c456-4588-9b0a-9368c94ca377"
}
2. LLM
(1) Data processing
{
"model_mode": "chat",
"prompts": [
{
"role": "system",
"text": "Recognize technical terms entered by the user. Please show the correspondence of technical terms before and after translation in the format {XXX} -> {XXX}.\n\nTransformer is the basis of the Big Language Model.\n\nTransformer -> Transformer\nToken -> Token\nZero Shot -> Zero Shot\nFew Shot -> Few Shot\n",
"files": []
}
]
}
(2) Output
{
"text": "Transformer -> Transformer",
"usage": {
"prompt_tokens": 107,
"prompt_unit_price": "0.01",
"prompt_price_unit": "0.001",
"prompt_price": "0.0010700",
"completion_tokens": 3,
"completion_unit_price": "0.03",
"completion_price_unit": "0.001",
"completion_price": "0.0000900",
"total_tokens": 110,
"currency": "USD",
"latency": 1.0182260260044131
}
}
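The usage figures in these traces follow a simple rule: price = tokens × unit_price × price_unit. A quick check against the numbers above:

```python
# Check the trace's pricing arithmetic: price = tokens * unit_price * price_unit.
def price(tokens: int, unit_price: float, price_unit: float) -> float:
    return tokens * unit_price * price_unit

# Values from the LLM node's usage block: 107 prompt tokens at 0.01 per
# 1000 tokens, 3 completion tokens at 0.03 per 1000 tokens.
assert abs(price(107, 0.01, 0.001) - 0.0010700) < 1e-9
assert abs(price(3, 0.03, 0.001) - 0.0000900) < 1e-9
print("pricing consistent")
```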
3. LLM 2
(1) Data processing
{
"model_mode": "chat",
"prompts": [
{
"role": "system",
"text": "You are a professional translator who is proficient in English, especially in converting specialized academic papers into easy-to-understand popular science articles. Please assist me in translating the following Chinese passage into English in a style similar to that of a popular science article in English.\n\nPlease translate directly from the Chinese content, maintaining the original format without omitting any information.\n\nTransformer is the basis of the Big Language Model.\n",
"files": []
}
]
}
(2) Output
{
"text": "The Transformer is the foundation of large language models.",
"usage": {
"prompt_unit_price": "0.001",
"prompt_price_unit": "0.001",
"prompt_price": "0.0001760",
"completion_tokens": 10,
"completion_price_unit": "0.001",
"completion_price": "0.0000200",
"currency": "USD",
"latency": 0.516718350991141
}
}
4. LLM 3
(1) Data processing
{
"model_mode": "chat",
"prompts": [
{
"role": "system",
"text": "Identify specific problems with the direct translation based on its results. Precise descriptions need to be provided, avoiding ambiguity, and without adding content or formatting not included in the original text. Specifics include, but are not limited to:\n- Does not conform to English expression; clearly point out where it is inappropriate\n- Sentence structure is awkward; point out the specific location, no need to provide suggestions for changes, they will be adjusted in the subsequent free translation\n- Expression is ambiguous and difficult to understand; if possible, try to explain it\n\nThe Transformer is the foundation of large language models.\n\nTransformer is the foundation of the Big Language Model.\n",
"files": []
}
]
}
(2) Output
{
"text": "The sentence structure is awkward and does not conform to English expression.",
"usage": {
"prompt_tokens": 217,
"prompt_unit_price": "0.001",
"prompt_price_unit": "0.001",
"prompt_price": "0.0002170",
"completion_tokens": 22,
"completion_price_unit": "0.001",
"completion_price": "0.0000440",
"total_tokens": 239,
"latency": 0.8566757979860995
}
}
5. LLM 4
(1) Data processing
{
"model_mode": "chat",
"prompts": [
{
"role": "system",
"text": "Based on the results of the initial direct translation and the subsequent problems identified, we will conduct a re-translation aimed at more accurately conveying the meaning of the original text. In this process, we will work to ensure that the content is both faithful to the original meaning and more closely aligned with English expressions that are easier to understand. During this process, we will keep the original format unchanged.\n\nThe Transformer is the foundation of large language models.\n\nThe sentence structure is awkward and does not fit the English expression.\n",
"files": []
}
]
}
(2) Output
{
"text": "The Transformer serves as the cornerstone for large-scale language models.",
"usage": {
"prompt_tokens": 187,
"prompt_unit_price": "0.01",
"prompt_price_unit": "0.001",
"prompt_price": "0.0018700",
"completion_unit_price": "0.03",
"completion_price_unit": "0.001",
"completion_price": "0.0003600",
"currency": "USD",
"latency": 1.3619857440062333
}
}
6. End
(1) Input
{
"second_translation": "The Transformer serves as the cornerstone for large-scale language models."
}
(2) Output
{
"second_translation": "The Transformer serves as the cornerstone for large-scale language models."
}
V. Testing the three-step translation workflow in Agent
With the tool enabled:
With the tool disabled:
VI. Related issues
1. When is Workflow triggered in the Agent?
Like other tools, it is triggered based on the tool's description. The exact implementation can only be determined by reading the source code.
References
[1] Workflow: https://docs.dify.ai/v/zh-hans/guides/workflow
[2] Hands-on guide to connecting Dify to the WeChat ecosystem: https://docs.dify.ai/v/zh-hans/learn-more/use-cases/dify-on-wechat
[3] Jinja official documentation: https://jinja.palletsprojects.com/en/3.0.x/
[4] Jinja template: https://jinja.palletsprojects.com/en/3.1.x/templates/