Using FastGPT's Advanced Orchestration to Drive Your Own Business System with AI Dialogue

Environment Preparation

Detailed records of FastGPT installation, adding models, and testing are available in another article:
https://juejin.cn/post/7366981996620890163

Additional Notes

For a complete tutorial on advanced orchestration, please refer to the official documentation:
https://doc.fastai.site/docs/workflow/intro/

To connect AI to your own business system, the core logic is: have the AI extract the required parameters from the user's instruction, map them to the corresponding fields, and call your own HTTP interface; once the AI receives the data returned by the interface, it analyzes that data together with the user's question and produces an answer.
The two most important modules are [Question Classification] and [Text Content Extraction]. Make sure the AI model you use supports the function_call feature. A model without function_call support will not throw an error, but classification will always route to the last node you configured (or the extracted content will always be empty), so the business logic cannot work correctly.
The following models have been tested and confirmed to support this feature: public online models include "yi-vl-plus" from Lingyiwanwu (01.AI), "Baichuan2-Turbo-192k" from Baichuan, and "glm-4" from Zhipu; the local model "qwen:4b" from Qwen has also been tested and is supported.
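The overall pattern described above can be sketched as plain code. This is a minimal illustration only, not FastGPT's internals: `classify()` and `extract_fields()` stand in for the LLM-backed [Question Classification] and [Text Content Extraction] modules (here implemented with trivial keyword/regex heuristics), and the `http_get`/`http_post` callables stand in for the [HTTP Request] module.

```python
import re

def classify(text):
    """Stand-in for [Question Classification]: pick query / modify / other."""
    if any(w in text for w in ("show", "list", "query")):
        return "query"
    if any(w in text for w in ("change", "update", "modify")):
        return "modify"
    return "other"

def extract_fields(text):
    """Stand-in for [Text Content Extraction]: a real implementation would
    use the model's function_call output, not a regex."""
    m = re.search(r"id\s*(\d+).*?to\s+(\w+)", text)
    return {"id": m.group(1), "result": m.group(2)} if m else {}

def handle_instruction(text, http_get, http_post):
    category = classify(text)
    if category == "query":
        return http_get()                          # then AI answers over the data
    if category == "modify":
        return http_post(extract_fields(text))     # HTTP call with extracted params
    return "Sorry, I can only query or modify inspections."
```

The rest of this article builds exactly this pipeline, but visually, as orchestration modules.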

Business Preparation

First, prepare your own business system. I use an inspection system as an example; it exposes two interfaces. The first returns the full inspection list

The interface result looks roughly like this

The second interface modifies an inspection record
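For readers without a backend handy, here is a tiny in-memory stand-in for those two interfaces, so the rest of the flow can be followed concretely. The field names (`id`, `location`, `result`) and response shape are illustrative assumptions, not the actual system's schema:

```python
# In-memory stand-in for the inspection system's two interfaces.
INSPECTIONS = [
    {"id": 1, "location": "Building A", "result": "pending"},
    {"id": 2, "location": "Building B", "result": "passed"},
]

def list_inspections():
    """Interface 1: return all inspection records (the query endpoint)."""
    return {"code": 200, "data": INSPECTIONS}

def update_inspection(inspection_id, result):
    """Interface 2: modify one record by primary key (the modify endpoint)."""
    for row in INSPECTIONS:
        if row["id"] == inspection_id:
            row["result"] = result
            return {"code": 200, "msg": "ok"}
    return {"code": 404, "msg": "not found"}
```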

My requirement: when I give the AI a query instruction, it should return results matching various filter conditions, and it should also be able to modify a specified record.

Advanced Orchestration

Create a new application, then click [Advanced Orchestration] on the left to enter the advanced orchestration design interface

You can see that a flow already exists by default. This is a standard conversation flow: the user inputs an instruction at the [Flow Start] module, passes the instruction to the [AI Conversation] module, and the [AI Conversation] module outputs an answer, completing a full conversation flow.

Our system has two interfaces: one query interface and one modification interface. First we need AI to know which interface to call, that is, we need AI to determine whether the user's instruction is for "query" or "modification". The [Question Classification] module solves exactly this problem.
First delete the original [AI Conversation] module, click the [+] button in the top left corner, find [Question Classification] and drag it to the panel, then connect the [Flow Start] module and the [Question Classification] module with a connection line. The interface after completion looks like this:

Configure the [Question Classification] module:
AI Model: Select the model you want to use;
Background Knowledge: Write a description to help AI better understand and analyze which category the input belongs to;
User Question: Select variable reference → Flow Start → User Question;
Categories: Fill in your required category descriptions; I filled in 3 categories: Query, Modify, Other;

You can test the configuration now: the three categories generate three output nodes. Connect each output node to a [Specified Reply] module so you can verify that classification works correctly, then click the "Debug" button in the top right corner of the panel to run a conversation test, as shown below
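This is also where the function_call requirement mentioned earlier comes in: a classification module works by asking the model to return a structured choice through a function/tool schema. The sketch below builds such a request in the OpenAI-style function-calling format; the exact payload FastGPT sends is an implementation detail, so treat the field names here as assumptions:

```python
def build_classify_request(question, categories):
    """Build an OpenAI-style chat request that forces the model to pick
    one category via function calling (illustrative, not FastGPT's exact payload)."""
    return {
        "model": "glm-4",  # any function_call-capable model from the list above
        "messages": [{"role": "user", "content": question}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "classify",
                "description": "Pick the category that best matches the question.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "category": {"type": "string", "enum": categories},
                    },
                    "required": ["category"],
                },
            },
        }],
    }
```

A model without function_call support simply ignores the `tools` field, which is why misrouting happens silently instead of raising an error.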

Query Function

Let's build the query function first. Once the input is classified as "query", we know which interface to call. Delete the [Specified Reply] module connected to the "query" output, add an [HTTP Request] module, and connect the "query" output to it. Then fill in the request details: the request method (GET/POST), the request address (your query interface's URL), and whatever request parameters and headers your interface needs. My query interface takes no parameters, so the [HTTP Request] node is already fully configured.

The output of the [HTTP Request] module is the return result from your interface. We now need to use this return result as AI's knowledge base to answer the user's question. Add a [Text Processing] module to the panel, as shown below:

The [Text Processing] module needs to get the interface return information from the previous step, as well as the user's original question. Click "Add" in the input section to add two variables; you can name them anything, here I added "response" and "q". Configure the variable values as shown below, then enter the following content in the text box (based on the official template):

Please use the data inside the <data></data> tag below as your knowledge. Output the answer directly; do not mention that you obtained the knowledge from <data></data>.

<data>
{{response}}
</data>

Question: """{{q}}"""
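Under the hood, the [Text Processing] module simply substitutes each `{{name}}` placeholder with the referenced variable's value. A minimal reimplementation for illustration (the template text here is a simplified version, and the sample data is invented):

```python
def render_template(template, variables):
    """Replace every {{name}} placeholder with its variable's value."""
    for name, value in variables.items():
        template = template.replace("{{%s}}" % name, str(value))
    return template

prompt = render_template(
    'Use the data in the <data> tag as your knowledge.\n'
    '<data>{{response}}</data>\n'
    'Question: """{{q}}"""',
    {"response": '[{"id": 1, "result": "passed"}]',
     "q": "Which inspections passed?"},
)
```

The resulting `prompt` string is what gets handed to the next [AI Conversation] module as the user question.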

After the [Text Processing] module is configured, the next step is to let AI output the final answer. Add an [AI Conversation] module to the panel, modify the relevant configuration. Note that here you need to select the text variable output by the previous [Text Processing] module for the user question, as shown below:

At this point, a complete AI query function is done. Click the debug button in the top right corner to test it:

Modify Function

The [Question Classification] module splits the flow into three branches: query, modify, and other. The query branch starts with an HTTP call, and since no parameters are needed, it can be called directly. For modification, however, I first need the primary key ID of the inspection task to be modified and the new inspection result, so these two parameters must be extracted before the HTTP call. The [Text Content Extraction] module handles exactly this. Create a new [Text Content Extraction] module and connect it to the "modify" output node of [Question Classification] (remember to delete the [Specified Reply] module we used for testing earlier):

Configure the module: the main step is to add the two fields you need and add a description for each

After extracting the two required fields, we can make the HTTP call. Add an [HTTP Request] module, configure the input parameters, request method, request address, and request parameters. Note the syntax for using variables in request parameters:
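Assuming the two extracted fields are named `id` and `result` (my names, not mandated by FastGPT), a request body that references them would look roughly like this, using the same `{{name}}` placeholder syntax as elsewhere in the workflow:

```json
{
  "id": "{{id}}",
  "result": "{{result}}"
}
```

Each placeholder is substituted with the field value extracted by the [Text Content Extraction] module before the request is sent.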

The reply for a modify operation only needs to indicate whether the modification succeeded or failed, so we can return this directly without using AI to generate a reply. Add a [Specified Reply] module, select variable reference for the reply content, and select the original response from the HTTP call:

The modify function is now also completed. Let's test it by combining query and modify:

At this point, a complete AI-powered inspection query-and-modification assistant is done. FastGPT's advanced orchestration is very powerful: you can combine modules into any flow you want. Use your imagination to extend it; for example, you can add a [Validator] module to perform field validation before executing a modification task:

Supplementary Notes

Generally, your own service requires a token to be passed in the request header. This token can be passed as a global variable to the FastGPT API interface, and then used in the [HTTP Request] module:
FastGPT API documentation: https://doc.fastai.site/docs/development/openapi/chat/
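Concretely, your backend would pass the token in the `variables` field when calling FastGPT's chat endpoint; inside the [HTTP Request] module you can then reference it as `{{token}}` in a request header. The sketch below only builds the request payload (sending it is left out); the endpoint and top-level fields follow the OpenAPI docs linked above, while the host, `chatId`, and the variable name `token` are assumptions:

```python
def build_chat_request(api_key, question, business_token):
    """Build a FastGPT chat request carrying a business token as a global variable."""
    url = "https://your-fastgpt-host/api/v1/chat/completions"  # your deployment
    headers = {
        "Authorization": "Bearer " + api_key,
        "Content-Type": "application/json",
    }
    body = {
        "chatId": "abc123",                       # any session id
        "stream": False,
        "variables": {"token": business_token},   # global variable -> {{token}}
        "messages": [{"role": "user", "content": question}],
    }
    return url, headers, body
```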


This is a discussion topic separated from the original topic at https://juejin.cn/post/7369397062863224883