V4.11.0 features (#5270)

* feat: workflow catch error (#5220)

* feat: error catch

* feat: workflow catch error

* perf: add catch error to node

* feat: system tool error catch

* catch error

* fix: ts

* update doc

* perf: training queue code (#5232)

* doc

* perf: training queue code

* Feat: improve error messages and retry logic (#5192)

* feat: batch retry of failed data & i18n for error messages

  - Add a "Retry all" button to batch-retry all failed training data
  - Error messages support i18n; common errors are auto-mapped to i18n keys
  - Related docs and i18n resources updated accordingly

* feat: enhance error message and retry mechanism

* feat: enhance error message and retry mechanism

* feat: add retry_failed i18n key

* feat: enhance error message and retry mechanism

* feat: enhance error message and retry mechanism

* feat: enhance error message and retry mechanism : 5

* feat: enhance error message and retry mechanism : 6

* feat: enhance error message and retry mechanism : 7

* feat: enhance error message and retry mechanism : 8

* perf: catch chat error

* perf: copy hook (#5246)

* perf: copy hook

* doc

* doc

* add app evaluation (#5083)

* add app evaluation

* fix

* usage

* variables

* editing condition

* var ui

* isplus filter

* migrate code

* remove utils

* name

* update type

* build

* fix

* fix

* fix

* delete comment

* fix

* perf: eval code

* eval code

* eval code

* feat: ttfb time in model log

* Refactor chat page (#5253)

* feat: update side bar layout; add login and logout logic at chat page

* refactor: encapsulate login logic and reuse it in `LoginModal` and `Login` page

* chore: improve some logics and comments

* chore: improve some logics

* chore: remove redundant side effect; add translations

---------

Co-authored-by: Archer <545436317@qq.com>

* perf: chat page code

* doc

* perf: provider redirect

* chore: ui improvement (#5266)

* Fix: SSE

* Fix: SSE

* eval pagination (#5264)

* eval scroll pagination

* change eval list to manual pagination

* number

* fix build

* fix

* version doc (#5267)

* version doc

* version doc

* doc

* feat: eval model select

* config eval model

* perf: eval detail modal ui

* doc

* doc

* fix: chat store reload

* doc

---------

Co-authored-by: colnii <1286949794@qq.com>
Co-authored-by: heheer <heheer@sealos.io>
Co-authored-by: 酒川户 <76519998+chuanhu9@users.noreply.github.com>
This commit is contained in:
Archer 2025-07-22 09:42:50 +08:00 committed by GitHub
parent de208d6c3f
commit 13b7e0a192
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
212 changed files with 5840 additions and 3400 deletions

View File

@ -0,0 +1,28 @@
---
description:
globs:
alwaysApply: false
---
This is a design document for catching workflow node errors.
# Background
Today, if any node in a running workflow errors, the whole workflow aborts and cannot continue; the only feedback is a red English toast, which is unfriendly. Users who need tight control over their workflows are willing to build error fallbacks through orchestration, but the current orchestration for that is cumbersome.
This works like the try/catch mechanism in code: the user receives the caught error instead of the workflow throwing and terminating immediately.
# Expected behavior
1. Some nodes can have an error-catch option, i.e. nodes whose `catchError` is not undefined. `catchError=true` means error catching is enabled; `catchError=false` means it is disabled.
2. Nodes that support error catching show an "Error catch" switch on the right side of the output panel.
3. A node output carries an `errorField` flag, marking outputs that only exist when error catching is enabled.
4. When a node with error catching enabled fails at runtime, it does not block downstream nodes; it outputs the error message and execution continues.
5. A node with error catching enabled gains an extra "error output" branch connection; on error, execution follows that error branch.
# Implementation plan
1. Add an optional boolean `catchError` to FlowNodeCommonType. Nodes that need error catching set it to true/false, which both marks support for error catching and sets the default state.
2. Add an `error` value to FlowNodeOutputTypeEnum, marking outputs shown only on error.
3. The IOTitle component accepts a `catchError` field; when true, it renders the "Error catch" switch on the right.
4. All existing RenderOutput components must be updated so that the flowOutputList passed in excludes hidden and error type outputs.
5. Create a new `CatchError` component under `FastGPT/projects/app/src/pageComponents/app/detail/WorkflowComponents/Flow/nodes/render/RenderOutput`, dedicated to rendering error-type outputs, with its own SourceHandler.
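A simplified TypeScript sketch of the types this plan describes (illustrative names only, not the full FastGPT definitions):

```typescript
// Sketch of the catchError design above; names follow the doc, fields are trimmed.
type FlowNodeOutputType = 'hidden' | 'source' | 'static' | 'dynamic' | 'error';

interface FlowNodeOutputItem {
  key: string;
  type: FlowNodeOutputType; // 'error' outputs exist only when error catching is on
  label: string;
}

interface FlowNodeCommon {
  catchError?: boolean; // undefined = node does not support error catching
  outputs: FlowNodeOutputItem[];
}

// Per step 4 of the plan: the normal output panel excludes hidden/error outputs.
const visibleOutputs = (node: FlowNodeCommon) =>
  node.outputs.filter((o) => o.type !== 'hidden' && o.type !== 'error');
```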

View File

@ -47,40 +47,41 @@ https://github.com/labring/FastGPT/assets/15308462/7d3a38df-eb0e-4388-9250-2409b
## 💡 RoadMap
`1` Application orchestration
- [x] Chat workflows, plugin workflows
- [x] Tool calls
- [x] Code sandbox
- [x] Loops
- [x] User select
- [x] Form input
- [x] Chat workflows and plugin workflows, including basic RPA nodes.
- [x] Agent calls
- [x] User interaction nodes
- [x] Bidirectional MCP
- [ ] Context management
- [ ] AI-generated workflows
`2` Knowledge base
`2` Application debugging
- [x] Single-point knowledge base search testing
- [x] Citation feedback in chat, with editing and deletion
- [x] Full call-chain logs
- [ ] Application evaluation
- [ ] Advanced orchestration debug mode
- [ ] Application node logs
`3` Knowledge base
- [x] Multi-base reuse and mixing
- [x] Chunk record editing and deletion
- [x] Manual input, direct segmentation, QA-split import
- [x] txt, md, html, pdf, docx, pptx, csv, xlsx support (PRs welcome for more file loaders), URL reading, CSV batch import
- [x] Hybrid retrieval & re-ranking
- [x] API knowledge base
- [ ] Custom file-reading service
- [ ] Custom chunking service
`3` Application debugging
- [x] Single-point knowledge base search testing
- [x] Citation feedback in chat, with editing and deletion
- [x] Full context display
- [x] Full intermediate module value display
- [ ] Advanced orchestration debug mode
- [ ] Hot-swappable RAG modules
`4` OpenAPI
- [x] completions endpoint (chat mode aligned with the GPT API)
- [x] Knowledge base CRUD
- [x] Chat CRUD
- [ ] Complete API documentation
`5` Operations
- [x] Login-free share window
- [x] One-click iframe embedding
- [x] Embeddable chat widget with custom icon, default-open, dragging, etc.
- [x] Unified chat log review with data annotation
- [ ] Application operations logs
`6` Other
- [x] Visual model configuration.

View File

@ -0,0 +1,56 @@
---
title: 'V4.11.0 (in progress)'
description: 'FastGPT V4.11.0 release notes'
icon: 'upgrade'
draft: false
toc: true
weight: 783
---
<!-- ## Upgrade notes
### 1. Update environment variables
FastGPT commercial edition users can add the evaluation-related environment variables, then click save once in the admin panel after updating.
```
EVAL_CONCURRENCY=3 # Evaluation concurrency per node
EVAL_LINE_LIMIT=1000 # Maximum number of lines in an evaluation file
```
### 2. Update images:
- Update the FastGPT image tag: v4.11.0
- Update the FastGPT commercial edition image tag: v4.11.0
- Update the fastgpt-plugin image tag: v0.1.4
- mcp_server: no update required
- Sandbox: no update required
- AIProxy: no update required -->
## Project changes
1. Removed all limits on **open-source features**, including the caps on the number of apps and knowledge bases.
2. Adjusted the RoadMap: added plans such as `context management`, `AI-generated workflows`, and `advanced orchestration debug mode`.
## 🚀 New features
1. Commercial edition: **application evaluation (beta)**, which scores apps against supervised reference answers.
2. Some workflow nodes support an error-catch branch.
3. Standalone tab-page UX for the chat page.
4. Support for Signoz traces and logs tracing.
5. Added Gemini 2.5, Grok 4, and Kimi model configurations.
6. Model call logs now record time to first byte and request IP.
## ⚙️ Improvements
1. Optimized code to avoid memory buildup caused by recursion.
2. Knowledge base training: support retrying all failed data in the current collection.
3. Workflow valueTypeFormat to avoid inconsistent data types.
## 🐛 Fixes
1. Question classification and content extraction nodes: the default model failed frontend validation, preventing workflows from running, saving, and publishing.
## 🔨 Tool updates
1. Markdown text to Docx and Xlsx file conversion.

View File

@ -20,19 +20,18 @@ FastGPT Commercial is an enhanced edition based on the open-source FastGPT, with some added
| Document knowledge base | ✅ | ✅ | ✅ |
| External use | ✅ | ✅ | ✅ |
| API knowledge base | ✅ | ✅ | ✅ |
| Maximum number of apps | 500 | Unlimited | Determined by paid plan |
| Maximum number of knowledge bases (content per base unlimited) | 30 | Unlimited | Determined by paid plan |
| Custom copyright information | ❌ | ✅ | In design |
| Multi-tenancy & payments | ❌ | ✅ | ✅ |
| Team workspace & permissions | ❌ | ✅ | ✅ |
| App publishing security settings | ❌ | ✅ | ✅ |
| Content moderation | ❌ | ✅ | ✅ |
| App evaluation | ❌ | ✅ | ✅ |
| Website sync | ❌ | ✅ | ✅ |
| Enhanced training mode | ❌ | ✅ | ✅ |
| Image knowledge base | ❌ | ✅ | ✅ |
| Knowledge base index enhancement | ❌ | ✅ | ✅ |
| Quick third-party integrations (Feishu, WeChat Official Accounts) | ❌ | ✅ | ✅ |
| Admin console | ❌ | ✅ | Not needed |
| SSO login (customizable, or built-in GitHub, WeChat Official Accounts, DingTalk, Google, etc.) | ❌ | ✅ | Not needed |
| Image knowledge base | ❌ | In design | In design |
| Chat log operations analysis | ❌ | In design | In design |
| Full commercial license | ❌ | ✅ | ✅ |
{{< /table >}}

35
env.d.ts vendored
View File

@ -1,43 +1,8 @@
declare global {
namespace NodeJS {
interface ProcessEnv {
LOG_DEPTH: string;
DEFAULT_ROOT_PSW: string;
DB_MAX_LINK: string;
FILE_TOKEN_KEY: string;
AES256_SECRET_KEY: string;
ROOT_KEY: string;
OPENAI_BASE_URL: string;
CHAT_API_KEY: string;
AIPROXY_API_ENDPOINT: string;
AIPROXY_API_TOKEN: string;
MULTIPLE_DATA_TO_BASE64: string;
MONGODB_URI: string;
MONGODB_LOG_URI?: string;
PG_URL: string;
OCEANBASE_URL: string;
MILVUS_ADDRESS: string;
MILVUS_TOKEN: string;
SANDBOX_URL: string;
PRO_URL: string;
FE_DOMAIN: string;
FILE_DOMAIN: string;
NEXT_PUBLIC_BASE_URL: string;
LOG_LEVEL?: string;
STORE_LOG_LEVEL?: string;
USE_IP_LIMIT?: string;
WORKFLOW_MAX_RUN_TIMES?: string;
WORKFLOW_MAX_LOOP_TIMES?: string;
CHECK_INTERNAL_IP?: string;
CHAT_LOG_URL?: string;
CHAT_LOG_INTERVAL?: string;
CHAT_LOG_SOURCE_ID_PREFIX?: string;
ALLOWED_ORIGINS?: string;
SHOW_COUPON?: string;
CONFIG_JSON_PATH?: string;
PASSWORD_LOGIN_LOCK_SECONDS?: string;
PASSWORD_EXPIRED_MONTH?: string;
MAX_LOGIN_SESSION?: string;
}
}
}

View File

@ -6,7 +6,6 @@ export enum UserErrEnum {
userExist = 'userExist',
unAuthRole = 'unAuthRole',
account_psw_error = 'account_psw_error',
balanceNotEnough = 'balanceNotEnough',
unAuthSso = 'unAuthSso'
}
const errList = [
@ -22,10 +21,6 @@ const errList = [
statusText: UserErrEnum.account_psw_error,
message: i18nT('common:code_error.account_error')
},
{
statusText: UserErrEnum.balanceNotEnough,
message: i18nT('common:code_error.user_error.balance_not_enough')
},
{
statusText: UserErrEnum.unAuthSso,
message: i18nT('user:sso_auth_failed')

View File

@ -1,4 +1,5 @@
import { replaceSensitiveText } from '../string/tools';
import { ERROR_RESPONSE } from './errorCode';
export const getErrText = (err: any, def = ''): any => {
const msg: string =
@ -12,6 +13,11 @@ export const getErrText = (err: any, def = ''): any => {
err?.msg ||
err?.error ||
def;
if (ERROR_RESPONSE[msg]) {
return ERROR_RESPONSE[msg].message;
}
// msg && console.log('error =>', msg);
return replaceSensitiveText(msg);
};
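The change above resolves known error codes through ERROR_RESPONSE before falling back to the raw message. A standalone sketch of that lookup-then-fallback pattern (the map contents here are illustrative, not FastGPT's actual error table):

```typescript
// Illustrative error-code map; real entries would point at i18n keys.
const ERROR_MAP: Record<string, { message: string }> = {
  account_psw_error: { message: 'Incorrect account or password' }
};

const getErrorText = (err: any, def = ''): string => {
  // Accept either a bare string code or an error-like object.
  const msg: string = typeof err === 'string' ? err : err?.message || err?.msg || def;
  // Known error codes resolve to their mapped, user-facing message first.
  if (ERROR_MAP[msg]) return ERROR_MAP[msg].message;
  // Unknown messages pass through unchanged (the real code also sanitizes them).
  return msg;
};
```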

View File

@ -51,6 +51,7 @@ export type FastGPTFeConfigsType = {
bind_notification_method?: ['email' | 'phone'];
googleClientVerKey?: string;
mcpServerProxyEndpoint?: string;
chineseRedirectUrl?: string;
show_emptyChat?: boolean;
show_appStore?: boolean;
@ -82,7 +83,6 @@ export type FastGPTFeConfigsType = {
customSharePageDomain?: string;
systemTitle?: string;
systemDescription?: string;
scripts?: { [key: string]: string }[];
favicon?: string;
@ -109,6 +109,7 @@ export type FastGPTFeConfigsType = {
uploadFileMaxAmount?: number;
uploadFileMaxSize?: number;
evalFileMaxLines?: number;
// Compute by systemEnv.customPdfParse
showCustomPdfParse?: boolean;

View File

@ -5,15 +5,17 @@ export const delay = (ms: number) =>
}, ms);
});
export const retryFn = async <T>(fn: () => Promise<T>, retryTimes = 3): Promise<T> => {
  try {
    return fn();
  } catch (error) {
    if (retryTimes > 0) {
      await delay(500);
      return retryFn(fn, retryTimes - 1);
    }
    return Promise.reject(error);
  }
};
export const retryFn = async <T>(fn: () => Promise<T>, attempts = 3): Promise<T> => {
  while (true) {
    try {
      return fn();
    } catch (error) {
      if (attempts <= 0) {
        return Promise.reject(error);
      }
      await delay(500);
      attempts--;
    }
  }
};
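As a standalone illustration of the loop-based retry pattern above (a sketch, not FastGPT's actual export): in a sketch like this, the `await` inside `try` matters, since with a bare `return fn()` a rejected promise escapes the `catch` and is never retried.

```typescript
const delay = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Retry a promise-returning function up to `attempts` extra times.
async function retry<T>(fn: () => Promise<T>, attempts = 3, waitMs = 0): Promise<T> {
  while (true) {
    try {
      return await fn(); // await so that rejections are caught below
    } catch (error) {
      if (attempts <= 0) throw error; // budget exhausted: surface the last error
      attempts--;
      await delay(waitMs); // back off before the next attempt
    }
  }
}
```

The iterative loop also avoids growing the call stack on long retry chains, which recursion would.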

View File

@ -47,6 +47,7 @@ export type LLMModelItemType = PriceType &
usedInClassify?: boolean; // classify
usedInExtractFields?: boolean; // extract fields
usedInToolCall?: boolean; // tool call
useInEvaluation?: boolean; // evaluation
functionCall: boolean;
toolChoice: boolean;

View File

@ -0,0 +1,20 @@
import type { PaginationProps } from '@fastgpt/web/common/fetch/type';
export type listEvaluationsBody = PaginationProps<{
searchKey?: string;
}>;
export type listEvalItemsBody = PaginationProps<{
evalId: string;
}>;
export type retryEvalItemBody = {
evalItemId: string;
};
export type updateEvalItemBody = {
evalItemId: string;
question: string;
expectedResponse: string;
variables: Record<string, string>;
};

View File

@ -0,0 +1,22 @@
import { i18nT } from '../../../../web/i18n/utils';
export const evaluationFileErrors = i18nT('dashboard_evaluation:eval_file_check_error');
export enum EvaluationStatusEnum {
queuing = 0,
evaluating = 1,
completed = 2
}
export const EvaluationStatusMap = {
[EvaluationStatusEnum.queuing]: {
name: i18nT('dashboard_evaluation:queuing')
},
[EvaluationStatusEnum.evaluating]: {
name: i18nT('dashboard_evaluation:evaluating')
},
[EvaluationStatusEnum.completed]: {
name: i18nT('dashboard_evaluation:completed')
}
};
export const EvaluationStatusValues = Object.keys(EvaluationStatusMap).map(Number);
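One subtlety in `EvaluationStatusValues` above: `Object.keys` always returns strings, even when the map is indexed by a numeric enum, hence the `.map(Number)`. A minimal reproduction:

```typescript
// Numeric enum mirroring EvaluationStatusEnum above.
enum Status {
  queuing = 0,
  evaluating = 1,
  completed = 2
}

const statusMap: Record<Status, { name: string }> = {
  [Status.queuing]: { name: 'queuing' },
  [Status.evaluating]: { name: 'evaluating' },
  [Status.completed]: { name: 'completed' }
};

// Object keys come back as strings ('0', '1', '2'); Number restores the enum values.
const statusValues = Object.keys(statusMap).map(Number); // [0, 1, 2]
```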

View File

@ -0,0 +1,51 @@
import type { EvaluationStatusEnum } from './constants';
export type EvaluationSchemaType = {
_id: string;
teamId: string;
tmbId: string;
evalModel: string;
appId: string;
usageId: string;
name: string;
createTime: Date;
finishTime?: Date;
score?: number;
errorMessage?: string;
};
export type EvalItemSchemaType = {
evalId: string;
question: string;
expectedResponse: string;
globalVariables?: Record<string, any>;
history?: string;
response?: string;
responseTime?: Date;
finishTime?: Date;
status: EvaluationStatusEnum;
retry: number;
errorMessage?: string;
accuracy?: number;
relevance?: number;
semanticAccuracy?: number;
score?: number;
};
export type evaluationType = Pick<
EvaluationSchemaType,
'name' | 'appId' | 'createTime' | 'finishTime' | 'evalModel' | 'errorMessage' | 'score'
> & {
_id: string;
executorAvatar: string;
executorName: string;
appAvatar: string;
appName: string;
completedCount: number;
errorCount: number;
totalCount: number;
};
export type listEvalItemsItem = EvalItemSchemaType & {
evalItemId: string;
};

View File

@ -0,0 +1,10 @@
import type { VariableItemType } from '../type';
export const getEvaluationFileHeader = (appVariables?: VariableItemType[]) => {
if (!appVariables || appVariables.length === 0) return '*q,*a,history';
const variablesStr = appVariables
.map((item) => (item.required ? `*${item.key}` : item.key))
.join(',');
return `${variablesStr},*q,*a,history`;
};
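The helper above builds the CSV header for evaluation files, prefixing required columns with `*` and always appending the `*q,*a,history` columns. A standalone copy with a sample call:

```typescript
// Standalone copy of the header builder above, for illustration.
type VariableItem = { key: string; required: boolean };

const getEvaluationFileHeader = (appVariables?: VariableItem[]) => {
  if (!appVariables || appVariables.length === 0) return '*q,*a,history';
  const variablesStr = appVariables
    .map((item) => (item.required ? `*${item.key}` : item.key))
    .join(',');
  return `${variablesStr},*q,*a,history`;
};

// e.g. one required and one optional app variable:
// getEvaluationFileHeader([{ key: 'lang', required: true }, { key: 'tone', required: false }])
// → '*lang,tone,*q,*a,history'
```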

View File

@ -32,11 +32,11 @@ export const getMCPToolSetRuntimeNode = ({
nodeId: getNanoid(16),
flowNodeType: FlowNodeTypeEnum.toolSet,
avatar,
intro: 'MCP Tools',
intro: '',
inputs: [
{
key: NodeInputKeyEnum.toolSetData,
label: 'Tool Set Data',
label: '',
valueType: WorkflowIOValueTypeEnum.object,
renderTypeList: [FlowNodeInputTypeEnum.hidden],
value: {

View File

@ -34,6 +34,9 @@ export type SystemPluginTemplateItemType = WorkflowTemplateType & {
versionList?: {
value: string;
description?: string;
inputs: FlowNodeInputItemType[];
outputs: FlowNodeOutputItemType[];
}[];
// Admin workflow tool

View File

@ -66,6 +66,7 @@ export type AppListItemType = {
inheritPermission?: boolean;
private?: boolean;
sourceMember: SourceMemberType;
hasInteractiveNode?: boolean;
};
export type AppDetailType = AppSchema & {

View File

@ -269,11 +269,11 @@ export enum NodeOutputKeyEnum {
reasoningText = 'reasoningText', // node reasoning. the value will be show but not save to history
success = 'success',
failed = 'failed',
error = 'error',
text = 'system_text',
addOutputParam = 'system_addOutputParam',
rawResponse = 'system_rawResponse',
systemError = 'system_error',
errorText = 'system_error_text',
// start
userFiles = 'userFiles',
@ -312,7 +312,13 @@ export enum NodeOutputKeyEnum {
loopStartIndex = 'loopStartIndex',
// form input
formInputResult = 'formInputResult'
formInputResult = 'formInputResult',
// File
fileTitle = 'fileTitle',
// @deprecated
error = 'error'
}
export enum VariableInputEnum {

View File

@ -99,6 +99,7 @@ export const FlowNodeInputMap: Record<
export enum FlowNodeOutputTypeEnum {
hidden = 'hidden',
error = 'error',
source = 'source',
static = 'static',
dynamic = 'dynamic'

View File

@ -9,7 +9,7 @@ import type { FlowNodeInputItemType, FlowNodeOutputItemType } from '../type/io.d
import type { NodeToolConfigType, StoreNodeItemType } from '../type/node';
import type { DispatchNodeResponseKeyEnum } from './constants';
import type { StoreEdgeItemType } from '../type/edge';
import type { NodeInputKeyEnum } from '../constants';
import type { NodeInputKeyEnum, NodeOutputKeyEnum } from '../constants';
import type { ClassifyQuestionAgentItemType } from '../template/system/classifyQuestion/type';
import type { NextApiResponse } from 'next';
import { UserModelSchema } from '../../../support/user/type';
@ -24,7 +24,10 @@ import type { AiChatQuoteRoleType } from '../template/system/aiChat/type';
import type { OpenaiAccountType } from '../../../support/user/team/type';
import { LafAccountType } from '../../../support/user/team/type';
import type { CompletionFinishReason } from '../../ai/type';
import type { WorkflowInteractiveResponseType } from '../template/system/interactive/type';
import type {
InteractiveNodeResponseType,
WorkflowInteractiveResponseType
} from '../template/system/interactive/type';
import type { SearchDataResponseItemType } from '../../dataset/type';
export type ExternalProviderType = {
openaiAccount?: OpenaiAccountType;
@ -104,6 +107,9 @@ export type RuntimeNodeItemType = {
// Tool
toolConfig?: StoreNodeItemType['toolConfig'];
// catch error
catchError?: boolean;
};
export type RuntimeEdgeItemType = StoreEdgeItemType & {
@ -116,7 +122,12 @@ export type DispatchNodeResponseType = {
runningTime?: number;
query?: string;
textOutput?: string;
// Client will toast
error?: Record<string, any> | string;
// Just show
errorText?: string;
customInputs?: Record<string, any>;
customOutputs?: Record<string, any>;
nodeInputs?: Record<string, any>;
@ -235,7 +246,7 @@ export type DispatchNodeResponseType = {
extensionTokens?: number;
};
export type DispatchNodeResultType<T = {}> = {
export type DispatchNodeResultType<T = {}, ERR = { [NodeOutputKeyEnum.errorText]?: string }> = {
[DispatchNodeResponseKeyEnum.skipHandleId]?: string[]; // skip some edge handle id
[DispatchNodeResponseKeyEnum.nodeResponse]?: DispatchNodeResponseType; // The node response detail
[DispatchNodeResponseKeyEnum.nodeDispatchUsages]?: ChatNodeUsageType[]; // Node total usage
@ -246,7 +257,11 @@ export type DispatchNodeResultType<T = {}> = {
[DispatchNodeResponseKeyEnum.runTimes]?: number;
[DispatchNodeResponseKeyEnum.newVariables]?: Record<string, any>;
[DispatchNodeResponseKeyEnum.memories]?: Record<string, any>;
} & T;
[DispatchNodeResponseKeyEnum.interactive]?: InteractiveNodeResponseType;
data?: T;
error?: ERR;
};
/* Single node props */
export type AIChatNodeProps = {

View File

@ -251,7 +251,8 @@ export const storeNodes2RuntimeNodes = (
outputs: node.outputs,
pluginId: node.pluginId,
version: node.version,
toolConfig: node.toolConfig
toolConfig: node.toolConfig,
catchError: node.catchError
};
}) || []
);

View File

@ -2,6 +2,7 @@ import type { FlowNodeOutputItemType } from '../type/io.d';
import { NodeOutputKeyEnum } from '../constants';
import { FlowNodeOutputTypeEnum } from '../node/constant';
import { WorkflowIOValueTypeEnum } from '../constants';
import { i18nT } from '../../../../web/i18n/utils';
export const Output_Template_AddOutput: FlowNodeOutputItemType = {
id: NodeOutputKeyEnum.addOutputParam,
@ -15,3 +16,11 @@ export const Output_Template_AddOutput: FlowNodeOutputItemType = {
showDefaultValue: false
}
};
export const Output_Template_Error_Message: FlowNodeOutputItemType = {
id: NodeOutputKeyEnum.errorText,
key: NodeOutputKeyEnum.errorText,
type: FlowNodeOutputTypeEnum.error,
valueType: WorkflowIOValueTypeEnum.string,
label: i18nT('workflow:error_text')
};

View File

@ -20,6 +20,7 @@ import { chatNodeSystemPromptTip, systemPromptTip } from '../tip';
import { LLMModelTypeEnum } from '../../../ai/constants';
import { i18nT } from '../../../../../web/i18n/utils';
import { Input_Template_File_Link } from '../input';
import { Output_Template_Error_Message } from '../output';
export const AgentNode: FlowNodeTemplateType = {
id: FlowNodeTypeEnum.agent,
@ -31,6 +32,7 @@ export const AgentNode: FlowNodeTemplateType = {
name: i18nT('workflow:template.agent'),
intro: i18nT('workflow:template.agent_intro'),
showStatus: true,
catchError: false,
courseUrl: '/docs/guide/dashboard/workflow/tool/',
version: '4.9.2',
inputs: [
@ -107,6 +109,7 @@ export const AgentNode: FlowNodeTemplateType = {
description: i18nT('common:core.module.output.description.Ai response content'),
valueType: WorkflowIOValueTypeEnum.string,
type: FlowNodeOutputTypeEnum.static
}
},
Output_Template_Error_Message
]
};

View File

@ -20,6 +20,7 @@ import {
Input_Template_File_Link
} from '../../input';
import { i18nT } from '../../../../../../web/i18n/utils';
import { Output_Template_Error_Message } from '../../output';
export const AiChatQuoteRole = {
key: NodeInputKeyEnum.aiChatQuoteRole,
@ -54,6 +55,7 @@ export const AiChatModule: FlowNodeTemplateType = {
isTool: true,
courseUrl: '/docs/guide/dashboard/workflow/ai_chat/',
version: '4.9.7',
catchError: false,
inputs: [
Input_Template_SettingAiModel,
// --- settings modal
@ -158,6 +160,7 @@ export const AiChatModule: FlowNodeTemplateType = {
const modelItem = llmModelList.find((item) => item.model === model);
return modelItem?.reasoning !== true;
}
}
},
Output_Template_Error_Message
]
};

View File

@ -13,6 +13,7 @@ import {
import { Input_Template_SelectAIModel, Input_Template_History } from '../../input';
import { LLMModelTypeEnum } from '../../../../ai/constants';
import { i18nT } from '../../../../../../web/i18n/utils';
import { Output_Template_Error_Message } from '../../output';
export const ContextExtractModule: FlowNodeTemplateType = {
id: FlowNodeTypeEnum.contentExtract,
@ -25,6 +26,7 @@ export const ContextExtractModule: FlowNodeTemplateType = {
intro: i18nT('workflow:intro_text_content_extraction'),
showStatus: true,
isTool: true,
catchError: false,
courseUrl: '/docs/guide/dashboard/workflow/content_extract/',
version: '4.9.2',
inputs: [
@ -76,6 +78,7 @@ export const ContextExtractModule: FlowNodeTemplateType = {
description: i18nT('workflow:complete_extraction_result_description'),
valueType: WorkflowIOValueTypeEnum.string,
type: FlowNodeOutputTypeEnum.static
}
},
Output_Template_Error_Message
]
};

View File

@ -15,6 +15,7 @@ import {
import { Input_Template_UserChatInput } from '../input';
import { DatasetSearchModeEnum } from '../../../dataset/constants';
import { i18nT } from '../../../../../web/i18n/utils';
import { Output_Template_Error_Message } from '../output';
export const Dataset_SEARCH_DESC = i18nT('workflow:template.dataset_search_intro');
@ -29,6 +30,7 @@ export const DatasetSearchModule: FlowNodeTemplateType = {
intro: Dataset_SEARCH_DESC,
showStatus: true,
isTool: true,
catchError: false,
courseUrl: '/docs/guide/dashboard/workflow/dataset_search/',
version: '4.9.2',
inputs: [
@ -143,6 +145,7 @@ export const DatasetSearchModule: FlowNodeTemplateType = {
type: FlowNodeOutputTypeEnum.static,
valueType: WorkflowIOValueTypeEnum.datasetQuote,
valueDesc: datasetQuoteValueDesc
}
},
Output_Template_Error_Message
]
};

View File

@ -26,6 +26,7 @@ export const HttpNode468: FlowNodeTemplateType = {
intro: i18nT('workflow:intro_http_request'),
showStatus: true,
isTool: true,
catchError: false,
courseUrl: '/docs/guide/dashboard/workflow/http/',
inputs: [
{
@ -123,14 +124,6 @@ export const HttpNode468: FlowNodeTemplateType = {
label: i18nT('workflow:http_extract_output'),
description: i18nT('workflow:http_extract_output_description')
},
{
id: NodeOutputKeyEnum.error,
key: NodeOutputKeyEnum.error,
label: i18nT('workflow:request_error'),
description: i18nT('workflow:http_request_error_info'),
valueType: WorkflowIOValueTypeEnum.object,
type: FlowNodeOutputTypeEnum.static
},
{
id: NodeOutputKeyEnum.httpRawResponse,
key: NodeOutputKeyEnum.httpRawResponse,
@ -139,6 +132,13 @@ export const HttpNode468: FlowNodeTemplateType = {
description: i18nT('workflow:http_raw_response_description'),
valueType: WorkflowIOValueTypeEnum.any,
type: FlowNodeOutputTypeEnum.static
},
{
id: NodeOutputKeyEnum.error,
key: NodeOutputKeyEnum.error,
label: i18nT('workflow:error_text'),
valueType: WorkflowIOValueTypeEnum.string,
type: FlowNodeOutputTypeEnum.error
}
]
};

View File

@ -11,7 +11,7 @@ import {
FlowNodeTemplateTypeEnum
} from '../../constants';
import { Input_Template_DynamicInput } from '../input';
import { Output_Template_AddOutput } from '../output';
import { Output_Template_AddOutput, Output_Template_Error_Message } from '../output';
import { i18nT } from '../../../../../web/i18n/utils';
export const nodeLafCustomInputConfig = {
@ -31,6 +31,7 @@ export const LafModule: FlowNodeTemplateType = {
intro: i18nT('workflow:intro_laf_function_call'),
showStatus: true,
isTool: true,
catchError: false,
courseUrl: '/docs/guide/dashboard/workflow/laf/',
inputs: [
{
@ -57,8 +58,7 @@ export const LafModule: FlowNodeTemplateType = {
valueType: WorkflowIOValueTypeEnum.any,
type: FlowNodeOutputTypeEnum.static
},
{
...Output_Template_AddOutput
}
Output_Template_AddOutput,
Output_Template_Error_Message
]
};

View File

@ -11,6 +11,7 @@ import {
FlowNodeTypeEnum
} from '../../../node/constant';
import { type FlowNodeTemplateType } from '../../../type/node';
import { Output_Template_Error_Message } from '../../output';
export const ReadFilesNode: FlowNodeTemplateType = {
id: FlowNodeTypeEnum.readFiles,
@ -43,6 +44,7 @@ export const ReadFilesNode: FlowNodeTemplateType = {
description: i18nT('app:workflow.read_files_result_desc'),
valueType: WorkflowIOValueTypeEnum.string,
type: FlowNodeOutputTypeEnum.static
}
},
Output_Template_Error_Message
]
};

View File

@ -25,6 +25,7 @@ export const CodeNode: FlowNodeTemplateType = {
name: i18nT('workflow:code_execution'),
intro: i18nT('workflow:execute_a_simple_script_code_usually_for_complex_data_processing'),
showStatus: true,
catchError: false,
courseUrl: '/docs/guide/dashboard/workflow/sandbox/',
inputs: [
{
@ -89,14 +90,6 @@ export const CodeNode: FlowNodeTemplateType = {
valueType: WorkflowIOValueTypeEnum.object,
type: FlowNodeOutputTypeEnum.static
},
{
id: NodeOutputKeyEnum.error,
key: NodeOutputKeyEnum.error,
label: i18nT('workflow:execution_error'),
description: i18nT('workflow:error_info_returns_empty_on_success'),
valueType: WorkflowIOValueTypeEnum.object,
type: FlowNodeOutputTypeEnum.static
},
{
id: 'qLUQfhG0ILRX',
type: FlowNodeOutputTypeEnum.dynamic,
@ -110,6 +103,13 @@ export const CodeNode: FlowNodeTemplateType = {
key: 'data2',
valueType: WorkflowIOValueTypeEnum.string,
label: 'data2'
},
{
id: NodeOutputKeyEnum.error,
key: NodeOutputKeyEnum.error,
label: i18nT('workflow:error_text'),
valueType: WorkflowIOValueTypeEnum.string,
type: FlowNodeOutputTypeEnum.error
}
]
};

View File

@ -48,6 +48,7 @@ export type FlowNodeCommonType = {
isLatestVersion?: boolean; // Just ui show
// data
catchError?: boolean;
inputs: FlowNodeInputItemType[];
outputs: FlowNodeOutputItemType[];

View File

@ -52,7 +52,11 @@ import { ChatRoleEnum } from '../../core/chat/constants';
import { runtimePrompt2ChatsValue } from '../../core/chat/adapt';
import { getPluginRunContent } from '../../core/app/plugin/utils';
export const getHandleId = (nodeId: string, type: 'source' | 'target', key: string) => {
export const getHandleId = (
nodeId: string,
type: 'source' | 'source_catch' | 'target',
key: string
) => {
return `${nodeId}-${type}-${key}`;
};
@ -219,16 +223,14 @@ export const pluginData2FlowNodeIO = ({
]
: [],
outputs: pluginOutput
? [
...pluginOutput.inputs.map((item) => ({
id: item.key,
type: FlowNodeOutputTypeEnum.static,
key: item.key,
valueType: item.valueType,
label: item.label || item.key,
description: item.description
}))
]
? pluginOutput.inputs.map((item) => ({
id: item.key,
type: FlowNodeOutputTypeEnum.static,
key: item.key,
valueType: item.valueType,
label: item.label || item.key,
description: item.description
}))
: []
};
};

View File

@ -54,6 +54,9 @@ export enum AuditEventEnum {
UPDATE_APP_PUBLISH_CHANNEL = 'UPDATE_APP_PUBLISH_CHANNEL',
DELETE_APP_PUBLISH_CHANNEL = 'DELETE_APP_PUBLISH_CHANNEL',
EXPORT_APP_CHAT_LOG = 'EXPORT_APP_CHAT_LOG',
CREATE_EVALUATION = 'CREATE_EVALUATION',
EXPORT_EVALUATION = 'EXPORT_EVALUATION',
DELETE_EVALUATION = 'DELETE_EVALUATION',
//Dataset
CREATE_DATASET = 'CREATE_DATASET',
UPDATE_DATASET = 'UPDATE_DATASET',

View File

@ -12,7 +12,8 @@ export enum UsageSourceEnum {
dingtalk = 'dingtalk',
official_account = 'official_account',
pdfParse = 'pdfParse',
mcp = 'mcp'
mcp = 'mcp',
evaluation = 'evaluation'
}
export const UsageSourceMap = {
@ -51,5 +52,8 @@ export const UsageSourceMap = {
},
[UsageSourceEnum.mcp]: {
label: i18nT('account_usage:mcp')
},
[UsageSourceEnum.evaluation]: {
label: i18nT('account_usage:evaluation')
}
};

View File

@ -8,6 +8,7 @@ export type UsageListItemCountType = {
charsLength?: number;
duration?: number;
pages?: number;
count?: number; // Times
// deprecated
tokens?: number;
@ -17,6 +18,7 @@ export type UsageListItemType = UsageListItemCountType & {
moduleName: string;
amount: number;
model?: string;
count?: number;
};
export type UsageSchemaType = CreateUsageProps & {

View File

@ -20,6 +20,7 @@ const defaultWorkerOpts: Omit<ConnectionOptions, 'connection'> = {
export enum QueueNames {
datasetSync = 'datasetSync',
evaluation = 'evaluation',
// abandoned
websiteSync = 'websiteSync'
}

View File

@ -1,13 +1,9 @@
import { registerOTel, OTLPHttpJsonTraceExporter } from '@vercel/otel';
// Add otel logging
// import { diag, DiagConsoleLogger, DiagLogLevel } from '@opentelemetry/api';
import { SignozBaseURL, SignozServiceName } from '../const';
import { addLog } from '../../system/log';
// diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.INFO);
export function connectSignoz() {
if (!SignozBaseURL) {
addLog.warn('Signoz is not configured');
return;
}
addLog.info(`Connecting signoz, ${SignozBaseURL}, ${SignozServiceName}`);

View File

@ -15,11 +15,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
@ -38,11 +33,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
@ -61,11 +51,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
@ -84,11 +69,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
@ -106,11 +86,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
@ -128,11 +103,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"

View File

@ -14,11 +14,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
@ -36,11 +31,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
@ -58,11 +48,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
@ -80,11 +65,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
@ -102,11 +82,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
@ -124,11 +99,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
@ -146,11 +116,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"

View File

@ -15,11 +15,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"type": "llm"
},
{
@ -34,11 +29,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",

View File

@ -14,11 +14,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
@ -36,11 +31,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
@ -58,11 +48,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
@ -80,11 +65,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
@ -102,11 +82,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
@ -124,11 +99,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
@ -146,11 +116,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
@ -168,11 +133,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
@ -190,11 +150,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
@ -210,11 +165,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -232,11 +182,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -254,11 +199,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -276,11 +216,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -298,11 +233,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -320,11 +250,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",

View File

@ -12,11 +12,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -34,11 +29,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -56,11 +46,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -78,11 +63,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",

View File

@ -1,6 +1,38 @@
{
"provider": "Gemini",
"list": [
{
"model": "gemini-2.5-pro",
"name": "gemini-2.5-pro",
"maxContext": 1000000,
"maxResponse": 63000,
"quoteMaxToken": 1000000,
"maxTemperature": 1,
"vision": true,
"toolChoice": true,
"defaultSystemChatPrompt": "",
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
"showTopP": true,
"showStopSign": true
},
{
"model": "gemini-2.5-flash",
"name": "gemini-2.5-flash",
"maxContext": 1000000,
"maxResponse": 63000,
"quoteMaxToken": 1000000,
"maxTemperature": 1,
"vision": true,
"toolChoice": true,
"defaultSystemChatPrompt": "",
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
"showTopP": true,
"showStopSign": true
},
{
"model": "gemini-2.5-pro-exp-03-25",
"name": "gemini-2.5-pro-exp-03-25",
@ -12,11 +44,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -34,11 +61,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -56,11 +78,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -78,11 +95,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -100,11 +112,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -122,11 +129,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -144,11 +146,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -166,11 +163,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -188,11 +180,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -210,11 +197,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",

View File

@ -1,6 +1,38 @@
{
"provider": "Grok",
"list": [
{
"model": "grok-4",
"name": "grok-4",
"maxContext": 256000,
"maxResponse": 8000,
"quoteMaxToken": 128000,
"maxTemperature": 1,
"showTopP": true,
"showStopSign": true,
"vision": true,
"toolChoice": true,
"defaultSystemChatPrompt": "",
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
},
{
"model": "grok-4-0709",
"name": "grok-4-0709",
"maxContext": 256000,
"maxResponse": 8000,
"quoteMaxToken": 128000,
"maxTemperature": 1,
"showTopP": true,
"showStopSign": true,
"vision": true,
"toolChoice": true,
"defaultSystemChatPrompt": "",
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
},
{
"model": "grok-3-mini",
"name": "grok-3-mini",
@ -14,11 +46,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
@ -36,11 +63,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
@ -58,11 +80,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
@ -80,11 +97,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"

View File

@ -12,11 +12,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"type": "llm",
"showTopP": true,
@ -33,11 +28,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"type": "llm",
"showTopP": true,

View File

@ -12,11 +12,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -34,11 +29,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -56,11 +46,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -78,11 +63,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -100,11 +80,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -122,11 +97,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -144,11 +114,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",

View File

@ -12,11 +12,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -34,11 +29,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",

View File

@ -12,11 +12,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -34,11 +29,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",

View File

@ -12,11 +12,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -34,11 +29,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -56,11 +46,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -78,11 +63,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",

View File

@ -1,6 +1,74 @@
{
"provider": "Moonshot",
"list": [
{
"model": "kimi-k2-0711-preview",
"name": "kimi-k2-0711-preview",
"maxContext": 128000,
"maxResponse": 32000,
"quoteMaxToken": 128000,
"maxTemperature": 1,
"vision": false,
"toolChoice": true,
"defaultSystemChatPrompt": "",
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
"showTopP": true,
"showStopSign": true,
"responseFormatList": ["text", "json_object"]
},
{
"model": "kimi-latest-8k",
"name": "kimi-latest-8k",
"maxContext": 8000,
"maxResponse": 4000,
"quoteMaxToken": 6000,
"maxTemperature": 1,
"vision": false,
"toolChoice": true,
"defaultSystemChatPrompt": "",
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
"showTopP": true,
"showStopSign": true,
"responseFormatList": ["text", "json_object"]
},
{
"model": "kimi-latest-32k",
"name": "kimi-latest-32k",
"maxContext": 32000,
"maxResponse": 16000,
"quoteMaxToken": 32000,
"maxTemperature": 1,
"vision": false,
"toolChoice": true,
"defaultSystemChatPrompt": "",
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
"showTopP": true,
"showStopSign": true,
"responseFormatList": ["text", "json_object"]
},
{
"model": "kimi-latest-128k",
"name": "kimi-latest-128k",
"maxContext": 128000,
"maxResponse": 32000,
"quoteMaxToken": 128000,
"maxTemperature": 1,
"vision": false,
"toolChoice": true,
"defaultSystemChatPrompt": "",
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
"showTopP": true,
"showStopSign": true,
"responseFormatList": ["text", "json_object"]
},
{
"model": "moonshot-v1-8k",
"name": "moonshot-v1-8k",
@ -12,11 +80,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -35,11 +98,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -58,11 +116,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -81,11 +134,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -104,11 +152,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -127,11 +170,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",

View File

@ -15,10 +15,6 @@
"toolChoice": true,
"functionCall": true,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
@ -37,10 +33,6 @@
"toolChoice": true,
"functionCall": true,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
@ -59,10 +51,6 @@
"toolChoice": true,
"functionCall": true,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
@ -81,10 +69,6 @@
"toolChoice": true,
"functionCall": true,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
@ -103,11 +87,6 @@
"toolChoice": true,
"functionCall": true,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
@ -123,11 +102,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {
"max_tokens": "max_completion_tokens"
@ -147,11 +121,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {
"max_tokens": "max_completion_tokens"
@ -171,11 +140,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {
"max_tokens": "max_completion_tokens"
@ -195,11 +159,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {
"max_tokens": "max_completion_tokens"
@ -219,11 +178,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {
"max_tokens": "max_completion_tokens"
@ -243,11 +197,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {
"stream": false
},
@ -271,11 +220,6 @@
"toolChoice": true,
"functionCall": true,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"type": "llm"
},
{
@ -291,11 +235,6 @@
"toolChoice": true,
"functionCall": true,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"type": "llm"
},
{

View File

@ -12,11 +12,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -35,11 +30,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -57,11 +47,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -80,11 +65,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"type": "llm",
"showTopP": true,
"showStopSign": true
@ -100,11 +80,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -124,11 +99,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {
"stream": true
},
@ -150,11 +120,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {
"stream": true
},
@ -176,11 +141,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {
"stream": true
},
@ -202,11 +162,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {
"stream": true
},
@ -228,11 +183,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {
"stream": true
},
@ -254,11 +204,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {
"stream": true
},
@ -280,11 +225,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {
"stream": true
},
@ -306,11 +246,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {
"stream": true
},
@ -336,7 +271,6 @@
"usedInClassify": false,
"usedInExtractFields": false,
"usedInQueryExtension": false,
"usedInToolCall": true,
"defaultConfig": {
"stream": true
},
@ -361,7 +295,6 @@
"usedInClassify": false,
"usedInExtractFields": false,
"usedInQueryExtension": false,
"usedInToolCall": true,
"defaultConfig": {
"stream": true
},
@ -381,11 +314,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -403,11 +331,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -426,11 +349,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -449,11 +367,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -472,11 +385,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",

View File

@ -12,11 +12,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -55,11 +50,6 @@
"toolChoice": true,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",

View File

@ -9,11 +9,6 @@
"quoteMaxToken": 32000,
"maxTemperature": 1,
"vision": false,
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInToolCall": true,
"usedInQueryExtension": true,
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
@ -29,11 +24,6 @@
"quoteMaxToken": 8000,
"maxTemperature": 1,
"vision": false,
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInToolCall": true,
"usedInQueryExtension": true,
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
@ -49,11 +39,6 @@
"quoteMaxToken": 128000,
"maxTemperature": 1,
"vision": false,
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInToolCall": true,
"usedInQueryExtension": true,
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
@ -69,11 +54,6 @@
"quoteMaxToken": 8000,
"maxTemperature": 1,
"vision": false,
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInToolCall": true,
"usedInQueryExtension": true,
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
@ -92,11 +72,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@ -114,11 +89,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",

View File

@ -9,11 +9,6 @@
"quoteMaxToken": 6000,
"maxTemperature": 2,
"vision": false,
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInToolCall": true,
"usedInQueryExtension": true,
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
@ -29,11 +24,6 @@
"quoteMaxToken": 8000,
"maxTemperature": 2,
"vision": false,
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInToolCall": true,
"usedInQueryExtension": true,
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
@ -49,11 +39,6 @@
"quoteMaxToken": 32000,
"maxTemperature": 2,
"vision": false,
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInToolCall": true,
"usedInQueryExtension": true,
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
@ -69,11 +54,6 @@
"quoteMaxToken": 128000,
"maxTemperature": 2,
"vision": false,
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInToolCall": true,
"usedInQueryExtension": true,
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
@ -89,11 +69,6 @@
"quoteMaxToken": 256000,
"maxTemperature": 2,
"vision": false,
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInToolCall": true,
"usedInQueryExtension": true,
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
@ -109,11 +84,6 @@
"maxResponse": 8000,
"maxTemperature": 2,
"vision": true,
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInToolCall": true,
"usedInQueryExtension": true,
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
@ -129,11 +99,6 @@
"quoteMaxToken": 8000,
"maxTemperature": 2,
"vision": true,
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInToolCall": true,
"usedInQueryExtension": true,
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
@ -149,11 +114,6 @@
"maxResponse": 8000,
"maxTemperature": 2,
"vision": true,
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInToolCall": true,
"usedInQueryExtension": true,
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
@ -169,11 +129,6 @@
"quoteMaxToken": 6000,
"maxTemperature": 2,
"vision": false,
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInToolCall": true,
"usedInQueryExtension": true,
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
@ -189,11 +144,6 @@
"quoteMaxToken": 4000,
"maxTemperature": 2,
"vision": false,
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInToolCall": true,
"usedInQueryExtension": true,
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
@@ -209,11 +159,6 @@
"quoteMaxToken": 4000,
"maxTemperature": 2,
"vision": false,
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInToolCall": true,
"usedInQueryExtension": true,
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",

View File

@@ -12,11 +12,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",
@@ -34,11 +29,6 @@
"toolChoice": false,
"functionCall": false,
"defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {},
"fieldMap": {},
"type": "llm",

View File

@@ -43,6 +43,15 @@ export const loadSystemModels = async (init = false) => {
const pushModel = (model: SystemModelItemType) => {
global.systemModelList.push(model);
// Add default value
if (model.type === ModelTypeEnum.llm) {
model.datasetProcess = model.datasetProcess ?? true;
model.usedInClassify = model.usedInClassify ?? true;
model.usedInExtractFields = model.usedInExtractFields ?? true;
model.usedInToolCall = model.usedInToolCall ?? true;
model.useInEvaluation = model.useInEvaluation ?? true;
}
if (model.isActive) {
global.systemActiveModelList.push(model);
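The `?? true` backfill above means model configs saved before these capability flags existed stay fully enabled unless a flag was explicitly set to `false`. A standalone sketch of that defaulting (field names are taken from the diff; the helper function itself is hypothetical):

```typescript
// LLM capability flags backfilled in loadSystemModels (names from the diff).
type LLMFlags = {
  datasetProcess?: boolean;
  usedInClassify?: boolean;
  usedInExtractFields?: boolean;
  usedInToolCall?: boolean;
  useInEvaluation?: boolean;
};

// Hypothetical helper: nullish coalescing keeps explicit `false`,
// but fills in `true` for fields that were never configured.
function applyLLMDefaults(model: LLMFlags): LLMFlags {
  return {
    ...model,
    datasetProcess: model.datasetProcess ?? true,
    usedInClassify: model.usedInClassify ?? true,
    usedInExtractFields: model.usedInExtractFields ?? true,
    usedInToolCall: model.usedInToolCall ?? true,
    useInEvaluation: model.useInEvaluation ?? true
  };
}
```

Because `??` only triggers on `null`/`undefined`, an admin who disabled a capability keeps it disabled after upgrade.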

View File

@@ -14,15 +14,15 @@ export const getDatasetModel = (model?: string) => {
?.find((item) => item.model === model || item.name === model) ?? getDefaultLLMModel()
);
};
export const getVlmModel = (model?: string) => {
return Array.from(global.llmModelMap.values())
?.filter((item) => item.vision)
?.find((item) => item.model === model || item.name === model);
};
export const getVlmModelList = () => {
return Array.from(global.llmModelMap.values())?.filter((item) => item.vision) || [];
};
export const getDefaultVLMModel = () => global?.systemDefaultModel.datasetImageLLM;
export const getVlmModel = (model?: string) => {
const list = getVlmModelList();
return list.find((item) => item.model === model || item.name === model) || list[0];
};
export const getDefaultEmbeddingModel = () => global?.systemDefaultModel.embedding!;
export const getEmbeddingModel = (model?: string) => {
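The reworked `getVlmModel` above now falls back to the first vision-capable model when no name or id matches, instead of returning `undefined`. A minimal self-contained sketch of that lookup (simplified `Model` type and hypothetical function name):

```typescript
type Model = { model: string; name: string; vision: boolean };

// Mirrors the new getVlmModel: filter to vision models, match by id or
// display name, and fall back to the first vision model in the list.
function findVlm(models: Model[], wanted?: string): Model | undefined {
  const list = models.filter((m) => m.vision);
  return list.find((m) => m.model === wanted || m.name === wanted) || list[0];
}
```

The fallback only returns `undefined` when no vision model is configured at all.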

View File

@@ -0,0 +1,56 @@
import { connectionMongo, getMongoModel } from '../../../common/mongo';
import { EvaluationCollectionName } from './evalSchema';
import {
EvaluationStatusEnum,
EvaluationStatusValues
} from '@fastgpt/global/core/app/evaluation/constants';
import type { EvalItemSchemaType } from '@fastgpt/global/core/app/evaluation/type';
const { Schema } = connectionMongo;
export const EvalItemCollectionName = 'eval_items';
const EvalItemSchema = new Schema({
evalId: {
type: Schema.Types.ObjectId,
ref: EvaluationCollectionName,
required: true
},
question: {
type: String,
required: true
},
expectedResponse: {
type: String,
required: true
},
history: String,
globalVariables: Object,
response: String,
responseTime: Date,
status: {
type: Number,
default: EvaluationStatusEnum.queuing,
enum: EvaluationStatusValues
},
retry: {
type: Number,
default: 3
},
finishTime: Date,
accuracy: Number,
relevance: Number,
semanticAccuracy: Number,
score: Number, // average score
errorMessage: String
});
EvalItemSchema.index({ evalId: 1, status: 1 });
export const MongoEvalItem = getMongoModel<EvalItemSchemaType>(
EvalItemCollectionName,
EvalItemSchema
);
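`EvalItemSchema` stores three per-dimension metrics and a `score` the comment labels an average. The exact aggregation FastGPT uses is not shown in this diff; assuming a plain mean over whichever metrics are present, a hypothetical helper could look like:

```typescript
// Hypothetical aggregation (not in the PR): mean over the defined metrics.
// Field names match EvalItemSchema; missing metrics are skipped.
function averageScore(item: {
  accuracy?: number;
  relevance?: number;
  semanticAccuracy?: number;
}): number | undefined {
  const metrics = [item.accuracy, item.relevance, item.semanticAccuracy].filter(
    (v): v is number => typeof v === 'number'
  );
  if (metrics.length === 0) return undefined;
  return metrics.reduce((sum, v) => sum + v, 0) / metrics.length;
}
```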

View File

@@ -0,0 +1,57 @@
import {
TeamCollectionName,
TeamMemberCollectionName
} from '@fastgpt/global/support/user/team/constant';
import { connectionMongo, getMongoModel } from '../../../common/mongo';
import { AppCollectionName } from '../schema';
import type { EvaluationSchemaType } from '@fastgpt/global/core/app/evaluation/type';
import { UsageCollectionName } from '../../../support/wallet/usage/schema';
const { Schema } = connectionMongo;
export const EvaluationCollectionName = 'eval';
const EvaluationSchema = new Schema({
teamId: {
type: Schema.Types.ObjectId,
ref: TeamCollectionName,
required: true
},
tmbId: {
type: Schema.Types.ObjectId,
ref: TeamMemberCollectionName,
required: true
},
appId: {
type: Schema.Types.ObjectId,
ref: AppCollectionName,
required: true
},
usageId: {
type: Schema.Types.ObjectId,
ref: UsageCollectionName,
required: true
},
evalModel: {
type: String,
required: true
},
name: {
type: String,
required: true
},
createTime: {
type: Date,
required: true,
default: () => new Date()
},
finishTime: Date,
score: Number,
errorMessage: String
});
EvaluationSchema.index({ teamId: 1 });
export const MongoEvaluation = getMongoModel<EvaluationSchemaType>(
EvaluationCollectionName,
EvaluationSchema
);

View File

@@ -0,0 +1,80 @@
import { getQueue, getWorker, QueueNames } from '../../../common/bullmq';
import { type Processor } from 'bullmq';
import { addLog } from '../../../common/system/log';
export type EvaluationJobData = {
evalId: string;
};
export const evaluationQueue = getQueue<EvaluationJobData>(QueueNames.evaluation, {
defaultJobOptions: {
attempts: 3,
backoff: {
type: 'exponential',
delay: 1000
}
}
});
const concurrency = process.env.EVAL_CONCURRENCY ? Number(process.env.EVAL_CONCURRENCY) : 3;
export const getEvaluationWorker = (processor: Processor<EvaluationJobData>) => {
return getWorker<EvaluationJobData>(QueueNames.evaluation, processor, {
removeOnFail: {
count: 1000 // Keep last 1000 failed jobs
},
concurrency: concurrency
});
};
export const addEvaluationJob = (data: EvaluationJobData) => {
const evalId = String(data.evalId);
return evaluationQueue.add(evalId, data, { deduplication: { id: evalId } });
};
export const checkEvaluationJobActive = async (evalId: string): Promise<boolean> => {
try {
const jobId = await evaluationQueue.getDeduplicationJobId(String(evalId));
if (!jobId) return false;
const job = await evaluationQueue.getJob(jobId);
if (!job) return false;
const jobState = await job.getState();
return ['waiting', 'delayed', 'prioritized', 'active'].includes(jobState);
} catch (error) {
addLog.error('Failed to check evaluation job status', { evalId, error });
return false;
}
};
export const removeEvaluationJob = async (evalId: string): Promise<boolean> => {
const formatEvalId = String(evalId);
try {
const jobId = await evaluationQueue.getDeduplicationJobId(formatEvalId);
if (!jobId) {
addLog.warn('No job found to remove', { evalId });
return false;
}
const job = await evaluationQueue.getJob(jobId);
if (!job) {
addLog.warn('Job not found in queue', { evalId, jobId });
return false;
}
const jobState = await job.getState();
if (['waiting', 'delayed', 'prioritized'].includes(jobState)) {
await job.remove();
addLog.info('Evaluation job removed successfully', { evalId, jobId, jobState });
return true;
} else {
addLog.warn('Cannot remove active or completed job', { evalId, jobId, jobState });
return false;
}
} catch (error) {
addLog.error('Failed to remove evaluation job', { evalId, error });
return false;
}
};
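`checkEvaluationJobActive` and `removeEvaluationJob` above gate on BullMQ job states: a job still counts as "active" in four states, but may only be removed while it has not yet started running. Extracted as standalone predicates (hypothetical names; the state lists are copied from the diff):

```typescript
type JobState = 'waiting' | 'delayed' | 'prioritized' | 'active' | 'completed' | 'failed';

// States in which checkEvaluationJobActive reports the job as still live.
const ACTIVE_STATES: JobState[] = ['waiting', 'delayed', 'prioritized', 'active'];
// States in which removeEvaluationJob is willing to call job.remove().
const REMOVABLE_STATES: JobState[] = ['waiting', 'delayed', 'prioritized'];

const isJobActive = (state: JobState) => ACTIVE_STATES.includes(state);
const isJobRemovable = (state: JobState) => REMOVABLE_STATES.includes(state);
```

Note the asymmetry: an `active` job is live but deliberately not removable, so a running evaluation is never killed mid-flight.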

View File

@@ -1,5 +1,8 @@
import { type FlowNodeTemplateType } from '@fastgpt/global/core/workflow/type/node.d';
import { FlowNodeTypeEnum } from '@fastgpt/global/core/workflow/node/constant';
import {
FlowNodeOutputTypeEnum,
FlowNodeTypeEnum
} from '@fastgpt/global/core/workflow/node/constant';
import {
appData2FlowNodeIO,
pluginData2FlowNodeIO,
@@ -33,6 +36,7 @@ import type {
FlowNodeOutputItemType
} from '@fastgpt/global/core/workflow/type/io';
import { isProduction } from '@fastgpt/global/common/system/constants';
import { Output_Template_Error_Message } from '@fastgpt/global/core/workflow/template/output';
/**
plugin id rule:
@@ -122,15 +126,24 @@ export const getSystemPluginByIdAndVersionId = async (
};
}
// System tool
const versionList = (plugin.versionList as SystemPluginTemplateItemType['versionList']) || [];
if (versionList.length === 0) {
return Promise.reject('Cannot find plugin version list');
}
const version = versionId
? plugin.versionList?.find((item) => item.value === versionId)
: plugin.versionList?.[0];
const lastVersion = plugin.versionList?.[0];
? versionList.find((item) => item.value === versionId) ?? versionList[0]
: versionList[0];
const lastVersion = versionList[0];
return {
...plugin,
inputs: version.inputs,
outputs: version.outputs,
version: versionId ? version?.value : '',
versionLabel: version ? version?.value : '',
versionLabel: versionId ? version?.value : '',
isLatestVersion: !version || !lastVersion || version.value === lastVersion?.value
};
})();
@@ -198,8 +211,8 @@ export async function getChildAppPreviewNode({
return {
flowNodeType: FlowNodeTypeEnum.tool,
nodeIOConfig: {
inputs: app.inputs!,
outputs: app.outputs!,
inputs: app.inputs || [],
outputs: app.outputs || [],
toolConfig: {
systemTool: {
toolId: app.id
@@ -209,6 +222,7 @@
};
}
// Plugin workflow
if (!!app.workflow.nodes.find((node) => node.flowNodeType === FlowNodeTypeEnum.pluginInput)) {
return {
flowNodeType: FlowNodeTypeEnum.pluginModule,
@@ -216,6 +230,7 @@
};
}
// Mcp
if (
!!app.workflow.nodes.find((node) => node.flowNodeType === FlowNodeTypeEnum.toolSet) &&
app.workflow.nodes.length === 1
@@ -236,6 +251,7 @@
};
}
// Chat workflow
return {
flowNodeType: FlowNodeTypeEnum.appModule,
nodeIOConfig: appData2FlowNodeIO({ chatConfig: app.workflow.chatConfig })
@@ -254,6 +270,7 @@ export async function getChildAppPreviewNode({
userGuide: app.userGuide,
showStatus: true,
isTool: true,
catchError: false,
version: app.version,
versionLabel: app.versionLabel,
@@ -265,7 +282,10 @@
hasTokenFee: app.hasTokenFee,
hasSystemSecret: app.hasSystemSecret,
...nodeIOConfig
...nodeIOConfig,
outputs: nodeIOConfig.outputs.some((item) => item.type === FlowNodeOutputTypeEnum.error)
? nodeIOConfig.outputs
: [...nodeIOConfig.outputs, Output_Template_Error_Message]
};
}
@@ -414,8 +434,9 @@ export const getSystemPlugins = async (): Promise<SystemPluginTemplateItemType[]
const formatTools = tools.map<SystemPluginTemplateItemType>((item) => {
const dbPluginConfig = systemPlugins.get(item.id);
const inputs = item.versionList[0]?.inputs as FlowNodeInputItemType[];
const outputs = item.versionList[0]?.outputs as FlowNodeOutputItemType[];
const versionList = (item.versionList as SystemPluginTemplateItemType['versionList']) || [];
const inputs = versionList[0]?.inputs;
return {
isActive: item.isActive,
@@ -439,9 +460,7 @@ export const getSystemPlugins = async (): Promise<SystemPluginTemplateItemType[]
nodes: [],
edges: []
},
versionList: item.versionList,
inputs,
outputs,
versionList,
inputList: inputs?.find((input) => input.key === NodeInputKeyEnum.systemInputConfig)
?.inputList as any,
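In `getChildAppPreviewNode`, the outputs merge appends `Output_Template_Error_Message` only when no error-typed output already exists, matching this release's per-node catch-error feature. A simplified, self-contained sketch of that merge (the type and template below are stand-ins, not the real FastGPT definitions):

```typescript
type NodeOutput = { key: string; type: string };

// Stand-ins for FlowNodeOutputTypeEnum.error and Output_Template_Error_Message.
const ERROR_TYPE = 'error';
const errorTemplate: NodeOutput = { key: 'system_error', type: ERROR_TYPE };

// Append the default error output only if none is present; idempotent.
function withErrorOutput(outputs: NodeOutput[]): NodeOutput[] {
  return outputs.some((o) => o.type === ERROR_TYPE)
    ? outputs
    : [...outputs, errorTemplate];
}
```

Making the merge idempotent means nodes that already define a custom error output are left untouched.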

View File

@@ -25,8 +25,7 @@ const SystemPluginSchema = new Schema({
default: false
},
pluginOrder: {
type: Number,
default: 0
type: Number
},
customConfig: Object,
inputListVal: Object,

View File

@@ -9,7 +9,7 @@ export type SystemPluginConfigSchemaType = {
currentCost: number;
hasTokenFee: boolean;
isActive: boolean;
pluginOrder: number;
pluginOrder?: number;
customConfig?: {
name: string;

View File

@@ -79,6 +79,28 @@ export async function rewriteAppWorkflowToDetail({
node.currentCost = preview.currentCost;
node.hasTokenFee = preview.hasTokenFee;
node.hasSystemSecret = preview.hasSystemSecret;
// Latest version
if (!node.version) {
const inputsMap = new Map(node.inputs.map((item) => [item.key, item]));
const outputsMap = new Map(node.outputs.map((item) => [item.key, item]));
node.inputs = preview.inputs.map((item) => {
const input = inputsMap.get(item.key);
return {
...item,
value: input?.value,
selectedTypeIndex: input?.selectedTypeIndex
};
});
node.outputs = preview.outputs.map((item) => {
const output = outputsMap.get(item.key);
return {
...item,
value: output?.value
};
});
}
} catch (error) {
node.pluginData = {
error: getErrText(error)
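The "latest version" branch in `rewriteAppWorkflowToDetail` re-keys the node's saved values onto the preview's current input/output definitions, so a node without a pinned version picks up newly added fields while keeping user-entered values. A pure sketch of the input half (simplified types and a hypothetical function name):

```typescript
type NodeIO = { key: string; value?: unknown; selectedTypeIndex?: number };

// Carry saved values over onto the latest (preview) definitions, keyed by
// input key. Keys removed upstream are dropped; new keys start empty.
function mergeInputs(saved: NodeIO[], preview: NodeIO[]): NodeIO[] {
  const savedMap = new Map(saved.map((item) => [item.key, item]));
  return preview.map((item) => {
    const old = savedMap.get(item.key);
    return { ...item, value: old?.value, selectedTypeIndex: old?.selectedTypeIndex };
  });
}
```

Iterating over `preview` rather than `saved` is what makes the latest definitions authoritative for everything except the stored values.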

View File

@@ -166,7 +166,7 @@ export async function saveChat({
if (isUpdateUseTime) {
await MongoApp.findByIdAndUpdate(appId, {
updateTime: new Date()
});
}).catch(() => {});
}
} catch (error) {
addLog.error(`update chat history error`, error);

View File

@@ -95,6 +95,10 @@ export const dispatchAppRequest = async (props: Props): Promise<Response> => {
const { text } = chatValue2RuntimePrompt(assistantResponses);
return {
data: {
answerText: text,
history: completeMessages
},
assistantResponses,
system_memories,
[DispatchNodeResponseKeyEnum.nodeResponse]: {
@@ -108,8 +112,6 @@ export const dispatchAppRequest = async (props: Props): Promise<Response> => {
moduleName: appData.name,
totalPoints: flowUsages.reduce((sum, item) => sum + (item.totalPoints || 0), 0)
}
],
answerText: text,
history: completeMessages
]
};
};

View File

@@ -6,7 +6,7 @@ import type {
RuntimeNodeItemType
} from '@fastgpt/global/core/workflow/runtime/type';
import { getLLMModel } from '../../../../ai/model';
import { filterToolNodeIdByEdges, getHistories } from '../../utils';
import { filterToolNodeIdByEdges, getNodeErrResponse, getHistories } from '../../utils';
import { runToolWithToolChoice } from './toolChoice';
import { type DispatchToolModuleProps, type ToolNodeItemType } from './type';
import { type ChatItemType, type UserChatItemValueItemType } from '@fastgpt/global/core/chat/type';
@@ -25,7 +25,6 @@ import { runToolWithPromptCall } from './promptCall';
import { replaceVariable } from '@fastgpt/global/common/string/tools';
import { getMultiplePrompt, Prompt_Tool_Call } from './constants';
import { filterToolResponseToPreview } from './utils';
import { type InteractiveNodeResponseType } from '@fastgpt/global/core/workflow/template/system/interactive/type';
import { getFileContentFromLinks, getHistoryFileLinks } from '../../tools/readFiles';
import { parseUrlToFileType } from '@fastgpt/global/common/file/tools';
import { FlowNodeTypeEnum } from '@fastgpt/global/core/workflow/node/constant';
@@ -38,7 +37,6 @@ import type { JSONSchemaInputType } from '@fastgpt/global/core/app/jsonschema';
type Response = DispatchNodeResultType<{
[NodeOutputKeyEnum.answerText]: string;
[DispatchNodeResponseKeyEnum.interactive]?: InteractiveNodeResponseType;
}>;
export const dispatchRunTools = async (props: DispatchToolModuleProps): Promise<Response> => {
@@ -64,244 +62,249 @@ export const dispatchRunTools = async (props: DispatchToolModuleProps): Promise<
}
} = props;
const toolModel = getLLMModel(model);
const useVision = aiChatVision && toolModel.vision;
const chatHistories = getHistories(history, histories);
try {
const toolModel = getLLMModel(model);
const useVision = aiChatVision && toolModel.vision;
const chatHistories = getHistories(history, histories);
props.params.aiChatVision = aiChatVision && toolModel.vision;
props.params.aiChatReasoning = aiChatReasoning && toolModel.reasoning;
const fileUrlInput = inputs.find((item) => item.key === NodeInputKeyEnum.fileUrlList);
if (!fileUrlInput || !fileUrlInput.value || fileUrlInput.value.length === 0) {
fileLinks = undefined;
}
console.log(fileLinks, 22);
props.params.aiChatVision = aiChatVision && toolModel.vision;
props.params.aiChatReasoning = aiChatReasoning && toolModel.reasoning;
const fileUrlInput = inputs.find((item) => item.key === NodeInputKeyEnum.fileUrlList);
if (!fileUrlInput || !fileUrlInput.value || fileUrlInput.value.length === 0) {
fileLinks = undefined;
}
const toolNodeIds = filterToolNodeIdByEdges({ nodeId, edges: runtimeEdges });
const toolNodeIds = filterToolNodeIdByEdges({ nodeId, edges: runtimeEdges });
// Gets the module to which the tool is connected
const toolNodes = toolNodeIds
.map((nodeId) => {
const tool = runtimeNodes.find((item) => item.nodeId === nodeId);
return tool;
})
.filter(Boolean)
.map<ToolNodeItemType>((tool) => {
const toolParams: FlowNodeInputItemType[] = [];
// Raw JSON schema (MCP tool)
let jsonSchema: JSONSchemaInputType | undefined = undefined;
tool?.inputs.forEach((input) => {
if (input.toolDescription) {
toolParams.push(input);
}
// Gets the module to which the tool is connected
const toolNodes = toolNodeIds
.map((nodeId) => {
const tool = runtimeNodes.find((item) => item.nodeId === nodeId);
return tool;
})
.filter(Boolean)
.map<ToolNodeItemType>((tool) => {
const toolParams: FlowNodeInputItemType[] = [];
// Raw JSON schema (MCP tool)
let jsonSchema: JSONSchemaInputType | undefined = undefined;
tool?.inputs.forEach((input) => {
if (input.toolDescription) {
toolParams.push(input);
}
if (input.key === NodeInputKeyEnum.toolData || input.key === 'toolData') {
const value = input.value as McpToolDataType;
jsonSchema = value.inputSchema;
}
if (input.key === NodeInputKeyEnum.toolData || input.key === 'toolData') {
const value = input.value as McpToolDataType;
jsonSchema = value.inputSchema;
}
});
return {
...(tool as RuntimeNodeItemType),
toolParams,
jsonSchema
};
});
return {
...(tool as RuntimeNodeItemType),
toolParams,
jsonSchema
};
// Check interactive entry
props.node.isEntry = false;
const hasReadFilesTool = toolNodes.some(
(item) => item.flowNodeType === FlowNodeTypeEnum.readFiles
);
const globalFiles = chatValue2RuntimePrompt(query).files;
const { documentQuoteText, userFiles } = await getMultiInput({
runningUserInfo,
histories: chatHistories,
requestOrigin,
maxFiles: chatConfig?.fileSelectConfig?.maxFiles || 20,
customPdfParse: chatConfig?.fileSelectConfig?.customPdfParse,
fileLinks,
inputFiles: globalFiles,
hasReadFilesTool
});
// Check interactive entry
props.node.isEntry = false;
const hasReadFilesTool = toolNodes.some(
(item) => item.flowNodeType === FlowNodeTypeEnum.readFiles
);
const globalFiles = chatValue2RuntimePrompt(query).files;
const { documentQuoteText, userFiles } = await getMultiInput({
runningUserInfo,
histories: chatHistories,
requestOrigin,
maxFiles: chatConfig?.fileSelectConfig?.maxFiles || 20,
customPdfParse: chatConfig?.fileSelectConfig?.customPdfParse,
fileLinks,
inputFiles: globalFiles,
hasReadFilesTool
});
const concatenateSystemPrompt = [
toolModel.defaultSystemChatPrompt,
systemPrompt,
documentQuoteText
? replaceVariable(getDocumentQuotePrompt(version), {
quote: documentQuoteText
})
: ''
]
.filter(Boolean)
.join('\n\n===---===---===\n\n');
const messages: ChatItemType[] = (() => {
const value: ChatItemType[] = [
...getSystemPrompt_ChatItemType(concatenateSystemPrompt),
// Add file input prompt to histories
...chatHistories.map((item) => {
if (item.obj === ChatRoleEnum.Human) {
return {
...item,
value: toolCallMessagesAdapt({
userInput: item.value,
skip: !hasReadFilesTool
})
};
}
return item;
}),
{
obj: ChatRoleEnum.Human,
value: toolCallMessagesAdapt({
skip: !hasReadFilesTool,
userInput: runtimePrompt2ChatsValue({
text: userChatInput,
files: userFiles
const concatenateSystemPrompt = [
toolModel.defaultSystemChatPrompt,
systemPrompt,
documentQuoteText
? replaceVariable(getDocumentQuotePrompt(version), {
quote: documentQuoteText
})
})
}
];
if (lastInteractive && isEntry) {
return value.slice(0, -2);
}
return value;
})();
: ''
]
.filter(Boolean)
.join('\n\n===---===---===\n\n');
// censor model and system key
if (toolModel.censor && !externalProvider.openaiAccount?.key) {
await postTextCensor({
text: `${systemPrompt}
const messages: ChatItemType[] = (() => {
const value: ChatItemType[] = [
...getSystemPrompt_ChatItemType(concatenateSystemPrompt),
// Add file input prompt to histories
...chatHistories.map((item) => {
if (item.obj === ChatRoleEnum.Human) {
return {
...item,
value: toolCallMessagesAdapt({
userInput: item.value,
skip: !hasReadFilesTool
})
};
}
return item;
}),
{
obj: ChatRoleEnum.Human,
value: toolCallMessagesAdapt({
skip: !hasReadFilesTool,
userInput: runtimePrompt2ChatsValue({
text: userChatInput,
files: userFiles
})
})
}
];
if (lastInteractive && isEntry) {
return value.slice(0, -2);
}
return value;
})();
// censor model and system key
if (toolModel.censor && !externalProvider.openaiAccount?.key) {
await postTextCensor({
text: `${systemPrompt}
${userChatInput}
`
});
}
const {
toolWorkflowInteractiveResponse,
dispatchFlowResponse, // tool flow response
toolNodeInputTokens,
toolNodeOutputTokens,
completeMessages = [], // The actual messages sent to the AI (only text is saved)
assistantResponses = [], // FastGPT system store assistant.value response
runTimes,
finish_reason
} = await (async () => {
const adaptMessages = chats2GPTMessages({
messages,
reserveId: false
// reserveTool: !!toolModel.toolChoice
});
const requestParams = {
runtimeNodes,
runtimeEdges,
toolNodes,
toolModel,
messages: adaptMessages,
interactiveEntryToolParams: lastInteractive?.toolParams
};
if (toolModel.toolChoice) {
return runToolWithToolChoice({
...props,
...requestParams,
maxRunToolTimes: 30
});
}
if (toolModel.functionCall) {
return runToolWithFunctionCall({
...props,
...requestParams
});
}
const lastMessage = adaptMessages[adaptMessages.length - 1];
if (typeof lastMessage?.content === 'string') {
lastMessage.content = replaceVariable(Prompt_Tool_Call, {
question: lastMessage.content
const {
toolWorkflowInteractiveResponse,
dispatchFlowResponse, // tool flow response
toolNodeInputTokens,
toolNodeOutputTokens,
completeMessages = [], // The actual messages sent to the AI (only text is saved)
assistantResponses = [], // FastGPT system store assistant.value response
runTimes,
finish_reason
} = await (async () => {
const adaptMessages = chats2GPTMessages({
messages,
reserveId: false
// reserveTool: !!toolModel.toolChoice
});
} else if (Array.isArray(lastMessage.content)) {
// array, replace last element
const lastText = lastMessage.content[lastMessage.content.length - 1];
if (lastText.type === 'text') {
lastText.text = replaceVariable(Prompt_Tool_Call, {
question: lastText.text
const requestParams = {
runtimeNodes,
runtimeEdges,
toolNodes,
toolModel,
messages: adaptMessages,
interactiveEntryToolParams: lastInteractive?.toolParams
};
if (toolModel.toolChoice) {
return runToolWithToolChoice({
...props,
...requestParams,
maxRunToolTimes: 30
});
}
if (toolModel.functionCall) {
return runToolWithFunctionCall({
...props,
...requestParams
});
}
const lastMessage = adaptMessages[adaptMessages.length - 1];
if (typeof lastMessage?.content === 'string') {
lastMessage.content = replaceVariable(Prompt_Tool_Call, {
question: lastMessage.content
});
} else if (Array.isArray(lastMessage.content)) {
// array, replace last element
const lastText = lastMessage.content[lastMessage.content.length - 1];
if (lastText.type === 'text') {
lastText.text = replaceVariable(Prompt_Tool_Call, {
question: lastText.text
});
} else {
return Promise.reject('Prompt call invalid input');
}
} else {
return Promise.reject('Prompt call invalid input');
}
} else {
return Promise.reject('Prompt call invalid input');
}
return runToolWithPromptCall({
...props,
...requestParams
return runToolWithPromptCall({
...props,
...requestParams
});
})();
const { totalPoints, modelName } = formatModelChars2Points({
model,
inputTokens: toolNodeInputTokens,
outputTokens: toolNodeOutputTokens,
modelType: ModelTypeEnum.llm
});
})();
const toolAIUsage = externalProvider.openaiAccount?.key ? 0 : totalPoints;
const { totalPoints, modelName } = formatModelChars2Points({
model,
inputTokens: toolNodeInputTokens,
outputTokens: toolNodeOutputTokens,
modelType: ModelTypeEnum.llm
});
const toolAIUsage = externalProvider.openaiAccount?.key ? 0 : totalPoints;
// flat child tool response
const childToolResponse = dispatchFlowResponse.map((item) => item.flowResponses).flat();
// flat child tool response
const childToolResponse = dispatchFlowResponse.map((item) => item.flowResponses).flat();
// concat tool usage
const totalPointsUsage =
toolAIUsage +
dispatchFlowResponse.reduce((sum, item) => {
const childrenTotal = item.flowUsages.reduce((sum, item) => sum + item.totalPoints, 0);
return sum + childrenTotal;
}, 0);
const flatUsages = dispatchFlowResponse.map((item) => item.flowUsages).flat();
// concat tool usage
const totalPointsUsage =
toolAIUsage +
dispatchFlowResponse.reduce((sum, item) => {
const childrenTotal = item.flowUsages.reduce((sum, item) => sum + item.totalPoints, 0);
return sum + childrenTotal;
}, 0);
const flatUsages = dispatchFlowResponse.map((item) => item.flowUsages).flat();
const previewAssistantResponses = filterToolResponseToPreview(assistantResponses);
const previewAssistantResponses = filterToolResponseToPreview(assistantResponses);
return {
[DispatchNodeResponseKeyEnum.runTimes]: runTimes,
[NodeOutputKeyEnum.answerText]: previewAssistantResponses
.filter((item) => item.text?.content)
.map((item) => item.text?.content || '')
.join(''),
[DispatchNodeResponseKeyEnum.assistantResponses]: previewAssistantResponses,
[DispatchNodeResponseKeyEnum.nodeResponse]: {
// Points consumption shown to the user
totalPoints: totalPointsUsage,
toolCallInputTokens: toolNodeInputTokens,
toolCallOutputTokens: toolNodeOutputTokens,
childTotalPoints: flatUsages.reduce((sum, item) => sum + item.totalPoints, 0),
model: modelName,
query: userChatInput,
historyPreview: getHistoryPreview(
GPTMessages2Chats(completeMessages, false),
10000,
useVision
),
toolDetail: childToolResponse,
mergeSignId: nodeId,
finishReason: finish_reason
},
[DispatchNodeResponseKeyEnum.nodeDispatchUsages]: [
// Points consumed by the tool call itself
{
moduleName: name,
model: modelName,
totalPoints: toolAIUsage,
inputTokens: toolNodeInputTokens,
outputTokens: toolNodeOutputTokens
return {
data: {
[NodeOutputKeyEnum.answerText]: previewAssistantResponses
.filter((item) => item.text?.content)
.map((item) => item.text?.content || '')
.join('')
},
// Points consumed by the tools
...flatUsages
],
[DispatchNodeResponseKeyEnum.interactive]: toolWorkflowInteractiveResponse
};
[DispatchNodeResponseKeyEnum.runTimes]: runTimes,
[DispatchNodeResponseKeyEnum.assistantResponses]: previewAssistantResponses,
[DispatchNodeResponseKeyEnum.nodeResponse]: {
// Points consumption shown to the user
totalPoints: totalPointsUsage,
toolCallInputTokens: toolNodeInputTokens,
toolCallOutputTokens: toolNodeOutputTokens,
childTotalPoints: flatUsages.reduce((sum, item) => sum + item.totalPoints, 0),
model: modelName,
query: userChatInput,
historyPreview: getHistoryPreview(
GPTMessages2Chats(completeMessages, false),
10000,
useVision
),
toolDetail: childToolResponse,
mergeSignId: nodeId,
finishReason: finish_reason
},
[DispatchNodeResponseKeyEnum.nodeDispatchUsages]: [
// Points consumed by the tool call itself
{
moduleName: name,
model: modelName,
totalPoints: toolAIUsage,
inputTokens: toolNodeInputTokens,
outputTokens: toolNodeOutputTokens
},
// Points consumed by the tools
...flatUsages
],
[DispatchNodeResponseKeyEnum.interactive]: toolWorkflowInteractiveResponse
};
} catch (error) {
return getNodeErrResponse({ error });
}
};
const getMultiInput = async ({
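Both dispatchers in this PR now wrap their bodies in try/catch and return `getNodeErrResponse({ error })` instead of rejecting, so downstream nodes can consume the error output added by the catch-error feature. The real `getNodeErrResponse` lives in `../utils` and is not shown in this diff; the synchronous stand-in below only illustrates the control flow:

```typescript
// Simplified result shape: either a data payload or an error string.
type NodeResult<T> = { data?: T; error?: string };

// Stand-in for getNodeErrResponse: normalize any thrown value to a string.
const getNodeErrResponseSketch = ({ error }: { error: unknown }): NodeResult<never> => ({
  error: error instanceof Error ? error.message : String(error)
});

// The dispatch pattern: run the node body, convert failures into an
// error result instead of propagating the rejection.
function dispatchWithCatch<T>(run: () => T): NodeResult<T> {
  try {
    return { data: run() };
  } catch (error) {
    return getNodeErrResponseSketch({ error });
  }
}
```

In the PR the dispatchers are async, but the success/error branching is the same.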

View File

@@ -17,10 +17,7 @@ import type {
} from '@fastgpt/global/core/ai/type.d';
import { formatModelChars2Points } from '../../../../support/wallet/usage/utils';
import type { LLMModelItemType } from '@fastgpt/global/core/ai/model.d';
import {
ChatCompletionRequestMessageRoleEnum,
getLLMDefaultUsage
} from '@fastgpt/global/core/ai/constants';
import { ChatCompletionRequestMessageRoleEnum } from '@fastgpt/global/core/ai/constants';
import type {
ChatDispatchProps,
DispatchNodeResultType
@@ -47,7 +44,7 @@ import type { SearchDataResponseItemType } from '@fastgpt/global/core/dataset/ty
import type { NodeOutputKeyEnum } from '@fastgpt/global/core/workflow/constants';
import { NodeInputKeyEnum } from '@fastgpt/global/core/workflow/constants';
import { DispatchNodeResponseKeyEnum } from '@fastgpt/global/core/workflow/runtime/constants';
import { checkQuoteQAValue, getHistories } from '../utils';
import { checkQuoteQAValue, getNodeErrResponse, getHistories } from '../utils';
import { filterSearchResultsByMaxChars } from '../../utils';
import { getHistoryPreview } from '@fastgpt/global/core/chat/utils';
import { computedMaxToken, llmCompletionsBodyFormat } from '../../../ai/utils';
@@ -59,6 +56,7 @@ import { parseUrlToFileType } from '@fastgpt/global/common/file/tools';
import { i18nT } from '../../../../../web/i18n/utils';
import { ModelTypeEnum } from '@fastgpt/global/core/ai/model';
import { postTextCensor } from '../../../chat/postTextCensor';
import { getErrText } from '@fastgpt/global/common/error/utils';
export type ChatProps = ModuleDispatchProps<
AIChatNodeProps & {
@@ -67,11 +65,16 @@ export type ChatProps = ModuleDispatchProps<
[NodeInputKeyEnum.aiChatDatasetQuote]?: SearchDataResponseItemType[];
}
>;
export type ChatResponse = DispatchNodeResultType<{
[NodeOutputKeyEnum.answerText]: string;
[NodeOutputKeyEnum.reasoningText]?: string;
[NodeOutputKeyEnum.history]: ChatItemType[];
}>;
export type ChatResponse = DispatchNodeResultType<
{
[NodeOutputKeyEnum.answerText]: string;
[NodeOutputKeyEnum.reasoningText]?: string;
[NodeOutputKeyEnum.history]: ChatItemType[];
},
{
[NodeOutputKeyEnum.errorText]: string;
}
>;
/* request openai chat */
export const dispatchChatCompletion = async (props: ChatProps): Promise<ChatResponse> => {
@@ -114,243 +117,253 @@ export const dispatchChatCompletion = async (props: ChatProps): Promise<ChatResp
const modelConstantsData = getLLMModel(model);
if (!modelConstantsData) {
return Promise.reject(`Mode ${model} is undefined, you need to select a chat model.`);
return getNodeErrResponse({
error: `Model ${model} is undefined, you need to select a chat model.`
});
}
aiChatVision = modelConstantsData.vision && aiChatVision;
aiChatReasoning = !!aiChatReasoning && !!modelConstantsData.reasoning;
// Check fileLinks is reference variable
const fileUrlInput = inputs.find((item) => item.key === NodeInputKeyEnum.fileUrlList);
if (!fileUrlInput || !fileUrlInput.value || fileUrlInput.value.length === 0) {
fileLinks = undefined;
}
try {
aiChatVision = modelConstantsData.vision && aiChatVision;
aiChatReasoning = !!aiChatReasoning && !!modelConstantsData.reasoning;
// Check fileLinks is reference variable
const fileUrlInput = inputs.find((item) => item.key === NodeInputKeyEnum.fileUrlList);
if (!fileUrlInput || !fileUrlInput.value || fileUrlInput.value.length === 0) {
fileLinks = undefined;
}
const chatHistories = getHistories(history, histories);
quoteQA = checkQuoteQAValue(quoteQA);
const chatHistories = getHistories(history, histories);
quoteQA = checkQuoteQAValue(quoteQA);
const [{ datasetQuoteText }, { documentQuoteText, userFiles }] = await Promise.all([
filterDatasetQuote({
quoteQA,
const [{ datasetQuoteText }, { documentQuoteText, userFiles }] = await Promise.all([
filterDatasetQuote({
quoteQA,
model: modelConstantsData,
quoteTemplate: quoteTemplate || getQuoteTemplate(version)
}),
getMultiInput({
histories: chatHistories,
inputFiles,
fileLinks,
stringQuoteText,
requestOrigin,
maxFiles: chatConfig?.fileSelectConfig?.maxFiles || 20,
customPdfParse: chatConfig?.fileSelectConfig?.customPdfParse,
runningUserInfo
})
]);
if (!userChatInput && !documentQuoteText && userFiles.length === 0) {
return getNodeErrResponse({ error: i18nT('chat:AI_input_is_empty') });
}
const max_tokens = computedMaxToken({
model: modelConstantsData,
quoteTemplate: quoteTemplate || getQuoteTemplate(version)
}),
getMultiInput({
histories: chatHistories,
inputFiles,
fileLinks,
stringQuoteText,
requestOrigin,
maxFiles: chatConfig?.fileSelectConfig?.maxFiles || 20,
customPdfParse: chatConfig?.fileSelectConfig?.customPdfParse,
runningUserInfo
})
]);
maxToken
});
if (!userChatInput && !documentQuoteText && userFiles.length === 0) {
return Promise.reject(i18nT('chat:AI_input_is_empty'));
}
const max_tokens = computedMaxToken({
model: modelConstantsData,
maxToken
});
const [{ filterMessages }] = await Promise.all([
getChatMessages({
model: modelConstantsData,
maxTokens: max_tokens,
histories: chatHistories,
useDatasetQuote: quoteQA !== undefined,
datasetQuoteText,
aiChatQuoteRole,
datasetQuotePrompt: quotePrompt,
version,
userChatInput,
systemPrompt,
userFiles,
documentQuoteText
}),
// When censor is enabled and the system key is used, check the content
(() => {
if (modelConstantsData.censor && !externalProvider.openaiAccount?.key) {
return postTextCensor({
text: `${systemPrompt}
const [{ filterMessages }] = await Promise.all([
getChatMessages({
model: modelConstantsData,
maxTokens: max_tokens,
histories: chatHistories,
useDatasetQuote: quoteQA !== undefined,
datasetQuoteText,
aiChatQuoteRole,
datasetQuotePrompt: quotePrompt,
version,
userChatInput,
systemPrompt,
userFiles,
documentQuoteText
}),
// Censor = true and system key, will check content
(() => {
if (modelConstantsData.censor && !externalProvider.openaiAccount?.key) {
return postTextCensor({
text: `${systemPrompt}
${userChatInput}
`
});
});
}
})()
]);
const requestMessages = await loadRequestMessages({
messages: filterMessages,
useVision: aiChatVision,
origin: requestOrigin
});
const requestBody = llmCompletionsBodyFormat(
{
model: modelConstantsData.model,
stream,
messages: requestMessages,
temperature,
max_tokens,
top_p: aiChatTopP,
stop: aiChatStopSign,
response_format: {
type: aiChatResponseFormat as any,
json_schema: aiChatJsonSchema
}
},
modelConstantsData
);
// console.log(JSON.stringify(requestBody, null, 2), '===');
const { response, isStreamResponse, getEmptyResponseTip } = await createChatCompletion({
body: requestBody,
userKey: externalProvider.openaiAccount,
options: {
headers: {
Accept: 'application/json, text/plain, */*'
}
}
})()
]);
});
const requestMessages = await loadRequestMessages({
messages: filterMessages,
useVision: aiChatVision,
origin: requestOrigin
});
let { answerText, reasoningText, finish_reason, inputTokens, outputTokens } =
await (async () => {
if (isStreamResponse) {
if (!res || res.closed) {
return {
answerText: '',
reasoningText: '',
finish_reason: 'close' as const,
inputTokens: 0,
outputTokens: 0
};
}
// sse response
const { answer, reasoning, finish_reason, usage } = await streamResponse({
res,
stream: response,
aiChatReasoning,
parseThinkTag: modelConstantsData.reasoning,
isResponseAnswerText,
workflowStreamResponse,
retainDatasetCite
});
const requestBody = llmCompletionsBodyFormat(
{
model: modelConstantsData.model,
stream,
messages: requestMessages,
temperature,
max_tokens,
top_p: aiChatTopP,
stop: aiChatStopSign,
response_format: {
type: aiChatResponseFormat as any,
json_schema: aiChatJsonSchema
}
},
modelConstantsData
);
// console.log(JSON.stringify(requestBody, null, 2), '===');
const { response, isStreamResponse, getEmptyResponseTip } = await createChatCompletion({
body: requestBody,
userKey: externalProvider.openaiAccount,
options: {
headers: {
Accept: 'application/json, text/plain, */*'
}
}
});
let { answerText, reasoningText, finish_reason, inputTokens, outputTokens } = await (async () => {
if (isStreamResponse) {
if (!res || res.closed) {
return {
answerText: '',
reasoningText: '',
finish_reason: 'close' as const,
inputTokens: 0,
outputTokens: 0
};
}
// sse response
const { answer, reasoning, finish_reason, usage } = await streamResponse({
res,
stream: response,
aiChatReasoning,
parseThinkTag: modelConstantsData.reasoning,
isResponseAnswerText,
workflowStreamResponse,
retainDatasetCite
});
return {
answerText: answer,
reasoningText: reasoning,
finish_reason,
inputTokens: usage?.prompt_tokens,
outputTokens: usage?.completion_tokens
};
} else {
const finish_reason = response.choices?.[0]?.finish_reason as CompletionFinishReason;
const usage = response.usage;
const { content, reasoningContent } = (() => {
const content = response.choices?.[0]?.message?.content || '';
// @ts-ignore
const reasoningContent: string = response.choices?.[0]?.message?.reasoning_content || '';
// API already parse reasoning content
if (reasoningContent || !aiChatReasoning) {
return {
content,
reasoningContent
answerText: answer,
reasoningText: reasoning,
finish_reason,
inputTokens: usage?.prompt_tokens,
outputTokens: usage?.completion_tokens
};
} else {
const finish_reason = response.choices?.[0]?.finish_reason as CompletionFinishReason;
const usage = response.usage;
const { content, reasoningContent } = (() => {
const content = response.choices?.[0]?.message?.content || '';
const reasoningContent: string =
// @ts-ignore
response.choices?.[0]?.message?.reasoning_content || '';
// API already parse reasoning content
if (reasoningContent || !aiChatReasoning) {
return {
content,
reasoningContent
};
}
const [think, answer] = parseReasoningContent(content);
return {
content: answer,
reasoningContent: think
};
})();
const formatReasonContent = removeDatasetCiteText(reasoningContent, retainDatasetCite);
const formatContent = removeDatasetCiteText(content, retainDatasetCite);
// Some models do not support streaming
if (aiChatReasoning && reasoningContent) {
workflowStreamResponse?.({
event: SseResponseEventEnum.fastAnswer,
data: textAdaptGptResponse({
reasoning_content: formatReasonContent
})
});
}
if (isResponseAnswerText && content) {
workflowStreamResponse?.({
event: SseResponseEventEnum.fastAnswer,
data: textAdaptGptResponse({
text: formatContent
})
});
}
return {
reasoningText: formatReasonContent,
answerText: formatContent,
finish_reason,
inputTokens: usage?.prompt_tokens,
outputTokens: usage?.completion_tokens
};
}
const [think, answer] = parseReasoningContent(content);
return {
content: answer,
reasoningContent: think
};
})();
const formatReasonContent = removeDatasetCiteText(reasoningContent, retainDatasetCite);
const formatContent = removeDatasetCiteText(content, retainDatasetCite);
// Some models do not support streaming
if (aiChatReasoning && reasoningContent) {
workflowStreamResponse?.({
event: SseResponseEventEnum.fastAnswer,
data: textAdaptGptResponse({
reasoning_content: formatReasonContent
})
});
}
if (isResponseAnswerText && content) {
workflowStreamResponse?.({
event: SseResponseEventEnum.fastAnswer,
data: textAdaptGptResponse({
text: formatContent
})
});
}
return {
reasoningText: formatReasonContent,
answerText: formatContent,
finish_reason,
inputTokens: usage?.prompt_tokens,
outputTokens: usage?.completion_tokens
};
if (!answerText && !reasoningText) {
return getNodeErrResponse({ error: getEmptyResponseTip() });
}
})();
if (!answerText && !reasoningText) {
return Promise.reject(getEmptyResponseTip());
}
const AIMessages: ChatCompletionMessageParam[] = [
{
role: ChatCompletionRequestMessageRoleEnum.Assistant,
content: answerText,
reasoning_text: reasoningText // reasoning_text is only recorded for response, but not for request
}
];
const completeMessages = [...requestMessages, ...AIMessages];
const chatCompleteMessages = GPTMessages2Chats(completeMessages);
inputTokens = inputTokens || (await countGptMessagesTokens(requestMessages));
outputTokens = outputTokens || (await countGptMessagesTokens(AIMessages));
const { totalPoints, modelName } = formatModelChars2Points({
model,
inputTokens,
outputTokens,
modelType: ModelTypeEnum.llm
});
return {
answerText: answerText.trim(),
reasoningText,
[DispatchNodeResponseKeyEnum.nodeResponse]: {
totalPoints: externalProvider.openaiAccount?.key ? 0 : totalPoints,
model: modelName,
inputTokens: inputTokens,
outputTokens: outputTokens,
query: `${userChatInput}`,
maxToken: max_tokens,
reasoningText,
historyPreview: getHistoryPreview(chatCompleteMessages, 10000, aiChatVision),
contextTotalLen: completeMessages.length,
finishReason: finish_reason
},
[DispatchNodeResponseKeyEnum.nodeDispatchUsages]: [
const AIMessages: ChatCompletionMessageParam[] = [
{
moduleName: name,
role: ChatCompletionRequestMessageRoleEnum.Assistant,
content: answerText,
reasoning_text: reasoningText // reasoning_text is only recorded for response, but not for request
}
];
const completeMessages = [...requestMessages, ...AIMessages];
const chatCompleteMessages = GPTMessages2Chats(completeMessages);
inputTokens = inputTokens || (await countGptMessagesTokens(requestMessages));
outputTokens = outputTokens || (await countGptMessagesTokens(AIMessages));
const { totalPoints, modelName } = formatModelChars2Points({
model,
inputTokens,
outputTokens,
modelType: ModelTypeEnum.llm
});
return {
data: {
answerText: answerText.trim(),
reasoningText,
history: chatCompleteMessages
},
[DispatchNodeResponseKeyEnum.nodeResponse]: {
totalPoints: externalProvider.openaiAccount?.key ? 0 : totalPoints,
model: modelName,
inputTokens: inputTokens,
outputTokens: outputTokens
}
],
[DispatchNodeResponseKeyEnum.toolResponses]: answerText,
history: chatCompleteMessages
};
outputTokens: outputTokens,
query: `${userChatInput}`,
maxToken: max_tokens,
reasoningText,
historyPreview: getHistoryPreview(chatCompleteMessages, 10000, aiChatVision),
contextTotalLen: completeMessages.length,
finishReason: finish_reason
},
[DispatchNodeResponseKeyEnum.nodeDispatchUsages]: [
{
moduleName: name,
totalPoints: externalProvider.openaiAccount?.key ? 0 : totalPoints,
model: modelName,
inputTokens: inputTokens,
outputTokens: outputTokens
}
],
[DispatchNodeResponseKeyEnum.toolResponses]: answerText
};
} catch (error) {
return getNodeErrResponse({ error });
}
};
async function filterDatasetQuote({

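The chat, extract, and search dispatchers in this diff converge on the same pattern: wrap the node body in try/catch and return a node-level error payload via `getNodeErrResponse` instead of rejecting the whole workflow. A minimal, self-contained sketch of that pattern follows — the real helper lives in `../utils` with a richer signature; the shapes below are assumed for illustration only:

```typescript
// Hypothetical, simplified stand-in for the helper imported from '../utils'.
type NodeErrResponse = {
  error: string;
  nodeResponse: { error: string; [key: string]: unknown };
};

// Turn any thrown value into a uniform node-level error payload,
// optionally merging extra node-response fields (e.g. a module logo).
function getNodeErrResponse({
  error,
  customNodeResponse
}: {
  error: unknown;
  customNodeResponse?: Record<string, unknown>;
}): NodeErrResponse {
  const errText = error instanceof Error ? error.message : String(error);
  return {
    error: errText,
    nodeResponse: { error: errText, ...customNodeResponse }
  };
}

// A demo dispatcher: errors become a return value, so a downstream
// "catch error" branch can consume them instead of the run aborting.
function dispatchDemoNode(input: string) {
  try {
    if (!input) throw new Error('Input is empty');
    return { data: { answerText: input.trim() } };
  } catch (error) {
    return getNodeErrResponse({ error });
  }
}
```

The design choice this diff makes is that a node failure is data, not an exception: only the node's own edge routing decides what happens next.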
View File

@ -78,7 +78,9 @@ export const dispatchClassifyQuestion = async (props: Props): Promise<CQResponse
});
return {
data: {
[NodeOutputKeyEnum.cqResult]: result.value
},
[DispatchNodeResponseKeyEnum.skipHandleId]: agents
.filter((item) => item.key !== result.key)
.map((item) => getHandleId(nodeId, 'source', item.key)),

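The recurring change across these hunks is that node outputs move from top-level keys into a `data` envelope, while dispatch metadata (`nodeResponse`, usages, skip handles) stays at the top level. A toy before/after, with key names assumed from the diff rather than taken from the real type definitions:

```typescript
// Old shape: user-facing outputs mixed with dispatch metadata at the top level.
const oldResult = {
  cqResult: 'classified-branch',
  nodeResponse: { totalPoints: 0 }
};

// New shape: outputs live under `data`, metadata stays top level, so a new
// top-level `error` field can never collide with a user-defined output key.
const newResult = {
  data: { cqResult: 'classified-branch' },
  nodeResponse: { totalPoints: 0 }
};
```

Separating the two namespaces is what makes the workflow-level catch-error feature safe to bolt onto every node type.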
View File

@ -19,7 +19,7 @@ import { DispatchNodeResponseKeyEnum } from '@fastgpt/global/core/workflow/runti
import type { ModuleDispatchProps } from '@fastgpt/global/core/workflow/runtime/type';
import { sliceJsonStr } from '@fastgpt/global/common/string/tools';
import { type LLMModelItemType } from '@fastgpt/global/core/ai/model.d';
import { getNodeErrResponse, getHistories } from '../utils';
import { getLLMModel } from '../../../ai/model';
import { formatModelChars2Points } from '../../../../support/wallet/usage/utils';
import json5 from 'json5';
@ -46,6 +46,7 @@ type Props = ModuleDispatchProps<{
type Response = DispatchNodeResultType<{
[NodeOutputKeyEnum.success]: boolean;
[NodeOutputKeyEnum.contextExtractFields]: string;
[key: string]: any;
}>;
type ActionProps = Props & { extractModel: LLMModelItemType; lastMemory?: Record<string, any> };
@ -62,7 +63,7 @@ export async function dispatchContentExtract(props: Props): Promise<Response> {
} = props;
if (!content) {
return getNodeErrResponse({ error: 'Input is empty' });
}
const extractModel = getLLMModel(model);
@ -75,88 +76,94 @@ export async function dispatchContentExtract(props: Props): Promise<Response> {
any
>;
  try {
    const { arg, inputTokens, outputTokens } = await (async () => {
      if (extractModel.toolChoice) {
        return toolChoice({
          ...props,
          histories: chatHistories,
          extractModel,
          lastMemory
        });
      }
      return completions({
        ...props,
        histories: chatHistories,
        extractModel,
        lastMemory
      });
    })();

    // remove invalid key
    for (let key in arg) {
      const item = extractKeys.find((item) => item.key === key);
      if (!item) {
        delete arg[key];
      }
      if (arg[key] === '') {
        delete arg[key];
      }
    }

    // auto fill required fields
    extractKeys.forEach((item) => {
      if (item.required && arg[item.key] === undefined) {
        arg[item.key] = item.defaultValue || '';
      }
    });

    // auth fields
    let success = !extractKeys.find((item) => !(item.key in arg));
    // auth empty value
    if (success) {
      for (const key in arg) {
        const item = extractKeys.find((item) => item.key === key);
        if (!item) {
          success = false;
          break;
        }
      }
    }

    const { totalPoints, modelName } = formatModelChars2Points({
      model: extractModel.model,
      inputTokens: inputTokens,
      outputTokens: outputTokens,
      modelType: ModelTypeEnum.llm
    });

    return {
      data: {
        [NodeOutputKeyEnum.success]: success,
        [NodeOutputKeyEnum.contextExtractFields]: JSON.stringify(arg),
        ...arg
      },
      [DispatchNodeResponseKeyEnum.memories]: {
        [memoryKey]: arg
      },
      [DispatchNodeResponseKeyEnum.nodeResponse]: {
        totalPoints: externalProvider.openaiAccount?.key ? 0 : totalPoints,
        model: modelName,
        query: content,
        inputTokens,
        outputTokens,
        extractDescription: description,
        extractResult: arg,
        contextTotalLen: chatHistories.length + 2
      },
      [DispatchNodeResponseKeyEnum.nodeDispatchUsages]: [
        {
          moduleName: name,
          totalPoints: externalProvider.openaiAccount?.key ? 0 : totalPoints,
          model: modelName,
          inputTokens,
          outputTokens
        }
      ]
    };
  } catch (error) {
    return getNodeErrResponse({ error });
  }
}
const getJsonSchema = ({ params: { extractKeys } }: ActionProps) => {

View File

@ -0,0 +1,208 @@
import type { ChatItemType } from '@fastgpt/global/core/chat/type.d';
import type { ModuleDispatchProps } from '@fastgpt/global/core/workflow/runtime/type';
import { dispatchWorkFlow } from '../index';
import { ChatRoleEnum } from '@fastgpt/global/core/chat/constants';
import { SseResponseEventEnum } from '@fastgpt/global/core/workflow/runtime/constants';
import {
getWorkflowEntryNodeIds,
storeEdges2RuntimeEdges,
rewriteNodeOutputByHistories,
storeNodes2RuntimeNodes,
textAdaptGptResponse
} from '@fastgpt/global/core/workflow/runtime/utils';
import type { NodeInputKeyEnum } from '@fastgpt/global/core/workflow/constants';
import { NodeOutputKeyEnum } from '@fastgpt/global/core/workflow/constants';
import { DispatchNodeResponseKeyEnum } from '@fastgpt/global/core/workflow/runtime/constants';
import { filterSystemVariables, getNodeErrResponse, getHistories } from '../utils';
import { chatValue2RuntimePrompt, runtimePrompt2ChatsValue } from '@fastgpt/global/core/chat/adapt';
import { type DispatchNodeResultType } from '@fastgpt/global/core/workflow/runtime/type';
import { authAppByTmbId } from '../../../../support/permission/app/auth';
import { ReadPermissionVal } from '@fastgpt/global/support/permission/constant';
import { getAppVersionById } from '../../../app/version/controller';
import { parseUrlToFileType } from '@fastgpt/global/common/file/tools';
import { getUserChatInfoAndAuthTeamPoints } from '../../../../support/permission/auth/team';
type Props = ModuleDispatchProps<{
[NodeInputKeyEnum.userChatInput]: string;
[NodeInputKeyEnum.history]?: ChatItemType[] | number;
[NodeInputKeyEnum.fileUrlList]?: string[];
[NodeInputKeyEnum.forbidStream]?: boolean;
}>;
type Response = DispatchNodeResultType<{
[NodeOutputKeyEnum.answerText]: string;
[NodeOutputKeyEnum.history]: ChatItemType[];
}>;
export const dispatchRunAppNode = async (props: Props): Promise<Response> => {
const {
runningAppInfo,
histories,
query,
lastInteractive,
node: { pluginId: appId, version },
workflowStreamResponse,
params,
variables
} = props;
const {
system_forbid_stream = false,
userChatInput,
history,
fileUrlList,
...childrenAppVariables
} = params;
const { files } = chatValue2RuntimePrompt(query);
const userInputFiles = (() => {
if (fileUrlList) {
return fileUrlList.map((url) => parseUrlToFileType(url)).filter(Boolean);
}
// Adapt version 4.8.13 upgrade
return files;
})();
if (!userChatInput && !userInputFiles) {
return getNodeErrResponse({ error: 'Input is empty' });
}
if (!appId) {
return getNodeErrResponse({ error: 'pluginId is empty' });
}
try {
// Auth the app by tmbId(Not the user, but the workflow user)
const { app: appData } = await authAppByTmbId({
appId: appId,
tmbId: runningAppInfo.tmbId,
per: ReadPermissionVal
});
const { nodes, edges, chatConfig } = await getAppVersionById({
appId,
versionId: version,
app: appData
});
const childStreamResponse = system_forbid_stream ? false : props.stream;
// Auto line
if (childStreamResponse) {
workflowStreamResponse?.({
event: SseResponseEventEnum.answer,
data: textAdaptGptResponse({
text: '\n'
})
});
}
const chatHistories = getHistories(history, histories);
// Rewrite children app variables
const systemVariables = filterSystemVariables(variables);
const { externalProvider } = await getUserChatInfoAndAuthTeamPoints(appData.tmbId);
const childrenRunVariables = {
...systemVariables,
...childrenAppVariables,
histories: chatHistories,
appId: String(appData._id),
...(externalProvider ? externalProvider.externalWorkflowVariables : {})
};
const childrenInteractive =
lastInteractive?.type === 'childrenInteractive'
? lastInteractive.params.childrenResponse
: undefined;
const runtimeNodes = rewriteNodeOutputByHistories(
storeNodes2RuntimeNodes(
nodes,
getWorkflowEntryNodeIds(nodes, childrenInteractive || undefined)
),
childrenInteractive
);
const runtimeEdges = storeEdges2RuntimeEdges(edges, childrenInteractive);
const theQuery = childrenInteractive
? query
: runtimePrompt2ChatsValue({ files: userInputFiles, text: userChatInput });
const {
flowResponses,
flowUsages,
assistantResponses,
runTimes,
workflowInteractiveResponse,
system_memories
} = await dispatchWorkFlow({
...props,
lastInteractive: childrenInteractive,
// Rewrite stream mode
...(system_forbid_stream
? {
stream: false,
workflowStreamResponse: undefined
}
: {}),
runningAppInfo: {
id: String(appData._id),
teamId: String(appData.teamId),
tmbId: String(appData.tmbId),
isChildApp: true
},
runtimeNodes,
runtimeEdges,
histories: chatHistories,
variables: childrenRunVariables,
query: theQuery,
chatConfig
});
const completeMessages = chatHistories.concat([
{
obj: ChatRoleEnum.Human,
value: query
},
{
obj: ChatRoleEnum.AI,
value: assistantResponses
}
]);
const { text } = chatValue2RuntimePrompt(assistantResponses);
const usagePoints = flowUsages.reduce((sum, item) => sum + (item.totalPoints || 0), 0);
return {
data: {
[NodeOutputKeyEnum.answerText]: text,
[NodeOutputKeyEnum.history]: completeMessages
},
system_memories,
[DispatchNodeResponseKeyEnum.interactive]: workflowInteractiveResponse
? {
type: 'childrenInteractive',
params: {
childrenResponse: workflowInteractiveResponse
}
}
: undefined,
assistantResponses: system_forbid_stream ? [] : assistantResponses,
[DispatchNodeResponseKeyEnum.runTimes]: runTimes,
[DispatchNodeResponseKeyEnum.nodeResponse]: {
moduleLogo: appData.avatar,
totalPoints: usagePoints,
query: userChatInput,
textOutput: text,
pluginDetail: appData.permission.hasWritePer ? flowResponses : undefined,
mergeSignId: props.node.nodeId
},
[DispatchNodeResponseKeyEnum.nodeDispatchUsages]: [
{
moduleName: appData.name,
totalPoints: usagePoints
}
],
[DispatchNodeResponseKeyEnum.toolResponses]: text
};
} catch (error) {
return getNodeErrResponse({ error });
}
};

View File

@ -17,23 +17,25 @@ import type { StoreSecretValueType } from '@fastgpt/global/common/secret/type';
import { getSystemPluginById } from '../../../app/plugin/controller';
import { textAdaptGptResponse } from '@fastgpt/global/core/workflow/runtime/utils';
import { pushTrack } from '../../../../common/middle/tracks/utils';
import { getNodeErrResponse } from '../utils';
type SystemInputConfigType = {
type: SystemToolInputTypeEnum;
value: StoreSecretValueType;
};
type RunToolProps = ModuleDispatchProps<{
  [NodeInputKeyEnum.toolData]?: McpToolDataType;
  [NodeInputKeyEnum.systemInputConfig]?: SystemInputConfigType;
  [key: string]: any;
}>;
type RunToolResponse = DispatchNodeResultType<
  {
    [NodeOutputKeyEnum.rawResponse]?: any;
    [key: string]: any;
  },
  Record<string, any>
>;
export const dispatchRunTool = async (props: RunToolProps): Promise<RunToolResponse> => {
@ -43,7 +45,7 @@ export const dispatchRunTool = async (props: RunToolProps): Promise<RunToolRespo
runningAppInfo,
variables,
workflowStreamResponse,
node: { name, avatar, toolConfig, version, catchError }
} = props;
const systemToolId = toolConfig?.systemTool?.toolId;
@ -80,50 +82,69 @@ export const dispatchRunTool = async (props: RunToolProps): Promise<RunToolRespo
const formatToolId = tool.id.split('-')[1];
  const res = await runSystemTool({
    toolId: formatToolId,
    inputs,
    systemVar: {
      user: {
        id: variables.userId,
        teamId: runningUserInfo.teamId,
        name: runningUserInfo.tmbId
      },
      app: {
        id: runningAppInfo.id,
        name: runningAppInfo.id
      },
      tool: {
        id: formatToolId,
        version: version || tool.versionList?.[0]?.value || ''
      },
      time: variables.cTime
    },
    onMessage: ({ type, content }) => {
      if (workflowStreamResponse && content) {
        workflowStreamResponse({
          event: type as unknown as SseResponseEventEnum,
          data: textAdaptGptResponse({
            text: content
          })
        });
      }
    }
  });
  let result = res.output || {};

  if (res.error) {
    // Adapt to legacy versions: older versions had no catchError, and some tools return the error field as a normal response.
    if (catchError === undefined && typeof res.error === 'object') {
      return {
        data: res.error,
        [DispatchNodeResponseKeyEnum.nodeResponse]: {
          toolRes: res.error,
          moduleLogo: avatar
        },
        [DispatchNodeResponseKeyEnum.toolResponses]: res.error
      };
    }
    // String error (common error, not custom)
    if (typeof res.error === 'string') {
      throw new Error(res.error);
    }
    // Custom error field
    return {
      error: res.error,
      [DispatchNodeResponseKeyEnum.nodeResponse]: {
        error: res.error,
        moduleLogo: avatar
      },
      [DispatchNodeResponseKeyEnum.toolResponses]: res.error
    };
  }
const usagePoints = (() => {
  if (params.system_input_config?.type !== SystemToolInputTypeEnum.system) {
return 0;
}
return tool.currentCost ?? 0;
@ -140,6 +161,7 @@ export const dispatchRunTool = async (props: RunToolProps): Promise<RunToolRespo
});
return {
data: result,
[DispatchNodeResponseKeyEnum.nodeResponse]: {
toolRes: result,
moduleLogo: avatar,
@ -151,8 +173,7 @@ export const dispatchRunTool = async (props: RunToolProps): Promise<RunToolRespo
moduleName: name,
totalPoints: usagePoints
}
]
};
} else {
// mcp tool
@ -168,12 +189,14 @@ export const dispatchRunTool = async (props: RunToolProps): Promise<RunToolRespo
const result = await mcpClient.toolCall(toolName, restParams);
return {
data: {
[NodeOutputKeyEnum.rawResponse]: result
},
[DispatchNodeResponseKeyEnum.nodeResponse]: {
toolRes: result,
moduleLogo: avatar
},
[DispatchNodeResponseKeyEnum.toolResponses]: result
};
}
} catch (error) {
@ -188,12 +211,11 @@ export const dispatchRunTool = async (props: RunToolProps): Promise<RunToolRespo
});
}
return getNodeErrResponse({
error,
customNodeResponse: {
moduleLogo: avatar
}
});
}
};

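The system-tool branch above routes `res.error` three different ways depending on the new `catchError` flag. A condensed, hypothetical sketch of that decision table — the types and payload shapes here are illustrative, not the real `RunToolResponse`:

```typescript
type ToolRunResult = { output?: Record<string, unknown>; error?: unknown };

// Sketch of the routing: legacy compatibility, plain errors, custom errors.
function routeToolError(res: ToolRunResult, catchError: boolean | undefined) {
  if (!res.error) return { data: res.output ?? {} };

  // Legacy tools (node predates catchError) that put an object in `error`
  // are treated as a normal response, for backward compatibility.
  if (catchError === undefined && typeof res.error === 'object') {
    return { data: res.error as Record<string, unknown> };
  }
  // A plain string is a common error: rethrow so the outer catch handles it.
  if (typeof res.error === 'string') {
    throw new Error(res.error);
  }
  // Otherwise it is a custom error field for the node's catch-error branch.
  return { error: res.error };
}
```

Keeping the legacy object-error case as a success response means upgrading FastGPT does not silently break old workflows that branched on the tool's returned error payload.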
View File

@ -35,10 +35,12 @@ export async function dispatchDatasetConcat(
);
return {
data: {
[NodeOutputKeyEnum.datasetQuoteQA]: await filterSearchResultsByMaxChars(
rrfConcatResults,
limit
)
},
[DispatchNodeResponseKeyEnum.nodeResponse]: {
concatLength: rrfConcatResults.length
}

View File

@ -17,6 +17,7 @@ import { i18nT } from '../../../../../web/i18n/utils';
import { filterDatasetsByTmbId } from '../../../dataset/utils';
import { ModelTypeEnum } from '@fastgpt/global/core/ai/model';
import { getDatasetSearchToolResponsePrompt } from '../../../../../global/core/ai/prompt/dataset';
import { getNodeErrResponse } from '../utils';
type DatasetSearchProps = ModuleDispatchProps<{
[NodeInputKeyEnum.datasetSelectList]: SelectedDatasetType;
@ -83,11 +84,13 @@ export async function dispatchDatasetSearch(
}
if (datasets.length === 0) {
return getNodeErrResponse({ error: i18nT('common:core.chat.error.Select dataset empty') });
}
const emptyResult: DatasetSearchResponse = {
data: {
quoteQA: []
},
[DispatchNodeResponseKeyEnum.nodeResponse]: {
totalPoints: 0,
query: '',
@ -102,177 +105,184 @@ export async function dispatchDatasetSearch(
return emptyResult;
}
  try {
    const datasetIds = authTmbId
      ? await filterDatasetsByTmbId({
          datasetIds: datasets.map((item) => item.datasetId),
          tmbId
        })
      : await Promise.resolve(datasets.map((item) => item.datasetId));

    if (datasetIds.length === 0) {
      return emptyResult;
    }

    // get vector
    const vectorModel = getEmbeddingModel(
      (await MongoDataset.findById(datasets[0].datasetId, 'vectorModel').lean())?.vectorModel
    );
    // Get Rerank Model
    const rerankModelData = getRerankModel(rerankModel);

    // start search
    const searchData = {
      histories,
      teamId,
      reRankQuery: userChatInput,
      queries: [userChatInput],
      model: vectorModel.model,
      similarity,
      limit,
      datasetIds,
      searchMode,
      embeddingWeight,
      usingReRank,
      rerankModel: rerankModelData,
      rerankWeight,
      collectionFilterMatch
    };
    const {
      searchRes,
      embeddingTokens,
      reRankInputTokens,
      usingSimilarityFilter,
      usingReRank: searchUsingReRank,
      queryExtensionResult,
      deepSearchResult
    } = datasetDeepSearch
      ? await deepRagSearch({
          ...searchData,
          datasetDeepSearchModel,
          datasetDeepSearchMaxTimes,
          datasetDeepSearchBg
        })
      : await defaultSearchDatasetData({
          ...searchData,
          datasetSearchUsingExtensionQuery,
          datasetSearchExtensionModel,
          datasetSearchExtensionBg
        });

    // count bill results
    const nodeDispatchUsages: ChatNodeUsageType[] = [];
    // vector
    const { totalPoints: embeddingTotalPoints, modelName: embeddingModelName } =
      formatModelChars2Points({
        model: vectorModel.model,
        inputTokens: embeddingTokens,
        modelType: ModelTypeEnum.embedding
      });
    nodeDispatchUsages.push({
      totalPoints: embeddingTotalPoints,
      moduleName: node.name,
      model: embeddingModelName,
      inputTokens: embeddingTokens
    });
    // Rerank
    const { totalPoints: reRankTotalPoints, modelName: reRankModelName } = formatModelChars2Points({
      model: rerankModelData?.model,
      inputTokens: reRankInputTokens,
      modelType: ModelTypeEnum.rerank
    });
    if (usingReRank) {
      nodeDispatchUsages.push({
        totalPoints: reRankTotalPoints,
        moduleName: node.name,
        model: reRankModelName,
        inputTokens: reRankInputTokens
      });
    }
    // Query extension
    (() => {
      if (queryExtensionResult) {
        const { totalPoints, modelName } = formatModelChars2Points({
          model: queryExtensionResult.model,
          inputTokens: queryExtensionResult.inputTokens,
          outputTokens: queryExtensionResult.outputTokens,
          modelType: ModelTypeEnum.llm
        });
        nodeDispatchUsages.push({
          totalPoints,
          moduleName: i18nT('common:core.module.template.Query extension'),
          model: modelName,
          inputTokens: queryExtensionResult.inputTokens,
          outputTokens: queryExtensionResult.outputTokens
        });
        return {
          totalPoints
        };
      }
      return {
        totalPoints: 0
      };
    })();
    // Deep search
    (() => {
      if (deepSearchResult) {
        const { totalPoints, modelName } = formatModelChars2Points({
          model: deepSearchResult.model,
          inputTokens: deepSearchResult.inputTokens,
          outputTokens: deepSearchResult.outputTokens,
          modelType: ModelTypeEnum.llm
        });
        nodeDispatchUsages.push({
          totalPoints,
          moduleName: i18nT('common:deep_rag_search'),
          model: modelName,
          inputTokens: deepSearchResult.inputTokens,
          outputTokens: deepSearchResult.outputTokens
        });
        return {
          totalPoints
        };
      }
      return {
        totalPoints: 0
      };
    })();

    const totalPoints = nodeDispatchUsages.reduce((acc, item) => acc + item.totalPoints, 0);

    const responseData: DispatchNodeResponseType & { totalPoints: number } = {
      totalPoints,
      query: userChatInput,
      embeddingModel: vectorModel.name,
      embeddingTokens,
      similarity: usingSimilarityFilter ? similarity : undefined,
      limit,
      searchMode,
      embeddingWeight:
        searchMode === DatasetSearchModeEnum.mixedRecall ? embeddingWeight : undefined,
      // Rerank
      ...(searchUsingReRank && {
        rerankModel: rerankModelData?.name,
        rerankWeight: rerankWeight,
        reRankInputTokens
      }),
      searchUsingReRank,
      // Results
      quoteList: searchRes,
      queryExtensionResult,
      deepSearchResult
    };

    return {
      data: {
        quoteQA: searchRes
      },
      [DispatchNodeResponseKeyEnum.nodeResponse]: responseData,
      nodeDispatchUsages,
      [DispatchNodeResponseKeyEnum.toolResponses]: {
        prompt: getDatasetSearchToolResponsePrompt(),
        cites: searchRes.map((item) => ({
          id: item.id,
          sourceName: item.sourceName,
          updateTime: item.updateTime,
          content: `${item.q}\n${item.a}`.trim()
        }))
      }
    };
  } catch (error) {
    return getNodeErrResponse({ error });
  }
}
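The usage accounting above follows one pattern: each model stage (embedding, rerank, query extension, deep search) pushes its own entry into `nodeDispatchUsages`, and the node's `totalPoints` is the reduced sum. A minimal sketch of that pattern, with FastGPT's types simplified away (names here are illustrative, not the real types):

```typescript
// Simplified stand-in for the nodeDispatchUsages accounting above (shapes assumed).
type UsageItem = { moduleName: string; totalPoints: number };

const nodeDispatchUsages: UsageItem[] = [];

// Each stage contributes one entry; stages that did not run simply push nothing.
nodeDispatchUsages.push({ moduleName: 'embedding', totalPoints: 1.5 });
nodeDispatchUsages.push({ moduleName: 'rerank', totalPoints: 0.5 });

// The node total is the sum over all stage entries, exactly as in the dispatcher above.
const totalPoints = nodeDispatchUsages.reduce((acc, item) => acc + item.totalPoints, 0);
console.log(totalPoints); // 2
```

Because skipped stages contribute nothing, the reduce needs no special cases for optional features like rerank or deep search.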

View File

@@ -49,7 +49,7 @@ import { dispatchRunTools } from './ai/agent/index';
import { dispatchStopToolCall } from './ai/agent/stopTool';
import { dispatchToolParams } from './ai/agent/toolParams';
import { dispatchChatCompletion } from './ai/chat';
import { dispatchRunCode } from './code/run';
import { dispatchCodeSandbox } from './tools/codeSandbox';
import { dispatchDatasetConcat } from './dataset/concat';
import { dispatchDatasetSearch } from './dataset/search';
import { dispatchSystemConfig } from './init/systemConfig';
@@ -60,10 +60,10 @@ import { dispatchLoop } from './loop/runLoop';
import { dispatchLoopEnd } from './loop/runLoopEnd';
import { dispatchLoopStart } from './loop/runLoopStart';
import { dispatchRunPlugin } from './plugin/run';
import { dispatchRunAppNode } from './plugin/runApp';
import { dispatchRunAppNode } from './child/runApp';
import { dispatchPluginInput } from './plugin/runInput';
import { dispatchPluginOutput } from './plugin/runOutput';
import { dispatchRunTool } from './plugin/runTool';
import { dispatchRunTool } from './child/runTool';
import { dispatchAnswer } from './tools/answer';
import { dispatchCustomFeedback } from './tools/customFeedback';
import { dispatchHttp468Request } from './tools/http468';
@@ -74,7 +74,8 @@ import { dispatchLafRequest } from './tools/runLaf';
import { dispatchUpdateVariable } from './tools/runUpdateVar';
import { dispatchTextEditor } from './tools/textEditor';
import type { DispatchFlowResponse } from './type';
import { formatHttpError, removeSystemVariable, rewriteRuntimeWorkFlow } from './utils';
import { removeSystemVariable, rewriteRuntimeWorkFlow } from './utils';
import { getHandleId } from '@fastgpt/global/core/workflow/utils';
const callbackMap: Record<FlowNodeTypeEnum, Function> = {
[FlowNodeTypeEnum.workflowStart]: dispatchWorkflowStart,
@@ -96,7 +97,7 @@ const callbackMap: Record<FlowNodeTypeEnum, Function> = {
[FlowNodeTypeEnum.lafModule]: dispatchLafRequest,
[FlowNodeTypeEnum.ifElseNode]: dispatchIfElse,
[FlowNodeTypeEnum.variableUpdate]: dispatchUpdateVariable,
[FlowNodeTypeEnum.code]: dispatchRunCode,
[FlowNodeTypeEnum.code]: dispatchCodeSandbox,
[FlowNodeTypeEnum.textEditor]: dispatchTextEditor,
[FlowNodeTypeEnum.customFeedback]: dispatchCustomFeedback,
[FlowNodeTypeEnum.readFiles]: dispatchReadFiles,
@@ -123,6 +124,14 @@ type Props = ChatDispatchProps & {
runtimeNodes: RuntimeNodeItemType[];
runtimeEdges: RuntimeEdgeItemType[];
};
type NodeResponseType = DispatchNodeResultType<{
[NodeOutputKeyEnum.answerText]?: string;
[NodeOutputKeyEnum.reasoningText]?: string;
[key: string]: any;
}>;
type NodeResponseCompleteType = Omit<NodeResponseType, 'responseData'> & {
[DispatchNodeResponseKeyEnum.nodeResponse]?: ChatHistoryItemResType;
};
/* running */
export async function dispatchWorkFlow(data: Props): Promise<DispatchFlowResponse> {
@@ -229,8 +238,7 @@ export async function dispatchWorkFlow(data: Props): Promise<DispatchFlowRespons
function pushStore(
{ inputs = [] }: RuntimeNodeItemType,
{
answerText = '',
reasoningText,
data: { answerText = '', reasoningText } = {},
responseData,
nodeDispatchUsages,
toolResponses,
@@ -238,14 +246,7 @@ export async function dispatchWorkFlow(data: Props): Promise<DispatchFlowRespons
rewriteHistories,
runTimes = 1,
system_memories: newMemories
}: Omit<
DispatchNodeResultType<{
[NodeOutputKeyEnum.answerText]?: string;
[NodeOutputKeyEnum.reasoningText]?: string;
[DispatchNodeResponseKeyEnum.nodeResponse]?: ChatHistoryItemResType;
}>,
'nodeResponse'
>
}: NodeResponseCompleteType
) {
// Add run times
workflowRunTimes += runTimes;
@@ -316,22 +317,27 @@ export async function dispatchWorkFlow(data: Props): Promise<DispatchFlowRespons
/* Pass the output of the node, to get next nodes and update edge status */
function nodeOutput(
node: RuntimeNodeItemType,
result: Record<string, any> = {}
result: NodeResponseCompleteType
): {
nextStepActiveNodes: RuntimeNodeItemType[];
nextStepSkipNodes: RuntimeNodeItemType[];
} {
pushStore(node, result);
const concatData: Record<string, any> = {
...(result.data ?? {}),
...(result.error ?? {})
};
// Assign the output value to the next node
node.outputs.forEach((outputItem) => {
if (result[outputItem.key] === undefined) return;
if (concatData[outputItem.key] === undefined) return;
/* update output value */
outputItem.value = result[outputItem.key];
outputItem.value = concatData[outputItem.key];
});
// Get next source edges and update status
const skipHandleId = (result[DispatchNodeResponseKeyEnum.skipHandleId] || []) as string[];
const skipHandleId = result[DispatchNodeResponseKeyEnum.skipHandleId] || [];
const targetEdges = filterWorkflowEdges(runtimeEdges).filter(
(item) => item.source === node.nodeId
);
@@ -591,7 +597,7 @@ export async function dispatchWorkFlow(data: Props): Promise<DispatchFlowRespons
async function nodeRunWithActive(node: RuntimeNodeItemType): Promise<{
node: RuntimeNodeItemType;
runStatus: 'run';
result: Record<string, any>;
result: NodeResponseCompleteType;
}> {
// push run status messages
if (node.showStatus && !props.isToolCall) {
@@ -625,23 +631,66 @@ export async function dispatchWorkFlow(data: Props): Promise<DispatchFlowRespons
};
// run module
const dispatchRes: Record<string, any> = await (async () => {
const dispatchRes: NodeResponseType = await (async () => {
if (callbackMap[node.flowNodeType]) {
const targetEdges = runtimeEdges.filter((item) => item.source === node.nodeId);
try {
return await callbackMap[node.flowNodeType](dispatchData);
const result = (await callbackMap[node.flowNodeType](dispatchData)) as NodeResponseType;
const errorHandleId = getHandleId(node.nodeId, 'source_catch', 'right');
if (!result.error) {
const skipHandleId =
targetEdges.find((item) => item.sourceHandle === errorHandleId)?.sourceHandle || '';
return {
...result,
[DispatchNodeResponseKeyEnum.skipHandleId]: (result[
DispatchNodeResponseKeyEnum.skipHandleId
]
? [...result[DispatchNodeResponseKeyEnum.skipHandleId], skipHandleId]
: [skipHandleId]
).filter(Boolean)
};
}
// Run error with no catch branch: skip all edges
if (!node.catchError) {
return {
...result,
[DispatchNodeResponseKeyEnum.skipHandleId]: targetEdges.map(
(item) => item.sourceHandle
)
};
}
// Catch error
const skipHandleIds = targetEdges
.filter((item) => {
if (node.catchError) {
return item.sourceHandle !== errorHandleId;
}
return true;
})
.map((item) => item.sourceHandle);
return {
...result,
[DispatchNodeResponseKeyEnum.skipHandleId]: result[
DispatchNodeResponseKeyEnum.skipHandleId
]
? [...result[DispatchNodeResponseKeyEnum.skipHandleId], ...skipHandleIds].filter(
Boolean
)
: skipHandleIds
};
} catch (error) {
// Get source handles of outgoing edges
const targetEdges = runtimeEdges.filter((item) => item.source === node.nodeId);
const skipHandleIds = targetEdges.map((item) => item.sourceHandle);
toolRunResponse = getErrText(error);
// Skip all edges and return error
return {
[DispatchNodeResponseKeyEnum.nodeResponse]: {
error: formatHttpError(error)
error: getErrText(error)
},
[DispatchNodeResponseKeyEnum.skipHandleId]: skipHandleIds
[DispatchNodeResponseKeyEnum.skipHandleId]: targetEdges.map((item) => item.sourceHandle)
};
}
}
@@ -649,15 +698,16 @@ export async function dispatchWorkFlow(data: Props): Promise<DispatchFlowRespons
})();
// format response data. Add module name and module type
const formatResponseData: ChatHistoryItemResType = (() => {
const formatResponseData: NodeResponseCompleteType['responseData'] = (() => {
if (!dispatchRes[DispatchNodeResponseKeyEnum.nodeResponse]) return undefined;
return {
...dispatchRes[DispatchNodeResponseKeyEnum.nodeResponse],
id: getNanoid(),
nodeId: node.nodeId,
moduleName: node.name,
moduleType: node.flowNodeType,
runningTime: +((Date.now() - startTime) / 1000).toFixed(2),
...dispatchRes[DispatchNodeResponseKeyEnum.nodeResponse]
runningTime: +((Date.now() - startTime) / 1000).toFixed(2)
};
})();
@@ -675,11 +725,13 @@ export async function dispatchWorkFlow(data: Props): Promise<DispatchFlowRespons
}
// Add output default value
node.outputs.forEach((item) => {
if (!item.required) return;
if (dispatchRes[item.key] !== undefined) return;
dispatchRes[item.key] = valueTypeFormat(item.defaultValue, item.valueType);
});
if (dispatchRes.data) {
node.outputs.forEach((item) => {
if (!item.required) return;
if (dispatchRes.data?.[item.key] !== undefined) return;
dispatchRes.data![item.key] = valueTypeFormat(item.defaultValue, item.valueType);
});
}
// Update new variables
if (dispatchRes[DispatchNodeResponseKeyEnum.newVariables]) {
@@ -691,7 +743,7 @@ export async function dispatchWorkFlow(data: Props): Promise<DispatchFlowRespons
// Error
if (dispatchRes?.responseData?.error) {
addLog.warn('workflow error', dispatchRes.responseData.error);
addLog.warn('workflow error', { error: dispatchRes.responseData.error });
}
return {
@@ -706,7 +758,7 @@ export async function dispatchWorkFlow(data: Props): Promise<DispatchFlowRespons
async function nodeRunWithSkip(node: RuntimeNodeItemType): Promise<{
node: RuntimeNodeItemType;
runStatus: 'skip';
result: Record<string, any>;
result: NodeResponseCompleteType;
}> {
// Set target edges status to skipped
const targetEdges = runtimeEdges.filter((item) => item.source === node.nodeId);
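The new catch-error branch in `nodeRunWithActive` above routes edges in three cases: on success only the error handle is skipped, an uncaught error skips every outgoing edge, and a caught error skips everything except the error handle. A self-contained sketch of that decision (edge shape and function name are assumptions, not FastGPT's actual API):

```typescript
type Edge = { sourceHandle: string };

// Illustrative condensation of the skipHandleId logic in nodeRunWithActive.
function getSkipHandleIds(
  targetEdges: Edge[],
  errorHandleId: string,
  hasError: boolean,
  catchError: boolean
): string[] {
  if (!hasError) {
    // Success: only the catch branch (if present) is skipped.
    return targetEdges
      .filter((item) => item.sourceHandle === errorHandleId)
      .map((item) => item.sourceHandle);
  }
  if (!catchError) {
    // Run error and the node has no catch handle: skip all edges.
    return targetEdges.map((item) => item.sourceHandle);
  }
  // Caught error: skip every edge except the catch branch.
  return targetEdges
    .filter((item) => item.sourceHandle !== errorHandleId)
    .map((item) => item.sourceHandle);
}

const edges = [{ sourceHandle: 'node1-source-right' }, { sourceHandle: 'node1-source_catch-right' }];
console.log(getSkipHandleIds(edges, 'node1-source_catch-right', true, true)); // ['node1-source-right']
```

Either the normal branch or the catch branch runs, never both, which is why the real code always merges these handles into `DispatchNodeResponseKeyEnum.skipHandleId`.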

View File

@@ -34,8 +34,9 @@ export const dispatchWorkflowStart = (props: Record<string, any>): Response => {
return {
[DispatchNodeResponseKeyEnum.nodeResponse]: {},
[NodeInputKeyEnum.userChatInput]: text || userChatInput,
[NodeOutputKeyEnum.userFiles]: [...queryFiles, ...variablesFiles]
// [NodeInputKeyEnum.inputFiles]: files
data: {
[NodeInputKeyEnum.userChatInput]: text || userChatInput,
[NodeOutputKeyEnum.userFiles]: [...queryFiles, ...variablesFiles]
}
};
};
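This is the recurring shape change across these files: node outputs move from top-level result keys into a `data` object, and `nodeOutput` merges `data` with any `error` payload before assigning values to the node's outputs. A small sketch of that merge (simplified, assumed shapes):

```typescript
type NodeResult = {
  data?: Record<string, any>;
  error?: Record<string, any>;
};

// Mirrors the concatData merge in nodeOutput: error keys overwrite data keys.
function concatOutputs(result: NodeResult): Record<string, any> {
  return {
    ...(result.data ?? {}),
    ...(result.error ?? {})
  };
}

console.log(concatOutputs({ data: { userChatInput: 'hi', userFiles: [] } }));
// { userChatInput: 'hi', userFiles: [] }
```

Keeping outputs under one `data` key lets the engine treat success values and catch-branch error values uniformly when wiring them into downstream nodes.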

View File

@@ -6,10 +6,7 @@ import type {
DispatchNodeResultType,
ModuleDispatchProps
} from '@fastgpt/global/core/workflow/runtime/type';
import type {
UserInputFormItemType,
UserInputInteractive
} from '@fastgpt/global/core/workflow/template/system/interactive/type';
import type { UserInputFormItemType } from '@fastgpt/global/core/workflow/template/system/interactive/type';
import { addLog } from '../../../../common/system/log';
type Props = ModuleDispatchProps<{
@@ -17,8 +14,8 @@ type Props = ModuleDispatchProps<{
[NodeInputKeyEnum.userInputForms]: UserInputFormItemType[];
}>;
type FormInputResponse = DispatchNodeResultType<{
[DispatchNodeResponseKeyEnum.interactive]?: UserInputInteractive;
[NodeOutputKeyEnum.formInputResult]?: Record<string, any>;
[key: string]: any;
}>;
/*
@@ -60,9 +57,11 @@ export const dispatchFormInput = async (props: Props): Promise<FormInputResponse
})();
return {
data: {
...userInputVal,
[NodeOutputKeyEnum.formInputResult]: userInputVal
},
[DispatchNodeResponseKeyEnum.rewriteHistories]: histories.slice(0, -2), // Remove the current interaction record so it is not passed to subsequent nodes as history
...userInputVal,
[NodeOutputKeyEnum.formInputResult]: userInputVal,
[DispatchNodeResponseKeyEnum.toolResponses]: userInputVal,
[DispatchNodeResponseKeyEnum.nodeResponse]: {
formInputResult: userInputVal

View File

@@ -6,10 +6,7 @@ import type {
import type { NodeInputKeyEnum } from '@fastgpt/global/core/workflow/constants';
import { NodeOutputKeyEnum } from '@fastgpt/global/core/workflow/constants';
import { getHandleId } from '@fastgpt/global/core/workflow/utils';
import type {
UserSelectInteractive,
UserSelectOptionItemType
} from '@fastgpt/global/core/workflow/template/system/interactive/type';
import type { UserSelectOptionItemType } from '@fastgpt/global/core/workflow/template/system/interactive/type';
import { chatValue2RuntimePrompt } from '@fastgpt/global/core/chat/adapt';
type Props = ModuleDispatchProps<{
@@ -17,8 +14,6 @@ type Props = ModuleDispatchProps<{
[NodeInputKeyEnum.userSelectOptions]: UserSelectOptionItemType[];
}>;
type UserSelectResponse = DispatchNodeResultType<{
[NodeOutputKeyEnum.answerText]?: string;
[DispatchNodeResponseKeyEnum.interactive]?: UserSelectInteractive;
[NodeOutputKeyEnum.selectResult]?: string;
}>;
@@ -59,6 +54,9 @@ export const dispatchUserSelect = async (props: Props): Promise<UserSelectRespon
}
return {
data: {
[NodeOutputKeyEnum.selectResult]: userSelectedVal
},
[DispatchNodeResponseKeyEnum.rewriteHistories]: histories.slice(0, -2), // Remove the current interaction record so it is not passed to subsequent nodes as history
[DispatchNodeResponseKeyEnum.skipHandleId]: userSelectOptions
.filter((item) => item.value !== userSelectedVal)
@@ -66,7 +64,6 @@ export const dispatchUserSelect = async (props: Props): Promise<UserSelectRespon
[DispatchNodeResponseKeyEnum.nodeResponse]: {
userSelectResult: userSelectedVal
},
[DispatchNodeResponseKeyEnum.toolResponses]: userSelectedVal,
[NodeOutputKeyEnum.selectResult]: userSelectedVal
[DispatchNodeResponseKeyEnum.toolResponses]: userSelectedVal
};
};
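The interactive select node above skips the edge of every option the user did not pick. The filter can be sketched on its own (option shape assumed; in the real node the handle id comes from `getHandleId(nodeId, 'source', item.key)`):

```typescript
type SelectOption = { key: string; value: string };

// Every option except the selected one contributes a skip handle.
function getSkippedOptionKeys(options: SelectOption[], selectedVal: string): string[] {
  return options.filter((item) => item.value !== selectedVal).map((item) => item.key);
}

const options = [
  { key: 'option1', value: 'Confirm' },
  { key: 'option2', value: 'Cancel' }
];
console.log(getSkippedOptionKeys(options, 'Confirm')); // ['option2']
```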

View File

@@ -11,10 +11,7 @@ import {
type ChatHistoryItemResType
} from '@fastgpt/global/core/chat/type';
import { cloneDeep } from 'lodash';
import {
type LoopInteractive,
type WorkflowInteractiveResponseType
} from '@fastgpt/global/core/workflow/template/system/interactive/type';
import { type WorkflowInteractiveResponseType } from '@fastgpt/global/core/workflow/template/system/interactive/type';
import { storeEdges2RuntimeEdges } from '@fastgpt/global/core/workflow/runtime/utils';
type Props = ModuleDispatchProps<{
@@ -22,7 +19,6 @@ type Props = ModuleDispatchProps<{
[NodeInputKeyEnum.childrenNodeIdList]: string[];
}>;
type Response = DispatchNodeResultType<{
[DispatchNodeResponseKeyEnum.interactive]?: LoopInteractive;
[NodeOutputKeyEnum.loopArray]: Array<any>;
}>;
@@ -133,6 +129,9 @@ export const dispatchLoop = async (props: Props): Promise<Response> => {
}
return {
data: {
[NodeOutputKeyEnum.loopArray]: outputValueArr
},
[DispatchNodeResponseKeyEnum.interactive]: interactiveResponse
? {
type: 'loopInteractive',
@@ -157,7 +156,6 @@ export const dispatchLoop = async (props: Props): Promise<Response> => {
moduleName: name
}
],
[NodeOutputKeyEnum.loopArray]: outputValueArr,
[DispatchNodeResponseKeyEnum.newVariables]: newVariables
};
};

View File

@@ -18,10 +18,12 @@ type Response = DispatchNodeResultType<{
export const dispatchLoopStart = async (props: Props): Promise<Response> => {
const { params } = props;
return {
data: {
[NodeOutputKeyEnum.loopStartInput]: params.loopStartInput,
[NodeOutputKeyEnum.loopStartIndex]: params.loopStartIndex
},
[DispatchNodeResponseKeyEnum.nodeResponse]: {
loopInputValue: params.loopStartInput
},
[NodeOutputKeyEnum.loopStartInput]: params.loopStartInput,
[NodeOutputKeyEnum.loopStartIndex]: params.loopStartIndex
}
};
};

View File

@@ -13,19 +13,29 @@ import { type DispatchNodeResultType } from '@fastgpt/global/core/workflow/runti
import { authPluginByTmbId } from '../../../../support/permission/app/auth';
import { ReadPermissionVal } from '@fastgpt/global/support/permission/constant';
import { computedPluginUsage } from '../../../app/plugin/utils';
import { filterSystemVariables } from '../utils';
import { filterSystemVariables, getNodeErrResponse } from '../utils';
import { getPluginRunUserQuery } from '@fastgpt/global/core/workflow/utils';
import type { NodeInputKeyEnum } from '@fastgpt/global/core/workflow/constants';
import type { NodeOutputKeyEnum } from '@fastgpt/global/core/workflow/constants';
import { getChildAppRuntimeById, splitCombinePluginId } from '../../../app/plugin/controller';
import { dispatchWorkFlow } from '../index';
import { getUserChatInfoAndAuthTeamPoints } from '../../../../support/permission/auth/team';
import { dispatchRunTool } from './runTool';
import { dispatchRunTool } from '../child/runTool';
import type { PluginRuntimeType } from '@fastgpt/global/core/app/plugin/type';
type RunPluginProps = ModuleDispatchProps<{
[NodeInputKeyEnum.forbidStream]?: boolean;
[key: string]: any;
}>;
type RunPluginResponse = DispatchNodeResultType<{}>;
type RunPluginResponse = DispatchNodeResultType<
{
[key: string]: any;
},
{
[NodeOutputKeyEnum.errorText]?: string;
}
>;
export const dispatchRunPlugin = async (props: RunPluginProps): Promise<RunPluginResponse> => {
const {
node: { pluginId, version },
@@ -34,142 +44,145 @@ export const dispatchRunPlugin = async (props: RunPluginProps): Promise<RunPlugi
params: { system_forbid_stream = false, ...data } // Plugin input
} = props;
if (!pluginId) {
return Promise.reject('pluginId can not find');
return getNodeErrResponse({ error: 'pluginId can not find' });
}
// Adapt <= 4.10 system tool
const { source, pluginId: formatPluginId } = splitCombinePluginId(pluginId);
if (source === PluginSourceEnum.systemTool) {
return dispatchRunTool({
...props,
node: {
...props.node,
toolConfig: {
systemTool: {
toolId: formatPluginId
let plugin: PluginRuntimeType | undefined;
try {
// Adapt <= 4.10 system tool
const { source, pluginId: formatPluginId } = splitCombinePluginId(pluginId);
if (source === PluginSourceEnum.systemTool) {
return await dispatchRunTool({
...props,
node: {
...props.node,
toolConfig: {
systemTool: {
toolId: formatPluginId
}
}
}
}
});
}
/*
1. Team app
2. Admin selected system tool
*/
const { files } = chatValue2RuntimePrompt(query);
// auth plugin
const pluginData = await authPluginByTmbId({
appId: pluginId,
tmbId: runningAppInfo.tmbId,
per: ReadPermissionVal
});
}
/*
1. Team app
2. Admin selected system tool
*/
const { files } = chatValue2RuntimePrompt(query);
plugin = await getChildAppRuntimeById(pluginId, version);
// auth plugin
const pluginData = await authPluginByTmbId({
appId: pluginId,
tmbId: runningAppInfo.tmbId,
per: ReadPermissionVal
});
const plugin = await getChildAppRuntimeById(pluginId, version);
const outputFilterMap =
plugin.nodes
.find((node) => node.flowNodeType === FlowNodeTypeEnum.pluginOutput)
?.inputs.reduce<Record<string, boolean>>((acc, cur) => {
acc[cur.key] = cur.isToolOutput === false ? false : true;
return acc;
}, {}) ?? {};
const runtimeNodes = storeNodes2RuntimeNodes(
plugin.nodes,
getWorkflowEntryNodeIds(plugin.nodes)
).map((node) => {
// Update plugin input value
if (node.flowNodeType === FlowNodeTypeEnum.pluginInput) {
const outputFilterMap =
plugin.nodes
.find((node) => node.flowNodeType === FlowNodeTypeEnum.pluginOutput)
?.inputs.reduce<Record<string, boolean>>((acc, cur) => {
acc[cur.key] = cur.isToolOutput === false ? false : true;
return acc;
}, {}) ?? {};
const runtimeNodes = storeNodes2RuntimeNodes(
plugin.nodes,
getWorkflowEntryNodeIds(plugin.nodes)
).map((node) => {
// Update plugin input value
if (node.flowNodeType === FlowNodeTypeEnum.pluginInput) {
return {
...node,
showStatus: false,
inputs: node.inputs.map((input) => ({
...input,
value: data[input.key] ?? input.value
}))
};
}
return {
...node,
showStatus: false,
inputs: node.inputs.map((input) => ({
...input,
value: data[input.key] ?? input.value
}))
showStatus: false
};
}
return {
...node,
showStatus: false
};
});
const { externalProvider } = await getUserChatInfoAndAuthTeamPoints(runningAppInfo.tmbId);
const runtimeVariables = {
...filterSystemVariables(props.variables),
appId: String(plugin.id),
...(externalProvider ? externalProvider.externalWorkflowVariables : {})
};
const { flowResponses, flowUsages, assistantResponses, runTimes, system_memories } =
await dispatchWorkFlow({
...props,
// Rewrite stream mode
...(system_forbid_stream
? {
stream: false,
workflowStreamResponse: undefined
}
: {}),
runningAppInfo: {
id: String(plugin.id),
// If the system plugin has its own teamId and tmbId, use them (the admin registered the plugin as a system plugin)
teamId: plugin.teamId || runningAppInfo.teamId,
tmbId: plugin.tmbId || runningAppInfo.tmbId,
isChildApp: true
},
variables: runtimeVariables,
query: getPluginRunUserQuery({
pluginInputs: getPluginInputsFromStoreNodes(plugin.nodes),
variables: runtimeVariables,
files
}).value,
chatConfig: {},
runtimeNodes,
runtimeEdges: storeEdges2RuntimeEdges(plugin.edges)
});
const output = flowResponses.find((item) => item.moduleType === FlowNodeTypeEnum.pluginOutput);
if (output) {
output.moduleLogo = plugin.avatar;
}
const usagePoints = await computedPluginUsage({
plugin,
childrenUsage: flowUsages,
error: !!output?.pluginOutput?.error
});
return {
// In nested runs, if the child app has stream=false, nothing is actually streamed to the user, so there is no need to store it
assistantResponses: system_forbid_stream ? [] : assistantResponses,
system_memories,
// responseData, // debug
[DispatchNodeResponseKeyEnum.runTimes]: runTimes,
[DispatchNodeResponseKeyEnum.nodeResponse]: {
moduleLogo: plugin.avatar,
totalPoints: usagePoints,
pluginOutput: output?.pluginOutput,
pluginDetail: pluginData?.permission?.hasWritePer // Not system plugin
? flowResponses.filter((item) => {
const filterArr = [FlowNodeTypeEnum.pluginOutput];
return !filterArr.includes(item.moduleType as any);
})
: undefined
},
[DispatchNodeResponseKeyEnum.nodeDispatchUsages]: [
{
moduleName: plugin.name,
totalPoints: usagePoints
}
],
[DispatchNodeResponseKeyEnum.toolResponses]: output?.pluginOutput
? Object.keys(output.pluginOutput)
.filter((key) => outputFilterMap[key])
.reduce<Record<string, any>>((acc, key) => {
acc[key] = output.pluginOutput![key];
return acc;
}, {})
: null,
...(output ? output.pluginOutput : {})
};
const { externalProvider } = await getUserChatInfoAndAuthTeamPoints(runningAppInfo.tmbId);
const runtimeVariables = {
...filterSystemVariables(props.variables),
appId: String(plugin.id),
...(externalProvider ? externalProvider.externalWorkflowVariables : {})
};
const { flowResponses, flowUsages, assistantResponses, runTimes, system_memories } =
await dispatchWorkFlow({
...props,
// Rewrite stream mode
...(system_forbid_stream
? {
stream: false,
workflowStreamResponse: undefined
}
: {}),
runningAppInfo: {
id: String(plugin.id),
// If the system plugin has its own teamId and tmbId, use them (the admin registered the plugin as a system plugin)
teamId: plugin.teamId || runningAppInfo.teamId,
tmbId: plugin.tmbId || runningAppInfo.tmbId,
isChildApp: true
},
variables: runtimeVariables,
query: getPluginRunUserQuery({
pluginInputs: getPluginInputsFromStoreNodes(plugin.nodes),
variables: runtimeVariables,
files
}).value,
chatConfig: {},
runtimeNodes,
runtimeEdges: storeEdges2RuntimeEdges(plugin.edges)
});
const output = flowResponses.find((item) => item.moduleType === FlowNodeTypeEnum.pluginOutput);
const usagePoints = await computedPluginUsage({
plugin,
childrenUsage: flowUsages,
error: !!output?.pluginOutput?.error
});
return {
data: output ? output.pluginOutput : {},
// In nested runs, if the child app has stream=false, nothing is actually streamed to the user, so there is no need to store it
assistantResponses: system_forbid_stream ? [] : assistantResponses,
system_memories,
// responseData, // debug
[DispatchNodeResponseKeyEnum.runTimes]: runTimes,
[DispatchNodeResponseKeyEnum.nodeResponse]: {
moduleLogo: plugin.avatar,
totalPoints: usagePoints,
pluginOutput: output?.pluginOutput,
pluginDetail: pluginData?.permission?.hasWritePer // Not system plugin
? flowResponses.filter((item) => {
const filterArr = [FlowNodeTypeEnum.pluginOutput];
return !filterArr.includes(item.moduleType as any);
})
: undefined
},
[DispatchNodeResponseKeyEnum.nodeDispatchUsages]: [
{
moduleName: plugin.name,
totalPoints: usagePoints
}
],
[DispatchNodeResponseKeyEnum.toolResponses]: output?.pluginOutput
? Object.keys(output.pluginOutput)
.filter((key) => outputFilterMap[key])
.reduce<Record<string, any>>((acc, key) => {
acc[key] = output.pluginOutput![key];
return acc;
}, {})
: null
};
} catch (error) {
return getNodeErrResponse({ error, customNodeResponse: { moduleLogo: plugin?.avatar } });
}
};

View File

@@ -1,203 +0,0 @@
import type { ChatItemType } from '@fastgpt/global/core/chat/type.d';
import type { ModuleDispatchProps } from '@fastgpt/global/core/workflow/runtime/type';
import { dispatchWorkFlow } from '../index';
import { ChatRoleEnum } from '@fastgpt/global/core/chat/constants';
import { SseResponseEventEnum } from '@fastgpt/global/core/workflow/runtime/constants';
import {
getWorkflowEntryNodeIds,
storeEdges2RuntimeEdges,
rewriteNodeOutputByHistories,
storeNodes2RuntimeNodes,
textAdaptGptResponse
} from '@fastgpt/global/core/workflow/runtime/utils';
import type { NodeInputKeyEnum, NodeOutputKeyEnum } from '@fastgpt/global/core/workflow/constants';
import { DispatchNodeResponseKeyEnum } from '@fastgpt/global/core/workflow/runtime/constants';
import { filterSystemVariables, getHistories } from '../utils';
import { chatValue2RuntimePrompt, runtimePrompt2ChatsValue } from '@fastgpt/global/core/chat/adapt';
import { type DispatchNodeResultType } from '@fastgpt/global/core/workflow/runtime/type';
import { authAppByTmbId } from '../../../../support/permission/app/auth';
import { ReadPermissionVal } from '@fastgpt/global/support/permission/constant';
import { getAppVersionById } from '../../../app/version/controller';
import { parseUrlToFileType } from '@fastgpt/global/common/file/tools';
import { type ChildrenInteractive } from '@fastgpt/global/core/workflow/template/system/interactive/type';
import { getUserChatInfoAndAuthTeamPoints } from '../../../../support/permission/auth/team';
type Props = ModuleDispatchProps<{
[NodeInputKeyEnum.userChatInput]: string;
[NodeInputKeyEnum.history]?: ChatItemType[] | number;
[NodeInputKeyEnum.fileUrlList]?: string[];
[NodeInputKeyEnum.forbidStream]?: boolean;
[NodeInputKeyEnum.fileUrlList]?: string[];
}>;
type Response = DispatchNodeResultType<{
[DispatchNodeResponseKeyEnum.interactive]?: ChildrenInteractive;
[NodeOutputKeyEnum.answerText]: string;
[NodeOutputKeyEnum.history]: ChatItemType[];
}>;
export const dispatchRunAppNode = async (props: Props): Promise<Response> => {
const {
runningAppInfo,
histories,
query,
lastInteractive,
node: { pluginId: appId, version },
workflowStreamResponse,
params,
variables
} = props;
const {
system_forbid_stream = false,
userChatInput,
history,
fileUrlList,
...childrenAppVariables
} = params;
const { files } = chatValue2RuntimePrompt(query);
const userInputFiles = (() => {
if (fileUrlList) {
return fileUrlList.map((url) => parseUrlToFileType(url)).filter(Boolean);
}
// Adapt version 4.8.13 upgrade
return files;
})();
if (!userChatInput && !userInputFiles) {
return Promise.reject('Input is empty');
}
if (!appId) {
return Promise.reject('pluginId is empty');
}
// Auth the app by tmbId(Not the user, but the workflow user)
const { app: appData } = await authAppByTmbId({
appId: appId,
tmbId: runningAppInfo.tmbId,
per: ReadPermissionVal
});
const { nodes, edges, chatConfig } = await getAppVersionById({
appId,
versionId: version,
app: appData
});
const childStreamResponse = system_forbid_stream ? false : props.stream;
// Auto line
if (childStreamResponse) {
workflowStreamResponse?.({
event: SseResponseEventEnum.answer,
data: textAdaptGptResponse({
text: '\n'
})
});
}
const chatHistories = getHistories(history, histories);
// Rewrite children app variables
const systemVariables = filterSystemVariables(variables);
const { externalProvider } = await getUserChatInfoAndAuthTeamPoints(appData.tmbId);
const childrenRunVariables = {
...systemVariables,
...childrenAppVariables,
histories: chatHistories,
appId: String(appData._id),
...(externalProvider ? externalProvider.externalWorkflowVariables : {})
};
const childrenInteractive =
lastInteractive?.type === 'childrenInteractive'
? lastInteractive.params.childrenResponse
: undefined;
const runtimeNodes = rewriteNodeOutputByHistories(
storeNodes2RuntimeNodes(
nodes,
getWorkflowEntryNodeIds(nodes, childrenInteractive || undefined)
),
childrenInteractive
);
const runtimeEdges = storeEdges2RuntimeEdges(edges, childrenInteractive);
const theQuery = childrenInteractive
? query
: runtimePrompt2ChatsValue({ files: userInputFiles, text: userChatInput });
const {
flowResponses,
flowUsages,
assistantResponses,
runTimes,
workflowInteractiveResponse,
system_memories
} = await dispatchWorkFlow({
...props,
lastInteractive: childrenInteractive,
// Rewrite stream mode
...(system_forbid_stream
? {
stream: false,
workflowStreamResponse: undefined
}
: {}),
runningAppInfo: {
id: String(appData._id),
teamId: String(appData.teamId),
tmbId: String(appData.tmbId),
isChildApp: true
},
runtimeNodes,
runtimeEdges,
histories: chatHistories,
variables: childrenRunVariables,
query: theQuery,
chatConfig
});
const completeMessages = chatHistories.concat([
{
obj: ChatRoleEnum.Human,
value: query
},
{
obj: ChatRoleEnum.AI,
value: assistantResponses
}
]);
const { text } = chatValue2RuntimePrompt(assistantResponses);
const usagePoints = flowUsages.reduce((sum, item) => sum + (item.totalPoints || 0), 0);
return {
system_memories,
[DispatchNodeResponseKeyEnum.interactive]: workflowInteractiveResponse
? {
type: 'childrenInteractive',
params: {
childrenResponse: workflowInteractiveResponse
}
}
: undefined,
assistantResponses: system_forbid_stream ? [] : assistantResponses,
[DispatchNodeResponseKeyEnum.runTimes]: runTimes,
[DispatchNodeResponseKeyEnum.nodeResponse]: {
moduleLogo: appData.avatar,
totalPoints: usagePoints,
query: userChatInput,
textOutput: text,
pluginDetail: appData.permission.hasWritePer ? flowResponses : undefined,
mergeSignId: props.node.nodeId
},
[DispatchNodeResponseKeyEnum.nodeDispatchUsages]: [
{
moduleName: appData.name,
totalPoints: usagePoints
}
],
[DispatchNodeResponseKeyEnum.toolResponses]: text,
answerText: text,
history: completeMessages
};
};

View File

@@ -2,13 +2,22 @@ import { chatValue2RuntimePrompt } from '@fastgpt/global/core/chat/adapt';
import { ChatFileTypeEnum } from '@fastgpt/global/core/chat/constants';
import { NodeOutputKeyEnum } from '@fastgpt/global/core/workflow/constants';
import { DispatchNodeResponseKeyEnum } from '@fastgpt/global/core/workflow/runtime/constants';
import type { ModuleDispatchProps } from '@fastgpt/global/core/workflow/runtime/type';
import type {
DispatchNodeResultType,
ModuleDispatchProps
} from '@fastgpt/global/core/workflow/runtime/type';
export type PluginInputProps = ModuleDispatchProps<{
[key: string]: any;
}>;
export type PluginInputResponse = DispatchNodeResultType<{
[NodeOutputKeyEnum.userFiles]?: string[];
[key: string]: any;
}>;
export const dispatchPluginInput = (props: PluginInputProps) => {
export const dispatchPluginInput = async (
props: PluginInputProps
): Promise<PluginInputResponse> => {
const { params, query } = props;
const { files } = chatValue2RuntimePrompt(query);
@@ -33,12 +42,14 @@ export const dispatchPluginInput = (props: PluginInputProps) => {
}
return {
...params,
[DispatchNodeResponseKeyEnum.nodeResponse]: {},
[NodeOutputKeyEnum.userFiles]: files
.map((item) => {
return item?.url ?? '';
})
.filter(Boolean)
data: {
...params,
[NodeOutputKeyEnum.userFiles]: files
.map((item) => {
return item?.url ?? '';
})
.filter(Boolean)
},
[DispatchNodeResponseKeyEnum.nodeResponse]: {}
};
};
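The dispatchPluginInput hunk above illustrates the result shape this PR rolls out across nodes: outputs are nested under a `data` key instead of being spread at the top level. A minimal standalone sketch of the wrapping (plain-string keys stand in for NodeOutputKeyEnum / DispatchNodeResponseKeyEnum, which is an assumption of this sketch):

```typescript
// Hypothetical, simplified dispatch result shape; the real types live in
// @fastgpt/global/core/workflow/runtime/type.
type WrappedResult = {
  data: Record<string, unknown>; // node outputs now nested here
  responseData: Record<string, unknown>; // stand-in for DispatchNodeResponseKeyEnum.nodeResponse
};

// Wrap plugin-input params into the new shape, filtering empty file URLs
// the way dispatchPluginInput does for userFiles.
function wrapPluginInput(
  params: Record<string, unknown>,
  files: Array<{ url?: string } | undefined>
): WrappedResult {
  return {
    data: {
      ...params,
      userFiles: files.map((item) => item?.url ?? '').filter(Boolean)
    },
    responseData: {}
  };
}

const result = wrapPluginInput({ query: 'hi' }, [{ url: 'https://a/b.png' }, undefined]);
```

Consumers that previously read outputs off the top level now read them from `data`, so the change is mechanical but touches every node dispatcher.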


@@ -30,7 +30,9 @@ export const dispatchAnswer = (props: Record<string, any>): AnswerResponse => {
});
return {
[NodeOutputKeyEnum.answerText]: responseText,
data: {
[NodeOutputKeyEnum.answerText]: responseText
},
[DispatchNodeResponseKeyEnum.nodeResponse]: {
textOutput: formatText
}


@@ -2,20 +2,26 @@ import type { ModuleDispatchProps } from '@fastgpt/global/core/workflow/runtime/
import { NodeInputKeyEnum, NodeOutputKeyEnum } from '@fastgpt/global/core/workflow/constants';
import { type DispatchNodeResultType } from '@fastgpt/global/core/workflow/runtime/type';
import axios from 'axios';
import { formatHttpError } from '../utils';
import { DispatchNodeResponseKeyEnum } from '@fastgpt/global/core/workflow/runtime/constants';
import { SandboxCodeTypeEnum } from '@fastgpt/global/core/workflow/template/system/sandbox/constants';
import { getErrText } from '@fastgpt/global/common/error/utils';
import { getNodeErrResponse } from '../utils';
type RunCodeType = ModuleDispatchProps<{
[NodeInputKeyEnum.codeType]: string;
[NodeInputKeyEnum.code]: string;
[NodeInputKeyEnum.addInputParam]: Record<string, any>;
}>;
type RunCodeResponse = DispatchNodeResultType<{
[NodeOutputKeyEnum.error]?: any;
[NodeOutputKeyEnum.rawResponse]?: Record<string, any>;
[key: string]: any;
}>;
type RunCodeResponse = DispatchNodeResultType<
{
[NodeOutputKeyEnum.error]?: any; // @deprecated
[NodeOutputKeyEnum.rawResponse]?: Record<string, any>;
[key: string]: any;
},
{
[NodeOutputKeyEnum.error]: string;
}
>;
function getURL(codeType: string): string {
if (codeType == SandboxCodeTypeEnum.py) {
@@ -25,14 +31,21 @@ function getURL(codeType: string): string {
}
}
export const dispatchRunCode = async (props: RunCodeType): Promise<RunCodeResponse> => {
export const dispatchCodeSandbox = async (props: RunCodeType): Promise<RunCodeResponse> => {
const {
node: { catchError },
params: { codeType, code, [NodeInputKeyEnum.addInputParam]: customVariables }
} = props;
if (!process.env.SANDBOX_URL) {
return {
[NodeOutputKeyEnum.error]: 'Can not find SANDBOX_URL in env'
error: {
[NodeOutputKeyEnum.error]: 'Can not find SANDBOX_URL in env'
},
[DispatchNodeResponseKeyEnum.nodeResponse]: {
errorText: 'Can not find SANDBOX_URL in env',
customInputs: customVariables
}
};
}
@@ -51,24 +64,43 @@ export const dispatchRunCode = async (props: RunCodeType): Promise<RunCodeRespon
if (runResult.success) {
return {
[NodeOutputKeyEnum.rawResponse]: runResult.data.codeReturn,
data: {
[NodeOutputKeyEnum.rawResponse]: runResult.data.codeReturn,
...runResult.data.codeReturn
},
[DispatchNodeResponseKeyEnum.nodeResponse]: {
customInputs: customVariables,
customOutputs: runResult.data.codeReturn,
codeLog: runResult.data.log
},
[DispatchNodeResponseKeyEnum.toolResponses]: runResult.data.codeReturn,
...runResult.data.codeReturn
[DispatchNodeResponseKeyEnum.toolResponses]: runResult.data.codeReturn
};
} else {
return Promise.reject('Run code failed');
throw new Error('Run code failed');
}
} catch (error) {
const text = getErrText(error);
// @adapt
if (catchError === undefined) {
return {
data: {
[NodeOutputKeyEnum.error]: { message: text }
},
[DispatchNodeResponseKeyEnum.nodeResponse]: {
customInputs: customVariables,
errorText: text
}
};
}
return {
[NodeOutputKeyEnum.error]: formatHttpError(error),
error: {
[NodeOutputKeyEnum.error]: text
},
[DispatchNodeResponseKeyEnum.nodeResponse]: {
customInputs: customVariables,
error: formatHttpError(error)
errorText: text
}
};
}
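The `catchError === undefined` branch above exists for backward compatibility: nodes saved before the catch-error switch was introduced have no `catchError` field, so their failures are still surfaced as a normal output, while nodes with the switch set route failures through the dedicated `error` channel. A simplified standalone sketch of that branching (result shapes mirror the hunk; the literal keys are illustrative stand-ins for the enum values):

```typescript
// Simplified result union; in the real code the keys come from
// NodeOutputKeyEnum and the error channel is DispatchNodeResultType's second slot.
type NodeErrorResult =
  | { data: { error: { message: string } } } // legacy: error surfaced as a normal output
  | { error: { error: string } }; // new: dedicated error channel for catch-error edges

function toErrorResult(catchError: boolean | undefined, text: string): NodeErrorResult {
  // @adapt: nodes saved before the feature existed have catchError === undefined
  if (catchError === undefined) {
    return { data: { error: { message: text } } };
  }
  return { error: { error: text } };
}

const legacy = toErrorResult(undefined, 'Run code failed');
const modern = toErrorResult(true, 'Run code failed');
```

The same two-branch pattern appears in dispatchHttp468Request below, keyed off `node.catchError`.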


@@ -47,10 +47,14 @@ type HttpRequestProps = ModuleDispatchProps<{
[NodeInputKeyEnum.httpTimeout]?: number;
[key: string]: any;
}>;
type HttpResponse = DispatchNodeResultType<{
[NodeOutputKeyEnum.error]?: object;
[key: string]: any;
}>;
type HttpResponse = DispatchNodeResultType<
{
[key: string]: any;
},
{
[NodeOutputKeyEnum.error]?: string;
}
>;
const UNDEFINED_SIGN = 'UNDEFINED_SIGN';
@@ -349,7 +353,10 @@ export const dispatchHttp468Request = async (props: HttpRequestProps): Promise<H
}
return {
...results,
data: {
[NodeOutputKeyEnum.httpRawResponse]: rawResponse,
...results
},
[DispatchNodeResponseKeyEnum.nodeResponse]: {
totalPoints: 0,
params: Object.keys(params).length > 0 ? params : undefined,
@@ -358,21 +365,36 @@ export const dispatchHttp468Request = async (props: HttpRequestProps): Promise<H
httpResult: rawResponse
},
[DispatchNodeResponseKeyEnum.toolResponses]:
Object.keys(results).length > 0 ? results : rawResponse,
[NodeOutputKeyEnum.httpRawResponse]: rawResponse
Object.keys(results).length > 0 ? results : rawResponse
};
} catch (error) {
addLog.error('Http request error', error);
// @adapt
if (node.catchError === undefined) {
return {
data: {
[NodeOutputKeyEnum.error]: getErrText(error)
},
[DispatchNodeResponseKeyEnum.nodeResponse]: {
params: Object.keys(params).length > 0 ? params : undefined,
body: Object.keys(formattedRequestBody).length > 0 ? formattedRequestBody : undefined,
headers: Object.keys(publicHeaders).length > 0 ? publicHeaders : undefined,
httpResult: { error: formatHttpError(error) }
}
};
}
return {
[NodeOutputKeyEnum.error]: formatHttpError(error),
error: {
[NodeOutputKeyEnum.error]: getErrText(error)
},
[DispatchNodeResponseKeyEnum.nodeResponse]: {
params: Object.keys(params).length > 0 ? params : undefined,
body: Object.keys(formattedRequestBody).length > 0 ? formattedRequestBody : undefined,
headers: Object.keys(publicHeaders).length > 0 ? publicHeaders : undefined,
httpResult: { error: formatHttpError(error) }
},
[NodeOutputKeyEnum.httpRawResponse]: getErrText(error)
}
};
}
};


@@ -59,6 +59,9 @@ export const dispatchQueryExtension = async ({
});
return {
data: {
[NodeOutputKeyEnum.text]: JSON.stringify(filterSameQueries)
},
[DispatchNodeResponseKeyEnum.nodeResponse]: {
totalPoints,
model: modelName,
@@ -75,7 +78,6 @@ export const dispatchQueryExtension = async ({
inputTokens,
outputTokens
}
],
[NodeOutputKeyEnum.text]: JSON.stringify(filterSameQueries)
]
};
};


@@ -14,6 +14,7 @@ import { parseFileExtensionFromUrl } from '@fastgpt/global/common/string/tools';
import { addLog } from '../../../../common/system/log';
import { addRawTextBuffer, getRawTextBuffer } from '../../../../common/buffer/rawText/controller';
import { addMinutes } from 'date-fns';
import { getNodeErrResponse } from '../utils';
type Props = ModuleDispatchProps<{
[NodeInputKeyEnum.fileUrlList]: string[];
@@ -58,31 +59,37 @@ export const dispatchReadFiles = async (props: Props): Promise<Response> => {
// Get files from histories
const filesFromHistories = version !== '489' ? [] : getHistoryFileLinks(histories);
const { text, readFilesResult } = await getFileContentFromLinks({
// Concat fileUrlList and filesFromHistories; remove unsupported files
urls: [...fileUrlList, ...filesFromHistories],
requestOrigin,
maxFiles,
teamId,
tmbId,
customPdfParse
});
try {
const { text, readFilesResult } = await getFileContentFromLinks({
// Concat fileUrlList and filesFromHistories; remove unsupported files
urls: [...fileUrlList, ...filesFromHistories],
requestOrigin,
maxFiles,
teamId,
tmbId,
customPdfParse
});
return {
[NodeOutputKeyEnum.text]: text,
[DispatchNodeResponseKeyEnum.nodeResponse]: {
readFiles: readFilesResult.map((item) => ({
name: item?.filename || '',
url: item?.url || ''
})),
readFilesResult: readFilesResult
.map((item) => item?.nodeResponsePreviewText ?? '')
.join('\n******\n')
},
[DispatchNodeResponseKeyEnum.toolResponses]: {
fileContent: text
}
};
return {
data: {
[NodeOutputKeyEnum.text]: text
},
[DispatchNodeResponseKeyEnum.nodeResponse]: {
readFiles: readFilesResult.map((item) => ({
name: item?.filename || '',
url: item?.url || ''
})),
readFilesResult: readFilesResult
.map((item) => item?.nodeResponsePreviewText ?? '')
.join('\n******\n')
},
[DispatchNodeResponseKeyEnum.toolResponses]: {
fileContent: text
}
};
} catch (error) {
return getNodeErrResponse({ error });
}
};
export const getHistoryFileLinks = (histories: ChatItemType[]) => {


@@ -157,7 +157,9 @@ export const dispatchIfElse = async (props: Props): Promise<Response> => {
});
return {
[NodeOutputKeyEnum.ifElseResult]: res,
data: {
[NodeOutputKeyEnum.ifElseResult]: res
},
[DispatchNodeResponseKeyEnum.nodeResponse]: {
totalPoints: 0,
ifElseResult: res


@@ -6,16 +6,21 @@ import { valueTypeFormat } from '@fastgpt/global/core/workflow/runtime/utils';
import { SERVICE_LOCAL_HOST } from '../../../../common/system/tools';
import { addLog } from '../../../../common/system/log';
import { type DispatchNodeResultType } from '@fastgpt/global/core/workflow/runtime/type';
import { getErrText } from '@fastgpt/global/common/error/utils';
type LafRequestProps = ModuleDispatchProps<{
[NodeInputKeyEnum.httpReqUrl]: string;
[NodeInputKeyEnum.addInputParam]: Record<string, any>;
[key: string]: any;
}>;
type LafResponse = DispatchNodeResultType<{
[NodeOutputKeyEnum.failed]?: boolean;
[key: string]: any;
}>;
type LafResponse = DispatchNodeResultType<
{
[key: string]: any;
},
{
[NodeOutputKeyEnum.errorText]?: string;
}
>;
const UNDEFINED_SIGN = 'UNDEFINED_SIGN';
@@ -78,20 +83,24 @@ export const dispatchLafRequest = async (props: LafRequestProps): Promise<LafRes
}
return {
data: {
[NodeOutputKeyEnum.httpRawResponse]: rawResponse,
...results
},
assistantResponses: [],
[DispatchNodeResponseKeyEnum.nodeResponse]: {
totalPoints: 0,
body: Object.keys(requestBody).length > 0 ? requestBody : undefined,
httpResult: rawResponse
},
[DispatchNodeResponseKeyEnum.toolResponses]: rawResponse,
[NodeOutputKeyEnum.httpRawResponse]: rawResponse,
...results
[DispatchNodeResponseKeyEnum.toolResponses]: rawResponse
};
} catch (error) {
addLog.error('Http request error', error);
return {
[NodeOutputKeyEnum.failed]: true,
error: {
[NodeOutputKeyEnum.errorText]: getErrText(error)
},
[DispatchNodeResponseKeyEnum.nodeResponse]: {
totalPoints: 0,
body: Object.keys(requestBody).length > 0 ? requestBody : undefined,


@@ -40,7 +40,9 @@ export const dispatchTextEditor = (props: Record<string, any>): Response => {
});
return {
[NodeOutputKeyEnum.text]: textResult,
data: {
[NodeOutputKeyEnum.text]: textResult
},
[DispatchNodeResponseKeyEnum.nodeResponse]: {
textOutput: textResult
}


@@ -9,7 +9,10 @@ import {
} from '@fastgpt/global/core/workflow/runtime/type';
import { responseWrite } from '../../../common/response';
import { type NextApiResponse } from 'next';
import { SseResponseEventEnum } from '@fastgpt/global/core/workflow/runtime/constants';
import {
DispatchNodeResponseKeyEnum,
SseResponseEventEnum
} from '@fastgpt/global/core/workflow/runtime/constants';
import { getNanoid } from '@fastgpt/global/common/string/tools';
import { type SearchDataResponseItemType } from '@fastgpt/global/core/dataset/type';
import { getMCPToolRuntimeNode } from '@fastgpt/global/core/app/mcpTools/utils';
@@ -206,3 +209,30 @@ export const rewriteRuntimeWorkFlow = (
}
}
};
export const getNodeErrResponse = ({
error,
customErr,
customNodeResponse
}: {
error: any;
customErr?: Record<string, any>;
customNodeResponse?: Record<string, any>;
}) => {
const errorText = getErrText(error);
return {
error: {
[NodeOutputKeyEnum.errorText]: errorText,
...(typeof customErr === 'object' ? customErr : {})
},
[DispatchNodeResponseKeyEnum.nodeResponse]: {
errorText,
...(typeof customNodeResponse === 'object' ? customNodeResponse : {})
},
[DispatchNodeResponseKeyEnum.toolResponses]: {
error: errorText,
...(typeof customErr === 'object' ? customErr : {})
}
};
};
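dispatchReadFiles above wraps its body in try/catch and funnels any failure through the new `getNodeErrResponse` helper, which emits the error on all three channels (the `error` outputs, the node response log, and the tool response) in one place. A standalone sketch of the helper and its use (the enum keys are replaced by plain strings and `getErrText` by a trivial stringifier; both are assumptions of this sketch):

```typescript
// Minimal stand-ins for getErrText and the enum keys used by getNodeErrResponse
// (string literals here are assumptions; the real keys come from NodeOutputKeyEnum
// and DispatchNodeResponseKeyEnum).
const getErrText = (error: unknown): string =>
  error instanceof Error ? error.message : String(error);

const getNodeErrResponse = ({
  error,
  customErr,
  customNodeResponse
}: {
  error: unknown;
  customErr?: Record<string, unknown>;
  customNodeResponse?: Record<string, unknown>;
}) => {
  const errorText = getErrText(error);
  return {
    error: { errorText, ...(customErr ?? {}) },
    responseData: { errorText, ...(customNodeResponse ?? {}) },
    toolResponses: { error: errorText, ...(customErr ?? {}) }
  };
};

// Usage mirrors dispatchReadFiles: any throw inside the handler becomes a
// uniform error result instead of an unhandled rejection.
function dispatchExample(shouldFail: boolean) {
  try {
    if (shouldFail) throw new Error('file fetch failed');
    return { data: { text: 'ok' } };
  } catch (error) {
    return getNodeErrResponse({ error });
  }
}
```

Centralizing the shape means new dispatchers only need the try/catch; they no longer hand-build error outputs per node.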


@@ -3,7 +3,7 @@
"version": "1.0.0",
"type": "module",
"dependencies": {
"@fastgpt-sdk/plugin": "^0.1.1",
"@fastgpt-sdk/plugin": "^0.1.2",
"@fastgpt/global": "workspace:*",
"@modelcontextprotocol/sdk": "^1.12.1",
"@node-rs/jieba": "2.0.1",


@@ -0,0 +1,64 @@
import { parseHeaderCert } from '../controller';
import { authAppByTmbId } from '../app/auth';
import {
ManagePermissionVal,
ReadPermissionVal
} from '@fastgpt/global/support/permission/constant';
import type { EvaluationSchemaType } from '@fastgpt/global/core/app/evaluation/type';
import type { AuthModeType } from '../type';
import { MongoEvaluation } from '../../../core/app/evaluation/evalSchema';
export const authEval = async ({
evalId,
per = ReadPermissionVal,
...props
}: AuthModeType & {
evalId: string;
}): Promise<{
evaluation: EvaluationSchemaType;
tmbId: string;
teamId: string;
}> => {
const { teamId, tmbId, isRoot } = await parseHeaderCert(props);
const evaluation = await MongoEvaluation.findById(evalId, 'tmbId').lean();
if (!evaluation) {
return Promise.reject('Evaluation not found');
}
if (String(evaluation.tmbId) === tmbId) {
return {
teamId,
tmbId,
evaluation
};
}
// App read per
if (per === ReadPermissionVal) {
await authAppByTmbId({
tmbId,
appId: evaluation.appId,
per: ReadPermissionVal,
isRoot
});
return {
teamId,
tmbId,
evaluation
};
}
// Write per
await authAppByTmbId({
tmbId,
appId: evaluation.appId,
per: ManagePermissionVal,
isRoot
});
return {
teamId,
tmbId,
evaluation
};
};
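authEval above resolves access in three steps: the evaluation's owner passes immediately; otherwise a read request only needs read permission on the underlying app, and anything else needs manage permission. A condensed sketch of that ladder (the permission constants are placeholders, not the real bitmask values from @fastgpt/global/support/permission/constant):

```typescript
// Placeholder permission values (assumptions for this sketch).
const ReadPermissionVal = 0b100;
const ManagePermissionVal = 0b111;

// Returns the app permission that must be checked, or null when the
// caller owns the evaluation and needs no extra app check.
function requiredAppPermission(isOwner: boolean, per: number): number | null {
  if (isOwner) return null;
  return per === ReadPermissionVal ? ReadPermissionVal : ManagePermissionVal;
}
```

In the real function the non-null branches are enforced by `authAppByTmbId`, which rejects when the member lacks the required app permission.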


@@ -113,10 +113,6 @@ export const checkTeamDatasetLimit = async (teamId: string) => {
return Promise.reject(SystemErrEnum.licenseDatasetAmountLimit);
}
}
// Open source check
if (!global.feConfigs.isPlus && datasetCount >= 30) {
return Promise.reject(SystemErrEnum.communityVersionNumLimit);
}
};
export const checkTeamDatasetSyncPermission = async (teamId: string) => {


@@ -235,3 +235,46 @@ export const pushLLMTrainingUsage = async ({
return { totalPoints };
};
export const createEvaluationUsage = async ({
teamId,
tmbId,
appName,
model,
session
}: {
teamId: string;
tmbId: string;
appName: string;
model: string;
session?: ClientSession;
}) => {
const [{ _id: usageId }] = await MongoUsage.create(
[
{
teamId,
tmbId,
appName,
source: UsageSourceEnum.evaluation,
totalPoints: 0,
list: [
{
moduleName: i18nT('account_usage:generate_answer'),
amount: 0,
count: 0
},
{
moduleName: i18nT('account_usage:answer_accuracy'),
amount: 0,
inputTokens: 0,
outputTokens: 0,
model
}
]
}
],
{ session, ordered: true }
);
return { usageId };
};


@@ -1,9 +1,13 @@
export type ConcatBillQueueItemType = {
billId: string;
billId: string; // usageId
listIndex?: number;
totalPoints: number;
inputTokens: number;
outputTokens: number;
// Model usage
inputTokens?: number;
outputTokens?: number;
// Times
count?: number;
};
declare global {
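The reworked `ConcatBillQueueItemType` above makes the token fields optional so one queue can carry both token-billed model usage and count-billed usage against the same usage record. A small sketch of the two item flavors (field values are illustrative):

```typescript
// Mirror of the ConcatBillQueueItemType shape from the hunk above.
type ConcatBillQueueItemType = {
  billId: string; // usageId
  listIndex?: number;
  totalPoints: number;
  // Model usage
  inputTokens?: number;
  outputTokens?: number;
  // Times
  count?: number;
};

// A token-billed LLM call and a per-invocation (count-billed) item
// sharing one usage record.
const modelItem: ConcatBillQueueItemType = {
  billId: 'usage-1',
  listIndex: 0,
  totalPoints: 1.5,
  inputTokens: 120,
  outputTokens: 48
};
const countItem: ConcatBillQueueItemType = {
  billId: 'usage-1',
  listIndex: 1,
  totalPoints: 0.2,
  count: 1
};
```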

packages/service/type/env.d.ts (new file, 53 lines)

@@ -0,0 +1,53 @@
declare global {
namespace NodeJS {
interface ProcessEnv {
LOG_DEPTH: string;
DB_MAX_LINK: string;
FILE_TOKEN_KEY: string;
AES256_SECRET_KEY: string;
ROOT_KEY: string;
OPENAI_BASE_URL: string;
CHAT_API_KEY: string;
AIPROXY_API_ENDPOINT: string;
AIPROXY_API_TOKEN: string;
MULTIPLE_DATA_TO_BASE64: string;
MONGODB_URI: string;
MONGODB_LOG_URI?: string;
PG_URL: string;
OCEANBASE_URL: string;
MILVUS_ADDRESS: string;
MILVUS_TOKEN: string;
SANDBOX_URL: string;
FE_DOMAIN: string;
FILE_DOMAIN: string;
LOG_LEVEL?: string;
STORE_LOG_LEVEL?: string;
USE_IP_LIMIT?: string;
WORKFLOW_MAX_RUN_TIMES?: string;
WORKFLOW_MAX_LOOP_TIMES?: string;
CHECK_INTERNAL_IP?: string;
ALLOWED_ORIGINS?: string;
SHOW_COUPON?: string;
CONFIG_JSON_PATH?: string;
// Security config
// Seconds a password login stays locked after repeated failures
PASSWORD_LOGIN_LOCK_SECONDS?: string;
PASSWORD_EXPIRED_MONTH?: string;
MAX_LOGIN_SESSION?: string;
// Signoz
SIGNOZ_BASE_URL?: string;
SIGNOZ_SERVICE_NAME?: string;
CHAT_LOG_URL?: string;
CHAT_LOG_INTERVAL?: string;
CHAT_LOG_SOURCE_ID_PREFIX?: string;
NEXT_PUBLIC_BASE_URL: string;
}
}
}
export {};


@@ -20,11 +20,11 @@ export const iconPaths = {
'common/addLight': () => import('./icons/common/addLight.svg'),
'common/addUser': () => import('./icons/common/addUser.svg'),
'common/administrator': () => import('./icons/common/administrator.svg'),
'common/audit': () => import('./icons/common/audit.svg'),
'common/alipay': () => import('./icons/common/alipay.svg'),
'common/app': () => import('./icons/common/app.svg'),
'common/arrowLeft': () => import('./icons/common/arrowLeft.svg'),
'common/arrowRight': () => import('./icons/common/arrowRight.svg'),
'common/audit': () => import('./icons/common/audit.svg'),
'common/backFill': () => import('./icons/common/backFill.svg'),
'common/backLight': () => import('./icons/common/backLight.svg'),
'common/billing': () => import('./icons/common/billing.svg'),
@@ -47,6 +47,7 @@ export const iconPaths = {
'common/editor/resizer': () => import('./icons/common/editor/resizer.svg'),
'common/ellipsis': () => import('./icons/common/ellipsis.svg'),
'common/enable': () => import('./icons/common/enable.svg'),
'common/error': () => import('./icons/common/error.svg'),
'common/errorFill': () => import('./icons/common/errorFill.svg'),
'common/file/move': () => import('./icons/common/file/move.svg'),
'common/fileNotFound': () => import('./icons/common/fileNotFound.svg'),
@@ -111,9 +112,9 @@ export const iconPaths = {
'common/tickFill': () => import('./icons/common/tickFill.svg'),
'common/toolkit': () => import('./icons/common/toolkit.svg'),
'common/trash': () => import('./icons/common/trash.svg'),
'common/upRightArrowLight': () => import('./icons/common/upRightArrowLight.svg'),
'common/uploadFileFill': () => import('./icons/common/uploadFileFill.svg'),
'common/upperRight': () => import('./icons/common/upperRight.svg'),
'common/upRightArrowLight': () => import('./icons/common/upRightArrowLight.svg'),
'common/userInfo': () => import('./icons/common/userInfo.svg'),
'common/variable': () => import('./icons/common/variable.svg'),
'common/viewLight': () => import('./icons/common/viewLight.svg'),
@@ -150,8 +151,6 @@ export const iconPaths = {
'core/app/simpleMode/tts': () => import('./icons/core/app/simpleMode/tts.svg'),
'core/app/simpleMode/variable': () => import('./icons/core/app/simpleMode/variable.svg'),
'core/app/simpleMode/whisper': () => import('./icons/core/app/simpleMode/whisper.svg'),
'core/app/templates/TranslateRobot': () =>
import('./icons/core/app/templates/TranslateRobot.svg'),
'core/app/templates/animalLife': () => import('./icons/core/app/templates/animalLife.svg'),
'core/app/templates/chinese': () => import('./icons/core/app/templates/chinese.svg'),
'core/app/templates/divination': () => import('./icons/core/app/templates/divination.svg'),
@@ -161,6 +160,8 @@ export const iconPaths = {
'core/app/templates/plugin-dalle': () => import('./icons/core/app/templates/plugin-dalle.svg'),
'core/app/templates/plugin-feishu': () => import('./icons/core/app/templates/plugin-feishu.svg'),
'core/app/templates/stock': () => import('./icons/core/app/templates/stock.svg'),
'core/app/templates/TranslateRobot': () =>
import('./icons/core/app/templates/TranslateRobot.svg'),
'core/app/toolCall': () => import('./icons/core/app/toolCall.svg'),
'core/app/ttsFill': () => import('./icons/core/app/ttsFill.svg'),
'core/app/type/httpPlugin': () => import('./icons/core/app/type/httpPlugin.svg'),
@@ -179,7 +180,6 @@ export const iconPaths = {
'core/app/variable/input': () => import('./icons/core/app/variable/input.svg'),
'core/app/variable/select': () => import('./icons/core/app/variable/select.svg'),
'core/app/variable/textarea': () => import('./icons/core/app/variable/textarea.svg'),
'core/chat/QGFill': () => import('./icons/core/chat/QGFill.svg'),
'core/chat/backText': () => import('./icons/core/chat/backText.svg'),
'core/chat/cancelSpeak': () => import('./icons/core/chat/cancelSpeak.svg'),
'core/chat/chatFill': () => import('./icons/core/chat/chatFill.svg'),
@@ -196,6 +196,7 @@ export const iconPaths = {
'core/chat/fileSelect': () => import('./icons/core/chat/fileSelect.svg'),
'core/chat/finishSpeak': () => import('./icons/core/chat/finishSpeak.svg'),
'core/chat/imgSelect': () => import('./icons/core/chat/imgSelect.svg'),
'core/chat/QGFill': () => import('./icons/core/chat/QGFill.svg'),
'core/chat/quoteFill': () => import('./icons/core/chat/quoteFill.svg'),
'core/chat/quoteSign': () => import('./icons/core/chat/quoteSign.svg'),
'core/chat/recordFill': () => import('./icons/core/chat/recordFill.svg'),
@@ -203,6 +204,7 @@ export const iconPaths = {
'core/chat/sendLight': () => import('./icons/core/chat/sendLight.svg'),
'core/chat/setTopLight': () => import('./icons/core/chat/setTopLight.svg'),
'core/chat/sideLine': () => import('./icons/core/chat/sideLine.svg'),
'core/chat/sidebar/logout': () => import('./icons/core/chat/sidebar/logout.svg'),
'core/chat/speaking': () => import('./icons/core/chat/speaking.svg'),
'core/chat/stopSpeech': () => import('./icons/core/chat/stopSpeech.svg'),
'core/chat/think': () => import('./icons/core/chat/think.svg'),
@@ -283,13 +285,12 @@ export const iconPaths = {
'core/workflow/redo': () => import('./icons/core/workflow/redo.svg'),
'core/workflow/revertVersion': () => import('./icons/core/workflow/revertVersion.svg'),
'core/workflow/runError': () => import('./icons/core/workflow/runError.svg'),
'core/workflow/running': () => import('./icons/core/workflow/running.svg'),
'core/workflow/runSkip': () => import('./icons/core/workflow/runSkip.svg'),
'core/workflow/runSuccess': () => import('./icons/core/workflow/runSuccess.svg'),
'core/workflow/running': () => import('./icons/core/workflow/running.svg'),
'core/workflow/template/BI': () => import('./icons/core/workflow/template/BI.svg'),
'core/workflow/template/FileRead': () => import('./icons/core/workflow/template/FileRead.svg'),
'core/workflow/template/aiChat': () => import('./icons/core/workflow/template/aiChat.svg'),
'core/workflow/template/baseChart': () => import('./icons/core/workflow/template/baseChart.svg'),
'core/workflow/template/BI': () => import('./icons/core/workflow/template/BI.svg'),
'core/workflow/template/bing': () => import('./icons/core/workflow/template/bing.svg'),
'core/workflow/template/bocha': () => import('./icons/core/workflow/template/bocha.svg'),
'core/workflow/template/codeRun': () => import('./icons/core/workflow/template/codeRun.svg'),
@@ -306,6 +307,7 @@ export const iconPaths = {
'core/workflow/template/extractJson': () =>
import('./icons/core/workflow/template/extractJson.svg'),
'core/workflow/template/fetchUrl': () => import('./icons/core/workflow/template/fetchUrl.svg'),
'core/workflow/template/FileRead': () => import('./icons/core/workflow/template/FileRead.svg'),
'core/workflow/template/formInput': () => import('./icons/core/workflow/template/formInput.svg'),
'core/workflow/template/getTime': () => import('./icons/core/workflow/template/getTime.svg'),
'core/workflow/template/google': () => import('./icons/core/workflow/template/google.svg'),
@@ -335,12 +337,12 @@ export const iconPaths = {
'core/workflow/template/textConcat': () =>
import('./icons/core/workflow/template/textConcat.svg'),
'core/workflow/template/toolCall': () => import('./icons/core/workflow/template/toolCall.svg'),
'core/workflow/template/toolParams': () =>
import('./icons/core/workflow/template/toolParams.svg'),
'core/workflow/template/toolkitActive': () =>
import('./icons/core/workflow/template/toolkitActive.svg'),
'core/workflow/template/toolkitInactive': () =>
import('./icons/core/workflow/template/toolkitInactive.svg'),
'core/workflow/template/toolParams': () =>
import('./icons/core/workflow/template/toolParams.svg'),
'core/workflow/template/userSelect': () =>
import('./icons/core/workflow/template/userSelect.svg'),
'core/workflow/template/variable': () => import('./icons/core/workflow/template/variable.svg'),
@@ -389,6 +391,7 @@ export const iconPaths = {
key: () => import('./icons/key.svg'),
keyPrimary: () => import('./icons/keyPrimary.svg'),
loading: () => import('./icons/loading.svg'),
mcp: () => import('./icons/mcp.svg'),
menu: () => import('./icons/menu.svg'),
minus: () => import('./icons/minus.svg'),
'modal/AddClb': () => import('./icons/modal/AddClb.svg'),
@@ -400,10 +403,10 @@ export const iconPaths = {
'modal/selectSource': () => import('./icons/modal/selectSource.svg'),
'modal/setting': () => import('./icons/modal/setting.svg'),
'modal/teamPlans': () => import('./icons/modal/teamPlans.svg'),
'model/BAAI': () => import('./icons/model/BAAI.svg'),
'model/alicloud': () => import('./icons/model/alicloud.svg'),
'model/aws': () => import('./icons/model/aws.svg'),
'model/azure': () => import('./icons/model/azure.svg'),
'model/BAAI': () => import('./icons/model/BAAI.svg'),
'model/baichuan': () => import('./icons/model/baichuan.svg'),
'model/chatglm': () => import('./icons/model/chatglm.svg'),
'model/claude': () => import('./icons/model/claude.svg'),

Some files were not shown because too many files have changed in this diff.