V4.11.0 features (#5270)

* feat: workflow catch error (#5220)

* feat: error catch

* feat: workflow catch error

* perf: add catch error to node

* feat: system tool error catch

* catch error

* fix: ts

* update doc

* perf: training queue code (#5232)

* doc

* perf: training queue code

* Feat: improve error messages and retry logic (#5192)

* feat: batch-retry failed data & i18n error messages

  - Add a "Retry all" button to batch-retry all failed training data
  - Error messages support i18n; common errors are mapped to i18n keys automatically
  - Related docs and i18n resources updated accordingly

* feat: enhance error message and retry mechanism

* feat: enhance error message and retry mechanism

* feat: add retry_failed i18n key

* feat: enhance error message and retry mechanism

* feat: enhance error message and retry mechanism

* feat: enhance error message and retry mechanism : 5

* feat: enhance error message and retry mechanism : 6

* feat: enhance error message and retry mechanism : 7

* feat: enhance error message and retry mechanism : 8

* perf: catch chat error

* perf: copy hook (#5246)

* perf: copy hook

* doc

* doc

* add app evaluation (#5083)

* add app evaluation

* fix

* usage

* variables

* editing condition

* var ui

* isplus filter

* migrate code

* remove utils

* name

* update type

* build

* fix

* fix

* fix

* delete comment

* fix

* perf: eval code

* eval code

* eval code

* feat: ttfb time in model log

* Refactor chat page (#5253)

* feat: update side bar layout; add login and logout logic at chat page

* refactor: encapsulate login logic and reuse it in `LoginModal` and `Login` page

* chore: improve some logics and comments

* chore: improve some logics

* chore: remove redundant side effect; add translations

---------

Co-authored-by: Archer <545436317@qq.com>

* perf: chat page code

* doc

* perf: provider redirect

* chore: ui improvement (#5266)

* Fix: SSE

* Fix: SSE

* eval pagination (#5264)

* eval scroll pagination

* change eval list to manual pagination

* number

* fix build

* fix

* version doc (#5267)

* version doc

* version doc

* doc

* feat: eval model select

* config eval model

* perf: eval detail modal ui

* doc

* doc

* fix: chat store reload

* doc

---------

Co-authored-by: colnii <1286949794@qq.com>
Co-authored-by: heheer <heheer@sealos.io>
Co-authored-by: 酒川户 <76519998+chuanhu9@users.noreply.github.com>
Archer 2025-07-22 09:42:50 +08:00 committed by GitHub
parent de208d6c3f
commit 13b7e0a192
212 changed files with 5840 additions and 3400 deletions

@@ -0,0 +1,28 @@
---
description:
globs:
alwaysApply: false
---
This is a design document for catching workflow node errors.
# Background
Today, if any node in a running workflow errors, the whole workflow aborts and cannot continue; the only feedback is an unfriendly red English toast. Users who need tight control over their workflows are willing to build fallback branches for errors, but the existing orchestration makes that cumbersome.
This works like a try/catch in code: the user receives the caught error instead of the workflow throwing and terminating.
# Intended behavior
1. Some nodes can offer an error-catch option, i.e. nodes whose `catchError` is not undefined: catchError=true means error catching is enabled, catchError=false means it is disabled.
2. Nodes that support error catching show an "error catch" toggle on the right side of the output panel.
3. A node's output items carry an `errorField` flag marking outputs that exist only when error catching is enabled.
4. When a node with error catching enabled fails, it does not block downstream nodes; it outputs the error message and execution continues.
5. A node with error catching enabled gains an extra "error output" branch connection; on error, execution follows that branch.
# Implementation plan
1. Add an optional boolean `catchError` to FlowNodeCommonType. Nodes that need error catching set it to true/false, which both marks the node as supporting error catching and sets whether it is enabled by default.
2. Add an `error` value to FlowNodeOutputTypeEnum marking outputs that are shown only on error.
3. The IOTitle component accepts a `catchError` field; when true, it renders the "error catch" toggle on the right.
4. All existing RenderOutput components must be updated: the `flowOutputList` passed in should exclude hidden and error type outputs.
5. Add a new `CatchError` component under `FastGPT/projects/app/src/pageComponents/app/detail/WorkflowComponents/Flow/nodes/render/RenderOutput` dedicated to rendering error-type outputs, with its own SourceHandler.
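The type surface and output filtering described in the implementation plan can be sketched as follows. This is a minimal, illustrative sketch: the field names follow the design doc, but the surrounding types are simplified stand-ins rather than FastGPT's real definitions.

```typescript
// Simplified stand-ins for the node types described above.
enum FlowNodeOutputTypeEnum {
  hidden = 'hidden',
  error = 'error', // output shown only when the node errors
  static = 'static'
}

type FlowNodeOutputItem = {
  key: string;
  type: FlowNodeOutputTypeEnum;
};

type FlowNodeCommon = {
  catchError?: boolean; // undefined = node does not support error catching
  outputs: FlowNodeOutputItem[];
};

// Per point 4: ordinary RenderOutput components only receive regular outputs;
// hidden and error-type outputs are filtered out and rendered elsewhere.
const getRenderedOutputs = (node: FlowNodeCommon) =>
  node.outputs.filter(
    (o) =>
      o.type !== FlowNodeOutputTypeEnum.hidden &&
      o.type !== FlowNodeOutputTypeEnum.error
  );

const node: FlowNodeCommon = {
  catchError: true,
  outputs: [
    { key: 'answerText', type: FlowNodeOutputTypeEnum.static },
    { key: 'system_error_text', type: FlowNodeOutputTypeEnum.error }
  ]
};

console.log(getRenderedOutputs(node).map((o) => o.key)); // [ 'answerText' ]
```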

@@ -47,40 +47,41 @@ https://github.com/labring/FastGPT/assets/15308462/7d3a38df-eb0e-4388-9250-2409b
 ## 💡 RoadMap
 `1` Application orchestration
-- [x] Chat workflows, plugin workflows
-- [x] Tool calls
-- [x] Code sandbox
-- [x] Loop calls
-- [x] User select
-- [x] Form input
-`2` Knowledge base
+- [x] Chat workflows, plugin workflows, with basic RPA nodes
+- [x] Agent calls
+- [x] User interaction nodes
+- [x] Bidirectional MCP
+- [ ] Context management
+- [ ] AI-generated workflows
+`2` App debugging
+- [x] Single-point knowledge base search testing
+- [x] Feedback on citations during chat, with edit and delete
+- [x] Full call-chain logs
+- [ ] App evaluation
+- [ ] Advanced orchestration debug mode
+- [ ] App node logs
+`3` Knowledge base
 - [x] Multi-knowledge-base reuse and mixing
 - [x] Chunk record editing and deletion
 - [x] Manual input, direct chunking, and QA-split import
 - [x] Supports txt, md, html, pdf, docx, pptx, csv, xlsx (PR more file loaders if needed), URL reading, and CSV batch import
 - [x] Hybrid retrieval & re-ranking
 - [x] API knowledge base
-- [ ] Custom file-reading service
-- [ ] Custom chunking service
-`3` App debugging
-- [x] Single-point knowledge base search testing
-- [x] Feedback on citations during chat, with edit and delete
-- [x] Full context display
-- [x] Full module intermediate-value display
-- [ ] Advanced orchestration debug mode
+- [ ] Hot-swappable RAG modules
 `4` OpenAPI
 - [x] completions endpoint (chat mode aligned with the GPT API)
 - [x] Knowledge base CRUD
 - [x] Chat CRUD
-- [ ] Complete API documents
 `5` Operations
 - [x] Login-free share window
 - [x] One-click Iframe embedding
-- [x] Embeddable chat window with custom icon, open-by-default, drag, etc.
 - [x] Unified review of chat logs with data annotation
+- [ ] App operation logs
 `6` Others
 - [x] Visual model configuration.

@@ -0,0 +1,56 @@
---
title: 'V4.11.0 (in progress)'
description: 'FastGPT V4.11.0 release notes'
icon: 'upgrade'
draft: false
toc: true
weight: 783
---
<!-- ## Upgrade notes
### 1. Update environment variables
FastGPT commercial edition users can add the evaluation-related environment variables below, then click Save once in the admin panel after updating.
```
EVAL_CONCURRENCY=3 # Evaluation concurrency per node
EVAL_LINE_LIMIT=1000 # Maximum number of lines in an evaluation file
```
### 2. Update images:
- Update the FastGPT image tag: v4.11.0
- Update the FastGPT commercial edition image tag: v4.11.0
- Update the fastgpt-plugin image tag: v0.1.4
- mcp_server: no update needed
- Sandbox: no update needed
- AIProxy: no update needed -->
## Project changes
1. Removed all limits on **open-source features**, including the caps on the number of apps and knowledge bases.
2. Adjusted the RoadMap: added plans such as `context management`, `AI-generated workflows`, and `advanced orchestration debug mode`.
## 🚀 New features
1. Commercial edition: **app evaluation (Beta)** for supervised scoring of apps.
2. Some workflow nodes support an error-catch branch.
3. Standalone tab page UX for the chat page.
4. Support for SigNoz traces and logs tracing.
5. Added Gemini 2.5, Grok 4, and Kimi model configurations.
6. Model call logs now include time to first token and the request IP.
## ⚙️ Improvements
1. Optimized code to avoid memory buildup caused by recursion.
2. Knowledge base training: support retrying all failed data in the current collection.
3. Workflow valueTypeFormat, to avoid inconsistent data types.
## 🐛 Fixes
1. The default model for the question classification and content extraction nodes failed front-end validation, preventing workflows from running, saving, or publishing.
## 🔨 Tool updates
1. Markdown text to Docx and Xlsx files.

@@ -20,19 +20,18 @@ The FastGPT commercial edition is an enhanced version of the open-source edition, with some additional
 | Document knowledge base | ✅ | ✅ | ✅ |
 | External use | ✅ | ✅ | ✅ |
 | API knowledge base | ✅ | ✅ | ✅ |
-| Max number of apps | 500 | Unlimited | Determined by the paid plan |
-| Max number of knowledge bases (content per knowledge base unlimited) | 30 | Unlimited | Determined by the paid plan |
 | Custom copyright info | ❌ | ✅ | In design |
 | Multi-tenancy & payments | ❌ | ✅ | ✅ |
 | Team spaces & permissions | ❌ | ✅ | ✅ |
 | App publishing security config | ❌ | ✅ | ✅ |
 | Content moderation | ❌ | ✅ | ✅ |
+| App evaluation | ❌ | ✅ | ✅ |
 | Website sync | ❌ | ✅ | ✅ |
-| Enhanced training mode | ❌ | ✅ | ✅ |
-| Knowledge base index enhancement | ❌ | ✅ | ✅ |
+| Image knowledge base | ❌ | ✅ | ✅ |
 | Quick third-party integration (Feishu, WeChat Official Accounts) | ❌ | ✅ | ✅ |
 | Admin console | ❌ | ✅ | Not needed |
 | SSO login (customizable, or built-in: GitHub, Official Accounts, DingTalk, Google, etc.) | ❌ | ✅ | Not needed |
-| Image knowledge base | ❌ | In design | In design |
 | Chat log operations analysis | ❌ | In design | In design |
 | Full commercial license | ❌ | ✅ | ✅ |
 {{< /table >}}

env.d.ts
@@ -1,43 +1,8 @@
 declare global {
   namespace NodeJS {
     interface ProcessEnv {
-      LOG_DEPTH: string;
       DEFAULT_ROOT_PSW: string;
-      DB_MAX_LINK: string;
-      FILE_TOKEN_KEY: string;
-      AES256_SECRET_KEY: string;
-      ROOT_KEY: string;
-      OPENAI_BASE_URL: string;
-      CHAT_API_KEY: string;
-      AIPROXY_API_ENDPOINT: string;
-      AIPROXY_API_TOKEN: string;
-      MULTIPLE_DATA_TO_BASE64: string;
-      MONGODB_URI: string;
-      MONGODB_LOG_URI?: string;
-      PG_URL: string;
-      OCEANBASE_URL: string;
-      MILVUS_ADDRESS: string;
-      MILVUS_TOKEN: string;
-      SANDBOX_URL: string;
       PRO_URL: string;
-      FE_DOMAIN: string;
-      FILE_DOMAIN: string;
-      NEXT_PUBLIC_BASE_URL: string;
-      LOG_LEVEL?: string;
-      STORE_LOG_LEVEL?: string;
-      USE_IP_LIMIT?: string;
-      WORKFLOW_MAX_RUN_TIMES?: string;
-      WORKFLOW_MAX_LOOP_TIMES?: string;
-      CHECK_INTERNAL_IP?: string;
-      CHAT_LOG_URL?: string;
-      CHAT_LOG_INTERVAL?: string;
-      CHAT_LOG_SOURCE_ID_PREFIX?: string;
-      ALLOWED_ORIGINS?: string;
-      SHOW_COUPON?: string;
-      CONFIG_JSON_PATH?: string;
-      PASSWORD_LOGIN_LOCK_SECONDS?: string;
-      PASSWORD_EXPIRED_MONTH?: string;
-      MAX_LOGIN_SESSION?: string;
     }
   }
 }

@@ -6,7 +6,6 @@ export enum UserErrEnum {
   userExist = 'userExist',
   unAuthRole = 'unAuthRole',
   account_psw_error = 'account_psw_error',
-  balanceNotEnough = 'balanceNotEnough',
   unAuthSso = 'unAuthSso'
 }
 const errList = [
@@ -22,10 +21,6 @@ const errList = [
     statusText: UserErrEnum.account_psw_error,
     message: i18nT('common:code_error.account_error')
   },
-  {
-    statusText: UserErrEnum.balanceNotEnough,
-    message: i18nT('common:code_error.user_error.balance_not_enough')
-  },
   {
     statusText: UserErrEnum.unAuthSso,
     message: i18nT('user:sso_auth_failed')

@@ -1,4 +1,5 @@
 import { replaceSensitiveText } from '../string/tools';
+import { ERROR_RESPONSE } from './errorCode';

 export const getErrText = (err: any, def = ''): any => {
   const msg: string =
@@ -12,6 +13,11 @@ export const getErrText = (err: any, def = ''): any => {
     err?.msg ||
     err?.error ||
     def;
+
+  if (ERROR_RESPONSE[msg]) {
+    return ERROR_RESPONSE[msg].message;
+  }
+
   // msg && console.log('error =>', msg);
   return replaceSensitiveText(msg);
 };
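The lookup order this diff introduces can be sketched in isolation: a message matching a registered error code is mapped to its i18n message first, and only unmapped messages fall through to the sensitive-text filter. The `ERROR_RESPONSE` contents and the regex below are illustrative stand-ins, not FastGPT's real table.

```typescript
// Illustrative error-code table; FastGPT's real ERROR_RESPONSE is larger.
const ERROR_RESPONSE: Record<string, { message: string }> = {
  team_error: { message: 'common:code_error.team_error' }
};

// Stand-in for the real replaceSensitiveText helper.
const replaceSensitiveText = (text: string) => text.replace(/sk-[\w-]+/g, '***');

const getErrText = (err: any, def = ''): string => {
  const msg: string = err?.message || err?.msg || err?.error || def;
  if (ERROR_RESPONSE[msg]) return ERROR_RESPONSE[msg].message; // known code: map to i18n message
  return replaceSensitiveText(msg); // otherwise sanitize and pass through
};

console.log(getErrText({ message: 'team_error' })); // common:code_error.team_error
console.log(getErrText({ message: 'bad key sk-abc123' })); // bad key ***
```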

@@ -51,6 +51,7 @@ export type FastGPTFeConfigsType = {
   bind_notification_method?: ['email' | 'phone'];
   googleClientVerKey?: string;
   mcpServerProxyEndpoint?: string;
+  chineseRedirectUrl?: string;

   show_emptyChat?: boolean;
   show_appStore?: boolean;
@@ -82,7 +83,6 @@ export type FastGPTFeConfigsType = {
   customSharePageDomain?: string;

   systemTitle?: string;
-  systemDescription?: string;
   scripts?: { [key: string]: string }[];
   favicon?: string;
@@ -109,6 +109,7 @@ export type FastGPTFeConfigsType = {
   uploadFileMaxAmount?: number;
   uploadFileMaxSize?: number;
+  evalFileMaxLines?: number;

   // Compute by systemEnv.customPdfParse
   showCustomPdfParse?: boolean;

@@ -5,15 +5,17 @@ export const delay = (ms: number) =>
     }, ms);
   });

-export const retryFn = async <T>(fn: () => Promise<T>, retryTimes = 3): Promise<T> => {
-  try {
-    return fn();
-  } catch (error) {
-    if (retryTimes > 0) {
-      await delay(500);
-      return retryFn(fn, retryTimes - 1);
-    }
-    return Promise.reject(error);
-  }
-};
+export const retryFn = async <T>(fn: () => Promise<T>, attempts = 3): Promise<T> => {
+  while (true) {
+    try {
+      return fn();
+    } catch (error) {
+      if (attempts <= 0) {
+        return Promise.reject(error);
+      }
+      await delay(500);
+      attempts--;
+    }
+  }
+};
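A usage sketch of the loop-based `retryFn` above, with one caveat: inside an async function, `return fn();` does not route a rejected promise through the `catch` block, so the sketch uses `return await fn();` to make the retry actually fire on rejection. The flaky function is a hypothetical stand-in for any async call worth retrying.

```typescript
const delay = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Loop-based retry as in the diff, but with `await` added so a rejected
// promise is caught and retried (plain `return fn()` would bypass the catch).
const retryFn = async <T>(fn: () => Promise<T>, attempts = 3): Promise<T> => {
  while (true) {
    try {
      return await fn();
    } catch (error) {
      if (attempts <= 0) return Promise.reject(error);
      await delay(500);
      attempts--;
    }
  }
};

// Hypothetical flaky call: fails twice, then succeeds.
let calls = 0;
const flaky = async () => {
  calls++;
  if (calls < 3) throw new Error('transient failure');
  return 'ok';
};

retryFn(flaky).then((res) => console.log(res, `after ${calls} calls`)); // ok after 3 calls
```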

@@ -47,6 +47,7 @@ export type LLMModelItemType = PriceType &
     usedInClassify?: boolean; // classify
     usedInExtractFields?: boolean; // extract fields
     usedInToolCall?: boolean; // tool call
+    useInEvaluation?: boolean; // evaluation

     functionCall: boolean;
     toolChoice: boolean;


@ -0,0 +1,20 @@
import type { PaginationProps } from '@fastgpt/web/common/fetch/type';
export type listEvaluationsBody = PaginationProps<{
searchKey?: string;
}>;
export type listEvalItemsBody = PaginationProps<{
evalId: string;
}>;
export type retryEvalItemBody = {
evalItemId: string;
};
export type updateEvalItemBody = {
evalItemId: string;
question: string;
expectedResponse: string;
variables: Record<string, string>;
};


@ -0,0 +1,22 @@
import { i18nT } from '../../../../web/i18n/utils';
export const evaluationFileErrors = i18nT('dashboard_evaluation:eval_file_check_error');
export enum EvaluationStatusEnum {
queuing = 0,
evaluating = 1,
completed = 2
}
export const EvaluationStatusMap = {
[EvaluationStatusEnum.queuing]: {
name: i18nT('dashboard_evaluation:queuing')
},
[EvaluationStatusEnum.evaluating]: {
name: i18nT('dashboard_evaluation:evaluating')
},
[EvaluationStatusEnum.completed]: {
name: i18nT('dashboard_evaluation:completed')
}
};
export const EvaluationStatusValues = Object.keys(EvaluationStatusMap).map(Number);


@ -0,0 +1,51 @@
import type { EvaluationStatusEnum } from './constants';
export type EvaluationSchemaType = {
_id: string;
teamId: string;
tmbId: string;
evalModel: string;
appId: string;
usageId: string;
name: string;
createTime: Date;
finishTime?: Date;
score?: number;
errorMessage?: string;
};
export type EvalItemSchemaType = {
evalId: string;
question: string;
expectedResponse: string;
globalVariables?: Record<string, any>;
history?: string;
response?: string;
responseTime?: Date;
finishTime?: Date;
status: EvaluationStatusEnum;
retry: number;
errorMessage?: string;
accuracy?: number;
relevance?: number;
semanticAccuracy?: number;
score?: number;
};
export type evaluationType = Pick<
EvaluationSchemaType,
'name' | 'appId' | 'createTime' | 'finishTime' | 'evalModel' | 'errorMessage' | 'score'
> & {
_id: string;
executorAvatar: string;
executorName: string;
appAvatar: string;
appName: string;
completedCount: number;
errorCount: number;
totalCount: number;
};
export type listEvalItemsItem = EvalItemSchemaType & {
evalItemId: string;
};


@ -0,0 +1,10 @@
import type { VariableItemType } from '../type';
export const getEvaluationFileHeader = (appVariables?: VariableItemType[]) => {
if (!appVariables || appVariables.length === 0) return '*q,*a,history';
const variablesStr = appVariables
.map((item) => (item.required ? `*${item.key}` : item.key))
.join(',');
return `${variablesStr},*q,*a,history`;
};
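The header helper above can be exercised directly; this sketch inlines the same implementation with `VariableItemType` reduced to the two fields the helper reads, and hypothetical variable names for illustration.

```typescript
// Simplified shape: the helper only reads `key` and `required`.
type VariableItemType = { key: string; required: boolean };

// Same logic as the new file above: required columns are prefixed with '*'.
const getEvaluationFileHeader = (appVariables?: VariableItemType[]) => {
  if (!appVariables || appVariables.length === 0) return '*q,*a,history';

  const variablesStr = appVariables
    .map((item) => (item.required ? `*${item.key}` : item.key))
    .join(',');

  return `${variablesStr},*q,*a,history`;
};

console.log(getEvaluationFileHeader()); // *q,*a,history
console.log(
  getEvaluationFileHeader([
    { key: 'language', required: true },
    { key: 'tone', required: false }
  ])
); // *language,tone,*q,*a,history
```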

@@ -32,11 +32,11 @@ export const getMCPToolSetRuntimeNode = ({
     nodeId: getNanoid(16),
     flowNodeType: FlowNodeTypeEnum.toolSet,
     avatar,
-    intro: 'MCP Tools',
+    intro: '',
     inputs: [
       {
         key: NodeInputKeyEnum.toolSetData,
-        label: 'Tool Set Data',
+        label: '',
         valueType: WorkflowIOValueTypeEnum.object,
         renderTypeList: [FlowNodeInputTypeEnum.hidden],
         value: {

@@ -34,6 +34,9 @@ export type SystemPluginTemplateItemType = WorkflowTemplateType & {
   versionList?: {
     value: string;
     description?: string;
+    inputs: FlowNodeInputItemType[];
+    outputs: FlowNodeOutputItemType[];
   }[];

   // Admin workflow tool

@@ -66,6 +66,7 @@ export type AppListItemType = {
   inheritPermission?: boolean;
   private?: boolean;
   sourceMember: SourceMemberType;
+  hasInteractiveNode?: boolean;
 };

 export type AppDetailType = AppSchema & {

@@ -269,11 +269,11 @@ export enum NodeOutputKeyEnum {
   reasoningText = 'reasoningText', // node reasoning. the value will be show but not save to history
   success = 'success',
   failed = 'failed',
-  error = 'error',
   text = 'system_text',
   addOutputParam = 'system_addOutputParam',
   rawResponse = 'system_rawResponse',
   systemError = 'system_error',
+  errorText = 'system_error_text',

   // start
   userFiles = 'userFiles',
@@ -312,7 +312,13 @@ export enum NodeOutputKeyEnum {
   loopStartIndex = 'loopStartIndex',

   // form input
-  formInputResult = 'formInputResult'
+  formInputResult = 'formInputResult',
+
+  // File
+  fileTitle = 'fileTitle',
+
+  // @deprecated
+  error = 'error'
 }

 export enum VariableInputEnum {

@@ -99,6 +99,7 @@ export const FlowNodeInputMap: Record<

 export enum FlowNodeOutputTypeEnum {
   hidden = 'hidden',
+  error = 'error',
   source = 'source',
   static = 'static',
   dynamic = 'dynamic'

@@ -9,7 +9,7 @@ import type { FlowNodeInputItemType, FlowNodeOutputItemType } from '../type/io.d
 import type { NodeToolConfigType, StoreNodeItemType } from '../type/node';
 import type { DispatchNodeResponseKeyEnum } from './constants';
 import type { StoreEdgeItemType } from '../type/edge';
-import type { NodeInputKeyEnum } from '../constants';
+import type { NodeInputKeyEnum, NodeOutputKeyEnum } from '../constants';
 import type { ClassifyQuestionAgentItemType } from '../template/system/classifyQuestion/type';
 import type { NextApiResponse } from 'next';
 import { UserModelSchema } from '../../../support/user/type';
@@ -24,7 +24,10 @@ import type { AiChatQuoteRoleType } from '../template/system/aiChat/type';
 import type { OpenaiAccountType } from '../../../support/user/team/type';
 import { LafAccountType } from '../../../support/user/team/type';
 import type { CompletionFinishReason } from '../../ai/type';
-import type { WorkflowInteractiveResponseType } from '../template/system/interactive/type';
+import type {
+  InteractiveNodeResponseType,
+  WorkflowInteractiveResponseType
+} from '../template/system/interactive/type';
 import type { SearchDataResponseItemType } from '../../dataset/type';

 export type ExternalProviderType = {
   openaiAccount?: OpenaiAccountType;
@@ -104,6 +107,9 @@ export type RuntimeNodeItemType = {

   // Tool
   toolConfig?: StoreNodeItemType['toolConfig'];
+
+  // catch error
+  catchError?: boolean;
 };

 export type RuntimeEdgeItemType = StoreEdgeItemType & {
@@ -116,7 +122,12 @@ export type DispatchNodeResponseType = {
   runningTime?: number;
   query?: string;
   textOutput?: string;
+
+  // Client will toast
   error?: Record<string, any> | string;
+  // Just show
+  errorText?: string;
+
   customInputs?: Record<string, any>;
   customOutputs?: Record<string, any>;
   nodeInputs?: Record<string, any>;
@@ -235,7 +246,7 @@ export type DispatchNodeResponseType = {
   extensionTokens?: number;
 };

-export type DispatchNodeResultType<T = {}> = {
+export type DispatchNodeResultType<T = {}, ERR = { [NodeOutputKeyEnum.errorText]?: string }> = {
   [DispatchNodeResponseKeyEnum.skipHandleId]?: string[]; // skip some edge handle id
   [DispatchNodeResponseKeyEnum.nodeResponse]?: DispatchNodeResponseType; // The node response detail
   [DispatchNodeResponseKeyEnum.nodeDispatchUsages]?: ChatNodeUsageType[]; // Node total usage
@@ -246,7 +257,11 @@ export type DispatchNodeResultType<T = {}> = {
   [DispatchNodeResponseKeyEnum.runTimes]?: number;
   [DispatchNodeResponseKeyEnum.newVariables]?: Record<string, any>;
   [DispatchNodeResponseKeyEnum.memories]?: Record<string, any>;
-} & T;
+  [DispatchNodeResponseKeyEnum.interactive]?: InteractiveNodeResponseType;
+
+  data?: T;
+  error?: ERR;
+};

 /* Single node props */
 export type AIChatNodeProps = {

@@ -251,7 +251,8 @@ export const storeNodes2RuntimeNodes = (
         outputs: node.outputs,
         pluginId: node.pluginId,
         version: node.version,
-        toolConfig: node.toolConfig
+        toolConfig: node.toolConfig,
+        catchError: node.catchError
       };
     }) || []
   );
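The `DispatchNodeResultType` change above splits a node's result into a `data`/`error` envelope, and `catchError` is threaded onto each runtime node. How a dispatcher might use that envelope can be sketched as follows; the `dispatchHttpNode` function and its payload shape are hypothetical, only the field names mirror the diff.

```typescript
// Envelope mirroring the DispatchNodeResultType change: on success `data` is
// filled, on a caught error `error` carries the message for the error branch.
type DispatchNodeResult<T, ERR = { system_error_text?: string }> = {
  data?: T;
  error?: ERR;
};

// Hypothetical dispatcher: with error catching, a failure is returned as a
// value on the error branch instead of aborting the workflow.
const dispatchHttpNode = async (
  run: () => Promise<{ httpResult: string }>
): Promise<DispatchNodeResult<{ httpResult: string }>> => {
  try {
    return { data: await run() };
  } catch (err: any) {
    return { error: { system_error_text: String(err?.message ?? err) } };
  }
};

dispatchHttpNode(async () => {
  throw new Error('ECONNREFUSED');
}).then((res) => console.log(res.error?.system_error_text)); // ECONNREFUSED
```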

View File

@ -2,6 +2,7 @@ import type { FlowNodeOutputItemType } from '../type/io.d';
import { NodeOutputKeyEnum } from '../constants'; import { NodeOutputKeyEnum } from '../constants';
import { FlowNodeOutputTypeEnum } from '../node/constant'; import { FlowNodeOutputTypeEnum } from '../node/constant';
import { WorkflowIOValueTypeEnum } from '../constants'; import { WorkflowIOValueTypeEnum } from '../constants';
import { i18nT } from '../../../../web/i18n/utils';
export const Output_Template_AddOutput: FlowNodeOutputItemType = { export const Output_Template_AddOutput: FlowNodeOutputItemType = {
id: NodeOutputKeyEnum.addOutputParam, id: NodeOutputKeyEnum.addOutputParam,
@ -15,3 +16,11 @@ export const Output_Template_AddOutput: FlowNodeOutputItemType = {
showDefaultValue: false showDefaultValue: false
} }
}; };
export const Output_Template_Error_Message: FlowNodeOutputItemType = {
id: NodeOutputKeyEnum.errorText,
key: NodeOutputKeyEnum.errorText,
type: FlowNodeOutputTypeEnum.error,
valueType: WorkflowIOValueTypeEnum.string,
label: i18nT('workflow:error_text')
};

@@ -20,6 +20,7 @@ import { chatNodeSystemPromptTip, systemPromptTip } from '../tip';
 import { LLMModelTypeEnum } from '../../../ai/constants';
 import { i18nT } from '../../../../../web/i18n/utils';
 import { Input_Template_File_Link } from '../input';
+import { Output_Template_Error_Message } from '../output';

 export const AgentNode: FlowNodeTemplateType = {
   id: FlowNodeTypeEnum.agent,
@@ -31,6 +32,7 @@ export const AgentNode: FlowNodeTemplateType = {
   name: i18nT('workflow:template.agent'),
   intro: i18nT('workflow:template.agent_intro'),
   showStatus: true,
+  catchError: false,
   courseUrl: '/docs/guide/dashboard/workflow/tool/',
   version: '4.9.2',
   inputs: [
@@ -107,6 +109,7 @@ export const AgentNode: FlowNodeTemplateType = {
       description: i18nT('common:core.module.output.description.Ai response content'),
       valueType: WorkflowIOValueTypeEnum.string,
       type: FlowNodeOutputTypeEnum.static
-    }
+    },
+    Output_Template_Error_Message
   ]
 };

@@ -20,6 +20,7 @@ import {
   Input_Template_File_Link
 } from '../../input';
 import { i18nT } from '../../../../../../web/i18n/utils';
+import { Output_Template_Error_Message } from '../../output';

 export const AiChatQuoteRole = {
   key: NodeInputKeyEnum.aiChatQuoteRole,
@@ -54,6 +55,7 @@ export const AiChatModule: FlowNodeTemplateType = {
   isTool: true,
   courseUrl: '/docs/guide/dashboard/workflow/ai_chat/',
   version: '4.9.7',
+  catchError: false,
   inputs: [
     Input_Template_SettingAiModel,
     // --- settings modal
@@ -158,6 +160,7 @@ export const AiChatModule: FlowNodeTemplateType = {
         const modelItem = llmModelList.find((item) => item.model === model);
         return modelItem?.reasoning !== true;
       }
-    }
+    },
+    Output_Template_Error_Message
   ]
 };

@@ -13,6 +13,7 @@ import {
 import { Input_Template_SelectAIModel, Input_Template_History } from '../../input';
 import { LLMModelTypeEnum } from '../../../../ai/constants';
 import { i18nT } from '../../../../../../web/i18n/utils';
+import { Output_Template_Error_Message } from '../../output';

 export const ContextExtractModule: FlowNodeTemplateType = {
   id: FlowNodeTypeEnum.contentExtract,
@@ -25,6 +26,7 @@ export const ContextExtractModule: FlowNodeTemplateType = {
   intro: i18nT('workflow:intro_text_content_extraction'),
   showStatus: true,
   isTool: true,
+  catchError: false,
   courseUrl: '/docs/guide/dashboard/workflow/content_extract/',
   version: '4.9.2',
   inputs: [
@@ -76,6 +78,7 @@ export const ContextExtractModule: FlowNodeTemplateType = {
       description: i18nT('workflow:complete_extraction_result_description'),
       valueType: WorkflowIOValueTypeEnum.string,
       type: FlowNodeOutputTypeEnum.static
-    }
+    },
+    Output_Template_Error_Message
   ]
 };

@@ -15,6 +15,7 @@ import {
 import { Input_Template_UserChatInput } from '../input';
 import { DatasetSearchModeEnum } from '../../../dataset/constants';
 import { i18nT } from '../../../../../web/i18n/utils';
+import { Output_Template_Error_Message } from '../output';

 export const Dataset_SEARCH_DESC = i18nT('workflow:template.dataset_search_intro');

@@ -29,6 +30,7 @@ export const DatasetSearchModule: FlowNodeTemplateType = {
   intro: Dataset_SEARCH_DESC,
   showStatus: true,
   isTool: true,
+  catchError: false,
   courseUrl: '/docs/guide/dashboard/workflow/dataset_search/',
   version: '4.9.2',
   inputs: [
@@ -143,6 +145,7 @@ export const DatasetSearchModule: FlowNodeTemplateType = {
       type: FlowNodeOutputTypeEnum.static,
       valueType: WorkflowIOValueTypeEnum.datasetQuote,
       valueDesc: datasetQuoteValueDesc
-    }
+    },
+    Output_Template_Error_Message
   ]
 };

@@ -26,6 +26,7 @@ export const HttpNode468: FlowNodeTemplateType = {
   intro: i18nT('workflow:intro_http_request'),
   showStatus: true,
   isTool: true,
+  catchError: false,
   courseUrl: '/docs/guide/dashboard/workflow/http/',
   inputs: [
     {
@@ -123,14 +124,6 @@ export const HttpNode468: FlowNodeTemplateType = {
       label: i18nT('workflow:http_extract_output'),
       description: i18nT('workflow:http_extract_output_description')
     },
-    {
-      id: NodeOutputKeyEnum.error,
-      key: NodeOutputKeyEnum.error,
-      label: i18nT('workflow:request_error'),
-      description: i18nT('workflow:http_request_error_info'),
-      valueType: WorkflowIOValueTypeEnum.object,
-      type: FlowNodeOutputTypeEnum.static
-    },
     {
       id: NodeOutputKeyEnum.httpRawResponse,
       key: NodeOutputKeyEnum.httpRawResponse,
@@ -139,6 +132,13 @@ export const HttpNode468: FlowNodeTemplateType = {
       description: i18nT('workflow:http_raw_response_description'),
       valueType: WorkflowIOValueTypeEnum.any,
       type: FlowNodeOutputTypeEnum.static
+    },
+    {
+      id: NodeOutputKeyEnum.error,
+      key: NodeOutputKeyEnum.error,
+      label: i18nT('workflow:error_text'),
+      valueType: WorkflowIOValueTypeEnum.string,
+      type: FlowNodeOutputTypeEnum.error
     }
   ]
 };

@@ -11,7 +11,7 @@ import {
   FlowNodeTemplateTypeEnum
 } from '../../constants';
 import { Input_Template_DynamicInput } from '../input';
-import { Output_Template_AddOutput } from '../output';
+import { Output_Template_AddOutput, Output_Template_Error_Message } from '../output';
 import { i18nT } from '../../../../../web/i18n/utils';

 export const nodeLafCustomInputConfig = {
@@ -31,6 +31,7 @@ export const LafModule: FlowNodeTemplateType = {
   intro: i18nT('workflow:intro_laf_function_call'),
   showStatus: true,
   isTool: true,
+  catchError: false,
   courseUrl: '/docs/guide/dashboard/workflow/laf/',
   inputs: [
     {
@@ -57,8 +58,7 @@ export const LafModule: FlowNodeTemplateType = {
       valueType: WorkflowIOValueTypeEnum.any,
       type: FlowNodeOutputTypeEnum.static
     },
-    {
-      ...Output_Template_AddOutput
-    }
+    Output_Template_AddOutput,
+    Output_Template_Error_Message
   ]
 };

@@ -11,6 +11,7 @@ import {
   FlowNodeTypeEnum
 } from '../../../node/constant';
 import { type FlowNodeTemplateType } from '../../../type/node';
+import { Output_Template_Error_Message } from '../../output';

 export const ReadFilesNode: FlowNodeTemplateType = {
   id: FlowNodeTypeEnum.readFiles,
@@ -43,6 +44,7 @@ export const ReadFilesNode: FlowNodeTemplateType = {
       description: i18nT('app:workflow.read_files_result_desc'),
       valueType: WorkflowIOValueTypeEnum.string,
       type: FlowNodeOutputTypeEnum.static
-    }
+    },
+    Output_Template_Error_Message
   ]
 };

@@ -25,6 +25,7 @@ export const CodeNode: FlowNodeTemplateType = {
   name: i18nT('workflow:code_execution'),
   intro: i18nT('workflow:execute_a_simple_script_code_usually_for_complex_data_processing'),
   showStatus: true,
+  catchError: false,
   courseUrl: '/docs/guide/dashboard/workflow/sandbox/',
   inputs: [
     {
@@ -89,14 +90,6 @@ export const CodeNode: FlowNodeTemplateType = {
       valueType: WorkflowIOValueTypeEnum.object,
       type: FlowNodeOutputTypeEnum.static
     },
-    {
-      id: NodeOutputKeyEnum.error,
-      key: NodeOutputKeyEnum.error,
-      label: i18nT('workflow:execution_error'),
-      description: i18nT('workflow:error_info_returns_empty_on_success'),
-      valueType: WorkflowIOValueTypeEnum.object,
-      type: FlowNodeOutputTypeEnum.static
-    },
     {
       id: 'qLUQfhG0ILRX',
       type: FlowNodeOutputTypeEnum.dynamic,
@@ -110,6 +103,13 @@ export const CodeNode: FlowNodeTemplateType = {
       key: 'data2',
       valueType: WorkflowIOValueTypeEnum.string,
       label: 'data2'
+    },
+    {
+      id: NodeOutputKeyEnum.error,
+      key: NodeOutputKeyEnum.error,
+      label: i18nT('workflow:error_text'),
+      valueType: WorkflowIOValueTypeEnum.string,
+      type: FlowNodeOutputTypeEnum.error
     }
   ]
 };

View File

@@ -48,6 +48,7 @@ export type FlowNodeCommonType = {
   isLatestVersion?: boolean; // Just ui show
   // data
+  catchError?: boolean;
   inputs: FlowNodeInputItemType[];
   outputs: FlowNodeOutputItemType[];
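The optional `catchError` flag above is what the new catch-error outputs hang off. A minimal sketch of how the pieces fit together, using simplified local stand-ins for the project's real types (the actual `FlowNodeOutputTypeEnum` and `NodeOutputKeyEnum` values are assumed from the diffs above):

```typescript
// Simplified stand-ins for the project's node types, illustrating how
// catchError and an 'error'-typed output combine on a node template.
type FlowNodeOutput = {
  id: string;
  key: string;
  valueType: 'string' | 'object';
  type: 'static' | 'dynamic' | 'error';
};

type FlowNodeTemplate = {
  // undefined = node has no catch branch; false = supported, off by default
  catchError?: boolean;
  outputs: FlowNodeOutput[];
};

// Hypothetical code node mirroring the CodeNode diff: catchError starts
// disabled, and the error text is exposed as a dedicated 'error' output.
const codeNode: FlowNodeTemplate = {
  catchError: false,
  outputs: [
    { id: 'data', key: 'data', valueType: 'object', type: 'static' },
    { id: 'error_text', key: 'error_text', valueType: 'string', type: 'error' }
  ]
};

const hasCatchBranch = codeNode.outputs.some((o) => o.type === 'error');
console.log(hasCatchBranch); // true
```

This keeps the error path out of the normal `static` outputs, so the editor can render it as a separate branch only when `catchError` is enabled.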

View File

@@ -52,7 +52,11 @@ import { ChatRoleEnum } from '../../core/chat/constants';
 import { runtimePrompt2ChatsValue } from '../../core/chat/adapt';
 import { getPluginRunContent } from '../../core/app/plugin/utils';
-export const getHandleId = (nodeId: string, type: 'source' | 'target', key: string) => {
+export const getHandleId = (
+  nodeId: string,
+  type: 'source' | 'source_catch' | 'target',
+  key: string
+) => {
   return `${nodeId}-${type}-${key}`;
 };
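The widened `type` union gives a node's catch-error branch its own handle id, distinct from the normal output edge. A local reproduction of the helper to show the scheme:

```typescript
// Local copy of the updated helper, to show the id scheme for the new
// 'source_catch' handle type used by catch-error edges.
type HandleType = 'source' | 'source_catch' | 'target';

const getHandleId = (nodeId: string, type: HandleType, key: string): string =>
  `${nodeId}-${type}-${key}`;

// The normal output edge and the catch-error edge of the same node
// produce distinct, non-colliding handle ids:
console.log(getHandleId('nodeA', 'source', 'result')); // nodeA-source-result
console.log(getHandleId('nodeA', 'source_catch', 'error')); // nodeA-source_catch-error
```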
@@ -219,16 +223,14 @@ export const pluginData2FlowNodeIO = ({
         ]
       : [],
     outputs: pluginOutput
-      ? [
-          ...pluginOutput.inputs.map((item) => ({
-            id: item.key,
-            type: FlowNodeOutputTypeEnum.static,
-            key: item.key,
-            valueType: item.valueType,
-            label: item.label || item.key,
-            description: item.description
-          }))
-        ]
+      ? pluginOutput.inputs.map((item) => ({
+          id: item.key,
+          type: FlowNodeOutputTypeEnum.static,
+          key: item.key,
+          valueType: item.valueType,
+          label: item.label || item.key,
+          description: item.description
+        }))
       : []
   };
 };
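The simplified branch above drops a redundant array-literal-plus-spread: each input declared on the plugin's output node maps straight to a static flow output. Sketched with plain data (the field names are assumed from the diff):

```typescript
// Plain-data sketch of the mapping: every input of the plugin's output
// node becomes a static output of the plugin node, with the label
// falling back to the key when none is set.
type PluginOutputInput = {
  key: string;
  valueType?: string;
  label?: string;
  description?: string;
};

const pluginOutputInputs: PluginOutputInput[] = [
  { key: 'answer', valueType: 'string', label: 'Answer' },
  { key: 'score', valueType: 'number' } // no label -> falls back to key
];

const outputs = pluginOutputInputs.map((item) => ({
  id: item.key,
  type: 'static' as const,
  key: item.key,
  valueType: item.valueType,
  label: item.label || item.key,
  description: item.description
}));

console.log(outputs.map((o) => o.label)); // [ 'Answer', 'score' ]
```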

View File

@@ -54,6 +54,9 @@ export enum AuditEventEnum {
   UPDATE_APP_PUBLISH_CHANNEL = 'UPDATE_APP_PUBLISH_CHANNEL',
   DELETE_APP_PUBLISH_CHANNEL = 'DELETE_APP_PUBLISH_CHANNEL',
   EXPORT_APP_CHAT_LOG = 'EXPORT_APP_CHAT_LOG',
+  CREATE_EVALUATION = 'CREATE_EVALUATION',
+  EXPORT_EVALUATION = 'EXPORT_EVALUATION',
+  DELETE_EVALUATION = 'DELETE_EVALUATION',
   //Dataset
   CREATE_DATASET = 'CREATE_DATASET',
   UPDATE_DATASET = 'UPDATE_DATASET',

View File

@@ -12,7 +12,8 @@ export enum UsageSourceEnum {
   dingtalk = 'dingtalk',
   official_account = 'official_account',
   pdfParse = 'pdfParse',
-  mcp = 'mcp'
+  mcp = 'mcp',
+  evaluation = 'evaluation'
 }
 export const UsageSourceMap = {
@@ -51,5 +52,8 @@ export const UsageSourceMap = {
   },
   [UsageSourceEnum.mcp]: {
     label: i18nT('account_usage:mcp')
+  },
+  [UsageSourceEnum.evaluation]: {
+    label: i18nT('account_usage:evaluation')
   }
 };

View File

@@ -8,6 +8,7 @@ export type UsageListItemCountType = {
   charsLength?: number;
   duration?: number;
   pages?: number;
+  count?: number; // Times
   // deprecated
   tokens?: number;
@@ -17,6 +18,7 @@ export type UsageListItemType = UsageListItemCountType & {
   moduleName: string;
   amount: number;
   model?: string;
+  count?: number;
 };
 export type UsageSchemaType = CreateUsageProps & {

View File

@@ -20,6 +20,7 @@ const defaultWorkerOpts: Omit<ConnectionOptions, 'connection'> = {
 export enum QueueNames {
   datasetSync = 'datasetSync',
+  evaluation = 'evaluation',
   // abondoned
   websiteSync = 'websiteSync'
 }

View File

@@ -1,13 +1,9 @@
 import { registerOTel, OTLPHttpJsonTraceExporter } from '@vercel/otel';
-// Add otel logging
-// import { diag, DiagConsoleLogger, DiagLogLevel } from '@opentelemetry/api';
 import { SignozBaseURL, SignozServiceName } from '../const';
 import { addLog } from '../../system/log';
-// diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.INFO);
 export function connectSignoz() {
   if (!SignozBaseURL) {
-    addLog.warn('Signoz is not configured');
     return;
   }
   addLog.info(`Connecting signoz, ${SignozBaseURL}, ${SignozServiceName}`);

View File

@@ -15,11 +15,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm"
@@ -38,11 +33,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm"
@@ -61,11 +51,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm"
@@ -84,11 +69,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm"
@@ -106,11 +86,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm"
@@ -128,11 +103,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm"

View File

@@ -14,11 +14,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm"
@@ -36,11 +31,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm"
@@ -58,11 +48,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm"
@@ -80,11 +65,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm"
@@ -102,11 +82,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm"
@@ -124,11 +99,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm"
@@ -146,11 +116,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm"

View File

@@ -15,11 +15,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "type": "llm"
     },
     {
@@ -34,11 +29,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",

View File

@@ -14,11 +14,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm"
@@ -36,11 +31,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm"
@@ -58,11 +48,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm"
@@ -80,11 +65,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm"
@@ -102,11 +82,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm"
@@ -124,11 +99,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm"
@@ -146,11 +116,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm"
@@ -168,11 +133,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm"
@@ -190,11 +150,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm"
@@ -210,11 +165,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -232,11 +182,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -254,11 +199,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -276,11 +216,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -298,11 +233,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -320,11 +250,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",

View File

@@ -12,11 +12,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -34,11 +29,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -56,11 +46,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -78,11 +63,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",

View File

@@ -1,6 +1,38 @@
 {
   "provider": "Gemini",
   "list": [
+    {
+      "model": "gemini-2.5-pro",
+      "name": "gemini-2.5-pro",
+      "maxContext": 1000000,
+      "maxResponse": 63000,
+      "quoteMaxToken": 1000000,
+      "maxTemperature": 1,
+      "vision": true,
+      "toolChoice": true,
+      "defaultSystemChatPrompt": "",
+      "defaultConfig": {},
+      "fieldMap": {},
+      "type": "llm",
+      "showTopP": true,
+      "showStopSign": true
+    },
+    {
+      "model": "gemini-2.5-flash",
+      "name": "gemini-2.5-flash",
+      "maxContext": 1000000,
+      "maxResponse": 63000,
+      "quoteMaxToken": 1000000,
+      "maxTemperature": 1,
+      "vision": true,
+      "toolChoice": true,
+      "defaultSystemChatPrompt": "",
+      "defaultConfig": {},
+      "fieldMap": {},
+      "type": "llm",
+      "showTopP": true,
+      "showStopSign": true
+    },
     {
       "model": "gemini-2.5-pro-exp-03-25",
       "name": "gemini-2.5-pro-exp-03-25",
@@ -12,11 +44,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -34,11 +61,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -56,11 +78,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -78,11 +95,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -100,11 +112,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -122,11 +129,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -144,11 +146,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -166,11 +163,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -188,11 +180,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -210,11 +197,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",

View File

@ -1,6 +1,38 @@
{ {
"provider": "Grok", "provider": "Grok",
"list": [ "list": [
{
"model": "grok-4",
"name": "grok-4",
"maxContext": 256000,
"maxResponse": 8000,
"quoteMaxToken": 128000,
"maxTemperature": 1,
"showTopP": true,
"showStopSign": true,
"vision": true,
"toolChoice": true,
"defaultSystemChatPrompt": "",
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
},
{
"model": "grok-4-0709",
"name": "grok-4-0709",
"maxContext": 256000,
"maxResponse": 8000,
"quoteMaxToken": 128000,
"maxTemperature": 1,
"showTopP": true,
"showStopSign": true,
"vision": true,
"toolChoice": true,
"defaultSystemChatPrompt": "",
"defaultConfig": {},
"fieldMap": {},
"type": "llm"
},
{ {
"model": "grok-3-mini", "model": "grok-3-mini",
"name": "grok-3-mini", "name": "grok-3-mini",
@ -14,11 +46,6 @@
"toolChoice": true, "toolChoice": true,
"functionCall": false, "functionCall": false,
"defaultSystemChatPrompt": "", "defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {}, "defaultConfig": {},
"fieldMap": {}, "fieldMap": {},
"type": "llm" "type": "llm"
@ -36,11 +63,6 @@
"toolChoice": true, "toolChoice": true,
"functionCall": false, "functionCall": false,
"defaultSystemChatPrompt": "", "defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {}, "defaultConfig": {},
"fieldMap": {}, "fieldMap": {},
"type": "llm" "type": "llm"
@ -58,11 +80,6 @@
"toolChoice": true, "toolChoice": true,
"functionCall": false, "functionCall": false,
"defaultSystemChatPrompt": "", "defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {}, "defaultConfig": {},
"fieldMap": {}, "fieldMap": {},
"type": "llm" "type": "llm"
@ -80,11 +97,6 @@
"toolChoice": true, "toolChoice": true,
"functionCall": false, "functionCall": false,
"defaultSystemChatPrompt": "", "defaultSystemChatPrompt": "",
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInQueryExtension": true,
"usedInToolCall": true,
"defaultConfig": {}, "defaultConfig": {},
"fieldMap": {}, "fieldMap": {},
"type": "llm" "type": "llm"

View File

@@ -12,11 +12,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "type": "llm",
       "showTopP": true,
@@ -33,11 +28,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "type": "llm",
       "showTopP": true,

View File

@@ -12,11 +12,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -34,11 +29,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -56,11 +46,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -78,11 +63,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -100,11 +80,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -122,11 +97,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -144,11 +114,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",

View File

@@ -12,11 +12,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -34,11 +29,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",

View File

@@ -12,11 +12,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -34,11 +29,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",

View File

@@ -12,11 +12,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -34,11 +29,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -56,11 +46,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -78,11 +63,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",

View File

@@ -1,6 +1,74 @@
 {
   "provider": "Moonshot",
   "list": [
+    {
+      "model": "kimi-k2-0711-preview",
+      "name": "kimi-k2-0711-preview",
+      "maxContext": 128000,
+      "maxResponse": 32000,
+      "quoteMaxToken": 128000,
+      "maxTemperature": 1,
+      "vision": false,
+      "toolChoice": true,
+      "defaultSystemChatPrompt": "",
+      "defaultConfig": {},
+      "fieldMap": {},
+      "type": "llm",
+      "showTopP": true,
+      "showStopSign": true,
+      "responseFormatList": ["text", "json_object"]
+    },
+    {
+      "model": "kimi-latest-8k",
+      "name": "kimi-latest-8k",
+      "maxContext": 8000,
+      "maxResponse": 4000,
+      "quoteMaxToken": 6000,
+      "maxTemperature": 1,
+      "vision": false,
+      "toolChoice": true,
+      "defaultSystemChatPrompt": "",
+      "defaultConfig": {},
+      "fieldMap": {},
+      "type": "llm",
+      "showTopP": true,
+      "showStopSign": true,
+      "responseFormatList": ["text", "json_object"]
+    },
+    {
+      "model": "kimi-latest-32k",
+      "name": "kimi-latest-32k",
+      "maxContext": 32000,
+      "maxResponse": 16000,
+      "quoteMaxToken": 32000,
+      "maxTemperature": 1,
+      "vision": false,
+      "toolChoice": true,
+      "defaultSystemChatPrompt": "",
+      "defaultConfig": {},
+      "fieldMap": {},
+      "type": "llm",
+      "showTopP": true,
+      "showStopSign": true,
+      "responseFormatList": ["text", "json_object"]
+    },
+    {
+      "model": "kimi-latest-128k",
+      "name": "kimi-latest-128k",
+      "maxContext": 128000,
+      "maxResponse": 32000,
+      "quoteMaxToken": 128000,
+      "maxTemperature": 1,
+      "vision": false,
+      "toolChoice": true,
+      "defaultSystemChatPrompt": "",
+      "defaultConfig": {},
+      "fieldMap": {},
+      "type": "llm",
+      "showTopP": true,
+      "showStopSign": true,
+      "responseFormatList": ["text", "json_object"]
+    },
     {
       "model": "moonshot-v1-8k",
       "name": "moonshot-v1-8k",
@@ -12,11 +80,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -35,11 +98,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -58,11 +116,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -81,11 +134,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -104,11 +152,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -127,11 +170,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",

View File

@@ -15,10 +15,6 @@
       "toolChoice": true,
       "functionCall": true,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm"
@@ -37,10 +33,6 @@
       "toolChoice": true,
       "functionCall": true,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm"
@@ -59,10 +51,6 @@
       "toolChoice": true,
       "functionCall": true,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm"
@@ -81,10 +69,6 @@
       "toolChoice": true,
       "functionCall": true,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm"
@@ -103,11 +87,6 @@
       "toolChoice": true,
       "functionCall": true,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm"
@@ -123,11 +102,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {
         "max_tokens": "max_completion_tokens"
@@ -147,11 +121,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {
         "max_tokens": "max_completion_tokens"
@@ -171,11 +140,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {
         "max_tokens": "max_completion_tokens"
@@ -195,11 +159,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {
         "max_tokens": "max_completion_tokens"
@@ -219,11 +178,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {
         "max_tokens": "max_completion_tokens"
@@ -243,11 +197,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {
         "stream": false
       },
@@ -271,11 +220,6 @@
       "toolChoice": true,
       "functionCall": true,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "type": "llm"
     },
     {
@@ -291,11 +235,6 @@
       "toolChoice": true,
       "functionCall": true,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "type": "llm"
     },
     {
{ {

View File

@@ -12,11 +12,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -35,11 +30,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -57,11 +47,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -80,11 +65,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "type": "llm",
       "showTopP": true,
       "showStopSign": true
@@ -100,11 +80,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -124,11 +99,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {
         "stream": true
       },
@@ -150,11 +120,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {
         "stream": true
       },
@@ -176,11 +141,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {
         "stream": true
       },
@@ -202,11 +162,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {
         "stream": true
       },
@@ -228,11 +183,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {
         "stream": true
       },
@@ -254,11 +204,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {
         "stream": true
       },
@@ -280,11 +225,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {
         "stream": true
       },
@@ -306,11 +246,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {
         "stream": true
       },
@@ -336,7 +271,6 @@
       "usedInClassify": false,
       "usedInExtractFields": false,
       "usedInQueryExtension": false,
-      "usedInToolCall": true,
       "defaultConfig": {
         "stream": true
       },
@@ -361,7 +295,6 @@
       "usedInClassify": false,
       "usedInExtractFields": false,
       "usedInQueryExtension": false,
-      "usedInToolCall": true,
       "defaultConfig": {
         "stream": true
       },
@@ -381,11 +314,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -403,11 +331,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -426,11 +349,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -449,11 +367,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -472,11 +385,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",

View File

@@ -12,11 +12,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -55,11 +50,6 @@
       "toolChoice": true,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",

View File

@@ -9,11 +9,6 @@
       "quoteMaxToken": 32000,
       "maxTemperature": 1,
       "vision": false,
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInToolCall": true,
-      "usedInQueryExtension": true,
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
@@ -29,11 +24,6 @@
       "quoteMaxToken": 8000,
       "maxTemperature": 1,
       "vision": false,
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInToolCall": true,
-      "usedInQueryExtension": true,
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
@@ -49,11 +39,6 @@
       "quoteMaxToken": 128000,
       "maxTemperature": 1,
       "vision": false,
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInToolCall": true,
-      "usedInQueryExtension": true,
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
@@ -69,11 +54,6 @@
       "quoteMaxToken": 8000,
       "maxTemperature": 1,
       "vision": false,
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInToolCall": true,
-      "usedInQueryExtension": true,
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
@@ -92,11 +72,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -114,11 +89,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",

View File

@@ -9,11 +9,6 @@
       "quoteMaxToken": 6000,
       "maxTemperature": 2,
       "vision": false,
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInToolCall": true,
-      "usedInQueryExtension": true,
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
@@ -29,11 +24,6 @@
       "quoteMaxToken": 8000,
       "maxTemperature": 2,
       "vision": false,
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInToolCall": true,
-      "usedInQueryExtension": true,
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
@@ -49,11 +39,6 @@
       "quoteMaxToken": 32000,
       "maxTemperature": 2,
       "vision": false,
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInToolCall": true,
-      "usedInQueryExtension": true,
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
@@ -69,11 +54,6 @@
       "quoteMaxToken": 128000,
       "maxTemperature": 2,
       "vision": false,
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInToolCall": true,
-      "usedInQueryExtension": true,
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
@@ -89,11 +69,6 @@
       "quoteMaxToken": 256000,
       "maxTemperature": 2,
       "vision": false,
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInToolCall": true,
-      "usedInQueryExtension": true,
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
@@ -109,11 +84,6 @@
       "maxResponse": 8000,
       "maxTemperature": 2,
       "vision": true,
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInToolCall": true,
-      "usedInQueryExtension": true,
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
@@ -129,11 +99,6 @@
       "quoteMaxToken": 8000,
       "maxTemperature": 2,
       "vision": true,
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInToolCall": true,
-      "usedInQueryExtension": true,
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
@@ -149,11 +114,6 @@
       "maxResponse": 8000,
       "maxTemperature": 2,
       "vision": true,
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInToolCall": true,
-      "usedInQueryExtension": true,
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
@@ -169,11 +129,6 @@
       "quoteMaxToken": 6000,
       "maxTemperature": 2,
       "vision": false,
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInToolCall": true,
-      "usedInQueryExtension": true,
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
@@ -189,11 +144,6 @@
       "quoteMaxToken": 4000,
       "maxTemperature": 2,
       "vision": false,
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInToolCall": true,
-      "usedInQueryExtension": true,
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
@@ -209,11 +159,6 @@
       "quoteMaxToken": 4000,
       "maxTemperature": 2,
       "vision": false,
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInToolCall": true,
-      "usedInQueryExtension": true,
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",

View File

@@ -12,11 +12,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",
@@ -34,11 +29,6 @@
       "toolChoice": false,
       "functionCall": false,
       "defaultSystemChatPrompt": "",
-      "datasetProcess": true,
-      "usedInClassify": true,
-      "usedInExtractFields": true,
-      "usedInQueryExtension": true,
-      "usedInToolCall": true,
       "defaultConfig": {},
       "fieldMap": {},
       "type": "llm",

View File

@@ -43,6 +43,15 @@ export const loadSystemModels = async (init = false) => {
   const pushModel = (model: SystemModelItemType) => {
     global.systemModelList.push(model);
+    // Add default value
+    if (model.type === ModelTypeEnum.llm) {
+      model.datasetProcess = model.datasetProcess ?? true;
+      model.usedInClassify = model.usedInClassify ?? true;
+      model.usedInExtractFields = model.usedInExtractFields ?? true;
+      model.usedInToolCall = model.usedInToolCall ?? true;
+      model.useInEvaluation = model.useInEvaluation ?? true;
+    }
     if (model.isActive) {
       global.systemActiveModelList.push(model);

View File

@@ -14,15 +14,15 @@ export const getDatasetModel = (model?: string) => {
       ?.find((item) => item.model === model || item.name === model) ?? getDefaultLLMModel()
   );
 };
-export const getVlmModel = (model?: string) => {
-  return Array.from(global.llmModelMap.values())
-    ?.filter((item) => item.vision)
-    ?.find((item) => item.model === model || item.name === model);
-};
 export const getVlmModelList = () => {
   return Array.from(global.llmModelMap.values())?.filter((item) => item.vision) || [];
 };
+export const getDefaultVLMModel = () => global?.systemDefaultModel.datasetImageLLM;
+export const getVlmModel = (model?: string) => {
+  const list = getVlmModelList();
+  return list.find((item) => item.model === model || item.name === model) || list[0];
+};
 export const getDefaultEmbeddingModel = () => global?.systemDefaultModel.embedding!;
 export const getEmbeddingModel = (model?: string) => {
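The rewritten `getVlmModel` changes behavior: the old version returned `undefined` when no vision model matched, while the new one falls back to the first available vision model. A self-contained sketch of that lookup (the model list here is invented for illustration, not FastGPT's real `global.llmModelMap`):

```typescript
type VlmModel = { model: string; name: string; vision: boolean };

// Hypothetical stand-in for the registered vision-capable models.
const visionModels: VlmModel[] = [
  { model: 'gpt-4o', name: 'GPT-4o', vision: true },
  { model: 'glm-4v', name: 'GLM-4V', vision: true }
];

// Mirrors the new lookup: exact match by model id or display name,
// otherwise fall back to the first available vision model.
function getVlmModel(model?: string): VlmModel | undefined {
  return (
    visionModels.find((item) => item.model === model || item.name === model) || visionModels[0]
  );
}

console.log(getVlmModel('glm-4v')?.model); // glm-4v
console.log(getVlmModel('unknown-model')?.model); // gpt-4o (fallback instead of undefined)
```

The fallback means callers no longer have to guard against a missing VLM as long as at least one vision model is configured.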

View File

@@ -0,0 +1,56 @@
import { connectionMongo, getMongoModel } from '../../../common/mongo';
import { EvaluationCollectionName } from './evalSchema';
import {
EvaluationStatusEnum,
EvaluationStatusValues
} from '@fastgpt/global/core/app/evaluation/constants';
import type { EvalItemSchemaType } from '@fastgpt/global/core/app/evaluation/type';
const { Schema } = connectionMongo;
export const EvalItemCollectionName = 'eval_items';
const EvalItemSchema = new Schema({
evalId: {
type: Schema.Types.ObjectId,
ref: EvaluationCollectionName,
required: true
},
question: {
type: String,
required: true
},
expectedResponse: {
type: String,
required: true
},
history: String,
globalVariables: Object,
response: String,
responseTime: Date,
status: {
type: Number,
default: EvaluationStatusEnum.queuing,
enum: EvaluationStatusValues
},
retry: {
type: Number,
default: 3
},
finishTime: Date,
accuracy: Number,
relevance: Number,
semanticAccuracy: Number,
score: Number, // average score
errorMessage: String
});
EvalItemSchema.index({ evalId: 1, status: 1 });
export const MongoEvalItem = getMongoModel<EvalItemSchemaType>(
EvalItemCollectionName,
EvalItemSchema
);

View File

@@ -0,0 +1,57 @@
import {
TeamCollectionName,
TeamMemberCollectionName
} from '@fastgpt/global/support/user/team/constant';
import { connectionMongo, getMongoModel } from '../../../common/mongo';
import { AppCollectionName } from '../schema';
import type { EvaluationSchemaType } from '@fastgpt/global/core/app/evaluation/type';
import { UsageCollectionName } from '../../../support/wallet/usage/schema';
const { Schema } = connectionMongo;
export const EvaluationCollectionName = 'eval';
const EvaluationSchema = new Schema({
teamId: {
type: Schema.Types.ObjectId,
ref: TeamCollectionName,
required: true
},
tmbId: {
type: Schema.Types.ObjectId,
ref: TeamMemberCollectionName,
required: true
},
appId: {
type: Schema.Types.ObjectId,
ref: AppCollectionName,
required: true
},
usageId: {
type: Schema.Types.ObjectId,
ref: UsageCollectionName,
required: true
},
evalModel: {
type: String,
required: true
},
name: {
type: String,
required: true
},
createTime: {
type: Date,
required: true,
default: () => new Date()
},
finishTime: Date,
score: Number,
errorMessage: String
});
EvaluationSchema.index({ teamId: 1 });
export const MongoEvaluation = getMongoModel<EvaluationSchemaType>(
EvaluationCollectionName,
EvaluationSchema
);

View File

@@ -0,0 +1,80 @@
import { getQueue, getWorker, QueueNames } from '../../../common/bullmq';
import { type Processor } from 'bullmq';
import { addLog } from '../../../common/system/log';
export type EvaluationJobData = {
evalId: string;
};
export const evaluationQueue = getQueue<EvaluationJobData>(QueueNames.evaluation, {
defaultJobOptions: {
attempts: 3,
backoff: {
type: 'exponential',
delay: 1000
}
}
});
const concurrency = process.env.EVAL_CONCURRENCY ? Number(process.env.EVAL_CONCURRENCY) : 3;
export const getEvaluationWorker = (processor: Processor<EvaluationJobData>) => {
return getWorker<EvaluationJobData>(QueueNames.evaluation, processor, {
removeOnFail: {
count: 1000 // Keep last 1000 failed jobs
},
concurrency: concurrency
});
};
export const addEvaluationJob = (data: EvaluationJobData) => {
const evalId = String(data.evalId);
return evaluationQueue.add(evalId, data, { deduplication: { id: evalId } });
};
export const checkEvaluationJobActive = async (evalId: string): Promise<boolean> => {
try {
const jobId = await evaluationQueue.getDeduplicationJobId(String(evalId));
if (!jobId) return false;
const job = await evaluationQueue.getJob(jobId);
if (!job) return false;
const jobState = await job.getState();
return ['waiting', 'delayed', 'prioritized', 'active'].includes(jobState);
} catch (error) {
addLog.error('Failed to check evaluation job status', { evalId, error });
return false;
}
};
export const removeEvaluationJob = async (evalId: string): Promise<boolean> => {
const formatEvalId = String(evalId);
try {
const jobId = await evaluationQueue.getDeduplicationJobId(formatEvalId);
if (!jobId) {
addLog.warn('No job found to remove', { evalId });
return false;
}
const job = await evaluationQueue.getJob(jobId);
if (!job) {
addLog.warn('Job not found in queue', { evalId, jobId });
return false;
}
const jobState = await job.getState();
if (['waiting', 'delayed', 'prioritized'].includes(jobState)) {
await job.remove();
addLog.info('Evaluation job removed successfully', { evalId, jobId, jobState });
return true;
} else {
addLog.warn('Cannot remove active or completed job', { evalId, jobId, jobState });
return false;
}
} catch (error) {
addLog.error('Failed to remove evaluation job', { evalId, error });
return false;
}
};
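The queue helpers above gate on BullMQ job states in two different ways: a job still counts toward deduplication while it is anywhere in the pre-completion pipeline, but it is only safe to remove before it starts running. The distinction, as a pure sketch (state names follow BullMQ; the helper names are illustrative):

```typescript
// Pure sketch of the state checks used in checkEvaluationJobActive and
// removeEvaluationJob. 'active' jobs block deduplication but must not be
// removed mid-run.
type JobState = 'waiting' | 'delayed' | 'prioritized' | 'active' | 'completed' | 'failed';

const ACTIVE_STATES: JobState[] = ['waiting', 'delayed', 'prioritized', 'active'];
const REMOVABLE_STATES: JobState[] = ['waiting', 'delayed', 'prioritized'];

const isJobActive = (state: JobState): boolean => ACTIVE_STATES.includes(state);
const isJobRemovable = (state: JobState): boolean => REMOVABLE_STATES.includes(state);
```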

View File

@ -1,5 +1,8 @@
import { type FlowNodeTemplateType } from '@fastgpt/global/core/workflow/type/node.d';
-import { FlowNodeTypeEnum } from '@fastgpt/global/core/workflow/node/constant';
+import {
+  FlowNodeOutputTypeEnum,
+  FlowNodeTypeEnum
+} from '@fastgpt/global/core/workflow/node/constant';
import {
  appData2FlowNodeIO,
  pluginData2FlowNodeIO,
@ -33,6 +36,7 @@ import type {
  FlowNodeOutputItemType
} from '@fastgpt/global/core/workflow/type/io';
import { isProduction } from '@fastgpt/global/common/system/constants';
+import { Output_Template_Error_Message } from '@fastgpt/global/core/workflow/template/output';
/**
plugin id rule:
@ -122,15 +126,24 @@ export const getSystemPluginByIdAndVersionId = async (
    };
  }

+  // System tool
+  const versionList = (plugin.versionList as SystemPluginTemplateItemType['versionList']) || [];
+  if (versionList.length === 0) {
+    return Promise.reject('Can not find plugin version list');
+  }
  const version = versionId
-    ? plugin.versionList?.find((item) => item.value === versionId)
-    : plugin.versionList?.[0];
-  const lastVersion = plugin.versionList?.[0];
+    ? versionList.find((item) => item.value === versionId) ?? versionList[0]
+    : versionList[0];
+  const lastVersion = versionList[0];
  return {
    ...plugin,
+    inputs: version.inputs,
+    outputs: version.outputs,
    version: versionId ? version?.value : '',
-    versionLabel: version ? version?.value : '',
+    versionLabel: versionId ? version?.value : '',
    isLatestVersion: !version || !lastVersion || version.value === lastVersion?.value
  };
})();
@ -198,8 +211,8 @@ export async function getChildAppPreviewNode({
  return {
    flowNodeType: FlowNodeTypeEnum.tool,
    nodeIOConfig: {
-      inputs: app.inputs!,
-      outputs: app.outputs!,
+      inputs: app.inputs || [],
+      outputs: app.outputs || [],
      toolConfig: {
        systemTool: {
          toolId: app.id
@ -209,6 +222,7 @@ export async function getChildAppPreviewNode({
  };
}

+// Plugin workflow
if (!!app.workflow.nodes.find((node) => node.flowNodeType === FlowNodeTypeEnum.pluginInput)) {
  return {
    flowNodeType: FlowNodeTypeEnum.pluginModule,
@ -216,6 +230,7 @@ export async function getChildAppPreviewNode({
  };
}

+// Mcp
if (
  !!app.workflow.nodes.find((node) => node.flowNodeType === FlowNodeTypeEnum.toolSet) &&
  app.workflow.nodes.length === 1
@ -236,6 +251,7 @@ export async function getChildAppPreviewNode({
  };
}

+// Chat workflow
return {
  flowNodeType: FlowNodeTypeEnum.appModule,
  nodeIOConfig: appData2FlowNodeIO({ chatConfig: app.workflow.chatConfig })
@ -254,6 +270,7 @@ export async function getChildAppPreviewNode({
    userGuide: app.userGuide,
    showStatus: true,
    isTool: true,
+    catchError: false,
    version: app.version,
    versionLabel: app.versionLabel,
@ -265,7 +282,10 @@ export async function getChildAppPreviewNode({
    hasTokenFee: app.hasTokenFee,
    hasSystemSecret: app.hasSystemSecret,
-    ...nodeIOConfig
+    ...nodeIOConfig,
+    outputs: nodeIOConfig.outputs.some((item) => item.type === FlowNodeOutputTypeEnum.error)
+      ? nodeIOConfig.outputs
+      : [...nodeIOConfig.outputs, Output_Template_Error_Message]
  };
}
@ -414,8 +434,9 @@ export const getSystemPlugins = async (): Promise<SystemPluginTemplateItemType[]
const formatTools = tools.map<SystemPluginTemplateItemType>((item) => {
  const dbPluginConfig = systemPlugins.get(item.id);
-  const inputs = item.versionList[0]?.inputs as FlowNodeInputItemType[];
-  const outputs = item.versionList[0]?.outputs as FlowNodeOutputItemType[];
+  const versionList = (item.versionList as SystemPluginTemplateItemType['versionList']) || [];
+  const inputs = versionList[0]?.inputs;

  return {
    isActive: item.isActive,
@ -439,9 +460,7 @@ export const getSystemPlugins = async (): Promise<SystemPluginTemplateItemType[]
      nodes: [],
      edges: []
    },
-    versionList: item.versionList,
-    inputs,
-    outputs,
+    versionList,
    inputList: inputs?.find((input) => input.key === NodeInputKeyEnum.systemInputConfig)
      ?.inputList as any,
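The preview-node change above guarantees every child app node exposes an error output: if none of the node's outputs is error-typed, the shared error template is appended. A self-contained sketch of that rule, with simplified stand-in types for `FlowNodeOutputItemType` and `Output_Template_Error_Message`:

```typescript
// Simplified stand-ins; the real types carry more fields.
type OutputType = 'static' | 'error';
type NodeOutput = { key: string; type: OutputType };

// Hypothetical template mirroring Output_Template_Error_Message
const Error_Output_Template: NodeOutput = { key: 'system_error', type: 'error' };

// Append the error template only when no error-typed output exists yet,
// so applying the rule twice does not duplicate the output.
function ensureErrorOutput(outputs: NodeOutput[]): NodeOutput[] {
  return outputs.some((item) => item.type === 'error')
    ? outputs
    : [...outputs, Error_Output_Template];
}
```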

View File

@ -25,8 +25,7 @@ const SystemPluginSchema = new Schema({
    default: false
  },
  pluginOrder: {
-    type: Number,
-    default: 0
+    type: Number
  },
  customConfig: Object,
  inputListVal: Object,

View File

@ -9,7 +9,7 @@ export type SystemPluginConfigSchemaType = {
  currentCost: number;
  hasTokenFee: boolean;
  isActive: boolean;
-  pluginOrder: number;
+  pluginOrder?: number;
  customConfig?: {
    name: string;

View File

@ -79,6 +79,28 @@ export async function rewriteAppWorkflowToDetail({
        node.currentCost = preview.currentCost;
        node.hasTokenFee = preview.hasTokenFee;
        node.hasSystemSecret = preview.hasSystemSecret;

+        // Latest version
+        if (!node.version) {
+          const inputsMap = new Map(node.inputs.map((item) => [item.key, item]));
+          const outputsMap = new Map(node.outputs.map((item) => [item.key, item]));
+
+          node.inputs = preview.inputs.map((item) => {
+            const input = inputsMap.get(item.key);
+            return {
+              ...item,
+              value: input?.value,
+              selectedTypeIndex: input?.selectedTypeIndex
+            };
+          });
+          node.outputs = preview.outputs.map((item) => {
+            const output = outputsMap.get(item.key);
+            return {
+              ...item,
+              value: output?.value
+            };
+          });
+        }
      } catch (error) {
        node.pluginData = {
          error: getErrText(error)
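The merge in `rewriteAppWorkflowToDetail` keeps user-entered values while adopting the latest template's input/output definitions, keyed by `key` via a `Map`. The same idea as a pure, testable sketch (types simplified; the real items carry many more fields):

```typescript
// Simplified stand-in for a flow node input/output item.
type IOItem = { key: string; value?: unknown };

// Take the latest template items, but carry over any value the node
// already holds for the same key — mirroring the Map-based merge above.
function mergeByKey(templateItems: IOItem[], nodeItems: IOItem[]): IOItem[] {
  const existing = new Map(nodeItems.map((item) => [item.key, item]));
  return templateItems.map((item) => ({
    ...item,
    value: existing.get(item.key)?.value
  }));
}
```

Keys present only in the old node are dropped, which is the desired behavior when a template removes an input.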

View File

@ -166,7 +166,7 @@ export async function saveChat({
    if (isUpdateUseTime) {
      await MongoApp.findByIdAndUpdate(appId, {
        updateTime: new Date()
-      });
+      }).catch(() => {});
    }
  } catch (error) {
    addLog.error(`update chat history error`, error);

View File

@ -95,6 +95,10 @@ export const dispatchAppRequest = async (props: Props): Promise<Response> => {
  const { text } = chatValue2RuntimePrompt(assistantResponses);

  return {
+    data: {
+      answerText: text,
+      history: completeMessages
+    },
    assistantResponses,
    system_memories,
    [DispatchNodeResponseKeyEnum.nodeResponse]: {
@ -108,8 +112,6 @@ export const dispatchAppRequest = async (props: Props): Promise<Response> => {
        moduleName: appData.name,
        totalPoints: flowUsages.reduce((sum, item) => sum + (item.totalPoints || 0), 0)
      }
-    ],
-    answerText: text,
-    history: completeMessages
+    ]
  };
};

View File

@ -6,7 +6,7 @@ import type {
  RuntimeNodeItemType
} from '@fastgpt/global/core/workflow/runtime/type';
import { getLLMModel } from '../../../../ai/model';
-import { filterToolNodeIdByEdges, getHistories } from '../../utils';
+import { filterToolNodeIdByEdges, getNodeErrResponse, getHistories } from '../../utils';
import { runToolWithToolChoice } from './toolChoice';
import { type DispatchToolModuleProps, type ToolNodeItemType } from './type';
import { type ChatItemType, type UserChatItemValueItemType } from '@fastgpt/global/core/chat/type';
@ -25,7 +25,6 @@ import { runToolWithPromptCall } from './promptCall';
import { replaceVariable } from '@fastgpt/global/common/string/tools';
import { getMultiplePrompt, Prompt_Tool_Call } from './constants';
import { filterToolResponseToPreview } from './utils';
-import { type InteractiveNodeResponseType } from '@fastgpt/global/core/workflow/template/system/interactive/type';
import { getFileContentFromLinks, getHistoryFileLinks } from '../../tools/readFiles';
import { parseUrlToFileType } from '@fastgpt/global/common/file/tools';
import { FlowNodeTypeEnum } from '@fastgpt/global/core/workflow/node/constant';
@ -38,7 +37,6 @@ import type { JSONSchemaInputType } from '@fastgpt/global/core/app/jsonschema';
type Response = DispatchNodeResultType<{
  [NodeOutputKeyEnum.answerText]: string;
-  [DispatchNodeResponseKeyEnum.interactive]?: InteractiveNodeResponseType;
}>;

export const dispatchRunTools = async (props: DispatchToolModuleProps): Promise<Response> => {
@ -64,244 +62,249 @@ export const dispatchRunTools = async (props: DispatchToolModuleProps): Promise<
} }
} = props; } = props;
const toolModel = getLLMModel(model); try {
const useVision = aiChatVision && toolModel.vision; const toolModel = getLLMModel(model);
const chatHistories = getHistories(history, histories); const useVision = aiChatVision && toolModel.vision;
const chatHistories = getHistories(history, histories);
props.params.aiChatVision = aiChatVision && toolModel.vision; props.params.aiChatVision = aiChatVision && toolModel.vision;
props.params.aiChatReasoning = aiChatReasoning && toolModel.reasoning; props.params.aiChatReasoning = aiChatReasoning && toolModel.reasoning;
const fileUrlInput = inputs.find((item) => item.key === NodeInputKeyEnum.fileUrlList); const fileUrlInput = inputs.find((item) => item.key === NodeInputKeyEnum.fileUrlList);
if (!fileUrlInput || !fileUrlInput.value || fileUrlInput.value.length === 0) { if (!fileUrlInput || !fileUrlInput.value || fileUrlInput.value.length === 0) {
fileLinks = undefined; fileLinks = undefined;
} }
const toolNodeIds = filterToolNodeIdByEdges({ nodeId, edges: runtimeEdges }); const toolNodeIds = filterToolNodeIdByEdges({ nodeId, edges: runtimeEdges });
// Gets the module to which the tool is connected // Gets the module to which the tool is connected
const toolNodes = toolNodeIds const toolNodes = toolNodeIds
.map((nodeId) => { .map((nodeId) => {
const tool = runtimeNodes.find((item) => item.nodeId === nodeId); const tool = runtimeNodes.find((item) => item.nodeId === nodeId);
return tool; return tool;
}) })
.filter(Boolean) .filter(Boolean)
.map<ToolNodeItemType>((tool) => { .map<ToolNodeItemType>((tool) => {
const toolParams: FlowNodeInputItemType[] = []; const toolParams: FlowNodeInputItemType[] = [];
// Raw json schema(MCP tool) // Raw json schema(MCP tool)
let jsonSchema: JSONSchemaInputType | undefined = undefined; let jsonSchema: JSONSchemaInputType | undefined = undefined;
tool?.inputs.forEach((input) => { tool?.inputs.forEach((input) => {
if (input.toolDescription) { if (input.toolDescription) {
toolParams.push(input); toolParams.push(input);
} }
if (input.key === NodeInputKeyEnum.toolData || input.key === 'toolData') { if (input.key === NodeInputKeyEnum.toolData || input.key === 'toolData') {
const value = input.value as McpToolDataType; const value = input.value as McpToolDataType;
jsonSchema = value.inputSchema; jsonSchema = value.inputSchema;
} }
});
return {
...(tool as RuntimeNodeItemType),
toolParams,
jsonSchema
};
}); });
return { // Check interactive entry
...(tool as RuntimeNodeItemType), props.node.isEntry = false;
toolParams, const hasReadFilesTool = toolNodes.some(
jsonSchema (item) => item.flowNodeType === FlowNodeTypeEnum.readFiles
}; );
const globalFiles = chatValue2RuntimePrompt(query).files;
const { documentQuoteText, userFiles } = await getMultiInput({
runningUserInfo,
histories: chatHistories,
requestOrigin,
maxFiles: chatConfig?.fileSelectConfig?.maxFiles || 20,
customPdfParse: chatConfig?.fileSelectConfig?.customPdfParse,
fileLinks,
inputFiles: globalFiles,
hasReadFilesTool
}); });
// Check interactive entry const concatenateSystemPrompt = [
props.node.isEntry = false; toolModel.defaultSystemChatPrompt,
const hasReadFilesTool = toolNodes.some( systemPrompt,
(item) => item.flowNodeType === FlowNodeTypeEnum.readFiles documentQuoteText
); ? replaceVariable(getDocumentQuotePrompt(version), {
quote: documentQuoteText
const globalFiles = chatValue2RuntimePrompt(query).files;
const { documentQuoteText, userFiles } = await getMultiInput({
runningUserInfo,
histories: chatHistories,
requestOrigin,
maxFiles: chatConfig?.fileSelectConfig?.maxFiles || 20,
customPdfParse: chatConfig?.fileSelectConfig?.customPdfParse,
fileLinks,
inputFiles: globalFiles,
hasReadFilesTool
});
const concatenateSystemPrompt = [
toolModel.defaultSystemChatPrompt,
systemPrompt,
documentQuoteText
? replaceVariable(getDocumentQuotePrompt(version), {
quote: documentQuoteText
})
: ''
]
.filter(Boolean)
.join('\n\n===---===---===\n\n');
const messages: ChatItemType[] = (() => {
const value: ChatItemType[] = [
...getSystemPrompt_ChatItemType(concatenateSystemPrompt),
// Add file input prompt to histories
...chatHistories.map((item) => {
if (item.obj === ChatRoleEnum.Human) {
return {
...item,
value: toolCallMessagesAdapt({
userInput: item.value,
skip: !hasReadFilesTool
})
};
}
return item;
}),
{
obj: ChatRoleEnum.Human,
value: toolCallMessagesAdapt({
skip: !hasReadFilesTool,
userInput: runtimePrompt2ChatsValue({
text: userChatInput,
files: userFiles
}) })
}) : ''
} ]
]; .filter(Boolean)
if (lastInteractive && isEntry) { .join('\n\n===---===---===\n\n');
return value.slice(0, -2);
}
return value;
})();
// censor model and system key const messages: ChatItemType[] = (() => {
if (toolModel.censor && !externalProvider.openaiAccount?.key) { const value: ChatItemType[] = [
await postTextCensor({ ...getSystemPrompt_ChatItemType(concatenateSystemPrompt),
text: `${systemPrompt} // Add file input prompt to histories
...chatHistories.map((item) => {
if (item.obj === ChatRoleEnum.Human) {
return {
...item,
value: toolCallMessagesAdapt({
userInput: item.value,
skip: !hasReadFilesTool
})
};
}
return item;
}),
{
obj: ChatRoleEnum.Human,
value: toolCallMessagesAdapt({
skip: !hasReadFilesTool,
userInput: runtimePrompt2ChatsValue({
text: userChatInput,
files: userFiles
})
})
}
];
if (lastInteractive && isEntry) {
return value.slice(0, -2);
}
return value;
})();
// censor model and system key
if (toolModel.censor && !externalProvider.openaiAccount?.key) {
await postTextCensor({
text: `${systemPrompt}
${userChatInput} ${userChatInput}
` `
});
}
const {
toolWorkflowInteractiveResponse,
dispatchFlowResponse, // tool flow response
toolNodeInputTokens,
toolNodeOutputTokens,
completeMessages = [], // The actual message sent to AI(just save text)
assistantResponses = [], // FastGPT system store assistant.value response
runTimes,
finish_reason
} = await (async () => {
const adaptMessages = chats2GPTMessages({
messages,
reserveId: false
// reserveTool: !!toolModel.toolChoice
});
const requestParams = {
runtimeNodes,
runtimeEdges,
toolNodes,
toolModel,
messages: adaptMessages,
interactiveEntryToolParams: lastInteractive?.toolParams
};
if (toolModel.toolChoice) {
return runToolWithToolChoice({
...props,
...requestParams,
maxRunToolTimes: 30
});
}
if (toolModel.functionCall) {
return runToolWithFunctionCall({
...props,
...requestParams
}); });
} }
const lastMessage = adaptMessages[adaptMessages.length - 1]; const {
if (typeof lastMessage?.content === 'string') { toolWorkflowInteractiveResponse,
lastMessage.content = replaceVariable(Prompt_Tool_Call, { dispatchFlowResponse, // tool flow response
question: lastMessage.content toolNodeInputTokens,
toolNodeOutputTokens,
completeMessages = [], // The actual message sent to AI(just save text)
assistantResponses = [], // FastGPT system store assistant.value response
runTimes,
finish_reason
} = await (async () => {
const adaptMessages = chats2GPTMessages({
messages,
reserveId: false
// reserveTool: !!toolModel.toolChoice
}); });
} else if (Array.isArray(lastMessage.content)) { const requestParams = {
// array, replace last element runtimeNodes,
const lastText = lastMessage.content[lastMessage.content.length - 1]; runtimeEdges,
if (lastText.type === 'text') { toolNodes,
lastText.text = replaceVariable(Prompt_Tool_Call, { toolModel,
question: lastText.text messages: adaptMessages,
interactiveEntryToolParams: lastInteractive?.toolParams
};
if (toolModel.toolChoice) {
return runToolWithToolChoice({
...props,
...requestParams,
maxRunToolTimes: 30
}); });
}
if (toolModel.functionCall) {
return runToolWithFunctionCall({
...props,
...requestParams
});
}
const lastMessage = adaptMessages[adaptMessages.length - 1];
if (typeof lastMessage?.content === 'string') {
lastMessage.content = replaceVariable(Prompt_Tool_Call, {
question: lastMessage.content
});
} else if (Array.isArray(lastMessage.content)) {
// array, replace last element
const lastText = lastMessage.content[lastMessage.content.length - 1];
if (lastText.type === 'text') {
lastText.text = replaceVariable(Prompt_Tool_Call, {
question: lastText.text
});
} else {
return Promise.reject('Prompt call invalid input');
}
} else { } else {
return Promise.reject('Prompt call invalid input'); return Promise.reject('Prompt call invalid input');
} }
} else {
return Promise.reject('Prompt call invalid input');
}
return runToolWithPromptCall({ return runToolWithPromptCall({
...props, ...props,
...requestParams ...requestParams
});
})();
const { totalPoints, modelName } = formatModelChars2Points({
model,
inputTokens: toolNodeInputTokens,
outputTokens: toolNodeOutputTokens,
modelType: ModelTypeEnum.llm
}); });
})(); const toolAIUsage = externalProvider.openaiAccount?.key ? 0 : totalPoints;
const { totalPoints, modelName } = formatModelChars2Points({ // flat child tool response
model, const childToolResponse = dispatchFlowResponse.map((item) => item.flowResponses).flat();
inputTokens: toolNodeInputTokens,
outputTokens: toolNodeOutputTokens,
modelType: ModelTypeEnum.llm
});
const toolAIUsage = externalProvider.openaiAccount?.key ? 0 : totalPoints;
// flat child tool response // concat tool usage
const childToolResponse = dispatchFlowResponse.map((item) => item.flowResponses).flat(); const totalPointsUsage =
toolAIUsage +
dispatchFlowResponse.reduce((sum, item) => {
const childrenTotal = item.flowUsages.reduce((sum, item) => sum + item.totalPoints, 0);
return sum + childrenTotal;
}, 0);
const flatUsages = dispatchFlowResponse.map((item) => item.flowUsages).flat();
// concat tool usage const previewAssistantResponses = filterToolResponseToPreview(assistantResponses);
const totalPointsUsage =
toolAIUsage +
dispatchFlowResponse.reduce((sum, item) => {
const childrenTotal = item.flowUsages.reduce((sum, item) => sum + item.totalPoints, 0);
return sum + childrenTotal;
}, 0);
const flatUsages = dispatchFlowResponse.map((item) => item.flowUsages).flat();
const previewAssistantResponses = filterToolResponseToPreview(assistantResponses); return {
data: {
return { [NodeOutputKeyEnum.answerText]: previewAssistantResponses
[DispatchNodeResponseKeyEnum.runTimes]: runTimes, .filter((item) => item.text?.content)
[NodeOutputKeyEnum.answerText]: previewAssistantResponses .map((item) => item.text?.content || '')
.filter((item) => item.text?.content) .join('')
.map((item) => item.text?.content || '')
.join(''),
[DispatchNodeResponseKeyEnum.assistantResponses]: previewAssistantResponses,
[DispatchNodeResponseKeyEnum.nodeResponse]: {
// 展示的积分消耗
totalPoints: totalPointsUsage,
toolCallInputTokens: toolNodeInputTokens,
toolCallOutputTokens: toolNodeOutputTokens,
childTotalPoints: flatUsages.reduce((sum, item) => sum + item.totalPoints, 0),
model: modelName,
query: userChatInput,
historyPreview: getHistoryPreview(
GPTMessages2Chats(completeMessages, false),
10000,
useVision
),
toolDetail: childToolResponse,
mergeSignId: nodeId,
finishReason: finish_reason
},
[DispatchNodeResponseKeyEnum.nodeDispatchUsages]: [
// 工具调用本身的积分消耗
{
moduleName: name,
model: modelName,
totalPoints: toolAIUsage,
inputTokens: toolNodeInputTokens,
outputTokens: toolNodeOutputTokens
}, },
// 工具的消耗 [DispatchNodeResponseKeyEnum.runTimes]: runTimes,
...flatUsages [DispatchNodeResponseKeyEnum.assistantResponses]: previewAssistantResponses,
], [DispatchNodeResponseKeyEnum.nodeResponse]: {
[DispatchNodeResponseKeyEnum.interactive]: toolWorkflowInteractiveResponse // 展示的积分消耗
}; totalPoints: totalPointsUsage,
toolCallInputTokens: toolNodeInputTokens,
toolCallOutputTokens: toolNodeOutputTokens,
childTotalPoints: flatUsages.reduce((sum, item) => sum + item.totalPoints, 0),
model: modelName,
query: userChatInput,
historyPreview: getHistoryPreview(
GPTMessages2Chats(completeMessages, false),
10000,
useVision
),
toolDetail: childToolResponse,
mergeSignId: nodeId,
finishReason: finish_reason
},
[DispatchNodeResponseKeyEnum.nodeDispatchUsages]: [
// 工具调用本身的积分消耗
{
moduleName: name,
model: modelName,
totalPoints: toolAIUsage,
inputTokens: toolNodeInputTokens,
outputTokens: toolNodeOutputTokens
},
// 工具的消耗
...flatUsages
],
[DispatchNodeResponseKeyEnum.interactive]: toolWorkflowInteractiveResponse
};
} catch (error) {
return getNodeErrResponse({ error });
}
}; };
const getMultiInput = async ({
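The usage accounting in `dispatchRunTools` above adds the tool-call LLM's own points to the sum of every child flow's usage entries. That aggregation, isolated as a pure sketch (shapes are simplified stand-ins for the real dispatch response types):

```typescript
// Simplified stand-ins for the dispatch usage shapes.
type UsageItem = { totalPoints: number };
type ChildFlowResponse = { flowUsages: UsageItem[] };

// Total = tool-call AI usage + sum over each child flow's usage entries,
// mirroring the totalPointsUsage reduce in dispatchRunTools.
function sumToolPoints(toolAIUsage: number, children: ChildFlowResponse[]): number {
  return (
    toolAIUsage +
    children.reduce((sum, child) => {
      const childTotal = child.flowUsages.reduce((s, u) => s + u.totalPoints, 0);
      return sum + childTotal;
    }, 0)
  );
}
```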

View File

@ -17,10 +17,7 @@ import type {
} from '@fastgpt/global/core/ai/type.d';
import { formatModelChars2Points } from '../../../../support/wallet/usage/utils';
import type { LLMModelItemType } from '@fastgpt/global/core/ai/model.d';
-import {
-  ChatCompletionRequestMessageRoleEnum,
-  getLLMDefaultUsage
-} from '@fastgpt/global/core/ai/constants';
+import { ChatCompletionRequestMessageRoleEnum } from '@fastgpt/global/core/ai/constants';
import type {
  ChatDispatchProps,
  DispatchNodeResultType
@ -47,7 +44,7 @@ import type { SearchDataResponseItemType } from '@fastgpt/global/core/dataset/ty
import type { NodeOutputKeyEnum } from '@fastgpt/global/core/workflow/constants';
import { NodeInputKeyEnum } from '@fastgpt/global/core/workflow/constants';
import { DispatchNodeResponseKeyEnum } from '@fastgpt/global/core/workflow/runtime/constants';
-import { checkQuoteQAValue, getHistories } from '../utils';
+import { checkQuoteQAValue, getNodeErrResponse, getHistories } from '../utils';
import { filterSearchResultsByMaxChars } from '../../utils';
import { getHistoryPreview } from '@fastgpt/global/core/chat/utils';
import { computedMaxToken, llmCompletionsBodyFormat } from '../../../ai/utils';
@ -59,6 +56,7 @@ import { parseUrlToFileType } from '@fastgpt/global/common/file/tools';
import { i18nT } from '../../../../../web/i18n/utils';
import { ModelTypeEnum } from '@fastgpt/global/core/ai/model';
import { postTextCensor } from '../../../chat/postTextCensor';
+import { getErrText } from '@fastgpt/global/common/error/utils';

export type ChatProps = ModuleDispatchProps<
  AIChatNodeProps & {
@ -67,11 +65,16 @@ export type ChatProps = ModuleDispatchProps<
    [NodeInputKeyEnum.aiChatDatasetQuote]?: SearchDataResponseItemType[];
  }
>;
-export type ChatResponse = DispatchNodeResultType<{
-  [NodeOutputKeyEnum.answerText]: string;
-  [NodeOutputKeyEnum.reasoningText]?: string;
-  [NodeOutputKeyEnum.history]: ChatItemType[];
-}>;
+export type ChatResponse = DispatchNodeResultType<
+  {
+    [NodeOutputKeyEnum.answerText]: string;
+    [NodeOutputKeyEnum.reasoningText]?: string;
+    [NodeOutputKeyEnum.history]: ChatItemType[];
+  },
+  {
+    [NodeOutputKeyEnum.errorText]: string;
+  }
+>;
/* request openai chat */
export const dispatchChatCompletion = async (props: ChatProps): Promise<ChatResponse> => {
@ -114,243 +117,253 @@ export const dispatchChatCompletion = async (props: ChatProps): Promise<ChatResp
  const modelConstantsData = getLLMModel(model);

  if (!modelConstantsData) {
-    return Promise.reject(`Mode ${model} is undefined, you need to select a chat model.`);
+    return getNodeErrResponse({
+      error: `Model ${model} is undefined, you need to select a chat model.`
+    });
  }
aiChatVision = modelConstantsData.vision && aiChatVision; try {
aiChatReasoning = !!aiChatReasoning && !!modelConstantsData.reasoning; aiChatVision = modelConstantsData.vision && aiChatVision;
// Check fileLinks is reference variable aiChatReasoning = !!aiChatReasoning && !!modelConstantsData.reasoning;
const fileUrlInput = inputs.find((item) => item.key === NodeInputKeyEnum.fileUrlList); // Check fileLinks is reference variable
if (!fileUrlInput || !fileUrlInput.value || fileUrlInput.value.length === 0) { const fileUrlInput = inputs.find((item) => item.key === NodeInputKeyEnum.fileUrlList);
fileLinks = undefined; if (!fileUrlInput || !fileUrlInput.value || fileUrlInput.value.length === 0) {
} fileLinks = undefined;
}
const chatHistories = getHistories(history, histories); const chatHistories = getHistories(history, histories);
quoteQA = checkQuoteQAValue(quoteQA); quoteQA = checkQuoteQAValue(quoteQA);
const [{ datasetQuoteText }, { documentQuoteText, userFiles }] = await Promise.all([ const [{ datasetQuoteText }, { documentQuoteText, userFiles }] = await Promise.all([
filterDatasetQuote({ filterDatasetQuote({
quoteQA, quoteQA,
model: modelConstantsData,
quoteTemplate: quoteTemplate || getQuoteTemplate(version)
}),
getMultiInput({
histories: chatHistories,
        inputFiles,
        fileLinks,
        stringQuoteText,
        requestOrigin,
        maxFiles: chatConfig?.fileSelectConfig?.maxFiles || 20,
        customPdfParse: chatConfig?.fileSelectConfig?.customPdfParse,
        runningUserInfo
      })
    ]);

    if (!userChatInput && !documentQuoteText && userFiles.length === 0) {
      return getNodeErrResponse({ error: i18nT('chat:AI_input_is_empty') });
    }

    const max_tokens = computedMaxToken({
      model: modelConstantsData,
      maxToken
    });

    const [{ filterMessages }] = await Promise.all([
      getChatMessages({
        model: modelConstantsData,
        maxTokens: max_tokens,
        histories: chatHistories,
        useDatasetQuote: quoteQA !== undefined,
        datasetQuoteText,
        aiChatQuoteRole,
        datasetQuotePrompt: quotePrompt,
        version,
        userChatInput,
        systemPrompt,
        userFiles,
        documentQuoteText
      }),
      // Censor = true and system key, will check content
      (() => {
        if (modelConstantsData.censor && !externalProvider.openaiAccount?.key) {
          return postTextCensor({
            text: `${systemPrompt}
            ${userChatInput}
            `
          });
        }
      })()
    ]);

    const requestMessages = await loadRequestMessages({
      messages: filterMessages,
      useVision: aiChatVision,
      origin: requestOrigin
    });

    const requestBody = llmCompletionsBodyFormat(
      {
        model: modelConstantsData.model,
        stream,
        messages: requestMessages,
        temperature,
        max_tokens,
        top_p: aiChatTopP,
        stop: aiChatStopSign,
        response_format: {
          type: aiChatResponseFormat as any,
          json_schema: aiChatJsonSchema
        }
      },
      modelConstantsData
    );
    // console.log(JSON.stringify(requestBody, null, 2), '===');
    const { response, isStreamResponse, getEmptyResponseTip } = await createChatCompletion({
      body: requestBody,
      userKey: externalProvider.openaiAccount,
      options: {
        headers: {
          Accept: 'application/json, text/plain, */*'
        }
      }
    });

    let { answerText, reasoningText, finish_reason, inputTokens, outputTokens } =
      await (async () => {
        if (isStreamResponse) {
          if (!res || res.closed) {
            return {
              answerText: '',
              reasoningText: '',
              finish_reason: 'close' as const,
              inputTokens: 0,
              outputTokens: 0
            };
          }
          // sse response
          const { answer, reasoning, finish_reason, usage } = await streamResponse({
            res,
            stream: response,
            aiChatReasoning,
            parseThinkTag: modelConstantsData.reasoning,
            isResponseAnswerText,
            workflowStreamResponse,
            retainDatasetCite
          });

          return {
            answerText: answer,
            reasoningText: reasoning,
            finish_reason,
            inputTokens: usage?.prompt_tokens,
            outputTokens: usage?.completion_tokens
          };
        } else {
          const finish_reason = response.choices?.[0]?.finish_reason as CompletionFinishReason;
          const usage = response.usage;
          const { content, reasoningContent } = (() => {
            const content = response.choices?.[0]?.message?.content || '';
            const reasoningContent: string =
              // @ts-ignore
              response.choices?.[0]?.message?.reasoning_content || '';

            // API already parse reasoning content
            if (reasoningContent || !aiChatReasoning) {
              return {
                content,
                reasoningContent
              };
            }

            const [think, answer] = parseReasoningContent(content);
            return {
              content: answer,
              reasoningContent: think
            };
          })();

          const formatReasonContent = removeDatasetCiteText(reasoningContent, retainDatasetCite);
          const formatContent = removeDatasetCiteText(content, retainDatasetCite);

          // Some models do not support streaming
          if (aiChatReasoning && reasoningContent) {
            workflowStreamResponse?.({
              event: SseResponseEventEnum.fastAnswer,
              data: textAdaptGptResponse({
                reasoning_content: formatReasonContent
              })
            });
          }
          if (isResponseAnswerText && content) {
            workflowStreamResponse?.({
              event: SseResponseEventEnum.fastAnswer,
              data: textAdaptGptResponse({
                text: formatContent
              })
            });
          }

          return {
            reasoningText: formatReasonContent,
            answerText: formatContent,
            finish_reason,
            inputTokens: usage?.prompt_tokens,
            outputTokens: usage?.completion_tokens
          };
        }
      })();

    if (!answerText && !reasoningText) {
      return getNodeErrResponse({ error: getEmptyResponseTip() });
    }

    const AIMessages: ChatCompletionMessageParam[] = [
      {
        role: ChatCompletionRequestMessageRoleEnum.Assistant,
        content: answerText,
        reasoning_text: reasoningText // reasoning_text is only recorded for response, but not for request
      }
    ];

    const completeMessages = [...requestMessages, ...AIMessages];
    const chatCompleteMessages = GPTMessages2Chats(completeMessages);

    inputTokens = inputTokens || (await countGptMessagesTokens(requestMessages));
    outputTokens = outputTokens || (await countGptMessagesTokens(AIMessages));

    const { totalPoints, modelName } = formatModelChars2Points({
      model,
      inputTokens,
      outputTokens,
      modelType: ModelTypeEnum.llm
    });

    return {
      data: {
        answerText: answerText.trim(),
        reasoningText,
        history: chatCompleteMessages
      },
      [DispatchNodeResponseKeyEnum.nodeResponse]: {
        totalPoints: externalProvider.openaiAccount?.key ? 0 : totalPoints,
        model: modelName,
        inputTokens: inputTokens,
        outputTokens: outputTokens,
        query: `${userChatInput}`,
        maxToken: max_tokens,
        reasoningText,
        historyPreview: getHistoryPreview(chatCompleteMessages, 10000, aiChatVision),
        contextTotalLen: completeMessages.length,
        finishReason: finish_reason
      },
      [DispatchNodeResponseKeyEnum.nodeDispatchUsages]: [
        {
          moduleName: name,
          totalPoints: externalProvider.openaiAccount?.key ? 0 : totalPoints,
          model: modelName,
          inputTokens: inputTokens,
          outputTokens: outputTokens
        }
      ],
      [DispatchNodeResponseKeyEnum.toolResponses]: answerText
    };
  } catch (error) {
    return getNodeErrResponse({ error });
  }
};

async function filterDatasetQuote({

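Every dispatcher in this PR now returns errors through `getNodeErrResponse` instead of `Promise.reject`, so a failed node produces a normal result object that downstream catch-error handles can consume. The helper itself lives in `../utils` and is not shown in this diff; the sketch below is an assumption inferred from the call sites above (the real implementation keys the fields through `DispatchNodeResponseKeyEnum`, and its exact shape may differ):

```typescript
// Hypothetical sketch of a node-level error envelope, modeled on the
// getNodeErrResponse(...) calls in the diff. Field names are assumptions.
type NodeErrResponse = {
  error: string;
  nodeResponse: { error: string; [key: string]: unknown };
  toolResponses: string;
};

// Normalize unknown thrown values to a display string
const getErrText = (error: unknown): string =>
  error instanceof Error ? error.message : String(error);

function getNodeErrResponse({
  error,
  customNodeResponse
}: {
  error: unknown;
  customNodeResponse?: Record<string, unknown>;
}): NodeErrResponse {
  const errText = getErrText(error);
  return {
    error: errText,
    // extra fields (e.g. moduleLogo) are merged into the node response
    nodeResponse: { error: errText, ...customNodeResponse },
    toolResponses: errText
  };
}
```

The point of the pattern is that `try { ... } catch (error) { return getNodeErrResponse({ error }); }` turns every node failure into data rather than a rejected promise, which is what makes the new workflow-level "catch error" branches possible.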
View File

@@ -78,7 +78,9 @@ export const dispatchClassifyQuestion = async (props: Props): Promise<CQResponse
    });

    return {
      data: {
        [NodeOutputKeyEnum.cqResult]: result.value
      },
      [DispatchNodeResponseKeyEnum.skipHandleId]: agents
        .filter((item) => item.key !== result.key)
        .map((item) => getHandleId(nodeId, 'source', item.key)),

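The classify-question return above skips every agent branch except the matched one by emitting the skip-handle ids of all non-matching agents. A standalone sketch of that computation, with a hypothetical `getHandleId` format (the real helper comes from the workflow runtime utils and may format ids differently):

```typescript
// Simplified agent branch, as used by the classify-question node
type Agent = { key: string; value: string };

// Hypothetical handle-id format; stand-in for the runtime's getHandleId
const getHandleId = (nodeId: string, side: 'source' | 'target', key: string) =>
  `${nodeId}-${side}-${key}`;

// All branches except the matched key are marked to be skipped
const getSkipHandleIds = (nodeId: string, agents: Agent[], matchedKey: string) =>
  agents
    .filter((item) => item.key !== matchedKey)
    .map((item) => getHandleId(nodeId, 'source', item.key));
```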
View File

@@ -19,7 +19,7 @@ import { DispatchNodeResponseKeyEnum } from '@fastgpt/global/core/workflow/runti
import type { ModuleDispatchProps } from '@fastgpt/global/core/workflow/runtime/type';
import { sliceJsonStr } from '@fastgpt/global/common/string/tools';
import { type LLMModelItemType } from '@fastgpt/global/core/ai/model.d';
import { getNodeErrResponse, getHistories } from '../utils';
import { getLLMModel } from '../../../ai/model';
import { formatModelChars2Points } from '../../../../support/wallet/usage/utils';
import json5 from 'json5';

@@ -46,6 +46,7 @@ type Props = ModuleDispatchProps<{
type Response = DispatchNodeResultType<{
  [NodeOutputKeyEnum.success]: boolean;
  [NodeOutputKeyEnum.contextExtractFields]: string;
  [key: string]: any;
}>;

type ActionProps = Props & { extractModel: LLMModelItemType; lastMemory?: Record<string, any> };

@@ -62,7 +63,7 @@ export async function dispatchContentExtract(props: Props): Promise<Response> {
  } = props;

  if (!content) {
    return getNodeErrResponse({ error: 'Input is empty' });
  }

  const extractModel = getLLMModel(model);

@@ -75,88 +76,94 @@ export async function dispatchContentExtract(props: Props): Promise<Response> {
    any
  >;

  try {
    const { arg, inputTokens, outputTokens } = await (async () => {
      if (extractModel.toolChoice) {
        return toolChoice({
          ...props,
          histories: chatHistories,
          extractModel,
          lastMemory
        });
      }
      return completions({
        ...props,
        histories: chatHistories,
        extractModel,
        lastMemory
      });
    })();

    // Remove invalid or empty keys
    for (let key in arg) {
      const item = extractKeys.find((item) => item.key === key);
      if (!item) {
        delete arg[key];
      }
      if (arg[key] === '') {
        delete arg[key];
      }
    }
    // Auto fill required fields
    extractKeys.forEach((item) => {
      if (item.required && arg[item.key] === undefined) {
        arg[item.key] = item.defaultValue || '';
      }
    });

    // Check that every declared field is present
    let success = !extractKeys.find((item) => !(item.key in arg));
    // Check for undeclared keys
    if (success) {
      for (const key in arg) {
        const item = extractKeys.find((item) => item.key === key);
        if (!item) {
          success = false;
          break;
        }
      }
    }

    const { totalPoints, modelName } = formatModelChars2Points({
      model: extractModel.model,
      inputTokens: inputTokens,
      outputTokens: outputTokens,
      modelType: ModelTypeEnum.llm
    });

    return {
      data: {
        [NodeOutputKeyEnum.success]: success,
        [NodeOutputKeyEnum.contextExtractFields]: JSON.stringify(arg),
        ...arg
      },
      [DispatchNodeResponseKeyEnum.memories]: {
        [memoryKey]: arg
      },
      [DispatchNodeResponseKeyEnum.nodeResponse]: {
        totalPoints: externalProvider.openaiAccount?.key ? 0 : totalPoints,
        model: modelName,
        query: content,
        inputTokens,
        outputTokens,
        extractDescription: description,
        extractResult: arg,
        contextTotalLen: chatHistories.length + 2
      },
      [DispatchNodeResponseKeyEnum.nodeDispatchUsages]: [
        {
          moduleName: name,
          totalPoints: externalProvider.openaiAccount?.key ? 0 : totalPoints,
          model: modelName,
          inputTokens,
          outputTokens
        }
      ]
    };
  } catch (error) {
    return getNodeErrResponse({ error });
  }
}

const getJsonSchema = ({ params: { extractKeys } }: ActionProps) => {

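The extract node's cleanup steps (drop unknown/empty keys, fill required defaults, then validate completeness) can be pulled out into a standalone sketch. Shapes here are simplified assumptions; the real node mutates `arg` in place and computes `success` exactly as shown in the diff:

```typescript
// Simplified field descriptor, mirroring the extractKeys items above
type ExtractKey = { key: string; required?: boolean; defaultValue?: string };

function cleanExtractArgs(
  arg: Record<string, any>,
  extractKeys: ExtractKey[]
): { arg: Record<string, any>; success: boolean } {
  // Remove keys that are undeclared or empty strings
  for (const key of Object.keys(arg)) {
    const item = extractKeys.find((item) => item.key === key);
    if (!item || arg[key] === '') delete arg[key];
  }
  // Auto fill required fields with their defaults
  extractKeys.forEach((item) => {
    if (item.required && arg[item.key] === undefined) {
      arg[item.key] = item.defaultValue || '';
    }
  });
  // Success only if every declared key ended up present
  const success = !extractKeys.find((item) => !(item.key in arg));
  return { arg, success };
}
```

Note the ordering matters: an empty string for a required key is first deleted, then repopulated from `defaultValue`, so the model returning `""` behaves the same as the model omitting the field.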
View File

@@ -0,0 +1,208 @@
import type { ChatItemType } from '@fastgpt/global/core/chat/type.d';
import type { ModuleDispatchProps } from '@fastgpt/global/core/workflow/runtime/type';
import { dispatchWorkFlow } from '../index';
import { ChatRoleEnum } from '@fastgpt/global/core/chat/constants';
import { SseResponseEventEnum } from '@fastgpt/global/core/workflow/runtime/constants';
import {
  getWorkflowEntryNodeIds,
  storeEdges2RuntimeEdges,
  rewriteNodeOutputByHistories,
  storeNodes2RuntimeNodes,
  textAdaptGptResponse
} from '@fastgpt/global/core/workflow/runtime/utils';
import type { NodeInputKeyEnum } from '@fastgpt/global/core/workflow/constants';
import { NodeOutputKeyEnum } from '@fastgpt/global/core/workflow/constants';
import { DispatchNodeResponseKeyEnum } from '@fastgpt/global/core/workflow/runtime/constants';
import { filterSystemVariables, getNodeErrResponse, getHistories } from '../utils';
import { chatValue2RuntimePrompt, runtimePrompt2ChatsValue } from '@fastgpt/global/core/chat/adapt';
import { type DispatchNodeResultType } from '@fastgpt/global/core/workflow/runtime/type';
import { authAppByTmbId } from '../../../../support/permission/app/auth';
import { ReadPermissionVal } from '@fastgpt/global/support/permission/constant';
import { getAppVersionById } from '../../../app/version/controller';
import { parseUrlToFileType } from '@fastgpt/global/common/file/tools';
import { getUserChatInfoAndAuthTeamPoints } from '../../../../support/permission/auth/team';

type Props = ModuleDispatchProps<{
  [NodeInputKeyEnum.userChatInput]: string;
  [NodeInputKeyEnum.history]?: ChatItemType[] | number;
  [NodeInputKeyEnum.fileUrlList]?: string[];
  [NodeInputKeyEnum.forbidStream]?: boolean;
}>;
type Response = DispatchNodeResultType<{
  [NodeOutputKeyEnum.answerText]: string;
  [NodeOutputKeyEnum.history]: ChatItemType[];
}>;

export const dispatchRunAppNode = async (props: Props): Promise<Response> => {
  const {
    runningAppInfo,
    histories,
    query,
    lastInteractive,
    node: { pluginId: appId, version },
    workflowStreamResponse,
    params,
    variables
  } = props;

  const {
    system_forbid_stream = false,
    userChatInput,
    history,
    fileUrlList,
    ...childrenAppVariables
  } = params;
  const { files } = chatValue2RuntimePrompt(query);

  const userInputFiles = (() => {
    if (fileUrlList) {
      return fileUrlList.map((url) => parseUrlToFileType(url)).filter(Boolean);
    }
    // Adapt version 4.8.13 upgrade
    return files;
  })();

  if (!userChatInput && userInputFiles.length === 0) {
    return getNodeErrResponse({ error: 'Input is empty' });
  }
  if (!appId) {
    return getNodeErrResponse({ error: 'pluginId is empty' });
  }

  try {
    // Auth the app by tmbId (not the chatting user, but the workflow owner)
    const { app: appData } = await authAppByTmbId({
      appId: appId,
      tmbId: runningAppInfo.tmbId,
      per: ReadPermissionVal
    });
    const { nodes, edges, chatConfig } = await getAppVersionById({
      appId,
      versionId: version,
      app: appData
    });

    const childStreamResponse = system_forbid_stream ? false : props.stream;
    // Auto new line
    if (childStreamResponse) {
      workflowStreamResponse?.({
        event: SseResponseEventEnum.answer,
        data: textAdaptGptResponse({
          text: '\n'
        })
      });
    }

    const chatHistories = getHistories(history, histories);

    // Rewrite children app variables
    const systemVariables = filterSystemVariables(variables);
    const { externalProvider } = await getUserChatInfoAndAuthTeamPoints(appData.tmbId);
    const childrenRunVariables = {
      ...systemVariables,
      ...childrenAppVariables,
      histories: chatHistories,
      appId: String(appData._id),
      ...(externalProvider ? externalProvider.externalWorkflowVariables : {})
    };

    const childrenInteractive =
      lastInteractive?.type === 'childrenInteractive'
        ? lastInteractive.params.childrenResponse
        : undefined;
    const runtimeNodes = rewriteNodeOutputByHistories(
      storeNodes2RuntimeNodes(
        nodes,
        getWorkflowEntryNodeIds(nodes, childrenInteractive || undefined)
      ),
      childrenInteractive
    );
    const runtimeEdges = storeEdges2RuntimeEdges(edges, childrenInteractive);
    const theQuery = childrenInteractive
      ? query
      : runtimePrompt2ChatsValue({ files: userInputFiles, text: userChatInput });

    const {
      flowResponses,
      flowUsages,
      assistantResponses,
      runTimes,
      workflowInteractiveResponse,
      system_memories
    } = await dispatchWorkFlow({
      ...props,
      lastInteractive: childrenInteractive,
      // Rewrite stream mode
      ...(system_forbid_stream
        ? {
            stream: false,
            workflowStreamResponse: undefined
          }
        : {}),
      runningAppInfo: {
        id: String(appData._id),
        teamId: String(appData.teamId),
        tmbId: String(appData.tmbId),
        isChildApp: true
      },
      runtimeNodes,
      runtimeEdges,
      histories: chatHistories,
      variables: childrenRunVariables,
      query: theQuery,
      chatConfig
    });

    const completeMessages = chatHistories.concat([
      {
        obj: ChatRoleEnum.Human,
        value: query
      },
      {
        obj: ChatRoleEnum.AI,
        value: assistantResponses
      }
    ]);

    const { text } = chatValue2RuntimePrompt(assistantResponses);
    const usagePoints = flowUsages.reduce((sum, item) => sum + (item.totalPoints || 0), 0);

    return {
      data: {
        [NodeOutputKeyEnum.answerText]: text,
        [NodeOutputKeyEnum.history]: completeMessages
      },
      system_memories,
      [DispatchNodeResponseKeyEnum.interactive]: workflowInteractiveResponse
        ? {
            type: 'childrenInteractive',
            params: {
              childrenResponse: workflowInteractiveResponse
            }
          }
        : undefined,
      assistantResponses: system_forbid_stream ? [] : assistantResponses,
      [DispatchNodeResponseKeyEnum.runTimes]: runTimes,
      [DispatchNodeResponseKeyEnum.nodeResponse]: {
        moduleLogo: appData.avatar,
        totalPoints: usagePoints,
        query: userChatInput,
        textOutput: text,
        pluginDetail: appData.permission.hasWritePer ? flowResponses : undefined,
        mergeSignId: props.node.nodeId
      },
      [DispatchNodeResponseKeyEnum.nodeDispatchUsages]: [
        {
          moduleName: appData.name,
          totalPoints: usagePoints
        }
      ],
      [DispatchNodeResponseKeyEnum.toolResponses]: text
    };
  } catch (error) {
    return getNodeErrResponse({ error });
  }
};

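When `dispatchRunAppNode` above runs a child app with streaming forbidden, it overrides the parent props via a conditional spread so the child workflow executes silently. That pattern is worth isolating, since the conditional spread only replaces fields when the condition holds. A simplified sketch (types are stand-ins for the real dispatch props):

```typescript
// Stand-in for the subset of workflow props the override touches
type RunProps = { stream: boolean; workflowStreamResponse?: (d: unknown) => void };

// Mirror of the "...(system_forbid_stream ? {...} : {})" rewrite above:
// when forbidStream is true, disable streaming and drop the SSE responder;
// otherwise the parent props pass through unchanged.
function childRunProps(props: RunProps, forbidStream: boolean): RunProps {
  return {
    ...props,
    ...(forbidStream
      ? {
          stream: false,
          workflowStreamResponse: undefined
        }
      : {})
  };
}
```

Explicitly spreading `workflowStreamResponse: undefined` (rather than deleting the key) is what prevents the child workflow from ever writing to the parent's SSE stream.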
View File

@@ -17,23 +17,25 @@ import type { StoreSecretValueType } from '@fastgpt/global/common/secret/type';
import { getSystemPluginById } from '../../../app/plugin/controller';
import { textAdaptGptResponse } from '@fastgpt/global/core/workflow/runtime/utils';
import { pushTrack } from '../../../../common/middle/tracks/utils';
import { getNodeErrResponse } from '../utils';

type SystemInputConfigType = {
  type: SystemToolInputTypeEnum;
  value: StoreSecretValueType;
};

type RunToolProps = ModuleDispatchProps<{
  [NodeInputKeyEnum.toolData]?: McpToolDataType;
  [NodeInputKeyEnum.systemInputConfig]?: SystemInputConfigType;
  [key: string]: any;
}>;

type RunToolResponse = DispatchNodeResultType<
  {
    [NodeOutputKeyEnum.rawResponse]?: any;
    [key: string]: any;
  },
  Record<string, any>
>;

export const dispatchRunTool = async (props: RunToolProps): Promise<RunToolResponse> => {

@@ -43,7 +45,7 @@ export const dispatchRunTool = async (props: RunToolProps): Promise<RunToolRespo
    runningAppInfo,
    variables,
    workflowStreamResponse,
    node: { name, avatar, toolConfig, version, catchError }
  } = props;

  const systemToolId = toolConfig?.systemTool?.toolId;

@@ -80,50 +82,69 @@ export const dispatchRunTool = async (props: RunToolProps): Promise<RunToolRespo
    const formatToolId = tool.id.split('-')[1];

    const res = await runSystemTool({
      toolId: formatToolId,
      inputs,
      systemVar: {
        user: {
          id: variables.userId,
          teamId: runningUserInfo.teamId,
          name: runningUserInfo.tmbId
        },
        app: {
          id: runningAppInfo.id,
          name: runningAppInfo.id
        },
        tool: {
          id: formatToolId,
          version: version || tool.versionList?.[0]?.value || ''
        },
        time: variables.cTime
      },
      onMessage: ({ type, content }) => {
        if (workflowStreamResponse && content) {
          workflowStreamResponse({
            event: type as unknown as SseResponseEventEnum,
            data: textAdaptGptResponse({
              text: content
            })
          });
        }
      }
    });

    let result = res.output || {};

    if (res.error) {
      // Adapt legacy tools: versions without catchError may return a plain `error` field as the normal response.
      if (catchError === undefined && typeof res.error === 'object') {
        return {
          data: res.error,
          [DispatchNodeResponseKeyEnum.nodeResponse]: {
            toolRes: res.error,
            moduleLogo: avatar
          },
          [DispatchNodeResponseKeyEnum.toolResponses]: res.error
        };
      }

      // String error (common error, not a custom one)
      if (typeof res.error === 'string') {
        throw new Error(res.error);
      }

      // Custom error field
      return {
        error: res.error,
        [DispatchNodeResponseKeyEnum.nodeResponse]: {
          error: res.error,
          moduleLogo: avatar
        },
        [DispatchNodeResponseKeyEnum.toolResponses]: res.error
      };
    }

    const usagePoints = (() => {
      if (params.system_input_config?.type !== SystemToolInputTypeEnum.system) {
        return 0;
      }
      return tool.currentCost ?? 0;

@@ -140,6 +161,7 @@ export const dispatchRunTool = async (props: RunToolProps): Promise<RunToolRespo
    });

    return {
      data: result,
      [DispatchNodeResponseKeyEnum.nodeResponse]: {
        toolRes: result,
        moduleLogo: avatar,

@@ -151,8 +173,7 @@ export const dispatchRunTool = async (props: RunToolProps): Promise<RunToolRespo
          moduleName: name,
          totalPoints: usagePoints
        }
      ]
    };
  } else {
    // mcp tool

@@ -168,12 +189,14 @@ export const dispatchRunTool = async (props: RunToolProps): Promise<RunToolRespo
      const result = await mcpClient.toolCall(toolName, restParams);

      return {
        data: {
          [NodeOutputKeyEnum.rawResponse]: result
        },
        [DispatchNodeResponseKeyEnum.nodeResponse]: {
          toolRes: result,
          moduleLogo: avatar
        },
        [DispatchNodeResponseKeyEnum.toolResponses]: result
      };
    }
  } catch (error) {

@@ -188,12 +211,11 @@ export const dispatchRunTool = async (props: RunToolProps): Promise<RunToolRespo
      });
    }

    return getNodeErrResponse({
      error,
      customNodeResponse: {
        moduleLogo: avatar
      }
    });
  }
};

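The system-tool dispatcher above has to handle three distinct error situations: legacy tools (no `catchError` configured) that return an error *object* as their normal payload, plain string errors that should be thrown like ordinary exceptions, and structured custom errors that flow into the new catch-error branch. A standalone sketch of that branching, with simplified stand-in types (the real code routes these through `DispatchNodeResponseKeyEnum` envelopes):

```typescript
// Simplified shape of a system tool run result
type ToolRunResult = { output?: Record<string, any>; error?: unknown };

// Classify a tool result the way the dispatcher above does:
// - no error: the output is the data payload
// - legacy tool (catchError undefined) + error object: treat the error as data
// - string error: throw, so the outer try/catch converts it
// - otherwise: surface the error through the catch-error output
function resolveToolError(
  res: ToolRunResult,
  catchError: boolean | undefined
): { kind: 'data' | 'error'; payload: unknown } {
  if (!res.error) return { kind: 'data', payload: res.output || {} };
  if (catchError === undefined && typeof res.error === 'object') {
    // Legacy tool: the error object doubles as the response payload
    return { kind: 'data', payload: res.error };
  }
  if (typeof res.error === 'string') {
    throw new Error(res.error); // common error, not a custom one
  }
  return { kind: 'error', payload: res.error };
}
```

The `catchError === undefined` check is the backward-compatibility hinge: only nodes that never configured the new catch-error option get the legacy treat-error-as-data behavior.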
View File

@@ -35,10 +35,12 @@ export async function dispatchDatasetConcat(
  );

  return {
    data: {
      [NodeOutputKeyEnum.datasetQuoteQA]: await filterSearchResultsByMaxChars(
        rrfConcatResults,
        limit
      )
    },
    [DispatchNodeResponseKeyEnum.nodeResponse]: {
      concatLength: rrfConcatResults.length
    }

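The `rrfConcatResults` variable above indicates the concat node merges multiple quote lists with reciprocal rank fusion before the max-chars filter. The project's actual RRF utility is not shown in this diff; below is a generic sketch of RRF scoring under the common `k = 60` assumption, purely to illustrate the merge the node performs (weights and constants in FastGPT may differ):

```typescript
// Generic reciprocal-rank-fusion merge: each list contributes
// 1 / (k + rank + 1) per item; items in several lists accumulate score.
function rrfMerge<T>(lists: T[][], getId: (item: T) => string, k = 60): T[] {
  const scores = new Map<string, { item: T; score: number }>();
  for (const list of lists) {
    list.forEach((item, rank) => {
      const id = getId(item);
      const prev = scores.get(id);
      scores.set(id, {
        item: prev?.item ?? item,
        score: (prev?.score || 0) + 1 / (k + rank + 1)
      });
    });
  }
  // Highest fused score first
  return Array.from(scores.values())
    .sort((a, b) => b.score - a.score)
    .map((v) => v.item);
}
```

An item that appears in two result lists outranks an item that tops only one, which is the property that makes RRF a robust way to combine vector and full-text recall lists.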
View File

@@ -17,6 +17,7 @@ import { i18nT } from '../../../../../web/i18n/utils';
import { filterDatasetsByTmbId } from '../../../dataset/utils';
import { ModelTypeEnum } from '@fastgpt/global/core/ai/model';
import { getDatasetSearchToolResponsePrompt } from '../../../../../global/core/ai/prompt/dataset';
import { getNodeErrResponse } from '../utils';

type DatasetSearchProps = ModuleDispatchProps<{
  [NodeInputKeyEnum.datasetSelectList]: SelectedDatasetType;

@@ -83,11 +84,13 @@
  }

  if (datasets.length === 0) {
    return getNodeErrResponse({ error: i18nT('common:core.chat.error.Select dataset empty') });
  }

  const emptyResult: DatasetSearchResponse = {
    data: {
      quoteQA: []
    },
    [DispatchNodeResponseKeyEnum.nodeResponse]: {
      totalPoints: 0,
      query: '',

@@ -102,177 +105,184 @@
    return emptyResult;
  }

  try {
    const datasetIds = authTmbId
      ? await filterDatasetsByTmbId({
          datasetIds: datasets.map((item) => item.datasetId),
          tmbId
        })
      : await Promise.resolve(datasets.map((item) => item.datasetId));

    if (datasetIds.length === 0) {
      return emptyResult;
    }

    // Get vector model
    const vectorModel = getEmbeddingModel(
      (await MongoDataset.findById(datasets[0].datasetId, 'vectorModel').lean())?.vectorModel
    );
    // Get rerank model
    const rerankModelData = getRerankModel(rerankModel);

    // Start search
    const searchData = {
      histories,
      teamId,
      reRankQuery: userChatInput,
      queries: [userChatInput],
      model: vectorModel.model,
      similarity,
      limit,
      datasetIds,
      searchMode,
      embeddingWeight,
      usingReRank,
      rerankModel: rerankModelData,
      rerankWeight,
      collectionFilterMatch
    };
    const {
      searchRes,
      embeddingTokens,
      reRankInputTokens,
      usingSimilarityFilter,
      usingReRank: searchUsingReRank,
      queryExtensionResult,
      deepSearchResult
    } = datasetDeepSearch
      ? await deepRagSearch({
          ...searchData,
          datasetDeepSearchModel,
          datasetDeepSearchMaxTimes,
          datasetDeepSearchBg
        })
      : await defaultSearchDatasetData({
          ...searchData,
          datasetSearchUsingExtensionQuery,
          datasetSearchExtensionModel,
          datasetSearchExtensionBg
        });

    // Count bill results
    const nodeDispatchUsages: ChatNodeUsageType[] = [];

    // Vector
    const { totalPoints: embeddingTotalPoints, modelName: embeddingModelName } =
      formatModelChars2Points({
        model: vectorModel.model,
        inputTokens: embeddingTokens,
        modelType: ModelTypeEnum.embedding
      });
    nodeDispatchUsages.push({
      totalPoints: embeddingTotalPoints,
      moduleName: node.name,
      model: embeddingModelName,
      inputTokens: embeddingTokens
    });
    // Rerank
    const { totalPoints: reRankTotalPoints, modelName: reRankModelName } = formatModelChars2Points({
      model: rerankModelData?.model,
      inputTokens: reRankInputTokens,
      modelType: ModelTypeEnum.rerank
    });
    if (usingReRank) {
      nodeDispatchUsages.push({
        totalPoints: reRankTotalPoints,
        moduleName: node.name,
        model: reRankModelName,
        inputTokens: reRankInputTokens
      });
    }
    // Query extension
    (() => {
      if (queryExtensionResult) {
        const { totalPoints, modelName } = formatModelChars2Points({
          model: queryExtensionResult.model,
          inputTokens: queryExtensionResult.inputTokens,
          outputTokens: queryExtensionResult.outputTokens,
          modelType: ModelTypeEnum.llm
        });
        nodeDispatchUsages.push({
          totalPoints,
          moduleName: i18nT('common:core.module.template.Query extension'),
          model: modelName,
          inputTokens: queryExtensionResult.inputTokens,
          outputTokens: queryExtensionResult.outputTokens
        });
        return {
          totalPoints
        };
      }
      return {
        totalPoints: 0
      };
    })();
    // Deep search
    (() => {
      if (deepSearchResult) {
        const { totalPoints, modelName } = formatModelChars2Points({
          model: deepSearchResult.model,
          inputTokens: deepSearchResult.inputTokens,
          outputTokens: deepSearchResult.outputTokens,
          modelType: ModelTypeEnum.llm
        });
        nodeDispatchUsages.push({
          totalPoints,
          moduleName: i18nT('common:deep_rag_search'),
          model: modelName,
          inputTokens: deepSearchResult.inputTokens,
          outputTokens: deepSearchResult.outputTokens
        });
        return {
          totalPoints
        };
      }
      return {
        totalPoints: 0
      };
    })();

    const totalPoints = nodeDispatchUsages.reduce((acc, item) => acc + item.totalPoints, 0);

    const responseData: DispatchNodeResponseType & { totalPoints: number } = {
      totalPoints,
      query: userChatInput,
      embeddingModel: vectorModel.name,
      embeddingTokens,
      similarity: usingSimilarityFilter ? similarity : undefined,
      limit,
      searchMode,
      embeddingWeight: searchMode === DatasetSearchModeEnum.mixedRecall ? embeddingWeight : undefined,
      // Rerank
      ...(searchUsingReRank && {
        rerankModel: rerankModelData?.name,
        rerankWeight: rerankWeight,
        reRankInputTokens
      }),
      searchUsingReRank,
      // Results
      quoteList: searchRes,
      queryExtensionResult,
      deepSearchResult
    };

    return {
      data: {
        quoteQA: searchRes
      },
      [DispatchNodeResponseKeyEnum.nodeResponse]: responseData,
      nodeDispatchUsages,
      [DispatchNodeResponseKeyEnum.toolResponses]: {
prompt: getDatasetSearchToolResponsePrompt(),
cites: searchRes.map((item) => ({
id: item.id,
sourceName: item.sourceName,
updateTime: item.updateTime,
content: `${item.q}\n${item.a}`.trim()
}))
} }
}; // Query extension
(() => {
if (queryExtensionResult) {
const { totalPoints, modelName } = formatModelChars2Points({
model: queryExtensionResult.model,
inputTokens: queryExtensionResult.inputTokens,
outputTokens: queryExtensionResult.outputTokens,
modelType: ModelTypeEnum.llm
});
nodeDispatchUsages.push({
totalPoints,
moduleName: i18nT('common:core.module.template.Query extension'),
model: modelName,
inputTokens: queryExtensionResult.inputTokens,
outputTokens: queryExtensionResult.outputTokens
});
return {
totalPoints
};
}
return {
totalPoints: 0
};
})();
// Deep search
(() => {
if (deepSearchResult) {
const { totalPoints, modelName } = formatModelChars2Points({
model: deepSearchResult.model,
inputTokens: deepSearchResult.inputTokens,
outputTokens: deepSearchResult.outputTokens,
modelType: ModelTypeEnum.llm
});
nodeDispatchUsages.push({
totalPoints,
moduleName: i18nT('common:deep_rag_search'),
model: modelName,
inputTokens: deepSearchResult.inputTokens,
outputTokens: deepSearchResult.outputTokens
});
return {
totalPoints
};
}
return {
totalPoints: 0
};
})();
const totalPoints = nodeDispatchUsages.reduce((acc, item) => acc + item.totalPoints, 0);
const responseData: DispatchNodeResponseType & { totalPoints: number } = {
totalPoints,
query: userChatInput,
embeddingModel: vectorModel.name,
embeddingTokens,
similarity: usingSimilarityFilter ? similarity : undefined,
limit,
searchMode,
embeddingWeight:
searchMode === DatasetSearchModeEnum.mixedRecall ? embeddingWeight : undefined,
// Rerank
...(searchUsingReRank && {
rerankModel: rerankModelData?.name,
rerankWeight: rerankWeight,
reRankInputTokens
}),
searchUsingReRank,
// Results
quoteList: searchRes,
queryExtensionResult,
deepSearchResult
};
return {
data: {
quoteQA: searchRes
},
[DispatchNodeResponseKeyEnum.nodeResponse]: responseData,
nodeDispatchUsages,
[DispatchNodeResponseKeyEnum.toolResponses]: {
prompt: getDatasetSearchToolResponsePrompt(),
cites: searchRes.map((item) => ({
id: item.id,
sourceName: item.sourceName,
updateTime: item.updateTime,
content: `${item.q}\n${item.a}`.trim()
}))
}
};
} catch (error) {
return getNodeErrResponse({ error });
}
} }
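The `try { … } catch (error) { return getNodeErrResponse({ error }) }` wrapper above is the pattern v4.11 applies to every node dispatcher: failures become ordinary results instead of rejected promises, so the engine can route them to a node's catch branch. A minimal sketch of that contract (the helper's exact return shape here is an assumption for illustration, not FastGPT's real type):

```typescript
type NodeResult = {
  data?: Record<string, unknown>;
  error?: { errorText: string };
};

// Assumed shape of getNodeErrResponse: fold any thrown value into a plain result object.
const getNodeErrResponse = ({ error }: { error: unknown }): NodeResult => ({
  error: { errorText: error instanceof Error ? error.message : String(error) }
});

// Every dispatcher call site reduces to: run the node, never let the rejection propagate.
async function dispatchNode(run: () => Promise<NodeResult>): Promise<NodeResult> {
  try {
    return await run();
  } catch (error) {
    return getNodeErrResponse({ error });
  }
}
```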

View File

@@ -49,7 +49,7 @@ import { dispatchRunTools } from './ai/agent/index';
 import { dispatchStopToolCall } from './ai/agent/stopTool';
 import { dispatchToolParams } from './ai/agent/toolParams';
 import { dispatchChatCompletion } from './ai/chat';
-import { dispatchRunCode } from './code/run';
+import { dispatchCodeSandbox } from './tools/codeSandbox';
 import { dispatchDatasetConcat } from './dataset/concat';
 import { dispatchDatasetSearch } from './dataset/search';
 import { dispatchSystemConfig } from './init/systemConfig';
@@ -60,10 +60,10 @@ import { dispatchLoop } from './loop/runLoop';
 import { dispatchLoopEnd } from './loop/runLoopEnd';
 import { dispatchLoopStart } from './loop/runLoopStart';
 import { dispatchRunPlugin } from './plugin/run';
-import { dispatchRunAppNode } from './plugin/runApp';
+import { dispatchRunAppNode } from './child/runApp';
 import { dispatchPluginInput } from './plugin/runInput';
 import { dispatchPluginOutput } from './plugin/runOutput';
-import { dispatchRunTool } from './plugin/runTool';
+import { dispatchRunTool } from './child/runTool';
 import { dispatchAnswer } from './tools/answer';
 import { dispatchCustomFeedback } from './tools/customFeedback';
 import { dispatchHttp468Request } from './tools/http468';
@@ -74,7 +74,8 @@ import { dispatchLafRequest } from './tools/runLaf';
 import { dispatchUpdateVariable } from './tools/runUpdateVar';
 import { dispatchTextEditor } from './tools/textEditor';
 import type { DispatchFlowResponse } from './type';
-import { formatHttpError, removeSystemVariable, rewriteRuntimeWorkFlow } from './utils';
+import { removeSystemVariable, rewriteRuntimeWorkFlow } from './utils';
+import { getHandleId } from '@fastgpt/global/core/workflow/utils';
 const callbackMap: Record<FlowNodeTypeEnum, Function> = {
   [FlowNodeTypeEnum.workflowStart]: dispatchWorkflowStart,
@@ -96,7 +97,7 @@ const callbackMap: Record<FlowNodeTypeEnum, Function> = {
   [FlowNodeTypeEnum.lafModule]: dispatchLafRequest,
   [FlowNodeTypeEnum.ifElseNode]: dispatchIfElse,
   [FlowNodeTypeEnum.variableUpdate]: dispatchUpdateVariable,
-  [FlowNodeTypeEnum.code]: dispatchRunCode,
+  [FlowNodeTypeEnum.code]: dispatchCodeSandbox,
   [FlowNodeTypeEnum.textEditor]: dispatchTextEditor,
   [FlowNodeTypeEnum.customFeedback]: dispatchCustomFeedback,
   [FlowNodeTypeEnum.readFiles]: dispatchReadFiles,
@@ -123,6 +124,14 @@ type Props = ChatDispatchProps & {
   runtimeNodes: RuntimeNodeItemType[];
   runtimeEdges: RuntimeEdgeItemType[];
 };
+type NodeResponseType = DispatchNodeResultType<{
+  [NodeOutputKeyEnum.answerText]?: string;
+  [NodeOutputKeyEnum.reasoningText]?: string;
+  [key: string]: any;
+}>;
+type NodeResponseCompleteType = Omit<NodeResponseType, 'responseData'> & {
+  [DispatchNodeResponseKeyEnum.nodeResponse]?: ChatHistoryItemResType;
+};

 /* running */
 export async function dispatchWorkFlow(data: Props): Promise<DispatchFlowResponse> {
@@ -229,8 +238,7 @@ export async function dispatchWorkFlow(data: Props): Promise<DispatchFlowRespons
   function pushStore(
     { inputs = [] }: RuntimeNodeItemType,
     {
-      answerText = '',
-      reasoningText,
+      data: { answerText = '', reasoningText } = {},
       responseData,
       nodeDispatchUsages,
       toolResponses,
@@ -238,14 +246,7 @@ export async function dispatchWorkFlow(data: Props): Promise<DispatchFlowRespons
       rewriteHistories,
       runTimes = 1,
       system_memories: newMemories
-    }: Omit<
-      DispatchNodeResultType<{
-        [NodeOutputKeyEnum.answerText]?: string;
-        [NodeOutputKeyEnum.reasoningText]?: string;
-        [DispatchNodeResponseKeyEnum.nodeResponse]?: ChatHistoryItemResType;
-      }>,
-      'nodeResponse'
-    >
+    }: NodeResponseCompleteType
   ) {
     // Add run times
     workflowRunTimes += runTimes;
@@ -316,22 +317,27 @@ export async function dispatchWorkFlow(data: Props): Promise<DispatchFlowRespons
   /* Pass the output of the node, to get next nodes and update edge status */
   function nodeOutput(
     node: RuntimeNodeItemType,
-    result: Record<string, any> = {}
+    result: NodeResponseCompleteType
   ): {
     nextStepActiveNodes: RuntimeNodeItemType[];
     nextStepSkipNodes: RuntimeNodeItemType[];
   } {
     pushStore(node, result);
+    const concatData: Record<string, any> = {
+      ...(result.data ?? {}),
+      ...(result.error ?? {})
+    };
     // Assign the output value to the next node
     node.outputs.forEach((outputItem) => {
-      if (result[outputItem.key] === undefined) return;
+      if (concatData[outputItem.key] === undefined) return;
       /* update output value */
-      outputItem.value = result[outputItem.key];
+      outputItem.value = concatData[outputItem.key];
     });
     // Get next source edges and update status
-    const skipHandleId = (result[DispatchNodeResponseKeyEnum.skipHandleId] || []) as string[];
+    const skipHandleId = result[DispatchNodeResponseKeyEnum.skipHandleId] || [];
     const targetEdges = filterWorkflowEdges(runtimeEdges).filter(
       (item) => item.source === node.nodeId
     );
@@ -591,7 +597,7 @@ export async function dispatchWorkFlow(data: Props): Promise<DispatchFlowRespons
   async function nodeRunWithActive(node: RuntimeNodeItemType): Promise<{
     node: RuntimeNodeItemType;
     runStatus: 'run';
-    result: Record<string, any>;
+    result: NodeResponseCompleteType;
   }> {
     // push run status messages
     if (node.showStatus && !props.isToolCall) {
@@ -625,23 +631,66 @@ export async function dispatchWorkFlow(data: Props): Promise<DispatchFlowRespons
     };

     // run module
-    const dispatchRes: Record<string, any> = await (async () => {
+    const dispatchRes: NodeResponseType = await (async () => {
       if (callbackMap[node.flowNodeType]) {
+        const targetEdges = runtimeEdges.filter((item) => item.source === node.nodeId);
+
         try {
-          return await callbackMap[node.flowNodeType](dispatchData);
+          const result = (await callbackMap[node.flowNodeType](dispatchData)) as NodeResponseType;
+          const errorHandleId = getHandleId(node.nodeId, 'source_catch', 'right');
+
+          if (!result.error) {
+            const skipHandleId =
+              targetEdges.find((item) => item.sourceHandle === errorHandleId)?.sourceHandle || '';
+            return {
+              ...result,
+              [DispatchNodeResponseKeyEnum.skipHandleId]: (result[
+                DispatchNodeResponseKeyEnum.skipHandleId
+              ]
+                ? [...result[DispatchNodeResponseKeyEnum.skipHandleId], skipHandleId]
+                : [skipHandleId]
+              ).filter(Boolean)
+            };
+          }
+
+          // Run error and not catch error, skip all edges
+          if (!node.catchError) {
+            return {
+              ...result,
+              [DispatchNodeResponseKeyEnum.skipHandleId]: targetEdges.map(
+                (item) => item.sourceHandle
+              )
+            };
+          }
+
+          // Catch error
+          const skipHandleIds = targetEdges
+            .filter((item) => {
+              if (node.catchError) {
+                return item.sourceHandle !== errorHandleId;
+              }
+              return true;
+            })
+            .map((item) => item.sourceHandle);
+          return {
+            ...result,
+            [DispatchNodeResponseKeyEnum.skipHandleId]: result[
+              DispatchNodeResponseKeyEnum.skipHandleId
+            ]
+              ? [...result[DispatchNodeResponseKeyEnum.skipHandleId], ...skipHandleIds].filter(
+                  Boolean
+                )
+              : skipHandleIds
+          };
         } catch (error) {
-          // Get source handles of outgoing edges
-          const targetEdges = runtimeEdges.filter((item) => item.source === node.nodeId);
-          const skipHandleIds = targetEdges.map((item) => item.sourceHandle);
-          toolRunResponse = getErrText(error);
           // Skip all edges and return error
           return {
             [DispatchNodeResponseKeyEnum.nodeResponse]: {
-              error: formatHttpError(error)
+              error: getErrText(error)
             },
-            [DispatchNodeResponseKeyEnum.skipHandleId]: skipHandleIds
+            [DispatchNodeResponseKeyEnum.skipHandleId]: targetEdges.map((item) => item.sourceHandle)
           };
         }
       }
@@ -649,15 +698,16 @@ export async function dispatchWorkFlow(data: Props): Promise<DispatchFlowRespons
     })();

     // format response data. Add modulename and module type
-    const formatResponseData: ChatHistoryItemResType = (() => {
+    const formatResponseData: NodeResponseCompleteType['responseData'] = (() => {
       if (!dispatchRes[DispatchNodeResponseKeyEnum.nodeResponse]) return undefined;
       return {
+        ...dispatchRes[DispatchNodeResponseKeyEnum.nodeResponse],
         id: getNanoid(),
         nodeId: node.nodeId,
         moduleName: node.name,
         moduleType: node.flowNodeType,
-        runningTime: +((Date.now() - startTime) / 1000).toFixed(2),
-        ...dispatchRes[DispatchNodeResponseKeyEnum.nodeResponse]
+        runningTime: +((Date.now() - startTime) / 1000).toFixed(2)
       };
     })();
@@ -675,11 +725,13 @@ export async function dispatchWorkFlow(data: Props): Promise<DispatchFlowRespons
     }

     // Add output default value
-    node.outputs.forEach((item) => {
-      if (!item.required) return;
-      if (dispatchRes[item.key] !== undefined) return;
-      dispatchRes[item.key] = valueTypeFormat(item.defaultValue, item.valueType);
-    });
+    if (dispatchRes.data) {
+      node.outputs.forEach((item) => {
+        if (!item.required) return;
+        if (dispatchRes.data?.[item.key] !== undefined) return;
+        dispatchRes.data![item.key] = valueTypeFormat(item.defaultValue, item.valueType);
+      });
+    }

     // Update new variables
     if (dispatchRes[DispatchNodeResponseKeyEnum.newVariables]) {
@@ -691,7 +743,7 @@ export async function dispatchWorkFlow(data: Props): Promise<DispatchFlowRespons
     // Error
     if (dispatchRes?.responseData?.error) {
-      addLog.warn('workflow error', dispatchRes.responseData.error);
+      addLog.warn('workflow error', { error: dispatchRes.responseData.error });
     }

     return {
@@ -706,7 +758,7 @@ export async function dispatchWorkFlow(data: Props): Promise<DispatchFlowRespons
   async function nodeRunWithSkip(node: RuntimeNodeItemType): Promise<{
     node: RuntimeNodeItemType;
     runStatus: 'skip';
-    result: Record<string, any>;
+    result: NodeResponseCompleteType;
   }> {
     // Set target edges status to skipped
     const targetEdges = runtimeEdges.filter((item) => item.source === node.nodeId);

View File

@@ -34,8 +34,9 @@ export const dispatchWorkflowStart = (props: Record<string, any>): Response => {
   return {
     [DispatchNodeResponseKeyEnum.nodeResponse]: {},
-    [NodeInputKeyEnum.userChatInput]: text || userChatInput,
-    [NodeOutputKeyEnum.userFiles]: [...queryFiles, ...variablesFiles]
-    // [NodeInputKeyEnum.inputFiles]: files
+    data: {
+      [NodeInputKeyEnum.userChatInput]: text || userChatInput,
+      [NodeOutputKeyEnum.userFiles]: [...queryFiles, ...variablesFiles]
+    }
   };
 };

View File

@@ -6,10 +6,7 @@ import type {
   DispatchNodeResultType,
   ModuleDispatchProps
 } from '@fastgpt/global/core/workflow/runtime/type';
-import type {
-  UserInputFormItemType,
-  UserInputInteractive
-} from '@fastgpt/global/core/workflow/template/system/interactive/type';
+import type { UserInputFormItemType } from '@fastgpt/global/core/workflow/template/system/interactive/type';
 import { addLog } from '../../../../common/system/log';

 type Props = ModuleDispatchProps<{
@@ -17,8 +14,8 @@ type Props = ModuleDispatchProps<{
   [NodeInputKeyEnum.userInputForms]: UserInputFormItemType[];
 }>;
 type FormInputResponse = DispatchNodeResultType<{
-  [DispatchNodeResponseKeyEnum.interactive]?: UserInputInteractive;
   [NodeOutputKeyEnum.formInputResult]?: Record<string, any>;
+  [key: string]: any;
 }>;

 /*
@@ -60,9 +57,11 @@ export const dispatchFormInput = async (props: Props): Promise<FormInputResponse
   })();

   return {
+    data: {
+      ...userInputVal,
+      [NodeOutputKeyEnum.formInputResult]: userInputVal
+    },
     [DispatchNodeResponseKeyEnum.rewriteHistories]: histories.slice(0, -2), // Removes the current session record as the history of subsequent nodes
-    ...userInputVal,
-    [NodeOutputKeyEnum.formInputResult]: userInputVal,
     [DispatchNodeResponseKeyEnum.toolResponses]: userInputVal,
     [DispatchNodeResponseKeyEnum.nodeResponse]: {
       formInputResult: userInputVal

View File

@@ -6,10 +6,7 @@
 import type { NodeInputKeyEnum } from '@fastgpt/global/core/workflow/constants';
 import { NodeOutputKeyEnum } from '@fastgpt/global/core/workflow/constants';
 import { getHandleId } from '@fastgpt/global/core/workflow/utils';
-import type {
-  UserSelectInteractive,
-  UserSelectOptionItemType
-} from '@fastgpt/global/core/workflow/template/system/interactive/type';
+import type { UserSelectOptionItemType } from '@fastgpt/global/core/workflow/template/system/interactive/type';
 import { chatValue2RuntimePrompt } from '@fastgpt/global/core/chat/adapt';

 type Props = ModuleDispatchProps<{
@@ -17,8 +14,6 @@ type Props = ModuleDispatchProps<{
   [NodeInputKeyEnum.userSelectOptions]: UserSelectOptionItemType[];
 }>;
 type UserSelectResponse = DispatchNodeResultType<{
-  [NodeOutputKeyEnum.answerText]?: string;
-  [DispatchNodeResponseKeyEnum.interactive]?: UserSelectInteractive;
   [NodeOutputKeyEnum.selectResult]?: string;
 }>;
@@ -59,6 +54,9 @@ export const dispatchUserSelect = async (props: Props): Promise<UserSelectRespon
   }

   return {
+    data: {
+      [NodeOutputKeyEnum.selectResult]: userSelectedVal
+    },
     [DispatchNodeResponseKeyEnum.rewriteHistories]: histories.slice(0, -2), // Removes the current session record as the history of subsequent nodes
     [DispatchNodeResponseKeyEnum.skipHandleId]: userSelectOptions
       .filter((item) => item.value !== userSelectedVal)
@@ -66,7 +64,6 @@ export const dispatchUserSelect = async (props: Props): Promise<UserSelectRespon
     [DispatchNodeResponseKeyEnum.nodeResponse]: {
       userSelectResult: userSelectedVal
     },
-    [DispatchNodeResponseKeyEnum.toolResponses]: userSelectedVal,
-    [NodeOutputKeyEnum.selectResult]: userSelectedVal
+    [DispatchNodeResponseKeyEnum.toolResponses]: userSelectedVal
   };
 };

View File

@@ -11,10 +11,7 @@
   type ChatHistoryItemResType
 } from '@fastgpt/global/core/chat/type';
 import { cloneDeep } from 'lodash';
-import {
-  type LoopInteractive,
-  type WorkflowInteractiveResponseType
-} from '@fastgpt/global/core/workflow/template/system/interactive/type';
+import { type WorkflowInteractiveResponseType } from '@fastgpt/global/core/workflow/template/system/interactive/type';
 import { storeEdges2RuntimeEdges } from '@fastgpt/global/core/workflow/runtime/utils';

 type Props = ModuleDispatchProps<{
@@ -22,7 +19,6 @@ type Props = ModuleDispatchProps<{
   [NodeInputKeyEnum.childrenNodeIdList]: string[];
 }>;
 type Response = DispatchNodeResultType<{
-  [DispatchNodeResponseKeyEnum.interactive]?: LoopInteractive;
   [NodeOutputKeyEnum.loopArray]: Array<any>;
 }>;
@@ -133,6 +129,9 @@ export const dispatchLoop = async (props: Props): Promise<Response> => {
   }

   return {
+    data: {
+      [NodeOutputKeyEnum.loopArray]: outputValueArr
+    },
     [DispatchNodeResponseKeyEnum.interactive]: interactiveResponse
       ? {
           type: 'loopInteractive',
@@ -157,7 +156,6 @@ export const dispatchLoop = async (props: Props): Promise<Response> => {
         moduleName: name
       }
     ],
-    [NodeOutputKeyEnum.loopArray]: outputValueArr,
     [DispatchNodeResponseKeyEnum.newVariables]: newVariables
   };
 };

View File

@@ -18,10 +18,12 @@ type Response = DispatchNodeResultType<{
 export const dispatchLoopStart = async (props: Props): Promise<Response> => {
   const { params } = props;
   return {
+    data: {
+      [NodeOutputKeyEnum.loopStartInput]: params.loopStartInput,
+      [NodeOutputKeyEnum.loopStartIndex]: params.loopStartIndex
+    },
     [DispatchNodeResponseKeyEnum.nodeResponse]: {
       loopInputValue: params.loopStartInput
-    },
-    [NodeOutputKeyEnum.loopStartInput]: params.loopStartInput,
-    [NodeOutputKeyEnum.loopStartIndex]: params.loopStartIndex
+    }
   };
 };
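Across these dispatchers the recurring edit is the same migration: node outputs that used to sit at the top level of the result now live under `data`, and the dispatcher merges `data` with `error` before filling the node's output sockets. A reduced sketch of that merge (types simplified for illustration):

```typescript
type Output = { key: string; value?: unknown };
type Result = { data?: Record<string, unknown>; error?: Record<string, unknown> };

// Merge data and error outputs, then assign matching values to the output sockets.
function assignOutputs(outputs: Output[], result: Result): Output[] {
  const concatData = { ...(result.data ?? {}), ...(result.error ?? {}) };
  return outputs.map((o) =>
    concatData[o.key] === undefined ? o : { ...o, value: concatData[o.key] }
  );
}
```

This is why a node's catch branch can expose an `errorText` output alongside its normal outputs: both maps feed the same sockets.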

View File

@@ -13,19 +13,29 @@ import { type DispatchNodeResultType } from '@fastgpt/global/core/workflow/runti
 import { authPluginByTmbId } from '../../../../support/permission/app/auth';
 import { ReadPermissionVal } from '@fastgpt/global/support/permission/constant';
 import { computedPluginUsage } from '../../../app/plugin/utils';
-import { filterSystemVariables } from '../utils';
+import { filterSystemVariables, getNodeErrResponse } from '../utils';
 import { getPluginRunUserQuery } from '@fastgpt/global/core/workflow/utils';
 import type { NodeInputKeyEnum } from '@fastgpt/global/core/workflow/constants';
+import type { NodeOutputKeyEnum } from '@fastgpt/global/core/workflow/constants';
 import { getChildAppRuntimeById, splitCombinePluginId } from '../../../app/plugin/controller';
 import { dispatchWorkFlow } from '../index';
 import { getUserChatInfoAndAuthTeamPoints } from '../../../../support/permission/auth/team';
-import { dispatchRunTool } from './runTool';
+import { dispatchRunTool } from '../child/runTool';
+import type { PluginRuntimeType } from '@fastgpt/global/core/app/plugin/type';

 type RunPluginProps = ModuleDispatchProps<{
   [NodeInputKeyEnum.forbidStream]?: boolean;
   [key: string]: any;
 }>;
-type RunPluginResponse = DispatchNodeResultType<{}>;
+type RunPluginResponse = DispatchNodeResultType<
+  {
+    [key: string]: any;
+  },
+  {
+    [NodeOutputKeyEnum.errorText]?: string;
+  }
+>;

 export const dispatchRunPlugin = async (props: RunPluginProps): Promise<RunPluginResponse> => {
   const {
     node: { pluginId, version },
@@ -34,142 +44,145 @@ export const dispatchRunPlugin = async (props: RunPluginProps): Promise<RunPlugi
     params: { system_forbid_stream = false, ...data } // Plugin input
   } = props;
if (!pluginId) { if (!pluginId) {
return Promise.reject('pluginId can not find'); return getNodeErrResponse({ error: 'pluginId can not find' });
} }
// Adapt <= 4.10 system tool let plugin: PluginRuntimeType | undefined;
const { source, pluginId: formatPluginId } = splitCombinePluginId(pluginId);
if (source === PluginSourceEnum.systemTool) { try {
return dispatchRunTool({ // Adapt <= 4.10 system tool
...props, const { source, pluginId: formatPluginId } = splitCombinePluginId(pluginId);
node: { if (source === PluginSourceEnum.systemTool) {
...props.node, return await dispatchRunTool({
toolConfig: { ...props,
systemTool: { node: {
toolId: formatPluginId ...props.node,
toolConfig: {
systemTool: {
toolId: formatPluginId
}
} }
} }
} });
}
/*
1. Team app
2. Admin selected system tool
*/
const { files } = chatValue2RuntimePrompt(query);
// auth plugin
const pluginData = await authPluginByTmbId({
appId: pluginId,
tmbId: runningAppInfo.tmbId,
per: ReadPermissionVal
}); });
}
/* plugin = await getChildAppRuntimeById(pluginId, version);
1. Team app
2. Admin selected system tool
*/
const { files } = chatValue2RuntimePrompt(query);
// auth plugin const outputFilterMap =
const pluginData = await authPluginByTmbId({ plugin.nodes
appId: pluginId, .find((node) => node.flowNodeType === FlowNodeTypeEnum.pluginOutput)
tmbId: runningAppInfo.tmbId, ?.inputs.reduce<Record<string, boolean>>((acc, cur) => {
per: ReadPermissionVal acc[cur.key] = cur.isToolOutput === false ? false : true;
}); return acc;
}, {}) ?? {};
const plugin = await getChildAppRuntimeById(pluginId, version); const runtimeNodes = storeNodes2RuntimeNodes(
plugin.nodes,
const outputFilterMap = getWorkflowEntryNodeIds(plugin.nodes)
plugin.nodes ).map((node) => {
.find((node) => node.flowNodeType === FlowNodeTypeEnum.pluginOutput) // Update plugin input value
?.inputs.reduce<Record<string, boolean>>((acc, cur) => { if (node.flowNodeType === FlowNodeTypeEnum.pluginInput) {
acc[cur.key] = cur.isToolOutput === false ? false : true; return {
return acc; ...node,
}, {}) ?? {}; showStatus: false,
const runtimeNodes = storeNodes2RuntimeNodes( inputs: node.inputs.map((input) => ({
plugin.nodes, ...input,
getWorkflowEntryNodeIds(plugin.nodes) value: data[input.key] ?? input.value
).map((node) => { }))
// Update plugin input value };
if (node.flowNodeType === FlowNodeTypeEnum.pluginInput) { }
return { return {
...node, ...node,
showStatus: false, showStatus: false
inputs: node.inputs.map((input) => ({
...input,
value: data[input.key] ?? input.value
}))
}; };
}
return {
...node,
showStatus: false
};
});
const { externalProvider } = await getUserChatInfoAndAuthTeamPoints(runningAppInfo.tmbId);
const runtimeVariables = {
...filterSystemVariables(props.variables),
appId: String(plugin.id),
...(externalProvider ? externalProvider.externalWorkflowVariables : {})
};
const { flowResponses, flowUsages, assistantResponses, runTimes, system_memories } =
await dispatchWorkFlow({
...props,
// Rewrite stream mode
...(system_forbid_stream
? {
stream: false,
workflowStreamResponse: undefined
}
: {}),
runningAppInfo: {
id: String(plugin.id),
// 如果系统插件有 teamId 和 tmbId则使用系统插件的 teamId 和 tmbId管理员指定了插件作为系统插件
teamId: plugin.teamId || runningAppInfo.teamId,
tmbId: plugin.tmbId || runningAppInfo.tmbId,
isChildApp: true
},
variables: runtimeVariables,
query: getPluginRunUserQuery({
pluginInputs: getPluginInputsFromStoreNodes(plugin.nodes),
variables: runtimeVariables,
files
}).value,
chatConfig: {},
runtimeNodes,
runtimeEdges: storeEdges2RuntimeEdges(plugin.edges)
    });

  const output = flowResponses.find((item) => item.moduleType === FlowNodeTypeEnum.pluginOutput);
-  if (output) {
-    output.moduleLogo = plugin.avatar;
-  }

  const usagePoints = await computedPluginUsage({
    plugin,
    childrenUsage: flowUsages,
    error: !!output?.pluginOutput?.error
  });
return {
data: output ? output.pluginOutput : {},
// Nested runtime: when the child app runs with stream=false, nothing is actually output to the user, so there is no need to store it
assistantResponses: system_forbid_stream ? [] : assistantResponses,
system_memories,
// responseData, // debug
[DispatchNodeResponseKeyEnum.runTimes]: runTimes,
[DispatchNodeResponseKeyEnum.nodeResponse]: {
moduleLogo: plugin.avatar,
totalPoints: usagePoints,
pluginOutput: output?.pluginOutput,
pluginDetail: pluginData?.permission?.hasWritePer // Not system plugin
? flowResponses.filter((item) => {
const filterArr = [FlowNodeTypeEnum.pluginOutput];
return !filterArr.includes(item.moduleType as any);
})
: undefined
},
[DispatchNodeResponseKeyEnum.nodeDispatchUsages]: [
{
moduleName: plugin.name,
totalPoints: usagePoints
}
],
[DispatchNodeResponseKeyEnum.toolResponses]: output?.pluginOutput
? Object.keys(output.pluginOutput)
.filter((key) => outputFilterMap[key])
.reduce<Record<string, any>>((acc, key) => {
acc[key] = output.pluginOutput![key];
return acc;
}, {})
: null
};
} catch (error) {
return getNodeErrResponse({ error, customNodeResponse: { moduleLogo: plugin?.avatar } });
}
}; };


@@ -1,203 +0,0 @@
import type { ChatItemType } from '@fastgpt/global/core/chat/type.d';
import type { ModuleDispatchProps } from '@fastgpt/global/core/workflow/runtime/type';
import { dispatchWorkFlow } from '../index';
import { ChatRoleEnum } from '@fastgpt/global/core/chat/constants';
import { SseResponseEventEnum } from '@fastgpt/global/core/workflow/runtime/constants';
import {
getWorkflowEntryNodeIds,
storeEdges2RuntimeEdges,
rewriteNodeOutputByHistories,
storeNodes2RuntimeNodes,
textAdaptGptResponse
} from '@fastgpt/global/core/workflow/runtime/utils';
import type { NodeInputKeyEnum, NodeOutputKeyEnum } from '@fastgpt/global/core/workflow/constants';
import { DispatchNodeResponseKeyEnum } from '@fastgpt/global/core/workflow/runtime/constants';
import { filterSystemVariables, getHistories } from '../utils';
import { chatValue2RuntimePrompt, runtimePrompt2ChatsValue } from '@fastgpt/global/core/chat/adapt';
import { type DispatchNodeResultType } from '@fastgpt/global/core/workflow/runtime/type';
import { authAppByTmbId } from '../../../../support/permission/app/auth';
import { ReadPermissionVal } from '@fastgpt/global/support/permission/constant';
import { getAppVersionById } from '../../../app/version/controller';
import { parseUrlToFileType } from '@fastgpt/global/common/file/tools';
import { type ChildrenInteractive } from '@fastgpt/global/core/workflow/template/system/interactive/type';
import { getUserChatInfoAndAuthTeamPoints } from '../../../../support/permission/auth/team';
type Props = ModuleDispatchProps<{
[NodeInputKeyEnum.userChatInput]: string;
[NodeInputKeyEnum.history]?: ChatItemType[] | number;
[NodeInputKeyEnum.fileUrlList]?: string[];
[NodeInputKeyEnum.forbidStream]?: boolean;
[NodeInputKeyEnum.fileUrlList]?: string[];
}>;
type Response = DispatchNodeResultType<{
[DispatchNodeResponseKeyEnum.interactive]?: ChildrenInteractive;
[NodeOutputKeyEnum.answerText]: string;
[NodeOutputKeyEnum.history]: ChatItemType[];
}>;
export const dispatchRunAppNode = async (props: Props): Promise<Response> => {
const {
runningAppInfo,
histories,
query,
lastInteractive,
node: { pluginId: appId, version },
workflowStreamResponse,
params,
variables
} = props;
const {
system_forbid_stream = false,
userChatInput,
history,
fileUrlList,
...childrenAppVariables
} = params;
const { files } = chatValue2RuntimePrompt(query);
const userInputFiles = (() => {
if (fileUrlList) {
return fileUrlList.map((url) => parseUrlToFileType(url)).filter(Boolean);
}
// Adapt version 4.8.13 upgrade
return files;
})();
if (!userChatInput && !userInputFiles) {
return Promise.reject('Input is empty');
}
if (!appId) {
return Promise.reject('pluginId is empty');
}
// Auth the app by tmbId(Not the user, but the workflow user)
const { app: appData } = await authAppByTmbId({
appId: appId,
tmbId: runningAppInfo.tmbId,
per: ReadPermissionVal
});
const { nodes, edges, chatConfig } = await getAppVersionById({
appId,
versionId: version,
app: appData
});
const childStreamResponse = system_forbid_stream ? false : props.stream;
// Auto line
if (childStreamResponse) {
workflowStreamResponse?.({
event: SseResponseEventEnum.answer,
data: textAdaptGptResponse({
text: '\n'
})
});
}
const chatHistories = getHistories(history, histories);
// Rewrite children app variables
const systemVariables = filterSystemVariables(variables);
const { externalProvider } = await getUserChatInfoAndAuthTeamPoints(appData.tmbId);
const childrenRunVariables = {
...systemVariables,
...childrenAppVariables,
histories: chatHistories,
appId: String(appData._id),
...(externalProvider ? externalProvider.externalWorkflowVariables : {})
};
const childrenInteractive =
lastInteractive?.type === 'childrenInteractive'
? lastInteractive.params.childrenResponse
: undefined;
const runtimeNodes = rewriteNodeOutputByHistories(
storeNodes2RuntimeNodes(
nodes,
getWorkflowEntryNodeIds(nodes, childrenInteractive || undefined)
),
childrenInteractive
);
const runtimeEdges = storeEdges2RuntimeEdges(edges, childrenInteractive);
const theQuery = childrenInteractive
? query
: runtimePrompt2ChatsValue({ files: userInputFiles, text: userChatInput });
const {
flowResponses,
flowUsages,
assistantResponses,
runTimes,
workflowInteractiveResponse,
system_memories
} = await dispatchWorkFlow({
...props,
lastInteractive: childrenInteractive,
// Rewrite stream mode
...(system_forbid_stream
? {
stream: false,
workflowStreamResponse: undefined
}
: {}),
runningAppInfo: {
id: String(appData._id),
teamId: String(appData.teamId),
tmbId: String(appData.tmbId),
isChildApp: true
},
runtimeNodes,
runtimeEdges,
histories: chatHistories,
variables: childrenRunVariables,
query: theQuery,
chatConfig
});
const completeMessages = chatHistories.concat([
{
obj: ChatRoleEnum.Human,
value: query
},
{
obj: ChatRoleEnum.AI,
value: assistantResponses
}
]);
const { text } = chatValue2RuntimePrompt(assistantResponses);
const usagePoints = flowUsages.reduce((sum, item) => sum + (item.totalPoints || 0), 0);
return {
system_memories,
[DispatchNodeResponseKeyEnum.interactive]: workflowInteractiveResponse
? {
type: 'childrenInteractive',
params: {
childrenResponse: workflowInteractiveResponse
}
}
: undefined,
assistantResponses: system_forbid_stream ? [] : assistantResponses,
[DispatchNodeResponseKeyEnum.runTimes]: runTimes,
[DispatchNodeResponseKeyEnum.nodeResponse]: {
moduleLogo: appData.avatar,
totalPoints: usagePoints,
query: userChatInput,
textOutput: text,
pluginDetail: appData.permission.hasWritePer ? flowResponses : undefined,
mergeSignId: props.node.nodeId
},
[DispatchNodeResponseKeyEnum.nodeDispatchUsages]: [
{
moduleName: appData.name,
totalPoints: usagePoints
}
],
[DispatchNodeResponseKeyEnum.toolResponses]: text,
answerText: text,
history: completeMessages
};
};


@@ -2,13 +2,22 @@ import { chatValue2RuntimePrompt } from '@fastgpt/global/core/chat/adapt';
 import { ChatFileTypeEnum } from '@fastgpt/global/core/chat/constants';
 import { NodeOutputKeyEnum } from '@fastgpt/global/core/workflow/constants';
 import { DispatchNodeResponseKeyEnum } from '@fastgpt/global/core/workflow/runtime/constants';
-import type { ModuleDispatchProps } from '@fastgpt/global/core/workflow/runtime/type';
+import type {
+  DispatchNodeResultType,
+  ModuleDispatchProps
+} from '@fastgpt/global/core/workflow/runtime/type';

 export type PluginInputProps = ModuleDispatchProps<{
   [key: string]: any;
 }>;

+export type PluginInputResponse = DispatchNodeResultType<{
+  [NodeOutputKeyEnum.userFiles]?: string[];
+  [key: string]: any;
+}>;
+
-export const dispatchPluginInput = (props: PluginInputProps) => {
+export const dispatchPluginInput = async (
+  props: PluginInputProps
+): Promise<PluginInputResponse> => {
   const { params, query } = props;
   const { files } = chatValue2RuntimePrompt(query);
@@ -33,12 +42,14 @@ export const dispatchPluginInput = (props: PluginInputProps) => {
   }

   return {
-    ...params,
-    [DispatchNodeResponseKeyEnum.nodeResponse]: {},
-    [NodeOutputKeyEnum.userFiles]: files
-      .map((item) => {
-        return item?.url ?? '';
-      })
-      .filter(Boolean)
+    data: {
+      ...params,
+      [NodeOutputKeyEnum.userFiles]: files
+        .map((item) => {
+          return item?.url ?? '';
+        })
+        .filter(Boolean)
+    },
+    [DispatchNodeResponseKeyEnum.nodeResponse]: {}
   };
 };
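The hunk above shows the pattern repeated across this PR: a node's user-visible outputs move under a `data` key while `nodeResponse` stays at the result root. A minimal standalone sketch of the new shape (the string keys are simplified stand-ins, not the real FastGPT enum imports):

```typescript
// Simplified stand-ins for NodeOutputKeyEnum / DispatchNodeResponseKeyEnum (assumed values)
const USER_FILES_KEY = 'system_userFiles';
const NODE_RESPONSE_KEY = 'responseData';

type PluginInputResultSketch = {
  data: Record<string, any>; // node outputs now live under `data`
  [key: string]: any;
};

// Mirrors the new dispatchPluginInput return: params plus non-empty file URLs under `data`
const dispatchPluginInputSketch = (
  params: Record<string, any>,
  files: Array<{ url?: string } | undefined>
): PluginInputResultSketch => ({
  data: {
    ...params,
    [USER_FILES_KEY]: files.map((item) => item?.url ?? '').filter(Boolean)
  },
  [NODE_RESPONSE_KEY]: {}
});
```

Workflow code that previously read outputs off the result root now has to look under `data` instead.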


@@ -30,7 +30,9 @@ export const dispatchAnswer = (props: Record<string, any>): AnswerResponse => {
   });

   return {
-    [NodeOutputKeyEnum.answerText]: responseText,
+    data: {
+      [NodeOutputKeyEnum.answerText]: responseText
+    },
     [DispatchNodeResponseKeyEnum.nodeResponse]: {
       textOutput: formatText
     }


@@ -2,20 +2,26 @@ import type { ModuleDispatchProps } from '@fastgpt/global/core/workflow/runtime/
 import { NodeInputKeyEnum, NodeOutputKeyEnum } from '@fastgpt/global/core/workflow/constants';
 import { type DispatchNodeResultType } from '@fastgpt/global/core/workflow/runtime/type';
 import axios from 'axios';
-import { formatHttpError } from '../utils';
 import { DispatchNodeResponseKeyEnum } from '@fastgpt/global/core/workflow/runtime/constants';
 import { SandboxCodeTypeEnum } from '@fastgpt/global/core/workflow/template/system/sandbox/constants';
+import { getErrText } from '@fastgpt/global/common/error/utils';
+import { getNodeErrResponse } from '../utils';

 type RunCodeType = ModuleDispatchProps<{
   [NodeInputKeyEnum.codeType]: string;
   [NodeInputKeyEnum.code]: string;
   [NodeInputKeyEnum.addInputParam]: Record<string, any>;
 }>;
-type RunCodeResponse = DispatchNodeResultType<{
-  [NodeOutputKeyEnum.error]?: any;
-  [NodeOutputKeyEnum.rawResponse]?: Record<string, any>;
-  [key: string]: any;
-}>;
+type RunCodeResponse = DispatchNodeResultType<
+  {
+    [NodeOutputKeyEnum.error]?: any; // @deprecated
+    [NodeOutputKeyEnum.rawResponse]?: Record<string, any>;
+    [key: string]: any;
+  },
+  {
+    [NodeOutputKeyEnum.error]: string;
+  }
+>;

 function getURL(codeType: string): string {
   if (codeType == SandboxCodeTypeEnum.py) {
@@ -25,14 +31,21 @@ function getURL(codeType: string): string {
   }
 }

-export const dispatchRunCode = async (props: RunCodeType): Promise<RunCodeResponse> => {
+export const dispatchCodeSandbox = async (props: RunCodeType): Promise<RunCodeResponse> => {
   const {
+    node: { catchError },
     params: { codeType, code, [NodeInputKeyEnum.addInputParam]: customVariables }
   } = props;

   if (!process.env.SANDBOX_URL) {
     return {
-      [NodeOutputKeyEnum.error]: 'Can not find SANDBOX_URL in env'
+      error: {
+        [NodeOutputKeyEnum.error]: 'Can not find SANDBOX_URL in env'
+      },
+      [DispatchNodeResponseKeyEnum.nodeResponse]: {
+        errorText: 'Can not find SANDBOX_URL in env',
+        customInputs: customVariables
+      }
     };
   }
@@ -51,24 +64,43 @@ export const dispatchRunCode = async (props: RunCodeType): Promise<RunCodeRespon
   if (runResult.success) {
     return {
-      [NodeOutputKeyEnum.rawResponse]: runResult.data.codeReturn,
+      data: {
+        [NodeOutputKeyEnum.rawResponse]: runResult.data.codeReturn,
+        ...runResult.data.codeReturn
+      },
       [DispatchNodeResponseKeyEnum.nodeResponse]: {
         customInputs: customVariables,
         customOutputs: runResult.data.codeReturn,
         codeLog: runResult.data.log
       },
-      [DispatchNodeResponseKeyEnum.toolResponses]: runResult.data.codeReturn,
-      ...runResult.data.codeReturn
+      [DispatchNodeResponseKeyEnum.toolResponses]: runResult.data.codeReturn
     };
   } else {
-    return Promise.reject('Run code failed');
+    throw new Error('Run code failed');
   }
 } catch (error) {
+  const text = getErrText(error);
+
+  // @adapt
+  if (catchError === undefined) {
+    return {
+      data: {
+        [NodeOutputKeyEnum.error]: { message: text }
+      },
+      [DispatchNodeResponseKeyEnum.nodeResponse]: {
+        customInputs: customVariables,
+        errorText: text
+      }
+    };
+  }
   return {
-    [NodeOutputKeyEnum.error]: formatHttpError(error),
+    error: {
+      [NodeOutputKeyEnum.error]: text
+    },
     [DispatchNodeResponseKeyEnum.nodeResponse]: {
       customInputs: customVariables,
-      error: formatHttpError(error)
+      errorText: text
     }
   };
 }
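The `catchError === undefined` branch above is a compatibility shim: nodes saved before the catch-error toggle existed keep receiving failures as an ordinary `error` output inside `data`, while nodes with the toggle set get the dedicated `error` channel that drives the new catch branch. A rough standalone sketch of that routing (the names are illustrative, not the real module):

```typescript
type NodeErrorResultSketch =
  | { data: { error: { message: string } } } // legacy: error exposed as an ordinary output
  | { error: { system_error: string } }; // new: dedicated channel consumed by the catch branch

// Assumed simplification of the adaptation logic in this PR
const routeNodeError = (catchError: boolean | undefined, text: string): NodeErrorResultSketch => {
  if (catchError === undefined) {
    // Node predates the catch-error feature: keep the old output shape
    return { data: { error: { message: text } } };
  }
  return { error: { system_error: text } };
};
```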


@@ -47,10 +47,14 @@ type HttpRequestProps = ModuleDispatchProps<{
   [NodeInputKeyEnum.httpTimeout]?: number;
   [key: string]: any;
 }>;
-type HttpResponse = DispatchNodeResultType<{
-  [NodeOutputKeyEnum.error]?: object;
-  [key: string]: any;
-}>;
+type HttpResponse = DispatchNodeResultType<
+  {
+    [key: string]: any;
+  },
+  {
+    [NodeOutputKeyEnum.error]?: string;
+  }
+>;

 const UNDEFINED_SIGN = 'UNDEFINED_SIGN';
@@ -349,7 +353,10 @@ export const dispatchHttp468Request = async (props: HttpRequestProps): Promise<H
   }

   return {
-    ...results,
+    data: {
+      [NodeOutputKeyEnum.httpRawResponse]: rawResponse,
+      ...results
+    },
     [DispatchNodeResponseKeyEnum.nodeResponse]: {
       totalPoints: 0,
       params: Object.keys(params).length > 0 ? params : undefined,
@@ -358,21 +365,36 @@ export const dispatchHttp468Request = async (props: HttpRequestProps): Promise<H
       httpResult: rawResponse
     },
     [DispatchNodeResponseKeyEnum.toolResponses]:
-      Object.keys(results).length > 0 ? results : rawResponse,
-    [NodeOutputKeyEnum.httpRawResponse]: rawResponse
+      Object.keys(results).length > 0 ? results : rawResponse
   };
 } catch (error) {
   addLog.error('Http request error', error);

+  // @adapt
+  if (node.catchError === undefined) {
+    return {
+      data: {
+        [NodeOutputKeyEnum.error]: getErrText(error)
+      },
+      [DispatchNodeResponseKeyEnum.nodeResponse]: {
+        params: Object.keys(params).length > 0 ? params : undefined,
+        body: Object.keys(formattedRequestBody).length > 0 ? formattedRequestBody : undefined,
+        headers: Object.keys(publicHeaders).length > 0 ? publicHeaders : undefined,
+        httpResult: { error: formatHttpError(error) }
+      }
+    };
+  }
   return {
-    [NodeOutputKeyEnum.error]: formatHttpError(error),
+    error: {
+      [NodeOutputKeyEnum.error]: getErrText(error)
+    },
     [DispatchNodeResponseKeyEnum.nodeResponse]: {
       params: Object.keys(params).length > 0 ? params : undefined,
       body: Object.keys(formattedRequestBody).length > 0 ? formattedRequestBody : undefined,
       headers: Object.keys(publicHeaders).length > 0 ? publicHeaders : undefined,
       httpResult: { error: formatHttpError(error) }
-    },
-    [NodeOutputKeyEnum.httpRawResponse]: getErrText(error)
+    }
   };
 }
};


@@ -59,6 +59,9 @@ export const dispatchQueryExtension = async ({
   });

   return {
+    data: {
+      [NodeOutputKeyEnum.text]: JSON.stringify(filterSameQueries)
+    },
     [DispatchNodeResponseKeyEnum.nodeResponse]: {
       totalPoints,
       model: modelName,
@@ -75,7 +78,6 @@ export const dispatchQueryExtension = async ({
         inputTokens,
         outputTokens
       }
-    ],
-    [NodeOutputKeyEnum.text]: JSON.stringify(filterSameQueries)
+    ]
   };
 };


@@ -14,6 +14,7 @@ import { parseFileExtensionFromUrl } from '@fastgpt/global/common/string/tools';
 import { addLog } from '../../../../common/system/log';
 import { addRawTextBuffer, getRawTextBuffer } from '../../../../common/buffer/rawText/controller';
 import { addMinutes } from 'date-fns';
+import { getNodeErrResponse } from '../utils';

 type Props = ModuleDispatchProps<{
   [NodeInputKeyEnum.fileUrlList]: string[];
@@ -58,31 +59,37 @@ export const dispatchReadFiles = async (props: Props): Promise<Response> => {
   // Get files from histories
   const filesFromHistories = version !== '489' ? [] : getHistoryFileLinks(histories);

-  const { text, readFilesResult } = await getFileContentFromLinks({
-    // Concat fileUrlList and filesFromHistories; remove not supported files
-    urls: [...fileUrlList, ...filesFromHistories],
-    requestOrigin,
-    maxFiles,
-    teamId,
-    tmbId,
-    customPdfParse
-  });
+  try {
+    const { text, readFilesResult } = await getFileContentFromLinks({
+      // Concat fileUrlList and filesFromHistories; remove not supported files
+      urls: [...fileUrlList, ...filesFromHistories],
+      requestOrigin,
+      maxFiles,
+      teamId,
+      tmbId,
+      customPdfParse
+    });

-  return {
-    [NodeOutputKeyEnum.text]: text,
-    [DispatchNodeResponseKeyEnum.nodeResponse]: {
-      readFiles: readFilesResult.map((item) => ({
-        name: item?.filename || '',
-        url: item?.url || ''
-      })),
-      readFilesResult: readFilesResult
-        .map((item) => item?.nodeResponsePreviewText ?? '')
-        .join('\n******\n')
-    },
-    [DispatchNodeResponseKeyEnum.toolResponses]: {
-      fileContent: text
-    }
-  };
+    return {
+      data: {
+        [NodeOutputKeyEnum.text]: text
+      },
+      [DispatchNodeResponseKeyEnum.nodeResponse]: {
+        readFiles: readFilesResult.map((item) => ({
+          name: item?.filename || '',
+          url: item?.url || ''
+        })),
+        readFilesResult: readFilesResult
+          .map((item) => item?.nodeResponsePreviewText ?? '')
+          .join('\n******\n')
+      },
+      [DispatchNodeResponseKeyEnum.toolResponses]: {
+        fileContent: text
+      }
+    };
+  } catch (error) {
+    return getNodeErrResponse({ error });
+  }
 };

 export const getHistoryFileLinks = (histories: ChatItemType[]) => {


@@ -157,7 +157,9 @@ export const dispatchIfElse = async (props: Props): Promise<Response> => {
   });

   return {
-    [NodeOutputKeyEnum.ifElseResult]: res,
+    data: {
+      [NodeOutputKeyEnum.ifElseResult]: res
+    },
     [DispatchNodeResponseKeyEnum.nodeResponse]: {
       totalPoints: 0,
       ifElseResult: res


@@ -6,16 +6,21 @@ import { valueTypeFormat } from '@fastgpt/global/core/workflow/runtime/utils';
 import { SERVICE_LOCAL_HOST } from '../../../../common/system/tools';
 import { addLog } from '../../../../common/system/log';
 import { type DispatchNodeResultType } from '@fastgpt/global/core/workflow/runtime/type';
+import { getErrText } from '@fastgpt/global/common/error/utils';

 type LafRequestProps = ModuleDispatchProps<{
   [NodeInputKeyEnum.httpReqUrl]: string;
   [NodeInputKeyEnum.addInputParam]: Record<string, any>;
   [key: string]: any;
 }>;
-type LafResponse = DispatchNodeResultType<{
-  [NodeOutputKeyEnum.failed]?: boolean;
-  [key: string]: any;
-}>;
+type LafResponse = DispatchNodeResultType<
+  {
+    [key: string]: any;
+  },
+  {
+    [NodeOutputKeyEnum.errorText]?: string;
+  }
+>;

 const UNDEFINED_SIGN = 'UNDEFINED_SIGN';
@@ -78,20 +83,24 @@ export const dispatchLafRequest = async (props: LafRequestProps): Promise<LafRes
   }

   return {
+    data: {
+      [NodeOutputKeyEnum.httpRawResponse]: rawResponse,
+      ...results
+    },
     assistantResponses: [],
     [DispatchNodeResponseKeyEnum.nodeResponse]: {
       totalPoints: 0,
       body: Object.keys(requestBody).length > 0 ? requestBody : undefined,
       httpResult: rawResponse
     },
-    [DispatchNodeResponseKeyEnum.toolResponses]: rawResponse,
-    [NodeOutputKeyEnum.httpRawResponse]: rawResponse,
-    ...results
+    [DispatchNodeResponseKeyEnum.toolResponses]: rawResponse
   };
 } catch (error) {
   addLog.error('Http request error', error);

   return {
-    [NodeOutputKeyEnum.failed]: true,
+    error: {
+      [NodeOutputKeyEnum.errorText]: getErrText(error)
+    },
     [DispatchNodeResponseKeyEnum.nodeResponse]: {
       totalPoints: 0,
       body: Object.keys(requestBody).length > 0 ? requestBody : undefined,


@@ -40,7 +40,9 @@ export const dispatchTextEditor = (props: Record<string, any>): Response => {
   });

   return {
-    [NodeOutputKeyEnum.text]: textResult,
+    data: {
+      [NodeOutputKeyEnum.text]: textResult
+    },
     [DispatchNodeResponseKeyEnum.nodeResponse]: {
       textOutput: textResult
     }


@@ -9,7 +9,10 @@ import {
 } from '@fastgpt/global/core/workflow/runtime/type';
 import { responseWrite } from '../../../common/response';
 import { type NextApiResponse } from 'next';
-import { SseResponseEventEnum } from '@fastgpt/global/core/workflow/runtime/constants';
+import {
+  DispatchNodeResponseKeyEnum,
+  SseResponseEventEnum
+} from '@fastgpt/global/core/workflow/runtime/constants';
 import { getNanoid } from '@fastgpt/global/common/string/tools';
 import { type SearchDataResponseItemType } from '@fastgpt/global/core/dataset/type';
 import { getMCPToolRuntimeNode } from '@fastgpt/global/core/app/mcpTools/utils';
@@ -206,3 +209,30 @@ export const rewriteRuntimeWorkFlow = (
     }
   }
 };
export const getNodeErrResponse = ({
error,
customErr,
customNodeResponse
}: {
error: any;
customErr?: Record<string, any>;
customNodeResponse?: Record<string, any>;
}) => {
const errorText = getErrText(error);
return {
error: {
[NodeOutputKeyEnum.errorText]: errorText,
...(typeof customErr === 'object' ? customErr : {})
},
[DispatchNodeResponseKeyEnum.nodeResponse]: {
errorText,
...(typeof customNodeResponse === 'object' ? customNodeResponse : {})
},
[DispatchNodeResponseKeyEnum.toolResponses]: {
error: errorText,
...(typeof customErr === 'object' ? customErr : {})
}
};
};
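The `getNodeErrResponse` helper above gives every node one uniform failure payload: an `error` output map, a `nodeResponse` entry for logging, and a `toolResponses` entry for tool calls. A self-contained approximation (local string keys and a simplified `getErrText` stand in for the real imports):

```typescript
// Local stand-ins (assumptions) for the real enum values and getErrText
const ERROR_TEXT_KEY = 'system_error';
const NODE_RESPONSE_KEY = 'responseData';
const TOOL_RESPONSES_KEY = 'toolResponses';

const getErrTextSketch = (error: any): string =>
  typeof error === 'string' ? error : (error?.message ?? 'unknown error');

const getNodeErrResponseSketch = ({
  error,
  customErr,
  customNodeResponse
}: {
  error: any;
  customErr?: Record<string, any>;
  customNodeResponse?: Record<string, any>;
}) => {
  const errorText = getErrTextSketch(error);

  return {
    error: { [ERROR_TEXT_KEY]: errorText, ...(customErr ?? {}) },
    [NODE_RESPONSE_KEY]: { errorText, ...(customNodeResponse ?? {}) },
    [TOOL_RESPONSES_KEY]: { error: errorText, ...(customErr ?? {}) }
  };
};
```

In the diffs above, `dispatchReadFiles` calls the real helper bare, while the plugin dispatcher passes `customNodeResponse` so the plugin avatar still shows on the error entry.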


@@ -3,7 +3,7 @@
   "version": "1.0.0",
   "type": "module",
   "dependencies": {
-    "@fastgpt-sdk/plugin": "^0.1.1",
+    "@fastgpt-sdk/plugin": "^0.1.2",
     "@fastgpt/global": "workspace:*",
     "@modelcontextprotocol/sdk": "^1.12.1",
     "@node-rs/jieba": "2.0.1",


@@ -0,0 +1,64 @@
import { parseHeaderCert } from '../controller';
import { authAppByTmbId } from '../app/auth';
import {
ManagePermissionVal,
ReadPermissionVal
} from '@fastgpt/global/support/permission/constant';
import type { EvaluationSchemaType } from '@fastgpt/global/core/app/evaluation/type';
import type { AuthModeType } from '../type';
import { MongoEvaluation } from '../../../core/app/evaluation/evalSchema';
export const authEval = async ({
evalId,
per = ReadPermissionVal,
...props
}: AuthModeType & {
evalId: string;
}): Promise<{
evaluation: EvaluationSchemaType;
tmbId: string;
teamId: string;
}> => {
const { teamId, tmbId, isRoot } = await parseHeaderCert(props);
const evaluation = await MongoEvaluation.findById(evalId, 'tmbId').lean();
if (!evaluation) {
return Promise.reject('Evaluation not found');
}
if (String(evaluation.tmbId) === tmbId) {
return {
teamId,
tmbId,
evaluation
};
}
// App read per
if (per === ReadPermissionVal) {
await authAppByTmbId({
tmbId,
appId: evaluation.appId,
per: ReadPermissionVal,
isRoot
});
return {
teamId,
tmbId,
evaluation
};
}
// Write per
await authAppByTmbId({
tmbId,
appId: evaluation.appId,
per: ManagePermissionVal,
isRoot
});
return {
teamId,
tmbId,
evaluation
};
};
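The access rule in `authEval` above reduces to three cases: the evaluation's creator always passes, read access falls back to read permission on the bound app, and anything stronger requires manage permission on the app. That branching can be sketched as a pure function (a hypothetical helper, not part of the PR):

```typescript
type PerLevelSketch = 'read' | 'manage';

// Returns the app permission that must still be checked, or null when the requester
// created the evaluation and no further check is needed (assumed simplification).
const requiredAppPer = (
  requesterTmbId: string,
  evaluationTmbId: string,
  per: PerLevelSketch
): PerLevelSketch | null => {
  if (requesterTmbId === evaluationTmbId) return null; // creator: always allowed
  return per === 'read' ? 'read' : 'manage'; // others: defer to the app's permission
};
```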


@@ -113,10 +113,6 @@ export const checkTeamDatasetLimit = async (teamId: string) => {
       return Promise.reject(SystemErrEnum.licenseDatasetAmountLimit);
     }
   }
-
-  // Open source check
-  if (!global.feConfigs.isPlus && datasetCount >= 30) {
-    return Promise.reject(SystemErrEnum.communityVersionNumLimit);
-  }
 };

 export const checkTeamDatasetSyncPermission = async (teamId: string) => {


@@ -235,3 +235,46 @@ export const pushLLMTrainingUsage = async ({
   return { totalPoints };
 };
export const createEvaluationUsage = async ({
teamId,
tmbId,
appName,
model,
session
}: {
teamId: string;
tmbId: string;
appName: string;
model: string;
session?: ClientSession;
}) => {
const [{ _id: usageId }] = await MongoUsage.create(
[
{
teamId,
tmbId,
appName,
source: UsageSourceEnum.evaluation,
totalPoints: 0,
list: [
{
moduleName: i18nT('account_usage:generate_answer'),
amount: 0,
count: 0
},
{
moduleName: i18nT('account_usage:answer_accuracy'),
amount: 0,
inputTokens: 0,
outputTokens: 0,
model
}
]
}
],
{ session, ordered: true }
);
return { usageId };
};
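`createEvaluationUsage` above seeds a usage document with two zeroed line items — answer generation and accuracy scoring — so later billing updates only have to increment them. A plain-object sketch of that seed (no Mongo involved; the i18n keys are copied from the diff above):

```typescript
// Hypothetical plain-object version of the seeded usage document
const buildEvaluationUsageSeed = (appName: string, model: string) => ({
  appName,
  source: 'evaluation',
  totalPoints: 0, // accumulated later; starts at zero
  list: [
    { moduleName: 'account_usage:generate_answer', amount: 0, count: 0 },
    {
      moduleName: 'account_usage:answer_accuracy',
      amount: 0,
      inputTokens: 0,
      outputTokens: 0,
      model
    }
  ]
});
```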


@@ -1,9 +1,13 @@
 export type ConcatBillQueueItemType = {
-  billId: string;
+  billId: string; // usageId
   listIndex?: number;
   totalPoints: number;
-  inputTokens: number;
-  outputTokens: number;
+
+  // Model usage
+  inputTokens?: number;
+  outputTokens?: number;
+
+  // Times
+  count?: number;
 };
 declare global {

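With `inputTokens`/`outputTokens` now optional on queue items (count-only entries just bump `count`), consumers of the concat-bill queue have to treat missing fields as zero when aggregating. A small sketch of that (a hypothetical helper, not in the PR):

```typescript
type ConcatBillQueueItemSketch = {
  billId: string; // usageId
  listIndex?: number;
  totalPoints: number;
  // Model usage
  inputTokens?: number;
  outputTokens?: number;
  // Times
  count?: number;
};

// Sum a batch of queue items, defaulting absent token/count fields to zero
const sumBillItems = (items: ConcatBillQueueItemSketch[]) =>
  items.reduce(
    (acc, item) => ({
      totalPoints: acc.totalPoints + item.totalPoints,
      inputTokens: acc.inputTokens + (item.inputTokens ?? 0),
      outputTokens: acc.outputTokens + (item.outputTokens ?? 0),
      count: acc.count + (item.count ?? 0)
    }),
    { totalPoints: 0, inputTokens: 0, outputTokens: 0, count: 0 }
  );
```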
packages/service/type/env.d.ts (new file)

@@ -0,0 +1,53 @@
declare global {
namespace NodeJS {
interface ProcessEnv {
LOG_DEPTH: string;
DB_MAX_LINK: string;
FILE_TOKEN_KEY: string;
AES256_SECRET_KEY: string;
ROOT_KEY: string;
OPENAI_BASE_URL: string;
CHAT_API_KEY: string;
AIPROXY_API_ENDPOINT: string;
AIPROXY_API_TOKEN: string;
MULTIPLE_DATA_TO_BASE64: string;
MONGODB_URI: string;
MONGODB_LOG_URI?: string;
PG_URL: string;
OCEANBASE_URL: string;
MILVUS_ADDRESS: string;
MILVUS_TOKEN: string;
SANDBOX_URL: string;
FE_DOMAIN: string;
FILE_DOMAIN: string;
LOG_LEVEL?: string;
STORE_LOG_LEVEL?: string;
USE_IP_LIMIT?: string;
WORKFLOW_MAX_RUN_TIMES?: string;
WORKFLOW_MAX_LOOP_TIMES?: string;
CHECK_INTERNAL_IP?: string;
ALLOWED_ORIGINS?: string;
SHOW_COUPON?: string;
CONFIG_JSON_PATH?: string;
      // Security config
      // Password login lock duration
      PASSWORD_LOGIN_LOCK_SECONDS?: string;
      PASSWORD_EXPIRED_MONTH?: string;
      MAX_LOGIN_SESSION?: string;
// Signoz
SIGNOZ_BASE_URL?: string;
SIGNOZ_SERVICE_NAME?: string;
CHAT_LOG_URL?: string;
CHAT_LOG_INTERVAL?: string;
CHAT_LOG_SOURCE_ID_PREFIX?: string;
NEXT_PUBLIC_BASE_URL: string;
}
}
}
export {};
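Typing `ProcessEnv` above only documents which variables exist; every value is still `string | undefined` at runtime, so numeric settings such as `WORKFLOW_MAX_RUN_TIMES` need parsing with a fallback at the call site. A sketch of that pattern (the helper name and default value are illustrative):

```typescript
// All process.env values are `string | undefined`; parse numbers with an explicit fallback.
const getNumberEnv = (value: string | undefined, fallback: number): number => {
  if (value === undefined || value === '') return fallback;
  const parsed = Number(value);
  return Number.isFinite(parsed) ? parsed : fallback;
};

// 500 is an illustrative default, not the project's actual value
const workflowMaxRunTimes = getNumberEnv(process.env.WORKFLOW_MAX_RUN_TIMES, 500);
```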


@ -20,11 +20,11 @@ export const iconPaths = {
'common/addLight': () => import('./icons/common/addLight.svg'), 'common/addLight': () => import('./icons/common/addLight.svg'),
'common/addUser': () => import('./icons/common/addUser.svg'), 'common/addUser': () => import('./icons/common/addUser.svg'),
'common/administrator': () => import('./icons/common/administrator.svg'), 'common/administrator': () => import('./icons/common/administrator.svg'),
'common/audit': () => import('./icons/common/audit.svg'),
'common/alipay': () => import('./icons/common/alipay.svg'), 'common/alipay': () => import('./icons/common/alipay.svg'),
'common/app': () => import('./icons/common/app.svg'), 'common/app': () => import('./icons/common/app.svg'),
'common/arrowLeft': () => import('./icons/common/arrowLeft.svg'), 'common/arrowLeft': () => import('./icons/common/arrowLeft.svg'),
'common/arrowRight': () => import('./icons/common/arrowRight.svg'), 'common/arrowRight': () => import('./icons/common/arrowRight.svg'),
'common/audit': () => import('./icons/common/audit.svg'),
'common/backFill': () => import('./icons/common/backFill.svg'), 'common/backFill': () => import('./icons/common/backFill.svg'),
'common/backLight': () => import('./icons/common/backLight.svg'), 'common/backLight': () => import('./icons/common/backLight.svg'),
'common/billing': () => import('./icons/common/billing.svg'), 'common/billing': () => import('./icons/common/billing.svg'),
@ -47,6 +47,7 @@ export const iconPaths = {
'common/editor/resizer': () => import('./icons/common/editor/resizer.svg'), 'common/editor/resizer': () => import('./icons/common/editor/resizer.svg'),
'common/ellipsis': () => import('./icons/common/ellipsis.svg'), 'common/ellipsis': () => import('./icons/common/ellipsis.svg'),
'common/enable': () => import('./icons/common/enable.svg'), 'common/enable': () => import('./icons/common/enable.svg'),
'common/error': () => import('./icons/common/error.svg'),
'common/errorFill': () => import('./icons/common/errorFill.svg'), 'common/errorFill': () => import('./icons/common/errorFill.svg'),
'common/file/move': () => import('./icons/common/file/move.svg'), 'common/file/move': () => import('./icons/common/file/move.svg'),
'common/fileNotFound': () => import('./icons/common/fileNotFound.svg'), 'common/fileNotFound': () => import('./icons/common/fileNotFound.svg'),
@@ -111,9 +112,9 @@ export const iconPaths = {
 'common/tickFill': () => import('./icons/common/tickFill.svg'),
 'common/toolkit': () => import('./icons/common/toolkit.svg'),
 'common/trash': () => import('./icons/common/trash.svg'),
-'common/upRightArrowLight': () => import('./icons/common/upRightArrowLight.svg'),
 'common/uploadFileFill': () => import('./icons/common/uploadFileFill.svg'),
 'common/upperRight': () => import('./icons/common/upperRight.svg'),
+'common/upRightArrowLight': () => import('./icons/common/upRightArrowLight.svg'),
 'common/userInfo': () => import('./icons/common/userInfo.svg'),
 'common/variable': () => import('./icons/common/variable.svg'),
 'common/viewLight': () => import('./icons/common/viewLight.svg'),
@@ -150,8 +151,6 @@ export const iconPaths = {
 'core/app/simpleMode/tts': () => import('./icons/core/app/simpleMode/tts.svg'),
 'core/app/simpleMode/variable': () => import('./icons/core/app/simpleMode/variable.svg'),
 'core/app/simpleMode/whisper': () => import('./icons/core/app/simpleMode/whisper.svg'),
-'core/app/templates/TranslateRobot': () =>
-  import('./icons/core/app/templates/TranslateRobot.svg'),
 'core/app/templates/animalLife': () => import('./icons/core/app/templates/animalLife.svg'),
 'core/app/templates/chinese': () => import('./icons/core/app/templates/chinese.svg'),
 'core/app/templates/divination': () => import('./icons/core/app/templates/divination.svg'),
@@ -161,6 +160,8 @@ export const iconPaths = {
 'core/app/templates/plugin-dalle': () => import('./icons/core/app/templates/plugin-dalle.svg'),
 'core/app/templates/plugin-feishu': () => import('./icons/core/app/templates/plugin-feishu.svg'),
 'core/app/templates/stock': () => import('./icons/core/app/templates/stock.svg'),
+'core/app/templates/TranslateRobot': () =>
+  import('./icons/core/app/templates/TranslateRobot.svg'),
 'core/app/toolCall': () => import('./icons/core/app/toolCall.svg'),
 'core/app/ttsFill': () => import('./icons/core/app/ttsFill.svg'),
 'core/app/type/httpPlugin': () => import('./icons/core/app/type/httpPlugin.svg'),
@@ -179,7 +180,6 @@ export const iconPaths = {
 'core/app/variable/input': () => import('./icons/core/app/variable/input.svg'),
 'core/app/variable/select': () => import('./icons/core/app/variable/select.svg'),
 'core/app/variable/textarea': () => import('./icons/core/app/variable/textarea.svg'),
-'core/chat/QGFill': () => import('./icons/core/chat/QGFill.svg'),
 'core/chat/backText': () => import('./icons/core/chat/backText.svg'),
 'core/chat/cancelSpeak': () => import('./icons/core/chat/cancelSpeak.svg'),
 'core/chat/chatFill': () => import('./icons/core/chat/chatFill.svg'),
@@ -196,6 +196,7 @@ export const iconPaths = {
 'core/chat/fileSelect': () => import('./icons/core/chat/fileSelect.svg'),
 'core/chat/finishSpeak': () => import('./icons/core/chat/finishSpeak.svg'),
 'core/chat/imgSelect': () => import('./icons/core/chat/imgSelect.svg'),
+'core/chat/QGFill': () => import('./icons/core/chat/QGFill.svg'),
 'core/chat/quoteFill': () => import('./icons/core/chat/quoteFill.svg'),
 'core/chat/quoteSign': () => import('./icons/core/chat/quoteSign.svg'),
 'core/chat/recordFill': () => import('./icons/core/chat/recordFill.svg'),
@@ -203,6 +204,7 @@ export const iconPaths = {
 'core/chat/sendLight': () => import('./icons/core/chat/sendLight.svg'),
 'core/chat/setTopLight': () => import('./icons/core/chat/setTopLight.svg'),
 'core/chat/sideLine': () => import('./icons/core/chat/sideLine.svg'),
+'core/chat/sidebar/logout': () => import('./icons/core/chat/sidebar/logout.svg'),
 'core/chat/speaking': () => import('./icons/core/chat/speaking.svg'),
 'core/chat/stopSpeech': () => import('./icons/core/chat/stopSpeech.svg'),
 'core/chat/think': () => import('./icons/core/chat/think.svg'),
@@ -283,13 +285,12 @@ export const iconPaths = {
 'core/workflow/redo': () => import('./icons/core/workflow/redo.svg'),
 'core/workflow/revertVersion': () => import('./icons/core/workflow/revertVersion.svg'),
 'core/workflow/runError': () => import('./icons/core/workflow/runError.svg'),
+'core/workflow/running': () => import('./icons/core/workflow/running.svg'),
 'core/workflow/runSkip': () => import('./icons/core/workflow/runSkip.svg'),
 'core/workflow/runSuccess': () => import('./icons/core/workflow/runSuccess.svg'),
-'core/workflow/running': () => import('./icons/core/workflow/running.svg'),
-'core/workflow/template/BI': () => import('./icons/core/workflow/template/BI.svg'),
-'core/workflow/template/FileRead': () => import('./icons/core/workflow/template/FileRead.svg'),
 'core/workflow/template/aiChat': () => import('./icons/core/workflow/template/aiChat.svg'),
 'core/workflow/template/baseChart': () => import('./icons/core/workflow/template/baseChart.svg'),
+'core/workflow/template/BI': () => import('./icons/core/workflow/template/BI.svg'),
 'core/workflow/template/bing': () => import('./icons/core/workflow/template/bing.svg'),
 'core/workflow/template/bocha': () => import('./icons/core/workflow/template/bocha.svg'),
 'core/workflow/template/codeRun': () => import('./icons/core/workflow/template/codeRun.svg'),
@@ -306,6 +307,7 @@ export const iconPaths = {
 'core/workflow/template/extractJson': () =>
   import('./icons/core/workflow/template/extractJson.svg'),
 'core/workflow/template/fetchUrl': () => import('./icons/core/workflow/template/fetchUrl.svg'),
+'core/workflow/template/FileRead': () => import('./icons/core/workflow/template/FileRead.svg'),
 'core/workflow/template/formInput': () => import('./icons/core/workflow/template/formInput.svg'),
 'core/workflow/template/getTime': () => import('./icons/core/workflow/template/getTime.svg'),
 'core/workflow/template/google': () => import('./icons/core/workflow/template/google.svg'),
@@ -335,12 +337,12 @@ export const iconPaths = {
 'core/workflow/template/textConcat': () =>
   import('./icons/core/workflow/template/textConcat.svg'),
 'core/workflow/template/toolCall': () => import('./icons/core/workflow/template/toolCall.svg'),
-'core/workflow/template/toolParams': () =>
-  import('./icons/core/workflow/template/toolParams.svg'),
 'core/workflow/template/toolkitActive': () =>
   import('./icons/core/workflow/template/toolkitActive.svg'),
 'core/workflow/template/toolkitInactive': () =>
   import('./icons/core/workflow/template/toolkitInactive.svg'),
+'core/workflow/template/toolParams': () =>
+  import('./icons/core/workflow/template/toolParams.svg'),
 'core/workflow/template/userSelect': () =>
   import('./icons/core/workflow/template/userSelect.svg'),
 'core/workflow/template/variable': () => import('./icons/core/workflow/template/variable.svg'),
@@ -389,6 +391,7 @@ export const iconPaths = {
 key: () => import('./icons/key.svg'),
 keyPrimary: () => import('./icons/keyPrimary.svg'),
 loading: () => import('./icons/loading.svg'),
+mcp: () => import('./icons/mcp.svg'),
 menu: () => import('./icons/menu.svg'),
 minus: () => import('./icons/minus.svg'),
 'modal/AddClb': () => import('./icons/modal/AddClb.svg'),
@@ -400,10 +403,10 @@ export const iconPaths = {
 'modal/selectSource': () => import('./icons/modal/selectSource.svg'),
 'modal/setting': () => import('./icons/modal/setting.svg'),
 'modal/teamPlans': () => import('./icons/modal/teamPlans.svg'),
-'model/BAAI': () => import('./icons/model/BAAI.svg'),
 'model/alicloud': () => import('./icons/model/alicloud.svg'),
 'model/aws': () => import('./icons/model/aws.svg'),
 'model/azure': () => import('./icons/model/azure.svg'),
+'model/BAAI': () => import('./icons/model/BAAI.svg'),
 'model/baichuan': () => import('./icons/model/baichuan.svg'),
 'model/chatglm': () => import('./icons/model/chatglm.svg'),
 'model/claude': () => import('./icons/model/claude.svg'),
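Most hunks in this diff reorder existing `iconPaths` entries rather than change them: keys with uppercase letters (`'model/BAAI'`, `'common/upRightArrowLight'`, `'core/chat/QGFill'`, `'core/workflow/template/FileRead'`) move from a case-sensitive position to a case-insensitive one. A hedged sketch of why the two orderings differ (illustrative standalone code, not taken from this PR):

```typescript
// Default Array.prototype.sort compares UTF-16 code units, so every
// uppercase letter sorts before every lowercase letter ('B' < 'a').
const keys = ['model/alicloud', 'model/aws', 'model/azure', 'model/BAAI'];

// Case-sensitive: 'model/BAAI' lands first.
const caseSensitive = [...keys].sort();

// Case-insensitive: 'model/BAAI' lands after 'model/azure',
// matching the new ordering in the diff above.
const caseInsensitive = [...keys].sort((a, b) =>
  a.toLowerCase().localeCompare(b.toLowerCase())
);

console.log(caseSensitive[0]); // 'model/BAAI'
console.log(caseInsensitive[3]); // 'model/BAAI'
```

The same comparator explains every other move in the diff, e.g. `'common/upRightArrowLight'` sorting between `'common/upperRight'` and `'common/userInfo'` once case is ignored.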

Some files were not shown because too many files have changed in this diff.