4.7-alpha2 (#1027)

* feat: stop toolCall and rename some fields (#46)

* perf: node delete tip; pay tip

* fix: toolCall cannot save child answer

* feat: stop tool

* fix: team modal

* fix feedbackModal auth bug (#47)

* Basic support for running tools via prompts; improve workflow templates (#49)

* remove templates

* fix: request body undefined

* feat: prompt tool run

* feat: workflow templates modal

* perf: plugin start

* 4.7 (#50)

* fix docker-compose download url (#994)

The original URL was broken and returned '404 NOT FOUND'.
Fix the docker-compose download URL by adding a 'v' prefix to the docker-compose version.

* Update ai_settings.md (#1000)

* Update configuration.md

* Update configuration.md

* Fix history in classifyQuestion and extract modules (#1012)

* Fix history in classifyQuestion and extract modules

* Add chatValue2RuntimePrompt import and update text formatting

* flow controller to packages

* fix: rerank select

* modal ui

* perf: modal code path

* point not sufficient

* feat: http url support variable

* fix http key

* perf: prompt

* perf: ai setting modal

* simple edit ui

---------

Co-authored-by: entorick <entorick11@qq.com>
Co-authored-by: liujianglc <liujianglc@163.com>
Co-authored-by: Fengrui Liu <liufengrui.work@bytedance.com>

* fix team share redirect to login (#51)

* feat: support openapi import plugins (#48)

* feat: support openapi import plugins

* feat: import from url

* fix: add body params parse

* fix build

* fix

* fix

* fix

* tool box ui (#52)

* fix: training queue

* feat: simple edit tool select

* perf: simple edit dataset prompt

* fix: chatbox tool ux

* feat: quote prompt module

* perf: plugin tools sign

* perf: model avatar

* tool selector ui

* feat: max histories

* perf: http plugin import (#53)

* perf: plugin http import

* chatBox ui

* perf: name

* fix: Node template card (#54)

* fix: ts

* setting modal

* package

* package

* feat: add plugins search (#57)

* feat: add plugins search

* perf: change http plugin header input

* Yjl (#56)

* perf: prompt tool call

* perf: chat box ux

* doc

* doc

* price tip

* perf: tool selector

* ui

* fix: vector queue

* fix: empty tool and empty response

* fix: empty msg

* perf: pg index

* perf: ui tip

* doc

* tool tip

---------

Co-authored-by: yst <77910600+yu-and-liu@users.noreply.github.com>
Co-authored-by: entorick <entorick11@qq.com>
Co-authored-by: liujianglc <liujianglc@163.com>
Co-authored-by: Fengrui Liu <liufengrui.work@bytedance.com>
Co-authored-by: heheer <71265218+newfish-cmyk@users.noreply.github.com>
Archer 2024-03-21 13:32:31 +08:00 committed by GitHub
parent 6d4b331db9
commit 9d27de154b
322 changed files with 9282 additions and 6498 deletions

View File

@ -58,7 +58,7 @@ fastgpt.run 域名会弃用。
- [x] Source file citation tracking
- [x] Module encapsulation for multi-level reuse
- [x] Hybrid search & rerank
- [ ] Tool module
- [x] Tool module
- [ ] Embed [Laf](https://github.com/labring/laf) to write HTTP modules online
- [ ] Plugin encapsulation
@ -114,7 +114,7 @@ fastgpt.run 域名会弃用。
* [Multi-model configuration](https://doc.fastgpt.in/docs/development/one-api/)
* [Version update / upgrade guide](https://doc.fastgpt.in/docs/development/upgrading)
* [OpenAPI documentation](https://doc.fastgpt.in/docs/development/openapi/)
* [Dataset structure explained](https://doc.fastgpt.in/docs/use-cases/datasetengine/)
* [Dataset structure explained](https://doc.fastgpt.in/docs/course/datasetengine/)
<a href="#readme">
<img src="https://img.shields.io/badge/-返回顶部-7d09f1.svg" alt="#" align="right">
@ -122,9 +122,9 @@ fastgpt.run 域名会弃用。
## 🏘️ Community Group
Add the WeChat assistant to join:
Scan the QR code with WeChat to join:
![](https://otnvvf-imgs.oss.laf.run/wx300.jpg)
![](https://oss.laf.run/htr4n1-images/fastgpt-qr-code.jpg)
<a href="#readme">
<img src="https://img.shields.io/badge/-返回顶部-7d09f1.svg" alt="#" align="right">

View File

@ -116,14 +116,12 @@ Project tech stack: NextJs + TS + ChakraUI + Mongo + Postgres (Vector plugin)
- [Configuring Multiple Models](https://doc.fastgpt.in/docs/installation/reference/models)
- [Version Updates & Upgrades](https://doc.fastgpt.in/docs/installation/upgrading)
<!-- ## :point_right: RoadMap
- [FastGPT RoadMap](https://kjqvjse66l.feishu.cn/docx/RVUxdqE2WolDYyxEKATcM0XXnte) -->
<!-- ## 🏘️ Community
## 🏘️ Community
| Community Group | Assistant |
| ------------------------------------------------- | ---------------------------------------------- |
| ![](https://otnvvf-imgs.oss.laf.run/wxqun300.jpg) | ![](https://otnvvf-imgs.oss.laf.run/wx300.jpg) | -->
| Community Group |
| ------------------------------------------------- |
| ![](https://oss.laf.run/htr4n1-images/fastgpt-qr-code.jpg) |
<a href="#readme">
<img src="https://img.shields.io/badge/-Back_to_Top-7d09f1.svg" alt="#" align="right">

18 binary image files added (documentation screenshots, 96 KiB - 447 KiB).

View File

@ -19,16 +19,17 @@ FastGPT 商业版是基于 FastGPT 开源版的增强版本,增加了一些独
| App management & advanced orchestration | ✅ | ✅ | ✅ |
| Document datasets | ✅ | ✅ | ✅ |
| External usage | ✅ | ✅ | ✅ |
| Custom branding | ❌ | ✅ | |
| Custom branding | ❌ | ✅ | In design |
| Multi-tenancy & payments | ❌ | ✅ | ✅ |
| Team workspaces | ❌ | ✅ | ✅ |
| External usage limits | ❌ | ✅ | ✅ |
| App publishing security config | ❌ | ✅ | ✅ |
| Content moderation | ❌ | ✅ | ✅ |
| Website sync | ❌ | ✅ | ✅ |
| Admin console | ❌ | ✅ | ✅ |
| SaaS commercial license | ❌ | ✅ | ✅ |
| Full commercial license | ❌ | ✅ | ✅ |
| Image datasets | ❌ | In design | In design |
| Automatic recall planning | ❌ | In design | In design |
| Chat log analytics | ❌ | In design | In design |
{{< /table >}}
## Commercial Edition Pricing

View File

@ -4,7 +4,7 @@ description: "FastGPT AI 高级配置说明"
icon: "sign_language"
draft: false
toc: true
weight: 501
weight: 102
---
FastGPT's AI chat module has an AI advanced configuration panel that holds the AI model's parameter settings; this article explains what each setting means.
@ -48,7 +48,7 @@ Tips: 可以通过点击上下文按键查看完整的上下文组成,便于
FastGPT stores dataset entries as QA pairs (not necessarily in question-answer form; the letters merely denote two variables). When serialized to a string, they are formatted according to the **quote template**. Several variables are available: q, a, sourceId (the data ID), index (the n-th entry), source (the collection or file name), and score (distance score, 0-1); reference them as {{q}} {{a}} {{sourceId}} {{index}} {{source}} {{score}} as needed. An example template follows.
See [Dataset structure explained](/docs/use-cases/datasetEngine/) for the dataset structure in detail.
See [Dataset structure explained](/docs/course/datasetEngine/) for the dataset structure in detail.
#### Quote Template
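A minimal sketch of such a template, using the variables listed above (hypothetical; adapt the layout as needed):

```
[{{index}}] Source: {{source}} (score: {{score}})
Q: {{q}}
A: {{a}}
```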

View File

@ -4,7 +4,7 @@ description: "本节会详细介绍 FastGPT 知识库结构设计,理解其 QA
icon: "dataset"
draft: false
toc: true
weight: 502
weight: 102
---
## Understanding Vectors
@ -90,4 +90,4 @@ FastGPT 采用了 `PostgresSQL` 的 `PG Vector` 插件作为向量检索器,
## Combining QA Pairs and Building Quote Prompts
See the [quote template and quote prompt examples](/docs/use-cases/ai_settings/#示例)
See the [quote template and quote prompt examples](/docs/course/ai_settings/#示例)

View File

@ -4,7 +4,7 @@ description: " 利用 FastGPT 打造高质量 AI 知识库"
icon: "school"
draft: false
toc: true
weight: 699
weight: 106
---
## Preface

View File

@ -11,7 +11,7 @@ weight: 708
**In the development environment**, you need to copy the sample config `config.json` to `config.local.json` for it to take effect.
This config file contains the system parameters and all model configurations; be sure to strip the comments before use.
This config file contains the system parameters and all model configurations. `Be sure to strip the comments before use!`
## New Config File for 4.6.8+
@ -28,6 +28,7 @@ llm模型全部合并
{
"model": "gpt-3.5-turbo", // 模型名
"name": "gpt-3.5-turbo", // 别名
"avatar": "/imgs/model/openai.svg", // 模型的logo
"maxContext": 16000, // 最大上下文
"maxResponse": 4000, // 最大回复
"quoteMaxToken": 13000, // 最大引用内容
@ -35,7 +36,7 @@ llm模型全部合并
"charsPointsPrice": 0,
"censor": false,
"vision": false, // 是否支持图片输入
"datasetProcess": false, // 是否设置为知识库处理模型QA务必保证至少有一个为true否则知识库会报错
"datasetProcess": true, // 是否设置为知识库处理模型QA务必保证至少有一个为true否则知识库会报错
"usedInClassify": true, // 是否用于问题分类务必保证至少有一个为true
"usedInExtractFields": true, // 是否用于内容提取务必保证至少有一个为true
"usedInToolCall": true, // 是否用于工具调用务必保证至少有一个为true
@ -47,31 +48,10 @@ llm模型全部合并
"defaultSystemChatPrompt": "", // 对话默认携带的系统提示词
"defaultConfig":{} // LLM默认配置可以针对不同模型设置特殊值比如 GLM4 的 top_p
},
{
"model": "gpt-3.5-turbo-16k",
"name": "gpt-3.5-turbo-16k",
"maxContext": 16000,
"maxResponse": 16000,
"quoteMaxToken": 13000,
"maxTemperature": 1.2,
"charsPointsPrice": 0,
"censor": false,
"vision": false,
"datasetProcess": true,
"usedInClassify": true,
"usedInExtractFields": true,
"usedInToolCall": true,
"usedInQueryExtension": true,
"toolChoice": true,
"functionCall": false,
"customCQPrompt": "",
"customExtractPrompt": "",
"defaultSystemChatPrompt": "",
"defaultConfig":{}
},
{
"model": "gpt-4-0125-preview",
"name": "gpt-4-turbo",
"avatar": "/imgs/model/openai.svg",
"maxContext": 125000,
"maxResponse": 4000,
"quoteMaxToken": 100000,
@ -94,6 +74,7 @@ llm模型全部合并
{
"model": "gpt-4-vision-preview",
"name": "gpt-4-vision",
"avatar": "/imgs/model/openai.svg",
"maxContext": 128000,
"maxResponse": 4000,
"quoteMaxToken": 100000,
@ -118,6 +99,7 @@ llm模型全部合并
{
"model": "text-embedding-ada-002",
"name": "Embedding-2",
"avatar": "/imgs/model/openai.svg",
"charsPointsPrice": 0,
"defaultToken": 700,
"maxToken": 3000,
@ -149,6 +131,20 @@ llm模型全部合并
}
```
## About Model Logos
Logos are kept in the project's `public/imgs/model/xxx` directory. The following are built in; PRs to add more are welcome.
- /imgs/model/baichuan.svg - Baichuan
- /imgs/model/chatglm.svg - ChatGLM (Zhipu)
- /imgs/model/calude.svg - Claude
- /imgs/model/ernie.svg - ERNIE Bot
- /imgs/model/moonshot.svg - Moonshot AI
- /imgs/model/openai.svg - OpenAI GPT
- /imgs/model/qwen.svg - Qwen
- /imgs/model/yi.svg - Yi
## Special Models
### ReRank Integration

View File

@ -1,13 +1,13 @@
---
weight: 749
title: "常见开发 & 部署问题"
description: "FastGPT 常见开发 & 部署问题"
title: "私有部署常见问题"
description: "FastGPT 私有部署常见问题"
icon: upgrade
draft: false
images: []
---
## Error Troubleshooting
## 1. Error Troubleshooting
When you hit a problem, work through the checks below first.
@ -17,7 +17,7 @@ images: []
4. If you still cannot resolve it, search the existing [Issues](https://github.com/labring/FastGPT/issues) or open a new one; for self-hosted errors, always include detailed logs, otherwise diagnosis is very hard.
## General Issues
## 2. General Issues
### Can it run fully locally?
@ -46,7 +46,7 @@ OneAPI 的 API Key 配置错误,需要修改`OPENAI_API_KEY`环境变量,并
### Page crashes
1. Disable browser translation.
2. Check that the config file loaded correctly; if not, system info is missing and some actions hit null pointers. In 95% of cases you can press F12 and check the console for the exact null pointer.
2. Check that the config file loaded correctly; if not, system info is missing and some actions hit null pointers. 95% of the time the config file is wrong; press F12 and check the console for the exact null pointer.
3. Occasional API incompatibilities (rare).
### Responses slow down after enabling query extension
@ -54,7 +54,11 @@ OneAPI 的 API Key 配置错误,需要修改`OPENAI_API_KEY`环境变量,并
1. Query extension requires an extra round of AI generation.
2. It performs 3-5 query rounds; a slow database has a noticeable impact.
## Self-Hosting Issues
### Empty model responses
1. Check the API key.
2. For Chinese domestic models, you may have hit content risk controls.
3. Inspect the model request logs and check whether the inputs and outputs look abnormal.
### Dataset indexing shows no progress
@ -64,11 +68,7 @@ OneAPI 的 API Key 配置错误,需要修改`OPENAI_API_KEY`环境变量,并
2. Neither chat nor indexing works: API calls fail, likely because OneAPI or OpenAI is unreachable.
3. There is progress but it is very slow: the API key is too limited (an OpenAI free account allows only 3 requests per minute instead of 60, with a 200-per-day cap).
## Common Docker Deployment Issues
### First deployment: root user reported as unregistered
Mongo is not running in replica-set mode.
## 3. Common Docker Deployment Issues
### How do I update?
@ -133,14 +133,6 @@ mongo连接失败检查
2. Environment variables (account and password; mind the host and port).
3. Replica set fails to start and keeps restarting: the mongo key is not mounted, or the key file lacks the required permissions.
## Local Development Issues
### First deployment: root user reported as unregistered
### TypeError: Cannot read properties of null (reading 'useMemo')
Delete all `node_modules` and reinstall with Node 18; the newest Node versions may be the problem. Local development flow:
1. In the repo root: `pnpm i`
2. Copy `config.json` -> `config.local.json`
3. Copy `.env.template` -> `.env.local`
4. `cd projects/app`
5. `pnpm dev`
Mongo is not running in replica-set mode.

View File

@ -48,7 +48,7 @@ git clone git@github.com:<github_username>/FastGPT.git
For first-time development, deploy the databases first; for local development a cheap 2C2G lightweight instance is plenty. Deployment guide: [Docker quick deploy](/docs/development/docker/). Once deployed, you can reach the databases locally.
For Mongo, change the replica set `host` from the original `mongo:27017` to `ip:27017`.
For Mongo, change the replica set `host` from the original `mongo:27017` to `ip:27017` (where ip is the corresponding public IP).
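One way to apply this from `mongosh` is sketched below; the member index and the placeholder IP are assumptions, adjust them to your deployment:

```javascript
// Sketch: repoint the replica set member at the server's public IP.
cfg = rs.conf();
cfg.members[0].host = '120.24.xx.xx:27017'; // replace with your public IP
rs.reconfig(cfg, { force: true });
```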
### 4. Initial configuration
@ -113,7 +113,22 @@ docker build -t dockername/fastgpt:tag --build-arg name=app --build-arg proxy=ta
After `pnpm i`, FastGPT runs a `postinstall` script that generates the `ChakraUI` `Type`s. If it lacks permission, run `chmod -R +x ./scripts/` first, then `pnpm i` again.
### Join the community
### Crashes after running for a long time
This appears to be a development-environment issue with the tiktoken library; it has not been seen in production and can be ignored for now.
### TypeError: Cannot read properties of null (reading 'useMemo')
Delete all `node_modules` and reinstall with Node 18; the newest Node versions may be the problem. Local development flow:
1. In the repo root: `pnpm i`
2. Copy `config.json` -> `config.local.json`
3. Copy `.env.template` -> `.env.local`
4. `cd projects/app`
5. `pnpm dev`
## Join the community
Stuck? Have questions? Join the WeChat group to stay in touch with developers and users.

View File

@ -112,6 +112,7 @@ CHAT_API_KEY=sk-xxxxxx
{
"model": "ERNIE-Bot", // 这里的模型需要对应 One API 的模型
"name": "文心一言", // 对外展示的名称
"avatar": "/imgs/model/openai.svg", // 模型的logo
"maxContext": 16000, // 最大上下文
"maxResponse": 4000, // 最大回复
"quoteMaxToken": 13000, // 最大引用内容
@ -135,4 +136,11 @@ CHAT_API_KEY=sk-xxxxxx
],
```
After adding it, restart FastGPT and you can select the ERNIE Bot model for chat. **Adding a vector model works the same way: add it to `vectorModels`.**
### 3. Restart FastGPT
```bash
docker-compose down
docker-compose up -d
```
After restarting FastGPT you can select the ERNIE Bot model for chat. **Adding a vector model works the same way: add it to `vectorModels`.**
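For reference, a `vectorModels` entry might look like the following sketch, mirroring the default `text-embedding-ada-002` entry from the configuration doc:

```json
{
  "model": "text-embedding-ada-002", // must match the model name in One API
  "name": "Embedding-2", // display name
  "avatar": "/imgs/model/openai.svg", // model logo
  "charsPointsPrice": 0,
  "defaultToken": 700,
  "maxToken": 3000
}
```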

View File

@ -9,11 +9,31 @@ weight: 826
## Update the Config File
Add a few Boolean flags that decide which models each feature block may use: [see the latest config file](/docs/development/configuration/)
Add a few Boolean flags that decide which models each feature block may use, plus model logos: [see the latest config file](/docs/development/configuration/)
## Initialization Script
From any terminal, send one HTTP request, replacing {{rootkey}} with the `rootkey` environment variable and {{host}} with your own domain:
```bash
curl --location --request POST 'https://{{host}}/api/admin/initv47' \
--header 'rootkey: {{rootkey}}' \
--header 'Content-Type: application/json'
```
What the script does:
1. Initializes the plugins' parentId.
## V4.7 Release Notes
1. New - Tool-call module: lets the LLM dynamically choose other modules or plugins to run based on user intent.
2. New - Classification and content extraction support functionCall mode, so models that support functionCall but not ToolCall can now be used: in the LLM model config set `functionCall` to `true` and `toolChoice` to `false` (if `toolChoice` is true, tool mode is used); see the config sketch after this list.
3. Improved - Advanced orchestration performance
3. New - HTTP plugins: generate plugins quickly from an OpenAPI schema.
4. Improved - Advanced orchestration performance.
5. Improved - Extracted the Flow controller into packages.
6. Improved - AI model selection.
7. Fixed - Rerank could not be selected in the open-source edition.
8. Fixed - The HTTP request body is now passed as undefined when unused (it previously broke some GET requests).
9. New - HTTP URLs support variables.
10. Fixed - The 4.6.9 extraction prompt tended to cause hallucinations.
11. Fixed - The PG HNSW index was not actually taking effect; after this update, search is much faster (possibly with some precision loss; if so, tune the index per the PgVector docs). Details: https://github.com/pgvector/pgvector?tab=readme-ov-file#troubleshooting
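As a sketch of point 2, the relevant switches in a model entry look like this (the model name is illustrative; the rest of the entry is unchanged):

```json
{
  "model": "ERNIE-Bot",
  "toolChoice": false, // disable toolCall mode
  "functionCall": true // use functionCall mode instead
}
```

For point 11, if precision loss shows up, pgvector's documented `hnsw.ef_search` parameter can be raised, for example (a sketch; the database name is an assumption):

```bash
# Trades some speed for recall; see the pgvector troubleshooting link above.
psql "$PG_URL" -c "ALTER DATABASE fastgpt SET hnsw.ef_search = 100;"
```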

View File

@ -1,6 +1,6 @@
---
title: " 接入飞书 "
description: "FastGPT 接入飞书机器人 "
title: " 接入飞书(社区文章)"
description: "FastGPT 接入飞书机器人"
icon: "chat"
draft: false
toc: true

View File

@ -0,0 +1,60 @@
---
title: "使用 Gapier 快速导入Agent工具"
description: "FastGPT 使用 Gapier 快速导入Agent工具"
icon: "build"
draft: false
toc: true
weight: 501
---
FastGPT V4.7 adds tool calling that is compatible with GPTs Actions, which means you can directly import GPTs-compatible Agent tools.
Gapier is an online GPTs Actions service offering 50+ ready-made tools with a free daily quota for testing. Official site: [https://gapier.com/](https://gapier.com/).
![](/imgs/gapierToolResult1.png)
Now let's import Gapier's tools into FastGPT.
## 1. Create the plugin
| Step1 | Step2 | Step3 |
| --- | --- | --- |
| ![](/imgs/gapierTool1.png) | ![](/imgs/gapierTool2.png) | Log in to [Gapier](https://gapier.com/) and copy the relevant parameters <br> ![](/imgs/gapierTool3.png) |
| Step4 | Step5 | Step6 |
| Custom request header: Authorization<br>Value: Bearer <the copied key> <br> ![](/imgs/gapierTool4.png) | ![](/imgs/gapierTool5.png) | ![](/imgs/gapierTool6.png) |
Once created, you never need to recreate the plugin to make changes; just edit the parameters and it updates automatically via a diff comparison.
![](/imgs/gapierTool7.png)
## 2. Bind the tool in an app
### Simple mode
| Step1 | Step2 |
| --- | --- |
| ![](/imgs/gapierTool8.png) | ![](/imgs/gapierTool9.png) |
| Step3 | Step4 |
| ![](/imgs/gapierTool10.png) | ![](/imgs/gapierTool11.png) |
### Advanced orchestration
| Step1 | Step2 |
| --- | --- |
| ![](/imgs/gapierTool12.png) | ![](/imgs/gapierTool13.png) |
| Step3 | Step4 |
| ![](/imgs/gapierTool14.png) | ![](/imgs/gapierTool15.png) |
![](/imgs/gapierTool16.png)
## 3. Notes on Tool Calling
### Differences between models
Different models invoke tools in different ways; models that support toolChoice or functionCall work best. Models that support neither are driven by prompts, which is less reliable, and to keep invocation robust FastGPT's built-in prompt supports only one tool call at a time.
Which models support functionCall can be checked on the vendor's site (OneAPI must support it as well), and the corresponding fields in the model config file must be adjusted (see the config field reference).
Cloud users can see a function-calling badge when selecting a model.
![](/imgs/gapierTool17.png)

View File

@ -1,5 +1,5 @@
---
title: " 接入微信和企业微信 "
title: "接入微信和企业微信 "
description: "FastGPT 接入微信和企业微信 "
icon: "chat"
draft: false

View File

@ -15,7 +15,7 @@ export enum TeamErrEnum {
const teamErr = [
{ statusText: TeamErrEnum.teamOverSize, message: 'error.team.overSize' },
{ statusText: TeamErrEnum.unAuthTeam, message: '无权操作该团队' },
{ statusText: TeamErrEnum.aiPointsNotEnough, message: 'AI积分已用完~' },
{ statusText: TeamErrEnum.aiPointsNotEnough, message: '' },
{ statusText: TeamErrEnum.datasetSizeNotEnough, message: '知识库容量不足,请先扩容~' },
{ statusText: TeamErrEnum.datasetAmountNotEnough, message: '知识库数量已达上限~' },
{ statusText: TeamErrEnum.appAmountNotEnough, message: '应用数量已达上限~' },

View File

@ -1,7 +1,7 @@
import { replaceSensitiveText } from '../string/tools';
export const getErrText = (err: any, def = '') => {
const msg: string = typeof err === 'string' ? err : err?.message || def || '';
const msg: string = typeof err === 'string' ? err : err?.message ?? def;
msg && console.log('error =>', msg);
return replaceSensitiveText(msg);
};

View File

@ -1,3 +1,4 @@
/* mongo fs bucket */
export enum BucketNameEnum {
dataset = 'dataset'
}
@ -7,4 +8,4 @@ export const bucketNameMap = {
}
};
export const FileBaseUrl = '/api/common/file/read';
export const ReadFileBaseUrl = '/api/common/file/read';

View File

@ -50,3 +50,7 @@ export const mongoImageTypeMap = {
export const uniqueImageTypeList = Object.entries(mongoImageTypeMap)
.filter(([key, value]) => value.unique)
.map(([key]) => key as `${MongoImageTypeEnum}`);
export const FolderIcon = 'file/fill/folder';
export const FolderImgUrl = '/imgs/files/folder.svg';
export const HttpImgUrl = '/imgs/module/http.png';

View File

@ -1,2 +1,3 @@
export const HUMAN_ICON = `/icon/human.svg`;
export const LOGO_ICON = `/icon/logo.svg`;
export const HUGGING_FACE_ICON = `/imgs/model/huggingface.svg`;

View File

@ -1,6 +1,7 @@
export type LLMModelItemType = {
model: string;
name: string;
avatar?: string;
maxContext: number;
maxResponse: number;
quoteMaxToken: number;
@ -31,6 +32,7 @@ export type LLMModelItemType = {
export type VectorModelItemType = {
model: string;
name: string;
avatar?: string;
defaultToken: number;
charsPointsPrice: number;
maxToken: number;

View File

@ -1,4 +1,4 @@
import { PromptTemplateItem } from '@fastgpt/global/core/ai/type.d';
import { PromptTemplateItem } from '../type.d';
export const Prompt_QuoteTemplateList: PromptTemplateItem[] = [
{

View File

@ -58,5 +58,3 @@ Human"{{question}}"
ID=
`;
export const Prompt_QuestionGuide = `我不太清楚问你什么问题,请帮我生成 3 个问题引导我继续提问。问题的长度应小于20个字符按 JSON 格式返回: ["问题1", "问题2", "问题3"]`;

View File

@ -12,16 +12,9 @@ export type CreateAppParams = {
export interface AppUpdateParams {
name?: string;
type?: `${AppTypeEnum}`;
simpleTemplateId?: string;
avatar?: string;
intro?: string;
modules?: AppSchema['modules'];
permission?: AppSchema['permission'];
teamTags?: AppSchema['teamTags'];
}
export type FormatForm2ModulesProps = {
formData: AppSimpleEditFormType;
chatModelMaxToken: number;
llmModelList: LLMModelItemType[];
};

View File

@ -1,7 +1,12 @@
import type { AppTTSConfigType, ModuleItemType, VariableItemType } from '../module/type.d';
import type {
AppTTSConfigType,
FlowNodeTemplateType,
ModuleItemType,
VariableItemType
} from '../module/type.d';
import { AppTypeEnum } from './constants';
import { PermissionTypeEnum } from '../../support/permission/constant';
import type { AIChatModuleProps, DatasetModuleProps } from '../module/node/type.d';
import type { DatasetModuleProps } from '../module/node/type.d';
import { VariableInputEnum } from '../module/constants';
import { SelectedDatasetType } from '../module/api';
import { DatasetSearchModeEnum } from '../dataset/constants';
@ -13,7 +18,6 @@ export interface AppSchema {
tmbId: string;
name: string;
type: `${AppTypeEnum}`;
simpleTemplateId: string;
avatar: string;
intro: string;
updateTime: number;
@ -37,19 +41,6 @@ export type AppDetailType = AppSchema & {
canWrite: boolean;
};
// export type AppSimpleEditFormType = {
// aiSettings: AIChatModuleProps;
// dataset: DatasetModuleProps & {
// searchEmptyText: string;
// };
// userGuide: {
// welcomeText: string;
// variables: VariableItemType[];
// questionGuide: boolean;
// tts: AppTTSConfigType;
// };
// };
// Since useform cannot infer enumeration types, all enumeration keys can only be undone manually
export type AppSimpleEditFormType = {
// templateId: string;
aiSettings: {
@ -58,8 +49,7 @@ export type AppSimpleEditFormType = {
temperature: number;
maxToken: number;
isResponseAnswerText: boolean;
quoteTemplate?: string | undefined;
quotePrompt?: string | undefined;
maxHistories: number;
};
dataset: {
datasets: SelectedDatasetType;
@ -67,11 +57,11 @@ export type AppSimpleEditFormType = {
similarity?: number;
limit?: number;
usingReRank?: boolean;
searchEmptyText?: string;
datasetSearchUsingExtensionQuery?: boolean;
datasetSearchExtensionModel?: string;
datasetSearchExtensionBg?: string;
};
selectedTools: FlowNodeTemplateType[];
userGuide: {
welcomeText: string;
variables: {
@ -94,34 +84,3 @@ export type AppSimpleEditFormType = {
};
};
};
/* simple mode template*/
export type AppSimpleEditConfigTemplateType = {
id: string;
name: string;
desc: string;
systemForm: {
aiSettings?: {
model?: boolean;
systemPrompt?: boolean;
temperature?: boolean;
maxToken?: boolean;
quoteTemplate?: boolean;
quotePrompt?: boolean;
};
dataset?: {
datasets?: boolean;
similarity?: boolean;
limit?: boolean;
searchMode: `${DatasetSearchModeEnum}`;
usingReRank: boolean;
searchEmptyText?: boolean;
};
userGuide?: {
welcomeText?: boolean;
variables?: boolean;
questionGuide?: boolean;
tts?: boolean;
};
};
};

View File

@ -1,6 +1,10 @@
import type { AppSimpleEditFormType } from '../app/type';
import { FlowNodeTypeEnum } from '../module/node/constant';
import { ModuleOutputKeyEnum, ModuleInputKeyEnum } from '../module/constants';
import {
ModuleOutputKeyEnum,
ModuleInputKeyEnum,
FlowNodeTemplateTypeEnum
} from '../module/constants';
import type { FlowNodeInputItemType } from '../module/node/type.d';
import { getGuideModule, splitGuideModule } from '../module/utils';
import { ModuleItemType } from '../module/type.d';
@ -13,20 +17,19 @@ export const getDefaultAppForm = (): AppSimpleEditFormType => {
systemPrompt: '',
temperature: 0,
isResponseAnswerText: true,
quotePrompt: '',
quoteTemplate: '',
maxHistories: 6,
maxToken: 4000
},
dataset: {
datasets: [],
similarity: 0.4,
limit: 1500,
searchEmptyText: '',
searchMode: DatasetSearchModeEnum.embedding,
usingReRank: false,
datasetSearchUsingExtensionQuery: true,
datasetSearchExtensionBg: ''
},
selectedTools: [],
userGuide: {
welcomeText: '',
variables: [],
@ -47,7 +50,10 @@ export const appModules2Form = ({ modules }: { modules: ModuleItemType[] }) => {
};
modules.forEach((module) => {
if (module.flowType === FlowNodeTypeEnum.chatNode) {
if (
module.flowType === FlowNodeTypeEnum.chatNode ||
module.flowType === FlowNodeTypeEnum.tools
) {
defaultAppForm.aiSettings.model = findInputValueByKey(
module.inputs,
ModuleInputKeyEnum.aiModel
@ -64,13 +70,9 @@ export const appModules2Form = ({ modules }: { modules: ModuleItemType[] }) => {
module.inputs,
ModuleInputKeyEnum.aiChatMaxToken
);
defaultAppForm.aiSettings.quoteTemplate = findInputValueByKey(
defaultAppForm.aiSettings.maxHistories = findInputValueByKey(
module.inputs,
ModuleInputKeyEnum.aiChatQuoteTemplate
);
defaultAppForm.aiSettings.quotePrompt = findInputValueByKey(
module.inputs,
ModuleInputKeyEnum.aiChatQuotePrompt
ModuleInputKeyEnum.history
);
} else if (module.flowType === FlowNodeTypeEnum.datasetSearchNode) {
defaultAppForm.dataset.datasets = findInputValueByKey(
@ -104,17 +106,6 @@ export const appModules2Form = ({ modules }: { modules: ModuleItemType[] }) => {
module.inputs,
ModuleInputKeyEnum.datasetSearchExtensionBg
);
// empty text
const emptyOutputs =
module.outputs.find((item) => item.key === ModuleOutputKeyEnum.datasetIsEmpty)?.targets ||
[];
const emptyOutput = emptyOutputs[0];
if (emptyOutput) {
const target = modules.find((item) => item.moduleId === emptyOutput.moduleId);
defaultAppForm.dataset.searchEmptyText =
target?.inputs?.find((item) => item.key === ModuleInputKeyEnum.answerText)?.value || '';
}
} else if (module.flowType === FlowNodeTypeEnum.userGuide) {
const { welcomeText, variableModules, questionGuide, ttsConfig } = splitGuideModule(
getGuideModule(modules)
@ -125,6 +116,18 @@ export const appModules2Form = ({ modules }: { modules: ModuleItemType[] }) => {
questionGuide: questionGuide,
tts: ttsConfig
};
} else if (module.flowType === FlowNodeTypeEnum.pluginModule) {
defaultAppForm.selectedTools.push({
id: module.inputs.find((input) => input.key === ModuleInputKeyEnum.pluginId)?.value || '',
name: module.name,
avatar: module.avatar,
intro: module.intro || '',
flowType: module.flowType,
showStatus: module.showStatus,
inputs: module.inputs,
outputs: module.outputs,
templateType: FlowNodeTemplateTypeEnum.other
});
}
});

View File

@ -83,6 +83,7 @@ export const chats2GPTMessages = ({
});
}
} else {
//AI
item.value.forEach((value) => {
if (value.type === ChatItemValueTypeEnum.tool && value.tools && reserveTool) {
const tool_calls: ChatCompletionMessageToolCall[] = [];

View File

@ -3,6 +3,8 @@ export type UpdateChatFeedbackProps = {
chatId: string;
chatItemId: string;
shareId?: string;
teamId?: string;
teamToken?: string;
outLinkUid?: string;
userBadFeedback?: string;
userGoodFeedback?: string;

View File

@ -141,7 +141,7 @@ export type ChatHistoryItemResType = DispatchNodeResponseType & {
};
/* One tool run response */
export type ToolRunResponseItemType = Record<string, any> | Array;
export type ToolRunResponseItemType = any;
/* tool module response */
export type ToolModuleResponseItemType = {
id: string;

View File

@ -154,8 +154,5 @@ export const SearchScoreTypeMap = {
}
};
export const FolderIcon = 'file/fill/folder';
export const FolderImgUrl = '/imgs/files/folder.svg';
export const CustomCollectionIcon = 'common/linkBlue';
export const LinkCollectionIcon = 'common/linkBlue';

View File

@ -13,3 +13,10 @@ export type HttpQueryType = {
variables: Record<string, any>;
[key: string]: any;
};
/* http node */
export type HttpParamAndHeaderItemType = {
key: string;
type: string;
value: string;
};

View File

@ -1,4 +1,4 @@
export enum ModuleTemplateTypeEnum {
export enum FlowNodeTemplateTypeEnum {
userGuide = 'userGuide',
systemInput = 'systemInput',
tools = 'tools',
@ -87,7 +87,8 @@ export enum ModuleInputKeyEnum {
runAppSelectApp = 'app',
// plugin
pluginId = 'pluginId'
pluginId = 'pluginId',
pluginStart = 'pluginStart'
}
export enum ModuleOutputKeyEnum {
@ -117,7 +118,10 @@ export enum ModuleOutputKeyEnum {
selectedTools = 'selectedTools',
// http
httpRawResponse = 'httpRawResponse'
httpRawResponse = 'httpRawResponse',
// plugin
pluginStart = 'pluginStart'
}
export enum VariableInputEnum {

View File

@ -21,10 +21,12 @@ export enum FlowNodeInputTypeEnum {
// ai model select
selectLLMModel = 'selectLLMModel',
settingLLMModel = 'settingLLMModel',
// dataset special input
selectDataset = 'selectDataset',
selectDatasetParamsModal = 'selectDatasetParamsModal',
settingDatasetQuotePrompt = 'settingDatasetQuotePrompt',
hidden = 'hidden',
custom = 'custom'
@ -57,7 +59,8 @@ export enum FlowNodeTypeEnum {
pluginInput = 'pluginInput',
pluginOutput = 'pluginOutput',
queryExtension = 'cfr',
tools = 'tools'
tools = 'tools',
stopTool = 'stopTool'
// abandon
}

View File

@ -102,6 +102,13 @@ export type EditNodeFieldType = {
};
/* ------------- item type --------------- */
export type SettingAIDataType = {
model: string;
temperature: number;
maxToken: number;
isResponseAnswerText?: boolean;
maxHistories?: number;
};
/* ai chat modules props */
export type AIChatModuleProps = {
[ModuleInputKeyEnum.aiModel]: string;

View File

@ -90,6 +90,7 @@ export type DispatchNodeResponseType = {
// tool
toolCallTokens?: number;
toolDetail?: ChatHistoryItemResType[];
toolStop?: boolean;
};
export type DispatchNodeResultType<T> = {

View File

@ -8,7 +8,9 @@ import { ClassifyQuestionModule } from './system/classifyQuestion';
import { ContextExtractModule } from './system/contextExtract';
import { HttpModule468 } from './system/http468';
import { HttpModule } from './system/abandon/http';
import { ToolModule } from './system/tools';
import { StopToolNode } from './system/stopTool';
import { RunAppModule } from './system/runApp';
import { PluginInputModule } from './system/pluginInput';
@ -16,11 +18,11 @@ import { PluginOutputModule } from './system/pluginOutput';
import { RunPluginModule } from './system/runPlugin';
import { AiQueryExtension } from './system/queryExtension';
import type { FlowModuleTemplateType, moduleTemplateListType } from '../../module/type.d';
import { ModuleTemplateTypeEnum } from '../../module/constants';
import type { FlowNodeTemplateType, moduleTemplateListType } from '../../module/type.d';
import { FlowNodeTemplateTypeEnum } from '../../module/constants';
/* app flow module templates */
export const appSystemModuleTemplates: FlowModuleTemplateType[] = [
export const appSystemModuleTemplates: FlowNodeTemplateType[] = [
UserGuideModule,
UserInputModule,
AiChatModule,
@ -29,13 +31,14 @@ export const appSystemModuleTemplates: FlowModuleTemplateType[] = [
DatasetConcatModule,
RunAppModule,
ToolModule,
StopToolNode,
ClassifyQuestionModule,
ContextExtractModule,
HttpModule468,
AiQueryExtension
];
/* plugin flow module templates */
export const pluginSystemModuleTemplates: FlowModuleTemplateType[] = [
export const pluginSystemModuleTemplates: FlowNodeTemplateType[] = [
PluginInputModule,
PluginOutputModule,
AiChatModule,
@ -44,6 +47,7 @@ export const pluginSystemModuleTemplates: FlowModuleTemplateType[] = [
DatasetConcatModule,
RunAppModule,
ToolModule,
StopToolNode,
ClassifyQuestionModule,
ContextExtractModule,
HttpModule468,
@ -51,7 +55,7 @@ export const pluginSystemModuleTemplates: FlowModuleTemplateType[] = [
];
/* all module */
export const moduleTemplatesFlat: FlowModuleTemplateType[] = [
export const moduleTemplatesFlat: FlowNodeTemplateType[] = [
UserGuideModule,
UserInputModule,
AiChatModule,
@ -63,6 +67,7 @@ export const moduleTemplatesFlat: FlowModuleTemplateType[] = [
HttpModule468,
HttpModule,
ToolModule,
StopToolNode,
AiChatModule,
RunAppModule,
PluginInputModule,
@ -73,43 +78,43 @@ export const moduleTemplatesFlat: FlowModuleTemplateType[] = [
export const moduleTemplatesList: moduleTemplateListType = [
{
type: ModuleTemplateTypeEnum.userGuide,
label: 'core.module.template.Guide module',
type: FlowNodeTemplateTypeEnum.userGuide,
label: '',
list: []
},
{
type: ModuleTemplateTypeEnum.systemInput,
label: 'core.module.template.System input module',
list: []
},
{
type: ModuleTemplateTypeEnum.textAnswer,
type: FlowNodeTemplateTypeEnum.textAnswer,
label: 'core.module.template.Response module',
list: []
},
{
type: ModuleTemplateTypeEnum.functionCall,
type: FlowNodeTemplateTypeEnum.functionCall,
label: 'core.module.template.Function module',
list: []
},
{
type: ModuleTemplateTypeEnum.tools,
type: FlowNodeTemplateTypeEnum.tools,
label: 'core.module.template.Tool module',
list: []
},
{
type: ModuleTemplateTypeEnum.externalCall,
type: FlowNodeTemplateTypeEnum.externalCall,
label: 'core.module.template.External module',
list: []
},
{
type: ModuleTemplateTypeEnum.personalPlugin,
label: 'core.module.template.My plugin module',
type: FlowNodeTemplateTypeEnum.personalPlugin,
label: '',
list: []
},
{
type: ModuleTemplateTypeEnum.other,
type: FlowNodeTemplateTypeEnum.other,
label: '其他',
list: []
},
{
type: FlowNodeTemplateTypeEnum.systemInput,
label: 'core.module.template.System input module',
list: []
}
];

View File

@ -59,7 +59,7 @@ export const Input_Template_DynamicInput: FlowNodeInputItemType = {
hideInApp: true
};
export const Input_Template_AiModel: FlowNodeInputItemType = {
export const Input_Template_SelectAIModel: FlowNodeInputItemType = {
key: ModuleInputKeyEnum.aiModel,
type: FlowNodeInputTypeEnum.selectLLMModel,
label: 'core.module.input.label.aiModel',
@ -68,6 +68,15 @@ export const Input_Template_AiModel: FlowNodeInputItemType = {
showTargetInApp: false,
showTargetInPlugin: false
};
export const Input_Template_SettingAiModel: FlowNodeInputItemType = {
key: ModuleInputKeyEnum.aiModel,
type: FlowNodeInputTypeEnum.settingLLMModel,
label: 'core.module.input.label.aiModel',
required: true,
valueType: ModuleIOValueTypeEnum.string,
showTargetInApp: false,
showTargetInPlugin: false
};
export const Input_Template_System_Prompt: FlowNodeInputItemType = {
key: ModuleInputKeyEnum.aiSystemPrompt,
@ -83,7 +92,7 @@ export const Input_Template_System_Prompt: FlowNodeInputItemType = {
export const Input_Template_Dataset_Quote: FlowNodeInputItemType = {
key: ModuleInputKeyEnum.aiChatDatasetQuote,
type: FlowNodeInputTypeEnum.target,
type: FlowNodeInputTypeEnum.settingDatasetQuotePrompt,
label: '知识库引用',
description: 'core.module.Dataset quote.Input description',
valueType: ModuleIOValueTypeEnum.datasetQuote,

View File

@ -3,11 +3,11 @@ import {
FlowNodeOutputTypeEnum,
FlowNodeTypeEnum
} from '../../../node/constant';
import { FlowModuleTemplateType } from '../../../type';
import { FlowNodeTemplateType } from '../../../type';
import {
ModuleIOValueTypeEnum,
ModuleInputKeyEnum,
ModuleTemplateTypeEnum
FlowNodeTemplateTypeEnum
} from '../../../constants';
import {
Input_Template_AddInputParam,
@ -16,9 +16,9 @@ import {
} from '../../input';
import { Output_Template_AddOutput, Output_Template_Finish } from '../../output';
export const HttpModule: FlowModuleTemplateType = {
export const HttpModule: FlowNodeTemplateType = {
id: FlowNodeTypeEnum.httpRequest,
templateType: ModuleTemplateTypeEnum.externalCall,
templateType: FlowNodeTemplateTypeEnum.externalCall,
flowType: FlowNodeTypeEnum.httpRequest,
avatar: '/imgs/module/http.png',
name: 'core.module.template.Http request',

View File

@ -3,15 +3,15 @@ import {
FlowNodeOutputTypeEnum,
FlowNodeTypeEnum
} from '../../node/constant';
import { FlowModuleTemplateType } from '../../type.d';
import { FlowNodeTemplateType } from '../../type.d';
import {
ModuleIOValueTypeEnum,
ModuleInputKeyEnum,
ModuleOutputKeyEnum,
ModuleTemplateTypeEnum
FlowNodeTemplateTypeEnum
} from '../../constants';
import {
Input_Template_AiModel,
Input_Template_SettingAiModel,
Input_Template_Dataset_Quote,
Input_Template_History,
Input_Template_Switch,
@ -21,18 +21,18 @@ import {
import { chatNodeSystemPromptTip } from '../tip';
import { Output_Template_Finish, Output_Template_UserChatInput } from '../output';
export const AiChatModule: FlowModuleTemplateType = {
export const AiChatModule: FlowNodeTemplateType = {
id: FlowNodeTypeEnum.chatNode,
templateType: ModuleTemplateTypeEnum.textAnswer,
templateType: FlowNodeTemplateTypeEnum.textAnswer,
flowType: FlowNodeTypeEnum.chatNode,
avatar: '/imgs/module/AI.png',
name: 'AI 对话',
intro: 'AI 大模型对话',
showStatus: true,
// isTool: true,
isTool: true,
inputs: [
Input_Template_Switch,
Input_Template_AiModel,
Input_Template_SettingAiModel,
// --- settings modal
{
key: ModuleInputKeyEnum.aiChatTemperature,
@ -83,14 +83,6 @@ export const AiChatModule: FlowModuleTemplateType = {
showTargetInApp: false,
showTargetInPlugin: false
},
{
key: ModuleInputKeyEnum.aiChatSettingModal,
type: FlowNodeInputTypeEnum.aiSettings,
label: '',
valueType: ModuleIOValueTypeEnum.any,
showTargetInApp: false,
showTargetInPlugin: false
},
// settings modal ---
{
...Input_Template_System_Prompt,

View File

@ -1,12 +1,16 @@
import { FlowNodeInputTypeEnum, FlowNodeTypeEnum } from '../../node/constant';
import { FlowModuleTemplateType } from '../../type.d';
import { ModuleIOValueTypeEnum, ModuleInputKeyEnum, ModuleTemplateTypeEnum } from '../../constants';
import { FlowNodeTemplateType } from '../../type.d';
import {
ModuleIOValueTypeEnum,
ModuleInputKeyEnum,
FlowNodeTemplateTypeEnum
} from '../../constants';
import { Input_Template_Switch } from '../input';
import { Output_Template_Finish } from '../output';
export const AssignedAnswerModule: FlowModuleTemplateType = {
export const AssignedAnswerModule: FlowNodeTemplateType = {
id: FlowNodeTypeEnum.answerNode,
templateType: ModuleTemplateTypeEnum.textAnswer,
templateType: FlowNodeTemplateTypeEnum.textAnswer,
flowType: FlowNodeTypeEnum.answerNode,
avatar: '/imgs/module/reply.png',
name: '指定回复',

View File

@ -3,10 +3,14 @@ import {
FlowNodeOutputTypeEnum,
FlowNodeTypeEnum
} from '../../node/constant';
import { FlowModuleTemplateType } from '../../type.d';
import { ModuleIOValueTypeEnum, ModuleInputKeyEnum, ModuleTemplateTypeEnum } from '../../constants';
import { FlowNodeTemplateType } from '../../type.d';
import {
Input_Template_AiModel,
ModuleIOValueTypeEnum,
ModuleInputKeyEnum,
FlowNodeTemplateTypeEnum
} from '../../constants';
import {
Input_Template_SelectAIModel,
Input_Template_History,
Input_Template_Switch,
Input_Template_UserChatInput
@ -15,9 +19,9 @@ import { Output_Template_UserChatInput } from '../output';
import { Input_Template_System_Prompt } from '../input';
import { LLMModelTypeEnum } from '../../../ai/constants';
export const ClassifyQuestionModule: FlowModuleTemplateType = {
export const ClassifyQuestionModule: FlowNodeTemplateType = {
id: FlowNodeTypeEnum.classifyQuestion,
templateType: ModuleTemplateTypeEnum.functionCall,
templateType: FlowNodeTemplateTypeEnum.functionCall,
flowType: FlowNodeTypeEnum.classifyQuestion,
avatar: '/imgs/module/cq.png',
name: '问题分类',
@ -26,7 +30,7 @@ export const ClassifyQuestionModule: FlowModuleTemplateType = {
inputs: [
Input_Template_Switch,
{
...Input_Template_AiModel,
...Input_Template_SelectAIModel,
llmModelType: LLMModelTypeEnum.classify
},
{

View File

@ -3,19 +3,23 @@ import {
FlowNodeOutputTypeEnum,
FlowNodeTypeEnum
} from '../../node/constant';
import { FlowModuleTemplateType } from '../../type.d';
import { FlowNodeTemplateType } from '../../type.d';
import {
ModuleIOValueTypeEnum,
ModuleInputKeyEnum,
ModuleOutputKeyEnum,
ModuleTemplateTypeEnum
FlowNodeTemplateTypeEnum
} from '../../constants';
import { Input_Template_AiModel, Input_Template_History, Input_Template_Switch } from '../input';
import {
Input_Template_SelectAIModel,
Input_Template_History,
Input_Template_Switch
} from '../input';
import { LLMModelTypeEnum } from '../../../ai/constants';
export const ContextExtractModule: FlowModuleTemplateType = {
export const ContextExtractModule: FlowNodeTemplateType = {
id: FlowNodeTypeEnum.contentExtract,
templateType: ModuleTemplateTypeEnum.functionCall,
templateType: FlowNodeTemplateTypeEnum.functionCall,
flowType: FlowNodeTypeEnum.contentExtract,
avatar: '/imgs/module/extract.png',
name: '文本内容提取',
@ -25,7 +29,7 @@ export const ContextExtractModule: FlowModuleTemplateType = {
inputs: [
Input_Template_Switch,
{
...Input_Template_AiModel,
...Input_Template_SelectAIModel,
llmModelType: LLMModelTypeEnum.extractFields
},
{
@ -35,7 +39,6 @@ export const ContextExtractModule: FlowModuleTemplateType = {
label: '提取要求描述',
description:
'给AI一些对应的背景知识或要求描述引导AI更好的完成任务。\n该输入框可使用全局变量。',
required: true,
placeholder:
'例如: \n1. 当前时间为: {{cTime}}。你是一个实验室预约助手,你的任务是帮助用户预约实验室,从文本中获取对应的预约信息。\n2. 你是谷歌搜索助手,需要从文本中提取出合适的搜索词。',
showTargetInApp: true,

View File

@ -3,12 +3,12 @@ import {
FlowNodeOutputTypeEnum,
FlowNodeTypeEnum
} from '../../node/constant';
import { FlowModuleTemplateType } from '../../type.d';
import { FlowNodeTemplateType } from '../../type.d';
import {
ModuleIOValueTypeEnum,
ModuleInputKeyEnum,
ModuleOutputKeyEnum,
ModuleTemplateTypeEnum
FlowNodeTemplateTypeEnum
} from '../../constants';
import { Input_Template_Dataset_Quote, Input_Template_Switch } from '../input';
import { Output_Template_Finish } from '../output';
@ -20,10 +20,10 @@ export const getOneQuoteInputTemplate = (key = getNanoid()) => ({
type: FlowNodeInputTypeEnum.hidden
});
export const DatasetConcatModule: FlowModuleTemplateType = {
export const DatasetConcatModule: FlowNodeTemplateType = {
id: FlowNodeTypeEnum.datasetConcatNode,
flowType: FlowNodeTypeEnum.datasetConcatNode,
templateType: ModuleTemplateTypeEnum.tools,
templateType: FlowNodeTemplateTypeEnum.other,
avatar: '/imgs/module/concat.svg',
name: '知识库搜索引用合并',
intro: '可以将多个知识库搜索结果进行合并输出。使用 RRF 的合并方式进行最终排序输出。',

View File

@ -3,20 +3,20 @@ import {
FlowNodeOutputTypeEnum,
FlowNodeTypeEnum
} from '../../node/constant';
import { FlowModuleTemplateType } from '../../type.d';
import { FlowNodeTemplateType } from '../../type.d';
import {
ModuleIOValueTypeEnum,
ModuleInputKeyEnum,
ModuleOutputKeyEnum,
ModuleTemplateTypeEnum
FlowNodeTemplateTypeEnum
} from '../../constants';
import { Input_Template_Switch, Input_Template_UserChatInput } from '../input';
import { Output_Template_Finish, Output_Template_UserChatInput } from '../output';
import { DatasetSearchModeEnum } from '../../../dataset/constants';
export const DatasetSearchModule: FlowModuleTemplateType = {
export const DatasetSearchModule: FlowNodeTemplateType = {
id: FlowNodeTypeEnum.datasetSearchNode,
templateType: ModuleTemplateTypeEnum.functionCall,
templateType: FlowNodeTemplateTypeEnum.functionCall,
flowType: FlowNodeTypeEnum.datasetSearchNode,
avatar: '/imgs/module/db.png',
name: '知识库搜索',

View File

@ -3,12 +3,12 @@ import {
FlowNodeOutputTypeEnum,
FlowNodeTypeEnum
} from '../../node/constant';
import { FlowModuleTemplateType } from '../../type';
import { FlowNodeTemplateType } from '../../type';
import {
ModuleIOValueTypeEnum,
ModuleInputKeyEnum,
ModuleOutputKeyEnum,
ModuleTemplateTypeEnum
FlowNodeTemplateTypeEnum
} from '../../constants';
import {
Input_Template_AddInputParam,
@ -17,9 +17,9 @@ import {
} from '../input';
import { Output_Template_AddOutput, Output_Template_Finish } from '../output';
export const HttpModule468: FlowModuleTemplateType = {
export const HttpModule468: FlowNodeTemplateType = {
id: FlowNodeTypeEnum.httpRequest468,
templateType: ModuleTemplateTypeEnum.externalCall,
templateType: FlowNodeTemplateTypeEnum.externalCall,
flowType: FlowNodeTypeEnum.httpRequest468,
avatar: '/imgs/module/http.png',
name: 'HTTP 请求',

View File

@ -1,15 +1,43 @@
import { ModuleTemplateTypeEnum } from '../../constants';
import { FlowNodeTypeEnum } from '../../node/constant';
import { FlowModuleTemplateType } from '../../type.d';
import {
FlowNodeTemplateTypeEnum,
ModuleIOValueTypeEnum,
ModuleInputKeyEnum,
ModuleOutputKeyEnum
} from '../../constants';
import {
FlowNodeInputTypeEnum,
FlowNodeOutputTypeEnum,
FlowNodeTypeEnum
} from '../../node/constant';
import { FlowNodeTemplateType } from '../../type.d';
export const PluginInputModule: FlowModuleTemplateType = {
export const PluginInputModule: FlowNodeTemplateType = {
id: FlowNodeTypeEnum.pluginInput,
templateType: ModuleTemplateTypeEnum.systemInput,
templateType: FlowNodeTemplateTypeEnum.systemInput,
flowType: FlowNodeTypeEnum.pluginInput,
avatar: '/imgs/module/input.png',
name: '定义插件输入',
intro: '自定义配置外部输入,使用插件时,仅暴露自定义配置的输入',
showStatus: false,
inputs: [],
outputs: []
inputs: [
{
key: ModuleInputKeyEnum.pluginStart,
type: FlowNodeInputTypeEnum.hidden,
valueType: ModuleIOValueTypeEnum.boolean,
label: '插件开始运行',
description:
'插件开始运行时,会输出一个 True 的标识。有时候,插件不会有额外的的输入,为了顺利的进入下一个阶段,你可以将该值连接到下一个节点的触发器中。',
showTargetInApp: true,
showTargetInPlugin: true
}
],
outputs: [
{
key: ModuleOutputKeyEnum.pluginStart,
label: '插件开始运行',
type: FlowNodeOutputTypeEnum.source,
valueType: ModuleIOValueTypeEnum.boolean,
targets: []
}
]
};

View File

@ -1,10 +1,10 @@
import { ModuleTemplateTypeEnum } from '../../constants';
import { FlowNodeTemplateTypeEnum } from '../../constants';
import { FlowNodeTypeEnum } from '../../node/constant';
import { FlowModuleTemplateType } from '../../type.d';
import { FlowNodeTemplateType } from '../../type.d';
export const PluginOutputModule: FlowModuleTemplateType = {
export const PluginOutputModule: FlowNodeTemplateType = {
id: FlowNodeTypeEnum.pluginOutput,
templateType: ModuleTemplateTypeEnum.systemInput,
templateType: FlowNodeTemplateTypeEnum.systemInput,
flowType: FlowNodeTypeEnum.pluginOutput,
avatar: '/imgs/module/output.png',
name: '定义插件输出',

View File

@ -3,25 +3,25 @@ import {
FlowNodeOutputTypeEnum,
FlowNodeTypeEnum
} from '../../node/constant';
import { FlowModuleTemplateType } from '../../type';
import { FlowNodeTemplateType } from '../../type';
import {
ModuleIOValueTypeEnum,
ModuleInputKeyEnum,
ModuleOutputKeyEnum,
ModuleTemplateTypeEnum
FlowNodeTemplateTypeEnum
} from '../../constants';
import {
Input_Template_History,
Input_Template_Switch,
Input_Template_UserChatInput,
Input_Template_AiModel
Input_Template_SelectAIModel
} from '../input';
import { Output_Template_UserChatInput } from '../output';
import { LLMModelTypeEnum } from '../../../ai/constants';
export const AiQueryExtension: FlowModuleTemplateType = {
export const AiQueryExtension: FlowNodeTemplateType = {
id: FlowNodeTypeEnum.chatNode,
templateType: ModuleTemplateTypeEnum.other,
templateType: FlowNodeTemplateTypeEnum.other,
flowType: FlowNodeTypeEnum.queryExtension,
avatar: '/imgs/module/cfr.svg',
name: '问题优化',
@ -31,7 +31,7 @@ export const AiQueryExtension: FlowModuleTemplateType = {
inputs: [
Input_Template_Switch,
{
...Input_Template_AiModel,
...Input_Template_SelectAIModel,
llmModelType: LLMModelTypeEnum.queryExtension
},
{

View File

@ -3,12 +3,12 @@ import {
FlowNodeOutputTypeEnum,
FlowNodeTypeEnum
} from '../../node/constant';
import { FlowModuleTemplateType } from '../../type.d';
import { FlowNodeTemplateType } from '../../type.d';
import {
ModuleIOValueTypeEnum,
ModuleInputKeyEnum,
ModuleOutputKeyEnum,
ModuleTemplateTypeEnum
FlowNodeTemplateTypeEnum
} from '../../constants';
import {
Input_Template_History,
@ -17,9 +17,9 @@ import {
} from '../input';
import { Output_Template_Finish, Output_Template_UserChatInput } from '../output';
export const RunAppModule: FlowModuleTemplateType = {
export const RunAppModule: FlowNodeTemplateType = {
id: FlowNodeTypeEnum.runApp,
templateType: ModuleTemplateTypeEnum.externalCall,
templateType: FlowNodeTemplateTypeEnum.externalCall,
flowType: FlowNodeTypeEnum.runApp,
avatar: '/imgs/module/app.png',
name: '应用调用',

View File

@ -1,10 +1,10 @@
import { ModuleTemplateTypeEnum } from '../../constants';
import { FlowNodeTemplateTypeEnum } from '../../constants';
import { FlowNodeTypeEnum } from '../../node/constant';
import { FlowModuleTemplateType } from '../../type.d';
import { FlowNodeTemplateType } from '../../type.d';
export const RunPluginModule: FlowModuleTemplateType = {
export const RunPluginModule: FlowNodeTemplateType = {
id: FlowNodeTypeEnum.pluginModule,
templateType: ModuleTemplateTypeEnum.externalCall,
templateType: FlowNodeTemplateTypeEnum.externalCall,
flowType: FlowNodeTypeEnum.pluginModule,
intro: '',
name: '',

View File

@ -0,0 +1,16 @@
import { FlowNodeTypeEnum } from '../../node/constant';
import { FlowNodeTemplateType } from '../../type.d';
import { FlowNodeTemplateTypeEnum } from '../../constants';
import { Input_Template_Switch } from '../input';
export const StopToolNode: FlowNodeTemplateType = {
id: FlowNodeTypeEnum.stopTool,
templateType: FlowNodeTemplateTypeEnum.functionCall,
flowType: FlowNodeTypeEnum.stopTool,
avatar: '/imgs/module/toolStop.svg',
name: '工具调用终止',
intro:
'该模块需配置工具调用使用。当该模块被执行时本次工具调用将会强制结束并且不再调用AI针对工具调用结果回答问题。',
inputs: [Input_Template_Switch],
outputs: []
};

View File

@ -1,12 +1,17 @@
import { FlowNodeOutputTypeEnum, FlowNodeTypeEnum } from '../../node/constant';
import { FlowModuleTemplateType } from '../../type.d';
import {
FlowNodeInputTypeEnum,
FlowNodeOutputTypeEnum,
FlowNodeTypeEnum
} from '../../node/constant';
import { FlowNodeTemplateType } from '../../type.d';
import {
ModuleIOValueTypeEnum,
ModuleOutputKeyEnum,
ModuleTemplateTypeEnum
FlowNodeTemplateTypeEnum,
ModuleInputKeyEnum
} from '../../constants';
import {
Input_Template_AiModel,
Input_Template_SettingAiModel,
Input_Template_History,
Input_Template_Switch,
Input_Template_System_Prompt,
@ -16,19 +21,43 @@ import { chatNodeSystemPromptTip } from '../tip';
import { Output_Template_Finish, Output_Template_UserChatInput } from '../output';
import { LLMModelTypeEnum } from '../../../ai/constants';
export const ToolModule: FlowModuleTemplateType = {
export const ToolModule: FlowNodeTemplateType = {
id: FlowNodeTypeEnum.tools,
flowType: FlowNodeTypeEnum.tools,
templateType: ModuleTemplateTypeEnum.functionCall,
templateType: FlowNodeTemplateTypeEnum.functionCall,
avatar: '/imgs/module/tool.svg',
name: '工具调用(实验)',
intro: '通过AI模型自动选择一个或多个工具进行调用。工具可以是其他功能块或插件。',
intro: '通过AI模型自动选择一个或多个功能块进行调用,也可以对插件进行调用。',
showStatus: true,
inputs: [
Input_Template_Switch,
{
...Input_Template_AiModel,
llmModelType: LLMModelTypeEnum.toolCall
...Input_Template_SettingAiModel,
llmModelType: LLMModelTypeEnum.all
},
{
key: ModuleInputKeyEnum.aiChatTemperature,
type: FlowNodeInputTypeEnum.hidden, // Set in the pop-up window
label: '',
value: 0,
valueType: ModuleIOValueTypeEnum.number,
min: 0,
max: 10,
step: 1,
showTargetInApp: false,
showTargetInPlugin: false
},
{
key: ModuleInputKeyEnum.aiChatMaxToken,
type: FlowNodeInputTypeEnum.hidden, // Set in the pop-up window
label: '',
value: 2000,
valueType: ModuleIOValueTypeEnum.number,
min: 100,
max: 4000,
step: 50,
showTargetInApp: false,
showTargetInPlugin: false
},
{
...Input_Template_System_Prompt,

View File

@ -1,11 +1,15 @@
import { FlowNodeInputTypeEnum, FlowNodeTypeEnum } from '../../node/constant';
import { FlowModuleTemplateType } from '../../type.d';
import { FlowNodeTemplateType } from '../../type.d';
import { userGuideTip } from '../tip';
import { ModuleIOValueTypeEnum, ModuleInputKeyEnum, ModuleTemplateTypeEnum } from '../../constants';
import {
ModuleIOValueTypeEnum,
ModuleInputKeyEnum,
FlowNodeTemplateTypeEnum
} from '../../constants';
export const UserGuideModule: FlowModuleTemplateType = {
export const UserGuideModule: FlowNodeTemplateType = {
id: FlowNodeTypeEnum.userGuide,
templateType: ModuleTemplateTypeEnum.userGuide,
templateType: FlowNodeTemplateTypeEnum.userGuide,
flowType: FlowNodeTypeEnum.userGuide,
avatar: '/imgs/module/userGuide.png',
name: '全局配置',

View File

@ -3,17 +3,17 @@ import {
FlowNodeOutputTypeEnum,
FlowNodeTypeEnum
} from '../../node/constant';
import { FlowModuleTemplateType } from '../../type.d';
import { FlowNodeTemplateType } from '../../type.d';
import {
ModuleIOValueTypeEnum,
ModuleInputKeyEnum,
ModuleOutputKeyEnum,
ModuleTemplateTypeEnum
FlowNodeTemplateTypeEnum
} from '../../constants';
export const UserInputModule: FlowModuleTemplateType = {
export const UserInputModule: FlowNodeTemplateType = {
id: FlowNodeTypeEnum.questionInput,
templateType: ModuleTemplateTypeEnum.systemInput,
templateType: FlowNodeTemplateTypeEnum.systemInput,
flowType: FlowNodeTypeEnum.questionInput,
avatar: '/imgs/module/userChatInput.svg',
name: '对话入口',

View File

@ -2,7 +2,7 @@ import { FlowNodeTypeEnum } from './node/constant';
import {
ModuleIOValueTypeEnum,
ModuleOutputKeyEnum,
ModuleTemplateTypeEnum,
FlowNodeTemplateTypeEnum,
VariableInputEnum
} from './constants';
import { DispatchNodeResponseKeyEnum } from './runtime/constants';
@ -15,10 +15,11 @@ import {
} from '../chat/type';
import { ChatNodeUsageType } from '../../support/wallet/bill/type';
import { RunningModuleItemType } from './runtime/type';
import { PluginTypeEnum } from 'core/plugin/constants';
export type FlowModuleTemplateType = {
export type FlowNodeTemplateType = {
id: string; // module id, unique
templateType: `${ModuleTemplateTypeEnum}`;
templateType: `${FlowNodeTemplateTypeEnum}`;
flowType: `${FlowNodeTypeEnum}`; // render node card
avatar?: string;
name: string;
@ -27,14 +28,18 @@ export type FlowModuleTemplateType = {
showStatus?: boolean; // chatting response step status
inputs: FlowNodeInputItemType[];
outputs: FlowNodeOutputItemType[];
// plugin data
pluginType?: `${PluginTypeEnum}`;
parentId?: string;
};
export type FlowModuleItemType = FlowModuleTemplateType & {
export type FlowModuleItemType = FlowNodeTemplateType & {
moduleId: string;
};
export type moduleTemplateListType = {
type: `${ModuleTemplateTypeEnum}`;
type: `${FlowNodeTemplateTypeEnum}`;
label: string;
list: FlowModuleTemplateType[];
list: FlowNodeTemplateType[];
}[];
// store module type

View File

@ -9,6 +9,7 @@ import { FlowNodeInputItemType, FlowNodeOutputItemType } from './node/type';
import { AppTTSConfigType, ModuleItemType, VariableItemType } from './type';
import { Input_Template_Switch } from './template/input';
import { EditorVariablePickerType } from '../../../web/components/common/Textarea/PromptEditor/type';
import { Output_Template_Finish } from './template/output';
/* module */
export const getGuideModule = (modules: ModuleItemType[]) =>
@ -92,13 +93,16 @@ export const plugin2ModuleIO = (
connected: false
}))
]
: [],
: [Input_Template_Switch],
outputs: pluginOutput
? pluginOutput.outputs.map((item) => ({
...item,
edit: false
}))
: []
? [
...pluginOutput.outputs.map((item) => ({
...item,
edit: false
})),
Output_Template_Finish
]
: [Output_Template_Finish]
};
};

View File

@ -27,6 +27,26 @@ export const defaultModules: ModuleItemType[] = [
}
];
export enum PluginTypeEnum {
folder = 'folder',
custom = 'custom',
http = 'http'
}
export const pluginTypeMap = {
[PluginTypeEnum.folder]: {
label: '文件夹',
icon: 'file/fill/folder'
},
[PluginTypeEnum.custom]: {
label: '自定义',
icon: 'common/custom'
},
[PluginTypeEnum.http]: {
label: 'HTTP',
icon: 'common/http'
}
};
export enum PluginSourceEnum {
personal = 'personal',
community = 'community',

View File

@ -1,21 +1,40 @@
import type { ModuleItemType } from '../module/type.d';
import { PluginTypeEnum } from './constants';
import { HttpAuthMethodType } from './httpPlugin/type';
export type CreateOnePluginParams = {
name: string;
avatar: string;
intro: string;
modules?: ModuleItemType[];
modules: ModuleItemType[];
parentId: string | null;
type: `${PluginTypeEnum}`;
metadata?: {
apiSchemaStr?: string;
customHeaders?: string;
};
};
export type UpdatePluginParams = {
id: string;
parentId?: string | null;
name?: string;
avatar?: string;
intro?: string;
modules?: ModuleItemType[];
metadata?: {
apiSchemaStr?: string;
customHeaders?: string;
};
};
export type PluginListItemType = {
_id: string;
parentId: string;
type: `${PluginTypeEnum}`;
name: string;
avatar: string;
intro: string;
metadata?: {
apiSchemaStr?: string;
customHeaders?: string;
};
};

View File

@ -0,0 +1,13 @@
export type PathDataType = {
name: string;
description: string;
method: string;
path: string;
params: any[];
request: any;
};
export type OpenApiJsonSchema = {
pathData: PathDataType[];
serverPath: string;
};

View File

@ -0,0 +1,516 @@
import { getNanoid } from '../../../common/string/tools';
import { OpenApiJsonSchema } from './type';
import yaml from 'js-yaml';
import { OpenAPIV3 } from 'openapi-types';
import { PluginTypeEnum } from '../constants';
import { FlowNodeInputItemType, FlowNodeOutputItemType } from '../../module/node/type';
import { FlowNodeInputTypeEnum, FlowNodeOutputTypeEnum } from '../../module/node/constant';
import { ModuleIOValueTypeEnum } from '../../module/constants';
import { PluginInputModule } from '../../module/template/system/pluginInput';
import { PluginOutputModule } from '../../module/template/system/pluginOutput';
import { HttpModule468 } from '../../module/template/system/http468';
import { HttpParamAndHeaderItemType } from '../../module/api';
import { CreateOnePluginParams } from '../controller';
import { ModuleItemType } from '../../module/type';
import { HttpImgUrl } from '../../../common/file/image/constants';
export const str2OpenApiSchema = (yamlStr = ''): OpenApiJsonSchema => {
try {
const data: OpenAPIV3.Document = (() => {
try {
return JSON.parse(yamlStr);
} catch (jsonError) {
return yaml.load(yamlStr, { schema: yaml.FAILSAFE_SCHEMA });
}
})();
const serverPath = data.servers?.[0].url || '';
const pathData = Object.keys(data.paths)
.map((path) => {
const methodData: any = data.paths[path];
return Object.keys(methodData)
.filter((method) =>
['get', 'post', 'put', 'delete', 'patch'].includes(method.toLocaleLowerCase())
)
.map((method) => {
const methodInfo = methodData[method];
if (methodInfo.deprecated) return;
const result = {
path,
method,
name: methodInfo.operationId || path,
description: methodInfo.description,
params: methodInfo.parameters,
request: methodInfo?.requestBody
};
return result;
});
})
.flat()
.filter(Boolean) as OpenApiJsonSchema['pathData'];
return { pathData, serverPath };
} catch (err) {
throw new Error('Invalid Schema');
}
};
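/* Hypothetical usage sketch of str2OpenApiSchema (illustrative only): the endpoint,
   operationId, and parameter below are invented. */
// const schema = str2OpenApiSchema(
//   JSON.stringify({
//     openapi: '3.0.0',
//     info: { title: 'demo', version: '1.0.0' },
//     servers: [{ url: 'https://api.example.com' }],
//     paths: {
//       '/weather': {
//         get: {
//           operationId: 'getWeather',
//           description: 'Query weather by city',
//           parameters: [{ name: 'city', in: 'query', required: true, schema: { type: 'string' } }]
//         }
//       }
//     }
//   })
// );
// schema.serverPath === 'https://api.example.com'
// schema.pathData[0].name === 'getWeather'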
export const httpApiSchema2Plugins = ({
parentId,
apiSchemaStr = '',
customHeader = ''
}: {
parentId: string;
apiSchemaStr?: string;
customHeader?: string;
}): CreateOnePluginParams[] => {
const jsonSchema = str2OpenApiSchema(apiSchemaStr);
const baseUrl = jsonSchema.serverPath;
return jsonSchema.pathData.map((item) => {
const pluginOutputId = getNanoid();
const httpId = getNanoid();
const pluginOutputKey = 'result';
const properties = item.request?.content?.['application/json']?.schema?.properties;
const propsKeys = properties ? Object.keys(properties) : [];
const pluginInputs: FlowNodeInputItemType[] = [
...(item.params?.map((param: any) => {
return {
key: param.name,
valueType: ModuleIOValueTypeEnum.string,
label: param.name,
type: FlowNodeInputTypeEnum.target,
required: param.required,
description: param.description,
edit: true,
editField: {
key: true,
name: true,
description: true,
required: true,
dataType: true,
inputType: true,
isToolInput: true
},
connected: true,
toolDescription: param.description
};
}) || []),
...(propsKeys?.map((key) => {
const prop = properties[key];
return {
key,
valueType: ModuleIOValueTypeEnum.string,
label: key,
type: FlowNodeInputTypeEnum.target,
required: false,
description: prop.description,
edit: true,
editField: {
key: true,
name: true,
description: true,
required: true,
dataType: true,
inputType: true,
isToolInput: true
},
connected: true,
toolDescription: prop.description
};
}) || [])
];
const pluginOutputs: FlowNodeOutputItemType[] = [
...(item.params?.map((param: any) => {
return {
key: param.name,
valueType: ModuleIOValueTypeEnum.string,
label: param.name,
type: FlowNodeOutputTypeEnum.source,
edit: true,
targets: [
{
moduleId: httpId,
key: param.name
}
]
};
}) || []),
...(propsKeys?.map((key) => {
return {
key,
valueType: ModuleIOValueTypeEnum.string,
label: key,
type: FlowNodeOutputTypeEnum.source,
edit: true,
targets: [
{
moduleId: httpId,
key
}
]
};
}) || [])
];
const httpInputs: FlowNodeInputItemType[] = [
...(item.params?.map((param: any) => {
return {
key: param.name,
valueType: ModuleIOValueTypeEnum.string,
label: param.name,
type: FlowNodeInputTypeEnum.target,
description: param.description,
edit: true,
editField: {
key: true,
description: true,
dataType: true
},
connected: true
};
}) || []),
...(propsKeys?.map((key) => {
const prop = properties[key];
return {
key,
valueType: ModuleIOValueTypeEnum.string,
label: key,
type: FlowNodeInputTypeEnum.target,
description: prop.description,
edit: true,
editField: {
key: true,
description: true,
dataType: true
},
connected: true
};
}) || [])
];
/* http node setting */
const httpNodeParams: HttpParamAndHeaderItemType[] = [];
const httpNodeHeaders: HttpParamAndHeaderItemType[] = [];
let httpNodeBody = '{}';
const requestUrl = `${baseUrl}${item.path}`;
if (item.params && item.params.length > 0) {
for (const param of item.params) {
if (param.in === 'header') {
httpNodeHeaders.push({
key: param.name,
type: param.schema?.type || ModuleIOValueTypeEnum.string,
value: `{{${param.name}}}`
});
} else if (param.in === 'body') {
httpNodeBody = JSON.stringify(
{ ...JSON.parse(httpNodeBody), [param.name]: `{{${param.name}}}` },
null,
2
);
} else if (param.in === 'query') {
httpNodeParams.push({
key: param.name,
type: param.schema?.type || ModuleIOValueTypeEnum.string,
value: `{{${param.name}}}`
});
}
}
}
if (item.request) {
const properties = item.request?.content?.['application/json']?.schema?.properties;
const keys = Object.keys(properties);
if (keys.length > 0) {
httpNodeBody = JSON.stringify(
keys.reduce((acc: any, key) => {
acc[key] = `{{${key}}}`;
return acc;
}, {}),
null,
2
);
}
}
if (customHeader) {
const headersObj = (() => {
try {
return JSON.parse(customHeader) as Record<string, string>;
} catch (err) {
return {};
}
})();
for (const key in headersObj) {
httpNodeHeaders.push({
key,
type: 'string',
// @ts-ignore
value: headersObj[key]
});
}
}
/* Combine complete modules */
const modules: ModuleItemType[] = [
{
moduleId: getNanoid(),
name: PluginInputModule.name,
intro: PluginInputModule.intro,
avatar: PluginInputModule.avatar,
flowType: PluginInputModule.flowType,
showStatus: PluginInputModule.showStatus,
position: {
x: 616.4226348688949,
y: -165.05298493910115
},
inputs: [
{
key: 'pluginStart',
type: 'hidden',
valueType: 'boolean',
label: 'Plugin started',
description:
'When the plugin starts running, it outputs a True flag. Sometimes a plugin has no extra input; to move smoothly into the next stage, you can connect this value to the trigger of the next node.',
showTargetInApp: true,
showTargetInPlugin: true,
connected: true
},
...pluginInputs
],
outputs: [
{
key: 'pluginStart',
label: 'Plugin started',
type: 'source',
valueType: 'boolean',
targets:
pluginOutputs.length === 0
? [
{
moduleId: httpId,
key: 'switch'
}
]
: []
},
...pluginOutputs
]
},
{
moduleId: pluginOutputId,
name: PluginOutputModule.name,
intro: PluginOutputModule.intro,
avatar: PluginOutputModule.avatar,
flowType: PluginOutputModule.flowType,
showStatus: PluginOutputModule.showStatus,
position: {
x: 1607.7142331269126,
y: -151.8669210746189
},
inputs: [
{
key: pluginOutputKey,
valueType: 'string',
label: pluginOutputKey,
type: 'target',
required: true,
description: '',
edit: true,
editField: {
key: true,
name: true,
description: true,
required: false,
dataType: true,
inputType: false
},
connected: true
}
],
outputs: [
{
key: pluginOutputKey,
valueType: 'string',
label: pluginOutputKey,
type: 'source',
edit: true,
targets: []
}
]
},
{
moduleId: httpId,
name: HttpModule468.name,
intro: HttpModule468.intro,
avatar: HttpModule468.avatar,
flowType: HttpModule468.flowType,
showStatus: true,
position: {
x: 1042.549746602742,
y: -447.77496332641647
},
inputs: [
{
key: 'switch',
type: 'target',
label: 'core.module.input.label.switch',
description: 'core.module.input.description.Trigger',
valueType: 'any',
showTargetInApp: true,
showTargetInPlugin: true,
connected: false
},
{
key: 'system_httpMethod',
type: 'custom',
valueType: 'string',
label: '',
value: item.method.toUpperCase(),
required: true,
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
},
{
key: 'system_httpReqUrl',
type: 'hidden',
valueType: 'string',
label: '',
description: 'core.module.input.description.Http Request Url',
placeholder: 'https://api.ai.com/getInventory',
required: false,
showTargetInApp: false,
showTargetInPlugin: false,
value: requestUrl,
connected: false
},
{
key: 'system_httpHeader',
type: 'custom',
valueType: 'any',
value: httpNodeHeaders,
label: '',
description: 'core.module.input.description.Http Request Header',
placeholder: 'core.module.input.description.Http Request Header',
required: false,
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
},
{
key: 'system_httpParams',
type: 'hidden',
valueType: 'any',
value: httpNodeParams,
label: '',
required: false,
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
},
{
key: 'system_httpJsonBody',
type: 'hidden',
valueType: 'any',
value: httpNodeBody,
label: '',
required: false,
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
},
{
key: 'DYNAMIC_INPUT_KEY',
type: 'target',
valueType: 'any',
label: 'core.module.inputType.dynamicTargetInput',
description: 'core.module.input.description.dynamic input',
required: false,
showTargetInApp: false,
showTargetInPlugin: true,
hideInApp: true,
connected: false
},
{
key: 'system_addInputParam',
type: 'addInputParam',
valueType: 'any',
label: '',
required: false,
showTargetInApp: false,
showTargetInPlugin: false,
editField: {
key: true,
description: true,
dataType: true
},
defaultEditField: {
label: '',
key: '',
description: '',
inputType: 'target',
valueType: 'string'
},
connected: false
},
...httpInputs
],
outputs: [
{
key: 'finish',
label: 'core.module.output.label.running done',
description: 'core.module.output.description.running done',
valueType: 'boolean',
type: 'source',
targets: []
},
{
key: 'httpRawResponse',
label: 'Raw response',
description: 'The raw response of the HTTP request. Only string or JSON response data is accepted.',
valueType: 'any',
type: 'source',
targets: [
{
moduleId: pluginOutputId,
key: pluginOutputKey
}
]
},
{
key: 'system_addOutputParam',
type: 'addOutputParam',
valueType: 'any',
label: '',
targets: [],
editField: {
key: true,
description: true,
dataType: true,
defaultValue: true
},
defaultEditField: {
label: '',
key: '',
description: '',
outputType: 'source',
valueType: 'string'
}
}
]
}
];
return {
name: item.name,
avatar: HttpImgUrl,
intro: item.description,
parentId,
type: PluginTypeEnum.http,
modules
};
});
};
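
Usage sketch: one CreateOnePluginParams is produced per non-deprecated operation in the schema, each wired as PluginInput → HTTP468 → PluginOutput (the parentId here is illustrative):

const plugins = httpApiSchema2Plugins({
  parentId: '65f000000000000000000000', // hypothetical folder id
  apiSchemaStr: demoSchema, // the OpenAPI string from the sketch above
  customHeader: '{"Authorization":"Bearer <token>"}'
});
// plugins.length === number of non-deprecated operations
// plugins[0].modules contains 3 nodes: plugin input, plugin output, HTTP request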

View File

@ -1,6 +1,7 @@
import { ModuleTemplateTypeEnum } from 'core/module/constants';
import type { FlowModuleTemplateType, ModuleItemType } from '../module/type.d';
import { PluginSourceEnum } from './constants';
import { PluginSourceEnum, PluginTypeEnum } from './constants';
import { MethodType } from './controller';
export type PluginItemSchema = {
_id: string;
@ -12,6 +13,13 @@ export type PluginItemSchema = {
intro: string;
updateTime: Date;
modules: ModuleItemType[];
parentId: string;
type: `${PluginTypeEnum}`;
metadata?: {
pluginUid?: string;
apiSchemaStr?: string;
customHeaders?: string;
};
};
/* plugin template */
@ -19,7 +27,7 @@ export type PluginTemplateType = PluginRuntimeType & {
author?: string;
id: string;
source: `${PluginSourceEnum}`;
templateType: FlowModuleTemplateType['templateType'];
templateType: FlowNodeTemplateType['templateType'];
intro: string;
modules: ModuleItemType[];
};
@ -29,5 +37,6 @@ export type PluginRuntimeType = {
name: string;
avatar: string;
showStatus?: boolean;
isTool?: boolean;
modules: ModuleItemType[];
};

View File

@ -6,11 +6,15 @@
"dayjs": "^1.11.7",
"encoding": "^0.1.13",
"js-tiktoken": "^1.0.7",
"openapi-types": "^12.1.3",
"openai": "4.28.0",
"nanoid": "^4.0.1",
"timezones-list": "^3.0.2"
"js-yaml": "^4.1.0",
"timezones-list": "^3.0.2",
"next": "13.5.2"
},
"devDependencies": {
"@types/js-yaml": "^4.0.9",
"@types/node": "^20.8.5"
}
}

View File

@ -1,4 +1,4 @@
import { POST } from '@fastgpt/service/common/api/plusRequest';
import { POST } from './plusRequest';
export const postTextCensor = (data: { text: string }) =>
POST<{ code?: number; message: string }>('/common/censor/text_baidu', data)

View File

@ -0,0 +1,114 @@
import { SERVICE_LOCAL_HOST } from '../system/tools';
import axios, { Method, InternalAxiosRequestConfig, AxiosResponse } from 'axios';
interface ConfigType {
headers?: { [key: string]: string };
hold?: boolean;
timeout?: number;
}
interface ResponseDataType {
code: number;
message: string;
data: any;
}
/**
 * Request interceptor: runs before each request is sent.
 */
function requestStart(config: InternalAxiosRequestConfig): InternalAxiosRequestConfig {
return config;
}
/**
 * Response interceptor: runs after a response is received successfully.
 */
function responseSuccess(response: AxiosResponse<ResponseDataType>) {
return response;
}
/**
 * Check the business status code in the payload and unwrap the data field.
 */
function checkRes(data: ResponseDataType) {
if (data === undefined) {
console.log('error->', data, 'data is empty');
return Promise.reject('Server exception');
} else if (data?.code && (data.code < 200 || data.code >= 400)) {
return Promise.reject(data);
}
return data.data;
}
/**
 * Normalize any request/response error into a rejected promise with a message.
 */
function responseError(err: any) {
if (!err) {
return Promise.reject({ message: 'Unknown error' });
}
if (typeof err === 'string') {
return Promise.reject({ message: err });
}
if (err?.response?.data) {
return Promise.reject(err?.response?.data);
}
return Promise.reject(err);
}
/* Create the axios instance */
const instance = axios.create({
timeout: 60000, // request timeout
headers: {
'content-type': 'application/json',
'Cache-Control': 'no-cache'
}
});
/* Request interception */
instance.interceptors.request.use(requestStart, (err) => Promise.reject(err));
/* Response interception */
instance.interceptors.response.use(responseSuccess, (err) => Promise.reject(err));
export function request(url: string, data: any, config: ConfigType, method: Method): any {
/* Strip null/undefined fields */
for (const key in data) {
if (data[key] === null || data[key] === undefined) {
delete data[key];
}
}
return instance
.request({
baseURL: `http://${SERVICE_LOCAL_HOST}`,
url,
method,
data: ['POST', 'PUT'].includes(method) ? data : null,
params: !['POST', 'PUT'].includes(method) ? data : null,
...config // custom config
})
.then((res) => checkRes(res.data))
.catch((err) => responseError(err));
}
/**
 * API request helpers
 * @param {String} url
 * @param {Any} params
 * @param {Object} config
 * @returns {Promise}
 */
export function GET<T = undefined>(url: string, params = {}, config: ConfigType = {}): Promise<T> {
return request(url, params, config, 'GET');
}
export function POST<T = undefined>(url: string, data = {}, config: ConfigType = {}): Promise<T> {
return request(url, data, config, 'POST');
}
export function PUT<T = undefined>(url: string, data = {}, config: ConfigType = {}): Promise<T> {
return request(url, data, config, 'PUT');
}
export function DELETE<T = undefined>(url: string, data = {}, config: ConfigType = {}): Promise<T> {
return request(url, data, config, 'DELETE');
}
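
A usage sketch for these internal-request helpers (the routes and response shapes are assumptions, not part of this diff):

// POST JSON to another FastGPT service on the local network
const result = await POST<{ id: string }>('/api/core/dataset/training', { name: 'demo' });

// GET with query params (null/undefined fields are stripped before sending)
const list = await GET<{ total: number }>('/api/common/list', { page: 1, size: 20 });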

View File

@ -1,4 +1,6 @@
export const stopWords = new Set([
import { cut } from '@node-rs/jieba';
const stopWords = new Set([
'--',
'?',
'“',
@ -1506,3 +1508,14 @@ export const stopWords = new Set([
'i'
]
]);
export function jiebaSplit({ text }: { text: string }) {
const tokens = cut(text, true);
return (
tokens
.map((item) => item.replace(/[\u3000-\u303f\uff00-\uffef]/g, '').trim())
.filter((item) => item && !stopWords.has(item))
.join(' ') || ''
);
}
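
jiebaSplit prepares text for MongoDB full-text search: it tokenizes with jieba, strips full-width punctuation, drops stop words, and joins the remaining tokens with spaces. An approximate example (exact output depends on the jieba dictionary and the stop-word list):

jiebaSplit({ text: '今天杭州的天气如何?' });
// => roughly '今天 杭州 天气' (particles and punctuation removed)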

View File

@ -139,11 +139,11 @@ export const embeddingRecall = async (
const results: any = await PgClient.query(
`BEGIN;
SET LOCAL hnsw.ef_search = ${efSearch};
select id, collection_id, (vector <#> '[${vectors[0]}]') * -1 AS score
select id, collection_id, vector <#> '[${vectors[0]}]' AS score
from ${PgDatasetTableName}
where dataset_id IN (${datasetIds.map((id) => `'${String(id)}'`).join(',')})
AND vector <#> '[${vectors[0]}]' < -${similarity}
order by score desc limit ${limit};
order by score limit ${limit};
COMMIT;`
);
@ -153,7 +153,7 @@ export const embeddingRecall = async (
results: rows.map((item) => ({
id: item.id,
collectionId: item.collection_id,
score: item.score
score: item.score * -1
}))
};
} catch (error) {
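
Context for the score change above: pgvector's <#> operator returns the negative inner product, so smaller (more negative) values mean more similar. The rewritten SQL therefore orders by the raw operator output ascending, and the sign is flipped back in application code:

// sketch: restore a "higher is better" score after the ascending-order recall
const formatted = rows.map((item) => ({
  id: item.id,
  collectionId: item.collection_id,
  score: item.score * -1 // negative inner product back to positive similarity
}));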

View File

@ -2,7 +2,7 @@ import type { ChatCompletionMessageParam } from '@fastgpt/global/core/ai/type.d'
import { getAIApi } from '../config';
import { countGptMessagesTokens } from '@fastgpt/global/common/string/tiktoken';
export const Prompt_QuestionGuide = `I am not sure what to ask you next. Please generate 3 questions that guide me to keep asking. Each question should be shorter than 20 characters. Return them in JSON format: ["question 1", "question 2", "question 3"]`;
export const Prompt_QuestionGuide = `You are an AI assistant that can answer and solve my questions. Combine the previous conversation and generate 3 questions that guide me to keep asking. Each question should be shorter than 20 characters. Return them in JSON format: ["question 1", "question 2", "question 3"]`;
export async function createQuestionGuide({
messages,

View File

@ -9,7 +9,7 @@ import { ChatCompletionMessageParam } from '@fastgpt/global/core/ai/type';
*/
const defaultPrompt = `As a vector retrieval assistant, your task is to combine the chat history and, from different angles, generate different versions of the "retrieval query" for the "original question", so as to enrich the semantics and improve the precision of vector retrieval. Each generated question must point to a clear, unambiguous subject and use the same language as the original question. For example:
const defaultPrompt = `As a vector retrieval assistant, your task is to combine the chat history and, from different angles, generate different versions of the "retrieval query" for the "original question", so as to enrich the semantics and improve the precision of vector retrieval. Each generated question must point to a clear, unambiguous subject and use the same language as the original question. For example:
"""
"""

View File

@ -1,5 +1,5 @@
import { PostReRankProps, PostReRankResponse } from '@fastgpt/global/core/ai/api';
import { POST } from '@/service/common/api/request';
import { PostReRankProps, PostReRankResponse } from '@fastgpt/global/core/ai/api.d';
import { POST } from '../../../common/api/serverRequest';
export function reRankRecall({ query, inputs }: PostReRankProps) {
const model = global.reRankModels[0];

View File

@ -34,10 +34,6 @@ const AppSchema = new Schema({
default: 'advanced',
enum: Object.keys(AppTypeMap)
},
simpleTemplateId: {
type: String,
required: true
},
avatar: {
type: String,
default: '/icon/logo.svg'

View File

@ -8,6 +8,15 @@ import axios from 'axios';
import { ChatCompletionRequestMessageRoleEnum } from '@fastgpt/global/core/ai/constants';
/* slice chat context by tokens */
const filterEmptyMessages = (messages: ChatCompletionMessageParam[]) => {
return messages.filter((item) => {
if (item.role === ChatCompletionRequestMessageRoleEnum.System) return !!item.content;
if (item.role === ChatCompletionRequestMessageRoleEnum.User) return !!item.content;
if (item.role === ChatCompletionRequestMessageRoleEnum.Assistant)
return !!item.content || !!item.function_call || !!item.tool_calls;
return true;
});
};
export function filterGPTMessageByMaxTokens({
messages = [],
maxTokens
@ -38,7 +47,7 @@ export function filterGPTMessageByMaxTokens({
// If the text length is less than half of the maximum token, no calculation is required
if (rawTextLen < maxTokens * 0.5) {
return messages;
return filterEmptyMessages(messages);
}
// filter startWith system prompt
@ -81,7 +90,7 @@ export function filterGPTMessageByMaxTokens({
}
}
return [...systemPrompts, ...chats];
return filterEmptyMessages([...systemPrompts, ...chats]);
}
export const formatGPTMessagesInRequestBefore = (messages: ChatCompletionMessageParam[]) => {
return messages
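
filterEmptyMessages (added above) guards against providers that reject messages with empty content. Roughly, under the OpenAI message shapes:

filterEmptyMessages([
  { role: 'system', content: '' }, // dropped: empty system prompt
  { role: 'user', content: 'hi' }, // kept
  {
    role: 'assistant',
    content: '',
    tool_calls: [{ id: 'call_1', type: 'function', function: { name: 'f', arguments: '{}' } }]
  } // kept: carries tool_calls even though content is empty
]);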

View File

@ -67,7 +67,7 @@ const DatasetSchema = new Schema({
agentModel: {
type: String,
required: true,
default: 'gpt-3.5-turbo-16k'
default: 'gpt-3.5-turbo'
},
intro: {
type: String,

View File

@ -0,0 +1,407 @@
import {
DatasetSearchModeEnum,
DatasetSearchModeMap,
SearchScoreTypeEnum
} from '@fastgpt/global/core/dataset/constants';
import { recallFromVectorStore } from '../../../common/vectorStore/controller';
import { getVectorsByText } from '../../ai/embedding';
import { getVectorModel } from '../../ai/model';
import { MongoDatasetData } from '../data/schema';
import {
DatasetDataSchemaType,
DatasetDataWithCollectionType,
SearchDataResponseItemType
} from '@fastgpt/global/core/dataset/type';
import { MongoDatasetCollection } from '../collection/schema';
import { reRankRecall } from '../../../core/ai/rerank';
import { countPromptTokens } from '@fastgpt/global/common/string/tiktoken';
import { datasetSearchResultConcat } from '@fastgpt/global/core/dataset/search/utils';
import { hashStr } from '@fastgpt/global/common/string/tools';
import { jiebaSplit } from '../../../common/string/jieba';
type SearchDatasetDataProps = {
teamId: string;
model: string;
similarity?: number; // minimum similarity threshold; results scoring below it are filtered out
limit: number; // max Token limit
datasetIds: string[];
searchMode?: `${DatasetSearchModeEnum}`;
usingReRank?: boolean;
reRankQuery: string;
queries: string[];
};
export async function searchDatasetData(props: SearchDatasetDataProps) {
let {
teamId,
reRankQuery,
queries,
model,
similarity = 0,
limit: maxTokens,
searchMode = DatasetSearchModeEnum.embedding,
usingReRank = false,
datasetIds = []
} = props;
/* init params */
searchMode = DatasetSearchModeMap[searchMode] ? searchMode : DatasetSearchModeEnum.embedding;
usingReRank = usingReRank && global.reRankModels.length > 0;
// Compatible with topk limit
if (maxTokens < 50) {
maxTokens = 1500;
}
let set = new Set<string>();
let usingSimilarityFilter = false;
/* function */
const countRecallLimit = () => {
if (searchMode === DatasetSearchModeEnum.embedding) {
return {
embeddingLimit: 150,
fullTextLimit: 0
};
}
if (searchMode === DatasetSearchModeEnum.fullTextRecall) {
return {
embeddingLimit: 0,
fullTextLimit: 150
};
}
return {
embeddingLimit: 100,
fullTextLimit: 80
};
};
const embeddingRecall = async ({ query, limit }: { query: string; limit: number }) => {
const { vectors, tokens } = await getVectorsByText({
model: getVectorModel(model),
input: query
});
const { results } = await recallFromVectorStore({
vectors,
limit,
datasetIds,
efSearch: global.systemEnv?.pgHNSWEfSearch
});
// get q and a
const dataList = (await MongoDatasetData.find(
{
teamId,
datasetId: { $in: datasetIds },
'indexes.dataId': { $in: results.map((item) => item.id?.trim()) }
},
'datasetId collectionId q a chunkIndex indexes'
)
.populate('collectionId', 'name fileId rawLink')
.lean()) as DatasetDataWithCollectionType[];
// add score to data (results are already sorted; the first one has the highest score)
const concatResults = dataList.map((data) => {
const dataIdList = data.indexes.map((item) => item.dataId);
const maxScoreResult = results.find((item) => {
return dataIdList.includes(item.id);
});
return {
...data,
score: maxScoreResult?.score || 0
};
});
concatResults.sort((a, b) => b.score - a.score);
const formatResult = concatResults
.map((data, index) => {
if (!data.collectionId) {
console.log('Collection is not found', data);
}
const result: SearchDataResponseItemType = {
id: String(data._id),
q: data.q,
a: data.a,
chunkIndex: data.chunkIndex,
datasetId: String(data.datasetId),
collectionId: String(data.collectionId?._id),
sourceName: data.collectionId?.name || '',
sourceId: data.collectionId?.fileId || data.collectionId?.rawLink,
score: [{ type: SearchScoreTypeEnum.embedding, value: data.score, index }]
};
return result;
})
.filter((item) => item !== null) as SearchDataResponseItemType[];
return {
embeddingRecallResults: formatResult,
tokens
};
};
const fullTextRecall = async ({
query,
limit
}: {
query: string;
limit: number;
}): Promise<{
fullTextRecallResults: SearchDataResponseItemType[];
tokenLen: number;
}> => {
if (limit === 0) {
return {
fullTextRecallResults: [],
tokenLen: 0
};
}
let searchResults = (
await Promise.all(
datasetIds.map((id) =>
MongoDatasetData.find(
{
teamId,
datasetId: id,
$text: { $search: jiebaSplit({ text: query }) }
},
{
score: { $meta: 'textScore' },
_id: 1,
datasetId: 1,
collectionId: 1,
q: 1,
a: 1,
chunkIndex: 1
}
)
.sort({ score: { $meta: 'textScore' } })
.limit(limit)
.lean()
)
)
).flat() as (DatasetDataSchemaType & { score: number })[];
// resort
searchResults.sort((a, b) => b.score - a.score);
searchResults = searchResults.slice(0, limit);
const collections = await MongoDatasetCollection.find(
{
_id: { $in: searchResults.map((item) => item.collectionId) }
},
'_id name fileId rawLink'
);
return {
fullTextRecallResults: searchResults.map((item, index) => {
const collection = collections.find((col) => String(col._id) === String(item.collectionId));
return {
id: String(item._id),
datasetId: String(item.datasetId),
collectionId: String(item.collectionId),
sourceName: collection?.name || '',
sourceId: collection?.fileId || collection?.rawLink,
q: item.q,
a: item.a,
chunkIndex: item.chunkIndex,
indexes: item.indexes,
score: [{ type: SearchScoreTypeEnum.fullText, value: item.score, index }]
};
}),
tokenLen: 0
};
};
const reRankSearchResult = async ({
data,
query
}: {
data: SearchDataResponseItemType[];
query: string;
}): Promise<SearchDataResponseItemType[]> => {
try {
const results = await reRankRecall({
query,
inputs: data.map((item) => ({
id: item.id,
text: `${item.q}\n${item.a}`
}))
});
if (results.length === 0) {
usingReRank = false;
return [];
}
// add new score to data
const mergeResult = results
.map((item, index) => {
const target = data.find((dataItem) => dataItem.id === item.id);
if (!target) return null;
const score = item.score || 0;
return {
...target,
score: [{ type: SearchScoreTypeEnum.reRank, value: score, index }]
};
})
.filter(Boolean) as SearchDataResponseItemType[];
return mergeResult;
} catch (error) {
usingReRank = false;
return [];
}
};
const filterResultsByMaxTokens = (list: SearchDataResponseItemType[], maxTokens: number) => {
const results: SearchDataResponseItemType[] = [];
let totalTokens = 0;
for (let i = 0; i < list.length; i++) {
const item = list[i];
totalTokens += countPromptTokens(item.q + item.a);
if (totalTokens > maxTokens + 500) {
break;
}
results.push(item);
if (totalTokens > maxTokens) {
break;
}
}
return results.length === 0 ? list.slice(0, 1) : results;
};
const multiQueryRecall = async ({
embeddingLimit,
fullTextLimit
}: {
embeddingLimit: number;
fullTextLimit: number;
}) => {
// multi query recall
const embeddingRecallResList: SearchDataResponseItemType[][] = [];
const fullTextRecallResList: SearchDataResponseItemType[][] = [];
let totalTokens = 0;
await Promise.all(
queries.map(async (query) => {
const [{ tokens, embeddingRecallResults }, { fullTextRecallResults }] = await Promise.all([
embeddingRecall({
query,
limit: embeddingLimit
}),
fullTextRecall({
query,
limit: fullTextLimit
})
]);
totalTokens += tokens;
embeddingRecallResList.push(embeddingRecallResults);
fullTextRecallResList.push(fullTextRecallResults);
})
);
// rrf concat
const rrfEmbRecall = datasetSearchResultConcat(
embeddingRecallResList.map((list) => ({ k: 60, list }))
).slice(0, embeddingLimit);
const rrfFTRecall = datasetSearchResultConcat(
fullTextRecallResList.map((list) => ({ k: 60, list }))
).slice(0, fullTextLimit);
return {
tokens: totalTokens,
embeddingRecallResults: rrfEmbRecall,
fullTextRecallResults: rrfFTRecall
};
};
/* main step */
// count limit
const { embeddingLimit, fullTextLimit } = countRecallLimit();
// recall
const { embeddingRecallResults, fullTextRecallResults, tokens } = await multiQueryRecall({
embeddingLimit,
fullTextLimit
});
// ReRank results
const reRankResults = await (async () => {
if (!usingReRank) return [];
set = new Set<string>(embeddingRecallResults.map((item) => item.id));
const concatRecallResults = embeddingRecallResults.concat(
fullTextRecallResults.filter((item) => !set.has(item.id))
);
// remove same q and a data
set = new Set<string>();
const filterSameDataResults = concatRecallResults.filter((item) => {
// Strip all punctuation and whitespace; compare the text only
const str = hashStr(`${item.q}${item.a}`.replace(/[^\p{L}\p{N}]/gu, ''));
if (set.has(str)) return false;
set.add(str);
return true;
});
return reRankSearchResult({
query: reRankQuery,
data: filterSameDataResults
});
})();
// embedding recall and fullText recall rrf concat
const rrfConcatResults = datasetSearchResultConcat([
{ k: 60, list: embeddingRecallResults },
{ k: 60, list: fullTextRecallResults },
{ k: 58, list: reRankResults }
]);
// remove same q and a data
set = new Set<string>();
const filterSameDataResults = rrfConcatResults.filter((item) => {
// Strip all punctuation and whitespace; compare the text only
const str = hashStr(`${item.q}${item.a}`.replace(/[^\p{L}\p{N}]/gu, ''));
if (set.has(str)) return false;
set.add(str);
return true;
});
// score filter
const scoreFilter = (() => {
if (usingReRank) {
usingSimilarityFilter = true;
return filterSameDataResults.filter((item) => {
const reRankScore = item.score.find((item) => item.type === SearchScoreTypeEnum.reRank);
if (reRankScore && reRankScore.value < similarity) return false;
return true;
});
}
if (searchMode === DatasetSearchModeEnum.embedding) {
usingSimilarityFilter = true;
return filterSameDataResults.filter((item) => {
const embeddingScore = item.score.find(
(item) => item.type === SearchScoreTypeEnum.embedding
);
if (embeddingScore && embeddingScore.value < similarity) return false;
return true;
});
}
return filterSameDataResults;
})();
return {
searchRes: filterResultsByMaxTokens(scoreFilter, maxTokens),
tokens,
searchMode,
limit: maxTokens,
similarity,
usingReRank,
usingSimilarityFilter
};
}
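
A call sketch for the new searchDatasetData (ids and model names are illustrative; mixedRecall is assumed to be the combined embedding + full-text mode handled by the default branch of countRecallLimit):

const { searchRes, tokens, usingSimilarityFilter } = await searchDatasetData({
  teamId: '65f000000000000000000001', // hypothetical ids
  model: 'text-embedding-ada-002',
  similarity: 0.5,
  limit: 1500, // max tokens for the returned chunks
  datasetIds: ['65f000000000000000000002'],
  searchMode: DatasetSearchModeEnum.mixedRecall, // assumed enum member
  usingReRank: true,
  reRankQuery: 'How do I deploy FastGPT?',
  queries: ['How do I deploy FastGPT?', 'FastGPT deployment steps']
});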

View File

@ -31,16 +31,14 @@ export async function pushDataListToTrainingQueue({
data,
prompt,
billId,
trainingMode = TrainingModeEnum.chunk,
vectorModelList = [],
datasetModelList = []
trainingMode = TrainingModeEnum.chunk
}: {
teamId: string;
tmbId: string;
vectorModelList: VectorModelItemType[];
datasetModelList: LLMModelItemType[];
} & PushDatasetDataProps): Promise<PushDatasetDataResponse> {
const vectorModelList = global.vectorModels;
const datasetModelList = global.llmModels;
const {
datasetId: { _id: datasetId, vectorModel, agentModel }
} = await getCollectionWithDataset(collectionId);
@ -48,11 +46,11 @@ export async function pushDataListToTrainingQueue({
const checkModelValid = async () => {
const agentModelData = datasetModelList?.find((item) => item.model === agentModel);
if (!agentModelData) {
return Promise.reject(`Vector model ${agentModel} is inValid`);
return Promise.reject(`File model ${agentModel} is inValid`);
}
const vectorModelData = vectorModelList?.find((item) => item.model === vectorModel);
if (!vectorModelData) {
return Promise.reject(`File model ${vectorModel} is inValid`);
return Promise.reject(`Vector model ${vectorModel} is inValid`);
}
if (trainingMode === TrainingModeEnum.chunk) {

View File

@ -13,11 +13,6 @@ import {
export const DatasetTrainingCollectionName = 'dataset.trainings';
const TrainingDataSchema = new Schema({
userId: {
// abandon
type: Schema.Types.ObjectId,
ref: 'user'
},
teamId: {
type: Schema.Types.ObjectId,
ref: TeamCollectionName,
@ -100,7 +95,7 @@ try {
// lock training data; delete training data
TrainingDataSchema.index({ teamId: 1, collectionId: 1 });
// get training data and sort
TrainingDataSchema.index({ lockTime: 1, mode: 1, weight: -1 });
TrainingDataSchema.index({ mode: 1, lockTime: 1, weight: -1 });
TrainingDataSchema.index({ expireAt: 1 }, { expireAfterSeconds: 7 * 24 * 60 * 60 }); // 7 days
} catch (error) {
console.log(error);
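
The index reorder above ({ mode: 1, lockTime: 1, weight: -1 }) puts the equality-matched field first, which fits the usual training-queue fetch: filter by mode, range-scan stale lockTime, sort by weight. A hedged sketch of that query (the model name and the 5-minute staleness window are assumptions):

// claim one stale training task, highest weight first
const task = await MongoDatasetTraining.findOneAndUpdate(
  {
    mode: TrainingModeEnum.chunk,
    lockTime: { $lte: new Date(Date.now() - 5 * 60 * 1000) }
  },
  { lockTime: new Date() },
  { sort: { weight: -1 } }
);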

View File

@ -1,10 +1,11 @@
import { MongoPlugin } from './schema';
import { FlowModuleTemplateType } from '@fastgpt/global/core/module/type';
import { FlowNodeTemplateType } from '@fastgpt/global/core/module/type';
import { FlowNodeTypeEnum } from '@fastgpt/global/core/module/node/constant';
import { plugin2ModuleIO } from '@fastgpt/global/core/module/utils';
import { PluginSourceEnum } from '@fastgpt/global/core/plugin/constants';
import type { PluginRuntimeType, PluginTemplateType } from '@fastgpt/global/core/plugin/type.d';
import { ModuleTemplateTypeEnum } from '@fastgpt/global/core/module/constants';
import { FlowNodeTemplateTypeEnum } from '@fastgpt/global/core/module/constants';
import type { PluginItemSchema } from '@fastgpt/global/core/plugin/type.d';
/*
plugin id rule:
@ -48,7 +49,7 @@ const getPluginTemplateById = async (id: string): Promise<PluginTemplateType> =>
showStatus: true,
source: PluginSourceEnum.personal,
modules: item.modules,
templateType: ModuleTemplateTypeEnum.personalPlugin
templateType: FlowNodeTemplateTypeEnum.personalPlugin
};
}
return Promise.reject('plugin not found');
@ -59,7 +60,7 @@ export async function getPluginPreviewModule({
id
}: {
id: string;
}): Promise<FlowModuleTemplateType> {
}): Promise<FlowNodeTemplateType> {
const plugin = await getPluginTemplateById(id);
return {
@ -70,6 +71,7 @@ export async function getPluginPreviewModule({
name: plugin.name,
intro: plugin.intro,
showStatus: plugin.showStatus,
isTool: plugin.isTool,
...plugin2ModuleIO(plugin.id, plugin.modules)
};
}

View File

@ -1,3 +1,4 @@
import { pluginTypeMap } from '@fastgpt/global/core/plugin/constants';
import { connectionMongo, type Model } from '../../common/mongo';
const { Schema, model, models } = connectionMongo;
import type { PluginItemSchema } from '@fastgpt/global/core/plugin/type.d';
@ -9,9 +10,10 @@ import {
export const PluginCollectionName = 'plugins';
const PluginSchema = new Schema({
userId: {
parentId: {
type: Schema.Types.ObjectId,
ref: 'user'
ref: PluginCollectionName,
default: null
},
teamId: {
type: Schema.Types.ObjectId,
@ -23,6 +25,11 @@ const PluginSchema = new Schema({
ref: TeamMemberCollectionName,
required: true
},
type: {
type: String,
enum: Object.keys(pluginTypeMap),
required: true
},
name: {
type: String,
required: true
@ -42,11 +49,19 @@ const PluginSchema = new Schema({
modules: {
type: Array,
default: []
},
metadata: {
type: {
pluginUid: String,
apiSchemaStr: String,
customHeaders: String
}
}
});
try {
PluginSchema.index({ tmbId: 1 });
PluginSchema.index({ teamId: 1, parentId: 1 });
PluginSchema.index({ teamId: 1, name: 1, intro: 1 });
} catch (error) {
console.log(error);
}

View File

@ -1,22 +1,22 @@
import { chats2GPTMessages } from '@fastgpt/global/core/chat/adapt';
import { filterGPTMessageByMaxTokens } from '@fastgpt/service/core/chat/utils';
import { filterGPTMessageByMaxTokens } from '../../../chat/utils';
import {
countGptMessagesTokens,
countMessagesTokens
} from '@fastgpt/global/common/string/tiktoken';
import type { ChatItemType } from '@fastgpt/global/core/chat/type.d';
import { ChatItemValueTypeEnum, ChatRoleEnum } from '@fastgpt/global/core/chat/constants';
import { getAIApi } from '@fastgpt/service/core/ai/config';
import { getAIApi } from '../../../ai/config';
import type { ClassifyQuestionAgentItemType } from '@fastgpt/global/core/module/type.d';
import { ModuleInputKeyEnum } from '@fastgpt/global/core/module/constants';
import { DispatchNodeResponseKeyEnum } from '@fastgpt/global/core/module/runtime/constants';
import type { ModuleDispatchProps } from '@fastgpt/global/core/module/type.d';
import { replaceVariable } from '@fastgpt/global/common/string/tools';
import { Prompt_CQJson } from '@/global/core/prompt/agent';
import { Prompt_CQJson } from '@fastgpt/global/core/ai/prompt/agent';
import { LLMModelItemType } from '@fastgpt/global/core/ai/model.d';
import { ModelTypeEnum, getLLMModel } from '@fastgpt/service/core/ai/model';
import { ModelTypeEnum, getLLMModel } from '../../../ai/model';
import { getHistories } from '../utils';
import { formatModelChars2Points } from '@fastgpt/service/support/wallet/usage/utils';
import { formatModelChars2Points } from '../../../../support/wallet/usage/utils';
import { ChatCompletionRequestMessageRoleEnum } from '@fastgpt/global/core/ai/constants';
import {
ChatCompletionCreateParams,

View File

@ -1,22 +1,22 @@
import { chats2GPTMessages } from '@fastgpt/global/core/chat/adapt';
import { filterGPTMessageByMaxTokens } from '@fastgpt/service/core/chat/utils';
import { filterGPTMessageByMaxTokens } from '../../../chat/utils';
import type { ChatItemType } from '@fastgpt/global/core/chat/type.d';
import {
countGptMessagesTokens,
countMessagesTokens
} from '@fastgpt/global/common/string/tiktoken';
import { ChatItemValueTypeEnum, ChatRoleEnum } from '@fastgpt/global/core/chat/constants';
import { getAIApi } from '@fastgpt/service/core/ai/config';
import { getAIApi } from '../../../ai/config';
import type { ContextExtractAgentItemType } from '@fastgpt/global/core/module/type';
import { ModuleInputKeyEnum, ModuleOutputKeyEnum } from '@fastgpt/global/core/module/constants';
import { DispatchNodeResponseKeyEnum } from '@fastgpt/global/core/module/runtime/constants';
import type { ModuleDispatchProps } from '@fastgpt/global/core/module/type.d';
import { Prompt_ExtractJson } from '@/global/core/prompt/agent';
import { Prompt_ExtractJson } from '@fastgpt/global/core/ai/prompt/agent';
import { replaceVariable } from '@fastgpt/global/common/string/tools';
import { LLMModelItemType } from '@fastgpt/global/core/ai/model.d';
import { getHistories } from '../utils';
import { ModelTypeEnum, getLLMModel } from '@fastgpt/service/core/ai/model';
import { formatModelChars2Points } from '@fastgpt/service/support/wallet/usage/utils';
import { ModelTypeEnum, getLLMModel } from '../../../ai/model';
import { formatModelChars2Points } from '../../../../support/wallet/usage/utils';
import json5 from 'json5';
import {
ChatCompletionCreateParams,
@ -42,7 +42,7 @@ type Response = DispatchNodeResultType<{
type ActionProps = Props & { extractModel: LLMModelItemType };
const agentFunName = 'extract_json_data';
const agentFunName = 'request_function';
export async function dispatchContentExtract(props: Props): Promise<Response> {
const {
@ -156,13 +156,15 @@ const getFunctionCallSchema = ({
{
type: ChatItemValueTypeEnum.text,
text: {
content: `Your task is to produce an appropriate JSON string based on the context. Requirements:
"""
-
-
"""
content: `I am executing a function and need you to provide some parameters. Return these parameters as a JSON string. Requirements:
"""
${description ? `- ${description}` : ''}
-
-
"""
: "${content}"`
Current input: ${content}
`
}
}
]
@ -191,7 +193,7 @@ const getFunctionCallSchema = ({
// function body
const agentFunction = {
name: agentFunName,
description,
description: 'The function that needs to be executed',
parameters: {
type: 'object',
properties
@ -249,7 +251,6 @@ const toolChoice = async (props: ActionProps) => {
tool_calls: response.choices?.[0]?.message?.tool_calls
}
];
return {
tokens: countGptMessagesTokens(completeMessages, tools),
arg
@ -351,18 +352,24 @@ Human: ${content}`
const start = answer.indexOf('{');
const end = answer.lastIndexOf('}');
if (start === -1 || end === -1)
if (start === -1 || end === -1) {
return {
rawResponse: answer,
tokens: countMessagesTokens(messages),
arg: {}
};
}
const jsonStr = answer
.substring(start, end + 1)
.replace(/(\\n|\\)/g, '')
.replace(/ /g, '');
try {
return {
rawResponse: answer,
tokens: countMessagesTokens(messages),
arg: json5.parse(answer) as Record<string, any>
arg: json5.parse(jsonStr) as Record<string, any>
};
} catch (error) {
console.log(error);

View File

@ -0,0 +1,36 @@
export const Prompt_Tool_Call = `<Instruction>
You are an intelligent assistant. Besides answering directly, you can call tools and use their results to answer the user more accurately.
The available tools are declared below in JSON Schema format: toolId is the tool's id, description explains what it does, parameters lists its arguments, and required names the mandatory ones.
"""
{{toolsPrompt}}
"""
In the conversation, USER marks the user's input, TOOL_RESPONSE marks a tool's result, and ANSWER marks your reply.
Every ANSWER must start with 0 or 1:
0: do not use a tool; answer directly.
1: use a tool; return the call as JSON.
Examples:
USER: Hello!
ANSWER: 0: Hello, how can I help you?
USER: What is the weather in Hangzhou today?
ANSWER: 1: {"toolId":"w2121",arguments:{"city": "Hangzhou"}}
TOOL_RESPONSE: """
......
"""
ANSWER: 0: ...
USER: Given today's weather in Hangzhou, where should I go for fun?
ANSWER: 1: {"toolId":"as21da",arguments:{"query": "Hangzhou weather places to visit"}}
TOOL_RESPONSE: """
... West Lake ...
"""
ANSWER: 0: ... West Lake ...
</Instruction>
USER: {{question}}
ANSWER:
`;
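
The two placeholders are filled in separate steps: dispatchRunTools substitutes {{question}} with the user input, and runToolWithPromptCall (later in this diff) substitutes {{toolsPrompt}} with a JSON array describing the tool modules. A sketch of one rendered toolsPrompt entry:

// one entry of the JSON array injected into {{toolsPrompt}}
const toolsPromptEntry = {
  toolId: 'w2121', // the module id of the tool node
  description: 'Query the weather for a city',
  parameters: {
    type: 'object',
    properties: { city: { type: 'string', description: 'city name' } },
    required: ['city']
  }
};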

View File

@ -1,6 +1,6 @@
import { LLMModelItemType } from '@fastgpt/global/core/ai/model.d';
import { getAIApi } from '@fastgpt/service/core/ai/config';
import { filterGPTMessageByMaxTokens } from '@fastgpt/service/core/chat/utils';
import { getAIApi } from '../../../../ai/config';
import { filterGPTMessageByMaxTokens } from '../../../../chat/utils';
import {
ChatCompletion,
StreamChatType,
@ -15,7 +15,7 @@ import {
responseWrite,
responseWriteController,
responseWriteNodeStatus
} from '@fastgpt/service/common/response';
} from '../../../../../common/response';
import { SseResponseEventEnum } from '@fastgpt/global/core/module/runtime/constants';
import { textAdaptGptResponse } from '@fastgpt/global/core/module/runtime/utils';
import { ChatCompletionRequestMessageRoleEnum } from '@fastgpt/global/core/ai/constants';
@ -25,8 +25,10 @@ import json5 from 'json5';
import { DispatchFlowResponse } from '../../type';
import { countGptMessagesTokens } from '@fastgpt/global/common/string/tiktoken';
import { getNanoid } from '@fastgpt/global/common/string/tools';
import { AIChatItemType, AIChatItemValueItemType } from '@fastgpt/global/core/chat/type';
import { GPTMessages2Chats } from '@fastgpt/global/core/chat/adapt';
type ToolRunResponseType = {
type FunctionRunResponseType = {
moduleRunResponse: DispatchFlowResponse;
functionCallMsg: ChatCompletionFunctionMessageParam;
}[];
@ -49,6 +51,7 @@ export const runToolWithFunctionCall = async (
module,
stream
} = props;
const assistantResponses = response?.assistantResponses || [];
const functions: ChatCompletionCreateParams.Function[] = toolModules.map((module) => {
const properties: Record<
@ -161,10 +164,18 @@ export const runToolWithFunctionCall = async (
startParams
});
const stringToolResponse = (() => {
if (typeof moduleRunResponse.toolResponses === 'object') {
return JSON.stringify(moduleRunResponse.toolResponses, null, 2);
}
return moduleRunResponse.toolResponses ? String(moduleRunResponse.toolResponses) : 'none';
})();
const functionCallMsg: ChatCompletionFunctionMessageParam = {
role: ChatCompletionRequestMessageRoleEnum.Function,
name: tool.name,
content: JSON.stringify(moduleRunResponse.toolResponses, null, 2)
content: stringToolResponse
};
if (stream && detail) {
@ -177,7 +188,7 @@ export const runToolWithFunctionCall = async (
toolName: '',
toolAvatar: '',
params: '',
response: JSON.stringify(moduleRunResponse.toolResponses, null, 2)
response: stringToolResponse
}
})
});
@ -189,7 +200,7 @@ export const runToolWithFunctionCall = async (
};
})
)
).filter(Boolean) as ToolRunResponseType;
).filter(Boolean) as FunctionRunResponseType;
const flatToolsResponseData = toolsRunResponse.map((item) => item.moduleRunResponse).flat();
@ -204,9 +215,11 @@ export const runToolWithFunctionCall = async (
...filterMessages,
assistantToolMsgParams
] as ChatCompletionMessageParam[];
const tokens = countGptMessagesTokens(concatToolMessages, undefined, functions);
const completeMessages = [
...concatToolMessages,
...toolsRunResponse.map((item) => item?.functionCallMsg)
];
// console.log(tokens, 'tool');
if (stream && detail) {
@ -216,33 +229,70 @@ export const runToolWithFunctionCall = async (
});
}
// tool assistant
const toolAssistants = toolsRunResponse
.map((item) => {
const assistantResponses = item.moduleRunResponse.assistantResponses || [];
return assistantResponses;
})
.flat();
// tool node assistant
const adaptChatMessages = GPTMessages2Chats(completeMessages);
const toolNodeAssistant = adaptChatMessages.pop() as AIChatItemType;
const toolNodeAssistants = [
...assistantResponses,
...toolAssistants,
...toolNodeAssistant.value
];
// concat tool responses
const dispatchFlowResponse = response
? response.dispatchFlowResponse.concat(flatToolsResponseData)
: flatToolsResponseData;
/* check stop signal */
const hasStopSignal = flatToolsResponseData.some(
(item) => !!item.flowResponses?.find((item) => item.toolStop)
);
if (hasStopSignal) {
return {
dispatchFlowResponse,
totalTokens: response?.totalTokens ? response.totalTokens + tokens : tokens,
completeMessages: filterMessages,
assistantResponses: toolNodeAssistants
};
}
return runToolWithFunctionCall(
{
...props,
messages: [...concatToolMessages, ...toolsRunResponse.map((item) => item?.functionCallMsg)]
messages: completeMessages
},
{
dispatchFlowResponse: response
? response.dispatchFlowResponse.concat(flatToolsResponseData)
: flatToolsResponseData,
totalTokens: response?.totalTokens ? response.totalTokens + tokens : tokens
dispatchFlowResponse,
totalTokens: response?.totalTokens ? response.totalTokens + tokens : tokens,
assistantResponses: toolNodeAssistants
}
);
} else {
// No tool is invoked, indicating that the process is over
const completeMessages = filterMessages.concat({
const gptAssistantResponse: ChatCompletionAssistantMessageParam = {
role: ChatCompletionRequestMessageRoleEnum.Assistant,
content: answer
});
};
const completeMessages = filterMessages.concat(gptAssistantResponse);
const tokens = countGptMessagesTokens(completeMessages, undefined, functions);
// console.log(tokens, 'response token');
// concat tool assistant
const toolNodeAssistant = GPTMessages2Chats([gptAssistantResponse])[0] as AIChatItemType;
return {
dispatchFlowResponse: response?.dispatchFlowResponse || [],
totalTokens: response?.totalTokens ? response.totalTokens + tokens : tokens,
completeMessages
completeMessages,
assistantResponses: [...assistantResponses, ...toolNodeAssistant.value]
};
}
};

View File

@ -4,7 +4,7 @@ import type {
DispatchNodeResultType,
RunningModuleItemType
} from '@fastgpt/global/core/module/runtime/type';
import { ModelTypeEnum, getLLMModel } from '@fastgpt/service/core/ai/model';
import { ModelTypeEnum, getLLMModel } from '../../../../ai/model';
import { getHistories } from '../../utils';
import { runToolWithToolChoice } from './toolChoice';
import { DispatchToolModuleProps, ToolModuleItemType } from './type.d';
@ -16,9 +16,12 @@ import {
getSystemPrompt,
runtimePrompt2ChatsValue
} from '@fastgpt/global/core/chat/adapt';
import { formatModelChars2Points } from '@fastgpt/service/support/wallet/usage/utils';
import { formatModelChars2Points } from '../../../../../support/wallet/usage/utils';
import { getHistoryPreview } from '@fastgpt/global/core/chat/utils';
import { runToolWithFunctionCall } from './functionCall';
import { runToolWithPromptCall } from './promptCall';
import { replaceVariable } from '@fastgpt/global/common/string/tools';
import { Prompt_Tool_Call } from './constants';
type Response = DispatchNodeResultType<{}>;
@ -72,16 +75,19 @@ export const dispatchRunTools = async (props: DispatchToolModuleProps): Promise<
];
const {
dispatchFlowResponse,
dispatchFlowResponse, // tool flow response
totalTokens,
completeMessages = []
completeMessages = [], // The actual message sent to AI(just save text)
assistantResponses = [] // FastGPT system store assistant.value response
} = await (async () => {
const adaptMessages = chats2GPTMessages({ messages, reserveId: false });
if (toolModel.toolChoice) {
return runToolWithToolChoice({
...props,
toolModules,
toolModel,
messages: chats2GPTMessages({ messages, reserveId: false })
messages: adaptMessages
});
}
if (toolModel.functionCall) {
@ -89,14 +95,25 @@ export const dispatchRunTools = async (props: DispatchToolModuleProps): Promise<
...props,
toolModules,
toolModel,
messages: chats2GPTMessages({ messages, reserveId: false })
messages: adaptMessages
});
}
return {
dispatchFlowResponse: [],
totalTokens: 0,
completeMessages: []
};
const lastMessage = adaptMessages[adaptMessages.length - 1];
if (typeof lastMessage.content !== 'string') {
return Promise.reject('Only plain text input is supported for now');
}
lastMessage.content = replaceVariable(Prompt_Tool_Call, {
question: userChatInput
});
return runToolWithPromptCall({
...props,
toolModules,
toolModel,
messages: adaptMessages
});
})();
const { totalPoints, modelName } = formatModelChars2Points({
@ -105,11 +122,6 @@ export const dispatchRunTools = async (props: DispatchToolModuleProps): Promise<
modelType: ModelTypeEnum.llm
});
const adaptMessages = GPTMessages2Chats(completeMessages);
//@ts-ignore
const startIndex = adaptMessages.findLastIndex((item) => item.obj === ChatRoleEnum.Human);
const assistantResponse = adaptMessages.slice(startIndex + 1);
// flat child tool response
const childToolResponse = dispatchFlowResponse.map((item) => item.flowResponses).flat();
@ -123,9 +135,7 @@ export const dispatchRunTools = async (props: DispatchToolModuleProps): Promise<
const flatUsages = dispatchFlowResponse.map((item) => item.flowUsages).flat();
return {
[DispatchNodeResponseKeyEnum.assistantResponses]: assistantResponse
.map((item) => item.value)
.flat(),
[DispatchNodeResponseKeyEnum.assistantResponses]: assistantResponses,
[DispatchNodeResponseKeyEnum.nodeResponse]: {
totalPoints: totalPointsUsage,
toolCallTokens: totalTokens,
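
To summarize the dispatch logic added above: the tool node now has three execution strategies chosen by model capability, falling back to a pure prompt protocol when the model supports neither native tool calls nor function calls. In outline (condensed from the diff above):

if (toolModel.toolChoice) {
  // native tool-calls API
  return runToolWithToolChoice({ ...props, toolModules, toolModel, messages: adaptMessages });
}
if (toolModel.functionCall) {
  // legacy function-call API
  return runToolWithFunctionCall({ ...props, toolModules, toolModel, messages: adaptMessages });
}
// otherwise: inject Prompt_Tool_Call into the last user message and parse "0:"/"1:" answers
return runToolWithPromptCall({ ...props, toolModules, toolModel, messages: adaptMessages });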

View File

@ -0,0 +1,385 @@
import { LLMModelItemType } from '@fastgpt/global/core/ai/model.d';
import { getAIApi } from '../../../../ai/config';
import { filterGPTMessageByMaxTokens } from '../../../../chat/utils';
import {
ChatCompletion,
StreamChatType,
ChatCompletionMessageParam,
ChatCompletionAssistantMessageParam
} from '@fastgpt/global/core/ai/type';
import { NextApiResponse } from 'next';
import {
responseWrite,
responseWriteController,
responseWriteNodeStatus
} from '../../../../../common/response';
import { SseResponseEventEnum } from '@fastgpt/global/core/module/runtime/constants';
import { textAdaptGptResponse } from '@fastgpt/global/core/module/runtime/utils';
import { ChatCompletionRequestMessageRoleEnum } from '@fastgpt/global/core/ai/constants';
import { dispatchWorkFlow } from '../../index';
import { DispatchToolModuleProps, RunToolResponse, ToolModuleItemType } from './type.d';
import json5 from 'json5';
import { countGptMessagesTokens } from '@fastgpt/global/common/string/tiktoken';
import { getNanoid, replaceVariable } from '@fastgpt/global/common/string/tools';
import { AIChatItemType } from '@fastgpt/global/core/chat/type';
import { GPTMessages2Chats } from '@fastgpt/global/core/chat/adapt';
type FunctionCallCompletion = {
id: string;
name: string;
arguments: string;
toolName?: string;
toolAvatar?: string;
};
export const runToolWithPromptCall = async (
props: DispatchToolModuleProps & {
messages: ChatCompletionMessageParam[];
toolModules: ToolModuleItemType[];
toolModel: LLMModelItemType;
},
response?: RunToolResponse
): Promise<RunToolResponse> => {
const {
toolModel,
toolModules,
messages,
res,
runtimeModules,
detail = false,
module,
stream
} = props;
const assistantResponses = response?.assistantResponses || [];
const toolsPrompt = JSON.stringify(
toolModules.map((module) => {
const properties: Record<
string,
{
type: string;
description: string;
required?: boolean;
}
> = {};
module.toolParams.forEach((item) => {
properties[item.key] = {
type: 'string',
description: item.toolDescription || ''
};
});
return {
toolId: module.moduleId,
description: module.intro,
parameters: {
type: 'object',
properties,
required: module.toolParams.filter((item) => item.required).map((item) => item.key)
}
};
})
);
const lastMessage = messages[messages.length - 1];
if (typeof lastMessage.content !== 'string') {
return Promise.reject('Only plain text input is supported for now');
}
lastMessage.content = replaceVariable(lastMessage.content, {
toolsPrompt
});
const filterMessages = filterGPTMessageByMaxTokens({
messages,
maxTokens: toolModel.maxContext - 500 // budget for filtering history tokens, not the response maxToken
});
// console.log(JSON.stringify(filterMessages, null, 2));
/* Run llm */
const ai = getAIApi({
timeout: 480000
});
const aiResponse = await ai.chat.completions.create(
{
...toolModel?.defaultConfig,
model: toolModel.model,
temperature: 0,
stream,
messages: filterMessages
},
{
headers: {
Accept: 'application/json, text/plain, */*'
}
}
);
const answer = await (async () => {
if (stream) {
const { answer } = await streamResponse({
res,
detail,
toolModules,
stream: aiResponse
});
return answer;
} else {
const result = aiResponse as ChatCompletion;
return result.choices?.[0]?.message?.content || '';
}
})();
const parseAnswerResult = parseAnswer(answer);
// console.log(answer, '==11==');
// No tools
if (typeof parseAnswerResult === 'string') {
// No tool is invoked, indicating that the process is over
const gptAssistantResponse: ChatCompletionAssistantMessageParam = {
role: ChatCompletionRequestMessageRoleEnum.Assistant,
content: parseAnswerResult
};
const completeMessages = filterMessages.concat(gptAssistantResponse);
const tokens = countGptMessagesTokens(completeMessages, undefined);
// console.log(tokens, 'response token');
// concat tool assistant
const toolNodeAssistant = GPTMessages2Chats([gptAssistantResponse])[0] as AIChatItemType;
return {
dispatchFlowResponse: response?.dispatchFlowResponse || [],
totalTokens: response?.totalTokens ? response.totalTokens + tokens : tokens,
completeMessages,
assistantResponses: [...assistantResponses, ...toolNodeAssistant.value]
};
}
// Run the selected tool.
const toolsRunResponse = await (async () => {
if (!parseAnswerResult) return Promise.reject('tool run error');
const toolModule = toolModules.find((module) => module.moduleId === parseAnswerResult.name);
if (!toolModule) return Promise.reject('tool not found');
parseAnswerResult.toolName = toolModule.name;
parseAnswerResult.toolAvatar = toolModule.avatar;
// SSE response to client
if (stream && detail) {
responseWrite({
res,
event: SseResponseEventEnum.toolCall,
data: JSON.stringify({
tool: {
id: parseAnswerResult.id,
toolName: toolModule.name,
toolAvatar: toolModule.avatar,
functionName: parseAnswerResult.name,
params: parseAnswerResult.arguments,
response: ''
}
})
});
}
// run tool flow
const startParams = (() => {
try {
return json5.parse(parseAnswerResult.arguments);
} catch (error) {
return {};
}
})();
const moduleRunResponse = await dispatchWorkFlow({
...props,
runtimeModules: runtimeModules.map((module) => ({
...module,
isEntry: module.moduleId === toolModule.moduleId
})),
startParams
});
const stringToolResponse = (() => {
if (typeof moduleRunResponse.toolResponses === 'object') {
return JSON.stringify(moduleRunResponse.toolResponses, null, 2);
}
return moduleRunResponse.toolResponses ? String(moduleRunResponse.toolResponses) : 'none';
})();
if (stream && detail) {
responseWrite({
res,
event: SseResponseEventEnum.toolResponse,
data: JSON.stringify({
tool: {
id: parseAnswerResult.id,
toolName: '',
toolAvatar: '',
params: '',
response: stringToolResponse
}
})
});
}
return {
moduleRunResponse,
toolResponsePrompt: stringToolResponse
};
})();
if (stream && detail) {
responseWriteNodeStatus({
res,
name: module.name
});
}
// Merge the tool-call result; it is stored in functionCall format.
const assistantToolMsgParams: ChatCompletionAssistantMessageParam = {
role: ChatCompletionRequestMessageRoleEnum.Assistant,
function_call: parseAnswerResult
};
const concatToolMessages = [
...filterMessages,
assistantToolMsgParams
] as ChatCompletionMessageParam[];
const tokens = countGptMessagesTokens(concatToolMessages, undefined);
const completeMessages: ChatCompletionMessageParam[] = [
...concatToolMessages,
{
role: ChatCompletionRequestMessageRoleEnum.Function,
name: parseAnswerResult.name,
content: toolsRunResponse.toolResponsePrompt
}
];
// tool assistant
const toolAssistants = toolsRunResponse.moduleRunResponse.assistantResponses || [];
// tool node assistant
const adaptChatMessages = GPTMessages2Chats(completeMessages);
const toolNodeAssistant = adaptChatMessages.pop() as AIChatItemType;
const toolNodeAssistants = [...assistantResponses, ...toolAssistants, ...toolNodeAssistant.value];
const dispatchFlowResponse = response
? response.dispatchFlowResponse.concat(toolsRunResponse.moduleRunResponse)
: [toolsRunResponse.moduleRunResponse];
// get the next user prompt
lastMessage.content += `${answer}
TOOL_RESPONSE: ${toolsRunResponse.toolResponsePrompt}
ANSWER: `;
/* check stop signal */
const hasStopSignal = toolsRunResponse.moduleRunResponse.flowResponses.some(
(item) => !!item.toolStop
);
if (hasStopSignal) {
return {
dispatchFlowResponse,
totalTokens: response?.totalTokens ? response.totalTokens + tokens : tokens,
completeMessages: filterMessages,
assistantResponses: toolNodeAssistants
};
}
return runToolWithPromptCall(
{
...props,
messages
},
{
dispatchFlowResponse,
totalTokens: response?.totalTokens ? response.totalTokens + tokens : tokens,
assistantResponses: toolNodeAssistants
}
);
};
async function streamResponse({
res,
detail,
stream
}: {
res: NextApiResponse;
detail: boolean;
toolModules: ToolModuleItemType[];
stream: StreamChatType;
}) {
const write = responseWriteController({
res,
readStream: stream
});
let startResponseWrite = false;
let textAnswer = '';
for await (const part of stream) {
if (res.closed) {
stream.controller?.abort();
break;
}
const responseChoice = part.choices?.[0]?.delta;
if (responseChoice?.content) {
const content = responseChoice?.content || '';
textAnswer += content;
if (startResponseWrite) {
responseWrite({
write,
event: detail ? SseResponseEventEnum.answer : undefined,
data: textAdaptGptResponse({
text: content
})
});
} else if (textAnswer.length >= 3) {
textAnswer = textAnswer.trim();
if (textAnswer.startsWith('0')) {
startResponseWrite = true;
// find first : index
const firstIndex = textAnswer.indexOf(':');
textAnswer = textAnswer.substring(firstIndex + 1).trim();
responseWrite({
write,
event: detail ? SseResponseEventEnum.answer : undefined,
data: textAdaptGptResponse({
text: textAnswer
})
});
}
}
}
}
if (!textAnswer) {
return Promise.reject('LLM api response empty');
}
// console.log(textAnswer, '---===');
return { answer: textAnswer.trim() };
}
const parseAnswer = (str: string): FunctionCallCompletion | string => {
// Answers starting with "1:" carry a tool call; extract the toolId and arguments
const prefix = '1:';
str = str.trim();
if (str.startsWith(prefix)) {
const toolString = str.substring(prefix.length).trim();
try {
const toolCall = json5.parse(toolString);
return {
id: getNanoid(),
name: toolCall.toolId,
arguments: JSON.stringify(toolCall.arguments)
};
} catch (error) {
return str;
}
} else {
return str;
}
};
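
parseAnswer implements the "0:/1:" protocol from Prompt_Tool_Call: a string result means no tool was selected, an object means a tool call. For example:

parseAnswer('0: Hello! How can I help you today?');
// => '0: Hello! How can I help you today?' (returned as-is; streamResponse strips the "0:" prefix)

parseAnswer('1: {"toolId":"w2121",arguments:{"city":"Hangzhou"}}');
// => { id: '<nanoid>', name: 'w2121', arguments: '{"city":"Hangzhou"}' }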

Some files were not shown because too many files have changed in this diff.