4.6.5 - Coreference Resolution Module (#631)

Archer 2023-12-22 10:47:31 +08:00 committed by GitHub
parent 41115a96c0
commit cd682d4275
112 changed files with 4163 additions and 2700 deletions

View File

@ -8,4 +8,5 @@ README.md
.yalc/
yalc.lock
testApi/
*.local.*

.gitignore
View File

@ -35,4 +35,5 @@ dist/
**/.hugo_build.lock
docSite/public/
docSite/resources/_gen/
docSite/.vercel
*.local.*

View File

@ -76,6 +76,11 @@ COPY --from=builder --chown=nextjs:nodejs /app/projects/$name/.next/static /app/
COPY --from=builder /app/projects/$name/package.json ./package.json
# copy worker
COPY --from=workerDeps /app/worker /app/worker
# copy config
COPY ./projects/$name/data/config.json /app/data/config.json
COPY ./projects/$name/data/pluginTemplates /app/data/pluginTemplates
COPY ./projects/$name/data/simpleTemplates /app/data/simpleTemplates
ENV NODE_ENV production
ENV NEXT_TELEMETRY_DISABLED 1

Binary files not shown: 17 images changed (5 added, 9 updated, 3 removed; sizes from 39 KiB to 2.4 MiB).

View File

@ -92,7 +92,7 @@ weight: 708
"maxContext": 16000,
"maxResponse": 4000,
"price": 0,
"functionCall": true, // 是否支持function call 不支持的模型需要设置为 false会走提示词生成
"toolChoice": true, // 是否支持openai的 toolChoice 不支持的模型需要设置为 false会走提示词生成
"functionPrompt": ""
},
{
@ -101,7 +101,7 @@ weight: 708
"maxContext": 8000,
"maxResponse": 8000,
"price": 0,
"functionCall": true,
"toolChoice": true,
"functionPrompt": ""
}
],
@ -112,7 +112,7 @@ weight: 708
"maxContext": 16000,
"maxResponse": 4000,
"price": 0,
"functionCall": true,
"toolChoice": true,
"functionPrompt": ""
}
],

View File

@ -0,0 +1,31 @@
---
title: 'V4.6.5 (config file changes required)'
description: 'FastGPT V4.6.5'
icon: 'upgrade'
draft: false
toc: true
weight: 831
---
## Config file changes
OpenAI has begun deprecating function call in favor of toolChoice, and FastGPT has updated the corresponding configuration and invocation accordingly, so the config file needs a few changes:
[View the latest config file](/docs/development/configuration/)
The main change is renaming the `functionCall` field on each model to `toolChoice`. Models set to `true` use OpenAI's tools mode by default; models left unset or set to `false` fall back to prompt-based generation.
The question completion model and the content extraction model share the same configuration.
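For example, an entry under `ExtractModels` changes like this (an illustrative entry; your model list may differ):
```json
{
  "model": "gpt-3.5-turbo-1106",
  "name": "GPT35-1106",
  "maxContext": 16000,
  "maxResponse": 4000,
  "price": 0,
  "toolChoice": true, // was "functionCall": true in V4.6.4
  "functionPrompt": ""
}
```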
## V4.6.5 features
1. New - [Question completion module](/docs/workflow/modules/coreferenceresolution/)
2. New - [Text editor module](/docs/workflow/modules/text_editor/)
3. New - [TF switch module](/docs/workflow/modules/tfswitch/)
4. New - [Custom feedback module](/docs/workflow/modules/custom_feedback/)
5. New - the content extraction module supports model selection and field enums
6. Improved - docx reading now handles tables, converting them to markdown
7. Improved - connection-line interaction in advanced orchestration
8. Improved - the CPU-intensive computation in html2md that blocked the thread
9. Fixed - the extraction description prompt in advanced orchestration

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

@ -1,512 +0,0 @@
---
title: 'Optimize dataset search queries'
description: 'Use GPT to refine dataset search queries for context-aware search'
icon: 'search'
draft: false
toc: true
weight: 404
---
![](/imgs/demo_op_question1.png)
| Before | After |
| --------------------- | --------------------- |
| ![](/imgs/demo_op_question3.png) | ![](/imgs/demo_op_question2.png) |
As shown above, the optimized query can search for "automatic data preprocessing" directly and find the related content, partially making up for the missing context in vector search.
## Module orchestration
Copy the configuration below, then click the import button in the top-right corner of the Advanced Orchestration page to import it.
{{% details title="Orchestration config" closed="true" %}}
```json
[
{
"moduleId": "userChatInput",
"name": "用户问题(对话入口)",
"flowType": "questionInput",
"position": {
"x": 585.750318069507,
"y": 1597.4127130315183
},
"inputs": [
{
"key": "userChatInput",
"type": "systemInput",
"label": "用户问题",
"connected": true
}
],
"outputs": [
{
"key": "userChatInput",
"label": "用户问题",
"type": "source",
"valueType": "string",
"targets": [
{
"moduleId": "ssdd86",
"key": "content"
}
]
}
]
},
{
"moduleId": "history",
"name": "聊天记录",
"flowType": "historyNode",
"position": {
"x": 567.49877916803,
"y": 1289.3453864378014
},
"inputs": [
{
"key": "maxContext",
"type": "numberInput",
"label": "最长记录数",
"value": 6,
"min": 0,
"max": 50,
"connected": true
},
{
"key": "history",
"type": "hidden",
"label": "聊天记录",
"connected": true
}
],
"outputs": [
{
"key": "history",
"label": "聊天记录",
"valueType": "chatHistory",
"type": "source",
"targets": [
{
"moduleId": "ssdd86",
"key": "history"
}
]
}
]
},
{
"moduleId": "nkxlso",
"name": "知识库搜索",
"flowType": "datasetSearchNode",
"showStatus": true,
"position": {
"x": 1542.6434554710224,
"y": 1153.7853815737192
},
"inputs": [
{
"key": "kbList",
"type": "custom",
"label": "关联的知识库",
"value": [],
"list": [],
"connected": true
},
{
"key": "similarity",
"type": "slider",
"label": "相似度",
"value": 0.8,
"min": 0,
"max": 1,
"step": 0.01,
"markList": [
{
"label": "100",
"value": 100
},
{
"label": "1",
"value": 1
}
],
"connected": true
},
{
"key": "limit",
"type": "slider",
"label": "单次搜索上限",
"description": "最多取 n 条记录作为本次问题引用",
"value": 7,
"min": 1,
"max": 20,
"step": 1,
"markList": [
{
"label": "1",
"value": 1
},
{
"label": "20",
"value": 20
}
],
"connected": true
},
{
"key": "switch",
"type": "target",
"label": "触发器",
"valueType": "any",
"connected": false
},
{
"key": "userChatInput",
"type": "target",
"label": "用户问题",
"required": true,
"valueType": "string",
"connected": true
}
],
"outputs": [
{
"key": "isEmpty",
"label": "搜索结果为空",
"type": "source",
"valueType": "boolean",
"targets": []
},
{
"key": "unEmpty",
"label": "搜索结果不为空",
"type": "source",
"valueType": "boolean",
"targets": []
},
{
"key": "quoteQA",
"label": "引用内容",
"description": "始终返回数组,如果希望搜索结果为空时执行额外操作,需要用到上面的两个输入以及目标模块的触发器",
"type": "source",
"valueType": "datasetQuote",
"targets": [
{
"moduleId": "ol82hp",
"key": "quoteQA"
}
]
}
]
},
{
"moduleId": "ol82hp",
"name": "AI 对话",
"flowType": "chatNode",
"showStatus": true,
"position": {
"x": 2207.4577044902126,
"y": 1079.6308003796544
},
"inputs": [
{
"key": "model",
"type": "custom",
"label": "对话模型",
"value": "gpt-3.5-turbo",
"list": [],
"connected": true
},
{
"key": "temperature",
"type": "slider",
"label": "温度",
"value": 0,
"min": 0,
"max": 10,
"step": 1,
"markList": [
{
"label": "严谨",
"value": 0
},
{
"label": "发散",
"value": 10
}
],
"connected": true
},
{
"key": "maxToken",
"type": "custom",
"label": "回复上限",
"value": 2000,
"min": 100,
"max": 4000,
"step": 50,
"markList": [
{
"label": "100",
"value": 100
},
{
"label": "4000",
"value": 4000
}
],
"connected": true
},
{
"key": "systemPrompt",
"type": "textarea",
"label": "系统提示词",
"max": 300,
"valueType": "string",
"description": "模型固定的引导词,通过调整该内容,可以引导模型聊天方向。该内容会被固定在上下文的开头。可使用变量,例如 {{language}}",
"placeholder": "模型固定的引导词,通过调整该内容,可以引导模型聊天方向。该内容会被固定在上下文的开头。可使用变量,例如 {{language}}",
"value": "我会向你询问三引号引用中提及的内容,你仅使用提供的引用内容来回答我的问题,不要做额外的扩展补充。",
"connected": true
},
{
"key": "limitPrompt",
"type": "textarea",
"valueType": "string",
"label": "限定词",
"max": 500,
"description": "限定模型对话范围,会被放置在本次提问前,拥有强引导和限定性。不建议内容太长,会影响上下文,可使用变量,例如 {{language}}。可在文档中找到对应的限定例子",
"placeholder": "限定模型对话范围,会被放置在本次提问前,拥有强引导和限定性。不建议内容太长,会影响上下文,可使用变量,例如 {{language}}。可在文档中找到对应的限定例子",
"value": "",
"connected": true
},
{
"key": "switch",
"type": "target",
"label": "触发器",
"valueType": "any",
"connected": false
},
{
"key": "quoteQA",
"type": "target",
"label": "引用内容",
"description": "对象数组格式,结构:\n [{q:'问题',a:'回答'}]",
"valueType": "datasetQuote",
"connected": true
},
{
"key": "history",
"type": "target",
"label": "聊天记录",
"valueType": "chatHistory",
"connected": true
},
{
"key": "userChatInput",
"type": "target",
"label": "用户问题",
"required": true,
"valueType": "string",
"connected": true
}
],
"outputs": [
{
"key": "answerText",
"label": "AI回复",
"description": "将在 stream 回复完毕后触发",
"valueType": "string",
"type": "source",
"targets": []
},
{
"key": "finish",
"label": "回复结束",
"description": "AI 回复完成后触发",
"valueType": "boolean",
"type": "source",
"targets": []
}
]
},
{
"moduleId": "o62kns",
"name": "用户问题(对话入口)",
"flowType": "questionInput",
"position": {
"x": 1696.5940057372968,
"y": 2270.5070479742435
},
"inputs": [
{
"key": "userChatInput",
"type": "systemInput",
"label": "用户问题",
"connected": true
}
],
"outputs": [
{
"key": "userChatInput",
"label": "用户问题",
"type": "source",
"valueType": "string",
"targets": [
{
"moduleId": "ol82hp",
"key": "userChatInput"
}
]
}
]
},
{
"moduleId": "he7013",
"name": "聊天记录",
"flowType": "historyNode",
"position": {
"x": 1636.793907221069,
"y": 1952.7122387165764
},
"inputs": [
{
"key": "maxContext",
"type": "numberInput",
"label": "最长记录数",
"value": 6,
"min": 0,
"max": 50,
"connected": true
},
{
"key": "history",
"type": "hidden",
"label": "聊天记录",
"connected": true
}
],
"outputs": [
{
"key": "history",
"label": "聊天记录",
"valueType": "chatHistory",
"type": "source",
"targets": [
{
"moduleId": "ol82hp",
"key": "history"
}
]
}
]
},
{
"moduleId": "ssdd86",
"name": "文本内容提取",
"flowType": "contentExtract",
"showStatus": true,
"position": {
"x": 1031.822028231947,
"y": 1231.9793566344022
},
"inputs": [
{
"key": "switch",
"type": "target",
"label": "触发器",
"valueType": "any",
"connected": false
},
{
"key": "description",
"type": "textarea",
"valueType": "string",
"value": "结合上下文,优化用户的问题,要求不能包含\"它\"、\"第几个\"等代名词,需将他们替换成具体的名词。",
"label": "提取要求描述",
"description": "写一段提取要求,告诉 AI 需要提取哪些内容",
"required": true,
"placeholder": "例如: \n1. 你是一个实验室预约助手。根据用户问题,提取出姓名、实验室号和预约时间",
"connected": true
},
{
"key": "history",
"type": "target",
"label": "聊天记录",
"valueType": "chatHistory",
"connected": true
},
{
"key": "content",
"type": "target",
"label": "需要提取的文本",
"required": true,
"valueType": "string",
"connected": true
},
{
"key": "extractKeys",
"type": "custom",
"label": "目标字段",
"description": "由 '描述' 和 'key' 组成一个目标字段,可提取多个目标字段",
"value": [
{
"desc": "优化后的问题",
"key": "q",
"required": true
}
],
"connected": true
}
],
"outputs": [
{
"key": "success",
"label": "字段完全提取",
"valueType": "boolean",
"type": "source",
"targets": []
},
{
"key": "failed",
"label": "提取字段缺失",
"valueType": "boolean",
"type": "source",
"targets": []
},
{
"key": "fields",
"label": "完整提取结果",
"description": "一个 JSON 字符串,例如:{\"name:\":\"YY\",\"Time\":\"2023/7/2 18:00\"}",
"valueType": "string",
"type": "source",
"targets": []
},
{
"key": "q",
"label": "提取结果-优化后的问题",
"description": "无法提取时不会返回",
"valueType": "string",
"type": "source",
"targets": [
{
"moduleId": "nkxlso",
"key": "userChatInput"
}
]
}
]
}
]
```
{{% /details %}}
## Flow
1. Use the content extraction module to refine the user's question.
2. Pass the refined question to the dataset search module.
3. Pass the search results to the AI chat module to generate the answer.
## Tips
The content extraction module turns natural language into structured data, which enables some surprisingly powerful tricks.

View File

@ -5,4 +5,6 @@ description: "介绍 FastGPT 的常用模块"
icon: "apps"
draft: false
images: []
---
<!-- 350 ~ 400 -->

View File

@ -0,0 +1,39 @@
---
title: "Question completion"
description: "Introduction to and usage of the question completion module"
icon: "input"
draft: false
toc: true
weight: 364
---
## Characteristics
- Can be added multiple times
- Has external inputs
- Trigger execution
![](/imgs/coreferenceResolution1.png)
## Background
In RAG, we run an embedding search against the database with the input question to find related content (dataset search for short).
During search, and especially in multi-turn conversations, follow-up questions often fail to retrieve suitable content. One reason is that dataset search only uses the "current" question. Consider the example below:
![](/imgs/coreferenceResolution2.png)
When the user asks "what is the second point", the dataset is searched for "what is the second point", which matches nothing; the query actually needed is "what is the QA structure". We therefore introduce a question completion module that completes the user's current question so the dataset search can find suitable content. With completion enabled, the result looks like this:
![](/imgs/coreferenceResolution3.png)
## Function
Call an AI to complete the user's current question. Currently this mainly resolves coreferences (words like "it" or "the second one"), making the retrieval query more complete and reliable and strengthening dataset search over multi-turn conversations.
The biggest difficulty: the model's notion of "completion" may be fuzzy, and with long contexts it often cannot tell exactly how a question should be completed.
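A minimal sketch of what this completion step can look like (illustrative TypeScript, not FastGPT's actual implementation; the model and prompt wording are assumptions):
```ts
import OpenAI from 'openai';

// Rewrite the user's current question into a self-contained query
// by resolving references ("it", "the second point") against history.
export async function completeQuestion(
  histories: { role: 'user' | 'assistant'; content: string }[],
  query: string,
  model = 'gpt-3.5-turbo'
): Promise<string> {
  const ai = new OpenAI();
  const res = await ai.chat.completions.create({
    model,
    temperature: 0,
    messages: [
      ...histories,
      {
        role: 'user',
        content: `Rewrite my next question so it stands alone, replacing pronouns like "it" or "the second point" with the concrete nouns they refer to. Output only the rewritten question.\nQuestion: ${query}`
      }
    ]
  });
  // Fall back to the original question when the model returns nothing.
  return res.choices[0]?.message?.content?.trim() || query;
}
```
The rewritten question, not the raw one, is then fed to the dataset search module.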
## Examples
- [Integrating Google Search](/docs/workflow/examples/google_search/)

View File

@ -26,3 +26,7 @@ weight: 363
## Purpose
Feed text in a custom format into any module, or post-process an AI module's system prompt.
## Examples
- [Integrating Google Search](/docs/workflow/examples/google_search/)

View File

@ -25,4 +25,5 @@ weight: 362
## Purpose
Typical scenarios: have the LLM make a judgment and then output fixed content, or use the LLM's reply to decide whether downstream modules are triggered.

View File

@ -28,7 +28,7 @@ export const simpleMarkdownText = (rawText: string) => {
['####', '###', '##', '#', '```', '~~~'].forEach((item, i) => {
const reg = new RegExp(`\\n\\s*${item}`, 'g');
if (reg.test(rawText)) {
rawText = rawText.replace(new RegExp(`(\\n)\\s*(${item})`, 'g'), '$1$2');
rawText = rawText.replace(new RegExp(`(\\n)( *)(${item})`, 'g'), '$1$3');
}
});
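The old pattern's `\s*` also swallowed newlines, collapsing blank lines before a heading or fence; the new `( *)` strips only spaces on the same line. A quick illustration (hypothetical input):
```ts
const input = 'text\n\n## title';

// old: \s* also matches the blank line, which gets dropped
input.replace(new RegExp('(\\n)\\s*(##)', 'g'), '$1$2'); // => 'text\n## title'

// new: ( *) removes only spaces directly before the marker
input.replace(new RegExp('(\\n)( *)(##)', 'g'), '$1$3'); // => 'text\n\n## title'
```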

View File

@ -14,7 +14,7 @@ export type ChatModelItemType = LLMModelItemType & {
};
export type FunctionModelItemType = LLMModelItemType & {
functionCall: boolean;
toolChoice: boolean;
functionPrompt: string;
};

View File

@ -1,63 +1,5 @@
import type {
LLMModelItemType,
ChatModelItemType,
FunctionModelItemType,
VectorModelItemType,
AudioSpeechModelType,
WhisperModelType,
ReRankModelItemType
} from './model.d';
import type { LLMModelItemType, VectorModelItemType } from './model.d';
export const defaultChatModels: ChatModelItemType[] = [
{
model: 'gpt-3.5-turbo-1106',
name: 'GPT35-1106',
price: 0,
maxContext: 16000,
maxResponse: 4000,
quoteMaxToken: 2000,
maxTemperature: 1.2,
censor: false,
vision: false,
defaultSystemChatPrompt: ''
},
{
model: 'gpt-3.5-turbo-16k',
name: 'GPT35-16k',
maxContext: 16000,
maxResponse: 16000,
price: 0,
quoteMaxToken: 8000,
maxTemperature: 1.2,
censor: false,
vision: false,
defaultSystemChatPrompt: ''
},
{
model: 'gpt-4',
name: 'GPT4-8k',
maxContext: 8000,
maxResponse: 8000,
price: 0,
quoteMaxToken: 4000,
maxTemperature: 1.2,
censor: false,
vision: false,
defaultSystemChatPrompt: ''
},
{
model: 'gpt-4-vision-preview',
name: 'GPT4-Vision',
maxContext: 128000,
maxResponse: 4000,
price: 0,
quoteMaxToken: 100000,
maxTemperature: 1.2,
censor: false,
vision: true,
defaultSystemChatPrompt: ''
}
];
export const defaultQAModels: LLMModelItemType[] = [
{
model: 'gpt-3.5-turbo-16k',
@ -67,46 +9,6 @@ export const defaultQAModels: LLMModelItemType[] = [
price: 0
}
];
export const defaultCQModels: FunctionModelItemType[] = [
{
model: 'gpt-3.5-turbo-1106',
name: 'GPT35-1106',
maxContext: 16000,
maxResponse: 4000,
price: 0,
functionCall: true,
functionPrompt: ''
},
{
model: 'gpt-4',
name: 'GPT4-8k',
maxContext: 8000,
maxResponse: 8000,
price: 0,
functionCall: true,
functionPrompt: ''
}
];
export const defaultExtractModels: FunctionModelItemType[] = [
{
model: 'gpt-3.5-turbo-1106',
name: 'GPT35-1106',
maxContext: 16000,
maxResponse: 4000,
price: 0,
functionCall: true,
functionPrompt: ''
}
];
export const defaultQGModels: LLMModelItemType[] = [
{
model: 'gpt-3.5-turbo-1106',
name: 'GPT35-1106',
maxContext: 1600,
maxResponse: 4000,
price: 0
}
];
export const defaultVectorModels: VectorModelItemType[] = [
{
@ -117,27 +19,3 @@ export const defaultVectorModels: VectorModelItemType[] = [
maxToken: 3000
}
];
export const defaultReRankModels: ReRankModelItemType[] = [];
export const defaultAudioSpeechModels: AudioSpeechModelType[] = [
{
model: 'tts-1',
name: 'OpenAI TTS1',
price: 0,
voices: [
{ label: 'Alloy', value: 'Alloy', bufferId: 'openai-Alloy' },
{ label: 'Echo', value: 'Echo', bufferId: 'openai-Echo' },
{ label: 'Fable', value: 'Fable', bufferId: 'openai-Fable' },
{ label: 'Onyx', value: 'Onyx', bufferId: 'openai-Onyx' },
{ label: 'Nova', value: 'Nova', bufferId: 'openai-Nova' },
{ label: 'Shimmer', value: 'Shimmer', bufferId: 'openai-Shimmer' }
]
}
];
export const defaultWhisperModel: WhisperModelType = {
model: 'whisper-1',
name: 'Whisper1',
price: 0
};

View File

@ -67,6 +67,9 @@ export type AppSimpleEditFormType = {
searchMode: `${DatasetSearchModeEnum}`;
searchEmptyText: string;
};
cfr: {
background: string;
};
userGuide: {
welcomeText: string;
variables: {
@ -111,6 +114,9 @@ export type AppSimpleEditConfigTemplateType = {
searchMode: `${DatasetSearchModeEnum}`;
searchEmptyText?: boolean;
};
cfr?: {
background?: boolean;
};
userGuide?: {
welcomeText?: boolean;
variables?: boolean;

View File

@ -3,23 +3,23 @@ import { FlowNodeTypeEnum } from '../module/node/constant';
import { ModuleOutputKeyEnum, ModuleInputKeyEnum } from '../module/constants';
import type { FlowNodeInputItemType } from '../module/node/type.d';
import { getGuideModule, splitGuideModule } from '../module/utils';
import { defaultChatModels } from '../ai/model';
import { ModuleItemType } from '../module/type.d';
import { DatasetSearchModeEnum } from '../dataset/constant';
export const getDefaultAppForm = (templateId = 'fastgpt-universal'): AppSimpleEditFormType => {
const defaultChatModel = defaultChatModels[0];
return {
templateId,
aiSettings: {
model: defaultChatModel?.model,
model: 'gpt-3.5-turbo',
systemPrompt: '',
temperature: 0,
isResponseAnswerText: true,
quotePrompt: '',
quoteTemplate: '',
maxToken: defaultChatModel ? defaultChatModel.maxResponse / 2 : 4000
maxToken: 4000
},
cfr: {
background: ''
},
dataset: {
datasets: [],
@ -116,6 +116,11 @@ export const appModules2Form = ({
questionGuide: questionGuide,
tts: ttsConfig
};
} else if (module.flowType === FlowNodeTypeEnum.cfr) {
const value = module.inputs.find((item) => item.key === ModuleInputKeyEnum.aiSystemPrompt);
if (value) {
defaultAppForm.cfr.background = value.value;
}
}
});

View File

@ -93,6 +93,7 @@ export type moduleDispatchResType = {
model?: string;
query?: string;
contextTotalLen?: number;
textOutput?: string;
// chat
temperature?: number;
@ -119,9 +120,7 @@ export type moduleDispatchResType = {
// plugin output
pluginOutput?: Record<string, any>;
// text editor
textOutput?: string;
pluginDetail?: ChatHistoryItemResType[];
// tf switch
tfSwitchResult?: boolean;

View File

@ -38,7 +38,6 @@ export enum FlowNodeOutputTypeEnum {
}
export enum FlowNodeTypeEnum {
empty = 'empty',
userGuide = 'userGuide',
questionInput = 'questionInput',
historyNode = 'historyNode',
@ -52,10 +51,10 @@ export enum FlowNodeTypeEnum {
pluginModule = 'pluginModule',
pluginInput = 'pluginInput',
pluginOutput = 'pluginOutput',
textEditor = 'textEditor',
cfr = 'cfr',
// abandon
variable = 'variable'
}
export const EDGE_TYPE = 'smoothstep';
export const EDGE_TYPE = 'default';

View File

@ -141,7 +141,7 @@ export const AiChatModule: FlowModuleTemplateType = {
},
{
key: ModuleOutputKeyEnum.answerText,
label: 'AI回复',
label: 'AI回复内容',
description: '将在 stream 回复完毕后触发',
valueType: ModuleIOValueTypeEnum.string,
type: FlowNodeOutputTypeEnum.source,

View File

@ -0,0 +1,61 @@
import {
FlowNodeInputTypeEnum,
FlowNodeOutputTypeEnum,
FlowNodeTypeEnum
} from '../../node/constant';
import { FlowModuleTemplateType } from '../../type.d';
import {
ModuleIOValueTypeEnum,
ModuleInputKeyEnum,
ModuleOutputKeyEnum,
ModuleTemplateTypeEnum
} from '../../constants';
import {
Input_Template_History,
Input_Template_Switch,
Input_Template_UserChatInput
} from '../input';
export const AiCFR: FlowModuleTemplateType = {
id: FlowNodeTypeEnum.cfr,
templateType: ModuleTemplateTypeEnum.tools,
flowType: FlowNodeTypeEnum.cfr,
avatar: '/imgs/module/cfr.svg',
name: 'core.module.template.cfr',
intro: 'core.module.template.cfr intro',
showStatus: true,
inputs: [
Input_Template_Switch,
{
key: ModuleInputKeyEnum.aiModel,
type: FlowNodeInputTypeEnum.selectExtractModel,
label: 'core.module.input.label.aiModel',
required: true,
valueType: ModuleIOValueTypeEnum.string,
showTargetInApp: false,
showTargetInPlugin: false
},
{
key: ModuleInputKeyEnum.aiSystemPrompt,
type: FlowNodeInputTypeEnum.textarea,
label: 'core.module.input.label.cfr background',
max: 300,
valueType: ModuleIOValueTypeEnum.string,
description: 'core.module.input.description.cfr background',
placeholder: 'core.module.input.placeholder.cfr background',
showTargetInApp: true,
showTargetInPlugin: true
},
Input_Template_History,
Input_Template_UserChatInput
],
outputs: [
{
key: ModuleOutputKeyEnum.text,
label: 'core.module.output.label.cfr result',
valueType: ModuleIOValueTypeEnum.string,
type: FlowNodeOutputTypeEnum.source,
targets: []
}
]
};

View File

@ -1,14 +0,0 @@
import { ModuleTemplateTypeEnum } from '../../constants';
import { FlowNodeTypeEnum } from '../../node/constant';
import { FlowModuleTemplateType } from '../../type.d';
export const EmptyModule: FlowModuleTemplateType = {
id: FlowNodeTypeEnum.empty,
templateType: ModuleTemplateTypeEnum.other,
flowType: FlowNodeTypeEnum.empty,
avatar: '/imgs/module/cq.png',
name: '该模块已被移除',
intro: '',
inputs: [],
outputs: []
};

View File

@ -38,7 +38,7 @@ export type ModuleItemType = {
outputs: FlowNodeOutputItemType[];
};
/* function type */
/* --------------- function type -------------------- */
// variable
export type VariableItemType = {
id: string;
@ -74,3 +74,46 @@ export type ContextExtractAgentItemType = {
required: boolean;
enum?: string;
};
/* -------------- running module -------------- */
export type RunningModuleItemType = {
name: ModuleItemType['name'];
moduleId: ModuleItemType['moduleId'];
flowType: ModuleItemType['flowType'];
showStatus?: ModuleItemType['showStatus'];
} & {
inputs: {
key: string;
value?: any;
}[];
outputs: {
key: string;
answer?: boolean;
response?: boolean;
value?: any;
targets: {
moduleId: string;
key: string;
}[];
}[];
};
export type ChatDispatchProps = {
res: NextApiResponse;
mode: 'test' | 'chat';
teamId: string;
tmbId: string;
user: UserType;
appId: string;
chatId?: string;
responseChatItemId?: string;
histories: ChatItemType[];
variables: Record<string, any>;
stream: boolean;
detail: boolean; // response detail
};
export type ModuleDispatchProps<T> = ChatDispatchProps & {
outputs: RunningModuleItemType['outputs'];
inputs: T;
};
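A sketch of how a module dispatch handler can consume these types (hypothetical module; only the type shapes defined above are assumed):
```ts
// Hypothetical handler for a text-editor-style module.
type TextEditorProps = ModuleDispatchProps<{ text: string }>;

export async function dispatchTextEditor(props: TextEditorProps) {
  const { inputs, variables } = props;
  // Replace {{key}} placeholders with runtime variables before output.
  const text = Object.entries(variables).reduce(
    (acc, [key, val]) => acc.replaceAll(`{{${key}}}`, String(val)),
    inputs.text
  );
  return { text };
}
```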

View File

@ -15,14 +15,19 @@ export type PluginItemSchema = {
};
/* plugin template */
export type PluginTemplateType = {
export type PluginTemplateType = PluginRuntimeType & {
author?: string;
id: string;
source: `${PluginSourceEnum}`;
templateType: FlowModuleTemplateType['templateType'];
intro: string;
modules: ModuleItemType[];
};
export type PluginRuntimeType = {
teamId?: string;
name: string;
avatar: string;
intro: string;
showStatus?: boolean;
modules: ModuleItemType[];
};

View File

@ -20,11 +20,12 @@ export async function connectMongo({
console.log('mongo start connect');
try {
mongoose.set('strictQuery', true);
const maxConnecting = Math.max(20, Number(process.env.DB_MAX_LINK || 20));
await mongoose.connect(process.env.MONGODB_URI as string, {
bufferCommands: true,
maxConnecting: Number(process.env.DB_MAX_LINK || 5),
maxPoolSize: Number(process.env.DB_MAX_LINK || 5),
minPoolSize: Math.min(10, Number(process.env.DB_MAX_LINK || 10)),
maxConnecting: maxConnecting,
maxPoolSize: maxConnecting,
minPoolSize: Math.max(5, Math.round(Number(process.env.DB_MAX_LINK || 5) * 0.1)),
connectTimeoutMS: 60000,
waitQueueTimeoutMS: 60000,
socketTimeoutMS: 60000,
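For illustration, the values this sizing produces (DB_MAX_LINK=100 is an assumed setting):
```ts
const link = Number(process.env.DB_MAX_LINK || 20); // e.g. 100
const maxConnecting = Math.max(20, link);           // 100 concurrent connection attempts
const maxPoolSize = maxConnecting;                  // pool capped at 100 connections
const minPoolSize = Math.max(5, Math.round(Number(process.env.DB_MAX_LINK || 5) * 0.1)); // 10 kept warm
```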

View File

@ -1,14 +1,12 @@
import type { NextApiResponse } from 'next';
import { getAIApi } from '../config';
import { defaultAudioSpeechModels } from '../../../../global/core/ai/model';
import { UserModelSchema } from '@fastgpt/global/support/user/type';
export async function text2Speech({
res,
onSuccess,
onError,
input,
model = defaultAudioSpeechModels[0].model,
model,
voice,
speed = 1
}: {

View File

@ -0,0 +1,59 @@
import { replaceVariable } from '@fastgpt/global/common/string/tools';
import { getAIApi } from '../config';
const prompt = `
你是一个检索词扩展助手。你可以将用户的问题扩写成多个语义相近、便于检索的问题,并以 JSON 字符串数组的格式输出。
问题: FastGPT如何使用
OUTPUT: ["FastGPT使用教程。","怎么使用FastGPT"]
-------------------
问题: FastGPT如何收费
OUTPUT: ["FastGPT收费标准。","FastGPT是如何计费的"]
-------------------
问题: 怎么FastGPT部署
OUTPUT: ["FastGPT的部署方式。","如何部署FastGPT"]
-------------------
question: {{q}}
OUTPUT:
`;
export const searchQueryExtension = async ({ query, model }: { query: string; model: string }) => {
const ai = getAIApi(undefined, 480000);
const result = await ai.chat.completions.create({
model,
temperature: 0,
messages: [
{
role: 'user',
content: replaceVariable(prompt, { q: query })
}
],
stream: false
});
const answer = result.choices?.[0]?.message?.content || '';
if (!answer) {
return {
queries: [query],
model,
inputTokens: 0,
responseTokens: 0
};
}
try {
return {
queries: JSON.parse(answer) as string[],
model,
inputTokens: result.usage?.prompt_tokens || 0,
responseTokens: result.usage?.completion_tokens || 0
};
} catch (error) {
return {
queries: [query],
model,
inputTokens: 0,
responseTokens: 0
};
}
};
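A usage sketch (hypothetical call site; the model name is an assumption):
```ts
const { queries } = await searchQueryExtension({
  query: 'FastGPT如何收费',
  model: 'gpt-3.5-turbo-1106'
});
// queries ≈ ['FastGPT收费标准。', 'FastGPT是如何计费的'] — each is searched separately;
// on any parse failure the original query is returned as the only element.
```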

View File

@ -3,7 +3,7 @@ import { FlowModuleTemplateType } from '@fastgpt/global/core/module/type';
import { FlowNodeTypeEnum } from '@fastgpt/global/core/module/node/constant';
import { plugin2ModuleIO } from '@fastgpt/global/core/module/utils';
import { PluginSourceEnum } from '@fastgpt/global/core/plugin/constants';
import type { PluginTemplateType } from '@fastgpt/global/core/plugin/type.d';
import type { PluginRuntimeType, PluginTemplateType } from '@fastgpt/global/core/plugin/type.d';
import { ModuleTemplateTypeEnum } from '@fastgpt/global/core/module/constants';
/*
@ -41,6 +41,7 @@ const getPluginTemplateById = async (id: string): Promise<PluginTemplateType> =>
if (!item) return Promise.reject('plugin not found');
return {
id: String(item._id),
teamId: String(item.teamId),
name: item.name,
avatar: item.avatar,
intro: item.intro,
@ -74,16 +75,14 @@ export async function getPluginPreviewModule({
}
/* run plugin time */
export async function getPluginRuntimeById(id: string): Promise<PluginTemplateType> {
export async function getPluginRuntimeById(id: string): Promise<PluginRuntimeType> {
const plugin = await getPluginTemplateById(id);
return {
id: plugin.id,
source: plugin.source,
templateType: plugin.templateType,
teamId: plugin.teamId,
name: plugin.name,
avatar: plugin.avatar,
intro: plugin.intro,
showStatus: plugin.showStatus,
modules: plugin.modules
};
}

View File

@ -0,0 +1,35 @@
import TurndownService from 'turndown';
// @ts-ignore
import * as turndownPluginGfm from 'joplin-turndown-plugin-gfm';
const turndownService = new TurndownService({
headingStyle: 'atx',
bulletListMarker: '-',
codeBlockStyle: 'fenced',
fence: '```',
emDelimiter: '_',
strongDelimiter: '**',
linkStyle: 'inlined',
linkReferenceStyle: 'full'
});
export const htmlStr2Md = (html: string) => {
// In the browser, parse the HTML string into a DOM
const parser = new DOMParser();
const dom = parser.parseFromString(html, 'text/html');
turndownService.remove(['i', 'script', 'iframe']);
turndownService.addRule('codeBlock', {
filter: 'pre',
replacement(_, node) {
const content = node.textContent?.trim() || '';
// @ts-ignore
const codeName = node?._attrsByQName?.class?.data?.trim() || '';
return `\n\`\`\`${codeName}\n${content}\n\`\`\`\n`;
}
});
turndownService.use(turndownPluginGfm.gfm);
return turndownService.turndown(dom);
};
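Usage is one call on the browser side (it relies on `DOMParser`; the HTML string is illustrative):
```ts
const md = htmlStr2Md('<h1>Title</h1><p>Hello <strong>world</strong></p>');
// => '# Title\n\nHello **world**'
```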

View File

@ -1,6 +1,11 @@
{
"name": "@fastgpt/web",
"version": "1.0.0",
"dependencies": {},
"devDependencies": {}
"dependencies": {
"joplin-turndown-plugin-gfm": "^1.0.12",
"turndown": "^7.1.2"
},
"devDependencies": {
"@types/turndown": "^5.0.4"
}
}

View File

@ -128,7 +128,18 @@ importers:
specifier: ^0.0.4
version: registry.npmmirror.com/@types/tunnel@0.0.4
packages/web: {}
packages/web:
dependencies:
joplin-turndown-plugin-gfm:
specifier: ^1.0.12
version: registry.npmmirror.com/joplin-turndown-plugin-gfm@1.0.12
turndown:
specifier: ^7.1.2
version: registry.npmmirror.com/turndown@7.1.2
devDependencies:
'@types/turndown':
specifier: ^5.0.4
version: registry.npmmirror.com/@types/turndown@5.0.4
projects/app:
dependencies:
@ -4850,6 +4861,12 @@ packages:
'@types/node': registry.npmmirror.com/@types/node@20.8.5
dev: true
registry.npmmirror.com/@types/turndown@5.0.4:
resolution: {integrity: sha512-28GI33lCCkU4SGH1GvjDhFgOVr+Tym4PXGBIU1buJUa6xQolniPArtUT+kv42RR2N9MsMLInkr904Aq+ESHBJg==, registry: https://registry.npm.taobao.org/, tarball: https://registry.npmmirror.com/@types/turndown/-/turndown-5.0.4.tgz}
name: '@types/turndown'
version: 5.0.4
dev: true
registry.npmmirror.com/@types/unist@2.0.10:
resolution: {integrity: sha512-IfYcSBWE3hLpBg8+X2SEa8LVkJdJEkT2Ese2aaLs3ptGdVtABxndrMaxuFlQ1qdFf9Q5rDvDpxI3WwgvKFAsQA==, registry: https://registry.npm.taobao.org/, tarball: https://registry.npmmirror.com/@types/unist/-/unist-2.0.10.tgz}
name: '@types/unist'
@ -6756,6 +6773,12 @@ packages:
domelementtype: registry.npmmirror.com/domelementtype@2.3.0
dev: false
registry.npmmirror.com/domino@2.1.6:
resolution: {integrity: sha512-3VdM/SXBZX2omc9JF9nOPCtDaYQ67BGp5CoLpIQlO2KCAPETs8TcDHacF26jXadGbvUteZzRTeos2fhID5+ucQ==, registry: https://registry.npm.taobao.org/, tarball: https://registry.npmmirror.com/domino/-/domino-2.1.6.tgz}
name: domino
version: 2.1.6
dev: false
registry.npmmirror.com/dompurify@3.0.3:
resolution: {integrity: sha512-axQ9zieHLnAnHh0sfAamKYiqXMJAVwu+LM/alQ7WDagoWessyWvMSFyW65CqF3owufNu8HBcE4cM2Vflu7YWcQ==, registry: https://registry.npm.taobao.org/, tarball: https://registry.npmmirror.com/dompurify/-/dompurify-3.0.3.tgz}
name: dompurify
@ -8772,6 +8795,12 @@ packages:
set-function-name: registry.npmmirror.com/set-function-name@2.0.1
dev: true
registry.npmmirror.com/joplin-turndown-plugin-gfm@1.0.12:
resolution: {integrity: sha512-qL4+1iycQjZ1fs8zk3jSRk7cg3ROBUHk7GKtiLAQLFzLPKErnILUvz5DLszSQvz3s1sTjPbywLDISVUtBY6HaA==, registry: https://registry.npm.taobao.org/, tarball: https://registry.npmmirror.com/joplin-turndown-plugin-gfm/-/joplin-turndown-plugin-gfm-1.0.12.tgz}
name: joplin-turndown-plugin-gfm
version: 1.0.12
dev: false
registry.npmmirror.com/js-sdsl@4.4.2:
resolution: {integrity: sha512-dwXFwByc/ajSV6m5bcKAPwe4yDDF6D614pxmIi5odytzxRlwqF6nwoiCek80Ixc7Cvma5awClxrzFtxCQvcM8w==, registry: https://registry.npm.taobao.org/, tarball: https://registry.npmmirror.com/js-sdsl/-/js-sdsl-4.4.2.tgz}
name: js-sdsl
@ -12166,6 +12195,14 @@ packages:
engines: {node: '>=0.6.11 <=0.7.0 || >=0.7.3'}
dev: false
registry.npmmirror.com/turndown@7.1.2:
resolution: {integrity: sha512-ntI9R7fcUKjqBP6QU8rBK2Ehyt8LAzt3UBT9JR9tgo6GtuKvyUzpayWmeMKJw1DPdXzktvtIT8m2mVXz+bL/Qg==, registry: https://registry.npm.taobao.org/, tarball: https://registry.npmmirror.com/turndown/-/turndown-7.1.2.tgz}
name: turndown
version: 7.1.2
dependencies:
domino: registry.npmmirror.com/domino@2.1.6
dev: false
registry.npmmirror.com/type-check@0.4.0:
resolution: {integrity: sha512-XleUoc9uwGXqjWwXaUTZAmzMcFZ5858QA2vvx1Ur5xIcixXIP+8LnFDgRplU30us6teqdlskFfu+ae4K79Ooew==, registry: https://registry.npm.taobao.org/, tarball: https://registry.npmmirror.com/type-check/-/type-check-0.4.0.tgz}
name: type-check

View File

@ -7,10 +7,10 @@
},
"ChatModels": [
{
"model": "gpt-3.5-turbo-1106",
"name": "GPT35-1106",
"model": "gpt-3.5-turbo",
"name": "GPT35",
"price": 0,
"maxContext": 16000,
"maxContext": 4000,
"maxResponse": 4000,
"quoteMaxToken": 2000,
"maxTemperature": 1.2,
@ -71,7 +71,7 @@
"maxContext": 4000,
"maxResponse": 4000,
"price": 0,
"functionCall": true,
"toolChoice": true,
"functionPrompt": ""
},
{
@ -80,7 +80,7 @@
"maxContext": 8000,
"maxResponse": 8000,
"price": 0,
"functionCall": true,
"toolChoice": true,
"functionPrompt": ""
}
],
@ -91,7 +91,7 @@
"maxContext": 16000,
"maxResponse": 4000,
"price": 0,
"functionCall": true,
"toolChoice": true,
"functionPrompt": ""
}
],

View File

@ -10,6 +10,9 @@
"quoteTemplate": false,
"quotePrompt": false
},
"cfr": {
"background": true
},
"dataset": {
"datasets": true,
"similarity": false,

View File

@ -1,13 +1,14 @@
### Fast GPT V4.6.4
### Fast GPT V4.6.5
1. Rewrite - share link identity logic, now using a localID to track the user.
2. New (commercial) - share link SSO: via an `auth` endpoint, only `3 APIs` are needed to fully integrate an existing user system. See [share link authentication](https://doc.fastgpt.in/docs/development/openapi/share/)
3. New - more embedding options for share links, with more DIY hints.
4. Improved - history module: the old history module is deprecated; fill in the value directly where needed.
5. Adjusted - dataset search topk logic, now computed from MaxToken to handle text chunks of different lengths.
6. Link reading supports multiple selectors. See [website sync usage](https://doc.fastgpt.in/docs/course/websync)
7. [Dataset structure explained](https://doc.fastgpt.in/docs/use-cases/datasetengine/)
8. [Dataset prompts explained](https://doc.fastgpt.in/docs/use-cases/ai_settings/#引用模板--引用提示词)
9. [Documentation](https://doc.fastgpt.in/docs/intro/)
10. [Advanced orchestration introduction](https://doc.fastgpt.in/docs/workflow)
11. [Commercial edition](https://doc.fastgpt.in/docs/commercial/)
1. New - [Question completion module](https://doc.fastgpt.in/docs/workflow/modules/coreferenceresolution/)
2. New - [Text editor module](https://doc.fastgpt.in/docs/workflow/modules/text_editor/)
3. New - [TF switch module](https://doc.fastgpt.in/docs/workflow/modules/tfswitch/)
4. New - [Custom feedback module](https://doc.fastgpt.in/docs/workflow/modules/custom_feedback/)
5. New - the content extraction module supports model selection and field enums
6. Improved - docx reading now handles tables, converting them to markdown
7. Improved - connection-line interaction in advanced orchestration
8. Improved - the CPU-intensive computation in html2md that blocked the thread
9. Fixed - the extraction description prompt in advanced orchestration
10. [Documentation](https://doc.fastgpt.in/docs/intro/)
11. [Advanced orchestration introduction](https://doc.fastgpt.in/docs/workflow)
12. [Commercial edition](https://doc.fastgpt.in/docs/commercial/)

View File

@ -0,0 +1 @@
<?xml version="1.0" standalone="no"?><!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd"><svg t="1703041511149" class="icon" viewBox="0 0 1024 1024" version="1.1" xmlns="http://www.w3.org/2000/svg" p-id="7024" xmlns:xlink="http://www.w3.org/1999/xlink" width="128" height="128"><path d="M512.512 128a384 384 0 1 1-384 384 384 384 0 0 1 384-384z m0 96a288 288 0 1 0 288 288 288 288 0 0 0-288-288z" fill="#3671FD" opacity=".2" p-id="7025"></path><path d="M512.512 64a448 448 0 1 1-302.56 117.6A32.032 32.032 0 1 1 253.312 228.8a384 384 0 1 0 298.016-98.88l-6.816-0.608v140.032a32 32 0 0 1-28.256 31.776l-3.744 0.224a32 32 0 0 1-31.776-28.288l-0.224-3.712V64z" fill="#3671FD" p-id="7026"></path><path d="M336.32 313.376l216.224 165.344a52.288 52.288 0 1 1-73.28 73.312l-165.376-216.224a16 16 0 0 1 22.4-22.4z" fill="#FE9C23" p-id="7027"></path></svg>


View File

@ -245,7 +245,9 @@
"Out Ad Edit": "You are about to exit the Advanced orchestration page, please confirm",
"Prompt Editor": "Prompt Editor",
"Save and out": "Save out",
"UnSave": "UnSave"
"UnSave": "UnSave",
"cfr background prompt": "Question completion - Chat background description",
"cfr background tip": "Describing the scope of the current conversation makes it easier for AI to complete first or vague questions, thereby enhancing the knowledge base's ability to continue conversations. \nIf is empty, the problem completion function is not used."
},
"feedback": {
"Custom feedback": "Custom feedback",
@ -295,7 +297,10 @@
"Stop Speak": "Stop Speak",
"Type a message": "Input problem",
"error": {
"Messages empty": "Interface content is empty, maybe the text is too long ~"
"Messages empty": "Interface content is empty, maybe the text is too long ~",
"Select dataset empty": "You didn't choose any dataset.",
"user input empty": "User question is empty",
"Chat error": "Chat error"
},
"feedback": {
"Close User Good Feedback": "",
@ -315,6 +320,7 @@
"Read Source": "Read Source"
},
"response": {
"Plugin Resonse Detail": "Plugin Detail",
"context total length": "Context Length",
"module cq": "Question classification list",
"module cq result": "Classification Result",
@ -541,6 +547,7 @@
"Http Request Url": "",
"TFSwitch textarea": "",
"anyInput": "",
"cfr background": "The background knowledge of the current conversation makes it easy to complete the first question and the fuzzy question, and only needs to briefly describe the scope of the current conversation.",
"dynamic input": "",
"textEditor textarea": "The passed variable can be referenced by {{key}}."
},
@ -549,11 +556,16 @@
"Http Request Method": "",
"Http Request Url": "",
"TFSwitch textarea": "",
"aiModel": "",
"anyInput": "",
"cfr background": "Background",
"chat history": "chat history",
"switch": "Switch",
"textEditor textarea": "Text Edit",
"user question": "User question"
},
"placeholder": {
"cfr background": "Questions about the introduction and use of python. \nThe current dialogue is related to the game GTA5."
}
},
"inputType": {
@ -574,6 +586,7 @@
"running done": "Triggered when the module call ends"
},
"label": {
"cfr result": "",
"result false": "",
"result true": "",
"running done": "End of module call ",
@ -584,6 +597,8 @@
"TFSwitch": "",
"TFSwitch intro": "",
"UnKnow Module": "UnKnow Module",
"cfr": "",
"cfr intro": "Refine the current issue based on history, making it more conducive to knowledge base search, while improving continuous conversation capabilities.",
"textEditor": "Text Editor",
"textEditor intro": "Output of fixed or incoming text after edit"
},

View File

@ -239,13 +239,17 @@
"TTS": "语音播报",
"TTS Tip": "开启后,每次对话后可使用语音播放功能。使用该功能可能产生额外费用。",
"Welcome Text": "对话开场白",
"Chat Variable": "对话框变量",
"Question Guide": "问题引导",
"create app": "创建属于你的 AI 应用",
"edit": {
"Confirm Save App Tip": "该应用可能为高级编排模式,保存后将会覆盖高级编排配置,请确认!",
"Out Ad Edit": "您即将退出高级编排页面,请确认",
"Prompt Editor": "提示词编辑",
"Save and out": "保存并退出",
"UnSave": "不保存"
"UnSave": "不保存",
"cfr background prompt": "问题补全 - 对话背景描述",
"cfr background tip": "描述当前对话的范围便于AI补全首次问题或模糊的问题从而增强知识库连续对话的能力。\n为空时表示不使用问题补全功能。"
},
"feedback": {
"Custom feedback": "自定义反馈",
@ -295,7 +299,10 @@
"Stop Speak": "停止录音",
"Type a message": "输入问题",
"error": {
"Messages empty": "接口内容为空,可能文本超长了~"
"Messages empty": "接口内容为空,可能文本超长了~",
"Select dataset empty": "你没有选择知识库",
"user input empty": "传入的用户问题为空",
"Chat error": "对话出现异常"
},
"feedback": {
"Close User Good Feedback": "",
@ -315,6 +322,7 @@
"Read Source": "查看来源"
},
"response": {
"Plugin Resonse Detail": "插件详情",
"context total length": "上下文总长度",
"module cq": "问题分类列表",
"module cq result": "分类结果",
@ -479,7 +487,7 @@
"embeddingReRank": "增强语义检索",
"embeddingReRank desc": "超额进行向量 topk 查询后再使用 Rerank 进行排序,相关度通常差异明显。"
},
"search mode": "索模式"
"search mode": "索模式"
},
"status": {
"active": "已就绪",
@ -541,6 +549,7 @@
"Http Request Url": "新的HTTP请求地址。如果出现两个“请求地址”可以删除该模块重新加入会拉取最新的模块配置。",
"TFSwitch textarea": "允许定义一些字符串来实现 false 匹配,每行一个,支持正则表达式。",
"anyInput": "可传入任意内容",
"cfr background": "描述当前对话的范围便于AI补全首次问题或模糊的问题从而增强知识库连续对话的能力。\n为空时表示【首次对话】不使用问题补全功能。",
"dynamic input": "接收用户动态添加的参数,会在运行时将这些参数平铺传入",
"textEditor textarea": "可以通过 {{key}} 的方式引用传入的变量。变量仅支持字符串或数字。"
},
@ -549,11 +558,16 @@
"Http Request Method": "请求方式",
"Http Request Url": "请求地址",
"TFSwitch textarea": "自定义 False 匹配规则",
"aiModel": "AI 模型",
"anyInput": "任意内容输入",
"cfr background": "背景知识",
"chat history": "聊天记录",
"switch": "触发器",
"textEditor textarea": "文本编辑",
"user question": "用户问题"
},
"placeholder": {
"cfr background": "关于 python 的介绍和使用等问题。\n当前对话与游戏《GTA5》有关。"
}
},
"inputType": {
@ -574,6 +588,7 @@
"running done": "模块调用结束时触发"
},
"label": {
"cfr result": "补全结果",
"result false": "False",
"result true": "True",
"running done": "模块调用结束",
@ -584,6 +599,8 @@
"TFSwitch": "判断器",
"TFSwitch intro": "根据传入的内容进行 True False 输出。默认情况下,当传入的内容为 false, undefined, null, 0, none 时,会输出 false。你也可以增加一些自定义的字符串来补充输出 false 的内容。",
"UnKnow Module": "未知模块",
"cfr": "问题补全",
"cfr intro": "根据历史记录,完善当前问题,使其更利于知识库搜索,同时提高连续对话能力。",
"textEditor": "文本加工",
"textEditor intro": "可对固定或传入的文本进行加工后输出"
},

View File

@ -26,6 +26,43 @@ const QuoteModal = ({
isShare: boolean;
}) => {
const { t } = useTranslation();
return (
<>
<MyModal
isOpen={true}
onClose={onClose}
h={['90vh', '80vh']}
isCentered
minW={['90vw', '600px']}
iconSrc="/imgs/modal/quote.svg"
title={
<Box>
{t('core.chat.Quote Amount', { amount: rawSearch.length })}
<Box fontSize={'sm'} color={'myGray.500'} fontWeight={'normal'}>
{t('core.chat.quote.Quote Tip')}
</Box>
</Box>
}
>
<ModalBody whiteSpace={'pre-wrap'} textAlign={'justify'} wordBreak={'break-all'}>
<QuoteList rawSearch={rawSearch} isShare={isShare} />
</ModalBody>
</MyModal>
</>
);
};
export default QuoteModal;
export const QuoteList = React.memo(function QuoteList({
rawSearch = [],
isShare
}: {
rawSearch: SearchDataResponseItemType[];
isShare: boolean;
}) {
const { t } = useTranslation();
const { isPc } = useSystemStore();
const theme = useTheme();
const { toast } = useToast();
@ -60,124 +97,104 @@ const QuoteModal = ({
return (
<>
<MyModal
isOpen={true}
onClose={onClose}
h={['90vh', '80vh']}
isCentered
minW={['90vw', '600px']}
iconSrc="/imgs/modal/quote.svg"
title={
<Box>
{t('core.chat.Quote Amount', { amount: rawSearch.length })}
<Box fontSize={'sm'} color={'myGray.500'} fontWeight={'normal'}>
{t('core.chat.quote.Quote Tip')}
</Box>
</Box>
}
>
<ModalBody whiteSpace={'pre-wrap'} textAlign={'justify'} wordBreak={'break-all'}>
{rawSearch.map((item, i) => (
<Box
key={i}
flex={'1 0 0'}
p={2}
borderRadius={'lg'}
border={theme.borders.base}
_notLast={{ mb: 2 }}
position={'relative'}
overflow={'hidden'}
_hover={{ '& .hover-data': { display: 'flex' } }}
bg={i % 2 === 0 ? 'white' : 'myWhite.500'}
>
<Flex alignItems={'flex-end'} mb={3} fontSize={'sm'}>
<RawSourceText
fontWeight={'bold'}
color={'black'}
sourceName={item.sourceName}
sourceId={item.sourceId}
canView={!isShare}
/>
<Box flex={1} />
{!isShare && (
<Link
as={NextLink}
className="hover-data"
display={'none'}
alignItems={'center'}
color={'blue.500'}
href={`/dataset/detail?datasetId=${item.datasetId}&currentTab=dataCard&collectionId=${item.collectionId}`}
>
{t('core.dataset.Go Dataset')}
<MyIcon name={'common/rightArrowLight'} w={'10px'} />
</Link>
)}
</Flex>
{rawSearch.map((item, i) => (
<Box
key={i}
flex={'1 0 0'}
p={2}
borderRadius={'lg'}
border={theme.borders.base}
_notLast={{ mb: 2 }}
position={'relative'}
overflow={'hidden'}
_hover={{ '& .hover-data': { display: 'flex' } }}
bg={i % 2 === 0 ? 'white' : 'myWhite.500'}
>
<Flex alignItems={'flex-end'} mb={3} fontSize={'sm'}>
<RawSourceText
fontWeight={'bold'}
color={'black'}
sourceName={item.sourceName}
sourceId={item.sourceId}
canView={!isShare}
/>
<Box flex={1} />
{!isShare && (
<Link
as={NextLink}
className="hover-data"
display={'none'}
alignItems={'center'}
color={'blue.500'}
href={`/dataset/detail?datasetId=${item.datasetId}&currentTab=dataCard&collectionId=${item.collectionId}`}
>
{t('core.dataset.Go Dataset')}
<MyIcon name={'common/rightArrowLight'} w={'10px'} />
</Link>
)}
</Flex>
<Box color={'black'}>{item.q}</Box>
<Box color={'myGray.600'}>{item.a}</Box>
{!isShare && (
<Flex alignItems={'center'} fontSize={'sm'} mt={3} gap={4} color={'myGray.500'}>
{isPc && (
<MyTooltip label={t('core.dataset.data.id')}>
<Flex border={theme.borders.base} py={'1px'} px={3} borderRadius={'3px'}>
# {item.id}
</Flex>
</MyTooltip>
)}
<MyTooltip label={t('core.dataset.Quote Length')}>
<Flex alignItems={'center'}>
<MyIcon name="common/text/t" w={'14px'} mr={1} color={'myGray.500'} />
{item.q.length + (item.a?.length || 0)}
</Flex>
</MyTooltip>
{!isShare && item.score && (
<MyTooltip label={t('core.dataset.Similarity')}>
<Flex alignItems={'center'}>
<MyIcon name={'kbTest'} w={'12px'} />
<Progress
mx={2}
w={['60px', '90px']}
value={item.score * 100}
size="sm"
borderRadius={'20px'}
colorScheme="myGray"
border={theme.borders.base}
/>
<Box>{item.score.toFixed(4)}</Box>
</Flex>
</MyTooltip>
)}
<Box flex={1} />
{item.id && (
<MyTooltip label={t('core.dataset.data.Edit')}>
<Box
bg={'rgba(255,255,255,0.9)'}
alignItems={'center'}
justifyContent={'center'}
boxShadow={'-10px 0 10px rgba(255,255,255,1)'}
>
<MyIcon
name={'edit'}
w={['16px', '18px']}
h={['16px', '18px']}
cursor={'pointer'}
color={'myGray.600'}
_hover={{
color: 'blue.600'
}}
onClick={() => onclickEdit(item)}
/>
</Box>
</MyTooltip>
)}
</Flex>
<Box color={'black'}>{item.q}</Box>
<Box color={'myGray.600'}>{item.a}</Box>
{!isShare && (
<Flex alignItems={'center'} fontSize={'sm'} mt={3} gap={4} color={'myGray.500'}>
{isPc && (
<MyTooltip label={t('core.dataset.data.id')}>
<Flex border={theme.borders.base} py={'1px'} px={3} borderRadius={'3px'}>
# {item.id}
</Flex>
</MyTooltip>
)}
</Box>
))}
</ModalBody>
<Loading fixed={false} />
</MyModal>
<MyTooltip label={t('core.dataset.Quote Length')}>
<Flex alignItems={'center'}>
<MyIcon name="common/text/t" w={'14px'} mr={1} color={'myGray.500'} />
{item.q.length + (item.a?.length || 0)}
</Flex>
</MyTooltip>
{!isShare && item.score && (
<MyTooltip label={t('core.dataset.Similarity')}>
<Flex alignItems={'center'}>
<MyIcon name={'kbTest'} w={'12px'} />
<Progress
mx={2}
w={['60px', '90px']}
value={item.score * 100}
size="sm"
borderRadius={'20px'}
colorScheme="myGray"
border={theme.borders.base}
/>
<Box>{item.score.toFixed(4)}</Box>
</Flex>
</MyTooltip>
)}
<Box flex={1} />
{item.id && (
<MyTooltip label={t('core.dataset.data.Edit')}>
<Box
bg={'rgba(255,255,255,0.9)'}
alignItems={'center'}
justifyContent={'center'}
boxShadow={'-10px 0 10px rgba(255,255,255,1)'}
>
<MyIcon
name={'edit'}
w={['16px', '18px']}
h={['16px', '18px']}
cursor={'pointer'}
color={'myGray.600'}
_hover={{
color: 'blue.600'
}}
onClick={() => onclickEdit(item)}
/>
</Box>
</MyTooltip>
)}
</Flex>
)}
</Box>
))}
{editInputData && editInputData.id && (
<InputDataModal
onClose={() => setEditInputData(undefined)}
@ -191,8 +208,7 @@ const QuoteModal = ({
collectionId={editInputData.collectionId}
/>
)}
<Loading fixed={false} />
</>
);
};
export default QuoteModal;
});

View File

@ -222,7 +222,11 @@ const ResponseTags = ({
<ContextModal context={contextModalData} onClose={() => setContextModalData(undefined)} />
)}
{isOpenWholeModal && (
<WholeResponseModal response={responseData} onClose={onCloseWholeModal} />
<WholeResponseModal
response={responseData}
isShare={isShare}
onClose={onCloseWholeModal}
/>
)}
</Flex>
</>

View File

@ -3,13 +3,14 @@ import { Box, useTheme, Flex, Image } from '@chakra-ui/react';
import type { ChatHistoryItemResType } from '@fastgpt/global/core/chat/type.d';
import { useTranslation } from 'next-i18next';
import { moduleTemplatesFlat } from '@/web/core/modules/template/system';
import Tabs from '../Tabs';
import Tabs from '../Tabs';
import MyModal from '../MyModal';
import MyTooltip from '../MyTooltip';
import { QuestionOutlineIcon } from '@chakra-ui/icons';
import { formatPrice } from '@fastgpt/global/support/wallet/bill/tools';
import Markdown from '../Markdown';
import { QuoteList } from './QuoteModal';
import { DatasetSearchModeMap } from '@fastgpt/global/core/dataset/constant';
function Row({
@ -37,7 +38,9 @@ function Row({
fontSize={'sm'}
{...(isCodeBlock
? { transform: 'translateY(-3px)' }
: { px: 3, py: 1, border: theme.borders.base })}
: value
? { px: 3, py: 1, border: theme.borders.base }
: {})}
>
{value && <Markdown source={strValue} />}
{rawDom}
@ -48,13 +51,51 @@ function Row({
const WholeResponseModal = ({
response,
isShare,
onClose
}: {
response: ChatHistoryItemResType[];
isShare: boolean;
onClose: () => void;
}) => {
const { t } = useTranslation();
return (
<MyModal
isCentered
isOpen={true}
onClose={onClose}
h={['90vh', '80vh']}
minW={['90vw', '600px']}
iconSrc="/imgs/modal/wholeRecord.svg"
title={
<Flex alignItems={'center'}>
{t('chat.Complete Response')}
<MyTooltip label={'从左往右,为各个模块的响应顺序'}>
<QuestionOutlineIcon ml={2} />
</MyTooltip>
</Flex>
}
>
<Flex h={'100%'} flexDirection={'column'}>
<ResponseBox response={response} isShare={isShare} />
</Flex>
</MyModal>
);
};
export default WholeResponseModal;
const ResponseBox = React.memo(function ResponseBox({
response,
isShare
}: {
response: ChatHistoryItemResType[];
isShare: boolean;
}) {
const theme = useTheme();
const { t } = useTranslation();
const list = useMemo(
() =>
response.map((item, i) => ({
@ -83,145 +124,129 @@ const WholeResponseModal = ({
const activeModule = useMemo(() => response[Number(currentTab)], [currentTab, response]);
return (
<MyModal
isCentered
isOpen={true}
onClose={onClose}
h={['90vh', '80vh']}
w={['90vw', '500px']}
iconSrc="/imgs/modal/wholeRecord.svg"
title={
<Flex alignItems={'center'}>
{t('chat.Complete Response')}
<MyTooltip label={'从左往右,为各个模块的响应顺序'}>
<QuestionOutlineIcon ml={2} />
</MyTooltip>
</Flex>
}
>
<Flex h={'100%'} flexDirection={'column'}>
<Box>
<Tabs list={list} activeId={currentTab} onChange={setCurrentTab} />
</Box>
<Box py={2} px={4} flex={'1 0 0'} overflow={'auto'}>
<Row label={t('core.chat.response.module name')} value={t(activeModule.moduleName)} />
{activeModule?.price !== undefined && (
<Row
label={t('core.chat.response.module price')}
value={`${formatPrice(activeModule?.price)}`}
/>
)}
<>
<Box>
<Tabs list={list} activeId={currentTab} onChange={setCurrentTab} />
</Box>
<Box py={2} px={4} flex={'1 0 0'} overflow={'auto'}>
<Row label={t('core.chat.response.module name')} value={t(activeModule.moduleName)} />
{activeModule?.price !== undefined && (
<Row
label={t('core.chat.response.module time')}
value={`${activeModule?.runningTime || 0}s`}
label={t('core.chat.response.module price')}
value={`${formatPrice(activeModule?.price)}`}
/>
<Row label={t('core.chat.response.module tokens')} value={`${activeModule?.tokens}`} />
<Row label={t('core.chat.response.module model')} value={activeModule?.model} />
<Row label={t('core.chat.response.module query')} value={activeModule?.query} />
)}
<Row
label={t('core.chat.response.module time')}
value={`${activeModule?.runningTime || 0}s`}
/>
<Row label={t('core.chat.response.module tokens')} value={`${activeModule?.tokens}`} />
<Row label={t('core.chat.response.module model')} value={activeModule?.model} />
<Row label={t('core.chat.response.module query')} value={activeModule?.query} />
<Row
label={t('core.chat.response.context total length')}
value={activeModule?.contextTotalLen}
/>
{/* ai chat */}
<Row label={t('core.chat.response.module temperature')} value={activeModule?.temperature} />
<Row label={t('core.chat.response.module maxToken')} value={activeModule?.maxToken} />
<Row
label={t('core.chat.response.module historyPreview')}
rawDom={
activeModule.historyPreview ? (
<Box px={3} py={2} border={theme.borders.base} borderRadius={'md'}>
{activeModule.historyPreview?.map((item, i) => (
<Box
key={i}
_notLast={{
borderBottom: '1px solid',
borderBottomColor: 'myWhite.700',
mb: 2
}}
pb={2}
>
<Box fontWeight={'bold'}>{item.obj}</Box>
<Box whiteSpace={'pre-wrap'}>{item.value}</Box>
</Box>
))}
</Box>
) : (
''
)
}
/>
{activeModule.quoteList && activeModule.quoteList.length > 0 && (
<Row
label={t('core.chat.response.context total length')}
value={activeModule?.contextTotalLen}
label={t('core.chat.response.module quoteList')}
rawDom={<QuoteList isShare={isShare} rawSearch={activeModule.quoteList} />}
/>
)}
{/* ai chat */}
{/* dataset search */}
{activeModule?.searchMode && (
<Row
label={t('core.chat.response.module temperature')}
value={activeModule?.temperature}
label={t('core.dataset.search.search mode')}
// @ts-ignore
value={t(DatasetSearchModeMap[activeModule.searchMode]?.title)}
/>
<Row label={t('core.chat.response.module maxToken')} value={activeModule?.maxToken} />
)}
<Row label={t('core.chat.response.module similarity')} value={activeModule?.similarity} />
<Row label={t('core.chat.response.module limit')} value={activeModule?.limit} />
{/* classify question */}
<Row
label={t('core.chat.response.module cq')}
value={(() => {
if (!activeModule?.cqList) return '';
return activeModule.cqList.map((item) => `* ${item.value}`).join('\n');
})()}
/>
<Row label={t('core.chat.response.module cq result')} value={activeModule?.cqResult} />
{/* extract */}
<Row
label={t('core.chat.response.module extract description')}
value={activeModule?.extractDescription}
/>
{activeModule?.extractResult && (
<Row
label={t('core.chat.response.module historyPreview')}
rawDom={
activeModule.historyPreview ? (
<>
{activeModule.historyPreview?.map((item, i) => (
<Box
key={i}
_notLast={{
borderBottom: '1px solid',
borderBottomColor: 'myWhite.700',
mb: 2
}}
pb={2}
>
<Box fontWeight={'bold'}>{item.obj}</Box>
<Box whiteSpace={'pre-wrap'}>{item.value}</Box>
</Box>
))}
</>
) : (
''
)
}
label={t('core.chat.response.module extract result')}
value={`~~~json\n${JSON.stringify(activeModule?.extractResult, null, 2)}`}
/>
{activeModule.quoteList && activeModule.quoteList.length > 0 && (
<Row
label={t('core.chat.response.module quoteList')}
value={`~~~json\n${JSON.stringify(activeModule.quoteList, null, 2)}`}
/>
)}
)}
{/* dataset search */}
{activeModule?.searchMode && (
<Row
label={t('core.dataset.search.search mode')}
// @ts-ignore
value={t(DatasetSearchModeMap[activeModule.searchMode]?.title)}
/>
)}
<Row label={t('core.chat.response.module similarity')} value={activeModule?.similarity} />
<Row label={t('core.chat.response.module limit')} value={activeModule?.limit} />
{/* classify question */}
{/* http */}
{activeModule?.body && (
<Row
label={t('core.chat.response.module cq')}
value={(() => {
if (!activeModule?.cqList) return '';
return activeModule.cqList.map((item) => `* ${item.value}`).join('\n');
})()}
label={t('core.chat.response.module http body')}
value={`~~~json\n${JSON.stringify(activeModule?.body, null, 2)}`}
/>
<Row label={t('core.chat.response.module cq result')} value={activeModule?.cqResult} />
{/* extract */}
)}
{activeModule?.httpResult && (
<Row
label={t('core.chat.response.module extract description')}
value={activeModule?.extractDescription}
label={t('core.chat.response.module http result')}
value={`~~~json\n${JSON.stringify(activeModule?.httpResult, null, 2)}`}
/>
{activeModule?.extractResult && (
<Row
label={t('core.chat.response.module extract result')}
value={`~~~json\n${JSON.stringify(activeModule?.extractResult, null, 2)}`}
/>
)}
)}
{/* http */}
{activeModule?.body && (
<Row
label={t('core.chat.response.module http body')}
value={`~~~json\n${JSON.stringify(activeModule?.body, null, 2)}`}
/>
)}
{activeModule?.httpResult && (
<Row
label={t('core.chat.response.module http result')}
value={`~~~json\n${JSON.stringify(activeModule?.httpResult, null, 2)}`}
/>
)}
{/* plugin */}
{activeModule?.pluginDetail && activeModule?.pluginDetail.length > 0 && (
<Row
label={t('core.chat.response.Plugin Resonse Detail')}
rawDom={<ResponseBox response={activeModule.pluginDetail} isShare={isShare} />}
/>
)}
{activeModule?.pluginOutput && (
<Row
label={t('core.chat.response.plugin output')}
value={`~~~json\n${JSON.stringify(activeModule?.pluginOutput, null, 2)}`}
/>
)}
{/* plugin */}
{activeModule?.pluginOutput && (
<Row
label={t('core.chat.response.plugin output')}
value={`~~~json\n${JSON.stringify(activeModule?.pluginOutput, null, 2)}`}
/>
)}
{/* text editor */}
<Row label={t('core.chat.response.text output')} value={activeModule?.textOutput} />
</Box>
</Flex>
</MyModal>
{/* text output */}
<Row label={t('core.chat.response.text output')} value={activeModule?.textOutput} />
</Box>
</>
);
};
export default WholeResponseModal;
});

View File

@ -349,7 +349,13 @@ const ChatBox = (
responseText,
isNewChat = false
} = await onStartChat({
chatList: newChatList,
chatList: newChatList.map((item) => ({
dataId: item.dataId,
obj: item.obj,
value: item.value,
status: item.status,
moduleName: item.moduleName
})),
messages,
controller: abortSignal,
generatingMessage,
@ -386,7 +392,7 @@ const ChatBox = (
}, 100);
} catch (err: any) {
toast({
title: getErrText(err, '聊天出错了~'),
title: t(getErrText(err, 'core.chat.error.Chat error')),
status: 'error',
duration: 5000,
isClosable: true
@ -419,7 +425,8 @@ const ChatBox = (
generatingMessage,
createQuestionGuide,
generatingScroll,
isPc
isPc,
t
]
);

View File

@ -39,7 +39,7 @@ const Layout = ({ children }: { children: JSX.Element }) => {
const router = useRouter();
const { colorMode, setColorMode } = useColorMode();
const { Loading } = useLoading();
const { loading, setScreenWidth, isPc, loadGitStar } = useSystemStore();
const { loading, setScreenWidth, isPc } = useSystemStore();
const { userInfo } = useUserStore();
const isChatPage = useMemo(
@ -61,12 +61,11 @@ const Layout = ({ children }: { children: JSX.Element }) => {
window.addEventListener('resize', resize);
resize();
loadGitStar();
return () => {
window.removeEventListener('resize', resize);
};
}, [loadGitStar, setScreenWidth]);
}, [setScreenWidth]);
const { data: unread = 0 } = useQuery(['getUnreadCount'], getUnreadCount, {
enabled: !!userInfo && !!feConfigs.isPlus,

View File

@ -346,7 +346,7 @@
width: 100%;
* {
word-break: break-all;
word-break: break-word;
}
pre {

View File

@ -19,49 +19,51 @@ type Props = TextareaProps & {
// variables: string[];
};
const PromptTextarea = (props: Props) => {
const ModalTextareaRef = useRef<HTMLTextAreaElement>(null);
const TextareaRef = useRef<HTMLTextAreaElement>(null);
const PromptTextarea = React.forwardRef<HTMLTextAreaElement, Props>(
function PromptTextarea(props, ref) {
const ModalTextareaRef = useRef<HTMLTextAreaElement>(null);
const TextareaRef = useRef<HTMLTextAreaElement>(null);
const { t } = useTranslation();
const { title = t('core.app.edit.Prompt Editor'), value, ...childProps } = props;
const { t } = useTranslation();
const { title = t('core.app.edit.Prompt Editor'), ...childProps } = props;
const { isOpen, onOpen, onClose } = useDisclosure();
const { isOpen, onOpen, onClose } = useDisclosure();
return (
<>
<Editor textareaRef={TextareaRef} {...childProps} onOpenModal={onOpen} />
{isOpen && (
<MyModal iconSrc="/imgs/modal/edit.svg" title={title} isOpen onClose={onClose}>
<ModalBody>
<Editor
textareaRef={ModalTextareaRef}
{...childProps}
minH={'300px'}
maxH={'auto'}
minW={['100%', '512px']}
/>
</ModalBody>
<ModalFooter>
<Button
onClick={() => {
if (ModalTextareaRef.current && TextareaRef.current) {
TextareaRef.current.value = ModalTextareaRef.current.value;
}
return (
<>
<Editor textareaRef={TextareaRef} {...childProps} onOpenModal={onOpen} />
{isOpen && (
<MyModal iconSrc="/imgs/modal/edit.svg" title={title} isOpen onClose={onClose}>
<ModalBody>
<Editor
textareaRef={ModalTextareaRef}
{...childProps}
minH={'300px'}
maxH={'auto'}
minW={['100%', '512px']}
/>
</ModalBody>
<ModalFooter>
<Button
onClick={() => {
if (ModalTextareaRef.current && TextareaRef.current) {
TextareaRef.current.value = ModalTextareaRef.current.value;
}
onClose();
}}
>
{t('common.Confirm')}
</Button>
</ModalFooter>
</MyModal>
)}
</>
);
};
onClose();
}}
>
{t('common.Confirm')}
</Button>
</ModalFooter>
</MyModal>
)}
</>
);
}
);
export default PromptTextarea;
export default React.memo(PromptTextarea);
const Editor = React.memo(function Editor({
onOpenModal,
@ -75,7 +77,7 @@ const Editor = React.memo(function Editor({
return (
<Box h={'100%'} w={'100%'} position={'relative'}>
<Textarea ref={textareaRef} wordBreak={'break-all'} maxW={'100%'} {...props} />
<Textarea ref={textareaRef} textAlign={'justify'} maxW={'100%'} {...props} />
{onOpenModal && (
<Box
zIndex={1}

View File

@ -1,5 +1,11 @@
import React from 'react';
import { SmoothStepEdge, EdgeLabelRenderer, EdgeProps, getSmoothStepPath } from 'reactflow';
import {
BezierEdge,
getBezierPath,
EdgeLabelRenderer,
EdgeProps,
getSmoothStepPath
} from 'reactflow';
import { Flex } from '@chakra-ui/react';
import MyIcon from '@/components/Icon';
@ -21,7 +27,7 @@ const ButtonEdge = (
style = {}
} = props;
const [edgePath, labelX, labelY] = getSmoothStepPath({
const [edgePath, labelX, labelY] = getBezierPath({
sourceX,
sourceY,
sourcePosition,
@ -42,7 +48,7 @@ const ButtonEdge = (
return (
<>
<SmoothStepEdge {...props} style={edgeStyle} />
<BezierEdge {...props} style={edgeStyle} />
<EdgeLabelRenderer>
<Flex
alignItems={'center'}

View File

@ -14,7 +14,7 @@ import 'reactflow/dist/style.css';
import type { ModuleItemType } from '@fastgpt/global/core/module/type.d';
const NodeSimple = dynamic(() => import('./components/nodes/NodeSimple'));
const nodeTypes = {
const nodeTypes: Record<`${FlowNodeTypeEnum}`, any> = {
[FlowNodeTypeEnum.userGuide]: dynamic(() => import('./components/nodes/NodeUserGuide')),
[FlowNodeTypeEnum.variable]: dynamic(() => import('./components/nodes/abandon/NodeVariable')),
[FlowNodeTypeEnum.questionInput]: dynamic(() => import('./components/nodes/NodeQuestionInput')),
@ -28,7 +28,8 @@ const nodeTypes = {
[FlowNodeTypeEnum.runApp]: NodeSimple,
[FlowNodeTypeEnum.pluginInput]: dynamic(() => import('./components/nodes/NodePluginInput')),
[FlowNodeTypeEnum.pluginOutput]: dynamic(() => import('./components/nodes/NodePluginOutput')),
[FlowNodeTypeEnum.pluginModule]: NodeSimple
[FlowNodeTypeEnum.pluginModule]: NodeSimple,
[FlowNodeTypeEnum.cfr]: NodeSimple
};
const edgeTypes = {
[EDGE_TYPE]: ButtonEdge

View File

@ -1,14 +1,16 @@
import { ModuleInputKeyEnum } from '@fastgpt/global/core/module/constants';
import { FlowNodeTypeEnum } from '@fastgpt/global/core/module/node/constant';
import { FlowNodeOutputTargetItemType } from '@fastgpt/global/core/module/node/type';
import { FlowModuleItemType, ModuleItemType } from '@fastgpt/global/core/module/type';
import { type Node, type Edge } from 'reactflow';
export function flowNode2Modules({
export const flowNode2Modules = ({
nodes,
edges
}: {
nodes: Node<FlowModuleItemType, string | undefined>[];
edges: Edge<any>[];
}) {
}) => {
const modules: ModuleItemType[] = nodes.map((item) => ({
moduleId: item.data.moduleId,
name: item.data.name,
@ -48,4 +50,19 @@ export function flowNode2Modules({
});
return modules;
}
};
export const filterExportModules = (modules: ModuleItemType[]) => {
modules.forEach((module) => {
// dataset - clear the selected dataset values so private dataset ids are not exported
if (module.flowType === FlowNodeTypeEnum.datasetSearchNode) {
module.inputs.forEach((item) => {
if (item.key === ModuleInputKeyEnum.datasetSelectList) {
item.value = [];
}
});
}
});
return JSON.stringify(modules, null, 2);
};
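
A standalone usage sketch of the export filter, with simplified types; the `datasets` key is assumed to be what `ModuleInputKeyEnum.datasetSelectList` resolves to:

```ts
type ModuleInput = { key: string; value?: unknown };
type Module = { flowType: string; inputs: ModuleInput[] };

// Mirrors filterExportModules above: exported configs must not leak dataset ids.
const stripDatasetIds = (modules: Module[]) => {
  modules.forEach((m) => {
    if (m.flowType === 'datasetSearchNode') {
      m.inputs.forEach((input) => {
        if (input.key === 'datasets') input.value = [];
      });
    }
  });
  return JSON.stringify(modules, null, 2);
};

stripDatasetIds([
  { flowType: 'datasetSearchNode', inputs: [{ key: 'datasets', value: [{ datasetId: 'abc' }] }] }
]);
// -> the datasets input is serialized as []
```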

View File

@ -14,6 +14,9 @@ export const SimpleModeTemplate_FastGPT_Universal: AppSimpleEditConfigTemplateTy
quoteTemplate: true,
quotePrompt: true
},
cfr: {
background: true
},
dataset: {
datasets: true,
similarity: true,

View File

@ -39,6 +39,7 @@ function App({ Component, pageProps }: AppProps) {
const router = useRouter();
const { hiId } = router.query as { hiId?: string };
const { i18n } = useTranslation();
const { loadGitStar } = useSystemStore();
const [scripts, setScripts] = useState<FeConfigsType['scripts']>([]);
const [title, setTitle] = useState(process.env.SYSTEM_NAME || 'AI');
@ -46,18 +47,23 @@ function App({ Component, pageProps }: AppProps) {
// get init data
(async () => {
const {
feConfigs: { scripts, isPlus, systemTitle }
feConfigs: { scripts, isPlus, show_git, systemTitle }
} = await clientInitData();
setTitle(systemTitle || 'FastGPT');
// log fastgpt
!isPlus &&
if (!isPlus) {
console.log(
'%cWelcome to FastGPT',
'font-family:Arial; color:#3370ff ; font-size:18px; font-weight:bold;',
`GitHubhttps://github.com/labring/FastGPT`
);
}
if (show_git) {
loadGitStar();
}
setScripts(scripts || []);
})();

View File

@ -6,9 +6,9 @@ import type { NextApiRequest, NextApiResponse } from 'next';
import { jsonRes } from '@fastgpt/service/common/response';
import type { AppSimpleEditFormType } from '@fastgpt/global/core/app/type.d';
import type { ModuleItemType } from '@fastgpt/global/core/module/type';
import { FlowNodeInputTypeEnum } from '@fastgpt/global/core/module/node/constant';
import { FormatForm2ModulesProps } from '@fastgpt/global/core/app/api';
import { DatasetSearchModeEnum } from '@fastgpt/global/core/dataset/constant';
import { getExtractModel } from '@/service/core/ai/model';
export default async function handler(req: NextApiRequest, res: NextApiResponse<any>) {
try {
@ -31,13 +31,9 @@ export default async function handler(req: NextApiRequest, res: NextApiResponse<
}
}
function simpleChatTemplate({
formData,
maxToken
}: {
formData: AppSimpleEditFormType;
maxToken: number;
}): ModuleItemType[] {
type Props = { formData: AppSimpleEditFormType; maxToken: number };
function simpleChatTemplate({ formData, maxToken }: Props): ModuleItemType[] {
return [
{
moduleId: 'userChatInput',
@ -52,7 +48,10 @@ function simpleChatTemplate({
{
key: 'userChatInput',
type: 'systemInput',
valueType: 'string',
label: '用户问题',
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
}
],
@ -87,6 +86,8 @@ function simpleChatTemplate({
type: 'target',
label: 'core.module.input.label.switch',
valueType: 'any',
showTargetInApp: true,
showTargetInPlugin: true,
connected: false
},
{
@ -94,6 +95,9 @@ function simpleChatTemplate({
type: 'selectChatModel',
label: '对话模型',
required: true,
valueType: 'string',
showTargetInApp: false,
showTargetInPlugin: false,
value: formData.aiSettings.model,
connected: false
},
@ -102,6 +106,7 @@ function simpleChatTemplate({
type: 'hidden',
label: '温度',
value: 1,
valueType: 'number',
min: 0,
max: 10,
step: 1,
@ -115,6 +120,8 @@ function simpleChatTemplate({
value: 10
}
],
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
},
{
@ -122,6 +129,7 @@ function simpleChatTemplate({
type: 'hidden',
label: '回复上限',
value: maxToken,
valueType: 'number',
min: 100,
max: 4000,
step: 50,
@ -135,14 +143,18 @@ function simpleChatTemplate({
value: 4000
}
],
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
},
{
key: 'isResponseAnswerText',
type: 'hidden',
label: '返回AI内容',
valueType: 'boolean',
value: true,
valueType: 'boolean',
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
},
{
@ -150,6 +162,9 @@ function simpleChatTemplate({
type: 'hidden',
label: '引用内容模板',
valueType: 'string',
showTargetInApp: false,
showTargetInPlugin: false,
value: formData.aiSettings.quoteTemplate,
connected: false
},
{
@ -157,12 +172,18 @@ function simpleChatTemplate({
type: 'hidden',
label: '引用内容提示词',
valueType: 'string',
showTargetInApp: false,
showTargetInPlugin: false,
value: formData.aiSettings.quotePrompt,
connected: false
},
{
key: 'aiSettings',
type: 'aiSettings',
label: '',
valueType: 'any',
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
},
{
@ -175,31 +196,42 @@ function simpleChatTemplate({
'模型固定的引导词,通过调整该内容,可以引导模型聊天方向。该内容会被固定在上下文的开头。可使用变量,例如 {{language}}',
placeholder:
'模型固定的引导词,通过调整该内容,可以引导模型聊天方向。该内容会被固定在上下文的开头。可使用变量,例如 {{language}}',
showTargetInApp: true,
showTargetInPlugin: true,
value: formData.aiSettings.systemPrompt,
connected: false
},
{
key: 'history',
type: 'numberInput',
label: 'core.module.input.label.chat history',
required: true,
min: 0,
max: 30,
valueType: 'chatHistory',
value: 8,
showTargetInApp: true,
showTargetInPlugin: true,
connected: false
},
{
key: 'quoteQA',
type: 'target',
label: '引用内容',
description: "对象数组格式,结构:\n [{q:'问题',a:'回答'}]",
valueType: 'datasetQuote',
showTargetInApp: true,
showTargetInPlugin: true,
connected: false
},
{
key: 'history',
type: 'target',
label: 'core.module.input.label.chat history',
valueType: 'chatHistory',
connected: false,
value: 8
},
{
key: 'userChatInput',
type: 'target',
label: 'core.module.input.label.user question',
required: true,
valueType: 'string',
showTargetInApp: true,
showTargetInPlugin: true,
connected: true
}
],
@ -232,29 +264,26 @@ function simpleChatTemplate({
}
];
}
function datasetTemplate({
formData,
maxToken
}: {
formData: AppSimpleEditFormType;
maxToken: number;
}): ModuleItemType[] {
return [
function datasetTemplate({ formData, maxToken }: Props): ModuleItemType[] {
const modules: ModuleItemType[] = [
{
moduleId: 'userChatInput',
name: '用户问题(对话入口)',
avatar: '/imgs/module/userChatInput.png',
flowType: 'questionInput',
position: {
x: 464.32198615344566,
y: 1602.2698463081606
x: 324.81436595478294,
y: 1527.0012457753612
},
inputs: [
{
key: 'userChatInput',
type: 'systemInput',
valueType: 'string',
label: '用户问题',
connected: true
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
}
],
outputs: [
@ -265,11 +294,11 @@ function datasetTemplate({
valueType: 'string',
targets: [
{
moduleId: 'chatModule',
moduleId: 'vuc92c',
key: 'userChatInput'
},
{
moduleId: 'datasetSearch',
moduleId: 'chatModule',
key: 'userChatInput'
}
]
@ -283,43 +312,65 @@ function datasetTemplate({
flowType: 'datasetSearchNode',
showStatus: true,
position: {
x: 956.0838440206068,
y: 887.462827870246
x: 1351.5043753345153,
y: 947.0780385418003
},
inputs: [
{
key: 'switch',
type: 'target',
label: 'core.module.input.label.switch',
valueType: 'any',
showTargetInApp: true,
showTargetInPlugin: true,
connected: false
},
{
key: 'datasets',
value: formData.dataset.datasets,
type: FlowNodeInputTypeEnum.custom,
type: 'selectDataset',
label: '关联的知识库',
value: formData.dataset.datasets,
valueType: 'selectDataset',
list: [],
required: true,
showTargetInApp: false,
showTargetInPlugin: true,
connected: false
},
{
key: 'similarity',
value: 0.1,
type: FlowNodeInputTypeEnum.slider,
label: '相关度',
type: 'hidden',
label: '最低相关性',
value: 0.15,
valueType: 'number',
min: 0,
max: 1,
step: 0.01,
markList: [
{
label: '0',
value: 0
},
{
label: '1',
value: 1
}
],
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
},
{
key: 'limit',
type: 'hidden',
label: '引用上限',
description: '单次搜索最大的 Tokens 数量中文约1字=1.7Tokens英文约1字=1Tokens',
value: 2000,
type: FlowNodeInputTypeEnum.slider,
label: '单次搜索上限',
valueType: 'number',
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
},
{
key: 'switch',
type: FlowNodeInputTypeEnum.target,
label: '触发器',
connected: false
},
{
key: 'userChatInput',
type: FlowNodeInputTypeEnum.target,
label: '用户问题',
connected: true
},
{
key: 'searchMode',
type: 'hidden',
@ -334,10 +385,20 @@ function datasetTemplate({
key: 'datasetParamsModal',
type: 'selectDatasetParamsModal',
label: '',
connected: false,
valueType: 'any',
showTargetInApp: false,
showTargetInPlugin: false
showTargetInPlugin: false,
connected: false
},
{
key: 'userChatInput',
type: 'target',
label: 'core.module.input.label.user question',
required: true,
valueType: 'string',
showTargetInApp: true,
showTargetInPlugin: true,
connected: true
}
],
outputs: [
@ -386,8 +447,8 @@ function datasetTemplate({
flowType: 'chatNode',
showStatus: true,
position: {
x: 1551.71405495818,
y: 977.4911578918461
x: 2022.7264786978908,
y: 1006.3102431257475
},
inputs: [
{
@ -395,6 +456,8 @@ function datasetTemplate({
type: 'target',
label: 'core.module.input.label.switch',
valueType: 'any',
showTargetInApp: true,
showTargetInPlugin: true,
connected: false
},
{
@ -402,6 +465,9 @@ function datasetTemplate({
type: 'selectChatModel',
label: '对话模型',
required: true,
valueType: 'string',
showTargetInApp: false,
showTargetInPlugin: false,
value: formData.aiSettings.model,
connected: false
},
@ -410,6 +476,7 @@ function datasetTemplate({
type: 'hidden',
label: '温度',
value: 0,
valueType: 'number',
min: 0,
max: 10,
step: 1,
@ -423,6 +490,8 @@ function datasetTemplate({
value: 10
}
],
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
},
{
@ -430,6 +499,7 @@ function datasetTemplate({
type: 'hidden',
label: '回复上限',
value: maxToken,
valueType: 'number',
min: 100,
max: 4000,
step: 50,
@ -443,14 +513,18 @@ function datasetTemplate({
value: 4000
}
],
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
},
{
key: 'isResponseAnswerText',
type: 'hidden',
label: '返回AI内容',
valueType: 'boolean',
value: true,
valueType: 'boolean',
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
},
{
@ -458,6 +532,9 @@ function datasetTemplate({
type: 'hidden',
label: '引用内容模板',
valueType: 'string',
showTargetInApp: false,
showTargetInPlugin: false,
value: '',
connected: false
},
{
@ -465,12 +542,18 @@ function datasetTemplate({
type: 'hidden',
label: '引用内容提示词',
valueType: 'string',
showTargetInApp: false,
showTargetInPlugin: false,
value: '',
connected: false
},
{
key: 'aiSettings',
type: 'aiSettings',
label: '',
valueType: 'any',
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
},
{
@ -483,31 +566,42 @@ function datasetTemplate({
'模型固定的引导词,通过调整该内容,可以引导模型聊天方向。该内容会被固定在上下文的开头。可使用变量,例如 {{language}}',
placeholder:
'模型固定的引导词,通过调整该内容,可以引导模型聊天方向。该内容会被固定在上下文的开头。可使用变量,例如 {{language}}',
showTargetInApp: true,
showTargetInPlugin: true,
value: formData.aiSettings.systemPrompt,
connected: false
},
{
key: 'history',
type: 'numberInput',
label: 'core.module.input.label.chat history',
required: true,
min: 0,
max: 30,
valueType: 'chatHistory',
value: 6,
showTargetInApp: true,
showTargetInPlugin: true,
connected: false
},
{
key: 'quoteQA',
type: 'target',
label: '引用内容',
description: "对象数组格式,结构:\n [{q:'问题',a:'回答'}]",
valueType: 'datasetQuote',
showTargetInApp: true,
showTargetInPlugin: true,
connected: true
},
{
key: 'history',
type: 'target',
label: 'core.module.input.label.chat history',
valueType: 'chatHistory',
connected: false,
value: 8
},
{
key: 'userChatInput',
type: 'target',
label: 'core.module.input.label.user question',
required: true,
valueType: 'string',
showTargetInApp: true,
showTargetInPlugin: true,
connected: true
}
],
@ -537,6 +631,91 @@ function datasetTemplate({
targets: []
}
]
},
{
moduleId: 'vuc92c',
name: 'core.module.template.cfr',
avatar: '/imgs/module/cfr.svg',
flowType: 'cfr',
showStatus: true,
position: {
x: 758.2985382279098,
y: 1124.6527309337314
},
inputs: [
{
key: 'switch',
type: 'target',
label: 'core.module.input.label.switch',
valueType: 'any',
showTargetInApp: true,
showTargetInPlugin: true,
connected: false
},
{
key: 'model',
type: 'selectExtractModel',
label: 'core.module.input.label.aiModel',
required: true,
valueType: 'string',
value: getExtractModel().model,
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
},
{
key: 'systemPrompt',
type: 'textarea',
label: 'core.module.input.label.cfr background',
max: 300,
value: formData.cfr.background,
valueType: 'string',
description: 'core.module.input.description.cfr background',
placeholder: 'core.module.input.placeholder.cfr background',
showTargetInApp: true,
showTargetInPlugin: true,
connected: false
},
{
key: 'history',
type: 'numberInput',
label: 'core.module.input.label.chat history',
required: true,
min: 0,
max: 30,
valueType: 'chatHistory',
value: 6,
showTargetInApp: true,
showTargetInPlugin: true,
connected: false
},
{
key: 'userChatInput',
type: 'target',
label: 'core.module.input.label.user question',
required: true,
valueType: 'string',
showTargetInApp: true,
showTargetInPlugin: true,
connected: true
}
],
outputs: [
{
key: 'system_text',
label: 'core.module.output.label.cfr result',
valueType: 'string',
type: 'source',
targets: [
{
moduleId: 'datasetSearch',
key: 'userChatInput'
}
]
}
]
}
];
return modules;
}
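
Summarizing the wiring this template builds (module ids and output targets as above; the runtime behavior of the cfr node is inferred from its inputs, which feed the extract model with chat history and `formData.cfr.background`):

```ts
// Data flow of the dataset template with the new cfr node:
//
//   userChatInput ──────────► vuc92c (cfr)      // rewrite the user question using
//   userChatInput ──────────► chatModule        // history + cfr background prompt
//   vuc92c.system_text ─────► datasetSearch.userChatInput
//   datasetSearch.quoteQA ──► chatModule.quoteQA
```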

View File

@ -6,15 +6,12 @@ import type { NextApiRequest, NextApiResponse } from 'next';
import { jsonRes } from '@fastgpt/service/common/response';
import type { AppSimpleEditFormType } from '@fastgpt/global/core/app/type.d';
import type { ModuleItemType } from '@fastgpt/global/core/module/type';
import { FlowNodeInputTypeEnum, FlowNodeTypeEnum } from '@fastgpt/global/core/module/node/constant';
import { ModuleIOValueTypeEnum } from '@fastgpt/global/core/module/constants';
import { ModuleInputKeyEnum } from '@fastgpt/global/core/module/constants';
import type { FlowNodeInputItemType } from '@fastgpt/global/core/module/node/type.d';
import { FormatForm2ModulesProps } from '@fastgpt/global/core/app/api';
import { getExtractModel } from '@/service/core/ai/model';
export default async function handler(req: NextApiRequest, res: NextApiResponse<any>) {
try {
const { formData, chatModelList } = req.body as FormatForm2ModulesProps;
const { formData } = req.body as FormatForm2ModulesProps;
const modules =
formData.dataset.datasets.length > 0
@ -32,100 +29,34 @@ export default async function handler(req: NextApiRequest, res: NextApiResponse<
}
}
function chatModelInput(formData: AppSimpleEditFormType): FlowNodeInputItemType[] {
return [
{
key: 'model',
value: formData.aiSettings.model,
type: 'custom',
label: '对话模型',
connected: false
},
{
key: 'temperature',
value: formData.aiSettings.temperature,
type: 'slider',
label: '温度',
connected: false
},
{
key: 'maxToken',
value: formData.aiSettings.maxToken,
type: 'custom',
label: '回复上限',
connected: false
},
{
key: 'systemPrompt',
value: formData.aiSettings.systemPrompt || '',
type: 'textarea',
label: '系统提示词',
connected: false
},
{
key: ModuleInputKeyEnum.aiChatIsResponseText,
value: true,
type: 'hidden',
label: '返回AI内容',
connected: false
},
{
key: 'quoteTemplate',
value: formData.aiSettings.quoteTemplate || '',
type: 'hidden',
label: '引用内容模板',
connected: false
},
{
key: 'quotePrompt',
value: formData.aiSettings.quotePrompt || '',
type: 'hidden',
label: '引用内容提示词',
connected: false
},
{
key: 'switch',
type: 'target',
label: '触发器',
connected: formData.dataset.datasets.length > 0 && !!formData.dataset.searchEmptyText
},
{
key: 'history',
type: 'target',
label: 'core.module.input.label.chat history',
connected: false,
value: 6
},
{
key: 'quoteQA',
type: 'target',
label: '引用内容',
connected: formData.dataset.datasets.length > 0
},
{
key: 'userChatInput',
type: 'target',
label: '用户问题',
connected: true
}
];
}
function simpleChatTemplate(formData: AppSimpleEditFormType): ModuleItemType[] {
return [
{
moduleId: 'userChatInput',
name: '用户问题(对话入口)',
flowType: FlowNodeTypeEnum.questionInput,
avatar: '/imgs/module/userChatInput.png',
flowType: 'questionInput',
position: {
x: 464.32198615344566,
y: 1602.2698463081606
},
inputs: [
{
key: 'userChatInput',
connected: false,
type: 'systemInput',
valueType: 'string',
label: '用户问题',
type: 'systemInput'
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
}
],
outputs: [
{
key: 'userChatInput',
label: '用户问题',
type: 'source',
valueType: 'string',
targets: [
{
moduleId: 'chatModule',
@ -133,115 +64,309 @@ function simpleChatTemplate(formData: AppSimpleEditFormType): ModuleItemType[] {
}
]
}
],
position: {
x: 464.32198615344566,
y: 1602.2698463081606
},
moduleId: 'userChatInput'
]
},
{
moduleId: 'chatModule',
name: 'AI 对话',
flowType: FlowNodeTypeEnum.chatNode,
inputs: chatModelInput(formData),
avatar: '/imgs/module/AI.png',
flowType: 'chatNode',
showStatus: true,
outputs: [
{
key: 'answerText',
label: 'AI回复',
description: '直接响应,无需配置',
type: 'hidden',
targets: []
},
{
key: 'finish',
label: '回复结束',
description: 'AI 回复完成后触发',
valueType: 'boolean',
type: 'source',
targets: []
}
],
position: {
x: 981.9682828103937,
y: 890.014595014464
},
moduleId: 'chatModule'
inputs: [
{
key: 'switch',
type: 'target',
label: 'core.module.input.label.switch',
valueType: 'any',
showTargetInApp: true,
showTargetInPlugin: true,
connected: false
},
{
key: 'model',
type: 'selectChatModel',
label: '对话模型',
required: true,
valueType: 'string',
showTargetInApp: false,
showTargetInPlugin: false,
value: formData.aiSettings.model,
connected: false
},
{
key: 'temperature',
type: 'hidden',
label: '温度',
value: formData.aiSettings.temperature,
valueType: 'number',
min: 0,
max: 10,
step: 1,
markList: [
{
label: '严谨',
value: 0
},
{
label: '发散',
value: 10
}
],
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
},
{
key: 'maxToken',
type: 'hidden',
label: '回复上限',
value: formData.aiSettings.maxToken,
valueType: 'number',
min: 100,
max: 4000,
step: 50,
markList: [
{
label: '100',
value: 100
},
{
label: '4000',
value: 4000
}
],
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
},
{
key: 'isResponseAnswerText',
type: 'hidden',
label: '返回AI内容',
value: true,
valueType: 'boolean',
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
},
{
key: 'quoteTemplate',
type: 'hidden',
label: '引用内容模板',
valueType: 'string',
showTargetInApp: false,
showTargetInPlugin: false,
value: formData.aiSettings.quoteTemplate,
connected: false
},
{
key: 'quotePrompt',
type: 'hidden',
label: '引用内容提示词',
valueType: 'string',
showTargetInApp: false,
showTargetInPlugin: false,
value: formData.aiSettings.quotePrompt,
connected: false
},
{
key: 'aiSettings',
type: 'aiSettings',
label: '',
valueType: 'any',
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
},
{
key: 'systemPrompt',
type: 'textarea',
label: '系统提示词',
max: 300,
valueType: 'string',
description:
'模型固定的引导词,通过调整该内容,可以引导模型聊天方向。该内容会被固定在上下文的开头。可使用变量,例如 {{language}}',
placeholder:
'模型固定的引导词,通过调整该内容,可以引导模型聊天方向。该内容会被固定在上下文的开头。可使用变量,例如 {{language}}',
showTargetInApp: true,
showTargetInPlugin: true,
value: formData.aiSettings.systemPrompt,
connected: false
},
{
key: 'history',
type: 'numberInput',
label: 'core.module.input.label.chat history',
required: true,
min: 0,
max: 30,
valueType: 'chatHistory',
value: 6,
showTargetInApp: true,
showTargetInPlugin: true,
connected: false
},
{
key: 'quoteQA',
type: 'target',
label: '引用内容',
description: "对象数组格式,结构:\n [{q:'问题',a:'回答'}]",
valueType: 'datasetQuote',
showTargetInApp: true,
showTargetInPlugin: true,
connected: false
},
{
key: 'userChatInput',
type: 'target',
label: 'core.module.input.label.user question',
required: true,
valueType: 'string',
showTargetInApp: true,
showTargetInPlugin: true,
connected: true
}
],
outputs: [
{
key: 'answerText',
label: 'AI回复',
description: '将在 stream 回复完毕后触发',
valueType: 'string',
type: 'source',
targets: []
},
{
key: 'finish',
label: 'core.module.output.label.running done',
description: 'core.module.output.description.running done',
valueType: 'boolean',
type: 'source',
targets: []
},
{
key: 'history',
label: '新的上下文',
description: '将本次回复内容拼接上历史记录,作为新的上下文返回',
valueType: 'chatHistory',
type: 'source',
targets: []
}
]
}
];
}
function datasetTemplate(formData: AppSimpleEditFormType): ModuleItemType[] {
return [
const modules: ModuleItemType[] = [
{
moduleId: 'userChatInput',
name: '用户问题(对话入口)',
flowType: FlowNodeTypeEnum.questionInput,
avatar: '/imgs/module/userChatInput.png',
flowType: 'questionInput',
position: {
x: 324.81436595478294,
y: 1527.0012457753612
},
inputs: [
{
key: 'userChatInput',
label: '用户问题',
type: 'systemInput',
valueType: 'string',
label: '用户问题',
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
}
],
outputs: [
{
key: 'userChatInput',
label: '用户问题',
type: 'source',
valueType: 'string',
targets: [
{
moduleId: 'chatModule',
moduleId: 'vuc92c',
key: 'userChatInput'
},
{
moduleId: 'datasetSearch',
moduleId: 'chatModule',
key: 'userChatInput'
}
]
}
],
position: {
x: 464.32198615344566,
y: 1602.2698463081606
},
moduleId: 'userChatInput'
]
},
{
moduleId: 'datasetSearch',
name: '知识库搜索',
flowType: FlowNodeTypeEnum.datasetSearchNode,
avatar: '/imgs/module/db.png',
flowType: 'datasetSearchNode',
showStatus: true,
position: {
x: 1351.5043753345153,
y: 947.0780385418003
},
inputs: [
{
key: 'switch',
type: 'target',
label: 'core.module.input.label.switch',
valueType: 'any',
showTargetInApp: true,
showTargetInPlugin: true,
connected: false
},
{
key: 'datasets',
value: formData.dataset.datasets,
type: FlowNodeInputTypeEnum.custom,
type: 'selectDataset',
label: '关联的知识库',
value: formData.dataset.datasets,
valueType: 'selectDataset',
list: [],
required: true,
showTargetInApp: false,
showTargetInPlugin: true,
connected: false
},
{
key: 'similarity',
type: 'hidden',
label: '最低相关性',
value: formData.dataset.similarity,
type: FlowNodeInputTypeEnum.slider,
label: '相关度',
valueType: 'number',
min: 0,
max: 1,
step: 0.01,
markList: [
{
label: '0',
value: 0
},
{
label: '1',
value: 1
}
],
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
},
{
key: 'limit',
type: 'hidden',
label: '引用上限',
description: '单次搜索最大的 Tokens 数量中文约1字=1.7Tokens英文约1字=1Tokens',
value: formData.dataset.limit,
type: FlowNodeInputTypeEnum.slider,
label: '单次搜索上限',
valueType: 'number',
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
},
{
key: 'switch',
type: FlowNodeInputTypeEnum.target,
label: '触发器',
connected: false
},
{
key: 'userChatInput',
type: FlowNodeInputTypeEnum.target,
label: '用户问题',
connected: true
},
{
key: 'searchMode',
type: 'hidden',
@ -256,19 +381,32 @@ function datasetTemplate(formData: AppSimpleEditFormType): ModuleItemType[] {
key: 'datasetParamsModal',
type: 'selectDatasetParamsModal',
label: '',
connected: false,
valueType: 'any',
showTargetInApp: false,
showTargetInPlugin: false
showTargetInPlugin: false,
connected: false
},
{
key: 'userChatInput',
type: 'target',
label: 'core.module.input.label.user question',
required: true,
valueType: 'string',
showTargetInApp: true,
showTargetInPlugin: true,
connected: true
}
],
outputs: [
{
key: 'isEmpty',
label: '搜索结果为空',
type: 'source',
valueType: 'boolean',
targets: formData.dataset.searchEmptyText
? [
{
moduleId: 'emptyText',
moduleId: '6dtsvu',
key: 'switch'
}
]
@ -276,6 +414,9 @@ function datasetTemplate(formData: AppSimpleEditFormType): ModuleItemType[] {
},
{
key: 'unEmpty',
label: '搜索结果不为空',
type: 'source',
valueType: 'boolean',
targets: formData.dataset.searchEmptyText
? [
{
@ -287,77 +428,352 @@ function datasetTemplate(formData: AppSimpleEditFormType): ModuleItemType[] {
},
{
key: 'quoteQA',
label: '引用内容',
description:
'始终返回数组,如果希望搜索结果为空时执行额外操作,需要用到上面的两个输入以及目标模块的触发器',
type: 'source',
valueType: 'datasetQuote',
targets: [
{
moduleId: 'chatModule',
key: 'quoteQA'
}
]
}
],
position: {
x: 956.0838440206068,
y: 887.462827870246
},
moduleId: 'datasetSearch'
},
...(formData.dataset.searchEmptyText
? [
{
name: '指定回复',
flowType: FlowNodeTypeEnum.answerNode,
inputs: [
{
key: ModuleInputKeyEnum.switch,
type: FlowNodeInputTypeEnum.target,
label: '触发器',
connected: true
},
{
key: ModuleInputKeyEnum.answerText,
value: formData.dataset.searchEmptyText,
type: FlowNodeInputTypeEnum.textarea,
valueType: ModuleIOValueTypeEnum.string,
label: '回复的内容',
connected: false
}
],
outputs: [],
position: {
x: 1553.5815811529146,
y: 637.8753731306779
},
moduleId: 'emptyText'
}
]
: []),
{
name: 'AI 对话',
flowType: FlowNodeTypeEnum.chatNode,
inputs: chatModelInput(formData),
showStatus: true,
outputs: [
{
key: 'answerText',
label: 'AI回复',
description: '直接响应,无需配置',
type: 'hidden',
targets: []
},
{
key: 'finish',
label: '回复结束',
description: 'AI 回复完成后触发',
label: 'core.module.output.label.running done',
description: 'core.module.output.description.running done',
valueType: 'boolean',
type: 'source',
targets: []
}
],
]
},
{
moduleId: 'chatModule',
name: 'AI 对话',
avatar: '/imgs/module/AI.png',
flowType: 'chatNode',
showStatus: true,
position: {
x: 1551.71405495818,
y: 977.4911578918461
x: 2022.7264786978908,
y: 1006.3102431257475
},
moduleId: 'chatModule'
inputs: [
{
key: 'switch',
type: 'target',
label: 'core.module.input.label.switch',
valueType: 'any',
showTargetInApp: true,
showTargetInPlugin: true,
connected: !!formData.dataset?.searchEmptyText
},
{
key: 'model',
type: 'selectChatModel',
label: '对话模型',
required: true,
valueType: 'string',
showTargetInApp: false,
showTargetInPlugin: false,
value: formData.aiSettings.model,
connected: false
},
{
key: 'temperature',
type: 'hidden',
label: '温度',
value: formData.aiSettings.temperature,
valueType: 'number',
min: 0,
max: 10,
step: 1,
markList: [
{
label: '严谨',
value: 0
},
{
label: '发散',
value: 10
}
],
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
},
{
key: 'maxToken',
type: 'hidden',
label: '回复上限',
value: formData.aiSettings.maxToken,
valueType: 'number',
min: 100,
max: 4000,
step: 50,
markList: [
{
label: '100',
value: 100
},
{
label: '4000',
value: 4000
}
],
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
},
{
key: 'isResponseAnswerText',
type: 'hidden',
label: '返回AI内容',
value: true,
valueType: 'boolean',
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
},
{
key: 'quoteTemplate',
type: 'hidden',
label: '引用内容模板',
valueType: 'string',
showTargetInApp: false,
showTargetInPlugin: false,
value: formData.aiSettings.quoteTemplate,
connected: false
},
{
key: 'quotePrompt',
type: 'hidden',
label: '引用内容提示词',
valueType: 'string',
showTargetInApp: false,
showTargetInPlugin: false,
value: formData.aiSettings.quotePrompt,
connected: false
},
{
key: 'aiSettings',
type: 'aiSettings',
label: '',
valueType: 'any',
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
},
{
key: 'systemPrompt',
type: 'textarea',
label: '系统提示词',
max: 300,
valueType: 'string',
description:
'模型固定的引导词,通过调整该内容,可以引导模型聊天方向。该内容会被固定在上下文的开头。可使用变量,例如 {{language}}',
placeholder:
'模型固定的引导词,通过调整该内容,可以引导模型聊天方向。该内容会被固定在上下文的开头。可使用变量,例如 {{language}}',
showTargetInApp: true,
showTargetInPlugin: true,
value: formData.aiSettings.systemPrompt,
connected: false
},
{
key: 'history',
type: 'numberInput',
label: 'core.module.input.label.chat history',
required: true,
min: 0,
max: 30,
valueType: 'chatHistory',
value: 6,
showTargetInApp: true,
showTargetInPlugin: true,
connected: false
},
{
key: 'quoteQA',
type: 'target',
label: '引用内容',
description: "对象数组格式,结构:\n [{q:'问题',a:'回答'}]",
valueType: 'datasetQuote',
showTargetInApp: true,
showTargetInPlugin: true,
connected: true
},
{
key: 'userChatInput',
type: 'target',
label: 'core.module.input.label.user question',
required: true,
valueType: 'string',
showTargetInApp: true,
showTargetInPlugin: true,
connected: true
}
],
outputs: [
{
key: 'answerText',
label: 'AI回复',
description: '将在 stream 回复完毕后触发',
valueType: 'string',
type: 'source',
targets: []
},
{
key: 'finish',
label: 'core.module.output.label.running done',
description: 'core.module.output.description.running done',
valueType: 'boolean',
type: 'source',
targets: []
},
{
key: 'history',
label: '新的上下文',
description: '将本次回复内容拼接上历史记录,作为新的上下文返回',
valueType: 'chatHistory',
type: 'source',
targets: []
}
]
},
{
moduleId: 'vuc92c',
name: 'core.module.template.cfr',
avatar: '/imgs/module/cfr.svg',
flowType: 'cfr',
showStatus: true,
position: {
x: 758.2985382279098,
y: 1124.6527309337314
},
inputs: [
{
key: 'switch',
type: 'target',
label: 'core.module.input.label.switch',
valueType: 'any',
showTargetInApp: true,
showTargetInPlugin: true,
connected: false
},
{
key: 'model',
type: 'selectExtractModel',
label: 'core.module.input.label.aiModel',
required: true,
valueType: 'string',
value: getExtractModel().model,
showTargetInApp: false,
showTargetInPlugin: false,
connected: false
},
{
key: 'systemPrompt',
type: 'textarea',
label: 'core.module.input.label.cfr background',
max: 300,
value: formData.cfr.background,
valueType: 'string',
description: 'core.module.input.description.cfr background',
placeholder: 'core.module.input.placeholder.cfr background',
showTargetInApp: true,
showTargetInPlugin: true,
connected: false
},
{
key: 'history',
type: 'numberInput',
label: 'core.module.input.label.chat history',
required: true,
min: 0,
max: 30,
valueType: 'chatHistory',
value: 6,
showTargetInApp: true,
showTargetInPlugin: true,
connected: false
},
{
key: 'userChatInput',
type: 'target',
label: 'core.module.input.label.user question',
required: true,
valueType: 'string',
showTargetInApp: true,
showTargetInPlugin: true,
connected: true
}
],
outputs: [
{
key: 'system_text',
label: 'core.module.output.label.cfr result',
valueType: 'string',
type: 'source',
targets: [
{
moduleId: 'datasetSearch',
key: 'userChatInput'
}
]
}
]
}
];
if (formData.dataset?.searchEmptyText) {
modules.push({
moduleId: '6dtsvu',
name: '指定回复',
avatar: '/imgs/module/reply.png',
flowType: 'answerNode',
position: {
x: 2018.2744321961648,
y: 616.1220817209096
},
inputs: [
{
key: 'switch',
type: 'target',
label: 'core.module.input.label.switch',
valueType: 'any',
showTargetInApp: true,
showTargetInPlugin: true,
connected: true
},
{
key: 'text',
type: 'textarea',
value: formData.dataset.searchEmptyText,
valueType: 'any',
label: '回复的内容',
description:
'可以使用 \\n 来实现连续换行。\n可以通过外部模块输入实现回复外部模块输入时会覆盖当前填写的内容。\n如传入非字符串类型数据将会自动转成字符串',
placeholder:
'可以使用 \\n 来实现连续换行。\n可以通过外部模块输入实现回复外部模块输入时会覆盖当前填写的内容。\n如传入非字符串类型数据将会自动转成字符串',
showTargetInApp: true,
showTargetInPlugin: true,
connected: false
}
],
outputs: [
{
key: 'finish',
label: 'core.module.output.label.running done',
description: 'core.module.output.description.running done',
valueType: 'boolean',
type: 'source',
targets: []
}
]
});
}
return modules;
}

View File

@ -58,6 +58,7 @@ export default async function handler(req: NextApiRequest, res: NextApiResponse)
/* start process */
const { responseData } = await dispatchModules({
res,
mode: 'test',
teamId,
tmbId,
user,
@ -101,6 +102,9 @@ export default async function handler(req: NextApiRequest, res: NextApiResponse)
export const config = {
api: {
bodyParser: {
sizeLimit: '10mb'
},
responseLimit: '20mb'
}
};

View File

@ -9,6 +9,7 @@ import { pushGenerateVectorBill } from '@/service/support/wallet/bill/push';
import { searchDatasetData } from '@/service/core/dataset/data/pg';
import { updateApiKeyUsage } from '@fastgpt/service/support/openapi/tools';
import { BillSourceEnum } from '@fastgpt/global/support/wallet/bill/constants';
import { searchQueryExtension } from '@fastgpt/service/core/ai/functions/queryExtension';
export default withNextCors(async function handler(req: NextApiRequest, res: NextApiResponse<any>) {
try {
@ -33,8 +34,15 @@ export default withNextCors(async function handler(req: NextApiRequest, res: Nex
// auth balance
await authTeamBalance(teamId);
// query extension
// const { queries } = await searchQueryExtension({
// query: text,
// model: global.chatModels[0].model
// });
const { searchRes, tokenLen } = await searchDatasetData({
text,
rawQuery: text,
queries: [text],
model: dataset.vectorModel,
limit: Math.min(limit * 800, 30000),
datasetIds: [datasetId],

View File

@ -19,6 +19,7 @@ export default async function handler(req: NextApiRequest, res: NextApiResponse<
const result = (() => {
if (typeof input === 'string') {
const defaultReg: any[] = [
'',
undefined,
'undefined',
null,

View File

@ -2,12 +2,14 @@ import type { NextApiRequest, NextApiResponse } from 'next';
import { jsonRes } from '@fastgpt/service/common/response';
import { request } from '@fastgpt/service/common/api/plusRequest';
import type { Method } from 'axios';
import { connectToDatabase } from '@/service/mongo';
import { setCookie } from '@fastgpt/service/support/permission/controller';
import { getInitConfig } from '../system/getInitData';
export default async function handler(req: NextApiRequest, res: NextApiResponse) {
try {
await connectToDatabase();
if (!global.systemEnv?.pluginBaseUrl) {
await getInitConfig();
}
const method = (req.method || 'POST') as Method;
const { path = [], ...query } = req.query as any;

View File

@ -1,4 +1,4 @@
import type { FeConfigsType, SystemEnvType } from '@fastgpt/global/common/system/types/index.d';
import type { FeConfigsType } from '@fastgpt/global/common/system/types/index.d';
import type { NextApiRequest, NextApiResponse } from 'next';
import { jsonRes } from '@fastgpt/service/common/response';
import { readFileSync, readdirSync } from 'fs';
@ -6,23 +6,14 @@ import type { ConfigFileType, InitDateResponse } from '@/global/common/api/syste
import { formatPrice } from '@fastgpt/global/support/wallet/bill/tools';
import { getTikTokenEnc } from '@fastgpt/global/common/string/tiktoken';
import { initHttpAgent } from '@fastgpt/service/common/middle/httpAgent';
import {
defaultChatModels,
defaultQAModels,
defaultCQModels,
defaultExtractModels,
defaultQGModels,
defaultVectorModels,
defaultAudioSpeechModels,
defaultWhisperModel,
defaultReRankModels
} from '@fastgpt/global/core/ai/model';
import { SimpleModeTemplate_FastGPT_Universal } from '@/global/core/app/constants';
import { getSimpleTemplatesFromPlus } from '@/service/core/app/utils';
import { PluginSourceEnum } from '@fastgpt/global/core/plugin/constants';
import { getFastGPTFeConfig } from '@fastgpt/service/common/system/config/controller';
import { connectToDatabase } from '@/service/mongo';
import { PluginTemplateType } from '@fastgpt/global/core/plugin/type';
import { readConfigData } from '@/service/common/system';
import { exit } from 'process';
export default async function handler(req: NextApiRequest, res: NextApiResponse) {
await getInitConfig();
@ -48,18 +39,14 @@ export default async function handler(req: NextApiRequest, res: NextApiResponse)
});
}
const defaultSystemEnv: SystemEnvType = {
vectorMaxProcess: 15,
qaMaxProcess: 15,
pgHNSWEfSearch: 100
};
const defaultFeConfigs: FeConfigsType = {
show_emptyChat: true,
show_git: true,
show_register: false,
docUrl: 'https://doc.fastgpt.in',
openAPIDocUrl: 'https://doc.fastgpt.in/docs/development/openapi',
systemTitle: 'FastGPT',
concatMd:
'* 项目开源地址: [FastGPT GitHub](https://github.com/labring/FastGPT)\n* 交流群: ![](https://doc.fastgpt.in/wechat-fastgpt.webp)',
limit: {
exportLimitMinutes: 0
},
@ -73,24 +60,45 @@ export async function getInitConfig() {
await connectToDatabase();
initGlobal();
const filename =
process.env.NODE_ENV === 'development' ? 'data/config.local.json' : '/app/data/config.json';
const res = JSON.parse(readFileSync(filename, 'utf-8')) as ConfigFileType;
// load config
const [dbConfig, fileConfig] = await Promise.all([
getFastGPTFeConfig(),
readConfigData('config.json')
]);
const fileRes = JSON.parse(fileConfig) as ConfigFileType;
// get config from database
const dbFeConfig = await getFastGPTFeConfig();
const concatConfig: ConfigFileType = {
...res,
const config: ConfigFileType = {
...fileRes,
FeConfig: {
...res.FeConfig,
...dbFeConfig
...defaultFeConfigs,
...fileRes.FeConfig,
...dbConfig
}
};
setDefaultData(concatConfig);
// set config
global.feConfigs = {
isPlus: !!config.SystemParams.pluginBaseUrl,
concatMd: config.FeConfig.show_git ? config.FeConfig.concatMd : '',
...config.FeConfig
};
global.systemEnv = config.SystemParams;
global.chatModels = config.ChatModels;
global.qaModels = config.QAModels;
global.cqModels = config.CQModels;
global.extractModels = config.ExtractModels;
global.qgModels = config.QGModels;
global.vectorModels = config.VectorModels;
global.reRankModels = config.ReRankModels;
global.audioSpeechModels = config.AudioSpeechModels;
global.whisperModel = config.WhisperModel;
global.priceMd = '';
} catch (error) {
setDefaultData();
console.log('get init config error, set default', error);
console.error('Load init config error', error);
exit(1);
}
await getSimpleModeTemplates();
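
The merged `FeConfig` now layers three sources, with later spreads winning. A worked sketch with assumed keys:

```ts
const defaultFeConfigs = { show_git: true, systemTitle: 'FastGPT' }; // hard-coded defaults
const fileFeConfig = { systemTitle: 'My GPT' };                      // from data/config.json
const dbFeConfig = { show_git: false };                              // from getFastGPTFeConfig()

const FeConfig = { ...defaultFeConfigs, ...fileFeConfig, ...dbFeConfig };
// -> { show_git: false, systemTitle: 'My GPT' }
// database beats file, file beats defaults
```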
@ -117,45 +125,13 @@ export async function getInitConfig() {
}
export function initGlobal() {
// init tikToken
getTikTokenEnc();
initHttpAgent();
global.communityPlugins = [];
global.simpleModeTemplates = [];
global.qaQueueLen = global.qaQueueLen ?? 0;
global.vectorQueueLen = global.vectorQueueLen ?? 0;
}
export function setDefaultData(res?: ConfigFileType) {
global.systemEnv = res?.SystemParams
? { ...defaultSystemEnv, ...res.SystemParams }
: defaultSystemEnv;
global.feConfigs = res?.FeConfig
? {
concatMd: res?.FeConfig?.show_git
? '* 项目开源地址: [FastGPT GitHub](https://github.com/labring/FastGPT)\n* 交流群: ![](https://doc.fastgpt.in/wechat-fastgpt.webp)'
: '',
...defaultFeConfigs,
...res.FeConfig,
isPlus: !!res.SystemParams?.pluginBaseUrl
}
: defaultFeConfigs;
global.chatModels = res?.ChatModels || defaultChatModels;
global.qaModels = res?.QAModels || defaultQAModels;
global.cqModels = res?.CQModels || defaultCQModels;
global.extractModels = res?.ExtractModels || defaultExtractModels;
global.qgModels = res?.QGModels || defaultQGModels;
global.vectorModels = res?.VectorModels || defaultVectorModels;
global.reRankModels = res?.ReRankModels || defaultReRankModels;
global.audioSpeechModels = res?.AudioSpeechModels || defaultAudioSpeechModels;
global.whisperModel = res?.WhisperModel || defaultWhisperModel;
global.priceMd = '';
// init tikToken
getTikTokenEnc();
initHttpAgent();
}
export function getSystemVersion() {
@ -209,9 +185,7 @@ async function getSimpleModeTemplates() {
try {
const basePath =
process.env.NODE_ENV === 'development'
? 'public/simpleTemplates'
: '/app/projects/app/public/simpleTemplates';
process.env.NODE_ENV === 'development' ? 'data/simpleTemplates' : '/app/data/simpleTemplates';
// read the data/simpleTemplates directory and list all files
const files = readdirSync(basePath);
// keep only the json files
@ -243,9 +217,7 @@ function getSystemPlugin() {
if (global.communityPlugins && global.communityPlugins.length > 0) return;
const basePath =
process.env.NODE_ENV === 'development'
? 'public/pluginTemplates'
: '/app/projects/app/public/pluginTemplates';
process.env.NODE_ENV === 'development' ? 'data/pluginTemplates' : '/app/data/pluginTemplates';
// read the data/pluginTemplates directory and list all files
const files = readdirSync(basePath);
// keep only the json files

View File

@ -200,13 +200,14 @@ export default withNextCors(async function handler(req: NextApiRequest, res: Nex
/* start flow controller */
const { responseData, answerText } = await dispatchModules({
res,
mode: 'chat',
user,
teamId: String(user.team.teamId),
tmbId: String(user.team.tmbId),
appId: String(app._id),
chatId,
responseChatItemId,
modules: app.modules,
user,
teamId: user.team.teamId,
tmbId: user.team.tmbId,
variables,
histories: concatHistories,
startParams: {

View File

@ -13,7 +13,7 @@ import MyIcon from '@/components/Icon';
import MyTooltip from '@/components/MyTooltip';
import ChatTest, { type ChatTestComponentRef } from '@/components/core/module/Flow/ChatTest';
import { useFlowProviderStore } from '@/components/core/module/Flow/FlowProvider';
import { flowNode2Modules } from '@/components/core/module/utils';
import { flowNode2Modules, filterExportModules } from '@/components/core/module/utils';
import { useAppStore } from '@/web/core/app/store/useAppStore';
import { useToast } from '@/web/common/hooks/useToast';
import { useConfirm } from '@/web/common/hooks/useConfirm';
@ -136,12 +136,12 @@ const RenderHeaderContainer = React.memo(function RenderHeaderContainer({
borderRadius={'lg'}
variant={'base'}
aria-label={'save'}
onClick={() =>
copyData(
JSON.stringify(flowNode2Modules({ nodes, edges }), null, 2),
t('app.Export Config Successful')
)
}
onClick={() => {
const modules = flow2ModulesAndCheck();
if (modules) {
copyData(filterExportModules(modules), t('app.Export Config Successful'));
}
}}
/>
</MyTooltip>

View File

@ -24,7 +24,6 @@ import { chatNodeSystemPromptTip, welcomeTextTip } from '@fastgpt/global/core/mo
import type { ModuleItemType } from '@fastgpt/global/core/module/type';
import { useRequest } from '@/web/common/hooks/useRequest';
import { useConfirm } from '@/web/common/hooks/useConfirm';
import { FlowNodeTypeEnum } from '@fastgpt/global/core/module/node/constant';
import { streamFetch } from '@/web/common/api/fetch';
import { useRouter } from 'next/router';
import { useToast } from '@/web/common/hooks/useToast';
@ -51,6 +50,7 @@ import { SimpleModeTemplate_FastGPT_Universal } from '@/global/core/app/constant
import VariableEdit from '@/components/core/module/Flow/components/modules/VariableEdit';
import { ModuleInputKeyEnum } from '@fastgpt/global/core/module/constants';
import PromptTextarea from '@/components/common/Textarea/PromptTextarea/index';
import { DatasetSearchModeMap } from '@fastgpt/global/core/dataset/constant';
const InfoModal = dynamic(() => import('../InfoModal'));
const DatasetSelectModal = dynamic(() => import('@/components/core/module/DatasetSelectModal'));
@ -132,6 +132,12 @@ function ConfigForm({
);
}, [getValues, refresh]);
const datasetSearchMode = useMemo(() => {
const mode = getValues('dataset.searchMode');
if (!mode) return '';
return t(DatasetSearchModeMap[mode]?.title);
}, [getValues, t, refresh]);
const { mutate: onSubmitSave, isLoading: isSaving } = useRequest({
mutationFn: async (data: AppSimpleEditFormType) => {
const modules = await postForm2Modules(data, data.templateId);
@ -251,39 +257,6 @@ function ConfigForm({
/>
</Flex>
{/* welcome */}
{selectSimpleTemplate?.systemForm?.userGuide?.welcomeText && (
<Box {...BoxStyles} mt={2}>
<Flex alignItems={'center'}>
<Image alt={''} src={'/imgs/module/userGuide.png'} w={'18px'} />
<Box mx={2}>{t('core.app.Welcome Text')}</Box>
<MyTooltip label={welcomeTextTip} forceShow>
<QuestionOutlineIcon />
</MyTooltip>
</Flex>
<Textarea
mt={2}
rows={5}
placeholder={welcomeTextTip}
borderColor={'myGray.100'}
{...register('userGuide.welcomeText')}
/>
</Box>
)}
{/* variable */}
{selectSimpleTemplate?.systemForm?.userGuide?.variables && (
<Box mt={2} {...BoxStyles}>
<VariableEdit
variables={getValues('userGuide.variables')}
onChange={(e) => {
setValue('userGuide.variables', e);
setRefresh(!refresh);
}}
/>
</Box>
)}
{/* ai */}
{selectSimpleTemplate?.systemForm?.aiSettings && (
<Box mt={5} {...BoxStyles}>
@ -340,7 +313,6 @@ function ConfigForm({
defaultValue={getValues('aiSettings.systemPrompt')}
onBlur={(e) => {
setValue('aiSettings.systemPrompt', e.target.value || '');
setRefresh(!refresh);
}}
/>
</Flex>
@ -372,16 +344,20 @@ function ConfigForm({
</Flex>
)}
</Flex>
<Flex mt={1} color={'myGray.600'} fontSize={['sm', 'md']}>
{t('core.dataset.search.Min Similarity')}: {getValues('dataset.similarity')},{' '}
{t('core.dataset.search.Max Tokens')}: {getValues('dataset.limit')}
{getValues('dataset.searchEmptyText') === ''
? ''
: t('core.dataset.Set Empty Result Tip')}
</Flex>
{getValues('dataset.datasets').length > 0 && (
<Flex mt={1} color={'myGray.600'} fontSize={'sm'} mb={2}>
{t('core.dataset.search.search mode')}: {datasetSearchMode}
{', '}
{t('core.dataset.search.Min Similarity')}: {getValues('dataset.similarity')}
{', '}
{t('core.dataset.search.Max Tokens')}: {getValues('dataset.limit')}
{getValues('dataset.searchEmptyText') === ''
? ''
: t('core.dataset.Set Empty Result Tip')}
</Flex>
)}
<Grid
gridTemplateColumns={['repeat(2, minmax(0, 1fr))', 'repeat(3, minmax(0, 1fr))']}
my={2}
gridGap={[2, 4]}
>
{selectDatasets.map((item) => (
@ -412,6 +388,64 @@ function ConfigForm({
</MyTooltip>
))}
</Grid>
{selectSimpleTemplate?.systemForm?.cfr && getValues('dataset.datasets').length > 0 && (
<Box mt={10}>
<Box {...LabelStyles} w={'auto'}>
{t('core.app.edit.cfr background prompt')}
<MyTooltip label={t('core.app.edit.cfr background tip')} forceShow>
<QuestionOutlineIcon display={['none', 'inline']} ml={1} />
</MyTooltip>
</Box>
<PromptTextarea
mt={1}
flex={1}
bg={'myWhite.400'}
rows={5}
placeholder={t('core.module.input.placeholder.cfr background')}
defaultValue={getValues('cfr.background')}
onBlur={(e) => {
setValue('cfr.background', e.target.value || '');
}}
/>
</Box>
)}
</Box>
)}
{/* variable */}
{selectSimpleTemplate?.systemForm?.userGuide?.variables && (
<Box mt={2} {...BoxStyles}>
<VariableEdit
variables={getValues('userGuide.variables')}
onChange={(e) => {
setValue('userGuide.variables', e);
setRefresh(!refresh);
}}
/>
</Box>
)}
{/* welcome */}
{selectSimpleTemplate?.systemForm?.userGuide?.welcomeText && (
<Box {...BoxStyles} mt={2}>
<Flex alignItems={'center'}>
<Image alt={''} src={'/imgs/module/userGuide.png'} w={'18px'} />
<Box mx={2}>{t('core.app.Welcome Text')}</Box>
<MyTooltip label={welcomeTextTip} forceShow>
<QuestionOutlineIcon />
</MyTooltip>
</Flex>
<PromptTextarea
mt={2}
bg={'myWhite.400'}
rows={5}
placeholder={welcomeTextTip}
defaultValue={getValues('userGuide.welcomeText')}
onBlur={(e) => {
setValue('userGuide.welcomeText', e.target.value || '');
}}
/>
</Box>
)}

View File

@ -609,7 +609,6 @@ const CollectionCard = () => {
),
onClick: () =>
openSyncConfirm(() => {
console.log(collection._id);
onclickStartSync(collection._id);
})()
}

View File

@ -8,7 +8,7 @@ import dynamic from 'next/dynamic';
import MyIcon from '@/components/Icon';
import MyTooltip from '@/components/MyTooltip';
import { useFlowProviderStore } from '@/components/core/module/Flow/FlowProvider';
import { flowNode2Modules } from '@/components/core/module/utils';
import { filterExportModules, flowNode2Modules } from '@/components/core/module/utils';
import { putUpdatePlugin } from '@/web/core/plugin/api';
import { FlowNodeTypeEnum } from '@fastgpt/global/core/module/node/constant';
import { ModuleItemType } from '@fastgpt/global/core/module/type';
@ -62,7 +62,9 @@ const Header = ({ plugin, onClose }: Props) => {
if (
item.inputs.find((input) => {
if (!input.required || input.connected) return false;
if (!input.value || input.value === '' || input.value?.length === 0) return true;
if (input.value === undefined || input.value === '' || input.value?.length === 0) {
return true;
}
return false;
})
) {
@ -155,7 +157,7 @@ const Header = ({ plugin, onClose }: Props) => {
onClick={() => {
const modules = flow2ModulesAndCheck();
if (modules) {
copyData(JSON.stringify(modules, null, 2), t('app.Export Config Successful'));
copyData(filterExportModules(modules), t('app.Export Config Successful'));
}
}}
/>
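
The required-input check switched from `!input.value` to explicit comparisons. A quick sketch of why, using a hypothetical `isMissing` helper:

```ts
// Old check: `!v` wrongly treats 0 and false as missing values.
// New check: only undefined, empty strings, or empty arrays count as missing.
const isMissing = (v: any) => v === undefined || v === '' || v?.length === 0;

isMissing(0);     // false: 0 is a valid input value
isMissing(false); // false
isMissing('');    // true
isMissing([]);    // true
```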

View File

@ -0,0 +1,25 @@
import { existsSync, readFileSync } from 'fs';
export const readConfigData = (name: string) => {
const isDev = process.env.NODE_ENV === 'development';
const splitName = name.split('.');
const devName = `${splitName[0]}.local.${splitName[1]}`;
const filename = (() => {
if (isDev) {
// check local file exists
const hasLocalFile = existsSync(`data/${devName}`);
if (hasLocalFile) {
return `data/${devName}`;
}
return `data/${name}`;
}
// production path
return `/app/data/${name}`;
})();
const content = readFileSync(filename, 'utf-8');
return content;
};
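
A hypothetical usage of the new helper: in development a local override (`data/config.local.json`, if present) shadows `data/config.json`, while in production the file is always read from `/app/data`:

```ts
import { readConfigData } from '@/service/common/system';

const config = JSON.parse(readConfigData('config.json'));
```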

View File

@ -1,62 +1,26 @@
import {
defaultAudioSpeechModels,
defaultChatModels,
defaultCQModels,
defaultExtractModels,
defaultQAModels,
defaultQGModels,
defaultVectorModels
} from '@fastgpt/global/core/ai/model';
export const getChatModel = (model?: string) => {
return (
(global.chatModels || defaultChatModels).find((item) => item.model === model) ||
global.chatModels?.[0] ||
defaultChatModels[0]
);
return global.chatModels.find((item) => item.model === model) ?? global.chatModels[0];
};
export const getQAModel = (model?: string) => {
return (
(global.qaModels || defaultQAModels).find((item) => item.model === model) ||
global.qaModels?.[0] ||
defaultQAModels[0]
);
return global.qaModels.find((item) => item.model === model) || global.qaModels[0];
};
export const getCQModel = (model?: string) => {
return (
(global.cqModels || defaultCQModels).find((item) => item.model === model) ||
global.cqModels?.[0] ||
defaultCQModels[0]
);
return global.cqModels.find((item) => item.model === model) || global.cqModels[0];
};
export const getExtractModel = (model?: string) => {
return (
(global.extractModels || defaultExtractModels).find((item) => item.model === model) ||
global.extractModels?.[0] ||
defaultExtractModels[0]
);
return global.extractModels.find((item) => item.model === model) || global.extractModels[0];
};
export const getQGModel = (model?: string) => {
return (
(global.qgModels || defaultQGModels).find((item) => item.model === model) ||
global.qgModels?.[0] ||
defaultQGModels[0]
);
return global.qgModels.find((item) => item.model === model) || global.qgModels[0];
};
export const getVectorModel = (model?: string) => {
return (
global.vectorModels.find((item) => item.model === model) ||
global.vectorModels?.[0] ||
defaultVectorModels[0]
);
return global.vectorModels.find((item) => item.model === model) || global.vectorModels[0];
};
export function getAudioSpeechModel(model?: string) {
return (
global.audioSpeechModels.find((item) => item.model === model) ||
global.audioSpeechModels?.[0] ||
defaultAudioSpeechModels[0]
global.audioSpeechModels.find((item) => item.model === model) || global.audioSpeechModels[0]
);
}
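
The model getters drop their hard-coded default models and assume the globals were populated at startup (config loading now exits the process on failure). The only fallback that remains:

```ts
// An unknown or missing model name falls back to the first configured entry.
getChatModel('not-configured'); // -> global.chatModels[0]
getChatModel();                 // -> global.chatModels[0]
```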

View File

@ -12,6 +12,7 @@ import { MongoDatasetData } from '@fastgpt/service/core/dataset/data/schema';
import { jiebaSplit } from '../utils';
import { reRankRecall } from '../../ai/rerank';
import { countPromptTokens } from '@fastgpt/global/common/string/tiktoken';
import { hashStr } from '@fastgpt/global/common/string/tools';
export async function insertData2Pg(props: {
mongoDataId: string;
@ -98,34 +99,40 @@ export async function updatePgDataById({
// ------------------ search start ------------------
type SearchProps = {
text: string;
model: string;
similarity?: number; // min distance
limit: number; // max Token limit
datasetIds: string[];
searchMode?: `${DatasetSearchModeEnum}`;
};
export async function searchDatasetData(props: SearchProps) {
export async function searchDatasetData(
props: SearchProps & { rawQuery: string; queries: string[] }
) {
let {
text,
rawQuery,
queries,
model,
similarity = 0,
limit: maxTokens,
searchMode = DatasetSearchModeEnum.embedding
searchMode = DatasetSearchModeEnum.embedding,
datasetIds = []
} = props;
searchMode = global.systemEnv?.pluginBaseUrl ? searchMode : DatasetSearchModeEnum.embedding;
/* init params */
searchMode = global.systemEnv?.pluginBaseUrl ? searchMode : DatasetSearchModeEnum.embedding;
// Backwards compatible with the old topK-style limit: tiny values are replaced with a token budget
if (maxTokens < 50) {
maxTokens = 1500;
}
const rerank =
global.reRankModels?.[0] &&
(searchMode === DatasetSearchModeEnum.embeddingReRank ||
searchMode === DatasetSearchModeEnum.embFullTextReRank);
let set = new Set<string>();
const oneChunkToken = 50;
const { embeddingLimit, fullTextLimit } = (() => {
/* function */
const countRecallLimit = () => {
const oneChunkToken = 50;
const estimatedLen = Math.max(20, Math.ceil(maxTokens / oneChunkToken));
// Increase the search range to reduce HNSW recall loss (20 ~ 100)
@ -148,34 +155,295 @@ export async function searchDatasetData(props: SearchProps) {
embeddingLimit: Math.min(80, Math.max(50, estimatedLen * 2)),
fullTextLimit: Math.min(40, Math.max(20, estimatedLen))
};
})();
};
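
Worked numbers for `countRecallLimit`, assuming `maxTokens = 1500`:

```ts
const oneChunkToken = 50;
const maxTokens = 1500;
const estimatedLen = Math.max(20, Math.ceil(maxTokens / oneChunkToken)); // 30
const embeddingLimit = Math.min(80, Math.max(50, estimatedLen * 2));     // 60
const fullTextLimit = Math.min(40, Math.max(20, estimatedLen));          // 30
```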
const embeddingRecall = async ({ query, limit }: { query: string; limit: number }) => {
const { vectors, tokenLen } = await getVectorsByText({
model,
input: [query]
});
const [{ tokenLen, embeddingRecallResults }, { fullTextRecallResults }] = await Promise.all([
embeddingRecall({
...props,
rerank,
limit: embeddingLimit
}),
fullTextRecall({
...props,
limit: fullTextLimit
})
]);
const results: any = await PgClient.query(
`BEGIN;
SET LOCAL hnsw.ef_search = ${global.systemEnv.pgHNSWEfSearch || 100};
select id, collection_id, data_id, (vector <#> '[${vectors[0]}]') * -1 AS score
from ${PgDatasetTableName}
where dataset_id IN (${datasetIds.map((id) => `'${String(id)}'`).join(',')})
${rerank ? '' : `AND vector <#> '[${vectors[0]}]' < -${similarity}`}
order by score desc limit ${limit};
COMMIT;`
);
// merge embedding and full-text recall results (dedupe by id, keep full-text hits above the similarity threshold)
let set = new Set<string>(embeddingRecallResults.map((item) => item.id));
const concatRecallResults = embeddingRecallResults;
fullTextRecallResults.forEach((item) => {
if (!set.has(item.id) && item.score >= similarity) {
concatRecallResults.push(item);
set.add(item.id);
const rows = results?.[2]?.rows as PgSearchRawType[];
// deduplicate rows that share the same data_id
const filterRows: PgSearchRawType[] = [];
let set = new Set<string>();
for (const row of rows) {
if (!set.has(row.data_id)) {
filterRows.push(row);
set.add(row.data_id);
}
}
// get q and a
const [collections, dataList] = await Promise.all([
MongoDatasetCollection.find(
{
_id: { $in: filterRows.map((item) => item.collection_id) }
},
'name fileId rawLink'
).lean(),
MongoDatasetData.find(
{
_id: { $in: filterRows.map((item) => item.data_id?.trim()) }
},
'datasetId collectionId q a chunkIndex indexes'
).lean()
]);
const formatResult = filterRows
.map((item) => {
const collection = collections.find(
(collection) => String(collection._id) === item.collection_id
);
const data = dataList.find((data) => String(data._id) === item.data_id);
// if the collection or data no longer exists, the related mongo records were already deleted
if (!collection || !data) return null;
return {
id: String(data._id),
q: data.q,
a: data.a,
chunkIndex: data.chunkIndex,
indexes: data.indexes,
datasetId: String(data.datasetId),
collectionId: String(data.collectionId),
sourceName: collection.name || '',
sourceId: collection?.fileId || collection?.rawLink,
score: item.score
};
})
.filter((item) => item !== null) as SearchDataResponseItemType[];
return {
embeddingRecallResults: formatResult,
tokenLen
};
};
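A note on the score math in the SQL above: pgvector's `<#>` operator returns the negative inner product, so multiplying by -1 recovers a similarity-style score, and the pre-filter `vector <#> '[...]' < -${similarity}` is `score > similarity` expressed directly on the operator. A minimal sketch of that relationship, with illustrative helper names:
// Illustrative helpers only; the production code above interpolates these
// expressions straight into the query string.
const scoreSelect = (queryVector: number[]) =>
  `(vector <#> '[${queryVector}]') * -1 AS score`; // <#> = negative inner product
const similarityFilter = (queryVector: number[], similarity: number) =>
  `vector <#> '[${queryVector}]' < -${similarity}`; // equivalent to score > similarity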
const fullTextRecall = async ({
query,
limit
}: {
query: string;
limit: number;
}): Promise<{
fullTextRecallResults: SearchDataResponseItemType[];
tokenLen: number;
}> => {
if (limit === 0) {
return {
fullTextRecallResults: [],
tokenLen: 0
};
}
let searchResults = (
await Promise.all(
datasetIds.map((id) =>
MongoDatasetData.find(
{
datasetId: id,
$text: { $search: jiebaSplit({ text: query }) }
},
{
score: { $meta: 'textScore' },
_id: 1,
datasetId: 1,
collectionId: 1,
q: 1,
a: 1,
indexes: 1,
chunkIndex: 1
}
)
.sort({ score: { $meta: 'textScore' } })
.limit(limit)
.lean()
)
)
).flat() as (DatasetDataSchemaType & { score: number })[];
// resort
searchResults.sort((a, b) => b.score - a.score);
searchResults = searchResults.slice(0, limit);
const collections = await MongoDatasetCollection.find(
{
_id: { $in: searchResults.map((item) => item.collectionId) }
},
'_id name fileId rawLink'
);
return {
fullTextRecallResults: searchResults.map((item) => {
const collection = collections.find((col) => String(col._id) === String(item.collectionId));
return {
id: String(item._id),
datasetId: String(item.datasetId),
collectionId: String(item.collectionId),
sourceName: collection?.name || '',
sourceId: collection?.fileId || collection?.rawLink,
q: item.q,
a: item.a,
chunkIndex: item.chunkIndex,
indexes: item.indexes,
// @ts-ignore
score: item.score
};
}),
tokenLen: 0
};
};
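The `$text` recall above presupposes a MongoDB text index over the chunk fields, with the query pre-tokenized by jieba. A hedged sketch of the index this implies; the real schema lives in @fastgpt/service and may choose different fields or weights:
// Hypothetical index declaration matching the $text / textScore usage above.
DatasetDataSchema.index({ q: 'text', a: 'text' });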
const reRankSearchResult = async ({
data,
query
}: {
data: SearchDataResponseItemType[];
query: string;
}): Promise<SearchDataResponseItemType[]> => {
try {
const results = await reRankRecall({
query,
inputs: data.map((item) => ({
id: item.id,
text: `${item.q}\n${item.a}`
}))
});
if (!Array.isArray(results)) return data;
// add new score to data
const mergeResult = results
.map((item) => {
const target = data.find((dataItem) => dataItem.id === item.id);
if (!target) return null;
return {
...target,
score: item.score || target.score
};
})
.filter(Boolean) as SearchDataResponseItemType[];
return mergeResult;
} catch (error) {
return data;
}
};
const filterResultsByMaxTokens = (list: SearchDataResponseItemType[], maxTokens: number) => {
const results: SearchDataResponseItemType[] = [];
let totalTokens = 0;
for (let i = 0; i < list.length; i++) {
const item = list[i];
totalTokens += countPromptTokens(item.q + item.a);
if (totalTokens > maxTokens + 500) {
break;
}
results.push(item);
if (totalTokens > maxTokens) {
break;
}
}
return results.length === 0 ? list.slice(0, 1) : results;
};
const multiQueryRecall = async ({
embeddingLimit,
fullTextLimit
}: {
embeddingLimit: number;
fullTextLimit: number;
}) => {
// Across the n recall groups, an item is retained as long as it appears at least minAmount times
const getIntersection = (resultList: SearchDataResponseItemType[][], minAmount = 1) => {
minAmount = Math.min(resultList.length, minAmount);
const map: Record<
string,
{
amount: number;
data: SearchDataResponseItemType;
}
> = {};
for (const list of resultList) {
for (const item of list) {
map[item.id] = map[item.id]
? {
amount: map[item.id].amount + 1,
data: item
}
: {
amount: 1,
data: item
};
}
}
return Object.values(map)
.filter((item) => item.amount >= minAmount)
.map((item) => item.data);
};
// multi query recall
const embeddingRecallResList: SearchDataResponseItemType[][] = [];
const fullTextRecallResList: SearchDataResponseItemType[][] = [];
let embTokens = 0;
for await (const query of queries) {
const [{ tokenLen, embeddingRecallResults }, { fullTextRecallResults }] = await Promise.all([
embeddingRecall({
query,
limit: embeddingLimit
}),
fullTextRecall({
query,
limit: fullTextLimit
})
]);
embTokens += tokenLen;
embeddingRecallResList.push(embeddingRecallResults);
fullTextRecallResList.push(fullTextRecallResults);
}
return {
tokens: embTokens,
embeddingRecallResults: getIntersection(embeddingRecallResList, 2),
fullTextRecallResults: getIntersection(fullTextRecallResList, 2)
};
};
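The retention rule above is the heart of multi-query recall: across n per-query recall runs, an item survives only if it was recalled at least minAmount times, with minAmount clamped to the number of runs so a single query keeps everything. A self-contained sketch with simplified types:
type RecallItem = { id: string; score: number };
// Keep any item that appears in at least `minAmount` of the recall runs;
// the occurrence seen last supplies the retained score.
function intersect(resultList: RecallItem[][], minAmount = 1): RecallItem[] {
  minAmount = Math.min(resultList.length, minAmount);
  const map: Record<string, { amount: number; data: RecallItem }> = {};
  for (const list of resultList) {
    for (const item of list) {
      map[item.id] = map[item.id]
        ? { amount: map[item.id].amount + 1, data: item }
        : { amount: 1, data: item };
    }
  }
  return Object.values(map)
    .filter((v) => v.amount >= minAmount)
    .map((v) => v.data);
}
// With two queries and minAmount = 2, only ids recalled by both survive:
// intersect([[{ id: 'a', score: 1 }], [{ id: 'a', score: 0.9 }, { id: 'b', score: 0.8 }]], 2)
// -> [{ id: 'a', score: 0.9 }]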
/* main step */
// count limit
const { embeddingLimit, fullTextLimit } = countRecallLimit();
// recall
const { embeddingRecallResults, fullTextRecallResults, tokens } = await multiQueryRecall({
embeddingLimit,
fullTextLimit
});
// concat recall results
set = new Set<string>(embeddingRecallResults.map((item) => item.id));
const concatRecallResults = embeddingRecallResults.concat(
fullTextRecallResults.filter((item) => !set.has(item.id))
);
// remove entries whose q and a are duplicates
set = new Set<string>();
const filterSameDataResults = concatRecallResults.filter((item) => {
const str = `${item.q}${item.a}`.trim();
// Strip all punctuation, whitespace and symbols so only the text content is compared
const str = hashStr(`${item.q}${item.a}`.replace(/[^\p{L}\p{N}]/gu, ''));
if (set.has(str)) return false;
set.add(str);
return true;
@ -187,14 +455,14 @@ export async function searchDatasetData(props: SearchProps) {
filterSameDataResults.filter((item) => item.score >= similarity),
maxTokens
),
tokenLen
tokenLen: tokens
};
}
// ReRank result
// ReRank results
const reRankResults = (
await reRankSearchResult({
query: text,
query: rawQuery,
data: filterSameDataResults
})
).filter((item) => item.score > similarity);
@ -204,210 +472,7 @@ export async function searchDatasetData(props: SearchProps) {
reRankResults.filter((item) => item.score >= similarity),
maxTokens
),
tokenLen
tokenLen: tokens
};
}
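The duplicate filter above hashes a normalized form of each chunk rather than the raw text. A standalone sketch of the normalization; hashStr is the string hash imported from @fastgpt/global/common/string/tools at the top of this file:
// \p{L} matches any Unicode letter and \p{N} any digit, so the replace strips
// punctuation, whitespace and symbols before hashing: chunks that differ only
// in formatting hash to the same key and are dropped as duplicates.
const dedupeKey = (q: string, a: string) =>
  hashStr(`${q}${a}`.replace(/[^\p{L}\p{N}]/gu, ''));
// dedupeKey('Hello, world!', '') === dedupeKey('Hello world', '') // true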
export async function embeddingRecall({
text,
model,
similarity = 0,
limit,
datasetIds = [],
rerank = false
}: SearchProps & { rerank: boolean }) {
const { vectors, tokenLen } = await getVectorsByText({
model,
input: [text]
});
const results: any = await PgClient.query(
`BEGIN;
SET LOCAL hnsw.ef_search = ${global.systemEnv.pgHNSWEfSearch || 100};
select id, collection_id, data_id, (vector <#> '[${vectors[0]}]') * -1 AS score
from ${PgDatasetTableName}
where dataset_id IN (${datasetIds.map((id) => `'${String(id)}'`).join(',')})
${rerank ? '' : `AND vector <#> '[${vectors[0]}]' < -${similarity}`}
order by score desc limit ${limit};
COMMIT;`
);
const rows = results?.[2]?.rows as PgSearchRawType[];
// concat same data_id
const filterRows: PgSearchRawType[] = [];
let set = new Set<string>();
for (const row of rows) {
if (!set.has(row.data_id)) {
filterRows.push(row);
set.add(row.data_id);
}
}
// get q and a
const [collections, dataList] = await Promise.all([
MongoDatasetCollection.find(
{
_id: { $in: filterRows.map((item) => item.collection_id) }
},
'name fileId rawLink'
).lean(),
MongoDatasetData.find(
{
_id: { $in: filterRows.map((item) => item.data_id?.trim()) }
},
'datasetId collectionId q a chunkIndex indexes'
).lean()
]);
const formatResult = filterRows
.map((item) => {
const collection = collections.find(
(collection) => String(collection._id) === item.collection_id
);
const data = dataList.find((data) => String(data._id) === item.data_id);
// if the collection or data no longer exists, the related mongo records were already deleted
if (!collection || !data) return null;
return {
id: String(data._id),
q: data.q,
a: data.a,
chunkIndex: data.chunkIndex,
indexes: data.indexes,
datasetId: String(data.datasetId),
collectionId: String(data.collectionId),
sourceName: collection.name || '',
sourceId: collection?.fileId || collection?.rawLink,
score: item.score
};
})
.filter((item) => item !== null) as SearchDataResponseItemType[];
return {
embeddingRecallResults: formatResult,
tokenLen
};
}
export async function fullTextRecall({ text, limit, datasetIds = [] }: SearchProps): Promise<{
fullTextRecallResults: SearchDataResponseItemType[];
tokenLen: number;
}> {
if (limit === 0) {
return {
fullTextRecallResults: [],
tokenLen: 0
};
}
let searchResults = (
await Promise.all(
datasetIds.map((id) =>
MongoDatasetData.find(
{
datasetId: id,
$text: { $search: jiebaSplit({ text }) }
},
{
score: { $meta: 'textScore' },
_id: 1,
datasetId: 1,
collectionId: 1,
q: 1,
a: 1,
indexes: 1,
chunkIndex: 1
}
)
.sort({ score: { $meta: 'textScore' } })
.limit(limit)
.lean()
)
)
).flat() as (DatasetDataSchemaType & { score: number })[];
// resort
searchResults.sort((a, b) => b.score - a.score);
searchResults = searchResults.slice(0, limit);
const collections = await MongoDatasetCollection.find(
{
_id: { $in: searchResults.map((item) => item.collectionId) }
},
'_id name fileId rawLink'
);
return {
fullTextRecallResults: searchResults.map((item) => {
const collection = collections.find((col) => String(col._id) === String(item.collectionId));
return {
id: String(item._id),
datasetId: String(item.datasetId),
collectionId: String(item.collectionId),
sourceName: collection?.name || '',
sourceId: collection?.fileId || collection?.rawLink,
q: item.q,
a: item.a,
chunkIndex: item.chunkIndex,
indexes: item.indexes,
// @ts-ignore
score: item.score
};
}),
tokenLen: 0
};
}
// plus reRank search result
export async function reRankSearchResult({
data,
query
}: {
data: SearchDataResponseItemType[];
query: string;
}): Promise<SearchDataResponseItemType[]> {
try {
const results = await reRankRecall({
query,
inputs: data.map((item) => ({
id: item.id,
text: `${item.q}\n${item.a}`
}))
});
if (!Array.isArray(results)) return data;
// add new score to data
const mergeResult = results
.map((item) => {
const target = data.find((dataItem) => dataItem.id === item.id);
if (!target) return null;
return {
...target,
score: item.score || target.score
};
})
.filter(Boolean) as SearchDataResponseItemType[];
return mergeResult;
} catch (error) {
return data;
}
}
export function filterResultsByMaxTokens(list: SearchDataResponseItemType[], maxTokens: number) {
const results: SearchDataResponseItemType[] = [];
let totalTokens = 0;
for (let i = 0; i < list.length; i++) {
const item = list[i];
totalTokens += countPromptTokens(item.q + item.a);
if (totalTokens > maxTokens + 200) {
break;
}
results.push(item);
if (totalTokens > maxTokens) {
break;
}
}
return results;
}
// ------------------ search end ------------------
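Mirroring the dataset-search dispatcher later in this commit, a call against the new signature looks roughly like this; the values are illustrative, queries feeds the multi-query recall loop, and rawQuery is what the reranker scores against:
const { searchRes, tokenLen } = await searchDatasetData({
  rawQuery: userChatInput,    // original question, used for reRank
  queries: [userChatInput],   // query-extension variants would append here
  model: vectorModel.model,   // embedding model id
  similarity: 0.4,
  limit: 1500,                // token budget for returned chunks
  datasetIds: datasets.map((d) => d.datasetId), // field name assumed from SelectedDatasetType
  searchMode: DatasetSearchModeEnum.embFullTextReRank
});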

View File

@ -170,9 +170,11 @@ export async function generateVector(): Promise<any> {
err.response?.data?.error?.type === 'invalid_request_error' ||
err?.code === 500
) {
addLog.info('invalid message format', {
dataItem
});
addLog.info('Lock training data');
console.log(err?.code);
console.log(err.response?.data?.error?.type);
console.log(err?.message);
try {
await MongoDatasetTraining.findByIdAndUpdate(data._id, {
lockTime: new Date('2998/5/5')

View File

@ -5,7 +5,7 @@ import { ChatRoleEnum } from '@fastgpt/global/core/chat/constants';
import { getAIApi } from '@fastgpt/service/core/ai/config';
import type { ClassifyQuestionAgentItemType } from '@fastgpt/global/core/module/type.d';
import { ModuleInputKeyEnum, ModuleOutputKeyEnum } from '@fastgpt/global/core/module/constants';
import type { ModuleDispatchProps } from '@/types/core/chat/type';
import type { ModuleDispatchProps } from '@fastgpt/global/core/module/type.d';
import { replaceVariable } from '@fastgpt/global/common/string/tools';
import { Prompt_CQJson } from '@/global/core/prompt/agent';
import { FunctionModelItemType } from '@fastgpt/global/core/ai/model.d';
@ -43,8 +43,8 @@ export const dispatchClassifyQuestion = async (props: Props): Promise<CQResponse
const chatHistories = getHistories(history, histories);
const { arg, tokens } = await (async () => {
if (cqModel.functionCall) {
return functionCall({
if (cqModel.toolChoice) {
return toolChoice({
...props,
histories: chatHistories,
cqModel
@ -73,7 +73,7 @@ export const dispatchClassifyQuestion = async (props: Props): Promise<CQResponse
};
};
async function functionCall({
async function toolChoice({
user,
cqModel,
histories,

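For models flagged with toolChoice, the renamed helper presumably issues an OpenAI tools request instead of the deprecated function-call fields. A minimal sketch; agentFunction stands in for the classification schema FastGPT builds from the branch list, which this hunk does not show:
const ai = getAIApi(user.openaiAccount, 480000);
const response = await ai.chat.completions.create({
  model: cqModel.model,
  temperature: 0,
  messages: adaptChat2GptMessages({ messages: filterMessages, reserveId: false }),
  tools: [{ type: 'function', function: agentFunction }],
  // force the model to call the classifier rather than answer freely
  tool_choice: { type: 'function', function: { name: agentFunction.name } }
});
const arg = JSON.parse(
  response.choices?.[0]?.message?.tool_calls?.[0]?.function?.arguments || '{}'
);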
View File

@ -5,17 +5,19 @@ import { ChatRoleEnum } from '@fastgpt/global/core/chat/constants';
import { getAIApi } from '@fastgpt/service/core/ai/config';
import type { ContextExtractAgentItemType } from '@fastgpt/global/core/module/type';
import { ModuleInputKeyEnum, ModuleOutputKeyEnum } from '@fastgpt/global/core/module/constants';
import type { ModuleDispatchProps } from '@/types/core/chat/type';
import type { ModuleDispatchProps } from '@fastgpt/global/core/module/type.d';
import { Prompt_ExtractJson } from '@/global/core/prompt/agent';
import { replaceVariable } from '@fastgpt/global/common/string/tools';
import { FunctionModelItemType } from '@fastgpt/global/core/ai/model.d';
import { getHistories } from '../utils';
import { getExtractModel } from '@/service/core/ai/model';
type Props = ModuleDispatchProps<{
[ModuleInputKeyEnum.history]?: ChatItemType[];
[ModuleInputKeyEnum.contextExtractInput]: string;
[ModuleInputKeyEnum.extractKeys]: ContextExtractAgentItemType[];
[ModuleInputKeyEnum.description]: string;
[ModuleInputKeyEnum.aiModel]: string;
}>;
type Response = {
[ModuleOutputKeyEnum.success]?: boolean;
@ -30,19 +32,19 @@ export async function dispatchContentExtract(props: Props): Promise<Response> {
const {
user,
histories,
inputs: { content, history = 6, description, extractKeys }
inputs: { content, history = 6, model, description, extractKeys }
} = props;
if (!content) {
return Promise.reject('Input is empty');
}
const extractModel = global.extractModels[0];
const extractModel = getExtractModel(model);
const chatHistories = getHistories(history, histories);
const { arg, tokens, rawResponse } = await (async () => {
if (extractModel.functionCall) {
return functionCall({
const { arg, tokens } = await (async () => {
if (extractModel.toolChoice) {
return toolChoice({
...props,
histories: chatHistories,
extractModel
@ -60,6 +62,9 @@ export async function dispatchContentExtract(props: Props): Promise<Response> {
if (!extractKeys.find((item) => item.key === key)) {
delete arg[key];
}
if (arg[key] === '') {
delete arg[key];
}
}
// auth fields
@ -91,7 +96,7 @@ export async function dispatchContentExtract(props: Props): Promise<Response> {
};
}
async function functionCall({
async function toolChoice({
extractModel,
user,
histories,
@ -101,17 +106,19 @@ async function functionCall({
...histories,
{
obj: ChatRoleEnum.Human,
value: `<Task description>
value: `Your task:
"""
${description || 'Extract an appropriate JSON string according to the user request.'}
"""
"""
-
-
-
</Task description>
-
-
"""
<Text>
${content}
</Text>`
Input text: "${content}"`
}
];
const filterMessages = ChatContextFilter({

View File

@ -16,7 +16,7 @@ import { adaptChat2GptMessages } from '@fastgpt/global/core/chat/adapt';
import { Prompt_QuotePromptList, Prompt_QuoteTemplateList } from '@/global/core/prompt/AIChat';
import type { AIChatModuleProps } from '@fastgpt/global/core/module/node/type.d';
import { replaceVariable } from '@fastgpt/global/common/string/tools';
import type { ModuleDispatchProps } from '@/types/core/chat/type';
import type { ModuleDispatchProps } from '@fastgpt/global/core/module/type.d';
import { responseWrite, responseWriteController } from '@fastgpt/service/common/response';
import { getChatModel, ModelTypeEnum } from '@/service/core/ai/model';
import type { SearchDataResponseItemType } from '@fastgpt/global/core/dataset/type';

View File

@ -2,11 +2,12 @@ import type { moduleDispatchResType } from '@fastgpt/global/core/chat/type.d';
import { countModelPrice } from '@/service/support/wallet/bill/utils';
import type { SelectedDatasetType } from '@fastgpt/global/core/module/api.d';
import type { SearchDataResponseItemType } from '@fastgpt/global/core/dataset/type';
import type { ModuleDispatchProps } from '@/types/core/chat/type';
import type { ModuleDispatchProps } from '@fastgpt/global/core/module/type.d';
import { ModelTypeEnum } from '@/service/core/ai/model';
import { searchDatasetData } from '@/service/core/dataset/data/pg';
import { ModuleInputKeyEnum, ModuleOutputKeyEnum } from '@fastgpt/global/core/module/constants';
import { DatasetSearchModeEnum } from '@fastgpt/global/core/dataset/constant';
import { searchQueryExtension } from '@fastgpt/service/core/ai/functions/queryExtension';
type DatasetSearchProps = ModuleDispatchProps<{
[ModuleInputKeyEnum.datasetSelectList]: SelectedDatasetType;
@ -26,24 +27,34 @@ export async function dispatchDatasetSearch(
props: DatasetSearchProps
): Promise<DatasetSearchResponse> {
const {
teamId,
tmbId,
inputs: { datasets = [], similarity = 0.4, limit = 5, searchMode, userChatInput }
} = props as DatasetSearchProps;
if (!Array.isArray(datasets)) {
return Promise.reject('Quote type error');
}
if (datasets.length === 0) {
return Promise.reject("You didn't choose the knowledge base");
return Promise.reject('core.chat.error.Select dataset empty');
}
if (!userChatInput) {
return Promise.reject('Your input is empty');
return Promise.reject('core.chat.error.User question empty');
}
// get vector
const vectorModel = datasets[0]?.vectorModel || global.vectorModels[0];
// const { queries: extensionQueries } = await searchQueryExtension({
// query: userChatInput,
// model: global.chatModels[0].model
// });
const concatQueries = [userChatInput];
// start search
const { searchRes, tokenLen } = await searchDatasetData({
text: userChatInput,
rawQuery: userChatInput,
queries: concatQueries,
model: vectorModel.model,
similarity,
limit,
@ -61,7 +72,7 @@ export async function dispatchDatasetSearch(
tokens: tokenLen,
type: ModelTypeEnum.vector
}),
query: userChatInput,
query: concatQueries.join('\n'),
model: vectorModel.name,
tokens: tokenLen,
similarity,

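The commented-out searchQueryExtension call above hints at how multi-query recall is meant to be fed once query extension ships. A hedged sketch of that wiring; the { queries } return shape is assumed from the commented code:
// Hypothetical wiring once query extension is enabled: the extended variants
// join the raw question, and every entry runs through recall.
const { queries: extensionQueries } = await searchQueryExtension({
  query: userChatInput,
  model: global.chatModels[0].model
});
const concatQueries = [userChatInput, ...extensionQueries];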
View File

@ -1,12 +1,11 @@
import { NextApiResponse } from 'next';
import { ModuleInputKeyEnum } from '@fastgpt/global/core/module/constants';
import { ModuleOutputKeyEnum } from '@fastgpt/global/core/module/constants';
import { RunningModuleItemType } from '@/types/app';
import { ModuleDispatchProps } from '@/types/core/chat/type';
import type { ChatHistoryItemResType, ChatItemType } from '@fastgpt/global/core/chat/type.d';
import type { ChatDispatchProps, RunningModuleItemType } from '@fastgpt/global/core/module/type.d';
import { ModuleDispatchProps } from '@fastgpt/global/core/module/type.d';
import type { ChatHistoryItemResType } from '@fastgpt/global/core/chat/type.d';
import { FlowNodeInputTypeEnum, FlowNodeTypeEnum } from '@fastgpt/global/core/module/node/constant';
import { ModuleItemType } from '@fastgpt/global/core/module/type';
import { UserType } from '@fastgpt/global/support/user/type';
import { replaceVariable } from '@fastgpt/global/common/string/tools';
import { responseWrite } from '@fastgpt/service/common/response';
import { sseResponseEventEnum } from '@fastgpt/service/common/response/constant';
@ -22,11 +21,12 @@ import { dispatchClassifyQuestion } from './agent/classifyQuestion';
import { dispatchContentExtract } from './agent/extract';
import { dispatchHttpRequest } from './tools/http';
import { dispatchAppRequest } from './tools/runApp';
import { dispatchCFR } from './tools/cfr';
import { dispatchRunPlugin } from './plugin/run';
import { dispatchPluginInput } from './plugin/runInput';
import { dispatchPluginOutput } from './plugin/runOutput';
const callbackMap: Record<string, Function> = {
const callbackMap: Record<`${FlowNodeTypeEnum}`, Function> = {
[FlowNodeTypeEnum.historyNode]: dispatchHistory,
[FlowNodeTypeEnum.questionInput]: dispatchChatInput,
[FlowNodeTypeEnum.answerNode]: dispatchAnswer,
@ -38,38 +38,28 @@ const callbackMap: Record<string, Function> = {
[FlowNodeTypeEnum.runApp]: dispatchAppRequest,
[FlowNodeTypeEnum.pluginModule]: dispatchRunPlugin,
[FlowNodeTypeEnum.pluginInput]: dispatchPluginInput,
[FlowNodeTypeEnum.pluginOutput]: dispatchPluginOutput
[FlowNodeTypeEnum.pluginOutput]: dispatchPluginOutput,
[FlowNodeTypeEnum.cfr]: dispatchCFR,
// none
[FlowNodeTypeEnum.userGuide]: () => Promise.resolve(),
[FlowNodeTypeEnum.variable]: () => Promise.resolve()
};
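Typing callbackMap as Record<`${FlowNodeTypeEnum}`, Function> rather than Record<string, Function> makes the dispatcher exhaustive: a node type without a handler is now a compile-time error instead of a silent runtime miss. A tiny sketch of the pattern with a stand-in enum:
enum NodeType {
  chat = 'chatNode',
  answer = 'answerNode'
}
// `${NodeType}` widens the enum to its literal string values, so every member
// must appear as a key; deleting either entry below fails to type-check.
const handlers: Record<`${NodeType}`, Function> = {
  [NodeType.chat]: () => Promise.resolve(),
  [NodeType.answer]: () => Promise.resolve()
};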
/* running */
export async function dispatchModules({
res,
teamId,
tmbId,
user,
appId,
modules,
chatId,
responseChatItemId,
histories = [],
startParams = {},
variables = {},
user,
stream = false,
detail = false
}: {
res: NextApiResponse;
teamId: string;
tmbId: string;
user: UserType;
appId: string;
detail = false,
...props
}: ChatDispatchProps & {
modules: ModuleItemType[];
chatId?: string;
responseChatItemId?: string;
histories: ChatItemType[];
startParams?: Record<string, any>;
variables?: Record<string, any>;
stream?: boolean;
detail?: boolean;
}) {
// set sse response headers
if (stream) {
@ -196,25 +186,21 @@ export async function dispatchModules({
module.inputs.forEach((item: any) => {
params[item.key] = item.value;
});
const props: ModuleDispatchProps<Record<string, any>> = {
const dispatchData: ModuleDispatchProps<Record<string, any>> = {
...props,
res,
teamId,
tmbId,
user,
appId,
chatId,
responseChatItemId,
stream,
detail,
variables,
histories,
user,
stream,
detail,
outputs: module.outputs,
inputs: params
};
const dispatchRes: Record<string, any> = await (async () => {
if (callbackMap[module.flowType]) {
return callbackMap[module.flowType](props);
return callbackMap[module.flowType](dispatchData);
}
return {};
})();

View File

@ -1,6 +1,6 @@
import { ModuleInputKeyEnum } from '@fastgpt/global/core/module/constants';
import type { ChatItemType } from '@fastgpt/global/core/chat/type.d';
import type { ModuleDispatchProps } from '@/types/core/chat/type';
import type { ModuleDispatchProps } from '@fastgpt/global/core/module/type.d';
import { getHistories } from '../utils';
export type HistoryProps = ModuleDispatchProps<{
maxContext?: number;

View File

@ -1,5 +1,5 @@
import { ModuleInputKeyEnum } from '@fastgpt/global/core/module/constants';
import type { ModuleDispatchProps } from '@/types/core/chat/type';
import type { ModuleDispatchProps } from '@fastgpt/global/core/module/type.d';
export type UserChatInputProps = ModuleDispatchProps<{
[ModuleInputKeyEnum.userChatInput]: string;
}>;

View File

@ -1,4 +1,4 @@
import type { ModuleDispatchProps } from '@/types/core/chat/type';
import type { ModuleDispatchProps } from '@fastgpt/global/core/module/type.d';
import { dispatchModules } from '../index';
import { FlowNodeTypeEnum } from '@fastgpt/global/core/module/node/constant';
import {
@ -21,6 +21,7 @@ type RunPluginResponse = {
export const dispatchRunPlugin = async (props: RunPluginProps): Promise<RunPluginResponse> => {
const {
mode,
teamId,
tmbId,
inputs: { pluginId, ...data }
@ -71,6 +72,7 @@ export const dispatchRunPlugin = async (props: RunPluginProps): Promise<RunPlugi
if (output) {
output.moduleLogo = plugin.avatar;
}
console.log(responseData.length);
return {
answerText,
@ -79,7 +81,14 @@ export const dispatchRunPlugin = async (props: RunPluginProps): Promise<RunPlugi
moduleLogo: plugin.avatar,
price: responseData.reduce((sum, item) => sum + (item.price || 0), 0),
runningTime: responseData.reduce((sum, item) => sum + (item.runningTime || 0), 0),
pluginOutput: output?.pluginOutput
pluginOutput: output?.pluginOutput,
pluginDetail:
mode === 'test' && plugin.teamId === teamId
? responseData.filter((item) => {
const filterArr = [FlowNodeTypeEnum.pluginOutput];
return !filterArr.includes(item.moduleType as any);
})
: undefined
},
...(output ? output.pluginOutput : {})
};

View File

@ -1,4 +1,4 @@
import type { ModuleDispatchProps } from '@/types/core/chat/type';
import type { ModuleDispatchProps } from '@fastgpt/global/core/module/type.d';
export type PluginInputProps = ModuleDispatchProps<{
[key: string]: any;

View File

@ -1,5 +1,5 @@
import type { moduleDispatchResType } from '@fastgpt/global/core/chat/type.d';
import type { ModuleDispatchProps } from '@/types/core/chat/type';
import type { ModuleDispatchProps } from '@fastgpt/global/core/module/type.d';
import { ModuleOutputKeyEnum } from '@fastgpt/global/core/module/constants';
export type PluginOutputProps = ModuleDispatchProps<{

View File

@ -1,7 +1,7 @@
import { sseResponseEventEnum } from '@fastgpt/service/common/response/constant';
import { responseWrite } from '@fastgpt/service/common/response';
import { textAdaptGptResponse } from '@/utils/adapt';
import type { ModuleDispatchProps } from '@/types/core/chat/type';
import type { ModuleDispatchProps } from '@fastgpt/global/core/module/type.d';
import { ModuleOutputKeyEnum } from '@fastgpt/global/core/module/constants';
export type AnswerProps = ModuleDispatchProps<{
text: string;

View File

@ -0,0 +1,167 @@
import type { ChatItemType, moduleDispatchResType } from '@fastgpt/global/core/chat/type.d';
import type { ModuleDispatchProps } from '@fastgpt/global/core/module/type.d';
import { ModuleInputKeyEnum, ModuleOutputKeyEnum } from '@fastgpt/global/core/module/constants';
import { getHistories } from '../utils';
import { getAIApi } from '@fastgpt/service/core/ai/config';
import { replaceVariable } from '@fastgpt/global/common/string/tools';
import { getExtractModel } from '@/service/core/ai/model';
type Props = ModuleDispatchProps<{
[ModuleInputKeyEnum.aiModel]: string;
[ModuleInputKeyEnum.aiSystemPrompt]?: string;
[ModuleInputKeyEnum.history]?: ChatItemType[] | number;
[ModuleInputKeyEnum.userChatInput]: string;
}>;
type Response = {
[ModuleOutputKeyEnum.text]: string;
[ModuleOutputKeyEnum.responseData]?: moduleDispatchResType;
};
export const dispatchCFR = async ({
histories,
inputs: { model, systemPrompt, history, userChatInput }
}: Props): Promise<Response> => {
if (!userChatInput) {
return Promise.reject('Question is empty');
}
if (histories.length === 0 && !systemPrompt) {
return {
[ModuleOutputKeyEnum.text]: userChatInput
};
}
const extractModel = getExtractModel(model);
const chatHistories = getHistories(history, histories);
const systemFewShot = systemPrompt
? `Q: Conversation background.
A: ${systemPrompt}
`
: '';
const historyFewShot = chatHistories
.map((item) => {
const role = item.obj === 'Human' ? 'Q' : 'A';
return `${role}: ${item.value}`;
})
.join('\n');
const concatFewShot = `${systemFewShot}${historyFewShot}`.trim();
const ai = getAIApi(undefined, 480000);
const result = await ai.chat.completions.create({
model: extractModel.model,
temperature: 0,
max_tokens: 150,
messages: [
{
role: 'user',
content: replaceVariable(defaultPrompt, {
query: userChatInput,
histories: concatFewShot
})
}
],
stream: false
});
let answer = result.choices?.[0]?.message?.content || '';
// console.log(
// replaceVariable(defaultPrompt, {
// query: userChatInput,
// histories: concatFewShot
// })
// );
// console.log(answer);
const tokens = result.usage?.total_tokens || 0;
return {
[ModuleOutputKeyEnum.responseData]: {
price: extractModel.price * tokens,
model: extractModel.name || '',
tokens,
query: userChatInput,
textOutput: answer
},
[ModuleOutputKeyEnum.text]: answer
};
};
const defaultPrompt = `Do not answer any question. Your task is to combine the chat history and rewrite the current question, resolving pronouns and implicit references so the question stands on its own. For example:
History:
"""
Q: Conversation background.
A: Questions about the introduction and usage of FastGPT.
"""
Current question: How do I download it?
Output: How do I download FastGPT?
----------------
History:
"""
Q: Error "no connection"
A: The FastGPT "no connection" error may be caused by...
"""
Current question: How do I fix it?
Output: How do I fix the FastGPT "no connection" error?
----------------
History:
"""
Q: Who is the author?
A: The author of FastGPT is labring.
"""
Current question: Tell me about them
Output: Tell me about labring, the author of FastGPT.
----------------
History:
"""
Q: Who is the author?
A: The author of FastGPT is labring.
"""
Current question: I want to buy the commercial edition
Output: How do I buy the commercial edition of FastGPT?
----------------
History:
"""
Q: Conversation background.
A: Questions about the introduction and usage of FastGPT.
"""
Current question: nh
Output: nh
----------------
History:
"""
Q: How is FastGPT priced?
A: FastGPT pricing can be found at...
"""
Current question: Do you know laf?
Output: Do you know laf?
----------------
History:
"""
Q: Advantages of FastGPT
A: 1. Open source
2. Easy to use
3. Highly extensible
"""
Current question: Tell me about point 2
Output: Tell me about how easy FastGPT is to use.
----------------
History:
"""
Q: What is FastGPT?
A: FastGPT is a RAG platform.
Q: What is Sealos?
A: Sealos is a cloud operating system.
"""
Current question: What is the relationship between them?
Output: What is the relationship between FastGPT and Sealos?
----------------
History:
"""
{{histories}}
"""
Current question: {{query}}
Output: `;
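A sketch of how the template above is consumed at runtime; the sample strings are illustrative, and replaceVariable is the {{...}} substitution helper imported at the top of the file:
const filled = replaceVariable(defaultPrompt, {
  histories: 'Q: Who is the author?\nA: The author of FastGPT is labring.',
  query: 'Tell me about them'
});
// The model is expected to complete the final "Output:" line with the
// rewritten, self-contained question, e.g.
// "Tell me about labring, the author of FastGPT."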

Some files were not shown because too many files have changed in this diff