No commits in common. "main" and "v4.11.1" have entirely different histories.

1876 changed files with 56667 additions and 121595 deletions


@@ -1,126 +0,0 @@
# CLAUDE.md
This file provides guidance for Claude Code (claude.ai/code) when working in this repository.
## Output Requirements
1. Output language: Chinese
2. Design documents go in `.claude/design`, primarily as Markdown files.
3. Plans must be written to the `.claude/plan` directory, primarily as Markdown files.
## Project Overview
FastGPT is an AI Agent building platform that provides out-of-the-box data processing, model invocation, and visual workflow orchestration through Flow. It is a full-stack TypeScript application built on NextJS, with MongoDB/PostgreSQL on the backend.
**Tech stack**: NextJS + TypeScript + ChakraUI + MongoDB + PostgreSQL (PG Vector)/Milvus
## Architecture
This is a monorepo using pnpm workspaces, structured as follows:
### Packages (library code)
- `packages/global/` - Types, constants, and utility functions shared across all projects
- `packages/service/` - Backend services, database models, API controllers, workflow engine
- `packages/web/` - Shared frontend components, hooks, styles, internationalization
- `packages/templates/` - App templates for the template marketplace
### Projects (applications)
- `projects/app/` - Main NextJS web application (frontend + API routes)
- `projects/sandbox/` - NestJS code-execution sandbox service
- `projects/mcp_server/` - Model Context Protocol server implementation
### Key Directories
- `document/` - Documentation site (NextJS app and content)
- `plugins/` - External plugins (models, crawlers, etc.)
- `deploy/` - Docker and Helm deployment configurations
- `test/` - Centralized test files and utilities
## Development Commands
### Primary commands (run from the project root)
- `pnpm dev` - Start the dev environment for all projects (via the package.json workspace scripts)
- `pnpm build` - Build all projects
- `pnpm test` - Run tests with Vitest
- `pnpm test:workflow` - Run workflow-related tests
- `pnpm lint` - Run ESLint with auto-fix on all TypeScript files
- `pnpm format-code` - Format code with Prettier
### Project-specific commands
**Main app (projects/app/)**:
- `cd projects/app && pnpm dev` - Start the NextJS dev server
- `cd projects/app && pnpm build` - Build the NextJS app
- `cd projects/app && pnpm start` - Start the production server
**Sandbox (projects/sandbox/)**:
- `cd projects/sandbox && pnpm dev` - Start the NestJS dev server in watch mode
- `cd projects/sandbox && pnpm build` - Build the NestJS app
- `cd projects/sandbox && pnpm test` - Run Jest tests
**MCP server (projects/mcp_server/)**:
- `cd projects/mcp_server && bun dev` - Start in watch mode with Bun
- `cd projects/mcp_server && bun build` - Build the MCP server
- `cd projects/mcp_server && bun start` - Start the MCP server
### Utility commands
- `pnpm create:i18n` - Generate internationalization translation files
- `pnpm api:gen` - Generate OpenAPI documentation
- `pnpm initIcon` - Initialize icon assets
- `pnpm gen:theme-typings` - Generate Chakra UI theme type definitions
## Testing
The project uses Vitest with coverage reporting. Key test commands:
- `pnpm test` - Run all tests
- `pnpm test:workflow` - Run workflow tests only
- Test files live in the `test/` directory and `projects/app/test/`
- Coverage reports are generated in the `coverage/` directory
## Code Organization Patterns
### Monorepo structure
- Shared code lives in `packages/` and is imported via workspace references
- Each project in `projects/` is a standalone application
- Import shared packages as `@fastgpt/global`, `@fastgpt/service`, `@fastgpt/web`
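As an illustration of the workspace-reference mechanism (the package names are real; the fragment itself is a hypothetical sketch): a consuming project's `package.json` declares the shared packages with pnpm's `workspace:` protocol, and imports then resolve by package name.

```json
{
  "dependencies": {
    "@fastgpt/global": "workspace:*",
    "@fastgpt/service": "workspace:*",
    "@fastgpt/web": "workspace:*"
  }
}
```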
### API structure
- NextJS API routes live in `projects/app/src/pages/api/`
- API route contracts are defined in `packages/global/openapi/`
- Shared server-side business logic lives in `packages/service/` and `projects/app/src/service`
- Database models live in `packages/service/`, using MongoDB/Mongoose
### Frontend architecture
- React components live in `projects/app/src/components/` and `packages/web/components/`
- Styling uses Chakra UI; the custom theme is in `packages/web/styles/theme.ts`
- Internationalization files are in `packages/web/i18n/`
- State management uses React Context and Zustand
## Development Notes
- **Package manager**: pnpm with workspace configuration
- **Node version**: requires Node.js >=18.16.0 and pnpm >=9.0.0
- **Databases**: supports MongoDB, plus PostgreSQL with pgvector or Milvus for vector storage
- **AI integration**: supports multiple AI providers through a unified interface
- **Internationalization**: full support for Chinese, English, and Japanese
## Key File Patterns
- `.ts` and `.tsx` files are all TypeScript
- Database models use Mongoose with TypeScript
- API routes follow NextJS conventions
- Component files use React function components and hooks
- Shared type definitions live in `.d.ts` files under `packages/global/`
## Environment Configuration
- The config file is `projects/app/data/config.json`
- Environment-specific configuration is supported
- Model configuration lives in `packages/service/core/ai/config/`
## Code Standards
- Prefer `type` over `interface` for type declarations whenever possible.
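A quick sketch of this convention (the names below are illustrative, not from the codebase): `type` aliases cover object shapes and, unlike `interface`, also compose into unions and intersections.

```typescript
// Prefer `type` aliases: they describe object shapes...
type ChatMessage = {
  role: 'user' | 'assistant';
  content: string;
};

// ...and also unions/intersections, which `interface` cannot express.
type SystemMessage = { role: 'system'; content: string };
type AnyMessage = ChatMessage | SystemMessage;

const format = (m: AnyMessage): string => `${m.role}: ${m.content}`;

console.log(format({ role: 'user', content: 'hi' })); // → "user: hi"
```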
## Agent Design Standards
1. For feature implementation and complex bug fixes, write a design document first and execute the fix only after the user has confirmed it.
2. Follow a "design doc - test examples - write code - run tests - revise code/docs" workflow, using tests as the core means of validating the design.

File diff suppressed because it is too large.


@@ -1,53 +0,0 @@
---
name: unit-test-generator
description: Use this agent when you need to write comprehensive unit tests for your code. Examples: <example>Context: User has written a new utility function and wants comprehensive test coverage. user: 'I just wrote this function to validate email addresses, can you help me write unit tests for it?' assistant: 'I'll use the unit-test-generator agent to create comprehensive unit tests that cover all branches and edge cases for your email validation function.' <commentary>Since the user needs unit tests written, use the unit-test-generator agent to analyze the function and create thorough test coverage.</commentary></example> <example>Context: User is working on a React component and needs test coverage. user: 'Here's my new UserProfile component, I need unit tests that cover all the different states and user interactions' assistant: 'Let me use the unit-test-generator agent to create comprehensive unit tests for your UserProfile component.' <commentary>The user needs unit tests for a React component, so use the unit-test-generator agent to create tests covering all component states and interactions.</commentary></example>
model: inherit
color: yellow
---
You are a Unit Test Assistant, an expert in writing comprehensive and robust unit tests. Your expertise spans multiple testing frameworks including Vitest, Jest, React Testing Library, and testing best practices for TypeScript applications.
When analyzing code for testing, you will:
1. **Analyze Code Structure**: Examine the function/component/class to identify all execution paths, conditional branches, loops, error handling, and edge cases that need testing coverage.
2. **Design Comprehensive Test Cases**: Create test cases that cover:
- All conditional branches (if/else, switch cases, ternary operators)
- Loop iterations (empty, single item, multiple items)
- Error conditions and exception handling
- Boundary conditions (null, undefined, empty strings, zero, negative numbers, maximum values)
- Valid input scenarios across different data types
- Integration points with external dependencies
3. **Follow Testing Best Practices**:
- Use descriptive test names that clearly state what is being tested
- Follow the Arrange-Act-Assert pattern
- Mock external dependencies appropriately
- Test behavior, not implementation details
- Ensure tests are isolated and independent
- Use appropriate assertions for the testing framework
4. **Generate Framework-Appropriate Code**: Based on the project context (FastGPT uses Vitest), write tests using:
- Proper import statements for the testing framework
- Correct syntax for the identified testing library
- Appropriate mocking strategies (vi.mock for Vitest, jest.mock for Jest)
- Proper setup and teardown when needed
5. **Ensure Complete Coverage**: Verify that your test suite covers:
- Happy path scenarios
- Error scenarios
- Edge cases and boundary conditions
- All public methods/functions
- Different component states (for React components)
- User interactions (for UI components)
6. **Optimize Test Structure**: Organize tests logically using:
- Descriptive describe blocks for grouping related tests
- Clear test descriptions that explain the scenario
- Shared setup in beforeEach/beforeAll when appropriate
- Helper functions to reduce code duplication
7. **Unit Test File Locations**:
- Unit tests for `packages` code go in the FastGPT/test directory.
- Unit tests for `projects/app` code go in the FastGPT/projects/app/test directory.
When you receive code to test, first analyze it thoroughly, then provide a complete test suite with explanatory comments about what each test covers and why it's important for comprehensive coverage.
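As a minimal sketch of the test shape described above (the `validateEmail` function and its regex are hypothetical; plain thrown errors stand in for Vitest's `expect` so the sketch stays self-contained):

```typescript
// Hypothetical function under test.
const validateEmail = (input: string): boolean =>
  /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(input);

// One scenario per case, covering the happy path, boundaries, and invalid branches.
const cases: Array<[string, boolean]> = [
  ['user@example.com', true], // happy path
  ['', false],                // empty-string boundary
  ['no-at-sign.com', false],  // missing @
  ['a@b', false],             // missing dot after @
  ['a b@c.com', false]        // whitespace in local part
];

for (const [input, expected] of cases) {
  const actual = validateEmail(input); // Act
  if (actual !== expected) {           // Assert
    throw new Error(`validateEmail(${JSON.stringify(input)}) -> ${actual}, expected ${expected}`);
  }
}
```

In a real Vitest suite each tuple would become its own `it(...)` block with an `expect(actual).toBe(expected)` assertion.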


@@ -1,604 +0,0 @@
---
name: workflow-agent
description: Invoke this agent when the user needs to develop workflow code.
model: inherit
color: green
---
# FastGPT Workflow System Architecture
## Overview
The FastGPT workflow system is a Node.js/TypeScript visual workflow engine supporting drag-and-drop node orchestration, real-time execution, concurrency control, and interactive debugging. It uses a queue-based execution architecture and models complex business logic as a directed graph.
## Core Architecture
### 1. Project Structure
```
FastGPT/
├── packages/
│   ├── global/core/workflow/   # Global workflow types and constants
│   │   ├── constants.ts        # Workflow constants
│   │   ├── node/               # Node type definitions
│   │   │   └── constant.ts     # Node enums and configuration
│   │   ├── runtime/            # Runtime types and utilities
│   │   │   ├── constants.ts    # Runtime constants
│   │   │   ├── type.d.ts       # Runtime type definitions
│   │   │   └── utils.ts        # Runtime utility functions
│   │   ├── template/           # Node template definitions
│   │   │   └── system/         # System node templates
│   │   └── type/               # Type definitions
│   │       ├── node.d.ts       # Node types
│   │       ├── edge.d.ts       # Edge types
│   │       └── io.d.ts         # Input/output types
│   └── service/core/workflow/  # Workflow service layer
│       ├── constants.ts        # Service constants
│       ├── dispatch/           # Dispatcher core
│       │   ├── index.ts        # Workflow execution engine ⭐
│       │   ├── constants.ts    # Node dispatch map
│       │   ├── type.d.ts       # Dispatcher types
│       │   ├── ai/             # AI nodes
│       │   ├── tools/          # Tool nodes
│       │   ├── dataset/        # Dataset nodes
│       │   ├── interactive/    # Interactive nodes
│       │   ├── loop/           # Loop nodes
│       │   └── plugin/         # Plugin nodes
│       └── utils.ts            # Workflow utility functions
└── projects/app/src/
    ├── pages/api/v1/chat/completions.ts  # Chat API entry point
    └── pages/api/core/workflow/debug.ts  # Workflow debug API
```
### 2. Execution Engine Core (dispatch/index.ts)
#### Core class: WorkflowQueue
The execution engine uses a queue-based architecture. Key characteristics:
- **Concurrency control**: enforces a maximum concurrency limit (default 10)
- **State management**: tracks node execution state (waiting/active/skipped)
- **Error handling**: supports node-level error capture and skip mechanisms
- **Interactivity**: supports pausing on user-interaction nodes and resuming later
#### Execution flow
```
1. Initialize a WorkflowQueue instance
2. Identify the entry nodes (isEntry=true)
3. Push the entry nodes onto the activeRunQueue
4. Loop over the active node queue:
   - Check each node's execution conditions
   - Run or skip the node
   - Update edge statuses
   - Enqueue successor nodes
5. Drain the skipped-node queue
6. Return the execution result
```
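The steps above can be sketched as a toy executor (illustrative only, not FastGPT's actual implementation): a node becomes runnable once all edges pointing at it are active.

```typescript
type Edge = { source: string; target: string; status: 'waiting' | 'active' | 'skipped' };
type FlowNode = { id: string; run: () => string };

function runWorkflow(nodes: FlowNode[], edges: Edge[], entryIds: string[]): string[] {
  const results: string[] = [];
  const queue = [...entryIds]; // active run queue, seeded with entry nodes
  while (queue.length) {
    const id = queue.shift()!;
    const node = nodes.find((n) => n.id === id);
    if (!node) continue;
    results.push(node.run());
    for (const e of edges) {
      if (e.source !== id) continue;
      e.status = 'active'; // mark the outgoing edge
      const ready = edges
        .filter((x) => x.target === e.target)
        .every((x) => x.status === 'active');
      if (ready) queue.push(e.target); // successor becomes runnable
    }
  }
  return results;
}

const order = runWorkflow(
  [
    { id: 'start', run: () => 'start' },
    { id: 'chat', run: () => 'chat' },
    { id: 'answer', run: () => 'answer' }
  ],
  [
    { source: 'start', target: 'chat', status: 'waiting' },
    { source: 'chat', target: 'answer', status: 'waiting' }
  ],
  ['start']
);
console.log(order); // → ["start", "chat", "answer"]
```

The real engine additionally enforces the concurrency cap and maintains a separate skipped-node queue.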
### 3. Node System
#### Node type enum (FlowNodeTypeEnum)
```typescript
enum FlowNodeTypeEnum {
  // Basic nodes
  workflowStart = 'workflowStart', // Workflow start
  chatNode = 'chatNode', // AI chat
  answerNode = 'answerNode', // Answer node
  // Dataset nodes
  datasetSearchNode = 'datasetSearchNode', // Dataset search
  datasetConcatNode = 'datasetConcatNode', // Dataset concatenation
  // Control-flow nodes
  ifElseNode = 'ifElseNode', // Conditional branch
  loop = 'loop', // Loop
  loopStart = 'loopStart', // Loop start
  loopEnd = 'loopEnd', // Loop end
  // Interactive nodes
  userSelect = 'userSelect', // User selection
  formInput = 'formInput', // Form input
  // Tool nodes
  httpRequest468 = 'httpRequest468', // HTTP request
  code = 'code', // Code execution
  readFiles = 'readFiles', // File reading
  variableUpdate = 'variableUpdate', // Variable update
  // AI nodes
  classifyQuestion = 'classifyQuestion', // Question classification
  contentExtract = 'contentExtract', // Content extraction
  agent = 'tools', // Agent
  queryExtension = 'cfr', // Query extension
  // Plugin system
  pluginModule = 'pluginModule', // Plugin module
  appModule = 'appModule', // App module
  tool = 'tool', // Tool call
  // System nodes
  systemConfig = 'userGuide', // System configuration
  globalVariable = 'globalVariable', // Global variable
  comment = 'comment' // Comment node
}
```
#### Node dispatch map (callbackMap)
Each node type has a corresponding dispatch function:
```typescript
export const callbackMap: Record<FlowNodeTypeEnum, Function> = {
  [FlowNodeTypeEnum.workflowStart]: dispatchWorkflowStart,
  [FlowNodeTypeEnum.chatNode]: dispatchChatCompletion,
  [FlowNodeTypeEnum.datasetSearchNode]: dispatchDatasetSearch,
  [FlowNodeTypeEnum.httpRequest468]: dispatchHttp468Request,
  [FlowNodeTypeEnum.ifElseNode]: dispatchIfElse,
  [FlowNodeTypeEnum.agent]: dispatchRunTools,
  // ... more node dispatch functions
};
```
### 4. Data Flow System
#### Input/output value types (WorkflowIOValueTypeEnum)
```typescript
enum WorkflowIOValueTypeEnum {
  string = 'string',
  number = 'number',
  boolean = 'boolean',
  object = 'object',
  arrayString = 'arrayString',
  arrayNumber = 'arrayNumber',
  arrayBoolean = 'arrayBoolean',
  arrayObject = 'arrayObject',
  chatHistory = 'chatHistory', // Chat history
  datasetQuote = 'datasetQuote', // Dataset quote
  dynamic = 'dynamic', // Dynamic type
  any = 'any'
}
```
#### Variable system
- **System variables**: userId, appId, chatId, cTime, etc.
- **User variables**: global variables passed in via the variables parameter
- **Node variables**: reference variables passed between nodes
- **Dynamic variables**: referenced with the {{$variable}} syntax
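The `{{$variable}}` reference syntax can be sketched as a simple template substitution (illustrative only; the engine's real resolver also handles node references and typing):

```typescript
// Replace each {{$name}} with the variable's value; leave unknown references untouched.
const interpolate = (template: string, vars: Record<string, string>): string =>
  template.replace(/\{\{\$(\w+)\}\}/g, (match, name) => vars[name] ?? match);

const out = interpolate('Hello {{$userId}}, app {{$appId}}', {
  userId: 'u_1',
  appId: 'a_9'
});
console.log(out); // → "Hello u_1, app a_9"
```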
### 5. State Management
#### Runtime state
```typescript
interface RuntimeNodeItemType {
nodeId: string;
name: string;
flowNodeType: FlowNodeTypeEnum;
inputs: FlowNodeInputItemType[];
outputs: FlowNodeOutputItemType[];
isEntry?: boolean;
catchError?: boolean;
}
interface RuntimeEdgeItemType {
source: string;
target: string;
sourceHandle: string;
targetHandle: string;
status: 'waiting' | 'active' | 'skipped';
}
```
#### Execution state
```typescript
enum RuntimeEdgeStatusEnum {
  waiting = 'waiting', // Waiting to run
  active = 'active', // Active
  skipped = 'skipped' // Skipped
}
```
### 6. API Design
#### Main API endpoints
1. **Workflow debugging**: `/api/core/workflow/debug`
   - POST; supports workflow testing and debugging
   - Returns detailed execution results and status information
2. **Chat completions**: `/api/v1/chat/completions`
   - OpenAI-compatible chat API
   - Integrates the workflow execution engine
3. **Code optimization**: `/api/core/workflow/optimizeCode`
   - Workflow code optimization
#### Request/response types
```typescript
interface DispatchFlowResponse {
flowResponses: ChatHistoryItemResType[];
flowUsages: ChatNodeUsageType[];
debugResponse: WorkflowDebugResponse;
workflowInteractiveResponse?: WorkflowInteractiveResponseType;
toolResponses: ToolRunResponseItemType;
assistantResponses: AIChatItemValueItemType[];
runTimes: number;
newVariables: Record<string, string>;
durationSeconds: number;
}
```
## Core Features
### 1. Concurrency control
- Caps the number of concurrently running nodes
- Queue-based scheduling avoids resource contention
- Node-level execution state management
### 2. Error handling
- Node-level error capture
- The catchError option controls error propagation
- Supports skipping failed nodes and continuing execution
### 3. Interactive execution
- Supports user-interaction nodes (userSelect, formInput)
- Workflow pause and resume
- Interaction state persistence
### 4. Debugging support
- Debug mode provides detailed execution information
- Node execution state visualization
- Variable tracing and inspection
### 5. Extensibility
- Plugin system supports custom nodes
- Modular architecture that is easy to extend
- Tool integrations (HTTP, code execution, etc.)
## Development Guide
### Adding a new node type
1. Add the new type to `FlowNodeTypeEnum`
2. Register a dispatch function in `callbackMap`
3. Implement the node logic under `dispatch/`
4. Define the node template in `template/system/`
### Integrating a custom tool
1. Implement the tool's dispatch function
2. Define the tool's input/output types
3. Register it in callbackMap
4. Add the frontend configuration UI
### Debugging and testing
1. Test via `/api/core/workflow/debug`
2. Enable debug mode to see detailed execution information
3. Inspect node execution state and data flow
4. Use skipNodeQueue to control the execution path
## Performance Optimization
1. **Concurrency control**: set maxConcurrency sensibly to avoid resource overload
2. **Caching**: cache node outputs to avoid redundant computation
3. **Streaming responses**: SSE support for real-time execution status
4. **Resource management**: clean up temporary data and state promptly
---
## Frontend Architecture Design
### 1. Frontend project structure
```
projects/app/src/
├── pageComponents/app/detail/   # App detail pages
│   ├── Workflow/                # Workflow main page
│   │   ├── Header.tsx           # Workflow header
│   │   └── index.tsx            # Workflow entry point
│   ├── WorkflowComponents/      # Core workflow components
│   │   ├── context/             # State-management contexts
│   │   │   ├── index.tsx        # Main context provider ⭐
│   │   │   ├── workflowInitContext.tsx    # Init context
│   │   │   ├── workflowEventContext.tsx   # Event context
│   │   │   └── workflowStatusContext.tsx  # Status context
│   │   ├── Flow/                # ReactFlow core components
│   │   │   ├── index.tsx        # Workflow canvas ⭐
│   │   │   ├── components/      # Workflow UI components
│   │   │   ├── hooks/           # Workflow logic hooks
│   │   │   └── nodes/           # Node render components
│   │   ├── constants.tsx        # Constants
│   │   └── utils.ts             # Utilities
│   └── HTTPTools/               # HTTP tool pages
│       └── Edit.tsx             # HTTP tool editor
├── web/core/workflow/           # Workflow core logic
│   ├── api.ts                   # API calls ⭐
│   ├── adapt.ts                 # Data adaptation
│   ├── type.d.ts                # Type definitions
│   └── utils.ts                 # Utilities
└── global/core/workflow/        # Global workflow definitions
    └── api.d.ts                 # API type definitions
```
### 2. Core State-Management Architecture
#### Layered Context design
The frontend uses a layered Context architecture for efficient state management and inter-component communication:
```typescript
// 1. ReactFlowCustomProvider - outermost provider
ReactFlowProvider → WorkflowInitContextProvider →
WorkflowContextProvider → WorkflowEventContextProvider →
WorkflowStatusContextProvider → children
// 2. Four core Context layers
- WorkflowInitContext: node data and base state
- WorkflowDataContext: node/edge operations and state
- WorkflowEventContext: event handling and UI control
- WorkflowStatusContext: save state and parent-node management
```
#### Main Context API (context/index.tsx)
```typescript
interface WorkflowContextType {
  // Node management
  nodeList: FlowNodeItemType[];
  onChangeNode: (props: FlowNodeChangeProps) => void;
  onUpdateNodeError: (nodeId: string, isError: boolean) => void;
  getNodeDynamicInputs: (nodeId: string) => FlowNodeInputItemType[];
  // Edge management
  onDelEdge: (edgeProps: EdgeDeleteProps) => void;
  // Version control
  past: WorkflowSnapshotsType[];
  future: WorkflowSnapshotsType[];
  undo: () => void;
  redo: () => void;
  pushPastSnapshot: (snapshot: SnapshotProps) => boolean;
  // Debugging
  workflowDebugData?: DebugDataType;
  onNextNodeDebug: (debugData: DebugDataType) => Promise<void>;
  onStartNodeDebug: (debugProps: DebugStartProps) => Promise<void>;
  onStopNodeDebug: () => void;
  // Data conversion
  flowData2StoreData: () => StoreWorkflowType;
  splitToolInputs: (inputs, nodeId) => ToolInputsResult;
}
```
### 3. ReactFlow Integration
#### Node type map (Flow/index.tsx)
```typescript
const nodeTypes: Record<FlowNodeTypeEnum, React.ComponentType> = {
  [FlowNodeTypeEnum.workflowStart]: NodeWorkflowStart,
  [FlowNodeTypeEnum.chatNode]: NodeSimple,
  [FlowNodeTypeEnum.datasetSearchNode]: NodeSimple,
  [FlowNodeTypeEnum.httpRequest468]: NodeHttp,
  [FlowNodeTypeEnum.ifElseNode]: NodeIfElse,
  [FlowNodeTypeEnum.agent]: NodeAgent,
  [FlowNodeTypeEnum.code]: NodeCode,
  [FlowNodeTypeEnum.loop]: NodeLoop,
  [FlowNodeTypeEnum.userSelect]: NodeUserSelect,
  [FlowNodeTypeEnum.formInput]: NodeFormInput,
  // ... 40+ node types
};
```
#### Core canvas features
- **Drag-and-drop orchestration**: visual node editing built on ReactFlow
- **Live connections**: dynamic connecting and disconnecting of nodes
- **Zoom controls**: canvas zooming and panning
- **Selection**: multi-select and batch operations
- **Guide lines**: node alignment and position snapping
### 4. Node Component System
#### Node rendering layout
```
nodes/
├── NodeSimple.tsx        # Generic simple node
├── NodeWorkflowStart.tsx # Workflow start node
├── NodeAgent.tsx         # AI agent node
├── NodeHttp/             # HTTP request node
├── NodeCode/             # Code execution node
├── Loop/                 # Loop node group
├── NodeFormInput/        # Form input node
├── NodePluginIO/         # Plugin IO nodes
├── NodeToolParams/       # Tool parameter node
└── render/               # Render component library
    ├── NodeCard.tsx      # Node card container
    ├── RenderInput/      # Input renderers
    ├── RenderOutput/     # Output renderers
    └── templates/        # Input template components
```
#### Dynamic input system
```typescript
// Supports multiple input types
const inputTemplates = {
  reference: ReferenceTemplate, // Reference another node
  input: TextInput, // Text input
  textarea: TextareaInput, // Multi-line text
  selectApp: AppSelector, // App selector
  selectDataset: DatasetSelector, // Dataset selector
  settingLLMModel: LLMModelConfig, // AI model configuration
  // ... more template types
};
```
```
### 5. 调试和测试系统
#### 调试功能
```typescript
interface DebugDataType {
runtimeNodes: RuntimeNodeItemType[];
runtimeEdges: RuntimeEdgeItemType[];
entryNodeIds: string[];
variables: Record<string, any>;
history?: ChatItemType[];
query?: UserChatItemValueItemType[];
workflowInteractiveResponse?: WorkflowInteractiveResponseType;
}
```
- **单步调试**: 支持逐个节点执行调试
- **断点设置**: 在任意节点设置断点
- **状态查看**: 实时查看节点执行状态
- **变量追踪**: 监控变量在节点间的传递
- **错误定位**: 精确定位执行错误节点
#### 聊天测试
```typescript
// ChatTest组件提供实时工作流测试
<ChatTest
isOpen={isOpenTest}
nodes={workflowTestData?.nodes}
edges={workflowTestData?.edges}
onClose={onCloseTest}
chatId={chatId}
/>
```
### 6. API Integration Layer
#### Workflow API (web/core/workflow/api.ts)
```typescript
// Workflow debug API
export const postWorkflowDebug = (data: PostWorkflowDebugProps) =>
  POST<PostWorkflowDebugResponse>(
    '/core/workflow/debug',
    { ...data, mode: 'debug' },
    { timeout: 300000 }
  );
```
Supported API operations:
- Workflow debugging and testing
- Node template retrieval
- Plugin configuration management
- Version control operations
#### Data adapters
- storeNode2FlowNode: stored node → Flow node
- storeEdge2RenderEdge: stored edge → rendered edge
- uiWorkflow2StoreWorkflow: UI workflow → stored format
- adaptCatchError: error-handling adaptation
### 7. Interaction Design
#### Keyboard shortcuts (hooks/useKeyboard.tsx)
```typescript
const keyboardShortcuts = {
  'Ctrl+Z': undo, // Undo
  'Ctrl+Y': redo, // Redo
  'Ctrl+S': saveWorkflow, // Save workflow
  'Delete': deleteSelectedNodes, // Delete selected nodes
  'Escape': cancelCurrentOperation, // Cancel current operation
};
```
#### Node operations
- **Drag to create**: create nodes by dragging from templates
- **Edge operations**: manage connections between nodes
- **Batch operations**: bulk-edit multiple selected nodes
- **Context menu**: right-click contextual actions
- **Search**: find and jump to nodes quickly
#### Version control
```typescript
// Snapshot system
interface WorkflowSnapshotsType {
  nodes: Node[];
  edges: Edge[];
  chatConfig: AppChatConfigType;
  title: string;
  isSaved?: boolean;
}
```
- **Auto snapshots**: a snapshot is saved automatically on node changes
- **Version history**: switch between multiple versions
- **Cloud sync**: synchronize with server-side versions
- **Collaboration**: team version management
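The `past`/`future` arrays in the context hint at a standard snapshot stack for undo/redo; a minimal sketch of that shape (field names simplified, not the actual implementation):

```typescript
type Snapshot = { title: string };

class SnapshotHistory {
  private past: Snapshot[] = [];
  private future: Snapshot[] = [];

  push(s: Snapshot): void {
    this.past.push(s);
    this.future = []; // a new change invalidates the redo stack
  }
  undo(): Snapshot | undefined {
    const s = this.past.pop();
    if (s) this.future.push(s); // moved to the redo stack
    return s;
  }
  redo(): Snapshot | undefined {
    const s = this.future.pop();
    if (s) this.past.push(s); // moved back to the undo stack
    return s;
  }
}

const h = new SnapshotHistory();
h.push({ title: 'v1' });
h.push({ title: 'v2' });
console.log(h.undo()?.title); // → "v2"
console.log(h.redo()?.title); // → "v2"
```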
### 8. Performance Optimization Strategies
#### Render optimization
```typescript
// Dynamically loaded node components
const nodeTypes: Record<FlowNodeTypeEnum, any> = {
  [FlowNodeTypeEnum.workflowStart]: dynamic(() => import('./nodes/NodeWorkflowStart')),
  [FlowNodeTypeEnum.httpRequest468]: dynamic(() => import('./nodes/NodeHttp')),
  // ... loaded on demand
};
```
- **Lazy loading**: node components are loaded on demand
- **Virtualization**: virtual rendering for large workflows
- **Debouncing**: smooths frequent operations
- **Caching**: caches templates and data
#### State optimization
- **Context splitting**: avoids unnecessary re-renders
- **useMemo/useCallback**: optimizes computed values and function identity
- **Selector pattern**: precise subscription to state changes
- **Batched updates**: merges multiple state updates
### 9. Extensibility
#### Plugin system
```typescript
// Node template extension
interface NodeTemplateListItemType {
  id: string;
  flowNodeType: FlowNodeTypeEnum;
  templateType: string;
  avatar?: string;
  name: string;
  intro?: string;
  isTool?: boolean;
  pluginId?: string;
}
```
- **Custom nodes**: supports third-party node development
- **Template marketplace**: sharing and distribution of node templates
- **Plugin ecosystem**: a rich ecosystem of node plugins
- **Open API**: standardized node development interfaces
#### Theming
- **Node styles**: customizable node appearance
- **Edge styles**: custom edge types and colors
- **Layouts**: multiple layout algorithms
- **Internationalization**: multi-language UI support



@@ -1,534 +0,0 @@
---
name: workflow-interactive-dev
description: For developing interactive responses in FastGPT workflows. Details the architecture of interactive nodes, the development process, and the files that need changes.
---
# Interactive Node Development Guide
## Overview
FastGPT workflows support several interactive node types that pause execution and wait for user input. This guide explains how to develop a new interactive node.
## Existing Interactive Node Types
The system currently supports the following interactive node types:
1. **userSelect** - user selection node (single choice)
2. **formInput** - form input node (multi-field form)
3. **childrenInteractive** - child-workflow interaction
4. **loopInteractive** - loop interaction
5. **paymentPause** - pause-on-arrears interaction
## Interactive Node Architecture
### Core type definitions
Interactive node types are defined in `packages/global/core/workflow/template/system/interactive/type.d.ts`:
```typescript
// Base interactive structure
type InteractiveBasicType = {
  entryNodeIds: string[]; // Entry node IDs
  memoryEdges: RuntimeEdgeItemType[]; // Edges to remember
  nodeOutputs: NodeOutputItemType[]; // Node outputs
  skipNodeQueue?: unknown[]; // Queue of skipped nodes
  usageId?: string; // Usage record ID
};
// A concrete interactive node type
type YourInteractiveNode = InteractiveNodeType & {
  type: 'yourNodeType';
  params: {
    // Node-specific parameters
  };
};
```
### Workflow execution mechanics
Interactive nodes receive special handling during workflow execution (in `packages/service/core/workflow/dispatch/index.ts:1012-1019`):
```typescript
// Some interactive node types do not auto-reset the isEntry flag, because isEntry
// is used to distinguish a first entry from re-entry through the flow.
runtimeNodes.forEach((item) => {
  if (
    item.flowNodeType !== FlowNodeTypeEnum.userSelect &&
    item.flowNodeType !== FlowNodeTypeEnum.formInput &&
    item.flowNodeType !== FlowNodeTypeEnum.agent
  ) {
    item.isEntry = false;
  }
});
```
## Steps for Developing a New Interactive Response
### Step 1: Define the node type
**File**: `packages/global/core/workflow/template/system/interactive/type.d.ts`
```typescript
export type YourInputItemType = {
  // Shape of an input item
  key: string;
  label: string;
  value: any;
  // ... other fields
};
type YourInteractiveNode = InteractiveNodeType & {
  type: 'yourNodeType';
  params: {
    description: string;
    yourInputField: YourInputItemType[];
    submitted?: boolean; // Optional: whether already submitted
  };
};
// Add it to the union type
export type InteractiveNodeResponseType =
  | UserSelectInteractive
  | UserInputInteractive
  | YourInteractiveNode // New
  | ChildrenInteractive
  | LoopInteractive
  | PaymentPauseInteractive;
```
### Step 2: Define the node enum (optional)
**File**: `packages/global/core/workflow/node/constant.ts`
If you are not adding a new node type, this file does not need to change.
```typescript
export enum FlowNodeTypeEnum {
  // ... existing types
  yourNodeType = 'yourNodeType', // New node type
}
```
### Step 3: Create the node template (optional)
**File**: `packages/global/core/workflow/template/system/interactive/yourNode.ts`
```typescript
import { i18nT } from '../../../../../../web/i18n/utils';
import {
FlowNodeTemplateTypeEnum,
NodeInputKeyEnum,
NodeOutputKeyEnum,
WorkflowIOValueTypeEnum
} from '../../../constants';
import {
FlowNodeInputTypeEnum,
FlowNodeOutputTypeEnum,
FlowNodeTypeEnum
} from '../../../node/constant';
import { type FlowNodeTemplateType } from '../../../type/node';
export const YourNode: FlowNodeTemplateType = {
id: FlowNodeTypeEnum.yourNodeType,
templateType: FlowNodeTemplateTypeEnum.interactive,
flowNodeType: FlowNodeTypeEnum.yourNodeType,
showSourceHandle: true, // Whether to show the source handle
showTargetHandle: true, // Whether to show the target handle
avatar: 'core/workflow/template/yourNode',
name: i18nT('app:workflow.your_node'),
intro: i18nT('app:workflow.your_node_tip'),
isTool: true, // Mark as a tool node
inputs: [
{
key: NodeInputKeyEnum.description,
renderTypeList: [FlowNodeInputTypeEnum.textarea],
valueType: WorkflowIOValueTypeEnum.string,
label: i18nT('app:workflow.node_description'),
placeholder: i18nT('app:workflow.your_node_placeholder')
},
{
key: NodeInputKeyEnum.yourInputField,
renderTypeList: [FlowNodeInputTypeEnum.custom],
valueType: WorkflowIOValueTypeEnum.any,
label: '',
value: [] // Default value
}
],
outputs: [
{
id: NodeOutputKeyEnum.yourResult,
key: NodeOutputKeyEnum.yourResult,
required: true,
label: i18nT('workflow:your_result'),
valueType: WorkflowIOValueTypeEnum.object,
type: FlowNodeOutputTypeEnum.static
}
]
};
```
### Step 4: Create the node's execution logic (or extend an existing node that must handle the interaction)
**File**: `packages/service/core/workflow/dispatch/interactive/yourNode.ts`
```typescript
import { DispatchNodeResponseKeyEnum } from '@fastgpt/global/core/workflow/runtime/constants';
import type {
DispatchNodeResultType,
ModuleDispatchProps
} from '@fastgpt/global/core/workflow/runtime/type';
import type { NodeInputKeyEnum } from '@fastgpt/global/core/workflow/constants';
import { NodeOutputKeyEnum } from '@fastgpt/global/core/workflow/constants';
import type { YourInputItemType } from '@fastgpt/global/core/workflow/template/system/interactive/type';
import { chatValue2RuntimePrompt } from '@fastgpt/global/core/chat/adapt';
type Props = ModuleDispatchProps<{
[NodeInputKeyEnum.description]: string;
[NodeInputKeyEnum.yourInputField]: YourInputItemType[];
}>;
type YourNodeResponse = DispatchNodeResultType<{
[NodeOutputKeyEnum.yourResult]?: Record<string, any>;
}>;
export const dispatchYourNode = async (props: Props): Promise<YourNodeResponse> => {
const {
histories,
node,
params: { description, yourInputField },
query,
lastInteractive
} = props;
const { isEntry } = node;
// Phase 1: not an entry node, or the last interaction is a different type - return an interactive request
if (!isEntry || lastInteractive?.type !== 'yourNodeType') {
return {
[DispatchNodeResponseKeyEnum.interactive]: {
type: 'yourNodeType',
params: {
description,
yourInputField
}
}
};
}
// Phase 2: process the user's submitted data
node.isEntry = false; // Important: reset the entry flag
const { text } = chatValue2RuntimePrompt(query);
const userInputVal = (() => {
try {
return JSON.parse(text); // Parse according to your actual format
} catch (error) {
return {};
}
})();
return {
data: {
[NodeOutputKeyEnum.yourResult]: userInputVal
},
// Remove the current interaction's history records (the last 2 entries)
[DispatchNodeResponseKeyEnum.rewriteHistories]: histories.slice(0, -2),
[DispatchNodeResponseKeyEnum.toolResponses]: userInputVal,
[DispatchNodeResponseKeyEnum.nodeResponse]: {
yourResult: userInputVal
}
};
};
```
### Step 5: Register the node callback
**File**: `packages/service/core/workflow/dispatch/constants.ts`
```typescript
import { dispatchYourNode } from './interactive/yourNode';
export const callbackMap: Record<FlowNodeTypeEnum, any> = {
  // ... existing nodes
  [FlowNodeTypeEnum.yourNodeType]: dispatchYourNode,
};
```
### Step 6: Create the frontend render components
#### 6.1 Chat-view interactive component
**File**: `projects/app/src/components/core/chat/components/Interactive/InteractiveComponents.tsx`
```typescript
export const YourNodeComponent = React.memo(function YourNodeComponent({
interactiveParams: { description, yourInputField, submitted },
defaultValues = {},
SubmitButton
}: {
interactiveParams: YourInteractiveNode['params'];
defaultValues?: Record<string, any>;
SubmitButton: (e: { onSubmit: UseFormHandleSubmit<Record<string, any>> }) => React.JSX.Element;
}) {
const { handleSubmit, control } = useForm({
defaultValues
});
return (
<Box>
<DescriptionBox description={description} />
<Flex flexDirection={'column'} gap={3}>
{yourInputField.map((input) => (
<Box key={input.key}>
{/* Render your input component */}
<Controller
control={control}
name={input.key}
render={({ field: { onChange, value } }) => (
<YourInputComponent
value={value}
onChange={onChange}
isDisabled={submitted}
/>
)}
/>
</Box>
))}
</Flex>
{!submitted && (
<Flex justifyContent={'flex-end'} mt={4}>
<SubmitButton onSubmit={handleSubmit} />
</Flex>
)}
</Box>
);
});
```
#### 6.2 Workflow editor node component
**File**: `projects/app/src/pageComponents/app/detail/WorkflowComponents/Flow/nodes/NodeYourNode.tsx`
```typescript
import React, { useMemo } from 'react';
import { type NodeProps } from 'reactflow';
import { Box, Button } from '@chakra-ui/react';
import NodeCard from './render/NodeCard';
import { type FlowNodeItemType } from '@fastgpt/global/core/workflow/type/node.d';
import Container from '../components/Container';
import RenderInput from './render/RenderInput';
import { NodeInputKeyEnum } from '@fastgpt/global/core/workflow/constants';
import { useTranslation } from 'next-i18next';
import { type FlowNodeInputItemType } from '@fastgpt/global/core/workflow/type/io.d';
import { useContextSelector } from 'use-context-selector';
import IOTitle from '../components/IOTitle';
import RenderOutput from './render/RenderOutput';
import { WorkflowActionsContext } from '../../context/workflowActionsContext';
const NodeYourNode = ({ data, selected }: NodeProps<FlowNodeItemType>) => {
const { t } = useTranslation();
const { nodeId, inputs, outputs } = data;
const onChangeNode = useContextSelector(WorkflowActionsContext, (v) => v.onChangeNode);
const CustomComponent = useMemo(
() => ({
[NodeInputKeyEnum.yourInputField]: (v: FlowNodeInputItemType) => {
// Custom render logic
return (
<Box>
{/* Your custom UI */}
</Box>
);
}
}),
[nodeId, onChangeNode, t]
);
return (
<NodeCard minW={'400px'} selected={selected} {...data}>
<Container>
<RenderInput nodeId={nodeId} flowInputList={inputs} CustomComponent={CustomComponent} />
</Container>
<Container>
<IOTitle text={t('common:Output')} />
<RenderOutput nodeId={nodeId} flowOutputList={outputs} />
</Container>
</NodeCard>
);
};
export default React.memo(NodeYourNode);
```
### Step 7: Register the node component
Add your node component to the node registry (the exact location depends on the project configuration).
### Step 8: Add internationalization
**File**: `packages/web/i18n/zh-CN/app.json` and the other language files
```json
{
  "workflow": {
    "your_node": "你的节点名称",
    "your_node_tip": "节点功能说明",
    "your_node_placeholder": "提示文本"
  }
}
```
### Step 9: Adjust the chat-record saving logic
**File**: `FastGPT/packages/service/core/chat/saveChat.ts`
Modify the `updateInteractiveChat` method to support the new interaction type.
### Step 10: Get/set interactive state from chat history
**File**: `FastGPT/projects/app/src/components/core/chat/ChatContainer/ChatBox/utils.ts`
**File**: `FastGPT/packages/global/core/workflow/runtime/utils.ts`
Adjust the `setInteractiveResultToHistories`, `getInteractiveByHistories`, and `getLastInteractiveValue` methods.
## Key Considerations
### 1. isEntry flag management
Interactive nodes must keep the `isEntry` flag valid when the workflow resumes:
```typescript
// In packages/service/core/workflow/dispatch/index.ts
// Make sure your node type is added to the whitelist
if (
  item.flowNodeType !== FlowNodeTypeEnum.userSelect &&
  item.flowNodeType !== FlowNodeTypeEnum.formInput &&
  item.flowNodeType !== FlowNodeTypeEnum.yourNodeType // New
) {
  item.isEntry = false;
}
```
### 2. Interactive response flow
Interactive nodes execute in two phases:
1. **First execution**: return an `interactive` response and pause the workflow
2. **Second execution**: receive the user's input and continue the workflow
```typescript
// Phase 1
if (!isEntry || lastInteractive?.type !== 'yourNodeType') {
  return {
    [DispatchNodeResponseKeyEnum.interactive]: {
      type: 'yourNodeType',
      params: { /* ... */ }
    }
  };
}
// Phase 2
node.isEntry = false; // Important! Reset the flag
// Process the user input...
```
### 3. History management
Interactive nodes must handle chat history correctly:
```typescript
return {
  // Remove the interaction's history records (user question + system response)
  [DispatchNodeResponseKeyEnum.rewriteHistories]: histories.slice(0, -2),
  // ... other return values
};
```
### 4. Skip node queue
When an interactive node triggers, the system saves `skipNodeQueue` so that already-processed nodes can be skipped on resume.
### 5. Tool-call support
If the node needs to be usable in tool calls, set `isTool: true`.
## Test Checklist
After development, test the following scenarios:
- [ ] The node displays correctly in the workflow editor
- [ ] Node configuration saves and loads correctly
- [ ] The interactive request is sent to the frontend correctly
- [ ] The frontend component renders the interaction UI correctly
- [ ] User input is passed back to the backend correctly
- [ ] The workflow resumes and continues correctly
- [ ] Chat history updates correctly
- [ ] Node outputs connect correctly to downstream nodes
- [ ] Error cases are handled correctly
- [ ] Multi-language support is complete
## Reference Implementations
The following existing implementations are good references:
1. **Simple single choice**: the `userSelect` node
   - Type definition: `packages/global/core/workflow/template/system/interactive/type.d.ts:48-55`
   - Execution logic: `packages/service/core/workflow/dispatch/interactive/userSelect.ts`
   - Frontend component: `projects/app/src/components/core/chat/components/Interactive/InteractiveComponents.tsx:29-63`
2. **Complex form**: the `formInput` node
   - Type definition: `packages/global/core/workflow/template/system/interactive/type.d.ts:57-82`
   - Execution logic: `packages/service/core/workflow/dispatch/interactive/formInput.ts`
   - Frontend component: `projects/app/src/components/core/chat/components/Interactive/InteractiveComponents.tsx:65-126`
## FAQ
### Q: My interactive node executes twice?
A: This is expected. The first execution returns the interactive request; the second processes the user input. Make sure to set `node.isEntry = false` during the second execution.
### Q: The workflow does not continue after resuming?
A: Check whether your node type is in the `isEntry` whitelist (dispatch/index.ts:1013-1018).
### Q: The user input has the wrong format?
A: Check the return value of `chatValue2RuntimePrompt` and parse it according to your data format.
### Q: How do I chain multiple interactive nodes?
A: Each interactive node pauses the workflow; once the user completes it, execution automatically continues to the next node.
## File Checklist
Developing a new interactive node touches the following files:
### Backend core files
1. `packages/global/core/workflow/template/system/interactive/type.d.ts` - Type definitions
2. `packages/global/core/workflow/node/constant.ts` - Node enum
3. `packages/global/core/workflow/template/system/interactive/yourNode.ts` - Node template
4. `packages/service/core/workflow/dispatch/interactive/yourNode.ts` - Execution logic
5. `packages/service/core/workflow/dispatch/constants.ts` - Callback registration
6. `packages/service/core/workflow/dispatch/index.ts` - isEntry whitelist
### Frontend component files
7. `projects/app/src/components/core/chat/components/Interactive/InteractiveComponents.tsx` - Chat interactive component
8. `projects/app/src/pageComponents/app/detail/WorkflowComponents/Flow/nodes/NodeYourNode.tsx` - Workflow editor component
### Internationalization files
9. `packages/web/i18n/zh-CN/app.json` - Chinese translations
10. `packages/web/i18n/en/app.json` - English translations
11. `packages/web/i18n/zh-Hant/app.json` - Traditional Chinese translations
## Appendix: Key Input/Output Key Definitions
If new input/output keys are needed, define them in:
**File**: `packages/global/core/workflow/constants.ts`
```typescript
export enum NodeInputKeyEnum {
  // ... existing keys
  yourInputKey = 'yourInputKey',
}
export enum NodeOutputKeyEnum {
  // ... existing keys
  yourOutputKey = 'yourOutputKey',
}
```


@@ -1,672 +0,0 @@
---
name: workflow-stop-design
description: Design for the workflow stop/pause logic.
---
## 1. Redis State Management
### 1.1 Status key design
**Redis key structure:**
```typescript
// Key format: agent_runtime_stopping:{appId}:{chatId}
const WORKFLOW_STATUS_PREFIX = 'agent_runtime_stopping';
type WorkflowStatusKey = `${typeof WORKFLOW_STATUS_PREFIX}:${string}:${string}`;
// Example: agent_runtime_stopping:app_123456:chat_789012
```
**Status value design:**
- **Key exists (value 1)**: the workflow should stop
- **Key absent**: the workflow runs normally
- **Simplification**: no status enum; only the key's presence is checked
**Parameter type:**
```typescript
type WorkflowStatusParams = {
  appId: string;
  chatId: string;
};
```
### 1.2 Status lifecycle
**State transitions:**
```
Running (no key) → Stopping (key exists) → Done (key deleted)
```
**TTL settings:**
- **Stop-flag TTL**: 60 seconds
  - Rationale: avoids key leaks in unexpected situations
  - In the normal case, the key is deleted when the workflow completes
- **After workflow completion**: delete the Redis key directly
  - Rationale: no terminal state needs to be kept, which reduces Redis memory usage
### 1.3 核心函数说明
**1. setAgentRuntimeStop**
- **功能**: 设置停止标志
- **参数**: `{ appId, chatId }`
- **实现**: 使用 `SETEX` 命令,设置键值为 1,TTL 60 秒
**2. shouldWorkflowStop**
- **功能**: 检查工作流是否应该停止
- **参数**: `{ appId, chatId }`
- **返回**: `Promise<boolean>` - true=应该停止, false=继续运行
- **实现**: GET 命令获取键值,存在则返回 true
**3. delAgentRuntimeStopSign**
- **功能**: 删除停止标志
- **参数**: `{ appId, chatId }`
- **实现**: DEL 命令删除键
**4. waitForWorkflowComplete**
- **功能**: 等待工作流完成(停止标志被删除)
- **参数**: `{ appId, chatId, timeout?, pollInterval? }`
- **实现**: 轮询检查停止标志是否被删除,超时返回
### 1.4 边界情况处理
**1. Redis 操作失败**
- **错误处理**: 查询与删除类 Redis 操作均包含 `.catch()` 错误处理;设置停止标志失败则直接抛错,由接口层统一返回错误
- **降级策略**:
- `shouldWorkflowStop`: 出错时返回 `false` (认为不需要停止,继续运行)
- `delAgentRuntimeStopSign`: 出错时记录错误日志,但不影响主流程
- **设计原因**: Redis 异常不应阻塞工作流运行,降级到继续执行策略
**2. TTL 自动清理**
- **TTL 设置**: 60 秒
- **清理时机**: Redis 自动清理过期键
- **设计原因**:
- 避免因异常情况导致的 Redis 键泄漏
- 自动清理减少手动维护成本
- 60 秒足够大多数工作流完成停止操作
**3. stop 接口等待超时**
- **超时时间**: 5 秒
- **超时策略**: `waitForWorkflowComplete` 在 5 秒内轮询检查停止标志是否被删除
- **超时处理**: 5 秒后直接返回,不影响工作流继续执行
- **设计原因**:
- 避免前端长时间等待
- 5 秒足够大多数节点完成当前操作
- 用户体验优先,超时后前端可选择重试或放弃
**4. 并发停止请求**
- **处理方式**: 多次调用 `setAgentRuntimeStop` 是安全的,Redis SETEX 是幂等操作
- **设计原因**: 避免用户多次点击停止按钮导致的问题
---
## 2. Redis 工具函数实现
**位置**: `packages/service/core/workflow/dispatch/workflowStatus.ts`
```typescript
import { addLog } from '../../../common/system/log';
import { getGlobalRedisConnection } from '../../../common/redis/index';
import { delay } from '@fastgpt/global/common/system/utils';

const WORKFLOW_STATUS_PREFIX = 'agent_runtime_stopping';
const TTL = 60; // 60秒

export type WorkflowStatusParams = {
  appId: string;
  chatId: string;
};

// 获取工作流状态键
export const getRuntimeStatusKey = (params: WorkflowStatusParams): string => {
  return `${WORKFLOW_STATUS_PREFIX}:${params.appId}:${params.chatId}`;
};

// 设置停止标志
export const setAgentRuntimeStop = async (params: WorkflowStatusParams): Promise<void> => {
  const redis = getGlobalRedisConnection();
  const key = getRuntimeStatusKey(params);
  await redis.setex(key, TTL, 1);
};

// 删除停止标志
export const delAgentRuntimeStopSign = async (params: WorkflowStatusParams): Promise<void> => {
  const redis = getGlobalRedisConnection();
  const key = getRuntimeStatusKey(params);
  await redis.del(key).catch((err) => {
    addLog.error(`[Agent Runtime Stop] Delete stop sign error`, err);
  });
};

// 检查工作流是否应该停止
export const shouldWorkflowStop = (params: WorkflowStatusParams): Promise<boolean> => {
  const redis = getGlobalRedisConnection();
  const key = getRuntimeStatusKey(params);
  return redis
    .get(key)
    .then((res) => !!res)
    .catch(() => false);
};

/**
 * 等待工作流完成(停止标志被删除)
 * @param params 工作流参数
 * @param timeout 超时时间(毫秒),默认5秒
 * @param pollInterval 轮询间隔(毫秒),默认50毫秒
 */
export const waitForWorkflowComplete = async ({
  appId,
  chatId,
  timeout = 5000,
  pollInterval = 50
}: {
  appId: string;
  chatId: string;
  timeout?: number;
  pollInterval?: number;
}) => {
  const startTime = Date.now();

  while (Date.now() - startTime < timeout) {
    const sign = await shouldWorkflowStop({ appId, chatId });

    // 如果停止标志已被删除,说明工作流已完成
    if (!sign) {
      return;
    }

    // 等待下一次轮询
    await delay(pollInterval);
  }

  // 超时后直接返回
  return;
};
```
**测试用例位置**: `test/cases/service/core/app/workflow/workflowStatus.test.ts`
```typescript
import { describe, test, expect, beforeEach } from 'vitest';
import {
  setAgentRuntimeStop,
  delAgentRuntimeStopSign,
  shouldWorkflowStop,
  waitForWorkflowComplete
} from '@fastgpt/service/core/workflow/dispatch/workflowStatus';

describe('Workflow Status Redis Functions', () => {
  const testAppId = 'test_app_123';
  const testChatId = 'test_chat_456';

  beforeEach(async () => {
    // 清理测试数据
    await delAgentRuntimeStopSign({ appId: testAppId, chatId: testChatId });
  });

  test('should set stopping sign', async () => {
    await setAgentRuntimeStop({ appId: testAppId, chatId: testChatId });
    const shouldStop = await shouldWorkflowStop({ appId: testAppId, chatId: testChatId });
    expect(shouldStop).toBe(true);
  });

  test('should return false for non-existent status', async () => {
    const shouldStop = await shouldWorkflowStop({ appId: testAppId, chatId: testChatId });
    expect(shouldStop).toBe(false);
  });

  test('should return false after deleting stop sign', async () => {
    await setAgentRuntimeStop({ appId: testAppId, chatId: testChatId });
    await delAgentRuntimeStopSign({ appId: testAppId, chatId: testChatId });
    const shouldStop = await shouldWorkflowStop({ appId: testAppId, chatId: testChatId });
    expect(shouldStop).toBe(false);
  });

  test('should wait for workflow completion', async () => {
    // 设置初始停止标志
    await setAgentRuntimeStop({ appId: testAppId, chatId: testChatId });

    // 模拟异步完成(删除停止标志)
    setTimeout(async () => {
      await delAgentRuntimeStopSign({ appId: testAppId, chatId: testChatId });
    }, 500);

    // 等待完成
    await waitForWorkflowComplete({
      appId: testAppId,
      chatId: testChatId,
      timeout: 2000
    });

    // 验证停止标志已被删除
    const shouldStop = await shouldWorkflowStop({ appId: testAppId, chatId: testChatId });
    expect(shouldStop).toBe(false);
  });

  test('should timeout when waiting too long', async () => {
    await setAgentRuntimeStop({ appId: testAppId, chatId: testChatId });

    // 等待超时(不删除标志)
    await waitForWorkflowComplete({
      appId: testAppId,
      chatId: testChatId,
      timeout: 100
    });

    // 验证停止标志仍然存在
    const shouldStop = await shouldWorkflowStop({ appId: testAppId, chatId: testChatId });
    expect(shouldStop).toBe(true);
  });

  test('should handle concurrent stop sign operations', async () => {
    // 并发设置停止标志
    await Promise.all([
      setAgentRuntimeStop({ appId: testAppId, chatId: testChatId }),
      setAgentRuntimeStop({ appId: testAppId, chatId: testChatId })
    ]);

    // 停止标志应该存在
    const shouldStop = await shouldWorkflowStop({ appId: testAppId, chatId: testChatId });
    expect(shouldStop).toBe(true);
  });
});
```
## 3. 工作流停止检测机制改造
### 3.1 修改位置
**文件**: `packages/service/core/workflow/dispatch/index.ts`
### 3.2 工作流启动时的停止检测机制
**改造点 1: 停止检测逻辑 (行 196-216)**
使用内存变量 + 定时轮询 Redis 的方式:
```typescript
import { delAgentRuntimeStopSign, shouldWorkflowStop } from './workflowStatus';

// 初始化停止检测
let stopping = false;

const checkIsStopping = (): boolean => {
  if (apiVersion === 'v2') {
    return stopping;
  }
  if (apiVersion === 'v1') {
    if (!res) return false;
    return res.closed || !!res.errored;
  }
  return false;
};

// v2 版本: 启动定时器定期检查 Redis
const checkStoppingTimer =
  apiVersion === 'v2'
    ? setInterval(async () => {
        stopping = await shouldWorkflowStop({
          appId: runningAppInfo.id,
          chatId
        });
      }, 100)
    : undefined;
```
**设计要点**:
- v2 版本使用内存变量 `stopping` + 100ms 定时器轮询 Redis
- v1 版本仍使用原有的 `res.closed/res.errored` 检测
- 轮询频率 100ms,平衡性能和响应速度
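上述"内存变量 + 定时轮询"的模式可以用一个自包含的小例子演示(用内存 Map 代替 Redis,轮询周期与节点耗时均为演示用的缩短值,并非真实实现):

```typescript
// 自包含示意:内存标志 + 定时轮询的优雅停止模式(Map 代替 Redis)
const fakeRedis = new Map<string, string>();
const shouldStop = async (key: string): Promise<boolean> => fakeRedis.has(key);

async function runWorkflow(key: string, nodeCount: number): Promise<number> {
  let stopping = false;
  // 对应设计中的 100ms 轮询定时器(演示用 10ms)
  const timer = setInterval(async () => {
    stopping = await shouldStop(key);
  }, 10);

  let executed = 0;
  try {
    for (let i = 0; i < nodeCount; i++) {
      if (stopping) break; // 对应 checkNodeCanRun 中的停止检测
      await new Promise((r) => setTimeout(r, 20)); // 模拟单个节点执行
      executed++;
    }
  } finally {
    clearInterval(timer); // 对应 finally 中清理定时器
    fakeRedis.delete(key); // 对应 finally 中删除停止标志
  }
  return executed;
}
```

调用方先启动 `runWorkflow`,再在另一处向 `fakeRedis` 写入停止键,即可观察到工作流在下一个节点边界提前结束,并且退出时清理了标志。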
**改造点 2: 工作流完成后清理 (行 232-249)**
```typescript
return runWorkflow({
  ...data,
  checkIsStopping, // 传递检测函数
  query,
  histories,
  // ... 其他参数
}).finally(async () => {
  // 清理定时器
  if (streamCheckTimer) {
    clearInterval(streamCheckTimer);
  }
  if (checkStoppingTimer) {
    clearInterval(checkStoppingTimer);
  }

  // Close mcpClient connections
  Object.values(mcpClientMemory).forEach((client) => {
    client.closeConnection();
  });

  // 工作流完成后删除 Redis 记录
  await delAgentRuntimeStopSign({
    appId: runningAppInfo.id,
    chatId
  });
});
```
### 3.3 节点执行前的停止检测
**位置**: `packages/service/core/workflow/dispatch/index.ts:861-868`
`checkNodeCanRun` 方法中,每个节点执行前检查:
```typescript
private async checkNodeCanRun(
  node: RuntimeNodeItemType,
  skippedNodeIdList = new Set<string>()
) {
  // ... 其他检查逻辑 ...

  // Check queue status
  if (data.maxRunTimes <= 0) {
    addLog.error('Max run times is 0', {
      appId: data.runningAppInfo.id
    });
    return;
  }

  // 停止检测
  if (checkIsStopping()) {
    addLog.warn('Workflow stopped', {
      appId: data.runningAppInfo.id,
      nodeId: node.nodeId,
      nodeName: node.name
    });
    return;
  }

  // ... 执行节点逻辑 ...
}
```
**说明**:
- 直接调用 `checkIsStopping()` 同步方法
- 内部会检查内存变量 `stopping`
- 定时器每 100ms 更新一次该变量
- 检测到停止时记录日志并直接返回,不执行节点
## 4. v2/chat/stop 接口设计
### 4.1 接口规范
**接口路径**: `/api/v2/chat/stop`
**Schema 位置**: `packages/global/openapi/core/chat/api.ts`
**接口文档位置**: `packages/global/openapi/core/chat/index.ts`
**请求方法**: POST
**请求参数**:
```typescript
// packages/global/openapi/core/chat/api.ts
export const StopV2ChatSchema = z.object({
  appId: ObjectIdSchema.describe('应用ID'),
  chatId: z.string().min(1).describe('对话ID'),
  outLinkAuthData: OutLinkChatAuthSchema.optional().describe('外链鉴权数据')
});

export type StopV2ChatParams = z.infer<typeof StopV2ChatSchema>;
```
**响应格式**:
```typescript
export const StopV2ChatResponseSchema = z.object({
  success: z.boolean().describe('是否成功停止')
});

export type StopV2ChatResponse = z.infer<typeof StopV2ChatResponseSchema>;
```
### 4.2 接口实现
**文件位置**: `projects/app/src/pages/api/v2/chat/stop.ts`
```typescript
import type { NextApiRequest, NextApiResponse } from 'next';
import { NextAPI } from '@/service/middleware/entry';
import { authChatCrud } from '@/service/support/permission/auth/chat';
import {
  setAgentRuntimeStop,
  waitForWorkflowComplete
} from '@fastgpt/service/core/workflow/dispatch/workflowStatus';
import { StopV2ChatSchema, type StopV2ChatResponse } from '@fastgpt/global/openapi/core/chat/api';

async function handler(req: NextApiRequest, res: NextApiResponse): Promise<StopV2ChatResponse> {
  const { appId, chatId, outLinkAuthData } = StopV2ChatSchema.parse(req.body);

  // 鉴权 (复用聊天 CRUD 鉴权)
  await authChatCrud({
    req,
    authToken: true,
    authApiKey: true,
    appId,
    chatId,
    ...outLinkAuthData
  });

  // 设置停止标志
  await setAgentRuntimeStop({
    appId,
    chatId
  });

  // 等待工作流完成 (最多等待 5 秒)
  await waitForWorkflowComplete({ appId, chatId, timeout: 5000 });

  return {
    success: true
  };
}

export default NextAPI(handler);
```
**接口文档** (`packages/global/openapi/core/chat/index.ts`):
```typescript
export const ChatPath: OpenAPIPath = {
  // ... 其他路径
  '/v2/chat/stop': {
    post: {
      summary: '停止 Agent 运行',
      description: `优雅停止正在运行的 Agent,会尝试等待当前节点结束后再返回,最长等待 5s;超过 5s 仍未结束,也会返回成功。
LLM 节点的流式输出会被同时终止,但 HTTP 请求等可能长时间运行的节点不会被中断。`,
      tags: [TagsMap.chatPage],
      requestBody: {
        content: {
          'application/json': {
            schema: StopV2ChatSchema
          }
        }
      },
      responses: {
        200: {
          description: '成功停止工作流',
          content: {
            'application/json': {
              schema: StopV2ChatResponseSchema
            }
          }
        }
      }
    }
  }
};
```
**说明**:
- 接口使用 `authChatCrud` 进行鉴权,支持 Token 和 API Key
- 支持分享链接和团队空间的鉴权数据
- 设置停止标志后等待最多 5 秒
- 无论是否超时,都返回 `success: true`
## 5. 前端改造
由于当前代码已经能够正常工作,且 v2 版本的后端已经实现了基于 Redis 的停止机制,前端可以保持现有的简单实现:
**保持现有实现的原因**:
1. 后端已经通过定时器轮询 Redis 实现了停止检测
2. 前端调用 `abort()` 后,后端会在下个检测周期(100ms内)发现停止标志
3. 简化前端逻辑,避免增加复杂性
4. 用户体验上,立即中断连接响应更快
**可选的增强方案**:
如果需要在前端显示更详细的停止状态,可以添加 API 客户端函数:
**文件位置**: `projects/app/src/web/core/chat/api.ts`
```typescript
import { POST } from '@/web/common/api/request';
import type { StopV2ChatParams, StopV2ChatResponse } from '@fastgpt/global/openapi/core/chat/api';

/**
 * 停止 v2 版本工作流运行
 */
export const stopV2Chat = (data: StopV2ChatParams) =>
  POST<StopV2ChatResponse>('/api/v2/chat/stop', data);
```
**增强的 abortRequest 函数**:
```typescript
/* Abort chat completions, questionGuide */
const abortRequest = useMemoizedFn(async (reason: string = 'stop') => {
  // 先调用 abort 中断连接
  chatController.current?.abort(new Error(reason));
  questionGuideController.current?.abort(new Error(reason));
  pluginController.current?.abort(new Error(reason));

  // v2 版本: 可选地通知后端优雅停止
  if (chatBoxData?.app?.version === 'v2' && appId && chatId) {
    try {
      await stopV2Chat({
        appId,
        chatId,
        outLinkAuthData
      });
    } catch (error) {
      // 静默失败,不影响用户体验
      console.warn('Failed to notify backend to stop workflow', error);
    }
  }
});
```
**建议**:
- **推荐**: 保持当前简单实现,后端已经足够健壮
- **可选**: 如果需要更精确的停止状态追踪,可以实现上述增强方案
## 6. 完整调用流程
### 6.1 正常停止流程
```
用户点击停止按钮
前端: abortRequest()
前端: chatController.abort() [立即中断 HTTP 连接]
[可选] 前端: POST /api/v2/chat/stop
后端: setAgentRuntimeStop(appId, chatId) [设置停止标志]
后端: 定时器检测到 Redis 停止标志,更新内存变量 stopping = true
后端: 下个节点执行前 checkIsStopping() 返回 true
后端: 停止处理新节点,记录日志
后端: 工作流 finally 块删除 Redis 停止标志
[可选] 后端: waitForWorkflowComplete() 检测到停止标志被删除
[可选] 前端: 显示停止成功提示
```
### 6.2 超时流程
```
[可选] 前端: POST /api/v2/chat/stop
后端: setAgentRuntimeStop(appId, chatId)
后端: waitForWorkflowComplete(timeout=5s)
后端: 5秒后停止标志仍存在
后端: 返回成功响应 (不区分超时)
[可选] 前端: 显示成功提示
后端: 工作流继续运行,最终完成后删除停止标志
```
### 6.3 工作流自然完成流程
```
工作流运行中
所有节点执行完成
dispatchWorkFlow.finally()
删除 Redis 停止标志
清理定时器
60秒 TTL 确保即使删除失败也会自动清理
```
### 6.4 时序说明
**关键时间点**:
- **100ms**: 后端定时器检查 Redis 停止标志的频率
- **5s**: stop 接口等待工作流完成的超时时间
- **60s**: Redis 键的 TTL,自动清理时间
**响应时间**:
- 用户点击停止 → HTTP 连接中断: **立即** (前端 abort)
- 停止标志写入 Redis: **< 50ms** (Redis SETEX 操作)
- 后端检测到停止: **< 100ms** (定时器轮询周期)
- 当前节点停止执行: **取决于节点类型**
- LLM 流式输出: **立即**中断流
- HTTP 请求节点: **等待请求完成**
- 其他节点: **等待当前操作完成**
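按上述时间参数,可以粗略推算最坏情况下"点击停止 → 节点停止调度"的延迟(示意计算,数值取自上文给出的经验值,并非精确模型):

```typescript
// 粗略估算最坏情况下的停止检测延迟(毫秒)
const REDIS_WRITE_MS = 50; // 停止标志写入 Redis 的上限估计
const POLL_INTERVAL_MS = 100; // 后端定时器轮询周期

// currentNodeRemainingMs: 当前节点剩余执行时间
// (LLM 流式输出近似为 0,HTTP 请求节点可能很长)
const estimateWorstStopLatencyMs = (currentNodeRemainingMs: number): number =>
  REDIS_WRITE_MS + POLL_INTERVAL_MS + currentNodeRemainingMs;
```

例如 LLM 流式节点(剩余时间近似 0)约在 150ms 内停止调度,而一个还需 3s 的 HTTP 节点则约需 3.15s。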
## 7. 测试策略
### 7.1 单元测试
**Redis 工具函数测试**:
- `setAgentRuntimeStop` / `shouldWorkflowStop` 基本功能
- `delAgentRuntimeStopSign` 删除功能
- `waitForWorkflowComplete` 等待机制和超时
- 并发操作安全性
**文件位置**: `test/cases/service/core/app/workflow/workflowStatus.test.ts`
**测试用例**:
```typescript
describe('Workflow Status Redis Functions', () => {
  test('should set stopping sign')
  test('should return false for non-existent status')
  test('should return false after deleting stop sign')
  test('should wait for workflow completion')
  test('should timeout when waiting too long')
  test('should handle concurrent stop sign operations')
});
```

---
name: create-skill-file
description: Guides Claude in creating well-structured SKILL.md files following best practices. Provides clear guidelines for naming, structure, and content organization to make skills easy to discover and execute.
---
# Claude Agent Skill 编写规范
> 如何创建高质量的 SKILL.md 文件
## 目录
- [快速开始](#快速开始)
- [核心原则](#核心原则)
- [文件结构规范](#文件结构规范)
- [命名和描述规范](#命名和描述规范)
- [内容编写指南](#内容编写指南)
- [质量检查清单](#质量检查清单)
---
## 快速开始
### 3步创建 Skill
**第1步: 创建目录**
```bash
mkdir -p .claude/skill/your-skill-name
cd .claude/skill/your-skill-name
```
**第2步: 创建 SKILL.md**
```markdown
---
name: your-skill-name
description: Brief description with trigger keywords and scenarios
---
# Your Skill Title
## When to Use This Skill
- User asks to [specific scenario]
- User mentions "[keyword]"
## How It Works
1. Step 1: [Action]
2. Step 2: [Action]
## Examples
**Input**: User request
**Output**: Expected result
```
**第3步: 测试**
- 在对话中使用 description 中的关键词触发
- 观察 Claude 是否正确执行
- 根据效果调整
---
## 核心原则
### 1. 保持简洁
只添加 Claude **不知道**的新知识:
- ✅ 项目特定的工作流程
- ✅ 特殊的命名规范或格式要求
- ✅ 自定义工具和脚本的使用方法
- ❌ 通用编程知识
- ❌ 显而易见的步骤
**示例对比**:
```markdown
# ❌ 过度详细
1. 创建 Python 文件
2. 导入必要的库
3. 定义函数
4. 编写主程序逻辑
# ✅ 简洁有效
使用 `scripts/api_client.py` 调用内部 API。
请求头必须包含 `X-Internal-Token`(从环境变量 `INTERNAL_API_KEY` 获取)。
```
### 2. 设定合适的自由度
| 自由度 | 适用场景 | 编写方式 |
|--------|---------|---------|
| **高** | 需要创造性、多种解决方案 | 提供指导原则,不限定具体步骤 |
| **中** | 有推荐模式但允许变化 | 提供参数化示例和默认流程 |
| **低** | 容易出错、需严格执行 | 提供详细的分步指令或脚本 |
**判断标准**:
- 任务是否有明确的"正确答案"? → 低自由度
- 是否需要适应不同场景? → 高自由度
- 错误的代价有多大? → 代价高则用低自由度
### 3. 渐进式披露
将复杂内容分层组织:
```
SKILL.md (主文档, 200-500行)
├── reference.md (详细文档)
├── examples.md (完整示例)
└── scripts/ (可执行脚本)
```
**规则**:
- SKILL.md 超过 500行 → 拆分子文件
- 子文件超过 100行 → 添加目录
- 引用深度 ≤ 1层
---
## 文件结构规范
### YAML Frontmatter
```yaml
---
name: skill-name-here
description: Clear description of what this skill does and when to activate it
---
```
**字段规范**:
| 字段 | 要求 | 说明 |
|------|------|------|
| `name` | 小写字母、数字、短横线,≤64字符 | 必须与目录名一致 |
| `description` | 纯文本,≤1024字符 | 用于检索和激活 |
**命名禁忌**:
- ❌ XML 标签、保留字(`anthropic`, `claude`)
- ❌ 模糊词汇(`helper`, `utility`, `manager`)
- ❌ 空格或下划线(用短横线 `-`)
**Description 技巧**:
```yaml
# ❌ 过于泛化
description: Helps with code tasks
# ✅ 具体且包含关键词
description: Processes CSV files and generates Excel reports with charts. Use when user asks to convert data formats or create visual reports.
# ✅ 说明触发场景
description: Analyzes Python code for security vulnerabilities using bandit. Activates when user mentions "security audit" or "vulnerability scan".
```
### 目录组织
**基础结构**(简单 Skill):
```
skill-name/
└── SKILL.md
```
**标准结构**(推荐):
```
skill-name/
├── SKILL.md
├── templates/
│   └── template.md
└── scripts/
    └── script.py
```
---
## 命名和描述规范
### Skill 命名
**推荐格式**: 动名词形式 (verb-ing + noun)
```
✅ 好的命名:
- processing-csv-files
- generating-api-docs
- managing-database-migrations
❌ 不好的命名:
- csv (过于简短)
- data_processor (使用下划线)
- helper (过于模糊)
```
### Description 编写
**必须使用第三人称**:
```yaml
# ❌ 错误
description: I help you process PDFs
# ✅ 正确
description: Processes PDF documents and extracts structured data
```
**4C 原则**:
- **Clear** (清晰): 避免术语和模糊词汇
- **Concise** (简洁): 1-2句话说明核心功能
- **Contextual** (上下文): 说明适用场景
- **Complete** (完整): 功能 + 触发条件
---
## 内容编写指南
### "When to Use" 章节
明确说明触发场景:
```markdown
## When to Use This Skill
- User asks to analyze Python code for type errors
- User mentions "mypy" or "type checking"
- User is working in a Python project with type hints
- User needs to add type annotations
```
**模式**:
- 直接请求: "User asks to X"
- 关键词: "User mentions 'keyword'"
- 上下文: "User is working with X"
- 任务类型: "User needs to X"
### 工作流设计
**简单线性流程**:
```markdown
## How It Works
1. Scan the project for all `.py` files
2. Run `mypy --strict` on each file
3. Parse error output and categorize by severity
4. Generate summary report with fix suggestions
```
**条件分支流程**:
```markdown
## Workflow
1. **Check project type**
- If Django → Use `django-stubs` config
- If Flask → Use `flask-stubs` config
- Otherwise → Use default mypy config
2. **Run type checking**
- If errors found → Proceed to step 3
- If no errors → Report success and exit
```
**Checklist 模式**(验证型任务):
```markdown
## Pre-deployment Checklist
Execute in order. Stop if any step fails.
- [ ] Run tests: `npm test` (must pass)
- [ ] Build: `npm run build` (no errors)
- [ ] Check deps: `npm audit` (no critical vulnerabilities)
```
### 示例和模板
**输入-输出示例**:
```markdown
## Examples
### Example 1: Basic Check
**User Request**: "Check my code for type errors"
**Action**:
1. Scan for `.py` files
2. Run `mypy` on all files
**Output**:
Found 3 type errors in 2 files:
src/main.py:15: error: Missing return type
src/utils.py:42: error: Incompatible types
```
### 脚本集成
**何时使用脚本**:
- 简单命令 → 直接在 SKILL.md 中说明
- 复杂流程 → 提供独立脚本
**脚本编写规范**:
```python
#!/usr/bin/env python3
"""
Brief description of what this script does.

Usage:
    python script.py <arg> [--option value]
"""
import argparse
from pathlib import Path

DEFAULT_VALUE = 80  # Use constants, not magic numbers


def main() -> int:
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument("directory", help="Directory to process")
    parser.add_argument("--threshold", type=int, default=DEFAULT_VALUE)
    args = parser.parse_args()

    # Validate inputs
    if not Path(args.directory).is_dir():
        print(f"Error: {args.directory} not found")
        return 1

    # Execute (process 为脚本的业务逻辑函数,此处省略实现)
    result = process(args.directory, args.threshold)

    # Report
    print(f"Processed {result['count']} files")
    return 0


if __name__ == "__main__":
    exit(main())
```
**关键规范**:
- ✅ Shebang 行和 docstring
- ✅ 类型注解和常量
- ✅ 参数验证和错误处理
- ✅ 清晰的返回值(0=成功, 1=失败)
### 最佳实践
**Do**:
- ✅ 提供可执行的命令和脚本
- ✅ 包含输入-输出示例
- ✅ 说明验证标准和成功条件
- ✅ 包含 Do/Don't 清单
**Don't**:
- ❌ 包含 Claude 已知的通用知识
- ❌ 使用抽象描述而非具体步骤
- ❌ 遗漏错误处理指导
- ❌ 示例使用伪代码而非真实代码
---
## 质量检查清单
### 核心质量
- [ ] `name` 符合命名规范(小写、短横线、≤64字符)
- [ ] `description` 包含触发关键词和场景(≤1024字符)
- [ ] 名称与目录名一致
- [ ] 只包含 Claude 不知道的信息
- [ ] 没有冗余或重复内容
### 功能完整性
- [ ] 有"When to Use"章节,列出 3-5 个触发场景
- [ ] 有清晰的执行流程或步骤
- [ ] 至少 2-3 个完整示例
- [ ] 包含输入和预期输出
- [ ] 错误处理有指导
### 结构规范
- [ ] 章节组织清晰
- [ ] 超过 200行有目录导航
- [ ] 引用层级 ≤ 1层
- [ ] 所有路径使用正斜杠 `/`
- [ ] 术语使用一致
### 脚本和模板
- [ ] 脚本包含使用说明和参数文档
- [ ] 脚本有错误处理
- [ ] 避免魔法数字,使用配置
- [ ] 模板格式清晰易用
### 最终检查
- [ ] 通读全文,确保流畅易读
- [ ] 使用实际场景测试触发
- [ ] 长度适中(200-500行,或已拆分)
---
## 常见问题
**Q: Skill 多长才合适?**
- 最小: 50-100行
- 理想: 200-500行
- 最大: 500行(超过则拆分)
**Q: 如何让 Skill 更容易激活?**
- 在 `description` 中使用用户会说的关键词
- 说明具体场景("when user asks to X")
- 提及相关工具名称
**Q: 多个 Skill 功能重叠怎么办?**
- 使用更具体的 `description` 区分
- 在"When to Use"中说明关系
- 考虑合并为一个 Skill
**Q: Skill 需要维护吗?**
- 每季度审查一次,更新过时信息
- 根据使用反馈迭代
- 工具或 API 变更时及时更新
---
## 快速参考
### Frontmatter 模板
```yaml
---
name: skill-name
description: Brief description with trigger keywords
---
```
### 基础结构模板
```markdown
# Skill Title
## When to Use This Skill
- Scenario 1
- Scenario 2
## How It Works
1. Step 1
2. Step 2
## Examples
### Example 1
...
## References
- [Link](url)
```
---
## 相关资源
- [Claude Agent Skills 官方文档](https://docs.claude.com/en/docs/agents-and-tools/agent-skills)
- [Best Practices Checklist](https://docs.claude.com/en/docs/agents-and-tools/agent-skills/best-practices)
- [模板文件](templates/) - 开箱即用的模板
- [基础 skill 的模板](templates/basic-skill-template.md)
- [工作流 skill 的模板](templates/workflow-skill-template.md)
- [示例库](examples/) - 完整的 Skill 示例
- [优秀示例](examples/good-example.md)
- [常见错误示例](examples/bad-example.md)
---

# 不好的 Skill 示例与改进建议
本文档展示常见的 Skill 编写错误,并提供改进建议。
---
## 示例 1: 过于模糊的 Skill
### ❌ 不好的版本
```markdown
---
name: helper
description: Helps with code
---
# Code Helper
This skill helps you with coding tasks.
## Usage
Use this when you need help with code.
## How It Works
1. Analyzes your code
2. Provides suggestions
3. Helps improve it
```
### 问题分析
| 问题 | 说明 | 影响 |
|------|------|------|
| **模糊的名称** | "helper" 太泛化,没有说明具体做什么 | Claude 不知道何时激活 |
| **无关键词** | description 缺少具体触发词 | 用户很难激活这个 Skill |
| **无具体场景** | 没说明适用什么类型的代码 | 适用范围不清 |
| **抽象的步骤** | "Provides suggestions" 太模糊 | Claude 不知道具体做什么 |
| **无示例** | 没有实际例子 | 用户和 Claude 都不清楚预期输出 |
### ✅ 改进版本
```markdown
---
name: python-code-refactoring
description: Refactors Python code to improve readability and maintainability using standard patterns. Activates when user asks to clean up code, improve structure, or mentions refactoring. Focuses on function extraction, variable naming, and removing code smells.
---
# Python Code Refactoring Skill
Improves Python code quality through systematic refactoring.
## When to Use This Skill
- User asks to "refactor this code", "clean up this function", or "improve readability"
- User mentions "code smell", "technical debt", or "maintainability"
- User is working with Python code that has:
- Long functions (>50 lines)
- Nested conditionals (>3 levels)
- Repeated code patterns
- Unclear variable names
## How It Works
### 1. Identify Refactoring Opportunities
Scan code for common issues:
- Functions longer than 50 lines
- Repeated code blocks (DRY violations)
- Magic numbers without constants
- Unclear variable names (x, temp, data)
- Deep nesting (>3 levels)
### 2. Prioritize Changes
Focus on high-impact refactorings:
- **High**: Extract complex nested logic to functions
- **Medium**: Rename unclear variables
- **Low**: Minor style improvements
### 3. Apply Refactorings
**Extract Method**:
```python
# Before
def process_order(order):
    # 50 lines of validation logic
    # 30 lines of payment processing
    # 20 lines of email notification
    ...

# After
def process_order(order):
    validate_order(order)
    process_payment(order)
    send_confirmation_email(order)
```
**Replace Magic Numbers**:
```python
# Before
if user.age < 18:
    return False

# After
MINIMUM_AGE = 18
if user.age < MINIMUM_AGE:
    return False
```
**Simplify Conditionals**:
```python
# Before
if user.role == 'admin':
    return True
elif user.role == 'moderator':
    return True
elif user.role == 'editor':
    return True
else:
    return False

# After
PRIVILEGED_ROLES = {'admin', 'moderator', 'editor'}
return user.role in PRIVILEGED_ROLES
```
### 4. Verify Improvements
After refactoring:
- Run existing tests (all must pass)
- Check code length reduced
- Verify improved readability
## Example
**User Request**: "Refactor this function, it's too long"
```python
def process_user_registration(data):
    if not data.get('email'):
        return {'error': 'Email required'}
    if '@' not in data['email']:
        return {'error': 'Invalid email'}
    if not data.get('password'):
        return {'error': 'Password required'}
    if len(data['password']) < 8:
        return {'error': 'Password too short'}
    if not any(c.isupper() for c in data['password']):
        return {'error': 'Password needs uppercase'}
    existing = db.query(User).filter_by(email=data['email']).first()
    if existing:
        return {'error': 'Email already registered'}
    salt = bcrypt.gensalt()
    hashed = bcrypt.hashpw(data['password'].encode(), salt)
    user = User(email=data['email'], password_hash=hashed)
    db.add(user)
    db.commit()
    token = jwt.encode({'user_id': user.id}, SECRET_KEY)
    send_email(data['email'], 'Welcome!', 'Thanks for registering')
    return {'success': True, 'token': token}
```
**Refactored**:
```python
def process_user_registration(data):
    """Register new user with validation and email confirmation."""
    # Validation
    validation_error = validate_registration_data(data)
    if validation_error:
        return {'error': validation_error}

    # Check uniqueness
    if user_exists(data['email']):
        return {'error': 'Email already registered'}

    # Create user
    user = create_user(data['email'], data['password'])

    # Generate token
    token = generate_auth_token(user.id)

    # Send welcome email
    send_welcome_email(user.email)

    return {'success': True, 'token': token}


def validate_registration_data(data):
    """Validate registration data, return error message or None."""
    if not data.get('email'):
        return 'Email required'
    if '@' not in data['email']:
        return 'Invalid email'
    if not data.get('password'):
        return 'Password required'
    return validate_password_strength(data['password'])


def validate_password_strength(password):
    """Check password meets security requirements."""
    MIN_PASSWORD_LENGTH = 8
    if len(password) < MIN_PASSWORD_LENGTH:
        return f'Password must be at least {MIN_PASSWORD_LENGTH} characters'
    if not any(c.isupper() for c in password):
        return 'Password must contain uppercase letter'
    return None


def user_exists(email):
    """Check if user with given email already exists."""
    return db.query(User).filter_by(email=email).first() is not None


def create_user(email, password):
    """Create and save new user with hashed password."""
    salt = bcrypt.gensalt()
    hashed = bcrypt.hashpw(password.encode(), salt)
    user = User(email=email, password_hash=hashed)
    db.add(user)
    db.commit()
    return user


def generate_auth_token(user_id):
    """Generate JWT authentication token."""
    return jwt.encode({'user_id': user_id}, SECRET_KEY)


def send_welcome_email(email):
    """Send welcome email to new user."""
    send_email(email, 'Welcome!', 'Thanks for registering')
```
**Improvements**:
- ✅ Main function reduced from 20 lines to 15 lines
- ✅ Each function has single responsibility
- ✅ Magic number (8) extracted to constant
- ✅ All functions documented with docstrings
- ✅ Easier to test individual functions
- ✅ Easier to modify validation rules
## Best Practices
- ✅ Extract functions with clear names
- ✅ Use constants instead of magic numbers
- ✅ Keep functions under 30 lines
- ✅ Maximum nesting depth of 2-3 levels
- ✅ Write docstrings for extracted functions
```
### 改进要点
1. ✅ 具体的名称: `python-code-refactoring` 而非 `helper`
2. ✅ 详细的 description: 包含触发词和适用场景
3. ✅ 明确的触发条件: 列出具体的使用场景
4. ✅ 可执行的步骤: 每个步骤都有具体操作
5. ✅ 实际代码示例: 展示完整的重构过程
6. ✅ 具体的改进指标: 列出可验证的改进效果
---
## 示例 2: 过度冗长的 Skill
### ❌ 不好的版本
```markdown
---
name: python-basics
description: Teaches Python programming basics
---
# Python Basics
This skill helps you learn Python programming.
## Variables
In Python, you can create variables like this:
```python
x = 5
y = "hello"
z = 3.14
```
Python supports different data types:
- Integers (int): whole numbers like 1, 2, 3
- Floats (float): decimal numbers like 3.14, 2.5
- Strings (str): text like "hello", 'world'
- Booleans (bool): True or False
## Conditional Statements
You can use if statements to make decisions:
```python
if x > 0:
    print("Positive")
elif x < 0:
    print("Negative")
else:
    print("Zero")
```
The if statement checks a condition. If True, it runs the indented code.
The elif means "else if" and provides an alternative condition.
The else runs if none of the above conditions are True.
## Loops
Python has two main types of loops:
### For Loops
For loops iterate over a sequence:
```python
for i in range(5):
    print(i)
```
This prints numbers 0 through 4. The range() function generates numbers.
### While Loops
While loops continue while a condition is True:
```python
i = 0
while i < 5:
    print(i)
    i += 1
```
This does the same thing as the for loop above.
## Functions
Functions are reusable blocks of code:
```python
def greet(name):
    return f"Hello, {name}!"
```
The def keyword defines a function. The function name is greet.
It takes one parameter called name. The return statement sends back a value.
## Lists
Lists store multiple items:
```python
fruits = ["apple", "banana", "orange"]
```
You can access items by index:
```python
first_fruit = fruits[0] # "apple"
```
... [continues for 50 more sections about Python basics]
```
### 问题分析
| 问题 | 说明 | 影响 |
|------|------|------|
| **包含通用知识** | Python 基础知识 Claude 已经知道 | 浪费 token,增加检索成本 |
| **教程式内容** | 像教程而非工作指南 | Claude 不需要学习,需要的是工作指导 |
| **过度详细** | 解释显而易见的概念 | 信息过载,难以找到关键信息 |
| **缺少项目特定信息** | 没有项目相关的规范或约定 | 无法提供项目特定价值 |
### ✅ 改进版本
```markdown
---
name: project-python-conventions
description: Enforces Python coding conventions specific to this project. Activates when user writes Python code or asks about code style. Covers naming, imports, error handling, and project-specific patterns.
---
# Project Python Conventions
Project-specific Python coding standards and patterns.
## When to Use This Skill
- User is writing or reviewing Python code
- User asks about code style or conventions
- User needs guidance on project patterns
## Import Organization
Follow this order:
```python
# 1. Standard library
import os
import sys
from typing import Optional, List
# 2. Third-party packages
import numpy as np
from fastapi import FastAPI
# 3. Local application imports
from core.models import User
from utils.helpers import format_date
```
## Naming Conventions
### Project-Specific Rules
| Type | Pattern | Example |
|------|---------|---------|
| API endpoints | `/api/v1/{resource}` | `/api/v1/users` |
| Database tables | `{resource}_table` | `users_table` |
| Environment variables | `APP_{NAME}` | `APP_DATABASE_URL` |
| Config files | `{env}.config.py` | `prod.config.py` |
### Forbidden Patterns
```python
# ❌ Don't use single-letter variables (except i, j, k in loops)
d = get_data()
# ✅ Do use descriptive names
user_data = get_data()
# ❌ Don't use abbreviations
usr_mgr = UserManager()
# ✅ Do use full words
user_manager = UserManager()
```
## Error Handling Pattern
Use project's custom exceptions:
```python
from core.exceptions import UserNotFoundError, ValidationError

def get_user(user_id: int) -> User:
    """
    Retrieve user by ID.

    Raises:
        UserNotFoundError: If user doesn't exist
        ValidationError: If user_id is invalid
    """
    if not isinstance(user_id, int) or user_id <= 0:
        raise ValidationError(f"Invalid user_id: {user_id}")

    user = db.query(User).get(user_id)
    if user is None:
        raise UserNotFoundError(f"User {user_id} not found")

    return user
```
**Never** use bare `except:` - always catch specific exceptions.
## Database Queries
Always use the project's query helper:
```python
# ❌ Don't use raw SQLAlchemy queries
users = db.query(User).filter(User.age > 18).all()
# ✅ Do use query helper
from core.database import QueryBuilder
users = QueryBuilder(User).where('age', '>', 18).get()
```
## API Response Format
All API endpoints must return this format:
```python
{
    "success": True,
    "data": {
        # ... response data
    },
    "error": None,
    "meta": {
        "timestamp": "2025-01-31T12:00:00Z",
        "version": "1.0"
    }
}
```
Use the response helper:
```python
from core.responses import success_response, error_response

@app.get("/users/{id}")
async def get_user(id: int):
    try:
        user = get_user_data(id)
        return success_response(user)
    except UserNotFoundError as e:
        return error_response(str(e), status_code=404)
```
## Testing Patterns
### Test File Location
```
project/
├── src/
│   └── services/
│       └── user_service.py
└── tests/
    └── services/
        └── test_user_service.py
```
### Test Naming
```python
# Format: test_{function_name}_{scenario}_{expected_result}

def test_get_user_valid_id_returns_user():
    """Test getting user with valid ID returns User object."""
    pass

def test_get_user_invalid_id_raises_validation_error():
    """Test getting user with invalid ID raises ValidationError."""
    pass

def test_get_user_nonexistent_id_raises_not_found_error():
    """Test getting non-existent user raises UserNotFoundError."""
    pass
```
## References
- [Full Style Guide](docs/STYLE_GUIDE.md)
- [API Standards](docs/API_STANDARDS.md)
- [Database Conventions](docs/DATABASE.md)
```
### Key Improvements
1. ✅ Project-specific information only: no Python basics tutorial
2. ✅ Concise: 200 lines vs. the original 500+
3. ✅ Practical rules: conventions that can be applied directly
4. ✅ Clear examples: Do/Don't comparisons
5. ✅ References to detailed docs: links instead of inlining everything
---
## Example 3: A Skill Lacking Context
### ❌ Bad Version
```markdown
---
name: deployment
description: Deploys code
---
# Deployment
## Steps
1. Build the code
2. Run tests
3. Deploy to server
4. Verify deployment
```
### Problem Analysis
| Problem | Explanation | Impact |
|------|------|------|
| **No concrete commands** | Doesn't say how to build, test, or deploy | Claude can't execute it |
| **No environment distinction** | Dev, test, and production deployments may differ | May deploy to the wrong environment |
| **No error handling** | Doesn't say what to do when something fails | No way to recover on failure |
| **No verification criteria** | "Verify" is too vague | Unclear what to check |
### ✅ Improved Version
```markdown
---
name: deploy-to-production
description: Deploys application to production environment on AWS. Activates when user asks to deploy to prod or mentions production deployment. Includes pre-flight checks, blue-green deployment, and rollback procedures.
---
# Production Deployment
Safely deploy application to production with zero downtime.
## When to Use This Skill
- User asks to "deploy to production" or "push to prod"
- User mentions "production deployment", "go live"
- User needs to rollback a deployment
## Prerequisites
Before deployment, verify:
```bash
# 1. On main branch
git branch --show-current # Must be "main"
# 2. All tests pass
npm test # Exit code must be 0
# 3. Build succeeds
npm run build # Must complete without errors
# 4. No uncommitted changes
git status # Must show "nothing to commit"
# 5. Latest code pulled
git pull origin main # Must be up to date
```
If any prerequisite fails, **stop** and fix the issue.
## Deployment Process
### Step 1: Pre-flight Checks
```bash
# Run deployment readiness script
./scripts/preflight-check.sh
# Expected output:
# ✓ Tests passed
# ✓ Build succeeded
# ✓ Environment variables configured
# ✓ Database migrations ready
# ✓ Ready to deploy
```
### Step 2: Database Migrations (if needed)
```bash
# Connect to production database
aws rds describe-db-instances --db-instance-identifier prod-db
# Backup before migration
./scripts/backup-database.sh prod
# Run migrations
NODE_ENV=production npm run migrate
# Verify migration succeeded
npm run migrate:status
```
### Step 3: Blue-Green Deployment
```bash
# Deploy to green environment (inactive)
aws elasticbeanstalk create-environment \
--application-name myapp \
--environment-name myapp-prod-green \
--solution-stack-name "64bit Amazon Linux 2 v5.x.x running Node.js 18"
# Wait for green environment to be healthy
aws elasticbeanstalk wait environment-updated \
--environment-name myapp-prod-green
# Check green environment health
curl https://myapp-prod-green.aws.com/health
# Expected: {"status": "healthy"}
```
### Step 4: Smoke Tests
```bash
# Run smoke tests against green environment
BASE_URL=https://myapp-prod-green.aws.com npm run test:smoke
# Tests must include:
# - Health check endpoint
# - Authentication flow
# - Critical API endpoints
# - Database connectivity
```
### Step 5: Switch Traffic
```bash
# Swap URLs (blue becomes green, green becomes blue)
aws elasticbeanstalk swap-environment-cnames \
--source-environment-name myapp-prod-blue \
--destination-environment-name myapp-prod-green
# Wait 5 minutes for DNS propagation
echo "Waiting for DNS propagation..."
sleep 300
# Verify production URL serves new version
curl https://myapp.com/version
# Expected: {"version": "1.2.3"} (new version)
```
### Step 6: Monitor
```bash
# Monitor error rates for 15 minutes
aws cloudwatch get-metric-statistics \
--namespace AWS/ELB \
--metric-name HTTPCode_Backend_5XX \
--start-time $(date -u -d '15 minutes ago' +%Y-%m-%dT%H:%M:%S) \
--end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
--period 300 \
--statistics Sum
# Error rate must be < 1%
```
If error rate exceeds 1%:
- **Rollback immediately** (see Rollback section)
- Investigate issue
- Fix and redeploy
### Step 7: Cleanup
```bash
# After 24 hours, if no issues:
# Terminate old blue environment
aws elasticbeanstalk terminate-environment \
--environment-name myapp-prod-blue
```
## Rollback Procedure
If deployment fails:
```bash
# 1. Swap back to previous version
aws elasticbeanstalk swap-environment-cnames \
--source-environment-name myapp-prod-green \
--destination-environment-name myapp-prod-blue
# 2. Verify old version is serving
curl https://myapp.com/version
# Expected: {"version": "1.2.2"} (old version)
# 3. Rollback database migrations (if ran)
NODE_ENV=production npm run migrate:rollback
# 4. Notify team
./scripts/notify-rollback.sh "Deployment rolled back due to [reason]"
```
## Example Deployment
**User Request**: "Deploy v1.2.3 to production"
**Execution Log**:
```
[14:00:00] Starting deployment of v1.2.3 to production
[14:00:05] ✓ Pre-flight checks passed
[14:00:10] ✓ Database backup completed
[14:00:30] ✓ Database migrations applied (3 migrations)
[14:01:00] → Creating green environment
[14:05:00] ✓ Green environment healthy
[14:05:30] ✓ Smoke tests passed (12/12)
[14:06:00] → Switching traffic to green environment
[14:11:00] ✓ DNS propagated
[14:11:05] ✓ Production serving v1.2.3
[14:11:10] → Monitoring for 15 minutes
[14:26:10] ✓ Error rate: 0.05% (within threshold)
[14:26:15] ✓ Deployment successful
[14:26:20] → Old environment will be terminated in 24h
Deployment completed successfully in 26 minutes
```
## References
- [AWS Deployment Guide](docs/AWS_DEPLOYMENT.md)
- [Runbook](docs/RUNBOOK.md)
- [On-Call Procedures](docs/ONCALL.md)
```
### Key Improvements
1. ✅ Concrete commands: every step has an executable command
2. ✅ Explicit environment: focused on production deployment
3. ✅ Verification criteria: states what to check and the expected result
4. ✅ Error handling: includes a complete rollback procedure
5. ✅ Real output: shows the expected output of each command
6. ✅ Monitoring metrics: defines concrete success criteria
---
## Summary of Common Mistakes
### 1. Naming and Description Problems
| Mistake | Example | Improvement |
|------|------|------|
| Too generic | `name: helper` | `name: python-type-hints` |
| Missing keywords | `description: Helps with code` | `description: Adds type hints to Python using mypy` |
| First person | `description: I help you...` | `description: Adds type hints...` |
### 2. Content Problems
| Mistake | Explanation | Improvement |
|------|------|------|
| Includes general knowledge | Teaches basic Python syntax | Include only project-specific conventions |
| Too abstract | "Analyze the code and give suggestions" | "Check function length, variable naming, duplicated code" |
| No examples | Text descriptions only | Include input-output examples |
### 3. Structure Problems
| Mistake | Explanation | Improvement |
|------|------|------|
| No hierarchy | Everything mixed together | Organize with headings, lists, code blocks |
| No "When to Use" | Unclear when to activate | List 3-5 trigger scenarios |
| No verification steps | Unclear how to confirm success | State the checks and expected results |
### 4. Degree-of-Freedom Problems
| Mistake | Explanation | Improvement |
|------|------|------|
| Low freedom for creative tasks | Step-by-step instructions for architecture design | Provide guiding principles and considerations |
| High freedom for risky tasks | Production deployment without concrete steps | Provide a detailed checklist |
| Mismatched task type | Tutorial-style content for code generation | Provide templates and real examples |
---
## Quick Checklist
Before publishing a Skill, ask yourself:
### Basics
- [ ] Is the name specific and descriptive?
- [ ] Does the description include trigger keywords and scenarios?
- [ ] Is there an explicit "When to Use" section?
- [ ] Does the content include only information Claude doesn't already know?
### Content
- [ ] Are there real code examples?
- [ ] Are the steps concrete and executable?
- [ ] Does it explain how to verify success?
- [ ] Does it include error-handling guidance?
### Structure
- [ ] Is the content clearly organized (headings, lists)?
- [ ] Is the degree of freedom appropriate (matched to the task type)?
- [ ] Is the length appropriate (200-500 lines, or split into sub-files)?
- [ ] Does it include Do/Don't best practices?
If the answer to any item is "no", revise it using the recommendations in this document.

# Good Skill Examples
This document presents several well-written SKILL.md examples, showing best practices in action.
---
## Example 1: Database Migration Skill (High-Quality Basic Skill)
```markdown
---
name: database-migration
description: Manages database schema migrations using Alembic for SQLAlchemy projects. Activates when user asks to create migrations, upgrade/downgrade database, or mentions Alembic. Handles both development and production scenarios with safety checks.
---
# Database Migration Skill
Automates database schema migration management using Alembic.
## When to Use This Skill
- User asks to "create migration", "update database schema", or "rollback migration"
- User mentions "Alembic", "database migration", or "schema change"
- User is working in a Python project with SQLAlchemy models
- User needs to apply or revert database changes
## Quick Start
Create a new migration:
```bash
alembic revision --autogenerate -m "Description of changes"
```
Apply migrations:
```bash
alembic upgrade head
```
## How It Works
### Creating Migrations
1. **Detect model changes**
   - Scan SQLAlchemy models in `models/` directory
   - Compare with current database schema
   - Identify additions, modifications, deletions

2. **Generate migration script**
   - Run `alembic revision --autogenerate`
   - Review generated script for accuracy
   - Edit if necessary (Alembic can't auto-detect everything)

3. **Verify migration**
   - Check upgrade() function is correct
   - Ensure downgrade() function reverses changes
   - Test on development database first
### Applying Migrations
1. **Safety checks**
   - Backup database (production only)
   - Verify no pending migrations
   - Check database connectivity

2. **Execute migration**
   - Run `alembic upgrade head`
   - Monitor for errors
   - Verify schema matches expected state

3. **Post-migration validation**
   - Run application tests
   - Check data integrity
   - Confirm application starts successfully
## Examples
### Example 1: Add New Column
**User Request**: "Add an email column to the users table"
**Step 1**: Update the model
```python
# models/user.py
class User(Base):
    __tablename__ = 'users'

    id = Column(Integer, primary_key=True)
    username = Column(String(50), nullable=False)
    email = Column(String(120), nullable=True)  # ← New field
```
**Step 2**: Generate migration
```bash
alembic revision --autogenerate -m "Add email column to users table"
```
**Generated migration** (alembic/versions/abc123_add_email.py):
```python
def upgrade():
    op.add_column('users', sa.Column('email', sa.String(120), nullable=True))

def downgrade():
    op.drop_column('users', 'email')
```
**Step 3**: Review and apply
```bash
# Review the migration file
cat alembic/versions/abc123_add_email.py
# Apply migration
alembic upgrade head
```
**Output**:
```
INFO [alembic.runtime.migration] Running upgrade xyz789 -> abc123, Add email column to users table
```
### Example 2: Complex Migration with Data Changes
**User Request**: "Split the 'name' column into 'first_name' and 'last_name'"
**Step 1**: Create empty migration (can't auto-generate data changes)
```bash
alembic revision -m "Split name into first_name and last_name"
```
**Step 2**: Write custom migration
```python
def upgrade():
    # Add new columns
    op.add_column('users', sa.Column('first_name', sa.String(50)))
    op.add_column('users', sa.Column('last_name', sa.String(50)))

    # Migrate existing data
    connection = op.get_bind()
    users = connection.execute("SELECT id, name FROM users")
    for user_id, name in users:
        parts = name.split(' ', 1)
        first = parts[0]
        last = parts[1] if len(parts) > 1 else ''
        connection.execute(
            "UPDATE users SET first_name = %s, last_name = %s WHERE id = %s",
            (first, last, user_id)
        )

    # Make new columns non-nullable and drop old column
    op.alter_column('users', 'first_name', nullable=False)
    op.alter_column('users', 'last_name', nullable=False)
    op.drop_column('users', 'name')

def downgrade():
    # Add back name column
    op.add_column('users', sa.Column('name', sa.String(100)))

    # Restore data
    connection = op.get_bind()
    users = connection.execute("SELECT id, first_name, last_name FROM users")
    for user_id, first, last in users:
        full_name = f"{first} {last}".strip()
        connection.execute(
            "UPDATE users SET name = %s WHERE id = %s",
            (full_name, user_id)
        )

    op.alter_column('users', 'name', nullable=False)
    op.drop_column('users', 'first_name')
    op.drop_column('users', 'last_name')
```
**Step 3**: Test thoroughly
```bash
# Apply migration
alembic upgrade head
# Verify data
python -c "from models import User; print(User.query.first().first_name)"
# Test rollback
alembic downgrade -1
python -c "from models import User; print(User.query.first().name)"
# Reapply
alembic upgrade head
```
## Best Practices
### Do
- ✅ Always review auto-generated migrations before applying
- ✅ Test migrations on development database first
- ✅ Write reversible downgrade() functions
- ✅ Backup production databases before major migrations
- ✅ Use meaningful migration messages
### Don't
- ❌ Trust auto-generated migrations blindly
- ❌ Skip downgrade() implementation
- ❌ Apply untested migrations to production
- ❌ Modify existing migration files after they're committed
- ❌ Use raw SQL without bind parameters
## Troubleshooting
### "Target database is not up to date"
**Problem**: Someone else applied migrations you don't have locally
**Solution**:
```bash
git pull # Get latest migrations
alembic upgrade head # Apply them locally
```
### "Can't locate revision identified by 'xyz'"
**Problem**: Migration file deleted or branch conflict
**Solution**:
1. Check if migration file exists in `alembic/versions/`
2. If missing, restore from git history
3. If branch conflict, merge migration branches:

   ```bash
   alembic merge -m "Merge migration branches" head1 head2
   ```
### Migration fails mid-execution
**Problem**: Error occurred during migration
**Solution**:
1. Check error message for specifics
2. Manually fix database to consistent state if needed
3. Update migration script to fix the issue
4. Mark migration as completed or retry:

   ```bash
   # Mark as done without running
   alembic stamp head

   # Or fix and retry
   alembic upgrade head
   ```
## Configuration
### Project Structure
```
project/
├── alembic/
│   ├── versions/          # Migration scripts
│   ├── env.py             # Alembic environment
│   └── script.py.mako     # Migration template
├── alembic.ini            # Alembic configuration
└── models/                # SQLAlchemy models
    ├── __init__.py
    ├── user.py
    └── post.py
```
### alembic.ini Configuration
```ini
[alembic]
script_location = alembic
sqlalchemy.url = driver://user:pass@localhost/dbname
[loggers]
keys = root,sqlalchemy,alembic
[logger_alembic]
level = INFO
handlers = console
qualname = alembic
```
## References
- [Alembic Documentation](https://alembic.sqlalchemy.org/)
- [SQLAlchemy Documentation](https://docs.sqlalchemy.org/)
- [Project Migration Guidelines](docs/database-migrations.md)
```
### Why Is This a Good Skill?
1. ✅ **Clear description**: includes trigger keywords ("Alembic", "create migrations") and a scenario ("SQLAlchemy projects")
2. ✅ **Concrete triggers**: "When to Use" lists 4 explicit scenarios
3. ✅ **Step-by-step workflow**: every operation has clear 1-2-3 steps
4. ✅ **Real examples**: includes a simple and a complex example with complete code
5. ✅ **Best practices**: Do/Don't lists that are easy to follow
6. ✅ **Troubleshooting**: covers 3 common problems and their solutions
7. ✅ **Project-specific information**: includes configuration and directory structure
---
## Example 2: API Documentation Generation Skill (Excellent Workflow Skill)
```markdown
---
name: api-documentation-generation
description: Generates OpenAPI/Swagger documentation from FastAPI or Flask applications. Activates when user asks to create API docs, generate OpenAPI spec, or needs to document REST endpoints. Supports automatic extraction and custom annotations.
---
# API Documentation Generation Skill
Automates creation of comprehensive API documentation from Python web applications.
## When to Use This Skill
- User asks to "generate API docs" or "create OpenAPI spec"
- User mentions "Swagger", "OpenAPI", "API documentation"
- User has a FastAPI or Flask application
- User needs to document REST API endpoints
## Workflow
### Phase 1: Discovery
1. **Identify framework**
   - Check for FastAPI: `from fastapi import FastAPI` in codebase
   - Check for Flask: `from flask import Flask` in codebase
   - Check for Flask-RESTful: `from flask_restful import Resource`

2. **Locate API definitions**
   - Scan for route decorators: `@app.get()`, `@app.post()`, `@app.route()`
   - Find API routers and blueprints
   - Identify request/response models

3. **Extract metadata**
   - Endpoint paths and HTTP methods
   - Request parameters (path, query, body)
   - Response schemas and status codes
   - Authentication requirements
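The framework check in step 1 can be sketched as a simple source scan (an illustrative sketch; a real implementation would also handle encodings and excluded directories):

```python
import re
from pathlib import Path

# Import patterns that identify each supported framework.
_PATTERNS = {
    "fastapi": re.compile(r"^\s*from\s+fastapi\s+import\s+FastAPI", re.M),
    "flask": re.compile(r"^\s*from\s+flask\s+import\s+Flask", re.M),
    "flask-restful": re.compile(r"^\s*from\s+flask_restful\s+import\s+Resource", re.M),
}

def detect_frameworks(project_dir):
    """Return a sorted list of frameworks whose import signature appears."""
    found = set()
    for path in Path(project_dir).rglob("*.py"):
        source = path.read_text(errors="ignore")
        for name, pattern in _PATTERNS.items():
            if pattern.search(source):
                found.add(name)
    return sorted(found)
```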
### Phase 2: Enhancement
1. **Review docstrings**
   - Check if endpoints have docstrings
   - Verify docstrings follow format (summary, description, params, returns)
   - Flag missing documentation

2. **Add missing docs** (if user approves)
   - Generate docstrings based on type hints
   - Infer descriptions from parameter names
   - Add example requests/responses

3. **Validate schemas**
   - Ensure Pydantic models are well-documented
   - Check for missing field descriptions
   - Verify example values are provided
### Phase 3: Generation
1. **Generate OpenAPI spec**

   ```bash
   # For FastAPI
   python -c "from main import app; import json; print(json.dumps(app.openapi()))" > openapi.json

   # For Flask with flasgger
   python scripts/generate_swagger.py > swagger.json
   ```

2. **Create Swagger UI**
   - Copy Swagger UI static files to `docs/api/`
   - Configure to load generated spec
   - Test in browser: `http://localhost:8000/docs`

3. **Generate Markdown docs**
   - Use `openapi-to-md` to create human-readable docs
   - Organize by tags/resource groups
   - Add navigation and table of contents
### Phase 4: Validation
1. **Check completeness**
   - All endpoints documented?
   - All parameters described?
   - Example requests provided?

2. **Validate spec**

   ```bash
   openapi-spec-validator openapi.json
   ```

3. **Test interactive docs**
   - Try sample requests in Swagger UI
   - Verify authentication flows work
   - Check response schemas match actual responses
## Examples
### Example 1: FastAPI Application
**User Request**: "Generate API documentation for my FastAPI app"
**Discovery**:
```
Found FastAPI application in: main.py
Endpoints discovered:
- GET /users (List users)
- POST /users (Create user)
- GET /users/{id} (Get user by ID)
- PUT /users/{id} (Update user)
- DELETE /users/{id} (Delete user)
```
**Enhancement**:
```
Checking documentation coverage...
✓ GET /users - Well documented
✗ POST /users - Missing response examples
✗ GET /users/{id} - No description
✗ PUT /users/{id} - No description
✗ DELETE /users/{id} - No description
Would you like me to add missing documentation?
```
**User**: Yes
**Adding docs**:
```python
@app.get("/users/{id}", summary="Get user by ID")
async def get_user(id: int):
    """
    Retrieve a single user by their unique ID.

    Args:
        id: The unique identifier of the user

    Returns:
        User object with id, name, email, and created_at fields

    Raises:
        404: User not found
    """
    # ... implementation
```
**Generation**:
```bash
Generating OpenAPI specification...
✓ openapi.json created (15.2 KB)
✓ Swagger UI deployed to docs/api/
✓ Markdown docs created in docs/api/endpoints/
View documentation at: http://localhost:8000/docs
```
### Example 2: Flask Application with Custom Schemas
**User Request**: "Document my Flask API and include request/response examples"
**Discovery**:
```
Found Flask application with Flask-RESTful
Endpoints:
- GET /api/products
- POST /api/products
- GET /api/products/<id>
- PATCH /api/products/<id>
- DELETE /api/products/<id>
```
**Custom Examples Added**:
```python
class ProductList(Resource):
    def get(self):
        """
        Get all products

        Example Response:
        ```json
        {
            "products": [
                {
                    "id": 1,
                    "name": "Widget",
                    "price": 29.99,
                    "stock": 100
                }
            ],
            "total": 1
        }
        ```
        """
        pass

    def post(self):
        """
        Create a new product

        Example Request:
        ```json
        {
            "name": "New Widget",
            "price": 39.99,
            "stock": 50
        }
        ```

        Example Response:
        ```json
        {
            "id": 2,
            "name": "New Widget",
            "price": 39.99,
            "stock": 50,
            "created_at": "2025-01-31T12:00:00Z"
        }
        ```
        """
        pass
```
**Result**:
```
Generated documentation:
- openapi.json (with examples)
- Swagger UI at /api/docs
- Postman collection at docs/api/postman_collection.json
- Markdown API reference at docs/api/README.md
All endpoints now include:
✓ Request examples
✓ Response examples
✓ Error codes
✓ Authentication requirements
```
## Configuration
### FastAPI Projects
No additional configuration needed! FastAPI auto-generates OpenAPI docs.
Access at:
- Swagger UI: `http://localhost:8000/docs`
- ReDoc: `http://localhost:8000/redoc`
- OpenAPI JSON: `http://localhost:8000/openapi.json`
### Flask Projects
Install flasgger:
```bash
pip install flasgger
```
Configure in app:
```python
from flask import Flask
from flasgger import Swagger

app = Flask(__name__)
swagger = Swagger(app, template={
    "info": {
        "title": "My API",
        "description": "API for managing resources",
        "version": "1.0.0"
    }
})
```
## Best Practices
- ✅ Use type hints - enables automatic schema generation
- ✅ Write descriptive docstrings for all endpoints
- ✅ Provide example requests and responses
- ✅ Document error codes and edge cases
- ✅ Keep docs in sync with code (auto-generate when possible)
## Tools Used
- **FastAPI**: Built-in OpenAPI support
- **flasgger**: Swagger for Flask
- **openapi-spec-validator**: Validates OpenAPI specs
- **openapi-to-md**: Converts OpenAPI to Markdown
## References
- [OpenAPI Specification](https://spec.openapis.org/oas/latest.html)
- [FastAPI Documentation](https://fastapi.tiangolo.com/)
- [Swagger Documentation](https://swagger.io/docs/)
```
### Why Is This an Excellent Workflow Skill?
1. ✅ **Clear workflow phases**: 4 phases (Discovery, Enhancement, Generation, Validation)
2. ✅ **Decision points**: Phase 2 asks the user whether to add missing docs
3. ✅ **Real output examples**: shows command output and generated code
4. ✅ **Multi-framework support**: handles both FastAPI and Flask
5. ✅ **Tool integration**: lists the required tools and their purposes
6. ✅ **Executable commands**: provides complete command examples
7. ✅ **Validation steps**: Phase 4 ensures the quality of the generated docs
---
## Example 3: Code Review Skill (High-Freedom Skill)
```markdown
---
name: code-review
description: Performs comprehensive code reviews focusing on best practices, security, performance, and maintainability. Activates when user asks to review code, check pull request, or mentions code quality. Provides actionable feedback with severity ratings.
---
# Code Review Skill
Conducts thorough code reviews with focus on quality, security, and best practices.
## When to Use This Skill
- User asks to "review my code" or "check this PR"
- User mentions "code review", "code quality", or "best practices"
- User wants feedback on specific code changes
- User needs security or performance analysis
## Review Criteria
Code is evaluated across 5 dimensions:
### 1. Correctness
- Logic errors and bugs
- Edge case handling
- Error handling and validation
- Type safety
### 2. Security
- SQL injection vulnerabilities
- XSS vulnerabilities
- Authentication/authorization issues
- Sensitive data exposure
- Dependency vulnerabilities
### 3. Performance
- Algorithm efficiency
- Database query optimization
- Memory leaks
- Unnecessary computations
- Caching opportunities
### 4. Maintainability
- Code clarity and readability
- Function/class size
- Code duplication
- Naming conventions
- Documentation
### 5. Best Practices
- Language-specific idioms
- Design patterns
- SOLID principles
- Testing coverage
- Error handling patterns
## Review Process
1. **Understand context**
   - What does this code do?
   - What problem does it solve?
   - Are there any constraints or requirements?

2. **Identify issues**
   - Scan for common anti-patterns
   - Check against language best practices
   - Look for security vulnerabilities
   - Assess performance implications

3. **Prioritize feedback**
   - **Critical**: Security issues, data loss risks, crashes
   - **High**: Bugs, major performance issues
   - **Medium**: Code smells, maintainability concerns
   - **Low**: Style preferences, minor optimizations

4. **Provide suggestions**
   - Explain the issue clearly
   - Show better alternative (code example)
   - Explain why the alternative is better
## Example Review
### Code Submitted
```python
def get_user_data(user_id):
    conn = sqlite3.connect('users.db')
    cursor = conn.cursor()
    query = "SELECT * FROM users WHERE id = " + str(user_id)
    cursor.execute(query)
    result = cursor.fetchone()
    return result
```
### Review Feedback
**❌ CRITICAL: SQL Injection Vulnerability**
The code concatenates user input directly into SQL query, allowing SQL injection attacks.
**Bad**:
```python
query = "SELECT * FROM users WHERE id = " + str(user_id)
```
**Good**:
```python
query = "SELECT * FROM users WHERE id = ?"
cursor.execute(query, (user_id,))
```
**Why**: Parameterized queries prevent SQL injection by properly escaping user input.
---
**⚠️ HIGH: Resource Leak**
Database connection is never closed, leading to resource exhaustion under load.
**Bad**:
```python
conn = sqlite3.connect('users.db')
# ... use connection
# connection never closed
```
**Good**:
```python
with sqlite3.connect('users.db') as conn:
    cursor = conn.cursor()
    # ... use connection
# automatically closed when exiting the with block
```
---
**MEDIUM: Overly Broad SELECT**
Using `SELECT *` fetches unnecessary data and is fragile to schema changes.
**Bad**:
```python
query = "SELECT * FROM users WHERE id = ?"
```
**Good**:
```python
query = "SELECT id, name, email FROM users WHERE id = ?"
```
**Why**: Explicit column selection is more efficient and makes intent clear.
---
**📝 LOW: Missing Error Handling**
Function doesn't handle case where user doesn't exist.
**Suggestion**:
```python
def get_user_data(user_id: int) -> Optional[dict]:
    """
    Retrieve user data by ID.

    Args:
        user_id: The unique identifier of the user

    Returns:
        Dictionary with user data, or None if user not found
    """
    with sqlite3.connect('users.db') as conn:
        cursor = conn.cursor()
        query = "SELECT id, name, email FROM users WHERE id = ?"
        cursor.execute(query, (user_id,))
        result = cursor.fetchone()

        if result is None:
            return None

        return {
            'id': result[0],
            'name': result[1],
            'email': result[2]
        }
```
### Revised Code
After applying all suggestions:
```python
from typing import Optional
import sqlite3

def get_user_data(user_id: int) -> Optional[dict]:
    """
    Retrieve user data by ID.

    Args:
        user_id: The unique identifier of the user

    Returns:
        Dictionary with user data, or None if user not found

    Raises:
        ValueError: If user_id is not a positive integer
    """
    if not isinstance(user_id, int) or user_id <= 0:
        raise ValueError(f"Invalid user_id: {user_id}")

    with sqlite3.connect('users.db') as conn:
        conn.row_factory = sqlite3.Row  # Enable dict-like access
        cursor = conn.cursor()
        query = "SELECT id, name, email, created_at FROM users WHERE id = ?"
        cursor.execute(query, (user_id,))
        result = cursor.fetchone()

        if result is None:
            return None

        return dict(result)  # Convert Row to dict
```
### Summary
**Issues Found**: 4
- 1 Critical (SQL Injection)
- 1 High (Resource Leak)
- 1 Medium (Inefficient Query)
- 1 Low (Missing Error Handling)
**All Issues Addressed**: ✓
## Best Practices
### When Reviewing
- 🎯 Focus on impact - prioritize critical issues
- 📝 Be specific - provide code examples
- 🎓 Be educational - explain why, not just what
- 🤝 Be constructive - suggest improvements, don't just criticize
- ⚖️ Be balanced - acknowledge good practices too
### What to Look For
**Python-specific**:
- Use of `with` for resource management
- Type hints on function signatures
- Proper exception handling
- List comprehensions vs loops
- Dictionary vs if-elif chains
**General**:
- DRY principle violations
- Magic numbers
- Long functions (>50 lines)
- Deep nesting (>3 levels)
- Missing tests for critical paths
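The two structural checks above (function length, nesting depth) can be approximated with Python's stdlib `ast` module; a heuristic sketch:

```python
import ast

MAX_LINES = 50   # flag functions longer than 50 lines
MAX_DEPTH = 3    # flag nesting deeper than 3 levels
_NESTING = (ast.If, ast.For, ast.While, ast.With, ast.Try)

def _depth(node, current=0):
    """Deepest count of nested control-flow statements under node."""
    worst = current
    for child in ast.iter_child_nodes(node):
        bump = 1 if isinstance(child, _NESTING) else 0
        worst = max(worst, _depth(child, current + bump))
    return worst

def review_functions(source):
    """Return (name, issue) pairs for functions that break the limits."""
    issues = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > MAX_LINES:
                issues.append((node.name, f"too long: {length} lines"))
            if _depth(node) > MAX_DEPTH:
                issues.append((node.name, "nesting deeper than 3 levels"))
    return issues
```

This is only a first-pass filter; it complements, not replaces, the judgment-based review above.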
## Automated Tools
Complement manual review with automated tools:
```bash
# Linting
pylint mycode.py
flake8 mycode.py
# Type checking
mypy mycode.py
# Security scanning
bandit -r .
safety check
# Code complexity
radon cc mycode.py -a
```
## References
- [OWASP Top 10](https://owasp.org/www-project-top-ten/)
- [Python Best Practices](https://docs.python-guide.org/)
- [Clean Code Principles](https://www.amazon.com/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882)
```
### Why Is This a High-Freedom Skill?
1. ✅ **Guiding principles, not rigid steps**: provides review dimensions without fixing a specific procedure
2. ✅ **Context adaptation**: adjusts focus based on code type and issue severity
3. ✅ **Educational**: explains the "why", helping Claude exercise judgment
4. ✅ **Priority framework**: defines severity levels and lets Claude decide
5. ✅ **Complete example**: shows the full flow from issue identification to resolution
6. ✅ **Tool integration**: mentions automated tools without mandating them
---
## Summary: Shared Traits of Good Skills
| Trait | Explanation | Where to Find It |
|------|------|---------|
| **Clear triggers** | description includes keywords and scenarios | All frontmatter |
| **Structured content** | organized with headings, lists, code blocks | All examples |
| **Real examples** | actual code, not pseudocode | Example sections |
| **Decision guidance** | tells Claude when to do what | Phase 2 of the workflow Skill |
| **Executable commands** | complete commands, not abstract descriptions | Migration Skill commands |
| **Error handling** | includes a troubleshooting section | All Troubleshooting sections |
| **Best practices** | Do/Don't lists | All Best Practices sections |
| **Tool references** | which tools to use and how | API docs Skill |
| **Verification steps** | how to confirm an operation succeeded | Migration Skill validation |
| **Appropriate freedom** | guidance level matched to the task | Code Review Skill |

---
name: your-skill-name
description: Brief description of what this skill does and when to activate it. Include trigger keywords and scenarios where this skill should be used.
---
# Your Skill Title
> Brief one-line summary of what this skill accomplishes
## When to Use This Skill
- User asks to [specific action or task]
- User mentions keywords like "[keyword1]", "[keyword2]", or "[keyword3]"
- User is working with [specific technology/framework/tool]
- User needs to [specific outcome or goal]
## Quick Start
```bash
# Basic usage example
command-to-run --option value
```
## How It Works
1. **Step 1**: Brief description of first step
   - Detail about what happens
   - Any prerequisites or conditions

2. **Step 2**: Brief description of second step
   - Key actions taken
   - Expected outputs

3. **Step 3**: Brief description of final step
   - Validation or verification
   - Success criteria
## Examples
### Example 1: Basic Usage
**User Request**: "Example of what user might say"
**Action**: What Claude does in response
**Output**:
```
Expected output or result
```
### Example 2: Advanced Usage
**User Request**: "More complex user request"
**Action**:
1. First action taken
2. Second action taken
3. Final action
**Output**:
```
Expected output showing more complex results
```
## Best Practices
- ✅ Do this for best results
- ✅ Follow this pattern
- ❌ Avoid this common mistake
- ❌ Don't do this
## Troubleshooting
### Common Issue 1
**Problem**: Description of the problem
**Solution**: How to fix it
### Common Issue 2
**Problem**: Description of another problem
**Solution**: Steps to resolve
## References
- [Related Documentation](link-to-docs)
- [Official Guide](link-to-guide)
- [Additional Resources](link-to-resources)
---
**Version**: 1.0
**Last Updated**: YYYY-MM-DD

---
name: your-workflow-skill
description: Guides Claude through a multi-step workflow for [specific task]. Activates when user needs to [trigger scenario] or mentions [key terms].
---
# Your Workflow Skill Title
> Automates a complex multi-step process with decision points and validation
## When to Use This Skill
- User needs to execute a multi-step workflow
- User asks to "[workflow trigger phrase]"
- User is working on [specific type of project or task]
- Task requires validation and error handling at each step
## Workflow Overview
```
┌─────────────┐
│    Start    │
└──────┬──────┘
       │
┌──────▼──────────┐
│  Preparation    │
│  & Validation   │
└───────┬─────────┘
        │
   ┌────▼────┐
   │ Step 1  │
   └────┬────┘
        │
   ┌────▼────┐
   │ Step 2  │◄─┐
   └────┬────┘  │ (Loop if needed)
        │       │
        ├───────┘
        │
   ┌────▼────┐
   │ Step 3  │
   └────┬────┘
        │
┌───────▼─────┐
│  Complete   │
│  & Report   │
└─────────────┘
```
## Detailed Workflow
### Preparation Phase
Before starting the main workflow:
- [ ] Check prerequisite 1
- [ ] Validate prerequisite 2
- [ ] Ensure prerequisite 3 is met
If any prerequisite fails:
- Stop execution
- Report which prerequisite failed
- Provide remediation steps
### Step 1: [Step Name]
**Purpose**: What this step accomplishes
**Actions**:
1. Action 1
2. Action 2
3. Action 3
**Validation**:
- Check condition 1
- Verify condition 2
**On Success**: → Proceed to Step 2
**On Failure**: → [Error handling procedure]
### Step 2: [Step Name]
**Purpose**: What this step accomplishes
**Actions**:
1. Action 1
2. Action 2
**Decision Point**:
- If condition A: → Action X
- If condition B: → Action Y
- Otherwise: → Default action
**Validation**:
- Verify expected output
- Check for errors
**On Success**: → Proceed to Step 3
**On Failure**: → [Error handling procedure]
### Step 3: [Step Name]
**Purpose**: Final actions and cleanup
**Actions**:
1. Finalize changes
2. Run validation tests
3. Generate summary report
**Success Criteria**:
- All tests pass
- No errors in logs
- Expected artifacts created
## Examples
### Example 1: Standard Workflow Execution
**User Request**: "Run the [workflow name]"
**Execution**:
**Preparation Phase** ✓
```
✓ Prerequisite 1 met
✓ Prerequisite 2 validated
✓ Ready to begin
```
**Step 1: [Step Name]** ✓
```
→ Action 1 completed
→ Action 2 completed
→ Validation passed
```
**Step 2: [Step Name]** ✓
```
→ Decision: Condition A detected
→ Executing Action X
→ Validation passed
```
**Step 3: [Step Name]** ✓
```
→ Finalization complete
→ All tests passed
→ Summary generated
```
**Result**: Workflow completed successfully
### Example 2: Workflow with Error Recovery
**User Request**: "Execute [workflow name]"
**Execution**:
**Step 1** ✓
```
→ Completed successfully
```
**Step 2** ⚠️
```
→ Action 1 completed
→ Action 2 failed: [Error message]
```
**Error Recovery**:
1. Identified root cause: [Explanation]
2. Applied fix: [Fix description]
3. Retrying Step 2...
**Step 2 (Retry)** ✓
```
→ Completed after fix
```
**Step 3** ✓
```
→ Completed successfully
```
**Result**: Workflow completed with 1 retry
## Error Handling
### Error Categories
| Category | Action |
|----------|--------|
| **Recoverable** | Attempt automatic fix, retry up to 3 times |
| **User Input Needed** | Pause workflow, ask user for guidance |
| **Critical** | Stop workflow, rollback changes if possible |
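The "Recoverable" policy in the table can be sketched as a retry wrapper (the names and the automatic-fix hook are illustrative):

```typescript
// Illustrative "Recoverable" policy: try the action, apply an automatic fix
// between attempts, and escalate after three failed tries.
function withRetry<T>(action: () => T, fix: () => void, maxAttempts = 3): T {
  let lastError: unknown = new Error('retry: no attempts made');
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return action();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts) fix(); // attempt automatic fix, then retry
    }
  }
  throw lastError; // escalate: treat as user-input-needed or critical
}
```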
### Common Errors
**Error 1: [Error Name]**
- **Cause**: What causes this error
- **Detection**: How to identify it
- **Recovery**: Steps to fix
1. Recovery action 1
2. Recovery action 2
3. Retry from failed step
**Error 2: [Error Name]**
- **Cause**: What causes this error
- **Detection**: How to identify it
- **Recovery**: Manual intervention required
- Ask user: "[Question to ask]"
- Wait for user input
- Apply user's guidance
- Resume workflow
## Rollback Procedure
If the workflow fails critically:
1. **Identify last successful step**
- Step 1: ✓ Completed
- Step 2: ❌ Failed at action 3
2. **Undo changes from failed step**
- Revert action 1
- Revert action 2
- Clean up partial state
3. **Verify system state**
- Confirm rollback successful
- Check for side effects
4. **Report to user**
```
Workflow failed at Step 2, action 3
Reason: [Error message]
All changes have been rolled back
System is back to pre-workflow state
```
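The rollback steps above amount to an undo stack: record an undo action per completed step, then unwind it newest-first on a critical failure. A sketch with illustrative names:

```typescript
// Illustrative rollback: completed steps are tracked so that a failure
// undoes them in reverse order, returning the system to its pre-workflow state.
type Step = { name: string; run: () => void; undo: () => void };

function runWithRollback(steps: Step[]): { ok: boolean; failedStep?: string } {
  const completed: Step[] = [];
  for (const step of steps) {
    try {
      step.run();
      completed.push(step);
    } catch {
      // Undo changes from all previously completed steps, newest first.
      for (const done of completed.reverse()) done.undo();
      return { ok: false, failedStep: step.name };
    }
  }
  return { ok: true };
}
```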
## Workflow Variations
### Variation 1: Quick Mode
**When to use**: User needs faster execution and can accept lighter validation
**Changes**:
- Skip optional validations
- Use cached data where available
- Reduce logging verbosity
**Trade-offs**:
- ⚡ 50% faster
- ⚠️ Less detailed error messages
### Variation 2: Strict Mode
**When to use**: Production deployments, critical changes
**Changes**:
- Enable all validations
- Require explicit user confirmation at each step
- Generate detailed audit logs
**Trade-offs**:
- 🛡️ Maximum safety
- 🐢 Slower execution
## Monitoring and Logging
Throughout the workflow, emit structured log lines:
```
[TIMESTAMP] [STEP] [STATUS] Message
[2025-01-31 14:30:01] [PREP] [INFO] Starting preparation phase
[2025-01-31 14:30:02] [PREP] [OK] All prerequisites met
[2025-01-31 14:30:03] [STEP1] [INFO] Beginning Step 1
[2025-01-31 14:30:05] [STEP1] [OK] Step 1 completed successfully
[2025-01-31 14:30:06] [STEP2] [INFO] Beginning Step 2
[2025-01-31 14:30:08] [STEP2] [WARN] Condition B detected, using fallback
[2025-01-31 14:30:10] [STEP2] [OK] Step 2 completed with warnings
[2025-01-31 14:30:11] [STEP3] [INFO] Beginning Step 3
[2025-01-31 14:30:15] [STEP3] [OK] Step 3 completed successfully
[2025-01-31 14:30:16] [COMPLETE] [OK] Workflow finished successfully
```
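One way to produce log lines in the `[TIMESTAMP] [STEP] [STATUS] Message` shape shown above (the exact timestamp layout is an assumption):

```typescript
// Illustrative formatter for the structured log format above.
// Timestamps are rendered as "YYYY-MM-DD HH:MM:SS" in UTC.
function logLine(
  step: string,
  status: 'INFO' | 'OK' | 'WARN' | 'ERROR',
  message: string,
  now: Date = new Date()
): string {
  const ts = now.toISOString().replace('T', ' ').slice(0, 19);
  return `[${ts}] [${step}] [${status}] ${message}`;
}
```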
## Post-Workflow Report
After completion, generate a summary:
```markdown
# Workflow Execution Report
**Workflow**: [Workflow Name]
**Started**: 2025-01-31 14:30:01
**Completed**: 2025-01-31 14:30:16
**Duration**: 15 seconds
**Status**: ✓ Success
## Steps Executed
1. ✓ Preparation Phase (1s)
2. ✓ Step 1: [Step Name] (2s)
3. ✓ Step 2: [Step Name] (4s) - 1 warning
4. ✓ Step 3: [Step Name] (4s)
## Warnings
- Step 2: Condition B detected, used fallback action
## Artifacts Generated
- `/path/to/output1.txt`
- `/path/to/output2.json`
- `/path/to/report.html`
## Next Steps
- Review generated artifacts
- Deploy to production (if applicable)
- Archive logs to `/logs/workflow-20250131-143001.log`
```
## Best Practices
### Do
- ✅ Validate inputs before starting workflow
- ✅ Provide clear progress updates at each step
- ✅ Log all decisions and actions
- ✅ Handle errors gracefully with recovery options
- ✅ Generate summary report at completion
### Don't
- ❌ Skip validation steps to save time
- ❌ Continue after critical errors
- ❌ Assume prerequisites are met without checking
- ❌ Lose partial progress on failure
- ❌ Leave system in inconsistent state
## Advanced Features
### Parallel Execution
Some steps can run in parallel:
```
Step 1 ─┬─→ Step 2A ─┐
│ ├─→ Step 3
└─→ Step 2B ─┘
```
**Requirements**:
- Steps 2A and 2B must be independent
- Both must complete before Step 3
**Implementation**:
1. Start Step 2A in background
2. Start Step 2B in background
3. Wait for both to complete
4. Verify both succeeded
5. Proceed to Step 3
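The fan-out/fan-in above maps naturally onto `Promise.all`: both branches start concurrently, and Step 3 runs only after both succeed. A sketch with illustrative step bodies:

```typescript
// Illustrative parallel phase: Steps 2A and 2B run concurrently.
// Promise.all rejects if either branch fails, so Step 3 never runs on partial success.
async function runParallelPhase(
  step2A: () => Promise<void>,
  step2B: () => Promise<void>,
  step3: () => Promise<string>
): Promise<string> {
  await Promise.all([step2A(), step2B()]); // wait for both to complete
  return step3(); // verified both succeeded; proceed to Step 3
}
```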
### Conditional Branching
```
Step 1 → Decision
├─→ [Condition A] → Path A → Step 3
├─→ [Condition B] → Path B → Step 3
└─→ [Default] → Path C → Step 3
```
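The branch selection can be sketched as a first-match dispatcher (condition names are placeholders):

```typescript
// Illustrative decision point: the first matching condition wins,
// and anything unmatched falls through to the default path.
type Path = 'A' | 'B' | 'C';

function chooseBranch(conditionA: boolean, conditionB: boolean): Path {
  if (conditionA) return 'A'; // → Path A → Step 3
  if (conditionB) return 'B'; // → Path B → Step 3
  return 'C'; // → [Default] → Path C → Step 3
}
```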
## Testing This Workflow
To test the workflow without side effects:
1. Use `--dry-run` flag to simulate execution
2. Check that all steps are logged correctly
3. Verify error handling with intentional failures
4. Confirm rollback procedure works
Example:
```bash
workflow-runner --dry-run --inject-error step2
```
Expected output:
```
[DRY RUN] Step 1: Would execute [actions]
[DRY RUN] Step 2: Injected error as requested
[DRY RUN] Error Recovery: Would attempt fix
[DRY RUN] Rollback: Would undo Step 1 changes
```
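The `workflow-runner` CLI above is a placeholder; the underlying dry-run idea can be sketched as a wrapper that describes each step instead of executing it:

```typescript
// Illustrative dry-run wrapper: in dry-run mode, actions are described but never
// executed, and an injected error name exercises the recovery path.
type RunOptions = { dryRun: boolean; injectError?: string };

function executeStep(name: string, action: () => void, opts: RunOptions): string {
  if (opts.dryRun) {
    if (opts.injectError === name) {
      return `[DRY RUN] ${name}: Injected error as requested`;
    }
    return `[DRY RUN] ${name}: Would execute actions`;
  }
  action(); // real execution only outside dry-run mode
  return `${name}: executed`;
}
```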
---
**Version**: 1.0
**Last Updated**: YYYY-MM-DD
**Maintainer**: Team Name


@ -9,5 +9,4 @@ README.md
.yalc/
yalc.lock
testApi/
*.local.*
*.local
*.local.*


@ -23,6 +23,4 @@ vitest.config.mts
bin/
scripts/
deploy/
document/
projects/marketplace
document/


@ -15,46 +15,8 @@ permissions:
pull-requests: write
jobs:
sync-images:
runs-on: ubuntu-latest
steps:
- name: Checkout current repository
uses: actions/checkout@v4
- name: Checkout target repository
uses: actions/checkout@v4
with:
repository: labring/fastgpt-img
token: ${{ secrets.DOCS_IMGS_SYNC_TOKEN }}
path: fastgpt-img
- name: Sync images
run: |
# Create imgs directory if it doesn't exist
mkdir -p fastgpt-img
# Copy all images from document/public/imgs to the target repository
cp -r document/public/imgs/* fastgpt-img
# Navigate to target repository
cd fastgpt-img
# Configure git
git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"
# Add, commit and push changes
git add .
if ! git diff --cached --quiet; then
git commit -m "Sync images from FastGPT document at $(date)"
git push
echo "Images synced successfully"
else
echo "No changes to sync"
fi
# Add a new job to generate unified timestamp
generate-timestamp:
needs: sync-images
runs-on: ubuntu-latest
outputs:
datetime: ${{ steps.datetime.outputs.datetime }}
@ -78,19 +40,6 @@ jobs:
- name: Checkout
uses: actions/checkout@v4
- name: Rewrite image paths
if: matrix.domain_config.suffix == 'io'
run: |
find document/content/docs -name "*.mdx" -type f | while read file; do
sed -i 's|](/imgs/|](https://cdn.jsdelivr.net/gh/labring/fastgpt-img@main/|g' "$file"
done
- name: Rewrite domain links for CN
if: matrix.domain_config.suffix == 'cn'
run: |
find document/content/docs -name "*.mdx" -type f | while read file; do
sed -i 's|doc\.fastgpt\.io|doc.fastgpt.cn|g' "$file"
done
- name: Docker meta
id: meta
uses: docker/metadata-action@v5
@ -120,7 +69,10 @@ jobs:
labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64
build-args: |
NEXT_PUBLIC_SEARCH_APPKEY=c4708d48f2de6ac5d2f0f443979ef92a
NEXT_PUBLIC_SEARCH_APPID=HZAF4C2T88
FASTGPT_HOME_DOMAIN=${{ matrix.domain_config.domain }}
SEARCH_APPWRITEKEY=${{ secrets.SEARCH_APPWRITEKEY }}
- name: Build and push Docker images (IO)
if: matrix.domain_config.suffix == 'io'
@ -133,6 +85,8 @@ jobs:
labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64
build-args: |
NEXT_PUBLIC_SEARCH_APPKEY=c4708d48f2de6ac5d2f0f443979ef92a
NEXT_PUBLIC_SEARCH_APPID=HZAF4C2T88
FASTGPT_HOME_DOMAIN=${{ matrix.domain_config.domain }}
update-images:


@ -19,8 +19,6 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v4
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: Get current datetime
id: datetime
@ -53,6 +51,8 @@ jobs:
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
build-args: |
NEXT_PUBLIC_SEARCH_APPKEY=c4708d48f2de6ac5d2f0f443979ef92a
NEXT_PUBLIC_SEARCH_APPID=HZAF4C2T88
FASTGPT_HOME_DOMAIN=https://fastgpt.io
outputs:
tags: ${{ steps.datetime.outputs.datetime }}
@ -63,8 +63,6 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
ref: ${{ github.event.pull_request.head.sha }}
# Add kubeconfig setup step to handle encoding issues
- name: Setup kubeconfig


@ -73,6 +73,7 @@ jobs:
--label "org.opencontainers.image.description=${{ steps.config.outputs.DESCRIPTION }}" \
--push \
--cache-from=type=local,src=/tmp/.buildx-cache \
--cache-to=type=local,dest=/tmp/.buildx-cache \
-t ${{ steps.config.outputs.DOCKER_REPO_TAGGED }} \
.


@ -1,147 +0,0 @@
name: Build fastgpt-marketplace images
on:
workflow_dispatch:
jobs:
build-fastgpt-marketplace-images:
permissions:
packages: write
contents: read
attestations: write
id-token: write
strategy:
matrix:
include:
- arch: amd64
- arch: arm64
runs-on: ubuntu-24.04-arm
runs-on: ${{ matrix.runs-on || 'ubuntu-24.04' }}
steps:
# install env
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
driver-opts: network=host
- name: Cache Docker layers
uses: actions/cache@v4
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-marketplace-buildx-${{ github.sha }}
restore-keys: |
${{ runner.os }}-marketplace-buildx-
# login docker
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login to Ali Hub
uses: docker/login-action@v3
with:
registry: registry.cn-hangzhou.aliyuncs.com
username: ${{ secrets.ALI_HUB_USERNAME }}
password: ${{ secrets.ALI_HUB_PASSWORD }}
- name: Build for ${{ matrix.arch }}
id: build
uses: docker/build-push-action@v6
with:
context: .
file: projects/marketplace/Dockerfile
platforms: linux/${{ matrix.arch }}
labels: |
org.opencontainers.image.source=https://github.com/${{ github.repository }}
org.opencontainers.image.description=fastgpt-marketplace image
outputs: type=image,"name=ghcr.io/${{ github.repository_owner }}/fastgpt-marketplace,${{ secrets.ALI_IMAGE_NAME }}/fastgpt-marketplace",push-by-digest=true,push=true
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache
- name: Export digest
run: |
mkdir -p ${{ runner.temp }}/digests
digest="${{ steps.build.outputs.digest }}"
touch "${{ runner.temp }}/digests/${digest#sha256:}"
- name: Upload digest
uses: actions/upload-artifact@v4
with:
name: digests-fastgpt-marketplace-${{ github.sha }}-${{ matrix.arch }}
path: ${{ runner.temp }}/digests/*
if-no-files-found: error
retention-days: 1
release-fastgpt-marketplace-images:
permissions:
packages: write
contents: read
attestations: write
id-token: write
needs: build-fastgpt-marketplace-images
runs-on: ubuntu-24.04
steps:
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login to Ali Hub
uses: docker/login-action@v3
with:
registry: registry.cn-hangzhou.aliyuncs.com
username: ${{ secrets.ALI_HUB_USERNAME }}
password: ${{ secrets.ALI_HUB_PASSWORD }}
- name: Download digests
uses: actions/download-artifact@v4
with:
path: ${{ runner.temp }}/digests
pattern: digests-fastgpt-marketplace-${{ github.sha }}-*
merge-multiple: true
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Generate random tag
id: tag
run: |
# Generate random hash tag (8 characters)
TAG=$(echo $RANDOM | md5sum | head -c 8)
echo "RANDOM_TAG=$TAG" >> $GITHUB_ENV
echo "Generated tag: $TAG"
- name: Set image name and tag
run: |
echo "Git_Tag=ghcr.io/${{ github.repository_owner }}/fastgpt-marketplace:${{ env.RANDOM_TAG }}" >> $GITHUB_ENV
echo "Ali_Tag=${{ secrets.ALI_IMAGE_NAME }}/fastgpt-marketplace:${{ env.RANDOM_TAG }}" >> $GITHUB_ENV
- name: Create manifest list and push
working-directory: ${{ runner.temp }}/digests
run: |
echo "Pushing image with tag: ${{ env.RANDOM_TAG }}"
echo "Available digests:"
ls -la
echo ""
# Create manifest for GitHub Container Registry
echo "Creating manifest for GitHub: ${Git_Tag}"
docker buildx imagetools create -t ${Git_Tag} \
$(printf 'ghcr.io/${{ github.repository_owner }}/fastgpt-marketplace@sha256:%s ' *)
echo "✅ GitHub manifest created"
sleep 5
# Create manifest for Ali Cloud
echo "Creating manifest for Ali Cloud: ${Ali_Tag}"
docker buildx imagetools create -t ${Ali_Tag} \
$(printf '${{ secrets.ALI_IMAGE_NAME }}/fastgpt-marketplace@sha256:%s ' *)
echo "✅ Ali Cloud manifest created"
echo ""
echo "✅ All images pushed successfully:"
echo " - ${{ env.Git_Tag }}"
echo " - ${{ env.Ali_Tag }}"

.gitignore vendored

@ -37,6 +37,4 @@ files/helm/fastgpt/charts/*.tgz
tmp/
coverage
document/.source
projects/app/worker/
document/.source


@ -1,8 +1,6 @@
#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"
if command -v pnpm >/dev/null 2>&1; then
pnpm lint-staged
elif command -v npx >/dev/null 2>&1; then
if command -v npx >/dev/null 2>&1; then
npx lint-staged
fi


@ -4,7 +4,6 @@ dist
node_modules
document/
*.md
*.mdx
pnpm-lock.yaml
cl100l_base.ts


@ -1,33 +1,35 @@
{
// Place your FastGPT 工作区 snippets here. Each snippet is defined under a snippet name and has a scope, prefix, body and
// description. Add comma separated ids of the languages where the snippet is applicable in the scope field. If scope
// is left empty or omitted, the snippet gets applied to all languages. The prefix is what is
// used to trigger the snippet and the body will be expanded and inserted. Possible variables are:
// $1, $2 for tab stops, $0 for the final cursor position, and ${1:label}, ${2:another} for placeholders.
// Place your FastGPT 工作区 snippets here. Each snippet is defined under a snippet name and has a scope, prefix, body and
// description. Add comma separated ids of the languages where the snippet is applicable in the scope field. If scope
// is left empty or omitted, the snippet gets applied to all languages. The prefix is what is
// used to trigger the snippet and the body will be expanded and inserted. Possible variables are:
// $1, $2 for tab stops, $0 for the final cursor position, and ${1:label}, ${2:another} for placeholders.
// Placeholders with the same ids are connected.
// Example:
"Next api template": {
"scope": "javascript,typescript",
"prefix": "nextapi",
"body": [
"import { NextAPI } from '@/service/middleware/entry';",
"import type { ApiRequestProps, ApiResponseType } from '@fastgpt/service/type/next';",
"",
"async function handler(",
" req: ApiRequestProps,",
" res: ApiResponseType<${1}ResponseType>",
"): Promise<${1}ResponseType> {",
" const body = ${1}BodySchema.parse(req.body);",
" const query = ${1}QuerySchema.parse(req.query);",
"",
" ${2}",
"",
" return {};",
"}",
"",
"export default NextAPI(handler);"
"import type { ApiRequestProps, ApiResponseType } from '@fastgpt/service/type/next';",
"import { NextAPI } from '@/service/middleware/entry';",
"",
"export type ${TM_FILENAME_BASE}Query = {};",
"",
"export type ${TM_FILENAME_BASE}Body = {};",
"",
"export type ${TM_FILENAME_BASE}Response = {};",
"",
"async function handler(",
" req: ApiRequestProps<${TM_FILENAME_BASE}Body, ${TM_FILENAME_BASE}Query>,",
" res: ApiResponseType<any>",
"): Promise<${TM_FILENAME_BASE}Response> {",
" $1",
" return {}",
"}",
"",
"export default NextAPI(handler);"
],
"description": "FastGPT Next API template with Zod validation"
"description": "FastGPT Next API template"
},
"use context template": {
"scope": "typescriptreact",
@ -36,7 +38,7 @@
"import React, { ReactNode } from 'react';",
"import { createContext } from 'use-context-selector';",
"",
"type ContextType = {${1}};",
"type ContextType = {$1};",
"",
"export const Context = createContext<ContextType>({});",
"",
@ -63,4 +65,4 @@
"});"
]
}
}
}


@ -3,7 +3,6 @@
"editor.mouseWheelZoom": true,
"editor.defaultFormatter": "esbenp.prettier-vscode",
"prettier.prettierPath": "node_modules/prettier",
"typescript.preferences.includePackageJsonAutoImports": "on",
"typescript.tsdk": "node_modules/typescript/lib",
"i18n-ally.localesPaths": [
"packages/web/i18n",
@ -26,12 +25,7 @@
"[typescript]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
"mdx.server.enable": true,
"markdown.copyFiles.overwriteBehavior": "nameIncrementally",
"markdown.copyFiles.destination": {
"/document/content/docs/**/*": "${documentWorkspaceFolder}/document/public/imgs/"
},
"files.associations": {
"*.mdx": "markdown"
"/document/content/**/*": "${documentWorkspaceFolder}/document/public/"
}
}

CLAUDE.md Normal file

@ -0,0 +1,116 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
FastGPT is an AI Agent construction platform providing out-of-the-box data processing, model invocation capabilities, and visual workflow orchestration through Flow. This is a full-stack TypeScript application built on NextJS with MongoDB/PostgreSQL backends.
**Tech Stack**: NextJS + TypeScript + ChakraUI + MongoDB + PostgreSQL (PG Vector)/Milvus
## Architecture
This is a monorepo using pnpm workspaces with the following key structure:
### Packages (Library Code)
- `packages/global/` - Shared types, constants, utilities used across all projects
- `packages/service/` - Backend services, database schemas, API controllers, workflow engine
- `packages/web/` - Shared frontend components, hooks, styles, i18n
- `packages/templates/` - Application templates for the template market
### Projects (Applications)
- `projects/app/` - Main NextJS web application (frontend + API routes)
- `projects/sandbox/` - NestJS code execution sandbox service
- `projects/mcp_server/` - Model Context Protocol server implementation
### Key Directories
- `document/` - Documentation site (NextJS app with content)
- `plugins/` - External plugins (models, crawlers, etc.)
- `deploy/` - Docker and Helm deployment configurations
- `test/` - Centralized test files and utilities
## Development Commands
### Main Commands (run from project root)
- `pnpm dev` - Start development for all projects (uses package.json workspace scripts)
- `pnpm build` - Build all projects
- `pnpm test` - Run tests using Vitest
- `pnpm test:workflow` - Run workflow-specific tests
- `pnpm lint` - Run ESLint across all TypeScript files with auto-fix
- `pnpm format-code` - Format code using Prettier
### Project-Specific Commands
**Main App (projects/app/)**:
- `cd projects/app && pnpm dev` - Start NextJS dev server
- `cd projects/app && pnpm build` - Build NextJS app
- `cd projects/app && pnpm start` - Start production server
**Sandbox (projects/sandbox/)**:
- `cd projects/sandbox && pnpm dev` - Start NestJS dev server with watch mode
- `cd projects/sandbox && pnpm build` - Build NestJS app
- `cd projects/sandbox && pnpm test` - Run Jest tests
**MCP Server (projects/mcp_server/)**:
- `cd projects/mcp_server && bun dev` - Start with Bun in watch mode
- `cd projects/mcp_server && bun build` - Build MCP server
- `cd projects/mcp_server && bun start` - Start MCP server
### Utility Commands
- `pnpm create:i18n` - Generate i18n translation files
- `pnpm api:gen` - Generate OpenAPI documentation
- `pnpm initIcon` - Initialize icon assets
- `pnpm gen:theme-typings` - Generate Chakra UI theme typings
## Testing
The project uses Vitest for testing with coverage reporting. Key test commands:
- `pnpm test` - Run all tests
- `pnpm test:workflow` - Run workflow tests specifically
- Test files are located in `test/` directory and `projects/app/test/`
- Coverage reports are generated in `coverage/` directory
## Code Organization Patterns
### Monorepo Structure
- Shared code lives in `packages/` and is imported using workspace references
- Each project in `projects/` is a standalone application
- Use `@fastgpt/global`, `@fastgpt/service`, `@fastgpt/web` imports for shared packages
### API Structure
- NextJS API routes in `projects/app/src/pages/api/`
- Core business logic in `packages/service/core/`
- Database schemas in `packages/service/` with MongoDB/Mongoose
### Frontend Architecture
- React components in `projects/app/src/components/` and `packages/web/components/`
- Chakra UI for styling with custom theme in `packages/web/styles/theme.ts`
- i18n support with files in `packages/web/i18n/`
- State management using React Context and Zustand
### Workflow System
- Visual workflow editor using ReactFlow
- Workflow engine in `packages/service/core/workflow/`
- Node definitions in `packages/global/core/workflow/template/`
- Dispatch system for executing workflow nodes
## Development Notes
- **Package Manager**: Uses pnpm with workspace configuration
- **Node Version**: Requires Node.js >=18.16.0, pnpm >=9.0.0
- **Database**: Supports MongoDB, PostgreSQL with pgvector, or Milvus for vector storage
- **AI Integration**: Supports multiple AI providers through unified interface
- **Internationalization**: Full i18n support for Chinese, English, and Japanese
## Key File Patterns
- `.ts` and `.tsx` files use TypeScript throughout
- Database schemas use Mongoose with TypeScript
- API routes follow NextJS conventions
- Component files use React functional components with hooks
- Shared types defined in `packages/global/` with `.d.ts` files
## Environment Configuration
- Configuration files in `projects/app/data/config.json`
- Environment-specific configs supported
- Model configurations in `packages/service/core/ai/config/`


@ -7,7 +7,7 @@ The FastGPT is licensed under the Apache License 2.0, with the following additio
a. Multi-tenant SaaS service: Unless explicitly authorized by FastGPT in writing, you may not use the FastGPT.AI source code to operate a multi-tenant SaaS service that is similar to the FastGPT.
b. LOGO and copyright information: In the process of using FastGPT, you may not remove or modify the LOGO or copyright information in the FastGPT console.
Please contact dennis@sealos.io by email to inquire about licensing matters.
Please contact yujinlong@sealos.io by email to inquire about licensing matters.
2. As a contributor, you should agree that your contributed code:


@ -21,5 +21,5 @@ ifeq ($(proxy), taobao)
else ifeq ($(proxy), clash)
docker build -f $(filePath) -t $(image) . --network host --build-arg HTTP_PROXY=http://127.0.0.1:7890 --build-arg HTTPS_PROXY=http://127.0.0.1:7890
else
docker build --progress=plain -f $(filePath) -t $(image) .
docker build -f $(filePath) -t $(image) .
endif


@ -48,16 +48,17 @@ https://github.com/labring/FastGPT/assets/15308462/7d3a38df-eb0e-4388-9250-2409b
`1` 应用编排能力
- [x] 对话工作流、插件工作流,包含基础的 RPA 节点。
- [x] 用户交互
- [x] Agent 调用
- [x] 用户交互节点
- [x] 双向 MCP
- [ ] Agent 模式
- [ ] 上下文管理
- [ ] AI 生成工作流
`2` 应用调试能力
- [x] 知识库单点搜索测试
- [x] 对话时反馈引用并可修改与删除
- [x] 完整调用链路日志
- [x] 应用评测
- [ ] 应用评测
- [ ] 高级编排 DeBug 调试模式
- [ ] 应用节点日志
@ -74,13 +75,13 @@ https://github.com/labring/FastGPT/assets/15308462/7d3a38df-eb0e-4388-9250-2409b
- [x] completions 接口 (chat 模式对齐 GPT 接口)
- [x] 知识库 CRUD
- [x] 对话 CRUD
- [ ] 自动化 OpenAPI 接口
- [ ] 完整 API Documents
`5` 运营能力
- [x] 免登录分享窗口
- [x] Iframe 一键嵌入
- [x] 统一查阅对话记录,并对数据进行标注
- [x] 应用运营日志
- [ ] 应用运营日志
`6` 其他
- [x] 可视化模型配置。
@ -106,7 +107,7 @@ https://github.com/labring/FastGPT/assets/15308462/7d3a38df-eb0e-4388-9250-2409b
* [部署 FastGPT](https://doc.fastgpt.io/docs/introduction/development/sealos/)
* [系统配置文件说明](https://doc.fastgpt.io/docs/introduction/development/configuration/)
* [多模型配置方案](https://doc.fastgpt.io/docs/introduction/development/modelConfig/one-api/)
* [版本更新/升级介绍](https://doc.fastgpt.io/docs/upgrading)
* [版本更新/升级介绍](https://doc.fastgpt.io/docs/introduction/development/upgrading/intro)
* [OpenAPI API 文档](https://doc.fastgpt.io/docs/introduction/development/openapi/)
* [知识库结构详解](https://doc.fastgpt.io/docs/introduction/guide/knowledge_base/RAG/)
@ -114,6 +115,10 @@ https://github.com/labring/FastGPT/assets/15308462/7d3a38df-eb0e-4388-9250-2409b
<img src="https://img.shields.io/badge/-返回顶部-7d09f1.svg" alt="#" align="right">
</a>
## 🏘️ 加入我们
我们正在寻找志同道合的小伙伴,加速 FastGPT 的发展。你可以通过 [FastGPT 2025 招聘](https://fael3z0zfze.feishu.cn/wiki/P7FOwEmPziVcaYkvVaacnVX1nvg)了解 FastGPT 的招聘信息。
## 💪 相关项目
- [FastGPT-plugin](https://github.com/labring/fastgpt-plugin)
@ -211,4 +216,4 @@ https://github.com/labring/FastGPT/assets/15308462/7d3a38df-eb0e-4388-9250-2409b
1. 允许作为后台服务直接商用,但不允许提供 SaaS 服务。
2. 未经商业授权,任何形式的商用服务均需保留相关版权信息。
3. 完整请查看 [FastGPT Open Source License](./LICENSE)
4. 联系方式Dennis@sealos.io[点击查看商业版定价策略](https://doc.fastgpt.io/docs/introduction/commercial/)
4. 联系方式Dennis@sealos.io[点击查看商业版定价策略](https://doc.fastgpt.io/docs/introduction/shopping_cart/intro/)


@ -80,7 +80,7 @@ Project tech stack: NextJs + TS + ChakraUI + MongoDB + PostgreSQL (PG Vector plu
- [Deploying FastGPT](https://doc.fastgpt.io/docs/introduction/development/docker)
- [Guide on System Configs](https://doc.fastgpt.io/docs/introduction/development/configuration)
- [Configuring Multiple Models](https://doc.fastgpt.io/docs//introduction/development/modelConfig/intro)
- [Version Updates & Upgrades](https://doc.fastgpt.io/docs/introduction/development/upgrading/index)
- [Version Updates & Upgrades](https://doc.fastgpt.io/docs/introduction/development/upgrading/intro)
<a href="#FastGPT">
<img src="https://img.shields.io/badge/-Back_to_Top-7d09f1.svg" alt="#" align="right">
@ -185,7 +185,7 @@ This repository complies with the [FastGPT Open Source License](./LICENSE) open
1. Direct commercial use as a backend service is allowed, but provision of SaaS services is not allowed.
2. Without commercial authorization, any form of commercial service must retain relevant copyright information.
3. For full details, please see [FastGPT Open Source License](./LICENSE)
4. Contact: Dennis@sealos.io , [click to view commercial version pricing strategy](https://doc.fastgpt.io/docs/introduction/commercial/)
4. Contact: Dennis@sealos.io , [click to view commercial version pricing strategy](https://doc.fastgpt.io/docs/introduction/shopping_cart/intro/)
<a href="#FastGPT">
<img src="https://img.shields.io/badge/-Back_to_Top-7d09f1.svg" alt="#" align="right">


@ -102,7 +102,7 @@ https://github.com/labring/FastGPT/assets/15308462/7d3a38df-eb0e-4388-9250-2409b
- [FastGPT のデプロイ](https://doc.fastgpt.io/docs/introduction/development/docker)
- [システム 設定 ガイド](https://doc.fastgpt.io/docs/introduction/development/configuration)
- [複数 モデルの 設定](https://doc.fastgpt.io/docs/introduction/development/modelConfig/ai-proxy)
- [バージョン 更新 とアップグレード](https://doc.fastgpt.io/docs/introduction/development/upgrading/index)
- [バージョン 更新 とアップグレード](https://doc.fastgpt.io/docs/introduction/development/upgrading/intro)
<!-- ## :point_right: ロードマップ
- [FastGPT ロードマップ](https://kjqvjse66l.feishu.cn/docx/RVUxdqE2WolDYyxEKATcM0XXnte) -->


@ -5,7 +5,7 @@
如果您发现了 FastGPT 的安全漏洞,请按照以下步骤进行报告:
1. **报告方式**
发送邮件至:archer@fastgpt.io
发送邮件至:yujinlong@sealos.io
请备注版本以及您的 GitHub 账号
3. **响应时间**


@ -1,73 +0,0 @@
## 更新 docker compose 脚本
### 正常更新(不动服务,只改版本)
1. 更新 `args.json` 中的版本号
2. 在 `FastGPT` 目录执行 `pnpm run gen:deploy` 即可
### 加服务
比如要添加 `example` 服务:
1. `init.mjs``Services Enum` 中添加 fastgptExample: fastgpt-example
2. 在 `args.json` 中添加 image 和 tag, 注意 `args.json``key` 值,要和 `init.mjs``value` 值一致。
3. 更新 templates/docker-compose.[dev|prod].yml 文件,把服务的相关配置加进去,并且:服务的 image 改为 ${{example.image}}:${{example.tag}}
### 加向量库
比如添加 `exampleDB` 向量库:
1. 添加 vector service 配置在 `templates/vector` 下面,例如 `templates/vector/exampleDB.txt` 内容可以参考其他 txt注意缩进image 名字也要替换成 ${{exampleDB.image}}:${{exampleDB:tag}}, service name 必须是 `vectorDB`
2. 在 `args.json` 中添加 `exampleDB` 的配置
3. init.mjs vector enum 中添加 `vector`
4. init.mjs 中添加 vector 的相关配置:
```ts
const vector = {
// pg, milvus, ob ...
vector: {
db: '', // 空即可
config: `/
VECTOR_URL:vectordb://xxxxx
`, //注意 第一行反引号后面的 / 不能少(去除首个换行符); 左边的两个空格的缩进不能变,否则会语法错误
extra: `` // 额外的配置,可以看 ob 的那个,需要一个 config 字段引入 init.sql
}
}
```
5. init.mjs 读入 vector 配置
```json
{ // 这是个块作用域, 直接搜 read in Vectors
// read in Vectors
// pg, ob ....
const vectordb = fs.readFileSync(path.join(process.cwd(), 'templates', 'vector', 'vector.txt'));
vector.vector.db = String(vectordb);
}
```
6. init.mjs 最后生成的时候,需要添加
```ts
fs.promises.writeFile(
path.join(process.cwd(), 'docker', 'cn', 'docker-compose.vector.yml'),
replace(template, 'cn', VectorEnum.vector)
),
fs.promises.writeFile(
path.join(process.cwd(), 'docker', 'global', 'docker-compose.ziliiz.yml'),
replace(template, 'global', VectorEnum.vector)
),
```
## yaml 的锚点和引用
`&` 标志一个锚点
```yaml
x-share-config: &x-share-config 'I am the config content'
x-share-config-list: &x-share-config-list
key1: value
key2: value
```
`*` 引用一个锚点
```yaml
some_other_example: *x-share-config-list
```


@ -1,52 +0,0 @@
{
"tags": {
"fastgpt": "v4.14.4",
"fastgpt-sandbox": "v4.14.4",
"fastgpt-mcp_server": "v4.14.4",
"fastgpt-plugin": "v0.3.4",
"aiproxy": "v0.3.2",
"aiproxy-pg": "0.8.0-pg15",
"mongo": "5.0.18",
"redis": "7.2-alpine",
"minio": "RELEASE.2025-09-07T16-13-09Z",
"pg": "0.8.0-pg15",
"milvus-minio": "RELEASE.2023-03-20T20-16-18Z",
"milvus-etcd": "v3.5.5",
"milvus-standalone": "v2.4.3",
"oceanbase": "4.3.5-lts"
},
"images": {
"cn": {
"fastgpt": "registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt",
"fastgpt-plugin": "registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-plugin",
"fastgpt-sandbox": "registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox",
"fastgpt-mcp_server": "registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-mcp_server",
"aiproxy": "registry.cn-hangzhou.aliyuncs.com/labring/aiproxy",
"aiproxy-pg": "registry.cn-hangzhou.aliyuncs.com/fastgpt/pgvector",
"mongo": "registry.cn-hangzhou.aliyuncs.com/fastgpt/mongo",
"redis": "registry.cn-hangzhou.aliyuncs.com/fastgpt/redis",
"minio": "registry.cn-hangzhou.aliyuncs.com/fastgpt/minio",
"pg": "registry.cn-hangzhou.aliyuncs.com/fastgpt/pgvector",
"milvus-minio": "minio/minio",
"milvus-etcd": "quay.io/coreos/etcd",
"milvus-standalone": "milvusdb/milvus",
"oceanbase": "oceanbase/oceanbase-ce"
},
"global": {
"fastgpt": "ghcr.io/labring/fastgpt",
"fastgpt-plugin": "ghcr.io/labring/fastgpt-plugin",
"fastgpt-sandbox": "ghcr.io/labring/fastgpt-sandbox",
"fastgpt-mcp_server": "ghcr.io/labring/fastgpt-mcp_server",
"aiproxy": "ghcr.io/labring/aiproxy",
"aiproxy-pg": "pgvector/pgvector",
"mongo": "mongo",
"redis": "redis",
"minio": "minio/minio",
"pg": "pgvector/pgvector",
"milvus-minio": "minio/minio",
"milvus-etcd": "quay.io/coreos/etcd",
"milvus-standalone": "milvusdb/milvus",
"oceanbase": "oceanbase/oceanbase-ce"
}
}
}


@ -1,4 +0,0 @@
*
!.gitignore
!docker-compose.yml
!docker-compose.cn.yml


@ -1,229 +0,0 @@
# 用于开发的 docker-compose 文件:
# - 只包含 FastGPT 的最小化运行条件
# - 没有 FastGPT 本体
# - 所有端口都映射到外层
# - pg: 5432
# - mongo: 27017
# - redis: 6379
# - fastgpt-sandbox: 3002
# - fastgpt-plugin: 3003
# - aiproxy: 3010
# - 使用 pgvector 作为默认的向量库
services:
# Vector DB
pg:
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/pgvector:0.8.0-pg15
container_name: pg
restart: always
ports: # 生产环境建议不要暴露
- 5432:5432
networks:
- fastgpt
environment:
# 这里的配置只有首次运行生效。修改后,重启镜像是不会生效的。需要把持久化数据删除再重启,才有效果
- POSTGRES_USER=username
- POSTGRES_PASSWORD=password
- POSTGRES_DB=postgres
volumes:
- ./pg/data:/var/lib/postgresql/data
healthcheck:
test: ['CMD', 'pg_isready', '-U', 'username', '-d', 'postgres']
interval: 5s
timeout: 5s
retries: 10
# DB
mongo:
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/mongo:5.0.18 # cpu 不支持 AVX 时候使用 4.4.29
container_name: mongo
restart: always
ports:
- 27017:27017
networks:
- fastgpt
command: mongod --keyFile /data/mongodb.key --replSet rs0
environment:
- MONGO_INITDB_ROOT_USERNAME=myusername
- MONGO_INITDB_ROOT_PASSWORD=mypassword
volumes:
- ./mongo/data:/data/db
healthcheck:
test:
[
'CMD',
'mongo',
'-u',
'myusername',
'-p',
'mypassword',
'--authenticationDatabase',
'admin',
'--eval',
"db.adminCommand('ping')"
]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
entrypoint:
- bash
- -c
- |
openssl rand -base64 128 > /data/mongodb.key
chmod 400 /data/mongodb.key
chown 999:999 /data/mongodb.key
echo 'const isInited = rs.status().ok === 1
if(!isInited){
rs.initiate({
_id: "rs0",
members: [
{ _id: 0, host: "mongo:27017" }
]
})
}' > /data/initReplicaSet.js
# Start the MongoDB service
exec docker-entrypoint.sh "$$@" &
# Wait for MongoDB to come up
until mongo -u myusername -p mypassword --authenticationDatabase admin --eval "print('waited for connection')"; do
echo "Waiting for MongoDB to start..."
sleep 2
done
# Run the replica-set initialization script
mongo -u myusername -p mypassword --authenticationDatabase admin /data/initReplicaSet.js
# Wait on the MongoDB process started by docker-entrypoint.sh
wait $$!
redis:
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/redis:7.2-alpine
container_name: redis
ports:
- 6379:6379
networks:
- fastgpt
restart: always
command: |
redis-server --requirepass mypassword --loglevel warning --maxclients 10000 --appendonly yes --save 60 10 --maxmemory 4gb --maxmemory-policy noeviction
healthcheck:
test: ['CMD', 'redis-cli', '-a', 'mypassword', 'ping']
interval: 10s
timeout: 3s
retries: 3
start_period: 30s
volumes:
- ./redis/data:/data
fastgpt-minio:
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/minio:RELEASE.2025-09-07T16-13-09Z
container_name: fastgpt-minio
restart: always
networks:
- fastgpt
ports:
- '9000:9000'
- '9001:9001'
environment:
- MINIO_ROOT_USER=minioadmin
- MINIO_ROOT_PASSWORD=minioadmin
volumes:
- ./fastgpt-minio:/data
command: server /data --console-address ":9001"
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:9000/minio/health/live']
interval: 30s
timeout: 20s
retries: 3
sandbox:
container_name: sandbox
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.14.4
ports:
- 3002:3000
networks:
- fastgpt
restart: always
fastgpt-mcp-server:
container_name: fastgpt-mcp-server
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-mcp_server:v4.14.4
ports:
- 3005:3000
networks:
- fastgpt
restart: always
environment:
- FASTGPT_ENDPOINT=http://fastgpt:3000
fastgpt-plugin:
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-plugin:v0.3.4
container_name: fastgpt-plugin
restart: always
ports:
- 3003:3000
networks:
- fastgpt
environment:
- AUTH_TOKEN=token
- S3_EXTERNAL_BASE_URL=http://127.0.0.1:9000 # TODO: replace with the actual IP address of your MinIO host
- S3_ENDPOINT=fastgpt-minio
- S3_PORT=9000
- S3_USE_SSL=false
- S3_ACCESS_KEY=minioadmin
- S3_SECRET_KEY=minioadmin
- S3_PUBLIC_BUCKET=fastgpt-public # bucket for temporary files created by system tools; requires public read, private write
- S3_PRIVATE_BUCKET=fastgpt-private # bucket for hot-installed system plugin files; private read and write
- MONGODB_URI=mongodb://myusername:mypassword@mongo:27017/fastgpt?authSource=admin&directConnection=true
- REDIS_URL=redis://default:mypassword@redis:6379
depends_on:
fastgpt-minio:
condition: service_healthy
# AI Proxy
aiproxy:
image: registry.cn-hangzhou.aliyuncs.com/labring/aiproxy:v0.3.2
container_name: aiproxy
restart: unless-stopped
ports:
- 3010:3000
depends_on:
aiproxy_pg:
condition: service_healthy
networks:
- fastgpt
- aiproxy
environment:
# must match AIPROXY_API_TOKEN in FastGPT
- ADMIN_KEY=aiproxy
# retention time for error-log details (hours)
- LOG_DETAIL_STORAGE_HOURS=1
# database connection string
- SQL_DSN=postgres://postgres:aiproxy@aiproxy_pg:5432/aiproxy
# maximum number of retries
- RETRY_TIMES=3
# billing not required
- BILLING_ENABLED=false
# strict model validation not required
- DISABLE_MODEL_CONFIG=true
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:3000/api/status']
interval: 5s
timeout: 5s
retries: 10
aiproxy_pg:
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/pgvector:0.8.0-pg15 # docker hub
restart: unless-stopped
container_name: aiproxy_pg
volumes:
- ./aiproxy_pg:/var/lib/postgresql/data
networks:
- aiproxy
environment:
TZ: Asia/Shanghai
POSTGRES_USER: postgres
POSTGRES_DB: aiproxy
POSTGRES_PASSWORD: aiproxy
healthcheck:
test: ['CMD', 'pg_isready', '-U', 'postgres', '-d', 'aiproxy']
interval: 5s
timeout: 5s
retries: 10
networks:
fastgpt:
aiproxy:
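The exposed database ports above are a development convenience, and the inline comments advise against exposing them in production. One way to honor that without editing this file is an override file; the sketch below is illustrative only (the file name and the `!reset` YAML tag, available in Docker Compose v2.17+, are assumptions, not part of this repo):

```yaml
# docker-compose.override.yml — illustrative sketch only.
# Compose merges this over the base file, and the `!reset` tag clears
# the inherited port mappings, so pg/mongo/redis remain reachable only
# on the internal `fastgpt` network.
services:
  pg:
    ports: !reset []
  mongo:
    ports: !reset []
  redis:
    ports: !reset []
```

When both files sit in the same directory, `docker compose up -d` picks up the override automatically.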


@@ -1,229 +0,0 @@
# docker-compose file for development:
# - contains only the minimal dependencies FastGPT needs to run
# - does not include FastGPT itself
# - all ports are mapped to the host
#   - pg: 5432
#   - mongo: 27017
#   - redis: 6379
#   - fastgpt-sandbox: 3002
#   - fastgpt-plugin: 3003
#   - aiproxy: 3010
# - uses pgvector as the default vector store
services:
# Vector DB
pg:
image: pgvector/pgvector:0.8.0-pg15
container_name: pg
restart: always
ports: # recommended not to expose these in production
- 5432:5432
networks:
- fastgpt
environment:
# These settings only take effect on the first run. Changing them and restarting the container has no effect; delete the persisted data and restart for changes to apply.
- POSTGRES_USER=username
- POSTGRES_PASSWORD=password
- POSTGRES_DB=postgres
volumes:
- ./pg/data:/var/lib/postgresql/data
healthcheck:
test: ['CMD', 'pg_isready', '-U', 'username', '-d', 'postgres']
interval: 5s
timeout: 5s
retries: 10
# DB
mongo:
image: mongo:5.0.18 # use 4.4.29 if the CPU does not support AVX
container_name: mongo
restart: always
ports:
- 27017:27017
networks:
- fastgpt
command: mongod --keyFile /data/mongodb.key --replSet rs0
environment:
- MONGO_INITDB_ROOT_USERNAME=myusername
- MONGO_INITDB_ROOT_PASSWORD=mypassword
volumes:
- ./mongo/data:/data/db
healthcheck:
test:
[
'CMD',
'mongo',
'-u',
'myusername',
'-p',
'mypassword',
'--authenticationDatabase',
'admin',
'--eval',
"db.adminCommand('ping')"
]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
entrypoint:
- bash
- -c
- |
openssl rand -base64 128 > /data/mongodb.key
chmod 400 /data/mongodb.key
chown 999:999 /data/mongodb.key
echo 'const isInited = rs.status().ok === 1
if(!isInited){
rs.initiate({
_id: "rs0",
members: [
{ _id: 0, host: "mongo:27017" }
]
})
}' > /data/initReplicaSet.js
# Start the MongoDB service
exec docker-entrypoint.sh "$$@" &
# Wait for MongoDB to come up
until mongo -u myusername -p mypassword --authenticationDatabase admin --eval "print('waited for connection')"; do
echo "Waiting for MongoDB to start..."
sleep 2
done
# Run the replica-set initialization script
mongo -u myusername -p mypassword --authenticationDatabase admin /data/initReplicaSet.js
# Wait on the MongoDB process started by docker-entrypoint.sh
wait $$!
redis:
image: redis:7.2-alpine
container_name: redis
ports:
- 6379:6379
networks:
- fastgpt
restart: always
command: |
redis-server --requirepass mypassword --loglevel warning --maxclients 10000 --appendonly yes --save 60 10 --maxmemory 4gb --maxmemory-policy noeviction
healthcheck:
test: ['CMD', 'redis-cli', '-a', 'mypassword', 'ping']
interval: 10s
timeout: 3s
retries: 3
start_period: 30s
volumes:
- ./redis/data:/data
fastgpt-minio:
image: minio/minio:RELEASE.2025-09-07T16-13-09Z
container_name: fastgpt-minio
restart: always
networks:
- fastgpt
ports:
- '9000:9000'
- '9001:9001'
environment:
- MINIO_ROOT_USER=minioadmin
- MINIO_ROOT_PASSWORD=minioadmin
volumes:
- ./fastgpt-minio:/data
command: server /data --console-address ":9001"
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:9000/minio/health/live']
interval: 30s
timeout: 20s
retries: 3
sandbox:
container_name: sandbox
image: ghcr.io/labring/fastgpt-sandbox:v4.14.4
ports:
- 3002:3000
networks:
- fastgpt
restart: always
fastgpt-mcp-server:
container_name: fastgpt-mcp-server
image: ghcr.io/labring/fastgpt-mcp_server:v4.14.4
ports:
- 3005:3000
networks:
- fastgpt
restart: always
environment:
- FASTGPT_ENDPOINT=http://fastgpt:3000
fastgpt-plugin:
image: ghcr.io/labring/fastgpt-plugin:v0.3.4
container_name: fastgpt-plugin
restart: always
ports:
- 3003:3000
networks:
- fastgpt
environment:
- AUTH_TOKEN=token
- S3_EXTERNAL_BASE_URL=http://127.0.0.1:9000 # TODO: replace with the actual IP address of your MinIO host
- S3_ENDPOINT=fastgpt-minio
- S3_PORT=9000
- S3_USE_SSL=false
- S3_ACCESS_KEY=minioadmin
- S3_SECRET_KEY=minioadmin
- S3_PUBLIC_BUCKET=fastgpt-public # bucket for temporary files created by system tools; requires public read, private write
- S3_PRIVATE_BUCKET=fastgpt-private # bucket for hot-installed system plugin files; private read and write
- MONGODB_URI=mongodb://myusername:mypassword@mongo:27017/fastgpt?authSource=admin&directConnection=true
- REDIS_URL=redis://default:mypassword@redis:6379
depends_on:
fastgpt-minio:
condition: service_healthy
# AI Proxy
aiproxy:
image: ghcr.io/labring/aiproxy:v0.3.2
container_name: aiproxy
restart: unless-stopped
ports:
- 3010:3000
depends_on:
aiproxy_pg:
condition: service_healthy
networks:
- fastgpt
- aiproxy
environment:
# must match AIPROXY_API_TOKEN in FastGPT
- ADMIN_KEY=aiproxy
# retention time for error-log details (hours)
- LOG_DETAIL_STORAGE_HOURS=1
# database connection string
- SQL_DSN=postgres://postgres:aiproxy@aiproxy_pg:5432/aiproxy
# maximum number of retries
- RETRY_TIMES=3
# billing not required
- BILLING_ENABLED=false
# strict model validation not required
- DISABLE_MODEL_CONFIG=true
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:3000/api/status']
interval: 5s
timeout: 5s
retries: 10
aiproxy_pg:
image: pgvector/pgvector:0.8.0-pg15 # docker hub
restart: unless-stopped
container_name: aiproxy_pg
volumes:
- ./aiproxy_pg:/var/lib/postgresql/data
networks:
- aiproxy
environment:
TZ: Asia/Shanghai
POSTGRES_USER: postgres
POSTGRES_DB: aiproxy
POSTGRES_PASSWORD: aiproxy
healthcheck:
test: ['CMD', 'pg_isready', '-U', 'postgres', '-d', 'aiproxy']
interval: 5s
timeout: 5s
retries: 10
networks:
fastgpt:
aiproxy:
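This file duplicates the CN-registry development variant above with only the image references changed. As an illustrative alternative (not how this repo actually organizes it), Compose variable interpolation with a default could parameterize the registry:

```yaml
# Illustrative sketch only: select the registry via an environment
# variable or a .env file, defaulting to GitHub Container Registry:
#   REGISTRY=registry.cn-hangzhou.aliyuncs.com/fastgpt docker compose up -d
services:
  sandbox:
    image: ${REGISTRY:-ghcr.io/labring}/fastgpt-sandbox:v4.14.4
```

Note the two registries also differ in namespace for some images (e.g. `pgvector/pgvector` vs. `.../fastgpt/pgvector`), so a single variable would not cover every service.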


@@ -1,311 +0,0 @@
# docker-compose file for deployment:
# - FastGPT port mapping: 3000:3000
# - FastGPT-mcp-server port mapping: 3005:3000
# - change the default usernames and passwords before running
# plugin auth token
x-plugin-auth-token: &x-plugin-auth-token 'token'
# aiproxy token
x-aiproxy-token: &x-aiproxy-token 'token'
# shared database connection settings
x-share-db-config: &x-share-db-config
MONGODB_URI: mongodb://myusername:mypassword@mongo:27017/fastgpt?authSource=admin
DB_MAX_LINK: 100
REDIS_URL: redis://default:mypassword@redis:6379
S3_EXTERNAL_BASE_URL: https://minio.com # publicly accessible S3 address
S3_ENDPOINT: fastgpt-minio
S3_PORT: 9000
S3_USE_SSL: false
S3_ACCESS_KEY: minioadmin
S3_SECRET_KEY: minioadmin
S3_PUBLIC_BUCKET: fastgpt-public # public-read, private-write bucket
S3_PRIVATE_BUCKET: fastgpt-private # private read/write bucket
# vector store settings
x-vec-config: &x-vec-config
MILVUS_ADDRESS: http://milvusStandalone:19530
MILVUS_TOKEN: none
version: '3.3'
services:
# Vector DB
milvus-minio:
container_name: milvus-minio
image: minio/minio:RELEASE.2023-03-20T20-16-18Z
environment:
MINIO_ACCESS_KEY: minioadmin
MINIO_SECRET_KEY: minioadmin
networks:
- vector
volumes:
- ./milvus-minio:/minio_data
command: minio server /minio_data --console-address ":9001"
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:9000/minio/health/live']
interval: 30s
timeout: 20s
retries: 3
# milvus
milvus-etcd:
container_name: milvus-etcd
image: quay.io/coreos/etcd:v3.5.5
environment:
- ETCD_AUTO_COMPACTION_MODE=revision
- ETCD_AUTO_COMPACTION_RETENTION=1000
- ETCD_QUOTA_BACKEND_BYTES=4294967296
- ETCD_SNAPSHOT_COUNT=50000
networks:
- vector
volumes:
- ./milvus/etcd:/etcd
command: etcd -advertise-client-urls=http://127.0.0.1:2379 -listen-client-urls http://0.0.0.0:2379 --data-dir /etcd
healthcheck:
test: ['CMD', 'etcdctl', 'endpoint', 'health']
interval: 30s
timeout: 20s
retries: 3
vectorDB:
container_name: milvusStandalone
image: milvusdb/milvus:v2.4.3
command: ['milvus', 'run', 'standalone']
security_opt:
- seccomp:unconfined
environment:
ETCD_ENDPOINTS: milvus-etcd:2379
MINIO_ADDRESS: milvus-minio:9000
networks:
- fastgpt
- vector
volumes:
- ./milvus/data:/var/lib/milvus
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:9091/healthz']
interval: 30s
start_period: 90s
timeout: 20s
retries: 3
depends_on:
- 'milvus-etcd'
- 'milvus-minio'
mongo:
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/mongo:5.0.18 # use 4.4.29 if the CPU does not support AVX
container_name: mongo
restart: always
networks:
- fastgpt
command: mongod --keyFile /data/mongodb.key --replSet rs0
environment:
- MONGO_INITDB_ROOT_USERNAME=myusername
- MONGO_INITDB_ROOT_PASSWORD=mypassword
volumes:
- ./mongo/data:/data/db
healthcheck:
test: ['CMD', 'mongo', '-u', 'myusername', '-p', 'mypassword', '--authenticationDatabase', 'admin', '--eval', "db.adminCommand('ping')"]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
entrypoint:
- bash
- -c
- |
openssl rand -base64 128 > /data/mongodb.key
chmod 400 /data/mongodb.key
chown 999:999 /data/mongodb.key
echo 'const isInited = rs.status().ok === 1
if(!isInited){
rs.initiate({
_id: "rs0",
members: [
{ _id: 0, host: "mongo:27017" }
]
})
}' > /data/initReplicaSet.js
# Start the MongoDB service
exec docker-entrypoint.sh "$$@" &
# Wait for MongoDB to come up
until mongo -u myusername -p mypassword --authenticationDatabase admin --eval "print('waited for connection')"; do
echo "Waiting for MongoDB to start..."
sleep 2
done
# Run the replica-set initialization script
mongo -u myusername -p mypassword --authenticationDatabase admin /data/initReplicaSet.js
# Wait on the MongoDB process started by docker-entrypoint.sh
wait $$!
redis:
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/redis:7.2-alpine
container_name: redis
networks:
- fastgpt
restart: always
command: |
redis-server --requirepass mypassword --loglevel warning --maxclients 10000 --appendonly yes --save 60 10 --maxmemory 4gb --maxmemory-policy noeviction
healthcheck:
test: ['CMD', 'redis-cli', '-a', 'mypassword', 'ping']
interval: 10s
timeout: 3s
retries: 3
start_period: 30s
volumes:
- ./redis/data:/data
fastgpt-minio:
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/minio:RELEASE.2025-09-07T16-13-09Z
container_name: fastgpt-minio
restart: always
ports:
- 9000:9000
- 9001:9001
networks:
- fastgpt
environment:
- MINIO_ROOT_USER=minioadmin
- MINIO_ROOT_PASSWORD=minioadmin
volumes:
- ./fastgpt-minio:/data
command: server /data --console-address ":9001"
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:9000/minio/health/live']
interval: 30s
timeout: 20s
retries: 3
fastgpt:
container_name: fastgpt
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.14.4 # git
ports:
- 3000:3000
networks:
- fastgpt
depends_on:
- mongo
- sandbox
- vectorDB
restart: always
environment:
<<: [*x-share-db-config, *x-vec-config]
# Externally accessible frontend address, used to build absolute URLs for file resources, e.g. https://fastgpt.cn (must not be localhost). Optional: if unset, images sent to the model carry relative paths instead of full URLs, and the model could forge the Host.
FE_DOMAIN:
# Password for the root account (username: root). To change it, edit this variable and restart.
DEFAULT_ROOT_PSW: 1234
# key for signing login tokens
TOKEN_KEY: any
# root key, mainly used for initialization requests during upgrades
ROOT_KEY: root_key
# key for encrypting file-read tokens
FILE_TOKEN_KEY: filetoken
# AES-256 key for encrypting secrets
AES256_SECRET_KEY: fastgptkey
# plugin service address
PLUGIN_BASE_URL: http://fastgpt-plugin:3000
PLUGIN_TOKEN: *x-plugin-auth-token
# sandbox service address
SANDBOX_URL: http://sandbox:3000
# AI Proxy address; takes precedence when set
AIPROXY_API_ENDPOINT: http://aiproxy:3000
# AI Proxy admin token; must match the ADMIN_KEY environment variable in AI Proxy
AIPROXY_API_TOKEN: *x-aiproxy-token
# log level: debug, info, warn, error
LOG_LEVEL: info
STORE_LOG_LEVEL: warn
# maximum number of workflow node runs
WORKFLOW_MAX_RUN_TIMES: 1000
# batch-execution node: maximum input length
WORKFLOW_MAX_LOOP_TIMES: 100
# chat file expiration, in days
CHAT_FILE_EXPIRE_TIME: 7
# maximum request body size the server accepts, in MB
SERVICE_REQUEST_MAX_CONTENT_LENGTH: 10
volumes:
- ./config.json:/app/data/config.json
sandbox:
container_name: sandbox
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.14.4
networks:
- fastgpt
restart: always
fastgpt-mcp-server:
container_name: fastgpt-mcp-server
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-mcp_server:v4.14.4
networks:
- fastgpt
ports:
- 3005:3000
restart: always
environment:
- FASTGPT_ENDPOINT=http://fastgpt:3000
fastgpt-plugin:
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-plugin:v0.3.4
container_name: fastgpt-plugin
restart: always
networks:
- fastgpt
environment:
<<: *x-share-db-config
AUTH_TOKEN: *x-plugin-auth-token
# maximum request/response body size for tool network requests
SERVICE_REQUEST_MAX_CONTENT_LENGTH: 10
# maximum API request body size
MAX_API_SIZE: 10
depends_on:
fastgpt-minio:
condition: service_healthy
# AI Proxy
aiproxy:
image: registry.cn-hangzhou.aliyuncs.com/labring/aiproxy:v0.3.2
container_name: aiproxy
restart: unless-stopped
depends_on:
aiproxy_pg:
condition: service_healthy
networks:
- fastgpt
- aiproxy
environment:
# must match AIPROXY_API_TOKEN in FastGPT
ADMIN_KEY: *x-aiproxy-token
# retention time for error-log details (hours)
LOG_DETAIL_STORAGE_HOURS: 1
# database connection string
SQL_DSN: postgres://postgres:aiproxy@aiproxy_pg:5432/aiproxy
# maximum number of retries
RETRY_TIMES: 3
# billing not required
BILLING_ENABLED: false
# strict model validation not required
DISABLE_MODEL_CONFIG: true
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:3000/api/status']
interval: 5s
timeout: 5s
retries: 10
aiproxy_pg:
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/pgvector:0.8.0-pg15 # docker hub
restart: unless-stopped
container_name: aiproxy_pg
volumes:
- ./aiproxy_pg:/var/lib/postgresql/data
networks:
- aiproxy
environment:
TZ: Asia/Shanghai
POSTGRES_USER: postgres
POSTGRES_DB: aiproxy
POSTGRES_PASSWORD: aiproxy
healthcheck:
test: ['CMD', 'pg_isready', '-U', 'postgres', '-d', 'aiproxy']
interval: 5s
timeout: 5s
retries: 10
networks:
fastgpt:
aiproxy:
vector:
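The deploy files above lean on YAML anchors and merge keys: `<<: [*x-share-db-config, *x-vec-config]` splices both extension maps into a service's `environment`. A minimal, self-contained sketch of the merge semantics (names are illustrative):

```yaml
# Illustrative only. Per the YAML merge-key convention (supported by
# Docker Compose), aliases earlier in the list win on duplicate keys.
x-a: &a
  FOO: '1'
  SHARED: from-a
x-b: &b
  SHARED: from-b
merged:
  <<: [*a, *b]
  BAR: '2'
# merged resolves to: {FOO: '1', SHARED: from-a, BAR: '2'}
```

Keys set directly on the mapping (like `BAR` here, or `FE_DOMAIN` in the compose files) always override merged-in values.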


@@ -1,293 +0,0 @@
# docker-compose file for deployment:
# - FastGPT port mapping: 3000:3000
# - FastGPT-mcp-server port mapping: 3005:3000
# - change the default usernames and passwords before running
# plugin auth token
x-plugin-auth-token: &x-plugin-auth-token 'token'
# aiproxy token
x-aiproxy-token: &x-aiproxy-token 'token'
# shared database connection settings
x-share-db-config: &x-share-db-config
MONGODB_URI: mongodb://myusername:mypassword@mongo:27017/fastgpt?authSource=admin
DB_MAX_LINK: 100
REDIS_URL: redis://default:mypassword@redis:6379
S3_EXTERNAL_BASE_URL: https://minio.com # publicly accessible S3 address
S3_ENDPOINT: fastgpt-minio
S3_PORT: 9000
S3_USE_SSL: false
S3_ACCESS_KEY: minioadmin
S3_SECRET_KEY: minioadmin
S3_PUBLIC_BUCKET: fastgpt-public # public-read, private-write bucket
S3_PRIVATE_BUCKET: fastgpt-private # private read/write bucket
# vector store settings
x-vec-config: &x-vec-config
OCEANBASE_URL: mysql://root%40tenantname:tenantpassword@ob:2881/test
version: '3.3'
services:
# Vector DB
vectorDB:
image: oceanbase/oceanbase-ce:4.3.5-lts
container_name: ob
restart: always
# ports: # recommended not to expose in production
# - 2881:2881
networks:
- fastgpt
environment:
# These settings only take effect on the first run. Changing them and restarting the container has no effect; delete the persisted data and restart for changes to apply.
- OB_SYS_PASSWORD=obsyspassword
# Unlike traditional databases, an OceanBase account has extra fields: user name, tenant name and cluster name, classically written as "user@tenant#cluster".
# e.g. when connecting with the mysql client, this file's defaults require "-uroot@tenantname"
- OB_TENANT_NAME=tenantname
- OB_TENANT_PASSWORD=tenantpassword
# MODE is MINI or NORMAL; NORMAL uses as much of the host's resources as possible
- MODE=MINI
- OB_SERVER_IP=127.0.0.1
# See the official OceanBase docs for more environment variables: https://www.oceanbase.com/docs/common-oceanbase-database-cn-1000000002013494
volumes:
- ../ob/data:/root/ob
- ../ob/config:/root/.obd/cluster
configs:
- source: init_sql
target: /root/boot/init.d/init.sql
healthcheck:
# obclient -h127.0.0.1 -P2881 -uroot@tenantname -ptenantpassword -e "SELECT 1;"
test:
[
"CMD-SHELL",
'obclient -h$${OB_SERVER_IP} -P2881 -uroot@$${OB_TENANT_NAME} -p$${OB_TENANT_PASSWORD} -e "SELECT 1;"',
]
interval: 30s
timeout: 10s
retries: 1000
start_period: 10s
mongo:
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/mongo:5.0.18 # use 4.4.29 if the CPU does not support AVX
container_name: mongo
restart: always
networks:
- fastgpt
command: mongod --keyFile /data/mongodb.key --replSet rs0
environment:
- MONGO_INITDB_ROOT_USERNAME=myusername
- MONGO_INITDB_ROOT_PASSWORD=mypassword
volumes:
- ./mongo/data:/data/db
healthcheck:
test: ['CMD', 'mongo', '-u', 'myusername', '-p', 'mypassword', '--authenticationDatabase', 'admin', '--eval', "db.adminCommand('ping')"]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
entrypoint:
- bash
- -c
- |
openssl rand -base64 128 > /data/mongodb.key
chmod 400 /data/mongodb.key
chown 999:999 /data/mongodb.key
echo 'const isInited = rs.status().ok === 1
if(!isInited){
rs.initiate({
_id: "rs0",
members: [
{ _id: 0, host: "mongo:27017" }
]
})
}' > /data/initReplicaSet.js
# Start the MongoDB service
exec docker-entrypoint.sh "$$@" &
# Wait for MongoDB to come up
until mongo -u myusername -p mypassword --authenticationDatabase admin --eval "print('waited for connection')"; do
echo "Waiting for MongoDB to start..."
sleep 2
done
# Run the replica-set initialization script
mongo -u myusername -p mypassword --authenticationDatabase admin /data/initReplicaSet.js
# Wait on the MongoDB process started by docker-entrypoint.sh
wait $$!
redis:
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/redis:7.2-alpine
container_name: redis
networks:
- fastgpt
restart: always
command: |
redis-server --requirepass mypassword --loglevel warning --maxclients 10000 --appendonly yes --save 60 10 --maxmemory 4gb --maxmemory-policy noeviction
healthcheck:
test: ['CMD', 'redis-cli', '-a', 'mypassword', 'ping']
interval: 10s
timeout: 3s
retries: 3
start_period: 30s
volumes:
- ./redis/data:/data
fastgpt-minio:
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/minio:RELEASE.2025-09-07T16-13-09Z
container_name: fastgpt-minio
restart: always
ports:
- 9000:9000
- 9001:9001
networks:
- fastgpt
environment:
- MINIO_ROOT_USER=minioadmin
- MINIO_ROOT_PASSWORD=minioadmin
volumes:
- ./fastgpt-minio:/data
command: server /data --console-address ":9001"
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:9000/minio/health/live']
interval: 30s
timeout: 20s
retries: 3
fastgpt:
container_name: fastgpt
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.14.4 # git
ports:
- 3000:3000
networks:
- fastgpt
depends_on:
- mongo
- sandbox
- vectorDB
restart: always
environment:
<<: [*x-share-db-config, *x-vec-config]
# Externally accessible frontend address, used to build absolute URLs for file resources, e.g. https://fastgpt.cn (must not be localhost). Optional: if unset, images sent to the model carry relative paths instead of full URLs, and the model could forge the Host.
FE_DOMAIN:
# Password for the root account (username: root). To change it, edit this variable and restart.
DEFAULT_ROOT_PSW: 1234
# key for signing login tokens
TOKEN_KEY: any
# root key, mainly used for initialization requests during upgrades
ROOT_KEY: root_key
# key for encrypting file-read tokens
FILE_TOKEN_KEY: filetoken
# AES-256 key for encrypting secrets
AES256_SECRET_KEY: fastgptkey
# plugin service address
PLUGIN_BASE_URL: http://fastgpt-plugin:3000
PLUGIN_TOKEN: *x-plugin-auth-token
# sandbox service address
SANDBOX_URL: http://sandbox:3000
# AI Proxy address; takes precedence when set
AIPROXY_API_ENDPOINT: http://aiproxy:3000
# AI Proxy admin token; must match the ADMIN_KEY environment variable in AI Proxy
AIPROXY_API_TOKEN: *x-aiproxy-token
# log level: debug, info, warn, error
LOG_LEVEL: info
STORE_LOG_LEVEL: warn
# maximum number of workflow node runs
WORKFLOW_MAX_RUN_TIMES: 1000
# batch-execution node: maximum input length
WORKFLOW_MAX_LOOP_TIMES: 100
# chat file expiration, in days
CHAT_FILE_EXPIRE_TIME: 7
# maximum request body size the server accepts, in MB
SERVICE_REQUEST_MAX_CONTENT_LENGTH: 10
volumes:
- ./config.json:/app/data/config.json
sandbox:
container_name: sandbox
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.14.4
networks:
- fastgpt
restart: always
fastgpt-mcp-server:
container_name: fastgpt-mcp-server
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-mcp_server:v4.14.4
networks:
- fastgpt
ports:
- 3005:3000
restart: always
environment:
- FASTGPT_ENDPOINT=http://fastgpt:3000
fastgpt-plugin:
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-plugin:v0.3.4
container_name: fastgpt-plugin
restart: always
networks:
- fastgpt
environment:
<<: *x-share-db-config
AUTH_TOKEN: *x-plugin-auth-token
# maximum request/response body size for tool network requests
SERVICE_REQUEST_MAX_CONTENT_LENGTH: 10
# maximum API request body size
MAX_API_SIZE: 10
depends_on:
fastgpt-minio:
condition: service_healthy
# AI Proxy
aiproxy:
image: registry.cn-hangzhou.aliyuncs.com/labring/aiproxy:v0.3.2
container_name: aiproxy
restart: unless-stopped
depends_on:
aiproxy_pg:
condition: service_healthy
networks:
- fastgpt
- aiproxy
environment:
# must match AIPROXY_API_TOKEN in FastGPT
ADMIN_KEY: *x-aiproxy-token
# retention time for error-log details (hours)
LOG_DETAIL_STORAGE_HOURS: 1
# database connection string
SQL_DSN: postgres://postgres:aiproxy@aiproxy_pg:5432/aiproxy
# maximum number of retries
RETRY_TIMES: 3
# billing not required
BILLING_ENABLED: false
# strict model validation not required
DISABLE_MODEL_CONFIG: true
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:3000/api/status']
interval: 5s
timeout: 5s
retries: 10
aiproxy_pg:
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/pgvector:0.8.0-pg15 # docker hub
restart: unless-stopped
container_name: aiproxy_pg
volumes:
- ./aiproxy_pg:/var/lib/postgresql/data
networks:
- aiproxy
environment:
TZ: Asia/Shanghai
POSTGRES_USER: postgres
POSTGRES_DB: aiproxy
POSTGRES_PASSWORD: aiproxy
healthcheck:
test: ['CMD', 'pg_isready', '-U', 'postgres', '-d', 'aiproxy']
interval: 5s
timeout: 5s
retries: 10
networks:
fastgpt:
aiproxy:
vector:
configs:
init_sql:
name: init_sql
content: |
ALTER SYSTEM SET ob_vector_memory_limit_percentage = 30;
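One detail worth noting in the `x-vec-config` above: the OceanBase account is tenant-qualified, so the `@` inside the user name must be percent-encoded in `OCEANBASE_URL`. A breakdown of the default value, matching this file's defaults:

```yaml
# OCEANBASE_URL: mysql://root%40tenantname:tenantpassword@ob:2881/test
#   user:      root@tenantname   (encoded as root%40tenantname; %40 = '@')
#   password:  tenantpassword
#   host:port: ob:2881           (the container_name and SQL port above)
#   database:  test
```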


@@ -1,269 +0,0 @@
# docker-compose file for deployment:
# - FastGPT port mapping: 3000:3000
# - FastGPT-mcp-server port mapping: 3005:3000
# - change the default usernames and passwords before running
# plugin auth token
x-plugin-auth-token: &x-plugin-auth-token 'token'
# aiproxy token
x-aiproxy-token: &x-aiproxy-token 'token'
# shared database connection settings
x-share-db-config: &x-share-db-config
MONGODB_URI: mongodb://myusername:mypassword@mongo:27017/fastgpt?authSource=admin
DB_MAX_LINK: 100
REDIS_URL: redis://default:mypassword@redis:6379
S3_EXTERNAL_BASE_URL: https://minio.com # publicly accessible S3 address
S3_ENDPOINT: fastgpt-minio
S3_PORT: 9000
S3_USE_SSL: false
S3_ACCESS_KEY: minioadmin
S3_SECRET_KEY: minioadmin
S3_PUBLIC_BUCKET: fastgpt-public # public-read, private-write bucket
S3_PRIVATE_BUCKET: fastgpt-private # private read/write bucket
# vector store settings
x-vec-config: &x-vec-config
PG_URL: postgresql://username:password@pg:5432/postgres
version: '3.3'
services:
# Vector DB
vectorDB:
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/pgvector:0.8.0-pg15
container_name: pg
restart: always
networks:
- fastgpt
environment:
# These settings only take effect on the first run. Changing them and restarting the container has no effect; delete the persisted data and restart for changes to apply.
- POSTGRES_USER=username
- POSTGRES_PASSWORD=password
- POSTGRES_DB=postgres
volumes:
- ./pg/data:/var/lib/postgresql/data
healthcheck:
test: ['CMD', 'pg_isready', '-U', 'username', '-d', 'postgres']
interval: 5s
timeout: 5s
retries: 10
mongo:
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/mongo:5.0.18 # use 4.4.29 if the CPU does not support AVX
container_name: mongo
restart: always
networks:
- fastgpt
command: mongod --keyFile /data/mongodb.key --replSet rs0
environment:
- MONGO_INITDB_ROOT_USERNAME=myusername
- MONGO_INITDB_ROOT_PASSWORD=mypassword
volumes:
- ./mongo/data:/data/db
healthcheck:
test: ['CMD', 'mongo', '-u', 'myusername', '-p', 'mypassword', '--authenticationDatabase', 'admin', '--eval', "db.adminCommand('ping')"]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
entrypoint:
- bash
- -c
- |
openssl rand -base64 128 > /data/mongodb.key
chmod 400 /data/mongodb.key
chown 999:999 /data/mongodb.key
echo 'const isInited = rs.status().ok === 1
if(!isInited){
rs.initiate({
_id: "rs0",
members: [
{ _id: 0, host: "mongo:27017" }
]
})
}' > /data/initReplicaSet.js
# Start the MongoDB service
exec docker-entrypoint.sh "$$@" &
# Wait for MongoDB to come up
until mongo -u myusername -p mypassword --authenticationDatabase admin --eval "print('waited for connection')"; do
echo "Waiting for MongoDB to start..."
sleep 2
done
# Run the replica-set initialization script
mongo -u myusername -p mypassword --authenticationDatabase admin /data/initReplicaSet.js
# Wait on the MongoDB process started by docker-entrypoint.sh
wait $$!
redis:
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/redis:7.2-alpine
container_name: redis
networks:
- fastgpt
restart: always
command: |
redis-server --requirepass mypassword --loglevel warning --maxclients 10000 --appendonly yes --save 60 10 --maxmemory 4gb --maxmemory-policy noeviction
healthcheck:
test: ['CMD', 'redis-cli', '-a', 'mypassword', 'ping']
interval: 10s
timeout: 3s
retries: 3
start_period: 30s
volumes:
- ./redis/data:/data
fastgpt-minio:
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/minio:RELEASE.2025-09-07T16-13-09Z
container_name: fastgpt-minio
restart: always
ports:
- 9000:9000
- 9001:9001
networks:
- fastgpt
environment:
- MINIO_ROOT_USER=minioadmin
- MINIO_ROOT_PASSWORD=minioadmin
volumes:
- ./fastgpt-minio:/data
command: server /data --console-address ":9001"
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:9000/minio/health/live']
interval: 30s
timeout: 20s
retries: 3
fastgpt:
container_name: fastgpt
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.14.4 # git
ports:
- 3000:3000
networks:
- fastgpt
depends_on:
- mongo
- sandbox
- vectorDB
restart: always
environment:
<<: [*x-share-db-config, *x-vec-config]
# Externally accessible frontend address, used to build absolute URLs for file resources, e.g. https://fastgpt.cn (must not be localhost). Optional: if unset, images sent to the model carry relative paths instead of full URLs, and the model could forge the Host.
FE_DOMAIN:
# Password for the root account (username: root). To change it, edit this variable and restart.
DEFAULT_ROOT_PSW: 1234
# key for signing login tokens
TOKEN_KEY: any
# root key, mainly used for initialization requests during upgrades
ROOT_KEY: root_key
# key for encrypting file-read tokens
FILE_TOKEN_KEY: filetoken
# AES-256 key for encrypting secrets
AES256_SECRET_KEY: fastgptkey
# plugin service address
PLUGIN_BASE_URL: http://fastgpt-plugin:3000
PLUGIN_TOKEN: *x-plugin-auth-token
# sandbox service address
SANDBOX_URL: http://sandbox:3000
# AI Proxy address; takes precedence when set
AIPROXY_API_ENDPOINT: http://aiproxy:3000
# AI Proxy admin token; must match the ADMIN_KEY environment variable in AI Proxy
AIPROXY_API_TOKEN: *x-aiproxy-token
# log level: debug, info, warn, error
LOG_LEVEL: info
STORE_LOG_LEVEL: warn
# maximum number of workflow node runs
WORKFLOW_MAX_RUN_TIMES: 1000
# batch-execution node: maximum input length
WORKFLOW_MAX_LOOP_TIMES: 100
# chat file expiration, in days
CHAT_FILE_EXPIRE_TIME: 7
# maximum request body size the server accepts, in MB
SERVICE_REQUEST_MAX_CONTENT_LENGTH: 10
volumes:
- ./config.json:/app/data/config.json
sandbox:
container_name: sandbox
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.14.4
networks:
- fastgpt
restart: always
fastgpt-mcp-server:
container_name: fastgpt-mcp-server
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-mcp_server:v4.14.4
networks:
- fastgpt
ports:
- 3005:3000
restart: always
environment:
- FASTGPT_ENDPOINT=http://fastgpt:3000
fastgpt-plugin:
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-plugin:v0.3.4
container_name: fastgpt-plugin
restart: always
networks:
- fastgpt
environment:
<<: *x-share-db-config
AUTH_TOKEN: *x-plugin-auth-token
# maximum request/response body size for tool network requests
SERVICE_REQUEST_MAX_CONTENT_LENGTH: 10
# maximum API request body size
MAX_API_SIZE: 10
depends_on:
fastgpt-minio:
condition: service_healthy
# AI Proxy
aiproxy:
image: registry.cn-hangzhou.aliyuncs.com/labring/aiproxy:v0.3.2
container_name: aiproxy
restart: unless-stopped
depends_on:
aiproxy_pg:
condition: service_healthy
networks:
- fastgpt
- aiproxy
environment:
# must match AIPROXY_API_TOKEN in FastGPT
ADMIN_KEY: *x-aiproxy-token
# retention time for error-log details (hours)
LOG_DETAIL_STORAGE_HOURS: 1
# database connection string
SQL_DSN: postgres://postgres:aiproxy@aiproxy_pg:5432/aiproxy
# maximum number of retries
RETRY_TIMES: 3
# billing not required
BILLING_ENABLED: false
# strict model validation not required
DISABLE_MODEL_CONFIG: true
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:3000/api/status']
interval: 5s
timeout: 5s
retries: 10
aiproxy_pg:
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/pgvector:0.8.0-pg15 # docker hub
restart: unless-stopped
container_name: aiproxy_pg
volumes:
- ./aiproxy_pg:/var/lib/postgresql/data
networks:
- aiproxy
environment:
TZ: Asia/Shanghai
POSTGRES_USER: postgres
POSTGRES_DB: aiproxy
POSTGRES_PASSWORD: aiproxy
healthcheck:
test: ['CMD', 'pg_isready', '-U', 'postgres', '-d', 'aiproxy']
interval: 5s
timeout: 5s
retries: 10
networks:
fastgpt:
aiproxy:
vector:


@@ -1,252 +0,0 @@
# docker-compose file for deployment:
# - FastGPT port mapping: 3000:3000
# - FastGPT-mcp-server port mapping: 3005:3000
# - change the default usernames and passwords before running
# plugin auth token
x-plugin-auth-token: &x-plugin-auth-token 'token'
# aiproxy token
x-aiproxy-token: &x-aiproxy-token 'token'
# shared database connection settings
x-share-db-config: &x-share-db-config
MONGODB_URI: mongodb://myusername:mypassword@mongo:27017/fastgpt?authSource=admin
DB_MAX_LINK: 100
REDIS_URL: redis://default:mypassword@redis:6379
S3_EXTERNAL_BASE_URL: https://minio.com # publicly accessible S3 address
S3_ENDPOINT: fastgpt-minio
S3_PORT: 9000
S3_USE_SSL: false
S3_ACCESS_KEY: minioadmin
S3_SECRET_KEY: minioadmin
S3_PUBLIC_BUCKET: fastgpt-public # public-read, private-write bucket
S3_PRIVATE_BUCKET: fastgpt-private # private read/write bucket
# vector store settings
x-vec-config: &x-vec-config
MILVUS_ADDRESS: zilliz_cloud_address
MILVUS_TOKEN: zilliz_cloud_token
version: '3.3'
services:
# DB (the vector store in this variant is Zilliz Cloud, external to Compose)
mongo:
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/mongo:5.0.18 # use 4.4.29 if the CPU does not support AVX
container_name: mongo
restart: always
networks:
- fastgpt
command: mongod --keyFile /data/mongodb.key --replSet rs0
environment:
- MONGO_INITDB_ROOT_USERNAME=myusername
- MONGO_INITDB_ROOT_PASSWORD=mypassword
volumes:
- ./mongo/data:/data/db
healthcheck:
test: ['CMD', 'mongo', '-u', 'myusername', '-p', 'mypassword', '--authenticationDatabase', 'admin', '--eval', "db.adminCommand('ping')"]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
entrypoint:
- bash
- -c
- |
openssl rand -base64 128 > /data/mongodb.key
chmod 400 /data/mongodb.key
chown 999:999 /data/mongodb.key
echo 'const isInited = rs.status().ok === 1
if(!isInited){
rs.initiate({
_id: "rs0",
members: [
{ _id: 0, host: "mongo:27017" }
]
})
}' > /data/initReplicaSet.js
# Start the MongoDB service
exec docker-entrypoint.sh "$$@" &
# Wait for MongoDB to come up
until mongo -u myusername -p mypassword --authenticationDatabase admin --eval "print('waited for connection')"; do
echo "Waiting for MongoDB to start..."
sleep 2
done
# Run the replica-set initialization script
mongo -u myusername -p mypassword --authenticationDatabase admin /data/initReplicaSet.js
# Wait on the MongoDB process started by docker-entrypoint.sh
wait $$!
redis:
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/redis:7.2-alpine
container_name: redis
networks:
- fastgpt
restart: always
command: |
redis-server --requirepass mypassword --loglevel warning --maxclients 10000 --appendonly yes --save 60 10 --maxmemory 4gb --maxmemory-policy noeviction
healthcheck:
test: ['CMD', 'redis-cli', '-a', 'mypassword', 'ping']
interval: 10s
timeout: 3s
retries: 3
start_period: 30s
volumes:
- ./redis/data:/data
fastgpt-minio:
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/minio:RELEASE.2025-09-07T16-13-09Z
container_name: fastgpt-minio
restart: always
ports:
- 9000:9000
- 9001:9001
networks:
- fastgpt
environment:
- MINIO_ROOT_USER=minioadmin
- MINIO_ROOT_PASSWORD=minioadmin
volumes:
- ./fastgpt-minio:/data
command: server /data --console-address ":9001"
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:9000/minio/health/live']
interval: 30s
timeout: 20s
retries: 3
fastgpt:
container_name: fastgpt
    image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.14.4 # 阿里云
ports:
- 3000:3000
networks:
- fastgpt
depends_on:
- mongo
- sandbox
restart: always
environment:
<<: [*x-share-db-config, *x-vec-config]
      # 前端外部可访问的地址,用于自动补全文件资源路径。例如 https://fastgpt.cn,不能填 localhost。这个值可以不填;不填则发给模型的图片会是相对路径而非全路径,模型可能伪造 Host。
FE_DOMAIN:
# root 密码,用户名为: root。如果需要修改 root 密码,直接修改这个环境变量,并重启即可。
DEFAULT_ROOT_PSW: 1234
# 登录凭证密钥
TOKEN_KEY: any
# root的密钥常用于升级时候的初始化请求
ROOT_KEY: root_key
# 文件阅读加密
FILE_TOKEN_KEY: filetoken
# 密钥加密key
AES256_SECRET_KEY: fastgptkey
# plugin 地址
PLUGIN_BASE_URL: http://fastgpt-plugin:3000
PLUGIN_TOKEN: *x-plugin-auth-token
# sandbox 地址
SANDBOX_URL: http://sandbox:3000
# AI Proxy 的地址,如果配了该地址,优先使用
AIPROXY_API_ENDPOINT: http://aiproxy:3000
# AI Proxy 的 Admin Token与 AI Proxy 中的环境变量 ADMIN_KEY
AIPROXY_API_TOKEN: *x-aiproxy-token
# 日志等级: debug, info, warn, error
LOG_LEVEL: info
STORE_LOG_LEVEL: warn
# 工作流最大运行次数
WORKFLOW_MAX_RUN_TIMES: 1000
      # 批量执行节点的最大输入长度
WORKFLOW_MAX_LOOP_TIMES: 100
# 对话文件过期天数
CHAT_FILE_EXPIRE_TIME: 7
      # 服务器接收请求体的最大大小,单位 MB
SERVICE_REQUEST_MAX_CONTENT_LENGTH: 10
volumes:
- ./config.json:/app/data/config.json
sandbox:
container_name: sandbox
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.14.4
networks:
- fastgpt
restart: always
fastgpt-mcp-server:
container_name: fastgpt-mcp-server
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-mcp_server:v4.14.4
networks:
- fastgpt
ports:
- 3005:3000
restart: always
environment:
- FASTGPT_ENDPOINT=http://fastgpt:3000
fastgpt-plugin:
image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-plugin:v0.3.4
container_name: fastgpt-plugin
restart: always
networks:
- fastgpt
environment:
<<: *x-share-db-config
AUTH_TOKEN: *x-plugin-auth-token
      # 工具网络请求的最大请求和响应体大小,单位 MB
SERVICE_REQUEST_MAX_CONTENT_LENGTH: 10
# 最大 API 请求体大小
MAX_API_SIZE: 10
depends_on:
fastgpt-minio:
condition: service_healthy
# AI Proxy
aiproxy:
image: registry.cn-hangzhou.aliyuncs.com/labring/aiproxy:v0.3.2
container_name: aiproxy
restart: unless-stopped
depends_on:
aiproxy_pg:
condition: service_healthy
networks:
- fastgpt
- aiproxy
environment:
# 对应 fastgpt 里的AIPROXY_API_TOKEN
ADMIN_KEY: *x-aiproxy-token
# 错误日志详情保存时间(小时)
LOG_DETAIL_STORAGE_HOURS: 1
# 数据库连接地址
SQL_DSN: postgres://postgres:aiproxy@aiproxy_pg:5432/aiproxy
# 最大重试次数
RETRY_TIMES: 3
# 不需要计费
BILLING_ENABLED: false
# 不需要严格检测模型
DISABLE_MODEL_CONFIG: true
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:3000/api/status']
interval: 5s
timeout: 5s
retries: 10
aiproxy_pg:
    image: registry.cn-hangzhou.aliyuncs.com/fastgpt/pgvector:0.8.0-pg15 # 阿里云
restart: unless-stopped
container_name: aiproxy_pg
volumes:
- ./aiproxy_pg:/var/lib/postgresql/data
networks:
- aiproxy
environment:
TZ: Asia/Shanghai
POSTGRES_USER: postgres
POSTGRES_DB: aiproxy
POSTGRES_PASSWORD: aiproxy
healthcheck:
test: ['CMD', 'pg_isready', '-U', 'postgres', '-d', 'aiproxy']
interval: 5s
timeout: 5s
retries: 10
networks:
fastgpt:
aiproxy:
vector:
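上面的 compose 文件多处提醒修改账密时"别只改一处":数据库容器的初始化账密与各服务的连接串必须同步修改。若不想直接改动主文件,可以用 docker compose 的 override 机制同步覆盖(以下为示意,`NEW_PASSWORD` 为占位值,变量名以上文为准;注意 mongo 的初始化账密仅在首次建库时生效,已有持久化数据需先清理):

```yaml
# docker-compose.override.yml(示意)
services:
  mongo:
    environment:
      - MONGO_INITDB_ROOT_USERNAME=myusername
      - MONGO_INITDB_ROOT_PASSWORD=NEW_PASSWORD
  fastgpt:
    environment:
      # 与 mongo 的新密码保持一致
      MONGODB_URI: mongodb://myusername:NEW_PASSWORD@mongo:27017/fastgpt?authSource=admin
  fastgpt-plugin:
    environment:
      MONGODB_URI: mongodb://myusername:NEW_PASSWORD@mongo:27017/fastgpt?authSource=admin
```

同目录下执行 `docker compose up -d` 时会自动合并 override 文件。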


@ -1,31 +1,7 @@
# 用于部署的 docker-compose 文件:
# - FastGPT 端口映射为 3000:3000
# - FastGPT-mcp-server 端口映射 3005:3000
# - 建议修改账密后再运行
# plugin auth token
x-plugin-auth-token: &x-plugin-auth-token 'token'
# aiproxy token
x-aiproxy-token: &x-aiproxy-token 'token'
# 数据库连接相关配置
x-share-db-config: &x-share-db-config
MONGODB_URI: mongodb://myusername:mypassword@mongo:27017/fastgpt?authSource=admin
DB_MAX_LINK: 100
REDIS_URL: redis://default:mypassword@redis:6379
S3_EXTERNAL_BASE_URL: https://minio.com # S3 的公网访问地址
S3_ENDPOINT: fastgpt-minio
S3_PORT: 9000
S3_USE_SSL: false
S3_ACCESS_KEY: minioadmin
S3_SECRET_KEY: minioadmin
S3_PUBLIC_BUCKET: fastgpt-public # 公开读私有写桶
S3_PRIVATE_BUCKET: fastgpt-private # 私有读写桶
# 向量库相关配置
x-vec-config: &x-vec-config
MILVUS_ADDRESS: http://milvusStandalone:19530
MILVUS_TOKEN: none
# 数据库的默认账号和密码仅首次运行时设置有效
# 如果修改了账号密码,记得改数据库和项目连接参数,别只改一处~
# 该配置文件只是给快速启动,测试使用。正式使用,记得务必修改账号密码,以及调整合适的知识库参数,共享内存等。
# 如何无法访问 dockerhub 和 git可以用阿里云阿里云没有arm包
version: '3.3'
services:
@ -36,8 +12,11 @@ services:
environment:
MINIO_ACCESS_KEY: minioadmin
MINIO_SECRET_KEY: minioadmin
# ports:
# - '9001:9001'
# - '9000:9000'
networks:
- vector
- fastgpt
volumes:
- ./milvus-minio:/minio_data
command: minio server /minio_data --console-address ":9001"
@ -47,8 +26,8 @@ services:
timeout: 20s
retries: 3
# milvus
milvus-etcd:
container_name: milvus-etcd
milvusEtcd:
container_name: milvusEtcd
image: quay.io/coreos/etcd:v3.5.5
environment:
- ETCD_AUTO_COMPACTION_MODE=revision
@ -56,7 +35,7 @@ services:
- ETCD_QUOTA_BACKEND_BYTES=4294967296
- ETCD_SNAPSHOT_COUNT=50000
networks:
- vector
- fastgpt
volumes:
- ./milvus/etcd:/etcd
command: etcd -advertise-client-urls=http://127.0.0.1:2379 -listen-client-urls http://0.0.0.0:2379 --data-dir /etcd
@ -65,18 +44,17 @@ services:
interval: 30s
timeout: 20s
retries: 3
vectorDB:
milvusStandalone:
container_name: milvusStandalone
image: milvusdb/milvus:v2.4.3
command: ['milvus', 'run', 'standalone']
security_opt:
- seccomp:unconfined
environment:
ETCD_ENDPOINTS: milvus-etcd:2379
ETCD_ENDPOINTS: milvusEtcd:2379
MINIO_ADDRESS: milvus-minio:9000
networks:
- fastgpt
- vector
volumes:
- ./milvus/data:/var/lib/milvus
healthcheck:
@ -86,12 +64,14 @@ services:
timeout: 20s
retries: 3
depends_on:
- 'milvus-etcd'
- 'milvusEtcd'
- 'milvus-minio'
# DB
mongo:
image: mongo:5.0.18 # cpu 不支持 AVX 时候使用 4.4.29
image: mongo:5.0.18 # dockerhub
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/mongo:5.0.18 # 阿里云
# image: mongo:4.4.29 # cpu不支持AVX时候使用
container_name: mongo
restart: always
networks:
@ -102,12 +82,6 @@ services:
- MONGO_INITDB_ROOT_PASSWORD=mypassword
volumes:
- ./mongo/data:/data/db
healthcheck:
test: ['CMD', 'mongo', '-u', 'myusername', '-p', 'mypassword', '--authenticationDatabase', 'admin', '--eval', "db.adminCommand('ping')"]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
entrypoint:
- bash
- -c
@ -155,14 +129,14 @@ services:
volumes:
- ./redis/data:/data
fastgpt-minio:
image: minio/minio:RELEASE.2025-09-07T16-13-09Z
image: minio/minio:latest
container_name: fastgpt-minio
restart: always
ports:
- 9000:9000
- 9001:9001
networks:
- fastgpt
ports: # comment out if you do not need to expose the port (in production environment, you should not expose the port)
- '9000:9000'
- '9001:9001'
environment:
- MINIO_ROOT_USER=minioadmin
- MINIO_ROOT_PASSWORD=minioadmin
@ -177,7 +151,8 @@ services:
fastgpt:
container_name: fastgpt
image: ghcr.io/labring/fastgpt:v4.14.4 # git
image: ghcr.io/labring/fastgpt:v4.11.0 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.11.0 # 阿里云
ports:
- 3000:3000
networks:
@ -185,81 +160,100 @@ services:
depends_on:
- mongo
- sandbox
- vectorDB
- milvusStandalone
restart: always
environment:
<<: [*x-share-db-config, *x-vec-config]
      # 前端外部可访问的地址,用于自动补全文件资源路径。例如 https://fastgpt.cn,不能填 localhost。这个值可以不填;不填则发给模型的图片会是相对路径而非全路径,模型可能伪造 Host。
FE_DOMAIN:
- FE_DOMAIN=
# root 密码,用户名为: root。如果需要修改 root 密码,直接修改这个环境变量,并重启即可。
DEFAULT_ROOT_PSW: 1234
- DEFAULT_ROOT_PSW=1234
# 登录凭证密钥
TOKEN_KEY: any
- TOKEN_KEY=any
# root的密钥常用于升级时候的初始化请求
ROOT_KEY: root_key
- ROOT_KEY=root_key
# 文件阅读加密
FILE_TOKEN_KEY: filetoken
- FILE_TOKEN_KEY=filetoken
# 密钥加密key
AES256_SECRET_KEY: fastgptkey
- AES256_SECRET_KEY=fastgptkey
# plugin 地址
PLUGIN_BASE_URL: http://fastgpt-plugin:3000
PLUGIN_TOKEN: *x-plugin-auth-token
- PLUGIN_BASE_URL=http://fastgpt-plugin:3000
- PLUGIN_TOKEN=xxxxxx
# sandbox 地址
SANDBOX_URL: http://sandbox:3000
- SANDBOX_URL=http://sandbox:3000
# AI Proxy 的地址,如果配了该地址,优先使用
AIPROXY_API_ENDPOINT: http://aiproxy:3000
- AIPROXY_API_ENDPOINT=http://aiproxy:3000
# AI Proxy 的 Admin Token与 AI Proxy 中的环境变量 ADMIN_KEY
AIPROXY_API_TOKEN: *x-aiproxy-token
- AIPROXY_API_TOKEN=aiproxy
# 数据库最大连接数
- DB_MAX_LINK=30
# MongoDB 连接参数. 用户名myusername,密码mypassword。
- MONGODB_URI=mongodb://myusername:mypassword@mongo:27017/fastgpt?authSource=admin
# Redis 连接参数
- REDIS_URL=redis://default:mypassword@redis:6379
# 向量库 连接参数
- MILVUS_ADDRESS=http://milvusStandalone:19530
- MILVUS_TOKEN=none
# 日志等级: debug, info, warn, error
LOG_LEVEL: info
STORE_LOG_LEVEL: warn
- LOG_LEVEL=info
- STORE_LOG_LEVEL=warn
# 工作流最大运行次数
WORKFLOW_MAX_RUN_TIMES: 1000
- WORKFLOW_MAX_RUN_TIMES=1000
# 批量执行节点,最大输入长度
WORKFLOW_MAX_LOOP_TIMES: 100
- WORKFLOW_MAX_LOOP_TIMES=100
# 对话文件过期天数
CHAT_FILE_EXPIRE_TIME: 7
# 服务器接收请求,最大大小,单位 MB
SERVICE_REQUEST_MAX_CONTENT_LENGTH: 10
- CHAT_FILE_EXPIRE_TIME=7
volumes:
- ./config.json:/app/data/config.json
sandbox:
container_name: sandbox
image: ghcr.io/labring/fastgpt-sandbox:v4.14.4
image: ghcr.io/labring/fastgpt-sandbox:v4.10.1 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.10.1 # 阿里云
networks:
- fastgpt
restart: always
fastgpt-mcp-server:
container_name: fastgpt-mcp-server
image: ghcr.io/labring/fastgpt-mcp_server:v4.14.4
networks:
- fastgpt
image: ghcr.io/labring/fastgpt-mcp_server:v4.10.1 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-mcp_server:v4.10.1 # 阿里云
ports:
- 3005:3000
networks:
- fastgpt
restart: always
environment:
- FASTGPT_ENDPOINT=http://fastgpt:3000
fastgpt-plugin:
image: ghcr.io/labring/fastgpt-plugin:v0.3.4
image: ghcr.io/labring/fastgpt-plugin:v0.1.5 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-plugin:v0.1.5 # 阿里云
container_name: fastgpt-plugin
restart: always
networks:
- fastgpt
environment:
<<: *x-share-db-config
AUTH_TOKEN: *x-plugin-auth-token
# 工具网络请求,最大请求和响应体
SERVICE_REQUEST_MAX_CONTENT_LENGTH: 10
# 最大 API 请求体大小
MAX_API_SIZE: 10
- AUTH_TOKEN=xxxxxx # 如果不需要鉴权可以直接去掉这个环境变量
# 改成 minio 可访问地址,例如 http://192.168.2.2:9000/fastgpt-plugins
# 必须指向 Minio 的桶的地址
# 如果 Minio 可以直接通过外网访问,可以不设置这个环境变量
# - MINIO_CUSTOM_ENDPOINT=http://192.168.2.2:9000
- MINIO_ENDPOINT=fastgpt-minio
- MINIO_PORT=9000
- MINIO_USE_SSL=false
- MINIO_ACCESS_KEY=minioadmin
- MINIO_SECRET_KEY=minioadmin
- MINIO_BUCKET=fastgpt-plugins
depends_on:
fastgpt-minio:
condition: service_healthy
# AI Proxy
aiproxy:
image: ghcr.io/labring/aiproxy:v0.3.2
image: ghcr.io/labring/aiproxy:v0.2.2
# image: registry.cn-hangzhou.aliyuncs.com/labring/aiproxy:v0.2.2 # 阿里云
container_name: aiproxy
restart: unless-stopped
depends_on:
@ -267,20 +261,19 @@ services:
condition: service_healthy
networks:
- fastgpt
- aiproxy
environment:
# 对应 fastgpt 里的AIPROXY_API_TOKEN
ADMIN_KEY: *x-aiproxy-token
- ADMIN_KEY=aiproxy
# 错误日志详情保存时间(小时)
LOG_DETAIL_STORAGE_HOURS: 1
- LOG_DETAIL_STORAGE_HOURS=1
# 数据库连接地址
SQL_DSN: postgres://postgres:aiproxy@aiproxy_pg:5432/aiproxy
- SQL_DSN=postgres://postgres:aiproxy@aiproxy_pg:5432/aiproxy
# 最大重试次数
RETRY_TIMES: 3
- RETRY_TIMES=3
# 不需要计费
BILLING_ENABLED: false
- BILLING_ENABLED=false
# 不需要严格检测模型
DISABLE_MODEL_CONFIG: true
- DISABLE_MODEL_CONFIG=true
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:3000/api/status']
interval: 5s
@ -288,12 +281,13 @@ services:
retries: 10
aiproxy_pg:
image: pgvector/pgvector:0.8.0-pg15 # docker hub
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/pgvector:v0.8.0-pg15 # 阿里云
restart: unless-stopped
container_name: aiproxy_pg
volumes:
- ./aiproxy_pg:/var/lib/postgresql/data
networks:
- aiproxy
- fastgpt
environment:
TZ: Asia/Shanghai
POSTGRES_USER: postgres
@ -306,6 +300,3 @@ services:
retries: 10
networks:
fastgpt:
aiproxy:
vector:


@ -1,36 +1,14 @@
# 用于部署的 docker-compose 文件:
# - FastGPT 端口映射为 3000:3000
# - FastGPT-mcp-server 端口映射 3005:3000
# - 建议修改账密后再运行
# plugin auth token
x-plugin-auth-token: &x-plugin-auth-token 'token'
# aiproxy token
x-aiproxy-token: &x-aiproxy-token 'token'
# 数据库连接相关配置
x-share-db-config: &x-share-db-config
MONGODB_URI: mongodb://myusername:mypassword@mongo:27017/fastgpt?authSource=admin
DB_MAX_LINK: 100
REDIS_URL: redis://default:mypassword@redis:6379
S3_EXTERNAL_BASE_URL: https://minio.com # S3 的公网访问地址
S3_ENDPOINT: fastgpt-minio
S3_PORT: 9000
S3_USE_SSL: false
S3_ACCESS_KEY: minioadmin
S3_SECRET_KEY: minioadmin
S3_PUBLIC_BUCKET: fastgpt-public # 公开读私有写桶
S3_PRIVATE_BUCKET: fastgpt-private # 私有读写桶
# 向量库相关配置
x-vec-config: &x-vec-config
OCEANBASE_URL: mysql://root%40tenantname:tenantpassword@ob:2881/test
# 数据库的默认账号和密码仅首次运行时设置有效
# 如果修改了账号密码,记得改数据库和项目连接参数,别只改一处~
# 该配置文件只是给快速启动,测试使用。正式使用,记得务必修改账号密码,以及调整合适的知识库参数,共享内存等。
# 如何无法访问 dockerhub 和 git可以用阿里云阿里云没有arm包
version: '3.3'
services:
# Vector DB
vectorDB:
image: oceanbase/oceanbase-ce:4.3.5-lts
ob:
image: oceanbase/oceanbase-ce:4.3.5-lts # docker hub
# image: quay.io/oceanbase/oceanbase-ce:4.3.5-lts # 镜像
container_name: ob
restart: always
# ports: # 生产环境建议不要暴露
@ -49,26 +27,26 @@ services:
- OB_SERVER_IP=127.0.0.1
# 更多环境变量配置见oceanbase官方文档 https://www.oceanbase.com/docs/common-oceanbase-database-cn-1000000002013494
volumes:
- ../ob/data:/root/ob
- ../ob/config:/root/.obd/cluster
configs:
- source: init_sql
target: /root/boot/init.d/init.sql
- ./ob/data:/root/ob
- ./ob/config:/root/.obd/cluster
- ./init.sql:/root/boot/init.d/init.sql
healthcheck:
# obclient -h127.0.0.1 -P2881 -uroot@tenantname -ptenantpassword -e "SELECT 1;"
test:
[
"CMD-SHELL",
'obclient -h$${OB_SERVER_IP} -P2881 -uroot@$${OB_TENANT_NAME} -p$${OB_TENANT_PASSWORD} -e "SELECT 1;"',
'CMD-SHELL',
'obclient -h$${OB_SERVER_IP} -P2881 -uroot@$${OB_TENANT_NAME} -p$${OB_TENANT_PASSWORD} -e "SELECT 1;"'
]
interval: 30s
timeout: 10s
retries: 1000
start_period: 10s
# DB
mongo:
image: mongo:5.0.18 # cpu 不支持 AVX 时候使用 4.4.29
image: mongo:5.0.18 # dockerhub
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/mongo:5.0.18 # 阿里云
# image: mongo:4.4.29 # cpu不支持AVX时候使用
container_name: mongo
restart: always
networks:
@ -79,12 +57,6 @@ services:
- MONGO_INITDB_ROOT_PASSWORD=mypassword
volumes:
- ./mongo/data:/data/db
healthcheck:
test: ['CMD', 'mongo', '-u', 'myusername', '-p', 'mypassword', '--authenticationDatabase', 'admin', '--eval', "db.adminCommand('ping')"]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
entrypoint:
- bash
- -c
@ -132,14 +104,14 @@ services:
volumes:
- ./redis/data:/data
fastgpt-minio:
image: minio/minio:RELEASE.2025-09-07T16-13-09Z
image: minio/minio:latest
container_name: fastgpt-minio
restart: always
ports:
- 9000:9000
- 9001:9001
networks:
- fastgpt
ports: # comment out if you do not need to expose the port (in production environment, you should not expose the port)
- '9000:9000'
- '9001:9001'
environment:
- MINIO_ROOT_USER=minioadmin
- MINIO_ROOT_PASSWORD=minioadmin
@ -154,7 +126,8 @@ services:
fastgpt:
container_name: fastgpt
image: ghcr.io/labring/fastgpt:v4.14.4 # git
image: ghcr.io/labring/fastgpt:v4.11.0 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.11.0 # 阿里云
ports:
- 3000:3000
networks:
@ -162,81 +135,99 @@ services:
depends_on:
- mongo
- sandbox
- vectorDB
- ob
restart: always
environment:
<<: [*x-share-db-config, *x-vec-config]
      # 前端外部可访问的地址,用于自动补全文件资源路径。例如 https://fastgpt.cn,不能填 localhost。这个值可以不填;不填则发给模型的图片会是相对路径而非全路径,模型可能伪造 Host。
FE_DOMAIN:
- FE_DOMAIN=
# root 密码,用户名为: root。如果需要修改 root 密码,直接修改这个环境变量,并重启即可。
DEFAULT_ROOT_PSW: 1234
- DEFAULT_ROOT_PSW=1234
# 登录凭证密钥
TOKEN_KEY: any
- TOKEN_KEY=any
# root的密钥常用于升级时候的初始化请求
ROOT_KEY: root_key
- ROOT_KEY=root_key
# 文件阅读加密
FILE_TOKEN_KEY: filetoken
- FILE_TOKEN_KEY=filetoken
# 密钥加密key
AES256_SECRET_KEY: fastgptkey
- AES256_SECRET_KEY=fastgptkey
# plugin 地址
PLUGIN_BASE_URL: http://fastgpt-plugin:3000
PLUGIN_TOKEN: *x-plugin-auth-token
- PLUGIN_BASE_URL=http://fastgpt-plugin:3000
- PLUGIN_TOKEN=xxxxxx
# sandbox 地址
SANDBOX_URL: http://sandbox:3000
- SANDBOX_URL=http://sandbox:3000
# AI Proxy 的地址,如果配了该地址,优先使用
AIPROXY_API_ENDPOINT: http://aiproxy:3000
- AIPROXY_API_ENDPOINT=http://aiproxy:3000
# AI Proxy 的 Admin Token与 AI Proxy 中的环境变量 ADMIN_KEY
AIPROXY_API_TOKEN: *x-aiproxy-token
- AIPROXY_API_TOKEN=aiproxy
# 数据库最大连接数
- DB_MAX_LINK=30
# MongoDB 连接参数. 用户名myusername,密码mypassword。
- MONGODB_URI=mongodb://myusername:mypassword@mongo:27017/fastgpt?authSource=admin
# Redis 连接参数
- REDIS_URL=redis://default:mypassword@redis:6379
# 向量库 连接参数
- OCEANBASE_URL=mysql://root%40tenantname:tenantpassword@ob:2881/test
# 日志等级: debug, info, warn, error
LOG_LEVEL: info
STORE_LOG_LEVEL: warn
- LOG_LEVEL=info
- STORE_LOG_LEVEL=warn
# 工作流最大运行次数
WORKFLOW_MAX_RUN_TIMES: 1000
- WORKFLOW_MAX_RUN_TIMES=1000
# 批量执行节点,最大输入长度
WORKFLOW_MAX_LOOP_TIMES: 100
- WORKFLOW_MAX_LOOP_TIMES=100
# 对话文件过期天数
CHAT_FILE_EXPIRE_TIME: 7
# 服务器接收请求,最大大小,单位 MB
SERVICE_REQUEST_MAX_CONTENT_LENGTH: 10
- CHAT_FILE_EXPIRE_TIME=7
volumes:
- ./config.json:/app/data/config.json
sandbox:
container_name: sandbox
image: ghcr.io/labring/fastgpt-sandbox:v4.14.4
image: ghcr.io/labring/fastgpt-sandbox:v4.10.1 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.10.1 # 阿里云
networks:
- fastgpt
restart: always
fastgpt-mcp-server:
container_name: fastgpt-mcp-server
image: ghcr.io/labring/fastgpt-mcp_server:v4.14.4
networks:
- fastgpt
image: ghcr.io/labring/fastgpt-mcp_server:v4.10.1 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-mcp_server:v4.10.1 # 阿里云
ports:
- 3005:3000
networks:
- fastgpt
restart: always
environment:
- FASTGPT_ENDPOINT=http://fastgpt:3000
fastgpt-plugin:
image: ghcr.io/labring/fastgpt-plugin:v0.3.4
image: ghcr.io/labring/fastgpt-plugin:v0.1.5 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-plugin:v0.1.5 # 阿里云
container_name: fastgpt-plugin
restart: always
networks:
- fastgpt
environment:
<<: *x-share-db-config
AUTH_TOKEN: *x-plugin-auth-token
# 工具网络请求,最大请求和响应体
SERVICE_REQUEST_MAX_CONTENT_LENGTH: 10
# 最大 API 请求体大小
MAX_API_SIZE: 10
- AUTH_TOKEN=xxxxxx # 如果不需要鉴权可以直接去掉这个环境变量
# 改成 minio 可访问地址,例如 http://192.168.2.2:9000/fastgpt-plugins
# 必须指向 Minio 的桶的地址
# 如果 Minio 可以直接通过外网访问,可以不设置这个环境变量
# - MINIO_CUSTOM_ENDPOINT=http://192.168.2.2:9000
- MINIO_ENDPOINT=fastgpt-minio
- MINIO_PORT=9000
- MINIO_USE_SSL=false
- MINIO_ACCESS_KEY=minioadmin
- MINIO_SECRET_KEY=minioadmin
- MINIO_BUCKET=fastgpt-plugins
depends_on:
fastgpt-minio:
condition: service_healthy
# AI Proxy
aiproxy:
image: ghcr.io/labring/aiproxy:v0.3.2
image: ghcr.io/labring/aiproxy:v0.2.2
# image: registry.cn-hangzhou.aliyuncs.com/labring/aiproxy:v0.2.2 # 阿里云
container_name: aiproxy
restart: unless-stopped
depends_on:
@ -244,20 +235,19 @@ services:
condition: service_healthy
networks:
- fastgpt
- aiproxy
environment:
# 对应 fastgpt 里的AIPROXY_API_TOKEN
ADMIN_KEY: *x-aiproxy-token
- ADMIN_KEY=aiproxy
# 错误日志详情保存时间(小时)
LOG_DETAIL_STORAGE_HOURS: 1
- LOG_DETAIL_STORAGE_HOURS=1
# 数据库连接地址
SQL_DSN: postgres://postgres:aiproxy@aiproxy_pg:5432/aiproxy
- SQL_DSN=postgres://postgres:aiproxy@aiproxy_pg:5432/aiproxy
# 最大重试次数
RETRY_TIMES: 3
- RETRY_TIMES=3
# 不需要计费
BILLING_ENABLED: false
- BILLING_ENABLED=false
# 不需要严格检测模型
DISABLE_MODEL_CONFIG: true
- DISABLE_MODEL_CONFIG=true
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:3000/api/status']
interval: 5s
@ -265,12 +255,13 @@ services:
retries: 10
aiproxy_pg:
image: pgvector/pgvector:0.8.0-pg15 # docker hub
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/pgvector:v0.8.0-pg15 # 阿里云
restart: unless-stopped
container_name: aiproxy_pg
volumes:
- ./aiproxy_pg:/var/lib/postgresql/data
networks:
- aiproxy
- fastgpt
environment:
TZ: Asia/Shanghai
POSTGRES_USER: postgres
@ -283,11 +274,3 @@ services:
retries: 10
networks:
fastgpt:
aiproxy:
vector:
configs:
init_sql:
name: init_sql
content: |
ALTER SYSTEM SET ob_vector_memory_limit_percentage = 30;
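上面 OceanBase 连接串中的用户名写作 `root%40tenantname`,因为 OB 的连接用户名形如 `root@tenantname`,其中的 `@` 必须百分号编码为 `%40`,否则会与 URL 中分隔认证信息与主机的 `@` 混淆。可以用 Python 标准库验证这一编码方式(示意,密码为占位值):

```python
from urllib.parse import quote, urlsplit

# 用户名 root@tenantname 中的 @ 需百分号编码为 %40
user = quote("root@tenantname", safe="")
assert user == "root%40tenantname"

url = f"mysql://{user}:tenantpassword@ob:2881/test"
parts = urlsplit(url)
# 编码后,urlsplit 能正确区分用户名与主机部分
print(parts.hostname)  # ob
print(parts.port)      # 2881
```

若修改了租户名或密码,按同样方式编码后再填入 OCEANBASE_URL 即可。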


@ -0,0 +1,2 @@
ALTER SYSTEM SET ob_vector_memory_limit_percentage = 30;


@ -1,37 +1,18 @@
# 用于部署的 docker-compose 文件:
# - FastGPT 端口映射为 3000:3000
# - FastGPT-mcp-server 端口映射 3005:3000
# - 建议修改账密后再运行
# plugin auth token
x-plugin-auth-token: &x-plugin-auth-token 'token'
# aiproxy token
x-aiproxy-token: &x-aiproxy-token 'token'
# 数据库连接相关配置
x-share-db-config: &x-share-db-config
MONGODB_URI: mongodb://myusername:mypassword@mongo:27017/fastgpt?authSource=admin
DB_MAX_LINK: 100
REDIS_URL: redis://default:mypassword@redis:6379
S3_EXTERNAL_BASE_URL: https://minio.com # S3 的公网访问地址
S3_ENDPOINT: fastgpt-minio
S3_PORT: 9000
S3_USE_SSL: false
S3_ACCESS_KEY: minioadmin
S3_SECRET_KEY: minioadmin
S3_PUBLIC_BUCKET: fastgpt-public # 公开读私有写桶
S3_PRIVATE_BUCKET: fastgpt-private # 私有读写桶
# 向量库相关配置
x-vec-config: &x-vec-config
PG_URL: postgresql://username:password@pg:5432/postgres
# 数据库的默认账号和密码仅首次运行时设置有效
# 如果修改了账号密码,记得改数据库和项目连接参数,别只改一处~
# 该配置文件只是给快速启动,测试使用。正式使用,记得务必修改账号密码,以及调整合适的知识库参数,共享内存等。
# 如何无法访问 dockerhub 和 git可以用阿里云阿里云没有arm包
version: '3.3'
services:
# Vector DB
vectorDB:
image: pgvector/pgvector:0.8.0-pg15
pg:
image: pgvector/pgvector:0.8.0-pg15 # docker hub
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/pgvector:v0.8.0-pg15 # 阿里云
container_name: pg
restart: always
# ports: # 生产环境建议不要暴露
# - 5432:5432
networks:
- fastgpt
environment:
@ -47,9 +28,11 @@ services:
timeout: 5s
retries: 10
# DB
mongo:
image: mongo:5.0.18 # cpu 不支持 AVX 时候使用 4.4.29
image: mongo:5.0.18 # dockerhub
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/mongo:5.0.18 # 阿里云
# image: mongo:4.4.29 # cpu不支持AVX时候使用
container_name: mongo
restart: always
networks:
@ -60,12 +43,6 @@ services:
- MONGO_INITDB_ROOT_PASSWORD=mypassword
volumes:
- ./mongo/data:/data/db
healthcheck:
test: ['CMD', 'mongo', '-u', 'myusername', '-p', 'mypassword', '--authenticationDatabase', 'admin', '--eval', "db.adminCommand('ping')"]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
entrypoint:
- bash
- -c
@ -113,14 +90,14 @@ services:
volumes:
- ./redis/data:/data
fastgpt-minio:
image: minio/minio:RELEASE.2025-09-07T16-13-09Z
image: minio/minio:latest
container_name: fastgpt-minio
restart: always
ports:
- 9000:9000
- 9001:9001
networks:
- fastgpt
ports: # comment out if you do not need to expose the port (in production environment, you should not expose the port)
- '9000:9000'
- '9001:9001'
environment:
- MINIO_ROOT_USER=minioadmin
- MINIO_ROOT_PASSWORD=minioadmin
@ -135,7 +112,8 @@ services:
fastgpt:
container_name: fastgpt
image: ghcr.io/labring/fastgpt:v4.14.4 # git
image: ghcr.io/labring/fastgpt:v4.11.0 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.11.0 # 阿里云
ports:
- 3000:3000
networks:
@ -143,81 +121,99 @@ services:
depends_on:
- mongo
- sandbox
- vectorDB
- pg
restart: always
environment:
<<: [*x-share-db-config, *x-vec-config]
      # 前端外部可访问的地址,用于自动补全文件资源路径。例如 https://fastgpt.cn,不能填 localhost。这个值可以不填;不填则发给模型的图片会是相对路径而非全路径,模型可能伪造 Host。
FE_DOMAIN:
- FE_DOMAIN=
# root 密码,用户名为: root。如果需要修改 root 密码,直接修改这个环境变量,并重启即可。
DEFAULT_ROOT_PSW: 1234
- DEFAULT_ROOT_PSW=1234
# 登录凭证密钥
TOKEN_KEY: any
- TOKEN_KEY=any
# root的密钥常用于升级时候的初始化请求
ROOT_KEY: root_key
- ROOT_KEY=root_key
# 文件阅读加密
FILE_TOKEN_KEY: filetoken
- FILE_TOKEN_KEY=filetoken
# 密钥加密key
AES256_SECRET_KEY: fastgptkey
- AES256_SECRET_KEY=fastgptkey
# plugin 地址
PLUGIN_BASE_URL: http://fastgpt-plugin:3000
PLUGIN_TOKEN: *x-plugin-auth-token
- PLUGIN_BASE_URL=http://fastgpt-plugin:3000
- PLUGIN_TOKEN=xxxxxx
# sandbox 地址
SANDBOX_URL: http://sandbox:3000
- SANDBOX_URL=http://sandbox:3000
# AI Proxy 的地址,如果配了该地址,优先使用
AIPROXY_API_ENDPOINT: http://aiproxy:3000
- AIPROXY_API_ENDPOINT=http://aiproxy:3000
# AI Proxy 的 Admin Token与 AI Proxy 中的环境变量 ADMIN_KEY
AIPROXY_API_TOKEN: *x-aiproxy-token
- AIPROXY_API_TOKEN=aiproxy
# 数据库最大连接数
- DB_MAX_LINK=30
# MongoDB 连接参数. 用户名myusername,密码mypassword。
- MONGODB_URI=mongodb://myusername:mypassword@mongo:27017/fastgpt?authSource=admin
# Redis 连接参数
- REDIS_URL=redis://default:mypassword@redis:6379
# 向量库 连接参数
- PG_URL=postgresql://username:password@pg:5432/postgres
# 日志等级: debug, info, warn, error
LOG_LEVEL: info
STORE_LOG_LEVEL: warn
- LOG_LEVEL=info
- STORE_LOG_LEVEL=warn
# 工作流最大运行次数
WORKFLOW_MAX_RUN_TIMES: 1000
- WORKFLOW_MAX_RUN_TIMES=1000
# 批量执行节点,最大输入长度
WORKFLOW_MAX_LOOP_TIMES: 100
- WORKFLOW_MAX_LOOP_TIMES=100
# 对话文件过期天数
CHAT_FILE_EXPIRE_TIME: 7
# 服务器接收请求,最大大小,单位 MB
SERVICE_REQUEST_MAX_CONTENT_LENGTH: 10
- CHAT_FILE_EXPIRE_TIME=7
volumes:
- ./config.json:/app/data/config.json
sandbox:
container_name: sandbox
image: ghcr.io/labring/fastgpt-sandbox:v4.14.4
image: ghcr.io/labring/fastgpt-sandbox:v4.10.1 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.10.1 # 阿里云
networks:
- fastgpt
restart: always
fastgpt-mcp-server:
container_name: fastgpt-mcp-server
image: ghcr.io/labring/fastgpt-mcp_server:v4.14.4
networks:
- fastgpt
image: ghcr.io/labring/fastgpt-mcp_server:v4.10.1 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-mcp_server:v4.10.1 # 阿里云
ports:
- 3005:3000
networks:
- fastgpt
restart: always
environment:
- FASTGPT_ENDPOINT=http://fastgpt:3000
fastgpt-plugin:
image: ghcr.io/labring/fastgpt-plugin:v0.3.4
image: ghcr.io/labring/fastgpt-plugin:v0.1.5 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-plugin:v0.1.5 # 阿里云
container_name: fastgpt-plugin
restart: always
networks:
- fastgpt
environment:
<<: *x-share-db-config
AUTH_TOKEN: *x-plugin-auth-token
# 工具网络请求,最大请求和响应体
SERVICE_REQUEST_MAX_CONTENT_LENGTH: 10
# 最大 API 请求体大小
MAX_API_SIZE: 10
- AUTH_TOKEN=xxxxxx # 如果不需要鉴权可以直接去掉这个环境变量
# 改成 minio 可访问地址,例如 http://192.168.2.2:9000/fastgpt-plugins
# 必须指向 Minio 的桶的地址
# 如果 Minio 可以直接通过外网访问,可以不设置这个环境变量
# - MINIO_CUSTOM_ENDPOINT=http://192.168.2.2:9000
- MINIO_ENDPOINT=fastgpt-minio
- MINIO_PORT=9000
- MINIO_USE_SSL=false
- MINIO_ACCESS_KEY=minioadmin
- MINIO_SECRET_KEY=minioadmin
- MINIO_BUCKET=fastgpt-plugins
depends_on:
fastgpt-minio:
condition: service_healthy
# AI Proxy
aiproxy:
image: ghcr.io/labring/aiproxy:v0.3.2
image: ghcr.io/labring/aiproxy:v0.2.2
# image: registry.cn-hangzhou.aliyuncs.com/labring/aiproxy:v0.2.2 # 阿里云
container_name: aiproxy
restart: unless-stopped
depends_on:
@ -225,20 +221,19 @@ services:
condition: service_healthy
networks:
- fastgpt
- aiproxy
environment:
# 对应 fastgpt 里的AIPROXY_API_TOKEN
ADMIN_KEY: *x-aiproxy-token
- ADMIN_KEY=aiproxy
# 错误日志详情保存时间(小时)
LOG_DETAIL_STORAGE_HOURS: 1
- LOG_DETAIL_STORAGE_HOURS=1
# 数据库连接地址
SQL_DSN: postgres://postgres:aiproxy@aiproxy_pg:5432/aiproxy
- SQL_DSN=postgres://postgres:aiproxy@aiproxy_pg:5432/aiproxy
# 最大重试次数
RETRY_TIMES: 3
- RETRY_TIMES=3
# 不需要计费
BILLING_ENABLED: false
- BILLING_ENABLED=false
# 不需要严格检测模型
DISABLE_MODEL_CONFIG: true
- DISABLE_MODEL_CONFIG=true
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:3000/api/status']
interval: 5s
@ -246,12 +241,13 @@ services:
retries: 10
aiproxy_pg:
image: pgvector/pgvector:0.8.0-pg15 # docker hub
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/pgvector:v0.8.0-pg15 # 阿里云
restart: unless-stopped
container_name: aiproxy_pg
volumes:
- ./aiproxy_pg:/var/lib/postgresql/data
networks:
- aiproxy
- fastgpt
environment:
TZ: Asia/Shanghai
POSTGRES_USER: postgres
@ -264,6 +260,3 @@ services:
retries: 10
networks:
fastgpt:
aiproxy:
vector:


@ -1,55 +1,18 @@
# 用于部署的 docker-compose 文件:
# - FastGPT 端口映射为 3000:3000
# - FastGPT-mcp-server 端口映射 3005:3000
# - 建议修改账密后再运行
# plugin auth token
x-plugin-auth-token: &x-plugin-auth-token 'token'
# aiproxy token
x-aiproxy-token: &x-aiproxy-token 'token'
# 数据库连接相关配置
x-share-db-config: &x-share-db-config
MONGODB_URI: mongodb://myusername:mypassword@mongo:27017/fastgpt?authSource=admin
DB_MAX_LINK: 100
REDIS_URL: redis://default:mypassword@redis:6379
S3_EXTERNAL_BASE_URL: https://minio.com # S3 的公网访问地址
S3_ENDPOINT: fastgpt-minio
S3_PORT: 9000
S3_USE_SSL: false
S3_ACCESS_KEY: minioadmin
S3_SECRET_KEY: minioadmin
S3_PUBLIC_BUCKET: fastgpt-public # 公开读私有写桶
S3_PRIVATE_BUCKET: fastgpt-private # 私有读写桶
# 向量库相关配置
x-vec-config: &x-vec-config
  MILVUS_ADDRESS: zilliz_cloud_address
  MILVUS_TOKEN: zilliz_cloud_token
# 数据库的默认账号和密码仅首次运行时设置有效
# 如果修改了账号密码,记得改数据库和项目连接参数,别只改一处~
# 该配置文件只是给快速启动,测试使用。正式使用,记得务必修改账号密码,以及调整合适的知识库参数,共享内存等。
# 如何无法访问 dockerhub 和 git可以用阿里云阿里云没有arm包
version: '3.3'
services:
# Vector DB
vectorDB:
image: pgvector/pgvector:0.8.0-pg15
container_name: pg
restart: always
networks:
- fastgpt
environment:
# 这里的配置只有首次运行生效。修改后,重启镜像是不会生效的。需要把持久化数据删除再重启,才有效果
- POSTGRES_USER=username
- POSTGRES_PASSWORD=password
- POSTGRES_DB=postgres
volumes:
- ./pg/data:/var/lib/postgresql/data
healthcheck:
test: ['CMD', 'pg_isready', '-U', 'username', '-d', 'postgres']
interval: 5s
timeout: 5s
retries: 10
# DB
mongo:
image: mongo:5.0.18 # cpu 不支持 AVX 时候使用 4.4.29
image: mongo:5.0.18 # dockerhub
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/mongo:5.0.18 # 阿里云
# image: mongo:4.4.29 # cpu不支持AVX时候使用
container_name: mongo
restart: always
networks:
@ -60,12 +23,6 @@ services:
- MONGO_INITDB_ROOT_PASSWORD=mypassword
volumes:
- ./mongo/data:/data/db
healthcheck:
test: ['CMD', 'mongo', '-u', 'myusername', '-p', 'mypassword', '--authenticationDatabase', 'admin', '--eval', "db.adminCommand('ping')"]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
entrypoint:
- bash
- -c
@ -113,14 +70,14 @@ services:
volumes:
- ./redis/data:/data
fastgpt-minio:
image: minio/minio:RELEASE.2025-09-07T16-13-09Z
image: minio/minio:latest
container_name: fastgpt-minio
restart: always
ports:
- 9000:9000
- 9001:9001
networks:
- fastgpt
ports: # comment out if you do not need to expose the port (in production environment, you should not expose the port)
- '9000:9000'
- '9001:9001'
environment:
- MINIO_ROOT_USER=minioadmin
- MINIO_ROOT_PASSWORD=minioadmin
@ -135,7 +92,8 @@ services:
fastgpt:
container_name: fastgpt
image: ghcr.io/labring/fastgpt:v4.14.4 # git
image: ghcr.io/labring/fastgpt:v4.11.0 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.11.0 # 阿里云
ports:
- 3000:3000
networks:
@ -143,81 +101,101 @@ services:
depends_on:
- mongo
- sandbox
- vectorDB
restart: always
environment:
<<: [*x-share-db-config, *x-vec-config]
      # 前端外部可访问的地址,用于自动补全文件资源路径。例如 https://fastgpt.cn,不能填 localhost。这个值可以不填;不填则发给模型的图片会是相对路径而非全路径,模型可能伪造 Host。
FE_DOMAIN:
- FE_DOMAIN=
# root 密码,用户名为: root。如果需要修改 root 密码,直接修改这个环境变量,并重启即可。
DEFAULT_ROOT_PSW: 1234
- DEFAULT_ROOT_PSW=1234
# 登录凭证密钥
TOKEN_KEY: any
- TOKEN_KEY=any
# root的密钥常用于升级时候的初始化请求
ROOT_KEY: root_key
- ROOT_KEY=root_key
# 文件阅读加密
FILE_TOKEN_KEY: filetoken
- FILE_TOKEN_KEY=filetoken
# 密钥加密key
AES256_SECRET_KEY: fastgptkey
- AES256_SECRET_KEY=fastgptkey
# plugin 地址
PLUGIN_BASE_URL: http://fastgpt-plugin:3000
PLUGIN_TOKEN: *x-plugin-auth-token
- PLUGIN_BASE_URL=http://fastgpt-plugin:3000
- PLUGIN_TOKEN=xxxxxx
# sandbox 地址
SANDBOX_URL: http://sandbox:3000
- SANDBOX_URL=http://sandbox:3000
# AI Proxy 的地址,如果配了该地址,优先使用
AIPROXY_API_ENDPOINT: http://aiproxy:3000
- AIPROXY_API_ENDPOINT=http://aiproxy:3000
# AI Proxy 的 Admin Token与 AI Proxy 中的环境变量 ADMIN_KEY
AIPROXY_API_TOKEN: *x-aiproxy-token
- AIPROXY_API_TOKEN=aiproxy
# 数据库最大连接数
- DB_MAX_LINK=30
# MongoDB 连接参数. 用户名myusername,密码mypassword。
- MONGODB_URI=mongodb://myusername:mypassword@mongo:27017/fastgpt?authSource=admin
# Redis 连接参数
- REDIS_URL=redis://default:mypassword@redis:6379
# 向量库 连接参数
# zilliz 连接参数
- MILVUS_ADDRESS=zilliz_cloud_address
- MILVUS_TOKEN=zilliz_cloud_token
# 日志等级: debug, info, warn, error
LOG_LEVEL: info
STORE_LOG_LEVEL: warn
- LOG_LEVEL=info
- STORE_LOG_LEVEL=warn
# 工作流最大运行次数
WORKFLOW_MAX_RUN_TIMES: 1000
- WORKFLOW_MAX_RUN_TIMES=1000
# 批量执行节点,最大输入长度
WORKFLOW_MAX_LOOP_TIMES: 100
- WORKFLOW_MAX_LOOP_TIMES=100
# 对话文件过期天数
CHAT_FILE_EXPIRE_TIME: 7
# 服务器接收请求,最大大小,单位 MB
SERVICE_REQUEST_MAX_CONTENT_LENGTH: 10
- CHAT_FILE_EXPIRE_TIME=7
volumes:
- ./config.json:/app/data/config.json
sandbox:
container_name: sandbox
image: ghcr.io/labring/fastgpt-sandbox:v4.14.4
image: ghcr.io/labring/fastgpt-sandbox:v4.10.1 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.10.1 # 阿里云
networks:
- fastgpt
restart: always
fastgpt-mcp-server:
container_name: fastgpt-mcp-server
image: ghcr.io/labring/fastgpt-mcp_server:v4.14.4
networks:
- fastgpt
image: ghcr.io/labring/fastgpt-mcp_server:v4.10.1 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-mcp_server:v4.10.1 # 阿里云
ports:
- 3005:3000
networks:
- fastgpt
restart: always
environment:
- FASTGPT_ENDPOINT=http://fastgpt:3000
fastgpt-plugin:
image: ghcr.io/labring/fastgpt-plugin:v0.3.4
image: ghcr.io/labring/fastgpt-plugin:v0.1.5 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-plugin:v0.1.5 # 阿里云
container_name: fastgpt-plugin
restart: always
networks:
- fastgpt
environment:
<<: *x-share-db-config
AUTH_TOKEN: *x-plugin-auth-token
# 工具网络请求,最大请求和响应体
SERVICE_REQUEST_MAX_CONTENT_LENGTH: 10
# 最大 API 请求体大小
MAX_API_SIZE: 10
depends_on:
fastgpt-minio:
condition: service_healthy
# AI Proxy
aiproxy:
image: ghcr.io/labring/aiproxy:v0.3.2
container_name: aiproxy
restart: unless-stopped
depends_on:
@ -225,20 +203,19 @@ services:
condition: service_healthy
networks:
- fastgpt
- aiproxy
environment:
# 对应 fastgpt 里的AIPROXY_API_TOKEN
ADMIN_KEY: *x-aiproxy-token
# 错误日志详情保存时间(小时)
LOG_DETAIL_STORAGE_HOURS: 1
# 数据库连接地址
SQL_DSN: postgres://postgres:aiproxy@aiproxy_pg:5432/aiproxy
# 最大重试次数
RETRY_TIMES: 3
# 不需要计费
BILLING_ENABLED: false
# 不需要严格检测模型
DISABLE_MODEL_CONFIG: true
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:3000/api/status']
interval: 5s
@ -246,12 +223,13 @@ services:
retries: 10
aiproxy_pg:
image: pgvector/pgvector:0.8.0-pg15 # docker hub
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/pgvector:v0.8.0-pg15 # 阿里云
restart: unless-stopped
container_name: aiproxy_pg
volumes:
- ./aiproxy_pg:/var/lib/postgresql/data
networks:
- aiproxy
environment:
TZ: Asia/Shanghai
POSTGRES_USER: postgres
@ -264,6 +242,3 @@ services:
retries: 10
networks:
fastgpt:
aiproxy:
vector:


@ -1,293 +0,0 @@
# 用于部署的 docker-compose 文件:
# - FastGPT 端口映射为 3000:3000
# - FastGPT-mcp-server 端口映射 3005:3000
# - 建议修改账密后再运行
# plugin auth token
x-plugin-auth-token: &x-plugin-auth-token 'token'
# aiproxy token
x-aiproxy-token: &x-aiproxy-token 'token'
# 数据库连接相关配置
x-share-db-config: &x-share-db-config
MONGODB_URI: mongodb://myusername:mypassword@mongo:27017/fastgpt?authSource=admin
DB_MAX_LINK: 100
REDIS_URL: redis://default:mypassword@redis:6379
S3_EXTERNAL_BASE_URL: https://minio.com # S3 的公网访问地址
S3_ENDPOINT: fastgpt-minio
S3_PORT: 9000
S3_USE_SSL: false
S3_ACCESS_KEY: minioadmin
S3_SECRET_KEY: minioadmin
S3_PUBLIC_BUCKET: fastgpt-public # 公开读私有写桶
S3_PRIVATE_BUCKET: fastgpt-private # 私有读写桶
# 向量库相关配置
x-vec-config: &x-vec-config
OCEANBASE_URL: mysql://root%40tenantname:tenantpassword@ob:2881/test
version: '3.3'
services:
# Vector DB
vectorDB:
image: oceanbase/oceanbase-ce:4.3.5-lts
container_name: ob
restart: always
# ports: # 生产环境建议不要暴露
# - 2881:2881
networks:
- fastgpt
environment:
# 这里的配置只有首次运行生效。修改后,重启镜像是不会生效的。需要把持久化数据删除再重启,才有效果
- OB_SYS_PASSWORD=obsyspassword
# 不同于传统数据库OceanBase 数据库的账号包含更多字段,包括用户名、租户名和集群名。经典格式为"用户名@租户名#集群名"
# 比如用mysql客户端连接时根据本文件的默认配置应该指定 "-uroot@tenantname"
- OB_TENANT_NAME=tenantname
- OB_TENANT_PASSWORD=tenantpassword
# MODE分为MINI和NORMAL 后者会最大程度使用主机资源
- MODE=MINI
- OB_SERVER_IP=127.0.0.1
# 更多环境变量配置见oceanbase官方文档 https://www.oceanbase.com/docs/common-oceanbase-database-cn-1000000002013494
volumes:
- ../ob/data:/root/ob
- ../ob/config:/root/.obd/cluster
configs:
- source: init_sql
target: /root/boot/init.d/init.sql
healthcheck:
# obclient -h127.0.0.1 -P2881 -uroot@tenantname -ptenantpassword -e "SELECT 1;"
test:
[
"CMD-SHELL",
'obclient -h$${OB_SERVER_IP} -P2881 -uroot@$${OB_TENANT_NAME} -p$${OB_TENANT_PASSWORD} -e "SELECT 1;"',
]
interval: 30s
timeout: 10s
retries: 1000
start_period: 10s
mongo:
image: mongo:5.0.18 # cpu 不支持 AVX 时候使用 4.4.29
container_name: mongo
restart: always
networks:
- fastgpt
command: mongod --keyFile /data/mongodb.key --replSet rs0
environment:
- MONGO_INITDB_ROOT_USERNAME=myusername
- MONGO_INITDB_ROOT_PASSWORD=mypassword
volumes:
- ./mongo/data:/data/db
healthcheck:
test: ['CMD', 'mongo', '-u', 'myusername', '-p', 'mypassword', '--authenticationDatabase', 'admin', '--eval', "db.adminCommand('ping')"]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
entrypoint:
- bash
- -c
- |
openssl rand -base64 128 > /data/mongodb.key
chmod 400 /data/mongodb.key
chown 999:999 /data/mongodb.key
echo 'const isInited = rs.status().ok === 1
if(!isInited){
rs.initiate({
_id: "rs0",
members: [
{ _id: 0, host: "mongo:27017" }
]
})
}' > /data/initReplicaSet.js
# 启动MongoDB服务
exec docker-entrypoint.sh "$$@" &
# 等待MongoDB服务启动
until mongo -u myusername -p mypassword --authenticationDatabase admin --eval "print('waited for connection')"; do
echo "Waiting for MongoDB to start..."
sleep 2
done
# 执行初始化副本集的脚本
mongo -u myusername -p mypassword --authenticationDatabase admin /data/initReplicaSet.js
# 等待docker-entrypoint.sh脚本执行的MongoDB服务进程
wait $$!
redis:
image: redis:7.2-alpine
container_name: redis
networks:
- fastgpt
restart: always
command: |
redis-server --requirepass mypassword --loglevel warning --maxclients 10000 --appendonly yes --save 60 10 --maxmemory 4gb --maxmemory-policy noeviction
healthcheck:
test: ['CMD', 'redis-cli', '-a', 'mypassword', 'ping']
interval: 10s
timeout: 3s
retries: 3
start_period: 30s
volumes:
- ./redis/data:/data
fastgpt-minio:
image: minio/minio:RELEASE.2025-09-07T16-13-09Z
container_name: fastgpt-minio
restart: always
ports:
- 9000:9000
- 9001:9001
networks:
- fastgpt
environment:
- MINIO_ROOT_USER=minioadmin
- MINIO_ROOT_PASSWORD=minioadmin
volumes:
- ./fastgpt-minio:/data
command: server /data --console-address ":9001"
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:9000/minio/health/live']
interval: 30s
timeout: 20s
retries: 3
fastgpt:
container_name: fastgpt
image: ghcr.io/labring/fastgpt:v4.14.4 # git
ports:
- 3000:3000
networks:
- fastgpt
depends_on:
- mongo
- sandbox
- vectorDB
restart: always
environment:
<<: [*x-share-db-config, *x-vec-config]
# 前端外部可访问的地址,用于自动补全文件资源路径。例如 https:fastgpt.cn不能填 localhost。这个值可以不填不填则发给模型的图片会是一个相对路径而不是全路径模型可能伪造Host。
FE_DOMAIN:
# root 密码,用户名为: root。如果需要修改 root 密码,直接修改这个环境变量,并重启即可。
DEFAULT_ROOT_PSW: 1234
# 登录凭证密钥
TOKEN_KEY: any
# root的密钥常用于升级时候的初始化请求
ROOT_KEY: root_key
# 文件阅读加密
FILE_TOKEN_KEY: filetoken
# 密钥加密key
AES256_SECRET_KEY: fastgptkey
# plugin 地址
PLUGIN_BASE_URL: http://fastgpt-plugin:3000
PLUGIN_TOKEN: *x-plugin-auth-token
# sandbox 地址
SANDBOX_URL: http://sandbox:3000
# AI Proxy 的地址,如果配了该地址,优先使用
AIPROXY_API_ENDPOINT: http://aiproxy:3000
# AI Proxy 的 Admin Token与 AI Proxy 中的环境变量 ADMIN_KEY
AIPROXY_API_TOKEN: *x-aiproxy-token
# 日志等级: debug, info, warn, error
LOG_LEVEL: info
STORE_LOG_LEVEL: warn
# 工作流最大运行次数
WORKFLOW_MAX_RUN_TIMES: 1000
# 批量执行节点,最大输入长度
WORKFLOW_MAX_LOOP_TIMES: 100
# 对话文件过期天数
CHAT_FILE_EXPIRE_TIME: 7
# 服务器接收请求,最大大小,单位 MB
SERVICE_REQUEST_MAX_CONTENT_LENGTH: 10
volumes:
- ./config.json:/app/data/config.json
sandbox:
container_name: sandbox
image: ghcr.io/labring/fastgpt-sandbox:v4.14.4
networks:
- fastgpt
restart: always
fastgpt-mcp-server:
container_name: fastgpt-mcp-server
image: ghcr.io/labring/fastgpt-mcp_server:v4.14.4
networks:
- fastgpt
ports:
- 3005:3000
restart: always
environment:
- FASTGPT_ENDPOINT=http://fastgpt:3000
fastgpt-plugin:
image: ghcr.io/labring/fastgpt-plugin:v0.3.4
container_name: fastgpt-plugin
restart: always
networks:
- fastgpt
environment:
<<: *x-share-db-config
AUTH_TOKEN: *x-plugin-auth-token
# 工具网络请求,最大请求和响应体
SERVICE_REQUEST_MAX_CONTENT_LENGTH: 10
# 最大 API 请求体大小
MAX_API_SIZE: 10
depends_on:
fastgpt-minio:
condition: service_healthy
# AI Proxy
aiproxy:
image: ghcr.io/labring/aiproxy:v0.3.2
container_name: aiproxy
restart: unless-stopped
depends_on:
aiproxy_pg:
condition: service_healthy
networks:
- fastgpt
- aiproxy
environment:
# 对应 fastgpt 里的AIPROXY_API_TOKEN
ADMIN_KEY: *x-aiproxy-token
# 错误日志详情保存时间(小时)
LOG_DETAIL_STORAGE_HOURS: 1
# 数据库连接地址
SQL_DSN: postgres://postgres:aiproxy@aiproxy_pg:5432/aiproxy
# 最大重试次数
RETRY_TIMES: 3
# 不需要计费
BILLING_ENABLED: false
# 不需要严格检测模型
DISABLE_MODEL_CONFIG: true
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:3000/api/status']
interval: 5s
timeout: 5s
retries: 10
aiproxy_pg:
image: pgvector/pgvector:0.8.0-pg15 # docker hub
restart: unless-stopped
container_name: aiproxy_pg
volumes:
- ./aiproxy_pg:/var/lib/postgresql/data
networks:
- aiproxy
environment:
TZ: Asia/Shanghai
POSTGRES_USER: postgres
POSTGRES_DB: aiproxy
POSTGRES_PASSWORD: aiproxy
healthcheck:
test: ['CMD', 'pg_isready', '-U', 'postgres', '-d', 'aiproxy']
interval: 5s
timeout: 5s
retries: 10
networks:
fastgpt:
aiproxy:
vector:
configs:
init_sql:
name: init_sql
content: |
ALTER SYSTEM SET ob_vector_memory_limit_percentage = 30;
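上面文件开头的 `&x-plugin-auth-token`、`*x-plugin-auth-token` 和 `<<:` 分别是 YAML 的锚点(anchor)、别名(alias)和合并键(merge key):连接配置只需声明一次,多个服务的 `environment` 通过别名或合并复用同一份数据。一个最小示意(键名为示例假设):

```yaml
x-db: &x-db            # 锚点:定义一次
  MONGODB_URI: mongodb://user:pass@mongo:27017/fastgpt

services:
  a:
    environment:
      <<: *x-db        # 合并键:展开整个映射
  b:
    environment:
      <<: *x-db
      EXTRA: 'value'   # 合并后可追加或覆盖键
```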


@ -1,252 +0,0 @@
# 用于部署的 docker-compose 文件:
# - FastGPT 端口映射为 3000:3000
# - FastGPT-mcp-server 端口映射 3005:3000
# - 建议修改账密后再运行
# plugin auth token
x-plugin-auth-token: &x-plugin-auth-token 'token'
# aiproxy token
x-aiproxy-token: &x-aiproxy-token 'token'
# 数据库连接相关配置
x-share-db-config: &x-share-db-config
MONGODB_URI: mongodb://myusername:mypassword@mongo:27017/fastgpt?authSource=admin
DB_MAX_LINK: 100
REDIS_URL: redis://default:mypassword@redis:6379
S3_EXTERNAL_BASE_URL: https://minio.com # S3 的公网访问地址
S3_ENDPOINT: fastgpt-minio
S3_PORT: 9000
S3_USE_SSL: false
S3_ACCESS_KEY: minioadmin
S3_SECRET_KEY: minioadmin
S3_PUBLIC_BUCKET: fastgpt-public # 公开读私有写桶
S3_PRIVATE_BUCKET: fastgpt-private # 私有读写桶
# 向量库相关配置
x-vec-config: &x-vec-config
MILVUS_ADDRESS: zilliz_cloud_address
MILVUS_TOKEN: zilliz_cloud_token
version: '3.3'
services:
# Vector DB
mongo:
image: mongo:5.0.18 # cpu 不支持 AVX 时候使用 4.4.29
container_name: mongo
restart: always
networks:
- fastgpt
command: mongod --keyFile /data/mongodb.key --replSet rs0
environment:
- MONGO_INITDB_ROOT_USERNAME=myusername
- MONGO_INITDB_ROOT_PASSWORD=mypassword
volumes:
- ./mongo/data:/data/db
healthcheck:
test: ['CMD', 'mongo', '-u', 'myusername', '-p', 'mypassword', '--authenticationDatabase', 'admin', '--eval', "db.adminCommand('ping')"]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
entrypoint:
- bash
- -c
- |
openssl rand -base64 128 > /data/mongodb.key
chmod 400 /data/mongodb.key
chown 999:999 /data/mongodb.key
echo 'const isInited = rs.status().ok === 1
if(!isInited){
rs.initiate({
_id: "rs0",
members: [
{ _id: 0, host: "mongo:27017" }
]
})
}' > /data/initReplicaSet.js
# 启动MongoDB服务
exec docker-entrypoint.sh "$$@" &
# 等待MongoDB服务启动
until mongo -u myusername -p mypassword --authenticationDatabase admin --eval "print('waited for connection')"; do
echo "Waiting for MongoDB to start..."
sleep 2
done
# 执行初始化副本集的脚本
mongo -u myusername -p mypassword --authenticationDatabase admin /data/initReplicaSet.js
# 等待docker-entrypoint.sh脚本执行的MongoDB服务进程
wait $$!
redis:
image: redis:7.2-alpine
container_name: redis
networks:
- fastgpt
restart: always
command: |
redis-server --requirepass mypassword --loglevel warning --maxclients 10000 --appendonly yes --save 60 10 --maxmemory 4gb --maxmemory-policy noeviction
healthcheck:
test: ['CMD', 'redis-cli', '-a', 'mypassword', 'ping']
interval: 10s
timeout: 3s
retries: 3
start_period: 30s
volumes:
- ./redis/data:/data
fastgpt-minio:
image: minio/minio:RELEASE.2025-09-07T16-13-09Z
container_name: fastgpt-minio
restart: always
ports:
- 9000:9000
- 9001:9001
networks:
- fastgpt
environment:
- MINIO_ROOT_USER=minioadmin
- MINIO_ROOT_PASSWORD=minioadmin
volumes:
- ./fastgpt-minio:/data
command: server /data --console-address ":9001"
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:9000/minio/health/live']
interval: 30s
timeout: 20s
retries: 3
fastgpt:
container_name: fastgpt
image: ghcr.io/labring/fastgpt:v4.14.4 # git
ports:
- 3000:3000
networks:
- fastgpt
depends_on:
- mongo
- sandbox
- vectorDB
restart: always
environment:
<<: [*x-share-db-config, *x-vec-config]
# 前端外部可访问的地址,用于自动补全文件资源路径。例如 https:fastgpt.cn不能填 localhost。这个值可以不填不填则发给模型的图片会是一个相对路径而不是全路径模型可能伪造Host。
FE_DOMAIN:
# root 密码,用户名为: root。如果需要修改 root 密码,直接修改这个环境变量,并重启即可。
DEFAULT_ROOT_PSW: 1234
# 登录凭证密钥
TOKEN_KEY: any
# root的密钥常用于升级时候的初始化请求
ROOT_KEY: root_key
# 文件阅读加密
FILE_TOKEN_KEY: filetoken
# 密钥加密key
AES256_SECRET_KEY: fastgptkey
# plugin 地址
PLUGIN_BASE_URL: http://fastgpt-plugin:3000
PLUGIN_TOKEN: *x-plugin-auth-token
# sandbox 地址
SANDBOX_URL: http://sandbox:3000
# AI Proxy 的地址,如果配了该地址,优先使用
AIPROXY_API_ENDPOINT: http://aiproxy:3000
# AI Proxy 的 Admin Token与 AI Proxy 中的环境变量 ADMIN_KEY
AIPROXY_API_TOKEN: *x-aiproxy-token
# 日志等级: debug, info, warn, error
LOG_LEVEL: info
STORE_LOG_LEVEL: warn
# 工作流最大运行次数
WORKFLOW_MAX_RUN_TIMES: 1000
# 批量执行节点,最大输入长度
WORKFLOW_MAX_LOOP_TIMES: 100
# 对话文件过期天数
CHAT_FILE_EXPIRE_TIME: 7
# 服务器接收请求,最大大小,单位 MB
SERVICE_REQUEST_MAX_CONTENT_LENGTH: 10
volumes:
- ./config.json:/app/data/config.json
sandbox:
container_name: sandbox
image: ghcr.io/labring/fastgpt-sandbox:v4.14.4
networks:
- fastgpt
restart: always
fastgpt-mcp-server:
container_name: fastgpt-mcp-server
image: ghcr.io/labring/fastgpt-mcp_server:v4.14.4
networks:
- fastgpt
ports:
- 3005:3000
restart: always
environment:
- FASTGPT_ENDPOINT=http://fastgpt:3000
fastgpt-plugin:
image: ghcr.io/labring/fastgpt-plugin:v0.3.4
container_name: fastgpt-plugin
restart: always
networks:
- fastgpt
environment:
<<: *x-share-db-config
AUTH_TOKEN: *x-plugin-auth-token
# 工具网络请求,最大请求和响应体
SERVICE_REQUEST_MAX_CONTENT_LENGTH: 10
# 最大 API 请求体大小
MAX_API_SIZE: 10
depends_on:
fastgpt-minio:
condition: service_healthy
# AI Proxy
aiproxy:
image: ghcr.io/labring/aiproxy:v0.3.2
container_name: aiproxy
restart: unless-stopped
depends_on:
aiproxy_pg:
condition: service_healthy
networks:
- fastgpt
- aiproxy
environment:
# 对应 fastgpt 里的AIPROXY_API_TOKEN
ADMIN_KEY: *x-aiproxy-token
# 错误日志详情保存时间(小时)
LOG_DETAIL_STORAGE_HOURS: 1
# 数据库连接地址
SQL_DSN: postgres://postgres:aiproxy@aiproxy_pg:5432/aiproxy
# 最大重试次数
RETRY_TIMES: 3
# 不需要计费
BILLING_ENABLED: false
# 不需要严格检测模型
DISABLE_MODEL_CONFIG: true
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:3000/api/status']
interval: 5s
timeout: 5s
retries: 10
aiproxy_pg:
image: pgvector/pgvector:0.8.0-pg15 # docker hub
restart: unless-stopped
container_name: aiproxy_pg
volumes:
- ./aiproxy_pg:/var/lib/postgresql/data
networks:
- aiproxy
environment:
TZ: Asia/Shanghai
POSTGRES_USER: postgres
POSTGRES_DB: aiproxy
POSTGRES_PASSWORD: aiproxy
healthcheck:
test: ['CMD', 'pg_isready', '-U', 'postgres', '-d', 'aiproxy']
interval: 5s
timeout: 5s
retries: 10
networks:
fastgpt:
aiproxy:
vector:
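mongo entrypoint 里的 `$$@`、`$$!`(以及 OceanBase 健康检查里的 `$${VAR}`)是 Docker Compose 的转义写法:Compose 在解析阶段会把 `$$` 还原成单个 `$`,把变量留给容器内的 shell 求值,而不是被 Compose 自己插值。其还原逻辑可以这样示意(简化草图,只演示转义部分):

```javascript
// 简化示意:Compose 解析时把 "$$" 还原为字面量 "$"
// (真实实现还会处理 ${VAR} 插值,这里不涉及)
function composeUnescape(cmd) {
  return cmd.replace(/\$\$/g, '$');
}

const raw = 'obclient -h$${OB_SERVER_IP} -e "SELECT 1;"';
console.log(composeUnescape(raw));
// obclient -h${OB_SERVER_IP} -e "SELECT 1;"
```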

deploy/docker/run.sh Normal file

@ -0,0 +1,19 @@
#!/bin/bash
docker-compose pull
docker-compose up -d
echo "Docker Compose 重新拉取镜像完成!"
# 删除本地旧镜像
images=$(docker images --format "{{.ID}} {{.Repository}}" | grep fastgpt)
# 将镜像 ID 和名称放入数组中
IFS=$'\n' read -rd '' -a image_array <<<"$images"
# 遍历数组并删除所有旧的镜像
for ((i=1; i<${#image_array[@]}; i++))
do
image=${image_array[$i]}
image_id=${image%% *}
docker rmi $image_id
done
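上面的脚本在重新拉取镜像后,保留 `docker images` 列表中的第一条 fastgpt 镜像(即最新一条),删除其余旧镜像;循环从下标 1 开始正是为了跳过第一条。其筛选逻辑可用 JS 示意(输出数据为假设):

```javascript
// 模拟 `docker images --format "{{.ID}} {{.Repository}}" | grep fastgpt` 的输出,
// 每行为 "镜像ID 仓库名",第一行是最新镜像
const lines = [
  'aaa111 ghcr.io/labring/fastgpt',
  'bbb222 ghcr.io/labring/fastgpt',
  'ccc333 ghcr.io/labring/fastgpt-sandbox'
];

// 跳过第一条(保留最新),其余取空格前的镜像 ID 作为删除目标
const toRemove = lines.slice(1).map((line) => line.split(' ')[0]);
console.log(toRemove);
// [ 'bbb222', 'ccc333' ]
```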

deploy/docker/yml.js Normal file

@ -0,0 +1,398 @@
const fs = require('fs');
const path = require('path');
const template = `# 数据库的默认账号和密码仅首次运行时设置有效
# 如果修改了账号密码,记得改数据库和项目连接参数,别只改一处~
# 该配置文件只是给快速启动测试使用,正式使用,记得务必修改账号密码,以及调整合适的知识库参数,共享内存等
# 如果无法访问 dockerhub 和 git,可以用阿里云(阿里云没有arm包)
version: '3.3'
services:
# Vector DB
{{Vector_DB_Service}}
# DB
mongo:
image: mongo:5.0.18 # dockerhub
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/mongo:5.0.18 # 阿里云
# image: mongo:4.4.29 # cpu不支持AVX时候使用
container_name: mongo
restart: always
networks:
- fastgpt
command: mongod --keyFile /data/mongodb.key --replSet rs0
environment:
- MONGO_INITDB_ROOT_USERNAME=myusername
- MONGO_INITDB_ROOT_PASSWORD=mypassword
volumes:
- ./mongo/data:/data/db
entrypoint:
- bash
- -c
- |
openssl rand -base64 128 > /data/mongodb.key
chmod 400 /data/mongodb.key
chown 999:999 /data/mongodb.key
echo 'const isInited = rs.status().ok === 1
if(!isInited){
rs.initiate({
_id: "rs0",
members: [
{ _id: 0, host: "mongo:27017" }
]
})
}' > /data/initReplicaSet.js
# 启动MongoDB服务
exec docker-entrypoint.sh "$$@" &
# 等待MongoDB服务启动
until mongo -u myusername -p mypassword --authenticationDatabase admin --eval "print('waited for connection')"; do
echo "Waiting for MongoDB to start..."
sleep 2
done
# 执行初始化副本集的脚本
mongo -u myusername -p mypassword --authenticationDatabase admin /data/initReplicaSet.js
# 等待docker-entrypoint.sh脚本执行的MongoDB服务进程
wait $$!
redis:
image: redis:7.2-alpine
container_name: redis
networks:
- fastgpt
restart: always
command: |
redis-server --requirepass mypassword --loglevel warning --maxclients 10000 --appendonly yes --save 60 10 --maxmemory 4gb --maxmemory-policy noeviction
healthcheck:
test: ['CMD', 'redis-cli', '-a', 'mypassword', 'ping']
interval: 10s
timeout: 3s
retries: 3
start_period: 30s
volumes:
- ./redis/data:/data
fastgpt-minio:
image: minio/minio:latest
container_name: fastgpt-minio
restart: always
networks:
- fastgpt
ports: # comment out if you do not need to expose the port (in production environment, you should not expose the port)
- '9000:9000'
- '9001:9001'
environment:
- MINIO_ROOT_USER=minioadmin
- MINIO_ROOT_PASSWORD=minioadmin
volumes:
- ./fastgpt-minio:/data
command: server /data --console-address ":9001"
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:9000/minio/health/live']
interval: 30s
timeout: 20s
retries: 3
fastgpt:
container_name: fastgpt
image: ghcr.io/labring/fastgpt:v4.11.0 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:v4.11.0 # 阿里云
ports:
- 3000:3000
networks:
- fastgpt
depends_on:
- mongo
- sandbox
{{Vector_DB_Depends}}
restart: always
environment:
# 前端外部可访问的地址,用于自动补全文件资源路径。例如 https:fastgpt.cn,不能填 localhost。这个值可以不填,不填则发给模型的图片会是一个相对路径,而不是全路径,模型可能伪造Host。
- FE_DOMAIN=
# root 密码,用户名为: root。如果需要修改 root 密码,直接修改这个环境变量,并重启即可。
- DEFAULT_ROOT_PSW=1234
# 登录凭证密钥
- TOKEN_KEY=any
# root的密钥,常用于升级时候的初始化请求
- ROOT_KEY=root_key
# 文件阅读加密
- FILE_TOKEN_KEY=filetoken
# 密钥加密key
- AES256_SECRET_KEY=fastgptkey
# plugin 地址
- PLUGIN_BASE_URL=http://fastgpt-plugin:3000
- PLUGIN_TOKEN=xxxxxx
# sandbox 地址
- SANDBOX_URL=http://sandbox:3000
# AI Proxy 的地址,如果配了该地址,优先使用
- AIPROXY_API_ENDPOINT=http://aiproxy:3000
# AI Proxy 的 Admin Token,与 AI Proxy 中的环境变量 ADMIN_KEY
- AIPROXY_API_TOKEN=aiproxy
# 数据库最大连接数
- DB_MAX_LINK=30
# MongoDB 连接参数. 用户名myusername,密码mypassword
- MONGODB_URI=mongodb://myusername:mypassword@mongo:27017/fastgpt?authSource=admin
# Redis 连接参数
- REDIS_URL=redis://default:mypassword@redis:6379
# 向量库 连接参数
{{Vector_DB_ENV}}
# 日志等级: debug, info, warn, error
- LOG_LEVEL=info
- STORE_LOG_LEVEL=warn
# 工作流最大运行次数
- WORKFLOW_MAX_RUN_TIMES=1000
# 批量执行节点,最大输入长度
- WORKFLOW_MAX_LOOP_TIMES=100
# 对话文件过期天数
- CHAT_FILE_EXPIRE_TIME=7
volumes:
- ./config.json:/app/data/config.json
sandbox:
container_name: sandbox
image: ghcr.io/labring/fastgpt-sandbox:v4.10.1 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-sandbox:v4.10.1 # 阿里云
networks:
- fastgpt
restart: always
fastgpt-mcp-server:
container_name: fastgpt-mcp-server
image: ghcr.io/labring/fastgpt-mcp_server:v4.10.1 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-mcp_server:v4.10.1 # 阿里云
ports:
- 3005:3000
networks:
- fastgpt
restart: always
environment:
- FASTGPT_ENDPOINT=http://fastgpt:3000
fastgpt-plugin:
image: ghcr.io/labring/fastgpt-plugin:v0.1.5 # git
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt-plugin:v0.1.5 # 阿里云
container_name: fastgpt-plugin
restart: always
networks:
- fastgpt
environment:
- AUTH_TOKEN=xxxxxx # 如果不需要鉴权可以直接去掉这个环境变量
# 改成 minio 可访问地址例如 http://192.168.2.2:9000/fastgpt-plugins
# 必须指向 Minio 的桶的地址
# 如果 Minio 可以直接通过外网访问可以不设置这个环境变量
# - MINIO_CUSTOM_ENDPOINT=http://192.168.2.2:9000
- MINIO_ENDPOINT=fastgpt-minio
- MINIO_PORT=9000
- MINIO_USE_SSL=false
- MINIO_ACCESS_KEY=minioadmin
- MINIO_SECRET_KEY=minioadmin
- MINIO_BUCKET=fastgpt-plugins
depends_on:
fastgpt-minio:
condition: service_healthy
# AI Proxy
aiproxy:
image: ghcr.io/labring/aiproxy:v0.2.2
# image: registry.cn-hangzhou.aliyuncs.com/labring/aiproxy:v0.2.2 # 阿里云
container_name: aiproxy
restart: unless-stopped
depends_on:
aiproxy_pg:
condition: service_healthy
networks:
- fastgpt
environment:
# 对应 fastgpt 里的AIPROXY_API_TOKEN
- ADMIN_KEY=aiproxy
# 错误日志详情保存时间(小时)
- LOG_DETAIL_STORAGE_HOURS=1
# 数据库连接地址
- SQL_DSN=postgres://postgres:aiproxy@aiproxy_pg:5432/aiproxy
# 最大重试次数
- RETRY_TIMES=3
# 不需要计费
- BILLING_ENABLED=false
# 不需要严格检测模型
- DISABLE_MODEL_CONFIG=true
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:3000/api/status']
interval: 5s
timeout: 5s
retries: 10
aiproxy_pg:
image: pgvector/pgvector:0.8.0-pg15 # docker hub
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/pgvector:v0.8.0-pg15 # 阿里云
restart: unless-stopped
container_name: aiproxy_pg
volumes:
- ./aiproxy_pg:/var/lib/postgresql/data
networks:
- fastgpt
environment:
TZ: Asia/Shanghai
POSTGRES_USER: postgres
POSTGRES_DB: aiproxy
POSTGRES_PASSWORD: aiproxy
healthcheck:
test: ['CMD', 'pg_isready', '-U', 'postgres', '-d', 'aiproxy']
interval: 5s
timeout: 5s
retries: 10
networks:
fastgpt:
`;
const list = [
{
filename: './docker-compose-pgvector.yml',
depends: `- pg`,
service: `pg:
image: pgvector/pgvector:0.8.0-pg15 # docker hub
# image: registry.cn-hangzhou.aliyuncs.com/fastgpt/pgvector:v0.8.0-pg15 # 阿里云
container_name: pg
restart: always
# ports: # 生产环境建议不要暴露
# - 5432:5432
networks:
- fastgpt
environment:
# 这里的配置只有首次运行生效。修改后,重启镜像是不会生效的。需要把持久化数据删除再重启,才有效果
- POSTGRES_USER=username
- POSTGRES_PASSWORD=password
- POSTGRES_DB=postgres
volumes:
- ./pg/data:/var/lib/postgresql/data
healthcheck:
test: ['CMD', 'pg_isready', '-U', 'username', '-d', 'postgres']
interval: 5s
timeout: 5s
retries: 10`,
env: `- PG_URL=postgresql://username:password@pg:5432/postgres`
},
{
filename: './docker-compose-zilliz.yml',
depends: ``,
service: ``,
env: `# zilliz 连接参数
- MILVUS_ADDRESS=zilliz_cloud_address
- MILVUS_TOKEN=zilliz_cloud_token`
},
{
filename: './docker-compose-milvus.yml',
depends: `- milvusStandalone`,
service: `milvus-minio:
container_name: milvus-minio
image: minio/minio:RELEASE.2023-03-20T20-16-18Z
environment:
MINIO_ACCESS_KEY: minioadmin
MINIO_SECRET_KEY: minioadmin
# ports:
# - '9001:9001'
# - '9000:9000'
networks:
- fastgpt
volumes:
- ./milvus-minio:/minio_data
command: minio server /minio_data --console-address ":9001"
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:9000/minio/health/live']
interval: 30s
timeout: 20s
retries: 3
# milvus
milvusEtcd:
container_name: milvusEtcd
image: quay.io/coreos/etcd:v3.5.5
environment:
- ETCD_AUTO_COMPACTION_MODE=revision
- ETCD_AUTO_COMPACTION_RETENTION=1000
- ETCD_QUOTA_BACKEND_BYTES=4294967296
- ETCD_SNAPSHOT_COUNT=50000
networks:
- fastgpt
volumes:
- ./milvus/etcd:/etcd
command: etcd -advertise-client-urls=http://127.0.0.1:2379 -listen-client-urls http://0.0.0.0:2379 --data-dir /etcd
healthcheck:
test: ['CMD', 'etcdctl', 'endpoint', 'health']
interval: 30s
timeout: 20s
retries: 3
milvusStandalone:
container_name: milvusStandalone
image: milvusdb/milvus:v2.4.3
command: ['milvus', 'run', 'standalone']
security_opt:
- seccomp:unconfined
environment:
ETCD_ENDPOINTS: milvusEtcd:2379
MINIO_ADDRESS: milvus-minio:9000
networks:
- fastgpt
volumes:
- ./milvus/data:/var/lib/milvus
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:9091/healthz']
interval: 30s
start_period: 90s
timeout: 20s
retries: 3
depends_on:
- 'milvusEtcd'
- 'milvus-minio'`,
env: `- MILVUS_ADDRESS=http://milvusStandalone:19530
- MILVUS_TOKEN=none`
},
{
filename: './docker-compose-oceanbase/docker-compose.yml',
depends: `- ob`,
service: `ob:
image: oceanbase/oceanbase-ce:4.3.5-lts # docker hub
# image: quay.io/oceanbase/oceanbase-ce:4.3.5-lts # 镜像
container_name: ob
restart: always
# ports: # 生产环境建议不要暴露
# - 2881:2881
networks:
- fastgpt
environment:
# 这里的配置只有首次运行生效。修改后,重启镜像是不会生效的。需要把持久化数据删除再重启,才有效果
- OB_SYS_PASSWORD=obsyspassword
# 不同于传统数据库,OceanBase 数据库的账号包含更多字段,包括用户名、租户名和集群名。经典格式为"用户名@租户名#集群名"
# 比如用mysql客户端连接时,根据本文件的默认配置,应该指定 "-uroot@tenantname"
- OB_TENANT_NAME=tenantname
- OB_TENANT_PASSWORD=tenantpassword
# MODE分为MINI和NORMAL 后者会最大程度使用主机资源
- MODE=MINI
- OB_SERVER_IP=127.0.0.1
# 更多环境变量配置见oceanbase官方文档 https://www.oceanbase.com/docs/common-oceanbase-database-cn-1000000002013494
volumes:
- ./ob/data:/root/ob
- ./ob/config:/root/.obd/cluster
- ./init.sql:/root/boot/init.d/init.sql
healthcheck:
# obclient -h127.0.0.1 -P2881 -uroot@tenantname -ptenantpassword -e "SELECT 1;"
test:
[
'CMD-SHELL',
'obclient -h\$\$\$\${OB_SERVER_IP} -P2881 -uroot@\$\$\$\${OB_TENANT_NAME} -p\$\$\$\${OB_TENANT_PASSWORD} -e "SELECT 1;"'
]
interval: 30s
timeout: 10s
retries: 1000
start_period: 10s`,
env: `- OCEANBASE_URL=mysql://root%40tenantname:tenantpassword@ob:2881/test`
}
];
list.forEach((item) => {
const { filename, service, env, depends } = item;
const content = template
.replace('{{Vector_DB_Service}}', service)
.replace('{{Vector_DB_ENV}}', env)
.replace('{{Vector_DB_Depends}}', depends);
fs.writeFileSync(path.join(__dirname, filename), content, 'utf-8');
});
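yml.js 用 `template.replace('{{Vector_DB_Service}}', service)` 做占位符替换。注意:`String.prototype.replace` 传入字符串模式时只替换第一次出现,所以模板中每个 `{{...}}` 占位符只能出现一次;若需多处替换,要改用带 `g` 标志的正则:

```javascript
const template = 'a={{X}}, b={{X}}';

// 字符串模式:只替换第一处
console.log(template.replace('{{X}}', '1'));
// a=1, b={{X}}

// 全局正则:替换所有出现
console.log(template.replace(/\{\{X\}\}/g, '1'));
// a=1, b=1
```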


@ -1,228 +0,0 @@
#!/usr/bin/env node
import fs from 'fs';
import path from 'path';
/**
* @enum {String} RegionEnum
*/
const RegionEnum = {
cn: 'cn',
global: 'global'
};
/**
* @enum {String} VectorEnum
*/
const VectorEnum = {
pg: 'pg',
milvus: 'milvus',
zilliz: 'zilliz',
ob: 'ob'
};
/**
* @enum {string} Services
*/
const Services = {
fastgpt: 'fastgpt',
fastgptPlugin: 'fastgpt-plugin',
fastgptSandbox: 'fastgpt-sandbox',
fastgptMcpServer: 'fastgpt-mcp_server',
minio: 'minio',
mongo: 'mongo',
redis: 'redis',
aiproxy: 'aiproxy',
aiproxyPg: 'aiproxy-pg',
// vectors
pg: 'pg',
milvusMinio: 'milvus-minio',
milvusEtcd: 'milvus-etcd',
milvusStandalone: 'milvus-standalone',
oceanbase: 'oceanbase'
};
// make sure the cwd
const basePath = process.cwd();
if (!basePath.endsWith('deploy')) {
process.chdir('deploy');
}
/**
* @typedef {{ tag: String, image: {cn: String, global: String} }} ArgItemType
*/
/** format the args
* @type {Record<Services, ArgItemType>}
*/
const args = (() => {
/**
* @type {{tags: Record<Services, string>, images: Record<Services, Record<string, string>>}}
*/
const obj = JSON.parse(fs.readFileSync(path.join(process.cwd(), 'args.json')));
const args = {};
for (const key of Object.keys(obj.tags)) {
args[key] = {
tag: obj.tags[key],
image: {
cn: obj.images.cn[key],
global: obj.images.global[key]
}
};
}
return args;
})();
const vector = {
pg: {
db: '',
config: `\
PG_URL: postgresql://username:password@pg:5432/postgres`,
extra: ''
},
milvus: {
db: '',
config: `\
MILVUS_ADDRESS: http://milvusStandalone:19530
MILVUS_TOKEN: none
`,
extra: ''
},
zilliz: {
db: '',
config: `\
MILVUS_ADDRESS: zilliz_cloud_address
MILVUS_TOKEN: zilliz_cloud_token`,
extra: ''
},
ob: {
db: '',
config: `\
OCEANBASE_URL: mysql://root%40tenantname:tenantpassword@ob:2881/test
`,
extra: `\
configs:
init_sql:
name: init_sql
content: |
ALTER SYSTEM SET ob_vector_memory_limit_percentage = 30;
`
}
};
/**
* replace all ${{}}
* @param {string} source
* @param {RegionEnum} region
* @param {VectorEnum} vec
* @returns {string}
*/
const replace = (source, region, vec) => {
// Match ${{expr}}, capture "expr" inside {{}}
return source.replace(/\$\{\{([^}]*)\}\}/g, (_, expr) => {
// expr: a.b
/**
* @type {String}
*/
const [a, b] = expr.split('.');
if (a === 'vec') {
if (b === 'db') {
return replace(vector[vec].db, region, vec);
} else {
return vector[vec][b];
}
}
if (b === 'tag') {
return args[a].tag;
} else if (b === 'image') {
return args[a].image[region];
}
});
};
{
// read in Vectors
const pg = fs.readFileSync(path.join(process.cwd(), 'templates', 'vector', 'pg.txt'));
vector.pg.db = String(pg);
const milvus = fs.readFileSync(path.join(process.cwd(), 'templates', 'vector', 'milvus.txt'));
vector.milvus.db = String(milvus);
const ob = fs.readFileSync(path.join(process.cwd(), 'templates', 'vector', 'ob.txt'));
vector.ob.db = String(ob);
}
const generateDevFile = async () => {
console.log('generating dev/docker-compose.yml');
// 1. read template
const template = await fs.promises.readFile(
path.join(process.cwd(), 'templates', 'docker-compose.dev.yml'),
'utf8'
);
await Promise.all([
fs.promises.writeFile(
path.join(process.cwd(), 'dev', 'docker-compose.cn.yml'),
replace(template, 'cn')
),
fs.promises.writeFile(
path.join(process.cwd(), 'dev', 'docker-compose.yml'),
replace(template, 'global')
)
]);
console.log('successfully generated dev files');
};
const generateProdFile = async () => {
console.log('generating prod/docker-compose.yml');
const template = await fs.promises.readFile(
path.join(process.cwd(), 'templates', 'docker-compose.prod.yml'),
'utf8'
);
await Promise.all([
fs.promises.writeFile(
path.join(process.cwd(), 'docker', 'cn', 'docker-compose.pg.yml'),
replace(template, 'cn', VectorEnum.pg)
),
fs.promises.writeFile(
path.join(process.cwd(), 'docker', 'global', 'docker-compose.pg.yml'),
replace(template, 'global', VectorEnum.pg)
),
fs.promises.writeFile(
path.join(process.cwd(), 'docker', 'cn', 'docker-compose.milvus.yml'),
replace(template, 'cn', VectorEnum.milvus)
),
fs.promises.writeFile(
path.join(process.cwd(), 'docker', 'global', 'docker-compose.milvus.yml'),
replace(template, 'global', VectorEnum.milvus)
),
fs.promises.writeFile(
path.join(process.cwd(), 'docker', 'cn', 'docker-compose.zilliz.yml'),
replace(template, 'cn', VectorEnum.zilliz)
),
fs.promises.writeFile(
path.join(process.cwd(), 'docker', 'global', 'docker-compose.zilliz.yml'),
replace(template, 'global', VectorEnum.zilliz)
),
fs.promises.writeFile(
path.join(process.cwd(), 'docker', 'cn', 'docker-compose.oceanbase.yml'),
replace(template, 'cn', VectorEnum.ob)
),
fs.promises.writeFile(
path.join(process.cwd(), 'docker', 'global', 'docker-compose.oceanbase.yml'),
replace(template, 'global', VectorEnum.ob)
)
]);
console.log('successfully generated prod files');
};
await Promise.all([generateDevFile(), generateProdFile()]);
console.log('copy the docker dir to ../document/public');
await fs.promises.cp(
path.join(process.cwd(), 'docker'),
path.join(process.cwd(), '..', 'document', 'public', 'deploy', 'docker'),
{ recursive: true }
);
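该脚本的核心是 `replace`:用正则匹配 `${{a.b}}` 占位符,按点号路径到查表对象里取值。剥离 vector/args 细节后的最小可运行示意(查表数据为假设):

```javascript
// 按 "服务名.字段" 在查表对象中取值,替换所有 ${{...}} 占位符
const lookup = {
  mongo: { tag: '5.0.18', image: { cn: 'registry.example/mongo', global: 'mongo' } }
};

function render(source, region) {
  return source.replace(/\$\{\{([^}]*)\}\}/g, (_, expr) => {
    const [name, field] = expr.split('.');
    return field === 'image' ? lookup[name].image[region] : lookup[name][field];
  });
}

console.log(render('image: ${{mongo.image}}:${{mongo.tag}}', 'global'));
// image: mongo:5.0.18
```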


@ -1,229 +0,0 @@
# 用于开发的 docker-compose 文件:
# - 只包含 FastGPT 的最小化运行条件
# - 没有 FastGPT 本体
# - 所有端口都映射到外层
# - pg: 5432
# - mongo: 27017
# - redis: 6379
# - fastgpt-sandbox: 3002
# - fastgpt-plugin: 3003
# - aiproxy: 3010
# - 使用 pgvector 作为默认的向量库
services:
# Vector DB
pg:
image: ${{pg.image}}:${{pg.tag}}
container_name: pg
restart: always
ports: # 生产环境建议不要暴露
- 5432:5432
networks:
- fastgpt
environment:
# 这里的配置只有首次运行生效。修改后,重启镜像是不会生效的。需要把持久化数据删除再重启,才有效果
- POSTGRES_USER=username
- POSTGRES_PASSWORD=password
- POSTGRES_DB=postgres
volumes:
- ./pg/data:/var/lib/postgresql/data
healthcheck:
test: ['CMD', 'pg_isready', '-U', 'username', '-d', 'postgres']
interval: 5s
timeout: 5s
retries: 10
# DB
mongo:
image: ${{mongo.image}}:${{mongo.tag}} # cpu 不支持 AVX 时候使用 4.4.29
container_name: mongo
restart: always
ports:
- 27017:27017
networks:
- fastgpt
command: mongod --keyFile /data/mongodb.key --replSet rs0
environment:
- MONGO_INITDB_ROOT_USERNAME=myusername
- MONGO_INITDB_ROOT_PASSWORD=mypassword
volumes:
- ./mongo/data:/data/db
healthcheck:
test:
[
'CMD',
'mongo',
'-u',
'myusername',
'-p',
'mypassword',
'--authenticationDatabase',
'admin',
'--eval',
"db.adminCommand('ping')"
]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
entrypoint:
- bash
- -c
- |
openssl rand -base64 128 > /data/mongodb.key
chmod 400 /data/mongodb.key
chown 999:999 /data/mongodb.key
echo 'const isInited = rs.status().ok === 1
if(!isInited){
rs.initiate({
_id: "rs0",
members: [
{ _id: 0, host: "mongo:27017" }
]
})
}' > /data/initReplicaSet.js
# 启动MongoDB服务
exec docker-entrypoint.sh "$$@" &
# 等待MongoDB服务启动
until mongo -u myusername -p mypassword --authenticationDatabase admin --eval "print('waited for connection')"; do
echo "Waiting for MongoDB to start..."
sleep 2
done
# 执行初始化副本集的脚本
mongo -u myusername -p mypassword --authenticationDatabase admin /data/initReplicaSet.js
# 等待docker-entrypoint.sh脚本执行的MongoDB服务进程
wait $$!
redis:
image: ${{redis.image}}:${{redis.tag}}
container_name: redis
ports:
- 6379:6379
networks:
- fastgpt
restart: always
command: |
redis-server --requirepass mypassword --loglevel warning --maxclients 10000 --appendonly yes --save 60 10 --maxmemory 4gb --maxmemory-policy noeviction
healthcheck:
test: ['CMD', 'redis-cli', '-a', 'mypassword', 'ping']
interval: 10s
timeout: 3s
retries: 3
start_period: 30s
volumes:
- ./redis/data:/data
fastgpt-minio:
image: ${{minio.image}}:${{minio.tag}}
container_name: fastgpt-minio
restart: always
networks:
- fastgpt
ports:
- '9000:9000'
- '9001:9001'
environment:
- MINIO_ROOT_USER=minioadmin
- MINIO_ROOT_PASSWORD=minioadmin
volumes:
- ./fastgpt-minio:/data
command: server /data --console-address ":9001"
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:9000/minio/health/live']
interval: 30s
timeout: 20s
retries: 3
sandbox:
container_name: sandbox
image: ${{fastgpt-sandbox.image}}:${{fastgpt-sandbox.tag}}
ports:
- 3002:3000
networks:
- fastgpt
restart: always
fastgpt-mcp-server:
container_name: fastgpt-mcp-server
image: ${{fastgpt-mcp_server.image}}:${{fastgpt-mcp_server.tag}}
ports:
- 3005:3000
networks:
- fastgpt
restart: always
environment:
- FASTGPT_ENDPOINT=http://fastgpt:3000
fastgpt-plugin:
image: ${{fastgpt-plugin.image}}:${{fastgpt-plugin.tag}}
container_name: fastgpt-plugin
restart: always
ports:
- 3003:3000
networks:
- fastgpt
environment:
- AUTH_TOKEN=token
- S3_EXTERNAL_BASE_URL=http://127.0.0.1:9000 # TODO: change to the actual IP address of your MinIO instance
- S3_ENDPOINT=fastgpt-minio
- S3_PORT=9000
- S3_USE_SSL=false
- S3_ACCESS_KEY=minioadmin
- S3_SECRET_KEY=minioadmin
- S3_PUBLIC_BUCKET=fastgpt-public # bucket for temporary files created by system tools; requires public read, private write
- S3_PRIVATE_BUCKET=fastgpt-private # bucket for hot-installed system plugin files; private read/write
- MONGODB_URI=mongodb://myusername:mypassword@mongo:27017/fastgpt?authSource=admin&directConnection=true
- REDIS_URL=redis://default:mypassword@redis:6379
depends_on:
fastgpt-minio:
condition: service_healthy
# AI Proxy
aiproxy:
image: ${{aiproxy.image}}:${{aiproxy.tag}}
container_name: aiproxy
restart: unless-stopped
ports:
- 3010:3000
depends_on:
aiproxy_pg:
condition: service_healthy
networks:
- fastgpt
- aiproxy
environment:
# Must match AIPROXY_API_TOKEN in fastgpt
- ADMIN_KEY=aiproxy
# Retention time for detailed error logs (hours)
- LOG_DETAIL_STORAGE_HOURS=1
# Database connection string
- SQL_DSN=postgres://postgres:aiproxy@aiproxy_pg:5432/aiproxy
# Maximum retry count
- RETRY_TIMES=3
# Billing is not needed
- BILLING_ENABLED=false
# Skip strict model validation
- DISABLE_MODEL_CONFIG=true
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:3000/api/status']
interval: 5s
timeout: 5s
retries: 10
aiproxy_pg:
image: ${{aiproxy-pg.image}}:${{aiproxy-pg.tag}} # docker hub
restart: unless-stopped
container_name: aiproxy_pg
volumes:
- ./aiproxy_pg:/var/lib/postgresql/data
networks:
- aiproxy
environment:
TZ: Asia/Shanghai
POSTGRES_USER: postgres
POSTGRES_DB: aiproxy
POSTGRES_PASSWORD: aiproxy
healthcheck:
test: ['CMD', 'pg_isready', '-U', 'postgres', '-d', 'aiproxy']
interval: 5s
timeout: 5s
retries: 10
networks:
fastgpt:
aiproxy:


@ -1,251 +0,0 @@
# docker-compose file for deployment:
# - FastGPT port mapping: 3000:3000
# - FastGPT-mcp-server port mapping: 3005:3000
# - Change the default credentials before running
# plugin auth token
x-plugin-auth-token: &x-plugin-auth-token 'token'
# aiproxy token
x-aiproxy-token: &x-aiproxy-token 'token'
# Shared database connection config
x-share-db-config: &x-share-db-config
MONGODB_URI: mongodb://myusername:mypassword@mongo:27017/fastgpt?authSource=admin
DB_MAX_LINK: 100
REDIS_URL: redis://default:mypassword@redis:6379
S3_EXTERNAL_BASE_URL: https://minio.com # public access URL of the S3 service
S3_ENDPOINT: fastgpt-minio
S3_PORT: 9000
S3_USE_SSL: false
S3_ACCESS_KEY: minioadmin
S3_SECRET_KEY: minioadmin
S3_PUBLIC_BUCKET: fastgpt-public # public-read, private-write bucket
S3_PRIVATE_BUCKET: fastgpt-private # private read/write bucket
# Vector database config
x-vec-config: &x-vec-config
${{vec.config}}
version: '3.3'
services:
# Vector DB
${{vec.db}}
mongo:
image: ${{mongo.image}}:${{mongo.tag}} # use 4.4.29 if the CPU does not support AVX
container_name: mongo
restart: always
networks:
- fastgpt
command: mongod --keyFile /data/mongodb.key --replSet rs0
environment:
- MONGO_INITDB_ROOT_USERNAME=myusername
- MONGO_INITDB_ROOT_PASSWORD=mypassword
volumes:
- ./mongo/data:/data/db
healthcheck:
test: ['CMD', 'mongo', '-u', 'myusername', '-p', 'mypassword', '--authenticationDatabase', 'admin', '--eval', "db.adminCommand('ping')"]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
entrypoint:
- bash
- -c
- |
openssl rand -base64 128 > /data/mongodb.key
chmod 400 /data/mongodb.key
chown 999:999 /data/mongodb.key
echo 'const isInited = rs.status().ok === 1
if(!isInited){
rs.initiate({
_id: "rs0",
members: [
{ _id: 0, host: "mongo:27017" }
]
})
}' > /data/initReplicaSet.js
# Start the MongoDB service
exec docker-entrypoint.sh "$$@" &
# Wait for MongoDB to come up
until mongo -u myusername -p mypassword --authenticationDatabase admin --eval "print('waited for connection')"; do
echo "Waiting for MongoDB to start..."
sleep 2
done
# Run the replica set initialization script
mongo -u myusername -p mypassword --authenticationDatabase admin /data/initReplicaSet.js
# Wait on the MongoDB process started by docker-entrypoint.sh
wait $$!
redis:
image: ${{redis.image}}:${{redis.tag}}
container_name: redis
networks:
- fastgpt
restart: always
command: |
redis-server --requirepass mypassword --loglevel warning --maxclients 10000 --appendonly yes --save 60 10 --maxmemory 4gb --maxmemory-policy noeviction
healthcheck:
test: ['CMD', 'redis-cli', '-a', 'mypassword', 'ping']
interval: 10s
timeout: 3s
retries: 3
start_period: 30s
volumes:
- ./redis/data:/data
fastgpt-minio:
image: ${{minio.image}}:${{minio.tag}}
container_name: fastgpt-minio
restart: always
ports:
- 9000:9000
- 9001:9001
networks:
- fastgpt
environment:
- MINIO_ROOT_USER=minioadmin
- MINIO_ROOT_PASSWORD=minioadmin
volumes:
- ./fastgpt-minio:/data
command: server /data --console-address ":9001"
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:9000/minio/health/live']
interval: 30s
timeout: 20s
retries: 3
fastgpt:
container_name: fastgpt
image: ${{fastgpt.image}}:${{fastgpt.tag}} # git
ports:
- 3000:3000
networks:
- fastgpt
depends_on:
- mongo
- sandbox
- vectorDB
restart: always
environment:
<<: [*x-share-db-config, *x-vec-config]
# Externally accessible frontend address, used to build absolute URLs for file resources, e.g. https://fastgpt.cn (must not be localhost). Optional: if unset, images sent to the model use relative paths instead of full URLs, and the model may fabricate the host.
FE_DOMAIN:
# Password for the root account (username: root). To change it, update this variable and restart.
DEFAULT_ROOT_PSW: 1234
# Login credential secret
TOKEN_KEY: any
# Root secret, commonly used for initialization requests during upgrades
ROOT_KEY: root_key
# File-access token secret
FILE_TOKEN_KEY: filetoken
# AES-256 secret encryption key
AES256_SECRET_KEY: fastgptkey
# plugin service address
PLUGIN_BASE_URL: http://fastgpt-plugin:3000
PLUGIN_TOKEN: *x-plugin-auth-token
# sandbox service address
SANDBOX_URL: http://sandbox:3000
# AI Proxy address; takes priority when configured
AIPROXY_API_ENDPOINT: http://aiproxy:3000
# AI Proxy admin token; must match the ADMIN_KEY environment variable in AI Proxy
AIPROXY_API_TOKEN: *x-aiproxy-token
# Log levels: debug, info, warn, error
LOG_LEVEL: info
STORE_LOG_LEVEL: warn
# Maximum workflow run count
WORKFLOW_MAX_RUN_TIMES: 1000
# Maximum input length for batch-execution (loop) nodes
WORKFLOW_MAX_LOOP_TIMES: 100
# Chat file expiration (days)
CHAT_FILE_EXPIRE_TIME: 7
# Maximum request body size accepted by the server, in MB
SERVICE_REQUEST_MAX_CONTENT_LENGTH: 10
volumes:
- ./config.json:/app/data/config.json
sandbox:
container_name: sandbox
image: ${{fastgpt-sandbox.image}}:${{fastgpt-sandbox.tag}}
networks:
- fastgpt
restart: always
fastgpt-mcp-server:
container_name: fastgpt-mcp-server
image: ${{fastgpt-mcp_server.image}}:${{fastgpt-mcp_server.tag}}
networks:
- fastgpt
ports:
- 3005:3000
restart: always
environment:
- FASTGPT_ENDPOINT=http://fastgpt:3000
fastgpt-plugin:
image: ${{fastgpt-plugin.image}}:${{fastgpt-plugin.tag}}
container_name: fastgpt-plugin
restart: always
networks:
- fastgpt
environment:
<<: *x-share-db-config
AUTH_TOKEN: *x-plugin-auth-token
# Maximum request/response body size for tool network requests
SERVICE_REQUEST_MAX_CONTENT_LENGTH: 10
# Maximum API request body size
MAX_API_SIZE: 10
depends_on:
fastgpt-minio:
condition: service_healthy
# AI Proxy
aiproxy:
image: ${{aiproxy.image}}:${{aiproxy.tag}}
container_name: aiproxy
restart: unless-stopped
depends_on:
aiproxy_pg:
condition: service_healthy
networks:
- fastgpt
- aiproxy
environment:
# Must match AIPROXY_API_TOKEN in fastgpt
ADMIN_KEY: *x-aiproxy-token
# Retention time for detailed error logs (hours)
LOG_DETAIL_STORAGE_HOURS: 1
# Database connection string
SQL_DSN: postgres://postgres:aiproxy@aiproxy_pg:5432/aiproxy
# Maximum retry count
RETRY_TIMES: 3
# Billing is not needed
BILLING_ENABLED: false
# Skip strict model validation
DISABLE_MODEL_CONFIG: true
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:3000/api/status']
interval: 5s
timeout: 5s
retries: 10
aiproxy_pg:
image: ${{aiproxy-pg.image}}:${{aiproxy-pg.tag}} # docker hub
restart: unless-stopped
container_name: aiproxy_pg
volumes:
- ./aiproxy_pg:/var/lib/postgresql/data
networks:
- aiproxy
environment:
TZ: Asia/Shanghai
POSTGRES_USER: postgres
POSTGRES_DB: aiproxy
POSTGRES_PASSWORD: aiproxy
healthcheck:
test: ['CMD', 'pg_isready', '-U', 'postgres', '-d', 'aiproxy']
interval: 5s
timeout: 5s
retries: 10
networks:
fastgpt:
aiproxy:
vector:
${{vec.extra}}


@ -1,58 +0,0 @@
milvus-minio:
container_name: milvus-minio
image: ${{milvus-minio.image}}:${{milvus-minio.tag}}
environment:
MINIO_ACCESS_KEY: minioadmin
MINIO_SECRET_KEY: minioadmin
networks:
- vector
volumes:
- ./milvus-minio:/minio_data
command: minio server /minio_data --console-address ":9001"
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:9000/minio/health/live']
interval: 30s
timeout: 20s
retries: 3
# milvus
milvus-etcd:
container_name: milvus-etcd
image: ${{milvus-etcd.image}}:${{milvus-etcd.tag}}
environment:
- ETCD_AUTO_COMPACTION_MODE=revision
- ETCD_AUTO_COMPACTION_RETENTION=1000
- ETCD_QUOTA_BACKEND_BYTES=4294967296
- ETCD_SNAPSHOT_COUNT=50000
networks:
- vector
volumes:
- ./milvus/etcd:/etcd
command: etcd -advertise-client-urls=http://127.0.0.1:2379 -listen-client-urls http://0.0.0.0:2379 --data-dir /etcd
healthcheck:
test: ['CMD', 'etcdctl', 'endpoint', 'health']
interval: 30s
timeout: 20s
retries: 3
vectorDB:
container_name: milvusStandalone
image: ${{milvus-standalone.image}}:${{milvus-standalone.tag}}
command: ['milvus', 'run', 'standalone']
security_opt:
- seccomp:unconfined
environment:
ETCD_ENDPOINTS: milvus-etcd:2379
MINIO_ADDRESS: milvus-minio:9000
networks:
- fastgpt
- vector
volumes:
- ./milvus/data:/var/lib/milvus
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:9091/healthz']
interval: 30s
start_period: 90s
timeout: 20s
retries: 3
depends_on:
- 'milvus-etcd'
- 'milvus-minio'


@ -1,36 +0,0 @@
vectorDB:
image: ${{oceanbase.image}}:${{oceanbase.tag}}
container_name: ob
restart: always
# ports: # Do not expose in production
#   - 2881:2881
networks:
- fastgpt
environment:
# These settings only take effect on first run. Changing them later and restarting the container has no effect; you must delete the persisted data and restart.
- OB_SYS_PASSWORD=obsyspassword
# Unlike traditional databases, an OceanBase account has more fields: user name, tenant name, and cluster name, in the classic form "user@tenant#cluster".
# For example, with the defaults in this file, a mysql client should connect with "-uroot@tenantname".
- OB_TENANT_NAME=tenantname
- OB_TENANT_PASSWORD=tenantpassword
# MODE is either MINI or NORMAL; NORMAL uses as much of the host's resources as possible
- MODE=MINI
- OB_SERVER_IP=127.0.0.1
# See the official OceanBase docs for more environment variables: https://www.oceanbase.com/docs/common-oceanbase-database-cn-1000000002013494
volumes:
- ../ob/data:/root/ob
- ../ob/config:/root/.obd/cluster
configs:
- source: init_sql
target: /root/boot/init.d/init.sql
healthcheck:
# obclient -h127.0.0.1 -P2881 -uroot@tenantname -ptenantpassword -e "SELECT 1;"
test:
[
"CMD-SHELL",
'obclient -h$${OB_SERVER_IP} -P2881 -uroot@$${OB_TENANT_NAME} -p$${OB_TENANT_PASSWORD} -e "SELECT 1;"',
]
interval: 30s
timeout: 10s
retries: 1000
start_period: 10s


@ -1,18 +0,0 @@
vectorDB:
image: ${{pg.image}}:${{pg.tag}}
container_name: pg
restart: always
networks:
- fastgpt
environment:
# These settings only take effect on first run. Changing them later and restarting the container has no effect; you must delete the persisted data and restart.
- POSTGRES_USER=username
- POSTGRES_PASSWORD=password
- POSTGRES_DB=postgres
volumes:
- ./pg/data:/var/lib/postgresql/data
healthcheck:
test: ['CMD', 'pg_isready', '-U', 'username', '-d', 'postgres']
interval: 5s
timeout: 5s
retries: 10


@ -1 +1,4 @@
NEXT_PUBLIC_SEARCH_APPKEY=
SEARCH_APPWRITEKEY=
NEXT_PUBLIC_SEARCH_APPID=
FASTGPT_HOME_DOMAIN=


@ -19,14 +19,26 @@ RUN apk add --no-cache \
fontconfig
WORKDIR /app
ARG NEXT_PUBLIC_SEARCH_APPKEY
ARG NEXT_PUBLIC_SEARCH_APPID
ARG SEARCH_APPWRITEKEY
ARG FASTGPT_HOME_DOMAIN
ENV NEXT_PUBLIC_SEARCH_APPKEY=$NEXT_PUBLIC_SEARCH_APPKEY
ENV NEXT_PUBLIC_SEARCH_APPID=$NEXT_PUBLIC_SEARCH_APPID
ENV SEARCH_APPWRITEKEY=$SEARCH_APPWRITEKEY
ENV FASTGPT_HOME_DOMAIN=$FASTGPT_HOME_DOMAIN
COPY . .
RUN npm install
RUN npm run build
# Update search index if SEARCH_APPWRITEKEY is provided
RUN if [ -n "$SEARCH_APPWRITEKEY" ]; then \
echo "SEARCH_APPWRITEKEY found, updating search index..." && \
npm run update-index-action; \
else \
echo "SEARCH_APPWRITEKEY not provided, skipping search index update"; \
fi
FROM base AS runner
RUN apk add --no-cache curl
@ -37,7 +49,6 @@ WORKDIR /app
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
COPY --from=builder --chown=nextjs:nodejs /app/data ./data
USER nextjs
ENV NEXT_TELEMETRY_DISABLED=1


@ -1,23 +1,41 @@
# FastGPT Documentation
# fast
This is the official FastGPT documentation, built with the Fumadocs framework.
## Running the project
This is the official FastGPT documentation, built with the Fumadocs framework.
# Getting a search application
Go to [Algolia](https://dashboard.algolia.com/account/overview) and sign up. After registering, click Search on the page and view your applications; a default application is created for you.
![](./public/readme/algolia.png)
Once you have an application, click your avatar, open Settings, and click `API Keys` to see your application id and keys.
![](./public/readme/algolia2.png)
The `Application ID`, `Search API Key`, and `Write API Key` on that page correspond to the environment variables `NEXT_PUBLIC_SEARCH_APPID`, `NEXT_PUBLIC_SEARCH_APPKEY`, and `SEARCH_APPWRITEKEY`.
![](./public/readme/algolia3.png)
# Running the project
To run the docs, first configure the environment variables: create a `.env.local` file in the docs root directory with the following values:
```bash
SEARCH_APPWRITEKEY = # the Write API key obtained above
NEXT_PUBLIC_SEARCH_APPKEY = # the Search API key obtained above
NEXT_PUBLIC_SEARCH_APPID = # the Application ID obtained above
FASTGPT_HOME_DOMAIN = # domain of the FastGPT site to link to; defaults to the international version
```
You can then run the docs from the FastGPT project root with:
```bash
npm install # only npm install works, not pnpm
npm install # only npm install works, not pnpm
npm run dev
```
The project runs at `http://localhost:3000` by default.
## Writing docs
# Writing docs
Docs are written in `mdx`, which is largely the same as `md`, but frontmatter currently supports only the `title`, `description`, and `icon` fields. See the example below:
@ -63,12 +81,14 @@ import FastGPTLink from '@/components/docs/linkFastGPT'; # FastGPT link component
}
```
## i18n
# i18n
All `.mdx` files under `content/docs` are in the default language (currently Chinese); `.en.mdx` files are the English `i18n` versions. For example, translate `hello.mdx` into `hello.en.mdx`, and add the file name to the `"pages"` field of the corresponding directory's `meta.en.json` to enable the English doc.
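As a sketch, a `meta.en.json` enabling one translated page might look like this (the file names and titles below are illustrative, not taken from the repo):

```json
{
  "title": "Guide",
  "description": "FastGPT guide",
  "pages": ["hello"]
}
```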
## Special configuration
# Notes
### Adding a top-level navigation entry
In `meta.json`, the `"pages"` entry `"[Handshake][联系我们](https://fael3z0zfze.feishu.cn/share/base/form/shrcnjJWtKqjOI9NbQTzhNyzljc)"` is a link-style directory item: clicking it navigates to the given URL.
1. Add the new navigation entry in `FastGPT/document/app/[lang]/docs/layout.tsx`.
![](./public/readme/link.png)
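As a minimal sketch, the entries in that layout are plain objects with a title and a locale-dependent URL (the title string and paths below are placeholders; in the real layout the title comes from the `t()` i18n helper):

```typescript
// Hypothetical nav entry builder mirroring the pattern used in
// document/app/[lang]/docs/layout.tsx; values are illustrative.
type NavLink = { title: string; url: string };

function makeNavLink(lang: string): NavLink {
  return {
    title: 'Use Cases', // real code: t('common:use-cases', lang)
    url: lang === 'zh-CN' ? '/docs/use-cases' : '/en/docs/use-cases'
  };
}

console.log(makeNavLink('en').url); // "/en/docs/use-cases"
```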
Finally, if you still run into problems, visit the official site at `https://fumadocs.dev/docs/ui` and ask its built-in AI about how the framework works.


@ -4,9 +4,25 @@ import { notFound } from 'next/navigation';
import NotFound from '@/components/docs/not-found';
import { createRelativeLink } from 'fumadocs-ui/mdx';
import { getMDXComponents } from '@/mdx-components';
import fs from 'fs';
import path from 'path';
// Import static data at build time
import docLastModifiedData from '@/data/doc-last-modified.json';
// Read the doc last-modified data
function getDocLastModifiedData(): Record<string, string> {
try {
const dataPath = path.join(process.cwd(), 'data', 'doc-last-modified.json');
if (!fs.existsSync(dataPath)) {
return {};
}
const data = fs.readFileSync(dataPath, 'utf8');
return JSON.parse(data);
} catch (error) {
console.error('Failed to read doc last-modified data:', error);
return {};
}
}
export default async function Page({
params
@ -16,15 +32,16 @@ export default async function Page({
const { lang, slug } = await params;
const page = source.getPage(slug, lang);
// If the page does not exist, render the NotFound fallback
if (!page || !page.data || !page.file) {
return <NotFound />;
}
const MDXContent = page.data.body;
// Use the static data imported at build time
const filePath = `document/content/docs/${page.file.path}`;
// @ts-ignore
// Get the doc's last modified time
const docLastModifiedData = getDocLastModifiedData();
const filePath = `content/docs/${page.file.path}`;
const lastModified = docLastModifiedData[filePath] || page.data.lastModified;
return (


@ -30,17 +30,9 @@ export default async function Layout({
title: t('common:use-cases', lang),
url: lang === 'zh-CN' ? '/docs/use-cases' : '/en/docs/use-cases'
},
{
title: t('common:faq', lang),
url: lang === 'zh-CN' ? '/docs/faq' : '/en/docs/faq'
},
{
title: t('common:protocol', lang),
url: lang === 'zh-CN' ? '/docs/protocol' : '/en/docs/protocol'
},
{
title: t('common:upgrading', lang),
url: lang === 'zh-CN' ? '/docs/upgrading' : '/en/docs/upgrading'
}
];
@ -49,7 +41,7 @@ export default async function Layout({
{...baseOptions(lang)}
nav={{
title: (
<div className="flex flex-row items-center gap-2 h-14 ml-1">
<div className="flex flex-row items-center gap-2 h-14 ml-10">
<div className="block dark:hidden">
<LogoLight className="w-48 h-auto" />
</div>


@ -0,0 +1,85 @@
import * as fs from 'node:fs/promises';
import * as path from 'node:path';
import fg from 'fast-glob';
import matter from 'gray-matter';
import { i18n } from '@/lib/i18n';
export const revalidate = false;
// Blacklisted paths (without the language prefix)
const blacklist = ['use-cases/index', 'protocol/index', 'api/index'];
// Convert a file path to a URL path (including the file name)
function filePathToUrl(filePath: string, defaultLanguage: string): string {
let relativePath = filePath.replace('./content/docs/', '');
const basePath = defaultLanguage === 'zh-CN' ? '/docs' : '/en/docs';
if (defaultLanguage !== 'zh-CN' && relativePath.endsWith('.en.mdx')) {
relativePath = relativePath.replace(/\.en\.mdx$/, '');
} else if (relativePath.endsWith('.mdx')) {
relativePath = relativePath.replace(/\.mdx$/, '');
}
return `${basePath}/${relativePath}`.replace(/\/\/+/g, '/');
}
// Check whether a URL is blacklisted
function isBlacklisted(url: string): boolean {
return blacklist.some(
(item) => url.endsWith(`/docs/${item}`) || url.endsWith(`/en/docs/${item}`)
);
}
export async function GET(request: Request) {
const defaultLanguage = i18n.defaultLanguage;
const requestUrl = new URL(request.url);
const isEnRobotsRoute = requestUrl.pathname === '/en/robots';
let globPattern;
if (isEnRobotsRoute) {
globPattern = ['./content/docs/**/*.en.mdx'];
} else if (defaultLanguage === 'zh-CN') {
globPattern = ['./content/docs/**/*.mdx'];
} else {
globPattern = ['./content/docs/**/*.en.mdx'];
}
const files = await fg(globPattern, { caseSensitiveMatch: true });
// Convert file paths to URLs and filter out blacklisted entries
const urls = files
.map((file) => filePathToUrl(file, defaultLanguage))
.filter((url) => !isBlacklisted(url));
urls.sort((a, b) => a.localeCompare(b));
const html = `
<html>
<head>
<title>FastGPT Documentation Links</title>
<style>
body { font-family: Arial, sans-serif; margin: 20px; }
h1 { color: #333; }
ul { list-style-type: none; padding: 0; }
li { margin: 10px 0; }
a { color: #0066cc; text-decoration: none; }
a:hover { text-decoration: underline; }
</style>
</head>
<body>
<h1>Documentation Links</h1>
<ul>
${urls.map((url) => `<li><a href="${url}">${url}</a></li>`).join('')}
</ul>
</body>
</html>
`;
return new Response(html, {
headers: {
'Content-Type': 'text/html'
}
});
}


@ -0,0 +1,24 @@
// app/api/robots/route.ts
import { i18n } from '@/lib/i18n';
import { NextResponse } from 'next/server';
export async function GET() {
const host =
i18n.defaultLanguage === 'zh-CN' ? 'https://localhost:3000' : 'https://localhost:3000/en';
const robotsTxt = `User-agent: *
Allow: /
Allow: /en/
Disallow: /zh-cn/
Host: ${host}
Sitemap: ${host}/sitemap.xml`;
return new NextResponse(robotsTxt, {
headers: {
'Content-Type': 'text/plain'
}
});
}


@ -1,21 +1,7 @@
import { source } from '@/lib/source';
import { enhancedTokenizer } from '@/lib/tokenizer';
import { createFromSource } from 'fumadocs-core/search/server';
export const { GET } = createFromSource(source, {
// The language option must not be set when using the Chinese tokenizer
localeMap: {
en: {
language: 'english'
},
'zh-CN': {
components: {
tokenizer: enhancedTokenizer()
},
search: {
threshold: 0,
tolerance: 0
}
}
}
// https://docs.orama.com/open-source/supported-languages
language: 'english'
});


@ -1,11 +1,6 @@
@import 'tailwindcss';
@import 'fumadocs-ui/css/preset.css';
@font-face {
font-family: 'Alef';
src: url('/fonts/Alef-Regular.ttf') format('truetype');
}
/* Base variables added at the top of the file */
:root {
/* Base colors */
@ -62,122 +57,16 @@
/* Global code block styles */
pre,
code {
border-radius: 16px;
background: #F5F6F7;
font-family: Alef;
font-size: 1.0rem;
font-weight: 400;
line-height: 16px;
letter-spacing: 0.48px;
}
div[role='tabpanel'] figure:has(+ p) pre,
div[role='tabpanel'] figure:has(+ p) pre code {
background-color: #ececec;
}
.dark div[role='tabpanel'] figure:has(+ p) pre,
.dark div[role='tabpanel'] figure:has(+ p) pre code {
background-color: #3d3d3d;
}
.dark pre,
.dark code {
background: #1e1e1e;
}
pre {
padding: 24px 0 24px 24px ;
}
pre code {
gap: 20px;
}
code span {
padding-left: 0 !important;
}
/* Remove the inner border of code blocks */
.bg-fd-secondary.border {
border: none;
}
/* Remove the outer border of code blocks */
.shiki {
border: none;
padding: 0;
font-size: 0.9rem !important;
line-height: 1.6 !important;
}
/* Inline code styles */
:not(pre) > code {
display: inline-block;
height: 25px;
padding: 0 10px;
margin: 0 0.2em;
color: #272727;
background: #f5f6f7;
font-family: "PingFang SC";
font-size: 14px;
font-style: normal;
font-weight: 500;
line-height: 180%;
letter-spacing: 0.056px;
border: none;
border-radius: 8px;
}
.dark :not(pre) > code {
color: #E6E6E6 !important;
background: #282828 !important;
}
div[role="tablist"] ~ div:has(figure, p, ul) {
border-radius: 0 !important;
border: solid 1.5px #e5e5e5;
border-radius: 0.75rem !important;
}
.dark div[role="tablist"] ~ div:has(figure, p, ul) {
border: solid 1.5px #535353;
}
.dark div[role="tablist"] {
background-color: #1E1E1E;
}
/* Scrollbar styles below code blocks */
div.bg-fd-secondary:has(pre) {
padding: 0;
}
.dark div.bg-fd-secondary:has(pre) {
background-color: #1E1E1E;
}
div.bg-fd-secondary:has(pre)::-webkit-scrollbar-track {
background: #e8e8e8;
}
div.bg-fd-secondary:has(pre)::-webkit-scrollbar-thumb {
background: #b0b0b0;
}
div.bg-fd-secondary:has(pre)::-webkit-scrollbar-thumb:hover {
background: #909090;
}
.dark div.bg-fd-secondary:has(pre)::-webkit-scrollbar-track {
background: #1a1a1a;
}
.dark div.bg-fd-secondary:has(pre)::-webkit-scrollbar-thumb {
background: #404040;
}
.dark div.bg-fd-secondary:has(pre)::-webkit-scrollbar-thumb:hover {
background: #606060;
padding: 0.2em 0.4em !important;
margin: 0 0.2em !important;
color: #2563eb !important;
}
/* Scrollbar style tweaks inside code blocks */
@ -264,67 +153,13 @@ div[data-state='open'].fixed.inset-0.z-50 {
}
}
/* Copy button container and button styles */
div[class*="bg-fd-card"]:has(button[aria-label='Copy Text']),
div[class*="bg-fd-card"]:has(button[aria-label='Copied Text']) {
right: 26px;
top: 24px;
display: flex;
align-items: center;
justify-content: center;
background-color: #818181;
color: #818181;
border: none;
border-radius: 4px;
background: rgba(0, 0, 0, 0.01);
-webkit-backdrop-filter: blur(5px);
backdrop-filter: blur(5px);
figure.shiki button[aria-label='Copy Text'] {
background: none !important;
&:hover {
cursor: pointer;
}
}
/* Inner button styles */
button[aria-label='Copy Text'],
button[aria-label='Copied Text'] {
color: #818181;
background-color: transparent;
border: none;
padding: 0;
&:hover {
cursor: pointer;
}
}
button[aria-label='Copy Text'] svg {
display: none;
}
button[aria-label='Copy Text']::before {
content: '';
background-image: url('../public/icons/copy.svg');
width: 26px;
height: 26px;
transition: filter 0.2s ease;
}
/* Darken the copy icon on hover */
button[aria-label='Copy Text']:hover::before {
filter: brightness(0.7); /* lower the brightness to darken the color */
}
button[aria-label='Copied Text'] {
width: 26px;
height: 26px;
/* transition: filter 0.2s ease; */
}
button[aria-label='Copied Text'] svg {
width: 20px;
height: 20px;
transition: filter 0.2s ease;
}
#nd-subnav > div:nth-of-type(1) button {
&:hover {
cursor: pointer;
@ -416,16 +251,16 @@ div[data-rmiz-modal-overlay='visible'] {
}
.dark {
--color-fd-background: #000000;
--color-fd-background: #060c1a;
--color-fd-foreground: hsl(220, 60%, 94.5%);
--color-fd-muted: hsl(220, 50%, 10%);
--color-fd-muted-foreground: #B0B0B0;
--color-fd-muted-foreground: hsl(220, 30%, 65%);
--color-fd-popover: hsl(220, 50%, 10%);
--color-fd-popover-foreground: hsl(220, 60%, 94.5%);
--color-fd-card: hsla(220, 56%, 15%, 0.4);
--color-fd-card-foreground: hsl(220, 60%, 94.5%);
--color-fd-border: hsla(220, 50%, 50%, 0.2);
--color-fd-primary: #C2D3FF; /* text highlight color */
--color-fd-primary: #3370ff; /* text highlight color */
--color-fd-primary-foreground: hsl(0, 0%, 9%);
--color-fd-secondary: hsl(220, 50%, 20%);
--color-fd-secondary-foreground: hsl(220, 80%, 90%);
@ -441,38 +276,3 @@ div[data-rmiz-modal-overlay='visible'] {
button[data-search-full] {
background-color: var(--color-fd-background);
}
.dark\:text-blue-400:where(.dark, .dark *) {
color: #C2D3FF;
background-color: #434548;
}
.dark div[role="tabpanel"].bg-fd-background {
background-color: #1E1E1E;
}
div[role="tabpanel"].bg-fd-background {
background-color: #F7F7F8;
}
div[role="tabpanel"].bg-fd-background > div > ul {
margin: 0;
display: flex;
flex-direction: column;
gap: 10px;
}
.dark div[role="tabpanel"].bg-fd-background > div > ul {
margin: 0;
background-color: #1E1E1E;
}
div[role="tabpanel"].bg-fd-background > div > ul > li {
margin: 0;
}
button[role="tab"] {
padding-top: 16px;
padding-bottom: 16px;
}


@ -20,14 +20,15 @@ export const baseOptions = (locale: string): BaseLayoutProps => {
<div className="flex flex-row items-center gap-2">
<img src="/FastGPT-full.svg" alt="FastGPT" width={49} height={48} />
</div>
<div className="relative flex flex-row items-center gap-2 h-10 top-14"> 12321</div>
</div>
)
},
// i18n: {
// languages: ['zh-CN', 'en'],
// defaultLanguage: 'zh-CN',
// hideLocale: 'always'
// },
i18n: {
languages: ['zh-CN', 'en'],
defaultLanguage: 'zh-CN',
hideLocale: 'always'
},
searchToggle: {
enabled: true
}


@ -0,0 +1,43 @@
const fs = require('fs');
const path = require('path');
const matter = require('gray-matter');
// ✅ Root directory to process (change to your docs directory)
const rootDir = path.resolve(__dirname, 'content/docs');
// ✅ Frontmatter fields to keep
const KEEP_FIELDS = ['title', 'description'];
function cleanFrontmatter(filePath) {
const raw = fs.readFileSync(filePath, 'utf-8');
const parsed = matter(raw);
// Keep only the required fields
const newData = {};
for (const key of KEEP_FIELDS) {
if (parsed.data[key] !== undefined) {
newData[key] = parsed.data[key];
}
}
const cleaned = matter.stringify(parsed.content, newData);
fs.writeFileSync(filePath, cleaned, 'utf-8');
console.log(`✔ Cleaned: ${path.relative(rootDir, filePath)}`);
}
function walk(dir) {
const entries = fs.readdirSync(dir);
for (const entry of entries) {
const fullPath = path.join(dir, entry);
const stat = fs.statSync(fullPath);
if (stat.isDirectory()) {
walk(fullPath); // 🔁 recurse into subdirectories
} else if (entry.endsWith('.mdx')) {
cleanFrontmatter(fullPath);
}
}
}
// 🚀 Run
walk(rootDir);


@ -1,5 +1,6 @@
'use client';
// components/CustomSearchDialog.tsx
import { liteClient } from 'algoliasearch/lite';
import { useDocsSearch } from 'fumadocs-core/search/client';
import {
SearchDialog,
@ -14,11 +15,21 @@ import {
} from 'fumadocs-ui/components/dialog/search';
import { useI18n } from 'fumadocs-ui/contexts/i18n';
if (!process.env.NEXT_PUBLIC_SEARCH_APPID || !process.env.NEXT_PUBLIC_SEARCH_APPKEY) {
throw new Error('NEXT_PUBLIC_SEARCH_APPID and NEXT_PUBLIC_SEARCH_APPKEY are not set');
}
const client = liteClient(
process.env.NEXT_PUBLIC_SEARCH_APPID,
process.env.NEXT_PUBLIC_SEARCH_APPKEY
);
export default function CustomSearchDialog(props: SharedProps) {
const { locale } = useI18n();
const { search, setSearch, query } = useDocsSearch({
type: 'fetch',
api: '/api/search',
type: 'algolia',
client,
indexName: 'document',
locale
});


@ -3,21 +3,18 @@ import { useEffect } from 'react';
import { usePathname, useRouter } from 'next/navigation';
const exactMap: Record<string, string> = {
'/docs': '/docs/introduction',
'/docs/intro': '/docs/introduction',
'/docs/guide/dashboard/workflow/coreferenceresolution':
'/docs/introduction/guide/dashboard/workflow/coreferenceResolution',
'/docs/guide/admin/sso_dingtalk':
'/docs/introduction/guide/admin/sso#/docs/introduction/guide/admin/sso#钉钉',
'/docs/guide/knowledge_base/rag': '/docs/introduction/guide/knowledge_base/RAG',
'/docs/commercial/intro/': '/docs/introduction/commercial',
'/docs/upgrading/intro/': '/docs/upgrading',
'/docs/introduction/shopping_cart/intro/': '/docs/introduction/commercial'
'/docs/commercial/intro/': '/docs/introduction'
};
const prefixMap: Record<string, string> = {
'/docs/development': '/docs/introduction/development',
'/docs/FAQ': '/docs/faq',
'/docs/FAQ': '/docs/introduction/FAQ',
'/docs/guide': '/docs/introduction/guide',
'/docs/shopping_cart': '/docs/introduction/shopping_cart',
'/docs/agreement': '/docs/protocol'
@ -30,16 +27,16 @@ export default function NotFound() {
const router = useRouter();
useEffect(() => {
(async () => {
const tryRedirect = async () => {
if (exactMap[pathname]) {
window.location.replace(exactMap[pathname]);
router.replace(exactMap[pathname]);
return;
}
for (const [oldPrefix, newPrefix] of Object.entries(prefixMap)) {
if (pathname.startsWith(oldPrefix)) {
const rest = pathname.slice(oldPrefix.length);
window.location.replace(newPrefix + rest);
router.replace(newPrefix + rest);
return;
}
}
@ -55,15 +52,17 @@ export default function NotFound() {
if (validPage) {
console.log('validPage', validPage);
window.location.replace(validPage);
router.replace(validPage);
return;
}
} catch (e) {
console.warn('meta.json fallback failed:', e);
}
window.location.replace(fallbackRedirect);
})();
router.replace(fallbackRedirect);
};
tryRedirect();
}, [pathname, router]);
return null;


@ -0,0 +1,47 @@
import { type HTMLAttributes } from 'react';
import { HomeLayout, type HomeLayoutProps } from 'fumadocs-ui/layouts/home';
import Link from 'next/link';
interface CustomHomeLayoutProps extends HomeLayoutProps {
// Custom props can be added here
}
export function CustomHomeLayout({
children,
nav,
...props
}: CustomHomeLayoutProps & HTMLAttributes<HTMLElement>) {
return (
<HomeLayout
{...props}
nav={{
...nav,
title: (
<div className="flex flex-col items-center gap-2">
<div className="flex flex-row items-center gap-2">
<img src="/logo.svg" alt="FastGPT" width={49} height={48} />
FastGPT
</div>
<div className="flex flex-row items-center gap-4 text-sm">
<Link href="/docs/introduction" className="hover:text-blue-500">
Docs
</Link>
<Link href="/docs/use-cases" className="hover:text-blue-500">
Use Cases
</Link>
<Link href="/docs/agreement" className="hover:text-blue-500">
Agreement
</Link>
<Link href="/docs/api" className="hover:text-blue-500">
API Reference
</Link>
</div>
</div>
),
transparentMode: 'none'
}}
>
{children}
</HomeLayout>
);
}


@ -0,0 +1,12 @@
---
title: Tabs
description:
A Tabs component built with Radix UI, with additional features such as
persistent and shared value.
---
<Tabs items={['Javascript', 'Rust']}>
<Tab value="Javascript">Javascript is weird</Tab>
<Tab value="Rust">Rust is fast</Tab>
</Tabs>


@ -0,0 +1,101 @@
---
title: Chat API
description: FastGPT OpenAPI chat endpoint
---
import { Alert } from '@/components/docs/Alert';
# How to get the AppId
The AppId can be found in the URL path of the app detail page.
![](/imgs/appid.png)
# Starting a chat
<Alert icon="🤖" context="success">
* This endpoint requires an app-specific API key; other keys will return an error.
{/* * Chat now has both `v1` and `v2` endpoints; use whichever fits. `v2` was added in 4.9.4, and `v1` remains available but is no longer maintained. */}
* Some SDKs need `v1` appended to the `BaseUrl`, others do not; if you get a 404, retry with `v1` added.
</Alert>
## Calling simple apps and workflows
The `v1` chat endpoint is compatible with the `GPT` API. If your project already uses the standard official `GPT` API, you can reach a FastGPT app just by changing the `BaseUrl` and `Authorization`, with the following caveats:
<Alert icon="🤖" context="success">
* Parameters such as `model` and `temperature` are ignored; they are determined by the workflow and do not change with API parameters.
* The actual `Token` consumption is not returned. If you need it, set `detail=true` and sum the `tokens` values in `responseData` yourself.
</Alert>
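With `detail=true`, the per-module token usage can be summed manually. A minimal sketch (the `responseData` shape here is an assumption for illustration; check it against the actual response of your FastGPT version):

```typescript
// Hypothetical helper: sum token usage across module results.
// Assumes `responseData` is an array where each module result may
// carry a numeric `tokens` field; missing fields count as 0.
interface ModuleResponse {
  moduleName?: string;
  tokens?: number;
}

function sumTokens(responseData: ModuleResponse[]): number {
  return responseData.reduce((total, item) => total + (item.tokens ?? 0), 0);
}

// Example with made-up module results:
const usage = sumTokens([
  { moduleName: 'Dataset Search', tokens: 358 },
  { moduleName: 'AI Chat', tokens: 1024 },
  { moduleName: 'Answer' } // no token usage recorded
]);
console.log(usage); // 1382
```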
### Request
<Tabs items={['Basic request example', 'Parameter notes']}>
<Tab>
```bash
curl --location --request POST 'http://localhost:3000/api/v1/chat/completions' \
--header 'Authorization: Bearer fastgpt-xxxxxx' \
--header 'Content-Type: application/json' \
--data-raw '{
"chatId": "my_chatId",
"stream": false,
"detail": false,
"responseChatItemId": "my_responseChatItemId",
"variables": {
"uid": "asdfadsfasfd2323",
"name": "张三"
},
"messages": [
{
"role": "user",
"content": "导演是谁"
}
]
}'
```
</Tab>
<Tab>
* Only `messages` differs slightly; all other parameters are identical.
* File upload is not supported yet; upload files to your own object storage and pass the resulting links.
```bash
curl --location --request POST 'http://localhost:3000/api/v1/chat/completions' \
--header 'Authorization: Bearer fastgpt-xxxxxx' \
--header 'Content-Type: application/json' \
--data-raw '{
"chatId": "abcd",
"stream": false,
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "导演是谁"
},
{
"type": "image_url",
"image_url": {
"url": "image URL"
}
},
{
"type": "file_url",
"name": "file name",
"url": "document URL; supports txt md html word pdf ppt csv excel"
}
]
}
]
}'
```
</Tab>
</Tabs>
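如上文所述,接口不会直接返回实际消耗的 `Token`,设置 `detail=true` 后需自行累加 `responseData` 里各模块的 `tokens`。下面是一个累加逻辑的示意(`responseData` 的具体字段结构以实际接口返回为准,此处仅为假设示例):

```typescript
// 示意:detail=true 时,手动累加 responseData 中各模块的 tokens
// 注意:字段结构为假设示例,以实际接口返回为准
interface ResponseDataItem {
  moduleName?: string;
  tokens?: number;
}

function sumTokens(responseData: ResponseDataItem[]): number {
  // 没有 tokens 字段的模块按 0 计
  return responseData.reduce((acc, item) => acc + (item.tokens ?? 0), 0);
}

// 假设的返回片段
const demo: ResponseDataItem[] = [
  { moduleName: 'AI Chat', tokens: 128 },
  { moduleName: 'Dataset Search', tokens: 42 },
  { moduleName: 'HTTP' }
];

console.log(sumTokens(demo)); // 170
```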

View File

@ -0,0 +1,8 @@
---
title: API 文档
description: API 文档
---
import { Redirect } from '@/components/docs/Redirect';
<Redirect to="/docs/api/api1" />

View File

@ -0,0 +1,7 @@
{
"title": "API手册",
"description": "FastGPT API手册",
"root": true,
"pages": ["api1", "api2", "test"],
"order": 4
}

View File

@ -0,0 +1,12 @@
---
title: Tabs
description:
A Tabs component built with Radix UI, with additional features such as
persistent and shared value.
---
<Tabs items={['Javascript', 'Rust']}>
<Tab value="Javascript">Javascript is weird</Tab>
<Tab value="Rust">Rust is fast</Tab>
</Tabs>

View File

@ -1,8 +0,0 @@
---
title: 使用案例
description: FastGPT 使用案例
---
import { Redirect } from '@/components/docs/Redirect';
<Redirect to="/docs/faq/app" />

View File

@ -1,14 +0,0 @@
{
"root": true,
"title": "FAQ",
"description": "FastGPT 常见问题",
"pages": [
"app",
"chat",
"dataset",
"external_channel_integration",
"error",
"points_consumption",
"other"
]
}

View File

@ -0,0 +1,8 @@
---
title: Docker 部署问题
description: FastGPT Docker 部署问题
---
import {Redirect} from '@/components/docs/Redirect'
<Redirect to="/docs/introduction/development/docker/#faq" />

View File

@ -4,7 +4,7 @@ title: 报错
1. ### 当前分组上游负载已饱和,请稍后再试(request id:202407100753411462086782835521)
是oneapi渠道的问题可以换个模型用或者换一家中转站
是oneapi渠道的问题可以换个模型用or换一家中转站
1. ### 使用API时在日志中报错Connection Error

View File

@ -0,0 +1,5 @@
{
"title": "FAQ",
"description": "FastGPT 常见问题",
"pages": ["docker","privateDeploy","chat","app","dataset","external_channel_integration","error","points_consumption","other"]
}

View File

@ -8,4 +8,4 @@ title: 其他问题
## 想做多用户
社区版未支持多用户,仅商业版支持。
开源版未支持多用户,仅商业版支持。

View File

@ -0,0 +1,8 @@
---
title: 私有部署常见问题
description: FastGPT 私有部署常见问题
---
import {Redirect} from '@/components/docs/Redirect'
<Redirect to="/docs/introduction/development/faq/" />

View File

@ -1,11 +0,0 @@
---
title: FastGPT 云服务
description: FastGPT 云服务
---
## 服务地址
- [国内版: https://fastgpt.cn](https://fastgpt.cn)
- [国际版: https://fastgpt.io](https://fastgpt.io)
请按需注册,两个版本账号不互通。

View File

@ -1,110 +0,0 @@
---
title: 'FastGPT 商业版'
description: 'FastGPT 商业版相关说明'
---
import { Alert } from '@/components/docs/Alert';
## 简介
FastGPT 商业版是基于 FastGPT 社区版的增强版本,增加了一些独有的功能。只需安装一个商业版镜像,并在社区版基础上填写对应的内网地址,即可快速使用商业版。
## 功能差异
| | 社区版 | 商业版 | 云服务版 |
| ------------------------------ | ------------------------------------------ | ------ | ------- |
| **应用构建** | | | |
| 工作流编排 | ✅ | ✅ | ✅ |
| 分享链接和 API | ✅ | ✅ | ✅ |
| 应用发布安全配置 | ❌ | ✅ | ✅ |
| 第三方应用发布(飞书、公众号) | ❌ | ✅ | ✅ |
| 运行日志看板 | ❌ | ✅ | ✅ |
| 应用评测 | ❌ | ✅ | ✅ |
| **知识库** | | | |
| 知识库 | ✅ | ✅ | ✅ |
| 第三方知识库定时同步 | ❌ | ✅ | ✅ |
| 知识库索引增强 | ❌ | ✅ | ✅ |
| web站点同步 | ❌ | ✅ | ✅ |
| 图片知识库 | ❌ | ✅ | ✅ |
| **通用功能** | | | |
| 多模型配置 | ✅ | ✅ | ✅ |
| 模型日志看板 | ✅ | ✅ | ✅ |
| 模型内容审核 | ❌ | ✅ | ✅ |
| **企业级功能** | | | |
| 自定义版权信息 | ❌ | ✅ | 设计中 |
| 多租户与支付 | ❌ | ✅ | ✅ |
| 团队空间 & 权限 | ❌ | ✅ | ✅ |
| 管理后台 | ❌ | ✅ | 不需要 |
| SSO 登录 | ❌ | ✅ | 设计中 |
| 商业授权 | [查看开源协议](/docs/protocol/open-source) | 完整 | 完整 |
## 商业版软件价格
FastGPT 商业版软件根据不同的部署方式,分为 3 类收费模式。下面列举各种部署方式的一些常规内容,如仍有问题,可[联系咨询](https://fael3z0zfze.feishu.cn/share/base/form/shrcnjJWtKqjOI9NbQTzhNyzljc)
**共有服务**
1. Saas 商业授权许可 - 在商业版有效期内,可提供任意形式的商业服务。
2. 首次免费帮助部署。
3. 优先问题工单处理。
**特有服务**
| 部署方式 | 特有服务 | 上线时长 | 标品价格 |
| ---------------------- | ------------------------------------------------- | -------- | ----------------------------------------------------------------------------------------------------- |
| Sealos全托管 | 1. 有效期内免费升级。<br />2. 免运维服务&数据库。 | 半天 | 10000元起/月3个月起<br />或<br />120000元起/年<br />8C32G 资源,额外资源另外收费。 |
| Sealos全托管多节点 | 1. 有效期内免费升级。<br />2. 免运维服务&数据库。 | 半天 | 22000元起/月3个月起<br />或<br />264000元起/年<br />32C128G 资源,额外资源另外收费。 |
| 自有服务器部署 | 1. 6个版本免费升级支持。 | 14天内 | 具体价格和优惠可[联系咨询](https://fael3z0zfze.feishu.cn/share/base/form/shrcnjJWtKqjOI9NbQTzhNyzljc) |
<Alert icon="🤖" context="success">
- 6个版本的升级服务不是指只能用 6 个版本,而是指依赖 FastGPT 团队提供的升级服务。大部分时候,建议自行升级,也不麻烦。
- 全托管版本适合技术人员紧缺的团队,仅需关注业务推动,无需关心服务是否正常运行。
- 自有服务器部署版可以完全部署在自己服务器中。
- 单机版适合中小团队对内提供服务,需要自己维护数据库备份等。
- 高可用版适合对外提供在线服务,包含可视化监控、多副本、负载均衡、数据库自动备份等生产环境的基础设施。
</Alert>
## 联系方式
请填写[咨询问卷](https://fael3z0zfze.feishu.cn/share/base/form/shrcnjJWtKqjOI9NbQTzhNyzljc),我们会尽快与您联系。
## 技术支持
### 应用定制
根据需求定制实现特定的编排功能,最终交付一个应用编排,具体可根据实际情况商讨。
### 技术服务费(定开、维护、迁移、三方接入等)
2000 ~ 3000元/人/天
### 更新升级费用
大部分更新升级,重新拉镜像,然后执行一下初始化脚本就可以了,不需要执行额外操作。
跨版本更新或复杂更新可参考文档自行更新;或付费支持,标准与技术服务费一致。
## QA
### 如何交付?
完整版应用 = 社区版镜像 + 商业版镜像
我们会提供一个商业版镜像给你使用,该镜像需要一个 License 启动。
### 二次开发如何操作?
可以修改社区版部分代码,不支持修改商业版镜像。完整版本=社区版+商业版镜像,所以是可以修改部分内容的。但是如果二开了,后续则需要自己进行代码合并升级。
### Sealos 运行费用
Sealos 云服务属于按量计费,下面是它的价格表:
![alt text](/imgs/image-58.png)
## 管理后台部分截图
| | | |
| ------------------------------- | ------------------------------- | ------------------------------- |
| ![alt text](/imgs/image-55.png) | ![alt text](/imgs/image-56.png) | ![alt text](/imgs/image-57.png) |

View File

@ -0,0 +1,12 @@
---
title: 加入社区
description: ' 加入 FastGPT 开发者社区和我们一起成长'
---
FastGPT 是一个由用户和贡献者参与推动的开源项目,如果您对产品使用存在疑问和建议,可尝试以下方式寻求支持。我们的团队与社区会竭尽所能为您提供帮助。
+ 📱 扫码加入社区微信交流群👇
<img width="400px" src="https://oss.laf.run/otnvvf-imgs/fastgpt-feishu1.png" className="medium-zoom-image" />
+ 🐞 请将任何 FastGPT 的 Bug、问题和需求提交到 [GitHub Issue](https://github.com/labring/fastgpt/issues/new/choose)。

View File

@ -3,16 +3,14 @@ title: 配置文件介绍
description: FastGPT 配置参数介绍
---
由于环境变量不利于配置复杂的内容,新版 FastGPT 采用了 ConfigMap 的形式挂载配置文件,你可以在 `projects/app/data/config.json` 看到默认的配置文件。可以参考 [docker-compose 快速部署](/docs/introduction/development/docker/) 来挂载配置文件。
由于环境变量不利于配置复杂的内容,新版 FastGPT 采用了 ConfigMap 的形式挂载配置文件,你可以在 `projects/app/data/config.json` 看到默认的配置文件。可以参考 [docker-compose 快速部署](/docs/development/docker/) 来挂载配置文件。
**开发环境下**,你需要将示例配置文件 `config.json` 复制成 `config.local.json` 文件才会生效。
**开发环境下**,你需要将示例配置文件 `config.json` 复制成 `config.local.json` 文件才会生效。
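上面提到开发环境下需将 `config.json` 复制为 `config.local.json` 才会生效,即本地配置优先于默认配置。这一行为可以用下面的读取逻辑示意(仅为假设的简化实现,实际以 FastGPT 源码为准):

```typescript
import * as fs from 'fs';
import * as path from 'path';

// 示意:若存在 config.local.json则优先于默认的 config.json 生效
function loadConfig(dir: string): Record<string, unknown> {
  const localPath = path.join(dir, 'config.local.json');
  const defaultPath = path.join(dir, 'config.json');
  const target = fs.existsSync(localPath) ? localPath : defaultPath;
  return JSON.parse(fs.readFileSync(target, 'utf-8'));
}
```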
下面配置文件示例中包含了系统参数和各个模型配置:
## 4.8.20+ 版本新配置文件示例
> 从4.8.20版本开始,模型在页面中进行配置。
```json
{
"feConfigs": {
@ -24,8 +22,7 @@ description: FastGPT 配置参数介绍
"vlmMaxProcess": 15, // 图片理解模型最大处理进程
"tokenWorkers": 50, // Token 计算线程保持数,会持续占用内存,不能设置太大。
"hnswEfSearch": 100, // 向量搜索参数,仅对 PG 和 OB 生效。越大搜索越精确但是速度越慢。设置为100有99%+精度。
"customPdfParse": {
// 4.9.0 新增配置
"customPdfParse": { // 4.9.0 新增配置
"url": "", // 自定义 PDF 解析服务地址
"key": "", // 自定义 PDF 解析服务密钥
"doc2xKey": "", // doc2x 服务密钥
@ -60,7 +57,7 @@ description: FastGPT 配置参数介绍
#### 2. 修改 FastGPT 配置文件
社区版用户在 `config.json` 文件中添加 `systemEnv.customPdfParse.doc2xKey` 配置,并填写上申请到的 API Key。并重启服务。
开源版用户在 `config.json` 文件中添加 `systemEnv.customPdfParse.doc2xKey` 配置,并填写上申请到的 API Key。并重启服务。
商业版用户在 Admin 后台根据表单指引填写 Doc2x 服务密钥。
@ -70,4 +67,4 @@ description: FastGPT 配置参数介绍
### 使用 Marker 解析 PDF 文件
[点击查看 Marker 接入教程](/docs/introduction/development/custom-models/marker)
[点击查看 Marker 接入教程](/docs/development/custom-models/marker)

View File

@ -9,9 +9,9 @@ FastGPT 默认使用了 OpenAI 的 LLM 模型和向量模型,如果想要私
## 部署镜像
- 镜像名: `stawky/chatglm2-m3e:latest`
- 国内镜像名: `registry.cn-hangzhou.aliyuncs.com/fastgpt_docker/chatglm2-m3e:latest`
- 端口号: 6006
+ 镜像名: `stawky/chatglm2-m3e:latest`
+ 国内镜像名: `registry.cn-hangzhou.aliyuncs.com/fastgpt_docker/chatglm2-m3e:latest`
+ 端口号: 6006
```
# 设置安全凭证即oneapi中的渠道密钥
@ -21,7 +21,7 @@ FastGPT 默认使用了 OpenAI 的 LLM 模型和向量模型,如果想要私
## 接入 OneAPI
文档链接:[One API](/docs/introduction/development/modelconfig/one-api/)
文档链接:[One API](/docs/development/modelconfig/one-api/)
为 chatglm2 和 m3e-large 各添加一个渠道,参数如下:
@ -97,7 +97,7 @@ M3E 模型的使用方法如下:
1. 创建知识库时候选择 M3E 模型。
注意,一旦选择后,知识库将无法修改向量模型。
![](/imgs/model-m3e2.png)
2. 导入数据
@ -108,7 +108,7 @@ M3E 模型的使用方法如下:
4. 应用绑定知识库
注意,应用只能绑定同一个向量模型的知识库,不能跨模型绑定。并且,需要注意调整相似度,不同向量模型的相似度(距离)会有所区别,需要自行测试实验。
![](/imgs/model-m3e4.png)
chatglm2 模型的使用方法如下:

View File

@ -9,7 +9,7 @@ PDF 是一个相对复杂的文件格式,在 FastGPT 内置的 pdf 解析器
市面上目前有多种解析 PDF 的方法,比如使用 [Marker](https://github.com/VikParuchuri/marker),该项目使用了 Surya 模型,基于视觉解析,可以有效提取图片、表格、公式等复杂内容。
在 `FastGPT v4.9.0` 版本中,社区版用户可以在`config.json`文件中添加`systemEnv.customPdfParse`配置,来使用 Marker 解析 PDF 文件。商业版用户直接在 Admin 后台根据表单指引填写即可。需重新拉取 Marker 镜像,接口格式已变动。
在 `FastGPT v4.9.0` 版本中,开源版用户可以在`config.json`文件中添加`systemEnv.customPdfParse`配置,来使用 Marker 解析 PDF 文件。商业版用户直接在 Admin 后台根据表单指引填写即可。需重新拉取 Marker 镜像,接口格式已变动。
## 使用教程
@ -23,7 +23,6 @@ PDF 是一个相对复杂的文件格式,在 FastGPT 内置的 pdf 解析器
docker pull crpi-h3snc261q1dosroc.cn-hangzhou.personal.cr.aliyuncs.com/marker11/marker_images:v0.2
docker run --gpus all -itd -p 7231:7232 --name model_pdf_v2 -e PROCESSES_PER_GPU="2" crpi-h3snc261q1dosroc.cn-hangzhou.personal.cr.aliyuncs.com/marker11/marker_images:v0.2
```
### 2. 添加 FastGPT 文件配置
```json
@ -53,7 +52,7 @@ docker run --gpus all -itd -p 7231:7232 --name model_pdf_v2 -e PROCESSES_PER_GPU
```
[Info] 2024-12-05 15:04:42 Parsing files from an external service
[Info] 2024-12-05 15:07:08 Custom file parsing is complete, time: 1316ms
[Info] 2024-12-05 15:07:08 Custom file parsing is complete, time: 1316ms
```
然后你就可以发现,通过 Marker 解析出来的 pdf 会携带图片链接:
@ -64,13 +63,14 @@ docker run --gpus all -itd -p 7231:7232 --name model_pdf_v2 -e PROCESSES_PER_GPU
![alt text](/imgs/marker3.png)
## 效果展示
以清华的 [ChatDev Communicative Agents for Software Develop.pdf](https://arxiv.org/abs/2307.07924) 为例,展示 Marker 解析的效果:
| | | |
| ------------------------------- | ------------------------------- | ------------------------------- |
| ![alt text](/imgs/image-11.png) | ![alt text](/imgs/image-12.png) | ![alt text](/imgs/image-13.png) |
| | | |
| --- | --- | --- |
| ![alt text](/imgs/image-11.png) | ![alt text](/imgs/image-12.png) | ![alt text](/imgs/image-13.png) |
| ![alt text](/imgs/image-14.png) | ![alt text](/imgs/image-15.png) | ![alt text](/imgs/image-16.png) |
上图是分块后的结果,下图是 pdf 原文。整体图片、公式、表格都可以提取出来,效果还是杠杠的。
@ -95,5 +95,5 @@ CUSTOM_READ_FILE_URL=http://xxxx.com/v1/parse/file
CUSTOM_READ_FILE_EXTENSION=pdf
```
- CUSTOM_READ_FILE_URL - 自定义解析服务的地址, host改成解析服务的访问地址path 不能变动。
- CUSTOM_READ_FILE_EXTENSION - 支持的文件后缀,多个文件类型,可用逗号隔开。
* CUSTOM_READ_FILE_URL - 自定义解析服务的地址, host改成解析服务的访问地址path 不能变动。
* CUSTOM_READ_FILE_EXTENSION - 支持的文件后缀,多个文件类型,可用逗号隔开。
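`CUSTOM_READ_FILE_EXTENSION` 为逗号分隔的文件后缀列表,服务端据此判断文件是否交给自定义解析服务处理。判断逻辑可以用下面的示意理解(`isCustomParsed` 为假设的辅助函数名,仅作演示):

```typescript
// 示意:解析逗号分隔的 CUSTOM_READ_FILE_EXTENSION后缀匹配不区分大小写
// isCustomParsed 为假设的函数名,实际实现以 FastGPT 源码为准
function isCustomParsed(filename: string, extensionEnv: string): boolean {
  const allowed = extensionEnv.split(',').map((e) => e.trim().toLowerCase());
  const ext = filename.split('.').pop()?.toLowerCase() ?? '';
  return allowed.includes(ext);
}

console.log(isCustomParsed('paper.PDF', 'pdf,docx')); // true
console.log(isCustomParsed('note.txt', 'pdf,docx')); // false
```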

View File

@ -1,4 +1,4 @@
{
"title": "本地模型使用",
"pages": ["marker","mineru","xinference","bge-rerank","chatglm2","m3e","chatglm2-m3e","ollama"]
"pages": ["marker","xinference","bge-rerank","chatglm2","m3e","chatglm2-m3e","ollama"]
}

View File

@ -1,83 +0,0 @@
---
title: 接入 MinerU PDF 文档解析
description: 使用 MinerU 解析 PDF 文档,可实现图片提取、布局识别、表格识别和公式识别
---
## 背景
PDF 是一个相对复杂的文件格式,在 FastGPT 内置的 pdf 解析器中,依赖的是 pdfjs 库解析,该库基于逻辑解析,无法有效的理解复杂的 pdf 文件。所以我们在解析 pdf 时候,如果遇到图片、表格、公式等非简单文本内容,会发现解析效果不佳。
市面上目前有多种解析 PDF 的方法,比如使用 [MinerU](https://github.com/opendatalab/MinerU),该项目使用了 YOLO、PaddleOCR以及表格识别等模型基于视觉解析可以有效提取图片、表格、公式等复杂内容。
社区版用户可以在`config.json`文件中添加`systemEnv.customPdfParse`配置,来使用 MinerU 解析 PDF 文件。商业版用户直接在 Admin 后台根据表单指引填写即可,使用教程中会详细解释。
## 使用教程
硬件需求16GB+ 的 GPU 显存(推理卡最低 16GB推荐 32GB+推荐 32GB+ 内存,其他要求查看[官网](https://github.com/opendatalab/MinerU)
### 1. 安装 MinerU
这里介绍快速 Docker 安装的方法:
拉取 fastgpt-mineru 镜像 → 创建容器启动解析服务 → 把部署好的 URL 地址接入到 FastGPT 配置文件中
```bash
docker pull crpi-h3snc261q1dosroc.cn-hangzhou.personal.cr.aliyuncs.com/fastgpt_ck/mineru:v1
docker run --gpus all -itd -p 7231:8001 --name mode_pdf_minerU crpi-h3snc261q1dosroc.cn-hangzhou.personal.cr.aliyuncs.com/fastgpt_ck/mineru:v1
```
这里的 MinerU 接入的是 pipeline 模式,并且在 Docker 内部进行了并行化:会根据 GPU 数量创建多个进程,同时处理上传的 PDF 数据
### 2. 添加 FastGPT 文件配置
```json
{
xxx
"systemEnv": {
xxx
"customPdfParse": {
"url": "http://xxxx.com/v2/parse/file", // 自定义 PDF 解析服务地址 MinerU
"key": "", // 自定义 PDF 解析服务密钥
"doc2xKey": "", // doc2x 服务密钥
"price": 0 // PDF 解析服务价格
}
}
}
```
商业版请按下图配置
![alt text](/imgs/mineru6.png)
**注意:** 通过配置文件添加的服务需要重启服务。
### 3. 测试效果
通过知识库上传一个 pdf 文件,并勾选上 `PDF 增强解析`。
![alt text](/imgs/mineru1.png)
确认上传后,可以在日志中看到对应 LOGLOG_LEVEL 需要设置为 info 或 debug
```
[Info] 2024-12-05 15:04:42 Parsing files from an external service
[Info] 2024-12-05 15:07:08 Custom file parsing is complete, time: 1316ms
```
同样的,在应用中,你可以在文件上传配置里,勾选上 `PDF 增强解析`。
![alt text](/imgs/mineru2.png)
## 效果展示
以清华的 [ChatDev Communicative Agents for Software Develop.pdf](https://arxiv.org/abs/2307.07924) 为例,展示 MinerU 解析的效果:
| | | |
| ------------------------------- | ------------------------------- | ------------------------------- |
| ![alt text](/imgs/mineru3-1.png) | ![alt text](/imgs/mineru4-1.png) | ![alt text](/imgs/mineru5-1.png) |
| ![alt text](/imgs/mineru3.png) | ![alt text](/imgs/mineru4.png) | ![alt text](/imgs/mineru5.png) |
上图是分块后的结果,下图是 pdf 原文。整体图片、公式、OCR 手写体都可以提取出来,效果还是可以的。
不过要注意的是,[MinerU](https://github.com/opendatalab/MinerU) 的协议是`GPL-3.0 license`,请在遵守协议的前提下使用。

View File

@ -7,14 +7,14 @@ description: ' 采用 Ollama 部署自己的模型'
## 安装 Ollama
Ollama 本身支持多种安装方式,但是推荐使用 Docker 拉取镜像部署。如果是个人设备上安装了 Ollama 后续需要解决如何让 Docker 中 FastGPT 容器访问宿主机 Ollama的问题较为麻烦。
Ollama 本身支持多种安装方式,但是推荐使用 Docker 拉取镜像部署。如果是个人设备上安装了 Ollama 后续需要解决如何让 Docker 中 FastGPT 容器访问宿主机 Ollama的问题较为麻烦。
### Docker 安装(推荐)
你可以使用 Ollama 官方的 Docker 镜像来一键安装和启动 Ollama 服务(确保你的机器上已经安装了 Docker命令如下
```bash
docker pull ollama/ollama
docker pull ollama/ollama
docker run --rm -d --name ollama -p 11434:11434 ollama/ollama
```
@ -81,6 +81,7 @@ ollama pull [模型名]
![](/imgs/Ollama-pull.png)
### 测试通信
在安装完成后,需要进行检测测试,首先进入 FastGPT 所在的容器,尝试访问自己的 Ollama ,命令如下:
@ -107,7 +108,7 @@ ollama ls
### 2. AI Proxy 接入
如果你采用的是 FastGPT 中的默认配置文件部署[这里](/docs/introduction/development/docker.md),即默认采用 AI Proxy 进行启动。
如果你采用的是 FastGPT 中的默认配置文件部署[这里](/docs/development/docker.md),即默认采用 AI Proxy 进行启动。
![](/imgs/Ollama-aiproxy1.png)
@ -115,7 +116,7 @@ ollama ls
![](/imgs/Ollama-aiproxy2.png)
在 FastGPT 中点击账号->模型提供商->模型配置->新增模型添加自己的模型即可添加模型时需要保证模型ID和 OneAPI 中的模型名称一致。详细参考[这里](/docs/introduction/development/modelConfig/intro.md)
在 FastGPT 中点击账号->模型提供商->模型配置->新增模型添加自己的模型即可添加模型时需要保证模型ID和 OneAPI 中的模型名称一致。详细参考[这里](/docs/development/modelConfig/intro.md)
![](/imgs/Ollama-models2.png)
@ -176,5 +177,4 @@ docker run -it --network [ FastGPT 网络 ] --name 容器名 intel/oneapi-hpckit
![](/imgs/Ollama-models4.png)
### 6. 补充
上述接入 Ollama 的代理地址中,主机安装 Ollama 的地址为“http://[主机IP]:[端口]”,容器部署 Ollama 地址为“http://[容器名]:[端口]”

View File

@ -13,8 +13,8 @@ Xinference 支持多种推理引擎作为后端,以满足不同场景下部署
如果你的目标是在一台 Linux 或者 Window 服务器上部署大模型,可以选择 Transformers 或 vLLM 作为 Xinference 的推理后端:
- [Transformers](https://huggingface.co/docs/transformers/index):通过集成 Huggingface 的 Transformers 库作为后端Xinference 可以最快地 集成当今自然语言处理NLP领域的最前沿模型自然也包括 LLM
- [vLLM](https://vllm.ai/): vLLM 是由加州大学伯克利分校开发的一个开源库专为高效服务大型语言模型LLM而设计。它引入了 PagedAttention 算法, 通过有效管理注意力键和值来改善内存管理,吞吐量能够达到 Transformers 的 24 倍,因此 vLLM 适合在生产环境中使用,应对高并发的用户访问。
+ [Transformers](https://huggingface.co/docs/transformers/index):通过集成 Huggingface 的 Transformers 库作为后端Xinference 可以最快地 集成当今自然语言处理NLP领域的最前沿模型自然也包括 LLM
+ [vLLM](https://vllm.ai/): vLLM 是由加州大学伯克利分校开发的一个开源库专为高效服务大型语言模型LLM而设计。它引入了 PagedAttention 算法, 通过有效管理注意力键和值来改善内存管理,吞吐量能够达到 Transformers 的 24 倍,因此 vLLM 适合在生产环境中使用,应对高并发的用户访问。
假设你服务器配备 NVIDIA 显卡,可以参考[这篇文章中的指令来安装 CUDA](https://xorbits.cn/blogs/langchain-streamlit-doc-chat),从而让 Xinference 最大限度地利用显卡的加速功能。
@ -98,7 +98,7 @@ xinference launch -n qwen-chat -s 14 -f pytorch
## 将本地模型接入 One API
One API 的部署和接入请参考[这里](/docs/introduction/development/modelconfig/one-api/)。
One API 的部署和接入请参考[这里](/docs/development/modelconfig/one-api/)。
为 qwen1.5-chat 添加一个渠道,这里的 Base URL 需要填 Xinference 服务的端点,并且注册 qwen-chat (模型的 UID) 。
@ -153,6 +153,9 @@ curl --location --request POST 'https://[oneapi_url]/v1/chat/completions' \
然后重启 FastGPT 就可以在应用配置中选择 Qwen 模型进行对话:
## ![](/imgs/fastgpt-list-models.png)
![](/imgs/fastgpt-list-models.png)
---
+ 参考:[FastGPT + Xinference一站式本地 LLM 私有化部署和应用开发](https://xorbits.cn/blogs/fastgpt-weather-chat)
- 参考:[FastGPT + Xinference一站式本地 LLM 私有化部署和应用开发](https://xorbits.cn/blogs/fastgpt-weather-chat)

View File

@ -32,7 +32,7 @@ description: FastGPT 系统插件设计方案
1. 使用 ts-rest 作为 RPC 框架进行交互,提供 sdk 供 FastGPT 主项目调用
2. 使用 zod 进行类型验证
3. 用 bun 进行编译,每个工具编译为单一的 `.pkg` 文件,支持热插拔。
3. 用 bun 进行编译,每个工具编译为单一的 `.js` 文件,支持热插拔。
## 项目结构
@ -45,14 +45,12 @@ description: FastGPT 系统插件设计方案
- ...
- **type** 类型定义
- **utils** 工具
- **model** 模型预设
- **scripts** 脚本(编译、创建新工具)
- **sdk**: SDK 定义,供外部调用,发布到了 npm
- **runtime**: 运行时express 服务
- **lib**: 库文件,提供工具函数和类库
- **src**: 运行时express 服务
- **test**: 测试相关
系统工具的结构可以参考 [如何开发系统插件](/docs/introduction/guide/plugins/dev_system_tool)。
系统工具的结构可以参考 [如何开发系统工具](/docs/introduction/guide/plugins/dev_system_tool)。
## 技术细节
@ -79,7 +77,7 @@ zod 可以实现在运行时的类型校验,也可以提供更高级的功能
### 使用 bun 进行打包
将插件 bundle 为一个单一的 `.pkg` 文件是一个重要的设计。这样可以将插件发布出来直接通过网络挂载等的形式使用。
将插件 bundle 为一个单一的 `.js` 文件是一个重要的设计。这样可以将插件发布出来直接通过网络挂载等的形式使用。
## 未来规划

View File

@ -7,9 +7,9 @@ import { Alert } from '@/components/docs/Alert';
## 前置知识
1. 基础的网络知识:端口,防火墙……
2. Docker 和 Docker Compose 基础知识
3. 大模型相关接口和参数
1. 基础的网络知识:端口,防火墙……
2. Docker 和 Docker Compose 基础知识
3. 大模型相关接口和参数
4. RAG 相关知识:向量模型,向量数据库,向量检索
## 部署架构图
@ -19,8 +19,8 @@ import { Alert } from '@/components/docs/Alert';
<Alert icon="🤖" context="success">
- MongoDB用于存储除了向量外的各类数据
- PostgreSQL/Milvus/Oceanbase:存储向量数据
- AIProxy: 聚合各类 AI API支持多模型调用 (任何模型问题,先自行通过 OneAPI 测试校验)
- PostgreSQL/Milvus存储向量数据
- OneAPI: 聚合各类 AI API支持多模型调用 (任何模型问题,先自行通过 OneAPI 测试校验)
</Alert>
@ -30,11 +30,13 @@ import { Alert } from '@/components/docs/Alert';
非常轻量,适合知识库索引量在 5000 万以下。
| 环境 | 最低配置(单节点) | 推荐配置 |
| -------------------------------- | ------------------ | ------------ |
| 测试(可以把计算进程设置少一些) | 2c4g | 2c8g |
| 100w 组向量 | 4c8g 50GB | 4c16g 50GB |
| 500w 组向量 | 8c32g 200GB | 16c64g 200GB |
| 环境 | 最低配置(单节点) | 推荐配置 |
| ---- | ---- | ---- |
| 测试(可以把计算进程设置少一些) | 2c4g | 2c8g |
| 100w 组向量 | 4c8g 50GB | 4c16g 50GB |
| 500w 组向量 | 8c32g 200GB | 16c64g 200GB |
### Milvus版本
@ -42,11 +44,13 @@ import { Alert } from '@/components/docs/Alert';
[点击查看 Milvus 官方推荐配置](https://milvus.io/docs/prerequisite-docker.md)
| 环境 | 最低配置(单节点) | 推荐配置 |
| ----------- | ------------------ | -------- |
| 测试 | 2c8g | 4c16g |
| 100w 组向量 | 未测试 | |
| 500w 组向量 | | |
| 环境 | 最低配置(单节点) | 推荐配置 |
| ---- | ---- | ---- |
| 测试 | 2c8g | 4c16g |
| 100w 组向量 | 未测试 | |
| 500w 组向量 | | |
### zilliz cloud版本
@ -58,7 +62,7 @@ Zilliz Cloud 由 Milvus 原厂打造,是全托管的 SaaS 向量数据库服
### 1. 确保网络环境
如果使用`OpenAI`等国外模型接口,请确保可以正常访问,否则会报错:`Connection error` 等。 方案可以参考:[代理方案](/docs/introduction/development/proxy/nginx)
如果使用`OpenAI`等国外模型接口,请确保可以正常访问,否则会报错:`Connection error` 等。 方案可以参考:[代理方案](/docs/development/proxy/)
### 2. 准备 Docker 环境
@ -85,7 +89,6 @@ brew install orbstack
```
或者直接[下载安装包](https://orbstack.dev/download)进行安装。
</Tab>
<Tab value="Windows">
我们建议将源代码和其他数据绑定到 Linux 容器中时,将其存储在 Linux 文件系统中,而不是 Windows 文件系统中。
@ -93,122 +96,74 @@ brew install orbstack
可以选择直接[使用 WSL 2 后端在 Windows 中安装 Docker Desktop](https://docs.docker.com/desktop/wsl/)。
也可以直接[在 WSL 2 中安装命令行版本的 Docker](https://nickjanetakis.com/blog/install-docker-in-wsl-2-without-docker-desktop)。
</Tab>
</Tabs>
## 开始部署
### 1. 获取 `docker-compose.yml` 和 `config.json` 配置文件
### 1. 下载 docker-compose.yml
#### 方法一:使用脚本部署
非 Linux 环境或无法访问外网的环境,可手动创建一个目录,下载配置文件和对应版本的`docker-compose.yml`,并在该目录中依据下载的配置文件运行 Docker。若作为本地开发使用推荐`docker-compose-pgvector`版本,并自行拉取、运行`sandbox`和`fastgpt`,同时在 docker 配置文件中注释掉`sandbox`和`fastgpt`的部分
<Tabs items={['PgVector版本','Oceanbase版本','Milvus版本','Zilliz版本']}>
<Tab value="PgVector版本">
国内镜像(阿里云)
```bash
bash <(curl -fsSL https://doc.fastgpt.cn/deploy/install.sh) --region=cn --vector=pg
```
非国内镜像(dockhub, ghcr)
```bash
bash <(curl -fsSL https://doc.fastgpt.cn/deploy/install.sh) --region=global --vector=pg
```
需要在 Linux/MacOS/Windows WSL 环境下执行
</Tab>
<Tab value="Oceanbase版本">
国内镜像(阿里云)
```bash
bash <(curl -fsSL https://doc.fastgpt.cn/deploy/install.sh) --region=cn --vector=oceanbase
```
非国内镜像(dockhub, ghcr)
```bash
bash <(curl -fsSL https://doc.fastgpt.cn/deploy/install.sh) --region=global --vector=oceanbase
```
需要在 Linux/MacOS/Windows WSL 环境下执行
</Tab>
<Tab value="Milvus版本">
国内镜像(阿里云)
```bash
bash <(curl -fsSL https://doc.fastgpt.cn/deploy/install.sh) --region=cn --vector=milvus
```
非国内镜像(dockhub, ghcr)
```bash
bash <(curl -fsSL https://doc.fastgpt.cn/deploy/install.sh) --region=global --vector=milvus
```
需要在 Linux/MacOS/Windows WSL 环境下执行
</Tab>
<Tab value="Zilliz版本">
国内镜像(阿里云)
```bash
bash <(curl -fsSL https://doc.fastgpt.cn/deploy/install.sh) --region=cn --vector=zilliz
```
非国内镜像(dockhub, ghcr)
```bash
bash <(curl -fsSL https://doc.fastgpt.cn/deploy/install.sh) --region=global --vector=zilliz
```
需要在 Linux/MacOS/Windows WSL 环境下执行
zilliz 还需要获取密钥,参考 [部署 Zilliz 版本获取账号和密钥](#部署-zilliz-版本获取账号和密钥)
</Tab>
</Tabs>
#### 方法二:手动下载部署
如果部署环境为非 *nix 环境或无法访问外网,需要手动下载 `docker-compose.yml` 进行部署
选择并下载您的 `docker-compose.yml` 文件
- Pgvector
- 中国大陆地区镜像源(阿里云)[docker-compose.pg.yml](https://doc.fastgpt.cn/deploy/docker/cn/docker-compose.pg.yml)
- 全球镜像源(dockerhub, ghcr)[docker-compose.pg.yml](https://doc.fastgpt.cn/deploy/docker/global/docker-compose.pg.yml)
- Oceanbase
- 中国大陆地区镜像源(阿里云)[docker-compose.ob.yml](https://doc.fastgpt.cn/deploy/docker/cn/docker-compose.ob.yml)
- 全球镜像源(dockerhub, ghcr)[docker-compose.ob.yml](https://doc.fastgpt.cn/deploy/docker/global/docker-compose.ob.yml)
- Milvus
- 中国大陆地区镜像源(阿里云)[docker-compose.milvus.yml](https://doc.fastgpt.cn/deploy/docker/cn/docker-compose.milvus.yml)
- 全球镜像源(dockerhub, ghcr)[docker-compose.milvus.yml](https://doc.fastgpt.cn/deploy/docker/global/docker-compose.milvus.yml)
- Zilliz
- 中国大陆地区镜像源(阿里云)[docker-compose.zilliz.yml](https://doc.fastgpt.cn/deploy/docker/cn/docker-compose.zilliz.yml)
- 全球镜像源(dockerhub, ghcr)[docker-compose.zilliz.yml](https://doc.fastgpt.cn/deploy/docker/global/docker-compose.zilliz.yml)
下载 config.json 文件
- [config.json](https://doc.fastgpt.cn/deploy/config/config.json)
- [config.json](https://raw.githubusercontent.com/labring/FastGPT/refs/heads/main/projects/app/data/config.json)
- [docker-compose.yml](https://github.com/labring/FastGPT/blob/main/deploy/docker) (注意,不同向量库版本的文件不一样)
<Alert icon="🤖" context="success">
所有 `docker-compose.yml` 配置文件中 `MongoDB` 为 5.x需要用到AVX指令集部分 CPU 不支持,需手动更改其镜像版本为 4.4.24\*\*需要自己在docker hub下载阿里云镜像没做备份
所有 `docker-compose.yml` 配置文件中 `MongoDB` 为 5.x需要用到AVX指令集部分 CPU 不支持,需手动更改其镜像版本为 4.4.24**需要自己在docker hub下载阿里云镜像没做备份
</Alert>
### 2. 开放外网端口/配置域名
**Linux 快速脚本**
以下两个端口必须被访问到:
```bash
mkdir fastgpt
cd fastgpt
curl -O https://raw.githubusercontent.com/labring/FastGPT/main/projects/app/data/config.json
1. 指向 3000 端口FastGPT 主服务)
2. 指向 9000 端口S3 服务)
# pgvector 版本(测试推荐,简单快捷)
curl -o docker-compose.yml https://raw.githubusercontent.com/labring/FastGPT/main/deploy/docker/docker-compose-pgvector.yml
# oceanbase 版本需要将init.sql和docker-compose.yml放在同一个文件夹方便挂载
# curl -o docker-compose.yml https://raw.githubusercontent.com/labring/FastGPT/main/deploy/docker/docker-compose-oceanbase/docker-compose.yml
# curl -o init.sql https://raw.githubusercontent.com/labring/FastGPT/main/deploy/docker/docker-compose-oceanbase/init.sql
# milvus 版本
# curl -o docker-compose.yml https://raw.githubusercontent.com/labring/FastGPT/main/deploy/docker/docker-compose-milvus.yml
# zilliz 版本
# curl -o docker-compose.yml https://raw.githubusercontent.com/labring/FastGPT/main/deploy/docker/docker-compose-zilliz.yml
```
### 3. 修改环境变量
### 2. 修改环境变量
1. 修改 yml 文件顶部的`S3_EXTERNAL_BASE_URL`变量,改成 S3 的可访问地址(要求使用者可以访问)
2. 按照您的需求自行修改环境变量,建议在生产环境修改账号密码等。
3. 对于 Zilliz 版本 还需要获取密钥,参考 [部署 Zilliz 版本获取账号和密钥](#部署-zilliz-版本获取账号和密钥)
找到 yml 文件中fastgpt 容器的环境变量进行下面操作:
### 4. 修改 config.json 配置文件
<Tabs items={['PgVector版本','Oceanbase版本','Milvus版本','Zilliz版本']}>
<Tab value="PgVector版本">
无需操作
</Tab>
<Tab value="Oceanbase版本">
无需操作
</Tab>
<Tab value="Milvus版本">
无需操作
</Tab>
<Tab value="Zilliz版本">
打开 [Zilliz Cloud](https://zilliz.com.cn/), 创建实例并获取相关秘钥。
![zilliz_key](/imgs/zilliz_key.png)
<Alert icon="🤖" context="success">
1. 修改`MILVUS_ADDRESS`和`MILVUS_TOKEN`链接参数,分别对应 `zilliz` 的 `Public Endpoint` 和 `Api key`记得把自己ip加入白名单。
</Alert>
</Tab>
</Tabs>
### 3. 修改 config.json 配置文件
修改`config.json`文件中的`mcpServerProxyEndpoint`值,设置成`mcp server`的公网可访问地址yml 文件中默认给出了映射到 3005 端口,如通过 IP 访问,则可能是:`120.172.2.10:3005`。
### 5. 启动容器
### 4. 启动容器
在 docker-compose.yml 同级目录下执行。请确保`docker-compose`版本在 2.17 以上,否则可能无法执行自动化命令。
@ -217,28 +172,20 @@ bash <(curl -fsSL https://doc.fastgpt.cn/deploy/install.sh) --region=global --ve
docker-compose up -d
```
### 6. 访问 FastGPT
### 5. 访问 FastGPT
可通过第二步开放的端口/域名访问 FastGPT。
登录用户名为 `root`,密码为`docker-compose.yml`环境变量里设置的 `DEFAULT_ROOT_PSW`。
每次重启容器,都会自动初始化 root 用户,密码为 `1234`(与环境变量中的`DEFAULT_ROOT_PSW`一致)。
目前可以通过 `ip:3000` 直接访问(注意开放防火墙)。登录用户名为 `root`,密码为`docker-compose.yml`环境变量里设置的 `DEFAULT_ROOT_PSW`。
### 7. 配置模型
如果需要域名访问,请自行安装并配置 Nginx。
首次运行,会自动初始化 root 用户,密码为 `1234`(与环境变量中的`DEFAULT_ROOT_PSW`一致),日志可能会提示一次`MongoServerError: Unable to read from a snapshot due to pending collection catalog changes;`可忽略。
### 6. 配置模型
- 首次登录FastGPT后系统会提示未配置`语言模型`和`索引模型`,并自动跳转模型配置页面。系统必须至少有这两类模型才能正常使用。
- 如果系统未正常跳转,可以在`账号-模型提供商`页面,进行模型配置。[点击查看相关教程](/docs/introduction/development/modelConfig/ai-proxy)
- 如果系统未正常跳转,可以在`账号-模型提供商`页面,进行模型配置。[点击查看相关教程](/docs/development/modelconfig/ai-proxy)
- 目前已知可能问题:首次进入系统后,整个浏览器 tab 无法响应。此时需要删除该tab重新打开一次即可。
### 8. 安装系统插件
从 V4.14.0 版本开始fastgpt-plugin 镜像仅提供运行环境,不再预装系统插件,所有 FastGPT 系统需手动安装系统插件。
* 通过插件市场安装,默认会向公开的 FastGPT Marketplace 获取数据进行安装。
* 如果你的 FastGPT 无法访问插件市场,则可以手动访问[FastGPT 插件市场](https://marketplace.fastgpt.cn/),先下载 .pkg 文件,再通过文件导入的方式安装到系统里。
* 除了安装外,还可对工具进行排序、默认安装、标签管理等。
![alt text](/imgs/image-121.png)
## FAQ
### 登录系统后,浏览器无法响应
@ -261,7 +208,7 @@ chown 999:root ./mongodb.key
```
2. 修改 docker-compose.yml挂载密钥
```yml
mongo:
# image: mongo:5.0.18
@ -292,7 +239,7 @@ docker-compose up -d
```bash
# 查看 mongo 容器是否正常运行
docker ps
docker ps
# 进入容器
docker exec -it mongo bash
@ -326,20 +273,20 @@ docker-compose up -d
### 如何更新版本?
1. 查看[更新文档](/docs/upgrading),确认要升级的版本,避免跨版本升级。
1. 查看[更新文档](/docs/development/upgrading/intro/),确认要升级的版本,避免跨版本升级。
2. 修改镜像 tag 到指定版本
3. 执行下面命令会自动拉取镜像:
```bash
docker-compose pull
docker-compose up -d
```
```bash
docker-compose pull
docker-compose up -d
```
4. 执行初始化脚本(如果有)
### 如何自定义配置文件?
修改`config.json`文件,并执行`docker-compose down`再执行`docker-compose up -d`重起容器。具体配置,参考[配置详解](/docs/introduction/development/configuration)。
修改`config.json`文件,并执行`docker-compose down`再执行`docker-compose up -d`重起容器。具体配置,参考[配置详解](/docs/development/configuration)。
### 如何检查自定义配置文件是否挂载
@ -415,14 +362,3 @@ mongo连接失败查看mongo的运行状态**对应日志**。
### 如何修改密码
修改`docker-compose.yml`文件中`DEFAULT_ROOT_PSW`并重启即可,密码会自动更新。
### 部署 Zilliz 版本,获取账号和密钥
打开 [Zilliz Cloud](https://zilliz.com.cn/), 创建实例并获取相关秘钥。
![zilliz_key](/imgs/zilliz_key.png)
<Alert icon="🤖" context="success">
1. 修改`MILVUS_ADDRESS`和`MILVUS_TOKEN`链接参数,分别对应 `zilliz` 的 `Public Endpoint` 和 `Api key`记得把自己ip加入白名单。
</Alert>

Some files were not shown because too many files have changed in this diff Show More