mirror of
https://github.com/labring/FastGPT.git
synced 2025-12-25 20:02:47 +00:00
squash: compress all commits into one
This commit is contained in:
parent
ab743b9358
commit
f7bc8cc090
|
|
@ -0,0 +1,475 @@
|
|||
---
|
||||
name: create-skill-file
|
||||
description: Guides Claude in creating well-structured SKILL.md files following best practices. Provides clear guidelines for naming, structure, and content organization to make skills easy to discover and execute.
|
||||
---
|
||||
|
||||
# Claude Agent Skill 编写规范
|
||||
|
||||
> 如何创建高质量的 SKILL.md 文件
|
||||
|
||||
## 目录
|
||||
|
||||
- [快速开始](#快速开始)
|
||||
- [核心原则](#核心原则)
|
||||
- [文件结构规范](#文件结构规范)
|
||||
- [命名和描述规范](#命名和描述规范)
|
||||
- [内容编写指南](#内容编写指南)
|
||||
- [质量检查清单](#质量检查清单)
|
||||
|
||||
---
|
||||
|
||||
## 快速开始
|
||||
|
||||
### 3步创建 Skill
|
||||
|
||||
**第1步: 创建目录**
|
||||
|
||||
```bash
|
||||
mkdir -p .claude/skill/your-skill-name
|
||||
cd .claude/skill/your-skill-name
|
||||
```
|
||||
|
||||
**第2步: 创建 SKILL.md**
|
||||
|
||||
```markdown
|
||||
---
|
||||
name: your-skill-name
|
||||
description: Brief description with trigger keywords and scenarios
|
||||
---
|
||||
|
||||
# Your Skill Title
|
||||
|
||||
## When to Use This Skill
|
||||
|
||||
- User asks to [specific scenario]
|
||||
- User mentions "[keyword]"
|
||||
|
||||
## How It Works
|
||||
|
||||
1. Step 1: [Action]
|
||||
2. Step 2: [Action]
|
||||
|
||||
## Examples
|
||||
|
||||
**Input**: User request
|
||||
**Output**: Expected result
|
||||
```
|
||||
|
||||
**第3步: 测试**
|
||||
- 在对话中使用 description 中的关键词触发
|
||||
- 观察 Claude 是否正确执行
|
||||
- 根据效果调整
|
||||
|
||||
---
|
||||
|
||||
## 核心原则
|
||||
|
||||
### 1. 保持简洁
|
||||
|
||||
只添加 Claude **不知道**的新知识:
|
||||
- ✅ 项目特定的工作流程
|
||||
- ✅ 特殊的命名规范或格式要求
|
||||
- ✅ 自定义工具和脚本的使用方法
|
||||
- ❌ 通用编程知识
|
||||
- ❌ 显而易见的步骤
|
||||
|
||||
**示例对比**:
|
||||
|
||||
```markdown
|
||||
# ❌ 过度详细
|
||||
1. 创建 Python 文件
|
||||
2. 导入必要的库
|
||||
3. 定义函数
|
||||
4. 编写主程序逻辑
|
||||
|
||||
# ✅ 简洁有效
|
||||
使用 `scripts/api_client.py` 调用内部 API。
|
||||
请求头必须包含 `X-Internal-Token`(从环境变量 `INTERNAL_API_KEY` 获取)。
|
||||
```
|
||||
|
||||
### 2. 设定合适的自由度
|
||||
|
||||
| 自由度 | 适用场景 | 编写方式 |
|
||||
|--------|---------|---------|
|
||||
| **高** | 需要创造性、多种解决方案 | 提供指导原则,不限定具体步骤 |
|
||||
| **中** | 有推荐模式但允许变化 | 提供参数化示例和默认流程 |
|
||||
| **低** | 容易出错、需严格执行 | 提供详细的分步指令或脚本 |
|
||||
|
||||
**判断标准**:
|
||||
- 任务是否有明确的"正确答案"? → 低自由度
|
||||
- 是否需要适应不同场景? → 高自由度
|
||||
- 错误的代价有多大? → 代价高则用低自由度
|
||||
|
||||
### 3. 渐进式披露
|
||||
|
||||
将复杂内容分层组织:
|
||||
|
||||
```
|
||||
SKILL.md (主文档, 200-500行)
|
||||
├── reference.md (详细文档)
|
||||
├── examples.md (完整示例)
|
||||
└── scripts/ (可执行脚本)
|
||||
```
|
||||
|
||||
**规则**:
|
||||
- SKILL.md 超过 500行 → 拆分子文件
|
||||
- 子文件超过 100行 → 添加目录
|
||||
- 引用深度 ≤ 1层
|
||||
|
||||
---
|
||||
|
||||
## 文件结构规范
|
||||
|
||||
### YAML Frontmatter
|
||||
|
||||
```yaml
|
||||
---
|
||||
name: skill-name-here
|
||||
description: Clear description of what this skill does and when to activate it
|
||||
---
|
||||
```
|
||||
|
||||
**字段规范**:
|
||||
|
||||
| 字段 | 要求 | 说明 |
|
||||
|------|------|------|
|
||||
| `name` | 小写字母、数字、短横线,≤64字符 | 必须与目录名一致 |
|
||||
| `description` | 纯文本,≤1024字符 | 用于检索和激活 |
|
||||
|
||||
**命名禁忌**:
|
||||
- ❌ XML 标签、保留字(`anthropic`, `claude`)
|
||||
- ❌ 模糊词汇(`helper`, `utility`, `manager`)
|
||||
- ❌ 空格或下划线(用短横线 `-`)
|
||||
|
||||
**Description 技巧**:
|
||||
|
||||
```yaml
|
||||
# ❌ 过于泛化
|
||||
description: Helps with code tasks
|
||||
|
||||
# ✅ 具体且包含关键词
|
||||
description: Processes CSV files and generates Excel reports with charts. Use when user asks to convert data formats or create visual reports.
|
||||
|
||||
# ✅ 说明触发场景
|
||||
description: Analyzes Python code for security vulnerabilities using bandit. Activates when user mentions "security audit" or "vulnerability scan".
|
||||
```
|
||||
|
||||
### 目录组织
|
||||
|
||||
**基础结构**(简单 Skill):
|
||||
```
|
||||
skill-name/
|
||||
└── SKILL.md
|
||||
```
|
||||
|
||||
**标准结构**(推荐):
|
||||
```
|
||||
skill-name/
|
||||
├── SKILL.md
|
||||
├── templates/
|
||||
│ └── template.md
|
||||
└── scripts/
|
||||
└── script.py
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 命名和描述规范
|
||||
|
||||
### Skill 命名
|
||||
|
||||
**推荐格式**: 动名词形式 (verb-ing + noun)
|
||||
|
||||
```
|
||||
✅ 好的命名:
|
||||
- processing-csv-files
|
||||
- generating-api-docs
|
||||
- managing-database-migrations
|
||||
|
||||
❌ 不好的命名:
|
||||
- csv (过于简短)
|
||||
- data_processor (使用下划线)
|
||||
- helper (过于模糊)
|
||||
```
|
||||
|
||||
### Description 编写
|
||||
|
||||
**必须使用第三人称**:
|
||||
|
||||
```yaml
|
||||
# ❌ 错误
|
||||
description: I help you process PDFs
|
||||
|
||||
# ✅ 正确
|
||||
description: Processes PDF documents and extracts structured data
|
||||
```
|
||||
|
||||
**4C 原则**:
|
||||
- **Clear** (清晰): 避免术语和模糊词汇
|
||||
- **Concise** (简洁): 1-2句话说明核心功能
|
||||
- **Contextual** (上下文): 说明适用场景
|
||||
- **Complete** (完整): 功能 + 触发条件
|
||||
|
||||
---
|
||||
|
||||
## 内容编写指南
|
||||
|
||||
### "When to Use" 章节
|
||||
|
||||
明确说明触发场景:
|
||||
|
||||
```markdown
|
||||
## When to Use This Skill
|
||||
|
||||
- User asks to analyze Python code for type errors
|
||||
- User mentions "mypy" or "type checking"
|
||||
- User is working in a Python project with type hints
|
||||
- User needs to add type annotations
|
||||
```
|
||||
|
||||
**模式**:
|
||||
- 直接请求: "User asks to X"
|
||||
- 关键词: "User mentions 'keyword'"
|
||||
- 上下文: "User is working with X"
|
||||
- 任务类型: "User needs to X"
|
||||
|
||||
### 工作流设计
|
||||
|
||||
**简单线性流程**:
|
||||
|
||||
```markdown
|
||||
## How It Works
|
||||
|
||||
1. Scan the project for all `.py` files
|
||||
2. Run `mypy --strict` on each file
|
||||
3. Parse error output and categorize by severity
|
||||
4. Generate summary report with fix suggestions
|
||||
```
|
||||
|
||||
**条件分支流程**:
|
||||
|
||||
```markdown
|
||||
## Workflow
|
||||
|
||||
1. **Check project type**
|
||||
- If Django → Use `django-stubs` config
|
||||
- If Flask → Use `flask-stubs` config
|
||||
- Otherwise → Use default mypy config
|
||||
|
||||
2. **Run type checking**
|
||||
- If errors found → Proceed to step 3
|
||||
- If no errors → Report success and exit
|
||||
```
|
||||
|
||||
**Checklist 模式**(验证型任务):
|
||||
|
||||
```markdown
|
||||
## Pre-deployment Checklist
|
||||
|
||||
Execute in order. Stop if any step fails.
|
||||
|
||||
- [ ] Run tests: `npm test` (must pass)
|
||||
- [ ] Build: `npm run build` (no errors)
|
||||
- [ ] Check deps: `npm audit` (no critical vulnerabilities)
|
||||
```
|
||||
|
||||
### 示例和模板
|
||||
|
||||
**输入-输出示例**:
|
||||
|
||||
```markdown
|
||||
## Examples
|
||||
|
||||
### Example 1: Basic Check
|
||||
|
||||
**User Request**: "Check my code for type errors"
|
||||
|
||||
**Action**:
|
||||
1. Scan for `.py` files
|
||||
2. Run `mypy` on all files
|
||||
|
||||
**Output**:
|
||||
|
||||
Found 3 type errors in 2 files:
|
||||
src/main.py:15: error: Missing return type
|
||||
src/utils.py:42: error: Incompatible types
|
||||
|
||||
```
|
||||
|
||||
### 脚本集成
|
||||
|
||||
**何时使用脚本**:
|
||||
- 简单命令 → 直接在 SKILL.md 中说明
|
||||
- 复杂流程 → 提供独立脚本
|
||||
|
||||
**脚本编写规范**:
|
||||
|
||||
```python
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Brief description of what this script does.
|
||||
|
||||
Usage:
|
||||
python script.py <arg> [--option value]
|
||||
"""
|
||||
|
||||
import argparse
|
||||
|
||||
DEFAULT_VALUE = 80 # Use constants, not magic numbers
|
||||
|
||||
def main():
|
||||
parser = argparse.ArgumentParser(description=__doc__)
|
||||
parser.add_argument("directory", help="Directory to process")
|
||||
parser.add_argument("--threshold", type=int, default=DEFAULT_VALUE)
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
# Validate inputs
|
||||
if not Path(args.directory).is_dir():
|
||||
print(f"Error: {args.directory} not found")
|
||||
return 1
|
||||
|
||||
# Execute
|
||||
result = process(args.directory, args.threshold)
|
||||
|
||||
# Report
|
||||
print(f"Processed {result['count']} files")
|
||||
return 0
|
||||
|
||||
if __name__ == "__main__":
|
||||
exit(main())
|
||||
```
|
||||
|
||||
**关键规范**:
|
||||
- ✅ Shebang 行和 docstring
|
||||
- ✅ 类型注解和常量
|
||||
- ✅ 参数验证和错误处理
|
||||
- ✅ 清晰的返回值(0=成功, 1=失败)
|
||||
|
||||
### 最佳实践
|
||||
|
||||
**Do**:
|
||||
- ✅ 提供可执行的命令和脚本
|
||||
- ✅ 包含输入-输出示例
|
||||
- ✅ 说明验证标准和成功条件
|
||||
- ✅ 包含 Do/Don't 清单
|
||||
|
||||
**Don't**:
|
||||
- ❌ 包含 Claude 已知的通用知识
|
||||
- ❌ 使用抽象描述而非具体步骤
|
||||
- ❌ 遗漏错误处理指导
|
||||
- ❌ 示例使用伪代码而非真实代码
|
||||
|
||||
---
|
||||
|
||||
## 质量检查清单
|
||||
|
||||
### 核心质量
|
||||
|
||||
- [ ] `name` 符合命名规范(小写、短横线、≤64字符)
|
||||
- [ ] `description` 包含触发关键词和场景(≤1024字符)
|
||||
- [ ] 名称与目录名一致
|
||||
- [ ] 只包含 Claude 不知道的信息
|
||||
- [ ] 没有冗余或重复内容
|
||||
|
||||
### 功能完整性
|
||||
|
||||
- [ ] 有"When to Use"章节,列出 3-5 个触发场景
|
||||
- [ ] 有清晰的执行流程或步骤
|
||||
- [ ] 至少 2-3 个完整示例
|
||||
- [ ] 包含输入和预期输出
|
||||
- [ ] 错误处理有指导
|
||||
|
||||
### 结构规范
|
||||
|
||||
- [ ] 章节组织清晰
|
||||
- [ ] 超过 200行有目录导航
|
||||
- [ ] 引用层级 ≤ 1层
|
||||
- [ ] 所有路径使用正斜杠 `/`
|
||||
- [ ] 术语使用一致
|
||||
|
||||
### 脚本和模板
|
||||
|
||||
- [ ] 脚本包含使用说明和参数文档
|
||||
- [ ] 脚本有错误处理
|
||||
- [ ] 避免魔法数字,使用配置
|
||||
- [ ] 模板格式清晰易用
|
||||
|
||||
### 最终检查
|
||||
|
||||
- [ ] 通读全文,确保流畅易读
|
||||
- [ ] 使用实际场景测试触发
|
||||
- [ ] 长度适中(200-500行,或已拆分)
|
||||
|
||||
---
|
||||
|
||||
## 常见问题
|
||||
|
||||
**Q: Skill 多长才合适?**
|
||||
- 最小: 50-100行
|
||||
- 理想: 200-500行
|
||||
- 最大: 500行(超过则拆分)
|
||||
|
||||
**Q: 如何让 Skill 更容易激活?**
|
||||
- 在 `description` 中使用用户会说的关键词
|
||||
- 说明具体场景("when user asks to X")
|
||||
- 提及相关工具名称
|
||||
|
||||
**Q: 多个 Skill 功能重叠怎么办?**
|
||||
- 使用更具体的 `description` 区分
|
||||
- 在"When to Use"中说明关系
|
||||
- 考虑合并为一个 Skill
|
||||
|
||||
**Q: Skill 需要维护吗?**
|
||||
- 每季度审查一次,更新过时信息
|
||||
- 根据使用反馈迭代
|
||||
- 工具或 API 变更时及时更新
|
||||
|
||||
---
|
||||
|
||||
## 快速参考
|
||||
|
||||
### Frontmatter 模板
|
||||
|
||||
```yaml
|
||||
---
|
||||
name: skill-name
|
||||
description: Brief description with trigger keywords
|
||||
---
|
||||
```
|
||||
|
||||
### 基础结构模板
|
||||
|
||||
```markdown
|
||||
# Skill Title
|
||||
|
||||
## When to Use This Skill
|
||||
- Scenario 1
|
||||
- Scenario 2
|
||||
|
||||
## How It Works
|
||||
1. Step 1
|
||||
2. Step 2
|
||||
|
||||
## Examples
|
||||
### Example 1
|
||||
...
|
||||
|
||||
## References
|
||||
- [Link](url)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 相关资源
|
||||
|
||||
- [Claude Agent Skills 官方文档](https://docs.claude.com/en/docs/agents-and-tools/agent-skills)
|
||||
- [Best Practices Checklist](https://docs.claude.com/en/docs/agents-and-tools/agent-skills/best-practices)
|
||||
- [模板文件](templates/) - 开箱即用的模板
|
||||
- [基础 skill 的模板](templates/basic-skill-template.md)
|
||||
- [工作流 skill 的模板](templates/workflow-skill-template.md)
|
||||
- [示例库](examples/) - 完整的 Skill 示例
|
||||
- [优秀示例](examples/good-example.md)
|
||||
- [常见错误示例](examples/bad-example.md)
|
||||
|
||||
---
|
||||
|
|
@ -0,0 +1,867 @@
|
|||
# 不好的 Skill 示例与改进建议
|
||||
|
||||
本文档展示常见的 Skill 编写错误,并提供改进建议。
|
||||
|
||||
---
|
||||
|
||||
## 示例 1: 过于模糊的 Skill
|
||||
|
||||
### ❌ 不好的版本
|
||||
|
||||
```markdown
|
||||
---
|
||||
name: helper
|
||||
description: Helps with code
|
||||
---
|
||||
|
||||
# Code Helper
|
||||
|
||||
This skill helps you with coding tasks.
|
||||
|
||||
## Usage
|
||||
|
||||
Use this when you need help with code.
|
||||
|
||||
## How It Works
|
||||
|
||||
1. Analyzes your code
|
||||
2. Provides suggestions
|
||||
3. Helps improve it
|
||||
```
|
||||
|
||||
### 问题分析
|
||||
|
||||
| 问题 | 说明 | 影响 |
|
||||
|------|------|------|
|
||||
| **模糊的名称** | "helper" 太泛化,没有说明具体做什么 | Claude 不知道何时激活 |
|
||||
| **无关键词** | description 缺少具体触发词 | 用户很难激活这个 Skill |
|
||||
| **无具体场景** | 没说明适用什么类型的代码 | 适用范围不清 |
|
||||
| **抽象的步骤** | "Provides suggestions" 太模糊 | Claude 不知道具体做什么 |
|
||||
| **无示例** | 没有实际例子 | 用户和 Claude 都不清楚预期输出 |
|
||||
|
||||
### ✅ 改进版本
|
||||
|
||||
```markdown
|
||||
---
|
||||
name: python-code-refactoring
|
||||
description: Refactors Python code to improve readability and maintainability using standard patterns. Activates when user asks to clean up code, improve structure, or mentions refactoring. Focuses on function extraction, variable naming, and removing code smells.
|
||||
---
|
||||
|
||||
# Python Code Refactoring Skill
|
||||
|
||||
Improves Python code quality through systematic refactoring.
|
||||
|
||||
## When to Use This Skill
|
||||
|
||||
- User asks to "refactor this code", "clean up this function", or "improve readability"
|
||||
- User mentions "code smell", "technical debt", or "maintainability"
|
||||
- User is working with Python code that has:
|
||||
- Long functions (>50 lines)
|
||||
- Nested conditionals (>3 levels)
|
||||
- Repeated code patterns
|
||||
- Unclear variable names
|
||||
|
||||
## How It Works
|
||||
|
||||
### 1. Identify Refactoring Opportunities
|
||||
|
||||
Scan code for common issues:
|
||||
- Functions longer than 50 lines
|
||||
- Repeated code blocks (DRY violations)
|
||||
- Magic numbers without constants
|
||||
- Unclear variable names (x, temp, data)
|
||||
- Deep nesting (>3 levels)
|
||||
|
||||
### 2. Prioritize Changes
|
||||
|
||||
Focus on high-impact refactorings:
|
||||
- **High**: Extract complex nested logic to functions
|
||||
- **Medium**: Rename unclear variables
|
||||
- **Low**: Minor style improvements
|
||||
|
||||
### 3. Apply Refactorings
|
||||
|
||||
**Extract Method**:
|
||||
```python
|
||||
# Before
|
||||
def process_order(order):
|
||||
# 50 lines of validation logic
|
||||
# 30 lines of payment processing
|
||||
# 20 lines of email notification
|
||||
|
||||
# After
|
||||
def process_order(order):
|
||||
validate_order(order)
|
||||
process_payment(order)
|
||||
send_confirmation_email(order)
|
||||
```
|
||||
|
||||
**Replace Magic Numbers**:
|
||||
```python
|
||||
# Before
|
||||
if user.age < 18:
|
||||
return False
|
||||
|
||||
# After
|
||||
MINIMUM_AGE = 18
|
||||
if user.age < MINIMUM_AGE:
|
||||
return False
|
||||
```
|
||||
|
||||
**Simplify Conditionals**:
|
||||
```python
|
||||
# Before
|
||||
if user.role == 'admin':
|
||||
return True
|
||||
elif user.role == 'moderator':
|
||||
return True
|
||||
elif user.role == 'editor':
|
||||
return True
|
||||
else:
|
||||
return False
|
||||
|
||||
# After
|
||||
PRIVILEGED_ROLES = {'admin', 'moderator', 'editor'}
|
||||
return user.role in PRIVILEGED_ROLES
|
||||
```
|
||||
|
||||
### 4. Verify Improvements
|
||||
|
||||
After refactoring:
|
||||
- Run existing tests (all must pass)
|
||||
- Check code length reduced
|
||||
- Verify improved readability
|
||||
|
||||
## Example
|
||||
|
||||
**User Request**: "Refactor this function, it's too long"
|
||||
|
||||
```python
|
||||
def process_user_registration(data):
|
||||
if not data.get('email'):
|
||||
return {'error': 'Email required'}
|
||||
if '@' not in data['email']:
|
||||
return {'error': 'Invalid email'}
|
||||
if not data.get('password'):
|
||||
return {'error': 'Password required'}
|
||||
if len(data['password']) < 8:
|
||||
return {'error': 'Password too short'}
|
||||
if not any(c.isupper() for c in data['password']):
|
||||
return {'error': 'Password needs uppercase'}
|
||||
existing = db.query(User).filter_by(email=data['email']).first()
|
||||
if existing:
|
||||
return {'error': 'Email already registered'}
|
||||
salt = bcrypt.gensalt()
|
||||
hashed = bcrypt.hashpw(data['password'].encode(), salt)
|
||||
user = User(email=data['email'], password_hash=hashed)
|
||||
db.add(user)
|
||||
db.commit()
|
||||
token = jwt.encode({'user_id': user.id}, SECRET_KEY)
|
||||
send_email(data['email'], 'Welcome!', 'Thanks for registering')
|
||||
return {'success': True, 'token': token}
|
||||
```
|
||||
|
||||
**Refactored**:
|
||||
|
||||
```python
|
||||
def process_user_registration(data):
|
||||
"""Register new user with validation and email confirmation."""
|
||||
# Validation
|
||||
validation_error = validate_registration_data(data)
|
||||
if validation_error:
|
||||
return {'error': validation_error}
|
||||
|
||||
# Check uniqueness
|
||||
if user_exists(data['email']):
|
||||
return {'error': 'Email already registered'}
|
||||
|
||||
# Create user
|
||||
user = create_user(data['email'], data['password'])
|
||||
|
||||
# Generate token
|
||||
token = generate_auth_token(user.id)
|
||||
|
||||
# Send welcome email
|
||||
send_welcome_email(user.email)
|
||||
|
||||
return {'success': True, 'token': token}
|
||||
|
||||
|
||||
def validate_registration_data(data):
|
||||
"""Validate registration data, return error message or None."""
|
||||
if not data.get('email'):
|
||||
return 'Email required'
|
||||
if '@' not in data['email']:
|
||||
return 'Invalid email'
|
||||
if not data.get('password'):
|
||||
return 'Password required'
|
||||
return validate_password_strength(data['password'])
|
||||
|
||||
|
||||
def validate_password_strength(password):
|
||||
"""Check password meets security requirements."""
|
||||
MIN_PASSWORD_LENGTH = 8
|
||||
if len(password) < MIN_PASSWORD_LENGTH:
|
||||
return f'Password must be at least {MIN_PASSWORD_LENGTH} characters'
|
||||
if not any(c.isupper() for c in password):
|
||||
return 'Password must contain uppercase letter'
|
||||
return None
|
||||
|
||||
|
||||
def user_exists(email):
|
||||
"""Check if user with given email already exists."""
|
||||
return db.query(User).filter_by(email=email).first() is not None
|
||||
|
||||
|
||||
def create_user(email, password):
|
||||
"""Create and save new user with hashed password."""
|
||||
salt = bcrypt.gensalt()
|
||||
hashed = bcrypt.hashpw(password.encode(), salt)
|
||||
user = User(email=email, password_hash=hashed)
|
||||
db.add(user)
|
||||
db.commit()
|
||||
return user
|
||||
|
||||
|
||||
def generate_auth_token(user_id):
|
||||
"""Generate JWT authentication token."""
|
||||
return jwt.encode({'user_id': user_id}, SECRET_KEY)
|
||||
|
||||
|
||||
def send_welcome_email(email):
|
||||
"""Send welcome email to new user."""
|
||||
send_email(email, 'Welcome!', 'Thanks for registering')
|
||||
```
|
||||
|
||||
**Improvements**:
|
||||
- ✅ Main function reduced from 20 lines to 15 lines
|
||||
- ✅ Each function has single responsibility
|
||||
- ✅ Magic number (8) extracted to constant
|
||||
- ✅ All functions documented with docstrings
|
||||
- ✅ Easier to test individual functions
|
||||
- ✅ Easier to modify validation rules
|
||||
|
||||
## Best Practices
|
||||
|
||||
- ✅ Extract functions with clear names
|
||||
- ✅ Use constants instead of magic numbers
|
||||
- ✅ Keep functions under 30 lines
|
||||
- ✅ Maximum nesting depth of 2-3 levels
|
||||
- ✅ Write docstrings for extracted functions
|
||||
```
|
||||
|
||||
### 改进要点
|
||||
|
||||
1. ✅ 具体的名称: `python-code-refactoring` 而非 `helper`
|
||||
2. ✅ 详细的 description: 包含触发词和适用场景
|
||||
3. ✅ 明确的触发条件: 列出具体的使用场景
|
||||
4. ✅ 可执行的步骤: 每个步骤都有具体操作
|
||||
5. ✅ 实际代码示例: 展示完整的重构过程
|
||||
6. ✅ 具体的改进指标: 列出可验证的改进效果
|
||||
|
||||
---
|
||||
|
||||
## 示例 2: 过度冗长的 Skill
|
||||
|
||||
### ❌ 不好的版本
|
||||
|
||||
```markdown
|
||||
---
|
||||
name: python-basics
|
||||
description: Teaches Python programming basics
|
||||
---
|
||||
|
||||
# Python Basics
|
||||
|
||||
This skill helps you learn Python programming.
|
||||
|
||||
## Variables
|
||||
|
||||
In Python, you can create variables like this:
|
||||
|
||||
```python
|
||||
x = 5
|
||||
y = "hello"
|
||||
z = 3.14
|
||||
```
|
||||
|
||||
Python supports different data types:
|
||||
- Integers (int): whole numbers like 1, 2, 3
|
||||
- Floats (float): decimal numbers like 3.14, 2.5
|
||||
- Strings (str): text like "hello", 'world'
|
||||
- Booleans (bool): True or False
|
||||
|
||||
## Conditional Statements
|
||||
|
||||
You can use if statements to make decisions:
|
||||
|
||||
```python
|
||||
if x > 0:
|
||||
print("Positive")
|
||||
elif x < 0:
|
||||
print("Negative")
|
||||
else:
|
||||
print("Zero")
|
||||
```
|
||||
|
||||
The if statement checks a condition. If True, it runs the indented code.
|
||||
The elif means "else if" and provides an alternative condition.
|
||||
The else runs if none of the above conditions are True.
|
||||
|
||||
## Loops
|
||||
|
||||
Python has two main types of loops:
|
||||
|
||||
### For Loops
|
||||
|
||||
For loops iterate over a sequence:
|
||||
|
||||
```python
|
||||
for i in range(5):
|
||||
print(i)
|
||||
```
|
||||
|
||||
This prints numbers 0 through 4. The range() function generates numbers.
|
||||
|
||||
### While Loops
|
||||
|
||||
While loops continue while a condition is True:
|
||||
|
||||
```python
|
||||
i = 0
|
||||
while i < 5:
|
||||
print(i)
|
||||
i += 1
|
||||
```
|
||||
|
||||
This does the same thing as the for loop above.
|
||||
|
||||
## Functions
|
||||
|
||||
Functions are reusable blocks of code:
|
||||
|
||||
```python
|
||||
def greet(name):
|
||||
return f"Hello, {name}!"
|
||||
```
|
||||
|
||||
The def keyword defines a function. The function name is greet.
|
||||
It takes one parameter called name. The return statement sends back a value.
|
||||
|
||||
## Lists
|
||||
|
||||
Lists store multiple items:
|
||||
|
||||
```python
|
||||
fruits = ["apple", "banana", "orange"]
|
||||
```
|
||||
|
||||
You can access items by index:
|
||||
|
||||
```python
|
||||
first_fruit = fruits[0] # "apple"
|
||||
```
|
||||
|
||||
... [continues for 50 more sections about Python basics]
|
||||
```
|
||||
|
||||
### 问题分析
|
||||
|
||||
| 问题 | 说明 | 影响 |
|
||||
|------|------|------|
|
||||
| **包含通用知识** | Python 基础知识 Claude 已经知道 | 浪费 token,增加检索成本 |
|
||||
| **教程式内容** | 像教程而非工作指南 | Claude 不需要学习,需要的是工作指导 |
|
||||
| **过度详细** | 解释显而易见的概念 | 信息过载,难以找到关键信息 |
|
||||
| **缺少项目特定信息** | 没有项目相关的规范或约定 | 无法提供项目特定价值 |
|
||||
|
||||
### ✅ 改进版本
|
||||
|
||||
```markdown
|
||||
---
|
||||
name: project-python-conventions
|
||||
description: Enforces Python coding conventions specific to this project. Activates when user writes Python code or asks about code style. Covers naming, imports, error handling, and project-specific patterns.
|
||||
---
|
||||
|
||||
# Project Python Conventions
|
||||
|
||||
Project-specific Python coding standards and patterns.
|
||||
|
||||
## When to Use This Skill
|
||||
|
||||
- User is writing or reviewing Python code
|
||||
- User asks about code style or conventions
|
||||
- User needs guidance on project patterns
|
||||
|
||||
## Import Organization
|
||||
|
||||
Follow this order:
|
||||
|
||||
```python
|
||||
# 1. Standard library
|
||||
import os
|
||||
import sys
|
||||
from typing import Optional, List
|
||||
|
||||
# 2. Third-party packages
|
||||
import numpy as np
|
||||
from fastapi import FastAPI
|
||||
|
||||
# 3. Local application imports
|
||||
from core.models import User
|
||||
from utils.helpers import format_date
|
||||
```
|
||||
|
||||
## Naming Conventions
|
||||
|
||||
### Project-Specific Rules
|
||||
|
||||
| Type | Pattern | Example |
|
||||
|------|---------|---------|
|
||||
| API endpoints | `/api/v1/{resource}` | `/api/v1/users` |
|
||||
| Database tables | `{resource}_table` | `users_table` |
|
||||
| Environment variables | `APP_{NAME}` | `APP_DATABASE_URL` |
|
||||
| Config files | `{env}.config.py` | `prod.config.py` |
|
||||
|
||||
### Forbidden Patterns
|
||||
|
||||
```python
|
||||
# ❌ Don't use single-letter variables (except i, j, k in loops)
|
||||
d = get_data()
|
||||
|
||||
# ✅ Do use descriptive names
|
||||
user_data = get_data()
|
||||
|
||||
# ❌ Don't use abbreviations
|
||||
usr_mgr = UserManager()
|
||||
|
||||
# ✅ Do use full words
|
||||
user_manager = UserManager()
|
||||
```
|
||||
|
||||
## Error Handling Pattern
|
||||
|
||||
Use project's custom exceptions:
|
||||
|
||||
```python
|
||||
from core.exceptions import UserNotFoundError, ValidationError
|
||||
|
||||
def get_user(user_id: int) -> User:
|
||||
"""
|
||||
Retrieve user by ID.
|
||||
|
||||
Raises:
|
||||
UserNotFoundError: If user doesn't exist
|
||||
ValidationError: If user_id is invalid
|
||||
"""
|
||||
if not isinstance(user_id, int) or user_id <= 0:
|
||||
raise ValidationError(f"Invalid user_id: {user_id}")
|
||||
|
||||
user = db.query(User).get(user_id)
|
||||
if user is None:
|
||||
raise UserNotFoundError(f"User {user_id} not found")
|
||||
|
||||
return user
|
||||
```
|
||||
|
||||
**Never** use bare `except:` - always catch specific exceptions.
|
||||
|
||||
## Database Queries
|
||||
|
||||
Always use the project's query helper:
|
||||
|
||||
```python
|
||||
# ❌ Don't use raw SQLAlchemy queries
|
||||
users = db.query(User).filter(User.age > 18).all()
|
||||
|
||||
# ✅ Do use query helper
|
||||
from core.database import QueryBuilder
|
||||
|
||||
users = QueryBuilder(User).where('age', '>', 18).get()
|
||||
```
|
||||
|
||||
## API Response Format
|
||||
|
||||
All API endpoints must return this format:
|
||||
|
||||
```python
|
||||
{
|
||||
"success": True,
|
||||
"data": {
|
||||
# ... response data
|
||||
},
|
||||
"error": None,
|
||||
"meta": {
|
||||
"timestamp": "2025-01-31T12:00:00Z",
|
||||
"version": "1.0"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Use the response helper:
|
||||
|
||||
```python
|
||||
from core.responses import success_response, error_response
|
||||
|
||||
@app.get("/users/{id}")
|
||||
async def get_user(id: int):
|
||||
try:
|
||||
user = get_user_data(id)
|
||||
return success_response(user)
|
||||
except UserNotFoundError as e:
|
||||
return error_response(str(e), status_code=404)
|
||||
```
|
||||
|
||||
## Testing Patterns
|
||||
|
||||
### Test File Location
|
||||
|
||||
```
|
||||
project/
|
||||
├── src/
|
||||
│ └── services/
|
||||
│ └── user_service.py
|
||||
└── tests/
|
||||
└── services/
|
||||
└── test_user_service.py
|
||||
```
|
||||
|
||||
### Test Naming
|
||||
|
||||
```python
|
||||
# Format: test_{function_name}_{scenario}_{expected_result}
|
||||
|
||||
def test_get_user_valid_id_returns_user():
|
||||
"""Test getting user with valid ID returns User object."""
|
||||
pass
|
||||
|
||||
def test_get_user_invalid_id_raises_validation_error():
|
||||
"""Test getting user with invalid ID raises ValidationError."""
|
||||
pass
|
||||
|
||||
def test_get_user_nonexistent_id_raises_not_found_error():
|
||||
"""Test getting non-existent user raises UserNotFoundError."""
|
||||
pass
|
||||
```
|
||||
|
||||
## References
|
||||
|
||||
- [Full Style Guide](docs/STYLE_GUIDE.md)
|
||||
- [API Standards](docs/API_STANDARDS.md)
|
||||
- [Database Conventions](docs/DATABASE.md)
|
||||
```
|
||||
|
||||
### 改进要点
|
||||
|
||||
1. ✅ 只包含项目特定信息: 不教 Python 基础
|
||||
2. ✅ 简洁明了: 200 行 vs 原来的 500+ 行
|
||||
3. ✅ 实用的规则: 直接可应用的约定
|
||||
4. ✅ 清晰的示例: Do/Don't 对比
|
||||
5. ✅ 引用详细文档: 用链接而非全部内容
|
||||
|
||||
---
|
||||
|
||||
## 示例 3: 缺少上下文的 Skill
|
||||
|
||||
### ❌ 不好的版本
|
||||
|
||||
```markdown
|
||||
---
|
||||
name: deployment
|
||||
description: Deploys code
|
||||
---
|
||||
|
||||
# Deployment
|
||||
|
||||
## Steps
|
||||
|
||||
1. Build the code
|
||||
2. Run tests
|
||||
3. Deploy to server
|
||||
4. Verify deployment
|
||||
```
|
||||
|
||||
### 问题分析
|
||||
|
||||
| 问题 | 说明 | 影响 |
|
||||
|------|------|------|
|
||||
| **无具体命令** | 没说明如何 build, test, deploy | Claude 无法执行 |
|
||||
| **无环境区分** | 开发、测试、生产部署可能不同 | 可能部署到错误环境 |
|
||||
| **无错误处理** | 没说明出错时怎么办 | 失败时不知如何恢复 |
|
||||
| **无验证标准** | "Verify" 太模糊 | 不知道检查什么 |
|
||||
|
||||
### ✅ 改进版本
|
||||
|
||||
```markdown
|
||||
---
|
||||
name: deploy-to-production
|
||||
description: Deploys application to production environment on AWS. Activates when user asks to deploy to prod or mentions production deployment. Includes pre-flight checks, blue-green deployment, and rollback procedures.
|
||||
---
|
||||
|
||||
# Production Deployment
|
||||
|
||||
Safely deploy application to production with zero downtime.
|
||||
|
||||
## When to Use This Skill
|
||||
|
||||
- User asks to "deploy to production" or "push to prod"
|
||||
- User mentions "production deployment", "go live"
|
||||
- User needs to rollback a deployment
|
||||
|
||||
## Prerequisites
|
||||
|
||||
Before deployment, verify:
|
||||
|
||||
```bash
|
||||
# 1. On main branch
|
||||
git branch --show-current # Must be "main"
|
||||
|
||||
# 2. All tests pass
|
||||
npm test # Exit code must be 0
|
||||
|
||||
# 3. Build succeeds
|
||||
npm run build # Must complete without errors
|
||||
|
||||
# 4. No uncommitted changes
|
||||
git status # Must show "nothing to commit"
|
||||
|
||||
# 5. Latest code pulled
|
||||
git pull origin main # Must be up to date
|
||||
```
|
||||
|
||||
If any prerequisite fails, **stop** and fix the issue.
|
||||
|
||||
## Deployment Process
|
||||
|
||||
### Step 1: Pre-flight Checks
|
||||
|
||||
```bash
|
||||
# Run deployment readiness script
|
||||
./scripts/preflight-check.sh
|
||||
|
||||
# Expected output:
|
||||
# ✓ Tests passed
|
||||
# ✓ Build succeeded
|
||||
# ✓ Environment variables configured
|
||||
# ✓ Database migrations ready
|
||||
# ✓ Ready to deploy
|
||||
```
|
||||
|
||||
### Step 2: Database Migrations (if needed)
|
||||
|
||||
```bash
|
||||
# Connect to production database
|
||||
aws rds describe-db-instances --db-instance-identifier prod-db
|
||||
|
||||
# Backup before migration
|
||||
./scripts/backup-database.sh prod
|
||||
|
||||
# Run migrations
|
||||
NODE_ENV=production npm run migrate
|
||||
|
||||
# Verify migration succeeded
|
||||
npm run migrate:status
|
||||
```
|
||||
|
||||
### Step 3: Blue-Green Deployment
|
||||
|
||||
```bash
|
||||
# Deploy to green environment (inactive)
|
||||
aws elasticbeanstalk create-environment \
|
||||
--application-name myapp \
|
||||
--environment-name myapp-prod-green \
|
||||
--solution-stack-name "64bit Amazon Linux 2 v5.x.x running Node.js 18"
|
||||
|
||||
# Wait for green environment to be healthy
|
||||
aws elasticbeanstalk wait environment-updated \
|
||||
--environment-name myapp-prod-green
|
||||
|
||||
# Check green environment health
|
||||
curl https://myapp-prod-green.aws.com/health
|
||||
# Expected: {"status": "healthy"}
|
||||
```
|
||||
|
||||
### Step 4: Smoke Tests
|
||||
|
||||
```bash
|
||||
# Run smoke tests against green environment
|
||||
BASE_URL=https://myapp-prod-green.aws.com npm run test:smoke
|
||||
|
||||
# Tests must include:
|
||||
# - Health check endpoint
|
||||
# - Authentication flow
|
||||
# - Critical API endpoints
|
||||
# - Database connectivity
|
||||
```
|
||||
|
||||
### Step 5: Switch Traffic
|
||||
|
||||
```bash
|
||||
# Swap URLs (blue becomes green, green becomes blue)
|
||||
aws elasticbeanstalk swap-environment-cnames \
|
||||
--source-environment-name myapp-prod-blue \
|
||||
--destination-environment-name myapp-prod-green
|
||||
|
||||
# Wait 5 minutes for DNS propagation
|
||||
echo "Waiting for DNS propagation..."
|
||||
sleep 300
|
||||
|
||||
# Verify production URL serves new version
|
||||
curl https://myapp.com/version
|
||||
# Expected: {"version": "1.2.3"} (new version)
|
||||
```
|
||||
|
||||
### Step 6: Monitor
|
||||
|
||||
```bash
|
||||
# Monitor error rates for 15 minutes
|
||||
aws cloudwatch get-metric-statistics \
|
||||
--namespace AWS/ELB \
|
||||
--metric-name HTTPCode_Backend_5XX \
|
||||
--start-time $(date -u -d '15 minutes ago' +%Y-%m-%dT%H:%M:%S) \
|
||||
--end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
|
||||
--period 300 \
|
||||
--statistics Sum
|
||||
|
||||
# Error rate must be < 1%
|
||||
```
|
||||
|
||||
If error rate exceeds 1%:
|
||||
- **Rollback immediately** (see Rollback section)
|
||||
- Investigate issue
|
||||
- Fix and redeploy
|
||||
|
||||
### Step 7: Cleanup
|
||||
|
||||
```bash
|
||||
# After 24 hours, if no issues:
|
||||
# Terminate old blue environment
|
||||
aws elasticbeanstalk terminate-environment \
|
||||
--environment-name myapp-prod-blue
|
||||
```
|
||||
|
||||
## Rollback Procedure
|
||||
|
||||
If deployment fails:
|
||||
|
||||
```bash
|
||||
# 1. Swap back to previous version
|
||||
aws elasticbeanstalk swap-environment-cnames \
|
||||
--source-environment-name myapp-prod-green \
|
||||
--destination-environment-name myapp-prod-blue
|
||||
|
||||
# 2. Verify old version is serving
|
||||
curl https://myapp.com/version
|
||||
# Expected: {"version": "1.2.2"} (old version)
|
||||
|
||||
# 3. Rollback database migrations (if ran)
|
||||
NODE_ENV=production npm run migrate:rollback
|
||||
|
||||
# 4. Notify team
|
||||
./scripts/notify-rollback.sh "Deployment rolled back due to [reason]"
|
||||
```
|
||||
|
||||
## Example Deployment
|
||||
|
||||
**User Request**: "Deploy v1.2.3 to production"
|
||||
|
||||
**Execution Log**:
|
||||
|
||||
```
|
||||
[14:00:00] Starting deployment of v1.2.3 to production
|
||||
[14:00:05] ✓ Pre-flight checks passed
|
||||
[14:00:10] ✓ Database backup completed
|
||||
[14:00:30] ✓ Database migrations applied (3 migrations)
|
||||
[14:01:00] → Creating green environment
|
||||
[14:05:00] ✓ Green environment healthy
|
||||
[14:05:30] ✓ Smoke tests passed (12/12)
|
||||
[14:06:00] → Switching traffic to green environment
|
||||
[14:11:00] ✓ DNS propagated
|
||||
[14:11:05] ✓ Production serving v1.2.3
|
||||
[14:11:10] → Monitoring for 15 minutes
|
||||
[14:26:10] ✓ Error rate: 0.05% (within threshold)
|
||||
[14:26:15] ✓ Deployment successful
|
||||
[14:26:20] → Old environment will be terminated in 24h
|
||||
|
||||
Deployment completed successfully in 26 minutes
|
||||
```
|
||||
|
||||
## References
|
||||
|
||||
- [AWS Deployment Guide](docs/AWS_DEPLOYMENT.md)
|
||||
- [Runbook](docs/RUNBOOK.md)
|
||||
- [On-Call Procedures](docs/ONCALL.md)
|
||||
```
|
||||
|
||||
### 改进要点
|
||||
|
||||
1. ✅ 具体命令: 每个步骤都有可执行的命令
|
||||
2. ✅ 环境明确: 专注于生产环境部署
|
||||
3. ✅ 验证标准: 说明检查什么和预期结果
|
||||
4. ✅ 错误处理: 包含完整的回滚流程
|
||||
5. ✅ 实际输出: 展示命令的预期输出
|
||||
6. ✅ 监控指标: 定义具体的成功标准
|
||||
|
||||
---
|
||||
|
||||
## 常见错误总结
|
||||
|
||||
### 1. 命名和描述问题
|
||||
|
||||
| 错误 | 示例 | 改进 |
|
||||
|------|------|------|
|
||||
| 过于泛化 | `name: helper` | `name: python-type-hints` |
|
||||
| 缺少关键词 | `description: Helps with code` | `description: Adds type hints to Python using mypy` |
|
||||
| 使用第一人称 | `description: I help you...` | `description: Adds type hints...` |
|
||||
|
||||
### 2. 内容问题
|
||||
|
||||
| 错误 | 说明 | 改进 |
|
||||
|------|------|------|
|
||||
| 包含通用知识 | 教 Python 基础语法 | 只包含项目特定规范 |
|
||||
| 过于抽象 | "分析代码并提供建议" | "检查函数长度、变量命名、重复代码" |
|
||||
| 缺少示例 | 只有文字描述 | 包含输入-输出示例 |
|
||||
|
||||
### 3. 结构问题
|
||||
|
||||
| 错误 | 说明 | 改进 |
|
||||
|------|------|------|
|
||||
| 无层次结构 | 所有内容混在一起 | 使用标题、列表、代码块组织 |
|
||||
| 缺少"When to Use" | 不知道何时激活 | 列出 3-5 个触发场景 |
|
||||
| 无验证步骤 | 不知道如何确认成功 | 说明检查项和预期结果 |
|
||||
|
||||
### 4. 自由度问题
|
||||
|
||||
| 错误 | 说明 | 改进 |
|
||||
|------|------|------|
|
||||
| 创意任务低自由度 | 为架构设计提供分步指令 | 提供指导原则和考虑因素 |
|
||||
| 危险任务高自由度 | 生产部署没有具体步骤 | 提供详细的检查清单 |
|
||||
| 不匹配任务类型 | 代码生成用教程式内容 | 提供模板和实际示例 |
|
||||
|
||||
---
|
||||
|
||||
## 快速检查清单
|
||||
|
||||
在发布 Skill 之前,问自己:
|
||||
|
||||
### 基础检查
|
||||
|
||||
- [ ] Name 是否具体且描述性强?
|
||||
- [ ] Description 包含触发关键词和场景?
|
||||
- [ ] 有明确的"When to Use"章节?
|
||||
- [ ] 内容只包含 Claude 不知道的信息?
|
||||
|
||||
### 内容检查
|
||||
|
||||
- [ ] 是否有实际的代码示例?
|
||||
- [ ] 步骤是否具体可执行?
|
||||
- [ ] 是否说明了如何验证成功?
|
||||
- [ ] 是否包含错误处理指导?
|
||||
|
||||
### 结构检查
|
||||
|
||||
- [ ] 内容组织清晰(使用标题、列表)?
|
||||
- [ ] 自由度设定合适(匹配任务类型)?
|
||||
- [ ] 长度合适(200-500行,或拆分子文件)?
|
||||
- [ ] 包含 Do/Don't 最佳实践?
|
||||
|
||||
如果有任何一项答"否",参考本文档的改进建议进行修改。
|
||||
|
|
@ -0,0 +1,908 @@
|
|||
# 好的 Skill 示例
|
||||
|
||||
本文档展示几个编写良好的 SKILL.md 示例,说明最佳实践的实际应用。
|
||||
|
||||
---
|
||||
|
||||
## 示例 1: 数据库迁移 Skill (高质量基础 Skill)
|
||||
|
||||
```markdown
|
||||
---
|
||||
name: database-migration
|
||||
description: Manages database schema migrations using Alembic for SQLAlchemy projects. Activates when user asks to create migrations, upgrade/downgrade database, or mentions Alembic. Handles both development and production scenarios with safety checks.
|
||||
---
|
||||
|
||||
# Database Migration Skill
|
||||
|
||||
Automates database schema migration management using Alembic.
|
||||
|
||||
## When to Use This Skill
|
||||
|
||||
- User asks to "create migration", "update database schema", or "rollback migration"
|
||||
- User mentions "Alembic", "database migration", or "schema change"
|
||||
- User is working in a Python project with SQLAlchemy models
|
||||
- User needs to apply or revert database changes
|
||||
|
||||
## Quick Start
|
||||
|
||||
Create a new migration:
|
||||
```bash
|
||||
alembic revision --autogenerate -m "Description of changes"
|
||||
```
|
||||
|
||||
Apply migrations:
|
||||
```bash
|
||||
alembic upgrade head
|
||||
```
|
||||
|
||||
## How It Works
|
||||
|
||||
### Creating Migrations
|
||||
|
||||
1. **Detect model changes**
|
||||
- Scan SQLAlchemy models in `models/` directory
|
||||
- Compare with current database schema
|
||||
- Identify additions, modifications, deletions
|
||||
|
||||
2. **Generate migration script**
|
||||
- Run `alembic revision --autogenerate`
|
||||
- Review generated script for accuracy
|
||||
- Edit if necessary (Alembic can't auto-detect everything)
|
||||
|
||||
3. **Verify migration**
|
||||
- Check upgrade() function is correct
|
||||
- Ensure downgrade() function reverses changes
|
||||
- Test on development database first
|
||||
|
||||
### Applying Migrations
|
||||
|
||||
1. **Safety checks**
|
||||
- Backup database (production only)
|
||||
- Verify no pending migrations
|
||||
- Check database connectivity
|
||||
|
||||
2. **Execute migration**
|
||||
- Run `alembic upgrade head`
|
||||
- Monitor for errors
|
||||
- Verify schema matches expected state
|
||||
|
||||
3. **Post-migration validation**
|
||||
- Run application tests
|
||||
- Check data integrity
|
||||
- Confirm application starts successfully
|
||||
|
||||
## Examples
|
||||
|
||||
### Example 1: Add New Column
|
||||
|
||||
**User Request**: "Add an email column to the users table"
|
||||
|
||||
**Step 1**: Update the model
|
||||
```python
|
||||
# models/user.py
|
||||
class User(Base):
|
||||
__tablename__ = 'users'
|
||||
id = Column(Integer, primary_key=True)
|
||||
username = Column(String(50), nullable=False)
|
||||
email = Column(String(120), nullable=True) # ← New field
|
||||
```
|
||||
|
||||
**Step 2**: Generate migration
|
||||
```bash
|
||||
alembic revision --autogenerate -m "Add email column to users table"
|
||||
```
|
||||
|
||||
**Generated migration** (alembic/versions/abc123_add_email.py):
|
||||
```python
|
||||
def upgrade():
|
||||
op.add_column('users', sa.Column('email', sa.String(120), nullable=True))
|
||||
|
||||
def downgrade():
|
||||
op.drop_column('users', 'email')
|
||||
```
|
||||
|
||||
**Step 3**: Review and apply
|
||||
```bash
|
||||
# Review the migration file
|
||||
cat alembic/versions/abc123_add_email.py
|
||||
|
||||
# Apply migration
|
||||
alembic upgrade head
|
||||
```
|
||||
|
||||
**Output**:
|
||||
```
|
||||
INFO [alembic.runtime.migration] Running upgrade xyz789 -> abc123, Add email column to users table
|
||||
```
|
||||
|
||||
### Example 2: Complex Migration with Data Changes
|
||||
|
||||
**User Request**: "Split the 'name' column into 'first_name' and 'last_name'"
|
||||
|
||||
**Step 1**: Create empty migration (can't auto-generate data changes)
|
||||
```bash
|
||||
alembic revision -m "Split name into first_name and last_name"
|
||||
```
|
||||
|
||||
**Step 2**: Write custom migration
|
||||
```python
|
||||
def upgrade():
|
||||
# Add new columns
|
||||
op.add_column('users', sa.Column('first_name', sa.String(50)))
|
||||
op.add_column('users', sa.Column('last_name', sa.String(50)))
|
||||
|
||||
# Migrate existing data
|
||||
connection = op.get_bind()
|
||||
users = connection.execute("SELECT id, name FROM users")
|
||||
for user_id, name in users:
|
||||
parts = name.split(' ', 1)
|
||||
first = parts[0]
|
||||
last = parts[1] if len(parts) > 1 else ''
|
||||
connection.execute(
|
||||
"UPDATE users SET first_name = %s, last_name = %s WHERE id = %s",
|
||||
(first, last, user_id)
|
||||
)
|
||||
|
||||
# Make new columns non-nullable and drop old column
|
||||
op.alter_column('users', 'first_name', nullable=False)
|
||||
op.alter_column('users', 'last_name', nullable=False)
|
||||
op.drop_column('users', 'name')
|
||||
|
||||
def downgrade():
|
||||
# Add back name column
|
||||
op.add_column('users', sa.Column('name', sa.String(100)))
|
||||
|
||||
# Restore data
|
||||
connection = op.get_bind()
|
||||
users = connection.execute("SELECT id, first_name, last_name FROM users")
|
||||
for user_id, first, last in users:
|
||||
full_name = f"{first} {last}".strip()
|
||||
connection.execute(
|
||||
"UPDATE users SET name = %s WHERE id = %s",
|
||||
(full_name, user_id)
|
||||
)
|
||||
|
||||
op.alter_column('users', 'name', nullable=False)
|
||||
op.drop_column('users', 'first_name')
|
||||
op.drop_column('users', 'last_name')
|
||||
```
|
||||
|
||||
**Step 3**: Test thoroughly
|
||||
```bash
|
||||
# Apply migration
|
||||
alembic upgrade head
|
||||
|
||||
# Verify data
|
||||
python -c "from models import User; print(User.query.first().first_name)"
|
||||
|
||||
# Test rollback
|
||||
alembic downgrade -1
|
||||
python -c "from models import User; print(User.query.first().name)"
|
||||
|
||||
# Reapply
|
||||
alembic upgrade head
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
### Do
|
||||
|
||||
- ✅ Always review auto-generated migrations before applying
|
||||
- ✅ Test migrations on development database first
|
||||
- ✅ Write reversible downgrade() functions
|
||||
- ✅ Backup production databases before major migrations
|
||||
- ✅ Use meaningful migration messages
|
||||
|
||||
### Don't
|
||||
|
||||
- ❌ Trust auto-generated migrations blindly
|
||||
- ❌ Skip downgrade() implementation
|
||||
- ❌ Apply untested migrations to production
|
||||
- ❌ Modify existing migration files after they're committed
|
||||
- ❌ Use raw SQL without bind parameters
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### "Target database is not up to date"
|
||||
|
||||
**Problem**: Someone else applied migrations you don't have locally
|
||||
|
||||
**Solution**:
|
||||
```bash
|
||||
git pull # Get latest migrations
|
||||
alembic upgrade head # Apply them locally
|
||||
```
|
||||
|
||||
### "Can't locate revision identified by 'xyz'"
|
||||
|
||||
**Problem**: Migration file deleted or branch conflict
|
||||
|
||||
**Solution**:
|
||||
1. Check if migration file exists in `alembic/versions/`
|
||||
2. If missing, restore from git history
|
||||
3. If branch conflict, merge migration branches:
|
||||
```bash
|
||||
alembic merge -m "Merge migration branches" head1 head2
|
||||
```
|
||||
|
||||
### Migration fails mid-execution
|
||||
|
||||
**Problem**: Error occurred during migration
|
||||
|
||||
**Solution**:
|
||||
1. Check error message for specifics
|
||||
2. Manually fix database to consistent state if needed
|
||||
3. Update migration script to fix the issue
|
||||
4. Mark migration as completed or retry:
|
||||
```bash
|
||||
# Mark as done without running
|
||||
alembic stamp head
|
||||
|
||||
# Or fix and retry
|
||||
alembic upgrade head
|
||||
```
|
||||
|
||||
## Configuration
|
||||
|
||||
### Project Structure
|
||||
```
|
||||
project/
|
||||
├── alembic/
|
||||
│ ├── versions/ # Migration scripts
|
||||
│ ├── env.py # Alembic environment
|
||||
│ └── script.py.mako # Migration template
|
||||
├── alembic.ini # Alembic configuration
|
||||
└── models/ # SQLAlchemy models
|
||||
├── __init__.py
|
||||
├── user.py
|
||||
└── post.py
|
||||
```
|
||||
|
||||
### alembic.ini Configuration
|
||||
```ini
|
||||
[alembic]
|
||||
script_location = alembic
|
||||
sqlalchemy.url = driver://user:pass@localhost/dbname
|
||||
|
||||
[loggers]
|
||||
keys = root,sqlalchemy,alembic
|
||||
|
||||
[logger_alembic]
|
||||
level = INFO
|
||||
handlers = console
|
||||
qualname = alembic
|
||||
```
|
||||
|
||||
## References
|
||||
|
||||
- [Alembic Documentation](https://alembic.sqlalchemy.org/)
|
||||
- [SQLAlchemy Documentation](https://docs.sqlalchemy.org/)
|
||||
- [Project Migration Guidelines](docs/database-migrations.md)
|
||||
```
|
||||
|
||||
### 为什么这是好的 Skill?
|
||||
|
||||
1. ✅ **清晰的 description**: 包含触发关键词 ("Alembic", "create migrations") 和场景 ("SQLAlchemy projects")
|
||||
2. ✅ **具体的触发条件**: "When to Use" 列出 4 个明确场景
|
||||
3. ✅ **分步工作流**: 每个操作都有清晰的 1-2-3 步骤
|
||||
4. ✅ **实际示例**: 包含简单和复杂两个示例,有完整代码
|
||||
5. ✅ **最佳实践**: Do/Don't 清单易于遵循
|
||||
6. ✅ **故障排除**: 覆盖 3 个常见问题及解决方案
|
||||
7. ✅ **项目特定信息**: 包含配置和目录结构
|
||||
|
||||
---
|
||||
|
||||
## 示例 2: API 文档生成 Skill (优秀的工作流 Skill)
|
||||
|
||||
```markdown
|
||||
---
|
||||
name: api-documentation-generation
|
||||
description: Generates OpenAPI/Swagger documentation from FastAPI or Flask applications. Activates when user asks to create API docs, generate OpenAPI spec, or needs to document REST endpoints. Supports automatic extraction and custom annotations.
|
||||
---
|
||||
|
||||
# API Documentation Generation Skill
|
||||
|
||||
Automates creation of comprehensive API documentation from Python web applications.
|
||||
|
||||
## When to Use This Skill
|
||||
|
||||
- User asks to "generate API docs" or "create OpenAPI spec"
|
||||
- User mentions "Swagger", "OpenAPI", "API documentation"
|
||||
- User has a FastAPI or Flask application
|
||||
- User needs to document REST API endpoints
|
||||
|
||||
## Workflow
|
||||
|
||||
### Phase 1: Discovery
|
||||
|
||||
1. **Identify framework**
|
||||
- Check for FastAPI: `from fastapi import FastAPI` in codebase
|
||||
- Check for Flask: `from flask import Flask` in codebase
|
||||
- Check for Flask-RESTful: `from flask_restful import Resource`
|
||||
|
||||
2. **Locate API definitions**
|
||||
- Scan for route decorators: `@app.get()`, `@app.post()`, `@app.route()`
|
||||
- Find API routers and blueprints
|
||||
- Identify request/response models
|
||||
|
||||
3. **Extract metadata**
|
||||
- Endpoint paths and HTTP methods
|
||||
- Request parameters (path, query, body)
|
||||
- Response schemas and status codes
|
||||
- Authentication requirements
|
||||
|
||||
### Phase 2: Enhancement
|
||||
|
||||
1. **Review docstrings**
|
||||
- Check if endpoints have docstrings
|
||||
- Verify docstrings follow format (summary, description, params, returns)
|
||||
- Flag missing documentation
|
||||
|
||||
2. **Add missing docs** (if user approves)
|
||||
- Generate docstrings based on type hints
|
||||
- Infer descriptions from parameter names
|
||||
- Add example requests/responses
|
||||
|
||||
3. **Validate schemas**
|
||||
- Ensure Pydantic models are well-documented
|
||||
- Check for missing field descriptions
|
||||
- Verify example values are provided
|
||||
|
||||
### Phase 3: Generation
|
||||
|
||||
1. **Generate OpenAPI spec**
|
||||
```bash
|
||||
# For FastAPI
|
||||
python -c "from main import app; import json; print(json.dumps(app.openapi()))" > openapi.json
|
||||
|
||||
# For Flask with flasgger
|
||||
python scripts/generate_swagger.py > swagger.json
|
||||
```
|
||||
|
||||
2. **Create Swagger UI**
|
||||
- Copy Swagger UI static files to `docs/api/`
|
||||
- Configure to load generated spec
|
||||
- Test in browser: `http://localhost:8000/docs`
|
||||
|
||||
3. **Generate Markdown docs**
|
||||
- Use `openapi-to-md` to create human-readable docs
|
||||
- Organize by tags/resource groups
|
||||
- Add navigation and table of contents
|
||||
|
||||
### Phase 4: Validation
|
||||
|
||||
1. **Check completeness**
|
||||
- All endpoints documented?
|
||||
- All parameters described?
|
||||
- Example requests provided?
|
||||
|
||||
2. **Validate spec**
|
||||
```bash
|
||||
openapi-spec-validator openapi.json
|
||||
```
|
||||
|
||||
3. **Test interactive docs**
|
||||
- Try sample requests in Swagger UI
|
||||
- Verify authentication flows work
|
||||
- Check response schemas match actual responses
|
||||
|
||||
## Examples
|
||||
|
||||
### Example 1: FastAPI Application
|
||||
|
||||
**User Request**: "Generate API documentation for my FastAPI app"
|
||||
|
||||
**Discovery**:
|
||||
```
|
||||
Found FastAPI application in: main.py
|
||||
Endpoints discovered:
|
||||
- GET /users (List users)
|
||||
- POST /users (Create user)
|
||||
- GET /users/{id} (Get user by ID)
|
||||
- PUT /users/{id} (Update user)
|
||||
- DELETE /users/{id} (Delete user)
|
||||
```
|
||||
|
||||
**Enhancement**:
|
||||
```
|
||||
Checking documentation coverage...
|
||||
✓ GET /users - Well documented
|
||||
✗ POST /users - Missing response examples
|
||||
✗ GET /users/{id} - No description
|
||||
✗ PUT /users/{id} - No description
|
||||
✗ DELETE /users/{id} - No description
|
||||
|
||||
Would you like me to add missing documentation?
|
||||
```
|
||||
|
||||
**User**: Yes
|
||||
|
||||
**Adding docs**:
|
||||
```python
|
||||
@app.get("/users/{id}", summary="Get user by ID")
|
||||
async def get_user(id: int):
|
||||
"""
|
||||
Retrieve a single user by their unique ID.
|
||||
|
||||
Args:
|
||||
id: The unique identifier of the user
|
||||
|
||||
Returns:
|
||||
User object with id, name, email, and created_at fields
|
||||
|
||||
Raises:
|
||||
404: User not found
|
||||
"""
|
||||
# ... implementation
|
||||
```
|
||||
|
||||
**Generation**:
|
||||
```bash
|
||||
Generating OpenAPI specification...
|
||||
✓ openapi.json created (15.2 KB)
|
||||
✓ Swagger UI deployed to docs/api/
|
||||
✓ Markdown docs created in docs/api/endpoints/
|
||||
|
||||
View documentation at: http://localhost:8000/docs
|
||||
```
|
||||
|
||||
### Example 2: Flask Application with Custom Schemas
|
||||
|
||||
**User Request**: "Document my Flask API and include request/response examples"
|
||||
|
||||
**Discovery**:
|
||||
```
|
||||
Found Flask application with Flask-RESTful
|
||||
Endpoints:
|
||||
- GET /api/products
|
||||
- POST /api/products
|
||||
- GET /api/products/<id>
|
||||
- PATCH /api/products/<id>
|
||||
- DELETE /api/products/<id>
|
||||
```
|
||||
|
||||
**Custom Examples Added**:
|
||||
```python
|
||||
class ProductList(Resource):
|
||||
def get(self):
|
||||
"""
|
||||
Get all products
|
||||
|
||||
Example Response:
|
||||
```json
|
||||
{
|
||||
"products": [
|
||||
{
|
||||
"id": 1,
|
||||
"name": "Widget",
|
||||
"price": 29.99,
|
||||
"stock": 100
|
||||
}
|
||||
],
|
||||
"total": 1
|
||||
}
|
||||
```
|
||||
"""
|
||||
pass
|
||||
|
||||
def post(self):
|
||||
"""
|
||||
Create a new product
|
||||
|
||||
Example Request:
|
||||
```json
|
||||
{
|
||||
"name": "New Widget",
|
||||
"price": 39.99,
|
||||
"stock": 50
|
||||
}
|
||||
```
|
||||
|
||||
Example Response:
|
||||
```json
|
||||
{
|
||||
"id": 2,
|
||||
"name": "New Widget",
|
||||
"price": 39.99,
|
||||
"stock": 50,
|
||||
"created_at": "2025-01-31T12:00:00Z"
|
||||
}
|
||||
```
|
||||
"""
|
||||
pass
|
||||
```
|
||||
|
||||
**Result**:
|
||||
```
|
||||
Generated documentation:
|
||||
- openapi.json (with examples)
|
||||
- Swagger UI at /api/docs
|
||||
- Postman collection at docs/api/postman_collection.json
|
||||
- Markdown API reference at docs/api/README.md
|
||||
|
||||
All endpoints now include:
|
||||
✓ Request examples
|
||||
✓ Response examples
|
||||
✓ Error codes
|
||||
✓ Authentication requirements
|
||||
```
|
||||
|
||||
## Configuration
|
||||
|
||||
### FastAPI Projects
|
||||
|
||||
No additional configuration needed! FastAPI auto-generates OpenAPI docs.
|
||||
|
||||
Access at:
|
||||
- Swagger UI: `http://localhost:8000/docs`
|
||||
- ReDoc: `http://localhost:8000/redoc`
|
||||
- OpenAPI JSON: `http://localhost:8000/openapi.json`
|
||||
|
||||
### Flask Projects
|
||||
|
||||
Install flasgger:
|
||||
```bash
|
||||
pip install flasgger
|
||||
```
|
||||
|
||||
Configure in app:
|
||||
```python
|
||||
from flasgger import Swagger
|
||||
|
||||
app = Flask(__name__)
|
||||
swagger = Swagger(app, template={
|
||||
"info": {
|
||||
"title": "My API",
|
||||
"description": "API for managing resources",
|
||||
"version": "1.0.0"
|
||||
}
|
||||
})
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
- ✅ Use type hints - enables automatic schema generation
|
||||
- ✅ Write descriptive docstrings for all endpoints
|
||||
- ✅ Provide example requests and responses
|
||||
- ✅ Document error codes and edge cases
|
||||
- ✅ Keep docs in sync with code (auto-generate when possible)
|
||||
|
||||
## Tools Used
|
||||
|
||||
- **FastAPI**: Built-in OpenAPI support
|
||||
- **flasgger**: Swagger for Flask
|
||||
- **openapi-spec-validator**: Validates OpenAPI specs
|
||||
- **openapi-to-md**: Converts OpenAPI to Markdown
|
||||
|
||||
## References
|
||||
|
||||
- [OpenAPI Specification](https://spec.openapis.org/oas/latest.html)
|
||||
- [FastAPI Documentation](https://fastapi.tiangolo.com/)
|
||||
- [Swagger Documentation](https://swagger.io/docs/)
|
||||
```
|
||||
|
||||
### 为什么这是优秀的工作流 Skill?
|
||||
|
||||
1. ✅ **清晰的工作流阶段**: 4 个阶段 (Discovery, Enhancement, Generation, Validation)
|
||||
2. ✅ **决策点**: Phase 2 询问用户是否添加缺失文档
|
||||
3. ✅ **实际输出示例**: 展示了命令输出和生成的代码
|
||||
4. ✅ **多框架支持**: 处理 FastAPI 和 Flask 两种情况
|
||||
5. ✅ **工具集成**: 列出所需工具及其用途
|
||||
6. ✅ **可执行命令**: 提供完整的命令示例
|
||||
7. ✅ **验证步骤**: Phase 4 确保生成的文档质量
|
||||
|
||||
---
|
||||
|
||||
## 示例 3: 代码审查 Skill (高自由度 Skill)
|
||||
|
||||
```markdown
|
||||
---
|
||||
name: code-review
|
||||
description: Performs comprehensive code reviews focusing on best practices, security, performance, and maintainability. Activates when user asks to review code, check pull request, or mentions code quality. Provides actionable feedback with severity ratings.
|
||||
---
|
||||
|
||||
# Code Review Skill
|
||||
|
||||
Conducts thorough code reviews with focus on quality, security, and best practices.
|
||||
|
||||
## When to Use This Skill
|
||||
|
||||
- User asks to "review my code" or "check this PR"
|
||||
- User mentions "code review", "code quality", or "best practices"
|
||||
- User wants feedback on specific code changes
|
||||
- User needs security or performance analysis
|
||||
|
||||
## Review Criteria
|
||||
|
||||
Code is evaluated across 5 dimensions:
|
||||
|
||||
### 1. Correctness
|
||||
- Logic errors and bugs
|
||||
- Edge case handling
|
||||
- Error handling and validation
|
||||
- Type safety
|
||||
|
||||
### 2. Security
|
||||
- SQL injection vulnerabilities
|
||||
- XSS vulnerabilities
|
||||
- Authentication/authorization issues
|
||||
- Sensitive data exposure
|
||||
- Dependency vulnerabilities
|
||||
|
||||
### 3. Performance
|
||||
- Algorithm efficiency
|
||||
- Database query optimization
|
||||
- Memory leaks
|
||||
- Unnecessary computations
|
||||
- Caching opportunities
|
||||
|
||||
### 4. Maintainability
|
||||
- Code clarity and readability
|
||||
- Function/class size
|
||||
- Code duplication
|
||||
- Naming conventions
|
||||
- Documentation
|
||||
|
||||
### 5. Best Practices
|
||||
- Language-specific idioms
|
||||
- Design patterns
|
||||
- SOLID principles
|
||||
- Testing coverage
|
||||
- Error handling patterns
|
||||
|
||||
## Review Process
|
||||
|
||||
1. **Understand context**
|
||||
- What does this code do?
|
||||
- What problem does it solve?
|
||||
- Are there any constraints or requirements?
|
||||
|
||||
2. **Identify issues**
|
||||
- Scan for common anti-patterns
|
||||
- Check against language best practices
|
||||
- Look for security vulnerabilities
|
||||
- Assess performance implications
|
||||
|
||||
3. **Prioritize feedback**
|
||||
- **Critical**: Security issues, data loss risks, crashes
|
||||
- **High**: Bugs, major performance issues
|
||||
- **Medium**: Code smells, maintainability concerns
|
||||
- **Low**: Style preferences, minor optimizations
|
||||
|
||||
4. **Provide suggestions**
|
||||
- Explain the issue clearly
|
||||
- Show better alternative (code example)
|
||||
- Explain why the alternative is better
|
||||
|
||||
## Example Review
|
||||
|
||||
### Code Submitted
|
||||
|
||||
```python
|
||||
def get_user_data(user_id):
|
||||
conn = sqlite3.connect('users.db')
|
||||
cursor = conn.cursor()
|
||||
query = "SELECT * FROM users WHERE id = " + str(user_id)
|
||||
cursor.execute(query)
|
||||
result = cursor.fetchone()
|
||||
return result
|
||||
```
|
||||
|
||||
### Review Feedback
|
||||
|
||||
**❌ CRITICAL: SQL Injection Vulnerability**
|
||||
|
||||
The code concatenates user input directly into SQL query, allowing SQL injection attacks.
|
||||
|
||||
**Bad**:
|
||||
```python
|
||||
query = "SELECT * FROM users WHERE id = " + str(user_id)
|
||||
```
|
||||
|
||||
**Good**:
|
||||
```python
|
||||
query = "SELECT * FROM users WHERE id = ?"
|
||||
cursor.execute(query, (user_id,))
|
||||
```
|
||||
|
||||
**Why**: Parameterized queries prevent SQL injection by properly escaping user input.
|
||||
|
||||
---
|
||||
|
||||
**⚠️ HIGH: Resource Leak**
|
||||
|
||||
Database connection is never closed, leading to resource exhaustion under load.
|
||||
|
||||
**Bad**:
|
||||
```python
|
||||
conn = sqlite3.connect('users.db')
|
||||
# ... use connection
|
||||
# connection never closed
|
||||
```
|
||||
|
||||
**Good**:
|
||||
```python
|
||||
with sqlite3.connect('users.db') as conn:
|
||||
cursor = conn.cursor()
|
||||
# ... use connection
|
||||
# automatically closed when exiting with block
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
**ℹ️ MEDIUM: Overly Broad SELECT**
|
||||
|
||||
Using `SELECT *` fetches unnecessary data and is fragile to schema changes.
|
||||
|
||||
**Bad**:
|
||||
```python
|
||||
query = "SELECT * FROM users WHERE id = ?"
|
||||
```
|
||||
|
||||
**Good**:
|
||||
```python
|
||||
query = "SELECT id, name, email FROM users WHERE id = ?"
|
||||
```
|
||||
|
||||
**Why**: Explicit column selection is more efficient and makes intent clear.
|
||||
|
||||
---
|
||||
|
||||
**📝 LOW: Missing Error Handling**
|
||||
|
||||
Function doesn't handle case where user doesn't exist.
|
||||
|
||||
**Suggestion**:
|
||||
```python
|
||||
def get_user_data(user_id: int) -> Optional[dict]:
|
||||
"""
|
||||
Retrieve user data by ID.
|
||||
|
||||
Args:
|
||||
user_id: The unique identifier of the user
|
||||
|
||||
Returns:
|
||||
Dictionary with user data, or None if user not found
|
||||
"""
|
||||
with sqlite3.connect('users.db') as conn:
|
||||
cursor = conn.cursor()
|
||||
query = "SELECT id, name, email FROM users WHERE id = ?"
|
||||
cursor.execute(query, (user_id,))
|
||||
result = cursor.fetchone()
|
||||
|
||||
if result is None:
|
||||
return None
|
||||
|
||||
return {
|
||||
'id': result[0],
|
||||
'name': result[1],
|
||||
'email': result[2]
|
||||
}
|
||||
```
|
||||
|
||||
### Revised Code
|
||||
|
||||
After applying all suggestions:
|
||||
|
||||
```python
|
||||
from typing import Optional
|
||||
import sqlite3
|
||||
|
||||
def get_user_data(user_id: int) -> Optional[dict]:
|
||||
"""
|
||||
Retrieve user data by ID.
|
||||
|
||||
Args:
|
||||
user_id: The unique identifier of the user
|
||||
|
||||
Returns:
|
||||
Dictionary with user data, or None if user not found
|
||||
|
||||
Raises:
|
||||
ValueError: If user_id is not a positive integer
|
||||
"""
|
||||
if not isinstance(user_id, int) or user_id <= 0:
|
||||
raise ValueError(f"Invalid user_id: {user_id}")
|
||||
|
||||
with sqlite3.connect('users.db') as conn:
|
||||
conn.row_factory = sqlite3.Row # Enable dict-like access
|
||||
cursor = conn.cursor()
|
||||
|
||||
query = "SELECT id, name, email, created_at FROM users WHERE id = ?"
|
||||
cursor.execute(query, (user_id,))
|
||||
result = cursor.fetchone()
|
||||
|
||||
if result is None:
|
||||
return None
|
||||
|
||||
return dict(result) # Convert Row to dict
|
||||
```
|
||||
|
||||
### Summary
|
||||
|
||||
**Issues Found**: 4
|
||||
- 1 Critical (SQL Injection)
|
||||
- 1 High (Resource Leak)
|
||||
- 1 Medium (Inefficient Query)
|
||||
- 1 Low (Missing Error Handling)
|
||||
|
||||
**All Issues Addressed**: ✓
|
||||
|
||||
## Best Practices
|
||||
|
||||
### When Reviewing
|
||||
|
||||
- 🎯 Focus on impact - prioritize critical issues
|
||||
- 📝 Be specific - provide code examples
|
||||
- 🎓 Be educational - explain why, not just what
|
||||
- 🤝 Be constructive - suggest improvements, don't just criticize
|
||||
- ⚖️ Be balanced - acknowledge good practices too
|
||||
|
||||
### What to Look For
|
||||
|
||||
**Python-specific**:
|
||||
- Use of `with` for resource management
|
||||
- Type hints on function signatures
|
||||
- Proper exception handling
|
||||
- List comprehensions vs loops
|
||||
- Dictionary vs if-elif chains
|
||||
|
||||
**General**:
|
||||
- DRY principle violations
|
||||
- Magic numbers
|
||||
- Long functions (>50 lines)
|
||||
- Deep nesting (>3 levels)
|
||||
- Missing tests for critical paths
|
||||
|
||||
## Automated Tools
|
||||
|
||||
Complement manual review with automated tools:
|
||||
|
||||
```bash
|
||||
# Linting
|
||||
pylint mycode.py
|
||||
flake8 mycode.py
|
||||
|
||||
# Type checking
|
||||
mypy mycode.py
|
||||
|
||||
# Security scanning
|
||||
bandit -r .
|
||||
safety check
|
||||
|
||||
# Code complexity
|
||||
radon cc mycode.py -a
|
||||
```
|
||||
|
||||
## References
|
||||
|
||||
- [OWASP Top 10](https://owasp.org/www-project-top-ten/)
|
||||
- [Python Best Practices](https://docs.python-guide.org/)
|
||||
- [Clean Code Principles](https://www.amazon.com/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882)
|
||||
```
|
||||
|
||||
### 为什么这是高自由度 Skill?
|
||||
|
||||
1. ✅ **指导原则而非严格步骤**: 提供评审维度,不限定具体流程
|
||||
2. ✅ **情境适应**: 根据代码类型和问题严重性调整重点
|
||||
3. ✅ **教育性**: 解释"为什么",帮助 Claude 做出判断
|
||||
4. ✅ **优先级框架**: 定义严重性级别,让 Claude 自行判断
|
||||
5. ✅ **完整示例**: 展示从问题识别到解决的完整流程
|
||||
6. ✅ **工具集成**: 提到自动化工具,但不强制使用
|
||||
|
||||
---
|
||||
|
||||
## 总结: 好 Skill 的共同特征
|
||||
|
||||
| 特征 | 说明 | 示例位置 |
|
||||
|------|------|---------|
|
||||
| **清晰触发** | description 包含关键词和场景 | 所有 frontmatter |
|
||||
| **结构化内容** | 使用标题、列表、代码块组织信息 | 所有示例 |
|
||||
| **实际示例** | 真实代码,不是伪代码 | Example sections |
|
||||
| **决策指导** | 告诉 Claude 何时做什么 | 工作流 Skill 的 Phase 2 |
|
||||
| **可执行命令** | 提供完整的命令,不是抽象描述 | 迁移 Skill 的命令 |
|
||||
| **错误处理** | 包含故障排除章节 | 所有 Troubleshooting |
|
||||
| **最佳实践** | Do/Don't 清单 | 所有 Best Practices |
|
||||
| **工具引用** | 说明使用哪些工具及如何使用 | API 文档 Skill |
|
||||
| **验证步骤** | 说明如何确认操作成功 | 迁移 Skill 的验证 |
|
||||
| **合适的自由度** | 根据任务特性选择指导程度 | 代码审查 Skill |
|
||||
|
|
@ -0,0 +1,95 @@
|
|||
---
|
||||
name: your-skill-name
|
||||
description: Brief description of what this skill does and when to activate it. Include trigger keywords and scenarios where this skill should be used.
|
||||
---
|
||||
|
||||
# Your Skill Title
|
||||
|
||||
> Brief one-line summary of what this skill accomplishes
|
||||
|
||||
## When to Use This Skill
|
||||
|
||||
- User asks to [specific action or task]
|
||||
- User mentions keywords like "[keyword1]", "[keyword2]", or "[keyword3]"
|
||||
- User is working with [specific technology/framework/tool]
|
||||
- User needs to [specific outcome or goal]
|
||||
|
||||
## Quick Start
|
||||
|
||||
```bash
|
||||
# Basic usage example
|
||||
command-to-run --option value
|
||||
```
|
||||
|
||||
## How It Works
|
||||
|
||||
1. **Step 1**: Brief description of first step
|
||||
- Detail about what happens
|
||||
- Any prerequisites or conditions
|
||||
|
||||
2. **Step 2**: Brief description of second step
|
||||
- Key actions taken
|
||||
- Expected outputs
|
||||
|
||||
3. **Step 3**: Brief description of final step
|
||||
- Validation or verification
|
||||
- Success criteria
|
||||
|
||||
## Examples
|
||||
|
||||
### Example 1: Basic Usage
|
||||
|
||||
**User Request**: "Example of what user might say"
|
||||
|
||||
**Action**: What Claude does in response
|
||||
|
||||
**Output**:
|
||||
```
|
||||
Expected output or result
|
||||
```
|
||||
|
||||
### Example 2: Advanced Usage
|
||||
|
||||
**User Request**: "More complex user request"
|
||||
|
||||
**Action**:
|
||||
1. First action taken
|
||||
2. Second action taken
|
||||
3. Final action
|
||||
|
||||
**Output**:
|
||||
```
|
||||
Expected output showing more complex results
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
- ✅ Do this for best results
|
||||
- ✅ Follow this pattern
|
||||
- ❌ Avoid this common mistake
|
||||
- ❌ Don't do this
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Common Issue 1
|
||||
|
||||
**Problem**: Description of the problem
|
||||
|
||||
**Solution**: How to fix it
|
||||
|
||||
### Common Issue 2
|
||||
|
||||
**Problem**: Description of another problem
|
||||
|
||||
**Solution**: Steps to resolve
|
||||
|
||||
## References
|
||||
|
||||
- [Related Documentation](link-to-docs)
|
||||
- [Official Guide](link-to-guide)
|
||||
- [Additional Resources](link-to-resources)
|
||||
|
||||
---
|
||||
|
||||
**Version**: 1.0
|
||||
**Last Updated**: YYYY-MM-DD
|
||||
|
|
@ -0,0 +1,402 @@
|
|||
---
|
||||
name: your-workflow-skill
|
||||
description: Guides Claude through a multi-step workflow for [specific task]. Activates when user needs to [trigger scenario] or mentions [key terms].
|
||||
---
|
||||
|
||||
# Your Workflow Skill Title
|
||||
|
||||
> Automates a complex multi-step process with decision points and validation
|
||||
|
||||
## When to Use This Skill
|
||||
|
||||
- User needs to execute a multi-step workflow
|
||||
- User asks to "[workflow trigger phrase]"
|
||||
- User is working on [specific type of project or task]
|
||||
- Task requires validation and error handling at each step
|
||||
|
||||
## Workflow Overview
|
||||
|
||||
```
|
||||
┌─────────────┐
|
||||
│ Start │
|
||||
└──────┬──────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ Preparation │
|
||||
│ & Validation │
|
||||
└────────┬────────┘
|
||||
│
|
||||
┌────▼────┐
|
||||
│ Step 1 │
|
||||
└────┬────┘
|
||||
│
|
||||
┌────▼────┐
|
||||
│ Step 2 │──┐
|
||||
└────┬────┘ │ (Loop if needed)
|
||||
│ │
|
||||
└───────┘
|
||||
│
|
||||
┌────▼────┐
|
||||
│ Step 3 │
|
||||
└────┬────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────┐
|
||||
│ Complete │
|
||||
│ & Report │
|
||||
└─────────────┘
|
||||
```
|
||||
|
||||
## Detailed Workflow
|
||||
|
||||
### Preparation Phase
|
||||
|
||||
Before starting the main workflow:
|
||||
|
||||
- [ ] Check prerequisite 1
|
||||
- [ ] Validate prerequisite 2
|
||||
- [ ] Ensure prerequisite 3 is met
|
||||
|
||||
If any prerequisite fails:
|
||||
- Stop execution
|
||||
- Report which prerequisite failed
|
||||
- Provide remediation steps
|
||||
|
||||
### Step 1: [Step Name]
|
||||
|
||||
**Purpose**: What this step accomplishes
|
||||
|
||||
**Actions**:
|
||||
1. Action 1
|
||||
2. Action 2
|
||||
3. Action 3
|
||||
|
||||
**Validation**:
|
||||
- Check condition 1
|
||||
- Verify condition 2
|
||||
|
||||
**On Success**: → Proceed to Step 2
|
||||
**On Failure**: → [Error handling procedure]
|
||||
|
||||
### Step 2: [Step Name]
|
||||
|
||||
**Purpose**: What this step accomplishes
|
||||
|
||||
**Actions**:
|
||||
1. Action 1
|
||||
2. Action 2
|
||||
|
||||
**Decision Point**:
|
||||
- If condition A: → Action X
|
||||
- If condition B: → Action Y
|
||||
- Otherwise: → Default action
|
||||
|
||||
**Validation**:
|
||||
- Verify expected output
|
||||
- Check for errors
|
||||
|
||||
**On Success**: → Proceed to Step 3
|
||||
**On Failure**: → [Error handling procedure]
|
||||
|
||||
### Step 3: [Step Name]
|
||||
|
||||
**Purpose**: Final actions and cleanup
|
||||
|
||||
**Actions**:
|
||||
1. Finalize changes
|
||||
2. Run validation tests
|
||||
3. Generate summary report
|
||||
|
||||
**Success Criteria**:
|
||||
- All tests pass
|
||||
- No errors in logs
|
||||
- Expected artifacts created
|
||||
|
||||
## Examples
|
||||
|
||||
### Example 1: Standard Workflow Execution
|
||||
|
||||
**User Request**: "Run the [workflow name]"
|
||||
|
||||
**Execution**:
|
||||
|
||||
**Preparation Phase** ✓
|
||||
```
|
||||
✓ Prerequisite 1 met
|
||||
✓ Prerequisite 2 validated
|
||||
✓ Ready to begin
|
||||
```
|
||||
|
||||
**Step 1: [Step Name]** ✓
|
||||
```
|
||||
→ Action 1 completed
|
||||
→ Action 2 completed
|
||||
→ Validation passed
|
||||
```
|
||||
|
||||
**Step 2: [Step Name]** ✓
|
||||
```
|
||||
→ Decision: Condition A detected
|
||||
→ Executing Action X
|
||||
→ Validation passed
|
||||
```
|
||||
|
||||
**Step 3: [Step Name]** ✓
|
||||
```
|
||||
→ Finalization complete
|
||||
→ All tests passed
|
||||
→ Summary generated
|
||||
```
|
||||
|
||||
**Result**: Workflow completed successfully
|
||||
|
||||
### Example 2: Workflow with Error Recovery
|
||||
|
||||
**User Request**: "Execute [workflow name]"
|
||||
|
||||
**Execution**:
|
||||
|
||||
**Step 1** ✓
|
||||
```
|
||||
→ Completed successfully
|
||||
```
|
||||
|
||||
**Step 2** ⚠️
|
||||
```
|
||||
→ Action 1 completed
|
||||
→ Action 2 failed: [Error message]
|
||||
```
|
||||
|
||||
**Error Recovery**:
|
||||
1. Identified root cause: [Explanation]
|
||||
2. Applied fix: [Fix description]
|
||||
3. Retrying Step 2...
|
||||
|
||||
**Step 2 (Retry)** ✓
|
||||
```
|
||||
→ Completed after fix
|
||||
```
|
||||
|
||||
**Step 3** ✓
|
||||
```
|
||||
→ Completed successfully
|
||||
```
|
||||
|
||||
**Result**: Workflow completed with 1 retry
|
||||
|
||||
## Error Handling
|
||||
|
||||
### Error Categories
|
||||
|
||||
| Category | Action |
|
||||
|----------|--------|
|
||||
| **Recoverable** | Attempt automatic fix, retry up to 3 times |
|
||||
| **User Input Needed** | Pause workflow, ask user for guidance |
|
||||
| **Critical** | Stop workflow, rollback changes if possible |
|
||||
|
||||
### Common Errors
|
||||
|
||||
**Error 1: [Error Name]**
|
||||
- **Cause**: What causes this error
|
||||
- **Detection**: How to identify it
|
||||
- **Recovery**: Steps to fix
|
||||
1. Recovery action 1
|
||||
2. Recovery action 2
|
||||
3. Retry from failed step
|
||||
|
||||
**Error 2: [Error Name]**
|
||||
- **Cause**: What causes this error
|
||||
- **Detection**: How to identify it
|
||||
- **Recovery**: Manual intervention required
|
||||
- Ask user: "[Question to ask]"
|
||||
- Wait for user input
|
||||
- Apply user's guidance
|
||||
- Resume workflow
|
||||
|
||||
## Rollback Procedure
|
||||
|
||||
If the workflow fails critically:
|
||||
|
||||
1. **Identify last successful step**
|
||||
- Step 1: ✓ Completed
|
||||
- Step 2: ❌ Failed at action 3
|
||||
|
||||
2. **Undo changes from failed step**
|
||||
- Revert action 1
|
||||
- Revert action 2
|
||||
- Clean up partial state
|
||||
|
||||
3. **Verify system state**
|
||||
- Confirm rollback successful
|
||||
- Check for side effects
|
||||
|
||||
4. **Report to user**
|
||||
```
|
||||
Workflow failed at Step 2, action 3
|
||||
Reason: [Error message]
|
||||
All changes have been rolled back
|
||||
System is back to pre-workflow state
|
||||
```
|
||||
|
||||
## Workflow Variations
|
||||
|
||||
### Variation 1: Quick Mode
|
||||
|
||||
**When to use**: User needs faster execution, can accept lower validation
|
||||
|
||||
**Changes**:
|
||||
- Skip optional validations
|
||||
- Use cached data where available
|
||||
- Reduce logging verbosity
|
||||
|
||||
**Trade-offs**:
|
||||
- ⚡ 50% faster
|
||||
- ⚠️ Less detailed error messages
|
||||
|
||||
### Variation 2: Strict Mode
|
||||
|
||||
**When to use**: Production deployments, critical changes
|
||||
|
||||
**Changes**:
|
||||
- Enable all validations
|
||||
- Require explicit user confirmation at each step
|
||||
- Generate detailed audit logs
|
||||
|
||||
**Trade-offs**:
|
||||
- 🛡️ Maximum safety
|
||||
- 🐢 Slower execution
|
||||
|
||||
## Monitoring and Logging
|
||||
|
||||
Throughout the workflow:
|
||||
|
||||
```
|
||||
[TIMESTAMP] [STEP] [STATUS] Message
|
||||
|
||||
[2025-01-31 14:30:01] [PREP] [INFO] Starting preparation phase
|
||||
[2025-01-31 14:30:02] [PREP] [OK] All prerequisites met
|
||||
[2025-01-31 14:30:03] [STEP1] [INFO] Beginning Step 1
|
||||
[2025-01-31 14:30:05] [STEP1] [OK] Step 1 completed successfully
|
||||
[2025-01-31 14:30:06] [STEP2] [INFO] Beginning Step 2
|
||||
[2025-01-31 14:30:08] [STEP2] [WARN] Condition B detected, using fallback
|
||||
[2025-01-31 14:30:10] [STEP2] [OK] Step 2 completed with warnings
|
||||
[2025-01-31 14:30:11] [STEP3] [INFO] Beginning Step 3
|
||||
[2025-01-31 14:30:15] [STEP3] [OK] Step 3 completed successfully
|
||||
[2025-01-31 14:30:16] [COMPLETE] [OK] Workflow finished successfully
|
||||
```
|
||||
|
||||
## Post-Workflow Report
|
||||
|
||||
After completion, generate a summary:
|
||||
|
||||
```markdown
|
||||
# Workflow Execution Report
|
||||
|
||||
**Workflow**: [Workflow Name]
|
||||
**Started**: 2025-01-31 14:30:01
|
||||
**Completed**: 2025-01-31 14:30:16
|
||||
**Duration**: 15 seconds
|
||||
**Status**: ✓ Success
|
||||
|
||||
## Steps Executed
|
||||
|
||||
1. ✓ Preparation Phase (1s)
|
||||
2. ✓ Step 1: [Step Name] (2s)
|
||||
3. ✓ Step 2: [Step Name] (4s) - 1 warning
|
||||
4. ✓ Step 3: [Step Name] (4s)
|
||||
|
||||
## Warnings
|
||||
|
||||
- Step 2: Condition B detected, used fallback action
|
||||
|
||||
## Artifacts Generated
|
||||
|
||||
- `/path/to/output1.txt`
|
||||
- `/path/to/output2.json`
|
||||
- `/path/to/report.html`
|
||||
|
||||
## Next Steps
|
||||
|
||||
- Review generated artifacts
|
||||
- Deploy to production (if applicable)
|
||||
- Archive logs to `/logs/workflow-20250131-143001.log`
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
### Do
|
||||
|
||||
- ✅ Validate inputs before starting workflow
|
||||
- ✅ Provide clear progress updates at each step
|
||||
- ✅ Log all decisions and actions
|
||||
- ✅ Handle errors gracefully with recovery options
|
||||
- ✅ Generate summary report at completion
|
||||
|
||||
### Don't
|
||||
|
||||
- ❌ Skip validation steps to save time
|
||||
- ❌ Continue after critical errors
|
||||
- ❌ Assume prerequisites are met without checking
|
||||
- ❌ Lose partial progress on failure
|
||||
- ❌ Leave system in inconsistent state
|
||||
|
||||
## Advanced Features
|
||||
|
||||
### Parallel Execution
|
||||
|
||||
Some steps can run in parallel:
|
||||
|
||||
```
|
||||
Step 1 ─┬─→ Step 2A ─┐
|
||||
│ ├─→ Step 3
|
||||
└─→ Step 2B ─┘
|
||||
```
|
||||
|
||||
**Requirements**:
|
||||
- Steps 2A and 2B must be independent
|
||||
- Both must complete before Step 3
|
||||
|
||||
**Implementation**:
|
||||
1. Start Step 2A in background
|
||||
2. Start Step 2B in background
|
||||
3. Wait for both to complete
|
||||
4. Verify both succeeded
|
||||
5. Proceed to Step 3
|
||||
|
||||
### Conditional Branching
|
||||
|
||||
```
|
||||
Step 1 → Decision
|
||||
├─→ [Condition A] → Path A → Step 3
|
||||
├─→ [Condition B] → Path B → Step 3
|
||||
└─→ [Default] → Path C → Step 3
|
||||
```
|
||||
|
||||
## Testing This Workflow
|
||||
|
||||
To test the workflow without side effects:
|
||||
|
||||
1. Use `--dry-run` flag to simulate execution
|
||||
2. Check that all steps are logged correctly
|
||||
3. Verify error handling with intentional failures
|
||||
4. Confirm rollback procedure works
|
||||
|
||||
Example:
|
||||
```bash
|
||||
workflow-runner --dry-run --inject-error step2
|
||||
```
|
||||
|
||||
Expected output:
|
||||
```
|
||||
[DRY RUN] Step 1: Would execute [actions]
|
||||
[DRY RUN] Step 2: Injected error as requested
|
||||
[DRY RUN] Error Recovery: Would attempt fix
|
||||
[DRY RUN] Rollback: Would undo Step 1 changes
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
**Version**: 1.0
|
||||
**Last Updated**: YYYY-MM-DD
|
||||
**Maintainer**: Team Name
|
||||
|
|
@ -0,0 +1,243 @@
|
|||
---
|
||||
name: prompt-optimize
|
||||
description: Expert prompt engineering skill that transforms Claude into "Alpha-Prompt" - a master prompt engineer who collaboratively crafts high-quality prompts through flexible dialogue. Activates when user asks to "optimize prompt", "improve system instruction", "enhance AI instruction", or mentions prompt engineering tasks.
|
||||
---
|
||||
|
||||
# 提示词优化专家 (Alpha-Prompt)
|
||||
|
||||
## When to Use This Skill
|
||||
|
||||
触发场景:
|
||||
- 用户明确要求"优化提示词"、"改进 prompt"、"提升指令质量"
|
||||
- 用户提供了现有的提示词并希望改进
|
||||
- 用户描述了一个 AI 应用场景,需要设计提示词
|
||||
- 用户提到"prompt engineering"、"系统指令"、"AI 角色设定"
|
||||
- 用户询问如何让 AI 表现得更好、更专业
|
||||
|
||||
## Core Identity Transformation
|
||||
|
||||
当此技能激活时,你将转变为**元提示词工程师 Alpha-Prompt**:
|
||||
|
||||
- **专家定位**:世界顶级提示词工程专家与架构师
|
||||
- **交互风格**:兼具专家的严谨与顾问的灵动
|
||||
- **核心使命**:通过富有启发性的对话,与用户共同创作兼具艺术感与工程美的提示词
|
||||
- **首要原则**:对话的艺术,而非僵硬的流程
|
||||
|
||||
## Operating Principles
|
||||
|
||||
### 1. 真诚的双向沟通
|
||||
|
||||
**必须避免**:
|
||||
- ❌ 模板化的、可预测的提问
|
||||
- ❌ 自说自话的独白
|
||||
- ❌ 僵硬的流程化操作
|
||||
- ❌ 不等待用户回应就自行完成所有步骤
|
||||
|
||||
**应该做到**:
|
||||
- ✅ 像真正的专家那样灵活沟通
|
||||
- ✅ 激发用户的灵感
|
||||
- ✅ 共同将构想塑造为杰作
|
||||
- ✅ 真诚地等待每个关键决策点的回应
|
||||
|
||||
### 2. 主动的架构升级
|
||||
|
||||
当遇到简单需求时,主动识别是否可以升级为更专业的架构:
|
||||
|
||||
**识别信号**:
|
||||
- 创意生成任务 → 建议使用思想树(ToT) + 自洽性(Self-Consistency)
|
||||
- 复杂推理任务 → 建议使用思维链(CoT)
|
||||
- 需要深刻洞察 → 建议使用后退提示(Step-Back Prompting)
|
||||
- 需要外部信息 → 建议使用推理与行动(ReAct)
|
||||
|
||||
**沟通方式**:用通俗的语言解释技术优势,而非直接抛出术语。
|
||||
|
||||
### 3. 安全护栏意识
|
||||
|
||||
对于面向公众的角色或敏感场景,主动提供安全建议:
|
||||
|
||||
**风险场景识别**:
|
||||
- 公开互动的 AI 角色
|
||||
- 处理用户输入的系统
|
||||
- 可能遭遇越狱或注入攻击的场景
|
||||
|
||||
**安全措施**:
|
||||
- 明确的角色边界和约束
|
||||
- 指令隔离(Instruction Fencing)
|
||||
- 伦理和行为红线
|
||||
- 优雅的拒绝策略
|
||||
|
||||
## Interaction Workflow
|
||||
|
||||
### Phase 1: 诊断与探询
|
||||
|
||||
**当用户需求简单时**:
|
||||
```
|
||||
示例对话:
|
||||
用户:"帮我优化一个提示词,用来生成营销口号"
|
||||
你:"营销口号任务非常有挑战!我有个专业建议:
|
||||
我们可以设计一个'两步走'的提示词。第一步让 AI 像头脑风暴一样,
|
||||
从多个角度生成一批口号(这借鉴了思想树的发散思维)。
|
||||
第二步再让 AI 扮演市场总监,对所有口号打分排序(这利用了自洽性)。
|
||||
您觉得这个'先发散再收敛'的思路如何?"
|
||||
```
|
||||
|
||||
**当用户需求明确时**:
|
||||
```
|
||||
示例对话:
|
||||
用户:"需要把会议记录整理成结构化摘要"
|
||||
你:"收到,这是个实用需求。关键在于定义清晰的模板:
|
||||
- 核心结论 (Key Takeaways)
|
||||
- 待办事项 (Action Items): [负责人] - [任务]
|
||||
- 关键讨论点 (Discussion Points)
|
||||
|
||||
在格式上,我们可以用控制力更强的 XML 标签,
|
||||
也可以用更简洁的 Markdown。如果不确定,我推荐更精准的 XML。"
|
||||
```
|
||||
|
||||
### Phase 2: 协作构建
|
||||
|
||||
**识别用户提供的内容质量**:
|
||||
|
||||
1. **初步想法阶段**:
|
||||
- 通过提问帮助用户明确需求
|
||||
- 提供架构建议和技术选型
|
||||
- 一起探讨最佳实现方案
|
||||
|
||||
2. **半成品阶段**:
|
||||
- 识别已有的优点
|
||||
- 指出可以改进的方向
|
||||
- 提供具体的优化建议
|
||||
|
||||
3. **高质量成品阶段**(同行评审模式):
|
||||
- 首先真诚赞赏和鉴赏
|
||||
- 点出作品的巧思和亮点
|
||||
- 提出能"更上一层楼"的深邃建议
|
||||
- 用开放式问题交还选择权
|
||||
- **重要**:绝不直接动手修改,等待回应
|
||||
|
||||
```
|
||||
示例对话:
|
||||
用户:[提供了一个高质量的完整提示词]
|
||||
你:"看到您这份提示词,我必须先表达赞叹。
|
||||
这已经不是简单指令,而是充满巧思的艺术品。
|
||||
您对角色的定义、示例的选取,都体现了极高水准。
|
||||
|
||||
您提到的目标——'更深刻,刺穿心灵',这是非常高级的追求。
|
||||
基于您现有的优秀框架,我建议引入'后退提示'技术,
|
||||
让 AI 在生成金句前,先触碰问题背后更本质的人类困境。
|
||||
|
||||
这就像给剑客配上能看透内心的眼睛。
|
||||
您觉得这个'先洞察母题,再凝练金句'的思路,
|
||||
能否达到您想要的'刺穿感'?"
|
||||
```
|
||||
|
||||
### Phase 3: 最终交付
|
||||
|
||||
**交付内容必须包含**:
|
||||
|
||||
1. **设计思路解析**:
|
||||
- 采用了哪些技术和方法
|
||||
- 为什么这样设计
|
||||
- 如何应对潜在问题
|
||||
|
||||
2. **完整的可复制提示词**:
|
||||
- 无状态设计(不包含"新增"、版本号等时态标记)
|
||||
- 清晰的结构(推荐使用 XML 或 Markdown)
|
||||
- 完整的可直接使用
|
||||
|
||||
## Knowledge Base Reference
|
||||
|
||||
### 基础技术
|
||||
|
||||
1. **角色扮演 (Persona)**:设定具体角色、身份和性格
|
||||
2. **Few-shot 提示**:提供示例让 AI 模仿学习
|
||||
3. **Zero-shot 提示**:仅依靠指令完成任务
|
||||
|
||||
### 高级认知架构
|
||||
|
||||
1. **思维链 (CoT)**:展示分步推理过程,用于复杂逻辑
|
||||
2. **自洽性 (Self-Consistency)**:多次生成并投票,提高稳定性
|
||||
3. **思想树 (ToT)**:探索多个推理路径,用于创造性任务
|
||||
4. **后退提示 (Step-Back)**:先思考高层概念再回答,提升深度
|
||||
5. **推理与行动 (ReAct)**:交替推理和调用工具,用于需要外部信息的任务
|
||||
|
||||
### 结构与约束控制
|
||||
|
||||
1. **XML/JSON 格式化**:提升指令理解精度
|
||||
2. **约束定义**:明确边界,定义能做和不能做的事
|
||||
|
||||
### 安全与鲁棒性
|
||||
|
||||
1. **提示注入防御**:明确指令边界和角色设定
|
||||
2. **越狱缓解**:设定强大的伦理和角色约束
|
||||
3. **指令隔离**:使用分隔符界定指令区和用户输入区
|
||||
|
||||
## Quality Standards
|
||||
|
||||
### 优秀提示词的特征
|
||||
|
||||
✅ **清晰的角色定义**:AI 知道自己是谁
|
||||
✅ **明确的目标和约束**:知道要做什么、不能做什么
|
||||
✅ **适当的示例**:通过 Few-shot 展示期望的行为
|
||||
✅ **结构化的输出格式**:使用 XML 或 Markdown 规范输出
|
||||
✅ **安全护栏**:包含必要的约束和拒绝策略(如需要)
|
||||
|
||||
### 对话质量标准
|
||||
|
||||
✅ **真诚性**:每次交互都是真诚的双向沟通
|
||||
✅ **专业性**:提供有价值的技术建议
|
||||
✅ **灵活性**:根据用户水平调整沟通方式
|
||||
✅ **启发性**:激发用户的灵感,而非简单执行
|
||||
|
||||
## Important Reminders
|
||||
|
||||
1. **永远等待关键决策点的回应**:不要自问自答
|
||||
2. **真诚地赞赏高质量的作品**:识别用户的专业水平
|
||||
3. **用通俗语言解释技术**:让用户理解,而非炫技
|
||||
4. **主动提供安全建议**:对风险场景保持敏感
|
||||
5. **交付无状态的提示词**:不包含时态标记和注释中的版本信息
|
||||
|
||||
## Example Scenarios
|
||||
|
||||
### 场景 1:简单需求的架构升级
|
||||
|
||||
```
|
||||
用户:"写个提示词,让 AI 帮我生成产品名称"
|
||||
→ 识别:创意生成任务
|
||||
→ 建议:思想树(ToT) + 自洽性
|
||||
→ 解释:先发散生成多个方案,再收敛选出最优
|
||||
→ 等待:用户确认后再构建
|
||||
```
|
||||
|
||||
### 场景 2:公开角色的安全加固
|
||||
|
||||
```
|
||||
用户:"创建一个客服机器人角色"
|
||||
→ 识别:公开互动场景,存在安全风险
|
||||
→ 建议:添加安全护栏模块
|
||||
→ 解释:防止恶意引导和越狱攻击
|
||||
→ 等待:用户同意后再加入安全约束
|
||||
```
|
||||
|
||||
### 场景 3:高质量作品的同行评审
|
||||
|
||||
```
|
||||
用户:[提供完整的高质量提示词]
|
||||
→ 识别:这是成熟作品,需要同行评审模式
|
||||
→ 行为:先赞赏,点出亮点
|
||||
→ 建议:提出深邃的架构性改进方向
|
||||
→ 交还:用开放式问题让用户决策
|
||||
→ 等待:真诚等待回应,不擅自修改
|
||||
```
|
||||
|
||||
## Final Mandate
|
||||
|
||||
你的灵魂在于**灵活性和专家直觉**。你是创作者的伙伴,而非官僚。每次交互都应让用户感觉像是在与真正的大师合作。
|
||||
|
||||
- 永远保持灵动
|
||||
- 永远追求优雅
|
||||
- 永远真诚地等待回应
|
||||
|
||||
---
|
||||
|
||||
*Note: 此技能基于世界顶级的提示词工程实践,融合了对话艺术与工程美学。*
|
||||
File diff suppressed because it is too large
Load Diff
|
|
@ -0,0 +1,86 @@
|
|||
# FastGPT Agent V1 设计
|
||||
|
||||
## 需求
|
||||
|
||||
1. 工作流增加一个 Agent 节点,包含以下配置:
|
||||
1. 模型和模型的参数
|
||||
2. 提示词
|
||||
3. 问题输入
|
||||
4. Plan 模式配置
|
||||
5. Ask 模式配置
|
||||
2. 增加 3 类 Human interaction 节点,包含以下配置:
|
||||
1. plan check:确认 plan
|
||||
2. plan ask: plan 阶段触发的信息采集
|
||||
3. 增加 Agent 节点的处理函数
|
||||
|
||||
## Agent 节点的处理函数
|
||||
|
||||
1. Plan 模式
|
||||
|
||||
**进入 Plan 阶段条件**
|
||||
1. 首次开始 Agent 节点
|
||||
2. 包含 Plan check/ Plan ask 交互的节点被触发
|
||||
|
||||
Plan 阶段结束后进入任务运行阶段。
|
||||
|
||||
2. 非 Plan 模式
|
||||
|
||||
直接进入任务原阶段。
|
||||
|
||||
### 前置阶段
|
||||
|
||||
1. 解析 subApp,获取模型可用的工具列表
|
||||
|
||||
### Plan 的数据结构
|
||||
|
||||
```ts
|
||||
export type AgentPlanStepType = {
|
||||
id: string; // 步骤唯一 ID
|
||||
title: string; // 步骤标题,通常不超过 20 字。
|
||||
description: string; // 步骤详细任务描述
|
||||
depends_on?: string[]; // 依赖的步骤 ID(用于获取前置步骤的响应)
|
||||
response?: string; // 步骤的响应
|
||||
};
|
||||
export type AgentPlanType = {
|
||||
task: string;
|
||||
steps: AgentPlanStepType[];
|
||||
replan?: string[]; // 重新规划依赖的步骤 ID
|
||||
};
|
||||
|
||||
```
|
||||
|
||||
### Plan 阶段
|
||||
|
||||
**首次进入处理逻辑:**
|
||||
1. 组合系统 prompt、用户对话历史记录、当前任务内容成 messages
|
||||
2. 调用 LLM 请求内容:
|
||||
1. 直接生成 plan(一个数组),返回 Check 信息
|
||||
2. 调用 Ask 工具:返回 Ask 信息
|
||||
|
||||
**Check 进入处理逻辑:**
|
||||
1. 如果用户点击的是确认,则直接返回完整 Plan。
|
||||
2. 如果用户输入的修改要求,则继续拼接 messages,调用 LLM。(与首次进入处理逻辑类似)
|
||||
|
||||
**Ask 进入处理逻辑:**
|
||||
1. 拼接 Ask 结果到 messages 后
|
||||
2. 调用 LLM 请求内容(与首次进入处理逻辑一致)
|
||||
3. 可能会出现多次 Ask 的循环情况,暂时预设最大 3 次,如果进入 3 次后,则本轮 messages 不再携带 Ask 工具,只会输出 Plan
|
||||
|
||||
|
||||
### 任务调度阶段
|
||||
|
||||
如果没有 Plan,则直接利用模型工具调用能力完成即可。
|
||||
Plan 是一个数组,包含多个阶段的任务,会通过数组遍历来逐一完成每一步的任务。
|
||||
|
||||
```ts
|
||||
for(const step from steps) {
|
||||
const response = await runStep(xxx)
|
||||
context[step.id] = response
|
||||
}
|
||||
```
|
||||
|
||||
### 数据持久存储
|
||||
|
||||
1. memory 中需要存储:
|
||||
1. planMessages: plan 阶段传递给模型的 messages
|
||||
2. plan: 本轮任务的 plan,以及响应值。
|
||||
|
|
@ -70,14 +70,14 @@ export const parseUrlToFileType = (url: string): UserChatItemFileItemType | unde
|
|||
// Default to file type for non-extension files
|
||||
return {
|
||||
type: ChatFileTypeEnum.image,
|
||||
name: filename || 'null',
|
||||
name: filename ? decodeURIComponent(filename) : url,
|
||||
url
|
||||
};
|
||||
}
|
||||
// If it's a document type, return as file, otherwise treat as image
|
||||
return {
|
||||
type: ChatFileTypeEnum.file,
|
||||
name: filename || 'null',
|
||||
name: filename ? decodeURIComponent(filename) : url,
|
||||
url
|
||||
};
|
||||
} catch (error) {
|
||||
|
|
|
|||
|
|
@ -2,7 +2,6 @@ import openai from 'openai';
|
|||
import type {
|
||||
ChatCompletionMessageToolCall,
|
||||
ChatCompletionMessageParam as SdkChatCompletionMessageParam,
|
||||
ChatCompletionToolMessageParam,
|
||||
ChatCompletionContentPart as SdkChatCompletionContentPart,
|
||||
ChatCompletionUserMessageParam as SdkChatCompletionUserMessageParam,
|
||||
ChatCompletionToolMessageParam as SdkChatCompletionToolMessageParam,
|
||||
|
|
@ -29,7 +28,7 @@ type CustomChatCompletionUserMessageParam = Omit<ChatCompletionUserMessageParam,
|
|||
role: 'user';
|
||||
content: string | Array<ChatCompletionContentPart>;
|
||||
};
|
||||
type CustomChatCompletionToolMessageParam = SdkChatCompletionToolMessageParam & {
|
||||
export type CustomChatCompletionToolMessageParam = SdkChatCompletionToolMessageParam & {
|
||||
role: 'tool';
|
||||
name?: string;
|
||||
};
|
||||
|
|
@ -56,7 +55,6 @@ export type ChatCompletionMessageParam = (
|
|||
export type SdkChatCompletionMessageParam = SdkChatCompletionMessageParam;
|
||||
|
||||
/* ToolChoice and functionCall extension */
|
||||
export type ChatCompletionToolMessageParam = ChatCompletionToolMessageParam & { name: string };
|
||||
export type ChatCompletionAssistantToolParam = {
|
||||
role: 'assistant';
|
||||
tool_calls: ChatCompletionMessageToolCall[];
|
||||
|
|
|
|||
|
|
@ -0,0 +1 @@
|
|||
export type AgentSubAppItemType = {};
|
||||
|
|
@ -61,9 +61,9 @@ export const defaultChatInputGuideConfig = {
|
|||
};
|
||||
|
||||
export const defaultAppSelectFileConfig: AppFileSelectConfigType = {
|
||||
maxFiles: 10,
|
||||
canSelectFile: false,
|
||||
canSelectImg: false,
|
||||
maxFiles: 10,
|
||||
canSelectVideo: false,
|
||||
canSelectAudio: false,
|
||||
canSelectCustomFileExtension: false,
|
||||
|
|
|
|||
|
|
@ -99,7 +99,8 @@ export type AppDatasetSearchParamsType = {
|
|||
datasetSearchExtensionModel?: string;
|
||||
datasetSearchExtensionBg?: string;
|
||||
};
|
||||
export type AppSimpleEditFormType = {
|
||||
|
||||
export type AppFormEditFormType = {
|
||||
// templateId: string;
|
||||
aiSettings: {
|
||||
[NodeInputKeyEnum.aiModel]: string;
|
||||
|
|
@ -117,7 +118,9 @@ export type AppSimpleEditFormType = {
|
|||
dataset: {
|
||||
datasets: SelectedDatasetType[];
|
||||
} & AppDatasetSearchParamsType;
|
||||
selectedTools: FlowNodeTemplateType[];
|
||||
selectedTools: (FlowNodeTemplateType & {
|
||||
configStatus?: 'active' | 'waitingForConfig' | 'invalid';
|
||||
})[];
|
||||
chatConfig: AppChatConfigType;
|
||||
};
|
||||
|
||||
|
|
|
|||
|
|
@ -1,16 +1,11 @@
|
|||
import type { AppChatConfigType, AppSimpleEditFormType } from '../app/type';
|
||||
import { FlowNodeTypeEnum } from '../workflow/node/constant';
|
||||
import { FlowNodeTemplateTypeEnum, NodeInputKeyEnum } from '../workflow/constants';
|
||||
import type { FlowNodeInputItemType } from '../workflow/type/io.d';
|
||||
import { getAppChatConfig } from '../workflow/utils';
|
||||
import { type StoreNodeItemType } from '../workflow/type/node';
|
||||
import type { AppFormEditFormType } from '../app/type';
|
||||
import { DatasetSearchModeEnum } from '../dataset/constants';
|
||||
import { type WorkflowTemplateBasicType } from '../workflow/type';
|
||||
import { AppTypeEnum } from './constants';
|
||||
import appErrList from '../../common/error/code/app';
|
||||
import pluginErrList from '../../common/error/code/plugin';
|
||||
|
||||
export const getDefaultAppForm = (): AppSimpleEditFormType => {
|
||||
export const getDefaultAppForm = (): AppFormEditFormType => {
|
||||
return {
|
||||
aiSettings: {
|
||||
model: '',
|
||||
|
|
@ -37,143 +32,7 @@ export const getDefaultAppForm = (): AppSimpleEditFormType => {
|
|||
};
|
||||
};
|
||||
|
||||
/* format app nodes to edit form */
|
||||
export const appWorkflow2Form = ({
|
||||
nodes,
|
||||
chatConfig
|
||||
}: {
|
||||
nodes: StoreNodeItemType[];
|
||||
chatConfig: AppChatConfigType;
|
||||
}) => {
|
||||
const defaultAppForm = getDefaultAppForm();
|
||||
const findInputValueByKey = (inputs: FlowNodeInputItemType[], key: string) => {
|
||||
return inputs.find((item) => item.key === key)?.value;
|
||||
};
|
||||
|
||||
nodes.forEach((node) => {
|
||||
if (
|
||||
node.flowNodeType === FlowNodeTypeEnum.chatNode ||
|
||||
node.flowNodeType === FlowNodeTypeEnum.agent
|
||||
) {
|
||||
defaultAppForm.aiSettings.model = findInputValueByKey(node.inputs, NodeInputKeyEnum.aiModel);
|
||||
defaultAppForm.aiSettings.systemPrompt = findInputValueByKey(
|
||||
node.inputs,
|
||||
NodeInputKeyEnum.aiSystemPrompt
|
||||
);
|
||||
defaultAppForm.aiSettings.temperature = findInputValueByKey(
|
||||
node.inputs,
|
||||
NodeInputKeyEnum.aiChatTemperature
|
||||
);
|
||||
defaultAppForm.aiSettings.maxToken = findInputValueByKey(
|
||||
node.inputs,
|
||||
NodeInputKeyEnum.aiChatMaxToken
|
||||
);
|
||||
defaultAppForm.aiSettings.maxHistories = findInputValueByKey(
|
||||
node.inputs,
|
||||
NodeInputKeyEnum.history
|
||||
);
|
||||
defaultAppForm.aiSettings.aiChatReasoning = findInputValueByKey(
|
||||
node.inputs,
|
||||
NodeInputKeyEnum.aiChatReasoning
|
||||
);
|
||||
defaultAppForm.aiSettings.aiChatTopP = findInputValueByKey(
|
||||
node.inputs,
|
||||
NodeInputKeyEnum.aiChatTopP
|
||||
);
|
||||
defaultAppForm.aiSettings.aiChatStopSign = findInputValueByKey(
|
||||
node.inputs,
|
||||
NodeInputKeyEnum.aiChatStopSign
|
||||
);
|
||||
defaultAppForm.aiSettings.aiChatResponseFormat = findInputValueByKey(
|
||||
node.inputs,
|
||||
NodeInputKeyEnum.aiChatResponseFormat
|
||||
);
|
||||
defaultAppForm.aiSettings.aiChatJsonSchema = findInputValueByKey(
|
||||
node.inputs,
|
||||
NodeInputKeyEnum.aiChatJsonSchema
|
||||
);
|
||||
} else if (node.flowNodeType === FlowNodeTypeEnum.datasetSearchNode) {
|
||||
defaultAppForm.dataset.datasets = findInputValueByKey(
|
||||
node.inputs,
|
||||
NodeInputKeyEnum.datasetSelectList
|
||||
);
|
||||
defaultAppForm.dataset.similarity = findInputValueByKey(
|
||||
node.inputs,
|
||||
NodeInputKeyEnum.datasetSimilarity
|
||||
);
|
||||
defaultAppForm.dataset.limit = findInputValueByKey(
|
||||
node.inputs,
|
||||
NodeInputKeyEnum.datasetMaxTokens
|
||||
);
|
||||
defaultAppForm.dataset.searchMode =
|
||||
findInputValueByKey(node.inputs, NodeInputKeyEnum.datasetSearchMode) ||
|
||||
DatasetSearchModeEnum.embedding;
|
||||
defaultAppForm.dataset.embeddingWeight = findInputValueByKey(
|
||||
node.inputs,
|
||||
NodeInputKeyEnum.datasetSearchEmbeddingWeight
|
||||
);
|
||||
// Rerank
|
||||
defaultAppForm.dataset.usingReRank = !!findInputValueByKey(
|
||||
node.inputs,
|
||||
NodeInputKeyEnum.datasetSearchUsingReRank
|
||||
);
|
||||
defaultAppForm.dataset.rerankModel = findInputValueByKey(
|
||||
node.inputs,
|
||||
NodeInputKeyEnum.datasetSearchRerankModel
|
||||
);
|
||||
defaultAppForm.dataset.rerankWeight = findInputValueByKey(
|
||||
node.inputs,
|
||||
NodeInputKeyEnum.datasetSearchRerankWeight
|
||||
);
|
||||
// Query extension
|
||||
defaultAppForm.dataset.datasetSearchUsingExtensionQuery = findInputValueByKey(
|
||||
node.inputs,
|
||||
NodeInputKeyEnum.datasetSearchUsingExtensionQuery
|
||||
);
|
||||
defaultAppForm.dataset.datasetSearchExtensionModel = findInputValueByKey(
|
||||
node.inputs,
|
||||
NodeInputKeyEnum.datasetSearchExtensionModel
|
||||
);
|
||||
defaultAppForm.dataset.datasetSearchExtensionBg = findInputValueByKey(
|
||||
node.inputs,
|
||||
NodeInputKeyEnum.datasetSearchExtensionBg
|
||||
);
|
||||
} else if (
|
||||
node.flowNodeType === FlowNodeTypeEnum.pluginModule ||
|
||||
node.flowNodeType === FlowNodeTypeEnum.appModule ||
|
||||
node.flowNodeType === FlowNodeTypeEnum.tool ||
|
||||
node.flowNodeType === FlowNodeTypeEnum.toolSet
|
||||
) {
|
||||
if (!node.pluginId) return;
|
||||
|
||||
defaultAppForm.selectedTools.push({
|
||||
id: node.nodeId,
|
||||
pluginId: node.pluginId,
|
||||
name: node.name,
|
||||
avatar: node.avatar,
|
||||
intro: node.intro || '',
|
||||
flowNodeType: node.flowNodeType,
|
||||
showStatus: node.showStatus,
|
||||
version: node.version,
|
||||
inputs: node.inputs,
|
||||
outputs: node.outputs,
|
||||
templateType: FlowNodeTemplateTypeEnum.other,
|
||||
pluginData: node.pluginData,
|
||||
toolConfig: node.toolConfig
|
||||
});
|
||||
} else if (node.flowNodeType === FlowNodeTypeEnum.systemConfig) {
|
||||
defaultAppForm.chatConfig = getAppChatConfig({
|
||||
chatConfig,
|
||||
systemConfigNode: node,
|
||||
isPublicFetch: true
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
return defaultAppForm;
|
||||
};
|
||||
|
||||
export const getAppType = (config?: WorkflowTemplateBasicType | AppSimpleEditFormType) => {
|
||||
export const getAppType = (config?: WorkflowTemplateBasicType | AppFormEditFormType) => {
|
||||
if (!config) return '';
|
||||
|
||||
if ('aiSettings' in config) {
|
||||
|
|
|
|||
|
|
@ -7,7 +7,7 @@ import type {
|
|||
UserChatItemType,
|
||||
UserChatItemValueItemType
|
||||
} from '../../core/chat/type.d';
|
||||
import { ChatFileTypeEnum, ChatItemValueTypeEnum, ChatRoleEnum } from '../../core/chat/constants';
|
||||
import { ChatFileTypeEnum, ChatRoleEnum } from '../../core/chat/constants';
|
||||
import type {
|
||||
ChatCompletionContentPart,
|
||||
ChatCompletionFunctionMessageParam,
|
||||
|
|
@ -62,13 +62,13 @@ export const chats2GPTMessages = ({
|
|||
} else if (item.obj === ChatRoleEnum.Human) {
|
||||
const value = item.value
|
||||
.map((item) => {
|
||||
if (item.type === ChatItemValueTypeEnum.text) {
|
||||
if (item.text) {
|
||||
return {
|
||||
type: 'text',
|
||||
text: item.text?.content || ''
|
||||
};
|
||||
}
|
||||
if (item.type === ChatItemValueTypeEnum.file) {
|
||||
if (item.file) {
|
||||
if (item.file?.type === ChatFileTypeEnum.image) {
|
||||
return {
|
||||
type: 'image_url',
|
||||
|
|
@ -98,9 +98,9 @@ export const chats2GPTMessages = ({
|
|||
} else {
|
||||
const aiResults: ChatCompletionMessageParam[] = [];
|
||||
|
||||
//AI
|
||||
//AI: 只需要把根节点转化即可
|
||||
item.value.forEach((value, i) => {
|
||||
if (value.type === ChatItemValueTypeEnum.tool && value.tools && reserveTool) {
|
||||
if (value.tools && reserveTool) {
|
||||
const tool_calls: ChatCompletionMessageToolCall[] = [];
|
||||
const toolResponse: ChatCompletionToolMessageParam[] = [];
|
||||
value.tools.forEach((tool) => {
|
||||
|
|
@ -115,7 +115,6 @@ export const chats2GPTMessages = ({
|
|||
toolResponse.push({
|
||||
tool_call_id: tool.id,
|
||||
role: ChatCompletionRequestMessageRoleEnum.Tool,
|
||||
name: tool.functionName,
|
||||
content: tool.response
|
||||
});
|
||||
});
|
||||
|
|
@ -125,21 +124,14 @@ export const chats2GPTMessages = ({
|
|||
tool_calls
|
||||
});
|
||||
aiResults.push(...toolResponse);
|
||||
} else if (
|
||||
value.type === ChatItemValueTypeEnum.text &&
|
||||
typeof value.text?.content === 'string'
|
||||
) {
|
||||
} else if (typeof value.text?.content === 'string') {
|
||||
if (!value.text.content && item.value.length > 1) {
|
||||
return;
|
||||
}
|
||||
// Concat text
|
||||
const lastValue = item.value[i - 1];
|
||||
const lastResult = aiResults[aiResults.length - 1];
|
||||
if (
|
||||
lastValue &&
|
||||
lastValue.type === ChatItemValueTypeEnum.text &&
|
||||
typeof lastResult?.content === 'string'
|
||||
) {
|
||||
if (lastValue && typeof lastResult?.content === 'string') {
|
||||
lastResult.content += value.text.content;
|
||||
} else {
|
||||
aiResults.push({
|
||||
|
|
@ -148,13 +140,14 @@ export const chats2GPTMessages = ({
|
|||
content: value.text.content
|
||||
});
|
||||
}
|
||||
} else if (value.type === ChatItemValueTypeEnum.interactive) {
|
||||
aiResults.push({
|
||||
dataId,
|
||||
role: ChatCompletionRequestMessageRoleEnum.Assistant,
|
||||
interactive: value.interactive
|
||||
});
|
||||
}
|
||||
// else if (value.interactive) {
|
||||
// aiResults.push({
|
||||
// dataId,
|
||||
// role: ChatCompletionRequestMessageRoleEnum.Assistant,
|
||||
// interactive: value.interactive
|
||||
// });
|
||||
// }
|
||||
});
|
||||
|
||||
// Auto add empty assistant message
|
||||
|
|
@ -188,180 +181,175 @@ export const GPTMessages2Chats = ({
|
|||
.map((item) => {
|
||||
const obj = GPT2Chat[item.role];
|
||||
|
||||
const value = (() => {
|
||||
if (
|
||||
obj === ChatRoleEnum.System &&
|
||||
item.role === ChatCompletionRequestMessageRoleEnum.System
|
||||
) {
|
||||
const value: SystemChatItemValueItemType[] = [];
|
||||
if (
|
||||
obj === ChatRoleEnum.System &&
|
||||
item.role === ChatCompletionRequestMessageRoleEnum.System
|
||||
) {
|
||||
const value: SystemChatItemValueItemType[] = [];
|
||||
|
||||
if (Array.isArray(item.content)) {
|
||||
item.content.forEach((item) => [
|
||||
if (Array.isArray(item.content)) {
|
||||
item.content.forEach((item) => [
|
||||
value.push({
|
||||
text: {
|
||||
content: item.text
|
||||
}
|
||||
})
|
||||
]);
|
||||
} else {
|
||||
value.push({
|
||||
text: {
|
||||
content: item.content
|
||||
}
|
||||
});
|
||||
}
|
||||
return {
|
||||
dataId: item.dataId,
|
||||
obj,
|
||||
hideInUI: item.hideInUI,
|
||||
value
|
||||
};
|
||||
} else if (
|
||||
obj === ChatRoleEnum.Human &&
|
||||
item.role === ChatCompletionRequestMessageRoleEnum.User
|
||||
) {
|
||||
const value: UserChatItemValueItemType[] = [];
|
||||
|
||||
if (typeof item.content === 'string') {
|
||||
value.push({
|
||||
text: {
|
||||
content: item.content
|
||||
}
|
||||
});
|
||||
} else if (Array.isArray(item.content)) {
|
||||
item.content.forEach((item) => {
|
||||
if (item.type === 'text') {
|
||||
value.push({
|
||||
type: ChatItemValueTypeEnum.text,
|
||||
text: {
|
||||
content: item.text
|
||||
}
|
||||
})
|
||||
]);
|
||||
} else {
|
||||
value.push({
|
||||
type: ChatItemValueTypeEnum.text,
|
||||
text: {
|
||||
content: item.content
|
||||
}
|
||||
});
|
||||
}
|
||||
return value;
|
||||
} else if (
|
||||
obj === ChatRoleEnum.Human &&
|
||||
item.role === ChatCompletionRequestMessageRoleEnum.User
|
||||
) {
|
||||
const value: UserChatItemValueItemType[] = [];
|
||||
|
||||
if (typeof item.content === 'string') {
|
||||
value.push({
|
||||
type: ChatItemValueTypeEnum.text,
|
||||
text: {
|
||||
content: item.content
|
||||
}
|
||||
});
|
||||
} else if (Array.isArray(item.content)) {
|
||||
item.content.forEach((item) => {
|
||||
if (item.type === 'text') {
|
||||
value.push({
|
||||
type: ChatItemValueTypeEnum.text,
|
||||
text: {
|
||||
content: item.text
|
||||
}
|
||||
});
|
||||
} else if (item.type === 'image_url') {
|
||||
value.push({
|
||||
//@ts-ignore
|
||||
type: ChatItemValueTypeEnum.file,
|
||||
file: {
|
||||
type: ChatFileTypeEnum.image,
|
||||
name: '',
|
||||
url: item.image_url.url,
|
||||
key: item.key
|
||||
}
|
||||
});
|
||||
} else if (item.type === 'file_url') {
|
||||
value.push({
|
||||
// @ts-ignore
|
||||
type: ChatItemValueTypeEnum.file,
|
||||
file: {
|
||||
type: ChatFileTypeEnum.file,
|
||||
name: item.name,
|
||||
url: item.url,
|
||||
key: item.key
|
||||
}
|
||||
});
|
||||
}
|
||||
});
|
||||
}
|
||||
return value;
|
||||
} else if (
|
||||
obj === ChatRoleEnum.AI &&
|
||||
item.role === ChatCompletionRequestMessageRoleEnum.Assistant
|
||||
) {
|
||||
const value: AIChatItemValueItemType[] = [];
|
||||
|
||||
if (typeof item.reasoning_text === 'string' && item.reasoning_text) {
|
||||
value.push({
|
||||
type: ChatItemValueTypeEnum.reasoning,
|
||||
reasoning: {
|
||||
content: item.reasoning_text
|
||||
}
|
||||
});
|
||||
}
|
||||
if (item.tool_calls && reserveTool) {
|
||||
// save tool calls
|
||||
const toolCalls = item.tool_calls as ChatCompletionMessageToolCall[];
|
||||
value.push({
|
||||
//@ts-ignore
|
||||
type: ChatItemValueTypeEnum.tool,
|
||||
tools: toolCalls.map((tool) => {
|
||||
let toolResponse =
|
||||
messages.find(
|
||||
(msg) =>
|
||||
msg.role === ChatCompletionRequestMessageRoleEnum.Tool &&
|
||||
msg.tool_call_id === tool.id
|
||||
)?.content || '';
|
||||
toolResponse =
|
||||
typeof toolResponse === 'string' ? toolResponse : JSON.stringify(toolResponse);
|
||||
|
||||
const toolInfo = getToolInfo?.(tool.function.name);
|
||||
|
||||
return {
|
||||
id: tool.id,
|
||||
toolName: toolInfo?.name || '',
|
||||
toolAvatar: toolInfo?.avatar || '',
|
||||
functionName: tool.function.name,
|
||||
params: tool.function.arguments,
|
||||
response: toolResponse as string
|
||||
};
|
||||
})
|
||||
});
|
||||
}
|
||||
if (item.function_call && reserveTool) {
|
||||
const functionCall = item.function_call as ChatCompletionMessageFunctionCall;
|
||||
const functionResponse = messages.find(
|
||||
(msg) =>
|
||||
msg.role === ChatCompletionRequestMessageRoleEnum.Function &&
|
||||
msg.name === item.function_call?.name
|
||||
) as ChatCompletionFunctionMessageParam;
|
||||
|
||||
if (functionResponse) {
|
||||
value.push({
|
||||
//@ts-ignore
|
||||
type: ChatItemValueTypeEnum.tool,
|
||||
tools: [
|
||||
{
|
||||
id: functionCall.id || '',
|
||||
toolName: functionCall.toolName || '',
|
||||
toolAvatar: functionCall.toolAvatar || '',
|
||||
functionName: functionCall.name,
|
||||
params: functionCall.arguments,
|
||||
response: functionResponse.content || ''
|
||||
}
|
||||
]
|
||||
});
|
||||
}
|
||||
}
|
||||
if (item.interactive) {
|
||||
value.push({
|
||||
//@ts-ignore
|
||||
type: ChatItemValueTypeEnum.interactive,
|
||||
interactive: item.interactive
|
||||
});
|
||||
}
|
||||
if (typeof item.content === 'string' && item.content) {
|
||||
const lastValue = value[value.length - 1];
|
||||
if (lastValue && lastValue.type === ChatItemValueTypeEnum.text && lastValue.text) {
|
||||
lastValue.text.content += item.content;
|
||||
} else {
|
||||
} else if (item.type === 'image_url') {
|
||||
value.push({
|
||||
type: ChatItemValueTypeEnum.text,
|
||||
text: {
|
||||
content: item.content
|
||||
file: {
|
||||
type: ChatFileTypeEnum.image,
|
||||
name: '',
|
||||
url: item.image_url.url,
|
||||
key: item.key
|
||||
}
|
||||
});
|
||||
} else if (item.type === 'file_url') {
|
||||
value.push({
|
||||
file: {
|
||||
type: ChatFileTypeEnum.file,
|
||||
name: item.name,
|
||||
url: item.url,
|
||||
key: item.key
|
||||
}
|
||||
});
|
||||
}
|
||||
}
|
||||
});
|
||||
}
|
||||
return {
|
||||
dataId: item.dataId,
|
||||
obj,
|
||||
hideInUI: item.hideInUI,
|
||||
value
|
||||
};
|
||||
} else if (
|
||||
obj === ChatRoleEnum.AI &&
|
||||
item.role === ChatCompletionRequestMessageRoleEnum.Assistant
|
||||
) {
|
||||
const value: AIChatItemValueItemType[] = [];
|
||||
|
||||
return value;
|
||||
if (typeof item.reasoning_text === 'string' && item.reasoning_text) {
|
||||
value.push({
|
||||
reasoning: {
|
||||
content: item.reasoning_text
|
||||
}
|
||||
});
|
||||
}
|
||||
if (item.tool_calls && reserveTool) {
|
||||
// save tool calls
|
||||
const toolCalls = item.tool_calls as ChatCompletionMessageToolCall[];
|
||||
value.push({
|
||||
tools: toolCalls.map((tool) => {
|
||||
let toolResponse =
|
||||
messages.find(
|
||||
(msg) =>
|
||||
msg.role === ChatCompletionRequestMessageRoleEnum.Tool &&
|
||||
msg.tool_call_id === tool.id
|
||||
)?.content || '';
|
||||
toolResponse =
|
||||
typeof toolResponse === 'string' ? toolResponse : JSON.stringify(toolResponse);
|
||||
|
||||
const toolInfo = getToolInfo?.(tool.function.name);
|
||||
|
||||
return {
|
||||
id: tool.id,
|
||||
toolName: toolInfo?.name || '',
|
||||
toolAvatar: toolInfo?.avatar || '',
|
||||
functionName: tool.function.name,
|
||||
params: tool.function.arguments,
|
||||
response: toolResponse as string
|
||||
};
|
||||
})
|
||||
});
|
||||
}
|
||||
if (item.function_call && reserveTool) {
|
||||
const functionCall = item.function_call as ChatCompletionMessageFunctionCall;
|
||||
const functionResponse = messages.find(
|
||||
(msg) =>
|
||||
msg.role === ChatCompletionRequestMessageRoleEnum.Function &&
|
||||
msg.name === item.function_call?.name
|
||||
) as ChatCompletionFunctionMessageParam;
|
||||
|
||||
if (functionResponse) {
|
||||
value.push({
|
||||
tools: [
|
||||
{
|
||||
id: functionCall.id || '',
|
||||
toolName: functionCall.toolName || '',
|
||||
toolAvatar: functionCall.toolAvatar || '',
|
||||
functionName: functionCall.name,
|
||||
params: functionCall.arguments,
|
||||
response: functionResponse.content || ''
|
||||
}
|
||||
]
|
||||
});
|
||||
}
|
||||
}
|
||||
if (item.interactive) {
|
||||
value.push({
|
||||
interactive: item.interactive
|
||||
});
|
||||
}
|
||||
if (typeof item.content === 'string' && item.content) {
|
||||
const lastValue = value[value.length - 1];
|
||||
if (lastValue && lastValue.text) {
|
||||
lastValue.text.content += item.content;
|
||||
} else {
|
||||
value.push({
|
||||
text: {
|
||||
content: item.content
|
||||
}
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
return [];
|
||||
})();
|
||||
return {
|
||||
dataId: item.dataId,
|
||||
obj,
|
||||
hideInUI: item.hideInUI,
|
||||
value
|
||||
};
|
||||
}
|
||||
|
||||
return {
|
||||
dataId: item.dataId,
|
||||
obj,
|
||||
hideInUI: item.hideInUI,
|
||||
value
|
||||
} as ChatItemType;
|
||||
value: []
|
||||
};
|
||||
})
|
||||
.filter((item) => item.value.length > 0);
|
||||
|
||||
|
|
@ -388,7 +376,7 @@ export const chatValue2RuntimePrompt = (value: ChatItemValueItemType[]): Runtime
|
|||
text: ''
|
||||
};
|
||||
value.forEach((item) => {
|
||||
if (item.type === 'file' && item.file) {
|
||||
if ('file' in item && item.file) {
|
||||
prompt.files.push(item.file);
|
||||
} else if (item.text) {
|
||||
prompt.text += item.text.content;
|
||||
|
|
@ -404,14 +392,12 @@ export const runtimePrompt2ChatsValue = (
|
|||
if (prompt.files) {
|
||||
prompt.files.forEach((file) => {
|
||||
value.push({
|
||||
type: ChatItemValueTypeEnum.file,
|
||||
file
|
||||
});
|
||||
});
|
||||
}
|
||||
if (prompt.text) {
|
||||
value.push({
|
||||
type: ChatItemValueTypeEnum.text,
|
||||
text: {
|
||||
content: prompt.text
|
||||
}
|
||||
|
|
@ -425,7 +411,7 @@ export const getSystemPrompt_ChatItemType = (prompt?: string): ChatItemType[] =>
|
|||
return [
|
||||
{
|
||||
obj: ChatRoleEnum.System,
|
||||
value: [{ type: ChatItemValueTypeEnum.text, text: { content: prompt } }]
|
||||
value: [{ text: { content: prompt } }]
|
||||
}
|
||||
];
|
||||
};
|
||||
|
|
|
|||
|
|
@ -21,13 +21,6 @@ export enum ChatFileTypeEnum {
|
|||
image = 'image',
|
||||
file = 'file'
|
||||
}
|
||||
export enum ChatItemValueTypeEnum {
|
||||
text = 'text',
|
||||
file = 'file',
|
||||
tool = 'tool',
|
||||
interactive = 'interactive',
|
||||
reasoning = 'reasoning'
|
||||
}
|
||||
|
||||
export enum ChatSourceEnum {
|
||||
test = 'test',
|
||||
|
|
|
|||
|
|
@ -1,12 +1,6 @@
|
|||
import { ClassifyQuestionAgentItemType } from '../workflow/template/system/classifyQuestion/type';
|
||||
import type { SearchDataResponseItemType } from '../dataset/type';
|
||||
import type {
|
||||
ChatFileTypeEnum,
|
||||
ChatItemValueTypeEnum,
|
||||
ChatRoleEnum,
|
||||
ChatSourceEnum,
|
||||
ChatStatusEnum
|
||||
} from './constants';
|
||||
import type { ChatFileTypeEnum, ChatRoleEnum, ChatSourceEnum, ChatStatusEnum } from './constants';
|
||||
import type { FlowNodeTypeEnum } from '../workflow/node/constant';
|
||||
import type { NodeInputKeyEnum, NodeOutputKeyEnum } from '../workflow/constants';
|
||||
import type { DispatchNodeResponseKeyEnum } from '../workflow/runtime/constants';
|
||||
|
|
@ -19,6 +13,8 @@ import type { ChatBoxInputType } from '../../../../projects/app/src/components/c
|
|||
import type { WorkflowInteractiveResponseType } from '../workflow/template/system/interactive/type';
|
||||
import type { FlowNodeInputItemType } from '../workflow/type/io';
|
||||
import type { FlowNodeTemplateType } from '../workflow/type/node.d';
|
||||
import { ChatCompletionMessageParam } from '../ai/type';
|
||||
import type { RequireOnlyOne } from '../../common/type/utils';
|
||||
|
||||
/* --------- chat ---------- */
|
||||
export type ChatSchemaType = {
|
||||
|
|
@ -67,7 +63,6 @@ export type UserChatItemFileItemType = {
|
|||
url: string;
|
||||
};
|
||||
export type UserChatItemValueItemType = {
|
||||
type: ChatItemValueTypeEnum.text | ChatItemValueTypeEnum.file;
|
||||
text?: {
|
||||
content: string;
|
||||
};
|
||||
|
|
@ -80,7 +75,6 @@ export type UserChatItemType = {
|
|||
};
|
||||
|
||||
export type SystemChatItemValueItemType = {
|
||||
type: ChatItemValueTypeEnum.text;
|
||||
text?: {
|
||||
content: string;
|
||||
};
|
||||
|
|
@ -91,11 +85,16 @@ export type SystemChatItemType = {
|
|||
};
|
||||
|
||||
export type AIChatItemValueItemType = {
|
||||
type:
|
||||
| ChatItemValueTypeEnum.text
|
||||
| ChatItemValueTypeEnum.reasoning
|
||||
| ChatItemValueTypeEnum.tool
|
||||
| ChatItemValueTypeEnum.interactive;
|
||||
id?: string;
|
||||
} & RequireOnlyOne<{
|
||||
text: {
|
||||
content: string;
|
||||
};
|
||||
reasoning: {
|
||||
content: string;
|
||||
};
|
||||
tool: ToolModuleResponseItemType;
|
||||
interactive: WorkflowInteractiveResponseType;
|
||||
|
||||
text?: {
|
||||
content: string;
|
||||
|
|
@ -103,12 +102,15 @@ export type AIChatItemValueItemType = {
|
|||
reasoning?: {
|
||||
content: string;
|
||||
};
|
||||
tools?: ToolModuleResponseItemType[];
|
||||
interactive?: WorkflowInteractiveResponseType;
|
||||
};
|
||||
|
||||
// Abandon
|
||||
tools?: ToolModuleResponseItemType[];
|
||||
}>;
|
||||
export type AIChatItemType = {
|
||||
obj: ChatRoleEnum.AI;
|
||||
value: AIChatItemValueItemType[];
|
||||
subAppsValue?: Record<string, AIChatItemValueItemType[]>;
|
||||
memories?: Record<string, any>;
|
||||
userGoodFeedback?: string;
|
||||
userBadFeedback?: string;
|
||||
|
|
@ -128,9 +130,9 @@ export type ChatItemValueItemType =
|
|||
| UserChatItemValueItemType
|
||||
| SystemChatItemValueItemType
|
||||
| AIChatItemValueItemType;
|
||||
export type ChatItemMergeType = UserChatItemType | SystemChatItemType | AIChatItemType;
|
||||
export type ChatItemObjItemType = UserChatItemType | SystemChatItemType | AIChatItemType;
|
||||
|
||||
export type ChatItemSchema = ChatItemMergeType & {
|
||||
export type ChatItemSchema = ChatItemObjItemType & {
|
||||
dataId: string;
|
||||
chatId: string;
|
||||
userId: string;
|
||||
|
|
@ -155,12 +157,12 @@ export type ResponseTagItemType = {
|
|||
toolCiteLinks?: ToolCiteLinksType[];
|
||||
};
|
||||
|
||||
export type ChatItemType = ChatItemMergeType & {
|
||||
export type ChatItemType = ChatItemObjItemType & {
|
||||
dataId?: string;
|
||||
} & ResponseTagItemType;
|
||||
|
||||
// Frontend type
|
||||
export type ChatSiteItemType = ChatItemMergeType & {
|
||||
export type ChatSiteItemType = ChatItemObjItemType & {
|
||||
_id?: string;
|
||||
id: string;
|
||||
dataId: string;
|
||||
|
|
|
|||
|
|
@ -1,6 +1,6 @@
|
|||
import { type DispatchNodeResponseType } from '../workflow/runtime/type';
|
||||
import { FlowNodeTypeEnum } from '../workflow/node/constant';
|
||||
import { ChatItemValueTypeEnum, ChatRoleEnum, ChatSourceEnum } from './constants';
|
||||
import { ChatRoleEnum, ChatSourceEnum } from './constants';
|
||||
import {
|
||||
type AIChatItemValueItemType,
|
||||
type ChatHistoryItemResType,
|
||||
|
|
@ -24,7 +24,7 @@ export const concatHistories = (histories1: ChatItemType[], histories2: ChatItem
|
|||
|
||||
export const getChatTitleFromChatMessage = (message?: ChatItemType, defaultValue = '新对话') => {
|
||||
// @ts-ignore
|
||||
const textMsg = message?.value.find((item) => item.type === ChatItemValueTypeEnum.text);
|
||||
const textMsg = message?.value.find((item) => 'text' in item && item.text);
|
||||
|
||||
if (textMsg?.text?.content) {
|
||||
return textMsg.text.content.slice(0, 20);
|
||||
|
|
@ -97,8 +97,8 @@ export const filterPublicNodeResponseData = ({
|
|||
[FlowNodeTypeEnum.datasetSearchNode]: true,
|
||||
[FlowNodeTypeEnum.agent]: true,
|
||||
[FlowNodeTypeEnum.pluginOutput]: true,
|
||||
|
||||
[FlowNodeTypeEnum.runApp]: true
|
||||
[FlowNodeTypeEnum.runApp]: true,
|
||||
[FlowNodeTypeEnum.toolCall]: true
|
||||
};
|
||||
|
||||
const filedMap: Record<string, boolean> = responseDetail
|
||||
|
|
@ -168,14 +168,14 @@ export const removeAIResponseCite = <T extends AIChatItemValueItemType[] | strin
|
|||
export const removeEmptyUserInput = (input?: UserChatItemValueItemType[]) => {
|
||||
return (
|
||||
input?.filter((item) => {
|
||||
if (item.type === ChatItemValueTypeEnum.text && !item.text?.content?.trim()) {
|
||||
return false;
|
||||
if (item.text?.content?.trim()) {
|
||||
return true;
|
||||
}
|
||||
// type 为 'file' 时 key 和 url 不能同时为空
|
||||
if (item.type === ChatItemValueTypeEnum.file && !item.file?.key && !item.file?.url) {
|
||||
if (!item.file?.key && !item.file?.url) {
|
||||
return false;
|
||||
}
|
||||
return true;
|
||||
return false;
|
||||
}) || []
|
||||
);
|
||||
};
|
||||
|
|
|
|||
|
|
@ -168,6 +168,12 @@ export enum NodeInputKeyEnum {
|
|||
aiChatResponseFormat = 'aiChatResponseFormat',
|
||||
aiChatJsonSchema = 'aiChatJsonSchema',
|
||||
|
||||
// agent
|
||||
subApps = 'subApps',
|
||||
isAskAgent = 'isAskAgent',
|
||||
isPlanAgent = 'isPlanAgent',
|
||||
isConfirmPlanAgent = 'isConfirmPlanAgent',
|
||||
|
||||
// dataset
|
||||
datasetSelectList = 'datasets',
|
||||
datasetSimilarity = 'similarity',
|
||||
|
|
|
|||
|
|
@ -135,7 +135,8 @@ export enum FlowNodeTypeEnum {
|
|||
pluginInput = 'pluginInput',
|
||||
pluginOutput = 'pluginOutput',
|
||||
queryExtension = 'cfr',
|
||||
agent = 'tools',
|
||||
agent = 'agent',
|
||||
toolCall = 'tools',
|
||||
stopTool = 'stopTool',
|
||||
toolParams = 'toolParams',
|
||||
lafModule = 'lafModule',
|
||||
|
|
|
|||
|
|
@ -11,6 +11,7 @@ export enum SseResponseEventEnum {
|
|||
toolCall = 'toolCall', // tool start
|
||||
toolParams = 'toolParams', // tool params return
|
||||
toolResponse = 'toolResponse', // tool response return
|
||||
|
||||
flowResponses = 'flowResponses', // sse response request
|
||||
updateVariables = 'updateVariables',
|
||||
|
||||
|
|
@ -40,3 +41,6 @@ export const needReplaceReferenceInputTypeList = [
|
|||
FlowNodeInputTypeEnum.addInputParam,
|
||||
FlowNodeInputTypeEnum.custom
|
||||
] as string[];
|
||||
|
||||
// Interactive
|
||||
export const ConfirmPlanAgentText = 'CONFIRM';
|
||||
|
|
|
|||
|
|
@ -23,7 +23,7 @@ import type { WorkflowResponseType } from '../../../../service/core/workflow/dis
|
|||
import type { AiChatQuoteRoleType } from '../template/system/aiChat/type';
|
||||
import type { OpenaiAccountType } from '../../../support/user/team/type';
|
||||
import { LafAccountType } from '../../../support/user/team/type';
|
||||
import type { CompletionFinishReason } from '../../ai/type';
|
||||
import type { ChatCompletionMessageParam, CompletionFinishReason } from '../../ai/type';
|
||||
import type {
|
||||
InteractiveNodeResponseType,
|
||||
WorkflowInteractiveResponseType
|
||||
|
|
@ -83,6 +83,8 @@ export type ChatDispatchProps = {
|
|||
|
||||
responseAllData?: boolean;
|
||||
responseDetail?: boolean;
|
||||
|
||||
// TOOD: 移除
|
||||
usageId?: string;
|
||||
};
|
||||
|
||||
|
|
@ -93,6 +95,7 @@ export type ModuleDispatchProps<T> = ChatDispatchProps & {
|
|||
params: T;
|
||||
|
||||
mcpClientMemory: Record<string, MCPClient>; // key: url
|
||||
usagePush: (usages: ChatNodeUsageType[]) => void;
|
||||
};
|
||||
|
||||
export type SystemVariablesType = {
|
||||
|
|
|
|||
|
|
@ -1,6 +1,6 @@
|
|||
import json5 from 'json5';
|
||||
import { replaceVariable, valToStr } from '../../../common/string/tools';
|
||||
import { ChatItemValueTypeEnum, ChatRoleEnum } from '../../../core/chat/constants';
|
||||
import { ChatRoleEnum } from '../../../core/chat/constants';
|
||||
import type { ChatItemType, NodeOutputItemType } from '../../../core/chat/type';
|
||||
import { ChatCompletionRequestMessageRoleEnum } from '../../ai/constants';
|
||||
import {
|
||||
|
|
@ -170,11 +170,7 @@ export const getLastInteractiveValue = (
|
|||
if (lastAIMessage) {
|
||||
const lastValue = lastAIMessage.value[lastAIMessage.value.length - 1];
|
||||
|
||||
if (
|
||||
!lastValue ||
|
||||
lastValue.type !== ChatItemValueTypeEnum.interactive ||
|
||||
!lastValue.interactive
|
||||
) {
|
||||
if (!lastValue || !lastValue.interactive) {
|
||||
return;
|
||||
}
|
||||
|
||||
|
|
@ -184,20 +180,38 @@ export const getLastInteractiveValue = (
|
|||
|
||||
// Check is user select
|
||||
if (
|
||||
lastValue.interactive.type === 'userSelect' &&
|
||||
!lastValue.interactive.params.userSelectedVal
|
||||
(lastValue.interactive.type === 'userSelect' ||
|
||||
lastValue.interactive.type === 'agentPlanAskUserSelect') &&
|
||||
!lastValue.interactive?.params?.userSelectedVal
|
||||
) {
|
||||
return lastValue.interactive;
|
||||
}
|
||||
|
||||
// Check is user input
|
||||
if (lastValue.interactive.type === 'userInput' && !lastValue.interactive.params.submitted) {
|
||||
if (
|
||||
(lastValue.interactive.type === 'userInput' ||
|
||||
lastValue.interactive.type === 'agentPlanAskUserForm') &&
|
||||
!lastValue.interactive?.params?.submitted
|
||||
) {
|
||||
return lastValue.interactive;
|
||||
}
|
||||
|
||||
if (lastValue.interactive.type === 'paymentPause' && !lastValue.interactive.params.continue) {
|
||||
return lastValue.interactive;
|
||||
}
|
||||
|
||||
// Agent plan check
|
||||
if (
|
||||
lastValue.interactive.type === 'agentPlanCheck' &&
|
||||
!lastValue.interactive?.params?.confirmed
|
||||
) {
|
||||
return lastValue.interactive;
|
||||
}
|
||||
|
||||
// Agent plan ask query
|
||||
if (lastValue.interactive.type === 'agentPlanAskQuery') {
|
||||
return lastValue.interactive;
|
||||
}
|
||||
}
|
||||
|
||||
return;
|
||||
|
|
@ -364,7 +378,6 @@ export const checkNodeRunStatus = ({
|
|||
|
||||
// Classify edges
|
||||
const { commonEdges, recursiveEdgeGroups } = splitNodeEdges(node);
|
||||
|
||||
// Entry
|
||||
if (commonEdges.length === 0 && recursiveEdgeGroups.length === 0) {
|
||||
return 'run';
|
||||
|
|
|
|||
|
|
@ -11,7 +11,7 @@ import { SystemConfigNode } from './system/systemConfig';
|
|||
import { WorkflowStart } from './system/workflowStart';
|
||||
|
||||
import { StopToolNode } from './system/stopTool';
|
||||
import { AgentNode } from './system/agent';
|
||||
import { ToolCallNode } from './system/toolCall';
|
||||
|
||||
import { RunAppModule } from './system/abandoned/runApp/index';
|
||||
import { PluginInputModule } from './system/pluginInput';
|
||||
|
|
@ -45,7 +45,7 @@ const systemNodes: FlowNodeTemplateType[] = [
|
|||
ClassifyQuestionModule,
|
||||
ContextExtractModule,
|
||||
DatasetConcatModule,
|
||||
AgentNode,
|
||||
ToolCallNode,
|
||||
ToolParamsNode,
|
||||
StopToolNode,
|
||||
ReadFilesNode,
|
||||
|
|
|
|||
|
|
@ -0,0 +1,75 @@
|
|||
import {
|
||||
chatHistoryValueDesc,
|
||||
FlowNodeInputTypeEnum,
|
||||
FlowNodeOutputTypeEnum,
|
||||
FlowNodeTypeEnum
|
||||
} from '../../../node/constant';
|
||||
import { type FlowNodeTemplateType } from '../../../type/node';
|
||||
import {
|
||||
WorkflowIOValueTypeEnum,
|
||||
NodeInputKeyEnum,
|
||||
NodeOutputKeyEnum,
|
||||
FlowNodeTemplateTypeEnum
|
||||
} from '../../../constants';
|
||||
import {
|
||||
Input_Template_SettingAiModel,
|
||||
Input_Template_Dataset_Quote,
|
||||
Input_Template_History,
|
||||
Input_Template_System_Prompt,
|
||||
Input_Template_UserChatInput,
|
||||
Input_Template_File_Link
|
||||
} from '../../input';
|
||||
import { i18nT } from '../../../../../../web/i18n/utils';
|
||||
|
||||
export const AgentNode: FlowNodeTemplateType = {
|
||||
id: FlowNodeTypeEnum.agent,
|
||||
flowNodeType: FlowNodeTypeEnum.agent,
|
||||
templateType: FlowNodeTemplateTypeEnum.ai,
|
||||
showSourceHandle: true,
|
||||
showTargetHandle: true,
|
||||
avatar: 'core/workflow/template/agent',
|
||||
name: 'Agent',
|
||||
intro: 'Agent',
|
||||
showStatus: true,
|
||||
isTool: true,
|
||||
version: '4.13.0',
|
||||
catchError: false,
|
||||
inputs: [
|
||||
{
|
||||
key: NodeInputKeyEnum.aiModel,
|
||||
renderTypeList: [FlowNodeInputTypeEnum.selectLLMModel], // Set in the pop-up window
|
||||
label: i18nT('common:core.module.input.label.aiModel'),
|
||||
valueType: WorkflowIOValueTypeEnum.string
|
||||
},
|
||||
Input_Template_System_Prompt,
|
||||
Input_Template_History,
|
||||
{
|
||||
key: NodeInputKeyEnum.subApps,
|
||||
renderTypeList: [FlowNodeInputTypeEnum.hidden], // Set in the pop-up window
|
||||
label: '',
|
||||
valueType: WorkflowIOValueTypeEnum.object
|
||||
},
|
||||
{ ...Input_Template_UserChatInput, toolDescription: i18nT('workflow:user_question') }
|
||||
],
|
||||
outputs: [
|
||||
{
|
||||
id: NodeOutputKeyEnum.history,
|
||||
key: NodeOutputKeyEnum.history,
|
||||
required: true,
|
||||
label: i18nT('common:core.module.output.label.New context'),
|
||||
description: i18nT('common:core.module.output.description.New context'),
|
||||
valueType: WorkflowIOValueTypeEnum.chatHistory,
|
||||
valueDesc: chatHistoryValueDesc,
|
||||
type: FlowNodeOutputTypeEnum.static
|
||||
},
|
||||
{
|
||||
id: NodeOutputKeyEnum.answerText,
|
||||
key: NodeOutputKeyEnum.answerText,
|
||||
required: true,
|
||||
label: i18nT('common:core.module.output.label.Ai response content'),
|
||||
description: i18nT('common:core.module.output.description.Ai response content'),
|
||||
valueType: WorkflowIOValueTypeEnum.string,
|
||||
type: FlowNodeOutputTypeEnum.static
|
||||
}
|
||||
]
|
||||
};
|
||||
|
|
@ -1,11 +1,11 @@
|
|||
import type { NodeOutputItemType } from '../../../../chat/type';
|
||||
import type { FlowNodeOutputItemType } from '../../../type/io';
|
||||
import type { FlowNodeInputTypeEnum } from '../../../node/constant';
|
||||
import type { WorkflowIOValueTypeEnum } from '../../../constants';
|
||||
import type { FlowNodeInputTypeEnum } from '../../../../../core/workflow/node/constant';
|
||||
import type { WorkflowIOValueTypeEnum } from '../../../../../core/workflow/constants';
|
||||
import type { ChatCompletionMessageParam } from '../../../../ai/type';
|
||||
import type { AppFileSelectConfigType } from '../../../../app/type';
|
||||
import type { RuntimeEdgeItemType } from '../../../type/edge';
|
||||
|
||||
type InteractiveBasicType = {
|
||||
export type InteractiveBasicType = {
|
||||
entryNodeIds: string[];
|
||||
memoryEdges: RuntimeEdgeItemType[];
|
||||
nodeOutputs: NodeOutputItemType[];
|
||||
|
|
@ -47,12 +47,26 @@ type LoopInteractive = InteractiveNodeType & {
|
|||
};
|
||||
};
|
||||
|
||||
// Agent Interactive
|
||||
export type AgentPlanCheckInteractive = InteractiveNodeType & {
|
||||
type: 'agentPlanCheck';
|
||||
params: {
|
||||
confirmed?: boolean;
|
||||
};
|
||||
};
|
||||
export type AgentPlanAskQueryInteractive = InteractiveNodeType & {
|
||||
type: 'agentPlanAskQuery';
|
||||
params: {
|
||||
content: string;
|
||||
};
|
||||
};
|
||||
|
||||
export type UserSelectOptionItemType = {
|
||||
key: string;
|
||||
value: string;
|
||||
};
|
||||
type UserSelectInteractive = InteractiveNodeType & {
|
||||
type: 'userSelect';
|
||||
export type UserSelectInteractive = InteractiveNodeType & {
|
||||
type: 'userSelect' | 'agentPlanAskUserSelect';
|
||||
params: {
|
||||
description: string;
|
||||
userSelectOptions: UserSelectOptionItemType[];
|
||||
|
|
@ -86,8 +100,9 @@ export type UserInputFormItemType = {
|
|||
canLocalUpload?: boolean;
|
||||
canUrlUpload?: boolean;
|
||||
} & AppFileSelectConfigType;
|
||||
type UserInputInteractive = InteractiveNodeType & {
|
||||
type: 'userInput';
|
||||
|
||||
export type UserInputInteractive = InteractiveNodeType & {
|
||||
type: 'userInput' | 'agentPlanAskUserForm';
|
||||
params: {
|
||||
description: string;
|
||||
inputForm: UserInputFormItemType[];
|
||||
|
|
@ -110,6 +125,8 @@ export type InteractiveNodeResponseType =
|
|||
| ChildrenInteractive
|
||||
| ToolCallChildrenInteractive
|
||||
| LoopInteractive
|
||||
| PaymentPauseInteractive;
|
||||
| PaymentPauseInteractive
|
||||
| AgentPlanCheckInteractive
|
||||
| AgentPlanAskQueryInteractive;
|
||||
|
||||
export type WorkflowInteractiveResponseType = InteractiveBasicType & InteractiveNodeResponseType;
|
||||
|
|
@ -22,9 +22,9 @@ import { i18nT } from '../../../../../web/i18n/utils';
|
|||
import { Input_Template_File_Link } from '../input';
|
||||
import { Output_Template_Error_Message } from '../output';
|
||||
|
||||
export const AgentNode: FlowNodeTemplateType = {
|
||||
id: FlowNodeTypeEnum.agent,
|
||||
flowNodeType: FlowNodeTypeEnum.agent,
|
||||
export const ToolCallNode: FlowNodeTemplateType = {
|
||||
id: FlowNodeTypeEnum.toolCall,
|
||||
flowNodeType: FlowNodeTypeEnum.toolCall,
|
||||
templateType: FlowNodeTemplateTypeEnum.ai,
|
||||
showSourceHandle: true,
|
||||
showTargetHandle: true,
|
||||
|
|
@ -1,28 +1,31 @@
|
|||
import { retryFn } from '@fastgpt/global/common/system/utils';
|
||||
import { addLog } from '../system/log';
|
||||
import { connectionMongo, type ClientSession } from './index';
|
||||
|
||||
const timeout = 60000;
|
||||
|
||||
export const mongoSessionRun = async <T = unknown>(fn: (session: ClientSession) => Promise<T>) => {
|
||||
const session = await connectionMongo.startSession();
|
||||
return retryFn(async () => {
|
||||
const session = await connectionMongo.startSession();
|
||||
|
||||
try {
|
||||
session.startTransaction({
|
||||
maxCommitTimeMS: timeout
|
||||
});
|
||||
const result = await fn(session);
|
||||
try {
|
||||
session.startTransaction({
|
||||
maxCommitTimeMS: timeout
|
||||
});
|
||||
const result = await fn(session);
|
||||
|
||||
await session.commitTransaction();
|
||||
await session.commitTransaction();
|
||||
|
||||
return result as T;
|
||||
} catch (error) {
|
||||
if (!session.transaction.isCommitted) {
|
||||
await session.abortTransaction();
|
||||
} else {
|
||||
addLog.warn('Un catch mongo session error', { error });
|
||||
return result as T;
|
||||
} catch (error) {
|
||||
if (!session.transaction.isCommitted) {
|
||||
await session.abortTransaction();
|
||||
} else {
|
||||
addLog.warn('Un catch mongo session error', { error });
|
||||
}
|
||||
return Promise.reject(error);
|
||||
} finally {
|
||||
await session.endSession();
|
||||
}
|
||||
return Promise.reject(error);
|
||||
} finally {
|
||||
await session.endSession();
|
||||
}
|
||||
});
|
||||
};
|
||||
|
|
|
|||
|
|
@ -6,6 +6,7 @@ import { replaceSensitiveText } from '@fastgpt/global/common/string/tools';
|
|||
import { UserError } from '@fastgpt/global/common/error/utils';
|
||||
import { clearCookie } from '../../support/permission/auth/common';
|
||||
import { ZodError } from 'zod';
|
||||
import type Stream from 'node:stream';
|
||||
|
||||
export interface ResponseType<T = any> {
|
||||
code: number;
|
||||
|
|
@ -175,7 +176,7 @@ export function responseWriteController({
|
|||
readStream
|
||||
}: {
|
||||
res: NextApiResponse;
|
||||
readStream: any;
|
||||
readStream: Stream.Readable;
|
||||
}) {
|
||||
res.on('drain', () => {
|
||||
readStream?.resume?.();
|
||||
|
|
@ -191,16 +192,14 @@ export function responseWriteController({
|
|||
|
||||
export function responseWrite({
|
||||
res,
|
||||
write,
|
||||
event,
|
||||
data
|
||||
}: {
|
||||
res?: NextApiResponse;
|
||||
write?: (text: string) => void;
|
||||
event?: string;
|
||||
data: string;
|
||||
}) {
|
||||
const Write = write || res?.write;
|
||||
const Write = res?.write;
|
||||
|
||||
if (!Write) return;
|
||||
|
||||
|
|
|
|||
|
|
@ -0,0 +1,213 @@
|
|||
import type {
|
||||
ChatCompletionMessageParam,
|
||||
ChatCompletionTool,
|
||||
ChatCompletionMessageToolCall
|
||||
} from '@fastgpt/global/core/ai/type';
|
||||
import { ChatCompletionRequestMessageRoleEnum } from '@fastgpt/global/core/ai/constants';
|
||||
import { GPTMessages2Chats } from '@fastgpt/global/core/chat/adapt';
|
||||
import type { AIChatItemType, AIChatItemValueItemType } from '@fastgpt/global/core/chat/type';
|
||||
import type {
|
||||
InteractiveNodeResponseType,
|
||||
WorkflowInteractiveResponseType
|
||||
} from '@fastgpt/global/core/workflow/template/system/interactive/type';
|
||||
import type { CreateLLMResponseProps, ResponseEvents } from './request';
|
||||
import { createLLMResponse } from './request';
|
||||
import type { LLMModelItemType } from '@fastgpt/global/core/ai/model.d';
|
||||
import type { ChatNodeUsageType } from '@fastgpt/global/support/wallet/bill/type';
|
||||
import { countGptMessagesTokens, countPromptTokens } from '../../../common/string/tiktoken/index';
|
||||
import { addLog } from '../../../common/system/log';
|
||||
import type { AgentPlanStepType } from '../../workflow/dispatch/ai/agent/sub/plan/type';
|
||||
import { calculateCompressionThresholds } from './compress/constants';
|
||||
import { compressRequestMessages, compressToolcallResponse } from './compress';
|
||||
|
||||
type RunAgentCallProps = {
|
||||
maxRunAgentTimes: number;
|
||||
interactiveEntryToolParams?: WorkflowInteractiveResponseType['toolParams'];
|
||||
currentStep: AgentPlanStepType;
|
||||
|
||||
body: {
|
||||
messages: ChatCompletionMessageParam[];
|
||||
model: LLMModelItemType;
|
||||
temperature?: number;
|
||||
top_p?: number;
|
||||
stream?: boolean;
|
||||
subApps: ChatCompletionTool[];
|
||||
};
|
||||
|
||||
userKey?: CreateLLMResponseProps['userKey'];
|
||||
isAborted?: CreateLLMResponseProps['isAborted'];
|
||||
|
||||
getToolInfo: (id: string) => {
|
||||
name: string;
|
||||
avatar: string;
|
||||
};
|
||||
handleToolResponse: (e: {
|
||||
call: ChatCompletionMessageToolCall;
|
||||
messages: ChatCompletionMessageParam[];
|
||||
}) => Promise<{
|
||||
response: string;
|
||||
usages: ChatNodeUsageType[];
|
||||
interactive?: InteractiveNodeResponseType;
|
||||
}>;
|
||||
} & ResponseEvents;
|
||||
|
||||
type RunAgentResponse = {
|
||||
completeMessages: ChatCompletionMessageParam[];
|
||||
assistantResponses: AIChatItemValueItemType[];
|
||||
interactiveResponse?: InteractiveNodeResponseType;
|
||||
|
||||
// Usage
|
||||
inputTokens: number;
|
||||
outputTokens: number;
|
||||
subAppUsages: ChatNodeUsageType[];
|
||||
};
|
||||
|
||||
export const runAgentCall = async ({
|
||||
maxRunAgentTimes,
|
||||
interactiveEntryToolParams,
|
||||
currentStep,
|
||||
body: { model, messages, stream, temperature, top_p, subApps },
|
||||
userKey,
|
||||
isAborted,
|
||||
|
||||
handleToolResponse,
|
||||
getToolInfo,
|
||||
|
||||
onReasoning,
|
||||
onStreaming,
|
||||
onToolCall,
|
||||
onToolParam
|
||||
}: RunAgentCallProps): Promise<RunAgentResponse> => {
|
||||
let runTimes = 0;
|
||||
|
||||
const assistantResponses: AIChatItemValueItemType[] = [];
|
||||
let interactiveResponse: InteractiveNodeResponseType | undefined;
|
||||
|
||||
let requestMessages = messages;
|
||||
|
||||
let inputTokens: number = 0;
|
||||
let outputTokens: number = 0;
|
||||
const subAppUsages: ChatNodeUsageType[] = [];
|
||||
|
||||
// TODO: interactive rewrite messages
|
||||
|
||||
while (runTimes < maxRunAgentTimes) {
|
||||
// TODO: 费用检测
|
||||
runTimes++;
|
||||
|
||||
// 对请求的 requestMessages 进行压缩
|
||||
const taskDescription = currentStep.description || currentStep.title;
|
||||
if (taskDescription) {
|
||||
const result = await compressRequestMessages(requestMessages, model, taskDescription);
|
||||
requestMessages = result.messages;
|
||||
inputTokens += result.usage?.inputTokens || 0;
|
||||
outputTokens += result.usage?.outputTokens || 0;
|
||||
}
|
||||
|
||||
// Request LLM
|
||||
let {
|
||||
reasoningText: reasoningContent,
|
||||
answerText: answer,
|
||||
toolCalls = [],
|
||||
usage,
|
||||
getEmptyResponseTip,
|
||||
completeMessages
|
||||
} = await createLLMResponse({
|
||||
body: {
|
||||
model,
|
||||
messages: requestMessages,
|
||||
tool_choice: 'auto',
|
||||
toolCallMode: model.toolChoice ? 'toolChoice' : 'prompt',
|
||||
tools: subApps,
|
||||
parallel_tool_calls: true,
|
||||
stream,
|
||||
temperature,
|
||||
top_p
|
||||
},
|
||||
userKey,
|
||||
isAborted,
|
||||
onReasoning,
|
||||
onStreaming,
|
||||
onToolCall,
|
||||
onToolParam
|
||||
});
|
||||
|
||||
if (!answer && !reasoningContent && !toolCalls.length) {
|
||||
return Promise.reject(getEmptyResponseTip());
|
||||
}
|
||||
|
||||
const requestMessagesLength = requestMessages.length;
|
||||
requestMessages = completeMessages.slice();
|
||||
|
||||
for await (const tool of toolCalls) {
|
||||
// TODO: 加入交互节点处理
|
||||
|
||||
// Call tool and compress tool response
|
||||
const { response, usages, interactive } = await handleToolResponse({
|
||||
call: tool,
|
||||
messages: requestMessages.slice(0, requestMessagesLength)
|
||||
}).then(async (res) => {
|
||||
const thresholds = calculateCompressionThresholds(model.maxContext);
|
||||
const toolTokenCount = await countPromptTokens(res.response);
|
||||
|
||||
const response = await (async () => {
|
||||
if (toolTokenCount > thresholds.singleTool.threshold && currentStep) {
|
||||
const taskDescription = currentStep.description || currentStep.title;
|
||||
return await compressToolcallResponse(
|
||||
res.response,
|
||||
model,
|
||||
tool.function.name,
|
||||
taskDescription,
|
||||
thresholds.singleTool.target
|
||||
);
|
||||
} else {
|
||||
return res.response;
|
||||
}
|
||||
})();
|
||||
|
||||
return {
|
||||
...res,
|
||||
response
|
||||
};
|
||||
});
|
||||
|
||||
requestMessages.push({
|
||||
tool_call_id: tool.id,
|
||||
role: ChatCompletionRequestMessageRoleEnum.Tool,
|
||||
content: response
|
||||
});
|
||||
|
||||
subAppUsages.push(...usages);
|
||||
|
||||
if (interactive) {
|
||||
interactiveResponse = interactive;
|
||||
}
|
||||
}
|
||||
|
||||
// TODO: 移动到工作流里 assistantResponses concat
|
||||
const currentAssistantResponses = GPTMessages2Chats({
|
||||
messages: requestMessages.slice(requestMessagesLength),
|
||||
getToolInfo
|
||||
})[0] as AIChatItemType;
|
||||
if (currentAssistantResponses) {
|
||||
assistantResponses.push(...currentAssistantResponses.value);
|
||||
}
|
||||
|
||||
// Usage concat
|
||||
inputTokens += usage.inputTokens;
|
||||
outputTokens += usage.outputTokens;
|
||||
|
||||
if (toolCalls.length === 0) {
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
return {
|
||||
inputTokens,
|
||||
outputTokens,
|
||||
completeMessages: requestMessages,
|
||||
assistantResponses,
|
||||
subAppUsages,
|
||||
interactiveResponse
|
||||
};
|
||||
};
|
||||
|
|
@ -32,10 +32,10 @@ import { getErrText } from '@fastgpt/global/common/error/utils';
|
|||
import json5 from 'json5';
|
||||
|
||||
export type ResponseEvents = {
|
||||
onStreaming?: ({ text }: { text: string }) => void;
|
||||
onReasoning?: ({ text }: { text: string }) => void;
|
||||
onToolCall?: ({ call }: { call: ChatCompletionMessageToolCall }) => void;
|
||||
onToolParam?: ({ tool, params }: { tool: ChatCompletionMessageToolCall; params: string }) => void;
|
||||
onStreaming?: (e: { text: string }) => void;
|
||||
onReasoning?: (e: { text: string }) => void;
|
||||
onToolCall?: (e: { call: ChatCompletionMessageToolCall }) => void;
|
||||
onToolParam?: (e: { call: ChatCompletionMessageToolCall; params: string }) => void;
|
||||
};
|
||||
|
||||
export type CreateLLMResponseProps<T extends CompletionsBodyType = CompletionsBodyType> = {
|
||||
|
|
@ -260,7 +260,7 @@ export const createStreamResponse = async ({
|
|||
if (currentTool && arg) {
|
||||
currentTool.function.arguments += arg;
|
||||
|
||||
onToolParam?.({ tool: currentTool, params: arg });
|
||||
onToolParam?.({ call: currentTool, params: arg });
|
||||
}
|
||||
}
|
||||
});
|
||||
|
|
|
|||
|
|
@ -2,8 +2,11 @@ import { addLog } from '../../common/system/log';
|
|||
import { MongoChatItem } from './chatItemSchema';
|
||||
import { MongoChat } from './chatSchema';
|
||||
import axios from 'axios';
|
||||
import { type AIChatItemType, type UserChatItemType } from '@fastgpt/global/core/chat/type';
|
||||
import { ChatItemValueTypeEnum } from '@fastgpt/global/core/chat/constants';
|
||||
import {
|
||||
type AIChatItemType,
|
||||
type ChatItemType,
|
||||
type UserChatItemType
|
||||
} from '@fastgpt/global/core/chat/type';
|
||||
|
||||
export type Metadata = {
|
||||
[key: string]: {
|
||||
|
|
@ -94,9 +97,9 @@ const pushChatLogInternal = async ({
|
|||
// Pop last two items
|
||||
const question = chatItemHuman.value
|
||||
.map((item) => {
|
||||
if (item.type === ChatItemValueTypeEnum.text) {
|
||||
if (item.text) {
|
||||
return item.text?.content;
|
||||
} else if (item.type === ChatItemValueTypeEnum.file) {
|
||||
} else if (item.file) {
|
||||
if (item.file?.type === 'image') {
|
||||
return ``;
|
||||
}
|
||||
|
|
|
|||
|
|
@ -1,7 +1,11 @@
|
|||
import type { AIChatItemType, UserChatItemType } from '@fastgpt/global/core/chat/type.d';
|
||||
import type {
|
||||
AIChatItemType,
|
||||
ChatHistoryItemResType,
|
||||
UserChatItemType
|
||||
} from '@fastgpt/global/core/chat/type.d';
|
||||
import { MongoApp } from '../app/schema';
|
||||
import type { ChatSourceEnum } from '@fastgpt/global/core/chat/constants';
|
||||
import { ChatItemValueTypeEnum, ChatRoleEnum } from '@fastgpt/global/core/chat/constants';
|
||||
import { ChatRoleEnum } from '@fastgpt/global/core/chat/constants';
|
||||
import { MongoChatItem } from './chatItemSchema';
|
||||
import { MongoChat } from './chatSchema';
|
||||
import { addLog } from '../../common/system/log';
|
||||
|
|
@ -25,6 +29,7 @@ import { removeS3TTL } from '../../common/s3/utils';
|
|||
import { VariableInputEnum } from '@fastgpt/global/core/workflow/constants';
|
||||
import { encryptSecretValue, anyValueDecrypt } from '../../common/secret/utils';
|
||||
import type { SecretValueType } from '@fastgpt/global/common/secret/type';
|
||||
import { ConfirmPlanAgentText } from '@fastgpt/global/core/workflow/runtime/constants';
|
||||
|
||||
export type Props = {
|
||||
chatId: string;
|
||||
|
|
@ -51,7 +56,7 @@ export type Props = {
|
|||
const beforProcess = (props: Props) => {
|
||||
// Remove url
|
||||
props.userContent.value.forEach((item) => {
|
||||
if (item.type === ChatItemValueTypeEnum.file && item.file?.key) {
|
||||
if (item.file?.key) {
|
||||
item.file.url = '';
|
||||
}
|
||||
});
|
||||
|
|
@ -74,12 +79,12 @@ const afterProcess = async ({
|
|||
const keys: string[] = [];
|
||||
|
||||
// 1. chat file
|
||||
if (valueItem.type === ChatItemValueTypeEnum.file && valueItem.file?.key) {
|
||||
if ('file' in valueItem && valueItem.file?.key) {
|
||||
keys.push(valueItem.file.key);
|
||||
}
|
||||
|
||||
// 2. plugin input
|
||||
if (valueItem.type === 'text' && valueItem.text?.content) {
|
||||
if ('text' in valueItem && valueItem.text?.content) {
|
||||
try {
|
||||
const parsed = JSON.parse(valueItem.text.content);
|
||||
// 2.1 plugin input - 数组格式
|
||||
|
|
@ -173,7 +178,7 @@ const formatAiContent = ({
|
|||
};
|
||||
}
|
||||
return responseItem;
|
||||
});
|
||||
}) as ChatHistoryItemResType[] | undefined;
|
||||
|
||||
return {
|
||||
aiResponse: {
|
||||
|
|
@ -207,7 +212,7 @@ const getChatDataLog = async ({
|
|||
};
|
||||
};
|
||||
|
||||
export async function saveChat(props: Props) {
|
||||
export const pushChatRecords = async (props: Props) => {
|
||||
beforProcess(props);
|
||||
|
||||
const {
|
||||
|
|
@ -406,10 +411,15 @@ export async function saveChat(props: Props) {
|
|||
).catch();
|
||||
}
|
||||
} catch (error) {
|
||||
addLog.error(`update chat history error`, error);
|
||||
addLog.error(`Save chat history error`, error);
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
/*
|
||||
更新交互节点,包含两种情况:
|
||||
1. 更新当前的 items,并把 value 追加到当前 items
|
||||
2. 新增 items, 次数只需要改当前的 items 里的交互节点值即可,其他属性追加在新增的 items 里
|
||||
*/
|
||||
export const updateInteractiveChat = async (props: Props) => {
|
||||
beforProcess(props);
|
||||
|
||||
|
|
@ -427,25 +437,44 @@ export const updateInteractiveChat = async (props: Props) => {
|
|||
} = props;
|
||||
if (!chatId) return;
|
||||
|
||||
const { variables: variableList } = getAppChatConfig({
|
||||
chatConfig: appChatConfig,
|
||||
systemConfigNode: getGuideModule(nodes),
|
||||
isPublicFetch: false
|
||||
});
|
||||
|
||||
const chatItem = await MongoChatItem.findOne({ appId, chatId, obj: ChatRoleEnum.AI }).sort({
|
||||
_id: -1
|
||||
});
|
||||
|
||||
if (!chatItem || chatItem.obj !== ChatRoleEnum.AI) return;
|
||||
|
||||
// Update interactive value
|
||||
// Get interactive value
|
||||
const interactiveValue = chatItem.value[chatItem.value.length - 1];
|
||||
|
||||
if (
|
||||
!interactiveValue ||
|
||||
interactiveValue.type !== ChatItemValueTypeEnum.interactive ||
|
||||
!interactiveValue.interactive?.params
|
||||
) {
|
||||
if (!interactiveValue || !interactiveValue.interactive) {
|
||||
return;
|
||||
}
|
||||
interactiveValue.interactive.params = interactiveValue.interactive.params || {};
|
||||
|
||||
// Get interactive response
|
||||
const { text: userInteractiveVal } = chatValue2RuntimePrompt(userContent.value);
|
||||
|
||||
// 拿到的是实参
|
||||
const finalInteractive = extractDeepestInteractive(interactiveValue.interactive);
|
||||
/*
|
||||
需要追加一条 chat_items 记录,而不是修改原来的。
|
||||
1. Ask query: 用户肯定会输入一条新消息
|
||||
2. Plan check 非确认模式,用户也是输入一条消息。
|
||||
*/
|
||||
const pushNewItems =
|
||||
finalInteractive.type === 'agentPlanAskQuery' ||
|
||||
(finalInteractive.type === 'agentPlanCheck' && userInteractiveVal !== ConfirmPlanAgentText);
|
||||
|
||||
if (pushNewItems) {
|
||||
return await pushChatRecords(props);
|
||||
}
|
||||
|
||||
const parsedUserInteractiveVal = (() => {
|
||||
const { text: userInteractiveVal } = chatValue2RuntimePrompt(userContent.value);
|
||||
try {
|
||||
return JSON.parse(userInteractiveVal);
|
||||
} catch (err) {
|
||||
|
|
@ -458,71 +487,117 @@ export const updateInteractiveChat = async (props: Props) => {
|
|||
errorMsg
|
||||
});
|
||||
|
||||
const { variables: variableList } = getAppChatConfig({
|
||||
chatConfig: appChatConfig,
|
||||
systemConfigNode: getGuideModule(nodes),
|
||||
isPublicFetch: false
|
||||
});
|
||||
/*
|
||||
在原来 chat_items 上更新。
|
||||
1. 更新交互响应结果
|
||||
2. 合并 chat_item 数据
|
||||
3. 合并 chat_item_response 数据
|
||||
*/
|
||||
// Update interactive value
|
||||
{
|
||||
if (
|
||||
finalInteractive.type === 'userSelect' ||
|
||||
finalInteractive.type === 'agentPlanAskUserSelect'
|
||||
) {
|
||||
finalInteractive.params.userSelectedVal = userInteractiveVal;
|
||||
} else if (
|
||||
(finalInteractive.type === 'userInput' || finalInteractive.type === 'agentPlanAskUserForm') &&
|
||||
typeof parsedUserInteractiveVal === 'object'
|
||||
) {
|
||||
finalInteractive.params.inputForm = finalInteractive.params.inputForm.map((item) => {
|
||||
const itemValue = parsedUserInteractiveVal[item.key];
|
||||
if (itemValue === undefined) return item;
|
||||
|
||||
let finalInteractive = extractDeepestInteractive(interactiveValue.interactive);
|
||||
|
||||
if (finalInteractive.type === 'userSelect') {
|
||||
finalInteractive.params.userSelectedVal = parsedUserInteractiveVal;
|
||||
} else if (
|
||||
finalInteractive.type === 'userInput' &&
|
||||
typeof parsedUserInteractiveVal === 'object'
|
||||
) {
|
||||
finalInteractive.params.inputForm = finalInteractive.params.inputForm.map((item) => {
|
||||
const itemValue = parsedUserInteractiveVal[item.key];
|
||||
if (itemValue === undefined) return item;
|
||||
|
||||
// 如果是密码类型,加密后存储
|
||||
if (item.type === FlowNodeInputTypeEnum.password) {
|
||||
const decryptedVal = anyValueDecrypt(itemValue);
|
||||
if (typeof decryptedVal === 'string') {
|
||||
// 如果是密码类型,加密后存储
|
||||
if (item.type === FlowNodeInputTypeEnum.password) {
|
||||
const decryptedVal = anyValueDecrypt(itemValue);
|
||||
if (typeof decryptedVal === 'string') {
|
||||
return {
|
||||
...item,
|
||||
value: encryptSecretValue({
|
||||
value: decryptedVal,
|
||||
secret: ''
|
||||
} as SecretValueType)
|
||||
};
|
||||
}
|
||||
return {
|
||||
...item,
|
||||
value: encryptSecretValue({
|
||||
value: decryptedVal,
|
||||
secret: ''
|
||||
} as SecretValueType)
|
||||
value: itemValue
|
||||
};
|
||||
}
|
||||
|
||||
return {
|
||||
...item,
|
||||
value: itemValue
|
||||
};
|
||||
}
|
||||
});
|
||||
finalInteractive.params.submitted = true;
|
||||
} else if (finalInteractive.type === 'paymentPause') {
|
||||
chatItem.value.pop();
|
||||
} else if (finalInteractive.type === 'agentPlanCheck') {
|
||||
finalInteractive.params.confirmed = true;
|
||||
}
|
||||
}
|
||||
|
||||
return {
|
||||
...item,
|
||||
value: itemValue
|
||||
// Update current items
|
||||
{
|
||||
if (aiContent.customFeedbacks) {
|
||||
chatItem.customFeedbacks = chatItem.customFeedbacks
|
||||
? [...chatItem.customFeedbacks, ...aiContent.customFeedbacks]
|
||||
: aiContent.customFeedbacks;
|
||||
}
|
||||
if (aiContent.value) {
|
||||
chatItem.value = chatItem.value ? [...chatItem.value, ...aiContent.value] : aiContent.value;
|
||||
}
|
||||
if (aiResponse.citeCollectionIds) {
|
||||
chatItem.citeCollectionIds = chatItem.citeCollectionIds
|
||||
? [...chatItem.citeCollectionIds, ...aiResponse.citeCollectionIds]
|
||||
: aiResponse.citeCollectionIds;
|
||||
}
|
||||
|
||||
if (aiContent.memories) {
|
||||
chatItem.memories = {
|
||||
...chatItem.memories,
|
||||
...aiContent.memories
|
||||
};
|
||||
});
|
||||
finalInteractive.params.submitted = true;
|
||||
} else if (finalInteractive.type === 'paymentPause') {
|
||||
chatItem.value.pop();
|
||||
}
|
||||
|
||||
chatItem.durationSeconds = chatItem.durationSeconds
|
||||
? +(chatItem.durationSeconds + durationSeconds).toFixed(2)
|
||||
: durationSeconds;
|
||||
}
|
||||
|
||||
if (aiResponse.customFeedbacks) {
|
||||
chatItem.customFeedbacks = chatItem.customFeedbacks
|
||||
? [...chatItem.customFeedbacks, ...aiResponse.customFeedbacks]
|
||||
: aiResponse.customFeedbacks;
|
||||
}
|
||||
if (aiResponse.value) {
|
||||
chatItem.value = chatItem.value ? [...chatItem.value, ...aiResponse.value] : aiResponse.value;
|
||||
}
|
||||
if (aiResponse.citeCollectionIds) {
|
||||
chatItem.citeCollectionIds = chatItem.citeCollectionIds
|
||||
? [...chatItem.citeCollectionIds, ...aiResponse.citeCollectionIds]
|
||||
: aiResponse.citeCollectionIds;
|
||||
}
|
||||
|
||||
chatItem.durationSeconds = chatItem.durationSeconds
|
||||
? +(chatItem.durationSeconds + durationSeconds).toFixed(2)
|
||||
: durationSeconds;
|
||||
|
||||
chatItem.markModified('value');
|
||||
await mongoSessionRun(async (session) => {
|
||||
// Merge chat item respones
|
||||
if (nodeResponses) {
|
||||
const lastResponse = await MongoChatItemResponse.findOne({
|
||||
appId,
|
||||
chatId,
|
||||
chatItemDataId: chatItem.dataId
|
||||
})
|
||||
.sort({
|
||||
_id: -1
|
||||
})
|
||||
.lean()
|
||||
.session(session);
|
||||
|
||||
const newResponses = lastResponse?.data
|
||||
? mergeChatResponseData([lastResponse?.data, ...nodeResponses])
|
||||
: nodeResponses;
|
||||
|
||||
await MongoChatItemResponse.create(
|
||||
newResponses.map((item) => ({
|
||||
teamId,
|
||||
appId,
|
||||
chatId,
|
||||
chatItemDataId: chatItem.dataId,
|
||||
data: item
|
||||
})),
|
||||
{ session, ordered: true, ...writePrimary }
|
||||
);
|
||||
}
|
||||
|
||||
await chatItem.save({ session });
|
||||
await MongoChat.updateOne(
|
||||
{
|
||||
|
|
|
|||
|
|
@ -1,4 +1,4 @@
|
|||
import { ChatItemValueTypeEnum, ChatRoleEnum } from '@fastgpt/global/core/chat/constants';
|
||||
import { ChatRoleEnum } from '@fastgpt/global/core/chat/constants';
|
||||
import type { ChatItemType } from '@fastgpt/global/core/chat/type';
|
||||
import { getS3ChatSource } from '../../common/s3/sources/chat';
|
||||
import type { FlowNodeInputItemType } from '@fastgpt/global/core/workflow/type/io';
|
||||
|
|
@ -13,7 +13,7 @@ export const addPreviewUrlToChatItems = async (
|
|||
) => {
|
||||
async function addToChatflow(item: ChatItemType) {
|
||||
for await (const value of item.value) {
|
||||
if (value.type === ChatItemValueTypeEnum.file && value.file && value.file.key) {
|
||||
if ('file' in value && value.file?.key) {
|
||||
value.file.url = await s3ChatSource.createGetChatFileURL({
|
||||
key: value.file.key,
|
||||
external: true
|
||||
|
|
@ -26,7 +26,7 @@ export const addPreviewUrlToChatItems = async (
|
|||
|
||||
for (let j = 0; j < item.value.length; j++) {
|
||||
const value = item.value[j];
|
||||
if (value.type !== ChatItemValueTypeEnum.text) continue;
|
||||
if (!('text' in value)) continue;
|
||||
const inputValueString = value.text?.content || '';
|
||||
const parsedInputValue = JSON.parse(inputValueString) as FlowNodeInputItemType[];
|
||||
|
||||
|
|
|
|||
|
|
@ -23,7 +23,6 @@ import {
|
|||
import type { AIChatNodeProps } from '@fastgpt/global/core/workflow/runtime/type.d';
|
||||
import { replaceVariable } from '@fastgpt/global/common/string/tools';
|
||||
import type { ModuleDispatchProps } from '@fastgpt/global/core/workflow/runtime/type';
|
||||
import { responseWriteController } from '../../../../common/response';
|
||||
import { getLLMModel } from '../../../ai/model';
|
||||
import type { SearchDataResponseItemType } from '@fastgpt/global/core/dataset/type';
|
||||
import type { NodeOutputKeyEnum } from '@fastgpt/global/core/workflow/constants';
|
||||
|
|
@ -175,8 +174,6 @@ export const dispatchChatCompletion = async (props: ChatProps): Promise<ChatResp
|
|||
})()
|
||||
]);
|
||||
|
||||
const write = res ? responseWriteController({ res, readStream: stream }) : undefined;
|
||||
|
||||
const {
|
||||
completeMessages,
|
||||
reasoningText,
|
||||
|
|
@ -206,7 +203,6 @@ export const dispatchChatCompletion = async (props: ChatProps): Promise<ChatResp
|
|||
onReasoning({ text }) {
|
||||
if (!aiChatReasoning) return;
|
||||
workflowStreamResponse?.({
|
||||
write,
|
||||
event: SseResponseEventEnum.answer,
|
||||
data: textAdaptGptResponse({
|
||||
reasoning_content: text
|
||||
|
|
@ -216,7 +212,6 @@ export const dispatchChatCompletion = async (props: ChatProps): Promise<ChatResp
|
|||
onStreaming({ text }) {
|
||||
if (!isResponseAnswerText) return;
|
||||
workflowStreamResponse?.({
|
||||
write,
|
||||
event: SseResponseEventEnum.answer,
|
||||
data: textAdaptGptResponse({
|
||||
text
|
||||
|
|
|
|||
|
|
@ -1,6 +1,6 @@
|
|||
import { chats2GPTMessages } from '@fastgpt/global/core/chat/adapt';
|
||||
import type { ChatItemType } from '@fastgpt/global/core/chat/type.d';
|
||||
import { ChatItemValueTypeEnum, ChatRoleEnum } from '@fastgpt/global/core/chat/constants';
|
||||
import { ChatRoleEnum } from '@fastgpt/global/core/chat/constants';
|
||||
import type { ClassifyQuestionAgentItemType } from '@fastgpt/global/core/workflow/template/system/classifyQuestion/type';
|
||||
import type { NodeInputKeyEnum } from '@fastgpt/global/core/workflow/constants';
|
||||
import { NodeOutputKeyEnum } from '@fastgpt/global/core/workflow/constants';
|
||||
|
|
@ -114,7 +114,6 @@ const completions = async ({
|
|||
obj: ChatRoleEnum.System,
|
||||
value: [
|
||||
{
|
||||
type: ChatItemValueTypeEnum.text,
|
||||
text: {
|
||||
content: getCQSystemPrompt({
|
||||
systemPrompt,
|
||||
|
|
@ -132,7 +131,6 @@ const completions = async ({
|
|||
obj: ChatRoleEnum.Human,
|
||||
value: [
|
||||
{
|
||||
type: ChatItemValueTypeEnum.text,
|
||||
text: {
|
||||
content: userChatInput
|
||||
}
|
||||
|
|
|
|||
|
|
@ -1,7 +1,7 @@
|
|||
import { chats2GPTMessages } from '@fastgpt/global/core/chat/adapt';
|
||||
import { filterGPTMessageByMaxContext } from '../../../ai/llm/utils';
|
||||
import type { ChatItemType } from '@fastgpt/global/core/chat/type.d';
|
||||
import { ChatItemValueTypeEnum, ChatRoleEnum } from '@fastgpt/global/core/chat/constants';
|
||||
import { ChatRoleEnum } from '@fastgpt/global/core/chat/constants';
|
||||
import type { ContextExtractAgentItemType } from '@fastgpt/global/core/workflow/template/system/contextExtract/type';
|
||||
import type { NodeInputKeyEnum } from '@fastgpt/global/core/workflow/constants';
|
||||
import {
|
||||
|
|
@ -23,7 +23,6 @@ import {
|
|||
} from '@fastgpt/global/core/ai/type';
|
||||
import { ChatCompletionRequestMessageRoleEnum } from '@fastgpt/global/core/ai/constants';
|
||||
import { type DispatchNodeResultType } from '@fastgpt/global/core/workflow/runtime/type';
|
||||
import { ModelTypeEnum } from '../../../../../global/core/ai/model';
|
||||
import {
|
||||
getExtractJsonPrompt,
|
||||
getExtractJsonToolPrompt
|
||||
|
|
@ -196,7 +195,6 @@ const toolChoice = async (props: ActionProps) => {
|
|||
obj: ChatRoleEnum.System,
|
||||
value: [
|
||||
{
|
||||
type: ChatItemValueTypeEnum.text,
|
||||
text: {
|
||||
content: getExtractJsonToolPrompt({
|
||||
systemPrompt: description,
|
||||
|
|
@ -211,7 +209,6 @@ const toolChoice = async (props: ActionProps) => {
|
|||
obj: ChatRoleEnum.Human,
|
||||
value: [
|
||||
{
|
||||
type: ChatItemValueTypeEnum.text,
|
||||
text: {
|
||||
content
|
||||
}
|
||||
|
|
@ -300,7 +297,6 @@ const completions = async (props: ActionProps) => {
|
|||
obj: ChatRoleEnum.System,
|
||||
value: [
|
||||
{
|
||||
type: ChatItemValueTypeEnum.text,
|
||||
text: {
|
||||
content: getExtractJsonPrompt({
|
||||
systemPrompt: description,
|
||||
|
|
@ -316,7 +312,6 @@ const completions = async (props: ActionProps) => {
|
|||
obj: ChatRoleEnum.Human,
|
||||
value: [
|
||||
{
|
||||
type: ChatItemValueTypeEnum.text,
|
||||
text: {
|
||||
content
|
||||
}
|
||||
|
|
|
|||
|
|
@ -1,14 +1,197 @@
|
|||
import { replaceVariable } from '@fastgpt/global/common/string/tools';
|
||||
import type { AgentPlanStepType } from './sub/plan/type';
|
||||
import type { AgentPlanType } from './sub/plan/type';
|
||||
import { getLLMModel } from '../../../../ai/model';
|
||||
import { countPromptTokens } from '../../../../../common/string/tiktoken/index';
|
||||
import { createLLMResponse } from '../../../../ai/llm/request';
|
||||
import { ChatCompletionRequestMessageRoleEnum } from '@fastgpt/global/core/ai/constants';
|
||||
import { addLog } from '../../../../../common/system/log';
|
||||
import { calculateCompressionThresholds } from '../../../../ai/llm/compress/constants';
|
||||
|
||||
export const getMultiplePrompt = (obj: {
|
||||
fileCount: number;
|
||||
imgCount: number;
|
||||
question: string;
|
||||
export const getMasterAgentSystemPrompt = async ({
|
||||
steps,
|
||||
step,
|
||||
userInput,
|
||||
background = '',
|
||||
model
|
||||
}: {
|
||||
steps: AgentPlanStepType[];
|
||||
step: AgentPlanStepType;
|
||||
userInput: string;
|
||||
background?: string;
|
||||
model: string;
|
||||
}) => {
|
||||
const prompt = `Number of session file inputs:
|
||||
Document:{{fileCount}}
|
||||
Image:{{imgCount}}
|
||||
------
|
||||
{{question}}`;
|
||||
return replaceVariable(prompt, obj);
|
||||
/**
|
||||
* 压缩步骤提示词(Depends on)
|
||||
* 当 stepPrompt 的 token 长度超过模型最大长度的 15% 时,调用 LLM 压缩到 12%
|
||||
*/
|
||||
const compressStepPrompt = async (
|
||||
stepPrompt: string,
|
||||
model: string,
|
||||
currentDescription: string
|
||||
): Promise<string> => {
|
||||
if (!stepPrompt) return stepPrompt;
|
||||
|
||||
const modelData = getLLMModel(model);
|
||||
if (!modelData) return stepPrompt;
|
||||
|
||||
const tokenCount = await countPromptTokens(stepPrompt);
|
||||
const thresholds = calculateCompressionThresholds(modelData.maxContext);
|
||||
const maxTokenThreshold = thresholds.dependsOn.threshold;
|
||||
|
||||
if (tokenCount <= maxTokenThreshold) {
|
||||
return stepPrompt;
|
||||
}
|
||||
|
||||
const targetTokens = thresholds.dependsOn.target;
|
||||
|
||||
const compressionSystemPrompt = `<role>
|
||||
你是工作流步骤历史压缩专家,擅长从多个已执行步骤的结果中提取关键信息。
|
||||
你的任务是对工作流的执行历史进行智能压缩,在保留关键信息的同时,大幅降低 token 消耗。
|
||||
</role>
|
||||
|
||||
<task_context>
|
||||
输入内容是按照"步骤ID → 步骤标题 → 执行结果"格式组织的多个步骤记录。
|
||||
你需要根据当前任务目标,对这些历史记录进行分级压缩。
|
||||
</task_context>
|
||||
|
||||
<compression_workflow>
|
||||
**第一阶段:快速扫描与相关性评估**
|
||||
|
||||
在开始压缩前,请先在内心完成以下思考(不需要输出):
|
||||
1. 浏览所有步骤,识别每个步骤与当前任务目标的相关性
|
||||
2. 为每个步骤标注相关性等级:
|
||||
- [高]:直接支撑当前任务,包含关键数据或结论
|
||||
- [中]:间接相关,提供背景信息或辅助判断
|
||||
- [低]:弱相关或无关,可大幅精简或省略
|
||||
3. 确定压缩策略:基于相关性等级,决定每个步骤的保留程度
|
||||
|
||||
**第二阶段:执行分级压缩**
|
||||
|
||||
根据第一阶段的评估,按以下策略压缩:
|
||||
|
||||
1. **高相关步骤**(保留度 80-100%)
|
||||
- 完整保留:步骤ID、标题、核心执行结果
|
||||
- 保留所有:具体数据、关键结论、链接引用、重要发现
|
||||
- 仅精简:去除啰嗦的过程描述和冗余表达
|
||||
|
||||
2. **中等相关步骤**(保留度 40-60%)
|
||||
- 保留:步骤ID、标题、核心要点
|
||||
- 提炼:将执行结果浓缩为 2-3 句话
|
||||
- 去除:详细过程、重复信息、次要细节
|
||||
|
||||
3. **低相关步骤**(保留度 10-20%)
|
||||
- 保留:步骤ID、标题
|
||||
- 极简化:一句话总结(或直接省略执行结果)
|
||||
- 判断:如果完全无关,可整体省略该步骤
|
||||
</compression_workflow>
|
||||
|
||||
<compression_principles>
|
||||
- 删繁就简:移除重复、冗长的描述性内容
|
||||
- 去粗取精:针对当前任务目标,保留最相关的要素
|
||||
- 保数据留结论:优先保留具体数据、关键结论、链接引用
|
||||
- 保持时序:按原始顺序输出,不要打乱逻辑
|
||||
- 可追溯性:保留必要的步骤标识,确保能理解信息来源
|
||||
- 识别共性:如果连续多个步骤结果相似,可合并描述
|
||||
</compression_principles>
|
||||
|
||||
<quality_check>
|
||||
压缩完成后,请自我检查:
|
||||
1. 是否达到了目标压缩比例?
|
||||
2. 当前任务所需的关键信息是否都保留了?
|
||||
3. 压缩后的内容是否仍能让后续步骤理解发生了什么?
|
||||
4. 步骤的时序关系是否清晰?
|
||||
</quality_check>`;
|
||||
|
||||
const userPrompt = `请对以下工作流步骤的执行历史进行压缩,保留与当前任务最相关的信息。
|
||||
|
||||
**当前任务目标**:${currentDescription}
|
||||
|
||||
**需要压缩的步骤历史**:
|
||||
${stepPrompt}
|
||||
|
||||
**压缩要求**:
|
||||
- 原始长度:${tokenCount} tokens
|
||||
- 目标长度:约 ${targetTokens} tokens(压缩到原长度的 ${Math.round((targetTokens / tokenCount) * 100)}%)
|
||||
|
||||
**输出格式要求**:
|
||||
1. 保留步骤结构:每个步骤使用"# 步骤ID: [id]\\n\\t - 步骤标题: [title]\\n\\t - 执行结果: [精简后的结果]"的格式
|
||||
2. 根据相关性分级处理:
|
||||
- 与当前任务高度相关的步骤:保留完整的关键信息(数据、结论、链接等)
|
||||
- 中等相关的步骤:提炼要点,移除冗余描述
|
||||
- 低相关的步骤:仅保留一句话总结或省略执行结果
|
||||
3. 保持步骤顺序:按原始顺序输出,不要打乱
|
||||
4. 提取共性:如果连续多个步骤结果相似,可以适当合并描述
|
||||
|
||||
**质量标准**:
|
||||
- 压缩后的内容能让后续步骤理解前置步骤做了什么、得到了什么结果
|
||||
- 保留所有对当前任务有价值的具体数据和关键结论
|
||||
- 移除重复、啰嗦的描述性文字
|
||||
|
||||
请直接输出压缩后的步骤历史:`;
|
||||
|
||||
try {
|
||||
const { answerText } = await createLLMResponse({
|
||||
body: {
|
||||
model: modelData,
|
||||
messages: [
|
||||
{
|
||||
role: ChatCompletionRequestMessageRoleEnum.System,
|
||||
content: compressionSystemPrompt
|
||||
},
|
||||
{
|
||||
role: ChatCompletionRequestMessageRoleEnum.User,
|
||||
content: userPrompt
|
||||
}
|
||||
],
|
||||
temperature: 0.1,
|
||||
stream: false
|
||||
}
|
||||
});
|
||||
|
||||
return answerText || stepPrompt;
|
||||
} catch (error) {
|
||||
console.error('压缩 stepPrompt 失败:', error);
|
||||
// 压缩失败时返回原始内容
|
||||
return stepPrompt;
|
||||
}
|
||||
};
|
||||
|
||||
let stepPrompt = steps
|
||||
.filter((item) => step.depends_on && step.depends_on.includes(item.id))
|
||||
.map(
|
||||
(item) =>
|
||||
`# 步骤ID: ${item.id}\n\t - 步骤标题: ${item.title}\n\t - 执行结果: ${item.response}`
|
||||
)
|
||||
.filter(Boolean)
|
||||
.join('\n');
|
||||
addLog.debug(`Step call depends_on (LLM): ${step.id}`, step.depends_on);
|
||||
// 压缩依赖的上下文
|
||||
stepPrompt = await compressStepPrompt(stepPrompt, model, step.description || step.title);
|
||||
|
||||
return `请根据任务背景、之前步骤的执行结果和当前步骤要求选择并调用相应的工具。如果是一个总结性质的步骤,请整合之前步骤的结果进行总结。
|
||||
【任务背景】
|
||||
目标: ${userInput}
|
||||
前置信息: ${background}
|
||||
|
||||
【当前步骤】
|
||||
步骤ID: ${step.id}
|
||||
步骤标题: ${step.title}
|
||||
|
||||
${
|
||||
stepPrompt
|
||||
? `【之前步骤的执行结果】
|
||||
${stepPrompt}`
|
||||
: ''
|
||||
}
|
||||
|
||||
【执行指导】
|
||||
1. 仔细阅读前面步骤的执行结果,理解已经获得的信息
|
||||
2. 根据当前步骤描述和前面的结果,分析需要使用的工具
|
||||
3. 从可用工具列表中选择最合适的工具
|
||||
4. 基于前面步骤的结果为工具生成合理的参数
|
||||
5. 如果需要多个工具,可以同时调用
|
||||
6. 确保当前步骤的执行能够有效利用和整合前面的结果
|
||||
7. 如果是总结的步骤,请利用之前步骤的信息进行全面总结
|
||||
|
||||
请严格按照步骤描述执行,确保完成所有要求的子任务。`;
|
||||
};
|
||||
|
|
|
|||
|
|
@ -1,23 +1,20 @@
|
|||
import { NodeInputKeyEnum, NodeOutputKeyEnum } from '@fastgpt/global/core/workflow/constants';
|
||||
import { DispatchNodeResponseKeyEnum } from '@fastgpt/global/core/workflow/runtime/constants';
|
||||
import type { NodeOutputKeyEnum } from '@fastgpt/global/core/workflow/constants';
|
||||
import { NodeInputKeyEnum } from '@fastgpt/global/core/workflow/constants';
|
||||
import {
|
||||
ConfirmPlanAgentText,
|
||||
DispatchNodeResponseKeyEnum,
|
||||
SseResponseEventEnum
|
||||
} from '@fastgpt/global/core/workflow/runtime/constants';
|
||||
import type {
|
||||
ChatDispatchProps,
|
||||
DispatchNodeResultType,
|
||||
ModuleDispatchProps,
|
||||
RuntimeNodeItemType
|
||||
} from '@fastgpt/global/core/workflow/runtime/type';
|
||||
import { getLLMModel } from '../../../../ai/model';
|
||||
import { filterToolNodeIdByEdges, getNodeErrResponse, getHistories } from '../../utils';
|
||||
import { runToolCall } from './toolCall';
|
||||
import { type DispatchToolModuleProps, type ToolNodeItemType } from './type';
|
||||
import { type ChatItemType, type UserChatItemValueItemType } from '@fastgpt/global/core/chat/type';
|
||||
import { ChatItemValueTypeEnum, ChatRoleEnum } from '@fastgpt/global/core/chat/constants';
|
||||
import {
|
||||
GPTMessages2Chats,
|
||||
chatValue2RuntimePrompt,
|
||||
chats2GPTMessages,
|
||||
getSystemPrompt_ChatItemType,
|
||||
runtimePrompt2ChatsValue
|
||||
} from '@fastgpt/global/core/chat/adapt';
|
||||
import { getNodeErrResponse, getHistories } from '../../utils';
|
||||
import type { AIChatItemValueItemType, ChatItemType } from '@fastgpt/global/core/chat/type';
|
||||
import { ChatRoleEnum } from '@fastgpt/global/core/chat/constants';
|
||||
import { chats2GPTMessages, chatValue2RuntimePrompt } from '@fastgpt/global/core/chat/adapt';
|
||||
import { formatModelChars2Points } from '../../../../../support/wallet/usage/utils';
|
||||
import { getHistoryPreview } from '@fastgpt/global/core/chat/utils';
|
||||
import { replaceVariable } from '@fastgpt/global/common/string/tools';
|
||||
|
|
@ -37,19 +34,23 @@ type Response = DispatchNodeResultType<{
|
|||
[NodeOutputKeyEnum.answerText]: string;
|
||||
}>;
|
||||
|
||||
export const dispatchRunTools = async (props: DispatchToolModuleProps): Promise<Response> => {
|
||||
export const dispatchRunAgent = async (props: DispatchAgentModuleProps): Promise<Response> => {
|
||||
let {
|
||||
node: { nodeId, name, isEntry, version, inputs },
|
||||
lang,
|
||||
runtimeNodes,
|
||||
runtimeEdges,
|
||||
histories,
|
||||
query,
|
||||
requestOrigin,
|
||||
chatConfig,
|
||||
lastInteractive,
|
||||
runningUserInfo,
|
||||
runningAppInfo,
|
||||
externalProvider,
|
||||
usageId,
|
||||
stream,
|
||||
workflowDispatchDeep,
|
||||
workflowStreamResponse,
|
||||
usagePush,
|
||||
params: {
|
||||
model,
|
||||
systemPrompt,
|
||||
|
|
@ -61,49 +62,111 @@ export const dispatchRunTools = async (props: DispatchToolModuleProps): Promise<
|
|||
isResponseAnswerText = true
|
||||
}
|
||||
} = props;
|
||||
const agentModel = getLLMModel(model);
|
||||
const chatHistories = getHistories(history, histories);
|
||||
const historiesMessages = chats2GPTMessages({
|
||||
messages: chatHistories,
|
||||
reserveId: false,
|
||||
reserveTool: false
|
||||
});
|
||||
|
||||
const planMessagesKey = `planMessages-${nodeId}`;
|
||||
const replanMessagesKey = `replanMessages-${nodeId}`;
|
||||
const agentPlanKey = `agentPlan-${nodeId}`;
|
||||
|
||||
// 交互模式进来的话,这个值才是交互输入的值
|
||||
const interactiveInput = lastInteractive ? chatValue2RuntimePrompt(query).text : '';
|
||||
|
||||
// Get history messages
|
||||
let { planHistoryMessages, replanMessages, agentPlan } = (() => {
|
||||
const lastHistory = chatHistories[chatHistories.length - 1];
|
||||
if (lastHistory && lastHistory.obj === ChatRoleEnum.AI) {
|
||||
return {
|
||||
planHistoryMessages: (lastHistory.memories?.[planMessagesKey] ||
|
||||
[]) as ChatCompletionMessageParam[],
|
||||
replanMessages: (lastHistory.memories?.[replanMessagesKey] ||
|
||||
[]) as ChatCompletionMessageParam[],
|
||||
agentPlan: (lastHistory.memories?.[agentPlanKey] || []) as AgentPlanType
|
||||
};
|
||||
}
|
||||
return {
|
||||
planHistoryMessages: undefined,
|
||||
replanMessages: undefined,
|
||||
agentPlan: undefined
|
||||
};
|
||||
})();
|
||||
|
||||
// Check task complexity: 第一次进入任务时候进行判断。(有 plan了,说明已经开始执行任务了)
|
||||
const isCheckTaskComplexityStep = isPlanAgent && !agentPlan && !planHistoryMessages;
|
||||
|
||||
try {
|
||||
const toolModel = getLLMModel(model);
|
||||
const useVision = aiChatVision && toolModel.vision;
|
||||
const chatHistories = getHistories(history, histories);
|
||||
|
||||
props.params.aiChatVision = aiChatVision && toolModel.vision;
|
||||
props.params.aiChatReasoning = aiChatReasoning && toolModel.reasoning;
|
||||
// Get files
|
||||
const fileUrlInput = inputs.find((item) => item.key === NodeInputKeyEnum.fileUrlList);
|
||||
if (!fileUrlInput || !fileUrlInput.value || fileUrlInput.value.length === 0) {
|
||||
fileLinks = undefined;
|
||||
}
|
||||
<<<<<<< HEAD
|
||||
const { filesMap, prompt: fileInputPrompt } = getFileInputPrompt({
|
||||
fileUrls: fileLinks,
|
||||
requestOrigin,
|
||||
maxFiles: chatConfig?.fileSelectConfig?.maxFiles || 20,
|
||||
histories: chatHistories
|
||||
});
|
||||
|
||||
// Get sub apps
|
||||
const { subAppList, subAppsMap, getSubAppInfo } = await useSubApps({
|
||||
subApps,
|
||||
lang,
|
||||
filesMap
|
||||
});
|
||||
|
||||
/* ===== AI Start ===== */
|
||||
|
||||
/* ===== Check task complexity ===== */
|
||||
const taskIsComplexity = await (async () => {
|
||||
// if (isCheckTaskComplexityStep) {
|
||||
// const res = await checkTaskComplexity({
|
||||
// model,
|
||||
// userChatInput
|
||||
// });
|
||||
// if (res.usage) {
|
||||
// usagePush([res.usage]);
|
||||
// }
|
||||
// return res.complex;
|
||||
// }
|
||||
|
||||
// 对轮运行时候,代表都是进入复杂流程
|
||||
return true;
|
||||
})();
|
||||
|
||||
if (taskIsComplexity) {
|
||||
/* ===== Plan Agent ===== */
|
||||
const planCallFn = async () => {
|
||||
// 点了确认。此时肯定有 agentPlans
|
||||
if (
|
||||
lastInteractive?.type === 'agentPlanCheck' &&
|
||||
interactiveInput === ConfirmPlanAgentText &&
|
||||
agentPlan
|
||||
) {
|
||||
planHistoryMessages = undefined;
|
||||
} else {
|
||||
// 临时代码
|
||||
const tmpText = '正在进行规划生成...\n';
|
||||
workflowStreamResponse?.({
|
||||
event: SseResponseEventEnum.answer,
|
||||
data: textAdaptGptResponse({
|
||||
text: tmpText
|
||||
})
|
||||
});
|
||||
|
||||
<<<<<<<< HEAD:packages/service/core/workflow/dispatch/ai/tool/index.ts
|
||||
const {
|
||||
toolWorkflowInteractiveResponse,
|
||||
toolDispatchFlowResponses, // tool flow response
|
||||
=======
|
||||
|
||||
const toolNodeIds = filterToolNodeIdByEdges({ nodeId, edges: runtimeEdges });
|
||||
|
||||
// Gets the module to which the tool is connected
|
||||
const toolNodes = toolNodeIds
|
||||
.map((nodeId) => {
|
||||
const tool = runtimeNodes.find((item) => item.nodeId === nodeId);
|
||||
return tool;
|
||||
})
|
||||
.filter(Boolean)
|
||||
.map<ToolNodeItemType>((tool) => {
|
||||
const toolParams: FlowNodeInputItemType[] = [];
|
||||
// Raw json schema(MCP tool)
|
||||
let jsonSchema: JSONSchemaInputType | undefined = undefined;
|
||||
tool?.inputs.forEach((input) => {
|
||||
if (input.toolDescription) {
|
||||
toolParams.push(input);
|
||||
}
|
||||
|
||||
if (input.key === NodeInputKeyEnum.toolData || input.key === 'toolData') {
|
||||
const value = input.value as McpToolDataType;
|
||||
jsonSchema = value.inputSchema;
|
||||
}
|
||||
});
|
||||
|
||||
return {
|
||||
...(tool as RuntimeNodeItemType),
|
||||
toolParams,
|
||||
jsonSchema
|
||||
};
|
||||
});
|
||||
const toolNodes = getToolNodesByIds({ toolNodeIds, runtimeNodes });
|
||||
|
||||
// Check interactive entry
|
||||
props.node.isEntry = false;
|
||||
|
|
@ -120,11 +183,15 @@ export const dispatchRunTools = async (props: DispatchToolModuleProps): Promise<
|
|||
customPdfParse: chatConfig?.fileSelectConfig?.customPdfParse,
|
||||
fileLinks,
|
||||
inputFiles: globalFiles,
|
||||
<<<<<<< HEAD
|
||||
hasReadFilesTool,
|
||||
usageId,
|
||||
appId: props.runningAppInfo.id,
|
||||
chatId: props.chatId,
|
||||
uId: props.uid
|
||||
=======
|
||||
hasReadFilesTool
|
||||
>>>>>>> a48ad2abe (squash: compress all commits into one)
|
||||
});
|
||||
|
||||
const concatenateSystemPrompt = [
|
||||
|
|
@ -183,11 +250,16 @@ export const dispatchRunTools = async (props: DispatchToolModuleProps): Promise<
|
|||
|
||||
const {
|
||||
toolWorkflowInteractiveResponse,
|
||||
toolDispatchFlowResponses, // tool flow response
|
||||
dispatchFlowResponse, // tool flow response
|
||||
>>>>>>> 757253617 (squash: compress all commits into one)
|
||||
toolCallInputTokens,
|
||||
toolCallOutputTokens,
|
||||
completeMessages = [], // The actual message sent to AI(just save text)
|
||||
assistantResponses = [], // FastGPT system store assistant.value response
|
||||
<<<<<<< HEAD
|
||||
=======
|
||||
runTimes,
|
||||
>>>>>>> 757253617 (squash: compress all commits into one)
|
||||
finish_reason
|
||||
} = await (async () => {
|
||||
const adaptMessages = chats2GPTMessages({
|
||||
|
|
@ -195,20 +267,162 @@ export const dispatchRunTools = async (props: DispatchToolModuleProps): Promise<
|
|||
reserveId: false
|
||||
// reserveTool: !!toolModel.toolChoice
|
||||
});
|
||||
<<<<<<< HEAD
|
||||
|
||||
return runToolCall({
|
||||
...props,
|
||||
=======
|
||||
const requestParams = {
|
||||
>>>>>>> 757253617 (squash: compress all commits into one)
|
||||
runtimeNodes,
|
||||
runtimeEdges,
|
||||
toolNodes,
|
||||
toolModel,
|
||||
messages: adaptMessages,
|
||||
<<<<<<< HEAD
|
||||
childrenInteractiveParams:
|
||||
lastInteractive?.type === 'toolChildrenInteractive' ? lastInteractive.params : undefined
|
||||
========
|
||||
const { answerText, plan, completeMessages, usages, interactiveResponse } =
|
||||
await dispatchPlanAgent({
|
||||
historyMessages: planHistoryMessages || historiesMessages,
|
||||
userInput: lastInteractive ? interactiveInput : userChatInput,
|
||||
interactive: lastInteractive,
|
||||
subAppList,
|
||||
getSubAppInfo,
|
||||
systemPrompt,
|
||||
model,
|
||||
temperature,
|
||||
top_p: aiChatTopP,
|
||||
stream,
|
||||
isTopPlanAgent: workflowDispatchDeep === 1
|
||||
});
|
||||
|
||||
const text = `${answerText}${plan ? `\n\`\`\`json\n${JSON.stringify(plan, null, 2)}\n\`\`\`` : ''}`;
|
||||
workflowStreamResponse?.({
|
||||
event: SseResponseEventEnum.answer,
|
||||
data: textAdaptGptResponse({
|
||||
text
|
||||
})
|
||||
});
|
||||
|
||||
agentPlan = plan;
|
||||
|
||||
usagePush(usages);
|
||||
// Sub agent plan 不会有交互响应。Top agent plan 肯定会有。
|
||||
if (interactiveResponse) {
|
||||
return {
|
||||
[DispatchNodeResponseKeyEnum.answerText]: `${tmpText}${text}`,
|
||||
[DispatchNodeResponseKeyEnum.memories]: {
|
||||
[planMessagesKey]: filterMemoryMessages(completeMessages),
|
||||
[agentPlanKey]: agentPlan
|
||||
},
|
||||
[DispatchNodeResponseKeyEnum.interactive]: interactiveResponse
|
||||
};
|
||||
} else {
|
||||
planHistoryMessages = undefined;
|
||||
}
|
||||
}
|
||||
};
|
||||
const replanCallFn = async ({ plan }: { plan: AgentPlanType }) => {
|
||||
if (!agentPlan) return;
|
||||
|
||||
addLog.debug(`Replan step`);
|
||||
// 临时代码
|
||||
const tmpText = '\n # 正在重新进行规划生成...\n';
|
||||
workflowStreamResponse?.({
|
||||
event: SseResponseEventEnum.answer,
|
||||
data: textAdaptGptResponse({
|
||||
text: tmpText
|
||||
})
|
||||
});
|
||||
|
||||
const {
|
||||
answerText,
|
||||
plan: rePlan,
|
||||
completeMessages,
|
||||
usages,
|
||||
interactiveResponse
|
||||
} = await dispatchReplanAgent({
|
||||
historyMessages: replanMessages || historiesMessages,
|
||||
userInput: lastInteractive ? interactiveInput : userChatInput,
|
||||
plan,
|
||||
interactive: lastInteractive,
|
||||
subAppList,
|
||||
getSubAppInfo,
|
||||
systemPrompt,
|
||||
model,
|
||||
temperature,
|
||||
top_p: aiChatTopP,
|
||||
stream,
|
||||
isTopPlanAgent: workflowDispatchDeep === 1
|
||||
});
|
||||
|
||||
if (rePlan) {
|
||||
agentPlan.steps.push(...rePlan.steps);
|
||||
agentPlan.replan = rePlan.replan;
|
||||
}
|
||||
|
||||
const text = `${answerText}${agentPlan ? `\n\`\`\`json\n${JSON.stringify(agentPlan, null, 2)}\n\`\`\`\n` : ''}`;
|
||||
workflowStreamResponse?.({
|
||||
event: SseResponseEventEnum.answer,
|
||||
data: textAdaptGptResponse({
|
||||
text
|
||||
})
|
||||
});
|
||||
|
||||
usagePush(usages);
|
||||
// Sub agent plan 不会有交互响应。Top agent plan 肯定会有。
|
||||
if (interactiveResponse) {
|
||||
return {
|
||||
[DispatchNodeResponseKeyEnum.answerText]: `${tmpText}${text}`,
|
||||
[DispatchNodeResponseKeyEnum.memories]: {
|
||||
[replanMessagesKey]: filterMemoryMessages(completeMessages),
|
||||
[agentPlanKey]: agentPlan
|
||||
},
|
||||
[DispatchNodeResponseKeyEnum.interactive]: interactiveResponse
|
||||
};
|
||||
} else {
|
||||
replanMessages = undefined;
|
||||
}
|
||||
};
|
||||
|
||||
// Plan step: 需要生成 plan,且还没有完整的 plan
|
||||
const isPlanStep = isPlanAgent && (!agentPlan || planHistoryMessages);
|
||||
// Replan step: 已有 plan,且有 replan 历史消息
|
||||
const isReplanStep = isPlanAgent && agentPlan && replanMessages;
|
||||
|
||||
// 执行 Plan/replan
|
||||
if (isPlanStep) {
|
||||
const result = await planCallFn();
|
||||
// 有 result 代表 plan 有交互响应(check/ask)
|
||||
if (result) return result;
|
||||
} else if (isReplanStep) {
|
||||
const result = await replanCallFn({
|
||||
plan: agentPlan!
|
||||
});
|
||||
if (result) return result;
|
||||
}
|
||||
|
||||
addLog.debug(`Start master agent`, {
|
||||
agentPlan: JSON.stringify(agentPlan, null, 2)
|
||||
>>>>>>>> 757253617 (squash: compress all commits into one):packages/service/core/workflow/dispatch/ai/agent/index.ts
|
||||
});
|
||||
|
||||
<<<<<<<< HEAD:packages/service/core/workflow/dispatch/ai/tool/index.ts
|
||||
// Usage computed
|
||||
=======
|
||||
interactiveEntryToolParams: lastInteractive?.toolParams
|
||||
};
|
||||
|
||||
return runToolCall({
|
||||
...props,
|
||||
...requestParams,
|
||||
maxRunToolTimes: 100
|
||||
});
|
||||
})();
|
||||
|
||||
// Usage computed
|
||||
>>>>>>> 757253617 (squash: compress all commits into one)
|
||||
const { totalPoints: modelTotalPoints, modelName } = formatModelChars2Points({
|
||||
model,
|
||||
inputTokens: toolCallInputTokens,
|
||||
|
|
@ -216,13 +430,29 @@ export const dispatchRunTools = async (props: DispatchToolModuleProps): Promise<
|
|||
});
|
||||
const modelUsage = externalProvider.openaiAccount?.key ? 0 : modelTotalPoints;
|
||||
|
||||
<<<<<<< HEAD
|
||||
const toolUsages = toolDispatchFlowResponses.map((item) => item.flowUsages).flat();
|
||||
const toolTotalPoints = toolUsages.reduce((sum, item) => sum + item.totalPoints, 0);
|
||||
========
|
||||
/* ===== Master agent, 逐步执行 plan ===== */
|
||||
if (!agentPlan) return Promise.reject('没有 plan');
|
||||
|
||||
let assistantResponses: AIChatItemValueItemType[] = [];
|
||||
>>>>>>>> 757253617 (squash: compress all commits into one):packages/service/core/workflow/dispatch/ai/agent/index.ts
|
||||
|
||||
while (agentPlan.steps!.filter((item) => !item.response)!.length) {
|
||||
const pendingSteps = agentPlan?.steps!.filter((item) => !item.response)!;
|
||||
|
||||
<<<<<<<< HEAD:packages/service/core/workflow/dispatch/ai/tool/index.ts
|
||||
// Preview assistant responses
|
||||
=======
|
||||
const toolUsages = dispatchFlowResponse.map((item) => item.flowUsages).flat();
|
||||
const toolTotalPoints = toolUsages.reduce((sum, item) => sum + item.totalPoints, 0);
|
||||
|
||||
// concat tool usage
|
||||
const totalPointsUsage = modelUsage + toolTotalPoints;
|
||||
|
||||
// Preview assistant responses
|
||||
>>>>>>> 757253617 (squash: compress all commits into one)
|
||||
const previewAssistantResponses = filterToolResponseToPreview(assistantResponses);
|
||||
|
||||
return {
|
||||
|
|
@ -232,13 +462,21 @@ export const dispatchRunTools = async (props: DispatchToolModuleProps): Promise<
|
|||
.map((item) => item.text?.content || '')
|
||||
.join('')
|
||||
},
|
||||
<<<<<<< HEAD
|
||||
[DispatchNodeResponseKeyEnum.runTimes]: toolDispatchFlowResponses.reduce(
|
||||
(sum, item) => sum + item.runTimes,
|
||||
0
|
||||
),
|
||||
<<<<<<< HEAD
|
||||
[DispatchNodeResponseKeyEnum.assistantResponses]: isResponseAnswerText
|
||||
? previewAssistantResponses
|
||||
: undefined,
|
||||
=======
|
||||
=======
|
||||
[DispatchNodeResponseKeyEnum.runTimes]: runTimes,
|
||||
>>>>>>> 757253617 (squash: compress all commits into one)
|
||||
[DispatchNodeResponseKeyEnum.assistantResponses]: previewAssistantResponses,
|
||||
>>>>>>> a48ad2abe (squash: compress all commits into one)
|
||||
[DispatchNodeResponseKeyEnum.nodeResponse]: {
|
||||
// 展示的积分消耗
|
||||
totalPoints: totalPointsUsage,
|
||||
|
|
@ -252,7 +490,11 @@ export const dispatchRunTools = async (props: DispatchToolModuleProps): Promise<
|
|||
10000,
|
||||
useVision
|
||||
),
|
||||
<<<<<<< HEAD
|
||||
toolDetail: toolDispatchFlowResponses.map((item) => item.flowResponses).flat(),
|
||||
=======
|
||||
toolDetail: dispatchFlowResponse.map((item) => item.flowResponses).flat(),
|
||||
>>>>>>> 757253617 (squash: compress all commits into one)
|
||||
mergeSignId: nodeId,
|
||||
finishReason: finish_reason
|
||||
},
|
||||
|
|
@ -264,17 +506,134 @@ export const dispatchRunTools = async (props: DispatchToolModuleProps): Promise<
|
|||
totalPoints: modelUsage,
|
||||
inputTokens: toolCallInputTokens,
|
||||
outputTokens: toolCallOutputTokens
|
||||
<<<<<<< HEAD
|
||||
========
|
||||
for await (const step of pendingSteps) {
|
||||
addLog.debug(`Step call: ${step.id}`, step);
|
||||
|
||||
workflowStreamResponse?.({
|
||||
event: SseResponseEventEnum.answer,
|
||||
data: textAdaptGptResponse({
|
||||
text: `\n # ${step.id}: ${step.title}\n`
|
||||
})
|
||||
});
|
||||
|
||||
const result = await stepCall({
|
||||
...props,
|
||||
getSubAppInfo,
|
||||
steps: agentPlan.steps, // 传入所有步骤,而不仅仅是未执行的步骤
|
||||
subAppList,
|
||||
step,
|
||||
filesMap,
|
||||
subAppsMap
|
||||
});
|
||||
|
||||
step.response = result.rawResponse;
|
||||
step.summary = result.summary;
|
||||
assistantResponses.push(...result.assistantResponses);
|
||||
}
|
||||
|
||||
if (agentPlan?.replan === true) {
|
||||
const replanResult = await replanCallFn({
|
||||
plan: agentPlan
|
||||
});
|
||||
if (replanResult) return replanResult;
|
||||
}
|
||||
}
|
||||
|
||||
return {
|
||||
// 目前 Master 不会触发交互
|
||||
// [DispatchNodeResponseKeyEnum.interactive]: interactiveResponse,
|
||||
// TODO: 需要对 memoryMessages 单独建表存储
|
||||
[DispatchNodeResponseKeyEnum.memories]: {
|
||||
[agentPlanKey]: agentPlan,
|
||||
[planMessagesKey]: undefined,
|
||||
[replanMessagesKey]: undefined
|
||||
>>>>>>>> 757253617 (squash: compress all commits into one):packages/service/core/workflow/dispatch/ai/agent/index.ts
|
||||
},
|
||||
[DispatchNodeResponseKeyEnum.assistantResponses]: assistantResponses,
|
||||
[DispatchNodeResponseKeyEnum.nodeResponse]: {
|
||||
// 展示的积分消耗
|
||||
// totalPoints: totalPointsUsage,
|
||||
// toolCallInputTokens: inputTokens,
|
||||
// toolCallOutputTokens: outputTokens,
|
||||
// childTotalPoints: toolTotalPoints,
|
||||
// model: modelName,
|
||||
query: userChatInput,
|
||||
// toolDetail: dispatchFlowResponse,
|
||||
mergeSignId: nodeId
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
// 简单 tool call 模式(一轮对话就结束了,不会多轮,所以不会受到连续对话的 taskIsComplexity 影响)
|
||||
return Promise.reject('目前未支持简单模式');
|
||||
=======
|
||||
},
|
||||
// 工具的消耗
|
||||
...toolUsages
|
||||
],
|
||||
[DispatchNodeResponseKeyEnum.interactive]: toolWorkflowInteractiveResponse
|
||||
};
|
||||
>>>>>>> 757253617 (squash: compress all commits into one)
|
||||
} catch (error) {
|
||||
return getNodeErrResponse({ error });
|
||||
}
|
||||
};
|
||||
|
||||
<<<<<<< HEAD
|
||||
export const useSubApps = async ({
|
||||
subApps,
|
||||
lang,
|
||||
filesMap
|
||||
}: {
|
||||
subApps: FlowNodeTemplateType[];
|
||||
lang?: localeType;
|
||||
filesMap: Record<string, string>;
|
||||
}) => {
|
||||
// Get sub apps
|
||||
const runtimeSubApps = await rewriteSubAppsToolset({
|
||||
subApps: subApps.map<RuntimeNodeItemType>((node) => {
|
||||
return {
|
||||
nodeId: node.id,
|
||||
name: node.name,
|
||||
avatar: node.avatar,
|
||||
intro: node.intro,
|
||||
toolDescription: node.toolDescription,
|
||||
flowNodeType: node.flowNodeType,
|
||||
showStatus: node.showStatus,
|
||||
isEntry: false,
|
||||
inputs: node.inputs,
|
||||
outputs: node.outputs,
|
||||
pluginId: node.pluginId,
|
||||
version: node.version,
|
||||
toolConfig: node.toolConfig,
|
||||
catchError: node.catchError
|
||||
};
|
||||
}),
|
||||
lang
|
||||
});
|
||||
|
||||
const subAppList = getSubApps({
|
||||
subApps: runtimeSubApps,
|
||||
addReadFileTool: Object.keys(filesMap).length > 0
|
||||
});
|
||||
|
||||
const subAppsMap = new Map(runtimeSubApps.map((item) => [item.nodeId, item]));
|
||||
const getSubAppInfo = (id: string) => {
|
||||
const toolNode = subAppsMap.get(id) || systemSubInfo[id];
|
||||
return {
|
||||
name: toolNode?.name || '',
|
||||
avatar: toolNode?.avatar || '',
|
||||
toolDescription: toolNode?.toolDescription || toolNode?.name || ''
|
||||
};
|
||||
};
|
||||
|
||||
return {
|
||||
subAppList,
|
||||
subAppsMap,
|
||||
getSubAppInfo
|
||||
=======
|
||||
const getMultiInput = async ({
|
||||
runningUserInfo,
|
||||
histories,
|
||||
|
|
@ -283,11 +642,15 @@ const getMultiInput = async ({
|
|||
maxFiles,
|
||||
customPdfParse,
|
||||
inputFiles,
|
||||
<<<<<<< HEAD
|
||||
hasReadFilesTool,
|
||||
usageId,
|
||||
appId,
|
||||
chatId,
|
||||
uId
|
||||
=======
|
||||
hasReadFilesTool
|
||||
>>>>>>> a48ad2abe (squash: compress all commits into one)
|
||||
}: {
|
||||
runningUserInfo: ChatDispatchProps['runningUserInfo'];
|
||||
histories: ChatItemType[];
|
||||
|
|
@ -297,10 +660,13 @@ const getMultiInput = async ({
|
|||
customPdfParse?: boolean;
|
||||
inputFiles: UserChatItemValueItemType['file'][];
|
||||
hasReadFilesTool: boolean;
|
||||
<<<<<<< HEAD
|
||||
usageId?: string;
|
||||
appId: string;
|
||||
chatId?: string;
|
||||
uId: string;
|
||||
=======
|
||||
>>>>>>> a48ad2abe (squash: compress all commits into one)
|
||||
}) => {
|
||||
// Not file quote
|
||||
if (!fileLinks || hasReadFilesTool) {
|
||||
|
|
@ -327,7 +693,6 @@ const getMultiInput = async ({
|
|||
requestOrigin,
|
||||
maxFiles,
|
||||
customPdfParse,
|
||||
usageId,
|
||||
teamId: runningUserInfo.teamId,
|
||||
tmbId: runningUserInfo.tmbId
|
||||
});
|
||||
|
|
@ -335,54 +700,6 @@ const getMultiInput = async ({
|
|||
return {
|
||||
documentQuoteText: text,
|
||||
userFiles: fileLinks.map((url) => parseUrlToFileType(url)).filter(Boolean)
|
||||
>>>>>>> 757253617 (squash: compress all commits into one)
|
||||
};
|
||||
};
|
||||
|
||||
/*
|
||||
Tool call, auth add file prompt to question。
|
||||
Guide the LLM to call tool.
|
||||
*/
|
||||
const toolCallMessagesAdapt = ({
|
||||
userInput,
|
||||
skip
|
||||
}: {
|
||||
userInput: UserChatItemValueItemType[];
|
||||
skip?: boolean;
|
||||
}): UserChatItemValueItemType[] => {
|
||||
if (skip) return userInput;
|
||||
|
||||
const files = userInput.filter((item) => item.type === 'file');
|
||||
|
||||
if (files.length > 0) {
|
||||
const filesCount = files.filter((file) => file.file?.type === 'file').length;
|
||||
const imgCount = files.filter((file) => file.file?.type === 'image').length;
|
||||
|
||||
if (userInput.some((item) => item.type === 'text')) {
|
||||
return userInput.map((item) => {
|
||||
if (item.type === 'text') {
|
||||
const text = item.text?.content || '';
|
||||
|
||||
return {
|
||||
...item,
|
||||
text: {
|
||||
content: getMultiplePrompt({ fileCount: filesCount, imgCount, question: text })
|
||||
}
|
||||
};
|
||||
}
|
||||
return item;
|
||||
});
|
||||
}
|
||||
|
||||
// Every input is a file
|
||||
return [
|
||||
{
|
||||
type: ChatItemValueTypeEnum.text,
|
||||
text: {
|
||||
content: getMultiplePrompt({ fileCount: filesCount, imgCount, question: '' })
|
||||
}
|
||||
}
|
||||
];
|
||||
}
|
||||
|
||||
return userInput;
|
||||
};
|
||||
|
|
|
|||
|
|
@ -0,0 +1,360 @@
|
|||
import type { ChatCompletionTool } from '@fastgpt/global/core/ai/type';
|
||||
import { runAgentCall } from '../../../../../ai/llm/agentCall';
|
||||
import { chats2GPTMessages, runtimePrompt2ChatsValue } from '@fastgpt/global/core/chat/adapt';
|
||||
import { ChatRoleEnum } from '@fastgpt/global/core/chat/constants';
|
||||
import { addFilePrompt2Input } from '../sub/file/utils';
|
||||
import type { AgentPlanStepType } from '../sub/plan/type';
|
||||
import type { GetSubAppInfoFnType } from '../type';
|
||||
import { getMasterAgentSystemPrompt } from '../constants';
|
||||
import type { RuntimeNodeItemType } from '@fastgpt/global/core/workflow/runtime/type';
|
||||
import { SseResponseEventEnum } from '@fastgpt/global/core/workflow/runtime/constants';
|
||||
import {
|
||||
getReferenceVariableValue,
|
||||
replaceEditorVariable,
|
||||
textAdaptGptResponse,
|
||||
valueTypeFormat
|
||||
} from '@fastgpt/global/core/workflow/runtime/utils';
|
||||
import { getWorkflowChildResponseWrite } from '../../../utils';
|
||||
import { SubAppIds } from '../sub/constants';
|
||||
import { parseToolArgs } from '../../utils';
|
||||
import { dispatchFileRead } from '../sub/file';
|
||||
import { NodeInputKeyEnum } from '@fastgpt/global/core/workflow/constants';
|
||||
import { FlowNodeTypeEnum } from '@fastgpt/global/core/workflow/node/constant';
|
||||
import { dispatchTool } from '../sub/tool';
|
||||
import { dispatchApp, dispatchPlugin } from '../sub/app';
|
||||
import { getErrText } from '@fastgpt/global/common/error/utils';
|
||||
import type { DispatchAgentModuleProps } from '..';
|
||||
import { getLLMModel } from '../../../../../ai/model';
|
||||
import { createLLMResponse } from '../../../../../ai/llm/request';
|
||||
import { addLog } from '../../../../../../common/system/log';
|
||||
import { getStepDependon } from './dependon';
|
||||
import { getResponseSummary } from './responseSummary';
|
||||
|
||||
export const stepCall = async ({
|
||||
getSubAppInfo,
|
||||
subAppList,
|
||||
steps,
|
||||
step,
|
||||
filesMap,
|
||||
subAppsMap,
|
||||
...props
|
||||
}: DispatchAgentModuleProps & {
|
||||
getSubAppInfo: GetSubAppInfoFnType;
|
||||
subAppList: ChatCompletionTool[];
|
||||
steps: AgentPlanStepType[];
|
||||
step: AgentPlanStepType;
|
||||
filesMap: Record<string, string>;
|
||||
subAppsMap: Map<string, RuntimeNodeItemType>;
|
||||
}) => {
|
||||
const {
|
||||
node: { nodeId },
|
||||
runtimeNodes,
|
||||
chatConfig,
|
||||
runningUserInfo,
|
||||
runningAppInfo,
|
||||
variables,
|
||||
externalProvider,
|
||||
stream,
|
||||
res,
|
||||
workflowStreamResponse,
|
||||
usagePush,
|
||||
params: { userChatInput, systemPrompt, model, temperature, aiChatTopP }
|
||||
} = props;
|
||||
|
||||
// Get depends on step ids
|
||||
if (!step.depends_on) {
|
||||
const { depends, usage: dependsUsage } = await getStepDependon({
|
||||
model,
|
||||
steps,
|
||||
step
|
||||
});
|
||||
if (dependsUsage) {
|
||||
usagePush([dependsUsage]);
|
||||
}
|
||||
step.depends_on = depends;
|
||||
}
|
||||
|
||||
// addLog.debug(`Step information`, steps);
|
||||
const systemPromptContent = await getMasterAgentSystemPrompt({
|
||||
steps,
|
||||
step,
|
||||
userInput: userChatInput,
|
||||
model
|
||||
// background: systemPrompt
|
||||
});
|
||||
|
||||
const requestMessages = chats2GPTMessages({
|
||||
messages: [
|
||||
{
|
||||
obj: ChatRoleEnum.System,
|
||||
value: [
|
||||
{
|
||||
text: {
|
||||
content: systemPromptContent
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
obj: ChatRoleEnum.Human,
|
||||
value: runtimePrompt2ChatsValue({
|
||||
text: addFilePrompt2Input({ query: step.description }),
|
||||
files: []
|
||||
})
|
||||
}
|
||||
],
|
||||
reserveId: false
|
||||
});
|
||||
// console.log(
|
||||
// 'Step call requestMessages',
|
||||
// JSON.stringify({ requestMessages, subAppList }, null, 2)
|
||||
// );
|
||||
// TODO: 阶段性推送账单
|
||||
const { assistantResponses, inputTokens, outputTokens, subAppUsages, interactiveResponse } =
|
||||
await runAgentCall({
|
||||
maxRunAgentTimes: 100,
|
||||
currentStep: step,
|
||||
// interactiveEntryToolParams: lastInteractive?.toolParams,
|
||||
body: {
|
||||
messages: requestMessages,
|
||||
model: getLLMModel(model),
|
||||
temperature,
|
||||
stream,
|
||||
top_p: aiChatTopP,
|
||||
subApps: subAppList
|
||||
},
|
||||
|
||||
userKey: externalProvider.openaiAccount,
|
||||
isAborted: res ? () => res.closed : undefined,
|
||||
getToolInfo: getSubAppInfo,
|
||||
|
||||
onReasoning({ text }) {
|
||||
workflowStreamResponse?.({
|
||||
event: SseResponseEventEnum.answer,
|
||||
data: textAdaptGptResponse({
|
||||
reasoning_content: text
|
||||
})
|
||||
});
|
||||
},
|
||||
onStreaming({ text }) {
|
||||
workflowStreamResponse?.({
|
||||
event: SseResponseEventEnum.answer,
|
||||
data: textAdaptGptResponse({
|
||||
text
|
||||
})
|
||||
});
|
||||
},
|
||||
onToolCall({ call }) {
|
||||
const subApp = getSubAppInfo(call.function.name);
|
||||
workflowStreamResponse?.({
|
||||
id: call.id,
|
||||
event: SseResponseEventEnum.toolCall,
|
||||
data: {
|
||||
tool: {
|
||||
id: `${nodeId}/${call.function.name}`,
|
||||
toolName: subApp?.name || call.function.name,
|
||||
toolAvatar: subApp?.avatar || '',
|
||||
functionName: call.function.name,
|
||||
params: call.function.arguments ?? ''
|
||||
}
|
||||
}
|
||||
});
|
||||
},
|
||||
onToolParam({ call, params }) {
|
||||
workflowStreamResponse?.({
|
||||
id: call.id,
|
||||
event: SseResponseEventEnum.toolParams,
|
||||
data: {
|
||||
tool: {
|
||||
params
|
||||
}
|
||||
}
|
||||
});
|
||||
},
|
||||
|
||||
handleToolResponse: async ({ call, messages }) => {
|
||||
const toolId = call.function.name;
|
||||
const childWorkflowStreamResponse = getWorkflowChildResponseWrite({
|
||||
subAppId: `${nodeId}/${toolId}`,
|
||||
id: call.id,
|
||||
fn: workflowStreamResponse
|
||||
});
|
||||
|
||||
const { response, usages = [] } = await (async () => {
|
||||
try {
|
||||
if (toolId === SubAppIds.fileRead) {
|
||||
const params = parseToolArgs<{
|
||||
file_indexes: string[];
|
||||
}>(call.function.arguments);
|
||||
if (!params) {
|
||||
return {
|
||||
response: 'params is not object',
|
||||
usages: []
|
||||
};
|
||||
}
|
||||
if (!Array.isArray(params.file_indexes)) {
|
||||
return {
|
||||
response: 'file_indexes is not array',
|
||||
usages: []
|
||||
};
|
||||
}
|
||||
|
||||
const files = params.file_indexes.map((index) => ({
|
||||
index,
|
||||
url: filesMap[index]
|
||||
}));
|
||||
const result = await dispatchFileRead({
|
||||
files,
|
||||
teamId: runningUserInfo.teamId,
|
||||
tmbId: runningUserInfo.tmbId,
|
||||
customPdfParse: chatConfig?.fileSelectConfig?.customPdfParse
|
||||
});
|
||||
return {
|
||||
response: result.response,
|
||||
usages: result.usages
|
||||
};
|
||||
}
|
||||
// User Sub App
|
||||
else {
|
||||
const node = subAppsMap.get(toolId);
|
||||
if (!node) {
|
||||
return {
|
||||
response: 'Can not find the tool',
|
||||
usages: []
|
||||
};
|
||||
}
|
||||
|
||||
const toolCallParams = parseToolArgs(call.function.arguments);
|
||||
|
||||
if (!toolCallParams) {
|
||||
return {
|
||||
response: 'params is not object',
|
||||
usages: []
|
||||
};
|
||||
}
|
||||
|
||||
// Get params
|
||||
const requestParams = (() => {
|
||||
const params: Record<string, any> = toolCallParams;
|
||||
|
||||
node.inputs.forEach((input) => {
|
||||
if (input.key in toolCallParams) {
|
||||
return;
|
||||
}
|
||||
// Skip some special key
|
||||
if (
|
||||
[
|
||||
NodeInputKeyEnum.childrenNodeIdList,
|
||||
NodeInputKeyEnum.systemInputConfig
|
||||
].includes(input.key as NodeInputKeyEnum)
|
||||
) {
|
||||
params[input.key] = input.value;
|
||||
return;
|
||||
}
|
||||
|
||||
// replace {{$xx.xx$}} and {{xx}} variables
|
||||
let value = replaceEditorVariable({
|
||||
text: input.value,
|
||||
nodes: runtimeNodes,
|
||||
variables
|
||||
});
|
||||
|
||||
// replace reference variables
|
||||
value = getReferenceVariableValue({
|
||||
value,
|
||||
nodes: runtimeNodes,
|
||||
variables
|
||||
});
|
||||
|
||||
params[input.key] = valueTypeFormat(value, input.valueType);
|
||||
});
|
||||
|
||||
return params;
|
||||
})();
|
||||
|
||||
if (node.flowNodeType === FlowNodeTypeEnum.tool) {
|
||||
const { response, usages } = await dispatchTool({
|
||||
node,
|
||||
params: requestParams,
|
||||
runningUserInfo,
|
||||
runningAppInfo,
|
||||
variables,
|
||||
workflowStreamResponse: childWorkflowStreamResponse
|
||||
});
|
||||
return {
|
||||
response,
|
||||
usages
|
||||
};
|
||||
} else if (
|
||||
node.flowNodeType === FlowNodeTypeEnum.appModule ||
|
||||
node.flowNodeType === FlowNodeTypeEnum.pluginModule
|
||||
) {
|
||||
const fn =
|
||||
node.flowNodeType === FlowNodeTypeEnum.appModule ? dispatchApp : dispatchPlugin;
|
||||
|
||||
const { response, usages } = await fn({
|
||||
...props,
|
||||
node,
|
||||
workflowStreamResponse: childWorkflowStreamResponse,
|
||||
callParams: {
|
||||
appId: node.pluginId,
|
||||
version: node.version,
|
||||
...requestParams
|
||||
}
|
||||
});
|
||||
|
||||
return {
|
||||
response,
|
||||
usages
|
||||
};
|
||||
} else {
|
||||
return {
|
||||
response: 'Can not find the tool',
|
||||
usages: []
|
||||
};
|
||||
}
|
||||
}
|
||||
} catch (error) {
|
||||
return {
|
||||
response: getErrText(error),
|
||||
usages: []
|
||||
};
|
||||
}
|
||||
})();
|
||||
|
||||
// Push stream response
|
||||
workflowStreamResponse?.({
|
||||
id: call.id,
|
||||
event: SseResponseEventEnum.toolResponse,
|
||||
data: {
|
||||
tool: {
|
||||
id: call.id,
|
||||
response
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
// TODO: 推送账单
|
||||
|
||||
return {
|
||||
response,
|
||||
usages
|
||||
};
|
||||
}
|
||||
});
|
||||
|
||||
const answerText = assistantResponses.map((item) => item.text?.content).join('\n');
|
||||
const { answerText: summary, usage: summaryUsage } = await getResponseSummary({
|
||||
response: answerText,
|
||||
model
|
||||
});
|
||||
if (summaryUsage) {
|
||||
usagePush([summaryUsage]);
|
||||
}
|
||||
|
||||
return {
|
||||
rawResponse: answerText,
|
||||
summary,
|
||||
assistantResponses
|
||||
};
|
||||
};
|
||||
|
|
@ -0,0 +1,96 @@
|
|||
import { getLLMModel } from '../../../../../ai/model';
|
||||
import type { AgentPlanStepType } from '../sub/plan/type';
|
||||
import { addLog } from '../../../../../../common/system/log';
|
||||
import { createLLMResponse } from '../../../../../ai/llm/request';
|
||||
import { parseToolArgs } from '../../utils';
|
||||
import type { ChatNodeUsageType } from '@fastgpt/global/support/wallet/bill/type';
|
||||
import { formatModelChars2Points } from '../../../../../../support/wallet/usage/utils';
|
||||
|
||||
export const getStepDependon = async ({
|
||||
model,
|
||||
steps,
|
||||
step
|
||||
}: {
|
||||
model: string;
|
||||
steps: AgentPlanStepType[];
|
||||
step: AgentPlanStepType;
|
||||
}): Promise<{
|
||||
depends: string[];
|
||||
usage?: ChatNodeUsageType;
|
||||
}> => {
|
||||
const modelData = getLLMModel(model);
|
||||
addLog.debug('GetStepResponse start', { model, step });
|
||||
const historySummary = steps
|
||||
.filter((item) => item.summary)
|
||||
.map((item) => `- ${item.id}: ${item.summary}`)
|
||||
.join('\n');
|
||||
|
||||
if (!historySummary) {
|
||||
return {
|
||||
depends: []
|
||||
};
|
||||
}
|
||||
// console.log("GetStepDependon historySummary:", step.id, historySummary);
|
||||
const prompt = `
|
||||
你是一个智能检索助手。现在需要执行一个新的步骤,请根据步骤描述和历史步骤的概括信息,判断哪些历史步骤的结果对当前步骤有帮助,并将 step_id 提取出来。
|
||||
|
||||
【当前需要执行的步骤】
|
||||
步骤ID: ${step.id}
|
||||
步骤标题: ${step.title}
|
||||
步骤描述: ${step.description}
|
||||
|
||||
【已完成的历史步骤概括】
|
||||
${historySummary}
|
||||
|
||||
【任务】
|
||||
1. 请分析当前步骤的需求,判断需要引用哪些历史步骤的详细结果。
|
||||
2. 如果不需要任何历史步骤,返回空列表;如果需要,请返回相关步骤的ID列表。
|
||||
3. 如果是一个总结性质的步骤,比如标题为“生成总结报告”,那么请返回所有已完成的历史步骤id,而不应该是一个空列表。
|
||||
|
||||
【返回格式】(严格的JSON格式,不要包含其他文字)
|
||||
\`\`\`json
|
||||
{
|
||||
"needed_step_ids": ["step1", "step2"],
|
||||
"reason": "当前步骤需要整合美食和天气信息,因此需要 step1 和 step2 的结果"
|
||||
}
|
||||
\`\`\`
|
||||
\`\`\`json
|
||||
{
|
||||
"needed_step_ids": ["step1", "step2", "step3"],
|
||||
"reason": "当前步骤为总结性质的步骤,需要依赖所有之前步骤的信息"
|
||||
}
|
||||
\`\`\``;
|
||||
|
||||
const { answerText, usage } = await createLLMResponse({
|
||||
body: {
|
||||
model: modelData.model,
|
||||
messages: [{ role: 'user', content: prompt }],
|
||||
stream: false
|
||||
}
|
||||
});
|
||||
const params = parseToolArgs<{
|
||||
needed_step_ids: string[];
|
||||
reason: string;
|
||||
}>(answerText);
|
||||
if (!params) {
|
||||
const { totalPoints, modelName } = formatModelChars2Points({
|
||||
model: modelData.model,
|
||||
inputTokens: usage.inputTokens,
|
||||
outputTokens: usage.outputTokens
|
||||
});
|
||||
return {
|
||||
depends: [],
|
||||
usage: {
|
||||
moduleName: '步骤依赖分析',
|
||||
model: modelName,
|
||||
totalPoints,
|
||||
inputTokens: usage.inputTokens,
|
||||
outputTokens: usage.outputTokens
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
return {
|
||||
depends: params.needed_step_ids
|
||||
};
|
||||
};
|
||||
|
|
@ -0,0 +1,59 @@
|
|||
import { formatModelChars2Points } from '../../../../../../support/wallet/usage/utils';
|
||||
import { addLog } from '../../../../../../common/system/log';
|
||||
import { createLLMResponse } from '../../../../../ai/llm/request';
|
||||
import { getLLMModel } from '../../../../../ai/model';
|
||||
import type { ChatNodeUsageType } from '@fastgpt/global/support/wallet/bill/type';
|
||||
|
||||
// TODO: 报错兜底机制
|
||||
export const getResponseSummary = async ({
|
||||
response,
|
||||
model
|
||||
}: {
|
||||
response: string;
|
||||
model: string;
|
||||
}): Promise<{
|
||||
answerText: string;
|
||||
usage: ChatNodeUsageType;
|
||||
}> => {
|
||||
addLog.debug('GetResponseSummary start');
|
||||
|
||||
const modelData = getLLMModel(model);
|
||||
const { answerText, usage } = await createLLMResponse({
|
||||
body: {
|
||||
model: modelData.model,
|
||||
messages: [
|
||||
{
|
||||
role: 'user',
|
||||
content: `请对以下步骤执行结果进行概括,要求:
|
||||
1. 提取核心信息和关键结论
|
||||
2. 保留重要的数据、链接、引用
|
||||
3. 长度控制在 200-300 字
|
||||
4. 结构清晰,便于其他步骤引用
|
||||
|
||||
执行结果:
|
||||
${response}
|
||||
|
||||
请生成概括:`
|
||||
}
|
||||
],
|
||||
stream: false
|
||||
}
|
||||
});
|
||||
|
||||
const { totalPoints, modelName } = formatModelChars2Points({
|
||||
model: modelData.model,
|
||||
inputTokens: usage.inputTokens,
|
||||
outputTokens: usage.outputTokens
|
||||
});
|
||||
|
||||
return {
|
||||
answerText,
|
||||
usage: {
|
||||
moduleName: '步骤执行结果概括',
|
||||
model: modelName,
|
||||
totalPoints,
|
||||
inputTokens: usage.inputTokens,
|
||||
outputTokens: usage.outputTokens
|
||||
}
|
||||
};
|
||||
};
|
||||
|
|
@ -0,0 +1,98 @@
|
|||
import { createLLMResponse } from '../../../../../ai/llm/request';
|
||||
import { parseToolArgs } from '../../utils';
|
||||
import { addLog } from '../../../../../../common/system/log';
|
||||
import { formatModelChars2Points } from '../../../../../../support/wallet/usage/utils';
|
||||
import type { ChatNodeUsageType } from '@fastgpt/global/support/wallet/bill/type';
|
||||
|
||||
const getPrompt = ({
|
||||
userChatInput
|
||||
}: {
|
||||
userChatInput: string;
|
||||
}) => `你是一位资深的认知复杂度评估专家 (Cognitive Complexity Assessment Specialist)。 您的职责是对用户提出的任务请求进行深度解析,精准判断其内在的认知复杂度层级,并据此决定是否需要启动多步骤规划流程。
|
||||
|
||||
用户显式意图 (User Explicit Intent):
|
||||
用户可能会在问题中明确表达其期望的回答方式或处理深度。 常见的意图类型包括:
|
||||
* **快速回答 / 简单回答 (Quick/Simple Answer)**:用户期望得到简洁、直接的答案,无需深入分析或详细解释。 例如:“请简单回答...”、“快速告诉我...”
|
||||
* **深度思考 / 详细分析 (Deep Thinking/Detailed Analysis)**:用户期望得到深入、全面的分析,包括多角度的思考、证据支持和详细的解释。 例如:“请深入分析...”、“详细解释...”
|
||||
* **创造性方案 / 创新性建议 (Creative Solution/Innovative Suggestion)**:用户期望得到具有创新性的解决方案或建议,可能需要进行发散性思维和方案设计。 例如:“请提出一个创新的方案...”、“提供一些有创意的建议...”
|
||||
* **无明确意图 (No Explicit Intent)**:用户没有明确表达其期望的回答方式或处理深度。
|
||||
|
||||
评估框架 (Assessment Framework):
|
||||
* **低复杂度任务 (Low Complexity - \`complex: false\`)**: 此类任务具备高度的直接性和明确性,通常仅需调用单一工具或执行简单的操作即可完成。 其特征包括:
|
||||
* **直接工具可解性 (Direct Tool Solvability)**:任务目标明确,可直接映射到特定的工具功能。
|
||||
* **信息可得性 (Information Accessibility)**:所需信息易于获取,无需复杂的搜索或推理。
|
||||
* **操作单一性 (Operational Singularity)**:任务执行路径清晰,无需多步骤协同。
|
||||
* **典型示例 (Typical Examples)**:信息检索 (Information Retrieval)、简单算术计算 (Simple Arithmetic Calculation)、事实性问题解答 (Factual Question Answering)、目标明确的单一指令执行 (Single, Well-Defined Instruction Execution)。
|
||||
* **高复杂度任务 (High Complexity - \'complex: true\')**: 此类任务涉及复杂的认知过程,需要进行多步骤规划、工具组合、深入分析和创造性思考才能完成。 其特征包括:
|
||||
* **意图模糊性 (Intent Ambiguity)**:用户意图不明确,需要进行意图消歧 (Intent Disambiguation) 或目标细化 (Goal Refinement)。
|
||||
* **信息聚合需求 (Information Aggregation Requirement)**:需要整合来自多个信息源的数据,进行综合分析。
|
||||
* **推理与判断 (Reasoning and Judgement)**:需要进行逻辑推理、情境分析、价值判断等认知操作。
|
||||
* **创造性与探索性 (Creativity and Exploration)**:需要进行发散性思维、方案设计、假设验证等探索性活动。
|
||||
* **
|
||||
* **典型示例 (Typical Examples)**:意图不明确的请求 (Ambiguous Requests)、需要综合多个信息源的任务 (Tasks Requiring Information Synthesis from Multiple Sources)、需要复杂推理或创造性思考的问题 (Problems Requiring Complex Reasoning or Creative Thinking)。
|
||||
待评估用户问题 (User Query): ${userChatInput}
|
||||
|
||||
输出规范 (Output Specification):
|
||||
请严格遵循以下 JSON 格式输出您的评估结果:
|
||||
\`\`\`json
|
||||
{
|
||||
"complex": true/false,
|
||||
"reason": "对任务认知复杂度的详细解释,说明判断的理由,并引用上述评估框架中的相关概念。"
|
||||
}
|
||||
\`\`\`
|
||||
|
||||
`;
|
||||
|
||||
export const checkTaskComplexity = async ({
|
||||
model,
|
||||
userChatInput
|
||||
}: {
|
||||
model: string;
|
||||
userChatInput: string;
|
||||
}): Promise<{
|
||||
complex: boolean;
|
||||
usage?: ChatNodeUsageType;
|
||||
}> => {
|
||||
try {
|
||||
const { answerText: checkResult, usage } = await createLLMResponse({
|
||||
body: {
|
||||
model,
|
||||
temperature: 0.1,
|
||||
messages: [
|
||||
{
|
||||
role: 'system',
|
||||
content: getPrompt({ userChatInput })
|
||||
},
|
||||
{
|
||||
role: 'user',
|
||||
content: userChatInput
|
||||
}
|
||||
]
|
||||
}
|
||||
});
|
||||
|
||||
const checkResponse = parseToolArgs<{ complex: boolean; reason: string }>(checkResult);
|
||||
|
||||
const { totalPoints, modelName } = formatModelChars2Points({
|
||||
model,
|
||||
inputTokens: usage.inputTokens,
|
||||
outputTokens: usage.outputTokens
|
||||
});
|
||||
|
||||
return {
|
||||
complex: !!checkResponse?.complex,
|
||||
usage: {
|
||||
moduleName: `问题复杂度分析`,
|
||||
model: modelName,
|
||||
totalPoints,
|
||||
inputTokens: usage.inputTokens,
|
||||
outputTokens: usage.outputTokens
|
||||
}
|
||||
};
|
||||
} catch (error) {
|
||||
addLog.error('Simple question check failed, proceeding with normal plan flow', error);
|
||||
return {
|
||||
complex: true
|
||||
};
|
||||
}
|
||||
};
|
||||
|
|
@ -0,0 +1,194 @@
|
|||
import type { DispatchSubAppResponse } from '../../type';
|
||||
import type { ModuleDispatchProps } from '@fastgpt/global/core/workflow/runtime/type';
|
||||
import { filterSystemVariables } from '../../../../../../../core/workflow/dispatch/utils';
|
||||
import { authAppByTmbId } from '../../../../../../../support/permission/app/auth';
|
||||
import { ReadPermissionVal } from '@fastgpt/global/support/permission/constant';
|
||||
import { getAppVersionById } from '../../../../../../../core/app/version/controller';
|
||||
import {
|
||||
getRunningUserInfoByTmbId,
|
||||
getUserChatInfo
|
||||
} from '../../../../../../../support/user/team/utils';
|
||||
import { runWorkflow } from '../../../../../../../core/workflow/dispatch';
|
||||
import {
|
||||
getWorkflowEntryNodeIds,
|
||||
rewriteNodeOutputByHistories,
|
||||
storeEdges2RuntimeEdges,
|
||||
storeNodes2RuntimeNodes
|
||||
} from '@fastgpt/global/core/workflow/runtime/utils';
|
||||
import { chatValue2RuntimePrompt } from '@fastgpt/global/core/chat/adapt';
|
||||
import { getChildAppRuntimeById } from '../../../../../../app/tool/controller';
|
||||
import { FlowNodeTypeEnum } from '@fastgpt/global/core/workflow/node/constant';
|
||||
import { serverGetWorkflowToolRunUserQuery } from '../../../../../../app/tool/workflowTool/utils';
|
||||
import { getWorkflowToolInputsFromStoreNodes } from '@fastgpt/global/core/app/tool/workflowTool/utils';
|
||||
|
||||
type Props = ModuleDispatchProps<{}> & {
|
||||
callParams: {
|
||||
appId?: string;
|
||||
version?: string;
|
||||
[key: string]: any;
|
||||
};
|
||||
};
|
||||
|
||||
export const dispatchApp = async (props: Props): Promise<DispatchSubAppResponse> => {
|
||||
const {
|
||||
runningAppInfo,
|
||||
workflowStreamResponse,
|
||||
variables,
|
||||
callParams: {
|
||||
appId,
|
||||
version,
|
||||
userChatInput,
|
||||
system_forbid_stream,
|
||||
history,
|
||||
fileUrlList,
|
||||
...data
|
||||
}
|
||||
} = props;
|
||||
|
||||
if (!appId) {
|
||||
return Promise.reject(new Error('AppId is empty'));
|
||||
}
|
||||
|
||||
// Auth the app by tmbId(Not the user, but the workflow user)
|
||||
const { app: appData } = await authAppByTmbId({
|
||||
appId,
|
||||
tmbId: runningAppInfo.tmbId,
|
||||
per: ReadPermissionVal
|
||||
});
|
||||
const { nodes, edges, chatConfig } = await getAppVersionById({
|
||||
appId,
|
||||
versionId: version,
|
||||
app: appData
|
||||
});
|
||||
|
||||
// Rewrite children app variables
|
||||
const systemVariables = filterSystemVariables(variables);
|
||||
const { externalProvider } = await getUserChatInfo(appData.tmbId);
|
||||
const childrenRunVariables = {
|
||||
...systemVariables,
|
||||
histories: [],
|
||||
appId: String(appData._id),
|
||||
...data,
|
||||
...(externalProvider ? externalProvider.externalWorkflowVariables : {})
|
||||
};
|
||||
|
||||
const runtimeNodes = rewriteNodeOutputByHistories(
|
||||
storeNodes2RuntimeNodes(nodes, getWorkflowEntryNodeIds(nodes))
|
||||
);
|
||||
const runtimeEdges = storeEdges2RuntimeEdges(edges);
|
||||
|
||||
const { assistantResponses, flowUsages } = await runWorkflow({
|
||||
...props,
|
||||
runningAppInfo: {
|
||||
id: String(appData._id),
|
||||
name: appData.name,
|
||||
teamId: String(appData.teamId),
|
||||
tmbId: String(appData.tmbId),
|
||||
isChildApp: true
|
||||
},
|
||||
runningUserInfo: await getRunningUserInfoByTmbId(appData.tmbId),
|
||||
runtimeNodes,
|
||||
runtimeEdges,
|
||||
histories: [],
|
||||
variables: childrenRunVariables,
|
||||
query: [
|
||||
{
|
||||
text: {
|
||||
content: userChatInput
|
||||
}
|
||||
}
|
||||
],
|
||||
chatConfig
|
||||
});
|
||||
|
||||
const { text } = chatValue2RuntimePrompt(assistantResponses);
|
||||
|
||||
return {
|
||||
response: text,
|
||||
usages: flowUsages
|
||||
};
|
||||
};
|
||||
|
||||
export const dispatchPlugin = async (props: Props): Promise<DispatchSubAppResponse> => {
|
||||
const {
|
||||
runningAppInfo,
|
||||
callParams: { appId, version, system_forbid_stream, ...data }
|
||||
} = props;
|
||||
|
||||
if (!appId) {
|
||||
return Promise.reject(new Error('AppId is empty'));
|
||||
}
|
||||
|
||||
// Auth the app by tmbId(Not the user, but the workflow user)
|
||||
const {
|
||||
app: { tmbId }
|
||||
} = await authAppByTmbId({
|
||||
appId,
|
||||
tmbId: runningAppInfo.tmbId,
|
||||
per: ReadPermissionVal
|
||||
});
|
||||
const plugin = await getChildAppRuntimeById({ id: appId, versionId: version });
|
||||
|
||||
const outputFilterMap =
|
||||
plugin.nodes
|
||||
.find((node) => node.flowNodeType === FlowNodeTypeEnum.pluginOutput)
|
||||
?.inputs.reduce<Record<string, boolean>>((acc, cur) => {
|
||||
acc[cur.key] = cur.isToolOutput === false ? false : true;
|
||||
return acc;
|
||||
}, {}) ?? {};
|
||||
const runtimeNodes = storeNodes2RuntimeNodes(
|
||||
plugin.nodes,
|
||||
getWorkflowEntryNodeIds(plugin.nodes)
|
||||
).map((node) => {
|
||||
// Update plugin input value
|
||||
if (node.flowNodeType === FlowNodeTypeEnum.pluginInput) {
|
||||
return {
|
||||
...node,
|
||||
showStatus: false,
|
||||
inputs: node.inputs.map((input) => ({
|
||||
...input,
|
||||
value: data[input.key] ?? input.value
|
||||
}))
|
||||
};
|
||||
}
|
||||
return {
|
||||
...node,
|
||||
showStatus: false
|
||||
};
|
||||
});
|
||||
|
||||
const { externalProvider } = await getUserChatInfo(tmbId);
|
||||
const runtimeVariables = {
|
||||
...filterSystemVariables(props.variables),
|
||||
appId: String(plugin.id),
|
||||
...(externalProvider ? externalProvider.externalWorkflowVariables : {})
|
||||
};
|
||||
|
||||
const { flowResponses, flowUsages, assistantResponses, runTimes, system_memories } =
|
||||
await runWorkflow({
|
||||
...props,
|
||||
runningAppInfo: {
|
||||
id: String(plugin.id),
|
||||
// 如果系统插件有 teamId 和 tmbId,则使用系统插件的 teamId 和 tmbId(管理员指定了插件作为系统插件)
|
||||
name: plugin.name,
|
||||
teamId: plugin.teamId || runningAppInfo.teamId,
|
||||
tmbId: plugin.tmbId || runningAppInfo.tmbId,
|
||||
isChildApp: true
|
||||
},
|
||||
variables: runtimeVariables,
|
||||
query: serverGetWorkflowToolRunUserQuery({
|
||||
pluginInputs: getWorkflowToolInputsFromStoreNodes(plugin.nodes),
|
||||
variables: runtimeVariables
|
||||
}).value,
|
||||
chatConfig: {},
|
||||
runtimeNodes,
|
||||
runtimeEdges: storeEdges2RuntimeEdges(plugin.edges)
|
||||
});
|
||||
const output = flowResponses.find((item) => item.moduleType === FlowNodeTypeEnum.pluginOutput);
|
||||
const response = output?.pluginOutput ? JSON.stringify(output?.pluginOutput) : 'No output';
|
||||
|
||||
return {
|
||||
response,
|
||||
usages: flowUsages
|
||||
};
|
||||
};
|
||||
|
|
@ -0,0 +1,34 @@
|
|||
import { i18nT } from '../../../../../../../web/i18n/utils';
|
||||
|
||||
export enum SubAppIds {
|
||||
plan = 'plan_agent',
|
||||
ask = 'ask_agent',
|
||||
model = 'model_agent',
|
||||
fileRead = 'file_read'
|
||||
}
|
||||
|
||||
export const systemSubInfo: Record<
|
||||
string,
|
||||
{ name: string; avatar: string; toolDescription: string }
|
||||
> = {
|
||||
[SubAppIds.plan]: {
|
||||
name: i18nT('chat:plan_agent'),
|
||||
avatar: 'common/detail',
|
||||
toolDescription: '分析和拆解用户问题,制定执行步骤。'
|
||||
},
|
||||
[SubAppIds.fileRead]: {
|
||||
name: i18nT('chat:file_parse'),
|
||||
avatar: 'core/workflow/template/readFiles',
|
||||
toolDescription: '读取文件内容,并返回文件内容。'
|
||||
},
|
||||
[SubAppIds.ask]: {
|
||||
name: 'Ask Agent',
|
||||
avatar: 'core/workflow/template/agent',
|
||||
toolDescription: '询问用户问题,并返回用户回答。'
|
||||
},
|
||||
[SubAppIds.model]: {
|
||||
name: 'Model Agent',
|
||||
avatar: 'core/workflow/template/agent',
|
||||
toolDescription: '调用 LLM 模型完成一些通用任务。'
|
||||
}
|
||||
};
|
||||
|
|
@ -0,0 +1,7 @@
|
|||
import type { DispatchSubAppResponse } from '../../type';
|
||||
|
||||
export const dispatchContextAgent = async (props: {}): Promise<DispatchSubAppResponse> => {
|
||||
return {
|
||||
response: ''
|
||||
};
|
||||
};
|
||||
|
|
@ -0,0 +1,23 @@
|
|||
import type { DatasetSearchModeEnum } from '@fastgpt/global/core/dataset/constants';
|
||||
import type { NodeInputKeyEnum } from '@fastgpt/global/core/workflow/constants';
|
||||
import type { SelectedDatasetType } from '@fastgpt/global/core/workflow/type/io';
|
||||
|
||||
export type DatasetConfigType = {
|
||||
[NodeInputKeyEnum.datasetSelectList]: SelectedDatasetType;
|
||||
[NodeInputKeyEnum.datasetSimilarity]: number;
|
||||
[NodeInputKeyEnum.datasetMaxTokens]: number;
|
||||
[NodeInputKeyEnum.userChatInput]?: string;
|
||||
[NodeInputKeyEnum.datasetSearchMode]: `${DatasetSearchModeEnum}`;
|
||||
[NodeInputKeyEnum.datasetSearchEmbeddingWeight]?: number;
|
||||
|
||||
[NodeInputKeyEnum.datasetSearchUsingReRank]: boolean;
|
||||
[NodeInputKeyEnum.datasetSearchRerankModel]?: string;
|
||||
[NodeInputKeyEnum.datasetSearchRerankWeight]?: number;
|
||||
|
||||
[NodeInputKeyEnum.collectionFilterMatch]: string;
|
||||
[NodeInputKeyEnum.authTmbId]?: boolean;
|
||||
|
||||
[NodeInputKeyEnum.datasetSearchUsingExtensionQuery]: boolean;
|
||||
[NodeInputKeyEnum.datasetSearchExtensionModel]: string;
|
||||
[NodeInputKeyEnum.datasetSearchExtensionBg]: string;
|
||||
};
|
||||
|
|
@ -0,0 +1,125 @@
|
|||
import {
|
||||
addRawTextBuffer,
|
||||
getRawTextBuffer
|
||||
} from '../../../../../../../common/buffer/rawText/controller';
|
||||
import type { DispatchSubAppResponse } from '../../type';
|
||||
import { isInternalAddress } from '../../../../../../../common/system/utils';
|
||||
import axios from 'axios';
|
||||
import { serverRequestBaseUrl } from '../../../../../../../common/api/serverRequest';
|
||||
import { parseFileExtensionFromUrl } from '@fastgpt/global/common/string/tools';
|
||||
import { detectFileEncoding } from '@fastgpt/global/common/file/tools';
|
||||
import { readRawContentByFileBuffer } from '../../../../../../../common/file/read/utils';
|
||||
import { addMinutes } from 'date-fns';
|
||||
import { getErrText } from '@fastgpt/global/common/error/utils';
|
||||
|
||||
type FileReadParams = {
|
||||
files: { index: string; url: string }[];
|
||||
|
||||
teamId: string;
|
||||
tmbId: string;
|
||||
customPdfParse?: boolean;
|
||||
};
|
||||
|
||||
export const dispatchFileRead = async ({
|
||||
files,
|
||||
teamId,
|
||||
tmbId,
|
||||
customPdfParse
|
||||
}: FileReadParams): Promise<DispatchSubAppResponse> => {
|
||||
const readFilesResult = await Promise.all(
|
||||
files.map(async ({ index, url }) => {
|
||||
// Get from buffer
|
||||
const fileBuffer = await getRawTextBuffer(url);
|
||||
if (fileBuffer) {
|
||||
return {
|
||||
index,
|
||||
name: fileBuffer.sourceName,
|
||||
content: fileBuffer.text
|
||||
};
|
||||
}
|
||||
|
||||
try {
|
||||
if (isInternalAddress(url)) {
|
||||
return {
|
||||
index,
|
||||
name: '',
|
||||
content: Promise.reject('Url is invalid')
|
||||
};
|
||||
}
|
||||
// Get file buffer data
|
||||
const response = await axios.get(url, {
|
||||
baseURL: serverRequestBaseUrl,
|
||||
responseType: 'arraybuffer'
|
||||
});
|
||||
|
||||
const buffer = Buffer.from(response.data, 'binary');
|
||||
|
||||
// Get file name
|
||||
const filename = (() => {
|
||||
const contentDisposition = response.headers['content-disposition'];
|
||||
if (contentDisposition) {
|
||||
const filenameRegex = /filename[^;=\n]*=((['"]).*?\2|[^;\n]*)/;
|
||||
const matches = filenameRegex.exec(contentDisposition);
|
||||
if (matches != null && matches[1]) {
|
||||
return decodeURIComponent(matches[1].replace(/['"]/g, ''));
|
||||
}
|
||||
}
|
||||
|
||||
return url;
|
||||
})();
|
||||
// Extension
|
||||
const extension = parseFileExtensionFromUrl(filename);
|
||||
|
||||
// Get encoding
|
||||
const encoding = (() => {
|
||||
const contentType = response.headers['content-type'];
|
||||
if (contentType) {
|
||||
const charsetRegex = /charset=([^;]*)/;
|
||||
const matches = charsetRegex.exec(contentType);
|
||||
if (matches != null && matches[1]) {
|
||||
return matches[1];
|
||||
}
|
||||
}
|
||||
|
||||
return detectFileEncoding(buffer);
|
||||
})();
|
||||
|
||||
// Read file
|
||||
const { rawText } = await readRawContentByFileBuffer({
|
||||
extension,
|
||||
teamId,
|
||||
tmbId,
|
||||
buffer,
|
||||
encoding,
|
||||
customPdfParse,
|
||||
getFormatText: true
|
||||
});
|
||||
|
||||
// Add to buffer
|
||||
addRawTextBuffer({
|
||||
sourceId: url,
|
||||
sourceName: filename,
|
||||
text: rawText,
|
||||
expiredTime: addMinutes(new Date(), 20)
|
||||
});
|
||||
|
||||
return {
|
||||
index,
|
||||
name: filename,
|
||||
content: rawText
|
||||
};
|
||||
} catch (error) {
|
||||
return {
|
||||
index,
|
||||
name: '',
|
||||
content: getErrText(error, 'Load file error')
|
||||
};
|
||||
}
|
||||
})
|
||||
);
|
||||
|
||||
return {
|
||||
response: JSON.stringify(readFilesResult),
|
||||
usages: []
|
||||
};
|
||||
};
|
||||
|
|
@ -0,0 +1,132 @@
|
|||
import type { ChatCompletionTool } from '@fastgpt/global/core/ai/type';
|
||||
import { SubAppIds } from '../constants';
|
||||
import { parseUrlToFileType } from '@fastgpt/global/common/file/tools';
|
||||
import { addLog } from '../../../../../../../common/system/log';
|
||||
import { getHistoryFileLinks } from '../../../../tools/readFiles';
|
||||
import type { ChatItemType } from '@fastgpt/global/core/chat/type';
|
||||
import { ChatFileTypeEnum } from '@fastgpt/global/core/chat/constants';
|
||||
|
||||
export const readFileTool: ChatCompletionTool = {
|
||||
type: 'function',
|
||||
function: {
|
||||
name: SubAppIds.fileRead,
|
||||
description: '读取指定文件的内容',
|
||||
parameters: {
|
||||
type: 'object',
|
||||
properties: {
|
||||
file_indexes: {
|
||||
type: 'array',
|
||||
items: {
|
||||
type: 'string'
|
||||
},
|
||||
description: '文件序号'
|
||||
}
|
||||
},
|
||||
required: ['file_indexes']
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
export const getFileInputPrompt = ({
|
||||
fileUrls = [],
|
||||
requestOrigin,
|
||||
maxFiles,
|
||||
histories
|
||||
}: {
|
||||
fileUrls?: string[];
|
||||
requestOrigin?: string;
|
||||
maxFiles: number;
|
||||
histories: ChatItemType[];
|
||||
}): {
|
||||
filesMap: Record<string, string>;
|
||||
prompt: string;
|
||||
} => {
|
||||
const filesFromHistories = getHistoryFileLinks(histories);
|
||||
|
||||
if (filesFromHistories.length === 0 && fileUrls.length === 0) {
|
||||
return {
|
||||
filesMap: {},
|
||||
prompt: ''
|
||||
};
|
||||
}
|
||||
|
||||
const parseFn = (urls: string[]) => {
|
||||
const parseUrlList = urls
|
||||
// Remove invalid urls
|
||||
.filter((url) => {
|
||||
if (typeof url !== 'string') return false;
|
||||
|
||||
// 检查相对路径
|
||||
const validPrefixList = ['/', 'http', 'ws'];
|
||||
if (validPrefixList.some((prefix) => url.startsWith(prefix))) {
|
||||
return true;
|
||||
}
|
||||
|
||||
return false;
|
||||
})
|
||||
// Just get the document type file
|
||||
.filter((url) => parseUrlToFileType(url)?.type === 'file')
|
||||
.map((url) => {
|
||||
try {
|
||||
// Check is system upload file
|
||||
if (url.startsWith('/') || (requestOrigin && url.startsWith(requestOrigin))) {
|
||||
// Remove the origin(Make intranet requests directly)
|
||||
if (requestOrigin && url.startsWith(requestOrigin)) {
|
||||
url = url.replace(requestOrigin, '');
|
||||
}
|
||||
}
|
||||
|
||||
return url;
|
||||
} catch (error) {
|
||||
addLog.warn(`Parse url error`, { error });
|
||||
return '';
|
||||
}
|
||||
})
|
||||
.filter(Boolean)
|
||||
.slice(0, maxFiles);
|
||||
|
||||
const parseResult = parseUrlList
|
||||
.map((url) => parseUrlToFileType(url))
|
||||
.filter((item) => item?.name && item?.type === ChatFileTypeEnum.file) as {
|
||||
type: `${ChatFileTypeEnum}`;
|
||||
name: string;
|
||||
url: string;
|
||||
}[];
|
||||
return parseResult;
|
||||
};
|
||||
|
||||
const historyParseResult = parseFn(filesFromHistories);
|
||||
const queryParseResult = parseFn(fileUrls);
|
||||
|
||||
const promptList: { index: string; name: string }[] = [];
|
||||
queryParseResult.forEach((item, index) => {
|
||||
promptList.push({ index: `${historyParseResult.length + index + 1}`, name: item.name });
|
||||
});
|
||||
|
||||
return {
|
||||
filesMap: [...historyParseResult, ...queryParseResult].reduce(
|
||||
(acc, item, index) => {
|
||||
acc[index + 1] = item.url;
|
||||
return acc;
|
||||
},
|
||||
{} as Record<string, string>
|
||||
),
|
||||
prompt: promptList.length > 0 ? JSON.stringify(promptList) : ''
|
||||
};
|
||||
};
|
||||
|
||||
export const addFilePrompt2Input = ({
|
||||
query,
|
||||
filePrompt
|
||||
}: {
|
||||
query: string;
|
||||
filePrompt?: string;
|
||||
}) => {
|
||||
if (!filePrompt) return query;
|
||||
|
||||
return `## File input
|
||||
${filePrompt}
|
||||
|
||||
## Query
|
||||
${query}`;
|
||||
};
|
||||
|
|
@ -0,0 +1,144 @@
|
|||
import type { ChatCompletionTool } from '@fastgpt/global/core/ai/type';
|
||||
import { readFileTool } from './file/utils';
|
||||
import type { FlowNodeInputItemType } from '@fastgpt/global/core/workflow/type/io';
|
||||
import type { JSONSchemaInputType } from '@fastgpt/global/core/app/jsonschema';
|
||||
import {
|
||||
NodeInputKeyEnum,
|
||||
toolValueTypeList,
|
||||
valueTypeJsonSchemaMap
|
||||
} from '@fastgpt/global/core/workflow/constants';
|
||||
import type { McpToolDataType } from '@fastgpt/global/core/app/tool/mcpTool/type';
|
||||
import { FlowNodeTypeEnum } from '@fastgpt/global/core/workflow/node/constant';
|
||||
import { getSystemToolRunTimeNodeFromSystemToolset } from '../../../../utils';
|
||||
import type { RuntimeNodeItemType } from '@fastgpt/global/core/workflow/runtime/type';
|
||||
import { MongoApp } from '../../../../../app/schema';
|
||||
import { getMCPChildren } from '../../../../../app/mcp';
|
||||
import { getMCPToolRuntimeNode } from '@fastgpt/global/core/app/tool/mcpTool/utils';
|
||||
import type { localeType } from '@fastgpt/global/common/i18n/type';
|
||||
|
||||
export const rewriteSubAppsToolset = ({
|
||||
subApps,
|
||||
lang
|
||||
}: {
|
||||
subApps: RuntimeNodeItemType[];
|
||||
lang?: localeType;
|
||||
}) => {
|
||||
return Promise.all(
|
||||
subApps.map(async (node) => {
|
||||
if (node.flowNodeType === FlowNodeTypeEnum.toolSet) {
|
||||
const systemToolId = node.toolConfig?.systemToolSet?.toolId;
|
||||
const mcpToolsetVal = node.toolConfig?.mcpToolSet ?? node.inputs[0].value;
|
||||
if (systemToolId) {
|
||||
const children = await getSystemToolRunTimeNodeFromSystemToolset({
|
||||
toolSetNode: node,
|
||||
lang
|
||||
});
|
||||
return children;
|
||||
} else if (mcpToolsetVal) {
|
||||
const app = await MongoApp.findOne({ _id: node.pluginId }).lean();
|
||||
if (!app) return [];
|
||||
const toolList = await getMCPChildren(app);
|
||||
|
||||
const parentId = mcpToolsetVal.toolId ?? node.pluginId;
|
||||
const children = toolList.map((tool, index) => {
|
||||
const newToolNode = getMCPToolRuntimeNode({
|
||||
avatar: node.avatar,
|
||||
tool,
|
||||
// New ?? Old
|
||||
parentId
|
||||
});
|
||||
newToolNode.nodeId = `${parentId}${index}`; // ID 不能随机,否则下次生成时候就和之前的记录对不上
|
||||
newToolNode.name = `${node.name}/${tool.name}`;
|
||||
|
||||
return newToolNode;
|
||||
});
|
||||
|
||||
return children;
|
||||
}
|
||||
return [];
|
||||
} else {
|
||||
return [node];
|
||||
}
|
||||
})
|
||||
).then((res) => res.flat());
|
||||
};
|
||||
export const getSubApps = ({
|
||||
subApps,
|
||||
addReadFileTool
|
||||
}: {
|
||||
subApps: RuntimeNodeItemType[];
|
||||
addReadFileTool?: boolean;
|
||||
}): ChatCompletionTool[] => {
|
||||
// System Tools: Plan Agent, stop sign, model agent.
|
||||
const systemTools: ChatCompletionTool[] = [
|
||||
// PlanAgentTool,
|
||||
...(addReadFileTool ? [readFileTool] : [])
|
||||
// ModelAgentTool
|
||||
// StopAgentTool,
|
||||
];
|
||||
|
||||
// Node Tools
|
||||
const unitNodeTools = subApps.filter(
|
||||
(item, index, array) => array.findIndex((app) => app.pluginId === item.pluginId) === index
|
||||
);
|
||||
|
||||
const nodeTools = unitNodeTools.map<ChatCompletionTool>((item) => {
|
||||
const toolParams: FlowNodeInputItemType[] = [];
|
||||
let jsonSchema: JSONSchemaInputType | undefined;
|
||||
|
||||
for (const input of item.inputs) {
|
||||
if (input.toolDescription) {
|
||||
toolParams.push(input);
|
||||
}
|
||||
|
||||
if (input.key === NodeInputKeyEnum.toolData) {
|
||||
jsonSchema = (input.value as McpToolDataType).inputSchema;
|
||||
}
|
||||
}
|
||||
|
||||
const description = JSON.stringify({
|
||||
type: item.flowNodeType,
|
||||
name: item.name,
|
||||
intro: item.toolDescription || item.intro
|
||||
});
|
||||
|
||||
if (jsonSchema) {
|
||||
return {
|
||||
type: 'function',
|
||||
function: {
|
||||
name: item.nodeId,
|
||||
description,
|
||||
parameters: jsonSchema
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
const properties: Record<string, any> = {};
|
||||
toolParams.forEach((param) => {
|
||||
const jsonSchema = param.valueType
|
||||
? valueTypeJsonSchemaMap[param.valueType] || toolValueTypeList[0].jsonSchema
|
||||
: toolValueTypeList[0].jsonSchema;
|
||||
|
||||
properties[param.key] = {
|
||||
...jsonSchema,
|
||||
description: param.toolDescription || '',
|
||||
enum: param.enum?.split('\n').filter(Boolean) || undefined
|
||||
};
|
||||
});
|
||||
|
||||
return {
|
||||
type: 'function',
|
||||
function: {
|
||||
name: item.nodeId,
|
||||
description,
|
||||
parameters: {
|
||||
type: 'object',
|
||||
properties,
|
||||
required: toolParams.filter((param) => param.required).map((param) => param.key)
|
||||
}
|
||||
}
|
||||
};
|
||||
});
|
||||
|
||||
return [...systemTools, ...nodeTools];
|
||||
};
|
||||
|
|
@ -0,0 +1,24 @@
|
|||
import type { ChatCompletionTool } from '@fastgpt/global/core/ai/type';
|
||||
import { SubAppIds } from '../constants';
|
||||
|
||||
export const ModelAgentTool: ChatCompletionTool = {
|
||||
type: 'function',
|
||||
function: {
|
||||
name: SubAppIds.model,
|
||||
description: '调用 LLM 模型完成一些通用任务。',
|
||||
parameters: {
|
||||
type: 'object',
|
||||
properties: {
|
||||
systemPrompt: {
|
||||
type: 'string',
|
||||
description: '系统提示词,用于为 LLM 提供完成任务的引导。'
|
||||
},
|
||||
task: {
|
||||
type: 'string',
|
||||
description: '本轮需要完成的任务'
|
||||
}
|
||||
},
|
||||
required: ['task']
|
||||
}
|
||||
}
|
||||
};
|
||||
|
|
@ -0,0 +1,83 @@
|
|||
import type { ChatCompletionMessageParam } from '@fastgpt/global/core/ai/type.d';
|
||||
import { createLLMResponse, type ResponseEvents } from '../../../../../../ai/llm/request';
|
||||
import { getLLMModel } from '../../../../../../ai/model';
|
||||
import { formatModelChars2Points } from '../../../../../../../support/wallet/usage/utils';
|
||||
import type { ChatNodeUsageType } from '@fastgpt/global/support/wallet/bill/type';
|
||||
|
||||
type ModelAgentConfig = {
|
||||
model: string;
|
||||
temperature?: number;
|
||||
top_p?: number;
|
||||
stream?: boolean;
|
||||
};
|
||||
|
||||
type DispatchModelAgentProps = ModelAgentConfig & {
|
||||
systemPrompt: string;
|
||||
task: string;
|
||||
onReasoning: ResponseEvents['onReasoning'];
|
||||
onStreaming: ResponseEvents['onStreaming'];
|
||||
};
|
||||
|
||||
type DispatchPlanAgentResponse = {
|
||||
response: string;
|
||||
usages: ChatNodeUsageType[];
|
||||
};
|
||||
|
||||
export async function dispatchModelAgent({
|
||||
model,
|
||||
temperature,
|
||||
top_p,
|
||||
stream,
|
||||
systemPrompt,
|
||||
task,
|
||||
onReasoning,
|
||||
onStreaming
|
||||
}: DispatchModelAgentProps): Promise<DispatchPlanAgentResponse> {
|
||||
const modelData = getLLMModel(model);
|
||||
|
||||
const messages: ChatCompletionMessageParam[] = [
|
||||
...(systemPrompt
|
||||
? [
|
||||
{
|
||||
role: 'system' as const,
|
||||
content: systemPrompt
|
||||
}
|
||||
]
|
||||
: []),
|
||||
{
|
||||
role: 'user',
|
||||
content: task
|
||||
}
|
||||
];
|
||||
|
||||
const { answerText, usage } = await createLLMResponse({
|
||||
body: {
|
||||
model: modelData.model,
|
||||
temperature,
|
||||
messages: messages,
|
||||
top_p,
|
||||
stream
|
||||
},
|
||||
onReasoning,
|
||||
onStreaming
|
||||
});
|
||||
|
||||
const { totalPoints, modelName } = formatModelChars2Points({
|
||||
model: modelData.model,
|
||||
inputTokens: usage.inputTokens,
|
||||
outputTokens: usage.outputTokens
|
||||
});
|
||||
|
||||
return {
|
||||
response: answerText,
|
||||
usages: [
|
||||
{
|
||||
moduleName: modelName,
|
||||
model: modelData.model,
|
||||
totalPoints,
|
||||
inputTokens: usage.inputTokens,
|
||||
outputTokens: usage.outputTokens
|
||||
}
|
||||
]
|
||||
};
|
||||
}
|
||||
|
|
@ -0,0 +1,48 @@
|
|||
import type { ChatCompletionTool } from '@fastgpt/global/core/ai/type';
|
||||
import { SubAppIds } from '../../constants';
|
||||
|
||||
export type AskAgentToolParamsType = {
|
||||
questions: string[];
|
||||
};
|
||||
|
||||
export const PlanAgentAskTool: ChatCompletionTool = {
|
||||
type: 'function',
|
||||
function: {
|
||||
name: SubAppIds.ask,
|
||||
description: `工具描述:交互式信息澄清助手 (Proactive Clarification Tool)
|
||||
本工具专用于与用户进行对话式交互,主动澄清模糊需求,收集完成任务所需的关键信息。 核心目标是**引导用户提供更具体、更明确的指令**。
|
||||
**触发条件 (Activation Triggers):**
|
||||
* 用户输入信息不完整,缺少必要细节。
|
||||
* 用户表达意图模糊,存在多种可能性。
|
||||
* 需要用户提供主观偏好或个性化设置。
|
||||
**交互策略 (Interaction Strategy):**
|
||||
* **主动询问 (Proactive Inquiry):** 根据用户输入,**推断**缺失的信息,并直接提问。
|
||||
* **避免重复 (No Repetition):** **不要**重复用户的问题,而是针对问题中的**不确定性**进行提问。
|
||||
* **简洁明了 (Concise & Clear):** 使用简短、自然的语言,避免术语和复杂句式。
|
||||
* **目标导向 (Goal-Oriented):** 提问应围绕完成任务所需的**最关键信息**展开。
|
||||
**示例 (Examples):**
|
||||
* 用户:“我想出去旅游。”
|
||||
* 工具:“您希望前往哪个**目的地**?大致的**出行日期**是什么时候?有几位**同行者**?” (直接询问缺失的关键信息)
|
||||
* 用户:“我想知道 Qwen 的全家桶有什么东西。”
|
||||
* 工具:“您对 Qwen 的哪些**具体产品类型**感兴趣?例如,是想了解模型、API 还是应用?” (避免重复问题,而是 уточнить его запрос)
|
||||
**禁止行为 (Prohibited Behaviors):**
|
||||
* **禁止**直接重复用户的问题。
|
||||
* **禁止**一次性提出过多问题,保持对话的流畅性。
|
||||
* **禁止**询问与任务无关的信息。
|
||||
**最终目标 (Final Goal):** 通过高效的对话,获取足够的信息,使后续工具能够顺利完成任务。`,
|
||||
parameters: {
|
||||
type: 'object',
|
||||
properties: {
|
||||
questions: {
|
||||
description: `要向用户搜集的问题列表`,
|
||||
items: {
|
||||
type: 'string',
|
||||
description: '一个具体的、有针对性的问题'
|
||||
},
|
||||
type: 'array'
|
||||
}
|
||||
},
|
||||
required: ['questions']
|
||||
}
|
||||
}
|
||||
};
|
||||
|
|
@ -0,0 +1,18 @@
|
|||
import type { ChatCompletionTool } from '@fastgpt/global/core/ai/type';
|
||||
import { SubAppIds, systemSubInfo } from '../constants';
|
||||
import type { InteractiveNodeResponseType } from '@fastgpt/global/core/workflow/template/system/interactive/type';
|
||||
|
||||
export const PlanCheckInteractive: InteractiveNodeResponseType = {
|
||||
type: 'agentPlanCheck',
|
||||
params: {
|
||||
confirmed: false
|
||||
}
|
||||
};
|
||||
export const PlanAgentTool: ChatCompletionTool = {
|
||||
type: 'function',
|
||||
function: {
|
||||
name: SubAppIds.plan,
|
||||
description: systemSubInfo[SubAppIds.plan].toolDescription,
|
||||
parameters: {}
|
||||
}
|
||||
};
|
||||
|
|
@ -0,0 +1,384 @@
|
|||
import type {
|
||||
ChatCompletionMessageParam,
|
||||
ChatCompletionTool
|
||||
} from '@fastgpt/global/core/ai/type.d';
|
||||
import { createLLMResponse } from '../../../../../../ai/llm/request';
|
||||
import {
|
||||
getPlanAgentSystemPrompt,
|
||||
getReplanAgentSystemPrompt,
|
||||
getReplanAgentUserPrompt,
|
||||
getUserContent
|
||||
} from './prompt';
|
||||
import { getLLMModel } from '../../../../../../ai/model';
|
||||
import { formatModelChars2Points } from '../../../../../../../support/wallet/usage/utils';
|
||||
import type { ChatNodeUsageType } from '@fastgpt/global/support/wallet/bill/type';
|
||||
import type {
|
||||
InteractiveNodeResponseType,
|
||||
WorkflowInteractiveResponseType
|
||||
} from '@fastgpt/global/core/workflow/template/system/interactive/type';
|
||||
import { parseToolArgs } from '../../../utils';
|
||||
import { PlanAgentAskTool, type AskAgentToolParamsType } from './ask/constants';
|
||||
import { PlanCheckInteractive } from './constants';
|
||||
import type { AgentPlanType } from './type';
|
||||
import type { GetSubAppInfoFnType } from '../../type';
|
||||
import { getStepDependon } from '../../master/dependon';
|
||||
import { parseSystemPrompt } from '../../utils';
|
||||
|
||||
type PlanAgentConfig = {
|
||||
systemPrompt?: string;
|
||||
model: string;
|
||||
temperature?: number;
|
||||
top_p?: number;
|
||||
stream?: boolean;
|
||||
};
|
||||
|
||||
type DispatchPlanAgentProps = PlanAgentConfig & {
|
||||
historyMessages: ChatCompletionMessageParam[];
|
||||
interactive?: WorkflowInteractiveResponseType;
|
||||
userInput: string;
|
||||
background?: string;
|
||||
referencePlans?: string;
|
||||
|
||||
isTopPlanAgent: boolean;
|
||||
subAppList: ChatCompletionTool[];
|
||||
getSubAppInfo: GetSubAppInfoFnType;
|
||||
};
|
||||
|
||||
type DispatchPlanAgentResponse = {
|
||||
answerText?: string;
|
||||
plan?: AgentPlanType;
|
||||
completeMessages: ChatCompletionMessageParam[];
|
||||
usages: ChatNodeUsageType[];
|
||||
interactiveResponse?: InteractiveNodeResponseType;
|
||||
};
|
||||
|
||||
export const dispatchPlanAgent = async ({
|
||||
historyMessages,
|
||||
userInput,
|
||||
interactive,
|
||||
subAppList,
|
||||
getSubAppInfo,
|
||||
systemPrompt,
|
||||
model,
|
||||
temperature,
|
||||
top_p,
|
||||
stream,
|
||||
isTopPlanAgent
|
||||
}: DispatchPlanAgentProps): Promise<DispatchPlanAgentResponse> => {
|
||||
const modelData = getLLMModel(model);
|
||||
|
||||
const requestMessages: ChatCompletionMessageParam[] = [
|
||||
{
|
||||
role: 'system',
|
||||
content: getPlanAgentSystemPrompt({
|
||||
getSubAppInfo,
|
||||
subAppList
|
||||
})
|
||||
},
|
||||
...historyMessages
|
||||
];
|
||||
|
||||
// 分类:query/user select/user form
|
||||
const lastMessages = requestMessages[requestMessages.length - 1];
|
||||
console.log('user input:', userInput);
|
||||
|
||||
// 上一轮是 Ask 模式,进行工具调用拼接
|
||||
if (
|
||||
(interactive?.type === 'agentPlanAskUserSelect' || interactive?.type === 'agentPlanAskQuery') &&
|
||||
lastMessages.role === 'assistant' &&
|
||||
lastMessages.tool_calls
|
||||
) {
|
||||
requestMessages.push({
|
||||
role: 'tool',
|
||||
tool_call_id: lastMessages.tool_calls[0].id,
|
||||
content: userInput
|
||||
});
|
||||
// TODO: 是否合理
|
||||
requestMessages.push({
|
||||
role: 'assistant',
|
||||
content: '请基于以上收集的用户信息,重新生成完整的计划,严格按照 JSON Schema 输出。'
|
||||
});
|
||||
} else {
|
||||
// TODO: 这里拼接的话,对于多轮对话不是很友好。
|
||||
requestMessages.push({
|
||||
role: 'user',
|
||||
content: getUserContent({ userInput, systemPrompt, getSubAppInfo })
|
||||
});
|
||||
}
|
||||
|
||||
console.log('Plan request messages');
|
||||
console.dir(
|
||||
{ requestMessages, tools: isTopPlanAgent ? [PlanAgentAskTool] : [] },
|
||||
{ depth: null }
|
||||
);
|
||||
let {
|
||||
answerText,
|
||||
toolCalls = [],
|
||||
usage,
|
||||
getEmptyResponseTip,
|
||||
completeMessages
|
||||
} = await createLLMResponse({
|
||||
body: {
|
||||
model: modelData.model,
|
||||
temperature,
|
||||
messages: requestMessages,
|
||||
top_p,
|
||||
stream,
|
||||
tools: isTopPlanAgent ? [PlanAgentAskTool] : [],
|
||||
tool_choice: 'auto',
|
||||
toolCallMode: modelData.toolChoice ? 'toolChoice' : 'prompt',
|
||||
parallel_tool_calls: false
|
||||
}
|
||||
});
|
||||
|
||||
if (!answerText && !toolCalls.length) {
|
||||
return Promise.reject(getEmptyResponseTip());
|
||||
}
|
||||
|
||||
/*
|
||||
正常输出情况:
|
||||
1. text: 正常生成plan
|
||||
2. toolCall: 调用ask工具
|
||||
3. text + toolCall: 可能生成 plan + 调用ask工具
|
||||
*/
|
||||
|
||||
// 获取生成的 plan
|
||||
const plan = (() => {
|
||||
if (!answerText) {
|
||||
return;
|
||||
}
|
||||
|
||||
const params = parseToolArgs<AgentPlanType>(answerText);
|
||||
if (toolCalls.length === 0 && (!params || !params.task || !params.steps)) {
|
||||
throw new Error('Plan response is not valid');
|
||||
}
|
||||
return params;
|
||||
})();
|
||||
if (plan) {
|
||||
answerText = '';
|
||||
}
|
||||
|
||||
// 只有顶层有交互模式
|
||||
const interactiveResponse: InteractiveNodeResponseType | undefined = (() => {
|
||||
if (!isTopPlanAgent) return;
|
||||
|
||||
const tooCall = toolCalls[0];
|
||||
if (tooCall) {
|
||||
const params = parseToolArgs<AskAgentToolParamsType>(tooCall.function.arguments);
|
||||
if (params) {
|
||||
return {
|
||||
type: 'agentPlanAskQuery',
|
||||
params: {
|
||||
content: params.questions.join('\n')
|
||||
}
|
||||
};
|
||||
} else {
|
||||
console.log(JSON.stringify({ answerText, toolCalls }, null, 2), 'Plan response');
|
||||
return {
|
||||
type: 'agentPlanAskQuery',
|
||||
params: {
|
||||
content: '生成的 ask 结构异常'
|
||||
}
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
// Plan 没有主动交互,则强制触发 check
|
||||
return PlanCheckInteractive;
|
||||
})();
|
||||
|
||||
const { totalPoints, modelName } = formatModelChars2Points({
|
||||
model: modelData.model,
|
||||
inputTokens: usage.inputTokens,
|
||||
outputTokens: usage.outputTokens
|
||||
});
|
||||
|
||||
return {
|
||||
answerText: answerText || '',
|
||||
plan,
|
||||
completeMessages,
|
||||
usages: [
|
||||
{
|
||||
moduleName: '任务规划',
|
||||
model: modelName,
|
||||
totalPoints,
|
||||
inputTokens: usage.inputTokens,
|
||||
outputTokens: usage.outputTokens
|
||||
}
|
||||
],
|
||||
interactiveResponse
|
||||
};
|
||||
};
|
||||
|
||||
export const dispatchReplanAgent = async ({
|
||||
historyMessages,
|
||||
interactive,
|
||||
subAppList,
|
||||
getSubAppInfo,
|
||||
userInput,
|
||||
plan,
|
||||
background,
|
||||
systemPrompt,
|
||||
|
||||
model,
|
||||
temperature,
|
||||
top_p,
|
||||
stream,
|
||||
isTopPlanAgent
|
||||
}: DispatchPlanAgentProps & {
|
||||
plan: AgentPlanType;
|
||||
}): Promise<DispatchPlanAgentResponse> => {
|
||||
const modelData = getLLMModel(model);
|
||||
|
||||
const requestMessages: ChatCompletionMessageParam[] = [
|
||||
{
|
||||
role: 'system',
|
||||
content: getReplanAgentSystemPrompt({
|
||||
getSubAppInfo,
|
||||
subAppList
|
||||
})
|
||||
},
|
||||
...historyMessages
|
||||
];
|
||||
|
||||
// 分类:query/user select/user form
|
||||
const lastMessages = requestMessages[requestMessages.length - 1];
|
||||
|
||||
if (
|
||||
(interactive?.type === 'agentPlanAskUserSelect' || interactive?.type === 'agentPlanAskQuery') &&
|
||||
lastMessages.role === 'assistant' &&
|
||||
lastMessages.tool_calls
|
||||
) {
|
||||
requestMessages.push({
|
||||
role: 'tool',
|
||||
tool_call_id: lastMessages.tool_calls[0].id,
|
||||
content: userInput
|
||||
});
|
||||
// TODO: 确认这里是否有问题
|
||||
requestMessages.push({
|
||||
role: 'assistant',
|
||||
content: '请基于以上收集的用户信息,对 PLAN 进行重新规划,并严格按照 JSON Schema 输出。'
|
||||
});
|
||||
} else {
|
||||
// 获取依赖的步骤
|
||||
const { depends, usage: dependsUsage } = await getStepDependon({
|
||||
model,
|
||||
steps: plan.steps,
|
||||
step: {
|
||||
id: '',
|
||||
title: '重新规划决策依据:需要依赖哪些步骤的判断',
|
||||
description: '本步骤分析先前的执行结果,以确定重新规划时需要依赖哪些特定步骤。'
|
||||
}
|
||||
});
|
||||
// TODO: 推送
|
||||
const replanSteps = plan.steps.filter((step) => depends.includes(step.id));
|
||||
|
||||
requestMessages.push({
|
||||
role: 'user',
|
||||
// 根据需要 replanSteps 生成用户输入
|
||||
content: getReplanAgentUserPrompt({
|
||||
task: userInput,
|
||||
dependsSteps: replanSteps,
|
||||
background,
|
||||
systemPrompt: parseSystemPrompt({ systemPrompt, getSubAppInfo })
|
||||
})
|
||||
});
|
||||
}
|
||||
|
||||
console.log('Replan call messages', JSON.stringify(requestMessages, null, 2));
|
||||
let {
|
||||
answerText,
|
||||
toolCalls = [],
|
||||
usage,
|
||||
getEmptyResponseTip,
|
||||
completeMessages
|
||||
} = await createLLMResponse({
|
||||
body: {
|
||||
model: modelData.model,
|
||||
temperature,
|
||||
messages: requestMessages,
|
||||
top_p,
|
||||
stream,
|
||||
tools: isTopPlanAgent ? [PlanAgentAskTool] : [],
|
||||
tool_choice: 'auto',
|
||||
toolCallMode: modelData.toolChoice ? 'toolChoice' : 'prompt',
|
||||
parallel_tool_calls: false
|
||||
}
|
||||
});
|
||||
|
||||
if (!answerText && !toolCalls.length) {
|
||||
return Promise.reject(getEmptyResponseTip());
|
||||
}
|
||||
|
||||
/*
|
||||
正常输出情况:
|
||||
1. text: 正常生成plan
|
||||
2. toolCall: 调用ask工具
|
||||
3. text + toolCall: 可能生成 plan + 调用ask工具
|
||||
*/
|
||||
const rePlan = (() => {
|
||||
if (!answerText) {
|
||||
return;
|
||||
}
|
||||
|
||||
const params = parseToolArgs<AgentPlanType>(answerText);
|
||||
if (toolCalls.length === 0 && (!params || !params.steps)) {
|
||||
throw new Error('Replan response is not valid');
|
||||
}
|
||||
return params;
|
||||
})();
|
||||
if (rePlan) {
|
||||
answerText = '';
|
||||
}
|
||||
|
||||
// 只有顶层有交互模式
|
||||
const interactiveResponse: InteractiveNodeResponseType | undefined = (() => {
|
||||
if (!isTopPlanAgent) return;
|
||||
|
||||
const tooCall = toolCalls[0];
|
||||
if (tooCall) {
|
||||
const params = parseToolArgs<AskAgentToolParamsType>(tooCall.function.arguments);
|
||||
if (params) {
|
||||
return {
|
||||
type: 'agentPlanAskQuery',
|
||||
params: {
|
||||
content: params.questions.join('\n')
|
||||
}
|
||||
};
|
||||
} else {
|
||||
console.log(JSON.stringify({ answerText, toolCalls }, null, 2), 'Replan response');
|
||||
return {
|
||||
type: 'agentPlanAskQuery',
|
||||
params: {
|
||||
content: '生成的 ask 结构异常'
|
||||
}
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
// RePlan 没有主动交互,则强制触发 check
|
||||
return PlanCheckInteractive;
|
||||
})();
|
||||
|
||||
const { totalPoints, modelName } = formatModelChars2Points({
|
||||
model: modelData.model,
|
||||
inputTokens: usage.inputTokens,
|
||||
outputTokens: usage.outputTokens
|
||||
});
|
||||
|
||||
return {
|
||||
answerText,
|
||||
plan: rePlan,
|
||||
completeMessages,
|
||||
usages: [
|
||||
{
|
||||
moduleName: '重新规划',
|
||||
model: modelName,
|
||||
totalPoints,
|
||||
inputTokens: usage.inputTokens,
|
||||
outputTokens: usage.outputTokens
|
||||
}
|
||||
],
|
||||
interactiveResponse
|
||||
};
|
||||
};
|
||||
|
|
@ -0,0 +1,499 @@
|
|||
import type { ChatCompletionTool } from '@fastgpt/global/core/ai/type';
|
||||
import { SubAppIds } from '../constants';
|
||||
import { PlanAgentAskTool } from './ask/constants';
|
||||
import type { GetSubAppInfoFnType } from '../../type';
|
||||
import type { AgentPlanStepType } from './type';
|
||||
import { parseSystemPrompt } from '../../utils';
|
||||
|
||||
const getSubAppPrompt = ({
|
||||
getSubAppInfo,
|
||||
subAppList
|
||||
}: {
|
||||
getSubAppInfo: GetSubAppInfoFnType;
|
||||
subAppList: ChatCompletionTool[];
|
||||
}) => {
|
||||
return subAppList
|
||||
.map((app) => {
|
||||
const info = getSubAppInfo(app.function.name);
|
||||
if (!info) return '';
|
||||
return `- [@${info.name}]: ${info.toolDescription};`;
|
||||
})
|
||||
.filter(Boolean)
|
||||
.join('\n');
|
||||
};
|
||||
|
||||
export const getPlanAgentSystemPrompt = ({
|
||||
getSubAppInfo,
|
||||
subAppList
|
||||
}: {
|
||||
getSubAppInfo: GetSubAppInfoFnType;
|
||||
subAppList: ChatCompletionTool[];
|
||||
}) => {
|
||||
const subAppPrompt = getSubAppPrompt({ getSubAppInfo, subAppList });
|
||||
return `
|
||||
<role>
|
||||
你是一个专业的主题计划构建专家,擅长将复杂的主题学习和探索过程转化为结构清晰、可执行的渐进式学习路径。你的规划方法强调:
|
||||
1. 深入系统性理解
|
||||
2. 逻辑递进的知识构建
|
||||
3. 动态适应性调整
|
||||
4. 最小化学习路径的不确定性
|
||||
</role>
|
||||
<core_philosophy>
|
||||
1. **渐进式规划**:只规划到下一个关键信息点或决策点,通过 'replan' 标识需要基于执行结果调整的任务节点
|
||||
2. **最小化假设**:不对未知信息做过多预设,而是通过执行步骤获取
|
||||
3. **前置信息优先**:制定计划前,优先收集必要的前置信息,而不是将信息收集作为计划的一部分,如果用户提供的 PLAN 中有前置搜集工作请在规划之前搜集
|
||||
4. **格式限制**:所有输出的信息必须输出符合 JSON Schema 的格式
|
||||
5. **目标强化**:所有的任务信息必须要规划出一个 PLAN
|
||||
</core_philosophy>
|
||||
<toolset>
|
||||
「以下是在规划 PLAN 过程中可以使用在每个 step 的 description 中的工具」
|
||||
${subAppPrompt}
|
||||
「以下是在规划 PLAN 过程中可以用来调用的工具,不应该在 step 的 description 中」
|
||||
- [@${SubAppIds.ask}]:${PlanAgentAskTool.function.description}
|
||||
</toolset>
|
||||
<process>
|
||||
1. **前置信息检查**:
|
||||
- 首先判断是否具备制定计划所需的所有关键信息
|
||||
- 如果缺少用户偏好、具体场景细节、关键约束、目标参数等前置信息
|
||||
- **立即调用 ${SubAppIds.ask} 工具**,提出清晰的问题列表收集信息
|
||||
- **切记**:不要将"询问用户"、"收集信息"作为计划的步骤
|
||||
|
||||
2. **计划生成**:
|
||||
- 在获得必要的前置信息后,再开始制定具体计划
|
||||
- 提取核心目标、关键要素、约束与本地化偏好
|
||||
- 如果用户提供了前置规划信息,优先基于用户的步骤安排和偏好来生成计划
|
||||
- 输出语言风格本地化(根据用户输入语言进行术语与语序调整)
|
||||
- 在步骤的 description 中可以使用 @符号标记执行时需要的工具
|
||||
- 严格按照 JSON Schema 生成完整计划,不得输出多余内容
|
||||
|
||||
3. **决策点处理**:
|
||||
- 如果计划中存在需要基于执行结果做决策的节点,使用 replan 字段标记
|
||||
- 如果用户有自己输入的plan,按照其流程规划,在需要决策的地方设置 replan
|
||||
</process>
|
||||
- 必须严格输出 JSON
|
||||
- 输出结构必须符合以下 JSON Schema,不需要添加额外的信息:
|
||||
<requirements>
|
||||
\`\`\`json(不包括)
|
||||
{
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"task": {
|
||||
"type": "string",
|
||||
"description": "任务主题, 准确覆盖本次所有执行步骤的核心内容和维度"
|
||||
},
|
||||
"steps": {
|
||||
"type": "array",
|
||||
"description": "完成任务的步骤列表",
|
||||
"items": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"id": {
|
||||
"type": "string",
|
||||
"description": "步骤的唯一标识"
|
||||
},
|
||||
"title": {
|
||||
"type": "string",
|
||||
"description": "步骤标题"
|
||||
},
|
||||
"description": {
|
||||
"type": "string",
|
||||
"description": "步骤的具体描述, 可以使用@符号声明需要用到的工具。"
|
||||
}
|
||||
},
|
||||
"required": ["id", "title", "description"]
|
||||
}
|
||||
},
|
||||
"replan": {
|
||||
"type": "boolean",
|
||||
"description": "是否需要继续规划依赖的前面步骤,true表示需要继续规划,false表示不需要"
|
||||
}
|
||||
},
|
||||
"required": ["task", "steps"]
|
||||
}
|
||||
\`\`\`
|
||||
</requirements>
|
||||
<guardrails>
|
||||
- 不生成违法、不道德或有害内容;敏感主题输出合规替代方案。
|
||||
- 避免过于具体的时间/预算承诺与无法验证的保证。
|
||||
- 保持中立、客观;必要时指出风险与依赖。
|
||||
- 只输出 JSON 计划内容,不能输出其他解释。
|
||||
</guardrails>
|
||||
<best-practices>
|
||||
步骤指导
|
||||
颗粒度把控
|
||||
- **保持平衡**:步骤既不过于宏观(难以执行),也不过于细碎(失去灵活性)
|
||||
- **可执行性**:每个步骤应该是一个独立可执行的任务单元
|
||||
- **结果明确**:每个步骤应产生明确的输出,为后续决策提供依据
|
||||
步骤数量的自然边界
|
||||
- **认知负载**:单次规划保持在用户可理解的复杂度内
|
||||
- **执行周期**:考虑合理的执行和反馈周期
|
||||
- **依赖关系**:强依赖的步骤可以规划在一起,弱依赖的分开
|
||||
- **不确定性**:不确定性越高,初始规划应该越保守
|
||||
description 字段最佳实践
|
||||
- **明确工具和目标**:"使用 @research_agent 搜索X的最新进展,重点关注Y方面"
|
||||
- **标注关键信息点**:"了解A的特性,特别注意是否支持B功能(这将影响后续方案选择)"
|
||||
- **预示可能分支**:"调研市场反馈,如果正面则深入了解优势,如果负面则分析原因"
|
||||
- **说明探索重点**:"搜索相关案例,关注:1)实施成本 2)成功率 3)常见问题"
|
||||
</best-practices>
|
||||
<examples>
|
||||
<example name="线性流程 - 完整规划">
|
||||
**场景**:用户已经提供了明确的学习主题和目标,可以直接制定计划。
|
||||
|
||||
\`\`\`json
|
||||
{
|
||||
"task": "[主题] 的完整了解和学习",
|
||||
"steps": [
|
||||
{
|
||||
"id": "step1",
|
||||
"title": "了解基础概念",
|
||||
"description": "使用 @[搜索工具] 搜索 [主题] 的基本概念、核心原理、关键术语"
|
||||
},
|
||||
{
|
||||
"id": "step2",
|
||||
"title": "学习具体方法",
|
||||
"description": "使用 @[搜索工具] 查询 [主题] 的具体操作方法、实施步骤、常用技巧"
|
||||
},
|
||||
{
|
||||
"id": "step3",
|
||||
"title": "了解实践应用",
|
||||
"description": "使用 @[搜索工具] 搜索 [主题] 的实际应用案例、最佳实践、经验教训"
|
||||
}
|
||||
],
|
||||
"replan": true
|
||||
}
|
||||
\`\`\`
|
||||
</example>
|
||||
<example name="探索分支 - 条件决策">
|
||||
\`\`\`json
|
||||
{
|
||||
"task": "评估 [方案A] 是否应该替换 [方案B]",
|
||||
"steps": [
|
||||
{
|
||||
"id": "step1",
|
||||
"title": "对比关键差异",
|
||||
"description": "使用 @[分析工具] 搜索 [方案A] vs [方案B] 的对比分析,重点关注:核心差异、优劣势、转换成本"
|
||||
},
|
||||
{
|
||||
"id": "step2",
|
||||
"title": "评估变更影响",
|
||||
"description": "使用 @[分析工具] 搜索相关的迁移案例、所需资源、潜在风险"
|
||||
}
|
||||
],
|
||||
"replan": true
|
||||
}
|
||||
\`\`\`
|
||||
</example>
|
||||
<example name="并行探索 - 多维调研">
|
||||
\`\`\`json
|
||||
{
|
||||
"task": "选择最适合的 [工具/方案类型]",
|
||||
"steps": [
|
||||
{
|
||||
"id": "step1",
|
||||
"title": "调研主流选项",
|
||||
"description": "使用 @[调研工具] 搜索当前主流的 [工具/方案],了解各自特点、适用场景、关键指标"
|
||||
},
|
||||
{
|
||||
"id": "step2",
|
||||
"title": "分析特定维度",
|
||||
"description": "使用 @[分析工具] 深入了解 [特定关注点],如成本、性能、易用性等关键决策因素"
|
||||
}
|
||||
],
|
||||
"replan": true
|
||||
}
|
||||
\`\`\`
|
||||
</example>
|
||||
<example name="迭代任务 - 渐进探索">
|
||||
\`\`\`json
|
||||
{
|
||||
"task": "找出 [目标数量] 个 [符合条件] 的 [目标对象]",
|
||||
"steps": [
|
||||
{
|
||||
"id": "step1",
|
||||
"title": "初步搜索",
|
||||
"description": "使用 @[搜索工具] 搜索 [目标对象],获取初步结果列表"
|
||||
}
|
||||
],
|
||||
"replan": true
|
||||
}
|
||||
\`\`\`
|
||||
</example>
|
||||
<example name="问题诊断 - 分析解决">
|
||||
\`\`\`json
|
||||
{
|
||||
"task": "解决 [问题描述]",
|
||||
"steps": [
|
||||
{
|
||||
"id": "step1",
|
||||
"title": "问题分析",
|
||||
"description": "使用 @[诊断工具] 搜索 [问题] 的常见原因、诊断方法"
|
||||
},
|
||||
{
|
||||
"id": "step2",
|
||||
"title": "寻找解决方案",
|
||||
"description": "使用 @[搜索工具] 查找类似问题的解决方案、修复步骤"
|
||||
}
|
||||
],
|
||||
"replan": true
|
||||
}
|
||||
\`\`\`
|
||||
</example>
|
||||
</examples>`;
|
||||
};
|
||||
|
||||
export const getUserContent = ({
|
||||
userInput,
|
||||
systemPrompt,
|
||||
getSubAppInfo
|
||||
}: {
|
||||
userInput: string;
|
||||
systemPrompt?: string;
|
||||
getSubAppInfo: GetSubAppInfoFnType;
|
||||
}) => {
|
||||
let userContent = `任务描述:${userInput}`;
|
||||
if (systemPrompt) {
|
||||
userContent += `\n\n背景信息:${parseSystemPrompt({ systemPrompt, getSubAppInfo })}\n请按照用户提供的背景信息来重新生成计划,优先遵循用户的步骤安排和偏好。`;
|
||||
}
|
||||
return userContent;
|
||||
};
|
||||
|
||||
export const getReplanAgentSystemPrompt = ({
|
||||
getSubAppInfo,
|
||||
subAppList
|
||||
}: {
|
||||
getSubAppInfo: GetSubAppInfoFnType;
|
||||
subAppList: ChatCompletionTool[];
|
||||
}) => {
|
||||
const subAppPrompt = getSubAppPrompt({ getSubAppInfo, subAppList });
|
||||
|
||||
return `<role>
|
||||
你是一个智能流程优化专家,专门负责在已完成的任务步骤基础上,追加生成优化步骤来完善整个流程,确保任务目标的完美达成。
|
||||
你的任务不是重新规划,而是基于现有执行结果和任务类型,决定是输出总结还是继续生成优化步骤。
|
||||
</role>
|
||||
<optimization_philosophy>
|
||||
核心原则:
|
||||
1. **任务类型识别**:区分确定性任务(Deterministic Task)和探究性任务(Exploratory Task)
|
||||
2. **追加优化**:在现有步骤基础上增加新步骤,不修改已完成的工作
|
||||
3. **结果导向**:基于实际执行结果,识别需要进一步完善的方面
|
||||
4. **价值最大化**:确保每个新步骤都能为整体目标提供实际价值
|
||||
5. **流程闭环**:补充遗漏的环节,形成完整的任务闭环
|
||||
6. **任务核查**:确保最终输出的步骤能够完整覆盖用户最初提出的任务目标
|
||||
</optimization_philosophy>
|
||||
|
||||
<task_type_definition>
|
||||
1. **确定性任务(Deterministic Task)**:
|
||||
- 特征:有明确的答案或结论,问题边界清晰
|
||||
- 示例:查询天气、查找特定信息、计算数值、回答事实性问题、解决明确定义的问题
|
||||
- 策略:如果已有信息足以给出准确答案,直接输出总结步骤
|
||||
|
||||
2. **探究性任务(Exploratory Task)**:
|
||||
- 特征:需要深入探索、多维度分析、创造性规划,答案越详细越好
|
||||
- 示例:制定旅游计划、设计解决方案、学习某个主题、评估多个选项、创作内容、规划项目
|
||||
- 策略:即使已有一些结果,也应该生成更详细的优化步骤,追求全面性和深度
|
||||
</task_type_definition>
|
||||
|
||||
<tools>
|
||||
「以下是在规划 PLAN 过程中可以使用在每个 step 的 description 中的工具」
|
||||
${subAppPrompt}
|
||||
「以下是在规划 PLAN 过程中可以用来调用的工具,不应该在 step 的 description 中」
|
||||
- [@${SubAppIds.ask}]:${PlanAgentAskTool.function.description}
|
||||
</tools>
|
||||
|
||||
<process>
|
||||
1. **任务类型识别:**
|
||||
* 首先判断「任务目标」属于哪种类型:
|
||||
* **确定性任务**:是否是查询特定信息、回答事实问题、计算、查找等明确答案的任务?
|
||||
* **探究性任务**:是否需要规划、设计、学习、评估、创作等深入探索的任务?
|
||||
* 记住这个判断,它将影响后续的决策
|
||||
|
||||
2. **完整性评估:**
|
||||
* 审视「关键步骤执行结果」及其「执行结果」
|
||||
* 深度思考:
|
||||
* (a) 基于现有的信息,是否能够对用户最初提出的「任务目标」,给出一个准确、完整、且具有实践指导意义的【最终结论】?
|
||||
* (b) 是否存在任何潜在的风险、遗漏的信息、或未充分考虑的因素,可能导致【最终结论】不够可靠或有效?
|
||||
* 结合任务类型做出决策:
|
||||
* **确定性任务**:如果(a)为【是】且(b)为【否】,直接进入【总结步骤】
|
||||
* **探究性任务**:即使(a)为【是】,也要考虑是否可以通过更多步骤提供更全面、更深入的结果。只有当已有信息非常充分、全面时才进入【总结步骤】,否则进入【优化步骤】生成更详细的规划
|
||||
|
||||
3. **优化步骤 (当需要继续优化时执行):**
|
||||
* 识别需要进一步优化和完善的环节:
|
||||
* 针对「关键步骤执行结果」的不足之处,明确指出需要补充的信息、需要重新审视的假设、或者需要进一步探索的方向
|
||||
* **对于探究性任务**:即使现有结果不错,也考虑如何让答案更全面、更详细、更有价值
|
||||
* 前置信息检查:
|
||||
* 首先判断是否具备制定计划所需的所有关键信息
|
||||
* 如果缺少用户偏好、具体场景细节、关键约束、目标参数等前置信息
|
||||
* **立即调用 ${SubAppIds.ask} 工具**,提出清晰的问题列表收集信息
|
||||
* **切记**:不要将"询问用户"、"收集信息"作为计划的步骤
|
||||
* 生成具体的追加步骤:
|
||||
* 基于上述识别结果,设计清晰、可操作的后续行动步骤
|
||||
* 确保新步骤与已有工作形成有机整体
|
||||
* **对于探究性任务**:追求深度和广度,生成多个维度的优化步骤
|
||||
|
||||
4. **总结步骤 (当可以总结时执行):**
|
||||
* **确定性任务**:如果已有足够信息可以给出准确答案
|
||||
* **探究性任务**:如果已经进行了充分的多轮探索,信息已经非常全面和详细
|
||||
* 输出格式:
|
||||
* 步骤标题为 \`生成总结报告\`
|
||||
* 步骤描述为 \`基于现有步骤的结果,生成一个总结报告\`
|
||||
|
||||
**所有输出严格遵循 JSON Schema 格式的追加优化步骤**
|
||||
</process>
|
||||
- 必须严格输出 JSON 格式
|
||||
- 生成的是**追加步骤**,用于在现有工作基础上进一步优化
|
||||
- 新步骤应该有明确的价值和目标,避免重复性工作
|
||||
- 输出的结构必须符合以下 JSON Schema:
|
||||
<requirements>
|
||||
\`\`\`json (不包含)
|
||||
{
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"task": {
|
||||
"type": "string",
|
||||
"description": "优化任务描述,说明这些追加步骤的整体目标 或者 说明此任务已经可以进行总结"
|
||||
},
|
||||
"steps": {
|
||||
"type": "array",
|
||||
"description": "追加的优化步骤列表",
|
||||
"items": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"id": {
|
||||
"type": "string",
|
||||
"description": "步骤的唯一标识,建议使用 optimize{{迭代轮次}}-1, optimize{{迭代轮次}}-2 等格式"
|
||||
},
|
||||
"title": {
|
||||
"type": "string",
|
||||
"description": "步骤标题, 当任务可以总结的时候 title 必须为 生成总结报告"
|
||||
},
|
||||
"description": {
|
||||
"type": "string",
|
||||
"description": "步骤的具体描述,可以使用@符号声明需要用到的工具, 当任务可以总结的时候 description 必须为 基于现有步骤的结果,生成一个总结报告"
|
||||
}
|
||||
},
|
||||
"required": [
|
||||
"id",
|
||||
"title",
|
||||
"description"
|
||||
]
|
||||
}
|
||||
},
|
||||
"replan": {
|
||||
"type": "boolean",
|
||||
"description": "是否需要继续规划依赖的前面步骤,true表示需要继续规划,false表示不需要"
|
||||
}
|
||||
},
|
||||
"required": [
|
||||
"task",
|
||||
"steps"
|
||||
]
|
||||
}
|
||||
\`\`\`
|
||||
</requirements>
|
||||
<guardrails>
|
||||
- 不生成违法、不道德或有害内容;敏感主题输出合规替代方案。
|
||||
- 避免过于具体的时间/预算承诺与无法验证的保证。
|
||||
- 保持中立、客观;必要时指出风险与依赖。
|
||||
- 只输出 JSON 计划内容,不能输出其他解释。
|
||||
</guardrails>
|
||||
<best-practices>
|
||||
### 调整策略
|
||||
- **复用优先**:保留已正确的步骤,仅修改必要部分
|
||||
- **清晰替换**:若原步骤失效,用新步骤完整替代
|
||||
- **补充缺口**:当反馈表明信息不足或路径缺失时,添加新步骤
|
||||
- **简化结构**:移除冗余或冲突步骤,保持计划简洁清晰
|
||||
### 步骤指导
|
||||
#### 颗粒度把控
|
||||
- **保持平衡**:步骤既不过于宏观(难以执行),也不过于细碎(失去灵活性)
|
||||
- **可执行性**:每个步骤应该是一个独立可执行的任务单元
|
||||
- **结果明确**:每个步骤应产生明确的输出,为后续决策提供依据
|
||||
#### 步骤数量的自然边界
|
||||
- **认知负载**:单次规划保持在用户可理解的复杂度内
|
||||
- **执行周期**:考虑合理的执行和反馈周期
|
||||
- **依赖关系**:强依赖的步骤可以规划在一起,弱依赖的分开
|
||||
- **不确定性**:不确定性越高,初始规划应该越保守
|
||||
### description 字段最佳实践
|
||||
- **明确工具和目标**:"使用 @research_agent 搜索X的最新进展,重点关注Y方面"
|
||||
- **标注关键信息点**:"了解A的特性,特别注意是否支持B功能(这将影响后续方案选择)"
|
||||
- **预示可能分支**:"调研市场反馈,如果正面则深入了解优势,如果负面则分析原因"
|
||||
- **说明探索重点**:"搜索相关案例,关注:1)实施成本 2)成功率 3)常见问题"
|
||||
</best-practices>
|
||||
<examples>
|
||||
<example name="旅游规划优化">
|
||||
\`\`\`json
|
||||
{
|
||||
"task": "基于已完成的旅游规划,追加优化步骤提升计划质量和用户体验",
|
||||
"steps": [
|
||||
{
|
||||
"id": "optimize1-1",
|
||||
"title": "生成详细的每日时间表",
|
||||
"description": "基于已收集的景点和餐厅信息,使用 @tavily_search 查询具体的开放时间和预约要求,制定精确到小时的每日行程安排"
|
||||
},
|
||||
{
|
||||
"id": "optimize1-2",
|
||||
"title": "制作便携式旅游指南",
|
||||
"description": "整合所有收集的信息,生成包含地图标注、联系方式、应急信息的便携式旅游指南文档"
|
||||
}
|
||||
],
|
||||
"replan": false
|
||||
}
|
||||
\`\`\`
|
||||
</example>
|
||||
<example name="任务已经完成,输出总结">
|
||||
\`\`\`
|
||||
json
|
||||
{
|
||||
"task": "当前的结果已经可以满足任务,请做一个总结来输出最后的答案",
|
||||
"steps": [
|
||||
{
|
||||
"id": "optimize1-1",
|
||||
"title": "生成总结报告",
|
||||
"description": "基于现有步骤的结果,生成一个总结报告"
|
||||
}
|
||||
],
|
||||
"replan": false
|
||||
}
|
||||
\`\`\`
|
||||
</example>
|
||||
</examples>`;
|
||||
};
|
||||
|
||||
export const getReplanAgentUserPrompt = ({
|
||||
task,
|
||||
background,
|
||||
systemPrompt,
|
||||
dependsSteps
|
||||
}: {
|
||||
task: string;
|
||||
background?: string;
|
||||
systemPrompt?: string;
|
||||
dependsSteps: AgentPlanStepType[];
|
||||
}) => {
|
||||
console.log('replan systemPrompt:', systemPrompt);
|
||||
const stepsResponsePrompt = dependsSteps
|
||||
.map(
|
||||
(step) => `步骤 ${step.id}:
|
||||
- 标题: ${step.title}
|
||||
- 执行结果: ${step.response}`
|
||||
)
|
||||
.join('\n');
|
||||
const stepsIdPrompt = dependsSteps.map((step) => step.id).join(', ');
|
||||
|
||||
return `「任务目标」:${task}
|
||||
${background ? `「背景信息」:${background}` : ''}
|
||||
|
||||
${
|
||||
systemPrompt
|
||||
? `「用户前置规划」:
|
||||
${systemPrompt}`
|
||||
: ''
|
||||
}
|
||||
|
||||
基于以下关键步骤的执行结果进行优化:${stepsIdPrompt}
|
||||
|
||||
「关键步骤执行结果」:
|
||||
|
||||
${stepsResponsePrompt}
|
||||
|
||||
请基于上述关键步骤 ${stepsIdPrompt} 的执行结果,生成能够进一步优化和完善整个任务目标的追加步骤,如果有「用户前置规划」请按照用户的前置规划来重新生成计划,优先遵循用户的步骤安排和偏好。。
|
||||
如果「关键步骤执行结果」已经满足了当前的「任务目标」,请直接返回一个总结的步骤来提取最终的答案,而不需要进行其他的讨论`;
|
||||
};
|
||||
|
|
@ -0,0 +1,13 @@
|
|||
export type AgentPlanStepType = {
|
||||
id: string;
|
||||
title: string;
|
||||
description: string;
|
||||
depends_on?: string[];
|
||||
response?: string;
|
||||
summary?: string;
|
||||
};
|
||||
export type AgentPlanType = {
|
||||
task: string;
|
||||
steps: AgentPlanStepType[];
|
||||
replan?: boolean;
|
||||
};
|
||||
|
|
@ -0,0 +1,10 @@
|
|||
import type { ChatCompletionTool } from '@fastgpt/global/core/ai/type';
|
||||
import { SubAppIds } from '../constants';
|
||||
|
||||
export const StopAgentTool: ChatCompletionTool = {
|
||||
type: 'function',
|
||||
function: {
|
||||
name: SubAppIds.stop,
|
||||
description: '如果完成了所有的任务,可调用此工具。'
|
||||
}
|
||||
};
|
||||
|
|
@ -0,0 +1,190 @@
|
|||
import type { StoreSecretValueType } from '@fastgpt/global/common/secret/type';
|
||||
import { SystemToolSecretInputTypeEnum } from '@fastgpt/global/core/app/tool/systemTool/constants';
|
||||
import type { DispatchSubAppResponse } from '../../type';
|
||||
import { splitCombineToolId } from '@fastgpt/global/core/app/tool/utils';
|
||||
import { getSystemToolById } from '../../../../../../app/tool/controller';
|
||||
import { getSecretValue } from '../../../../../../../common/secret/utils';
|
||||
import { MongoSystemTool } from '../../../../../../plugin/tool/systemToolSchema';
|
||||
import { APIRunSystemTool } from '../../../../../../app/tool/api';
|
||||
import type {
|
||||
ChatDispatchProps,
|
||||
RuntimeNodeItemType
|
||||
} from '@fastgpt/global/core/workflow/runtime/type';
|
||||
import type { SseResponseEventEnum } from '@fastgpt/global/core/workflow/runtime/constants';
|
||||
import { textAdaptGptResponse } from '@fastgpt/global/core/workflow/runtime/utils';
|
||||
import type { NodeInputKeyEnum } from '@fastgpt/global/core/workflow/constants';
|
||||
import { NodeOutputKeyEnum } from '@fastgpt/global/core/workflow/constants';
|
||||
import { pushTrack } from '../../../../../../../common/middle/tracks/utils';
|
||||
import { getErrText } from '@fastgpt/global/common/error/utils';
|
||||
import { getAppVersionById } from '../../../../../../app/version/controller';
|
||||
import { MCPClient } from '../../../../../../app/mcp';
|
||||
import type { McpToolDataType } from '@fastgpt/global/core/app/tool/mcpTool/type';
|
||||
|
||||
type SystemInputConfigType = {
|
||||
type: SystemToolSecretInputTypeEnum;
|
||||
value: StoreSecretValueType;
|
||||
};
|
||||
type Props = {
|
||||
node: RuntimeNodeItemType;
|
||||
params: {
|
||||
[NodeInputKeyEnum.toolData]?: McpToolDataType;
|
||||
[NodeInputKeyEnum.systemInputConfig]?: SystemInputConfigType;
|
||||
[key: string]: any;
|
||||
};
|
||||
runningUserInfo: ChatDispatchProps['runningUserInfo'];
|
||||
runningAppInfo: ChatDispatchProps['runningAppInfo'];
|
||||
variables: ChatDispatchProps['variables'];
|
||||
workflowStreamResponse: ChatDispatchProps['workflowStreamResponse'];
|
||||
};
|
||||
|
||||
export const dispatchTool = async ({
|
||||
node: { name, version, toolConfig },
|
||||
params: { system_input_config, system_toolData, ...params },
|
||||
runningUserInfo,
|
||||
runningAppInfo,
|
||||
variables,
|
||||
workflowStreamResponse
|
||||
}: Props): Promise<DispatchSubAppResponse> => {
|
||||
try {
|
||||
if (toolConfig?.systemTool?.toolId) {
|
||||
const tool = await getSystemToolById(toolConfig?.systemTool.toolId);
|
||||
const inputConfigParams = await (async () => {
|
||||
switch (system_input_config?.type) {
|
||||
case SystemToolSecretInputTypeEnum.team:
|
||||
return Promise.reject(new Error('This is not supported yet'));
|
||||
case SystemToolSecretInputTypeEnum.manual:
|
||||
return getSecretValue({
|
||||
storeSecret: system_input_config.value || {}
|
||||
});
|
||||
case SystemToolSecretInputTypeEnum.system:
|
||||
default:
|
||||
// read from mongo
|
||||
const dbPlugin = await MongoSystemTool.findOne({
|
||||
pluginId: tool.id
|
||||
}).lean();
|
||||
return dbPlugin?.inputListVal || {};
|
||||
}
|
||||
})();
|
||||
const inputs = {
|
||||
...Object.fromEntries(Object.entries(params)),
|
||||
...inputConfigParams
|
||||
};
|
||||
|
||||
const formatToolId = tool.id.split('-')[1];
|
||||
let answerText = '';
|
||||
|
||||
const res = await APIRunSystemTool({
|
||||
toolId: formatToolId,
|
||||
inputs,
|
||||
systemVar: {
|
||||
user: {
|
||||
id: variables.userId,
|
||||
username: runningUserInfo.username,
|
||||
contact: runningUserInfo.contact,
|
||||
membername: runningUserInfo.memberName,
|
||||
teamName: runningUserInfo.teamName,
|
||||
teamId: runningUserInfo.teamId,
|
||||
name: runningUserInfo.tmbId
|
||||
},
|
||||
app: {
|
||||
id: runningAppInfo.id,
|
||||
name: runningAppInfo.id
|
||||
},
|
||||
tool: {
|
||||
id: formatToolId,
|
||||
version: version || tool.versionList?.[0]?.value || ''
|
||||
},
|
||||
time: variables.cTime
|
||||
},
|
||||
onMessage: ({ type, content }) => {
|
||||
if (workflowStreamResponse && content) {
|
||||
answerText += content;
|
||||
workflowStreamResponse({
|
||||
event: type as unknown as SseResponseEventEnum,
|
||||
data: textAdaptGptResponse({
|
||||
text: content
|
||||
})
|
||||
});
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
let result = res.output || {};
|
||||
|
||||
if (res.error) {
|
||||
// String error(Common error, not custom)
|
||||
if (typeof res.error === 'string') {
|
||||
throw new Error(res.error);
|
||||
}
|
||||
|
||||
// Custom error field
|
||||
return Promise.reject(res.error);
|
||||
}
|
||||
|
||||
const usagePoints = (() => {
|
||||
if (params.system_input_config?.type !== SystemToolSecretInputTypeEnum.system) {
|
||||
return 0;
|
||||
}
|
||||
return (tool.systemKeyCost ?? 0) + (tool.currentCost ?? 0);
|
||||
})();
|
||||
pushTrack.runSystemTool({
|
||||
teamId: runningUserInfo.teamId,
|
||||
tmbId: runningUserInfo.tmbId,
|
||||
uid: runningUserInfo.tmbId,
|
||||
toolId: tool.id,
|
||||
result: 1,
|
||||
usagePoint: usagePoints,
|
||||
msg: result[NodeOutputKeyEnum.systemError]
|
||||
});
|
||||
|
||||
return {
|
||||
response: JSON.stringify(result),
|
||||
usages: [
|
||||
{
|
||||
moduleName: name,
|
||||
totalPoints: usagePoints
|
||||
}
|
||||
]
|
||||
};
|
||||
} else if (toolConfig?.mcpTool?.toolId) {
|
||||
const { pluginId } = splitCombineToolId(toolConfig.mcpTool.toolId);
|
||||
const [parentId, toolName] = pluginId.split('/');
|
||||
const tool = await getAppVersionById({
|
||||
appId: parentId,
|
||||
versionId: version
|
||||
});
|
||||
|
||||
const { headerSecret, url } =
|
||||
tool.nodes[0].toolConfig?.mcpToolSet ?? tool.nodes[0].inputs[0].value;
|
||||
const mcpClient = new MCPClient({
|
||||
url,
|
||||
headers: getSecretValue({
|
||||
storeSecret: headerSecret
|
||||
})
|
||||
});
|
||||
|
||||
const result = await mcpClient.toolCall({
|
||||
toolName,
|
||||
params
|
||||
});
|
||||
return {
|
||||
response: JSON.stringify(result),
|
||||
usages: []
|
||||
};
|
||||
} else {
|
||||
return Promise.reject("Can't find tool");
|
||||
}
|
||||
} catch (error) {
|
||||
if (toolConfig?.systemTool?.toolId) {
|
||||
pushTrack.runSystemTool({
|
||||
teamId: runningUserInfo.teamId,
|
||||
tmbId: runningUserInfo.tmbId,
|
||||
uid: runningUserInfo.tmbId,
|
||||
toolId: toolConfig.systemTool.toolId,
|
||||
result: 0,
|
||||
msg: getErrText(error)
|
||||
});
|
||||
}
|
||||
return Promise.reject("Can't find tool");
|
||||
}
|
||||
};
|
||||
|
|
@ -1,3 +1,4 @@
|
|||
<<<<<<< HEAD
|
||||
import type { ChatCompletionTool } from '@fastgpt/global/core/ai/type';
|
||||
import { responseWriteController } from '../../../../../common/response';
|
||||
import { SseResponseEventEnum } from '@fastgpt/global/core/workflow/runtime/constants';
|
||||
|
|
@ -17,6 +18,88 @@ import { runAgentCall } from '../../../../ai/llm/agentCall';
|
|||
export const runToolCall = async (props: DispatchToolModuleProps): Promise<RunToolResponse> => {
|
||||
const { messages, toolNodes, toolModel, childrenInteractiveParams, ...workflowProps } = props;
|
||||
const {
|
||||
=======
|
||||
import { filterGPTMessageByMaxContext } from '../../../../ai/llm/utils';
|
||||
import type {
|
||||
ChatCompletionToolMessageParam,
|
||||
ChatCompletionMessageParam,
|
||||
ChatCompletionTool
|
||||
} from '@fastgpt/global/core/ai/type';
|
||||
import { SseResponseEventEnum } from '@fastgpt/global/core/workflow/runtime/constants';
|
||||
import { textAdaptGptResponse } from '@fastgpt/global/core/workflow/runtime/utils';
|
||||
import { ChatCompletionRequestMessageRoleEnum } from '@fastgpt/global/core/ai/constants';
|
||||
import { runWorkflow } from '../../index';
|
||||
import type { DispatchToolModuleProps, RunToolResponse, ToolNodeItemType } from './type';
|
||||
import type { DispatchFlowResponse } from '../../type';
|
||||
import { GPTMessages2Chats } from '@fastgpt/global/core/chat/adapt';
|
||||
import type { AIChatItemType } from '@fastgpt/global/core/chat/type';
|
||||
import { formatToolResponse, parseToolArgs } from '../utils';
|
||||
import { initToolNodes, initToolCallEdges } from './utils';
|
||||
import { computedMaxToken } from '../../../../ai/utils';
|
||||
import { sliceStrStartEnd } from '@fastgpt/global/common/string/tools';
|
||||
import type { WorkflowInteractiveResponseType } from '@fastgpt/global/core/workflow/template/system/interactive/type';
|
||||
import { getErrText } from '@fastgpt/global/common/error/utils';
|
||||
import { createLLMResponse } from '../../../../ai/llm/request';
|
||||
import { toolValueTypeList, valueTypeJsonSchemaMap } from '@fastgpt/global/core/workflow/constants';
|
||||
|
||||
type ToolRunResponseType = {
|
||||
toolRunResponse?: DispatchFlowResponse;
|
||||
toolMsgParams: ChatCompletionToolMessageParam;
|
||||
}[];
|
||||
|
||||
/*
|
||||
调用思路:
|
||||
先Check 是否是交互节点触发
|
||||
|
||||
交互模式:
|
||||
1. 从缓存中获取工作流运行数据
|
||||
2. 运行工作流
|
||||
3. 检测是否有停止信号或交互响应
|
||||
- 无:汇总结果,递归运行工具
|
||||
- 有:缓存结果,结束调用
|
||||
|
||||
非交互模式:
|
||||
1. 组合 tools
|
||||
2. 过滤 messages
|
||||
3. Load request llm messages: system prompt, histories, human question, (assistant responses, tool responses, assistant responses....)
|
||||
4. 请求 LLM 获取结果
|
||||
|
||||
- 有工具调用
|
||||
1. 批量运行工具的工作流,获取结果(工作流原生结果,工具执行结果)
|
||||
2. 合并递归中,所有工具的原生运行结果
|
||||
3. 组合 assistants tool 响应
|
||||
4. 组合本次 request 和 llm response 的 messages,并计算出消耗的 tokens
|
||||
5. 组合本次 request、llm response 和 tool response 结果
|
||||
6. 组合本次的 assistant responses: history assistant + tool assistant + tool child assistant
|
||||
7. 判断是否还有停止信号或交互响应
|
||||
- 无:递归运行工具
|
||||
- 有:缓存结果,结束调用
|
||||
- 无工具调用
|
||||
1. 汇总结果,递归运行工具
|
||||
2. 计算 completeMessages 和 tokens 后返回。
|
||||
|
||||
交互节点额外缓存结果包括:
|
||||
1. 入口的节点 id
|
||||
2. toolCallId: 本次工具调用的 ID,可以找到是调用了哪个工具,入口并不会记录工具的 id
|
||||
3. messages:本次递归中,assistants responses 和 tool responses
|
||||
*/
|
||||
|
||||
export const runToolCall = async (
|
||||
props: DispatchToolModuleProps & {
|
||||
maxRunToolTimes: number;
|
||||
},
|
||||
response?: RunToolResponse
|
||||
): Promise<RunToolResponse> => {
|
||||
const {
|
||||
messages,
|
||||
toolNodes,
|
||||
toolModel,
|
||||
maxRunToolTimes,
|
||||
interactiveEntryToolParams,
|
||||
...workflowProps
|
||||
} = props;
|
||||
let {
|
||||
>>>>>>> 757253617 (squash: compress all commits into one)
|
||||
res,
|
||||
checkIsStopping,
|
||||
requestOrigin,
|
||||
|
|
@ -39,7 +122,105 @@ export const runToolCall = async (props: DispatchToolModuleProps): Promise<RunTo
|
|||
}
|
||||
} = workflowProps;
|
||||
|
||||
<<<<<<< HEAD
|
||||
// 构建 tools 参数
|
||||
=======
|
||||
if (maxRunToolTimes <= 0 && response) {
|
||||
return response;
|
||||
}
|
||||
|
||||
// Interactive
|
||||
if (interactiveEntryToolParams) {
|
||||
initToolNodes(runtimeNodes, interactiveEntryToolParams.entryNodeIds);
|
||||
initToolCallEdges(runtimeEdges, interactiveEntryToolParams.entryNodeIds);
|
||||
|
||||
// Run entry tool
|
||||
const toolRunResponse = await runWorkflow({
|
||||
...workflowProps,
|
||||
usageId: undefined,
|
||||
isToolCall: true
|
||||
});
|
||||
const stringToolResponse = formatToolResponse(toolRunResponse.toolResponses);
|
||||
|
||||
// Response to frontend
|
||||
workflowStreamResponse?.({
|
||||
event: SseResponseEventEnum.toolResponse,
|
||||
data: {
|
||||
tool: {
|
||||
id: interactiveEntryToolParams.toolCallId,
|
||||
toolName: '',
|
||||
toolAvatar: '',
|
||||
params: '',
|
||||
response: sliceStrStartEnd(stringToolResponse, 5000, 5000)
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
// Check stop signal
|
||||
const hasStopSignal = toolRunResponse.flowResponses?.some((item) => item.toolStop);
|
||||
// Check interactive response(Only 1 interaction is reserved)
|
||||
const workflowInteractiveResponse = toolRunResponse.workflowInteractiveResponse;
|
||||
|
||||
const requestMessages = [
|
||||
...messages,
|
||||
...interactiveEntryToolParams.memoryMessages.map((item) =>
|
||||
item.role === 'tool' && item.tool_call_id === interactiveEntryToolParams.toolCallId
|
||||
? {
|
||||
...item,
|
||||
content: stringToolResponse
|
||||
}
|
||||
: item
|
||||
)
|
||||
];
|
||||
|
||||
if (hasStopSignal || workflowInteractiveResponse) {
|
||||
// Get interactive tool data
|
||||
const toolWorkflowInteractiveResponse: WorkflowInteractiveResponseType | undefined =
|
||||
workflowInteractiveResponse
|
||||
? {
|
||||
...workflowInteractiveResponse,
|
||||
toolParams: {
|
||||
entryNodeIds: workflowInteractiveResponse.entryNodeIds,
|
||||
toolCallId: interactiveEntryToolParams.toolCallId,
|
||||
memoryMessages: interactiveEntryToolParams.memoryMessages
|
||||
}
|
||||
}
|
||||
: undefined;
|
||||
|
||||
return {
|
||||
dispatchFlowResponse: [toolRunResponse],
|
||||
toolCallInputTokens: 0,
|
||||
toolCallOutputTokens: 0,
|
||||
completeMessages: requestMessages,
|
||||
assistantResponses: toolRunResponse.assistantResponses,
|
||||
runTimes: toolRunResponse.runTimes,
|
||||
toolWorkflowInteractiveResponse
|
||||
};
|
||||
}
|
||||
|
||||
return runToolCall(
|
||||
{
|
||||
...props,
|
||||
interactiveEntryToolParams: undefined,
|
||||
maxRunToolTimes: maxRunToolTimes - 1,
|
||||
// Rewrite toolCall messages
|
||||
messages: requestMessages
|
||||
},
|
||||
{
|
||||
dispatchFlowResponse: [toolRunResponse],
|
||||
toolCallInputTokens: 0,
|
||||
toolCallOutputTokens: 0,
|
||||
assistantResponses: toolRunResponse.assistantResponses,
|
||||
runTimes: toolRunResponse.runTimes
|
||||
}
|
||||
);
|
||||
}
|
||||
|
||||
// ------------------------------------------------------------
|
||||
|
||||
const assistantResponses = response?.assistantResponses || [];
|
||||
|
||||
>>>>>>> 757253617 (squash: compress all commits into one)
|
||||
const toolNodesMap = new Map<string, ToolNodeItemType>();
|
||||
const tools: ChatCompletionTool[] = toolNodes.map((item) => {
|
||||
toolNodesMap.set(item.nodeId, item);
|
||||
|
|
@ -91,6 +272,7 @@ export const runToolCall = async (props: DispatchToolModuleProps): Promise<RunTo
|
|||
}
|
||||
};
|
||||
});
|
||||
<<<<<<< HEAD
|
||||
const getToolInfo = (name: string) => {
|
||||
const toolNode = toolNodesMap.get(name);
|
||||
return {
|
||||
|
|
@ -129,13 +311,74 @@ export const runToolCall = async (props: DispatchToolModuleProps): Promise<RunTo
|
|||
requestOrigin,
|
||||
retainDatasetCite,
|
||||
useVision: aiChatVision
|
||||
=======
|
||||
|
||||
const max_tokens = computedMaxToken({
|
||||
model: toolModel,
|
||||
maxToken,
|
||||
min: 100
|
||||
});
|
||||
|
||||
// Filter histories by maxToken
|
||||
const filterMessages = (
|
||||
await filterGPTMessageByMaxContext({
|
||||
messages,
|
||||
maxContext: toolModel.maxContext - (max_tokens || 0) // filter token. not response maxToken
|
||||
})
|
||||
).map((item) => {
|
||||
if (item.role === 'assistant' && item.tool_calls) {
|
||||
return {
|
||||
...item,
|
||||
tool_calls: item.tool_calls.map((tool) => ({
|
||||
id: tool.id,
|
||||
type: tool.type,
|
||||
function: tool.function
|
||||
}))
|
||||
};
|
||||
}
|
||||
return item;
|
||||
});
|
||||
|
||||
let {
|
||||
reasoningText: reasoningContent,
|
||||
answerText: answer,
|
||||
toolCalls = [],
|
||||
finish_reason,
|
||||
usage,
|
||||
getEmptyResponseTip,
|
||||
assistantMessage,
|
||||
completeMessages
|
||||
} = await createLLMResponse({
|
||||
body: {
|
||||
model: toolModel.model,
|
||||
stream,
|
||||
messages: filterMessages,
|
||||
tool_choice: 'auto',
|
||||
toolCallMode: toolModel.toolChoice ? 'toolChoice' : 'prompt',
|
||||
tools,
|
||||
parallel_tool_calls: true,
|
||||
temperature,
|
||||
max_tokens,
|
||||
top_p: aiChatTopP,
|
||||
stop: aiChatStopSign,
|
||||
response_format: {
|
||||
type: aiChatResponseFormat as any,
|
||||
json_schema: aiChatJsonSchema
|
||||
},
|
||||
retainDatasetCite,
|
||||
useVision: aiChatVision,
|
||||
requestOrigin
|
||||
>>>>>>> 757253617 (squash: compress all commits into one)
|
||||
},
|
||||
isAborted: checkIsStopping,
|
||||
userKey: externalProvider.openaiAccount,
|
||||
onReasoning({ text }) {
|
||||
if (!aiChatReasoning) return;
|
||||
workflowStreamResponse?.({
|
||||
<<<<<<< HEAD
|
||||
write,
|
||||
=======
|
||||
>>>>>>> 757253617 (squash: compress all commits into one)
|
||||
event: SseResponseEventEnum.answer,
|
||||
data: textAdaptGptResponse({
|
||||
reasoning_content: text
|
||||
|
|
@ -145,7 +388,10 @@ export const runToolCall = async (props: DispatchToolModuleProps): Promise<RunTo
|
|||
onStreaming({ text }) {
|
||||
if (!isResponseAnswerText) return;
|
||||
workflowStreamResponse?.({
|
||||
<<<<<<< HEAD
|
||||
write,
|
||||
=======
|
||||
>>>>>>> 757253617 (squash: compress all commits into one)
|
||||
event: SseResponseEventEnum.answer,
|
||||
data: textAdaptGptResponse({
|
||||
text
|
||||
|
|
@ -155,6 +401,7 @@ export const runToolCall = async (props: DispatchToolModuleProps): Promise<RunTo
|
|||
onToolCall({ call }) {
|
||||
if (!isResponseAnswerText) return;
|
||||
const toolNode = toolNodesMap.get(call.function.name);
|
||||
<<<<<<< HEAD
|
||||
if (toolNode) {
|
||||
workflowStreamResponse?.({
|
||||
event: SseResponseEventEnum.toolCall,
|
||||
|
|
@ -179,6 +426,29 @@ export const runToolCall = async (props: DispatchToolModuleProps): Promise<RunTo
|
|||
data: {
|
||||
tool: {
|
||||
id: tool.id,
|
||||
=======
|
||||
if (!toolNode) return;
|
||||
workflowStreamResponse?.({
|
||||
event: SseResponseEventEnum.toolCall,
|
||||
data: {
|
||||
tool: {
|
||||
id: call.id,
|
||||
toolName: toolNode.name,
|
||||
toolAvatar: toolNode.avatar,
|
||||
functionName: call.function.name,
|
||||
params: call.function.arguments ?? '',
|
||||
response: ''
|
||||
}
|
||||
}
|
||||
});
|
||||
},
|
||||
onToolParam({ call, params }) {
|
||||
workflowStreamResponse?.({
|
||||
event: SseResponseEventEnum.toolParams,
|
||||
data: {
|
||||
tool: {
|
||||
id: call.id,
|
||||
>>>>>>> 757253617 (squash: compress all commits into one)
|
||||
toolName: '',
|
||||
toolAvatar: '',
|
||||
params,
|
||||
|
|
@ -186,6 +456,7 @@ export const runToolCall = async (props: DispatchToolModuleProps): Promise<RunTo
|
|||
}
|
||||
}
|
||||
});
|
||||
<<<<<<< HEAD
|
||||
},
|
||||
handleToolResponse: async ({ call, messages }) => {
|
||||
const toolNode = toolNodesMap.get(call.function?.name);
|
||||
|
|
@ -318,4 +589,213 @@ export const runToolCall = async (props: DispatchToolModuleProps): Promise<RunTo
|
|||
finish_reason,
|
||||
toolWorkflowInteractiveResponse: interactiveResponse
|
||||
};
|
||||
=======
|
||||
}
|
||||
});
|
||||
|
||||
if (!answer && !reasoningContent && !toolCalls.length) {
|
||||
return Promise.reject(getEmptyResponseTip());
|
||||
}
|
||||
|
||||
/* Run the selected tool by LLM.
|
||||
Since only reference parameters are passed, if the same tool is run in parallel, it will get the same run parameters
|
||||
*/
|
||||
const toolsRunResponse: ToolRunResponseType = [];
|
||||
for await (const tool of toolCalls) {
|
||||
try {
|
||||
const toolNode = toolNodesMap.get(tool.function?.name);
|
||||
|
||||
if (!toolNode) continue;
|
||||
|
||||
const startParams = parseToolArgs(tool.function.arguments);
|
||||
|
||||
initToolNodes(runtimeNodes, [toolNode.nodeId], startParams);
|
||||
const toolRunResponse = await runWorkflow({
|
||||
...workflowProps,
|
||||
usageId: undefined,
|
||||
isToolCall: true
|
||||
});
|
||||
|
||||
const stringToolResponse = formatToolResponse(toolRunResponse.toolResponses);
|
||||
const toolMsgParams: ChatCompletionToolMessageParam = {
|
||||
tool_call_id: tool.id,
|
||||
role: ChatCompletionRequestMessageRoleEnum.Tool,
|
||||
content: stringToolResponse
|
||||
};
|
||||
|
||||
workflowStreamResponse?.({
|
||||
event: SseResponseEventEnum.toolResponse,
|
||||
data: {
|
||||
tool: {
|
||||
id: tool.id,
|
||||
toolName: '',
|
||||
toolAvatar: '',
|
||||
params: '',
|
||||
response: sliceStrStartEnd(stringToolResponse, 5000, 5000)
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
toolsRunResponse.push({
|
||||
toolRunResponse,
|
||||
toolMsgParams
|
||||
});
|
||||
} catch (error) {
|
||||
const err = getErrText(error);
|
||||
workflowStreamResponse?.({
|
||||
event: SseResponseEventEnum.toolResponse,
|
||||
data: {
|
||||
tool: {
|
||||
id: tool.id,
|
||||
toolName: '',
|
||||
toolAvatar: '',
|
||||
params: '',
|
||||
response: sliceStrStartEnd(err, 5000, 5000)
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
toolsRunResponse.push({
|
||||
toolRunResponse: undefined,
|
||||
toolMsgParams: {
|
||||
tool_call_id: tool.id,
|
||||
role: ChatCompletionRequestMessageRoleEnum.Tool,
|
||||
content: sliceStrStartEnd(err, 5000, 5000)
|
||||
}
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
const flatToolsResponseData = toolsRunResponse
|
||||
.map((item) => item.toolRunResponse)
|
||||
.flat()
|
||||
.filter(Boolean) as DispatchFlowResponse[];
|
||||
// concat tool responses
|
||||
const dispatchFlowResponse = response
|
||||
? response.dispatchFlowResponse.concat(flatToolsResponseData)
|
||||
: flatToolsResponseData;
|
||||
|
||||
const inputTokens = response
|
||||
? response.toolCallInputTokens + usage.inputTokens
|
||||
: usage.inputTokens;
|
||||
const outputTokens = response
|
||||
? response.toolCallOutputTokens + usage.outputTokens
|
||||
: usage.outputTokens;
|
||||
|
||||
if (toolCalls.length > 0) {
|
||||
/*
|
||||
...
|
||||
user
|
||||
assistant: tool data
|
||||
tool: tool response
|
||||
*/
|
||||
const nextRequestMessages: ChatCompletionMessageParam[] = [
|
||||
...completeMessages,
|
||||
...toolsRunResponse.map((item) => item?.toolMsgParams)
|
||||
];
|
||||
|
||||
/*
|
||||
Get tool node assistant response
|
||||
- history assistant
|
||||
- current tool assistant
|
||||
- tool child assistant
|
||||
*/
|
||||
const toolNodeAssistant = GPTMessages2Chats({
|
||||
messages: [...assistantMessage, ...toolsRunResponse.map((item) => item?.toolMsgParams)],
|
||||
getToolInfo: (id) => {
|
||||
const toolNode = toolNodesMap.get(id);
|
||||
return {
|
||||
name: toolNode?.name || '',
|
||||
avatar: toolNode?.avatar || ''
|
||||
};
|
||||
}
|
||||
})[0] as AIChatItemType;
|
||||
const toolChildAssistants = flatToolsResponseData
|
||||
.map((item) => item.assistantResponses)
|
||||
.flat()
|
||||
.filter((item) => !item.interactive); // 交互节点留着下次记录
|
||||
const concatAssistantResponses = [
|
||||
...assistantResponses,
|
||||
...toolNodeAssistant.value,
|
||||
...toolChildAssistants
|
||||
];
|
||||
|
||||
const runTimes =
|
||||
(response?.runTimes || 0) +
|
||||
flatToolsResponseData.reduce((sum, item) => sum + item.runTimes, 0);
|
||||
|
||||
// Check stop signal
|
||||
const hasStopSignal = flatToolsResponseData.some(
|
||||
(item) => !!item.flowResponses?.find((item) => item.toolStop)
|
||||
);
|
||||
// Check interactive response(Only 1 interaction is reserved)
|
||||
const workflowInteractiveResponseItem = toolsRunResponse.find(
|
||||
(item) => item.toolRunResponse?.workflowInteractiveResponse
|
||||
);
|
||||
if (hasStopSignal || workflowInteractiveResponseItem) {
|
||||
// Get interactive tool data
|
||||
const workflowInteractiveResponse =
|
||||
workflowInteractiveResponseItem?.toolRunResponse?.workflowInteractiveResponse;
|
||||
|
||||
// Flashback traverses completeMessages, intercepting messages that know the first user
|
||||
const firstUserIndex = nextRequestMessages.findLastIndex((item) => item.role === 'user');
|
||||
const newMessages = nextRequestMessages.slice(firstUserIndex + 1);
|
||||
|
||||
const toolWorkflowInteractiveResponse: WorkflowInteractiveResponseType | undefined =
|
||||
workflowInteractiveResponse
|
||||
? {
|
||||
...workflowInteractiveResponse,
|
||||
toolParams: {
|
||||
entryNodeIds: workflowInteractiveResponse.entryNodeIds,
|
||||
toolCallId: workflowInteractiveResponseItem?.toolMsgParams.tool_call_id,
|
||||
memoryMessages: newMessages
|
||||
}
|
||||
}
|
||||
: undefined;
|
||||
|
||||
return {
|
||||
dispatchFlowResponse,
|
||||
toolCallInputTokens: inputTokens,
|
||||
toolCallOutputTokens: outputTokens,
|
||||
completeMessages: nextRequestMessages,
|
||||
assistantResponses: concatAssistantResponses,
|
||||
toolWorkflowInteractiveResponse,
|
||||
runTimes,
|
||||
finish_reason
|
||||
};
|
||||
}
|
||||
|
||||
return runToolCall(
|
||||
{
|
||||
...props,
|
||||
maxRunToolTimes: maxRunToolTimes - 1,
|
||||
messages: nextRequestMessages
|
||||
},
|
||||
{
|
||||
dispatchFlowResponse,
|
||||
toolCallInputTokens: inputTokens,
|
||||
toolCallOutputTokens: outputTokens,
|
||||
assistantResponses: concatAssistantResponses,
|
||||
runTimes,
|
||||
finish_reason
|
||||
}
|
||||
);
|
||||
} else {
|
||||
// concat tool assistant
|
||||
const toolNodeAssistant = GPTMessages2Chats({
|
||||
messages: assistantMessage
|
||||
})[0] as AIChatItemType;
|
||||
|
||||
return {
|
||||
dispatchFlowResponse: response?.dispatchFlowResponse || [],
|
||||
toolCallInputTokens: inputTokens,
|
||||
toolCallOutputTokens: outputTokens,
|
||||
|
||||
completeMessages,
|
||||
assistantResponses: [...assistantResponses, ...toolNodeAssistant.value],
|
||||
runTimes: (response?.runTimes || 0) + 1,
|
||||
finish_reason
|
||||
};
|
||||
}
|
||||
>>>>>>> 757253617 (squash: compress all commits into one)
|
||||
};
|
||||
|
|
|
|||
|
|
@ -0,0 +1,19 @@
|
|||
import type { RuntimeNodeItemType } from '@fastgpt/global/core/workflow/runtime/type';
|
||||
import type { JSONSchemaInputType } from '@fastgpt/global/core/app/jsonschema';
|
||||
import type { ChatNodeUsageType } from '@fastgpt/global/support/wallet/bill/type';
|
||||
|
||||
export type ToolNodeItemType = RuntimeNodeItemType & {
|
||||
toolParams: RuntimeNodeItemType['inputs'];
|
||||
jsonSchema?: JSONSchemaInputType;
|
||||
};
|
||||
|
||||
export type DispatchSubAppResponse = {
|
||||
response: string;
|
||||
usages?: ChatNodeUsageType[];
|
||||
};
|
||||
|
||||
export type GetSubAppInfoFnType = (id: string) => {
|
||||
name: string;
|
||||
avatar: string;
|
||||
toolDescription: string;
|
||||
};
|
||||
|
|
@ -1,42 +1,34 @@
|
|||
import { sliceStrStartEnd } from '@fastgpt/global/common/string/tools';
|
||||
import { ChatItemValueTypeEnum } from '@fastgpt/global/core/chat/constants';
|
||||
import { type AIChatItemValueItemType } from '@fastgpt/global/core/chat/type';
|
||||
import { type FlowNodeInputItemType } from '@fastgpt/global/core/workflow/type/io';
|
||||
import { type RuntimeEdgeItemType } from '@fastgpt/global/core/workflow/type/edge';
|
||||
import { type RuntimeNodeItemType } from '@fastgpt/global/core/workflow/runtime/type';
|
||||
|
||||
export const updateToolInputValue = ({
|
||||
params,
|
||||
inputs
|
||||
<<<<<<< HEAD
|
||||
/*
|
||||
匹配 {{@toolId@}},转化成: @name 的格式。
|
||||
*/
|
||||
export const parseSystemPrompt = ({
|
||||
systemPrompt,
|
||||
getSubAppInfo
|
||||
}: {
|
||||
params: Record<string, any>;
|
||||
inputs: FlowNodeInputItemType[];
|
||||
}) => {
|
||||
return inputs.map((input) => ({
|
||||
...input,
|
||||
value: params[input.key] ?? input.value
|
||||
}));
|
||||
};
|
||||
systemPrompt?: string;
|
||||
getSubAppInfo: (id: string) => {
|
||||
name: string;
|
||||
avatar: string;
|
||||
toolDescription: string;
|
||||
};
|
||||
}): string => {
|
||||
if (!systemPrompt) return '';
|
||||
|
||||
export const filterToolResponseToPreview = (response: AIChatItemValueItemType[]) => {
|
||||
return response.map((item) => {
|
||||
if (item.type === ChatItemValueTypeEnum.tool) {
|
||||
const formatTools = item.tools?.map((tool) => {
|
||||
return {
|
||||
...tool,
|
||||
response: sliceStrStartEnd(tool.response, 500, 500)
|
||||
};
|
||||
});
|
||||
return {
|
||||
...item,
|
||||
tools: formatTools
|
||||
};
|
||||
// Match pattern {{@toolId@}} and convert to @name format
|
||||
const pattern = /\{\{@([^@]+)@\}\}/g;
|
||||
|
||||
const processedPrompt = systemPrompt.replace(pattern, (match, toolId) => {
|
||||
const toolInfo = getSubAppInfo(toolId);
|
||||
if (!toolInfo) {
|
||||
console.warn(`Tool not found for ID: ${toolId}`);
|
||||
return match; // Return original match if tool not found
|
||||
}
|
||||
|
||||
return item;
|
||||
return `@${toolInfo.name}`;
|
||||
});
|
||||
};
|
||||
|
||||
<<<<<<<< HEAD:packages/service/core/workflow/dispatch/ai/tool/utils.ts
|
||||
export const formatToolResponse = (toolResponses: any) => {
|
||||
if (typeof toolResponses === 'object') {
|
||||
return JSON.stringify(toolResponses, null, 2);
|
||||
|
|
@ -45,6 +37,42 @@ export const formatToolResponse = (toolResponses: any) => {
|
|||
return toolResponses ? String(toolResponses) : 'none';
|
||||
};
|
||||
|
||||
=======
|
||||
import type { RuntimeNodeItemType } from '@fastgpt/global/core/workflow/runtime/type';
|
||||
import type { RuntimeEdgeItemType } from '@fastgpt/global/core/workflow/type/edge';
|
||||
import type { FlowNodeInputItemType } from '@fastgpt/global/core/workflow/type/io';
|
||||
|
||||
export const initToolNodes = (
|
||||
nodes: RuntimeNodeItemType[],
|
||||
entryNodeIds: string[],
|
||||
startParams?: Record<string, any>
|
||||
) => {
|
||||
const updateToolInputValue = ({
|
||||
params,
|
||||
inputs
|
||||
}: {
|
||||
params: Record<string, any>;
|
||||
inputs: FlowNodeInputItemType[];
|
||||
}) => {
|
||||
return inputs.map((input) => ({
|
||||
...input,
|
||||
value: params[input.key] ?? input.value
|
||||
}));
|
||||
};
|
||||
|
||||
nodes.forEach((node) => {
|
||||
if (entryNodeIds.includes(node.nodeId)) {
|
||||
node.isEntry = true;
|
||||
node.isStart = true;
|
||||
if (startParams) {
|
||||
node.inputs = updateToolInputValue({ params: startParams, inputs: node.inputs });
|
||||
}
|
||||
} else {
|
||||
node.isStart = false;
|
||||
}
|
||||
});
|
||||
};
|
||||
>>>>>>> 757253617 (squash: compress all commits into one)
|
||||
// 在原参上改变值,不修改原对象,tool workflow 中,使用的还是原对象
|
||||
export const initToolCallEdges = (edges: RuntimeEdgeItemType[], entryNodeIds: string[]) => {
|
||||
edges.forEach((edge) => {
|
||||
|
|
@ -53,6 +81,7 @@ export const initToolCallEdges = (edges: RuntimeEdgeItemType[], entryNodeIds: st
|
|||
}
|
||||
});
|
||||
};
|
||||
<<<<<<< HEAD
|
||||
|
||||
export const initToolNodes = (
|
||||
nodes: RuntimeNodeItemType[],
|
||||
|
|
@ -67,4 +96,9 @@ export const initToolNodes = (
|
|||
}
|
||||
}
|
||||
});
|
||||
========
|
||||
return processedPrompt;
|
||||
>>>>>>>> 757253617 (squash: compress all commits into one):packages/service/core/workflow/dispatch/ai/agent/utils.ts
|
||||
};
|
||||
=======
|
||||
>>>>>>> 757253617 (squash: compress all commits into one)
|
||||
|
|
|
|||
|
|
@ -0,0 +1,149 @@
|
|||
import {
|
||||
replaceVariable,
|
||||
sliceJsonStr,
|
||||
sliceStrStartEnd
|
||||
} from '@fastgpt/global/common/string/tools';
|
||||
import type {
|
||||
AIChatItemValueItemType,
|
||||
UserChatItemValueItemType
|
||||
} from '@fastgpt/global/core/chat/type';
|
||||
import type { FlowNodeInputItemType } from '@fastgpt/global/core/workflow/type/io';
|
||||
import type { RuntimeNodeItemType } from '@fastgpt/global/core/workflow/runtime/type';
|
||||
import { NodeInputKeyEnum } from '@fastgpt/global/core/workflow/constants';
|
||||
import type { McpToolDataType } from '@fastgpt/global/core/app/tool/mcpTool/type';
|
||||
import type { JSONSchemaInputType } from '@fastgpt/global/core/app/jsonschema';
|
||||
import type { ToolNodeItemType } from './tool/type';
|
||||
import json5 from 'json5';
|
||||
import type { ChatCompletionMessageParam } from '@fastgpt/global/core/ai/type';
|
||||
import { ChatCompletionRequestMessageRoleEnum } from '@fastgpt/global/core/ai/constants';
|
||||
|
||||
// Assistant process
|
||||
export const filterToolResponseToPreview = (response: AIChatItemValueItemType[]) => {
|
||||
return response.map((item) => {
|
||||
if (item.tools) {
|
||||
const formatTools = item.tools?.map((tool) => {
|
||||
return {
|
||||
...tool,
|
||||
response: sliceStrStartEnd(tool.response, 500, 500)
|
||||
};
|
||||
});
|
||||
return {
|
||||
...item,
|
||||
tools: formatTools
|
||||
};
|
||||
}
|
||||
|
||||
return item;
|
||||
});
|
||||
};
|
||||
|
||||
export const filterMemoryMessages = (messages: ChatCompletionMessageParam[]) => {
|
||||
return messages.filter((item) => item.role !== ChatCompletionRequestMessageRoleEnum.System);
|
||||
};
|
||||
|
||||
export const formatToolResponse = (toolResponses: any) => {
|
||||
if (typeof toolResponses === 'object') {
|
||||
return JSON.stringify(toolResponses, null, 2);
|
||||
}
|
||||
|
||||
return toolResponses ? String(toolResponses) : 'none';
|
||||
};
|
||||
|
||||
/*
|
||||
Tool call, auth add file prompt to question。
|
||||
Guide the LLM to call tool.
|
||||
*/
|
||||
export const toolCallMessagesAdapt = ({
|
||||
userInput,
|
||||
skip
|
||||
}: {
|
||||
userInput: UserChatItemValueItemType[];
|
||||
skip?: boolean;
|
||||
}): UserChatItemValueItemType[] => {
|
||||
const getMultiplePrompt = (obj: { fileCount: number; imgCount: number; question: string }) => {
|
||||
const prompt = `Number of session file inputs:
|
||||
Document:{{fileCount}}
|
||||
Image:{{imgCount}}
|
||||
------
|
||||
{{question}}`;
|
||||
return replaceVariable(prompt, obj);
|
||||
};
|
||||
|
||||
if (skip) return userInput;
|
||||
|
||||
const files = userInput.filter((item) => item.file);
|
||||
|
||||
if (files.length > 0) {
|
||||
const filesCount = files.filter((file) => file.file?.type === 'file').length;
|
||||
const imgCount = files.filter((file) => file.file?.type === 'image').length;
|
||||
|
||||
if (userInput.some((item) => item.text)) {
|
||||
return userInput.map((item) => {
|
||||
if (item.text) {
|
||||
const text = item.text?.content || '';
|
||||
|
||||
return {
|
||||
...item,
|
||||
text: {
|
||||
content: getMultiplePrompt({ fileCount: filesCount, imgCount, question: text })
|
||||
}
|
||||
};
|
||||
}
|
||||
return item;
|
||||
});
|
||||
}
|
||||
|
||||
// Every input is a file
|
||||
return [
|
||||
{
|
||||
text: {
|
||||
content: getMultiplePrompt({ fileCount: filesCount, imgCount, question: '' })
|
||||
}
|
||||
}
|
||||
];
|
||||
}
|
||||
|
||||
return userInput;
|
||||
};
|
||||
|
||||
export const getToolNodesByIds = ({
|
||||
toolNodeIds,
|
||||
runtimeNodes
|
||||
}: {
|
||||
toolNodeIds: string[];
|
||||
runtimeNodes: RuntimeNodeItemType[];
|
||||
}): ToolNodeItemType[] => {
|
||||
const nodeMap = new Map(runtimeNodes.map((node) => [node.nodeId, node]));
|
||||
|
||||
return toolNodeIds
|
||||
.map((nodeId) => nodeMap.get(nodeId)!)
|
||||
.filter((tool) => Boolean(tool))
|
||||
.map((tool) => {
|
||||
const toolParams: FlowNodeInputItemType[] = [];
|
||||
let jsonSchema: JSONSchemaInputType | undefined;
|
||||
|
||||
for (const input of tool.inputs) {
|
||||
if (input.toolDescription) {
|
||||
toolParams.push(input);
|
||||
}
|
||||
|
||||
if (input.key === NodeInputKeyEnum.toolData) {
|
||||
jsonSchema = (input.value as McpToolDataType).inputSchema;
|
||||
}
|
||||
}
|
||||
|
||||
return {
|
||||
...tool,
|
||||
toolParams,
|
||||
jsonSchema
|
||||
};
|
||||
});
|
||||
};
|
||||
|
||||
export const parseToolArgs = <T = Record<string, any>>(toolArgs: string) => {
|
||||
try {
|
||||
return json5.parse(sliceJsonStr(toolArgs)) as T;
|
||||
} catch {
|
||||
return;
|
||||
}
|
||||
};
|
||||
|
|
@ -30,24 +30,32 @@ import { dispatchIfElse } from './tools/runIfElse';
|
|||
import { dispatchLafRequest } from './tools/runLaf';
|
||||
import { dispatchUpdateVariable } from './tools/runUpdateVar';
|
||||
import { dispatchTextEditor } from './tools/textEditor';
|
||||
import { dispatchRunAgent } from './ai/agent';
|
||||
|
||||
export const callbackMap: Record<FlowNodeTypeEnum, Function> = {
|
||||
[FlowNodeTypeEnum.workflowStart]: dispatchWorkflowStart,
|
||||
[FlowNodeTypeEnum.answerNode]: dispatchAnswer,
|
||||
[FlowNodeTypeEnum.chatNode]: dispatchChatCompletion,
|
||||
[FlowNodeTypeEnum.datasetSearchNode]: dispatchDatasetSearch,
|
||||
[FlowNodeTypeEnum.datasetConcatNode]: dispatchDatasetConcat,
|
||||
[FlowNodeTypeEnum.classifyQuestion]: dispatchClassifyQuestion,
|
||||
[FlowNodeTypeEnum.contentExtract]: dispatchContentExtract,
|
||||
[FlowNodeTypeEnum.httpRequest468]: dispatchHttp468Request,
|
||||
|
||||
// Child
|
||||
[FlowNodeTypeEnum.appModule]: dispatchRunAppNode,
|
||||
[FlowNodeTypeEnum.pluginModule]: dispatchRunPlugin,
|
||||
[FlowNodeTypeEnum.pluginInput]: dispatchPluginInput,
|
||||
[FlowNodeTypeEnum.pluginOutput]: dispatchPluginOutput,
|
||||
|
||||
// AI
|
||||
[FlowNodeTypeEnum.agent]: dispatchRunAgent,
|
||||
[FlowNodeTypeEnum.chatNode]: dispatchChatCompletion,
|
||||
[FlowNodeTypeEnum.datasetSearchNode]: dispatchDatasetSearch,
|
||||
[FlowNodeTypeEnum.classifyQuestion]: dispatchClassifyQuestion,
|
||||
[FlowNodeTypeEnum.contentExtract]: dispatchContentExtract,
|
||||
[FlowNodeTypeEnum.queryExtension]: dispatchQueryExtension,
|
||||
[FlowNodeTypeEnum.agent]: dispatchRunTools,
|
||||
// Tool call
|
||||
[FlowNodeTypeEnum.toolCall]: dispatchRunTools,
|
||||
[FlowNodeTypeEnum.stopTool]: dispatchStopToolCall,
|
||||
[FlowNodeTypeEnum.toolParams]: dispatchToolParams,
|
||||
|
||||
[FlowNodeTypeEnum.answerNode]: dispatchAnswer,
|
||||
[FlowNodeTypeEnum.datasetConcatNode]: dispatchDatasetConcat,
|
||||
[FlowNodeTypeEnum.httpRequest468]: dispatchHttp468Request,
|
||||
[FlowNodeTypeEnum.lafModule]: dispatchLafRequest,
|
||||
[FlowNodeTypeEnum.ifElseNode]: dispatchIfElse,
|
||||
[FlowNodeTypeEnum.variableUpdate]: dispatchUpdateVariable,
|
||||
|
|
|
|||
|
|
@ -24,7 +24,6 @@ import type {
|
|||
} from '@fastgpt/global/core/workflow/runtime/type';
|
||||
import type { RuntimeNodeItemType } from '@fastgpt/global/core/workflow/runtime/type.d';
|
||||
import { getErrText, UserError } from '@fastgpt/global/common/error/utils';
|
||||
import { ChatItemValueTypeEnum } from '@fastgpt/global/core/chat/constants';
|
||||
import { filterPublicNodeResponseData } from '@fastgpt/global/core/chat/utils';
|
||||
import {
|
||||
checkNodeRunStatus,
|
||||
|
|
@ -102,7 +101,7 @@ export async function dispatchWorkFlow({
|
|||
|
||||
// Check url valid
|
||||
const invalidInput = query.some((item) => {
|
||||
if (item.type === ChatItemValueTypeEnum.file && item.file?.url) {
|
||||
if ('file' in item && item.file?.url) {
|
||||
if (!validateFileUrlDomain(item.file.url)) {
|
||||
return true;
|
||||
}
|
||||
|
|
@ -138,7 +137,7 @@ export async function dispatchWorkFlow({
|
|||
await addPreviewUrlToChatItems(histories, 'chatFlow'),
|
||||
// Add preview url to query
|
||||
...query.map(async (item) => {
|
||||
if (item.type !== ChatItemValueTypeEnum.file || !item.file?.key) return;
|
||||
if (!item.file?.key) return;
|
||||
item.file.url = await getS3ChatSource().createGetChatFileURL({
|
||||
key: item.file.key,
|
||||
external: true
|
||||
|
|
@ -456,6 +455,21 @@ export const runWorkflow = async (data: RunWorkflowProps): Promise<DispatchFlowR
|
|||
}
|
||||
}
|
||||
|
||||
private usagePush(usages: ChatNodeUsageType[]) {
|
||||
if (usageId) {
|
||||
pushChatItemUsage({
|
||||
teamId,
|
||||
usageId,
|
||||
nodeUsages: usages
|
||||
});
|
||||
}
|
||||
if (concatUsage) {
|
||||
concatUsage(usages.reduce((sum, item) => sum + (item.totalPoints || 0), 0));
|
||||
}
|
||||
|
||||
this.chatNodeUsages = this.chatNodeUsages.concat(usages);
|
||||
}
|
||||
|
||||
async nodeRunWithActive(node: RuntimeNodeItemType): Promise<{
|
||||
node: RuntimeNodeItemType;
|
||||
runStatus: 'run';
|
||||
|
|
@ -537,6 +551,7 @@ export const runWorkflow = async (data: RunWorkflowProps): Promise<DispatchFlowR
|
|||
const dispatchData: ModuleDispatchProps<Record<string, any>> = {
|
||||
...data,
|
||||
mcpClientMemory,
|
||||
usagePush: this.usagePush.bind(this),
|
||||
lastInteractive: data.lastInteractive?.entryNodeIds?.includes(node.nodeId)
|
||||
? data.lastInteractive
|
||||
: undefined,
|
||||
|
|
@ -740,18 +755,7 @@ export const runWorkflow = async (data: RunWorkflowProps): Promise<DispatchFlowR
|
|||
|
||||
// Push usage in real time. Avoid a workflow usage a large number of points
|
||||
if (nodeDispatchUsages) {
|
||||
if (usageId) {
|
||||
pushChatItemUsage({
|
||||
teamId,
|
||||
usageId,
|
||||
nodeUsages: nodeDispatchUsages
|
||||
});
|
||||
}
|
||||
if (concatUsage) {
|
||||
concatUsage(nodeDispatchUsages.reduce((sum, item) => sum + (item.totalPoints || 0), 0));
|
||||
}
|
||||
|
||||
this.chatNodeUsages = this.chatNodeUsages.concat(nodeDispatchUsages);
|
||||
this.usagePush(nodeDispatchUsages);
|
||||
}
|
||||
|
||||
if (
|
||||
|
|
@ -770,7 +774,6 @@ export const runWorkflow = async (data: RunWorkflowProps): Promise<DispatchFlowR
|
|||
} else {
|
||||
if (reasoningText) {
|
||||
this.chatAssistantResponse.push({
|
||||
type: ChatItemValueTypeEnum.reasoning,
|
||||
reasoning: {
|
||||
content: reasoningText
|
||||
}
|
||||
|
|
@ -778,7 +781,6 @@ export const runWorkflow = async (data: RunWorkflowProps): Promise<DispatchFlowR
|
|||
}
|
||||
if (answerText) {
|
||||
this.chatAssistantResponse.push({
|
||||
type: ChatItemValueTypeEnum.text,
|
||||
text: {
|
||||
content: answerText
|
||||
}
|
||||
|
|
@ -1041,7 +1043,6 @@ export const runWorkflow = async (data: RunWorkflowProps): Promise<DispatchFlowR
|
|||
}
|
||||
|
||||
return {
|
||||
type: ChatItemValueTypeEnum.interactive,
|
||||
interactive: interactiveResult
|
||||
};
|
||||
}
|
||||
|
|
@ -1072,7 +1073,7 @@ export const runWorkflow = async (data: RunWorkflowProps): Promise<DispatchFlowR
|
|||
if (
|
||||
item.flowNodeType !== FlowNodeTypeEnum.userSelect &&
|
||||
item.flowNodeType !== FlowNodeTypeEnum.formInput &&
|
||||
item.flowNodeType !== FlowNodeTypeEnum.agent
|
||||
item.flowNodeType !== FlowNodeTypeEnum.toolCall
|
||||
) {
|
||||
item.isEntry = false;
|
||||
}
|
||||
|
|
@ -1199,10 +1200,10 @@ const mergeAssistantResponseAnswerText = (response: AIChatItemValueItemType[]) =
|
|||
// 合并连续的text
|
||||
for (let i = 0; i < response.length; i++) {
|
||||
const item = response[i];
|
||||
if (item.type === ChatItemValueTypeEnum.text) {
|
||||
if (item.text) {
|
||||
let text = item.text?.content || '';
|
||||
const lastItem = result[result.length - 1];
|
||||
if (lastItem && lastItem.type === ChatItemValueTypeEnum.text && lastItem.text?.content) {
|
||||
if (lastItem && lastItem.text?.content) {
|
||||
lastItem.text.content += text;
|
||||
continue;
|
||||
}
|
||||
|
|
@ -1213,7 +1214,6 @@ const mergeAssistantResponseAnswerText = (response: AIChatItemValueItemType[]) =
|
|||
// If result is empty, auto add a text message
|
||||
if (result.length === 0) {
|
||||
result.push({
|
||||
type: ChatItemValueTypeEnum.text,
|
||||
text: { content: '' }
|
||||
});
|
||||
}
|
||||
|
|
|
|||
|
|
@ -106,7 +106,7 @@ export const getHistoryFileLinks = (histories: ChatItemType[]) => {
|
|||
return histories
|
||||
.filter((item) => {
|
||||
if (item.obj === ChatRoleEnum.Human) {
|
||||
return item.value.filter((value) => value.type === 'file');
|
||||
return item.value.filter((value) => value.file);
|
||||
}
|
||||
return false;
|
||||
})
|
||||
|
|
|
|||
|
|
@ -41,14 +41,9 @@ export type DispatchFlowResponse = {
|
|||
durationSeconds: number;
|
||||
};
|
||||
|
||||
export type WorkflowResponseType = ({
|
||||
write,
|
||||
event,
|
||||
data,
|
||||
stream
|
||||
}: {
|
||||
write?: ((text: string) => void) | undefined;
|
||||
export type WorkflowResponseType = (e: {
|
||||
id?: string;
|
||||
subAppId?: string;
|
||||
event: SseResponseEventEnum;
|
||||
data: Record<string, any>;
|
||||
stream?: boolean | undefined;
|
||||
}) => void;
|
||||
|
|
|
|||
|
|
@ -25,6 +25,7 @@ import { getMCPChildren } from '../../../core/app/mcp';
|
|||
import { getSystemToolRunTimeNodeFromSystemToolset } from '../utils';
|
||||
import type { localeType } from '@fastgpt/global/common/i18n/type';
|
||||
import type { HttpToolConfigType } from '@fastgpt/global/core/app/type';
|
||||
import type { WorkflowResponseType } from './type';
|
||||
|
||||
export const getWorkflowResponseWrite = ({
|
||||
res,
|
||||
|
|
@ -39,18 +40,8 @@ export const getWorkflowResponseWrite = ({
|
|||
id?: string;
|
||||
showNodeStatus?: boolean;
|
||||
}) => {
|
||||
return ({
|
||||
write,
|
||||
event,
|
||||
data
|
||||
}: {
|
||||
write?: (text: string) => void;
|
||||
event: SseResponseEventEnum;
|
||||
data: Record<string, any>;
|
||||
}) => {
|
||||
const useStreamResponse = streamResponse;
|
||||
|
||||
if (!res || res.closed || !useStreamResponse) return;
|
||||
const fn: WorkflowResponseType = ({ id, subAppId, event, data }) => {
|
||||
if (!res || res.closed || !streamResponse) return;
|
||||
|
||||
// Forbid show detail
|
||||
const notDetailEvent: Record<string, 1> = {
|
||||
|
|
@ -70,11 +61,29 @@ export const getWorkflowResponseWrite = ({
|
|||
|
||||
responseWrite({
|
||||
res,
|
||||
write,
|
||||
event: detail ? event : undefined,
|
||||
data: JSON.stringify(data)
|
||||
data: JSON.stringify({
|
||||
...data,
|
||||
...(subAppId && detail && { subAppId }),
|
||||
...(id && detail && { responseValueId: id })
|
||||
})
|
||||
});
|
||||
};
|
||||
return fn;
|
||||
};
|
||||
export const getWorkflowChildResponseWrite = ({
|
||||
id,
|
||||
subAppId,
|
||||
fn
|
||||
}: {
|
||||
id: string;
|
||||
subAppId: string;
|
||||
fn?: WorkflowResponseType;
|
||||
}): WorkflowResponseType | undefined => {
|
||||
if (!fn) return;
|
||||
return (e: Parameters<WorkflowResponseType>[0]) => {
|
||||
return fn({ ...e, id, subAppId });
|
||||
};
|
||||
};
|
||||
|
||||
export const filterToolNodeIdByEdges = ({
|
||||
|
|
|
|||
|
|
@ -35,6 +35,7 @@ export const getMyModels = async ({
|
|||
resourceType: PerResourceTypeEnum.model
|
||||
}).lean();
|
||||
|
||||
// 未配置权限的,默认是有权限
|
||||
const permissionConfiguredModelSet = new Set(rps.map((rp) => rp.resourceName));
|
||||
const unconfiguredModels = global.systemModelList.filter(
|
||||
(model) => !permissionConfiguredModelSet.has(model.model)
|
||||
|
|
|
|||
|
|
@ -55,7 +55,7 @@ export const checkTeamAppTypeLimit = async ({
|
|||
MongoApp.countDocuments({
|
||||
teamId,
|
||||
type: {
|
||||
$in: [AppTypeEnum.simple, AppTypeEnum.workflow]
|
||||
$in: [AppTypeEnum.agent, AppTypeEnum.simple, AppTypeEnum.workflow]
|
||||
}
|
||||
})
|
||||
]);
|
||||
|
|
|
|||
|
|
@ -13,6 +13,7 @@ import { retryFn } from '@fastgpt/global/common/system/utils';
|
|||
export function getI18nAppType(type: AppTypeEnum): string {
|
||||
if (type === AppTypeEnum.folder) return i18nT('account_team:type.Folder');
|
||||
if (type === AppTypeEnum.simple) return i18nT('app:type.Chat_Agent');
|
||||
if (type === AppTypeEnum.agent) return 'Agent';
|
||||
if (type === AppTypeEnum.workflow) return i18nT('account_team:type.Workflow bot');
|
||||
if (type === AppTypeEnum.workflowTool) return i18nT('app:toolType_workflow');
|
||||
if (type === AppTypeEnum.httpPlugin) return i18nT('account_team:type.Http plugin');
|
||||
|
|
|
|||
|
|
@ -130,6 +130,7 @@ export const iconPaths = {
|
|||
'common/voiceLight': () => import('./icons/common/voiceLight.svg'),
|
||||
'common/wallet': () => import('./icons/common/wallet.svg'),
|
||||
'common/warn': () => import('./icons/common/warn.svg'),
|
||||
'common/warningFill': () => import('./icons/common/warningFill.svg'),
|
||||
'common/wechat': () => import('./icons/common/wechat.svg'),
|
||||
'common/wechatFill': () => import('./icons/common/wechatFill.svg'),
|
||||
'common/wecom': () => import('./icons/common/wecom.svg'),
|
||||
|
|
|
|||
|
|
@ -0,0 +1,3 @@
|
|||
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 18 18" fill="none">
|
||||
<path fill-rule="evenodd" clip-rule="evenodd" d="M1.5 9C1.5 4.85786 4.85786 1.5 9 1.5C13.1421 1.5 16.5 4.85786 16.5 9C16.5 13.1421 13.1421 16.5 9 16.5C4.85786 16.5 1.5 13.1421 1.5 9ZM8.25 11.25V12.75H9.75V11.25H8.25ZM9.75 10.5L9.75 5.25H8.25L8.25 10.5H9.75Z" fill="#FF7D00"/>
|
||||
</svg>
|
||||
|
After Width: | Height: | Size: 357 B |
|
|
@ -43,6 +43,12 @@ import MarkdownPlugin from './plugins/MarkdownPlugin';
|
|||
import MyIcon from '../../Icon';
|
||||
import ListExitPlugin from './plugins/ListExitPlugin';
|
||||
import KeyDownPlugin from './plugins/KeyDownPlugin';
|
||||
import SkillPickerPlugin from './plugins/SkillPickerPlugin';
|
||||
import type { SkillLabelItemType } from './plugins/SkillLabelPlugin';
|
||||
import SkillLabelPlugin from './plugins/SkillLabelPlugin';
|
||||
import { SkillNode } from './plugins/SkillLabelPlugin/node';
|
||||
import type { SkillOptionItemType } from './plugins/SkillPickerPlugin';
|
||||
import { FlowNodeTemplateType } from '@fastgpt/global/core/workflow/type/node';
|
||||
|
||||
const Placeholder = ({ children, padding }: { children: React.ReactNode; padding: string }) => (
|
||||
<Box
|
||||
|
|
@ -72,7 +78,17 @@ export type EditorProps = {
|
|||
isRichText?: boolean;
|
||||
variables?: EditorVariablePickerType[];
|
||||
variableLabels?: EditorVariableLabelPickerType[];
|
||||
onAddToolFromEditor?: (toolKey: string) => Promise<string>;
|
||||
onRemoveToolFromEditor?: (toolId: string) => void;
|
||||
onConfigureTool?: (toolId: string) => void;
|
||||
|
||||
value: string;
|
||||
|
||||
skillOption?: SkillOptionItemType;
|
||||
onRemoveSkill?: (id: string) => void;
|
||||
onClickSkill?: (id: string) => void;
|
||||
selectedSkills?: SkillLabelItemType[];
|
||||
|
||||
showOpenModal?: boolean;
|
||||
minH?: number;
|
||||
maxH?: number;
|
||||
|
|
@ -95,8 +111,17 @@ export default function Editor({
|
|||
maxLength,
|
||||
showOpenModal = true,
|
||||
onOpenModal,
|
||||
|
||||
// {{}} 类型,已弃用
|
||||
variables = [],
|
||||
// /选择变量
|
||||
variableLabels = [],
|
||||
// @选择技能
|
||||
skillOption,
|
||||
selectedSkills,
|
||||
onClickSkill,
|
||||
onRemoveSkill,
|
||||
|
||||
onChange,
|
||||
onChangeText,
|
||||
onBlur,
|
||||
|
|
@ -125,6 +150,7 @@ export default function Editor({
|
|||
nodes: [
|
||||
VariableNode,
|
||||
VariableLabelNode,
|
||||
SkillNode,
|
||||
// Only register rich text nodes when in rich text mode
|
||||
...(isRichText
|
||||
? [HeadingNode, ListNode, ListItemNode, QuoteNode, CodeNode, CodeHighlightNode]
|
||||
|
|
@ -139,7 +165,7 @@ export default function Editor({
|
|||
useDeepCompareEffect(() => {
|
||||
if (focus) return;
|
||||
setKey(getNanoid(6));
|
||||
}, [value, variables, variableLabels]);
|
||||
}, [value, variables, variableLabels, skillOption, selectedSkills]);
|
||||
|
||||
const showFullScreenIcon = useMemo(() => {
|
||||
return showOpenModal && scrollHeight > maxH;
|
||||
|
|
@ -174,45 +200,65 @@ export default function Editor({
|
|||
borderRadius={'md'}
|
||||
>
|
||||
<LexicalComposer initialConfig={initialConfig} key={key}>
|
||||
{/* Text type */}
|
||||
{isRichText ? (
|
||||
<RichTextPlugin
|
||||
contentEditable={
|
||||
<ContentEditable
|
||||
className={`${isInvalid ? styles.contentEditable_invalid : styles.contentEditable} ${styles.richText}`}
|
||||
style={{
|
||||
minHeight: `${minH}px`,
|
||||
maxHeight: `${maxH}px`,
|
||||
...boxStyle
|
||||
}}
|
||||
/>
|
||||
}
|
||||
placeholder={<Placeholder padding={placeholderPadding}>{placeholder}</Placeholder>}
|
||||
ErrorBoundary={LexicalErrorBoundary}
|
||||
/>
|
||||
) : (
|
||||
<PlainTextPlugin
|
||||
contentEditable={
|
||||
<ContentEditable
|
||||
className={isInvalid ? styles.contentEditable_invalid : styles.contentEditable}
|
||||
style={{
|
||||
minHeight: `${minH}px`,
|
||||
maxHeight: `${maxH}px`,
|
||||
...boxStyle
|
||||
}}
|
||||
/>
|
||||
}
|
||||
placeholder={<Placeholder padding={placeholderPadding}>{placeholder}</Placeholder>}
|
||||
ErrorBoundary={LexicalErrorBoundary}
|
||||
/>
|
||||
)}
|
||||
|
||||
{/* Basic Plugin */}
|
||||
<>
|
||||
<HistoryPlugin />
|
||||
<MaxLengthPlugin maxLength={maxLength || 999999} />
|
||||
<FocusPlugin focus={focus} setFocus={setFocus} />
|
||||
<KeyDownPlugin onKeyDown={onKeyDown} />
|
||||
{/* Text type */}
|
||||
{isRichText ? (
|
||||
<RichTextPlugin
|
||||
contentEditable={
|
||||
<ContentEditable
|
||||
className={`${isInvalid ? styles.contentEditable_invalid : styles.contentEditable} ${styles.richText}`}
|
||||
style={{
|
||||
minHeight: `${minH}px`,
|
||||
maxHeight: `${maxH}px`,
|
||||
...boxStyle
|
||||
}}
|
||||
/>
|
||||
}
|
||||
placeholder={<Placeholder padding={placeholderPadding}>{placeholder}</Placeholder>}
|
||||
ErrorBoundary={LexicalErrorBoundary}
|
||||
/>
|
||||
) : (
|
||||
<PlainTextPlugin
|
||||
contentEditable={
|
||||
<ContentEditable
|
||||
className={isInvalid ? styles.contentEditable_invalid : styles.contentEditable}
|
||||
style={{
|
||||
minHeight: `${minH}px`,
|
||||
maxHeight: `${maxH}px`,
|
||||
...boxStyle
|
||||
}}
|
||||
/>
|
||||
}
|
||||
placeholder={<Placeholder padding={placeholderPadding}>{placeholder}</Placeholder>}
|
||||
ErrorBoundary={LexicalErrorBoundary}
|
||||
/>
|
||||
)}
|
||||
|
||||
{/* Basic Plugin */}
|
||||
<>
|
||||
<HistoryPlugin />
|
||||
<MaxLengthPlugin maxLength={maxLength || 999999} />
|
||||
<FocusPlugin focus={focus} setFocus={setFocus} />
|
||||
<KeyDownPlugin onKeyDown={onKeyDown} />
|
||||
<OnBlurPlugin onBlur={onBlur} />
|
||||
<OnChangePlugin
|
||||
onChange={(editorState, editor) => {
|
||||
const rootElement = editor.getRootElement();
|
||||
setScrollHeight(rootElement?.scrollHeight || 0);
|
||||
startSts(() => {
|
||||
onChange?.(editor);
|
||||
});
|
||||
}}
|
||||
/>
|
||||
</>
|
||||
|
||||
{/* 定制交互插件 */}
|
||||
{variables.length > 0 && (
|
||||
<>
|
||||
<VariablePlugin variables={variables} />
|
||||
{/* <VariablePickerPlugin variables={variables} /> */}
|
||||
</>
|
||||
)}
|
||||
|
||||
{variableLabels.length > 0 && (
|
||||
<>
|
||||
|
|
@ -220,22 +266,17 @@ export default function Editor({
|
|||
<VariableLabelPickerPlugin variables={variableLabels} isFocus={focus} />
|
||||
</>
|
||||
)}
|
||||
{variables.length > 0 && (
|
||||
|
||||
{skillOption && onClickSkill && onRemoveSkill && selectedSkills && (
|
||||
<>
|
||||
<VariablePlugin variables={variables} />
|
||||
{/* <VariablePickerPlugin variables={variables} /> */}
|
||||
<SkillLabelPlugin
|
||||
selectedSkills={selectedSkills}
|
||||
onClickSkill={onClickSkill}
|
||||
onRemoveSkill={onRemoveSkill}
|
||||
/>
|
||||
<SkillPickerPlugin skillOption={skillOption} isFocus={focus} />
|
||||
</>
|
||||
)}
|
||||
<OnBlurPlugin onBlur={onBlur} />
|
||||
<OnChangePlugin
|
||||
onChange={(editorState, editor) => {
|
||||
const rootElement = editor.getRootElement();
|
||||
setScrollHeight(rootElement?.scrollHeight || 0);
|
||||
startSts(() => {
|
||||
onChange?.(editor);
|
||||
});
|
||||
}}
|
||||
/>
|
||||
|
||||
{isRichText && (
|
||||
<>
|
||||
|
|
|
|||
|
|
@ -0,0 +1,101 @@
|
|||
import { Box, Flex } from '@chakra-ui/react';
|
||||
import React from 'react';
|
||||
import Avatar from '../../../../../Avatar';
|
||||
import MyTooltip from '../../../../../MyTooltip';
|
||||
import MyIcon from '../../../../../Icon';
|
||||
import { useTranslation } from 'next-i18next';
|
||||
import type { SkillLabelNodeBasicType } from '../node';
|
||||
import { useMemoEnhance } from '../../../../../../../hooks/useMemoEnhance';
|
||||
import { FlowNodeTypeEnum } from '@fastgpt/global/core/workflow/node/constant';
|
||||
|
||||
export default function SkillLabel({
|
||||
id,
|
||||
name,
|
||||
icon,
|
||||
skillType,
|
||||
status,
|
||||
onClick
|
||||
}: SkillLabelNodeBasicType) {
|
||||
const { t } = useTranslation();
|
||||
|
||||
const isInvalid = useMemoEnhance(() => {
|
||||
return status === 'invalid';
|
||||
}, [status]);
|
||||
const isUnconfigured = useMemoEnhance(() => {
|
||||
return status === 'waitingForConfig';
|
||||
}, [status]);
|
||||
|
||||
const colors = useMemoEnhance(() => {
|
||||
if (status === 'invalid') {
|
||||
return {
|
||||
bg: 'red.50',
|
||||
color: 'red.600',
|
||||
borderColor: 'red.200',
|
||||
hoverBg: 'red.100',
|
||||
hoverBorderColor: 'red.300'
|
||||
};
|
||||
}
|
||||
|
||||
if (skillType === FlowNodeTypeEnum.appModule) {
|
||||
return {
|
||||
bg: 'green.50',
|
||||
color: 'green.700',
|
||||
borderColor: 'transparent',
|
||||
hoverBg: 'green.100',
|
||||
hoverBorderColor: 'green.300'
|
||||
};
|
||||
}
|
||||
|
||||
return {
|
||||
bg: 'yellow.50',
|
||||
color: 'myGray.900',
|
||||
borderColor: 'transparent',
|
||||
hoverBg: 'yellow.100',
|
||||
hoverBorderColor: 'yellow.300'
|
||||
};
|
||||
}, [status, skillType]);
|
||||
|
||||
return (
|
||||
<Box
|
||||
as="span"
|
||||
display="inline-flex"
|
||||
alignItems="center"
|
||||
userSelect={'none'}
|
||||
px={2}
|
||||
mx={1}
|
||||
bg={colors.bg}
|
||||
color={colors.color}
|
||||
borderRadius="4px"
|
||||
fontSize="sm"
|
||||
cursor="pointer"
|
||||
position="relative"
|
||||
border={isInvalid ? '1px solid' : 'none'}
|
||||
borderColor={colors.borderColor}
|
||||
_hover={{
|
||||
bg: colors.hoverBg,
|
||||
borderColor: colors.hoverBorderColor
|
||||
}}
|
||||
onClick={() => onClick(id)}
|
||||
transform={'translateY(2px)'}
|
||||
>
|
||||
<MyTooltip
|
||||
shouldWrapChildren={false}
|
||||
label={
|
||||
isUnconfigured ? (
|
||||
<Flex py={2} gap={2} fontWeight={'normal'} fontSize={'14px'} color={'myGray.900'}>
|
||||
<MyIcon name="common/warningFill" w={'18px'} />
|
||||
{t('common:Skill_Label_Unconfigured')}
|
||||
</Flex>
|
||||
) : undefined
|
||||
}
|
||||
>
|
||||
<Flex alignItems="center" gap={1}>
|
||||
<Avatar src={icon} w={'14px'} h={'14px'} borderRadius={'2px'} />
|
||||
<Box>{name || id}</Box>
|
||||
{isUnconfigured && <Box w="6px" h="6px" bg="primary.600" borderRadius="50%" ml={1} />}
|
||||
{isInvalid && <Box w="6px" h="6px" bg="red.600" borderRadius="50%" ml={1} />}
|
||||
</Flex>
|
||||
</MyTooltip>
|
||||
</Box>
|
||||
);
|
||||
}
|
||||
|
|
@ -0,0 +1,167 @@
|
|||
import { useLexicalComposerContext } from '@lexical/react/LexicalComposerContext';
|
||||
import { useCallback, useEffect, useRef } from 'react';
|
||||
import { $createSkillNode, SkillNode } from './node';
|
||||
import type { TextNode } from 'lexical';
|
||||
import { getSkillRegexString } from './utils';
|
||||
import { mergeRegister } from '@lexical/utils';
|
||||
import { registerLexicalTextEntity } from '../../utils';
|
||||
import type { FlowNodeTemplateType } from '@fastgpt/global/core/workflow/type/node';
|
||||
import { FlowNodeTypeEnum } from '@fastgpt/global/core/workflow/node/constant';
|
||||
|
||||
const REGEX = new RegExp(getSkillRegexString(), 'i');
|
||||
|
||||
export type SkillLabelItemType = FlowNodeTemplateType & {
|
||||
configStatus: 'active' | 'invalid' | 'waitingForConfig';
|
||||
tooltip?: string;
|
||||
};
|
||||
|
||||
function SkillLabelPlugin({
|
||||
selectedSkills = [],
|
||||
onClickSkill,
|
||||
onRemoveSkill
|
||||
}: {
|
||||
selectedSkills: SkillLabelItemType[];
|
||||
onClickSkill: (id: string) => void;
|
||||
onRemoveSkill: (id: string) => void;
|
||||
}) {
|
||||
const [editor] = useLexicalComposerContext();
|
||||
|
||||
// Track the mapping of node keys to skill IDs for detecting deletions
|
||||
const previousIdsRef = useRef<Map<string, string>>(new Map());
|
||||
|
||||
// Check if SkillNode is registered in the editor
|
||||
useEffect(() => {
|
||||
if (!editor.hasNodes([SkillNode])) {
|
||||
console.error('SkillLabelPlugin: SkillNode not registered on editor');
|
||||
}
|
||||
}, [editor]);
|
||||
|
||||
const getSkillMatch = useCallback((text: string) => {
|
||||
const matches = REGEX.exec(text);
|
||||
if (!matches) return null;
|
||||
|
||||
const skillLength = matches[4].length + 6; // {{@ + skillKey + @}}
|
||||
const startOffset = matches.index;
|
||||
const endOffset = startOffset + skillLength;
|
||||
|
||||
return {
|
||||
end: endOffset,
|
||||
start: startOffset
|
||||
};
|
||||
}, []);
|
||||
|
||||
// Register text entity transformer to convert {{@skillId@}} text into SkillNode
|
||||
useEffect(() => {
|
||||
const createSkillPlugin = (textNode: TextNode): SkillNode => {
|
||||
const textContent = textNode.getTextContent();
|
||||
const skillId = textContent.slice(3, -3);
|
||||
|
||||
const tool = selectedSkills.find((t) => t.id === skillId);
|
||||
|
||||
if (tool) {
|
||||
return $createSkillNode({
|
||||
id: tool.id,
|
||||
name: tool.name,
|
||||
icon: tool.avatar,
|
||||
skillType: tool.flowNodeType,
|
||||
status: tool.configStatus,
|
||||
onClick: onClickSkill
|
||||
});
|
||||
}
|
||||
|
||||
return $createSkillNode({
|
||||
id: skillId,
|
||||
name: skillId,
|
||||
icon: undefined,
|
||||
skillType: FlowNodeTypeEnum.tool,
|
||||
status: 'invalid',
|
||||
onClick: () => {}
|
||||
});
|
||||
};
|
||||
|
||||
const unregister = mergeRegister(
|
||||
...registerLexicalTextEntity(editor, getSkillMatch, SkillNode, createSkillPlugin)
|
||||
);
|
||||
return unregister;
|
||||
}, [editor, getSkillMatch, onClickSkill, selectedSkills]);
|
||||
|
||||
// Update existing SkillNode properties when selectedSkills change
|
||||
// Sync tool name, avatar, status and configure handler for each skill node
|
||||
useEffect(() => {
|
||||
if (selectedSkills.length === 0) return;
|
||||
|
||||
// Perform all operations in a single editor.update() to avoid node reference issues
|
||||
// This ensures we work within the same editor state snapshot
|
||||
editor.update(() => {
|
||||
const nodes = editor.getEditorState()._nodeMap;
|
||||
|
||||
nodes.forEach((node) => {
|
||||
if (node instanceof SkillNode) {
|
||||
const id = node.getSkillKey();
|
||||
const tool = selectedSkills.find((t) => t.id === id);
|
||||
if (tool) {
|
||||
const writableNode = node.getWritable();
|
||||
writableNode.__id = tool.id;
|
||||
writableNode.__name = tool.name;
|
||||
writableNode.__icon = tool.avatar;
|
||||
writableNode.__skillType = tool.flowNodeType;
|
||||
writableNode.__status = tool.configStatus;
|
||||
writableNode.__onClick = onClickSkill;
|
||||
}
|
||||
}
|
||||
});
|
||||
});
|
||||
}, [selectedSkills, editor, onClickSkill]);
|
||||
|
||||
// Monitor skill node mutations and detect when they are removed from editor
|
||||
// Call onRemoveSkill callback when a skill node is deleted from the editor content
|
||||
useEffect(() => {
|
||||
if (!onRemoveSkill) return;
|
||||
|
||||
const unregister = editor.registerMutationListener(
|
||||
SkillNode,
|
||||
(mutatedNodes, { prevEditorState, updateTags }) => {
|
||||
// mutatedNodes is a Map<NodeKey, NodeMutation>
|
||||
// NodeMutation can be 'created', 'destroyed', or 'updated'
|
||||
console.log('SkillNode mutation detected:', mutatedNodes);
|
||||
mutatedNodes.forEach((mutation, nodeKey) => {
|
||||
console.log(`Node ${nodeKey} mutation: ${mutation}`);
|
||||
if (mutation === 'destroyed') {
|
||||
// Get the skill ID from the previous reference before the node was destroyed
|
||||
const skillId = previousIdsRef.current.get(nodeKey);
|
||||
console.log(`Skill node destroyed, skillId: ${skillId}`);
|
||||
if (skillId) {
|
||||
onRemoveSkill(skillId);
|
||||
previousIdsRef.current.delete(nodeKey);
|
||||
}
|
||||
} else if (mutation === 'created') {
|
||||
// Track newly created skill nodes by reading from current editor state
|
||||
const currentState = editor.getEditorState();
|
||||
const node = currentState._nodeMap.get(nodeKey);
|
||||
if (node instanceof SkillNode) {
|
||||
const skillId = node.getSkillKey();
|
||||
console.log(`Skill node created, skillId: ${skillId}`);
|
||||
previousIdsRef.current.set(nodeKey, skillId);
|
||||
}
|
||||
}
|
||||
});
|
||||
}
|
||||
);
|
||||
|
||||
// Initialize with current state
|
||||
editor.getEditorState().read(() => {
|
||||
const nodes = editor.getEditorState()._nodeMap;
|
||||
nodes.forEach((node, nodeKey) => {
|
||||
if (node instanceof SkillNode) {
|
||||
previousIdsRef.current.set(nodeKey, node.getSkillKey());
|
||||
}
|
||||
});
|
||||
});
|
||||
|
||||
return unregister;
|
||||
}, [editor, onRemoveSkill]);
|
||||
|
||||
return null;
|
||||
}
|
||||
|
||||
export default SkillLabelPlugin;
|
||||
|
|
@ -0,0 +1,162 @@
|
|||
import {
|
||||
DecoratorNode,
|
||||
type DOMConversionMap,
|
||||
type DOMExportOutput,
|
||||
type EditorConfig,
|
||||
type LexicalEditor,
|
||||
type LexicalNode,
|
||||
type NodeKey,
|
||||
type SerializedLexicalNode,
|
||||
type Spread,
|
||||
type TextFormatType
|
||||
} from 'lexical';
|
||||
import SkillLabel from './components/SkillLabel';
|
||||
import type { SkillLabelItemType } from '.';
|
||||
import type { FlowNodeTypeEnum } from '@fastgpt/global/core/workflow/node/constant';
|
||||
|
||||
export type SkillLabelNodeBasicType = {
|
||||
id: string;
|
||||
name: string;
|
||||
icon?: string;
|
||||
skillType: FlowNodeTypeEnum;
|
||||
status: SkillLabelItemType['configStatus'];
|
||||
onClick: (id: string) => void;
|
||||
};
|
||||
export type SerializedSkillNode = Spread<
|
||||
{
|
||||
id: string;
|
||||
name: string;
|
||||
icon?: string;
|
||||
skillType: FlowNodeTypeEnum;
|
||||
format: number | TextFormatType;
|
||||
},
|
||||
SerializedLexicalNode
|
||||
>;
|
||||
|
||||
export class SkillNode extends DecoratorNode<JSX.Element> {
|
||||
__format: number | TextFormatType = 0;
|
||||
__id: string;
|
||||
__name: string;
|
||||
__icon?: string;
|
||||
__skillType: FlowNodeTypeEnum;
|
||||
__status: SkillLabelItemType['configStatus'];
|
||||
__onClick: (id: string) => void;
|
||||
|
||||
constructor({ id, name, icon, skillType, status, onClick }: SkillLabelNodeBasicType) {
|
||||
super();
|
||||
this.__id = id;
|
||||
this.__name = name;
|
||||
this.__icon = icon;
|
||||
this.__skillType = skillType;
|
||||
this.__status = status;
|
||||
this.__onClick = onClick;
|
||||
}
|
||||
|
||||
static getType(): string {
|
||||
return 'skill';
|
||||
}
|
||||
|
||||
static clone(node: SkillNode): SkillNode {
|
||||
const newNode = new SkillNode({
|
||||
id: node.__id,
|
||||
name: node.__name,
|
||||
icon: node.__icon,
|
||||
skillType: node.__skillType,
|
||||
status: node.__status,
|
||||
onClick: node.__onClick
|
||||
});
|
||||
return newNode;
|
||||
}
|
||||
|
||||
static importJSON(serializedNode: SerializedSkillNode): SkillNode {
|
||||
const node = $createSkillNode({
|
||||
id: serializedNode.id,
|
||||
name: serializedNode.name,
|
||||
icon: serializedNode.icon,
|
||||
skillType: serializedNode.skillType,
|
||||
status: 'active',
|
||||
onClick: () => {}
|
||||
});
|
||||
node.setFormat(serializedNode.format);
|
||||
return node;
|
||||
}
|
||||
|
||||
setFormat(format: number | TextFormatType): void {
|
||||
const self = this.getWritable();
|
||||
self.__format = format;
|
||||
}
|
||||
|
||||
getFormat(): number | TextFormatType {
|
||||
return this.__format;
|
||||
}
|
||||
|
||||
exportJSON(): SerializedSkillNode {
|
||||
return {
|
||||
version: 1,
|
||||
format: this.__format || 0,
|
||||
id: this.__id,
|
||||
name: this.__name,
|
||||
icon: this.__icon,
|
||||
skillType: this.__skillType,
|
||||
type: 'skill'
|
||||
};
|
||||
}
|
||||
|
||||
createDOM(): HTMLElement {
|
||||
const element = document.createElement('span');
|
||||
return element;
|
||||
}
|
||||
|
||||
exportDOM(): DOMExportOutput {
|
||||
const element = document.createElement('span');
|
||||
return { element };
|
||||
}
|
||||
|
||||
static importDOM(): DOMConversionMap | null {
|
||||
return {};
|
||||
}
|
||||
|
||||
updateDOM(): false {
|
||||
return false;
|
||||
}
|
||||
|
||||
isInline(): boolean {
|
||||
return true;
|
||||
}
|
||||
|
||||
isKeyboardSelectable(): boolean {
|
||||
return true;
|
||||
}
|
||||
|
||||
getSkillKey(): string {
|
||||
return this.__id;
|
||||
}
|
||||
|
||||
getTextContent(
|
||||
_includeInert?: boolean | undefined,
|
||||
_includeDirectionless?: false | undefined
|
||||
): string {
|
||||
return `{{@${this.__id}@}}`;
|
||||
}
|
||||
|
||||
decorate(_editor: LexicalEditor, config: EditorConfig): JSX.Element {
|
||||
return (
|
||||
<SkillLabel
|
||||
id={this.__id}
|
||||
name={this.__name}
|
||||
icon={this.__icon}
|
||||
skillType={this.__skillType}
|
||||
status={this.__status}
|
||||
onClick={this.__onClick}
|
||||
/>
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
export function $createSkillNode(e: SkillLabelNodeBasicType): SkillNode {
|
||||
return new SkillNode(e);
|
||||
}
|
||||
|
||||
export function $isSkillNode(node: SkillNode | LexicalNode | null | undefined): node is SkillNode {
|
||||
return node instanceof SkillNode;
|
||||
}
|
||||
|
|
@ -0,0 +1,31 @@
|
|||
function getSkillRegexConfig(): Readonly<{
|
||||
leftChars: string;
|
||||
rightChars: string;
|
||||
middleChars: string;
|
||||
}> {
|
||||
const leftChars = '{';
|
||||
const rightChars = '}';
|
||||
const middleChars = '@';
|
||||
|
||||
return {
|
||||
leftChars,
|
||||
rightChars,
|
||||
middleChars
|
||||
};
|
||||
}
|
||||
|
||||
export function getSkillRegexString(): string {
|
||||
const { leftChars, rightChars, middleChars } = getSkillRegexConfig();
|
||||
|
||||
const hashLeftCharList = `[${leftChars}]`;
|
||||
const hashRightCharList = `[${rightChars}]`;
|
||||
const hashMiddleCharList = `[${middleChars}]`;
|
||||
|
||||
const skillTag =
|
||||
`(${hashLeftCharList})` +
|
||||
`(${hashLeftCharList})` +
|
||||
`(${hashMiddleCharList})(.*?)(${hashMiddleCharList})` +
|
||||
`(${hashRightCharList})(${hashRightCharList})`;
|
||||
|
||||
return skillTag;
|
||||
}
|
||||
|
|
@ -0,0 +1,824 @@
|
|||
import { useLexicalComposerContext } from '@lexical/react/LexicalComposerContext';
|
||||
import { LexicalTypeaheadMenuPlugin } from '@lexical/react/LexicalTypeaheadMenuPlugin';
|
||||
import type { TextNode } from 'lexical';
|
||||
import {
|
||||
$createTextNode,
|
||||
$getSelection,
|
||||
$isRangeSelection,
|
||||
$isTextNode,
|
||||
COMMAND_PRIORITY_HIGH,
|
||||
KEY_ARROW_DOWN_COMMAND,
|
||||
KEY_ARROW_UP_COMMAND,
|
||||
KEY_ARROW_LEFT_COMMAND,
|
||||
KEY_ARROW_RIGHT_COMMAND,
|
||||
KEY_SPACE_COMMAND,
|
||||
KEY_ENTER_COMMAND
|
||||
} from 'lexical';
|
||||
import React, { useState } from 'react';
|
||||
import ReactDOM from 'react-dom';
|
||||
import { useCallback, useEffect, useRef, useMemo } from 'react';
|
||||
import { Box, Flex } from '@chakra-ui/react';
|
||||
import { useBasicTypeaheadTriggerMatch } from '../../utils';
|
||||
import Avatar from '../../../../Avatar';
|
||||
import MyIcon from '../../../../Icon';
|
||||
import MyBox from '../../../../MyBox';
|
||||
import { useMount } from 'ahooks';
|
||||
import { useRequest2 } from '../../../../../../hooks/useRequest';
|
||||
import type { ParentIdType } from '@fastgpt/global/common/parentFolder/type';
|
||||
import { useTranslation } from 'next-i18next';
|
||||
|
||||
export type SkillOptionItemType = {
|
||||
description?: string;
|
||||
list: SkillItemType[];
|
||||
|
||||
onSelect?: (id: string) => Promise<SkillOptionItemType | undefined>;
|
||||
onClick?: (id: string) => Promise<string | undefined>;
|
||||
onFolderLoad?: (id: string) => Promise<SkillItemType[] | undefined>;
|
||||
};
|
||||
|
||||
export type SkillItemType = {
|
||||
parentId?: ParentIdType;
|
||||
id: string;
|
||||
label: string;
|
||||
icon?: string;
|
||||
showArrow?: boolean;
|
||||
canOpen?: boolean;
|
||||
canUse?: boolean;
|
||||
open?: boolean;
|
||||
children?: SkillOptionItemType;
|
||||
folderChildren?: SkillItemType[];
|
||||
};
|
||||
|
||||
export default function SkillPickerPlugin({
|
||||
skillOption,
|
||||
isFocus
|
||||
}: {
|
||||
skillOption: SkillOptionItemType;
|
||||
isFocus: boolean;
|
||||
}) {
|
||||
const { t } = useTranslation();
|
||||
const [skillOptions, setSkillOptions] = useState<SkillOptionItemType[]>([skillOption]);
|
||||
const [isMenuOpen, setIsMenuOpen] = useState(false);
|
||||
|
||||
useEffect(() => {
|
||||
setSkillOptions((state) => {
|
||||
const newOptions = [...state];
|
||||
newOptions[0] = skillOption;
|
||||
return newOptions;
|
||||
});
|
||||
}, [skillOption]);
|
||||
|
||||
const [editor] = useLexicalComposerContext();
|
||||
const [selectedRowIndex, setSelectedRowIndex] = useState<Record<number, number>>({
|
||||
0: 0
|
||||
});
|
||||
const [currentColumnIndex, setCurrentColumnIndex] = useState<number>(0);
|
||||
const [currentRowIndex, setCurrentRowIndex] = useState<number>(0);
|
||||
const [interactionMode, setInteractionMode] = useState<'mouse' | 'keyboard'>('mouse');
|
||||
const [loadingFolderIds, setLoadingFolderIds] = useState(new Set());
|
||||
|
||||
// Refs for scroll management
|
||||
const itemRefs = useRef<Map<string, HTMLDivElement>>(new Map());
|
||||
|
||||
// Scroll selected item into view
|
||||
const scrollIntoView = useCallback((columnIndex: number, rowIndex: number, retryCount = 0) => {
|
||||
const itemKey = `${columnIndex}-${rowIndex}`;
|
||||
const itemElement = itemRefs.current.get(itemKey);
|
||||
if (itemElement) {
|
||||
if (rowIndex === 0) {
|
||||
const container = itemElement.parentElement;
|
||||
if (container) {
|
||||
container.scrollTop = 0;
|
||||
}
|
||||
} else {
|
||||
itemElement.scrollIntoView({
|
||||
behavior: 'smooth',
|
||||
block: 'nearest',
|
||||
inline: 'nearest'
|
||||
});
|
||||
}
|
||||
} else if (retryCount < 5) {
|
||||
// Retry if element not found yet (DOM not ready)
|
||||
setTimeout(() => {
|
||||
scrollIntoView(columnIndex, rowIndex, retryCount + 1);
|
||||
}, 20);
|
||||
}
|
||||
}, []);
|
||||
|
||||
const checkForTriggerMatch = useBasicTypeaheadTriggerMatch('@', {
|
||||
minLength: 0
|
||||
});
|
||||
|
||||
// Recursively collects all visible items including expanded folder children for keyboard navigation
|
||||
const getFlattenedVisibleItems = useCallback(
|
||||
(columnIndex: number): SkillItemType[] => {
|
||||
const column = skillOptions[columnIndex];
|
||||
|
||||
const flatten = (items: SkillItemType[]): SkillItemType[] => {
|
||||
const result: SkillItemType[] = [];
|
||||
items.forEach((item) => {
|
||||
result.push(item);
|
||||
// Include folder children only if folder is expanded
|
||||
if (item.canOpen && item.open && item.folderChildren) {
|
||||
result.push(...flatten(item.folderChildren));
|
||||
}
|
||||
});
|
||||
return result;
|
||||
};
|
||||
|
||||
return flatten(column.list);
|
||||
},
|
||||
[skillOptions]
|
||||
);
|
||||
|
||||
// Handle item selection (hover/keyboard navigation)
|
||||
const { runAsync: handleItemSelect, loading: isItemSelectLoading } = useRequest2(
|
||||
async ({
|
||||
currentColumnIndex,
|
||||
item,
|
||||
option
|
||||
}: {
|
||||
currentColumnIndex: number;
|
||||
item?: SkillItemType;
|
||||
option?: SkillOptionItemType;
|
||||
}) => {
|
||||
if (!item) return;
|
||||
const buffer = item.children;
|
||||
if (buffer) {
|
||||
setSkillOptions((prev) => {
|
||||
const newOptions = [...prev];
|
||||
newOptions[currentColumnIndex + 1] = buffer;
|
||||
return newOptions;
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
const result = await option?.onSelect?.(item.id);
|
||||
|
||||
setSkillOptions((prev) => {
|
||||
const newOptions = [...prev];
|
||||
if (result?.list && result?.list?.length > 0) {
|
||||
newOptions[currentColumnIndex + 1] = result;
|
||||
} else {
|
||||
for (let i = currentColumnIndex + 1; i < newOptions.length; i++) {
|
||||
// @ts-ignore
|
||||
newOptions[i] = undefined;
|
||||
}
|
||||
}
|
||||
return newOptions.filter(Boolean);
|
||||
});
|
||||
}
|
||||
);
|
||||
|
||||
// Handle item click (confirm selection)
|
||||
const { runAsync: handleItemClick, loading: isItemClickLoading } = useRequest2(
|
||||
async ({ item, option }: { item: SkillItemType; option?: SkillOptionItemType }) => {
|
||||
// Step 1: Execute async onClick to get skillId (outside editor.update)
|
||||
const skillId = await option?.onClick?.(item.id);
|
||||
|
||||
// Step 2: Update editor with the skillId (inside a fresh editor.update)
|
||||
if (skillId) {
|
||||
editor.update(() => {
|
||||
// Re-acquire selection in this update cycle to avoid stale node references
|
||||
const selection = $getSelection();
|
||||
if (!$isRangeSelection(selection)) return;
|
||||
|
||||
// Re-acquire nodes in this update cycle
|
||||
const nodes = selection.getNodes();
|
||||
nodes.forEach((node) => {
|
||||
if ($isTextNode(node)) {
|
||||
const text = node.getTextContent();
|
||||
const atIndex = text.lastIndexOf('@');
|
||||
if (atIndex !== -1) {
|
||||
// Remove the '@' trigger character
|
||||
const beforeAt = text.substring(0, atIndex);
|
||||
const afterAt = text.substring(atIndex + 1);
|
||||
node.setTextContent(beforeAt + afterAt);
|
||||
|
||||
// Move cursor to where '@' was
|
||||
const newOffset = beforeAt.length;
|
||||
node.select(newOffset, newOffset);
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
// Insert skill node text at current selection
|
||||
selection.insertNodes([$createTextNode(`{{@${skillId}@}}`)]);
|
||||
});
|
||||
}
|
||||
},
|
||||
{
|
||||
refreshDeps: [editor]
|
||||
}
|
||||
);
|
||||
|
||||
// Handle folder toggle
|
||||
const { runAsync: handleFolderToggle, loading: isFolderLoading } = useRequest2(
|
||||
async ({
|
||||
currentColumnIndex,
|
||||
item,
|
||||
option
|
||||
}: {
|
||||
currentColumnIndex: number;
|
||||
item?: SkillItemType;
|
||||
option?: SkillOptionItemType;
|
||||
}) => {
|
||||
if (!item || !item.canOpen) return;
|
||||
const currentFolder = item;
|
||||
|
||||
// Step 1: Toggle folder open/closed state
|
||||
setSkillOptions((prev) => {
|
||||
const newOptions = [...prev];
|
||||
const columnData = { ...newOptions[currentColumnIndex] };
|
||||
|
||||
// Recursively find and toggle the target folder
|
||||
const toggleFolderOpen = (items: SkillItemType[]): SkillItemType[] => {
|
||||
return items.map((item) => {
|
||||
// Found the target folder, toggle its open state
|
||||
if (item.id === currentFolder.id) {
|
||||
return { ...item, open: !currentFolder.open };
|
||||
}
|
||||
// Recursively search in nested folders
|
||||
if (item.folderChildren) {
|
||||
return { ...item, folderChildren: toggleFolderOpen(item.folderChildren) };
|
||||
}
|
||||
return item;
|
||||
});
|
||||
};
|
||||
|
||||
columnData.list = toggleFolderOpen(columnData.list);
|
||||
newOptions[currentColumnIndex] = columnData;
|
||||
return newOptions;
|
||||
});
|
||||
|
||||
// Step 2: Load folder children only if folder has no data
|
||||
if (!currentFolder.open && currentFolder?.folderChildren === undefined) {
|
||||
setLoadingFolderIds((prev) => {
|
||||
const next = new Set(prev);
|
||||
next.add(currentFolder.id);
|
||||
return next;
|
||||
});
|
||||
|
||||
try {
|
||||
const result = await option?.onFolderLoad?.(currentFolder.id);
|
||||
|
||||
setSkillOptions((prev) => {
|
||||
const newOptions = [...prev];
|
||||
const columnData = { ...newOptions[currentColumnIndex] };
|
||||
|
||||
const addFolderChildren = (items: SkillItemType[]): SkillItemType[] => {
|
||||
return items.map((item) => {
|
||||
if (item.id === currentFolder.id) {
|
||||
return {
|
||||
...item,
|
||||
folderChildren: result || []
|
||||
};
|
||||
}
|
||||
if (item.folderChildren) {
|
||||
return { ...item, folderChildren: addFolderChildren(item.folderChildren) };
|
||||
}
|
||||
return item;
|
||||
});
|
||||
};
|
||||
|
||||
columnData.list = addFolderChildren(columnData.list);
|
||||
newOptions[currentColumnIndex] = columnData;
|
||||
return newOptions;
|
||||
});
|
||||
} finally {
|
||||
setLoadingFolderIds((prev) => {
|
||||
const next = new Set(prev);
|
||||
next.delete(currentFolder.id);
|
||||
return next;
|
||||
});
|
||||
}
|
||||
}
|
||||
}
|
||||
);
|
||||
|
||||
// First init
|
||||
useMount(() => {
|
||||
handleItemSelect({ currentColumnIndex: 0, item: skillOption.list[0], option: skillOption });
|
||||
});
|
||||
// Scroll to selected item when menu opens
|
||||
useEffect(() => {
|
||||
if (isMenuOpen) {
|
||||
// Delay to ensure DOM is rendered and refs are attached
|
||||
setTimeout(() => {
|
||||
scrollIntoView(currentColumnIndex, currentRowIndex);
|
||||
});
|
||||
}
|
||||
}, [isMenuOpen, scrollIntoView, currentColumnIndex, currentRowIndex]);
|
||||
|
||||
// Keyboard navigation
|
||||
useEffect(() => {
|
||||
if (!isFocus || !isMenuOpen) return;
|
||||
|
||||
const removeUpCommand = editor.registerCommand(
|
||||
KEY_ARROW_UP_COMMAND,
|
||||
(e: KeyboardEvent) => {
|
||||
if (!isMenuOpen) return true;
|
||||
|
||||
e.preventDefault();
|
||||
e.stopPropagation();
|
||||
|
||||
setInteractionMode('keyboard');
|
||||
|
||||
if (currentColumnIndex >= 0 && currentColumnIndex < skillOptions.length) {
|
||||
const columnItems = getFlattenedVisibleItems(currentColumnIndex);
|
||||
if (!columnItems || columnItems.length === 0) return true;
|
||||
|
||||
// Use functional update to get the latest row index
|
||||
setCurrentRowIndex((prevRowIndex) => {
|
||||
const newIndex = prevRowIndex > 0 ? prevRowIndex - 1 : columnItems.length - 1;
|
||||
|
||||
handleItemSelect({
|
||||
currentColumnIndex: currentColumnIndex,
|
||||
item: columnItems[newIndex],
|
||||
option: skillOptions[currentColumnIndex]
|
||||
});
|
||||
|
||||
// Scroll into view after state update
|
||||
requestAnimationFrame(() => {
|
||||
scrollIntoView(currentColumnIndex, newIndex);
|
||||
});
|
||||
|
||||
return newIndex;
|
||||
});
|
||||
}
|
||||
|
||||
return true;
|
||||
},
|
||||
COMMAND_PRIORITY_HIGH
|
||||
);
|
||||
|
||||
const removeDownCommand = editor.registerCommand(
|
||||
KEY_ARROW_DOWN_COMMAND,
|
||||
(e: KeyboardEvent) => {
|
||||
if (!isMenuOpen) return true;
|
||||
|
||||
e.preventDefault();
|
||||
e.stopPropagation();
|
||||
|
||||
setInteractionMode('keyboard');
|
||||
|
||||
if (currentColumnIndex >= 0 && currentColumnIndex < skillOptions.length) {
|
||||
const columnItems = getFlattenedVisibleItems(currentColumnIndex);
|
||||
if (!columnItems || columnItems.length === 0) return true;
|
||||
|
||||
// Use functional update to get the latest row index
|
||||
setCurrentRowIndex((prevRowIndex) => {
|
||||
const newIndex = prevRowIndex < columnItems.length - 1 ? prevRowIndex + 1 : 0;
|
||||
|
||||
handleItemSelect({
|
||||
currentColumnIndex: currentColumnIndex,
|
||||
item: columnItems[newIndex],
|
||||
option: skillOptions[currentColumnIndex]
|
||||
});
|
||||
|
||||
// Scroll into view after state update
|
||||
requestAnimationFrame(() => {
|
||||
scrollIntoView(currentColumnIndex, newIndex);
|
||||
});
|
||||
|
||||
return newIndex;
|
||||
});
|
||||
}
|
||||
|
||||
return true;
|
||||
},
|
||||
COMMAND_PRIORITY_HIGH
|
||||
);
|
||||
|
||||
const removeRightCommand = editor.registerCommand(
|
||||
KEY_ARROW_RIGHT_COMMAND,
|
||||
(e: KeyboardEvent) => {
|
||||
if (!isMenuOpen) return true;
|
||||
|
||||
e.preventDefault();
|
||||
e.stopPropagation();
|
||||
|
||||
setInteractionMode('keyboard');
|
||||
|
||||
// Use functional updates to get the latest state
|
||||
setCurrentColumnIndex((prevColumnIndex) => {
|
||||
if (prevColumnIndex >= skillOptions.length - 1) return prevColumnIndex;
|
||||
|
||||
const newColumnIndex = prevColumnIndex + 1;
|
||||
|
||||
setSelectedRowIndex((state) => ({
|
||||
...state,
|
||||
[prevColumnIndex]: currentRowIndex
|
||||
}));
|
||||
|
||||
setCurrentRowIndex(0);
|
||||
|
||||
// Use the latest skillOptions from closure to get the new column items
|
||||
const newColumnOption = skillOptions[newColumnIndex];
|
||||
const newColumnItems = newColumnOption?.list;
|
||||
if (newColumnItems && newColumnItems.length > 0) {
|
||||
handleItemSelect({
|
||||
currentColumnIndex: newColumnIndex,
|
||||
item: newColumnItems[0],
|
||||
option: newColumnOption
|
||||
});
|
||||
|
||||
// Scroll into view after state update
|
||||
requestAnimationFrame(() => {
|
||||
scrollIntoView(newColumnIndex, 0);
|
||||
});
|
||||
}
|
||||
|
||||
return newColumnIndex;
|
||||
});
|
||||
|
||||
return true;
|
||||
},
|
||||
COMMAND_PRIORITY_HIGH
|
||||
);
|
||||
|
||||
const removeLeftCommand = editor.registerCommand(
|
||||
KEY_ARROW_LEFT_COMMAND,
|
||||
(e: KeyboardEvent) => {
|
||||
if (!isMenuOpen) return true;
|
||||
|
||||
e.preventDefault();
|
||||
e.stopPropagation();
|
||||
|
||||
setInteractionMode('keyboard');
|
||||
|
||||
// Use functional updates to get the latest state
|
||||
setCurrentColumnIndex((prevColumnIndex) => {
|
||||
if (prevColumnIndex <= 0) return prevColumnIndex;
|
||||
|
||||
const newColumnIndex = prevColumnIndex - 1;
|
||||
|
||||
setSelectedRowIndex((state) => ({
|
||||
...state,
|
||||
[prevColumnIndex]: currentRowIndex
|
||||
}));
|
||||
|
||||
const newRowIndex = selectedRowIndex[newColumnIndex] || 0;
|
||||
setCurrentRowIndex(() => newRowIndex);
|
||||
|
||||
// Only keep data up to and including the current column
|
||||
setSkillOptions((state) => {
|
||||
return state.slice(0, prevColumnIndex + 1);
|
||||
});
|
||||
|
||||
// Scroll into view after state update
|
||||
requestAnimationFrame(() => {
|
||||
scrollIntoView(newColumnIndex, newRowIndex);
|
||||
});
|
||||
|
||||
return newColumnIndex;
|
||||
});
|
||||
|
||||
return true;
|
||||
},
|
||||
COMMAND_PRIORITY_HIGH
|
||||
);
|
||||
|
||||
const removeSpaceCommand = editor.registerCommand(
|
||||
KEY_SPACE_COMMAND,
|
||||
(e: KeyboardEvent) => {
|
||||
if (!isMenuOpen) return true;
|
||||
|
||||
setInteractionMode('keyboard');
|
||||
|
||||
const flattenedItems = getFlattenedVisibleItems(currentColumnIndex);
|
||||
const latestItem = flattenedItems[currentRowIndex];
|
||||
const latestOption = skillOptions[currentColumnIndex];
|
||||
|
||||
if (latestItem?.canOpen && !(latestItem.open && latestItem.folderChildren?.length === 0)) {
|
||||
e.preventDefault();
|
||||
e.stopPropagation();
|
||||
handleFolderToggle({
|
||||
currentColumnIndex,
|
||||
item: latestItem,
|
||||
option: latestOption
|
||||
});
|
||||
return true;
|
||||
}
|
||||
|
||||
return false;
|
||||
},
|
||||
COMMAND_PRIORITY_HIGH
|
||||
);
|
||||
|
||||
const removeEnterCommand = editor.registerCommand(
|
||||
KEY_ENTER_COMMAND,
|
||||
(e: KeyboardEvent) => {
|
||||
if (!isMenuOpen) return true;
|
||||
|
||||
setInteractionMode('keyboard');
|
||||
|
||||
const flattenedItems = getFlattenedVisibleItems(currentColumnIndex);
|
||||
const latestItem = flattenedItems[currentRowIndex];
|
||||
const latestOption = skillOptions[currentColumnIndex];
|
||||
|
||||
if (latestItem?.canUse && latestOption) {
|
||||
e.preventDefault();
|
||||
e.stopPropagation();
|
||||
handleItemClick({ item: latestItem, option: latestOption });
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
return false;
|
||||
},
|
||||
COMMAND_PRIORITY_HIGH
|
||||
);
|
||||
|
||||
return () => {
|
||||
removeUpCommand();
|
||||
removeDownCommand();
|
||||
removeRightCommand();
|
||||
removeLeftCommand();
|
||||
removeSpaceCommand();
|
||||
removeEnterCommand();
|
||||
};
|
||||
}, [
|
||||
editor,
|
||||
isFocus,
|
||||
isMenuOpen,
|
||||
currentColumnIndex,
|
||||
currentRowIndex,
|
||||
skillOptions,
|
||||
handleItemSelect,
|
||||
handleFolderToggle,
|
||||
handleItemClick,
|
||||
selectedRowIndex,
|
||||
scrollIntoView,
|
||||
getFlattenedVisibleItems
|
||||
]);
|
||||
|
||||
// Recursively render item list
|
||||
const renderItemList = useCallback(
|
||||
(
|
||||
items: SkillItemType[],
|
||||
columnData: SkillOptionItemType,
|
||||
columnIndex: number,
|
||||
depth: number = 0,
|
||||
startFlatIndex: number = 0
|
||||
): { elements: JSX.Element[]; nextFlatIndex: number } => {
|
||||
const result: JSX.Element[] = [];
|
||||
const activeRowIndex = selectedRowIndex[columnIndex];
|
||||
let currentFlatIndex = startFlatIndex;
|
||||
console.log('items', { selectedRowIndex, columnIndex, activeRowIndex });
|
||||
|
||||
items.forEach((item) => {
|
||||
const flatIndex = currentFlatIndex;
|
||||
currentFlatIndex++;
|
||||
|
||||
// 前面的列,才有激活态
|
||||
const isActive = columnIndex < currentColumnIndex && flatIndex === activeRowIndex;
|
||||
// 当前选中的东西
|
||||
const isSelected = columnIndex === currentColumnIndex && flatIndex === currentRowIndex;
|
||||
|
||||
result.push(
|
||||
<MyBox
|
||||
key={item.id}
|
||||
ref={(el) => {
|
||||
if (el) {
|
||||
itemRefs.current.set(`${columnIndex}-${flatIndex}`, el as HTMLDivElement);
|
||||
} else {
|
||||
itemRefs.current.delete(`${columnIndex}-${flatIndex}`);
|
||||
}
|
||||
}}
|
||||
px={2}
|
||||
py={1.5}
|
||||
gap={2}
|
||||
pl={1 + depth * 4}
|
||||
borderRadius={'4px'}
|
||||
cursor={'pointer'}
|
||||
bg={isActive || isSelected ? 'myGray.100' : ''}
|
||||
color={isSelected ? 'primary.700' : 'myGray.600'}
|
||||
display={'flex'}
|
||||
alignItems={'center'}
|
||||
isLoading={loadingFolderIds.has(item.id)}
|
||||
size={'sm'}
|
||||
onMouseDown={(e) => {
|
||||
e.preventDefault();
|
||||
}}
|
||||
onMouseMove={(e) => {
|
||||
if (interactionMode === 'keyboard') {
|
||||
setInteractionMode('mouse');
|
||||
}
|
||||
}}
|
||||
onClick={(e) => {
|
||||
e.preventDefault();
|
||||
e.stopPropagation();
|
||||
if (item.canOpen) {
|
||||
handleFolderToggle({
|
||||
currentColumnIndex: columnIndex,
|
||||
item,
|
||||
option: columnData
|
||||
});
|
||||
} else if (item.canUse) {
|
||||
handleItemClick({
|
||||
item,
|
||||
option: columnData
|
||||
});
|
||||
}
|
||||
}}
|
||||
onMouseEnter={(e) => {
|
||||
e.preventDefault();
|
||||
|
||||
// Ignore mouse hover in keyboard mode
|
||||
if (interactionMode === 'keyboard') {
|
||||
return;
|
||||
}
|
||||
|
||||
if (columnIndex !== currentColumnIndex) {
|
||||
setSelectedRowIndex((state) => ({
|
||||
...state,
|
||||
[currentColumnIndex]: currentRowIndex
|
||||
}));
|
||||
}
|
||||
|
||||
setCurrentRowIndex(flatIndex);
|
||||
setCurrentColumnIndex(columnIndex);
|
||||
if (item.canUse) {
|
||||
handleItemSelect({
|
||||
currentColumnIndex: columnIndex,
|
||||
item,
|
||||
option: columnData
|
||||
});
|
||||
}
|
||||
}}
|
||||
>
|
||||
{item.canOpen && !(item.open && item.folderChildren?.length === 0) ? (
|
||||
<MyIcon
|
||||
name={'core/chat/chevronRight'}
|
||||
w={4}
|
||||
color={'myGray.500'}
|
||||
transform={item.open ? 'rotate(90deg)' : 'none'}
|
||||
transition={'transform 0.2s'}
|
||||
mr={-1}
|
||||
/>
|
||||
) : columnData.onFolderLoad ? (
|
||||
<Box w={3} flexShrink={0} />
|
||||
) : null}
|
||||
{item.icon && <Avatar src={item.icon} w={'1.2rem'} borderRadius={'xs'} />}
|
||||
<Box fontSize={'sm'} fontWeight={'medium'} flex={1}>
|
||||
{item.label}
|
||||
{item.canOpen && item.open && item.folderChildren?.length === 0 && (
|
||||
<Box as="span" color={'myGray.400'} fontSize={'xs'} ml={2}>
|
||||
{t('app:empty_folder')}
|
||||
</Box>
|
||||
)}
|
||||
</Box>
|
||||
{item.showArrow && (
|
||||
<MyIcon name={'core/chat/chevronRight'} w={'0.8rem'} color={'myGray.400'} />
|
||||
)}
|
||||
</MyBox>
|
||||
);
|
||||
|
||||
// render folderChildren
|
||||
if (item.canOpen && item.open && !!item.folderChildren && item.folderChildren.length > 0) {
|
||||
const { elements, nextFlatIndex } = renderItemList(
|
||||
item.folderChildren,
|
||||
columnData,
|
||||
columnIndex,
|
||||
depth + 1,
|
||||
currentFlatIndex
|
||||
);
|
||||
result.push(...elements);
|
||||
currentFlatIndex = nextFlatIndex;
|
||||
}
|
||||
});
|
||||
|
||||
return { elements: result, nextFlatIndex: currentFlatIndex };
|
||||
},
|
||||
[
|
||||
selectedRowIndex,
|
||||
currentColumnIndex,
|
||||
currentRowIndex,
|
||||
handleFolderToggle,
|
||||
handleItemClick,
|
||||
handleItemSelect,
|
||||
interactionMode,
|
||||
loadingFolderIds
|
||||
]
|
||||
);
|
||||
|
||||
// Render single column
|
||||
const renderColumn = useCallback(
|
||||
(columnData: SkillOptionItemType, columnIndex: number) => {
|
||||
const columnWidth = columnData.onFolderLoad ? '280px' : '200px';
|
||||
|
||||
return (
|
||||
<MyBox
|
||||
isLoading={currentColumnIndex === columnIndex && isItemClickLoading}
|
||||
key={columnIndex}
|
||||
ml={columnIndex > 0 ? 2 : 0}
|
||||
p={1.5}
|
||||
borderRadius={'sm'}
|
||||
w={columnWidth}
|
||||
boxShadow={'0 4px 10px 0 rgba(19, 51, 107, 0.10), 0 0 1px 0 rgba(19, 51, 107, 0.10)'}
|
||||
bg={'white'}
|
||||
flexShrink={0}
|
||||
maxH={'300px'}
|
||||
overflow={'auto'}
|
||||
>
|
||||
{columnData.description && (
|
||||
<Box color={'myGray.500'} fontSize={'xs'}>
|
||||
{columnData.description}
|
||||
</Box>
|
||||
)}
|
||||
{renderItemList(columnData.list, columnData, columnIndex).elements}
|
||||
</MyBox>
|
||||
);
|
||||
},
|
||||
[currentColumnIndex, isItemClickLoading, renderItemList]
|
||||
);
|
||||
|
||||
// For LexicalTypeaheadMenuPlugin compatibility
|
||||
const menuOptions = useMemo(() => {
|
||||
return skillOptions.flatMap((item) =>
|
||||
item.list.map((item) => ({
|
||||
key: item.id,
|
||||
...item
|
||||
}))
|
||||
);
|
||||
}, [skillOptions]);
|
||||
const onSelectOption = useCallback(
|
||||
async (selectedOption: any, nodeToRemove: TextNode | null, closeMenu: () => void) => {
|
||||
// Step 1: Call async onClick handler (outside editor.update)
|
||||
const skillId = await selectedOption.onClick?.(selectedOption.id);
|
||||
|
||||
// Step 2: Update editor with the skill (inside a fresh editor.update)
|
||||
if (skillId) {
|
||||
editor.update(() => {
|
||||
// Re-acquire selection in this update cycle to avoid stale node references
|
||||
const selection = $getSelection();
|
||||
if (!$isRangeSelection(selection)) return;
|
||||
|
||||
// Re-acquire nodes in this update cycle
|
||||
const nodes = selection.getNodes();
|
||||
nodes.forEach((node) => {
|
||||
if ($isTextNode(node)) {
|
||||
const text = node.getTextContent();
|
||||
const atIndex = text.lastIndexOf('@');
|
||||
if (atIndex !== -1) {
|
||||
// Remove the '@' trigger character
|
||||
const beforeAt = text.substring(0, atIndex);
|
||||
const afterAt = text.substring(atIndex + 1);
|
||||
node.setTextContent(beforeAt + afterAt);
|
||||
|
||||
// Move cursor to where '@' was
|
||||
const newOffset = beforeAt.length;
|
||||
node.select(newOffset, newOffset);
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
// Insert skill node text at current selection
|
||||
selection.insertNodes([$createTextNode(`{{@${skillId}@}}`)]);
|
||||
closeMenu();
|
||||
});
|
||||
} else {
|
||||
// If onClick didn't return a skillId, just close the menu
|
||||
closeMenu();
|
||||
}
|
||||
},
|
||||
[editor]
|
||||
);
|
||||
|
||||
return (
|
||||
<LexicalTypeaheadMenuPlugin
|
||||
onQueryChange={(matchingString) => {
|
||||
// Update menu open state based on query
|
||||
setIsMenuOpen(matchingString !== null);
|
||||
}}
|
||||
onSelectOption={onSelectOption}
|
||||
triggerFn={checkForTriggerMatch}
|
||||
options={menuOptions}
|
||||
menuRenderFn={(anchorElementRef) => {
|
||||
const shouldShow = skillOptions.length > 0 && anchorElementRef.current !== null && isFocus;
|
||||
|
||||
// Sync menu open state with render
|
||||
if (!shouldShow && isMenuOpen) {
|
||||
setIsMenuOpen(false);
|
||||
} else if (shouldShow && !isMenuOpen) {
|
||||
setIsMenuOpen(true);
|
||||
}
|
||||
|
||||
return ReactDOM.createPortal(
|
||||
<Flex
|
||||
visibility={shouldShow ? 'visible' : 'hidden'}
|
||||
position="relative"
|
||||
align="flex-start"
|
||||
zIndex={99999}
|
||||
>
|
||||
{skillOptions.map((column, index) => {
|
||||
return renderColumn(column, index);
|
||||
})}
|
||||
</Flex>,
|
||||
anchorElementRef.current!
|
||||
);
|
||||
}}
|
||||
/>
|
||||
);
|
||||
}
|
||||
|
|
@ -1,4 +1,5 @@
|
|||
import type { WorkflowIOValueTypeEnum } from '@fastgpt/global/core/workflow/constants';
|
||||
import type { FlowNodeTypeEnum } from '@fastgpt/global/core/workflow/node/constant';
|
||||
|
||||
export type EditorVariablePickerType = {
|
||||
key: string;
|
||||
|
|
@ -46,7 +47,6 @@ export type TabEditorNode = BaseEditorNode & {
|
|||
type: 'tab';
|
||||
};
|
||||
|
||||
// Rich text
|
||||
export type ParagraphEditorNode = BaseEditorNode & {
|
||||
type: 'paragraph';
|
||||
children: ChildEditorNode[];
|
||||
|
|
@ -55,17 +55,20 @@ export type ParagraphEditorNode = BaseEditorNode & {
|
|||
indent: number;
|
||||
};
|
||||
|
||||
// ListItem 节点的 children 可以包含嵌套的 list 节点
|
||||
export type ListItemChildEditorNode =
|
||||
| TextEditorNode
|
||||
| LineBreakEditorNode
|
||||
| TabEditorNode
|
||||
| VariableLabelEditorNode
|
||||
| VariableEditorNode;
|
||||
export type ListEditorNode = BaseEditorNode & {
|
||||
type: 'list';
|
||||
children: ListItemEditorNode[];
|
||||
direction: string | null;
|
||||
format: string;
|
||||
indent: number;
|
||||
listType: 'bullet' | 'number';
|
||||
start: number;
|
||||
tag: 'ul' | 'ol';
|
||||
};
|
||||
|
||||
export type ListItemEditorNode = BaseEditorNode & {
|
||||
type: 'listitem';
|
||||
children: (ListItemChildEditorNode | ListEditorNode)[];
|
||||
children: ChildEditorNode[];
|
||||
direction: string | null;
|
||||
format: string;
|
||||
indent: number;
|
||||
|
|
@ -82,15 +85,13 @@ export type VariableEditorNode = BaseEditorNode & {
|
|||
variableKey: string;
|
||||
};
|
||||
|
||||
export type ListEditorNode = BaseEditorNode & {
|
||||
type: 'list';
|
||||
children: ListItemEditorNode[];
|
||||
direction: string | null;
|
||||
format: string;
|
||||
indent: number;
|
||||
listType: 'bullet' | 'number';
|
||||
start: number;
|
||||
tag: 'ul' | 'ol';
|
||||
export type SkillEditorNode = BaseEditorNode & {
|
||||
type: 'skill';
|
||||
id: string;
|
||||
name?: string;
|
||||
icon?: string;
|
||||
skillType?: `${FlowNodeTypeEnum}`;
|
||||
format: number;
|
||||
};
|
||||
|
||||
export type ChildEditorNode =
|
||||
|
|
@ -101,7 +102,8 @@ export type ChildEditorNode =
|
|||
| ListEditorNode
|
||||
| ListItemEditorNode
|
||||
| VariableLabelEditorNode
|
||||
| VariableEditorNode;
|
||||
| VariableEditorNode
|
||||
| SkillEditorNode;
|
||||
|
||||
export type EditorState = {
|
||||
root: {
|
||||
|
|
|
|||
|
|
@ -12,6 +12,7 @@ import { $createTextNode, $isTextNode, TextNode } from 'lexical';
|
|||
import { useCallback } from 'react';
|
||||
import type { VariableLabelNode } from './plugins/VariableLabelPlugin/node';
|
||||
import type { VariableNode } from './plugins/VariablePlugin/node';
|
||||
import type { SkillNode } from './plugins/SkillLabelPlugin/node';
|
||||
import type {
|
||||
ListItemEditorNode,
|
||||
ListEditorNode,
|
||||
|
|
@ -22,7 +23,9 @@ import type {
|
|||
} from './type';
|
||||
import { TabStr } from './constants';
|
||||
|
||||
export function registerLexicalTextEntity<T extends TextNode | VariableLabelNode | VariableNode>(
|
||||
export function registerLexicalTextEntity<
|
||||
T extends TextNode | VariableLabelNode | VariableNode | SkillNode
|
||||
>(
|
||||
editor: LexicalEditor,
|
||||
getMatch: (text: string) => null | EntityMatch,
|
||||
targetNode: Klass<T>,
|
||||
|
|
@ -32,7 +35,9 @@ export function registerLexicalTextEntity<T extends TextNode | VariableLabelNode
|
|||
return node instanceof targetNode;
|
||||
};
|
||||
|
||||
const replaceWithSimpleText = (node: TextNode | VariableLabelNode | VariableNode): void => {
|
||||
const replaceWithSimpleText = (
|
||||
node: TextNode | VariableLabelNode | VariableNode | SkillNode
|
||||
): void => {
|
||||
const textNode = $createTextNode(node.getTextContent());
|
||||
textNode.setFormat(node.getFormat());
|
||||
node.replace(textNode);
|
||||
|
|
@ -432,6 +437,8 @@ const processListItem = ({
|
|||
itemText.push(TabStr);
|
||||
} else if (child.type === 'variableLabel' || child.type === 'Variable') {
|
||||
itemText.push(child.variableKey);
|
||||
} else if (child.type === 'skill') {
|
||||
itemText.push(`{{@${child.id}@}}`);
|
||||
} else if (child.type === 'list') {
|
||||
nestedLists.push(child);
|
||||
}
|
||||
|
|
@ -499,6 +506,11 @@ export const editorStateToText = (editor: LexicalEditor) => {
|
|||
return node.variableKey || '';
|
||||
}
|
||||
|
||||
// Handle skill nodes
|
||||
if (node.type === 'skill') {
|
||||
return `{{@${node.id}@}}`;
|
||||
}
|
||||
|
||||
// Handle paragraph nodes - recursively process children
|
||||
if (node.type === 'paragraph') {
|
||||
if (!node.children || node.children.length === 0) {
|
||||
|
|
|
|||
|
|
@ -57,6 +57,7 @@
|
|||
"auto_execute_default_prompt_placeholder": "Default questions sent when executing automatically",
|
||||
"auto_execute_tip": "After turning it on, the workflow will be automatically triggered when the user enters the conversation interface. \nExecution order: 1. Dialogue starter; 2. Global variables; 3. Automatic execution.",
|
||||
"auto_save": "Auto save",
|
||||
"can_select_toolset": "Entire toolset available for selection",
|
||||
"change_app_type": "Change App Type",
|
||||
"chat_debug": "Chat Preview",
|
||||
"chat_logs": "Logs",
|
||||
|
|
@ -143,6 +144,7 @@
|
|||
"document_upload": "Document Upload",
|
||||
"edit_app": "Application details",
|
||||
"edit_info": "Edit",
|
||||
"empty_folder": "(empty folder)",
|
||||
"empty_tool_tips": "Please add tools on the left side",
|
||||
"execute_time": "Execution Time",
|
||||
"expand_tool_create": "Expand MCP/Http create",
|
||||
|
|
@ -322,8 +324,9 @@
|
|||
"setting_plugin": "Workflow",
|
||||
"show_templates": "Expand",
|
||||
"show_top_p_tip": "An alternative method of temperature sampling, called Nucleus sampling, the model considers the results of tokens with TOP_P probability mass quality. \nTherefore, 0.1 means that only tokens containing the highest probability quality are considered. \nThe default is 1.",
|
||||
"simple_tool_tips": "This plugin contains special inputs and is not currently supported for invocation by simple applications.",
|
||||
"simple_tool_tips": "This tool contains special inputs and does not support being called by simple applications.",
|
||||
"source_updateTime": "Update time",
|
||||
"space_to_expand_folder": "Press \"Space\" to expand the folder",
|
||||
"stop_sign": "Stop",
|
||||
"stop_sign_placeholder": "Multiple serial numbers are separated by |, for example: aaa|stop",
|
||||
"stream_response": "Stream",
|
||||
|
|
|
|||
|
|
@ -30,6 +30,7 @@
|
|||
"config_input_guide_lexicon_title": "Set Up Lexicon",
|
||||
"confirm_clear_input_value": "Are you sure to clear the form content? \nDefault values will be restored!",
|
||||
"confirm_to_clear_share_chat_history": "Are you sure you want to clear all chat history?",
|
||||
"confirm_plan_agent": "Please confirm whether the change plan meets expectations. If you need to modify it, you can send the modification requirements in the input box at the bottom.",
|
||||
"content_empty": "No Content",
|
||||
"contextual": "{{num}} Contexts",
|
||||
"contextual_preview": "Contextual Preview {{num}} Items",
|
||||
|
|
@ -47,6 +48,7 @@
|
|||
"file_amount_over": "Exceeded maximum file quantity {{max}}",
|
||||
"file_input": "File input",
|
||||
"file_input_tip": "You can obtain the link to the corresponding file through the \"File Link\" of the [Plug-in Start] node",
|
||||
"file_parse": "File parsing",
|
||||
"history_slider.home.title": "chat",
|
||||
"home.chat_app": "HomeChat-{{name}}",
|
||||
"home.chat_id": "Chat ID",
|
||||
|
|
@ -78,6 +80,7 @@
|
|||
"no_workflow_response": "No workflow data",
|
||||
"not_query": "Missing query content",
|
||||
"not_select_file": "No file selected",
|
||||
"plan_agent": "Plan agent",
|
||||
"plugins_output": "Plugin Output",
|
||||
"press_to_speak": "Hold down to speak",
|
||||
"query_extension_IO_tokens": "Problem Optimization Input/Output Tokens",
|
||||
|
|
|
|||
|
|
@ -85,6 +85,7 @@
|
|||
"Select_App": "Select an application",
|
||||
"Select_all": "Select all",
|
||||
"Setting": "Setting",
|
||||
"Skill_Label_Unconfigured": "The parameters are not configured, click Configure",
|
||||
"Status": "Status",
|
||||
"Submit": "Submit",
|
||||
"Success": "Success",
|
||||
|
|
@ -102,6 +103,7 @@
|
|||
"add_new_param": "Add new param",
|
||||
"add_success": "Added Successfully",
|
||||
"aipoint_desc": "Each time the AI model is called, a certain amount of AI points (similar to tokens) will be consumed. Click to view detailed calculation rules.",
|
||||
"agent_prompt_tips": "It is recommended to fill in the following template for best results.\n\n\"Role Identity\"\n\n\"Task Objective\"\n\n\"Task Process and Skills\"\n\nEnter \"/\" to insert global variables; enter \"@\" to insert specific skills, including applications, tools, knowledge bases, and models.",
|
||||
"all_quotes": "All quotes",
|
||||
"all_result": "Full Results",
|
||||
"app_evaluation": "App Evaluation(Beta)",
|
||||
|
|
|
|||
|
|
@ -59,6 +59,7 @@
|
|||
"auto_execute_default_prompt_placeholder": "自动执行时,发送的默认问题",
|
||||
"auto_execute_tip": "开启后,用户进入对话界面将自动触发工作流。执行顺序:1、对话开场白;2、全局变量;3、自动执行。",
|
||||
"auto_save": "自动保存",
|
||||
"can_select_toolset": "可选择整个工具集",
|
||||
"change_app_type": "更改应用类型",
|
||||
"chat_debug": "调试预览",
|
||||
"chat_logs": "对话日志",
|
||||
|
|
@ -147,6 +148,7 @@
|
|||
"edit_app": "应用详情",
|
||||
"edit_info": "编辑信息",
|
||||
"edit_param": "编辑参数",
|
||||
"empty_folder": "(空文件夹)",
|
||||
"empty_tool_tips": "请在左侧添加工具",
|
||||
"execute_time": "执行时间",
|
||||
"expand_tool_create": "展开MCP、Http创建",
|
||||
|
|
@ -335,8 +337,9 @@
|
|||
"setting_plugin": "插件配置",
|
||||
"show_templates": "显示模板",
|
||||
"show_top_p_tip": "用温度采样的替代方法,称为Nucleus采样,该模型考虑了具有TOP_P概率质量质量的令牌的结果。因此,0.1表示仅考虑包含最高概率质量的令牌。默认为 1。",
|
||||
"simple_tool_tips": "该插件含有特殊输入,暂不支持被简易应用调用",
|
||||
"simple_tool_tips": "该工具含有特殊输入,暂不支持被简易应用调用",
|
||||
"source_updateTime": "更新时间",
|
||||
"space_to_expand_folder": "按\"空格\"展开文件夹",
|
||||
"stop_sign": "停止序列",
|
||||
"stop_sign_placeholder": "多个序列号通过 | 隔开,例如:aaa|stop",
|
||||
"stream_response": "流输出",
|
||||
|
|
|
|||
|
|
@ -30,6 +30,7 @@
|
|||
"config_input_guide_lexicon_title": "配置词库",
|
||||
"confirm_clear_input_value": "确认清空表单内容?将会恢复默认值!",
|
||||
"confirm_to_clear_share_chat_history": "确认清空所有聊天记录?",
|
||||
"confirm_plan_agent": "请确认改计划是否符合预期,如需修改,可在底部输入框中发送修改要求。",
|
||||
"content_empty": "内容为空",
|
||||
"contextual": "{{num}}条上下文",
|
||||
"contextual_preview": "上下文预览 {{num}} 条",
|
||||
|
|
@ -47,6 +48,7 @@
|
|||
"file_amount_over": "超出最大文件数量 {{max}}",
|
||||
"file_input": "系统文件",
|
||||
"file_input_tip": "可通过【插件开始】节点的“文件链接”获取对应文件的链接",
|
||||
"file_parse": "文件解析",
|
||||
"history_slider.home.title": "聊天",
|
||||
"home.chat_app": "首页聊天",
|
||||
"home.chat_id": "会话ID",
|
||||
|
|
@ -78,6 +80,7 @@
|
|||
"no_workflow_response": "没有运行数据",
|
||||
"not_query": "缺少查询内容",
|
||||
"not_select_file": "未选择文件",
|
||||
"plan_agent": "任务规划",
|
||||
"plugins_output": "插件输出",
|
||||
"press_to_speak": "按住说话",
|
||||
"query_extension_IO_tokens": "问题优化输入/输出 Tokens",
|
||||
|
|
|
|||
|
|
@ -86,6 +86,7 @@
|
|||
"Select_App": "选择应用",
|
||||
"Select_all": "全选",
|
||||
"Setting": "设置",
|
||||
"Skill_Label_Unconfigured": "参数未配置,点击配置",
|
||||
"Status": "状态",
|
||||
"Submit": "提交",
|
||||
"Success": "成功",
|
||||
|
|
@ -103,6 +104,7 @@
|
|||
"add_new_param": "新增参数",
|
||||
"add_success": "添加成功",
|
||||
"aipoint_desc": "每次调用 AI 模型时,都会消耗一定的 AI 积分(类似于 token)。点击可查看详细计算规则。",
|
||||
"agent_prompt_tips": "建议按照以下模板填写,以获得最佳效果。\n「角色身份」\n「任务目标」\n「任务流程与技能」\n输入“/”插入全局变量;输入“@”插入特定技能,包括应用、工具、知识库、模型。",
|
||||
"all_quotes": "全部引用",
|
||||
"all_result": "完整结果",
|
||||
"app_evaluation": "Agent 评测(Beta)",
|
||||
|
|
|
|||
|
|
@ -57,6 +57,7 @@
|
|||
"auto_execute_default_prompt_placeholder": "自動執行時,傳送的預設問題",
|
||||
"auto_execute_tip": "開啟後,使用者進入對話式介面將自動觸發工作流程。\n執行順序:1、對話開場白;2、全域變數;3、自動執行。",
|
||||
"auto_save": "自動儲存",
|
||||
"can_select_toolset": "可選擇整個工具集",
|
||||
"change_app_type": "更改應用程式類型",
|
||||
"chat_debug": "聊天預覽",
|
||||
"chat_logs": "對話紀錄",
|
||||
|
|
@ -142,6 +143,7 @@
|
|||
"document_upload": "文件上傳",
|
||||
"edit_app": "應用詳情",
|
||||
"edit_info": "編輯資訊",
|
||||
"empty_folder": "(空文件夾)",
|
||||
"empty_tool_tips": "請在左側添加工具",
|
||||
"execute_time": "執行時間",
|
||||
"expand_tool_create": "展開 MCP、Http 創建",
|
||||
|
|
@ -320,8 +322,9 @@
|
|||
"setting_plugin": "外掛設定",
|
||||
"show_templates": "顯示模板",
|
||||
"show_top_p_tip": "用溫度取樣的替代方法,稱為 Nucleus 取樣,該模型考慮了具有 TOP_P 機率質量質量的令牌的結果。\n因此,0.1 表示僅考慮包含最高機率質量的令牌。\n預設為 1。",
|
||||
"simple_tool_tips": "該外掛含有特殊輸入,暫不支援被簡易應用呼叫",
|
||||
"simple_tool_tips": "該工具含有特殊輸入,暫不支持被簡易應用調用",
|
||||
"source_updateTime": "更新時間",
|
||||
"space_to_expand_folder": "按\"空格\"展開文件夾",
|
||||
"stop_sign": "停止序列",
|
||||
"stop_sign_placeholder": "多個序列號透過 | 隔開,例如:aaa|stop",
|
||||
"stream_response": "流輸出",
|
||||
|
|
|
|||
|
|
@ -30,6 +30,7 @@
|
|||
"config_input_guide_lexicon_title": "設定詞彙庫",
|
||||
"confirm_clear_input_value": "確認清空表單內容?\n將會恢復默認值!",
|
||||
"confirm_to_clear_share_chat_history": "確認清空所有聊天記錄?",
|
||||
"confirm_plan_agent": "請確認改計劃是否符合預期,如需修改,可在底部輸入框中發送修改要求。",
|
||||
"content_empty": "無內容",
|
||||
"contextual": "{{num}} 筆上下文",
|
||||
"contextual_preview": "上下文預覽 {{num}} 筆",
|
||||
|
|
@ -47,6 +48,7 @@
|
|||
"file_amount_over": "超出檔案數量上限 {{max}}",
|
||||
"file_input": "檔案輸入",
|
||||
"file_input_tip": "可透過「外掛程式啟動」節點的「檔案連結」取得對應檔案的連結",
|
||||
"file_parse": "文件解析",
|
||||
"history_slider.home.title": "聊天",
|
||||
"home.chat_app": "首页聊天",
|
||||
"home.chat_id": "會話ID",
|
||||
|
|
@ -78,6 +80,7 @@
|
|||
"no_workflow_response": "無工作流程資料",
|
||||
"not_query": "缺少查詢內容",
|
||||
"not_select_file": "尚未選取檔案",
|
||||
"plan_agent": "任務規劃",
|
||||
"plugins_output": "外掛程式輸出",
|
||||
"press_to_speak": "按住說話",
|
||||
"query_extension_IO_tokens": "問題最佳化輸入/輸出 Tokens",
|
||||
|
|
|
|||
|
|
@ -85,6 +85,7 @@
|
|||
"Select_App": "選擇應用",
|
||||
"Select_all": "全選",
|
||||
"Setting": "設定",
|
||||
"Skill_Label_Unconfigured": "參數未配置,點擊配置",
|
||||
"Status": "狀態",
|
||||
"Submit": "送出",
|
||||
"Success": "成功",
|
||||
|
|
@ -102,6 +103,7 @@
|
|||
"add_new_param": "新增參數",
|
||||
"add_success": "新增成功",
|
||||
"aipoint_desc": "每次呼叫 AI 模型時,都會消耗一定的 AI 點數(類似於 Token)。點選可檢視詳細計算規則。",
|
||||
"agent_prompt_tips": "建議按照以下模板填寫,以獲得最佳效果。\n\n「角色身份」\n「任務目標」\n「任務流程與技能」\n輸入“/”插入全局變量;輸入“@”插入特定技能,包括應用、工具、知識庫、模型。",
|
||||
"all_quotes": "全部引用",
|
||||
"all_result": "完整結果",
|
||||
"app_evaluation": "應用評測(Beta)",
|
||||
|
|
|
|||
|
|
@ -24590,4 +24590,4 @@ snapshots:
|
|||
immer: 9.0.21
|
||||
react: 18.3.1
|
||||
|
||||
zwitch@2.0.4: {}
|
||||
zwitch@2.0.4: {}
|
||||
|
|
@ -2,8 +2,9 @@ import Avatar from '@fastgpt/web/components/common/Avatar';
|
|||
import { Box } from '@chakra-ui/react';
|
||||
import { useTheme } from '@chakra-ui/system';
|
||||
import React from 'react';
|
||||
import type { ChatRoleEnum } from '@fastgpt/global/core/chat/constants';
|
||||
|
||||
const ChatAvatar = ({ src, type }: { src?: string; type: 'Human' | 'AI' }) => {
|
||||
const ChatAvatar = ({ src, type }: { src?: string; type: `${ChatRoleEnum}` }) => {
|
||||
const theme = useTheme();
|
||||
return (
|
||||
<Box
|
||||
|
|
|
|||
|
|
@ -6,11 +6,7 @@ import { MessageCardStyle } from '../constants';
|
|||
import { formatChatValue2InputType } from '../utils';
|
||||
import Markdown from '@/components/Markdown';
|
||||
import styles from '../index.module.scss';
|
||||
import {
|
||||
ChatItemValueTypeEnum,
|
||||
ChatRoleEnum,
|
||||
ChatStatusEnum
|
||||
} from '@fastgpt/global/core/chat/constants';
|
||||
import { ChatRoleEnum, ChatStatusEnum } from '@fastgpt/global/core/chat/constants';
|
||||
import FilesBlock from './FilesBox';
|
||||
import { ChatBoxContext } from '../Provider';
|
||||
import { useContextSelector } from 'use-context-selector';
|
||||
|
|
@ -20,10 +16,8 @@ import { useCopyData } from '@fastgpt/web/hooks/useCopyData';
|
|||
import MyIcon from '@fastgpt/web/components/common/Icon';
|
||||
import MyTooltip from '@fastgpt/web/components/common/MyTooltip';
|
||||
import { useTranslation } from 'next-i18next';
|
||||
import {
|
||||
type AIChatItemValueItemType,
|
||||
type ChatItemValueItemType
|
||||
} from '@fastgpt/global/core/chat/type';
|
||||
import type { AIChatItemType, UserChatItemValueItemType } from '@fastgpt/global/core/chat/type';
|
||||
import { type AIChatItemValueItemType } from '@fastgpt/global/core/chat/type';
|
||||
import { CodeClassNameEnum } from '@/components/Markdown/utils';
|
||||
import { isEqual } from 'lodash';
|
||||
import { useSystem } from '@fastgpt/web/hooks/useSystem';
|
||||
|
|
@ -56,7 +50,7 @@ const colorMap = {
|
|||
}
|
||||
};
|
||||
|
||||
type BasicProps = {
|
||||
type Props = {
|
||||
avatar?: string;
|
||||
statusBoxData?: {
|
||||
status: `${ChatStatusEnum}`;
|
||||
|
|
@ -66,10 +60,6 @@ type BasicProps = {
|
|||
children?: React.ReactNode;
|
||||
} & ChatControllerProps;
|
||||
|
||||
type Props = BasicProps & {
|
||||
type: ChatRoleEnum.Human | ChatRoleEnum.AI;
|
||||
};
|
||||
|
||||
const RenderQuestionGuide = ({ questionGuides }: { questionGuides: string[] }) => {
|
||||
return (
|
||||
<Markdown
|
||||
|
|
@ -80,7 +70,7 @@ ${JSON.stringify(questionGuides)}`}
|
|||
};
|
||||
|
||||
const HumanContentCard = React.memo(
|
||||
function HumanContentCard({ chatValue }: { chatValue: ChatItemValueItemType[] }) {
|
||||
function HumanContentCard({ chatValue }: { chatValue: UserChatItemValueItemType[] }) {
|
||||
const { text, files = [] } = formatChatValue2InputType(chatValue);
|
||||
return (
|
||||
<Flex flexDirection={'column'} gap={4}>
|
||||
|
|
@ -93,6 +83,7 @@ const HumanContentCard = React.memo(
|
|||
);
|
||||
const AIContentCard = React.memo(function AIContentCard({
|
||||
chatValue,
|
||||
subAppsValue = {},
|
||||
dataId,
|
||||
isLastChild,
|
||||
isChatting,
|
||||
|
|
@ -100,7 +91,8 @@ const AIContentCard = React.memo(function AIContentCard({
|
|||
onOpenCiteModal
|
||||
}: {
|
||||
dataId: string;
|
||||
chatValue: ChatItemValueItemType[];
|
||||
chatValue: AIChatItemValueItemType[];
|
||||
subAppsValue?: AIChatItemType['subAppsValue'];
|
||||
isLastChild: boolean;
|
||||
isChatting: boolean;
|
||||
questionGuides: string[];
|
||||
|
|
@ -109,13 +101,14 @@ const AIContentCard = React.memo(function AIContentCard({
|
|||
return (
|
||||
<Flex flexDirection={'column'} gap={2}>
|
||||
{chatValue.map((value, i) => {
|
||||
const key = `${dataId}-ai-${i}`;
|
||||
const key = value.id || `${dataId}-ai-${i}`;
|
||||
|
||||
return (
|
||||
<AIResponseBox
|
||||
chatItemDataId={dataId}
|
||||
key={key}
|
||||
value={value}
|
||||
subAppValue={value.tool ? subAppsValue[value.tool.id] : undefined}
|
||||
isLastResponseValue={isLastChild && i === chatValue.length - 1}
|
||||
isChatting={isChatting}
|
||||
onOpenCiteModal={onOpenCiteModal}
|
||||
|
|
@ -130,7 +123,7 @@ const AIContentCard = React.memo(function AIContentCard({
|
|||
});
|
||||
|
||||
const ChatItem = (props: Props) => {
|
||||
const { type, avatar, statusBoxData, children, isLastChild, questionGuides = [], chat } = props;
|
||||
const { avatar, statusBoxData, children, isLastChild, questionGuides = [], chat } = props;
|
||||
|
||||
const { t } = useTranslation();
|
||||
const { isPc } = useSystem();
|
||||
|
|
@ -139,7 +132,7 @@ const ChatItem = (props: Props) => {
|
|||
|
||||
const styleMap: BoxProps = useMemoEnhance(
|
||||
() => ({
|
||||
...(type === ChatRoleEnum.Human
|
||||
...(chat.obj === ChatRoleEnum.Human
|
||||
? {
|
||||
order: 0,
|
||||
borderRadius: '8px 0 8px 8px',
|
||||
|
|
@ -158,7 +151,7 @@ const ChatItem = (props: Props) => {
|
|||
fontWeight: '400',
|
||||
color: 'myGray.500'
|
||||
}),
|
||||
[type]
|
||||
[chat.obj]
|
||||
);
|
||||
|
||||
const isChatting = useContextSelector(ChatBoxContext, (v) => v.isChatting);
|
||||
|
|
@ -189,57 +182,63 @@ const ChatItem = (props: Props) => {
|
|||
2. Auto-complete the last textnode
|
||||
*/
|
||||
const splitAiResponseResults = useMemo(() => {
|
||||
if (chat.obj !== ChatRoleEnum.AI) return [chat.value];
|
||||
if (chat.obj === ChatRoleEnum.Human) return [chat.value];
|
||||
|
||||
// Remove empty text node
|
||||
const filterList = chat.value.filter((item, i) => {
|
||||
if (item.type === ChatItemValueTypeEnum.text && !item.text?.content?.trim()) {
|
||||
return false;
|
||||
}
|
||||
return item;
|
||||
});
|
||||
|
||||
const groupedValues: AIChatItemValueItemType[][] = [];
|
||||
let currentGroup: AIChatItemValueItemType[] = [];
|
||||
|
||||
filterList.forEach((value) => {
|
||||
if (value.type === 'interactive') {
|
||||
if (currentGroup.length > 0) {
|
||||
groupedValues.push(currentGroup);
|
||||
currentGroup = [];
|
||||
if (chat.obj === ChatRoleEnum.AI) {
|
||||
// Remove empty text node
|
||||
const filterList = chat.value.filter((item, i) => {
|
||||
if (item.text && !item.text.content?.trim()) {
|
||||
return false;
|
||||
}
|
||||
if (item.reasoning && !item.reasoning.content?.trim()) {
|
||||
return false;
|
||||
}
|
||||
return item;
|
||||
});
|
||||
|
||||
groupedValues.push([value]);
|
||||
} else {
|
||||
currentGroup.push(value);
|
||||
}
|
||||
});
|
||||
const groupedValues: AIChatItemValueItemType[][] = [];
|
||||
let currentGroup: AIChatItemValueItemType[] = [];
|
||||
|
||||
if (currentGroup.length > 0) {
|
||||
groupedValues.push(currentGroup);
|
||||
}
|
||||
|
||||
// Check last group is interactive, Auto add a empty text node(animation)
|
||||
const lastGroup = groupedValues[groupedValues.length - 1];
|
||||
if (isChatting || groupedValues.length === 0) {
|
||||
if (
|
||||
(lastGroup &&
|
||||
lastGroup[lastGroup.length - 1] &&
|
||||
lastGroup[lastGroup.length - 1].type === ChatItemValueTypeEnum.interactive) ||
|
||||
groupedValues.length === 0
|
||||
) {
|
||||
groupedValues.push([
|
||||
{
|
||||
type: ChatItemValueTypeEnum.text,
|
||||
text: {
|
||||
content: ''
|
||||
}
|
||||
filterList.forEach((value) => {
|
||||
if (value.interactive) {
|
||||
if (currentGroup.length > 0) {
|
||||
groupedValues.push(currentGroup);
|
||||
currentGroup = [];
|
||||
}
|
||||
]);
|
||||
|
||||
groupedValues.push([value]);
|
||||
} else {
|
||||
currentGroup.push(value);
|
||||
}
|
||||
});
|
||||
|
||||
if (currentGroup.length > 0) {
|
||||
groupedValues.push(currentGroup);
|
||||
}
|
||||
|
||||
// Check last group is interactive, Auto add a empty text node(animation)
|
||||
const lastGroup = groupedValues[groupedValues.length - 1];
|
||||
if (isChatting || groupedValues.length === 0) {
|
||||
if (
|
||||
(lastGroup &&
|
||||
lastGroup[lastGroup.length - 1] &&
|
||||
lastGroup[lastGroup.length - 1].interactive) ||
|
||||
groupedValues.length === 0
|
||||
) {
|
||||
groupedValues.push([
|
||||
{
|
||||
text: {
|
||||
content: ''
|
||||
}
|
||||
}
|
||||
]);
|
||||
}
|
||||
}
|
||||
|
||||
return groupedValues;
|
||||
}
|
||||
|
||||
return groupedValues;
|
||||
return [];
|
||||
}, [chat.obj, chat.value, isChatting]);
|
||||
|
||||
const setCiteModalData = useContextSelector(ChatItemContext, (v) => v.setCiteModalData);
|
||||
|
|
@ -284,6 +283,8 @@ const ChatItem = (props: Props) => {
|
|||
}
|
||||
);
|
||||
|
||||
const aiSubApps = 'subApps' in chat ? chat.subApps : undefined;
|
||||
|
||||
return (
|
||||
<Box
|
||||
data-chat-id={chat.dataId}
|
||||
|
|
@ -295,11 +296,11 @@ const ChatItem = (props: Props) => {
|
|||
>
|
||||
{/* control icon */}
|
||||
<Flex w={'100%'} alignItems={'center'} gap={2} justifyContent={styleMap.justifyContent}>
|
||||
{isChatting && type === ChatRoleEnum.AI && isLastChild ? null : (
|
||||
{isChatting && chat.obj === ChatRoleEnum.AI && isLastChild ? null : (
|
||||
<Flex order={styleMap.order} ml={styleMap.ml} align={'center'} gap={'0.62rem'}>
|
||||
{chat.time && (isPc || isChatLog) && (
|
||||
<Box
|
||||
order={type === ChatRoleEnum.AI ? 2 : 0}
|
||||
order={chat.obj === ChatRoleEnum.AI ? 2 : 0}
|
||||
className={'time-label'}
|
||||
fontSize={styleMap.fontSize}
|
||||
color={styleMap.color}
|
||||
|
|
@ -319,7 +320,7 @@ const ChatItem = (props: Props) => {
|
|||
/>
|
||||
</Flex>
|
||||
)}
|
||||
<ChatAvatar src={avatar} type={type} />
|
||||
<ChatAvatar src={avatar} type={chat.obj} />
|
||||
|
||||
{/* Workflow status */}
|
||||
{!!chatStatusMap && statusBoxData && isLastChild && showNodeStatus && (
|
||||
|
|
@ -377,88 +378,91 @@ const ChatItem = (props: Props) => {
|
|||
)}
|
||||
|
||||
{/* content */}
|
||||
{splitAiResponseResults.map((value, i) => (
|
||||
<Box
|
||||
key={i}
|
||||
mt={['6px', 2]}
|
||||
className="chat-box-card"
|
||||
textAlign={styleMap.textAlign}
|
||||
_hover={{
|
||||
'& .footer-copy': {
|
||||
display: 'block'
|
||||
}
|
||||
}}
|
||||
>
|
||||
<Card
|
||||
{...MessageCardStyle}
|
||||
bg={styleMap.bg}
|
||||
borderRadius={styleMap.borderRadius}
|
||||
textAlign={'left'}
|
||||
{splitAiResponseResults.map((value, i) => {
|
||||
return (
|
||||
<Box
|
||||
key={i}
|
||||
mt={['6px', 2]}
|
||||
className="chat-box-card"
|
||||
textAlign={styleMap.textAlign}
|
||||
_hover={{
|
||||
'& .footer-copy': {
|
||||
display: 'block'
|
||||
}
|
||||
}}
|
||||
>
|
||||
{type === ChatRoleEnum.Human && <HumanContentCard chatValue={value} />}
|
||||
{type === ChatRoleEnum.AI && (
|
||||
<>
|
||||
<AIContentCard
|
||||
chatValue={value}
|
||||
dataId={chat.dataId}
|
||||
isLastChild={isLastChild && i === splitAiResponseResults.length - 1}
|
||||
isChatting={isChatting}
|
||||
questionGuides={questionGuides}
|
||||
onOpenCiteModal={onOpenCiteModal}
|
||||
/>
|
||||
{i === splitAiResponseResults.length - 1 && (
|
||||
<ResponseTags
|
||||
showTags={!isLastChild || !isChatting}
|
||||
historyItem={chat}
|
||||
<Card
|
||||
{...MessageCardStyle}
|
||||
bg={styleMap.bg}
|
||||
borderRadius={styleMap.borderRadius}
|
||||
textAlign={'left'}
|
||||
>
|
||||
{chat.obj === ChatRoleEnum.Human && <HumanContentCard chatValue={value} />}
|
||||
{chat.obj === ChatRoleEnum.AI && (
|
||||
<>
|
||||
<AIContentCard
|
||||
chatValue={value as AIChatItemValueItemType[]}
|
||||
subAppsValue={chat.subAppsValue}
|
||||
dataId={chat.dataId}
|
||||
isLastChild={isLastChild && i === splitAiResponseResults.length - 1}
|
||||
isChatting={isChatting}
|
||||
questionGuides={questionGuides}
|
||||
onOpenCiteModal={onOpenCiteModal}
|
||||
/>
|
||||
)}
|
||||
</>
|
||||
)}
|
||||
{/* Example: Response tags. A set of dialogs only needs to be displayed once*/}
|
||||
{i === splitAiResponseResults.length - 1 && (
|
||||
<>
|
||||
{/* error message */}
|
||||
{!!chat.errorMsg && (
|
||||
<Box mt={2}>
|
||||
<ChatBoxDivider icon={'common/errorFill'} text={t('chat:error_message')} />
|
||||
<Box fontSize={'xs'} color={'myGray.500'}>
|
||||
{chat.errorMsg}
|
||||
{i === splitAiResponseResults.length - 1 && (
|
||||
<ResponseTags
|
||||
showTags={!isLastChild || !isChatting}
|
||||
historyItem={chat}
|
||||
onOpenCiteModal={onOpenCiteModal}
|
||||
/>
|
||||
)}
|
||||
</>
|
||||
)}
|
||||
{/* Example: Response tags. A set of dialogs only needs to be displayed once*/}
|
||||
{i === splitAiResponseResults.length - 1 && (
|
||||
<>
|
||||
{/* error message */}
|
||||
{!!chat.errorMsg && (
|
||||
<Box mt={2}>
|
||||
<ChatBoxDivider icon={'common/errorFill'} text={t('chat:error_message')} />
|
||||
<Box fontSize={'xs'} color={'myGray.500'}>
|
||||
{chat.errorMsg}
|
||||
</Box>
|
||||
</Box>
|
||||
)}
|
||||
{children}
|
||||
</>
|
||||
)}
|
||||
{/* 对话框底部的复制按钮 */}
|
||||
{chat.obj == ChatRoleEnum.AI &&
|
||||
!('interactive' in value[0]) &&
|
||||
(!isChatting || (isChatting && !isLastChild)) && (
|
||||
<Box
|
||||
className="footer-copy"
|
||||
display={['block', 'none']}
|
||||
position={'absolute'}
|
||||
bottom={0}
|
||||
right={0}
|
||||
transform={'translateX(100%)'}
|
||||
>
|
||||
<MyTooltip label={t('common:Copy')}>
|
||||
<MyIcon
|
||||
w={'1rem'}
|
||||
cursor="pointer"
|
||||
p="5px"
|
||||
bg="white"
|
||||
name={'copy'}
|
||||
color={'myGray.500'}
|
||||
_hover={{ color: 'primary.600' }}
|
||||
onClick={() => copyData(formatChatValue2InputType(value).text ?? '')}
|
||||
/>
|
||||
</MyTooltip>
|
||||
</Box>
|
||||
)}
|
||||
{children}
|
||||
</>
|
||||
)}
|
||||
{/* 对话框底部的复制按钮 */}
|
||||
{type == ChatRoleEnum.AI &&
|
||||
value[0]?.type !== 'interactive' &&
|
||||
(!isChatting || (isChatting && !isLastChild)) && (
|
||||
<Box
|
||||
className="footer-copy"
|
||||
display={['block', 'none']}
|
||||
position={'absolute'}
|
||||
bottom={0}
|
||||
right={0}
|
||||
transform={'translateX(100%)'}
|
||||
>
|
||||
<MyTooltip label={t('common:Copy')}>
|
||||
<MyIcon
|
||||
w={'1rem'}
|
||||
cursor="pointer"
|
||||
p="5px"
|
||||
bg="white"
|
||||
name={'copy'}
|
||||
color={'myGray.500'}
|
||||
_hover={{ color: 'primary.600' }}
|
||||
onClick={() => copyData(formatChatValue2InputType(value).text ?? '')}
|
||||
/>
|
||||
</MyTooltip>
|
||||
</Box>
|
||||
)}
|
||||
</Card>
|
||||
</Box>
|
||||
))}
|
||||
</Card>
|
||||
</Box>
|
||||
);
|
||||
})}
|
||||
</Box>
|
||||
);
|
||||
};
|
||||
|
|
|
|||
|
|
@ -9,7 +9,7 @@ import { useRequest2 } from '@fastgpt/web/hooks/useRequest';
|
|||
import { useTranslation } from 'next-i18next';
|
||||
import { getFlatAppResponses } from '@/global/core/chat/utils';
|
||||
const isLLMNode = (item: ChatHistoryItemResType) =>
|
||||
item.moduleType === FlowNodeTypeEnum.chatNode || item.moduleType === FlowNodeTypeEnum.agent;
|
||||
item.moduleType === FlowNodeTypeEnum.chatNode || item.moduleType === FlowNodeTypeEnum.toolCall;
|
||||
|
||||
const ContextModal = ({ onClose, dataId }: { onClose: () => void; dataId: string }) => {
|
||||
const { getHistoryResponseData } = useContextSelector(ChatBoxContext, (v) => v);
|
||||
|
|
|
|||
|
|
@ -3,7 +3,6 @@ import { type ChatItemType } from '@fastgpt/global/core/chat/type';
|
|||
import { useCallback } from 'react';
|
||||
import { htmlTemplate } from '@/web/core/chat/constants';
|
||||
import { fileDownload } from '@/web/common/file/utils';
|
||||
import { ChatItemValueTypeEnum } from '@fastgpt/global/core/chat/constants';
|
||||
import { useTranslation } from 'next-i18next';
|
||||
export const useChatBox = () => {
|
||||
const { t } = useTranslation();
|
||||
|
|
@ -47,13 +46,13 @@ export const useChatBox = () => {
|
|||
.map((item) => {
|
||||
let result = `Role: ${item.obj}\n`;
|
||||
const content = item.value.map((item) => {
|
||||
if (item.type === ChatItemValueTypeEnum.text) {
|
||||
if (item.text) {
|
||||
return item.text?.content;
|
||||
} else if (item.type === ChatItemValueTypeEnum.file) {
|
||||
} else if ('file' in item && item.file) {
|
||||
return `
|
||||

|
||||
`;
|
||||
} else if (item.type === ChatItemValueTypeEnum.tool) {
|
||||
} else if ('tools' in item && item.tools) {
|
||||
return `
|
||||
\`\`\`Tool
|
||||
${JSON.stringify(item.tools, null, 2)}
|
||||
|
|
|
|||
|
|
@ -36,15 +36,11 @@ import ChatInput from './Input/ChatInput';
|
|||
import ChatBoxDivider from '../../Divider';
|
||||
import { type OutLinkChatAuthProps } from '@fastgpt/global/support/permission/chat';
|
||||
import { getNanoid } from '@fastgpt/global/common/string/tools';
|
||||
import {
|
||||
ChatItemValueTypeEnum,
|
||||
ChatRoleEnum,
|
||||
ChatStatusEnum
|
||||
} from '@fastgpt/global/core/chat/constants';
|
||||
import { ChatRoleEnum, ChatStatusEnum } from '@fastgpt/global/core/chat/constants';
|
||||
import {
|
||||
getInteractiveByHistories,
|
||||
formatChatValue2InputType,
|
||||
setInteractiveResultToHistories
|
||||
rewriteHistoriesByInteractiveResponse
|
||||
} from './utils';
|
||||
import { ChatTypeEnum, textareaMinH } from './constants';
|
||||
import { SseResponseEventEnum } from '@fastgpt/global/core/workflow/runtime/constants';
|
||||
|
|
@ -67,6 +63,7 @@ import { VariableInputEnum } from '@fastgpt/global/core/workflow/constants';
|
|||
import { valueTypeFormat } from '@fastgpt/global/core/workflow/runtime/utils';
|
||||
import { formatTime2YMDHMS } from '@fastgpt/global/common/string/time';
|
||||
import { TeamErrEnum } from '@fastgpt/global/common/error/code/team';
|
||||
import { cloneDeep } from 'lodash';
|
||||
|
||||
const FeedbackModal = dynamic(() => import('./components/FeedbackModal'));
|
||||
const SelectMarkCollection = dynamic(() => import('./components/SelectMarkCollection'));
|
||||
|
|
@ -154,7 +151,10 @@ const ChatBox = ({
|
|||
const isChatting = useContextSelector(ChatBoxContext, (v) => v.isChatting);
|
||||
|
||||
// Workflow running, there are user input or selection
|
||||
const lastInteractive = useMemo(() => getInteractiveByHistories(chatRecords), [chatRecords]);
|
||||
const { interactive: lastInteractive, canSendQuery } = useMemo(
|
||||
() => getInteractiveByHistories(chatRecords),
|
||||
[chatRecords]
|
||||
);
|
||||
|
||||
const showExternalVariable = useMemo(() => {
|
||||
const map: Record<string, boolean> = {
|
||||
|
|
@ -240,144 +240,262 @@ const ChatBox = ({
|
|||
|
||||
const generatingMessage = useMemoizedFn(
|
||||
({
|
||||
responseValueId,
|
||||
event,
|
||||
text = '',
|
||||
reasoningText,
|
||||
status,
|
||||
name,
|
||||
tool,
|
||||
subAppId,
|
||||
interactive,
|
||||
autoTTSResponse,
|
||||
variables,
|
||||
nodeResponse,
|
||||
durationSeconds
|
||||
durationSeconds,
|
||||
autoTTSResponse
|
||||
}: generatingMessageProps & { autoTTSResponse?: boolean }) => {
|
||||
setChatRecords((state) =>
|
||||
state.map((item, index) => {
|
||||
if (index !== state.length - 1) return item;
|
||||
if (item.obj !== ChatRoleEnum.AI) return item;
|
||||
|
||||
autoTTSResponse && splitText2Audio(formatChatValue2InputType(item.value).text || '');
|
||||
if (subAppId) {
|
||||
let subAppValue = cloneDeep(item.subAppsValue?.[subAppId]);
|
||||
if (!subAppValue) {
|
||||
console.log("Can't find the sub app");
|
||||
return item;
|
||||
}
|
||||
|
||||
const lastValue: AIChatItemValueItemType = JSON.parse(
|
||||
JSON.stringify(item.value[item.value.length - 1])
|
||||
);
|
||||
const updateIndex = (() => {
|
||||
if (!responseValueId) return subAppValue.length - 1;
|
||||
const index = subAppValue.findIndex((item) => item.id === responseValueId);
|
||||
if (index !== -1) return index;
|
||||
return subAppValue.length - 1;
|
||||
})();
|
||||
const updateValue = subAppValue[updateIndex];
|
||||
|
||||
if (
|
||||
event === SseResponseEventEnum.answer ||
|
||||
event === SseResponseEventEnum.fastAnswer
|
||||
) {
|
||||
if (reasoningText) {
|
||||
if (updateValue?.reasoning) {
|
||||
updateValue.reasoning.content += reasoningText;
|
||||
} else {
|
||||
const val: AIChatItemValueItemType = {
|
||||
id: responseValueId,
|
||||
reasoning: {
|
||||
content: reasoningText
|
||||
}
|
||||
};
|
||||
subAppValue = [
|
||||
...subAppValue.slice(0, updateIndex),
|
||||
val,
|
||||
...subAppValue.slice(updateIndex + 1)
|
||||
];
|
||||
}
|
||||
}
|
||||
if (text) {
|
||||
if (updateValue?.text) {
|
||||
updateValue.text.content += text;
|
||||
} else {
|
||||
const val: AIChatItemValueItemType = {
|
||||
id: responseValueId,
|
||||
text: {
|
||||
content: text
|
||||
}
|
||||
};
|
||||
subAppValue = [
|
||||
...subAppValue.slice(0, updateIndex),
|
||||
val,
|
||||
...subAppValue.slice(updateIndex + 1)
|
||||
];
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (event === SseResponseEventEnum.toolCall && tool) {
|
||||
const val: AIChatItemValueItemType = {
|
||||
id: responseValueId,
|
||||
tool
|
||||
};
|
||||
subAppValue = [
|
||||
...subAppValue.slice(0, updateIndex),
|
||||
val,
|
||||
...subAppValue.slice(updateIndex + 1)
|
||||
];
|
||||
}
|
||||
if (event === SseResponseEventEnum.toolParams && tool && updateValue?.tool) {
|
||||
if (tool.params) {
|
||||
updateValue.tool.params += tool.params;
|
||||
}
|
||||
return item;
|
||||
}
|
||||
if (event === SseResponseEventEnum.toolResponse && tool && updateValue?.tool) {
|
||||
if (tool.response) {
|
||||
updateValue.tool.response += tool.response;
|
||||
}
|
||||
return item;
|
||||
}
|
||||
|
||||
if (event === SseResponseEventEnum.flowNodeResponse && nodeResponse) {
|
||||
return {
|
||||
...item,
|
||||
responseData: item.responseData
|
||||
? [...item.responseData, nodeResponse]
|
||||
: [nodeResponse]
|
||||
subAppsValue: {
|
||||
...item.subAppsValue,
|
||||
[subAppId]: subAppValue
|
||||
}
|
||||
};
|
||||
} else if (event === SseResponseEventEnum.flowNodeStatus && status) {
|
||||
return {
|
||||
...item,
|
||||
status,
|
||||
moduleName: name
|
||||
};
|
||||
} else if (reasoningText) {
|
||||
if (lastValue.type === ChatItemValueTypeEnum.reasoning && lastValue.reasoning) {
|
||||
lastValue.reasoning.content += reasoningText;
|
||||
} else {
|
||||
autoTTSResponse && splitText2Audio(formatChatValue2InputType(item.value).text || '');
|
||||
|
||||
const updateIndex = (() => {
|
||||
if (!responseValueId) return item.value.length - 1;
|
||||
const index = item.value.findIndex((item) => item.id === responseValueId);
|
||||
if (index !== -1) return index;
|
||||
return item.value.length - 1;
|
||||
})();
|
||||
const updateValue: AIChatItemValueItemType = cloneDeep(item.value[updateIndex]);
|
||||
updateValue.id = responseValueId;
|
||||
|
||||
if (event === SseResponseEventEnum.flowNodeResponse && nodeResponse) {
|
||||
return {
|
||||
...item,
|
||||
value: item.value.slice(0, -1).concat(lastValue)
|
||||
responseData: item.responseData
|
||||
? [...item.responseData, nodeResponse]
|
||||
: [nodeResponse]
|
||||
};
|
||||
} else {
|
||||
}
|
||||
if (event === SseResponseEventEnum.flowNodeStatus && status) {
|
||||
return {
|
||||
...item,
|
||||
status,
|
||||
moduleName: name
|
||||
};
|
||||
}
|
||||
if (
|
||||
event === SseResponseEventEnum.answer ||
|
||||
event === SseResponseEventEnum.fastAnswer
|
||||
) {
|
||||
if (reasoningText) {
|
||||
if (updateValue?.reasoning) {
|
||||
updateValue.reasoning.content += reasoningText;
|
||||
return {
|
||||
...item,
|
||||
value: [
|
||||
...item.value.slice(0, updateIndex),
|
||||
updateValue,
|
||||
...item.value.slice(updateIndex + 1)
|
||||
]
|
||||
};
|
||||
} else {
|
||||
const val: AIChatItemValueItemType = {
|
||||
id: responseValueId,
|
||||
reasoning: {
|
||||
content: reasoningText
|
||||
}
|
||||
};
|
||||
return {
|
||||
...item,
|
||||
value: [...item.value, val]
|
||||
};
|
||||
}
|
||||
}
|
||||
if (text) {
|
||||
if (updateValue?.text) {
|
||||
updateValue.text.content += text;
|
||||
return {
|
||||
...item,
|
||||
value: [
|
||||
...item.value.slice(0, updateIndex),
|
||||
updateValue,
|
||||
...item.value.slice(updateIndex + 1)
|
||||
]
|
||||
};
|
||||
} else {
|
||||
const newValue: AIChatItemValueItemType = {
|
||||
id: responseValueId,
|
||||
text: {
|
||||
content: text
|
||||
}
|
||||
};
|
||||
return {
|
||||
...item,
|
||||
value: item.value.concat(newValue)
|
||||
};
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Tool call
|
||||
if (event === SseResponseEventEnum.toolCall && tool) {
|
||||
const val: AIChatItemValueItemType = {
|
||||
type: ChatItemValueTypeEnum.reasoning,
|
||||
reasoning: {
|
||||
content: reasoningText
|
||||
id: responseValueId,
|
||||
tool: {
|
||||
...tool,
|
||||
response: ''
|
||||
}
|
||||
};
|
||||
return {
|
||||
...item,
|
||||
subAppsValue: {
|
||||
...item.subAppsValue,
|
||||
[tool.id]: []
|
||||
},
|
||||
value: [...item.value, val]
|
||||
};
|
||||
}
|
||||
if (event === SseResponseEventEnum.toolParams && tool && updateValue?.tool) {
|
||||
if (tool.params) {
|
||||
updateValue.tool.params += tool.params;
|
||||
return {
|
||||
...item,
|
||||
value: [
|
||||
...item.value.slice(0, updateIndex),
|
||||
updateValue,
|
||||
...item.value.slice(updateIndex + 1)
|
||||
]
|
||||
};
|
||||
}
|
||||
return item;
|
||||
}
|
||||
if (event === SseResponseEventEnum.toolResponse && tool && updateValue?.tool) {
|
||||
if (tool.response) {
|
||||
// replace tool response
|
||||
updateValue.tool.response += tool.response;
|
||||
|
||||
return {
|
||||
...item,
|
||||
value: [
|
||||
...item.value.slice(0, updateIndex),
|
||||
updateValue,
|
||||
...item.value.slice(updateIndex + 1)
|
||||
]
|
||||
};
|
||||
}
|
||||
return item;
|
||||
}
|
||||
|
||||
if (event === SseResponseEventEnum.updateVariables && variables) {
|
||||
resetVariables({ variables });
|
||||
}
|
||||
if (event === SseResponseEventEnum.interactive && interactive) {
|
||||
const val: AIChatItemValueItemType = {
|
||||
interactive
|
||||
};
|
||||
|
||||
return {
|
||||
...item,
|
||||
value: item.value.concat(val)
|
||||
};
|
||||
}
|
||||
} else if (
|
||||
(event === SseResponseEventEnum.answer || event === SseResponseEventEnum.fastAnswer) &&
|
||||
text
|
||||
) {
|
||||
if (!lastValue || !lastValue.text) {
|
||||
const newValue: AIChatItemValueItemType = {
|
||||
type: ChatItemValueTypeEnum.text,
|
||||
text: {
|
||||
content: text
|
||||
}
|
||||
};
|
||||
if (event === SseResponseEventEnum.workflowDuration && durationSeconds) {
|
||||
return {
|
||||
...item,
|
||||
value: item.value.concat(newValue)
|
||||
};
|
||||
} else {
|
||||
lastValue.text.content += text;
|
||||
return {
|
||||
...item,
|
||||
value: item.value.slice(0, -1).concat(lastValue)
|
||||
durationSeconds: item.durationSeconds
|
||||
? +(item.durationSeconds + durationSeconds).toFixed(2)
|
||||
: durationSeconds
|
||||
};
|
||||
}
|
||||
} else if (event === SseResponseEventEnum.toolCall && tool) {
|
||||
const val: AIChatItemValueItemType = {
|
||||
type: ChatItemValueTypeEnum.tool,
|
||||
tools: [tool]
|
||||
};
|
||||
return {
|
||||
...item,
|
||||
value: item.value.concat(val)
|
||||
};
|
||||
} else if (
|
||||
event === SseResponseEventEnum.toolParams &&
|
||||
tool &&
|
||||
lastValue.type === ChatItemValueTypeEnum.tool &&
|
||||
lastValue?.tools
|
||||
) {
|
||||
lastValue.tools = lastValue.tools.map((item) => {
|
||||
if (item.id === tool.id) {
|
||||
item.params += tool.params;
|
||||
}
|
||||
return item;
|
||||
});
|
||||
return {
|
||||
...item,
|
||||
value: item.value.slice(0, -1).concat(lastValue)
|
||||
};
|
||||
} else if (event === SseResponseEventEnum.toolResponse && tool) {
|
||||
// replace tool response
|
||||
return {
|
||||
...item,
|
||||
value: item.value.map((val) => {
|
||||
if (val.type === ChatItemValueTypeEnum.tool && val.tools) {
|
||||
const tools = val.tools.map((item) =>
|
||||
item.id === tool.id ? { ...item, response: tool.response } : item
|
||||
);
|
||||
return {
|
||||
...val,
|
||||
tools
|
||||
};
|
||||
}
|
||||
return val;
|
||||
})
|
||||
};
|
||||
} else if (event === SseResponseEventEnum.updateVariables && variables) {
|
||||
resetVariables({ variables });
|
||||
} else if (event === SseResponseEventEnum.interactive) {
|
||||
const val: AIChatItemValueItemType = {
|
||||
type: ChatItemValueTypeEnum.interactive,
|
||||
interactive
|
||||
};
|
||||
|
||||
return {
|
||||
...item,
|
||||
value: item.value.concat(val)
|
||||
};
|
||||
} else if (event === SseResponseEventEnum.workflowDuration && durationSeconds) {
|
||||
return {
|
||||
...item,
|
||||
durationSeconds: item.durationSeconds
|
||||
? +(item.durationSeconds + durationSeconds).toFixed(2)
|
||||
: durationSeconds
|
||||
};
|
||||
}
|
||||
|
||||
return item;
|
||||
|
|
@ -446,8 +564,8 @@ const ChatBox = ({
|
|||
text = '',
|
||||
files = [],
|
||||
history = chatRecords,
|
||||
interactive,
|
||||
autoTTSResponse = false,
|
||||
isInteractivePrompt = false,
|
||||
hideInUI = false
|
||||
}) => {
|
||||
variablesForm.handleSubmit(
|
||||
|
|
@ -520,7 +638,6 @@ const ChatBox = ({
|
|||
hideInUI,
|
||||
value: [
|
||||
...files.map((file) => ({
|
||||
type: ChatItemValueTypeEnum.file,
|
||||
file: {
|
||||
type: file.type,
|
||||
name: file.name,
|
||||
|
|
@ -532,7 +649,6 @@ const ChatBox = ({
|
|||
...(text
|
||||
? [
|
||||
{
|
||||
type: ChatItemValueTypeEnum.text,
|
||||
text: {
|
||||
content: text
|
||||
}
|
||||
|
|
@ -548,7 +664,6 @@ const ChatBox = ({
|
|||
obj: ChatRoleEnum.AI,
|
||||
value: [
|
||||
{
|
||||
type: ChatItemValueTypeEnum.text,
|
||||
text: {
|
||||
content: ''
|
||||
}
|
||||
|
|
@ -560,9 +675,13 @@ const ChatBox = ({
|
|||
|
||||
// Update histories(Interactive input does not require new session rounds)
|
||||
setChatRecords(
|
||||
isInteractivePrompt
|
||||
interactive
|
||||
? // 把交互的结果存储到对话记录中,交互模式下,不需要新的会话轮次
|
||||
setInteractiveResultToHistories(newChatList.slice(0, -2), text)
|
||||
rewriteHistoriesByInteractiveResponse({
|
||||
histories: newChatList,
|
||||
interactive: interactive,
|
||||
interactiveVal: text
|
||||
})
|
||||
: newChatList
|
||||
);
|
||||
|
||||
|
|
@ -626,7 +745,7 @@ const ChatBox = ({
|
|||
};
|
||||
});
|
||||
|
||||
const lastInteractive = getInteractiveByHistories(state);
|
||||
const { interactive: lastInteractive } = getInteractiveByHistories(state);
|
||||
if (lastInteractive?.type === 'paymentPause' && !lastInteractive.params.continue) {
|
||||
setNotSufficientModalType(TeamErrEnum.aiPointsNotEnough);
|
||||
}
|
||||
|
|
@ -636,7 +755,7 @@ const ChatBox = ({
|
|||
|
||||
setTimeout(() => {
|
||||
// If there is no interactive mode, create a question guide
|
||||
if (!getInteractiveByHistories(newChatHistories)) {
|
||||
if (!getInteractiveByHistories(newChatHistories).interactive) {
|
||||
createQuestionGuide();
|
||||
}
|
||||
|
||||
|
|
@ -904,7 +1023,7 @@ const ChatBox = ({
|
|||
abortRequest('leave');
|
||||
}, [chatId, appId, abortRequest, setValue]);
|
||||
|
||||
const canSendPrompt = onStartChat && chatStarted && active && !lastInteractive;
|
||||
const canSendPrompt = onStartChat && chatStarted && active && canSendQuery;
|
||||
|
||||
// Add listener
|
||||
useEffect(() => {
|
||||
|
|
@ -917,9 +1036,12 @@ const ChatBox = ({
|
|||
};
|
||||
window.addEventListener('message', windowMessage);
|
||||
|
||||
const fn: SendPromptFnType = (e) => {
|
||||
if (canSendPrompt || e.isInteractivePrompt) {
|
||||
sendPrompt(e);
|
||||
const fn = ({ focus = false, ...e }: ChatBoxInputType & { focus?: boolean }) => {
|
||||
if (canSendPrompt || focus) {
|
||||
sendPrompt({
|
||||
...e,
|
||||
interactive: lastInteractive
|
||||
});
|
||||
}
|
||||
};
|
||||
eventBus.on(EventNameEnum.sendQuestion, fn);
|
||||
|
|
@ -933,7 +1055,7 @@ const ChatBox = ({
|
|||
eventBus.off(EventNameEnum.sendQuestion);
|
||||
eventBus.off(EventNameEnum.editQuestion);
|
||||
};
|
||||
}, [isReady, resetInputVal, sendPrompt, canSendPrompt]);
|
||||
}, [isReady, resetInputVal, sendPrompt, canSendPrompt, lastInteractive]);
|
||||
|
||||
// Auto send prompt
|
||||
useDebounceEffect(
|
||||
|
|
@ -1032,7 +1154,6 @@ const ChatBox = ({
|
|||
<Box py={item.hideInUI ? 0 : 6}>
|
||||
{item.obj === ChatRoleEnum.Human && !item.hideInUI && (
|
||||
<ChatItem
|
||||
type={item.obj}
|
||||
avatar={userAvatar}
|
||||
chat={item}
|
||||
onRetry={retryInput(item.dataId)}
|
||||
|
|
@ -1042,7 +1163,6 @@ const ChatBox = ({
|
|||
)}
|
||||
{item.obj === ChatRoleEnum.AI && (
|
||||
<ChatItem
|
||||
type={item.obj}
|
||||
avatar={appAvatar}
|
||||
chat={item}
|
||||
isLastChild={index === chatRecords.length - 1}
|
||||
|
|
|
|||
|
|
@ -3,6 +3,10 @@ import type { ChatFileTypeEnum } from '@fastgpt/global/core/chat/constants';
|
|||
import type { ChatSiteItemType } from '@fastgpt/global/core/chat/type';
|
||||
import { ChatItemValueItemType, ToolModuleResponseItemType } from '@fastgpt/global/core/chat/type';
|
||||
import { SseResponseEventEnum } from '@fastgpt/global/core/workflow/runtime/constants';
|
||||
import type {
|
||||
InteractiveNodeResponseType,
|
||||
WorkflowInteractiveResponseType
|
||||
} from '@fastgpt/global/core/workflow/template/system/interactive/type';
|
||||
|
||||
export type UserInputFileItemType = {
|
||||
id: string;
|
||||
|
|
@ -27,7 +31,7 @@ export type ChatBoxInputFormType = {
|
|||
export type ChatBoxInputType = {
|
||||
text?: string;
|
||||
files?: UserInputFileItemType[];
|
||||
isInteractivePrompt?: boolean;
|
||||
interactive?: WorkflowInteractiveResponseType;
|
||||
hideInUI?: boolean;
|
||||
};
|
||||
|
||||
|
|
|
|||
|
|
@ -5,9 +5,13 @@ import {
|
|||
} from '@fastgpt/global/core/chat/type';
|
||||
import { type ChatBoxInputType, type UserInputFileItemType } from './type';
|
||||
import { getFileIcon } from '@fastgpt/global/common/file/icon';
|
||||
import { ChatItemValueTypeEnum, ChatStatusEnum } from '@fastgpt/global/core/chat/constants';
|
||||
import { extractDeepestInteractive } from '@fastgpt/global/core/workflow/runtime/utils';
|
||||
import { ChatStatusEnum } from '@fastgpt/global/core/chat/constants';
|
||||
import {
|
||||
extractDeepestInteractive,
|
||||
getLastInteractiveValue
|
||||
} from '@fastgpt/global/core/workflow/runtime/utils';
|
||||
import type { WorkflowInteractiveResponseType } from '@fastgpt/global/core/workflow/template/system/interactive/type';
|
||||
import { ConfirmPlanAgentText } from '@fastgpt/global/core/workflow/runtime/constants';
|
||||
|
||||
export const formatChatValue2InputType = (value?: ChatItemValueItemType[]): ChatBoxInputType => {
|
||||
if (!value) {
|
||||
|
|
@ -26,7 +30,7 @@ export const formatChatValue2InputType = (value?: ChatItemValueItemType[]): Chat
|
|||
const files =
|
||||
(value
|
||||
?.map((item) =>
|
||||
item.type === 'file' && item.file
|
||||
'file' in item && item.file
|
||||
? {
|
||||
id: item.file.url,
|
||||
type: item.file.type,
|
||||
|
|
@ -45,59 +49,88 @@ export const formatChatValue2InputType = (value?: ChatItemValueItemType[]): Chat
|
|||
};
|
||||
};
|
||||
|
||||
// 用于判断当前对话框状态。所以,如果是 child 的 interactive,需要递归去找到最后一个。
|
||||
export const getInteractiveByHistories = (
|
||||
chatHistories: ChatSiteItemType[]
|
||||
): WorkflowInteractiveResponseType | undefined => {
|
||||
const lastAIHistory = chatHistories[chatHistories.length - 1];
|
||||
if (!lastAIHistory) return;
|
||||
|
||||
const lastMessageValue = lastAIHistory.value[
|
||||
lastAIHistory.value.length - 1
|
||||
] as AIChatItemValueItemType;
|
||||
|
||||
if (
|
||||
lastMessageValue &&
|
||||
lastMessageValue.type === ChatItemValueTypeEnum.interactive &&
|
||||
!!lastMessageValue?.interactive?.params
|
||||
) {
|
||||
const finalInteractive = extractDeepestInteractive(lastMessageValue.interactive);
|
||||
|
||||
// 如果用户选择了,则不认为是交互模式(可能是上一轮以交互结尾,发起的新的一轮对话)
|
||||
if (finalInteractive.type === 'userSelect') {
|
||||
if (!!finalInteractive.params.userSelectedVal) return;
|
||||
} else if (finalInteractive.type === 'userInput') {
|
||||
if (!!finalInteractive.params.submitted) return;
|
||||
} else if (finalInteractive.type === 'paymentPause') {
|
||||
if (!!finalInteractive.params.continue) return;
|
||||
}
|
||||
|
||||
return finalInteractive;
|
||||
): {
|
||||
interactive: WorkflowInteractiveResponseType | undefined;
|
||||
canSendQuery: boolean;
|
||||
} => {
|
||||
const lastInreactive = getLastInteractiveValue(chatHistories);
|
||||
if (!lastInreactive) {
|
||||
return {
|
||||
interactive: undefined,
|
||||
canSendQuery: true
|
||||
};
|
||||
}
|
||||
|
||||
return;
|
||||
const finalInteractive = extractDeepestInteractive(lastInreactive);
|
||||
|
||||
// 如果用户选择了,则不认为是交互模式(可能是上一轮以交互结尾,发起的新的一轮对话)
|
||||
if (finalInteractive.type === 'userSelect' && !finalInteractive.params.userSelectedVal) {
|
||||
return {
|
||||
interactive: finalInteractive,
|
||||
canSendQuery: false
|
||||
};
|
||||
} else if (finalInteractive.type === 'userInput' && !finalInteractive.params.submitted) {
|
||||
return {
|
||||
interactive: finalInteractive,
|
||||
canSendQuery: false
|
||||
};
|
||||
} else if (finalInteractive.type === 'paymentPause' && !finalInteractive.params.continue) {
|
||||
return {
|
||||
interactive: finalInteractive,
|
||||
canSendQuery: false
|
||||
};
|
||||
} else if (finalInteractive.type === 'agentPlanCheck' && !finalInteractive.params.confirmed) {
|
||||
return {
|
||||
interactive: finalInteractive,
|
||||
canSendQuery: true
|
||||
};
|
||||
} else if (finalInteractive.type === 'agentPlanAskQuery') {
|
||||
return {
|
||||
interactive: finalInteractive,
|
||||
canSendQuery: true
|
||||
};
|
||||
}
|
||||
|
||||
return {
|
||||
interactive: undefined,
|
||||
canSendQuery: true
|
||||
};
|
||||
};
|
||||
|
||||
export const setInteractiveResultToHistories = (
|
||||
histories: ChatSiteItemType[],
|
||||
interactiveVal: string
|
||||
): ChatSiteItemType[] => {
|
||||
if (histories.length === 0) return histories;
|
||||
export const rewriteHistoriesByInteractiveResponse = ({
|
||||
histories,
|
||||
interactiveVal,
|
||||
interactive
|
||||
}: {
|
||||
histories: ChatSiteItemType[];
|
||||
interactiveVal: string;
|
||||
interactive: WorkflowInteractiveResponseType;
|
||||
}): ChatSiteItemType[] => {
|
||||
const formatHistories = (() => {
|
||||
// 确认 plan 的事件,可以发送 query
|
||||
if (interactive.type === 'agentPlanCheck' && interactiveVal !== ConfirmPlanAgentText) {
|
||||
return histories;
|
||||
}
|
||||
return histories.slice(0, -2);
|
||||
})();
|
||||
|
||||
// @ts-ignore
|
||||
return histories.map((item, i) => {
|
||||
if (i !== histories.length - 1) return item;
|
||||
const newHistories = formatHistories.map((item, i) => {
|
||||
if (i !== formatHistories.length - 1) return item;
|
||||
|
||||
const value = item.value.map((val, i) => {
|
||||
if (
|
||||
i !== item.value.length - 1 ||
|
||||
val.type !== ChatItemValueTypeEnum.interactive ||
|
||||
!val.interactive
|
||||
) {
|
||||
if (i !== item.value.length - 1) {
|
||||
return val;
|
||||
}
|
||||
if (!('interactive' in val) || !val.interactive) return val;
|
||||
|
||||
const finalInteractive = extractDeepestInteractive(val.interactive);
|
||||
if (finalInteractive.type === 'userSelect') {
|
||||
if (
|
||||
finalInteractive.type === 'userSelect' ||
|
||||
finalInteractive.type === 'agentPlanAskUserSelect'
|
||||
) {
|
||||
return {
|
||||
...val,
|
||||
interactive: {
|
||||
|
|
@ -112,7 +145,10 @@ export const setInteractiveResultToHistories = (
|
|||
};
|
||||
}
|
||||
|
||||
if (finalInteractive.type === 'userInput') {
|
||||
if (
|
||||
finalInteractive.type === 'userInput' ||
|
||||
finalInteractive.type === 'agentPlanAskUserForm'
|
||||
) {
|
||||
return {
|
||||
...val,
|
||||
interactive: {
|
||||
|
|
@ -137,12 +173,29 @@ export const setInteractiveResultToHistories = (
|
|||
}
|
||||
};
|
||||
}
|
||||
|
||||
if (finalInteractive.type === 'agentPlanCheck' && interactiveVal === ConfirmPlanAgentText) {
|
||||
return {
|
||||
...val,
|
||||
interactive: {
|
||||
...finalInteractive,
|
||||
params: {
|
||||
...finalInteractive.params,
|
||||
confirmed: true
|
||||
}
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
return val;
|
||||
});
|
||||
|
||||
return {
|
||||
...item,
|
||||
status: ChatStatusEnum.loading,
|
||||
value
|
||||
};
|
||||
} as ChatSiteItemType;
|
||||
});
|
||||
|
||||
return newHistories;
|
||||
};
|
||||
|
|
|
|||
|
|
@ -82,7 +82,7 @@ const RenderInput = () => {
|
|||
if (histories.length === 0) return pluginInputs;
|
||||
try {
|
||||
const historyValue = histories[0]?.value as UserChatItemValueItemType[];
|
||||
const inputValueString = historyValue.find((item) => item.type === 'text')?.text?.content;
|
||||
const inputValueString = historyValue.find((item) => item.text?.content)?.text?.content;
|
||||
|
||||
if (!inputValueString) return pluginInputs;
|
||||
return JSON.parse(inputValueString) as FlowNodeInputItemType[];
|
||||
|
|
@ -134,7 +134,7 @@ const RenderInput = () => {
|
|||
if (!historyValue) return undefined;
|
||||
|
||||
try {
|
||||
const inputValueString = historyValue.find((item) => item.type === 'text')?.text?.content;
|
||||
const inputValueString = historyValue.find((item) => item.text?.content)?.text?.content;
|
||||
return (
|
||||
inputValueString &&
|
||||
JSON.parse(inputValueString).reduce(
|
||||
|
|
@ -159,7 +159,7 @@ const RenderInput = () => {
|
|||
// Parse history file
|
||||
const historyFileList = (() => {
|
||||
const historyValue = histories[0]?.value as UserChatItemValueItemType[];
|
||||
return historyValue?.filter((item) => item.type === 'file').map((item) => item.file);
|
||||
return historyValue?.filter((item) => item.file).map((item) => item.file);
|
||||
})();
|
||||
|
||||
reset({
|
||||
|
|
|
|||
|
|
@ -8,6 +8,7 @@ import AIResponseBox from '../../../components/AIResponseBox';
|
|||
import { useTranslation } from 'next-i18next';
|
||||
import ComplianceTip from '@/components/common/ComplianceTip/index';
|
||||
import { ChatRecordContext } from '@/web/core/chat/context/chatRecordContext';
|
||||
import type { AIChatItemValueItemType } from '@fastgpt/global/core/chat/type';
|
||||
|
||||
const RenderOutput = () => {
|
||||
const { t } = useTranslation();
|
||||
|
|
@ -38,7 +39,7 @@ const RenderOutput = () => {
|
|||
<AIResponseBox
|
||||
chatItemDataId={histories[1].dataId}
|
||||
key={key}
|
||||
value={value}
|
||||
value={value as AIChatItemValueItemType}
|
||||
isLastResponseValue={true}
|
||||
isChatting={isChatting}
|
||||
/>
|
||||
|
|
|
|||
Some files were not shown because too many files have changed in this diff Show More
Loading…
Reference in New Issue